The real question is: why is X language so slow?
This is not intended to be glib, but I do not think I can put it more simply than that. X language is slow because it pulls in lots of library code it doesn't actually need (to get a friendly interface), it has a lot of redundancy (because it works at the wrong abstraction level), and because it wastes memory (in order to have an API that interoperates well with others). Or it's slow because it thinks B-Trees are really cool. Or because it has the wrong intrinsics. I don't know.
But I am convinced this is the discussion we need to be having.
The code should provide isomorphic samples from the languages (or implementations of languages) that are being tested. Ideally, the code samples should be idiomatic.
Benchmarks should test the aforementioned code so that performance comparisons can be made with some degree of accuracy.
Analysis should summarize the code and benchmarks and draw tempered conclusions from them. Trade-offs should be documented.
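To make the "benchmark" step concrete, here's a minimal sketch of a timing harness in Python. The workload (`sum_of_squares`) and all names are hypothetical stand-ins, not taken from any existing k-vs-X comparison; a real study would run idiomatic, isomorphic samples in each language under test.

```python
# Hypothetical sketch of a fair timing harness. The workload is a
# placeholder; real comparisons need idiomatic samples per language.
import timeit
import statistics

def sum_of_squares(n):
    # Stand-in workload for whatever code is being compared.
    return sum(i * i for i in range(n))

def bench(fn, *args, repeats=5, number=100):
    """Return the median per-call time in seconds over several repeats.
    The median resists noise from other processes better than the mean
    for quick, rough comparisons."""
    times = timeit.repeat(lambda: fn(*args), repeat=repeats, number=number)
    return statistics.median(times) / number

if __name__ == "__main__":
    t = bench(sum_of_squares, 10_000)
    print(f"sum_of_squares(10_000): {t * 1e6:.1f} microseconds/call")
```

Even a harness this small forces the methodological questions that matter: how many repeats, which summary statistic, and whether the samples being timed are actually equivalent.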
The Computer Language Benchmarks Game[1] is a first approximation of this. It provides the first two things but has no analysis in prose.
Does such a thing for k exist? Even if it's a very rough blog post somewhere from 5 years ago?
I ask this because there are a lot of lofty claims being made in this thread, and the closest thing to evidence I've seen is, "trust us, it's made a lot of money doing [financial things]. oh, and it can update a million records in a second." Since I don't believe a free lunch can exist, I am naturally inclined to seek the truth of the matter. I see that k is supposed to be wicked fast. At what cost? Where are the examples? Where is the analysis documenting this?