What if you shifted time to nanoseconds? Or measured source code size in megabytes? The rankings could change. The culprit is the '+'.
I would think the geometric mean of (time x gzipped source code size) is the correct way to compare languages. It would not matter what the units of time or size are in that case.
[Here "geometric mean" means the geometric mean of (time x gzipped size) over all benchmark programs of a particular language.]
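A quick sketch of this unit-independence claim (in Python, with made-up numbers for two hypothetical languages): rescaling every time by a constant, e.g. seconds to milliseconds, multiplies each language's product score by that same constant, so rankings based on the geometric mean of products cannot change, whereas a '+'-based score can flip.

```python
import math

# Made-up (time, gzipped size) pairs per benchmark for two hypothetical
# languages: A is fast with larger sources, B is slower with tiny sources.
lang_a = [(0.5, 10.0), (0.8, 8.0)]
lang_b = [(2.0, 1.0), (3.0, 1.5)]

def geomean_product(results):
    """Geometric mean of (time * size) over all benchmarks."""
    return math.prod(t * s for t, s in results) ** (1 / len(results))

def mean_sum(results):
    """Arithmetic mean of (time + size) -- this mixes units."""
    return sum(t + s for t, s in results) / len(results)

def rescale_time(results, k):
    """Express all times in a different unit (e.g. k=1000 for s -> ms)."""
    return [(t * k, s) for t, s in results]

# Product score: scaling time by k multiplies every language's score
# by the same factor k, so the ranking can never change.
for k in (1, 1000):
    a = geomean_product(rescale_time(lang_a, k))
    b = geomean_product(rescale_time(lang_b, k))
    print(f"geomean(time*size), k={k}: {'A' if a < b else 'B'} wins")

# Sum score: with these numbers, the ranking flips when the unit changes.
for k in (1, 1000):
    a = mean_sum(rescale_time(lang_a, k))
    b = mean_sum(rescale_time(lang_b, k))
    print(f"mean(time+size),   k={k}: {'A' if a < b else 'B'} wins")
```

With these (invented) numbers the product score picks the same winner in both units, while the sum score picks a different winner once times are expressed in milliseconds.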
$ insect '5s + 10MB'
Conversion error:
Cannot convert unit MB (base units: bit)
to unit s
$ insect '5s * 10MB'
50 s·MB

The whole point of a benchmark metric is to protect against accidental bias in your calculations. Adding them seems totally against my intuition. If you did want to give time more weight, then I would raise it to some power. Example: the geometric mean of (time x time x source size) would give time much more importance in an arguably more principled way.
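The "raise it to some power" idea can be made concrete; a minimal sketch with hypothetical numbers. A geometric mean of (time² x size) is exactly the geometric mean of (time x time x size), and a unit change still scales every language's score by the same constant, so rankings stay unit-independent.

```python
import math

# Hypothetical (time, gzipped size) pairs for one language's benchmarks.
results = [(1.2, 3.0), (0.8, 2.0), (2.5, 1.5)]

def weighted_geomean(results, time_weight=1):
    """Geometric mean of (time**w * size): time gets w times the
    influence of size. A change of time unit still multiplies every
    score by the same constant, so rankings remain unit-independent."""
    n = len(results)
    return math.prod((t ** time_weight) * s for t, s in results) ** (1 / n)

# time_weight=2 is the same as the geometric mean of (time x time x size).
print(weighted_geomean(results, time_weight=1))
print(weighted_geomean(results, time_weight=2))
```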
That annotation does seem to have caused much frothing and gnashing.
Here's how the calculation is made — "How not to lie with statistics: The correct way to summarize benchmark results."
[pdf] http://www.cse.unsw.edu.au/~cs9242/11/papers/Fleming_Wallace...
That is a metric I care about a lot.
You can figure it out somewhat by clicking on each language benchmark, but it is not aggregated.
BTW, as a biased guy in the Java world, I can tell you this is one area where Java is actually mostly the winner, even beating out many scripting languages apparently.
This is basically meaningless. I don't see why you'd even need to do this. You can easily show code size and performance on the same graph.
The text is not clear enough, but "geometric mean" is not the benchmark. The 11 problems are listed in https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
The results of the 11 problems are combined using the "geometric mean" into a single number. Some people prefer the "geometric mean", other people prefer the "arithmetic mean" to combine the numbers, other people prefer the maximum, and there are many other methods (like the average excluding both borders, i.e. a trimmed mean).
Thanks, that makes more sense; that's another issue of missing context, then. I don't have anything against geometric means, but basic statistics like average, max, min, ... should be available as well.
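The summaries being asked for here are all one-liners; a sketch with made-up timings (normalized to the fastest program) for the 11 problems, where the "average excluding both borders" appears as the trimmed mean:

```python
import statistics

# Made-up timings, normalized to the fastest program, for 11 problems.
timings = [1.0, 1.3, 2.1, 1.1, 4.0, 1.7, 1.2, 3.2, 1.5, 2.4, 1.9]

summaries = {
    "geometric mean": statistics.geometric_mean(timings),
    "arithmetic mean": statistics.fmean(timings),
    "median": statistics.median(timings),
    "trimmed mean": statistics.fmean(sorted(timings)[1:-1]),  # drop min and max
    "min": min(timings),
    "max": max(timings),
}
for name, value in summaries.items():
    print(f"{name:>15}: {value:.2f}")
```

Note the geometric mean is always at most the arithmetic mean, which is part of why it is preferred for ratios: it is less dominated by a single slow outlier.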
https://twitter.com/ChapelLanguage/status/152442889069266944...
https://salsa.debian.org/benchmarksgame-team/benchmarksgame/...
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Actually, if you look at all the top .NET Core submissions, the only fast ones are the ones using low-level intrinsics etc ...
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Do you mean "fast" like a C program using low level intrinsics?
This thing has been a long-running joke in the software industry, exceeded only by the level of their defensiveness.
SMH.
After trying hard to use Julia for about a year, I came to the conclusion that it's one of the slowest things around. Maybe things have changed? Maybe, but Julia code still remains incorrect.
I hope they fix both things: speed (including start-up speed, it counts A LOT) and correctness.
“Julia features optional typing, multiple dispatch, and good performance, achieved using type inference and just-in-time (JIT) compilation, implemented using LLVM.”
Julia 1.7 Documentation, Introduction