One of the standard caveats with this particular benchmark game, with respect to Go, is that idiomatic optimizations are prohibited. To use the btree example: Go's memory management is low-latency and non-moving, so individual allocations are relatively expensive. Any Go programmer writing a performance-sensitive btree implementation would pre-allocate the nodes in a single allocation, an absolutely idiomatic and trivial optimization, but the benchmark game requires that the nodes be allocated one at a time. In other words, the C# version is idiomatic, but the Go version is expressly contrived to be slower: not a very useful comparison.
Mad respect for .Net though; it's really impressive, I like the direction it's going, I'm glad it exists, etc.
Forcing allocations for every node isn't justified by a desire to demonstrate dynamically sized binary trees. A naive dynamically-sized tree would just keep a list of node buffers and allocate a new node buffer every time the previous one fills up (perhaps with subsequent buffers doubling in size). The benchmark is, by all appearances, contrived to be slower.
Which is not accepted for the C# programs either.
sync.Pool is accepted —
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
> the Go version is expressly contrived to be slower
The requirements were contrived in April 2008.
afaict Go 1.0 wasn't released until March 2012.
Because C# doesn't benefit from this kind of optimization. Its GC is generational, which means it gets very fast allocations at the expense of higher pause latency. In most applications, lower latency is more important than faster allocations (not least because these batch-allocation optimizations are nearly trivial), but these benchmarks don't reflect that at all.
> The requirements were contrived in April 2008. afaict Go 1.0 wasn't released until March 2012.
Contrived = "the rules artificially prohibit idiomatic optimizations". It doesn't require that the maintainers have a prejudice against Go (although as you point out, the maintainers have had a decade to revisit their rules).
Overall you can see how fast Go is: it has little optimization compared to C# and it's just as fast. Compare this: https://benchmarksgame-team.pages.debian.net/benchmarksgame/... with the overly complicated C# version: https://benchmarksgame-team.pages.debian.net/benchmarksgame/... (AVX, intrinsics, etc.)
It will intentionally use more memory for the sake of throughput, which is why this post sets that flag for every .NET program; it's primarily a _speed_ benchmark.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
I have to give you credit for trying to apply the Rule of Three to a single criticism.
Of course, I don't really understand how the fact that someone took the time to vectorize the C# submission is supposed to be a mark against C#...
There wasn’t even a lot of “modern C#” low-level optimization like ref structs/spans and similar. It actually looks like quite a bit of C# 8/9/10 performance was left on the table.
Tons of very "interesting" attributes like this:
// prevent inlining into main to decrease JIT time to generate main
[SkipLocalsInit][MethodImpl(NoInlining)]
[SkipLocalsInit][StructLayout(LayoutKind.Explicit, Pack = 32)]
[SkipLocalsInit][MethodImpl(AggressiveOptimization | NoInlining)]
[FieldOffset(32)]
and tons of "unchecked" blocks. Not to mention that the entire file uses explicit vectorization, which I consider a very high degree of optimization; tons of software never bothers to implement explicit vectorization and does just fine.
If all of this is "nothing out of the ordinary", then "ordinary" C# has changed a lot since I last spent much time with it.
[0]: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Unfortunately, developers who don’t know better judge it by its historical association with Windows rather than by how powerful it is today.
Some of us actually got to experience the entire journey from the old to new world first-hand. We started out as a .NET 3.5 Framework solution (windows only), and are now looking at a .NET 6 upgrade (any platform). Over the course of 7+ years, we went through all of the following frameworks:
3.5 => 4.0 => 4.5 => 4.6.2 => [netcore convert]
2.0 => 2.2 => 3.0 => 3.1 => 5.0 => ...
Some of the transitions were a little painful, but the same fundamental product survived the entire trip.
I don't know of many other development ecosystems where you can get away with something like this. If we didn't have the stability this ecosystem has to offer, we would not be in business today.
At the time I was hired, it was to modernize an Access application used internally, to try to sell it as a product.
.Net was still in beta at the time (this was early in 2001). I figured I might as well go with the flow and try it out.
It was a crazy ride, and I've since left the company ... but we went through every version from beta through 4.8 before I left.
I'm now using .Net 5 in Azure to power my new company's REST APIs.
Avoid VS Code for C# development; unlike TS/JS (where it's top of the line), the C# support, even for .NET Core, is toy level.
I've always thought the whole ecosystem looked really productive, and the code I've had to review occasionally looked well structured and readable. But when I was starting out MSDN cost a fortune and I've never really considered trying to learn it.
Most applications I develop these days are web services. I write most things today in Go, but I used to work quite a bit with C#/Asp.NET applications.
Here's what I do to build a Go web application:
- go build .
What I get out of it is a single, statically-linked, self-contained ELF binary for which deployment is as simple as scp, if I want to. It will run on any x64 Linux box, without dependencies, since it contains its own webserver. I don't need to dump it into IIS to make sure the build works.

Here's what I used to do with .NET:
- Open VS, wait about 30 seconds for it to finally start working
- Set the target, Rebuild All
- Publish to file, wait
- Eventually get a packaging error, predicated on some obscure tools dependency issue somewhere inside my 98KB .csproj file, which I have to fix by closing down VS, manually editing in Notepad, and re-opening VS
- Finally get a working build+publish, ok, let's take a look
- Oh, very nice, the publish directory weighs in at nearly a QUARTER GIGABYTE, and contains about 60 dll dependencies and a ton of entirely useless descriptor files.
- Well, okay, let's at least get this deployed to the test server to make sure it still plays nice with IIS, xcopy this over.
- Oh, IIS doesn't like this at all, now let's spend the next couple hours tracking down this insane .net framework dependency hell.
- Screw it, where's my whiskey?

Seems pretty simple. Though the binary size is definitely not on par with Go.
Can you elaborate on this? What does it mean that ".Net ships with Azure"?
But for many years, the per seat cost was higher, which scared many away
I couldn’t find a direct C# to Rust comparison but Rust trying to compete with C++ means performance is a goal, if that’s what you are after.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
p44 "Oberon — The Overlooked Jewel" Michael Franz, in "The School of Niklaus Wirth".
https://www.google.com/books/edition/The_School_of_Niklaus_W...
It's an intentional choice, that's why it compiles code so fast. Also because of that it's a lot simpler than say GCC. Before Go, the Plan 9 C compiler was designed in a similar manner too (I think the Go compiler was forked from it).
I think the simplicity aspect is even more important than the compiling speed. It's easier and cleaner to keep the compiler simple and write optimized assembly code by hand when it's needed. That way, the compiler doesn't get so messy (fewer bugs, easier to maintain...) and the written program is of better quality (humans can produce better code than compilers).
I very much want my computer to work as hard as it can to make my code more performant for free. What I'd like to see more of is separate debug and release build modes, where the former compiles as fast as it can without optimizations, while the latter can take as long as it wants and produce the most optimized binary it can. Zig does that, for example.
Why does their web site have no contact nor link to where the source code for the project can be checked out, contributed to, or amended?
Perhaps they don't read the website text?
> … no contact nor link…
Search works.
At the very least, I couldn't realistically re-run some of the example benchmarks from the source embedded in the HTML, because they did not include vendoring/version information for external packages they depend on. That made me doubt the provenance of https://salsa.debian.org/benchmarksgame-team/benchmarksgame.