And these rules are particularly bizarre. Rust, C, and C++ are all allowed to use custom allocators, while Java and C# have GCs that happen to be well optimized for this particular microbenchmark, if not for real-world applications (although I hear Java's GCs have been making good headway on latency lately, and clearly all of the GCs are suitable for general application development). So it's really just Go that's forbidden from using an idiomatic optimization, as far as I can tell.
Go optimizes insanely hard for latency because it's driven by Hacker News articles about GC latency, not because it's better for "real world applications." Its atrocious throughput is a direct consequence of that decision, and it's one of the very few useful things the benchmark games actually demonstrate. Java's unwillingness to ship an allocator with such an extreme tradeoff has proven a pretty good idea, and now it can provide only moderately worse latency than Go for "real world applications" with far better throughput.