Their benchmarks are also really cool because you can filter the technologies down to just the ones you personally know or want to compare, for example:
https://www.techempower.com/benchmarks/#section=data-r20&hw=...

Thus, in my case those numbers might be closer to the following:
- plaintext: up to 2'500'000 requests per second, most technologies go up to around 500'000
- data updates: up to 14'000 requests per second (20 updates per request, so 280'000 updates per second)
- fortunes: up to 300'000 requests per second (full CRUD and sorting)
- multiple queries: up to 32'000 requests per second (20 queries per request, so 640'000 queries)
- single query: up to 530'000 requests per second, most technologies go up to around 100'000
- JSON serialization: up to 970'000 requests per second, most technologies go up to around 200'000
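The batched tests above are worth converting into per-operation numbers, since a "requests per second" figure hides the batch size. A minimal sketch of that arithmetic (the helper function is mine, not part of the benchmark suite):

```python
def ops_per_second(requests_per_second: int, ops_per_request: int) -> int:
    # Each request batches several DB operations, so effective DB
    # throughput is requests/sec multiplied by the batch size.
    return requests_per_second * ops_per_request

# Figures from the list above, with 20 operations batched per request.
print(ops_per_second(14_000, 20))  # data updates -> 280000 updates/s
print(ops_per_second(32_000, 20))  # multiple queries -> 640000 queries/s
```

This is also why the "single query" numbers look so much higher than "multiple queries": divide by the batch size and the gap shrinks considerably.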
Of course, their setup also plays a part, since the VPSes that i'd go for probably wouldn't be comparable to a Dell R440 Xeon Gold.
It's really nice to have this data, but the code you write is also a really big factor - i've seen people write code with N+1 problems in it, calling ORMs in loops, and adamantly defend that choice because "such code is easier to reason about", instead of using a simple DB view that would be 20-100x faster. With code like that, you'd be closer to the "multiple queries" test.
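To make the N+1 point concrete, here's a minimal sketch using sqlite3 and hypothetical authors/posts tables (the schema and names are mine, purely for illustration): the first version fires one query per row in a loop, the second gets the same answer in a single round trip.

```python
import sqlite3

# Hypothetical schema for illustration: authors and their posts.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ann'), (2, 'Ben');
    INSERT INTO posts VALUES (1, 1, 'a'), (2, 1, 'b'), (3, 2, 'c');
""")

def post_counts_n_plus_one(conn):
    # N+1 pattern: one query for the authors, then one more query
    # per author - N extra round trips to the database.
    counts = {}
    for author_id, name in conn.execute("SELECT id, name FROM authors"):
        (n,) = conn.execute(
            "SELECT COUNT(*) FROM posts WHERE author_id = ?", (author_id,)
        ).fetchone()
        counts[name] = n
    return counts

def post_counts_single_query(conn):
    # Same result in one round trip via a JOIN + GROUP BY; this is
    # the kind of thing a DB view would hide behind a simple SELECT.
    return dict(conn.execute("""
        SELECT a.name, COUNT(p.id)
        FROM authors a LEFT JOIN posts p ON p.author_id = a.id
        GROUP BY a.id
    """))

assert post_counts_n_plus_one(conn) == post_counts_single_query(conn)
```

With an in-memory SQLite database the difference is invisible, but over a network connection each of those N extra queries pays a full round-trip latency, which is where the 20-100x gap comes from.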
Then again, these tests basically tell you that in 90% of cases you should go for Java or .NET and abandon Python, PHP and Ruby (one could also introduce Rust into the mix and say the same), which realistically won't happen - people will use whatever technologies and practices they feel comfortable with.
I've seen applications that work fine with hundreds of thousands of page loads per minute (multiple requests per load), and i've seen systems that roll over and die with 100 concurrent users - there's lots of variety out there.