Until we have such a test type, there is no value in exercising higher concurrency levels. Outside of a few frameworks that have systemic difficulty utilizing all available CPU cores, the existing tests already fully saturate the CPU for every framework.
Under that condition, additional concurrency would only stress-test servers' inbound request queue capacity and cause some with shorter queues to generate 500 responses. Even at our current maximum of 256 concurrency (used for all but the plaintext test), many servers' request queues are tapped out, and they cope by responding with 500s.
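To make the queue-overflow behavior concrete, here is a toy model (my own illustration, not TechEmpower code) of a server that sheds load once its bounded inbound queue fills up:

```python
from collections import deque

class BoundedRequestQueue:
    """Toy model of a server's inbound request queue (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def accept(self, request):
        # Once the queue is full, the server sheds load by answering
        # with a 500 instead of holding the connection.
        if len(self.queue) >= self.capacity:
            return 500
        self.queue.append(request)
        return 200

# Drive 300 concurrent requests at a server whose queue holds 256.
q = BoundedRequestQueue(capacity=256)
statuses = [q.accept(i) for i in range(300)]
print(statuses.count(200), statuses.count(500))  # 256 44
```

The point is that past the queue's capacity, extra concurrency measures nothing but the error-handling path.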
The existing tests are all about processing requests as quickly as possible and moving onto the next request. When we have a future test type that by design allows requests to idle for a period of time, higher concurrency levels will be necessary to fully saturate the CPU.
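One way to see why an idle-heavy test would demand much more concurrency is Little's law: the concurrency needed to sustain a throughput equals throughput times time-in-system. The numbers below are illustrative, not from the benchmark:

```python
def required_concurrency(target_rps, latency_ms):
    # Little's law: L = lambda * W
    # (in-flight requests = arrival rate * time each request spends in the system)
    return target_rps * latency_ms / 1000

# A CPU-bound request that completes in ~1 ms needs few connections:
print(required_concurrency(100_000, 1))    # 100.0
# The same throughput with a deliberate 100 ms idle needs far more:
print(required_concurrency(100_000, 100))  # 10000.0
```

So a test type that idles each request by design pushes the required concurrency up by orders of magnitude, which is exactly when higher concurrency levels become worth measuring.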
Presently, the Plaintext test extends to higher concurrency levels because the workload is utterly trivial and some frameworks are not CPU-constrained at 256 concurrency on our i7 hardware. As for the EC2 instances, their much smaller CPU capacity makes the higher-concurrency tests fairly moot there. If you switch to the data table for Plaintext, you can see that on EC2 the higher concurrency levels perform roughly the same as 256 concurrency.
For example, jetty-servlet on EC2 m1.large:
256 concurrency: 51,418
1,024 concurrency: 44,615
4,096 concurrency: 49,903
16,384 concurrency: 50,117
The EC2 m1.large virtual CPU cores are saturated at all tested concurrency levels.

jetty-servlet on i7:
256 concurrency: 320,543
1,024 concurrency: 396,285
4,096 concurrency: 432,456
16,384 concurrency: 448,947
The i7 CPU cores are not saturated at 256 concurrency, and reach saturation at 16,384 concurrency.

We are not against high-concurrency tests; we are just not interested in them where they would add no value. We're trying to find each framework's maximum capacity, not how frameworks behave after they reach it. We already know they tend to send 500s past that point, and that's not very interesting.
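The saturation pattern in the two result sets above can be captured by a simple heuristic: if throughput at the lowest concurrency is already close to the best observed throughput, added concurrency buys nothing. This is my own sketch, and the 15% tolerance is an arbitrary choice, not a TechEmpower threshold:

```python
def looks_saturated(results, tolerance=0.15):
    """results maps concurrency level -> requests/sec.
    If the lowest-concurrency throughput is within `tolerance` of the
    best observed throughput, the CPU was already the bottleneck."""
    baseline = results[min(results)]
    best = max(results.values())
    return baseline >= best * (1 - tolerance)

# jetty-servlet numbers quoted above.
ec2 = {256: 51_418, 1024: 44_615, 4096: 49_903, 16384: 50_117}
i7 = {256: 320_543, 1024: 396_285, 4096: 432_456, 16384: 448_947}

print(looks_saturated(ec2))  # True: the curve is flat; saturated at 256
print(looks_saturated(i7))   # False: throughput keeps climbing past 256
```

On EC2 the flat curve means 256 concurrency already tells the whole story; on the i7, the rising curve is why Plaintext alone spans the higher levels.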
All that said, once we have an environment set up that can do continuous running of the tests, I'll be more amenable to a wider variety of test variables (such as higher concurrency for already CPU-saturated test types) because the amount of time to execute a full run will no longer matter as much.
[1] https://github.com/TechEmpower/FrameworkBenchmarks/issues/13...