However, I'm starting to think that all of the advocates of various frameworks are now conspiring independently to make this comparison meaningless...any framework (except Cake for some reason) can be superoptimized towards a small set of tasks. If you do another round, could you increase the number of different tasks? Some examples could be:
1) Mixed bag of queries of various complexity
2) Static file serving
3) A few computation/memory-intense benchmarks (such as those in the Language Benchmarks Game)
4) Templating
As Pat points out, we definitely look forward to implementing some more computationally-intense request types in the future. This round does include the first server-side template test. We'd like to hear the community's opinions about more tests.
That said, I feel most of the frameworks' implementations of the existing tests are not cheating. Our objective in this project is to measure every framework with realistic production-style implementation of the tests. No doubt there is temptation to trim out unnecessary functionality and focus on the benchmark's particular behavior. We have attempted to identify any such tests that remove framework features to target the benchmark as "Stripped" and those can now be filtered out from the list.
In other words, our aim is that the implementation of each framework's test is idiomatic to that framework and platform. And if that's not the case for a test, we want to correct it.
Your concern could be clarified by pointing out that framework authors may be tuning up their JSON serialization, database connection pools, and template processing in order to improve their position on these charts. And, to be clear, I have already seen evidence of that in my interaction with framework authors. To that concern, however, I would say: That is awesome. I want those features to be fast.
I can also say that this benchmark inspired me to take a hard look at class loading and I was able to make some improvements to the framework's efficiency in general. So, in a way, I did some tuning - not for the benchmark, but rather as a result of the benchmark. Thanks to this benchmark all Phreeze users will gain a little performance.
I would also like to suggest a test idea. I think the biggest challenge for frameworks comes into play when you have to do table joins. Something like looping through all purchase orders and displaying the customer name from a 2nd table - that would be a very real-world type of test. I think foreign key type of queries are more telling about an ORM than a single table query.
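To make the suggested test concrete, here is a toy sketch in plain Java (table data hypothetical, held in maps instead of a real database) of the per-order customer-name lookup such a test would exercise; the interesting question is whether an ORM resolves it with one JOIN or with one query per row:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class JoinSketch {
  // Hypothetical tables: PurchaseOrder(id -> customerId) and Customer(id -> name).
  // This is the lookup the suggested test would force: loop through the
  // purchase orders and display each customer's name from the second table.
  static List<String> renderOrders(Map<Integer, Integer> orderToCustomer,
                                   Map<Integer, String> customerName) {
    List<String> rows = new ArrayList<>();
    for (Map.Entry<Integer, Integer> order : new TreeMap<>(orderToCustomer).entrySet()) {
      // A naive ORM issues one extra query per row here (the "N+1" problem);
      // a single SQL JOIN fetches the same data in one round trip.
      rows.add("Order " + order.getKey() + ": " + customerName.get(order.getValue()));
    }
    return rows;
  }

  public static void main(String[] args) {
    Map<Integer, Integer> orders = Map.of(1, 10, 2, 11);
    Map<Integer, String> customers = Map.of(10, "Acme", 11, "Globex");
    System.out.println(renderOrders(orders, customers));
    // [Order 1: Acme, Order 2: Globex]
  }
}
```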
Thanks again!
Except now it is clear that you are refusing optimizations for some frameworks due to a vague, aesthetic judgement of 'stripped'. Which now means that you actually aren't measuring the minimum framework overhead. You are measuring the overhead of the defaults, or the overhead of not taking optimization seriously, with large amounts of performance left on the table. Worse, selectively applying optimizations means you are comparing one framework's defaults to another framework's minimum overhead. And since you have abandoned minimum overhead, it makes very little sense to measure performance independent of normal first-resort tactics like caching (who is running Cake without caching?)
If you were going to do that, you should have benchmarked defaults right down the line and allowed a full, normal range of simple deployment optimizations. Instead we have selective optimization and totally unrealistic deploys, so it really indicates very little.
[1] https://github.com/TechEmpower/FrameworkBenchmarks/issues/13...
A new "Fortunes" test was also added (implemented in 17 of the frameworks) that exercises server-side templates and collections.
With 57 total frameworks being tested, we have implemented some filtering to allow you to narrow your view to only those you care about.
As always, we'd really like to hear your questions, suggestions, and criticisms. And we hope you enjoy this latest round of data.
Also, when measuring latency, average and std dev are only relevant if the distribution is Gaussian. Which is unlikely.
Better to show percentile based measurements. Like 90% of all requests served in 5ms, and 99% of requests served in 15ms.
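For reference, percentiles are cheap to compute once the load generator keeps raw samples rather than only a running mean. A minimal nearest-rank sketch (sample data hypothetical):

```java
import java.util.Arrays;

public class Percentiles {
  // Nearest-rank percentile: the smallest sample such that p percent
  // of all samples are at or below it.
  static double percentile(double[] samples, double p) {
    double[] sorted = samples.clone();
    Arrays.sort(sorted);
    int rank = (int) Math.ceil(p / 100.0 * sorted.length);
    return sorted[Math.max(rank - 1, 0)];
  }

  public static void main(String[] args) {
    // Hypothetical latency samples: 1 ms through 100 ms.
    double[] latencies = new double[100];
    for (int i = 0; i < 100; i++) latencies[i] = i + 1;
    System.out.println("p90 = " + percentile(latencies, 90) + " ms");  // 90.0 ms
    System.out.println("p99 = " + percentile(latencies, 99) + " ms");  // 99.0 ms
  }
}
```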
See Gil Tene's talk "How not to measure latency" [1] for more info. Also be sure you are not falling into the "Coordinated Omission" trap, where you end up measuring the latency wrong.
Thanks for the feedback! We started the project with weighttp; then, starting with Round 2, we switched to Wrk [1] on the advice of other readers. Wrk provides latency measurements consisting of average, standard deviation, and maximum.
See the earlier conversation about standard deviation here: https://news.ycombinator.com/item?id=5455972
If we had distribution data available, we would aim to provide that in some form. And perhaps the author of Wrk could add that in time.
However, for the time being, I consider the matter somewhat academic. Not to be dismissive--I value your opinion--but I don't believe that would measurably impact my assessment of each framework's performance. Though, it would be fascinating to be able to validate my suspicion that Onion, being written in C, does not suffer even the tiny garbage collection pauses of the Java frameworks.
Technically not true. Knowledge of the second-order moment (variance) lets you uniquely identify other distributions, like Poisson or uniform. Knowledge of even higher-order moments lets you fit more complicated statistical models.
Low variance is good, regardless of underlying distribution.
Mono Issue #1: since the vast, vast majority of ASP.NET websites run on Windows, a Mono performance test, even if accurate, is going to be of dubious value.
Mono Issue #2: since Mono is nowhere near as polished as the Microsoft .NET implementation, the numbers won't really be meaningful.
Windows Issue #1: if you do the test on a different OS than every other test implementation, the results really won't be comparable in any fair way.
Microsoft Issue #1: I don't know if it still holds nowadays, but in the past the official EULA for .NET prohibited publishing benchmark results. PERIOD.
I am a .NET developer, and as much as I like ASP.NET, I don't think the effort of adding a .NET implementation would really pay off.
Edit: yeah right Mono and w/e
Benchmarks like this are designed to be the starting point of a discussion and an investigation, not as anything meaningful in their own right. Boiling a framework down to one performance number ignores its many, many nuances.
What surprises me most is the difference between different frameworks. A few years ago the mantra seemed to be "Use Rails, Django or a similar full-stack framework. Speed of deployment trumps everything!" Over the last few years I've seen a shift as people are trying to get more performance from limited hardware. Personally I'm intrigued by how a fairly innocent decision early in the project (of what language/framework) may have profound performance implications in the long run.
For myself, I've been looking for a good functional-programming framework. Just looking at this gives me a good list of frameworks to start looking at. It feels to me that a framework that performs well is likely well engineered, so the ones that perform better will go at the front of my queue for investigations.
You're precisely right about how to put this data to use: as one point in a holistic decision making process. We address that in the Questions section of the site, in fact. That said, we are not reducing each framework to a single performance number. Our goal is to measure the performance of several key components of modern frameworks: database abstraction and connection pool performance, JSON serialization, list and collection functions, and server-side templates. We'd like to add even more computationally-intensive request types in future rounds.
So, no, we're not testing your (or anyone else's) specific application on each framework. But we are testing functions that your application is likely to use. You're still better off measuring the performance of your use-case on candidate frameworks before you start work, but perhaps you can first trim the field to a manageable number.
In the first round, we echoed your surprise at the spread--four orders of magnitude! I think the shifting winds of opinion come from the fact that today's high-performance languages, platforms, frameworks are not necessarily more cumbersome to use for development than the old guard. As others have pointed out elsewhere in this thread, Go is not a terribly verbose language, and yet its performance is fantastic.
Has the era of sacrificing performance at the altar of developer efficiency ended? I'm not sure. But we have some data to add to the conversation.
Part of that shift is also that other frameworks have learned and integrated a lot from Rails/Django. The productivity/time-to-launch gap isn't as significant as it used to be, so other factors like performance, compatibility with pre-existing infrastructure (eg for JVM-based frameworks), security, etc. are gaining more influence in the decision about what to use.
I still don't find these benchmarks very useful. From the looks of the comments, a lot of you don't really either (even if you don't realize it).
For example, a lot of people in these comments want to correlate language speed with performance in these benchmarks by arguing specific examples, but comparing almost any two frameworks/platforms in this "benchmark" is an apples-to-non-apples comparison, and the result is actually full of counterexamples (faster languages performing more poorly). That should instantly tell you that this benchmark isn't telling you what you think it's telling you, and that you haven't really derived any value from it.
Perhaps the biggest reason I don't find value here is that every product here does wildly different things. It's like comparing wrenches to hammers to screwdrivers to 3D printers.
I also want to point out to people who say that this is a "comparison" of frameworks that it is emphatically not a comparison. What is the value of a framework? Is it speed? Atypically. And this "benchmark" tends to point at such cases as "being better" because they do better in this specific task. A framework/platform's value lies in features and abstractions. This does not compare those.
I will gladly build a "framework" in NodeJS that is only capable of doing the tasks in this benchmark as fast and with as little overhead as possible. You would NEVER use it in the real world, but it would be a beast at serializing JSON and making repeated database queries in an insecure fashion. But score here is the important factor, right?
1) If you see problems with a language you're an expert in, submit a pull request. I've never seen a benchmark done like this before; it gives everyone a chance to fix problems in their favorite framework/language. 2) It is a little bit of an unfair comparison between very low-feature frameworks and higher ones, but it gives you a good idea of what you're trading off in basic performance. For example, I thought our use of play1-java wasn't far off of servlet on basic tasks, but boy was I wrong, perhaps by 10x.
Should you read this list and pick the top thing on the chart? No. However, hard to argue this isn't interesting and useful information.
Otherwise I can see how someone would assume a simplified and misleading heuristic: "If I can process 1000 requests in 1 second, that means the server can handle 1000 requests/second. So if 1000 requests come in at once, they will all be processed in 1 second." Several things can happen: it could process them slower than one second, it could error out and die, or it could actually process them fast if it can scale across CPUs. That is where the gold is, if you ask me... Anyway, just my 2 cents.
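One way to make that heuristic's failure concrete is Little's law (requests in flight = throughput x average latency), which shows that a sustained-throughput number alone says nothing about burst behavior. A toy sketch (numbers hypothetical):

```java
public class LittlesLaw {
  // Little's law: average requests in flight L = throughput X * average latency W.
  static double concurrency(double reqPerSec, double latencySec) {
    return reqPerSec * latencySec;
  }

  public static void main(String[] args) {
    // A server sustaining 1000 req/s at 50 ms average latency holds only
    // about 50 requests in flight, not 1000. A burst of 1000 simultaneous
    // requests is a very different load than 1000 req/s arriving smoothly.
    System.out.println(concurrency(1000, 0.050));
  }
}
```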
Says someone (many someones) about every benchmark, ever. I've never seen a benchmark that yields universal praise, every one earning criticism from people who don't like the results.
What is the value of a framework? Is it speed?
This is clearly a benchmark of performance. Is that the single value of a framework? Of course it isn't. But you certainly shouldn't stick your head in the sand about it.
While for no queries or just one query it's slower than a lot of the other frameworks (due to PHP being slow to parse, start up, etc.), as soon as we have a lot of DB queries, the C interface to MySQL leaves the other frameworks in the dust.
The well-known PHP shortcomings aside, that's a nice example of optimizing for the things that matter most, especially for its common use cases (WordPress, Drupal, etc.).
Disclaimer: mysqli does have async capabilities, but most people such as myself use PDO for its other benefits. And mysqli only works with MySQL.
That being said, above a certain scale and complexity level, you probably want the topology of your persistent data store hidden from your web request handlers anyway. For one thing, making requests to N backend shards from M frontend web workers starts to get bad when N and M are both large; for another, introducing really complex scatter-gather query logic into your request-handling pipeline can be a maintenance and debugging nightmare.
Introducing a proxy or data-abstraction service in between cuts down on the number of open connections and lets you change the data storage topology without updating frontend code.
With Servlet for example, a worker thread is chosen from Resin's thread pool and used to handle a request. The Servlet then executes 20 queries sequentially and returns the resulting list data structure. This is Servlet 3.0 but not using Servlet 3.0 async.
Async isn't making the top performers fast. Being fast is making them fast.
Many people choose Ruby and figure, given that premature optimization is the root of all evil, they'll optimize later if needed.
That's like choosing between a farm tractor or a ferrari - and figuring if the tractor doesn't perform up to snuff, we'll add a spoiler (and given the 10x disparity between Java and Ruby in some of those graphs, if we throw out a 20mph top speed for a farm tractor, the ferrari analogy is actually rather spot on).
There are many good reasons to choose dynamic/interpreted languages - but always know you're giving up performance in exchange.
True.
> That's like choosing between a farm tractor or a ferrari - and figuring if the tractor doesn't perform up to snuff, we'll add a spoiler
It's really not like that at all, because programming languages aren't like vehicles. Particularly, with Ruby, one typical method of optimization is finding which bits of code are bottlenecks, and then optimizing those bottlenecks, often by replacing them with C (or, if the Ruby runtime being used is JRuby, Java).
Which I guess is like having your tractor turn into a Ferrari for the parts of work that involve going long distances on a road without towing something, but I think that kind of points out how bad even using the tractor/Ferrari analogy is.
Why not - is that a limitation of OpenResty or of LuaJIT?
How would you turn on JIT compiler in OpenResty?
But to further add to the analogy, the tractor, the ferrari and the go-kart may all perform about the same if you're only traveling 1 inch.
Love me some analogies!
In other words, although interesting (and exceedingly well done) these benchmarks should have "surprised" no one. Not even the disparity between languages.
For all non-trivial apps, by the time you get 100 req/sec your bottleneck is very likely going to be your database.
This benchmark is even less useful than alioth's shootout, I'm not sure why there is so much effort put into it :)
One thing I am wondering is "what about concurrency level"?
Just because a server can handle 10x the number of requests when doing a single request a time for 1000 requests, doesn't necessarily mean it can also handle those 1000 request at 10x performance when they all come in at once or in a short time period.
I saw some tests have "256 concurrency": does that mean they are sending 256 requests concurrently? I want to see them play more with those numbers. Why not have 1024 or more? Then also play with the number of available CPUs and see which frameworks can auto-scale based on that. Some that can process sequential requests fast might fall flat when faced with slightly increased concurrency; in that respect these benchmarks are a bit misleading.
On the other hand it is good to see latency. That is important. Now latency vs. level of concurrency would also be interesting.
On a side note, I'd really like to know why so few start-ups seem to be using Spring. It could just be a wrong impression. But from what I have seen, most start-ups use RoR or Django. My guess is that Spring is less flexible and less known outside big companies, where it is usually the default. It could also be that Spring works better with the waterfall model, whereas Django or RoR are better suited for explorative programming, and that fits the respective spheres better.
I've used spring mvc in an agile setting a couple times now, and it has worked fine. It doesn't tend to make developers all that happy, in my experience. If you're in an enterprise full of spring, starting up the next app with it can be attractive -- there likely already exists a bunch of tooling and knowledge around spring.
I wouldn't use spring directly if I were trying to build something quickly for a startup. I'd be more apt to reach for grails (which wraps spring), dropwizard, or any of the other rapid-development frameworks.
I love the language, but let's not get too carried away until the ecosystem grows. The reality is, if you're going to use Go for web dev, you're going to need to be prepared to do a whole lot of things on your own.
How so? There's Clojure, Scala and others perfectly good choices on the list. The max latency for Go is also very high compared to most others.
Avoiding preoptimization applies just as much to frameworks.
Javascript is also expressive, flexible, super performant and has the best and fastest-growing community around.
The first isn't true any more: the Java VM competes with native code on most benchmarks, and due to its ability to perform runtime optimizations, can occasionally outperform native code.
The second doesn't matter at all for web servers. The cost of starting up the web server is tertiary to uptime and performance. If the thing is going to run for 4 months without going down, who cares all that much if it takes 5 seconds or 5 ms to start up?
If it takes you 5s to start up your server, that's a lot of time you've added to each development iteration. Make a change, restart the server, wait 5s, see if it works/check debug output.
Although I hope to never write "public static void main" again (except ironically, of course), and I spend some time dabbling in Python/Ruby/obscure-language land, I'm really happy to see Clojure and Scala doing well here.
That being said, as a day-to-day Java web-developer, I cannot honestly remember the last time I wrote "public static void main".
Compared to the current crop of dynamic language interpreters, waaaay more engineering time and talent has been poured into optimizing the jvm.
People I think forget that Java was a more user friendly C++; the price you paid was somewhat slower apps, but that's OK because you write more robust apps more easily. Rinse and repeat for Ruby/Python/Your Lang Here.
http://benchmarksgame.alioth.debian.org/u32/benchmark.php?te...
The JVM indeed kicks some major butt.
I've usually had pretty good luck with Slim, I'll have to try a version with & without redbean and see how big a difference it makes.
Which means that in most common web use cases (which are db heavy), PHP is as fast as any of them, since all the slowdowns (initialization, slow Zend engine etc) are dwarfed out by the fast db handling.
Also no love for Yii.
And since that day I've been wondering: why does NodeJs (=V8 JS engine in C) talking to MongoDB have higher response times and latency than Ringo (=Rhino JS engine on JVM) talking to MySQL. The only thing where Node beats us JVM guys seems to be the JSON response test.
Looking at the req/sec we got from Round 4, in order of concurrency (8, 16, 32, 64, 128, 256):
nodejs (mongodb raw)
12,541 22,073 26,761 26,461 28,316 28,856
ringojs (mysql raw)
13,190 23,556 27,758 30,697 31,382 31,997
Both look like they have room to grow.
This is highly respected work (bookmarked) and deserves all the upvotes HN can give.
In a previous round, I pasted the relevant code directly into the results view. I will likely do that again soon since it's convenient for the reader.
For the time being, I invite you to browse the Github repository and examine the test implementation source.
Again, thank you very much for the hard work. I think for some there are some revelations there, like that C framework.
Take a look at the new Fortunes test. That test lists rows from a database, sorts them, and then renders the list using a server-side template. It's only implemented in 17 of the 57 frameworks right now, but we hope to have better coverage on that in time as well.
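For readers curious what that shape looks like, here is a toy sketch of the list-sort-render pattern (the real tests pull rows from a database and use an actual template engine; this hand-rolled version is just illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class FortunesSketch {
  // Toy version of what the Fortunes test exercises: take rows, sort them,
  // and render them through a (here, hand-rolled) server-side template
  // into an HTML table.
  static String render(List<String> fortunes) {
    List<String> rows = new ArrayList<>(fortunes);
    Collections.sort(rows);
    StringBuilder html = new StringBuilder("<table>");
    for (String fortune : rows) {
      html.append("<tr><td>").append(fortune).append("</td></tr>");
    }
    return html.append("</table>").toString();
  }

  public static void main(String[] args) {
    System.out.println(render(List.of("B fortune", "A fortune")));
    // <table><tr><td>A fortune</td></tr><tr><td>B fortune</td></tr></table>
  }
}
```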
Is there some reason that you use built in json serialization for some frameworks and not others?
There is also a lot of heterogeneity in the implementation of the multiple queries test. For instance, even if I only look at... say... java frameworks, you seem to implement the exact same feature in very different ways between platforms. For instance, for servlets, you will store all of the results in a simple array... and then write them out when you are done. Like so:
final World[] worlds = new World[count];
final Random random = ThreadLocalRandom.current();
try (Connection conn = source.getConnection())
{
  try (PreparedStatement statement = conn.prepareStatement(DB_QUERY,
      ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY))
  {
    // Run the query the number of times requested.
    for (int i = 0; i < count; i++)
    {
      final int id = random.nextInt(DB_ROWS) + 1;
      statement.setInt(1, id);
      try (ResultSet results = statement.executeQuery())
      {
        if (results.next())
        {
          worlds[i] = new World(id, results.getInt("randomNumber"));
        }
      }
    }
  }
}
catch (SQLException sqlex)
{
  System.err.println("SQL Exception: " + sqlex);
}
// Write JSON encoded message to the response.
try
{
  Common.MAPPER.writeValue(res.getOutputStream(), worlds);
}
catch (IOException ioe)
{
  // do nothing
}
But for other frameworks, like Vert.x, you use a CopyOnWriteArrayList to store all of the results... and then write them out when you are done. Like so:

private final HttpServerRequest req;
private final int queries;
private final List<Object> worlds = new CopyOnWriteArrayList<>();

...

@Override
public void handle(Message<JsonObject> reply)
{
  final JsonObject body = reply.body;
  if ("ok".equals(body.getString("status")))
  {
    this.worlds.add(body.getObject("result"));
  }
  if (this.worlds.size() == this.queries)
  {
    // All queries have completed; send the response.
    // final JsonArray arr = new JsonArray(worlds);
    try
    {
      final String result = mapper.writeValueAsString(worlds);
      final int contentLength = result.getBytes(StandardCharsets.UTF_8).length;
      this.req.response.putHeader("Content-Type", "application/json; charset=UTF-8");
      this.req.response.putHeader("Content-Length", contentLength);
      this.req.response.write(result);
      this.req.response.end();
    }
    catch (IOException e)
    {
      req.response.statusCode = 500;
      req.response.end();
    }
  }
}
In other words, you literally allocate a new backing array each time you add a result to that CopyOnWriteArrayList. In fact, not only are you allocating a new array, but you are copying every existing element reference into it as well. Seems a little strange??? DEFINITELY inefficient. Is there a reason that is implemented differently? It seems to me that, at the least, they should both use arrays... but maybe there is something more you guys are testing???

The Onion C-based code is written in an even MORE efficient manner for the multiple queries test. It actually stores its results in JSON format from the outset! Like so:
json_object *json = json_object_new_object();
json_object *array = json_object_new_array();
int i;
for (i = 0; i < queries; i++) {
  json_object *obj = json_object_new_object();
  snprintf(query, sizeof(query), "SELECT * FROM World WHERE id = %d", 1 + (rand() % 10000));
  mysql_query(db, query);
  MYSQL_RES *sqlres = mysql_store_result(db);
  MYSQL_ROW row = mysql_fetch_row(sqlres);
  json_object_object_add(obj, "randomNumber", json_object_new_int(atoi(row[1])));
  json_object_array_add(array, obj);
  mysql_free_result(sqlres);
}
json_object_object_add(json, "json", array);
const char *str = json_object_to_json_string(json);
The equivalent Java code would be something like:

private final HttpServerRequest req;
private final int queries;
// INSTEAD OF:
//private final List<Object> worlds = new CopyOnWriteArrayList<>();
// HAVE:
private final JsonArray worlds = new JsonArray();

...

@Override
public void handle(Message<JsonObject> reply)
{
  final JsonObject body = reply.body;
  if ("ok".equals(body.getString("status")))
  {
    // INSTEAD OF:
    //this.worlds.add(body.getObject("result"));
    // HAVE:
    this.worlds.addObject(body.getObject("result"));
  }
  if (this.worlds.size() == this.queries)
  {
    // All queries have completed; send the response.
    // final JsonArray arr = new JsonArray(worlds);
    try
    {
      // INSTEAD OF:
      //final String result = mapper.writeValueAsString(worlds);
      // HAVE:
      final String result = worlds.encode();
      final int contentLength = result.getBytes(StandardCharsets.UTF_8).length;
      this.req.response.putHeader("Content-Type", "application/json; charset=UTF-8");
      this.req.response.putHeader("Content-Length", contentLength);
      this.req.response.write(result);
      this.req.response.end();
    }
    catch (IOException e)
    {
      req.response.statusCode = 500;
      req.response.end();
    }
  }
}
With a similar change for Servlets. According to the benchmark results, Onion comes out on top. It's the fastest. But how much of that is because it seems to be written correctly, while the other tests seem to be written without taking advantage of the same efficiencies? Is it the case here that some people have sent you test code optimized for their own frameworks?
If that is so, you should add some tests that would not be so amenable to optimization. I'm not picking on Onion here by the way. In fact, the argument could be made that Onion is not actually 'optimized', so much as just written correctly, and the other frameworks have tests written incorrectly. But I just wanted to know if you guys actually intended to use these different implementations for some reason that I am unaware of? Do they make the tests more fair somehow???
Thanks for taking the time to dig in and provide some feedback. As much as possible, we want each test to be representative of idiomatic production-grade usage of the framework or platform. Furthermore, we have solicited contributions from fans of frameworks and the frameworks' authors. A side objective is that the code double as an example of how best to use the framework or platform.
All of this means we fully expect that the implementation approaches will vary significantly.
The multiple query test has a client-provided count of queries, so in most Java cases, we create a fixed-size array to hold the results fetched from the database. I wrote the Servlet and Gemini tests, so I can confirm that behavior in those tests.
We are not Vert.x experts and we have not yet received a community contribution for the Vert.x test. However, it is our understanding that idiomatic Vert.x usage encourages the use of asynchronous queries. The question then is: how do we collect the results into a single List in a threadsafe manner? Is your JsonArray alternative threadsafe? Admittedly, using a CopyOnWriteArrayList gave us pause, but we are not (yet) aware of a better alternative.
The Onion test was contributed by a reader and admittedly its compliance with the specification we've created is perhaps a bit dubious. We want a JSON serializer to process an in-memory object into JSON. I'm not certain if the Onion implementation matches that expectation, but the test implementation nevertheless seemed sufficiently idiomatic for his platform.
We're certainly open to more opinions on that matter.
I agree, that approach would be best. I just was unsure why you didn't do it in Vert.x.
"...it is our understanding that idiomatic Vert.x usage encourages the use of asynchronous queries..."
Someone can correct me if I am wrong, but my understanding of Vert.x is that any query you send to the event bus is already asynchronous. There is no need for a developer to worry about threads at all when writing a vert.x handler. That handler will only ever be called from a single thread. So using a simple array is fine. Using the JsonArray is even better, because then it matches the Onion test idiomatically speaking. Which, I agree, is what you should be going for.
"...The Onion test was contributed by a reader and admittedly its compliance with the specification we've created is perhaps a bit dubious. We want a JSON serializer to process an in-memory object into JSON..."
Please don't misunderstand, the Onion test does what you want it to do. As well, it does it in the correct idiomatic fashion. That's exactly how I would write the Onion test. I was just wondering why the other tests went out of their way to decode JSON and then re-encode JSON for each result. Onion only ever encodes to JSON once; other tests are encoding and decoding multiple times. I only pointed out Vert.x because it was the most egregious. I mean, in that case the answer from the persistor is already in JSON. It is put in a non-JSON data structure... and then that data structure is encoded to JSON??? Just seemed weird.
----
EDIT: Just verified that there is no need for thread safe code in a Vert.x handler. (Gotta say... that is pretty slick)
On a connected note... man ... these tests are a VERY good way to learn more about these different frameworks!
----
Unless the number of reads heavily outnumbers the number of writes, it is better to use something like Collections.synchronizedList(new ArrayList<String>()) instead of CopyOnWriteArrayList. The synchronized list does lock while reading or writing, but writing still becomes much faster.
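A small sketch of the trade-off (sizes and thread counts arbitrary): both collectors are threadsafe and end up with identical contents, but CopyOnWriteArrayList pays a full array copy on every add, so N adds cost O(N^2), while the synchronized wrapper only takes a lock per call:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ListChoice {
  // Fill both threadsafe lists from two threads and return their sizes.
  static int[] fill(int addsPerThread) throws InterruptedException {
    List<Integer> cow = new CopyOnWriteArrayList<>();        // copies array per add
    List<Integer> sync = Collections.synchronizedList(new ArrayList<>());  // locks per add

    Runnable task = () -> {
      for (int i = 0; i < addsPerThread; i++) {
        cow.add(i);
        sync.add(i);
      }
    };
    Thread t1 = new Thread(task), t2 = new Thread(task);
    t1.start(); t2.start();
    t1.join(); t2.join();
    return new int[] { cow.size(), sync.size() };
  }

  public static void main(String[] args) throws InterruptedException {
    // Both end up with the same number of elements; only the write cost differs.
    System.out.println(Arrays.toString(fill(1000)));  // [2000, 2000]
  }
}
```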
Did you use orm when it was available? Or just used raw query?
This is in line with my previous query about more usage scenarios (and thank you for fortunes test) who would be like we use them usually.
My point is, that the same reason why we use framework instead of raw language, we use orm instead of direct sql.
it is stated in the benchmark when a framework test uses an ORM or "raw" pdo/whatever for db request.
Things like Doctrine are elegant and smart, but let's face it, they are slow. PHP is not Java. Hibernate may be fast on Java, but Doctrine is hardly fast...
Symfony, Laravel and Silex share the same http-kernel & event-dispatcher. Laravel and Silex, however, can use closures for controllers and filters/middleware; maybe that's why they are faster.
Classes are expensive in PHP, since PHP is not OO-centric and classes are merely an add-on. Bootstrapping Symfony means creating an insane number of objects. There are things that could be done about it. I'm sure PHP frameworks are so slow because of the abuse of class hierarchies.
https://github.com/TechEmpower/FrameworkBenchmarks
We welcome all pull requests, suggestions and criticisms.
I guess though if this is only testing JSON serialization, it may not make sense. Perhaps adding JAX-RS implementations like CXF, Jersey, RESTEasy, and RESTLet would be more appropriate.
The World Wide Wait benchmark, which tested quite a lot, showed quite favorable numbers for both JSF implementations.
[1] http://forum.dlang.org/thread/urpqdftuofgwespkcdxg@forum.dla...
I guess everyone is busy because of the DConf 2013 http://dconf.org/
Otherwise I will remember them next week.
I guess if anything it speaks to what a solid piece of software the JVM is.
While true, it all depends which JVM one talks about, there are plenty to choose from, even native code compilers.
I just noticed something, the two major JSF2.0 implementations MyFaces and Mojarra are both missing.
BTW, if you like Node.js, you should probably look at Vert.x. I haven't used it, but it's a similar concept, it runs on the JVM, and it seems to spank Node.js.
I have had a little less joy experimenting with both Clojurescript and Ember.js (with Clojure back end services): I eventually get things working, but at a huge time cost over writing non-rich clients just using Hiccup.
I wonder how the results would change if only one or two cores were enabled (using taskset or isolcpus)
http://docs.pylonsproject.org/projects/pyramid/en/1.4-branch...
Did you submit a pull request for it and it was denied?
I believe that the way these tests are set up slightly advantages Gemini, and more broadly Java, since they do not measure memory usage, or tasks that make memory usage critical, which is something the JVM sucks at.
Gemini is our in-house framework and there are two points to consider:
(a) We are obviously very familiar with Gemini and therefore know how to use it effectively. For example, we know that we prefer to deploy Gemini applications using the Caucho Resin application server because it has proven the quickest Java application server in our previous experience. Of course, the other Java Servlet-based frameworks also benefit from deployment on Resin in these tests.
(b) In our design of Gemini, we do keep an eye on performance. But as the data shows, there are faster options.
Although we included Gemini in these tests, we did so because we wanted to know how it stacked up against other frameworks that we routinely use on projects. See more information in response to an earlier question here: https://groups.google.com/d/msg/framework-benchmarks/p3PbUTg...
Incidentally, the memory usage profile for the Gemini test is fairly compact, in terms of used heap space within the JVM, as are most of the Java tests in this project. With no need to do so, we're not trimming the memory allocation for the JVM to a minimum in our tests. But if we did, as you point out, the tests we've implemented so far don't require much memory.
A few frameworks have a "stripped" version (just Django and Rails so far) to try to show the best that can be achieved when typical functionality is stripped out. Essentially optimizing for this test, which is interesting even if it isn't the point of these benchmarks. If you think Symfony2 would benefit from a separate "stripped" test then please consider submitting a pull request with that.
Same with any of the other frameworks - some have been optimized by their fans, others are running in default configs.
Techempower should add a filter to show only frameworks that have been optimized.
https://github.com/TechEmpower/FrameworkBenchmarks/issues/11...
If you're interested, we'd love to have some help getting this accomplished.