If all you do is calculate Fibonacci numbers, you can get roughly (number of CPUs) times the performance. You could run multiple node processes for the same effect, but that's additional work.
In the end, it's a matter of the type of service you're building. If it's a Fibonacci generator, you'd use something better suited than node/JS or any other scripting language.
If you are doing something that's I/O heavy (which is probably the majority of today's web applications), node or other scripting languages might be better suited because they are easier to work with for most of us.
It's just tools. Not religion. I wouldn't use a hammer to remove a screw. And I definitely wouldn't write hateful articles (I'm referring to the original one) because somebody was using a screwdriver instead of a hammer to remove the screw.
I think that obviousness is exactly what the author missed.
But it's totally ridiculous how in response, people keep writing these terrible, straw-man Python servers to try to prove that Python is so horribly slow.
If you want to write Python web apps, there is a correct way to do it, and it isn't to use SimpleHTTPServer. Write a WSGI application, and serve it with any of a number of decent WSGI servers (just for starters, try uwsgi behind nginx; but if you really insist on directly serving requests out of a server written in an interpreted language, you could try gevent).
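A minimal sketch of what "write a WSGI application" means; the function name `app` and the response body are my own, but the calling convention is the standard WSGI one, so any of the servers mentioned above can host it:

```python
# A minimal WSGI application (a sketch; `app` and the body text are my own).
# Any WSGI server (uwsgi behind nginx, gunicorn, gevent) can host this callable.

def app(environ, start_response):
    # environ: dict describing the request; start_response: sets status/headers
    body = b"Hello from WSGI\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# For local smoke-testing, the stdlib ships a reference server:
#   from wsgiref.simple_server import make_server
#   make_server("127.0.0.1", 8000, app).serve_forever()
```

The point is that the app is just a callable; the process model, concurrency, and socket handling belong to the server you put in front of it, not to your code.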
    try:
        my_var
    except NameError:
        pass
    else:
        if my_var is not None:
            # Ted needs better examples
            ...
When would you EVER need to use this code? There is no situation in Python in which you should need a variable that may or may not be defined. While Ted's example may seem like a cheap shot, it does highlight an important problem with JavaScript: all the craziness around types that aren't "real" types, like undefined, arguments, and Array. I'm sure things will be a lot better if/when the V8 engine supports let/yield.
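For contrast, the idiomatic Python version of that situation initializes the name up front so it always exists (a sketch; `maybe_compute` is a made-up stand-in):

```python
# Idiomatic alternative to the try/except NameError dance: give the name a
# known initial value instead of probing whether it exists.

result = None  # sentinel: "no value produced yet"

def maybe_compute(flag):
    # stands in for whatever code may or may not produce a value
    return 42 if flag else None

result = maybe_compute(True)

if result is not None:
    print("got", result)  # runs only when a value was actually produced
```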
http://onilabs.com/stratifiedjs https://github.com/0ctave/node-sync
I am sure cancer sucks more than anything I have had to experience, but the world is not going to change its figures of speech for you, and it is not directly derogatory at all; it just reminds you of your misfortune.
BTW, I am not criticizing you for asking the OP to change his headline; I just think it is an impossible crusade.
Assume the Python server is less efficient and takes 7 seconds to finish the job, but it handles requests in parallel, so both people waited 7 seconds.
Therefore, on average, the Python server is in fact faster.
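The arithmetic behind that claim, spelled out under the commenter's assumptions (5 s of CPU per request for the serialized server, as used elsewhere in the thread; 7 s per request for the parallel one; two simultaneous callers):

```python
# Serialized server: one request at a time, 5 s of CPU each;
# the second caller queues behind the first.
node_waits = [5, 5 + 5]

# Parallel server: 7 s per request, but both run at once.
python_waits = [7, 7]

avg_node = sum(node_waits) / len(node_waits)
avg_python = sum(python_waits) / len(python_waits)
print(avg_node, avg_python)  # 7.5 7.0 -> the "slower" server wins on average
```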
I'd guess you used MRI 1.8.X.
I decided to benchmark other versions(and implementations) of Ruby.
= JRuby 1.6.4 (7.3 seconds)

        user     system      total        real
    7.388000   0.000000   7.388000 (  7.349000)

= Rubinius 1.2.4 (little under 6 seconds)

        user     system      total        real
    5.940015   0.006878   5.946893 (  5.842485)

= CRuby 1.9.2 (38 seconds)

        user     system      total        real
   38.250000   0.090000  38.340000 ( 38.376857)

= CRuby 1.8.7 (little under 137 seconds)

        user     system      total        real
  136.960000   0.240000 137.200000 (137.437748)
Thanks! Of course, it is true that V8 is ubiquitous, whereas Rubinius and PyPy are not -- that is the one major advantage of JavaScript.
1) Is node.js async I/O any different from Haskell I/O? 2) Does the author know anything about strongly-typed languages, or did he deliberately ban them from the server side? IMHO, Dziuba tried to drop a hint about strongly-typed languages, not Python or Ruby.
Also if his thoughts on node.js don't annoy you enough go take a look at his archive: http://teddziuba.com/archives.html. He blogs/trolls/thinks about NoSQL, OS X, twisted/tornado, python, queues and more.
The Craigslist Reverse Programmer Troll is clever. While it's openly a troll, it makes a good point. Worth reading.
Python 2.7.1 took 1m25.259s (no server).
Am I doing something wrong? Or is there some incredibly optimized code path for OSX?
edit: even weirder, `time node fibonacci.js` without a server takes 0.090s.
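For reference, the function all of these timings presumably exercise is the naive doubly-recursive fib (a sketch; the exact input size used in the original benchmarks isn't stated here):

```python
def fib(n):
    # deliberately naive: exponentially many calls, pure CPU work,
    # which is exactly what makes it a pathological load for a server
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Timing it, e.g.:
#   import time
#   t = time.perf_counter(); fib(30); print(time.perf_counter() - t)
```

Interpreter-to-interpreter differences of 10-100x on this function are plausible because it is nothing but function-call and integer overhead, the exact things JITs like V8, JRuby, and Rubinius optimize.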
What I was showing was that if your request handler does a nontrivial amount of CPU work, it will hold up the event loop and kill any "scalability" you think you're getting from Node.
If you Node guys were really that irritated by this, you're going to be super pissed when you learn how computers work.
I ain't even mad.
As Ted points out, there are things like Fugue and Nginx that people who are not "less-than-expert programmers" use; "experts" will be fine whether they've got magical behind-the-scenes async stuff going on or not. The question as I see it is: are the node.js docs/homepage misleading about how easy it is to "develop fast systems"?
Nobody has been using it for computing fib, though. Nobody has been putting in things that burn non-trivial amounts of CPU.
It's all about this: http://jlouisramblings.blogspot.com/2011/10/one-major-differ...
It is fairly easy to scale node with multiple processes, as long as you don't have a long-running operation (such as fibonacci). If you do have tasks like that, process them outside node and check for completion, like how Tasks work in Google App Engine.
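The "process it outside and check for completion" pattern, sketched in Python with the stdlib (`handle_request` and the pool wiring are my own naming; in node the equivalent would be a child process or a job queue):

```python
# Sketch: hand CPU-bound work to a worker process pool so the
# request-serving loop stays responsive.
from concurrent.futures import ProcessPoolExecutor

def fib(n):
    # the CPU-bound job we refuse to run on the request-serving thread
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def handle_request(pool, n):
    # submit() returns immediately; the caller can poll future.done()
    # or fetch future.result() later, e.g. on a follow-up request
    return pool.submit(fib, n)

# Usage (guarded, since process pools re-import the main module):
#   if __name__ == "__main__":
#       with ProcessPoolExecutor() as pool:
#           future = handle_request(pool, 30)
#           print(future.result())
```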
Also, most other web stacks will discourage you from running a 30s fib on a thread processing web requests. This isn't specific to node.
Node and CoffeeScript have worked really well for us. Product coming out later this month.
[EDIT: Just noticed that several other people pointed out the same thing. Looks like most node users are aware of potential problems, but I can see such issues being confusing for new users.]
Difference being, with other stacks a request running for 30s will have little impact on the rest of the machine. With node, the whole server gets stuck, not just that precise request and the machine resources necessary to perform the computation (or whatever).
The fib example is extreme, but it's rooted in a real issue with cooperative multitasking: code does not always behave correctly and is not always perfect. You might have used a quadratic algorithm that ran in 10ms on 10 items or so, but in production a user gets it to run on a hundred or a thousand items, and now other users are severely affected, in that their requests are completely frozen while the computation is going on. There are hundreds of other possibilities (small inefficiencies, shortcuts, plain bugs, etc.) which are basically going to break your node application.
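That failure mode is easy to reproduce in any cooperative scheduler. Here is a sketch in Python's asyncio (all names mine), where 200 ms of blocking CPU work starves a task that wanted to wake every 10 ms:

```python
import asyncio
import time

async def ticker(ticks):
    # a well-behaved task that wants to run every 10 ms
    for _ in range(5):
        ticks.append(time.perf_counter())
        await asyncio.sleep(0.01)

async def bad_handler():
    # no awaits: 200 ms of blocking work monopolizes the one event loop,
    # just like a long fib (or an accidentally quadratic pass) in node
    time.sleep(0.2)

async def main():
    ticks = []
    t = asyncio.create_task(ticker(ticks))
    await asyncio.sleep(0)   # let the ticker take its first turn
    await bad_handler()      # every other task is frozen meanwhile
    await t
    gaps = [b - a for a, b in zip(ticks, ticks[1:])]
    return max(gaps)         # ~0.2 s instead of the intended 0.01 s

# asyncio.run(main()) shows the ticker being starved by the bad handler
```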
Which is like the WHOLE POINT of the original article...
I still don't know why people are so maximalist about their tools. NodeJS is a tool. It works well with your _other_ tools.
There - is - no - perfect - tool, no perfect programmer, not even perfect intent.
So Node is reduced to the trivial work? Then why make it unnecessarily hard on yourself?
> I still don't know why people are so maximalist about their tools.
Because moving parts = risk.
What about a single-core system? Well, I guess a threaded/multi-process solution would time-slice the fibonacci requests between two threads so that both requests are served in 10 seconds, instead of one request in 5 seconds and the next in 10 seconds as with the node.js solution. Does not sound much better.
If you have done any kind of systems programming, you know that the availability of asynchronous I/O is a life saver and can greatly simplify your locking model. 90% of the issues you face when building such systems are that some module deep inside grabbed a lock and issued a blocking I/O request, and now the rest of the system is bottlenecked behind it. Node.js is basically trying to eliminate the possibility of such a module existing. This complicates I/O calls but simplifies locking, in the sense that you don't really need all those locks in your system; in node.js, of course, there are no locks.

The complexity moves from reasoning about locks to reasoning about correctly handling I/O calls and responses. IMO, this is the correct place to move the complexity to, because locks are simply an abstraction the programmer built. When debugging the system, we have to deal with "How do I get rid of this monolithic lock?" when the real problem is "This I/O is taking too long; we shouldn't be blocking on it." An async programming framework tackles this problem head on.
If you use Python/Perl you will never really know the number of instances of the process to run. Too many and you time-slice requests, slowing all of them down and growing your queueing buffers instead of just dropping the extra requests. Too few and you start dropping requests that you could have served. With a framework like node.js, the number of instances you want is simply the number of cores on the server.
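That sizing rule is mechanical; a sketch using only the stdlib:

```python
import os

# One single-threaded event-loop worker per core saturates the CPU without
# oversubscription; node's own `cluster` module applies the same rule.
workers = os.cpu_count() or 1
print(f"spawn {workers} worker process(es), one per core")
```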
Of course node.js can be an inappropriate solution for a wide variety of reasons, but I could not find anything really relevant to that in your post. Alex Payne discusses some of the issues here; you may want to read it: http://al3x.net/2010/07/27/node.html
Furthermore, since node.js is single-threaded, what's wrong with blocking on I/O in that single thread? The process hangs, is put to sleep by the OS, and wakes when there's I/O available. You gain a simpler programming model than using callbacks/continuations.
Eh? You could do async I/O in both Perl and Python (and C and Erlang...) years before node came along.
And why are you at the top? Really disappointed.