Linux (and Unix in general) is "control plane" software. It was literally developed to replace the people who used to patch phone calls together with physical cables.
That thing they were patching? That's the "data plane". It was incredibly high bandwidth relative to the control plane, since it was completely optimized for moving data.
Back to Unix. Unix is designed for the control plane. It is not designed for rapidly moving large amounts of data with maximum efficiency. Why do we use it today for what are arguably "data plane" tasks? Well, when all you have is a hammer...
Nowadays, you can have both on the same machine: run Linux on the first core or two and reserve the remaining cores for your app (the isolcpus boot parameter does exactly this). Combine that with huge page allocations to reduce TLB pressure and you can literally own all CPU activity on those cores, and lock all of your RAM too.
That app is a normal Linux app, but it runs on the raw hardware, almost as if there were no operating system at all. Give it direct control over the network hardware as well and you've bypassed the kernel entirely.
With UDP, you don't even need a full networking stack (the framing is simple enough to parse yourself), which makes this approach particularly attractive. Another poster mentioned saturating a 10Gb link. How about saturating four 10Gb links on a single machine? It can be done with Xeon E5-class processors and the software architecture I described above.
We're shooting for 10 million packets processed per second on a single, ~$20K machine. That's pretty sweet if you ask me, and a hell of a lot more than Node.js can do.