Is this any better than the Docker method of reusing the same base-OS and compartmentalizing the applications? Is there that much to be gained in avoiding kernel/user-space transitions?
If you want to strip a process's privileges down to exactly what it needs to run, we've got things like jails and cgroups. You could probably run a Go app with no access to the filesystem, since everything is statically linked anyway.
I think it misses the reasons people are excited about virtualization. Reproducibility and uniformity of environment have a higher value than isolation for most software developers. The priorities may be inverted on the sysadmin side, but I don't think they're inverted enough to justify this kind of approach.
You can achieve the isolation with jails and cgroups, but not the performance improvements.
Shouldn't the unikernel approach actually improve reproducibility greatly? You build your application and all its dependencies together, and the result should run exactly the same locally or on your Xen cloud.
Yes, there's at least an order-of-magnitude improvement in packet processing, for example, if you bypass the kernel.
Our mobile app runs WebSocket-like connections over UDP with libsodium for crypto, and we're moving our stack off of Node.js for exactly that reason.
Things like file descriptor limits or connection counts are more of a pain in modern times, and those aren't necessarily problems caused by context switches.
Once that's done, the Xen backend is just a matter of filling in the missing kernel components with OCaml libraries (or, in the case of OSv, with C libraries, or in HaLVM's case, Haskell libraries).
In the case of MirageOS though, we're using this fine-grained dependency control to implement other backends too: for instance, compiling the same source code to run as a FreeBSD kernel module or as a JavaScript library. There's already a Unix backend, so nothing special needs to happen to run it under Docker.