I use it and love it every day in both dev and prod, but I also really kind of hate it.
I'll keep my complaints short.
There should not be a system-wide daemon. (Or any daemon).
It should not require root at all (no setuid either).
From outside the container, the container and its processes should appear as a single process (with threads), as if a bunch of processes had been glued together.
The containers should be nestable to arbitrary depth without performance loss (say, at least hundreds of levels deep).
Docker-compose should not exist; it should be replaced by nesting containers.
Basically, I think it needs to follow the UNIX philosophy better by providing simple abstractions that can be combined easily. The containers would visually look a bit more like an old virtual machine (single process) than our current containers.
These changes probably require a bunch of kernel hacking, but I think it would be worth it long-term for a cleaner architecture.
It appears there is some movement in this direction thanks to podman, but it's really not there yet, especially with nesting.
Also, it wouldn't really be a product at all but just a built-in tool on Linux systems.
I particularly love the quote: "The kernel developers' view of the docker community is that in the rare case they can actually formulate the question correctly, they usually don't understand the answer."
There is only so much that you can say to clarify things to someone who is thinking about everything wrong and doesn't realize it. :-(
I guess I am one of those, so I've got to ask: is the proposed solution of unikernels something we had before but lost in favor of containers, or is it something completely new anyway?
It does look like it might be the latter, so why blame developers for using containers due to lack of choice? If unikernels are better and just as easy to use, then I am sure people will convert.
He blames a lot on marketing and marketing lies, but his company (https://nanovms.com/) seems to make it just as hard to figure out what's going on, with apparently the only option being a "schedule a demo" button.
Come on, I remember Docker being that fancy new thing that people at university taught themselves and each other around ~2014/2015. That hype was well deserved, and if you want to compete with that you can't just brush it off as wrong and misguided.
At the risk of pointing out that I also might be one of those that the quote above is referring to, I gotta ask:
Is there a technical reason why I shouldn't be able to eventually just replace Docker with a micro or unikernel? Same or similar style of image definition, completely different runtime technology?
Isn't it up to the kernel and platform developers to build the tools to make that happen comfortably for all of us naive container users?
Many legacy pre-docker apps were able to run inside docker without any dev work.
Very few apps would run on unikernel without dev work (porting). It's a different kernel after all.
This article completely conflates containers, orchestrators, and schedulers at every point in the discussion. Something will schedule and orchestrate these microVMs. Something will orchestrate secret materialization inside those VMs. Something will operate on the host to supervise the VMs (and will necessarily have access to the guests).
So far, every microVM platform with any adoption uses Kubernetes to orchestrate. I don't know, maybe someone is running Kata on Nomad or something, but I've not heard of it. And so far, most (all?) microVM implementations utilize namespaces and cgroups inside the VM, outside it, or both. This includes Chromium's use of OCI in Crostini (their Linux-VM-on-ChromeOS).
Whatever comes along and replaces Kubernetes will push the envelope: it will reduce the default blast radius and will undoubtedly rethink how authorization and namespacing work from the ground up. The core would be much more minimal. And thousands of lines of generated Go would be replaced with <use your imagination>. And progress will have happened.
I get it. Hating k8s is cool. I hate it too, for a whole myriad of reasons. But it's actually frustrating how bombastic and off the mark that article manages to be. And it's too bad: if it had just stuck with "Kubernetes isn't the future" and actually understood the problems with it, it could've been a decent rant. As-is, I think it does a pretty poor job of justifying the title. (And so far, microVM workloads look to be worse for "image" security than Docker, as the tooling (outside of Nix|Guix) is somehow even worse.)
Is there a microvm that can run chromium with puppeteer?
I've been thinking that server side chromium might actually turn into a pretty badass application server platform ... security, async, remote debug, webasm for cross platform secure binaries ...
Some efficient infrastructure for deploying is needed -- but it should be far easier to create a fast server runtime for puppeteer+chromium than it is to create a generic container execution environment ... -- so the microvm approach seems like the right one for what I want ...
Perhaps the problem is Darwin.
0: https://github.com/docker/roadmap/issues/12#issuecomment-652...
Strongly disagree about Docker Compose though - I actually really like the ability to compose a stack of different containers together with some simple yaml.
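For reference, the kind of "simple yaml" stack definition being praised here looks roughly like this (service names, images, and ports are made-up examples, not anything from this thread):

```yaml
# docker-compose.yml — hypothetical two-service stack
version: "3.8"
services:
  web:
    image: nginx:alpine          # example frontend image
    ports:
      - "8080:80"                # host:container
    depends_on:
      - db
  db:
    image: postgres:15-alpine    # example database image
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

One `docker-compose up` brings up both containers, the named volume, and a shared network, which is exactly the convenience being defended.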
It could even be compatible with docker-compose and its yaml.
It meant that a bunch of my beta users suddenly had broken PhotoStructure configurations because their docker-compose implementation had received a minor update. Why require a version in your configuration file and then not increment it on breaking changes?
I ended up tearing out the script that helped people create their own docker-compose.yml file, and replaced the installation instructions with an annotated call to `docker run`.
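An "annotated call to `docker run`" along these lines might look like the following sketch (the image name, port, and paths are placeholders, not PhotoStructure's actual instructions):

```sh
# Detached, with a stable name so upgrades can target the container,
# a restart policy so it survives reboots, one published port, and a
# bind mount so user data persists outside the container.
docker run -d \
  --name my-app \
  --restart unless-stopped \
  -p 1787:1787 \
  -v /path/to/library:/app/library \
  example/image:latest
```

The appeal over compose for a single-container app: no yaml schema to drift underneath you, just documented flags.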
And don't get me started on how janky it is to update existing containers to new images without docker-compose: there seems to be only one third-party tool to assist with this automatically (lighthouse), but it is essentially abandoned. I'd love to be wrong about this; please point me to other solutions if they exist!
I don't miss Docker.
UNIX, and especially Linux, is a monolithic design. Even though such an OS is able to separate user processes from each other, all system parts run, by design, as a "big ball of mud", with "god-like" capabilities available to them by default. Sure, some internal "barriers" have been added, and per-process capability dropping has been retrofitted, but this is backwards from the architectural point of view. Cutting things into pieces after the fact is almost always far more complicated and awkward than designing things in a modular way from the get-go.
This is related, as virtualizing a modular OS is almost a no-brainer (conceptually): you just need to start additional instances of the required system servers / modules / whatever-you-call-those-parts. Compared to that, virtualizing a monolith is like trying to construct a kind of Ouroboros: it needs to run itself (with an altered, usually constrained view of the 'outside' world) from inside of itself; and it can't just globally drop the "god-like capabilities" its execution context provides, like it could with an external process. It needs to "hide or manipulate things in front of its own eyes" even though "it" has the "all-seeing eye". Or to put it even more metaphorically: "A god tries to use his divine powers to constrain his omnipotence so he can lie to himself about the things he sees, without ever being able to look through this jugglery." Formulated like that, the architectural issue is obvious, I guess.
[1] https://www.youtube.com/watch?v=PivpCKEiQOQ , and I just learned it seems he was also a Kubernetes fan. :-)
No idea if this is possible.
Linux is partially to blame as well, since the container/isolation APIs can be hard to use correctly, and many people have latched on to docker as something that sort of works.
It also seems to me that the failed security and isolation designs, and painful management and administration designs, of every mainstream operating system have been primary factors in pushing us toward VMs and containers in the first place.
I think the biggest reason people like Docker is that Docker makes it so easy to distribute containers.
The only thing that really needs setuid is network namespace setup (creating the bridges). Userspace workarounds are clunky and slow. If you can do without network isolation, then this would be possible.
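For context on those userspace workarounds: rootless engines such as podman hand networking to a userspace TCP/IP stack (slirp4netns) instead of a kernel bridge, which avoids setuid helpers at a real throughput cost. A sketch of the two modes (flag names as documented by podman; the images are just examples):

```sh
# Rootful: a veth pair plus a kernel bridge, set up with CAP_NET_ADMIN.
sudo podman run --rm -p 8080:80 nginx:alpine

# Rootless: no bridge and no setuid helper; packets are shuttled
# through the slirp4netns userspace network stack instead.
podman run --rm --network slirp4netns -p 8080:80 nginx:alpine
```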
> The containers should be nest-able to arbitrary depth without performance loss (at least to say, hundreds of nestings deep.)
Multiple levels of nesting are possible if you disable seccomp. I don't know if it scales to hundreds though. Overlayfs has hard limits, and btrfs snapshots don't scale infinitely either.
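One level of this is easy to try with Docker-in-Docker, which indeed needs the outer container's confinement loosened (shown here with `--privileged`, the blunt instrument the official dind image documents):

```sh
# Outer container: the official dind image, run privileged so the
# inner dockerd can do its own mounts, pivot_root, and cgroup setup.
docker run --rm -d --name outer --privileged docker:dind

# Inner container, one level down. Each additional level repeats
# this dance, and overlayfs stacking limits bite long before
# "hundreds of nestings deep".
docker exec outer docker run --rm alpine echo "hello from level 2"
```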
> Also, it wouldn't really be a product at all but just a built-in tool on Linux systems.
Well, there's systemd-nspawn and machinectl.
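For anyone who hasn't tried it, the daemonless flow looks roughly like this (the distro and path are just examples):

```sh
# Build a minimal Debian tree to use as the container root.
sudo debootstrap stable /var/lib/machines/demo

# Boot it as a container; no long-running container daemon involved.
sudo systemd-nspawn -D /var/lib/machines/demo -b

# machinectl can then list and manage running machines.
machinectl list
```

It ships with systemd on most distros, which is about as "built-in tool on Linux systems" as it currently gets.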
> Well, there's systemd-nspawn and machinectl
There's also podman, which is a drop-in replacement for docker, and buildah, which does daemonless container builds. I switched to them from Docker recently and will never look back.
I'm pretty sure this is doable today. It's a monstrous hack, and I've got no idea what the performance overheads would look like, but as a way of hiding a mess behind a clean facade, I'm not aware of any reason it shouldn't work.
They're trying to use Linux's built-in container features (the namespaces and cgroups that LXC builds on).
When I'm staring at the worst of it (unsticking myself or, worse, trying to explain to a stuck coworker why it's like this), I keep thinking: there's a standard for making these containers; won't someone get around to rewriting the user-facing bits with the modern requirements designed in from the start?
But it's good enough, so we are probably stuck with it until someone comes up with a better idea to base application compartmentalization upon. Like an OS that actually does what I was promised 25 years ago and am still waiting for.
But one pattern I see all the time is:
    RUN foo && bar && bletch && ...
They should have a way of achieving the same thing (just one layer) without multiple commands all crammed onto the same line.
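They do now: BuildKit's heredoc syntax lets a single RUN (one layer) hold multiple commands written on their own lines. A minimal example (the apk packages are arbitrary):

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.19

# One RUN, one layer, but each command gets its own line.
RUN <<EOF
set -e
apk add --no-cache curl
mkdir -p /opt/app
echo done
EOF
```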