The solution has to come either in the form of static compilation or, even less feasibly, getting devs to actually care whether their software runs on platforms more than a year old. Containers just make everything worse in all cases beyond the contrived "it just worked and I never need to change anything".
Packaging is hard, and both debian-based and rpm-based systems (and really most others I've seen) are pretty awful (except the BSDs, which I've had a lovely time with).
They're slow, they're stateful, writing them involves eldritch magic and a lot of boilerplate, and they're just frequently broken. Unless you're installing an entire OS from scratch, you're probably going to have a hard time getting your system into the same state as somebody else's. And while running that from-scratch OS install is definitely possible in an as-code way, it can take an hour.
Containers came along and provided a host of things traditional packaging systems didn't, and they took the ecosystem by storm. With them came a whole lot of probably unnecessary complexity from people wanting to add things; adding things without ending up with a huge mass of complexity is hard and takes a lot of context knowledge.
So we ended up solving a host of problems with containers and creating a whole new set along the way.
A few random examples (not the best you could find, just something I've used recently):
- re-packaging pre-built binaries:
https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=visua...
https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=nomad...
- building C from source
https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=tinc-...
- building Go from source
https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=yay
- patching and building a kernel
https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=linux...
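For readers who haven't seen the format, the "re-packaging a pre-built binary" case boils down to a PKGBUILD roughly like this. This is a sketch, not one of the linked packages; every name, URL, and version here is invented:

```shell
# Hypothetical PKGBUILD: re-packaging an upstream binary release (all names invented)
pkgname=mytool-bin
pkgver=1.0.0
pkgrel=1
pkgdesc="Example tool, re-packaged from an upstream binary tarball"
arch=('x86_64')
url="https://example.com/mytool"
license=('MIT')
source=("https://example.com/mytool/releases/mytool-${pkgver}-linux-amd64.tar.gz")
sha256sums=('SKIP')  # real packages pin an actual checksum here

package() {
  # $srcdir holds the extracted sources, $pkgdir is the staging root makepkg tars up
  install -Dm755 "${srcdir}/mytool" "${pkgdir}/usr/bin/mytool"
}
```

makepkg sources this file rather than executing it directly, so the whole package definition is a dozen lines of shell variables plus a package() function.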
A big reason for that is that, in the past, far fewer developers were confronted with this problem domain.
In larger companies packaging and deployment was often the responsibility of ops, with some input from and interaction with development. That of course also meant much longer lead times, arguments about upgrading versions of libraries or other executable dependencies, divergence of production and development/test environments, and the associated unfamiliarity with the production environment for developers and hence often more difficult debugging.
Ever since Docker (+ Kubernetes and various cloud specific container solutions) became so popular, a lot of devs now at least partially deal with this on a regular basis.
Which is mostly a good thing, due to the negatives above.
Most corporate use of Docker I've encountered is a mess of stupid patterns like "RUN command A && command B && command C && ..." to reduce layers or some such nonsense, which makes debugging build failures tedious.
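For what it's worth, the &&-chaining exists because each RUN instruction creates its own image layer; newer Dockerfile syntax keeps the single layer without the unreadable one-liner. A sketch (the package names are illustrative, and the heredoc form needs BuildKit with Dockerfile syntax 1.4 or later):

```dockerfile
# syntax=docker/dockerfile:1
FROM debian:bookworm

# Classic style: one giant chained RUN to avoid extra layers
# RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*

# Heredoc style: still a single layer, but each step is debuggable and readable
RUN <<EOF
apt-get update
apt-get install -y curl
rm -rf /var/lib/apt/lists/*
EOF
```

When a step fails, the build log still points at a readable line instead of the middle of a 300-character chain.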
Yes, absolutely, and I hope you mean that in the capital-F "Future Shock", Alvin Toffler sense, because there is a lot he wrote that hasn't even been carried over and digested. Software is an endlessly disorienting sea of change, getting faster and thus worse as time progresses, and it's frankly madness at this point.
It seems absolutely no one is committed to providing a stable platform for any purpose whatsoever. Even Java, where I spent many years being ingrained with the absolute necessity of backwards compatibility with old (perhaps even dumb) classfile versions, has been making breaking changes as part of its ramp up to semi-annual major version releases. Node Long Term Support "typically guarantees that critical bugs will be fixed for a total of 30 months."[1] Pfft. It's a joke. You can't get your damn API design straight by version 12? I'll do my damnedest to avoid you forever, then. It's so unserious and frankly irresponsible to break so much stuff so often.
But change only begets more change. We're all on an endless treadmill, constantly adapting to the change for no reason. And people have to adapt to our changes, and so it goes.
Containers side-stepped the deficiencies of Linux distributions, which had become so based on 'singleton' concepts; one init system, one version of that library etc.
A shame, because there's an inherent hierarchy (everything from the filesystem to UIDs to $LD_LIBRARY_PATH) that could really allow things to co-exist without the kludge of overlay filesystems. It's just that it was never practical to, e.g., install an RPM in a subdirectory and use it.
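As a sketch of the kind of co-existence meant here: per-version install prefixes plus $LD_LIBRARY_PATH, no overlay filesystem involved. The paths and library name are hypothetical:

```shell
# Hypothetical per-app prefix: each version of a library lives in its own tree
PREFIX="$HOME/apps/libfoo-1.2"
mkdir -p "$PREFIX/lib" "$PREFIX/bin"

# A typical autotools build would target that prefix instead of /usr:
#   ./configure --prefix="$PREFIX" && make && make install

# At run time, point the dynamic loader at the private lib dir:
export LD_LIBRARY_PATH="$PREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

Nothing about the loader prevents this; it's the packaging tools that hard-code a single global prefix.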
Containers aren't a very _good_ solution, they're just the best we've got; and they're still propped up by an ever-growing Linux kernel API as the only stable API in the stack...
This is why we don't play games with siloing responsibilities on the tech stack. Every single developer on the team is responsible for making the entire product work on whatever machine it is intended to work on. No one gets to play "not my job", so they are encouraged to select robust solutions lest they be paged to resolve their own mess in the future.
Maybe those solutions are containers in some cases, but not for our shop right now. Our product ships as a single .NET binary that can run on any x86 machine supported by the runtime.
This is really not a new problem :) I remember dealing with shared library versioning issues from not long after I started in IT in the '90s, and it's been a problem ever since.
Solving that problem seems like a win to me.
Considering that the stack of options from Kubernetes, Helm, Istio, etc. can get complex, the developer can focus on the boundary requirements... expected environment variables and peer systems/services.
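That boundary contract often reduces to a short fragment of a Deployment spec. A sketch, with hypothetical names and services:

```yaml
# Hypothetical Deployment fragment: the app only needs to know these env vars
# and the DNS names of its peer services, not how they are wired up underneath.
containers:
  - name: api
    image: example/api:1.0
    env:
      - name: DATABASE_URL
        value: postgres://db.default.svc.cluster.local:5432/app
      - name: CACHE_HOST
        value: redis.default.svc.cluster.local
```

Everything behind those two hostnames (replicas, load balancing, service mesh) is someone else's concern.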
I would also not downplay the importance of Docker's support for software-defined networks and its ability to arbitrarily configure networking at the container level.
I firmly believe that networking doesn't pop up so often while discussing Docker because Docker solves that problem so fantastically well that a complex problem simply ceases to exist and drops out of everyone's mental model.
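As a concrete illustration, a compose file can declare the whole network topology in a few lines. This is a sketch with invented service names:

```yaml
# Hypothetical compose file: two user-defined networks. "api" can reach both
# "web" and "db"; "db" is only reachable on the backend network.
services:
  web:
    image: nginx
    networks: [frontend]
  api:
    image: example/api
    networks: [frontend, backend]
  db:
    image: postgres
    networks: [backend]

networks:
  frontend: {}
  backend: {}
```

Segmentation that would otherwise mean VLANs or firewall rules is a handful of YAML keys, which is exactly why nobody talks about it.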
I relatively rarely work with Java and am probably mistaken.
The problem doesn’t start with virtualization, that is indeed a side-track.
Also:
> Consider also that Docker relies on Linux kernel-specific features to implement containers, so users of macOS, Windows, FreeBSD, and other operating systems still need a virtualization layer.
First, FreeBSD has its own native form of containers and Windows has its own native implementation. Docker != containers.
I really don't see how Docker (or containers as we mostly know them) relying on kernel-features from an open source operating system in order to run Linux OS images as something to even complain about, and there is nothing preventing Mac from implementing their own form of containers.
Is vanilla Kubernetes easy for new developers? No, but there is an entire ecosystem offering tools and platforms to make development using containers as seamless as possible. Microsoft saw this, so they really had no choice but to adopt the container terminology and partner with Docker to try to stay relevant.
My guess is that without containers, Microsoft would never even have built WSL. If you want a smooth developer experience with containers, that is what solutions like GitLab offer. Even Microsoft's GitHub is essentially built around running various actions inside containers.
I personally welcome the change. I can spin up a local Kubernetes cluster and test an entire cluster of applications locally if I want, or integrate it into Skaffold or whatever else and test live in the cloud. It really is a lot better than what we had before. I think the solutions though really come down to documentation and resources to help train new employees and acclimate them.
In the end, there are only a few missing pieces to offer a more robust solution. I do think that making it all WebAssembly will be the way to go, assuming the WASI model(s) get more fleshed out (sockets, fetch, etc.). The multi-user web Doom on Cloudflare[1] is absolutely impressive, to say the least.
I kind of wonder if Cloudflare could take what FaunaDB, CockroachDB or similar offer and push this more broadly... At least a step beyond k/v, to something that could support database queries/indexes against multiple fields.
Been thinking on how I could use the existing Cloudflare system for something like a forum or for live chat targeting/queries... I think that the Durable Objects might be able to handle this, but could get very ugly.
But that's why anytime you integrate with one of these tools you should be aware that there is a cost for maintaining that integration.
My efforts => https://micro.mu
Oh and prior efforts https://github.com/asim/go-micro
I wish someone would rewrite docker-compose in a single go or rust binary so that I don't have to deal with the python3 crypto package being out of date or something when simply configuring docker/docker-compose for another user (usually me on a different machine or new account).
^ There's an RC of a compose subcommand built into the standard docker CLI.
Next, I started working with Docker and languages with better package management. Dependencies were fetched in CI and were either statically linked or packaged in a container with the application I was working on. Still, these were mostly monoliths or applications with simple API boundaries and a known set of clients.
In the past few years, almost everything I have written has involved cloud services, and when I deploy an application, I do not know what other services may soon depend on the one I just wrote. This type of workflow that depends on runtime systems that I do not own - and where my app may become a runtime dependency of someone else - is what I am referring to as a "modern development workflow".
I know docker has made it part of the way there over the years with Compose and so on, but it's all felt pretty ad-hoc, whereas k8s feels like a cohesive system designed against a clear vision (which makes sense, since it was designed as borg 2.0)— no one else working in this space had the benefit of having already built a giant system for it and used it at scale for years beforehand.
Providers in turn responded by shilling their 'in house' containerization products and things like Lambda for lock-in.
Containers were the next logical step, as each virtual machine vendor tried to lock in their users. Containers allowed routing around it.
Both of these steps could be eliminated if a well behaved operating system similar to those in mainframes could be deployed, so that each application sat in its own runtime, had its own resources, and no other default access.
There's a market opportunity here, it just needs to be found.
Containers and VMs let you divide and solve problems in isolation in a convenient manner. You still have the same problems inside each container.
Firstly, Docker & k8s made using containers easy. Minimal distros like Alpine simplify containers to a set of one or more executables. You could implement the same thing with a system of systemd services & namespaces.
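The systemd version of that idea looks roughly like this unit file. A sketch (the service name and binary path are invented; the directives are standard systemd sandboxing options):

```ini
# Hypothetical unit: sandboxing a service with systemd's namespace features,
# approximating much of what a container provides.
[Unit]
Description=Example sandboxed service

[Service]
ExecStart=/usr/local/bin/myapp
# Filesystem isolation: private /tmp, read-only OS, a dedicated state dir
PrivateTmp=yes
ProtectSystem=strict
StateDirectory=myapp
# Network and user isolation
PrivateNetwork=yes
DynamicUser=yes

[Install]
WantedBy=multi-user.target
```

What Docker added on top was mostly the image format and distribution, not the isolation primitives themselves.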
But now that everything was a container, you need a way to manage what & where containers are running and how they communicate with each other.
It looks like 90% of the stuff different container tools and gadgets try to solve is issues they created. You can no longer install a LAMP stack via 'apt install mysql apache php7.4'; instead you need a tool that sets up 3 containers with the necessary network & filesystem connections. It's certainly better because it is all declaratively defined, but it is still the same problem.
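The containerized equivalent of that one apt line is something like the following compose file. A sketch; the image tags and password are illustrative only:

```yaml
# Hypothetical compose replacement for `apt install mysql apache php7.4`
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder, never ship a real secret this way
    volumes:
      - db-data:/var/lib/mysql
  web:
    image: php:7.4-apache
    ports:
      - "8080:80"
    depends_on:
      - db

volumes:
  db-data: {}
```

One apt invocation became fifteen lines of YAML plus a volume definition, which is the trade the comment is describing: the same stack, just declaratively pinned.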
This is why I mostly stayed out of containers until recently. The complexity of containers really only helps if you need to replicate a certain server/application. You will still need to template all of your configuration files even if you use Docker, etc.
What is changing everything IMO is NixOS because it solves the same issues without jumping all the way to Docker or k8s. Dependencies are isolated like containers but the system itself whether it is a host/standalone or a container can be defined in the same manner. This means that going from n=1 to n>1 is super easy and migrating from a multi-application server (i.e a pet server) to a containerized environment (i.e to a 'cattle' server/container) is straightforward. It's still more complex and a bit rough compared to Docker & k8s but using the same configuration system everywhere makes it worthwhile.
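For context, the "same configuration system everywhere" point looks roughly like this in NixOS. A sketch; the choice of nginx is illustrative:

```nix
# Hypothetical configuration.nix fragment: the same module style defines
# a service on the host or inside a declarative NixOS container.
{ config, pkgs, ... }:
{
  # On the host: enable the service directly
  services.nginx.enable = true;

  # Or reuse the same expression inside a declarative container
  containers.web = {
    autoStart = true;
    config = { ... }: {
      services.nginx.enable = true;
    };
  };
}
```

Going from "pet" host to "cattle" container is then a matter of moving the same attribute set under containers.*, rather than rewriting the deployment in a second tool's language.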
Nothing says love like realizing that you are segfaulting due to a library version you didn't test against subtly changing its behavior.
This amounts to using Perl, bash, and POSIX.
On the client side, of course, it is HTML and JS, which I use a very limited subset of to improve compatibility.
Each application runs in its own container, unless it is granted granular permissions to do otherwise.
The code and assets for a program belong in its own quarantined section, not spread out over the filesystem or littered around /etc/ and /var/.
Built in networking for these containers.
Even if it is a joke, people want silver bullets. Silver bullets kill the hairy problems, which we might as well call werewolves.
The downside is that hairy problems, just like werewolves, come from people. So in the end it's a people problem, not a container-tech or other stack problem. There are no werewolves without people :)