Containerization lets you confer some of the advantages of static linking on languages and libraries that don't natively support it.
Not really. The term "static linking" seems to be getting abused to mean stand-alone executables, because that's what some people know. Yet calling containers a kind of "static linking" is simplistic and incorrect, even taking the standalone-executable interpretation into account.
If anything, container images are installers, and containers are the end result of installing and configuring those images, which goes largely unnoticed because it works so well, especially the networking part. More importantly, containers are designed to be ephemeral and to support multiple instances running in parallel on the same machine.
Then there's also the support for healthchecks, which not only lets container engines determine when they should recreate containers, but also provides out-of-the-box support for blue-green deployments.
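For instance, a healthcheck can be declared right in the image (a minimal sketch; the base image, probe endpoint, and intervals here are made-up examples):

```dockerfile
FROM nginx:alpine
# Probe the container every 30s; after 3 consecutive failures it is
# marked unhealthy, and an engine/orchestrator can replace it.
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -qO /dev/null http://localhost/ || exit 1
```

Nothing in a statically linked binary carries this kind of lifecycle metadata along with it.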
And absolutely none of this fits the "static linking" metaphor.
All the other things you talk about could be set up for standalone binaries as some form of orchestration, after installation, as you say.
To me, the analogy doesn't have to be 1:1 for it to work. Yes, that means there are edge cases which should be taken into account, but that doesn't make it useless.
Especially if you look at how most developers see containers: "I'll give you this image, which I know works [in a given way] and you can run it with something that understands it". You can go ahead and set up a full K8S cluster to run it, or you can run it on Docker Desktop on a random Windows / Mac laptop. "It just works", and it's here, I think, that the "universal static binary" analogy works.
I get the feeling that all those fancy orchestration tools (health checks, blue/green deployment, fancy network setup, etc.) have seen impressive growth around container runtimes, and are therefore often thought of as belonging together, but I don't think one requires the other, or that either couldn't exist without the other.
I'm curious to see what kind of ecosystem will grow around new developments such as Firecracker and "unikernel containers". I seem to remember a post on HN the other day about some effort by Google to run Go binaries directly on some VM kernel.
That metaphor makes no sense beyond the standalone part.
Container images are way more than mere stand-alone statically linked binaries, much like installers (deb/rpm/MSI/PKG/etc.) are way more than statically linked binaries.
If anything, container images are akin to fat JARs or macOS bundles, but even that metaphor leaves key features out, like healthchecks, image inheritance, and of course support for software-defined networking.
> To me, the analogy doesn't have to be 1:1 for it to work.
The whole point is that the metaphor makes no sense and completely misses the whole point of containers.
No one uses containers because they want statically linked binaries. Or even installers or packages. At all.
What container users want is what containers provide, and neither statically linked binaries nor zips nor installers nor bundles come close to offering it. We're talking about being able to deploy and scale the same app at will in a completely sandboxed and controlled environment. We're talking about treating everything as cattle, from services to system architecture. We're talking about out-of-the-box support for healthchecks, and restarting apps when they fail.
Each and every single one of these features is made available with docker run. That's what containers offer.
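As a sketch of what that single command buys you (the image name, network name, and resource limits below are arbitrary examples, and this assumes a running Docker engine):

```shell
# Software-defined network: containers on it resolve each other by name.
docker network create mynet

# One sandboxed, resource-limited, self-healing instance:
# --restart has the engine relaunch the app when it fails,
# --memory/--cpus cap resources per instance, and
# --health-cmd/--health-interval drive the healthy/unhealthy state.
docker run -d --name web-1 \
  --restart=on-failure \
  --memory=256m --cpus=0.5 \
  --network=mynet \
  --health-cmd='wget -qO /dev/null http://localhost/ || exit 1' \
  --health-interval=30s \
  example/web:latest

# A second instance of the same app, in parallel on the same host.
docker run -d --name web-2 --network=mynet example/web:latest
```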
None of this has anything to do with static linking anything, and insisting on this metaphor shows that people completely miss the whole point of containers.
No, not really. Just because in the end you get to run an application, that does not mean it's reasonable to explain things in terms of static linking.
This broken metaphor is particularly counterproductive once you take into account that container images also give you the tools to bundle interpreters, run shells, and create temporary filesystems automatically.
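For example (a sketch; assumes a Docker engine and the official `python` image):

```shell
# Drop into an interpreter shipped inside the image, with an ephemeral
# in-memory filesystem mounted at /scratch; --rm throws the whole
# container away when the interpreter exits.
docker run --rm -it --tmpfs /scratch python:3.12 python
```

A statically linked binary has no story for any of this.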
Talking about containers in any way that leaves out the containing part is counterproductive and a bad mental model.
This sounds a lot like how all OS processes work (regardless of how the binary is linked).