Docker's not a package manager. It doesn't know what packages are, which is part of why the chunks that make up Docker containers (image layers) are so coarse. It's also part of why many Docker images are so huge: you don't know exactly which packages you need, strictly speaking, so you start from a whole OS. And it's why your Dockerfiles all invoke real package managers: Docker can't know how to install packages if it doesn't know what they are!
It's also not cross-platform, or at least 99.999% of images you might care about aren't— they're Linux-only.
It's also not a service manager, unless you mean docker-compose (which is not as good as systemd or any number of other process supervisors) or Docker Swarm (which has lost out to Kubernetes). (I'm not sure what you even mean by 'init system for containers' since most containers don't include an init system.)
There actually are cross-platform package managers out there, too. Nix, Pkgsrc, Homebrew, etc. All of those I mentioned and more have rolling release repositories as well. ('Rolling release' is not a feature of package managers; there is no such thing as a 'rolling release package manager'.)
Nope! It’s not wrong in any way at all!
You’re thinking of how it’s built. I’m thinking of what it does (for me).
I tell it a package (image) to fetch, optionally at a version. It has a very large set of well maintained up-to-date packages (images). It’s built-in, I don’t even have to configure that part, though I can have it use other sources for packages if I want to. It fetches the package. If I want it to update it, I can have it do that too. Or uninstall it. Or roll back the version. I am 100% for-sure using it as a package manager, and it does that job well.
Then I run a service with a simple shell script (actually, I combine the fetching and running, but I’m highlighting the two separate roles it performs for me). It takes care of managing the process (the container, which for these purposes is really just a very fat process). It restarts it if it crashes, if I like. It auto-starts it when the machine reboots: all my services come back up on boot, and I’ve never touched systemd (which my Debian uses); Docker is my interface to that, and I didn’t even have to configure it to do that part. I’m sure it’s doing systemd stuff under the hood, at least to bring the Docker daemon up, but I’ve never touched that, and it’s not my interface to managing my services. The docker command is. Do I see what’s running with systemctl or ps? No, with docker. Start, restart, or stop a service? Docker.
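My scripts look roughly like this. (A dry-run sketch: the image name, tag, and service name are made-up placeholders, and DRY_RUN=echo just prints the commands instead of executing them. Set DRY_RUN= empty to actually run it.)

```shell
#!/bin/sh
# Dry-run sketch of one per-service script. All names are hypothetical.
DRY_RUN=echo
IMAGE="ghcr.io/example/someservice:1.2.3"  # the "package", at a pinned version
NAME="someservice"

# "Package manager" half: fetch (or update to) exactly that version.
pull_cmd="$($DRY_RUN docker pull "$IMAGE")"

# "Service manager" half: run detached; --restart unless-stopped makes
# the Docker daemon restart it on crash and bring it back up on reboot.
run_cmd="$($DRY_RUN docker run -d --name "$NAME" --restart unless-stopped "$IMAGE")"

printf '%s\n%s\n' "$pull_cmd" "$run_cmd"
```

Rolling back is the same script with the tag changed; uninstalling is `docker rm -f` plus `docker rmi`.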
I’ve been running hobbyist servers at home (and setting up and administrating “real” ones for work) since 2000 or so and this is the smoothest way to do it that I’ve seen, at least for the hobbyist side. Very nearly the only roles I’m using Docker to fill, in this scenario, are package manager and service manager.
I don’t care how it works—I know how, but the details don’t matter for my use case, just the outcomes. The outcome is that I have excellent, updated, official packages for way more services than are in the Debian repos, that leave my base system entirely alone and don’t meaningfully interact with it, with config that’s highly portable to any other distro, all managed with a common interface that would also be the same on any other distro. I don’t have to give any shits about my distro, no “oh if I want to run this I have to update the whole damn distro to a new major version or else manually install some newer libraries and hope that doesn’t break anything”, I just run packages (images) from Docker, update them with Docker, and run them with Docker. Docker is my UI for everything that matters except ZFS pool management.
> It's also not cross-platform, or at least 99.999% of images you might care about aren't— they're Linux-only.
I specifically wrote cross-distro for this reason.
> There actually are cross-platform package managers out there, too. Nix, Pkgsrc, Homebrew, etc.
Docker “packages” have a broader selection and better support than any of those, as far as services/daemons go; it’s guaranteed to keep everything away from the base system and tidy for better stability; and it provides a common interface for configuring where to put files & config for easier and more-confident backup.
I definitely use it mainly as a package manager and service manager, and find it better than any alternative for that role.
I've read your reply and I hear you (now). But as far as I'm concerned, package management is a little more than that. Not everything that installs or uninstalls software is a package manager; for instance, I would say that winget and Chocolatey are hardly package managers, despite their pretensions (scoop is closer). I think of package management, as an approach to managing software and as a technology, as generally characterized by things like and including:

- dependency tracking
- completeness (packages' dependencies are themselves all packaged, recursively, all the way down)
- totality (installing software by any means other than the package manager is not required to have a practically useful system)
- minimal redundancy of dependencies common to multiple packages
- collective aggregation and curation of packages
- transparency (the unit the software management tool operates on, the package, tracks the versions of the software contained in it and the versions of the software contained in its dependencies)
- exclusivity (packaged software does not self-update; updates all come through the package manager)

Many of these things come in degrees, and many package managers do not have all of them to the highest degree possible. But the way Docker gets software running on your system just isn't meaningfully aligned with that paradigm, and this also impacts the way Docker can be used. I won't enumerate Docker's deviations from this archetype, because it sounds like you already have plenty of relevant experience and knowledge.
> I don’t care how it works—I know how, but the details don’t matter for my use case, just the outcomes.
When there's a vuln in your libc or some similar common dependency, Docker can't tell you which of your images contain it, because it has no idea what glibc or liblzma are. The whole practice of generating SBOMs is about trying to recover or regenerate data that is already easily accessible in any competent package manager (and indeed, the tools that generate SBOMs for container images depend on actual package managers' databases to get that data, which is why their support comes distro-by-distro).
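To make that concrete (a sketch assuming anchore's syft CLI, which is my example, not something from your setup; the image name is a placeholder): syft works precisely by finding the distro package manager's database inside the image and reading the package list back out of it.

```shell
#!/bin/sh
# Sketch: recover an image's package list with syft, if it's installed.
# syft reads the dpkg/apk/rpm database baked into the image -- data the
# distro's package manager recorded, and that Docker itself never tracks.
if command -v syft >/dev/null 2>&1; then
  sbom="$(syft debian:stable -o syft-json)"   # placeholder image
else
  sbom="syft not installed; see github.com/anchore/syft"
fi
printf '%s\n' "$sbom"
```

Docker on its own has no answer to the dpkg question "which package owns this file?", because it never had that information in the first place.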
Managing Docker containers is also complicated in some ways that managing conventional packages (even in other containerized formats like Flatpak, Snap, and AppImage) isn't, in that you have to worry about bind mounts and port forwarding. How the software works leads to a radically different sort of practice. (Admittedly maybe that's still a bit distant from very broad outcomes like 'I have postgres running'.)
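Even a single service drags in both of those knobs (another dry-run sketch; the app name, port, and paths are hypothetical):

```shell
#!/bin/sh
# Dry-run sketch (DRY_RUN=echo prints instead of executing).
# -p forwards a host port to the container's port; -v bind-mounts a host
# directory into the container so state outlives it. Neither concern
# exists when installing the same daemon as a conventional package.
DRY_RUN=echo
cmd="$($DRY_RUN docker run -d --name someapp \
  -p 8080:80 \
  -v /srv/someapp/data:/var/lib/someapp \
  example/someapp:latest)"
printf '%s\n' "$cmd"
```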
> The outcome is that I have [many services] that leave my base system entirely alone and don’t meaningfully interact with it, with config that’s highly portable to any other distro, all managed with a common interface that would also be the same on any other distro.
This is indeed a great outcome. But when you achieve it with Docker, the practice by means of which you've achieved it is not really a package management discipline but something else. And that is (sadly, to me) part of the appeal, right? Package management can be a really miserable paradigm when your packages all live in a single, shared global namespace (the paths on your filesystem, starting with /). Docker broke with that paradigm specifically to address that pain.
But that's not the end of the story! That same excellent outcome is also achievable by better package managers than ol' yum/dnf and apt! And when you go that route, you also get back the benefits of the old regime like the ability to tell what's on your system and easily patch small pieces of it once-and-for-all. Nix and Guix are great for this and work in all the same scenarios, and can also readily generate containers from arbitrary packages for those times you need the resource management aspect of containers.
> The outcome is that I have [...] official packages
For me, this is not a benefit. I think the curation, integration, vetting, and patching that coordinated software distributions do is extremely valuable, and I expect the average software developer to be much worse at packaging and systems administration tasks than the average contributor to a Linux distro is. To me, this feels like a step backwards into chaos, like apps self-updating or something like that. It makes me think of all the absolutely insane things I've seen Java developers do with Maven and Gradle, or entire communities of hobbyists who depend on software whose build process is so brittle and undocumented that seemingly no one knows how to build it and Docker has become the sole supported distribution mechanism.
> I specifically wrote cross-distro for this reason.
My bad! Although that actually widens the field of contenders to include Guix, which is excellent, and arguably also Flatpak, which still aligns fairly well with package management as an approach despite being container-based.
> Docker “packages” have a broader selection and better support than any of those, as far as services/daemons go
I suppose this is an advantage of a decentralized authority-to-publish, like we also see in the AUR or many language-specific package repositories, and also of freedom from the burden of integration, since all docker image authors have to do is put together any runtime at all that runs. :-\
> service manager
Ok. So you're just having dockerd autostart your containers, then, no docker-compose or Docker Swarm or some other layer on top? Does that even have a notion of dependencies between services? That feels like table stakes for me for 'good service manager'.
PS: thanks for giving a charitable and productive reply to a comment where I was way gratuitously combative about a pet peeve/hobby horse of mine for no good reason
Like, I’m damn near running Docker/Linux, in the pattern of GNU/Linux or (what started as a bit of a joke, but is less so with each year) systemd/Linux, as far as the key parts that I interact with and care about, the parts that complete the OS for me.
As a result, some docker alternatives aren’t alternatives for me—I want the consistent, fairly complete UI for the things I use it for, and the huge library of images, largely official. I can’t just use raw lxc or light VMs instead, as that gets me almost nothing of what I’m currently benefiting from.
I haven’t run into a need to have dependent services (I take the SQLite option for anything that has it, since that makes backups trivial), but I’d probably whip up a little docker-compose file if I ever need that. In work contexts I usually just go straight for docker-compose, but with seven or eight independent services on my home server, I’ve found I prefer tiny shell scripts for each one.
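If that day comes, something like this minimal compose file would do it (service names and images are placeholders; note that depends_on only orders startup, and actual readiness would need a healthcheck plus condition: service_healthy):

```shell
#!/bin/sh
# Write a minimal hypothetical compose file to a temp path and show it.
f="$(mktemp)"
cat > "$f" <<'EOF'
services:
  db:
    image: postgres:16
    restart: unless-stopped
  app:
    image: example/app:latest
    restart: unless-stopped
    depends_on:
      - db
EOF
cat "$f"   # then: docker compose -f "$f" up -d
```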
[edit] oh, and I get what you mean about it not really doing things like solving software dependencies—it’s clearly not suitable as, like, a system package manager, but fills the role well enough for me when it comes to the high-level “packages” I’m intending to use directly.