For anyone wondering, the main difference between this and docker/docker-compose is that podman can run in a daemonless mode, such that containers run directly under systemd, which lets them integrate into the existing systemd infrastructure and appear like any other normal service.
Make that a mount unit in systemd (instead of a line in /etc/fstab) and now you can accurately declare your service's requirement/dependency on this filesystem.
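A minimal sketch of that pattern (the device label and mount point here are made up): the mount unit's filename has to encode the mount path, and the consuming service declares the dependency with `RequiresMountsFor=`.

```ini
# /etc/systemd/system/srv-data.mount
# (name must match the path: /srv/data -> srv-data.mount)
[Unit]
Description=Data volume for myservice

[Mount]
What=/dev/disk/by-label/data
Where=/srv/data
Type=ext4

[Install]
WantedBy=multi-user.target
```

Then, in the hypothetical `myservice.service`, `RequiresMountsFor=/srv/data` in the `[Unit]` section makes systemd order the service after the mount and fail it if the mount fails.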
I know systemd gets flak for overreach 'as an init system', but there's a reason: initialization doesn't happen in a vacuum.
Services need filesystems, networks, etc. to function.
Anyway, I think that Podman is the more mature Docker and fits much better into the Linux/Unix way of doing things. In particular, being daemonless allows it to integrate with systemd and friends the way it should, and makes for a mature integration of containers into the ecosystem.
Docker?
At the same time, most Linux systems already come with a pretty fancy process supervisor. Personally, I think writing systemd units from scratch is already pretty easy. But it makes sense that Linux software which often integrates with (essentially) process supervisors would want painless integration with systemd!
Also, in some ways I think this is simpler. Anyone who has used a reasonably modern Linux likely has some systemd experience. For local testing and 'orchestration', why rely on some additional one-off layer like docker-compose when the operating system's built-in process supervisor has all of the facilities you need?
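To make that concrete, here is a hand-written sketch of supervising a container with nothing but a plain service unit (the container name and image are illustrative; `podman generate systemd` produces more robust variants of the same idea):

```ini
# /etc/systemd/system/web.service -- illustrative sketch
[Unit]
Description=Web app container
Wants=network-online.target
After=network-online.target

[Service]
# Clean up any stale container from a previous run ("-" ignores failure)
ExecStartPre=-/usr/bin/podman rm -f web
ExecStart=/usr/bin/podman run --name web --rm -p 8080:80 docker.io/library/nginx:alpine
ExecStop=/usr/bin/podman stop web
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With that in place, the usual `systemctl start/stop/status web` and `journalctl -u web` all work, with no extra orchestration layer.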
Not really. Being simpler would be managing containers directly with systemd, as with systemd-nspawn. Why do I need to use a container manager in systemd for something systemd can already do directly? This integration with Podman is Red Hat's way of promoting the tool to stay relevant, but it's not actually simpler.
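For reference, systemd-nspawn machines are configured with plain `.nspawn` files, no container manager involved (the machine name and bind path below are made up):

```ini
# /etc/systemd/nspawn/mymachine.nspawn
# Applies to the OS tree in /var/lib/machines/mymachine
[Exec]
Boot=yes

[Files]
Bind=/srv/data:/data

[Network]
VirtualEthernet=yes
```

`machinectl start mymachine` then boots it directly under systemd-machined.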
There is a little more to container orchestration runtimes. I'd say at this point they are akin to a badly designed, distributed linker (I'm saying this as someone who did not fully buy into this stuff, but I see that it solves some problems).
I've done this for a while on small or disconnected systems; systemd + podman is very nice, and the regular unit file generators (`podman generate systemd`) are very usable and modifiable.
From the development side, the issue is that unit files must be "installed"; I can't just have a set of `x.service`/`y.service` files and run `systemctl start $(pwd)/x.service`, so the overhead is a bit awkward there.
`podman play kube` is sort of there, except it doesn't support some networking options in the kube YAML file, so it's not a complete replacement.
Podman also includes support for running kube YAML files via systemd, but I don't use that myself.
I think ideally kube YAML files with some extra podman annotations are the compose replacement, even if writing them isn't as pleasant as compose files. Then you could `podman play kube x.yaml` to boot the dev stack and use the podman-kube@ systemd template to deploy.
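A minimal example of such a file (pod name and image are illustrative):

```yaml
# stack.yaml -- a bare-bones Pod spec that podman can play
apiVersion: v1
kind: Pod
metadata:
  name: devstack
spec:
  containers:
  - name: web
    image: docker.io/library/nginx:alpine
    ports:
    - containerPort: 80
      hostPort: 8080
```

For local work, `podman play kube stack.yaml` boots it; recent podman releases also ship a `podman-kube@.service` template, so (assuming that template is present on your system) something like `systemctl --user start podman-kube@$(systemd-escape ~/stack.yaml).service` runs the same file under systemd supervision.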
People have been telling me this for years now and I have yet to see a working example.
I don't understand why they made it so complicated, if you have a file format just let the user run it from their CWD.
Seems to be part of the idea. However, I personally have a bit of a hard time imagining this for the average developer. Maybe it will have the nice side effect of making me dig further into systemd. That said, most of the compose stuff I used had to do with networks and mounts; I wonder how to declare those in a systemd manner.
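For what it's worth, quadlet covers exactly the network-and-mounts case with dedicated keys in a `.container` file (paths and names below are illustrative; `app.network` would be a separate quadlet `.network` file):

```ini
# ~/.config/containers/systemd/app.container
# quadlet generates app.service from this at daemon-reload
[Unit]
Description=App container

[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80
Volume=/srv/app/data:/data:Z
Network=app.network

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, it starts like any other unit: `systemctl --user start app.service`.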
I like the self-healing aspects of Kubernetes, but even something like k0s has a large, 1GB footprint that I don't want to have for my self-hosted personal projects.
Using podman and quadlet looks like it solves exactly what I want -- just enough kubernetes on a very small footprint.
This is not a replacement for docker-compose. I've never found a good use for docker-compose in infra because it lacks self-healing, so it stayed in the dev stack. If I were more proficient with Nix, I'd probably use that instead of docker-compose.
I’m going to try this tomorrow, because containers are so useful, but I just don’t want to deal with K8s on anything that I run myself.
For my own servers I use an internal tool that integrates apps with systemd. You point it at the output of your build system and a config file, and it produces a deb that contains systemd unit files and which registers/starts the server on install/reboot/upgrade, as a regular debian package would. Then it uploads it to the server via sftp and installs it using apt, so dependencies are resolved. As part of the build process it can download and bundle language runtimes (I use it with a JVM), it scans native binaries to find packages that the app should depend on, and you can define your config including package metadata like dependencies and systemd units using the HOCON language [1].
Upshot is you can go from native binaries/Gradle/Maven to a running server with a few lines of config. Oh and it can build debs from any OS, so you can push from macOS and Windows too. If your server needs to depend on e.g. Postgres, you just add that dependency in your config and it'll be up and running after the push.
It also has features to turn on DynamicUser and other sandboxing features. I think I'll experiment with socket activation next, and then bundled BorgBackup.
Net/net it's pretty nice. I haven't tried it with containers because many language ecosystems don't seem to really need them for many use cases. If your build tool knows how to download your language runtime and bundle it sans container just by setting up paths correctly, then going without means you can rely on your Linux distribution to keep things up to date with security patches in the background, and networking works as you'd expect (no accidentally opened firewall ports!). systemd knows how to configure resource isolation/cgroups and kernel sandboxing, so if you need those you can just write that into your build config and it's done. Or not, as you wish.
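The sandboxing and resource-limit directives mentioned above are plain `[Service]` settings; a sketch (the limits chosen here are arbitrary examples):

```ini
# drop-in sketch for a service unit: sandboxing + resource limits
[Service]
# Run as a transient, unprivileged user allocated at start
DynamicUser=yes
# Mount most of the filesystem read-only for this service
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
NoNewPrivileges=yes
# Writable state directory under /var/lib, owned by the dynamic user
StateDirectory=myapp
# cgroup resource limits
MemoryMax=512M
CPUQuota=50%
```

`systemd-analyze security myapp.service` is a handy way to see how much of this hardening a given unit actually uses.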
With a deployment tool to automate builds/pushes, systemd to supervise processes and a big beefy dedicated machine to let you scale up, I wonder how much value the container part is really still providing if you don't need the full functionality of Kubernetes.
[1]: https://github.com/containers/ansible-podman-collections
Either way, it's indeed quite tempting to use quadlet instead of the nasty templates that build the podman command line.
I also want to check if quadlet supports override files like systemd's, because that would be quite interesting as a customization mechanism that does not require forking the playbooks.
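Whether quadlet source files honor `.d` drop-ins may depend on the podman version, but the service units quadlet generates can be customized with the standard systemd override mechanism either way (unit name below is hypothetical):

```ini
# /etc/systemd/system/app.service.d/override.conf
# typically created with: systemctl edit app.service
[Service]
Environment=LOG_LEVEL=debug
MemoryMax=256M
```

Settings in the drop-in are merged over the generated unit at daemon-reload, so nothing upstream has to be forked.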
[1] https://github.com/patchew-project/patchew/blob/master/scrip...
[2] https://github.com/patchew-project/patchew/blob/master/scrip...
In fact, I'd prefer if the tools you mentioned used something else besides yaml.
Unfortunately, there's Norway that's going to happen.
To be fair, most of the popular DevOps tools can work with JSON instead of YAML just fine. And JSON can be easily generated from almost anywhere. I don't think you can work with systemd syntax as easily.
Yes, it's inspired by INI files.
also, fwiw, YAML is a data serialization format, not a configuration format. people who use YAML and pretend it's a config file format are either lazy, incompetent, or both.
RH being RH, only RHEL (and derivatives) supports the latest podman. For example, on Ubuntu LTS you cannot run podman 4.4, and you will never have the possibility to run it. Maybe in 5 years the Ubuntu/Debian repos will be updated to contain podman 4.4, but until then you are stuck with whatever version your distro has.
The Red Hat folks develop software for Red Hat. The software will run fine on any other distro with up-to-date kernel and systemd versions, but there's no guarantee that it does, because it's not Red Hat's business to work on the OS of their competitors.
If Debian and Ubuntu are too slow to update, that's completely out of Red Hat's control. They chose to pin an older version of a piece of software developed on a much more rolling release schedule, so it's up to them to fix the incompatibilities their choice introduced. The whole point of an LTS is that you use one older version for several years.
I expect Podman 4.4 to be available in Ubuntu 23.10, as 23.04 is a bit close (current repos list 3.4.4, the version used in 22.04 and 22.10). If Ubuntu can't move fast enough to include it in 23.10, then that's Ubuntu's fault more than anything. You should also consider that Canonical sells their own competing container ecosystem (Charmed/microk8s) to businesses, so not supporting their competitors' software may be intentional.
If you want Podman 4.4 but don't want to use Redhat distributions, Arch and derivatives already have it ready to go. You'll also get much more recent versions of the Linux kernel and systemd as a nice bonus.
(Oh, and also, you mean that it is a community package - meaning unsupported.)
Red Hat developers primarily work upstream. There are also Red Hat engineers who work on packaging for Fedora, RHEL and CentOS Stream, as well as clients for Windows and Mac. We work with Fedora to provide CoreOS images for Windows and Mac.
Red Hat engineers work with the community to support other distributions, but they don't guarantee support for all other distributions or all versions of those distributions.
LTS doesn't only mean long term stability - long term suck applies, too.
The only thing preventing podman from working is the age of their source, which is a deliberate choice -- LTS
Can you elaborate on why such a categorical statement is true?
What about https://mpr.makedeb.org/packages/podman ?
Also the Kubic repo is old.
I don't know what makedeb is, but of course anyone can make .deb packaging for anything; that does not mean it is supported in any way (not to mention that if a package has several other package dependencies, those also have to be packaged carefully).
Also see: https://github.com/containers/podman/discussions/17362 https://github.com/containers/podman/issues/14065 https://github.com/containers/podman/discussions/13097
Although of course it won't be integrated into journalctl.