With quadlets, the only thing required is to drop a `.container` file in the right place and you end up with a container properly supervised by `systemd`. And this of course also supports per-user rootless containers as described in [1].
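For reference, the file really is small. A hypothetical minimal sketch (unit name, image, and port are made up for illustration):

```ini
# /etc/containers/systemd/web.container -- hypothetical minimal quadlet
[Unit]
Description=Web container

[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80

[Install]
WantedBy=multi-user.target
```

For rootless use, the same file goes under `~/.config/containers/systemd/` and `WantedBy=default.target` is the usual target.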
Is coreOS even maintained any more? I wouldn't expect it to be very secure if the most recent VM images were built in ~2020.
Would love another writeup just using Ubuntu or some other bog-standard Linux distro.
Conveniently, RH also invented both Podman and systemd.
Is it? He defines a .network file in that butane config; without it, this won't work. Not really obvious. I'm sure this has a use case and it's nice to have, but personally I'm not convinced. You can switch on user namespaces in the Docker daemon or even run Docker itself rootless. I guess if you're in Red Hat land and use podman anyway it's an alternative, but for instance, where does this thing log to? journalctl --user? Can I use a log shipper like Loki with this? Is there something like `docker compose config` that shows the fully rendered configuration? I personally don't see the point, and it feels overly complicated.
> Currently, Promtail can tail logs from two sources: local log files and the systemd journal (on AMD64 machines only).
Whether it supports user services, I don't know.
[0] https://grafana.com/docs/loki/latest/send-data/promtail/
> Butane (formerly the Fedora CoreOS Config Transpiler, FCCT) translates human readable Butane Configs into machine readable Ignition Configs.
igwhat? Why, WHY?!
My conclusion is that there is absolutely no reason to stop using docker-compose if your developers are comfortable running one command, on one file, in one git root.
Quadlets are basically docker compose, in systemd. They've finally done it, systemd has it all and now it even has docker compose. ;)
That's really all it is in practice. I'm going to continue using it because I'm a RHEL kinda guy, but don't make it up to be magic.
https://man.archlinux.org/man/quadlet.5.en#Kube_units_%5BKub...
I'm naive, whats the difference between doing that and using these "quadlets"?
The approach is quite different from docker-compose and not really a substitute. It makes your individual containers into systemd services in an easier way than creating a unit file that calls `docker run`. But you still have to manually define networks in .network files, and configure all your dependencies in unit file syntax.
If you're very familiar with writing systemd unit files, or really really want to use systemd to manage all container-related objects individually instead of having your container daemon do most of the work with a single compose file per group of related objects, you should consider switching. But in my experience there's little to be gained, a LOT to be lost, and a lot of work to do the switch.
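To make the comparison concrete, here's roughly what the manual network wiring looks like (file and network names are hypothetical; `Subnet=` and the `Network=` reference are documented quadlet keys):

```ini
# app.network -- a quadlet network definition
[Network]
Subnet=10.89.0.0/24
```

```ini
# in a .container file, join the network and order after another unit
[Unit]
After=db.service

[Container]
Image=docker.io/library/myapp:latest
Network=app.network
```

So each object a compose file would declare inline becomes its own file plus explicit unit-file dependencies.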
Systemd has had dependencies between services and containers since forever.
The only difference here seems to be podman instead of systemd-nspawn.
It runs a daemon, it uses a bunch of IPs, it mounts a ton of stuff ...
Is there a reason for all that noise and complexity?
There can't be a reason until I run a container, right? And even then, it seems way too much.
Is it different when using Quadlets?
I'm not familiar enough with the lower level details to know how it works, but it certainly feels less like you are making "a ton of changes to the system" compared to docker
I'd be curious what failed to build under podman. I have been using podman as a replacement for docker for the last 3 years and haven't found any blocker. Sometimes you can't reuse a docker-compose file shared by a third-party project straight away without adaptation, but if you know the differences between docker and podman you can build and run anything that also runs on docker.
Of course, if you run rootless then there's no possibility to do so. :)
Yes! And it has a hard dependency on iptables, which I have removed from all my servers long ago in favor of nftables. Grrrrrr.
Any article out there on how to have windows + wsl2 + podman + vscode devcontainers working?
To make it simple at the point of use. If developers had to configure firewalls and bind mounts for Docker to work, it never would have taken off as much as it did.
Related to Docker, I finally bit down and tried to do a simple deployment stack myself using systemd and OS-level dependencies instead of containers. I'll never go back. The simplicity of the implementation (and maintenance of it) made Docker irrelevant—and something I look at as a liability—for me. There's something remarkably zen about being able to SSH into a box, patch any dependency issues, and whistle on down the road.
No. It's just Docker being shit.
I also do not understand the separation of services into different files. Is it supposed to be more convenient? With docker-compose, the whole application stack is described in one file, before your eyes, within a single yaml hierarchy. With quadlets, it's not.
Lastly, I do not understand the author's emphasis on the AutoUpdate option. Is software supposed to update without administrator supervision? I guess not. What are the rules for matching new versions: does it update to the next semver minor or patch version, does it skip release candidates, etc.?
Yes, proper CI is a thing, and containers not being updated is actually quite a big issue in the current software industry.
Especially combined with custom registries, auto-update is quite a neat thing.
Oh, also it's a SystemD feature to let SystemD manage your containers, so why are you asking if it works without SystemD?
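For what it's worth, podman's auto-update doesn't do semver matching at all: `AutoUpdate=registry` just re-pulls the same image tag and restarts the container if the digest behind it changed, so your version policy is expressed by how specific a tag you pin. A hedged sketch (the image name is hypothetical; the option and timer are documented podman features):

```ini
# in a .container file: opt this container into auto-updates
[Container]
Image=docker.io/library/myapp:1.2
AutoUpdate=registry
```

The updates are driven by a systemd timer that ships with podman: `systemctl enable --now podman-auto-update.timer`, and `podman auto-update --dry-run` previews what would be updated.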
I guess given WordPress's security record, having your site break from time to time is preferable to having your site broken into from time to time.
As for the SystemD dependency: in this respect quadlets cannot even be compared to docker-compose, nor be a replacement for it. Docker-compose has always been independent of the init system, whereas quadlets are strictly tied to SystemD-based distros. E.g. users of Alpine or Gentoo won't be able to replace their compose stacks with quadlets.
Whereas with Docker it's one file, one command and you're done, you don't have to deal with anything else.
You use .container for a single container, .kube for all-in-one pods, .network for networks, and .volume for volumes. It has all the stuff it's just broken down in a more (imho) sysadmin friendly way where all the pieces are independent and can be independently deployed.
This is at best hyperbole and at worst nonsense. Come on, at least put some effort into drawing the line of complexity a little clearer.
docker-compose -f /<dir>/docker-compose.yaml up -d
on boot, and it's only a little irritating to have to use absolute paths for everything instead of relative ones. But after having to update all of my services manually for 3 years, I will never be able to go back.

So I had to add `--userns keep-id` to my container unit, which caused all sorts of problems, apparently because of podman.
So you always end up with the kind of investigation & fiddling that shouldn't be necessary after 10 years of docker & containers.
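For comparison, the compose-on-boot setup described above is usually a small unit like this (paths and unit name are hypothetical; this assumes the standalone docker-compose binary as in the command shown):

```ini
# /etc/systemd/system/mystack.service -- hypothetical unit that brings
# a compose stack up at boot; note the absolute paths
[Unit]
Description=My compose stack
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/docker-compose -f /srv/mystack/docker-compose.yaml up -d
ExecStop=/usr/local/bin/docker-compose -f /srv/mystack/docker-compose.yaml down

[Install]
WantedBy=multi-user.target
```

`Type=oneshot` with `RemainAfterExit=yes` makes systemd consider the stack "active" after `up -d` returns, since the actual containers are supervised by the Docker daemon, not systemd.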
For images intended for rootless deployments e.g. podman, take a look at the onedr0p container images, https://github.com/onedr0p/containers
I have been trusting the plan but I notice that after 10 years of container industry standard etc. we have to search for podman friendly images to enjoy integration with the common Linux service manager...
Now if container-based Linux distributions are the future, I'm starting to wonder whether we're soon going to see Red Hat & co. packaging docker images in RPMs to guarantee things work together & people don't badly mess up the security...
Maybe everything is this easy & good. Maybe this is an /etc/systemd/system/WordPress.quadlet file, part & parcel to everything else in the systemd-verse. But it doesn't say clearly whether it is or isn't. It's an acontextual example.
I think it's powerful tech either way, but so much of the explanation is missing here. It focuses on the strengths, on what is consistent, but isn't discussing the overall picture of how things slot together.
In many ways I think this is the most interesting frontier for systemd. It's supposedly not a monolith, supposedly modular, but so far that has largely meant that components are modular and optional. You don't need to run the pretty fine systemd-resolved, for example. What k8s has done is make the handling of resources modular, and that feels like the broad idea here. But it seems dubious that systemd really has that extensibility built in; it seems likely that podman quadlet is a secondary, entirely unrelated controller, ape-ing what systemd does without integrating at all. It seems likely that's not a podman-quadlet fault: it's likely a broad systemd inflexibility.
Could be wrong here. But the article seems to offer no support that there is any integration, no support that this systemd-alike integrates or extends at all. Quadlets seem to be a parallel and similar-looking tech, with deep parallels, but those parallels, from what I read here, are handcrafted. It's not quadlet that fails here to be great, but systemd not offering actual deep integration options.
[1] https://www.freedesktop.org/software/systemd/man/systemd.gen...
[2] https://blogs.gnome.org/alexl/2021/10/12/quadlet-an-easier-w...
Maybe this helps. Picking a random example container unit...
[root@xoanon ~]# cat /etc/containers/systemd/oxidized.container
[Unit]
Description=Oxidized
[Service]
ExecStartPre=/usr/bin/rm -f /var/local/oxidized/pid
[Container]
Exec=oxidized
Image=docker.io/oxidized/oxidized
User=972
Group=971
NoNewPrivileges=yes
ReadOnly=yes
RunInit=yes
VolatileTmp=yes
Volume=/var/local/oxidized:/var/local/oxidized:rw,Z
PodmanArgs=--cpus=1
PodmanArgs=--memory=256m
Label=io.containers.autoupdate=registry
Environment=OXIDIZED_HOME=/var/local/oxidized
[Service]
Restart=always
[Install]
WantedBy=multi-user.target
After a `systemctl daemon-reload`, an `oxidized` service springs into being.

[root@xoanon ~]# systemctl status oxidized
● oxidized.service - Oxidized
Loaded: loaded (/etc/containers/systemd/oxidized.container; generated)
Active: active (running) since Sat 2023-09-23 09:53:11 UTC; 2 days ago
Process: 221712 ExecStopPost=/usr/bin/rm -f /run/oxidized.cid (code=exited, status=219/CGROUP)
Process: 221711 ExecStopPost=/usr/bin/podman rm -f -i --cidfile=/run/oxidized.cid (code=exited, status=219/CGROUP)
Process: 221713 ExecStartPre=/usr/bin/rm -f /var/local/oxidized/pid (code=exited, status=0/SUCCESS)
Main PID: 221799 (conmon)
Tasks: 8 (limit: 98641)
Memory: 169.0M
CGroup: /system.slice/oxidized.service
├─libpod-payload-b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390
│ ├─221801 /run/podman-init -- oxidized
│ └─221803 puma 3.11.4 (tcp://127.0.0.1:8888) [/]
└─runtime
└─221799 /usr/bin/conmon --api-version 1 -c b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390 -u b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390/userdata -p /run/containers/storage/overlay-containers/b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390/userdata/pidfile -n systemd-oxidized --exit-dir /run/libpod/exits --full-attach -l passthrough --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/containers/storage/overlay-containers/b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390/userdata/oci-log --conmon-pidfile /run/containers/storage/overlay-containers/b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg /usr/bin/crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg b78fd35eeb591012534d267c963cdbb78316fe498c9acf424ea443a7a6ac5390
Sep 26 00:03:05 xoanon oxidized[221803]: I, [2023-09-26T00:03:05.807594 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 01:03:15 xoanon oxidized[221803]: I, [2023-09-26T01:03:15.083603 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 02:03:24 xoanon oxidized[221803]: I, [2023-09-26T02:03:24.414821 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 03:03:33 xoanon oxidized[221803]: I, [2023-09-26T03:03:33.677828 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 04:03:42 xoanon oxidized[221803]: I, [2023-09-26T04:03:42.983589 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 05:03:52 xoanon oxidized[221803]: I, [2023-09-26T05:03:52.297830 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 06:04:01 xoanon oxidized[221803]: I, [2023-09-26T06:04:01.637348 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 07:04:10 xoanon oxidized[221803]: I, [2023-09-26T07:04:10.935352 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 08:04:20 xoanon oxidized[221803]: I, [2023-09-26T08:04:20.199651 #2] INFO -- : Configuration updated for /192.168.89.5
Sep 26 09:04:29 xoanon oxidized[221803]: I, [2023-09-26T09:04:29.553178 #2] INFO -- : Configuration updated for /192.168.89.5
During the daemon-reload, systemd invoked /usr/lib/systemd/system-generators/podman-system-generator, which read the files in /etc/containers/systemd and synthesized a systemd service for each of them, which it dropped into /run/systemd/generator, one of the directories from which systemd loads unit files.

Far from being a parallel service control mechanism (à la Docker), this is proper separation of concerns: the service is a first-class systemd service like any other; the payload of the service is the podman command that runs the container. We can introspect this a bit to examine the systemd unit that was generated:
[root@xoanon ~]# systemctl cat oxidized
# /run/systemd/generator/oxidized.service
# Automatically generated by /usr/lib/systemd/system-generators/podman-system-generator
#
[Unit]
Description=Oxidized
SourcePath=/etc/containers/systemd/oxidized.container
RequiresMountsFor=%t/containers
RequiresMountsFor=/var/local/oxidized
[Service]
ExecStartPre=/usr/bin/rm -f /var/local/oxidized/pid
Restart=always
Environment=PODMAN_SYSTEMD_UNIT=%n
KillMode=mixed
ExecStopPost=-/usr/bin/podman rm -f -i --cidfile=%t/%N.cid
ExecStopPost=-rm -f %t/%N.cid
Delegate=yes
Type=notify
NotifyAccess=all
SyslogIdentifier=%N
ExecStart=/usr/bin/podman run --name=systemd-%N --cidfile=%t/%N.cid --replace --rm --log-driver passthrough --runtime /usr/bin/crun --cgroups=split --init --sdnotify=conmon -d --security-opt=no-new-privileges --read-only --user 972:971 -v /var/local/oxidized:/var/local/oxidized:rw,Z --env OXIDIZED_HOME=/var/local/oxidized --label io.containers.autoupdate=registry --cpus=1 --memory=256m docker.io/oxidized/oxidized oxidized
[X-Container]
Exec=oxidized
Image=docker.io/oxidized/oxidized
User=972
Group=971
NoNewPrivileges=yes
ReadOnly=yes
RunInit=yes
VolatileTmp=yes
Volume=/var/local/oxidized:/var/local/oxidized:rw,Z
PodmanArgs=--cpus=1
PodmanArgs=--memory=256m
Label=io.containers.autoupdate=registry
Environment=OXIDIZED_HOME=/var/local/oxidized
[Install]
WantedBy=multi-user.target
No deep magic, just the pleasant feeling you get when you see layered systems interacting together without cross-cutting.

You can learn more about the systemd generator extension mechanism at: https://www.freedesktop.org/software/systemd/man/systemd.gen...
Either this must be some systemd weirdness that I thankfully haven't had to deal with until now, or I'm misunderstanding something.
Did I understand correctly you don't specify which services you need but rather which ones depend on your service? So if your service doesn't start you'll need to check the configuration files of all other services to figure out which dependency is preventing it from starting?
[Install]
WantedBy=multi-user.target
This is the mechanism by which one unit can ask to be added to the Wants= of another when it is installed. I.e., when you run 'systemctl enable whatever.service', it will be symlinked into '/etc/systemd/system/multi-user.target.wants'. And 'systemctl show multi-user.target' will show 'whatever.service' in its Wants property.
https://www.freedesktop.org/software/systemd/man/systemd.uni...
During bootup of a headless system, the 'default' target is usually multi-user.target, so what we've done here is ensure that whatever.service will be started before the machine finishes booting.
https://www.freedesktop.org/software/systemd/man/bootup.html
One of its main design goals is fast system startup; to do that, it needs to know the dependency ordering of all services.
I don't know why it's being used in that way for these containers. It'd be easier to just add a Wants line on Caddy.
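The suggested "just add a Wants line on Caddy" can be done without touching the packaged unit by using a drop-in (service names hypothetical; the drop-in directory convention is standard systemd):

```ini
# /etc/systemd/system/caddy.service.d/deps.conf -- hypothetical drop-in
# making caddy pull in and start after a container unit
[Unit]
Wants=myapp.service
After=myapp.service
```

After `systemctl daemon-reload`, starting caddy will also start myapp, with ordering enforced, and no other unit's file needs editing.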
You can run your docker-compose file, then run "podman generate kube" and you will get a Kubernetes yaml file.
Then you can run:
$ escaped=$(systemd-escape ~/guestbook.yaml)
$ systemctl --user start podman-kube@$escaped.service
And you can enable it to start on boot. It will read the yaml file and create the pod.
Nothing can kube better than the kubelet! The new sidecar support for init containers can be used today. Every other abstraction is playing catch up
But this only makes me like docker even more :)
I used a very similar approach in the (now EOL'ed, gonna be replaced by full K8s) infra at $DAYJOB. My main reason to stick with docker-compose is that developers are familiar with it and can easily patch/modify the thing themselves. Replacing it with something systemd-based adds a dependency on people who know systemd (which are not usually application developers in your average HTTP API shop).
Classical definition: > A word consisting of four bytes
Quad implies 4 of something or other. What does it refer to here? 4 sections of configuration?
> The original Quadlet repository describes Quadlet this way:
>> What do you get if you squash a Kubernetes kubelet? A quadlet
So it's based on reinterpreting the root "kuber-", which ultimately means something to do with "turn", as "cube", and then metaphorically reducing a cube to a square.
- Node starts up with a container daemonized
- Automated updates of the registry image
- A nice single arg to pass to your cloud provider CLI userdata that launches the OS + container (vultr-cli in this case)
I guess you can do something similar with any Linux userdata, but the script won't be as clean. Has anyone built something lite to launch docker containers on boot in Ubuntu (this is particularly helpful for cloud provider CLIs) without writing the whole script manually? Something nicer than `apt update && apt install -y docker.io && docker run -d --restart=always ...` that includes the registry auto-update.
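One stock answer is cloud-init, which most provider CLIs accept as userdata. A hedged sketch (image name hypothetical; this covers install-and-run at boot, but the registry auto-update part still needs something extra, e.g. watchtower or a cron'd pull):

```yaml
#cloud-config
# hypothetical userdata: install docker and launch a container at first boot
packages:
  - docker.io
runcmd:
  - systemctl enable --now docker
  - docker run -d --restart=always --name myapp docker.io/library/myapp:latest
```

This is declarative enough to keep in the repo next to the provisioning command, which gets you most of the "one clean arg to the CLI" goal.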
Quadlets seem not exactly a replacement for the function docker-compose was supposed to perform though, or am I wrong? It seems like the target audience is administrators who want to run containers as systemd services (a questionable choice, but probably there are people who want that)...
What am I missing?
Using a recent HN submission to spin up a local vector stack for analyzing notes, https://github.com/memgraph/odin/blob/main/docker-compose.ym...
How would you suggest a bash script handle configuring all the different images, their ports, and ensuring services are spun up in the correct dependency graph (parallel where possible), and are exposed to each other as a reliable host name over a subnet without polluting the host network?
And then how is that bash script extendable so it's not a custom script every time?
So, how would I go about that in Shell? I don't see a problem. Can you point to a specific problem? All these settings in your example easily translate into docker commands.
Basically the appeal is "one command > everything running" when you have multiple services working together, which sure, you could do with imperative shellscripting, but for people who don't spend their daily time writing shellscripts, something declarative like YAML is usually easier to get started with, especially when you only revisit it sometimes.
People generally have Python either installed by default in their OS, or already installed because of some other system. I wonder how many developers don't have Python on their systems at all?
But I can accomplish this with Shell script... And no need to deal with Python, its broken dependency management, poor piping / I/O in general, bugs in the docker-compose itself... What do I win by having to suffer all these problems?
> who don't spend their daily time writing shellscripts,
Do you write docker-compose scripts daily? Seriously? Why? My impression was that you write that once and edit very infrequently (like maybe once a month or less). So, in terms of time investment it doesn't seem to make much of a difference. Also, I see no value in imperative vs declarative approaches here. It's actually hard to understand what is going to happen when you use declarative style because you need to rely on and have a very deep knowledge of the imperative aspect of the system interpreting your declarations to be confident of the end result.
> Python people generally have either installed by default in their OS,
Being one of "Python people" I have Python 3.7 thru 3.12 built from respective heads of cPython project installed on my work laptop. I would hate to have to add more to support a tool with dubious (or as is my case extraneous) functionality.
Also, being one of those "Python people" who deals with infra, I had to import docker-compose code into my code and deal with it as a dependency, both with its CLI and its modules. And... it's not good code. Well, like the vast majority of Python code, it's a piece of garbage. A particular feature of docker-compose that stands out is that it was written by "Go people" with poor command of Python, so it's a "Go written in Python" kind of program.
Also, people who write docker-compose don't care about how it interacts with other packages, and this shows in how they define their dependencies (very selective versions of ubiquitous libraries, eg. requests) that will almost certainly not play well with other libraries you'd use in this context (eg. boto3). I had plenty of headache trying to use this tool and had so far sworn never to use it in my own infra projects because of its dependency issues. If I ever use it (as in to deal with someone else's problems) I install it in its own environment. Which is yet another problem with its use because then you'd have to switch environments just to call it, and then you forget to switch back and things start behaving weirdly.
Among other things.
The way I do it is by starting podman's systemd service which exposes a docker compatible API over a Unix socket.
Then export DOCKER_HOST pointing to the socket and you can run docker-compose with podman transparently.
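That setup in commands (the socket path shown is the standard rootless one):

```shell
# expose podman's Docker-compatible API on the per-user socket
systemctl --user enable --now podman.socket

# point compose (or any docker client) at it
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker-compose up -d
```

For the rootful daemon-style socket, the system-level `podman.socket` unit exposes /run/podman/podman.sock instead.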
There are also some issues with keys all being strings by default, which makes things like linting harder and allows multiple non-standard ways to get something done. E.g. in JSON, true is `true`; in INI files, any of `true`, `1`, `yes`, `y` and others might be true depending on the application (though YAML is arguably even worse on this front).
Also, there is no clear single standard for INI files.
Though this is where TOML comes from: it took the general layout ideas behind INI files but gave them a strict standard, strict string vs. bool vs. float types, a bit more nesting capability, fixed some string escaping issues, etc.
Also, YAML supports data structures that INI files don't.
I don't understand the hate that YAML gets. If you're not tasked with writing a parser, the data format just works as expected.
https://noyaml.com/ is a decent overview of exactly how shit it is.
But outside of that, using whitespace for structure is extremely error-prone once you go past 10-15 lines and 2-3 levels of nesting.
NI: Nicaragua
NL: Netherlands
NO: Norway

or:

python: 3.2.3
numpy: 2.1

or:

octal: 042

Tell me those are well defined or not confusing.

Container orchestration always seemed like a natural progression, as systemd already manages cgroups for other services.
Unified plain text format.
For simple projects, it’s hard to beat
The basic usage of podman quadlets is putting an `app.container` in `/etc/containers/systemd/` containing something like the first snippet and then starting the unit. For someone familiar with systemd, this seems very very nice to work with.
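Spelled out as commands, the workflow described above is roughly (unit name `app` hypothetical):

```shell
# drop the quadlet where podman's systemd generator looks for it
cp app.container /etc/containers/systemd/

# the generator runs during daemon-reload and synthesizes app.service
systemctl daemon-reload
systemctl start app.service
systemctl status app.service
```

Note there is no `systemctl enable` step for generated units; they are started at boot according to the [Install] section in the quadlet file itself.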
The reliance on systemd is an issue on its own. Much has been said about its intrusion in all aspects of Linux, and I still prefer using distros without it. How can I use this on, say, Void Linux? Standalone Podman does work there, but I'm not familiar if there were some hacks needed to make it work with runit, and if more would be needed for this quadlet feature.
Just call things what they are.
To setup/tear down software dev environments deployed locally, the root/rootless discussion isn't really relevant. Ease of deployment and ease of use are critical though, and Docker is above all a development experience victory.
Docker compose is part of docker now, it's just another subcommand.
Kubelets in K8s also have a similar duplication of supervisor functionality. Perhaps this is a good place to mention the Aurae runtime [1], which is designed to replace systemd, the docker daemon, the kubelet, etc. on dedicated worker nodes. Sadly, its chief designer Kris Nova passed away recently in an accident. I wish the rest of the team the strength to carry her legacy forward.
However, I agree with your sentiment. It's basically a part of any modern Docker installation now. Calling it an external dependency "like watchtower" is not a fair comparison.
If you want to say that there are machines that don't have systemd which therefore cannot use quadlets, that's like arguing that something made for Linux is useless because "Linux is not present on most machines".