Before judging how much memory journald is consuming, you have to subtract its shared memory use from its resident memory use. The file-backed shared mappings are reclaimable: the kernel will evict the mapped journal pages at will, since they can always be faulted back in from the filesystem.
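The arithmetic is easy to sketch. A modern Linux kernel breaks a process's RSS down in /proc/&lt;pid&gt;/status into anonymous, file-backed, and shmem portions; the sample snippet below is made up for illustration, but the field names are the real ones:

```python
# Rough sketch of the measurement the comment describes: judge a process
# by its anonymous (non-reclaimable) RSS, not its total RSS. The sample
# text mimics the RSS fields from /proc/<pid>/status; the numbers are
# invented for illustration.
sample_status = """\
VmRSS:    120000 kB
RssAnon:   20000 kB
RssFile:   98000 kB
RssShmem:   2000 kB
"""

def rss_breakdown(status_text):
    """Return the RSS-related fields as a dict of kB values."""
    fields = {}
    for line in status_text.splitlines():
        key, _, rest = line.partition(":")
        if key in ("VmRSS", "RssAnon", "RssFile", "RssShmem"):
            fields[key] = int(rest.split()[0])
    return fields

f = rss_breakdown(sample_status)
# File-backed pages (RssFile) are reclaimable: the kernel can drop them
# and fault them back in from disk later. What the process "really"
# holds onto is much closer to its anonymous RSS.
anon = f["VmRSS"] - f["RssFile"] - f["RssShmem"]
print(anon)  # 20000 kB, matching RssAnon
```

In this (invented) example a naive look at VmRSS says 120 MB, while the unreclaimable share is only 20 MB.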
TFA is much ado about nothing; learn to measure memory use properly before breaking out the pitchforks.
Full disclosure: I've hacked a bunch on journald upstream.
You can run quite a lot in 512MB of RAM if you write your code in the right language. I was surprised by how little RAM my moderately complex daemon written in Rust uses, for example; I expected to have to allocate a gigabyte of RAM to the VM running it (based on what similar tools needed), but the entire system turned out to be quite comfortable with just a quarter of that. I didn't even try to optimise for memory usage, which is what made it so surprising. I still had to give it some more RAM because unattended upgrades tended to get stuck, but I learned a lesson that day.
Ever since, I've been meaning to mess with Firecracker plus bare-bones daemons to run services in virtual machines with absolutely minimal overhead. From a security standpoint I like virtualisation boundaries much more than container boundaries, and now I wonder how far I can shrink my overhead.
Well, the author seems to want text logs instead, which seems much, much worse for this.
I recently delivered a production-ready embedded system running Armbian with 512MB of RAM, and indeed disabled systemd-journald for our use case too. But even with it enabled, our Lua-based app (science/data analysis on a sensor network) was running in the best environment it has ever run in, so I can confirm: 512MB is enough for a lot of things.
It's almost the same result for the standard page cache when you're reading a file, isn't it?
I haven't checked in the last 4 years or so, but, before that, every time I've worked with a Linux-based storage system that used mmap to write to files, I've ended up rewriting it to use pread/pwrite.
Each time, there was no perceptible CPU hit, but there was a massive page cache / memory pressure win. It turns out that aggressively evicting warm code pages then faulting them back in is bad for system performance, even with a fast SSD.
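The explicit-I/O style the rewrite lands on is simple enough to sketch. This uses Python's os.pread/os.pwrite as stand-ins for the C calls (the file and offsets are invented); the point is that the application hands bytes to the kernel at explicit offsets instead of dirtying pages of a mapped file in its own address space:

```python
import os
import tempfile

# Sketch of the pread/pwrite style the comment prefers over mmap for
# storage writers: data still flows through the page cache, but the
# process never maps file pages into its address space, so the file's
# memory footprint stays in reclaimable cache rather than in the
# process's mappings.
fd, path = tempfile.mkstemp()
try:
    record = b"journal entry #1\n"
    os.pwrite(fd, record, 0)                    # write at absolute offset 0
    os.pwrite(fd, b"entry #2\n", len(record))   # append at a known offset
    data = os.pread(fd, 64, 0)                  # read back, no seek needed
    print(data)
finally:
    os.close(fd)
    os.unlink(path)
```

Because every call carries its own offset, this style also needs no lseek() coordination between threads, which is a nice side benefit over plain read/write.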
If you don't like that, you can always use mlock(). You can also tune things like writeback sysctls and readahead behavior. But I disagree it's "broken" because it doesn't do what you want by default.
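mlock() itself isn't in the Python stdlib, but the same family of page-cache hints is reachable via posix_fadvise(2), which Python does expose. A rough sketch (temp file and sizes are invented) of a writer telling the kernel its own pages won't be needed again, so they get reclaimed before warmer pages do:

```python
import os
import tempfile

# Sketch of page-cache hinting via posix_fadvise(2): after flushing its
# writes, the process marks its own file pages as not-needed so the
# kernel reclaims them first instead of evicting warm code pages.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"x" * 4096 * 16)   # dirty some page-cache pages
    os.fsync(fd)                     # DONTNEED only drops clean pages
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
finally:
    os.close(fd)
    os.unlink(path)
```

The fsync() before the hint matters: POSIX_FADV_DONTNEED doesn't force writeback, so dirty pages would otherwise just stay cached.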
It'd be far more interesting to explore an io_uring based implementation IMNSHO.
It's truly impressive how systemd turns out to be the right place to do absolutely anything and everything from bootloader to init to dhcp to ntp to network shares... except fix systemd bugs. systemd doesn't seem to be the right place to do that ever. Someone else needs to do it. For another example of this phenomenon, see the nohup bug.
Ever since then I switched to rsyslogd and the like. Rock solid.
I mean, there could definitely be a bug in journald, but I haven't seen any fixes mentioned in the changelogs for the last 5 years, and if it were happening in standard usage, people would notice.
For recovering corrupted logs: you can still "less" them as usual. They have some extra markers, but the text is available as text. journalctl has some special options for that too.
If you are a developer on Linux then you really owe it to yourself to learn as much as you can about systemd.
If you really understand systemd then you'll find yourself architecting your software around its capabilities.
If you understand systemd, you can completely avoid large chunks of development you might otherwise have assumed you needed to do.
Socket-activated services, nspawn, traffic accounting: the list of juicy goodness goes on and on....
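Socket activation is a good example of how little code this saves you. The protocol (documented in sd_listen_fds(3)) is tiny: systemd sets LISTEN_PID and LISTEN_FDS and passes already-bound, already-listening sockets starting at fd 3. A rough sketch, where the helper name and the fallback logic are mine:

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first fd systemd passes; see sd_listen_fds(3)

def get_listen_socket(fallback_port=8080):
    """Adopt a systemd-passed listening socket when socket-activated;
    otherwise bind our own (e.g. when the service is run by hand)."""
    if (os.environ.get("LISTEN_PID") == str(os.getpid())
            and int(os.environ.get("LISTEN_FDS", "0")) >= 1):
        # systemd already bound and listen()ed on this fd for us,
        # possibly before our service was even started.
        return socket.socket(fileno=SD_LISTEN_FDS_START)
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", fallback_port))
    sock.listen()
    return sock

srv = get_listen_socket(fallback_port=0)  # port 0: let the OS pick one
addr = srv.getsockname()
print(addr)
srv.close()
```

With the systemd side handling binding, privilege, and on-demand start, the daemon itself needs none of that machinery.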
Ignore the haters, they wouldn't hate if they dedicated their energy to understanding systemd instead of hating on it.
In 2022 you are really an incomplete full-stack developer if systemd is not one of the technologies you know very well.
In that one sentence, I think you've highlighted what garners hate from the haters. It's powerful, and it's great when that power is used for good. The times when it's not are where people get agitated.
I still hate it.
Sorry, but understanding systemd does not preclude hating it.
In fact, it's because of the "haters" that I decided to dive deep into what init/supervision systems are and can be. Without that deep dive, I would have always thought that systemd was great. Afterward, however, I know just how wrong that is.
If a developer's only interaction with init systems was SysV init, yes, of course, systemd is the greatest thing since sliced bread.
But systemd could have been so much better...
Disclaimer: I'm writing an init/supervision system to be so much better. And simpler. Orders of magnitude simpler. Oh, and one that doesn't reach its fingers into every part of your OS.
tl;dr: systemd is better than SysV init, but it's not the end-all-be-all of init/supervision systems.
Devuan
I see no reason not to consider it.
He is a Debian user who hates the way systemd works.
Devuan is for you.
Other than that, Devuan is a solid choice for people who want to get rid of systemd. It comes with the Debian-typical rather old versions of most programs, but I guess for a file server it doesn't matter much whether you run kernel 4.19 or 5.15.
Now if you're a sysadmin and have come to rely on systemd and are now locked in, Devuan is obviously not for you. But if you're running Linux on a desktop or on a laptop and aren't a fan of systemd, Devuan is great.
Devuan can allow your precious daemon to stay up despite a netdev going offline, unlike systemd-networkd, which would kill the daemon.
This is quite important if your large, long-running daemon holds state.
That is my sentiment.
Linus was hesitant about binary logs.
I think they are just not Unix. They are doing the wrong thing correctly (I would prefer doing the right thing poorly, or better yet, doing it well).