It feels much more dangerous to have such a system in place instead, providing a false sense of security. Users know best what kind of data they need to back up, where they want to back it up, whether it needs to be encrypted, and whether it should run daily or weekly, etc.
As a side note, it's pretty interesting that TrueNAS is designed so heavily around the ZFS paradigm. Most people think in terms of applications and timestamps to restore, not data pools and the snapshot paradigm that comes with ZFS.
With the right layer of abstraction on top of snapshots, one could have their cake and eat it too, since ZFS is difficult for beginners to grasp.
https://utcc.utoronto.ca/~cks/space/blog/linux/UbuntuKernels...
Well, the people who host things for those people do. Is that not argument enough in favor of it?
There are a lot of non-obvious gotchas with ZFS, and a lot of knobs to turn to make it do what you want. Anecdotally, a coworker of mine set it up on his development machine back when Ubuntu was heavily promoting it for default installs. It worked well until one day his machine started randomly freezing for minutes multiple times a day... He traced the issue back to an improper snapshotting setup, then spent a couple of days trying to fix it before going back to ext4.
For the Postgres data use case in particular, I would be wary of interactions, and introducing it would probably require a lot of testing... Though it seems at least some people are having success with it (not exactly a plug-and-play or cheap setup, though): https://lackofimagination.org/2022/04/our-experience-with-po...
There are a ton of ZFS knobs, yes, but you don’t need most of them to have a safe and performant setup. Optimal, no, but good enough.
It’s been well-tested with DBs for years; Percona in particular is quite fond of it, with many employees writing blog posts on their experiences.
Addressing your argument directly though: you know that if you spin up a Postgres database for your app, you need to dump the database to disk to back it up (or, if you wanna get fancy, you can do a delta from the last backup plus a periodic full backup). Anytime a Postgres database exists, you know the steps you need to take to back up that service.
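For instance, here's a minimal sketch of that "dump the database to disk" step (the database name and output directory are made up for illustration; connection settings are assumed to come from the usual PGHOST/PGUSER environment variables). A delta scheme would instead archive WAL between periodic full base backups, but a plain pg_dump is the simplest version:

    import datetime
    import pathlib
    import subprocess

    def backup_postgres(db_name: str, out_dir: pathlib.Path) -> pathlib.Path:
        """Dump a single Postgres database with pg_dump.

        Uses the custom format (--format=custom) so the dump is compressed
        and restorable with pg_restore.
        """
        out_dir.mkdir(parents=True, exist_ok=True)
        stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        out_file = out_dir / f"{db_name}-{stamp}.dump"
        subprocess.run(
            ["pg_dump", "--format=custom", f"--file={out_file}", db_name],
            check=True,
        )
        return out_file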
Same with persistent file storage on disk: if you have a directory of files, you need a snapshot of all of those files.
Each _service_ can know how to back itself up. If you tell a Dokku _app_ to back itself up, what you really mean is that each _service_ attached to that app should do whatever it needs to do to create a backup. Then Dokku only needs to collate all of the various backup outputs, include a copy of the git repository that drives the app, tar/zstd it, and write it to disk.
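A rough sketch of that collation step, assuming each service has already written its backup artifact into a staging directory, and assuming the app's bare git repo lives under /home/dokku/<app> (the paths and function names here are illustrative, not Dokku's actual internals):

    import pathlib
    import subprocess
    import tarfile

    def backup_app(app: str, staging: pathlib.Path, out_dir: pathlib.Path) -> pathlib.Path:
        """Collate per-service backup outputs plus the app's git repo
        into a single tar.zst archive."""
        # Snapshot the git repository that drives the app as a bundle.
        repo_copy = staging / "repo.bundle"
        subprocess.run(
            ["git", "-C", f"/home/dokku/{app}", "bundle", "create", str(repo_copy), "--all"],
            check=True,
        )
        # Tar everything the services wrote into the staging directory...
        archive = out_dir / f"{app}-backup.tar"
        with tarfile.open(archive, "w") as tar:
            tar.add(staging, arcname=app)
        # ...then compress with zstd (--rm deletes the uncompressed tar).
        subprocess.run(["zstd", "--rm", str(archive)], check=True)
        return archive.parent / (archive.name + ".zst")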
As you pointed out, the user should probably be able to control the backup cadence, where those backups are shipped off to, the retention period, whether or not they are encrypted, etc, but the actual mechanics of performing a backup aren't exactly rocket science. All of the user-configurable values can have reasonable defaults too -- they can/should Just Work (tm). There's value in having that work OOTB even if the backups are just being written to disk on the actual Dokku machine somewhere.
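To make the "reasonable defaults" point concrete, here's a hedged sketch of what those user-configurable values might look like (the names and defaults are hypothetical, not an actual Dokku config):

    from dataclasses import dataclass

    @dataclass
    class BackupConfig:
        """Hypothetical per-app backup settings, with defaults meant to
        Just Work (tm) out of the box."""
        cadence: str = "daily"                    # or "weekly", a cron expression, etc.
        destination: str = "/var/backups/dokku"   # local disk by default; could be S3, etc.
        retention_days: int = 30                  # prune archives older than this
        encrypt: bool = False                     # opt-in, e.g. via age or gpg

    # Defaults apply unless the user overrides something:
    cfg = BackupConfig()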
I sort of see it the same way as "just install Linux Mint, it's great, you'll love it", but when something doesn't work, it's "oh yeah, just open the terminal and ________": opening the terminal is the _last_ thing people want to do if they just wanna read their email or whatever.