Would you say that the defaults are sane enough for that kind of person (no real configuration needed)?
Every system I’ve used has `compression=on` set as the default, which currently means lz4. People who set it manually are doing so out of paranoia left over from earlier days, I think.
For Linux systems you can set `xattr=sa` and `acltype=posixacl` if you like, which offers a minor optimization you’ll likely never notice.
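A minimal sketch of checking and setting those properties, assuming a pool named `tank` (the pool name is a placeholder):

```shell
# Confirm the compression default (lz4 on recent OpenZFS releases)
zfs get compression tank

# Linux-only tweaks: store xattrs in the dnode and enable POSIX ACLs
zfs set xattr=sa tank
zfs set acltype=posixacl tank
```

Properties set on the pool’s root dataset are inherited by child datasets, so you only need to do this once.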
I suppose if you don’t like how much memory ZFS uses for ARC, you can reduce it. For desktop use, 2-4 GB is plenty. For heavier active storage use, like working with big files or a slow HDD-filled NAS, 8 GB+ is a better amount.
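Capping the ARC on Linux is done via the `zfs_arc_max` kernel module parameter. A sketch for a 4 GiB cap (the value is in bytes):

```shell
# Persist the cap so it applies at every module load
echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf

# Or change it at runtime without a reboot
echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```

Note the ARC shrinks under memory pressure anyway; the cap just sets a hard ceiling.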
Dataset recordsize can be set as well, but that’s really something for nerds like me who have huge static archives (set to 1-4 MiB) or virtual machines (match with qcow2’s 64 KiB cluster size). The default recordsize of 128 KiB works well enough for everything else; unless you have a particular concern, you don’t need to care.
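For those two cases, a sketch (dataset names are placeholders; recordsize only affects newly written files):

```shell
# Large static archives: bigger records mean less metadata overhead
# and better compression ratios
zfs set recordsize=1M tank/archive

# VM images: match qcow2's default 64 KiB cluster size to avoid
# read-modify-write amplification
zfs create -o recordsize=64K tank/vms
```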
I should note: beware of rolling-release Linux distros and ZFS. The Linux kernel breaks compatibility nonstop, and sometimes it can take a while for ZFS to catch up. This means your distro can update to a new kernel version, and suddenly you can’t load your filesystem. ZFSBootMenu is probably the best way to navigate that; it makes rolling back easy.
You also want to set up automatic snapshots, snapshot pruning, and sending of snapshots to a backup machine. I highly recommend https://github.com/jimsalterjrs/sanoid
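A rough sketch of what that looks like with sanoid; the dataset, host, and user names are placeholders, and the retention numbers follow the sample config shipped with sanoid:

```shell
# /etc/sanoid/sanoid.conf -- snapshot tank/home and prune automatically
cat <<'EOF' | sudo tee /etc/sanoid/sanoid.conf
[tank/home]
        use_template = production

[template_production]
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
EOF

# Replicate snapshots to the backup box with syncoid, sanoid's
# companion tool (runs over SSH, sends incrementally)
syncoid tank/home backupuser@backuphost:backup/home
```

Both sanoid and syncoid are typically run from cron or a systemd timer.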
If you really find yourself wanting to gracefully deal with rollbacks and differences in a project, HotTubTimeMachine (HTTM) is nice to be aware of: https://github.com/kimono-koans/httm
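The basic httm workflow, as I understand it (the file path is a placeholder):

```shell
# List every snapshot version of a file, newest last
httm ~/projects/notes.txt
```

From there you can diff against or copy back whichever snapshot version you want, without digging through `.zfs/snapshot` directories by hand.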
I would say the learning curve is similar but perhaps a bit steeper than configuring vim for web development for the first time or configuring nginx and let's encrypt on a webserver for the first time.
It didn't help that I started by trying to set up root on encrypted ZFS. I would recommend first using ZFS unencrypted, then adding encryption once you're comfortable with that, and finally trying /home on ZFS or even root on ZFS.
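The middle step of that progression is low-stakes because encryption is a per-dataset property: you can try it on a new dataset in an existing pool. A sketch (pool and dataset names are placeholders):

```shell
# Create an encrypted dataset; prompts for a passphrase
zfs create -o encryption=on -o keyformat=passphrase tank/secure

# After a reboot the key is not loaded; load it and mount manually
zfs load-key tank/secure
zfs mount tank/secure
```

Root on encrypted ZFS adds the hard part: arranging for the key prompt in the initramfs or boot menu before the root dataset can mount.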
For example, I had to move between SSDs, which is an absolutely trivial operation with any other filesystem if you know your Linux well. It took me maybe three hours of work on ZFS because it brings its own environment and does everything its own way, so I had to remember and re-discover a lot of stuff. The system booted on maybe the 15th try.
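For what it's worth, the data-copy half of such a migration is short; a sketch, with pool and dataset names as placeholders:

```shell
# Snapshot the whole tree recursively, then replicate it, preserving
# all properties and child datasets (-R), onto the new pool
zfs snapshot -r oldpool/ROOT@migrate
zfs send -R oldpool/ROOT@migrate | zfs recv -F newpool/ROOT
```

The hours go into everything around it: bootloader and EFI setup, the `bootfs` pool property, initramfs regeneration, and the first dozen failed boots.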
If you are fine with this, or you're not going to use it on root, or you are a FreeBSD user, it's an excellent choice for pretty much any task.
Though I consider ZFS to be nice, I'm willing to use it only with distributions where it ships built for the distro kernel (i.e. Ubuntu or Proxmox). In the past, I've wasted too much time with DKMS failing to build the kernel modules, or kABI modules failing to load, or other problems that left me without a root filesystem at reboot. That's not an experience I'd like to repeat. Also, btrfs is nice too ;).
Regardless, I'm surprised your year with ZFS didn't convince you that it's worthwhile to invest in its mastery. The filesystem will pay dividends for your entire computing lifetime, and not only save your data but hours of maintenance over that time.
Major downgrade moving away from it in my view.
On a side note, FreeBSD is great and worth investing in as well. I still have more to learn but the journey has been wonderful and I will not be using Linux in the next iteration of my home / lab server and hobby projects.
It's like saying you should know how virtual memory and pre-emptive multitasking work if you want to upgrade from Classic Mac OS to OS X.
You don't need to know about SLOGs and L2ARCs. They're not super useful generally, but people geek out about them.
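For the curious, both are just extra vdevs bolted onto a pool; a sketch with placeholder device names:

```shell
# SLOG: a separate fast device for the synchronous-write intent log
# (only helps sync-heavy workloads like NFS or databases)
zpool add tank log /dev/nvme0n1p1

# L2ARC: a second-tier read cache that spills over from RAM
zpool add tank cache /dev/nvme0n1p2
```

Which is exactly the point: they're optional tuning knobs, not something a new user needs to understand before adopting ZFS.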
I know those were just examples and I don't disagree with the general point.