Also, I have a 1.5 TB directory containing a lot of redundant backup data. Restic deduplicated it down to a 400 GB repository; as a ZFS dataset it would have taken roughly 4x that space.
Honestly, for backups up to ~100 TB, tools like Restic beat filesystem stream backups: lower hardware requirements, good repository-management tooling, cloud integration, portable repositories, better and more widely trusted encryption, etc. Go-based tools ship as static binaries with no dependencies, so you can recover your data in the future on just about any x86 machine.
zfsnapr mounts recursive snapshots on a target directory so you can just point whatever backup tool you like at a normal directory tree.
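For anyone who hasn't tried it, the flow looks roughly like this. The zfsnapr subcommands are from memory and may not match the current CLI exactly, and the pool name, mountpoint, and repo path are made up for the example:

```shell
# Take recursive snapshots of the pool and expose them read-only
# under /mnt/snap (zfsnapr creates and cleans up the snapshots).
zfsnapr mount tank /mnt/snap

# Point restic (or any backup tool) at the mounted snapshot tree
# as if it were a normal directory.
restic -r /srv/restic-repo backup /mnt/snap

# Tear the snapshot mounts back down when the backup finishes.
zfsnapr unmount /mnt/snap
```

The nice part is that the backup tool never has to know ZFS is involved; it just sees a consistent, frozen directory tree.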
I still use send/recv for local backups - I think it's good to have a mix of strategies.
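For reference, the local send/recv side is straightforward (pool names and snapshot labels here are made up):

```shell
# Recursive snapshot of the whole dataset tree
zfs snapshot -r tank@2024-01-01

# Full replication stream into a local backup pool
zfs send -R tank@2024-01-01 | zfs recv -F backup/tank

# Later: incremental send between two snapshots
zfs snapshot -r tank@2024-01-08
zfs send -R -i tank@2024-01-01 tank@2024-01-08 | zfs recv -F backup/tank
```

The incremental form only ships blocks changed between the two snapshots, which is why send/recv stays attractive for frequent local backups even if a tool like Restic handles the offsite copy.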