Since OpenZFS already implements LZ4 compression (and has done so for quite some time), this is yet another feature that, once enabled, will stop you from importing your now-incompatible pool into anything else that implements ZFS.
You are correct about incompatible features. Sun and Oracle use a monotonically increasing integer to denote new ZFS versions. OpenZFS instead incremented the version to 5000 and now uses feature flags, so it is possible to coordinate individual feature enablement across all the operating systems that support OpenZFS.
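To make that concrete, this is roughly how feature flags look from the command line on an OpenZFS system (the pool name `tank` is hypothetical; this is an admin sketch, not runnable without a live pool):

```shell
# OpenZFS pools report the sentinel version 5000, signalling that
# individual feature flags are used instead of a single version number.
zpool get version tank

# List the per-feature state (disabled / enabled / active) for the pool.
zpool get all tank | grep feature@

# Enable a single feature. Once a feature becomes "active", the pool can
# only be imported by implementations that support that specific feature.
zpool set feature@lz4_compress=enabled tank
```

This is what lets, say, FreeBSD and Linux agree on individual features rather than an entire version number.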
[1] has the OpenZFS launch announcement from September 2013, [2] dates Oracle's acquisition of Sun to January 2010, and [3] has the last OpenSolaris-derived bits coming out of Sun in November 2010.
[1] - http://open-zfs.org/wiki/Announcement
[2] - https://www.cnet.com/news/oracle-buys-sun-becomes-hardware-c...
[3] - https://en.wikipedia.org/wiki/OpenSolaris (I'd cite osol-discuss, but that mailing list was shut down with the rest of sun.com)
Most importantly, how fast is the disk? I suspect (but would benchmark if I really needed to know) that the effects of compression will differ greatly between an older 7,200 RPM spinning disk and a modern SSD.
It's a very poor test.
The big "gap" is probably between lz4 and gzip. e.g., for compressing logs, where gzip compresses a lot more but is terribly slow.
I hope zstd could be used for this case someday: http://facebook.github.io/zstd/
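A quick sketch of the speed/ratio gap being described. Neither lz4 nor zstd ships in the Python standard library, so zlib's fastest and slowest levels stand in here for the two ends of the trade-off; the log-like sample data is made up for illustration:

```python
import time
import zlib

# Hypothetical log-like data: highly repetitive, as real logs tend to be.
data = b'127.0.0.1 - - "GET /index.html HTTP/1.1" 200 2326\n' * 20000

def bench(level):
    """Compress once at the given zlib level; return (size, seconds)."""
    start = time.perf_counter()
    out = zlib.compress(data, level)
    return len(out), time.perf_counter() - start

fast_size, fast_time = bench(1)  # fast end: weaker compression
slow_size, slow_time = bench(9)  # slow end: stronger compression

print(f"level 1: {fast_size} bytes in {fast_time:.4f}s")
print(f"level 9: {slow_size} bytes in {slow_time:.4f}s")
```

zstd's appeal is that its level range spans much of this gap within a single format, which is why it keeps coming up for cases like log compression.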
The license granted hereunder will terminate,
automatically and without notice, if you (or any
of your subsidiaries, corporate affiliates or
agents) initiate directly or indirectly, or take
a direct financial interest in, any Patent
Assertion: (i) against Facebook or any of its
subsidiaries or corporate affiliates...
https://github.com/facebook/zstd/blob/dev/PATENTS

I've worked with a few "ZFS appliances" from Sun (256-512TB range, NFS/iSCSI shares, 1-2k clients) and would never enable any advanced features on those (compression, dedup, etc.). They were awfully unstable when we did that.
Granted, that was 5 years ago, but I don't see any indication this technology has evolved significantly amid all the drama surrounding Oracle, licensing, forks, etc. Just not worth the trouble these days, IMHO.
But ZFS deeply hardcodes assumptions that mean you don't get to rewrite history like that, so it has to deduplicate synchronously (and keep all the ever-growing data structures this requires in memory for as long as the pool accepts writes).
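A toy model of why synchronous (inline) dedup forces those structures into memory: every single write must consult a table keyed by block checksum before it can complete, and that table grows with every unique block ever written. This is a simplified illustration, not ZFS's actual DDT format:

```python
import hashlib

class InlineDedupStore:
    """Toy inline-dedup store: each write is checked against a
    checksum table *before* it completes, so the table must stay
    resident and grows with every unique block."""

    def __init__(self):
        self.blocks = []  # physical block storage
        self.ddt = {}     # checksum -> (block index, refcount)

    def write(self, block: bytes) -> int:
        key = hashlib.sha256(block).digest()
        entry = self.ddt.get(key)
        if entry is not None:
            idx, refs = entry
            self.ddt[key] = (idx, refs + 1)  # duplicate: bump refcount only
            return idx
        idx = len(self.blocks)
        self.blocks.append(block)            # unique: store the block...
        self.ddt[key] = (idx, 1)             # ...and the table grows by one
        return idx

store = InlineDedupStore()
a = store.write(b"hello")
b = store.write(b"hello")  # deduplicated against the first write
c = store.write(b"world")
print(a, b, c, len(store.ddt))  # → 0 0 1 2
```

Because the lookup sits on the write path, it can't be deferred to a background pass without changing on-disk assumptions, which is the "rewrite history" option the parent says ZFS rules out.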
I don't think an arbitrarily larger amount of time or money behind it would have permitted a better implementation, short of a ZFS2 and an in-place migration tool.
Compression, on the other hand, is very standard and no issue at all, in my many years of ZFS experience. It's the default in many cases (e.g., on Nexenta).