So let me get this straight: he stores a bunch of CD ISOs, presumably with a block size of 2048 bytes, to different dedup file systems without caring about the dedup block size?
ZFS has 128 kB recordsize by default, so little wonder it does so badly in this particular test without any tuning!
Windows has 4 kB blocks, so that's why it does so well. Doh.
He could have configured the other systems to use a different block size. A 2 kB block would obviously be optimal; one should get the highest deduplication savings at that size.
From ZFS documentation: http://open-zfs.org/wiki/Performance_tuning#Dataset_recordsi...
"ZFS datasets use an internal recordsize of 128KB by default. The dataset recordsize is the basic unit of data used for internal copy-on-write on files. Partial record writes require that data be read from either ARC (cheap) or disk (expensive). recordsize can be set to any power of 2 from 512 bytes to 128 kilobytes. Software that writes in fixed record sizes (e.g. databases) will benefit from the use of a matching recordsize."
So what happens if he sets the ZFS recordsize to 2 kB (assuming it can be done)? OK, the dedup table will probably be huge, but... the savings ratio is what we need to know.
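For what it's worth, per the docs quoted above it can be done: recordsize accepts any power of 2 from 512 bytes to 128 kB. A sketch of what that tuning would look like (the pool/dataset name `tank/isos` is hypothetical):

```shell
# Hypothetical dataset name; 2K is a valid recordsize per the quoted docs.
zfs create -o recordsize=2K -o dedup=on tank/isos

# Or change an existing dataset. Note: recordsize only affects
# files written after the change, so the ISOs would need to be
# copied in again for the test to be fair.
zfs set recordsize=2K tank/isos
zfs get recordsize,dedup tank/isos
```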
> ZFS is another filesystem capable of deduplication, but this one does it in-line and no additional software is required.
Yup, ZFS is probably the best choice for online deduplication.
But apart from the block size, I don't see any other explanation for the differences.
Dedup is dedup, so I fail to understand why different implementations should end up with such different results (short of a very incorrect implementation!).
Say you have this data:
ABCABCBACCBBABCC
A dedup system with a block size of 1 can see you really have just three unique blocks: A, B, and C.

Same data, but dedup with a block size of 2:
AB CA BC BA CC BB AB CC
Dedup with a block size of 2 thinks you have 6 unique blocks: AB, CA, BC, BA, CC, and BB.

Etc.
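The effect above is easy to simulate. A toy sketch (fixed-size chunking only; real dedup hashes blocks rather than comparing them directly):

```python
def unique_blocks(data: str, block_size: int) -> int:
    """Split data into fixed-size blocks and count the distinct ones."""
    return len({data[i:i + block_size] for i in range(0, len(data), block_size)})

data = "ABCABCBACCBBABCC"

for size in (1, 2, 4):
    total = len(data) // size
    unique = unique_blocks(data, size)
    # Fraction of blocks that actually has to be stored after dedup.
    print(f"block size {size}: {total} blocks, {unique} unique "
          f"-> stores {unique / total:.0%} of the data")
```

With block size 1 you store 3 of 16 blocks; with block size 2 you already store 6 of 8, and at block size 4 nothing dedups at all. That is exactly why a 128 kB recordsize does so badly against 2 kB-aligned ISO data.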