That's good to know because, for important things, I test the archive before throwing the original data away.
Not to single out zstd, but it's a good opportunity for a reminder: if you make backups, test your backups! A bug like this can be introduced as well as fixed. I'm not saying it's likely, or that you're likely to be affected, but for things you care about, opening the backup once a year and spot-checking some files is not a lot of work for a much lower risk than never checking at all. Most backup tools will also let you do checksum validation in an automated fashion, but I prefer to additionally open the archive manually and confirm it truly works (rather than, say, merely confirming that a partial archive has no errors).
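In the same spirit, here is a minimal sketch of automated round-trip checking (Python stdlib, with gzip standing in for whatever compressor you actually use; the data and names are made up):

```python
import gzip
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Pretend this is the data being backed up.
original = b"important records\n" * 1000
archive = gzip.compress(original)

# Verify the round trip *before* discarding the original:
# decompress the archive and compare checksums end to end.
restored = gzip.decompress(archive)
assert sha256(restored) == sha256(original), "backup does not round-trip!"
```

This only proves the archive restores today, of course; re-running it periodically against stored checksums is what catches bit rot later.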
Anyway, details of the bug are here: https://github.com/facebook/zstd/pull/3517
> This [affects] the block splitter. Only levels using the optimal parser (levels 13 and above depending on the source size), or users who explicitly enable the block splitter, are affected.
So if you use, for example, zstd -9 or below (I didn't know it went higher, at least not at somewhat reasonable speeds), you should always have been fine, unless you explicitly enabled the block splitter yourself (or perhaps your backup software does that for you? It sounds like something relevant for deduplication, but I'm not sure what the feature does exactly).
> The block splitter confuses sequences with literal length == 65536 that use a repeat offset code. It interprets this as literal length == 0 when deciding the meaning of the repeat offset, and corrupts the repeat offset history. This is benign, merely causing suboptimal compression performance, if the confused history is flushed before the end of the block, e.g. if there are 3 consecutive non-repeat code sequences after the mistake. It also is only triggered if the block splitter decided to split the block.
If I understand it correctly, the bug triggers under circumstances where the data causes the splitter to split at exactly 2^16 and then doesn't flush the block, and one example where it doesn't do that is if any part of the next 2^17 bytes (128K) is compressible? Not sure what a "repeat code sequence" is, my lz77-geared brain thinks of a reference that points to an earlier repetition, aka a compressed part.
To begin with, it requires the distance between 2 consecutive matches to be exactly 65536. This is extremely rare. Mountains of files never generate such a situation. Then it needs to employ a repcode match as the match following the 65536 literals. Repcode matches are more common after short literal lengths (1-3 bytes). Needless to say, 65536 is far from this territory, so it's uncommon to say the least. Finally, the block splitter must be active (hence only high compression modes), and it must decide to split the block exactly at the boundary between the literals and the repcode match.
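For context on "repcode": zstd keeps a small history of the three most recently used match offsets, and a repeat-offset code is a match that says "reuse one of those" instead of encoding a full offset. Here's a rough sketch of the lookup semantics the PR describes (illustrative only, not zstd's actual code; the 16-bit wraparound at the end is my reading of how 65536 gets confused with 0):

```python
def resolve_repcode(rep_code: int, reps: list, lit_len: int) -> int:
    """Map a repeat-offset code (1..3) to an actual match offset.

    zstd shifts the meaning of the codes when no literals precede the
    match: with lit_len > 0, code k means the k-th most recent offset;
    with lit_len == 0, code 1 means the 2nd most recent, code 2 the
    3rd, and code 3 means "most recent minus one".
    """
    if lit_len == 0:
        if rep_code == 1:
            return reps[1]
        if rep_code == 2:
            return reps[2]
        return reps[0] - 1
    return reps[rep_code - 1]

reps = [100, 200, 300]           # most recent first
resolve_repcode(1, reps, 5)      # -> 100 (literals precede the match)
resolve_repcode(1, reps, 0)      # -> 200 (no literals: meaning shifts)

# The failure mode: if the literal length is held in a 16-bit field,
# 65536 wraps to 0, so the "no literals" branch is taken by mistake,
# and the wrong offset enters the repeat-offset history.
65536 & 0xFFFF                   # -> 0
```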
So this is not 0, since Google found a sample, but all these conditions _together_ have an astronomically low chance to happen, as in competitive with winning the Powerball jackpot. I wouldn't worry so much for my own dozens of archives.
Compression corruption is worse than regular corruption because of its cascading impact. A corrupt sector can be replaced in place, but a corrupt compressed file will generally destroy everything downstream of the error.
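To illustrate the cascading effect (Python stdlib zlib as a stand-in; any LZ-family compressor behaves similarly): flipping one bit in raw data damages one byte, but flipping one bit mid-stream in compressed data typically makes everything after it unrecoverable.

```python
import zlib

data = bytes(range(256)) * 1024          # compressible sample data

# One flipped bit in the *raw* data damages exactly one byte.
raw = bytearray(data)
raw[len(raw) // 2] ^= 0x01
assert sum(a != b for a, b in zip(raw, data)) == 1

# One flipped bit mid-stream in the *compressed* data derails the
# decoder (or trips the integrity check), losing everything after it.
comp = bytearray(zlib.compress(data))
comp[len(comp) // 2] ^= 0x01
try:
    zlib.decompress(bytes(comp))
    survived = True
except zlib.error:
    survived = False
assert not survived
```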
Big +1 to actually verifying the round trip. Backups that aren’t tested through an actual restore are like theoretical war plans vs reality. No plan survives first contact with the enemy.
That would be a nice extra benefit, besides the speedup from being multithreaded. (I assume zstd also does multithreading but for those stuck with gzip, this is a drop-in replacement.)
Edit: bzip2 apparently does the same, "bzip2 compresses files in blocks, usually 900 kbytes long. Each block is handled independently. If a media or transmission error causes a multi-block .bz2 file to become damaged, it may be possible to recover data from the undamaged blocks in the file." (--man bzip2)
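The same idea can be demonstrated with the Python stdlib: gzip members concatenated into one stream are independent, so a corrupted later member doesn't take the earlier ones down with it (a rough stand-in for bzip2's independent blocks):

```python
import gzip
import zlib

good = b"first block\n" * 200
bad = b"second block\n" * 200

# Two independently compressed gzip members in a single stream.
part1 = gzip.compress(good)
part2 = gzip.compress(bad)
stream = bytearray(part1 + part2)
stream[len(part1) + 20] ^= 0xFF          # corrupt only the 2nd member

# A member-aware reader still recovers the first, intact member...
d = zlib.decompressobj(wbits=31)         # 31 = expect gzip framing
recovered = d.decompress(bytes(stream))
assert recovered == good

# ...while the damage stays confined to the second member, which
# fails to decode (or fails its CRC, or comes out wrong).
try:
    d2 = zlib.decompressobj(wbits=31)
    out2 = d2.decompress(d.unused_data)
    second_ok = d2.eof and out2 == bad
except zlib.error:
    second_ok = False
assert not second_ok
```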
If anyone is interested in the Clear Linux optimized zstd build config (IMHO it's useful for building 1.5.5):
https://github.com/clearlinux-pkgs/zstd/blob/main/zstd.spec
(CFLAGS, 4 patches)
What advantages or disadvantages am I getting from them?
* Changes the default values of some parameters, which can't be upstreamed for compatibility reasons.
* Changes build flags / macros for optimization and/or to work around compiler bugs.
Anybody willing to explain what "nb" stands for in this context?
I don't know how zstd handles the out-of-memory case, but the sparse reading might not be perfect.
OpenZFS does support up to level 19, though I'd generally advise not using it for many reasons, so if it were running the newer version, it could be affected.