How is "Now using Zstandard instead of xz for package compression" followed by the minuscule low-contrast grey "(archlinux.org)" better than "Arch Linux now using Zstandard instead of xz for package compression" like it was when I originally read this a few hours ago?
If people can't read the source site's domain after the headline then I agree there wouldn't be much context, but equally, if they can't read that, surely their best solution is to adjust the zoom level in the browser.
It's clear you won't get complete context from the headline list plus domain, but a hint of it is provided and if you want more you click the link. Maybe I'm being a little uncharitable but I don't see a big problem here.
should be allowed. But I'm not sure that it is.
The question is what's better than a strict "no editorialization" rule.
For a contrived example of how dated the guideline seems, what if somebody submitted a tweet thread criticizing Twitter the company with a headline/sitebit like "Twitter now banning third-party clients. (twitter.com)". Would it have to be renamed to "Now banning third-party clients. (twitter.com)"? That would make it appear to be a more official statement instead of an unsponsored opinion.
I'm picking on Twitter out of recent memory of this submission of mine a couple weeks ago, where the submission title "Tracking down the true origin of a font used in many games and shareware titles" was 100% my own editorializing for lack of title-worthy material in the linked tweet itself: https://news.ycombinator.com/item?id=21667238
Unless one uses a link shortener. Are shorteners permitted on HN?
If it's not easy to read, then the problem is between the CSS and your screen, not the title rules.
Earlier last year I was doing some research that involved repeatedly grepping through over a terabyte of data, most of which were tiny text files that I had to un-zip/7zip/rar/tar and it was painful (maybe I needed a better laptop).
With Zstd I was able to re-compress the whole thing down to a few hundred gigs and use ripgrep which solved the problem beautifully.
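A minimal sketch of that workflow, assuming the zstd CLI is installed (file names here are hypothetical):

```shell
# Recompress a gzip'd corpus to zstd, multi-threaded,
# then search it without unpacking to disk.
gunzip -c logs-2019.txt.gz | zstd -q -T0 -o logs-2019.txt.zst
zstdcat logs-2019.txt.zst | grep -n 'pattern'   # or pipe into rg
```

zstd also ships a `zstdgrep` wrapper that does the decompress-and-grep step in one command.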
Out of curiosity I tested compression with (single-threaded) lz4 and found that multi-threaded zstd was pretty close. It was an unscientific and maybe unfair test but I found it amazing that I could get lz4-ish compression speeds at the cost of more CPU but with much better compression ratios.
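A rough way to reproduce that kind of comparison, assuming both the lz4 and zstd CLIs are installed (the input path is hypothetical, and timings will obviously vary by machine):

```shell
corpus=big-text-dump.tar                    # hypothetical input
time lz4  -q -1    -c "$corpus" | wc -c     # single-threaded lz4, fast level
time zstd -q -3 -T0 -c "$corpus" | wc -c    # zstd using all cores
# compare the two sizes and wall-clock times; more CPU cores narrow
# the speed gap while zstd's ratio stays noticeably better
```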
EDIT: Btw, I use arch :) - yes, on servers too.
http://pages.di.unipi.it/farruggia/dcb/
Looks like Snappy beats both LZ4 and Zstd in compression speed and compression ratio, by a huge margin.
LZ4 is ahead of Snappy in decompression speed.
I have not researched this opinion much
The numbers I know about are wrong: zstd always beats gzip for compression ratio.
I will need to do my own testing.
tar -I zstd -xvf archive.tar.zst
https://stackoverflow.com/questions/45355277/how-can-i-decom...

Hopefully there's another option added to tar that simplifies this if this compression becomes mainstream.
tar -axf archive.tar.whatever
and it should work for gz, bz2, Z, zstd, and probably more (verified working for zstd on GNU tar 1.32).

zstd -cd archive.tar.zst | tar xvf -
it's needed anyway as soon as you step outside of what somebody made an option for, for example encryption.

tar -acf blah.tar.zst blah/
-a figures it out from the filename extension, and it's zst, not zstd.

For compression, you can use "-c -I zstd"
Package installations are quite a bit faster, and while I don't have any numbers I expect that the ISO image compose times are faster, since it performs an installation from RPM to create each of the images.
Hopefully in the near future the squashfs image on those ISOs will use zstd, not only for the client side speed boost for boot and install, but it cuts the CPU hit for lzma decompression by a lot (more than 50%). https://pagure.io/releng/issue/8581
Also one more benefit of zstd compression that is not widely noted: a zstd file compressed with multiple threads is binary-identical to the same file compressed with a single thread. So you can use multi-threaded compression and you will end up with the same file checksum, which is very important for package signing.
On the other hand xz, which was used before, produces a binary-different file depending on whether it was compressed with a single thread or multiple threads. This basically precludes multi-threaded compression at package build time, as the compressed file checksums would not match if the package was rebuilt with a different number of compression threads. (The unpacked payload will always be the same, but the compressed xz file will be binary-different.)
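A quick way to check that property yourself; a sketch, assuming the zstd CLI is installed (the payload is synthetic):

```shell
# Compress the same payload with 1 and 4 worker threads; zstd's
# multi-threaded output does not depend on the thread count, so the
# two files are byte-identical and their checksums agree.
seq 1 200000 > payload
zstd -q -T1 -o one.zst  payload
zstd -q -T4 -o four.zst payload
cmp one.zst four.zst && echo 'identical'
```

(Note that zstd's `--single-thread` mode is a separate code path and is not guaranteed to match the `-T#` output.)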
This looks like a very good move. Debian should follow suit.
EDIT: never mind, this doesn't seem to have made zstd the default for building packages locally, just for the ones you download from the official repos. Guess I'll go change that by hand, and then still be sad that I can't easily disable compression entirely for AUR helpers while still building my own packages with compression.
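For reference, the relevant knobs live in /etc/makepkg.conf; a sketch (option names are from makepkg.conf(5), but the level and thread values here are just a guess, not Arch's defaults):

```shell
# /etc/makepkg.conf
PKGEXT='.pkg.tar.zst'              # build local packages as zstd
COMPRESSZST=(zstd -c -T0 -18 -)    # multi-threaded, level 18 (example)
```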
Impressive. As an AUR package maintainer, I am also wondering how the compression speed is, though.
FWIW I really applaud Arch here. Even if it's just a small step. Commercial operating systems should take notice. OS updates should really not take as long as they (mostly) do.
"... decompression time dropped to 14% of what it was..." (s/14/actual_value)
Luckily updating libarchive manually with an intermediate version resolved my issue and everything proceeded fine.
This is a good change, but it's a reminder to pay attention to the Arch Linux news feed, because every now and then something important will change. The maintainers provided ample warning about this change there (and indeed I had updated my other systems in response), so we procrastinators really had no excuse :)
Few (or none?) of Chrome's fairly dramatic improvements to zlib have been upstreamed. https://github.com/madler/zlib/issues/346
Edit: Also, if browsers do adopt zstd, it's likely you'll end up with the same situation where they fork their own implementation of zstd. Upstreaming requires signing Facebook's CLA, which has patent clauses that don't work for most.
https://github.com/facebook/zstd#benchmarks
AFAIK (I haven't looked much into it since 2018) it's not widely supported by CDNs, but at least Cloudflare seems to serve it by default (EDIT: must be enabled per-site https://support.cloudflare.com/hc/en-us/articles/200168396-W...)
Also lz4, of course.
PKGEXT='.pkg.tar.zst'
The largest package I always wait on, perfect for this scenario, is `google-cloud-sdk` (the re-compression is a killer; `zoom` is another one in AUR that's a beast), so I used it as a test on my laptop here in "real world conditions" (browsers running, music playing, etc.). It's an old Dell m4600 (i7-2760QM, rotating disk), nothing special. What matters: using default xz, compression takes twice as long and appears to drive the CPU harder. With xz my fans always kick in for a bit (normal behaviour); testing zst here did not kick the fans on the same way.

After warming up all my caches with a few pre-builds to try and keep it fair by reducing disk I/O, here's a sampling of the results:
xz defaults - Size: 33649964
real 2m23.016s
user 1m49.340s
sys 0m35.132s
----
zst defaults - Size: 47521947
real 1m5.904s
user 0m30.971s
sys 0m34.021s
----
zst mpthread - Size: 47521114
real 1m3.943s
user 0m30.905s
sys 0m33.355s
I can re-run them and get a pretty consistent return (so that's good, we're "fair" to a degree); there's disk activity building this package (seds, etc.) so it's not pure compression only. It's a scenario I live every time this AUR package (google-cloud-sdk) is refreshed and we get to upgrade. Trying to stick with real world, not synthetic benchmarks. :)

I did not seem to notice any appreciable difference when adding `--threads=0` to `COMPRESSZST=` (from the Arch wiki); both consistently gave me right around what you see above. This was compression-only testing, which is where my wait time is when upgrading these packages; huge improvement with zst seen here.
pacman:
COMPRESSZST=(zstd -c -z -q -)
https://git.archlinux.org/svntogit/packages.git/tree/trunk/m...

devtools:
COMPRESSZST=(zstd -c -T0 --ultra -20 -)
https://github.com/archlinux/devtools/blob/master/makepkg-x8...

I have a server that spools off the entire New York stock and options market every day, plus Chicago futures, using Lz4. But when we copy to archive, we recompress it with Zstd, in parallel, using all the cores that were tied up all day.
There is not much size benefit above compression level 3; I would never use more than 6. And there's not much CPU benefit below level 1, even though the levels go into negative numbers; switch to Lz4 instead.
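That kind of tapering-off is easy to measure on your own data; a sketch assuming the zstd CLI is installed (the sample input here is synthetic, and the shape of the curve is very much corpus-dependent):

```shell
# Compare output sizes across a few levels on one input file.
seq 1 500000 > sample
for lvl in 1 3 6 19; do
  printf 'level %-2s: ' "$lvl"
  zstd -q -"$lvl" -c sample | wc -c
done
# watch where the size stops shrinking meaningfully while the
# compression time keeps climbing
```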
I am a little shocked that they bothered; Arch is rolling release and explicitly does not support partial upgrades (https://wiki.archlinux.org/index.php/System_maintenance#Part...). So to hit this means that you didn't update a rather important library for over a year, which officially implies that you didn't update at all for over a year, which... is unlikely to be sensible.
(I wanted the challenge of running arch in production just to learn, good times)
That sort of attention to detail is what continues to impress me about the Arch methodology.
In comparison, the 0.8% size increase of zstd looks like a bargain.
Do you know if Debian is using parallelized XZ or not with apt / dpkg?
edit: Sorry, my fault, that was decompression RAM I was thinking about, not speed; although I was influenced by my test where, without measuring, both xz and zstd seemed instant.
For those figures, this will give you better total time if your network connection is faster than about 1.25 Mbit/s. For a slow ARM computer with an xz decompress speed of 3 MB/s, the bandwidth threshold for a speedup drops to _dialup_ speeds.
And no matter how slow your network connection is and how fast your computer is you'll never take more than 0.8% longer with this change.
For many realistic setups it will be faster, in some cases quite a bit. Your 54 MB xz host should be about 3% faster if you're on a 6 Mbit/s link, assuming your disk can keep up. A slow host that decompresses xz at 3 MB/s on a 6 Mbit/s link would be a whopping 40% faster.
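The arithmetic behind that kind of break-even claim can be sketched in a one-liner (all numbers here are illustrative: Arch's quoted ~0.8% size increase, and guessed decompress throughputs, not measurements):

```shell
# zstd wins on total time once the decompress seconds it saves exceed
# the extra seconds spent downloading its slightly larger payload.
awk 'BEGIN {
  xz_mb  = 54.00; xz_dec  = 30    # xz payload (MB), decompress MB/s (guessed)
  zst_mb = 54.43; zst_dec = 400   # ~0.8% larger, much faster (guessed)
  saved = xz_mb/xz_dec - zst_mb/zst_dec    # seconds saved decompressing
  extra = zst_mb - xz_mb                   # extra MB to download
  printf "break-even link speed: %.2f MB/s (%.1f Mbit/s)\n",
         extra/saved, 8*extra/saved
}'
```

Links faster than the printed threshold come out ahead in total time with zstd; slower decompressors push the threshold down further.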
This is for html web compression, but the results are similar for other datasets. For internet transfer more compression is better than more decompression speed.
You can make your own experiments incl. the plots with turbobench [2]
[1] https://sites.google.com/site/powturbo/home/web-compression
[2] https://github.com/powturbo/TurboBench
If this was netbsd m68k, you'd probably easily understand.