Likely the format with the best chance of overthrowing the jpg/gif/png incumbents is AVIF. Since it's based on AV1, you'd get hardware-accelerated decoding/encoding once AV1 hardware becomes standard, and browser support will be trivial to add once AV1 has wide support.
Compression-wise, AVIF performs at about the same level as FLIF (15-25% better than webp, depending on the image), and is also royalty free. The leg up it has on FLIF is that the Alliance for Open Media[1] is behind it, a consortium of companies including: "Amazon, Apple, ARM, Cisco, Facebook, Google, IBM, Intel Corporation, Microsoft, Mozilla, Netflix, Nvidia, Samsung Electronics and Tencent."
I'm really excited for it and I hope it actually gets traction. It'd be lovely to have photos / screenshots / gifs all able to share a common format.
I used to think the same, but I now think JPEG XL is poised to be the 'winner' among next-gen image codecs. It's royalty free, has great lossy and lossless compression that's said to beat the competition, and provides a perfect upgrade path for existing JPEGs: it can losslessly recompress them into the JPEG XL format with a ~20% size decrease (courtesy of the PIK project).
It's slated for standardisation within a couple of weeks; it will be very interesting to see large-scale comparisons of this codec against the likes of AVIF and HEIF.
AVIF (and its image sequences) seems to be fairly complicated. Here are a few comments [1] about it:
"Given all that I'm also beginning to understand why some folks want something simpler like webp2 :)"
"Despite authoring libavif (library, not the standard) and being a big fan of the AV1 codec, I do occasionally find HEIF/AVIF as a format to be a bit "it can do anything!", which is likely to lead to huge or fragmented/incomplete implementations."
Anyway, instead of adding FLIF, AVIF, BPG, and lots of similar image formats to web browsers, I think one good format is enough, and JPEG XL might be it. Once something has been added to web browsers, it can't be removed.
Safari hasn't added support for WebP (which is good; there's no need for WebP once AVIF/JPEG XL is out) and it hasn't added support for HEIF (which is weird, considering Apple is using it on iOS), but maybe they know that there's no need to rush.
[1] https://bugs.chromium.org/p/chromium/issues/detail?id=960620...
(Speaking as someone who's seen several open, licensing-unencumbered image/video/audio formats fail to get traction with a majority of browsers).
Given the way GIF is used nowadays, the only reason it still exists is that it lets you put a video in most of the places where you are only allowed to put pictures. For some weird reason many such websites still won't let you upload an MP4 file, despite auto-converting the GIFs you upload to MP4/AVC. From a technical point of view there is hardly a reason for GIF to be used at all.
I also believe this was due to some license choices in the past.
But in this time of responsive websites it is a great format! Just download the amount of pixels you need to form a sharp image and then skip loading the rest.
I really hope this becomes mainstream one day.
That said, lossless codecs have it easier because you can freely transcode between them as the mood takes you, with no generation loss, so there's less lock-in. For example, FLAC is dominant in audio, but there are a variety of other smaller lossless formats which still see use. Nobody much minds.
The hardest part is figuring out which patents they infringe, and how to circumvent them.
The UX concern you're describing doesn't necessarily have to have anything to do with the implementation details themselves, as demonstrated by sites like imgur rebranding silent mp4/webms as "gifv". As long as the new format makes it reasonably easy to access data about its content (e.g., header field indicating animation/movement), there shouldn't be any issue with collapsing the use cases into a single implementation and simultaneously addressing the UX concern you mention.
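To make the "header field indicating animation" point concrete: AVIF files are ISO-BMFF containers, and the leading `ftyp` box already carries a brand that distinguishes still images (`avif`) from image sequences (`avis`). A minimal sketch of reading those brands, using a synthetic header for illustration (the byte layout follows ISO-BMFF; the example file contents are made up):

```python
import struct

def ftyp_brands(data: bytes):
    """Parse the leading ISO-BMFF 'ftyp' box: size(4) type(4)
    major_brand(4) minor_version(4) compatible_brands(4*n)."""
    size, boxtype = struct.unpack(">I4s", data[:8])
    if boxtype != b"ftyp":
        raise ValueError("no ftyp box at start of file")
    major = data[8:12].decode("ascii")
    compat = [data[i:i + 4].decode("ascii") for i in range(16, size, 4)]
    return major, compat

# Hypothetical header for an AVIF image sequence: major brand 'avis'
# (sequence) rather than 'avif' (still image).
hdr = struct.pack(">I4s", 24, b"ftyp") + b"avis" + b"\x00\x00\x00\x00" + b"avifmif1"
```

A UI could branch on the major brand alone (e.g. show a play indicator for `avis`) without decoding any image data.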
Here are some supporting implementations: https://github.com/AOMediaCodec/av1-avif/wiki#demuxers--play...
If you want to check how it looks in a supported browser: https://kagami.github.io/avif.js/
I was also digging around and, as OP mentioned, found it an interesting topic.
Modular mode in JPEG XL also has MA trees, but the bitstream is organized in a way that allows parallel decode (as well as efficient cropped decode).
In contrast, with JPEG XL, I can simply use the high quality setting once per image and be done with it, trusting that I will get a faithful image at a reasonable size.
> Keep in mind that the author of FLIF works on the new FUIF (https://github.com/cloudinary/fuif), which will be part of JPEG XL. So FLIF will probably be deprecated soon. And as JPEG XL is also based on Google's PIK, there is a high probability that Google will support this new format in their Blink engine.
(Compiled to WebAssembly, not by me)
I'm very excited about JPEG XL, it's a great codec that has all the technical ingredients to replace JPEG, PNG and GIF. We are close to finalizing the bitstream and standardizing it (ISO/IEC 18181). Now let's hope it will get adoption!
[0] https://github.com/cloudinary/fuif [1] https://jpeg.org/jpegxl/index.html
I'm not sure if putting it all under one standard will make it hard to adopt, and/or if AVIF (based on AV1) will just become the new open advanced image format first. Interesting to see in any case.
Brief presentation: https://www.itu.int/en/ITU-T/Workshops-and-Seminars/20191008...
Longer presentation with more of the rationales for choices: https://www.spiedigitallibrary.org/conference-proceedings-of...
Draft standard: https://www.researchgate.net/publication/335134903_Committee...
Play in your browser: http://libwebpjs.appspot.com/jpegxl/
> Input: 61710Bytes => JPEGXL: 35845Bytes 0.58% size of input
I think someone forgot to multiply by 100. (too bad, I mean 172x compression would be even more amazing!)
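For the record, the arithmetic behind that correction:

```python
# Figures quoted from the demo page output above.
in_bytes, out_bytes = 61710, 35845
ratio = out_bytes / in_bytes           # ~0.581
print(f"{ratio:.1%} of input size")    # the page should say 58.1%, not 0.58%
```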
The reason we have PNG and JPEG is that they are, all in all, more than good enough. Yes, the dreaded "good enough" argument surfaces again stronger than ever. They are also easy to understand, i.e. use JPEG for lossy photos and PNG for pixel-exact graphics. But most importantly they both compress substantially in comparison to uncompressed images (like TIFF) and both have long ago reached the level of compression where improving compression is mostly about diminishing gains.
As there's less and less data left to compress further the compression ratio would need to go higher and higher for the new algorithm to make even a dent in JPEG or PNG in any practical sense.
Also, image compression algorithms try to solve a problem that has been gradually made less and less important each year with faster network connections. Improvements in image compression efficiency are way outrun by improvements in the network bandwidth in the last 20 years. The available memory and disk space have grown enormously as well.
For example, it's not so much of a problem if a background image of a website compresses down to 500Kb rather than 400Kb because the web page itself is 10M and always takes 10 seconds to load regardless of which decade it is. If you could squeeze half a megabyte off the website's image data the site wouldn't effectively be any faster because of that (but maybe marginally so, allowing the publisher to add another half-megabyte of ads or other useless crap instead).
Compression is still majorly important on mobile. Mobile coverage is mostly not great except maybe in bigger cities where you get to share the coverage with millions of others. Also mobile providers still throttle connections, bill per GB, etc. So, it matters. E.g. Instagram adopting this could be a big deal. All the major companies are looking to cut bandwidth cost. That's also what's driving progress for video codecs. With 4K and 8K screens becoming more common, jpeg is maybe not good enough anymore.
It really needs a giant caveat saying "lossless". I mean, that's still great and impressive, but it clearly doesn't erase the need for a user to switch formats as a lossless format is still not suitable for end users a lot of the time.
(It does have a lossy mode, detailed on another page, but they clearly show it doesn't have the same advantage over other formats there.)
Seems to me like they're doing a pretty decent job of communicating that it's lossless.
What the first graph shows me is that this format is slightly better than WebP in one aspect, while ignoring half of WebP's other features. Considering WebP is already far ahead in terms of adoption, I'm happy sticking with that instead. It's natively supported in most browsers and in a lot of graphics software, which is the real uphill battle for a new file format.
FLIF relies on progressive loading instead of lossy compression, and still beats BPG and WebP except on the very low end.
https://github.com/UprootLabs/poly-flif
It weighs 77kB gzipped, which is a no-no in my book. Jesus, my current Mithril SPA weighs 37kB gzipped. Not the JS bundle, the complete application.
77kb gzipped is massive considering it’s only doing one thing. If I want my website to load in ~100 milliseconds or less (or even 1 second or less!), I absolutely do need to pay attention to all the libraries I add.
I can and do criticize software developers for making hideously bloated websites because they don’t pay attention to what they add. Not only are a lot of modern websites wasteful, they’re painful or outright useless on slow mobile connections—not a problem for software developers on fibre networks, beefy dev machines, etc.
I wonder if tANS [0] could be used instead of arithmetic coding for (presumably) higher performance. Although I guess the authors must be aware of ANS. Would be interesting to hear why it wasn't used instead.
[0]: Tabled asymmetric numeral systems https://en.wikipedia.org/wiki/Asymmetric_numeral_systems#tAN...
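For intuition, here's a toy range-variant ANS (rANS) round-trip using Python's arbitrary-precision integers. No renormalization, tables, or bit I/O, so it only shows the core state-update idea, not the fast tabled (tANS) formulation the comment asks about; the frequency table is made up:

```python
# Toy rANS: encoder and decoder share a frequency table; the integer
# state x grows as symbols are pushed on, shrinks as they are popped.
freqs = {"a": 3, "b": 1}               # hypothetical symbol frequencies
total = sum(freqs.values())
cum, c = {}, 0
for s, f in freqs.items():             # cumulative frequency starts
    cum[s] = c
    c += f

def encode(symbols):
    x = 1
    for s in reversed(symbols):        # encode in reverse so decode reads forward
        f = freqs[s]
        x = (x // f) * total + cum[s] + (x % f)
    return x

def decode(x, n):
    out = []
    for _ in range(n):
        slot = x % total               # which symbol's frequency slice are we in?
        s = next(t for t in freqs if cum[t] <= slot < cum[t] + freqs[t])
        x = freqs[s] * (x // total) + slot - cum[s]
        out.append(s)
    return "".join(out)
```

The decode step exactly inverts the encode step, which is why ANS gets arithmetic-coding-like compression from table lookups and integer ops.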
Basically, if you want to be able to easily view the image on a mobile device or on the web, the format needs their blessing.
Also, a GPL-licensed encoder will in no way stop incompatible extensions.
My intuition was informed by choosing FLAC for my music collection ~15 years ago, and that working out fantastically. If a better format does come along, or if I change my mind, I can always transcode.
Also, why not PNG?
I don't think PNG would have provided meaningful compression, due to the greyscale. If FLIF didn't exist, I certainly could have used PNG, it being nicer than PGM. But using FLIF seemed like a small compromise to make for going lossless.
JPEG would have sufficed, but JPEG artifacts have always bugged me. I also considered JPEG2000 for a bit, which left me with a concern of how stable/future-proof are the actual implementations. Lossless is bit-perfect, so that concern is alleviated.
The article claims 43% improvement over typical PNGs, if you have a lot of images that's pretty significant.
EDIT: Their reference JS polyfill implementation is LGPLv3 as well, which may further harm adoption.
I'm gonna make enemies....
That's FUD and BSD bullshit.
LGPL is not GPL. You can link whatever the fuck you want to an LGPL library, including your proprietary code, as long as you link dynamically.
Gtk+ uses LGPL, GLib uses LGPL, GNOME uses LGPL, Qt uses LGPL, Cairo uses LGPL, and millions of applications link against these without problems.
I'm not aware of any case in history where a company was attacked, or frightened away from using an LGPL library in its codebase.
It would be good to stop the FUD, except if you need to feed your lawyer.
The LGPL is hardly "restrictive". If you aren't modifying the code (and for a codec, if you are, I have questions), then you've nothing to worry about.
Regarding license compatibility, which actually existing browser are you worried about the licensing terms of?
I could totally see a universe where something like FLIF becomes a de-facto standard in image archiving, particularly for extremely large images. 33% smaller than PNG is a measurable savings in image archiving.
[0] https://en.wikipedia.org/wiki/Huffyuv [1] https://en.wikipedia.org/wiki/Lossless_JPEG [2] https://en.wikipedia.org/wiki/FFV1
=====
EDIT:
I forgot to mention, there are plenty of non-archival formats that get traction even with incomplete browser support. The MKV container format is only partially supported in browsers (with WebM), but it's still a popular format for any kind of home video.
Along the same lines, I suspect BMP+LZMA would likely be beaten by BMP+PPM or BMP+PAQ, the current extreme in general-purpose compression.
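To make the BMP+general-purpose-compressor idea concrete, here's a rough sketch pitting deflate (the algorithm inside PNG) against LZMA on a synthetic pixel buffer. Real BMP+LZMA would compress the whole file, and PNG adds predictive filters before deflate, so treat this as illustration only; the gradient data is made up:

```python
import lzma
import zlib

# Hypothetical stand-in for decoded BMP pixel data: a smooth gradient,
# the kind of redundant content lossless image compressors exploit.
pixels = bytes((x + y) % 256 for y in range(256) for x in range(256))

deflate_size = len(zlib.compress(pixels, 9))      # rough proxy for PNG's core
lzma_size = len(lzma.compress(pixels, preset=9))  # rough proxy for BMP+LZMA
print(len(pixels), deflate_size, lzma_size)
```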
Not very surprising imho. PNG was never a strong compression format to begin with.
As an unencumbered open source technology, it should breeze through legal's OK pretty quickly, and getting it integrated could certainly take a bit of time, but should just be part of the FLIF roadmap itself, if the idea is to actually get this adopted.
You don't set out to come up with "one format to rule them all" and then not also go "by implementing libflif and sending patches upstream to all the open source browsers that we want to see it used in" =)
- multi-layer (overlays), with named layers
- high bit depth
- metadata (the JXL file format will support not just Exif/XMP but also JUMBF, which gives you all kinds of already-standardized metadata, e.g. for 360)
- arbitrary extra channels
All that with state-of-the-art compression, both lossy and lossless.
TIFF acknowledges EXIF and IPTC as first class data. PNG added EXIF data as an official extension a couple of years ago and I know ExifTool does support it but I'd want to check all applications in a workflow for import/edit/export support before trusting it.
TIFF supports multiple pages (images) per file and also multilayer images (ala photo-editing).
TIFF supports various colour spaces like CMYK and LAB. AFAIK, PNG only supports RGB/RGBA so for image or print professionals, that could be a non-starter.
So I get why PNG can't warm photographers' hearts yet. Witness still the most common workflows: RAW->TIFF & RAW->JPEG.
https://uprootlabs.github.io/poly-flif/
If you compare with "same size JPG" and set the truncation to 80%, JPG appears to win out in terms of clarity of the image.
https://www.dw.com/en/is-netflix-bad-for-the-environment-how...
See the responsive part in https://flif.info/example.html
We can imagine decoding taking into account battery save mode, bandwidth save mode...
> A FLIF image can be loaded in different ‘variations’ from the same source file, by loading the file only partially. This makes it a very appropriate file format for responsive web design. Since there is only one file, the browser can start downloading the beginning of that file immediately, even before it knows exactly how much detail will be needed. The download or file read operations can be stopped as soon as sufficient detail is available, and if needed, it can be resumed when for whatever reason more detail is needed — e.g. the user zooms in or decides to print the page.
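A sketch of how a client could drive that over ordinary HTTP byte-range requests (the URL is hypothetical and nothing is fetched here; we only build the request to show the mechanism):

```python
import urllib.request

# Ask the server for only the first 16 KiB of the (hypothetical) image.
# A progressive format like FLIF can render a preview from this prefix,
# then issue another Range request if the user zooms in or prints.
req = urllib.request.Request("https://example.com/photo.flif")
req.add_header("Range", "bytes=0-16383")
print(req.get_header("Range"))
```

A server that honors Range replies with 206 Partial Content, so the browser pays only for the detail it actually displays.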
- SIMD-optimised encoder/decoder for major programming languages;
- stable photoshop and gimp import/export plugins, supporting layers, animation and transparency features;
- metadata reading/writing library for major programming languages;
- firefox and chrome integration at least.
Simply search "webp" in the codebase, and you can add another format like that.
I wonder if the Chromium developers would accept a patch for it?
Secondly, this is the Lesser GPL, which means that only modifications to the FLIF implementation itself have to be free. It can still be linked in proprietary programs as long as they don't make any modifications.
Ah, thanks for the correction.
I went through a legal headache years ago releasing some software, and our lawyers strongly urged against *GPL and pushed us to Apache, because *GPL would have severely limited our opportunities with Fortune 500 companies. Apparently many prohibit anything with the letters GPL in the license, despite the software being free and us charging only for services. It was a long, headspinning debate, and due to mounting legal fees we went with Apache.
Wrong, mostly.
Any changes you make to the encoder library itself would be LGPL, but if all you are doing is calling the library from other code then that is not an issue.
If you make a change to the library, even as part of a larger project, nothing but that change is forced to be LGPL-licensed. If your update is a bit of code you also use elsewhere, then as long as you own that code, it does not force the elsewhere to be LGPL: while you are forced to LGPL the change to the library, there is no stipulation that you can't dual-license and use the same code under completely different terms outside the library.
I would appreciate a reference implementation in Rust. Or, if not intended for immediate linking, in something like OCaml or ATS. Clarity and correctness are important in a reference implementation, and they are harder to achieve using C.