- It isn't really a "new format". It's an update to the existing format.
- It is very backwards compatible.
-- Old programs will load new PNGs to the best of their capability. A user will still know "that is a picture of a red apple".
There also seems to be some confusion about how PNGs work internally. Short and sweet:
- There are chunks of data.
-- Chunks have a name, which says what data the chunk contains. A program can skip a chunk it doesn't recognize.
- There is only one image stream.
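For the curious, here is a minimal sketch of what that chunk walk looks like (Python, standard library only; `list_png_chunks` is just an illustrative name):

    import struct
    import zlib

    def list_png_chunks(path):
        """Walk a PNG file chunk by chunk, printing each chunk's name and size."""
        with open(path, "rb") as f:
            assert f.read(8) == b"\x89PNG\r\n\x1a\n", "not a PNG"
            while True:
                length, ctype = struct.unpack(">I4s", f.read(8))
                data = f.read(length)
                (crc,) = struct.unpack(">I", f.read(4))
                assert crc == zlib.crc32(ctype + data), "corrupt chunk"
                print(ctype.decode("ascii"), length)
                # A decoder only needs to understand the chunks it cares about;
                # anything unrecognized has already been read past and can be ignored.
                if ctype == b"IEND":
                    break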
Chris Lilley--one of the original PNG co-authors--has a post with an example HDR image: https://svgees.us/blog/cICP.html It is about halfway down, with the birthday cake. Generally, we tech nerds have phones that are capable of displaying it well. So perhaps view the page on your phone.
What you should look for is the cake, the pink tips in her hair, and the background being more vivid. For me, the pink in the cake was the big giveaway.
There are also the Web Platform Tests (WPT), which we use to validate browser support: https://wpt.fyi/results/png/cicp-chunk.html?label=master&lab...
Although, that image is just a boring teal. See it live in your browser here: https://wpt.live/png/cicp-chunk.html
For an example of APNG, you can use Wikipedia's images: https://en.wikipedia.org/wiki/APNG
But you have a bigger point: I should have live demonstrations of those things to help people understand.
This is great but also has the issue that users might not notice that their setup is giving them a less than optimal result. Of course that is probably still better than not having backwards compatibility.
Edit: It seems the backwards compatibility isn't as great as it could be. Old programs show a washed-out image instead, which sucks. This should have been avoidable in the same way JPEG gain maps work: you would only need updated programs to take advantage of the increased gamut on wider-than-sRGB screens, not to correctly show the colors that fit into sRGB.
However, gain maps are extra data. So there is a trade off.
The reason gain maps didn't make it into the Third Edition is that they aren't yet a formal standard. We have a bunch of the work ready to go once that standard launches.
The big one is adoption. I love JPEG XL and hope it becomes more widely adopted. It is very scary to add a format to a browser because you can never remove it. Photoshop and MSPaint no longer support ICO files, but browsers do. So it makes sense for browsers to add support last, after it is clearly universal. I think JPEG XL is well on its way using this approach. But it isn't there yet, and PNG is.
There is also longevity and staying power. I can grab an ancient version of Photoshop off eBay and it'll support PNG. This also benefits archivists.
As a quick side note on that: I want people to think about their storage and bandwidth. Have they ever hit storage/bandwidth limits? If so, were PNGs the cause? Was their site slow to load because of PNGs? I think we battle on file size as an old habit from the '90s image compression wars. Back then, we wanted pixels on the screen quickly. The slow image loads were noticeable on dial-up. So file size was actually important then. But today?? We're being penny-wise and pound-foolish.
What sort of improvements might we expect? Is there a chance of it rivalling lossless WebP and JPEG XL?
There is still headroom within the existing format; you can see this with PNG optimizers like OptiPNG and pngcrush.
So step 1 is to improve libpng. This requires no spec change.
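As a rough illustration of why step 1 needs no spec change: the image data is just a DEFLATE stream inside the IDAT chunks, so an optimizer can rebuild that stream more aggressively and every existing decoder still reads the result. A toy sketch (real optimizers like OptiPNG and zopflipng also re-try scanline filters and use stronger DEFLATE encoders than zlib, so this naive version may not shrink an already well-optimized file):

    import struct
    import zlib

    def _chunk(ctype, data):
        return struct.pack(">I", len(data)) + ctype + data + struct.pack(">I", zlib.crc32(ctype + data))

    def recompress_png(src, dst, level=9):
        """Re-deflate a PNG's IDAT payload; the filtered scanlines are left untouched."""
        blob = open(src, "rb").read()
        pos, chunks, idat = 8, [], b""
        while pos < len(blob):
            length, ctype = struct.unpack(">I4s", blob[pos:pos + 8])
            data = blob[pos + 8:pos + 8 + length]
            pos += 12 + length                    # 4 length + 4 type + data + 4 CRC
            if ctype == b"IDAT":
                idat += data                      # image data may span several IDAT chunks
            else:
                chunks.append((ctype, data))
        new_idat = zlib.compress(zlib.decompress(idat), level)
        out = [blob[:8]]                          # keep the original signature
        for ctype, data in chunks:
            if ctype == b"IEND":
                out.append(_chunk(b"IDAT", new_idat))   # single IDAT, placed before IEND
            out.append(_chunk(ctype, data))
        open(dst, "wb").write(b"".join(out))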
Step 2 is to enable parallel encoding and decoding. We expect this to increase the file size, but aren't sure how much. It might be surprisingly small (a few hundred bytes?). It will be optional.
Step 3 is the major changes like zstd. This would prevent a new PNG from being viewable in old software, so there is a considerably higher bar for adoption. If we find step 1 got us within 1% of zstd, it might not be worth such a major change.
I don't yet know what results we'll find or if all the work will be done in time. So please don't take this as promises or something to expect. I'm just being open and honest about our intentions and goals.
For the lifetime of PNG so far, a PNG file has almost, but just barely not, been a valid Interchange File Format (IFF) file.
IFF is a great (simple to understand, simple to implement support for, easy to generate, easy to decode, memory-efficient, IO-efficient, relatively compact, highly compressible) metaformat that more people should be aware of.
However, up to this point, the usage of IFF has consisted of:
• some old proprietary game-data and image formats from the 1980s that no modern person has heard of
• some popular-yet-proprietary AV formats [AIFF, RIFF] that nobody would write a decoder for by hand anyway (because they would need a DSP library to handle the resulting sample-stream data anyway, and that library may as well offer container-format support too)
• The object files of an open but uncommon language runtime (Erlang .beam files), where that runtime exposes only high-level domain-specific parsing tooling (`beam_lib`) rather than IFF-general decoding tooling
• An "open-source but corporate-steered" image format that people are wary of allowing to gain ecosystem traction (WebP — which is more-specifically a document in a RIFF container)
• And PNG... but non-conformantly, such that any generic IFF decoder that could decode the other things above, would choke on a PNG file.
IMHO, this is a major reason that there is no such thing as "generalized IFF tooling" today, despite the IFF metaformat having all the attributes required to make it the "JSON of the binary world". (Don't tell me about CBOR; ain't nobody hand-rolling a CBOR encoder out of template strings.)
If you can't guess by now, my wishlist item for PNGv3, is for PNG files to somehow become valid/conformant IFF files — such that the popularity of PNG could then serve as the bootstrap for a real IFF tooling ecosystem, and encourage awareness/use of IFF in new greenfield format-definition use-cases.
---
Now, I've written PNG parsers, and generic IFF parsers too. I've even tried this exact unification trick before (I wanted an Erlang library that could parse both .beam files and PNG files. $10 if you can guess the use-case for that!)
Because of this, I know that "making PNG valid per IFF" isn't really possible by modifying the PNG format, while ensuring that the resulting format is decodable by existing PNG decoders. If you want all the old [esp. hardware] PNG parsers to be compatible with PNGv3s, then y'all can't exactly do anything in PNGv3 like "move the 4-byte CRC inside the chunk as measured by the 4-byte chunk length" or "make the CRCs into their own chunks that reference the preceding record".
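For anyone who hasn't stared at both formats, a side-by-side sketch of the two chunk layouts shows the mismatch (classic IFF/AIFF lengths are big-endian; RIFF flips them to little-endian; the helper names are mine):

    import struct
    import zlib

    def iff_chunk(ctype: bytes, data: bytes) -> bytes:
        """Classic IFF: [4-byte type][4-byte big-endian length][data][pad byte if odd]."""
        return ctype + struct.pack(">I", len(data)) + data + (b"\x00" if len(data) % 2 else b"")

    def png_chunk(ctype: bytes, data: bytes) -> bytes:
        """PNG: [4-byte big-endian length][4-byte type][data][4-byte CRC over type + data]."""
        return struct.pack(">I", len(data)) + ctype + data + struct.pack(">I", zlib.crc32(ctype + data))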
But I'm not proposing that. I'm actually proposing the opposite.
Much of what PNGv2 did in contravention of the IFF spec is honestly a pretty good idea in general. It's all stuff that could be "upstreamed" — from the PNG level, to the IFF level.
I propose: formalizing "the variant of IFF used in PNG" as its own separate metaformat specification — breaking this metaformat out from the PNG spec itself into its own standards document.
This would then be the "Interchange File Format specification, version 2.0" (not that there was ever a formal IFFv1 spec; we all just kind of looked at what EA/Commodore had done, and copied it in our own code since it was so braindead-easy to implement.)
This IFF 2.0 spec would formalize, at least, a version or "profile" of IFF for which PNGv2 images are conformant files. It would have chunk CRCs; chunk attribute bits encoded for purposes of decoders + editors via meaningful chunk-name letter-casing; and an allowance for some number of garbage bytes before the first valid chunk begins (for PNG's leading file signature that is not itself a valid IFF chunk.)
This could be as far as the IFF 2.0 spec goes — leaving IFFv1 files non-decodable in IFFv2 parsers. But that'd be a shame.
I would suggest going further — formalizing a second IFFv2 "profile" against which IFFv1 documents (like AIFF or RIFF files) are conformant; and then specifying that "generic" IFFv2-conformant decoders (i.e. a hypothetical "libiff", not a format-specific libpng) MUST implement support for decoding both the IFFv1-conforming and the PNGv2-conforming profiles of IFF.
It could then be up to the IFF-decoding-tooling user (CLI command user, library caller) to determine which IFFv2 "profile" to apply to a given document... or the IFFv2 spec could also specify some heuristic algorithm for input-document "profile" detection. (I think it'd be pretty easy; find a single chunk, and if what follows its chunk-length is a CRC that validates that chunk, then you have the PNGv2-like profile. Whereas if it's not that, but is instead four bytes of chunk-name-valid character ranges, then you've got the IFFv1-like profile. [And if it's neither, then you've got a file with a corrupted first chunk.])
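A toy version of that sniffing heuristic, just to show how little work it would be (this assumes the caller has already skipped any leading signature/garbage bytes, and the function name is mine):

    import string
    import struct
    import zlib

    PNG_NAME = frozenset(string.ascii_letters.encode())   # PNG chunk names: ASCII letters only
    IFF_NAME = frozenset(range(0x20, 0x7F))               # classic IFF names: printable ASCII

    def sniff_profile(buf: bytes, offset: int = 0) -> str:
        """Guess whether the chunk at `offset` is PNG-style or classic-IFF-style."""
        header = buf[offset:offset + 8]
        if len(header) < 8:
            return "corrupt"
        # PNG-style layout: [length][name][data][CRC over name + data]
        (length,) = struct.unpack(">I", header[:4])
        name = header[4:8]
        end = offset + 8 + length
        if set(name) <= PNG_NAME and len(buf) >= end + 4:
            (crc,) = struct.unpack(">I", buf[end:end + 4])
            if crc == zlib.crc32(buf[offset + 4:end]):
                return "png-like"
        # Classic IFF layout: [name][length][data]; just check the name bytes look sane
        if set(header[:4]) <= IFF_NAME:
            return "iff-like"
        return "corrupt first chunk"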
---
And, if you want to go really far, you could then specify a third entirely-novel "profile", for use in greenfield IFF applications:
• A few bytes of space aren't so precious; we can hash things much faster these days, with hardware-accelerated hashing instructions; and those instructions are for hashes that do a much better job than CRC at ensuring integrity. So either replace the inline CRCs with CRC chunks, or with nested FORM-like container records (WCRC [len] [CRC4] [interior chunk]). Or just skip per-chunk CRCs and formalize a fHsh chunk for document-level integrity, embedding the output of an arbitrary hash algorithm specified by its registered https://github.com/multiformats/multihash "hash function code".
• Re-widen the chunk-name-valid character set to those valid in IFFv1 documents, to ensure those can be losslessly re-encoded into this profile. To allow chunks with non-letter characters to have a valid attribute decoding, specify a document-level per-chunk-name "attributes of all chunks of this type" chunk, that can either be included into a given concrete format's header-chunk specification, or allowed at various points in the chunk stream per a concrete format's encoding rules (where it would then be expected to apply to any successor + successor-descendant chunks within its containing chunk's "scope.") Note that the goal here is to keep the attribute bits in some way or another — they're very useful IMHO, and I would expect an IFF decoder lib to always be emitting these boolean chunk-attribute fields as part of each decoded chunk.
• Formalize the magic signature at the beginning into a valid chunk, that somehow encodes 1. that this is an IFF 2.0 "greenfield profile" document (bytes 0-3); 2. what the concrete format in use is (bytes 4-7). (You could just copy/generalize what RIFF does here [where a RIFF chunk has the semantics of a LIST chunk but with a leading 4-byte chunk-name type], such that the whole document is enclosed by a root chunk — though this is painful in that you need to buffer the entire document if you're going to calculate the root-chunk length.)
I'm just spitballing; the concrete details of such a greenfield profile don't matter here, just the design goal — having a profile into which both IFFv1 and PNGv2 documents could be losslessly transcoded. Ideally with as minimal change to the "wider and weirder/more brittle ecosystem" side [in this case that's IFFv1] as possible. (Compare/contrast: how HTML5 documents are a profile of HTML that supersedes both HTML4 and XHTML1.1 — supporting both unclosed tags and XML-namespaced element names — allowing HTML4 documents to parse "as" HTML5 without rewrites, and XHTML1.1 documents to be transcoded to HTML5 by just stripping some root-level xmlns declarations and changing the doctype.)
W3C requires that we do not break old, conformant specs. Meaning if the next PNG spec would invalidate prior specs, they won't approve it. By extension, an old, conformant program will not suddenly become non-conformant.
I could see a group of people formalizing IFFv2, and adapting PNG to it. But that would effectively be PNGIFF, not PNG. It would be a new spec. Because we cannot break the old one.
That might be fine. But it comes with a new set of problems, like adoption.
Soooo I like the idea, but it would probably be a separate thing. FWIW, it would actually be nice to make a formal IFF spec. If there is no governing body that owns it, we could find an org and gather interest.
I doubt W3C would be the right org for it. ISO subgroup??
It's also questionable how much you actually benefit from common container formats like this, since you need to know the application-specific format contained inside anyway in order to do anything useful with it. It also causes problems where "smart" programs treat files in ways that make no sense, e.g. by offering to extract a .docx file just because it looks like a .zip.
But you did need to remember to export if you didn't want the extra fields increasing the file size. I remember finding Fireworks PNGs on web pages many times back then.
Also, Adobe saves AI files into a PDF (every AI file is a PDF file), and Photoshop can save PSD files into TIFF files (people wonder why these TIFFs have several layers in Photoshop, but just one layer in all other software).
Fireworks was my favorite image editor, I don't know that I've ever found one I love as much as I loved Fireworks. I'm not a graphics guy, but Fireworks was just fantastic.
https://dev.exiv2.org/projects/exiv2/wiki/The_Metadata_in_PN...
This is similar to HTTP request headers, if you're familiar with those. There is a set of standard headers (User-Agent, ETag, etc.), but nobody is stopping you from inventing x-tomtom and sending that along with the HTTP request. And on the receiving end, you can parse and make use of it. Same thing with PNG here.
Yes, a hypothetical user's sprinkler layout "map" or whatever they're working on is actually composed of a few rectangles that represent their house, and a spline representing the garden border, and a circle representing the tree in the front yard, and a bunch of line segments that draw the pipes between the sprinkler heads. Yes, each of those geometric elements can be concisely defined by JSON text that defines the X and Y location, the length/width/diameter/spline coordinates or whatever, the color, etc. of the objects on the map. And yes, OP has a rendering engine that can turn that JSON back into an image.
But when the user thinks about the map, they want to think about the image. If a landscaping customer is viewing a dashboard of all their open projects, OP doesn't want to have to run the rendering engine a dozen times to re-draw the projects each time the page loads just to show a bunch of icons on the screen. They just want to load a bunch of PNGs. You could store two objects on disk/in the database, one being the icon and another being the JSON, but why store two things when you could store one?
Really, I think it's pretty common for tools that work with images generally.
Can you compress it? I mean, theoretically there is this 'zTXt' chunk, but it never worked for me, therefore I'm asking.
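For what it's worth, a zTXt chunk is just a keyword, a NUL separator, a compression-method byte of 0, and a zlib-deflated payload, so it's easy to build by hand when a library is being unhelpful. A minimal sketch (the splice helper assumes the file ends with a well-formed 12-byte IEND chunk):

    import struct
    import zlib

    def ztxt_chunk(keyword: str, text: str) -> bytes:
        """zTXt layout: keyword, NUL separator, compression method 0, deflated text."""
        data = keyword.encode("latin-1") + b"\x00\x00" + zlib.compress(text.encode("latin-1"))
        return struct.pack(">I", len(data)) + b"zTXt" + data + struct.pack(">I", zlib.crc32(b"zTXt" + data))

    def add_ztxt(png: bytes, keyword: str, text: str) -> bytes:
        """Splice the new chunk in just before IEND (the final 12 bytes of a well-formed PNG)."""
        return png[:-12] + ztxt_chunk(keyword, text) + png[-12:]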
Assuming the next-gen PNG will still require a new decoder, they could just call it PNG2.
JPEG-XL already provides everything most people have asked for in a lossless codec. If there is any problem, it is its encoding and decoding speed and resource usage.
The current champion of lossless image codecs is HALIC. https://news.ycombinator.com/item?id=38990568
[1] https://encode.su/threads/4025-HALIC-(High-Availability-Loss...
It looks like LEA 0.5 is the champion.
And HALIC is not even close to the top ten in this [2] lossless image compression benchmark.
[2] https://github.com/WangXuan95/Image-Compression-Benchmark
And this will improve over time, like JPEG encoders and decoders did.
WebP is amazing. But if I were going to label something "state of the art" I would go with JPEGXL :)
Are there use cases for lossless other than archival?
For myself, I use PNG only for computer-generated still images. I tend to use good ol' JPEG for photos.
Apart from widespread codec support, there are three important factors: processing speed, compression ratio, and memory usage. These are weighed together when making a decision (a Pareto frontier). In other words, being the fastest or having the best compression alone does not matter. Treating it otherwise suggests insufficient knowledge and experience of the subject.
HALIC is very good at lossless image compression in terms of speed/compression ratio. It also uses a comically small amount of memory. No one mentioned whether this was necessary or not. However, low memory usage negatively affects both processing speed and compression ratio. You can see the real performance of HALIC only on large (20 MPixel+) images, single- and multi-threaded. An example recent test is below. During these operations, HALIC uses only about 20 MB of memory, while JXL uses more than 1 GB.
https://www.dpreview.com/sample-galleries/6970112006/fujifil...
June 2025, i7 3770k, Single Thread Results
----------------------------------------------------
First 4 JPG Images to PPM, Total 1,100,337,479 bytes
HALIC NORMAL: 5.143s 6.398s 369,448,062 bytes
HALIC FAST : 3.481s 5.468s 381,993,631 bytes
JXL 0.11.1 -e1: 17.809s 28.893s 414,659,797 bytes
JXL 0.11.1 -e2: 39.732s 26.195s 369,642,206 bytes
JXL 0.11.1 -e3: 81.869s 72.354s 371,984,220 bytes
JXL 0.11.1 -e4: 261.237s 80.128s 357,693,875 bytes
----------------------------------------------------
First 4 RAW Images to PPM, Total 1,224,789,960 bytes
HALIC NORMAL: 5.872s 7.304s 400,942,108 bytes
HALIC FAST : 3.842s 6.149s 414,113,254 bytes
JXL 0.11.1 -e1: 19.736s 32.411s 457,193,750 bytes
JXL 0.11.1 -e2: 42.845s 29.807s 413,731,858 bytes
JXL 0.11.1 -e3: 87.759s 81.152s 402,224,531 bytes
JXL 0.11.1 -e4: 259.400s 83.041s 396,079,448 bytes
----------------------------------------------------
I had a very busy time with HALAC. Now I've given it a break, too. Maybe I can go back to HALIC, which I left unfinished, and do better; that is, stronger compression and/or faster. Or I can make it work much better on synthetic images. I could also add a near-lossless mode. But I don't know if it's worth the time I would spend on it.
Strictly true, but e.g. for archival, or for content delivered to many users, compression speed and the memory needed for compression are an afterthought compared to compressed size.
Probably the best news here. While you already can write custom data into a header, having Exif is good.
BTW: Does Exif have magnetometer (rotation) and acceleration (gravity) fields? I often wonder why Google isn't saving this information in the images the camera app saves. It could help so much with post-processing, like leveling the horizon or creating panoramas.
Old decoders and new decoders could now render an image with Exif rotation differently, since it's an optional chunk that can be ignored. And even for new decoders, the spec gives no recommendations for how to use the Exif rotation.
It does say "It is recommended that unless a decoder has independent knowledge of the validity of the Exif data, the data should be considered to be of historical value only.", so hopefully the rotation will not be used by renderers. But it's only a vague recommendation; there's no strict "don't rotate the image", which would be the only backwards-compatible behavior.
With JPEG's Exif, there have also been bugs with the rotation being applied twice, e.g. the desktop environment and the underlying library both doing it independently.
The camera knows which way it's oriented, so it should just write the pixels out in the correct order. Write the upper-left pixel first. Then the next one. And so on. WTF.
Exif fields: https://exiv2.org/tags.html
You could store this in Exif.Photo.MakerNote: "A tag for manufacturers of Exif writers to record any desired information. The contents are up to the manufacturer." I think it can be pretty big, certainly more than enough for 9 DoF position data.
And is being able to read an image without an opt-in tag something that has to be explicitly enabled in the reference implementation's API?
Curious if animated SVGs are also a thing. I remember seeing some JavaScript-based SVG animations (it was an animated chatbot avatar), but not sure if there is any standard framework.
Yes. Relevant animation elements:
• <set>
• <animate>
• <animateTransform>
• <animateMotion>
https://shkspr.mobi/blog/2025/06/an-annoying-svg-animation-b...
This could possibly be used to build full fledged games like pong and breakout :)
Can animated PNG beat av1 or whatever?
[0] like for example these old Windows animations: https://www.randomnoun.com/wp/2013/10/27/windows-shell32-ani...
Animated PNGs can't beat GIF, never mind video compression algorithms.
It's a shame that browser vendors didn't add silent looping video support to the img tag, due to (imo) baseless concerns.
That's not really true. Some websites lie to you by putting .gif in the address bar but then serving a file of a different type. File extensions are merely a convention and an address isn't a file name to begin with so the browser doesn't care about this attempt at end user deception one way or the other.
Nowadays, AVIF serves that purpose best I think.
AVIF is only starting to become widespread, so it can't be used without a fallback if you care about your users. Not sure how it compares to AV1 quality/compression-wise, but hopefully it's not as gimped as WebP, and there will be encoders that aren't as crap as libwebp, which almost everyone uses.
SVG is just HTML5: it has full support for CSS, JavaScript, buttons, web workers, arbitrary fetch requests, and so on (obviously not supported by image viewers or allowed by browsers when the SVG is used as an image).
If you use an <img> tag, svgs are loaded in "restricted" mode. This disables scripting and external resources. However animation via either SMIL or CSS is still supported.
I'm not sure about the tools and DX around animated PNGs. Is that a thing?
You can even have one frame that gets shown if and only if animation is not supported.
Not progressively, though, unless browsers add a new MIME type for it, which they did not bother to do for animated WebP.
Also, while true-color GIFs seem to be possible, they are usually limited to 256 colors per image.
For those reasons alone APNG is much better than GIF.
There is just no need for a PNG update, just adopt JPEG XL.
No one can afford to "just". Five years later and it's only one browser! Crazy.
Browser vendors must deliver first; only then is it okay to admonish an end user or web developer to adopt the format.
JPEG-XL's lossy modular mode is a unique feature which needs a lot more exposure than it has. It is well suited to non-photographic drawings or images that aren't continuous-tone and have never touched any JPEG-like codec. It has different kinds of artifacts than what you typically see in a DCT image codec. Rather than ringing, you get slight pixellation.
JPEG XL has an edge-preserving filter ("EPF") for the purpose of reducing ringing.
I've not tried it on images, but wouldn't zstandard be exceedingly bad at gradients? It completely fails to compress numbers that change at a fixed rate.
Bzip2 does that fine, not sure why: https://chaos.social/@luc/114531687791022934 The two variables (inner and outer loop) could be two color channels that change at different rates. Real-world data will never be a clean i++ like it is here, but more noise surely isn't going to help the algorithm compared to this clean example.
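This is easy to poke at with nothing but the standard library. The sketch below builds an "i++"-style stream of incrementing 32-bit counters and compares zlib and bz2 on it; swapping in the third-party zstandard package would let you reproduce the zstd comparison from the linked post (I haven't verified those numbers myself):

    import bz2
    import struct
    import zlib

    # 1 MiB of "numbers that change at a fixed rate": incrementing 32-bit counters.
    ramp = b"".join(struct.pack("<I", i) for i in range(1 << 18))

    for name, compress in [("zlib -9", lambda d: zlib.compress(d, 9)),
                           ("bz2  -9", lambda d: bz2.compress(d, 9))]:
        print(f"{name}: {len(ramp)} -> {len(compress(ramp))} bytes")

    # To compare against zstd, install the third-party `zstandard` package and add
    # ("zstd", zstandard.ZstdCompressor().compress) to the list above.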
Tell that to Google. They gave up on XL in Chrome[1] and essentially killed its adoption.
It is only a matter of time until the Chrome team has to reverse their decision.
It also has pretty much every feature desired in an image standard. It is future-proofed.
You can losslessly re-compress a JPEG into a JPEG-XL file and gain space.
It is a worthy successor to (while also being vastly superior to) JPEG.
Try opening a HEIC or AV1 or something on a machine that doesn't natively support it down to the OS-level, and you're in for a bad time. This stuff needs to work everywhere—in every app, in the OS shell for quick-looking at files, in APIs, on Linux, etc. If a codec does not function at that level, it is not functional for wider use and should not be a default for any platform.
But this is a feature. Think about all those exploits made possible by this feature. Sincerely, the CIA, the MI-6, the FSB, the Mossad, etc.
"photo scanned in 2025, is about something in easter, before 1940 and after 1920"
For ambiguous dates there is the EDTF Spec[1] which would be nice to see more widely adopted.
[0] https://www.media.mit.edu/pia/Research/deepview/exif.html
Different software reacts in different ways to partial specifications of yyyy/mm/dd, such that you can try some of the cute tricks, but probably only one software package honours it.
And the majors ignore almost all fields other than a core set of one or two, disagree about their semantics, and also do weird stuff with file name and atime/mtime.
cICP is 16 bytes for identifying one out of a "list of known spaces" but they chose not to include a couple of the most common ones. Off to a great start...
I wonder if it's some kind of legal issue with Adobe. That would also explain why EXIF / DCF refer to Adobe RGB only by the euphemism "optional color space" or "option file". [1]
[1] https://en.wikipedia.org/wiki/Design_rule_for_Camera_File_sy...
Maybe iccMAX supports HDR. I'm not sure. In either case, that isn't what PNG supported.
So something new was required for HDR.
How so? As far as I can tell, the ICCv2 spec is very agnostic as to the gamut and dynamic range of the output medium. It doesn't say anything to the extent of "thou shalt not produce any colors outside the sRGB gamut, nor make the white point too bright".
Unless HDR support is supposed to be something other than just the primaries, white point, and transfer function. All the breathless blogspam about HDR doesn't make it very clear what it means in terms of colorspaces.
> Many of the programs you use already support the new PNG spec: ... Photoshop, ...
Photoshop does NOT support APNGs. The PR calls out APNG recognition as the 2nd bullet point of "What's new?"
Am I missing something? Seems like a pretty big mistake. I was excited that an art tool with some marketshare finally supported it.
This worries me. Because presumably, changing the compression algorithm will break backwards compatibility, which means we'll start to see "png" files that aren't actually png files.
It'll be like USB-C but for images.
[1] https://github.com/w3c/png/issues/39#issuecomment-2674690324
https://svgees.us/blog/img/revoy-cICP-bt.2020.png uses the new colour space. If your software and monitor can handle it, you see better colour than I, otherwise, you see what I see.
Now, PNG datatype for AmigaOS will need upgrading.
The PNG format is specifically designed to allow software to read the parts they can understand and to leave the parts they cannot. Having an extensible format and electing never to extend it seems pointless.
This proves OP's analogy regarding USB-C. Having PNG as some generic container for lossless bitmap compression means fragmentation in libraries, hardware support, etc. The reason is that if the container starts to support too many formats, implementations will start restricting themselves to only the subsets the implementers care about.
For instance, almost nobody fully implements MPEG-4 Part 3; the standard includes dozens of distinct codecs. Most software only targets a few profiles of AAC (specifically, the LC and HE profiles), and MPEG-1 Layer 3 audio. Next to no software bothers with e.g. ALS, TwinVQ, or anything else in the specification. Even libavcodec, if I recall correctly, does not implement encoders for MPEG-4 Part 3 formats like TwinVQ. GP's fear is exactly this -- that PNG ends up as a standard too large to fully implement and people have to manually check which subsets are implemented (or used at all).
Yeah, we know. That's terrible.
If you've created an extensible file format, but you never need to extend it, you've done everything right, I'd say.
And considering we already have plenty of more advanced competing lossless formats, I really don't see why "feed a BMP to deflate" needs a new, incompatible spin in 2025.
In an ideal world, yes. In practice however, if some field doesn't change often, then software will start to assume that it never changes, and break when it does.
TLS has learned this the hard way when they discovered that huge numbers of existing web servers have TLS version intolerance. So now TLS 1.2 is forever enshrined in the ClientHello.
So then it was pointless for PNG to be extensible? Not sure what your argument is.
The main use case for PNG is web browsers and all of them seem to be on board. Using old web browsers is a bad idea. You do get these relics showing up using some old version of internet explorer. But some images not rendering is the least of their problems. The main challenge is actually going to be updating graphics tools to export the new files. And teaching people that sRGB maybe isn't good enough any more. That's going to be hard since most people have no clue about color spaces.
Anyway, that gives everybody plenty of time to upgrade. By the time this stuff is widely used, it will be widely supported. So, you kind of get forward compatibility that way. Your browser already supports the new format. Your image editor probably doesn't.
It's not, most images you encounter on the web need better compression.
The main PNG use case is to store lossless images locally as master copies that are then compressed or in workflows where you intend to edit and change them where compressed formats would degrade the more they were edited.
This is news to me. I'm pretty sure the main use case for PNG is lossless transparent graphics.
That being said, they can also do dumb things. However, right at the end of the sentence you quote, they say:
> we want to make sure we do it right.
So there's hope.
That's just changing an implementation detail of the encoder, and you don't need spec changes for that; e.g. there are PNG compressors which support Zopfli for extra gains on the DEFLATE stream (at a not-insignificant cost). This is transparent to the client, as the output is still just a DEFLATE stream.
EG your GPU and monitor both have a USB-C port. Plug them together with the right USB cable and you'll get images displayed. Plug them together with the wrong USB cable and you won't.
USB 3 didn't have this issue - every cable worked with every port.
What was broken was the promise of a "single cable to rule them all", partly due to manufacturers ignoring the requirements of USB-C (missing resistors or PD chips to negotiate voltages, requiring workarounds with A-to-C adapters), and a myriad of optional stuff, that might be supported or not, without a clear way to indicate it.
The first bit of our research is "What can we already make use of which requires no spec update? There are plenty of PNG optimizers. How much of that should go into the typical PNG libraries?"
Same with parallel encoding & decoding. An older image viewer will be able to decode it on one thread without ever knowing parallel decoding was an option.
Here's the worry-a-little part: Everybody immediately jumps to file size as to what image compression is better or worse. That isn't the best take, but it is what it is. So there is pressure to adopt newer technologies.
We often do have a way to maintain some degree of backwards compatibility even when we do this. For example, we can store a downsampled image for old viewers. Then extra, new chunks will know "Mix that with this full scale data, using a different compression".
As you can imagine, this mixing complicates things. It might not be the best option. Sooooo we're researching it :)
I’m not saying this is what will happen — but if I was able to construct a plausible approach to compression in ten minutes, then perhaps it’s a bit early to predict the doom of compatibility.
Also, if you forbid evolving existing formats, the only way to improve is to introduce a new format, and I'd argue that causes even more fragmentation and is more difficult to adopt. Look at all the drama surrounding JPEG XL.
> Many of the programs you use already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...
It might be too late to rename png to .png4 or something. It sounds like we're using the new png standard already in a lot of our software.
Back then, there were no libraries in C# for it, but it's actually quite easy to make APNG from PNGs directly by writing chunks with correct headers, no encoders needed (assuming PNGs are already encoded as input).
https://github.com/NightElfik/Malsys/blob/master/src/Malsys....
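The same trick is straightforward in any language. Below is a rough Python sketch of the idea (not the linked C# code); it assumes every input PNG has identical dimensions and pixel format, ignores palettes and ancillary chunks, and uses a fixed per-frame delay:

    import struct
    import zlib

    PNG_SIG = b"\x89PNG\r\n\x1a\n"

    def chunk(ctype, data):
        return struct.pack(">I", len(data)) + ctype + data + struct.pack(">I", zlib.crc32(ctype + data))

    def ihdr_and_idat(png):
        """Pull the IHDR payload and the concatenated IDAT payload out of an encoded PNG."""
        pos, ihdr, idat = 8, None, b""
        while pos < len(png):
            length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
            data = png[pos + 8:pos + 8 + length]
            pos += 12 + length
            if ctype == b"IHDR":
                ihdr = data
            elif ctype == b"IDAT":
                idat += data
        return ihdr, idat

    def build_apng(frames, delay_num=1, delay_den=10, loops=0):
        """Assemble an APNG from a list of already-encoded, same-format PNG byte strings."""
        ihdr, first_idat = ihdr_and_idat(frames[0])
        width, height = struct.unpack(">II", ihdr[:8])
        seq = 0                                   # fcTL and fdAT share one sequence counter

        def fctl():
            nonlocal seq
            data = struct.pack(">IIIIIHHBB", seq, width, height, 0, 0,
                               delay_num, delay_den, 0, 0)   # dispose_op=0, blend_op=0
            seq += 1
            return chunk(b"fcTL", data)

        out = [PNG_SIG, chunk(b"IHDR", ihdr),
               chunk(b"acTL", struct.pack(">II", len(frames), loops))]
        out += [fctl(), chunk(b"IDAT", first_idat)]          # first frame stays a plain IDAT
        for frame in frames[1:]:
            _, idat = ihdr_and_idat(frame)
            out.append(fctl())
            out.append(chunk(b"fdAT", struct.pack(">I", seq) + idat))
            seq += 1
        return b"".join(out) + chunk(b"IEND", b"")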
While I welcome that there is now PNG with animations, I am less impressed about how Mozilla chose to push for it.
Using PNG's magic numbers and pretending to existing software that it is just a normal PNG? That is the same mindset that led to HTML becoming tag soup. After all, HTML with a <blink> tag is still HTML, no?
I think they could have achieved animated PNG standardization much faster with a more humble and careful approach.
> PNG is pronounced “ping”
See the end of Section 1 [0]
[1] https://edition.cnn.com/2013/05/22/tech/web/pronounce-gif
Not sure I'll bother to reprogram myself from “png”, “pung”, or “pee-enn-gee”.
Surely they aren't releasing a new, incompatible version and expecting us to pretend it's the same format...?
> This updates the existing image/png Internet Media type
whyyyyyyy
We went to pretty extreme lengths to make sure old software worked with the new changes. Effectively, the limit will be the software, not the image.
For example, you can imagine some old software that is unaware of color spaces and treats everything as sRGB. There is nothing we can do to make that software show a Display P3 correctly. However, we can still show the image well enough that a user understands "that is a red apple".
Society doesn't need a new image format. I'd wager to say we don't need any new multimedia format. Big corporate entities do, and have been churning them out at a steady pace.
Look at poor webp - a format pushed by the largest industry players - and the abysmal everyday use it gets, and the hate it generates.
They say it's technically compatible since older image decoders should recognize the PNG file is using a different compression algorithm than the default.
> Many programs already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...
This is intentionally ignoring the fact that there are countless PNG decoders out in the wild, many using libpng, the standard decoder, last updated 6 years ago; and they will not be able to read the new PNG v2 files.
They should have used a different file extension, PNG2, to distinguish this incompatible format. Otherwise, users will be confused about why their newly saved PNG file cannot be read by certain existing programs.
Estimates are that 95% of Internet users have a browser that supports WebP and that ~25% of the top million websites serve WebP images. I wouldn't call that abysmal.
> Many […] programs […] already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...
> Plus, you saw some broadcast companies in that list above. Behind the scenes, hardware and tooling are being updated to support the new PNG spec.
We need good video formats however. Video makes up most of the global internet traffic, probably accounts for a good part of global storage capacity too. Even slightly better compression will have a massive impact.
We jumped through quite a lot of hoops to make sure old software will be able to display new images. They simply won't display them optimally. But for the most part, that would be because the old software wouldn't display images optimally anyway. So the limit was the software, not the format.
What I mean by this is old software that treats everything as sRGB wouldn't correctly show a Display P3 image anyway. But we made sure it will still display the image as correctly as it could.
You'll never be able to faithfully represent an HDR image on a non-HDR system, but you'll still see an image.
What about it?
"Lossless WebP is typically 26% smaller than PNG, while lossy WebP can be 25-34% smaller than JPEG at equivalent quality levels"
This literally saves hundreds of thousands in cost, bandwidth, and electricity every month on the internet. In fact, I strongly believe that this is one of the greatest contributions from Google to society, just like ZSTD from Facebook.
But you can absolutely have an SDR image encoded using a large color space. So I am not sure why the author talks about color primaries when trying to justify HDR… I still don't know what kind of HDR images this new PNG variant can encode.
PNG is a highly structured file format internally. It borrows design ideas from formats like EA's Interchange File Format in that it contains lists of chunks with fixed headers encoding chunk type and length. Decoders are expected to parse them and ignore chunk types they do not support.
There is also some leeway for how encoding is done as long as you end up with a valid stream of bits at the end (called the bit stream format), so encoders can improve over time. This is common in video formats. I don’t know if a lossless image format would benefit much from that.
In this case there could be an embedded reduced-colour-space image next to an extended-colour-space one.
Lossless AVIF is not competitive.
However, lossless WEBP does not support indexed color images. If you need palettes, you're stuck with PNG for now.
I designed the article to be accessible and understandable for the average person. So I took some liberties like showing only HDR primaries and not deep diving into HDR transfer functions. People understand the primaries intuitively.
But you are right that a wide color image could also use those same primaries without being HDR.
My goal was to be as truthful as possible while still being digestible at a glance.
In the article, I linked to Chris Lilley's post which explains it more thoroughly for the technical people.
After 20 years of success, we can't resist the temptation to mess with what works.
Pleasantly surprised.
I did skim through the spec; it seems most of it is related to cleanup and optional blocks, so it seems PNG is still safe. Am I wrong? (Asking those who did dive into the new spec deeply.)
Is this written exactly for (1) people who implement/maintain this and, I say this with love, (2) nerds? Or will there be effects beyond a microscopic improvement in storage + latency?
PNG was a huge deal when it was new, mostly because of all the headaches Unisys was giving everyone over GIF compression patents. This new standard is mostly of interest to people that have reason to care about what format their image data is in.
PNG is popular with some Commercial Application developers, but the exposure and color problems still look 1980's awful in some use-cases.
Even after spending a few grand on seats for a project, one still gets arrogant 3D clown-ware vendors telling people how they should run their pipeline with PNG hot garbage as input.
People should choose EXR more often, and pick a consistent color standard. PNG does not need yet another awful encoding option. =3
What are you talking about? It's a bitmap. It has nothing to do with "exposure and color problems."
!!!
I'm retired and making zero money here. (I'm actually losing money on it. Wish I had a company sponsoring me for the flights and hotels for meetups.)
All participants are required to not patent any piece of it. We work hard to make sure we only reference open standards. (This one is quite tricky. We have to convince other standard orgs to make their stuff free.)
I could see the argument for getting around a gate. But fwiw I don't think that's the case :)