The most important reason for me is not display on the monitor, but printing the image. I do a lot of large format printing, and printers smash parts of your color space and make some gradient/banding problems stick out like a sore thumb. Dithering is critical when printing!
Gradients with banding can easily show up when you resize a large image down to a smaller size. That's a good reason why resizing of anything less than 16-bits-per-channel images should probably be done in a higher-precision format than the image itself.
I haven't tried to use an error diffusion dither when converting 16 bits per channel down to 8... I'm not sure how much that matters. It might make a difference, and it sounds fun to code either way, but in my experience, a good random number generator suffices to make color bands vanish.
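As a sketch of what that random-dither conversion might look like (`to8bit_dithered` is a made-up helper, not from any linked code):

```python
import random

def to8bit_dithered(v16, rng=random.random):
    """Convert one 16-bit channel value (0..65535) to 8 bits by adding
    uniform noise in [0, 1) before truncating, instead of rounding.
    On average each value still lands on the nearest level, but the
    noise breaks up the hard band edges a plain round() produces."""
    scaled = v16 * 255 / 65535          # map 0..65535 onto 0..255
    return min(255, int(scaled + rng()))
```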
The author touches upon it here, but I think it's worth generalizing further: If you have high or maybe infinite precision in your color values, dithering will look much nicer than simply rounding to the nearest value. A concrete example is a color gradient. If done naively with rounding, color bands will be clearly visible. With dithering, they will be almost impossible to see.
See for example: http://johanneshoff.com/dithering/
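A minimal sketch of that gradient comparison in pure Python, counting value transitions as a crude stand-in for visible band edges:

```python
import random

random.seed(1)
width = 256
# A shallow gradient rising slowly from 10.0 to 14.0, to be quantized
# to integer levels (a stand-in for 8-bit channel values).
gradient = [10.0 + 4.0 * x / (width - 1) for x in range(width)]

rounded = [round(v) for v in gradient]                    # a few hard band edges
dithered = [int(v + random.random()) for v in gradient]   # noise breaks them up

def steps(row):
    """Count transitions between neighbouring pixels."""
    return sum(1 for a, b in zip(row, row[1:]) if a != b)

print(steps(rounded), steps(dithered))
```

The rounded row has only four abrupt steps (clear bands); the dithered row has many tiny ones that average out to the gradient.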
For example, the madVR[2] video renderer uses it for realtime video dithering, to avoid banding in shallow color gradients from 10-bit sources (or from its debanded internal 16-bit representation) on <= 8-bit displays.
[0] http://cv.ulichney.com/papers/1993-void-cluster.pdf [1] http://www.hpl.hp.com/research/isl/halftoning/publications/1... [2] http://madvr.com/
If you're processing the decode at 10 or 16 bits and need to render at 8 bits, it's much better to dither than truncate.
It works nicely with video, and it makes still images (and images you're slowly panning and zooming with the Ken Burns effect [2]) look alive and detailed, as if they were live video.
Another variation of serpentine scanning (aka the boustrophedon transform [3]) you can apply when iteratively dithering an image is to rotate the scan direction 90 degrees each frame: even frames scan horizontally, odd frames scan vertically, and the four scan directions cycle round every four frames. That results in completely uniform diffusion, even when each scan frame only diffuses errors to the next cell.
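A sketch of that rotating scan order (coordinate generation only; the error-diffusion step itself is omitted, and `scan_order` is a made-up name):

```python
def scan_order(width, height, frame):
    """Yield (x, y) cell coordinates for one dithering pass, rotating
    the scan direction 90 degrees each frame, so over a four-frame
    cycle errors get diffused in all four directions instead of
    drifting toward one corner."""
    direction = frame % 4
    if direction == 0:            # rows left-to-right, top-to-bottom
        for y in range(height):
            for x in range(width):
                yield x, y
    elif direction == 1:          # columns top-to-bottom, right-to-left
        for x in reversed(range(width)):
            for y in range(height):
                yield x, y
    elif direction == 2:          # rows right-to-left, bottom-to-top
        for y in reversed(range(height)):
            for x in reversed(range(width)):
                yield x, y
    else:                         # columns bottom-to-top, left-to-right
        for x in range(width):
            for y in reversed(range(height)):
                yield x, y
```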
It spreads the error out over time as well as space, which has a pleasing effect on the eyes, I think. Any one frame has artifacts, but they tend to cancel each other out over time, break up the log jams, and dance around local minima without getting stuck at fixed points.
I've implemented some 8-bit anisotropic heat diffusion cellular automata that exhibited a subtle drift because of the scan order. Rotating the scan order 90 degrees each frame completely eliminated the drifting effect I was getting with a fixed scan order. Here's a discussion about it I had with Rudy Rucker [4], who inspired some of the rules and pointed out the problem, a demo [5], and source [6].
Here's one of my favorite spooky Heisenbugs:
>The original version of this code written in C running on a Sun did have an interesting bug: I was not initializing the "error" accumulator that carried the leftover of the average from cell to cell, so when different kinds of background activities were happening on the Sun, the error accumulator got initialized from the stack frame with a random undefined value! I noticed it when every time I typed to a terminal window, the dithering shivered! It was really spooky until I figured out what was going on!
[1] https://www.youtube.com/watch?v=jb6H14gVWjM
[2] https://en.wikipedia.org/wiki/Ken_Burns_effect
[3] https://en.wikipedia.org/wiki/Boustrophedon_transform
[4] http://donhopkins.com/mediawiki/index.php/CAM6_Simulator
[5] https://github.com/SimHacker/CAM6/blob/master/javascript/CAM...
Turns out there's a paper on it, "MangaWall: Generating Manga Pages for Real-Time Applications" ( https://www.semanticscholar.org/paper/MangaWall-Generating-m... ), and an implementation: https://github.com/zippon/MangaWall - that implementation uses ordered dithering ( https://github.com/zippon/MangaWall/blob/master/src/MangaEng... ), among other things, to help produce a pencil-drawn effect.
Anyway just saying ;) To me at least, pretty fascinating ...
[1] https://forums.tigsource.com/index.php?topic=40832.msg121719...
[2] https://forums.tigsource.com/index.php?topic=40832.msg121280...
(Just in case, and for easier browsing, I took the liberty of uploading a copy to github: https://github.com/akavel/WernessDithering, although I'm not clear on what the license of the code is, unfortunately; that said, I hope the numbers in the matrix are not patented.)
(edit: lol, didn't notice there's another thread on Obra Dinn already on HN, now I'm surprised! :)
http://www.flipcode.com/archives/Texturing_As_In_Unreal.shtm...
Some time ago I experimented with 16-32 color images on websites. Thanks to high-res screens they look great, while saving a lot of data.
The algorithm is based on Sierra Lite, but I added a random element to the direction in which the error is propagated. This removes essentially all dithering artifacts.
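The comment doesn't include code, but one way to add such a random element (a guess at the idea, not the commenter's actual algorithm) is to pick a random scan direction per row and mirror the Sierra Lite kernel to match, so all error still flows to unvisited pixels:

```python
import random

def sierra_lite_random(img, levels=2):
    """Dither a grayscale image (rows of 0..255 floats) in place with
    Sierra Lite error diffusion. Each row is scanned in a randomly
    chosen direction, with the kernel mirrored to match, which breaks
    up the directional artifacts of a fixed scan order."""
    h, w = len(img), len(img[0])
    step = 255 / (levels - 1)
    for y in range(h):
        d = random.choice((1, -1))          # random scan direction for this row
        xs = range(w) if d == 1 else range(w - 1, -1, -1)
        for x in xs:
            old = img[y][x]
            new = min(255, max(0, round(old / step) * step))
            img[y][x] = new
            err = old - new
            # Sierra Lite kernel (mirrored when scanning right-to-left):
            #      *  2/4
            # 1/4 1/4
            if 0 <= x + d < w:
                img[y][x + d] += err * 2 / 4
            if y + 1 < h:
                if 0 <= x - d < w:
                    img[y + 1][x - d] += err / 4
                img[y + 1][x] += err / 4
```

Dithering a uniform mid-grey field this way produces roughly half black and half white pixels, with the average preserved up to edge losses.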
http://codegolf.stackexchange.com/questions/26554/dither-a-g...
The most extreme example would be 'dithering' a grey square into 1) a black square and 2) a white square; rapidly switching between the two might appear as grey instead.
I guess that in practice, the refresh rates of monitors are too low to make it seem anything other than a terrible flickering image, but on old CRTs the effect might work a little better. It's also memory and CPU intensive, but I'd still be curious to see if it could be used successfully, and if it improved the quality compared to 'just' a single dithered image.
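A toy sketch of the idea for a single pixel (`temporal_frames` is a made-up helper): decide per frame whether the 1-bit pixel is on, so the on-fraction over time matches the target intensity, with the on-frames spread as evenly as possible by a Bresenham-style accumulator:

```python
def temporal_frames(intensity, n_frames):
    """Temporal dithering for one pixel: represent a fractional
    intensity (0.0..1.0) on a 1-bit display by turning the pixel on
    for the right fraction of frames, spread evenly over time."""
    frames, acc = [], 0.0
    for _ in range(n_frames):
        acc += intensity
        if acc >= 1.0:          # accumulated enough brightness: emit an 'on' frame
            frames.append(1)
            acc -= 1.0
        else:
            frames.append(0)
    return frames
```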
See http://notes.tweakblogs.net/blog/8712/high-color-gif-images.... for a nice animation
e.g. Photochrome on the Atari ST, was especially impressive at the time: https://www.youtube.com/watch?v=vPsY4P8bnVw
The most extreme version I've seen of this was on the ZX Spectrum, which not only had a very limited 15-colour palette, but was also limited to 2 colours within each 8x8 block of the screen. Some bright spark came up with the idea of flipping rapidly between R, G, and B frames to give (limited) per-pixel RGB. Unfortunately it did flicker quite badly, because of the extreme changes in colour levels (only two levels of each channel), and the fact that it required 3 whole frames to make a single full-colour virtual frame.
Example here (not suitable if you have photosensitive epilepsy!): https://en.wikipedia.org/wiki/File:Parrot_rgb3.gif
I never knew anyone had tried that before on a Spectrum. At first I thought you were just talking about the other trick, getting more than 2 colours per 8x8 by changing the palette as the raster scanned down the screen. The multi-colour parrot is way more adventurous!
I'd love to see the parrot image on an old CRT to get a feeling for what the effect might look like with the phosphor afterglow. Leaving the Spectrum behind and using a bigger palette range, like on the ST, the effect seems much less seizure-inducing, because you can pick closer colours to switch between.
So that doesn't really work for looped animations.
But for single pass you can incrementally build a higher-color image, similar to interlaced loading.[1]
[0] http://nullsleep.tumblr.com/post/16524517190/animated-gif-mi... [1] https://en.wikipedia.org/wiki/GIF#True_color
I remember seeing old STN grayscale LCDs that seemed to work like this (and I even implemented it myself on the TI-83 as a test).
edit: looks like it's called "Frame Rate Control" and used to be common: http://robotics.ee.uwa.edu.au/eyejr/lcd/FRC-information.pdf
http://www.mattgreer.org/articles/sega-saturn-and-transparen...
> For simplicity of computation, all standard dithering formulas push the error forward, never backward. If you loop through an image one pixel at a time, starting at the top-left and moving right, you never want to push errors backward (e.g. left and/or up).
Would the image look a lot different if you dithered it backwards from the bottom right pixel?
Are there dithering algorithms that consider the error in all directions instead of pushing the errors forward only?
The answer seems to be that changing the image parsing direction gets rid of some artifacts but introduces others while not vastly improving on faster and simpler approaches.
Isn't this equivalent to just rotating the image 180 degrees, dithering it, and rotating it back?
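For reference, here's a minimal forward-only Floyd-Steinberg (the classic kernel the quoted text describes), and as far as I can tell the answer is yes: scanning from the bottom-right with the kernel mirrored is exactly the rotate-180/dither/rotate-back trick, since both the scan order and the kernel flip together.

```python
import copy

def fs_dither(img):
    """Forward-only Floyd-Steinberg to 1 bit: error is pushed only to
    pixels that haven't been visited yet (right, and the row below)."""
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255.0 if old >= 128 else 0.0
            img[y][x] = new
            err = old - new
            # classic kernel (weights /16):    *  7
            #                               3  5  1
            for dx, dy, wgt in ((1, 0, 7), (-1, 1, 3), (0, 1, 5), (1, 1, 1)):
                if 0 <= x + dx < w and y + dy < h:
                    img[y + dy][x + dx] += err * wgt / 16
    return img

def rot180(img):
    return [row[::-1] for row in img[::-1]]

# "Dither backwards from the bottom-right" via the rotation trick:
src = [[float((x * 17 + y * 31) % 256) for x in range(16)] for y in range(16)]
backwards = rot180(fs_dither(rot180(copy.deepcopy(src))))
```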
Dithering was one of the assignments. We were required to implement black-white quantization and then Atkinson and Floyd-Steinberg. We were given the freedom to choose our own images.
During development at the dorm, my favourite picture to debug on was pretty racy (think along the lines of the full version of "Lena"). I totally did not intend to put it on the floppy disk...
Not only did I get the 10 - the highest number of points for this assignment - I got +2 on top of that, with a comment from the prof: "for the choice of test images in the best tradition of the field".
-3DFX Voodoo (all models, in 16-bit color depth) let you enable a hardware dithering block (2x2/4x4 ordered dither, zero performance penalty). They did it to save framebuffer space (24-bit textures, 16-bit framebuffer). It made 3dfx graphics look significantly better than nvidia/ati at 16-bit depth. Earlier cards used a 4x1 filter on the output; Banshee and later models gained a 2x2 filter providing "22 bit like quality", as 3dfx called it.
-Back in ~1994 some HP Unix workstations used dithering to produce 'near 23bit color' out of an 8-bit framebuffer https://en.wikipedia.org/wiki/HP_Color_recovery
-All crappy (TN) LCD panels (a staple of the bargain-bin supermarket 1366x768 laptop, and of older 'gaming' full-HD ones) use FRC, which is a form of temporal dithering https://en.wikipedia.org/wiki/Frame_rate_control
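The ordered dithering those Voodoo cards did in hardware is cheap precisely because the threshold depends only on screen position, so there are no framebuffer reads for error accumulation. A 4x4 Bayer sketch (the helper name is mine, not from any of the linked pages):

```python
# Classic 4x4 Bayer matrix; entries 0..15 give each position in the
# tile a distinct threshold rank.
BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def ordered_dither_bit(x, y, value):
    """Ordered (Bayer) dithering of one 8-bit value to 1 bit.  The
    threshold is a pure function of (x, y), so every pixel can be
    processed independently -- which is why it maps so well onto
    hardware with zero performance penalty."""
    threshold = (BAYER4[y % 4][x % 4] + 0.5) / 16 * 255
    return 1 if value > threshold else 0
```

A uniform mid-grey (128) comes out as a checkerboard-like tile with exactly half the pixels on.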
One legitimate use of dithering was in the days of CGA/EGA/other fixed palette hardware. You can relive it here 'Joel Yliluoma's arbitrary-palette positional dithering algorithm': http://bisqwit.iki.fi/story/howto/dither/jy/