Dithering is a technique for mitigating the loss of depth when quantizing a signal, and it is often used in computer graphics to reduce the perception of visible “steps” and increase visual fidelity.
There are different approaches to this, and the original concept is way older than computer graphics. However, even though dithering is widely adopted, it often isn’t done right.
In principle, the idea is to apply noise so that spatial resolution makes up for the lack of depth resolution. In computer graphics, this means that if the color depth isn’t sufficient and you don’t have enough distinct colors at your disposal, you can scatter dots of the colors you do have around the image to trick the viewer into seeing more colors, because the eye “blends” neighboring dots together. Dithering works for any system with a certain inertia in place, like the human eye.
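To make the blending idea concrete, here is a minimal sketch in Python (not from the article; the names and the 64×64 patch size are purely illustrative). With only two output levels available, a patch whose pixels are switched on with some probability still averages out to an in-between brightness:

```python
import random

# Illustrative sketch: only two output levels exist (0 = black, 1 = white),
# yet a patch whose pixels are white with probability 0.25 still averages
# out to 0.25, which is roughly what the eye perceives when the dots are
# small enough.
random.seed(0)
target_brightness = 0.25
patch = [1 if random.random() < target_brightness else 0
         for _ in range(64 * 64)]

print(sum(patch) / len(patch))  # ~0.25, even though every pixel is 0 or 1
```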
The big fallacy here, however, is the assumption that noise can be applied after the quantization process. The Wikipedia article on this is technically correct, but potentially misleading.
The proper way to dither is to apply the noise as part of the quantization process itself, keeping each dithered brightness between the quantization levels directly above and below the original value, as I’ll explain here.
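As a rough illustration of that difference, the following sketch (again Python; LEVELS, dither_wrong and dither_right are hypothetical names, and uniform noise one quantization step wide is an assumption) quantizes a brightness of 0.4 to four levels. Adding the noise before quantization keeps every output pixel on one of the two levels bracketing 0.4 and makes the average come out right; adding it after merely makes the already-banded value noisy:

```python
import random

LEVELS = 4  # quantize brightness to 4 levels (2 bits), purely illustrative

def quantize(v):
    """Round a [0, 1] brightness to the nearest representable level."""
    step = 1.0 / (LEVELS - 1)
    return round(v / step) * step

def dither_wrong(v):
    """Noise added *after* quantization: the original value is already
    gone, so averaging many pixels just gives back the banded level."""
    noise = (random.random() - 0.5) / (LEVELS - 1)  # one step wide
    return quantize(v) + noise

def dither_right(v):
    """Noise added *before* quantization: every output lands on one of
    the two levels bracketing v, and the average converges to v."""
    noise = (random.random() - 0.5) / (LEVELS - 1)  # one step wide
    return quantize(v + noise)

random.seed(0)
v = 0.4
right = [dither_right(v) for _ in range(10000)]
wrong = [dither_wrong(v) for _ in range(10000)]

print(sorted(set(right)))       # only the two levels just below and above 0.4
print(sum(right) / len(right))  # ~0.4: the average recovers the original value
print(sum(wrong) / len(wrong))  # ~0.333: the average is still the banded level
```

The point of the comparison is that noise applied after quantization carries no information about the original value, so no amount of spatial averaging can recover it, whereas noise applied before quantization turns the rounding error into something that averages away.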