 # Dithering Explained – For Humans

Dithering is a technique used to mitigate the loss of depth when a signal is quantized. It is often used in computer graphics to reduce the perception of “steps” and increase visual fidelity.

There are different approaches to this, and the original concept is far older than computer graphics. However, even though it’s widely adopted, it often isn’t done right.

In principle, the idea is to apply noise so that spatial resolution makes up for the lack of depth resolution. In computer graphics, this means that if the color depth isn’t sufficient and you don’t have enough distinct colors at your disposal, you can scatter dots of different colors around your image to trick people’s perception into seeing more colors, because the eye “blends” them together. Dithering works for systems with a certain inertia in place – like the human eye.

The big fallacy here, however, is the assumption that noise can be applied after the quantization process. The Wikipedia article on this is technically correct, but potentially misleading.

The proper way to dither is to apply noise as part of the quantization process itself, keeping the output between the quantization values directly above and below the original value – which I’ll explain here.

# The basics

Let’s say we’re starting off with a simple linear function like this: x = y. There is an infinite number of possible values from zero to 1 at potentially infinite sample points. In computer graphics, the value would be the brightness of a pixel, and the sample points would be the pixels on the display. The problem, however, is that there is a limited number of brightness values we can show, and our display also likely has a limited number of pixels. To make the issue easier to understand, I’ll exaggerate it by using a lower color depth and pixel resolution than what monitors usually have. I’m also taking one dimension off, so essentially, I’m demonstrating the dithering process with only one row of pixels showing a linear gradient.

Here’s what would happen on a display with a depth of 4 bits (which gives us 2^4 = 16 possible brightness values) and a resolution of, say, 300 pixels: Yuck! Even though we have a whole 300 pixels at our disposal, we see stairs (which is called “banding”).
These diagrams visualize the brightness value at a row of pixels. On your computer monitor, the nominal and actual values would look like this:

*(charts: nominal, actual)* Because I can’t actually display infinite pixels and greyscale values on your monitor, I used dithering here to visualize how a smooth gradient should look.
The more we reduce the bit depth, the more visible our problem gets: Am I crazy? How are we supposed to show a gradient with 1-bit color depth? That’s just black and white!
Let’s accept the challenge and move on. I’m using the 1-bit example because the problem with the post-quantization noise gets really obvious there.
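To make the banding concrete, here is a minimal sketch (all names are mine, not from the article) of plain quantization: a smooth 0-to-1 gradient crushed to 4 bits, which is exactly what produces the stairs.

```python
# Naive quantization of a smooth gradient: only 2^bits distinct
# brightness levels survive, so neighbouring pixels clump into bands.
def quantize(value, bits):
    """Round a 0..1 brightness value to the nearest representable level."""
    levels = 2 ** bits - 1          # e.g. 4 bits -> 15 steps above zero
    return round(value * levels) / levels

width = 300
gradient = [x / (width - 1) for x in range(width)]   # smooth ramp 0..1

banded = [quantize(v, 4) for v in gradient]
print(len(set(banded)))   # only 16 distinct values remain
```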

# Dithering - the wrong way

Let’s first show why dithering by applying noise as a post-processing step is wrong and why nobody should ever do it:

Value-wise, random noise is basically something that looks like this: However, since the steps we can take are discrete, and considering that we don’t want noise greater than the difference between adjacent brightness values, the noise we actually have available looks more like this: Now, if we apply this 1-step noise to the already crushed values, we get this:

*(charts: 1-bit, 2-bit, 4-bit)* Essentially, the image would be just as terrible as before, plus we get some extra noise.
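Here is a sketch of that broken approach (function names are my own): quantize first, then sprinkle one step of discrete noise on top. Whether a pixel flips no longer depends on the original value, so the banding survives and we only add grain.

```python
import random

def quantize(value, bits):
    levels = 2 ** bits - 1
    return round(value * levels) / levels

def dither_wrong(values, bits):
    """Add 1-step noise AFTER crushing -- the depth is already lost."""
    step = 1 / (2 ** bits - 1)
    out = []
    for v in values:
        crushed = quantize(v, bits)             # information gone here
        noise = random.choice((-step, 0.0, step))
        out.append(min(1.0, max(0.0, crushed + noise)))
    return out

random.seed(1)
gradient = [x / 299 for x in range(300)]
wrong = dither_wrong(gradient, 1)
# At 1 bit, every pixel on the dark half flips white with the same
# probability regardless of its original shade: noise, not a gradient.
```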

So much for why one should never even think about applying the noise after crushing the values down (stop thinking about it!).

Ok… let's talk about a better solution.

# The proper way

The essential component to dithering is to take into account the original, non-quantized values when applying noise. This can be done in multiple ways.

## applying noise before crushing

One way is to apply exactly one quantization step’s worth of noise to the source data before crushing it – half negative, half positive. This means that the fewer steps you have available for your color gradient, the more noise you’ll need to apply. Here’s how that looks:

*(charts: 1-bit, 2-bit, 4-bit)* Now, all we need to do is crush the values by rounding them:

*(charts: 1-bit, 2-bit, 4-bit)* Isn’t it beautiful? We reproduced the gradient!
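The noise-before-crushing approach can be sketched like this (a minimal version, names are mine): one full step of zero-centred noise is added to the original values, and only then are they rounded.

```python
import random

def dither_proper(values, bits):
    """Add one quantization step of noise BEFORE rounding."""
    levels = 2 ** bits - 1
    step = 1 / levels
    out = []
    for v in values:
        noisy = v + random.uniform(-0.5, 0.5) * step   # noise first...
        noisy = min(1.0, max(0.0, noisy))
        out.append(round(noisy * levels) / levels)      # ...then crush
    return out

random.seed(1)
gradient = [x / 299 for x in range(300)]
proper = dither_proper(gradient, 1)
# Even at 1 bit, the density of white pixels now grows smoothly from
# left to right, so the eye averages it back into a gradient.
```

Note that at 1 bit this makes the chance of a pixel coming out white exactly equal to its original brightness, which is what the next section derives directly.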

## randomly picking the higher or lower value based on weights

Awesome! But let me show you a different algorithm, to make it easier to understand why this works.

For any given value from the source data, there is a lower and a higher value that it could be rounded to, unless the source value is already equal to a possible step. Always rounding down would give us stairs. Always rounding up would give us stairs as well, just a step higher. If we just randomly pick the higher or the lower value, we’ll get the same result as adding the noise after crushing, because we didn’t take into account how close the source value is to a possible step.

This means that we have to use weights, increasing the chance to round up the closer the source value is to the higher step. Let’s say we have 100 pixels with a depth of 10 possible steps, and 20 of those pixels have a color value of 3. Now, if we want to reduce the image to a depth of 2 steps, we want roughly 30% of the pixels with a former value of 3 to become 1, and the other 70% to become 0. This means that for each pixel, we need to look at how close it is to the next possible value (in this case, 3/10 of the way), and then use the resulting fraction as the chance of choosing the higher value.

Let's try that!

Here's a chart of the chances to choose the higher value:

*(charts: 1-bit, 2-bit, 4-bit)* If we now use the same random values as above, along with the chances we just generated based on the proximity of the values, we get a number of “votes”. In the following charts, positive means “vote for the higher value”, while negative means “vote for the lower value”:

*(charts: 1-bit, 2-bit, 4-bit)* Now, the only thing we need to do is choose the higher or lower value for each of the crushed values, based on the decisions made:

*(charts: 1-bit, 2-bit, 4-bit)* If that looks the same as the result above, that’s because it is!
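The weighted-pick algorithm can be sketched like this (again, all names are my own): for each source value, find the step below and above it, then pick the higher one with a probability equal to how far the value sits between them.

```python
import random

def dither_weighted(values, bits):
    """Round up or down at random, weighted by proximity to the higher step."""
    levels = 2 ** bits - 1
    out = []
    for v in values:
        scaled = v * levels
        lower = int(scaled)              # step just below (or equal to) v
        fraction = scaled - lower        # how far v is toward the next step
        pick_higher = random.random() < fraction
        out.append((lower + pick_higher) / levels)
    return out

random.seed(1)
gradient = [x / 299 for x in range(300)]
weighted = dither_weighted(gradient, 1)
# Statistically the same as noise-before-rounding: a value 30% of the
# way between two steps comes out as the higher one ~30% of the time.
```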

# Conclusion

I hope this helped you understand why it’s important to do whatever dithering-related noise operation before reducing the depth. There are more advanced dithering algorithms, especially for image processing, that take human factors into account and distribute the noise in a way that’s easier on the eyes, combining colors or even generating the color palette based on the source image. However, they all have one thing in common: they do whatever enhancement they do with the original data.

For reference, I attached a Numbers spreadsheet that allows for some playing with the numbers. Every time a value is changed, the random values are re-generated.
