Background#

Some time ago, I got myself one of these black-and-white e-ink displays without any support for greyscale. That is, each pixel on the e-ink can only be either black or white, with no scale in between. This worked fine for TrueType font rendering, but I also wanted to be able to output images, and for that I had to be creative.

Frame

Now, there were a lot of specific problems related to the actual version of the e-ink screen that I got, but this post in particular is not going to focus on those. They might be covered in a future post if I get around to it. For now, this post is only going to cover dithering, and more specifically Floyd-Steinberg and the algorithms derived from it.

Some understanding of programming is assumed of the reader. For the examples and pseudo code, I will use C/C++ and CImg. However, the concepts should be transferable to your programming language of choice.

Problem#

Ok, so what is the problem that I am trying to solve? On the frame I have, I can only display pixels as either black or white. I don’t have the full range of greys (say, 256 levels) to select from. So somehow, I have to quantize the picture in such a way that it looks believable as a greyscale picture without actually being one. The first two pictures show the ideal. The first one, in colour, shows the Golden Gate Bridge in its full colours. The second picture shows the “ideal” version of the bridge in greyscale. This is the experience we try to recreate.

Greyscale

However, if we approach this in a simplistic manner, we can apply a simple thresholding algorithm. The thresholding algorithm would look like the following:

for each pixel in image {
	pixel_out = convert_grey(pixel) > threshold ? 1.0 : 0.0;
}

While simplistic, it does not lead to good image quality. For the pictures below, I have chosen threshold values of .25, .5 and .75.

Threshold

As you can see here, the black and white pictures lack quite a lot of detail, and they certainly do not look good. You could tweak the threshold value for each picture until you find something that looks good. However, that is not going to be sustainable.
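As a concrete sketch of the thresholding step, here is a minimal version operating on a flat greyscale buffer with values in [0, 1]. I use a plain std::vector instead of CImg to keep it self-contained; the name `threshold_image` is just for illustration.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Threshold a greyscale buffer (values in [0, 1]) into pure black/white.
// Every pixel above `threshold` becomes 1.0 (white), everything else 0.0.
std::vector<float> threshold_image(const std::vector<float>& grey, float threshold) {
	std::vector<float> out(grey.size());
	for (std::size_t i = 0; i < grey.size(); ++i)
		out[i] = grey[i] > threshold ? 1.0f : 0.0f;
	return out;
}
```

Note that the output depends entirely on the single `threshold` parameter, which is exactly why the results above vary so much between .25, .5 and .75.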

Instead we are going to introduce the concept of dithering. In short, for image processing, dithering is the act of introducing noise into the image to avoid artifacts of the quantization process. (There probably exists a better scientific description.) The way to think about it is that, in this particular case, we try to make the ratio of black to white pixels in a region match the intensity of the corresponding region in the “ideal” greyscale picture.

The final result can be seen below. It looks a bit dodgy here, but remember that it is meant to be viewed on the e-ink display shown above.

Dithered

Algorithm#

The overall algorithm for this is relatively simple.

  1. Convert the picture into greyscale.
  2. Apply dithering to the greyscale image converting it into a black and white image.

Step number one should be relatively easy, but I will briefly touch upon it. I’ll talk a bit more about step number two, which is the actual dithering step.

Converting to Greyscale#

The first step is, as mentioned, to convert the image to an internal greyscale format. Nothing too magical about this process. The only thing I would caution against is the naive way of converting an RGB value to greyscale using equal weights for all of the components, as seen in the code snippet below. It turns out that your eye perceives red, green and blue at different intensities; i.e. the same amount of red will not appear as bright as green. As such, the different channels get weighted differently.

// Convert to greyscale
// Naive:
float convert_naive(float r, float g, float b){
	return (r * 0.33) + ( g * 0.33 ) + ( b * 0.33 );
}

// Better (Rec. 601 luma weights)
float convert_grey(float r, float g, float b){
	return (r * 0.299) + ( g * 0.587 ) + ( b * 0.114 );
}

With this in place, the pseudo-algorithm to convert the image should look similar to this:

float grey[];
for each pixel in image{
	grey[pixel] = convert_grey(pixel.r, pixel.g, pixel.b);
}

Now, you should have the picture in greyscale format, and ready for input into the dithering stage.
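Putting the two snippets together, the conversion might look like the sketch below. It assumes an interleaved RGB buffer (r, g, b, r, g, b, …), which is my own simplification; CImg actually stores channels in separate planes, so the indexing there would differ.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Rec. 601 luma weights, as in the snippet above.
float convert_grey(float r, float g, float b) {
	return (r * 0.299f) + (g * 0.587f) + (b * 0.114f);
}

// Convert an interleaved RGB buffer (r,g,b,r,g,b,...) into one
// greyscale value per pixel.
std::vector<float> to_grey(const std::vector<float>& rgb) {
	std::vector<float> grey(rgb.size() / 3);
	for (std::size_t i = 0; i < grey.size(); ++i)
		grey[i] = convert_grey(rgb[3 * i], rgb[3 * i + 1], rgb[3 * i + 2]);
	return grey;
}
```

A sanity check: pure white (1, 1, 1) maps to 1.0 exactly, since the three weights sum to one.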

Dithering#

So, we have finally gotten to the dithering stage. Dithering, as mentioned earlier, is the art of applying noise to the signal to reduce the artifacts of the quantization process. For image processing the most famous technique is Floyd-Steinberg, and then there are other variants that all build upon it.

The general principle behind the algorithm is the following.

  1. Go through each pixel in the image sequentially.
  2. For each pixel, calculate the error of the quantized pixel from the ideal. E.g. error = ideal - quantized
  3. Then distribute a weighted amount of the error to the neighbouring pixels.
    1. The weights, and which pixels the error is distributed to, are what differ between the variants that I’ve investigated.

Or in pseudo code, it will be something like this

for x,y in greyscale_image.size: 
	// Get the value from the image
	float old = image[x][y];
	// Quantize the value to 1.0 or 0.0
	float quantized = old > 0.5 ? 1.0 : 0.0;
	image[x][y] = quantized;
	// Calculate the error
	float err = old - quantized;
	float err_amt = err / 16; // For the Floyd-Steinberg algorithm.  

For the Floyd-Steinberg algorithm the distribution of the errors is quite simple. The code can be found below

image[x+1][y]   += err_amt * 7.0;
image[x+1][y+1] += err_amt * 1.0;
image[x][y+1]   += err_amt * 5.0;
image[x-1][y+1] += err_amt * 3.0;

Now, some boundary conditions and technicalities have to be handled before you have the full program, but this is the general gist of how to apply dithering.
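For completeness, here is a minimal sketch of the whole Floyd-Steinberg pass on a row-major greyscale buffer. The boundary handling (simply skipping neighbours outside the image) and the plain std::vector layout are my own choices for this sketch, not the only options.

```cpp
#include <cassert>
#include <vector>

// Floyd-Steinberg dithering of a greyscale buffer (values in [0, 1]),
// stored row-major as img[y * w + x]. Neighbours outside the image
// are simply skipped.
void floyd_steinberg(std::vector<float>& img, int w, int h) {
	auto push = [&](int x, int y, float amount) {
		if (x >= 0 && x < w && y >= 0 && y < h)
			img[y * w + x] += amount;
	};
	for (int y = 0; y < h; ++y) {
		for (int x = 0; x < w; ++x) {
			float old_val = img[y * w + x];
			float new_val = old_val > 0.5f ? 1.0f : 0.0f;
			img[y * w + x] = new_val;
			// 1/16th of the error, distributed with weights 7/3/5/1.
			float err_amt = (old_val - new_val) / 16.0f;
			push(x + 1, y,     err_amt * 7.0f);
			push(x - 1, y + 1, err_amt * 3.0f);
			push(x,     y + 1, err_amt * 5.0f);
			push(x + 1, y + 1, err_amt * 1.0f);
		}
	}
}
```

Note that the error is only ever pushed to pixels that have not been visited yet (right of the current pixel, or on the next row), so every output pixel ends up exactly 0.0 or 1.0.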

Different dithering matrices#

Now, I mentioned that there are different weighting schemes. The one presented here is the Floyd-Steinberg variant. However, from some investigation and experimentation I’ve found that, at least for my use-case, the one that gave the best results was Atkinson dithering. It uses the same algorithm; the only things that change are the weight distribution and which pixels it maps into. You can find that in the attached reference code on GitHub.
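For reference, the Atkinson scheme divides the error by 8 and spreads one share to each of six neighbours, so only 6/8 of the error is propagated at all. A sketch under the same row-major buffer assumption as before:

```cpp
#include <cassert>
#include <vector>

// Atkinson dithering: same scan order and quantization as Floyd-Steinberg,
// but 1/8 of the error goes to each of six neighbours (two to the right,
// three on the next row, one two rows down); the remaining 2/8 is dropped.
void atkinson(std::vector<float>& img, int w, int h) {
	auto push = [&](int x, int y, float amount) {
		if (x >= 0 && x < w && y >= 0 && y < h)
			img[y * w + x] += amount;
	};
	for (int y = 0; y < h; ++y) {
		for (int x = 0; x < w; ++x) {
			float old_val = img[y * w + x];
			float new_val = old_val > 0.5f ? 1.0f : 0.0f;
			img[y * w + x] = new_val;
			float err_amt = (old_val - new_val) / 8.0f;
			push(x + 1, y,     err_amt);
			push(x + 2, y,     err_amt);
			push(x - 1, y + 1, err_amt);
			push(x,     y + 1, err_amt);
			push(x + 1, y + 1, err_amt);
			push(x,     y + 2, err_amt);
		}
	}
}
```

Because part of the error is deliberately lost, Atkinson tends to increase contrast slightly, which is arguably why it reads better on a pure black-and-white display.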

References#

  1. https://github.com/langly/dithering