How do I compute the luminance of an RGB pixel?
For projects in this class, you can safely use:
Luminance = Y = 0.30*R + 0.59*G + 0.11*B
There are several different formulations from different standards, such as:
Y = 0.299*R + 0.587*G + 0.114*B (ITU-R BT.601)
Y = 0.2126*R + 0.7152*G + 0.0722*B (ITU-R BT.709)
Y = 0.212*R + 0.701*G + 0.087*B (SMPTE 240M)
The “right” choice depends on several factors, such as the gamma of the display source.
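A minimal sketch of the class-approved formula in Python/NumPy, assuming a floating-point H x W x 3 RGB image; swap in another weight set from the list above if your pipeline calls for it:

```python
import numpy as np

def luminance(rgb):
    """Per-pixel luminance of an H x W x 3 float RGB image,
    using the class-approved weights 0.30 / 0.59 / 0.11."""
    weights = np.array([0.30, 0.59, 0.11])
    return rgb @ weights  # weighted sum over the channel axis
```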
Why does the blue channel contribute so little to luminance?
The human eye has three types of color-detecting cone cells, S, M, and L, which respond primarily to roughly blue, green, and red stimuli, respectively.
It turns out that the blue-sensitive cones contribute almost nothing to the perception of luminance (see Eisner and MacLeod, ’79). Blue light does affect perceived luminance somewhat, because it also stimulates the green cones to a small extent.
What should a 0 contrast image look like?
A single-tone, purely grey image at the average luminance of all the pixels.
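One way to sketch this in Python/NumPy, assuming a float RGB image and the class luminance weights:

```python
import numpy as np

def zero_contrast(img, weights=(0.30, 0.59, 0.11)):
    """Constant gray image at the mean luminance of all pixels."""
    lum = img @ np.asarray(weights)       # per-pixel luminance, shape H x W
    return np.full_like(img, lum.mean())  # every channel set to the average
```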
What should a 0 saturation image look like?
A gray scale version of the image.
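A matching sketch for zero saturation, again assuming a float RGB image: each pixel's luminance is copied into all three channels.

```python
import numpy as np

def zero_saturation(img, weights=(0.30, 0.59, 0.11)):
    """Grayscale version: per-pixel luminance copied to R, G, and B."""
    lum = img @ np.asarray(weights)            # shape H x W
    return np.repeat(lum[..., None], 3, axis=-1)
```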
What’s the right way to handle edges with Convolution?
Experiment and choose what works best for your images and filters. Assuming black outside the edges of an image leads to the most obvious errors; other methods, such as reflecting the image across its edges, tend to cause far fewer visible artifacts.
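The reflection approach can be sketched in Python/NumPy for a grayscale image and an odd-sized kernel (the slow nested loop is for clarity, not speed):

```python
import numpy as np

def convolve2d_reflect(image, kernel):
    """2-D convolution on a grayscale image, reflecting the image
    across its edges instead of assuming black outside."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="reflect")
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out
```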
What’s the right way to handle edges with Floyd-Steinberg Dithering?
Renormalize the weights based on how many pixels the error is being distributed over. The key is to avoid adding or subtracting energy from the image.
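A sketch of that renormalization in Python/NumPy, assuming a grayscale float image in [0, 1] dithered to 1 bit; the standard 7/16, 3/16, 5/16, 1/16 weights are rescaled over whichever neighbors actually exist:

```python
import numpy as np

def floyd_steinberg(gray):
    """1-bit Floyd-Steinberg dithering with error-diffusion weights
    renormalized at the borders, so no energy is added or lost."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    offsets = [(0, 1, 7), (1, -1, 3), (1, 0, 5), (1, 1, 1)]
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            nbrs = [(y + dy, x + dx, wt) for dy, dx, wt in offsets
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            total = sum(wt for _, _, wt in nbrs)
            for ny, nx, wt in nbrs:
                img[ny, nx] += err * wt / total  # renormalized share
    return out
```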
How wide should the radius be on the Gaussian sampling for the reconstruction?
You’ll have to experiment to get non-aliased, non-blurry images. A good starting point is a standard deviation a little under 1 pixel and a radius of 2-3 pixels.
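A sketch of Gaussian-weighted reconstruction from point samples, using those starting values; the `reconstruct` and `samples` names here are hypothetical, not part of any provided starter code:

```python
import numpy as np

def reconstruct(samples, cx, cy, sigma=0.8, radius=2.5):
    """Gaussian-weighted average of point samples within `radius`
    pixels of (cx, cy). samples is a list of (x, y, value) tuples."""
    num = den = 0.0
    for x, y, v in samples:
        dx, dy = x - cx, y - cy
        if dx * dx + dy * dy <= radius * radius:
            w = np.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
            num += w * v
            den += w
    return num / den if den > 0 else 0.0
```

Because the weights are normalized by their sum, samples of a constant value reconstruct that same value regardless of sigma or radius.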