
Cameras and Lenses


Pictures have always been a meaningful part of the human experience. From the first cave drawings, to sketches and paintings, to modern photography, we’ve mastered the art of recording what we see.

Cameras and the lenses inside them may seem a little mystifying. In this blog post I’d like to explain not only how they work, but also how adjusting a few tunable parameters can produce fairly different results:

Over the course of this article we’ll build a simple camera from first principles. Our first steps will be very modest – we’ll simply try to take any picture. To do that we need to have a sensor capable of detecting and measuring light that shines onto it.

Recording Light

Before the dawn of the digital era, photographs were taken on a piece of film covered in crystals of silver halide. Those compounds are light-sensitive and when exposed to light they form a speck of metallic silver that can later be developed with further chemical processes.

For better or for worse, I’m not going to discuss analog devices – these days most cameras are digital. Before we continue the discussion relating to light we’ll use the classic trick of turning the illumination off. Don’t worry though, we’re not going to stay in darkness for too long.

The image sensor of a digital camera consists of a grid of photodetectors. A photodetector converts photons into electric current that can be measured – the more photons hitting the detector, the higher the signal. In the demonstration below you can observe how photons fall onto the arrangement of detectors represented by small squares. After some processing, the value read by each detector is converted to the brightness of the resulting image pixels, which you can see on the right side. I’m also symbolically showing which photosite was hit with a short highlight. The slider below controls the flow of time:

The longer we let the photons accumulate, the more of them hit the detectors and the brighter the resulting pixels in the image. When we don’t gather enough photons the image is underexposed, but if we let the photon collection run for too long the image will be overexposed (a small code sketch of this accumulation appears at the end of this section).

While the photons have the “color” of their wavelength, the photodetectors don’t see that hue – they only measure the total intensity, which results in a black and white image. To record the color information we need to separate the incoming photons into distinct groups. We can put tiny color filters on top of the detectors so that they will only accept, more or less, red, green, or blue light:

This color filter array can be arranged in many different formations. One of the simplest is a Bayer filter, which uses one red, one blue, and two green filters arranged in a 2x2 grid:

A Bayer filter uses two green filters because light in the green part of the spectrum heavily correlates with perceived brightness. If we now repeat this pattern across the entire sensor we’re able to collect color information – the second sketch below shows one way to lay that pattern out in code. For the next demo we will also double the resolution to an astonishing 1 kilopixel arranged in a 32x32 grid:

Note that the individual sensors themselves still only see the intensity, and not the color, but knowing the arrangement of the filters we can recreate the colored intensity of each sensor, as shown on the right side of the simulation.

The final step of obtaining a normal image is called demosaicing. During demosaicing we want to reconstruct the full color information by filling in the gaps in the captured RGB values. One of the simplest ways to do it is to just linearly interpolate the values between the existing neighbors – the last sketch below follows that approach. I’m not going to focus on the details of the many other available demosaicing algorithms and I’ll just present the resulting image created by the process:

Notice that yet again the overall brightness of the image depends on the length of time for which we let the photons through. That duration is known as shutter speed or exposure time. For most of this presentation I will ignore the time component and we will simply assume that the shutter speed has been set just right so that the image is well exposed.

The examples we’ve discussed so far were very convenient – we were surrounded by complete darkness with the photons neatly hitting the pixels to form a coherent image. Unfortunately, we can’t count on the photon paths to be as favorable in real environments, so let’s see how the sensor performs in more realistic scenarios.
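Going back to the photon-collection step for a moment, here is a minimal Python sketch of how collection time relates to exposure. It is a toy model of my own, not the article’s demo: the `expose` function, the full-well capacity, and the made-up scene are all illustrative assumptions.

```python
import numpy as np

# Toy model (an assumption, not from the article): the signal at each photosite
# grows with the number of photons collected, which is roughly the scene's
# photon arrival rate times the exposure time, and clips once the site is full.
def expose(photon_rate, exposure_time, full_well=1000.0):
    """photon_rate: 2D array of photons arriving per unit time at each photosite.
    Returns normalized pixel brightness in [0, 1]."""
    photons = photon_rate * exposure_time        # photons gathered during the exposure
    photons = np.minimum(photons, full_well)     # an overexposed photosite clips at its capacity
    return photons / full_well                   # convert the count to pixel brightness

rng = np.random.default_rng(0)
scene = rng.uniform(50.0, 200.0, size=(32, 32))  # a made-up 32x32 scene of photon rates

print(expose(scene, 0.5).mean())    # too short: everything stays dark (underexposed)
print(expose(scene, 5.0).mean())    # a reasonable exposure
print(expose(scene, 50.0).mean())   # too long: every pixel clips at 1.0 (overexposed)
```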
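The second sketch shows the RGGB Bayer layout described above applied to a full-color test image, producing the single-channel raw data a sensor would actually record. The layout follows the article; the function name, array shapes, and test gradient are my own assumptions.

```python
import numpy as np

# A sketch of what an RGGB Bayer sensor records: each photosite keeps only the
# channel its color filter passes, in a repeating 2x2 pattern:
#   R G
#   G B
def bayer_mosaic(rgb):
    """rgb: (H, W, 3) image with even H and W. Returns an (H, W) raw mosaic."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red filters: even rows, even columns
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green filters: even rows, odd columns
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green filters: odd rows, even columns
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue filters: odd rows, odd columns
    return raw

# A 32x32 test image: a horizontal red-to-blue gradient over constant green.
x = np.linspace(0.0, 1.0, 32)
test = np.stack([np.tile(1.0 - x, (32, 1)),      # red fades left to right
                 np.full((32, 32), 0.5),         # constant green
                 np.tile(x, (32, 1))], axis=-1)  # blue grows left to right

raw = bayer_mosaic(test)
print(raw.shape)  # (32, 32): one intensity value per photosite, no color yet
```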
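The last sketch fills the gaps back in with the simple linear interpolation mentioned above, assuming the same RGGB layout as the previous snippet; the function name and the use of scipy.ndimage.convolve are my own choices, not the article’s.

```python
import numpy as np
from scipy.ndimage import convolve

# Simple bilinear demosaicing: every missing sample of a channel is replaced by
# the average of the known samples of that channel in its 3x3 neighborhood.
def demosaic_bilinear(raw):
    """raw: (H, W) RGGB Bayer mosaic. Returns an (H, W, 3) full-color image."""
    h, w = raw.shape
    masks = np.zeros((3, h, w))
    masks[0, 0::2, 0::2] = 1   # red photosites
    masks[1, 0::2, 1::2] = 1   # green photosites...
    masks[1, 1::2, 0::2] = 1   # ...on both green positions of the pattern
    masks[2, 1::2, 1::2] = 1   # blue photosites

    kernel = np.ones((3, 3))
    out = np.zeros((h, w, 3))
    for c in range(3):
        known = raw * masks[c]
        # Average of known neighbors: sum of known values / count of known values.
        interpolated = convolve(known, kernel, mode="mirror") / \
                       convolve(masks[c], kernel, mode="mirror")
        # Keep the measured sample where one exists, interpolate everywhere else.
        out[..., c] = np.where(masks[c] == 1, raw, interpolated)
    return out
```

Averaging the known 3x3 neighbors of each missing sample is equivalent to the classic bilinear demosaicing kernels; real cameras use more sophisticated, edge-aware algorithms, which is exactly the detail the article sets aside.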
