Engineering a digital-camera system

Article by: Richard Crisp

What are the design parameters and how does one go about engineering a camera system?

Digital cameras are nearly everywhere today. According to LDV Capital, by 2022 there will be 45 billion cameras in operation worldwide. In 2019, over 500 hours of video were uploaded to YouTube every hour of the day. Besides the familiar smartphone, automotive, and home-security applications, digital cameras are used inside laboratory instruments to sequence DNA, in the search for dangerous Earth-threatening asteroids, and in all sorts of applications covering microscopic and macroscopic objects.

Despite the broad range of applications, these visible-light cameras share a common technology platform with a handful of interrelated design parameters that differentiate one from the other. From the perspective of basic operating principles, there's little difference between the Keck Telescope imaging system on Maunakea in Hawaii and the selfie-cam in your smartphone; what differs is the choice of design parameters.

So, what are these parameters and how does one go about engineering a camera system? From a design perspective, the two key components to choose are the optics and image sensor. But the choice of one will affect the choice of the other, as we shall see.

First-order choices in digital-camera design

First, decide how large an image you want of the planned target object and at what range. That choice plays a role in setting the pixel size of the image sensor and the focal length of the optics.

For example, assume that the target is two meters tall and the range is 100 meters. To accurately image the target, you may decide that the target's height should cover 100 pixels at that range. The image scale will therefore be 2 meters/100 pixels = 20 millimeters/pixel at a range of 100 meters.

This can be approximated using the small-angle approximation from trigonometry: the sine or tangent of a small angle is approximately equal to the angle itself when measured in radians. So, we can approximate the instantaneous field of view (IFOV) for a single pixel as:

angle (radians) ≈ 20 mm / 100,000 mm = 0.0002 radians = 0.2 milliradians/pixel
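As a quick numerical check, here is a minimal Python sketch of that small-angle estimate, using the assumed example numbers above (a 2-meter target at 100 meters, covered by 100 pixels):

# Small-angle IFOV estimate for the worked example (values assumed from the text)
target_height_m = 2.0        # target height in meters
range_m = 100.0              # range to target in meters
pixels_on_target = 100       # desired pixel coverage of the target height

image_scale_m = target_height_m / pixels_on_target      # 0.02 m = 20 mm per pixel at range
ifov_rad = image_scale_m / range_m                       # small-angle approximation
print(f"image scale: {image_scale_m * 1000:.0f} mm/pixel at {range_m:.0f} m")
print(f"IFOV: {ifov_rad * 1e3:.2f} milliradians/pixel")  # prints 0.20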

There are an infinite number of focal lengths and pixel sizes that can be combined to attain a specific IFOV, so we need to either pick a focal length or a pixel size. Once one is picked, the other is determined by solving similar triangles (Figure 1). For this example, let's assume we have a 2-micron pixel. So, what is the proper focal length?

2 microns / focal length = 20 mm / 100,000 mm, so focal length = 10,000 microns = 10 mm

Figure 1 Design engineers need to either pick a focal length or a pixel size, then determine the other by solving similar triangles. Source: Etron Technology
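The similar-triangles relationship of Figure 1 can be checked the same way; this short sketch assumes the 2-micron pixel and 0.2-milliradian IFOV from above:

# Focal length from pixel size and IFOV: pixel_size / focal_length = IFOV (similar triangles)
pixel_size_um = 2.0          # chosen pixel size in microns
ifov_rad = 0.2e-3            # 0.2 milliradians per pixel from the previous step

focal_length_um = pixel_size_um / ifov_rad
print(f"focal length: {focal_length_um / 1000:.1f} mm")  # prints 10.0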

Deciding pixel size

We arbitrarily picked a pixel size of 2 microns. What are the implications of that? The size of the pixel determines how much light can be collected in a given time; a bigger pixel collects more light. Likewise, a larger pixel usually has a higher saturation capacity: it can hold more charge than a smaller pixel, which generally translates to a higher dynamic range.

But there are trade-offs related to pixel size. One is cost: bigger pixels mean a larger sensor die for a given pixel count. That frequently means a more costly lens capable of illuminating a larger sensor with acceptable distortion. Lens cost and weight grow rapidly with aperture diameter, leading to constraints on the system design.

There’s another implication of pixel size, and that is sampling of the image scene. Even when imaging a point source, optics do not focus to an arbitrarily small point. Instead, due to diffraction, the smallest spot the lens can make is finite and is a function of the optical focal ratio (F#) and the wavelength of the light. This “optical blur,” as it is often called, is also known as the Airy disk.

The Airy disk diameter is defined as the diameter of the diffraction pattern measured at its first null. Nyquist sampling theory says that to properly sample the Airy disk, there should be 4.88 pixels covering it. So, if we are using a 2-micron pixel, then a 9.76-micron Airy disk would be the proper size for such a pixel.

Figure 2 A plot of intensity vs. radius shows diffraction-limited optics and the Airy disk focal ratio. Source: Etron Technology

So, for our 2-micron pixel and 550-nm light, the optimum focal ratio is approximately F/8, which gives a roughly 10-micron Airy disk. That applies when a single monochrome pixel is the sampling unit. For an image sensor incorporating a color filter array, such as a standard Bayer pattern, the sampling unit is the 2×2 cluster of pixels, which has four times the area of a single pixel on the same sensor used as a monochrome sensor. For the Bayer case, the sampling unit consists of a blue pixel covering only 25% of its area, a red pixel covering another 25%, and green pixels covering the remaining 50%.

In fact, since even monochrome (unmasked) pixels are not 100% light sensitive, that same percentage reduction applies to the color-masked pixels as well. So, the 25% and 50% sensitivities are maximums that the real pixels will not exceed (Figure 3).

Figure 3 In this illustration, the sampling unit is shown on the left and the color-masked pixels comprising the sampling unit are shown on the right. Source: Etron Technology

Why focal ratio matters

For the same base sensor used as a color-masked sensor, the optimum focal ratio changes because the sampling unit is larger for the color-masked version. Since the sampling unit is now a 2×2 cluster of pixels, covering the spot with 4.88 sampling units means the spot size needs to be twice as large as in the monochrome case (Figure 4).

Figure 4 The spot size in the color-masked case should be twice as large as in the monochrome case. Source: Etron Technology

As a consequence of the larger-sized sampling unit, a color-masked version of a monochrome sensor would optimally use a focal ratio 2× that of a monochrome version of the same sensor. That means what was an F/8 optimum focal ratio for a mono camera will now be an F/16 for color.
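As a rough sketch of both cases, the optimum focal ratio can be estimated from the Airy disk relation (diameter to the first null ≈ 2.44 × wavelength × F#); the 2-micron pixel and 550-nm wavelength are the assumptions used above:

# Optimum focal ratio so the Airy disk spans 4.88 sampling units
# Airy disk diameter (to the first null) is roughly 2.44 * wavelength * F#
pixel_size_um = 2.0
wavelength_um = 0.55                              # 550-nm (green) light

def optimum_f_number(sampling_unit_um):
    target_airy_um = 4.88 * sampling_unit_um      # Nyquist-style sampling of the disk
    return target_airy_um / (2.44 * wavelength_um)

mono_f = optimum_f_number(pixel_size_um)          # single pixel is the sampling unit
bayer_f = optimum_f_number(2 * pixel_size_um)     # Bayer 2x2 cluster: twice the linear size
print(f"monochrome: ~F/{mono_f:.1f}, Bayer: ~F/{bayer_f:.1f}")  # ~F/7.3 and ~F/14.5, roughly F/8 and F/16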

Now let’s define the focal ratio, commonly denoted as F# or F/#. It’s the ratio of the focal length to the aperture diameter. For our design example, the 10-mm F/8 optic would have an aperture of 10 mm/8 = 1.25 mm. Some lenses have an adjustable iris that effectively varies the aperture size by “stopping it down.” Low-cost lenses typically have a fixed aperture size.

The focal ratio is often spoken of in terms of the “speed” of the lens; a fast lens would have a low focal ratio like F/2 or F/3, while a slow lens would have a higher focal ratio such as F/8, F/16, or F/32.

The higher the focal ratio, the slower the lens and the longer it takes for a given level of exposure. A doubling of focal ratio will quadruple the exposure time needed to reach a target signal level in an image. To see why that is, consider the following example.

A variable focal length lens with a fixed aperture will have a variable focal ratio. For a 2:1 zoom lens adjustable from 10 to 20 mm focal length with a fixed 1.25-mm aperture, the focal ratio will vary from F/8 at 10 mm focal length to F/16 at 20 mm focal length. As the focal length is increased, the “magnification” is increased at the sensor end. The net effect is spreading the light coming out of the lens over a larger-diameter circle, reducing the “brightness” in proportion to the area increase, since there’s only a fixed amount of light flux.

A doubling of focal length results in a doubling of the diameter of the circle of light illuminating the sensor. When the diameter of a circle doubles, its area quadruples, so the exposure duration quadruples when the focal ratio is doubled in order to reach the same integrated signal for the same scene brightness (Figure 5).

Figure 5 The doubling of focal length results in a doubling of the diameter of the circle of light illuminating the image sensor. Source: Etron Technology
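A small sketch ties these pieces together, assuming the fixed 1.25-mm aperture zoom example and an arbitrary 10-ms reference exposure at F/8 (for illustration only):

# Focal ratio = focal_length / aperture_diameter; exposure time scales with (F#)^2
aperture_mm = 1.25                  # fixed aperture from the zoom-lens example
base_exposure_ms = 10.0             # arbitrary reference exposure at F/8, for illustration

for focal_length_mm in (10.0, 20.0):
    f_number = focal_length_mm / aperture_mm                 # F/8, then F/16
    exposure_ms = base_exposure_ms * (f_number / 8.0) ** 2   # relative to the F/8 case
    print(f"{focal_length_mm:.0f} mm -> F/{f_number:.0f}, exposure {exposure_ms:.0f} ms")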

Field of view calculation

The field of view (FOV) is essentially the IFOV times the number of pixels in the X axis and the IFOV times the number of pixels in the Y axis. So, if there’s an IFOV of 0.2 milliradians/pixel and design engineers have a 1000×1000-pixel image sensor, aka “focal plane array,” they would have an overall FOV of 0.2×0.2 radians. Recall that there are π radians in 180 degrees, so that amounts to a FOV of approximately 11.46×11.46 degrees.

For a larger FOV using the same image scale, engineers would use a sensor that has more pixels. For example, using 0.2 milliradians/pixel and a 4K video sensor featuring 3840×2160 pixels, the system would have a total FOV of 44×24.75 degrees.
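The overall FOV calculation follows directly; here is a minimal sketch with the two sensor formats mentioned above:

import math

# Full field of view from IFOV and pixel count (small-angle approximation)
ifov_rad = 0.2e-3                                 # 0.2 milliradians per pixel

for label, (nx, ny) in (("1000 x 1000", (1000, 1000)),
                        ("4K (3840 x 2160)", (3840, 2160))):
    fov_x = math.degrees(nx * ifov_rad)
    fov_y = math.degrees(ny * ifov_rad)
    print(f"{label}: {fov_x:.2f} x {fov_y:.2f} degrees")
# prints roughly 11.46 x 11.46 and 44.00 x 24.75 degrees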

Note that this FOV could be obtained with larger or smaller pixels by adjusting the focal length of the optics, but consideration should be given to proper sampling of the Airy disk provided by the optics.

Editor’s Note: This is the first article in a series on digital-camera design. It shows how to pick pixel size, focal length, and focal ratio of an imaging system for monochrome image sensors and those that have integrated color filter arrays. Moreover, it informs design engineers about the sensitivity of exposure time to focal ratio and how to calculate the FOV for a camera. Part 2 of this series provides a detailed treatment of imaging when there’s motion in the scene.

This article was originally published on EDN.

Richard Crisp is VP of New Product Development for Etron Technology America.
