Mark Peterson, VP Advanced Technology, Theia Technologies
The detail in an image is determined by resolution. The shorter the lens focal length, the wider the field of view. At fields of view greater than about 90°, most lenses start to show curved, barrel-distorted images that compress the image at the edges. Rectilinear lenses such as those using Theia Technologies’ Linear Optical Technology® do not exhibit barrel distortion and thus maintain image resolution out to the edge of the image.
Resolution has many definitions; no one definition is correct for all situations. Here we list only the definitions relevant to video in surveillance and machine vision applications.
Definition 1: Resolution can be expressed as the number of pixel rows or columns on the sensor used to record an image. The greater the number of lines, the greater the detail or the larger the field of view that can be recorded with the camera. Unfortunately, there is no uniformity in this definition: numbers like 720 or 1080 refer to the number of pixel rows (vertical), but 4K (~4000 pixels) refers to the number of pixel columns (horizontal) on the sensor.
Definition 2: Resolution can be expressed as the total number of pixels. With megapixel cameras, the resolution is generally the total number of pixels, divided by 1,000,000, and rounded off. Table 1 below shows examples of typical megapixel camera resolutions.
Definition 3: Resolution can be the level of detail with which an image can be reproduced or recorded. At the image sensor, resolution is expressed in line pairs per millimeter (lp/mm), a unit commonly used by lens designers and optical engineers. As the total number of pixels on an image sensor increases, the pixel size gets smaller and a higher quality lens is required to achieve best focus. These high-quality lenses, including those manufactured by Theia Technologies, are rated for megapixel or multi-megapixel cameras, meaning the image will be sharply in focus at the camera resolution the lens is rated for.
Definition 4: Resolution can be specified in pixels per foot or meter at the object. This mapping of the image sensor dimensions onto the object is the most intuitive for calculating what level of detail can be seen in the image. Fundamentally it is the horizontal number of pixels of the camera divided by the horizontal field of view (HFOV) at the object. This gives a pixels per foot number that can be related to image quality. This is the definition that I will expand upon further in the rest of this white paper.
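This definition is a simple ratio, sketched below in Python. The example numbers (a 1920-pixel-wide camera covering a 32 ft wide scene) are illustrative assumptions, not values taken from the paper's tables.

```python
def pixels_per_foot(horizontal_pixels: int, hfov_ft: float) -> float:
    """Image resolution at the object (Definition 4): horizontal pixel
    count divided by the horizontal field of view in feet."""
    return horizontal_pixels / hfov_ft

# Hypothetical example: an HD camera (1920 horizontal pixels)
# covering a scene 32 ft wide at the object.
print(pixels_per_foot(1920, 32.0))  # 60.0 pixels per foot
```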
There is not yet an industry standard for the level of sharpness required in every video surveillance application (detection or identification) or machine vision application (barcode or license plate reading). For security applications, the more pixels on a target, the higher the resolution, and the more likely it is that recognition and positive identification can be made. However, higher detail requires higher resolution cameras or more cameras, and thus more bandwidth and storage. A balance must be struck between level of detail and project budget.
In Table 2 below, an image is shown at different levels of resolution, from “high detail” for clear identification at 60 pix/ft to “motion tracking” for a wide field of view at 10 pix/ft. Each image has the same number of pixels, but as the field of view increases, the pixels per foot in the image decreases. Because each image contains the same total number of pixels, there is no effect on the amount of data transferred over the network and no degradation of network performance from choosing either higher image resolution or greater field of view.
Table 2: As field of view increases the pixels per foot decreases so that each picture has the same number of pixels and thus causes the same amount of network loading.
A higher resolution megapixel camera (5MP) can cover a larger field of view at the same image resolution as a lower resolution megapixel camera (1.3MP). Because the total available pixels spread across the field of view is greater, the field of view can be increased without decreasing image resolution.
Table 3 below compares the field of view of different cameras at 32 feet from the subject at the same image resolution. As the camera resolution (total number of pixels) increases, so does the field of view at constant image resolution (pixels per foot). Clearly the higher the number of pixels in the camera, the wider the field of view at a constant image resolution. This increase in field of view is also shown in Figure 1 below.
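The scaling in Table 3 can be sketched as a one-line calculation: at a fixed image resolution, the achievable field of view is simply the horizontal pixel count divided by the target pixels per foot. The horizontal pixel counts and the 40 pix/ft target below are illustrative assumptions, not figures from Table 3.

```python
def hfov_at_constant_resolution(horizontal_pixels: int, pix_per_ft: float) -> float:
    """Scene width (ft) a camera can cover while holding a target
    image resolution (pixels per foot) constant."""
    return horizontal_pixels / pix_per_ft

# Hypothetical horizontal pixel counts for three camera classes,
# all held at 40 pix/ft image resolution:
for name, h_pix in [("1.3MP", 1280), ("HD", 1920), ("5MP", 2592)]:
    print(f"{name}: {hfov_at_constant_resolution(h_pix, 40.0):.1f} ft wide")
```

Doubling the horizontal pixel count doubles the coverable field of view at the same pixels per foot, which is the relationship Figure 1 illustrates.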
Figure 1: Field of view increases with increasing camera resolution (total number of pixels) without any change in image resolution (pixels per foot). The 3- and 5-megapixel images are cropped vertically to eliminate uninteresting sky and ground areas of the image. This cropping reduces the total number of pixels but doesn’t affect the pixels per foot resolution.
The graph in Figure 2 shows that a shorter focal length lens, like the 1.3mm SY125 compared to the 1.7mm SY110 (dashed versus solid lines) on the same HD resolution camera, will increase the field of view. This will either decrease the image resolution at a given distance or decrease the required object distance at a fixed resolution.
Since field of view is determined by sensor size and focal length rather than pixel count, the fact that the three sensors listed have similar chip sizes (see Table 4) means they will have similar fields of view.
Figure 2: Field of view for different lens focal lengths. SY110 with 1.7mm is compared to the SY125 with 1.3mm.
The object distance versus image resolution chart in Figure 3 shows the effect of changing camera resolution. For a fixed lens focal length, increasing the camera resolution allows increased object distance at the same image resolution because the increased number of pixels in the camera can be distributed over the same field of view of the image. If the image of a parking lot for instance doesn’t have enough resolution to capture license plates, increasing the camera resolution is one option that doesn’t require adding a new light pole or changing camera location. Alternately, the camera could be placed farther away from the object and maintain the same image resolution.
The chart in Figure 3 also shows the effect of going to a wider-angle lens. Changing from the 110° wide SY110 to the 125° wide SY125 without changing the camera location or resolution (HD resolution) will result in a lower image resolution. However, the object distance and image resolution can be maintained by increasing the camera resolution to 5MP (green line to orange dashed line).
Although the graph is shown for rectilinear lenses, the same effect is noticeable with typical lenses that have barrel distortion.
Figure 3: Object distance versus image resolution is affected by camera resolution and lens focal length. Here the SY110 at 1.7mm and the wider SY125 at 1.3mm are compared on 3 different cameras.
Most wide-angle lenses have barrel distortion (also known as fisheye distortion) that causes the image to look curved and bulged out in the center. Rectilinear lenses like those made for the security and machine vision industries by Theia Technologies keep lines that appear straight in the real world straight on the image sensor. This has the benefit of increasing the resolution of the image at the edges (i.e., an object will cover more pixels in the image when the object is at the edge of the image) whereas lenses with barrel distortion cause the image to be compressed at the edges and resolution is reduced. With typical distorted wide-angle lenses, potentially valuable information is lost in the lens and no software, de-warping or otherwise, can recapture or reconstruct this lost information in the image. Any de-warping will create an image that looks like that from a rectilinear lens but at lower resolution. With a rectilinear lens, the image is spread over a greater number of pixels at the edges, increasing the probability of detection and identification.
With a rectilinear lens, objects in a common plane perpendicular to the camera have the same image resolution at the center and edge even though the objects at the edges are much farther away from the camera. This is shown in Figures 4 and 5 below.
Figure 5: These targets are in a 10x10 ft grid. At 20 ft from the camera using Theia's SY110 lens with a 120° field of view, the HFOV is 60 feet. Targets at the edge of the image are twice as far from the camera but can be seen as clearly as those in the center of the image along the same plane.
This rectilinearity creates an effect called 3D stretching or lean-over in which objects at the image edge seem to be stretched because they are being “flattened” onto a plane along the tangent angle from the lens. With rectilinear lenses, the wider the field of view, the more noticeable this effect. This effect is not what most people are used to seeing but it has the advantage of increased resolution (pixels per foot) for objects at the edge of the image compared to lenses with barrel distortion. For lenses with barrel distortion, the objects at the edge of the image will be smaller than those in the center and they will curve towards the center.
Figure 6 below shows this 3D stretching. The length of the black car near the edge of the image is flattened onto the image plane along a steep tangent angle so it appears stretched. But the width of the two cars is the same because they are in the same plane perpendicular to the camera. Because the effect is only present when objects have length parallel to the camera in the third (depth) dimension, such as the length of the cars, it is called 3D stretching.
With a rectilinear lens, the calculation of the resolution of objects in an arc with the camera at the center is a little more complicated. As an object moves in an arc from the center of the image towards the edge, without changing its distance to the camera, the object will increase in resolution significantly. This is shown in Figures 7 and 8 below.
This case, shown in Figure 8 below, clearly shows the resolution increase as objects move around the arc at constant distance from the camera. The image of the person standing 11.5ft from the camera will increase in width due to 3D stretching as they move to the edge of the image. At the image edge, they may be more clearly identified compared to the center and compared to a lens with barrel distortion. Lenses with barrel distortion will not show an increase in object width.
Figure 8: As subjects move in a circle with the camera at the center, they increase in size due to 3D stretching, making them more recognizable towards the edges of the image. This 135° field of view was captured using Theia’s SY125 lens.
Given a lens and camera, it is possible to calculate the image resolution by using the simple equations below. If the field of view is not known, it can be calculated for a rectilinear lens using the equation in Table 5. If the lens has barrel distortion it is best to look up the HFOV in the specification sheet.
Once the horizontal field of view is calculated and the camera is known, the image resolution is simply the ratio of the pixels to HFOV.
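These two steps can be sketched in Python. For a rectilinear lens, the mapping x = f·tan(θ) means the linear HFOV at the object is simply the object distance times the sensor width divided by the focal length; the image resolution is then the pixel count over that HFOV. The sensor width (4.8 mm, roughly a 1/3" format) and the other example numbers are illustrative assumptions, not values from the paper's tables.

```python
def hfov_feet(object_dist_ft: float, sensor_width_mm: float,
              focal_length_mm: float) -> float:
    """Horizontal field of view (ft) at the object for a rectilinear lens.
    The mm units of sensor width and focal length cancel, leaving feet."""
    return object_dist_ft * sensor_width_mm / focal_length_mm

def image_resolution(horizontal_pixels: int, object_dist_ft: float,
                     sensor_width_mm: float, focal_length_mm: float) -> float:
    """Image resolution (pixels per foot): pixel count over HFOV."""
    return horizontal_pixels / hfov_feet(object_dist_ft, sensor_width_mm,
                                         focal_length_mm)

# Hypothetical example: 1920-px-wide camera on a ~4.8 mm wide sensor
# with a 1.7 mm lens, object 32 ft away.
print(f"HFOV: {hfov_feet(32.0, 4.8, 1.7):.1f} ft")
print(f"Resolution: {image_resolution(1920, 32.0, 4.8, 1.7):.2f} pix/ft")
```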
For a design with a known resolution requirement, it is possible to invert the equations above to calculate the lens focal length required for a given camera. This equation is shown below.
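The inversion follows directly from the resolution equation: since resolution R = N·f / (d·w) for a rectilinear lens (N horizontal pixels, d object distance, w sensor width, f focal length), solving for f gives f = R·d·w / N. A sketch, with hypothetical example numbers:

```python
def required_focal_length(pix_per_ft: float, object_dist_ft: float,
                          sensor_width_mm: float,
                          horizontal_pixels: int) -> float:
    """Invert the resolution equation for a rectilinear lens:
    f = R * d * w / N, returning focal length in mm."""
    return pix_per_ft * object_dist_ft * sensor_width_mm / horizontal_pixels

# Hypothetical requirement: 40 pix/ft at 32 ft on a 1920-px camera
# with a ~4.8 mm wide sensor.
print(f"{required_focal_length(40.0, 32.0, 4.8, 1920):.2f} mm")  # 3.20 mm
```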
For objects in the same plane, the edge resolution equals the center resolution. However, for objects in an arc, equidistant from the camera, the edge resolution is related to the center resolution by the cosine of the maximum HFOV angle. This equation is shown in Table 8 below.
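The two cases can be sketched as follows, implementing the cosine relation as the text states it for the arc case; the 60° half-field angle and 40 pix/ft center resolution are illustrative assumptions, not values from Table 8.

```python
import math

def plane_edge_resolution(center_pix_per_ft: float) -> float:
    """Objects in a common plane perpendicular to the camera:
    edge resolution equals center resolution for a rectilinear lens."""
    return center_pix_per_ft

def arc_edge_resolution(center_pix_per_ft: float,
                        max_field_angle_deg: float) -> float:
    """Objects in an arc equidistant from the camera: edge resolution
    relates to center resolution by the cosine of the maximum HFOV
    angle, per the relation stated in the text."""
    return center_pix_per_ft / math.cos(math.radians(max_field_angle_deg))

# Hypothetical example: 40 pix/ft at center, 60 deg maximum field angle.
print(plane_edge_resolution(40.0))      # 40.0 pix/ft (unchanged)
print(arc_edge_resolution(40.0, 60.0))  # 80.0 pix/ft (doubled at the edge)
```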
The variables in the equations depend not only on the lens choice but on the camera choice as well. Different camera resolutions have different chip sizes, and therefore different fields of view for the same lens. Below are tables of data for the most common megapixel cameras and the corresponding fields of view for two of Theia’s rectilinear lenses.
In summary, there are many definitions of resolution. The two most commonly used are the total number of pixels in a camera and the pixels per foot or pixels per meter in an image. As the total number of pixels increases, the detail in the image, the field of view, or both can be increased. Among wide-angle lenses, rectilinear designs maintain image resolution at the edges of the image, improving the possibility of detection and identification.
For any further explanation please contact Theia Technologies:
Mark Peterson, VP Advanced Technology
Theia Technologies | 29765 SW Town Center Loop W, Suite #4 | Wilsonville, OR 97070 | 503 570-3296
mpeterson@theiatech.com | www.TheiaTech.com