Selecting a camera for a machine-vision system can prove to be a difficult task. Every application has specific camera requirements. Thus, it’s critical to become as educated as possible on what’s available when attempting to choose between the various technologies and features.
The first step should be to determine what you want to achieve with a vision system. Is it faster throughput, higher quality, lower cost, or some other business goal? Designers must also consider the need for future upgrades, which might lead to selecting a more advanced and flexible camera. The key considerations when selecting a camera are speed, resolution, image quality, and the camera technology that will offer the best value.
CMOS vs. CCD
“The biggest paradigm shifts in the industrial and scientific camera industry are occurring in sensor technology, where nearly all camera manufacturers are now embracing complementary metal-oxide semiconductor (CMOS) sensors over charge-coupled device (CCD) sensors,” says Tom Hospod, sales director for IDS Imaging Development Systems. “For many years, the industrial and scientific imaging industry used cameras with CCD sensors. CMOS sensors dominated mainly in the security and surveillance industries. Some 20 years ago, CMOS sensors were viewed as somewhat noisy, with poor image quality. Today, however, CMOS sensors are beginning to dominate the industrial and scientific imaging market.”
In fact, Sony recently announced it will end CCD production in March 2017, although it plans to continue shipping CCD sensors until 2020. “Like Sony, we plan on being close to 100% CMOS within the next five years. Currently, CCD represents only about 5% of our business,” says Hospod.
“The difference between CMOS and CCD is that a CCD camera generates analog signals, which must be sent through separate analog-to-digital conversion circuits before outputting from the camera. On a CMOS imager, the analog-to-digital conversion happens on the chip itself,” says Rich Dickerson, marketing communications manager at JAI. “Not only does this mean faster processing speeds, but it also reduces overall circuitry and cost.” This news is important when considering maintenance and longevity of a system.
The key differentiator between CCD and CMOS performance is processing speed, or the ability to get data off the sensor faster. In turn, more data can be pushed through the interface faster, which means bandwidth is important. Speeds will vary; for example, a fast CCD camera can push a little over 1 Gb/s (gigabits per second) while a CMOS version is capable of running over 10 Gb/s. CMOS cameras also consume less power, generate less heat, and are better under fluctuating light conditions.
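To put those interface figures in perspective, a quick back-of-the-envelope calculation shows how fast raw sensor data piles up. The resolution, bit depth, and frame rates below are illustrative assumptions, not the specs of any particular model:

```python
# Rough estimate of the raw data rate a camera must push through its interface.
# All numbers below are illustrative assumptions, not the specs of a particular model.

def data_rate_gbps(width_px, height_px, bits_per_pixel, frames_per_sec):
    """Return the uncompressed sensor data rate in gigabits per second."""
    bits_per_frame = width_px * height_px * bits_per_pixel
    return bits_per_frame * frames_per_sec / 1e9

# A 2048 x 1536 (3-MP) monochrome sensor at 8 bits per pixel:
print(data_rate_gbps(2048, 1536, 8, 30))    # ~0.75 Gb/s -- fits a CCD-era interface
print(data_rate_gbps(2048, 1536, 8, 400))   # ~10 Gb/s  -- needs a modern CMOS-class interface
```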
Not all applications need high speeds. But in automated processes, the faster you can process objects or scan items, the more productive the manufacturing line. It tends to be the speed and abilities that justify the cost of higher-end camera systems.
Over the years, reduced production of CCDs might make ordering parts, maintenance, or upgrades a problem. Manufacturers have assured customers not to worry if they have applications better suited for CCD, because the technology will still be available. Nonetheless, the industry is leaning strongly toward CMOS, so this article will focus on issues to watch out for with this technology.
Scanning and Shutter Types
Choosing between line-scan and area-scan is the next decision facing designers. Line-scan cameras process one line of an image at a time and need more processing to assemble them into a continuous image. This technique works well for continuous items like rolls of paper or steel, or raw produce on a conveyor belt where the “length” of the item is uncertain. But when discrete items need inspection, whether moving or stationary, most developers will opt for area-scan cameras, which capture standard 2D images for computer analysis.
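For readers who want to see how those individual lines become a picture, here is a minimal sketch. The grab_line() call is a hypothetical stand-in for whatever acquisition function the camera’s SDK actually provides, and the dimensions are assumptions:

```python
import numpy as np

# Illustrative sketch: assembling individual line-scan captures into a 2D image.
# grab_line() is a hypothetical placeholder for the camera SDK's line-acquisition call.

LINE_WIDTH = 4096       # pixels per scan line (assumed)
LINES_PER_FRAME = 2000  # how many lines to stack into one image for analysis (assumed)

def grab_line():
    # Stand-in for real acquisition: returns one row of 8-bit pixel data.
    return np.random.randint(0, 256, LINE_WIDTH, dtype=np.uint8)

frame = np.empty((LINES_PER_FRAME, LINE_WIDTH), dtype=np.uint8)
for row in range(LINES_PER_FRAME):
    frame[row, :] = grab_line()   # each line is one slice of the moving web or conveyor

# 'frame' can now be passed to the same 2D analysis used with an area-scan camera.
```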
Of course, the type of shutter on your camera—rolling or global—is a significant factor. Rolling shutters, used in some CMOS area-scan cameras, expose and read out the 2D image one line at a time, scanning across the frame, typically from top to bottom. But unlike line-scan cameras, which are always positioned so that the direction of movement is perpendicular to the scan direction, rolling-shutter cameras often must capture horizontal or even circular movement within the full 2D field of view. Therefore, even moderate movement can cause significant skewing (the image presents a diagonal bend) or other spatial distortion, such as partial exposure (if a flash is used, the top third may show the flash while the bottom two-thirds is darker because it was read out after the flash).
A global shutter captures the entire 2D image at once and should be used if the application presents the aforementioned problems. If you select a rolling shutter and want to increase speeds later, it may be possible to change the shutter speed to minimize these effects.
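A rough, illustrative calculation (all numbers assumed) shows why rolling-shutter skew grows with both object speed and sensor readout time:

```python
# Illustrative estimate of rolling-shutter skew (all numbers are assumptions).
# While the sensor reads out row by row, a moving object shifts sideways, so the
# last rows see it displaced relative to the first rows.

readout_time_s   = 0.020    # time to read the full frame top to bottom (20 ms, assumed)
object_speed_mps = 0.5      # sideways object speed in meters per second (assumed)
pixels_per_meter = 2000     # spatial scale of the image (assumed optics and working distance)

skew_px = object_speed_mps * pixels_per_meter * readout_time_s
print(f"Apparent diagonal skew: about {skew_px:.0f} pixels")   # ~20 px of slant
```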
Older CMOS cameras often have rolling shutters, which is why they weren’t widely used for machine vision in years past. However, most new CMOS cameras will incorporate a global shutter. If an application involves movement, developers are strongly encouraged to gravitate toward a camera with a global-shutter capability.
Resolution
Higher speeds usually mean a faster frame rate, and more images to process per second means more data to transmit, which consumes more bandwidth. Cameras with low resolution keep costs down while still offering many capabilities. They can compare shapes and orientations and match shades of pixels, which might suffice for inspections depending on the desired tolerances. Stricter tolerances generally need finer resolutions with higher pixel counts. Keep in mind that higher resolutions increase the amount of data being moved around, possibly resulting in a more expensive network.
An alternative to higher-resolution cameras might be to use a lower-resolution camera but take multiple smaller images to maintain accuracy. This can lead to more software demands and complexity in the system. Still, whether one high-resolution or several low-resolution images are used, the data transmitted should be about the same and shouldn’t increase network cost.
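A simple sizing exercise illustrates how tolerance drives pixel count. The field of view, feature size, and the three-pixels-per-feature rule of thumb below are assumptions to adapt to the application:

```python
# Illustrative sizing exercise (assumed numbers, plus a common rule of thumb of
# spanning the smallest feature of interest with at least 3 pixels).

field_of_view_mm    = 200.0   # width of the scene the camera must cover (assumed)
smallest_feature_mm = 0.2     # finest defect or tolerance to resolve (assumed)
pixels_per_feature  = 3       # rule-of-thumb sampling

required_width_px = field_of_view_mm / smallest_feature_mm * pixels_per_feature
print(f"Minimum horizontal resolution: {required_width_px:.0f} pixels")   # 3000 px

# Tightening the tolerance to 0.05 mm quadruples the pixel count per axis, and
# roughly 16x the data per frame, which is where network cost creeps in.
```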
Resolution considerations also extend to color tolerance. Inspections of print, paint, foods, and other materials need high levels of color precision to ensure consistency. If color is important, there are two types of color cameras to choose from. Bayer mosaic cameras place a matrix of color filters over the pixels, making each pixel sensitive to a specific color band—red, green, or blue. The camera must then estimate the two missing color values at each pixel by looking at the surrounding pixels. This creates some uncertainty with respect to the pixel colors, as well as the shapes and edges they might represent.
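That estimation step is usually called demosaicing. The sketch below shows the idea using OpenCV; the random array stands in for real raw sensor data, and the specific Bayer pattern constant is an assumption that depends on the sensor:

```python
import numpy as np
import cv2

# Illustrative sketch of Bayer demosaicing: each raw pixel records only one of R, G, or B,
# and the two missing channels are interpolated from neighbors. The random array stands in
# for real raw data, and the pattern constant (BayerBG here) is an assumption that depends
# on the sensor.

raw = np.random.randint(0, 256, (480, 640), dtype=np.uint8)   # stand-in for raw mosaic data
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)                # interpolation fills the gaps
print(bgr.shape)   # (480, 640, 3): three channels, but two-thirds of the values are estimates
```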
The second type of color camera uses a three-faced prism. Coatings on the faces are able to direct light to a specific color sensor (red, green, or blue) based on the color’s wavelength. A prism camera doesn’t have to estimate color values, which is sometimes referred to as true color. Having each pixel offer a true red, green and blue value provides better color precision and sharpens lines and edges in color images. Of course, these benefits come with a higher cost. Developers have to carefully weigh the tradeoffs between lower color fidelity and higher system cost.
It should be noted that prism cameras may also be more sensitive to vibration and shock. Many Bayer color cameras can withstand 10 G of vibration and 80 G of shock. A well-built prism camera should still be able to achieve ratings of 3 G for vibration and 50 G for shock, which is suitable for most industrial applications. Still, if a prism camera is necessary, finding ways to reduce vibration and shock in the design can save wear and tear, increasing the camera’s longevity.
Image Quality
The faster a production line moves, the faster the camera’s shutter must operate to prevent blurring. Fast shutter speeds mean minimal time to capture light. The fewer photons the sensor captures, the lower the signal and the closer it sits to the noise floor, leaving the user with reduced-quality, grainy images.
Maintaining image quality can be difficult, especially for line-scan cameras trying to capture tens of thousands of lines every second. Many designers install lighting in the camera package or near the application to compensate. But depending on the application, power may not be available, and even when it is, added lighting can be a costly solution. An alternative is to use a camera with larger pixels.
“For example, there are now high-speed cameras available offering pixels that are 20 microns in size,” says JAI’s Dickerson. “These can increase light sensitivity eight-fold from a camera offering pixels that are seven microns, while still processing 80,000 lines per second.”
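That eight-fold figure follows from simple geometry, since light-gathering capacity scales roughly with pixel area (ignoring differences in quantum efficiency between sensors):

```python
# Light-gathering ability scales roughly with pixel area, which is where the
# "eight-fold" figure comes from (simple geometry, ignoring quantum-efficiency differences).

small_pixel_um = 7.0
large_pixel_um = 20.0

area_ratio = (large_pixel_um / small_pixel_um) ** 2
print(f"Relative light per pixel: about {area_ratio:.1f}x")   # ~8.2x
```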
Be careful: increasing shutter speeds on a CMOS camera with a global shutter may lead to images that look washed out. The culprit, called shutter leakage, occurs when light continues to fall on the imager while the image is being buffered. This causes electrical “leakage” that washes out the image, making it less accurate for line detection and other applications.
“Developers should seek out cameras that have good parasitic light sensitivity (PLS) ratios—a term commonly used for CMOS shutter leakage,” says Dickerson. “Play with them at different speeds and in different lighting environments to get comfortable with what works for you. Lower-quality imagers typically have PLS ratings below 1:1000, meaning one out of every 1,000 photons that strike the sensor while the shutter is closed ‘leaks’ into the image information. Higher-quality sensors have PLS ratings of 1:3000 or above, while the best can achieve ratings as high as 1:50000.”
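A simplified, illustrative model (assumed exposure and readout times, constant scene illumination) shows why those ratios matter:

```python
# Simplified, illustrative model of shutter leakage (all numbers assumed).
# With constant scene illumination, parasitic signal builds up during the time the
# shutter is "closed" (readout/buffering) at roughly 1/PLS of the normal sensitivity.

exposure_s = 0.0005    # 0.5-ms exposure (assumed)
readout_s  = 0.010     # 10 ms spent reading out / buffering the frame (assumed)

for pls in (1000, 3000, 50000):
    leakage_fraction = (readout_s / exposure_s) / pls
    print(f"PLS 1:{pls}: parasitic signal ~{leakage_fraction * 100:.2f}% of the real signal")
# A low PLS ratio lets a few percent of unwanted light wash into the image;
# the best sensors keep it to a small fraction of a percent.
```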
Thermal Control
Industrial cameras operate for long periods of time, some even continuously, making temperature control vital. Noise increases as electronics become warmer. A heat exchanger often suffices for thermal control, depending on the ambient conditions. However, faster speeds and higher resolutions might require a fan.
To protect cameras from dust and debris, an IP67 enclosure might be used. But an enclosure can increase thermal loads and introduce heat-related noise into the images. “If the camera offers a temperature range of 49 to 158°F, it might operate despite the enclosure,” says Dickerson. “But if the temperature range is limited, say, 32 to 113°F, the camera might show lower image quality because of the heat trapped by the enclosure, increasing noise.”
Heat can be a large noise factor in both CCD and CMOS cameras. There are more expensive ways to keep cameras cool, such as thermoelectric cooling (TEC)—a method of applying electrical voltage of a constant polarity to remove thermal energy—and liquid cooling. These techniques are generally used in biomedical industries for applications where low light, long exposure, or high gain conditions prevail. Companies that need this type of equipment should work through a machine-vision consultant or a vendor with extensive customer assistance.
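One commonly cited rule of thumb, useful for intuition even though the exact figure varies by sensor, is that dark current (a key heat-related noise source) roughly doubles for every 6 to 7°C of temperature rise:

```python
# A commonly cited rule of thumb: sensor dark current roughly doubles for every
# ~6-7 degC rise. The doubling interval varies by sensor, so treat this as an
# illustration of why trapped heat in an enclosure matters, not a spec.

def relative_dark_current(delta_temp_c, doubling_interval_c=6.5):
    return 2 ** (delta_temp_c / doubling_interval_c)

print(relative_dark_current(13))   # ~4x the dark current at +13 degC inside an enclosure
print(relative_dark_current(26))   # ~16x at +26 degC
```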
Smart cameras can be user-friendly preset devices sold as a package that includes everything needed to get the camera system up and running quickly. However, smart cameras can be difficult to upgrade or adjust as production lines change. As a result, some customers who don’t need short setup times or who are producing large volumes will outsource to a machine-vision company that specializes in custom setups. Overall, the money a camera system saves in production can offset the cost associated with a custom setup.
This only scratches the surface of camera systems. To get comfortable with the different features and gain more confidence in cable and interface selection for your camera system, take advantage of the 30-day trials offered by many companies and talk to the company’s engineers (see "Trying Before Buying"). As the industrial Internet of Things speeds up, camera systems and the knowledge to use them may become a necessity to stay competitive.
Trying Before Buying
There are several reasons why companies should take advantage of the 30-day trial periods offered by most camera suppliers. It lets company engineers examine and compare different kinds of cameras and observe their operation in specific applications. For example, the trial period can be used to check for fixed pattern noise (FPN) and hot pixels.
FPN consists of fixed lines in an image or video that originate from the structure of CMOS sensors. It’s normal for photosensor manufacturers to ship units that exhibit FPN, with the understanding that it’s up to camera designers to compensate for it in software or to determine that it will not interfere with their applications.
To test for FPN, take the lens cap off, set the gain and exposure to their operational settings, and look for non-moving stripes or lines of darker pixels in the image. These lines are normally vertical, but can also be horizontal. If an image contains FPN, determine whether it will interfere with the application or needs to be corrected in software.
Hot pixels, which are always turned on, look like sparkles or white pixels in areas that should not be registering any light. To test for them, put the lens cap on and look for pixels that still register light (the sparkles).
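For those who want to automate these two checks during a trial period, the sketch below averages a stack of frames so random noise cancels and fixed structure or stuck-on pixels stand out. The grab_frame() and grab_dark_frame() calls are hypothetical stand-ins for the camera SDK’s capture functions, and the threshold is an assumption to tune per camera:

```python
import numpy as np

# Illustrative trial-period checks for FPN and hot pixels. grab_frame() and
# grab_dark_frame() are hypothetical stand-ins for the camera SDK's capture calls;
# the threshold is an assumption to tune per camera and application.

def grab_frame():
    return np.random.randint(0, 256, (480, 640), dtype=np.uint8)   # lens cap off

def grab_dark_frame():
    return np.random.randint(0, 5, (480, 640), dtype=np.uint8)     # lens cap on

# FPN check: average many lit frames so temporal noise cancels, then look for
# columns (vertical stripes) whose mean differs from the rest of the image.
lit = np.stack([grab_frame().astype(np.float64) for _ in range(100)]).mean(axis=0)
column_profile = lit.mean(axis=0)
print(f"Column-to-column fixed offset: {column_profile.max() - column_profile.min():.2f} grey levels")

# Hot-pixel check: with the cap on, every pixel should sit near black.
dark = np.stack([grab_dark_frame().astype(np.float64) for _ in range(50)]).mean(axis=0)
THRESHOLD = 20   # grey levels above which a "dark" pixel is suspicious (assumed)
print(f"Hot-pixel candidates: {np.count_nonzero(dark > THRESHOLD)}")
```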
If you do see lines or sparkles, note which camera performs better relative to its cost, and make sure any noise will not jeopardize your application. If you don’t see this type of noise, it’s because camera designers are finding creative ways of “fixing” flawed pixels by estimating what they should be registering and compensating for the noise.
It’s also important to check the camera’s dynamic range and sensitivity in your application. Dynamic range is the discernable amount of contrast in an image. A high dynamic range is good for low-light applications or scenes with large light differentials. For example, trying to read part numbers scribed on polished metal parts forces the camera to detect the difference between shiny and dull portions of the part, making the camera’s dynamic range an important consideration.
Camera sensitivity refers to the level of light required to differentiate an object from the image’s own random noise. Sensitivity is important for low-light applications, enabling objects to be seen and analyzed in darker environments.
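Dynamic range is often quoted in decibels, computed from the sensor’s full-well capacity and read-noise floor. The numbers below are illustrative assumptions, not the specs of any particular camera:

```python
import math

# Dynamic range is commonly quoted in decibels, from the sensor's full-well capacity
# (largest signal a pixel can hold) and its read-noise floor. Numbers here are assumptions.

full_well_electrons  = 20000.0   # assumed pixel saturation capacity
read_noise_electrons = 5.0       # assumed noise floor

dynamic_range_db = 20 * math.log10(full_well_electrons / read_noise_electrons)
print(f"Dynamic range: about {dynamic_range_db:.0f} dB")   # ~72 dB

# A wider ratio means one exposure can hold detail in both the shiny and dull
# regions of a part -- exactly the scribed-metal scenario described above.
```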
It can also be beneficial to work with companies that have extensive customer service. For instance, maintenance plans can save time and headaches when trying to untangle the wide range of capabilities found on industrial cameras.