Color sensors in packaging applications let motion-control systems do a lot more than B&W cameras
By Ali Zadeh
Senior Research Engineer
Applied Engineering Manager, DVT Corp.
Color sensors coupled to interpretive software make inspection systems in packaging applications more capable and smarter than those based on grayscale images. For example, an updated inspection system can examine a salmon fillet, distinguish skin from flesh by color differences that would blend together in a grayscale image, and decide whether to transfer the fillet to a cleanup line. Such systems find uses wherever color carries meaningful information: in pharmaceutical plants, on automobile-component assembly lines, and even in bakeries. The challenge for those developing color-vision systems is to counter the myth that color technology is too complex and too expensive.
At one time, packaging-equipment builders had to buy an off-the-shelf analog color camera, called a sensor, for about $3,000 and then buy a frame grabber, a board that plugs into a PC. Connecting the camera to the board provided the hardware for a color inspection system: the camera captured images and sent them to the frame grabber, which performed the analog-to-digital conversion. Separate software running on the PC then handled the image analysis, and the image-analysis programs of the day were far less user friendly than software currently available.
The difference between frame grabbers and modern systems is almost like night and day. For instance, a smart sensor, actually a digital camera, captures an image and a processor in the camera handles analysis. The camera can even output a signal or send coordinates from the image to other equipment in the production line. The processor-on-camera eliminates the need for a controlling PC. In addition, many cameras come with an IP address so they can be identified as nodes on the Internet.
More vision systems using color are just one trend in the industry. Systems are also getting smaller and less expensive. Some recent vision sensors measure only 1.6 × 2.2 × 4 in. And capability that cost about $50,000 in a color system five years ago now sells for about $10,000.
A primer on color
Color carries information. Each pixel in a grayscale digital image holds eight bits of information, while a color pixel captures three times that amount, eight bits each for its red, green, and blue components. This means image analyses can be more detailed. Several brief examples will highlight the advantages of color.
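The tripling of data per pixel is easy to quantify. This short sketch works out the raw data volume per image for the 640 × 480 sensor resolution cited later in the article; the arithmetic, not any vendor API, is the point.

```python
# Data volume per image: 8-bit grayscale vs. 24-bit RGB color.
# The 640 x 480 resolution matches the standard camera cited later.
WIDTH, HEIGHT = 640, 480

gray_bytes = WIDTH * HEIGHT        # 1 byte per pixel
rgb_bytes = WIDTH * HEIGHT * 3     # 3 bytes per pixel (R, G, B)

print(gray_bytes)   # 307200
print(rgb_bytes)    # 921600 -- three times the data per image
```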
Color sensors in high-volume food processing and packaging often verify colors of fruits and vegetables. Color systems spot surface defects using fine differences between colors, variations not detectable with grayscale devices.
For instance, a bakery producing filled pastry had a problem with crusts tearing and spilling. If torn crusts could be spotted soon after baking, the errant pastry could be removed from the line before it was packaged and shipped.
The engineer in charge obtained a pastry sample with a torn crust exposing the filling. He captured its image with a color sensor and transferred the image into the sensor's software running on a PC, in this case FrameWork from DVT. He then drew a sampling rectangle around the tear and recorded the pixel count and color level for the exposed filling, which was darker than the surrounding crust. The engineer then programmed the system to emit a failure-condition signal when later images had more than, for example, 4,000 pixels of the filling color. Another piece of equipment then removes the torn pastry from the line.
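The inspection logic described above boils down to thresholded color-pixel counting. A minimal sketch follows, with the image as plain rows of (r, g, b) tuples; the reference color, tolerance, and function names are illustrative assumptions, not DVT FrameWork calls, though the 4,000-pixel threshold comes from the article.

```python
# Count pixels close to the (assumed) dark filling color inside the
# inspected region; more than 4,000 such pixels means a torn crust.
FILLING_RGB = (70, 40, 30)   # assumed reference color for exposed filling
TOLERANCE = 30               # assumed per-channel match tolerance
FAIL_THRESHOLD = 4000        # pixel count that triggers the reject signal

def matches_filling(pixel):
    """True if a pixel is within tolerance of the filling color."""
    return all(abs(c - ref) <= TOLERANCE for c, ref in zip(pixel, FILLING_RGB))

def inspect(image_rows):
    """Return True (pass) or False (fail: filling exposed by a torn crust)."""
    count = sum(1 for row in image_rows for px in row if matches_filling(px))
    return count <= FAIL_THRESHOLD
```

A pass result would let the pastry continue down the line; a fail result would drive the failure-condition signal that diverts it.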
In the next example, a vision system looks at salmon fillets coming from an automatic skinning machine and on their way to a weighing and wrapping station. A fillet is mostly red with a darker colored skin. To teach the vision system how to find the skin, the engineer puts a matte over the picture and a small square around the skin. The system stores the skin color level and examines other fillets for it.
If a fillet exceeds the pixel count for skin, then it along with its image and coordinates for the skin would be shuttled to another processing line. An automatic system could then reexamine the image for a fix on the exact position of the skin and give its coordinates to a water jet or other tool that could remove it.
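The reexamination step, locating the skin so its coordinates can go to a water jet, can be sketched as a bounding-box search over skin-colored pixels. The color values and function names below are illustrative assumptions.

```python
# Find the bounding box of pixels matching the taught skin color, as
# coordinates for a removal tool. Colors here are placeholders.
SKIN_RGB = (60, 45, 40)   # assumed taught skin color
TOLERANCE = 25            # assumed per-channel match tolerance

def is_skin(px):
    return all(abs(c - r) <= TOLERANCE for c, r in zip(px, SKIN_RGB))

def skin_bounding_box(image_rows):
    """Return (x_min, y_min, x_max, y_max) of skin-colored pixels, or None."""
    coords = [(x, y)
              for y, row in enumerate(image_rows)
              for x, px in enumerate(row) if is_skin(px)]
    if not coords:
        return None
    xs, ys = zip(*coords)
    return (min(xs), min(ys), max(xs), max(ys))
```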
The automobile application involves a fuse box with colored fuses that must sit in the appropriate sockets. Colors indicate a fuse's amperage rating, so locations are critical. Fuses are installed manually, with each amp rating printed on top. The vision system records the assembled fuse box, compares the color locations with a map stored in memory, and then decides whether the box is acceptable. A few in the auto industry want automated inspection of the fuse box as the last step in the production line. An OCR (optical character reader) might be an alternative, but it would take longer to read each fuse rating and therefore reduce throughput.
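Comparing color locations against a stored map is a straightforward lookup. In this sketch, the socket names and colors are invented for illustration; only the compare-against-map idea comes from the article.

```python
# The taught "map" pairs each socket with the expected fuse color,
# which encodes the amperage rating. A box passes only if every
# socket's detected color matches the map.
EXPECTED_MAP = {
    "socket_1": "red",     # assumed 10 A
    "socket_2": "blue",    # assumed 15 A
    "socket_3": "yellow",  # assumed 20 A
}

def inspect_fuse_box(detected):
    """Return the sockets whose detected color differs from the map."""
    return [s for s, color in EXPECTED_MAP.items() if detected.get(s) != color]
```

An empty result means the box is acceptable; a nonempty one names the misplaced fuses.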
Software's contribution to vision systems has been to make the sensors easier to use. For example, easy-to-use programs now teach systems to take action based on images. Software is also adding new features to the machine-vision lexicon, such as Object Find, Blob Analysis, and Precision Measure.
Object Find identifies a particular shape among many. It finds a badly defined object in the field of view. It can also count the number of particular objects passing on a conveyor belt. Teaching the software the shape of the object is simple. If it were a safety pin, the engineer would indicate that an object with the proper pixel length and width, and rounded ends is a safety pin.
Object Find is often used when looking for complex parts, typically in robotic applications. Its value is that it does not need perfect images. Objects may touch one another, or be partially off the screen. Once identified, other operations can be done with it.
Blob analysis, or recognition, differs from Object Find in that it looks for a shape or size as a region of constant color or connected pixels. When two objects overlap, blob analysis sees a single blob, while Object Find recognizes both.
A blob tool could look at pills in a package to ensure that they are all there and are of good integrity. It could also be used to check labels for tears and positioning.
When looking for five characters on a printed label, for instance, the system would look for all five. Or it could look for defects such as objects within an object, like a scratch on an auto part.
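The core of a blob tool is connected-component labeling. The sketch below labels 4-connected regions in a binary image (for instance, pills that stand out after color thresholding) and reports each blob's pixel count, so a missing pill shows up as a wrong blob count and a broken one as an undersized blob. This is a bare-bones illustration, not a vendor implementation.

```python
# Label 4-connected regions of "on" pixels and return their sizes.
from collections import deque

def blob_sizes(binary):
    """binary: list of rows of 0/1. Returns pixel counts of 4-connected blobs."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # Flood-fill one connected region, counting its pixels.
                size, q = 0, deque([(x, y)])
                seen[y][x] = True
                while q:
                    cx, cy = q.popleft()
                    size += 1
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if (0 <= nx < w and 0 <= ny < h
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((nx, ny))
                sizes.append(size)
    return sizes
```

A pill-pack check would then compare the blob count and sizes against the expected values.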
Precision Measure resolves edges down to about 1/10 of a pixel. CCD cameras vary in pixel resolution; a standard DVT camera, for example, has 640 × 480 pixels, so its resolution is about 1/4,000 of the field of view. If the field is a large wall, 1/4,000 might be an inch. With a microscope lens on the sensor and a one-inch field of view, edges could be resolved down to roughly 0.00025 in.
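As a back-of-the-envelope check on the figures above: with subpixel interpolation of about 1/10 pixel, the smallest resolvable edge step along an axis is the field of view divided by (pixel count × 10). The helper below is illustrative; the 480-pixel axis and 1/10-pixel factor come from the article.

```python
# Smallest resolvable edge step along one axis, in field-of-view units.
def edge_resolution(field_of_view, pixels=480, subpixel_factor=10):
    """Field of view divided by effective (subpixel-interpolated) steps."""
    return field_of_view / (pixels * subpixel_factor)

# A one-inch field of view on the 480-pixel axis resolves to about
# 0.0002 in., on the order of the 1/4,000-of-field figure cited above.
print(edge_resolution(1.0))
```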
Motion control
A data link is a simple way of feeding information to other systems. Data can be the number of failures, number of cartons, a specific cause for a failure, or positioning information. Most of the time, it is used for motion control, sending X-Y coordinates and rotations to a motion controller. It lets vision systems examine some action and correct a process.
A data link transmits data two ways: discretely by digital I/O to a PLC, which is an on-off condition, or by keeping a running total of some value. If a number of defects occurred during a shift, the data link could send this information to a printer or display panel. In this way, the camera is a communication source, not just a go/no-go sensor. It opens up communications to motion controllers, robots, and control systems, as well as higher-level systems.
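The two reporting modes can be sketched together: a discrete go/no-go value per inspection, as a PLC would see on digital I/O, plus running totals that can be pushed to a display at shift's end. The class and its names are assumptions for illustration, not a vendor API.

```python
# Accumulate inspection totals while emitting a discrete result each time.
class DataLink:
    def __init__(self):
        self.defects = 0
        self.inspected = 0

    def report(self, passed):
        """Return the discrete go/no-go output while updating totals."""
        self.inspected += 1
        if not passed:
            self.defects += 1
        return 1 if passed else 0   # digital output: 1 = go, 0 = no-go

    def shift_summary(self):
        """Running totals, e.g. for a printer or display panel."""
        return {"inspected": self.inspected, "defects": self.defects}
```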
On the Internet
The Internet adds capability to vision systems by giving each camera or sensor an IP address. This means a camera could be in an Alaskan fishing village inspecting fish. If there is a problem with the system, the camera can be connected to the Internet so technicians in a faraway city can log onto it, see the image, and change parameters in the software. Or a user with software expertise could teach Alaskan employees to do something new, without a technician ever stepping on an airplane.
Remarkable changes to the vision-system industry are just beginning. Besides dropping prices and rising capability, image-analysis software that was once a costly option is now included free. Tech support is also free at some companies. All of this could lead to the lights-out factory promised by computer-integrated-manufacturing systems over 10 years ago.
Putting a system to work
Engineers at GSMA Systems, Palm Bay, Fla., recently designed and built an automated system to assemble keyless entry transmitters and receivers for cars and trucks. The system requirements were for a modular vision system that could be easily programmed and networked.
The engineering team developed a vision system around eight DVT Series 600 SmartImage Sensors connected by Ethernet with six Fanuc LRMate200i robots. "The system assembles and inspects keyless entry units from a number of different parts including plastic cases, keypads, PCBs, and key rings," says Mark Senti, GSMA president. The final product consists of two transmitters and a receiver which are shipped to automakers worldwide. To automate assembly, the robotic production line includes a number of separate workcells for assembly and inspection.
A vibratory feeder initially sends two case shells to a date stamping cell. Keypads are inserted and inspected for the right keypad and position. "For this inspection," says Senti, "the sensors are taught to recognize different keypads. An engineer programs the camera to pattern match on a specific part of the image to decide if the right item is in place."
At the same time, the PCB controller in the transmitter's shell is inspected and verified at another workcell. Another smart camera images the back shell of the transmitter and checks for the type of shell. Both halves then move to a station that assembles the complete unit. The final workcell attaches a key ring and inspects this attachment with another smart sensor.
"Customized machine-vision software for the process was developed off-line on a laptop PC using DVT's FrameWork," says Stuart Geraghty, vice-president at GSMA. An engineer taught the system to recognize 13 different parts for label verification, part presence, and inspection. GSMA says smart sensors are easy to troubleshoot. Changes are easily tested off-site through a SmartImage Sensor emulator. A programmer would need only representative images to use the emulator, so no camera need be connected to the laptop. GSMA used the emulator to program sensors and transmit inspection configurations to the manufacturing line by Internet. The assembly line is producing one keyless entry set every 6 to 8 sec at a Tier One supplier's facility. System payback has been calculated to be less than 18 months.