Technology advances continue to drive down the cost of automatic inspection and make systems more plug-and-play
Until recently, machine-vision systems were mostly proprietary and required a cadre of experts for setup and maintenance. Only the largest corporations installed them. One reason: they were considered too pricey. But that’s changing as companies of all sizes face tough global competition. The ever-increasing demands placed on manufacturers to deliver quality products at lower prices often make high-speed, automated production a necessity, not an option.
For example, Peterson Manufacturing, Grandview, Mo., a maker of vehicle lighting for OEMs, recently installed a vision system on a lamp burn-in tester. The system verifies that filaments glow with the proper intensity. Human inspectors once performed the job, but this proved impractical when production rates climbed to over 500 units/hr. The process line moved so fast that human eyes couldn’t keep up.
Not so with machine vision. The silicon eyes tirelessly check each and every unit and rarely, if ever, make a mistake. Acceptable lamp units automatically go to final assembly and faulty ones route to a rework area. “Our OEM customers expect products with zero defects, both in appearance and functionality,” explains plant manager Steven Ham. “With 100% on-line inspection, we can react faster to possible quality issues.”
Besides boosting product quality, machine vision can help pinpoint errant processes. When used in combination with an SPC (statistical process control) program, a vision system can quickly locate malfunctioning production equipment for repair.
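To make the SPC connection concrete, here is a minimal sketch of how 100% vision-inspection results might feed a p-chart, a standard SPC tool for fraction-defective data. The batch sizes and defect counts are hypothetical, not from the article:

```python
# Hypothetical sketch: vision pass/fail counts feeding a p-chart.
def p_chart_limits(defectives, sample_size):
    """Return (center line, LCL, UCL) for a fraction-defective p-chart.

    defectives: defective count per inspected batch
    sample_size: units inspected per batch
    """
    p_bar = sum(defectives) / (len(defectives) * sample_size)
    sigma = (p_bar * (1 - p_bar) / sample_size) ** 0.5
    lcl = max(0.0, p_bar - 3 * sigma)   # lower limit can't go negative
    ucl = p_bar + 3 * sigma
    return p_bar, lcl, ucl

# Example: 10 batches of 500 lamps each (assumed numbers)
counts = [3, 5, 2, 4, 6, 3, 2, 5, 4, 3]
center, lcl, ucl = p_chart_limits(counts, 500)
# A batch whose fraction defective lands above ucl flags a process
# problem worth tracing back to a specific machine or station.
```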
Keep it simple
Essentially all vision systems include four key elements: a light-sensitive sensor such as a video camera, support electronics, a light source to illuminate target objects, and image-processing software.
In the past, systems tended to be application specific and difficult to modify. But much has changed in recent years. Ease-of-use is the single most important trend shaping machine vision today. The industry recognized the need for simplicity and flexibility and delivered what it calls general-purpose vision systems. Such equipment can more easily be reconfigured by users as production needs change.
At the crest of this technology wave are what are termed vision sensors. These systems usually integrate charge-coupled devices (CCDs) and most support electronics into small industrial-strength packages.
The F30 vision system from Omron Electronics Inc., Schaumburg, Ill., for example, has the camera, lighting, and processor all in a 3 × 3 × 6-in. assembly. Embedded software is changeable through a five-button keypad and miniature display. Users place the object of interest under the camera lens, outline the inspection area, and activate an auto-teach function. The program sets inspection parameters automatically. A manual mode is available for tweaking contrast and fault tolerances.
The system uses this stored blueprint as a comparison for subsequent parts. It communicates its findings (high, low, or OK) via an RS-232 serial port. The camera shutter is triggered externally, also through pinouts. Shutter speed adjusts from 1⁄60 to 1⁄4,000 sec so the system can synch to different line speeds. Although it has somewhat limited capability, the F30 does fill a niche between photoelectric sensors, which can’t detect two-dimensional objects, and more elaborate vision systems.
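The link between shutter speed and line speed is simple arithmetic: the exposure must be short enough that the part moves less than the allowable blur during the snapshot. A sketch, with the conveyor speed and optical scale assumed for illustration:

```python
# Illustrative arithmetic (assumed numbers, not from the article):
# pick a shutter speed that keeps motion blur under one pixel.
def max_exposure(line_speed_mm_s, mm_per_pixel, max_blur_px=1.0):
    """Longest exposure (sec) keeping blur below max_blur_px."""
    return max_blur_px * mm_per_pixel / line_speed_mm_s

# Conveyor at 200 mm/s, optics resolving 0.1 mm per pixel:
t = max_exposure(200.0, 0.1)   # 0.0005 sec
# A 1/2000-sec shutter meets this and sits comfortably inside the
# F30's 1/60-to-1/4,000-sec adjustment range.
```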
Taking the vision sensor concept a step further are companies such as DVT Corp., Norcross, Ga. Its so-called Smartimage sensors operate under Windows-based software and communicate via Ethernet or fieldbus connections. The modular sensors directly digitize signals captured by their CCDs and pipe images to onboard RAM. This direct scheme lowers system cost because it eliminates video cameras, video signals, and frame grabbers. Because electrical charge on each pixel is recorded digitally, the capture area can be narrowed to a field of interest. Full images take approximately 24 msec to acquire, whereas partial images require proportionately less time. This partial-frame feature is especially useful for high-speed production applications.
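The partial-frame trade-off can be sketched in a few lines: readout time scales roughly with the number of sensor rows captured. The 24-msec full-frame figure is from the article; the row counts are assumptions for illustration:

```python
# Sketch of the partial-frame speedup. FULL_ROWS is an assumed
# sensor height; only the 24-msec full-frame time is from the article.
FULL_FRAME_MS = 24.0
FULL_ROWS = 480

def acquisition_ms(rows_of_interest):
    """Approximate readout time for a partial capture window."""
    return FULL_FRAME_MS * rows_of_interest / FULL_ROWS

# Narrowing the field of interest to a 120-row band:
t = acquisition_ms(120)   # ~6 msec, four times faster than full frame
```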
Still, some applications may require more capability than vision sensors can deliver. For example, a chip-scale packager would likely use special high-resolution cameras for inspection. In such cases, PC-based systems give users more options and design flexibility. The ubiquitous PC has helped vision system makers by providing an accepted, standard platform to build upon.
Companies such as RVSI, Canton, Mass., offer complete vision systems on single PCI-slot boards. The boards contain their own processors and require only power from the computer bus. The boards accept most standard camera inputs with resolutions to 2,000 × 2,000 pixels.
While cameras tend to be standard issue, boards and software are often proprietary and only work together. To compensate, most companies offer open, GUI-based software. This open-architecture scheme allows the vision system to interface with other software and hardware. Some systems can be customized using familiar programming languages such as Visual Basic or Visual C++. The majority of packages also contain standard software tools that can do common tasks including feature counting, edge finding, and precision measuring. For example, most vision software supports subpixel measurements to the 0.10-pixel level. Depending on camera resolution and field of view, some systems are capable of submicron accuracy.
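One common way such tools reach below whole-pixel resolution is to interpolate around an edge. The sketch below shows one textbook approach, fitting a parabola through the intensity-gradient peak; commercial packages use their own proprietary variants, so treat this as illustrative only:

```python
# Illustrative subpixel edge finder: fit a parabola through the
# three gradient samples around the peak and take its vertex.
def subpixel_edge(profile):
    """Locate an intensity edge in a 1-D profile to subpixel precision."""
    # Discrete gradient; sample i sits between pixels i and i+1
    grad = [profile[i + 1] - profile[i] for i in range(len(profile) - 1)]
    k = max(range(1, len(grad) - 1), key=lambda i: abs(grad[i]))
    a, b, c = grad[k - 1], grad[k], grad[k + 1]
    denom = a - 2 * b + c
    offset = 0.5 * (a - c) / denom if denom else 0.0
    return k + 0.5 + offset   # +0.5 because gradients lie between pixels

# A blurred edge ramping from 10 to 110 around pixel 4:
profile = [10, 10, 10, 10, 60, 110, 110, 110]
edge = subpixel_edge(profile)   # lands at 4.0, between whole samples
```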
Not quite turnkey
Despite advances in the technology, implementation of machine vision on the factory floor often requires considerable planning and testing. One reason is the inherent limitations of image-processing techniques. Most systems today use an algorithm called normalized or pixel-grid correlation. Here, the system subtracts captured run-time images from the trained standard or so-called golden template. The resulting image is checked for variations in intensity that often accompany defects. But if the run-time image appreciably changes in size, appearance, or orientation, the system can get confused and falsely reject good parts. Two factors that must be addressed are object fixturing and lighting.
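The golden-template comparison can be illustrated with normalized correlation in plain Python (production systems run this in optimized hardware; the pixel values here are made up):

```python
# Sketch of normalized correlation against a golden template.
def ncc(template, image_patch):
    """Normalized cross-correlation score in [-1, 1]."""
    n = len(template)
    mt = sum(template) / n
    mp = sum(image_patch) / n
    num = sum((t - mt) * (p - mp) for t, p in zip(template, image_patch))
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    dp = sum((p - mp) ** 2 for p in image_patch) ** 0.5
    return num / (dt * dp) if dt and dp else 0.0

golden = [10, 10, 200, 200, 10, 10]    # trained template (flattened row)
good   = [12, 9, 198, 205, 11, 8]      # run-time part, slight noise
bad    = [10, 200, 10, 200, 10, 200]   # misaligned or defective part

# A score near 1.0 passes; a low score gets flagged as a reject --
# which is exactly why a shifted or rotated good part can fail.
```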
Shedding some light
Peterson Manufacturing’s lamp burn-in application is unusual because the product itself is the light source. Normally that’s not the case. Many vision systems are synched to strobed light sources such as LED arrays. The lamps illuminate target objects while sensors or video cameras take a snapshot. But sometimes objects that reflect light can give vision systems fits. Stray light from sources other than the strobes may cause vision systems to mistakenly fail good product. Tiny variations in object surface finish can cause so-called hot spots which may elicit false signals as well.
One solution is to back light a section of translucent conveyor belt on which the object rides. Light passing through the belt silhouettes the object and may provide sufficient contrast. Most video cameras and vision sensors are equipped with electronic shutters and apertures, which help users control the amount of light entering the devices.
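The payoff of backlighting is that the silhouette separates from the background with a single threshold. A sketch with hypothetical 8-bit pixel values shows the difference in contrast margin:

```python
# Sketch of silhouette measurement on a backlit scan line.
# Pixel values are hypothetical 8-bit intensities.
def silhouette_width(row, threshold=128):
    """Count dark (object) pixels in one scan line."""
    return sum(1 for px in row if px < threshold)

backlit   = [250, 248, 30, 25, 28, 32, 251, 249]     # clean silhouette
front_lit = [140, 135, 110, 90, 125, 118, 138, 142]  # hot spots muddy it

w = silhouette_width(backlit)   # 4 pixels, with a wide contrast margin
# The front-lit row clusters around the threshold, so tiny lighting
# shifts would change the measured width -- the "false fail" problem.
```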
Size and orientation
Other variables that can affect performance are the size and orientation of target objects. All vision systems use stored images of known good objects to compare against targets. If the target image size changes by a few percent, the system may not recognize good objects. Also, if a good object’s angular position differs by a few degrees from the template, it may fail too. In other words, objects must be securely fixtured or located, especially for precision measurements.
Although systems that use pixel-grid methods have certain limitations, they work well for most tasks. However, problems that are beyond pixel-grid methods may be tackled with a geometric-based algorithm. Vision-system maker Cognex Corp., Natick, Mass., recently introduced its Patmax software based on this method. Instead of correlating pixel-grid values, the algorithm interprets the geometric shapes within objects. For instance, it sees a square as four line segments and a football as two arcs. These descriptions are valid regardless of the object’s angle or how much its size differs from the trained image. The system can locate and measure objects to 0.025-pixel accuracy.
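The rotation-invariance principle behind geometric matching can be demonstrated in a few lines (Patmax’s actual algorithm is proprietary; this only shows why a segment-based description survives rotation while a pixel grid does not):

```python
# Sketch of rotation-invariant geometric description: a polygon's
# side lengths don't change when every pixel coordinate does.
import math

def side_lengths(poly):
    """Edge lengths of a polygon given as (x, y) vertices."""
    return [math.dist(poly[i], poly[(i + 1) % len(poly)])
            for i in range(len(poly))]

def rotate(poly, deg):
    """Rotate vertices about the origin by deg degrees."""
    r = math.radians(deg)
    return [(x * math.cos(r) - y * math.sin(r),
             x * math.sin(r) + y * math.cos(r)) for x, y in poly]

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
tilted = rotate(square, 37)   # arbitrary tilt

# Both describe as "four segments of length 2" -- the geometric
# signature matches even though every coordinate has changed.
```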