The introduction of data-flow programming techniques to smart-camera vision systems promises to change the way these systems are deployed.
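LabView itself is graphical, but the data-flow idea it embodies can be sketched in a few lines of Python: each node fires only once all of its inputs have arrived, rather than executing in a fixed statement order. All class and function names below are illustrative, not part of any NI API.

```python
class Node:
    """A data-flow node: fires its function when every input slot is filled."""
    def __init__(self, func, n_inputs):
        self.func = func
        self.inputs = [None] * n_inputs
        self.received = 0
        self.listeners = []  # (downstream node, input slot) pairs

    def connect(self, downstream, slot):
        self.listeners.append((downstream, slot))

    def feed(self, slot, value):
        self.inputs[slot] = value
        self.received += 1
        if self.received == len(self.inputs):   # fire when all inputs are ready
            self.result = self.func(*self.inputs)
            for node, s in self.listeners:
                node.feed(s, self.result)

# Wire up the expression (a + b) * c as a two-node graph.
add = Node(lambda a, b: a + b, 2)
mul = Node(lambda x, c: x * c, 2)
add.connect(mul, 0)

add.feed(0, 2)
add.feed(1, 3)      # add fires here, pushing 5 into mul's first slot
mul.feed(1, 10)     # mul fires: (2 + 3) * 10 = 50
```

The order in which values arrive does not matter; execution is driven entirely by data availability, which is the property that makes data-flow programs map naturally onto inspection pipelines.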
National Instruments Corp.
Consider a pick-and-place machine. Here a proximity sensor fires when a component comes in range, triggering a robot arm to pick up the part and move it to a shipping container. This usually works well, but some situations require more data about the product the robot handles. For example, if one piece became rotated or skewed at some point on the line, the proximity sensor would not detect the out-of-position product. The robot arm's attempt to pick up the piece would then fail because the arm expects the product to have the same orientation as all of the previous pieces.
In such a design, machine vision can replace the simple proximity sensor to detect the part and guide the robotic arm. A vision-based system not only triggers the start of the pick-and-place routine, but can also give the robotic arm the rotation and exact coordinates of the piece in question. This reduces missed parts, boosts machine efficiency and, in turn, saves the end user money. Vision systems can also read the 2D codes on parts to verify that the correct piece is being picked up and that all the nuts, bolts, and washers have been added correctly.
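Turning a vision result into a robot move is a small coordinate transform. The sketch below shows one way the part's reported center and rotation might be converted into a gripper target; the function name, parameters, and grip-offset convention are all hypothetical, chosen only to illustrate the idea.

```python
import math

def pick_pose(part_x, part_y, part_angle_deg, grip_offset):
    """Convert a vision result (part center + rotation) into a gripper target,
    assuming the grip point sits `grip_offset` units along the part's own
    x-axis. Illustrative only -- not an NI or robot-vendor API."""
    theta = math.radians(part_angle_deg)
    gx = part_x + grip_offset * math.cos(theta)
    gy = part_y + grip_offset * math.sin(theta)
    return gx, gy, part_angle_deg   # gripper also rotates to match the part

# A part found at (100, 50), rotated 90 degrees, gripped 10 units from center:
gx, gy, angle = pick_pose(100.0, 50.0, 90.0, 10.0)
```

Because the camera reports rotation as well as position, the same transform works no matter how the part lands on the line, which is exactly what a bare proximity sensor cannot provide.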
There are many other applications well suited for machine-vision systems. These range from verifying labels on soda bottles to counting the number of pills packaged in a bottle as it is prepared for a pharmacy. Machine-vision systems also easily perform noncontact measurements. Once engineers realize the roles machine vision can play in their new designs, the next question is whether to use a commercial off-the-shelf (COTS) machine-vision system or to create something custom.
Many machine builders think they can save money using a custom design instead of a COTS approach. But many costs associated with a custom design often go unnoticed until late in the game. Determining these costs up front makes it more obvious why COTS hardware is a more viable option.
One consideration is simply the time it takes to design a custom system versus the time it takes to implement a system using Cots components. It is no small task to create a custom vision sensor and set up I/O on that sensor to communicate with the rest of the machine. Years can go into the development of custom designs, and engineering time is not cheap. It’s often the highest cost associated with custom designs.
Support costs for custom designs can also run higher. With COTS products, support falls on the product vendor; with a custom design, the machine builder must maintain the system itself.
COTS machine-vision equipment often takes the form of smart cameras. These cameras combine an image sensor with a built-in processor that lets inspections run directly on the camera. Unlike traditional cameras, which return images, smart cameras return inspection results. This helps lower overall system complexity.
Typical of the new smart cameras are the NI 1722 and NI 1742 models from National Instruments Corp. Powered by 400 and 533-MHz PowerPC processors, respectively, both cameras use a monochrome VGA sensor of 640 × 480-pixel resolution that acquires images at rates up to 60 frames/sec (fps).
An RS-232 serial port provides signals that can go to a number of instruments and programmable-logic controllers (PLCs). Both cameras also have dual Gigabit Ethernet ports. One port communicates with PLCs or programmable automation controllers (PACs), such as the NI CompactRIO, for expansion I/O while the other reports results to the rest of the world. Both ports support the Modbus TCP protocol for connectivity to many PLCs.
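Modbus TCP is a simple, openly specified framing of Modbus over Ethernet, which is why so many PLCs speak it. As a sketch of what travels on the wire (independent of any NI API), the following builds a standard "Read Holding Registers" request: a 7-byte MBAP header followed by the function code, start address, and register count, all big-endian.

```python
import struct

def read_holding_registers_request(transaction_id, unit_id, address, count):
    """Build a Modbus TCP 'Read Holding Registers' (function 0x03) request.

    Frame layout, per the Modbus TCP spec: MBAP header (transaction id,
    protocol id = 0, length of remaining bytes, unit id) followed by the
    PDU (function code, starting address, register count), big-endian."""
    pdu = struct.pack(">BHH", 0x03, address, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_holding_registers_request(1, 0xFF, 100, 2)
# 12 bytes total: 7-byte MBAP header + 5-byte PDU
```

A PLC or a smart camera acting as a Modbus server would answer with the same transaction id and the requested register values, which is how inspection results can be polled over the camera's Ethernet port.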
As is typical with smart cameras, the NI models also come equipped with two optoisolated 24-V digital output lines that can connect to actuators, PLCs, and other devices. The digital outputs can drive up to 100 mA for generating pulse trains or single-shot pulses. This feature permits handling advanced applications such as stepper-motor control directly from the smart camera. A quadrature-encoder input on the NI 1742 lets the camera acquire images timed to a rotary or linear-drive system.
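A pulse train for a stepper is just a sequence of timed on/off intervals, one pulse per step. The sketch below computes such a schedule in Python; the function and its parameters are illustrative, showing only the timing arithmetic a pulse-generating output performs, not how the camera's digital lines are actually programmed.

```python
def pulse_train(steps, frequency_hz, duty_cycle=0.5):
    """Yield (on_time, off_time) pairs, in seconds, for a train of `steps`
    pulses at `frequency_hz`. A stepper driver advances one step per rising
    edge, so the pulse count maps directly to shaft angle."""
    period = 1.0 / frequency_hz
    on = period * duty_cycle
    for _ in range(steps):
        yield (on, period - on)

# A common 1.8 deg/step motor needs 200 pulses for one full revolution.
pulses = list(pulse_train(steps=200, frequency_hz=1000))
total_time = sum(on + off for on, off in pulses)   # 0.2 s for the whole move
```

Driving such a train from the camera itself means the inspection result and the motion command never leave the device, removing a round trip through an external controller.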
Lighting plays a critical role in any vision inspection system. The NI 1742 does away with the need for an external lighting controller because one is built directly into the camera. This lighting controller connects directly to current-driven light heads, sourcing up to 500 mA of dc current continuously with up to 1-A strobed current. The integrated controller reduces the amount of wiring needed, cuts the cost of additional hardware, and shortens development time because the application program interface (API) for the lighting controller is built into the image-acquisition API.
Smart cameras are designed to perform many basic and advanced machine-vision functions including edge detection, geometric pattern matching, optical character recognition, and 2D bar-code reading. This means that instead of integrating individual sensors for all these tasks, engineers can use one smart camera to handle all of them.
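To give a feel for the simplest of these functions, here is a minimal sketch of edge detection along a single scan line: threshold the brightness difference between neighboring pixels. Real vision tools use 2-D kernels and subpixel interpolation; this Python fragment only illustrates the core idea and uses no NI API.

```python
def find_edges(scanline, threshold):
    """Locate edges along a 1-D line of pixel intensities by thresholding
    the brightness change between adjacent pixels."""
    edges = []
    for i in range(1, len(scanline)):
        delta = scanline[i] - scanline[i - 1]
        if abs(delta) >= threshold:
            edges.append((i, "rising" if delta > 0 else "falling"))
    return edges

# A dark part on a bright background: two edges mark its boundaries,
# and their spacing gives a noncontact width measurement in pixels.
line = [200, 200, 198, 40, 38, 41, 199, 201]
edges = find_edges(line, threshold=100)   # [(3, 'falling'), (6, 'rising')]
```

Pattern matching, OCR, and bar-code reading build on the same raw material, which is why one smart camera can replace several single-purpose sensors.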
Of course, smart cameras still must be told what to look for. In the case of the NI vision platform, there are two options for setting up inspections. The first uses a menu-driven machine-vision package that demands no programming skills on the part of the user.
This is called the NI Vision Builder for Automated Inspection (AI) and ships with all NI smart cameras. It includes over 100 machine-vision tools such as pattern matching, OCR, DataMatrix readers, color matching, and so forth. The preconfigured inspection routines help reduce vision setup times dramatically.
Vision Builder targets users who have limited vision experience yet need standard vision-inspection processes. Most often these inspections need only a simple pass/fail output as the part passes in front of the camera.
The second option employs NI LabView Real-Time along with the Vision Development modules. Engineers already familiar with LabView data-flow programming can develop their own applications and get more flexibility than that available through the preconfigured routines of Vision Builder. The LabView graphical-programming language provides APIs for machine vision along with data acquisition, motion, instrument control, and many other measurement and control functions.
NI treats the smart camera as a LabView target, so it can handle most virtual-instrument (VI) programs written in LabView. If a programmer tries an operation the camera cannot perform, LabView flags the operation as unsupported.
The flexibility of LabView programming makes it possible to develop smart-camera VI modules without the presence of a smart camera. One vision installation had to be up and running by a certain date, but the camera wouldn't arrive until that morning. The VI module was created and tested using a monochrome video camera, a 640 × 480-pixel frame grabber, and the development PC as the processor. Once the smart camera arrived, all the programmer needed to do was drag the already written and debugged VI module over and drop it onto the camera target icon. The program ran without modification as the smart camera took on all duties. Only commands for the lighting controller built into the camera were added later, as lighting control was not supported by the test setup.
The current models of smart cameras can control simple servo loops. For example, a smart camera generated pulses to control a stepper motor that rotated a bottle for a 360° inspection. The stepper position was monitored using the quadrature input of the camera.
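Quadrature decoding, the operation behind that position monitoring, is a standard technique: channels A and B are 90° out of phase, so the order of their transitions gives direction as well as count. The Python sketch below shows the decoding logic on sampled (A, B) pairs; it is a generic illustration, not the camera's firmware.

```python
# Legal quadrature transitions: for each previous (A, B) state, the next
# state determines whether the shaft moved one count forward or backward.
TRANSITIONS = {
    (0, 0): {(0, 1): +1, (1, 0): -1},
    (0, 1): {(1, 1): +1, (0, 0): -1},
    (1, 1): {(1, 0): +1, (0, 1): -1},
    (1, 0): {(0, 0): +1, (1, 1): -1},
}

def decode(samples):
    """Return the net count for a sequence of (A, B) samples."""
    position = 0
    for prev, cur in zip(samples, samples[1:]):
        if cur != prev:
            position += TRANSITIONS[prev][cur]
    return position

forward = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
decode(forward)            # +4: one full quadrature cycle forward
decode(forward[::-1])      # -4: the same cycle reversed
```

Timing image acquisition off this count, rather than off a free-running clock, is what lets the camera grab frames at exact rotation angles during the 360° inspection.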