
Eye Spy

Jan. 1, 2004
Locating and positioning work, tracking the flow of parts, and inspecting output for quality and consistency — easy enough, right?

When human eyes are too slow or too costly for screening, machine vision keeps better track of fast-moving production lines and tight tolerances. And since their first trials in the semiconductor industry twenty years ago, vision systems have evolved considerably. Also called automated optical inspection, these systems continuously monitor objects and can now extract complex information from images and generate sophisticated decisions.

The first machine vision function is inspection: measuring machined-part dimensions for out-of-tolerance conditions, for instance, often right on production lines. The second is tracking; barcodes and data matrices affixed to products or boxes are scanned for quick identification. The third is guidance. Once a vision system has processed images, inspection results are used by a waiting controller. (For example, a system might automatically make measurements and then guide a robot arm into place.) A machine vision system often performs a combination of these functions: Parts might be moved into position for the detection of blemishes, scanning, sorting, and positioning (to help a picker grab them) before they’re moved along to make way for more parts.
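
To make the division of labor concrete, here is a minimal, self-contained sketch of inspection and guidance in a single cycle. It runs on a synthetic one-dimensional intensity profile standing in for a camera image; the threshold, part width, and tolerance values are illustrative assumptions, not any vendor's defaults (tracking, i.e., barcode reading, is omitted for brevity).

```python
import numpy as np

def measure_width(profile, threshold=128):
    """Inspection: count pixels brighter than a threshold (a crude
    stand-in for gauging a backlit part's width)."""
    return int(np.sum(profile > threshold))

def locate_center(profile, threshold=128):
    """Guidance: centroid of the bright region, so a controller could
    position a picker over the part."""
    idx = np.nonzero(profile > threshold)[0]
    return float(idx.mean()) if idx.size else None

# Synthetic "part": 40 bright pixels centered near pixel 100.
profile = np.zeros(200)
profile[80:120] = 255

width = measure_width(profile)       # inspection measurement
passed = abs(width - 40) <= 2        # tolerance check (nominal 40 px)
center = locate_center(profile)      # guidance result for the picker
print(passed, width, center)         # True 40 99.5
```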

CCD cameras

Vision begins at the sensing device. The most prevalent design, CCD cameras are charge-coupled devices developed in the 1960s. The wafer, often built from metal oxide semiconductor capacitors, is a solid-state electronic component segmented into an array of individual light-sensitive cells called photodetectors. (Because photodetectors are elements of the picture whole, they’re also called pixels.) CCD pixels sense incoming light by the photoelectric effect, the tendency of certain materials to release electrons when struck by photons of light. As long as light impinges on the pixels, electrons (fenced in by nonconductive boundaries) accumulate in them. A good industrial CCD has a pixel array of 640 x 480 or so, with each pixel less than 10 microns square.

The row-by-row processing of data in a CCD is where the sensor gets its name. Arrays of polysilicon electrodes — one for each pixel — are packed closely enough that electrons can travel from one electrode to the next. A sequence of clock signals shifts charges across the chip surface toward an amplifying register that measures charge. Like a bucket brigade, a row of information is transferred to the readout register while the rows behind it shift closer. After being fed out, electric charges are released and the register is emptied for the next row in line, until all information is read.
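
The bucket-brigade transfer is easy to picture in code. Below is a toy simulation of the readout order: an idealized, noiseless model written for illustration, not a description of any real sensor's electronics.

```python
import numpy as np

# Idealized CCD readout: each clock cycle, all charge rows shift one
# step toward the readout register; the register is then clocked out
# pixel by pixel through the output amplifier and emptied.
rows, cols = 4, 6
sensor = np.random.randint(0, 256, size=(rows, cols))  # accumulated charge
readout = []

for _ in range(rows):
    register = sensor[0].copy()           # next row enters readout register
    sensor = np.roll(sensor, -1, axis=0)  # remaining rows shift one step
    sensor[-1] = 0                        # vacated row is left empty
    readout.extend(int(q) for q in register)  # register emptied serially

# Every pixel is read exactly once, row by row, through a single output.
assert len(readout) == rows * cols
```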

A drawback to CCDs is that to prevent signal degradation, clock pulse amplitude and shape must be tightly controlled; this requires a clock chip with multiple power rails. CCDs also suffer from blooming, where charge leaks from one photodetector into its neighbors. Finally, the step-by-step readout of a CCD is not conducive to high speeds. However, a solution exists: In higher-end frame-interline-transfer CCDs, readout registers as large as the light-receptor area read absorbed information in one pass.

CMOS cameras

Commoditization is partly responsible for the spread of machine vision, and CMOS cameras could keep this trend going in the near future. CMOS, or complementary metal oxide semiconductor, is actually a generic term for a fabrication process used to make not only cameras but also computer RAM, processors, and other semiconductors. Because these cameras are made with the same equipment as other common components, economies of scale translate into cost savings.

Used in digital imaging, active-pixel cameras integrate a separate charge amplifier at each pixel. Performance increases while noise levels fall, but (because each pixel needs at least three transistors) more costly silicon area is used. Active support circuitry is located near the light receptors to cancel noise right at the pixel. CMOS pixel fill factors are lower than with CCDs, because the extra circuitry bordering each pixel takes up space. And though the problem has been addressed in some designs, CMOS image quality can suffer from fixed-pattern noise caused by mismatched amplifiers at the pixels. What are the benefits of CMOS, then? Functions can be integrated right into CMOS chips, eliminating external circuitry for analog-to-digital conversion, clocks, and white balancing. This translates into reduced power consumption — sometimes to just one-third that required for comparable CCDs. Another benefit over CCDs: If only a small section of an image is of interest, it can be accessed directly.
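
That last point, direct region-of-interest access, is the easiest to demonstrate. The sketch below models it with plain array slicing over a synthetic 480 x 640 frame; the window size and location are arbitrary, and no real sensor API is implied.

```python
import numpy as np

# Synthetic full frame, 480 x 640, 8-bit pixels.
frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

# Read only a 64 x 64 window around a feature at (row=200, col=320);
# a CMOS sensor can address this region without clocking out the rest.
r, c, half = 200, 320, 32
roi = frame[r - half:r + half, c - half:c + half]

# Payload shrinks from 480*640 to 64*64 pixels: far less data to
# digitize and transfer per cycle.
print(frame.size / roi.size)   # 75.0
```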

PCs and frame grabbers

After images are collected, a computer must process them and draw conclusions. Though other systems are gaining ground, the ever-increasing flexibility and performance of personal computers have secured their place in complex designs and displaced proprietary systems. Even standard-issue PCs can juggle information from multiple cameras. On these PC-based systems, an interface board or graphics card (also called a frame grabber) most often converts camera images into digital data. The computer, in turn, uses that data to make decisions about the manufacturing environment. But besides converting and transferring data, frame grabbers compensate for poor lighting, varied camera data formats, and simpler optics, while interfacing with conveyors, lighting systems, and rejection mechanisms through I/O. While frame grabbers were once very application-specific, newer designs are more expandable and generic to accommodate changing requirements.
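
In outline, the PC side of such a system is an acquire-decide-actuate loop. The sketch below uses a stub FrameGrabber class standing in for a vendor SDK; the class, its methods, and the too-dark rejection rule are all illustrative assumptions, not any board's actual driver interface.

```python
import numpy as np

class FrameGrabber:
    """Stub standing in for a frame-grabber SDK: returns synthetic
    digitized frames and fakes a discrete reject-output line."""
    def acquire(self):
        return np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
    def set_output(self, line, state):
        print(f"I/O line {line} -> {state}")

def frame_ok(frame, min_mean=100):
    # Placeholder decision rule: reject frames that are too dark overall.
    return frame.mean() >= min_mean

grabber = FrameGrabber()
for _ in range(3):                  # three inspection cycles
    frame = grabber.acquire()       # digitized camera image
    # Drive the rejection mechanism through the grabber's I/O line.
    grabber.set_output(line=0, state=not frame_ok(frame))
```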

There are some drawbacks to PC-based systems running non-real-time operating systems. Their inability to prioritize communication — especially when multiple cameras are involved — is problematic in high-speed and safety applications. Explains Jayson Wilkinson, Motion Control Product Manager at National Instruments Corp. in Austin: “Windows doesn’t prioritize tasks as well, so things like the mouse could throw the whole system off. But solutions exist: Real-time environments ensure that assigned tasks take place in a deterministic amount of time. In some development environments, engineers can even develop code in familiar Windows and then download that code to a real-time target. And now, some small stand-alone units can connect multiple FireWire cameras to ensure that vision tasks are given highest priority.”

George Blackwell of Cognex Corp., Milwaukee, explains, “The development environment allows users to build (set up, and program or configure) vision applications to meet specific needs. PC-based systems have a programmable environment and offer the most capable vision tools. They also provide the fastest performance because they rely on the latest CPU architectures, and as a result they are generally used for more complex or mathematically intensive applications.” However, Blackwell also points out that applying PC-based systems requires more knowledge of low-level programming languages such as C++ or Visual Basic.


All bundled up

Intelligent cameras are quickly growing in use. They combine a CCD or CMOS sensor, a digital signal processor, discrete I/O ports, and software in one palm-sized housing. Mark Sippel, Vision Product Manager at Omron Electronics LLC of Schaumburg, Ill., explains, “The stand-alone style is becoming very common in industry, especially for end-users, due to their fast setup and lower overall cost.” Further improvements now include barcode reading and optical character recognition in a single package; applications that once required four or five different devices can now be solved with one. Adds Dr. Phil Heil, Applied Engineering Manager of DVT Corp. in Duluth, Ga., “Now better intelligent cameras can measure down to a tenth of a pixel with 100% reliability, or far smaller when some error is acceptable — with speeds to 2,000 parts per minute.” Intelligent cameras may not be suitable for the fastest inspection rates, though some can approach 10,000 total inspections per minute.
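
Resolving “a tenth of a pixel” sounds surprising until the arithmetic is written out: because a real edge blurs across neighboring pixels, its position can be interpolated between them. Here is a toy subpixel edge locator using linear interpolation across an intensity profile; it illustrates the general idea only, not DVT's algorithm, and the profile and threshold values are invented.

```python
import numpy as np

# Intensity profile across an edge: dark part, bright background.
profile = np.array([10, 12, 11, 40, 200, 210, 208], dtype=float)
threshold = 128.0

# Find the last pixel below the threshold...
i = int(np.argmax(profile >= threshold)) - 1
# ...then interpolate where the intensity crosses the threshold
# between pixel centers i and i + 1.
frac = (threshold - profile[i]) / (profile[i + 1] - profile[i])
edge = i + frac
print(round(edge, 3))   # 3.55: the edge sits between pixels 3 and 4
```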

Two major classes exist. The first type: sensors that are programmed using a single button or a video game controller. The second type: sensors that are set up through connection to a PC. The first type is easy to get running, but is not as flexible. These sensors are best when a great deal of automation hardware is already present to handle communication. Dr. Heil explains how the second type works: “On intelligent cameras, a PC with a software interface can be used for setup; after setup, the PC is removed. With PCs and their universally familiar Windows interface, this makes setting up advanced configurations easier. In this way, machine vision inspection can coordinate with other manufacturing processes.” One consideration for the PC-setup type: Some of these cameras require reconnection to a PC for even small setting adjustments.

Even with image-processing libraries and ease of programming (through spreadsheets or other visual methods), intelligent cameras are still limited in functionality compared to PC host-based systems. The computer is actually housed in the camera, so firmware and processing algorithms must reside onboard. When faced with the same environmental challenges as more expensive vision solutions, their adaptability limitations sometimes cause snags. Integrated vision (which can be plugged into Ethernet or other data networks) attempts to address this problem. But here, another issue threatens: The cost of an intelligent camera system with advanced functionality is only slightly below that of a low-end PC host-based system with full processing power. In contrast to actual PC-based systems, though, vision sensors generally require no programming, provide more user-friendly interfaces, and offer easier-to-use configurable environments.

Blackwell observes, “It’s interesting that over the last several years vision sensors have become increasingly sophisticated while the cost of PC-based systems has simultaneously come down; the gap between PC and sensor-based platforms continues to narrow. But — though application complexity and other variables dictate final hardware and software requirements — new users (more often than not) end up choosing vision sensors because generally these offer more justifiable price tags. Vision sensors are also easily integrated to provide single-point inspections with dedicated processing, and most offer built-in Ethernet communication for factory-wide networkability.”

Indeed, connection trends will influence further development of intelligent vision. “Throughout the 1990s, Ethernet outgrew its humble beginnings and industrial devices began communicating over this now-ubiquitous interface,” adds Dr. Heil. “Then five years ago the first intelligent vision sensor with onboard Ethernet was introduced. If current trends continue, intelligent sensors should grow with standards and support communication — from inexpensive PLCs to expensive enterprise resource planning software packages.”
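
What onboard Ethernet enables in practice is simple enough to sketch: the sensor pushes each inspection result to a line controller or higher-level system as a short network message. In the example below, the host address, port, and comma-separated message format are illustrative assumptions, not a standard protocol.

```python
import socket

def report_result(part_id: str, passed: bool,
                  host: str = "192.168.0.10", port: int = 5000) -> None:
    """Send one inspection result over plain TCP (hypothetical format:
    'part_id,PASS' or 'part_id,FAIL', newline-terminated ASCII)."""
    msg = f"{part_id},{'PASS' if passed else 'FAIL'}\n".encode("ascii")
    with socket.create_connection((host, port), timeout=2.0) as conn:
        conn.sendall(msg)

# report_result("PN-0042", True)   # e.g., called after each cycle
```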

Software

The pricey part of machine vision usually isn’t hardware — it’s the integration and programming. One way to simplify this step: application software. This software, which resides a level above the driver software, runs whole systems. It comes in a variety of forms, from turnkey application software to development tools used to create applications. Turnkey software is increasingly popular, but is sometimes more difficult to integrate into a system. For example, very powerful and easy-to-use vision software may work well for vision applications, but not for motion control or data acquisition. Vision applications beyond simple dimensioning, barcode, or data-matrix reading all require some level of expertise to turn off-the-shelf equipment into a successful vision system. But Wilkinson explains, “One solution to this problem is to use software that generates code for more general-purpose development environments; some software is designed to make application-suitable vision algorithm creation easier.”
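
For scale, the “simple dimensioning” that turnkey tools automate amounts to a few array operations. The sketch below thresholds a synthetic image and measures the bounding box of the bright blob; real packages wrap these same steps behind configurable tools, and the image and threshold here are invented for illustration.

```python
import numpy as np

# Synthetic image: a 40 x 15 pixel bright part on a dark field.
image = np.zeros((100, 100), dtype=np.uint8)
image[30:70, 40:55] = 220

mask = image > 128                  # fixed-threshold segmentation
ys, xs = np.nonzero(mask)           # coordinates of all part pixels
height = ys.max() - ys.min() + 1    # bounding-box height: 40 rows
width = xs.max() - xs.min() + 1     # bounding-box width: 15 columns
print(height, width)                # 40 15
```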

Blackwell adds, “While reliability is a key hardware differentiator, it’s actually the software that differentiates a vision system’s performance and operation. In that regard, vision software tool robustness and development environment are key differentiators.” Software development environments allow users to build vision applications to meet specific needs, and some systems even allow users to choose between different software development environments based on individual skill and experience levels. Libraries of robust vision tools also help handle more complex applications.

National Instruments Corp. >> (888) 280-7645 • Omron Corp. >> (800) 556-6766 • Coreco Inc. >> (800) 361-4914 • Cognex Corp. >> (508) 650-3000 • DVT Corp. >> (770) 814-7920.
