Industries that rely on automation — such as automotive manufacturing, packaging, material handling, and metal forming — employ sensors as the “eyes” of their intelligent processes. For decades, photoelectric sensors were the only choice for automated optics-based inspections that monitored processes and checked quality. Then, multi-component vision systems came along, making detailed area inspections possible, but at the cost of high prices and complexity. Today, vision sensors — sometimes called smart cameras — bridge the gap, combining ease of use with affordability. To leverage any of these technologies, however, engineers must understand how photoelectric sensors, vision systems, and vision sensors work.

In the beginning: photoelectric sensors

Photoelectric sensors originated in the late 1940s and helped the number of automated processes boom. A photoelectric sensor emits a visible or invisible light beam and responds when the beam's intensity at the receiver changes. A widely recognized example is the two-piece sensor linked to a garage door opener that prevents the door from closing when an object breaks the beam.
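In logic terms, a photoelectric sensor is a simple threshold detector. The sketch below, a minimal Python illustration with a made-up threshold and simulated intensity readings (not values from any real sensor), shows the idea: the output switches when the received beam intensity drops.

```python
def beam_broken(received_intensity, threshold=0.5):
    # The receiver reports a normalized beam intensity in [0, 1];
    # a reading below the threshold means an object blocks the beam.
    # The 0.5 threshold is an illustrative assumption, not a real spec.
    return received_intensity < threshold

# Simulated receiver readings as an object passes through the beam
readings = [0.95, 0.93, 0.12, 0.08, 0.91]
states = [beam_broken(r) for r in readings]
print(states)  # the two low readings register as "object present"
```

Real sensors modulate the beam and filter the receiver to reject ambient light, but the decision at the end is this same one-bit comparison.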

Industrial-grade photoelectric sensors are available in several styles and technologies to accommodate a range of settings and applications. The following are some of the most common (additional information can be found at http://www.bannerengineering.com/training/):

  • Opposed sensors are among the most reliable and have two components: an emitter that generates the beam and a receiver that detects it. The two components line up opposite each other — for example, on both sides of a conveyor — to detect objects passing between them. Their main drawbacks: they cannot detect clear materials such as glass, they perform poorly at extremely short ranges, and they require wiring on both sides of the line. They have many advantages, though, such as counting objects and sorting large items from small ones. One example is a mail-sorting line where pieces taller than a letter are moved to another line by a diverter.

  • Fixed-field sensors work at close range, require wiring to only one device, and house the emitter and receiver in a single unit. They function by bouncing the beam off an object within a fixed sensing field and back to the receiver; they can also signal when no object is present in the field. One application is mounting them over a conveyor in a bottling plant to detect uncapped bottles.

  • Like fixed-field sensors, retroreflective sensors contain an emitter and receiver in the same unit. Unlike fixed-field sensors, they bounce the beam off a reflector mounted opposite the sensor. One drawback is that a shiny object can go undetected if it reflects the beam back to the receiver itself; polarizing filters counteract this problem by ensuring the sensor responds only to light returned by the reflector. Retroreflective sensors are an alternative to opposed sensors when sensing or wiring is possible from only one side.

In the past 50 years, advances in electronic manufacturing have created smaller and faster photoelectric sensors, often offering more features at a lower cost. However, because photoelectrics sense along a narrow beam, their usefulness is limited. First, an accidental bump from a passing worker or vibration on the line can misalign the sensor. Second, changing the line to inspect a different object usually means changing the alignment, the fixturing, and often the sensor itself. Third, examining an area larger than the beam requires multiple sensors, which add fixturing and alignment work and raise the risk of crosstalk; advances in signal modulation have reduced, but not eliminated, that risk. Finally, a photoelectric sensor yields only one piece of information: whether an object is or is not present.

The vision revolution

Like photoelectric sensors, vision systems and vision sensors are optical devices that sense light. Unlike photoelectrics, their abilities go beyond detecting an object's presence or absence — they proficiently automate complicated inspections. For example, a photoelectric sensor can determine whether a beverage bottle is capped, but a vision sensor can ensure the cap has the correct date stamp.

Both vision systems and sensors use imagers (cameras) to capture a scene within the field of view. Because they view an area rather than a point, they don't need to be aligned as precisely as photoelectrics, nor are they as vulnerable to vibration and other disturbances.

After capturing an image, vision systems and sensors process and analyze it regardless of the target's orientation. For example, they can judge whether the text stamped on a carton's side is correct even when the carton is upside down or on its side. They then communicate the judgment to external equipment for processing decisions. One such case is informing a robot how to correctly pick up an asymmetrical object. Another is flagging an incorrectly labeled bottle so it is removed from a packaging line.

Vision systems and vision sensors share many traits:

  • Optics — To capture clear images, they use high-quality optics that minimize distortion and have low chromatic aberration.

  • Lighting — Dedicated lighting is critical to a vision-sensing application's success. The light's intensity and angle must create a high-contrast image that distinguishes the target object from the background and highlights its key parts, called features of interest. Shape, surface texture, color, and translucency all influence the contrast between the features of interest and the background.

  • Algorithms — Vision systems and sensors analyze images using algorithms, or tools. While toolsets vary among products, most include the same basic tools. During error proofing, tools compare “good” parts stored in their memory to the parts they are inspecting. For example, a vision device configured to recognize a machine part with eight bolts correctly inserted knows to reject a part with only seven bolts or with tilted bolts. It makes this determination regardless of the part's location in the field of view and through a full 360° of rotation.
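As a concrete illustration of the bolt example above, error proofing can reduce to comparing what a detector reports against the stored “good” part. The detection step itself is abstracted away in this sketch; the function name, the (x, y, tilt) format, and the tilt tolerance are hypothetical, not taken from any actual product.

```python
def passes_inspection(detected_bolts, expected_count=8, max_tilt_deg=5.0):
    # Pass only if every expected bolt is found and none is tilted
    # beyond tolerance. Because only the count and per-bolt angles
    # are checked, the part's position in the field of view and its
    # rotation do not affect the verdict.
    if len(detected_bolts) != expected_count:
        return False
    return all(abs(tilt) <= max_tilt_deg for _x, _y, tilt in detected_bolts)

# Each bolt: (x, y, tilt in degrees) from a hypothetical detection step
good = [(10 * i, 20, 0.5) for i in range(8)]
missing_one = good[:7]                # only seven bolts found -> reject
tilted = good[:7] + [(70, 20, 12.0)]  # eighth bolt leans too far -> reject
```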

Vision systems

Vision system technology developed in the 1950s, and vision systems came into general use in the late 1970s. Their key components include a camera, a “frame grabber” that captures image data, and a PC that processes it. A vision system relies on custom software — including the toolset and user interface — designed to meet an application's requirements for gathering, analyzing, reporting, and archiving data. The complexity of vision systems requires a specialist, usually an outside consultant, to design, build, and maintain an integrated system for a specific application.

A vision system is powerful and highly customizable. It performs extremely detailed inspections at very high speeds, relays data to a mainframe, and logs every image. But that power comes with a hefty price tag (upwards of $10,000), restricting vision systems to the limited number of applications that justify the cost. What's more, because vision systems are application-specific, they are difficult to modify for other applications.

Common tools
TOOL | FUNCTION | EXAMPLE APPLICATION
Average gray scale | Calculates average pixel intensity | Checking a label's presence
Binary large object (blob) | Detects groups of similar pixels by intensity (lightness or darkness) and reports each group's location, number, and size | Detecting whether any cell in a packet is missing a pill
Edge | Detects the transition between bright and dark pixels | Determining whether a carton's end flap is sealed
Object | Finds, counts, and measures the spaces between edge transitions | Measuring the pitch and gap of pins on an integrated circuit
Geometric count | Locates patterns regardless of orientation and partial occlusion | Verifying that the correct lid has been placed on a container of ice cream
Pattern count | Uses a pattern as a template for locating the same pattern in new images | Verifying date and lot codes
Measure | Measures the distance between specified points | Determining whether a bolt is screwed in straight; if it is lopsided, the distance will be off
Vision products vary in the contents of their toolsets. Each performs a specialized task for a variety of end uses.
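Two of the tools in the table are simple enough to sketch in plain Python. The snippet below is a minimal illustration, not a vendor implementation: average gray scale is just the mean pixel intensity, and a blob tool can be approximated by flood-filling connected dark pixels. The tiny “pill packet” image and the 128 intensity threshold are invented for the example.

```python
def average_gray_scale(image):
    # The 'average gray scale' tool: mean intensity over all pixels.
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def blob_count(image, threshold=128):
    # A minimal 'blob' tool: count connected groups of dark pixels
    # (intensity below threshold) using 4-connectivity flood fill.
    h, w = len(image), len(image[0])
    seen = set()
    blobs = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] < threshold and (y, x) not in seen:
                blobs += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if (cy, cx) in seen:
                        continue
                    seen.add((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and image[ny][nx] < threshold:
                            stack.append((ny, nx))
    return blobs

# 0 = dark pill, 255 = bright packet; two pills present
packet = [
    [255, 255, 255, 255, 255],
    [255,   0, 255,   0, 255],
    [255,   0, 255,   0, 255],
    [255, 255, 255, 255, 255],
]
print(blob_count(packet))  # two connected dark groups, one per pill
```

In the pill-packet application from the table, a count below the expected number of cells would flag the packet for rejection.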

Vision sensors

Vision sensors are a newer innovation that became widespread in the last decade. Because they analyze an entire scene rather than a point, vision sensors can take on applications that formerly required multiple photoelectric sensors. For example, verifying that each case of mustard on a packaging line contains 24 bottles would require 24 photoelectric sensors, one aimed at each bottle, and every case would have to pass under the array in exactly the same position. A single vision sensor, on the other hand, can detect whether any bottle is missing; the case's position or rotation on the line does not matter, as long as it lies within the sensor's field of view.

In addition, vision sensors can take over applications that formerly relied on human inspection because they were too complicated for photoelectric sensors yet did not justify the cost of a vision system. In a print shop, a mark printed on the last piece in a run signals the end of a group to bundle together. At one time, a person watched pieces coming off the printer for this mark; now a vision sensor detects it.

Vision sensors' simplicity suits applications that don't require the complexity of a PC-based custom vision system. Images are analyzed inside the camera-like sensor itself (rather than on an attached PC) and can be displayed on an optional monitor. Vision sensors are also general-purpose industrial devices that can be moved and reconfigured for many applications. Depending on speed and image clarity, vision sensors range from $1,000 to $2,500 — about one tenth the price of a vision system.

Beam break

Photoelectric sensors respond to objects that interrupt the light beam traveling from emitter to receiver. An emitter and receiver can be in separate units or in one unit.

Opposed sensors

Opposed sensors mount on either side of a conveyor to detect and count integrated circuit chips.

Fixed-field sensors

This fixed-field sensor bounces an invisible beam off the caps, which are in its fixed field. Uncapped bottles do not reflect the beam, triggering the machine to divert them from the line.

Retroreflective sensors

This sensor ensures that boxes do not fall off the line. Here, a mounted reflector bounces a beam back to the sensor, which has both an emitter and receiver.

LIGHTING BASICS

The most important factor in a vision-sensing application is lighting. It separates the target object from its background and highlights the features of interest.

Linear tools

Vision systems and sensors use algorithms (tools) to analyze captured images. A linear tool scans a single line of pixels; its speed and precision make it best suited to predictable areas of interest. For example, a vision sensor using a linear edge tool can verify that vials rushing past on an assembly line all have tightly sealed lids.
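A linear edge tool can be approximated as a scan along one row of pixels that reports where brightness jumps. This is a minimal sketch with invented pixel values and an arbitrary threshold, not a real toolset's algorithm:

```python
def edge_positions(scan_line, threshold=50):
    # Report each index where adjacent pixels differ by more than the
    # threshold: a minimal linear 'edge' tool on one row of pixels.
    return [i for i in range(1, len(scan_line))
            if abs(scan_line[i] - scan_line[i - 1]) > threshold]

# Bright background with a dark vial lid in the middle of the scan:
# a properly seated lid produces exactly two edges on this line.
line = [200, 200, 200, 30, 30, 30, 200, 200]
print(edge_positions(line))  # bright-to-dark and dark-to-bright transitions
```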

Area tools

Vision tools look for transitions in the image. Here, an area tool examines an entire box, or area, for any deviation. It is most useful when the target's location varies, such as a case of plastic bottles missing one or more units anywhere in the box.
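One way to picture an area tool is as a whole-region comparison against a stored reference image: any pixel, anywhere in the box, that deviates beyond a tolerance flags the part. The tiny images, tolerance, and function name below are illustrative assumptions, not a product's actual method:

```python
def area_deviates(image, reference, tolerance=30, max_bad_pixels=0):
    # Flag the area if more than max_bad_pixels differ from the stored
    # reference by more than the tolerance. Unlike a linear tool, the
    # whole region is checked, so the defect may lie anywhere in it.
    bad = sum(1 for row_img, row_ref in zip(image, reference)
              for p, r in zip(row_img, row_ref)
              if abs(p - r) > tolerance)
    return bad > max_bad_pixels

reference = [[0, 0], [0, 0]]    # full case: every position dark (bottle present)
case_ok   = [[5, 3], [2, 0]]    # small lighting variation only -> passes
case_bad  = [[5, 3], [255, 0]]  # one bright cell: a bottle is missing -> flagged
```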

VISION SENSOR APPLICATIONS

Lower cost and improved operation have led machine designers and process engineers to incorporate vision sensors where inspections were impossible or relied on humans or multiple photoelectric sensors. Industrial uses include verification, gauging and measuring, orientation, flaw detection, and sorting. The following examples are actual applications:

  • Inspecting robot-applied adhesive beads on an automotive door panel to ensure that the bead is unbroken, meets width specifications, and is in the proper place

  • Verifying that foreign material has not fallen into a soft-drink bottle before capping it

  • Confirming that package labels are affixed in the correct location and match the packaged product

  • Inspecting stamped metal for microscopic flaws at more than 150 parts per minute — more than 13 times faster than human inspection