When machine vision first came onto the scene in the 1970s and 80s, the technology generally over-promised and under-delivered. Systems were very costly — running $15,000 and up — and their operation and maintenance required highly trained technical staff. For these reasons, machine vision was initially used only for applications in which production mistakes were particularly costly. Now, many more industrial manufacturers are using machine vision to ensure product quality, traceability, and production efficiency. Why?

Today, machine vision is more affordable and user-friendly, and offers significantly more features and functionality. Traditional vision systems (used in applications we'll detail) consist of separate hardware and software that often require significant programming time and the use of an auxiliary computer. However, some newer vision systems combine multiple elements in a single unit, with programming performed through an integrated touchscreen, eliminating the need for a PC or other programming device.

Vision uses

Photoelectric and ultrasonic sensors are suitable for applications in which one specific area on the same type of part must be examined. In contrast, machine vision analyzes and interprets data from an entire image scene, rather than a single point. This capability allows for inspection of larger object areas, multiple part features, and features that differ from the surrounding area in more than one way — in texture, color, and height, for example.

Vision sensors can also be programmed to distinguish bad parts from good; inspect multiple parts with one feature of interest; and make multiple inspections with different criteria, when multiple parts travel on the same line.

Functionality

Vision sensors perform inspections in three basic steps. First, the sensor's camera acquires an image of the part. Next, a processor analyzes this image and determines whether the inspection passes or fails. Finally, the sensor reports the result to the manufacturing line.
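The acquire-analyze-report cycle can be sketched as a minimal pipeline. All function names and the pass criterion below are illustrative stand-ins, not any vendor's actual API:

```python
# Minimal sketch of the acquire -> analyze -> report inspection cycle.
# Every name here is hypothetical; a real sensor exposes these steps
# through its own camera, processor, and I/O interfaces.

def acquire_image():
    # Stand-in for a camera capture: a tiny 4x4 greyscale image.
    return [
        [200, 200, 30, 30],
        [200, 200, 30, 30],
        [200, 200, 30, 30],
        [200, 200, 30, 30],
    ]

def analyze(image, threshold=128):
    # Toy pass/fail criterion: at least half the pixels are bright.
    bright = sum(1 for row in image for px in row if px > threshold)
    total = sum(len(row) for row in image)
    return bright >= total / 2

def report(passed):
    # Stand-in for a discrete output or fieldbus message to the line.
    return "PASS" if passed else "FAIL"

result = report(analyze(acquire_image()))
```

A real inspection would replace the toy criterion in `analyze` with one of the tool sets described below.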

Capabilities are determined by hardware (including camera and controller) and software, consisting of controls, graphical user interface, and image algorithms. While camera resolutions and image quality have improved greatly over the past years, improved capabilities and accuracy are largely due to improved processors and memory density. Greater storage space and speed allow execution of more complex algorithms in a timely fashion.

Vision-sensor tools allow differentiation of characteristics that identify good and bad parts. One caveat: algorithms and processors (even the fastest) cannot compensate for poor images. Increasing the contrast between a good and a bad part does more for application robustness than most technology improvements.

Application examples

Typically, vision sensors are loaded with application-specific tool sets. For instance:

  • A locate tool is an edge-based tool that finds absolute or relative target position in an image by finding a particular edge. This tool can be used to quickly locate the position of a label on a package, for example.

  • To match letters and numbers on a label, a pattern-find tool finds the absolute position and rotation of a taught pattern within the search region of interest with normalized gray scale correlation or geometric-based pattern matching techniques. The template pattern information is stored in memory and all potential matches are compared to it.
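The normalized greyscale correlation mentioned above can be sketched in a few lines: slide the taught template over the search region and score each position, with 1.0 meaning a perfect match. This toy version handles translation only (real pattern-find tools also recover rotation), and all names are illustrative:

```python
# Sketch of normalized greyscale correlation for pattern finding.
# Illustrative only: handles translation, not rotation.

def ncc(patch, template):
    # Normalized cross-correlation of two equal-size pixel lists.
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = sum((p - mp) ** 2 for p in patch) ** 0.5
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    return num / (dp * dt) if dp and dt else 0.0

def find_pattern(image, template):
    # Return (row, col, score) of the best template match in the image.
    th, tw = len(template), len(template[0])
    flat_t = [px for row in template for px in row]
    best = (0, 0, -1.0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [image[r + i][c + j]
                     for i in range(th) for j in range(tw)]
            score = ncc(patch, flat_t)
            if score > best[2]:
                best = (r, c, score)
    return best

image = [
    [10, 10, 10, 10, 10],
    [10, 10, 250, 200, 10],
    [10, 10, 200, 250, 10],
    [10, 10, 10, 10, 10],
]
template = [[250, 200], [200, 250]]
row, col, score = find_pattern(image, template)
```

In practice the taught template is stored in sensor memory, and every candidate position in the region of interest is scored against it exactly as the nested loop above does.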


More specifically, in the automotive industry, machine vision is used to verify that piston rings — which can be accidentally placed 180° out of position — are assembled correctly. The vision sensor captures an image, and the processor analyzes it to detect whether the part is positioned to receive another component. If the part is incorrectly oriented, the vision sensor will not detect a predetermined edge and will warn the operator.

A common application in the pharmaceutical industry is blister package verification. Tablets are put into blister pockets on the material web; machine vision checks that a tablet is present in each blister pocket, and for broken tablets or foreign materials. More specifically, the vision sensor is taught to recognize a good image: This model is then compared to each collected image; if deviations appear, the sensor sends a signal to the process controller, stopping the machine and allowing an operator to intervene.
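The taught-model comparison can be sketched as follows, using mean absolute greyscale difference as a stand-in deviation metric (real sensors use more sophisticated comparisons; all names and the tolerance value are illustrative):

```python
# Sketch of golden-image comparison for blister verification:
# compare each captured pocket region to a taught "good" model and
# flag deviations above a tolerance. Metric and names are illustrative.

def pocket_deviation(captured, model):
    # Mean absolute greyscale difference between the two regions.
    diffs = [abs(c - m)
             for crow, mrow in zip(captured, model)
             for c, m in zip(crow, mrow)]
    return sum(diffs) / len(diffs)

def inspect(captured, model, tolerance=20):
    # True = pocket matches the taught good image closely enough.
    return pocket_deviation(captured, model) <= tolerance

good_model = [[200, 200], [200, 200]]   # taught image: full pocket
empty_pocket = [[40, 40], [45, 40]]     # missing tablet: much darker
ok = inspect(good_model, good_model)
missing = inspect(empty_pocket, good_model)
```

A `False` result would map to the stop signal sent to the process controller.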

Machine vision also executes gap pitch measurement in the electronics industry. A roll of thin metal stock passes through a stamping machine where it is stamped into individual (but connected) pins — which in turn must be straight and spaced at specific intervals for later manufacturing processes. If one is bent or positioned incorrectly, the gap between the adjacent pins changes.

For this application, a fiber optic sensor can also be paired with a vision sensor — to detect guide holes and trigger the vision sensor's camera to capture images. The vision sensor's processor then measures the gap (or pitch) between the last edge of one pin and the leading edge of the next.
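The pitch measurement amounts to finding edges along a scan line and measuring the distance between a pin's trailing edge and the next pin's leading edge. A minimal sketch on a one-dimensional intensity profile (bright = pin, dark = gap; names and tolerances are illustrative):

```python
# Sketch of gap-pitch measurement on a 1-D intensity profile taken
# across stamped pins. Bright pixels are pins, dark pixels are gaps.

def edge_positions(profile, threshold=128):
    # Indices where the profile crosses the threshold (an edge).
    edges = []
    for i in range(1, len(profile)):
        if (profile[i - 1] > threshold) != (profile[i] > threshold):
            edges.append(i)
    return edges

def gap_widths(profile, threshold=128):
    # Width of each dark gap, assuming the scan starts on a pin,
    # so edges alternate pin->gap (falling) and gap->pin (rising).
    edges = edge_positions(profile, threshold)
    return [edges[i + 1] - edges[i] for i in range(0, len(edges) - 1, 2)]

profile = [200, 200, 20, 20, 20, 200, 200, 20, 20, 200, 200]
widths = gap_widths(profile)
# Flag any gap outside an illustrative 2-3 pixel tolerance band.
bad = [w for w in widths if not 2 <= w <= 3]
```

A bent or mispositioned pin would widen one gap and narrow its neighbor, landing both in `bad`.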

For more information, call (800) 809-7043 or visit bannerengineering.com.

Technology improvements in machine vision

Many of the complex mathematical algorithms that allow the latest machine vision systems to make sophisticated, accurate analyses were first developed in the 1930s — but only in theoretical form. Earlier hardware was simply not fast enough (or big enough, storage-wise) to effectively put them to work.

Recently, processor speed and storage space have advanced enough to unleash the capabilities of these algorithms.

One simpler image-analysis tool is binary large object (BLOB) analysis. It was used predominantly in the early days of machine vision, and is sometimes still applied. How does it work? The algorithm assigns a greyscale value between 0 and 255 to every pixel in a region of interest. The designer or user then sets a threshold value between 0 and 255, and every pixel within the region of interest is assigned a 1 or 0: 1 represents bright and 0 represents dark. In other words, if a pixel's greyscale value is above the threshold, it is considered a 1, or bright, pixel.
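The binarization step is a one-line rule applied to every pixel. A minimal sketch (names are illustrative):

```python
# Sketch of the binarization step in BLOB analysis: map each
# greyscale pixel (0-255) to 1 (bright) or 0 (dark) via a threshold.

def binarize(region, threshold):
    # Pixels strictly above the threshold become 1, the rest 0.
    return [[1 if px > threshold else 0 for px in row] for row in region]

region = [
    [12, 200, 210],
    [15, 190, 30],
]
binary = binarize(region, 128)
```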

Next, BLOB connectivity analysis is run on the 1s and 0s, using either four- or eight-way connectivity. The analysis is performed for every pixel to determine which pixels are connected; connected groups form blobs that can be sized and identified by different characteristics.
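Connectivity analysis can be sketched as a flood fill over the binary image: each unvisited bright pixel seeds a new blob, and its four-way neighbors are gathered with a breadth-first search. This is one common way to implement it, not the only one:

```python
# Sketch of BLOB connectivity analysis: group bright (1) pixels into
# blobs using four-way connectivity, then report each blob's size.
from collections import deque

def blob_sizes(binary):
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] == 1 and not seen[r][c]:
                # Flood-fill one blob with breadth-first search.
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] == 1
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                sizes.append(size)
    return sizes

binary = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
]
sizes = blob_sizes(binary)
```

Eight-way connectivity would simply add the four diagonal offsets to the neighbor list.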

Every calculation involves many instructions and steps, and so requires high processing speed and ample storage. Furthermore, high-resolution images are 5 to 10 million pixels in size, as opposed to the norm of less than 310,000 pixels only a few years ago. They provide greater accuracy but also demand even more speed and storage.

Most pattern-matching vision systems are more capable than BLOB tools: they are more complex and require more sophisticated analysis, but their algorithms return significantly more information.

For example: BLOB analysis of a label simply determines how many alphanumeric characters are present. In contrast, pattern-matching vision can ensure that each character on the label is correct, and in the proper location.

Due to these expanded capabilities, the number of calculations involved in running a pattern-match routine (versus a BLOB routine) increases by 100 times or more — and requires approximately 20 to 50 times the memory.

In fact, this is true for all image-analysis tools: More sophisticated analysis involves more complex algorithms and these require greater speed and memory. If speed and storage capacities aren't sufficient, operators must slow inspection lines, or the sensors can miss objects.