Deep Learning Algorithms Help Vision Inspection Systems See Better

Aug. 12, 2024
Zebra Technologies’ Andrew Zosel discusses how AI and deep learning tools help expand machine vision capabilities, particularly in inspection system applications.

Deep learning finds numerous applications in machine vision solutions, particularly in enhancing image analysis and recognition tasks.  

Algorithmic models can be trained to recognize patterns, shapes and objects in images, explained Andrew Zosel, senior vice president and general manager, Zebra Technologies.

Deep learning models use various techniques—such as image classification, object detection, segmentation and optical character recognition (OCR)—to extract features from images and are trained to make decisions based on the context. Manufacturers benefit from the speed, accuracy and reliability that this computation brings, said Zosel.
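For readers curious what one of those techniques looks like in code, the sketch below shows a single image-classification step in PyTorch. It is an illustrative example only, not Zebra's implementation: the ResNet-18 backbone, the "pass"/"fail" labels and the image path are assumptions made for the sake of the example.

```python
# Minimal sketch: classifying an inspection image as "pass" / "fail" with a
# CNN. Illustrative only -- model choice, labels and file path are assumed.
import torch
from torchvision import models, transforms
from PIL import Image

CLASS_NAMES = ["pass", "fail"]  # hypothetical labels

# Standard ImageNet-style preprocessing
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Pretrained backbone with a 2-class head (would be fine-tuned on real data)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASS_NAMES))
model.eval()

def classify(image_path: str) -> str:
    """Return the predicted class for a single inspection image."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)  # shape: [1, 3, 224, 224]
    with torch.no_grad():
        logits = model(batch)
    return CLASS_NAMES[int(logits.argmax(dim=1))]

if __name__ == "__main__":
    print(classify("sample_part.jpg"))  # hypothetical image file
```

In practice, the classification head would be fine-tuned on labeled inspection images before its predictions were trusted on the line.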

Are the Disciplines of Machine Vision and Computer Vision Melding?

As machine vision systems mature, the differences between machine vision and computer vision become less obvious. Both disciplines involve acquiring and analyzing visual inputs. However, machine vision requires digital cameras to capture images before processing them for an output decision. A machine vision system typically contains a camera, a lens, a processor and software that enable the machine to make these decisions.

READ MORE: A Booth Visit with Yaskawa at Automate 2024: Two Vision System Applications

Computer vision, by contrast, does not require a camera input; it can work from saved images (real or synthetic) to interpret a scene and produce a result.

“Basically, computer vision may be viewed as the broader category of anything that takes an image and processes it—anything from inspecting license plates to people counters and all kinds of more generic vision processing using a computer,” said Zosel, who is responsible for Zebra’s Advanced Data Capture, Machine Vision and Robotic Automation businesses.

In contrast, machine vision is a more specific term, typically used in industrial or factory and warehouse-type environments, said Zosel, “where you’re actually looking at a product being made or created and being processed, and therefore leveraging camera and imaging technology to do a specific task for an operation.”

In Part 2 of a three-part conversation with Machine Design, Zosel delineates machine vision from computer vision, answers questions on current uses for deep learning algorithms in machine vision systems and highlights the subtle improvements Zebra Technologies has achieved in inspection accuracy.

The following questions have been edited for clarity and context.

Machine Design: Can you demystify some of the expectations about machine vision? For example, what can a machine vision solution do today that it couldn’t do in the past? And what have they yet to achieve?

Andrew Zosel: Fundamentally, we see the world through our eyes and brain, and we are a vision processor ourselves. As people, we’re an amazing machine vision system. Our vision helps guide us, helps with motion, etc., and helps us see and inspect parts.

And traditionally, machine vision has been applied to deterministic objects: things that are easily defined as good or bad, or defined by specific measurements, with less subtlety in comparing images.

With advances in deep learning and AI, machine vision technology and capabilities come closer to how a human interprets the world. Still, the capabilities we have today, in the hardware, the software and the algorithms, are not at the same level as a person's. So, it's closer than it's ever been, but it's still significantly less. A person can typically perceive a defect, a scratch or a misalignment much better than a machine vision system can.

Of course, the exceptions are when things are very, very small or moving at very high speed. For example, we can't see semiconductor parts unless they're under a microscope, and we wouldn't be able to see bottles whizzing by on a packaging line unless somebody stopped the motion. But machine vision systems can do that.

Historically, machine vision systems were used in applications that were either very fixed and non-organic, or applications where humans weren't capable because of high speed and high resolution.

Now, with AI and deep learning, there are more and more applications where human inspection could realistically be replaced with machine vision tools. Advanced tools like anomaly detection or segmentation analysis (telling one part from another and saying, OK, this is part A, this is part B, based on subtle features) can be trained into an AI model and used effectively. Creating that kind of segmentation or anomaly detection application with traditional, deterministic algorithms would be very challenging.
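One common way to implement the kind of anomaly detection Zosel describes is to train an autoencoder only on images of good parts and flag any part whose reconstruction error is unusually high. The PyTorch sketch below illustrates that general pattern; the architecture, image size and threshold are assumptions for illustration, not Zebra's method.

```python
# Minimal sketch of image anomaly detection with a convolutional autoencoder:
# the model is trained only on "good" parts (training loop not shown), and a
# high reconstruction error at inference time flags a likely defect.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model: nn.Module, image: torch.Tensor) -> float:
    """Mean squared reconstruction error for one image tensor [3, H, W]."""
    model.eval()
    with torch.no_grad():
        recon = model(image.unsqueeze(0))
    return torch.mean((recon - image.unsqueeze(0)) ** 2).item()

# Usage sketch: compare each new part's score against a threshold calibrated
# on held-out good images.
THRESHOLD = 0.01                     # assumed value; set from validation data
model = ConvAutoencoder()
test_image = torch.rand(3, 64, 64)   # stand-in for a real camera frame
print("anomaly" if anomaly_score(model, test_image) > THRESHOLD else "normal")
```

The appeal of this pattern is that it needs no labeled defect images; the model only ever sees good parts, and anything it cannot reconstruct well is treated as suspect.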

MD: Can you walk us through an application where you use that methodology? Is there an application where Zebra Technologies has been able to move the needle, so to speak?

AZ: There are many assemblies where one cable or one electronic component is plugged into another, and these could be done by people or by a machine. If you open your cell phone, for example, there are a lot of little connectors that are seated into other parts: the lens and camera system is plugged into the main board, and the display is plugged in. The seating of those two pieces together is critical; they have to be plugged in and maintain that connection over time.

READ MORE: Create Scalable Vision and AI Solutions with a Systems-Level Approach to Data

In traditional machine vision, you can look at the gap and try to measure it with the right lens and lighting. But it has been a challenging application because, where do you set the threshold? What's good enough as far as how seated a connector is? Some of the applications we've been able to address involve exactly those subtleties: the threshold of what's good enough, and what is going to hold over time.

For instance, if you have a data set, a set of images that you train an AI model on, and you can show all the different nuances of a connector seated and not seated, then you can set up a different way of inspecting it, such that you get feedback from the field on where a connector wasn't quite seated enough. Or, if there's some way the connector is slightly unplugged and eventually becomes unplugged, you can feed that back to the original algorithm and say, ‘Okay, that is slightly unplugged, it is an unacceptable part,’ and put it in the AI's reject stack.

So, there are opportunities like that. Deterministic algorithms can be very complex and very difficult to program, and AI and deep learning, using an anomaly detection or classification algorithm, make it easier.
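The feedback loop Zosel describes, feeding parts rejected in the field back into the training data, can be sketched in a few lines. The example below is purely illustrative: the folder names, the "seated"/"not_seated" labels, the ResNet-18 backbone and the training settings are assumptions, not details of Zebra's workflow.

```python
# Minimal sketch of a retraining feedback loop: images that turned out to be
# bad in the field are added to the "not_seated" class, then the classifier
# is fine-tuned on the updated dataset. Paths and labels are hypothetical.
import shutil
from pathlib import Path

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

DATA_DIR = Path("connector_dataset")   # expects subfolders: seated/, not_seated/
FEEDBACK_DIR = Path("field_rejects")   # images reported back from the field

def incorporate_feedback():
    """Copy field-rejected images into the 'not_seated' training class."""
    target = DATA_DIR / "not_seated"
    target.mkdir(parents=True, exist_ok=True)
    for img in FEEDBACK_DIR.glob("*.jpg"):
        shutil.copy(img, target / img.name)

def retrain(epochs: int = 3) -> torch.nn.Module:
    """Fine-tune a small CNN on the updated seated / not-seated dataset."""
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    dataset = datasets.ImageFolder(DATA_DIR, transform=tfm)
    loader = DataLoader(dataset, batch_size=16, shuffle=True)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = torch.nn.Linear(model.fc.in_features, len(dataset.classes))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

if __name__ == "__main__":
    incorporate_feedback()
    retrain()
```

The design point is the loop itself: every marginal connector caught downstream becomes a new labeled example, so the model's notion of "seated enough" sharpens over time instead of being fixed by a hand-tuned threshold.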

Watch additional parts of this interview series with Andrew Zosel:

Part 1: How Zebra Technologies Uses Machine Vision to Transform Production Automation

Part 3: How Deep Learning Complements Machine Vision Solutions

About the Author

Rehana Begg | Editor-in-Chief, Machine Design

As Machine Design’s content lead, Rehana Begg is tasked with elevating the voice of the design and multi-disciplinary engineer in the face of digital transformation and engineering innovation. Begg has more than 24 years of editorial experience and has spent the past decade in the trenches of industrial manufacturing, focusing on new technologies, manufacturing innovation and business. Her B2B career has taken her from corporate boardrooms to plant floors and underground mining stopes, covering everything from automation & IIoT, robotics, mechanical design and additive manufacturing to plant operations, maintenance, reliability and continuous improvement. Begg holds an MBA, a Master of Journalism degree, and a BA (Hons.) in Political Science. She is committed to lifelong learning and feeds her passion for innovation in publishing, transparent science and clear communication by attending relevant conferences and seminars/workshops. 


