In any industry, pursuing one technology over another can set the pace for the market. Machine vision is no different, and today factory automation applications increasingly lean on inspection solutions.
But as markets mature, decision-makers need to defend their innovations based on both the opportunity and the demand within their market segments. In this vein, Gokul NA, co-founder of CynLr (Cybernetics Laboratories), makes a business case for the transformative value of his company’s machine vision technology.
Based in Bangalore, India, the startup bills itself as a deep-tech robotics and cybernetics company whose team is developing a product-agnostic robotic assembly line. Picking up unrecognized objects is a largely underdeveloped area of machine vision, and it is the main problem CynLr wants to solve.
In the accompanying video, NA responds to the question, “Why is machine vision critical to factory automation today?”
NA starts by outlining the differences between machine vision and computer vision before explaining why the industry needs to change its approach to machine vision technology and, by extension, rework the business case for machine vision in factory automation.
Distinguishing Between Machine Vision and Computer Vision
The primary focus of computer vision is processing images after they reach the computer. “Many algorithms are born out of computer vision,” NA said. “And that starts with data. Most of computer vision originates after an image has reached the computer.”
Machine vision systems are designed to function without human intervention. “You’re trying to remove the human in the loop,” he said. “That is the crux from which you would have to understand machine vision.” And since the objective is to make a machine autonomous, adaptive vision comes into play.
Computer vision is designed to augment the work of humans. “From your Adobe Photoshop to even most of your sophisticated OpenAI algorithms to vision transformers, what they’ve been built on top of is a sequence of identification missions such as barcode reading; there’s always a human in the loop,” said NA.
A contrasting feature of computer vision, NA added, is that the source of the image is unknown: the lighting under which it was taken, the orientation it was taken from, the camera that was used, the type of lens, and the distance to the objects.
Developing Intuitive Machine Vision
NA argued that developing intuitive machine vision systems that can adapt to new, unprogrammed conditions is essential to future progress. This may mean integrating the physics of light, optics, and advanced algorithms into future designs. In addition, reinforcement learning will spur dynamic learning and adaptability, he said.
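To make the reinforcement learning point concrete, here is a minimal, purely illustrative Python sketch: a bandit-style learner that adapts its choice of grasp strategy from success and failure feedback alone. The strategy names, success rates, and the try_grasp() simulator are hypothetical stand-ins, not CynLr’s method.

```python
import random

STRATEGIES = ["top_grasp", "side_grasp", "pinch_grasp"]

def try_grasp(strategy: str) -> bool:
    """Stand-in for a real grasp attempt; success rates are made up."""
    success_rate = {"top_grasp": 0.4, "side_grasp": 0.7, "pinch_grasp": 0.55}
    return random.random() < success_rate[strategy]

def learn_grasping(trials: int = 1000, epsilon: float = 0.1) -> dict:
    """Epsilon-greedy: mostly exploit the best-known strategy, sometimes explore."""
    successes = {s: 0 for s in STRATEGIES}
    attempts = {s: 0 for s in STRATEGIES}
    for _ in range(trials):
        if random.random() < epsilon:
            strategy = random.choice(STRATEGIES)      # explore
        else:
            strategy = max(STRATEGIES, key=lambda s:  # exploit best estimate
                           successes[s] / attempts[s] if attempts[s] else 0.0)
        attempts[strategy] += 1
        successes[strategy] += try_grasp(strategy)
    return {s: attempts[s] for s in STRATEGIES}

print(learn_grasping())  # most attempts accrue to the strategy that works best
```

Over enough trials, the learner concentrates its attempts on whichever strategy actually works most often, without that preference ever being programmed in. That is the kind of feedback-driven adaptability NA is pointing to.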
Machine vision has achieved a high level of sophistication in object identification, where systems either classify objects or inspect them for defects, NA said.
But that is not the case when it comes to guiding a system, where vision acts as a feedback loop for manipulating an object. A seemingly simple task that requires a robot to pick up a screw, place it in a hole, and bolt it into a car’s chassis as the chassis moves along a conveyor in an automotive plant “is an unsolved problem across the globe,” NA said.
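What “vision as a feedback system” means can be sketched in a few lines. The loop below is a generic visual-servoing pattern, not CynLr’s implementation: estimate the part’s offset from the gripper, move a fraction of the error, and repeat until within tolerance. The simulated part position, sensor noise, and gain are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
TOLERANCE_MM = 0.02                          # 20 microns, echoing the figure NA cites below

part_mm = np.array([412.0, -87.5, 130.2])    # true part position, unknown to the arm
arm_mm = np.array([400.0, -80.0, 140.0])     # current gripper position

def detect_part_offset() -> np.ndarray:
    """Stand-in for a camera-based estimate: true error plus sensor noise."""
    return (part_mm - arm_mm) + rng.normal(0.0, 0.005, size=3)

def move_arm_relative(delta: np.ndarray) -> None:
    """Stand-in for a robot motion command."""
    global arm_mm
    arm_mm = arm_mm + delta

def servo_to_part(max_steps: int = 100, gain: float = 0.5) -> bool:
    """Close the loop: measure, correct by a fraction of the error, repeat."""
    for step in range(max_steps):
        error = detect_part_offset()
        if np.linalg.norm(error) < TOLERANCE_MM:
            print(f"converged in {step} steps")
            return True
        move_arm_relative(gain * error)       # proportional correction
    return False

servo_to_part()
```

The design point is that accuracy comes from the loop rather than from any single measurement: each noisy estimate only has to push the arm in roughly the right direction.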
He argued there is no better candidate for automation than such mundane tasks. The business case ought to be clear, yet assembling parts has always been very difficult to automate. Customizing a robotic arm for a task, NA said, has been shown to cost about 70% more than the robot itself, making automation impractical or prohibitive. Until now, robotic solutions have required extensive programming and precisely controlled conditions to remain profitable.
Consider the automotive industry. “A car has 22,000 parts, and you are doing close to around 10,000 jobs on a particular car,” explained NA. “GM alone is probably deploying 155,000 people in blue-collar labor. On the contrary, the industry with the highest adoption of machinery, especially robotic arms, is also automotive, and among the organized sector, the largest employer of blue-collar labor is also automotive.”
The obvious question, then: Why has the automotive industry not been able to replace these jobs and hyper-engineer the whole environment?
The answer, NA said, requires one to take a step back and consider what makes a robotic arm difficult to use. “It is just a machine with six motors that promises precise positioning, say 50 microns of accuracy. There’s another arm from KUKA that promises 20 microns of repeatability. In other words, if I take a joystick, move the arm to a location, and ask it to repeat this position for the next two years without any recalibration, it will keep repeating at 20 microns of precision in that place.”
That precision is also the pain point, NA pointed out. “Why? Because if you cannot present the part within 20 microns of precision, the robotic arm will falter,” he said.
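A quick simulation makes the point. An arm that replays a taught pose to within 20 microns still misses if the part itself lands half a millimeter from nominal, while an arm that targets the measured part position is limited only by its own repeatability. The placement noise and grasp window here are illustrative assumptions, considering one axis only.

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 100_000

REPEATABILITY_MM = 0.020     # the arm returns to its taught pose within 20 microns
GRASP_WINDOW_MM = 0.020      # assumed misalignment the grasp can tolerate
PLACEMENT_NOISE_MM = 0.5     # assumed scatter of the presented part around nominal

arm_error = rng.normal(0.0, REPEATABILITY_MM / 3, size=trials)
part_error = rng.normal(0.0, PLACEMENT_NOISE_MM / 3, size=trials)

# Blind replay: the arm goes to the taught pose regardless of where the part is.
blind_miss = np.abs(part_error - arm_error) > GRASP_WINDOW_MM
print(f"blind replay failure rate:  {blind_miss.mean():.1%}")

# Vision in the loop: the arm targets the measured part position, so only
# its own repeatability limits the residual error.
guided_miss = np.abs(arm_error) > GRASP_WINDOW_MM
print(f"vision-guided failure rate: {guided_miss.mean():.1%}")
```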
For NA, the better question to ask is: “How does one arrange the whole environment in such a way that every one of the 22,000 parts comes to the precise location? And how do you even make a wire stay in a particular location if it keeps twisting and changing? [The solution] is that you need a system that can adapt.”
For NA, that adaptability is the crux and carries the value of machine vision. “And that’s the largest market for machine vision—making robotic arms mobilize,” he said.
Watch additional parts of this interview series with Gokul NA: