
5 Challenges in Developing Sharp Robotic Vision

May 2, 2018
To help foster collaboration between human workers and robots, the next generation of robots needs far superior skills compared to its traditional counterparts, including highly advanced vision systems that let robots see the world much as humans do.

One of the main aspects of Industry 4.0 is the use of robots in various industrial applications. With the rapid rise of automation, demand for collaborative robots is also growing: the market for collaborative robots is expected to reach $12 billion by 2025.

Because collaboration here means human workers and robots sharing the same tasks and spaces, the next generation of robots needs far superior skills compared to traditional industrial robots. They must be able to work alongside staff safely and productively; the purpose is to assist workers rather than replace them.

That’s why building cobots is a challenging process. One challenge is combining efficiency with safety. Vision systems are one solution, but building a robot with human-like vision capabilities remains a demanding task. Here are five challenges in developing robot vision for cobots.

Sawyer from Rethink Robotics is a cobot that uses advanced vision systems for sorting applications. (Image credit: Rethink Robotics)

1. Distinguishing backgrounds and objects. In almost all industrial applications, the robot needs to distinguish an object from its background; only then can it identify and track the object until the process is complete. Under ideal circumstances, the background is blank and provides strong contrast against the object to be identified. Most factory floors, however, don’t offer these ideal settings. In fact, the color and shading of the background may change constantly.

As a result, vision systems have to pick the desired object out of a blend of backgrounds that include both moving and stationary objects. In other words, vision detection systems need to handle several potential background scenarios to achieve high accuracy. For example, the robot may need to distinguish an object from a background that resembles it or offers little contrast. And if the background itself contains sharp edges, a simple edge-detection algorithm will flag them as false positives.
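As a concrete illustration, here is a minimal sketch, assuming OpenCV is available, of one common way to handle a changing background: a learned background model (MOG2) that adapts over time and separates moving foreground objects from everything else. The video path, thresholds, and area cutoff below are placeholders, not values from any production system.

```python
import cv2

# "conveyor.mp4" is a hypothetical video file for illustration.
cap = cv2.VideoCapture("conveyor.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500,        # frames used to model the background
    varThreshold=16,    # sensitivity to deviations from the model
    detectShadows=True  # flag shadows instead of treating them as objects
)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # 255 = foreground, 127 = shadow
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadows
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:  # ignore small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("objects", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```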

The robot should also be able to differentiate between an actual object and an image of that object. For example, it should be able to distinguish an actual ball from a picture of a ball; otherwise, it will confuse images with real objects.
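One way to make this distinction, assuming the robot also carries a depth camera alongside its RGB sensor, is to check whether the detected region has any 3D relief: a printed photo of a ball is nearly flat, while a real ball is not. The function name and threshold below are purely illustrative.

```python
import numpy as np

def looks_three_dimensional(depth_roi: np.ndarray,
                            min_relief_mm: float = 5.0) -> bool:
    """Heuristic for telling a real object from a printed picture of one.

    depth_roi is the depth-camera crop (in mm) of the region a 2D
    detector flagged. A flat photo shows almost no depth relief inside
    that region; a real object does. The 5 mm threshold is arbitrary.
    """
    valid = depth_roi[depth_roi > 0]  # drop missing depth readings
    if valid.size == 0:
        return False                  # no depth data: cannot confirm
    # Spread between the 5th and 95th depth percentiles = surface relief
    relief = np.percentile(valid, 95) - np.percentile(valid, 5)
    return relief > min_relief_mm
```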

2. Identifying moving objects. Motion is an integral part of any automation process. Most manufacturing plants require robots and human workers to handle objects moving on a conveyor belt, and sometimes to move objects from one place to another. In short, robotic vision must handle three distinct situations; a motion-estimation sketch for the hardest case follows Case 3.

Case 1: Only the object is moving, but the robot is in a fixed position.

Case 2: Only the robot is moving, but the object is in a fixed position.

Case 3: Both robot and object are moving in the same or opposite directions.
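For Case 3, where both the robot and the object move, a common starting point is dense optical flow with a crude correction for the camera’s own motion. The sketch below, assuming OpenCV and two consecutive grayscale frames, subtracts the median flow vector as a stand-in for ego-motion; real systems use far more robust estimates.

```python
import cv2
import numpy as np

def object_motion(prev_gray: np.ndarray, gray: np.ndarray) -> np.ndarray:
    """Approximate per-pixel object motion between two grayscale frames.

    Dense Farneback optical flow captures all apparent motion. When the
    camera itself moves (Case 3), most pixels share the camera-induced
    flow, so subtracting the median flow vector crudely removes
    ego-motion; the residual approximates true object motion.
    """
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None,
        0.5,     # pyramid scale between levels
        3,       # number of pyramid levels
        15,      # averaging window size
        3,       # iterations per pyramid level
        5, 1.2,  # polynomial expansion neighborhood and sigma
        0)       # flags
    ego = np.median(flow.reshape(-1, 2), axis=0)  # dominant (camera) motion
    return flow - ego                             # residual object motion
```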

Engineers have successfully used high-speed cameras in high-speed applications, resulting in better motion detection. Right now, most robots working in close proximity to humans carry low payloads and move slowly, owing to safety concerns. In the future, however, they may gain the ability to work at high speed and accuracy. They will therefore have to become more human-aware, using vision systems like the one developed by Veo Robotics.

Tennplasco, a plastic injection molding company, uses vision robots like Sawyer to supply key manufacturing components to the global automotive industry. (Image credit: Tennplasco)

3. Identifying partially covered objects. Most robot vision algorithms can identify an object only when the camera or sensor captures its complete image; the robot can’t detect what its algorithm hasn’t been programmed to expect. Consequently, it will treat a partially covered object as foreign material. This problem is called occlusion, and it occurs whenever part of an object is closed up or blocked off, even by a shadow.

One way to overcome occlusion is to create an algorithm that matches the visible part of the desired object against its programmed image. The algorithm, however, has to assume that the covered portion genuinely belongs to the object. Unlike conventional robots, the next generation will have to combine cameras and motion detectors, such as LiDAR or an ultrasonic sensor, with improved visual tracking algorithms to overcome occlusion.
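A standard occlusion-tolerant technique along these lines is local feature matching: individual keypoints are matched independently, so a visible subset can still confirm the object when part of it is hidden. The sketch below uses OpenCV’s ORB features; the image paths, distance threshold, and match count are placeholders.

```python
import cv2

# Hypothetical image files: a clean template of the part, and a scene
# in which the part may be partially covered.
template = cv2.imread("part_template.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp_t, des_t = orb.detectAndCompute(template, None)
kp_s, des_s = orb.detectAndCompute(scene, None)

# Hamming distance suits ORB's binary descriptors; crossCheck keeps
# only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_t, des_s), key=lambda m: m.distance)
good = [m for m in matches if m.distance < 40]  # threshold is illustrative

# Even with half the object covered, enough surviving keypoint matches
# can confirm its identity; the cutoff of 20 is arbitrary.
print("object present" if len(good) >= 20 else "object not confirmed")
```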

4. Recognizing changing shapes or articulation. Human vision is a marvelous piece of hardware. We can detect an object under remarkably different conditions, even when it is deformed. Recognizing your car after a major accident, for example, isn’t very difficult for you. Robots, by contrast, may struggle to detect deformations in products without the help of an advanced vision system.

Humans not only have high-resolution vision, but they also have an unusually swift data processor: the brain. When we see a deformed object, the brain searches for the closest matching stored image, or template. Robots can likewise use templates to identify deformed objects, but their matching is far less sophisticated than the brain’s, so detecting such objects remains difficult.
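One simple shape-agnostic stand-in for rigid template matching is comparing color histograms, which discard spatial layout entirely, so a dented object with unchanged coloring still scores close to its undamaged template. A minimal OpenCV sketch, with bin counts chosen arbitrarily:

```python
import cv2

def color_similarity(img_a, img_b) -> float:
    """Compare hue-saturation histograms of two BGR images.

    Because a histogram discards spatial layout, a bent or dented
    object with the same coloring still scores close to its
    undamaged template, unlike pixel-wise template matching.
    """
    def hs_hist(img):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60],
                            [0, 180, 0, 256])
        return cv2.normalize(hist, hist).flatten()
    # Correlation metric: 1.0 = identical distributions, ~0 = unrelated
    return cv2.compareHist(hs_hist(img_a), hs_hist(img_b),
                           cv2.HISTCMP_CORREL)
```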

5. Understanding the position and orientation of objects. One of the most common tasks a robot performs is pick-and-place. To complete this task accurately, however, the robot needs a robust sense of position and orientation. Representing orientations in 2D using angles, orthogonal rotation matrices, and matrix exponentials is relatively simple. Recognizing 3D orientations, on the other hand, is quite tricky.
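To make the 2D case concrete, the short NumPy/SciPy check below shows that the angle, the orthogonal rotation matrix, and the matrix exponential of the skew-symmetric generator are three equivalent descriptions of the same planar rotation:

```python
import numpy as np
from scipy.linalg import expm

theta = np.deg2rad(30)  # a planar rotation of 30 degrees

# Direct 2D rotation matrix built from the angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# The same rotation via the matrix exponential of the
# skew-symmetric generator theta * [[0, -1], [1, 0]]
G = theta * np.array([[0.0, -1.0],
                      [1.0,  0.0]])

assert np.allclose(R, expm(G))  # both representations agree
```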

As an object rotates about more than one axis, its apparent color, shading, position, background, texture, and motion can all change with the lighting conditions. Thus, 3D orientations can become a significant hurdle in developing high-precision robot vision.
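When the object’s geometry is known, one established way to recover a 3D orientation from a single camera is perspective-n-point estimation, available in OpenCV as solvePnP: given at least four known 3D points on the object and their detected 2D pixel locations, it solves for the object’s rotation and translation relative to the camera. Every coordinate and camera intrinsic below is a placeholder for illustration.

```python
import cv2
import numpy as np

# Four known corners of a 10 cm square feature on the object (meters)
object_pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                       [0.1, 0.1, 0.0], [0.0, 0.1, 0.0]], dtype=np.float64)
# Where a detector found those corners in the image (pixels; made up)
image_pts = np.array([[320, 240], [410, 245],
                      [405, 330], [318, 325]], dtype=np.float64)
# Assumed pinhole camera intrinsics (focal length and principal point)
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0,   0,   1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix of the object's pose
print("rotation:\n", R, "\ntranslation:", tvec.ravel())
```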

Though researchers and engineers have deployed reliable solutions such as LiDAR to detect orientation, these sensors are primarily designed for 3D measurement. As a result, a LiDAR sensor will struggle to register how texture, imprints, or writing on an object change with its orientation. Only human-like robotic vision can detect such changes, so a sophisticated vision system combining 3D sensors with high-resolution cameras looks like a promising solution here.

ABB’s YuMi is another cobot that features advanced vision systems as an integral component. Each system delivered has enhanced cameras built into its hands for maximum flexibility. (Image credit: ABB)

Over to You

Robotic vision algorithms have advanced considerably in recent years, yet they remain rudimentary compared to human vision. As we enter the era of collaborative robots, where humans work side-by-side with machines, the ability to see like humans will give robots added value in terms of safety and productivity. It may help us build robots that handle higher payloads and perform high-speed tasks around humans without compromising human safety.

Nancy Kerley is a freelance writer who covers technology and business trends. She frequently writes for Lintech, a high-technology corporation dealing in security products.
