R&D Spotlight: Mechatronics Students Plot a 2.5D Vision Method to Improve Picking Precision
If 3D vision is too costly and 2D vision has precision limitations, could there be an in-between solution?
That’s the problem that Evangeline Dryburgh and Amy Spencer, a pair of mechatronics engineering co-op students from the University of Waterloo, Canada, have been challenged to solve over the course of their summer internship.
“2.5D vision is something that we’ve been working on,” said Dryburgh. “It is a solution that sits in-between 2D vision and 3D vision.”
The students devised a practical method for capturing the exact position of objects while working on a project involving a pick-and-place robot configured to pick objects from a flat plate. “We found that the pick area near the edges of the plate had a bit of distortion with taller objects,” Dryburgh said. “We needed a way to get rid of the distortion and to tell the robot precisely where the objects were, no matter how tall they were.”
The focal distance affects the way the image is presented to the guidance system, said Skye Gorter, president of Skye Automation, an industrial automation systems provider based in Ontario, Canada, that focuses on robotics and machine vision. By incorporating scatterplot data into the calibration routine, his co-op student team algorithmically adjusted for those height changes in the parts.
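The distortion Gorter describes is essentially parallax: under a simple pinhole model with the camera mounted perpendicular to the plate, the top of a tall part appears shifted radially away from the optical center, and the apparent offset can be rescaled once the part height and the camera’s standoff distance are known. The sketch below illustrates that geometry in Python; the function name, parameters and the perpendicular-mount assumption are illustrative, not Skye Automation’s actual implementation.

```python
def correct_for_height(px, py, cx, cy, standoff_mm, part_height_mm):
    """Project the apparent image position of a part's top surface back
    onto the pick plane (hypothetical names; pinhole camera assumed
    mounted perpendicular to the plate at a known standoff distance).

    A part of height h appears pushed radially outward from the optical
    center (cx, cy); scaling the offset by (Z - h) / Z removes the
    parallax, so short and tall parts map to the same plate coordinates.
    """
    scale = (standoff_mm - part_height_mm) / standoff_mm
    return cx + (px - cx) * scale, cy + (py - cy) * scale


# A part read 100 px right of center by a camera 800 mm above the plate:
print(correct_for_height(420.0, 240.0, 320.0, 240.0, 800.0, 80.0))
# -> (410.0, 240.0): the 80-mm-tall part is really 90 px from center, not 100
```

Note that a part at the optical center shows no shift at any height, which matches the students’ later observation that center points stack while edge points drift.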
READ MORE: A Vision-Guided Robotic System Designed to Grab Any Object
“We’re talking about traditional vision-guided feeder systems that are very common in the market today,” said Gorter. “Products like FlexiBowl and other feeder systems present a nice solution to the market. But there is a bit of a gap where they’re designed truly for flexibility but lose precision when it relates to dimensional changes in the parts.”
The 2.5D vision algorithmic method the students worked on solves this problem, and it can be applied accurately to any of those systems, Gorter said.
Building an Algorithm That Compensates for Height
The students’ investigation started on one project, where the taller the objects were, the more distorted the images became. “We were using a typical 2D camera, and integrating a 3D camera would have been a lot more expensive and time-consuming,” said Spencer.
They came up with the 2.5D vision method to correct the distortion and to accommodate taller objects, which appeared in slightly shifted positions because of their height.
Instead of using a 3D camera, the pair worked with Skye Automation’s programming lead, Nejma Latheef, to build an algorithm that would account for the way height affects the apparent placement of objects and thus compensate for the distortion.
“People have heard of 2D and 3D, but 2.5D is not very common,” pointed out Latheef, who handles the robotics programming as well as front-end HMI design and the integration of various systems. “That’s the name we gave it because it’s more precise than 2D imaging, but it doesn’t create any 3D images. That’s why we call it 2.5D,” she said.
Plotting Pick-Heights and Sending Data to the Robot
The students recorded the position of an object as the camera saw it at the typical pick height for the application, then moved the object closer to the camera and recorded it again. With the pick action captured at two different heights, they plotted the data using MATLAB. “Unfortunately, that didn’t tell us much, so we had to change our thinking,” said Spencer.
READ MORE: Deep Learning Algorithms Help Vision Inspection Systems See Better
They recorded a third height and plotted the data in Excel. A schematic of the data showing the different heights (represented by green, orange and blue dots) confirmed that the points for objects in the center of the plate (the pick area) stacked one on top of the other. This meant the robot perceived a part as being in the same spot at each height, Spencer explained.
But there was more distortion closer to the edges. “Our robot would think that a part was slightly to the left or to the right of where it actually was,” Spencer said. “We were originally sending incorrect points to the robot, and we were seeing that when it was going to pick, it would be shifted a millimeter up and to the right or up and to the left. We realized that we needed to find a way to fix that. And that’s how we created 2.5D, which was an in-between: taking our 2D image and creating a rudimentary third dimension with that information.”
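One way to turn that three-height scatterplot into a correction, sketched below under the same pinhole assumption, is to fit the single unknown in the parallax model, the effective camera-to-plate distance, by least squares. The data, names and numbers here are illustrative stand-ins, not the team’s actual code or measurements.

```python
import numpy as np

# Illustrative stand-in for the three-height scatterplot (the green,
# orange and blue dots): apparent x positions paired with known true
# plate positions. All values here are assumptions for the sketch.
CX, Z_TRUE = 320.0, 800.0                      # optical center (px), standoff (mm)

rng = np.random.default_rng(0)
true_x = rng.uniform(50.0, 590.0, size=60)     # true plate x, in pixels
height = np.repeat([10.0, 40.0, 80.0], 20)     # three recorded pick heights, mm
# Parallax pushes taller parts outward from the center; add pixel noise.
apparent_x = CX + (true_x - CX) * Z_TRUE / (Z_TRUE - height)
apparent_x += rng.normal(0.0, 0.2, size=60)

# The pinhole model predicts (true - cx) / (apparent - cx) = 1 - h / Z,
# so regressing (1 - ratio) against h recovers 1 / Z by least squares.
mask = np.abs(apparent_x - CX) > 50.0          # center dots stack: no parallax signal
ratio = (true_x[mask] - CX) / (apparent_x[mask] - CX)
h = height[mask]
slope, *_ = np.linalg.lstsq(h[:, None], (1.0 - ratio)[:, None], rcond=None)
print(f"fitted standoff: {1.0 / slope[0, 0]:.1f} mm (true value {Z_TRUE})")
```

With the standoff fitted once from calibration data, the runtime correction reduces to a single rescaling per pick point.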
Spencer said that the novel positioning method lets them continue using a 2D camera while algorithmically adjusting the position of each part it sees.
The 2.5D method can be applied to any application where a 2D camera needs to be integrated with a robot, added Dryburgh. “If we have lots of varying heights of parts that we might want to pick, this offers a lot of flexibility because we can pick a short part, or we can pick a part that is three times, five times taller than that, and it won’t affect the vision,” Dryburgh said. “We don’t have to recalibrate our cameras, and everything will work seamlessly together, even if they're completely different sizes.”
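In code terms, the flexibility Dryburgh describes amounts to the part height being just one more argument to the correction rather than a reason to recalibrate. A minimal hypothetical sketch, reusing the parallax function from the earlier example:

```python
def correct_for_height(px, py, cx, cy, standoff_mm, h_mm):
    """Parallax correction from the earlier sketch (hypothetical names)."""
    s = (standoff_mm - h_mm) / standoff_mm
    return cx + (px - cx) * s, cy + (py - cy) * s

# The same apparent pixel reading for parts of very different heights:
# only the height argument changes; the camera calibration is untouched.
for h_mm in (20.0, 60.0, 100.0):               # a short part, 3x and 5x taller
    print(h_mm, correct_for_height(500.0, 400.0, 320.0, 240.0, 800.0, h_mm))
```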
The big benefit, Gorter pointed out, is that the method delivers “higher reliability and a greater degree of flexibility on a system that’s designed for flexibility to begin with.” The solution will be an added value his company can apply to future applications.
READ MORE: High-Speed Camera and Barcode Reading Technology Showcased by Cognex Corp.
Editor’s Note: Machine Design’s WISE (Workers in Science and Engineering) hub compiles our coverage of workplace issues affecting the engineering field, in addition to contributions from equity-seeking groups and subject matter experts within various subdisciplines.