New 3D Ultrasonic Sensor Technology Dramatically Reduces Costs and Improves Safety for Mobile Robot Developers
A new class of sensor technology dubbed ADAR (acoustic detection and ranging) enhances autonomous mobile robot (AMR) safety while offering 50-80% cost savings compared to traditional LiDAR-based systems.
AMRs are seeing increased adoption worldwide, driven by their effectiveness across multiple material handling applications and by labor shortages across the manufacturing, logistics and warehouse sectors. In fact, the global mobile robot market grew by 27% in 2023 to reach $4.5 billion, according to leading market analyst firm Interact Analysis.
As AMRs find their way into more facilities, it is critical to ensure that these versatile robots can safely navigate around objects and people to the level described in the EN ISO 13849 / SIL 2 machine safety standards. A typical sensor package for an AMR today includes LiDARs and depth cameras, a combination that is expensive and computationally intensive to operate.
ADAR, the foundation for the Sonair 3D ultrasonic sensor, is the first sensor technology to use ultrasound in air to give robots safe spatial awareness in 3D. Decades in the making, and leveraging a patented design developed at the world-renowned MiNaLab sensor and nanotechnology research center in Norway, Sonair provides distance and direction to all objects within a 180 × 180-deg. field of view (FOV).
To understand Sonair’s benefits versus traditional sensor combinations, we’ll look at the well-established pros and cons of traditional AMR sensor technologies. Before doing so, it’s important to note that Sonair’s software output (the “point cloud”) can’t be directly compared, point by point, with the output of LiDAR systems.
As a 3D ultrasonic sensor, Sonair works very differently, enabling it to provide safe, fast and effective obstacle detection using fewer data points—consuming less energy and making lower demands on computational resources while delivering the most crucial value: seeing objects that laser and camera technologies struggle to see.
Pros and Cons of LiDAR Technology
LiDAR technology alone accounts for an estimated 30% of the hardware cost for an AMR. LiDAR sensors are used for real-time 2D mapping, obstacle detection and navigation. Normally, AMRs incorporate safety-certified 2D LiDARs.
LiDAR works by emitting directed laser beams and measuring the time it takes for each beam to return after hitting an object. This enables the creation of a 2D map of the robot’s surroundings in a single horizontal plane: mounted on a standard AMR, a LiDAR would typically see only the legs of someone standing in front of it. High-resolution LiDAR creates millions of data points, which ensures detailed sensing in the LiDAR plane but is also a drag on battery power and computational resources.
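To put the timing challenge in perspective, here is a back-of-envelope sketch of the time-of-flight arithmetic behind LiDAR ranging. It is illustrative only, not any vendor's implementation: a target 5 m away returns its echo in roughly 33 nanoseconds, so the receiver must resolve nanosecond-scale timing.

```python
# Back-of-envelope sketch of LiDAR time-of-flight ranging.
# Illustrative only; not any specific vendor's implementation.

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def lidar_range_from_echo(round_trip_time_s: float) -> float:
    """Convert a measured round-trip echo time into a one-way distance."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

def lidar_echo_time(distance_m: float) -> float:
    """Round-trip travel time for a target at the given distance."""
    return 2.0 * distance_m / SPEED_OF_LIGHT

if __name__ == "__main__":
    # A target 5 m away returns its echo in roughly 33 nanoseconds.
    print(f"{lidar_echo_time(5.0) * 1e9:.1f} ns round trip for a 5 m target")
    print(f"{lidar_range_from_echo(33.4e-9):.2f} m from a 33.4 ns echo")
```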
LiDAR typically provides a 360-deg. FOV, but the robot itself shields part of that view, so AMRs typically mount one sensor at each of two diagonally opposite corners (two in total), with openings in the robot chassis to achieve full 360-deg. coverage. Today, AMRs need to be built around the LiDAR to give it a sufficient viewing angle and to protect the sensitive scanners from harm.
3D LiDARs are expensive, with a typical system costing around $12,000, and they are not safety-certified. (Safety-certified 2D LiDARs can cost anywhere from $1,500 to $5,000, while non-safety-certified 2D LiDARs cost less than $100.)
LiDAR can struggle in dusty environments, as particles interfere with the laser beams, resulting in inaccurate distance readings and sometimes in failure to detect obstacles altogether. LiDAR is also heavily dependent on a direct line of sight, which can result in detection failures in dynamic environments. Reflective surfaces, such as mirrors and some metals, can also interfere with LiDAR performance, leading to inaccurate distance measurements.
What Constraints do Cameras Pose?
AMRs often incorporate front-facing cameras to provide visual data for object recognition, obstacle detection and sometimes visual navigation. RGB cameras, for example, are used to provide color images and are commonly used in conjunction with AI models for object detection and classification. Meanwhile, depth cameras measure depth and are used to understand the AMR’s environment in 3D.
Cameras pose challenges for obstacle detection in robotics because of their limited FOV. As a result, AMRs typically incorporate a combination of several cameras, which drives up cost and complexity. In addition, camera-based systems, even those with AI capabilities, do not qualify as safety-rated sensors on their own.
Cameras are also sensitive to lighting conditions, which degrades performance in low light or glare. Environmental factors such as rain, fog and occlusions further complicate detection, and transparent or reflective surfaces can confuse cameras. Cameras also often struggle with small or distant objects, and object classification errors can lead to false positives or negatives, reducing system reliability.
Moreover, depth perception is limited, especially with monocular cameras, while stereo setups require complex calibration. AMRs typically incorporate low-cost, non-safety-certified depth cameras. And even when useful camera data is gathered, processing it is computationally expensive, leading to potential latency and higher energy consumption.
1D Ultrasonic Sensors Have Limitations
Simple, 1D ultrasonic sensor technology is probably best known in its “parking sensor” form. Most cars today have coin-sized circles on their front and back bumpers to help the car measure the distance to objects and help the driver avoid them when parking or moving slowly.
This technology is used on AMRs for obstacle detection at close range, particularly in situations where precise, immediate feedback is needed, like when avoiding nearby objects. These sensors emit sound waves and measure the time it takes for the echoes to return, determining the distance to nearby objects.
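For comparison with LiDAR, the same time-of-flight arithmetic at the speed of sound plays out on millisecond timescales. The minimal sketch below assumes sound travels at roughly 343 m/s in room-temperature air; note that a single transducer returns only a scalar range, with no bearing information.

```python
# Minimal sketch of 1D ultrasonic (parking-sensor style) ranging,
# assuming sound travels at roughly 343 m/s in room-temperature air.

SPEED_OF_SOUND = 343.0  # m/s, approximate value at 20 deg C

def ultrasonic_range_from_echo(round_trip_time_s: float) -> float:
    """Convert a single transducer's echo delay into a distance.

    A 1D sensor reports only this scalar range; it cannot say where
    within its beam the reflecting object actually sits.
    """
    return SPEED_OF_SOUND * round_trip_time_s / 2.0

if __name__ == "__main__":
    # An echo arriving 5.8 ms after the pulse corresponds to roughly 1 m.
    print(f"{ultrasonic_range_from_echo(5.8e-3):.2f} m")
```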
Typically, 1D ultrasonic sensors are distributed around the AMR’s body, often near the base to achieve 360-deg. coverage. However, these sensors cannot provide directional information, a severe limitation when it comes to safety and effective obstacle detection.
How ADAR Provides a Safety Shield
Sonair, based on ADAR, is a new sensor category designed to both enhance safety and reduce cost compared to the sensor combinations we’ve looked at so far.
Sonair is a 3D ultrasonic obstacle detection sensor that provides a “safety shield” around a robot. The new sensor operates by emitting a burst of ultrasound and then analyzing the signals received by an array of receivers. This gives a 3D view of the area in front of the robot, up to a range of five meters.
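As a rough, back-of-envelope illustration of what a five-meter acoustic range implies, the sketch below estimates the echo listening window per burst, assuming sound travels at roughly 343 m/s; the numbers are illustrative assumptions, not Sonair specifications.

```python
# Back-of-envelope sketch: how long must the receiver array listen after
# each ultrasonic burst to cover a 5 m detection range? The 343 m/s speed
# of sound is an assumption; Sonair's actual acquisition scheme is not
# publicly specified here.

SPEED_OF_SOUND = 343.0  # m/s

def listening_window_s(max_range_m: float) -> float:
    """Round-trip travel time of an echo from the farthest target."""
    return 2.0 * max_range_m / SPEED_OF_SOUND

if __name__ == "__main__":
    window = listening_window_s(5.0)
    print(f"Listening window: {window * 1e3:.1f} ms per burst")       # ~29 ms
    print(f"Naive upper bound on update rate: {1.0 / window:.0f} Hz")  # ~34 Hz
```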
The innovation is made possible by the integration of piezoelectric actuation in MEMS (microelectromechanical systems). The MEMS transducers, made of silicon, have an acoustic impedance that is well matched to air and, above all, they are millimeter-sized.
Unlike conventional commercially available transducers, these miniaturized transducers can be placed in an array with a separation corresponding to half an ultrasonic pulse wavelength. This enables image reconstruction of the full volume in front of the array, allowing Sonair to both send directional ultrasound and determine the direction incoming ultrasound is coming from.
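To get a feel for the scale involved, the short sketch below computes the half-wavelength element pitch for airborne ultrasound at an assumed 40-kHz operating frequency (an assumption chosen for illustration, not a published Sonair parameter); the pitch works out to roughly 4 mm, which is why millimeter-sized transducers are essential.

```python
# Sketch of why millimeter-scale MEMS transducers matter: the half-wavelength
# element pitch for airborne ultrasound. The 40 kHz operating frequency is
# assumed purely for illustration.

SPEED_OF_SOUND = 343.0  # m/s

def half_wavelength_pitch_m(frequency_hz: float) -> float:
    """Element spacing equal to half the acoustic wavelength in air."""
    wavelength_m = SPEED_OF_SOUND / frequency_hz
    return wavelength_m / 2.0

if __name__ == "__main__":
    pitch = half_wavelength_pitch_m(40_000.0)
    print(f"Required element pitch: {pitch * 1e3:.1f} mm")  # roughly 4.3 mm
```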
The imaging method is beamforming, a well-established technique deployed in medical ultrasound, sonar and radar systems.
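To make the beamforming idea concrete, here is a minimal delay-and-sum sketch for a small linear receiver array. It is a textbook illustration of the general technique, not Sonair's 3D reconstruction; the array geometry, sampling rate and synthetic echo are all assumptions.

```python
import numpy as np

# Minimal delay-and-sum beamforming sketch for a linear receiver array.
# Textbook illustration only; array geometry, sample rate and signal content
# are assumptions, not Sonair's actual processing.

SPEED_OF_SOUND = 343.0   # m/s
SAMPLE_RATE = 192_000    # Hz, assumed
NUM_ELEMENTS = 8
PITCH = 0.0043           # m, roughly half a wavelength at an assumed 40 kHz

def delay_and_sum(signals: np.ndarray, steer_angle_deg: float) -> np.ndarray:
    """Steer the array toward steer_angle_deg by delaying and summing channels.

    signals: (NUM_ELEMENTS, num_samples) array of received waveforms.
    """
    angle = np.radians(steer_angle_deg)
    output = np.zeros(signals.shape[1])
    for element in range(NUM_ELEMENTS):
        # Extra travel time to this element for a wave from the steered direction.
        delay_s = element * PITCH * np.sin(angle) / SPEED_OF_SOUND
        delay_samples = int(round(delay_s * SAMPLE_RATE))
        # Shift the channel to align echoes arriving from that direction.
        output += np.roll(signals[element], -delay_samples)
    return output / NUM_ELEMENTS

if __name__ == "__main__":
    # Synthetic data: one echo arriving from ~20 degrees off boresight.
    t = np.arange(2048) / SAMPLE_RATE
    true_angle = np.radians(20.0)
    signals = np.zeros((NUM_ELEMENTS, t.size))
    for element in range(NUM_ELEMENTS):
        arrival = 0.003 + element * PITCH * np.sin(true_angle) / SPEED_OF_SOUND
        signals[element] = np.sinc((t - arrival) * 40_000.0)  # toy echo pulse
    # Scan the steering angle; the summed energy peaks near the true direction.
    energies = {a: float(np.sum(delay_and_sum(signals, a) ** 2))
                for a in range(-60, 61, 5)}
    print(max(energies, key=energies.get))  # expect roughly 20
```

Scanning the steering angle over the synthetic data, the summed energy peaks near the direction the echo actually arrived from, which is how a receiver array recovers direction as well as distance.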
3D-based safety sensors are designed to overcome the limitations of 2D LiDAR, which senses only in a single plane. A 2D safety plane can miss important hazards such as people leaning towards the robot, garage doors, cables hanging from the ceiling and objects lying on the floor below the LiDAR mounting height.
Direct comparisons between the “resolution” of ADAR and LiDAR are not meaningful, because the two systems use sensor information in completely different ways. Sonair needs only a few points per object to accurately detect it in 3D; when paired with a camera, the camera’s image information can be used to define the extent of the object. Moreover, when combined with either a camera or a cheap, non-safety-certified LiDAR, Sonair can also be used for effective navigation and scene understanding.
The Sonair 3D ultrasonic sensor is not yet commercially available, but Sonair, the company behind this new category of mobile robot sensors, has launched a global early access program that a wide range of warehouse AMR manufacturers, automotive OEMs, and healthcare and cleaning robotics companies have been quick to join.