Well, drone technology advances by leaps and bounds, and now researchers think they have a new way to make these UAVs and other vehicles more nimble when their surroundings change suddenly. The approach tracks motion with neuromorphic sensors, which are triggered by sudden events, instead of the standard inertial sensors such as accelerometers and gyroscopes.
Even autonomous vehicles with cameras and controls need time to interpret camera data about the environment. Here, state-estimation algorithms first identify image features (usually boundaries between objects, picked out through differences in shade and color), then select a subset unlikely to change with a new perspective. A few dozen msec later, the camera fires again and the algorithm attempts to match the current features to the previous ones. Once the features are matched, it calculates the vehicle’s change in position. The sampling takes 50 to 250 msec, depending on how dramatically the environment changes, and the whole control cycle to correct course takes 0.2 sec or more, which is not fast enough to react to sudden changes in a vehicle’s surroundings.
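To make that latency concrete, here is a minimal toy sketch of such a frame-based pipeline in Python. The helper names (detect_features, match_features, estimate_motion) are stand-ins invented for illustration, not anything from Censi's work, and the "features" here are simply the pixels with the strongest gradients.

```python
import numpy as np

def detect_features(frame, k=50):
    # Toy "feature" detector: keep the k pixels with the strongest local
    # intensity gradients (a stand-in for real corner/edge detection).
    gy, gx = np.gradient(frame.astype(float))
    strength = gx**2 + gy**2
    idx = np.argsort(strength.ravel())[-k:]
    return np.column_stack(np.unravel_index(idx, frame.shape))

def match_features(prev_pts, curr_pts):
    # Toy matcher: pair each previous feature with the nearest current one.
    pairs = []
    for p in prev_pts:
        d = np.linalg.norm(curr_pts - p, axis=1)
        pairs.append((p, curr_pts[np.argmin(d)]))
    return pairs

def estimate_motion(pairs):
    # Average displacement of matched features approximates how the scene
    # (and hence the vehicle) shifted between frames; rotation is ignored.
    deltas = np.array([c - p for p, c in pairs])
    return deltas.mean(axis=0)

# One frame-based control cycle: the vehicle only learns how it moved after
# the whole detect -> match -> estimate chain runs, which is why the latency
# adds up to tens or hundreds of milliseconds per update.
prev_frame = np.random.rand(64, 64)
curr_frame = np.roll(prev_frame, shift=2, axis=1)  # scene shifted 2 px right

prev_pts = detect_features(prev_frame)
curr_pts = detect_features(curr_frame)
motion = estimate_motion(match_features(prev_pts, curr_pts))
print("Estimated inter-frame displacement (rows, cols):", motion)
```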
To address this limitation, researcher Andrea Censi of MIT’s Laboratory for Information and Decision Systems and others have developed a way to supplement cameras with a neuromorphic sensor that takes measurements a million times a second.
Censi and colleagues presented the new algorithm at the International Conference on Robotics and Automation earlier this year. Vehicles running the algorithm can update their location every 0.001 sec, fast enough for nimble maneuvers. “Other cameras have sensors and a clock, so with a 30-frames-per-sec camera, the clock freezes all the values every 33 msec,” says Censi, and then the values are read out. In contrast, neuromorphic sensors let each pixel act as an independent sensor. “When a change in luminance is larger than a threshold, the pixel … communicates this information as an event and then waits until it sees another change.”
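For a sense of what an event-based pixel does, here is a toy Python emulation. It assumes the usual description of such sensors, with each pixel firing whenever its log-luminance changes by more than a threshold since its last event; the function name and threshold value are illustrative assumptions, not taken from any actual sensor interface.

```python
import numpy as np

def events_from_frames(frames, threshold=0.15):
    """Emit (time, x, y, polarity) events whenever a pixel's log-luminance
    changes by more than `threshold` since the last event at that pixel."""
    reference = np.log(frames[0] + 1e-6)   # each pixel remembers its last reported level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_frame = np.log(frame + 1e-6)
        diff = log_frame - reference
        fired = np.abs(diff) > threshold
        for y, x in zip(*np.nonzero(fired)):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
        reference[fired] = log_frame[fired]  # reset only the pixels that fired
    return events

# Tiny demo: a bright square sliding one pixel per step across a dark scene.
frames = np.full((5, 16, 16), 0.1)
for t in range(5):
    frames[t, 6:10, 3 + t:7 + t] = 0.9
print(len(events_from_frames(frames)), "events generated")
```

Only the pixels along the square's moving edges fire; the static background generates no data at all, which is what lets the sensor report changes with microsecond timing instead of waiting for a full frame.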
The new algorithm supplements camera data with these events, tracking changes in luminance with 1-µsec resolution, so it doesn’t need to identify features at all. Comparing before and after is easier because even a dynamic environment changes very little over a microsecond. Nor does the algorithm try to match all the features in the previous and current scenes at once; instead it generates hypotheses about how far the vehicle has moved. Over time, the algorithm uses a statistical construct called a Bingham distribution to pick the hypothesis that is confirmed most often, letting it track vehicle orientation more efficiently than other approaches.
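The hypothesis-tracking idea can be sketched in a heavily simplified form. The toy below votes over a handful of 1-D displacement hypotheses as events arrive and keeps the one with the most support; the real algorithm estimates orientation and weighs its hypotheses with a Bingham distribution, which this sketch does not attempt to reproduce.

```python
import numpy as np

# Candidate horizontal displacements (in pixels) the vehicle may have made
# since the last update; in the real system the hypotheses concern
# orientation and are scored with a Bingham distribution.
hypotheses = np.arange(-3, 4)            # -3 .. +3 pixels
support = np.zeros(len(hypotheses))

def update_with_event(event_x, edge_map):
    """Each incoming event votes for whichever displacement best explains it:
    the event should line up with a known edge once the scene is shifted back."""
    for i, d in enumerate(hypotheses):
        shifted = event_x - d
        if 0 <= shifted < len(edge_map) and edge_map[shifted]:
            support[i] += 1.0            # this hypothesis explains the event

# A 1-D "map" of where edges were seen previously (True = edge present).
edge_map = np.zeros(32, dtype=bool)
edge_map[[5, 12, 20]] = True

# Events caused by the scene sliding 2 pixels to the right.
for ex in [7, 14, 22, 7, 14]:
    update_with_event(ex, edge_map)

best = hypotheses[np.argmax(support)]
print("Most supported displacement hypothesis:", best, "pixels")
```

Because each event updates the tally immediately, the best-supported hypothesis is available after every microsecond-scale event rather than once per frame.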
Recent experiments with a small vehicle fitted with a camera and an event-based sensor show that the algorithm is as accurate as existing state-estimation algorithms. With that done, Censi says, the next step is to develop controls that decide what to do based on those state estimates.
What's most interesting is that the algorithm is said to work particularly well for making quadrotors that rely only on onboard perception and control nimbler. So maybe it's time for Wallich to perfect his son-walking UAV at last.