Egocentric Vision: Why a Scientist Leverages Wearable Cameras as a Sensor Modality
The final installment of a three-part interview series with Dr. Brokoslaw Laschowski highlights his research in vision technology.
Laschowski, a neuroscientist with expertise in robotics and artificial intelligence, underscored the extent and significance of human reliance on vision. He said that his research lab takes a special interest in “teaching computers to see like humans.”
Until a few years ago, robotic prosthetic legs and exoskeletons were designed without vision. “These robots were essentially walking blind, and that was really the genesis of my program,” explained the research scientist and principal investigator at the Toronto Rehabilitation Institute.
Since then, other research labs have also begun investigating wearable cameras as a sensor modality for perceiving the surrounding walking environment. Researchers like Laschowski develop machine learning models to make sense of this visual input, ranging from basic classifiers such as support vector machines and linear discriminant analysis to deep architectures such as convolutional neural networks and recurrent networks, including LSTM models.
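To make the modeling concrete, the sketch below shows, in broad strokes, what a convolutional classifier for walking-environment recognition might look like. It is a minimal illustration in PyTorch, not Laschowski's actual architecture; the class count, layer sizes and input resolution are all assumptions.

```python
# A minimal sketch (not Laschowski's model): a small convolutional network
# that classifies one camera frame into hypothetical walking-environment
# classes, e.g. level ground, stairs up, stairs down.
import torch
import torch.nn as nn

class EnvironmentClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 224 -> 112
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 112 -> 56
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # global average pooling over the feature map
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = EnvironmentClassifier()
frame = torch.randn(1, 3, 224, 224)    # stand-in for one RGB frame from a wearable camera
logits = model(frame)
prediction = logits.argmax(dim=1)      # index of the predicted environment class
```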
Designing high-performance, highly efficient deep learning models that “can be deployed, run in real time and accurately detect the environment and are able to understand the visual scene” remains a complex challenge, he said.
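One concrete facet of that challenge is latency. Continuing the sketch above, the snippet below times per-frame inference against a camera frame budget; the 30 fps budget and iteration counts are assumptions for illustration.

```python
# A rough way to check the "real time" constraint: measure mean per-frame
# inference latency and compare it against the camera's frame budget.
# Reuses `model` and `frame` from the sketch above; 30 fps is an assumption.
import time
import torch

model.eval()
budget_ms = 1000.0 / 30.0  # ~33 ms per frame at 30 fps

with torch.no_grad():
    for _ in range(10):      # warm-up iterations before timing
        model(frame)
    n = 100
    start = time.perf_counter()
    for _ in range(n):
        model(frame)
    latency_ms = (time.perf_counter() - start) / n * 1000.0

print(f"mean latency: {latency_ms:.1f} ms "
      f"({'within' if latency_ms <= budget_ms else 'over'} the {budget_ms:.0f} ms budget)")
```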
All the technology his lab develops is general purpose. Predictions from the smart glasses his team develops can be used to interface with assistive devices such as a robotic prosthesis, an exoskeleton, a powered wheelchair or a smart walker. And instead of driving a mechatronic system, the same predictions could be used to interface directly with the user.
Future Vision for Seeing
Laschowski plans to expand the vision research that he and others are doing into brain-machine interfaces. He imagines an invasive, intracortical, bidirectional interface that could both read information from the brain and write information to it. For inspiration, he looks to systems developed by Neuralink, one of Elon Musk’s companies.
“We could do something along the lines of having a chip that’s implanted into the visual cortex of the brain—the back of the brain, the part responsible for vision,” he said. “The idea is that we could wirelessly interface our smart glasses with a brain implant, and then translate the pixels that are sensed with our camera, decode that visual information and then stimulate areas of the brain in the visual cortex to elicit a phenomenon known as a phosphene.”
A phosphene is the artificial sensation of seeing light without light actually entering the eye. Laschowski explained that if one can map pixels from the camera to electrical stimulation of the brain through neuromodulation, then one could potentially recreate the sensation of seeing artificially.
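To illustrate the pixel-to-stimulation mapping he describes, here is a minimal sketch in Python. Everything in it is a hypothetical simplification: the 10x10 electrode grid, the amplitude range and the average-pooling scheme are illustrative assumptions, not specifications of any real visual prosthesis.

```python
# A minimal sketch of the pixel-to-phosphene idea: downsample a camera frame
# to the resolution of a hypothetical electrode grid and scale brightness to
# per-electrode stimulation amplitudes. All parameters are illustrative.
import numpy as np

def frame_to_stimulation(frame: np.ndarray,
                         grid_shape: tuple = (10, 10),
                         max_amplitude_ua: float = 100.0) -> np.ndarray:
    """Map a grayscale frame (H, W) in [0, 255] to a grid of amplitudes (uA)."""
    h, w = frame.shape
    gh, gw = grid_shape
    # Average pooling: each electrode covers one rectangular block of the image.
    blocks = frame[:h - h % gh, :w - w % gw].reshape(
        gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Brighter regions map to stronger stimulation, eliciting brighter phosphenes.
    return blocks / 255.0 * max_amplitude_ua

frame = np.random.randint(0, 256, size=(480, 640)).astype(float)  # fake camera frame
amplitudes = frame_to_stimulation(frame)
print(amplitudes.shape)  # (10, 10): one stimulation amplitude per electrode
```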
“That’s one of my long-term goals, where we’re able to use technology like smart glasses to restore vision for patients with blindness and visual impairments,” he said.
Watch additional parts of this interview series with Dr. Brokoslaw Laschowski.