New AI-Powered Backpack Helps the Visually Impaired Navigate Their Environments
- By John K. Waters
- 03/24/2021
Researchers at the University of Georgia's Institute for Artificial Intelligence, led by computer vision/AI engineer Jagadish K. Mahendran, have developed an AI-powered, voice-activated backpack system designed to help vision-impaired wearers understand and navigate their surroundings.
The system is designed to detect common obstacles such as signs, hanging objects, crosswalks, moving objects, and stairs. It combines a "host computing unit," such as a laptop, carried in a backpack; a camera concealed in a vest jacket; and a pocket-size battery carried in a fanny pack. The system uses a Luxonis OAK-D spatial AI camera, which is connected to the computing unit in the backpack. Three tiny holes in the vest provide viewports for the camera. The battery life is about eight hours.
A Bluetooth-enabled earphone lets the user interact with the system via voice queries and commands. As the wearer moves through an environment, the system conveys information about obstacles through the earphone.
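The article doesn't describe the software behind the voice interface, but a minimal sketch of such a query/response loop, assuming generic off-the-shelf Python libraries (SpeechRecognition for speech-to-text and pyttsx3 for spoken output, chosen here purely for illustration and not named in the article), might look like this:

```python
# Hypothetical voice query/response loop; the actual system's
# software is not described in the article.
import speech_recognition as sr  # speech-to-text
import pyttsx3                   # text-to-speech

recognizer = sr.Recognizer()
tts = pyttsx3.init()

def speak(message: str) -> None:
    """Read a message aloud through the paired earphone."""
    tts.say(message)
    tts.runAndWait()

def listen() -> str:
    """Capture one voice query from the earphone's microphone."""
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio).lower()

while True:
    try:
        query = listen()
    except sr.UnknownValueError:
        continue  # speech not understood; keep listening
    if "what is around" in query:
        # A real system would summarize the camera's current detections.
        speak("Crosswalk ahead. Person approaching on the left.")
    elif "stop" in query:
        speak("Shutting down.")
        break
```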
The OAK-D camera is an AI-powered device that runs on Intel's Movidius VPU and the Intel Distribution of OpenVINO toolkit for on-chip edge AI inferencing. The unit is capable of running advanced neural networks, Intel says, while providing accelerated computer vision functions, a real-time depth map from its stereo pair, and color information from a single 4K camera.
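Luxonis's open source DepthAI Python API is the standard way to drive an OAK-D from a host computer. As an illustration of the detection-plus-depth capability described above (not the researchers' actual code; the model blob path is a placeholder), a minimal pipeline pairing an object detector with the stereo depth stream might look like this:

```python
# Sketch of an OAK-D spatial-detection pipeline using the DepthAI
# Python API. Illustrative only; the model blob path is a placeholder.
import depthai as dai

pipeline = dai.Pipeline()

# Color camera supplies frames to the neural network.
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)  # MobileNet-SSD input size
cam.setInterleaved(False)

# The stereo pair produces the real-time depth map.
left = pipeline.create(dai.node.MonoCamera)
right = pipeline.create(dai.node.MonoCamera)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)  # align depth to color
left.out.link(stereo.left)
right.out.link(stereo.right)

# Spatial detection fuses detections with depth, so each result
# carries real-world X/Y/Z coordinates in millimeters.
nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
nn.setBlobPath("mobilenet-ssd.blob")  # placeholder OpenVINO model blob
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)
stereo.depth.link(nn.inputDepth)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

# The host (the laptop in the backpack) reads results over USB.
with dai.Device(pipeline) as device:
    queue = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        for det in queue.get().detections:
            # det.label is a class index; z is distance, converted to meters.
            print(det.label, det.spatialCoordinates.z / 1000.0)
```

Because the neural network runs on the camera's own VPU, the host computer mostly just reads results, which presumably helps make the laptop-in-a-backpack form factor practical for a full day of use.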
"Last year, when I met up with a visually impaired friend, I was struck by the irony that, while I have been teaching robots to see, there are many people who cannot see and need help," Mahendran said in a statement. "This motivated me to build the visual assistance system with OpenCV’s Artificial Intelligence Kit with Depth (OAK-D), powered by Intel."
The World Health Organization estimates that 285 million people globally are visually impaired. Currently available visual assistance systems for navigation are relatively limited, ranging from Global Positioning System-based, voice-assisted smartphone apps to camera-enabled smart walking sticks. But these systems lack the depth perception necessary to facilitate independent navigation, said Hema Chamraj, director of Intel's Technology Advocacy and AI4Good teams.
"It’s incredible to see a developer take Intel’s AI technology for the edge and quickly build a solution to make their friend’s life easier," Chamraj said in a statement. "The technology exists; we are only limited by the imagination of the developer community."
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.