Researchers are working to enable smartphones and other mobile devices to understand and immediately identify objects in a camera’s field of view, overlaying lines of text that describe items in the environment.
"It analyzes the scene and puts tags on everything," said Eugenio Culurciello, an associate professor in Purdue University’s Weldon School of Biomedical Engineering and the Department of Psychological Sciences.
The approach is called deep learning because it relies on neural networks with many layers, loosely mimicking how the human brain processes information. Internet companies already use deep-learning software to let users search the Web for pictures and video that have been tagged with keywords. Such tagging, however, is not yet practical on portable devices and home computers because of the computing power it demands.
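As a loose illustration of the layered idea (not the researchers' actual system), the sketch below stacks three small neural-network layers: each layer transforms the output of the one before it, and the final layer scores a few candidate tags. The weights are random placeholders rather than trained values, and the tag names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity applied between layers
    return np.maximum(0, x)

def softmax(x):
    # Convert raw scores into probabilities over tags
    e = np.exp(x - x.max())
    return e / e.sum()

# Three stacked layers: 8-dim input features -> 6 -> 4 -> 3 tag scores.
# Weights are random stand-ins, not a trained model.
layers = [(rng.standard_normal((8, 6)), rng.standard_normal(6)),
          (rng.standard_normal((6, 4)), rng.standard_normal(4)),
          (rng.standard_normal((4, 3)), rng.standard_normal(3))]

def forward(x):
    # Pass the input through each layer in turn ("deep" = many layers)
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:
            x = relu(x)
    return softmax(x)

tags = ["car", "tree", "person"]          # hypothetical tag vocabulary
probs = forward(rng.standard_normal(8))   # stand-in for image features
print(tags[int(np.argmax(probs))])
```

A real system would learn the layer weights from millions of labeled images; the point here is only that recognition emerges from repeated layer-by-layer transformation of the input.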