New digital technologies are having a major impact on the lives of people with a visual impairment or blindness. Three sets of innovations are proving useful – advances in computer vision, new methods of creating augmented reality, and innovative uses of geolocation technologies.
According to the World Health Organization, as of 2014 there were approximately 285 million people with a visual impairment worldwide: 39 million who were blind and 246 million who had low vision. About 1.3 million Americans are classified as legally blind (the visual field in their better eye is 20 degrees or less, or their acuity is less than 20/200), while about 290,000 are totally blind (with at most some light perception).
In spite of their immense usefulness, canes and guide dogs have significant limitations. For example, canes can’t detect overhead hazards. According to a 2010 survey of 307 persons with a visual impairment, approximately 13% experience a head-level accident more often than once a month, and 23% of those accidents required some level of medical intervention. In addition to head injuries, a review of 31 studies found that “those with reduced visual acuity are 1.7 times more likely to have a fall and 1.9 times more likely to have multiple falls compared with fully sighted populations. The odds of a hip fracture are between 1.3 and 1.9 times greater for those with reduced visual acuity.”
Safety is paramount for a person with a visual impairment. They need to find safe places to cross a street. They must avoid hitting obstacles or falling over drop-offs. They sometimes have to navigate unfamiliar environments, read signs, or ask for confirmation of their current location or orientation. Persons with visual impairments often have difficulty recognizing the objects and people around them. Changes in a familiar environment can present challenges, as can the task of finding a misplaced object.
One of the advantages of relying on canes, guide dogs, or human assistance is simplicity and relative ease of use. Technological aids are often problematic because of cost, usability, performance, and complexity. As Manduchi and Coughlan put it in their 2012 article on electronic travel aids (ETAs), “The cane is economical, reliable and long-lasting, and never runs out of power.” They add:
“…it is not clear whether some of the innovative features of newly proposed ETAs (longer detection range, for example) are really useful for blind mobility. Finally, presenting complex environmental features (such as the direction and distance to multiple obstacles) through auditory or tactile channels can easily overwhelm the user, who is already concentrated on using his or her remaining sensory capacity for mobility and orientation.”
Electronic travel aids may place additional burdens on a person with a visual impairment. Most ETAs require the user to actively scan the environment, which is time-consuming and demands conscious, concentrated effort. Sometimes the feedback from an ETA means that the user must perform additional measurements to identify an obstacle. For those who rely on environmental sound cues, acoustic feedback from an ETA can mask those very cues. Finally, there is the added expense of acquiring a technological aid in the first place.
The use of mobile digital devices as navigational aids is quite recent: the first usable wearable digital devices for wayfinding appeared in the late 1990s (although a handheld talking calculator for blind people was developed as early as 1976 by Telesensory Systems of California). Since 2000, there has been a veritable explosion of research and development in this field, with over 7,500 engineering articles written on assistive technologies and visual impairment in the past 25 years, and over 1,300 articles on solving the problem of navigation for people who are blind or visually impaired. More than 600 engineering articles on augmented reality and visual impairment have been published since 2000, the majority within the past 5 years, and the number is increasing every year.
Despite the large number of studies completed or underway on navigational aids for blind people, many researchers argue that human assistance in identifying objects and pathways for a person with low to no vision is still superior to any navigational technology available today. But this may soon change.
In the past 10 years, new technologies such as machine learning, big data, and computer vision have matured to the point where they are only now coming to fruition in product development. For example, range data gathered over time can be combined using a technique called “simultaneous localization and mapping” (SLAM) to build a three-dimensional reconstruction of the local environment and locate the user within the mapped space. SLAM is already used successfully to provide computer vision for autonomous mobile robots. Other promising computer vision techniques that can be used for navigation include:
- Tracking and probabilistic inference of position
- Object and face detection algorithms
- Depth calculations using a variety of techniques, including stereo cameras and infrared beams
- Optical flow calculations including time to collision, motion detection, focus of expansion, and inertial information
- Use of context (sense of place from either the user, a previous user, or a knowledge map)
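To make the depth-calculation item above concrete, here is a minimal sketch of how a stereo camera pair yields distance. It assumes the standard pinhole-camera triangulation formula (depth = focal length × baseline ÷ disparity) on rectified images; the focal length, baseline, and disparity values are hypothetical, chosen only for illustration.

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Triangulate distance to a point seen by a rectified stereo pair.

    Pinhole-camera model: depth = f * B / d, where f is the focal
    length in pixels, B the distance between the cameras in meters,
    and d the horizontal shift of the feature between the two images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical values: 700 px focal length, 12 cm camera baseline,
# and a feature that shifts 35 px between the left and right images.
distance = depth_from_disparity(700.0, 0.12, 35.0)
print(f"Estimated distance: {distance:.2f} m")  # 700 * 0.12 / 35 = 2.40 m
```

The inverse relationship is why nearby obstacles (large disparity) are measured more precisely than distant ones, a useful property for collision warnings.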
All this innovation has led to the development of new navigational aids for persons with a visual impairment. Some take the form of mobile apps, while others require new hardware. An example of where we are today is Toyota’s Project BLAID, a wearable device that scans the environment with cameras, reads text, and detects obstacles. It is similar to Google’s Tango technology, which integrates three types of functionality:
- Motion-tracking: using visual features of the environment, in combination with accelerometer and gyroscope data, to closely track the device’s movements in space
- Area learning: storing environment data in a map that can be reused later, shared with other Tango devices, and enhanced with metadata such as notes, instructions, or points of interest
- Depth perception: detecting distances, sizes, and surfaces in the environment
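Tango’s internals aren’t described in this post, but the motion-tracking idea can be illustrated with a toy dead-reckoning sketch: updating a 2-D position from successive heading changes (as a gyroscope would report) and distances traveled (as inertial or visual odometry might estimate). All values and the `dead_reckon` helper are hypothetical, not Tango’s actual implementation.

```python
import math

def dead_reckon(start_xy, heading_rad, increments):
    """Toy dead reckoning: update a 2-D position from a sequence of
    (turn_rad, distance_m) increments, roughly as an inertial tracker
    might after fusing gyroscope and accelerometer data."""
    x, y = start_xy
    for turn, dist in increments:
        heading_rad += turn                 # gyroscope: change in heading
        x += dist * math.cos(heading_rad)   # advance along current heading
        y += dist * math.sin(heading_rad)
    return (x, y), heading_rad

# Hypothetical path: walk 3 m east, turn 90 degrees left, walk 2 m north.
pos, heading = dead_reckon((0.0, 0.0), 0.0,
                           [(0.0, 3.0), (math.pi / 2, 2.0)])
print(pos)  # approximately (3.0, 2.0)
```

Pure dead reckoning drifts as small errors accumulate, which is why systems like Tango anchor the inertial estimate to visual features of the environment, and why area learning (a stored, reusable map) matters.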
We will be discussing assistive technology in upcoming blog posts, including what people are using today to navigate both indoors and outdoors, how to secure funding for this assistance, and how our latest app ties all of this together when it’s released next month.
By Gary Woodill