Effective, Automatic Access to Graphical Information for BVIs

Current Project:

Access to Graphical Information for Individuals who are Blind or Visually Impaired

[Figure: Overview of the TactileDiagram projects]

Overview Paper

Funding: National Science Foundation
NSF IIS - Human Centered Computing
NSF CBET - Research to Aid Persons with Disabilities
VCU A.D. Williams Grant

Although computers have greatly improved access to text for individuals who are blind or visually impaired (BVI), they have also made it far easier to produce, store, transmit, and use graphical representations to convey information. This creates a serious obstacle for the millions of people in the United States who are blind or visually impaired, because graphics are now ubiquitous and, in many instances, serve as the sole method of presenting information.

Multiple issues need to be addressed to provide effective access to graphical information for individuals who are blind or visually impaired:

  1. Automatic Visual-to-Tactile Graphics Conversion: For many tasks, verbal descriptions are not sufficient. Teachers of the visually impaired (TVIs) have traditionally translated visual diagrams into tactile diagrams for their BVI students. To produce an effective tactile diagram, however, a TVI must spend considerable time simplifying the original visual diagram so that it can be perceived, because touch is far less effective than vision at processing graphical information. Providing automatic, independent access to graphical information requires automating this step, and our lab has been developing algorithms to do so (a minimal sketch of one such pipeline follows this list).
  2. Tactile/Haptic Computer Interaction Devices: Several interface devices and methods exist for accessing text on a computer screen, but refreshable interfaces for accessing graphical information have been limited to small and/or expensive tactile pin displays. Our work on tactile/haptic interaction devices began before low-cost tablet computers became available. While we study the single point of contact that is the only information a tablet can sense, we have also investigated information spatially distributed across the finger pad and across multiple fingers. Recent work examines combining tactile feedback with haptic feedback for shared control with an intelligent robotic assistant to improve graphical access.
  3. Non-visual Presentation Formats: This section comprises two areas of research. (1) Comparison of tactile feedback with audio feedback: audio feedback has rarely been used to convey lines and areas in spatially presented graphics, despite its practical advantages over tactile feedback. Our lab has been measuring performance with each feedback modality in combination with other design parameters (an illustrative touch-to-audio mapping follows this list). (2) The format of the graphical elements: a critical finding from our lab is that user performance is significantly better when areas are textured than when only raised outlines are used. In addition, our lab has developed new methods for presenting the third dimension in perspective drawings, which have long been difficult for BVI users to interpret because touch has a far more restricted field of view than vision.
  4. Access Algorithms: To access detailed visual diagrams, sighted users rely on zooming and panning methods developed for vision. This capability is even more important for BVI users, because touch has poorer resolution than vision. However, touch also has a significantly smaller field of view than vision, which makes methods developed for visual use inappropriate for tactile use; our studies in this area develop zooming and panning algorithms suited to the tactile sense. A related issue with nonvisual diagrams is the need to reduce the amount of information they contain. TVIs refer to all information in a diagram that is not used in the lesson plan as "clutter" and remove it; however, this is problematic if lesson plans change, and it reduces opportunities for incidental learning. Our studies have therefore explored dynamic simplification, in which the user selects in real time which information in a diagram to interact with (see the layered-diagram sketch following this list).
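
To make the conversion step in item 1 concrete, here is a minimal sketch of one plausible simplification pipeline. It is an assumption for illustration, not our published algorithm: it uses OpenCV to strip fine detail that touch cannot resolve and to thicken the surviving outlines for embossing. The function name, thresholds, and kernel size are hypothetical choices.

    # Illustrative visual-to-tactile simplification (assumed, not the lab's algorithm).
    import cv2
    import numpy as np

    def simplify_for_tactile(image_path: str, min_area: int = 100) -> np.ndarray:
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        # Smooth fine texture that touch cannot resolve, then extract edges.
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        edges = cv2.Canny(blurred, 50, 150)
        # Drop small connected components; tiny detail only adds tactile clutter.
        n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(edges)
        cleaned = np.zeros_like(edges)
        for i in range(1, n_labels):  # label 0 is the background
            if stats[i, cv2.CC_STAT_AREA] >= min_area:
                cleaned[labels == i] = 255
        # Thicken the remaining lines so the embossed ridge is easy to trace.
        kernel = np.ones((3, 3), np.uint8)
        return cv2.dilate(cleaned, kernel, iterations=2)

The design mirrors the TVI practice described above: remove what touch cannot perceive, then exaggerate what remains.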
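
For the tactile-versus-audio comparison in item 3, the sketch below shows one simple way a finger position on a touchscreen could be mapped to sound, with pitch distinguishing a raised outline from a filled area. The element labels and pitch assignments are hypothetical illustrations, not the design tested in our studies.

    # Hypothetical touch-to-audio mapping for on-screen diagram exploration.
    from enum import Enum
    from typing import Optional

    class Element(Enum):
        BACKGROUND = 0
        OUTLINE = 1
        AREA = 2

    def feedback_frequency(label_map, x: int, y: int) -> Optional[float]:
        """Return a tone frequency (Hz) for the touched pixel, or None for silence.

        label_map is a 2-D grid of integer Element codes rasterized from the diagram.
        """
        element = Element(label_map[y][x])
        if element is Element.OUTLINE:
            return 880.0  # higher pitch marks a line under the finger
        if element is Element.AREA:
            return 440.0  # lower, sustained pitch marks a textured region
        return None       # silence over empty background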
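
Finally, the dynamic simplification work in item 4 suggests a simple data structure: keep diagram content in named layers so that "clutter" is hidden for the current task rather than deleted, preserving it for changed lesson plans and for incidental learning. The class, method, and layer names below are hypothetical.

    # Assumed layered-diagram structure for dynamic simplification.
    from dataclasses import dataclass, field

    @dataclass
    class LayeredDiagram:
        layers: dict[str, list]                      # layer name -> drawable elements
        active: set[str] = field(default_factory=set)

        def select(self, *names: str) -> None:
            """Choose which layers the user wants to explore right now."""
            self.active = {n for n in names if n in self.layers}

        def visible_elements(self) -> list:
            """Hidden layers stay stored; nothing is ever deleted."""
            return [e for name in self.active for e in self.layers[name]]

    # Example: a lesson needs only the cell outline today, but organelle
    # detail remains available if the lesson plan changes tomorrow.
    diagram = LayeredDiagram(layers={
        "cell_outline": ["membrane_path"],
        "organelles": ["nucleus", "mitochondria"],
        "labels": ["label_nucleus"],
    })
    diagram.select("cell_outline")
    print(diagram.visible_elements())  # ['membrane_path']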

================================================================