Software Accessibility - Where Are We Today?
- Where Are We Today?
- Alternative ways to access the screen's contents
- Alternative ways to command the computer and enter data
- Uh Oh! Lack of Context is a Major Problem
- Enter Stage Left: an API that Provides Context
- What Do I Need to Do?
- Links and Resources
The accessibility of computer software has seen drastic improvements over the past two decades. This article reviews the progress and technology as it has developed.
Up until this point, the largest driving force behind desktop computing environments has been Microsoft, first with MS DOS, followed by variants of Microsoft Windows. These operating systems were not designed with the needs of people with disabilities in mind. Many, including those who were blind or physically disabled, were unable to use applications which were written for Microsoft operating systems. These applications assumed that computer users could:
- Read and react to text and images displayed on the screen.
- Type on a standard keyboard.
- Select text, pictures, and other information using a mouse.
- React to sounds played. This tends to be somewhat less of a limitation in that most software doesn't rely exclusively on audio to relay feedback.
If a person was unable to do one of the above-listed tasks, they found themselves unable to use many popular computer applications. Here are some of the groups of people that have problems doing some of those tasks:
- Print disabled: blind, deaf-blind, low vision, obstructed vision, dyslexic, cognitively disabled and illiterate individuals.
- Physically disabled: users with amputations, paralysis, repetitive stress injuries, cerebral palsy, muscular dystrophy, Parkinson's disease, or other conditions limiting mobility.
- Hearing impaired
We must also consider the increasing number of aging baby boomers who are beginning to experience problems with their sight, hearing, or dexterity. When you add all these groups of people together, that's a lot of potential users!
In answer to this problem, many small accessibility hardware and software vendors created products and software which helped people who could not perform one of the four basic tasks to use common computer applications. Some examples of these assistive devices and software include:
- Screen reading software, which speaks text displayed on the screen using hardware or software text-to-speech, and which allows a blind person to use the keyboard to simulate mouse actions
- Alternate input devices, which allow people with physical disabilities to use alternatives to a keyboard and mouse
- Voice recognition software, which allows a person to simulate typing on a keyboard or selecting with a mouse by speaking into the computer
- Screen magnification software, which allows a low-vision computer user to more easily read portions of the screen
- Comprehension software, which allows a dyslexic or learning disabled computer user to see and hear text as it is manipulated on the computer screen
In fact, an entire adaptive technology industry has grown up around these issues. One great place to learn about this industry is the CSUN conference in Los Angeles, which takes place every year in mid-to-late March.
Most computer programs are so visual that they are difficult or impossible for persons with visual impairments to use. This need not be the case. Here's how non-print readers use desktop software today:
- Text-to-speech (TTS) - makes the computer talk to the user: Those who can't read print at all usually use talking programs (text-to-speech). Talking programs are also useful for print disabilities other than visual impairments, such as dyslexia. Additionally, text-to-speech is used by those who cannot speak, in place of their own voice. Finally, this technology could be useful to mainstream users, on portable information appliances, or to access information when the eyes are busy elsewhere.
- Magnification - enlarges the screen's contents: For those with low vision, it may suffice to use a larger font, a built-in high contrast theme, or even just an extra-large screen. Otherwise, screen magnification programs may be used, which allow zooming in to portions of the screen, while following the mouse or the current focus. Screen magnifiers also have some built-in text-to-speech and the ability to filter text and images through various color palettes, such as black on yellow for high contrast, or green on blue for low contrast.
- The Optacon - provides access to printed words, graphics and on-screen information by means of an array of vibrating pins the size of an index finger. The user reads the vibrating pins with one hand, and moves a mini-camera over the material to be read with the other. Unfortunately, the unit is not currently produced, although there is occasional talk of resurrecting this useful device.
- Braille - is a solution used for quiet reading, for detailed work, and by deaf-blind users. This can come in the form of hard copy braille printed on braille embossers, or from a refreshable braille display (see below). These technologies require special drivers, braille formatting routines and software-based text-to-braille translation. The importance of braille itself must be emphasized. For those that read it, braille can offer higher levels of employment and life fulfillment.
Left: refreshable braille displays of various sizes. Right: a braille embosser.
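The color-palette filtering that screen magnifiers perform can be sketched in a few lines. The following is an illustrative Python model, not any real magnifier's implementation: each pixel's luminance is remapped onto a two-color scheme such as black on yellow; real products do this work in the display pipeline.

```python
# Illustrative sketch of a magnifier's high-contrast palette filter.
# Function names and the threshold value are assumptions for this example.

def luminance(r, g, b):
    """Approximate perceived brightness of an RGB pixel (0-255)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def apply_high_contrast(pixels, fg=(0, 0, 0), bg=(255, 255, 0), threshold=128):
    """Map dark pixels to the foreground color and light pixels to the
    background color, e.g. black text on a yellow background."""
    return [fg if luminance(r, g, b) < threshold else bg
            for (r, g, b) in pixels]

# A dark pixel becomes black, a light pixel becomes yellow:
row = [(20, 20, 20), (240, 240, 240)]
print(apply_high_contrast(row))  # [(0, 0, 0), (255, 255, 0)]
```

Swapping the `fg`/`bg` pair gives the other palettes mentioned above, such as green on blue for users who need lower contrast.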
Audio- and braille-based user interfaces are concepts that software designers are historically untrained for. The basic concept is easy: dealing with information when you're blind is like seeing everything through a mail slot, sequentially and methodically. Only small pieces of sequential, non-graphical information can be conveyed via text-to-speech or a refreshable braille display. Whatever the user does, the software needs to respond with small, bite-sized pieces of information that are as short and to the point as possible. Ideally, intelligent decisions are made so the user does not have to wade through as much non-relevant data.
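The "short, bite-sized pieces" principle can be made concrete with a small sketch. This is a hypothetical model, not any real screen reader's API: when a control receives focus, the software announces just that control's name, role, and state in one short phrase, rather than describing the whole screen.

```python
# Illustrative sketch of bite-sized speech output on focus changes.
# The Widget class and the phrasing are invented for this example.

from dataclasses import dataclass

@dataclass
class Widget:
    role: str            # e.g. "checkbox", "button"
    name: str            # the control's visible label
    checked: bool = False

def speak_focus(widget):
    """Build the short phrase to speak when a control gains focus."""
    parts = [widget.name, widget.role]
    if widget.role == "checkbox":
        # Only state that is relevant to this control is appended.
        parts.append("checked" if widget.checked else "not checked")
    return ", ".join(parts)

print(speak_focus(Widget("checkbox", "Enable sticky keys", checked=True)))
# Enable sticky keys, checkbox, checked
```

The point of the design is what the function leaves out: no layout, no neighboring controls, no decoration, only the minimum a sequential listener needs at this moment.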
Another problem is how people with disabilities get information into the computer. If you're physically disabled, you may not be able to type on a regular keyboard or use a mouse. Here are some of the alternative ways physically disabled people enter information:
- Sticky keys: make entering key combinations easy. For example, to make a capital letter, first press the Shift key, release it, then press the letter to be capitalized. The sticky key technique is used by people who have only one usable hand, or who have no use of their hands and type using a stick held in the mouth.
- Single switch: technologies enable persons with severe physical disabilities to operate a computer with a single button. Some, like Stephen Hawking, enter information by choosing among lists of options. They might press a switch down to begin moving a highlight bar through the list, and release the switch when the desired option is highlighted.
- Special keyboards: exist to make data entry easier. However, any special features are generally handled in the keyboard itself, so no special programming is required.
- Speech recognition: technology lets people talk to the computer. This technology has come a long way, but still needs to be more integrated into mainstream software.
- Consistent keyboard support and hotkeys: Many people can't use a mouse, so extremely consistent keystroke support is a very important consideration. Blind testers have a knack for finding ways to improve keystroke support in almost any given piece of software, and testing with people who have disabilities generally benefits everyone. Use the accessible toolkit checklist to make sure your UI controls adhere to standards.
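The sticky-keys behavior described above is essentially a tiny state machine: a modifier is pressed and released on its own, latched, and then applied to the next ordinary keystroke. Here is a minimal illustrative sketch; the class and key names are assumptions for this example, not a real input driver.

```python
# Illustrative sticky-keys state machine: modifiers pressed alone are
# latched and combined with the next non-modifier key.

MODIFIERS = {"shift", "ctrl", "alt"}

class StickyKeys:
    def __init__(self):
        self.latched = set()

    def press(self, key):
        """Process one keystroke; return the combined chord, or None
        if the keystroke only latched a modifier."""
        if key in MODIFIERS:
            self.latched.add(key)    # remember the modifier for later
            return None
        chord = "+".join(sorted(self.latched) + [key])
        self.latched.clear()         # latched modifiers apply to one key only
        return chord

kb = StickyKeys()
kb.press("shift")        # latches Shift; nothing is emitted yet
print(kb.press("a"))     # shift+a -- a capital letter, one key at a time
print(kb.press("b"))     # b -- the latch was cleared by the previous key
```

This is exactly the capital-letter example from the text: Shift, released, then the letter, with no two-key chord ever required.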
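Single-switch scanning can also be modeled in a few lines. In this hedged, time-free sketch (an illustration, not assistive-device firmware), the highlight advances one option per scanning step while the switch is held, wrapping around the list, and the highlighted option is chosen on release.

```python
# Illustrative model of single-switch scanning selection.

def scan_select(options, steps_held):
    """Return the option highlighted after the switch has been held
    for `steps_held` scanning steps, wrapping around the list."""
    return options[steps_held % len(options)]

menu = ["yes", "no", "help", "quit"]
print(scan_select(menu, 2))   # help -- released after two steps
print(scan_select(menu, 5))   # no -- the highlight wraps past the end
```

Real scanning software adds adjustable step timing, audible feedback per step, and row/column scanning for large grids such as on-screen keyboards, but the core selection loop is this simple.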
The solutions developed by these accessibility vendors have greatly increased the employment and personal fulfillment opportunities of hundreds of thousands of persons with disabilities, and the importance of their work cannot be overstated. However, all these solutions fell short of providing people with disabilities with a working environment that was completely accessible and usable. This is due to a simple problem of context: the idea that a user's interaction with a computer is governed by the situation in which that interaction takes place. When the user types something on the keyboard, or when an application displays text or images on the screen, the exact meaning of these actions is determined by the context in which they occur. For example, one application might display the image of a light bulb to indicate that it is processing a task, while another might display the same image to indicate that it has completed a task. Unless the application somehow notifies a blind person of the meaning of the light bulb image, that person cannot understand what the application is attempting to convey. Similarly, voice recognition software often needs information about the context of a user's interaction in order to make sense of what the user is saying. This context problem still plagues modern accessibility aids and solutions.
The most recent notable attempt at solving this problem was put forth by Microsoft in 1997, and is called Microsoft Active Accessibility (MSAA). Realizing that complete accessibility was not possible without cooperation between applications and accessibility aids such as screen reading or voice recognition software, Microsoft Active Accessibility defines a Windows-based standard by which applications can communicate context and other pertinent information to accessibility aids. This solution has seen only partial success, largely because it requires significant changes to the applications being made accessible. Because most popular desktop and productivity applications are not open source, this forced disabled people to rely on the companies that produce the software to make it accessible. These companies were often reluctant, for various reasons including the large amount of time required to do so. On a positive note, recent federal purchasing rules such as Section 508 have caused many companies to pay attention and implement MSAA support.
Microsoft was on the right track with Microsoft Active Accessibility, but because the source code of most popular desktop applications used in large corporations is not publicly available, those applications were never made fully accessible. With open source software, however, making the modifications necessary for accessibility is entirely possible.
Open source software is an ideal way to meet the needs of disabled users, because accessibility can be fully integrated into the core design, rather than tacked on as an afterthought. It also gives disabled programmers a chance to control their own destiny, by giving them the opportunity and the right to directly fix inaccessible software themselves.
Furthermore, any software solution that can enable equality should by all rights be free of charge: an integral part of society's infrastructure. If no special hardware is required, why should a disabled person pay extra money to use the same software as everyone else? That said, there is still an important role for adaptive technology vendors in creating special services and hardware, or even proprietary software on platforms where that is appropriate. The ideal situation would be for adaptive technology professionals to make money on rehabilitation, training and support, something there is currently not enough of. Each end user has a unique set of problems, and in the open source world, providing highly customized solutions can be a business in itself.
Right now, GUIs on Linux are mostly not accessible; Microsoft Windows is still far more accessible. Gnome, KDE, StarOffice, KOffice, Mozilla and all other GUI software packages on Linux are unusable by large numbers of disabled users. There has been some progress with the support of Gnome's ATK APIs in many of these packages, and with the development of GOK (Gnome Onscreen Keyboard) and Gnopernicus (a screen reader and magnifier). However, these solutions are not yet truly usable for real disabled end users.
- Follow the general front-end accessibility requirements:
There are a number of potential "gotchas" when developing XUL UI. Please follow the practical techniques listed in the Accessible XUL Authoring Guidelines. These guidelines cover many possible scenarios. If you take a little time to learn them, they will become an unconscious improvement to your design and engineering technique.
- Ensure correct keyboard accessibility when developing new controls:
Mozilla's XUL and HTML widgets already support proper keyboard accessibility, so let's not regress in that area. Make sure that every new UI control that's developed provides the correct keyboard support.
Follow the Accessible toolkit checklist whenever using XBL to create a new widget.
- Support MSAA and ATK via nsIAccessible when developing new controls:
Mozilla is in a great position to provide context so that custom controls can be made accessible. Engineers can provide context simply by creating an nsIAccessible for each custom control. The infrastructure for doing this is straightforward.
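The core idea behind MSAA, ATK, and nsIAccessible is that every control exposes its role, name, and state through one uniform interface, so an assistive tool can ask "what is this?" instead of guessing from pixels. The sketch below models that idea in Python purely for illustration; the real interfaces are C++/XPCOM and these class and method names are invented for this example.

```python
# Illustrative model of an accessibility API's per-control contract:
# role + name + state, queryable by any assistive technology.

class Accessible:
    """What an assistive technology can ask of any control."""
    def role(self):  raise NotImplementedError   # "button", "progressbar", ...
    def name(self):  raise NotImplementedError   # human-readable label
    def state(self): raise NotImplementedError   # current status

class ProgressIndicator(Accessible):
    """E.g. the 'light bulb' image from the context example earlier:
    its meaning is exposed explicitly instead of being locked in pixels."""
    def __init__(self, busy):
        self.busy = busy
    def role(self):  return "progressbar"
    def name(self):  return "Task status"
    def state(self): return "busy" if self.busy else "done"

def describe(acc):
    """What a screen reader could announce for any Accessible object."""
    return f"{acc.name()}, {acc.role()}, {acc.state()}"

print(describe(ProgressIndicator(busy=True)))   # Task status, progressbar, busy
```

Because `describe` depends only on the `Accessible` contract, the same screen reader code works for every control that implements it, which is exactly the cooperation between applications and accessibility aids that MSAA set out to standardize.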
No matter what kind of work you do, the basis of accessibility is the need to understand that every user is different. After that, the exact techniques may change depending on the engineering environment. See Links and Resources below for information and tools for both web and desktop application developers.