
Disability Pride Month: The Origins of Assistive Technology

Written by Lily Mordaunt | Jul 27, 2022 1:55:46 PM

Editor's Note: Thanks to UsableNet's intern, Lily Mordaunt, for contributing this post in honor of Disability Pride Month. You can read more about Lily in her other blogs published on UsableNet.com by clicking here.

Each year, July is celebrated as Disability Pride Month. As someone with a visual impairment, I have had my ups and downs with being disabled.

I frequently experience inaccessible platforms, thoughtless commentary, and just plain awkward mishaps. But I've also made lifelong friends through programs geared explicitly toward blind and visually impaired youth, experienced the kind side of humanity, and, most importantly for me, come away with many stories. Some good, some bad, almost all humorous in some way, even if I don't see it in the moment. (Yes, pun intended.)

Digital accessibility has come a long way since the US enacted the Americans with Disabilities Act (ADA) in 1990. With this post, I want to create a timeline of assistive technology to celebrate how much progress disabled people and our able-bodied allies have made for accessibility. 

Closed-Captioning

Used most by people with auditory disabilities, closed captioning is the text version of any spoken content in a TV show, movie, or other form of media.

When I started this research journey, I was shocked when I discovered just how long some of this technology had been around. Technically, closed-captioning as we know it was first aired in 1979 on the BBC. But its origins extend even further back.

During the late 1800s, live interpreters would explain to audiences what was happening during short, silent films. Then, in the 1920s, when silent films grew longer, intertitles (text describing action or dialogue) were shown between scenes. When talkies (films with sound) came along, these films no longer needed intertitles.

Distressed by this development, which now excluded deaf moviegoers, deaf actor Emerson Romero created his own version of intertitles/captions, but they were crude and not prioritized by the public. As the years passed, others developed forms of movie captioning that were sporadically added to films until a 1958 law established federal support for captioning Hollywood movies.

Captions would remain the domain of movies until 1972, when Julia Child's 'The French Chef' became the first TV program to feature captions regularly. The show used open captions, which unfortunately can't be toggled on and off and can distract viewers who don't need them. Caption decoders came next: devices that sit on top of the television and let the viewer turn captions on or off as needed.

 

Dictation Software

Dictation software uses speech recognition to turn your spoken words into text as you talk. Software like this is helpful to a wide variety of disabled people, including those with mobility and cognitive impairments.

Dictation software originated in 1952, when Bell Laboratories created the Audrey system, which could recognize spoken digits from 0 to 9. The software calibrated to each speaker's voice, and the speaker had to pause after each number; when Audrey identified a number, it would flash a corresponding light. Recording and playback technology, like Alexander Graham Bell's 1881 Dictaphone, already existed, but the Audrey system could react to the human voice in real time.

In 1962, IBM created a device that could recognize 16 spoken words and perform simple tasks. It would be another ten years before the US Department of Defense funded Harpy, which could understand regular human speech but was limited to a thousand-word vocabulary.

1982 saw the introduction of the Dragon system, which could predict words and phrases, using context from the words spoken to form grammatically correct sentences. But it wouldn't be until 1990 that Dragon's dictation software became available to the public.
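
Dictation has since become a standard feature of phones and desktop operating systems, and modern browsers expose a similar capability through the Web Speech API. As a rough illustration rather than any particular product's implementation, here is a minimal TypeScript sketch that assumes a Chromium-based browser, where speech recognition is exposed under a webkit prefix:

```typescript
// Minimal dictation sketch using the browser's Web Speech API.
// Assumes a Chromium-based browser, where speech recognition is
// exposed under a webkit prefix. This is only the core
// "speech in, text out" loop, not a full dictation product.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.lang = "en-US";        // language to recognize
recognition.interimResults = true; // emit partial results while the user is still speaking
recognition.continuous = true;     // keep listening until stop() is called

recognition.onresult = (event: any) => {
  // Each result holds one or more alternatives; take the top guess.
  const latest = event.results[event.results.length - 1];
  const transcript: string = latest[0].transcript;
  console.log(latest.isFinal ? `Final: ${transcript}` : `Interim: ${transcript}`);
};

recognition.onerror = (event: any) => {
  console.error("Speech recognition error:", event.error);
};

recognition.start(); // begins listening; the browser will ask for microphone permission
```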

Screen Readers

Screen readers use text-to-speech technology to turn on-screen text into speech or braille. They are typically used by people with visual impairments, and sometimes by people with cognitive and mobility impairments as well.

The first screen reader was created in 1986 at IBM for internal use by blind and low-vision staff members. But before then, there were other, clunkier speech synthesizing devices.

It started in 1779, when a Danish scientist created a device that mimicked the human vocal tract, though it could only make five vowel sounds. Then came the 1939 development of the VODER (Voice Operating Demonstrator). Between 1939 and 1986, other machines were developed and improved upon to synthesize speech in more languages. But many of these synthesizers were hard to understand; they might read only letter by letter and, eventually, word by word.

The IBM screen reader was limited in its capabilities and available only to employees. Still, it was a significant improvement in intelligibility over earlier voice-synthesizing software. As activism for equal access increased, Jim Thatcher, its developer at IBM, created a version of the IBM screen reader for Windows 95.

Around the same time, Ted Henter was developing what would become the well-known screen reader JAWS. Henter was an American motorcycle racer who lost his vision in a car accident. Afterward, he painstakingly learned to code using a speech synthesizer that could only read one character at a time; he would type and have a volunteer read the results back to him.

In 1987, Henter released JAWS (Job Access With Speech). JAWS was the first screen reader to offer braille access and let users customize their experience. JAWS and the IBM Screen Reader laid the groundwork for the screen-reading software that followed.
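
Today, the text-to-speech layer those pioneers had to build from scratch ships with every major operating system and browser. As a small illustration (not how any particular screen reader is implemented), here is a TypeScript sketch that uses the standard Web Speech Synthesis API to speak a line of text aloud:

```typescript
// Text-to-speech sketch using the browser's built-in SpeechSynthesis API.
// This is only the speech-output layer; a real screen reader also tracks
// keyboard focus, announces structure (headings, links, tables, buttons),
// and can route output to a braille display.
function speak(text: string, rate = 1.0): void {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = "en-US";
  utterance.rate = rate; // 0.1 (slow) to 10 (fast); experienced users often listen very fast
  window.speechSynthesis.speak(utterance);
}

speak("Heading level 1: Disability Pride Month: The Origins of Assistive Technology.");
```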

Note: A screen reader is how this author personally navigates the web. Read how I use my screen reader and why I think making websites accessible for screen reader users is important by clicking here.

Continuing The Progress

So much technology has been created or perfected over the last thirty or so years, and it makes life easier for disabled and able-bodied people alike. Dictation software built for people with limited mobility or cognitive impairments has also helped improve some of our favorite digital assistants, like Siri and Alexa. Screen readers and speech-synthesizing software lend a voice not only to our AI but also to features like your car's GPS. Progress in accessibility often neatly coincides with greater progress for everyone.

Best Practices for Your Digital Accessibility Program

You can run three types of accessibility testing on your site and app: automated testing, manual testing, and user testing. 
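
To make the first category concrete, here is a minimal sketch (in TypeScript, using the open-source axe-core engine via Playwright; the URL and test name are placeholders) of what an automated accessibility check can look like. Automated rules catch only a portion of real-world barriers, which is exactly why the manual and user testing discussed below still matter.

```typescript
// Automated accessibility scan sketch using Playwright and the axe-core engine.
// Assumes the @playwright/test and @axe-core/playwright packages are installed;
// the URL below is a placeholder. Automated rules only catch a subset of issues
// (missing alt text, low contrast, unlabeled form fields, and similar), which is
// why manual and user testing still matter.
import { test, expect } from "@playwright/test";
import AxeBuilder from "@axe-core/playwright";

test("home page has no detectable accessibility violations", async ({ page }) => {
  await page.goto("https://www.example.com"); // placeholder URL

  const results = await new AxeBuilder({ page })
    .withTags(["wcag2a", "wcag2aa"]) // limit the scan to WCAG 2.0 A/AA rules
    .analyze();

  // Log each violation so developers can see what needs fixing.
  for (const violation of results.violations) {
    console.log(`${violation.id}: ${violation.help} (${violation.nodes.length} nodes)`);
  }

  expect(results.violations).toEqual([]);
});
```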

While each type of testing has its place, I want to emphasize the importance of user testing with people from the disability community.

A developer might take a long time to learn to use assistive technology effectively. In contrast, a disabled user will already be familiar with the software and know how to use it most effectively.

User testing with people with disabilities is a powerful and effective way to improve the accessibility of your site and involve the community you are trying to help.

User testing with people with disabilities is one of the services for digital inclusion that UsableNet offers. Whether you are starting out or in the process of implementing or maintaining your long-term plan for digital inclusion, an experienced partner like UsableNet can help. Read more about UsableNet's services for testing with the disability community here, or click here to request a free consultation.