This week I attended the last ICONS meetup of this academic year, which had as its speaker Léonie Watson. She shared five questions related to screen readers.
ICONS is an event organised by the design department of the Hogeschool van Amsterdam. They invite (web) design related speakers, as well as students and people working in the web industry, so that they can meet each other and learn from one another. Awesome! I took some notes during Léonie’s talk; please find them below.
Léonie Watson in Amsterdam
Some might think screen readers are only used by blind people. Léonie explained they have a broader audience: people on the autistic spectrum use them (screen readers provide a way to navigate information with less distraction), as do people with some forms of dyslexia (as listening may be easier for them than reading). In the future, many others may use audio to navigate information: think of digital assistants like Siri and Cortana, or voice interfaces in the car.
A screen reader, Léonie explained, ‘translates on-screen content into synthetic speech’. Some can also output braille, but the equipment for this can be costly. They are available on most platforms, including mobile ones. Some are free (NVDA, VoiceOver), others can be quite expensive (JAWS). They don’t read the whole page as if it were plain text (as they used to do in the old days); instead, they interpret the page structure and let their users benefit from shortcuts. For instance, a screen reader user could navigate through just the headings, or press the ‘read next paragraph’ shortcut.
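A hypothetical example of my own (not from the talk) of why this matters: heading-based navigation only works when the headings form a sensible outline, like this:

```html
<h1>Screen readers explained</h1>
<h2>Who uses them?</h2>
<h2>How do they work?</h2>
<h3>The accessibility tree</h3>
<!-- A screen reader user can jump straight from
     "Who uses them?" to "How do they work?" with
     a single heading-navigation shortcut. -->
```

If the same visual hierarchy were built with styled `div`s instead, those shortcuts would have nothing to navigate.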
Most of the interpretation for screen readers happens at the OS level. Things in the OS expose their role, name and state, and through platform-level accessibility APIs this information is relayed to the screen reader. There exist accessibility mappings between the OS and the browser. Note that browsers don’t always support all accessibility features: they sometimes support a new web platform feature without also providing accessibility support for it. A feature is accessibility supported when:
(…) it is usable by people who rely on assistive technology, without developers having to supplement with ARIA or other additional workarounds.
(source: HTML5 Accessibility, a website that tracks accessibility support in browsers)
In the browser, an accessibility tree is generated based on information in the DOM tree. This process basically means that the browser figures out what the role, name and state of things are, and puts them in a tree. Screen readers use platform APIs to access information in this tree, and then make all the shortcuts possible for their users.
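A sketch of my own (not from the talk) of what that role/name/state derivation looks like for a single element:

```html
<!-- From this markup alone, the browser can derive an
     accessibility tree node with:
       role:  button      (from the element itself)
       name:  "Save draft" (from its text content)
       state: disabled    (from the attribute) -->
<button disabled>Save draft</button>
```

No ARIA is needed here: the semantic element carries all three pieces of information by itself.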
A quick note on this: as a front-end developer, you can directly influence what information gets exposed by using a semantic HTML structure and getting your labelling right (see also: Accessibly labelling interactive elements). States are harder, but native HTML attributes are available for some of them (e.g. open), and ARIA attributes can polyfill the rest.
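A small sketch of my own (not from the talk) showing both approaches to exposing state:

```html
<!-- Native HTML: the open attribute exposes the
     expanded/collapsed state for free. -->
<details open>
  <summary>Shipping details</summary>
  <p>Orders usually ship within two days.</p>
</details>

<!-- A custom widget needs ARIA to expose the same state;
     the ids here are hypothetical, and scripting would be
     needed to keep aria-expanded in sync with the UI. -->
<button aria-expanded="true" aria-controls="shipping-panel">
  Shipping details
</button>
<div id="shipping-panel">
  <p>Orders usually ship within two days.</p>
</div>
```

The native version is less work and harder to get wrong, which is why semantic HTML is usually the better first choice.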
Then, the last question: why? This was mostly aimed at designers and developers making websites: why would they support screen reader users? Léonie gave plenty of reasons. First, it feels good: it’s fun to work on something others enjoy using (isn’t that why we add subtle animations, choose good-looking and nicely readable fonts, make our stuff load fast, etc.?). Secondly, it is a professional responsibility to ensure that what you make is usable. Thirdly, as a designer or developer you have a choice (as opposed to the users who rely on screen readers to access your product).
These five questions are certainly something to think about when you are making websites that are used by people who rely on screen readers. And that’s very likely to be every website or web application.
It was a pleasure, as always, to hear Léonie talk about using screen readers and share her knowledge. Thanks to the HvA for setting up this event!