Note: This content, by Anne Gibson, was originally published at the Pastry Box Project, under a Creative Commons (CC BY-NC-ND 4.0) license. I am reposting it here so that it might remain accessible to the wider web at large.
A is blind, and has been since birth. He’s always used a screen reader, and always used a computer. He’s a programmer, and he’s better prepared to use the web than most of the others on this list.
B fell down a hill while running to close his car windows in the rain, and fractured multiple fingers. He’s trying to surf the web with his left hand and the keyboard.
C has a blood cancer. She’s been on chemo for a few months and, despite being an MD, is finding it harder and harder to remember things, read, or have a conversation. It’s called chemo brain. She’s frustrated because she’s becoming more and more reliant on her smart phone for taking notes and keeping track of things at the same time that it’s getting harder and harder for her to use.
D is color blind. Most websites think of him, but most people making PowerPoint presentations or charts and graphs at work do not.
E has Cystic Fibrosis, which causes him to spend two to three hours a day wrapped in respiratory therapy equipment that vibrates his chest and makes him cough. As an extension, it makes his arms and legs shake, so he sometimes prefers to use the keyboard or wait to do tasks that require a steady touch with a mouse. He also prefers his tablet over his laptop because he can take it anywhere more conveniently, and it’s easier to clean germs off of.
F has been a programmer since junior high. She just had surgery for gamer’s thumb in her non-dominant hand, and will have it in her dominant hand in a few weeks. She’s not sure yet how it will affect her typing or using a touchpad on her laptop.
G was diagnosed with dyslexia at an early age. Because of his early and ongoing treatment, most people don’t know how much work it takes for him to read. He prefers books to the Internet, because books tend to have better text and spacing for reading.
H is a fluent English speaker but hasn’t been in America long. She’s frequently tripped up by American cultural idioms and phrases. She needs websites to be simple and readable, even when the concept is complex.
I has epilepsy, which is sometimes triggered by stark contrasts in colors, or bright colors (not just flashing lights). I has to be careful when visiting brightly-colored pages or pages aimed at younger people.
J doesn’t know that he’s developed an astigmatism in his right eye. He does know that by the end of the day he has a lot of trouble reading the screen, so he zooms in the web browser to 150% after 7pm.
K served in the coast guard in the 60s on a lightship in the North Atlantic. Like many lightship sailors, he lost much of his hearing in one ear. He turns his head toward the sound on his computer, but that tends to make seeing the screen at the same time harder.
L has lazy-eye. Her brain ignores a lot of the signal she gets from the bad eye. She can see just fine, except for visual effects that require depth perception such as 3-D movies.
M can’t consistently tell her left from her right. Neither can 15% of adults, according to some reports. Directions on the web that tell her to go to the top left corner of the screen don’t harm her, they just momentarily make her feel stupid.
N has poor hearing in both ears, and hearing aids. Functionally, she’s deaf. When she’s home by herself she sometimes turns the sound all the way up on her computer speakers so she can hear videos and audio recordings on the web, but most of the time she just skips them.
O has age-related macular degeneration. It’s a lot like having the center of everything she looks at removed. She can see, but her ability to function is impacted. She uses magnifiers and screen readers to try to compensate.
P has Multiple Sclerosis, which affects both her vision and her ability to control a mouse. She often gets tingling in her hands that makes using a standard computer mouse for a long period of time painful and difficult.
Q is ninety-nine. You name the body part, and it doesn’t work as well as it used to.
R was struck by a car crossing a busy street. It’s been six months since the accident, and his doctors think his current headaches, cognitive issues, and sensitivity to sound are post-concussion syndrome, or possibly something worse. He needs simplicity in design to understand what he’s reading.
S has Raynaud’s Disease, where in times of high stress, repetitive motion, or cold temperatures her hands and feet go extremely cold, numb, and sometimes turn blue. She tries to stay warm at her office desk but even in August has been known to drink tea to keep warm, or wear gloves.
T has a learning disability that causes problems with her reading comprehension. She does better when sentences are short, terms are simple, or she can listen to an article or email instead of reading it.
U was born premature 38 years ago — so premature that her vision was permanently affected. She has low vision in one eye and none in the other. She tends to hold small screens and books close to her face, and lean in to her computer screen.
V is sleep-deprived. She gets about five hours of bad sleep a night, has high blood pressure, and her doctor wants to test her for sleep apnea. She doesn’t want to go to the test because they might “put her on a machine” so instead she muddles through her workday thinking poorly and having trouble concentrating on her work.
W had a stroke in his early forties. Now he’s re-learning everything from using his primary arm to reading again.
X just had her cancerous thyroid removed. She’s about to be put on radioactive iodine, so right now she’s on a strict diet, has extremely low energy, and a lot of trouble concentrating. She likes things broken up into very short steps so she can’t lose her place.
Y was in a car accident that left her with vertigo so severe that for a few weeks she couldn’t get out of bed. The symptoms have lessened significantly now, but that new parallax scrolling craze makes her nauseous to the point that she shuts scripting off on her computer.
Z doesn’t have what you would consider a disability. He has twins under the age of one. He’s a stay-at-home dad who has a grabby child in one arm and if he’s lucky one or two fingers free on the other hand to navigate his iPad or turn Siri on.
=====
This alphabet soup of accessibility is not a collection of personas. These are friends and family I love. Sometimes I’m describing a group. (One can only describe chemo brain so many times.) Some people are more than one letter. (Yay genetic lottery.) Some represent stages people were in 10 years ago and some stages we know they will hit — we just don’t know when.
Robin Christopherson (@usa2day) points out that many of us are only temporarily able-bodied. I’ve seen this to be true. At any given moment, we could be juggling multiple tasks that take an eye or an ear or a finger away. We could be exhausted or sick or stressed. Our need for an accessible web might last a minute, an hour, a day, or the rest of our lives. We never know.
We never know who. We never know when.
We just know that when it’s our turn to be one of the twenty-six, we will want the web to work. So today, we need to make simple, readable, effective content. Today, we make sure all our auditory content has a transcript, or makes sense without one. Today, we need to make our shopping carts and logins and checkouts friendly to everyone. Today, we need to design with one thought to the color blind, one thought to the photosensitive epileptic, and one thought to those who will magnify our screens. Today we need to write semantic HTML and make pages that can be navigated by voice, touch, mouse, keyboard, and stylus.
Tomorrow, it’s a new alphabet.
You know it's a good sign when the first thing I do after finishing an article is double-check whether the whole site is some sort of AI-generated spoof. The answer on this one was closer than you might like, but I do think it's genuine.
Jakob Nielsen, UX expert, has apparently gone and swallowed the AI hype by unhinging his jaw, if the overall subjects of his Substack are to be believed. And that's fine, people can have hobbies, but the man's opinions are now coming after one of my passions, accessibility, and that cannot stand.
Very broadly, Nielsen says that digital accessibility is a failure, and we should just wait for AI to solve everything.
This gif pretty much sums up my thoughts after a first, second and third re-read.
I got mad at literally the first actual sentence:
Accessibility has failed as a way to make computers usable for disabled users.
Nielsen's rubric is an undefined "high productivity when performing tasks" and whether the design is "pleasant" or "enjoyable" to use. He then states, without any evidence whatsoever, that the accessibility movement has been a failure.
Accessibility has not failed disabled users; it has enabled tens of millions of people to access content, services, and applications they otherwise could not. To say it has failed is not even making the perfect the enemy of the good; it's ignoring all progress whatsoever.
I will be the first to stand in line to shout that we should be doing better; I am all for interfaces and technologies that help make content more accessible to more people. But this way of thinking skips over the array of accessibility technologies and innovations already developed that have made computers easier, faster, and more pleasant to use.
For a very easy example, look at audio description for video. Visual content that would otherwise be completely inaccessible to someone with visual impairments can now be understood through the presentation of the same information in a different medium.
Or what about those with audio processing differences? They can use a similar technology (subtitles) to have the words being spoken aloud displayed on the video, so they can more easily follow along.
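Both of those have hooks in plain HTML already. Here's a rough sketch using the <track> element (the file names are invented for illustration); browser support for descriptions tracks is still uneven, so in practice the description is often delivered as a separate narrated audio track:

<video controls>
  <source src="intro.mp4" type="video/mp4">
  <!-- Captions: the spoken dialogue and relevant sounds, rendered as on-screen text -->
  <track kind="captions" src="intro-captions.vtt" srclang="en" label="English captions">
  <!-- Descriptions: text describing the visual content, which assistive tech can read aloud -->
  <track kind="descriptions" src="intro-descriptions.vtt" srclang="en" label="Audio descriptions">
</video>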
There are literally hundreds, if not thousands, of such ideas (small and large) that already exist and are making digital interfaces more accessible. Accessibility is by no means perfect, but it has succeeded already for untold millions of users.
The excuse
Nielsen tells us there are two reasons accessibility has failed: It's expensive, and it's doomed to create a substandard user experience. We'll just tackle the first part for now, as the second part is basically just a strawman to set up his AI evangelism.
Accessibility is too expensive for most companies to be able to afford everything that’s needed with the current, clumsy implementation.
This line of reasoning is absolute nonsense. For starters, it assumes that accessibility is something separate from the actual product or design itself. It's sort of like saying building a nav menu is too expensive for a company to afford: it's a feature of the product. If you don't have it, you don't have a product.
Now, it is true that remediating accessibility issues in existing products can be expensive, but the problem there is not the expense or difficulty of making accessible products; it's that accessibility wasn't baked into the design before you started.
It's much more expensive to retrofit a building for earthquake safety after it's built, but we still require that skyscrapers built in California not wiggle too much. And if the builders complain about the expense, the proper response is, "Then don't build it."
If you take an accessible-first approach (much like mobile-first design), your costs are not appreciably larger than if you ignored it outright. And considering it's a legal requirement for almost any public-facing entity in the US, Canada, or the EU, it is quite literally the cost of doing business.
A detour on alt text
Nielsen illustrates his point with a stylized "Make It Easy" poster, and writes this about its alt text:
As an aside, the above image is a good example of the difference between the usability approach and the accessibility approach to supporting disabled users. Many accessibility advocates would insist on an ALT text for the image, saying something like: “A stylized graphic with a bear in the center wearing a ranger hat. Above the bear, in large, rugged lettering, is the phrase "MAKE IT EASY." The background depicts a forest with several pine trees and a textured, vintage-looking sky. The artwork has a retro feel, reminiscent of mid-century national park posters, and uses a limited color palette consisting of shades of green, brown, orange, and white.” (This is the text I got from ChatGPT when I asked it to write an ALT text for this image.)
On the other hand, I don’t want to slow down a blind user with a screen reader blabbering through that word salad. Yes, I could — and should — edit ChatGPT’s ALT text to be shorter, but even after editing, a description of the appearance of an illustration won’t be useful for task performance. I prefer to stick with the caption that says I made a poster with the UX slogan “Keep It Simple.”
The point of alt text is to provide a written description of visual indicators. It does NOT require you to describe in painstaking detail all of the visual information of the image in question. It DOES require you to convey the same idea or feeling you were getting across with the image.
If, in the above case, all that is required is the slogan, then you should not include the image on the page. You are explicitly saying that it is unimportant. My version of the alt text would be, "A stylized woodcut of a bear in a ranger hat evoking National Park posters sits over top of text reading 'Make it easy.'"
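In markup terms, a rough sketch of the difference (the file name is invented for illustration):

<!-- Intent-focused alt text: conveys what the image is for, not every visual detail -->
<img src="make-it-easy-poster.png"
     alt="A stylized woodcut of a bear in a ranger hat evoking National Park posters sits over top of text reading 'Make it easy.'">

<!-- If the image truly added nothing beyond the visible caption, the honest choice would be alt="" so screen readers skip it -->
<img src="make-it-easy-poster.png" alt="">

Which of those is right is a judgment call about why the image is on the page in the first place.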
Sorry your AI sucks at generating alt text. Maybe you shouldn't rely on it for accessibility, because true accessibility requires ascertaining intent and including context?
The easy fix
Lolz, no.
The "solution" Nielsen proposes should be no surprise: Just let AI do everything! Literally, in this case, he means "have the AI generate an entire user experience every time a user accesses your app," an ability he thinks is no more than five years away. You know, just like how for the past 8 years full level 5 automated driving is no more than 2-3 years away.
Basically, the AI is given full access to your "data and features" and then cobbles together an interface for you. You as the designer get to choose "the rules and heuristics" the AI will apply, but other than that you're out of luck.
This, to be frank, sounds terrible? The reason we have designers is to present information in a coherent and logical flow with a presentation that's pleasing to the eye.
The first step is that the AI will be ... inferring? Guessing? Prompting you with a multiple choice quiz? Reading a preset list of disabilities that will be available to every "website" you visit?
It will then take that magic and somehow customize the layout to benefit you. Oddly, the two biggest issues Nielsen writes about are font sizes and reading level. The first is already controllable in basically every text-based context (web, phone, computer), and the second requires corporations to take it on faith that the AI can rewrite their content completely while maintaining any and all style and legal requirements. Not what I'd bet my company on, but sure.
But my biggest complaint about all of this is that it fails at the very thing Nielsen is claiming to solve: it's almost certainly going to be a "substandard user experience!" It won't be cohesive, because there will have been literally no thought put into how it's presented to me. We as a collective internet society got fed up with social media filter bubbles after about 5 years of prolonged use, and now everything I interact with is going to try to be intensely personalized?
Note how we just flat-out ignore any privacy concerns. I'm sure AI will fix it!
I really don't hate AI
AI is moderately useful in some things, in specific cases, where humans can check the quality of its work. As I've noted previously, we have yet to find a single domain where AI hits 100% of its quality markers.
Nobody's managed to push past that last 10% in any domain. It always requires a human touch to get it "right."
Maybe AI really will solve all of society's ills in one fell swoop. But instead of trying to pivot our entire society around that one (unlikely) possibility, how about we actually work to make things better now?
"I always love quoting myself." - Kait
OK, so it's not exactly "new" anymore, but this is the accessibility talk I gave at Longhorn PHP in Nov. 2023. And let's be honest, it's still new to 99% of you. My favorite piece of feedback was, "I know it's about 'updates,' but you could have provided an overview of the most common accessibility practices." Bruh, it's a 45-minute talk, not a 4-hour workshop.