Nearly 20 percent of the world’s population is considered neurodivergent and roughly 1.3 billion people experience disability, according to the World Health Organization. In the future, even more people will likely identify as disabled because of conditions influenced by climate change, such as Lyme disease and asthma, and chronic conditions such as long COVID.
This group is not a monolith. Each of these individuals experiences their disability and the world differently, often having multiple intersecting disabilities to contend with in a world that isn’t designed to accommodate them. One of the major challenges of accessibility efforts is creating solutions that can meet the uniquely complex needs of each individual.
“Traditional education often overlooks the specific, diverse needs of neurodivergent students, focusing more on surface-level issues or trying to fit them into a standard mold,” Elsa, a design student at the London Interdisciplinary School who also identifies as neurodivergent, says. “We need solutions that offer individualized support, recognize unique strengths, and adapt to each student’s evolving needs in real time.”
Without this adaptability, technological solutions for disabilities are often elegant and advanced yet impractical. For example, disability advocates have been critical of exoskeletons designed to help people who use wheelchairs because they’re experimental prototypes that may serve only a small group of people, may never become commercially viable, and don’t meet most users’ current needs. But by harnessing the learning capabilities of artificial intelligence, designers and engineers are creating dynamic solutions to enhance the most essential areas of daily life. And these improvements could help everyone.
“The magic of AI is its ability to be context-aware, making it much smarter and more accessible than previous technologies,” says Dylan Fox, director of operations at XR Access, a research consortium based at Cornell Tech committed to making virtual, augmented, and mixed reality (XR) accessible to people with disabilities. “I’m really excited about AI’s ability to bridge cognitive processing gaps.”
Elsa, along with designer Sonny Kong and software engineer Oliver Fogelin, created an AI-powered educational tool called Pathia that tailors lessons to a student’s unique communication style. The program learns about the student from every interaction, fine-tuning lessons into personalized formats such as worksheets, visuals, or games, and providing insights to teachers and parents about the student’s needs.
Pathia uses AI to process language in chunks rather than word by word. This approach, called gestalt language processing, helps the system understand context better. Pathia then adjusts its teaching style based on how each student responds to these language chunks, taking into account their emotional reactions and sensory preferences.
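The chunk-first idea can be illustrated with a toy sketch. Everything here, from the chunk lexicon to the greedy matcher, is a hypothetical stand-in for what a system like Pathia might learn; it is not Pathia’s actual implementation.

```python
import string

# Hypothetical lexicon of multiword "gestalt" chunks and what they signal.
CHUNKS = {
    "whats that": "question/curiosity",
    "i want": "request",
    "all done": "completion",
    "lets go": "transition",
}

def segment_into_chunks(utterance: str) -> list[tuple[str, str]]:
    """Greedily match known multiword chunks before falling back to single words."""
    cleaned = utterance.lower().translate(str.maketrans("", "", string.punctuation))
    words = cleaned.split()
    result, i = [], 0
    while i < len(words):
        # Try a two-word chunk first, then fall back to a lone word.
        pair = " ".join(words[i:i + 2])
        if pair in CHUNKS:
            result.append((pair, CHUNKS[pair]))
            i += 2
        else:
            result.append((words[i], "word"))
            i += 1
    return result

print(segment_into_chunks("What's that? I want the blue one"))
```

The point of the sketch is the order of operations: the system looks for meaningful multiword units before decomposing language into individual words, which is how it preserves context.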
Elsa’s brother, Ryan, who has autism and asked to go by a pseudonym for this article, expressed enthusiasm for Pathia, noting that it could explain concepts in ways his teachers couldn’t. He believes that Pathia’s ability to learn and adapt to his specific needs could lead to a better learning experience.
“If I had a tool that helped me with communication through academia, to help me manage my frustration, and Ryan had one where he was helped to excel and tap into the parts of him that other humans may not identify with, that would have been life-changing for the both of us,” says Elsa.
Communication
Dimitri Kanevsky lost his hearing at the age of 1. Since then, he has become a research scientist and speech-recognition technology expert at Google, employing various techniques to communicate effectively in both his personal and professional life. These methods include lipreading and Communication Access Realtime Translation (CART), a service mandated by the Americans With Disabilities Act in which trained professionals transcribe speech to text in real time.
Both techniques, however, have significant limitations. Only about 30 to 40 percent of speech sounds are visible through lipreading, and during the coronavirus pandemic, masks made lipreading nearly impossible. CART, while helpful, can be expensive, isn’t always readily available, and sometimes struggles with Kanevsky’s accent and unique speech patterns, limiting how well he could use the communication technology available at the time.
“So much of the lived experience of being a person in the world involves interaction and communication. Suppose that is one of the greatest challenges for somebody [with a disability],” says Dr. Keivan Stassun, director of the Frist Center for Autism and Innovation at Vanderbilt University.
At Google, Kanevsky and his team, in collaboration with Gallaudet University—a leading institution for deaf and hard-of-hearing students—developed Live Transcribe, a real-time speech-to-text application available in more than 80 languages. Live Transcribe integrates the entire ecosystem of sounds necessary for seamless communication. It not only converts spoken words into text but also detects and notifies users about surrounding sounds, such as doorbells, a baby crying, or a dog barking, and has a haptic sensor that vibrates when someone calls the user’s name.
“Speech isn’t just about words. It’s filled with noise, accents, and varying speech patterns, making it tricky to handle. AI models like transformers are particularly good at tackling these challenges because they can process long sequences, handle noisy environments, and learn complex patterns in speech,” Kanevsky says. “By combining speech models with advanced language processing, AI can understand context, fix misheard words, and produce text that makes sense, which is essential when the meaning depends on how words are spoken.”
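The “fix misheard words” step Kanevsky describes can be sketched in miniature. Real systems score candidates with transformer language models; here a tiny hand-built bigram table stands in for one, and both the table and the word pairs are illustrative assumptions.

```python
# Hypothetical bigram scores: how plausible word B is after word A.
BIGRAM_SCORE = {
    ("their", "house"): 0.9,
    ("there", "house"): 0.1,
    ("over", "there"): 0.9,
    ("over", "their"): 0.1,
}

def pick_word(previous: str, candidates: list[str]) -> str:
    """Choose the acoustically confusable candidate that best fits the context."""
    return max(candidates, key=lambda w: BIGRAM_SCORE.get((previous, w), 0.0))

# The recognizer hears something between "their" and "there"; context decides.
print(pick_word("over", ["their", "there"]))
print(pick_word("their", ["house", "mouse"]))
```

The design choice worth noticing is the division of labor: the acoustic model proposes candidates, and the language model picks among them, which is exactly where context-awareness pays off.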
AI’s flexibility and pretraining make it highly effective for modern speech-recognition tasks, pushing the boundaries of what’s possible in terms of accuracy and application. It has the potential to usher in a new era of communication technology, which would be helpful for everyone, but especially for people who are deaf or hard of hearing, have speech impairments, are neurodivergent, or have conditions such as amyotrophic lateral sclerosis and cerebral palsy.
PopSignAI is another example of AI being used for communication. Developed by the Georgia Institute of Technology, the Center on Access Technology at Rochester Institute of Technology’s National Technical Institute for the Deaf, and Google, the app makes learning American Sign Language (ASL) easier and more accessible. Powered by AI-enabled sign-language recognition, it uses an educational game to give people real-time feedback on their hand shapes and signing accuracy, acting as an additional tool to help hearing parents connect with their deaf children on a deeper level.
“Ninety-five percent of deaf children are born to hearing parents who often don’t know sign language. By helping hearing parents communicate more with their children, PopSignAI empowers these children with language, transforming their world through play,” says Sam Sepah, the ML/AI research program manager at Google.
When Kanevsky used Live Transcribe to give a presentation at an international math conference in Poland in 2023, it was a landmark moment for him. “It was an amazing experience to be able to freely communicate with other mathematicians and deliver a presentation for the first time in my life,” says Kanevsky. “I’d been waiting for something like this my entire life.”
Independence and mobility
Since autism spectrum disorder was officially described in the DSM-III in 1980, there has been a significant amount of research on, and intervention for, children with autism. But to live independently, adults with autism need jobs. According to a Deloitte study, approximately 85 percent of people on the autism spectrum in the United States are unemployed, compared with roughly 4 percent of the overall population in the country.
“We talked to autistic adults and asked, ‘What is the thing that you most wish you had access to?’ Overwhelmingly, the number one answer is a job,” says Stassun. “Okay. Next question, ‘What is the thing that you currently have or don’t have that is limiting your ability to access employment?’ Most people said, ‘I don’t know how to drive a car.’”
In the United States, especially in cities where public transport is limited or inaccessible, driving is often the only way to commute to work. So the Frist Center created a virtual-reality (VR) driving simulator that uses artificial intelligence to positively reinforce attention to the signs, hazards, and interactions the learner should notice and anticipate, all in a safe virtual environment. Physiological sensors monitor the learner’s heart rate and eye movements, generating data that the trained AI model can use to interpret the individual’s emotional state and adapt the simulator accordingly.
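The adaptive loop described above can be sketched as a simple sense-estimate-adjust cycle. The stress formula, the 70 bpm baseline, and the thresholds below are all illustrative assumptions, not the Frist Center simulator’s actual model.

```python
def stress_score(heart_rate_bpm: float, gaze_fixation_ratio: float) -> float:
    """Combine heart rate and how scattered the learner's gaze is (0..1) into a stress estimate."""
    hr_component = max(0.0, (heart_rate_bpm - 70) / 50)  # 70 bpm assumed resting baseline
    return min(1.0, 0.6 * hr_component + 0.4 * (1 - gaze_fixation_ratio))

def adjust_difficulty(current_level: int, score: float) -> int:
    """Ease off when the learner is stressed; add challenge when they're calm."""
    if score > 0.7:
        return max(1, current_level - 1)  # e.g., reduce traffic, slow hazards
    if score < 0.3:
        return current_level + 1          # e.g., introduce new situations
    return current_level

# Elevated heart rate and scattered gaze: the simulator backs off.
print(adjust_difficulty(3, stress_score(120, 0.2)))
# Calm and focused: the simulator adds challenge.
print(adjust_difficulty(3, stress_score(72, 0.95)))
```

In a real system the rule-based thresholds would be replaced by a model trained on each individual’s physiological data, which is what makes the simulator “appropriately adaptive” per learner.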
“It’s the revolution with AI that is enabling us to create these realistic environments that are real-time and appropriately adaptive and responsive to each individual and their experience,” says Stassun.
For some physical disabilities, prosthetics have long been used to accommodate losses in functionality. But despite advancements in robotics and other high-tech prosthetics, they are not a panacea. Researchers from Austria found that 44 percent of arm amputees who choose to use prosthetics are unsatisfied with their devices because they’re uncomfortable, painful, or hard to operate. A new generation of designers and neuroscientists is rethinking the very concept of prosthetics, using developments in AI, robotics, and increasingly accurate measurement of the nerve signals in our muscles to create augmentation technology that adds or extends functionality to the body.
“It doesn’t matter what your body started like. We’re interested in extending the biological body with technology,” says Dani Clode, head designer at the Plasticity Lab at Cambridge University.
Clode is developing wearable technology that is assistive and adaptive to the future body. One of her designs, the Third Thumb, is a 3D-printed thumb extension for the hand, controlled by motion sensors placed under the toes. For the Third Thumb and other augmentation prosthetics, Clode plans to use artificial intelligence to anticipate the user’s needs, “creating a symbiotic relationship between the user and the technology.”
Tamar Makin, a neuroscience professor and head of the Plasticity Lab, emphasizes the benefits of AI in enhancing motor control and communication. She likens the interaction between a user’s brain and AI technology to how an octopus controls its tentacles: There’s a central command, but also localized decision-making.
“We’re working on this collaboration between the big brain, which is the user’s brain, and the small brain, which will be the AI that we’re endowing to the technologies so that they can work relatively autonomously,” she says.
AI and machine learning algorithms can quickly process the body’s neurological signals, learning the user’s intentions and analyzing data from electromyography (EMG) sensors to make prosthetics that rapidly adapt to the individual.
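A minimal sketch of that idea: decode an intended gesture from EMG features by comparing them to patterns learned during a calibration session. The feature vectors, gesture labels, and nearest-centroid classifier are all illustrative assumptions; real prosthetic controllers use far richer signals and models.

```python
# Calibration data: (per-channel EMG amplitudes, intended gesture) pairs
# recorded from a hypothetical user.
CALIBRATION = [
    ((0.9, 0.1, 0.2), "grip"),
    ((0.8, 0.2, 0.1), "grip"),
    ((0.1, 0.9, 0.3), "pinch"),
    ((0.2, 0.8, 0.2), "pinch"),
    ((0.1, 0.2, 0.9), "release"),
    ((0.2, 0.1, 0.8), "release"),
]

def centroids(samples):
    """Average the feature vectors recorded for each gesture."""
    sums, counts = {}, {}
    for vec, label in samples:
        sums[label] = [a + b for a, b in zip(sums.get(label, [0.0] * len(vec)), vec)]
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def decode(vec, model):
    """Pick the gesture whose learned pattern is closest to the incoming signal."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(vec, centroid))
    return min(model, key=lambda label: dist(model[label]))

model = centroids(CALIBRATION)
print(decode((0.85, 0.15, 0.15), model))
```

Because the model is built from the individual’s own calibration data, re-running calibration as signals drift (after a stroke, for instance, when signals are faint) is exactly the kind of per-user adaptation Makin describes.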
“If you just have a broken limb, maybe you don’t need AI. But if you have a stroke, the signals from your body are already very faint. We can use AI to better understand what this signal means in different contexts. This can help us complete the thought for the user by creating a seamless movement,” says Makin. “The opportunities in this domain are limited by our imagination, not by technology.”
Cognitive load
Generative AI can lighten people’s cognitive load by making the web and daily life interactions more accessible. “A lot of disabled folks have trouble writing official letters like insurance appeals and complaints. They’ve been using generative AI tools to write them in a way that will be read and respected by the person who receives it,” says Ashley Shew, an associate professor in the Department of Science, Technology, and Society at Virginia Tech. “And if those appeals help get the benefits, equipment, or maintenance they need, then there’s a real everyday life outcome to LLMs that their designers probably never anticipated.”
For a person who is blind or has low vision, navigating unfamiliar environments requires learning and memorizing spaces and filling in any gaps in information without a visual reference. GPS technology has been an important aid, but even the most advanced versions are often accurate only to within about 30 feet. For a person who is blind or has low vision, standing 30 feet away from a bus stop could mean missing the bus. Some people may choose to practice orientation and mobility skills in the controlled environment offered by a VR headset, but even those devices have their limitations.
A consortium of researchers at Cornell is working on an AI-powered sighted guide that would help people who are blind or have low vision to navigate and understand VR experiences—similar to a physical sighted guide who assists people who are blind or have low vision.
“That idea of being able to grab onto somebody’s elbow and have them lead you around is not built into any VR simulation, because no one thinks it is a user need,” says Fox of XR Access. “We’re thinking about how to program an AI to help guide you around a VR space.”
Challenges
In his essay “Keeping the Knives Sharp,” poet and disability activist Jim Ferris asks, “What would it mean to live in a world that understood asymmetry as a prime characteristic?” Ferris makes a point about the othering of people with disabilities and their absence in the design process, datasets, and lawmaking—significant challenges in the effort to use AI to enhance lives.
Tools, features, and design elements for people with disabilities benefit broader populations as well. One of the mandates of the landmark Americans With Disabilities Act, which came into effect in July 1990, was that sidewalks must include curb cuts, the ramps that enable people using wheelchairs to more easily travel between the sidewalk and the street. But it became clear that the curb cuts made mobility easier not just for their intended audience, but for everyone, including parents with strollers, people pushing heavy carts or wheeling luggage, and runners and skateboarders. Similarly, captions for auditory media help everyone, not just those who are deaf or hard-of-hearing; studies have found that nearly 70 percent of Gen Z uses closed captions 100 percent of the time for concentration or to understand different accents.
“People with disabilities are often first adopters of new technology because their need is higher,” says Kanevsky. “At Google, we believe that if we start with one use case, we can go on to help billions of people.”
This means that people with disabilities need to be involved in the human-centered design process from the very beginning. Doing so would avoid creating “disability dongles,” a phrase coined by disability advocate Liz Jackson to describe “well-intended, elegant, yet useless solutions to problems we never knew we had.” Fox says, “You want to have disabled people in at every step of the process, from brainstorming to prototyping to testing.”
While AI holds promise for enhancing accessibility, disability advocates also caution against allowing the allure of promise to distract us from bigger issues related to pervasive bias. The existing literature about disabilities used to train AI models may be outdated or incorrect, and certain social groups may be disproportionately represented in medical data. For example, compared to immigrants and people of color, wealthy white families in the United States are more likely to report childhood autism due to better medical access.
“If all of our sources for disabled data are from nondisabled people, the stories are going to be wrong,” says Shew.
Despite these challenges, technology and disability experts are optimistic about the revolutionary changes AI can bring about by meeting users where they are.
“The AI revolution has occurred. The world will not go back to its pre-AI state. As much as we need to attend to the appropriate concerns and fears, we would be seriously undercutting our ability to develop real solutions at scale and have a major impact if we didn’t incorporate the power and the potential that AI brings to any technological solution that we envision,” says Stassun.