Researchers Study AI Accessibility for People with Disabilities

Researchers at the University of Washington recently conducted a three-month autoethnographic study to test the utility of AI tools for accessibility. They found that although the tools were sometimes helpful, they ran into significant problems when generating images, writing Slack messages, and trying to improve accessibility.

‘When technology changes rapidly, there’s always a risk that disabled people get left behind. I’m a really strong believer in the value of first-person accounts to help us understand things. Because our group had a large number of folks who could experience AI as disabled people and see what worked and what didn’t, we thought we had a unique opportunity to tell a story and learn about this,’ said Jennifer Mankoff, senior author and professor in the Paul G. Allen School of Computer Science & Engineering.

The team presented their findings in seven vignettes, combining their experiences to preserve anonymity. For example, ‘Mia’, who has intermittent brain fog, used ChatPDF.com to help with work. Although it was occasionally accurate, the tool often gave ‘completely incorrect answers’. In one case, it was both inaccurate and ableist, rewording the paper’s argument to suggest that researchers would be better off talking to caregivers rather than to chronically ill people. ‘Mia’ also used chatbots to create references for a paper, and while the AI models made some mistakes, they still proved useful.

‘Using AI for this task still required work, but it lessened the cognitive load. By switching from a “generation” task to a “verification” task, I was able to avoid some of the accessibility issues I was facing,’ said Mankoff, who has spoken publicly about having Lyme disease.

‘I was surprised at just how dramatically the results and outcomes varied, depending on the task,’ said Kate Glazko, lead author and UW doctoral student in the Allen School. ‘In some cases, such as creating a picture of people with disabilities looking happy, even with specific prompting (“can you make it this way?”), the results didn’t achieve what the authors wanted.’

The team noted that more research is needed to develop solutions to the problems the study revealed. One such problem is creating ways for people with disabilities to validate the products of AI tools, because the AI-generated result, the source document, or both are often inaccessible to them.

Mankoff has plans to study and document the ableism and inaccessibility present in AI-generated content.

‘Whenever software engineering practices change, there is a risk that apps and websites become less accessible if good defaults are not in place. For example, if AI-generated code were accessible by default, this could help developers to learn about and improve the accessibility of their apps and websites,’ said Glazko.
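
As a concrete illustration of the ‘accessible by default’ idea in Glazko’s example, here is a minimal, hypothetical TypeScript sketch; the function names and shapes are this article’s illustration, not code from the study. A generator that omits alternative text leaves screen-reader users with nothing, while an accessible-by-default version makes the description a required input.

    // Hypothetical illustration; function names are assumptions, not from the study.

    // A generator without accessibility defaults might emit an image
    // with no alternative text, leaving screen readers nothing to announce:
    function generateImage(src: string): HTMLImageElement {
      const img = document.createElement("img");
      img.src = src;
      return img;
    }

    // An accessible-by-default generator makes the description a required
    // parameter, so omitting it is a compile-time error rather than a
    // silent accessibility gap (alt="" can mark purely decorative images):
    function generateAccessibleImage(src: string, alt: string): HTMLImageElement {
      const img = document.createElement("img");
      img.src = src;
      img.alt = alt; // announced by screen readers in place of the image
      return img;
    }

In the second version, a forgotten description fails fast during development, which is one way good defaults in generated code could, as Glazko suggests, help developers learn about and improve accessibility.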

By Marvellous Iwendi.

Source: UW News