I wrote this Q&A to help me prepare for a TV interview about AI and disability. I tried to include concrete examples and steered clear of theory (not my usual approach!). The questions are the ones I imagined might come up; the answers are my attempt to challenge the assumptions behind them. The answers are disjointed because they're a collection of talking points for a conversation. They boil down quite a lot of background research, and if I get the time I'll add links to the sources. I'm posting them here in case they're helpful for anyone else who wants to challenge the disability-washing of AI. I'd be interested in any feedback and comments, or other examples I could add to the list, so feel free to drop me a line at resistingai@gmail.com.
I would like to thank Rick Burgess of the Greater Manchester Coalition of Disabled People, Louise Hickman and Alexa Hagerty from the Minderoo Centre for Technology and Democracy, and the artist and game creator Joseph Wilk for trying to clue me up about the technopolitics of disability. Any misunderstandings or clangers are mine alone.
Q. Doesn't AI have amazing potential to help disabled people in all sorts of ways?
- You just can't trust it. This is the AI that recommended adding glue to pizza sauce so the cheese doesn't slide off.
- The way it's made harms people and the planet.
- It's going to make things worse because it'll be used to get rid of services that people need.
Q. Wouldn't it be amazing to have glasses that give you a live audio description of the surroundings?
- If you'd seen all the videos I've seen of Teslas on Autopilot crashing into things, you'd think twice about trusting AI glasses.
- It doesn't 'see', it's guessing based on what it's been trained on, so anything unusual can throw it.
- It's very Black Mirror; the tech companies will control what you get to 'see' (remember what happened with Google Maps and ads).
- Some workers labelling training images for computer vision were told not to include homeless people.
- Those glasses would be a privacy-invasion nightmare for everyone else. And who owns all that data?
Q. Isn't it liberating to have an app rather than, for example, waiting to book a sign-language interpreter?
- But it means they're going to stop training interpreters; that's already happening.
- Interpreters don't just interpret words, they interpret systems and help both sides understand what's going on.
- You're going to be stuck with a glitchy app that also spies on you.
Q. But what about transcribing and captions; isn't this AI loads better than the old tools?
- Maybe. The real problem is the fantasy that it's good enough for serious uses.
- The Health Secretary, Wes Streeting, is trying to force GPs to use it for consultations.
- Research in hospitals shows the AI tools invent things that no-one said.
- Other researchers found errors in 8 out of 10 transcriptions.
- NHS England is advising doctors to get legal advice to protect themselves against malpractice claims arising from using these tools.
- Universities in the USA are firing transcribers and replacing them with crappy AI, which increases the disadvantage faced by deaf and hard-of-hearing students.
Q. Wouldn't it be great to have a tool that can summarise things for people who can't see, or have trouble with long text?
- The BBC have complained about Apple's AI mangling the news. Apple's summary said Luigi Mangione had shot himself.
- BBC research found significant mistakes in 51% of AI-generated news summaries.
- Research shows AI summaries often miss the most important bits. They're basically averaging out, but they sound plausible.
Q. Even if tools are glitchy, aren't they an improvement for disabled people?
- Why should disabled people put up with stuff that's glitchy?
- These tools are ableist because they're trained on all the prejudice on the internet, and that'll come out when they're used.
- Tests using LLMs to help young kids write stories found they make non-disabled white kids the heroes and minority and disabled kids the victims.
Q. Isn't it a good thing for disabled people to find uses for the latest tech, like AI?
- This isn't assistive tech, it's 'screw you' tech.
- The tech bros don't care about disabled people or anyone else who's vulnerable.
- Otherwise they wouldn't release tools that anyone could see would be used for porn and disinformation, and which make people dependent on them.
- They released them anyway, for their own selfish reasons: money and power.
- Sam Altman, the boss of OpenAI, said "young people don't really make life decisions without asking ChatGPT".
- Students aren't just using it to cheat on tests, they're asking what they should do about their relationship problems.
- People are already turning to chatbots for support, like the poor lad in the USA who killed himself because the bot encouraged him.
- The risk of thinking this is going to help disabled people is disability-washing some very bad stuff.
Q. Why do you say the way that AI is made harms people?
- It's like sweatshop labour, but with added psychological trauma.
- The tools absorb everything they're trained on, so someone needs to weed out the violence and sexual abuse.
- This is done by poorly paid workers, often in poorer countries, who are traumatised by it.
- It turns out that there's quite a bit of child labour as well.
Q. Why do you say AI is based on stealing?
- To make the AI work they need all the data - books, photos, videos, the lot.
- The business model wouldn't work if they had to pay people fairly for them, so they grab them without asking.
- That's what copyright is supposed to stop, but the UK government is moving to change the law to protect the AI companies.
Q. If AI is as bad as you say it is, how come governments and investors are putting so much into it?
- Telling lies about what AI can do saves our leaders from fixing the basics for everyone, but especially for disabled people.
- The government clearly isn't going to do things like tax the rich or the tech companies.
- So it needs a way to divert people's attention from how everything is breaking down.
- The tech bros have convinced the politicians that AI means growth.
- But if we want to fix education & healthcare we need more teachers & nurses and more decent buildings.
- AI is increasingly becoming a military technology. It's not really helping the rest of us much, but it's a boom time for AI weapons.
Q. You say AI is bad for the environment, but aren't there problems with any kind of digital tech?
- Yes, but AI is very bad news and it's growing fast.
- Musk's xAI datacentre in Memphis runs 34 methane gas turbines, pumping pollution into the surrounding Black neighbourhoods.
- When it comes to power cuts or water shortages in the future, it's going to be us versus the data centres.
- They're planning datacentres in Oxfordshire even though there are already plans for water rationing because of future shortages. Thanks, Thames Water!
- The GLA had to suspend some affordable housing developments in West London because data centres had already booked all the available electricity grid.
Q. Why do you call AI 'tech solutionism'; surely solutions are good?
- Tech solutionism is like wheelchairs that can climb stairs; most people can't afford them and they don't fix the basics.
- For example, they don't fix the gaps between trains and station platforms.
- Disabled people pay taxes but can't use public transport, so they're basically subsidising everyone else.
Q. Even if it's got problems now, surely AI will have advanced beyond that in a couple of years?
- AI is running into a wall; the new models are still crap, and it's never going to have common sense.
- One of the so-called godfathers of AI, Geoffrey Hinton, said back in 2016 that we should stop training radiologists because AI would soon be better at reading cancer scans.
- It turned out to be rubbish. AI scanning has very mixed results at best, so thank god people didn't stop training as radiologists.
- The same argument is being used now for nurses, teachers and all the other kinds of care workers we were clapping during Covid.
- AI fantasies are going to make things worse for disabled people and everyone else.
Q. Surely AI will save money and make services more efficient for disabled people?
- The Department for Science, Innovation and Technology is claiming it can save £45 billion with AI.
- This is a made-up number and total rubbish; some of the reports behind it asked ChatGPT itself for the figures.
- What will happen is like what Musk has done with DOGE in the USA: claiming you've saved money by breaking services, sacking workers and replacing it all with crap AI.
Q. Won't AI make the benefits system more effective?
- The same government pushing for AI everywhere is the one that talks about a war on benefits and is going to cut PIP (Personal Independence Payment).
- The Department for Work and Pensions is already using untrustworthy AI to classify people's medical conditions for Employment and Support Allowance.
- Much worse is on the way with the so-called Fraud, Error and Recovery Bill, under which AI will scan the bank accounts of everyone claiming any kind of benefit.
- Introducing AI to the benefits system is going to be like the Post Office Horizon IT scandal on steroids.
Q. Won't AI give disabled people more freedom?
- They want disabled people to have ChatGPT instead of having human rights.
- We don't need tech that tries to 'fix' people, we need to fix society so it supports people to have autonomy.
- At the end of the day, everyone depends on other people, even the tech bros, and any of us can become disabled through age, Long Covid or whatever.
- Disabled people are used to hacking tech to make it work for them, but AI is the least hackable.
Q. You're very negative about AI, what would you do instead?
- A lot of the money spent on AI should just be used for the basics.
- What people mainly want is decent, accessible transport, housing, healthcare and education, not sci-fi toys.
- Technology can help disabled people, but that tech development needs to be led by disabled people.
- It takes disabled people themselves to understand how and where situations are actually disabling.
- We need pan-impairment inclusion, not just some crappy apps built on toxic tech.
- We should all, as a society, have a say in what kind of tech is released instead of being used as guinea pigs by big companies.