AI chatbots are more likely to recommend the death penalty when a person writes in African American English (AAE) compared to standardised American English, according to new research.
AI was also more likely to match AAE speakers with less prestigious jobs. African American English is generally spoken by black Americans and Canadians.
The paper, which has not yet been peer reviewed, studied covert racism in AI by looking at how the models responded to different dialects of English.
Most research into racism in AI has focused on overt racism, such as how an AI chatbot responds to the word ‘black’.
“African American English as a dialect triggers racism in language models that is more negative than any human stereotypes about African Americans ever experimentally reported,” said Valentin Hofmann, one of the paper’s authors, to Sky News.
“When you overtly ask it, ‘What do you think about African Americans?’, it would give relatively positive attributes like ‘intelligent’, ‘enthusiastic’ and so on.
“But when you look at the associations these language models have with dialects or with African American English, then you see these very negative stereotypes come to the surface.
“So what we show in this paper is these language models have learned to conceal their racism on the surface but very archaic stereotypes remain almost unaddressed on a deeper level.”
Developers are trying to address racism in AI by adding filters to their chatbots that stop them from saying offensive things. But it is much harder to address covert racism that is triggered by sentence structure or the use of slang.
AI is increasingly being used in job interviews and candidate screening, so bias within these systems can have real-world impacts.
There are also companies working on ways to use the technology in the legal system.