AI mistakes ‘black and white’ chess chat for racism

Issues with artificial intelligence-based content moderation highlighted after world’s most popular YouTube chess channel labelled ‘harmful and dangerous’

Anthony Cuthbertson
Thursday 18 February 2021 18:32 GMT
Popular chess channels on YouTube have faced temporary restrictions after terms like ‘black against white’ were apparently mistaken for hate speech (Getty Images/iStockphoto)

Online discussions about black and white chess pieces are confusing artificial intelligence algorithms trained to detect racism and other hate speech, according to new research.

Computer scientists at Carnegie Mellon University began investigating the AI glitch after a popular chess channel on YouTube was blocked for “harmful and dangerous” content last June.

Croatian chess player Antonio Radic, who goes by the online alias Agadmator, hosts the world’s most popular YouTube chess channel, with more than 1 million subscribers.

On 28 June 2020, Radic was blocked from YouTube while presenting a chess show with Grandmaster Hikaru Nakamura, though no specific reason was given by the Google-owned video platform.

Radic’s channel was reinstated after 24 hours, leading the chess champion to speculate that he had been temporarily banned for a reference to “black against white”, even though he was talking about chess at the time.

YouTube’s moderation system relies on both human reviewers and AI algorithms, meaning an algorithm that has not been trained to understand context can misinterpret such comments.

“If they rely on artificial intelligence to detect racist language, this kind of accident can happen,” said Ashiqur KhudaBukhsh, a project scientist at CMU’s Language Technologies Institute.
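
To see how such a system can trip up, consider a deliberately naive keyword filter – a toy illustration, not YouTube’s actual pipeline or the classifier used in the study – that flags comments without any sense of context:

```python
# Toy illustration only: a naive keyword filter that ignores context.
# This is NOT YouTube's moderation system or the study's classifier.

FLAGGED_TERMS = {"black", "white", "attack", "threat"}  # hypothetical word list

def naive_flag(comment: str) -> bool:
    """Flag a comment if it contains two or more 'suspicious' words."""
    words = set(comment.lower().split())
    return len(words & FLAGGED_TERMS) >= 2

# Ordinary chess commentary trips the filter:
print(naive_flag("white launches an attack and the black king is under threat"))  # True
```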

KhudaBukhsh tested this theory by using a state-of-the-art hate speech classifier to screen more than 680,000 comments gathered from five popular chess-focused YouTube channels.

After manually reviewing a sample of 1,000 comments that the AI had classed as hate speech, the researchers found that 82 per cent of them had been misclassified because of words like “black”, “white”, “attack” and “threat” – all of which are commonly used in chess parlance.
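
The audit itself can be sketched in outline: run a classifier over the comments, sample the flagged ones, and have a human judge each flag. The stand-in classifier and reviewer below are hypothetical placeholders, not the researchers’ actual tools:

```python
# Minimal sketch of the audit described above. The stand-in classifier and
# reviewer are hypothetical placeholders, not the researchers' actual tools.
import random

def classifier_flags(comment: str) -> bool:
    """Stand-in for a trained hate speech classifier's prediction."""
    return any(w in comment.lower() for w in ("black", "white", "attack", "threat"))

def human_says_hateful(comment: str) -> bool:
    """Stand-in for a human reviewer; nothing in this toy corpus is hateful."""
    return False

def misclassification_rate(comments, sample_size=1000, seed=0):
    """Sample flagged comments and measure how many a human would overturn."""
    flagged = [c for c in comments if classifier_flags(c)]
    sample = random.Random(seed).sample(flagged, min(sample_size, len(flagged)))
    wrong = sum(1 for c in sample if not human_says_hateful(c))
    return wrong / len(sample)

comments = [
    "the white queen attacks f7 and black faces a mating threat",
    "great game, thanks for the upload",
] * 500
print(f"share of flagged comments misclassified: {misclassification_rate(comments):.0%}")
```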

The paper was presented this month at the annual conference of the Association for the Advancement of Artificial Intelligence (AAAI).
