
Twitter’s photo algorithm prioritised white faces over black ones, company says it’s ‘got more analysis to do’

White people, cartoon characters, and even dogs were all prioritised over their darker counterparts

Adam Smith
Monday 21 September 2020 14:42 BST

Twitter’s photo algorithm showed evidence of racial bias over the weekend.

The company said it was grateful the issue had been raised and acknowledged it had more work to do to address racial bias in its systems.

Users found that when posting photos of black people and white people side by side in one image, the white person would overwhelmingly be chosen as the cropped preview on the timeline.

The issue came to light when a Twitter user posted about Zoom’s face-detection technology, which was erasing his black colleague’s head whenever the colleague used a virtual background.

When he tweeted about the problem, he noticed that Twitter’s preview crop was also prioritising his own white face over his colleague’s.

Other users attempted to recreate the experiment with other faces, including those of US Senate majority leader Mitch McConnell and former US president Barack Obama.

Twitter’s algorithm consistently prioritised Senator McConnell’s face as the preview image.

The issue also occurred when posting images of black cartoon characters versus white cartoon characters, and even dark-furred dogs against light-coloured dogs.

However, the issue seemed less pronounced on Tweetdeck, Twitter’s dashboard app, which appears to rely less on image content when choosing a crop.

Twitter’s chief design officer Dantley Davis tweeted a similar comparison, but clarified that this was “not a scientific test as it's an isolated example” and said that the company was “investigating the [neural network].”

A neural network is the artificial intelligence system Twitter uses to decide how photos are cropped when they are displayed on the timeline.

In a 2018 blog post, the company explained that the system performs a form of saliency detection.

This means the crop tries to show the parts of an image that people are most likely to look at – such as “faces, text, animals, but also other objects and regions of high contrast”.

A cut-down version of this algorithm runs on every image as it is cropped and posted in real time, since Twitter is “only interested in roughly knowing where the most salient regions are”.

“Unfortunately, the neural networks used to predict saliency are too slow to run in production, since we need to process every image uploaded to Twitter and enable cropping without impacting the ability to share in real time,” the post says.
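As a rough illustration of how saliency-driven cropping works, the sketch below uses OpenCV’s classical spectral-residual saliency detector to centre a crop on the most salient point of an image. It stands in for Twitter’s actual distilled neural network, which has not been published; the file names and preview dimensions are invented for the example.

```python
# Minimal sketch of saliency-based cropping, assuming OpenCV with the
# contrib modules installed (pip install opencv-contrib-python).
# A classical spectral-residual detector stands in for Twitter's
# undisclosed neural network; the 600x335 preview size is invented.
import cv2
import numpy as np

def saliency_crop(image_path, crop_w=600, crop_h=335):
    """Return a crop_w x crop_h window centred on the most salient point."""
    image = cv2.imread(image_path)
    h, w = image.shape[:2]

    # Estimate a saliency map: brighter pixels mark regions a viewer is
    # likely to look at first (faces, text, areas of high contrast).
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(image)
    if not ok:
        raise RuntimeError("saliency computation failed")

    # Centre the crop on the single most salient pixel, clamped so the
    # window stays inside the image bounds.
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    left = int(np.clip(x - crop_w // 2, 0, max(w - crop_w, 0)))
    top = int(np.clip(y - crop_h // 2, 0, max(h - crop_h, 0)))
    return image[top:top + crop_h, left:left + crop_w]

cv2.imwrite("preview.jpg", saliency_crop("uploaded.jpg"))
```

Whatever the model, the bias complaint is the same: if the saliency score is systematically higher for lighter faces, the crop centred on the highest-scoring point will systematically favour them.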

Twitter had previously used face detection in its cropping algorithm, but it would often miss faces and mistakenly detect faces where there were none.
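That earlier approach can be sketched in the same way with a stock Haar-cascade face detector – Twitter did not say which detector it used, so this is purely illustrative of the failure modes just described.

```python
# Minimal sketch of face-detection-based cropping, using the stock Haar
# cascade that ships with OpenCV. This is not Twitter's detector; it
# only illustrates the two failure modes the company described: missed
# faces leave the cropper with nothing to anchor on, and false
# positives anchor it on something that is not a face at all.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("uploaded.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) == 0:
    # A miss: with no face found, a face-based cropper must fall back
    # to something crude, such as a centre crop.
    print("no face found; falling back to a centre crop")
else:
    # Draw each detection so misses and false positives are visible.
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detected.jpg", image)
```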

“Thanks to everyone who raised this. We tested for bias before shipping the model and didn't find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do. We'll open source our work so others can review and replicate,” tweeted Liz Kelley, who works on Twitter’s communications team.

The algorithm’s flaw raises “a very important question”, tweeted Twitter’s chief technology officer Parag Agrawal.

“To address it, we did analysis on our model when we shipped it, but needs continuous improvement. Love this public, open, and rigorous test – and eager to learn from this,” he continued.

Other social media companies, such as Facebook-owned Instagram, have also been criticised for racially biased algorithms.

Flaws in the detection algorithms used by self-driving cars also make them more likely to drive into black people.

An algorithm used across the US to predict how likely prisoners are to reoffend was also found to be biased against black people.

Recently, a black man was wrongfully arrested on the basis of a flawed match from a facial recognition algorithm. Experts say such algorithmic biases will exacerbate racial inequality.

Last year, the National Institute of Standards and Technology tested 189 algorithms from 99 developers and found that black and Asian faces were 10 to 100 times more likely to be falsely identified by the algorithms compared to white faces.
