
In a 3rd test, Facebook still fails to block hate speech

Facebook is letting violent hate speech slip through its controls in Kenya, according to a new report from the nonprofit groups Global Witness and Foxglove

Via AP news wire
Thursday 28 July 2022 15:23 BST
Facebook Hate Speech (Copyright 2019 The Associated Press. All rights reserved)


Facebook is letting violent hate speech slip through its controls in Kenya as it has in other countries, according to a new report from the nonprofit groups Global Witness and Foxglove.

It is the third such test of Facebook's ability to detect hateful language — either via artificial intelligence or human moderators — that the groups have run, and that the company has failed.

The ads, which the groups submitted in both English and Swahili, spoke of beheadings, rape and bloodshed. They compared people to donkeys and goats. Some also included profanity and grammatical errors. The Swahili-language ads easily made it through Facebook's detection systems and were approved for publication.

As for the English ads, some were rejected at first, but only because they contained profanities and mistakes in addition to hate speech. Once the profanities were removed and grammar errors fixed, however, the ads — still calling for killings and containing obvious hate speech — went through without a hitch.

“We were surprised to see that our ads had for the first time been flagged, but they hadn’t been flagged for the much more important reasons that we expected them to be," said Nienke Palstra, senior campaigner at London-based Global Witness.

The ads were never posted to Facebook. But the fact that they easily could have been shows that despite repeated assurances that it would do better, Facebook parent Meta still appears to regularly fail to detect hate speech and calls for violence on its platform.

Representatives for Meta did not immediately respond to a message seeking comment on Tuesday. Global Witness said it reached out to Meta after its ads were accepted for publication and did not receive a response.

Each time Global Witness has submitted ads with blatant hate speech to see if Facebook’s systems would catch it, the company has failed to do so. In Myanmar, one of the ads used a slur to refer to people of east Indian or Muslim origin and called for their killing. In Ethiopia, the ads used dehumanizing hate speech to call for the murder of people belonging to each of Ethiopia’s three main ethnic groups — the Amhara, the Oromo and the Tigrayans.

Why ads and not regular posts? That's because Meta claims to hold advertisements to an “even stricter” standard than regular, unpaid posts, according to its help center page for paid advertisements.

Meta has consistently refused to say how many content moderators it has in countries where English is not the primary language. This includes moderators in Kenya, Myanmar and other regions where material posted on the company’s platforms has been linked to real-world violence.

Kenya is readying for a national election in August. On July 20, Meta posted a detailed blog post on how it is preparing for the country's election, including establishing an “operations center” and removing harmful content.

“In the six months leading up to April 30, 2022, we took action on more than 37,000 pieces of content for violating our Hate Speech policies on Facebook and Instagram in Kenya. During that same period, we also took action on more than 42,000 pieces of content that violated our Violence & Incitement policies," wrote Mercy Ndegwa, director of public policy in East & Horn of Africa.

Global Witness said it resubmitted two of its ads, one in English and one in Swahili, after Meta published its blog post to see if anything had changed. Once again, the ads went through.

“If you’re not catching these 20 ads, this 37,000 number that you are celebrating, that is probably the tip of the iceberg. You have to think that there’s a lot that’s (slipping through) your filter," Palstra said.

The Global Witness report follows a separate study from June that found that Facebook has failed to catch Islamic State group and al-Shabab extremist content in posts aimed at East Africa. The region remains under threat from violent attacks as Kenya prepares to vote.
