Facebook says it removed or flagged 1.9 million pieces of terrorism-related content this year

Twice as much as in the previous quarter

Jeremy B. White
San Francisco
Tuesday 24 April 2018 00:49 BST
Facebook CEO Mark Zuckerberg during a town hall at Facebook's headquarters in Menlo Park, California (REUTERS/Stephen Lam)


Facebook has said it removed or flagged 1.9 million pieces of content linked to al-Qaeda or Isis in the first part of 2018.

That figure represented a nearly twofold increase over the prior quarter, the social media site said in a blog post.

Only a tiny sliver of those posts was identified by Facebook users, the company said. The vast majority - some 99 per cent - was flagged by Facebook reviewers and by the site's detection technology.

Facebook's announcement that it had proactively found and scrubbed a far larger quantity of terrorism-related content gave it an opportunity to demonstrate that it is taking the threat seriously.

“We’ve made significant strides finding and removing [Isis and Al-Qaeda] propaganda quickly and at scale”, the blog post said, noting Facebook has added 50 people to its counterterrorism team and intends to add more.

The social media giant has faced tremendous pressure to police hate speech and crack down on content that may incite violence, paralleling a backlash over revelations that Russian-linked actors used the site to sow discord and try to sway the 2016 presidential election.

As the extent of the Russian influence campaign has come into focus, Facebook executives have increasingly acknowledged that the site's vast reach and influence make it prone to abuse. Similarly, in the wake of a data privacy scandal that saw up to 87 million users' data end up in the hands of political firm Cambridge Analytica, Facebook's chief operating officer Sheryl Sandberg said that "bad actors" would always try to exploit the platform.

The post detailing Facebook's counter-terror efforts echoed the company's recognition of its potential for misuse. It noted that "bad actors have long tried to use" the internet for nefarious ends, pointing out that white supremacists and al-Qaeda have for decades sought to disseminate their ideologies online.


“While the challenge of terrorism online isn’t new, it has grown increasingly urgent as digital platforms become central to our lives”, the post said.
