Coronavirus: Social media firms only taking down one in 10 posts reported for ‘dangerous’ misinformation, research finds
Critics say tools are ‘not fit for purpose’ amid spread of fake cures and conspiracy theories
Social media platforms are removing less than one in 10 posts spreading “dangerous” coronavirus misinformation, a new study suggests.
Conspiracy theorists have mounted protests in the UK and around the world over claims that the pandemic is fake or part of a sinister plot, while fraudsters are selling fake cures and urging people to disregard official advice.
Researchers flagged more than 600 posts including 5G conspiracy theories to Facebook, Instagram and Twitter in April and May.
But only 6 per cent were deleted; a further 1 per cent were flagged as false information but left online, and the accounts behind 2 per cent of the posts were taken down.
Overall, 91 per cent of the reported posts were left untouched.
The Centre for Countering Digital Hate (CCDH), which published the research, accused the platforms of “shirking their responsibility” to stop the spread of “dangerous” misinformation.
“Social media giants have claimed many times that they are taking Covid-related misinformation seriously, but this new research shows that even when they are handed the posts promoting misinformation, they fail to take action,” said chief executive Imran Ahmed.
“Their systems for reporting misinformation and dealing with it are simply not fit for purpose.”
Jo Stevens, Labour’s shadow culture secretary, said the report showed the “reality of what the global tech companies promise on removing harmful content and the pitiful steps they actually take in practice”.
She called for the government to move forward with internet regulation proposed in last year’s Online Harms White Paper, adding: “Combatting the impact of the global Covid-19 crisis is difficult enough, without the uncontrolled spread of extremely harmful content on social media platforms.”
The report was published as MPs prepared to question representatives from Twitter, Facebook and Google on the issue for a second time.
The Digital, Culture, Media and Sport sub-committee on disinformation recalled the officials to parliament after a hearing in April where they were lambasted for a lack of “clarity and openness”.
Chair Julian Knight said the firms had not given “adequate answers” to MPs’ questions in person or in writing, adding: “We were very disappointed by the standard of evidence given by all three social media companies, given the damage that can be done by the deliberate spreading of false information about Covid-19 and the need to tackle it urgently.”
Facebook chairman Mark Zuckerberg has refused to appear in person, and its UK public policy manager, Richard Earley, was criticised for being unable to confirm the number of content moderators working during the pandemic.
US media reports claimed Facebook had put an “army of moderators” on leave in March, while acquiring the image hub Giphy for around $400m (£315m).
Mr Ahmed said the reports were a “kick in the teeth”, adding: “If social media giants continue to publish misinformation on their websites, then politicians need to hold them to account by imposing financial sanctions for the costs to the NHS, fire service, police and all of society that misinformation causes, and legislate for deeper, faster regulation.”
Research by Ofcom shows that 5G conspiracy theories are the most common piece of misinformation that members of the British public encounter online.
The watchdog’s most recent survey of British adults found that 39 per cent said they had come across “false or misleading information” on coronavirus in a week.
Separate polling commissioned by Hope Not Hate in April found that almost half of the British population believes that coronavirus is a “man-made creation”.
That research suggested that 8 per cent of people think that 5G technology is spreading the virus, and that many more have seen claims that Covid-19 is a Chinese weapon or created by the “New World Order”.
Brexit supporters and people who distrust the political system were more likely to believe the conspiracies, according to the report.
Counter-terror police have warned that conspiracy theories are being used “as a hook” by extremists to draw in new recruits.
Officials are concerned that lockdown conditions mean that people are spending more time alone online, while experiencing fear and distress that makes them less able to spot and reject misinformation.
Sara Khan, who leads the Commission for Countering Extremism, told The Independent that conspiracy theories had previously been viewed as “harmless” but could be used to drive hate against Muslims and other minority groups.
“We’ve seen the conspiracy theories around 5G and how masts are being attacked – there are serious consequences,” she added.
“If they are inciting hatred, violence or justifying terrorism or inciting violence that’s not harmless. We need a better and more sophisticated policy response.”
According to the CCDH research, Facebook took action on the greatest proportion of flagged posts on coronavirus, with one in 10 removed and a further 2 per cent flagged as false information.
Instagram removed 4 per cent of posts and took down 6 per cent of the responsible accounts, with 10 per cent of posts acted on in total.
Twitter only acted on 3.3 per cent of reported tweets, removing 2.8 per cent of accounts and 0.6 per cent of posts.
In total, 9.4 per cent of the 649 reported posts were taken down individually or as part of a wider profile or group.
The research was conducted by volunteers from Youth Against Misinformation, a group trained by the Restless Development charity and the CCDH, from 20 April to 26 May.
Twitter said its automated systems had removed 4.3 million “spammy or manipulative” accounts since new policies were introduced on 18 March.
“We are introducing new labels and warning messages to provide additional context and information on some tweets containing disputed or misleading information related to Covid-19,” a statement added.
Facebook, which owns Instagram, said that during March and April it placed warning labels on around 90 million pieces of content relating to coronavirus, and that people interacting with removed posts are notified.
A spokesperson added: “We share the goal of reducing the spread of harmful misinformation about Covid-19, but this sample is not representative and the findings don’t reflect the work we’ve done.
“We are taking aggressive steps to remove harmful misinformation from our platforms and have removed hundreds of thousands of these posts, including claims about false cures.”