Facebook to direct users who interact with 'harmful' coronavirus misinformation to WHO
Agency under review for spreading China's harmful coronavirus misinformation
Facebook will steer users engaging in coronavirus discussion flagged as “harmful” toward the defunded World Health Organisation (WHO), the company has announced.
It was the latest in a series of moves by the Silicon Valley tech giant to shape public conversation about coronavirus, and comes days after the US halted $500m (£402m) in annual funding to the WHO while the agency’s pandemic response is reviewed.
In announcing the new pop-up alerts, Facebook’s vice president of integrity, Guy Rosen, said the company wanted to connect users with authoritative sources such as the WHO.
“Ever since Covid-19 was declared a global public health emergency in January, we’ve been working to connect people to accurate information from health experts and keep harmful misinformation about Covid-19 from spreading on our apps,” Mr Rosen said on Thursday.
Through its Covid-19 Information Center, Facebook has directed 2 billion people to health authorities, with more than 350 million users clicking through to learn more from reputable sources.
In January, the WHO tweeted that Chinese authorities had said there was no evidence of human-to-human transmission of coronavirus in Wuhan, the Chinese city at the centre of the outbreak.
That position was cited by Donald Trump when he announced the US’s 60-90 day review of the agency, saying it failed to investigate credible reports in December 2019 from sources in Wuhan that conflicted directly with the Chinese government’s accounts.
“Through the middle of January it parroted and publicly endorsed the idea that there was not human-to-human transmission happening despite reports and clear evidence to the contrary,” the US president said during his daily press conference.
“The WHO pushed China’s misinformation about the virus, saying it was not communicable and there was no need for travel bans. The WHO’s reliance on China’s disclosures likely caused a 20-fold increase in cases worldwide.”
Facebook CEO Mark Zuckerberg posted on social media that the company had issued warnings on 40 million posts deemed false by its fact-checking partners, and said it had taken down hundreds of thousands of misinformation posts related to Covid-19, including theories that could lead to physical harm, such as drinking bleach to cure the virus.
“For other misinformation, once it is rated false by fact-checkers, we reduce its distribution, apply warning labels with more context and find duplicates,” he wrote.
Facebook, YouTube and Twitter have implemented banners, pop-ups and features that direct users to third-party websites they deem authoritative and reliable.
The new Facebook policy directs users to the WHO’s “Myth Busters” page, while YouTube directs users to the Centers for Disease Control’s coronavirus landing page.
YouTube moderators began taking down “borderline content” videos promoting conspiracy theories that connect 5G networks and the spread of the coronavirus following a series of attacks on cell phone towers.
Twitter, meanwhile, has increased its use of machine learning to take down information it deems as false.
“We will continue to take action on accounts that violate our rules, including content in relation to unverifiable claims which incite social unrest, widespread panic or large-scale disorder,” a Twitter spokesperson said. “If people see anything suspicious on our service, please report it to us.”