Facebook bans deepfake videos ahead of 2020 US election but allows misinformation
Damaging content designed to be deliberately misleading will still not be removed under new policy
Facebook has banned AI-manipulated videos known as deepfakes ahead of the 2020 US presidential election, though other misleading content will still be permitted.
The technology giant announced the new policy in a blog post written by the firm’s head of global policy management, Monika Bickert. She explained that damaging content designed to spread misinformation, such as doctored videos of Labour MP Keir Starmer or US House speaker Nancy Pelosi, will not be removed under the new policy.
Instead, only videos that count as deepfakes will be removed, despite there being few known incidents of the technology being used to manipulate viewers.
“While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases,” she wrote.
“We are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes... This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.”
Ms Bickert said videos would be removed from Facebook and Instagram if they meet two criteria: they are edited in a way that makes it appear that someone “said words that they did not actually say”, and they are created using artificial intelligence that replaces content in a video, “making it appear to be authentic”.
Only content that meets these narrow criteria will actually be removed, while videos verified to contain false or deliberately misleading information will be allowed to remain on the platforms.
“If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad,” the blog post states. “People who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.”
The company claims this approach is “critical” to its strategy, as leaving such videos up and allowing them to be shared among users is “providing people with important information and context”.
Facebook has also previously said that content violating its policies will be allowed to remain if it is deemed newsworthy.
Nick Clegg, Facebook’s head of global communications, said in September: “If someone makes a statement or shares a post which breaks our community standards we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm.”