Facebook and TikTok are approving ads with ‘blatant’ misinformation about voting in midterms, researchers say
YouTube was the only platform tested that rejected all of the disinformation ads
A new report compiled by human rights watchdog groups claims that TikTok and Facebook failed to block advertisements containing "blatant" misinformation about how and when to vote in the US midterm elections, as well as about election integrity issues.
Global Witness and the Cybersecurity for Democracy team at New York University released the report, which found that both social media titans failed to effectively stop misinformation in paid ads from reaching their users.
The organisations reached the conclusion by conducting an experiment in which they submitted 20 ads with inaccurate claims to Facebook, TikTok and YouTube. CNN reports that all of the ads were aimed at battleground states including Arizona and Georgia.
YouTube managed to detect and reject all of the misinformation and suspended the channel attempting to push the false claims. However, Facebook approved a "significant number" of the ads, according to the report, and TikTok approved 90 per cent of the misinformation.
One of the few ads rejected by TikTok claimed that voters were required to have a Covid-19 vaccination in order to cast a ballot. Facebook approved that same ad.
The researchers pulled their ads once they had passed through the approval process, ensuring the false claims were never actually shown to users on the platforms.
“YouTube’s performance in our experiment demonstrates that detecting damaging election disinformation isn’t impossible,” Laura Edelson, co-director of NYU’s C4D team, said in a statement with the report. “But all the platforms we studied should have gotten an ‘A’ on this assignment. We call on Facebook and TikTok to do better: stop bad information about elections before it gets to voters.”
A spokesperson for Meta, the parent company of Facebook, claimed the tests “were based on a very small sample of ads, and are not representative given the number of political ads we review daily across the world.” The spokesperson added: “Our ads review process has several layers of analysis and detection, both before and after an ad goes live.”
TikTok issued a statement in response to the experiment saying that it does remove disinformation, but that it is open to feedback from experts on how to strengthen its safeguards. A spokesperson said TikTok “is a place for authentic and entertaining content which is why we prohibit and remove election misinformation and paid political advertising from our platform. We value feedback from NGOs, academics, and other experts which helps us continually strengthen our processes and policies.”
TikTok, which is especially popular among younger people, launched an "Elections Center" in August to "connect people who engage with election content to authoritative information," which includes guidance on where and how to vote. The platform has also added labels to identify content that deals with the midterm elections.
In September, the platform began to require "mandatory verification" for political accounts based in the US and issued a total ban on political fundraising using the platform.
Meta said last month that it also planned to remove misinformation regarding voting and any calls for violence linked to the upcoming election. However, The Washington Post noted in a report that the company would not ban accounts that claim the election is rigged or that make unsubstantiated allegations of voter fraud.