Social media companies could be banned from blocking content under new rules

Adam Smith
Wednesday 09 September 2020 17:15 BST
Donald Trump at a campaign rally a day before damning statements in a coming book surfaced, along with recordings of his own words about the coronavirus (Sean Rayford/Getty Images)

Republicans in the US Congress have proposed a new bill aimed at removing legal protections for websites with regard to how they moderate their platforms.

The bill, called the Online Freedom and Viewpoint Diversity Act, would attempt to diminish the power of Section 230 – the US law that stops Facebook, Twitter, and other websites from being held legally responsible for content posted by their users.

Section 230 became a political flashpoint in May 2020, after Twitter added a fact-checking link to one of Mr Trump’s tweets that incorrectly linked voting by mail to election fraud.

The Online Freedom and Viewpoint Diversity Act would require platforms to hold an “objectively reasonable belief” that the content they remove violates a particular policy.

If these platforms cannot legally defend such a belief, they could face consequences for their moderation decisions. 

The new bill also narrows the scope of content moderation, replacing the less specific term “objectionable” with categories such as “self-harm”, “unlawful”, and content “promoting terrorism”.

The bill comes from senators Roger Wicker, Lindsey Graham, and Marsha Blackburn.

“For too long, social media platforms have hidden behind Section 230 protections to censor content that deviates from their beliefs,” Wicker said in a statement. 

Lindsey Graham echoed that statement, saying that social media companies often censor content that would otherwise be valid.

In a recent incident, Twitter temporarily suspended Donald Trump Jr after he posted a video that promoted the drug hydroxychloroquine as a cure for Covid-19.  

The video showed people claiming to be doctors, who falsely argued that "you don't need masks" and that studies showing the drug may not be effective are "fake science".

"The Tweet is in violation of our COVID-19 misinformation policy," a Twitter spokesperson said at the time.

Republicans argue that social media companies censor content like this out of political ideology, rather than to protect people from misinformation about the coronavirus.

Many Republicans, including senator Ted Cruz and president Donald Trump, believe Section 230 protection is conditional on private companies being politically neutral. This is not true.

Right-wing politicians also claim that social media companies have a left-wing political bias, something the companies have repeatedly denied.

Today, a Facebook engineer resigned after alleging that the company gives right-wing publications a “pass on our misinformation policies”, as well as criticising its approaches to hate speech. 

Another employee was also reportedly fired by Facebook after alleging that right-wing organisations and figures, including Breitbart, Turning Point USA founder Charlie Kirk, Trump advocates Diamond and Silk, and conservative video maker Prager University (PragerU), received preferential treatment that stopped their posts being blocked under Facebook’s policies.

Facebook vice president of global public policy and former Bush administration employee Joel Kaplan reportedly intervened personally on behalf of an Instagram post from Charlie Kirk.

President Donald Trump has also asked Mitch McConnell to repeal Section 230, after a photo of the Republican Senate Majority Leader, edited to make him appear like a Russian guard, was posted online.

There are several other pieces of legislation targeting Section 230, including proposals from Democratic senator Brian Schatz, Republican senator John Thune, and Republican senator Josh Hawley.

Presidential candidate Joe Biden has also called for it to be revoked as a way to ensure platforms remove terrorist content.

Moderating content for harmful images is a difficult task for technology companies, one that challenges the capabilities of automated algorithms and can mentally scar human employees.

Additional reporting from agencies
