
Explainer: Why the Supreme Court tiptoeing past Section 230 helps Big Tech


Barbara Ortutay
Thursday 18 May 2023 20:12 BST

Google, Twitter, Facebook and other tech companies fueled by social media have dodged a legal threat that could have blown a huge hole in their business models.

The U.S. Supreme Court delivered the reprieve Thursday by rejecting one lawsuit alleging social media platforms should be held liable for enabling a lethal attack on a Turkish nightclub and tossing another case back to a lower court.

Those moves, coming three months after the Supreme Court heard oral arguments in the cases, preserve a law known as Section 230 that shields social media services from being held responsible for the material posted on their platforms.

Without the protection, which consists of a mere 26 words tucked inside a broader reform of U.S. telecommunications law adopted in 1996, Google, Facebook and other tech companies probably wouldn't have grown as large as they are now. And their future prospects would dim if their platforms were stripped of their legal immunity.

But just because the Supreme Court sidestepped the prickly issue for now doesn't mean there won't be other cases brought that could result in adverse decisions down the line. This year's high-profile oral arguments on the issue also highlighted the widely held feeling that Congress should revisit a law that was adopted before Facebook founder Mark Zuckerberg was even a teenager.

"We really don’t know about these things. You know, these are not like the nine greatest experts on the internet,” Justice Elena Kagan said of herself and her colleagues during February's oral arguments, while adding that the matter may be best addressed by U.S. lawmakers.

WHAT IS SECTION 230?

If a news site falsely calls you a swindler, you can sue the publisher for libel. But if someone posts that on Facebook, you can’t sue the company — just the person who posted it.

That’s thanks to Section 230 of the 1996 Communications Decency Act, which states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

That legal phrase shields companies that can host trillions of messages from being sued into oblivion by anyone who feels wronged by something someone else has posted — whether their complaint is legitimate or not.

Politicians on both sides of the aisle have argued, for different reasons, that Twitter, Facebook and other social media platforms have abused that protection and should lose their immunity — or at least have to earn it by satisfying requirements set by the government.

Section 230 also allows social platforms to moderate their services by removing posts that, for instance, are obscene or violate the services’ own standards, so long as they are acting in “good faith.”

WHERE DID SECTION 230 COME FROM?

The measure’s history dates back to the 1950s, when bookstore owners were being held liable for selling books containing “obscenity,” which is not protected by the First Amendment. One case eventually made it to the Supreme Court, which held that it created a “chilling effect” to hold someone liable for someone else’s content.

That meant plaintiffs had to prove that bookstore owners knew they were selling obscene books, said Jeff Kosseff, the author of “The Twenty-Six Words That Created the Internet,” a book about Section 230.

Fast-forward a few decades to when the commercial internet was taking off with services like CompuServe and Prodigy. Both offered online forums, but CompuServe chose not to moderate its forums, while Prodigy, seeking a family-friendly image, did.

CompuServe was sued over that, and the case was dismissed. Prodigy, however, got in trouble. The judge in that case ruled that “they exercised editorial control — so you’re more like a newspaper than a newsstand,” Kosseff said.

That didn’t sit well with politicians, who worried that outcome would discourage newly forming internet companies from moderating at all. And Section 230 was born.

“Today it protects both from liability for user posts as well as liability for any claims for moderating content,” Kosseff said.

WHAT HAPPENS IF SECTION 230 GOES AWAY?

“The primary thing we do on the internet is we talk to each other. It might be email, it might be social media, might be message boards, but we talk to each other. And a lot of those conversations are enabled by Section 230, which says that whoever’s allowing us to talk to each other isn’t liable for our conversations,” said Eric Goldman, a professor at Santa Clara University specializing in internet law. “The Supreme Court could easily disturb or eliminate that basic proposition and say that the people allowing us to talk to each other are liable for those conversations. At which point they won’t allow us to talk to each other anymore.”

There are two possible outcomes. Platforms might get more cautious, as Craigslist did following the 2018 passage of a sex-trafficking law that carved out an exception to Section 230 for material that “promotes or facilitates prostitution.” Craigslist quickly removed its “personals” section altogether, even though it wasn’t intended to facilitate sex work. The company didn’t want to take any chances.

“If platforms were not immune under the law, then they would not risk the legal liability that could come with hosting Donald Trump’s lies, defamation, and threats,” said Kate Ruane, former senior legislative counsel for the American Civil Liberties Union, who now works for PEN America.

Another possibility: Facebook, Twitter, YouTube and other platforms could abandon moderation altogether and let the lowest common denominator prevail.

Such unmonitored services could easily end up dominated by trolls, like 8chan, a site that was infamous for graphic and extremist content.

Any change to Section 230 is likely to have ripple effects on online speech around the globe.

“The rest of the world is cracking down on the internet even faster than the U.S.,” Goldman said. “So we’re a step behind the rest of the world in terms of censoring the internet. And the question is whether we can even hold out on our own.”

——

AP Technology Writer Michael Liedtke contributed to this story.
