Louisville attack shows challenge of curbing violent videos

Haleluya Hadero, Matt O'Brien
Tuesday 11 April 2023 21:17 BST

Social media companies are once again in the spotlight after a bank employee in Louisville, Kentucky, killed five people in a mass shooting and livestreamed the attack on Instagram.

Tech companies have gotten better in recent years at cooperating to tamp down the spread of mass shooting videos on mainstream platforms. But there's still no easy way to stop shooters from broadcasting their grisly crimes without shutting down livestreaming services altogether.

Here's what we know so far about what happened in Louisville:

HOW DID META RESPOND?

Instagram parent company Meta, which also owns Facebook, said in a statement that it quickly removed the livestream of the Louisville shooting on Monday morning.

But Meta did not immediately respond to questions Tuesday about how long it took to take down the livestream, or how many people watched it before it was removed.

Instagram allows users to anonymously report livestreams. Once a report has been submitted, the company's policy states that it will review the broadcast "as quickly as possible" and remove those that violate its policies. Depending on the severity of the situation, the company may decide to end a live broadcast, disable the account or contact law enforcement.

IS THIS THE FIRST LIVESTREAMED SHOOTING?

No. All told, there have been seven perpetrator-produced videos of violence posted on social media in the past four years that major companies have tried to keep off their platforms, according to the Global Internet Forum to Counter Terrorism.

In September, a gunman livestreamed his attack on people in Memphis, Tennessee, during a rampage that killed four and wounded three, police said. The shooting came four months after a white gunman massacred 10 Black shoppers and workers, and wounded three, in a shooting at a Buffalo, New York, supermarket that was livestreamed on the Amazon-owned gaming platform Twitch.

The platform said it removed that video in less than two minutes, which was not fast enough to prevent copies of the clip from spreading to other social media sites. But the removal was considerably faster than the 17 minutes it took Facebook to take down a livestreamed attack in 2019 at two mosques in Christchurch, New Zealand. That shooting killed 51 people.

Also in 2019, another gunman killed two people during a shooting at a German synagogue that was also livestreamed on Twitch.

Last June, two Muslim men in India were accused of slitting the throat of a Hindu tailor and posting a video of it online amid rising tensions between Hindus and Muslims in the country.

HOW HAVE SOCIAL MEDIA COMPANIES CHANGED THEIR TACTICS?

The methods to curb attack videos have evolved since 2014, when Islamic State militants in Syria began sharing grisly propaganda videos of the beheadings of kidnapped journalists and other hostages.

While those events were not shared live, it was "really the first time that there was a major terrorist incident designed for the social media era. And platforms realized that they had to do something," said Courtney Radsch, a fellow at the UCLA Institute for Technology, Law & Policy.

Facebook, Microsoft, Twitter and Google-owned YouTube formed a group in 2017 called the Global Internet Forum to Counter Terrorism. Its mission expanded after the Christchurch killings "spurred a much more aggressive effort to not only eradicate" terrorist content online, but also to go after mass killing videos "perpetrated by white nationalists and other types of extremists," said Radsch, who serves on a committee for the group.

The group, known as GIFCT, now has nearly two dozen members, including Amazon, Airbnb, Dropbox, Discord and Zoom. Whatever platform has the original video will submit a "hash," a digital fingerprint corresponding to that video, and notify the other member companies so they can restrict it from their platforms. While not perfect, experts say the response has grown quicker and now also encompasses PDF files to stop the spread of manifestos.
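In rough terms, that matching step works like a shared blocklist of fingerprints. The short Python sketch below is purely illustrative and is not GIFCT's actual tooling: it substitutes an exact SHA-256 digest for the perceptual hashes member companies really exchange (which are designed so that re-encoded or cropped copies still match), and the function and database names are hypothetical.

import hashlib
from pathlib import Path

# Hashes that member platforms have flagged as corresponding to violating videos.
shared_hash_database = set()

def fingerprint(video_path: Path) -> str:
    # Compute a digital fingerprint of the video file's bytes, reading in 1 MB chunks.
    digest = hashlib.sha256()
    with open(video_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def report_violating_video(video_path: Path) -> str:
    # The platform hosting the original upload submits its hash to the shared list.
    video_hash = fingerprint(video_path)
    shared_hash_database.add(video_hash)
    return video_hash

def should_block(upload_path: Path) -> bool:
    # Other member platforms check new uploads against the shared hash list.
    return fingerprint(upload_path) in shared_hash_database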

"Unfortunately, as these have continued to occur, the more of these we've gone through with our members, the more everyone strengthened their muscle memory around this," said Sarah Pollack, a spokesperson for GIFCT.

A day after the Louisville shooting, clips from the gunman's livestream were not easily found on Instagram or other popular social media sites such as Twitter, Facebook and TikTok. The first calls to police came in around 8:30 a.m. Monday. By midday, the GIFCT had put out its highest-level alert for coordinating efforts to stop the video's spread.

WHAT MORE COULD BE DONE?

It's hard to know if the effort to slow the spread of videos has done anything to deter the violence itself.

There's a tension between platforms "wanting to give their users new capabilities and opportunities to engage" and the risks of livestreaming, said UCLA's Radsch.

Livestreaming, "with no delay, with no real oversight, can present really challenging situations when users use your platform to livestream terrorism, extremism, violence, suicide.ā€

She said platforms still need to take more seriously whether to adopt additional precautions.

"The challenge is, any precaution you put in place for a mass violence event could also potentially be leveraged to prevent livestreaming of police brutality or pro-democracy protests," she said. "So it really is a double-edged sword."

Also, while mainstream companies are coordinating their response, they have little influence over the "dark web" forums that are still trying to collect and share the videos, other than preventing them from obtaining footage in the first place.

___

O'Brien reported from Providence, Rhode Island.
