
Dark corners of the web fight back against New Zealand’s drive to restrain the far right after Christchurch massacre

Memes, video games and online chat forums are glorifying the Christchurch attack as governments worldwide press tech giants to clamp down on far-right propaganda

Jamie Tarabay
Saturday 06 July 2019 13:42 BST
The attack on a mosque in Christchurch, New Zealand in March was streamed live on Facebook (AFP/Getty Images)


A video game that uses footage of the Christchurch massacre to put Muslims in a gunman’s crosshairs. Memes featuring the face and weapons of the man charged in that New Zealand attack. Messages on online forums that glorify him as St Tarrant — patron saint of the far right.

New Zealand has worked hard to keep the name of Brenton Tarrant, the man charged with killing 51 Muslims in Christchurch, out of the news and to restrict the spread online of the hateful ideology he is accused of promoting.

But the footage, games, memes and messages that still populate the dark corners of the global internet underline the immensity of the task, especially for a small country like New Zealand.

“The internet is a very complex and rough environment, and governments, especially small governments, don’t have as many cards as they would like to play,” said Ben Buchanan, a cybersecurity expert who teaches at Georgetown University.

Shortly after the March 15 attack, Prime Minister Jacinda Ardern declared that she would never utter Mr Tarrant’s name and that she would do whatever she could to deny him a platform for his views.

A few days later, the New Zealand government banned the sharing or viewing of a 74-page manifesto that Mr Tarrant is believed to have written. The country also declared it a crime to spread the video purporting to show the massacre; more than a dozen people have been officially warned or charged.

Ms Ardern followed those actions with an effort, which she branded the Christchurch Call, to enlist tech companies like Facebook, Google, Twitter and YouTube to do more to curb violent and extremist content. In an op-ed, Ms Ardern noted that her government could change gun laws and tackle racism and intelligence failures but that “we can’t fix the proliferation of violent content online by ourselves.”

Seventeen countries and the European Commission, as well as eight large tech companies, have signed on to her call. And late last week, leaders at the Group of 20 summit in Osaka, Japan, issued their own appeal to tech companies, declaring in a statement that “the rule of law applies online as it does offline.”

But, if anything, the appetite for material connected to the Christchurch attack continues to grow, said Ben Decker, the chief executive of Memetica, a digital investigations consultancy.

Facebook said that the livestream of the Christchurch attack was viewed by fewer than 200 users, but that videos of the attack posted later were watched by 4,000 others, and that the platform blocked more than 1 million uploads in the days after the assault. It is unclear how many uploads have been attempted in the months since.

The video game adapting the purported Christchurch footage is still being shared online. Modelled on other first-person shooter games, it follows a gunman who enters a mosque, draws a gun and kills anyone in his path.

The mosque shootings suspect, whose face was blurred on the orders of the judge, appears in court (Getty)

In the days leading up to a court appearance by Mr Tarrant last month, during which he pleaded not guilty to charges that included murder and terrorism, memes featuring him spiked across the message boards 4Chan and 8Chan, Mr Decker said. Scores of boards on 8Chan are devoted to Mr Tarrant, including forums lionising him as St Tarrant.

And on the day Mr Tarrant was due in court, a user on Reddit announced a plan to attack a mosque in Texas, vowing to follow the example of “our lad.” Many users flagged it to the police, and no attack occurred.

“You have these toxic communities trying to infect more mainstream congregations with xenophobia, Islamophobia and threats of mass violence,” Mr Decker said. “The fact that it moves across platforms allows users to notify law enforcement. It definitely is a tale of two internets.”

Mr Decker was among the consultants the New Zealand authorities met with as Ms Ardern prepared to travel to Paris in May to issue her Christchurch Call. One question she has grappled with is how far New Zealand, an island nation of just under 5 million people, will go to keep the rest of the world at bay.


After the Christchurch attack, local internet service providers suspended access to websites that hosted videos of the shooting and apologised for the censorship, even as they acknowledged that they could not completely prevent users from viewing the material.

“We appreciate this is a global issue; however, the discussion must start somewhere,” the companies said in a statement addressed to the heads of Facebook, Google and Twitter. “We must find the right balance between internet freedom and the need to protect New Zealanders, especially the young and vulnerable, from harmful content.”

The press in New Zealand has also imposed restrictions on itself. As news outlets have prepared to cover Mr Tarrant’s trial, which is scheduled for May next year, they have voluntarily agreed to limit coverage of anything that could amplify white supremacist ideology, including the manifesto.

That manifesto has already had an impact beyond New Zealand’s shores. In April, a gunman entered a synagogue 25 miles from San Diego, killing one person and injuring three others. The suspect claimed to have been inspired by the Christchurch shootings, had reportedly posted his own manifesto online and may have tried to livestream the shooting.

Missouri’s Republican senator Josh Hawley has introduced a bill to amend Section 230 of the Communications Decency Act, the legislation that protects tech companies from liability for content posted by their users.

8Chan, which cooperated with law enforcement after the Christchurch attack, has criticised the bill, saying that any erosion of the legislation is “an affront to liberty and freedom of speech online.”

Ms Ardern has said she hopes that less mainstream platforms like 4Chan and 8Chan will become more open to stamping out extremist content if the major platforms can reach a consensus on the issue.

Given the free speech considerations, and the gargantuan task that tech companies face in monitoring online speech, there has been a focus on the role that artificial intelligence could play in blocking hateful content, including at a House hearing late last month.

But Mr Buchanan, the Georgetown expert, who attended the hearing, told the committee that automated systems alone would not be able to solve the problem.

Alex Stamos, a former chief security officer at Facebook and now the director of the Stanford Internet Observatory, said at the hearing that there were several steps that tech companies could take to address extreme content online, including being more transparent.

“While there is no single answer that will keep all parties happy, the platforms must do a much better job of elucidating their thinking processes and developing public criteria that bind them to the precedents they create with every decision,” Mr Stamos said.

“There remain many kinds of speech that are objectionable to some in society but not to the point where huge, democratically unaccountable corporations should completely prohibit such speech,” he added. “The decisions made in these grey areas create precedents that aim to serve public safety and democratic freedoms but can also imperil both.”

The New York Times
