Instagram adds new self-harm tools that can automatically detect dangerous content

Andrew Griffin
Wednesday 11 November 2020 10:51 GMT


Instagram has added new tools that it says should be able to better spot self-harm and suicide content and take it down.

The app has faced sustained criticism over its failure to protect vulnerable users from such posts.

The new feature aims to automatically identify that content and make it less visible in the app. If the tools are sufficiently confident, the posts may be removed entirely, without a human even necessarily looking at them, Instagram said.

The feature is already used on Facebook and Instagram outside the EU, where it includes additional layers: once flagged, posts are referred to human reviewers, who can take further action such as connecting the poster with local help organisations or, in the most severe cases, calling emergency services.

However, Instagram confirmed these referral aspects are not yet ready to be introduced to the UK and Europe because of data privacy considerations linked to the General Data Protection Regulation (GDPR).

The social media giant said it hoped it would be able to introduce the full set of tools in the future.

Instagram's public policy director in Europe, Tara Hopkins, said: "In the EU at the moment, we can only use that mix of sophisticated technology and human review element if a post is reported to us directly by a member of the community."

She said that because, in a small number of cases, a human reviewer would decide whether to send additional resources to a user, regulators could consider this a "mental health assessment" and therefore special category data, which receives greater protection under GDPR.

Ms Hopkins said the company was in discussions with the Irish Data Protection Commission (IDPC) - Facebook's lead regulator in the EU - and others over the tools and a potential introduction in the future.

"There are ongoing conversations that have been very constructive and there's a huge amount of sympathy for what we're trying to achieve and that balancing act of privacy and the safety of our users," she said.

In a blog post announcing the update, Instagram boss Adam Mosseri said it was an "important step" but that the company wanted to do "a lot more".

He said not having the full capabilities in place in the EU meant it was "harder for us to remove more harmful content, and connect people to local organisations and emergency services".

He added that the firm was in discussions with regulators and governments about "how best to bring this technology to the EU, while recognising their privacy considerations".

Facebook and Instagram are among the social media platforms to come under scrutiny over their handling of suicide and self-harm material, with particular concern about the impact of such content on vulnerable users, especially young people.

And fears about the impact of social media on vulnerable people have also increased amid cases such as that of 14-year-old schoolgirl Molly Russell, who took her own life in 2017 and was found to have viewed harmful content online.

Molly's father, Ian, who now campaigns for online safety, has previously said the "pushy algorithms" of social media "helped kill my daughter".

In September, Facebook and its family of apps were among the companies to agree to guidelines published by Samaritans in an effort to set industry standards on how to handle the issue.

Ms Hopkins said Instagram was trying to balance its policies on self-harm content by also "allowing space for admission" by people who have considered self-harm.

"It's okay to admit that and we want there to be a space on Instagram and Facebook for that admission," she said.

"We're told by experts that can help to destigmatise issues around suicide. It's a balancing act and we're trying to get to the right spot where we're able to provide that kind of platform in that space, while also keeping people safe from seeing this kind of content if they're vulnerable."

Additional reporting by agencies
