Facebook using artificial intelligence to help suicidal users
The company has developed algorithms designed to flag up warning signs
Facebook has started using artificial intelligence to identify users who are potentially at risk of taking their own lives.
The social network has developed algorithms capable of scanning posts and comments for warning signs.
These could be phrases such as “Are you okay?” or “I’m worried about you”, or more general talk of sadness and pain.
The AI tool would send such posts to a human review team, which would get in touch with the user thought to be at risk and offer help, in the form of contact details for support services or a chat with a member of staff through Facebook Messenger.
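Facebook has not published its classifier, but the behaviour described above, scanning text for warning phrases and routing hits to a human review queue, can be illustrated in outline. In the minimal sketch below, the phrase list, the ReviewQueue class and the flag_if_concerning function are all illustrative assumptions rather than Facebook's implementation, which would use a trained model rather than fixed keywords.

```python
# Hypothetical sketch of a phrase-based flagging pipeline, loosely modelled
# on the behaviour described in the article. All names are invented for
# illustration; Facebook's actual system is proprietary.
from dataclasses import dataclass, field

# Example warning signals taken from the article; a real system would use
# a trained classifier rather than a fixed phrase list.
WARNING_PHRASES = ["are you okay", "i'm worried about you"]

@dataclass
class ReviewQueue:
    """Stands in for the human review team that contacts at-risk users."""
    pending: list = field(default_factory=list)

    def escalate(self, post_id: str, text: str) -> None:
        self.pending.append((post_id, text))

def flag_if_concerning(post_id: str, text: str, queue: ReviewQueue) -> bool:
    """Send a post to human review if it contains a warning phrase."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in WARNING_PHRASES):
        queue.escalate(post_id, text)
        return True
    return False

# Usage: posts and comments feed through the same check.
queue = ReviewQueue()
flag_if_concerning("post-42", "Are you okay? You seemed down today.", queue)
print(queue.pending)  # [('post-42', 'Are you okay? You seemed down today.')]
```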
The site had previously relied on other users reporting worrying updates.
“The AI is actually more accurate than the reports that we get from people that are flagged as suicide and self-injury,” Facebook product manager Vanessa Callison-Burch told BuzzFeed. “The people who have posted that content [that AI reports] are more likely to be sent resources of support versus people reporting to us.”
The system is currently being tested in the US.
The site has also announced new safety features for Facebook Live, which has been used to live stream several suicides.
Users can now flag concerning Facebook Live broadcasts to the site, which will show them advice on how to help and escalate the video to staff for immediate review.
The goal is to provide help as quickly as possible, mid-broadcast rather than post-broadcast.
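As a rough illustration of that flow, the sketch below escalates a reported broadcast for immediate staff review while leaving the stream running and returning support advice to the viewer who reported it. The function and field names are invented for this example; Facebook's internal tooling is not public.

```python
# Hypothetical sketch of the Live reporting flow described above; not
# Facebook's actual API.
def handle_live_report(stream_id: str, reporter_id: str,
                       review_queue: list) -> dict:
    # Escalate the broadcast to staff for immediate, mid-broadcast review.
    review_queue.append({"stream": stream_id, "priority": "immediate"})
    # The stream itself is left running, so friends and family can still
    # reach out to the person in distress (see the researcher's comments below).
    return {
        "reporter": reporter_id,
        "advice": "resources on how to support someone in distress",
    }

# Usage: a viewer reports a worrying broadcast.
staff_queue = []
response = handle_live_report("live-stream-7", "viewer-99", staff_queue)
print(response["advice"])
print(staff_queue)  # [{'stream': 'live-stream-7', 'priority': 'immediate'}]
```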
“Some might say we should cut off the stream of the video the moment there is a hint of somebody talking about suicide,” said Jennifer Guadagno, the project’s lead researcher.
“But what the experts emphasised was that cutting off the stream too early would remove the opportunity for people to reach out and offer support. So, this opens up the ability for friends and family to reach out to a person in distress at the time they may really need it the most.”
Facebook CEO Mark Zuckerberg described plans to use AI to identify worrying content in a recently published manifesto.
“Looking ahead, one of our greatest opportunities to keep people safe is building artificial intelligence to understand more quickly and accurately what is happening across our community,” it read.
An earlier version of the manifesto said that it would take “many years to develop” AI systems capable of identifying issues such as bullying and terrorism risks online, but the section was removed before the document was publicly issued.