Facebook Rolls out AI Suicide Prevention Tools Across Its Platform

Words Tori West

First announced last year, Facebook’s new “proactive detection” AI technology scans posts for patterns of suicidal thoughts. Instead of relying solely on user reports, the AI now flags worrying posts to human moderators, reducing the time it takes Facebook to send help.

In the UK, suicide rates are on the rise. According to Samaritans’ 2017 report, 6,188 suicides were registered in the UK and 451 in the Republic of Ireland. Over the past month of testing, Facebook has initiated more than a hundred “wellness checks”, with first responders visiting affected users.

 “With all the fear about how AI may be harmful in the future, it’s good to remind ourselves how AI is actually helping save people’s lives today.” – Mark Zuckerberg

In a Facebook post announcing the rollout, CEO Mark Zuckerberg wrote: “With all the fear about how AI may be harmful in the future, it’s good to remind ourselves how AI is actually helping save people’s lives today.” Explaining in more detail how the technology works, he added: “There’s a lot more we can do to improve this further. Today, these AI tools mostly use pattern recognition to identify signals – like comments asking if someone is okay – and then quickly report them to our teams working 24/7 around the world to get people help within minutes.”
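To give a sense of what “pattern recognition to identify signals” could look like in practice, here is a deliberately simplified sketch. It is not Facebook’s actual system, which reportedly relies on machine-learned classifiers; the phrases, scoring, and threshold below are hypothetical, and the only point it illustrates is that flagged posts are routed to human reviewers rather than acted on automatically.

```python
import re

# Illustrative only: a toy phrase-matching scorer, not Facebook's real model.
# The patterns and threshold are hypothetical stand-ins for learned signals.
CONCERN_PATTERNS = [
    r"\bare you (ok|okay)\b",        # concerned comments from friends
    r"\bis everything (ok|okay)\b",
    r"\bdo you need help\b",
]

def score_post(post_text, comments):
    """Return a crude risk score from pattern matches in a post and its comments."""
    score = 0
    for pattern in CONCERN_PATTERNS:
        score += len(re.findall(pattern, post_text, flags=re.IGNORECASE))
        for comment in comments:
            score += len(re.findall(pattern, comment, flags=re.IGNORECASE))
    return score

def flag_for_review(post_text, comments, threshold=2):
    """Queue the post for human moderators if the score crosses a (hypothetical) threshold."""
    return score_post(post_text, comments) >= threshold

# Example: two concerned comments push this post over the threshold,
# so it would be handed to a human review team for a decision.
print(flag_for_review("Feeling really low tonight.",
                      ["Are you okay?", "Do you need help? Message me."]))
```

The key design point, as described in Facebook’s announcement, is that the AI only surfaces candidates; trained human teams decide whether to contact the user or dispatch first responders.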

Zuckerberg added: “We’re going to keep working closely with our partners at Save.org, National Suicide Prevention Lifeline ‘1-800-273-TALK (8255)’, Forefront Suicide Prevent, and with first responders to keep improving. If we can use AI to help people be there for their family and friends, that’s an important and positive step forward.”

However, not everyone has shown the same level of enthusiasm as Zuckerberg. Munmun De Choudhury, an assistant professor in the School of Interactive Computing at Georgia Tech, praises the company for focusing on suicide prevention, but she would still like Facebook to be more transparent about its algorithms. In conversation with tech site Mashable, she explained: “This is not just another AI tool, it tackles a really sensitive issue. It’s a matter of somebody’s life and death.”

While Zuckerberg maintains that the AI technology is being used for the greater good, concern about how far Facebook is intruding on its users’ privacy is real. Facebook’s chief security officer Alex Stamos addressed worries that the company is taking its surveillance a step too far. Posting on Twitter, he acknowledged the “creepy” risks AI can pose, adding that “it’s important to set good norms today around weighing data use versus utility and be thoughtful about bias creeping in”.

As AI technology is still in its infancy, the exact repercussions and risks of adding it to social media platforms are unknown. But if the tools can save even one life, surely it’s worth the controversy?