Facebook rolls out AI to detect suicidal posts before they're reported

From TechCrunch - November 27, 2017

This is software to save lives. Facebook's new "proactive detection" artificial intelligence technology will scan all posts for patterns of suicidal thoughts and, when necessary, send mental health resources to the user at risk or their friends, or contact local first responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.

Facebook previously tested using AI to detect troubling posts and more prominently surface suicide reporting options to friends in the U.S. Now Facebook will scour all types of content around the world with this AI, except in the European Union, where General Data Protection Regulation privacy laws on profiling users based on sensitive information complicate the use of this tech.

Facebook will also use AI to prioritize particularly risky or urgent user reports so they're more quickly addressed by moderators, and tools to instantly surface local-language resources and first-responder contact info. It's also dedicating more moderators to suicide prevention, training them to deal with the cases 24/7, and now has 80 local partners, like the National Suicide Prevention Lifeline and Forefront, from which to provide resources to at-risk users and their networks.

"This is about shaving off minutes at every single step of the process, especially in Facebook Live," says VP of product management Guy Rosen. Over the past month of testing, Facebook has initiated more than 100 wellness checks with first responders visiting affected users. "There have been cases where the first responder has arrived and the person is still broadcasting."

The idea of Facebook proactively scanning the content of people's posts could trigger some dystopian fears about how else the technology could be applied. Facebook didn't have answers about how it would avoid scanning for political dissent or petty crime, with Rosen merely saying, "we have an opportunity to help here so we're going to invest in that." There are certainly massive beneficial aspects of the technology, but it's another space where we have little choice but to hope Facebook doesn't go too far.

[Update: Facebook's chief security officer Alex Stamos responded to these concerns with a heartening tweet signaling that Facebook does take seriously the responsible use of AI.

Facebook CEO Mark Zuckerberg praised the product update in a post today, writing that "In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate."]

Facebook trained the AI by finding patterns in the words and imagery used in posts that have been manually reported for suicide risk in the past. It also looks for comments like "Are you OK?" and "Do you need help?"
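Facebook's actual model is not public, but the two signals the article describes (risk language in the post itself, and concerned questions from friends in the comments) can be illustrated with a deliberately simplified sketch. Everything below, including the pattern lists and the function name `flag_for_review`, is hypothetical; a production system would use a learned classifier over many more features, not hand-written patterns.

```python
import re

# Hypothetical illustration only -- not Facebook's real system.
# A post is surfaced for human review if its own text matches risk
# phrases, or if friends' comments contain concerned questions like
# "Are you OK?", the two signals described in the article.
RISK_PATTERNS = [r"\bwant to die\b", r"\bend it all\b", r"\bno reason to live\b"]
CONCERN_PATTERNS = [r"\bare you ok\b", r"\bdo you need help\b"]

def flag_for_review(post_text, comments):
    """Return True if the post should be sent to a human moderator."""
    text = post_text.lower()
    if any(re.search(p, text) for p in RISK_PATTERNS):
        return True
    # Concerned replies from friends are an independent signal.
    return any(
        re.search(p, c.lower()) for c in comments for p in CONCERN_PATTERNS
    )

print(flag_for_review("I feel like there's no reason to live", []))     # True
print(flag_for_review("Having a rough week", ["Are you OK? Call me"]))  # True
print(flag_for_review("Great game last night!", ["So fun!"]))           # False
```

Note that the comment signal fires even when the post itself looks benign, which mirrors the article's point that friends' reactions carry information the post text alone may not.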

How suicide reporting works on Facebook now
