
AI Case Study

Facebook attempts to assess the suicide risk of its users based on their public posts on the platform using AI

Since November 2017, Facebook has been leveraging artificial intelligence to analyse its users' public posts for possible signs of suicidal behaviour. It does so by locating phrases that could express distress and scoring them according to severity. The platform may then show ads about suicide hotlines, or advise other users to report incidents or call for help. According to the company, moderators have intervened in urgent situations where immediate action was needed, by calling emergency services.
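
Facebook has not disclosed how its classifier works internally; the description above covers only the observable workflow (match phrases that may signal distress, score them by severity, and act once a score crosses a threshold). The short Python sketch below illustrates that workflow. The phrase list, weights, thresholds and function names are invented for illustration and are not Facebook's.

```python
# Illustrative sketch only: a toy keyword-based severity scorer and triage step.
# All phrases, weights and thresholds are invented; Facebook's actual model is undisclosed.

DISTRESS_PHRASES = {
    "are you okay": 1,
    "i can't go on": 3,
    "i want to end it": 5,
}

REVIEW_THRESHOLD = 3      # assumed score at which a human moderator would review the post
ESCALATION_THRESHOLD = 5  # assumed score at which emergency escalation might be considered


def score_post(text: str) -> int:
    """Return a crude severity score based on matched distress phrases."""
    lowered = text.lower()
    return sum(weight for phrase, weight in DISTRESS_PHRASES.items() if phrase in lowered)


def triage(text: str) -> str:
    """Map a post to an action tier, mirroring the workflow described above."""
    score = score_post(text)
    if score >= ESCALATION_THRESHOLD:
        return "escalate to human moderators, who may contact emergency services"
    if score >= REVIEW_THRESHOLD:
        return "queue for review and surface support resources"
    if score > 0:
        return "show suicide-prevention resources (e.g. hotline information)"
    return "no action"


if __name__ == "__main__":
    print(triage("Friend asked: are you okay? I can't go on like this"))
```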

Industry

Technology

Internet Services Consumer

Project Overview

"Facebook has been using artificial intelligence since November, 2017, to locate phrases that may be signs of distress – for instance, one user asking another, “Are you okay?” – to send pop-up ads about suicide hotlines or highlight ways people can respond when they are worried about someone, by prompting them to ask certain questions, report the incident to Facebook or call for help themselves. The approach is meant to provide support, not predict individual behaviour, explains Kevin Chan, the head of public policy at Facebook Canada. But in extreme circumstances where harm appears imminent, Mr. Chan says, Facebook moderators have contacted emergency services, though how often this has happened he declined to say." (businessinsider)

"But over a year later, following a wave of privacy scandals that brought Facebook's data-use into question, the idea of Facebook creating and storing actionable mental health data without user-consent has numerous privacy experts worried about whether Facebook can be trusted to make and store inferences about the most intimate details of our minds.

Data protection laws that govern health information in the US currently don't apply to the data that is created by Facebook's suicide prevention algorithm, according to Duarte. In the US, information about a person's health is protected by the Health Insurance Portability and Accountability Act (HIPAA) which mandates specific privacy protections, including encryption and sharing restrictions, when handling health records. But these rules only apply to organizations providing healthcare services such as hospitals and insurance companies.

Facebook hasn't been transparent about the privacy protocols surrounding the data around suicide that it creates. A Facebook representative told Business Insider that suicide risk scores that are too low to merit review or escalation are stored for 30 days before being deleted, but Facebook did not respond when asked how long and in what form data about higher suicide risk scores and subsequent interventions are stored." (businessinsider)
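
The one retention rule Facebook has confirmed (scores too low to merit review are kept for 30 days, then deleted) can be sketched as a simple purge step. The record fields, threshold value and function name below are assumptions for illustration; since retention for higher-risk records is undisclosed, the sketch leaves them untouched.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative sketch of the stated retention rule: low-severity risk scores
# are deleted after 30 days. Field names and the review threshold are assumptions.

LOW_SCORE_RETENTION = timedelta(days=30)
REVIEW_THRESHOLD = 3  # assumed cut-off below which a score never reaches a reviewer


@dataclass
class RiskRecord:
    post_id: str
    score: int
    created_at: datetime  # expected to be timezone-aware (UTC)


def purge_expired_low_scores(records: list[RiskRecord],
                             now: datetime | None = None) -> list[RiskRecord]:
    """Drop low-severity records older than 30 days; keep everything else.

    Facebook has not said how long higher-risk records or intervention data
    are retained, so this sketch does not touch them.
    """
    now = now or datetime.now(timezone.utc)
    return [
        r for r in records
        if r.score >= REVIEW_THRESHOLD or (now - r.created_at) < LOW_SCORE_RETENTION
    ]
```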

Reported Results

Results undisclosed

Technology

Function

Background

"Following a string of suicides that were live-streamed on the platform, the effort to use an algorithm to detect signs of potential self-harm sought to proactively address a serious problem." (businessinsider)

Benefits

Data

Users' public posts on Facebook
