Inside Facebook’s Suicide Algorithm

January 8, 2019

Facebook is scanning nearly every post on the platform in an attempt to assess suicide risk.

Facebook passes the information along to law enforcement for wellness checks.

Privacy experts say Facebook’s failure to get affirmative consent from users for the program presents privacy risks that could lead to exposure or worse.

In March 2017, Facebook launched an ambitious project to prevent suicide with artificial intelligence.

Following a string of suicides that were live-streamed on the platform, the company sought to proactively address a serious problem by using an algorithm to detect signs of potential self-harm.

But more than a year later, after a wave of privacy scandals called Facebook's data use into question, the prospect of Facebook creating and storing actionable mental health data without user consent has numerous privacy experts worried about whether the company can be trusted to make and store inferences about the most intimate details of our minds.

Facebook is creating new health information about users, but it isn’t held to the same privacy standard as healthcare providers

The algorithm touches nearly every post on Facebook, rating each piece of content on a scale from zero to one, with one representing the highest likelihood of “imminent harm,” according to a Facebook representative.
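Facebook has not published how the scoring model actually works, so the following is only a rough, hypothetical sketch of the behavior it has described: every post gets a score between zero and one, and posts above some review threshold are queued for human reviewers. The phrases, weights, and threshold below are invented for illustration, not drawn from Facebook's system.

```python
# Illustrative sketch only -- Facebook's actual model, features, and thresholds
# are not public. This toy scorer assigns each post a score in [0, 1], where
# higher values indicate a greater likelihood of "imminent harm," and flags
# posts above an assumed review threshold for human review.

# Hypothetical phrase weights; a real system would use a trained classifier.
RISK_PHRASES = {
    "want to end it": 0.6,
    "can't go on": 0.4,
    "goodbye everyone": 0.3,
}

REVIEW_THRESHOLD = 0.5  # assumed cutoff for escalating a post to a human reviewer


def score_post(text: str) -> float:
    """Return a risk score between 0 and 1 for a single post."""
    lowered = text.lower()
    score = sum(w for phrase, w in RISK_PHRASES.items() if phrase in lowered)
    return min(score, 1.0)


def triage(posts: list[str]) -> list[tuple[str, float]]:
    """Return (post, score) pairs whose score meets the review threshold."""
    return [(p, s) for p in posts if (s := score_post(p)) >= REVIEW_THRESHOLD]


if __name__ == "__main__":
    sample = ["Great game last night!", "I can't go on, I just want to end it."]
    for post, score in triage(sample):
        print(f"{score:.2f} -> flagged for review: {post!r}")
```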

That data creation process alone raises concern for Natasha Duarte, a policy analyst at the Center for Democracy and Technology.

“I think this should be considered sensitive health information,” she said. “Anyone who is collecting this type of information or who is making these types of inferences about people should be considering it as sensitive health information and treating it really sensitively as such.”

Data protection laws that govern health information in the US currently don't apply to the data created by Facebook's suicide prevention algorithm, according to Duarte. In the US, information about a person's health is protected by the Health Insurance Portability and Accountability Act (HIPAA), which mandates specific privacy protections, including encryption and sharing restrictions, when handling health records. But these rules only apply to organizations providing healthcare services, such as hospitals and insurance companies.

Companies such as Facebook that make inferences about a person's health from non-medical data sources are not subject to the same privacy requirements. Facebook acknowledges as much and says it does not classify the information it creates as sensitive health information.

Facebook hasn’t been transparent about the privacy protocols surrounding the suicide-risk data it creates. A Facebook representative told Business Insider that risk scores too low to merit review or escalation are stored for 30 days before being deleted, but Facebook did not respond when asked how long, and in what form, data about higher risk scores and subsequent interventions is stored.

Facebook would not elaborate on why data was being kept if no escalation was made.
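The only retention detail Facebook has confirmed is that unescalated scores are deleted after 30 days. Purely as an illustration of that kind of rule, and assuming a stored record keeps its scoring timestamp and an escalation flag, a retention check might look like the sketch below; how Facebook actually handles escalated records is unknown.

```python
from datetime import datetime, timedelta, timezone

# Only the 30-day rule for unescalated scores comes from Facebook's statement;
# the record layout and the handling of escalated scores are assumptions.
RETENTION_FOR_UNESCALATED = timedelta(days=30)


def is_expired(scored_at: datetime, escalated: bool) -> bool:
    """Return True if an unescalated risk score has outlived its 30-day window."""
    if escalated:
        # Retention for escalated records was not disclosed, so never expire here.
        return False
    return datetime.now(timezone.utc) - scored_at > RETENTION_FOR_UNESCALATED


if __name__ == "__main__":
    old_score_time = datetime.now(timezone.utc) - timedelta(days=45)
    print(is_expired(old_score_time, escalated=False))  # True: past 30 days
```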

Could Facebook’s next big data breach include your mental health data?

The risk of storing such sensitive information is high without proper protection and foresight, according to privacy experts.

The clearest risk is the information’s susceptibility to a data breach.

“It’s not a question of if they get hacked, it’s a question of when,” said Matthew Erickson of the consumer privacy group the Digital Privacy Alliance.

In September, Facebook revealed that a large-scale data breach had exposed the profiles of around 30 million people. For 400,000 of those users, posts and photos were exposed as well. Facebook would not comment on whether data from its suicide prevention algorithm has ever been part of a data breach.

Following the public airing of data from the hack of Ashley Madison, a dating site aimed at married people, the risk of holding such sensitive information is clear, according to Erickson: “Will someone be able to Google your mental health information from Facebook the next time you go for a job interview?”

Dr. Dan Reidenberg, a nationally recognized suicide prevention expert who helped Facebook launch its suicide prevention program, acknowledged the risks of holding and creating such data, saying, “pick a company that hasn’t had a data breach anymore.”

But Reidenberg said the danger lies more in the stigma around mental health issues. He argues that discrimination against mental illness is barred by the Americans with Disabilities Act, making the worst potential outcomes addressable in court.
