Facebook Announces It Will Use A.I. To Scan Your Thoughts

November 28, 2017

Only a few years ago, the idea that artificial intelligence (AI) might be used to analyze aberrant human behavior on social media and other online platforms and report it to law enforcement was merely the far-out premise of dystopian movies such as Minority Report. Now Facebook proudly brags that it will use AI to “save lives” based on behavior and thought pattern recognition.

What could go wrong?

The latest puff piece in TechCrunch, profiling the innocuous-sounding “roll out” of AI (as if it were a mere modest software update) “to detect suicidal posts before they’re reported,” opens with the glowingly optimistic line, “This is software to save lives” – so who could possibly doubt such a wonderful and benign initiative, one which involves AI evaluating people’s mental health? TechCrunch’s Josh Constine begins:

This is software to save lives. Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.
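The quoted passage describes the basic pipeline: an automated scorer scans posts, and anything crossing a risk threshold is routed to human moderators rather than waiting for a user report. Facebook has not disclosed its model, so the following is only a toy illustration of that triage flow; the keyword weights, the `risk_score` function, and the threshold are all invented stand-ins for whatever pattern recognition the real system uses.

```python
# Toy sketch of "proactive detection" triage: score each post and
# queue high-scoring ones for human moderator review, instead of
# waiting for other users to report them.
# The terms, weights, and threshold below are purely illustrative.

RISK_TERMS = {"hopeless": 0.4, "goodbye": 0.3, "alone": 0.2, "end it": 0.6}
THRESHOLD = 0.5

def risk_score(post: str) -> float:
    """Crude stand-in for a trained classifier: sum keyword weights, capped at 1.0."""
    text = post.lower()
    return min(1.0, sum(w for term, w in RISK_TERMS.items() if term in text))

def triage(posts):
    """Return (score, post) pairs that cross the threshold, highest risk first."""
    scored = [(risk_score(p), p) for p in posts]
    return sorted([sp for sp in scored if sp[0] >= THRESHOLD], reverse=True)

if __name__ == "__main__":
    feed = [
        "Feeling hopeless and alone, goodbye everyone",
        "Great game last night!",
    ]
    for score, post in triage(feed):
        print(f"flag for moderator (score {score:.1f}): {post}")
```

A real deployment would replace the keyword scorer with a trained language model, but the routing logic – automated scoring feeding a human review queue – is the part the article is describing.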

CEO Mark Zuckerberg has long hinted that his team has been wrestling with ways to prevent what appears to be a disturbing rise in live-streamed suicides, as well as the much larger social problem of online bullying and harassment. One recent example that gained international media attention was a bizarre incident out of Turkey, where a distraught father shot himself on Facebook Live after announcing that his daughter was getting married without his permission.

Though the example actually demonstrates the endlessly complex and unforeseen variables involved in human decision making and the human psyche – in this case, rigid Middle Eastern cultural taboos and stigma clearly played a part – TechCrunch holds it up as something AI could possibly prevent.

Earlier this year Zuckerberg wrote in a public post that “There have been terribly tragic events – like suicides, some live streamed – that perhaps could have been prevented if someone had realized what was happening and reported them sooner… Artificial intelligence can help provide a better approach.” And in a post yesterday announcing the new AI suicide prevention tool integration, he wrote that “In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate.”

Naturally, we must ask: what does Mark mean by the eerily ambiguous claim that AI “will be able to identify different issues beyond suicide as well”?
