Virtual Smoke Detectors: The Rising Role of AI in Suicide Prevention

It may sound like something out of an Orwellian dystopia, but this is today’s reality. Increasingly, Artificial Intelligence (AI) is being used to analyze your online activity, both to predict your future behaviour and to serve as an indicator of your personal health. From reading social media posts and calls to measuring the time between taps and clicks on your phone, AI builds a “digital phenotype” of a person’s behaviour, which can open a window into their well-being and state of mind.

In recent years, social media giants like Facebook have adopted these AI tools to implement suicide intervention programs that scan users’ posts, comments and videos for indications of immediate suicide risk. Facebook’s algorithms use pattern recognition software that looks for certain signals, such as comments asking if someone is okay, and then alerts human reviewers at the company. In some cases, the user is sent supportive suggestions like ‘Call a helpline’; in others, local law enforcement is notified. Similarly, AI is being utilized in Chinese social media applications such as Weibo, which monitor users’ posts and comments for suicidal thoughts and send flagged users intervention messages advising them to seek help.
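To make the idea concrete, here is a minimal toy sketch in Python of what such a flagging pipeline could look like. Everything in it, including the pattern list, the threshold and the function names, is an illustrative assumption; Facebook’s and Weibo’s actual classifiers are proprietary, trained models rather than keyword lists.

```python
# Toy illustration of a post-flagging pipeline (hypothetical; not
# Facebook's or Weibo's actual system). A real system would use a
# trained language model, not a hand-written keyword list.
import re

# Hypothetical risk signals, including the "are you okay?" style of
# concerned comment the article mentions.
RISK_PATTERNS = [
    r"\bare you (ok|okay|alright)\b",
    r"\bi (can'?t|cannot) go on\b",
    r"\bwant to (die|end it)\b",
]

REVIEW_THRESHOLD = 1  # assumed cutoff for escalating to human review


def risk_score(post: str) -> int:
    """Count how many risk patterns appear in a post."""
    text = post.lower()
    return sum(1 for pattern in RISK_PATTERNS if re.search(pattern, text))


def triage(post: str) -> str:
    """Route a post: most go nowhere, flagged ones go to human review."""
    if risk_score(post) >= REVIEW_THRESHOLD:
        return "escalate_to_human_review"
    return "no_action"


if __name__ == "__main__":
    print(triage("had a great day at the beach"))          # no_action
    print(triage("Are you okay? I'm worried about you"))   # escalate_to_human_review
```

Even in this toy version, the key design point is visible: the software only triages; a human reviewer makes the final call on what happens next.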

Besides suicide prevention, AI is making inroads into other aspects of mental healthcare. Companies such as Mindstrong Health have developed a research platform that monitors users’ phone habits, looking at changes in taps and clicks for hints about the mood and memory changes associated with depression. Programs like this may open a new avenue for mental health professionals to diagnose illness. As such, the incorporation of AI may drastically change the nature of healthcare, shifting it from care based on patient-doctor interaction to care that depends on AI. It may also signal a rise in predictive clinical analytics, where patient histories and activities are analyzed to forecast the likelihood of future mental illness or of a patient relapsing.
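As a rough illustration of how tap-and-click timing could become a behavioural signal, the sketch below derives two simple features from tap timestamps. The timestamps, feature names and baseline comparison are assumptions for illustration only, not Mindstrong’s proprietary method.

```python
# Minimal sketch of digital phenotyping from tap timing (illustrative
# assumptions only; not Mindstrong's actual, proprietary approach).
from statistics import mean, stdev


def inter_tap_intervals(tap_times: list[float]) -> list[float]:
    """Seconds between consecutive screen taps."""
    return [b - a for a, b in zip(tap_times, tap_times[1:])]


def tap_features(tap_times: list[float]) -> dict[str, float]:
    """Crude timing features: slower, more erratic tapping is the kind
    of change such platforms reportedly correlate with mood shifts."""
    gaps = inter_tap_intervals(tap_times)
    return {"mean_gap_s": mean(gaps), "gap_variability_s": stdev(gaps)}


# Hypothetical tap timestamps (in seconds) from two sessions.
baseline = [0.0, 0.4, 0.9, 1.3, 1.8, 2.2]
today = [0.0, 1.1, 1.5, 3.2, 3.9, 6.0]

b, t = tap_features(baseline), tap_features(today)
print(b)
print(t)
# A sustained drift from a user's own baseline, not any single value,
# is what would plausibly be flagged for follow-up.
print("slower than baseline:", t["mean_gap_s"] > b["mean_gap_s"])
```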

Noble as the endeavour may be, the role of AI in suicide prevention and mental healthcare raises an important question: do programs that actively monitor people’s social media activity and posts push existing ethical and legal boundaries? Is it an intrusion into people’s privacy to monitor social media posts that are already public? Certainly, people choose to share information about their lives and how they are feeling on public platforms; it is a Faustian bargain many of us have made in exchange for the convenience of the internet.

While that is true, processing social media activity to predict and influence our future behaviour, to the point of police intervention in a person’s life, seems problematic. Moreover, most of these companies do not give people the option to opt out of the program. This is also why Facebook’s suicide intervention program is not available in the EU, where stringent data protection laws prevent the processing of a person’s personal data without their consent. One must also consider the issue of false positives: has AI advanced enough to detect the nuances in language needed to make an accurate diagnosis? And who is to be held accountable if a person’s life is seriously disrupted by a wrong one?

Here’s another problem: in involving law enforcement, Facebook is assuming the authority and autonomy of a public health agency without being regulated as one. It has not revealed the process by which human reviewers decide whether or not to call law enforcement. Who is to say where the mass profiling stops? How do we know this data won’t be misused by employers who choose not to hire someone with suicidal tendencies? Facebook is already in hot water over the Cambridge Analytica scandal and over sharing user data with tech companies including Amazon and Spotify. So who is keeping Big Brother in check and ensuring this new data is not improperly shared?

There is no doubt that Facebook and other companies using AI for suicide prevention are navigating tricky ground. However, privacy barriers have eroded dramatically over the last decade, and the misuse of AI for sinister purposes can seem inevitable. Our social media activity is already used to build advertising profiles so that companies can target their products at us; turning this existing technology toward saving lives seems a step in the right direction, if it is properly regulated. Our social media activity can reveal a great deal about us, and it will be interesting to see how that information is used in healthcare in the near future.

Written by Shomaila Rashid  

Sources:

1. How China’s AI technology can help Twitter’s suicidal users. (2018, April 20). Retrieved from https://www.scmp.com/news/china/society/article/2131853/how-chinese-ai-technology-may-help-find-suicidal-posts-twitter

2. Lage, A. (2018, December 17). Facebook Can Now Detect If A User Is At Risk For Suicide Just Through Their Posts. Retrieved from https://www.bustle.com/p/facebooks-new-ai-technology-will-be-able-to-detect-suicidal-posts-heres-what-theyll-do-about-it-6336690

3. Metz, R. (2018, October 30). The smartphone app that can tell you’re depressed before you know it yourself. Retrieved from https://www.technologyreview.com/s/612266/the-smartphone-app-that-can-tell-youre-depressed-before-you-know-it-yourself/

4. Murphy, M. (2017, November 28). EU data laws block Facebook’s suicide prevention tool. Retrieved from https://www.telegraph.co.uk/technology/2017/11/28/eu-data-laws-block-facebooks-suicide-prevention-tool/

5. Singer, N. (2018, February 25). How Companies Scour Our Digital Lives for Clues to Our Health. Retrieved from https://www.nytimes.com/2018/02/25/technology/smartphones-mental-health.html

6. Singer, N. (2018, December 31). In Screening for Suicide Risk, Facebook Takes On Tricky Public Health Role. Retrieved from https://www.nytimes.com/2018/12/31/technology/facebook-suicide-screening-algorithm.htm
