AI Steps Up: Real-Time Monitoring in Suicide Prevention

Suicide represents one of the most complex and heartbreaking challenges in public health. One major difficulty in preventing suicide is knowing when someone is struggling.

Author

  • Ruth Melia

    Associate Professor in Clinical Psychology, University of Limerick

Suicidal thoughts and behaviour can come and go quickly, and they're not always present when someone sees a doctor or therapist, making them hard to detect with standard checklists.

Today, many of us use digital devices to track our physical health: counting steps, monitoring sleep, or checking screen time. Researchers are now starting to use similar tools to better understand mental health.

One method, called ecological momentary assessment (EMA), collects real-time information about a person's mood, thoughts, behaviour and surroundings using a smartphone or wearable device. It does this by prompting the person to input information (active EMA) or collecting it automatically using sensors (passive EMA).
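In software terms, the two modes of EMA come down to a scheduled self-report prompt and a background sensor log. The Python sketch below is purely illustrative: the questions, sensor readings and values are invented for the example rather than drawn from any real study protocol.

```python
import random
from datetime import datetime

# Illustrative only: the questions and sensor fields below are invented
# for this sketch, not taken from any real EMA study protocol.

ACTIVE_PROMPTS = [
    "How would you rate your mood right now? (1-10)",
    "How connected to other people do you feel? (1-10)",
]

def active_ema_prompt() -> dict:
    """Active EMA: ask the person to self-report at a scheduled moment."""
    responses = {}
    for question in ACTIVE_PROMPTS:
        answer = input(f"{question} ")  # delivered as a phone notification in practice
        responses[question] = int(answer)
    responses["timestamp"] = datetime.now().isoformat()
    return responses

def passive_ema_sample() -> dict:
    """Passive EMA: log data automatically, with no input from the person.
    Random numbers stand in for real accelerometer, sleep and screen-time sensors."""
    return {
        "timestamp": datetime.now().isoformat(),
        "steps_last_hour": random.randint(0, 2000),
        "screen_minutes_last_hour": random.randint(0, 60),
        "hours_slept_last_night": round(random.uniform(3, 9), 1),
    }

if __name__ == "__main__":
    print(passive_ema_sample())   # runs quietly in the background
    # active_ema_prompt()         # would fire only at scheduled prompt times
```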


Research has shown EMA can be safe for monitoring suicide risk, which includes a range of experiences from suicidal thoughts to attempts and completed suicide.

Studies with adults show that this kind of monitoring doesn't increase risk. Instead, it gives us a more detailed and personal view of what someone is going through, moment by moment. So how can this information actually help someone at risk?

Adaptive interventions

One exciting use is the creation of adaptive interventions: real-time, personalised responses delivered right through a person's phone or device. For example, if someone's data shows signs of distress, their device might gently prompt them to follow a step on their personal safety plan, which they created earlier with a mental health professional.
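At its simplest, an adaptive intervention is a rule linking incoming EMA data to a pre-agreed response. The sketch below is a minimal, hypothetical illustration: the distress threshold, feature name and safety-plan wording are placeholders, and a real system would rely on a clinically validated trigger and the person's own plan.

```python
# Minimal rule-based sketch of an adaptive intervention. The threshold,
# feature name and safety-plan steps are hypothetical placeholders.

SAFETY_PLAN = [
    "Step 1: Notice your warning signs and pause.",
    "Step 2: Use a coping strategy you chose earlier, such as a breathing exercise.",
    "Step 3: Reach out to one of the people on your contact list.",
    "Step 4: Contact your mental health service or a crisis line.",
]

DISTRESS_THRESHOLD = 7  # hypothetical cut-off on a 1-10 self-reported distress scale

def check_and_respond(latest_ema: dict):
    """Return a gentle prompt if the latest EMA entry suggests rising distress."""
    if latest_ema.get("distress", 0) >= DISTRESS_THRESHOLD:
        # Start with the least intensive step; escalation logic is omitted here.
        return "You planned for moments like this. " + SAFETY_PLAN[0]
    return None  # no intervention needed right now

print(check_and_respond({"distress": 8, "timestamp": "2024-05-01T22:15:00"}))
```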

Safety plans are proven tools in suicide prevention, but they're most helpful when people can access and use them when they're needed most. These digital interventions can offer support right when it matters, in the person's own environment.

There are still important questions: what kind of changes in a person's data should trigger an alert? When is the best time to offer help? And what form should that help take?

These are the kinds of questions that artificial intelligence (AI) - and specifically machine learning - is helping us answer.

Machine learning is already being used to build models that can predict suicide risk by noticing subtle changes in a person's feelings, thoughts, or behaviour. It's also been used to predict suicide rates across larger populations.
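To make that concrete, here is a toy sketch of the idea, assuming scikit-learn is available: "subtle changes" are turned into features by comparing each new reading with the person's own recent baseline, and a standard classifier is fitted to them. All of the data, window sizes and labels are synthetic and purely illustrative; this is not a clinical model.

```python
# Toy sketch: change-based features computed against a person's own baseline,
# fed to an off-the-shelf classifier. Data and labels are fabricated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

def change_features(mood_history: np.ndarray) -> np.ndarray:
    """Turn a week of daily mood ratings into change-based features."""
    baseline = mood_history[:-1].mean()
    return np.array([
        mood_history[-1] - baseline,   # today's deviation from the recent baseline
        np.diff(mood_history).std(),   # how unstable mood has been day to day
        mood_history.min(),            # lowest point in the window
    ])

# Fabricated training data: 200 people x 7 daily mood ratings (1-10 scale)
histories = rng.uniform(1, 10, size=(200, 7))
X = np.array([change_features(h) for h in histories])
y = (X[:, 0] < -2).astype(int)  # made-up labels: a sharp drop from baseline counts as "at risk"

model = RandomForestClassifier(random_state=0).fit(X, y)
new_week = np.array([6, 7, 6, 7, 6, 5, 3])  # mood dipping at the end of the week
print("predicted risk class:", model.predict([change_features(new_week)])[0])
```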

The models developed in these studies have performed well on the data they were trained on. But there are still concerns. Privacy is a big one, especially when social media or personal data is involved.

There's also a lack of diversity in the data used to train these models, which means they might not work equally well for everyone. And it's challenging to apply models developed in one country or setting to another.

Still, research shows that machine learning models can predict suicide risk more accurately than the traditional risk-assessment tools clinicians use. Partly because those simple risk scores perform so poorly on their own, mental health guidelines now recommend moving away from using them to decide who gets care.

Instead, they suggest a more flexible, person-centred approach - one that's built around open conversations and planning with the person at risk.

Predictions, accuracy and trust

In my research, I looked at how AI is being used with EMA in suicide studies. Most of the studies involved people getting care in hospitals or mental health clinics. In those settings, EMA was able to predict things like suicidal thoughts after discharge.

While many studies we looked at reported how accurate their models were, fewer looked at how often the models made mistakes, like predicting someone is at risk when they're not (false positives), or missing someone who is at risk (false negatives). To help improve this, we developed a reporting guide to make sure future research is clearer and more complete.
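A small worked example shows why this matters. The numbers below are invented for illustration: when the event being predicted is relatively rare, a model can report high overall accuracy while still missing people at risk and generating many false alarms.

```python
# Illustrative numbers only: a hypothetical confusion matrix showing why
# reporting accuracy alone can hide clinically important errors.
true_positives  = 40   # flagged as at risk, and were
false_positives = 60   # flagged as at risk, but were not
false_negatives = 10   # at risk, but missed by the model
true_negatives  = 890  # correctly not flagged

total = true_positives + false_positives + false_negatives + true_negatives
accuracy    = (true_positives + true_negatives) / total
sensitivity = true_positives / (true_positives + false_negatives)  # how many at-risk people are caught
precision   = true_positives / (true_positives + false_positives)  # how many alerts are correct

print(f"accuracy:    {accuracy:.2f}")    # 0.93 - looks impressive
print(f"sensitivity: {sensitivity:.2f}") # 0.80 - 1 in 5 at-risk people still missed
print(f"precision:   {precision:.2f}")   # 0.40 - most alerts are false alarms
```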

Another promising area is using AI as a support tool for mental health professionals. By analysing large sets of data from health services, AI could help predict how someone is doing and which treatments might work best for them.

But for this to work, professionals need to trust the technology. That's where explainable AI comes in: systems that not only give a result but also explain how they got there. This makes it easier for clinicians to understand and use AI insights, much like how they use questionnaires and other tools today.
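As a rough illustration of the idea, assuming scikit-learn: for a simple linear model, each feature's contribution to one person's prediction can be read off directly, so a clinician could see which inputs pushed the score up or down. Everything here (features, data and labels) is synthetic and illustrative; real explainable-AI tooling is more sophisticated, but the principle is the same.

```python
# Minimal sketch of explainability for a linear model: each feature's
# additive contribution to the log-odds is its coefficient times its value.
# Features, data and labels are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["mood", "hours_slept", "connectedness"]
X = rng.uniform(1, 10, size=(300, 3))
y = (X[:, 0] < 4).astype(int)  # fabricated labels just to fit something

model = LogisticRegression().fit(X, y)

person = np.array([2.0, 5.0, 3.0])            # one person's latest EMA entry
risk = model.predict_proba([person])[0, 1]
contributions = model.coef_[0] * person       # per-feature term in the log-odds

print(f"predicted risk: {risk:.2f}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>14}: {c:+.2f}")            # which features pushed the score up or down
```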

Suicide is a devastating global issue, but advances in AI and real-time monitoring offer new hope. These tools aren't a cure-all, but they may help provide the right support at the right time, in ways we've never been able to before.

The Conversation

Ruth Melia received funding from the Fulbright Commission as part of a Health Research Board Fulbright HealthImpact Award. This award supported a visiting scholarship at Florida State University to facilitate international collaboration in the area of suicide research.
