Suicide prevention: Can we harness social media to help save lives?


By Catherine Lewis

What if we could predict who was at high risk for suicide, long before they were in immediate danger?

If it sounds like the premise of a sci-fi novel, you’re not far off – but then again, neither is the technology.

In fact, one mental health researcher has already built an algorithm using artificial intelligence that can identify Twitter users at high risk of suicide. The big question that remains, he says, is how best to use this technology to help save lives.

“This tool takes tweets, and it turns what people say into a number. It asks, ‘How lonely is this tweet? What’s the hopelessness score? What’s the risk score?’” says Dr. Zachary Kaminsky, DIFD Mach-Gaensslen Chair in Suicide Prevention Research at The Royal’s Institute of Mental Health Research, affiliated with the University of Ottawa.

According to Dr. Kaminsky, the tool can comb through years’ worth of tweets in minutes to pinpoint individuals at risk weeks – even months – before they tweet anything that would raise a red flag to a human reader. The person writing the tweets, he says, might not even be aware that their pattern of speech indicates risk.
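How could a program scan years of tweets and flag risk months early? As a purely hypothetical sketch, imagine a per-tweet scoring function applied across a user’s timestamped history with a sliding window, so that a sustained rise in the windowed average surfaces long before any single post reads as alarming. The scoring function below is an empty placeholder, and none of this reflects Dr. Kaminsky’s actual code.

```python
# Hypothetical sketch only -- not Dr. Kaminsky's algorithm.
# score_tweet() stands in for whatever per-tweet scoring the real
# system performs; here it is an empty placeholder.
from datetime import datetime, timedelta
from statistics import mean

def score_tweet(text: str) -> float:
    """Placeholder for a per-tweet risk score between 0 and 1."""
    return 0.0

def rolling_risk(history: list[tuple[datetime, str]], window_days: int = 30):
    """Yield (date, average score over the trailing window), oldest first."""
    history = sorted(history)
    for when, _ in history:
        window = [score_tweet(text)
                  for t, text in history
                  if when - timedelta(days=window_days) <= t <= when]
        yield when, mean(window)
```

A trend line over such windowed averages is the sort of pattern a machine can track across thousands of posts, and a person scrolling a feed cannot.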

This early detection is possible because the algorithm doesn’t rely on simple word recognition, and it doesn’t wait for explicitly suicidal statements: it identifies the psychological concepts related to suicide, scores them, and matches those scores against all available public data linked to a Twitter user.
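To make “concepts, not keywords” concrete, here is a minimal sketch using the open-source sentence-transformers library: each tweet is embedded as a vector and compared against anchor phrases describing loneliness and hopelessness. The anchor phrases and model choice are illustrative assumptions, not the features the actual algorithm uses.

```python
# Illustrative sketch -- the anchors and model are assumptions,
# not Dr. Kaminsky's actual features.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical anchor phrases describing each psychological concept.
concepts = {
    "loneliness": "I feel completely alone and cut off from everyone",
    "hopelessness": "Nothing will ever get better; there is no way out",
}

def concept_scores(tweet: str) -> dict[str, float]:
    """Score one tweet's semantic closeness to each concept."""
    tweet_emb = model.encode(tweet, convert_to_tensor=True)
    return {
        name: float(util.cos_sim(tweet_emb,
                                 model.encode(anchor, convert_to_tensor=True)))
        for name, anchor in concepts.items()
    }

# No suicide-related keyword appears here, yet the loneliness
# score can still register as elevated.
print(concept_scores("everyone went out again... guess it's just me tonight"))
```

Because the comparison happens in a semantic space rather than over literal words, a post never needs to mention suicide to move a concept score.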


“This isn’t identifying people saying ‘I’m going to kill myself.’ It’s identifying a risk pattern for people who are moving towards that point,” says Dr. Kaminsky.

During a pilot study, Dr. Kaminsky’s algorithm identified suicide risk with 89 per cent accuracy.

This high rate of accuracy is especially fascinating, he says, because we don’t fully understand how the algorithm actually predicts risk.

That is because it’s all done through machine learning, a branch of artificial intelligence: we know what information is fed into the algorithm and what risk score comes out, but we don’t know which patterns within that information the machine relies on to get the right answer.
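The black box is easier to appreciate in miniature. In the sketch below, a standard scikit-learn classifier is trained on randomly generated placeholder features (stand-ins, not real data): asking it for a risk score takes one line, but the decision rules it has learned are spread across a hundred trees and cannot simply be read out.

```python
# Minimal "features in, risk score out" sketch -- synthetic data,
# not the actual model or any real patient information.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Each row stands in for one user's aggregated concept scores
# (e.g. mean loneliness, mean hopelessness, trend, variability).
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # arbitrary stand-in labels

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Getting a risk score for a new user is trivial...
new_user = rng.random((1, 4))
print("risk score:", model.predict_proba(new_user)[0, 1])

# ...but the "why" is distributed across 100 trees of learned
# splits, which is what makes the predictions hard to interpret.
```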

“Someone who’s lost someone to suicide shouldn’t think, ‘If only I’d just gone in and read their Twitter I would have seen their risk.’ That is not how this works,” says Kaminsky. “We’ve trained a computer to see things humans can’t see.”


Now, with evidence that this predictive tool works, Dr. Kaminsky’s next steps are crucial.

He sees the algorithm’s potential as a decision-making aid: something that can provide the extra information or context that makes early intervention and suicide prevention possible, particularly among Canadians between the ages of 15 and 34, for whom suicide is the second leading cause of death.

Because people in this age group are heavy social media users, and Twitter data is completely public, Dr. Kaminsky says it was the obvious social media platform to focus on when building his algorithm.

The question is, who would be getting this information – and how? Would it be the users themselves? Health care providers? Parents concerned about their children?

More nefariously, what if it’s a bully? An employer seeking to vet hires? An insurance company seeking to deny coverage?

At a World Suicide Prevention Day event at The Royal on Sept. 10, Dr. Kaminsky presented his work for the first time to an auditorium full of people involved in suicide prevention in Ottawa to try to get answers to some of these questions.

“We have this tool – I want to hear what you think we can do with it,” Kaminsky said. “I want to create something that people can actually use on the frontlines of suicide prevention. The question is, in whose hands should something like this go, and how should they use it?”

Initial discussions with the community produced a few ideas. A therapist could use this tool to get a more comprehensive picture of how their patient is doing. It could be used to triage limited resources to those at highest risk. It could be leveraged by social media platforms to push regional suicide prevention information directly to the user.

In any case, Dr. Kaminsky believes that whatever form the tool eventually takes, access to it would be best limited to gatekeepers: people highly experienced in handling private information.

As development moves forward, Dr. Kaminsky will be working closely with the community and those who could effectively use this tool on the ground to ensure that it leads to prevention, not harm.

“If these tools work as well as we think they’re working, they’re going to help save lives,” he says.

The tool still needs larger studies and validation, and the best way to use it has yet to be determined, but so far, Dr. Kaminsky says, it looks very promising. It could also be scaled to other social media platforms and expanded to include image recognition.

Catherine Lewis is a communications advisor at The Royal.