Going beyond ‘How often do you feel blue?’

AI-based emotional assessments aim to diagnose mental illness more accurately and quickly


One way to find out how someone is feeling is to ask them. “In the past two weeks, how often have you felt little interest or pleasure in doing things?” begins a standard questionnaire for depression. “How often have you felt afraid, as if something awful might happen?” inquires one questionnaire for anxiety. “Several days? Nearly every day?”

Self-reporting is how most psychiatric disorders are diagnosed and monitored today, but it’s far from perfectly reliable. These questionnaires capture subjective impressions at brief points in time, usually in settings removed from a person’s daily life, such as a psychiatrist’s office.

Researchers at Stanford Medicine are developing artificial intelligence tools to not only provide a more accurate picture of a person’s mental well-being but also to flag those in need of help and guide providers in choosing treatments. Certainly the stakes are high — with concerns for privacy, safety and bias — but AI is opening up unprecedented possibilities in psychiatry.

One AI tool being developed would evaluate the details of speech to predict a patient’s anxiety and depression severity, said Betsy Stade, PhD, a postdoctoral fellow at the Stanford Institute for Human-Centered Artificial Intelligence, who has built machine learning models that can do just that.

“These new tools will give us some objective, reproducible measures, instead of being based on what the person is thinking of themselves at the very moment they are filling out the questionnaire,” said Ehsan Adeli, PhD, assistant professor of psychiatry and behavioral sciences.

In speech, for example, people with depression tend to use more first-person singular pronouns: I, me, my, mine. “This effect is so subtle that you might not notice it in conversation, but despite being small, it’s quite robust,” Stade said.
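
To make the pronoun finding concrete, here is a minimal sketch of how a single linguistic feature, the rate of first-person singular pronouns, might be computed from a transcript. The word list and function name are illustrative assumptions, not a description of Stade’s actual models, which would combine many such signals.

```python
import re

# Illustrative word list only; a real model would draw on many linguistic
# features, not just this one.
FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def first_person_rate(transcript: str) -> float:
    """Share of words in a transcript that are first-person singular pronouns."""
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return 0.0
    return sum(w in FIRST_PERSON_SINGULAR for w in words) / len(words)

# Example: a feature value a downstream model might consume alongside others.
print(first_person_rate("I keep telling myself my mood will lift, but it wears me down."))
print(first_person_rate("The weather has been gray, and work keeps everyone busy."))
```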

In another study, Stade found that people with depression tend to talk specifically about sadness, while those with anxiety talk about a wider range of emotions.

Therapists’ offices of the future may be equipped with AI assistants that listen and analyze in the background, suggesting the best medications, therapy techniques and even specific phrases a therapist might use in responding to a patient.

Adeli is developing so-called ambient intelligence — technology integrated into buildings that can sense how the people inside are doing. In addition to audio analysis, pressure sensors on the floor could measure walking gait, thermal sensors could track physiological changes, and the same visual systems that help self-driving cars navigate roads could detect unusual behavior.

Ambient intelligence in hospitals or senior care facilities, for example, could identify an occupant who is hallucinating, at risk of suicide or showing early signs of cognitive decline.
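
As a rough illustration of the sensor-fusion idea, and not a description of Adeli’s actual system, the sketch below combines hypothetical readings from floor, thermal and vision sensors into a single “needs review” flag for staff. Every field name and threshold is an assumption.

```python
from dataclasses import dataclass

# Entirely hypothetical readings and thresholds, only to illustrate how signals
# from several ambient sensors might be fused into one alert for caregivers.
@dataclass
class RoomSnapshot:
    gait_variability: float   # from floor pressure sensors, arbitrary units
    skin_temp_delta_c: float  # from thermal sensors, change vs. baseline
    unusual_motion: bool      # from a vision model watching for atypical behavior

def needs_review(s: RoomSnapshot) -> bool:
    """Flag the room for a human caregiver when several signals look off at once."""
    signals = [
        s.gait_variability > 0.8,
        abs(s.skin_temp_delta_c) > 1.5,
        s.unusual_motion,
    ]
    return sum(signals) >= 2  # require agreement so one noisy sensor doesn't alert

print(needs_review(RoomSnapshot(gait_variability=0.9, skin_temp_delta_c=2.0, unusual_motion=False)))
```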

Outside clinical settings, AI is already serving as a first-line screener for people in crisis. Recently, Stanford Medicine researchers worked with a telehealth company to develop Crisis-Message Detector 1, which could quickly and accurately identify patient messages suggesting thoughts of suicide, self-harm or violence toward others. These messages were flagged and prioritized for review by crisis specialists, reducing wait times for people experiencing mental health crises from nine hours to less than 13 minutes.
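
The detector’s internals aren’t described here, so the following is only a generic sketch of flag-and-prioritize triage; a hypothetical phrase list stands in for what is, in the real system, a trained model.

```python
# Hypothetical trigger phrases; Crisis-Message Detector 1 is a trained model,
# not a keyword list. This sketch only shows the flag-and-prioritize step.
CRISIS_PHRASES = ("hurt myself", "ending my life", "end my life", "suicide", "self-harm")

def flag_crisis(text: str) -> bool:
    """Return True if a message contains crisis language (per the toy phrase list)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in CRISIS_PHRASES)

def triage(messages: list[str]) -> list[str]:
    """Put flagged messages first so crisis specialists see them without the usual wait."""
    return sorted(messages, key=lambda m: 0 if flag_crisis(m) else 1)

for msg in triage([
    "Can I reschedule my appointment to Friday?",
    "I have been thinking about ending my life.",
]):
    label = "CRISIS " if flag_crisis(msg) else "routine"
    print(label, "|", msg)
```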

AI systems like Crisis-Message Detector 1, intelligent buildings and speech analyzers are designed to alert and assist humans, who ultimately choose next steps and provide care.

“I don’t think AI is ready to be the sole decision-maker, nor should it be in the future,” Adeli said.

Some prefer AI therapy

But autonomous AI therapists are on the horizon. Companies are creating AI that can offer cognitive behavioral therapy and empathetic support, initially through text; eventually these systems could incorporate audio and video to read a client’s facial expressions and body language.

Whether patients will engage with a nonhuman therapist remains to be seen, but one recent survey found that 55% of respondents would prefer AI-based psychotherapy, citing convenience and the ability to talk more openly about embarrassing experiences.

The concept of an artificially intelligent therapist isn’t new. In fact, one of the earliest conversational programs, named ELIZA, developed in the 1960s at the Massachusetts Institute of Technology, was designed to mimic a Rogerian psychotherapist. Its creator, Joseph Weizenbaum, meant to show AI’s inferiority to human conversationalists but, to his dismay, many people found ELIZA compelling, even compassionate.

(A few years later, a computer science professor at Stanford University created a chatbot in the opposite role. PARRY was designed to mimic a person with paranoid schizophrenia — often expressing fear, anger and mistrust — and to serve as a practice patient for students.)

These days, with the rapid advance of large language models, people are “hacking” ChatGPT for mental health support — by prompting it to act like a therapist, or even Sigmund Freud.

More ways AI could help

Training an AI therapist might require reams of real therapy transcripts, which are not readily available because of patient privacy concerns. But new research suggests that giving an AI model a smaller set of high-quality transcripts and then “tuning” its responses, to be more empathetic, for example, could work just as well. Generative AI could even produce additional training data.

Ever since ELIZA was developed, people have pondered AI’s potential to make mental health care available to the masses.

At first, AI could deliver more prescriptive kinds of therapy, Stade said — such as cognitive behavioral therapy for insomnia, or support in between sessions with a human therapist. She is part of a Stanford team developing an AI “companion” to help people practice the skills they learn in cognitive behavioral therapy, such as identifying and reframing negative thoughts.

“When I think about the promise of fully AI psychotherapists, I think of the possibility that you could be getting huge numbers of patients really high-quality treatment at very low cost,” she said.

In 1975, the scientist Carl Sagan, PhD, imagined “computer psychotherapeutic terminals,” like telephone booths, that could be available to the public for a few dollars per session.

“No such computer program is adequate for psychiatric use today, but the same can be remarked about some human psychotherapists,” he said.

Ironically, our attempts to create an AI therapist could help us identify exactly what it is that makes a good human therapist.

Psychotherapy works — about as well as medications in many cases — but how it works is not well understood. We don’t know why some therapists are consistently more effective than others who offer the same treatments, or why some patients improve while others do not.

Most research into therapy looks at outcomes over weeks of treatment, not what happens between therapist and patient minute to minute. Features that make therapy successful are difficult to capture without fine-grained analysis of the therapy experience. This is where AI could be extremely useful.

“There are all these nuanced and very detailed decision points that therapists face, probably hundreds of them, within any given therapy session,” Stade said. “We just don’t have enough information about if some of those decisions are crucial, if they’re really the drivers of people getting better.”

Instead of replacing humans, perhaps AI’s real potential is to show us how to better help ourselves.


Nina Bai

Nina Bai is a science writer in the Stanford Medicine Office of Communications.
