Photo: The shadow of Uruguayan developer Tammara Leites poses in front of a text generated by (digital Simon) thanks to artificial intelligence ahead of the (dSimon) performance at the Avignon fringe festival, in Avignon on July 14, 2022. (Photo by CLEMENT MAHOUDEAU/AFP via Getty Images)

Psychiatry was one of my early rotations as a third-year medical student just beginning clinical training. After years of exposure to science in classrooms and laboratories, its imprecision was unsettling. The primary way to evaluate someone’s mental health was through self-reporting in response to direct questions like, “In the past two weeks, how often have you felt little interest or pleasure in doing activities that normally would be pleasurable?” or “Do you sometimes hear voices that no one else hears?”

Although such questioning – either verbally or via a questionnaire – is still the primary tool for diagnosing and monitoring psychiatric disorders, it is far from foolproof. Not only are the responses subjective snapshots, often taken in settings that do not reflect the individual’s everyday environment, but sometimes the questions simply don’t push the right psychological buttons.

Now we are in an era when artificial intelligence (AI) might offer previously unimagined ways to decipher patients’ deepest emotions and mental states.

Academic researchers are pioneering the use of AI to enhance the accuracy of mental health assessments. These ingenious approaches aim to provide a more comprehensive picture of a person’s mental well-being, identify those in need of intervention, and guide treatment decisions. The potential benefits are compelling, but because the machine learning at the heart of AI requires a continuous flow of patient information, the integration of AI into psychiatry may raise concerns about privacy, safety, and bias.

One groundbreaking AI tool under development analyzes speech to predict the severity of anxiety and depression. By monitoring reproducible parameters such as vocal characteristics and physiological indicators, it can pick up subtle cues that aid diagnosis. For example, individuals with depression disproportionately use first-person singular pronouns such as “I,” “me,” “my,” and “mine.” This seemingly minor detail is a useful indicator of depressive states. Moreover, people with depression often talk specifically about sadness, whereas those with anxiety tend to express a broader range of emotions.
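To make the pronoun signal concrete, here is a minimal sketch of how such a linguistic feature might be computed from a transcript. The word list and the function are illustrative assumptions for this column, not part of any published tool.

```python
import re

# First-person singular pronouns, a feature linked to depressive
# language in the research literature on function words.
FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text: str) -> float:
    """Fraction of words that are first-person singular pronouns."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in FIRST_PERSON_SINGULAR)
    return hits / len(words)

sample = "I feel like nothing I do matters to me anymore."
print(f"First-person rate: {first_person_rate(sample):.2f}")  # 0.30
```

A real system would combine many such features – pronoun rates, vocabulary, prosody, pacing – in a trained model rather than relying on any single count.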

To establish empathy with patients, skilled psychotherapists sometimes adopt certain speech patterns or choose words that resonate with the patient based on his or her vocation or level of education. An AI program’s literally encyclopedic database could enable it to build rapport the same way, by selecting particular words and vernacular patterns of speech.

The future of psychotherapy could include AI “mentors” that observe and analyze sessions, offer recommendations on medications, and even suggest specific therapy techniques and strategies.

Beyond the therapist’s office, researchers are also developing a science-fiction-like approach called “ambient intelligence” — technology embedded in buildings that can sense and respond to the occupants’ mental states. This includes audio analysis, pressure sensors to monitor gait, thermal sensors for physiological changes, and visual systems to detect unusual behaviors. Such technology could be invaluable in hospitals and senior-care facilities, identifying individuals at risk of hallucinations, cognitive decline, or suicide.
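As a rough illustration of how such ambient signals might be fused into a single alert, consider the sketch below. The RoomReading fields, weights, and threshold are hypothetical, invented for this example rather than drawn from any deployed system.

```python
from dataclasses import dataclass

# Hypothetical normalized sensor readings; a real ambient-intelligence
# system would derive these from audio, pressure, thermal, and visual
# processing pipelines.
@dataclass
class RoomReading:
    speech_agitation: float     # 0-1, from audio analysis
    gait_instability: float     # 0-1, from floor pressure sensors
    temperature_anomaly: float  # 0-1, from thermal sensors
    unusual_motion: float       # 0-1, from visual monitoring

def risk_score(r: RoomReading) -> float:
    """Naive weighted fusion of sensor channels into one alert score."""
    weights = (0.35, 0.25, 0.15, 0.25)
    signals = (r.speech_agitation, r.gait_instability,
               r.temperature_anomaly, r.unusual_motion)
    return sum(w * s for w, s in zip(weights, signals))

reading = RoomReading(0.8, 0.6, 0.2, 0.7)
if risk_score(reading) > 0.5:
    print("Flag for clinical review")
```

The point of the sketch is the architecture: no single sensor decides anything; the system raises a flag for human clinicians when several weak signals line up.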

AI is also proving useful in other ways. Stanford University researchers, in collaboration with a telehealth company, developed the AI system called Crisis-Message Detector 1. It rapidly identifies messages from patients that indicate thoughts of suicide, self-harm, or violence, drastically reducing wait times for those in crisis from hours to minutes.
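A production system like Crisis-Message Detector 1 relies on trained language models, but a simple keyword sketch conveys the triage idea. The patterns and the flag_for_review helper below are purely illustrative assumptions, not how the Stanford system actually works.

```python
import re

# Toy triage patterns; a trained classifier would replace this list
# in any real deployment.
CRISIS_PATTERNS = [
    r"\bkill (myself|him|her|them)\b",
    r"\bsuicid\w*\b",
    r"\bhurt (myself|someone)\b",
    r"\bend(ing)?\s+(it all|my life)\b",
]

def flag_for_review(message: str) -> bool:
    """Return True if the message matches any crisis pattern."""
    text = message.lower()
    return any(re.search(p, text) for p in CRISIS_PATTERNS)

inbox = [
    "Can I reschedule my Tuesday appointment?",
    "I've been thinking about ending it all.",
]
urgent = [m for m in inbox if flag_for_review(m)]
print(urgent)  # flagged messages jump the queue for human responders
```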

While AI tools like Crisis-Message Detector 1 are designed to support human decision-making, fully autonomous AI therapists may eventually emerge. Companies are developing AI that provides cognitive behavioral therapy and empathetic support. Initially text-based, these AI therapists could eventually incorporate audio and video to analyze clients’ facial expressions and body language. A recent survey revealed that 55% of respondents would prefer AI-based psychotherapy, appreciating the convenience and the ability to discuss sensitive topics more freely.

The concept of AI in therapy is not new. ELIZA, an early conversational program developed in the 1960s at MIT (coincidentally, when I was an undergraduate there), mimicked a Rogerian psychotherapist. Although its creator intended to demonstrate AI’s limitations, many found ELIZA surprisingly empathetic. Today, with advanced language models, individuals are using AI like ChatGPT for mental health support, prompting it to act like a therapist.

Ultimately, AI’s role in mental health care could democratize access to high-quality therapy, delivering effective treatment to vast numbers of patients at low cost. While no AI is currently adequate for independent psychiatric use, it holds the potential to complement and enhance human therapists by offering detailed analyses of therapy sessions and insights into why certain approaches work better than others.

As we apply these advances, the goal remains the same as during my psychiatry rotation many decades ago: to diagnose mental illness and provide compassionate, effective care to all those in need.

Henry Miller, a physician and molecular biologist, is the Glenn Swogger Distinguished Fellow at the American Council on Science and Health. He was the founding director of the FDA’s Office of Biotechnology. Follow Henry on X @henryimiller
