Systematic Review of Large Language Models for Mental Health Therapy

Abstract

Background: The global mental health crisis is marked by high prevalence, workforce shortages, and inequitable access to care. These challenges have fueled interest in artificial intelligence (AI)-assisted solutions, particularly large language models (LLMs) such as GPT-4, LLaMA-2, and PaLM-2, which may extend clinical reach and provide scalable, low-cost support.

Objective: This review summarizes published conceptual and empirical studies on the application of LLMs in mental health care, with a focus on diagnostic support, psychoeducation, treatment dialogue, and risk communication.

Methods: We examined 25 key studies and multiple systematic reviews addressing clinical and non-clinical applications of LLMs. Evidence was synthesized regarding effectiveness, limitations, and ethical considerations.

Results: Findings indicate that conversational LLM-based agents can alleviate mild to moderate depression and anxiety in the short term and can approach clinician-level performance in narrowly defined cognitive behavioral therapy (CBT) tasks. Domain-specific LLMs such as PsyLLM and MentaLLaMA outperformed general-purpose models in safety and accuracy when trained on clinically grounded data.

Conclusions: LLMs show potential as adjunctive tools within blended care models, enhancing access and patient engagement. However, safe clinical integration will require regulation, long-term validation, and oversight by mental health professionals.