AI in Psychotherapy: Opportunities and Risks
Abstract
This article examines the emerging role of artificial intelligence in mental health contexts, with a particular focus on psychotherapy and the risks associated with deploying large language models (LLMs) in sensitive clinical domains. It discusses several key concerns, including AI-related psychosis, the development of parasocial attachments, and the growing number of crisis-related interactions users have with general-purpose AI models. These challenges raise important questions about the safety, reliability, and ethical management of AI systems when individuals seek support during periods of psychological crisis. Beyond identifying these risks, the article explores the potential of clinical LLMs specifically designed for mental health applications. In particular, AI can serve as a tool for therapists’ training, supervision, and professional development, offering simulated clinical scenarios, structured feedback, and support for reflective practice. The article concludes by outlining key directions for the responsible development of therapeutic AI. These include the importance of human oversight, the use of specialized and clinically informed training datasets, advances in model fine-tuning and safety alignment, and the establishment of clear professional guidelines and regulatory frameworks. Together, these developments may help ensure that AI technologies are integrated into mental health care in ways that prioritize safety, ethical practice, and the continued central role of human clinicians.