The Paradox of Agency in Psychotherapy: How People with Mental Distress Experience Generative AI and Human Therapists

Abstract

Background: The rapid advancement of Large Language Models has sparked heated debate over whether Generative Artificial Intelligence (AI) chatbots can serve as “digital therapists” capable of providing therapeutic support. While much of this discussion focuses on AI’s lack of agency, understood as the absence of mental states, consciousness, autonomy, and intentionality, empirical research on users’ real-world experiences remains limited.

Objective: This study explores how individuals with mental distress engage with both generative AI and human psychotherapy in natural, unguided contexts, focusing on how perceptions of agency shape these experiences. Drawing on participants’ dual exposure, the study seeks to contribute to the ongoing debate about “AI therapists” by clarifying the role of agency in therapeutic change.

Methods: Sixteen adults who had sought mental health support from both human therapists and ChatGPT participated in semi-structured interviews, during which they shared and compared their experiences of each type of interaction. Transcripts were analyzed using reflexive thematic analysis.

Results: Analysis revealed three themes and eight subthemes that enabled comparison of what participants found helpful and challenging in their interactions with ChatGPT and human therapists: (1) encouraging open and authentic self-disclosure but limiting deep exploration; (2) the myth of relationship: caring, acceptance, and understanding; (3) fostering therapeutic change: the promise and pitfalls of data-driven solutions. Drawing on these comparative reflections, we propose a conceptual model that highlights how perceptions of agency carry both strengths and limitations for therapeutic engagement. The model also illustrates how the non-agential nature of AI chatbots shapes their capacity to support individuals with mental distress in ways that differ from human therapists.

Conclusion: The findings suggest that agency functions as a double-edged feature of therapeutic interactions. AI chatbots should not be anthropomorphized; instead, their non-agential features (such as responsive behaviors, absence of intentions, an objective stance, and a disembodied state) should be used strategically to complement human-delivered psychotherapy. Meanwhile, practitioners should maximize the benefits of human therapists’ agential qualities while remaining alert to their inherent risks.