A brief therapeutic conversation with AI decreases intentions to exclusively seek human therapy


Abstract

Although there is a growing body of work assessing the efficacy of Artificial Intelligence (AI) for therapy, it is unknown whether using AI leads people to avoid traditional therapy with trained clinicians. Across two experiments (N = 1175), we addressed this gap by asking participants to describe a personal challenge for which they might seek guidance, including from a therapist. Participants then engaged in a conversation with GPT-4 Turbo, prompted to act as a psychotherapist and pragmatic problem-solver in the treatment condition, and as a neutral interviewer in the control condition. Compared to controls, treated participants reported improved mental health related to their stated challenge (Experiment 1a: β = 6.40, 95% CI [4.20, 8.59], t = 5.74, p < .001; Experiment 1b: β = 7.82, 95% CI [5.51, 10.12], t = 6.67, p < .001). When comparing preferences for future support (human therapy, AI therapy, both, or neither), a larger share of treated participants chose AI therapy than controls did (Experiment 1a: +18.3%; Experiment 1b: +4.9%). Crucially, the treatment also led to fewer participants preferring human therapy (Experiment 1a: –15%; Experiment 1b: –16.5%). Notably, across both studies and conditions, the most frequently chosen option for future psychological support was the combination of AI and human therapy, selected by 34.8% to 44.6% of respondents depending on experiment and condition.