Large Language Models as Mental Health Resources: Patterns of Use in the United States

Abstract

As large language models (LLMs) become increasingly accessible, anecdotal evidence suggests that users are turning to them for mental health support. However, little is known about patterns of LLM use for this purpose. This study assesses the frequency, motivations, and perceived effectiveness of LLM use for mental health support or therapy-related goals among U.S. residents with ongoing mental health conditions who have used LLMs in the past year. A cross-sectional survey-based study was conducted via Prolific, an online participant recruitment platform. Eligible participants were U.S. residents aged 18–80 with internet access who had used at least one LLM in the past year and reported having an ongoing mental health condition. Participants completed an anonymous 35-question online survey covering patterns of LLM use, reasons for use, perceived effectiveness, comparison with human therapy, and problematic experiences. Survey responses suggest substantial adoption of LLMs for mental health purposes, with 48.7% of participants using them for psychological support within the past year. Users primarily sought help for anxiety (73.3%), personal advice (63.0%), and depression (59.7%). Notably, 63.4% of users reported improved mental health from LLM interactions, with high satisfaction ratings for practical advice (86.8%) and overall helpfulness (82.3%). When comparing LLMs to human therapy, evaluations were generally neutral to positive, with 37.8% finding LLMs more beneficial than traditional therapy. Despite concerns, only 9.0% of users encountered harmful responses.