"What Happens, What Helps, What Hurts:" A Qualitative Analysis of User Experiences with Large Language Models for Mental Health Support



Abstract

Large language models (LLMs) are increasingly being used for mental health support, yet little is known about how individuals with mental health conditions experience these interactions. This study qualitatively explored the perceived helpful and unhelpful aspects of LLM use for mental health purposes and their broader clinical, ethical, and policy implications. Two hundred and forty-three English-speaking adults (age M = 36.47, SD = 11.22; 56.0% women) described (1) typical interactions, (2) especially helpful events, and (3) unhelpful events via three open-ended questions. A qualitative content analysis and thematic analysis conducted by one doctoral and two master's-level coders produced three thematic categories. Participants reported typical interactions (503 items) primarily involving facilitating emotional expression, affirming emotional experiences, guiding behavioral exchanges, and reshaping cognitive dialogues. In helpful situations (249 items), LLMs were perceived to support users by providing behavioral guidance, enhancing emotional well-being, fostering companionship, and facilitating cognitive restructuring. However, participants also identified unhelpful experiences (113 items), such as non-actionable or risk-inducing advice, dismissive or emotionally harmful responses, and technical or functional limitations. These findings highlight both the potential and limitations of LLMs in augmenting mental health support.

Keywords: large language models, mental health, content analysis, thematic analysis, user experience
