AI and Human Responses in Mental Health: Empathy, Utility, and Expectation in Two Experimental Studies
Abstract
Objectives: Our study examines how individuals perceive responses to mental health–related questions when these are generated either by an automated system or by a licensed psychologist. Specifically, we compare evaluations across three dimensions: empathy, utility, and expectation, and explore whether digital health literacy shapes these perceptions.

Methods: Two experimental studies were conducted. In Study 1, 36 participants evaluated a response to a question they had personally submitted, attributed either to an automated system or to a psychologist. In Study 2, 126 participants assessed a standardized set of question–answer pairs, evenly divided between the two response sources. In both studies, participants completed the eHealth Literacy Scale (eHEALS) to examine whether digital health literacy was associated with response evaluations.

Results: In personalized contexts (Study 1), psychologist-generated responses were generally preferred across empathy, utility, and expectation, although the differences did not reach statistical significance. In depersonalized contexts (Study 2), automated responses received higher ratings on all three dimensions, with large effect sizes. No significant associations emerged between eHealth literacy and participants' evaluations.

Conclusions: These findings indicate that automated mental health responses can be perceived as empathetic and useful, particularly in standardized or less personally involving situations. However, they may not fully reproduce the relational qualities typically associated with human interaction. The results support approaches that integrate automated systems with professional expertise in mental health communication.