People choose to receive human empathy despite rating AI empathy higher

Abstract

Recent advances in AI have enabled large language models to produce expressions that seem empathetic to human users, raising scientific and ethical questions about how people perceive and choose between human and AI sources of emotional support. Although a growing number of studies have examined how people rate empathy generated by AI, little work has examined whether people would choose to receive empathy from AI. We conducted four studies investigating whether people prefer to receive empathetic expressions from humans or AI, and how they evaluate these expressions. Across diverse samples and stimuli, we found evidence for what we term the AI empathy choice paradox: participants significantly preferred to receive empathy from humans, yet when they did choose AI-generated empathetic responses, they rated them as higher in quality, more effective at making them feel heard, and more effortful. These findings contribute to ongoing debates about AI empathy by demonstrating that while people may avoid AI as an empathy source, they nonetheless benefit from AI empathy when they experience it. Our results suggest potential applications for AI in supplementing human emotional support while highlighting the importance of respecting individual preferences for empathy sources.