A Digital Shoulder to Cry On: A Comparison of Human and Large Language Models as Extrinsic Interpersonal Emotion Regulators
Abstract
Can artificial intelligence (AI) provide emotionally effective support traditionally delivered by humans? This research investigates whether large language models (LLMs) can function as extrinsic emotion regulators (EER) in non-clinical contexts. Across four studies, we systematically compared the structure, perceived effectiveness, and regulatory effects of LLM- versus human-generated comforting messages. Study 1 (N = 279) used thematic analysis to show that LLM-generated responses largely mirrored human EER strategies, with some discrepancies between different LLMs. Study 2 (N = 309) demonstrated that AI-generated comforting messages produced greater emotional improvement and were rated as more emotionally supportive than human messages. Study 3 (N = 196) tested emotional validation (i.e., acknowledgement of the target's emotional response) as a mechanism, finding that perceived supportiveness, rather than emotional validation, predicted greater emotional improvement. Study 4 (N = 188) identified actionable support (i.e., specific and implementable regulatory tactics) as a key factor explaining the LLMs' regulatory advantage in the previous studies. These findings suggest that while LLMs can effectively mimic or even surpass human regulators in EER, this advantage appears to be limited to providing actionable tactics within comforting messages. Implications are discussed for theories of interpersonal emotion regulation and AI-human interaction, highlighting the critical role of actionable regulatory tactics in successful interpersonal emotion regulation.