A Digital Shoulder to Cry On: Understanding Why Large Language Models Can Surpass Humans as Extrinsic Interpersonal Emotion Regulators
Abstract
Why might artificial intelligence (AI) provide more emotionally effective support than humans? This research investigates why large language models (LLMs) are preferred as extrinsic emotion regulators (EER) in non-clinical contexts. Across three studies, we systematically compared the regulatory effects and perceived effectiveness of LLM- versus human-generated comforting messages. The first part of Study 1 (N = 279) used thematic analysis to show that LLM-generated responses largely mirrored human EER strategies, with some discrepancies between different LLMs. The second part of Study 1 (N = 309) demonstrated that these AI-generated comforting messages produced greater emotional improvement and were rated as more emotionally supportive than human messages. Study 2 (N = 196) tested emotional validation (i.e., acknowledgement of the target’s emotional response) as a mechanism, finding that it did not explain the greater emotional improvement achieved by LLMs. Study 3 (N = 188) identified actionable support (i.e., specific and implementable regulatory tactics) as a key factor explaining the LLMs’ regulatory advantage in the previous studies. These findings suggest that while LLMs can effectively mimic or even surpass human regulators in EER, this advantage appears to be limited to providing actionable tactics within comforting messages. Importantly, people may learn to adopt these actionable tactics in their own interpersonal emotion regulation efforts to improve the effectiveness of the support they offer. Implications are discussed for theories of interpersonal emotion regulation and AI-human interaction, highlighting the critical role of actionable regulatory tactics in successful interpersonal emotion regulation.