AI's Rational Empathy Promotes Reconciliation in Conflict: Evidence from Behavioral Experiments, Linguistic Analysis, and Topic Modeling

Abstract

With the rise of large language models (LLMs), LLM-driven artificial intelligence role-playing (AI-RP) has emerged as a promising tool for resolving interpersonal conflict. However, how the effectiveness of its "Rational Empathy" differs from that of human empathy remains unclear. This study investigates the question through two experiments. Study 1 examined the effects of different AI response styles, while Study 2 directly compared interventions by an AI and a human counsellor, incorporating linguistic analysis (LIWC and BERTopic) to investigate the underlying mechanisms. Results revealed that while AI-RP effectively improved conflict resolution outcomes, this effect was context-dependent and did not generalize. Crucially, on the key metric of communication intention, the AI was significantly superior to the human counsellor. Linguistic analysis indicated that the AI's responses were more focused on functional, problem-solving approaches, whereas the counsellor's responses focused more on affective and relational aspects. This research demonstrates that an AI can act as a "cognitive scaffold" in conflicts. Its unique advantage stems from an efficient, problem-oriented "Rational Empathy" that signals the viability of communication, offering a new perspective for future human-AI collaborative interventions.