Nudging Outgroup Altruism: A Human-Agent Interactional Approach for Reducing Ingroup Favoritism


Abstract

Ingroup favoritism and intergroup conflict can be mutually reinforcing during social interaction, threatening the peace and sustainability of societies. In two studies (N = 880), we investigated whether promoting prosocial outgroup altruism would weaken this self-reinforcing cycle of ingroup favoritism. Using novel methods of human-agent interaction on a computer-mediated experimental platform, we introduced outgroup altruism via (i) nonadaptive artificial agents with preprogrammed outgroup-altruistic behavior (Study 1; N = 400) and (ii) adaptive artificial agents whose altruistic behavior was informed by the predictions of a machine learning algorithm (Study 2; N = 480). A rating task ensured that the observed behavior did not result from participants' awareness that the agents were artificial. In Study 1, nonadaptive agents prompted ingroup members to uphold their group identity by reinforcing ingroup favoritism. In Study 2, adaptive agents weakened ingroup favoritism over time by maintaining a good reputation with both ingroup and outgroup members, who perceived the agents as fairer than humans and rated them as more human than humans. We conclude that a good reputation for the individual exhibiting outgroup altruism is necessary to weaken ingroup favoritism and reduce intergroup conflict; reputation is therefore an important consideration in designing nudge agents.
