Inequity Aversion Toward AI Counterparts
Abstract
Human moral interactions often assume that resources should be allocated equitably, i.e., one should not take more than one’s fair share. To what extent do people apply this assumption to social AI entities? Using a 21-round Ultimatum Game, we investigated participants’ behavioral, physiological, and affective responses to fair, disadvantageous, and advantageous offers from an AI (vs. human) counterpart. We report three principal findings: (a) Participants were more likely to reject disadvantageous offers from an AI counterpart than from a human counterpart, but were more likely to reject advantageous offers from a human counterpart than from an AI counterpart; (b) Participants reported more negative affect following disadvantageous offers from an AI counterpart than from a human counterpart; (c) Participants exhibited a stronger association between heart rate variability and rejection rate for disadvantageous offers from an AI counterpart than from a human counterpart. Based on these findings, we propose a model emphasizing an important, previously under-examined role of self-regulatory processes in humans’ responses toward AI moral behavior.