Training and Oversight of Algorithms in Social Decision-Making: Algorithms With Prescribed Selfish Defaults Breed Selfish Decisions

Abstract

Human social preferences increasingly serve as oversight or training data that shape how Artificial Intelligence (AI) makes decisions across domains, including social decisions that influence human-to-human interactions. Here, we test how algorithms with and without prescribed social preferences shape social decision-making, and we explore whether people delegate further decisions to them. In an incentivized online experiment (n = 1290), participants provided their social preferences for outcomes favoring themselves or an anonymous other (Social Value Orientation, SVO) as input to a decision-making algorithm. For each SVO money-division question, we manipulated whether participants saw no default options (representing the provision of training data) or proself/prosocial default options (representing oversight of algorithms with prescribed selfish/prosocial preferences), and whether the default options came from an algorithm or not. Results showed that participants’ social preferences were not significantly affected by providing input to an algorithm without prescribed preferences (vs no defaults), nor by an algorithm with prescribed prosocial preferences (vs the same defaults without an algorithm and vs the algorithm without prescribed preferences). Only providing input for an algorithm with prescribed proself preferences resulted in more selfish social preferences (vs the algorithm without prescribed preferences and vs the algorithm with prescribed prosocial preferences). Counter to this pattern, participants perceived themselves as less influenced by proself defaults than by prosocial defaults. Most participants delegated a second social decision-making task to the algorithm they had been exposed to. These findings suggest that human oversight may be insufficient to address algorithmic biases, as individuals act more selfishly when exposed to pre-existing selfish tendencies in algorithms.