The Psychology of Algorithmic Bias
Abstract
As the use of AI proliferates, so too does the risk of algorithmic bias: systematic errors in AI systems that discriminate against disadvantaged social groups. Although such biases are widely documented, their psychological foundations are poorly understood. We argue that algorithmic bias arises from human social cognition and prejudice as these processes interact with AI systems. We propose a Human-AI Loop Model that specifies the mechanisms through which human biases infiltrate AI systems at multiple points of human-AI interaction, from the production of training data to the consumption of algorithmic outputs. Through these effects, AI systems can amplify and obscure human prejudices while reinforcing existing inequities. We conclude with psychology-centered strategies for disrupting this cycle and outline implications for contemporary prejudice research.