Substratism: Conceptualizing and Measuring Moral Bias Against AI

Abstract

Artificial intelligence systems (AIs) are increasingly taking on social roles, such as virtual coworkers and personal companions. Yet people grant AIs little moral concern, even when they are described as indistinguishable from humans. We conceptualize this bias as substratism: the moral devaluation of AIs based on the non-biological substrate that underlies their cognitive processes and makes up their broader physical form (e.g., silicon and wires rather than flesh and blood). Across five preregistered studies (N = 2,129), we introduce substratism as a psychological construct and develop and validate a scale to measure it. In Study 1, we found that substratism is best captured by a unidimensional factor structure, and we reduced a large initial item pool to an eight-item scale. Studies 2 and 3 confirmed the scale’s factor structure in independent samples and showed that substratism correlates with some AI-related beliefs and behaviors, such as perceived threat from AI and interaction with AI, but is weakly correlated or uncorrelated with other prejudices and their underlying causes, suggesting that substratism has partly distinct psychological origins. Studies 4 and 5 showed that substratism predicts relevant outcomes: prioritizing humans and non-human animals over AIs in moral dilemmas and charity donation decisions, and choosing to learn about an AI rights charity and its petition. Overall, our findings suggest that substratism is a distinct, measurable construct that varies widely across individuals, and we provide a concise, validated scale to capture and quantify this bias against increasingly advanced AI systems.