AI Consciousness Will Divide Society

Abstract

As AI systems become increasingly advanced and human-like, societies will face a fundamental question: are these systems conscious, and if so, do they deserve moral or legal rights? This paper argues that rather than converging on a shared answer, society is likely to enter a period of confusion, uncertainty, and deep disagreement. Drawing on recent empirical evidence, I show that both experts and the public are already divided on the possibility of AI consciousness and the moral status of digital minds. I argue that this division is driven by structural features of the issue, including the difficulty of conceptualizing and measuring consciousness, conflicting economic, political, and emotional incentives, and a disconnect between AI systems’ internal architectures and their outwardly human-like behavior. Such disagreement creates risks of political polarization, regulatory instability, geopolitical tension, and an increased risk of ultimately misattributing moral status. I conclude by outlining strategies for improving expert coordination, fostering constructive public discourse, designing AI systems to reduce moral confusion, and reducing exclusive reliance on consciousness in moral and policy decisions.
