AI Consciousness Will Divide Society

Abstract

As AI systems grow more sophisticated and human-like, societies will confront a question with no easy answer: are these systems conscious, and if so, what do we owe them? I argue that we should not expect convergence. Instead, society is headed toward a period marked by confusion, uncertainty, and sharp disagreement. Drawing on recent empirical data, I show that both experts and the public are already split on whether AI systems could be conscious and whether digital minds deserve moral consideration. I trace this division to structural features of the problem: consciousness remains difficult to define and impossible to measure directly, economic and political incentives cut in opposing directions, and the gap between how AI systems behave outwardly and how they work internally invites conflicting interpretations. Left unmanaged, such disagreement raises the prospect of political polarization, unstable regulation, geopolitical friction, and a greater risk that we get the moral question wrong—in one direction or the other. I close by outlining four strategies: strengthening expert coordination, building a more informed public conversation, designing AI systems that do not needlessly amplify moral confusion, and developing decision-making frameworks that do not hinge entirely on unresolved questions about consciousness.
