Intelligence Without Consciousness: The Rise of the IIT Zombies

Abstract

We prove that feedforward artificial intelligence architectures, including convolutional neural networks, transformers, and reinforcement learning agents, necessarily generate zero integrated information (Φ = 0) under Integrated Information Theory (IIT) 3.0, rendering them structurally incapable of consciousness. Our mathematical proof establishes that feedforward systems admit perfect bipartitions where all cause-effect repertoires factorize completely, violating IIT's integration axiom. Through computational validation on 30 diverse network configurations and formal verification of all mathematical claims, we demonstrate that contemporary AI systems consistently yield Φ = 0 regardless of scale, attention mechanisms, or architectural sophistication. We systematically address counterarguments regarding emergent properties, distributed representations, and predictive processing, showing that these mechanisms create functional capabilities without consciousness-constituting causal integration. Our analysis reveals a fundamental architectural barrier: current AI systems are "IIT zombies", functionally sophisticated but phenomenologically void. These findings have profound implications for AI consciousness assessment, cognitive science, ethics, and the future development of artificial minds.
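
A minimal formal sketch of the core claim, assuming the standard IIT 3.0 formulation in which the system-level Φ is the minimum, over system partitions P, of a distance D between the intact cause-effect structure C(S) and the partitioned structure C^P(S); the layer decomposition and symbols below are illustrative and are not taken from the paper's proof:

$$
\Phi(S) \;=\; \min_{P \in \mathcal{P}} D\!\big(\mathcal{C}(S) \,\big\|\, \mathcal{C}^{P}(S)\big), \qquad \Phi(S) \ge 0 .
$$

For a strictly feedforward network, order the units into layers $L_1 \to \cdots \to L_n$ and bipartition them as $A = L_1 \cup \cdots \cup L_k$ and $B = L_{k+1} \cup \cdots \cup L_n$. Because no connections run from $B$ back to $A$, the transition probabilities already factorize across this cut,

$$
p\big(s'_A, s'_B \mid s_A, s_B\big) \;=\; p\big(s'_A \mid s_A\big)\, p\big(s'_B \mid s_A, s_B\big),
$$

so the unidirectional partition that severs the (non-existent) $B \to A$ connections leaves the cause-effect structure unchanged, $\mathcal{C}^{P}(S) = \mathcal{C}(S)$. Its distance term is zero, and since Φ is a non-negative minimum over partitions, $\Phi(S) = 0$.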
