Cognition Without Consciousness: A Minimal Conceptual Framework for Understanding LLMs and Human Cognitive Evolution
Abstract
Large language models (LLMs) demonstrate that sophisticated symbolic cognition can emerge from scaled pattern extraction without consciousness. This observation motivates a minimalist conceptual framework: language is a crystallized form of human cognition, created by conscious agents over millennia, and the human brain evolved to operate efficiently over this symbolic substrate. Consciousness and symbolic cognition are therefore distinct: consciousness creates symbols, while symbolic cognition operates over them. LLMs reveal this asymmetry by reproducing symbolic reasoning without possessing conscious regulation, motivation, or subjective experience. This framework clarifies the relationship between biological and artificial cognition and offers a simple model of how human intelligence emerged through gene–culture coevolution. The manuscript also introduces a proposed information‑theoretic limit (the AI Theorem), which formalizes why purely computational systems such as LLMs inevitably accumulate drift without a regulatory layer.