Cognition Without Consciousness: AI Transformers and the Revival of Human Thought
Abstract
Large Language Models (LLMs) demonstrate that sophisticated cognitive abilities can emerge from scaled processing of symbolic information without those abilities being explicitly programmed. This observation suggests a novel hypothesis for human cognitive evolution: rather than arising from dramatic genetic changes alone, human cognitive superiority may have emerged through the co-evolution of two components: (1) an expanding symbolic knowledge base (language), and (2) neural mechanisms capable of sophisticated operations over this base. Building on recent advances in transformer architectures and theories of cultural evolution, we propose that language itself constitutes a crystallized form of human cognition, containing not just information but patterns of thought, and therefore a sufficient substrate for cognitive emergence in both biological and artificial systems. This framework integrates Baldwin effect theory with modern computational insights to provide testable predictions about the evolution of human cognitive capabilities, while acknowledging fundamental differences between artificial and biological cognitive systems.