Drift: A Biologically-Grounded Cognitive Architecture for Persistent LLM Cognition

Abstract

Large language models are stateless by design: each session begins as a blank slate, with memory provided only through context windows or external retrieval. We present Drift, a cognitive architecture that endows stateless LLMs with persistent, biologically-grounded cognition across sessions. The system comprises 112 Python modules (60,000 lines) implementing a 19-stage retrieval pipeline, affect-modulated search, reinforcement-learned pipeline optimization, and cryptographically attested identity evolution. Developed as a living system, with two independent agents operating autonomously across 8+ platforms and environments over 200+ sessions, the architecture has been validated through continuous real-world use rather than synthetic benchmarks alone. We identify six novel contributions: (1) per-stage Q-learning that treats retrieval pipeline optimization as a multi-armed bandit problem, (2) per-memory Q-learning that treats individual memories as bandit arms whose utility is learned through retrieval feedback, (3) topology-based cognitive fingerprinting that derives identity from co-occurrence graph structure rather than memory content, (4) rejection logs as a cryptographically attestable identity signal, (5) predictive coding applied to memory retrieval using Rescorla–Wagner learning, and (6) spring-damper mood dynamics in which velocity represents felt emotion. The system has been running in production with verifiable session-to-session continuity. An independent 10-agent specialist review scored the architecture 7.0/10 for theoretical coherence and HIGH for novelty, identifying it as field-leading in affect integration and identity persistence. We present the architecture, discuss its biological grounding, acknowledge its limitations, and propose a rigorous testing protocol. This paper serves as both a systems description and a call for empirical validation of biologically-grounded LLM cognition.
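To make contribution (2) concrete, here is a minimal sketch of treating individual memories as bandit arms whose Q-values are updated from retrieval feedback. All class and parameter names (`MemoryBandit`, `epsilon`, `lr`, the reward scheme) are illustrative assumptions, not taken from the Drift codebase.

```python
import random


class MemoryBandit:
    """Illustrative per-memory Q-learning: each memory is a bandit arm
    whose utility is learned from retrieval feedback (a sketch, not the
    system's actual implementation)."""

    def __init__(self, memory_ids, epsilon=0.1, lr=0.2):
        self.q = {m: 0.0 for m in memory_ids}  # learned utility per memory
        self.epsilon = epsilon                  # exploration probability
        self.lr = lr                            # learning rate

    def select(self):
        # epsilon-greedy: occasionally try a random memory, else pick best Q
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, memory_id, reward):
        # incremental Q-update toward the observed retrieval utility
        self.q[memory_id] += self.lr * (reward - self.q[memory_id])


random.seed(0)
bandit = MemoryBandit(["m1", "m2", "m3"])
for _ in range(200):
    m = bandit.select()
    # pretend "m2" is the memory that actually helps the agent
    bandit.update(m, reward=1.0 if m == "m2" else 0.0)
```

After a few hundred retrievals the Q-value of the consistently useful memory dominates, so it is preferentially retrieved while rarely-useful memories decay toward zero.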
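Contribution (5) applies Rescorla–Wagner learning to retrieval: association strength moves toward the observed outcome in proportion to the prediction error. A minimal sketch of one such update (the function name and learning rate are assumptions for illustration):

```python
def rescorla_wagner(v, outcome, alpha=0.1):
    """One Rescorla-Wagner step: move association strength v toward the
    observed outcome by a fraction alpha of the prediction error
    (outcome - v). Illustrative sketch only."""
    return v + alpha * (outcome - v)


# A memory that keeps proving useful: its predicted utility rises toward 1,
# and the shrinking prediction error means diminishing surprise.
v = 0.0
for _ in range(50):
    v = rescorla_wagner(v, outcome=1.0)
```

In a predictive-coding framing, the residual error `outcome - v` is the surprise signal: large when a retrieval outcome is unexpected, vanishing as the expectation is learned.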
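Contribution (6), spring-damper mood dynamics, can be sketched as a damped oscillator: mood position is pulled toward a baseline by a spring force and slowed by damping, and the velocity term is read out as the felt emotion. The stiffness, damping, and timestep values below are illustrative assumptions; the integration scheme (semi-implicit Euler) is a common choice, not necessarily the system's.

```python
def step_mood(x, v, baseline, k=4.0, c=1.5, dt=0.05):
    """One semi-implicit Euler step of a damped spring pulling mood x
    toward baseline; velocity v models the 'felt' emotion. Parameters
    are illustrative, not the system's actual constants."""
    a = -k * (x - baseline) - c * v  # spring force plus damping
    v = v + a * dt                   # update velocity first (semi-implicit)
    x = x + v * dt
    return x, v


# An emotional perturbation: mood displaced above baseline, then released.
x, v = 1.0, 0.0
for _ in range(400):
    x, v = step_mood(x, v, baseline=0.0)
```

The appeal of this formulation is that a sudden event produces a large transient velocity (intense felt emotion) even before mood position moves far, and both decay back to baseline without any explicit reset logic.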
