Observer-Dependent Entropy; Cognitive Linguistics; Information Retrieval; Quantum Information; Benchmarking

Abstract

Comprehension failure is not prediction error; it is delayed access to retrievable meaning. We introduce Observer-Dependent Entropy Retrieval (ODER), a formal framework that models linguistic understanding as an observer-specific process shaped by attention, working memory, and prior knowledge. Unlike prediction-based accounts, ODER models delayed access to meaning rather than incorrect anticipation. We benchmark ODER on a hybrid corpus combining Aurian, a structured synthetic language developed for entropy-based analysis, with one natural English sentence, contrasting retrieval under controlled versus natural conditions. On the controlled Aurian corpus, ODER explains 31% of sentence-trace variance with an average R² of 0.76, outperforming Bayesian-mixture, fuzzy-logic, and incremental-surprisal baselines by at least 7.6 AIC units. The model yields two falsifiable predictions: (i) spikes in the contextual gradient ∇C during garden-path resolution correlate with P600 amplitude, but only in low-working-memory observers; and (ii) off-diagonal coherence terms μ in the observer density matrix predict priming-interference effects. Although expressed in quantum notation, ODER does not posit quantum computation in neural tissue; the density matrix compactly represents concurrent interpretations whose collapse time τ_res aligns with electrophysiological markers. By reframing comprehension as entropy retrieval rather than entropy reduction, ODER explains why identical sentences impose divergent cognitive costs across populations and offers a benchmarkable framework for modeling neurocognitive variability without ad hoc tuning.
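The abstract names ODER's quantum-notation components, the observer density matrix, its off-diagonal coherence terms μ, and the collapse time τ_res, but does not give equations. The sketch below is therefore a minimal, hypothetical reading of that notation, not the authors' implementation: it assumes a pure-state density matrix over candidate interpretations, the von Neumann entropy as one candidate entropy measure, the l1-norm of off-diagonal terms as a scalar μ, and exponential dephasing as a toy stand-in for collapse.

```python
import numpy as np

# Illustrative sketch only: ODER's equations are not given in the abstract,
# so every definition here (density matrix, entropy, coherence mu, dephasing)
# is an assumption about what the notation could mean.

def density_matrix(amplitudes):
    """Pure-state density matrix rho = |psi><psi| over candidate interpretations."""
    psi = np.asarray(amplitudes, dtype=complex)
    psi = psi / np.linalg.norm(psi)          # normalize so tr(rho) = 1
    return np.outer(psi, psi.conj())

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log2 rho): one candidate observer-dependent entropy."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]       # drop numerical zeros
    return float(-np.sum(eigvals * np.log2(eigvals)))

def coherence_mu(rho):
    """l1-norm of the off-diagonal terms: one way to scalarize the mu that
    the abstract says predicts priming-interference effects."""
    off_diagonal = rho - np.diag(np.diag(rho))
    return float(np.sum(np.abs(off_diagonal)))

def dephase(rho, gamma, t):
    """Exponentially damp off-diagonal terms: a toy stand-in for 'collapse'
    toward a single interpretation. tau_res would be the time at which
    coherence falls below some resolution threshold."""
    diagonal = np.diag(np.diag(rho))
    return diagonal + np.exp(-gamma * t) * (rho - diagonal)

if __name__ == "__main__":
    # Two concurrent readings of a garden-path region, unequally weighted.
    rho0 = density_matrix([0.8, 0.6])
    # Dephasing drives mu down while entropy rises as the state mixes,
    # loosely mirroring resolution of competing interpretations.
    for t in (0.0, 0.5, 1.0, 2.0):
        rho_t = dephase(rho0, gamma=1.5, t=t)
        print(f"t={t:.1f}  mu={coherence_mu(rho_t):.3f}  "
              f"S={von_neumann_entropy(rho_t):.3f}")
```

Under this toy dynamics the pure initial state has entropy near zero and maximal μ; as the off-diagonal terms decay, μ falls and entropy rises toward that of the diagonal mixture, which is only a loose analogue of the retrieval dynamics the abstract describes.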
