Engagement as Entanglement: Variance Signatures of Bidirectional Context Coupling in Large Language Models

Abstract

Recent large-scale simulations demonstrate that LLMs exhibit systematic performance degradation in multi-turn conversations, with unreliability increasing by 112% across 200,000+ conversations (Laban et al., 2025). However, this "Lost in Conversation" phenomenon lacks a mechanistic explanation. We present an entanglement framework for understanding context sensitivity in large language models, based on embedding-level variance analysis across 12 model-domain runs (4 philosophy, 8 medical) and 360 position-level measurements. We demonstrate that ΔRCI, a measure of context sensitivity introduced in Papers 1–2, tracks variance reduction in response embeddings. The correlation between ΔRCI and the Variance Reduction Index (VRI = 1 − Var_Ratio) is strong and highly significant (r = 0.76, p = 2.37 × 10⁻⁶⁸, N = 360). This relationship reveals bidirectional context coupling: convergent entanglement (Var_Ratio < 1, ΔRCI > 0), in which context narrows the response distribution, and divergent entanglement (Var_Ratio > 1, ΔRCI < 0), in which context widens it. The "Lost in Conversation" effect corresponds specifically to divergent entanglement. Two medical models (Llama 4 Scout: Var_Ratio = 7.46; Llama 4 Maverick: Var_Ratio = 2.64) exhibit extreme divergent entanglement at the summarization position (P30), producing highly unstable outputs precisely where task enablement is expected. We introduce the Entanglement Stability Index (ESI) to predict which models will exhibit instability in multi-turn settings, transforming the descriptive observation that "LLMs get lost" into a predictive science of human-AI relational dynamics.
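
As a concrete illustration of the variance quantities above, the minimal sketch below computes Var_Ratio and VRI from two sets of response embeddings. It assumes Var_Ratio is the total response-embedding variance with context divided by the variance without context, with embeddings stacked as n_responses × embedding_dim arrays; the exact definitions (and the computation of ΔRCI) come from Papers 1–2 and may differ in detail. The example data are hypothetical.

```python
import numpy as np


def total_variance(embeddings: np.ndarray) -> float:
    """Sum of per-dimension variances across a set of response embeddings
    (shape: n_responses x embedding_dim)."""
    return float(np.var(embeddings, axis=0, ddof=1).sum())


def variance_ratio(with_context: np.ndarray, without_context: np.ndarray) -> float:
    """Var_Ratio: total embedding variance with context divided by the
    variance without context. Values < 1 indicate convergent entanglement
    (context narrows the response distribution); values > 1 indicate
    divergent entanglement (context widens it)."""
    return total_variance(with_context) / total_variance(without_context)


def vri(with_context: np.ndarray, without_context: np.ndarray) -> float:
    """Variance Reduction Index: VRI = 1 - Var_Ratio."""
    return 1.0 - variance_ratio(with_context, without_context)


# Hypothetical example: 40 responses per condition in a 768-d embedding space.
rng = np.random.default_rng(0)
no_ctx = rng.normal(0.0, 1.0, size=(40, 768))
with_ctx = rng.normal(0.0, 0.5, size=(40, 768))  # context narrows the distribution
print(f"Var_Ratio = {variance_ratio(with_ctx, no_ctx):.2f}, "
      f"VRI = {vri(with_ctx, no_ctx):.2f}")
```

Under these assumptions, a model like Llama 4 Scout at P30 (Var_Ratio = 7.46) would yield VRI = −6.46, i.e., strongly divergent entanglement.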
