Narrative-Dynamical Systems (NDS): A Closed-Loop Architecture for Long-Horizon Autoregressive Decoding via Orthogonal Logit Projection and Dynamic Barriers


Abstract

Standard autoregressive language models generate text in an open-loop fashion, with no feedback to counteract the accumulation of errors over time. Consequently, despite their local fluency, these systems frequently suffer from long-horizon pathologies such as repetitive loops, diminished lexical diversity, and, under truncation-based sampling, distributional collapse. To address this, we present Narrative-Dynamical Systems (NDS), a closed-loop decoding architecture that couples a frozen generator with a frozen encoder through a modular pre-sampling logit processor. NDS monitors online statistics across three channels (representation drift, token-level redundancy, and distributional concentration) and intervenes only when these signals jointly indicate a transition into a degenerate, low-drift/high-redundancy regime. The control action is injected directly into logit space as a combination of (i) an orthogonally projected ascent step derived from a quadratic KL trust-region surrogate, and (ii) a sparse dynamic barrier that suppresses empirically identified attractor token sets. We provide explicit derivations for the KL approximation and projection steps, together with a closed-form bound showing exponential attenuation of the probability mass assigned to the attractor set.
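The abstract names but does not reproduce the KL approximation. For orientation, one standard form such a quadratic surrogate can take (an assumption about the paper's derivation, not a reproduction of it) is the second-order expansion of the KL divergence under a logit perturbation, whose Hessian is the softmax Fisher matrix:

    D_{\mathrm{KL}}\big(\mathrm{softmax}(z)\,\|\,\mathrm{softmax}(z+\delta)\big)
      \;\approx\; \tfrac{1}{2}\,\delta^{\top} F(z)\,\delta,
    \qquad F(z) = \mathrm{diag}(p) - p\,p^{\top}, \quad p = \mathrm{softmax}(z).

Under this surrogate, the largest step along a fixed (projected) ascent direction g that respects the trust-region budget \tfrac{1}{2}\,\delta^{\top} F \delta \le \varepsilon is \delta^{\star} = \sqrt{2\varepsilon / (g^{\top} F(z)\, g)}\; g.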
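To make the closed-loop control concrete, below is a minimal, hypothetical sketch of an NDS-style pre-sampling logit processor in plain NumPy. All names, thresholds, the entropy-ascent objective, and the call signature are illustrative assumptions; the paper's actual monitoring statistics, projection, and barrier constructions may differ.

    import numpy as np
    from collections import Counter, deque

    class NDSLogitProcessor:
        """Closed-loop logit control: monitor drift/redundancy, intervene pre-sampling."""

        def __init__(self, drift_thresh=0.05, redundancy_thresh=0.5,
                     kl_budget=0.1, barrier_strength=5.0, window=32):
            self.drift_thresh = drift_thresh            # "low drift" cutoff (assumed value)
            self.redundancy_thresh = redundancy_thresh  # "high redundancy" cutoff (assumed)
            self.kl_budget = kl_budget                  # trust-region radius epsilon (assumed)
            self.barrier_strength = barrier_strength    # barrier magnitude beta (assumed)
            self.recent = deque(maxlen=window)          # sliding window of emitted token ids
            self.prev_state = None                      # previous encoder representation

        def _drift(self, state):
            # Representation drift: cosine distance between successive encoder states.
            if self.prev_state is None:
                return 1.0
            cos = state @ self.prev_state / (
                np.linalg.norm(state) * np.linalg.norm(self.prev_state) + 1e-8)
            return 1.0 - float(cos)

        def _redundancy(self):
            # Token-level redundancy: repeated-token fraction inside the window.
            if not self.recent:
                return 0.0
            return 1.0 - len(set(self.recent)) / len(self.recent)

        def _attractor_set(self):
            # Empirical attractor set: token ids recurring inside the window.
            return [t for t, c in Counter(self.recent).items() if c > 1]

        def __call__(self, logits, encoder_state, last_token):
            self.recent.append(last_token)
            drift = self._drift(encoder_state)
            self.prev_state = encoder_state
            # Intervene only in the degenerate regime: low drift AND high redundancy.
            if drift >= self.drift_thresh or self._redundancy() <= self.redundancy_thresh:
                return logits
            p = np.exp(logits - logits.max())
            p /= p.sum()
            # (i) Ascent step. The objective here is entropy (an assumption);
            #     its logit-space gradient is dH/dz_j = -p_j (log p_j + H).
            H = -float(np.sum(p * np.log(p + 1e-12)))
            g = -p * (np.log(p + 1e-12) + H)
            # Orthogonal projection off the all-ones (shift) direction; the entropy
            # gradient already sums to zero analytically, so this is numerical safety.
            g -= g.mean()
            # Scale so the quadratic KL surrogate 0.5 d^T F d, with softmax Fisher
            # matrix F = diag(p) - p p^T, equals the trust-region budget epsilon.
            gFg = float(np.sum(p * g * g) - np.sum(p * g) ** 2)
            step = np.sqrt(2.0 * self.kl_budget / gFg) * g if gFg > 1e-12 else 0.0
            out = logits + step
            # (ii) Sparse dynamic barrier: penalize the attractor token set.
            out[self._attractor_set()] -= self.barrier_strength
            return out

    # Toy usage with random stand-ins for the generator's logits and encoder state.
    rng = np.random.default_rng(0)
    proc = NDSLogitProcessor()
    adjusted = proc(rng.normal(size=100), rng.normal(size=16), last_token=7)

One design point worth noting: subtracting beta from the attractor logits multiplies the odds of the attractor set against its complement by exp(-beta) at each intervened step, so k consecutive interventions attenuate those odds by exp(-beta * k). This is one elementary route to an exponential-attenuation bound of the kind the abstract claims.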
