Hierarchical Tensorial State Representations for Structured Context Learning in Large Language Models
Abstract
The structured representation of hierarchical dependencies within computational models remains an open challenge, particularly in architectures designed for extended sequence processing. Conventional attention-based mechanisms often struggle to maintain structured long-range dependencies, leading to information fragmentation in tasks that require multi-level contextual reasoning. A hierarchical tensorial state representation is introduced to address these limitations through structured state propagation, ensuring that contextual dependencies evolve coherently across multiple abstraction levels. The tensorial framework integrates into existing transformer-based architectures, providing a structured alternative to token-wise attention mechanisms without introducing prohibitive computational costs. Empirical evaluations on long-context comprehension and compositional reasoning benchmarks demonstrate that tensorial state propagation improves structured information retention while maintaining stable optimization dynamics. Comparative analyses against conventional self-attention mechanisms indicate that structured tensorial encoding enhances robustness to input perturbations, preserving information coherence across extended sequences. Experiments on rare word prediction and multilingual adaptation further show that structured state propagation improves contextual inference across diverse linguistic patterns. The structured integration of tensorial state transitions also yields more stable training gradients, reducing the risk of vanishing or exploding gradients in deeper layers.
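The abstract does not specify the implementation, but the core idea of propagating a per-level state tensor alongside the token stream can be illustrated with a minimal sketch. The following PyTorch module is an assumption-laden illustration, not the paper's method: the class name, the number of abstraction levels, the state dimension, and the choice of a learned bilinear (matrix) transition per level are all hypothetical, chosen only to show how such a layer could slot in as a token-mixing sublayer in place of self-attention.

```python
# Minimal sketch of hierarchical tensorial state propagation (illustrative only).
# Each abstraction level keeps a recurrent state vector that is updated by a
# learned transition matrix; lower levels feed their state upward so that
# higher levels evolve over coarser, more abstract context.
import torch
import torch.nn as nn


class HierarchicalTensorState(nn.Module):
    def __init__(self, d_model: int, state_dim: int = 32, num_levels: int = 3):
        super().__init__()
        self.num_levels = num_levels
        # Per-level projection of the token embedding into the state space.
        self.in_proj = nn.ModuleList(
            nn.Linear(d_model, state_dim) for _ in range(num_levels)
        )
        # Learned transition tensor: one (state_dim x state_dim) map per level.
        self.transition = nn.Parameter(
            torch.randn(num_levels, state_dim, state_dim) * 0.02
        )
        # Read-out that merges all levels back into the model dimension.
        self.out_proj = nn.Linear(num_levels * state_dim, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        batch, seq_len, _ = x.shape
        state_dim = self.transition.shape[-1]
        states = [x.new_zeros(batch, state_dim) for _ in range(self.num_levels)]
        outputs = []
        for t in range(seq_len):
            token = x[:, t]
            carry = None
            for level in range(self.num_levels):
                u = self.in_proj[level](token)
                if carry is not None:
                    # Lower-level state feeds the next abstraction level.
                    u = u + carry
                # Structured state transition; tanh keeps the state bounded,
                # one simple way to avoid exploding activations in deep stacks.
                states[level] = torch.tanh(states[level] @ self.transition[level] + u)
                carry = states[level]
            outputs.append(self.out_proj(torch.cat(states, dim=-1)))
        return torch.stack(outputs, dim=1)  # (batch, seq_len, d_model)


# Example usage: a drop-in token-mixing sublayer with the same input/output shape
# that self-attention would have.
layer = HierarchicalTensorState(d_model=64)
y = layer(torch.randn(2, 16, 64))  # y has shape (2, 16, 64)
```

The sequential loop over tokens is kept for clarity; any practical realization of the approach described above would presumably use a parallelized or chunked scan, but the per-level bilinear update conveys the structural idea of state propagation across abstraction levels.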