How Humans Restructure Predictive Models: Context-Tree Dynamics in Sequential Learning
Abstract
Human learning in sequential environments often involves uncovering latent statistical structures that govern event regularities. In this study, we investigate how individuals adapt their internal predictive models while playing the Goalkeeper Game, a stochastic sequence prediction task driven by a probabilistic context tree. We introduce a real-time context-tree inference framework that reconstructs the evolving internal models underlying participants’ trial-by-trial choices. By tracking the entropy of the inferred context trees, we reveal that learning unfolds through two intertwined processes: frequent refinements, corresponding to gradual adjustments of transition probabilities within a stable structure, and rare transitions, corresponding to structural reorganizations of the predictive model. Entropy reductions parallel improvements in success rate, demonstrating that participants progressively internalize the underlying generative process. The waiting-time distribution of transitions follows a sub-exponential Weibull law, indicating history-dependent reorganization dynamics consistent with bursty, non-memoryless adaptation. These findings suggest that human statistical learning proceeds through a balance between exploitation and exploration. Our framework provides a quantitative and interpretable tool for modeling the continuous–discrete dynamics of adaptive learning in probabilistic environments.
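To make the tracked quantity concrete, here is a minimal sketch of the entropy of a probabilistic context tree. The tree, its contexts, and its transition probabilities below are purely illustrative assumptions, not taken from the paper; the paper's inference framework estimates such a tree from participants' choices, whereas this sketch simply hard-codes one and computes its average next-symbol entropy.

```python
import math

# Hypothetical toy context tree over the binary alphabet {0, 1}.
# Each key is a context (most recent symbols, newest last); each value
# gives P(next symbol | context). All numbers here are illustrative.
tree = {
    "0":  {0: 0.3, 1: 0.7},
    "10": {0: 0.8, 1: 0.2},
    "11": {0: 0.5, 1: 0.5},
}

def context_entropy(dist):
    """Shannon entropy (bits) of one next-symbol distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def tree_entropy(tree, weights=None):
    """Average entropy of the tree: each context's entropy weighted by
    how often that context occurs (uniform weights if none are given)."""
    if weights is None:
        weights = {c: 1.0 / len(tree) for c in tree}
    return sum(weights[c] * context_entropy(d) for c, d in tree.items())

print(round(tree_entropy(tree), 3))  # average entropy of this toy tree
```

Under this reading, a "refinement" would nudge the probability values inside a fixed set of contexts (lowering or raising `tree_entropy` smoothly), while a "transition" would change the set of contexts itself, producing a discrete jump in the inferred model.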