Interpretable and Brain-Inspired Recurrent Model of Hierarchical Decision-Making Capturing Trial-by-Trial Variability
Abstract
Humans often make decisions in hierarchical environments, where low-level perceptual judgments inform high-level strategies. Understanding how the brain computationally navigates such multi-level decisions remains an open challenge. While existing models offer statistical insights, they often fall short in capturing the neural mechanisms and trial-by-trial variability underlying individual choices. To address this gap, we developed a neurocomputational model that integrates a biologically inspired attractor network for low-level perceptual decisions with a recurrent neural network (RNN) for high-level strategic adjustments. With minimal architectural constraints, the RNN receives only raw firing rates, feedback, and the prior environment state, learning to infer environment-switching strategies without explicit access to confidence or stimulus strength. In a hierarchical task combining motion discrimination and bandit decisions (N = 9; ~10,800 trials), the model reproduced three hallmark behavioral patterns observed in humans. Unlike previous models, it also captured trial-to-trial variability in switching decisions and implicitly learned to estimate decision confidence. To interpret the model, we performed representational and sensitivity analyses: representational analyses revealed internal dynamics consistent with evidence accumulation in the anterior cingulate cortex (ACC), while sensitivity analysis identified feedback as the dominant influence on strategy, modulated by recent trial history. This framework combines interpretability and predictive power, moving beyond simple data fitting to provide mechanistic insights into how the brain integrates confidence and feedback to guide adaptive behavior in hierarchical decision-making.
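The abstract describes an RNN that receives only the attractor network's raw firing rates, trial feedback, and the prior environment state, and outputs an environment-switching decision. The following is a minimal sketch of that input-output structure, not the authors' implementation: the layer sizes, module name StrategyRNN, and the use of a GRU with a single switch-logit readout are all illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the authors' code): an RNN that maps
# per-trial inputs -- low-level attractor-network firing rates, binary feedback,
# and the previous environment label -- to a switch/stay decision.
import torch
import torch.nn as nn


class StrategyRNN(nn.Module):
    def __init__(self, n_units=2, hidden_size=32):
        super().__init__()
        # Input per trial: firing rates of the attractor populations (n_units),
        # plus feedback (1) and previous environment label (1).
        self.rnn = nn.GRU(n_units + 2, hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, 1)  # logit for "switch environment"

    def forward(self, firing_rates, feedback, prev_env):
        # firing_rates: (batch, trials, n_units); feedback, prev_env: (batch, trials)
        x = torch.cat(
            [firing_rates, feedback.unsqueeze(-1), prev_env.unsqueeze(-1)], dim=-1
        )
        h, _ = self.rnn(x)
        return self.readout(h).squeeze(-1)  # one switch logit per trial


# Example forward pass on random stand-in data (hypothetical sizes).
model = StrategyRNN()
rates = torch.rand(8, 20, 2)                    # 8 sessions x 20 trials x 2 populations
fb = torch.randint(0, 2, (8, 20)).float()       # feedback per trial
env = torch.randint(0, 2, (8, 20)).float()      # previous environment label
switch_logits = model(rates, fb, env)
```

In this sketch, confidence and stimulus strength are never provided explicitly; any sensitivity of the switch logit to decision confidence would have to be learned from the firing-rate and feedback history, consistent with the abstract's claim that the model implicitly learns to estimate confidence.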