An Information-Theoretic Framework for Understanding Learning and Choice Under Uncertainty

Abstract

Although information theory is widely used in neuroscience, its application has primarily been limited to the analysis of neural activity, with much less emphasis on behavioral data. This is despite the fact that the discrete nature of behavioral variables in many experimental settings—such as choice and reward outcomes—makes them particularly well-suited to information-theoretic analysis. In this study, we provide a framework for how behavioral metrics based on conditional entropy and mutual information can be used to infer an agent’s choice and learning mechanisms under uncertainty. Using simulations of various reinforcement learning models as ground truth, we illustrate how information-theoretic metrics can be applied to uncover the underlying choice and learning mechanisms. Specifically, we show that these metrics can reveal: (1) a positivity bias, reflected in higher learning rates for rewarded compared to unrewarded outcomes; (2) gradual, history-dependent changes in learning rates indicative of metaplasticity; (3) adjustments in choice strategies driven by the global reward rate; and (4) the presence of alternative learning strategies. Overall, our study highlights how information theory can leverage the discrete, trial-by-trial structure of many cognitive tasks, offering a flexible framework for investigating neural mechanisms of learning and choice under uncertainty—with potential for further extension.
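The two quantities named in the abstract, conditional entropy and mutual information, are straightforward to estimate from discrete trial-by-trial data. As a minimal sketch (not the authors' implementation, and using simple plug-in frequency estimates that are biased for short sequences), the following computes both in bits for a pair of discrete behavioral sequences such as choices and reward outcomes:

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits between two discrete sequences.

    x, y -- equal-length sequences of discrete labels (e.g., choices, rewards).
    Probabilities are estimated as empirical frequencies across trials.
    """
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            p_xy = np.mean((x == xv) & (y == yv))  # joint frequency
            if p_xy == 0.0:
                continue  # 0 * log(0) terms contribute nothing
            p_x = np.mean(x == xv)
            p_y = np.mean(y == yv)
            mi += p_xy * np.log2(p_xy / (p_x * p_y))
    return mi

def conditional_entropy(x, y):
    """Plug-in estimate of H(X|Y) in bits, via H(X|Y) = H(X) - I(X;Y)."""
    x = np.asarray(x)
    p = np.array([np.mean(x == v) for v in np.unique(x)])
    h_x = -np.sum(p * np.log2(p))  # marginal entropy H(X)
    return h_x - mutual_information(x, y)
```

For example, if an agent's next choice is fully determined by the current reward, the mutual information between the two sequences equals the choice entropy and the conditional entropy drops to zero; independent sequences give zero mutual information.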