Inferring learning rules during de novo task learning

Abstract

Identifying the learning rules that govern behavior is a central problem in neuroscience. While reinforcement learning (RL) offers a unifying theoretical framework, most empirical studies of animal learning behavior have focused on non-stationary environments (e.g. changing reward probabilities in a known task), as opposed to acquiring an entirely new task from scratch. Here we introduce a statistical framework to infer reinforcement learning rules directly from single-animal behavior. Applied to mice learning a perceptual decision-making task, our approach reveals that policy-gradient-like rules capture de novo task learning better than classical temporal-difference algorithms. By fitting flexible parametric learning rules, we uncover systematic deviations from standard RL models, including side-specific learning rates and negative reward baselines. Together, these parameters account for side-biased learning, as well as forgetting and consecutive errors due to aversive responses to incorrect trials. Extending the framework with latent, dynamic learning rates further reveals that animals adapt their learning rates over training and across curricula. These results provide a statistical account of how animals learn from scratch and highlight key departures from classical reinforcement learning algorithms.
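To make the abstract's key ingredients concrete, the sketch below shows one possible reading of a policy-gradient-like (REINFORCE-style) trial update for a two-choice perceptual task, extended with side-specific learning rates and a fitted reward value for error trials. This is a minimal illustration under assumed conventions, not the paper's fitted model: the function name `policy_gradient_trial`, the logistic policy parameterisation, and the parameter values (`eta`, `r_error`) are hypothetical.

```python
import numpy as np

def policy_gradient_trial(w, stimulus, action, correct,
                          eta=(0.05, 0.03), r_error=-0.2):
    """One-trial REINFORCE-style update for a two-choice task.

    Illustrative parameterisation (assumptions, not the paper's model):
      - logistic policy over left (action=0) / right (action=1) choices,
      - side-specific learning rates eta[action],
      - a free reward value r_error for incorrect trials; a negative fit
        makes errors aversive (the chosen side is actively unlearned)
        rather than merely unrewarded.
    """
    # Probability of choosing "right" under a logistic policy
    p_right = 1.0 / (1.0 + np.exp(-np.dot(w, stimulus)))
    # Gradient of log pi(action | stimulus) with respect to the weights
    grad_logp = (action - p_right) * stimulus
    # Trial outcome: unit reward when correct, fitted baseline when wrong
    reward = 1.0 if correct else r_error
    # Policy-gradient step with the learning rate of the chosen side
    return w + eta[action] * reward * grad_logp

# Example usage: one rewarded rightward choice on a two-feature stimulus
w = np.zeros(2)
w = policy_gradient_trial(w, stimulus=np.array([1.0, 0.3]), action=1, correct=True)
```

In this reading, side-specific learning rates produce side-biased learning, and a negative error reward yields the forgetting and consecutive-error patterns the abstract attributes to aversive responses on incorrect trials.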
