Excessive flexibility? Recurrent neural networks can accommodate individual differences in reinforcement learning through in-context adaptation

Abstract

Cognitive and computational modeling is used to understand the processes underlying behavior in humans and other animals. A common approach in this field relies on theoretically constructed cognitive models, such as reinforcement learning models. However, human and animal decision-making often deviates from the predictions of these theoretical models. To capture characteristics that such cognitive models fail to account for, recurrent neural networks (RNNs) have been increasingly used to model choice behavior involving reinforcement learning. RNNs can capture how choice probabilities change depending on past experience. In this work, we demonstrate that RNNs can improve future choice predictions by capturing individual differences on the basis of past behavior, even when a single model is fit across the entire population. We term this property of the RNN the individual difference tracking (IDT) property. While the IDT property might be useful for prediction, it may introduce excessive flexibility when RNNs are used as benchmarks for predictive accuracy. We investigate the nature of the IDT property through simulation studies and examine how it affects the interpretation of predictive accuracy when RNNs are used as benchmarks for cognitive models. We also present examples using real-world data. Through these analyses, we discuss practical considerations and limitations in using RNNs as benchmarks for cognitive models and propose directions for future research.
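As a concrete illustration of the setting described above, the sketch below simulates the standard kind of cognitive model the abstract refers to: a softmax Q-learning agent on a two-armed bandit, with per-individual parameters (a learning rate `alpha` and an inverse temperature `beta`). This is a generic example, not code from the article; the parameter values and the `simulate_q_learning_agent` function are illustrative. Data like this, generated from agents with different parameters, is the kind of population on which a single RNN would be fit and could exhibit the IDT property.

```python
import math
import random

def simulate_q_learning_agent(alpha, beta, reward_probs, n_trials, seed=0):
    """Simulate a two-armed bandit task with a softmax Q-learning agent.

    alpha: learning rate, beta: inverse temperature -- hypothetical
    per-individual parameters of the kind the article discusses.
    Returns (choices, rewards) as parallel lists.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]  # action values for the two arms
    choices, rewards = [], []
    for _ in range(n_trials):
        # Softmax (logistic) probability of choosing arm 1
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
        c = 1 if rng.random() < p1 else 0
        r = 1.0 if rng.random() < reward_probs[c] else 0.0
        # Delta-rule (Rescorla-Wagner) update of the chosen arm's value
        q[c] += alpha * (r - q[c])
        choices.append(c)
        rewards.append(r)
    return choices, rewards

# Two simulated "individuals" differing only in learning rate
fast_choices, fast_rewards = simulate_q_learning_agent(
    alpha=0.5, beta=3.0, reward_probs=[0.2, 0.8], n_trials=200)
slow_choices, slow_rewards = simulate_q_learning_agent(
    alpha=0.05, beta=3.0, reward_probs=[0.2, 0.8], n_trials=200)
```

In this setup, a single RNN trained on many such sequences would receive only the choice-reward history as input; tracking an individual's effective learning rate from that history, without being told the parameters, is exactly the in-context adaptation the article examines.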
