Hybrid Neural-Cognitive Models Reveal Flexible Context-Dependent Information Processing in Reversal Learning


Abstract

Reversal learning tasks provide a key paradigm for studying behavioral flexibility, requiring individuals to update their choices in response to shifting reward contingencies. While reinforcement learning (RL) models have been widely used to provide interpretable accounts of human behavior on such tasks, recent work has shown that they often fail to capture the full complexity of learning dynamics. Artificial neural networks (ANNs), in contrast, often achieve higher predictive accuracy but lack the interpretability of classical RL models. We combined the strengths of both approaches using HybridRNNs, neural-cognitive models that integrate interpretable RL mechanisms with flexible recurrent neural networks (RNNs). We built a series of HybridRNNs to test prior assumptions about reversal learning mechanisms on an open human reversal learning dataset. Our results show that a HybridRNN incorporating choice perseverance and evaluating new rewards against the reward history of alternative choices (Context-ANN) outperformed classical RL models while remaining interpretable. Context-ANN also replicated human behavioral patterns on reversal trials, suggesting that humans employ flexible, context-dependent value updating in reversal learning. This work shows that humans recruit complex cognitive mechanisms even in simple reversal tasks, and paves the way for cognitive models that combine interpretability with predictive accuracy.
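The paper's exact model is not specified in the abstract. As a rough illustration of the two ingredients it names for Context-ANN, choice perseverance and context-dependent (counterfactual) value updating, a minimal two-armed reversal-learning agent might look like the sketch below. All names and parameters (`choice_probs`, `context_update`, `alpha`, `beta`, `kappa`) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def choice_probs(values, last_choice, beta=3.0, kappa=0.5):
    """Softmax over option values plus a perseverance bonus for
    repeating the previous choice (illustrative parameterization)."""
    logits = beta * values.copy()
    if last_choice is not None:
        logits[last_choice] += kappa       # choice perseverance
    logits -= logits.max()                 # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def context_update(values, choice, reward, alpha=0.3):
    """Chosen option moves toward the observed reward; the unchosen
    option moves toward its complement, so each outcome is evaluated
    relative to the alternative (a hypothetical counterfactual rule)."""
    v = values.copy()
    v[choice] += alpha * (reward - v[choice])
    v[1 - choice] += alpha * ((1.0 - reward) - v[1 - choice])
    return v

# Simulate a reversal: option 0 is rewarded 80% of the time for the
# first 50 trials, then the contingency flips to option 1.
rng = np.random.default_rng(0)
values, last = np.zeros(2), None
for t in range(100):
    good = 0 if t < 50 else 1
    choice = rng.choice(2, p=choice_probs(values, last))
    reward = float(rng.random() < (0.8 if choice == good else 0.2))
    values = context_update(values, choice, reward)
    last = choice
```

Because the counterfactual update pushes the unchosen option's value toward the complement of the observed reward, the value estimates cross over quickly after the reversal, which is the kind of rapid post-reversal adjustment this family of models is meant to capture.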
