Hybrid Neural-Cognitive Models Reveal How Memory Shapes Human Reward Learning


Abstract

Human reward-guided learning is typically modeled with simple reinforcement learning algorithms. These models assume that choices depend on a handful of incrementally learned variables that summarize previous outcomes. Here, we scrutinize this account by collecting and modeling a large dataset of human probabilistic reward-learning behavior using a hybrid approach that combines simple reinforcement learning models with artificial neural networks. Our results suggest that human behavior cannot be explained by any algorithm based exclusively on incremental updating of choice variables. Instead, they suggest that human reward learning relies on a flexible memory system that can learn rich representations of past events over multiple timescales. Hence, human reward-guided choices rely on more elaborate memory representations than previously believed.
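The "incremental updating of choice variables" that the abstract contrasts against is usually formalized as a delta-rule (Rescorla-Wagner / Q-learning) update combined with a softmax choice rule. A minimal sketch of that standard account, for context (function names and parameter values are illustrative, not taken from the article):

```python
import math

def softmax(values, beta):
    """Map option values to choice probabilities; beta is the inverse temperature."""
    exps = [math.exp(beta * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def delta_rule_update(values, choice, reward, alpha):
    """Move the chosen option's value a fraction alpha toward the observed reward."""
    values = list(values)
    values[choice] += alpha * (reward - values[choice])
    return values

# Example: two options, one rewarded choice of option 0
values = [0.0, 0.0]
values = delta_rule_update(values, choice=0, reward=1.0, alpha=0.3)
# The chosen value moves 30% of the way toward the reward: 0.0 + 0.3 * (1.0 - 0.0)
probs = softmax(values, beta=2.0)
```

Models of this form summarize the entire outcome history in a few running values; the abstract's claim is that human choices instead reflect richer memory representations than any such summary can capture.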