Explicit memory representations in decisions from experience

Abstract

Reinforcement learning (RL) models explain how people adapt behavior through incremental value updates but assume that individual experiences are not explicitly stored or retrieved. Across two experiments (N = 282 and N = 1,818), we tested whether people rely on such explicit memory representations during experience-based choice. Participants sampled outcomes from two lotteries and, in an “ignore” condition, were instructed to disregard specific outcomes before deciding. Ignoring these outcomes substantially altered preferences, suggesting that choices were guided by explicit episodic representations rather than cumulative reinforcement. Frequency judgments revealed generally accurate memory for experienced outcomes, but reduced precision for continuous compared with discrete outcome distributions. These findings challenge purely incremental RL accounts and support theories proposing that human choice integrates flexible episodic memory with reinforcement mechanisms, bridging models of learning, memory, and decision-making.
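The contrast between the two accounts can be made concrete with a minimal sketch. The code below is an illustrative assumption, not the authors' model or task parameters: a delta-rule (incremental RL) learner collapses outcomes into a single running value, so a specific episode cannot be removed after the fact, whereas an episodic learner stores outcomes explicitly and can exclude the ones an "ignore" instruction targets.

```python
def incremental_value(outcomes, alpha=0.1):
    """Delta-rule (incremental RL) estimate: each outcome nudges a running
    value; individual episodes are not retrievable afterwards."""
    v = 0.0
    for r in outcomes:
        v += alpha * (r - v)  # V <- V + alpha * (r - V)
    return v

def episodic_value(outcomes, ignore=()):
    """Episodic estimate: outcomes are stored as explicit episodes, so
    specific ones (e.g. those an instruction says to ignore) can be excluded."""
    kept = [r for i, r in enumerate(outcomes) if i not in set(ignore)]
    return sum(kept) / len(kept) if kept else 0.0

samples = [10, 0, 0, 10, 10]  # hypothetical sampled lottery outcomes
print(incremental_value(samples))           # cumulative value; episode 0 cannot be undone
print(episodic_value(samples))              # mean of all stored episodes -> 6.0
print(episodic_value(samples, ignore=[0]))  # "ignore" the first outcome -> 5.0
```

Only the episodic learner's estimate changes when an outcome is ignored, which is the behavioral signature the experiments test for.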

Statement of Relevance

How people learn from experience shapes decisions in many everyday situations, from choosing a stock to invest in to selecting a restaurant or deciding which route to take home. Our experiments show that people do not rely solely on incremental value updates but instead exploit memory representations of past outcomes to guide choices. When asked to ignore certain outcomes, participants adjusted their preferences in line with these representations. These findings reveal that human decision-making also engages flexible use of memory for past experiences, highlighting the importance of memory-based mechanisms in adaptive, real-world choice behavior.