Affective Valence Represents Value During Reinforcement Learning

Abstract

Affective valence and reinforcement learning (RL) are increasingly recognized to be closely connected, yet the exact nature of their relationship remains unclear. Here, we investigated how RL-related computations contribute to affective valence and how affective valence contributes to RL. Applying an original computational method, we found that affective experience during RL is best explained by a combination of three prominent theoretical perspectives: valence is determined by reward, prediction errors, and counterfactual comparisons. To account for these findings, we hypothesized that the valence of affective responses represents (i.e., encodes, tracks) the value of the eliciting stimulus, and contributes to RL alongside reward per se. We confirmed this hypothesis across three RL tasks, showing that decision-makers prefer choice options associated with more positive affect in addition to preferring options associated with greater reward. Altogether, our results establish a unifying model of the valence-RL relationship with key implications for the mechanisms of affective experience and human RL. Moreover, they validate a powerful new computational framework for future research on these topics.
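To make the hypothesized relationship concrete, here is a minimal illustrative sketch (not the authors' actual model) of how valence might be computed as a weighted combination of reward, prediction error, and counterfactual comparison, and how that valence could feed back into choice alongside reward. All variable names, weights, and the learning-rule details are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights on the three valence components (assumed values)
W_REWARD, W_RPE, W_CF = 0.4, 0.4, 0.2
ALPHA = 0.2          # learning rate for both value and affect estimates
BETA_Q, BETA_A = 3.0, 3.0   # inverse temperatures for reward value vs. affect

n_options, n_trials = 2, 200
reward_prob = np.array([0.7, 0.3])        # assumed reward probabilities
Q = np.zeros(n_options)                    # learned reward value per option
A = np.zeros(n_options)                    # learned affect (valence) per option

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for t in range(n_trials):
    # Choice driven by both expected reward and expected affect
    p = softmax(BETA_Q * Q + BETA_A * A)
    choice = rng.choice(n_options, p=p)
    unchosen = 1 - choice

    reward = float(rng.random() < reward_prob[choice])
    forgone = float(rng.random() < reward_prob[unchosen])  # counterfactual outcome

    rpe = reward - Q[choice]              # reward prediction error
    counterfactual = reward - forgone     # obtained vs. forgone comparison

    # Valence as a weighted sum of reward, RPE, and counterfactual comparison
    valence = W_REWARD * reward + W_RPE * rpe + W_CF * counterfactual

    # Update reward value from reward, affect value from valence
    Q[choice] += ALPHA * rpe
    A[choice] += ALPHA * (valence - A[choice])

print("Learned reward values:", Q)
print("Learned affect values:", A)
```

Under this sketch, options that reliably produce positive affect are preferred even when their learned reward values are similar, which is one way to operationalize the claim that decision-makers prefer options associated with more positive affect in addition to greater reward.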
