Investigating the relationship between affective valence and reinforcement learning

Abstract

Affective valence and reinforcement learning (RL) are increasingly recognized to be closely connected, yet the exact nature of their relationship remains unclear. Here, we investigated how RL-related computations contribute to affective valence, and how affective valence, in turn, contributes to RL. Applying an original computational method, we found that affective experience during RL tasks was best explained by a combination of three prominent theoretical perspectives: valence is determined by reward, prediction errors, and counterfactual comparisons. Further, we found that actions were reinforced by affective responses in addition to external rewards: participants preferred choice options that led to more positive affect, in addition to preferring options that led to greater reward. Altogether, our results illuminate both the role of RL computations in affective experience and the role of affect in RL, providing insight into the mechanisms of affect, learning, and choice. Moreover, our studies validate a powerful new computational framework for future research on these topics.
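To make the abstract's modeling claims concrete, below is a minimal sketch of the kind of model it describes: momentary valence as a weighted mix of reward, reward prediction error, and a counterfactual comparison, with that affect signal feeding back as an additional reinforcer in a Q-learning update. This is an illustrative assumption, not the authors' actual method; the two-armed bandit setup, the softmax policy, and all parameter values and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not taken from the paper).
alpha = 0.3        # learning rate
w_reward = 1.0     # weight on obtained reward
w_rpe = 0.5        # weight on reward prediction error
w_cf = 0.5         # weight on counterfactual comparison
w_affect = 0.4     # weight of affect as an additional reinforcer
beta = 3.0         # softmax inverse temperature

n_arms = 2
p_reward = np.array([0.7, 0.3])   # assumed reward probabilities
Q = np.zeros(n_arms)              # learned action values

for t in range(200):
    # Softmax choice over current action values.
    probs = np.exp(beta * Q) / np.exp(beta * Q).sum()
    choice = rng.choice(n_arms, p=probs)

    # Obtained and forgone (counterfactual) outcomes.
    outcomes = (rng.random(n_arms) < p_reward).astype(float)
    reward = outcomes[choice]
    forgone = outcomes[1 - choice]

    # Valence as a weighted combination of the three theoretical
    # terms named in the abstract: the reward itself, the prediction
    # error, and the comparison against the unchosen option's outcome.
    rpe = reward - Q[choice]
    valence = w_reward * reward + w_rpe * rpe + w_cf * (reward - forgone)

    # Affect acts as a reinforcer alongside the external reward,
    # so options that produced more positive affect gain extra value.
    Q[choice] += alpha * ((reward + w_affect * valence) - Q[choice])
```

In this sketch, setting w_affect to zero recovers standard Q-learning, so the affect term isolates the abstract's claim that choices are reinforced by affective responses over and above external reward.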
