Evaluating the ecological validity and mechanism of a generative model-based decomposition of affective variability

Abstract

Affective variability is a pervasive phenomenon with important implications for well-being and psychopathology. Yet the broad concept of "variability" may conflate distinct processes, such as transient fluctuations versus more sustained shifts. Reinforcement learning (RL) offers a mechanistic framework for these processes, but RL is often studied in artificial settings, raising questions about ecological validity. We combined RL-based task measures with real-world experience sampling (ESM) data from 339 participants. Using a computational model, we decomposed affective variability into short-lived "affective noise," reflecting immediate reactivity to rewards, and longer-term "affective volatility," reflecting sustained responses to past rewards. Task-derived noise was driven by recent outcomes, while volatility reflected more distant ones. Importantly, task-based noise and volatility selectively mapped onto their real-world ESM counterparts. These findings provide a mechanistic account of distinct reward-processing timescales underlying affective variability and demonstrate the ecological validity of laboratory tasks for studying real-world affect dynamics.
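The abstract describes decomposing affect into a fast "noise" component driven by the most recent reward and a slower "volatility" component driven by more distant rewards, but the paper's actual model equations are not reproduced on this page. The sketch below is a minimal illustrative example, not the authors' model: it assumes affect is a weighted sum of the current reward and an exponentially decaying trace of earlier rewards, and all names and parameter values (simulate_affect, w_noise, w_vol, gamma) are hypothetical.

import numpy as np

def simulate_affect(rewards, w_noise=0.6, w_vol=0.4, gamma=0.85, baseline=0.0):
    """Toy two-timescale affect model (illustrative only).

    Each trial's affect is the sum of:
      - a fast 'noise' term tied to the current reward, and
      - a slow 'volatility' term tied to an exponentially discounted
        history of earlier rewards.
    """
    affect = np.zeros(len(rewards))
    history = 0.0  # exponentially discounted sum of past rewards
    for t, r in enumerate(rewards):
        noise_component = w_noise * r            # immediate reactivity to the current outcome
        volatility_component = w_vol * history   # sustained response to more distant outcomes
        affect[t] = baseline + noise_component + volatility_component
        history = gamma * history + r            # update the slow reward trace
    return affect

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rewards = rng.choice([0.0, 1.0], size=50)    # binary reward sequence for illustration
    print(simulate_affect(rewards)[:10])

In a model of this form, lowering gamma shortens the reward history that feeds the slow component, so the two terms separate the kind of short-lived versus sustained reward responses the abstract distinguishes.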