Elucidating attentional mechanisms underlying value normalization in human reinforcement learning

Abstract

Contextual valuation is a well-documented phenomenon in reinforcement learning (RL), typically manifesting as range normalization in outcome representation. However, recent findings have revealed systematic deviations from this model, particularly when three options with equally spaced values are presented. In this study, we hypothesized that these distortions in outcome normalization arise from attentional processes. To test this, we conducted three RL experiments with 105 participants while simultaneously tracking their gaze position with eye-tracking. Furthermore, we systematically manipulated attention using both top-down and bottom-up approaches. These manipulations significantly increased the subjective valuation of attended options, thereby supporting a causal role of attention in shaping value representation. To account for these effects, we developed an RL model that integrates attentional mechanisms, wherein gaze duration directly modulates the absolute value of options prior to range normalization. This attentional range model outperformed attention-free alternatives, underscoring the critical influence of attention in value computation.