The precision of attention selection during reward learning influences the mechanisms of value-driven attention

Abstract

Reward-predictive items capture attention even when they are irrelevant to the current goal. While previous studies suggest that value-driven attention generalizes to items sharing critical reward-associated features (e.g., red), recent findings propose an alternative generalization mechanism based on context-dependent feature relationships (e.g., redder). Here, we examined whether the relational coding of reward-associated features is commonly utilized across different learning contexts, particularly those engaging different attention modes (singleton search vs. feature-specific search) and varying levels of stimulus similarity (low vs. high target-distractor similarity). Focusing on value-driven attention based on feature relationships, our results showed that singleton search training led to value-driven relational attention that was independent of target-distractor similarity (Experiments 1a and 1b, n = 40 each). In contrast, feature-specific search training produced value-driven relational attention only when the target was dissimilar to the distractors, but not when they were similar (Experiments 2a and 2b, n = 40 each). These findings suggest a key role of the precision of target selection during reward learning in shaping value-driven attentional mechanisms. When the learning task required only coarse selection (e.g., singleton search or feature-specific search among dissimilar items), a relational code for the reward-associated feature was formed; however, when fine selection was necessary (e.g., feature-specific search among similar items), a more precise code was utilized.