Modulating the Foreground Bias: How Scene Knowledge and Depth Structure Guide Visual Search
Abstract
When viewing real-world scenes, visual information extends across depth, yet attention is not allocated evenly across this space. The current study examined whether the prioritization of near-space information, known as the Foreground Bias, reflects a fixed attentional tendency or a flexible process that can be modulated by scene knowledge. Across two experiments, participants searched for target objects that appeared in either the foreground or background of scenes that were either semantically coherent (Normal) or composed of mismatched foreground and background regions (Chimera scenes; Castelhano et al., 2019). In Experiment 1, participants located targets in the foreground more quickly and with fewer fixations than targets in the background, demonstrating a robust Foreground Bias that was not explained by target size. In Experiment 2, a brief scene preview was introduced to allow participants to encode scene structure before search. Although the Foreground Bias persisted across scene types, the preview selectively reduced the magnitude of the bias in Chimera scenes, suggesting that participants could strategically direct search toward the semantically relevant region when sufficient contextual information was available. Together, these findings indicate that the Foreground Bias reflects a strong default weighting toward near space that can be flexibly adjusted on the basis of prior scene knowledge.