Integrating Ideal Bayesian Searcher and Neural Networks Models for Eye Movement Prediction in a Hybrid Search Task
Abstract
Visual search, in which observers look for a specific item, is a crucial aspect of everyday human interaction with the visual environment. Hybrid search extends this task by requiring observers to search for any item from a given set of objects. While several models can simulate human eye movements in visual search tasks within natural scenes, none can do so in hybrid search tasks within similar environments. In this work, we present an enhanced version of the neural network Entropy Limit Minimization (nnELM) model, which is based on a Bayesian framework and decision theory. We also present the Hybrid Search Eye Movements (HSEM) Dataset, comprising several thousand human eye movements recorded during hybrid search tasks in natural scenes. A key challenge in hybrid search, absent in visual search, is that participants may search for different objects at different points in time. To address this, we developed a strategy based on the posterior probability distribution generated after each fixation. By adjusting the model's peripheral visibility, we made early search stages more efficient, bringing it closer to human behaviour. Additionally, limiting the model's memory capacity reduced its success in longer searches, mirroring human performance. To validate these improvements, we compared our model against participants from the HSEM dataset and against existing models in a visual search benchmark. Altogether, the new nnELM model not only accounts for hybrid search tasks but also closely replicates human behaviour in natural scenes. This work advances our understanding of the complex processes underlying visual and hybrid search while maintaining model interpretability.
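The core idea of an ideal Bayesian searcher with limited peripheral visibility can be sketched as follows. This is a minimal toy illustration, not the authors' nnELM implementation: the Gaussian visibility falloff, the grid representation, and all function names are assumptions. After a fixation that fails to find the target, the posterior over target locations is updated by downweighting cells in proportion to how visible they were from the fixation point; in hybrid search, the same update would be carried over a posterior spanning every item in the memorised target set.

```python
import numpy as np

def visibility(shape, fixation, decay=0.5):
    """Detection probability as a function of eccentricity from the
    fixated cell (assumed Gaussian falloff; the real model's
    visibility map is calibrated, not fixed like this)."""
    ys, xs = np.indices(shape)
    d2 = (ys - fixation[0]) ** 2 + (xs - fixation[1]) ** 2
    return np.exp(-decay * d2)

def update_after_miss(prior, fixation, decay=0.5):
    """Bayesian update after a fixation that did NOT find the target:
    p(target at c | miss) ∝ p(miss | target at c) * p(target at c),
    with p(miss | target at c) = 1 - visibility at cell c."""
    p_miss = 1.0 - visibility(prior.shape, fixation, decay)
    posterior = prior * p_miss
    return posterior / posterior.sum()

# Toy run: uniform prior over a 5x5 grid, fixate the centre cell.
prior = np.full((5, 5), 1.0 / 25)
post = update_after_miss(prior, (2, 2))
# Probability mass moves away from the fixated cell (fully visible,
# so a miss rules it out) toward the poorly seen periphery.
print(post[2, 2], post[0, 0])
```

An entropy-limit-minimisation rule of the kind the abstract names would then pick the next fixation to maximally reduce the uncertainty of this posterior, rather than simply fixating its current maximum.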