Enhancing TVA with Bayesian Methods: A Tutorial and New Insights into Computational Modeling of Visual Attention using RStanTVA
Abstract
The Theory of Visual Attention (TVA; Bundesen, 1990) provides a powerful computational framework for quantifying core components of visual selection. However, existing implementations such as LibTVA are limited by platform dependence, licensing restrictions, and a lack of support for hierarchical or Bayesian inference. Here we introduce RStanTVA, an open-source Stan-based implementation of TVA that overcomes these limitations and enables flexible parameter estimation using both maximum-likelihood and Bayesian methods. We validated the package through parameter recovery analyses with simulated data, showing that it recovers true parameters with high accuracy, often surpassing LibTVA in reliability while providing a tenfold gain in computational speed. Regularization through weakly informative priors stabilized difficult-to-estimate parameters and highlighted the benefits of Bayesian modeling. To demonstrate practical applications, we reanalyzed three published datasets. We replicated the high retest and parallel-forms reliability of TVA parameters and revealed systematic practice effects across sessions. We also showed that RStanTVA provides stable and interpretable estimates even from sparse clinical data, underscoring its utility for neuropsychological assessment. Together, these results establish RStanTVA as a modern, flexible, and efficient framework for TVA modeling. By integrating hierarchical and Bayesian approaches, RStanTVA enhances the reliability and interpretability of attentional parameter estimates and expands the scope of TVA applications in basic, applied, and clinical research.