Do modeling choices matter for the reliability of individual difference measures in conflict tasks?

Abstract

There is a growing realization that experimental tasks that produce reliable effects in group comparisons can simultaneously provide unreliable assessments of individual differences. Proposed solutions to this "reliability paradox" range from collecting more test trials to modifying the tasks and/or the way in which effects are measured from these tasks. Here we systematically compare two proposed modeling solutions in a cognitive conflict task. Using the ratio of individual variability of the conflict effect (i.e., signal) and the trial-by-trial variation in the data (i.e., noise) obtained from Bayesian hierarchical modeling, we examine whether improving statistical modeling may improve the reliability of individual differences assessment in four Stroop datasets. The proposed improvements are 1) increasing the descriptive adequacy of the statistical models from which conflict effects are derived, and 2) using psychologically motivated measures from cognitive models. Our results show that modeling choices do not have a consistent effect on the signal-to-noise ratio: the proposed solutions improved reliability in only one of the four datasets. We provide analytical and simulation-based approaches to compute the signal-to-noise ratio for a range of models of varying sophistication and discuss their potential to aid in developing and comparing new measurement solutions to the reliability paradox.
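
For readers who want a concrete sense of the signal-to-noise quantity described above, the sketch below works through the arithmetic for a simple hierarchical setup in which participant-level conflict effects have standard deviation sigma_delta and trial-level residuals have standard deviation sigma_eps. The variable names, example values, and the difference-score reliability approximation are assumptions chosen for illustration; they are not the article's exact computation or drawn from its datasets.

```python
# A minimal sketch of the signal-to-noise idea described in the abstract.
# All numbers and names below are illustrative assumptions, not values
# taken from the article's datasets.

sigma_delta = 25.0   # SD of participant-level conflict effects, ms ("signal")
sigma_eps = 200.0    # SD of trial-to-trial residuals, ms ("noise")
n_trials = 100       # trials per condition per participant

# Signal-to-noise ratio: individual variability of the conflict effect
# relative to trial-by-trial variation in the data.
snr = sigma_delta / sigma_eps

# Approximate reliability of an observed difference score (incongruent minus
# congruent condition means), assuming independent, homoscedastic trials:
# the error variance of a difference of two condition means is 2 * sigma_eps**2 / n.
error_var = 2 * sigma_eps**2 / n_trials
reliability = sigma_delta**2 / (sigma_delta**2 + error_var)

print(f"SNR = {snr:.3f}, approximate reliability = {reliability:.2f}")
```

Under these assumed values the reliability of the observed conflict effect stays well below conventional standards even with 100 trials per condition, which is the kind of attenuation the reliability paradox refers to.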
