Validity is a Theoretical Problem: A Computational Psychometrics Perspective on How to Measure Cognition
Abstract
Psychometric methods alone cannot determine what cognitive tasks measure. We argue that validity – whether behavioral indicators reflect the cognitive processes they are intended to measure – is fundamentally a theoretical problem. Traditional validation through convergent and discriminant correlations is circular when constructs are defined through the same correlational patterns used to validate them. Computational cognitive models can break this circularity by specifying how cognitive processes generate behavior, allowing validity to be assessed independently for each indicator through simulation. We demonstrate this framework using two models of conflict processing – the Diffusion Model for Conflict (DMC) and the Shrinking Spotlight model (SSP) – applied to tasks commonly used to measure attention control. Simulations reveal four measurement phenomena invisible to correlational validation: (1) process impurity, where many behavioral indicators conflate multiple cognitive processes; (2) differential validity, where RT difference scores uniquely reflect attention-control parameters under both models; (3) reliability-validity dissociations, where highly reliable mean scores show weaker validity for attention control than less reliable difference scores; and (4) correlation transfer failure, where process-level correlations across tasks transfer to indicator-level correlations only to the extent that indicators validly reflect the correlated parameters. These phenomena emerge consistently from both models despite different processing assumptions. Our work shows that advancing validity requires explicit theories of how cognition generates behavior, not more sophisticated psychometric methods or models.