Effect size assumptions in power analyses of computational modelling studies

Abstract

The recent article by Piray (2025) addresses an important question: how to assess statistical sensitivity and power in studies that rely on computational models of behaviour. While model and parameter recovery analyses are now common, there is little consensus on best practices for power analysis in this domain. The paper makes several contributions, including a novel method for controlling false positive rates in Bayesian model selection and a demonstration of the pitfalls of fixed-effects model selection. These developments are likely to be useful to many researchers. However, any power analysis necessarily depends on assumptions about effect size, that is, about how strongly and systematically the pattern of interest is expressed in the population. In the context of Bayesian model selection, effect size corresponds to the strength of the population-level preference for one model over others. The article gives limited consideration to how such effect sizes should be defined, justified, or varied in simulations. As a result, the headline conclusion, that approximately 60-80% of computational modelling studies are underpowered, is not warranted by the analyses presented. To be clear, I am not claiming that the studies considered are adequately powered. Rather, the point is that the reported results do not 'empirically demonstrate' pervasive underpowering in computational modelling studies, and that assessing power requires more careful consideration of effect sizes.
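
The dependence of power on the assumed effect size can be made concrete with a short simulation. The following is a minimal sketch, not the method of Piray (2025): it stands in for Bayesian group model selection with a simple binomial test on per-subject model preferences, and the names (simulated_power, p_prefer_A) are hypothetical. The assumed population-level preference for model A plays the role of the effect size, and the resulting power estimate is determined entirely by that assumption.

    import numpy as np
    from scipy.stats import binomtest

    rng = np.random.default_rng(0)

    def simulated_power(p_prefer_A, n_subjects, n_sims=2000, alpha=0.05):
        # p_prefer_A: assumed probability that a subject's data favour
        # model A, i.e. the population-level effect-size assumption.
        hits = 0
        for _ in range(n_sims):
            # Reduce each subject's Bayesian model comparison to a
            # Bernoulli draw: did model A win for this subject?
            k = rng.binomial(n_subjects, p_prefer_A)
            # Group-level test: do more subjects favour A than chance?
            result = binomtest(k, n_subjects, p=0.5, alternative="greater")
            if result.pvalue < alpha:
                hits += 1
        return hits / n_sims

    # The same design yields very different power estimates depending
    # solely on the assumed effect size:
    for p in (0.55, 0.65, 0.75):
        print(f"assumed preference {p:.2f}: "
              f"power ~ {simulated_power(p, n_subjects=30):.2f}")

Even in this toy setup, the same design and sample size can look severely underpowered or comfortably powered depending solely on which population-level preference strength one assumes, which is why a power analysis that does not define, justify, or vary its effect-size assumptions cannot on its own establish that a literature is underpowered.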
