Optimistic and pessimistic assumptions in Piray’s power analysis for computational model selection

Abstract

In psychology and neuroscience, computational models representing competing hypotheses are often fitted to behavioural data and compared to evaluate their plausibility. In a recent article, Piray proposed a general framework for estimating the statistical power of such model comparisons, focusing on random-effects Bayesian model selection, and suggested that many previous studies may be insufficiently powered. Piray further noted that a large number of studies rely on fixed-effects Bayesian model selection, which assumes that the same model applies to all participants, and argued that this practice can lead to extremely high false-positive rates. These conclusions raise concerns about the validity of findings based on computational model selection. In this commentary, however, we argue that Piray's analyses depart from realistic model-selection scenarios. In particular, his power analysis involves assumptions that are, in different respects, overly pessimistic or overly optimistic, potentially biasing power estimates; similar issues may arise in the simulations used to evaluate fixed-effects Bayesian model selection.
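The contrast between power and false-positive rate in fixed-effects model selection can be illustrated with a small Monte-Carlo sketch. This is a deliberately simplified toy (normally distributed per-subject log Bayes factors, a fixed decision threshold), not Piray's generative model or the commentary's actual simulations; the parameter values (`effect`, `sd`, `threshold`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_fixed_effects(n_subjects=30, n_sims=2000,
                           effect=0.5, sd=3.0, threshold=3.0):
    """Monte-Carlo estimate of how often fixed-effects selection
    declares model A the winner.

    Each subject contributes a log Bayes factor (model A vs. model B)
    drawn from a normal distribution -- a simplifying assumption.
    Fixed-effects selection sums the per-subject log Bayes factors and
    picks model A when the group log Bayes factor exceeds `threshold`.
    """
    lbf = rng.normal(loc=effect, scale=sd, size=(n_sims, n_subjects))
    group_lbf = lbf.sum(axis=1)
    return float(np.mean(group_lbf > threshold))

# power: model A is truly better on average (effect > 0)
power = simulate_fixed_effects(effect=0.5)
# false-positive rate: the models are equivalent (effect = 0)
fpr = simulate_fixed_effects(effect=0.0)
print(f"power ~ {power:.2f}, false-positive rate ~ {fpr:.2f}")
```

Under these toy settings the summed log Bayes factor accumulates evidence across subjects, so even with no true difference the group statistic crosses a fixed threshold in a substantial fraction of simulated experiments, which is the kind of behaviour the false-positive-rate concern points at.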
