Distorting Effects of Optional Stopping with Bayes Factors are Minimal - A Commentary on Anderson et al. (2021)

Abstract

Sample size determination for robust and efficient hypothesis testing has been a longstanding issue in the empirical sciences. In recent years, Bayesian sequential designs – in which the sample size is not fixed a priori but determined by the evidence in the accumulating data – have become increasingly popular. While previous research has repeatedly demonstrated the benefits of this procedure, it has also drawn criticism. Here, I address a critique by Anderson et al. (2021), who used simulations to show that Bayesian sequential designs lead to biased effect size estimates in the obtained data. I argue that the simulations by Anderson et al. (2021) neglect several best-practice recommendations for Bayesian sequential designs and thereby exaggerate the danger of bias. Replicating the simulation design of Anderson et al. (2021), I show that the biases are much smaller when best-practice recommendations are followed (e.g., setting a minimum sample size, setting reasonable evidence thresholds, and transparently reporting studies that do not reach an evidence threshold), and that they can be further mitigated by reporting Bayesian effect size estimates. Finally, I show that there is no bias when samples are submitted to a proper meta-analysis. Overall, these results underscore the strength of Bayesian sequential designs for efficient sample size determination.
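To make the stopping rule concrete, here is a minimal Python sketch of a Bayesian sequential design with a minimum sample size and symmetric evidence thresholds. This is not the author's or Anderson et al.'s actual simulation code: the helper names (`sequential_study`, `bf10_bic`) and all parameter values (`n_min`, `n_max`, `batch`, `bf_bound`) are illustrative, and the BIC approximation to the Bayes factor (Wagenmakers, 2007) stands in for whatever default Bayes factor the original simulations computed.

```python
# Minimal sketch of a Bayesian sequential design with optional stopping,
# assuming a one-sample t-test and the BIC approximation to the Bayes
# factor (Wagenmakers, 2007). All names and parameter values are
# illustrative, not taken from the commentary or Anderson et al. (2021).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2021)

def bf10_bic(t, n):
    """BIC approximation to BF10 for a one-sample t-test with n observations."""
    bf01 = np.sqrt(n) * (1 + t**2 / (n - 1)) ** (-n / 2)
    return 1 / bf01

def sequential_study(true_d=0.0, n_min=20, n_max=200, batch=5, bf_bound=10):
    """Add `batch` observations at a time; stop once BF10 crosses the upper
    bound or its reciprocal, but never before n_min observations."""
    x = rng.normal(true_d, 1, size=n_min)
    while True:
        t, _ = stats.ttest_1samp(x, 0.0)
        bf = bf10_bic(t, len(x))
        if bf >= bf_bound or bf <= 1 / bf_bound or len(x) >= n_max:
            return len(x), bf, x.mean()  # n, BF10, observed effect size
        x = np.concatenate([x, rng.normal(true_d, 1, size=batch)])

# Simulate many studies under a true effect of zero. Averaging the observed
# effects across ALL runs - including those that hit n_max without reaching
# an evidence threshold - is one way to probe the bias discussed above.
results = [sequential_study() for _ in range(1000)]
print("mean n:", np.mean([r[0] for r in results]))
print("mean observed d:", np.mean([r[2] for r in results]))
```

Note that the final average deliberately includes studies that stop at `n_max` without crossing a threshold, mirroring the best-practice recommendation to transparently report inconclusive studies rather than discard them.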
