On the Uncertainty of Final Sample Sizes in Sequential Monitoring Designs
Abstract
Behavioral research often relies on statistical inference through effect size estimation and hypothesis testing. Traditionally, empirical studies use a fixed-sample rule (FSR), where the target sample size is determined before data collection. Unfortunately, FSRs may fail to meet their inferential goals even when the correct population variance was used in the sample size calculation. A sequential monitoring design offers a well-established alternative: data collection continues until either the confidence interval for the effect of interest reaches the desired width or the hypothesis test achieves sufficient power to detect a pre-defined smallest relevant effect. An important limitation of such sequential stopping rules is that the final sample size is unknown during data collection. Although prior work has explored final sample size distributions for specific population models, little attention has been given to conditional uncertainty as data accumulate. Here we propose a method to build prediction intervals for the final sample size based on the data collected so far. Our approach monitors the estimated Fisher information and uses a general stopping rule defined in terms of required information. Within this framework we construct intervals that reflect both current uncertainty and future sampling variability. We focus on the setting where interest lies in the mean difference between two groups with equal variances. These intervals apply to both estimation and testing, and become increasingly narrow as the stopping point nears. Consequently, they provide researchers with a practical tool to anticipate the resources needed to reach conclusions. We provide an R script that implements the procedure.
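To make the information-based stopping rule concrete, the following is a minimal sketch for the two-group, equal-variance setting described above. The article's own implementation is an R script; this Python version, its function names, and the default parameter values are illustrative assumptions, not the authors' code. Sampling continues until the estimated Fisher information for the mean difference reaches the information required for a confidence interval of the desired half-width.

```python
import numpy as np
from statistics import NormalDist

def required_information(alpha=0.05, half_width=0.25):
    """Information target so the (1 - alpha) Wald CI for the mean
    difference has half-width at most `half_width`:
    z_{alpha/2} / sqrt(I) <= half_width  =>  I >= (z / half_width)^2."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return (z / half_width) ** 2

def estimated_information(x, y):
    """Estimated Fisher information for the two-group mean difference
    under equal variances, using the pooled variance estimate:
    I_hat = 1 / Var_hat(xbar - ybar) = 1 / (s2 * (1/n1 + 1/n2))."""
    n1, n2 = len(x), len(y)
    s2 = ((n1 - 1) * np.var(x, ddof=1)
          + (n2 - 1) * np.var(y, ddof=1)) / (n1 + n2 - 2)
    return 1.0 / (s2 * (1.0 / n1 + 1.0 / n2))

def sequential_monitor(rng, true_delta=0.3, sigma=1.0,
                       alpha=0.05, half_width=0.25,
                       n_min=5, n_max=10_000):
    """Add one observation per group per step; stop once the estimated
    information reaches the required information (or n_max is hit).
    Returns the final total sample size."""
    target = required_information(alpha, half_width)
    x = list(rng.normal(true_delta, sigma, n_min))
    y = list(rng.normal(0.0, sigma, n_min))
    while estimated_information(x, y) < target and len(x) < n_max:
        x.append(rng.normal(true_delta, sigma))
        y.append(rng.normal(0.0, sigma))
    return len(x) + len(y)
```

With `sigma = 1` and `half_width = 0.25`, the required information is about 61.5, so the rule stops near 123 observations per group; because the pooled variance is estimated, the realized final size varies from run to run, which is exactly the uncertainty the proposed prediction intervals quantify.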