nmax and the Quest to Restore Caution, Integrity, and Practicality to the Sample Size Planning Process

Abstract

In a time when the alarms of research replicability are sounding louder than ever, mapping out studies with statistical and inferential integrity is of paramount importance. Indeed, funding agencies almost always require grant applicants to present compelling a priori power analyses to justify proposed sample sizes, as a critical part of the information considered collectively to ensure a sound investment. Unfortunately, even researchers' most sincere attempts at sample size planning are fraught with the fundamental challenge of setting numerical values not just for the focal parameters for which statistical tests are planned, but for each of the model's other more peripheral or contextual parameters as well. As we plainly demonstrate, regarding the latter parameters, even in very simple models any slight deviation in well-intentioned numerical guesses can undermine power for the assessment of the more focal parameters that are of key theoretical interest. Toward remedying this all-too-common but seemingly underestimated problem in power analysis, we adopt a hope-for-the-best-but-plan-for-the-worst mindset and present new methods that attempt to (1) restore appropriate conservatism and robustness, and in turn credibility, to the sample size planning process, and (2) greatly simplify that process. Derivations and suggestions for practice are presented using the framework of measured variable path analysis models, as they subsume many of the types of models (e.g., multiple linear regression, ANOVA) for which sample size planning is of interest.
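The sensitivity described in the abstract is easy to see in a minimal sketch (not the authors' method): below, power for the t-test of a focal slope in a two-predictor regression is computed under different guesses for a peripheral quantity, here the assumed correlation rho between standardized predictors. All numerical values (n, beta1, rho, alpha) are hypothetical choices for illustration only.

```python
# Illustrative only: how a guess about a peripheral parameter (the predictor
# correlation rho) shifts power for a focal slope beta1 in
#   y = b0 + b1*x1 + b2*x2 + e,  with standardized predictors and residual SD sigma.
import numpy as np
from scipy import stats

def power_focal_slope(n, beta1, rho, sigma=1.0, alpha=0.05):
    """Approximate two-sided power of the t-test for beta1."""
    df = n - 3                                   # n minus 3 estimated coefficients
    se = sigma / np.sqrt(n * (1.0 - rho**2))     # approx. SE(beta1_hat) for unit-variance predictors
    ncp = beta1 / se                             # noncentrality of the t statistic
    tcrit = stats.t.ppf(1 - alpha / 2, df)
    # power from the noncentral t distribution (both tails)
    return stats.nct.sf(tcrit, df, ncp) + stats.nct.cdf(-tcrit, df, ncp)

# A seemingly small change in the guessed predictor correlation moves power noticeably.
for rho in (0.0, 0.3, 0.5, 0.7):
    print(f"rho = {rho:.1f}: power = {power_focal_slope(n=100, beta1=0.3, rho=rho):.3f}")
```

Running the loop shows power declining as the assumed correlation grows, which is the kind of fragility with respect to peripheral-parameter guesses that motivates the conservative, plan-for-the-worst approach proposed in the article.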
