Robust Statistical Methods and the Credibility Movement of Psychological Science
Abstract
The general linear model (GLM) is the most frequently applied family of statistical models in psychology. Within the GLM, the effects under study are estimated using ordinary least squares (OLS) estimation, which produces unbiased and optimal parameter estimates, and unbiased null hypothesis significance tests, when (1) outliers and influential cases are absent, and (2) the assumptions of linearity and additivity, spherical errors, and normal errors are met. This paper first provides a technical description of OLS and an overview of its statistical assumptions. We then discuss the methods commonly employed to detect and address violations of these assumptions, and how their current application can compromise the reproducibility of findings by allowing too much flexibility in the analytic process. We briefly introduce several robust estimation methods - namely bootstrapping, heteroscedasticity-consistent standard errors, M-estimators, and trimming - that can improve the accuracy of parameter estimates and the power of statistical tests. We provide guidance on how these methods can be used to transparently preregister a sensitivity analysis, reducing the opportunity for problematic researcher degrees of freedom to enter the analytic pipeline.
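Two of the robust methods named above, trimming and the percentile bootstrap, can be illustrated in a few lines of code. The sketch below is not from the paper; it is a minimal, hypothetical example (in plain Python, with an arbitrary made-up sample) of how a trimmed mean resists a gross outlier and how a bootstrap confidence interval can be formed for any statistic:

```python
import random
import statistics

def trimmed_mean(values, trim=0.2):
    """Trimmed mean: drop the lowest and highest `trim` fraction of observations."""
    n = len(values)
    k = int(n * trim)  # number of observations cut from each tail
    ordered = sorted(values)
    kept = ordered[k:n - k] if k > 0 else ordered
    return statistics.mean(kept)

def bootstrap_ci(values, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for an arbitrary statistic."""
    rng = random.Random(seed)
    n = len(values)
    boots = sorted(
        stat([rng.choice(values) for _ in range(n)])  # resample with replacement
        for _ in range(n_boot)
    )
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# A made-up sample with one gross outlier (50.0).
data = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.1, 50.0]
print(statistics.mean(data))              # pulled far upward by the outlier
print(trimmed_mean(data))                 # stays near the bulk of the data
print(bootstrap_ci(data, stat=trimmed_mean))
```

Combining the two, as in the last line, is one way to attach an interval estimate to a robust location measure without assuming normal errors.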