Negligible Effect (Equivalence) Testing Based Procedures for Assessing Distributional Normality
Abstract
Researchers in psychology are often interested in evaluating whether a sample distribution is consistent with a normal (i.e., Gaussian) population distribution, most commonly to evaluate normality as an assumption of a statistical model being adopted. In Study 1, a novel negligible effect (equivalence) test (NET) for normality is proposed that evaluates whether a sample distribution is similar enough to a normal distribution to be deemed equivalent (i.e., the difference between the sample and normal distributions is negligible). This NET establishes a negligible effect interval that quantifies the range of distribution-shape coefficients that can be considered approximately normal. A test statistic whose 100(1-2α)% confidence interval (CI) falls between the upper and lower limits of the negligible effect interval leads to the conclusion that the distribution is approximately normal. A series of simulations was conducted comparing the Type I error and power rates of common traditional (difference-based) approaches (the Kolmogorov–Smirnov and Shapiro–Wilk tests) with those of the proposed NET-based approach. Results indicate that with small sample sizes the NET method has low power to detect normality, whereas the traditional methods have low power to detect nonnormality. However, the NET method almost never falsely concludes normality with nonnormal distributions and small samples. With large sample sizes, traditional methods also often indicate that distributions are nonnormal even when the degree of nonnormality is very minor (such that it would be unlikely to affect the validity of statistical tests or the precision of parameter estimates). This limitation of traditional methods is not a concern for NET tests with large sample sizes, since they rarely falsely conclude that a distribution is nonnormal when it shows only minor deviations from normality. One limitation of the NET-based approach is reduced power to detect normality when distributions are close to normal.
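The decision rule described above can be sketched in code. The following is a minimal illustration, not the article's exact procedure: it uses the Shapiro–Wilk W statistic as the distribution-shape measure, a percentile bootstrap for the 100(1-2α)% CI, and illustrative negligible-effect bounds (0.95, 1.0) on W that are assumptions chosen for the example, not values taken from the article.

```python
import numpy as np
from scipy import stats

def net_normality_test(x, lower=0.95, upper=1.0, alpha=0.05,
                       n_boot=2000, seed=0):
    """Sketch of a NET-style decision rule for normality.

    Bootstraps the Shapiro-Wilk W statistic and declares the sample
    "approximately normal" only if the entire 100(1 - 2*alpha)% CI
    lies inside the negligible-effect interval [lower, upper].
    The interval bounds here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    # Percentile bootstrap of W: resample with replacement, recompute W
    ws = np.array([
        stats.shapiro(rng.choice(x, size=x.size, replace=True)).statistic
        for _ in range(n_boot)
    ])
    # 100(1 - 2*alpha)% CI puts alpha in each tail
    lo, hi = np.percentile(ws, [100 * alpha, 100 * (1 - alpha)])
    # Equivalence conclusion requires the whole CI inside the interval
    return bool(lower <= lo and hi <= upper)

# A clearly normal sample should satisfy the rule; a skewed one should not
rng = np.random.default_rng(1)
print(net_normality_test(rng.normal(size=500)))
print(net_normality_test(rng.exponential(size=500)))
```

Note the asymmetry this encodes: failing the rule does not establish nonnormality, it only withholds the equivalence conclusion, which is why the NET method is conservative in small samples.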
Study 2 aimed to improve the calculation of CIs for the NET-based approach and examined several alternative methods for computing bootstrap-based confidence intervals for the NET-based Shapiro–Wilk test (NET-SW), including the stochastic bootstrap, the parametric bootstrap, and Fisher's r-to-z transformation. The stochastic bootstrap approach had the best balance of Type I error and power rates and is the recommended approach to accompany the NET-based normality test.
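Of the CI methods compared, the parametric bootstrap is the most standard to sketch. The version below is one plausible reading, simulating from a normal distribution fitted to the data and taking percentiles of the resulting W statistics; the article's exact algorithms (including the recommended stochastic bootstrap) may differ in detail.

```python
import numpy as np
from scipy import stats

def parametric_bootstrap_ci(x, alpha=0.05, n_boot=2000, seed=0):
    """Illustrative parametric-bootstrap CI for the Shapiro-Wilk W.

    Fits a normal to the observed data, simulates n_boot samples of
    the same size from that fitted normal, and returns the percentile
    interval of the simulated W statistics. This is an assumption
    about what "parametric bootstrap" means here, not the article's
    published procedure.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    mu, sigma = x.mean(), x.std(ddof=1)
    ws = np.array([
        stats.shapiro(rng.normal(mu, sigma, size=x.size)).statistic
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(ws, [100 * alpha, 100 * (1 - alpha)])
    return lo, hi
```

Because the simulated samples are drawn from an exactly normal model, this interval reflects the sampling variability of W under normality at the observed sample size, which is one way to calibrate the NET decision rule.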