The Impact of Publication Bias and Single and Combined p-Hacking Practices on Effect Size and Heterogeneity Estimates in Meta-Analysis

Abstract

Publication bias and p-hacking threaten the validity of inferences drawn from individual psychological studies and meta-analyses by distorting effect size estimates and between-study heterogeneity. In two simulation studies, we examined the impact of publication bias and p-hacking practices on individual study effect sizes, meta-analytic estimates, and between-study heterogeneity. Study 1 assessed the individual effects of publication bias and five common p-hacking practices—selective outcome reporting, multiple conditions testing, optional dropping, outlier removal, and optional stopping. Study 2 examined how systematically combining these p-hacking practices influenced the same outcomes. Our results demonstrated that publication bias produced larger bias than individual p-hacking practices in conditions commonly found in psychological research, namely small sample sizes and small true effects. Among p-hacking practices, selective outcome reporting and optional dropping caused the most severe bias in both effect sizes and heterogeneity estimates, whether applied individually or in combination. The impact of these practices varied substantially based on study and meta-analytic conditions and the specific p-hacking practices used—while most p-hacking practices and combinations increased bias, optional stopping sometimes mitigated bias by increasing sample sizes. Notably, the combination of three practices (selective outcome reporting, multiple conditions, and optional dropping) produced nearly as much bias as using all five practices. These findings highlight the importance of preregistration, sharing materials, data, and analysis code, and the publication of all studies regardless of statistical significance.
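To illustrate the core mechanism the abstract describes, the following is a minimal simulation sketch (not the authors' code) of how a naive publication bias filter, publishing only statistically significant results, inflates a meta-analytic effect size estimate when true effects and samples are small. The chosen values (true effect d = 0.2, group size n = 20, 1,000 simulated studies) and the fixed-effect inverse-variance model are illustrative assumptions; it does not reproduce the paper's design or its heterogeneity analyses.

```python
# Hypothetical sketch of publication bias in meta-analysis (illustrative
# assumptions: d = 0.2, n = 20 per group, fixed-effect model).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_d, n, n_studies = 0.2, 20, 1000

def one_study():
    """Simulate one two-group study; return Cohen's d, its variance, and p."""
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(true_d, 1.0, n)
    _, p = stats.ttest_ind(treatment, control)
    pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
    d = (treatment.mean() - control.mean()) / pooled_sd
    var_d = 2 / n + d**2 / (4 * n)  # large-sample variance of Cohen's d
    return d, var_d, p

studies = [one_study() for _ in range(n_studies)]

def meta_mean(subset):
    """Fixed-effect (inverse-variance weighted) mean effect size."""
    d = np.array([s[0] for s in subset])
    w = 1 / np.array([s[1] for s in subset])
    return (w * d).sum() / w.sum()

# Publication bias filter: only significant results get "published".
published = [s for s in studies if s[2] < 0.05]
print(f"All studies:      d_hat = {meta_mean(studies):.3f}")
print(f"Significant only: d_hat = {meta_mean(published):.3f} "
      f"({len(published)}/{n_studies} published)")
```

Under these assumptions, only the studies that happen to overestimate the effect clear the significance threshold, so the published-only estimate lands well above the true d = 0.2, consistent with the abstract's point that publication bias produces large bias precisely when sample sizes and true effects are small.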
