The statistical fragility of animal cognition findings: a meta-meta-analytic reappraisal
Abstract
How reliable is the evidence in animal cognition research? Concerns are mounting over the statistical robustness of this field, as of many others. Many primary studies rely on small samples and rarely report null results, while meta-analyses sometimes overlook publication bias, all of which may contribute to unreliable conclusions. We conducted a second-order meta-analysis of 28 published meta-analytical papers in the animal cognition field to evaluate three inferential metrics: statistical power, Type M (magnitude) error, and Type S (sign) error, calculated at both the primary-study and meta-analysis levels. To approximate the true effect, we used the mean effect size from each meta-analysis; when publication bias was detected, we applied a correction to mitigate potential overestimation. Our results indicate low statistical power and inflated effect sizes in both primary studies and meta-analyses: after bias correction, power decreased, on average, from 17% to 9%, and effect size values decreased from 82% to 45%. Type M errors were common, indicating that statistically significant results often exaggerated the underlying effects. To improve the reliability of animal cognition research, we recommend preregistration and transparent reporting of both primary and secondary studies. We also call for the routine application of publication-bias correction in meta-analytical syntheses.
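The abstract does not specify how the three metrics were computed, but Type M and Type S errors come from Gelman and Carlin's (2014) "retrodesign" framework, which defines them for a test given a hypothesised true effect and a study's standard error. The sketch below shows a minimal version of that standard calculation for a two-sided z-test; the function name, inputs, and the illustrative numbers are assumptions for exposition, not the authors' code.

```python
import numpy as np
from scipy import stats

def retrodesign(true_effect, se, alpha=0.05, n_sims=100_000, seed=1):
    """Power, Type S, and Type M error for a two-sided z-test,
    given a hypothesised true effect and the study's standard error."""
    z_crit = stats.norm.ppf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    lam = true_effect / se                  # standardised true effect
    # Power: probability the estimate lands beyond the significance threshold.
    power = (1 - stats.norm.cdf(z_crit - lam)) + stats.norm.cdf(-z_crit - lam)
    # Type S error: among significant results, probability the sign is wrong.
    type_s = stats.norm.cdf(-z_crit - lam) / power
    # Type M error (exaggeration ratio): simulate estimates, keep only the
    # significant ones, and compare their average magnitude to the true effect.
    rng = np.random.default_rng(seed)
    estimates = rng.normal(true_effect, se, n_sims)
    significant = np.abs(estimates) > z_crit * se
    type_m = np.abs(estimates[significant]).mean() / abs(true_effect)
    return power, type_s, type_m

# Hypothetical inputs for illustration only: a true effect of 0.2 estimated
# with a standard error of 0.2 yields power of roughly 0.17, close to the
# pre-correction average the abstract reports.
print(retrodesign(true_effect=0.2, se=0.2))
```

Note how a study with roughly 17% power can only reach significance when its estimate is well above the true effect, which is why low power and large Type M errors travel together in the abstract's results.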
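The abstract also does not name the publication-bias correction that was applied. Purely as an illustrative stand-in, the sketch below implements the precision-effect test (PET), one common regression-based correction: effect sizes are regressed on their standard errors with inverse-variance weights, and the intercept estimates the effect for a hypothetical study of infinite precision. The function name and interface are assumptions, not the authors' method.

```python
import numpy as np

def pet_estimate(effects, ses):
    """PET bias adjustment: weighted least-squares regression of effect
    sizes on their standard errors. The intercept extrapolates to a study
    with zero standard error, giving a bias-adjusted mean effect."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    X = np.column_stack([np.ones_like(ses), ses])  # intercept + slope on SE
    w = 1.0 / ses**2                               # inverse-variance weights
    Xw = X * np.sqrt(w)[:, None]                   # scale rows by sqrt(weight)
    yw = effects * np.sqrt(w)
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta[0]  # bias-adjusted mean effect (the intercept)
```

A positive slope on the standard error in such a regression is itself a symptom of small-study effects: imprecise studies reporting larger effects is the pattern publication bias typically leaves behind.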