A Response to Pek et al.’s Commentary on Z-Curve: Clarifying the Assumptions of Selection Models


Abstract

Pek et al. (2026) comment on Soto and Schimmack (2025) and raise concerns about the use of z-curve to evaluate the credibility of emotion research. Their central criticism is based on simulations showing that z-curve can overestimate power when selection operates not only at the level of statistical significance but also within the set of significant results as a function of effect size. This point is correct: if researchers selectively publish larger significant effects while suppressing smaller significant ones, selection models that assume threshold-based filtering can be biased. However, this limitation is not unique to z-curve and applies equally to other selection models used in meta-analysis. More importantly, there is currently little empirical evidence that researchers systematically engage in effect-size-based selection among statistically significant focal tests. In contrast, extensive evidence indicates that high success rates in psychology are primarily driven by selection for significance and p-hacking. Under these more realistic conditions, z-curve provides informative estimates of average power and selection bias. Our results also demonstrate substantial inflation of effect size estimates in traditional meta-analyses that ignore selection processes. For these reasons, we reject the recommendation to rely solely on standard meta-analytic approaches and instead advocate the use of selection models to obtain more realistic estimates of effect sizes in emotion research.
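The inflation described above — that averaging only statistically significant results overestimates the true effect — can be illustrated with a minimal simulation. This is a hedged sketch, not the authors' code or the z-curve method itself: the true effect size, sample size, and number of studies are arbitrary assumptions chosen for illustration, and significance is decided with a simple two-sided z-test.

```python
import numpy as np

# Illustrative simulation (assumed parameters, not from the article):
# selection for significance inflates naive meta-analytic estimates.
rng = np.random.default_rng(0)

true_d = 0.2       # assumed true standardized effect size
n = 50             # assumed per-group sample size in each study
n_studies = 20_000

# Approximate sampling distribution of the observed effect size:
# d_hat ~ Normal(true_d, sqrt(2/n))
se = np.sqrt(2 / n)
d_hat = rng.normal(true_d, se, n_studies)

# Two-sided z-test at alpha = .05: significant if |d_hat / se| > 1.96
significant = np.abs(d_hat / se) > 1.96

naive_all = d_hat.mean()               # averages every study run
naive_sig = d_hat[significant].mean()  # averages only "published" results

print(f"true effect:                {true_d:.2f}")
print(f"mean over all studies:      {naive_all:.2f}")
print(f"mean over significant only: {naive_sig:.2f}")
```

Under these assumptions the mean over all simulated studies recovers the true effect, while the mean over significant studies alone is markedly larger — the kind of bias that selection models attempt to correct and that traditional meta-analyses, which ignore the selection process, do not.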
