A meta-analyst should make informed decisions: Issues with Bayesian model-averaging meta-analyses
Abstract
When synthesizing the results of multiple studies, meta-analysts typically assess publication bias using several different models. However, none of those methods clearly outperforms the others, and the inferences drawn from them are therefore subject to substantial uncertainty. One approach resolves discrepancies between publication-bias methods by favoring the results of methods that perform reasonably well under the specific conditions of the meta-analysis. A recent proposal known as robust Bayesian meta-analysis (RoBMA) has become an influential alternative. RoBMA uses Bayesian model averaging over different publication-bias models and is presented as a method that relieves the meta-analyst of the difficult decision of choosing which publication-bias model to apply. Unfortunately, we have noted from replication projects that the combination of heterogeneity and well-informed sample size planning can produce small-study effects that meta-analysts may misinterpret as a sign of bias, a phenomenon we name here the power analysis–bias paradox. In the present study, we tested the performance of RoBMA on simulated meta-analyses in which publication bias was absent but some proportion of the studies based their sample sizes on power analyses. Under those conditions, RoBMA identified evidence of publication bias and underestimated the effect. As preregistration and power analyses become widespread in research, and given that scientific results are heterogeneous for reasons that are not always known, meta-analysts should be increasingly careful in their use and interpretation of methods based on funnel-plot asymmetry. Our findings suggest that publication-bias analyses require informed decisions by the meta-analyst and that no data-driven approach can replace their expertise.
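The mechanism behind the power analysis–bias paradox can be illustrated with a short simulation. The sketch below is not the paper's simulation design; the parameter values (k = 60 studies, mean effect 0.4, heterogeneity 0.2, 80% power) and the simplified Egger-type regression are illustrative assumptions. It generates heterogeneous true effects, lets each study plan its per-group sample size from an a priori power analysis on its own anticipated effect, and shows that effect sizes and standard errors then correlate, producing funnel-plot asymmetry even though no study has been suppressed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

# Hypothetical settings, not taken from the paper:
k = 60                # number of studies in the simulated meta-analysis
mu, tau = 0.4, 0.2    # mean true effect and between-study heterogeneity (SMD scale)
alpha, power = 0.05, 0.80

# Heterogeneous true effects across studies.
delta = rng.normal(mu, tau, size=k)

# Each study plans its per-group n from a power analysis based on a
# well-informed anticipated effect (here, its own true effect), using the
# normal approximation for a two-sample test:
#   n per group = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
n = np.ceil(2 * (z / np.clip(delta, 0.1, None)) ** 2).astype(int)
n = np.clip(n, 10, 1000)

# Observed standardized mean differences, with NO publication bias:
# every simulated study is "published" regardless of its result.
se = np.sqrt(2 / n + delta**2 / (4 * n))  # approximate SE of Cohen's d
d = rng.normal(delta, se)

# Simplified Egger-type small-study check: regress effect sizes on SEs.
slope, intercept, r, p, _ = stats.linregress(se, d)
print(f"effect-SE correlation r = {r:.2f}, Egger-type slope p = {p:.4f}")
# Larger true effects -> smaller planned n -> larger SE, so d and SE
# correlate and the funnel plot looks asymmetric despite zero bias.
```

Under these assumptions the regression slope is reliably positive: the asymmetry arises entirely from sample size planning interacting with heterogeneity, which is the pattern the abstract argues RoBMA can misread as publication bias.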