A meta-analyst should make informed decisions: Issues with Bayesian model-averaging meta-analyses
Abstract
When synthesizing the results of multiple studies, meta-analysts typically assess publication bias using a variety of models. However, none of these methods clearly outperforms the others, so the inferences drawn from them are subject to substantial uncertainty. One approach proposes to resolve discrepancies between publication-bias methods by favoring the results of methods that perform reasonably well under the specific conditions of the meta-analysis. A recent proposal known as robust Bayesian meta-analysis (RoBMA) has become an influential alternative. RoBMA uses Bayesian model averaging over different publication-bias models and is presented as a method that absolves the meta-analyst of the difficult decision of choosing which publication-bias model to apply. In the present study, we tested the performance of RoBMA on data sets from replication projects in several fields (Psychology, Economics, Cancer Biology, Experimental Philosophy, and Nature and Science papers) and on simulated meta-analyses with similar conditions. Although none of these data sets were affected by publication bias, since all the replications were published, the combination of heterogeneity and well-informed sample-size planning (via power analysis) produced small-study effects that RoBMA systematically identified as a sign of publication bias. As preregistration and power analysis become widespread research practices, and given that scientific results are heterogeneous for reasons that are not always known, meta-analysts should be increasingly careful in their use and interpretation of methods based on funnel-plot asymmetry. Our findings suggest that publication-bias analyses require informed decisions by the meta-analyst and that no data-driven approach can replace this expertise.
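To make the mechanism concrete, the following Python sketch (our illustration, not the authors' simulation code) shows how heterogeneity combined with power-based sample-size planning can produce a small-study effect without any publication bias: true effects vary across labs, each lab powers its study for the effect it anticipates, and larger anticipated effects therefore come with smaller samples and larger standard errors. All parameter values (mu, tau, the 80% power target, K) are illustrative assumptions.

```python
# Sketch: heterogeneity + power-based sample-size planning can create a
# "small-study effect" (funnel-plot asymmetry) with NO publication bias.
# Illustrative assumptions only -- not the authors' simulation code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

K = 200                # number of studies in the meta-analysis (assumed)
mu, tau = 0.4, 0.2     # mean and SD of true effects (heterogeneity, assumed)
alpha, power = 0.05, 0.80

# Each lab's true effect, drawn from a heterogeneous distribution.
delta = np.clip(rng.normal(mu, tau, K), 0.05, None)

# "Well-informed" planning: labs anticipate something close to their own
# true effect (e.g., from pilots) and choose the per-group n that gives
# 80% power under a two-sample z-test approximation.
anticipated = np.clip(delta + rng.normal(0, 0.05, K), 0.05, None)
z_a = stats.norm.ppf(1 - alpha / 2)
z_b = stats.norm.ppf(power)
n = np.clip(np.ceil(2 * ((z_a + z_b) / anticipated) ** 2), 5, None).astype(int)

# Larger true effects -> smaller planned n -> larger standard errors.
se = np.sqrt(2 / n)            # approximate SE of Cohen's d
d_obs = rng.normal(delta, se)  # every study is "published"

# Egger-style check: regress observed effects on their standard errors.
# A positive, significant slope is the classic small-study-effect signal.
res = stats.linregress(se, d_obs)
print(f"Egger-type slope = {res.slope:.2f} (p = {res.pvalue:.2g})")
print(f"corr(d_obs, se)  = {np.corrcoef(d_obs, se)[0, 1]:.2f}")
```

Because every simulated study is published, the positive Egger-type slope reflects how the studies were planned, not selective reporting, which is exactly the pattern that asymmetry-based publication-bias methods can misread as bias.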