A meta-analyst should make informed decisions: Issues with Bayesian model-averaging meta-analyses
Abstract
When synthesizing the results of multiple studies, meta-analysts typically assess publication bias using assumption-based models. However, disagreement between methods is common. The performance of all publication-bias methods is sensitive to the specific conditions of the meta-analysis, and one approach proposes to resolve discrepancies by favoring the results of methods that perform reasonably well under the specific conditions of each meta-analysis. Alternatively, Bayesian model averaging over different publication-bias models has become an influential approach since the recent proposal of robust Bayesian meta-analysis (RoBMA). RoBMA is presented as a method that frees the meta-analyst from the difficult decision of choosing which publication-bias model to apply, with the promise of reducing decision making. In the present study, we tested the performance of RoBMA on data sets from replication projects in different fields (Psychology, Economics, Cancer Biology, Experimental Philosophy, and Nature and Science papers) and on simulated meta-analyses with similar conditions. Despite the absence of publication bias in all data sets, the combination of heterogeneity and well-informed sample-size planning (via power analysis) produced small-study effects that RoBMA identified as a sign of publication bias. As preregistration and power analyses become widespread in research, and given that scientific results are heterogeneous for reasons that are not always known, meta-analysts should be increasingly careful in their use and interpretation of asymmetry-based methods. Our findings suggest that publication-bias analyses demand informed decisions by the meta-analyst, and that no data-driven approach can replace their expertise.
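As a minimal sketch of the mechanism described above (the parameter values, power target, and simulation design here are our own illustrative assumptions, not the paper's), the following Python snippet shows how heterogeneous true effects combined with power-based sample-size planning can induce a correlation between observed effect sizes and their standard errors, i.e. a small-study effect, even though every simulated study is reported.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Heterogeneous true effects: mean d = 0.4, between-study SD (tau) = 0.2
# (illustrative values, not taken from the paper)
k = 2000
true_d = rng.normal(0.4, 0.2, size=k)

# Sample size per group chosen by a power analysis at 80% power
# (alpha = .05, two-sided), assuming each study plans around its own
# local effect size: n per group ~ 2 * (z_{1-a/2} + z_{power})^2 / d^2
z = stats.norm.ppf(0.975) + stats.norm.ppf(0.80)
planning_d = np.clip(true_d, 0.1, None)  # floor to avoid absurd n
n = np.clip(np.ceil(2 * (z / planning_d) ** 2), 5, 1000).astype(int)

# Observed standardized mean differences with (approximate) sampling error,
# and the usual approximate standard error of Cohen's d for two equal groups
obs_d = true_d + rng.normal(0, np.sqrt(2 / n), size=k)
se = np.sqrt(2 / n + obs_d**2 / (4 * n))

# Small-study effect: observed effects correlate with standard errors,
# even though no study was suppressed (no publication bias)
print("correlation(obs_d, se):", np.corrcoef(obs_d, se)[0, 1].round(3))
```

Under these assumptions, studies expecting small effects collect large samples (small standard errors) while studies expecting large effects collect small samples (large standard errors), so the printed correlation is clearly positive: exactly the funnel asymmetry that regression- and selection-based publication-bias methods are built to flag.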