Evaluation of the replicability of systematic reviews with meta-analyses of the effects of health interventions
Abstract
Background
Systematic reviews are often characterised as inherently replicable, but several studies have challenged this claim.
Objectives
To investigate the variation in results following independent replication of literature searches and meta-analyses of systematic reviews.
Methods
We included ten systematic reviews of the effects of health interventions published in November 2020. Two information specialists repeated the original database search strategies. Two experienced review authors screened full-text articles, extracted data, and calculated the results for the first reported meta-analysis. All replicators were initially blinded to the results of the original review. A meta-analysis was considered not ‘fully replicable’ if the original and replicated summary estimate or confidence interval width differed by more than 10%, and ‘meaningfully different’ if the direction of effect or its statistical significance differed.
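As a rough illustration of this classification rule, the following Python sketch (not the authors' code) applies the criteria described above. The interpretation of the 10% threshold as a relative difference and the choice of null value (0 for difference measures, 1 for ratio measures) are assumptions made here for the example.

    def classify_replication(orig_est, orig_lo, orig_hi,
                             rep_est, rep_lo, rep_hi,
                             null_value=0.0, tol=0.10):
        """Classify a replicated meta-analysis result against the original."""
        # Relative difference in the summary estimate and in the CI width.
        est_diff = abs(rep_est - orig_est) / abs(orig_est) if orig_est else float("inf")
        width_orig = orig_hi - orig_lo
        width_diff = abs((rep_hi - rep_lo) - width_orig) / width_orig
        fully_replicable = est_diff <= tol and width_diff <= tol

        # 'Meaningfully different': the direction of effect or the statistical
        # significance (CI excluding the null value) changes between results.
        same_direction = (orig_est - null_value) * (rep_est - null_value) > 0
        orig_sig = not (orig_lo <= null_value <= orig_hi)
        rep_sig = not (rep_lo <= null_value <= rep_hi)
        meaningfully_different = (not same_direction) or (orig_sig != rep_sig)

        return fully_replicable, meaningfully_different

    # Example: a replicated standardised mean difference of 0.48 (95% CI 0.10 to 0.86)
    # against an original of 0.50 (95% CI 0.12 to 0.88) is fully replicable and
    # not meaningfully different.
    print(classify_replication(0.50, 0.12, 0.88, 0.48, 0.10, 0.86))  # (True, False)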
Results
The difference between the number of records retrieved by the original reviewers and the information specialists exceeded 10% in 25/43 (58%) searches for the first replicator and 21/43 (49%) searches for the second. Eight meta-analyses (80%, 95% CI: 49-96%) were initially classified as not fully replicable. After screening and data discrepancies were addressed, the number of meta-analyses classified as not fully replicable decreased to five (50%, 95% CI: 24-76%). Differences were classified as meaningful in one blinded replication (10%, 95% CI: 1-40%) and none of the unblinded replications (0%, 95% CI: 0-28%).
Conclusions
The results of systematic review processes were not always consistent when their reported methods were repeated. However, these inconsistencies seldom affected summary estimates from meta-analyses in a meaningful way.
HIGHLIGHTS
What is already known on this topic
- Systematic reviews are often characterised as inherently replicable; however, several studies have challenged this claim.
- Few studies have examined where and why inconsistencies arise, and what their impact is, when multiple systematic review processes are replicated.
What this study adds
- Replication of published systematic review processes (database searches, full-text screening, data extraction and meta-analysis) frequently produced results that were inconsistent with the original review.
- Following correction of replicator errors, the main drivers of variation in the results were incomplete reporting (e.g., unclear search methods, study eligibility criteria and methods for selecting study results) and reviewer data extraction errors.
- However, differences between the original reviewers’ and replicators’ summary estimates and confidence intervals were seldom meaningful.