Assessment of reporting biases in studies included in Campbell Systematic Reviews: A systematic review

Abstract

Background: Critical appraisal of the studies included in a systematic review is essential to ensure that results of the review are properly interpreted. Critical appraisal is also one of the most difficult steps in research reviews. Structured risk of bias (ROB) tools can facilitate critical appraisal, but these tools vary in content and structure, and there are unresolved issues in applications of these tools. Assessment of risk of reporting biases, such as outcome reporting bias (ORB) and analysis reporting bias (ARB), is especially difficult, given the lack of availability of the raw materials (such as prospectively registered protocols or analysis plans) needed to properly assess the risk of selective reporting and selective non-reporting of outcomes and analyses.

Objectives: To identify methods used in recent Campbell systematic reviews of intervention effects to assess the risk of reporting biases in included studies.

Search methods: We searched the Campbell Library website, using a structured online form developed for this purpose, with filters for publication dates (all dates in 2020 through April 2023) and type of document (completed reviews only).

Selection criteria: We included systematic reviews (SRs) of primary studies of intervention effects published in Campbell Systematic Reviews between 1 January 2020 and 30 April 2023.

Data collection and analysis: Of the 59 SRs published from 2020 through early 2023, 51 were eligible for our review. Forty-nine of these reviews included relevant studies of intervention effects. From these 49 reviews, we extracted data on methods used to assess risk of reporting biases (ORB and ARB), broader risk of bias (ROB) or study quality assessments, and adherence to 12 mandatory methodological standards. Data extraction and coding were performed in duplicate, by pairs of team members who worked independently, and any discrepancies were resolved by coders or by the review team. Results were compiled in a spreadsheet, which was used to generate tables, graphics, and a narrative summary.

Main results: Reporting biases were defined and assessed in diverse and sometimes idiosyncratic ways in recent Campbell systematic reviews of intervention effects. Most (40 of 49) reviews conducted some structured assessment of reporting biases, but many did not report results of these assessments. Support for some or all ORB and ARB assessments was missing in more than half (28) of the reviews. Only 12 reviews provided full documentation for their ORB/ARB assessments. Overall, we found that reviewers' descriptions of their assessments of reporting biases were often incomplete and inconsistent across studies. In many cases, these assessment practices did not reflect current understanding of the prevalence of selective reporting and ways in which these biases can undermine the validity of and confidence in results of research reviews. This observation is consistent with the fact that most reviewers did not consider the potential impacts of risks of bias on the credibility of their results. None of the recent reviews appeared to meet all (12) of the mandatory methodological standards we assessed. On average, these reviews failed to meet 4.9 of these standards; almost three-quarters (35) of the reviews failed to meet four or more standards.

Authors' conclusions: Recent Campbell reviews did not consistently appraise or document risks of reporting biases in the studies they included. Assessment of risk of reporting biases is difficult, given the lack of availability of prospective, public protocols or analysis plans for most studies. Reviewers' failure to adhere to Campbell's mandatory methodological standards and editors' apparent inability to enforce these standards can be understood as functions of the contexts in which systematic reviews are highly desirable, highly cited, and under-resourced. We provide a decision tree to guide reviewers' assessments of reporting bias, along with nine recommendations for improving these practices in systematic reviews of intervention effects.