Unified tools for assessing the methodological quality of intervention effects in rapid reviews: a scoping review
Abstract
Background: Evaluating the methodological quality of primary studies is a crucial aspect of evidence syntheses, such as rapid reviews. Rapid reviews often include both randomised controlled trials (RCTs) and non-randomised studies of interventions (NRSIs), requiring multiple design-specific methodological quality assessment tools. This can complicate workflows and reduce efficiency. Unified tools, designed to assess methodological quality across diverse study designs, may offer a more consistent and streamlined approach. This scoping review aimed to comprehensively identify and describe unified tools for assessing methodological quality across both RCTs and NRSIs.

Methods: The review followed JBI scoping review methodology and was reported in line with PRISMA-ScR guidance. Searches were conducted in MEDLINE (Ovid), Embase (Ovid), and CINAHL (EBSCO) from 1998 to 2024, alongside grey literature searches, citation tracking, targeted website searches, and consultation with experts in evidence synthesis. Study selection was performed by two independent reviewers. A tool was defined as any structured instrument developed to support users in assessing methodological quality.

Results: A total of 55 publications were included, identifying 29 unique unified tools. These were categorised by structure: scales (n=14), checklists with judgment (n=7), simple checklists (n=5), domain-based tools (n=2), and other tools (n=1). All tools were designed to assess both RCTs and NRSIs. Of these, 13 focused exclusively on RCTs and NRSIs, while others extended to descriptive studies (n=15), case reports or series (n=9), qualitative research (n=7), and systematic reviews (n=3). Unified tools were developed using a mixture of approaches, including literature reviews (n=25), expert consensus (n=20), and stakeholder consultations (n=17). Reports of pilot testing were identified for 17 unified tools, and evaluation of psychometric properties was conducted to varying levels across different domains of validity and reliability.

Conclusion: This review identified a diverse set of unified tools for assessing methodological quality across both RCTs and NRSIs. However, variation in tool structure, the availability of accompanying guidance, and the limited evaluation of psychometric properties, particularly interrater reliability and criterion validity, indicate potential barriers to adoption in rapid reviews. Unified tools need further refinement and validation before they can be embedded in the rapid review process.

Systematic review registration: https://osf.io/nyteu/