Evaluating the Ruptured Arteriovenous Malformation Grading Scale (RAGS): A Reliability Study

Abstract

Background

Unruptured arteriovenous malformations (AVMs) carry a 1% annual risk of hemorrhage; however, the risk of re-hemorrhage is significantly higher after an initial rupture. Ruptured AVMs can cause significant morbidity and mortality. The Ruptured Arteriovenous Malformation Grading Scale (RAGS) was developed to better predict outcomes in patients with ruptured AVMs; however, its reliability has yet to be confirmed to support its use in clinical practice.

Objective

To determine the intra- and inter-rater reliability of RAGS among those most likely to use it in clinical practice.

Methods

A cross-sectional sample of 42 patients with ruptured AVMs was selected via retrospective review, and clinical vignettes were created. Five raters each assigned a RAGS score to all 42 patients to assess the inter-rater reliability of RAGS. After two months, ten patients from the study sample were randomly selected for re-rating to determine the intra-rater reliability of RAGS.

Results

The overall agreement rate was 97.2% among all raters. The inter-rater reliability was substantial when measured using Cohen/Conger’s Kappa (0.73, 95% confidence interval (CI) [0.63, 0.82]), Scott/Fleiss’ Kappa (0.72, 95% CI [0.62, 0.82]), Krippendorff’s Alpha (0.73, 95% CI [0.63, 0.82]), the intraclass correlation coefficient (ICC) (0.78, 95% CI [0.68, 0.86]), and Kendall’s W (0.79, 95% CI [0.68, 0.86]), and almost perfect when measured using Gwet’s AC (0.90, 95% CI [0.88, 0.93]). The test-retest percent agreement ranged from 94.7% to 98.1% among raters.
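
For readers less familiar with these chance-corrected agreement statistics, the sketch below illustrates how two of them (a pairwise Cohen’s Kappa and Fleiss’ Kappa across all raters) could be computed from a subjects-by-raters matrix of scores. It is illustrative only: the synthetic ratings array, the assumed score range, and the choice of the scikit-learn and statsmodels libraries are assumptions for this example, not the authors’ analysis.

    # Minimal, illustrative sketch (not the authors' analysis code).
    # Assumes a 42-subject x 5-rater matrix of integer scores; the data
    # here are synthetic and the score range is hypothetical.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    rng = np.random.default_rng(0)
    ratings = rng.integers(1, 6, size=(42, 5))  # 42 patients x 5 raters, hypothetical scores

    # Pairwise Cohen's Kappa between the first two raters
    kappa_pair = cohen_kappa_score(ratings[:, 0], ratings[:, 1])

    # Fleiss' Kappa across all five raters (needs a subjects x categories count table)
    counts, _ = aggregate_raters(ratings)
    kappa_fleiss = fleiss_kappa(counts, method="fleiss")

    print(f"Cohen's Kappa (raters 1 vs 2): {kappa_pair:.2f}")
    print(f"Fleiss' Kappa (all 5 raters):  {kappa_fleiss:.2f}")

Both statistics correct the raw percent agreement for the agreement expected by chance, which is why a very high percent agreement (97.2% here) can coexist with more moderate kappa values.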

Conclusions

The RAGS classification system is highly reliable, demonstrating substantial to near-perfect agreement among raters with different expertise levels and specialties. This study supports the potential use of RAGS in clinical practice across different institutions.
