Artificial Intelligence Software to Accelerate Screening for Living Systematic Reviews

Abstract

Background: Systematic and meta-analytic reviews provide gold-standard evidence but are static and quickly become outdated. Here we provide performance data on a new software platform that uses artificial intelligence technologies to (1) accelerate screening of titles and abstracts from library literature searches, and (2) enable Living Systematic Reviews by maintaining a saved AI algorithm for updated searches.

Methods: Performance testing was based on Living Review System (LRS) data from seven systematic reviews. LRS efficiency was estimated as the proportion (%) of the total yield of an initial literature search (titles/abstracts) that required human screening before reaching the built-in stop threshold. LRS algorithm performance was measured as work saved over sampling (WSS) at a given recall. LRS accuracy was estimated as the proportion of incorrectly classified papers in the rejected pool, as determined by two independent human raters.

Results: On average, around 36% of the total yield of a literature search required human screening before reaching the stop threshold; this ranged from 22% to 53% depending on the complexity of language structure across papers included in specific reviews. Accuracy was 99% at an interrater reliability of 95%, and 0% of titles/abstracts were incorrectly assigned.

Conclusion: Findings suggest that the LRS can be a cost-effective and time-efficient solution for supporting living systematic reviews, particularly in rapidly developing areas of science. Further development of the LRS is planned, including facilitated full-text data extraction and community-of-practice access to living systematic review findings.
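As an illustration of the performance metric named in the Methods, the work-saved-over-sampling (WSS) figure used in screening-automation studies compares the fraction of records a human did not have to screen against the miss rate implied by the target recall. The sketch below assumes the standard WSS definition; the function name, the example numbers, and the 95% recall level are illustrative, not values reported by the LRS study.

```python
def wss(total, screened_by_human, recall):
    """Work saved over sampling at a given recall level (standard definition).

    total             -- total yield of the literature search (N records)
    screened_by_human -- number of titles/abstracts a human had to screen
                         before the stop threshold was reached
    recall            -- target recall level, e.g. 0.95 for WSS@95
    """
    # Fraction of records the human did NOT screen, minus the fraction of
    # relevant records allowed to be missed at this recall level.
    return (total - screened_by_human) / total - (1.0 - recall)

# Hypothetical example: if 36% of a 10,000-record yield must be screened
# to reach 95% recall, the work saved over sampling is about 0.59.
print(round(wss(10_000, 3_600, 0.95), 2))
```

A WSS of 0 would mean the algorithm saves no work beyond random sampling at that recall; values approaching 1 indicate that almost all screening effort was avoided.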