Stage-wise algorithmic bias, its reporting, and relation to classical systematic review biases in AI-based automated screening in health sciences: A structured literature review
Abstract
Introduction
Algorithmic bias in systematic reviews that use AI-assisted automated screening is a major challenge for the application of AI in health sciences. This article presents preliminary findings from the project “Identification, Reporting, and Mitigation of Algorithmic Bias in Systematic Reviews with AI-Assisted Screening: Systematic Review and Development of a Checklist for its Evaluation”, registered in PROSPERO under registration number CRD420251036600 (https://www.crd.york.ac.uk/PROSPERO/view/CRD420251036600). The results reported here are preliminary and form part of ongoing work.
Objective
To synthesize current knowledge on taxonomies of algorithmic bias, its reporting, its relationship to classical systematic review biases, and the use of visualizations in AI-assisted systematic reviews in health sciences.
Methods
A structured literature review was conducted, focusing on systematic reviews, conceptual frameworks, and reporting standards for bias in healthcare AI, as well as studies cataloguing detection and mitigation strategies, with an emphasis on taxonomies, transparency practices, and visual/illustrative tools.
Results
A mature body of work describes stage-wise taxonomies of, and mitigation methods for, algorithmic bias in clinical AI generally. Reporting and transparency improvements (e.g., CONSORT-AI, SPIRIT-AI) are commonly described. However, there is a notable absence of direct application to AI-automated screening in systematic reviews, and of empirical analyses of how algorithmic biases interact with classical review-level biases. Visualization techniques, such as bias heatmaps and pipeline diagrams, are available but have not been adapted to review workflows.
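As a purely illustrative sketch of the kind of visualization the literature describes but has not yet adapted to review workflows, the following minimal example lays a bias heatmap over the stages of an AI-assisted screening pipeline. The stage labels, bias-type labels, and scores are hypothetical placeholders, not findings from the reviewed studies.

```python
# Illustrative only: a minimal stage-wise "bias heatmap" sketch for an
# AI-assisted screening workflow. All labels and scores are hypothetical.
import matplotlib.pyplot as plt
import numpy as np

stages = ["Search", "Deduplication", "Title/abstract screening",
          "Full-text screening", "Data extraction"]
bias_types = ["Selection bias", "Language bias",
              "Publication bias", "Measurement bias"]

# Placeholder scores in [0, 1] indicating an assumed level of concern
# for each bias type at each review stage.
scores = np.random.default_rng(0).uniform(0, 1, (len(bias_types), len(stages)))

fig, ax = plt.subplots(figsize=(8, 4))
im = ax.imshow(scores, cmap="Reds", vmin=0, vmax=1)
ax.set_xticks(range(len(stages)), labels=stages, rotation=30, ha="right")
ax.set_yticks(range(len(bias_types)), labels=bias_types)
fig.colorbar(im, ax=ax, label="Assumed level of concern")
ax.set_title("Hypothetical stage-wise bias heatmap (illustrative)")
fig.tight_layout()
plt.show()
```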
Conclusions
Foundational methodologies exist for identifying and mitigating algorithmic bias in health AI, but significant gaps remain in understanding and operationalizing these frameworks within AI-assisted systematic reviews. Future research should address this translational gap to ensure transparency, fairness, and methodological rigor in evidence synthesis.