A Scoping Review of Racial Bias Mechanisms and Mitigation Frameworks in Clinical Artificial Intelligence
Abstract
This scoping review synthesizes evidence on how racial bias arises in clinical artificial intelligence (AI) systems and how it can be mitigated through technical, governance, and policy approaches. We searched for clinical AI/machine learning (ML) studies and relevant conceptual frameworks, limiting searches to English-language sources published between September 2020 and November 2025, and documented study selection using a PRISMA 2020 flow diagram. Eligible studies examined racial or demographic bias mechanisms, fairness evaluation, or mitigation strategies in real-world clinical contexts. Across the 22 included studies, recurring pathways to inequity included underrepresentation and label noise in training data, proxy variables that encode structural disadvantage, differences in access and measurement that distort outcomes, and limited external validation in diverse settings. Mitigation strategies clustered into (1) data and evaluation improvements (e.g., subgroup reporting, calibration, and cross-site validation); (2) model and optimization approaches (e.g., reweighting and fairness-aware objectives); and (3) governance levers (e.g., documentation, equity impact assessments, and monitoring requirements). We translate these findings into a practical framework linking bias mechanisms to mitigation actions and implementation levers, emphasizing feasible steps that health systems and policymakers can take to reduce avoidable inequities during AI deployment.
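To make two of the mitigation families named above concrete, the sketch below illustrates (a) inverse-frequency reweighting, one common form of the reweighting strategies in cluster (2), and (b) a per-subgroup calibration check of the kind implied by the subgroup reporting in cluster (1). The function names and the specific weighting scheme are our own illustration for exposition, not methods drawn from the included studies.

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Per-sample weights so each subgroup contributes equally in total.

    weight_i = N / (K * n_g), where N is the total sample count, K the
    number of subgroups, and n_g the size of sample i's subgroup. The
    weights sum to N, so overall scale is preserved while minority
    subgroups are upweighted during training.
    """
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

def subgroup_calibration(y_true, y_prob, groups):
    """Compare mean predicted risk with observed event rate per subgroup.

    A nonzero gap for one subgroup but not others signals miscalibration
    that aggregate metrics would hide.
    """
    stats = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        mean_pred = sum(y_prob[i] for i in idx) / len(idx)
        obs_rate = sum(y_true[i] for i in idx) / len(idx)
        stats[g] = {"mean_pred": mean_pred,
                    "obs_rate": obs_rate,
                    "gap": mean_pred - obs_rate}
    return stats
```

In practice the weights would be passed to a model's training objective (e.g., a `sample_weight` argument), and the calibration gaps would be reported per subgroup alongside discrimination metrics and re-checked during cross-site validation.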