Integrating Explainable Artificial Intelligence into Histopathological Risk Assessment: A Scoping Review and Meta-analysis

Abstract

Explainable artificial intelligence (XAI) is increasingly used to support cancer detection, grading, and prognosis from histopathology, yet clinical adoption remains limited by uncertainty about reliability, interpretability, and governance. We conducted a scoping review and meta-analysis of peer-reviewed studies applying explainable AI to histopathology-based cancer risk assessment. Searches of PubMed, Google Scholar, Scopus, and Web of Science identified 47 eligible studies. Post hoc visual methods, particularly class activation mapping (CAM) and Grad-CAM, dominated the field. Nine studies contributed to the meta-analysis, yielding a pooled area under the receiver operating characteristic curve (AUC) of 0.962 (95% CI 0.909–0.985) with substantial heterogeneity (I² = 97.1%). Subgroup analysis showed higher and more consistent performance at the slide level than at the patient level, identifying the unit of analysis as a major source of heterogeneity. These findings support a translational roadmap for clinically meaningful explanation, usability testing, prospective validation, and governance-aligned deployment.
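The pooled AUC, confidence interval, and I² statistic reported above are the standard outputs of a random-effects meta-analysis. The sketch below illustrates the usual DerSimonian–Laird pooling procedure on the logit scale; the study values are hypothetical placeholders, not the nine studies analyzed in the review, and the choice of logit-scale standard errors is an assumption for illustration.

```python
import math

# Hypothetical per-study AUCs and their standard errors on the logit
# scale (illustrative only; not the studies from the review).
aucs = [0.91, 0.95, 0.97, 0.99, 0.93]
ses  = [0.20, 0.15, 0.10, 0.25, 0.18]

# Pool on the logit scale so the back-transformed estimate stays in (0, 1).
logit = lambda p: math.log(p / (1 - p))
inv_logit = lambda x: 1 / (1 + math.exp(-x))

y = [logit(a) for a in aucs]
w = [1 / se**2 for se in ses]  # fixed-effect (inverse-variance) weights

# Cochran's Q and I² quantify between-study heterogeneity.
fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
df = len(y) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird moment estimate of the between-study variance tau².
C = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights, pooled estimate, and 95% CI.
w_re = [1 / (se**2 + tau2) for se in ses]
mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_mu = math.sqrt(1 / sum(w_re))
pooled = inv_logit(mu)
lo, hi = inv_logit(mu - 1.96 * se_mu), inv_logit(mu + 1.96 * se_mu)

print(f"pooled AUC = {pooled:.3f} (95% CI {lo:.3f}-{hi:.3f}), I² = {I2:.1f}%")
```

A very high I², as reported in the abstract, indicates that most of the observed variation reflects genuine between-study differences rather than sampling error, which is what motivates the subgroup analysis by unit of analysis.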
