Human-Centered Pathways to Trustworthy AI in Healthcare: A Comparative Analysis of Explainable AI, Human-in-the-Loop, Hybrid AI, and Uncertainty Quantification Techniques

Abstract

Despite the transformative potential of artificial intelligence (AI) in healthcare, its adoption in clinical practice remains constrained by a persistent trust deficit among clinicians and patients. To address this, we conducted a systematic comparative review of 112 peer-reviewed studies published between 2015 and 2025, following PRISMA guidelines for study selection. Articles were sourced from major scientific databases, with a focus on methodological innovations and clinical evaluations that enhance AI trustworthiness. Using a novel Composite Human-Centered Trustworthiness Score (HCTS), we systematically evaluated and compared the contributions of the included studies. Our analysis identified four human-centered pathways: explainable AI (XAI), comprising intrinsically interpretable models and post-hoc techniques (e.g., SHAP, LIME) that support error analysis and stakeholder communication; human-in-the-loop (HITL) frameworks that leverage clinician expertise through active learning and interactive visualization to improve model reliability and usability; hybrid neuro-symbolic architectures that integrate symbolic reasoning with deep learning to achieve robustness in complex or data-sparse settings; and uncertainty quantification (UQ) methods (e.g., Bayesian inference, Monte Carlo dropout, and ensembles) that provide the confidence estimates critical for high-stakes clinical decisions. We found that integrated strategies, such as XAI-driven HITL loops and combined XAI + UQ frameworks, yield the greatest gains in transparency, human oversight, and computational capability. Ensuring the safe, transparent, and sustainable deployment of AI in clinical practice will require addressing technical challenges (data heterogeneity, system interoperability), meeting ethical and regulatory imperatives (fairness, accountability), and advancing multimodal and continual-learning paradigms.
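
To make the post-hoc XAI pathway concrete, the sketch below applies SHAP, one of the techniques named above, to a tree-based risk model. It is a minimal illustration on synthetic data: the features, outcome rule, and model are hypothetical placeholders, not the setup of any reviewed study.

```python
# Minimal sketch of post-hoc explanation with SHAP on a hypothetical
# tree-based clinical risk model trained on synthetic data.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # synthetic stand-ins for clinical features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # hypothetical outcome rule

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values for tree ensembles: each value is a
# per-feature contribution to this patient's predicted risk, which can be
# surfaced to a clinician for error analysis and communication.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(shap_values)
```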
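Similarly, the UQ pathway can be illustrated with Monte Carlo dropout, one of the methods cited above: dropout is kept active at inference time, and the spread across stochastic forward passes serves as a per-prediction confidence signal. The PyTorch network and input below are hypothetical placeholders, a minimal sketch rather than a reviewed implementation.

```python
# Minimal sketch of Monte Carlo dropout for uncertainty quantification,
# assuming a small PyTorch classifier over hypothetical patient features.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Linear(4, 32), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(32, 2),
)

x = torch.randn(1, 4)  # one synthetic patient record

# Keep dropout active at inference (train mode) and average over many
# stochastic forward passes; the spread approximates epistemic uncertainty.
model.train()
with torch.no_grad():
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(100)])

mean = probs.mean(dim=0)  # predictive probability per class
std = probs.std(dim=0)    # uncertainty estimate per class
print(mean, std)
```

In a clinical workflow, a high standard deviation would flag the prediction for deferral to a human expert, which is how UQ complements the HITL pathway.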
