Transparent Clinical Support Through Cross-Modal Fusion and Aligned Explanations
Abstract
AI-driven healthcare recommendation systems are rapidly gaining traction for personalized clinical decision support. However, many existing solutions remain unimodal, relying on a single data type, and lack transparency, limiting their effectiveness in real-world clinical environments. To overcome these limitations, an advanced Multimodal Healthcare Recommendation System is proposed that integrates structured vitals, free-text clinical notes, and wearable sensor data through an intermediate fusion framework with learnable cross-modal attention mechanisms. Each modality is projected into a shared latent space and fused via attention-driven interactions that capture complex interdependencies across diverse data types. This fusion enables the system to construct richer representations, improving predictive accuracy by effectively leveraging heterogeneous medical information. To address the critical issue of interpretability, a Multimodal Explanation Alignment Module (MEAM) is incorporated. MEAM aligns the model's attention with outputs generated by modality-specific explainability techniques such as SHAP and LIME, and a consistency loss reinforces this alignment, promoting coherence between the model's attention signals and the explanation maps. As a result, the system produces not only accurate predictions but also human-understandable justifications, thereby enhancing clinical trust and accountability. The architecture demonstrates significant improvements in both predictive performance and transparency, making it well suited for deployment in diverse healthcare environments, including those with limited resources. By combining high performance with robust explainability, this approach addresses the core limitations of traditional black-box AI systems in clinical settings.
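The intermediate fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the modality dimensions, the single-token-per-modality simplification, and the use of shared-weight scaled dot-product attention are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(x, W):
    """Linearly project a modality embedding into the shared latent space."""
    return x @ W

def cross_modal_attention(queries, keys, values):
    """Scaled dot-product attention; each modality token attends to all others."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ values, weights

# Hypothetical per-modality feature vectors (dimensions are illustrative).
vitals   = rng.normal(size=(1, 16))   # structured vitals
notes    = rng.normal(size=(1, 32))   # clinical-note encoder output
wearable = rng.normal(size=(1, 8))    # wearable sensor features

d_latent = 24  # shared latent dimension (an assumption)
W_v = rng.normal(size=(16, d_latent))
W_n = rng.normal(size=(32, d_latent))
W_w = rng.normal(size=(8,  d_latent))

# Stack the three projected modality tokens and let them attend to each other.
tokens = np.vstack([project(vitals, W_v),
                    project(notes, W_n),
                    project(wearable, W_w)])      # shape (3, d_latent)
fused, attn = cross_modal_attention(tokens, tokens, tokens)

print(fused.shape)  # fused modality representations, (3, 24)
print(attn.shape)   # modality-to-modality attention weights, (3, 3)
```

In a trained system the projection matrices would be learnable parameters and the fused tokens would feed a downstream prediction head; the attention matrix `attn` is the signal MEAM later compares against explanation maps.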
Ultimately, the system supports responsible AI deployment in healthcare, fostering improved patient outcomes, increased clinician confidence, and alignment with contemporary standards for transparency and ethical AI use.
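The consistency loss that MEAM uses to align attention with explanation maps can be sketched in the same spirit. The abstract does not specify the loss's exact form, so the KL-divergence formulation below, and the treatment of normalized attention weights and absolute SHAP-style attributions as distributions over a common feature set, are assumptions for illustration only.

```python
import numpy as np

def normalize(v, eps=1e-8):
    """Turn a vector of importance scores into a probability distribution."""
    v = np.abs(v)
    return v / (v.sum() + eps)

def explanation_consistency_loss(attention, attribution, eps=1e-8):
    """KL divergence between normalized attention weights and normalized
    |SHAP/LIME| attributions over the same features (assumed loss form)."""
    p = normalize(attention)
    q = normalize(attribution)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical per-feature scores for one prediction.
attn_weights = np.array([0.5, 0.3, 0.2])   # model attention over features
shap_values  = np.array([0.45, 0.35, 0.20])  # SHAP-style attributions

loss = explanation_consistency_loss(attn_weights, shap_values)
print(loss)  # small positive value; 0 when the two maps agree exactly
```

Minimizing this term during training pushes the model's attention toward the post-hoc explanation maps, which is how the described architecture promotes coherence between what the model attends to and what it reports as its justification.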