Explainable AI for Maternal Health Risk Prediction in Bangladesh: A Hybrid Fuzzy-XGBoost Framework with Clinician Validation
Abstract
Bangladesh faces a maternal mortality ratio of 156 per 100,000 live births, with 2,459 maternal deaths reported in 2022. While machine learning shows promise for risk prediction, black-box models limit clinical adoption in resource-constrained settings where explainability is crucial. This study develops a hybrid fuzzy-XGBoost framework that combines ante-hoc fuzzy-logic interpretability with post-hoc SHAP explanations, validated through clinician feedback. We trained the model on 1,014 maternal health records containing clinical parameters (age, blood pressure, blood sugar), augmented with synthetic regional features derived from Bangladesh health data. The hybrid model achieved 88.67% accuracy with a ROC-AUC of 0.9703, outperforming the best baseline (Gradient Boosting: 86.21%) by 2.46 percentage points. SHAP analysis identified the healthcare access score (most important), blood sugar, and the fuzzy risk score as the primary predictors. Clinician validation (N=14) showed a strong preference for the hybrid explanations (71.4% across cases), with 54.8% expressing trust in its use in clinical practice. Fairness analysis revealed equitable performance across regions (σ = 0.0766), with higher accuracy in underserved areas (r = -0.876 correlation between accuracy and healthcare access), highlighting the framework's potential to address disparities.
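To make the hybrid pipeline concrete, the sketch below shows one way an ante-hoc fuzzy risk score can be appended to the clinical features before XGBoost training and post-hoc SHAP analysis. This is a minimal illustration, not the authors' exact implementation: the membership cut-offs, rule weights, feature names, and the toy synthetic data are all hypothetical stand-ins for the 1,014-record dataset and the paper's full fuzzy rule base.

```python
# Minimal illustrative sketch (hypothetical thresholds and feature names):
# fuzzy risk score -> extra feature -> XGBoost -> SHAP attributions.
import numpy as np
import pandas as pd
import xgboost as xgb
import shap

def tri(x, a, b, c):
    """Triangular fuzzy membership rising on [a, b] and falling on [b, c]."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (c - x) / (c - b + 1e-9)), 0.0, 1.0)

def fuzzy_risk_score(df):
    """Aggregate 'high' memberships for age, systolic BP, and blood sugar."""
    high_age = tri(df["age"], 30, 45, 60)            # hypothetical cut-offs
    high_bp  = tri(df["systolic_bp"], 120, 150, 180)
    high_bs  = tri(df["blood_sugar"], 7, 12, 18)     # mmol/L, hypothetical
    # Simple weighted aggregation standing in for a full fuzzy rule base.
    return 0.3 * high_age + 0.35 * high_bp + 0.35 * high_bs

# Toy synthetic data in place of the real maternal health records.
rng = np.random.default_rng(0)
n = 1014
df = pd.DataFrame({
    "age": rng.integers(15, 50, n).astype(float),
    "systolic_bp": rng.normal(125, 20, n),
    "blood_sugar": rng.normal(8, 3, n),
    "healthcare_access": rng.uniform(0, 1, n),
})
df["fuzzy_risk"] = fuzzy_risk_score(df)
y = (df["fuzzy_risk"] + 0.2 * rng.normal(size=n) > 0.5).astype(int)  # toy label

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                          eval_metric="logloss")
model.fit(df, y)

# Post-hoc SHAP explanations layered on the ante-hoc fuzzy feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(df)
mean_abs = np.abs(shap_values).mean(axis=0)
print(dict(zip(df.columns, np.round(mean_abs, 3))))
```

In this arrangement the fuzzy score remains directly readable by clinicians (it is built from named membership functions over familiar vital signs), while SHAP quantifies how much that score and the raw clinical features each contribute to individual XGBoost predictions.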