Hierarchical Owen Value Explanations for Interpretable Brain Tumor Classification in Medical Imaging
Abstract
Deep learning models have shown promising accuracy in brain tumor classification from medical images, but their black-box nature remains a barrier to clinical adoption. To address this, we propose a structured explanation framework based on the Owen value, which extends the Shapley value by incorporating spatial feature dependencies through a predefined segmentation hierarchy. Unlike traditional SHAP-based methods that treat pixels as independent features, our approach groups spatially coherent regions using superpixel segmentation and recursively distributes attribution scores through a multi-layer hierarchy. This design aligns more closely with clinical reasoning and improves both interpretability and computational efficiency. We evaluate our method using three metrics: Pointing Game Accuracy (PGA), Attribution Entropy (AE), and Area Over the Perturbation Curve (AOPC). The method consistently outperforms SHAP and axis-aligned Owen baselines in both explanation quality and runtime. The proposed framework offers a scalable and clinically meaningful path toward trustworthy AI-assisted diagnosis in medical imaging.
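To make the core idea concrete, the sketch below shows a Monte Carlo estimator of the Owen value for a single level of feature grouping: groups (e.g., superpixels) enter a coalition in random order, and members enter contiguously within their group, so marginal contributions respect the grouping. This is an illustrative sketch only; the function names, the `value_fn` interface, and the linear toy game in the usage note are assumptions for demonstration, not the paper's implementation.

```python
import random

def owen_value_mc(value_fn, n_features, groups, n_samples=2000, seed=0):
    """Monte Carlo estimate of Owen values under a coalition structure.

    value_fn   : callable mapping a set of feature indices to a real payoff
                 (e.g., model output with the remaining features masked)
    n_features : total number of features
    groups     : list of lists partitioning range(n_features), e.g. superpixels
    """
    rng = random.Random(seed)
    phi = [0.0] * n_features
    for _ in range(n_samples):
        # Sample a permutation consistent with the group structure:
        # shuffle the groups, then shuffle members within each group.
        group_order = [g[:] for g in groups]
        rng.shuffle(group_order)
        coalition = set()
        for g in group_order:
            rng.shuffle(g)
            for i in g:
                # Marginal contribution of feature i given the coalition so far.
                before = value_fn(coalition)
                coalition.add(i)
                phi[i] += value_fn(coalition) - before
    return [p / n_samples for p in phi]
```

For example, with a linear payoff `v(S) = sum(w[i] * x[i] for i in S)` (a hypothetical toy game), every marginal contribution of feature `i` equals `w[i] * x[i]`, so the estimate recovers the exact attribution. The paper's hierarchical variant would apply this grouping recursively, distributing each region's score down the segmentation tree.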