Adaptive Example Selection: Prototype-Based Explainability for Interpretable Mitosis Detection
Abstract
Understanding the decision-making process of black-box neural network classifiers is crucial for their adoption in medical applications, including histopathology and cancer diagnostics. An approach of increasing interest is to explain a neural network's decisions by relating them to the way highly trained clinicians and other medical professionals reason from prototypical examples of the classes of interest. Motivated by this, we introduce Adaptive Example Selection (AES), a prototype-based explainable AI (XAI) framework that improves the interpretability of deep learning models for mitosis detection. AES selects and presents a small set of real-world mitotic images most informative for a given classification, allowing pathologists to visually assess and understand the neural network's decision by comparing test cases with similar previously annotated examples. AES achieves this by taking the neural network's confidence (belief) function and fitting it with a radial basis function (RBF) approximator, an approach we term Decision Boundary-based Analysis (DBA). This method makes the decision boundary more transparent, offering robust visual insight into the model's decisions and thereby equipping clinicians with the information needed to use AI-driven diagnostics effectively. Additionally, AES includes customizable user controls, allowing clinicians to tailor decision thresholds and select prototype examples to better match their specific diagnostic needs. This flexibility lets users engage with the AI model more directly and meaningfully, increasing its practical relevance in clinical settings.
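To make the two central ideas concrete, the following is a minimal sketch: fitting an RBF surrogate to a classifier's confidence over an embedding space (the DBA idea), and retrieving the nearest annotated examples as visual prototypes for a test case. All names, dimensions, the synthetic data, and the choice of SciPy's RBFInterpolator are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Stand-ins for patch embeddings and the black-box model's
# mitosis confidence on annotated training patches (synthetic here).
train_embeddings = rng.normal(size=(500, 16))
train_confidence = rng.uniform(size=500)

# Fit a radial basis function surrogate to the confidence function,
# giving a smooth, queryable approximation of the decision surface.
surrogate = RBFInterpolator(
    train_embeddings,
    train_confidence,
    kernel="thin_plate_spline",
    smoothing=1e-3,
)

def select_prototypes(test_embedding, k=5):
    """Return indices of the k annotated examples nearest the test case,
    to be shown to the pathologist as visual prototypes."""
    dists = np.linalg.norm(train_embeddings - test_embedding, axis=1)
    return np.argsort(dists)[:k]

# Query the surrogate at a test patch and retrieve its prototypes.
test_embedding = rng.normal(size=(1, 16))
print("surrogate confidence:", surrogate(test_embedding)[0])
print("prototype indices:", select_prototypes(test_embedding[0]))
```

A clinician-facing threshold on the surrogate's confidence, and a choice of `k`, would correspond to the user controls the abstract describes.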