Adaptive Contextual-Relational Network for Fine-Grained Gastrointestinal Disease Classification

Abstract

Automated fine-grained classification of gastrointestinal diseases from endoscopic images is vital for early diagnosis, yet it remains difficult due to limited annotated data, large appearance variations, and the subtle nature of many lesions. Existing few-shot and relational learning approaches often struggle to handle drastic viewpoint shifts and complex contextual cues, and to distinguish visually similar pathologies under scarce supervision. To address these challenges, we introduce the Adaptive Contextual-Relational Network (ACRN), an end-to-end framework tailored for robust and efficient few-shot classification in gastrointestinal imaging. ACRN incorporates an adaptive contextual-relational module that fuses multi-scale contextual encoding with dynamic graph-based matching to enhance both feature representation and relational reasoning. An enhanced task interpolation strategy further enriches task diversity by generating more realistic virtual tasks through feature similarity–guided interpolation. Together with a lightweight encoder equipped with spatial attention and an efficient attention routing mechanism, the model achieves strong discriminative capability while maintaining practical computational efficiency. Experiments on a challenging benchmark demonstrate improved accuracy and stability over prior methods, and ablation studies confirm the contribution of each component. ACRN is also resilient to common image perturbations and performs comparably to experienced clinicians on particularly difficult cases, underscoring its potential as a supportive tool in clinical workflows.
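The abstract's feature similarity–guided task interpolation can be illustrated with a minimal mixup-style sketch. The paper's exact rule is not given here, so everything below is an assumption: per-pair cosine similarity biases the mixing coefficient so that virtual support features stay close to realistic examples (similar pairs mix more evenly, dissimilar pairs stay near the original).

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolate_tasks(feats_a, feats_b, alpha=0.4):
    """Hypothetical similarity-guided mixup for virtual-task generation.

    feats_a, feats_b: (n, d) support features from two sampled tasks.
    The mixing weight per pair is modulated by cosine similarity, an
    assumed stand-in for the abstract's "feature similarity-guided
    interpolation" (details are not specified in the abstract).
    """
    # Cosine similarity between corresponding support features, in [-1, 1]
    num = (feats_a * feats_b).sum(axis=1)
    denom = np.linalg.norm(feats_a, axis=1) * np.linalg.norm(feats_b, axis=1)
    sim = num / np.clip(denom, 1e-8, None)
    # Map similarity to a per-sample mixing coefficient in (0, 1):
    # similar pairs mix more evenly; dissimilar pairs stay near feats_a.
    lam = 0.5 + 0.5 * sim * rng.beta(alpha, alpha, size=len(sim))
    lam = lam[:, None]
    return lam * feats_a + (1.0 - lam) * feats_b
```

Because each mixing coefficient lies in (0, 1), every interpolated feature is a convex combination of the two originals, so the virtual task never leaves the span of observed support features.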
