CLSENet: A Continuous Learning Semantic Enhancement Network for Medical Image Perception
Abstract
Medical image perception is crucial for clinical diagnosis and treatment planning, yet it faces challenges in modeling long-range dependencies, maintaining computational efficiency, and adapting to evolving clinical needs without catastrophic forgetting. To address these issues, we propose a Continuous Learning Semantic Enhancement Network (CLSENet), which integrates a Mamba-based architecture with tailored continual learning strategies. CLSENet builds a semantic enhancement framework around several key innovations: an Orthogonal Gradient Correction Module (OGCM) and task-specific prompts that mitigate catastrophic forgetting at both the image and sequence levels, enabling effective continual learning; an Attention-guided State Space Layer (ASSL) that captures comprehensive semantic features; and a Local Semantic Attention (LSA) module that refines the extraction of local details. Together, these components enhance feature fusion by adaptively integrating multi-scale and contextual information. Extensive experiments on the LIDC and LUNA16 lung nodule perception datasets demonstrate that CLSENet achieves state-of-the-art performance, with notable improvements over strong baselines, e.g., gains of 0.22%-0.59% in mDice and 1.43%-2.26% in mMPA, while maintaining the lowest computational cost (1.12G FLOPs) and memory usage (2.82MB), validating its precision, efficiency, and generalization capability in dynamic clinical environments.
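The abstract does not spell out how the OGCM operates. As a minimal sketch, assuming it follows the standard orthogonal-gradient-projection idea from continual learning, the current-task gradient could be projected onto the subspace orthogonal to stored gradient directions from earlier tasks before each update. All names below (orthogonal_gradient_correction, prev_task_dirs) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a generic orthogonal gradient projection step,
# assumed here to approximate the role of OGCM; not the paper's actual code.
import torch


def orthonormal_basis(dirs, eps=1e-12):
    """Gram-Schmidt orthonormalisation of stored old-task gradient directions."""
    basis = []
    for d in dirs:
        v = d.clone()
        for b in basis:
            v = v - torch.dot(v, b) * b
        if v.norm() > eps:
            basis.append(v / v.norm())
    return basis


def orthogonal_gradient_correction(grad, prev_task_dirs, eps=1e-12):
    """Project the flattened current-task gradient onto the subspace orthogonal
    to the span of previous-task gradient directions, so the update is less
    likely to overwrite behaviour learned on earlier tasks."""
    corrected = grad.clone()
    for b in orthonormal_basis(prev_task_dirs, eps):
        corrected = corrected - torch.dot(corrected, b) * b
    return corrected


if __name__ == "__main__":
    g = torch.randn(1000)                              # current-task gradient (flattened)
    memory = [torch.randn(1000) for _ in range(3)]     # placeholder old-task gradients
    g_corr = orthogonal_gradient_correction(g, memory)
    # g_corr is numerically orthogonal to every stored direction.
    print([round(float(torch.dot(g_corr, d / d.norm())), 6) for d in memory])
```

In practice the stored directions might be per-task reference gradients or low-rank summaries of them; the abstract does not specify, so the memory above is a random placeholder used only to exercise the projection.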