SPADE: Superpixel Adjacency Driven Embedding for Three Class Melanoma Segmentation
Abstract
Melanoma remains one of the most lethal forms of skin cancer. In clinical practice, primary care physicians typically rely on the ABCDE criteria (Asymmetry, Border, Color, Diameter, and Evolution), alongside dermoscopic examination and scoring systems, to assess lesion malignancy. However, these assessments are inherently subjective and often influenced by the clinician’s level of experience. To address this limitation, Computer-Assisted Diagnosis (CAD) systems have been developed to provide more objective and reproducible evaluations. CAD algorithms either extract ABCD-related features or directly classify lesions from dermoscopic images. In both approaches, accurate lesion segmentation is critical. Yet approximately 30% of skin lesions exhibit fuzzy or poorly defined borders, complicating the task of drawing a single, definitive contour. In this work, we identify three distinct classes in dermoscopic images (background, border, and lesion core) based on superpixels generated via the Simple Linear Iterative Clustering (SLIC) algorithm. Our contributions are fourfold: (1) redefining lesion borders as regions rather than lines; (2) generating superpixel-level embeddings using a transformer-based autoencoder; (3) incorporating these embeddings as features for classification; and (4) integrating neighborhood information from a Region Adjacency Graph (RAG) to construct enriched feature vectors. Unlike pixel-level CNN algorithms, which often overlook fine-grained boundary context, our pipeline fuses global class context with local spatial relationships, significantly improving precision and recall in challenging border regions. Extensive evaluation on the HAM10000 melanoma dataset demonstrates that our superpixel-RAG-transformer pipeline achieves exceptional performance in classifying background, border, and lesion core superpixels.
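To make the superpixel and neighborhood steps concrete, the sketch below illustrates SLIC over-segmentation followed by a Region Adjacency Graph whose neighbor links enrich each superpixel's feature vector. This is a minimal sketch, not the paper's implementation: it assumes scikit-image (>= 0.20 for skimage.graph), uses a hypothetical image path "lesion.jpg", illustrative SLIC parameters, and mean superpixel color as a stand-in for the transformer-autoencoder embeddings described above.

```python
import numpy as np
from skimage import io, segmentation, graph

# Hypothetical path to a dermoscopic image (H x W x 3); not from the paper.
image = io.imread("lesion.jpg")

# Step 1: SLIC over-segmentation into superpixels (parameter values are illustrative).
labels = segmentation.slic(image, n_segments=200, compactness=10, start_label=1)

# Step 2: Region Adjacency Graph linking neighbouring superpixels.
rag = graph.rag_mean_color(image, labels)

# Step 3: enriched per-superpixel features -- each superpixel's own mean colour
# concatenated with the average mean colour of its RAG neighbours.
# (Mean colour stands in for the superpixel-level transformer embeddings.)
features = {}
for node in rag.nodes:
    own = image[labels == node].mean(axis=0)
    neighbour_means = [image[labels == n].mean(axis=0) for n in rag.neighbors(node)]
    features[node] = np.concatenate([own, np.mean(neighbour_means, axis=0)])

# Each enriched vector would then be classified as background, border, or lesion core.
```

In this sketch the RAG supplies the local spatial context referred to in contribution (4): a superpixel's representation is the concatenation of its own descriptor with an aggregate of its adjacent superpixels' descriptors before classification.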