RFA-U-Net: A Foundation Model-Driven Approach for Accurate Choroid Segmentation in OCT Imaging

Abstract

The choroid layer plays a critical role in maintaining outer retinal health and is implicated in numerous vision-threatening diseases such as diabetic retinopathy, age-related macular degeneration, and diabetic macular edema. While Enhanced-Depth Imaging Optical Coherence Tomography (EDI-OCT) enables detailed visualization of the choroid, accurate manual segmentation remains time-consuming and subjective. In this study, we introduce RFA-U-Net, a novel deep learning model designed for precise choroid segmentation in OCT images. Building upon RETFound—a foundation model pre-trained on 1.6 million retinal images—RFA-U-Net retains its encoder to leverage rich hierarchical feature representations while addressing its general-purpose segmentation limitations through three key innovations: (1) attention gates to dynamically emphasize choroid-specific features, (2) feature fusion strategies to maintain contextual integrity during decoding, and (3) standard U-Net up-convolutions to ensure accurate boundary reconstruction. Extensive experiments demonstrate that RFA-U-Net outperforms pre-trained CNN-U-Net variants and state-of-the-art (SOTA) choroidal segmentation models. Specifically, it achieves a Dice score of 95.04 ± 0.25% and a Jaccard index of 90.59 ± 0.30% on an independent test set, highlighting its superior accuracy and robustness, particularly in challenging clinical scenarios with variable OCT image quality. These findings underscore the potential of combining foundation models and attention mechanisms to advance precision and reliability in ophthalmic diagnostics, paving the way for scalable and efficient deployment in clinical and community-based screening settings.
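
To illustrate the decoder-side components the abstract names—attention gates on the skip connections, feature fusion by concatenation, and standard U-Net up-convolutions—the following PyTorch sketch shows one possible decoder block. It is a minimal illustration under assumed channel counts and toy inputs, not the authors' RFA-U-Net implementation; the gate follows the additive Attention U-Net formulation, and the toy tensors merely stand in for features from a RETFound-style encoder.

```python
# Minimal sketch (assumptions noted above): attention-gated skip fusion plus a
# standard U-Net up-convolution, as described in the abstract.
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    """Additive attention gate: the decoder's gating signal re-weights
    skip-connection activations to emphasize choroid-relevant regions."""

    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, gate, skip):
        # gate and skip are assumed to share spatial size at this point
        alpha = self.psi(torch.relu(self.w_g(gate) + self.w_x(skip)))
        return skip * alpha  # attention-weighted skip features


class UpBlock(nn.Module):
    """Up-convolution, attention-gated skip fusion, then two 3x3 convolutions."""

    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.att = AttentionGate(out_ch, skip_ch, out_ch // 2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)                   # U-Net up-convolution
        skip = self.att(x, skip)         # attention gate on the skip path
        x = torch.cat([x, skip], dim=1)  # feature fusion by concatenation
        return self.conv(x)


if __name__ == "__main__":
    # Toy feature maps standing in for encoder outputs (e.g., reshaped tokens
    # from a RETFound-style ViT backbone); real sizes will differ.
    deep = torch.randn(1, 256, 16, 16)   # coarse decoder input
    skip = torch.randn(1, 128, 32, 32)   # higher-resolution skip features
    block = UpBlock(in_ch=256, skip_ch=128, out_ch=128)
    print(block(deep, skip).shape)       # torch.Size([1, 128, 32, 32])
```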
