Decoding Tumor Phenotypes: A Radiologist-Inspired Deep Learning Framework for Breast Cancer Recurrence Prediction


Abstract

Accurate prognostication in breast cancer remains constrained by the limited resolution of conventional clinicopathological markers, often leading to overtreatment of indolent disease or missed opportunities for intensified therapy in aggressive cases. Here we present a radiologist-inspired multimodal deep learning framework that integrates whole-volume dynamic contrast-enhanced MRI (DCE-MRI), radiology reports, and clinical variables to predict recurrence-free survival. The model is trained through cross-modal semantic alignment, in which report-derived representations guide the learning of prognostic imaging features. In a large development cohort from the Netherlands Cancer Institute (NKI, n = 3,266), the framework achieved strong discrimination (C-index 0.727) and identified high-risk patients with a hazard ratio of 7.32 (P < 0.0001). Robustness was further assessed in two independent public external cohorts (Duke and I-SPY 1; n = 1,054) using text-free inference, where the model maintained consistent performance (C-index 0.702 and 0.697, respectively). Prognostic accuracy remained stable across both short- and long-term horizons, with high discriminative power from 2 to 12 years after treatment (AUC 0.77–0.83). This temporal stability enabled reliable identification of ultra-low-risk subgroups (5-year recurrence-free survival > 95%) as well as early detection of aggressive phenotypes prone to early recurrence. These findings support semantically guided multimodal learning as an interpretable and operationally robust imaging biomarker for personalized breast cancer management.
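The discrimination metric quoted above, the concordance index (C-index), measures how often the model ranks patients correctly by risk under right censoring. As a minimal illustrative sketch (not the authors' code, and using made-up toy data), Harrell's C-index can be computed as the fraction of comparable patient pairs in which the higher predicted risk corresponds to the earlier observed recurrence:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index for right-censored survival data.

    times:       observed follow-up times
    events:      1 if recurrence was observed, 0 if censored
    risk_scores: model-predicted risk (higher = earlier expected recurrence)
    """
    concordant, ties, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if patient i's event occurred
            # before patient j's follow-up ended.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    ties += 1  # tied predictions count as half-concordant
    return (concordant + 0.5 * ties) / comparable

# Toy example with hypothetical values: risk ordering matches event ordering,
# so every comparable pair is concordant.
times = [2.0, 5.0, 7.0, 10.0]
events = [1, 1, 0, 1]
scores = [0.9, 0.6, 0.4, 0.2]
print(concordance_index(times, events, scores))  # -> 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the reported values of roughly 0.70 to 0.73 indicate substantially better-than-chance risk ordering across all three cohorts.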
