DynamicBUS: Restoring Temporal Dynamics from Static Ultrasound for Improved Breast Cancer Diagnosis

Abstract

The clinical practice of archiving static 2D images from dynamic breast ultrasound (BUS) examinations leads to a critical loss of temporal information that is vital for accurate lesion diagnosis. This data limitation constrains the performance of conventional computer-aided diagnosis systems. We address this limitation with a novel framework that computationally recovers and leverages the lost temporal dynamics to enhance diagnostic accuracy. The framework first synthesizes a BUS video from a static key-frame image using a purpose-built generative model. To keep the synthesis diagnostically plausible, we introduce a key-frame conditioning strategy that preserves the anatomical fidelity of the lesion while generating useful dynamic cues. The high-fidelity static image and the synthesized video are then fed into IV-Net, our dual-branch fusion network that synergistically integrates pristine spatial details with the recovered temporal context for robust classification. Evaluated on key-frame BUS datasets, the integrated framework outperforms methods that rely solely on static images (AUC: 92.63%). Moreover, a reader study indicates that the generated videos are indistinguishable from real ones to expert readers and lead to higher diagnostic performance. Overall, our method demonstrates the potential of generative AI to restore lost clinical information, paving the way for more accurate and reliable BUS diagnostic systems. The code and generated dataset are publicly available at https://github.com/minnelab/DynamicBUS
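The dual-branch fusion idea described above can be sketched as follows. This is a minimal illustrative PyTorch example, not the paper's actual IV-Net: all layer shapes, channel counts, and the late concatenation-based fusion are assumptions chosen only to show how a 2D branch (static key frame) and a 3D branch (synthesized video) can be combined for classification.

```python
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    """Illustrative dual-branch fusion: a 2D CNN encodes the static
    key-frame image, a 3D CNN encodes the synthesized video clip, and
    the pooled features are concatenated for benign/malignant
    classification. Architecture details are hypothetical."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # 2D branch for the static key frame, input (B, 1, H, W)
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # 3D branch for the synthesized video, input (B, 1, T, H, W)
        self.video_branch = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
        )
        # Fuse spatial and temporal features by concatenation
        self.classifier = nn.Linear(16 + 16, num_classes)

    def forward(self, image: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.image_branch(image), self.video_branch(video)], dim=1
        )
        return self.classifier(fused)

model = DualBranchFusion()
logits = model(
    torch.randn(2, 1, 64, 64),      # batch of 2 static key frames
    torch.randn(2, 1, 8, 64, 64),   # matching 8-frame synthesized clips
)
print(logits.shape)  # torch.Size([2, 2])
```

The concatenation-based late fusion here is only one possible design; the actual network may integrate the two streams differently (e.g., with attention or intermediate feature exchange).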
