DRIPS: Domain Randomisation for Image-based Perivascular Spaces Segmentation
Abstract
Perivascular spaces (PVS) are emerging as sensitive imaging markers of brain health. Yet accurate out-of-sample PVS segmentation remains challenging, because existing methods are modality-specific, require dataset-specific tuning, or rely on manual labels for (re-)training. We propose DRIPS (Domain Randomisation for Image-based PVS Segmentation), a physics-inspired framework that integrates anatomical and shape priors with a physics-based image generation process to produce synthetic brain images and labels for on-the-fly deep learning training. By introducing variability through resampling, geometric and intensity transformations, and simulated artefacts, DRIPS generalises well to real-world data. We evaluated DRIPS on MRI data from five cohorts spanning diverse health conditions (N = 165; T1w and T2w; isotropic and anisotropic imaging) and on a 3D ex vivo brain model reconstructed from histology. Performance was assessed with the area under the precision–recall curve (AUPRC) and the Dice similarity coefficient (DSC) against manual segmentations, and compared with classical and deep learning methods, including Frangi, RORPO, SHIVA-PVS, and nnU-Net. Only DRIPS and Frangi achieved above-chance AUPRC across all cohorts and the ex vivo model. On isotropic data, DRIPS and nnU-Net performed comparably, outperforming the next-best method by a median of +0.17–0.39 AUPRC and +0.09–0.26 DSC. On anisotropic data, DRIPS outperformed all competitors by a median of +0.13–0.22 AUPRC and +0.07–0.14 DSC. Importantly, its performance was not associated with white matter hyperintensity burden. DRIPS delivers accurate, fully automated PVS segmentation across heterogeneous imaging settings, reducing the need for manual labels, modality-specific models, or cohort-dependent tuning.
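To make the domain-randomisation idea concrete, the sketch below generates one synthetic image/target pair from an anatomical label map: random per-label intensities, then simulated artefacts (bias field, blur, anisotropic resampling, noise). All function names, parameter ranges, and the PVS label index are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
PVS_LABEL = 99  # hypothetical index of the PVS class in the label map

def synthesise(label_map: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Generate one synthetic image/target pair from an anatomical label map."""
    image = np.zeros(label_map.shape, dtype=np.float32)
    # 1. Random per-label intensities: makes training contrast-agnostic,
    #    so a single model can cover T1w, T2w, and unseen contrasts.
    for lab in np.unique(label_map):
        mask = label_map == lab
        image[mask] = rng.normal(rng.uniform(0, 1), rng.uniform(0.01, 0.1),
                                 size=int(mask.sum()))
    # 2. Simulated artefacts: smooth multiplicative bias field, then blur.
    bias = gaussian_filter(rng.normal(0, 1, label_map.shape), sigma=20)
    image *= np.exp(0.3 * bias / (np.abs(bias).max() + 1e-6))
    image = gaussian_filter(image, sigma=rng.uniform(0, 1.5))
    # 3. Mimic anisotropic acquisitions: extra blur along one random axis
    #    stands in for thick-slice resampling.
    sigmas = [0.0] * label_map.ndim
    sigmas[int(rng.integers(label_map.ndim))] = rng.uniform(0, 3)
    image = gaussian_filter(image, sigma=sigmas)
    # 4. Additive noise and min-max normalisation.
    image += rng.normal(0, rng.uniform(0.0, 0.05), image.shape)
    image = (image - image.min()) / (image.max() - image.min() + 1e-6)
    # Training target: binary PVS mask derived from the same label map.
    return image, (label_map == PVS_LABEL).astype(np.float32)

# Usage: any integer label volume works; here a random toy map.
toy_labels = rng.integers(0, 4, size=(32, 32, 32))
toy_labels[12:20, 15:17, 15:17] = PVS_LABEL  # plant a tubular "PVS"
img, target = synthesise(toy_labels)
```

Because a fresh pair is drawn at every training step, the network never sees the same contrast or artefact configuration twice, which is what the abstract means by on-the-fly training.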
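The reported metrics can likewise be sketched in a few lines: AUPRC computed here via scikit-learn's average_precision_score (a standard approximation of the area under the precision–recall curve) and DSC = 2|A ∩ B| / (|A| + |B|). The arrays below are random stand-ins, not data from the study.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient on binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# `prob`: voxel-wise model probabilities; `manual`: binary reference mask.
rng = np.random.default_rng(0)
prob = rng.random((32, 32, 32))
manual = rng.random((32, 32, 32)) > 0.98  # sparse positives, like PVS
auprc = average_precision_score(manual.ravel(), prob.ravel())
dsc = dice(prob > 0.5, manual)
print(f"AUPRC = {auprc:.3f}, DSC = {dsc:.3f}")
```

AUPRC is computed on the continuous probabilities, while DSC requires a thresholded mask; the 0.5 cut-off here is an assumption, not the paper's operating point.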