Understanding Foundation Models in Digital Pathology: Performance, Trade-offs, and Model-Selection Recommendations
Abstract
The rapid proliferation of digital pathology foundation models (FMs), spanning a wide range of architectural scales and pre-training datasets, poses a significant challenge in selecting the optimal model for a specific clinical application. To systematically evaluate these performance trade-offs, we conducted a comprehensive benchmark of five FMs, stratified from small to huge scale, across a diverse suite of whole slide image (WSI) and region of interest (ROI) tasks. Our findings demonstrate that model superiority is strongly task-specific, challenging the assumption that larger scale universally confers an advantage. For instance, while the huge-scale Virchow2 model excelled at WSI metastasis detection, the small-scale Lunit model was superior for fine-grained lung subtype classification. This trend was even more pronounced in survival analysis, where smaller models outperformed their massive-scale counterparts, a phenomenon we hypothesize is linked to potential information bottlenecks within downstream aggregation models. Notably, the base-scale Kaiko model consistently provided a compelling balance, delivering competitive accuracy, superior stability in prognostic tasks, and higher computational efficiency. Our analysis suggests that the optimal FM is not necessarily the largest, but the one whose scale, data composition, and training strategy are best aligned with the specific task. This work offers a practical, evidence-based framework for balancing performance, stability, and real-world deployment costs in computational pathology.
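To make the hypothesized information bottleneck concrete, the sketch below (a generic illustration, not the aggregation architecture used in this study) shows a gated-attention-style MIL pooling step in which patch embeddings from FMs of very different widths (the dimensions 2560 and 384 are assumed for a huge- and a small-scale model, respectively) are both projected down to the same fixed hidden size before slide-level pooling, so any extra capacity of the larger embedding can be lost at the aggregation stage.

```python
# Illustrative sketch only: a simplified attention-based MIL aggregator
# showing how high-dimensional FM patch embeddings are compressed to a
# fixed hidden size before slide-level pooling (the hypothesized bottleneck).
import numpy as np

rng = np.random.default_rng(0)

def attention_mil_pool(patch_embeddings, hidden_dim=512):
    """Project patch embeddings to hidden_dim, then attention-pool to one slide vector."""
    n, d = patch_embeddings.shape
    # Bottleneck: every FM, regardless of its embedding width d, is mapped to hidden_dim.
    W_proj = rng.standard_normal((d, hidden_dim)) / np.sqrt(d)
    h = np.tanh(patch_embeddings @ W_proj)                 # (n, hidden_dim)
    w_attn = rng.standard_normal((hidden_dim, 1)) / np.sqrt(hidden_dim)
    scores = h @ w_attn                                     # (n, 1) attention logits
    a = np.exp(scores - scores.max())
    a = a / a.sum()                                         # softmax over patches
    return (a * h).sum(axis=0)                              # (hidden_dim,) slide embedding

# Hypothetical widths: 2560-d patches from a huge-scale FM, 384-d from a small one.
slide_huge = attention_mil_pool(rng.standard_normal((1000, 2560)))
slide_small = attention_mil_pool(rng.standard_normal((1000, 384)))
# Both slides end up as 512-d vectors: the huge model's wider embedding
# offers no extra capacity downstream of the projection.
print(slide_huge.shape, slide_small.shape)
```

Under this reading, the projection layer, not the FM itself, sets the ceiling on how much patch-level information reaches the survival head, which would explain why larger embeddings do not automatically translate into better prognostic performance.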