Self-supervised learning enables unbiased patient characterization from multiplexed microscopy images

Abstract

Multiplexed immunofluorescence microscopy provides detailed insights into the spatial architecture of cancer tissue. Classical analysis approaches focus on single-cell data but can be limited by segmentation accuracy and the representational power of extracted features, potentially overlooking crucial spatial interrelationships among cells. We developed a hierarchical self-supervised deep learning approach that learns feature representations from multiplexed microscopy images without expert annotations. The method encodes tissue samples at both the local (cellular) and the global (tissue-architecture) level. We applied it to lung, prostate, and renal cancer tissue microarray cohorts to investigate whether self-supervised learning can recognize clinically meaningful marker patterns in multiplexed microscopy images. Local and global features distinguished between tissue regions (e.g., tumor center and adjacent benign region), and the learned features identified prognostically distinct patient groups with significant differences in survival outcomes. These groups matched earlier findings obtained with classical single-cell analysis based on expert annotations. Moreover, attention maps extracted from the models highlighted tissue regions that correlate with specific marker combinations. Overall, the approach effectively profiles complex multiplexed microscopy images, offering potential for improved biomarker discovery and more informed cancer treatment decisions.
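The abstract describes aggregating local (cellular) features into a global (tissue-level) representation, with attention maps indicating which regions drive the result. A minimal, hypothetical sketch of such attention pooling is shown below; the function name, the toy feature vectors, and the fixed attention vector are illustrative assumptions, not the authors' implementation (which learns these representations with self-supervised deep networks).

```python
import math

def attention_pool(cell_features, attn_vector):
    """Aggregate per-cell feature vectors into one tissue-level
    embedding via softmax attention (hypothetical sketch, not the
    paper's actual architecture)."""
    # Raw attention score per cell: dot product with a (here fixed,
    # in practice learned) attention vector.
    scores = [sum(f * a for f, a in zip(feat, attn_vector))
              for feat in cell_features]
    # Softmax normalisation -> one weight per cell (subtract the max
    # for numerical stability).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Attention-weighted sum of cell features = global embedding.
    dim = len(cell_features[0])
    embedding = [sum(w * feat[d] for w, feat in zip(weights, cell_features))
                 for d in range(dim)]
    # The weights double as an "attention map" over cells/regions.
    return embedding, weights

# Toy local features for three cells (2-dimensional for illustration).
cells = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
emb, attn = attention_pool(cells, attn_vector=[1.0, 0.0])
```

In this sketch, cells whose features align with the attention vector receive larger weights, so the returned weights can be mapped back onto the tissue to highlight influential regions, analogous to the attention maps the abstract refers to.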
