Cross-view graph neural networks for spatial domain identification by integrating gene expression, spatial locations with histological images

Abstract

The latest developments in spatial transcriptomics technology provide an unprecedented opportunity for in situ elucidation of tissue structure and function. Spatial transcriptomics yields simultaneous, multi-modal, and complementary information, including gene expression profiles, spatial positions, and histological images. Despite these capabilities, current methodologies often fall short of fully integrating these multi-modal data, limiting their ability to capture tissue heterogeneity. In this study, we propose XVGAE (cross-view graph autoencoders), a novel approach that integrates gene expression data, spatial coordinates, and histological images to identify spatial domains. XVGAE constructs two distinct graphs: a spatial graph from spatial coordinates and a histological graph from histological images. These graphs enable XVGAE to learn view-specific representations and to propagate information between views using cross-view graph convolutional networks. Experiments on benchmark datasets of the human dorsolateral prefrontal cortex demonstrate that XVGAE achieves better clustering accuracy than state-of-the-art methods, and further experiments on four real spatial transcriptomics datasets from different sequencing platforms show that XVGAE identifies biologically meaningful spatial domains with smoother boundaries than other methods.
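The two-graph, cross-view idea described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the kNN graph construction, toy feature dimensions, weight initialisation, and the particular cross-view propagation scheme (each view's representation propagated over the other view's graph) are all assumptions made for illustration.

```python
import numpy as np

def knn_adjacency(features, k=3):
    # Connect each node (spot) to its k nearest neighbours in feature space
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    A = np.zeros_like(d)
    idx = np.argsort(d, axis=1)[:, :k]
    rows = np.repeat(np.arange(len(features)), k)
    A[rows, idx.ravel()] = 1.0
    return np.maximum(A, A.T)  # symmetrise

def normalize(A):
    # Symmetric normalisation with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def gcn_layer(A_norm, H, W):
    # One graph-convolution step: ReLU(A_norm @ H @ W)
    return np.maximum(A_norm @ H @ W, 0.0)

# Toy data: 10 spots with 2-D coordinates, histology features, gene expression
rng = np.random.default_rng(0)
coords = rng.normal(size=(10, 2))      # spatial locations
img_feats = rng.normal(size=(10, 8))   # e.g. CNN features of histology patches
expr = rng.normal(size=(10, 20))       # gene expression per spot

A_spatial = normalize(knn_adjacency(coords))     # spatial graph
A_hist = normalize(knn_adjacency(img_feats))     # histological graph

# View-specific propagation of expression over each graph
W1 = rng.normal(size=(20, 16)) * 0.1
Z_spatial = gcn_layer(A_spatial, expr, W1)
Z_hist = gcn_layer(A_hist, expr, W1)

# Cross-view step (assumed form): propagate each view over the other graph
W2 = rng.normal(size=(16, 16)) * 0.1
Z_cross_s = gcn_layer(A_hist, Z_spatial, W2)
Z_cross_h = gcn_layer(A_spatial, Z_hist, W2)
Z = np.concatenate([Z_cross_s, Z_cross_h], axis=1)  # fused embedding per spot
```

The fused embedding `Z` would then feed a decoder and a clustering step to assign spots to spatial domains; those stages are omitted here.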
