Post-operative tissue fragment puzzling using histopathological vision transformer alignment (HiViTAlign)


Abstract


In pathology, reconstructing adjacent tissue parts enables an overview of the macro environment of objects such as tumors. Malignant tumors are of particular interest for verifying invasion and resection margins, as patients with positive margins face a higher mortality risk. Reassembling image fragments is widely used in other domains, but adjacent blocks in pathology are mostly analyzed separately, missing global context. In this project, neighboring tissue fragments of pig organ whole slide images (WSIs) are reconstructed without a ground truth, based on histological sections taken at the end of a complex work-up process.

Histological tissue slices with artifacts, frayed or disrupted boundaries, and sometimes missing pieces complicate the puzzling task. Thus, typical approaches such as direct feature comparison of tissue boundaries or estimating a tile's position based on an overview image or known structures are not applicable.

A new approach is presented using partial image registration, where only parts of a fixed and a moving image are aligned for adjacency. In contrast to existing projects that align subsequent tissue slices of the same block, WSIs from separate blocks are reassembled for adjacency. The three-stage vision transformer extracts image features at various scales, compares neighboring tiles by shape, color, and texture, and predicts transformation parameters. Even though the pipeline is capable of handling rigid transformations such as rotation or reflection, only translation is currently supported due to the limited training set. Supervised training of the network is realized using a puzzle generator that creates irregularly shaped fragments from masked whole slide images. The trained neural network is embedded into a histopathological vision transformer alignment (HiViTAlign) pipeline, which executes the following steps in roughly 10 seconds per reassembled tissue puzzle: First, extract the specimen and mask the background in each whole slide image. Second, compare tile boundaries using partial image registration. Third, calculate the adjacency by boundary proximity for each image pair. Fourth, determine a minimum spanning tree to optimize the adjacency of pairwise registrations and transformations for tissue reconstruction.
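To make the assembly step concrete, the following is a minimal sketch of how pairwise registrations can be chained into a global reconstruction via a minimum spanning tree. Only the scipy spanning-tree call is a real API; `register_pair` is a hypothetical stand-in for the ViT-based partial registration (returning a translation and a boundary-proximity score), and the cost and traversal details are illustrative assumptions, not the published HiViTAlign implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def assemble(fragments, register_pair):
    """Sketch of global assembly: pairwise partial registration,
    adjacency scoring, then a minimum spanning tree that chains the
    pairwise translations into one global placement per fragment."""
    n = len(fragments)
    cost = np.zeros((n, n))
    shift = {}
    for i in range(n):
        for j in range(i + 1, n):
            # Hypothetical registration of moving fragment j onto fixed
            # fragment i: translation (dx, dy) plus a proximity score.
            dx, dy, proximity = register_pair(fragments[i], fragments[j])
            # Lower cost = closer boundaries = more likely adjacent.
            cost[i, j] = 1.0 / (proximity + 1e-9)
            shift[(i, j)] = (dx, dy)

    # The minimum spanning tree keeps the most confident adjacencies
    # and removes contradictory cycles (zeros mean "no edge").
    mst = minimum_spanning_tree(csr_matrix(cost)).toarray()
    edges = [(i, j) for i in range(n) for j in range(n) if mst[i, j] > 0]

    # Accumulate translations along the tree, starting from fragment 0.
    pos = {0: (0.0, 0.0)}
    stack = [0]
    while stack:
        u = stack.pop()
        for i, j in edges:
            for a, b, sign in ((i, j, 1), (j, i, -1)):
                if a == u and b not in pos:
                    dx, dy = shift[(min(a, b), max(a, b))]
                    pos[b] = (pos[a][0] + sign * dx, pos[a][1] + sign * dy)
                    stack.append(b)
    return pos  # fragment index -> global (x, y) offset
```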

The Python source code for HiViTAlign to start puzzling with WSIs or other objects is available at https://github.com/cpheidelberg/HiViTAlign. The generator for creating a dataset with irregularly shaped tiles can be downloaded from https://github.com/cpheidelberg/ImagePuzzleGenerator.


Author summary

Histopathology, the microscopic analysis of tissue, remains the gold standard for evaluating tumors, especially when assessing resection margins. However, the physical processing of tissue disrupts its original three-dimensional structure, leaving pathologists with fragmented, two-dimensional slices that lack spatial context. This fragmentation makes it difficult to understand the full extent and orientation of tumors and to correlate pathology results with the radiological imaging used in surgical planning.

In this study, we present a computational pipeline for histopathological vision transformer alignment (HiViTAlign) that reassembles fragmented histological tissue sections, similar to solving a jigsaw puzzle. Using a deep learning model based on vision transformers, our method predicts how individual tissue fragments are spatially related and outputs transformation parameters for adjacency. While the pipeline is designed to accommodate a variety of rigid transformations (e.g., rotation and reflection), its current implementation, constrained by the limited diversity of the training dataset, focuses solely on predicting translational shifts between fragments. A custom dataset generator was developed to create realistic puzzles from whole slide images, assigning original coordinates to each fragment to enable supervised training. The full pipeline was evaluated on both synthetic datasets and real-world whole slide images, demonstrating its ability to reconstruct tissue cross-sections without requiring a reference image. This method may support more accurate spatial interpretation of pathological specimens and better integration with surgical imaging data.
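As a rough illustration of how recorded coordinates enable supervised training, the sketch below cuts a masked image into jittered grid fragments and stores each fragment's original top-left coordinate as the regression target. It is a simplification under stated assumptions: the real ImagePuzzleGenerator produces irregularly shaped fragments, and all names here (`generate_puzzle`, `grid`, `jitter`) are placeholders rather than its published API.

```python
import numpy as np

def generate_puzzle(image, grid=(3, 3), jitter=20, rng=None):
    """Simplified puzzle generator: cut `image` (H, W, C) into a grid
    of fragments along randomly jittered cut lines, returning each
    fragment together with its original top-left (y, x) coordinate."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    # Jitter the interior cut lines so fragments are unevenly sized.
    ys = [0] + sorted(int(h * k / grid[0]) + int(rng.integers(-jitter, jitter))
                      for k in range(1, grid[0])) + [h]
    xs = [0] + sorted(int(w * k / grid[1]) + int(rng.integers(-jitter, jitter))
                      for k in range(1, grid[1])) + [w]
    fragments = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            tile = image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            fragments.append({"image": tile, "origin": (ys[i], xs[j])})
    return fragments

# Supervised target for a fragment pair (a, b): the true translation is
# simply the difference of their recorded origins, against which the
# network's predicted shift can be regressed.
# target = np.subtract(fragments[b]["origin"], fragments[a]["origin"])
```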

The open-source Python code we developed invites collaboration and innovation, reflecting our commitment to advancing computational pathology through technology and shared resources.

Paper to be submitted to PLOS Computational Biology.
