Kaizen: Decomposing cellular images with VQ-VAE
Abstract
A fundamental problem in cell and tissue biology is finding cells in microscopy images. Traditionally, this detection has been performed by segmenting pixel intensities. However, these methods struggle to delineate cells in densely packed micrographs, where local decisions about boundaries are non-trivial. Here, we develop a new methodology to decompose microscopy images into individual cells by making object-level decisions. We formulate the segmentation problem as training a flexible factorized representation of the image. To this end, we introduce Kaizen, an approach inspired by predictive coding in the brain: it maintains an internal representation of an image while generating object hypotheses over the external image, keeping those that improve the consistency between the internal and external representations. We achieve this by training a Vector Quantised-Variational AutoEncoder (VQ-VAE). During inference, the VQ-VAE is iteratively applied at locations where the internal representation differs from the external image, generating new object guesses and keeping only those that improve the overall image prediction, until the internal representation matches the input. We demonstrate Kaizen’s merits on two fluorescence microscopy datasets, improving the separation of nuclei and neuronal cells in cell culture images.
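The iterative inference loop described above can be sketched schematically. This is a minimal toy illustration, not the authors' implementation: the trained VQ-VAE proposal is replaced here by a hand-rolled heuristic (`propose_object`) that guesses a fixed-size square object at the largest internal/external mismatch, and all names, sizes, and the error measure are illustrative assumptions.

```python
import numpy as np

# Toy "external image": two bright square objects on a dark background.
external = np.zeros((32, 32))
external[4:10, 4:10] = 1.0
external[20:26, 18:24] = 1.0

# Internal representation starts empty: no objects explained yet.
internal = np.zeros_like(external)

def propose_object(image, internal):
    """Hypothetical stand-in for the VQ-VAE's object hypothesis: guess a
    fixed-size square object at the largest internal/external mismatch."""
    diff = np.abs(image - internal)
    y, x = np.unravel_index(np.argmax(diff), diff.shape)
    patch = np.zeros_like(image)
    patch[y:y + 6, x:x + 6] = 1.0  # numpy slicing clips at the borders
    return patch

def error(image, internal):
    """Squared mismatch between external image and internal representation."""
    return float(np.sum((image - internal) ** 2))

# Iteratively generate object hypotheses; keep only those that improve
# the overall image prediction, until internal matches external.
for _ in range(50):
    if error(external, internal) == 0.0:
        break
    candidate = np.maximum(internal, propose_object(external, internal))
    if error(external, candidate) < error(external, internal):
        internal = candidate  # hypothesis accepted
```

In this toy setting the loop terminates with `error(external, internal) == 0.0`: each accepted square hypothesis explains one object, mirroring the keep-if-it-improves-consistency rule of the abstract, while rejected guesses leave the internal representation untouched.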