Cell-APP: A generalizable method for microscopic cell annotation, segmentation, and classification
Abstract
High-throughput fluorescence microscopy is an essential tool in systems-biology studies of eukaryotic cells. Its power is fully realized only when every cell in a field of view, across an entire time series, can be accurately localized and quantified. These tasks map onto a common paradigm in computer vision: instance segmentation. Recently, supervised deep learning-based methods have become the state of the art for cellular instance segmentation. However, these methods require large amounts of high-quality training data. This requirement limits our ability to train increasingly performant object detectors, because annotated training data are scarce and typically assembled via laborious hand annotation. Here, we present a generalizable method for generating large instance-segmentation training datasets for tissue-culture cells in transmitted-light microscopy images. We use datasets created by this method to train vision transformer (ViT)-based Mask R-CNNs (Region-based Convolutional Neural Networks) that produce instance segmentations in which each cell is classified as “m-phase” (dividing) or “interphase” (non-dividing). While training these models, we also address the class imbalance between m-phase and interphase annotations, which arises for biological reasons, using probabilistically weighted loss functions and targeted training-data collection. We demonstrate the validity of these approaches by producing highly accurate object detectors that serve as general tools for segmenting and classifying morphologically diverse cells. Because the methodology depends only on generic cellular features, we hypothesize that it can be further generalized to most adherent tissue-culture cell lines.
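The paper's exact probabilistic weighting scheme is not reproduced here; as a minimal sketch of the general idea, the PyTorch snippet below reweights a classification loss by inverse class frequency so that scarce m-phase annotations are not drowned out by abundant interphase ones. The class counts and the two-class setup are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: inverse-frequency class weighting for a two-class
# (interphase vs. m-phase) classification loss. Counts are hypothetical.
import torch
import torch.nn as nn

# Hypothetical annotation counts; interphase cells typically far
# outnumber m-phase cells in a field of view.
class_counts = torch.tensor([9500.0, 500.0])  # [interphase, m-phase]

# Inverse-frequency weights: the rare class gets a proportionally
# larger weight, so the loss does not favor the majority class.
weights = class_counts.sum() / (len(class_counts) * class_counts)

criterion = nn.CrossEntropyLoss(weight=weights)

# Example: logits for four detections and their ground-truth labels.
logits = torch.randn(4, 2)           # (batch, num_classes)
labels = torch.tensor([0, 0, 1, 0])  # mostly interphase (class 0)
loss = criterion(logits, labels)
print(loss.item())
```

In a Mask R-CNN, such a weight would apply to the cross-entropy term of the box-classification head, leaving the box-regression and mask losses unchanged.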