A Data Fusion Deep Learning Approach for Accurate Organelle-Based Classification of Cancer Cells

Abstract

Microscopy-based cancer cell classification has traditionally focused on observable cellular features, such as size, morphology, and pleomorphism. Recent advances in machine learning-based image analysis for cancer diagnostics have enabled greater throughput and consistency in cancer cell analysis by extracting features that are not visually discernible. Specifically, classification based on the shape and spatial distribution of sub-cellular organelles has been established as a highly accurate methodology. These handcrafted feature extraction methods, however, are limited in throughput and analytical trustworthiness, because they depend on manual use of external software for object rendering, handcrafted feature extraction, and classification, introducing potential biases and artificial features during image processing. Herein, we introduce a deep learning approach using a patch-based convolutional neural network (CNN) with channel-wise intermediate data fusion to perform end-to-end breast cancer classification of fluorescent confocal microscopy images, with each sub-cellular organelle of interest analyzed as a separate feature channel. In cross-validation studies on a dataset of six breast cancer cell lines, our methodology achieved an average classification accuracy of 92.0 ± 0.9%, rivaling established handcrafted-feature methods. Ultimately, this work provides streamlined, organelle-focused feature analysis for automated deep learning-based cancer cell classification.
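To make the fusion architecture concrete, the following is a minimal PyTorch sketch of a patch-based CNN with channel-wise intermediate data fusion. The OrganelleBranch/FusionCNN names, branch design, layer widths, three-channel input, and 64x64 patch size are illustrative assumptions rather than the authors' exact architecture; only the six-class output reflects the six cell lines described above.

import torch
import torch.nn as nn

class OrganelleBranch(nn.Module):
    """Per-organelle feature extractor over a single fluorescence channel."""
    def __init__(self, out_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, out_ch, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.net(x)

class FusionCNN(nn.Module):
    """Processes each organelle channel in its own branch, then fuses the
    branch feature maps mid-network by channel-wise concatenation."""
    def __init__(self, n_channels=3, n_classes=6):
        super().__init__()
        self.branches = nn.ModuleList(OrganelleBranch() for _ in range(n_channels))
        self.head = nn.Sequential(
            nn.Conv2d(32 * n_channels, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes),
        )

    def forward(self, x):
        # x: (batch, n_channels, H, W) patches; one channel per organelle stain.
        feats = [branch(x[:, i:i + 1]) for i, branch in enumerate(self.branches)]
        return self.head(torch.cat(feats, dim=1))  # intermediate channel-wise fusion

model = FusionCNN(n_channels=3, n_classes=6)
logits = model(torch.randn(8, 3, 64, 64))  # a batch of eight 64x64 patches
print(logits.shape)  # torch.Size([8, 6])

In a patch-based pipeline of this kind, cell- or image-level predictions would typically be obtained by aggregating the per-patch outputs, for example by averaging logits or majority voting; the abstract does not specify the aggregation scheme used here.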
