Regularized Single-cell Imaging Enables Generalizable AI models for Stain-free Cell Viability Screening

Abstract

Cell viability assays are essential tools in biomedical research and drug development. Artificial intelligence (AI) offers the potential to simplify these assays by predicting cell viability directly from brightfield microscopy images, but current models lack generalizability across diverse cell types and treatments. Here, we introduce a strategy called "regularized imaging", where single cells are isolated in nanowells to generate standardized image patches that simplify segmentation and improve training data quality. We trained our model using example images of live and dead cells from a single cell line exposed to four cytotoxic conditions (ethanol, andrographolide, daunorubicin, and serum starvation). Despite this narrow training dataset, the resulting model accurately identified live and dead cells after treatment with previously unseen compounds, successfully replicating dose-response curves comparable to fluorescence live/dead assays. Importantly, this model effectively generalized across diverse cell types, including both adherent and suspension cells. Additionally, microscopy-based cell viability analysis is non-destructive, enabling repeated measurements for kinetic studies that distinguish fast- from slow-acting compounds. Our findings highlight how regularized single-cell imaging enables the training of broadly generalizable AI models to recognize biologically relevant cell features for label-free cell screening workflows.
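
To illustrate the dose-response step described in the abstract, the sketch below shows one common way to turn per-concentration live/dead predictions into a fitted curve. This is not the authors' code: the four-parameter logistic (Hill) model, the function names, and the example concentrations and viability fractions are illustrative assumptions; the only input assumed from the workflow is the fraction of single-cell patches classified as live at each drug concentration.

```python
# Minimal sketch (not the authors' implementation): fit a four-parameter
# logistic (Hill) dose-response curve to viability fractions, e.g. the
# fraction of nanowell image patches a classifier labels "live" at each
# drug concentration. Concentrations and viabilities below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Viability as a function of concentration (4PL / Hill model)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical example data: drug concentrations (uM) and the corresponding
# fraction of cells predicted live by an image-based model.
concentrations = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
viability = np.array([0.98, 0.95, 0.90, 0.70, 0.35, 0.12, 0.05])

# Fit the curve; initial guesses and bounds keep the optimizer well-behaved.
popt, _ = curve_fit(
    four_param_logistic,
    concentrations,
    viability,
    p0=[0.05, 0.95, 0.5, 1.0],
    bounds=([0.0, 0.5, 1e-4, 0.1], [0.5, 1.1, 100.0, 10.0]),
)
bottom, top, ic50, hill = popt
print(f"Estimated IC50: {ic50:.2f} uM (Hill slope {hill:.2f})")
```

Because the microscopy readout is non-destructive, the same fit could in principle be repeated at multiple time points on the same wells to compare how quickly different compounds reduce viability.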
