Machine-learning strategy for high error tolerance in image-based digital molecular assays

Abstract

There is a significant global health need to translate more in vitro diagnostic tests (IVDs) from clinical laboratories to field-based applications, including point-of-care (POC) and self-administered test formats. These applications typically involve smaller sample volumes, limited sample processing and measurement capabilities, and greater handling variability. Error tolerance is therefore one of the most critical factors in successful field-based assay design. Here, we examine machine-learning (ML) strategies to enhance the error tolerance of image-based nanoparticle immunoassays. Random dispersions of nanoparticles were imaged in microliter sample volumes, and the images were processed to determine analyte concentrations based on nanoparticle appearance. Assay performance was characterized using two common blood biomarkers: C-reactive protein (CRP) and SARS-CoV-2 IgG. We compare the results of a conventional image analysis, a hybrid ML-conventional approach based on pixel segmentation, and a full end-to-end image regression using a targeted regularization strategy. Training images for the full image-regression approach required only a single label – the analyte concentration – eliminating the need for labor-intensive pixel-level labeling. Ultimately, the fully ML-based analysis significantly improved dynamic range, sensitivity, and reproducibility in high-error settings, including direct measurements performed in whole blood.
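The single-label training idea described above can be illustrated with a minimal sketch (not the authors' code): each synthetic "nanoparticle image" carries one label, the analyte concentration, and a regularized linear regressor is fit end to end from raw pixels to concentration. Ridge (L2) regularization stands in here for the paper's targeted regularization strategy; the image generator, sizes, and learning parameters are all illustrative assumptions.

```python
import random

random.seed(0)

W, H = 8, 8           # hypothetical image size (assumption)
N_PIX = W * H

def make_image(conc):
    """Synthetic stand-in image: more bright 'nanoparticle' pixels at higher conc."""
    img = [0.0] * N_PIX
    for _ in range(int(conc * 10)):           # particle count scales with conc
        img[random.randrange(N_PIX)] += 1.0
    # additive noise loosely models handling variability
    return [p + random.gauss(0.0, 0.05) for p in img]

# Training set: each image has ONE label, the analyte concentration.
train = [(make_image(c), c)
         for c in (0.5, 1.0, 1.5, 2.0, 2.5)
         for _ in range(20)]

# Linear model y = w.x + b, fit by per-sample gradient descent with L2 penalty.
w = [0.0] * N_PIX
b = 0.0
lr, lam = 0.01, 0.001
for epoch in range(200):
    for x, y in train:
        pred = sum(wi * xi for wi, xi in zip(w, x)) + b
        err = pred - y
        w = [wi - lr * (err * xi + lam * wi) for wi, xi in zip(w, x)]
        b -= lr * err

def predict(img):
    """Regress concentration directly from raw pixels."""
    return sum(wi * xi for wi, xi in zip(w, img)) + b

for c in (0.8, 1.8):
    preds = [predict(make_image(c)) for _ in range(50)]
    print(f"true={c:.1f}  predicted mean={sum(preds) / len(preds):.2f}")
```

The point of the sketch is the labeling economy: no pixel-level masks are needed, only one scalar per image, which is what makes end-to-end regression attractive when annotation is the bottleneck.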