High-Resolution Time-Lapse Imaging of Droplet-Cell Dynamics via Optimal Transport and Contrastive Learning
Abstract
Single-cell analysis is essential for uncovering the heterogeneous biological functions that arise from intricate cellular interactions. Microfluidic droplet arrays enable precise dynamic data collection through cell encapsulation in picoliter volumes. Time-lapse imaging of these arrays can reveal functional kinetics and cellular fates, but accurately tracking cell identities across time frames remains challenging when droplets move significantly. In particular, existing machine learning methods often depend on labeled data or require neighboring cells as references; without them, these methods struggle to track identical objects across long distances with complex movements. To address these limitations, we developed a pipeline that combines visual object detection, feature extraction via contrastive learning, and optimal transport-based object matching, minimizing reliance on labeled training data. Our approach was validated across various experimental conditions and tracked thousands of water-in-oil microfluidic droplets over large distances and across frames separated by long (>30 min) intervals. We achieved high precision in previously untraceable scenarios, tracking small, medium, and large movements (~126, ~800, and ~10,000 µm, respectively), with >90% of droplets correctly tracked for average movements within 2–12 object diameters and >60% for average movements exceeding 100 object diameters. This workflow lays the foundation for high-resolution, dynamic analysis of droplets and cells in both spatial and temporal dimensions without relying on visual labeling, enabling high-accuracy tracking of unique samples for which repeating experiments is infeasible.
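To illustrate the matching step described above, the sketch below pairs droplet feature embeddings from two frames by solving an entropically regularized optimal transport problem (Sinkhorn iterations) over a cosine-distance cost matrix. The embeddings, the cosine-distance cost, the regularization strength, and all function names are assumptions made for this example; this is a minimal sketch, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): match droplets between
# two frames via entropically regularized optimal transport over contrastive
# feature embeddings. The cosine-distance cost and regularization value are
# assumptions made for this example.
import numpy as np


def sinkhorn_plan(cost, reg=0.05, n_iters=200):
    """Return an entropically regularized transport plan for a cost matrix."""
    n, m = cost.shape
    a = np.full(n, 1.0 / n)      # uniform mass on frame-t droplets
    b = np.full(m, 1.0 / m)      # uniform mass on frame-(t+1) droplets
    K = np.exp(-cost / reg)      # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):     # alternating Sinkhorn scaling updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]


def match_droplets(emb_t, emb_t1, reg=0.05):
    """Pair each droplet in frame t with its most likely droplet in frame t+1.

    emb_t, emb_t1: (n, d) and (m, d) arrays of feature embeddings, e.g.
    produced by a contrastive encoder (assumed to exist upstream).
    """
    # Cosine-distance cost between every droplet pair across the two frames.
    et = emb_t / np.linalg.norm(emb_t, axis=1, keepdims=True)
    e1 = emb_t1 / np.linalg.norm(emb_t1, axis=1, keepdims=True)
    cost = 1.0 - et @ e1.T
    plan = sinkhorn_plan(cost, reg=reg)
    # Greedy readout: each frame-t droplet is assigned to the column carrying
    # the most transported mass (a hard assignment solver could be used instead).
    return plan.argmax(axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame_t = rng.normal(size=(5, 16))                       # 5 droplets, 16-d embeddings
    frame_t1 = frame_t[[2, 0, 4, 1, 3]] + 0.01 * rng.normal(size=(5, 16))
    print(match_droplets(frame_t, frame_t1))                 # expected: [1 3 0 4 2]
```

In practice, the same readout could be replaced by a hard assignment (e.g., Hungarian matching on the transport plan) when a strict one-to-one pairing of droplets is required.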