Deep learning for fluorescence lifetime predictions enables high-throughput in vivo imaging


Abstract

Fluorescence lifetime imaging microscopy (FLIM) is a powerful optical tool widely used in biomedical research to study changes in a sample’s microenvironment. However, data collection and interpretation are often challenging, and traditional methods such as exponential fitting and phasor plot analysis require a high number of photons per pixel to reliably measure the fluorescence lifetime of a fluorophore. Satisfying this requirement demands prolonged data acquisition times, which makes FLIM a low-throughput technique with limited capability for in vivo applications. Here, we introduce FLIMngo, a deep learning model capable of quantifying FLIM data obtained from photon-starved environments. FLIMngo outperforms other deep learning approaches and phasor plot analyses, yielding accurate fluorescence lifetime predictions from decay curves obtained with fewer than 50 photons per pixel by leveraging both the time and spatial information present in raw FLIM data. Thus, FLIMngo reduces FLIM data acquisition times to a few seconds, thereby lowering phototoxicity related to prolonged light exposure and turning FLIM into a higher-throughput tool suitable for analysis of live specimens. Following the characterisation and benchmarking of FLIMngo on simulated data, we highlight its capabilities through applications in live, dynamic samples. Examples include the quantification of disease-related protein aggregates in non-anaesthetised Caenorhabditis elegans (C. elegans), which significantly improves the applicability of FLIM by opening avenues to continuously assess C. elegans throughout their lifespan. Finally, FLIMngo is open-sourced and can be easily implemented across systems without the need for model retraining.
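
The abstract contrasts FLIMngo with phasor plot analysis, which needs many photons per pixel for a stable lifetime estimate. The sketch below is not the authors' code; it simulates mono-exponential TCSPC decays and estimates the lifetime with the standard phasor relation τ = s/(ωg), illustrating how the spread of estimates widens near 50 photons per pixel. All numerical parameters (2 ns lifetime, 256 time bins, 80 MHz repetition rate) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Minimal sketch: phasor-based lifetime estimation on simulated single-exponential
# decays, repeated at different photon counts to show the photon-starvation problem.
rng = np.random.default_rng(0)

T = 12.5e-9                       # laser repetition period (s), assumed 80 MHz
n_bins = 256                      # number of TCSPC time bins (assumed)
t = (np.arange(n_bins) + 0.5) * T / n_bins
omega = 2 * np.pi / T             # angular frequency used for the phasor transform
tau_true = 2.0e-9                 # ground-truth lifetime (s), assumed

def simulate_decay(n_photons):
    """Draw photon arrival times from a mono-exponential decay and histogram them."""
    arrivals = rng.exponential(tau_true, size=n_photons) % T
    counts, _ = np.histogram(arrivals, bins=n_bins, range=(0.0, T))
    return counts

def phasor_lifetime(counts):
    """Phasor coordinates g, s; for a single exponential, tau = s / (omega * g)."""
    total = counts.sum()
    g = np.sum(counts * np.cos(omega * t)) / total
    s = np.sum(counts * np.sin(omega * t)) / total
    return s / (omega * g)

for n_photons in (50, 500, 5000):
    estimates = [phasor_lifetime(simulate_decay(n_photons)) for _ in range(200)]
    print(f"{n_photons:5d} photons/pixel: "
          f"tau = {np.mean(estimates) * 1e9:.2f} +/- {np.std(estimates) * 1e9:.2f} ns")
```

Running the sketch shows the lifetime estimate converging to the true value as the photon count grows, while at roughly 50 photons per pixel the pixel-to-pixel spread is substantial; this is the regime in which the abstract reports that FLIMngo still yields accurate predictions.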
