Micro𝕊plit: Semantic Unmixing of Fluorescent Microscopy Data
This article has been reviewed by the following groups.
Listed in
- Evaluated articles (Arcadia Science)
Abstract
Fluorescence microscopy, a key driver of progress in the life sciences, faces limitations due to the microscope’s optics, fluorophore chemistry, and photon exposure limits, necessitating trade-offs in imaging speed, resolution, and depth. Here, we introduce Micro𝕊plit, a computational multiplexing technique based on deep learning that allows multiple cellular structures to be imaged in a single fluorescent channel and then computationally unmixed, enabling faster imaging and reduced photon exposure. We show that Micro𝕊plit efficiently separates up to four superimposed noisy structures into distinct denoised fluorescent image channels. Furthermore, using Variational Splitting Encoder-Decoder (VSE) networks, our approach can sample diverse predictions from a trained posterior of solutions. The diversity of these samples scales with the uncertainty in a given input, allowing us to estimate the true prediction errors by computing the variability between posterior samples. We demonstrate the robustness of Micro𝕊plit networks, which are trained for each splitting task at hand, across various datasets and noise levels, and show the method's utility for imaging more structures, imaging faster, and improving downstream analyses. We provide Micro𝕊plit along with all associated training and evaluation datasets as open resources, enabling life scientists to immediately benefit from computational multiplexing and thus helping to accelerate their scientific discovery.
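To make the error-estimation idea concrete, below is a minimal sketch (not the authors' released implementation) of how a stack of posterior samples drawn from a trained VSE network could be reduced to a final prediction and a pixel-wise error map. The `posterior_error_estimate` helper, the array shapes, and the random stand-in data are illustrative assumptions, not Micro𝕊plit's actual API.

```python
import numpy as np

def posterior_error_estimate(samples: np.ndarray):
    """Reduce posterior samples to a prediction and an error map.

    samples: stack of shape (n_samples, n_channels, H, W), one unmixed
    prediction per posterior draw (shapes are assumptions, not the
    paper's interface).
    """
    prediction = samples.mean(axis=0)  # pixel-wise consensus prediction
    error_map = samples.std(axis=0)    # inter-sample variability, used as
                                       # a proxy for true prediction error
    return prediction, error_map

# Toy usage: random data standing in for real posterior samples.
samples = np.random.rand(50, 2, 128, 128)  # 50 draws, 2 target channels
prediction, error_map = posterior_error_estimate(samples)
print(prediction.shape, error_map.shape)   # (2, 128, 128) (2, 128, 128)
```

Regions where the error map is large flag pixels on which the posterior samples disagree, which, per the abstract, is where the true prediction error is expected to be largest.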
Article activity feed
(i) multi-fold brightness differences (intensity skew) of the structures to be unmixed
Hi all, really enjoyed this preprint!
I think MicroSplit is a creative and elegant approach to navigating the photon budget limitation. The limitation I'm highlighting here, however, stood out to me as particularly tricky to contend with. In my experience, it's fairly common to see multi-fold brightness variation of a given structure in a single image—let alone across a dataset—due to factors like:
- Differential uptake or expression of dyes or fluorescent proteins
- Variability in labeling efficiency or target accessibility
- Cell-to-cell heterogeneity in biological state (e.g., membrane permeability, metabolic activity)
These sources of variability seem like they could complicate MicroSplit's unmixing inferences. Do you have any thoughts on how to handle this kind of within-class brightness heterogeneity or on ways the method might be adapted to be more robust to it?
Thanks again for sharing such a neat piece of work!
2.1 Training Modes and Required Training Data
MicroSplit is an exciting technique that could drastically improve the feasibility of multiplexed imaging. I'm wondering if the amount of training data differs depending on the training mode. For example, does training mode III represent more of a ground truth because it's based on images of each channel separately and thus requires less data? Is that the preferred training mode when possible, or does the model perform equally well with all training modes?