Neural-physics adaptive reconstruction reveals 3D subcellular nanostructures over a large depth of field

Abstract

Achieving large depth-of-field super-resolution imaging deep inside samples is often hindered by unknown aberrations and overlapping molecular signals in 3D single-molecule localization microscopy. Here, we present LUNAR, a blind localization method that simultaneously resolves overlapping molecular signals and corrects for aberrations using a neural-physics model. Through self-supervised learning on single-molecule data, without requiring prior knowledge or accurate calibration, LUNAR jointly optimizes a physical model and a neural network to estimate the key physical parameters of each molecule (e.g., 3D position, photon count, aberrations). Its hybrid Transformer network handles PSFs of varying sizes and approaches the theoretical maximum localization precision for consecutive blinking events. Extensive simulations and experiments demonstrate that LUNAR consistently outperforms current state-of-the-art methods, reducing localization error by up to sixfold in the presence of unknown aberrations and molecular overlap, and enabling high-fidelity whole-cell reconstruction of mitochondria, the nucleus, and the neuronal cytoskeleton at large depths.
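The core idea of fitting molecules by re-rendering them through a physical model can be illustrated with a toy example. The sketch below is not LUNAR's implementation: it replaces the vectorial PSF model with a simple astigmatic Gaussian (width changes asymmetrically with defocus, which is what makes the axial position recoverable) and replaces the neural network with a brute-force grid search over candidate parameters. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

def render_psf(x0, y0, z0, photons, size=21, sigma0=1.2, dz=0.5, gamma=0.3):
    """Toy physics model: astigmatic 2D Gaussian PSF.

    The x- and y-widths broaden differently with defocus z0, so the
    rendered spot encodes the axial position (a crude stand-in for the
    vectorial PSF models used in 3D SMLM).
    """
    sx = sigma0 * np.sqrt(1.0 + ((z0 - gamma) / dz) ** 2)  # x-width vs. depth
    sy = sigma0 * np.sqrt(1.0 + ((z0 + gamma) / dz) ** 2)  # y-width vs. depth
    yy, xx = np.mgrid[0:size, 0:size]
    g = np.exp(-((xx - x0) ** 2) / (2 * sx ** 2)
               - ((yy - y0) ** 2) / (2 * sy ** 2))
    return photons * g / g.sum()  # normalize so the image sums to `photons`

def fit_molecule(img, size=21):
    """Self-supervised fit: find parameters whose re-rendered image best
    matches the data, with no ground-truth labels. (LUNAR would use a
    neural network and gradient-based optimization instead of this grid.)"""
    best, best_err = None, np.inf
    for x0 in np.linspace(8, 12, 9):
        for y0 in np.linspace(8, 12, 9):
            for z0 in np.linspace(-1, 1, 9):
                model = render_psf(x0, y0, z0, img.sum(), size)
                err = np.sum((img - model) ** 2)  # reconstruction loss
                if err < best_err:
                    best, best_err = (x0, y0, z0), err
    return best

# Simulate one molecule and recover its 3D position from the image alone.
truth = (10.0, 9.5, 0.5)
img = render_psf(*truth, photons=1000.0)
est = fit_molecule(img)
```

In this noise-free toy, the grid search recovers the true position exactly because the minimum of the reconstruction loss coincides with the simulated parameters; with photon noise and unknown aberrations, the same render-and-compare principle applies but the optimization becomes the hard part.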