LazyNet: Interpretable ODE Modeling of Sparse CRISPR Single-Cell Screens Reveals New Biological Insights

Abstract

Single-cell CRISPR activation/interference screens offer a direct route to causal gene-regulatory maps, yet existing deep-learning pipelines are GPU-intensive and yield hard-to-interpret latent factors. We introduce LazyNet, an explicitly Euler-integrated neural ODE whose paired log-linear-exp layer collapses multiplicative transcript interactions into a compact, mechanistically interpretable weight matrix. Training a three-replica ensemble on a 55k-cell, 30k-gene Perturb-seq dataset completes on a single CPU in under 1 h, running 3–4× faster than transformer (scGPT) and state-space (RetNet) baselines while lowering global RMSE by ≈25% and raising genome-wide Pearson r to 0.67. Averaged Jacobians, expanded in a 32 × 4 breadth-first search around seven ferroptosis seeds, recapitulated 15 of 27 benchmark regulators (56% recall) within a 4,611-gene, 11,676-edge subgraph; 26.6% of edges show ARCHS4 co-expression r ≥ 0.2 versus 5% expected at random, and 523 overlap STRING interactions (hypergeometric p = 1.2e-5). Elasticity ranks uncover a previously unrecognized lysosomal–mitochondrial–immune module linking PSAP–mTOR, MFN2–TLR4, and ADCY10–SIRT3, generating experimentally testable hypotheses. By combining state-of-the-art predictive accuracy, laptop-level resource demands, and one-to-one parameter interpretability, LazyNet democratizes causal network discovery from sparse two-snapshot screens, enabling small laboratories to move from large-scale perturbation data to mechanistic insight without GPUs or external pathway priors.
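
To make the architecture described above concrete, the sketch below shows one way a log-linear-exp Euler step and its elasticity Jacobian could be implemented. This is a minimal illustration under assumptions, not the authors' code: the function names (lazynet_step, elasticity_jacobian), the step size, the pseudocount eps, and the exact form of the dynamics (a pure production term with no decay) are all hypothetical; only the log-linear-exp structure and its per-parameter elasticity reading come from the abstract.

import numpy as np

def lazynet_step(x, W, b, dt=0.1, eps=1e-6):
    """One explicit-Euler step of a log-linear-exp ODE layer (hypothetical form).

    dx/dt = exp(W @ log(x) + b), so each rate is a product of powers of
    transcript levels: rate_i = e^{b_i} * prod_j x_j^{W_ij}. The exponent
    matrix W is directly interpretable: W[i, j] is the elasticity
    d log(rate_i) / d log(x_j), i.e., the multiplicative influence of
    transcript j on the production rate of transcript i.
    """
    rate = np.exp(W @ np.log(x + eps) + b)
    return x + dt * rate

def elasticity_jacobian(x, W, b, eps=1e-6):
    """Jacobian of the rate w.r.t. x: d rate_i / d x_j = rate_i * W_ij / x_j."""
    rate = np.exp(W @ np.log(x + eps) + b)
    return rate[:, None] * W / (x + eps)[None, :]

# Toy usage: 5 genes, small random interaction exponents.
rng = np.random.default_rng(0)
n = 5
W = 0.1 * rng.standard_normal((n, n))  # interpretable exponent matrix
b = -1.0 * np.ones(n)                  # baseline log-rates
x = rng.uniform(0.5, 2.0, n)           # initial expression levels
for _ in range(10):
    x = lazynet_step(x, W, b)
print(x)
print(elasticity_jacobian(x, W, b))

Reading interactions off W directly, rather than from a learned latent space, is what the abstract's "one-to-one parameter interpretability" claim amounts to: averaging such Jacobians over cells and ranking entries would yield the edge scores fed into the breadth-first expansion around the ferroptosis seeds.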
