Unifying Multimodal Single-Cell Data Using a Mixture of Experts β -Variational Autoencoder-Based Framework
Abstract
Since 2014, 47 technologies have been developed to measure multiple biological modalities from the same cells. However, tools for robustly analyzing these data to uncover holistic biological interactions remain limited. Advancing this field could transform research across many disciplines, including human disease and cancer. To address this limitation, we present UniVI (Unified Variational Inference), a generalizable deep learning algorithm that aligns single-cell measurements from disparate modalities using β-variational autoencoder and mixture-of-experts frameworks. UniVI learns a latent embedding for each modality while minimizing the divergence between them, a concept often referred to as manifold alignment. Once trained, UniVI enables batch correction, latent factorization, cell-cell alignment, data denoising, and imputation. We demonstrate its performance on multimodal single-cell datasets, including CITE-seq and 10x Multiome data, showing that UniVI outperforms widely used methods without relying on prior knowledge. This flexibility and generalizability allow UniVI to adapt to emerging multimodal technologies. Our results highlight UniVI's ability to integrate diverse multimodal and unimodal data, offering a scalable solution for refining biological insights. The unified latent spaces it generates enable exploration of cross-modality correlations and the generation of realistic new data, paving the way for novel discoveries in single-cell biology.
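The objective described above combines a per-modality β-VAE term with a cross-modality divergence penalty that drives manifold alignment. A minimal NumPy sketch of such a loss is given below; the function names, the diagonal-Gaussian posterior assumption, and the `gamma` alignment weight are illustrative choices for exposition, not UniVI's actual implementation.

```python
import numpy as np

def kl_standard_normal(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def kl_diag_gaussians(mu1, logvar1, mu2, logvar2):
    # KL( N(mu1, diag(exp(logvar1))) || N(mu2, diag(exp(logvar2))) )
    # for diagonal Gaussians, summed over latent dimensions.
    return 0.5 * np.sum(
        logvar2 - logvar1
        + (np.exp(logvar1) + (mu1 - mu2) ** 2) / np.exp(logvar2)
        - 1.0
    )

def aligned_beta_vae_loss(recon_a, recon_b, mu_a, lv_a, mu_b, lv_b,
                          beta=1.0, gamma=1.0):
    # Hypothetical two-modality objective:
    #   reconstruction terms (recon_a, recon_b, precomputed scalars)
    # + beta * KL of each modality's posterior to the standard-normal prior
    # + gamma * symmetrized KL between the two posteriors (alignment penalty).
    prior = kl_standard_normal(mu_a, lv_a) + kl_standard_normal(mu_b, lv_b)
    align = 0.5 * (kl_diag_gaussians(mu_a, lv_a, mu_b, lv_b)
                   + kl_diag_gaussians(mu_b, lv_b, mu_a, lv_a))
    return recon_a + recon_b + beta * prior + gamma * align
```

When the two encoders produce identical posteriors, the alignment term vanishes and the objective reduces to two independent β-VAE losses; raising `gamma` trades per-modality reconstruction fidelity for a more tightly shared latent space.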