Bayesian Cooperative Learning for Multimodal Integration
Abstract
Bayesian multimodal models that integrate data from multiple sources, studies, or modalities have garnered considerable attention in recent years. However, existing methods either rely on computationally expensive Markov chain Monte Carlo (MCMC) schemes or adopt variational approaches that forgo principled uncertainty quantification. To address these limitations, we abandon the MCMC framework and turn to resampling-based posterior inference for multimodal integration. Our method, Bayesian Cooperative Learning (BayesCOOP), embeds fast maximum a posteriori (MAP) estimation within a Bayesian bootstrap, combining a novel jittered group spike-and-slab prior with an efficient expectation–maximization (EM) coordinate descent algorithm under randomly weighted data perturbations. Averaging posterior summaries (MAP estimates) across bootstrap replicates yields approximate posterior samples that retain Bayesian interpretability while avoiding the computational burden of traditional sampling-based inference. We establish theoretical connections between BayesCOOP’s pseudo-posterior averaging and posterior contraction principles, demonstrating near-optimal posterior consistency under sparsity. Extensive simulation studies and analyses of pregnancy multi-omics datasets demonstrate that BayesCOOP substantially outperforms state-of-the-art early, intermediate, and late fusion approaches in estimation, prediction, and uncertainty assessment. The open-source implementation of BayesCOOP is available at https://github.com/himelmallick/BayesCOOP.
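For intuition, the following is a minimal Python sketch of the Bayesian-bootstrap MAP-averaging scheme described above. It substitutes a simple Gaussian-prior (ridge) MAP solve for BayesCOOP's jittered group spike-and-slab prior and EM coordinate descent; the function names, penalty, and toy data are illustrative assumptions, not the package's actual interface.

```python
# Conceptual sketch: Bayesian bootstrap around a weighted MAP step.
# The ridge-penalized solve below is a stand-in for BayesCOOP's actual
# MAP step (jittered group spike-and-slab prior + EM coordinate descent).
import numpy as np

rng = np.random.default_rng(0)

def weighted_map_estimate(X, y, w, lam=1.0):
    """MAP estimate under observation weights w (Gaussian prior stand-in).

    Solves (X' W X + lam * I) beta = X' W y.
    """
    WX = X * w[:, None]
    A = X.T @ WX + lam * np.eye(X.shape[1])
    b = X.T @ (w * y)
    return np.linalg.solve(A, b)

def bayes_bootstrap_draws(X, y, n_boot=200, lam=1.0):
    """Approximate posterior draws: one weighted MAP fit per replicate."""
    n = X.shape[0]
    draws = []
    for _ in range(n_boot):
        # Dirichlet(1,...,1) weights, scaled to mean one: the Bayesian
        # bootstrap's randomly weighted data perturbation.
        w = rng.dirichlet(np.ones(n)) * n
        draws.append(weighted_map_estimate(X, y, w, lam))
    return np.array(draws)  # rows act as approximate posterior samples

# Toy usage: posterior summaries from the bootstrap draws.
X = rng.normal(size=(100, 5))
beta_true = np.array([2.0, 0.0, -1.0, 0.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.5, size=100)
draws = bayes_bootstrap_draws(X, y)
print("posterior mean:", draws.mean(axis=0))
print("95% interval for beta_0:", np.percentile(draws[:, 0], [2.5, 97.5]))
```

Each weighted MAP solve is independent of the others, so the replicates can be computed in parallel, consistent with the computational advantage over sampling-based MCMC inference claimed above.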