Dynamic View-Adaptive Robust Representation Learning with Uncertainty-Aware Fusion


Abstract

Multi-view learning systems face significant reliability challenges in real-world applications due to sensor corruption, noise, and intermittent missing views. Current fusion strategies lack dynamic adaptation capabilities, compromising performance in safety-critical domains such as autonomous driving and medical diagnosis. We propose DAVE (Dynamic Adaptive View learning with Epistemic uncertainty), a robust multi-view framework that integrates uncertainty-aware stochastic encoders with a novel Uncertainty-Guided Adaptive Fusion (UGAF) module. Our approach dynamically weights view contributions based on real-time reliability estimates and incorporates robust training through stochastic view dropout and adversarial augmentation. Extensive evaluations across four diverse benchmarks demonstrate that DAVE achieves an average accuracy improvement of 8.6% under degraded conditions and reduces uncertainty miscalibration by 32% compared to state-of-the-art methods. The framework maintains robust performance even with 50% missing views and 40% sensor noise. By integrating principled uncertainty quantification with dynamic fusion, DAVE establishes a new paradigm for trustworthy multi-sensor systems, enabling reliable deployment in safety-critical applications where conventional multi-view approaches fail under real-world uncertainties.
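To illustrate the core idea of uncertainty-guided fusion, the following is a minimal sketch of weighting views by their predicted reliability. It assumes each view's stochastic encoder emits a feature vector plus a per-dimension log-variance, and converts low variance into high fusion weight via a softmax; the function name and this simple weighting scheme are illustrative, not the paper's exact UGAF implementation.

```python
import numpy as np

def uncertainty_guided_fusion(view_features, view_log_vars):
    """Fuse per-view features, down-weighting views whose encoder
    predicts high variance (illustrative sketch, not DAVE's UGAF)."""
    # Reliability score per view: lower mean log-variance -> higher score.
    scores = np.array([-lv.mean() for lv in view_log_vars])
    # Softmax over scores yields normalized fusion weights.
    exp = np.exp(scores - scores.max())
    weights = exp / exp.sum()
    # Fused representation is the weighted sum of view features.
    fused = sum(w * f for w, f in zip(weights, view_features))
    return fused, weights

# Example: three views of a 4-d feature; view 2 is corrupted (high variance).
feats = [np.ones(4), np.ones(4) * 2.0, np.ones(4) * 10.0]
log_vars = [np.full(4, -2.0), np.full(4, -1.5), np.full(4, 3.0)]
fused, weights = uncertainty_guided_fusion(feats, log_vars)
```

In this sketch the corrupted third view receives a near-zero weight, so the fused representation stays close to the two reliable views; DAVE additionally trains with stochastic view dropout and adversarial augmentation so these reliability estimates remain calibrated under degradation.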
