A Physics-aware Bayesian Vision Transformer for Seismic AVO Inversion: Towards an Embodied Structural Intelligence Framework with Structure-aware Uncertainty Modeling
Abstract
Traditional seismic inversion frameworks struggle to preserve spatial structure and to quantify model reliability. We present a next-generation pathway that progresses from a convolutional Physics-Informed Neural Network (PINN) to a Bayesian PINN (BPINN) with uncertainty modeling, and culminates in a Bayesian Physics-Informed Vision Transformer (BPI-ViT) that enables structure-level uncertainty quantification. In our formulation, PINN “training data” are equation-domain samples used to minimize physical residuals—supporting physics-driven, data-agnostic generalization—while BPI-ViT integrates multi-layer self-attention and Bayesian inference to move from pixel-level optimization to structure-aware collaboration. Consistent evaluation on the Marmousi2 benchmark and validation on field-scale CO₂ EOR monitoring data show that BPI-ViT outperforms prior methods in target-horizon recovery, fault and anomaly detection, spatial continuity, and uncertainty quantification, while maintaining physical consistency. These results establish a structurally intelligent paradigm that moves seismic inversion beyond error minimization toward structure-aware, reliable, and cognitively informed modeling, and provide a foundation for future multi-physics and complex-geology applications.
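To make the abstract's notion of PINN "training data" concrete, the sketch below (not the paper's implementation) evaluates a physics-residual loss for a toy ODE, u'(x) = -u(x) with u(0) = 1. The collocation points `xs` play the role of equation-domain samples: no labeled measurements are involved, and the loss penalizes only violation of the governing equation and its boundary condition. The function names and the toy equation are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of a PINN-style physics-residual loss.
# Toy governing equation (an assumption for illustration):
#   u'(x) = -u(x),  u(0) = 1  →  exact solution u(x) = exp(-x).
import numpy as np

def physics_residual_loss(u, du_dx, xs):
    """Mean squared PDE residual over equation-domain samples,
    plus a boundary-condition penalty. `u` and `du_dx` are a
    candidate model and its derivative (both vectorized callables)."""
    residual = du_dx(xs) + u(xs)            # r(x) = u'(x) + u(x), ideally 0
    boundary = u(np.array([0.0])) - 1.0     # enforce u(0) = 1
    return float(np.mean(residual**2) + np.mean(boundary**2))

# "Training data" are just samples of the equation domain [0, 2].
xs = np.linspace(0.0, 2.0, 64)

# Exact solution: zero residual everywhere, zero boundary error.
loss_exact = physics_residual_loss(
    lambda x: np.exp(-x), lambda x: -np.exp(-x), xs)

# A deliberately wrong candidate (linear guess) incurs a large residual.
loss_wrong = physics_residual_loss(
    lambda x: 1.0 - x, lambda x: -np.ones_like(x), xs)

print(loss_exact, loss_wrong)
```

In a real PINN the candidate model is a neural network and the loss is minimized by backpropagation through the residual; here the point is only that the objective is defined by the physics, so fidelity to the equation, not to labeled data, drives generalization.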