ELM-FBPINNs: An Efficient Multilevel Random Feature Method
Abstract
Domain-decomposed variants of physics-informed neural networks (PINNs), such as finite basis PINNs (FBPINNs), mitigate some of the known shortcomings of PINNs, including slow convergence and spectral bias, through localisation, but they still rely on iterative nonlinear optimisation within each subdomain. In this work, we propose a hybrid approach that combines multilevel domain decomposition and partition-of-unity constructions with random feature models, yielding a method we refer to as the multilevel ELM-FBPINN. By replacing trainable subdomain networks with extreme learning machines (ELMs), the resulting formulation eliminates backpropagation entirely and reduces training to a structured linear least-squares problem. We provide a systematic numerical study comparing ELM-FBPINNs and their multilevel variant with standard PINNs and FBPINNs on representative benchmark problems, demonstrating that they achieve competitive accuracy while converging significantly faster and exhibiting greater robustness with respect to architectural and optimisation parameters. Through ablation studies, we further clarify the distinct roles of domain decomposition and random feature enrichment in controlling expressivity, conditioning, and scalability.
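To make the ELM ingredient concrete, the following is a minimal illustrative sketch, not code from the paper: it solves a 1D Poisson problem -u''(x) = f(x) on [0, 1] with homogeneous Dirichlet boundary conditions using fixed random tanh features, so that only the linear output coefficients are fit, by a single least-squares solve with no backpropagation. All names, the feature scale, and the collocation setup are illustrative assumptions; the partition-of-unity domain decomposition and multilevel structure of the actual method are omitted here.

import numpy as np

# Illustrative ELM collocation sketch (assumed setup, not the paper's code):
# -u''(x) = f(x) on [0, 1], u(0) = u(1) = 0, manufactured solution u = sin(pi x).

rng = np.random.default_rng(0)
n_features, n_colloc = 64, 200

a = rng.normal(scale=10.0, size=n_features)   # fixed random feature weights (not trained)
b = rng.uniform(-1.0, 1.0, size=n_features)   # fixed random feature biases (not trained)

x = np.linspace(0.0, 1.0, n_colloc)[:, None]  # collocation points

# Features phi_j(x) = tanh(a_j x + b_j); second derivative in closed form:
# phi_j''(x) = -2 a_j^2 tanh(z) (1 - tanh(z)^2), with z = a_j x + b_j.
z = a * x + b
t = np.tanh(z)
phi = t
phi_xx = -2.0 * a**2 * t * (1.0 - t**2)

f = (np.pi**2) * np.sin(np.pi * x)            # source term for u = sin(pi x)

# Stack PDE residual rows (-phi'' w = f) and boundary rows (phi(0) w = phi(1) w = 0),
# then solve for the output weights w in one structured linear least-squares problem.
A = np.vstack([-phi_xx,
               np.tanh(a * 0.0 + b)[None, :],
               np.tanh(a * 1.0 + b)[None, :]])
rhs = np.concatenate([f.ravel(), [0.0, 0.0]])

w, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # single linear solve, no backpropagation

u_pred = phi @ w
print("max error:", np.abs(u_pred - np.sin(np.pi * x).ravel()).max())

In the full method, each subdomain carries such a random feature model, a partition of unity blends the local solutions, and the multilevel hierarchy couples coarse and fine subdomains, but the training step remains a linear least-squares solve of this kind.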