Sparse SGNN for Interpretable Learning of Stationary Solutions of Fokker-Planck-Kolmogorov Equations
Abstract
This paper introduces an interpretable machine learning framework that combines Separable Gaussian Neural Networks (SGNNs) with sparse regression to compute stationary solutions of Fokker-Planck-Kolmogorov (FPK) equations. The structure of the SGNN naturally satisfies the boundary conditions of FPK equations at infinity, which MLP-based physics-informed neural networks (PINNs) can only approximate. In addition, it allows the probability normalization condition to be transformed into a weight constraint that can be encoded directly into the loss function. This eliminates the need for computationally expensive sampling and thus significantly reduces computational overhead. Integrating sparse regression into the training process yields parsimonious SGNN models whose predictions are highly interpretable sums of Gaussian radial basis functions. We demonstrate the framework's effectiveness on several challenging examples, including the Duffing oscillator, the Van der Pol oscillator, and the Lorenz system, achieving approximately 60%-98% reductions in network size while not merely maintaining but improving prediction accuracy. A parametric study of the pruning threshold and the sparsity-promoting parameter reveals the optimal balance between model sparsity and accuracy. Finally, we show a limitation of the proposed approach for systems with multimodal annular distributions and propose a masking-based post-training scheme to address it. This work demonstrates that combining SGNNs with sparse optimization techniques can significantly advance our ability to solve FPK equations efficiently while maintaining physical interpretability.
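To make the abstract's two key mechanisms concrete, the sketch below illustrates how a sum of separable Gaussians decays to zero at infinity by construction and admits a closed-form integral, so the normalization condition becomes a constraint on the weights inside the loss, with an L1 penalty promoting sparsity. This is a minimal illustration under assumed conventions, not the authors' implementation; the names `SparseSGNN`, `total_mass`, `fpk_residual`, `lam_norm`, and `lam_sparse` are all hypothetical.

```python
import math
import torch
import torch.nn as nn

class SparseSGNN(nn.Module):
    """Illustrative separable Gaussian network (not the paper's code).

    p(x) = sum_i w_i * prod_d exp(-(x_d - mu_{i,d})^2 / (2 sigma_{i,d}^2)),
    which vanishes at infinity by construction and integrates over R^dim to
    sum_i w_i * prod_d sigma_{i,d} * sqrt(2*pi).
    """

    def __init__(self, dim: int, n_neurons: int):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(n_neurons, dim))        # centers
        self.log_sigma = nn.Parameter(torch.zeros(n_neurons, dim)) # log widths
        self.w = nn.Parameter(torch.full((n_neurons,), 1.0 / n_neurons))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) -> p(x): (batch,)
        sigma = self.log_sigma.exp()
        z = (x.unsqueeze(1) - self.mu) / sigma          # (batch, n, dim)
        basis = torch.exp(-0.5 * (z ** 2).sum(dim=-1))  # (batch, n)
        return (self.w * basis).sum(dim=-1)

    def total_mass(self) -> torch.Tensor:
        # Closed-form integral of p over R^dim: no Monte Carlo sampling.
        sigma = self.log_sigma.exp()
        per_neuron = (sigma * math.sqrt(2.0 * math.pi)).prod(dim=-1)
        return (self.w * per_neuron).sum()

def loss(model, x, fpk_residual, lam_norm=1.0, lam_sparse=1e-3):
    # fpk_residual is a user-supplied callable evaluating the stationary FPK
    # residual at collocation points x (hypothetical placeholder here).
    pde = fpk_residual(model, x).pow(2).mean()
    norm = (model.total_mass() - 1.0) ** 2   # normalization as weight constraint
    l1 = model.w.abs().sum()                 # sparsity-promoting penalty
    return pde + lam_norm * norm + lam_sparse * l1
```

After training, weights whose magnitude falls below a pruning threshold can be zeroed out, leaving the parsimonious Gaussian radial-basis expansion described above; the closed-form `total_mass` is what removes the need for sampling-based normalization.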