The Finite Element Neural Network Method: Leveraging Non-Vanishing Shape Functions in Space-Time-Parameter Framework
Abstract
Neural networks (NNs) have received growing interest in engineering due to their ability to assimilate high-dimensional data and provide accurate approximations for complex systems. Nonetheless, classical numerical methods remain the benchmark for reliability and accuracy, backed by rigorous development over decades. Merging the strengths of the finite element formulation with physics-informed neural networks (PINNs), the finite element neural network method (FENNM) opens new avenues for approximating partial differential equations (PDEs). FENNM is based on the Petrov-Galerkin framework, where the NN provides the global nonlinear space of trial solutions, whereas the test functions are the non-vanishing Lagrange shape functions. In contrast to VPINN, hp-VPINN, cv-PINN, and FastVPINN, FENNM's weak form explicitly includes flux terms at element interfaces and naturally incorporates Neumann boundary conditions into the residual loss function, improving training stability and adaptability to real-world applications. We extend FENNM to two-dimensional domains, with the second dimension representing space, time, or a parameter. The method naturally integrates time and parameter spaces for design optimization, offering advantages over the deep energy method (DEM) and discrete finite element method (FEM)-inspired NNs. We further showcase FENNM's capabilities in local mesh refinement, vector-valued PDEs, inverse problems, and complex geometries with irregular elements.
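To make the weak-form construction concrete, a minimal sketch for a model Poisson problem $-\nabla\cdot(k\,\nabla u)=f$ follows; the model problem, the symbols $k$, $f$, $g$, and the squared-residual loss are illustrative assumptions, not notation taken from the paper itself. With the NN trial function $u_\theta$ and non-vanishing Lagrange test functions $\phi_i$, integration by parts over each element $\Omega_e$ gives the element-level residual and a total loss of the form

\[
R_{e,i}(\theta) = \int_{\Omega_e} k\,\nabla u_\theta \cdot \nabla \phi_i \,\mathrm{d}\Omega
\;-\; \oint_{\partial\Omega_e} k\,\big(\nabla u_\theta \cdot \mathbf{n}\big)\,\phi_i \,\mathrm{d}\Gamma
\;-\; \int_{\Omega_e} f\,\phi_i \,\mathrm{d}\Omega,
\qquad
\mathcal{L}(\theta) = \sum_{e} \sum_{i} R_{e,i}(\theta)^2 .
\]

Because the test functions do not vanish on $\partial\Omega_e$, the boundary integral survives, which is what exposes the interface flux terms; on a Neumann boundary the prescribed flux $g$ replaces $k\,\nabla u_\theta\cdot\mathbf{n}$, so the boundary condition enters the residual loss directly rather than through a separate penalty term.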