Physics-Informed Neural Initialization for Robust Multi-Fidelity Coupled Simulation of Advanced Propulsion Systems

Abstract

Traditional zero-dimensional (0D) engine performance analysis relies on dimensionless characteristic maps, which cannot capture complex nonlinear phenomena such as shocks and flow separation in transonic regimes. Integrating high-fidelity components with such complex features can invalidate previously valid constraints, turning the system from a single-solution into a multi-solution domain. Conventional solvers, such as the Newton-Raphson method and its variants, struggle in these cases, often converging to non-physical solutions and exhibiting pseudo-convergence. While machine learning methods can model these nonlinearities, their generalization is limited when training data is sparse, increasing computational cost and risking design errors. To address these unresolved issues, this study presents a novel physics-informed neural initialization algorithm that tightly integrates physics-based nonlinear solvers with data-driven surrogate models. The proposed method employs a multilayer feedforward (MLF) neural network to generate initial values for the iterative solution process, which are then refined and constrained by physical model consistency. Multi-fidelity simulations are performed with these calibrated values, and their results are used to iteratively refine the surrogate model. Validation on a twin-spool turbofan engine with a vectoring nozzle shows that the algorithm matches the predictive accuracy of traditional methods under normal conditions. At extreme points, such as large deflection angles and off-design operation, it effectively suppresses pseudo-convergence and reduces the number of 3D CFD model iterations by over 60%. Overall, the method enables accurate modeling of complex nozzle regulation, enhances solver robustness under high-fidelity coupling, and significantly reduces the dependence of machine learning on large-scale datasets.
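The abstract describes a closed loop: a feedforward network proposes initial values for the engine-matching iteration, a physics-based Newton-type solver refines them, and only physically consistent solutions are fed back to retrain the surrogate. The sketch below is a minimal illustration of that loop under stated assumptions, not the authors' implementation: the toy residual function, the consistency check, the damping factor, the sample counts, and the use of scikit-learn's MLPRegressor are all placeholders introduced for the example.

```python
# Minimal sketch of a physics-informed neural initialization loop.
# All models and thresholds here are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def engine_residuals(x, op_point):
    """Toy stand-in for the 0D engine-matching equations R(x; op) = 0.
    x: solver state (e.g., map parameters); op_point: operating condition
    (e.g., nozzle deflection angle, throttle setting)."""
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    return A @ x - np.asarray(op_point)          # placeholder balance equations

def newton_solve(x0, op_point, tol=1e-8, max_iter=50):
    """Damped Newton-Raphson iteration with a finite-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        r = engine_residuals(x, op_point)
        if np.linalg.norm(r) < tol:
            return x, k
        J = np.empty((r.size, x.size))
        eps = 1e-6
        for j in range(x.size):                   # finite-difference Jacobian columns
            xp = x.copy(); xp[j] += eps
            J[:, j] = (engine_residuals(xp, op_point) - r) / eps
        x = x - 0.5 * np.linalg.solve(J, r)       # damping factor 0.5 (assumed)
    return x, max_iter

def physically_consistent(x):
    """Placeholder consistency check (e.g., bounded, finite state values)."""
    return bool(np.all(np.isfinite(x)) and np.all(np.abs(x) < 1e3))

# Outer loop: surrogate-generated initial values, refined by the solver,
# with converged physical solutions fed back to retrain the surrogate.
rng = np.random.default_rng(0)
ops = rng.uniform(-1.0, 1.0, size=(20, 2))        # sampled operating points
X_train, Y_train = [], []
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)

for op in ops:
    if len(X_train) >= 5:                         # neural-initialized guess
        x0 = surrogate.predict(op.reshape(1, -1)).ravel()
    else:
        x0 = np.zeros(2)                          # cold start for the first samples
    x_star, iters = newton_solve(x0, op)
    if physically_consistent(x_star):             # physics-consistency filter
        X_train.append(op)
        Y_train.append(x_star)
        if len(X_train) >= 5:                     # iteratively refine the surrogate
            surrogate.fit(np.asarray(X_train), np.asarray(Y_train))
```

In the full method, the cheap residual function above would be replaced by the multi-fidelity engine model (including 3D CFD components), so reducing solver iterations through better initial values is where the reported savings would come from.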
