GREENKAN: Structured Spatiotemporal Green-Function Neural Operators with Interpretable Kernel Decomposition

Abstract

Partial differential equations (PDEs) are central to modeling physical systems, and neural operators provide a data-driven framework for learning mappings between function spaces. Most existing neural operators rely on implicit spectral or feature-space representations, which offer limited interpretability of the learned solution structure. We introduce GREENKAN, a structured neural operator for one-dimensional time-dependent linear PDEs that parameterizes solutions through a separable spatiotemporal kernel expansion inspired by Green-function representations. Building on the functional parameterization philosophy of Kolmogorov–Arnold Networks (KAN) [9], the model learns explicit families of spatial and temporal kernels with controllable scale, frequency, and decay characteristics. A hypernetwork [14] generates synthesis coefficients conditioned on the input problem, while a gated symmetric amplification mechanism promotes stable training and mitigates mode collapse. By explicitly modeling kernel structure, GREENKAN enables direct inspection of learned basis functions and their temporal dynamics, yielding physically interpretable internal representations. This structured formulation offers a principled and transparent alternative to fully implicit neural operator architectures, and it lays a foundation for interpretable operator learning with a natural path toward extensions to nonlinear and higher-dimensional PDEs.
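To make the separable spatiotemporal kernel expansion concrete, the sketch below illustrates the general idea described in the abstract: a solution is synthesized as a coefficient-weighted sum of products of spatial kernels (with controllable scale and frequency) and temporal kernels (with controllable decay). This is an assumed illustration, not the authors' implementation; all function names, kernel forms, and parameter values are hypothetical, and in GREENKAN the synthesis coefficients would be produced by a hypernetwork conditioned on the input problem rather than fixed by hand.

```python
import numpy as np

def spatial_kernel(x, center, scale, freq):
    """Gaussian-windowed sinusoid: a spatial kernel with controllable
    scale and frequency (one plausible choice; the paper's exact
    kernel family may differ)."""
    return np.exp(-((x - center) / scale) ** 2) * np.cos(freq * x)

def temporal_kernel(t, decay):
    """Exponential decay in time, giving controllable decay rate."""
    return np.exp(-decay * t)

def synthesize(x, t, kernel_params, coeffs):
    """Separable expansion u(x, t) = sum_k c_k * psi_k(t) * phi_k(x).
    In GREENKAN the coefficients c_k would come from a hypernetwork;
    here they are fixed for illustration."""
    u = np.zeros((len(t), len(x)))
    for (center, scale, freq, decay), c in zip(kernel_params, coeffs):
        u += c * np.outer(temporal_kernel(t, decay),
                          spatial_kernel(x, center, scale, freq))
    return u

# Hypothetical example: two kernels on a small space-time grid.
x = np.linspace(0.0, 1.0, 64)
t = np.linspace(0.0, 1.0, 32)
kernel_params = [(0.3, 0.20, 6.0, 1.5),   # (center, scale, freq, decay)
                 (0.7, 0.15, 12.0, 3.0)]
coeffs = [1.0, 0.5]
u = synthesize(x, t, kernel_params, coeffs)
print(u.shape)  # (32, 64): time x space
```

Because each kernel is an explicit parametric function, the learned basis can be plotted and inspected directly, which is the interpretability benefit the abstract emphasizes over fully implicit spectral or feature-space operators.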
