Ultra-compact and efficient on-chip diffraction neural network based on dual optimization of physical constraints

Abstract

On-chip diffractive optical neural networks offer advantages for optical information processing but face a fundamental challenge: theoretical scalar diffraction models fail to accurately predict vector electromagnetic wave propagation in real devices. Existing solutions compromise either integration density or computational efficiency. Here we show a dual-optimization approach that combines Gaussian-smoothing diffractive neural networks with angle correction to bridge this modeling gap. Our method requires no additional training data, adds minimal computational overhead, and shows excellent generalizability. It reduces modeling errors, raising fidelity from 34.91% to 98.10%, with mode purities of 93.39% and 90.37% in mode-conversion tasks. Importantly, it maintains excellent performance even in ultra-compact architectures, achieving 97.77% fidelity at a layer spacing of only 20 µm, compared with the approximately 300 µm required previously. This establishes a scalable framework for high-performance on-chip diffractive neural networks with complete physical interpretability for silicon photonics applications.
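
As a rough illustration of the Gaussian-smoothing idea (a hypothetical sketch, not the authors' implementation), the snippet below low-pass filters the phase profile of a 1-D on-chip diffractive layer with a Gaussian kernel. Smoothing is one way to keep a scalar-model-optimized design within spatial frequencies that vector propagation in a real slab waveguide can reproduce; the pixel count and kernel width are assumed values, and the angle-correction step of the paper's dual optimization is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical stand-in for one trained 1-D diffractive layer:
# a phase value per metasurface pixel along the transverse direction.
rng = np.random.default_rng(0)
n_pixels = 200                                   # assumed pixel count per layer
raw_phase = rng.uniform(0.0, 2.0 * np.pi, n_pixels)

# Smooth the complex transmission (not the wrapped phase directly) so that
# 0/2*pi wrap-around does not create artificial jumps.
t_raw = np.exp(1j * raw_phase)
sigma_px = 2.0                                   # assumed Gaussian width, in pixels
t_smooth = (gaussian_filter1d(t_raw.real, sigma=sigma_px, mode="nearest")
            + 1j * gaussian_filter1d(t_raw.imag, sigma=sigma_px, mode="nearest"))
smooth_phase = np.angle(t_smooth)

# In a dual-optimization loop, this smoothed profile would be fed back into
# training so the gradients see the same physically motivated low-pass constraint.
transmission = np.exp(1j * smooth_phase)
print(transmission.shape)
```

In practice one would apply such a constraint inside the training loop rather than as a one-off post-processing step, so the optimizer only explores designs the physical model can faithfully represent.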
