Deep Neural Network Inference on an Integrated, Reconfigurable Photonic Tensor Processor

Abstract

Artificial neural networks set the pace in machine vision, natural language processing, and scientific discovery. Yet their success comes with a rising demand for fast and efficient tensor computations, the core operations underpinning neural networks. Analog photonic systems offer a promising route to performing tensor operations more efficiently than digital electronics: signals propagate at ultra-fast speeds for low-latency computing, and there are no capacitances to charge and discharge and no electrical crosstalk. Here we present an all-optical photonic tensor processor capable of deep neural network inference. Integrated in a standard 19-inch rack unit with a complete high-speed electronic interface to the PyTorch framework, the processor enables seamless deployment of neural networks on photonic hardware. It realizes an all-optical crossbar with nine input and three output channels that performs parallel intensity-based accumulation of weighted signals. The chip is fabricated in imec’s iSiPP50G silicon photonics platform, integrating electro-absorption modulators and photodiodes to ensure scalability and compatibility with high-volume manufacturing. An integrated, self-injection-locked microcomb provides a stable multi-wavelength light source supplying simultaneous optical carriers. We demonstrate inference on MNIST and CIFAR-10, achieving 98.1% and 72.0% classification accuracy, respectively. Together, these advances demonstrate a compact, reprogrammable photonic computing platform compatible with industrial silicon processes and mark a key step toward scalable, high-speed optical accelerators for artificial intelligence.
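To make the core operation concrete, the sketch below models the 9×3 intensity-based crossbar in PyTorch. This is a minimal illustration, not the authors' implementation: the function names, the clamping of intensities and transmissions, and the differential encoding of signed weights are all assumptions made for clarity, since an intensity-accumulating crossbar can only apply non-negative attenuations to non-negative optical powers.

```python
# Hypothetical model of the abstract's 9x3 intensity-based photonic crossbar.
# All names and the differential-weight scheme are illustrative assumptions.
import torch

N_IN, N_OUT = 9, 3  # crossbar size stated in the abstract


def photonic_matvec(intensities: torch.Tensor, transmissions: torch.Tensor) -> torch.Tensor:
    """One crossbar pass: each of the 9 input intensities is attenuated by a
    modulator transmission in [0, 1], and the weighted signals are accumulated
    in parallel on 3 photodiodes (a sum over inputs per output channel)."""
    intensities = intensities.clamp(min=0.0)       # optical power is non-negative
    transmissions = transmissions.clamp(0.0, 1.0)  # an absorptive modulator only attenuates
    return intensities @ transmissions             # (9,) @ (9, 3) -> (3,)


def signed_matvec(x: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """Signed weights cannot be realized by attenuation alone; one common
    (assumed) workaround is a differential encoding over two crossbar passes,
    subtracting the result of the negative-weight pass from the positive one."""
    scale = W.abs().max().clamp(min=1e-12)         # normalize weights into [0, 1]
    w_pos = W.clamp(min=0) / scale
    w_neg = (-W).clamp(min=0) / scale
    return scale * (photonic_matvec(x, w_pos) - photonic_matvec(x, w_neg))


x = torch.rand(N_IN)          # non-negative input intensities
W = torch.randn(N_IN, N_OUT)  # signed weights of one layer slice
print(signed_matvec(x, W))    # matches x @ W up to the model's idealizations
```

Under these assumptions, a larger weight matrix would be tiled into 9×3 blocks executed sequentially on the crossbar, with the per-block results accumulated electronically, which is one plausible way a PyTorch front end could map full network layers onto hardware of this size.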
