An autonomous adaptation vision chip with in-sensor neural network based on multifunctional phototransistor

Abstract

Natural light intensity spans more than ten orders of magnitude, from faint starlight to bright midday sunlight. Widely used image sensors, however, exhibit a fixed photoresponse, leading to poor image quality under varying illumination. Here, we report a scalable vision chip with an in-sensor neural network based on a 22-nm fully depleted silicon-on-insulator (FDSOI) multi-terminal multifunctional phototransistor, enabling environment-driven, autonomously adaptive imaging. The multi-terminal structure of the FDSOI phototransistor allows electrical manipulation of the photocarrier distribution, yielding an electrically controllable photoresponse. Leveraging the intrinsic memory properties of the FDSOI phototransistor, we further implement a compute-in-memory feedback neural network within the sensor that infers ambient illumination directly from captured images and continuously reconfigures the pixel response in real time. Experimental results demonstrate that the fabricated chip enhances scene contrast and improves traffic-sign recognition accuracy by ~20% under varying illumination compared with multi-sensor, fixed-photoresponse methods. Fabricated with scalable silicon technology, the chip offers a general strategy for autonomously adaptive vision systems that deliver robust performance in dynamic real-world environments.
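
The closed loop the abstract describes — capture a frame, infer the ambient illumination in-sensor, and retune the pixel response — can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the paper's on-chip implementation: the saturating pixel_response model, the single-linear-unit infer_level stand-in for the in-sensor network, the multiplicative gain update, and all weights and constants are assumptions made for clarity.

```python
import numpy as np

def pixel_response(photons, gain):
    # Saturating photoresponse; `gain` stands in for the electrically
    # reconfigured responsivity of the multi-terminal phototransistor.
    return 1.0 - np.exp(-gain * photons)

def infer_level(frame, w, b):
    # Minimal stand-in for the in-sensor network: one linear unit mapping
    # simple frame statistics to an exposure-level estimate.
    features = np.array([frame.mean(), frame.std()])
    return float(w @ features + b)

rng = np.random.default_rng(0)
scene = rng.exponential(scale=200.0, size=(32, 32))  # wide-dynamic-range photon flux

# Toy weights chosen so the estimate equals the frame's mean brightness;
# per the abstract, the real network's weights are held in the
# phototransistor's intrinsic memory (compute-in-memory).
w, b = np.array([1.0, 0.0]), 0.0
gain, target = 1e-3, 0.5

for _ in range(20):
    frame = pixel_response(scene, gain)     # capture with the current response
    level = infer_level(frame, w, b)        # infer the ambient operating point
    gain *= np.exp(0.8 * (target - level))  # feedback: retune the pixel response

print(f"gain={gain:.3e}, mean pixel output={frame.mean():.3f}")
```

Run repeatedly, the multiplicative update settles the mean pixel output near the target operating point regardless of the scene's absolute brightness; in the actual chip this feedback runs continuously within the sensor rather than in software.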
