First Demonstration of Ferroelectric Digital In-Memory Computing for Scalable, Reliable and Ultra-Efficient Similarity Computation
Abstract
Classification-based learning has become a cornerstone of deep neural networks, particularly in few-shot learning, where accurate similarity metrics, such as Hamming distance, are critical. However, conventional architectures require retrieving class vectors from a physically separate memory for Hamming distance calculations, incurring significant energy penalties due to data movement. This inefficiency limits scalability and overall system performance. In-memory computing, which eliminates data transfers between processing and memory units, is increasingly recognized as a promising solution to this von Neumann bottleneck. Analog content-addressable memory (CAM)-based systems address the issue by embedding class vectors directly within CAM cells, but their reliance on sensing circuits, particularly analog-to-digital converters (ADCs), introduces scalability and reliability challenges. The limited sense margin of ADCs, combined with device variability, further constrains array size and performance, and these issues are exacerbated with emerging non-volatile memory devices such as ferroelectric field-effect transistors (FeFETs). In this work, we present an innovative FeFET-based digital Logic-in-Memory (LiM) XOR cell, fabricated in GlobalFoundries' 28 nm SLPe technology, that eliminates the need for ADCs. Our 2T FeFET-based XOR cell offers a fully digital, compact, and energy-efficient solution that is robust to device variability and scalable to large systems. Applied to Hamming distance calculations on 4096-bit class vectors, our design achieves a 23-fold reduction in energy consumption, a 3-fold decrease in latency, and a 14-fold reduction in silicon footprint compared to state-of-the-art solutions. Crucially, our FeFET-based architecture demonstrates an unprecedented efficiency of 2337 Gsamples/(s·W·mm²), a 300-fold improvement over conventional designs, offering a unique competitive advantage where trade-offs among energy efficiency, reliability, and performance have long been a concern. These efficiency gains, achieved while retaining the maturity of digital computing, align with the industry's demand for energy-efficient, scalable, and reliable in-memory computing. Furthermore, digital LiM supports the broader goal of energy-efficient AI hardware without sacrificing reliability, making it highly appealing to researchers and industries focused on sustainable computing.
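For readers unfamiliar with the similarity metric the abstract refers to, the sketch below is a minimal software reference of XOR-based Hamming-distance classification, the operation the proposed LiM array accelerates in hardware. It is not the authors' implementation; the vector width, labels, and random test data are illustrative assumptions only.

```python
# Minimal software reference for XOR-based Hamming-distance classification.
# The FeFET LiM array computes the same metric in place; this sketch only
# illustrates the arithmetic (popcount of a bitwise XOR) on 4096-bit vectors.
import random

VECTOR_BITS = 4096  # class-vector width cited in the abstract


def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions where a and b differ: popcount of a XOR b."""
    return (a ^ b).bit_count()  # int.bit_count() requires Python >= 3.10


def classify(query: int, class_vectors: dict) -> str:
    """Return the label whose stored class vector is closest to the query."""
    return min(class_vectors, key=lambda lbl: hamming_distance(query, class_vectors[lbl]))


if __name__ == "__main__":
    rng = random.Random(0)
    # Hypothetical stored class vectors (random for demonstration).
    classes = {f"class_{i}": rng.getrandbits(VECTOR_BITS) for i in range(4)}
    # A query formed by flipping a few low-order bits of one class vector,
    # so it should remain closest to that class.
    query = classes["class_2"] ^ rng.getrandbits(64)
    print(classify(query, classes))  # expected: class_2
```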