ML-Driven Memory Management Unit (MMU) in FPGA Architectures

Abstract

FPGAs are increasingly popular for general-purpose computing and AI/ML acceleration, yet memory management on these reconfigurable devices remains a challenge compared to fixed-logic architectures. We present a new machine-learning-driven Memory Management Unit (MMU) architecture for FPGAs that employs intelligent algorithms (e.g., reinforcement learning and LSTM neural networks) to optimize memory access behavior. We describe the ML-augmented MMU's architectural design and algorithmic framework, including virtual memory support, adaptive caching and prefetching, and dynamic allocation, and we demonstrate gains in latency, throughput, energy efficiency, and memory bandwidth. We also show how the design strengthens security mechanisms, mitigating cache timing side-channel and speculative execution vulnerabilities, for cryptographic and ML workloads. The design is adaptable across applications (AI inference, high-performance computing, general workloads) and FPGA platforms. Finally, we discuss the novelty in a patent context, with broad claims on machine learning applied to hardware memory management and security integration. This work is derived from Provisional Patent Application #63/775,213, entitled "ML-Driven Memory Management Unit (MMU) in FPGA Architectures," filed on Mar 20, 2025, by Raj Sandip Parikh with the United States Patent and Trademark Office (USPTO).
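To make the reinforcement-learning angle concrete, the following is a minimal, hypothetical sketch (not the patented design) of how an RL agent could drive adaptive prefetching: a tabular Q-learning agent observes the most recent access stride and chooses between two candidate prefetch strategies, receiving a reward when the prefetched address matches the next access. All names and parameters here are illustrative assumptions.

```python
import random

class QPrefetchAgent:
    """Toy Q-learning agent selecting a prefetch strategy per access.

    Hypothetical sketch: the state is the last observed stride, the
    actions are two candidate prefetchers, and the reward is +1 when
    the prefetched address matches the next access, else -1.
    """
    def __init__(self, alpha=0.5, epsilon=0.1, seed=0):
        self.q = {}                     # (state, action) -> estimated value
        self.alpha = alpha              # learning rate
        self.epsilon = epsilon          # exploration probability
        self.rng = random.Random(seed)
        self.actions = ("next_line", "stride")

    def choose(self, state):
        # Epsilon-greedy: mostly exploit the best-known action.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward):
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward - old)

def predict(addr, stride, action):
    # "next_line" guesses the adjacent address; "stride" extrapolates.
    return addr + 1 if action == "next_line" else addr + stride

def run(trace):
    """Replay an address trace; return the number of prefetch hits."""
    agent = QPrefetchAgent()
    hits = 0
    prev = trace[0]
    for i in range(1, len(trace) - 1):
        addr = trace[i]
        stride = addr - prev
        prev = addr
        action = agent.choose(stride)
        guess = predict(addr, stride, action)
        reward = 1.0 if guess == trace[i + 1] else -1.0
        if reward > 0:
            hits += 1
        agent.update(stride, action, reward)
    return hits
```

On a regular stride-4 trace such as `run(list(range(0, 400, 4)))`, the agent quickly learns to prefer the stride prefetcher. A hardware realization would replace the Python table with on-chip BRAM and fixed-point arithmetic, and an LSTM predictor would substitute a learned sequence model for the tabular state.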
