HyperFabric Interconnect (HFI): A Unified, Scalable Communication Fabric for HPC, AI, Quantum, and Neuromorphic Workloads

Abstract

The evolution of high-performance computing (HPC) interconnects has produced specialized fabrics such as InfiniBand[1], Intel Omni-Path, and NVIDIA NVLink[2], each optimized for distinct workloads. However, the increasing convergence of HPC, AI/ML, quantum, and neuromorphic computing requires a unified communication substrate capable of supporting diverse requirements, including ultra-low latency, high bandwidth, collective operations, and adaptive routing. We present HyperFabric Interconnect (HFI), a novel design that combines the strengths of existing interconnects while addressing their scalability and workload-fragmentation limitations. Our evaluation on simulated clusters demonstrates HFI’s ability to reduce job completion time (JCT) by up to 30%, improve tail-latency consistency by 45% under mixed loads, achieve 4× better jitter control in latency-sensitive applications, and sustain efficient scaling across heterogeneous workloads. Beyond simulation, we provide an analytical model and a deployment roadmap that highlight HFI’s role as a converged interconnect for the exascale and post-exascale era.
