Systematic Review of Artificial Intelligence Decoders for Topological Quantum Error Correction


Abstract

Efficient, low-latency decoding of topological quantum error correction (QEC) codes is a central challenge on the road to fault-tolerant quantum computing. This systematic review synthesizes findings from 108 peer-reviewed studies (2017–2026), selected under the PRISMA 2020 framework, that evaluate artificial intelligence (AI) and machine learning (ML) architectures for decoding surface, toric, color, and related topological stabilizer codes in near-term and fault-tolerant quantum computing systems. We find that AI decoders frequently outperform classical baselines under correlated and hardware-realistic noise: graph neural networks achieve error thresholds up to \(p_{\mathrm{th}} \approx 13.8\%\), and transformer-based models such as AlphaQubit reduce logical error rates by 24–31% relative to minimum-weight perfect matching (MWPM) on distance-3 and distance-5 surface codes benchmarked on Google’s Sycamore superconducting processor. Meanwhile, classical co-processor implementations on field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs) reach inference latencies as low as 2.3 ns. However, robustness to calibration drift and generalization across quantum hardware platforms remain open challenges. This review provides a structured decoder taxonomy, comparative performance tables, and an evidence-based roadmap for deploying AI-enhanced QEC in utility-scale quantum systems.
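The abstract uses minimum-weight perfect matching (MWPM) as the classical baseline against which the AI decoders are compared. As a minimal sketch of the baseline's logic only, here is a toy MWPM decoder for a one-dimensional repetition code under bit-flip noise: syndrome defects are paired with each other or with a code boundary so that the total number of flipped data qubits is minimized. The function names (`syndrome`, `mwpm_decode`) and the brute-force pairing search are illustrative assumptions, not code from any study in the review; production surface-code matchers use blossom-style graph algorithms rather than exhaustive search.

```python
def syndrome(error):
    """Syndrome of a length-d repetition code: stabilizer i flags a defect
    when adjacent data qubits i and i+1 disagree."""
    return tuple(error[i] ^ error[i + 1] for i in range(len(error) - 1))

def mwpm_decode(syn, d):
    """Toy MWPM decoder (illustrative only): pair each defect with another
    defect or with the nearer boundary, minimizing total correction weight.
    Exhaustive recursion -- fine for a toy code, exponential in general."""
    defects = tuple(i for i, s in enumerate(syn) if s)

    def best(rem):
        # Returns (cost, pairing) for the remaining unmatched defects.
        if not rem:
            return 0, []
        a, rest = rem[0], rem[1:]
        # Option 1: send defect a to its nearer boundary.
        cost, pairs = best(rest)
        choice = (min(a + 1, d - 1 - a) + cost, [("boundary", a)] + pairs)
        # Option 2: pair defect a with a later defect b (flip qubits a+1..b).
        for j, b in enumerate(rest):
            cost, pairs = best(rest[:j] + rest[j + 1:])
            cand = ((b - a) + cost, [(a, b)] + pairs)
            if cand[0] < choice[0]:
                choice = cand
        return choice

    _, pairs = best(defects)
    corr = [0] * d
    for p in pairs:
        if p[0] == "boundary":
            a = p[1]
            if a + 1 <= d - 1 - a:          # left boundary is nearer
                for q in range(0, a + 1):
                    corr[q] ^= 1
            else:                            # right boundary is nearer
                for q in range(a + 1, d):
                    corr[q] ^= 1
        else:
            a, b = p
            for q in range(a + 1, b + 1):
                corr[q] ^= 1
    return corr
```

For a single bit-flip on qubit 2 of a distance-5 code, the two resulting defects are matched to each other and the decoder recovers exactly that flip; the AI decoders surveyed above aim to beat this baseline when noise is correlated and such independent-error matching becomes suboptimal.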