Transformer-Based Quantum Error Decoding Enhanced by QGANs: Towards Scalable Surface Code Correction Algorithms
Abstract
Quantum error correction stands as one of the most dependable approaches to mitigating qubits' high environmental sensitivity and the significant error rates of current quantum devices. The topological surface code, renowned for its unique qubit lattice structure, is widely considered a pivotal tool for enabling fault-tolerant quantum computation. By introducing redundancy across multiple qubits, the surface code safeguards quantum information and identifies errors through state changes captured by syndrome qubits. However, simultaneous errors in data and syndrome qubits substantially escalate decoding complexity. Quantum Generative Adversarial Networks (QGANs) have emerged as promising deep learning frameworks, effectively harnessing quantum advantages for practical tasks such as image processing and data optimization. Consequently, a topological code trainer based on quantum-classical hybrid GANs is proposed as an auxiliary model to enhance error correction in machine learning-based decoders. It demonstrates significantly improved training accuracy over the traditional Minimum Weight Perfect Matching (MWPM) algorithm, which achieves an accuracy of 65%. Numerical experiments reveal that the decoder achieves a fidelity threshold of P = 0.1978, substantially surpassing the traditional algorithm's threshold of P = 0.1024. To enhance decoding efficiency, a Transformer decoder is integrated, incorporating the QGAN-trained syndrome error outputs into its framework. By leveraging its self-attention mechanism, the Transformer captures long-range qubit dependencies at a global scale, enabling high-fidelity error correction at larger code dimensions. Numerical validation of the surface code error threshold demonstrates an 8.5% threshold with a correction success rate exceeding 94%, whereas the local MWPM decoder achieves only a 55% success rate and, at a 4% threshold, fails to support large-scale computation.
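For context, the MWPM baseline referenced above can be reproduced in miniature with the open-source PyMatching library. The sketch below uses a distance-5 repetition code, a one-dimensional analogue of the surface code; the code distance and physical error rate are illustrative choices, not the paper's experimental settings.

```python
# Minimal MWPM baseline sketch (assumption: the PyMatching library and a
# repetition code stand in for the paper's surface-code setup).
import numpy as np
import pymatching

d = 5                      # code distance (illustrative)
p = 0.05                   # physical bit-flip rate (illustrative)
rng = np.random.default_rng(0)

# Parity-check matrix of the repetition code: stabilizer i checks qubits i, i+1.
H = np.zeros((d - 1, d), dtype=np.uint8)
for i in range(d - 1):
    H[i, i] = H[i, i + 1] = 1

matching = pymatching.Matching(H)

trials, failures = 10_000, 0
for _ in range(trials):
    error = (rng.random(d) < p).astype(np.uint8)   # i.i.d. bit-flip errors
    syndrome = (H @ error) % 2                     # measured syndrome
    correction = matching.decode(syndrome)         # MWPM correction
    residual = (error + correction) % 2
    # The residual has zero syndrome, so it is either all-zeros (success)
    # or all-ones (a logical flip) for this code.
    failures += int(residual[0])

print(f"logical error rate: {failures / trials:.4f}")
```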
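The Transformer decoder's use of self-attention over syndrome measurements can likewise be sketched. The embedding scheme, layer sizes, syndrome count, and four-class logical-error head below are illustrative assumptions, not the architecture reported in the paper.

```python
# Sketch of a Transformer syndrome decoder (assumption: PyTorch; all
# hyperparameters are illustrative).
import torch
import torch.nn as nn

class SyndromeTransformer(nn.Module):
    def __init__(self, n_syndromes: int, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2, n_classes: int = 4):
        super().__init__()
        # Each syndrome bit (0/1) is embedded as a token vector.
        self.bit_embed = nn.Embedding(2, d_model)
        # A learned positional embedding encodes each stabilizer's lattice site.
        self.pos_embed = nn.Parameter(torch.zeros(n_syndromes, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        # Self-attention lets every syndrome attend to every other one,
        # capturing the long-range qubit dependencies the abstract describes.
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Pool over tokens, then classify the logical error (e.g. I, X, Z, Y).
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, syndromes: torch.Tensor) -> torch.Tensor:
        # syndromes: (batch, n_syndromes) tensor of 0/1 measurement outcomes.
        x = self.bit_embed(syndromes) + self.pos_embed
        x = self.encoder(x)
        return self.head(x.mean(dim=1))

# Example: a batch of 8 random syndromes for a distance-5 rotated surface
# code, which has d^2 - 1 = 24 stabilizers (count illustrative).
model = SyndromeTransformer(n_syndromes=24)
logits = model(torch.randint(0, 2, (8, 24)))
print(logits.shape)  # torch.Size([8, 4])
```

In the paper's pipeline, such a model would be trained on QGAN-generated syndrome samples rather than the random inputs used here for shape-checking.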