Noise-Robust Quantum Generative Models for Distribution Learning and Efficient Data Loading

Abstract

Quantum machine learning offers powerful advantages in processing high-dimensional data, but loading classical probability distributions into quantum states remains a major challenge. This work extends quantum generative adversarial networks (qGANs) to overcome two critical limitations of early approaches: poor robustness to noise on near-term devices and limited capacity to model complex, multi-modal distributions. We introduce hybrid qGAN architectures that pair Wasserstein GAN with gradient penalty (WGAN-GP) and maximum mean discrepancy (MMD) losses with expressive quantum circuits, including quantum convolutional neural networks (QCNNs) and EfficientSU2 ansätze. Training is performed through a seamless PyTorch–Qiskit integration, enabling stable optimization on both simulators and real quantum hardware.

Experiments on 2D Gaussian and log-normal distributions, both directly relevant to financial modeling, show faster convergence and higher fidelity than prior qGAN designs, with up to 80% lower Wasserstein distance under 5% depolarizing noise. The framework is further validated through European call option pricing via quantum amplitude estimation (QAE), achieving pricing errors below 1% on IBM's 20-qubit superconducting systems. Theoretical analysis confirms gradient stability under noise, and empirical results include comprehensive noise sweeps, t-SNE latent-space visualizations, and rigorous statistical tests (KL divergence, Kolmogorov–Smirnov tests). This work establishes a practical path toward quantum advantage in generative modeling, with fully open-source code for reproducibility.
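To make the hybrid setup concrete, here is a minimal sketch of a quantum generator built from an EfficientSU2 ansatz and exposed to PyTorch through qiskit-machine-learning's SamplerQNN and TorchConnector. The qubit count, circuit depth, and initial superposition layer are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a variational quantum generator wrapped as a PyTorch module.
# Assumptions: 3 qubits, reps=2, qiskit-machine-learning's SamplerQNN/TorchConnector.
import torch
from qiskit import QuantumCircuit
from qiskit.circuit.library import EfficientSU2
from qiskit_machine_learning.neural_networks import SamplerQNN
from qiskit_machine_learning.connectors import TorchConnector

num_qubits = 3
qc = QuantumCircuit(num_qubits)
qc.h(range(num_qubits))                      # start from a uniform superposition
qc.compose(EfficientSU2(num_qubits, reps=2), inplace=True)

# SamplerQNN measures the circuit and returns the probability of each of the
# 2**num_qubits basis states; every circuit parameter is a trainable weight.
qnn = SamplerQNN(circuit=qc, input_params=[], weight_params=qc.parameters)
generator = TorchConnector(qnn)              # torch.nn.Module with autograd support

probs = generator(torch.tensor([]))          # forward pass -> 8-bin distribution
assert probs.shape[-1] == 2**num_qubits
```

The two classical loss components named in the abstract can be sketched in plain PyTorch as follows; the critic architecture, batch shapes, and kernel bandwidth are placeholders rather than the paper's settings.

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP term: penalize the critic's gradient norm for deviating
    from 1 on random interpolates between real and generated samples."""
    eps = torch.rand(real.size(0), 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(
        outputs=critic(interp).sum(), inputs=interp, create_graph=True
    )[0]
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

def mmd_loss(x, y, sigma=1.0):
    """Biased Gaussian-kernel MMD^2 between sample batches x and y."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2.0 * sigma**2))
    return k(x, x).mean() - 2.0 * k(x, y).mean() + k(y, y).mean()
```

In a training loop of this kind, the critic would plausibly be trained on the WGAN-GP objective against binned samples of the target distribution, with the MMD term added to the generator loss; the exact weighting is one of the design choices the paper studies.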
