Noisy Qubits, Hard Problems: A Systematic Review and Taxonomy of Quantum Optimization Beyond Toy Benchmarks
Abstract
Quantum optimization has become a leading application for near-term quantum computing, yet many publications compare algorithms against idealized assumptions and small toy benchmarks. This limits the interpretability, reproducibility, and practical relevance of reported performance gains, particularly in the noisy intermediate-scale quantum (NISQ) era. In this work, we present a systematic literature review that investigates quantum optimization beyond toy benchmarks. Following established SLR protocols, we analyze the literature along multiple methodological dimensions, including algorithmic approach, benchmark realism, encoding strategies, hybrid quantum-classical workflows, hardware and noise modeling, evaluation metrics, and reporting practices. We introduce a unified taxonomy that captures the interaction between problem formulation, encoding overhead, noise-aware execution, and hybrid optimization loops. In addition, we propose a reproducibility checklist and scoring rubric to assess reporting completeness and experimental rigor across studies. Rather than developing new quantum optimization algorithms or making theoretical quantum-advantage claims, the main contribution of this work is a methodological analysis of benchmarking realism, encoding and evaluation practices, and reproducibility rigor in NISQ-era quantum optimization studies. Our review uncovers a persistent gap between algorithmic innovation and evaluation maturity: the number of publications and the diversity of methods continue to grow without corresponding growth in standardized benchmarks, strong classical baselines, noise-consistent evaluation, or reproducibility.