Quantum2Prompt: Representing Quantum Circuits as Language Prompts for Linear System Solving


Abstract

We present Quantum2Prompt, a cross-modal framework that reformulates quantum circuits as structured language prompts, enabling large language models (LLMs) to estimate outcomes of the Variational Quantum Linear Solver (VQLS). Instead of relying on iterative quantum--classical loops with repeated measurements, Quantum2Prompt translates gate sequences, control--target relations, and rotation parameters into compact textual descriptions that preserve circuit semantics while remaining hardware-agnostic. An LLM-based regressor consumes these descriptions to produce real-valued residual estimates, transforming VQLS outcome prediction into a prompt-to-residual regression task. Extensive experiments across Toeplitz, Laplacian, and sparse matrix families show that Quantum2Prompt achieves up to \(R^2 = 0.99\) and \(\mathrm{MSE} = 0.0026\), substantially outperforming classical baselines such as Random Forest, SVR, and LightGBM. Ablation and noise-robustness analyses further demonstrate that combining circuit text, rotation parameters, and optimization-step features yields the most accurate and noise-resilient predictions. Beyond empirical gains, we contribute the first benchmark dataset aligning VQLS circuits, textual encodings, and residual labels, enabling reproducible evaluation and hybrid quantum--language workflows. These results suggest that LLMs can serve as efficient, interpretable surrogates for variational solvers, paving the way toward language-driven quantum algorithm design and performance prediction. The full dataset and implementation are publicly available for reproducibility.
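To make the circuit-to-text encoding concrete, the sketch below shows one plausible way a parameterized ansatz could be serialized into a compact prompt carrying gate sequences, control--target relations, rotation parameters, and the optimization step. The gate schema, field names, and prompt template here are illustrative assumptions, not the authors' actual Quantum2Prompt format.

```python
# Hypothetical sketch of a circuit-to-prompt encoder in the spirit of
# Quantum2Prompt. The gate dictionary layout and prompt wording are
# assumptions for illustration only.

def circuit_to_prompt(gates, step):
    """Render a gate list as a compact, hardware-agnostic text description."""
    parts = []
    for g in gates:
        if g["name"] == "ry":
            # Rotation gates carry a real-valued parameter theta.
            parts.append(f"RY(theta={g['theta']:.4f}) on qubit {g['target']}")
        elif g["name"] == "cnot":
            # Entangling gates carry an explicit control--target relation.
            parts.append(f"CNOT control={g['control']} target={g['target']}")
    body = "; ".join(parts)
    return f"Optimization step {step}. Circuit: {body}. Predict the VQLS residual."

# Example: a two-qubit ansatz layer at optimization step 12.
gates = [
    {"name": "ry", "target": 0, "theta": 0.7854},
    {"name": "cnot", "control": 0, "target": 1},
    {"name": "ry", "target": 1, "theta": 1.5708},
]
prompt = circuit_to_prompt(gates, step=12)
print(prompt)
```

A string like this would then be fed to the LLM-based regressor, which maps the prompt to a real-valued residual estimate.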