Classical SU(2) Models Match or Exceed Shallow Variational Quantum Circuits on Classical Vision Benchmarks


Abstract

Quaternion-valued neural networks and shallow variational quantum circuits (VQCs) both derive their local transformations from the rotation group SU(2), yet their comparative performance as learning architectures on classical supervised tasks has not been systematically examined. We present a controlled comparison in which real-valued, quaternion-valued, and quantum classifiers operate on identical frozen feature representations across MNIST, FashionMNIST, and CIFAR-10, isolating performance differences to the geometric inductive biases of the classification heads. For CIFAR-10, we evaluate two feature regimes—a learned 16-dimensional bottleneck and frozen ImageNet-pretrained ResNet18 features (512-dimensional)—to assess whether observed relationships reflect fundamental architectural properties or artifacts of representation capacity. Quaternion classifiers match real-valued baselines on MNIST (93.64% vs. 93.54%) and FashionMNIST (84.47% vs. 84.60%), while shallow product-state VQCs achieve only 87.52% (MNIST) and 82.03% (FashionMNIST), despite substantially higher computational cost. On CIFAR-10, quaternion networks retain 94.3% of real-valued performance under the learned bottleneck and 95.9% under ResNet18 features, demonstrating robustness across a 32-fold increase in feature dimensionality. On CIFAR-10, product-state quantum circuits underperform quaternion classifiers by 2.8–3.5 percentage points across both feature regimes. Notably, entanglement reverses from providing modest gains (approximately +0.6 percentage points) on simple grayscale benchmarks to degrading performance by 9.3 percentage points relative to the product-state circuit when combined with high-quality pretrained features, indicating that additional quantum resources can adversely affect learning in shallow, measurement-limited regimes.
These results demonstrate that classical SU(2) models realized through quaternion networks provide computationally efficient classification heads that match real-valued baselines while substantially outperforming shallow variational quantum circuits on classical vision tasks.
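The quaternion classification heads discussed above act on inputs via quaternion algebra rather than unconstrained real-valued weights. As a rough illustration of the underlying mechanism (a minimal sketch, not the authors' implementation; all names and the parameter-sharing layout are assumptions), the Hamilton product and a toy quaternion linear layer can be written as:

```python
import numpy as np

def hamilton(q, p):
    """Hamilton product of quaternions q = a + bi + cj + dk, stored as 4-vectors."""
    a1, b1, c1, d1 = q
    a2, b2, c2, d2 = p
    return np.array([
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,
    ])

class QuaternionLinear:
    """Toy quaternion-valued linear layer: each weight is a single quaternion
    applied to quaternion-valued inputs via the Hamilton product. Mapping
    4*m real inputs to 4*n real outputs thus uses 4*m*n real parameters,
    versus 16*m*n for an unconstrained real-valued layer on the same shapes
    (one source of the computational efficiency the abstract refers to)."""

    def __init__(self, in_q, out_q, seed=0):
        rng = np.random.default_rng(seed)
        # One quaternion weight per (output, input) pair.
        self.W = rng.standard_normal((out_q, in_q, 4)) / np.sqrt(in_q)

    def __call__(self, x):
        # x: (in_q, 4) array of input quaternions -> (out_q, 4) outputs.
        out = np.zeros((self.W.shape[0], 4))
        for o in range(self.W.shape[0]):
            for i in range(self.W.shape[1]):
                out[o] += hamilton(self.W[o, i], x[i])
        return out
```

The weight-sharing pattern in the Hamilton product is the geometric inductive bias being compared: a unit quaternion acts as an SU(2) rotation, the same group from which the VQC gates are drawn.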
