The Proposal of a Fully Quantum Neural Network and Fidelity-Driven Training Using Directional Gradients for Multi-Class Classification

Abstract

In this work, we present a training method for a Fully Quantum Neural Network (FQNN) based entirely on quantum circuits. The model processes data exclusively through quantum operations, without incorporating classical neural network layers. In the proposed architecture, the roles of classical neurons and weights are assumed, respectively, by qubits and parameterized quantum gates: input features are encoded into quantum states of qubits, while the network weights correspond to the rotation angles of quantum gates that govern the system's state evolution. The optimization of gate parameters is performed using directional gradient estimation, where gradients are numerically approximated via finite differences, eliminating the need for analytic derivation. The training objective is defined as the quantum-state fidelity, which measures the similarity between the network's output state and a reference state representing the correct class. Experiments were conducted using the Qiskit AerSimulator, which allows for the accurate simulation of quantum circuits on a classical computer. The proposed approach was applied to the classification of the Iris dataset. The experimental results demonstrate that the FQNN is capable of effectively learning to distinguish between classes based on input features, achieving stable test accuracy across runs. These findings confirm the feasibility of constructing fully quantum classifiers without relying on hybrid quantum-classical architectures. The FQNN architecture consists of multiple quantum layers, each incorporating parameterized rotation operations and entanglement between qubits. The number of layers is determined by the ratio of quantum parameters (weights) to the number of input features. Each layer functions analogously to a hidden layer in a classical neural network, transforming the quantum-state space into a richer feature representation through controlled quantum operations. As a result, the network is capable of dynamically modeling dependencies among input features without the use of classical activation functions.
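The training scheme described above (feature encoding via rotation angles, parameterized rotations plus entanglement as a layer, a fidelity-based loss, and finite-difference directional gradients) can be illustrated with a minimal sketch. This is not the authors' code: it simulates a toy two-qubit circuit directly with NumPy rather than Qiskit's AerSimulator, and the feature values, reference state, learning rate, and layer layout are all illustrative assumptions.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation as a 2x2 real unitary."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT entangling gate (control = qubit 0, target = qubit 1)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def circuit_state(features, weights):
    """Encode two features as RY angles on |00>, then apply one
    'layer': parameterized RY rotations followed by entanglement."""
    state = np.kron(ry(features[0]), ry(features[1])) @ np.array([1.0, 0, 0, 0])
    state = np.kron(ry(weights[0]), ry(weights[1])) @ state
    return CNOT @ state

def fidelity(state, target):
    """Quantum-state fidelity |<target|state>|^2 for pure states."""
    return abs(np.vdot(target, state)) ** 2

def loss(weights, features, target):
    # Training objective: maximize fidelity with the class reference state
    return 1.0 - fidelity(circuit_state(features, weights), target)

def finite_diff_grad(weights, features, target, eps=1e-4):
    """Directional gradient estimate via central finite differences,
    probing the loss along each parameter axis (no analytic derivative)."""
    grad = np.zeros_like(weights)
    for i in range(len(weights)):
        d = np.zeros_like(weights)
        d[i] = eps
        grad[i] = (loss(weights + d, features, target)
                   - loss(weights - d, features, target)) / (2 * eps)
    return grad

# Toy training loop: drive the output state toward the |11> reference state.
rng = np.random.default_rng(0)
weights = rng.uniform(-0.1, 0.1, size=2)
features = np.array([0.4, 1.1])       # hypothetical encoded input features
target = np.array([0.0, 0, 0, 1.0])   # reference state for the correct class

for _ in range(200):
    weights -= 0.5 * finite_diff_grad(weights, features, target)

print(f"final loss: {loss(weights, features, target):.4f}")
```

Because RY rotations compose additively, the trained weights effectively shift the encoded feature angles toward whatever configuration maximizes overlap with the reference state; in the full FQNN this same gradient estimate would be computed from circuit executions on the simulator rather than from an exact statevector.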
