Abstract: In this work, a scalable algorithm for the approximate quantum state preparation (QSP) problem is proposed, addressing a challenge of fundamental importance across many areas of quantum computing. The algorithm uses a variational quantum circuit based on the Standard Recursive Block Basis (SRBB), a hierarchical construction for the matrix algebra of the $SU(2^n)$ group that links the variational parameters to the topology of the Lie group. Compared to the full algebra, using only the diagonal components reduces the number of CNOTs, and hence the circuit depth, by an exponential factor, in full agreement with the relaxation principle inherent to the approximation methodology: minimizing resources while achieving high accuracy. The desired quantum state is then approximated by a scalable quantum neural network designed upon the diagonal SRBB sub-algebra. This approach provides a new scheme for approximate QSP in a variational framework and a specific use case for the SRBB hierarchy. The performance of the algorithm is assessed with different loss functions, such as fidelity, trace distance, and the Frobenius norm, in relation to two optimizers: Adam and Nelder-Mead. The results highlight the potential of the SRBB, in close connection with the geometry of unitary groups, achieving high accuracy up to 4 qubits in simulation, but also its current limitations as the number of qubits grows. Additionally, the approximate SRBB-based QSP algorithm has been tested on real quantum devices to assess its performance with a small number of qubits.
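The three loss functions mentioned in the abstract can be made concrete with a short sketch. The snippet below is a minimal NumPy illustration, not the paper's implementation: it computes fidelity, trace distance, and the Frobenius norm between a target state and its approximation, using the pure-state forms of these metrics.

```python
import numpy as np

def fidelity(psi, phi):
    # Fidelity between two pure states: |<psi|phi>|^2
    return np.abs(np.vdot(psi, phi)) ** 2

def trace_distance(psi, phi):
    # For pure states, the trace distance reduces to sqrt(1 - F)
    return np.sqrt(1.0 - fidelity(psi, phi))

def frobenius_norm(psi, phi):
    # Frobenius norm of the difference of the two density matrices
    rho = np.outer(psi, psi.conj())
    sigma = np.outer(phi, phi.conj())
    return np.linalg.norm(rho - sigma, ord="fro")

# Toy example: |0> versus the equal superposition on one qubit
psi = np.array([1.0, 0.0], dtype=complex)
phi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
f = fidelity(psi, phi)  # 0.5
```

In the variational loop, either quantity (or 1 minus the fidelity) serves as the scalar loss that Adam or Nelder-Mead drives toward its optimum.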
Abstract: In this work, scalable quantum neural networks are introduced to approximate unitary evolutions through the Standard Recursive Block Basis (SRBB) and, subsequently, redesigned with a reduced number of CNOTs. This algebraic approach to the problem of unitary synthesis exploits Lie algebras and their topological features to obtain scalable parameterizations of unitary operators. First, the recursive algorithm that builds the SRBB is presented, framed within the original scalability scheme, previously known in the literature only from a theoretical point of view. Unexpectedly, 2-qubit systems emerge as a special case outside this scheme. Furthermore, an algorithm to reduce the number of CNOTs is proposed, deriving a new implementable scaling scheme that requires a single layer of approximation. From the mathematical algorithm, the scalable CNOT-reduced quantum neural network is implemented and its performance is assessed on a variety of unitary matrices, both sparse and dense, up to 6 qubits via the PennyLane library. The effectiveness of the approximation is measured with different metrics in relation to two optimizers: a gradient-based method and the Nelder-Mead method. The approximate SRBB-based synthesis algorithm with CNOT reduction is also tested on real hardware and compared with other valid approximation and decomposition methods available in the literature.
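The derivative-free side of this setup can be sketched in a few lines. The snippet below optimizes a single-qubit ZYZ parameterization toward a target Hadamard gate with SciPy's Nelder-Mead method, using a phase-invariant cost; the ZYZ ansatz and the Hadamard target are illustrative stand-ins, not the paper's SRBB network, which is built recursively for n qubits.

```python
import numpy as np
from scipy.optimize import minimize

def rz(t):
    # Z-rotation by angle t
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def ry(t):
    # Y-rotation by angle t
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

def u(params):
    # ZYZ Euler parameterization of a single-qubit unitary
    a, b, c = params
    return rz(a) @ ry(b) @ rz(c)

target = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard

def cost(params):
    # Phase-invariant distance: zero iff u(params) equals the
    # target up to a global phase
    return 1.0 - np.abs(np.trace(u(params).conj().T @ target)) / 2.0

res = minimize(cost, x0=[0.5, 0.5, 0.5], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 10000})
```

The same pattern scales to the multi-qubit case by swapping in the parameterized circuit's unitary and a suitable matrix-distance metric.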
Abstract: Classification is particularly relevant to Information Retrieval, as it is used in various subtasks of the search pipeline. In this work, we propose a quantum convolutional neural network (QCNN) for multi-class classification of classical data. The model, implemented using PennyLane, is trained by minimizing a cross-entropy loss over the parameters of the quantum circuit. The QCNN is tested on the MNIST dataset with 4, 6, 8, and 10 classes. The results show that with 4 classes the performance is slightly lower than that of a classical CNN, while with a higher number of classes the QCNN outperforms the classical neural network.
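The training objective can be sketched as follows, assuming (a common choice, not detailed in the abstract) that circuit measurements are mapped to per-class scores and normalized with a softmax before the cross-entropy is taken.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over class scores
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true classes
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

# Toy batch: 3 samples, 4 classes; in the QCNN these scores would
# come from expectation values measured on the quantum circuit
scores = np.array([[2.0, 0.1, 0.1, 0.1],
                   [0.1, 2.0, 0.1, 0.1],
                   [0.1, 0.1, 0.1, 2.0]])
labels = np.array([0, 1, 3])
loss = cross_entropy(softmax(scores), labels)
```

During training, this scalar loss is backpropagated through the parameterized quantum circuit to update the gate angles.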