Abstract: The potential impact of quantum machine learning algorithms on industrial applications remains an exciting open question. Conventional methods for encoding classical data into quantum computers are not only too costly for a potential quantum advantage in the algorithms but also severely limit the scale of feasible experiments on current hardware. Therefore, recent works, despite claiming the near-term suitability of their algorithms, do not provide experimental benchmarking on standard machine learning datasets. We attempt to solve the data encoding problem by improving a recently proposed variational algorithm [1] that approximately prepares the encoded data, using asymptotically shallow circuits that fit the native gate set and topology of currently available quantum computers. We apply the improved algorithm to encode the Fashion-MNIST dataset [2], which can be directly used in future empirical studies of quantum machine learning algorithms. We deploy simple variational quantum classifiers trained on the encoded dataset on the current quantum computer ibmq-kolkata [3] and achieve moderate accuracies, providing a proof of concept for the near-term usability of our data encoding method.
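To make the encoding step concrete, below is a minimal numpy sketch of variational approximate state preparation: a shallow hardware-efficient ansatz (RY layers interleaved with a CNOT ladder) is optimized to maximize overlap with an amplitude-encoded target vector. The three-qubit size, two-layer depth, random target, and finite-difference optimizer are illustrative assumptions, not the algorithm of [1].

```python
# Sketch: train a shallow ansatz to approximately amplitude-encode a vector.
# All sizes and the optimizer are toy assumptions for illustration.
import numpy as np

N = 3  # qubits; target = normalized 8-dim classical feature vector
rng = np.random.default_rng(1)
target = rng.random(2 ** N)
target /= np.linalg.norm(target)

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of an N-qubit state vector."""
    s = state.reshape([2] * N)
    s = np.tensordot(gate, s, axes=([1], [q]))
    return np.moveaxis(s, 0, q).reshape(-1)

def apply_cnot(state, c, t):
    """Flip qubit t on the slice where control qubit c is 1."""
    s = state.reshape([2] * N).copy()
    idx = [slice(None)] * N
    idx[c] = 1
    s[tuple(idx)] = np.flip(s[tuple(idx)], axis=t if t < c else t - 1)
    return s.reshape(-1)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def circuit(params):
    """Two layers of RY rotations, each followed by a CNOT ladder."""
    state = np.zeros(2 ** N)
    state[0] = 1.0
    params = params.reshape(2, N)
    for layer in range(2):
        for q in range(N):
            state = apply_1q(state, ry(params[layer, q]), q)
        for q in range(N - 1):
            state = apply_cnot(state, q, q + 1)
    return state

def infidelity(params):
    # Real-valued circuit, so fidelity is the squared overlap.
    return 1.0 - np.dot(target, circuit(params)) ** 2

# Toy optimization loop: central finite-difference gradient descent.
theta = rng.uniform(0, 2 * np.pi, 2 * N)
eps, lr = 1e-4, 0.5
for step in range(300):
    grad = np.array([
        (infidelity(theta + eps * np.eye(2 * N)[i]) -
         infidelity(theta - eps * np.eye(2 * N)[i])) / (2 * eps)
        for i in range(2 * N)
    ])
    theta -= lr * grad
print("final infidelity:", infidelity(theta))
```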
Abstract: Tensor networks have recently found applications in machine learning, for both supervised and unsupervised learning. The most common approaches to training these models are gradient descent methods. In this work, we consider an alternative training scheme that utilizes basic tensor network operations, e.g., summation and compression. The training algorithm is based on compressing the superposition state constructed from all the training data in product state representation. The algorithm parallelizes easily and iterates through the dataset only once; hence, it serves as a pre-training algorithm. We benchmark the algorithm on the MNIST dataset and show reasonable results for generating new images and for classification tasks. Furthermore, we provide an interpretation of the algorithm as a compressed quantum kernel density estimation for the probability amplitude of the input data.
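As an illustration of the training scheme, here is a minimal numpy sketch, under assumed details, of the two tensor network operations the abstract names: each sample is mapped to a bond-dimension-1 MPS via an assumed cosine/sine feature map, MPSs are summed by a direct sum of their tensors, and the superposition is compressed back to a fixed bond dimension by a single SVD truncation sweep (suboptimal without canonicalization, but it shows the operation).

```python
# Sketch: one pass over the data, summing product-state MPSs and compressing.
# Feature map, bond dimension, and toy data are illustrative assumptions.
import numpy as np

def product_mps(image):
    """Encode a flattened image (values in [0, 1]) as a bond-dim-1 MPS."""
    mps = []
    for x in image:
        phys = np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])
        mps.append(phys.reshape(1, 2, 1))  # (left, physical, right)
    return mps

def mps_sum(a, b):
    """Direct sum of two MPSs: inner bond dimensions add, amplitudes add.
    Assumes boundary bonds of dimension 1, as produced here."""
    out = []
    for i, (ta, tb) in enumerate(zip(a, b)):
        la, d, ra = ta.shape
        lb, _, rb = tb.shape
        l = 1 if i == 0 else la + lb
        r = 1 if i == len(a) - 1 else ra + rb
        t = np.zeros((l, d, r))
        if i == 0:
            t[:, :, :ra] = ta
            t[:, :, ra:] = tb
        elif i == len(a) - 1:
            t[:la] = ta
            t[la:] = tb
        else:
            t[:la, :, :ra] = ta
            t[la:, :, ra:] = tb
        out.append(t)
    return out

def compress(mps, chi):
    """Left-to-right SVD sweep truncating every bond to dimension chi."""
    mps = [t.copy() for t in mps]
    for i in range(len(mps) - 1):
        l, d, r = mps[i].shape
        u, s, vh = np.linalg.svd(mps[i].reshape(l * d, r), full_matrices=False)
        k = min(chi, len(s))
        mps[i] = u[:, :k].reshape(l, d, k)
        mps[i + 1] = np.einsum('ab,bdr->adr',
                               np.diag(s[:k]) @ vh[:k], mps[i + 1])
    return mps

# Single pass over the data: add each sample, then compress.
rng = np.random.default_rng(0)
data = rng.random((10, 16))  # 10 toy "images" with 16 pixels each
state = product_mps(data[0])
for sample in data[1:]:
    state = compress(mps_sum(state, product_mps(sample)), chi=8)
print("final bond dimensions:", [t.shape[2] for t in state])
```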
Abstract: Simulating quantum many-body dynamics on classical computers is a challenging problem due to the exponential growth of the Hilbert space. Artificial neural networks have recently been introduced as a new tool to approximate quantum many-body states. We benchmark the variational power of different shallow and deep neural autoregressive quantum states in simulating the global quench dynamics of a non-integrable quantum Ising chain. We find that the number of parameters required to represent the quantum state at a given accuracy increases exponentially in time. The growth rate is only slightly affected by the network architecture over a wide range of design choices: shallow and deep networks, small and large filter sizes, dilated and normal convolutions, with and without shortcut connections.
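For context, an autoregressive quantum state factorizes the amplitude into conditionals, psi(s) = prod_i sqrt(p(s_i | s_<i)) exp(i phi_i(s_<i) s_i), which allows exact sampling and gives normalization by construction. The sketch below is a toy linear-conditional model with hypothetical parameters w, b, v, c, far simpler than the convolutional architectures benchmarked above.

```python
# Sketch: a toy autoregressive quantum state with linear conditionals.
# The parameterization (w, b, v, c) is an assumption for illustration only.
import numpy as np

N = 8
rng = np.random.default_rng(2)
w = rng.normal(scale=0.1, size=(N, N))  # only the strict lower triangle is used
b = rng.normal(scale=0.1, size=N)
v = rng.normal(scale=0.1, size=(N, N))  # phase weights
c = rng.normal(scale=0.1, size=N)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def amplitude(s):
    """Normalized complex amplitude of configuration s in {0, 1}^N."""
    amp = 1.0 + 0j
    for i in range(N):
        context = s[:i]
        p_up = sigmoid(w[i, :i] @ context + b[i])
        phase = v[i, :i] @ context + c[i]
        p = p_up if s[i] == 1 else 1.0 - p_up
        amp *= np.sqrt(p) * np.exp(1j * phase * s[i])
    return amp

def sample():
    """Exact autoregressive sampling, one spin at a time."""
    s = np.zeros(N, dtype=int)
    for i in range(N):
        p_up = sigmoid(w[i, :i] @ s[:i] + b[i])
        s[i] = rng.random() < p_up
    return s

# Normalization check: |psi|^2 sums to 1 over all 2^N configurations.
total = sum(abs(amplitude(np.array(list(np.binary_repr(k, N)), dtype=int))) ** 2
            for k in range(2 ** N))
print("sum of |psi|^2:", total)  # ~1.0 by construction
```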