Abstract: Benchmarks that concisely summarize the performance of many-qubit quantum computers are essential for measuring progress towards the goal of useful quantum computation. In this work, we present a benchmarking framework that is based on quantifying how a quantum computer's performance on quantum circuits varies as a function of features of those circuits, such as circuit depth, width, two-qubit gate density, problem input size, or algorithmic depth. Our featuremetric benchmarking framework generalizes volumetric benchmarking -- a widely used methodology that quantifies performance versus circuit width and depth -- and we show that it enables richer and more faithful models of quantum computer performance. We demonstrate featuremetric benchmarking with example benchmarks run on IBM Q and IonQ systems of up to 27 qubits, and we show how to produce performance summaries from the data using Gaussian process regression. Our data analysis methods are also of interest in the special case of volumetric benchmarking, as they enable the creation of intuitive two-dimensional capability regions using data from few circuits.
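To make the data analysis step concrete, the sketch below fits a Gaussian process to hypothetical featuremetric benchmarking data, modeling circuit success probability as a function of circuit features. The feature set (width, depth, two-qubit gate density), kernel choice, and data values are illustrative assumptions, not the paper's exact pipeline.

```python
# Illustrative sketch: Gaussian process regression over circuit features,
# as described in the abstract above. All data values are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Each row: (circuit width, circuit depth, two-qubit gate density).
X = np.array([[2, 4, 0.25], [4, 8, 0.40], [8, 16, 0.50], [16, 32, 0.60]])
# Observed success probabilities for circuits with those features.
y = np.array([0.98, 0.90, 0.71, 0.42])

# An RBF kernel captures smooth variation of performance with the features;
# a WhiteKernel absorbs shot noise in the estimated success probabilities.
kernel = RBF(length_scale=[4.0, 8.0, 0.2]) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predict performance (with uncertainty) at an unseen feature combination.
mean, std = gp.predict(np.array([[8, 24, 0.55]]), return_std=True)
print(f"predicted success probability: {mean[0]:.3f} +/- {std[0]:.3f}")
```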
Abstract: Quantum computers have the potential to revolutionize diverse fields, including quantum chemistry, materials science, and machine learning. However, contemporary quantum computers experience errors that often cause quantum programs run on them to fail. Until quantum computers can reliably execute large quantum programs, stakeholders will need fast and reliable methods for assessing a quantum computer's capability -- i.e., the programs it can run and how well it can run them. Previously, off-the-shelf neural network architectures have been used to model quantum computers' capabilities, but with limited success, because these networks fail to learn the complex quantum physics that determines real quantum computers' errors. We address this shortcoming with a new quantum-physics-aware neural network architecture for learning capability models. Our architecture combines aspects of graph neural networks with efficient approximations to the physics of errors in quantum programs. This approach achieves up to $\sim50\%$ reductions in mean absolute error on both experimental and simulated data, over state-of-the-art models based on convolutional neural networks.
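The following sketch illustrates one plausible reading of the architecture described above: a graph neural network over the processor's qubit-connectivity graph whose output corrects a fidelity-product approximation to the physics of circuit errors. The layer sizes, the single round of message passing, and the combination rule are all illustrative assumptions, not the paper's exact model.

```python
# Hypothetical sketch of a quantum-physics-aware capability model.
import torch
import torch.nn as nn

class PhysicsAwareCapabilityModel(nn.Module):
    def __init__(self, feat_dim, hidden=32):
        super().__init__()
        self.msg = nn.Linear(feat_dim, hidden)     # message function
        self.update = nn.GRUCell(hidden, hidden)   # node-state update
        self.embed = nn.Linear(feat_dim, hidden)   # initial node embedding
        self.readout = nn.Linear(hidden, 1)

    def forward(self, node_feats, adj, gate_error_rates):
        # node_feats: (n_qubits, feat_dim) per-qubit circuit features
        # adj: (n_qubits, n_qubits) qubit-connectivity adjacency matrix
        # gate_error_rates: (n_gates,) estimated per-gate error rates
        h = torch.relu(self.embed(node_feats))
        # One round of message passing along the coupling graph.
        messages = adj @ torch.relu(self.msg(node_feats))
        h = self.update(messages, h)
        # The GNN predicts a correction to the first-order physics estimate.
        correction = torch.sigmoid(self.readout(h.mean(dim=0)))
        # Fidelity-product approximation: circuit fidelity ~ product of
        # individual gate fidelities (valid for weak stochastic errors).
        physics_fidelity = torch.prod(1.0 - gate_error_rates)
        return physics_fidelity * correction

# Usage on a 5-qubit linear-chain processor (all inputs hypothetical).
adj = torch.diag(torch.ones(4), 1) + torch.diag(torch.ones(4), -1)
model = PhysicsAwareCapabilityModel(feat_dim=8)
print(model(torch.rand(5, 8), adj, torch.full((12,), 0.01)))
```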
Abstract: Holistic benchmarks for quantum computers are essential for testing and summarizing the performance of quantum hardware. However, holistic benchmarks -- such as algorithmic or randomized benchmarks -- typically do not predict a processor's performance on circuits outside the benchmark's necessarily very limited set of test circuits. In this paper, we introduce a general framework for building predictive models from benchmarking data using capability models. Capability models can be fit to many kinds of benchmarking data and used for a variety of predictive tasks. We demonstrate this flexibility with two case studies. In the first case study, we predict circuit (i) process fidelities and (ii) success probabilities by fitting error rates models to two kinds of volumetric benchmarking data. Error rates models are simple yet versatile capability models that assign effective error rates to individual gates or to more general circuit components. In the second case study, we construct a capability model for predicting circuit success probabilities by applying transfer learning to ResNet50, a neural network trained for image classification. Our case studies use data from cloud-accessible quantum computers and from simulations of noisy quantum computers.
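A minimal sketch of an error rates model, as described above: each gate class is assigned an effective error rate, a circuit's predicted fidelity is the product of its gates' fidelities, and the rates are fit to benchmarking data. The gate classes, data values, and least-squares fitting procedure are illustrative assumptions.

```python
# Sketch: fitting an error rates model to (hypothetical) benchmarking data.
import numpy as np
from scipy.optimize import minimize

GATE_TYPES = ["sq", "cx"]  # single-qubit and two-qubit gate classes

def predicted_fidelity(error_rates, gate_counts):
    # Fidelity-product prediction: each gate of type i contributes (1 - e_i).
    return np.prod((1.0 - error_rates) ** gate_counts)

def loss(error_rates, circuits, observed_fidelities):
    preds = np.array([predicted_fidelity(error_rates, c) for c in circuits])
    return np.sum((preds - observed_fidelities) ** 2)

# Benchmark circuits described by gate counts (n_sq, n_cx), with measured fidelities.
circuits = np.array([[10, 2], [40, 10], [80, 30]])
observed = np.array([0.97, 0.86, 0.62])

fit = minimize(loss, x0=[0.001, 0.01], args=(circuits, observed),
               bounds=[(0.0, 1.0)] * len(GATE_TYPES))
print(dict(zip(GATE_TYPES, fit.x)))  # fitted effective error rates
```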
Abstract: The computational power of contemporary quantum processors is limited by hardware errors that cause computations to fail. In principle, each quantum processor's computational capabilities can be described with a capability function that quantifies how well the processor can run each possible quantum circuit (i.e., program), as a map from circuits to the processor's success rates on those circuits. However, capability functions are typically unknown and challenging to model, as the particular errors afflicting a specific quantum processor are a priori unknown and difficult to completely characterize. In this work, we investigate using artificial neural networks to learn an approximation to a processor's capability function. We explore how to define the capability function, and we explain how data for training neural networks can be efficiently obtained for a capability function defined using process fidelity. We then investigate using convolutional neural networks to model a quantum computer's capability. Using simulations, we show that convolutional neural networks can accurately model a processor's capability when that processor experiences gate-dependent, time-dependent, and context-dependent stochastic errors. We then discuss some challenges to creating useful neural network capability models for experimental processors, such as generalizing beyond training distributions and modeling the effects of coherent errors. Lastly, we apply our neural networks to model the capabilities of cloud-accessible quantum computing systems, obtaining moderate prediction accuracy (average absolute error of around 2-5%).
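The sketch below shows one way a convolutional capability model of the kind described above could be structured: a circuit is encoded as a (gate-type channels) x (qubits) x (circuit layers) tensor, and a small CNN maps it to a predicted success probability. The encoding and architecture details here are assumptions, not the paper's exact model.

```python
# Illustrative sketch of a convolutional neural network capability model.
import torch
import torch.nn as nn

class CNNCapabilityModel(nn.Module):
    def __init__(self, n_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool over qubits and circuit layers
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, circuit_tensor):
        # circuit_tensor: (batch, n_channels, n_qubits, n_layers), where each
        # channel flags the presence of one gate type at that location.
        return self.head(self.features(circuit_tensor))

model = CNNCapabilityModel(n_channels=4)
dummy = torch.zeros(1, 4, 5, 20)  # 5 qubits, 20 circuit layers
print(model(dummy))  # predicted success probability in [0, 1]
```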