Abstract: Holistic benchmarks for quantum computers are essential for testing and summarizing the performance of quantum hardware. However, holistic benchmarks -- such as algorithmic or randomized benchmarks -- typically do not predict a processor's performance on circuits outside the benchmark's necessarily very limited set of test circuits. In this paper, we introduce a general framework for building predictive models from benchmarking data using capability models. Capability models can be fit to many kinds of benchmarking data and used for a variety of predictive tasks. We demonstrate this flexibility with two case studies. In the first case study, we predict circuit (i) process fidelities and (ii) success probabilities by fitting error rates models to two kinds of volumetric benchmarking data. Error rates models are simple yet versatile capability models that assign effective error rates to individual gates, or to more general circuit components. In the second case study, we construct a capability model for predicting circuit success probabilities by applying transfer learning to ResNet50, a neural network trained for image classification. Our case studies use data from cloud-accessible quantum computers and from simulations of noisy quantum computers.
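To make the idea of an error rates model concrete, the following is a minimal sketch of how such a model can turn per-gate error rates into a circuit-level prediction: each gate type is assigned an effective error rate, and a circuit's success probability is approximated as the product of the per-gate survival probabilities. The gate names, error-rate values, and circuit below are purely illustrative, not fitted values from the paper's benchmarking data.

```python
def predict_success_probability(circuit, error_rates):
    """Predict a circuit's success probability from effective per-gate error rates.

    circuit: sequence of gate labels, e.g. ["H", "CNOT", ...]
    error_rates: dict mapping gate label -> effective error rate in [0, 1]
    """
    p = 1.0
    for gate in circuit:
        # Each gate succeeds independently with probability (1 - error rate).
        p *= 1.0 - error_rates[gate]
    return p

# Hypothetical effective error rates (in practice, fit to benchmarking data).
error_rates = {"X": 0.001, "H": 0.002, "CNOT": 0.01}
circuit = ["H", "CNOT", "X", "CNOT", "H"]
print(round(predict_success_probability(circuit, error_rates), 6))  # -> 0.975207
```

In a fitted capability model the error rates would be estimated from volumetric benchmarking data rather than assumed, but the forward prediction step has this simple multiplicative form.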
Abstract: Quantum characterization, validation, and verification (QCVV) techniques are used to probe, characterize, diagnose, and detect errors in quantum information processors (QIPs). An important component of any QCVV protocol is a mapping from experimental data to an estimate of a property of a QIP. Machine learning (ML) algorithms can help automate the development of QCVV protocols, creating such maps by learning them from training data. We identify the critical components of "machine-learned" QCVV techniques and present a rubric for developing them. To demonstrate this approach, we focus on the problem of determining whether the noise affecting a single qubit is coherent or stochastic (incoherent), using the data sets originally proposed for gate set tomography (GST). We leverage known ML algorithms to train a classifier that distinguishes these two kinds of noise. The accuracy of the classifier depends on how well it can approximate the "natural" geometry of the training data. We find that GST data sets generated by a noisy qubit can reliably be separated by linear surfaces, although feature engineering may be necessary. We also show that the classifier learned by a support vector machine (SVM) is robust under finite-sample noise.
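The classification step described above can be sketched with a linear SVM, here using scikit-learn. The two well-separated synthetic blobs below stand in for engineered features of GST data from coherently and stochastically noisy qubits; the feature construction is purely illustrative, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for engineered GST features: one cluster per noise class.
coherent = rng.normal(loc=[1.0, 1.0], scale=0.2, size=(100, 2))
stochastic = rng.normal(loc=[-1.0, -1.0], scale=0.2, size=(100, 2))
X = np.vstack([coherent, stochastic])
y = np.array([0] * 100 + [1] * 100)  # 0 = coherent, 1 = stochastic

# A linear kernel suffices when the classes are separable by a hyperplane,
# mirroring the abstract's finding that linear surfaces separate the data.
clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))  # -> 1.0 on these well-separated blobs
```

With real GST data the features would come from the experimental count statistics, and robustness to finite-sample noise could be probed by refitting on data sets simulated with fewer shots.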