To reject or not to reject, that is the question.

By
Nadia Milazzo
October 20, 2022

As quantum technologies advance rapidly, many questions arise about their properties, functioning, and potential. According to the European Quantum Technologies Roadmap [1], there are four main areas of research that form the landscape of this rich field, each with its own goals and challenges. Some examples of important technologies with their respective fields are [2]:

  1. Quantum memories — Quantum communication
  2. Robust logical qubits, fault-tolerant gates — Quantum computation
  3. Digital/Analogue quantum simulators (Quantum circuits/Annealers) — Quantum simulation
  4. Quantum-enhanced sensors (atomic/optical) — Quantum metrology and sensing

The increasing availability of these technologies naturally requires testing them efficiently, raising very deep questions on the feasibility of such tasks:

If a quantum experiment solves a problem which is proven to be intractable for classical computers, how can one verify the outcome of the experiment? [3]

The question above may sound familiar to some readers, especially those who have dealt with classical computer science. Indeed, a similar open problem exists there, known as the P ≟ NP problem: can every problem whose solution can be quickly verified also be quickly solved? Currently, most of the scientific community leans towards P ≠ NP. This would imply that solutions to problems in the NP class are more “easily” verified than found. This problem does not answer our question, but it gives us an insight into the potential difficulty of answering it (the P/NP problem has been open for half a century now!).

But let’s now take a step back and start with a better definition of the problem in the quantum context!

How to characterize quantum systems?

When reading the multitude of scientific articles on the characterization of quantum systems, one could at first get confused by all the different words used to address (apparently) the same thing: characterization, certification, verification, benchmarking… Where, then, is the difference (if there is one)?

In a recent review [4], the authors make a distinction based on different layers of complexity, as well as on some specificities of the method and its ultimate goal. In their framework, they identify, among others, a “physical” layer, i.e., everything regarding quantum states and processes (a.k.a. channels), and an “application” layer, which instead refers to more complex tasks (e.g., testing for the correctness of the solution that a quantum computer outputs). We can associate characterization and certification to the physical layer, while verification refers to the application one.

The "physical" layer includes quantum states and processes.

More specifically, certification can be seen as a subtask of characterization since the final certificate for a quantum system could contain less information than its characterization (for instance, for a quantum state, one could be interested in full tomography or only in some specific property of the state, e.g., entanglement).

In the end, where does benchmarking fit into all this? It is also related to characterization, but its goal is slightly different: it aims at comparing the performance of different quantum devices.

In the following, the term certification will be used to address the different existing protocols. The landscape of methods is very diverse, since this problem can be tackled in several ways and from different perspectives; this diversity results from many disciplines coming together, such as mathematics, computer science, physics, and engineering.

The best-known and most general certification process for quantum states and processes is quantum tomography. It provides full knowledge of the quantum system. Still, the number of measurements needed to complete the task scales very badly with the dimension of the system (e.g., for pure states, it scales as 2^n, where n is the number of qubits), and it thus soon becomes infeasible (it is already difficult to fully characterize a two-qubit channel).

Full tomography of the quantum state ρ: measurements have to be performed on several copies of the state.

Much effort has been devoted to avoiding full quantum tomography or to identifying cases in which it can be simplified (see, for instance, the results in [5–14]). Quantum system certification approaches take a different point of view and formulate the problem of testing the functionality of quantum systems as a protocol that outputs “ACCEPT” or “REJECT” based on the hypothesis of correct functioning. A description and classification of the different methods can be found in [4] and [15]. The latter work uses two axes of comparison to classify the existing methods, one considering the information gain provided, the other looking at how many and how strong the assumptions are for the protocol to work. Some of the authors of [15] have also created an online library of certification protocols on the Quantum Protocol Zoo to help the community advance in this crucial field for the development of quantum technologies (you can have a look here!).

In particular, for quantum devices, and more precisely for the noisy intermediate-scale quantum (NISQ) machines available at present, the most widely used certification methods are randomized benchmarking [16–18] and cross-entropy benchmarking [19–20]. The former estimates the performance of quantum gates by evaluating the error that accumulates when applying random gate sequences of different lengths, and is known to be robust against state preparation and measurement (SPAM) errors. The latter deals with the problem of correct sampling from the measurement output distribution of a quantum circuit; a quantity called the cross-entropy fidelity F is calculated from the set of collected samples: F = 1 means that the quantum computer outputting those samples is noiseless, while F = 0 means that it is too noisy and will thus not outperform classical machines.
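To make the cross-entropy idea concrete, here is a small sketch of the linear variant of the cross-entropy fidelity (the quantity used in [19–20] comes in several variants; the function name and the toy numbers below are purely illustrative):

```python
import numpy as np

def linear_xeb_fidelity(ideal_probs, samples, n_qubits):
    """Linear cross-entropy fidelity: F = 2^n * <p(x_i)> - 1, where p(x_i)
    is the ideal probability of the i-th bitstring sampled from the device."""
    mean_p = np.mean([ideal_probs[x] for x in samples])
    return (2 ** n_qubits) * mean_p - 1

# Toy 2-qubit example: the device tends to output the high-probability
# bitstrings of the ideal distribution, so F lands above 0; uniformly
# random (fully depolarized) samples would give F close to 0.
ideal = np.array([0.4, 0.3, 0.2, 0.1])
f = linear_xeb_fidelity(ideal, [0, 0, 1, 2], n_qubits=2)
print(f)  # ≈ 0.3
```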

Another protocol on the horizon

In a recent pre-print entitled “Principles of quantum functional testing” [21] — to which I contributed — a new path was taken to tackle the certification of quantum channels performed by quantum devices. As with the other certification protocols, the core of this method is a decision problem: given the producer's specifications on the characterizing parameters of the device, one aims at accepting or rejecting the device as quickly as possible (while providing a level of confidence that the decision is correct). Concluding in the shortest possible time is the key to this approach; it is also the motivation that guides classical functional testing, where one wants to waste as little time as possible on devices that do not function properly.

Let’s see the fundamental ideas and concepts used in the following sections.

Bayesian parameter estimation — A quantum channel is typically characterized by a set of continuous parameters; for this reason, the decision problem can be described as a quantum parameter estimation problem. In particular, in [21], Bayesian parameter estimation (BPE) techniques were applied; they are widely used to estimate the probability density function of random variables with unknown parameters, which in turn is used for decision-making or future inference. BPE is based on Bayes’ theorem, which allows one to incorporate prior knowledge into the decision-making process. The procedure followed in BPE can be summarized in three steps:

  1. Identify a prior distribution for the unknown parameters
  2. Collect data to calculate the likelihood function
  3. Update the distribution of the parameters using 1 and 2 to get the posterior distribution
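As a toy illustration of the three steps (not taken from [21]), consider a single unknown outcome probability with a conjugate Beta prior, for which the update in step 3 happens to be analytic:

```python
# Step 1: prior Beta(1, 1), i.e., uniform over the unknown probability theta.
a, b = 1.0, 1.0

# Step 2: collect data — say 7 "click" outcomes out of 10 measurements.
k, n = 7, 10

# Step 3: conjugacy makes the update analytic: posterior is Beta(a+k, b+n-k).
a_post, b_post = a + k, b + n - k
posterior_mean = a_post / (a_post + b_post)
print(posterior_mean)  # 8/12 ≈ 0.667
```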

The last point can be very hard to achieve analytically after a few updates of the probability distribution. This is why numerical methods have been implemented to overcome these difficulties, and they are the subject of the next paragraph.

Sequential Monte Carlo (SMC) methods — Known algorithms to approximate the posterior probability distribution often involve Monte Carlo methods. In particular, in the SMC approach [22], the collected data are processed sequentially, meaning that at each update of the probability distribution, the last posterior is set as the new prior. This is done by approximating the continuous distribution with a discrete one, described by ‘particles’ (i.e., parameter values) and ‘weights’ (i.e., probabilities) — forming the so-called particle filter — and updating the weights according to Bayes’ rule. New particle locations have to be resampled at some point in the algorithm to avoid numerical instabilities as data are collected. The SMC calculations in [21] were performed with the Python package QInfer, presented in [23].
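A minimal particle-filter sketch in plain NumPy (independent of QInfer; the single-qubit rotation-angle likelihood p(1 | θ) = sin²(θ/2) is an illustrative choice, not the model of [21]):

```python
import numpy as np

rng = np.random.default_rng(0)

def smc_update(particles, weights, outcome):
    """One SMC update: reweight by the likelihood of the observed outcome
    (Bayes' rule), then resample if the particle filter degenerates."""
    p1 = np.sin(particles / 2) ** 2          # toy likelihood p(1 | theta)
    lik = p1 if outcome == 1 else 1 - p1
    weights = weights * lik
    weights = weights / weights.sum()        # normalized posterior weights
    ess = 1.0 / np.sum(weights ** 2)         # effective sample size
    if ess < len(particles) / 2:             # resample when ESS drops
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx] + rng.normal(0.0, 0.01, len(particles))
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

true_theta = 1.0                             # unknown parameter to estimate
particles = rng.uniform(0.0, np.pi, 2000)    # prior: uniform on [0, pi]
weights = np.full(2000, 1.0 / 2000)
for _ in range(300):                         # simulated measurement record
    outcome = int(rng.random() < np.sin(true_theta / 2) ** 2)
    particles, weights = smc_update(particles, weights, outcome)
est = np.sum(weights * particles)            # posterior-mean estimate of theta
print(est)
```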

In the case of the certification of quantum channels, new data are collected through measurements on the quantum system; incorporating the new knowledge coming from data via sequential updates of the posterior probability distribution suggests the possibility of adaptively optimizing the available measurements. Let’s see how!

Experimental design and utility functions — Adaptive strategies for choosing the next best measurement (or, more generally, the next best experiment) in the update process can be translated into optimizing a utility function U [24–25]. If this optimization is performed at each step, the corresponding adaptive experimental design is often referred to as a myopic or greedy approach. If it is instead performed over more than one step at a time, one deals with a non-greedy design. Common choices of the utility function U for parameter estimation problems are functions of the posterior covariance matrix (e.g., the variance itself in the case of single-parameter estimation) or information-based functions [26], which follow the basic principle that going from the prior to the posterior distribution should increase the available information (e.g., the information gain or the mutual information [25]).
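A variance-based utility can be sketched as follows (a toy greedy step over a particle filter; the likelihood model, the candidate settings, and the function names are illustrative, not those of [21]):

```python
import numpy as np

def expected_posterior_variance(particles, weights, phi):
    """Expected posterior variance for a candidate measurement setting phi,
    assuming a Bernoulli likelihood p(1 | theta, phi) = sin^2((theta - phi)/2).
    Lower is better, so the greedy rule minimizes this utility."""
    p1 = np.sin((particles - phi) / 2) ** 2
    exp_var = 0.0
    for outcome_probs in (p1, 1 - p1):           # average over both outcomes
        w = weights * outcome_probs
        marginal = w.sum()                       # probability of this outcome
        if marginal > 0:
            w = w / marginal                     # posterior given the outcome
            mean = np.sum(w * particles)
            exp_var += marginal * np.sum(w * (particles - mean) ** 2)
    return exp_var

# Greedy (myopic) design step: evaluate the utility for each candidate
# setting and pick the one that minimizes the expected posterior variance.
rng = np.random.default_rng(1)
particles = rng.uniform(0.0, np.pi, 1000)        # current particle filter
weights = np.full(1000, 1.0 / 1000)
candidates = np.linspace(0.0, np.pi, 8)
best_phi = min(candidates,
               key=lambda phi: expected_posterior_variance(particles, weights, phi))
print(best_phi)
```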

Now that the foundations are set, we are ready to ask the question: is there a way of accelerating the testing process in the specific case of quantum channels?

New ingredients to speed up the decision problem

The quantum-mechanical context allows the investigation of some new ideas for functional testing; in the following paragraphs, three new principles are discussed.

Iteration of the channel — Imagine you receive from a producer a quantum device that is supposed to perform a channel Φ (e.g., a rotation channel), while what it performs in reality is a faulty channel Φ’ (for instance, the rotation angle or the rotation axis is not exact). Repeated application of the channel before measuring the output can enhance the detection of such an error: the channel's coherence can be exploited to gain a factor N in efficiency, where N is the number of iterations of the channel. The conclusion is not always straightforward: while iterating the channel can make coherent errors more detectable, it can have the opposite effect on incoherent errors (e.g., depolarization). A more rigorous investigation of the different cases can be done using results on the quantum Chernoff bound [27].
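The amplification of a coherent error under iteration can be seen in a short example (a Z-rotation with a small angle error ε; the specific numbers are illustrative):

```python
import numpy as np

def rz(angle):
    # Single-qubit rotation about the Z axis.
    return np.diag([np.exp(-1j * angle / 2), np.exp(1j * angle / 2)])

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # |+> state, sensitive to Z phases
theta, eps = np.pi / 4, 0.01               # claimed angle and coherent error

for n_iter in (1, 10, 50):
    ideal = np.linalg.matrix_power(rz(theta), n_iter) @ plus
    faulty = np.linalg.matrix_power(rz(theta + eps), n_iter) @ plus
    infidelity = 1 - abs(np.vdot(ideal, faulty)) ** 2
    # For this channel the infidelity is sin^2(n_iter * eps / 2), which
    # grows roughly as (n_iter * eps / 2)^2: the error becomes much
    # easier to detect after many iterations.
    print(n_iter, infidelity)
```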

Non-greedy adaptive experimental design — After introducing the idea of iterating the quantum channel, adopting a non-greedy adaptive experimental design becomes more appealing: one looks not only for the next best measurement but for an optimal sequence of measurements. More precisely, the decision whether to repeat the channel or measure the output is made at each step by looking at the information already gathered, to determine which of the two actions is the most advantageous.

Sketch of the certification protocol in the experiment-limited scenario explored in [21].

Efficient decision criteria — Together with the quantum device, the provider gives you a confidence region associated with the characterizing parameters of the channel. This region can be used to define the decision criterion to accept or reject the producer's claim. Different decision criteria can impact the method's efficiency; for instance, in the case of single-parameter estimation, the region is simply an interval, and one could, e.g., look at the position of the mean of the posterior probability distribution with respect to this interval. A slightly more complex option relies instead on comparing this interval with a credible region, e.g., the highest posterior density (HPD) region, defined as the smallest region containing 95% of the probability mass, depicted in the figure below.
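Such a criterion can be sketched from posterior samples (the helper names, the toy Gaussian posterior, and the simplified three-way rule below are illustrative, not the exact criterion of [21]):

```python
import numpy as np

def hpd_interval(samples, mass=0.95):
    """Shortest interval containing `mass` of the (unimodal) posterior samples,
    a sample-based approximation of the 95% HPD region."""
    x = np.sort(samples)
    n = len(x)
    k = int(np.ceil(mass * n))                 # samples inside the interval
    widths = x[k - 1:] - x[:n - k + 1]         # width of every k-sample window
    i = np.argmin(widths)                      # shortest window wins
    return x[i], x[i + k - 1]

def decide(hpd, spec):
    """ACCEPT if the HPD lies inside the producer's interval, REJECT if the
    two are disjoint, INCONCLUSIVE if they merely overlap."""
    lo, hi = hpd
    if spec[0] <= lo and hi <= spec[1]:
        return "ACCEPT"
    if hi < spec[0] or lo > spec[1]:
        return "REJECT"
    return "INCONCLUSIVE"

rng = np.random.default_rng(2)
posterior_samples = rng.normal(0.50, 0.01, 10000)  # toy posterior over an angle
hpd = hpd_interval(posterior_samples)
print(decide(hpd, spec=(0.45, 0.55)))              # ACCEPT for this toy case
```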

Examples of posterior probability distributions and corresponding credible regions (the area under the curve delimited by the violet interval) in the single-parameter estimation case (rotation angle). The dotted red lines represent the confidence interval given by the producer. From left to right: the decision criterion based on the 95% HPD outputs ACCEPT, INCONCLUSIVE and REJECT, respectively. The curves above were obtained with the QInfer Python package.

If you are interested in more technical details (e.g., the Chernoff bound investigation) and/or results for the specific quantum channels considered, you can look at the pre-print here!

Progress on the certification and verification of quantum systems is crucial for the research we conduct at ColibrITD, especially in the context of quantum algorithms and software. There, efficient protocols for certifying correct quantum computation become essential, in particular when dealing with noisy machines such as those available at present.

References

[1] QUROPE Quantum Information Processing and Communication in Europe, QT roadmap, QT roadmap 2016 | QUROPE (2016).

[2] Antonio Acín et al., The quantum technologies roadmap: a European community view, New J. Phys. 20 080201 (2018).

[3] Gheorghiu, A. et al., Verification of Quantum Computation: An Overview of Existing Approaches, Theory Comput Syst 63, 715–808 (2019).

[4] Martin Kliesch and Ingo Roth, Theory of Quantum System Certification, PRX Quantum 2, 010201 (2021).

[5] Cramer, M. et al., Efficient quantum state tomography, Nat Commun 1, 149 (2010).

[6] Christopher Ferrie, Self-Guided Quantum Tomography, Phys. Rev. Lett. 113, 190404 (2014).

[7] Christopher Granade et al., Practical Bayesian tomography, New J. Phys. 18 033024 (2016).

[8] F. Huszár and N. M. T. Houlsby, Adaptive Bayesian quantum tomography, Phys. Rev. A 85, 052120 (2012).

[9] Tobias Moroder et al., Permutationally invariant state reconstruction, New J. Phys. 14 105001 (2012).

[10] Erik Nielsen et al., Gate Set Tomography, Quantum 5, 557 (2021).

[11] Ahmad, S. et al., Self-guided quantum state tomography for limited resources, Sci Rep 12, 5092 (2022).

[12] David Gross et al., Quantum State Tomography via Compressed Sensing, Phys. Rev. Lett. 105, 150401 (2010).

[13] T Baumgratz et al., A scalable maximum likelihood method for quantum state tomography, New J. Phys. 15 125004 (2013).

[14] Quek, Y. et al., Adaptive quantum state tomography with neural networks, npj Quantum Inf 7, 105 (2021).

[15] Eisert, J. et al., Quantum certification and benchmarking, Nat Rev Phys 2, 382–390 (2020).

[16] E. Knill et al., Randomized benchmarking of quantum gates, Phys. Rev. A 77, 012307 (2008).

[17] Easwar Magesan et al., Scalable and Robust Randomized Benchmarking of Quantum Processes, Phys. Rev. Lett. 106, 180504 (2011).

[18] Easwar Magesan et al., Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking, Phys. Rev. Lett. 109, 080505 (2012).

[19] Boixo, S. et al., Characterizing quantum supremacy in near-term devices, Nature Phys 14, 595–600 (2018).

[20] Arute, F. et al., Quantum supremacy using a programmable superconducting processor, Nature 574, 505–510 (2019).

[21] N. Milazzo et al., Principles of quantum functional testing, arXiv:2209.11712 (2022).

[22] Doucet, A. et al., On sequential Monte Carlo sampling methods for Bayesian filtering, Statistics and Computing 10, 197–208 (2000).

[23] C. Granade et al., QInfer: Statistical inference software for quantum applications, Quantum 1, 5 (2017).

[24] K. Chaloner et al., Bayesian Experimental Design: A Review, Statistical Science, 10(3): 273–304 (1995).

[25] E. G. Ryan et al., A Review of Modern Computational Algorithms for Bayesian Optimal Design, International Statistical Review 84, 128 (2016).

[26] Christopher E Granade et al., Robust online Hamiltonian learning, New J. Phys. 14 103013 (2012).

[27] K. M. R. Audenaert et al., Discriminating States: The Quantum Chernoff Bound, Phys. Rev. Lett. 98, 160501 (2007).
