Quantum device fidelity benchmark

Introduction

Over the last few years, a quantum computing race has been underway, and its main challenge is to build efficient and reliable qubits. The major players regularly announce a new roadmap, a major breakthrough, or a claim of quantum supremacy.

One of the most important aspects of this race is being able to characterize and measure the quality and fidelity of different quantum devices, since device fidelity has a direct effect on the results of quantum algorithms.

The purpose of this article is to share a ready-to-use Python module that performs a quantum device fidelity benchmark.

Languages & Frameworks

This benchmark is implemented in:

  • PyQuil
  • Q#
  • Qiskit

Currently, only IBM (through Qiskit) allows direct circuit execution on a quantum computer; Rigetti and Microsoft restrict access to their quantum computers to key customers or partners. The Rigetti APIs are nevertheless available and implemented through PyQuil, and for Microsoft Azure Quantum the module could easily be adapted in the future. It is also possible to run the benchmark in simulation mode, which means it is not subject to noise and other undesired effects, yet these effects are exactly what we want to evaluate. To approximate the behaviour of a real quantum computer, I have therefore added the possibility to simulate noise in all languages and frameworks.
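
As an illustration of the idea, here is a minimal sketch of how depolarizing noise can be simulated with Qiskit Aer. The error rates, gate set and test circuit below are illustrative assumptions; the module's actual noise model may differ.

# Minimal sketch of depolarizing-noise simulation with Qiskit Aer.
# The error rates, gate set and test circuit are illustrative only.
from qiskit import QuantumCircuit, Aer, execute
from qiskit.providers.aer.noise import NoiseModel, depolarizing_error

noise_model = NoiseModel()
# 1% depolarizing error on single-qubit gates, 2% on CNOT gates.
noise_model.add_all_qubit_quantum_error(depolarizing_error(0.01, 1), ['u1', 'u2', 'u3'])
noise_model.add_all_qubit_quantum_error(depolarizing_error(0.02, 2), ['cx'])

# A trivial Bell-state circuit, just to exercise the noisy simulator.
circuit = QuantumCircuit(2, 2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure([0, 1], [0, 1])

counts = execute(circuit, Aer.get_backend('qasm_simulator'),
                 shots=1024, noise_model=noise_model).result().get_counts()
print(counts)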

The benchmark

For this benchmark, I have implemented the Hidden-Shift (HS) and Bernstein–Vazirani (BV) algorithms [1][2]. Both algorithms take an input: a bit string for BV and a shift for HS. Each algorithm can therefore be run with 2^N possible inputs, where N is the number of qubits, and by measuring the circuit output we should retrieve the input data. The benchmark runs each of the 2^N possible inputs X times, where X is the shot count, then computes a fidelity rate from the results, plots a graph, and saves it as a picture.
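
As an illustration, here is a minimal sketch of one plausible way to compute such a fidelity rate from measurement counts: the fraction of shots, over all 2^N inputs, whose measured bit string matches the expected one. This is an assumption for illustration; the module's exact formula may differ.

# Sketch of one plausible fidelity computation (an assumption for
# illustration; the benchmark module's actual formula may differ).
def fidelity_rate(counts_per_input, shots):
    """counts_per_input maps each input bit string to a dict of
    {measured bit string: count} for that circuit run."""
    total = 0
    correct = 0
    for expected, counts in counts_per_input.items():
        total += shots
        correct += counts.get(expected, 0)
    return correct / total

# Example with 1 qubit and 4 shots per input:
counts = {'0': {'0': 4}, '1': {'1': 3, '0': 1}}
print(fidelity_rate(counts, 4))  # 0.875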

Benchmark parameters

Common parameters

All environments share the following parameters:

  • Qubit count: the number of qubits used for the benchmark (must be an even value for the Hidden-Shift algorithm)
  • Shots: the number of shots for each possible input
  • Use noise: enables noise simulation; should not be set to true if the benchmark is run on a real quantum computer

Language- and framework-specific parameters

Pyquil
  • Use QPU: enables benchmark execution on a Rigetti Quantum Processing Unit; if enabled, the QPU name to use must be set in \pyquil_modules\pyquil_backend.py
Qiskit
  • Use IBMQ: enables benchmark execution on IBM Quantum; if enabled, the IBM token and backend must be set in \qiskit_modules\qiskit_backend.py (see the sketch after this list)
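
For illustration, an IBM Quantum backend configuration along the following lines could be used for Qiskit. The helper name, token placeholder and device name are assumptions, not the actual content of qiskit_backend.py.

# Hypothetical sketch of a qiskit_backend.py configuration
# (token placeholder, device name and helper name are assumptions).
from qiskit import IBMQ

IBM_TOKEN = '<your IBM Quantum API token>'
BACKEND_NAME = 'ibmq_quito'  # illustrative device name

def get_ibmq_backend():
    # Enable the account for this session and return the selected device.
    provider = IBMQ.enable_account(IBM_TOKEN)
    return provider.get_backend(BACKEND_NAME)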

Running the benchmark

The benchmark can be run very simply; an example is given in the main <language_name>_benchmark.py script of each framework. Here is the one for Q#:

from qsharp_modules.qsharp_run_benchmark import *

QuBitCount = 4
Shots = 1024
UseNoise = False

BV_fidelity = RunBVBenchmark(QuBitCount, Shots, UseNoise)
print('Fidelity rate with Bernstein–Vazirani Algorithm : {:.4f}'.format(BV_fidelity))

HS_fidelity = RunHSBenchmark(QuBitCount, Shots, UseNoise)
print('Fidelity rate with Hidden-Shift Algorithm : {:.4f}'.format(HS_fidelity))
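
The Qiskit and PyQuil variants presumably follow the same pattern. As a hypothetical sketch, extrapolated from the Q# example above (the module path, function signatures and the UseIBMQ argument are assumptions and may differ from the actual scripts), running the Qiskit benchmark against IBM Quantum could look like this:

# Hypothetical sketch, extrapolated from the Q# example above:
# module path, function signatures and the UseIBMQ argument are assumptions.
from qiskit_modules.qiskit_run_benchmark import *

QuBitCount = 4
Shots = 1024
UseNoise = False
UseIBMQ = True  # execute on IBM Quantum instead of the local simulator

BV_fidelity = RunBVBenchmark(QuBitCount, Shots, UseNoise, UseIBMQ)
print('Fidelity rate with Bernstein–Vazirani Algorithm : {:.4f}'.format(BV_fidelity))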

Results

The corresponding fidelity rates are shown:

Fidelity rate with Bernstein–Vazirani Algorithm : 0.5018
Fidelity rate with Hidden-Shift Algorithm : 0.4969

Graphs are also plotted in 3D and saved to file. As an example, here is the plot for the BV benchmark in Q#:

Source code

Download source code

References

[1] Wright, K., Beck, K. M., Debnath, S. et al. Benchmarking an 11-qubit quantum computer. Nat. Commun. 10, 5464 (2019). https://doi.org/10.1038/s41467-019-13534-2

[2] Linke, N. M., Maslov, D., Roetteler, M., Debnath, S., Figgatt, C., Landsman, K. A., Wright, K. & Monroe, C. Experimental comparison of two quantum computing architectures. Proc. Natl. Acad. Sci. USA 114 (2017). https://doi.org/10.1073/pnas.1618020114
