Quantum Machine Learning: a quick overview

Hamza Jaffali
August 12, 2023


The goal of Machine Learning is to teach the machine how to perform a specific task, without providing explicit instructions. It is divided into three main families: supervised learning (teaching by example), unsupervised learning (clustering and dimensionality reduction) and reinforcement learning (trial and error, behaviorism).

Global view of Machine Learning families and applications

When one talks about Quantum Machine Learning (QML), it can refer to three different approaches. The first one is to apply Classical Machine Learning (CML) to solve problems in quantum physics or quantum information. The second one is to use quantum computations to speed up classical machine learning techniques, sometimes leading to hybrid algorithms. This is also known as Quantum-Enhanced Machine Learning. The last one is to use pure quantum models, or adaptations of learning algorithms, to exploit the full potential of Quantum Mechanics and Quantum Information Processing.

Why Quantum Machine Learning?

Nowadays, our society is facing different kinds of challenges. Providing solutions to these difficulties can help our world organize itself better and leave a stable heritage to the next generations. In recent years, with the development of Machine Learning techniques, many problems have been tackled using these algorithms, with the help of large amounts of collected data.

However, when considering the most difficult problems challenging our society, current machine learning algorithms and approaches show some limitations, in terms of the size and amount of data that can be processed, but also from the point of view of efficiency and performance.

Quantum Machine Learning has the potential, by exploiting the advantages of Quantum Computing and Quantum Information Processing, to help machines learn faster and tackle more difficult tasks.

“Simulating the behavior of 100 billion neurons of human brain is not feasible by classical computer but quantum machine learning promises to fulfill that requirement”

Amit Ray

Current state of the art

In this article, our goal is not to focus on the first approach of QML, since we want to focus on algorithms and techniques involving quantum computations, even though one can find interesting applications of Classical Machine Learning to Quantum Mechanics [4, 36, 21, 44, 32].

We will rather discuss the work produced on quantum-enhanced machine learning and on pure quantum machine learning algorithms. We will divide this overview following the three well-known families.

Supervised Learning

Supervised Learning is certainly the most famous and developed aspect of Machine Learning, both in academic and industrial research. This is also the case in Quantum Machine Learning, since many works have tried to adapt classical supervised algorithms to the quantum setting [47, 38].

Neural networks are among the most used supervised algorithms, because of their ability to classify and make accurate predictions. Several models of quantum neural networks and quantum perceptrons have been proposed in the literature, with different ways of encoding data [34, 53, 39, 55, 17, 52, 35]. Several improvements in the training were proposed, using quantum optimization or other methods [31, 5], leading to a few first applications [7, 17].
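To illustrate the core idea behind such models, here is a minimal classical simulation (our own illustrative sketch, not taken from any of the cited works, assuming the simplest single-qubit angle encoding): the input feature is encoded with an RY rotation, a trainable rotation plays the role of the weight, and the expectation value of the Pauli-Z observable is the neuron's output.

```python
import math

def ry(theta, state):
    """Apply the RY(theta) rotation to a single-qubit state [a, b] (real amplitudes)."""
    a, b = state
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [c * a - s * b, s * a + c * b]

def expectation_z(state):
    """<Z> = |amp_0|^2 - |amp_1|^2 for a real-amplitude state."""
    return state[0] ** 2 - state[1] ** 2

def quantum_neuron(x, theta):
    """Angle-encode feature x, apply a trainable rotation, measure <Z>.
    Since RY(theta) RY(x) |0> = RY(x + theta) |0>, this evaluates to cos(x + theta)."""
    state = [1.0, 0.0]        # |0>
    state = ry(x, state)      # data encoding
    state = ry(theta, state)  # trainable layer
    return expectation_z(state)
```

In a real quantum neural network the expectation value would be estimated from repeated measurements on hardware, and `theta` would be trained by a classical optimizer in a hybrid loop.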

Representation of an artificial neuron, the core unit of neural networks.

Another well-known algorithm is the Support Vector Machine, whose purpose is to find the optimal hyperplane spatially separating the data into classes, according to the training dataset. In the quantum version, the kernel (or feature map) can for instance be replaced by an efficient quantum computation of the overlap between two states encoding the datapoints. Several works [50, 11, 8, 46, 19, 29] have been produced on Quantum Support Vector Machines since the first article on this subject was published in 2014 [43].
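To make the overlap idea concrete, here is a minimal classical simulation (our own illustrative sketch, with an assumed single-qubit angle encoding): each datapoint is mapped to a quantum state, and the kernel entry is the squared overlap between the two states, which is the quantity a swap test would estimate on real hardware.

```python
import math

def feature_state(x):
    """Angle-encode a scalar datapoint as the single-qubit state RY(x)|0>."""
    return [math.cos(x / 2), math.sin(x / 2)]

def quantum_kernel(x, y):
    """Kernel entry k(x, y) = |<phi(x)|phi(y)>|^2, i.e. the state overlap
    that a swap test would estimate on hardware. For this encoding it
    equals cos((x - y) / 2) squared."""
    a, b = feature_state(x), feature_state(y)
    overlap = a[0] * b[0] + a[1] * b[1]
    return overlap ** 2

# The Gram matrix built this way is then fed to an otherwise classical SVM solver.
points = [0.0, 0.5, 2.0]
gram = [[quantum_kernel(xi, xj) for xj in points] for xi in points]
```

The hoped-for advantage comes from feature maps whose overlaps are hard to evaluate classically; the toy encoding above is, of course, trivially classical.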

Graphical representation of the feature space transformation used in Quantum Support Vector Machine (source: AI Data Business)

One can also mention an adaptation of Ensemble Learning (sometimes called Boosting) to Quantum Computing with QBoost [37].
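The spirit of QBoost can be sketched as follows: one searches for a binary weight vector selecting weak classifiers so as to minimize a regularized squared loss, a cost that QBoost formulates as a QUBO for an adiabatic quantum optimizer. In this toy sketch (illustrative data, with a brute-force search standing in for the annealer):

```python
from itertools import product

def qboost_cost(weights, weak_preds, labels, lam=0.1):
    """QUBO-style objective: squared loss of the weighted committee
    plus a sparsity penalty on the number of selected classifiers."""
    n = len(labels)
    loss = sum(
        (sum(w * p[i] for w, p in zip(weights, weak_preds)) - labels[i]) ** 2
        for i in range(n)
    ) / n
    return loss + lam * sum(weights)

# Two weak classifiers voting +1/-1 on four samples (illustrative data).
weak_preds = [[1, 1, -1, -1], [1, -1, -1, 1]]
labels = [1, 1, -1, -1]

# Exhaustive search over binary weights stands in for the quantum annealer.
best = min(product([0, 1], repeat=2),
           key=lambda w: qboost_cost(w, weak_preds, labels))
```

Here the first weak classifier alone reproduces the labels, so the search selects it and drops the second one.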

In most of the present works, an exponential speed-up is claimed, but it is not always rigorously verified when considering practical implementations of such algorithms. In a recent paper [13], the limits of Quantum Supervised Learning are discussed, with the claim that it can show at most polynomial speed-ups over its classical counterpart.

Unsupervised Learning

Quantum Computing was also introduced in several Unsupervised Learning algorithms, providing new quantum-enhanced machine learning algorithms.

Quantum Principal Component Analysis (qPCA), first proposed by Lloyd et al. [31] in 2014, extracts the spectral decomposition of a density matrix encoding the data, by combining density-matrix exponentiation with quantum phase estimation. Many works were derived from it, providing improvements in the performance or precision of qPCA [27, 48, 20, 18]. Recently, qPCA was used in finance to simulate the Heath-Jarrow-Morton model for pricing interest-rate financial derivatives [33].
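Classically, the dominant principal component can be extracted by power iteration on the covariance matrix; qPCA obtains the same spectral information through phase estimation instead. The following classical sketch (our own illustrative numbers, not taken from [31]) shows the object being computed:

```python
def power_iteration(matrix, steps=100):
    """Extract the dominant eigenvector of a symmetric matrix by repeated
    multiplication and normalization; qPCA targets the same eigenvector
    via quantum phase estimation on a density matrix encoding the data."""
    v = [1.0] * len(matrix)
    for _ in range(steps):
        w = [sum(matrix[i][j] * v[j] for j in range(len(v)))
             for i in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Covariance of a dataset stretched along the first axis (illustrative numbers).
cov = [[4.0, 1.0], [1.0, 1.0]]
principal = power_iteration(cov)
```

For this matrix the principal component points mostly along the first axis, reflecting where most of the variance lies.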

Illustration of the principle of Principal Component Analysis.

The k-means and nearest-neighbors algorithms were also adapted, leading for instance to the q-means algorithm. These clustering algorithms exploit the efficiency of quantum computers at computing distances, and several versions have been proposed in the literature [1, 30, 54, 22, 6, 26]. A recent experimental implementation of this algorithm was proposed on photonic quantum computers [9]. We can also cite another work with the same unsupervised approach [10], for state tomography.
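The classical skeleton these quantum versions accelerate is the alternation of a distance-based assignment step and a centroid-update step; in q-means, the distance estimates are obtained from quantum state overlaps. A minimal classical sketch (illustrative points):

```python
import math

def assign(points, centroids):
    """One k-means assignment step. The distance evaluations are the part
    that quantum subroutines estimate via state overlaps."""
    clusters = [[] for _ in centroids]
    for p in points:
        d = [math.dist(p, c) for c in centroids]
        clusters[d.index(min(d))].append(p)
    return clusters

def update(clusters):
    """Recompute each centroid as the mean of its assigned points."""
    return [tuple(sum(x) / len(c) for x in zip(*c)) for c in clusters]

points = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
centroids = [(0.0, 0.0), (5.0, 5.0)]
clusters = assign(points, centroids)
centroids = update(clusters)
```

Repeating the two steps until the centroids stop moving yields the usual k-means loop; the quantum variants aim to cut the cost of the assignment step for high-dimensional data.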

K-nearest-neighbours algorithm illustrated with a 2D example.

Reinforcement Learning

Reinforcement Learning has received less interest than the two other families of machine learning, but interesting proposals have nevertheless been made.

Diagram of relations between Agent, Environment and the Reinforcement Learning algorithm (MathWorks).

Most of the works are based on the agent model [40, 16], in which an agent interacts with its environment and selects the best actions to maximize a reward function, as first proposed in [15].
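The agent-environment loop underlying these works can be sketched classically; the quantum proposals speed up or quantize parts of this loop, such as the agent's deliberation. A toy example with an epsilon-greedy agent on a two-armed bandit (all names and numbers are our own illustration, not from the cited papers):

```python
import random

def train_agent(mean_rewards, episodes=2000, eps=0.1, seed=0):
    """Minimal agent-environment loop: the agent picks actions, the
    environment returns noisy rewards, and the agent's value estimates
    are updated incrementally from experience."""
    rng = random.Random(seed)
    q = [0.0] * len(mean_rewards)   # estimated value of each action
    counts = [0] * len(mean_rewards)
    for _ in range(episodes):
        # Epsilon-greedy action selection: explore with probability eps.
        if rng.random() < eps:
            a = rng.randrange(len(q))
        else:
            a = q.index(max(q))
        r = mean_rewards[a] + rng.gauss(0, 0.1)  # noisy reward from environment
        counts[a] += 1
        q[a] += (r - q[a]) / counts[a]           # incremental mean update
    return q

values = train_agent([0.2, 0.8])
```

After training, the agent's value estimates identify the second action as the better one, which is the behavior the quantum-enhanced agents aim to reach with fewer interactions.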

Different approaches were also used to implement Quantum Reinforcement Learning, such as Boltzmann machines [14] and Variational Quantum Circuits [12]. These algorithms were recently applied to maximizing the overlap between two states [3] and to estimating eigenvectors of a given observable [2].

Experimental realizations using superconducting circuits [25], quantum optical neural networks [49] or photonic systems [23] were proposed in the literature, sometimes showing in practice an experimental speedup [45].

Existing tools

There already exist some packages and libraries allowing one to combine classical and quantum computations to implement quantum machine learning.

We can first mention PennyLane, an open-source software library from Xanadu for performing simulations of Quantum Machine Learning. It combines classical machine learning packages with quantum simulators and hardware.

The famous Qiskit environment provides the Aqua package, which allows one to implement the Quantum SVM and the associated feature map. However, it must be combined with other classical packages.

The new tool TensorFlow Quantum, published by Google, integrates Cirq with TensorFlow and offers high-level abstractions for implementing quantum-classical models.

On the other hand, QML is a Python-compatible toolkit [42], but not a high-level framework. It only provides functionalities to launch simulations.

Another one is Paddle Quantum, an open-source QML toolkit based on Baidu PaddlePaddle (the first open-source, industrial-level deep learning platform in China), which uses the quantum simulator of the Baidu Institute for Quantum Computing.

Finally, Q#, developed by Microsoft, provides a development toolkit for machine learning and quantum algorithms.

We also mention AWS with Amazon Braket, Tequila [24], tket (Cambridge Quantum Computing), Strawberry Fields [51] and Forest (Rigetti's software library), which may in the future provide programming blocks or libraries for Quantum Machine Learning.

Most of these projects are referenced in [28].

Overview of the international ecosystem in Quantum Computing Software.

At ColibrITD

At ColibrITD, we aim to work at the highest level of abstraction and place ourselves at the top layer of the quantum software ecosystem.

In fact, our project is to implement a full-stack quantum software framework composed of three levels of action: first, translating classical code to quantum code; secondly, modeling and optimizing quantum computations on our quantum engine; and finally, executing them on the right available hardware device.

One of the first applications and use cases we selected for our framework concerns Quantum Reinforcement Learning algorithms. We aim to develop the first Quantum Reinforcement Learning platform for solving various problems by leveraging quantum computations and the associated speed-up.

We at ColibrITD believe that quantum advantage should be brought to everyone, and developing an automated tool that can translate classical Reinforcement Learning algorithms to the quantum setting is one of our missions.


[1] Esma Aïmeur, Gilles Brassard, and Sébastien Gambs. “Quantum speed-up for unsupervised learning”. In: Machine Learning 90.2 (2013), pp. 261–287.

[2] F Albarrán-Arriagada et al. “Reinforcement learning for semi-autonomous approximate quantum eigensolver”. In: Machine Learning: Science and Technology 1.1 (2020), p. 15002.

[3] Francisco Albarrán-Arriagada et al. “Measurement- based adaptation protocol with quantum reinforcement learning”. In: Physical Review A 98.4 (2018), p. 42315.

[4] Louis-François Arsenault et al. “Machine learning for many-body physics: The case of the Anderson impurity model”. In: Physical Review B 90.15 (2014), p. 155136.

[5] Marcello Benedetti et al. “Estimation of effective temperatures in quantum annealers for sampling applications: A case study with possible applications in deep learning”. In: Phys. Rev. A 94 (2 Aug. 2016), p. 022308. doi: 10.1103/PhysRevA.94.022308. url: https://link.aps.org/doi/10.1103/PhysRevA.94.022308.

[6] Kaoutar Benlamine et al. “Distance Estimation for Quantum Prototypes Based Clustering”. In: International Conference on Neural Information Processing. 2019, pp. 561–572.

[7] Aradh Bisarya et al. “Breast Cancer Detection Using Quantum Convolutional Neural Networks: A Demonstration on a Quantum Computer”. In: medRxiv (2020).

[8] Arit Kumar Bishwas, Ashish Mani, and Vasile Palade. “Quantum Supervised Clustering Algorithm for Big Data”. In: 2018 3rd International Conference for Convergence in Technology (I2CT). 2018.

[9] Andrew Blance and Michael Spannowsky. Unsupervised Event Classification with Graphs on Classical and Photonic Quantum Computers. 2021. arXiv: 2103.03897 [hep-ph].

[10] Juan Carrasquilla et al. “Reconstructing quantum states with generative models”. In: Nature Machine Intelligence 1.3 (2019), pp. 155–161.

[11] Rupak Chatterjee and Ting Yu. “Generalized coherent states, reproducing kernels, and quantum support vector machines”. In: Quantum Information Computation 17.15 (2017), pp. 1292–1306.

[12] Samuel Yen-Chi Chen et al. “Variational Quantum Circuits for Deep Reinforcement Learning”. In: IEEE Access 8 (2020), pp. 141007–141024.

[13] Carlo Ciliberto et al. “Statistical limits of supervised quantum learning”. In: Physical Review A 102.4 (2020), p. 42414.

[14] Daniel Crawford et al. “Reinforcement learning using quantum Boltzmann machines”. In: Quantum Information Computation 18 (2018), pp. 51–74.

[15] Daoyi Dong et al. “Quantum Reinforcement Learning”. In: IEEE Transactions on Systems, Man, and Cybernetics, Part B 38.5 (2008), pp. 1207–1220.

[16] Vedran Dunjko, Jacob M. Taylor, and Hans J. Briegel. “Quantum-Enhanced Machine Learning.” In: Physical Review Letters 117.13 (2016), p. 130501.

[17] Edward Farhi and Hartmut Neven. “Classification with Quantum Neural Networks on Near Term Processors”. In: arXiv preprint arXiv:1802.06002 (2020).

[18] Matthew B. Hastings. “Classical and Quantum Algorithms for Tensor Principal Component Analysis”. In: Quantum 4 (2020), p. 237.

[19] Vojtech Havlícek et al. “Supervised learning with quantum-enhanced feature spaces.” In: Nature 567.7747 (2019), pp. 209–212.

[20] Chen He et al. “A Low Complexity Quantum Principal Component Analysis Algorithm”. In: arXiv: Quantum Physics (2020).

[21] Hamza Jaffali and Luke Oeding. “Learning algebraic models of quantum entanglement”. In: Quantum Information Processing 19.9 (2020), p. 279.

[22] Iordanis Kerenidis et al. “q-means: A quantum algorithm for unsupervised machine learning”. In: Advances in Neural Information Processing Systems. Vol. 32. 2019, pp. 4134–4144.

[23] Michael. J. Kewming, Sally Shrapnel, and Gerard. J. Milburn. A Physical Quantum Agent. 2021. arXiv: 2007.04426 [quant-ph].

[24] Jakob S Kottmann et al. “TEQUILA: a platform for rapid development of quantum algorithms”. In: Quantum Science and Technology 6.2 (Mar. 2021), p. 024009. doi: 10.1088/2058-9565/abe567. url: https://doi.org/10.1088/2058-9565/abe567.

[25] Lucas Lamata. “Basic protocols in quantum reinforcement learning with superconducting circuits”. In: Scientific Reports 7.1 (2017), pp. 1609–1609.

[26] Jing Li et al. Quantum K-nearest neighbor classification algorithm based on Hamming distance. 2021. arXiv: 2103.04253 [quant-ph].

[27] Jie Lin et al. “An improved quantum principal component analysis algorithm based on the quantum singular threshold method”. In: Physics Letters A 383.24 (2019), pp. 2862–2868.

[28] List of Open Quantum Projects. https://qosf.org/project_list/.

[29] Yunchao Liu, Srinivasan Arunachalam, and Kristan Temme. A rigorous and robust quantum speed-up in supervised machine learning. 2020. arXiv: 2010.02174 [quant-ph].

[30] Seth Lloyd, Silvano Garnerone, and Paolo Zanardi. Quantum algorithms for topological and geometric analysis of big data. 2015. arXiv: 1408.3106 [quant-ph].

[31] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. “Quantum principal component analysis”. In: Nature Physics 10.9 (2014), pp. 631–633.

[32] Di Luo et al. Autoregressive Neural Network for Simulating Open Quantum Systems via a Probabilistic Formulation. 2021. arXiv: 2009.05580 [cond-mat.str-el].

[33] Ana Martin et al. “Toward pricing financial derivatives with an IBM quantum computer”. In: Phys. Rev. Research 3 (1 Feb. 2021), p. 013167. doi: 10.1103/PhysRevResearch.3.013167. url: https://link.aps.org/doi/10.1103/PhysRevResearch.3.013167.

[34] Alex Monras, Almut Beige, and Karoline Wiesner. “Hidden Quantum Markov Models and non-adaptive read-out of many-body states”. In: Applied Mathematical and Computational Sciences (2010), pp. 93–122.

[35] Pere Mujal et al. Opportunities in Quantum Reservoir Computing and Extreme Learning Machines. 2021. arXiv: 2102.11831 [quant-ph].

[36] Hendrik Poulsen Nautrup et al. “Optimizing Quantum Error Correction Codes with Reinforcement Learning”. In: Quantum 3 (2019), p. 215.

[37] Hartmut Neven et al. “QBoost: Large Scale Classifier Training with Adiabatic Quantum Optimization.” In: Asian Conference on Machine Learning. 2012, pp. 333–348.

[38] Nhat A. Nghiem, Samuel Yen-Chi Chen, and Tzu-Chieh Wei. A Unified Framework for Quantum Supervised Learning. 2021. arXiv: 2010.13186 [quant-ph].

[39] Román Orús, Samuel Mugel, and Enrique Lizaso. “Quantum computing for finance: Overview and prospects”. In: Reviews in Physics 4 (2019), p. 100028. issn: 2405-4283. doi: https://doi.org/10.1016/j.revip.2019.100028. url: https://www.sciencedirect.com/science/article/pii/S2405428318300571.

[40] Giuseppe Davide Paparo et al. “Quantum Speedup for Active Learning Agents”. In: Phys. Rev. X 4 (3 July 2014), p. 031002. doi: 10.1103/PhysRevX.4.031002. url: https://link.aps.org/doi/10.1103/PhysRevX.4.031002.

[41] Alejandro Perdomo-Ortiz et al. “Opportunities and challenges for quantum-assisted machine learning in near-term quantum computers”. In: Quantum Science and Technology 3.3 (2018), p. 30502.

[42] QML: A Python Toolkit for Quantum Machine Learning. https://www.qmlcode.org/qml.html.

[43] Patrick Rebentrost, Masoud Mohseni, and Seth Lloyd. “Quantum Support Vector Machine for Big Data Classification”. In: Physical Review Letters 113.13 (2014), p. 130503.

[44] Borja Requena et al. Certificates of quantum many-body properties assisted by machine learning. 2021. arXiv: 2103.03830 [quant-ph].

[45] V. Saggio et al. “Experimental quantum speed-up in reinforcement learning agents”. In: Nature 591.7849 (Mar. 2021), pp. 229–233. issn: 1476-4687. doi: 10.1038/s41586-021-03242-7. url: http://dx.doi.org/10.1038/s41586-021-03242-7.

[46] Maria Schuld and Nathan Killoran. “Quantum Machine Learning in Feature Hilbert Spaces”. In: Physical Review Letters 122.4 (2019), p. 40504.

[47] Maria Schuld and Francesco Petruccione. Supervised Learning with Quantum Computers. 2018.

[48] Changpeng Shao. “An Improved Algorithm for Quantum Principal Component Analysis.” In: arXiv preprint arXiv:1903.03999 (2019).

[49] Gregory R. Steinbrecher et al. “Quantum Optical Neural Networks”. In: npj Quantum Information 5.1 (2019), pp. 1–9.

[50] E. Miles Stoudenmire and David J. Schwab. “Supervised Learning with Quantum-Inspired Tensor Networks”. In: arXiv preprint arXiv:1605.05775 (2016).

[51] Strawberry Fields, A cross-platform Python library for simulating and executing programs on quantum photonic hardware. https://strawberryfields.ai/.

[52] Francesco Tacchino et al. “Variational learning for quantum artificial neural networks”. In: IEEE Transactions on Quantum Engineering (2021), pp. 1–1. issn: 2689-1808. doi: 10.1109/tqe.2021.3062494. url: http://dx.doi.org/10.1109/TQE.2021.3062494.

[53] Nathan Wiebe, Ashish Kapoor, and Krysta M Svore. “Quantum perceptron models”. In: NIPS’16 Proceedings of the 30th International Conference on Neural Information Processing Systems. Vol. 29. 2016, pp. 4006–4014.

[54] Nathan Wiebe, Ashish Kapoor, and Krysta M. Svore. “Quantum algorithms for nearest-neighbor methods for supervised and unsupervised learning”. In: Quantum Information Computation 15.3 (2015), pp. 316–356.

[55] Shusen Zhou, Qingcai Chen, and Xiaolong Wang. “Deep Quantum Networks for Classification”. In: 2010 20th International Conference on Pattern Recognition. 2010, pp. 2885–2888.

