Machine learning is one of the most promising applications of quantum algorithms. The area was pioneered by the HHL algorithm, which in some cases can solve systems of linear equations exponentially faster than is possible classically. Since then, many quantum machine learning applications have appeared, including work by consortium members on efficient Recommendation Systems and Gradient Descent. Our aim is to design novel quantum algorithms for machine learning applications and analyse their performance in practice.
Work package Leader: Cyril Allouche (Atos – Bull SAS)
Significant achievements
- We adapted the framework of block-encodings to the study of quantum machine learning algorithms and derived general results applicable to a variety of input models. This adaptation allowed us to unify, within a single framework, various input models (such as sparse-matrix oracles and matrices stored in a quantum-accessible data structure) and various routines (such as singular value estimation of a block-encoded matrix and quantum linear system solvers). Using these theoretical tools, we derived applications and improvements for problems related to machine learning: weighted least squares, generalized least squares, and quantum algorithms to estimate effective resistances in electrical networks. (The block-encoding definition is recalled after this list.)
- A large set of (classical) machine learning procedures rely on the ability to quickly solve a system of equations under certain types of constraints. Second-order cone programs (SOCPs) are convex optimization problems that generalize linear programs; their standard form is recalled after this list. We developed a quantum variant of the interior-point method, a well-studied classical algorithm for solving such problems. Our algorithm outputs a classical solution to the SOCP with objective value close to optimal. We experimentally evaluated the asymptotic speed-up of an application of our method to Support Vector Machines, a standard machine learning task, and the experiments indicate a significant improvement over the best-known classical algorithm.
- Another standard class of recurring problems in machine learning is clustering, where a data set is to be partitioned into a small, finite number of classes grouping similar data entries. We developed a quantum analogue of a standard classical algorithm for this task, namely k-means (a classical sketch annotated with the quantum steps follows the list). With high probability, our algorithm outputs a clustering that is a good approximation of its classical counterpart's, with a substantial speed-up.
- Deep learning, a central tool of artificial intelligence, is a subset of machine learning that mostly relies on the efficient training and evaluation of neural networks to model arbitrary features of a data set. We derived quantum algorithms for training and evaluating feedforward neural networks based on the canonical classical feedforward and backpropagation algorithms (sketched after this list). The running times of these algorithms can be quadratically faster than their classical counterparts, since they depend linearly on the number of neurons in the network, as opposed to the number of edges as in the classical case.
- Finally, we tackled a recurring problem in data analysis: dimensionality reduction. Most standard classification algorithms are particularly inefficient when dealing with data points in high-dimensional spaces. We proposed a quantum analogue of Slow Feature Analysis (a classical sketch follows the list) and showed how to use it for the classification of high-dimensional data sets. We benchmarked our method on the MNIST handwritten-digit data set, achieving accuracy comparable to classical methods.
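For reference, the notion of block-encoding used in the first item can be stated as follows. This is the standard definition from the literature, recalled here for convenience rather than taken from the report itself:

```latex
% A unitary U acting on a + s qubits is an (\alpha, a, \epsilon)-block-encoding
% of an s-qubit operator A if A sits, up to scaling, in its top-left block:
\[
  \bigl\| A - \alpha \, (\langle 0|^{\otimes a} \otimes I)\, U \,(|0\rangle^{\otimes a} \otimes I) \bigr\| \le \epsilon,
  \qquad \text{informally } U = \begin{pmatrix} A/\alpha & \cdot \\ \cdot & \cdot \end{pmatrix}.
\]
```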
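The second-order cone programs of the second item take the following standard form (the usual textbook formulation, not notation specific to our results):

```latex
\[
  \min_{x \in \mathbb{R}^n} \; c^{\mathsf T} x
  \quad \text{s.t.} \quad A x = b, \;\; x \in \mathcal{Q}^{n_1} \times \cdots \times \mathcal{Q}^{n_r},
\]
% where each factor is a second-order (Lorentz) cone:
\[
  \mathcal{Q}^{m} = \bigl\{ (x_0, \tilde{x}) \in \mathbb{R} \times \mathbb{R}^{m-1} : \|\tilde{x}\|_2 \le x_0 \bigr\}.
\]
```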
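To make the clustering item concrete, here is a minimal classical k-means (Lloyd's algorithm) in Python, with comments marking the two steps the quantum analogue accelerates. This is an illustrative sketch with names of our own choosing, not the quantum algorithm itself:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Classical Lloyd's k-means on data X of shape (N, d)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Step 1: distance estimation. The quantum analogue estimates
        # these point-to-centroid distances coherently rather than
        # computing all N*k of them explicitly.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 2: centroid update. The quantum analogue recovers each
        # new centroid approximately from a quantum average state.
        centroids = np.array([X[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids
```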
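The feedforward and backpropagation routines underlying the neural-network item are sketched below for a two-layer network. Each matrix operation touches every weight once, which is the per-edge classical cost the quantum algorithm improves on; the code is a minimal classical sketch under our own naming:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, W2):
    # Classical feedforward: each matrix-vector product visits every
    # edge (weight) once, hence cost proportional to edge count.
    a1 = sigmoid(W1 @ x)
    a2 = sigmoid(W2 @ a1)
    return a1, a2

def backprop(x, y, W1, W2, lr=0.1):
    a1, a2 = forward(x, W1, W2)
    # Layer errors for squared loss; sigmoid derivative is a * (1 - a).
    delta2 = (a2 - y) * a2 * (1 - a2)
    delta1 = (W2.T @ delta2) * a1 * (1 - a1)
    # Gradient updates again iterate over all edges classically.
    W2 -= lr * np.outer(delta2, a1)
    W1 -= lr * np.outer(delta1, x)
    return W1, W2
```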
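Finally, a minimal sketch of classical linear Slow Feature Analysis, assuming full-rank, time-ordered data; the quantum algorithm carries out the analogous linear-algebraic steps (whitening and an eigenproblem) on quantum states. Function and parameter names are illustrative:

```python
import numpy as np

def slow_feature_analysis(X, n_features):
    """Linear SFA on X of shape (T, d), rows ordered in time."""
    X = X - X.mean(axis=0)
    # Whiten the data (assumes strictly positive eigenvalues,
    # i.e. full-rank data).
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    Z = X @ (evecs / np.sqrt(evals))
    # Slowness of a direction = variance of its discrete time derivative.
    dZ = np.diff(Z, axis=0)
    devals, devecs = np.linalg.eigh(np.cov(dZ, rowvar=False))
    # The smallest eigenvalues give the slowest-varying features.
    return Z @ devecs[:, :n_features]
```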