- The race for quantum computing, explained
- Penn State researchers to explore using quantum computers to design new drugs
- Multiverse Computing and Mila Join Forces to Advance Artificial Intelligence with Quantum Computing
- Quantum Machine Learning: Opportunities and Challenges
- The Case for Quantum Machine Learning Neuroimaging
- QUANTUM MACHINE LEARNING: OPPORTUNITIES & LIMITATIONS
- Hands-on quantum machine learning | Rodrigo Morales | ADC2022
- “Reinforcement Learning for Quantum Technologies,” presented by Florian Marquardt
Classically Approximating Variational Quantum Machine Learning with Random Fourier Features
Jonas Landman, Slimane Thabet, Constantin Dalyac, Hela Mhiri, Elham Kashefi
Oct 25 2022 quant-ph arXiv:2210.13200v1
Many near-term applications of quantum computing rely on variational quantum circuits (VQCs). They have been showcased as a promising model for reaching a quantum advantage in machine learning with current noisy intermediate-scale quantum (NISQ) computers. It is often believed that the power of VQCs relies on their exponentially large feature space, and extensive work has explored the expressiveness and trainability of VQCs in that regard. In our work, we propose a classical sampling method that may closely approximate a VQC with Hamiltonian encoding, given only the description of its architecture. It builds on the seminal proposal of Random Fourier Features (RFF) and the fact that VQCs can be seen as large Fourier series. We provide general theoretical bounds for classically approximating models built from exponentially large quantum feature spaces by sampling a few frequencies to build an equivalent low-dimensional kernel, and we show experimentally that this approximation is efficient for several encoding strategies. Precisely, we show that the number of required samples grows favorably with the size of the quantum spectrum. This tool therefore questions the hope for quantum advantage from VQCs in many cases, but conversely helps to narrow down the conditions for their potential success. We expect VQCs with varied and complex encoding Hamiltonians, or with large input dimension, to be more robust to classical approximation.
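The RFF idea the paper builds on can be illustrated entirely classically. A minimal numpy sketch of the classic construction, approximating the RBF kernel by sampling frequencies from its spectral density (this is the textbook version, not the paper's VQC-specific frequency sampling):

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, n_features=500, gamma=1.0):
    """Random Fourier Features approximating the shift-invariant RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2) via z(x)^T z(y) ~ k(x, y)."""
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density: N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(5, 3))
Z = rff_features(X)
K_approx = Z @ Z.T                                  # low-dimensional surrogate
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-1.0 * sq)                         # exact RBF Gram matrix
print(np.abs(K_approx - K_exact).max())             # small; shrinks with n_features
```

The paper's observation is that a VQC with Hamiltonian encoding is itself a (large) Fourier series, so the same sample-a-few-frequencies trick can target its spectrum directly.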
Protocols for classically training quantum generative models on probability distributions
Sachin Kasture, Oleksandr Kyriienko, Vincent E. Elfving
Quantum Generative Modelling (QGM) relies on preparing quantum states and generating samples from them according to hidden (or known) probability distributions. Since distributions from some classes of quantum states (circuits) are inherently hard to sample classically, QGM is an excellent testbed for quantum supremacy experiments. Furthermore, generative tasks are increasingly relevant for industrial machine learning applications, making QGM a strong candidate for demonstrating a practical quantum advantage. However, this requires quantum circuits to be trained to represent industrially relevant distributions, and the corresponding training stage incurs an extensive cost on current quantum hardware. In this work, we propose protocols for classically training QGMs based on circuits of a specific type that admit efficient gradient computation while remaining hard to sample. In particular, we consider Instantaneous Quantum Polynomial (IQP) circuits and their extensions. Exploiting their classical simulability in terms of time complexity, sparsity, and anti-concentration properties, we develop a classically tractable way of simulating their output probability distributions, allowing classical training towards a target distribution. The corresponding quantum sampling from IQP circuits can then be performed efficiently, unlike classical sampling. We numerically demonstrate end-to-end training of IQP circuits against probability distributions over up to 30 qubits on a regular desktop computer. Applied to industrially relevant distributions, this combination of classical training with quantum sampling represents an avenue for reaching quantum advantage in the NISQ era.
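For a handful of qubits the output distribution of an IQP circuit can be written down by brute force, which makes the structure concrete (the paper's contribution is an *efficient* simulation protocol; this toy statevector sketch is not that):

```python
import numpy as np
from itertools import combinations

def iqp_probabilities(n, thetas_z, thetas_zz):
    """Output distribution of the IQP circuit H^n . D . H^n |0...0>, where D
    is diagonal in the computational basis, applying phases exp(i*t*Z_j) and
    exp(i*t*Z_j*Z_k), so each phase depends only on bit values/parities."""
    dim = 2 ** n
    bits = (np.arange(dim)[:, None] >> np.arange(n)) & 1          # (dim, n)
    phase = np.zeros(dim)
    for j in range(n):
        phase += thetas_z[j] * (1 - 2 * bits[:, j])               # Z_j eigenvalue
    for idx, (j, k) in enumerate(combinations(range(n), 2)):
        phase += thetas_zz[idx] * (1 - 2 * (bits[:, j] ^ bits[:, k]))  # Z_j Z_k
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = np.array([[1.0]])
    for _ in range(n):
        Hn = np.kron(Hn, H)
    # H^n |0...0> is the uniform column Hn[:, 0]; apply D, then H^n again.
    state = Hn @ (np.exp(1j * phase) * Hn[:, 0])
    return np.abs(state) ** 2

rng = np.random.default_rng(1)
probs = iqp_probabilities(3, rng.uniform(0, 2 * np.pi, 3),
                          rng.uniform(0, 2 * np.pi, 3))
print(probs.sum())  # 1.0 up to float error
```

With all angles zero the two Hadamard layers cancel and all probability mass sits on |000>, a quick sanity check on the construction.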
Certifying Unknown Genuine Multipartite Entanglement by Neural Networks
Zhenyu Chen, Xiaodie Lin, Zhaohui Wei
Oct 26 2022 quant-ph
Suppose we have an unknown multipartite quantum state: how can we experimentally determine whether it is genuinely multipartite entangled or not? Recall that even for a bipartite quantum state with a known density matrix, deciding whether it is entangled is already NP-hard, so the above problem is hard to solve efficiently in general. However, since genuine multipartite entanglement is a fundamental concept that plays a crucial role in many-body physics and quantum information processing tasks, finding realistic approaches to certify it is undoubtedly necessary. In this work, we show that neural networks can provide a good solution to this problem, where measurement statistics produced by measuring the quantum states involved with local measurement devices serve as the networks' input features. By testing our models on many specific multipartite quantum states, we show that they certify genuine multipartite entanglement very accurately, including some results not previously known. We also exhibit a possible way to improve the efficiency of our models by reducing the feature size. Lastly, we show that our models enjoy remarkable robustness against flaws in measurement devices, implying that they are very experiment-friendly.
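The input features described here (local measurement statistics) can be made concrete for a small known state. A toy numpy sketch computing all 4^3 local Pauli expectation values of a 3-qubit GHZ state, the kind of feature vector such a network would consume (the certification network itself is omitted):

```python
import numpy as np
from itertools import product

PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]]),
    "Z": np.diag([1.0, -1.0]).astype(complex),
}

def local_features(rho, n=3):
    """Expectation values <P1 x P2 x ... x Pn> for every local Pauli string,
    i.e. the measurement statistics obtainable with local devices."""
    feats = []
    for labels in product("IXYZ", repeat=n):
        op = np.array([[1.0]], dtype=complex)
        for l in labels:
            op = np.kron(op, PAULIS[l])
        feats.append(np.real(np.trace(rho @ op)))
    return np.array(feats)

# GHZ state (|000> + |111>) / sqrt(2), a genuinely multipartite entangled state.
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
f = local_features(np.outer(ghz, ghz.conj()))
print(f.shape)  # (64,)
```

Reducing the feature size, as the abstract suggests, would correspond to keeping only a subset of these Pauli strings.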
Equivalence Checking of Parameterized Quantum Circuits: Verifying the Compilation of Variational Quantum Algorithms
Tom Peham, Lukas Burgholzer, Robert Wille
Variational quantum algorithms have been introduced as a promising class of quantum-classical hybrid algorithms that can already be used with the noisy quantum computing hardware available today by employing parameterized quantum circuits. Considering the non-trivial nature of quantum circuit compilation and the subtleties of quantum computing, it is essential to verify that these parameterized circuits have been compiled correctly. Established equivalence checking procedures that handle parameter-free circuits already exist. However, no methodology capable of handling circuits with parameters has been proposed yet. This work fills this gap by showing that verifying the equivalence of parameterized circuits can be achieved in a purely symbolic fashion using an equivalence checking approach based on the ZX-calculus. At the same time, proofs of inequality can be efficiently obtained with conventional methods by taking advantage of the degrees of freedom inherent to parameterized circuits. We implemented the corresponding methods and proved that the resulting methodology is complete. Experimental evaluations (using the entire parametric ansatz circuit library provided by Qiskit as benchmarks) demonstrate the efficacy of the proposed approach. The implementation is open source and publicly available as part of the equivalence checking tool QCEC (https://github.com/cda-tum/qcec) which is part of the Munich Quantum Toolkit (MQT).
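One half of this picture, refuting equivalence via the parameters' degrees of freedom, can be sketched with plain linear algebra: instantiate the parameters at random values and test equality up to global phase. A minimal numpy sketch (this does not use QCEC or the ZX-calculus; `equivalent_up_to_phase` is an illustrative helper):

```python
import numpy as np

def rz(t):
    return np.diag([np.exp(-0.5j * t), np.exp(0.5j * t)])

def phase_gate(t):
    return np.diag([1.0, np.exp(1j * t)])

def rx(t):
    c, s = np.cos(t / 2), -1j * np.sin(t / 2)
    return np.array([[c, s], [s, c]])

def equivalent_up_to_phase(U, V, tol=1e-10):
    """True iff U = e^{i a} V: then U V-dagger is a scalar multiple of I."""
    M = U @ V.conj().T
    return np.allclose(M, M[0, 0] * np.eye(len(M)), atol=tol)

# RZ(t) and the phase gate P(t) differ only by the global phase e^{-i t/2} ...
print(equivalent_up_to_phase(rz(1.3), phase_gate(1.3)))   # True
# ... whereas a single random instantiation refutes RZ(t) = RX(t).
print(equivalent_up_to_phase(rz(1.3), rx(1.3)))           # False
```

Proving equivalence for *all* parameter values, rather than at sampled points, is where the symbolic ZX-calculus approach of the paper comes in.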
Accelerating the training of single-layer binary neural networks using the HHL quantum algorithm
Sonia Lopez Alarcon, Cory Merkel, Martin Hoffnagle, Sabrina Ly, Alejandro Pozas-Kerstjens
Binary Neural Networks are a promising technique for implementing efficient deep models with reduced storage and computational requirements. Training them, however, is still a compute-intensive problem that grows drastically with layer size and input data. At the core of this calculation lies the linear regression problem. The Harrow-Hassidim-Lloyd (HHL) quantum algorithm has gained relevance thanks to its promise of providing a quantum state containing the solution of a linear system of equations, encoded in superposition at the output of a quantum circuit. Although this seems to answer the linear regression problem for training neural networks, it also comes with multiple, difficult-to-avoid hurdles. This paper shows, however, that useful information can be extracted from the quantum-mechanical implementation of HHL and used to reduce the complexity of finding the solution on the classical side.
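For reference, the classical core here is just the normal equations of least squares, i.e. the linear system that HHL would encode in a quantum state. A minimal numpy sketch on toy data (not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for one layer's training problem.
A = rng.normal(size=(100, 4))                    # inputs
w_true = np.array([1.0, -2.0, 0.5, 3.0])         # weights to recover
y = A @ w_true + 0.01 * rng.normal(size=100)     # noisy targets

# Linear regression via the normal equations: (A^T A) w = A^T y.
# This is the linear system whose solution HHL promises as a quantum state.
w = np.linalg.solve(A.T @ A, A.T @ y)
print(np.round(w, 2))
```

The hurdle the paper alludes to is that HHL yields the solution in superposition: reading out all of w classically costs measurements, which is why the authors focus on what partial information can still be extracted usefully.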
Universal algorithms for quantum data learning
Marco Fanizza, Michalis Skotiniotis, John Calsamiglia, Ramon Muñoz-Tapia, Gael Sentís
Oct 24 2022 quant-ph
Operating quantum sensors and quantum computers would make data available in the form of quantum states for purely quantum processing, opening new avenues for studying physical processes and certifying quantum technologies. In this Perspective, we review a line of work dealing with measurements that reveal structural properties of quantum datasets given in the form of product states. These algorithms are universal, meaning that their performance does not depend on the reference frame in which the dataset is provided. Requiring universality implies a characterization of optimal measurements via group representation theory.
Quantum Machine Learning using the ZXW-Calculus
The field of quantum machine learning (QML) explores how quantum computers can be used to solve machine learning problems more efficiently. As an application of hybrid quantum-classical algorithms, it promises a potential quantum advantage in the near term. In this thesis, we use the ZXW-calculus to diagrammatically analyse two key problems that QML applications face. First, we discuss algorithms to compute gradients on quantum hardware, which are needed to perform gradient-based optimisation for QML. Concretely, we give new diagrammatic proofs of the common 2- and 4-term parameter-shift rules used in the literature. Additionally, we derive a novel, generalised parameter-shift rule with 2n terms that is applicable to gates that can be represented with n parametrised spiders in the ZXW-calculus. Furthermore, to the best of our knowledge, we give the first proof of a conjecture by Anselmetti et al. by proving a no-go theorem ruling out more efficient alternatives to the 4-term shift rule. Second, we analyse the gradient landscape of quantum ansätze for barren plateaus using both empirical and analytical techniques. Concretely, we develop a tool that automatically calculates the variance of gradients and use it to detect likely barren plateaus in commonly used quantum ansätze. Furthermore, we formally prove the existence or absence of barren plateaus for a selection of ansätze using diagrammatic techniques from the ZXW-calculus.
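The 2-term parameter-shift rule mentioned here can be sanity-checked numerically on a single RY rotation, where the expectation value is an exact sinusoid. A minimal numpy sketch (a numeric check only; the thesis's contribution is the diagrammatic ZXW proof, not this):

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

def expval(t):
    """f(t) = <0| RY(t)^dag Z RY(t) |0> = cos(t)."""
    psi = ry(t) @ np.array([1.0, 0.0])
    return psi @ Z @ psi

def shift_rule_grad(t):
    """Two-term parameter-shift rule: f'(t) = (f(t + pi/2) - f(t - pi/2)) / 2.
    Exact for gates generated by an operator with eigenvalues +-1/2,
    unlike a finite-difference approximation."""
    return 0.5 * (expval(t + np.pi / 2) - expval(t - np.pi / 2))

t = 0.7
print(shift_rule_grad(t), -np.sin(t))  # identical: the rule is exact here
```

The 4-term and generalised 2n-term rules covered in the thesis extend this idea to gates whose generators have richer eigenvalue structure.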
Deep Neural Networks as the Semi-classical Limit of Topological Quantum Neural Networks: The problem of generalisation
Antonino Marciano, Deen Chen, Filippo Fabrocini, Chris Fields, Matteo Lulli, Emanuele Zappala
Oct 26 2022 quant-ph cs.CG cs.LG math-ph math.GT math.MP
Deep Neural Networks lack a principled model of their own operation. A novel framework for supervised learning based on Topological Quantum Field Theory, which looks particularly well suited for implementation on quantum processors, has recently been explored. We propose using this framework to understand the problem of generalization in Deep Neural Networks. More specifically, in this approach Deep Neural Networks are viewed as the semi-classical limit of Topological Quantum Neural Networks. A framework of this kind readily explains the overfitting behavior of Deep Neural Networks during training and their corresponding generalization capabilities.
Gereon Koßmann, Lennart Binkowski, Lauritz van Luijk, Timo Ziegler, René Schwonnek
Oct 25 2022 quant-ph math-ph math.MP
Despite its popularity, several empirical and theoretical studies suggest that the quantum approximate optimization algorithm (QAOA) has issues in providing a substantial practical advantage. So far, those findings mostly concern a regime of few qubits and shallow circuits. In this work we extend this analysis by investigating the prospects of QAOA in a regime of deep quantum circuits. Due to the rapidly growing number of classical control parameters, we consider local search routines as the characteristic class of variational methods for this regime. The behaviour of QAOA with local search routines is best analyzed by employing Lie theory, which gives a geometrically intuitive picture of optimization landscapes in terms of vector and scalar fields on a differentiable manifold; our methods are borrowed from the field of optimal control theory. In the limit of asymptotically deep circuits we find that a generic QAOA instance has many favourable properties, such as a unique local minimum. For deep but not asymptotically deep circuits, many of those nice properties vanish: saddle points turn into effective local minima, and we obtain a landscape with a continuum of local attractors and potentially exponentially many local traps. Our analysis reveals that the statistical distribution of traps (their number, sizes, and depths) can be accessed by solely evaluating properties of the classical objective function. As a result, we introduce performance indicators that allow us to assess whether a particular combinatorial optimization problem admits a landscape favourable for deep-circuit QAOA. Even though there is no free lunch on general instances, certain problem classes, such as random QUBO, MAXCUT on 3-regular graphs, or very unbalanced MAX-k-SAT, have a chance to perform reasonably well in the deep-circuit regime.
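The deep-circuit regime is beyond a sketch, but the kind of landscape evaluation involved can be illustrated at depth 1. A brute-force numpy sketch of the p=1 QAOA MAXCUT landscape on a triangle graph (a toy instance, not the paper's Lie-theoretic analysis):

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]   # triangle graph; MAXCUT optimum = 2
n, dim = 3, 8
bits = (np.arange(dim)[:, None] >> np.arange(n)) & 1
cost = np.zeros(dim)
for (a, b) in edges:
    cost += bits[:, a] ^ bits[:, b]          # cut edges per bitstring

X = np.array([[0, 1], [1, 0]])

def mixer(beta):
    """Tensor product of single-qubit RX mixers exp(-i*beta*X)."""
    rx = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X
    M = np.array([[1.0]])
    for _ in range(n):
        M = np.kron(M, rx)
    return M

def qaoa_expectation(gamma, beta):
    """Depth-1 QAOA cut expectation <+|^n U_C^dag U_B^dag C U_B U_C |+>^n."""
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)
    psi = np.exp(-1j * gamma * cost) * psi   # phase-separation layer
    psi = mixer(beta) @ psi                  # mixing layer
    return float(np.real(np.conj(psi) @ (cost * psi)))

gammas = np.linspace(0, np.pi, 40)
betas = np.linspace(0, np.pi, 40)
land = np.array([[qaoa_expectation(g, b) for b in betas] for g in gammas])
print(land.max())  # best depth-1 cut expectation found on the grid
```

At (gamma, beta) = (0, 0) the circuit is the bare uniform superposition, giving the baseline expectation 1.5 (half of the 3 edges cut on average); the grid search shows how much a single QAOA layer improves on that. The paper's point is that for *deep* circuits such landscapes acquire a continuum of attractors and many traps, properties it reads off from the classical objective alone.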