Machine learning (ML) has the potential to solve challenging quantum many-body problems in physics and chemistry. Yet, this prospect has not been rigorously justified. In this work, we establish rigorous results to understand the power of classical ML and the potential for quantum advantage in an important example application: predicting outcomes of quantum mechanical processes. We prove that, to achieve a small average prediction error, one can always design a classical ML model whose sample complexity is comparable to that of the best quantum ML model (up to a small polynomial factor). Regarding computational complexity, we show that the class of problems solvable by efficient classical ML models with access to sampled data is strictly larger than BPP. Hence, classical ML models may be able to solve some challenging quantum problems after training on data obtained from physical experiments. As a concrete example, we prove that a simple classical ML model can efficiently learn to predict ground state representations that approximate expectation values of local observables up to a small, constant error. This holds for any smooth family of gapped local Hamiltonians in a finite spatial dimension. This talk includes content from [1], [2], and [3].
[1] Huang, Hsin-Yuan, Richard Kueng, and John Preskill. "Information-theoretic bounds on quantum advantage in machine learning." arXiv preprint arXiv:2101.02464 (2021).
[2] Huang, Hsin-Yuan, et al. "Power of data in quantum machine learning." arXiv preprint arXiv:2011.01938 (2020).
[3] Huang, Hsin-Yuan, Richard Kueng, and John Preskill. "Provable machine learning algorithms for quantum many-body problems." In preparation.