Chinese Physics Letters, 2021, Vol. 38, No. 11, Article code 110301

Express Letter

Deep Learning Quantum States for Hamiltonian Estimation

Xinran Ma (马欣然)1, Z. C. Tu (涂展春)1, and Shi-Ju Ran (冉仕举)2*

Affiliations: 1Department of Physics, Beijing Normal University, Beijing 100875, China; 2Department of Physics, Capital Normal University, Beijing 100048, China

Received 15 August 2021; accepted 3 October 2021; published online 11 October 2021

Supported by the National Natural Science Foundation of China (Grant Nos. 12004266, 11834014, and 11975050), the Beijing Natural Science Foundation (Grant Nos. 1192005 and Z180013), the Foundation of Beijing Education Committees (Grant No. KM202010028013), and the Academy for Multidisciplinary Studies, Capital Normal University.
*Corresponding author. Email: sjran@cnu.edu.cn
Citation Text: Ma X R, Tu Z C, and Ran S J 2021 Chin. Phys. Lett. 38 110301

Abstract: Human experts cannot efficiently access the physical information of a quantum many-body state by simply “reading” its coefficients, but have to rely on prior knowledge such as order parameters and quantum measurements. We demonstrate that a convolutional neural network (CNN) can learn from the coefficients of many-body states, or from reduced density matrices, to estimate the physical parameters of interacting Hamiltonians, such as coupling strengths and magnetic fields, provided that the states are the corresponding ground states. We propose QubismNet, which consists of two main parts: the Qubism map that visualizes the ground states (or the purified reduced density matrices) as images, and a CNN that maps the images to the target physical parameters. By imposing certain constraints on the training set for the sake of balance, QubismNet exhibits impressive powers of learning and generalization on several quantum spin models. While the training samples are restricted to states from certain ranges of the parameters, QubismNet can accurately estimate the parameters of states beyond such training regions. For instance, our results show that QubismNet can estimate the magnetic fields near the critical point by learning from states away from the critical vicinity. Our work provides a data-driven way to infer the Hamiltonians that give the designed ground states, and would therefore benefit the existing and future generations of quantum technologies such as Hamiltonian-based quantum simulations and state tomography.

DOI: 10.1088/0256-307X/38/11/110301 © 2021 Chinese Physics Society

Article Text

Machine learning (ML) has recently been applied to various issues that are difficult to tackle using purely the “conventional” techniques in physics (for instance, tensor networks[1–3] and quantum Monte Carlo[4,5]). Successful applications include identifying classical/quantum phases and topologies without computing order parameters,[6–12] predicting physical properties of materials,[13–15] and efficiently representing non-trivial quantum states,[16–19] to name but a few. Among others, obtaining the eigenstates, particularly the ground states, of a given quantum many-body Hamiltonian is among the central topics of contemporary physics.[20,21] The inverse problems, which are of equal significance and practicality, are much less studied due to the lack of valid methods. ML serves as a novel approach that has recently achieved inspiring successes in such problems.[22–25] In particular, one important and hotly debated issue is accessing the information of potentials or interactions by learning from physical data. For instance, Xin et al. utilized fully connected neural networks to recover the ground states of $k$-local Hamiltonians from local measurements.[26] Hegde et al. employed kernel ridge regression to achieve accurate and transferable predictions of Hamiltonians for a variety of material environments.[27] Li et al. identified the effective Hamiltonians of magnetic systems and extracted the dominant spin interactions in MnO and TbMnO$_3$ through multiple linear regression.[28] Sehanobish et al. proposed quantum potential neural networks to reconstruct the effective potential given the wave functions.[29] However, most existing works in this direction utilized regression methods or shallow neural networks, which usually possess relatively low learning or generalization power.
In ML, one usually uses deep networks, such as convolutional neural networks (CNNs),[30,31] to solve sophisticated problems such as the classification of real-life images. The excellent learning and generalization abilities of CNNs have been widely recognized in numerous applications in computer science (see Refs. [32–34]). Recently, Berthusen et al. utilized a CNN to extract the crystal-field Stevens parameters from thermodynamic data, illustrating the validity of CNN-based methods in deducing physical information.[35] Goh et al. put forward a deep CNN model named Chemception to predict chemical properties from 2D drawings of molecules.[36] Laanait et al. utilized an encoder-decoder architecture with convolutional layers to generate the local electron density of materials by learning from diffraction patterns.[37] It remains interesting and unexplored whether CNNs are capable of solving more challenging issues, including those with strong correlations and many-body effects.

In this work, the problem we consider is the inverse of solving for the eigenstates of a given Hamiltonian. Suppose that the state is given; our aim is to estimate the parameters of the many-body Hamiltonian, by training a CNN model, such that the given state is its ground state. Solving this problem would be meaningful and important for, e.g., designing the Hamiltonian of a quantum annealer to prepare a target state.[38] To this end, we propose QubismNet, which consists of two main parts [Fig. 1(a)]. The first part is a map that transforms the states into images, and the second is a CNN that maps the images to estimates of the target parameters in the Hamiltonian. The purpose of mapping the states to images is to exploit the power of CNNs in processing images. A similar idea was used in Ref. [35], where the thermodynamic data (specific heat and others) were transformed into images by wavelet transformation before being fed to the CNN. The state-image map we use is known as Qubism,[39] whose resulting images are fractal-like [see some examples in Fig. 1(b)] and can reveal the physical properties of the state. Note that there are other methods for parameter estimation, such as Bayesian inference and expectation maximization. However, it is widely accepted that these methods perform much worse than neural networks on complicated tasks such as learning from images. Considering that the complexity of learning reduced density matrices is comparable to that of learning images, we believe a CNN is a more proper choice than such inference models.

We benchmark QubismNet on several quantum spin models defined on 1D and 2D lattices. Its learning and generalization powers are tested by dividing the samples (i.e., the ground states taking different values of a certain parameter in the Hamiltonian) into testing and generalizing sets. The parameters corresponding to the states in the testing set are distributed independently and identically (IID) with those of the training set; QubismNet estimates the parameters of such states with high accuracy. The parameters corresponding to the states in the generalizing set are restricted to a certain range in which no training states are taken. To keep the training data balanced, the training samples are taken from the boundaries of the whole parameter space, and the generalizing samples from a subregion in its middle.
Our results show that QubismNet can generalize what it has learned from the training set to estimate the parameters of the generalizing states with fair accuracy. For instance, QubismNet learns only from states away from the critical point, yet estimates well the magnetic fields of states in the critical vicinity. Our work suggests that a CNN is capable of extracting information directly from the coefficients of quantum many-body states or density matrices, whereas human experts have to rely on prior knowledge such as order parameters and measurements.
[Fig. 1 image: cpl-38-11-110301-fig1.png]
Fig. 1. (a) Illustration of QubismNet. Its first part is the Qubism map that transforms a quantum wave function to an image of fractals. The second part is a convolutional neural network that maps the images to the estimations of the target parameters. (b) Examples of the images obtained by Qubism. The ground states are from the $XY$ model with the magnetic fields $h=0.2$–0.8.
QubismNet: Estimating Physical Parameters from Ground States. QubismNet consists of two main parts. The first part is the Qubism map,[39] which transforms quantum states into fractal-like images in a one-to-one way. These images are subsequently input to a CNN, whose output is the estimate of the physical parameters of the Hamiltonian whose ground states are the input states. Taking the quantum Ising model (QIM) in a transverse magnetic field as an example, the Hamiltonian reads $$ \hat{H}(h) = J \sum_{\left \langle i, j \right \rangle} \hat{S}_i^z \hat{S}_j^z - h \sum_{k=1}^{L} \hat{S}_k^x,~~ \tag {1} $$ with $\hat{S}^{\alpha}$ the $\alpha$-component spin operator ($\alpha = x, z$) and $L$ the system size. Here, we take the coupling constant $J=1$ as the energy scale. There are several well-established methods to calculate the ground state of a given Hamiltonian, such as the density matrix renormalization group (DMRG),[40,41] tensor network algorithms,[1–3] and quantum Monte Carlo.[4,5] This work considers an inverse problem: estimating the magnetic field $h$ given the ground state. Specifically, we denote the training set as $\{|\psi_m \rangle\}$ ($m=1, \ldots, N_{\rm train}$), where $|\psi_m \rangle$ is the ground state of $\hat{H}(h_m)$. To train the CNN, we choose the mean-square error (MSE) as the loss function $$ \varepsilon = \frac{1}{N_{\rm train}} \sum_{m=1}^{N_{\rm train}} (h^{\rm p}_m - h_m)^2,~~ \tag {2} $$ with $h^{\rm p}_m$ the estimate of the magnetic field of $|\psi_m \rangle$ by QubismNet. The variational parameters of the CNN are optimized by minimizing the loss function with gradient methods; we choose RMSProp[42] as the optimizer to control the gradient steps. More details of the Qubism map and the CNN are provided in the Supplementary Material.

To benchmark the generalization power of QubismNet, we introduce the testing and generalizing sets. (The datasets, i.e., the ground states of the Hamiltonians with different physical parameters, are prepared by the exact diagonalization algorithm for small sizes and by the DMRG algorithm for large sizes.) The testing set contains states whose magnetic fields are different from, but IID with, those of the training states. The states in the generalizing set are different from both the training and testing states, and are distributed in a different region. For instance, we uniformly choose $N_{\rm train}$ values of $h$ within $0 < h < 0.5-\delta/2$ and $0.5+\delta/2 < h < 1$ for the training set, and choose another $N_{\rm test}$ values of $h$ in the same regions for the testing set. For the generalizing set, we uniformly choose $N_{\rm g}$ values of $h$ within $0.5-\delta/2 < h < 0.5+\delta/2$. We dub $\delta$ the generalization width. Note that $h=0.5$ is the quantum critical point of the QIM. The states in the testing and generalizing sets are never used to train the CNN.
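For concreteness, below is a minimal Python sketch of the two components (PyTorch is assumed for the network). The quadrant-interleaving variant of the Qubism map, as well as the specific depth and width of the CNN, are our illustrative assumptions; the actual architecture is described in the Supplementary Material.

```python
import numpy as np
import torch
import torch.nn as nn

def qubism_image(psi, n_spins):
    """Map a 2**n_spins state vector to a 2**(n/2) x 2**(n/2) image.
    The spin pair (s_{2k-1}, s_{2k}) selects the quadrant at recursion
    depth k: odd-position bits form the row index, even-position bits
    the column index (one common variant of the Qubism map [39])."""
    half = n_spins // 2
    img = np.zeros((2 ** half, 2 ** half))
    for idx in range(2 ** n_spins):
        bits = [(idx >> (n_spins - 1 - k)) & 1 for k in range(n_spins)]
        row = sum(b << (half - 1 - k) for k, b in enumerate(bits[0::2]))
        col = sum(b << (half - 1 - k) for k, b in enumerate(bits[1::2]))
        img[row, col] = psi[idx].real  # ground states taken real here
    return img

class SmallCNN(nn.Module):
    """Illustrative CNN regressor mapping images to a scalar estimate."""
    def __init__(self, img_size):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Linear(16 * (img_size // 4) ** 2, 1)

    def forward(self, x):  # x: (batch, 1, img_size, img_size)
        return self.head(self.features(x).flatten(1)).squeeze(-1)

def train(model, images, labels, epochs=200, lr=1e-3):
    """Minimize the MSE of Eq. (2) with the RMSProp optimizer."""
    opt = torch.optim.RMSprop(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), labels)  # Eq. (2)
        loss.backward()
        opt.step()
    return model
```

For a $2L_{\rm b}=16$-spin input (see the RDM trick below), the resulting images are $256 \times 256$.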
For large systems, it is inefficient to directly apply the Qubism map, as it requires the full coefficients of the quantum state. To circumvent this, we bring in the reduced density matrix (RDM) combined with purification. Specifically, we choose a subsystem of moderate size (denoted by $L_{\rm b}$) and calculate its RDM, $\hat{\rho}(|\psi_m\rangle) = {\rm Tr}_{\bar{\rm s}} |\psi_m\rangle \langle \psi_m |$, with ${\rm Tr}_{\bar{\rm s}}$ tracing out the degrees of freedom outside the subsystem. If $L_{\rm b}$ is comparable to or larger than the correlation length, the RDM contains the dominant physical information of the whole system.[43,44] According to our simulations, it is not even necessary to set $L_{\rm b}$ larger than the correlation length to accurately estimate the physical parameters. To map an RDM to an image by Qubism, we rewrite it as a pure state, $\hat{\rho}(|\psi_m\rangle) = \sum_{ii'} \rho_{ii'} |i\rangle \langle i'| \to |\rho_m\rangle= \sum_{ii'} \rho_{ii'} |i i'\rangle$. One can see that $|\rho_m\rangle$ is a purification of $\hat{\rho}(|\psi_m\rangle)^2$, since ${\rm Tr}_{i'} |\rho_m\rangle\langle \rho_m| = \hat{\rho}(|\psi_m\rangle)^2$. Therefore, we feed QubismNet with $\{|\rho_m\rangle\}$, which contains the same amount of information as $\{\hat{\rho}(|\psi_m\rangle)\}$. The parameter complexity of $|\rho_m\rangle$ is independent of the total system size and equals that of a state of $2L_{\rm b}$ spins.
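As a hedged illustration of this trick, the sketch below computes the RDM of a contiguous block (assumed here to be the first $L_{\rm b}$ spins for simplicity) and vectorizes it into $|\rho_m\rangle$, which can then be fed to `qubism_image` above with $2L_{\rm b}$ effective spins. The normalization is our choice, so that the vector can be treated as a unit-norm pure state.

```python
import numpy as np

def block_rdm(psi, L, Lb):
    """RDM of the first Lb spins of an L-spin pure state:
    rho_{ii'} = sum_j psi_{ij} psi*_{i'j}, tracing out the rest."""
    m = psi.reshape(2 ** Lb, 2 ** (L - Lb))
    return m @ m.conj().T

def purified_vector(rho):
    """|rho> = sum_{ii'} rho_{ii'} |i i'>, so Tr_{i'}|rho><rho| = rho^2.
    Normalized here (our choice) to give a unit-norm pure state."""
    v = rho.reshape(-1)
    return v / np.linalg.norm(v)

# Usage on a dense state of modest size, e.g. L = 16, Lb = 8:
#   qubism_image(purified_vector(block_rdm(psi16, 16, 8)), 16)
# For L = 64 the full vector cannot be stored; in practice the RDM
# would be extracted from a DMRG/tensor-network representation.
```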
[Fig. 2 image: cpl-38-11-110301-fig2.png]
Fig. 2. The estimated parameters of the Hamiltonian ($h$ or $J_z$) versus their ground truth. The blue shadows indicate the ranges of parameters taken as the training and testing sets; the yellow regions indicate the range of the generalizing set. The errors of the testing and generalizing sets ($\varepsilon_{\rm t}$ and $\varepsilon_{\rm g}$) are given in the inset tables. (a) Estimations $h^{\rm p}$ or $J_z^{\rm p}$ versus the true values $h$ or $J_z$ without a generalizing set ($\delta=0$) for the 1D spin models. We take $L=64$ with the subsystem size $L_{\rm b}=8$; the RDM trick is used. (b) Estimations $h^{\rm p}$ versus the true $h$ taking $\delta=0.4$ for the 1D spin models. Note that the critical point of the QIM, $h=0.5$, lies in the middle of the generalizing range. The RDM trick is not used for the $XY$ model with $L=16$. (c) Estimations $h^{\rm p}$ versus the true $h$ for the spin models on a $4 \times 16$ square lattice.
Results and Discussions. We first benchmark the training and testing accuracies of QubismNet on the one-dimensional (1D) QIM, taking $L=64$ as the system size and $L_{\rm b}=8$ as the subsystem size in the RDM trick. We take periodic boundary conditions, meaning that the first and last spins interact as nearest neighbors. Figure 2(a) shows the estimated fields $h^{\rm p}$ against the true fields $h$. The $N_{\rm train}=1000$ training states are the ground states obtained by uniformly choosing different magnetic fields within $0 < h < 1$. The fields of the $N_{\rm test}=100$ testing states are also uniformly taken within $0 < h < 1$ and differ from those of the training states. No generalizing states are taken (i.e., $\delta=0$). QubismNet accurately estimates the magnetic fields of both the training and testing states: the testing error, evaluated as the MSE over all testing states, is $\varepsilon_{\rm t} \simeq 1.21 \times 10^{-4}$. The estimations remain accurate even near the critical point, where the states possess relatively long-range correlations.

We also test QubismNet on the 1D $XXZ$ model with periodic boundary conditions. We consider two cases, whose Hamiltonians read, respectively, $$\begin{align} \hat{H}(J_z) = \sum_{\left \langle i, j \right \rangle} (\hat{S}_i^x \hat{S}_j^x + \hat{S}_i^y \hat{S}_j^y + J_z \hat{S}_i^z \hat{S}_j^z),~~ \tag {3} \end{align} $$ $$\begin{align} \hat{H}(h) = \sum_{\left \langle i, j \right \rangle} (\hat{S}_i^x \hat{S}_j^x + \hat{S}_i^y \hat{S}_j^y) + h \sum_{k=1}^{L} \hat{S}_k^z.~~ \tag {4} \end{align} $$ The physical parameters to be estimated by QubismNet are the coupling strength $J_z$ in Eq. (3) and the longitudinal field $h$ in Eq. (4). We dub the latter the $XY$ model, with zero $J_z$ and a nonzero longitudinal field. We use the RDM trick with $L=64$ and $L_{\rm b}=8$. The testing errors for the $XXZ$ and $XY$ models are of order $\varepsilon_{\rm t} \sim O(10^{-5})$.
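As a hedged illustration of how such datasets could be prepared for small sizes, the following sketch builds ground states of Eq. (1) by sparse exact diagonalization with periodic boundaries and samples a balanced field grid; the exact sampling endpoints are our assumptions, since the text only specifies uniform sampling.

```python
import numpy as np
from scipy.sparse import identity, kron, csr_matrix
from scipy.sparse.linalg import eigsh

sx = csr_matrix(np.array([[0.0, 0.5], [0.5, 0.0]]))
sz = csr_matrix(np.array([[0.5, 0.0], [0.0, -0.5]]))

def site_op(op, k, L):
    """Embed a single-site operator at site k of an L-spin chain."""
    return kron(kron(identity(2 ** k), op),
                identity(2 ** (L - k - 1)), format="csr")

def ising_ground_state(h, L, J=1.0):
    """Ground state of Eq. (1) with periodic boundary conditions."""
    H = csr_matrix((2 ** L, 2 ** L))
    for i in range(L):
        H += J * site_op(sz, i, L) @ site_op(sz, (i + 1) % L, L)
        H -= h * site_op(sx, i, L)
    _, vec = eigsh(H, k=1, which="SA")  # smallest eigenvalue sector
    return vec[:, 0]

def training_fields(n_train, delta):
    """Balanced fields on both sides of the excluded window of width
    delta around h = 0.5 (endpoints 0.01/0.99 are our assumptions)."""
    half = n_train // 2
    return np.concatenate([
        np.linspace(0.01, 0.5 - delta / 2, half, endpoint=False),
        np.linspace(0.5 + delta / 2, 0.99, n_train - half)])

# e.g., images = [qubism_image(ising_ground_state(h, 12), 12)
#                 for h in training_fields(1000, 0.0)]
```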
[Fig. 3 image: cpl-38-11-110301-fig3.png]
Fig. 3. (a) The generalizing error $\varepsilon_{\rm g}$ versus the system size $L$, fixing the generalization width $\delta=0.6$ and the subsystem size $L_{\rm b}=8$, for the 1D $XXZ$ and $XY$ models. The inset shows the relation between $\varepsilon_{\rm g}$ and $L_{\rm b}$ with $L=64$. (b) The relation between $\varepsilon_{\rm g}$ and $\delta$ for the 1D and 2D spin models, where we fix $L=64$ and $L_{\rm b}=8$.
To benchmark the generalization power, we set $\delta=0.4$ [Fig. 2(b)]. Within $0.3 < h < 0.7$ (the light yellow shadow), no training states are taken; in this range, we uniformly take $N_{\rm g}=40$ generalizing states with an interval $\Delta h=0.01$. For the QIM, a quantum phase transition occurs at $h=0.5$, so in this setting QubismNet learns only from states away from the critical vicinity. Our results show that it can generalize from what it has learned and estimate the magnetic fields near the critical point, with a generalizing error (the MSE evaluated over the generalizing set) of $\varepsilon_{\rm g} \sim O(10^{-3})$ using the RDM trick with $L=64$ and $L_{\rm b}=8$.

For the $XY$ and $XXZ$ models, the system is in the gapless phase for $0 < h < 1$.[45] We set the same ranges for the training, testing, and generalizing sets as above. Without the RDM trick, taking $L=16$, we find that “stages” appear in the $h$–$h^{\rm p}$ curves. These originate from the energy gaps caused by finite-size effects. For instance, when $h$ of the $XY$ model is varied from $0.15$ to $0.28$, no energy level crossing occurs, so the ground state remains the same state throughout this range. Such stages obviously hinder the estimation of the physical parameters from the ground states, since QubismNet cannot distinguish the states obtained by varying $h$ within a stage. We resolve this issue by increasing $L$ (e.g., to $L=64$); the RDM trick then has to be used, since we cannot handle the full $2^{64}$ coefficients in the Qubism map. With the RDM trick, the stages in the $h$–$h^{\rm p}$ curves are largely suppressed, and the testing and generalizing errors decrease by factors of more than 100 and 20, respectively. QubismNet is also tested on the frustrated breathing kagome antiferromagnet,[46,47] where both the testing and generalizing errors are around $O(10^{-4})$; see the Supplementary Material for details.

We also test the $XXZ$ and $XY$ models on a ($4 \times 16$) 2D square lattice under periodic boundary conditions. The subsystem for the RDM is chosen in the middle of the lattice, with size $4 \times 2$. The 2D $XY$ model, whose local Hamiltonian is given by Eq. (4), is in an oscillatory phase for $0 \leq h \leq 1$, and the $XXZ$ model [Eq. (3)] is in the paramagnetic phase for $0 \leq J_z \leq 1$. In general, 2D quantum models are much more challenging to simulate. QubismNet nevertheless works well on such 2D systems, as shown in Fig. 2(c): with the generalization width $\delta=0.4$, we obtain performance similar to that of the chains, with $\varepsilon_{\rm t} \sim O(10^{-5})$–$O(10^{-4})$ and $\varepsilon_{\rm g} \sim O(10^{-3})$.

To demonstrate the finite-size effects, Fig. 3(a) shows $\varepsilon_{\rm g}$ against $L$ for the $XXZ$ and $XY$ models ($\delta=0.6$), using the RDM trick with $L_{\rm b}=8$. The error bars here (and all others in this work) are obtained from ten runs with independently and randomly initialized variational parameters of the CNN. The $\varepsilon_{\rm g}$'s of both models still decrease slightly with $L$ for $L>60$, meaning that the errors could be further reduced by taking larger sizes. In the inset of Fig. 3(a), we fix $L=64$ and find that $\varepsilon_{\rm g}$ converges well once the subsystem size exceeds $L_{\rm b}=6$ for the models under consideration.
Note that it would be inefficient to improve the performance by increasing $L_{\rm b}$, since the complexity grows exponentially with it. We have tried larger $L_{\rm b}$, and the results show little improvement in performance.

In Fig. 3(b), we show $\varepsilon_{\rm g}$ versus $\delta$ for the QIM, $XXZ$, and $XY$ models, fixing $L=64$ and using the RDM trick with $L_{\rm b}=8$. Since QubismNet only learns from states sampled within $0 < h < 0.5-\delta/2$ and $0.5+\delta/2 < h < 1$, estimating the $h$ of the ground states within $0.5-\delta/2 < h < 0.5+\delta/2$ requires more generalization power as $\delta$ increases. Accordingly, the generalizing error $\varepsilon_{\rm g}$ increases monotonically with $\delta$. However, even for $\delta=0.8$, the generalizing error remains small, with approximately $\varepsilon_{\rm g} < 0.05$. Meanwhile, the estimations fluctuate more strongly for larger $\delta$ upon random initialization of the variational parameters of QubismNet.

So far, we have exploited the fact that transforming the states into images allows us to take advantage of the power of CNNs in processing images. As a comparison, we directly reshape the coefficients of an RDM into a $2^{2L_{\rm b}}$-dimensional vector and use a 1D version of the CNN, consisting of 1D convolutional and pooling layers, to map the vector to the estimates of the target parameters (a sketch of this baseline is given below). Figure 4(a) shows the estimations $h^{\rm p}$ versus $h$ for the 1D $XY$ model with $L=64$ and $L_{\rm b}=6$ and $8$. Without the Qubism map, $\varepsilon_{\rm t}$ and $\varepsilon_{\rm g}$ become more than ten times larger than with it. These results imply that the Qubism map is a reasonable choice, since the image “visualizes” the physics of the state in fractal patterns.[39] We do not exclude the possibility that other maps may outperform the Qubism map.

We also attempt the estimation of two parameters on the 1D Heisenberg $XXZ$ model, where the magnetic field $h$ and the anisotropy $J_z$ are estimated simultaneously. We choose $L=64$ and use the RDM trick with $L_{\rm b}=8$. As shown in Fig. 4(b), the training and testing sets are obtained from the blue region and the generalizing set from the yellow region; the ground state is in a gapless phase. The color of each dot indicates the error between the estimation and the label. The generalizing error is $7.54 \times 10^{-3}$.
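Returning to the baseline without the Qubism map mentioned above, a minimal sketch of the 1D variant is given here; the kernel sizes and channel counts are our illustrative assumptions, and the same MSE/RMSProp training loop applies with only the input shape changed.

```python
import torch.nn as nn

class SmallCNN1D(nn.Module):
    """Baseline without the Qubism map: the flattened RDM coefficients
    (a 2**(2*Lb)-dimensional vector) are mapped to the estimates by 1D
    convolutional and pooling layers."""
    def __init__(self, vec_len, n_params=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(8, 16, 5, padding=2), nn.ReLU(), nn.MaxPool1d(4))
        # set n_params = 2 to regress two parameters jointly,
        # analogous to the (h, J_z) estimation of Fig. 4(b)
        self.head = nn.Linear(16 * (vec_len // 16), n_params)

    def forward(self, x):  # x: (batch, 1, vec_len)
        return self.head(self.features(x).flatten(1)).squeeze(-1)
```

For $L_{\rm b}=8$, the input is a $2^{16}=65536$-dimensional vector.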
[Fig. 4 image: cpl-38-11-110301-fig4.png]
Fig. 4. (a) The estimations $h^{\rm p}$ versus the true $h$ on the 1D $XY$ model ($L=64$) with and without the Qubism map. We use the RDM trick with $L_{\rm b}=8$ and $L_{\rm b}=6$. (b) The estimations of $h$ and $J_z$ on the 1D $XXZ$ model. The position of each dot shows the ground truth of the two parameters, and its color indicates the error.
Our work is a first step toward using classical ML models to directly learn from quantum data (e.g., wave functions or density matrices). CNN models possess high nonlinearity; thus it would be interesting to compare them with parameterized quantum circuit models,[48–50] which normally represent unitary transformations on quantum states. Our results demonstrate the impressive learning and generalization powers of CNNs in such problems, which could provide a key tool for designing Hamiltonians in order to, for instance, prepare target states in Hamiltonian-based quantum simulators.[38,51] An important topic for future investigation is to test the generalization power while breaking the data balance to different extents. Our proposal can be generalized to learn from the experimental data of quantum measurements, e.g., in a quantum state tomography process.[26,52–54] The idea of using neural networks to inversely solve challenging numerical problems could also be generalized, e.g., to constraint satisfaction problems.[55,56] Moreover, our data show the possibility of estimating multiple parameters, which would weaken the constraints on the form of the Hamiltonian. Ideally, even if we do not know exactly which terms are contained in the Hamiltonian, we could consider all possible terms and train the CNN; assuming that the CNN gives accurate estimations, the parameters of the terms absent from the Hamiltonian would be estimated close to zero. The key to realizing such an ideal estimator is therefore a powerful ML model for learning quantum data, and our work provides a good starting point.

Acknowledgment. SJR is grateful to Ding Liu and Ya-Tai Miu for helpful discussions.
References
[1] Verstraete F, Murg V, and Cirac J I 2008 Adv. Phys. 57 143
[2] Orús R 2019 Nat. Rev. Phys. 1 538
[3] Ran S J, Tirrito E, Peng C, Chen X, Tagliacozzo L, Su G, and Lewenstein M 2020 Tensor Network Contractions: Methods and Applications to Quantum Many-Body Systems, Lecture Notes in Physics Vol. 964 (Berlin: Springer)
[4] Ceperley D and Alder B 1986 Science 231 555
[5] Nightingale M P and Umrigar C J 1998 Quantum Monte Carlo Methods in Physics and Chemistry (Berlin: Springer)
[6] Wang L 2016 Phys. Rev. B 94 195105
[7] Carrasquilla J and Melko R G 2017 Nat. Phys. 13 431
[8] Van Nieuwenburg E P, Liu Y H, and Huber S D 2017 Nat. Phys. 13 435
[9] Zhang P, Shen H, and Zhai H 2018 Phys. Rev. Lett. 120 066401
[10] Rem B S, Käming N, Tarnowski M, Asteria L, Fläschner N, Becker C, Sengstock K, and Weitenberg C 2019 Nat. Phys. 15 917
[11] Rodriguez-Nieva J F and Scheurer M S 2019 Nat. Phys. 15 790
[12] Scheurer M S and Slager R J 2020 Phys. Rev. Lett. 124 226401
[13] Rupp M, Tkatchenko A, Müller K R, and Von Lilienfeld O A 2012 Phys. Rev. Lett. 108 058301
[14] Xie T and Grossman J C 2018 Phys. Rev. Lett. 120 145301
[15] Hanakata P Z, Cubuk E D, Campbell D K, and Park H S 2018 Phys. Rev. Lett. 121 255304
[16] Carleo G and Troyer M 2017 Science 355 602
[17] Choo K, Carleo G, Regnault N, and Neupert T 2018 Phys. Rev. Lett. 121 167204
[18] Glasser I, Pancotti N, August M, Rodriguez I D, and Cirac J I 2018 Phys. Rev. X 8 011006
[19] Deng D L, Li X, and Sarma S D 2017 Phys. Rev. B 96 195145
[20] Avella A and Mancini F 2012 Strongly Correlated Systems (Berlin: Springer)
[21] Kuramoto Y 2020 Quantum Many-Body Physics (Berlin: Springer)
[22] Fournier R, Wang L, Yazyev O V, and Wu Q 2020 Phys. Rev. Lett. 124 056401
[23] Teoh Y H, Drygala M, Melko R G, and Islam R 2020 Quantum Sci. Technol. 5 024001
[24] Hanakata P Z, Cubuk E D, Campbell D K, and Park H S 2020 Phys. Rev. Res. 2 042006
[25] Arsenault L F, Neuberg R, Hannah L A, and Millis A J 2017 Inverse Probl. 33 115007
[26] Xin T, Lu S, Cao N, Anikeeva G, Lu D, Li J, Long G, and Zeng B 2019 npj Quantum Inf. 5 1
[27] Hegde G and Bowen R C 2017 Sci. Rep. 7 42669
[28] Li X Y, Lou F, Gong X, and Xiang H 2020 New J. Phys. 22 053036
[29] Sehanobish A, Corzo H H, Kara O, and van Dijk D 2020 arXiv:2006.13297 [cs.LG]
[30] LeCun Y, Boser B, Denker J S, Henderson D, Howard R E, Hubbard W, and Jackel L D 1989 Neural Comput. 1 541
[31] Krizhevsky A, Sutskever I, and Hinton G E 2017 Commun. ACM 60 84
[32] Aloysius N and Geetha M 2017 Proc. 2017 International Conference on Communication and Signal Processing pp 0588–0592
[33] Yao G, Lei T, and Zhong J 2019 Pattern Recognit. Lett. 118 14
[34] Sultana F, Sufian A, and Dutta P 2020 Intelligent Computing: Image Processing Based Applications (Berlin: Springer) p 1
[35] Berthusen N F, Sizyuk Y, Scheurer M S, and Orth P P 2020 arXiv:2011.12911 [cond-mat.str-el]
[36] Goh G B, Siegel C, Vishnu A, Hodas N O, and Baker N 2017 arXiv:1706.06689 [stat.ML]
[37] Laanait N, Romero J, Yin J, Young M T, Treichler S, Starchenko V, Borisevich A, Sergeev A, and Matheson M 2019 arXiv:1909.11150 [cs.LG]
[38] Das A and Chakrabarti B K 2008 Rev. Mod. Phys. 80 1061
[39] Rodríguez-Laguna J, Migdał P, Berganza M I N, Lewenstein M, and Sierra G 2012 New J. Phys. 14 053028
[40] White S R 1992 Phys. Rev. Lett. 69 2863
[41] White S R 1993 Phys. Rev. B 48 10345
[42] Hinton G, Srivastava N, and Swersky K 2012 Neural Networks for Machine Learning, Lecture 6a: Overview of Mini-batch Gradient Descent
[43] Verstraete F and Cirac J I 2006 Phys. Rev. B 73 094423
[44] Zauner V, Draxler D, Vanderstraeten L, Degroote M, Haegeman J, Rams M M, Stojevic V, Schuch N, and Verstraete F 2015 New J. Phys. 17 053002
[45] Franchini F 2017 An Introduction to Integrable Techniques for One-dimensional Quantum Systems (Berlin: Springer)
[46] Schaffer R, Huh Y, Hwang K, and Kim Y B 2017 Phys. Rev. B 95 054410
[47] Repellin C, He Y C, and Pollmann F 2017 Phys. Rev. B 96 205124
[48] Mitarai K, Negoro M, Kitagawa M, and Fujii K 2018 Phys. Rev. A 98 032309
[49] Liu J G and Wang L 2018 Phys. Rev. A 98 062324
[50] Zhu D, Linke N M, Benedetti M, Landsman K A, Nguyen N H, Alderete C H, Perdomo-Ortiz A, Korda N, Garfoot A, Brecque C et al. 2019 Sci. Adv. 5 eaaw9918
[51] Georgescu I M, Ashhab S, and Nori F 2014 Rev. Mod. Phys. 86 153
[52] Vogel K and Risken H 1989 Phys. Rev. A 40 2847
[53] Cramer M, Plenio M B, Flammia S T, Somma R, Gross D, Bartlett S D, Landon-Cardinal O, Poulin D, and Liu Y K 2010 Nat. Commun. 1 149
[54] Lanyon B, Maier C, Holzäpfel M, Baumgratz T, Hempel C, Jurcevic P, Dhand I, Buyskikh A, Daley A, Cramer M et al. 2017 Nat. Phys. 13 1158
[55] Cook S A 1971 The Complexity of Theorem-Proving Procedures, in STOC'71: Proceedings of the Third Annual ACM Symposium on Theory of Computing (New York: ACM Press) pp 151–158
[56] Krzakala F and Zdeborová L 2009 Phys. Rev. Lett. 102 238701