Chinese Physics Letters, 2023, Vol. 40, No. 2, Article code 027501

Framework for Contrastive Learning Phases of Matter Based on Visual Representations

Xiao-Qi Han, Sheng-Song Xu, Zhen Feng, Rong-Qiang He*, and Zhong-Yi Lu*
Department of Physics, Renmin University of China, Beijing 100872, China

Received 7 November 2022; accepted manuscript online 3 January 2023; published online 17 January 2023
*Corresponding authors. Email: rqhe@ruc.edu.cn; zlu@ruc.edu.cn
Citation Text: Han X Q, Xu S S, Feng Z et al. 2023 Chin. Phys. Lett. 40 027501

Abstract: A main task in condensed-matter physics is to recognize, classify, and characterize phases of matter and the corresponding phase transitions, for which machine learning provides a new class of research tools due to the remarkable development in computing power and algorithms. Despite much exploration in this new field, different methods and techniques are usually needed for different scenarios. Here, we present SimCLP: a simple framework for contrastive learning phases of matter, which is inspired by the recent development in contrastive learning of visual representations. We demonstrate the success of this framework on several representative systems, including non-interacting and quantum many-body, conventional and topological. SimCLP is flexible and free of the usual burdens such as manual feature engineering and prior knowledge. The only prerequisite is to prepare enough state configurations. Furthermore, it can generate representation vectors and labels and hence help tackle other problems. SimCLP therefore paves an alternative way to the development of a generic tool for identifying unexplored phase transitions.
DOI:10.1088/0256-307X/40/2/027501 © 2023 Chinese Physics Society

The problem of classifying phases of matter has lasted for centuries, and more and more states of matter have been discovered.[1] In recent years, various types of machine learning methods have been applied to this subject.[2-10] According to whether or not the data need to be labeled, they are mainly divided into two categories: supervised and unsupervised methods. The data usually consist of state configurations (e.g., samples from a Monte Carlo simulation) or some other information deliberately prepared (e.g., entanglement spectra derived from wave functions), which serve as input to machine learning. The labels are usually our target, namely to which phases the data belong. Supervised methods can indeed learn phases efficiently when the labels are available,[11-23] but their applications are limited since in most cases the labels are unavailable. In contrast, unsupervised methods do not require labels. They recognize phases by extracting features or clustering the data. Some unsupervised methods, such as principal component analysis and variational autoencoders, are easy to implement and work well for simple systems (e.g., the two-dimensional Ising model), but fail for complex systems.[24-28] Some other unsupervised methods are technically difficult, requiring various problem-specific tricks.[24,29-43]

Inspired by recent progress in contrastive learning of visual representations,[44-46] in this Letter we propose a simple framework for contrastive learning phases of matter (dubbed SimCLP). It contains two identical neural networks (with the same architecture and parameters) and does not require labels. The outputs of one of them serve as labels for the other, and vice versa. Therefore, SimCLP is unsupervised, but its two neural networks can be trained as in supervised machine learning. In this way, SimCLP combines the merits of both supervised and unsupervised methods and circumvents their drawbacks. Furthermore, the output of each neural network is a representation vector of the input. The input data from physical systems with the same conditions should be physically similar, and hence the corresponding representation vectors should also be similar. Therefore, the training target is to maximize the similarity between these representation vectors. After the two neural networks are fully trained, we can readily predict phases and their transitions by quantifying the similarity between representation vectors for input data from physical systems with different conditions.

We would like to emphasize a number of points. Our framework is flexible. The architecture of the involved neural networks is not restricted, and various excellent neural networks from the AI field can be adopted straightforwardly. SimCLP does not require any prior knowledge, such as data labels, Hamiltonians, order parameters, or how many phases are involved. Preparing enough training data (i.e., state configurations) is the only prerequisite and is key to avoiding overfitting in the training. As valuable by-products, SimCLP can generate representation vectors and labels, and help tackle other problems.
For example, they can be utilized to set up supervised learning for other purposes.[7,12,14,23,47]

We demonstrate our framework and its practical implementation with several representative model systems: (1) the two-dimensional Ising model, which is a classical system developing a long-range magnetic order accompanied by spontaneous symmetry breaking below a certain temperature; (2) the quantum compass model, which is a quantum many-body system spontaneously breaking a directional symmetry below a certain temperature; (3) the Su–Schrieffer–Heeger (SSH) model, which features a topological phase transition protected by chiral symmetry. We correctly predict all the phases without using any prior knowledge. The only prerequisite is to prepare enough state configurations as input data to the neural networks, which are routinely generated with standard Monte Carlo simulations.

SimCLP Framework. Inspired by the recent contrastive learning algorithm SimCLR,[44] our SimCLP framework learns phases of matter by maximizing agreement between two samples from a training data set with the same model parameter $\eta$ via a contrastive loss of the two corresponding representation vectors. As illustrated in Fig. 1, this framework comprises the following four major components:
  • For a specific physical model with a variable model parameter $\eta$, which may encounter a phase transition at a certain point $\eta = \eta_{\rm c}$, we generate a series of training sets $\{ \varOmega_k \}$ and testing sets $\{ \varTheta_k \}$ (e.g., samples of state configurations drawn from a Monte Carlo simulation) with model parameter $\eta = \eta_k$, $k = 1,\,2,\,\ldots,\,N$. To avoid overfitting in the contrastive learning, each training set has to be large enough. The best performance is achieved when every sample in each training set is used only once. This is not a hard task, as a Monte Carlo algorithm can usually generate samples easily.
  • Two samples $x_i$ and $x_j$ are drawn from the training data sets; they form a positive pair if drawn from the data set with the same model parameter $\eta = \eta_k$, and a negative pair otherwise.
  • A neural network encoder $f(\cdot)$ extracts representation vectors ($z_i$ and $z_j$) from data examples ($x_i$ and $x_j$). Our framework allows various choices of the network architecture without any restriction. For simplicity, we adopt a multilayer perceptron (MLP) or LeNet[48] to obtain $z_i = f(x_i) = \mathrm{MLP}(x_i)$ or $\mathrm{LeNet}(x_i)$, where $z_i \in \mathbb{R}^d$ is the output of the neural network. A contrastive loss is defined on the $z_i$'s.
  • Following SimCLR,[44] a contrastive loss function is defined for a contrastive prediction task. Given a set $\{ x_l \}$ ($l = 1,\,2,\,\ldots,\,2N$ for example) including a positive pair of examples $x_i$ and $x_j$, the contrastive prediction task aims to identify $x_j$ in $\{ x_l \}_{l \ne i}$ for a given $x_i$.
Fig. 1. A framework for contrastive learning of phases of matter. Two samples $x_i$ and $x_j$ are drawn from the training data sets; they form a positive pair if drawn from the data set with the same model parameter $\eta$, and a negative pair otherwise. An encoder network $f(\cdot)$ is trained to maximize agreement using a contrastive loss of the two corresponding representation vectors $z_i$ and $z_j$.
We randomly sample two examples from each training set with model parameter $\eta = \eta_k$, resulting in a minibatch of $2N$ data points, on which a contrastive prediction task is defined. Similar to SimCLR,[44] we do not sample negative examples explicitly. Instead, given a positive pair, we treat the other $2(N - 1)$ examples within a minibatch as negative examples. The cosine similarity is defined for two vectors $u$ and $v$ as $\mathrm{sim}(u,v)={{u}^{T}}v/\|u\|\|v\|$, which is the dot product between ${{\ell }_{2}}$-normalized $u$ and $v$. Then the loss function for a positive pair of examples $(i,j)$ is defined as \begin{align} {{\ell }_{i,j}}=-\log \frac{\exp (\mathrm{sim}({{z}_{i}},{{z}_{j}})/\tau)}{\sum _{l=1,l \ne i}^{2N}\,\exp (\mathrm{sim}({{z}_{i}},{{z}_{l}})/\tau)}, \tag {1} \end{align} where $\tau$ denotes a temperature parameter for the neural network, which should not be confused with the temperature $T$ for a physical system. The final loss [the normalized temperature-scaled cross entropy loss (NT-Xent)] is computed across all positive pairs, both $(i,j)$ and $(j,i)$, in a mini-batch. Algorithm 1 summarizes the proposed framework.
        Algorithm 1: SimCLP's learning algorithm
preparation: generate a series of training sets $\{\varOmega_k\}$
and testing sets $\{\varTheta_k\}$ (e.g. samples of state
configurations drawn from a Monte Carlo simulation)
for a specific physical model with model parameter
$\eta = \eta_k$, $k = 1,\,2,\,\ldots,\,N$.
input: batch size $N$, constant $\tau$, structure of $f$
for sampled minibatch $\{{{{x}}_{k}}\}_{k=1}^{N}$ do
     for all $k \in \{1, \ldots, N\}$ do
         draw two samples $x_{2k-1}$ and $x_{2k}$ from
         the training data set with model parameter $\eta = \eta_k$
     end for
     for all $i \in \{1, \ldots, 2N\}$ and $j \in \{1, \ldots, 2N\}$ do
         $s_{i,j} = \frac{z_{i}^{\top} z_{j}}{\|z_{i}\| \|z_{j}\|}$      # pairwise cosine similarity
     end for
     define $\ell(i,j) = -\log \frac{\exp(s_{i,j}/\tau)}{\sum_{l=1, l \ne i}^{2N} \exp(s_{i,l}/\tau)}$
     $\mathcal{L} = \frac{1}{2N} \sum_{k=1}^{N} [\ell(2k-1,2k) + \ell(2k,2k-1)]$
     update networks $f$ to minimize $\mathcal{L}$
end for
prediction: quantify the similarity between two
differently conditioned systems with model parameters
$\eta_k$ and $\eta_{k'}$ by averaging the cosine similarity of
pair samples $x_k$ and $x_{k'}$ drawn from the testing sets
with model parameters $\eta_k$ and $\eta_{k'}$ to predict
phases and their transitions.
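For concreteness, a minimal PyTorch sketch of Algorithm 1 could look as follows; the helper names, the toy two-layer MLP encoder (for $10 \times 10$ configurations), and the hyperparameters ($\tau$, learning rate, representation dimension) are illustrative choices here, not the exact settings used in this work.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def nt_xent_loss(z, tau=0.5):
        # NT-Xent loss of Eq. (1) for 2N representation vectors ordered so that
        # rows (2k, 2k+1) form the positive pairs (0-based indexing).
        two_n = z.shape[0]
        z = F.normalize(z, dim=1)                    # l2-normalize -> cosine similarity
        s = z @ z.t() / tau                          # pairwise s_{i,j} / tau
        s = s.masked_fill(torch.eye(two_n, dtype=torch.bool), float('-inf'))  # exclude l = i
        pos = torch.arange(two_n) ^ 1                # positive partner: 0<->1, 2<->3, ...
        return F.cross_entropy(s, pos)               # averages ell(i,j) over all 2N anchors

    # toy encoder f(.): flattened state configuration -> 2D representation vector
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(10 * 10, 10), nn.ReLU(), nn.Linear(10, 2))
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    def training_step(training_sets):
        # training_sets: one tensor of shape (n_samples, L, L) per model parameter eta_k;
        # ideally every configuration is used only once over the whole training (see text).
        pairs = [omega_k[torch.randperm(len(omega_k))[:2]] for omega_k in training_sets]
        x = torch.cat(pairs, dim=0).float()          # minibatch of 2N data points
        loss = nt_xent_loss(encoder(x))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()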
Numerical Results and Discussions. Contrastive Learning Phases of the Ising Model. First, we apply our method to the prototypical example of the square-lattice ferromagnetic Ising model, \begin{align} H=-J\sum\limits_{\langle ij \rangle }{\sigma _{i}^{z}\sigma _{j}^{z}}, \tag {2} \end{align} where the Ising variables $\sigma _{i}^{z} = \pm 1$. We set $J = 1$ as the energy unit. For a system with linear size $L$, there are $N_{\rm s} = L^2$ lattice sites and hence the state space is of size $2^{N_{\rm s}}$. There is a well-understood phase transition[49] at temperature $T_{\rm c} = 2.269$, separating the high-temperature disordered phase from the low-temperature ferromagnetic phase. The standard Monte Carlo method is employed to generate enough uncorrelated state configurations to constitute the training and testing data sets with $N = 51$ temperatures $T_k = 1 + (k - 1) \Delta T$ and $\Delta T = 0.05$. We adopt an MLP with one hidden layer (consisting of 10 neurons) as the encoder neural network to obtain $z_i = f(x_i) = W^{(2)}\sigma(W^{(1)} x_i)$, where $\sigma$ is a ReLU nonlinearity and $W = \{W^{(1)}, W^{(2)} \}$ are learnable parameters. The Adam algorithm is used for neural network optimization. To avoid overfitting, each example in the training sets is used only once. After the neural network is trained, we predict possible phase transitions by calculating the adjacent similarity and the mutual similarity with the testing sets. The adjacent similarity at a temperature $T$ is the averaged cosine similarity of the representation vectors of paired examples from two testing sets with $T - \Delta T$ and $T + \Delta T$, which measures the similarity between state configurations at the two temperatures $T \pm \Delta T$. Similarly, the mutual similarity is defined for any two temperatures $T$ and $T^{\prime}$ and hence measures the similarity between state configurations at any two temperatures.
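The adjacent and mutual similarities described above could be evaluated along the following lines (an illustrative sketch with our own helper names; the encoder is assumed to be the trained network $f$ and each testing set a tensor of shape (n, L, L) with equal n).

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def mean_pair_similarity(encoder, set_a, set_b):
        # average cosine similarity between representation vectors of paired examples
        za = F.normalize(encoder(set_a.float()), dim=1)
        zb = F.normalize(encoder(set_b.float()), dim=1)
        return (za * zb).sum(dim=1).mean().item()

    def adjacent_similarity(encoder, testing_sets):
        # similarity between configurations at T_k - dT and T_k + dT, i.e. sets k-1 and k+1
        return [mean_pair_similarity(encoder, testing_sets[k - 1], testing_sets[k + 1])
                for k in range(1, len(testing_sets) - 1)]

    def mutual_similarity(encoder, testing_sets):
        # similarity between configurations at any two temperatures T and T'
        n = len(testing_sets)
        return [[mean_pair_similarity(encoder, testing_sets[k], testing_sets[kp])
                 for kp in range(n)] for k in range(n)]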
Fig. 2. Cosine similarities estimated for the two-dimensional Ising model. (a) Adjacent similarity, which measures the similarity between state configurations with temperatures $T - \Delta T$ and $T + \Delta T$. The linear size of the model $L = 10,\,20,\,30$, and $40$. (b) Mutual similarity with $L = 10$, which measures the similarity between state configurations with any two temperatures $T$ and $T^{\prime}$.
As shown in Fig. 2(a), the adjacent similarity is close to 1 (the saturation value) for temperatures far away from $T_{\rm c}$ and presents a V-shape around $T \sim T_{\rm c}$. The mutual similarity for two temperatures $T$ and $T^{\prime}$ decreases as $|T - T^{\prime}|$ increases. This is because physically the state configurations are similar within the same phase, but qualitatively different between different phases. Near the phase transition point, the structure of the state configurations varies quickly and hence the adjacent similarity drops, indicating a phase transition point [Fig. 2(a)]. As shown in Fig. 2(b), the mutual similarities are high for state configurations from the same phase and low otherwise, successfully discriminating different phases of matter. In this way, our method captures this structural variation and leverages it to correctly predict the phases and their transition point. Further, we examine the representation vectors $z_i$ of the state configurations, which are visualized in Fig. 3. The dimension of a representation vector is not limited. More dimensions may be needed for other physical systems. However, surprisingly, we find that two dimensions are sufficient for all our example systems. When normalized, the two-dimensional representation vectors lie on a unit circle (Fig. 3). Before training the encoder neural network, the representation vectors are located randomly on the circle, while after training, they move along the circle as the temperature varies, and the high-temperature and low-temperature ones form separate clusters.
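A plot of this kind can be produced by normalizing the two-dimensional representation vectors and coloring them by temperature, e.g. as in the following illustrative matplotlib sketch (not the plotting code used for Fig. 3).

    import torch
    import torch.nn.functional as F
    import matplotlib.pyplot as plt

    @torch.no_grad()
    def plot_representations(encoder, testing_sets, temperatures):
        fig, ax = plt.subplots(figsize=(4, 4))
        for x, T in zip(testing_sets, temperatures):
            z = F.normalize(encoder(x.float()), dim=1)   # project onto the unit circle
            ax.scatter(z[:, 0], z[:, 1], s=4, c=[T] * len(z), cmap='coolwarm',
                       vmin=min(temperatures), vmax=max(temperatures))
        ax.set_aspect('equal')
        ax.set_xlabel('$z_1$')
        ax.set_ylabel('$z_2$')
        plt.show()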
Fig. 3. Representation vectors of the state configurations of the Ising model with different temperatures, which are of dimension 2 and hence are located on a circle when normalized. (a) Before training the encoder neural network, the representation vectors are located randomly on the circle. (b) After training, the representation vector moves on the circle as the temperature varies.
Contrastive Learning Phases of the Quantum Compass Model. As shown above, our method works successfully for a classical physical system. Now, we test it on a quantum system, i.e., the quantum compass model, \begin{align} H = -\frac{1}{4}{{J}_{x}}\sum\limits_{j}{{{X}_{j}}}{{X}_{j+{{e}_{x}}}}-\frac{1}{4}{{J}_{z}}\sum\limits_{j}{{{Z}_{j}}}{{Z}_{j+{{e}_{z}}}},\tag {3} \end{align} where $e_x$ and $e_z$ are unit vectors along the $x$ and $z$ directions, respectively, and $X_j$ and $Z_j$ represent the Pauli $x$ and $z$ operators at lattice site $j$, respectively. An order parameter is proposed in Ref. [50], $Q = |{\langle {{X}_{j}}{{X}_{j+{{e}_{x}}}}-{{Z}_{j}}{{Z}_{j+{{e}_{z}}}}\rangle}|$, which is zero when the temperature $T > T_{\rm c} = 0.0585(3)$[51] and nonzero when $T < T_{\rm c}$. At high temperatures, the system is disordered and the $x$ and $z$ directions are symmetrically equivalent. However, at temperatures below $T_{\rm c}$, the system spontaneously breaks the directional symmetry, the $x$ and $z$ directions are no longer equivalent, and hence the order parameter is finite. We set $J = J_x = J_z = 1$ as the energy unit. The stochastic series expansion (SSE) quantum Monte Carlo method[52] is employed to generate enough uncorrelated state configurations to constitute the training and testing data sets with $N = 51$ temperatures $T_k = 0.01 + (k - 1) \Delta T$ and $\Delta T = 0.003$. Here, LeNet[48] is used as the encoder neural network, which consists of a two-layer convolutional neural network followed by a two-layer fully connected neural network.
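A LeNet-style encoder of this kind could be sketched as below; the specific channel counts and hidden width are our own guesses, since the text specifies only two convolutional layers followed by two fully connected layers.

    import torch.nn as nn

    class LeNetEncoder(nn.Module):
        def __init__(self, L=32, rep_dim=2):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(6, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.fc = nn.Sequential(
                nn.Flatten(),
                nn.Linear(16 * (L // 4) * (L // 4), 64), nn.ReLU(),
                nn.Linear(64, rep_dim),          # 2D representation vector z
            )

        def forward(self, x):                    # x: (batch, L, L) spin configurations
            return self.fc(self.conv(x.unsqueeze(1).float()))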
Fig. 4. Cosine similarities estimated for the quantum compass model. (a) Adjacent similarity, which measures the similarity between state configurations with temperatures $T - \Delta T$ and $T + \Delta T$. The linear size of the model $L = 12$, 16, 24, and $32$. (b) Mutual similarity with $L = 32$, which measures the similarity between state configurations with any two temperatures $T$ and $T^{\prime}$.
As shown in Fig. 4, similar to the Ising model case, the adjacent similarity is close to 1 (the saturation value) for temperatures far away from $T_{\rm c}$ and presents a V-shape around $T \sim T_{\rm c}$. As $L$ increases, the position of the dip moves towards the true transition point (note the severe finite-size effect in this model[51]). Our method successfully predicts the phase transition point and thus also works for this quantum many-body system.

Contrastive Learning Phases of the Su–Schrieffer–Heeger Model. As the last example, we study the SSH model,[53] a one-dimensional model hosting a topological insulator phase characterized by a global topological invariant, i.e., the winding number, and protected by chiral symmetry.[54] The Hamiltonian reads \begin{align} H =& -(J + \kappa) \sum\limits_{i}{(c_{_{\scriptstyle i,A}}^{+}{{c}_{i,B}} + {\rm H.c.})} \notag\\ & -(J - \kappa) \sum\limits_{i}{(c_{_{\scriptstyle i,B}}^{+}{{c}_{i+1,A}} + {\rm H.c.})},\tag {4} \end{align} where $c_{_{\scriptstyle i,A(B)}}^+$ and $c_{_{\scriptstyle i,A(B)}}$ are the creation and annihilation operators of the spinless fermions on the $A$($B$) sublattice site in the $i$th unit cell, respectively. Each unit cell consists of two sites, $(i,A)$ and $(i,B)$. The hopping terms always connect two adjacent lattice sites, an $A$ sublattice site and a $B$ sublattice site. The hopping amplitude within the unit cell is $-(J + \kappa)$, and that between two neighboring unit cells is $-(J - \kappa)$. We set $J = 1$ as the energy unit. As is well known, the half-filled system is a trivial insulator when $\kappa > 0$ and a topological insulator when $\kappa < 0$. A topological phase transition occurs at $\kappa = 0$. Periodic boundary conditions are used in our numerical calculation. Standard variational Monte Carlo provides us with state configurations weighted by the amplitude squared of the ground-state wave function in real space. The training and testing data sets are prepared for $N = 51$ values of $\kappa$, $\kappa_k = -0.75 + (k - 1) \Delta \kappa$ with $\Delta \kappa = 0.03$. The two-layer MLP is used as the encoder neural network, the same as in the Ising model case above.
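As a stand-alone check independent of SimCLP, the winding number of the Bloch Hamiltonian of Eq. (4) can be evaluated numerically; up to the sign convention of the Fourier transform, its magnitude is 1 for $\kappa < 0$ (topological) and 0 for $\kappa > 0$ (trivial), consistent with the transition at $\kappa = 0$. The following is a short illustrative NumPy sketch.

    import numpy as np

    def winding_number(kappa, J=1.0, n_k=400):
        # winding of h(k) = -(J + kappa) - (J - kappa) * exp(-i k) around the origin,
        # accumulated over the Brillouin zone; |nu| = 1 for kappa < 0 and 0 for kappa > 0
        k = np.linspace(0.0, 2.0 * np.pi, n_k, endpoint=False)
        h = -(J + kappa) - (J - kappa) * np.exp(-1j * k)
        dphi = np.angle(np.roll(h, -1) / h)      # phase increment between neighboring k points
        return abs(int(round(dphi.sum() / (2.0 * np.pi))))

    for kappa in (-0.5, -0.1, 0.1, 0.5):
        print(kappa, winding_number(kappa))      # -> 1, 1, 0, 0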
Fig. 5. Cosine similarities estimated for the SSH model. (a) Adjacent similarity, which measures the similarity between ground-state configurations with $\kappa - \Delta \kappa$ and $\kappa + \Delta \kappa$. System sizes $L = 16$, 24, 32, and $48$. (b) Mutual similarity with $L = 48$, which measures the similarity between state configurations with any two model parameters $\kappa$ and $\kappa^{\prime}$.
As shown in Fig. 5(a), similar to the previous models, the adjacent similarity is close to 1 (the saturation value) for $\kappa$'s far away from $\kappa_{\rm c} = 0$ and presents a V-shape around $\kappa \sim \kappa_{\rm c}$. Our method successfully predicts the topological phase transition point. From a conventional (local) point of view, the topological insulating phase ($\kappa < 0$) and the trivial insulating phase ($\kappa > 0$) are difficult to discriminate because the two phases are locally similar. On the other hand, they can be clearly discriminated through the topological invariant, i.e., they are globally different. Remarkably, as shown in Fig. 5(b), SimCLP can readily discriminate the two topologically different phases. The mutual similarity is close to 1 (maximal similarity) for state configurations in the same phase and close to $-1$ (maximal difference) for state configurations from the two different phases. It is worth emphasizing that SimCLP achieves this solely by learning from state configurations, without the complicated mathematics or phase-specific tricks required in other methods, showing the simplicity and broad applicability of our framework.

In summary, we have proposed a framework for contrastive learning phases of matter and shown that it can encode phases of various systems, no matter whether they are non-interacting or quantum many-body, conventional or topological, and can predict phase transitions readily when enough state configurations are provided. Our framework is simple and flexible. It does not need any prior knowledge such as data labels and order parameters. It is therefore expected to play an important role in the study of unknown phases, for which the order parameters are unknown, as well as many exotic phases, for which various problem-specific tricks are needed and are difficult to devise.[24,29-43]

We typically generate about 100000 state configurations for each model parameter. Every state configuration is used only once when training the neural network. Here, 2000 configurations are not enough, while 100000 are not exhausted before the machine learning converges. We think that the minimum number of state configurations is problem dependent.

All the example systems shown in this study have only two phases in their phase spaces. Our framework should work directly for phase spaces with multiple phases or re-entrance of phases, where the adjacent similarity curves would show two or more dips separating multiple phases and the dimensionality of the representation vector may need to be larger.

As valuable by-products, our framework can provide representation vectors and labels for state configurations, which may be used for other purposes, such as ground state representation,[55-57] accurate determination of phase transition points and critical exponents,[16,20,58] quantum error correction protocols,[59] and quantum state tomography.[60]

Acknowledgments. This work was supported by the National Natural Science Foundation of China (Grant Nos. 11874421 and 11934020). Computational resources were provided by the Physical Laboratory of High Performance Computing in RUC.
References
[1] Sachdev S 2011 Quantum Phase Transitions 2nd edn (Cambridge: Cambridge University Press)
[2] Carleo G, Cirac I, Cranmer K, Daudet L, Schuld M, Tishby N, Vogt-Maranto L, and Zdeborová L 2019 Rev. Mod. Phys. 91 045002
[3] Dunjko V and Briegel H J 2018 Rep. Prog. Phys. 81 074001
[4] Carrasquilla J and Torlai G 2021 PRX Quantum 2 040201
[5] Carrasquilla J 2020 Adv. Phys.: X 5 1797528
[6] Uvarov A V, Kardashin A S, and Biamonte J D 2020 Phys. Rev. A 102 012415
[7] Bai X D, Zhao J, Han Y Y, Zhao J C, and Wang J G 2021 Phys. Rev. B 103 134203
[8] Bohrdt A, Kim S, Lukin A, Rispoli M, Schittko R, Knap M, Greiner M, and Léonard J 2021 Phys. Rev. Lett. 127 150504
[9] Hsu Y T, Li X, Deng D L, and Sarma S D 2018 Phys. Rev. Lett. 121 245701
[10] Zhang H L, Jiang S, Wang X, Zhang W G, Huang X Z, Ouyang X L, Yu Y F, Liu Y Q, Deng D L, and Duan L M 2022 Nat. Commun. 13 4993
[11] Carrasquilla J and Melko R G 2017 Nat. Phys. 13 431
[12] Ch'ng K, Carrasquilla J, Melko R G, and Khatami E 2017 Phys. Rev. X 7 031038
[13] Beach M J S, Golubeva A, and Melko R G 2018 Phys. Rev. B 97 045207
[14] Venderley J, Khemani V, and Kim E A 2018 Phys. Rev. Lett. 120 257204
[15] Deng D L, Li X, and Sarma S D 2017 Phys. Rev. B 96 195145
[16] Sancho-Lorente T, Román-Roche J, and Zueco D 2021 arXiv:2109.02686 [quant-ph]
[17] Driskell G, Lederer S, Bauer C, Trebst S, and Kim E A 2021 Phys. Rev. Lett. 127 046601
[18] Havlíček V, Córcoles A D, Temme K, Harrow A W, Kandala A, Chow J M, and Gambetta J M 2019 Nature 567 209
[19] Schindler F, Regnault N, and Neupert T 2017 Phys. Rev. B 95 245134
[20] Zhang W Z, Liu J Y, and Wei T C 2019 Phys. Rev. E 99 032142
[21] Zhang P F, Shen H T, and Zhai H 2018 Phys. Rev. Lett. 120 066401
[22] Miyajima Y, Murata Y, Tanaka Y, and Mochizuki M 2021 Phys. Rev. B 104 075114
[23] Théveniaut H and Alet F 2019 Phys. Rev. B 100 224202
[24] van Nieuwenburg E P L, Liu Y H, and Huber S D 2017 Nat. Phys. 13 435
[25] Wang L 2016 Phys. Rev. B 94 195105
[26] Kharkov Y A, Sotskov V E, Karazeev A A, Kiktenko E O, and Fedorov A K 2020 Phys. Rev. B 101 064406
[27] Wetzel S J 2017 Phys. Rev. E 96 022140
[28] Hu W J, Singh R R P, and Scalettar R T 2017 Phys. Rev. E 95 062122
[29] Rodriguez-Nieva J F and Scheurer M S 2019 Nat. Phys. 15 790
[30] Huembeli P, Dauphin A, and Wittek P 2018 Phys. Rev. B 97 134109
[31] Broecker P, Assaad F F, and Trebst S 2017 arXiv:1707.00663 [cond-mat.str-el]
[32] Balabanov O and Granath M 2020 Phys. Rev. Res. 2 013354
[33] Canabarro A, Fanchini F F, Malvezzi A L, Pereira R, and Chaves R 2019 Phys. Rev. B 100 045129
[34] Greplova E, Valenti A, Boschung G, Schäfer F, Lörch N, and Huber S D 2020 New J. Phys. 22 045003
[35] Wang J L, Zhang W Z, Hua T, and Wei T C 2021 Phys. Rev. Res. 3 013074
[36] Wang R, Ma Y G, Wada R, Chen L W, He W B, Liu H L, and Sun K J 2020 Phys. Rev. Res. 2 043202
[37] Che Y M, Gneiting C, Liu T, and Nori F 2020 Phys. Rev. B 102 134213
[38] Shen J M, Li W, Deng S F, and Zhang T 2021 Phys. Rev. E 103 052140
[39] Ni Q, Tang M, Liu Y, and Lai Y C 2019 Phys. Rev. E 100 052312
[40] Lee S S and Kim B J 2019 Phys. Rev. E 99 043308
[41] Scheurer M S and Slager R J 2020 Phys. Rev. Lett. 124 226401
[42] Tibaldi S, Magnifico G, Vodola D, and Ercolessi E 2022 arXiv:2202.09281 [cond-mat.supr-con]
[43] Yu L W and Deng D L 2021 Phys. Rev. Lett. 126 240402
[44] Chen T, Kornblith S, Norouzi M, and Hinton G 2020 Proc. Machine Learning Res. 119 1597
[45] Wang Y, Wang J, Cao Z, and Farimani A B 2022 Nat. Mach. Intell. 4 297
[46] Yang Z, Song J, Yang M, Yao L, Zhang J, Shi H, Ji X, Deng Y, and Wang X 2021 Anal. Chem. 93 16947
[47] Liu C Y and Wang D W 2021 Phys. Rev. B 103 205107
[48] LeCun Y, Boser B, Denker J S, Henderson D, Howard R E, Hubbard W, and Jackel L D 1989 Neural Comput. 1 541
[49] Onsager L 1944 Phys. Rev. 65 117
[50] Mishra A, Ma M, Zhang F C, Guertler S, Tang L H, and Wan S 2004 Phys. Rev. Lett. 93 207201
[51] Wenzel S, Janke W, and Läuchli A M 2010 Phys. Rev. E 81 066702
[52] Syljuåsen O F and Sandvik A W 2002 Phys. Rev. E 66 046701
[53] Heeger A J, Kivelson S, Schrieffer J R, and Su W P 1988 Rev. Mod. Phys. 60 781
[54] Shen S Q 2017 Topological Phases in One Dimension, in Topological Insulators (Berlin: Springer) pp 81–90
[55] Carleo G and Troyer M 2017 Science 355 602
[56] Glasser I, Pancotti N, August M, Rodriguez I D, and Cirac J I 2018 Phys. Rev. X 8 011006
[57] Cai Z and Liu J 2018 Phys. Rev. B 97 035116
[58] Bachtis D, Aarts G, and Lucini B 2021 Phys. Rev. Res. 3 013134
[59] Torlai G and Melko R G 2017 Phys. Rev. Lett. 119 030501
[60] Torlai G, Mazzola G, Carrasquilla J, Troyer M, Melko R, and Carleo G 2018 Nat. Phys. 14 447