Chinese Physics Letters, 2022, Vol. 39, No. 5, Article code 050301 State Classification via a Random-Walk-Based Quantum Neural Network Lu-Ji Wang (王露吉)1,2, Jia-Yi Lin (林嘉懿)1,2, and Shengjun Wu (吴盛俊)1,2* Affiliations 1Institute for Brain Sciences and Kuang Yaming Honors School, Nanjing University, Nanjing 210023, China 2School of Physics, Nanjing University, Nanjing 210093, China Received 22 February 2022; accepted 30 March 2022; published online 26 April 2022 *Corresponding author. Email: sjwu@nju.edu.cn Citation Text: Wang L J, Lin J Y, and Wu S J 2022 Chin. Phys. Lett. 39 050301    Abstract In quantum information technology, crucial information is regularly encoded in different quantum states. To extract information, identifying one state from the others is unavoidable. However, if the states are non-orthogonal and unknown, this task becomes extremely difficult, especially when our resources are also limited. Here, we introduce the quantum stochastic neural network (QSNN), and show its capability to accomplish the binary discrimination of quantum states. After a handful of optimization iterations, the QSNN achieves a success probability close to the theoretical optimum, no matter whether the states are pure or mixed. Beyond binary discrimination, the QSNN is also applied to classify an unknown set of states into two types: entangled ones and separable ones. After training with four samples, it can classify a number of states with acceptable accuracy. Our results suggest that the QSNN has great potential for processing unknown quantum states in quantum information.
DOI:10.1088/0256-307X/39/5/050301 © 2022 Chinese Physics Society Article Text Quantum state discrimination identifies a quantum state among an already known set of candidate states. It is a key step in many quantum technologies, as quantum states are the carriers of information in quantum computing protocols and quantum information processing. Although quantum mechanics fundamentally forbids deterministic discrimination of non-orthogonal states, probabilistic methods such as minimum error discrimination[1] and unambiguous discrimination[2–4] have been developed, and their theoretical best performances have also been found. Beyond state discrimination, a special case of classification, one would expect to classify quantum states by other properties of particular interest. A well-studied example is the classification of states according to whether the state is entangled or not. Many strategies have been proposed for detecting entanglement, such as using the positive partial transpose (PPT) criterion,[5] entanglement witnesses,[6–8] and the Clauser–Horne–Shimony–Holt (CHSH) inequality.[9,10] Most of the strategies mentioned above require complete knowledge of the quantum states before the corresponding optimal measurements can be carried out. However, exactly determining all the states is in principle forbidden, unless infinitely many copies of the states are provided. In Ref. [11], Massar and Popescu addressed this topic, known as state estimation, and proved that the optimal mean fidelity for two-level state estimation is $\frac{N+1}{N+2}$, where $N$ is the number of identically prepared copies of the state. The optimal mean fidelity is less than $1$ and tends towards $1$ as the number of copies $N$ tends to infinity. In Ref. [12], the authors extended this result to finite-dimensional quantum systems in pure states. 
All these results indicate that pre-processing by quantum state estimation before carrying out state discrimination or classification will introduce extra errors. In addition, state estimation often involves a quantum information technique called state tomography, which is sometimes prohibitively expensive to perform on an unknown state.[13] In the state classification task, it also becomes practically impossible to perform a tomography for each state, because the number of states waiting to be classified can be extremely large. Besides the extra errors introduced by the pre-processing and the expensive tomography, there is another difficulty for these traditional strategies: even if we exactly know the quantum states to be identified or classified, the optimal measurement is hard to derive analytically in the fashion of traditional strategies when more than two states are involved.[14,15] Facing the above difficulties, we hope for a new strategy for state discrimination and classification. Recently, there has been a rising trend of using machine learning methods to fully exploit the inherent advantages of quantum technologies.[16,17] Among these studies, the classification problem has received a great deal of attention, and some quantum classifiers have shown excellent performance.[18,19] In addition, as a fusion of deep learning and quantum computation, quantum neural networks[20,21] have proved to be effective and almost irreplaceable in many quantum tasks, such as quantum operation implementations,[22–24] many-body system simulations,[25,26] and quantum autoencoders.[24,27,28] There have also been several successful attempts to classify quantum states with quantum neural networks. Chen et al.[29] utilized a hybrid approach to learn the design of a quantum circuit that distinguishes between two families of non-orthogonal quantum states and generalizes to previously unseen quantum data. 
This approach was further extended to noisy devices by Patterson et al.[30] Cong et al.[31] introduced a quantum convolutional neural network that accurately recognizes quantum phases. Those neural networks can all be implemented on near-term devices. Dalla Pozza and Caruso[32] proposed a quantum network with state discrimination capability based on an open quantum system. A closely related protocol was then experimentally implemented by Laneve et al.,[33] which provided a novel approach to multi-state discrimination. Inspired by these works, in this Letter we introduce a new kind of quantum neural network, i.e., the quantum stochastic neural network (QSNN), to complete quantum state discrimination and classification tasks. Our network is based on quantum stochastic walks (QSWs),[34] which have been theoretically proposed and experimentally implemented to simulate the associative memory of Hopfield neural networks.[35,36] Therefore, it is worthwhile to explore the power of quantum walks in building general quantum neural networks. Approach. A classical random walk describes the probabilistic motion of a walker over a graph. Farhi and Gutmann[37] generalized classical random walks to quantum versions, i.e., continuous-time quantum walks (CTQWs). QSWs are further generalizations of CTQWs obtained by introducing decoherence, so that both classical and quantum random walks can be described by them. The state of the walker is described by a density matrix $\rho$, and evolves as[38–40] $$ \frac{d\rho}{dt}=-i[H,\rho]+\sum_{k} \Big(L_k\rho L_k^†-\frac{1}{2}\{L^†_k L_k,\rho\} \Big),~~ \tag {1} $$ where $H$ is the Hamiltonian and $L_k$ is a Lindblad operator. Our QSNN is based on QSWs [implied by Eq. (1)] rather than CTQWs, because we need to introduce decoherence into the QSNN to simulate the forward propagation of probabilities in classical networks. 
The state of the QSNN is described by a density matrix $\rho=\sum_{ij}\rho_{ij}|i\rangle\langle j|$ in the $N$-dimensional Hilbert space, where $\{|i\rangle\}_{i=0}^{N-1}$ is an orthogonal basis, and each basis state $|i\rangle$ corresponds to a vertex of the graph. The vertices can be seen as neurons of the QSNN. As an example, a QSNN consisting of three layers and $6$ neurons is shown in Fig. 1. The state of the QSNN evolves according to Eq. (1), where the Hamiltonian $H$ and the Lindblad operators $L_k$ respectively determine the coherent and decoherent parts of the dynamics. In our approach, we use the Hamiltonian $H=\sum_{ij}h_{ij}|i\rangle\langle j|$ to characterize the coherent transmission between neurons. In general, the coefficients $h_{ij}$ are complex numbers with the requirement $h_{ij}=h_{ji}^*$ to ensure the hermiticity of the Hamiltonian. However, real coefficients $h_{ij}\in\mathbb{R}$ are sufficient for the tasks of our interest, and we do not consider the coupling of a neuron to itself. Thus, the Hamiltonian is written as $$\begin{aligned} H=\sum_{ij}h_{ij}|i\rangle\langle j| = \sum_{i < j}h_{ij}(|i\rangle\langle j|+|j\rangle\langle i|). \end{aligned}~~ \tag {2} $$ The Lindblad operators used in Eq. (1) are of the form $$\begin{aligned} L_k\rightarrow L_{ij}=\gamma_{ij}|i\rangle\langle j |, \end{aligned}~~ \tag {3} $$ which simulates the decoherent (typically one-way) transmission from the $j$th neuron to the $i$th neuron. The coefficient $\gamma_{ij}$ that characterizes the dissipation rate is in general a real number. We group the coefficients in the Hamiltonian into a single vector ${\boldsymbol h}=(h_1, h_2, \ldots, h_k, \ldots)$ and the coefficients in the Lindblad operators into another vector ${\boldsymbol \gamma}=(\gamma_1, \gamma_2, \ldots, \gamma_k, \ldots)$. The coefficient vectors ${\boldsymbol h}$ and ${\boldsymbol \gamma}$ are the parameters of the QSNN that need to be optimized. 
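As an illustration, the dynamics of Eq. (1) with the operators of Eqs. (2) and (3) can be integrated numerically. The sketch below is our own illustration, not the authors' implementation: it builds $H$ and the $L_{ij}$ from given coefficient dictionaries (the function names and the simple first-order Euler integrator are assumptions; any standard master-equation solver would serve equally well).

```python
import numpy as np

def hamiltonian(h, N):
    """H = sum_{i<j} h_ij (|i><j| + |j><i|), Eq. (2); `h` maps (i, j) -> h_ij."""
    H = np.zeros((N, N), dtype=complex)
    for (i, j), hij in h.items():
        H[i, j] += hij
        H[j, i] += hij
    return H

def lindblad_ops(gamma, N):
    """L_ij = gamma_ij |i><j|, Eq. (3); `gamma` maps (i, j) -> gamma_ij."""
    ops = []
    for (i, j), gij in gamma.items():
        L = np.zeros((N, N), dtype=complex)
        L[i, j] = gij
        ops.append(L)
    return ops

def evolve(rho, H, Ls, T, steps=4000):
    """First-order Euler integration of the master equation, Eq. (1)."""
    dt = T / steps
    for _ in range(steps):
        drho = -1j * (H @ rho - rho @ H)
        for L in Ls:
            LdL = L.conj().T @ L
            drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
        rho = rho + dt * drho
    return rho
```

For example, a single dissipative edge $L_{01}=\gamma|0\rangle\langle 1|$ with $H=0$ drains the population of neuron 1 into neuron 0 at rate $\gamma^2$, which is the one-way transmission described in the text.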
In this study, the dissipation and the Hamiltonian couplings only exist between certain neurons, which is discussed in more detail in the Supplementary Material (SM) I(A). As shown in Fig. 1, the Lindblad operators (the orange lines with arrows) only connect the neurons in adjacent layers to transfer the probability amplitude from one layer to the other, so that probability converges in the output layer. The Hamiltonian couplings (the green dashed lines) only exist between some neurons in the input and hidden layers to perform a non-directional transmission. The QSNN is initialized by encoding the state to be classified into the state of the input-layer neurons of the network. To be specific, if we use an $N$-dimensional QSNN with $n$ input-layer neurons to classify $n$-level quantum states $\rho$, the state of the network is initialized as $\rho_{\rm in}=\rho\oplus 0_{{N-n}, N-n}$, where $0_{i,j}$ represents an $i\times j$ zero matrix. Then, the network evolves according to Eq. (1) for a duration $T$ from its initial state $\rho_{\rm in}$ and gives the final state $\rho_{{\rm out}}^{s}$ for the $s$th input. The evolution time $T$ is considered dimensionless (it actually has dimension $1/\gamma$, where $\gamma$ is a typical value of the coupling parameters $h_k$ in the Hamiltonian or the dissipation rates $\gamma_k$ in the Lindblad operators). The final state describes the probability that the walker is on each vertex (neuron) at time $T$. The probability converges in the output layer due to the existence of the one-way decoherent transmission. We associate the output neurons, one by one, with the labels that distinguish the different kinds of states. For the example of the QSNN shown in Fig. 1, if $\rho_{\rm out}^{s}=|N-2\rangle\langle N-2|$ ($\rho_{\rm out}^{s}=|N-1\rangle\langle N-1|$), we say the unknown quantum state belongs to class 1 (class 2). 
Hidden layers should be set according to the task, and a single hidden layer is sufficient for the tasks of our interest [details in SM I(B)].
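The embedding $\rho_{\rm in}=\rho\oplus 0_{N-n,N-n}$ and the readout of the output-layer populations described above can be sketched as follows (a minimal sketch; the function names are our own, and the readout simply compares the populations of the two output neurons):

```python
import numpy as np

def embed_input(rho, N):
    """rho_in = rho ⊕ 0_{N-n,N-n}: place the n-level input state in the top-left block."""
    n = rho.shape[0]
    rho_in = np.zeros((N, N), dtype=complex)
    rho_in[:n, :n] = rho
    return rho_in

def classify(rho_out):
    """Compare the populations of the two output neurons |N-2> and |N-1>."""
    N = rho_out.shape[0]
    p1 = rho_out[N - 2, N - 2].real  # probability of label "class 1"
    p2 = rho_out[N - 1, N - 1].real  # probability of label "class 2"
    return 1 if p1 >= p2 else 2
```

In an ideal run the whole population ends up on one output neuron, but after finite-time evolution the two populations are generally both nonzero and the larger one decides the label.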
cpl-39-5-050301-fig1.png
Fig. 1. The graph representation of the quantum stochastic neural network (QSNN) used for quantum state binary discrimination. The state of the QSNN can be represented by a density matrix in the $N=6$ dimensional Hilbert space spanned by an orthogonal basis $\{|i\rangle\}_{i=0}^5$. The two neurons of the input layer are initialized in the state of the two-level input quantum system. Two neurons in the output layer correspond to the two labels $state$ 1 and $state$ 2 that distinguish the two different states. The vertices are decoherently connected (orange lines with arrows) by Lindblad operators and coherently connected (green dashed lines) by Hamiltonian elements.
In the training process, we first draw an already labeled sample state $\rho_{\rm in}^{s}$ together with its label $l^{s}\in\{N-2, N-1\}$ from a training set $\{(\rho_{\rm in}^{s}, l^{s})\}_{s=1}^M$, where $M$ is the number of samples in the training set. Then, performing a projective measurement $\varOmega^{s} = |l^{s}\rangle\langle l^{s}|$ on the final state $\rho_{\rm out}^{s}$ of the network gives the success probability $$ P_N^{s} = {\rm Tr}(\rho_{{\rm out}}^{s}\varOmega^{s}),~~ \tag {4} $$ i.e., the probability that the QSNN gives the desired output for the $s$th sample. We can design the specific form of the loss function $$\begin{alignat}{1} \mbox{Loss}=\mbox{Loss}\left({\boldsymbol h}, {\boldsymbol \gamma},\{(\rho^{s}_{\rm in}, l^{s})\}\right) = \sum_s w_s f(\rho_{{\rm out}}^{s}, \varOmega^{s})~~~~~ \tag {5} \end{alignat} $$ according to the task at hand. Here, $w_s$ is a weight on the sample $s$, and $f$ is the sample-wise loss. The loss function should be designed such that minimizing it (by the gradient descent introduced in SM II) leads to the QSNN classifying states correctly. Results—Quantum State Binary Discrimination. The general processes of minimum error (ME) discrimination and our QSNN discrimination are respectively shown in Fig. 2. There is an ensemble whose quantum states are prepared by two devices. The two kinds of quantum states in the ensemble are unknown, i.e., not well-defined mathematically. We randomly pick one of the states from the ensemble. The task, called quantum state binary discrimination, is to determine which kind of state we have picked. The quantum states $\rho_1$ and $\rho_2$ are prepared with prior probabilities $w_1$ and $w_2$ ($w_1 + w_2 = 1$), respectively.
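The training loop sketched above (sample, evolve, measure, update) minimizes the loss of Eq. (5) over the parameter vectors $({\boldsymbol h}, {\boldsymbol \gamma})$ by gradient descent. Since the analytic gradients are deferred to SM II, the following generic finite-difference stand-in (our own illustration, not the authors' optimizer; the toy quadratic loss is a placeholder for Eq. (5)) shows the structure of the update:

```python
import numpy as np

def numerical_grad(loss, params, eps=1e-6):
    """Central-difference estimate of dLoss/dparams."""
    grad = np.zeros_like(params)
    for k in range(params.size):
        shift = np.zeros_like(params)
        shift[k] = eps
        grad[k] = (loss(params + shift) - loss(params - shift)) / (2 * eps)
    return grad

def train(loss, params, lr=0.1, iters=200):
    """Plain gradient descent on the stacked parameter vector (h, gamma)."""
    for _ in range(iters):
        params = params - lr * numerical_grad(loss, params)
    return params
```

In the QSNN setting, `loss` would evolve each training sample under Eq. (1) with the current parameters and evaluate Eq. (5); here any differentiable function of the parameters can be minimized the same way.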
cpl-39-5-050301-fig2.png
Fig. 2. The flow charts of the quantum state binary discrimination task. Black and white dots respectively represent the unknown quantum states from two different devices, and they become indistinguishable (gray dots) when mixed in an ensemble. The task is to randomly pick one state from the ensemble and determine which device the state came from. (a) The minimum error discrimination. Before the discrimination, a pre-processing step of quantum state tomography is often needed to obtain complete information about the unknown states. (b) The QSNN discrimination. There is no need to obtain any prior information about the quantum states through tomography in our approach. The network can discriminate quantum states after being trained with some labeled samples.
As shown in Fig. 2(a), before discriminating two unknown states in the ensemble with ME discrimination, a pre-processing step of quantum state tomography is often needed, so that the appropriate measurement can be set up. It is not trivial to find the ME measurement set-up in general cases, but for quantum state binary discrimination, the minimum error probability was given analytically by Helstrom[1] and Holevo,[41] and is called the Helstrom bound: $$ P_{\rm H}^{\rm error} = \frac{1}{2}(1-{\rm Tr}|w_2\rho_2-w_1\rho_1|).~~ \tag {6} $$ The success probability of the ME discrimination is then $$ P_{\rm H} = 1-P_{\rm H}^{\rm error}.~~ \tag {7} $$ However, as shown in Fig. 2(b), tomography is not required before our QSNN discrimination. The QSNN can discriminate quantum states by learning from some labeled samples, without knowing the mathematical expressions of these states. In order to complete the quantum state binary discrimination task, the QSNN is constructed from 6 neurons divided into three layers, as shown in Fig. 1. Some discussion of the topology of the network is given in SM I. There are only two sample states in the training set, namely $\rho_1$ and $\rho_2$. They are labeled $state$ 1 and $state$ 2, and fed into the QSNN with the prior probabilities $w_1$ and $w_2$, respectively. Thus, according to Eq. (4), the average success probability that the QSNN correctly gives the labels of the two input states is $$ P_N=\sum_{s=1}^{2}w_s{\rm Tr}(\rho_{{\rm out}}^{s}\varOmega^{s}).~~ \tag {8} $$ The loss function is then defined as the distance between 1 and the average success probability, that is, $$ \mbox{Loss}=1-P_N.~~ \tag {9} $$ In our simulation, we choose $w_1 = w_2 = 0.5$. To show the performance of our approach, we compare the success probability of the QSNN ($P_N$) with that of the ME discrimination ($P_{\rm H}$). 
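The Helstrom bound of Eqs. (6) and (7) can be evaluated directly once the density matrices are known, since $\mathrm{Tr}|M|$ of a Hermitian matrix is the sum of the absolute values of its eigenvalues. A minimal sketch (the function name is ours):

```python
import numpy as np

def helstrom_success(rho1, rho2, w1=0.5, w2=0.5):
    """P_H = 1 - (1/2)(1 - Tr|w2 rho2 - w1 rho1|), Eqs. (6) and (7)."""
    M = w2 * rho2 - w1 * rho1                      # Hermitian operator
    tr_abs = np.abs(np.linalg.eigvalsh(M)).sum()   # Tr|M| = sum of |eigenvalues|
    return 1 - 0.5 * (1 - tr_abs)
```

For the equal-prior pure states of Eq. (10), this reduces to the familiar $P_{\rm H}=\frac{1}{2}(1+|\sin\theta|)$, since $|\langle\psi_0|\psi_\theta\rangle|=|\cos\theta|$.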
First, without loss of generality, we consider the quantum states $$ |\psi_{\theta}\rangle=\cos{\theta}|0\rangle+\sin{\theta}|1\rangle~~ \tag {10} $$ in the real vector space. Here, $\{|0\rangle, |1\rangle\}$ constitutes an orthogonal basis. The specific expressions of the quantum states selected here are intended only to mathematically evaluate the performance of our model. The states we want to discriminate are $|\psi_0\rangle$ and $|\psi_{\theta}\rangle$, so our training set is $\{(|\psi_0\rangle, {\rm state}~1), (|\psi_{\theta}\rangle, {\rm state}~2)\}$. In our simulation, we train the QSNN with different training sets separately, where $\theta=0, \frac{\pi}{6}, \frac{2\pi}{6}, \frac{3\pi}{6}, \ldots, \frac{11\pi}{6}$. The average success probability of the QSNN (blue curve) for the discrimination of the 12 quantum state pairs ($|\psi_0\rangle$, $|\psi_{\theta}\rangle$) increases with the number of iterations used in the training procedure, as shown in Fig. 3(a). Our QSNN approximately achieves the optimal theoretical bound on the success probability, the Helstrom bound (red dashed line), after about 30 iterations. The optimal success probability of the QSNN for each training set with different $\theta$ is shown as a blue dot in Fig. 3(b). Each of them approximates the Helstrom bound well. Second, we consider the quantum states $$ |\psi_{\varphi}\rangle=\frac{\sqrt{2}}{2}(|0\rangle+e^{i\varphi}|1\rangle),~~ \tag {11} $$ which live in a complex space. Similarly, we train the QSNN to discriminate the states $|\psi_0\rangle$ and $|\psi_{\varphi}\rangle$ with the different training sets $\{(|\psi_0\rangle, {\rm state}~1), (|\psi_{\varphi}\rangle, {\rm state}~2)\}$, which differ in $\varphi=0, \frac{\pi}{6}, \frac{2\pi}{6}, \frac{3\pi}{6}, \ldots, \frac{11\pi}{6}$. The result shown in Fig. 3(c) is also the average success probability over all training sets. We can see a certain gap between the success probability of the QSNN and the Helstrom bound. 
Figure 3(d) indicates that the trained QSNN does not perform as well in discriminating quantum states with complex amplitudes. Even so, the discrimination result given by the optimized QSNN is still useful, because it achieves a success probability of no less than 91% of the theoretical optimum.
cpl-39-5-050301-fig3.png
Fig. 3. (a) The blue curve represents the average success probability of the QSNN for the discrimination of different quantum state pairs $(|\psi_0\rangle, |\psi_{\theta}\rangle)$ with real amplitudes. The red dashed line shows the average optimal theoretical value given by the minimum error discrimination. The results are given by taking the average over the success probabilities corresponding to all state pairs with different values of $\theta$. The error bars are plotted from the variances. The average success probability rises rapidly, and after about 30 iterations, it achieves the Helstrom bound. (b) The optimal success probabilities of the QSNN in discriminating $|\psi_0\rangle$ and $|\psi_\theta\rangle$ for each $\theta$ are drawn as blue dots. The Helstrom bound is shown as the red dashed line. Panels (c) and (d) show results similar to (a) and (b), respectively, obtained by replacing the state represented by Eq. (10) with that represented by Eq. (11). The QSNN can achieve a success probability of no less than 91% of the theoretical optimum.
In summary, the QSNN can complete binary discrimination of unknown quantum states without tomography. It can be trained to its optimum after a few iterations and then used to discriminate the states with a single detection. The optimized QSNN can achieve a success probability close to the Helstrom bound for both pure and mixed state discrimination (displayed in SM III). Our model also works when the dimension of the quantum states to be discriminated is greater than 2, and we show the simulation result for the 3-qubit case in Fig. S8 of the SM. In addition, our approach is theoretically not limited by the number of states to be discriminated, as shown in SM IV. In contrast, if the number of states is greater than two, the optimal measurement is in general hard to derive analytically in the fashion of traditional strategies. Results—Classification of Entangled and Separable States. Entanglement is a primary feature of quantum mechanics, and is also considered a resource. As the carriers of entanglement, entangled quantum states are costly to produce. Therefore, determining whether a given state is entangled or not is an important topic in quantum information theory. Now, some unknown separable states and entangled quantum states are mixed in an ensemble. We only know that they are prepared by two devices with equal prior probability, without knowing their mathematical expressions. The task we consider in the following is to determine which category each quantum state belongs to. This is a classification task. Several traditional strategies have been proposed to complete this task. For example, the positive partial transpose (PPT) criterion[5] is both sufficient and necessary for 2-qubit entanglement detection, with the requirement of quantum tomography (see Fig. S9 of the SM). The CHSH inequality[9,10] is also an attractive strategy because it only needs partial information about the quantum states (see Fig. S9 of the SM). 
However, on the one hand, multiple measurements are still required. On the other hand, with fixed measurements, the CHSH inequality cannot detect all entangled states in a set of states. To optimize the accuracy of entanglement detection using the CHSH inequality, classical artificial neural networks combined with machine learning techniques have been proposed in Refs. [42,43]. They construct a quantum state classifier that achieves near-unity classification accuracy, but multiple measurements are still required. To avoid multiple measurements and state tomography, we train a QSNN to be a quantum classifier. In order to evaluate our approach more clearly, we select an unknown set of Werner-like states in our simulation. Each state is of the form $$ \rho = p|\varPsi\rangle\langle\varPsi|+(1-p)\frac{I}{4},~~ \tag {12} $$ where $|\varPsi\rangle=(U_1\otimes U_2)|\psi_+\rangle$ and the real coefficient $p\in[0, 1]$. $U_1$ and $U_2$ are two unknown local unitaries acting on the Bell state $|\psi_+\rangle=\frac{\sqrt{2}}{2}(|01\rangle+|10\rangle)$. The state $\rho$ can be regarded as a convex combination of an unknown maximally entangled state and the maximally mixed state $\frac{I}{4}$. The quantum state $\rho$ is separable when $p\leq\frac{1}{3}$ and entangled otherwise. In our simulation, there are four neurons in the input layer and four in the hidden layer. Two neurons in the output layer correspond to the two labels $l^{s}$: separable $S$ and entangled $E$. To be more specific, if the final state $\rho_{{\rm out}}$ is $|S\rangle\langle S|$ ($|E\rangle\langle E|$), the trained QSNN indicates that the input quantum state is separable (entangled). Performing a corresponding measurement $\varOmega^{s}=|l^{s}\rangle\langle l^{s}|$ on the final state $\rho_{{\rm out}}^{s}$ gives the probability that the QSNN correctly labels the $s$th input state. 
The loss function is defined as the mean error probability of all the $M$ training samples $$ \mbox{Loss}=1-\frac{1}{M}\sum_{s=1}^M{\rm Tr}(\rho_{{\rm out}}^{s}\varOmega^{s}).~~ \tag {13} $$
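For concreteness, the Werner-like family of Eq. (12) and its separability threshold $p=\frac{1}{3}$ can be verified with the PPT criterion mentioned above. The sketch below is our own illustrative check (the QSNN itself never computes it); for two qubits the PPT criterion is necessary and sufficient, and the state is entangled exactly when the partial transpose has a negative eigenvalue:

```python
import numpy as np

def werner_like(p, U1=np.eye(2), U2=np.eye(2)):
    """rho = p |Psi><Psi| + (1-p) I/4 with |Psi> = (U1 ⊗ U2)|psi_+>, Eq. (12)."""
    psi_plus = np.array([0, 1, 1, 0]) / np.sqrt(2)   # (|01> + |10>)/sqrt(2)
    Psi = np.kron(U1, U2) @ psi_plus
    return p * np.outer(Psi, Psi.conj()) + (1 - p) * np.eye(4) / 4

def is_entangled_ppt(rho):
    """PPT criterion: transpose the second qubit's indices and test positivity."""
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return bool(np.min(np.linalg.eigvalsh(rho_pt)) < -1e-12)
```

For Eq. (12) the smallest eigenvalue of the partial transpose is $(1-3p)/4$, which is negative exactly when $p>\frac{1}{3}$, reproducing the threshold quoted in the text; local unitaries $U_1\otimes U_2$ do not change this.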
cpl-39-5-050301-fig4.png
Fig. 4. (a) The classification confusion matrix of the QSNN. The numbers in it are the mean success and error probabilities of the QSNN in classifying the 49 samples represented by Eq. (12) with $p\in\{0.02\cdot n\}_{n=1}^{49}$. The QSNN performs better when the input states are entangled. (b) The classification result for each state with a specific value of $p$. The bar chart shows the probabilities that the trained network identifies the 49 input states as entangled states (red bars) and separable states (blue bars). The whole chart is divided by $p=\frac{1}{3}$ into the light blue area (the state is separable) and the light red area (the state is entangled). The success probabilities are always higher than the error probabilities.
For the example of $U_1=\sigma_z$, $U_2=I$, we numerically give the classification results. We use only $M=4$ training samples, with $p\in\{0, 0.2, 0.4, 0.8\}$, while using 49 states with $p\in\{0.02\cdot n\}_{n=1}^{49}$ to evaluate the performance of the trained QSNN in the simulation. The classification confusion matrix for these 49 states is shown in Fig. 4(a). The trained QSNN identifies separable states successfully with probability 0.62 and identifies entangled states successfully with probability 0.75. The probability of one kind of wrong classification, i.e., the trained network identifying a separable state as an entangled one, is 0.38. The probability of the other kind of wrong classification is 0.25. The probabilities that each quantum state with a specific value of $p\in\{0.02\cdot n\}_{n=1}^{49}$ is identified as a separable state (blue bar) and an entangled state (red bar) are shown in Fig. 4(b). When the input states are entangled, i.e., $1/3 < p\leq1$ (light red region), the red bars are always longer than the blue bars. This means that the success probabilities are always higher than the error probabilities, which is also true when $0\leq p \leq 1/3$ (light blue region). In summary, when given an unknown quantum state set, the QSNN can be trained on several labeled states and used to classify the others. Although the QSNN becomes confused about the states near the boundary $p=1/3$, the success probability is always higher than the error probability for each state. Moreover, the trained QSNN can give the classification result using only a single detection for each state. Some details about the parameters and hyperparameters of the QSNN are given in SM V. In this work, we have introduced the quantum stochastic neural network (QSNN), based on quantum stochastic walks. When combined with machine learning, it can be trained to become a quantum state classifier. 
If one wants to classify an ensemble containing only two unknown quantum states, the classification task becomes binary discrimination. The QSNN does not need any information about the candidate states in advance, so it avoids the experimentally expensive quantum state tomography used in traditional minimum error discrimination. We have benchmarked the QSNN's performance on quantum state binary discrimination tasks with numerical simulations. The success probability of the QSNN turns out to be very close to the theoretical optimal success probability, i.e., the Helstrom bound. When given an unknown ensemble containing states from two different families, the QSNN can be trained to classify them into those two families. We show an example of classifying Werner-like states according to whether they are entangled or not. For all those states, the trained QSNN is always more likely to classify a state into the correct family with a single detection. The optimal performance of the QSNN can be achieved with only four training samples, while avoiding the state tomography and multiple measurements required by other classification methods such as those using the PPT criterion or the CHSH inequality. This also suggests that our approach may reduce the consumption of resources compared to traditional methods. All the present results show the potential of the QSNN as a general-purpose quantum classifier, which may be helpful in various quantum machine learning models, such as quantum generative adversarial networks.[44] Acknowledgments. This work was supported by the National Key R&D Program of China (Grant No. 2017YFA0303703), and the National Natural Science Foundation of China (Grant No. 12175104).
References
[1] Helstrom C W 1969 J. Stat. Phys. 1 231
[2] Ivanovic I D 1987 Phys. Lett. A 123 257
[3] Dieks D 1988 Phys. Lett. A 126 303
[4] Peres A 1988 Phys. Lett. A 128 19
[5] Horodecki R, Horodecki P, and Horodecki M 1995 Phys. Lett. A 200 340
[6] Horodecki M, Horodecki P, and Horodecki R 1996 Phys. Lett. A 223 1
[7] Terhal B M 2000 Phys. Lett. A 271 319
[8] Lewenstein M, Kraus B, Cirac J I, and Horodecki P 2000 Phys. Rev. A 62 052310
[9] Bell J S 1964 Phys. Phys. Fiz. 1 195
[10] Clauser J F, Horne M A, Shimony A, and Holt R A 1969 Phys. Rev. Lett. 23 880
[11] Massar S and Popescu S 2005 Asymptotic Theory Of Quantum Statistical Inference (Singapore: World Scientific) p 356
[12] Bruß D and Macchiavello C 1999 Phys. Lett. A 253 249
[13] Paris M and Rehacek J 2004 Quantum State Estimation (New York: Springer Science & Business Media)
[14] Bergou J A 2010 J. Mod. Opt. 57 160
[15] Barnett S M and Croke S 2009 Adv. Opt. Photon. 1 238
[16] Das Sarma S, Deng D L, and Duan L M 2019 Phys. Today 72 48
[17] Carleo G et al. 2019 Rev. Mod. Phys. 91 045002
[18] Schuld M, Bocharov A, Svore K M, and Wiebe N 2020 Phys. Rev. A 101 032308
[19] Li W and Deng D L 2022 Sci. China Phys. Mech. Astron. 65 1
[20] Biamonte J, Wittek P, Pancotti N, Rebentrost P, Wiebe N, and Lloyd S 2017 Nature 549 195
[21] Dunjko V and Briegel H J 2018 Rep. Prog. Phys. 81 074001
[22] He Z, Li L, Zheng S, Li Y, and Situ H 2021 New J. Phys. 23 033002
[23] Beer K, Bondarenko D, Farrelly T, Osborne T J, Salzmann R, Scheiermann D, and Wolf R 2020 Nat. Commun. 11 1
[24] Steinbrecher G R, Olson J P, Englund D, and Carolan J 2019 npj Quantum Inf. 5 1
[25] Carleo G and Troyer M 2017 Science 355 602
[26] Gao X and Duan L M 2017 Nat. Commun. 8 1
[27] Wan K H, Dahlsten O, Kristjánsson H, Gardner R, and Kim M 2017 npj Quantum Inf. 3 1
[28] Bondarenko D and Feldmann P 2020 Phys. Rev. Lett. 124 130502
[29] Chen H, Wossnig L, Severini S, Neven H, and Mohseni M 2021 Quantum Mach. Intell. 3 1
[30] Patterson A, Chen H, Wossnig L, Severini S, Browne D, and Rungger I 2021 Phys. Rev. Res. 3 013063
[31] Cong I, Choi S, and Lukin M D 2019 Nat. Phys. 15 1273
[32] Dalla Pozza N and Caruso F 2020 Phys. Rev. Res. 2 043011
[33] Laneve A, Geraldi A, Hamiti F et al. 2021 arXiv:2107.09968 [quant-ph]
[34] Whitfield J D, Rodrı́guez-Rosario C A, and Aspuru-Guzik A 2010 Phys. Rev. A 81 022323
[35] Schuld M, Sinayskiy I, and Petruccione F 2014 Phys. Rev. A 89 032333
[36] Tang H et al. 2019 Phys. Rev. Appl. 11 024020
[37] Farhi E and Gutmann S 1998 Phys. Rev. A 58 915
[38] Kossakowski A 1972 Rep. Math. Phys. 3 247
[39] Lindblad G 1976 Commun. Math. Phys. 48 119
[40] Gorini V, Kossakowski A, and Sudarshan E C G 1976 J. Math. Phys. 17 821
[41] Holevo A S 2011 Probabilistic and Statistical Aspects of Quantum Theory (New York: Springer Science & Business Media)
[42] Gao J, Qiao L F, Jiao Z Q et al. 2018 Phys. Rev. Lett. 120 240501
[43] Ma Y C and Yung M H 2018 npj Quantum Inf. 4 1
[44] Dallaire-Demers P L and Killoran N 2018 Phys. Rev. A 98 012324