Chinese Physics Letters, 2021, Vol. 38, No. 3, Article code 030304

Express Letter

Quantum Algorithm for Approximating Maximum Independent Sets

Hongye Yu (余泓烨)1, Frank Wilczek2,3,4,5,6, and Biao Wu (吴飙)7,4,8*

Affiliations
1Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA
2Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
3T. D. Lee Institute, Shanghai Jiao Tong University, Shanghai 200240, China
4Wilczek Quantum Center, School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China
5Department of Physics, Stockholm University, Stockholm SE-106 91, Sweden
6Department of Physics and Origins Project, Arizona State University, Tempe, AZ 85287, USA
7International Center for Quantum Materials, School of Physics, Peking University, Beijing 100871, China
8Collaborative Innovation Center of Quantum Matter, Beijing 100871, China

Received 25 January 2021; accepted 24 February 2021; published online 26 February 2021

F. Wilczek is supported in part by the U.S. Department of Energy (Grant No. DE-SC0012567), the European Research Council (Grant No. 742104), and the Swedish Research Council (Grant No. 335–2014-7424). B. Wu is supported by the National Key R&D Program of China (Grant Nos. 2017YFA0303302 and 2018YFA0305602), the National Natural Science Foundation of China (Grant No. 11921005), and the Shanghai Municipal Science and Technology Major Project (Grant No. 2019SHZDZX01).
*Corresponding author. Email: wubiao@pku.edu.cn
Citation Text: Yu H Y, Wilczek F, and Wu B 2021 Chin. Phys. Lett. 38 030304

Abstract: We present a quantum algorithm for approximating maximum independent sets of a graph based on quantum non-Abelian adiabatic mixing in the sub-Hilbert space of degenerate ground states, which generates quantum annealing in a secondary Hamiltonian. For both sparse and dense random graphs $G$, numerical simulation suggests that our algorithm on average finds an independent set of size close to the maximum size $\alpha(G)$ in low polynomial time. The best classical algorithms, by contrast, produce independent sets of size about half of $\alpha(G)$ in polynomial time.

DOI: 10.1088/0256-307X/38/3/030304 © 2021 Chinese Physics Society

Article Text

Finding a maximum independent set (MIS) of a graph is an NP-hard problem that appears difficult to solve even approximately. In spite of decades of research, no known classical algorithm produces much better results than the naive, greedy strategy. For a graph $G(n,m)$ that contains $n$ vertices and $m$ edges, it is known that unless P = NP no polynomial algorithm can find an $O(n^{1-\epsilon})$-approximate solution in the worst case,[1,2] where $\epsilon>0$ is an arbitrarily small positive number independent of $n$. We let $\alpha(G)$ denote the largest size of an independent set of a given graph $G$. The statement above means that the size of the best approximate MIS found by a polynomial algorithm is guaranteed only to be $\sim$$ \alpha(G)/n^{1-\epsilon}$. This is not an impressive result, given that $1\le \alpha(G)\le n$. Average-case performance, for both sparse and dense graphs, is not much better. Consider for example the class of Erdős–Rényi random graphs, denoted $G(n,\mathcal{P})$, where $\mathcal{P}$ is the probability of generating an edge between any pair of vertices. Erdős–Rényi graphs $G(n,\mathcal{P})$ are dense at $\mathcal{P}=1/2$, as their edge numbers are proportional to $n^2$.
For them, the MIS size is typically $\alpha[G(n,1/2)]\sim 2 \log_2 n$.[3] However, no classical algorithm is known or suspected to produce in polynomial time, with non-vanishing probability, an independent set of size $(1 + \epsilon) \log_2 n$ for any fixed $\epsilon> 0$; neither analysis nor numerical evidence suggests that one exists.[4] Sparse random graphs $G(n,m)$ are commonly parametrized by the average degree $d = 2\,m/n$. One finds that for sparse graphs with $d\gg 1$[5] $$ \alpha[G(n,m)]\sim 2n \frac{\ln d}{d}.~~ \tag {1} $$ Here too, no classical algorithm is known or suspected to perform well, specifically to find an independent set of size $(1 + \epsilon)n \frac{\ln d}{d}$ in polynomial time with non-vanishing probability.

Here we introduce a quantum algorithm which appears, on extensive numerical evidence, to perform much better. It builds on the quantum algorithm for independent sets we proposed in Ref. [6], but adds a major new ingredient. Numerical experiments indicate that our quantum algorithm typically produces an independent set of size almost $\alpha(G)$ in low polynomial time, for both sparse and dense random graphs in the average-case scenario, where we average over both the final quantum measurement and many randomly generated graphs.

Quantum Adiabatic Evolution in the Solution-Subspace. Our approach for approximating MIS builds on a quantum algorithm for independent sets.[6,7] To fix notation and to make this work self-contained, we briefly recall the earlier algorithm here. For a given non-empty graph $G$, we construct a corresponding spin system with the following Hamiltonian[6] $$ H_{0}=\varDelta\sum_{\langle ij\rangle}(\hat{\sigma}^z_{i}+\hat{\sigma}^z_{j} +\hat{\sigma}^z_{i}\hat{\sigma}^z_{j}).~~ \tag {2} $$ Here the summation $\langle ij\rangle$ is taken over all edges in the graph.
Spin $j$ being up should be interpreted as inclusion of site $j$ in the candidate set, and the terms in the Hamiltonian as imposing a penalty for connection between included sites. Two key features of this Hamiltonian follow directly from Eq. (2): (i) its ground states, all of energy $-m\varDelta$, are in one-to-one correspondence with the independent sets of $G$, since an edge term contributes $-\varDelta$ unless both of its endpoints are spin-up; (ii) the ground states are separated from the rest of the spectrum by a gap of at least $4\varDelta$, since every edge with both endpoints up costs an extra $4\varDelta$. These features allow us to explore the space of independent sets through non-Abelian adiabatic evolution. We consider acting uniformly upon all the spins with the rotation matrix $$ V_j= \begin{pmatrix}\cos\frac{\theta}{2} & e^{-i\varphi}\sin\frac{\theta}{2}\\ e^{i\varphi}\sin\frac{\theta}{2}&-\cos\frac{\theta}{2} \end{pmatrix} =V_j^{-1}.~~ \tag {3} $$ $V_j$ represents a rotation through $\pi$ around the axis $(\sin\frac{\theta}{2} \cos\varphi, \sin\frac{\theta}{2} \sin\varphi, \cos\frac{\theta}{2})$ and takes the unit vector $(0, 0, 1)$ to ${\boldsymbol r} \equiv(\sin\theta \cos\varphi, \sin\theta \sin \varphi, \cos\theta)$. Note that the mapping from the $SO(3)$ rotation to $V_j$ is not unique and $V_j$ does not have unit determinant, but it includes a convenient overall phase factor.
If $\left|u\right\rangle _j$ and $\left|d\right\rangle _j$ are eigenstates of $\hat{\sigma}_j^z$, that is, $\hat{\sigma}_j^z\left|u\right\rangle _j=\left|u\right\rangle _j$ and $\hat{\sigma}_j^z\left|d\right\rangle _j=-\left|d\right\rangle _j$, then the eigenstates of $\hat{\tau}^z_j \equiv V_j \hat{\sigma}_j^z V_j^{-1}$ are $$\begin{align} \left|{u_{\boldsymbol r}}\right\rangle _j={}&\cos\frac{\theta}{2}\left|u\right\rangle _j+\sin\frac{\theta}{2}e^{i\varphi}\left|d\right\rangle _j,~~ \tag {4} \end{align} $$ $$\begin{align} \left|{d_{\boldsymbol r}}\right\rangle _j={}&\sin\frac{\theta}{2}\left|u\right\rangle _j-\cos\frac{\theta}{2}e^{i\varphi}\left|d\right\rangle _j.~~ \tag {5} \end{align} $$ Upon acting with $U=V_1\otimes V_2\otimes \cdots \otimes V_n$, we rotate all the spins and find a new Hamiltonian $$ H_\tau=UH_{0}U^{-1}=\varDelta\sum_{\langle ij\rangle}(\hat{\tau}^z_{i}+\hat{\tau}^z_{j}+\hat{\tau}^z_{i}\hat{\tau}^z_{j}).~~ \tag {6} $$ Of course, $U$ need not be implemented through physical rotation of the computing apparatus; it can be simulated using parallel operation of simple, one-bit gates. $H_\tau$ has the same set of eigenvalues as $H_0$. The eigenstates of $H_\tau$ are obtained by rotating those of $H_0$, in the form $$ \left|E_\mu(\theta,\varphi)\right\rangle =\left|s_1\right\rangle \otimes \left|s_2\right\rangle \otimes \cdots\otimes\left|s_j\right\rangle \otimes\cdots \otimes \left|s_n\right\rangle ,~~ \tag {7} $$ where $\mu$ is a string of $\{\pm 1\}^n$, and $\left|s_j\right\rangle = \left|{u_{\boldsymbol r}}\right\rangle$ or $\left|{d_{\boldsymbol r}}\right\rangle $ for $\mu_j = 1$ or $-1$. The quantum algorithm in Ref. [6] starts the spin system in the state $E_\mu$ with $\mu=\{-1,-1,\ldots,-1\}$, which is one of many ground states of $H_{0}$. Then all spins are rotated in the same way, by slowly changing ${\boldsymbol r}$.
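The correspondence between the ground states of $H_0$ and the independent sets can be checked directly for a small graph. The sketch below (the 4-cycle example graph and $\varDelta=1$ are our own illustrative choices, not from the text) enumerates all spin configurations and verifies that the states of minimal energy $-m\varDelta$ are exactly those whose up spins form an independent set.

```python
import itertools

# Hypothetical example graph: a 4-cycle, with Delta = 1
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n, delta = 4, 1.0

def energy(spins):  # spins[j] = +1 (up) or -1 (down); Eq. (2) is diagonal
    return delta * sum(spins[i] + spins[j] + spins[i] * spins[j] for i, j in edges)

def is_independent(spins):  # no edge may have both endpoints spin-up
    return all(not (spins[i] == 1 and spins[j] == 1) for i, j in edges)

ground = -delta * len(edges)  # each edge contributes -Delta unless both ends are up
configs = list(itertools.product([1, -1], repeat=n))
ground_states = [s for s in configs if energy(s) == ground]
independent = [s for s in configs if is_independent(s)]
assert set(ground_states) == set(independent)
print(len(ground_states))  # 7: the 4-cycle has 7 independent sets
```

The degeneracy of the ground level thus equals the number of independent sets, which is what makes the sub-Hilbert space of solutions available for adiabatic mixing.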
The system evolves, to exponential accuracy in the slowness parameter, within the sub-Hilbert space spanned by the ground states of $H_\tau$. However, the evolution within that space is nontrivial due to the non-Abelian geometric phase,[8] and when ${\boldsymbol r}$ is rotated back to the $z$-direction, upon measurement one obtains with high probability a non-trivial independent set.[6] The evolution within the sub-Hilbert space of the ground states is given by[8] $$ \left|\psi(t)\right\rangle =P \exp \Big(i\int_{0}^{t} A(t') dt' \Big)\left|\psi(0)\right\rangle ,~~ \tag {8} $$ where $P$ stands for time ordering and $A$ is the Hermitian non-Abelian gauge matrix. The off-diagonal terms of the gauge matrix $A$ are non-zero only when they connect states labeled by strings $\mu, \nu$ separated by Hamming distance $|\mu-\nu|=1$. In that case we have $$\begin{align} A_{\mu, \nu}(\theta)={}&i\left\langle E_{\mu}\left|\partial_{t}\right| E_{\nu}\right\rangle= i\left\langle {u_{\boldsymbol r}}\left|\partial_{t} \right| {d_{\boldsymbol r}}\right\rangle\\ ={}&\frac{\sin \theta}{2} \frac{d \varphi}{d t}+\frac{i}{2}{\rm sgn}(\mu-\nu)\frac{d \theta}{d t},~~ \tag {9} \end{align} $$ where ${\rm sgn}(\mu-\nu)$ is a sign function determined by the sign of the first non-zero element of $\mu-\nu$, with $\mu-\nu$ defined by element-wise subtraction [e.g., ${\rm sgn}(\{1,1,-1\}-\{-1,1,1\})={\rm sgn}(\{2,0,-2\})=+1$]. The diagonal terms of $A$ are $$\begin{align} A_{\mu, \mu}(\theta) ={}&i\left\langle E_{\mu}\left|\partial_{t}\right| E_{\mu}\right\rangle\\ ={}&-\Big\{n_+ \sin ^{2} \frac{\theta}{2}+(n-n_+) \cos ^{2} \frac{\theta}{2}\Big\} \frac{d \varphi}{d t},~~ \tag {10} \end{align} $$ where $n_+$ is the number of plus signs in $\mu$. Equation (8) indicates that the gauge matrix $A$ can be regarded as an emergent Hamiltonian for the spin system, generating unitary evolution within the eigenspaces of the original Hamiltonian $H_0$. We call this the secondary Hamiltonian. In Ref.
[6], we took $\theta$ to be fixed and let $\varphi$ vary slowly. This gives rise to a time-independent secondary Hamiltonian $A(\theta)$. In this work we change both $\varphi$ and $\theta$ slowly, under the condition $d\theta/dt\ll d\varphi/dt$. This generalization brings in profoundly different dynamics. In this case, $A(\theta)$ becomes a time-dependent secondary Hamiltonian with the parameter $\theta$ changing slowly. Remarkably, the empty-set solution $\mu=\{-1,-1,\ldots,-1\}$ is the ground state of $A(0)$, while the maximum independent set (MIS), which has the largest number $n_+$ of included vertices, is the ground state of $A(\pi)$. According to the adiabatic theorem, sufficiently slow evolution of the secondary Hamiltonian will keep us within the ground state manifold. This means that if we change $\theta$ slowly enough, and evolve from $\theta = 0$ to $\theta = \pi$, we will arrive at the state representing the maximum independent set when $\theta=\pi$ [note that at the end we must reverse the spin directions, e.g., turning $\{-1,-1,+1\}$ into $\{+1,+1,-1\}$, as the system ends along the $-z$ direction ($\theta=\pi$)]. This is a quantum adiabatic algorithm for MIS. Its time complexity is determined by the energy gap of $A(\theta)$.[9] In the worst case the energy gap of $A(\theta)$ can be exponentially small, as we will shortly exemplify. However, our numerical results show that more typically, in interesting cases, we get independent sets whose size is very close to $\alpha(G)$. Two Special Graphs. To illustrate the possible behavior of the minimum energy gap of $A(\theta)$, let us consider two special graphs. The first is the graph with no edges. In this case, all combinations of vertices are independent sets and the gauge matrix $A(\theta)$ acts on the whole $2^n$-dimensional Hilbert space. Denote $A(\theta)$ for the no-edge graph as $\tilde{A}$.
It can be rewritten as $$\begin{align} \tilde{A}(\theta)={}&\frac{\sin \theta}{2}\frac{d \varphi}{d t}\sum_{j=1}^n\tilde{\sigma}_j^x+ \frac{\cos\theta }{2}\frac{d \varphi}{d t}\sum_{j=1}^n\tilde{\sigma}_j^z\\ &+ \frac{1}{2}\frac{d \theta}{d t}\sum_{j=1}^n\tilde{\sigma}_j^y-\frac{n}{2}\frac{d \varphi}{d t}I,~~ \tag {11} \end{align} $$ where $I$ is the $2^n\times 2^n$ identity matrix, whose term contributes only a global phase factor during the evolution. Note that $\tilde{\sigma}_j^x,\tilde{\sigma}_j^y,\tilde{\sigma}_j^z$ are not the spin operators $\sigma_j^z$ in $H_0$; they are used just to put $\tilde{A}(\theta)$ in a concise form. If $d \theta/d t$ is much smaller than $d \varphi/d t$, then we can omit the third term of $\tilde{A}$ and have $$ \tilde{A}(\theta)\approx\frac{\sin \theta}{2}\frac{d \varphi}{d t}\sum_j\tilde{\sigma}_j^x+\frac{\cos\theta }{2}\frac{d \varphi}{d t} \sum_j\tilde{\sigma}_j^z.~~ \tag {12} $$ This is effectively a Hamiltonian for $n$ identical non-interacting spins in the same magnetic field. Evidently, $\tilde{A}(\theta)$ has a constant gap $d\varphi/dt$ between the ground state and the first excited state. When we let $\theta$ evolve slowly from 0 to $\pi$ over a fixed period of time, the system, no matter how large, will evolve from the initial ground state at $\theta=0$ to the ground state at $\theta=\pi$. This is consistent with the original Hamiltonian in Eq. (2): for the graph with no edges, the Hamiltonian $H_0$ is zero, so there is no evolution and the system stays in the state $\{-1,-1,\ldots,-1\}$. Upon reversing the direction of the spins, we get the MIS $\{1,1,\ldots,1\}$.
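The constant gap of the approximation in Eq. (12) can be seen from its single-spin blocks, since the $n$ spins do not interact. A minimal sketch (with $d\varphi/dt = 1$ as an illustrative value) confirms that the gap equals $d\varphi/dt$ for every $\theta$:

```python
import numpy as np

# Single-spin block of Eq. (12): h(theta) = (w_phi/2)(sin(theta) sx + cos(theta) sz).
# Its two eigenvalues are +/- w_phi/2, so the gap is w_phi, independent of theta.
sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
omega_phi = 1.0  # illustrative value of d(phi)/dt

for theta in np.linspace(0.0, np.pi, 50):
    h = 0.5 * omega_phi * (np.sin(theta) * sx + np.cos(theta) * sz)
    vals = np.linalg.eigvalsh(h)  # ascending order
    assert abs((vals[1] - vals[0]) - omega_phi) < 1e-12
print("gap =", omega_phi)
```

For $n$ non-interacting spins the first excited state flips exactly one spin, so the many-body gap is the same $\omega_\varphi$ for any $n$, which is why the no-edge graph poses no difficulty.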
Fig. 1. (a) A special type of graph with $2n$ edges and $2n+1$ vertices. Note that it has a unique maximum independent set $\{x_0, x_2,\ldots,x_{2n}\}$. (b) The minimum energy gap of $A$ for these graphs as a function of $n$. The fitting line is $\ln({\rm gap}) = 0.0286 - 0.332\,n$.
The second special graph $S_n$ is shown in Fig. 1; it has $2n+1$ vertices and $2n$ edges. The graph has $2^n$ maximal independent sets, and only one of them is the MIS. For each $n$, we compute numerically the energy gaps of $A(\theta)$ for $0\le\theta\le\pi$ and find the minimum. The results are plotted in Fig. 1, which shows that the minimum energy gap of $A(\theta)$ for these graphs decreases exponentially with $n$. It is clear from these two special types of graphs that there is no universal behavior for the minimum energy gap of the gauge matrix $A(\theta)$. Quantum Algorithm for Approximate Maximum Independent Sets. Our quantum algorithm for finding an approximately maximum independent set runs as follows: (i) prepare the system in the state $\mu=\{-1,-1,\ldots,-1\}$ at $\theta=0$; (ii) rotate all the spins uniformly, with $\varphi$ advancing at rate $\omega_\varphi$ while $\theta$ increases slowly from $0$ to $\pi$ at rate $\omega_\theta=\pi/T\ll\omega_\varphi$; (iii) at $\theta=\pi$, reverse the spin directions and measure every spin along the $z$ axis, with the up spins marking an independent set. Since the energy gap of the secondary Hamiltonian $A$ can be exponentially small, run times $T=n^\gamma$ which scale polynomially do not guarantee that the system will stay in the ground state of $A$. However, if the evolution is slow enough, the system will stay mostly within the manifold of states whose energy is close to the ground state, i.e., approximately maximum states. As a result, at the end of the computation, we may expect to find a good approximately maximum independent set. We have explored this hypothesis numerically, with excellent results in generic cases, as we now discuss. As the adiabatic condition for $H_{\tau}$ can be satisfied (see the Supplemental Material), our numerical simulation is done with $A$ so that we can compute for larger graphs. If the final quantum state is $\left|\psi_f\right\rangle =\sum_\ell a_\ell\left|E_\ell\right\rangle $ (after the reversal of the spin directions), we define the averaged size $\bar{N}$ of the independent sets as $$ \bar{N}=\sum_\ell |a_\ell|^2N_\ell .~~ \tag {13} $$ Here $N_\ell$ is the size of the $\ell$th independent set $\left|E_\ell\right\rangle $. In the quantum mechanical formalism, this is the expectation value of a single measurement.
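To make the procedure concrete, the following sketch simulates the evolution of Eq. (8) under the gauge matrix of Eqs. (9) and (10), restricted to the independent-set basis of a 5-cycle (our own toy example; the parameters $T=n^2$, $\omega_\varphi=1$, $\omega_\theta=\pi/T$ follow those quoted with Fig. 2), and then evaluates $\bar{N}$ of Eq. (13):

```python
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # toy graph: 5-cycle, alpha(G) = 2
n = 5

def independent(mu):  # mu is a +/-1 string; +1 marks an included vertex
    return all(not (mu[i] == 1 and mu[j] == 1) for i, j in edges)

basis = [mu for mu in itertools.product([1, -1], repeat=n) if independent(mu)]
dim = len(basis)

def sgn(mu, nu):  # sign of the first non-zero element of mu - nu, as in Eq. (9)
    for a, b in zip(mu, nu):
        if a != b:
            return 1 if a > b else -1
    return 0

def gauge_matrix(theta, w_phi, w_theta):
    # Eqs. (9) and (10) on the independent-set basis
    A = np.zeros((dim, dim), dtype=complex)
    for k, mu in enumerate(basis):
        n_plus = mu.count(1)
        A[k, k] = -(n_plus * np.sin(theta / 2) ** 2
                    + (n - n_plus) * np.cos(theta / 2) ** 2) * w_phi
        for l, nu in enumerate(basis):
            if sum(a != b for a, b in zip(mu, nu)) == 1:  # Hamming distance 1
                A[k, l] = (np.sin(theta) / 2) * w_phi + 0.5j * sgn(mu, nu) * w_theta
    return A

T = n ** 2                       # run time T = n^2
w_phi, w_theta = 1.0, np.pi / T  # omega_phi = 1, omega_theta = pi / T
steps = 2000
dt = T / steps
psi = np.zeros(dim, dtype=complex)
psi[basis.index((-1,) * n)] = 1.0  # start from the empty set at theta = 0

for step in range(steps):
    theta = w_theta * (step + 0.5) * dt
    # one step of Eq. (8): psi <- exp(i A dt) psi, via the spectral decomposition
    w, V = np.linalg.eigh(gauge_matrix(theta, w_phi, w_theta))
    psi = (V * np.exp(1j * w * dt)) @ (V.conj().T @ psi)

# Eq. (13): average independent-set size from the final amplitudes
N_bar = sum(abs(psi[k]) ** 2 * mu.count(1) for k, mu in enumerate(basis))
print(N_bar)  # approaches alpha(G) = 2 for slow enough evolution
```

The basis here is restricted to independent sets from the outset, which is what working with $A$ instead of $H_\tau$ buys: the dimension is the number of independent sets rather than $2^n$.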
We are interested in the ratio $\kappa =\bar{N}/\alpha(G)$. Our numerical results, displayed in Fig. 2(a), show that for Erdős–Rényi random graphs $G(n,1/2)$, if we set $T\sim n^2$, the average of $\kappa$ increases to almost 1 as $n$ increases. This is compared with the results obtained using the classical greedy and Metropolis algorithms[10] (see the Supplemental Material for details of the two algorithms). We run the classical algorithms several times on each graph to get $\bar{N}$ and the ratio $\kappa=\bar{N}/\alpha(G)$, then repeat over multiple random graphs to find the double average $\bar{\kappa}$. Our numerical results in Fig. 2(b) show that even for small graphs, the ratio $\bar{\kappa}$ for the two classical algorithms is not as close to 1 as that of our quantum algorithm. More importantly, the classical ratio $\bar{\kappa}$ tends to decrease as $n$ gets larger. This is consistent with the well known result that the best classical polynomial algorithms face grave difficulty in pushing the ratio above $1/2$ as $n$ goes to infinity[4] (see below).
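For reference, here are minimal sketches of the two classical baselines. The exact variants used for Fig. 2 are specified in the Supplemental Material, so the details below (random vertex order for the greedy pass, a fixed inverse temperature $\beta$, and $T=n^2$ iterations for Metropolis) are our own assumptions.

```python
import math
import random

def greedy_is(n, edges, rng):
    # One greedy pass: scan vertices in random order, adding each vertex whose
    # neighbors are all absent; the result is a maximal independent set.
    adj = {v: set() for v in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    order = list(range(n))
    rng.shuffle(order)
    chosen = set()
    for v in order:
        if not (adj[v] & chosen):
            chosen.add(v)
    return chosen

def metropolis_is(n, edges, rng, beta=2.0, iters=None):
    # Metropolis walk on independent sets: grow whenever legal, shrink with
    # probability e^{-beta}, so larger sets are favored at large beta.
    adj = {v: set() for v in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    iters = n * n if iters is None else iters  # assumed T = n^2 iterations
    current = set()
    for _ in range(iters):
        v = rng.randrange(n)
        if v in current:
            if rng.random() < math.exp(-beta):
                current.remove(v)
        elif not (adj[v] & current):
            current.add(v)
    return current

rng = random.Random(1)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # 5-cycle, alpha = 2
print(len(greedy_is(5, edges, rng)), len(metropolis_is(5, edges, rng)))
```

On shattered landscapes both walks change only one vertex per move, which is precisely the local-update limitation discussed below.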
Fig. 2. The average $\kappa$ (or $\bar{\kappa}$) as a function of $n$ for Erdős–Rényi random graphs $G(n,1/2)$ with three different algorithms. (a) The results of our quantum algorithm. We set $T=n^2$, $\omega_\varphi=1$, $\omega_\theta=\pi/T$ and run over 1000 Erdős–Rényi random graphs $G(n,1/2)$. The variance of $\kappa$ is around $10^{-6}$. The calculation is carried out with $A$. (b) The results of the greedy and Metropolis algorithms in comparison with our quantum results. For each classical algorithm, the calculation runs 1000 times on each graph to get $\bar{N}$, and then over 1000 random graphs to get $\bar{\kappa}$; the variance of $\bar{\kappa}$ is around $10^{-4}$. For the Metropolis algorithm, the iteration time is $T=n^2$. The lines are guides for the eye.
For sparse graphs with edge number $m=\lfloor n \ln n\rfloor$, the results are similar, as shown in Fig. 3. These numerical results indicate that our quantum algorithm can find an independent set of size $(1-\epsilon)\alpha(G)$ in run time $T\sim n^2$. We also tried $T\sim n$; in this case the average ratio $\kappa$ decreases as $n$ increases. These results suggest that our quantum algorithm has time complexity $O(n^2)$. Note that in Ref. [11], it was shown that the run time required in a quantum adiabatic algorithm can increase polynomially with the system size even when the energy gap is constant. Our numerical results (see the Supplemental Material) show that adiabaticity for $H_\tau$ is ensured with $T\sim n^2$.
Fig. 3. The average $\kappa$ (or $\bar{\kappa}$) as a function of $n$ for random graphs $G(n,m)$ with $m=\lfloor n \ln n\rfloor$ via three different methods. (a) The results of our quantum algorithm. We set $T=n^2$, $\omega_\varphi=1$, $\omega_\theta=\pi/T$ and run over 1000 random graphs $G(n,\lfloor n \ln n\rfloor)$. The variance of $\kappa$ is around $10^{-5}$. The calculation is carried out with $A$. (b) The results of the greedy and Metropolis algorithms in comparison with our quantum results. For each classical algorithm, the calculation runs 1000 times on each graph to get $\bar{N}$, and then over 1000 random graphs to get $\bar{\kappa}$; the variance of $\bar{\kappa}$ is around $10^{-4}$. For the Metropolis algorithm, the iteration time is $T=n^2$. The lines are guides for the eye.
Diffusion and Annealing in Solution Trees. In this section, we review a theoretical picture that clarifies the challenge of finding approximate maximum independent sets, and offer a heuristic explanation for the enhanced performance of our quantum algorithm relative to classical ones. For sparse graphs $G(n,m)$, Coja-Oghlan and Efthymiou showed in Ref. [4] that the difficulty is related to the structure of the space of independent sets, which shatters severely when the size $k$ is large enough; as a consequence, the classical Metropolis process has exponentially large mixing times. The graphs considered in Ref. [4] have $d=2\,m/n\gg1$. For these graphs, the size of the maximum independent set is $\alpha\sim (2-\epsilon_d)n\frac{\ln d}{d}$ with high probability. Let $S_k(G)$ denote the set of all independent sets of size $k$. ``$S_k(G)$ shatters severely'' in the precise sense that $S_k(G)$ can be divided into many groups such that the Hamming distance between each pair of groups is proportional to $n$, while the number of independent sets in each group decreases exponentially with $n$[4] (see Fig. 4). It is found that $S_k(G)$ shatters for $(1+\epsilon_d)n\frac{\ln d}{d} < k < \alpha$. This means that searches for the maximum independent set that build up solutions by changing a small number of entries at a time will get stuck at sizes around $n\frac{\ln d}{d}$. This is the essential reason why polynomial classical algorithms have difficulty finding independent sets of size $k>(1+\epsilon_d)n\frac{\ln d}{d}$.
Fig. 4. The tree of independent sets of a graph $G$. Each point represents an independent set; the one at the top represents the empty set. The tree is layered: all independent sets in layer $k$, the set $S_k(G)$, have the same size $k$. If the Hamming distance between an independent set of size $k$ and an independent set of size $k+1$ is one, they are connected by a solid line. Each point in layer $k+1$ must be connected by a solid line with a point in layer $k$. For clarity, we only draw the solid lines between $k=0$ and $k=1$ and between $k=1$ and $k=2$. Independent sets of the same size are connected by dashed lines if the Hamming distance between them does not scale up with $n$. Below a critical size $k_{\rm c}$, the tree is well connected by dashed lines in each layer. Above $k_{\rm c}$, the layers shatter, with the independent sets divided into small groups between each pair of which the Hamming distance is proportional to $n$. At the same time, the group size decreases exponentially with $n$.
Quantum evolution, by allowing superpositions, can enable more efficient exploration of a shattered solution landscape. All the candidates appear as components in the wave function. In our context, different independent sets are assigned different energies according to the secondary Hamiltonian. During slow evolution, we can expect the system, which starts cold and plausibly remains so, to approach a quasi-thermal equilibrium state, favoring larger overlaps with lower-energy eigenstates. Since low energies correspond, at the conclusion of the evolution, to approximate maximum independent sets, with high probability they will appear as the result of the final measurement. This argument is far from rigorous, but it makes the striking numerical results presented above appear less mysterious. It has been rigorously established that quantum diffusion can hold advantages over classical random walks for a special type of decision tree.[12] In the future, it will be important to investigate further why and in what circumstances quantum diffusion is more effective than its classical counterpart.

In summary, we have proposed a quantum algorithm for approximating the maximum independent set of a graph $G(n,m)$ by exploiting non-Abelian adiabatic mixing in the sub-Hilbert space of solutions and adiabatic evolution under the secondary Hamiltonian it generates. Our numerical experiments indicate that for both sparse and dense graphs, on average, we obtain an independent set of almost maximum size $\alpha(G)$ in evolution time $T\sim n^2$ with a single measurement. While our numerical results are encouraging, they are limited to relatively small systems: due to the exponential cost of simulating qubit systems classically, we only calculated systems containing up to 20 qubits. We made a heuristic argument that makes a robust quantum advantage, extending to large $n$, seem plausible, but this question deserves much further attention.
References
[1] Håstad J 1999 Acta Math. 182 105
[2] Zuckerman D 2006 Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing (New York: Association for Computing Machinery) p 681
[3] Matula D W 1976 The Largest Clique Size in a Random Graph, Technical Report CS 7608 (Department of Computer Science, Southern Methodist University, Dallas, Texas 75275)
[4] Coja-Oghlan A and Efthymiou C 2015 Random Struct. & Algorithms 47 436
[5] Frieze A M 1990 Discrete Math. 81 171
[6] Wu B, Yu H and Wilczek F 2020 Phys. Rev. A 101 012318
[7] Hu Y, Zhang Z and Wu B 2021 Chin. Phys. B 30 020308
[8] Wilczek F and Zee A 1984 Phys. Rev. Lett. 52 2111
[9] Farhi E, Goldstone J, Gutmann S and Sipser M 2000 arXiv:quant-ph/0001106v1
[10] Jerrum M 1992 Random Struct. & Algorithms 3 347
[11] Ambainis A and Regev O 2004 arXiv:quant-ph/0411152
[12] Farhi E and Gutmann S 1998 Phys. Rev. A 58 915