
Computational phase transition signature in Gibbs sampling


Published 27 December 2023 © 2023 The Author(s). Published by IOP Publishing Ltd
Citation: Hariphan Philathong et al 2023 J. Phys. Complex. 4 045010. DOI: 10.1088/2632-072X/ad1410


Abstract

Gibbs sampling is fundamental to a wide range of computer algorithms. Such algorithms are set to be replaced by physics-based processors—be it quantum or stochastic annealing devices—which embed problem instances and evolve a physical system into a low-energy ensemble to recover a probability distribution. At a critical constraint-to-variable ratio, satisfiability (SAT) problem instances exhibit a SAT-UNSAT transition (frustration free to frustrated). Algorithms require increasing computational resources from this critical point onward. This so-called algorithmic or computational phase transition has been studied extensively. In this paper we consider the complexity of sampling and recovering ground states from the resultant distributions of a physics-based processor. In particular, we first consider ideal Gibbs distributions at fixed inverse temperature and observe that the success probability of sampling and recovering ground states decreases for instances starting at the critical density. Furthermore, to simulate the Gibbs distribution we employ Ising spin dynamics, which play a crucial role in the understanding of non-equilibrium statistical physics, to find the steady states of 2-SAT Hamiltonians. We observe that beyond the critical density, the probability of sampling ground states decreases. Our results apply to several contemporary devices and provide a means to experimentally probe a signature of the computational phase transition.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 license. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Complexity theory aims to classify computational problems by their degree of difficulty—how many resources are required to solve them computationally. The satisfiability (SAT) problem was one of the first problems proven to be in the complexity class NP-complete [1] and has been a useful workhorse for studying the properties of NP-complete problems. The SAT problem asks whether a Boolean formula can evaluate to TRUE, and hence be satisfiable, or whether it is otherwise unsatisfiable. A SAT formula presented in conjunctive normal form (CNF) consists of clauses connected by logical AND, with each clause being a combination (logical OR) of Boolean variables or their negations. When there are k variables in each clause, the problem is referred to as k-satisfiability (k-SAT). Considering random k-SAT, where each clause is selected uniformly, SAT exhibits a rapid change from satisfiable instances to unsatisfiable instances at a certain clause-to-variable ratio (clause density) [2–11]. In addition, computer algorithms solving problem instances require increasing computational resources at this transition point. Such abrupt threshold behavior is known as the algorithmic or computational phase transition.

As information is necessarily represented in physical media, the processing, storage and manipulation of information is governed by the laws of physics. Indeed, the theory of computation is intertwined with the laws governing physical processes [12]. Many variants of naturally occurring processes (such as protein folding) have been shown to represent computationally significant problems, such as NP-hard optimization problems. Viewed another way, many physical systems and physical processes can be made to represent, and solve, computational problem instances. In this regard, there are attempts to build contemporary physics-based processors that solve problem instances by employing a physical process, such as annealing, quantum annealing and cooling [13–24].

Since physical processes and the theory of computation are interwoven, we argue that a signature of the algorithmic phase transition has the potential to be observed in physical systems that employ physical processes to solve problem instances. In particular, many physical systems have been built to solve computational problems by utilizing a cooling process which eventually brings a system to equilibrium at low temperature. In statistical mechanics, this equilibrium state is ideally described by the Gibbs state. In this regard, we consider the complexity of sampling and recovering the ground state from the Gibbs distribution of such physical systems.

Considering the Gibbs states of 2 (and 3)-SAT Hamiltonians, we calculated the ground-state occupancy as a function of clause density. We found that the algorithmic phase transition has a statistical signature in the Gibbs states of problem Hamiltonians generated randomly across the critical point. From our empirical observation, the ground-state occupancy of Gibbs states decreases for instances starting at the critical point of 2 (and 3)-SAT for fixed inverse temperatures and system sizes. This was confirmed by exact calculations of up to 26 binary units (spins) on a mid-scale supercomputer. Physical observation of the effect is hence within reach of near-term and possibly even existing physical computing hardware, such as annealers [13], Ising machines [15, 16, 25] and quantum enhanced annealers [17, 18, 24, 26–30].

We also explore in this work a physical process such that the Gibbs distributions could be recovered in the long-time limit. To simulate such physical systems, we consider the kinetic Ising model, in which the system is coupled to stochastic dynamics and placed in contact with a heat bath [31–35]. A particular example of the kinetic Ising model that evolves by single-spin-flip dynamics is known as the Glauber model [31]. This model has been applied in the study of the critical behavior of Ising-like models [31, 36–39] and of chaotic behavior [40]. In this setting, we simulated the time evolution which approximately represents the average behavior of spins under Glauber (spin-flip) stochastic dynamics [36], then numerically estimated the success probability of the simulated Glauber dynamics in sampling the ground energy within a given error tolerance as a function of clause density. Considering random 2-SAT instances, we observed that the probability decreases for instances with density higher than the critical density.

2. Satisfiability phase transition

The SAT problem is a decision problem: determine whether there exists a truth assignment to Boolean variables such that a given Boolean formula F evaluates to TRUE. If yes, we say F is satisfiable; otherwise it is unsatisfiable. The SAT problem was the first problem shown to be NP-complete by Cook in 1971 [1]. Many computational problems are known to be NP-complete through polynomial mappings (Karp reductions) from the k-SAT problem [41].

For example, consider a 3-SAT problem instance in CNF with two clauses over five variables:

Equation (1)

This instance is satisfiable, for example by $x_1 = 1,\,x_2 = 1,\,x_3 = 0,\,x_4 = 0,\,x_5 = 0$.

Let F be a k-SAT instance consisting of M clauses in k-CNF over N Boolean variables. The clause density of the instance is defined by the simple fraction $\alpha = M/N$. For the above instance (1), $\alpha = 2/5$. We call F an instance of the random k-SAT problem if its clauses are randomly generated by uniformly selecting unique k-tuples from the union of a variable set (cardinality N > k) and its element-wise negation.
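As an illustration, the following Python sketch generates random k-SAT instances under one common convention (k distinct variables per clause, each negated with probability 1/2, and clauses stored as DIMACS-style signed integers). The function name `random_ksat` and these representation choices are ours, not the authors' code.

```python
import random

def random_ksat(n_vars, n_clauses, k):
    """Random k-SAT instance: each clause picks k distinct variables,
    each negated with probability 1/2.  A clause is a tuple of signed
    integers (DIMACS style): v stands for x_v and -v for its negation."""
    instance = []
    for _ in range(n_clauses):
        chosen = random.sample(range(1, n_vars + 1), k)
        clause = tuple(v if random.random() < 0.5 else -v for v in chosen)
        instance.append(clause)
    return instance

# Example: a random 2-SAT instance at clause density alpha = M/N = 1.
N, alpha = 10, 1.0
print(random_ksat(N, int(alpha * N), k=2))
```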

The random k-SAT problem exhibits an abrupt threshold behavior separating a SAT phase from an UNSAT phase at a critical clause density $\alpha_c(k)$ [2–11, 42]. Moreover, it is observed that the computational resources required for solving k-SAT instances start to increase from the transition point $\alpha_c(k)$.

For random 3-SAT instances, an abrupt transition in the probability of SAT has been numerically observed (though not formally proven) to be around clause density $\alpha_{c}(3) \approx 4.27$ [5–7, 10]. Most instances to the left (right) of the transition are satisfiable (unsatisfiable). It is statistically observed that the computational time of decision SAT solvers peaks at the same point, where hard instances of the 3-SAT problem are expected to concentrate (see figure 1 top right).


Figure 1. Top: percentage of satisfiable instances (left axis) and run-time (right axis) versus clause density. Varying the clause density α, we randomly generated 1000 2-SAT instances with 1000 variables (left) and 3-SAT instances with 300 variables (right). Here $\alpha_c(2) = 1$ and $\alpha_c(3) \approx 4.27$ for 2-SAT and 3-SAT, respectively. Middle: percentage of SAT for 2-SAT instances (left) and 3-SAT instances (right) across the algorithmic phase transition signature, comparing total variable counts. Bottom: run-time for 2-SAT instances (left) and 3-SAT instances (right) across the algorithmic phase transition signature, comparing total variable counts.


The computational phase transition has better theoretical footing when considering the random 2-SAT problem, which can be solved in polynomial time [43, 44]. Both theoretical and numerical results [2, 8, 45] firmly position the transition at clause density $\alpha_{c}(2) = 1$ (see figure 1 top left). A further advantage is that additional slack bits are not required to embed 2-SAT into the Ising model. In other words, 2-SAT instances can be embedded directly into 2-body generalized Ising Hamiltonians (see appendix A).

We randomly generated 1000 instances of 2 (and 3)-SAT and then calculated the percentage of SAT (see figure 1 middle) and the run-time (see figure 1 bottom) using a SAT solver, the PycoSat package in Python 3 [46]. The sharp transition at the critical point emerges only in the limit of an infinite number of variables at fixed ratio; in practice, finite-size effects broaden the transition region.
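The experiment behind figure 1 can be reproduced in miniature with the PycoSat solver mentioned above. The sketch below estimates the fraction of satisfiable instances and the mean run-time at each density; it assumes the `random_ksat` helper from the previous sketch and is only an illustration of the procedure, not the setup used to produce the figures.

```python
import time
import pycosat  # SAT solver used in [46]; `random_ksat` is the helper defined above

def percent_sat_and_runtime(n_vars, alpha, k, n_instances=100):
    """Fraction of satisfiable random instances and mean solver time
    at clause density alpha."""
    n_sat, total_time = 0, 0.0
    for _ in range(n_instances):
        cnf = [list(c) for c in random_ksat(n_vars, int(alpha * n_vars), k)]
        t0 = time.perf_counter()
        result = pycosat.solve(cnf)  # a list of literals if SAT, 'UNSAT' otherwise
        total_time += time.perf_counter() - t0
        n_sat += isinstance(result, list)
    return 100.0 * n_sat / n_instances, total_time / n_instances

for a in (0.5, 1.0, 1.5, 2.0):
    p, t = percent_sat_and_runtime(n_vars=100, alpha=a, k=2)
    print(f"alpha = {a:.1f}: {p:5.1f}% SAT, {1e3 * t:.3f} ms per instance")
```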

For 2-SAT, theoretical results [45] establish a finite-size scaling window for the transition region. For $0 \lt \delta \lt 1$, this window is defined by $W(N,\delta) = (\alpha_{-}(N,\delta),\alpha_{+}(N,\delta))$, where $\alpha_{-}(N,\delta) = \sup(\alpha)$ such that the probability of satisfiability is greater than $1-\delta$ and, similarly, $\alpha_{+}(N,\delta) = \inf(\alpha)$ such that the probability of satisfiability is less than δ. The width of the window tends to zero and $\alpha_{\pm}(N,\delta) \longrightarrow 1$ as $N \longrightarrow \infty$ for all δ.
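For illustration only, a rough Monte Carlo analogue of this window can be estimated by scanning the clause density on a finite grid and measuring the probability of satisfiability with a SAT solver. The sketch below uses our own function names and grid, and again assumes `random_ksat` from the earlier sketch; it is not the construction of [45].

```python
import pycosat  # `random_ksat` is assumed from the earlier sketch

def prob_sat(n_vars, alpha, k=2, n_instances=200):
    """Monte Carlo estimate of the probability that a random k-SAT
    instance at clause density alpha is satisfiable."""
    hits = 0
    for _ in range(n_instances):
        cnf = [list(c) for c in random_ksat(n_vars, int(alpha * n_vars), k)]
        hits += isinstance(pycosat.solve(cnf), list)
    return hits / n_instances

def scaling_window(n_vars, delta, alphas):
    """Empirical analogue of W(N, delta) on a finite grid of densities:
    the largest alpha with P(SAT) > 1 - delta and the smallest alpha
    with P(SAT) < delta."""
    probs = {a: prob_sat(n_vars, a) for a in alphas}
    alpha_minus = max((a for a, p in probs.items() if p > 1 - delta), default=None)
    alpha_plus = min((a for a, p in probs.items() if p < delta), default=None)
    return alpha_minus, alpha_plus

grid = [0.6 + 0.1 * i for i in range(10)]        # scan around alpha_c(2) = 1
print(scaling_window(n_vars=200, delta=0.1, alphas=grid))
```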

For 3-SAT, the exact location of the transition point is still unknown. However, upper and lower bounds have been proven. The best lower bound at this time appears to be $\alpha_{lb} = 3.52$ [47] and the best upper bound is $\alpha_{ub} = 4.453$ [48].

In figure 1 (middle and bottom), we can see that, as the number of variables increases, the transition sharpens and moves closer to clause density 1 for 2-SAT (left) and 4.27 for 3-SAT (right).

Although the decision k-SAT problems exhibit an easy-hard-easy transition in the computational resources (see figure 1 bottom), the maximum k-SAT (MAX-k-SAT) problem, the optimization variant of k-SAT in which one seeks variable assignments that maximize the number of satisfied clauses in a given instance, shows a different type of phase transition, an easy-hard transition [49, 50], reflecting the difference in computational complexity of the two problems. MAX-2-SAT exhibits a phase transition at the same critical clause density as 2-SAT [50].

3. Ground-state occupancy of thermal states

We consider physical systems that encode solutions of 2-SAT instances into the ground-state subspace of their corresponding Hamiltonians. This is done using the standard Ising embedding [51–53], which embeds k-SAT instances directly into k-body generalized Ising Hamiltonians (see appendix A for details).

Considering the corresponding Gibbs states of 2-SAT Hamiltonians, we calculate the ground-state occupancy of such Hamiltonians from their Gibbs distributions as a function of clause density. This procedure can be formulated as follows.

Let $|i\rangle$ denote the (possibly d-degenerate) lowest eigenstates of a Hamiltonian $\mathcal{H} \unicode{x2A7E} 0$. We label these possibly degenerate states by letting i range from 1 up to d and call $E_\textrm{min}$ the lowest eigenvalue of $\mathcal{H}$,

$\mathcal{H}\,|i\rangle = E_\textrm{min}\,|i\rangle, \quad i = 1, \ldots, d.$ (2)

In statistical mechanics, a physical system at thermal equilibrium is ideally described by a Gibbs state,

$\rho_\beta = \frac{e^{-\beta \mathcal{H}}}{\mathcal{Z}},$ (3)

where β is inverse temperature and $\mathcal{Z}$ is the partition function defined as

$\mathcal{Z} = \textrm{Tr}\left(e^{-\beta \mathcal{H}}\right).$ (4)

Considering the Gibbs state (3), the occupancy in the (possibly d-degenerate) lowest-energy subspace of Hamiltonian $\mathcal{H}$ for fixed inverse temperature β becomes

$P\left(\beta, N, \alpha \right) = \sum_{i = 1}^{d} \langle i |\, \rho_\beta \,| i \rangle = \frac{d\, e^{-\beta E_\textrm{min}}}{\mathcal{Z}}.$ (5)

The quantity $P\left(\beta, N, \alpha \right)$ in equation (5) provides our means to access the complexity of sampling the right solution from a thermal distribution. In the zero temperature limit, we have

$\lim_{\beta \rightarrow \infty} P\left(\beta, N, \alpha \right) = 1.$ (6)

The average occupancy $P\left(\beta, N, \alpha \right)$ as a function of clause density α for different values of the inverse temperature β is illustrated in figure 2 (left). For each clause density, we randomly generated 1000 2-SAT instances over 26 variables. We then applied the embedding procedure of appendix A and calculated $P\left(\beta, N, \alpha \right)$.
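For small systems, $P\left(\beta, N, \alpha \right)$ can be computed exactly by brute-force enumeration of all $2^N$ assignments, taking the energy of an assignment to be its number of unsatisfied clauses, consistent with the embedding of appendix A. The following sketch (our own helper names, reusing `random_ksat` from above) illustrates the calculation; it is not the parallel C code described in section 6.

```python
import itertools
import math

def n_unsat(assignment, instance):
    """Energy of a 0/1 assignment: the number of unsatisfied clauses."""
    return sum(
        not any((assignment[abs(l) - 1] == 1) == (l > 0) for l in clause)
        for clause in instance
    )

def gibbs_ground_occupancy(instance, n_vars, beta):
    """Exact P(beta, N, alpha) by enumerating all 2^N assignments:
    P = d * exp(-beta * E_min) / Z, with d the ground-state degeneracy."""
    energies = [n_unsat(a, instance)
                for a in itertools.product((0, 1), repeat=n_vars)]
    e_min = min(energies)
    z = sum(math.exp(-beta * e) for e in energies)
    return energies.count(e_min) * math.exp(-beta * e_min) / z

inst = random_ksat(n_vars=12, n_clauses=12, k=2)   # alpha = 1
print(gibbs_ground_occupancy(inst, n_vars=12, beta=2.0))
```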


Figure 2. Left: thermal occupancy of the ground-energy subspace $P\left(\beta, N, \alpha \right)$ of Hamiltonians embedding 2-SAT instances across the algorithmic phase transition for $\beta = \{1, 2, 3\}$. Right: minimum value of β (inverse temperature) such that the ground-state occupancy satisfies $P\left(\beta, N, \alpha \right) \unicode{x2A7E} 0.9$, across the algorithmic phase transition. Each data point is the average occupancy over 1000 random instances of 26 variables (spins). The vertical bars indicate 3σ of the estimated mean. The dark lines show the probability of SAT, indicating where the transition occurs for 26 variables.


We observe that, on average, the ground-state occupancy $P\left(\beta, N, \alpha \right)$ decreases for instances starting from the critical point ($\alpha \gt \alpha_c$). The same feature can be seen when varying the system size $N = \{5,10,15,20\}$ (see figure 3 left).


Figure 3. Left: thermal occupancy $P\left(\beta, N, \alpha \right)$ of Hamiltonians embedding 2-SAT instances across the algorithmic phase transition for β = 1 and different system sizes $N = \{5,10,15,20\}$. Each point represents the mean occupancy over 200 random instances. The black dashed vertical line indicates the critical density $\alpha_c(2) = 1$. Right: minimum value of β such that the ground-state occupancy satisfies $P\left(\beta, N, \alpha \right) \unicode{x2A7E} \delta$ for $\delta = \{0.9, 0.7, 0.5\}$, across the algorithmic phase transition for N = 10 spins. Each point represents the mean over 200 random 2-SAT instances. The vertical bars indicate 3σ of the estimated mean. The dark line shows the probability of SAT, indicating where the transition occurs for 10 variables.


The complexity landscape of $P\left(\beta, N, \alpha \right)$ is divided into two regions due to the density-dependent behavior of the ground subspace of the SAT Hamiltonian. For $\alpha \lt \alpha_c$, the ground-state energy is zero and its degeneracy decreases as the clause density increases, which makes $P\left(\beta, N, \alpha \right)$ decrease with clause density. For $\alpha \gt \alpha_c$, the ground energy is an increasing function of α while its degeneracy is almost constant with density α, causing $P\left(\beta, N, \alpha \right)$ to change its behavior around the critical density. Data for the density dependence of the ground-state energy and its degeneracy for 2-SAT are provided in appendix B, see figure B1.

One might be interested in the following question: on average, what inverse temperature is required to ensure that the occupancy of the ground thermal state, $P\left(\beta, N, \alpha \right)$, is greater than some fixed probability $\delta \in [0,1]$?

In figure 2 (right), we plot the minimum value of the inverse temperature ($\beta_\mathrm{min}$) satisfying the above condition with δ = 0.9 versus clause density for 2-SAT instances with 26 variables. On average, instances starting from the critical point ($\alpha \gt \alpha_c$) require lower temperatures to reach sufficient ground-state occupancy than those with density $\alpha \lt \alpha_c$. The same feature also appears for other chosen values of $\delta = \{0.9, 0.7, 0.5\}$ (see figure 3 right). For physical systems prepared in Gibbs distributions, this informs us of the significant role temperature plays in obtaining a given ground-state occupancy.
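A minimal way to estimate $\beta_\mathrm{min}$ numerically is to scan a grid of inverse temperatures and return the first value at which the occupancy reaches the target δ, using the fact that $P\left(\beta, N, \alpha \right)$ is non-decreasing in β. The sketch below assumes the `gibbs_ground_occupancy` and `random_ksat` helpers from the earlier sketches; the grid and function name are our own choices, not the authors' procedure.

```python
def beta_min(instance, n_vars, delta, betas=None):
    """Smallest beta on a fixed grid at which P(beta, N, alpha) >= delta;
    P is non-decreasing in beta, so the first hit is the grid minimum."""
    if betas is None:
        betas = [0.1 * j for j in range(1, 201)]          # scan beta in (0, 20]
    for beta in betas:
        if gibbs_ground_occupancy(instance, n_vars, beta) >= delta:
            return beta
    return None                                            # target not reached on the grid

inst = random_ksat(n_vars=10, n_clauses=20, k=2)           # alpha = 2 > alpha_c(2)
print(beta_min(inst, n_vars=10, delta=0.9))
```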

4. Glauber time evolution for 2-SAT Hamiltonian

Considering physical systems that employ physical processes to solve problem instances, the resultant distributions could deviate from the ideal Gibbs state. In this section, we study the complexity of sampling and recovering ground states in such physical systems. We simulated the Glauber time evolution [36] on the Ising spin models realizing 2-SAT instances, and numerically estimated the success probability of finding ground states of 2-SAT Hamiltonians as a function of clause density.

Glauber dynamics have been successfully used to treat and understand critical behavior in the physics of complex systems [31, 36–40]. In Glauber dynamics, the ith spin variable, $\sigma_i \in \{+1,-1\}$, is selected at random, one at a time, and the spin flips its sign ($\sigma_i \rightarrow -\sigma_i$) at a transition rate that depends on the change in the energy of the system resulting from the flip.

On physical grounds, one assumes that each transition process is in equilibrium with its reverse process. The equilibrium Boltzmann weight $e^{-\beta \mathcal{H}(\sigma)}/\mathcal{Z}$ is assigned according to the energy $\mathcal{H}(\sigma)$ of the configuration σ. Following this basic principle, we can fix the transition rates of the spin flips by the detailed balance condition:

$w(\sigma \rightarrow \sigma^{\prime})\, e^{-\beta \mathcal{H}(\sigma)} = w(\sigma^{\prime} \rightarrow \sigma)\, e^{-\beta \mathcal{H}(\sigma^{\prime})},$ (7)

where $\sigma = \sigma_1 \cdots \sigma_i \cdots \sigma_N \in \{+1,-1\}^N $ denotes a spin configuration of the system, and $\sigma^{\prime} = \sigma_1 \cdots (-\sigma_i) \cdots \sigma_N $ denotes the configuration derived from σ by flipping the ith spin: $\sigma_i \rightarrow -\sigma_i$.

From here onward, we consider the time evolution of Glauber dynamics [31, 36–38]. Let $\sigma_i \in \{+1,-1\}$ be the ith spin variable, where $i \in \{1, \cdots, N\}$. The Glauber time evolution is given by the following rate equation (8), which assigns the expected value of the ith spin $\langle \sigma_i (t)\rangle \in [-1,+1]$ at time $t \in [0, \infty)$ as

$\frac{\textrm{d}}{\textrm{d}t}\langle \sigma_i (t)\rangle = -2\, \langle \sigma_i\, w_i(\sigma) \rangle,$ (8)

with the transition rate

$w_i(\sigma) = \frac{1}{2}\left[1 - \tanh\left(\frac{\beta\, \delta \mathcal{H}_i (t)}{2}\right)\right],$ (9)

satisfying the detailed balance condition (7), where $\delta \mathcal{H}_i (t)$ is the change in energy due to a single-spin flip at the ith site.

Consider a 2-SAT instance with $M = \alpha N$ clauses; its corresponding Hamiltonian is expressed in the form of a 2-body generalized Ising model

$\mathcal{H}(\sigma) = \sum_{i} h_i\, \sigma_i + \sum_{i \lt j} J_{ij}\, \sigma_i \sigma_j + \textrm{const},$ (10)

with appropriate coefficients $ h_i, J_{ij}$ such that the ground-state subspace is spanned by solutions of the problem instance and the ground energy ($E_\mathrm{min}$) is equal to the minimal number of unsatisfied clauses. A detailed construction of such a Hamiltonian is given in appendix A. The change in the energy (10) due to a single-spin flip at the ith site is then given by

$\delta \mathcal{H}_i = -2\, \sigma_i \left( h_i + \sum_{j \neq i} J_{ij}\, \sigma_j \right),$ (11)

where $\sum_{j \neq i}$ denotes the sum over the spins j that interact with spin i.

We approximate the Glauber time evolution (8) by considering the limiting case in which equilibrium mean-field theory applies: the interaction is weak and long range. Substituting (11) into (9) and using the facts that $\tanh(a \sigma_i) = \sigma_i \tanh(a)$ for $\sigma_i = \pm 1$ and $\sigma_i^2 = 1$, the Glauber rate equation (8) becomes the approximate equation

$\frac{\textrm{d}}{\textrm{d}t}\langle \sigma_i (t)\rangle \approx -\langle \sigma_i (t)\rangle - \tanh \langle r_i (t) \rangle,$ (12)

where

$r_i(t) = \beta \left( h_i + \sum_{j \neq i} J_{ij}\, \langle \sigma_j (t) \rangle \right).$ (13)

In this limit, ri is the sum of a large number of contributions from many different parts of the system. Assuming that the fluctuations of these contributions about their mean values are independent, a law of large numbers applies: the fluctuations of ri about its mean are small, so that in this limit $\langle \tanh{r_i} \rangle = \tanh \langle r_i (t)\rangle$ [36].

One might ask the following question: given the approximate Glauber rate equation (12), what is the probability of sampling an energy $E \unicode{x2A7D} E_\mathrm{min}+\epsilon$ for a given error tolerance $\epsilon \gt 0$? This probability is defined as the simple fraction

$\mathrm{Prob}\left(E ~|~ E \unicode{x2A7D} E_\mathrm{min}+\epsilon\right) = \frac{n_{\epsilon}}{n_\mathrm{total}},$ (14)

where $n_{\epsilon}$ is the number of samples such that $E \unicode{x2A7D} E_\mathrm{min}+\epsilon$, and $n_\mathrm{total}$ is the total number of samples.

We numerically simulated the approximate Glauber rate equation (12) using 500 time steps of size $\Delta t = 0.05$ for every time evolution. For each 2-SAT instance, 300 initial states are sampled from the uniform distribution: $\langle \sigma_i (t = 0) \rangle \sim \textrm{Uni} \{-1,+1\}$ for all $i \in \{1,\cdots, N\}$. To obtain the statistics of the energy, we evaluate the average value of the energy E over 300 instances for each clause density.
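A minimal sketch of this procedure is given below: a forward-Euler integration of a rate equation of the assumed form $\mathrm{d}\langle \sigma_i \rangle/\mathrm{d}t = -\langle \sigma_i \rangle - \tanh\big(\beta (h_i + \sum_j J_{ij} \langle \sigma_j \rangle)\big)$, matching our reading of (12)-(13), followed by the estimate (14). The usage example uses toy random couplings; in the paper the coefficients $h_i$, $J_{ij}$ (and a constant offset) come from the 2-SAT embedding of appendix A, and the authors' actual integration scheme is not specified beyond the step count and step size.

```python
import numpy as np

def glauber_mean_field(h, J, beta, n_steps=500, dt=0.05, rng=None):
    """Euler integration of the approximate rate equation, assumed here as
        d<s_i>/dt = -<s_i> - tanh( beta * (h_i + sum_j J_ij <s_j>) ),
    starting from a random initial state <s_i(0)> drawn from {-1, +1}."""
    rng = np.random.default_rng() if rng is None else rng
    m = rng.choice([-1.0, 1.0], size=h.shape[0])
    for _ in range(n_steps):
        m = m + dt * (-m - np.tanh(beta * (h + J @ m)))
    return m

def ising_energy(s, h, J, const=0.0):
    """H(s) = const + sum_i h_i s_i + sum_{i<j} J_ij s_i s_j,
    for a symmetric, zero-diagonal coupling matrix J."""
    return const + h @ s + 0.5 * s @ J @ s

# Toy usage with hypothetical couplings.
rng = np.random.default_rng(0)
N = 10
h = rng.normal(size=N)
J = np.triu(rng.normal(size=(N, N)), 1)
J = J + J.T
energies = []
for _ in range(300):                                   # 300 random initial states
    m = glauber_mean_field(h, J, beta=2.0, rng=rng)
    s = np.where(m >= 0, 1.0, -1.0)                    # round to a spin configuration
    energies.append(ising_energy(s, h, J))
e_min, eps = min(energies), 0.8                        # e_min as a proxy for E_min
prob = sum(e <= e_min + eps for e in energies) / len(energies)   # equation (14)
print(prob)
```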

In figure 4 (left), we show the average energy error $(E-E_\mathrm{min})$ across the phase transition for different inverse temperatures. The average energy error peaks around the critical point, suggesting a reasonable range of error tolerances $\epsilon$ for the estimation of $\mathrm{Prob}(E ~|~ E \unicode{x2A7D} E_\mathrm{min}+\epsilon)$ at any fixed inverse temperature β; for example, $\epsilon \unicode{x2A7E} 0.8$ for β = 2.


Figure 4. Left: average energy error $E-E_\mathrm{min}$ across the phase transition for $\beta = \{1,2,3\}$. Right: average probability of sampling energy $E \unicode{x2A7D} E_\mathrm{min}+\epsilon$ of the 2-SAT Hamiltonian across the algorithmic phase transition for N = 10 spins, β = 2 and $\epsilon = \{0.8, 0.85, 0.9\}$. Each point represents the mean value over 300 instances. The vertical bars indicate 3σ of the estimated mean. The dark line shows the probability of SAT, indicating where the transition occurs for ten variables.


Accordingly, in figure 4 (right), we illustrate $\mathrm{Prob}(E ~|~ E \unicode{x2A7D} E_\mathrm{min}+\epsilon)$ across the phase transition for β = 2 and error tolerances $\epsilon \unicode{x2A7E} 0.8$. We observe a decrease in $\mathrm{Prob}(E ~|~ E \unicode{x2A7D} E_\mathrm{min}+\epsilon)$ for instances with density $\alpha \gt \alpha_c$ of 2-SAT.

5. Conclusion and discussion

We predict that the algorithmic phase transition has a signature in thermal Gibbs states, which has implications for contemporary physical computing systems. The computational phase transition signature has strong theoretical footing when considering the 2-SAT problem—a problem which offers a direct embedding into 2-body generalized Ising Hamiltonians. The k-SAT problem for $k \unicode{x2A7E} 3$, on the other hand, requires additional slack bits to create the k-body projectors needed to embed problem instances.

We randomly generated 2-SAT instances and embedded these instances into generalized Ising Hamiltonians whose ground eigenspace is spanned by solutions to the problem instance. Considering the Gibbs states of 2-SAT Hamiltonians, we calculated the occupancy of the ground-state subspace as a function of clause density. We found that the occupancy appears to decrease starting from the phase transition point of 2-SAT (see figure 2 left and figure 3 left). This finding indicates that the complexity of sampling solutions of 2-SAT instances from thermal Gibbs states changes at the critical density. This feature is also seen in our numerical calculations for 3-SAT; see appendix C for more details.

Moreover, one infers that the temperature of the Gibbs state is the fleeting computational resource. Indeed, one argues that for any fixed inverse temperature β, there exist problem instances that would require a significant number of samples to recover the ground state.

We also considered physical systems employing physical processes to solve problem instances. We simulated the time evolution of the physical systems corresponding to 2-SAT instances using Glauber dynamics. We observed that the success probability of sampling the ground state at fixed inverse temperature becomes lower for instances starting at the critical point (see figure 4 right).

Our prediction connects the computational phase transition, an important feature in the theory of computational complexity, with physical processes that naturally reveal this feature in observable quantities. Since 2-SAT instances can be directly embedded onto the generalized Ising Hamiltonian, and Ising devices have recently been built with increasing programmability, the computational phase transition signature has the potential to be physically observed. Such an experiment provides a benchmark to locate where hard problem instances are most likely to occur.

6. Numerical details

The large calculations were performed on Skoltech's supercomputer, Zhores [54]. The code is written in C with nested OpenMP parallelism. The Hamiltonian is expanded in the shared memory of each node and several clauses are computed in parallel. Multiple nodes run the procedure concurrently with different random sequences, and the statistics are gathered with MPI collectives. This implementation can use all the operational memory of each server. For figure 2 (N = 26), the program ran on 8 nodes for around 13 h to collect the statistics.

We also used the SAT solver PycoSat in Python 3 [46] to demonstrate the percentage of SAT and the run-time for 2 (and 3)-SAT in figure 1.

In simulating the approximate Glauber rate equation (12), we used 500 time steps of size $\Delta t = 0.05$ for every time evolution. For each instance, we uniformly sample 300 initial states $\langle \sigma_i (t = 0) \rangle \sim \textrm{Uni} \{-1,+1\}$, for all $i \in \{1,\cdots, N\}$, in order to obtain the statistics of the energy, and then evaluate the average energy E over 300 random 2-SAT instances for each clause density. In appendix D, we show an example of the simulation.

Acknowledgments

We thank the Skoltech-HPC team for supporting our large calculations and computational programming.

Data availability statements

All data that support the findings of this study are included within the article (and any supplementary files).

Conflict of interest

The authors declare no competing interests.

Code availability

The code for generating the data will be available on reasonable request.

Author contributions

All authors conceived and developed the theory and design of this study and verified the methods. J B proposed the study and supervised its execution. H P, V A and I Z developed and deployed the parallel code to collect numerical data. All authors contributed to interpreting the results and writing the manuscript.

Appendix A: Ising spin embedding

Finding the ground energy and ground-state configurations of physical systems such as spin glasses is NP-hard [55]. k-SAT instances can be directly mapped onto Hamiltonians in the form of the generalized Ising model, see e.g. [51, 52]. The standard procedure is as follows.

We convert logical bits $\{0,1\}$ to spins

$x_j \in \{0,1\} \;\longrightarrow\; \sigma_j = (-1)^{x_j} \in \{+1,-1\}.$ (A.1)

The penalty Hamiltonian $\mathcal{H}_\mathrm{SAT}$ is constructed by real linear extension of $\{P_0, P_1, \unicode{x1D7D9}\}$ by means of the following mapping between Boolean variables and projectors,

$x_j \;\longrightarrow\; P_j^{0},$ (A.2)

and

$\bar{x}_j \;\longrightarrow\; P_j^{1},$ (A.3)

where $P_j^{1} = |1 \rangle\langle 1|$ and $P_j^{0} = |0 \rangle\langle 0|$ act on the jth spin.

By this construction, clauses in a SAT instance are mapped onto projectors $P_{i j \cdots k}^{\alpha \beta \cdots \gamma} = P_{i}^{\alpha} \otimes P_{j}^{\beta} \otimes \cdots \otimes P_{k}^{\gamma}$. Hence, the Hamiltonian $\mathcal{H}_\mathrm{SAT}$ can be constructed by summing over all the clauses in an instance,

$\mathcal{H}_\mathrm{SAT} = \sum_{l = 1}^{M} P_{i j \cdots k}^{\alpha \beta \cdots \gamma}\left(\mathcal{C}_{l}\right),$ (A.4)

where $\mathcal{C}_{l}$ assigns the values of $\alpha, \beta, \cdots, \gamma \in \{0, 1\}$ and the spin indices $i, j, \cdots, k \in \{1, 2, \cdots, N\}$ corresponding to the lth clause of the instance. The ground-state space is spanned by solutions of the SAT problem, and the ground energy is equal to the minimal number of unsatisfied clauses.

Substituting the projectors $P^{\alpha}_j = \frac{1}{2}(\unicode{x1D7D9} + (-1)^{\alpha} \sigma^{z}_{j})$ in equation (A.4), where σz is the Pauli Z matrix, the k-SAT Hamiltonian can be written in the form of the generalized Ising Hamiltonian with at most k-body interaction.

In the case of 2-SAT, the Hamiltonian $\mathcal{H}_{2SAT}$, which is a sum over projectors onto 2-SAT clauses, can be written in the form of a 2-body Ising spin Hamiltonian,

$\mathcal{H}_{2SAT} = \sum_{j} h_j\, \sigma^{z}_{j} + \sum_{j \lt k} \mathcal{J}_{jk}\, \sigma^{z}_{j} \sigma^{z}_{k} + \textrm{const}.$ (A.5)

The coefficient hj indicates the local field of the jth spin and the couplings $\mathcal{J}_{jk}$ encode the 2-body interaction between the jth spin and the kth spin.

This method provides a way to physically realize SAT instances as spin Hamiltonians. If such a physical system is built, cooling the system into its ground state is equivalent to solving the problem.
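To make the construction concrete, the sketch below expands the clause penalty projectors of a 2-SAT instance into local fields and couplings, assuming the bit-to-spin convention $\sigma_j = (-1)^{x_j}$ and DIMACS-style signed-integer literals; the helper names are ours, and the exact sign conventions may differ from the authors' implementation.

```python
import numpy as np

def ising_coefficients(instance, n_vars):
    """Expand the sum of 2-SAT clause penalty projectors into the 2-body
    Ising form  H(s) = const + sum_j h_j s_j + sum_{j<k} J_jk s_j s_k,
    under the bit-to-spin convention s_j = (-1)^{x_j}.  Each clause
    contributes the projector onto its unique violating assignment, so
    H(s) counts the number of unsatisfied clauses."""
    h = np.zeros(n_vars)
    J = np.zeros((n_vars, n_vars))
    const = 0.0
    for clause in instance:
        # For a literal on variable v, the indicator that the literal is
        # FALSE is (1 + c * s_v) / 2 with c = +1 for x_v and c = -1 for ~x_v.
        (i, ci), (j, cj) = ((abs(l) - 1, 1.0 if l > 0 else -1.0) for l in clause)
        const += 0.25
        h[i] += 0.25 * ci
        h[j] += 0.25 * cj
        J[i, j] += 0.25 * ci * cj
        J[j, i] += 0.25 * ci * cj
    return const, h, J

def energy(s, const, h, J):
    """Number of unsatisfied clauses for a spin configuration s in {-1,+1}^N."""
    return const + h @ s + 0.5 * s @ J @ s

inst = [(1, 2), (-1, 3), (2, -3)]     # (x1 v x2) & (~x1 v x3) & (x2 v ~x3)
const, h, J = ising_coefficients(inst, n_vars=3)
s = np.array([1.0, -1.0, -1.0])       # x1 = 0, x2 = 1, x3 = 1 satisfies all clauses
print(energy(s, const, h, J))         # -> 0.0
```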

Appendix B: 2-SAT ground energy and its degeneracy

Figure B1 shows the average values of the ground-state energy (left) and its degeneracy (right) over 300 random 2-SAT Hamiltonians of ten variables for clause densities $\alpha \unicode{x2A7D} 6$.


Figure B1. Average ground energy (left) and its degeneracy (right) of random 2-SAT Hamiltonians versus clause density for ten variables. Each data point is the average value over 300 random instances. Vertical bars indicate the standard deviation.


Appendix C: Data for 3-SAT thermal occupancy

We present here supplementary data for the random 3-SAT problem. In figure C1, we illustrate the occupancy $P\left(\beta, N, \alpha \right)$ for various inverse temperatures (left) and the minimum inverse temperature $\beta_\mathrm{min}$ such that $P\left(\beta, N, \alpha \right) \unicode{x2A7E} 0.9$ (right) across the algorithmic phase transition for 26 spin variables.


Figure C1. Left: thermal occupancy of the ground-energy subspace $P\left(\beta, N, \alpha \right)$ of Hamiltonians embedding 3-SAT instances across the algorithmic phase transition for $\beta = \{1, 2, 3\}$. Right: minimum value of β (inverse temperature) such that the ground-state occupancy satisfies $P\left(\beta, N, \alpha \right) \unicode{x2A7E} 0.9$, across the algorithmic phase transition. Each data point is the average occupancy over 1000 random 3-SAT instances of 26 variables (spins). The vertical bars indicate 3σ of the estimated mean. The dark lines show the probability of satisfiability calculated with the PycoSat package [46], indicating where the transition occurs for 26 variables.


In figure C2, we show the occupancy for different problem sizes $N = \{5,10,15,20\}$ (left) and the minimum inverse temperature for various values of $\delta = \{0.9, 0.7, 0.5\}$ across the algorithmic phase transition for N = 10 spins (right).


Figure C2. Left: thermal occupancy $P\left(\beta, N, \alpha \right)$ of Hamiltonians embedding 3-SAT instances across the algorithmic phase transition for β = 1 and different system sizes $N = \{5,10,15,20\}$. Each point represents the mean occupancy over 200 random instances. The black dashed vertical line indicates the critical density $\alpha_c(3) \approx 4.27$. Right: minimum value of β such that the ground-state occupancy satisfies $P\left(\beta, N, \alpha \right) \unicode{x2A7E} \delta$ for $\delta = \{0.9, 0.7, 0.5\}$, across the algorithmic phase transition for N = 10 spins. Each point represents the mean over 200 random 3-SAT instances. The vertical bars indicate 3σ of the estimated mean. The dark line shows the probability of satisfiability, indicating where the transition occurs for 10 variables.


Appendix D: Simulations of the Glauber time evolution

Consider a 2-SAT instance with ten variables and ten clauses (clause density α = 1):

Equation (D.1)

We show the numerical convergence of the expected spin variables and the corresponding energy in the simulation of the Glauber time evolution for different values of $\beta = \{1,2,3\}$ in figure D1.


Figure D1. Convergence of the expected spin values (left) and the corresponding energy (right) in the simulation of the Glauber time evolution (12) on the instance (D.1) for inverse temperature β = 1 (top), β = 2 (middle) and β = 3 (bottom). The simulation uses 500 time steps of size $\Delta t = 0.05$.
