A probabilistic 3–SAT algorithm further improved

Thomas Hofmeister (1), Uwe Schöning (2), Rainer Schuler (2), and Osamu Watanabe (3)

1) Informatik 2, Universität Dortmund, 44221 Dortmund, Germany
2) Abt. Theoretische Informatik, Universität Ulm, Oberer Eselsberg, 89069 Ulm, Germany
3) Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology, Meguro-ku Ookayama, Tokyo 152-8552, email: [email protected]

Acknowledgments of financial support: The second author is supported in part by DFG grant Sch 302/5-2. The fourth author is supported in part by JSPS/NSF cooperative research: Complexity Theory for Strategic Goals, 1998–2001.

Abstract. In [Sch99], Schöning proposed a simple yet efficient randomized algorithm for solving the k-SAT problem. In the case of 3-SAT, the algorithm has an expected running time of poly(n) · (4/3)^n = O(1.3334^n) when given a formula F on n variables. This was until now the best running time known for an algorithm solving 3-SAT. In this paper, we describe an algorithm which improves upon this time bound by combining an improved version of the above randomized algorithm with other randomized algorithms. Our new expected time bound for 3-SAT is O(1.3302^n).

1 Introduction and Preliminaries

In the k-satisfiability problem (k-SAT for short), a Boolean formula F is given as a set of clauses, each of which has length at most k. The problem is to determine whether there is an assignment to the variables that satisfies the formula. This problem has been studied by many researchers, and various algorithms have been proposed. Some algorithms indeed have a worst-case time complexity much better than the trivial exhaustive search algorithm; see, e.g., [DGHS00,Hir00a,Kul99,MS85,PPSZ98,Sch99]. In particular, a random walk algorithm proposed by Schöning [Sch99], which can be used for the k-SAT problem in general, has so far provided the best worst-case time bound for 3-SAT. The algorithm consists of two phases: first, an initial assignment is chosen independently of the clauses; then, a random walk is started which is guided by the clauses. Schöning's analysis showed that for 3-SAT, the success probability of finding a satisfying assignment for a satisfiable formula F on n variables is at least (3/4)^n / p(n), where p(n) is some polynomial in n. Iterating until a satisfying assignment is actually found yields a randomized algorithm with expected running time p(n) · (4/3)^n.

In this paper, we improve upon Schöning's algorithm by modifying its first phase. The main idea is to make the choice of the initial assignment depend on the input clauses. We have to combine the modified algorithm with another randomized algorithm to beat the time bound in all cases. Our new combined algorithm finds a satisfying assignment for a satisfiable formula in O(1.3302^n) expected time. While this improvement of the time bound O(c^n) to c = 1.3302 may seem minor, it does at least show that the natural-looking barrier c = 4/3 for 3-SAT can in fact be beaten.

First, let us recall some basic definitions concerning k-SAT as well as Schöning's randomized algorithm for k-SAT.

Given n Boolean variables x1, ..., xn, an assignment to these variables is a vector a = (a1, ..., an) ∈ {0,1}^n. A clause C of length k is a disjunction C = l1 ∨ l2 ∨ ... ∨ lk of literals, where a literal is either a variable xi or its negation ¬xi, for some 1 ≤ i ≤ n. For a constant k, the problem k-SAT is defined as follows:

Input: A k-SAT formula F = C1 ∧ ... ∧ Cm, where each Ci is a clause of length at most k.
Problem: Is there an assignment that makes F true, i.e., a satisfying assignment for F?

Throughout this paper, we will use n for the number of variables and m for the number of clauses in the given input formula. We will only be concerned with the problem 3-SAT, and the notion "formula" will always refer to a 3-SAT formula. It can be assumed without loss of generality (and we will do so in the rest of the paper) that each clause has length exactly 3 and that no variable appears twice in the same clause. Furthermore, we may assume that no clause appears twice in the input formula; hence, the number of clauses is bounded by O(n^3), and it makes sense to estimate the running time in terms of n.

Schöning's algorithm is a randomized local search algorithm, described as program Search in Figure 1. The algorithm, as stated in the figure, does not terminate until a satisfying assignment is found. Hence, it runs forever if an unsatisfiable formula is given as input. Note, however, the following. The success probability of one repeat-iteration, i.e., the probability of finding a satisfying assignment (if it exists) during one repeat-iteration, is at least 1/T(n) for some T(n). (Theorem 1 below gives a bound on T(n).) This means that if an input formula is satisfiable, then the probability that none of N · T(n) repeat-iterations finds a satisfying assignment is at most (1 − 1/T(n))^{N·T(n)} < e^{−N}. Therefore, one can modify the algorithm so that it halts and rejects the input after executing the repeat-iteration N · T(n) times, and its (one-sided) error probability is then bounded by e^{−N}.

Subroutine RandomWalk(assignment a);
  a' := a;
  for 3n times do
    if F(a') = 1 then accept (and halt);
    C ← a clause of F that is not satisfied by a';
    Modify a' as follows: select a literal of C uniformly at random
      and flip the assignment to this literal;
  end-for;
end.

Program Search(3-SAT formula F);
  repeat
    Select an initial assignment a uniformly at random from {0,1}^n;
    RandomWalk(a);
  end-repeat;
end.

Fig. 1. Schöning's randomized local search algorithm
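For readers who prefer running code, the following is a minimal Python sketch of Figure 1 (our own translation, not from the paper). Clauses are encoded as lists of signed integers, with i standing for xi and -i for its negation; this encoding is our choice.

import random

def random_walk(formula, n, a):
    """One call of RandomWalk: perform up to 3n random repair flips.
    Returns a satisfying assignment (list indexed 1..n) or None."""
    a = list(a)
    for _ in range(3 * n):
        # literal l is satisfied iff the sign of l matches the bit a[|l|]
        unsat = [c for c in formula
                 if not any((a[abs(l)] == 1) == (l > 0) for l in c)]
        if not unsat:
            return a                      # F(a) = 1: accept
        clause = random.choice(unsat)     # any unsatisfied clause works
        lit = random.choice(clause)       # pick one of its literals ...
        a[abs(lit)] = 1 - a[abs(lit)]     # ... and flip that variable
    return None

def search(formula, n):
    """Program Search: restart RandomWalk from fresh uniform assignments.
    Loops forever on an unsatisfiable formula, as in Figure 1."""
    while True:
        a = [0] + [random.randint(0, 1) for _ in range(n)]  # index 0 unused
        result = random_walk(formula, n, a)
        if result is not None:
            return result

# Example: (x1 v x2 v x3) and (~x1 v x2 v ~x3)
print(search([[1, 2, 3], [-1, 2, -3]], 3))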


The following property of subroutine RandomWalk plays a key role in our discussion and was proved in [Sch99]. Here, and in the rest of this paper, let d(a, b) = #{i | a_i ≠ b_i} denote the Hamming distance between two binary vectors a and b.

Lemma 1. Let F be a satisfiable formula and a* be a satisfying assignment for F. For each assignment a, the probability that a satisfying assignment is found by RandomWalk(a) is at least (1/2)^{d(a,a*)} / p(n), where p is a polynomial.

In this paper, p(n) will always denote the polynomial given by the bound in Lemma 1. It can also be assumed that p(n) ≥ 1 for all n. The following Theorem 1 states the bound for 3-SAT obtained in [Sch99]. We show here how Lemma 1 leads to Theorem 1, since we will later apply Lemma 1 in a similar manner.

Theorem 1. For any satisfiable formula F on n variables, the success probability of one repeat-iteration is at least (3/4)^n / p(n) for some polynomial p. Thus, the expected number of repeat-iterations executed until some satisfying assignment is found is at most (4/3)^n · p(n).

Proof. Let a* be one of the satisfying assignments for F. We estimate the probability Pr{success} that some satisfying assignment is found in one repeat-iteration. The initial assignment a is chosen randomly; hence Xi = d(a*_i, a_i) is a random variable which is 0 if a* and a agree in the i-th component and 1 otherwise. We use E[Y] to denote the expected value of a random variable Y and obtain:

$$ \Pr\{\text{success}\} \;\ge\; \sum_{a \in \{0,1\}^n} \Pr\{a \text{ is selected as initial assignment}\} \cdot \left(\tfrac{1}{2}\right)^{d(a,a^*)} / p(n) $$
$$ = E\left[\left(\tfrac{1}{2}\right)^{d(a,a^*)}\right] / p(n) \;=\; E\left[\left(\tfrac{1}{2}\right)^{X_1 + \cdots + X_n}\right] / p(n) \;=\; \frac{1}{p(n)} \cdot E\left[\left(\tfrac{1}{2}\right)^{X_1}\right] \cdots E\left[\left(\tfrac{1}{2}\right)^{X_n}\right]. $$

Here, we have exploited that E[Y · Z] = E[Y] · E[Z] for independent random variables Y and Z. In the initial assignment, each bit a_i is set to 1 with probability 1/2; hence Xi is 1 or 0, each with probability 1/2, and we have

$$ E\left[\left(\tfrac{1}{2}\right)^{X_i}\right] \;=\; \frac{1}{2} \cdot \left(\tfrac{1}{2}\right)^0 + \frac{1}{2} \cdot \left(\tfrac{1}{2}\right)^1 \;=\; \frac{3}{4}. $$

We obtain Pr{success} ≥ (3/4)^n / p(n). The expected number of repeat-iterations is bounded by the inverse of the success probability Pr{success}; hence it is bounded by (4/3)^n · p(n). ⊓⊔
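To get a feeling for the bound (our own illustration, not part of the paper): for n = 100 variables, the expected number of repeat-iterations is about (4/3)^100, while exhaustive search may have to inspect up to 2^100 assignments:

$$ \left(\tfrac{4}{3}\right)^{100} \approx 3.1 \cdot 10^{12}, \qquad 2^{100} \approx 1.3 \cdot 10^{30}. $$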

2 Independent Clauses and the Basic Idea

We begin by introducing some notions. Two clauses C and C' are called independent if they have no variable in common. E.g., C = x1 ∨ x2 ∨ x3 and C' = x1 ∨ x5 ∨ x6 are not independent, since both contain the variable x1. For a formula F, a maximal independent clause set C is a subset of the clauses of F such that all clauses in C are (mutually) independent and no clause of F can be added to C without destroying this property.

If C is a maximal independent clause set for a formula F, then every clause C in F contains at least one variable that occurs in (some clause of) C; otherwise, C would not be a maximal independent clause set. This has the following consequence: if we assign constants to all the variables contained in the independent clauses, then, after the usual simplifications consisting of removing constants, we obtain a formula F* which is a 2-SAT formula, since every clause in F* has at most two literals. It is well known that there is a polynomial-time algorithm for checking the satisfiability of a 2-SAT formula [APT79] and finding a satisfying assignment if one exists. (A code sketch of this simplification step follows at the end of this section.)

This leads to the following: given a formula F and a maximal independent clause set with m̂ independent clauses C1, ..., C_m̂, we are able to find a satisfying assignment in time poly(n) · 7^m̂. Namely, for each of the m̂ independent clauses, we can run through all seven (of the eight) assignments that satisfy the clause. The remaining 2-SAT formula is tested in polynomial time for satisfiability. For a better understanding of what follows, we remark that this algorithm could also be replaced by a randomized one: set each of the independent clauses with probability 1/7 to one of its seven satisfying assignments, then solve the remaining 2-SAT problem. The success probability of this randomized algorithm is (1/7)^m̂; hence it decreases with m̂.

The basic idea

The basic idea behind our algorithm is as follows. If we are given an instance of 3-SAT where we find a maximal set of m̂ independent clauses with m̂ small, we get a small running time (or large success probability) from the algorithm just described. On the other hand, m̂ could be as large as n/3. But then we are able to improve the first phase of Schöning's algorithm. The idea is as follows: in the original algorithm, each variable is set to 0 with probability 1/2; the information given by the clauses is ignored completely. Although a clause C = x1 ∨ x2 ∨ x3 tells us that not all three of these variables should be set to zero simultaneously, the original initialization selects such an assignment with probability 1/8. The improvement is to initialize the variables in the independent clauses in groups of three such that an all-0 assignment is avoided in each group. The computation shows that the success probability of this modified version of Schöning's algorithm increases with m̂.
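To make the simplification step above concrete, here is a Python sketch (our own, with the same signed-integer clause encoding as before) of plugging a partial assignment into F. If the assignment covers all variables of a maximal independent clause set, every surviving clause has at most two literals, and the residual formula can be handed to the polynomial-time 2-SAT algorithm of [APT79], which we do not implement here.

def simplify(formula, assignment):
    """Plug a partial assignment (dict var -> bit) into the formula:
    drop clauses that are already satisfied and delete falsified
    literals from the remaining clauses."""
    residual = []
    for clause in formula:
        lits, satisfied = [], False
        for lit in clause:
            v = abs(lit)
            if v in assignment:
                if (assignment[v] == 1) == (lit > 0):
                    satisfied = True      # clause is true: discard it
                    break
            else:
                lits.append(lit)          # variable still unassigned
        if not satisfied:
            residual.append(lits)
    return residual

# Fixing x1, x2, x3 leaves a clause with at most two literals:
print(simplify([[1, 2, 3], [-1, 4, 5]], {1: 1, 2: 0, 3: 0}))  # [[4, 5]]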

3 The underlying algorithms

The algorithm starts by computing a maximal set of independent clauses. It should be clear that this can be done in polynomial time by a greedy algorithm which selects independent clauses until no further independent clause can be added. Let C1, ..., C_m̂ be the independent clauses thus chosen. By renaming variables and exchanging the roles of xi and ¬xi if necessary, we may assume that C1 = x1 ∨ x2 ∨ x3, C2 = x4 ∨ x5 ∨ x6, etc. Hence, the variables x1, ..., x_{3m̂} are those contained in the independent clauses, and x_{3m̂+1}, ..., x_n are the remaining variables.

For assigning constants to the variables in the independent clauses, we apply a randomized procedure called Ind-Clauses-Assign which depends on three parameters p1, p2, and p3. These describe the probabilities with which we choose one of the seven assignments for the three variables of one clause. The details of the procedure, which is used by all our randomized algorithms, are explained in Figure 2.
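A possible implementation of this greedy selection (our own sketch, same clause encoding as above):

def maximal_independent_clauses(formula):
    """Greedily collect clauses that share no variable with any
    previously chosen clause; the result is maximal because every
    skipped clause conflicts with a chosen one."""
    chosen, used = [], set()
    for clause in formula:
        variables = {abs(lit) for lit in clause}
        if used.isdisjoint(variables):
            chosen.append(clause)
            used |= variables
    return chosen

print(maximal_independent_clauses([[1, 2, 3], [-1, 4, 5], [4, 6, 7]]))
# [[1, 2, 3], [4, 6, 7]]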

Subroutine Ind-Clauses-Assign(p1, p2, p3)
# pi are probabilities with 3p1 + 3p2 + p3 = 1 #
  for C ∈ {C1, ..., C_m̂} do
    Assume that C = xi ∨ xj ∨ xk.
    Set the variables xi, xj, xk randomly in such a way that
    for a = (xi, xj, xk) the following holds:
      Pr{a = (0,0,1)} = Pr{a = (0,1,0)} = Pr{a = (1,0,0)} = p1
      Pr{a = (0,1,1)} = Pr{a = (1,0,1)} = Pr{a = (1,1,0)} = p2
      Pr{a = (1,1,1)} = p3.
  endfor;

Fig. 2. Subroutine Ind-Clauses-Assign
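In Python, the subroutine might look as follows (our sketch; as in the paper, we assume the variables have been renamed so that all literals of the independent clauses are positive):

import random

PATTERNS = [(0, 0, 1), (0, 1, 0), (1, 0, 0),   # one 1   -> probability p1
            (0, 1, 1), (1, 0, 1), (1, 1, 0),   # two 1s  -> probability p2
            (1, 1, 1)]                         # three 1s -> probability p3

def ind_clauses_assign(ind_clauses, p1, p2, p3, a):
    """Fill the entries of a (indexed by variable) for every variable
    occurring in the independent clauses, avoiding (0,0,0) per clause."""
    weights = [p1] * 3 + [p2] * 3 + [p3]       # 3*p1 + 3*p2 + p3 = 1
    for clause in ind_clauses:
        bits = random.choices(PATTERNS, weights=weights)[0]
        for lit, b in zip(clause, bits):
            a[abs(lit)] = b
    return a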

After Ind-Clauses-Assign is called, all variables x1, ..., x_{3m̂} are assigned constants.

Assume in the following that a* is an arbitrary, but fixed, satisfying assignment of the input formula. For each independent clause Ci, we can count how many of the variables in Ci are set to 1 by a*. Since a* satisfies all clauses, either one, two, or three of the variables in Ci are set to 1, for each i. For our analysis, let m1 (m2 and m3, respectively) be the number of independent clauses in which exactly one variable (two and three variables, respectively) is set to 1 by a*. Of course, we have m̂ = m1 + m2 + m3. Let us also abbreviate αi := mi/m̂. We then have α1 + α2 + α3 = 1.

We are now ready to describe the two randomized algorithms which are the ingredients of our 3-SAT algorithm.

Algorithm Red2 and its success probability

Algorithm Red2 is the generalization of the algorithm which checks all 7^m̂ assignments to the variables in the maximal independent clause set. It reduces a 3-SAT formula to a 2-SAT formula and is successful if the 2-SAT formula is satisfiable. The algorithm is described in Figure 3.

Algorithm Red2(q1, q2, q3):
# qi are probabilities with 3q1 + 3q2 + q3 = 1 #
  Ind-Clauses-Assign(q1, q2, q3);
  Simplify the resulting formula and start the polynomial-time 2-SAT algorithm;

Fig. 3. Algorithm Red2

Algorithm Red2 finds a satisfying assignment when the partial assignment to the variables x1, ..., x_{3m̂} (which satisfies the clauses C1, ..., C_m̂) can be extended to a complete satisfying assignment. This is the case, e.g., if the partial assignment agrees with a* on x1, ..., x_{3m̂}. The probability of this event is exactly q1^{m1} · q2^{m2} · q3^{m3}, as we show now. Let Cj be one of the m̂ clauses and let a* satisfy exactly i literals in Cj. The probability that algorithm Red2 assigns values to the three variables in Cj that agree with a* is qi. Multiplying over all clauses, we obtain the above probability. The following theorem states the bound with the parameters we need later on.

Theorem 2. Algorithm Red2(α1/3, α2/3, α3) has success probability at least

$$ \left(\frac{\alpha_1}{3}\right)^{m_1} \cdot \left(\frac{\alpha_2}{3}\right)^{m_2} \cdot \alpha_3^{m_3} \;=\; \left[ \left(\frac{\alpha_1}{3}\right)^{\alpha_1} \cdot \left(\frac{\alpha_2}{3}\right)^{\alpha_2} \cdot \alpha_3^{\alpha_3} \right]^{\hat m}. $$

Algorithm RW and its success probability

This algorithm is the improvement of the first phase of Schöning's random walk (RW) algorithm, in which an initial assignment is chosen. Instead of setting each variable xi to 1 with probability 1/2, the variables x1, ..., x_{3m̂} in the independent clauses are set in groups of three. The parameters p1, p2, and p3 describe the probability distribution according to which we set the variables. Algorithm RW is described in Figure 4, and its success probability is analyzed in the following theorem.

Algorithm RW(p1, p2, p3)
# pi are probabilities with 3p1 + 3p2 + p3 = 1 #
  Ind-Clauses-Assign(p1, p2, p3);
  Set the variables x_{3m̂+1}, ..., x_n independently of each other to 1 with probability 1/2.
  To the assignment a obtained in this way, apply RandomWalk(a).

Fig. 4. Algorithm RW
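Combining the pieces, one trial of algorithm RW might be sketched as follows (our own glue code, reusing random_walk and ind_clauses_assign from the earlier sketches):

import random

def rw_trial(formula, ind_clauses, n, p1, p2, p3):
    """One trial of RW: grouped initialization on the independent-clause
    variables, unbiased coin flips elsewhere, then the random walk."""
    a = [0] * (n + 1)
    covered = {abs(lit) for c in ind_clauses for lit in c}
    ind_clauses_assign(ind_clauses, p1, p2, p3, a)   # grouped init
    for v in range(1, n + 1):
        if v not in covered:
            a[v] = random.randint(0, 1)              # probability 1/2 each
    return random_walk(formula, n, a)                # None on failure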

Theorem 3. The success probability of algorithm RW is at least

$$ P_{RW} := \left(\frac{3}{4}\right)^{n-3\hat m} \cdot \left(\frac{3p_1}{2} + \frac{9p_2}{8} + \frac{p_3}{4}\right)^{m_1} \cdot \left(\frac{3p_2}{2} + \frac{p_3}{2} + \frac{9p_1}{8}\right)^{m_2} \cdot \left(p_3 + \frac{3p_2}{2} + \frac{3p_1}{4}\right)^{m_3} \cdot \frac{1}{p(n)}. $$

Proof. By Lemma 1, the success probability of RandomWalk(a), where a has Hamming distance d from the satisfying assignment a* = (a*_1, ..., a*_n), is at least (1/2)^d / p(n). As in the proof of Theorem 1, we obtain for the success probability:

$$ \Pr\{\text{success}\} \;\ge\; E\left[\left(\tfrac{1}{2}\right)^{d(a,a^*)}\right] / p(n). $$

This time, it does not hold that all variables are fixed independently of each other. But the only dependence is between variables which are in the same clause Ci, where 1 ≤ i ≤ m̂. Define X_{1,2,3} to be the random variable which is the Hamming distance between (a1, a2, a3) and (a*_1, a*_2, a*_3). Define X_{4,5,6} etc. similarly, and let Xi = d(a_i, a*_i) for i > 3m̂. We have

$$ d(a,a^*) \;=\; X_{1,2,3} + X_{4,5,6} + \cdots + X_{3\hat m-2,\,3\hat m-1,\,3\hat m} + X_{3\hat m+1} + \cdots + X_n, $$

hence

$$ \Pr\{\text{success}\} \;\ge\; \frac{1}{p(n)} \cdot E\left[\left(\tfrac{1}{2}\right)^{X_{1,2,3}}\right] \cdot E\left[\left(\tfrac{1}{2}\right)^{X_{4,5,6}}\right] \cdots E\left[\left(\tfrac{1}{2}\right)^{X_{3\hat m-2,\,3\hat m-1,\,3\hat m}}\right] \cdot \prod_{i=3\hat m+1}^{n} E\left[\left(\tfrac{1}{2}\right)^{X_i}\right]. $$

As in the proof of Theorem 1, we have E[(1/2)^{X_i}] = 3/4 for i ≥ 3m̂ + 1, hence

$$ \prod_{i=3\hat m+1}^{n} E\left[\left(\tfrac{1}{2}\right)^{X_i}\right] \;=\; \left(\tfrac{3}{4}\right)^{n-3\hat m}. $$

We show how to analyze E[(1/2)^{X_{1,2,3}}]; the other terms E[(1/2)^{X_{4,5,6}}] etc. are analyzed in the same way. It turns out that E[(1/2)^{X_{1,2,3}}] depends on how many ones (a*_1, a*_2, a*_3) contains. We have to analyze the three possible cases.

Case a*_1 + a*_2 + a*_3 = 3: Then (a*_1, a*_2, a*_3) = (1,1,1). The algorithm chooses (x1, x2, x3) = (1,1,1) with probability p3, and the Hamming distance to (a*_1, a*_2, a*_3) then is zero. Likewise, the algorithm sets (x1, x2, x3) with probability 3p2 to one of (0,1,1), (1,0,1), and (1,1,0), which leads to Hamming distance 1, etc. Thus, by the definition of the expected value, we obtain

$$ E\left[\left(\tfrac{1}{2}\right)^{X_{1,2,3}}\right] = \left(\tfrac{1}{2}\right)^0 p_3 + \left(\tfrac{1}{2}\right)^1 3p_2 + \left(\tfrac{1}{2}\right)^2 3p_1 = p_3 + \tfrac{3}{2}p_2 + \tfrac{3}{4}p_1. $$

Likewise, we can analyze the other two cases.

Case a*_1 + a*_2 + a*_3 = 1:

$$ E\left[\left(\tfrac{1}{2}\right)^{X_{1,2,3}}\right] = \left(\tfrac{1}{2}\right)^0 p_1 + \left(\tfrac{1}{2}\right)^1 2p_2 + \left(\tfrac{1}{2}\right)^2 (2p_1 + p_3) + \left(\tfrac{1}{2}\right)^3 p_2 = \tfrac{p_3}{4} + \tfrac{9}{8}p_2 + \tfrac{3}{2}p_1. $$

Case a*_1 + a*_2 + a*_3 = 2:

$$ E\left[\left(\tfrac{1}{2}\right)^{X_{1,2,3}}\right] = \left(\tfrac{1}{2}\right)^0 p_2 + \left(\tfrac{1}{2}\right)^1 (p_3 + 2p_1) + \left(\tfrac{1}{2}\right)^2 2p_2 + \left(\tfrac{1}{2}\right)^3 p_1 = \tfrac{p_3}{2} + \tfrac{3}{2}p_2 + \tfrac{9}{8}p_1. $$

Now it becomes clear why we defined the values m1, m2, m3: they count for how many clauses each of the three cases holds. We obtain the bound on Pr{success} stated in the theorem. ⊓⊔
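The three case formulas are easy to double-check by exact enumeration. The following sketch (ours) computes E[(1/2)^{X_{1,2,3}}] directly from the distribution of Ind-Clauses-Assign; with the parameters (p1, p2, p3) = (4/21, 2/21, 3/21) used in the next section, all three cases evaluate to 3/7.

from fractions import Fraction
from itertools import product

def group_expectation(p1, p2, p3, target):
    """E[(1/2)^d], where d is the Hamming distance between a triple
    drawn from the Ind-Clauses-Assign distribution and the fixed
    triple target."""
    exp = Fraction(0)
    for a in product((0, 1), repeat=3):
        ones = sum(a)
        if ones == 0:
            continue                       # (0,0,0) is never chosen
        prob = (p1, p2, p3)[ones - 1]      # p1, p2 or p3 by number of 1s
        d = sum(x != y for x, y in zip(a, target))
        exp += prob * Fraction(1, 2) ** d
    return exp

p1, p2, p3 = Fraction(4, 21), Fraction(2, 21), Fraction(3, 21)
for target in [(1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    print(target, group_expectation(p1, p2, p3, target))   # 3/7 each time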

4 Combining the algorithms

The simplest way to improve upon the O(p(n) · (4/3)^n) bound of Schöning's 3-SAT algorithm is as follows: first call Red2(1/7, 1/7, 1/7), then call RW(4/21, 2/21, 3/21). The success probability of this combined algorithm is at least Ω(1.330258^{-n}), which we prove now. First, observe that the success probability of algorithm Red2(1/7, 1/7, 1/7) is at least (1/7)^m̂ ≥ (1/7)^m̂ / p(n). On the other hand, by Theorem 3, with the chosen parameters, the success probability of algorithm RW is at least

$$ \left(\frac{3}{4}\right)^{n-3\hat m} \cdot \left(\frac{3}{7}\right)^{\hat m} \cdot \frac{1}{p(n)} \;=\; \left(\frac{3}{4}\right)^{n} \cdot \left(\frac{64}{63}\right)^{\hat m} \cdot \frac{1}{p(n)}. $$

The combined algorithm is successful if one of the two randomized algorithms is successful; thus the success probability of the combined algorithm is at least as large as the larger of the two success probabilities. We observe that the success probability of Red2 decreases with m̂, while the success probability of algorithm RW increases with m̂; hence it suffices to compute the m̂ where both are equal. This is the case for

$$ \frac{n}{\hat m} \;=\; \frac{\log(9/64)}{\log(3/4)} \;\approx\; 6.8188417, \quad \text{i.e.,} \quad \hat m \approx 0.1466525 \cdot n. $$

This leads to a success probability of at least Ω(1.330258^{-n}), which yields a randomized algorithm for 3-SAT with expected running time O(1.330258^n). In the following section, we show how this running time can be improved further.
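The arithmetic behind the crossover point can be reproduced in a few lines (our sketch):

import math

# Equality of the two bounds:  (1/7)^mhat = (3/4)^n * (64/63)^mhat,
# which rearranges to  n/mhat = log(9/64) / log(3/4).
ratio = math.log(9 / 64) / math.log(3 / 4)
print(ratio)                 # ~6.8188417, i.e. mhat ~ 0.1466525 * n

# At that point both bounds equal (1/7)^(n/ratio) = c^(-n) with:
c = 7 ** (1 / ratio)
print(c)                     # ~1.330258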

5 Refinements

It would be nice if we knew the values m1, m2, m3 (or, equivalently, α1, α2, α3) in advance, because we could then fine-tune our algorithms Red2 and RW. E.g., if we knew that (α1, α2, α3) = (1, 0, 0), then we could improve upon algorithm Red2 by just checking all 3^m̂ possible assignments to the variables in the independent clauses. Unfortunately, the values α1, α2, α3 are not known in advance. But since we know m̂ and since m1 + m2 + m3 = m̂, there are only polynomially in n many possibilities for α1, α2, α3 (namely (m̂+1)(m̂+2)/2 = O(n^2) triples). We proceed as follows: for each of the possible values of α1, α2, α3, choose the parameters which maximize the success probabilities of algorithm Red2 and algorithm RW, and call those two algorithms with the chosen parameters. The running time of this algorithm is still polynomial. The success probability of the whole algorithm is at least as large as the largest of the success probabilities of all the algorithms we called.

The idea is simple, but the computations are a little tedious. We sketch in the following that one can thus achieve a success probability of at least Ω(1.330193^{-n}). It is rather straightforward to see that one should call Red2(α1/3, α2/3, α3) to maximize the success probability of Red2. Recall (Theorem 2) that the success probability then is at least

$$ \left[ \underbrace{\left(\frac{\alpha_1}{3}\right)^{\alpha_1} \cdot \left(\frac{\alpha_2}{3}\right)^{\alpha_2} \cdot \alpha_3^{\alpha_3}}_{Z(\alpha_1,\alpha_2,\alpha_3)} \right]^{\hat m} \;=\; Z(\alpha_1,\alpha_2,\alpha_3)^{\hat m}. $$

Thus, P_Red2 := (1/p(n)) · Z(α1, α2, α3)^m̂ is a lower bound for the success probability of Red2. This bound is decreasing in m̂. We recall that the success probability of algorithm RW is at least P_RW, which is given by Theorem 3. Using that 3p1 + 3p2 + p3 = 1, i.e., p2 = (1 − 3p1 − p3)/3, we rewrite P_RW as

$$ P_{RW} \;=\; \frac{1}{p(n)} \cdot \left(\frac{3}{4}\right)^{n-3\hat m} \cdot \left[\, T_1(p_1,p_3)^{\alpha_1} \cdot T_2(p_1,p_3)^{\alpha_2} \cdot T_3(p_1,p_3)^{\alpha_3} \,\right]^{\hat m} $$

with

$$ T_1(p_1,p_3) = \frac{3}{8} - \frac{p_3}{8} + \frac{3}{8}\,p_1, \qquad T_2(p_1,p_3) = \frac{1}{2} - \frac{3}{8}\,p_1, \qquad T_3(p_1,p_3) = \frac{1}{2} + \frac{p_3}{2} - \frac{3}{4}\,p_1. $$

Since we select the parameters p1, p2, and p3 depending on α1, α2, α3, we can consider the following function as a function depending on the αi only:

$$ T(\alpha_1,\alpha_2,\alpha_3) := T_1(p_1,p_3)^{\alpha_1} \cdot T_2(p_1,p_3)^{\alpha_2} \cdot T_3(p_1,p_3)^{\alpha_3}. $$

We will choose p1 and p3 in such a way that

$$ T(\alpha_1,\alpha_2,\alpha_3) \;\ge\; \left(\frac{3}{4}\right)^3, \quad \text{i.e.,} \quad \log_{3/4} T(\alpha_1,\alpha_2,\alpha_3) \le 3. $$

This guarantees that P_RW is monotone increasing in m̂. As in the last section, we have to analyze for which m̂ both success probabilities are equal. Equality of P_Red2 and P_RW is achieved when

$$ \frac{Z(\alpha_1,\alpha_2,\alpha_3)}{T(\alpha_1,\alpha_2,\alpha_3)} \;=\; \left(\frac{3}{4}\right)^{\frac{n}{\hat m}-3}, \quad \text{i.e.,} \quad \frac{n}{\hat m} \;=\; 3 + \log_{3/4} \frac{Z(\alpha_1,\alpha_2,\alpha_3)}{T(\alpha_1,\alpha_2,\alpha_3)}. $$

For a value of m̂ where this equality holds, we have

$$ P_{Red2} \;=\; \frac{1}{p(n)} \cdot Z(\alpha_1,\alpha_2,\alpha_3)^{\,n \,/\, \left[\log_{3/4}\left(Z(\alpha_1,\alpha_2,\alpha_3)/T(\alpha_1,\alpha_2,\alpha_3)\right) + 3\right]}. $$

Written differently:

$$ \frac{1}{p(n)} \cdot \left(\frac{3}{4}\right)^{F(\alpha_1,\alpha_2,\alpha_3) \cdot n}, \quad \text{where} \quad F(\alpha_1,\alpha_2,\alpha_3) = \frac{1}{1 + \dfrac{3 - \log_{3/4} T(\alpha_1,\alpha_2,\alpha_3)}{\log_{3/4} Z(\alpha_1,\alpha_2,\alpha_3)}}. \tag{1} $$

This lower bound for the success probability is increasing in T(α1, α2, α3) and Z(α1, α2, α3). In the following, we suggest a choice of p1 and p3 which depends only on the value of α1. We are aware that there are better choices which also take α2 (and α3) into account, but the gain in probability is only minor. For simplicity of exposition, we suggest the following choice:

$$ p_3 := \begin{cases} 0, & \text{if } 0 \le \alpha_1 \le 0.5, \\ 2\alpha_1 - 1, & \text{if } 0.5 \le \alpha_1 \le 0.6, \\ 1/5, & \text{else,} \end{cases} \qquad \text{and} \qquad p_1 := \frac{4}{3} \cdot p_3. $$

Since 3p1 + p3 ≤ 1 by our choice, p2 := (1 − 3p1 − p3)/3 is a valid choice. Our choice leads to the following values of T1, T2, and T3:

$$ T_1(p_1,p_3) = \begin{cases} 3/8, & 0 \le \alpha_1 \le 0.5, \\ 3\alpha_1/4, & 0.5 \le \alpha_1 \le 0.6, \\ 9/20, & 0.6 \le \alpha_1 \le 1, \end{cases} \qquad T_2(p_1,p_3) = T_3(p_1,p_3) = \begin{cases} 1/2, & 0 \le \alpha_1 \le 0.5, \\ 1 - \alpha_1, & 0.5 \le \alpha_1 \le 0.6, \\ 2/5, & 0.6 \le \alpha_1 \le 1. \end{cases} $$

This yields the following lower bounds for T(α1, α2, α3):

$$ T(\alpha_1,\alpha_2,\alpha_3) \;\ge\; \begin{cases} (3/8)^{\alpha_1} \cdot (1/2)^{1-\alpha_1}, & 0 \le \alpha_1 \le 0.5, \\ (3/4)^{\alpha_1} \cdot \alpha_1^{\alpha_1} \cdot (1-\alpha_1)^{1-\alpha_1}, & 0.5 \le \alpha_1 \le 0.6, \\ (9/20)^{\alpha_1} \cdot (2/5)^{1-\alpha_1}, & 0.6 \le \alpha_1 \le 1. \end{cases} $$

On the other hand, for fixed α1, i.e., fixed 1 − α1 = α2 + α3, the function Z is minimal for α2 = (3/4) · (1 − α1) and α3 = (1/4) · (1 − α1), hence

$$ Z(\alpha_1,\alpha_2,\alpha_3) \;\ge\; \left(\frac{\alpha_1}{3}\right)^{\alpha_1} \cdot \left(\frac{1-\alpha_1}{4}\right)^{1-\alpha_1}. $$

Given the above lower bounds for Z and T for all values of α1, one can now verify, either by a sequence of calculations or by numerically evaluating the function F(α1, α2, α3) from equation (1), that the lower bound for the success probability is at least 1.3301924^{-n} for all 0 ≤ α1 ≤ 1.
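A small numerical check of this claim (our own sketch; it evaluates equation (1) on a grid of α1 values using the piecewise bounds above, so slight rounding at the grid resolution is to be expected):

import math

LOG34 = math.log(3 / 4)

def expected_time_base(a1):
    """Evaluate (4/3)^F for the lower bounds on T and Z above, with the
    worst-case split alpha2 = 3(1-a1)/4, alpha3 = (1-a1)/4."""
    if a1 <= 0.5:
        T = (3 / 8) ** a1 * (1 / 2) ** (1 - a1)
    elif a1 <= 0.6:
        T = (3 / 4) ** a1 * a1 ** a1 * (1 - a1) ** (1 - a1)
    else:
        T = (9 / 20) ** a1 * (2 / 5) ** (1 - a1)
    Z = (a1 / 3) ** a1 * ((1 - a1) / 4) ** (1 - a1)   # 0**0 == 1 in Python
    F = 1 / (1 + (3 - math.log(T) / LOG34) / (math.log(Z) / LOG34))
    return (4 / 3) ** F

worst = max(expected_time_base(i / 10000) for i in range(10001))
print(worst)   # maximum stays near 1.3302, in line with the 1.3301924 bound

We thus have the following theorem.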

Theorem 4. For any satisfiable formula F on n variables, the combined algorithm finds a satisfying assignment for F in expected time at most O(1.3302^n).

References

[APT79] B. Aspvall, M.F. Plass, and R.E. Tarjan, A linear-time algorithm for testing the truth of certain quantified Boolean formulas, Information Processing Letters, 8(3), 121–123, 1979.
[DGHS00] E. Dantsin, A. Goerdt, E.A. Hirsch, and U. Schöning, Deterministic algorithms for k-SAT based on covering codes and local search, in Proc. 27th International Colloquium on Automata, Languages and Programming (ICALP'00), Lecture Notes in Computer Science 1853, 236–247, 2000.
[Hir00a] E.A. Hirsch, New worst-case upper bounds for SAT, Journal of Automated Reasoning, 24(4), 397–420, 2000.
[Kul99] O. Kullmann, New methods for 3-SAT decision and worst-case analysis, Theoretical Computer Science, 223(1/2), 1–72, 1999.
[MS85] B. Monien and E. Speckenmeyer, Solving satisfiability in less than 2^n steps, Discrete Applied Mathematics, 10, 287–295, 1985.
[PPSZ98] R. Paturi, P. Pudlák, M.E. Saks, and F. Zane, An improved exponential-time algorithm for k-SAT, in Proc. of the 39th Ann. IEEE Sympos. on Foundations of Computer Science (FOCS'98), IEEE, 628–637, 1998.
[Sch99] U. Schöning, A probabilistic algorithm for k-SAT and constraint satisfaction problems, in Proc. of the 40th Ann. IEEE Sympos. on Foundations of Computer Science (FOCS'99), IEEE, 410–414, 1999.

