Lower Bounds for Randomized Read-k-Times Branching Programs (Extended Abstract)

Martin Sauerhoff*
Fachbereich Informatik, Universität Dortmund, 44221 Dortmund, Germany
e-mail: [email protected]

Abstract. Randomized branching programs are a probabilistic model of computation defined in analogy to the well-known probabilistic Turing machines. In this paper, we contribute to the complexity theory of randomized read-k-times branching programs. We first consider the case k = 1 and present a function which has nondeterministic read-once branching programs of polynomial size, but for which every randomized read-once branching program with two-sided error at most 27/128 is exponentially large. The same function also exhibits an exponential gap between the randomized read-once branching program sizes for different constant worst-case errors, which shows that there is no "probability amplification" technique for read-once branching programs which allows the error to be decreased to an arbitrarily small constant by iterating probabilistic computations. Our second result is a lower bound for randomized read-k-times branching programs with two-sided error, where k > 1 is allowed. The bound is exponential for k ≤ c·log n, c an appropriate constant. Randomized read-k-times branching programs are thus one of the most general types of branching programs for which an exponential lower bound result could be established.


1 Introduction

Branching programs are a theoretically and practically interesting data structure for the representation of Boolean functions. In complexity theory, among other problems, lower bounds on the size of branching programs for explicitly defined functions and the relations between the various branching program models are investigated.

A branching program (BP) on the variable set {x1, ..., xn} is a directed acyclic graph with one source and two sinks, the latter labelled by the constants 0 and 1. Each non-sink node is labelled by a variable xi and has exactly two outgoing edges, labelled by 0 and 1. This graph represents a Boolean function f: {0,1}^n → {0,1} in the following way. To compute f(a) for some input a ∈ {0,1}^n, start at the source node. At a non-sink node labelled by xi, check the value of this variable and follow the edge which is labelled by this value. Iterate this until a sink node is reached. The value of f on input a is the value of the reached sink. The size of a branching program G is the number of its non-sink nodes and is denoted by |G|.

* This work has been supported by DFG grants We 1066/7-3 and We 1066/8-1.
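The evaluation procedure just described is straightforward to implement. The following minimal sketch uses a hypothetical dict-based encoding (not from the paper) and evaluates a small branching program for x0 AND x1:

```python
# Minimal sketch of branching-program evaluation (illustrative encoding,
# not from the paper). Inner nodes are ("test", var_index, succ0, succ1);
# sinks are ("sink", constant).

def eval_bp(nodes, source, a):
    """Follow edges from the source according to input a until a sink."""
    node = nodes[source]
    while node[0] == "test":
        _, i, succ0, succ1 = node
        node = nodes[succ1] if a[i] else nodes[succ0]
    return node[1]

# A branching program for f(x0, x1) = x0 AND x1.
bp = {
    "s":    ("test", 0, "zero", "t"),
    "t":    ("test", 1, "zero", "one"),
    "zero": ("sink", 0),
    "one":  ("sink", 1),
}

for a in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert eval_bp(bp, "s", a) == (a[0] & a[1])
```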

Note that an edge of a branching program can be regarded as an assignment to a variable, and each path corresponds to a sequence of assignments of variables. A read-k-times branching program is a branching program with the restriction that on each path from the source to a sink each variable is allowed to be tested at most k times. This model is sometimes termed syntactic read-k-times BP, in contrast to the "non-syntactic" variant, where this restriction is imposed only on each consistent path from the source to a sink (a path is called consistent if all assignments of variables on it are consistent). In this paper, we only consider syntactic read-k-times BPs. For syntactic read-k-times BPs, exponential lower bounds have been proved independently by Okolnishnikova [11] for k ≤ c·log n / log log n, c < 1 arbitrarily chosen, and by Borodin, Razborov and Smolensky [4], even for nondeterministic syntactic read-k-times BPs and k ≤ c·log n, c an appropriate constant.

The theory of read-once branching programs (i.e., the case k = 1) is especially well understood. This is the branching program model for which the first exponential lower bounds were proved (see [13] for an overview). OBDDs (ordered binary decision diagrams), introduced by Bryant [5], are restricted read-once branching programs which have turned out to be extremely useful in practice. An OBDD is a read-once branching program with an additional ordering of the variables: on each path from the source to the sinks, the variables have to appear in the prescribed order.

In this paper we are concerned with randomized branching programs, i.e., branching programs with additional "coin-tossing nodes." We will give a formal definition of this model in the next section. In the context of Turing machines, randomized models have been studied since the introductory work of Gill [7].
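To make the syntactic restriction concrete, here is a small sketch (again using a hypothetical dict encoding, not the paper's) that computes, by enumerating all source-to-sink paths, the maximum number of times any single variable is tested on one path; the program is syntactically read-k-times iff this value is at most k:

```python
# Sketch: check the syntactic read-k property by exhaustive path enumeration
# (fine for tiny programs; illustrative only). Node encoding:
# ("test", var_index, succ0, succ1) for inner nodes, ("sink", c) for sinks.

def max_tests_per_path(nodes, source):
    """Maximum, over all source-to-sink paths and all variables, of the
    number of tests of one variable on a single path."""
    best = 0
    stack = [(source, {})]                 # (node name, test counts on path)
    while stack:
        name, counts = stack.pop()
        node = nodes[name]
        if node[0] == "sink":
            best = max(best, max(counts.values(), default=0))
            continue
        _, i, succ0, succ1 = node
        new = dict(counts)
        new[i] = new.get(i, 0) + 1
        stack.append((succ0, new))
        stack.append((succ1, new))
    return best

bp = {
    "s":    ("test", 0, "t", "t"),
    "t":    ("test", 0, "zero", "one"),    # x0 tested twice on every path
    "zero": ("sink", 0),
    "one":  ("sink", 1),
}
assert max_tests_per_path(bp, "s") == 2    # read-2, but not read-once
```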
Clarifying the relations of the respective complexity classes to each other and to the polynomial hierarchy is among the famous open problems in complexity theory. In spite of this, these questions could be resolved for some restricted models of computation, most importantly perhaps communication protocols (see [3], [12]). By analyzing these restricted models we hope to improve our tools for proving lower bounds and thus also to gain deeper insight into the structure of the more general models. It is therefore natural to ask what can be done for randomized variants of restricted branching programs.

Ablayev and Karpinski [2] made the first step by presenting a function for which randomized OBDDs are exponentially smaller than their deterministic counterparts. Lower bounds for randomized OBDDs with two-sided error have been proved by Ablayev [1] and the author [14] using tools from communication complexity theory. Our present work goes a step further and attacks the lower bound problem for the more general model of randomized read-once and even randomized read-k-times branching programs.

We first deal with the question whether nondeterminism is more powerful than randomness or vice versa for read-once branching programs. We show that the two concepts are incomparable if the error allowed for the randomized BPs is not too large, which partially answers a question raised in [8]. As for Turing machines, we can decrease the error of randomized OBDDs and randomized (general) branching programs below an arbitrarily small constant by iteration. We will prove that there is no such "probability amplification" technique for read-once branching programs.

Finally, we prove the first exponential lower bound for randomized read-k-times branching programs, where k < c·log n, c an appropriate constant.

We now give an overview of the rest of the paper. In Section 2, we formally define randomized branching programs. In Section 3, we present the proof technique which will be used for our main results. Section 4 deals with the question of "nondeterminism versus randomness" for read-once branching programs, and Section 5 contains the lower bound result for randomized read-k-times branching programs.

2 Definitions and Basic Facts

In this section, we give the definitions of randomized branching programs and nondeterministic branching programs.

Definition 1. A randomized branching program G is, syntactically, a branching program with two disjoint sets of variables x1, ..., xn and z1, ..., zr. We will call the latter "stochastic" variables. By the usual semantics for deterministic branching programs (defined in the introduction), G represents a function g on n + r variables. Now we introduce an additional probabilistic semantics for G. We say that G as a randomized branching program represents a function f: {0,1}^n → {0,1} with

– one-sided error at most ε, 0 ≤ ε < 1, if for all x ∈ {0,1}^n it holds that
  Pr{g(x, z) = 0} = 1, if f(x) = 0;
  Pr{g(x, z) = 1} ≥ 1 − ε, if f(x) = 1;
– two-sided error at most ε, 0 ≤ ε < 1/2, if for all x ∈ {0,1}^n it holds that Pr{g(x, z) = f(x)} ≥ 1 − ε.

In these expressions, z is an assignment to the stochastic variables which is chosen according to the uniform distribution on {0,1}^r.

A randomized read-k-times BP is a randomized branching program with the restriction that on each path from the source to a sink, each variable xi and each variable zi is tested at most k times. For a randomized OBDD, an ordering on the variables x1, ..., xn and z1, ..., zr is given.

Definition 2. A nondeterministic branching program G has the same syntax as described for randomized branching programs in the previous definition. Again, let g be the function on n + r variables computed by G as a deterministic branching program. Then we say that G as a nondeterministic branching program computes a function f: {0,1}^n → {0,1} if for all x ∈ {0,1}^n
  Pr{g(x, z) = 0} = 1, if f(x) = 0;
  Pr{g(x, z) = 1} > 0, if f(x) = 1;
where z is an assignment to the stochastic variables chosen according to the uniform distribution on {0,1}^r. This is equivalent to other definitions of nondeterministic branching programs, e.g., that of Meinel [10]. Definitions of nondeterministic read-k-times BPs and nondeterministic OBDDs are derived from this definition in the same way as done for the randomized types above.
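Since the stochastic variables are uniformly distributed, the error of a randomized BP on a fixed input can be computed exactly by enumerating all 2^r assignments to z. A small sketch (the function g below is an arbitrary illustrative stand-in, not a branching program from the paper):

```python
from itertools import product

# Sketch: exact worst-case two-sided error of a randomized computation g
# by enumerating all assignments to the r stochastic variables (uniform).

def two_sided_error(g, f, n, r):
    """Worst-case probability over inputs x that g(x, z) != f(x)."""
    worst = 0.0
    for x in product((0, 1), repeat=n):
        wrong = sum(g(x, z) != f(x) for z in product((0, 1), repeat=r))
        worst = max(worst, wrong / 2 ** r)
    return worst

# Stand-in example: f = OR of two bits; g uses one coin toss z0 and errs
# only on x = (0, 1) when the coin shows 0.
f = lambda x: x[0] | x[1]
g = lambda x, z: x[0] | (x[1] & z[0])

assert two_sided_error(g, f, n=2, r=1) == 0.5
```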

In analogy to the well-known complexity classes for Turing machines, let RP_ε-BPk be the class of sequences of functions computable by polynomial size randomized read-k-times branching programs with one-sided error at most ε, ε < 1. Let BPP_ε-BPk be the class of sequences of functions computable by polynomial size randomized read-k-times branching programs with two-sided error at most ε, ε < 1/2. Furthermore, let

  RP-BPk := ⋃_{ε ∈ [0,1)} RP_ε-BPk,  and  BPP-BPk := ⋃_{ε ∈ [0,1/2)} BPP_ε-BPk.

Analogous classes can be defined for randomized OBDDs. Finally, for each of the considered complexity classes C, let co-C be the class of sequences of functions (fn) for which (¬fn) ∈ C. As for Turing machines, it holds that RP-BPk ⊆ NP-BPk.

We can also adapt the well-known technique of iterating probabilistic computations to improve the error probability of randomized branching programs. The following can be proved by using standard construction techniques for branching programs (see [14] for more details and a similar theorem for OBDDs).

Lemma 3 (Probability amplification). Let G be a randomized read-k-times BP for a function f: {0,1}^n → {0,1} with two-sided error at most ε ∈ [0, 1/2). Let 0 ≤ ε′ ≤ ε. Then a randomized read-(mk)-times BP G′ for f with two-sided error less than ε′ can be constructed which has size |G′| = O(m·|G|), with m = O(log((ε′)^{−1}) · (1/2 − ε)^{−2}).

Note that this "probability amplification technique" only allows the error to be decreased at the cost of increasing the number of tests of variables.
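The amplification behind Lemma 3 is the usual majority vote over m independent repetitions. As an illustration of the underlying probability calculation (not of the paper's BP construction), the residual error of a majority vote over m runs of a computation with two-sided error ε is a binomial tail, and it drops quickly with m:

```python
from math import comb

# Sketch: residual two-sided error after taking the majority of m
# independent runs, each wrong with probability at most eps.

def majority_error(eps, m):
    """P[at least ceil(m/2) of m independent runs are wrong]."""
    return sum(comb(m, i) * eps ** i * (1 - eps) ** (m - i)
               for i in range((m + 1) // 2, m + 1))

eps = 0.3
errs = [majority_error(eps, m) for m in (1, 5, 15, 45)]
# The error decreases monotonically in m and falls below any constant
# eps' > 0 for m = O(log(1/eps') * (1/2 - eps)^(-2)), as in Lemma 3.
assert all(a > b for a, b in zip(errs, errs[1:]))
assert errs[-1] < 0.01
```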

3 A Lower Bound Technique for Randomized Read-k-Times BPs

In this section, we describe a proof technique that allows us to establish exponential lower bounds on the size of randomized read-k-times branching programs. We extend a technique of Borodin, Razborov and Smolensky [4] which yields lower bounds on the size of nondeterministic read-k-times BPs. The main idea is to relate the number of nodes of a branching program to the number of certain subsets of the input set. As Borodin, Razborov and Smolensky, we describe the technique for a generalized variant of branching programs, called s-way branching programs, which use s-valued variables instead of Boolean ones. For the whole section, let S be a finite set and s := |S|.

Definition 4. An s-way branching program on the variable set {x1, ..., xn} is a directed acyclic multigraph which has one source and two sinks, the latter labelled by the constants 0 and 1. Each non-sink node is labelled by a variable xi and has exactly s outgoing edges, labelled by "1" to "s."

The semantics of an s-way BP is an obvious generalization of the semantics of 2-way BPs. We omit explicit definitions of read-k-times s-way BPs and randomized s-way BPs, since these are analogous to the 2-way case. Also note that Lemma 3 holds in an analogous form for s-way BPs. Next we introduce the notion of rectangles, which is central to our proof technique. This definition is from [4].

Definition 5 ((k,p)-Rectangle). Let X be a set of variables, n := |X|. Let k be an integer and 1 ≤ p ≤ n. Let sets X1, ..., X_{kp} ⊆ X be given with

(1) X1 ∪ ... ∪ X_{kp} = X and |Xi| ≤ ⌈n/p⌉, for i = 1, ..., kp;
(2) each variable from X appears in at most k of the sets Xi.

Let R ⊆ S^n be given. If there are functions fi: S^n → {0,1} depending only on the variables from Xi such that for the characteristic function f_R: S^n → {0,1} of R (with f_R(x) = 1 iff x ∈ R) it holds that f_R = f1 ∧ f2 ∧ ... ∧ f_{kp}, then we call R a (k,p)-rectangle in S^n (with respect to the sets X1, ..., X_{kp}).

Notation: We will regard rectangles as sets or as characteristic functions, depending on what is more convenient, and we will use the same name for the set as well as for its characteristic function.

By letting k = 1 and p = 2 we obtain the rectangles considered in communication complexity theory, which we will call 2-dimensional rectangles in the following. In this special case, we have a balanced partition (X1, X2) of the variable set X, i.e., it holds that ||X1| − |X2|| ≤ 1 and X1 ∪ X2 = X, X1 ∩ X2 = ∅. A rectangle R with respect to this partition can be written as R = A × B, where A ⊆ 2^{X1} and B ⊆ 2^{X2}.

Our goal is to map the "complicated" structure of a branching program to a combinatorial representation based on rectangles which is "simpler" to analyze. The desired type of representation is described by the definition below.

Definition 6. Let X be a set of variables, n := |X|, and let k be an integer and 1 ≤ p ≤ n. A function φ: S^n → {0,1} is called a step function (with respect to X, k and p) if there is a partition of S^n into (k,p)-rectangles R1, ..., Rm (where the respective sets X1, ..., X_{kp} ⊆ X from Def. 5 may be different for all Ri) and constants c1, ..., cm ∈ {0,1} such that φ(x) = ci for all x ∈ Ri, i = 1, ..., m.

For a step function φ, we call the least number m such that there are rectangles as described above the number of rectangles used by φ. Let f: S^n → {0,1} be defined on the variable set X and let φ be a step function as described above. Define

  ε := (1/|S^n|) · Σ_{i=1}^{m} |{x ∈ Ri : f(x) ≠ ci}|.

Then we say that φ approximates f with (total) error ε.
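Definition 6 is easy to exercise on a toy example. The sketch below (an illustrative stand-in, not from the paper) partitions S^n into blocks, treats each block as a rectangle with a constant answer, and computes the total approximation error according to the formula above:

```python
from itertools import product

# Sketch: total error of a step function per Definition 6. The partition
# here is simply "value of x[0]" (two rectangles over S = {0, 1}).

def total_error(f, rectangles, n, S=(0, 1)):
    """rectangles: list of (membership_test, constant c_i) partitioning S^n."""
    wrong = 0
    for x in product(S, repeat=n):
        for member, c in rectangles:
            if member(x):                 # partition: exactly one block matches
                wrong += (f(x) != c)
                break
    return wrong / len(S) ** n

f = lambda x: x[0] ^ x[1]                 # XOR, hard for such coarse blocks
rects = [(lambda x: x[0] == 0, 0),        # answer 0 on the block x0 = 0
         (lambda x: x[0] == 1, 1)]        # answer 1 on the block x0 = 1
assert total_error(f, rects, n=2) == 0.5
```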

The next lemma links the number of nodes in an arbitrary randomized branching program to the number of rectangles used by an appropriate step function.

Lemma 7. Let f: S^n → {0,1} be defined on the variable set X, |X| = n. Let G be a randomized read-k-times s-way BP for f with two-sided error at most ε. Let p ∈ {1, ..., n}. Then there is a step function which approximates f with total error at most ε and which uses at most (s·|G|)^{kp−1} rectangles.

Sketch of Proof. The first step of the proof is to turn G into a deterministic branching program. By a simple counting argument (due to Yao [16]) one can prove that there is a deterministic read-k-times s-way BP G′ representing a function f′: S^n → {0,1} such that

  Pr{f′(x) = f(x)} ≥ 1 − ε,   (1)

where the input x is chosen according to the uniform distribution on S^n.

The second step of the proof operates on the deterministic BP G′. We map paths in G′ to (k,p)-rectangles in S^n as described in the proof of Theorem 1 in the paper of Borodin, Razborov and Smolensky [4]. Borodin, Razborov and Smolensky consider nondeterministic BPs and use a "non-standard" BP model where tests of variables occur at the edges and not at the nodes. The adaptation to our setting is mainly technical work. We only sketch the mapping of paths to rectangles, using ideas of Okolnishnikova [11] which simplify the presentation.

Each path from the source to a sink in the BP G′ is partitioned into kp segments, say by "intermediate" nodes v1, ..., v_{kp+1}, where v1 is the source of G′ and v_{kp+1} is the sink reached by the path. Let Xi be the set of variables tested on the segment between vi and v_{i+1}, i = 1, ..., kp. It can be shown that there is a partition of the path such that the set of all inputs of S^n for which the computation path runs through the intermediate nodes v1, ..., v_{kp+1} is a (k,p)-rectangle with respect to X1, ..., X_{kp}. Moreover, the collection of rectangles obtained for all paths forms a partition of S^n, since the path for any input runs through exactly one sequence of intermediate nodes.

The function f′ computed by G′ is constant within each rectangle of the partition constructed above. Hence, f′ is a step function. The upper bound on the number of rectangles used by f′ follows by estimating the number of sequences of intermediate nodes, and the claim on the error bound follows from (1). □

In order to prove large lower bounds on the size of randomized BPs, we have to choose functions which are "hard" to approximate by step functions. One type of such functions is described in the hypothesis of the following theorem, which summarizes our proof technique.

Theorem 8. Let f: S^n → {0,1} be defined on the variable set X, |X| = n. Let k be an integer and 1 ≤ p ≤ n. Assume that there is a constant ρ such that

  |f^{−1}(1)| / |S^n| ≥ ρ > 0.   (H1)

Furthermore, assume that for every (k,p)-rectangle R in S^n (with respect to sets X1, ..., X_{kp} ⊆ X as in Def. 5, which may depend on R) it holds that

  |R ∩ f^{−1}(0)| / |S^n| ≥ α · |R ∩ f^{−1}(1)| / |S^n| − Δ(n),   (H2)

where α ≥ 1 is a constant and Δ is a real-valued function. Then it holds for every randomized read-k-times s-way BP G for f with two-sided error at most ε that

  |G| ≥ (1/s) · ((ρ − ε)/Δ(n))^{1/(kp−1)}.

Note that in the applications of this theorem, Δ(n) will be exponentially small in n.

Sketch of Proof. Let φ be a step function which approximates f with total error at most ε. Choose an arbitrary partition of S^n into (k,p)-rectangles such that φ is constant within each rectangle. For c ∈ {0,1} let R^c_1, ..., R^c_{r_c} be the rectangles for which φ computes the result c. It holds that

  |f^{−1}(1)| = Σ_{i=1}^{r_0} |R^0_i ∩ f^{−1}(1)| + Σ_{i=1}^{r_1} |R^1_i ∩ f^{−1}(1)|,   (2)

since the R^0_i, R^1_i are a partition of S^n. Since φ approximates f with total error at most ε, we have

  ε ≥ (1/|S^n|) · ( Σ_{i=1}^{r_0} |R^0_i ∩ f^{−1}(1)| + Σ_{i=1}^{r_1} |R^1_i ∩ f^{−1}(0)| ).   (3)

Using equation (2) and the hypothesis (H2), we can derive a lower bound on r_1 from inequality (3). We additionally estimate the fraction of 1-inputs of f by (H1). The final result of these calculations (which we omit here) is the bound

  r_1 ≥ (ρ − ε)/Δ(n).   (4)

The claimed lower bound on |G| follows from this by Lemma 7. □

The above theorem says that a function is "hard" for randomized read-k-times BPs if (i) its fraction of 1-inputs is "not too small" (in particular, it does not tend to zero with increasing input size); and (ii) each (k,p)-rectangle which covers a "large" fraction of 1-inputs also contains a "large" fraction of 0-inputs (rectangles cannot consist solely of 1-inputs unless they are "very small").
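To get a feeling for the bound of Theorem 8, one can plug in concrete parameters. The values below mirror those used for the function MS in Section 4 (s = 2, k = 1, p = 2, ρ = 27/128, Δ(n) = (√14/4)^{n/4}); the choice ε = 0.1 is an arbitrary illustration:

```python
from math import sqrt

# Sketch: numerically evaluating the lower bound of Theorem 8,
#   |G| >= (1/s) * ((rho - eps) / Delta(n)) ** (1 / (k*p - 1)),
# with the parameters later used for the function MS (eps = 0.1 is an
# arbitrary illustrative error value).

def theorem8_bound(n, s=2, k=1, p=2, rho=27 / 128, eps=0.1):
    delta = (sqrt(14) / 4) ** (n / 4)     # Delta(n), exponentially small in n
    return (1 / s) * ((rho - eps) / delta) ** (1 / (k * p - 1))

# The bound grows exponentially in n; it only becomes non-trivial once
# Delta(n) is small compared to rho - eps.
assert theorem8_bound(1600) > theorem8_bound(400) > theorem8_bound(64)
```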

4 Nondeterminism versus Randomness for Read-Once BPs

In this section, we show that nondeterminism and randomness are incomparable for read-once branching programs if the error allowed for the randomized BPs is not too large. The complete proofs of the theorems within this section can be found in [15].

We start by exhibiting a function which has only nondeterministic read-once BPs of exponential size, whereas its complement has randomized read-once BPs with small one-sided error of polynomial size. We consider the function PERM defined on an n × n matrix X = (x_{ij})_{1≤i,j≤n} of Boolean variables. Let PERM(X) = 1 if and only if X is a permutation matrix, i.e., if each row and each column contains exactly one entry equal to 1.

Theorem 9. (1) PERM ∈ coRP_{ε(n)}-OBDD for all ε(n) ∈ [0,1) with ε(n)^{−1} = O(poly(n)), but (2) PERM ∉ NP-BP1.
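The permutation-matrix test itself is elementary; here is a brief reference sketch of the function (illustrative only, not the randomized OBDD construction from the paper):

```python
# Sketch: the function PERM on an n x n Boolean matrix -- 1 iff every row
# and every column contains exactly one 1.

def perm(matrix):
    n = len(matrix)
    rows_ok = all(sum(row) == 1 for row in matrix)
    cols_ok = all(sum(matrix[i][j] for i in range(n)) == 1 for j in range(n))
    return int(rows_ok and cols_ok)

assert perm([[0, 1], [1, 0]]) == 1       # a permutation matrix
assert perm([[1, 1], [0, 0]]) == 0       # row/column counts violated
```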

For the proof of this theorem, we have to refer to [15]. It follows that RP-BP1 ≠ coRP-BP1 and also BPP-BP1 ⊄ NP-BP1. It is easy to improve this result to show that BPP-BP1 ⊄ (NP-BP1 ∪ coNP-BP1): the function 2PERM: {0,1}^{2n²} → {0,1}, defined on two Boolean n × n matrices X and Y by 2PERM(X, Y) := PERM(X) ∧ ¬PERM(Y), is contained in the class BPP-BP1 but neither in NP-BP1 nor in coNP-BP1 (this immediately follows from Theorem 9).

It is much harder to show that nondeterminism can be more powerful than randomness for read-once BPs, since we have to consider a function which is "easy" enough to be computable by nondeterministic read-once BPs, but for which nevertheless the proof technique of Section 3 for lower bounds on randomized BPs works. We claim that the following function has these properties.

Definition 10. Define MS: {0,1}^{n²} → {0,1} ("Mod-Sum") on the n × n matrix X = (x_{i,j})_{1≤i,j≤n} of Boolean variables by MS(X) := RT(X) ∧ RT(X^T), where RT: {0,1}^{n²} → {0,1} ("Row-Test") is defined by

  RT(X) := [s(X) ≡ 0 mod 2],   s(X) := Σ_{i=1}^{n} [x_{i,1} + ⋯ + x_{i,n} ≡ 0 mod 3].
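A direct implementation of the row test and of MS (reference code for the definition, not a branching program):

```python
# Sketch: the functions RT ("row test") and MS ("mod-sum") from
# Definition 10, implemented directly on a Boolean n x n matrix.

def rt(matrix):
    """[s(X) = 0 mod 2], where s(X) counts rows whose sum is 0 mod 3."""
    s = sum(1 for row in matrix if sum(row) % 3 == 0)
    return int(s % 2 == 0)

def ms(matrix):
    """MS(X) = RT(X) AND RT(X^T)."""
    transpose = [list(col) for col in zip(*matrix)]
    return rt(matrix) & rt(transpose)

X = [[1, 1, 1],      # row sum 3 = 0 mod 3
     [1, 0, 0],      # row sum 1
     [0, 1, 0]]      # row sum 1  -> s(X) = 1, odd, so RT(X) = 0
assert rt(X) == 0
assert ms(X) == 0
```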

(By the expression [P], P a predicate, we denote the Boolean function which is equal to 1 iff P is true.)

Theorem 11. (1) MS ∈ coRP_{1/2}-BP1; (2) MS ∉ BPP_ε-BP1, for ε < 27/128.

Sketch of Proof. One first shows that the fraction of 1-inputs of MS is at least ρ := 27/128 > 0.21. Hence, hypothesis (H1) is fulfilled. We comment on the more interesting part of the proof, that hypothesis (H2) is also fulfilled. We claim that for an arbitrary 2-dimensional rectangle R in {0,1}^{n²} it holds that

  2^{−n²} · |MS^{−1}(0) ∩ R| ≥ α · 2^{−n²} · |MS^{−1}(1) ∩ R| − Δ(n),   (5)

where we choose α := 1 and Δ(n) := (√14/4)^{n/4}. For the proof of this fact, let an arbitrary 2-dimensional rectangle R be given. R is associated with a balanced partition (X1, X2) of the input matrix X. The function MS is a conjunction of a "row-wise" and a "column-wise" test of the matrix X. Our strategy is to prove that for any choice of the partition (X1, X2), at least one of these tests is "difficult." It is easy to show that either "many" rows or "many" columns are "split" by the given partition, i.e., some of their variables lie on either side of the partition. W.l.o.g. let this happen for the rows. In this case, we expect the evaluation of RT(X) to be difficult.

As the next step, we restrict ourselves to a class of subfunctions of RT obtained by assigning constant values to a fixed subset of the variables in X. This class contains functions RTC_{c0,c1,...,cm}: {0,1}^{4m} → {0,1}, m := n/4, defined on vectors of variables x = ((x^1_1, x^2_1), ..., (x^1_m, x^2_m)) and y = ((y^1_1, y^2_1), ..., (y^1_m, y^2_m)) as follows:

  RTC_{c0,c1,...,cm}(x, y) := [s(x, y) ≡ c0 mod 2],   s(x, y) := Σ_{i=1}^{m} [x^1_i + x^2_i + y^1_i + y^2_i + c_i ≡ 0 mod 3],

where c0 ∈ {0,1} and c1, ..., cm ∈ Z3 are arbitrary constants. The main work of the proof is to show that already these subfunctions are "difficult" for an arbitrary choice of the constants. Like the well-known inner product function from communication complexity theory (see, e.g., [9]), the function RTC has the property that the numbers of 0- and 1-inputs in any 2-dimensional rectangle are "nearly balanced." More precisely, for any 2-dimensional rectangle R′ in {0,1}^{4m} it holds that

  2^{−4m} · | |RTC^{−1}(1) ∩ R′| − |RTC^{−1}(0) ∩ R′| | ≤ (√14/4)^m.   (6)

We claim that fact (6) concerning the subfunctions RTC of RT can be used to derive (5) for the complete function MS (proof omitted). Finally, we apply Theorem 8 with the parameters ρ, α and Δ as defined above and k := 1, p := 2 (since we consider 2-dimensional rectangles). We obtain that

  |G| ≥ (1/2) · (ρ − ε)/(√14/4)^{n/4} = 2^{cn − O(log n)},   c := (1/4) · log₂(4/√14) ≈ 0.024,

for ε < ρ = 27/128. □

Corollary 12. For ε < 27/128 it holds that

(1) NP-BP1 ⊄ BPP_ε-BP1;
(2) RP_ε-BP1 ⊊ RP_{1/2}-BP1 ⊆ NP-BP1.
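The near-balance property (6) can be checked by brute force for small m. The sketch below takes the full input space {0,1}^{4m} as the rectangle R′ (the trivial rectangle A × B with A and B everything) and verifies the bound for all choices of the constants:

```python
from itertools import product
from math import sqrt

# Sketch: brute-force check of the balance bound (6) for RTC with m = 2,
# using the full input space {0,1}^(4m) as the 2-dimensional rectangle.

def rtc(x, y, c0, cs):
    s = sum((x[2 * i] + x[2 * i + 1] + y[2 * i] + y[2 * i + 1] + cs[i]) % 3 == 0
            for i in range(len(cs)))
    return int(s % 2 == c0)

m = 2
for c0 in (0, 1):
    for cs in product((0, 1, 2), repeat=m):
        ones = sum(rtc(x, y, c0, cs)
                   for x in product((0, 1), repeat=2 * m)
                   for y in product((0, 1), repeat=2 * m))
        zeros = 2 ** (4 * m) - ones
        assert abs(ones - zeros) / 2 ** (4 * m) <= (sqrt(14) / 4) ** m
```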

The second part of this corollary shows that an increase in the number of tests of variables, as described in Lemma 3, is indeed unavoidable if we want to decrease the error below a given small constant for read-once BPs. This is contrary to the situation for OBDDs or general branching programs, where arbitrarily small constant errors can be obtained.

5 A Lower Bound for Randomized Read-k-Times BPs

Now we are going to present our second main result, an exponential lower bound on the size of randomized read-k-times BPs. The complete proofs of this section can be found in the paper [14]. We consider the function SIP: Z3^n × Z3^n → {0,1} ("Sylvester inner product"), n = 2^d, with

SIP(x; y) = 1 :, xT Ay  0;

where A = (ai;j )1i;j 2d is the Sylvester matrix of dimension 2d  2d , i. e. ,

ai+1;j +1 := (?1); for 0  i; j  2d ? 1, where bin(i) is the binary representation of i and <  ;  > the inner product in Zd2 . Borodin, Razborov and Smolensky [4] have proved that this function has no polynomial size nondeterministic read-k-times BP for k  c logn for appropriate c. We can show that this function also has no polynomial size randomized read-k-times BPs with two-sided error.
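The Sylvester matrix and SIP are easy to write down explicitly. The following reference sketch builds A from the definition (the ±1 entries are reduced mod 3 when the product is evaluated) and checks the Hadamard property, i.e., that distinct rows are orthogonal:

```python
# Sketch: the Sylvester matrix A of dimension 2^d x 2^d, with
# a[i][j] = (-1)^(<bin(i), bin(j)>), and the function SIP over Z_3.

def sylvester(d):
    n = 2 ** d
    # <bin(i), bin(j)> mod 2 is the parity of the popcount of i AND j.
    return [[(-1) ** bin(i & j).count("1") for j in range(n)]
            for i in range(n)]

def sip(x, y, A):
    """SIP(x, y) = 1 iff x^T A y = 0 in Z_3."""
    n = len(A)
    total = sum(x[i] * A[i][j] * y[j] for i in range(n) for j in range(n))
    return int(total % 3 == 0)

A = sylvester(2)
# Sylvester matrices are Hadamard matrices: distinct rows are orthogonal.
for i in range(4):
    for j in range(4):
        dot = sum(A[i][t] * A[j][t] for t in range(4))
        assert dot == (4 if i == j else 0)

assert sip((0, 0, 0, 0), (1, 2, 0, 1), A) == 1   # x = 0 gives x^T A y = 0
```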

Theorem 13. Let G be a randomized 3-way read-k-times BP for SIP with two-sided error at most ε, ε < 1/9. Then

  |G| = exp(Ω(n/(k³ · 4^k))).

Sketch of Proof. We can only give some comments highlighting the main ideas of the proof of this theorem here. We require the technique of Section 3 in its general form for (k,p)-rectangles and 3-way branching programs. We choose p := 4k. The proof follows the same pattern as the proof of Theorem 11. Again, we have to show that hypotheses (H1) and (H2) of Theorem 8 are fulfilled. For hypothesis (H1), it is easy to see that

  3^{−2n} · |SIP^{−1}(1)| = 1/3 − o(1).   (7)

As in the previous section, the main work is to establish that hypothesis (H2) holds. For the function SIP, we can show that for an arbitrary (k,p)-rectangle R it holds that

  3^{−2n} · |R ∩ SIP^{−1}(0)| ≥ α · 3^{−2n} · |R ∩ SIP^{−1}(1)| − Δ(n),   (8)

with α := 2 and Δ(n) := 3^{−Ω(n/(k·4^k))}.

Besides some combinatorial facts from the original paper of Borodin, Razborov and Smolensky [4], the core of the proof of the above claim is a generalization of a lemma attributed to Lindsey (see, e.g., [6]). In its familiar form, this lemma states that in every submatrix of a Hadamard matrix which is "not too small," the numbers of 1's and (−1)'s are "nearly balanced." (A Hadamard matrix is an orthogonal matrix with entries equal to −1 or 1.) The generalized form of this lemma (proved in [14]) says that if B is a t × u matrix over the field Z3 with "large" rank, then the 3^t × 3^u matrix M = (m_{x,y}) defined by m_{x,y} := x^T B y, x ∈ Z3^t, y ∈ Z3^u, has the property that in every submatrix of M which is "not too small," the numbers of 0's, 1's and (−1)'s are "nearly balanced."

Now consider the inner product in Z3 defined by x^T A y, x, y ∈ Z3^n, where A is the matrix of SIP. The generalized form of Lindsey's Lemma can be applied to show that in each (k,p)-rectangle, the fractions of inputs for which this inner product yields the result 0, 1 or −1, respectively, are equal up to an exponentially small term. Hence, the number of 0-inputs of SIP in each (k,p)-rectangle is "essentially" at least twice as large as the number of 1-inputs, which proves (8). □

By Lemma 3, the above lower bound also holds for arbitrary ε < 1/2. Finally, we mention that it is also possible to derive a similar exponential lower bound for a Boolean variant of SIP where each value from Z3 is encoded by two Boolean variables. The precise definition of this function together with the proof of the lower bound can again be found in [14].
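The balance phenomenon behind the generalized lemma is visible even in the smallest cases. The sketch below builds the full matrix of values x^T B y mod 3 for a full-rank B over Z3 and checks that the three values occur with nearly equal frequency (here over the whole matrix, the simplest "submatrix"; illustrative only):

```python
from itertools import product

# Sketch: frequencies of the values 0, 1, 2 of x^T B y over Z_3 for a
# full-rank B (the identity), illustrating the near-balance used in the
# generalized Lindsey lemma.

t = u = 3
B = [[1 if i == j else 0 for j in range(u)] for i in range(t)]  # rank t

counts = {0: 0, 1: 0, 2: 0}
for x in product((0, 1, 2), repeat=t):
    for y in product((0, 1, 2), repeat=u):
        v = sum(x[i] * B[i][j] * y[j] for i in range(t) for j in range(u)) % 3
        counts[v] += 1

total = 3 ** (t + u)                      # 729 pairs (x, y)
for v in (0, 1, 2):
    assert abs(counts[v] / total - 1 / 3) < 0.05
```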

Conclusion and Open Problems

We have introduced a technique for proving lower bounds on the size of randomized read-k-times branching programs, which has turned out to be powerful enough to yield an exponential lower bound for arbitrary k < c·log n. We have also shown that BPP_ε-BP1 is incomparable to NP-BP1 if the error ε is not too large. This partially solves the open problem raised in [8] to separate the classes BPP-BP1 and NP-BP1. We have even obtained an exponential gap between the randomized read-once BP sizes for different constant worst-case errors. Nevertheless, some interesting problems concerning randomized read-once BPs still remain open, e.g.:

(1) Find a function f with f ∈ NP-BP1, but f ∉ BPP_{1/2−ε}-BP1 for arbitrarily small ε > 0, showing that NP-BP1 ⊄ BPP-BP1.
(3) Show that for arbitrary ε and ε′ with 0 ≤ ε < ε′ < 1 it holds that RP_ε-BP1 ⊊ RP_{ε′}-BP1.

Acknowledgement I would like to thank Ingo Wegener and Martin Dietzfelbinger for helpful discussions on the subject of this paper.

References

1. F. Ablayev. Randomization and nondeterminism are incomparable for polynomial ordered binary decision diagrams. In Proc. of ICALP, LNCS 1256, 195–202. Springer, 1997.
2. F. Ablayev and M. Karpinski. On the power of randomized branching programs. In Proc. of ICALP, LNCS 1099, 348–356. Springer, 1996.
3. L. Babai, P. Frankl, and J. Simon. Complexity classes in communication complexity theory. In Proc. of the 27th IEEE Symp. on Foundations of Computer Science, 337–347, 1986.
4. A. Borodin, A. A. Razborov, and R. Smolensky. On lower bounds for read-k-times branching programs. Computational Complexity, 3:1–18, 1993.
5. R. E. Bryant. Graph-based algorithms for Boolean function manipulation. IEEE Trans. Computers, C-35(8):677–691, Aug. 1986.
6. B. Chor and O. Goldreich. Unbiased bits from sources of weak randomness and probabilistic communication complexity. SIAM J. Comput., 17(2):230–261, Apr. 1988.
7. J. Gill. Probabilistic Turing Machines and Complexity of Computations. Ph.D. dissertation, U.C. Berkeley, 1972.
8. S. Jukna, A. Razborov, P. Savický, and I. Wegener. On P versus NP ∩ coNP for decision diagrams and read-once branching programs. In Proc. of the 22nd Int. Symp. on Mathematical Foundations of Computer Science (MFCS), LNCS 1295, 319–326. Springer, 1997. Submitted to Computational Complexity.
9. E. Kushilevitz and N. Nisan. Communication Complexity. Cambridge University Press, 1997.
10. C. Meinel. The power of polynomial size Ω-branching programs. In Proc. of the 5th Ann. Symp. on Theoretical Aspects of Computer Science (STACS), LNCS 294, 81–90. Springer, 1988.
11. E. A. Okolnishnikova. On lower bounds for branching programs. Siberian Advances in Mathematics, 3(1):152–166, 1993.
12. C. H. Papadimitriou and M. Sipser. Communication complexity. In Proc. of the 14th Ann. ACM Symp. on Theory of Computing, 196–200, 1982.
13. A. A. Razborov. Lower bounds for deterministic and nondeterministic branching programs. In Proc. of Fundamentals of Computation Theory, LNCS 529, 47–60. Springer, 1991.
14. M. Sauerhoff. A lower bound for randomized read-k-times branching programs. Technical Report TR97-019, Electronic Colloquium on Computational Complexity, 1997. Available via WWW from http://www.eccc.uni-trier.de/.
15. M. Sauerhoff. On non-determinism versus randomness for read-once branching programs. Technical Report TR97-030, Electronic Colloquium on Computational Complexity, 1997. Available via WWW from http://www.eccc.uni-trier.de/.
16. A. C. Yao. Lower bounds by probabilistic arguments. In Proc. of the 24th IEEE Symp. on Foundations of Computer Science, 420–428, 1983.
