On Extracting Private Randomness Over a Public Channel Yevgeniy Dodis∗
Roberto Oliveira†
April 14, 2003
Abstract

We introduce the notion of a super-strong extractor. Given two independent weak random sources X, Y, such an extractor Ext(·, ·) has the property that Ext(X, Y) is statistically random even if one is given Y. Namely, ⟨Y, Ext(X, Y)⟩ ≈ ⟨Y, R⟩. Super-strong extractors generalize the notion of strong extractors [16], which assume that Y is truly random, and extractors from two weak random sources [26, 7], which only ensure that Ext(X, Y) ≈ R. We show that super-strong extractors have many natural applications to the design of cryptographic systems in a setting where different parties have independent weak sources of randomness, but have to communicate over an insecure channel. For example, they allow one party to “help” another party extract private randomness: the “helper” simply sends Y, and the “client” gets private randomness Ext(X, Y). In particular, they allow two parties to derive a nearly random key after initial agreement on only a weak shared key, without using ideal local randomness. We show that optimal super-strong extractors exist, capable of extracting all the randomness from X as long as Y has a logarithmic amount of min-entropy. This generalizes a similar result for strong extractors, and improves upon previously known bounds [7] for the weaker problem of randomness extraction from two independent random sources. We also give explicit super-strong extractors which work provided the sum of the min-entropies of X and Y is at least their block length. Finally, we consider the setting of our problem where the public communication channels are not authenticated. Using the results of [13], we show that non-trivial authentication is possible when the min-entropy rate of the shared secret key is at least a half. Combining this with our explicit super-strong extractor construction, we get the first privacy amplification protocol over an adversarially controlled channel where players do not have ideal local randomness.
∗ Department of Computer Science, New York University, 251 Mercer Street, New York, NY 10012, USA. Email: [email protected]. Partially supported by the NSF CAREER Award.
† Department of Mathematics, New York University, 251 Mercer Street, New York, NY 10012, USA. Email: [email protected]. The work of this author was funded by a doctoral fellowship from CNPq, Brazil.
1 Introduction
IMPERFECT RANDOMNESS. Randomization has proved to be extremely useful and fundamental in many areas of computer science. Unfortunately, in many situations one does not have ideal sources of randomness, and has to base a given application on imperfect sources of randomness. Among the many imperfect sources considered so far, perhaps the most general and realistic is the weak source [29, 7]. The only thing guaranteed about a weak source is that no particular string has a very high probability of occurring. This is characterized by a parameter b (called the min-entropy of the source): no string (of some given length ℓ) occurs with probability more than 2^{−b} (for any distribution of the source). We will call such a source (ℓ, b)-weak. Unfortunately, handling weak sources is often necessary in many applications, as it is typically hard to assume much structure on the source beside the fact that it contains some randomness. Thus, by now a universal goal in basing some application on imperfect sources is to make it work with the weak source. The most direct way of utilizing weak sources would be to extract nearly perfect randomness from such a source. Unfortunately, it is trivial to see [7] that no deterministic function can extract even one random bit from a weak source, as long as b < ℓ (i.e., the source is not truly random to begin with). This observation leaves two possible options. First, one can try to use weak sources for a given application without an intermediate step of extracting randomness. Second, one can try designing probabilistic extractors, and later justify where and how one can obtain the additional randomness needed for extraction.

USING A SINGLE WEAK SOURCE.
A big successful line of research [27, 25, 7, 8, 29, 3] following the first approach showed that a single weak source is sufficient to simulate any probabilistic computation of decision or optimization problems (i.e., problems with a unique “correct” output which are potentially solved more efficiently using randomization; this class is called BPP). Unfortunately, most of the methods in this area are not applicable to applications where the randomness is needed by the application itself, and not mainly for the purpose of efficiency. A prime example is cryptography. For example, secret keys have to be random, and many cryptographic primitives (such as public-key encryption) must be probabilistic. Indeed, it is not good enough to produce a set of secret keys most of which are “good” if one does not know which of these keys to actually use, or to say that “at least half of the ciphertext does not reveal the message”. Thus, new methods are needed to base cryptographic protocols on weak sources. So far, this question has only been studied in the setting of information-theoretic symmetric-key cryptography. In this scenario, the shared secret key between the sender and the recipient is no longer random, but comes from a weak source. As a very negative result, McInnes and Pinkas [14] proved that one cannot securely encrypt even a single bit, even when using an “almost random” (ℓ, ℓ−1)-weak source. Thus, one cannot base symmetric-key encryption on weak sources. Dodis and Spencer [9] also consider the question of message authentication and show that one cannot (non-interactively) authenticate even one bit using an (ℓ, ℓ/2)-weak source (this bound is tight, as Maurer and Wolf [13] showed how to authenticate up to ℓ/2 bits when b > ℓ/2). Basing more advanced cryptographic primitives on a single weak random source also promises to be challenging.
For example, it is not clear how to meaningfully model access to a single weak source by many parties participating in a given cryptographic protocol. Additionally, moving to the computational setting will likely require making very non-standard cryptographic assumptions.

USING SEVERAL WEAK SOURCES. Instead, we will assume that each party has its own weak source, which is independent of all the other weak random sources. In other words, while no individual party can assume that its source is truly random, the parties are located “far apart” so that their imperfect sources are independent of each other. For simplicity, we will restrict the number of independent sources to two for the remainder of this paper. One of the questions we will consider is whether it is possible to construct cryptographic protocols, like secret-key encryption or key exchange, in this new setting. In fact, rather than construct these primitives from scratch, we will try to extract nearly ideal randomness from two weak sources, and then simply
use whatever standard methods exist for the cryptographic task at hand! This brings us to the question of randomness extraction from several independent random sources. This question originated as early as [18], which showed that one can extract randomness from “many” so-called “semi-random” sources (later called SV-sources), which are very special cases of weak sources. Vazirani [26] considered two SV-sources and showed that the inner product function indeed extracts an almost random bit in this case (he also extended this construction to extract more than one bit). Chor and Goldreich [7] were the first to consider general weak sources of equal block length; say, for concreteness, that X is (ℓ1, b1)-weak and Y is (ℓ2, b2)-weak, where for now we also assume ℓ1 = ℓ2 = ℓ. First, they showed that a random function can extract almost (b1 + b2 − ℓ) nearly random bits in this setting.¹ They also gave an explicit number-theoretic construction that can essentially match this (non-optimal) bound. Moreover, they showed that the simple inner product function is also a good bit-extractor under the same condition b1 + b2 > ℓ. Recently, Trevisan and Vadhan [24] broke this “barrier” b1 + b2 > ℓ, but only for the very “imbalanced” case when b1 = ε²ℓ, b2 = (1 − O(ε))ℓ (for any ε > 0). To summarize, while non-trivial randomness extraction is possible, the known constructions and parameters seem far from optimal. Unfortunately, improving this situation seems extremely challenging. Indeed, it is easy to see that the question of extracting randomness from two independent sources beyond what is currently known is even harder than the notoriously hard problem of explicitly constructing certain bipartite Ramsey graphs (see [28, 17]).

EXTRACTORS. A special case of the above question has received a huge amount of attention recently: the case when one of the two sources, say X, is perfect, i.e. b1 = ℓ1.
In this case, one invests b1 bits of true randomness X (called the seed) and hopes to extract nearly b1 + b2 random bits from X and a given (ℓ2, b2)-weak source Y. A deterministic function Ext achieving this task has simply been called an extractor [16]. A strong extractor additionally requires X itself to be part of the extracted randomness. In this case, X is usually excluded from the output of Ext, so that the goal becomes to extract up to b2 random bits from Y. By now, it is well known that one can indeed achieve this goal provided b1 ≫ log ℓ2. Moreover, many explicit constructions of strong extractors which come very close to this bound are known (see [15, 23, 11, 20, 19] and the references therein). Not surprisingly, strong extractors have found many applications (e.g., see [19, 6, 12]).

OUR QUESTION. The general question of extracting randomness from two weak sources [18, 26, 7] concentrated on regular, non-strong extractors. Namely, one could not publish the random seed X. If the extracted randomness is to be used as the secret key of conventional cryptographic systems, this means that one has to sample X and Y from two independent weak sources and securely “transport” X to Y. Consider, for example, the following application. Alice and Bob are initially together and wish to communicate securely once Alice goes away. They can agree on an auxiliary secret key X sampled from their common weak source. When Alice is far away, she gets access to an independent source Y. Assuming the parameters are right, Alice can now extract a nearly random secret key k = Ext(X, Y). However, Bob only knows X, so Alice has to send Y to Bob. In cryptography, it is conventional to assume that the communication channel between Alice and Bob is public. Thus, Alice has to send Y “in the clear”. With regular extractors, even from two weak sources, there is no guarantee that Ext(X, Y) will look random to an eavesdropper who learns Y.
On the other hand, conventional strong extractors resolve this problem, but rely on the strong assumption that Alice can sample a truly random Y and send it over the channel. In a world with no “true randomness” and only weak sources, this assumption is not realizable, unless eventually two independent sources are secretly brought together. The above example motivates our common generalization of previous work. We wish to consider strong extractors with weak seeds. We will call such extractors super-strong. Namely, we want to design a function Ext such that Ext(X, Y) looks random even to an observer who knows Y, for any X and Y sampled from their corresponding (ℓ1, b1) and (ℓ2, b2) weak sources.
¹ A trivial strengthening of their technique can push this number to min(b1, b2); we will later non-trivially push this to b1 + b2.
OUR RESULTS. As we demonstrate, such remarkable super-strong extractors exist. In particular, we show that a random function can be used to extract essentially all the randomness from X (i.e., nearly b1 bits), provided only that b2 ≥ log ℓ1 (and also b1 ≥ log ℓ2). In other words, as long as the public seed Y has barely enough randomness, we can extract almost all the randomness from our target source X. Clearly, this bound generalizes the standard setting of (strong) extractors, where one needs ℓ2 = b2 ≥ log ℓ1 to extract all the randomness from X. We also remark that our analysis non-trivially extends the previous work. In particular, it is not just a “trivial application” of the Chernoff bound, as is the case for the existence proof for regular extractors [21, 22].² Proving our bound involves a careful martingale construction, followed by an application of the famous Azuma inequality to bound its deviation from the mean. Also, our bound strengthens what was known for regular (non-strong) extraction from two weak sources [7]. As mentioned, their result gave only b1 + b2 − ℓ bits. It is easy to improve this to min(b1, b2) bits, but getting b1 + b2 bits (which follows from our more general bound) does not seem possible using the standard Chernoff-type bounds of [7]. Next, we address explicit constructions of super-strong extractors. Unfortunately, the large body of work on strong extractors does not seem to be applicable here. Intuitively, standard extractors use the seed to perform a random walk on an expander, or to select a hash function from a small family of functions. These arguments seem to fall apart completely once the seed comes from a weak random source. On the other hand, any explicit construction of super-strong extractors in particular implies extraction from two independent weak sources, for which any improvement seems very hard, as we mentioned earlier.
Thus, the best we can hope for is to extend the best known constructions in this latter setting to yield super-strong extractors. And, indeed, this is exactly what we achieve. First, we show that the inner product function is a super-strong bit-extractor for the case ℓ1 = ℓ2 = ℓ, provided b1 + b2 > ℓ. This argument involves extending the combinatorial lemma of Lindsey (see Section 4). Second, we show that Vazirani’s multi-bit extraction for SV-sources can be applied to weak sources as well. This allows one to extract Ω(ℓ) bits provided b1 + b2 ≫ 3ℓ/2. Finally, we show that the explicit extractor of [7] based on discrete logarithms can also be extended to our setting, which gives a way to extract nearly (b1 + b2 − ℓ)/2 random bits. Again, we remark that all these extensions involve non-trivial modifications to the existing arguments.

PRIVACY AMPLIFICATION. Finally, we return to applications of super-strong extractors to the setting where different parties have independent weak sources, but all communication between them is public. The most natural such application is key agreement (a.k.a. privacy amplification [5, 4]) by public discussion: sending Y over the channel allows Alice and Bob to agree on a (nearly) random key k = Ext(X, Y), provided the communication channel is authentic. Therefore, the remaining interesting case is what happens when the channel is not only public, but adversarially controlled [13]. In particular, the question is whether we can build any kind of message authentication with a shared key coming from an (ℓ1, b1)-weak block source, and without any local randomness. Specifically, assume Alice and Bob share a key X1, …, Xi (where i is determined by their needs; see below) coming from i samples of the (ℓ1, b1)-weak block source. When Alice gets her hands on an independent source Y, she would like to authenticate Y using X2 … Xi.
Then they can both agree on the key k = Ext(X1, Y), where Ext is our super-strong extractor. As we mentioned, [9] showed that non-interactive one-time authentication from Alice to Bob is impossible when b1 ≤ ℓ1/2. On the other hand, Maurer and Wolf [13] gave a way to non-interactively authenticate up to ℓ1/2 bits per shared Xi, provided b1 ≫ ℓ1/2.³ Thus, sharing i = 1 + 2ℓ2/ℓ1 values Xi allows Alice to authentically transmit Y over the channel, so that both parties can apply a super-strong extractor to agree on a random k = Ext(X1, Y). Combining this observation with our explicit constructions of super-strong extractors for ℓ1 = ℓ2 = ℓ, we get the first efficient privacy amplification without ideal local randomness, provided b1 ≫ ℓ/2 and b2 ≫ ℓ − b1.

² We are not aware of any written proof of the existence of strong extractors, as all the references we found point to [21, 22].
³ We remark that, unlike our setting, Alice and Bob had ideal local randomness in the setting of [13], and used it at later stages of their application. Luckily, the authentication step was deterministic, which makes it “coincidentally applicable” to our situation.
2 Preliminaries

2.1 Basic notation

We mostly employ standard notation. The symbol log is reserved for the base-2 logarithm. For a positive integer t, Ut denotes a random variable that is uniform over {0,1}^t and independent of all other random variables under consideration. We also write [t] ≡ {1, 2, …, t}. For two random variables A, B taking values in the finite set A, their statistical distance is ‖A − B‖ ≡ (1/2) ∑_{a∈A} |Pr(A = a) − Pr(B = a)|, and the min-entropy of A is H∞(A) ≡ min_{a∈A} −log(Pr(A = a)). Finally, if C is another random variable, C|A=a represents the distribution of C conditioned on A = a ∈ A.
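Both quantities are straightforward to compute for explicitly given finite distributions. A minimal Python sketch (the example distribution is our own, purely for illustration):

```python
from math import log2

def statistical_distance(p, q):
    """||A - B|| = (1/2) * sum_a |Pr(A=a) - Pr(B=a)|, for dicts of probabilities."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in support)

def min_entropy(p):
    """H_inf(A) = min_a -log2 Pr(A=a), taken over the support of A."""
    return min(-log2(pr) for pr in p.values() if pr > 0)

# A 2-bit source uniform over {00, 01, 10}: a (2, log2(3))-weak source.
src = {"00": 1/3, "01": 1/3, "10": 1/3}
u2 = {x: 1/4 for x in ("00", "01", "10", "11")}
print(min_entropy(src))                # log2(3) ≈ 1.585
print(statistical_distance(src, u2))   # ≈ 0.25
```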
2.2 Extraction vs. Super-Strong Extraction

Min-entropy quantifies the amount of hidden randomness in a source X. The objective of extractors is to purify this randomness with the aid of a (small amount of) truly random bits.

Definition 1 Let k ≥ 0, ε > 0. A (k, ε)-extractor Ext : {0,1}^n × {0,1}^d → {0,1}^m is a function such that for all n-bit random variables X with min-entropy H∞(X) ≥ k,

‖Ext(X, Ud) − Um‖ ≤ ε.

Ext is a (k, ε)-strong extractor if the function Ext′ : (x, y) ↦ y ◦ Ext(x, y) is an extractor.

Efficient extraction from weakly random sources using a small seed length is a non-trivial problem on which a lot of progress has been made recently (see references in the Introduction). However, even a minimal amount of true randomness can be very difficult to obtain in many situations. Given the impossibility of deterministic extraction [7], it is therefore natural to consider pairs of weak sources. We adopt a special notation for those.

Definition 2 [7] The set CG(ℓ1, ℓ2, b1, b2) of pairs of independent (Chor-Goldreich) weak sources is the set of all pairs of independent random variables (X, Y) where X (respectively Y) is ℓ1 (resp. ℓ2) bits long and H∞(X) ≥ b1 (resp. H∞(Y) ≥ b2).

We define extractors from two weak sources whose output remains random even if one of the input strings is revealed. This is stronger than the definition used in [7], and this extra strength is essential for cryptographic purposes.

Definition 3 A (b1, b2, ε)-super-strong extractor (SSE) is a function Ext : {0,1}^{ℓ1} × {0,1}^{ℓ2} → {0,1}^m such that for all pairs (X, Y) ∈ CG(ℓ1, ℓ2, b1, b2), we have ‖⟨Y, Ext(X, Y)⟩ − ⟨Y, Um⟩‖ ≤ ε.

We state for later convenience the following proposition, which can be deduced from the linear programming argument in [7] (i.e. the fact that general sources of a given min-entropy are convex combinations of flat distributions with the same min-entropy).
Proposition 1 If b1 and b2 are integers, then for any function Ext : {0,1}^{ℓ1} × {0,1}^{ℓ2} → {0,1}^m the maximum of ‖⟨Y, Ext(X, Y)⟩ − ⟨Y, Um⟩‖ over all (X, Y) ∈ CG(ℓ1, ℓ2, b1, b2) is achieved by flat random variables, that is, by a pair (X, Y) for which X is uniform over a subset SX ⊂ {0,1}^{ℓ1} with |SX| = 2^{b1}, and Y is uniform over SY ⊂ {0,1}^{ℓ2}, |SY| = 2^{b2}.
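Proposition 1 makes small cases machine-checkable: to bound the SSE error of a function, it suffices to scan flat pairs. A brute-force Python sketch (the tiny parameters are our own choice; the toy Ext is the inner product, anticipating Section 4):

```python
import itertools

l1 = l2 = 3; b1 = b2 = 2   # toy parameters; flat supports have 2^b elements

def ext(x, y):             # 1-bit inner product of 3-bit strings (cf. Section 4)
    return bin(x & y).count("1") % 2

def sse_error():
    """Max of ||<Y, Ext(X,Y)> - <Y, U_1>|| over flat pairs (Proposition 1)."""
    worst = 0.0
    for SX in itertools.combinations(range(2**l1), 2**b1):
        for SY in itertools.combinations(range(2**l2), 2**b2):
            # For a 1-bit output, ||Ext(X,y) - U_1|| = |Pr(Ext(X,y)=1) - 1/2|.
            d = sum(abs(sum(ext(x, y) for x in SX) / len(SX) - 0.5)
                    for y in SY) / len(SY)
            worst = max(worst, d)
    return worst

print(sse_error())
```

The printed maximum stays below 2^{−(b1+b2−ℓ)/2−1} ≈ 0.354, the error that Theorem 4 later guarantees for the inner product.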
3 Existence of super-strong extractors Form now on m, ℓ1 ≥ b1 ≥ 2 and ℓ2 ≥ b2 ≥ 2 are positive integers and ε > 0 is a positive real number. The aim of this section is to prove that super-strong extractors exist for certain choices of parameters. Theorem 1 There exists a (b1 , b2 , ε)-SSE E XT : {0, 1}ℓ1 × {0, 1}ℓ2 → {0, 1}m for any choice of parameters satisfying
m ≤ b1 − 2 log(1/ε)
b1 ≥ log₂(ℓ2 − b2) + 2 log(1/ε) + O(1)
b2 ≥ log₂(ℓ1 − b1) + 2 log(1/ε) + O(1)    (1)
Our proof of Theorem 1 is non-constructive and does not provide an efficiently computable SSE. Nevertheless, Theorem 1 is important because it provides nearly tight conditions for the existence of super-strong extractors, as demonstrated by Theorem 2 (proved in Appendix A).

Theorem 2 There exists a constant c such that if b1 ≤ ℓ1 − c and b2 ≤ ℓ2 − c, then the first and third conditions of (1) are in fact necessary for the existence of a (b1, b2, ε)-SSE Ext : {0,1}^{ℓ1} × {0,1}^{ℓ2} → {0,1}^m.

We believe that some condition along the lines of the second one in (1) is also necessary for the existence of SSEs. However, our proof technique for Theorem 2, which is based on the lower bounds of Ta-Shma and Radhakrishnan [22] for (regular) extractors, does not allow us to conclude this fact (cf. Remark 1 in Appendix A).
3.1 Reformulating Theorem 1

Theorem 1 follows from the following combinatorial theorem.

Theorem 3 Let 0 < δ < 1/2 be given, and let L1 ≥ B1, L2 ≥ B2 and M be integers, all of which are at least two. Assume that

δ ≥ max{ 2√(M/B2), 4√((ln L1 + 1 − ln B1)/B2), 4√((ln L2 + 1 − ln B2)/B1) }    (2)

Then there exists a matrix H = (Hij) ∈ [M]^{L1×L2} such that for all sets of rows R ⊂ [L1] of size B1 and all sets of columns C ⊂ [L2] of size B2,

(1/(2 B1 B2)) ∑_{α∈[M]} ∑_{i∈R} | ∑_{j∈C} [Hij = α] − B2/M | ≤ δ    (3)
Indeed, if we set 2^{bi} = Bi, 2^{ℓi} = Li (i = 1, 2), 2^m = M, ε = δ and let Ext(x, y) = H_{xy} (identifying [M] ≈ {0,1}^m), we note that the conditions of Theorem 3 correspond naturally to those of Theorem 1. The conclusion (3) of Theorem 3 corresponds to ‖⟨Y, Ext(X, Y)⟩ − ⟨Y, Um⟩‖ ≤ ε where X is flat on R and Y is flat on C. By Proposition 1, this implies ‖⟨Y, Ext(X, Y)⟩ − ⟨Y, Um⟩‖ ≤ ε for all (X, Y) ∈ CG(ℓ1, ℓ2, b1, b2), which is exactly the defining property of SSEs. We prove Theorem 3 below.
3.2 Proof of Theorem 3

Proof: (of Theorem 3) The proof is probabilistic: we choose a matrix H ∈ [M]^{L1×L2} uniformly at random and prove that, under the hypotheses, this matrix has a positive probability of satisfying the desired low-discrepancy property (3). For each fixed α the indicator random variables [Hij = α] are i.i.d. Bernoulli with common mean 1/M, and one could try to apply a Chernoff bound to the probability of their sum being too big for a fixed choice of column and row sets C, R.⁴ This standard approach would finish with a union bound over all α, R, C. However, this does not work directly: we are considering a sum of absolute values of sums of these random variables. A two-step approach of bounding the “inside” sum first and then the “outside” sum does not seem to work either, for it only provides weak bounds. We circumvent this difficulty by employing a stronger concentration inequality, discussed below.
⁴ Of course, for different α, β, the indicators [Hij = α] and [Hij = β] are actually dependent.
THE CONCENTRATION INEQUALITY. Let g : ∏_{i=1}^k Λi → R be a function defined on a Cartesian product (for simplicity, we assume that each Λi is finite). We assume that g is c-Lipschitz; that is, if x, y ∈ ∏_{i=1}^k Λi differ in at most one coordinate, then |g(x) − g(y)| ≤ c. Now let (X1, X2, …, Xk) be a k-tuple of independent random variables taking values in ∏_{i=1}^k Λi, and define Z ≡ g(X1, …, Xk). The proposed inequality is

∀t ≥ 0   Pr(Z − E(Z) > t√k) ≤ e^{−t²/2c²}    (4)

This is a consequence of Azuma’s inequality (cf. [10, Chapter 2] or [2, Chapter 6]) applied to the Doob martingale in which the values of X1, X2, …, Xk are revealed one at a time. We now show how to apply this result in our present context.

CONSIDERING ONE SUBMATRIX. Let H be picked uniformly at random from [M]^{L1×L2}. Fix a choice of R, C as in the statement of the theorem; that is, R ⊂ [L1] has size B1 and C ⊂ [L2] has size B2. Consider

Z_{R,C} ≡ (1/2) ∑_{i∈R, α∈[M]} | ∑_{j∈C} [Hij = α] − B2/M |    (5)
We shall use the concentration inequality (4) to bound the probability of Z_{R,C} > δB1B2. To this end, note that Z_{R,C} is a function of (Hij)_{(i,j)∈R×C} ∈ [M]^{R×C}. This space is the Cartesian product of B1B2 copies of [M], the choices of Hij are all independent, and Z_{R,C} is 1-Lipschitz on this space. The last assertion is proved as follows: choose (i, j) ∈ R × C and change the value of Hij from α to β (say). This changes the value of

(1/2) | ∑_{t∈C} [Hit = α] − B2/M | + (1/2) | ∑_{t∈C} [Hit = β] − B2/M |

by at most 1 while leaving all other summands in (5) unchanged; therefore, Z_{R,C} changes by at most 1. This proves that Z_{R,C} satisfies the assumptions of (4), and we can deduce

∀t ≥ 0   Pr(Z_{R,C} − E(Z_{R,C}) > t√(B1B2)) ≤ e^{−t²/2}    (6)

We now estimate the expectation of Z_{R,C}. Observe that for any fixed i ∈ [L1] and α ∈ [M], ∑_{j∈C} [Hij = α] − B2/M is distributed like Bin(n, p) − np, where Bin(n, p) is the Binomial distribution of n i.i.d. Bernoulli summands with common mean p (in our case, n = B2 and p = 1/M). It follows that

E( | ∑_{j∈C} [Hij = α] − B2/M | ) ≤ √( E(( ∑_{j∈C} [Hij = α] − B2/M )²) ) < √(B2/M)    (7)

Z_{R,C} is half of a sum of M·B1 such terms, so that E(Z_{R,C}) < 2^{−1} B1 √(M B2) = 2^{−1} B1 B2 √(M/B2), and by assumption this last quantity is upper bounded by δB1B2/2. We plug this estimate into inequality (6) and set t = δ√(B1B2)/2 to obtain

Pr(Z_{R,C} > δB1B2) ≤ e^{−δ² B1 B2 / 8}    (8)
THE UNION BOUND. Our next step is to take a union bound over all valid choices of rows R and columns C:

Pr(∃R, C : Z_{R,C} > δB1B2) ≤ (L1 choose B1) (L2 choose B2) e^{−δ² B1 B2 / 8}
    < exp( B1 (ln L1 + 1 − ln B1 − δ²B2/16) + B2 (ln L2 + 1 − ln B2 − δ²B1/16) ) ≤ 1    (9)

by the assumption that ln L1 + 1 − ln B1 ≤ δ²B2/16 and ln L2 + 1 − ln B2 ≤ δ²B1/16. It follows that there exists a matrix H for which Z_{R,C} ≤ δB1B2 for all R, C. From the definition (5) of Z_{R,C}, we have shown the existence of a matrix H satisfying (3). This concludes the proof.
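The quantity bounded in the proof is easy to measure for a small random matrix. The sketch below (toy parameters of our own choosing, far below the regime of Theorem 3, so the observed discrepancy need not be small) exhaustively computes Z_{R,C}/(B1 B2) over all flat pairs:

```python
import itertools, random

random.seed(0)
L1 = L2 = 8; B1 = B2 = 4; M = 2     # toy parameters (illustration only)
H = [[random.randrange(M) for _ in range(L2)] for _ in range(L1)]

def normalized_discrepancy(R, C):
    # Z_{R,C}/(B1*B2) = (1/(2*B1*B2)) * sum_{alpha, i in R} |sum_{j in C} [H_ij = alpha] - B2/M|
    total = sum(abs(sum(1 for j in C if H[i][j] == alpha) - B2 / M)
                for alpha in range(M) for i in R)
    return total / (2 * B1 * B2)

worst = max(normalized_discrepancy(R, C)
            for R in itertools.combinations(range(L1), B1)
            for C in itertools.combinations(range(L2), B2))
print(worst)   # the smallest delta this particular H achieves in (3)
```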
4 Efficient constructions

This section is devoted to constructions of efficiently computable SSEs. Most standard techniques applied to the construction of regular and strong extractors (e.g. using list-decodable codes, walks on expanders, mergers or even hash functions) seem to fail when the seed is not completely random. As a result, even our 1-bit SSE is only useful when the sources have the same length ℓ1 = ℓ2 = ℓ and their min-entropies add up to more than ℓ. However, the present constructions are both useful and elegant. We propose the construction of more efficient SSEs as a challenging open problem.
4.1 Hadamard matrices and extraction of one bit

A class of 1-bit super-strong extractors which includes the inner product function is now considered, thus providing a strengthening of a result of Chor and Goldreich [7]. Identify [L] ≡ [2^ℓ] ≈ {0,1}^ℓ and let H = {Hxy}_{x,y=1}^L be an L × L Hadamard matrix (i.e. a ±1 matrix with pairwise orthogonal rows and columns). Define

ExtH : {0,1}^ℓ × {0,1}^ℓ → {0,1},   (x, y) ↦ (1 + Hxy)/2    (10)

We shall prove the following two results.

Theorem 4 ExtH as defined above is a (b1, b2, ε)-SSE with log(1/ε) = (b1 + b2 − ℓ)/2 + 1.
Corollary 2 The inner product function on ℓ-bit strings is a (b1, b2, ε)-SSE with ε as above.

Proof: (of Corollary 2) Inner product is of the form ExtH for some Hadamard matrix H (as one can easily show).

Proof: (of Theorem 4) The proof parallels that of the corresponding theorem in [7]. In particular, we also employ Lindsey’s Lemma.

Lemma 3 (Lindsey’s Lemma, cf. [7]) Let G = (Gij)_{i,j=1}^T be a T × T Hadamard matrix, and let R and C be subsets of [T] corresponding to choices of rows and columns of G (respectively). Then |∑_{i∈R} ∑_{j∈C} Gij| ≤ √(|R| |C| T).

For any choice of (q1, …, qL) ∈ {−1, +1}^L, the matrix H̃ = (H̃ij) whose ith row is qi times the ith row of H is Hadamard. Hence Lindsey’s Lemma applies, and for all sets R, C ⊂ [L] the sum ∑_{i∈R} ∑_{j∈C} H̃ij, which is just ∑_{i∈R} qi ∑_{j∈C} Hij, is bounded by √(|R| |C| L). From this fact it is easy to deduce a stronger form of Lemma 3:

∀R, C ⊂ [L]   ∑_{i∈R} | ∑_{j∈C} Hij | ≤ √(|R| |C| L)    (11)

Now let (X, Y) ∈ CG(ℓ, ℓ, b1, b2) be flat random variables, with X uniform on SX, |SX| = 2^{b1}, and Y uniform on SY, |SY| = 2^{b2}. For each y ∈ SY,

‖ExtH(X, y) − U1‖ = (1/2) |Pr(ExtH(X, y) = 1) − 1/2| + (1/2) |Pr(ExtH(X, y) = 0) − 1/2|
    = (1/2) |Pr(ExtH(X, y) = 1) − Pr(ExtH(X, y) = 0)| = (1/2) | ∑_{x∈SX} Hxy / |SX| |    (12)

Averaging over all y and using (11) with R = SX, C = SY, we obtain

‖⟨Y, ExtH(X, Y)⟩ − ⟨Y, U1⟩‖ = (1/|SY|) ∑_{y∈SY} ‖ExtH(X, y) − U1‖
    = (1/(2|SY|)) ∑_{y∈SY} | ∑_{x∈SX} Hxy | / |SX| ≤ (1/2) √( L / (|SX| |SY|) ) = 2^{−(b1+b2−ℓ)/2 − 1}    (13)

By Proposition 1 this finishes the proof.
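A quick numeric sanity check of Theorem 4 (the sampled supports are arbitrary choices of ours; the inner product plays the role of ExtH, as Corollary 2 allows):

```python
import itertools, random

random.seed(1)
ell, b1, b2 = 6, 4, 4
bound = 2 ** (-((b1 + b2 - ell) / 2) - 1)   # Theorem 4: eps = 2^{-(b1+b2-ell)/2 - 1}

def ip(x, y):                    # inner product over Z_2 of ell-bit strings
    return bin(x & y).count("1") % 2

SX = random.sample(range(2 ** ell), 2 ** b1)   # flat X on S_X
SY = random.sample(range(2 ** ell), 2 ** b2)   # flat Y on S_Y

# ||<Y, IP(X,Y)> - <Y, U_1>|| = average over y of |Pr(IP(X,y)=1) - 1/2|
dist = sum(abs(sum(ip(x, y) for x in SX) / len(SX) - 0.5) for y in SY) / len(SY)
print(dist <= bound)   # True for every flat pair, by Theorem 4
```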
4.2 Extracting many bits

We now adapt a construction from [26] based on error-correcting codes to obtain many bits from weak sources of the same length and sufficiently high min-entropy. In what follows, Ecc : {0,1}^m → {0,1}^ℓ is a linear error-correcting code with distance d, {e⃗i : 1 ≤ i ≤ m} is the canonical basis of {0,1}^m as a vector space over Z2, and for (x, y) = ((x1, …, xℓ), (y1, …, yℓ)) ∈ {0,1}^ℓ × {0,1}^ℓ we let v⃗(x, y) ∈ {0,1}^ℓ be the vector whose ith coordinate is xi·yi. The proposed SSE is

Ext : {0,1}^ℓ × {0,1}^ℓ → {0,1}^m,   (x, y) ↦ Ecc(e⃗1)·v⃗(x, y) ◦ ⋯ ◦ Ecc(e⃗m)·v⃗(x, y)    (14)

Note that each bit that Ext outputs corresponds to the inner product of matching segments of the input strings x and y. We show that

Theorem 5 The function Ext constructed above is a (b1, b2, ε)-SSE with log(1/ε) = 1 + (b1 + b2 + d)/2 − (ℓ + m).
There exist efficiently encodable linear codes of codeword length ℓ, dimension m = δ³ℓ and distance d = (1/2 − δ)ℓ, for all fixed 0 < δ < 1/2. Plugging one such code into Theorem 5 yields an efficiently computable (b1, b2, ε)-SSE with ε = ℓ^{−ω(1)} for all min-entropies satisfying (b1 + b2)/2 ≥ (3/4 + δ)ℓ + ω(log ℓ), and the number of extracted bits is m = δ³ℓ. It remains to prove Theorem 5, and for that we use two lemmas (the second being fairly standard).
Lemma 4 (Parity Lemma, [26]) For any t-bit random variable T, ‖T − Ut‖ ≤ ∑_{v⃗∈{0,1}^t\{0⃗}} ‖T·v⃗ − U1‖.

Lemma 5 If Z = Z1 Z2 … Zt is a t-bit random variable and W ⊂ [t], let Z|W denote the concatenation of all Zi with i ∈ W. Then H∞(Z|W) ≥ H∞(Z) − t + |W|.

Proof: (of Theorem 5) We need to show that for any pair (X, Y) ∈ CG(ℓ, ℓ, b1, b2),

‖⟨Y, Ext(X, Y)⟩ − ⟨Y, Um⟩‖ = ∑_{y∈{0,1}^ℓ} Pr(Y = y) ‖Ext(X, y) − Um‖    (15)

is bounded by ε = 2^{ℓ+m−(b1+b2+d)/2−1}. To this end, we apply Lemma 4 to each of the above summands:

‖⟨Y, Ext(X, Y)⟩ − ⟨Y, Um⟩‖ ≤ ∑_{y∈{0,1}^ℓ} Pr(Y = y) ∑_{a⃗∈{0,1}^m\{0⃗}} ‖Ext(X, y)·a⃗ − U1‖
    = ∑_{a⃗∈{0,1}^m\{0⃗}} ‖⟨Y, Ext(X, Y)·a⃗⟩ − ⟨Y, U1⟩‖    (16)

It now suffices to show that for any a⃗ ∈ {0,1}^m\{0⃗},

‖⟨Y, Ext(X, Y)·a⃗⟩ − ⟨Y, U1⟩‖ ≤ ε/2^m = 2^{ℓ−(b1+b2+d)/2−1}    (17)

Fix some non-zero a⃗ = ∑_{i=1}^m ai e⃗i and note that by the linearity of Ecc,

Ext(X, Y)·a⃗ = ∑_{i=1}^m ai Ecc(e⃗i)·v⃗(X, Y) = Ecc(a⃗)·v⃗(X, Y) = ∑_{i∈S} Xi Yi = X|S · Y|S    (18)

where S is the set of all non-zero coordinates of Ecc(a⃗), and X|S and Y|S are defined as in the statement of Lemma 5. Applying that same lemma, we conclude that X|S (respectively Y|S) has min-entropy at least b1 − ℓ + |S| (resp. b2 − ℓ + |S|). It now follows from (18), Corollary 2 and the fact that X|S and Y|S have length |S| that

‖⟨Y, Ext(X, Y)·a⃗⟩ − ⟨Y, U1⟩‖ ≤ 2^{(|S| − b1 − b2 + 2ℓ − 2|S|)/2 − 1} = 2^{ℓ − 1 − (b1+b2+|S|)/2}    (19)

Since |S| = (weight of Ecc(a⃗)) ≥ d by the definition of S, equation (19) proves (17) and finishes the proof.
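To make construction (14) concrete, here is a toy Python instantiation with a [6, 2, 3] linear code of our own choosing (an assumption made purely for illustration; Theorem 5 wants much better codes):

```python
ECC_BASIS = [0b111000, 0b000111]   # Ecc(e1), Ecc(e2): a linear [6,2,3] code

def parity(z):
    """Parity of the bits of z, i.e. an inner product over Z_2."""
    return bin(z).count("1") % 2

def ext(x, y):
    """Construction (14): the i-th output bit is Ecc(e_i) . v(x, y),
    where v(x, y) is the coordinatewise product of the bit strings x, y."""
    v = x & y                       # v(x,y)_i = x_i * y_i
    return [parity(c & v) for c in ECC_BASIS]

print(ext(0b101101, 0b111011))      # -> [0, 1]
```

With this particular code, each output bit is the inner product of matching 3-bit segments of x and y, exactly as noted after (14).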
4.3 A number-theoretic construction

A third efficient SSE construction is now presented. Its min-entropy requirement is essentially b1 + b2 > ℓ, which roughly matches the Hadamard matrix construction for 1-bit extraction. However, this SSE has the drawback of requiring a pre-processing stage for efficiency to be achieved. The construction dates back to [7], where it was shown that EXT(X, Y) is close to random. We claim that the same is true even if Y is given to the adversary, thus establishing that this construction satisfies our definition of an SSE. In what follows, p > 2 is a prime and we take ℓ = ⌊log p⌋, so that we can assume {0,1}^ℓ ⊆ Z_p. Let k be a divisor of p − 1; our SSE will output elements of Z_k (the definition of an SSE easily generalizes to this case). Finally, let g be a generator of the multiplicative group Z_p^* and denote by log_g the base-g discrete logarithm in Z_p^*. We define

EXT : {0,1}^ℓ × {0,1}^ℓ → Z_k
      (x, y) ↦ log_g(x − y) mod k   (20)

We prove in Appendix B that approximately m = log k ≈ (b1 + b2 − ℓ)/2 − log(1/ε) bits can be extracted by this construction.

Theorem 6 The function EXT defined above is a (b1, b2, ε)-SSE with log(1/ε) = (b1 + b2 − ℓ)/2 + 1 − log k.
We refer to [7] for details on the efficient implementation of E XT and the pre-computation of p, k and g.
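As an illustration, here is a minimal Python sketch of this construction for a toy prime. The choices p = 251 and k = 5, and the brute-force generator search and full discrete-log table, are ours; a real implementation would use the efficient pre-computation of p, k and g described in [7].

```python
# Toy implementation of the number-theoretic SSE EXT(x, y) = log_g(x - y) mod k.
# The prime p, the divisor k of p - 1, and the brute-force pre-processing below
# are illustrative choices, not the efficient procedure of [7].

def find_generator(p):
    """Find a generator of the multiplicative group Z_p^* by trial."""
    divisors = [d for d in range(1, p - 1) if (p - 1) % d == 0]
    for g in range(2, p):
        # g generates Z_p^* iff g^d != 1 for every proper divisor d of p - 1
        if all(pow(g, d, p) != 1 for d in divisors):
            return g
    raise ValueError("no generator found")

def precompute_dlog_table(p, g):
    """Pre-processing stage: tabulate log_g(z) for all z in Z_p^*."""
    table, acc = {}, 1
    for e in range(p - 1):
        table[acc] = e
        acc = (acc * g) % p
    return table

def make_ext(p, k):
    assert (p - 1) % k == 0
    g = find_generator(p)
    dlog = precompute_dlog_table(p, g)
    def ext(x, y):
        z = (x - y) % p
        if z == 0:            # x = y: the log is undefined; output 0 by convention
            return 0
        return dlog[z] % k    # EXT(x, y) = log_g(x - y) mod k
    return ext

ext = make_ext(p=251, k=5)    # 251 is prime and 5 divides 250
print(ext(17, 3))             # an element of Z_5
```

The pre-processing cost here is Θ(p); the point of the construction is that this table can be built once and then evaluated in constant time per extraction.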
5 Simple authentication with weak sources

5.1 Motivation

Assume that two parties, Alice and Bob, share the output of an ℓ1-bit weak random source X with min-entropy H∞(X) ≥ b1. It is then clear that a (b1, b2, ε)-SSE EXT : {0,1}^{ℓ1} × {0,1}^{ℓ2} → {0,1}^m can be used by both Alice and Bob to extract an almost perfectly random secret key S = EXT(X, Y) from the shared secret information X and a weakly random public string Y with min-entropy H∞(Y) ≥ b2. This shows that super-strong extractors trivially solve the problem of privacy amplification over passive public channels when weak random sources are used. In this section we provide a simple protocol PA for privacy amplification over an adversarially controlled channel, when only weak sources of randomness are available. Following [13], we show that weak sources can be used in conjunction with the simple "ax + b" message authentication code (MAC) to transmit the non-secret input Y over the adversarial channel. For completeness, we prove (in Appendix C) appropriate versions of the results of [13].

Let us specify the idealized world we shall deal with when discussing the protocol. We assume that Bob can either be close to or far from Alice. Each of them has a weak source (specified below). If Bob is close, they can share secret information at will, but their sources must be assumed to be arbitrarily correlated (which is reasonable for purely physical and adversarial sources). On the other hand, if Bob is far, the sources can be assumed to be independent, but only active adversarial communication channels are available. Finally, we assume that Bob's source outputs an ℓ-bit string Y with min-entropy H∞(Y) ≥ b2, whereas Alice's source outputs three ℓ-bit strings A, B, X, which are assumed to form a b1-block source [7]; that is, for any a, b ∈ {0,1}^ℓ, the variables A, B|_{A=a} and X|_{A=a,B=b} all have min-entropy at least b1.

Our scenario differs from that of previous work on privacy amplification (e.g. [13]). In most of those works, the secret key X satisfies a min-entropy condition (that is, the adversary does not know it completely), but Alice and Bob are capable of sampling perfectly random bits. By contrast, under our constraints, no perfect randomness is available to either Alice or Bob, and geographical distance between the sources is necessary for independence, which is a reasonable assumption for physical and adversarial sources. Whereas in [13] (for instance) the parties could conceivably have agreed on a perfectly random secret key when they first met, this is impossible in our case. Therefore, the need for privacy amplification is arguably better motivated in the present work. We also note that, although our assumption on Alice's source is stronger than that on Bob's source, it is still much weaker than the capability to generate truly random bits.
5.2 The protocol

Alice and Bob's aim is to agree on a secret key S that is very close to being random from Eve's point of view. This is achieved by the protocol PA, which we now describe (see also Table 1 in Appendix D), in which we identify {0,1}^ℓ with the finite field F_{2^ℓ} for the purpose of arithmetic operations, and EXT : {0,1}^ℓ × {0,1}^ℓ → {0,1}^m is a function (we will later choose it to be a suitable SSE). Briefly, Alice and Bob share (A, B, X) while Bob is close. Then Bob moves far away, samples Y, and sends (Y, Z = AY + B) to Alice. Eve then intercepts (Y, Z), substitutes for it a pair (Ỹ, Z̃) of her choosing, and forwards the latter to Alice. Alice checks whether AỸ + B = Z̃ and, if this is satisfied, she computes S̃ = EXT(X, Ỹ), rejecting otherwise. In the meantime, Bob has computed S = EXT(X, Y). As we shall see (Theorem 7), with high probability either S = S̃ and Alice and Bob share a secret key, or else Alice has rejected. Note that we may always assume that Ỹ and Z̃ as in Table 1 have length ℓ each, by stipulating that Alice rejects otherwise. We demonstrate in Appendix C that protocol PA is indeed secure as long as b1 = ℓ/2 + ω(log ℓ) and a (b1, b2, ℓ^{−ω(1)})-SSE exists. For instance, the number-theoretic SSE (Theorem 6) permits agreement on a key of length m ≈ (b1 + b2 − ℓ)/2 − ω(log ℓ).

Theorem 7 If EXT is a (b1, b2, ε)-SSE, the protocol PA has the following property. If Eve is passive, Alice never rejects, S̃ = S, and khY, Si − hY, U_m ik ≤ ε. If Eve is active, then with probability at least 1 − 2^{ℓ−2b1}, either Alice rejects or S = S̃ and khY, Si − hY, U_m ik ≤ ε.
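To make the flow of PA concrete, here is a minimal Python sketch of one honest run over a toy field. All concrete choices are ours: the field F_{2^8} with the AES reduction polynomial, and a placeholder extractor (XOR is NOT a secure SSE and merely stands in for EXT so that the protocol runs end to end).

```python
import secrets

# F_{2^8} arithmetic: addition is XOR; multiplication is a carry-less product
# reduced modulo the irreducible polynomial x^8 + x^4 + x^3 + x + 1 (0x11B).
# ell = 8 is a toy choice; the protocol itself is field-size agnostic.
def gf_mul(a, b, poly=0x11B, ell=8):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> ell:
            a ^= poly
    return r

def ext(x, y):
    # Placeholder for a (b1, b2, eps)-SSE; XOR is NOT a secure extractor and
    # stands in only so the protocol flow below runs end to end.
    return x ^ y

# PART 1 - Bob is close: Alice samples (A, B, X) and shares it with Bob.
A, B, X = (secrets.randbelow(256) for _ in range(3))

# PART 2 - Bob is far: Bob samples Y and authenticates it with the "ax+b" MAC.
Y = secrets.randbelow(256)
Z = gf_mul(A, Y) ^ B          # MAC tag: Z = A*Y + B in F_{2^8}
S_bob = ext(X, Y)

# Eve may substitute (Y, Z) -> (Y~, Z~); here she forwards them unchanged.
Y_t, Z_t = Y, Z

# Alice verifies the tag and extracts, rejecting on a MAC failure.
if gf_mul(A, Y_t) ^ B == Z_t:
    S_alice = ext(X, Y_t)
    assert S_alice == S_bob   # passive Eve: keys agree, Alice never rejects
else:
    S_alice = None            # reject
```

An active Eve who changes Y without knowing (A, B) must guess a consistent tag, which Lemma 6 below shows succeeds with probability at most 2^{ℓ−2b1}.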
6 Acknowledgments

We thank Yan Zhong Ding, Amit Sahai, Joel Spencer and Salil Vadhan for useful discussions.
References

[1] M. Ajtai, L. Babai, P. Hajnal, J. Komlós, P. Pudlák. Two lower bounds for branching programs. In Proc. of the 18th Annual ACM Symposium on Theory of Computing, pp. 30–38, 1986.
[2] N. Alon and J. Spencer. The Probabilistic Method, 2nd ed. Wiley Interscience, New York, 2000.
[3] A. Andreev, A. Clementi, J. Rolim, L. Trevisan. Dispersers, deterministic amplification, and weak random sources. SIAM J. on Computing, 28(6):2103–2116, 1999.
[4] C. H. Bennett, G. Brassard, C. Crépeau, U. Maurer. Generalized Privacy Amplification. IEEE Transactions on Information Theory, 41(6):1915–1923, 1995.
[5] C. H. Bennett, G. Brassard, and J.-M. Robert. How to reduce your enemy's information. In Proc. of CRYPTO '85, Lecture Notes in Computer Science, vol. 218, pp. 468–476, Springer-Verlag, 1986.
[6] R. Canetti, Y. Dodis, S. Halevi, E. Kushilevitz and A. Sahai. Exposure-Resilient Functions and All-Or-Nothing Transforms. In Proc. of EuroCrypt, pp. 453–469, 2000.
[7] B. Chor, O. Goldreich. Unbiased bits from sources of weak randomness and probabilistic communication complexity. SIAM J. Comput., 17(2):230–261, 1988.
[8] A. Cohen, A. Wigderson. Dispersers, deterministic amplification, and weak random sources. In Proc. of FOCS, pp. 14–19, 1989.
[9] Y. Dodis, J. Spencer. On the (Non-)Universality of the One-Time Pad. In Proc. of FOCS, 2002.
[10] S. Janson, T. Łuczak and A. Ruciński. Random Graphs. Wiley Interscience, New York, 2000.
[11] C. Lu, O. Reingold, S. Vadhan and A. Wigderson. Extractors: Optimal Up to Constant Factors. In Proc. of STOC, 2003.
[12] U. Maurer and S. Wolf. Strengthening security of information-theoretic secret-key agreement. In Proc. of EUROCRYPT 2000, Lecture Notes in Computer Science, Springer-Verlag, 2000.
[13] U. Maurer and S. Wolf. Privacy Amplification Secure Against Active Adversaries. In Proc. of CRYPTO, Lecture Notes in Computer Science, vol. 1294, pp. 307–321, Springer-Verlag, 1997.
[14] J. McInnes, B. Pinkas. On the Impossibility of Private Key Cryptography with Weakly Random Keys. In Proc. of CRYPTO, pp. 421–435, 1990.
[15] N. Nisan, A. Ta-Shma. Extracting Randomness: a survey and new constructions. JCSS, 58(1):148–173, 1999.
[16] N. Nisan, D. Zuckerman. Randomness is Linear in Space. JCSS, 52(1):43–52, 1996.
[17] L. Rónyai, L. Babai, M. Ganapathy. On the number of zero-patterns in a sequence of polynomials. Journal of the AMS, 2002.
[18] M. Sántha, U. Vazirani. Generating Quasi-Random Sequences from Semi-Random Sources. Journal of Computer and System Sciences, 33(1):75–87, 1986.
[19] R. Shaltiel. Recent developments in Explicit Constructions of Extractors. Bulletin of the EATCS, 77:67–95, 2002.
[20] R. Shaltiel and C. Umans. Simple extractors for all min-entropies and a new pseudo-random generator. In Proc. of FOCS 2001, pp. 648–657, IEEE Computer Society, 2001.
[21] M. Sipser. Expanders, Randomness, or Time versus Space. Journal of Computer and System Sciences, 36:379–383, 1988.
[22] A. Ta-Shma and J. Radhakrishnan. Bounds for Dispersers, Extractors, and Depth-Two Superconcentrators. SIAM Journal on Discrete Mathematics, 13(1):2–24, 2000.
[23] L. Trevisan. Construction of Extractors Using Pseudo-Random Generators. In Proc. of STOC, pp. 141–148, 1999.
[24] L. Trevisan, S. Vadhan. Extracting Randomness from Samplable Distributions. In Proc. of FOCS, 2000.
[25] U. Vazirani. Randomness, Adversaries and Computation. PhD Thesis, University of California, Berkeley, 1986.
[26] U. Vazirani. Strong Communication Complexity or Generating Quasi-Random Sequences from Two Communicating Semi-Random Sources. Combinatorica, 7(4):375–392, 1987.
[27] U. Vazirani, V. Vazirani. Random polynomial time is equal to slightly-random polynomial time. In Proc. of 26th FOCS, pp. 417–428, 1985.
[28] A. Wigderson. Open problems. Notes from DIMACS Workshop on Pseudorandomness and Explicit Combinatorial Constructions, 1999.
[29] D. Zuckerman. Simulating BPP Using a General Weak Random Source. Algorithmica, 16(4/5):367–391, 1996.
A Lower bounds - proof of Theorem 2

In this appendix we prove Theorem 2.

Proof: (of Theorem 2) We use the following lower bounds of Ta-Shma and Radhakrishnan [22] for (regular) extractors.

Theorem 8 [22] There exists a constant c such that the following holds. Let EXT′ : {0,1}^n × {0,1}^d → {0,1}^m be a regular (k, ε)-extractor with d ≤ m − 2 and k ≤ n − c. Then d ≥ log(n − k) + 2 log(1/ε) − O(1) and d + k − m ≥ 2 log(1/ε) − O(1).

Let Y be uniform on {0,1}^{b2} ◦ 0^{ℓ2−b2} and note that it has min-entropy b2. Now set

EXT1 : {0,1}^{ℓ1} × {0,1}^{b2} → {0,1}^{m+b2}
              (x, y) ↦ y ◦ EXT(x, y ◦ 0^{ℓ2−b2})   (21)

For all ℓ1-bit random variables X with min-entropy H∞(X) ≥ b1,

kEXT1(X, U_{b2}) − U_{m+b2}k = khY, EXT(X, Y)i − hY, U_m ik ≤ ε   (22)

since (X, Y) ∈ CG(ℓ1, ℓ2, b1, b2) and EXT is an SSE with the adequate parameters. It follows that EXT1 is a (b1, ε)-extractor. By Theorem 8 we conclude

b2 ≥ log(ℓ1 − b1) + 2 log(1/ε) − O(1)  and  b2 + b1 − (m + b2) = b1 − m ≥ 2 log(1/ε) − O(1)   (23)
Remark 1 In an attempt to prove that b1 ≥ log(ℓ2 − b2) + 2 log(1/ε) − O(1) is necessary, one might be tempted to reverse the process in the above proof and build a regular extractor EXT2 out of EXT for which the random variable X that is uniform on {0,1}^{b1} ◦ 0^{ℓ1−b1} is the seed. The reason why this does not work is that the output length m in this case is smaller than the effective seed length b1, and Theorem 8 does not apply to this case.
B Proof of Theorem 6

Proof: (of Theorem 6) We first claim that the following inequality holds for all subsets A, B, C ⊆ Z_p: setting Φ_C ≡ max_{1≤j≤p−1} | Σ_{c∈C} e^{2πicj/p} |,

Σ_{a∈A} | #{b ∈ B : a − b ∈ C} − |B||C|/p | ≤ Φ_C √(|A||B|)   (24)

Inequality (24) is proven subsequently. Assuming it for the moment, choose α ∈ Z_k and set

C ≡ {c ∈ Z_p^* : log_g(c) = α mod k}

Following [7, Section 3.2], we note that |C| = (p − 1)/k and Φ_C < √p. Hence for all A, B ⊆ Z_p,

Σ_{a∈A} | #{b ∈ B : log_g(a − b) = α} − |B|/k | ≤ √(p|A||B|)   (25)

We deduce from (25) that for any choice of flat random variables (X, Y) ∈ CG(ℓ1, ℓ2, b1, b2) with respective supports S_X, S_Y of sizes 2^{b1}, 2^{b2},

khY, EXT(X, Y)i − hY, U ik = (1/(2·2^{b1+b2})) Σ_{α∈Z_k} Σ_{y∈S_Y} | #{x ∈ S_X : log_g(x − y) = α} − 2^{b1}/k | ≤ (k/2) √(p/2^{b1+b2}) = ε   (26)

and this implies the theorem by Proposition 1.
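Inequality (25) can be checked numerically for small parameters. The sketch below uses our own toy choices p = 13, g = 2 and k = 4 (a divisor of p − 1 = 12), tabulates the discrete logarithms, and verifies the bound for several subsets A, B and every α ∈ Z_k.

```python
# Numerical sanity check of inequality (25) for a toy prime:
# p = 13, generator g = 2, and k = 4 (a divisor of p - 1 = 12).
# These small parameters are illustrative choices, not from the paper.
p, g, k = 13, 2, 4

# Pre-compute the discrete-log table for Z_p^*.
dlog, acc = {}, 1
for e in range(p - 1):
    dlog[acc] = e
    acc = (acc * g) % p

def lhs(A, B, alpha):
    """Sum over a in A of |#{b in B : log_g(a - b) = alpha mod k} - |B|/k|."""
    total = 0.0
    for a in A:
        cnt = sum(1 for b in B
                  if (a - b) % p != 0 and dlog[(a - b) % p] % k == alpha)
        total += abs(cnt - len(B) / k)
    return total

def bound(A, B):
    return (p * len(A) * len(B)) ** 0.5   # sqrt(p |A| |B|)

# Check every alpha against a few subsets of Z_p.
subsets = [set(range(p)), {1, 2, 3, 5, 8}, {0, 4, 7, 9, 10, 12}]
ok = all(lhs(A, B, alpha) <= bound(A, B)
         for A in subsets for B in subsets for alpha in range(k))
print(ok)
```

The slack is substantial at this scale: each residue class of log_g mod k contains only (p − 1)/k = 3 elements, so each summand is small compared to √(p|A||B|).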
Proof: (of inequality (24)) This proof uses the method of trigonometric sums (a.k.a. Fourier analysis on Z_p) and follows closely that of Lemma 6 in [1]. For simplicity, we prove the equivalent inequality

Σ_{a∈A} | #{b ∈ B : ∃c ∈ C, a + b = c} − |B||C|/p | ≤ Φ_C √(|A||B|)   (27)

Let ω ≡ e^{2πi/p} and define

ψ_T(j) ≡ Σ_{t∈T} ω^{tj}   (j ∈ Z_p, T ⊆ Z_p)

For any a ∈ A, the number of b ∈ B satisfying a + b = c for some c ∈ C is precisely

(1/p) Σ_{j=0}^{p−1} ω^{ja} ψ_B(j) ψ_C(−j) = |B||C|/p + (1/p) Σ_{j=1}^{p−1} ω^{ja} ψ_B(j) ψ_C(−j)   (28)

as a simple calculation shows. Hence for any choice of q_a ∈ {±1}, a ∈ A,

q_a ( #{b ∈ B : ∃c ∈ C, a + b = c} − |B||C|/p ) = (q_a/p) Σ_{j=1}^{p−1} ω^{ja} ψ_B(j) ψ_C(−j)

We can now sum over a ∈ A; letting ψ̃_A(j) ≡ Σ_{a∈A} q_a ω^{aj},

Σ_{a∈A} q_a ( #{b ∈ B : ∃c ∈ C, a + b = c} − |B||C|/p ) = (1/p) Σ_{j=1}^{p−1} ψ̃_A(j) ψ_B(j) ψ_C(−j)   (29)

By an appropriate choice of the q_a's, it is possible to conclude that in fact

Σ_{a∈A} | #{b ∈ B : ∃c ∈ C, a + b = c} − |B||C|/p | = (1/p) Σ_{j=1}^{p−1} ψ̃_A(j) ψ_B(j) ψ_C(−j)   (30)

Applying Cauchy-Schwarz to the RHS and noting that

Σ_{j=1}^{p−1} |ψ̃_A(j)|² ≤ p|A|  and  Σ_{j=1}^{p−1} |ψ_B(j) ψ_C(−j)|² ≤ Φ_C² p|B|

where Φ_C ≡ sup_{1≤j≤p−1} |ψ_C(j)|, we can finally bound

Σ_{a∈A} | #{b ∈ B : ∃c ∈ C, a + b = c} − |B||C|/p | ≤ Φ_C √(|A||B|)   (31)

This finishes the proof.
C Proof of Theorem 7
We first prove a lemma on the "ax + b" MAC that is similar to Theorem 6 in [13].

Lemma 6 Let g : F_{2^ℓ} × F_{2^ℓ} → F_{2^ℓ} × F_{2^ℓ} be a function, w ∈ F_{2^ℓ}, and hC, Di be two ℓ-bit strings with joint min-entropy H∞(hC, Di) ≥ 2b1. Define (T, V) ≡ g(w, Cw + D). Then

Pr[ (T, V) ≠ (w, Cw + D) and CT + D = V ] ≤ 2^{ℓ−2b1}

Proof: (of Lemma 6) Fix C = c, D = d and let s ≡ cw + d. The pair (T, V) is then completely determined by the value of s. Now assume that (c, d) is bad, that is, (T, V) = (t(s), v(s)) ≠ (w, s) but c t(s) + d = v(s). The pair (c, d) must then satisfy the following system of equations

( w      1 ) ( c )   ( s    )
( t(s)   1 ) ( d ) = ( v(s) )   (32)

for s ∈ F_{2^ℓ}. It cannot be true that t(s) = w, for that would imply that v(s) = c t(s) + d = cw + d = s and hence (t(s), v(s)) = (w, s), so it must hold that t(s) ≠ w. This implies that the matrix in (32) is non-singular and that the corresponding system of equations has exactly one solution. We conclude that for each possible value of s = cw + d there can be at most one bad pair (c, d), and that this pair is completely determined by (32). Since there are 2^ℓ possible values for s and each pair (c, d) has probability at most 2^{−2b1}, the probability of the sampled value of (C, D) being bad is at most 2^{ℓ−2b1}. This is precisely the desired result.

Proof: (of Theorem 7) It suffices to treat the case of a deterministic active adversary. That is, Eve's strategy for producing hỸ, Z̃i is to use a deterministic function of Y and Z ≡ AY + B. Lemma 6 implies that for any value y of Y, the probability that Z̃ = AỸ + B and Ỹ ≠ y is at most 2^{ℓ−2b1}. Assuming that this event does not happen, Alice does not reject and S = S̃. Moreover,

khY, AY + B, Si − hY, AY + B, U_m ik ≤ max_{a,b∈{0,1}^ℓ} khY, EXT(X|_{A=a,B=b}, Y)i − hY, U_m ik ≤ ε   (33)

by the SSE property and the block-source condition on A, B, X.
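The key counting step in the proof of Lemma 6 is easy to probe by brute force. The sketch below is ours and works over the toy field F_{2^3}: for a fixed message w and each possible substitution (t, v) with t ≠ w, it groups the keys (c, d) that would accept the forgery by the tag s = cw + d they produce, confirming that there is at most one bad key per tag value, exactly as the proof argues.

```python
# Brute-force check of the key idea in Lemma 6, over the toy field F_{2^3}
# (multiplication modulo the irreducible polynomial x^3 + x + 1 = 0b1011).

def gf8_mul(a, b, poly=0b1011, ell=3):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> ell:
            a ^= poly
    return r

def bad_keys(w, t, v):
    """Keys (c, d) with t != w for which the forged pair (t, v) verifies."""
    return [(c, d) for c in range(8) for d in range(8)
            if t != w and (gf8_mul(c, t) ^ d) == v]

w = 5
for t in range(8):
    if t == w:
        continue
    for v in range(8):
        # Group the bad keys by the tag s = c*w + d they produce: at most
        # one bad key per tag value, exactly as the proof of Lemma 6 argues.
        by_tag = {}
        for (c, d) in bad_keys(w, t, v):
            s = gf8_mul(c, w) ^ d
            by_tag.setdefault(s, []).append((c, d))
        assert all(len(keys) == 1 for keys in by_tag.values())
print("at most one bad key per tag value: verified")
```

Since at most one of the 2^{2ℓ} equally plausible keys is bad per tag, and there are 2^ℓ tags, the forgery probability is at most 2^ℓ · 2^{−2b1}, matching the lemma.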
D The Protocol PA
PART 1 - Bob is close

Alice samples (A, B, X)        --(secret info) (A, B, X)-->      Bob stores (A, B, X)

PART 2 - Bob is far

                                   Eve (channel)                 Bob samples Y
                                                                 Z ≡ AY + B
Alice receives (Ỹ, Z̃)          <--(Y, Z) → (Ỹ, Z̃)--           sends (Y, Z)
if AỸ + B ≠ Z̃: reject                                           S ≡ EXT(X, Y)
else: S̃ ≡ EXT(X, Ỹ), accept                                     accept

Table 1: Description of protocol PA for privacy amplification. All random variables X, Y, A, B, Ỹ and Z̃ take values in {0,1}^ℓ ≅ F_{2^ℓ}.