Statistical Security Conditions for Two-Party Secure Function Evaluation

Claude Crépeau¹ and Jürg Wullschleger²

¹ McGill University, Montréal, QC, Canada
[email protected]
² University of Bristol, Bristol, United Kingdom
[email protected]
Abstract. To simplify proofs in information-theoretic security, the standard security definition of two-party secure function evaluation based on the real/ideal model paradigm is often replaced by an information-theoretic security definition. At EUROCRYPT 2006, we showed that most of these definitions had some weaknesses, and presented new information-theoretic conditions that were equivalent to a simulation-based definition in the real/ideal model. However, there we only considered the perfect case, where the protocol is not allowed to make any error, which has only limited applications. We generalize these results to the statistical case, where the protocol is allowed to make errors with a small probability. Our results are based on a new measure of information that we call the statistical information, which may be of independent interest.

Keywords: secure function evaluation, information-theoretic security, security definition, oblivious transfer.
1 Introduction
Secure function evaluation [1] allows two (or more) parties to jointly compute a function in a secure way, which means that no player may get additional information about the other players’ inputs or outputs, other than what may be deduced from their own input and output. A computationally secure solution to this problem has been given in [2]. Schemes ensuring unconditional security were subsequently provided in [3] and independently in [4]. Oblivious transfer [5,6,7] is a simple primitive of central interest in secure function evaluation. It allows a sender to send one of n binary strings of length k to a receiver. The primitive allows the receiver to receive the string of his choice while concealing this choice from a (possibly dishonest) sender. On the other hand, a dishonest receiver cannot obtain information about more than one of the strings, including partial joint information on two or more strings. It has since been proved that oblivious transfer is in fact sufficient by itself to securely compute any function [8,9]. More completeness results followed in [10,11,12,13].
1.1 Security Definitions
Formal security definitions for secure function evaluation have been proposed in [14] and [15]. Both definitions were inspired by the simulation paradigm used in [16] to define zero-knowledge proofs of knowledge. These definitions require that for any adversary, there exists a simulated adversary in an ideal setting (which is secure by definition) that achieves the same. That protocols which satisfy these definitions are sequentially composable has been proved in [17]. See also [18]. Later, a stronger notion of security, called universal composability, has been defined in [19] and independently in [20]. It guarantees that protocols are securely composable in any way. Even though simulation-based security definitions are widely accepted as being the right definition of security today, ad-hoc definitions are still widely used due to their simplicity. Unfortunately, as we showed in [21], many of these definitions proposed for various specific scenarios have turned out to be deficient. We proposed in [21] simple information-theoretic conditions for the security of function evaluation, and proved that they are equivalent to the standard definition in the real/ideal model. However, these conditions could only be applied in the perfect case, when the protocol does not have any failure probability and does not leak any information, and therefore had only a very limited range of applications. For the special case of randomized oblivious transfer, these conditions have been generalized in [22] to the statistical case, where the protocol is allowed to make errors with a small probability.

1.2 Information Measures
The Shannon mutual information has been introduced in [23], and is one of the most important tools in information theory, as a measure of how many bits of information one random variable carries about another. The mutual information tells us, for example, how many bits can be transmitted over a noisy channel. In information-theoretic cryptography, the mutual information has also been used in security definitions, to express that an adversary obtains almost no information about some secret, i.e., that two random variables are almost independent. But since in cryptography we are not interested in how many bits the adversary gets, but in the probability that he gets any information at all, the mutual information is not a good measure for this task.

1.3 Contribution
First, we propose a new measure of information that we call the statistical information, which is better suited to express security conditions than the mutual information. The difference between the statistical and the mutual information is the distance measure they are based on: while the mutual information is based on the relative entropy, the statistical information is based on the statistical distance.
Then we generalize the results from [21] and [22]. We present necessary and sufficient information-theoretic conditions for any two-party secure function evaluation in the statistical case, and apply them to oblivious transfer. The statistical information plays a central role in stating these conditions.
1.4 Related Work
Recently, Fehr and Schaffner showed in [24] that similar results also hold in the quantum setting. They presented security conditions for quantum protocols where the honest players have classical input and output, and showed that any quantum protocol that satisfies these conditions can be used as a sub-protocol in a classical protocol.
1.5 Preliminaries
For a random variable X, we denote its distribution by P_X and its domain by 𝒳. P_{Y|X} = P_{XY}/P_X denotes a conditional probability distribution, which models a probabilistic function that takes x as input and outputs y, distributed according to P_{Y|X=x}.

Definition 1. The statistical distance between two distributions P_X and P_{X'} over 𝒳 is defined as

  δ(P_X, P_{X'}) = (1/2) Σ_{x ∈ 𝒳} |P_X(x) − P_{X'}(x)| .
If δ(P_X, P_{X'}) ≤ ε, we may also write P_X ≡_ε P_{X'} or X ≡_ε X'. We will need the following basic properties of δ.

Lemma 1 (Triangle Inequality). For any distributions P_X, P_{X'} and P_{X''}, we have δ(P_X, P_{X''}) ≤ δ(P_X, P_{X'}) + δ(P_{X'}, P_{X''}).

Lemma 2 (Data Processing). For any distributions P_{XY} and P_{X'Y'}, we have δ(P_X, P_{X'}) ≤ δ(P_{XY}, P_{X'Y'}).

Lemma 3. For any distributions P_X and P_{X'}, and any conditional distribution P_{Y|X}, we have δ(P_X P_{Y|X}, P_{X'} P_{Y|X}) = δ(P_X, P_{X'}).

Lemma 4. For any distributions P_X and P_Y, we have δ(P_X, P_Y) ≤ ε if and only if there exist events E_X and E_Y with Pr[E_X] = Pr[E_Y] = 1 − ε and P_{X|E_X} = P_{Y|E_Y}.
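As a quick illustration, the statistical distance of Definition 1 is straightforward to compute for finite distributions. The following sketch (our own illustration, not from the paper) represents a probability mass function as a Python dict mapping outcomes to probabilities:

```python
def statistical_distance(p, q):
    """delta(P, Q) = (1/2) * sum_x |P(x) - Q(x)|, over the union of supports."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# Two distributions over {0, 1}: delta = (1/2)(|0.5 - 0.8| + |0.5 - 0.2|) = 0.3.
p = {0: 0.5, 1: 0.5}
q = {0: 0.8, 1: 0.2}
print(statistical_distance(p, q))  # ~0.3
```

Lemmas 1 and 2 can then be checked numerically on small examples, which is a convenient sanity test when implementing the conditions of Section 3.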
2 Statistical Information
In this section, we introduce the statistical information I_S. While the mutual information uses the relative entropy as the underlying distance measure, we will use the statistical distance. Its value tells us how close the distribution of three random variables X, Y and Z is to a Markov chain.

Definition 2. The statistical information of X and Y given Z is defined as

  I_S(X; Y | Z) := δ(P_{XYZ}, P_Z P_{X|Z} P_{Y|Z}) .

Obviously, this measure is non-negative and symmetric in X and Y. We will now show more properties of I_S, which are related to similar properties of the mutual information.

Lemma 5 (Chain Rule). For all P_{WXYZ}, we have

  I_S(WX; Y | Z) ≤ I_S(W; Y | Z) + I_S(X; Y | WZ) .

Proof. We have I_S(X; Y | WZ) = δ(P_{WXYZ}, P_{WYZ} P_{X|WZ}). From Lemma 3 follows that

  δ(P_{WYZ} P_{X|WZ}, P_Z P_{W|Z} P_{Y|Z} P_{X|WZ}) = δ(P_{WYZ}, P_Z P_{W|Z} P_{Y|Z}) = I_S(W; Y | Z) .

Using Lemma 1 and P_{WX|Z} = P_{W|Z} P_{X|WZ}, we get

  δ(P_{WXYZ}, P_Z P_{WX|Z} P_{Y|Z}) ≤ I_S(W; Y | Z) + I_S(X; Y | WZ) .  ⊓⊔

Lemma 6 (Monotonicity). For all P_{WXYZ}, we have I_S(W; Y | Z) ≤ I_S(WX; Y | Z).

Proof. Using Lemma 2, we get

  I_S(WX; Y | Z) = δ(P_{WXYZ}, P_Z P_{W|Z} P_{X|WZ} P_{Y|Z}) ≥ δ(P_{WYZ}, P_Z P_{W|Z} P_{Y|Z}) = I_S(W; Y | Z) .  ⊓⊔

Note that there exist P_{XYZ} and Q_{ZX} Q_{Y|Z} where δ(P_{XYZ}, Q_{ZX} Q_{Y|Z}) < δ(P_{XYZ}, P_Z P_{X|Z} P_{Y|Z}), so P_Z P_{X|Z} P_{Y|Z} is not always the closest Markov chain to P_{XYZ}. Luckily, as the following two lemmas show, P_Z P_{X|Z} P_{Y|Z} is only by a factor of 4 away from the optimal Markov chain, which is sufficient for our applications.³

³ An alternative definition for I_S would be to take the distance to the closest Markov chain. However, we think that this would make the definition much more complicated, at almost no benefit.
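Definition 2 can be evaluated directly on a finite joint distribution: build the marginals P_Z, P_{XZ} and P_{YZ}, and compare P_{XYZ} against the Markov-chain distribution P_Z P_{X|Z} P_{Y|Z}. The following sketch (our own, with dict-based pmfs keyed by (x, y, z) tuples) does exactly that:

```python
from itertools import product

def marginal(pxyz, keep):
    """Marginalize a joint pmf {(x, y, z): prob} onto the given index tuple."""
    m = {}
    for key, pr in pxyz.items():
        sub = tuple(key[i] for i in keep)
        m[sub] = m.get(sub, 0.0) + pr
    return m

def statistical_information(pxyz):
    """I_S(X;Y|Z) = delta(P_XYZ, P_Z P_{X|Z} P_{Y|Z})."""
    pz = marginal(pxyz, (2,))
    pxz = marginal(pxyz, (0, 2))
    pyz = marginal(pxyz, (1, 2))
    xs = {k[0] for k in pxz}
    ys = {k[0] for k in pyz}
    total = 0.0
    for x, y in product(xs, ys):
        for (z,), pzv in pz.items():
            # P_Z(z) * P_{X|Z}(x|z) * P_{Y|Z}(y|z) = P_XZ(x,z) * P_YZ(y,z) / P_Z(z)
            markov = pxz.get((x, z), 0.0) * pyz.get((y, z), 0.0) / pzv
            total += abs(pxyz.get((x, y, z), 0.0) - markov)
    return 0.5 * total

# X a uniform bit, Y = X, Z constant: far from any such Markov chain.
p = {(0, 0, 0): 0.5, (1, 1, 0): 0.5}
print(statistical_information(p))  # 0.5

# If X and Y are independent given Z, the statistical information vanishes.
q = {(x, y, 0): 0.25 for x in (0, 1) for y in (0, 1)}
print(statistical_information(q))  # 0.0
```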
Lemma 7. For all probability distributions P_{XYZ}, we have

  I_S(X; Y | Z) ≤ 2 · min_{Q_{Y|Z}} δ(P_{XYZ}, P_{XZ} Q_{Y|Z}) .

Proof. Let Q_{Y|Z} be the conditional probability distribution that minimizes the expression, and let ε := δ(P_{XYZ}, P_{XZ} Q_{Y|Z}). We have P_{XZ} Q_{Y|Z} = P_Z Q_{Y|Z} P_{X|Z}. Let Q'_{YZ} := P_Z Q_{Y|Z}. From Lemma 2 follows that δ(P_{YZ}, Q'_{YZ}) ≤ ε and from Lemma 3 that δ(P_{YZ} P_{X|Z}, Q'_{YZ} P_{X|Z}) ≤ ε. From Lemma 1 follows then that δ(P_{XYZ}, P_{XZ} P_{Y|Z}) ≤ 2ε.  ⊓⊔

Lemma 8. For all probability distributions P_{XYZ}, we have

  I_S(X; Y | Z) ≤ 4 · min_{Q_{XZ}, Q_{Y|Z}} δ(P_{XYZ}, Q_{XZ} Q_{Y|Z}) .

Proof. Let Q_{XZ} and Q_{Y|Z} be the distributions that minimize the expression, and let ε := δ(P_{XYZ}, Q_{XZ} Q_{Y|Z}). From Lemma 2 follows that δ(P_{XZ}, Q_{XZ}) ≤ ε and from Lemma 3 that δ(P_{XZ} Q_{Y|Z}, Q_{XZ} Q_{Y|Z}) ≤ ε. From Lemma 1 follows that δ(P_{XYZ}, P_{XZ} Q_{Y|Z}) ≤ 2ε. The statement follows by applying Lemma 7.  ⊓⊔

Lemma 9. For all P_{WXYZ}, we have I_S(X; Y | WZ) ≤ 2 · I_S(WX; Y | Z).

Proof. From Lemma 7 follows that

  I_S(X; Y | WZ) ≤ 2 · min_{Q_{Y|WZ}} δ(P_{WXYZ}, P_{WXZ} Q_{Y|WZ}) ≤ 2 · δ(P_{WXYZ}, P_{WXZ} P_{Y|Z}) = 2 · I_S(WX; Y | Z) .  ⊓⊔

2.1 Relation between I and I_S
Since we would like to use I_S in situations where previously the Shannon mutual information I has been used, it is important to know how these two measures relate to each other. Using Pinsker's inequality (see, for example, Lemma 16.3.1 in [25]) and Jensen's inequality, it is easy to show that

  I_S(X; Y | Z) ≤ √(I(X; Y | Z)) .

The other direction can be shown using Lemma 12.6.1 from [25]. We get that for I_S(X; Y | Z) ≤ 1/4,

  I(X; Y | Z) ≤ −2 · I_S(X; Y | Z) · log( 2 · I_S(X; Y | Z) / (|𝒳| · |𝒴| · |𝒵|) ) .
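For finite alphabets both quantities are directly computable, so the first inequality can be checked numerically. The sketch below (our own illustration, for the unconditional case where Z is constant) compares I_S(X;Y) with √(I(X;Y)) for a pair of correlated bits:

```python
import math

def marginals(pxy):
    """Marginal pmfs of X and Y from a joint pmf {(x, y): prob}."""
    px, py = {}, {}
    for (x, y), pr in pxy.items():
        px[x] = px.get(x, 0.0) + pr
        py[y] = py.get(y, 0.0) + pr
    return px, py

def mutual_information(pxy):
    """I(X;Y) in bits."""
    px, py = marginals(pxy)
    return sum(pr * math.log2(pr / (px[x] * py[y]))
               for (x, y), pr in pxy.items() if pr > 0)

def statistical_information(pxy):
    """I_S(X;Y) = delta(P_XY, P_X P_Y) (Z constant)."""
    px, py = marginals(pxy)
    return 0.5 * sum(abs(pxy.get((x, y), 0.0) - px[x] * py[y])
                     for x in px for y in py)

# Correlated bits: X uniform, Y = X with probability 0.9.
p = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
i_s, i = statistical_information(p), mutual_information(p)
print(i_s <= math.sqrt(i))  # True: here I_S = 0.4 and sqrt(I) ~ 0.73
```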
3 Two-Party Secure Function Evaluation

3.1 Definition of Security in the Real/Ideal Paradigm
We will now give a definition of secure function evaluation based on the real/ideal model paradigm. We use the same definitions as [21], which are based on Definition 7.2.10 of [18] (see also [17]). Let x ∈ 𝒳 denote the input of the first party, y ∈ 𝒴 the input of the second party, and z ∈ {0,1}* an additional auxiliary input available to both parties that is ignored by all honest parties. A g-hybrid protocol is a pair of (randomized) algorithms Π = (A_1, A_2) which can interact by exchanging messages and which additionally have access to the functionality g. A pair of algorithms Ā = (Ā_1, Ā_2) is called admissible for protocol Π if either Ā_1 = A_1 or Ā_2 = A_2, i.e., if at least one of the parties is honest and uses the algorithm defined by the protocol Π. The joint execution of Π under Ā on input pair (x, y) ∈ 𝒳 × 𝒴 and auxiliary input z ∈ {0,1}* in the real model, denoted by

  real^g_{Π,Ā(z)}(x, y) ,

is defined as the output pair resulting from the interaction between Ā_1(x, z) and Ā_2(y, z) using the functionality g.

The ideal model defines the optimal setting where the players have access to an ideal functionality f they wish to compute. The trivial f-hybrid protocol B = (B_1, B_2) is defined as the protocol where both parties send their inputs x and y unchanged to the functionality f and output the values u and v received from f unchanged. Let B̄ = (B̄_1, B̄_2) be an admissible pair of algorithms for B. The joint execution of f under B̄ in the ideal model on input pair (x, y) ∈ 𝒳 × 𝒴 and auxiliary input z ∈ {0,1}*, denoted by

  ideal_{f,B̄(z)}(x, y) ,

is defined as the output pair resulting from the interaction between B̄_1(x, z) and B̄_2(y, z) using the functionality f.

We say that a protocol securely computes a functionality if anything an adversary can do in the real model can be simulated in the ideal model.

Definition 3 (Statistical Security).
A g-hybrid protocol Π securely computes f with an error of at most ε if for every pair of algorithms Ā = (Ā_1, Ā_2) that is admissible in the real model for the protocol Π, there exists a pair of algorithms B̄ = (B̄_1, B̄_2) that is admissible in the ideal model for protocol B (and where the same players are honest), such that for all x ∈ 𝒳, y ∈ 𝒴, and z ∈ {0,1}*, we have

  ideal_{f,B̄(z)}(x, y) ≡_ε real^g_{Π,Ā(z)}(x, y) .

A very important property of the above definition is that it implies sequential composition; see [17]. Note that in contrast to [17] or [18], we do not require the simulation to be efficiently computable.
The following lemma formalizes an idea already mentioned in [21], namely that if a protocol is secure against adversaries without auxiliary input, then it is also secure against adversaries with auxiliary input. To avoid that the ideal adversary with auxiliary input gets infinitely big, we have to additionally require that there exists an explicit construction of the ideal adversary without auxiliary input.

Lemma 10. If a g-hybrid protocol Π securely computes f with an error of at most ε against adversaries with constant auxiliary input and the construction of the ideal adversary is explicit, then it securely computes f with an error of at most ε.

Proof. If both players are honest, the auxiliary input is ignored and the lemma holds. Let player i be malicious and denote by Ā_i the algorithm used. For a fixed z ∈ {0,1}*, let Ā_i^z be equal to Ā_i, but with the auxiliary input z hard-wired into it. Since Π securely computes f with an error of at most ε against adversaries with constant auxiliary input, there exists an algorithm B̄_i^z such that for all x ∈ 𝒳 and y ∈ 𝒴, we have

  ideal_{f,B̄^z}(x, y) ≡_ε real^g_{Π,Ā^z}(x, y) .

Now, we let B̄_i be the concatenation of all B̄_i^z, i.e., on auxiliary input z the adversary B̄_i behaves as B̄_i^z. Note that since we have an explicit construction of B̄_i^z, B̄_i has a finite description. Obviously, we have for all x ∈ 𝒳, y ∈ 𝒴, and z ∈ {0,1}* that

  ideal_{f,B̄(z)}(x, y) ≡_ε real^g_{Π,Ā(z)}(x, y) .

Hence, Π securely computes f with an error of at most ε.  ⊓⊔
Therefore, to show the security of a protocol in our model, the auxiliary input can be omitted, which we will do for the rest of this paper.

3.2 Information-Theoretic Conditions for Security
We will now state our main results, which are information-theoretic conditions for the statistical security of a protocol without the use of an ideal model. First of all, we will slightly change our notation. Let X and Y be random variables denoting the players' inputs, distributed according to a distribution P_{XY} unknown to the players, and let U and V be random variables denoting the outputs of the two parties, i.e., for specific inputs (x, y) we have

  (U, V) = real^g_{Π,Ā}(x, y) ,   (Ū, V̄) = ideal_{f,B̄}(x, y) .

The security condition of Definition 3 can be expressed as P_{ŪV̄|X=x,Y=y} ≡_ε P_{UV|X=x,Y=y}. To simplify the statement of the following theorem, we will assume that the ideal functionality f is deterministic. It can be generalized to probabilistic functionalities without any problems.
The conditions for the security for player 1 must ensure that there exists an ideal adversary that achieves almost the same as the real adversary. We achieve this by requiring that there exists a virtual input value Y' that the adversary could have created (this is ensured by I_S(X; Y' | Y) ≈ 0), and a virtual output value V' that, together with U, could be the output of the ideal functionality, given X and Y' as input (this is ensured by Pr[(U, V') = f(X, Y')] ≈ 1). The protocol is secure if the adversary's output V could have been calculated by him from Y, Y' and V', which is ensured by I_S(UX; V | YY'V') ≈ 0.

Theorem 1. A protocol Π securely computes the deterministic functionality f with an error of at most 3ε if for every pair of algorithms Ā = (Ā_1, Ā_2) that is admissible in the real model for the protocol Π and for any input (X, Y) distributed according to P_{XY} over 𝒳 × 𝒴, Ā produces outputs (U, V) distributed according to P_{UV|XY} such that the following conditions are satisfied:

– (Correctness) If both players are honest, we have Pr[(U, V) = f(X, Y)] ≥ 1 − ε.
– (Security for Player 1) If player 1 is honest, then there exist random variables Y' and V' distributed according to P_{Y'V'|XYUV} such that Pr[(U, V') = f(X, Y')] ≥ 1 − ε, I_S(X; Y' | Y) ≤ ε and I_S(UX; V | YY'V') ≤ ε.
– (Security for Player 2) If player 2 is honest, then there exist random variables X' and U' distributed according to P_{X'U'|XYUV} such that Pr[(U', V) = f(X', Y)] ≥ 1 − ε, I_S(Y; X' | X) ≤ ε and I_S(VY; U | XX'U') ≤ ε.

Both P_{Y'V'|XYUV} and P_{X'U'|XYUV} should have explicit constructions.

Proof. If both players are honest, the correctness condition implies P_{ŪV̄|X=x,Y=y} ≡_ε P_{UV|X=x,Y=y} for all x and y. If both players are malicious, nothing needs to be shown. Without loss of generality, let player 1 be honest and player 2 be malicious.
Let us for the moment assume that the input distribution P_{XY} is fixed and known to the adversary, so the joint distribution in the real model is P_{XYUV} = P_{XY} P_{UV|XY}.

We will define an admissible protocol B̄ = (B_1, B̄_2) in the ideal model that produces almost the same output distribution as the protocol Π in the real model. On input y, let B̄_2 choose his input y' according to P_{Y'|Y=y}, which we model by the channel P_{Y'|Y}. After receiving v' from the ideal functionality f, let B̄_2 choose his output v according to P_{V|Y=y,Y'=y',V'=v'}, which we model by the channel P_{V|YY'V'}. The distribution of the input/output in the ideal model is given by

  P_{XYŪV̄} = P_{XY} Σ_{y',v'} P_{Y'|Y} P_{UV'|XY'} P_{V|YY'V'} ,

where (U, V') = f(X, Y'). In the real model, it follows from I_S(X; Y' | Y) ≤ ε that

  P_{XY} P_{Y'|XY} ≡_ε P_{XY} P_{Y'|Y} ,

from I_S(UX; V | YY'V') ≤ ε that

  P_{XY} P_{UY'V'|XY} P_{V|XYUY'V'} ≡_ε P_{XY} P_{UY'V'|XY} P_{V|YY'V'} ,

and from Pr[(U, V') = f(X, Y')] ≥ 1 − ε and Lemma 4 that

  P_{XY} P_{Y'|XY} P_{UV'|XY'} ≡_ε P_{XY} P_{Y'|XY} P_{UV'|XYY'} .

We have

  P_{XYŪV̄} = P_{XY} Σ_{y',v'} P_{Y'|Y} P_{UV'|XY'} P_{V|YY'V'}
           ≡_ε P_{XY} Σ_{y',v'} P_{Y'|XY} P_{UV'|XY'} P_{V|YY'V'}
           ≡_ε P_{XY} Σ_{y',v'} P_{Y'|XY} P_{UV'|XYY'} P_{V|YY'V'}
           ≡_ε P_{XY} Σ_{y',v'} P_{Y'|XY} P_{UV'|XYY'} P_{V|XYUY'V'}
           = P_{XYUV} .

Therefore, given P_{XY}, we are able to construct an adversary in the ideal model that simulates the output of the real protocol with an error of at most 3ε. However, we have to show that a fixed adversary in the ideal model works for every input (x, y) ∈ 𝒳 × 𝒴.⁴ Given P_{XY}, let e be the average error of the simulation, and let e_{xy} be the error if the input is (x, y). We have e = Σ_{x,y} P_{XY}(x, y) · e_{xy}. Let h be the function that maps the space of all distributions over 𝒳 × 𝒴 to itself, defined by

  h(P_{XY})(x, y) := P_{XY}(x, y) · (e_{xy} + 1) / (e + 1) .

h is a continuous⁵ function from a non-empty, compact, convex set S ⊂ ℝ^{|𝒳 × 𝒴|} into itself, so by Brouwer's Fixed Point Theorem h must have a fixed-point distribution Q_{XY}. (A constructive proof of Brouwer's Fixed Point Theorem can be found in [26].) So we have for all (x, y) that

  Q_{XY}(x, y) = Q_{XY}(x, y) · (e_{xy} + 1) / (e + 1)

and Q_{XY}(x, y) > 0, and hence e_{xy} = e. Therefore, by taking the adversary in the ideal model for the input distribution Q_{XY}, the output will have the same error e for all inputs. Since e ≤ 3ε, we get for all x and y

  P_{ŪV̄|X=x,Y=y} ≡_{3ε} P_{UV|X=x,Y=y} ,

which implies that the protocol is secure with an error of at most 3ε.  ⊓⊔

⁴ This part is missing in [21], but there the problem can be solved easily by fixing P_{XY} to the uniform distribution. But in our case, this would give us an error bound that would depend on the dimension of the input, which would be quite weak.
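The map h used in the fixed-point argument is easy to write down explicitly. The sketch below (our own illustration) iterates h on a hypothetical, fixed error profile e_xy; note that in the proof e_xy itself depends on P_XY through the simulator, so in general only Brouwer's theorem guarantees a fixed point. With a constant profile, the iteration simply concentrates mass on the worst-case inputs, which illustrates why a full-support fixed point must equalize all e_xy:

```python
def h(p, err):
    """One application of the map h: reweight P_XY(x, y) by (e_xy + 1)/(e + 1)."""
    e = sum(p[xy] * err[xy] for xy in p)                  # average simulation error
    return {xy: p[xy] * (err[xy] + 1) / (e + 1) for xy in p}

# Hypothetical per-input simulation errors e_xy on a 2x2 input space.
err = {(0, 0): 0.01, (0, 1): 0.03, (1, 0): 0.02, (1, 1): 0.03}
p = {xy: 0.25 for xy in err}                              # start from the uniform distribution
for _ in range(1000):
    p = h(p, err)                                         # h maps distributions to distributions

# Mass concentrates on the maximal-error inputs (0, 1) and (1, 1),
# the only support on which e_xy = e can hold for this profile.
print({xy: round(pr, 3) for xy, pr in p.items()})
```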
Theorem 2 now shows that our conditions are not only sufficient but also necessary.

Theorem 2. If a protocol Π securely computes the deterministic functionality f with an error of at most ε, then for every pair of algorithms Ā = (Ā_1, Ā_2) that is admissible in the real model for the protocol Π and for any input (X, Y) distributed according to P_{XY} over 𝒳 × 𝒴, Ā produces outputs (U, V) distributed according to P_{UV|XY} such that the following conditions are satisfied:

– (Correctness) If both players are honest, we have Pr[(U, V) = f(X, Y)] ≥ 1 − ε.
– (Security for Player 1) If player 1 is honest, then there exist random variables Y' and V' distributed according to P_{Y'V'|UVXY} such that Pr[(U, V') = f(X, Y')] ≥ 1 − ε, I_S(X; Y' | Y) = 0 and I_S(UX; V | YY'V') ≤ 4ε.
– (Security for Player 2) If player 2 is honest, then there exist random variables X' and U' distributed according to P_{X'U'|UVXY} such that Pr[(U', V) = f(X', Y)] ≥ 1 − ε, I_S(Y; X' | X) = 0 and I_S(VY; U | XX'U') ≤ 4ε.

⁵ We can assume that P_{Y'V'|XYUV} is a continuous function of P_{XY}.
Proof. There exists an admissible pair of algorithms B̄ = (B̄_1, B̄_2) for the ideal model such that for all x ∈ 𝒳 and y ∈ 𝒴, we have P_{ŪV̄|X=x,Y=y} ≡_ε P_{UV|X=x,Y=y}.

If both players are honest, we have B̄ = B: B_1 and B_2 forward their inputs (X, Y) unchanged to the trusted third party, get back (U', V') := f(X, Y) and output (Ū, V̄) = (U', V') = f(X, Y). It follows that Pr[(U, V) = f(X, Y)] ≥ 1 − ε.

Without loss of generality, let player 1 be honest and player 2 be malicious. Let us look at the execution of B̄ = (B_1, B̄_2), and let P_{XY} be an arbitrary input distribution. The malicious B̄_2 can be modeled by two conditional probability distributions: P_{Y'S|Y}, computing the input to the ideal functionality f and some internal data S, and P_{V̄|V'S}, computing the output. We get

  P_{XYŪV̄Y'V'} = P_{XY} Σ_s P_{Y'S|Y} P_{ŪV'|XY'} P_{V̄|V'S}            (1)
              = P_{XY} P_{Y'|Y} P_{ŪV'|XY'} Σ_s P_{S|YY'} P_{V̄|V'S}     (2)
              = P_{XY} P_{Y'|Y} P_{ŪV'|XY'} P_{V̄|YV'Y'} ,               (3)

where (Ū, V') = f(X, Y'). Let P_{Y'V'|UVXY} := P_{Y'V'|ŪV̄XY}. From P_{Y'V'|UVXY} = P_{Y'|Y} P_{V'|UVXYY'} follows that

  I_S(X; Y' | Y) = 0 .

From P_{ŪV̄XY} ≡_ε P_{UVXY} and Lemma 3 follows that P_{XYUVY'V'} ≡_ε P_{XYŪV̄Y'V'}. Since P_{XYŪV̄Y'V'} = P_{XYŪV'Y'} P_{V̄|YV'Y'}, it follows from Lemma 8 that

  I_S(UX; V | YY'V') ≤ 4ε ,

and from Lemma 2 follows P_{XY'UV'} ≡_ε P_{XY'ŪV'}, and therefore Pr[(U, V') = f(X, Y')] ≥ 1 − ε.  ⊓⊔

3.3 Oblivious Transfer
We now apply Theorem 1 to 1-out-of-n string oblivious transfer, or OT^k_n for short. The ideal functionality f_OT is defined as f_OT(X, C) := (⊥, X_C), where ⊥ denotes a constant random variable, X = (X_0, ..., X_{n−1}), X_i ∈ {0,1}^k for i ∈ {0, ..., n−1}, and C ∈ {0, ..., n−1}.

Theorem 3. A protocol Π securely computes OT^k_n with an error of at most 6ε if for every pair of algorithms Ā = (Ā_1, Ā_2) that is admissible for protocol Π and for any input (X, C), Ā produces outputs (U, V) such that the following conditions are satisfied:

– (Correctness) If both players are honest, then U = ⊥ and Pr[V = X_C] ≥ 1 − ε.
– (Security for Player 1) If player 1 is honest, then we have U = ⊥ and there exists a random variable C' distributed according to P_{C'|XCV} such that I_S(X; C' | C) ≤ ε and I_S(X; V | CC'X_{C'}) ≤ ε.
– (Security for Player 2) If player 2 is honest, we have V ∈ {0,1}^k and I_S(C; U | X) ≤ ε.

Proof. We need to show that these conditions imply the conditions of Theorem 1 for ε' := 2ε. For correctness and the security for player 1 this is trivial. For the security for player 2, we choose X' = (X'_0, ..., X'_{n−1}) as follows: for all values i except C, let X'_i be chosen according to the distribution P_{V|XU,C=i}, and set X'_C = V. Note that all X'_i, 0 ≤ i ≤ n − 1, have distribution P_{V|XU,C=i}. Thus X' does not depend on C given XU, and we have I_S(C; X' | XU) = 0. From Lemma 5 follows that

  I_S(C; X'U | X) ≤ I_S(C; U | X) + I_S(C; X' | XU) ≤ ε .

Lemma 6 implies that I_S(C; X' | X) ≤ ε and, since V is a function of X' and C, it follows from Lemma 9 that

  I_S(VC; U | XX') = I_S(C; U | XX') ≤ 2 · I_S(C; X'U | X) ≤ 2ε .

The statement follows by applying Theorem 1.  ⊓⊔
Furthermore, note that using Lemmas 5 and 6, we get

  I_S(C; U | X) ≤ I_S(C; X'U | X) ≤ I_S(C; X' | X) + I_S(C; U | XX') ≤ I_S(C; X' | X) + I_S(VC; U | XX') ,

from which it is easy to show that the conditions of Theorem 1 also imply the conditions in Theorem 3.
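To see a condition of this kind in action, one can compute a statistical information term for a toy 1-out-of-2 OT with one-bit strings and uniform, independent inputs. The sketch below is our own illustration; it evaluates the simplified term I_S(X_{1−C}; V | C, X_C), i.e., the honest-receiver case C' = C of the condition I_S(X; V | CC'X_{C'}), and distinguishes an honest output V = X_C from a leaky one that also reveals X_{1−C}:

```python
from itertools import product

def stat_info(p):
    """I_S(A;B|Z) = delta(P_ABZ, P_Z P_{A|Z} P_{B|Z}) for a pmf {(a, b, z): prob}."""
    def marg(keep):
        m = {}
        for k, pr in p.items():
            kk = tuple(k[i] for i in keep)
            m[kk] = m.get(kk, 0.0) + pr
        return m
    pz, paz, pbz = marg((2,)), marg((0, 2)), marg((1, 2))
    aa = {k[0] for k in paz}
    bb = {k[0] for k in pbz}
    total = 0.0
    for a, b in product(aa, bb):
        for (z,), pzv in pz.items():
            markov = paz.get((a, z), 0.0) * pbz.get((b, z), 0.0) / pzv
            total += abs(p.get((a, b, z), 0.0) - markov)
    return 0.5 * total

def ot_view(leaky):
    """Joint pmf of (X_{1-C}, V, (C, X_C)) for uniform one-bit inputs X_0, X_1, C."""
    p = {}
    for x0, x1, c in product((0, 1), repeat=3):
        xc, xo = (x0, x1)[c], (x0, x1)[1 - c]
        v = (xc, xo) if leaky else (xc,)   # a leaky receiver also learns X_{1-C}
        key = (xo, v, (c, xc))
        p[key] = p.get(key, 0.0) + 1 / 8
    return p

print(stat_info(ot_view(leaky=False)))  # 0.0: honest output V = X_C reveals nothing more
print(stat_info(ot_view(leaky=True)))   # 0.5: leakage shows up as statistical information
```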
References

1. Yao, A.C.: Protocols for secure computations. In: Proceedings of the 23rd Annual IEEE Symposium on Foundations of Computer Science (FOCS '82). (1982) 160–164
2. Goldreich, O., Micali, S., Wigderson, A.: How to play any mental game. In: Proceedings of the 19th Annual ACM Symposium on Theory of Computing (STOC '87), ACM Press (1987) 218–229
3. Ben-Or, M., Goldwasser, S., Wigderson, A.: Completeness theorems for non-cryptographic fault-tolerant distributed computation. In: Proceedings of the 20th Annual ACM Symposium on Theory of Computing (STOC '88), ACM Press (1988) 1–10
4. Chaum, D., Crépeau, C., Damgård, I.: Multiparty unconditionally secure protocols (extended abstract). In: Proceedings of the 20th Annual ACM Symposium on Theory of Computing (STOC '88), ACM Press (1988) 11–19
5. Wiesner, S.: Conjugate coding. SIGACT News 15(1) (1983) 78–88
6. Rabin, M.O.: How to exchange secrets by oblivious transfer. Technical Report TR-81, Harvard Aiken Computation Laboratory (1981)
7. Even, S., Goldreich, O., Lempel, A.: A randomized protocol for signing contracts. Commun. ACM 28(6) (1985) 637–647
8. Goldreich, O., Vainish, R.: How to solve any protocol problem – an efficiency improvement. In: Advances in Cryptology — CRYPTO '87. Lecture Notes in Computer Science, Springer-Verlag (1988) 73–86
9. Kilian, J.: Founding cryptography on oblivious transfer. In: Proceedings of the 20th Annual ACM Symposium on Theory of Computing (STOC '88), ACM Press (1988) 20–31
10. Crépeau, C.: Verifiable disclosure of secrets and applications. In: Advances in Cryptology — EUROCRYPT '89. Lecture Notes in Computer Science, Springer-Verlag (1990) 181–191
11. Goldwasser, S., Levin, L.A.: Fair computation of general functions in presence of immoral majority. In: Advances in Cryptology — CRYPTO '90. Lecture Notes in Computer Science, Springer-Verlag (1991) 77–93
12. Crépeau, C., van de Graaf, J., Tapp, A.: Committed oblivious transfer and private multi-party computation. In: Advances in Cryptology — CRYPTO '95. Lecture Notes in Computer Science, Springer-Verlag (1995) 110–123
13. Kilian, J.: More general completeness theorems for secure two-party computation. In: Proceedings of the 32nd Annual ACM Symposium on Theory of Computing (STOC '00), ACM Press (2000) 316–324
14. Micali, S., Rogaway, P.: Secure computation (abstract). In: Advances in Cryptology — CRYPTO '91. Volume 576 of Lecture Notes in Computer Science, Springer-Verlag (1992) 392–404
15. Beaver, D.: Foundations of secure interactive computing. In: Advances in Cryptology — CRYPTO '91. Volume 576 of Lecture Notes in Computer Science, Springer-Verlag (1992) 377–391
16. Goldwasser, S., Micali, S., Rackoff, C.: The knowledge complexity of interactive proof systems. SIAM J. Comput. 18(1) (1989) 186–208
17. Canetti, R.: Security and composition of multiparty cryptographic protocols. Journal of Cryptology 13(1) (2000) 143–202
18. Goldreich, O.: Foundations of Cryptography. Volume II: Basic Applications. Cambridge University Press (2004)
19. Canetti, R.: Universally composable security: A new paradigm for cryptographic protocols. In: Proceedings of the 42nd Annual IEEE Symposium on Foundations of Computer Science (FOCS '01). (2001) 136–145. Updated version at http://eprint.iacr.org/2000/067
20. Backes, M., Pfitzmann, B., Waidner, M.: A universally composable cryptographic library. http://eprint.iacr.org/2003/015 (2003)
21. Crépeau, C., Savvides, G., Schaffner, C., Wullschleger, J.: Information-theoretic conditions for two-party secure function evaluation. In: Advances in Cryptology — EUROCRYPT '06. Volume 4004 of Lecture Notes in Computer Science, Springer-Verlag (2006) 538–554. Full version available at http://eprint.iacr.org/2006/183
22. Wullschleger, J.: Oblivious-Transfer Amplification. PhD thesis, ETH Zurich, Switzerland (2007)
23. Shannon, C.E.: A mathematical theory of communication. Bell System Tech. Journal 27 (1948) 379–423, 623–656
24. Fehr, S., Schaffner, C.: Composing quantum protocols in a classical environment. http://arxiv.org/abs/0804.1059 (2008)
25. Cover, T.M., Thomas, J.A.: Elements of Information Theory. Wiley-Interscience, New York, USA (1991)
26. Kellogg, R.B., Li, T.Y., Yorke, J.: A constructive proof of the Brouwer fixed-point theorem and computational results. SIAM Journal on Numerical Analysis 13(4) (1976) 473–483