
Relay Broadcast Channels with Confidential Messages

arXiv:1312.6784v2 [cs.IT] 12 Jan 2014

Bin Dai and Zheng Ma

Abstract — We investigate the effects of an additional relay node on the secrecy of broadcast channels by considering the model of relay broadcast channels with confidential messages. We show that this additional relay node can increase the achievable secrecy rate region of the broadcast channels with confidential messages. More specifically, first, we investigate the discrete memoryless relay broadcast channels with two confidential messages and one common message. Three inner bounds (with respect to the decode-forward, generalized noise-forward and compress-forward strategies) and an outer bound on the capacity-equivocation region are provided. Removing the secrecy constraint, this outer bound can also serve as a new outer bound for the relay broadcast channel. Second, we investigate the discrete memoryless relay broadcast channels with two confidential messages (no common message). Inner and outer bounds on the capacity-equivocation region are provided. Then, we study the Gaussian case, and find that with the help of the relay node, the achievable secrecy rate region of the Gaussian broadcast channels with two confidential messages is enhanced. Finally, we investigate the discrete memoryless relay broadcast channels with one confidential message and one common message. This work generalizes Lai-Gamal's work on the relay-eavesdropper channel by considering an additional common message for both the legitimate receiver and the eavesdropper. Inner and outer bounds on the capacity-equivocation region are provided, and the results are further explained via a Gaussian example. Compared with Csiszár-Körner's work on broadcast channels with confidential messages (BCC), we find that with the help of the relay node, the secrecy capacity region of the Gaussian BCC is enhanced.

Index Terms — Capacity-equivocation region, confidential messages, relay broadcast channel, secrecy capacity region.

I. INTRODUCTION

The security of a communication system was first studied by Shannon [1] from the standpoint of information theory. He discussed a theoretical model of cryptosystems using the framework of classical one-way noiseless channels and derived some conditions for secure communication. Subsequently, Wyner, in his paper on the discrete memoryless wiretap channel [2], studied the problem of how to transmit confidential messages to the legitimate receiver via a discrete memoryless degraded broadcast channel, while keeping the wiretapper as ignorant of the messages

B. Dai and Z. Ma are with the School of Information Science and Technology, Southwest JiaoTong University, Chengdu 610031, China. E-mail: [email protected], [email protected].


as possible. Measuring the uncertainty of the wiretapper by equivocation, the capacity-equivocation region was established. Furthermore, the secrecy capacity was also established, which provides the maximum transmission rate with perfect secrecy. Based on Wyner's work, Leung-Yan-Cheong and Hellman studied the Gaussian wiretap channel (GWC) [3], and showed that its secrecy capacity is the difference between the main channel capacity and the overall wiretap channel capacity (the cascade of the main channel and the wiretap channel). Moreover, Merhav [4] studied a specific wiretap channel, and obtained the capacity-equivocation region, where both the legitimate receiver and the wiretapper have access to some leaked symbols from the source, but the channels for the wiretapper are noisier than those for the legitimate receiver, who shares a secret key with the encoder. Other related works on the wiretap channel are split into the following four directions.

• The first is the wiretap channel with feedback, first investigated by Ahlswede and Cai [5]. In [5], the general wiretap channel (not physically or stochastically degraded) with noiseless feedback from the legitimate receiver to the channel encoder was studied, and both upper and lower bounds on the secrecy capacity were provided. Specifically, for the physically degraded case, they showed that the secrecy capacity is larger than that of Wyner's wiretap channel (without feedback), i.e., the noiseless feedback helps to enhance the secrecy capacity of [2]. Besides Ahlswede and Cai's work, the wiretap channel with noisy feedback was studied in [6], and the wiretap channel with secure rate-limited feedback was studied in [7]; both focused on bounds on the secrecy capacity.



• The second is the wiretap channel with channel state information (CSI). Mitrpant et al. [8] studied the Gaussian wiretap channel with CSI, and provided an inner bound on the capacity-equivocation region. Furthermore, Chen et al. [9] investigated the discrete memoryless wiretap channel with noncausal CSI (CSI available to the channel encoder in a noncausal manner), and also provided an inner bound on the capacity-equivocation region.



• The third is the compound wiretap channel. The compound wiretap channel can be viewed as a wiretap channel with multiple legitimate receivers and multiple wiretappers, where the source message must be successfully transmitted to all receivers and must be kept secret from all wiretappers. The compound wiretap channel was studied in [10], [11], [12], [13], [14], [15], [16].



• The fourth is the MIMO compound wiretap channel, in which the transmitters, legitimate receivers and wiretappers are equipped with several antennas. The MIMO compound wiretap channel was studied in [17], [18], [19].
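The Leung-Yan-Cheong–Hellman result mentioned above (the GWC secrecy capacity is the difference of the two channel capacities [3]) is easy to check numerically. The following is a minimal sketch; the SNR values are illustrative assumptions, not taken from the paper:

```python
import math

def awgn_capacity(snr):
    """Capacity of an AWGN channel, 0.5*log2(1 + SNR), in bits per channel use."""
    return 0.5 * math.log2(1 + snr)

def gwc_secrecy_capacity(snr_main, snr_wiretap):
    """Secrecy capacity of the Gaussian wiretap channel [3]: the difference
    between the main and wiretap channel capacities, clipped at zero when
    the wiretap channel is at least as good as the main channel."""
    return max(0.0, awgn_capacity(snr_main) - awgn_capacity(snr_wiretap))

# Illustrative SNRs (assumptions): a strong main channel and a weak wiretap channel.
print(gwc_secrecy_capacity(15.0, 3.0))  # 1.0 bit per channel use
print(gwc_secrecy_capacity(3.0, 15.0))  # 0.0: no positive secure rate
```

The clipping at zero reflects the fact that no positive perfectly secure rate is achievable once the wiretapper's channel dominates the main channel.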

After the publication of Wyner's work, Csiszár and Körner [20] investigated a more general situation: the broadcast channels with confidential messages (BCC). In this model, a common message and a confidential message were sent through a general broadcast channel. The common message was assumed to be decoded correctly by both the legitimate receiver and the wiretapper, while the confidential message was only allowed to be obtained by the legitimate receiver. This model is also a generalization of [21], where no confidentiality condition is imposed. The capacity-equivocation region and the secrecy capacity region of the BCC [20] were totally determined, and the results were also a generalization of those in [2]. Furthermore, the capacity-equivocation region of the Gaussian BCC was


determined in [33]. By using the approaches of [2] and [20], information-theoretic security for other multi-user communication systems has been widely studied, as follows.

• For the broadcast channel, Liu et al. [22] studied the broadcast channel with two confidential messages (no common message), and provided an inner bound on the secrecy capacity region. Furthermore, Xu et al. [23] studied the broadcast channel with two confidential messages and one common message, and provided inner and outer bounds on the capacity-equivocation region.



• For the multiple-access channel (MAC), the security problems are split into two directions.
– The first is that two users wish to transmit their corresponding messages to a destination, and meanwhile, they also receive the channel output. Each user treats the other user as a wiretapper, and wishes to keep its confidential message as secret as possible from that wiretapper. This model is usually called the MAC with confidential messages, and it was studied by Liang and Poor [24]. An inner bound on the capacity-equivocation region is provided for the model with two confidential messages, and the capacity-equivocation region is still not known. Furthermore, for the model of the MAC with one confidential message [24], both inner and outer bounds on the capacity-equivocation region are derived. Moreover, for the degraded MAC with one confidential message, the capacity-equivocation region is totally determined.
– The second is that an additional wiretapper has access to the MAC output via a wiretap channel, and therefore, how to keep the confidential messages of the two users as secret as possible from the additional wiretapper is the main concern of the system designer. This model is usually called the multiple-access wiretap channel (MAC-WT). The Gaussian MAC-WT was investigated in [25]. An inner bound on the capacity-equivocation region is provided for the Gaussian MAC-WT. Other related works on the MAC-WT can be found in [26], [27].



• For the interference channel, Liu et al. [22] studied the interference channel with two confidential messages, and provided inner and outer bounds on the secrecy capacity region. In addition, Liang et al. [28] studied the cognitive interference channel with one common message and one confidential message, and the capacity-equivocation region was totally determined for this model.



• For the relay channel, Lai and Gamal [29] studied the relay-eavesdropper channel, where a source wishes to send messages to a destination while leveraging the help of a relay node to hide those messages from the eavesdropper. Three inner bounds (with respect to the decode-forward, noise-forward and compress-forward strategies) and one outer bound on the capacity-equivocation region were provided in [29]. In addition, Oohama [30] studied the relay channel with confidential messages, where a relay helps the transmission of messages from one sender to one receiver. The relay is considered not only as a sender that helps the message transmission but also as a wiretapper who can obtain some knowledge about the transmitted messages. Measuring the uncertainty of the relay by equivocation, inner and outer bounds on the capacity-equivocation region were provided in [30].


Recently, Ekrem and Ulukus [31] investigated the effects of user cooperation on the secrecy of broadcast channels by considering a cooperative relay broadcast channel. They showed that user cooperation can increase the achievable secrecy rate region of [22]. In this paper, first, we study the relay broadcast channels with two confidential messages and one common message, see Figure 1. This model generalizes the broadcast channels with confidential messages [23] by considering an additional relay node. The motivation of this work is to investigate the effects of an additional relay node on the secrecy of broadcast channels, and whether the achievable rate-equivocation regions of [20], [22], [23] can be enhanced by using an additional relay node.

Fig. 1: Relay broadcast channels with two confidential messages and one common message

For the model of Figure 1, we provide inner and outer bounds on the capacity-equivocation region. The decode-forward (DF), generalized noise-forward (GNF) and compress-forward (CF) relay strategies are used in the construction of the inner bounds. Of particular interest is the generalized noise-forward relay strategy, which is an extension of Lai-Gamal's noise-forward (NF) strategy [29]. The idea of this strategy is as follows. The relay sends the codeword x_1^N to both receivers, and x_1^N is independent of the transmitter's messages.
• If the channel from the relay to receiver 1 is less noisy than the channel from the relay to receiver 2, we allow receiver 1 to decode x_1^N, while receiver 2 cannot decode x_1^N. Therefore, in this case, x_1^N can be viewed as a noise signal to confuse receiver 2.
• If the channel from the relay to receiver 1 is more noisy than the channel from the relay to receiver 2, we allow receiver 2 to decode x_1^N, while receiver 1 cannot decode x_1^N. Therefore, in this case, x_1^N can be viewed as a noise signal to confuse receiver 1.
The outer bound on the capacity-equivocation region of the model of Figure 1 generalizes the outer bound in [23]. In addition, removing the secrecy constraint, this outer bound can also serve as a new bound for the relay broadcast channel.

Second, we study the relay broadcast channels with two confidential messages (no common message), see Figure 2. Inner and outer bounds on the capacity-equivocation region of Figure 2 are provided. The outer bound is directly obtained from that of Figure 1. The inner bounds are also constructed according to the three relay strategies (DF, GNF, CF). In addition, we present a Gaussian example for the model of Figure 2, and find that with the help of the relay, the achievable secrecy rate region of the Gaussian broadcast channels with two confidential messages is enhanced.

Fig. 2: Relay broadcast channels with two confidential messages

Third, we study the relay broadcast channels with one confidential message and one common message, see Figure 3. This model generalizes Lai-Gamal's work [29] by considering an additional common message. Inner and outer bounds on the capacity-equivocation region are also provided. The outer bound is again directly obtained from that of Figure 1. The inner bounds are constructed according to the DF, NF and CF strategies. Note that the NF strategy for Figure 3 is slightly different from that for Figure 1, and it is considered in two cases, as follows.
• If the channel from the relay to receiver 1 is less noisy than the channel from the relay to receiver 2, we allow receiver 1 to decode x_1^N, while receiver 2 cannot decode x_1^N. Therefore, in this case, x_1^N can be viewed as a noise signal to confuse receiver 2.
• If the channel from the relay to receiver 1 is more noisy than the channel from the relay to receiver 2, we allow both receivers to decode x_1^N, and therefore, in this case, the relay codeword x_1^N cannot make any contribution to the security of the model of Figure 3.
Moreover, a Gaussian example for the model of Figure 3 is provided, and we find that with the help of the relay, the secrecy capacity region of the Gaussian BCC [33] is enhanced.

Fig. 3: Relay broadcast channels with one confidential message and one common message


In this paper, random variables, sample values and alphabets are denoted by capital letters, lower case letters and calligraphic letters, respectively. A similar convention is applied to random vectors and their sample values. For example, U^N denotes a random N-vector (U_1, ..., U_N), and u^N = (u_1, ..., u_N) is a specific vector value in U^N, the N-th Cartesian power of U. U_i^N denotes a random (N − i + 1)-vector (U_i, ..., U_N), and u_i^N = (u_i, ..., u_N) is a specific vector value in U_i^N. Let pV(v) denote the probability mass function Pr{V = v}.

Throughout the paper, the logarithmic function is to the base 2. The organization of this paper is as follows. Section II provides the inner and outer bounds on the capacity-equivocation region of the model of Figure 1. The model of Figure 2 and its Gaussian case are investigated in Section III. The model of Figure 3 and its Gaussian case are investigated in Section IV. Final conclusions are provided in Section V.

II. RELAY BROADCAST CHANNELS WITH TWO CONFIDENTIAL MESSAGES AND ONE COMMON MESSAGE

The model of Figure 1 is a four-terminal discrete channel consisting of finite sets X, X1, Y, Y1, Z and a transition probability distribution pY,Y1,Z|X1,X(y, y1, z|x1, x). X^N and X_1^N are the channel inputs from the transmitter and the relay respectively, while Y^N, Y_1^N, Z^N are the channel outputs at receiver 1, the relay and receiver 2, respectively. The channel is discrete memoryless, i.e., the channel outputs (yi, y1,i, zi) at time i only depend on the channel inputs (xi, x1,i) at time i.

Definition 1: (Channel encoder) The confidential messages W1 and W2 take values in W1, W2, respectively. The common message W0 takes values in W0. W1, W2 and W0 are independent and uniformly distributed over their ranges. The channel encoder is a stochastic encoder fE that maps the messages w1, w2 and w0 into a codeword x^N ∈ X^N. The transmission rates of the confidential messages (W1, W2) and the common message (W0) are (log ||W1||)/N, (log ||W2||)/N and (log ||W0||)/N, respectively.

Definition 2: (Relay encoder) The relay encoder is also a stochastic encoder ϕi that maps the signals (y1,1, y1,2, ..., y1,i−1) received before time i to the channel input x1,i.

Definition 3: (Decoder) The decoder for receiver 1 is a mapping fD1 : Y^N → W0 × W1, with input Y^N and outputs W̆0, W̆1. Let Pe1 be the error probability of receiver 1, defined as Pr{(W0, W1) ≠ (W̆0, W̆1)}. The decoder for receiver 2 is a mapping fD2 : Z^N → W0 × W2, with input Z^N and outputs Ŵ0, Ŵ2. Let Pe2 be the error probability of receiver 2, defined as Pr{(W0, W2) ≠ (Ŵ0, Ŵ2)}.

The equivocation rate at receiver 2 is defined as

∆1 = (1/N) H(W1|Z^N).  (2.1)

Analogously, the equivocation rate at receiver 1 is defined as

∆2 = (1/N) H(W2|Y^N).  (2.2)
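To make the equivocation quantity H(W|Z^N) concrete, the following is a toy single-letter computation (N = 1) for a uniform binary message observed through a binary symmetric channel; the BSC model and its crossover probability are illustrative assumptions, not part of the paper's model:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def equivocation_bsc(crossover):
    """H(W|Z) when a uniform bit W is observed through a BSC(crossover).
    Computed directly from the joint pmf; by symmetry it equals h2(crossover)."""
    p = crossover
    joint = {(0, 0): 0.5 * (1 - p), (0, 1): 0.5 * p,
             (1, 0): 0.5 * p, (1, 1): 0.5 * (1 - p)}
    pz = {z: joint[(0, z)] + joint[(1, z)] for z in (0, 1)}
    hwz = 0.0
    for (w, z), pwz in joint.items():
        if pwz > 0:
            hwz += -pwz * math.log2(pwz / pz[z])  # -sum p(w,z) log p(w|z)
    return hwz

print(equivocation_bsc(0.5))   # 1.0 bit: the observation reveals nothing about W
print(equivocation_bsc(0.11))  # equals h2(0.11), just under 0.5 bits
```

The two extremes match intuition: a useless channel (crossover 1/2) leaves the full 1 bit of uncertainty, i.e., perfect secrecy for a single bit, while a noiseless channel (crossover 0) drives the equivocation to zero.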

A rate quintuple (R0, R1, R2, Re1, Re2) (where R0, R1, R2, Re1, Re2 > 0) is called achievable if, for any ε > 0 (where ε is an arbitrarily small positive real number), there exists a channel encoder-decoder (N, ∆1, ∆2, Pe1, Pe2) such that

lim_{N→∞} (log ||W0||)/N = R0, lim_{N→∞} (log ||W1||)/N = R1, lim_{N→∞} (log ||W2||)/N = R2,

lim_{N→∞} ∆1 ≥ Re1, lim_{N→∞} ∆2 ≥ Re2, Pe1 ≤ ε, Pe2 ≤ ε.  (2.3)

The capacity-equivocation region R(A) is the set of all achievable (R0, R1, R2, Re1, Re2) quintuples. The inner and outer bounds on the capacity-equivocation region R(A) are provided in Theorems 1 to 4, and they are proved in Appendix A, Appendix B, Appendix C and Appendix D, respectively. Our first result establishes an outer bound on the capacity-equivocation region of the model of Figure 1.

Theorem 1: (Outer bound) A single-letter characterization of the region R(Ao) (R(A) ⊆ R(Ao)) is as follows:

R(Ao) = {(R0, R1, R2, Re1, Re2) :
Re1 ≤ R1, Re2 ≤ R2,
R0 ≤ min{I(U, U1; Y), I(U; Y, Y1|U1)},
R0 ≤ min{I(U, U2; Z), I(U; Z, Y1|U2)},
R0 + R1 ≤ min{I(U, U1, V1; Y), I(U, V1; Y, Y1|U1)},
R0 + R2 ≤ min{I(U, U2, V2; Z), I(U, V2; Z, Y1|U2)},
R0 + R1 + R2 ≤ I(U, U2, V1; Y, Y1|U1) + I(V2; Z, Y1|U, U1, U2, V1),
R0 + R1 + R2 ≤ I(U, U1, V2; Z, Y1|U2) + I(V1; Y, Y1|U, U1, U2, V2),
Re1 ≤ min{I(V1; Y|U, V2) − I(V1; Z|U, V2), I(V1; Y|U) − I(V1; Z|U)},
Re2 ≤ min{I(V2; Z|U, V1) − I(V2; Y|U, V1), I(V2; Z|U) − I(V2; Y|U)}},

where U → (U1, U2, V1, V2) → (X, X1) → (Y, Y1, Z).

Remark 1: There are some notes on Theorem 1, as follows.

• The relay X1 is represented by the auxiliary random variables U1 and U2. The common message W0 is represented by U, and the confidential messages W1, W2 are represented by V1 and V2, respectively.



• Removing the relay node from the model of Figure 1, the model reduces to the broadcast channels with two confidential messages and one common message [23]. Letting U1 = U2 = Y1 = const, the region R(Ao) reduces to R(Ao1), given by

R(Ao1) = {(R0, R1, R2, Re1, Re2) :
Re1 ≤ R1, Re2 ≤ R2,
R0 ≤ I(U; Y), R0 ≤ I(U; Z),
R0 + R1 ≤ I(U, V1; Y),
R0 + R2 ≤ I(U, V2; Z),
R0 + R1 + R2 ≤ I(U, V1; Y) + I(V2; Z|U, V1),
R0 + R1 + R2 ≤ I(U, V2; Z) + I(V1; Y|U, V2),
Re1 ≤ min{I(V1; Y|U, V2) − I(V1; Z|U, V2), I(V1; Y|U) − I(V1; Z|U)},
Re2 ≤ min{I(V2; Z|U, V1) − I(V2; Y|U, V1), I(V2; Z|U) − I(V2; Y|U)}}.

This region R(Ao1) is exactly the same as the outer bound in [23]. Note that, removing the secrecy constraint, the above region is the same as the outer bound for the general broadcast channel provided by Nair and Gamal [34].

• Removing the secrecy constraint, the model of Figure 1 reduces to the general relay broadcast channel. Then, the following region R(Co) can serve as a new outer bound for the general relay broadcast channel:

R(Co) = {(R0, R1, R2) :
R0 ≤ min{I(U, U1; Y), I(U; Y, Y1|U1)},
R0 ≤ min{I(U, U2; Z), I(U; Z, Y1|U2)},
R0 + R1 ≤ min{I(U, U1, V1; Y), I(U, V1; Y, Y1|U1)},
R0 + R2 ≤ min{I(U, U2, V2; Z), I(U, V2; Z, Y1|U2)},
R0 + R1 + R2 ≤ I(U, U2, V1; Y, Y1|U1) + I(V2; Z, Y1|U, U1, U2, V1),
R0 + R1 + R2 ≤ I(U, U1, V2; Z, Y1|U2) + I(V1; Y, Y1|U, U1, U2, V2)},

where U → (U1, U2, V1, V2) → (X, X1) → (Y, Y1, Z).



• The outer bound on the secrecy capacity region is denoted as CsAo, which is the set of triples (R0, R1, R2) such that (R0, R1, R2, Re1 = R1, Re2 = R2) ∈ R(Ao).

Corollary 1: CsAo = {(R0, R1, R2) :
R0 ≤ min{I(U, U1; Y), I(U; Y, Y1|U1)},
R0 ≤ min{I(U, U2; Z), I(U; Z, Y1|U2)},
R0 + R1 ≤ min{I(U, U1, V1; Y), I(U, V1; Y, Y1|U1)},
R0 + R2 ≤ min{I(U, U2, V2; Z), I(U, V2; Z, Y1|U2)},
R0 + R1 + R2 ≤ I(U, U2, V1; Y, Y1|U1) + I(V2; Z, Y1|U, U1, U2, V1),
R0 + R1 + R2 ≤ I(U, U1, V2; Z, Y1|U2) + I(V1; Y, Y1|U, U1, U2, V2),
R1 ≤ min{I(V1; Y|U, V2) − I(V1; Z|U, V2), I(V1; Y|U) − I(V1; Z|U)},
R2 ≤ min{I(V2; Z|U, V1) − I(V2; Y|U, V1), I(V2; Z|U) − I(V2; Y|U)}}.

Proof: Substituting Re1 = R1 and Re2 = R2 into the region R(Ao), Corollary 1 is easily checked.


We now turn our attention to constructing cooperation strategies for the model of Figure 1. Our first step is to characterize an inner bound on the capacity-equivocation region by using Cover-Gamal's decode-and-forward (DF) strategy [35]. In the DF strategy, the relay node first decodes the common message, and then re-encodes it to cooperate with the transmitter. The superposition coding and random binning techniques used in [23] are then combined with the DF cooperation strategy to characterize the inner bound.

Theorem 2: (Inner bound 1: DF strategy) A single-letter characterization of the region R(Ai1) (R(Ai1) ⊆ R(A)) is as follows:

R(Ai1) = {(R0, R1, R2, Re1, Re2) :
Re1 ≤ R1, Re2 ≤ R2,
R0 ≤ min{I(U; Y1|X1), I(U, X1; Y), I(U, X1; Z)},
R0 + R1 ≤ min{I(U; Y1|X1), I(U, X1; Y), I(U, X1; Z)} + I(V1; Y|U, X1),
R0 + R2 ≤ min{I(U; Y1|X1), I(U, X1; Y), I(U, X1; Z)} + I(V2; Z|U, X1),
R0 + R1 + R2 ≤ min{I(U; Y1|X1), I(U, X1; Y), I(U, X1; Z)} + I(V1; Y|U, X1) + I(V2; Z|U, X1) − I(V1; V2|U, X1),
Re1 ≤ I(V1; Y|U, X1) − I(V1; V2|U, X1) − I(V1; Z|U, X1, V2),
Re2 ≤ I(V2; Z|U, X1) − I(V1; V2|U, X1) − I(V2; Y|U, X1, V1)},

for some distribution

PY,Z,Y1,X,X1,V1,V2,U(y, z, y1, x, x1, v1, v2, u) = PY,Z,Y1|X,X1(y, z, y1|x, x1) PX,X1|U,V1,V2(x, x1|u, v1, v2) PU,V1,V2(u, v1, v2).

Remark 2: There are some notes on Theorem 2, as follows.

• The common message W0 is represented by U, and the confidential messages W1, W2 are represented by V1 and V2, respectively.



• The inequality R0 ≤ min{I(U; Y1|X1), I(U, X1; Y), I(U, X1; Z)} of Theorem 2 implies that the relay node decodes and forwards the common message W0. The other inequalities in Theorem 2 follow the ideas of [23], [36], [37].



• The first inner bound on the secrecy capacity region of Figure 1 is denoted as CsAi1, which is the set of triples (R0, R1, R2) such that (R0, R1, R2, Re1 = R1, Re2 = R2) ∈ R(Ai1).

Corollary 2: CsAi1 = {(R0, R1, R2) :
R0 ≤ min{I(U; Y1|X1), I(U, X1; Y), I(U, X1; Z)},
R1 ≤ I(V1; Y|U, X1) − I(V1; V2|U, X1) − I(V1; Z|U, X1, V2),
R2 ≤ I(V2; Z|U, X1) − I(V1; V2|U, X1) − I(V2; Y|U, X1, V1)}.

Proof: Substituting Re1 = R1 and Re2 = R2 into the region R(Ai1), Corollary 2 is easily checked.


The second step is to characterize an inner bound on the capacity-equivocation region by using the generalized noise-forward (GNF) strategy. In the GNF strategy, the relay node does not attempt to decode the messages but sends codewords that are independent of the transmitter's messages, and these codewords aid in confusing the receivers. Specifically, if the channel from the relay to receiver 1 is less noisy than the channel from the relay to receiver 2, we allow receiver 1 to decode the relay codeword, and receiver 2 cannot decode it. Therefore, in this case, the relay codeword can be viewed as a noise signal to confuse receiver 2. Analogously, if the channel from the relay to receiver 1 is more noisy than the channel from the relay to receiver 2, we allow receiver 2 to decode the relay codeword, and receiver 1 cannot decode it. Thus, in this case, the relay codeword is a noise signal to confuse receiver 1.

Theorem 3: (Inner bound 2: GNF strategy) A single-letter characterization of the region R(Ai2) (R(Ai2) ⊆ R(A)) is as follows:

R(Ai2) = L1 ∪ L2,

where L1 is the union, over all PY,Z,Y1,X,X1,V1,V2,U satisfying I(X1; Y) ≥ I(X1; Z|U, V2), of the sets

{(R0, R1, R2, Re1, Re2) :
Re1 ≤ R1, Re2 ≤ R2,
R0 ≤ min{I(U; Y|X1), I(U; Z)},
R0 + R1 ≤ min{I(U; Y|X1), I(U; Z)} + I(V1; Y|U, X1),
R0 + R2 ≤ min{I(U; Y|X1), I(U; Z)} + I(V2; Z|U),
R0 + R1 + R2 ≤ min{I(U; Y|X1), I(U; Z)} + I(V1; Y|U, X1) + I(V2; Z|U) − I(V1; V2|U),
Re1 ≤ min{I(X1; Z|U, V1, V2), I(X1; Y)} + I(V1; Y|U, X1) − I(V1; V2|U) − I(X1, V1; Z|U, V2),
Re2 ≤ I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1)},

and L2 is the union, over all PY,Z,Y1,X,X1,V1,V2,U satisfying I(X1; Z) ≥ I(X1; Y|U, V1), of the sets

{(R0, R1, R2, Re1, Re2) :
Re1 ≤ R1, Re2 ≤ R2,
R0 ≤ min{I(U; Z|X1), I(U; Y)},
R0 + R1 ≤ min{I(U; Z|X1), I(U; Y)} + I(V1; Y|U),
R0 + R2 ≤ min{I(U; Z|X1), I(U; Y)} + I(V2; Z|U, X1),
R0 + R1 + R2 ≤ min{I(U; Z|X1), I(U; Y)} + I(V1; Y|U) + I(V2; Z|U, X1) − I(V1; V2|U),
Re1 ≤ I(V1; Y|U) − I(V1; V2|U) − I(V1; Z|U, V2, X1),
Re2 ≤ min{I(X1; Y|U, V1, V2), I(X1; Z)} + I(V2; Z|U, X1) − I(V1; V2|U) − I(X1, V2; Y|U, V1)},

where PY,Z,Y1,X,X1,V1,V2,U(y, z, y1, x, x1, v1, v2, u) satisfies

PY,Z,Y1,X,X1,V1,V2,U(y, z, y1, x, x1, v1, v2, u) = PY,Z,Y1|X,X1(y, z, y1|x, x1) PX|U,V1,V2(x|u, v1, v2) PU,V1,V2(u, v1, v2) PX1(x1).


Remark 3: There are some notes on Theorem 3, as follows.
• The region L1 is characterized under the condition that the channel from the relay to receiver 1 is less noisy than the channel from the relay to receiver 2 (I(X1; Y) ≥ I(X1; Z|U, V2), together with the independence of X1 from U and V2, implies that I(X1; Y) ≥ I(X1; Z)). In this case, receiver 1 is allowed to decode the relay codeword, and receiver 2 is not. The rate of the relay is defined as min{I(X1; Z|U, V1, V2), I(X1; Y)}, and the relay codeword is viewed as pure noise for receiver 2. Analogously, the region L2 is characterized under the condition that the channel from the relay to receiver 2 is less noisy than the channel from the relay to receiver 1. In this case, receiver 2 is allowed to decode the relay codeword, and receiver 1 is not.
• The second inner bound on the secrecy capacity region of Figure 1 is denoted as CsAi2, which is the set of triples (R0, R1, R2) such that (R0, R1, R2, Re1 = R1, Re2 = R2) ∈ R(Ai2).

Corollary 3: CsAi2 = La ∪ Lb,

where La is the union, over all PY,Z,Y1,X,X1,V1,V2,U satisfying I(X1; Y) ≥ I(X1; Z|U, V2), of the sets

{(R0, R1, R2) :
R0 ≤ min{I(U; Y|X1), I(U; Z)},
R0 + R1 ≤ min{I(U; Y|X1), I(U; Z)} + I(V1; Y|U, X1),
R1 ≤ min{I(X1; Z|U, V1, V2), I(X1; Y)} + I(V1; Y|U, X1) − I(V1; V2|U) − I(X1, V1; Z|U, V2),
R2 ≤ I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1)},

and Lb is the union, over all PY,Z,Y1,X,X1,V1,V2,U satisfying I(X1; Z) ≥ I(X1; Y|U, V1), of the sets

{(R0, R1, R2) :
R0 ≤ min{I(U; Z|X1), I(U; Y)},
R0 + R2 ≤ min{I(U; Z|X1), I(U; Y)} + I(V2; Z|U, X1),
R1 ≤ I(V1; Y|U) − I(V1; V2|U) − I(V1; Z|U, V2, X1),
R2 ≤ min{I(X1; Y|U, V1, V2), I(X1; Z)} + I(V2; Z|U, X1) − I(V1; V2|U) − I(X1, V2; Y|U, V1)}.

Proof: Substituting Re1 = R1 and Re2 = R2 into the region R(Ai2), Corollary 3 is easily checked.
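Deciding which of the two unions a given input distribution falls into amounts to comparing mutual informations such as I(X1; Y) and I(X1; Z). The sketch below computes I(X; Y) directly from a joint pmf; the two relay-to-receiver channels (a BSC(0.05) and a noisier BSC(0.2)) and the uniform input are illustrative assumptions, not distributions from the paper:

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint pmf given as {(x, y): prob}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    mi = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (px[x] * py[y]))
    return mi

def joint_from(px, channel):
    """Build {(x, y): p} from an input pmf px and a conditional pmf channel[x][y]."""
    return {(x, y): px[x] * channel[x][y] for x in px for y in channel[x]}

# Illustrative relay links: BSC(0.05) to receiver 1, noisier BSC(0.2) to receiver 2.
px1 = {0: 0.5, 1: 0.5}
to_rx1 = {0: {0: 0.95, 1: 0.05}, 1: {0: 0.05, 1: 0.95}}
to_rx2 = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.2, 1: 0.8}}

i_y = mutual_information(joint_from(px1, to_rx1))
i_z = mutual_information(joint_from(px1, to_rx2))
print(i_y >= i_z)  # True: receiver 1's relay link is the better one here
```

With these illustrative numbers the condition for the La-type union holds, so receiver 1 would be the one allowed to decode the relay codeword while it acts as pure noise for receiver 2.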

The third step is to characterize an inner bound on the capacity-equivocation region by using a combination of Cover-Gamal's compress-and-forward (CF) strategy [35] and the GNF strategy, i.e., in addition to the independent codewords, the relay also sends a quantized version of its noisy observations to the receivers. This noisy version of the relay's observations helps the receivers in decoding the transmitter's messages, while the independent codewords help in confusing the receivers. Similar to Theorem 3, if the channel from the relay to receiver 1 is less noisy than the channel from the relay to receiver 2, we allow receiver 1 to decode the relay codeword, and receiver 2 cannot decode it. Analogously, if the channel from the relay to receiver 1 is more noisy than the channel from the relay to receiver 2, we allow receiver 2 to decode the relay codeword, and receiver 1 cannot decode it.

Theorem 4: (Inner bound 3: CF strategy) A single-letter characterization of the region R(Ai3) (R(Ai3) ⊆ R(A)) is as follows:

R(Ai3) = L3 ∪ L4,

where L3 is the union, over all PY,Z,Y1,Ŷ1,X,X1,V1,V2,U satisfying I(X1; Y) ≥ I(X1; Z|U, V2) and Rr1* − R* ≥ I(Y1; Ŷ1|X1), of the sets

{(R0, R1, R2, Re1, Re2) :
Re1 ≤ R1, Re2 ≤ R2,
R0 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z)},
R0 + R1 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z)} + I(V1; Y, Ŷ1|U, X1),
R0 + R2 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z)} + I(V2; Z|U),
R0 + R1 + R2 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z)} + I(V1; Y, Ŷ1|U, X1) + I(V2; Z|U) − I(V1; V2|U),
Re1 ≤ R* + I(V1; Y, Ŷ1|U, X1) − I(V1; V2|U) − I(X1, V1; Z|U, V2),
Re2 ≤ I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1)},

with Rr1* = min{I(X1; Z|U, V1, V2), I(X1; Y)}, and L4 is the union, over all PY,Z,Y1,Ŷ1,X,X1,V1,V2,U satisfying I(X1; Z) ≥ I(X1; Y|U, V1) and Rr2* − R* ≥ I(Y1; Ŷ1|X1), of the sets

{(R0, R1, R2, Re1, Re2) :
Re1 ≤ R1, Re2 ≤ R2,
R0 ≤ min{I(U; Z, Ŷ1|X1), I(U; Y)},
R0 + R1 ≤ min{I(U; Z, Ŷ1|X1), I(U; Y)} + I(V1; Y|U),
R0 + R2 ≤ min{I(U; Z, Ŷ1|X1), I(U; Y)} + I(V2; Z, Ŷ1|U, X1),
R0 + R1 + R2 ≤ min{I(U; Z, Ŷ1|X1), I(U; Y)} + I(V1; Y|U) + I(V2; Z, Ŷ1|U, X1) − I(V1; V2|U),
Re1 ≤ I(V1; Y|U) − I(V1; V2|U) − I(V1; Z|U, V2, X1),
Re2 ≤ R* + I(V2; Z, Ŷ1|U, X1) − I(V1; V2|U) − I(X1, V2; Y|U, V1)},

with Rr2* = min{I(X1; Y|U, V1, V2), I(X1; Z)}.

The joint probability PY,Z,Y1,Ŷ1,X,X1,V1,V2,U(y, z, y1, ŷ1, x, x1, v1, v2, u) satisfies

PY,Z,Y1,Ŷ1,X,X1,V1,V2,U(y, z, y1, ŷ1, x, x1, v1, v2, u) = PY,Z,Y1|X,X1(y, z, y1|x, x1) PX|U,V1,V2(x|u, v1, v2) PU,V1,V2(u, v1, v2) PŶ1|Y1,X1(ŷ1|y1, x1) PX1(x1).

Remark 4: There are some notes on Theorem 4, as follows.




∗ In Theorem 4, R∗ is the rate of pure noise generated by the relay to confuse the receivers, while Rr1 − R∗ ∗ ∗ (Rr2 − R∗ ) is the part of the rate allocated to send the compressed signal Yˆ1 to help the receivers. If R∗ = Rr1 ∗ (R∗ = Rr2 ), this scheme is exactly the same as the GNF scheme used in Theorem 3.



The third inner bound on the secrecy capacity region of Figure 1 is denoted as CsAi3, which is the set of triples (R0, R1, R2) such that (R0, R1, R2, Re1 = R1, Re2 = R2) ∈ R(Ai3).
Corollary 4: CsAi3 = Lc ∪ Ld, where Lc is given by

Lc = ∪_{PY,Z,Y1,Ŷ1,X,X1,V1,V2,U : I(X1; Y) ≥ I(X1; Z|U, V2), R∗r1 − R∗ ≥ I(Y1; Ŷ1|X1)} {
(R0, R1, R2) :
R0 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z)},
R0 + R1 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z)} + I(V1; Y, Ŷ1|U, X1),
R1 ≤ R∗ + I(V1; Y, Ŷ1|U, X1) − I(V1; V2|U) − I(X1, V1; Z|U, V2),
R2 ≤ I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1) },

where R∗r1 = min{I(X1; Z|U, V1, V2), I(X1; Y)}, and Ld is given by

Ld = ∪_{PY,Z,Y1,Ŷ1,X,X1,V1,V2,U : I(X1; Z) ≥ I(X1; Y|U, V1), R∗r2 − R∗ ≥ I(Y1; Ŷ1|X1)} {
(R0, R1, R2) :
R0 ≤ min{I(U; Z, Ŷ1|X1), I(U; Y)},
R0 + R2 ≤ min{I(U; Z, Ŷ1|X1), I(U; Y)} + I(V2; Z, Ŷ1|U, X1),
R1 ≤ I(V1; Y|U) − I(V1; V2|U) − I(V1; Z|U, V2, X1),
R2 ≤ R∗ + I(V2; Z, Ŷ1|U, X1) − I(V1; V2|U) − I(X1, V2; Y|U, V1) },

where R∗r2 = min{I(X1; Y|U, V1, V2), I(X1; Z)}.

Proof: Substituting Re1 = R1 and Re2 = R2 into the region R(Ai3), Corollary 4 is easily checked.

III. RELAY BROADCAST CHANNELS WITH TWO CONFIDENTIAL MESSAGES

In this section, the main results on the model of Figure 2 are provided in Subsection III-A, and the results are further explained via a Gaussian example, see Subsection III-B.

A. Problem formulation and the main results

The model of Figure 2 is similar to the model of Figure 1, except that there is no common message W0. The channel encoder is a stochastic encoder that maps the messages W1 and W2 into a codeword xN ∈ X N.


The decoder for receiver 1 is a mapping fD1 : Y N → W1, with input Y N and output W̌1. Let Pe1 be the error probability of receiver 1, and it is defined as Pr{W̌1 ≠ W1}.
Analogously, the decoder for receiver 2 is a mapping fD2 : Z N → W2, with input Z N and output Ŵ2. Let Pe2 be the error probability of receiver 2, and it is defined as Pr{Ŵ2 ≠ W2}.
A rate quadruple (R1, R2, Re1, Re2) (where R1, R2, Re1, Re2 > 0) is called achievable if, for any ε > 0 (where ε is an arbitrarily small positive real number), there exists a channel encoder-decoder (N, ∆1, ∆2, Pe1, Pe2) such that

lim_{N→∞} (log ‖W1‖)/N = R1, lim_{N→∞} (log ‖W2‖)/N = R2,
lim_{N→∞} ∆1 ≥ Re1, lim_{N→∞} ∆2 ≥ Re2, Pe1 ≤ ε, Pe2 ≤ ε.   (3.4)

The capacity-equivocation region R(B) is a set composed of all achievable (R1, R2, Re1, Re2) quadruples. The inner and outer bounds on the capacity-equivocation region R(B) are provided from Theorem 5 to Theorem 8, see the remainder of this subsection.
The first result is an outer bound on the capacity-equivocation region of the model of Figure 2.
Theorem 5: (Outer bound) A single-letter characterization of the region R(Bo) (R(B) ⊆ R(Bo)) is as follows,
R(Bo) = {(R1, R2, Re1, Re2) : Re1 ≤ R1, Re2 ≤ R2,
R1 ≤ min{I(U1, V1; Y), I(V1; Y, Y1|U1)},
R2 ≤ min{I(U2, V2; Z), I(V2; Z, Y1|U2)},
R1 + R2 ≤ I(U2, V1; Y, Y1|U1) + I(V2; Z, Y1|U1, U2, V1),
R1 + R2 ≤ I(U1, V2; Z, Y1|U2) + I(V1; Y, Y1|U1, U2, V2),
Re1 ≤ min{I(V1; Y|V2) − I(V1; Z|V2), I(V1; Y|U) − I(V1; Z|U)},
Re2 ≤ min{I(V2; Z|V1) − I(V2; Y|V1), I(V2; Z|U) − I(V2; Y|U)}},
where U → (U1, U2, V1, V2) → (X, X1) → (Y, Y1, Z).
Remark 5: There are some notes on Theorem 5, see the following.

Theorem 5 is directly obtained from Theorem 1 by letting R0 = 0, and therefore, we omit the proof here.



Removing the relay node from the model of Figure 2, the model reduces to the broadcast channels with two confidential messages [22]. Letting U1 = U2 = Y1 = const, the region R(Bo) is exactly the same as the outer bound in [22].



The outer bound on the secrecy capacity region of Figure 2 is denoted as CsBo , which is the set of pairs (R1 , R2 ) such that (R1 , R2 , Re1 = R1 , Re2 = R2 ) ∈ R(Bo) .


Corollary 5:
CsBo = {(R1, R2) :
R1 ≤ min{I(U1, V1; Y), I(V1; Y, Y1|U1)},
R2 ≤ min{I(U2, V2; Z), I(V2; Z, Y1|U2)},
R1 + R2 ≤ I(U2, V1; Y, Y1|U1) + I(V2; Z, Y1|U1, U2, V1),
R1 + R2 ≤ I(U1, V2; Z, Y1|U2) + I(V1; Y, Y1|U1, U2, V2),
R1 ≤ min{I(V1; Y|V2) − I(V1; Z|V2), I(V1; Y|U) − I(V1; Z|U)},
R2 ≤ min{I(V2; Z|V1) − I(V2; Y|V1), I(V2; Z|U) − I(V2; Y|U)}}.
Proof: Substituting Re1 = R1 and Re2 = R2 into the region R(Bo), Corollary 5 is easily checked.

We now turn our attention to constructing the achievable rate-equivocation regions of the model of Figure 2. We split the confidential message W1 into W10 and W11, and W2 into W20 and W22. The messages W10 and W20 are intended to be decoded by both receivers, i.e., W10 and W20 can be viewed as the common messages for the model of Figure 2. Then, replacing the common message W0 of the model of Figure 1 by (W10, W20), the achievable regions of the model of Figure 2 follow along the lines of those for Figure 1, and therefore, the proofs of Theorem 6, Theorem 7 and Theorem 8 are omitted here.
The first inner bound on R(B) is characterized by using the DF strategy. In this DF strategy, the relay node first decodes the messages W10 and W20, and then re-encodes these messages to cooperate with the transmitter. The superposition coding and random binning techniques used in [22] are combined with the DF cooperation strategy to characterize the inner bound.
Theorem 6: (Inner bound 1: DF strategy) A single-letter characterization of the region R(Bi1) (R(Bi1) ⊆ R(B)) is as follows,
R(Bi1) = {(R1, R2, Re1, Re2) : Re1 ≤ R1, Re2 ≤ R2,
R1 ≤ min{I(U; Y1|X1), I(U, X1; Y), I(U, X1; Z)} + I(V1; Y|U, X1),
R2 ≤ min{I(U; Y1|X1), I(U, X1; Y), I(U, X1; Z)} + I(V2; Z|U, X1),
R1 + R2 ≤ min{I(U; Y1|X1), I(U, X1; Y), I(U, X1; Z)} + I(V1; Y|U, X1) + I(V2; Z|U, X1) − I(V1; V2|U, X1),
Re1 ≤ I(V1; Y|U, X1) − I(V1; V2|U, X1) − I(V1; Z|U, X1, V2),
Re2 ≤ I(V2; Z|U, X1) − I(V1; V2|U, X1) − I(V2; Y|U, X1, V1)},
for some distribution PY,Z,Y1,X,X1,V1,V2,U(y, z, y1, x, x1, v1, v2, u) = PY,Z,Y1|X,X1(y, z, y1|x, x1)PX,X1|U,V1,V2(x, x1|u, v1, v2)PU,V1,V2(u, v1, v2).
Remark 6: There are some notes on Theorem 6, see the following.




Theorem 6 is directly obtained from Theorem 2 by letting R0 = 0.



The first inner bound on the secrecy capacity region of Figure 2 is denoted as CsBi1, which is the set of pairs (R1, R2) such that (R1, R2, Re1 = R1, Re2 = R2) ∈ R(Bi1).
Corollary 6:
CsBi1 = {(R1, R2) :
R1 ≤ I(V1; Y|U, X1) − I(V1; V2|U, X1) − I(V1; Z|U, X1, V2),
R2 ≤ I(V2; Z|U, X1) − I(V1; V2|U, X1) − I(V2; Y|U, X1, V1)}.
Proof: Substituting Re1 = R1 and Re2 = R2 into the region R(Bi1), Corollary 6 is easily checked.

The second inner bound on R(B) is characterized by using the noise-and-forward (NF) strategy. In this NF strategy, the relay node sends codewords that are independent of the transmitter's messages, and these codewords aid in confusing the receivers.
Theorem 7: (Inner bound 2: NF strategy) A single-letter characterization of the region R(Bi2) (R(Bi2) ⊆ R(B)) is as follows,
R(Bi2) = L5 ∪ L6,
where L5 is given by

L5 = ∪_{PY,Z,Y1,X,X1,V1,V2,U : I(X1; Y) ≥ I(X1; Z|U, V2)} {
(R1, R2, Re1, Re2) : Re1 ≤ R1, Re2 ≤ R2,
R1 ≤ min{I(U; Y|X1), I(U; Z)} + I(V1; Y|U, X1),
R2 ≤ min{I(U; Y|X1), I(U; Z)} + I(V2; Z|U),
R1 + R2 ≤ min{I(U; Y|X1), I(U; Z)} + I(V1; Y|U, X1) + I(V2; Z|U) − I(V1; V2|U),
Re1 ≤ min{I(X1; Z|U, V1, V2), I(X1; Y)} + I(V1; Y|U, X1) − I(V1; V2|U) − I(X1, V1; Z|U, V2),
Re2 ≤ I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1) },

and L6 is given by

L6 = ∪_{PY,Z,Y1,X,X1,V1,V2,U : I(X1; Z) ≥ I(X1; Y|U, V1)} {
(R1, R2, Re1, Re2) : Re1 ≤ R1, Re2 ≤ R2,
R1 ≤ min{I(U; Z|X1), I(U; Y)} + I(V1; Y|U),
R2 ≤ min{I(U; Z|X1), I(U; Y)} + I(V2; Z|U, X1),
R1 + R2 ≤ min{I(U; Z|X1), I(U; Y)} + I(V1; Y|U) + I(V2; Z|U, X1) − I(V1; V2|U),
Re1 ≤ I(V1; Y|U) − I(V1; V2|U) − I(V1; Z|U, V2, X1),
Re2 ≤ min{I(X1; Y|U, V1, V2), I(X1; Z)} + I(V2; Z|U, X1) − I(V1; V2|U) − I(X1, V2; Y|U, V1) },


and PY,Z,Y1,X,X1,V1,V2,U(y, z, y1, x, x1, v1, v2, u) satisfies PY,Z,Y1,X,X1,V1,V2,U(y, z, y1, x, x1, v1, v2, u) = PY,Z,Y1|X,X1(y, z, y1|x, x1)PX|U,V1,V2(x|u, v1, v2)PU,V1,V2(u, v1, v2)PX1(x1).
Remark 7: There are some notes on Theorem 7, see the following.

Theorem 7 is directly obtained from Theorem 3 by letting R0 = 0.



The second inner bound on the secrecy capacity region of Figure 2 is denoted as CsBi2, which is the set of pairs (R1, R2) such that (R1, R2, Re1 = R1, Re2 = R2) ∈ R(Bi2).
Corollary 7: CsBi2 = Le ∪ Lf, where Le is given by

Le = ∪_{PY,Z,Y1,X,X1,V1,V2,U : I(X1; Y) ≥ I(X1; Z|U, V2)} {
(R1, R2) :
R1 ≤ min{I(U; Y|X1), I(U; Z)} + I(V1; Y|U, X1),
R1 ≤ min{I(X1; Z|U, V1, V2), I(X1; Y)} + I(V1; Y|U, X1) − I(V1; V2|U) − I(X1, V1; Z|U, V2),
R2 ≤ I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1) },

and Lf is given by

Lf = ∪_{PY,Z,Y1,X,X1,V1,V2,U : I(X1; Z) ≥ I(X1; Y|U, V1)} {
(R1, R2) :
R2 ≤ min{I(U; Z|X1), I(U; Y)} + I(V2; Z|U, X1),
R1 ≤ I(V1; Y|U) − I(V1; V2|U) − I(V1; Z|U, V2, X1),
R2 ≤ min{I(X1; Y|U, V1, V2), I(X1; Z)} + I(V2; Z|U, X1) − I(V1; V2|U) − I(X1, V2; Y|U, V1) }.

Proof: Substituting Re1 = R1 and Re2 = R2 into the region R(Bi2), Corollary 7 is easily checked.

The third inner bound on R(B) is characterized by using a combination of the compress-and-forward (CF) strategy and the GNF strategy, and this inner bound is similar to Theorem 4.
Theorem 8: (Inner bound 3: CF strategy) A single-letter characterization of the region R(Bi3) (R(Bi3) ⊆ R(B)) is as follows,
R(Bi3) = L7 ∪ L8,
where L7 is given by

L7 = ∪_{PY,Z,Y1,Ŷ1,X,X1,V1,V2,U : I(X1; Y) ≥ I(X1; Z|U, V2), R∗r1 − R∗ ≥ I(Y1; Ŷ1|X1)} {
(R1, R2, Re1, Re2) : Re1 ≤ R1, Re2 ≤ R2,
R1 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z)} + I(V1; Y, Ŷ1|U, X1),
R2 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z)} + I(V2; Z|U),
R1 + R2 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z)} + I(V1; Y, Ŷ1|U, X1) + I(V2; Z|U) − I(V1; V2|U),
Re1 ≤ R∗ + I(V1; Y, Ŷ1|U, X1) − I(V1; V2|U) − I(X1, V1; Z|U, V2),
Re2 ≤ I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1) },

where R∗r1 = min{I(X1; Z|U, V1, V2), I(X1; Y)}, and L8 is given by

L8 = ∪_{PY,Z,Y1,Ŷ1,X,X1,V1,V2,U : I(X1; Z) ≥ I(X1; Y|U, V1), R∗r2 − R∗ ≥ I(Y1; Ŷ1|X1)} {
(R1, R2, Re1, Re2) : Re1 ≤ R1, Re2 ≤ R2,
R1 ≤ min{I(U; Z, Ŷ1|X1), I(U; Y)} + I(V1; Y|U),
R2 ≤ min{I(U; Z, Ŷ1|X1), I(U; Y)} + I(V2; Z, Ŷ1|U, X1),
R1 + R2 ≤ min{I(U; Z, Ŷ1|X1), I(U; Y)} + I(V1; Y|U) + I(V2; Z, Ŷ1|U, X1) − I(V1; V2|U),
Re1 ≤ I(V1; Y|U) − I(V1; V2|U) − I(V1; Z|U, V2, X1),
Re2 ≤ R∗ + I(V2; Z, Ŷ1|U, X1) − I(V1; V2|U) − I(X1, V2; Y|U, V1) },

where R∗r2 = min{I(X1; Y|U, V1, V2), I(X1; Z)}.

The joint probability PY,Z,Y1,Ŷ1,X,X1,V1,V2,U(y, z, y1, ŷ1, x, x1, v1, v2, u) satisfies PY,Z,Y1,Ŷ1,X,X1,V1,V2,U(y, z, y1, ŷ1, x, x1, v1, v2, u) = PY,Z,Y1|X,X1(y, z, y1|x, x1)PX|U,V1,V2(x|u, v1, v2)PU,V1,V2(u, v1, v2)PŶ1|Y1,X1(ŷ1|y1, x1)PX1(x1).
Remark 8: There are some notes on Theorem 8, see the following.

Theorem 8 is directly obtained from Theorem 4 by letting R0 = 0.



The third inner bound on the secrecy capacity region of Figure 2 is denoted as CsBi3, which is the set of pairs (R1, R2) such that (R1, R2, Re1 = R1, Re2 = R2) ∈ R(Bi3).
Corollary 8: CsBi3 = Lg ∪ Lh, where Lg is given by

Lg = ∪_{PY,Z,Y1,Ŷ1,X,X1,V1,V2,U : I(X1; Y) ≥ I(X1; Z|U, V2), R∗r1 − R∗ ≥ I(Y1; Ŷ1|X1)} {
(R1, R2) :
R1 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z)} + I(V1; Y, Ŷ1|U, X1),
R1 ≤ R∗ + I(V1; Y, Ŷ1|U, X1) − I(V1; V2|U) − I(X1, V1; Z|U, V2),
R2 ≤ I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1) },

where R∗r1 = min{I(X1; Z|U, V1, V2), I(X1; Y)}, and Lh is given by

Lh = ∪_{PY,Z,Y1,Ŷ1,X,X1,V1,V2,U : I(X1; Z) ≥ I(X1; Y|U, V1), R∗r2 − R∗ ≥ I(Y1; Ŷ1|X1)} {
(R1, R2) :
R1 ≤ I(V1; Y|U) − I(V1; V2|U) − I(V1; Z|U, V2, X1),
R2 ≤ min{I(U; Z, Ŷ1|X1), I(U; Y)} + I(V2; Z, Ŷ1|U, X1),
R2 ≤ R∗ + I(V2; Z, Ŷ1|U, X1) − I(V1; V2|U) − I(X1, V2; Y|U, V1) },

where R∗r2 = min{I(X1; Y|U, V1, V2), I(X1; Z)}.

Proof: Substituting Re1 = R1 and Re2 = R2 into the region R(Bi3), Corollary 8 is easily checked.

B. Gaussian relay broadcast channels with two confidential messages

In this subsection, we investigate the Gaussian case of the model of Figure 2. The signal received at each node is given by
Y1 = X + Zr, Y = X + X1 + Z1, Z = X + X1 + Z2,

(3.5)

where Zr ∼ N(0, Nr), Z1 ∼ N(0, N1), Z2 ∼ N(0, N2), and they are independent, E[X2] ≤ P1, E[X12] ≤ P2. In this subsection, we assume that N1 + P1 ≤ N2, which implies that the channel for receiver 1 is less noisy than the channel for receiver 2.
The inner bound on the secrecy capacity region of Figure 2 by using the DF strategy is given by

CsBi1 = ∪_{0≤α≤1, 0≤β≤1} {(R1, R2) :
R1 ≤ (1/2) log(((1−β)P1 + N1)/((1−β)(1−α)P1 + N1)) − (1/2) log(((1−β)αP1 + N2)/N2),
R2 ≤ (1/2) log(((1−β)P1 + N2)/((1−β)αP1 + N2)) − (1/2) log(((1−β)(1−α)P1 + N1)/N1) }.   (3.6)

Here CsBi1 is obtained by letting X = U + V1 + V2 and U = c1X1 + X10, where U ∼ N(0, βP1), V1 ∼ N(0, (1−β)P1α), V2 ∼ N(0, (1−β)P1(1−α)), X10 ∼ N(0, βγP1) (0 ≤ γ ≤ 1), and c1 = √(P1β(1−γ)/P2). X10, X1, V1, U and V2 are independent random variables.
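To illustrate how (3.6) follows from Corollary 6 with this Gaussian choice, the two mutual-information terms of the R1 bound evaluate as follows (a sketch of our computation; note I(V1; V2|U, X1) = 0 since V1 and V2 are independent):

```latex
I(V_1;Y|U,X_1) = \frac{1}{2}\log\frac{(1-\beta)P_1+N_1}{(1-\beta)(1-\alpha)P_1+N_1},
\qquad
I(V_1;Z|U,X_1,V_2) = \frac{1}{2}\log\frac{(1-\beta)\alpha P_1+N_2}{N_2},
```

so R1 ≤ I(V1; Y|U, X1) − I(V1; V2|U, X1) − I(V1; Z|U, X1, V2) reduces to the first inequality of (3.6); the R2 bound follows symmetrically with the roles of Y and Z exchanged.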


Then, the inner bound on the secrecy capacity region by using the NF strategy is given by

CsBi2 = ∪_{0≤α≤1, 0≤β≤1} {(R1, R2) :
R1 ≤ min{(1/2) log((P1 + N1)/((1−β)P1 + N1)), (1/2) log((P1 + P2 + N2)/((1−β)P1 + P2 + N2))} + (1/2) log(((1−β)P1 + N1)/((1−β)P1(1−α) + N1)),
R1 ≤ min{(1/2) log((P2 + N2)/N2), (1/2) log((P1 + P2 + N1)/(P1 + N1))} + (1/2) log(((1−β)P1 + N1)/((1−β)P1(1−α) + N1)) − (1/2) log(((1−β)P1α + P2 + N2)/N2),
R2 ≤ (1/2) log(((1−β)P1 + P2 + N2)/((1−β)αP1 + P2 + N2)) − (1/2) log(((1−α)(1−β)P1 + N1)/N1) }.   (3.7)

Here note that N1 + P1 ≤ N2 implies that I(X1; Y) ≥ I(X1; Z|U, V2), and CsBi2 is obtained by letting X =

U + V1 + V2, where V1 ∼ N(0, (1−β)P1α), V2 ∼ N(0, (1−β)P1(1−α)), U ∼ N(0, βP1). X1, U, V1 and V2 are independent random variables.
Next, the inner bound on the secrecy capacity region by using the CF strategy is given by

CsBi3 = ∪_{0≤α≤1, 0≤β≤1} {(R1, R2) :
R1 ≤ min{(1/2) log((P1(Nr+Q+N1) + N1(Nr+Q))/((1−β)P1(Nr+Q+N1) + N1(Nr+Q))), (1/2) log((P1 + P2 + N2)/((1−β)P1 + P2 + N2))} + (1/2) log(((1−β)P1(Nr+Q+N1) + N1(Nr+Q))/((1−β)P1(1−α)(Nr+Q+N1) + N1(Nr+Q))),
R1 ≤ R∗ + (1/2) log(((1−β)P1(Nr+Q+N1) + N1(Nr+Q))/((1−β)P1(1−α)(Nr+Q+N1) + N1(Nr+Q))) − (1/2) log(((1−β)P1α + P2 + N2)/N2),
R2 ≤ (1/2) log(((1−β)P1 + P2 + N2)/((1−β)αP1 + P2 + N2)) − (1/2) log(((1−α)(1−β)P1 + N1)/N1) },   (3.8)

subject to

0 ≤ R∗ ≤ min{(1/2) log((P2 + N2)/N2), (1/2) log((P1 + P2 + N1)/(P1 + N1))} − (1/2) log((P1 + Q + Nr)/Q).   (3.9)

Here note that N1 + P1 ≤ N2 implies that I(X1; Y) ≥ I(X1; Z|U, V2), and CsBi3 is obtained by letting X = U + V1 + V2, Ŷ1 = Y1 + ZQ, where ZQ ∼ N(0, Q), V1 ∼ N(0, (1−β)P1α), V2 ∼ N(0, (1−β)P1(1−α)), U ∼ N(0, βP1). X1, U, V1 and V2 are independent random variables.
Finally, remember that [22] provides an inner bound on the secrecy capacity region of the broadcast channels with two confidential messages, and it is given by
Cs(Bi) = {(R1, R2) :
R1 ≤ I(V1; Y|U) − I(V1; V2|U) − I(V1; Z|V2, U),
R2 ≤ I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|V1, U)}.
Letting X = U + V1 + V2, V1 ∼ N(0, (1−β)P1α), V2 ∼ N(0, (1−β)P1(1−α)), U ∼ N(0, βP1), Y = X + Z1, Z = X + Z2, Z1 ∼ N(0, N1), Z2 ∼ N(0, N2), and E[X2] ≤ P1, we find that the secrecy rate region of the Gaussian case of [22] is exactly the same as (3.6), i.e., the DF strategy cannot enhance the secrecy rate region of the broadcast channels with two confidential messages [22].
Note that N1 + P1 ≤ N2 implies N1 ≤ N2; hence CsBi1, CsBi2, CsBi3 and CsBi all satisfy R2 = 0. Letting P1 = 5, P2 = 3, N1 = 2, N2 = 8, Nr = 2 and Q = 300, and maximizing the secrecy rate R1 of CsBi1, CsBi2, CsBi3 and CsBi, the following Figure 4 shows the relationship between R1 and α for different cooperation strategies. It is easy


to see that the NF strategy and the CF strategy help to obtain larger achievable secrecy rates. The DF strategy obtains the same secrecy rate as that of the Gaussian case of [22]. In addition, when Q → ∞, the inner bound for the CF strategy is exactly the same as that for the NF strategy.
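The maximization behind this comparison is easy to reproduce numerically. The following is a sketch only: the helper names are ours, R∗ is taken at its upper bound (3.9), rates are in bits (base-2 logarithms), and α, β are swept over a uniform grid.

```python
from math import log2

# Channel parameters of the paper's numerical example (Figure 4).
P1, P2, N1, N2, Nr, Q = 5.0, 3.0, 2.0, 8.0, 2.0, 300.0

def C(x):
    """Gaussian mutual-information building block: (1/2) log2(x)."""
    return 0.5 * log2(x)

def r1_df(a, b):
    """R1 bound of the DF inner bound (3.6); a = alpha, b = beta."""
    return (C(((1-b)*P1 + N1) / ((1-b)*(1-a)*P1 + N1))
            - C(((1-b)*a*P1 + N2) / N2))

def r1_nf(a, b):
    """R1 of the NF inner bound (3.7): the minimum of its two R1 constraints."""
    gain = C(((1-b)*P1 + N1) / ((1-b)*(1-a)*P1 + N1))
    first = min(C((P1 + N1) / ((1-b)*P1 + N1)),
                C((P1 + P2 + N2) / ((1-b)*P1 + P2 + N2))) + gain
    second = (min(C((P2 + N2) / N2), C((P1 + P2 + N1) / (P1 + N1)))
              + gain - C(((1-b)*a*P1 + P2 + N2) / N2))
    return min(first, second)

def r1_cf(a, b):
    """R1 of the CF inner bound (3.8), with R* at its upper bound (3.9)."""
    Rs = min(C((P2 + N2) / N2), C((P1 + P2 + N1) / (P1 + N1))) - C((P1 + Q + Nr) / Q)
    s, t = Nr + Q + N1, N1 * (Nr + Q)   # terms created by the compression noise Q
    gain = C(((1-b)*P1*s + t) / ((1-b)*(1-a)*P1*s + t))
    first = min(C((P1*s + t) / ((1-b)*P1*s + t)),
                C((P1 + P2 + N2) / ((1-b)*P1 + P2 + N2))) + gain
    second = Rs + gain - C(((1-b)*a*P1 + P2 + N2) / N2)
    return min(first, second)

# Maximize each achievable R1 over the power-allocation parameters alpha, beta.
grid = [i / 200 for i in range(201)]
best = {name: max(f(a, b) for a in grid for b in grid)
        for name, f in (("DF", r1_df), ("NF", r1_nf), ("CF", r1_cf))}
print(best)
```

With these parameters the maximized rates come out ordered DF < CF < NF, matching Figure 4, and increasing Q moves the CF value toward the NF value.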

Fig. 4: The achievable secrecy rate R1 of the model of Figure 2

IV. RELAY BROADCAST CHANNELS WITH ONE CONFIDENTIAL MESSAGE AND ONE COMMON MESSAGE

In this section, the main results on the model of Figure 3 are provided in Subsection IV-A, and the results are further explained via a Gaussian example, see Subsection IV-B.

A. Problem formulation and the main results

The model of Figure 3 is similar to the model of Figure 1, except that there is no confidential message W2. The channel encoder is a stochastic encoder that maps the messages W0 and W1 into a codeword xN ∈ X N.
The decoder for receiver 1 is a mapping fD1 : Y N → W0 × W1, with input Y N and outputs W̌0 and W̌1. Let Pe1 be the error probability of receiver 1, and it is defined as Pr{(W̌0, W̌1) ≠ (W0, W1)}.
Analogously, the decoder for receiver 2 is a mapping fD2 : Z N → W0, with input Z N and output Ŵ0. Let Pe2 be the error probability of receiver 2, and it is defined as Pr{Ŵ0 ≠ W0}.
A rate triple (R0, R1, Re) (where R0, R1, Re > 0) is called achievable if, for any ε > 0 (where ε is an arbitrarily small positive real number), there exists a channel encoder-decoder (N, ∆, Pe1, Pe2) such that

lim_{N→∞} (log ‖W0‖)/N = R0, lim_{N→∞} (log ‖W1‖)/N = R1,
lim_{N→∞} ∆ ≥ Re, Pe1 ≤ ε, Pe2 ≤ ε.   (4.10)


The capacity-equivocation region R(C) is a set composed of all achievable (R0, R1, Re) triples. The inner and outer bounds on the capacity-equivocation region R(C) are provided from Theorem 9 to Theorem 12, see the remainder of this subsection.
The first result is an outer bound on the capacity-equivocation region of the model of Figure 3.
Theorem 9: (Outer bound) A single-letter characterization of the region R(Co) (R(C) ⊆ R(Co)) is as follows,
R(Co) = {(R0, R1, Re) : Re ≤ R1,
R0 ≤ min{I(U, U1; Y), I(U; Y, Y1|U1)},
R0 ≤ min{I(U, U2; Z), I(U; Z, Y1|U2)},
R0 + R1 ≤ min{I(U, U1, V; Y), I(U, V; Y, Y1|U1)},
R0 + R1 ≤ I(U, U1; Z, Y1|U2) + I(V; Y, Y1|U, U1, U2),
Re ≤ I(V; Y|U) − I(V; Z|U)},
where U → (U1, U2, V) → (X, X1) → (Y, Y1, Z).
Remark 9: There are some notes on Theorem 9, see the following.

Theorem 9 is directly obtained from Theorem 1 by letting R2 = 0, Re2 = 0 and V2 = const, and therefore, the proof of Theorem 9 is omitted here.



Removing the relay node from the model of Figure 3, the model reduces to the broadcast channels with one confidential message and one common message [20]. Letting U1 = U2 = Y1 = const, the region R(Co) is exactly the same as the capacity-equivocation region in [20].



The outer bound on the secrecy capacity region of Figure 3 is denoted as CsCo, which is the set of pairs (R0, R1) such that (R0, R1, Re = R1) ∈ R(Co).
Corollary 9:
CsCo = {(R0, R1) :
R0 ≤ min{I(U, U1; Y), I(U; Y, Y1|U1)},
R0 ≤ min{I(U, U2; Z), I(U; Z, Y1|U2)},
R0 + R1 ≤ min{I(U, U1, V; Y), I(U, V; Y, Y1|U1)},
R0 + R1 ≤ I(U, U1; Z, Y1|U2) + I(V; Y, Y1|U, U1, U2),
R1 ≤ I(V; Y|U) − I(V; Z|U)}.
Proof: Substituting Re = R1 into the region R(Co), Corollary 9 is easily checked.

We now turn our attention to constructing cooperation strategies for the model of Figure 3. Our first step is to characterize the inner bound on the capacity-equivocation region by using DF Strategy. In the DF Strategy, the relay node will first decode the common message, and then re-encode the common message to cooperate with the


transmitter. Then, the superposition coding and random binning techniques used in [20] will be combined with the DF cooperation strategy to characterize the inner bound. Theorem 10: (Inner bound 1: DF strategy) A single-letter characterization of the region R(Ci1) (R(Ci1) ⊆ R(C) ) is as follows, R(Ci1) = {(R0 , R1 , Re ) : Re ≤ R1 , R0 ≤ min{I(U ; Y1 |X1 ), I(U, X1 ; Y ), I(U, X1 ; Z)}, R0 + R1 ≤ min{I(U ; Y1 |X1 ), I(U, X1 ; Y ), I(U, X1 ; Z)} + I(V ; Y |U, X1 ), Re ≤ I(V ; Y |U, X1 ) − I(V ; Z|U, X1 ), for some distribution PY,Z,Y1 ,X,X1 ,V,U (y, z, y1 , x, x1 , v, u) = PY,Z,Y1 |X,X1 (y, z, y1 |x, x1 )PX,X1 |U,V (x, x1 |u, v)PU,V (u, v). Remark 10: There are some notes on Theorem 10, see the following. •

The inequality R0 ≤ min{I(U; Y1|X1), I(U, X1; Y), I(U, X1; Z)} follows from the fact that the relay node decodes and forwards the common message W0. The inequalities R0 + R1 ≤ min{I(U; Y1|X1), I(U, X1; Y), I(U, X1; Z)} + I(V; Y|U, X1) and Re ≤ I(V; Y|U, X1) − I(V; Z|U, X1) follow from the fact that the relay codeword x1N can be decoded by both receivers, and from Csiszár-Körner's techniques on broadcast channels with confidential messages [20]. Since the proof is obvious, we omit the details of the proof of Theorem 10.



The first inner bound on the secrecy capacity region of Figure 3 is denoted as CsCi1, which is the set of pairs (R0, R1) such that (R0, R1, Re = R1) ∈ R(Ci1).
Corollary 10:
CsCi1 = {(R0, R1) :
R0 ≤ min{I(U; Y1|X1), I(U, X1; Y), I(U, X1; Z)},
R1 ≤ I(V; Y|U, X1) − I(V; Z|U, X1)}.
Proof: Substituting Re = R1 into the region R(Ci1), Corollary 10 is easily checked.

The second step is to characterize the inner bound on the capacity-equivocation region by using the NF strategy. In the NF strategy, the relay node does not attempt to decode the messages but sends codewords that are independent of the transmitter's messages, and these codewords aid in confusing the receivers. Different from the NF strategies for the models with two confidential messages, the NF strategy for the model with one confidential message is divided into the following two cases.
• (Case 1) If the channel from the relay to receiver 1 is less noisy than the channel from the relay to receiver 2 (I(X1; Y) ≥ I(X1; Z|U)), we allow receiver 1 to decode x1N, while receiver 2 cannot decode it. Therefore, in this case, x1N can be viewed as a noise signal to confuse receiver 2.




• (Case 2) If the channel from the relay to receiver 1 is more noisy than the channel from the relay to receiver 2, we allow both receivers to decode x1N, and therefore, in this case, the relay codeword x1N cannot make any contribution to the security of the model of Figure 3.
Theorem 11: (Inner bound 2: NF strategy) A single-letter characterization of the region R(Ci2) (R(Ci2) ⊆ R(C)) is as follows,
R(Ci2) = L9 ∪ L10,
where L9 is given by

L9 = ∪_{PY,Z,Y1,X,X1,V,U : I(X1; Y) ≥ I(X1; Z|U)} {
(R0, R1, Re) : Re ≤ R1,
R0 ≤ min{I(U; Y|X1), I(U; Z)},
R0 + R1 ≤ min{I(U; Y|X1), I(U; Z)} + I(V; Y|U, X1),
Re ≤ min{I(X1; Z|U, V), I(X1; Y)} + I(V; Y|U, X1) − I(X1, V; Z|U) },

and L10 is given by

L10 = ∪_{PY,Z,Y1,X,X1,V,U : I(X1; Z) ≥ I(X1; Y)} {
(R0, R1, Re) : Re ≤ R1,
R0 ≤ min{I(U; Y|X1), I(U; Z|X1)},
R0 + R1 ≤ min{I(U; Y|X1), I(U; Z|X1)} + I(V; Y|U, X1),
Re ≤ I(V; Y|U, X1) − I(V; Z|U, X1) },

and PY,Z,Y1,X,X1,V,U(y, z, y1, x, x1, v, u) satisfies PY,Z,Y1,X,X1,V,U(y, z, y1, x, x1, v, u) = PY,Z,Y1|X,X1(y, z, y1|x, x1)PX|U,V(x|u, v)PU,V(u, v)PX1(x1).
The proof of Theorem 11 is in Appendix E.
Remark 11: There are some notes on Theorem 11, see the following.

The regions L9 and L10 are characterized according to case 1 and case 2, respectively.



The second inner bound on the secrecy capacity region of Figure 3 is denoted as CsCi2, which is the set of pairs (R0, R1) such that (R0, R1, Re = R1) ∈ R(Ci2).
Corollary 11: CsCi2 = Li ∪ Lj, where Li is given by

Li = ∪_{PY,Z,Y1,X,X1,V,U : I(X1; Y) ≥ I(X1; Z|U)} {
(R0, R1) :
R0 ≤ min{I(U; Y|X1), I(U; Z)},
R0 + R1 ≤ min{I(U; Y|X1), I(U; Z)} + I(V; Y|U, X1),
R1 ≤ min{I(X1; Z|U, V), I(X1; Y)} + I(V; Y|U, X1) − I(X1, V; Z|U) },

and Lj is given by

Lj = ∪_{PY,Z,Y1,X,X1,V,U : I(X1; Z) ≥ I(X1; Y)} {
(R0, R1) :
R0 ≤ min{I(U; Y|X1), I(U; Z|X1)},
R1 ≤ I(V; Y|U, X1) − I(V; Z|U, X1) }.

Proof: Substituting Re = R1 into the region R(Ci2), Corollary 11 is easily checked.
The third step is to characterize the inner bound on the capacity-equivocation region by using a combination of the CF strategy and the NF strategy, i.e., in addition to the independent codewords, the relay also sends a quantized version of its noisy observations to the receivers. This noisy version of the relay's observations helps the receivers in decoding the transmitter's messages, while the independent codewords help in confusing the receivers. Similar to Theorem 11, we consider the CF strategy in two cases.
Theorem 12: (Inner bound 3: CF strategy) A single-letter characterization of the region R(Ci3) (R(Ci3) ⊆ R(C)) is as follows,
R(Ci3) = L11 ∪ L12,
where L11 is given by

L11 = ∪_{PY,Z,Y1,Ŷ1,X,X1,V,U : I(X1; Y) ≥ I(X1; Z|U), R∗r1 − R∗ ≥ I(Y1; Ŷ1|X1)} {
(R0, R1, Re) : Re ≤ R1,
R0 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z)},
R0 + R1 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z)} + I(V; Y, Ŷ1|U, X1),
Re ≤ R∗ + I(V; Y, Ŷ1|U, X1) − I(X1, V; Z|U) },

where R∗r1 = min{I(X1; Z|U, V), I(X1; Y)}, and L12 is given by

L12 = ∪_{PY,Z,Y1,Ŷ1,X,X1,V,U : I(X1; Z) ≥ I(X1; Y), I(X1; Y) ≥ I(Y1; Ŷ1|X1)} {
(R0, R1, Re) : Re ≤ R1,
R0 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z, Ŷ1|X1)},
R0 + R1 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z, Ŷ1|X1)} + I(V; Y, Ŷ1|U, X1),
Re ≤ I(V; Y, Ŷ1|U, X1) − I(V; Z|U, X1) },

and PY,Z,Y1,Ŷ1,X,X1,V,U(y, z, y1, ŷ1, x, x1, v, u) satisfies PY,Z,Y1,Ŷ1,X,X1,V,U(y, z, y1, ŷ1, x, x1, v, u) = PY,Z,Y1|X,X1(y, z, y1|x, x1)PX|U,V(x|u, v)PU,V(u, v)PŶ1|Y1,X1(ŷ1|y1, x1)PX1(x1).
The proof of Theorem 12 is in Appendix F.
Remark 12: There are some notes on Theorem 12, see the following.
• In L11, R∗ is the rate of pure noise generated by the relay to confuse the receivers, while R∗r1 − R∗ is the part of the rate allocated to send the compressed signal Ŷ1 to help the receivers. If R∗ = R∗r1, this scheme is exactly the same as the NF scheme.
• The third inner bound on the secrecy capacity region of Figure 3 is denoted as CsCi3, which is the set of pairs (R0, R1) such that (R0, R1, Re = R1) ∈ R(Ci3).


Corollary 12: CsCi3 = Lk ∪ Ll, where Lk is given by

Lk = ∪_{PY,Z,Y1,Ŷ1,X,X1,V,U : I(X1; Y) ≥ I(X1; Z|U), R∗r1 − R∗ ≥ I(Y1; Ŷ1|X1)} {
(R0, R1) :
R0 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z)},
R0 + R1 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z)} + I(V; Y, Ŷ1|U, X1),
R1 ≤ R∗ + I(V; Y, Ŷ1|U, X1) − I(X1, V; Z|U) },

where R∗r1 = min{I(X1; Z|U, V), I(X1; Y)}, and Ll is given by

Ll = ∪_{PY,Z,Y1,Ŷ1,X,X1,V,U : I(X1; Z) ≥ I(X1; Y), I(X1; Y) ≥ I(Y1; Ŷ1|X1)} {
(R0, R1) :
R0 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z, Ŷ1|X1)},
R1 ≤ I(V; Y, Ŷ1|U, X1) − I(V; Z|U, X1) }.

Proof: Substituting Re = R1 into the region R(Ci3), Corollary 12 is easily checked.

B. Gaussian relay broadcast channels with one confidential message and one common message

In this subsection, we investigate the Gaussian case of the model of Figure 3. The signal received at each node is given by
Y1 = X + Zr, Y = X + X1 + Z1, Z = X + X1 + Z2,

(4.11)

where Zr ∼ N(0, Nr), Z1 ∼ N(0, N1), Z2 ∼ N(0, N2), and they are independent, E[X2] ≤ P1, E[X12] ≤ P2. In this subsection, we assume that P1 + N1 ≤ N2, which implies that the channel for receiver 1 is less noisy than the channel for receiver 2.
The inner bound on the secrecy capacity region of Figure 3 by using the DF strategy is given by

CsCi1 = ∪_{0≤α≤1} {(R0, R1) :
R0 ≤ min{(1/2) log((P1 + Nr)/(αP1 + Nr)), (1/2) log((P1 + P2 + N1)/(αP1 + N1)), (1/2) log((P1 + P2 + N2)/(αP1 + N2))},
R1 ≤ (1/2) log((αP1 + N1)/N1) − (1/2) log((αP1 + N2)/N2) }.   (4.12)

Here CsCi1 is obtained by letting X = U + V and U = c1X1 + X10, where U ∼ N(0, (1−α)P1), V ∼ N(0, αP1), X10 ∼ N(0, (1−α)βP1) (0 ≤ β ≤ 1), and c1 = √(P1(1−α)(1−β)/P2). X10, X1, V and U are independent random

variables.
Then, the inner bound on the secrecy capacity region by using the NF strategy is given by

CsCi2 = ∪_{0≤α≤1} {(R0, R1) :
R0 ≤ min{(1/2) log((P1 + N1)/(αP1 + N1)), (1/2) log((P1 + P2 + N2)/(αP1 + P2 + N2))},
R1 ≤ min{(1/2) log((P2 + N2)/N2), (1/2) log((P1 + P2 + N1)/(P1 + N1))} + (1/2) log((αP1 + N1)/N1) − (1/2) log((αP1 + P2 + N2)/N2) }.   (4.13)
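To see where the R1 bound of (4.13) comes from, the terms of the last bound of Li in Corollary 11 evaluate as follows for the Gaussian choice (a sketch of our computation, assuming V ∼ N(0, αP1) independent of U and X1):

```latex
I(X_1;Z|U,V) = \tfrac{1}{2}\log\tfrac{P_2+N_2}{N_2},\qquad
I(X_1;Y) = \tfrac{1}{2}\log\tfrac{P_1+P_2+N_1}{P_1+N_1},
I(V;Y|U,X_1) = \tfrac{1}{2}\log\tfrac{\alpha P_1+N_1}{N_1},\qquad
I(X_1,V;Z|U) = \tfrac{1}{2}\log\tfrac{\alpha P_1+P_2+N_2}{N_2},
```

so R1 ≤ min{I(X1; Z|U, V), I(X1; Y)} + I(V; Y|U, X1) − I(X1, V; Z|U) reduces to the R1 bound in (4.13).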


Here note that P1 + N1 ≤ N2 implies I(X1; Y) ≥ I(X1; Z|U), and CsCi2 is obtained by letting X = U + V, where V ∼ N(0, αP1), U ∼ N(0, (1−α)P1). X1, U and V are independent random variables.
Next, the inner bound on the secrecy capacity region by using the CF strategy is given by

CsCi3 = ∪_{0≤α≤1} {(R0, R1) :
R0 ≤ min{(1/2) log((P1(Nr+Q+N1) + N1(Nr+Q))/(αP1(Nr+Q+N1) + N1(Nr+Q))), (1/2) log((P1 + P2 + N2)/(αP1 + P2 + N2))},
R1 ≤ R∗ + (1/2) log((αP1(Nr+Q+N1) + N1(Nr+Q))/(N1(Nr+Q))) − (1/2) log((αP1 + P2 + N2)/N2) },   (4.14)

subject to

0 ≤ R∗ ≤ min{(1/2) log((P2 + N2)/N2), (1/2) log((P1 + P2 + N1)/(P1 + N1))} − (1/2) log((P1 + Q + Nr)/Q).   (4.15)

Here note that $P_1+N_1 \le N_2$ implies $I(X_1;Y) \ge I(X_1;Z|U)$, and $\mathcal{C}_s^{Ci3}$ is obtained by letting $X=U+V$, $\hat{Y}_1 = Y_1 + Z_Q$, where $Z_Q \sim \mathcal{N}(0,Q)$, $V \sim \mathcal{N}(0,(1-\alpha)P_1)$, $U \sim \mathcal{N}(0,\alpha P_1)$. $X_1$, $U$ and $V$ are independent random variables. Finally, remember that [33] provides the secrecy capacity region of the broadcast channels with one confidential message and one common message, and it is given by

$$\mathcal{C}_s^{Ci} = \bigcup_{0 \le \alpha \le 1} \left\{
\begin{array}{l}
(R_0, R_1):\\[2pt]
R_0 \le \min\left\{\frac{1}{2}\log\frac{P_1+N_1}{\alpha P_1+N_1},\ \frac{1}{2}\log\frac{P_1+N_2}{\alpha P_1+N_2}\right\},\\[4pt]
R_1 \le \frac{1}{2}\log\frac{\alpha P_1+N_1}{N_1} - \frac{1}{2}\log\frac{\alpha P_1+N_2}{N_2}
\end{array}
\right\}. \tag{4.16}$$
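The comparisons drawn from Figure 5 can be checked numerically. The following sketch is ours, not part of the paper: it evaluates each $R_1$-bound above at $\alpha = 1$ (where each of these bounds is largest, since each is increasing in $\alpha$), using base-2 logarithms and the parameter values $P_1=5$, $P_2=3$, $N_1=2$, $N_2=8$, $N_r=2$, $Q=300$ quoted below.

```python
import math

# Maximum achievable R1 (at alpha = 1) of each inner bound above, in bits,
# with the parameter values used for Fig. 5: P1=5, P2=3, N1=2, N2=8, Nr=2, Q=300.
P1, P2, N1, N2, Nr, Q = 5.0, 3.0, 2.0, 8.0, 2.0, 300.0
C = lambda x: 0.5 * math.log2(x)  # shorthand for (1/2) log2(x)

# No relay, eq. (4.16), alpha = 1:
r1_bcc = C((P1 + N1) / N1) - C((P1 + N2) / N2)

# DF strategy, eq. (4.12), alpha = 1 (same R1-bound as the BCC):
r1_df = C((P1 + N1) / N1) - C((P1 + N2) / N2)

# NF strategy, eq. (4.13), alpha = 1:
r1_nf = (min(C((P2 + N2) / N2), C((P1 + P2 + N1) / (P1 + N1)))
         + C((P1 + N1) / N1) - C((P1 + P2 + N2) / N2))

# CF strategy, eqs. (4.14)-(4.15), alpha = 1, with R* at its upper limit:
r_star = (min(C((P2 + N2) / N2), C((P1 + P2 + N1) / (P1 + N1)))
          - C((P1 + Q + Nr) / Q))
r1_cf = (r_star
         + C((P1 * (Nr + Q + N1) + N1 * (Nr + Q)) / (N1 * (Nr + Q)))
         - C((P1 + P2 + N2) / N2))

# The relay enlarges the maximum secrecy rate under NF and CF, but not DF:
assert r1_nf > r1_bcc and r1_cf > r1_bcc
assert abs(r1_df - r1_bcc) < 1e-12
```

As $Q \to \infty$, `r_star` loses its penalty term and the CF expression approaches the NF one, consistent with the remark below that the two inner bounds then coincide.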

Letting $P_1 = 5$, $P_2 = 3$, $N_1 = 2$, $N_2 = 8$, $N_r = 2$ and $Q = 300$, Figure 5 shows the achievable secrecy rate regions of the model of Figure 3. Compared with the secrecy capacity region of the model of Figure 3 without the relay, it is easy to see that the maximum achievable secrecy rate $R_1$ is enhanced by using the NF and CF strategies. Although the DF strategy cannot increase the maximum achievable secrecy rate $R_1$, it enhances the maximum achievable common rate $R_0$. In addition, when $Q \to \infty$, the inner bound for the CF strategy is exactly the same as that for the NF strategy.

V. CONCLUSION

In this paper, we generalize the previous works on the broadcast channels with confidential messages, the relay broadcast channel and the relay-eavesdropper channel. Several cooperative strategies are constructed to enhance the security of the broadcast channels with confidential messages. The details are as follows.

• First, we investigate the relay broadcast channels with two confidential messages and one common message. Three inner bounds (with respect to the decode-forward, generalized noise-forward and compress-forward strategies) and an outer bound on the capacity-equivocation region are provided. Removing the secrecy constraint, this outer bound can also serve as a new outer bound for the general relay broadcast channel.



• Second, we investigate the relay broadcast channels with two confidential messages (no common message). Inner and outer bounds on the capacity-equivocation region are provided. Then, we study the Gaussian case, and find that with the help of the relay node, the achievable secrecy rate region of the broadcast channels with two confidential messages is enhanced.


Fig. 5: The achievable secrecy rate regions of the model of Figure 3



• Third, we investigate the relay broadcast channels with one confidential message and one common message. This work generalizes Lai-El Gamal's work on the relay-eavesdropper channel by considering an additional common message for both the legitimate receiver and the eavesdropper. Inner and outer bounds on the capacity-equivocation region are provided, and the results are further explained via a Gaussian example. Compared with Csiszár-Körner's work on broadcast channels with confidential messages (BCC), we find that with the help of the relay node, the secrecy capacity region of the Gaussian BCC is enhanced.

ACKNOWLEDGEMENT

The authors would like to thank Professor Ning Cai for his valuable suggestions to improve this paper. This work was supported by a sub-project of the National Basic Research Program of China under Grant 2012CB316100 (Broadband Mobile Communications at High Speeds), and by the National Natural Science Foundation of China under Grant 61301121.

APPENDIX A
PROOF OF THEOREM 1

In this section, we prove Theorem 1: all achievable $(R_0, R_1, R_2, R_{e1}, R_{e2})$ quintuples are contained in the set $\mathcal{R}^{Ao}$. The inequalities of Theorem 1 are proved in the remainder of this section. First, define the following auxiliary random variables:

$$U_1 \triangleq Y_1^{J-1},\quad U_2 \triangleq Y_{1,J+1}^{N},\quad U \triangleq (Y^{J-1}, W_0, Z_{J+1}^{N}, J),$$
$$V_1 \triangleq (U, W_1),\quad V_2 \triangleq (U, W_2),\quad Y \triangleq Y_J,\quad Y_1 \triangleq Y_{1,J},\quad Z \triangleq Z_J, \tag{A1}$$


where $J$ is a random variable uniformly distributed over $\{1,2,\ldots,N\}$ and independent of $Y^N$, $Y_1^N$, $Z^N$, $W_0$, $W_1$ and $W_2$.

(Proof of $R_0 \le \min\{I(U,U_1;Y), I(U;Y,Y_1|U_1)\}$) The inequality $R_0 \le I(U,U_1;Y)$ is proved as follows:

$$\begin{aligned}
\frac{1}{N}H(W_0) &\le \frac{1}{N}\big(I(W_0;Y^N)+H(W_0|Y^N)\big)\\
&\overset{(a)}{\le} \frac{1}{N}\big(I(W_0;Y^N)+\delta(P_{e1})\big)\\
&= \frac{1}{N}\sum_{i=1}^{N} I(W_0;Y_i|Y^{i-1})+\frac{\delta(P_{e1})}{N}\\
&= \frac{1}{N}\sum_{i=1}^{N}\big(H(Y_i|Y^{i-1})-H(Y_i|Y^{i-1},W_0)\big)+\frac{\delta(P_{e1})}{N}\\
&\le \frac{1}{N}\sum_{i=1}^{N}\big(H(Y_i)-H(Y_i|Y^{i-1},W_0,Y_1^{i-1},Z_{i+1}^{N})\big)+\frac{\delta(P_{e1})}{N}\\
&\overset{(b)}{=} \frac{1}{N}\sum_{i=1}^{N}\big(H(Y_i|J=i)-H(Y_i|Y^{i-1},W_0,Y_1^{i-1},Z_{i+1}^{N},J=i)\big)+\frac{\delta(P_{e1})}{N}\\
&\overset{(c)}{\le} H(Y_J)-H(Y_J|Y^{J-1},W_0,Y_1^{J-1},Z_{J+1}^{N},J)+\frac{\delta(P_{e1})}{N}\\
&\overset{(d)}{=} H(Y)-H(Y|U_1,U)+\frac{\delta(P_{e1})}{N}\\
&= I(U_1,U;Y)+\frac{\delta(P_{e1})}{N}\\
&\overset{(e)}{\le} I(U_1,U;Y)+\frac{\delta(\epsilon)}{N},
\end{aligned}\tag{A2}$$

where (a) follows from Fano's inequality, (b) from the fact that $J$ is uniformly distributed over $\{1,2,\ldots,N\}$ and independent of $Y^N$, $Y_1^N$, $Z^N$, $W_0$, $W_1$ and $W_2$, (c) from the uniformity of $J$, (d) from the definitions of the auxiliary random variables in (A1), and (e) from $P_{e1} \le \epsilon$. By letting $\epsilon \to 0$ and using $R_0 = \lim_{N\to\infty} H(W_0)/N$ together with (A2), $R_0 \le I(U,U_1;Y)$ is obtained.

The inequality $R_0 \le I(U;Y,Y_1|U_1)$ is proved as follows:

$$\begin{aligned}
\frac{1}{N}H(W_0) &\le \frac{1}{N}\big(I(W_0;Y_1^N,Y^N)+H(W_0|Y_1^N,Y^N)\big)\\
&\le \frac{1}{N}\big(I(W_0;Y_1^N,Y^N)+\delta(P_{e1})\big)\\
&= \frac{1}{N}\sum_{i=1}^{N} I(W_0;Y_{1,i},Y_i|Y_1^{i-1},Y^{i-1})+\frac{\delta(P_{e1})}{N}\\
&= \frac{1}{N}\sum_{i=1}^{N}\big(H(Y_{1,i},Y_i|Y_1^{i-1},Y^{i-1})-H(Y_{1,i},Y_i|Y_1^{i-1},Y^{i-1},W_0)\big)+\frac{\delta(P_{e1})}{N}\\
&\le \frac{1}{N}\sum_{i=1}^{N}\big(H(Y_{1,i},Y_i|Y_1^{i-1})-H(Y_{1,i},Y_i|Y^{i-1},W_0,Y_1^{i-1},Z_{i+1}^{N})\big)+\frac{\delta(P_{e1})}{N}\\
&= \frac{1}{N}\sum_{i=1}^{N}\big(H(Y_{1,i},Y_i|Y_1^{i-1},J=i)-H(Y_{1,i},Y_i|Y^{i-1},W_0,Y_1^{i-1},Z_{i+1}^{N},J=i)\big)+\frac{\delta(P_{e1})}{N}\\
&\le H(Y_J,Y_{1,J}|Y_1^{J-1})-H(Y_J,Y_{1,J}|Y^{J-1},W_0,Y_1^{J-1},Z_{J+1}^{N},J)+\frac{\delta(P_{e1})}{N}\\
&\overset{(a)}{=} H(Y,Y_1|U_1)-H(Y,Y_1|U_1,U)+\frac{\delta(P_{e1})}{N}\\
&\le I(U;Y,Y_1|U_1)+\frac{\delta(\epsilon)}{N},
\end{aligned}\tag{A3}$$

where (a) is from (A1). By letting $\epsilon \to 0$ and using $R_0 = \lim_{N\to\infty} H(W_0)/N$ together with (A3), $R_0 \le I(U;Y,Y_1|U_1)$ is obtained.

Therefore, $R_0 \le \min\{I(U,U_1;Y), I(U;Y,Y_1|U_1)\}$ is proved.

(Proof of $R_0 \le \min\{I(U,U_2;Z), I(U;Z,Y_1|U_2)\}$) The inequality $R_0 \le I(U,U_2;Z)$ is proved as follows:

$$\begin{aligned}
\frac{1}{N}H(W_0) &\le \frac{1}{N}\big(I(W_0;Z^N)+H(W_0|Z^N)\big)\\
&\le \frac{1}{N}\big(I(W_0;Z^N)+\delta(P_{e2})\big)\\
&= \frac{1}{N}\sum_{i=1}^{N} I(W_0;Z_i|Z_{i+1}^{N})+\frac{\delta(P_{e2})}{N}\\
&= \frac{1}{N}\sum_{i=1}^{N}\big(H(Z_i|Z_{i+1}^{N})-H(Z_i|Z_{i+1}^{N},W_0)\big)+\frac{\delta(P_{e2})}{N}\\
&\le \frac{1}{N}\sum_{i=1}^{N}\big(H(Z_i)-H(Z_i|Y^{i-1},W_0,Y_{1,i+1}^{N},Z_{i+1}^{N})\big)+\frac{\delta(P_{e2})}{N}\\
&= \frac{1}{N}\sum_{i=1}^{N}\big(H(Z_i|J=i)-H(Z_i|Y^{i-1},W_0,Y_{1,i+1}^{N},Z_{i+1}^{N},J=i)\big)+\frac{\delta(P_{e2})}{N}\\
&\le H(Z_J)-H(Z_J|Y^{J-1},W_0,Y_{1,J+1}^{N},Z_{J+1}^{N},J)+\frac{\delta(P_{e2})}{N}\\
&= H(Z)-H(Z|U_2,U)+\frac{\delta(P_{e2})}{N}\\
&\le I(U_2,U;Z)+\frac{\delta(\epsilon)}{N}.
\end{aligned}\tag{A4}$$

By letting $\epsilon \to 0$ and using $R_0 = \lim_{N\to\infty} H(W_0)/N$ together with (A4), $R_0 \le I(U_2,U;Z)$ is obtained.

The inequality $R_0 \le I(U;Z,Y_1|U_2)$ is proved as follows:

$$\begin{aligned}
\frac{1}{N}H(W_0) &\le \frac{1}{N}\big(I(W_0;Y_1^N,Z^N)+H(W_0|Y_1^N,Z^N)\big)\\
&\le \frac{1}{N}\big(I(W_0;Y_1^N,Z^N)+\delta(P_{e2})\big)\\
&= \frac{1}{N}\sum_{i=1}^{N} I(W_0;Y_{1,i},Z_i|Y_{1,i+1}^{N},Z_{i+1}^{N})+\frac{\delta(P_{e2})}{N}\\
&= \frac{1}{N}\sum_{i=1}^{N}\big(H(Y_{1,i},Z_i|Y_{1,i+1}^{N},Z_{i+1}^{N})-H(Y_{1,i},Z_i|Y_{1,i+1}^{N},Z_{i+1}^{N},W_0)\big)+\frac{\delta(P_{e2})}{N}\\
&\le \frac{1}{N}\sum_{i=1}^{N}\big(H(Y_{1,i},Z_i|Y_{1,i+1}^{N})-H(Y_{1,i},Z_i|Y^{i-1},W_0,Y_{1,i+1}^{N},Z_{i+1}^{N})\big)+\frac{\delta(P_{e2})}{N}\\
&= \frac{1}{N}\sum_{i=1}^{N}\big(H(Y_{1,i},Z_i|Y_{1,i+1}^{N},J=i)-H(Y_{1,i},Z_i|Y^{i-1},W_0,Y_{1,i+1}^{N},Z_{i+1}^{N},J=i)\big)+\frac{\delta(P_{e2})}{N}\\
&\le H(Z_J,Y_{1,J}|Y_{1,J+1}^{N})-H(Z_J,Y_{1,J}|Y^{J-1},W_0,Y_{1,J+1}^{N},Z_{J+1}^{N},J)+\frac{\delta(P_{e2})}{N}\\
&= H(Z,Y_1|U_2)-H(Z,Y_1|U_2,U)+\frac{\delta(P_{e2})}{N}\\
&\le I(U;Z,Y_1|U_2)+\frac{\delta(\epsilon)}{N}.
\end{aligned}\tag{A5}$$

By letting $\epsilon \to 0$ and using $R_0 = \lim_{N\to\infty} H(W_0)/N$ together with (A5), $R_0 \le I(U;Z,Y_1|U_2)$ is obtained.

Therefore, $R_0 \le \min\{I(U,U_2;Z), I(U;Z,Y_1|U_2)\}$ is proved.

(Proof of $R_0+R_1 \le \min\{I(U,U_1,V_1;Y), I(U,V_1;Y,Y_1|U_1)\}$) The inequality $R_0+R_1 \le I(U,U_1,V_1;Y)$ is proved as follows:

$$\begin{aligned}
\frac{1}{N}H(W_0,W_1) &\le \frac{1}{N}\big(I(W_0,W_1;Y^N)+H(W_0,W_1|Y^N)\big)\\
&\le \frac{1}{N}\big(I(W_0,W_1;Y^N)+\delta(P_{e1})\big)\\
&= \frac{1}{N}\sum_{i=1}^{N} I(W_0,W_1;Y_i|Y^{i-1})+\frac{\delta(P_{e1})}{N}\\
&= \frac{1}{N}\sum_{i=1}^{N}\big(H(Y_i|Y^{i-1})-H(Y_i|Y^{i-1},W_0,W_1)\big)+\frac{\delta(P_{e1})}{N}\\
&\le \frac{1}{N}\sum_{i=1}^{N}\big(H(Y_i)-H(Y_i|Y^{i-1},W_0,W_1,Y_1^{i-1},Z_{i+1}^{N})\big)+\frac{\delta(P_{e1})}{N}\\
&= \frac{1}{N}\sum_{i=1}^{N}\big(H(Y_i|J=i)-H(Y_i|Y^{i-1},W_0,W_1,Y_1^{i-1},Z_{i+1}^{N},J=i)\big)+\frac{\delta(P_{e1})}{N}\\
&\le H(Y_J)-H(Y_J|Y^{J-1},W_0,W_1,Y_1^{J-1},Z_{J+1}^{N},J)+\frac{\delta(P_{e1})}{N}\\
&= H(Y)-H(Y|U_1,U,V_1)+\frac{\delta(P_{e1})}{N}\\
&\le I(U_1,U,V_1;Y)+\frac{\delta(\epsilon)}{N}.
\end{aligned}\tag{A6}$$

By letting $\epsilon \to 0$ and using $R_0+R_1 = \lim_{N\to\infty} H(W_0,W_1)/N$ together with (A6), $R_0+R_1 \le I(U,U_1,V_1;Y)$ is obtained.

The inequality $R_0+R_1 \le I(U,V_1;Y,Y_1|U_1)$ is proved as follows:

$$\begin{aligned}
\frac{1}{N}H(W_0,W_1) &\le \frac{1}{N}\big(I(W_0,W_1;Y_1^N,Y^N)+H(W_0,W_1|Y_1^N,Y^N)\big)\\
&\le \frac{1}{N}\big(I(W_0,W_1;Y_1^N,Y^N)+\delta(P_{e1})\big)\\
&= \frac{1}{N}\sum_{i=1}^{N} I(W_0,W_1;Y_{1,i},Y_i|Y_1^{i-1},Y^{i-1})+\frac{\delta(P_{e1})}{N}\\
&= \frac{1}{N}\sum_{i=1}^{N}\big(H(Y_{1,i},Y_i|Y_1^{i-1},Y^{i-1})-H(Y_{1,i},Y_i|Y_1^{i-1},Y^{i-1},W_0,W_1)\big)+\frac{\delta(P_{e1})}{N}\\
&\le \frac{1}{N}\sum_{i=1}^{N}\big(H(Y_{1,i},Y_i|Y_1^{i-1})-H(Y_{1,i},Y_i|Y^{i-1},W_0,W_1,Y_1^{i-1},Z_{i+1}^{N})\big)+\frac{\delta(P_{e1})}{N}\\
&= \frac{1}{N}\sum_{i=1}^{N}\big(H(Y_{1,i},Y_i|Y_1^{i-1},J=i)-H(Y_{1,i},Y_i|Y^{i-1},W_0,W_1,Y_1^{i-1},Z_{i+1}^{N},J=i)\big)+\frac{\delta(P_{e1})}{N}\\
&\le H(Y_J,Y_{1,J}|Y_1^{J-1})-H(Y_J,Y_{1,J}|Y^{J-1},W_0,W_1,Y_1^{J-1},Z_{J+1}^{N},J)+\frac{\delta(P_{e1})}{N}\\
&= H(Y,Y_1|U_1)-H(Y,Y_1|U_1,U,V_1)+\frac{\delta(P_{e1})}{N}\\
&\le I(U,V_1;Y,Y_1|U_1)+\frac{\delta(\epsilon)}{N}.
\end{aligned}\tag{A7}$$

By letting $\epsilon \to 0$ and using $R_0+R_1 = \lim_{N\to\infty} H(W_0,W_1)/N$ together with (A7), $R_0+R_1 \le I(U,V_1;Y,Y_1|U_1)$ is obtained.

Therefore, $R_0+R_1 \le \min\{I(U,U_1,V_1;Y), I(U,V_1;Y,Y_1|U_1)\}$ is proved.

(Proof of $R_0+R_2 \le \min\{I(U,U_2,V_2;Z), I(U,V_2;Z,Y_1|U_2)\}$) The proof is analogous to the proof of $R_0+R_1 \le \min\{I(U,U_1,V_1;Y), I(U,V_1;Y,Y_1|U_1)\}$, and it is omitted here.

(Proof of $R_0+R_1+R_2 \le I(U,U_2,V_1;Y,Y_1|U_1) + I(V_2;Z,Y_1|U,U_1,U_2,V_1)$) This inequality is proved by the following (A8), (A9), (A10) and (A11). First, note that

$$\begin{aligned}
\frac{1}{N}H(W_0,W_1,W_2) &= \frac{1}{N}\big(H(W_0,W_1)+H(W_2|W_0,W_1)\big)\\
&= \frac{1}{N}\big(I(W_0,W_1;Y_1^N,Y^N)+H(W_0,W_1|Y_1^N,Y^N)\\
&\qquad +I(W_2;Y_1^N,Z^N|W_0,W_1)+H(W_2|W_0,W_1,Y_1^N,Z^N)\big)\\
&\overset{(a)}{\le} \frac{1}{N}\big(I(W_0,W_1;Y_1^N,Y^N)+\delta(P_{e1})+I(W_2;Y_1^N,Z^N|W_0,W_1)+\delta(P_{e2})\big),
\end{aligned}\tag{A8}$$

where (a) is from Fano's inequality. The term $I(W_0,W_1;Y_1^N,Y^N)$ in (A8) can be expanded as

$$\begin{aligned}
I(W_0,W_1;Y_1^N,Y^N) &= \sum_{i=1}^{N} I(W_0,W_1;Y_{1,i},Y_i|Y_1^{i-1},Y^{i-1})\\
&= \sum_{i=1}^{N}\big(H(Y_{1,i},Y_i|Y_1^{i-1},Y^{i-1})-H(Y_{1,i},Y_i|Y_1^{i-1},Y^{i-1},W_0,W_1)\\
&\qquad +H(Y_{1,i},Y_i|Y_1^{i-1},Y^{i-1},W_0,W_1,Y_{1,i+1}^{N},Z_{i+1}^{N})\\
&\qquad -H(Y_{1,i},Y_i|Y_1^{i-1},Y^{i-1},W_0,W_1,Y_{1,i+1}^{N},Z_{i+1}^{N})\big)\\
&= \sum_{i=1}^{N}\big(I(Y_{1,i},Y_i;W_0,W_1,Y_{1,i+1}^{N},Z_{i+1}^{N}|Y_1^{i-1},Y^{i-1})\\
&\qquad -I(Y_{1,i},Y_i;Y_{1,i+1}^{N},Z_{i+1}^{N}|Y_1^{i-1},Y^{i-1},W_0,W_1)\big),
\end{aligned}\tag{A9}$$

and the term $I(W_2;Y_1^N,Z^N|W_0,W_1)$ in (A8) is upper bounded by

$$\begin{aligned}
I(W_2;Y_1^N,Z^N|W_0,W_1) &= \sum_{i=1}^{N} I(W_2;Y_{1,i},Z_i|Y_{1,i+1}^{N},Z_{i+1}^{N},W_0,W_1)\\
&\le \sum_{i=1}^{N} I(W_2,Y^{i-1},Y_1^{i-1};Y_{1,i},Z_i|Y_{1,i+1}^{N},Z_{i+1}^{N},W_0,W_1)\\
&= \sum_{i=1}^{N}\big(H(Y_{1,i},Z_i|Y_{1,i+1}^{N},Z_{i+1}^{N},W_0,W_1)\\
&\qquad -H(Y_{1,i},Z_i|Y_{1,i+1}^{N},Z_{i+1}^{N},W_0,W_1,W_2,Y^{i-1},Y_1^{i-1})\\
&\qquad +H(Y_{1,i},Z_i|Y_{1,i+1}^{N},Z_{i+1}^{N},W_0,W_1,Y^{i-1},Y_1^{i-1})\\
&\qquad -H(Y_{1,i},Z_i|Y_{1,i+1}^{N},Z_{i+1}^{N},W_0,W_1,Y^{i-1},Y_1^{i-1})\big)\\
&= \sum_{i=1}^{N}\big(I(Y_{1,i},Z_i;Y^{i-1},Y_1^{i-1}|Y_{1,i+1}^{N},Z_{i+1}^{N},W_0,W_1)\\
&\qquad +I(Y_{1,i},Z_i;W_2|Y_{1,i+1}^{N},Z_{i+1}^{N},W_0,W_1,Y^{i-1},Y_1^{i-1})\big).
\end{aligned}\tag{A10}$$

Here note that $\sum_{i=1}^{N} I(Y_{1,i},Y_i;Y_{1,i+1}^{N},Z_{i+1}^{N}|Y_1^{i-1},Y^{i-1},W_0,W_1)$, appearing in the last step of (A9), is equal to $\sum_{i=1}^{N} I(Y_{1,i},Z_i;Y^{i-1},Y_1^{i-1}|Y_{1,i+1}^{N},Z_{i+1}^{N},W_0,W_1)$, appearing in the last step of (A10), i.e.,

$$\sum_{i=1}^{N} I(Y_{1,i},Y_i;Y_{1,i+1}^{N},Z_{i+1}^{N}|Y_1^{i-1},Y^{i-1},W_0,W_1) = \sum_{i=1}^{N} I(Y_{1,i},Z_i;Y^{i-1},Y_1^{i-1}|Y_{1,i+1}^{N},Z_{i+1}^{N},W_0,W_1),\tag{A11}$$

and it is proved by the following (A12) and (A13):

$$\sum_{i=1}^{N} I(Y_{1,i},Y_i;Y_{1,i+1}^{N},Z_{i+1}^{N}|Y_1^{i-1},Y^{i-1},W_0,W_1) = \sum_{i=1}^{N}\sum_{j=i+1}^{N} I(Y_{1,i},Y_i;Y_{1,j},Z_j|Y_1^{i-1},Y^{i-1},W_0,W_1,Y_{1,j+1}^{N},Z_{j+1}^{N}).\tag{A12}$$

$$\begin{aligned}
\sum_{i=1}^{N} I(Y_{1,i},Z_i;Y^{i-1},Y_1^{i-1}|Y_{1,i+1}^{N},Z_{i+1}^{N},W_0,W_1) &= \sum_{i=1}^{N}\sum_{j=1}^{i-1} I(Y_{1,i},Z_i;Y_{1,j},Y_j|Y_{1,i+1}^{N},Z_{i+1}^{N},W_0,W_1,Y^{j-1},Y_1^{j-1})\\
&= \sum_{j=1}^{N}\sum_{i=1}^{j-1} I(Y_{1,j},Z_j;Y_{1,i},Y_i|Y_{1,j+1}^{N},Z_{j+1}^{N},W_0,W_1,Y^{i-1},Y_1^{i-1})\\
&= \sum_{i=1}^{N}\sum_{j=i+1}^{N} I(Y_{1,j},Z_j;Y_{1,i},Y_i|Y_{1,j+1}^{N},Z_{j+1}^{N},W_0,W_1,Y^{i-1},Y_1^{i-1}).
\end{aligned}\tag{A13}$$

The right-hand sides of (A12) and (A13) are term-by-term identical (up to the symmetry $I(A;B|C)=I(B;A|C)$), which establishes (A11).

Finally, substituting (A9) and (A10) into (A8), and using the fact that (A11) holds, we have

$$\begin{aligned}
\frac{1}{N}H(W_0,W_1,W_2) &\le \frac{1}{N}\sum_{i=1}^{N}\big(I(Y_{1,i},Y_i;W_0,W_1,Y_{1,i+1}^{N},Z_{i+1}^{N}|Y_1^{i-1},Y^{i-1})\\
&\qquad +I(Y_{1,i},Z_i;W_2|Y_{1,i+1}^{N},Z_{i+1}^{N},W_0,W_1,Y^{i-1},Y_1^{i-1})\big)+\frac{\delta(P_{e1})+\delta(P_{e2})}{N}\\
&\overset{(1)}{=} \frac{1}{N}\sum_{i=1}^{N}\big(I(Y_{1,i},Y_i;W_0,W_1,Y_{1,i+1}^{N},Z_{i+1}^{N}|Y_1^{i-1},Y^{i-1},J=i)\\
&\qquad +I(Y_{1,i},Z_i;W_2|Y_{1,i+1}^{N},Z_{i+1}^{N},W_0,W_1,Y^{i-1},Y_1^{i-1},J=i)\big)+\frac{\delta(P_{e1})+\delta(P_{e2})}{N}\\
&\overset{(2)}{\le} I(Y_{1,J},Y_J;W_0,W_1,Y_{1,J+1}^{N},Z_{J+1}^{N}|Y_1^{J-1},Y^{J-1},J)\\
&\qquad +I(Y_{1,J},Z_J;W_2|Y_{1,J+1}^{N},Z_{J+1}^{N},W_0,W_1,Y^{J-1},Y_1^{J-1},J)+\frac{2\delta(\epsilon)}{N}\\
&\le H(Y_{1,J},Y_J|Y_1^{J-1})-H(Y_{1,J},Y_J|W_0,W_1,Y_{1,J+1}^{N},Z_{J+1}^{N},Y_1^{J-1},Y^{J-1},J)\\
&\qquad +I(Y_{1,J},Z_J;W_2|Y_{1,J+1}^{N},Z_{J+1}^{N},W_0,W_1,Y^{J-1},Y_1^{J-1},J)+\frac{2\delta(\epsilon)}{N}\\
&\overset{(3)}{=} H(Y_1,Y|U_1)-H(Y_1,Y|U,V_1,U_1,U_2)+H(Y_1,Z|U,V_1,U_1,U_2)\\
&\qquad -H(Y_1,Z|U,V_1,U_1,U_2,V_2)+\frac{2\delta(\epsilon)}{N}\\
&= I(U,U_2,V_1;Y,Y_1|U_1)+I(V_2;Z,Y_1|U,U_1,U_2,V_1)+\frac{2\delta(\epsilon)}{N},
\end{aligned}\tag{A14}$$

where (1) is from the fact that $J$ is uniformly distributed over $\{1,2,\ldots,N\}$ and independent of $Y^N$, $Y_1^N$, $Z^N$, $W_0$, $W_1$ and $W_2$, (2) is from the uniformity of $J$ and $P_{e1}, P_{e2} \le \epsilon$, and (3) is from the definitions of the auxiliary random variables in (A1). By letting $\epsilon \to 0$ and using $R_0+R_1+R_2 = \lim_{N\to\infty} H(W_0,W_1,W_2)/N$ together with (A14), $R_0+R_1+R_2 \le I(U,U_2,V_1;Y,Y_1|U_1)+I(V_2;Z,Y_1|U,U_1,U_2,V_1)$ is proved.

(Proof of $R_0+R_1+R_2 \le I(U,U_1,V_2;Z,Y_1|U_2)+I(V_1;Y,Y_1|U,U_1,U_2,V_2)$) This inequality is proved by letting $H(W_0,W_1,W_2) = H(W_0,W_2)+H(W_1|W_0,W_2)$; the remainder of the proof is analogous to the proof of $R_0+R_1+R_2 \le I(U,U_2,V_1;Y,Y_1|U_1)+I(V_2;Z,Y_1|U,U_1,U_2,V_1)$. Thus, we omit the proof here.

(Proof of $R_{e1} \le I(V_1;Y|U,V_2)-I(V_1;Z|U,V_2)$) The inequality $R_{e1} \le I(V_1;Y|U,V_2)-I(V_1;Z|U,V_2)$ is proved by the following (A15), (A16), (A17) and (A20). First note that

$$\begin{aligned}
\frac{1}{N}H(W_1|Z^N) &= \frac{1}{N}\big(I(W_1;W_0,W_2|Z^N)+H(W_1|Z^N,W_0,W_2)\big)\\
&\le \frac{1}{N}\big(H(W_1|Z^N,W_0,W_2)+\delta(\epsilon)\big)\\
&= \frac{1}{N}\big(H(W_1|W_0,W_2)-I(W_1;Z^N|W_0,W_2)+\delta(\epsilon)\big)\\
&= \frac{1}{N}\big(I(W_1;Y^N|W_0,W_2)+H(W_1|Y^N,W_0,W_2)-I(W_1;Z^N|W_0,W_2)+\delta(\epsilon)\big)\\
&\le \frac{1}{N}\big(I(W_1;Y^N|W_0,W_2)-I(W_1;Z^N|W_0,W_2)+2\delta(\epsilon)\big).
\end{aligned}\tag{A15}$$

Then, the term $I(W_1;Y^N|W_0,W_2)$ in (A15) can be expanded as

$$\begin{aligned}
I(W_1;Y^N|W_0,W_2) &= \sum_{i=1}^{N} I(W_1;Y_i|W_0,W_2,Y^{i-1})\\
&= \sum_{i=1}^{N}\big(H(Y_i|W_0,W_2,Y^{i-1})-H(Y_i|W_0,W_1,W_2,Y^{i-1})\\
&\qquad +H(Y_i|W_0,W_2,Y^{i-1},W_1,Z_{i+1}^{N})-H(Y_i|W_0,W_2,Y^{i-1},W_1,Z_{i+1}^{N})\big)\\
&= \sum_{i=1}^{N}\big(I(Y_i;W_1,Z_{i+1}^{N}|W_0,W_2,Y^{i-1})-I(Y_i;Z_{i+1}^{N}|W_0,W_1,W_2,Y^{i-1})\big)\\
&= \sum_{i=1}^{N}\big(I(Y_i;Z_{i+1}^{N}|W_0,W_2,Y^{i-1})+I(Y_i;W_1|W_0,W_2,Y^{i-1},Z_{i+1}^{N})\\
&\qquad -I(Y_i;Z_{i+1}^{N}|W_0,W_1,W_2,Y^{i-1})\big),
\end{aligned}\tag{A16}$$

and the term $I(W_1;Z^N|W_0,W_2)$ in (A15) can be expanded as

$$\begin{aligned}
I(W_1;Z^N|W_0,W_2) &= \sum_{i=1}^{N} I(W_1;Z_i|W_0,W_2,Z_{i+1}^{N})\\
&= \sum_{i=1}^{N}\big(H(Z_i|W_0,W_2,Z_{i+1}^{N})-H(Z_i|W_0,W_1,W_2,Z_{i+1}^{N})\\
&\qquad +H(Z_i|W_0,W_2,Y^{i-1},W_1,Z_{i+1}^{N})-H(Z_i|W_0,W_2,Y^{i-1},W_1,Z_{i+1}^{N})\big)\\
&= \sum_{i=1}^{N}\big(I(Z_i;W_1,Y^{i-1}|W_0,W_2,Z_{i+1}^{N})-I(Z_i;Y^{i-1}|W_0,W_1,W_2,Z_{i+1}^{N})\big)\\
&= \sum_{i=1}^{N}\big(I(Z_i;Y^{i-1}|W_0,W_2,Z_{i+1}^{N})+I(Z_i;W_1|W_0,W_2,Y^{i-1},Z_{i+1}^{N})\\
&\qquad -I(Z_i;Y^{i-1}|W_0,W_1,W_2,Z_{i+1}^{N})\big).
\end{aligned}\tag{A17}$$

Note that

$$\sum_{i=1}^{N} I(Y_i;Z_{i+1}^{N}|W_0,W_2,Y^{i-1}) = \sum_{i=1}^{N} I(Z_i;Y^{i-1}|W_0,W_2,Z_{i+1}^{N}),\tag{A18}$$

and

$$\sum_{i=1}^{N} I(Y_i;Z_{i+1}^{N}|W_0,W_1,W_2,Y^{i-1}) = \sum_{i=1}^{N} I(Z_i;Y^{i-1}|W_0,W_1,W_2,Z_{i+1}^{N}),\tag{A19}$$

both of which follow from Csiszár's sum identity [20]. Substituting (A16) and (A17) into (A15), and using the equalities (A18) and (A19), we have

$$\begin{aligned}
\frac{1}{N}H(W_1|Z^N) &\le \frac{1}{N}\big(I(W_1;Y^N|W_0,W_2)-I(W_1;Z^N|W_0,W_2)+2\delta(\epsilon)\big)\\
&= \frac{1}{N}\sum_{i=1}^{N}\big(I(Y_i;W_1|W_0,W_2,Y^{i-1},Z_{i+1}^{N})-I(Z_i;W_1|W_0,W_2,Y^{i-1},Z_{i+1}^{N})\big)+\frac{2\delta(\epsilon)}{N}\\
&= \frac{1}{N}\sum_{i=1}^{N}\big(I(Y_i;W_1|W_0,W_2,Y^{i-1},Z_{i+1}^{N},J=i)-I(Z_i;W_1|W_0,W_2,Y^{i-1},Z_{i+1}^{N},J=i)\big)+\frac{2\delta(\epsilon)}{N}\\
&= I(Y_J;W_1|W_0,W_2,Y^{J-1},Z_{J+1}^{N},J)-I(Z_J;W_1|W_0,W_2,Y^{J-1},Z_{J+1}^{N},J)+\frac{2\delta(\epsilon)}{N}\\
&= I(Y;V_1|U,V_2)-I(Z;V_1|U,V_2)+\frac{2\delta(\epsilon)}{N}.
\end{aligned}\tag{A20}$$

By letting $\epsilon \to 0$ and using $R_{e1} \le \lim_{N\to\infty} H(W_1|Z^N)/N$ together with (A20), $R_{e1} \le I(V_1;Y|U,V_2)-I(V_1;Z|U,V_2)$ is proved.
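Csiszár's sum identity, used for (A18) and (A19) above and again for (A24) and (A25) below, holds for every joint distribution of $(Y^N, Z^N)$, so it can be verified numerically by brute force. A minimal sketch assuming binary alphabets and $N = 3$ (the random joint pmf and all helper names are ours):

```python
import itertools
import random
from math import log2

# Numerical check of Csiszar's sum identity:
#   sum_i I(Y_i ; Z_{i+1}^N | Y^{i-1})  ==  sum_i I(Z_i ; Y^{i-1} | Z_{i+1}^N)
# for an arbitrary (here: random) joint pmf of (Y^N, Z^N), N = 3, binary symbols.
random.seed(7)
N = 3
states = list(itertools.product([0, 1], repeat=2 * N))  # (y1,y2,y3,z1,z2,z3)
w = [random.random() for _ in states]
tot = sum(w)
p = {s: wi / tot for s, wi in zip(states, w)}

def marginal(axes):
    """pmf of the coordinates in `axes` (0..N-1 are Y's, N..2N-1 are Z's)."""
    m = {}
    for s, ps in p.items():
        key = tuple(s[a] for a in axes)
        m[key] = m.get(key, 0.0) + ps
    return m

def cmi(A, B, C):
    """I(X_A ; X_B | X_C), brute-forced from the joint pmf."""
    pABC, pAC, pBC, pC = marginal(A + B + C), marginal(A + C), marginal(B + C), marginal(C)
    total = 0.0
    for s, ps in p.items():
        a = tuple(s[x] for x in A)
        b = tuple(s[x] for x in B)
        c = tuple(s[x] for x in C)
        total += ps * log2(pABC[a + b + c] * pC[c] / (pAC[a + c] * pBC[b + c]))
    return total

Y = list(range(N))
Z = [N + i for i in range(N)]
lhs = sum(cmi([Y[i]], Z[i + 1:], Y[:i]) for i in range(N))
rhs = sum(cmi([Z[i]], Y[:i], Z[i + 1:]) for i in range(N))
assert abs(lhs - rhs) < 1e-9
```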

(Proof of $R_{e1} \le I(V_1;Y|U)-I(V_1;Z|U)$) The inequality $R_{e1} \le I(V_1;Y|U)-I(V_1;Z|U)$ is proved by the following (A21), (A22), (A23) and (A26). First note that

$$\begin{aligned}
\frac{1}{N}H(W_1|Z^N) &= \frac{1}{N}\big(I(W_1;W_0|Z^N)+H(W_1|Z^N,W_0)\big)\\
&\le \frac{1}{N}\big(H(W_1|Z^N,W_0)+\delta(\epsilon)\big)\\
&= \frac{1}{N}\big(H(W_1|W_0)-I(W_1;Z^N|W_0)+\delta(\epsilon)\big)\\
&= \frac{1}{N}\big(I(W_1;Y^N|W_0)+H(W_1|Y^N,W_0)-I(W_1;Z^N|W_0)+\delta(\epsilon)\big)\\
&\le \frac{1}{N}\big(I(W_1;Y^N|W_0)-I(W_1;Z^N|W_0)+2\delta(\epsilon)\big).
\end{aligned}\tag{A21}$$

Then, the term $I(W_1;Y^N|W_0)$ in (A21) can be expanded as

$$\begin{aligned}
I(W_1;Y^N|W_0) &= \sum_{i=1}^{N} I(W_1;Y_i|W_0,Y^{i-1})\\
&= \sum_{i=1}^{N}\big(H(Y_i|W_0,Y^{i-1})-H(Y_i|W_0,W_1,Y^{i-1})\\
&\qquad +H(Y_i|W_0,Y^{i-1},W_1,Z_{i+1}^{N})-H(Y_i|W_0,Y^{i-1},W_1,Z_{i+1}^{N})\big)\\
&= \sum_{i=1}^{N}\big(I(Y_i;W_1,Z_{i+1}^{N}|W_0,Y^{i-1})-I(Y_i;Z_{i+1}^{N}|W_0,W_1,Y^{i-1})\big)\\
&= \sum_{i=1}^{N}\big(I(Y_i;Z_{i+1}^{N}|W_0,Y^{i-1})+I(Y_i;W_1|W_0,Y^{i-1},Z_{i+1}^{N})\\
&\qquad -I(Y_i;Z_{i+1}^{N}|W_0,W_1,Y^{i-1})\big),
\end{aligned}\tag{A22}$$

and the term $I(W_1;Z^N|W_0)$ in (A21) can be expanded as

$$\begin{aligned}
I(W_1;Z^N|W_0) &= \sum_{i=1}^{N} I(W_1;Z_i|W_0,Z_{i+1}^{N})\\
&= \sum_{i=1}^{N}\big(H(Z_i|W_0,Z_{i+1}^{N})-H(Z_i|W_0,W_1,Z_{i+1}^{N})\\
&\qquad +H(Z_i|W_0,Y^{i-1},W_1,Z_{i+1}^{N})-H(Z_i|W_0,Y^{i-1},W_1,Z_{i+1}^{N})\big)\\
&= \sum_{i=1}^{N}\big(I(Z_i;W_1,Y^{i-1}|W_0,Z_{i+1}^{N})-I(Z_i;Y^{i-1}|W_0,W_1,Z_{i+1}^{N})\big)\\
&= \sum_{i=1}^{N}\big(I(Z_i;Y^{i-1}|W_0,Z_{i+1}^{N})+I(Z_i;W_1|W_0,Y^{i-1},Z_{i+1}^{N})\\
&\qquad -I(Z_i;Y^{i-1}|W_0,W_1,Z_{i+1}^{N})\big).
\end{aligned}\tag{A23}$$

Note that

$$\sum_{i=1}^{N} I(Y_i;Z_{i+1}^{N}|W_0,Y^{i-1}) = \sum_{i=1}^{N} I(Z_i;Y^{i-1}|W_0,Z_{i+1}^{N}),\tag{A24}$$

and

$$\sum_{i=1}^{N} I(Y_i;Z_{i+1}^{N}|W_0,W_1,Y^{i-1}) = \sum_{i=1}^{N} I(Z_i;Y^{i-1}|W_0,W_1,Z_{i+1}^{N}),\tag{A25}$$

both of which follow from Csiszár's sum identity [20]. Substituting (A22) and (A23) into (A21), and using the equalities (A24) and (A25), we have

$$\begin{aligned}
\frac{1}{N}H(W_1|Z^N) &\le \frac{1}{N}\big(I(W_1;Y^N|W_0)-I(W_1;Z^N|W_0)+2\delta(\epsilon)\big)\\
&= \frac{1}{N}\sum_{i=1}^{N}\big(I(Y_i;W_1|W_0,Y^{i-1},Z_{i+1}^{N})-I(Z_i;W_1|W_0,Y^{i-1},Z_{i+1}^{N})\big)+\frac{2\delta(\epsilon)}{N}\\
&= \frac{1}{N}\sum_{i=1}^{N}\big(I(Y_i;W_1|W_0,Y^{i-1},Z_{i+1}^{N},J=i)-I(Z_i;W_1|W_0,Y^{i-1},Z_{i+1}^{N},J=i)\big)+\frac{2\delta(\epsilon)}{N}\\
&= I(Y_J;W_1|W_0,Y^{J-1},Z_{J+1}^{N},J)-I(Z_J;W_1|W_0,Y^{J-1},Z_{J+1}^{N},J)+\frac{2\delta(\epsilon)}{N}\\
&= I(Y;V_1|U)-I(Z;V_1|U)+\frac{2\delta(\epsilon)}{N}.
\end{aligned}\tag{A26}$$

By letting $\epsilon \to 0$ and using $R_{e1} \le \lim_{N\to\infty} H(W_1|Z^N)/N$ together with (A26), $R_{e1} \le I(V_1;Y|U)-I(V_1;Z|U)$ is proved.

(Proof of $R_{e2} \le \min\{I(V_2;Z|U,V_1)-I(V_2;Y|U,V_1),\ I(V_2;Z|U)-I(V_2;Y|U)\}$) The proof is analogous to the proof of $R_{e1} \le \min\{I(V_1;Y|U,V_2)-I(V_1;Z|U,V_2),\ I(V_1;Y|U)-I(V_1;Z|U)\}$, and therefore we omit it here. The Markov chain $U \to (U_1,U_2,V_1,V_2) \to (X,X_1) \to (Y,Y_1,Z)$ follows directly from the definitions of the auxiliary random variables. Thus, the proof of Theorem 1 is completed.

APPENDIX B
PROOF OF THEOREM 2

Suppose $(R_0,R_1,R_2,R_{e1},R_{e2}) \in \mathcal{R}^{(Ai1)}$; we will show that $(R_0,R_1,R_2,R_{e1},R_{e2})$ is achievable, i.e., there exists an encoder-decoder $(N,\Delta_1,\Delta_2,P_{e1},P_{e2})$ such that (2.3) is satisfied. The existence of the encoder-decoder is

under the sufficient conditions that

$$R_{e1} = I(V_1;Y|U,X_1)-I(V_1;V_2|U,X_1)-I(V_1;Z|U,X_1,V_2),\tag{A27}$$

and

$$R_{e2} = I(V_2;Z|U,X_1)-I(V_1;V_2|U,X_1)-I(V_2;Y|U,X_1,V_1).\tag{A28}$$

The coding scheme combines the decode-and-forward (DF) strategy [35], random binning, superposition coding, block Markov coding and rate splitting techniques. The rate splitting technique is typically used in interference channels to achieve a larger rate region, as it enables interference cancellation at the receivers. Here we use it to split the confidential message $W_1$ into $W_{10}$ and $W_{11}$, and $W_2$ into $W_{20}$ and $W_{22}$, and the details are as follows. Define the messages $W_0, W_{10}, W_{11}, W_{20}, W_{22}$ taking values in the alphabets $\mathcal{W}_0 = \{1,2,\ldots,2^{NR_0}\}$, $\mathcal{W}_{10} = \{1,2,\ldots,2^{NR_{10}}\}$, $\mathcal{W}_{11} = \{1,2,\ldots,2^{NR_{11}}\}$, $\mathcal{W}_{20} = \{1,2,\ldots,2^{NR_{20}}\}$, $\mathcal{W}_{22} = \{1,2,\ldots,2^{NR_{22}}\}$, respectively, where $R_{10}+R_{11}=R_1$ and $R_{20}+R_{22}=R_2$. Here note that the formulas (A27) and (A28), combined with the rate splitting and the fact that $W_{10}$ and $W_{20}$ are decoded by both receivers, ensure that

$$R_{11} \ge R_{e1} = I(V_1;Y|U,X_1)-I(V_1;V_2|U,X_1)-I(V_1;Z|U,X_1,V_2),\tag{A29}$$

and

$$R_{22} \ge R_{e2} = I(V_2;Z|U,X_1)-I(V_1;V_2|U,X_1)-I(V_2;Y|U,X_1,V_1).\tag{A30}$$

Code Construction: Fix the joint probability mass function $P_{Y,Z,Y_1,X,X_1,V_1,V_2,U}(y,z,y_1,x,x_1,v_1,v_2,u)$. For arbitrary $\epsilon > 0$, define

$$L_{11} = I(V_1;Y|U,X_1)-I(V_1;V_2|U,X_1)-I(V_1;Z|U,X_1,V_2),\tag{A31}$$
$$L_{12} = I(V_1;Z|U,X_1,V_2),\tag{A32}$$
$$L_{21} = I(V_2;Z|U,X_1)-I(V_1;V_2|U,X_1)-I(V_2;Y|U,X_1,V_1),\tag{A33}$$
$$L_{22} = I(V_2;Y|U,X_1,V_1),\tag{A34}$$
$$L_{3} = I(V_1;V_2|U,X_1)-\epsilon.\tag{A35}$$

Note that

$$L_{11}+L_{12}+L_{3} = I(V_1;Y|U,X_1)-\epsilon,\tag{A36}$$
$$L_{21}+L_{22}+L_{3} = I(V_2;Z|U,X_1)-\epsilon.\tag{A37}$$

• First, generate at random $2^{NR_r}$ i.i.d. sequences at the relay node, each drawn according to $p_{X_1^N}(x_1^N) = \prod_{i=1}^{N} p_{X_1}(x_{1,i})$, and index them as $x_1^N(a)$, $a \in [1,2^{NR_r}]$, where

$$R_r = \min\{I(X_1;Y), I(X_1;Z)\}-\epsilon.\tag{A38}$$

• Generate at random $2^{N(R_{10}+R_{20}+R_0)}$ i.i.d. sequences $u^N(b|a)$ ($b \in [1,2^{N(R_{10}+R_{20}+R_0)}]$, $a \in [1,2^{NR_r}]$) according to $\prod_{i=1}^{N} p_{U|X_1}(u_i|x_{1,i})$. In addition, partition these $2^{N(R_{10}+R_{20}+R_0)}$ i.i.d. sequences $u^N$ into $2^{NR_r}$ bins, denoted $\{S_1,S_2,\ldots,S_{2^{NR_r}}\}$, where each $S_i$ ($1 \le i \le 2^{NR_r}$) contains $2^{N(R_{10}+R_{20}+R_0-R_r)}$ of the sequences $u^N$.

• For the transmitted sequences $u^N$ and $x_1^N$, generate $2^{N(L_{11}+L_{12}+L_3)}$ i.i.d. sequences $v_1^N(i',i'',i''')$, with $i' \in \mathcal{I}' = [1,2^{NL_{11}}]$, $i'' \in \mathcal{I}'' = [1,2^{NL_{12}}]$ and $i''' \in \mathcal{I}''' = [1,2^{NL_3}]$, according to $\prod_{i=1}^{N} p_{V_1|U,X_1}(v_{1,i}|u_i,x_{1,i})$.

• Similarly, for the transmitted sequences $u^N$ and $x_1^N$, generate $2^{N(L_{21}+L_{22}+L_3)}$ i.i.d. sequences $v_2^N(j',j'',j''')$, with $j' \in \mathcal{J}' = [1,2^{NL_{21}}]$, $j'' \in \mathcal{J}'' = [1,2^{NL_{22}}]$ and $j''' \in \mathcal{J}''' = [1,2^{NL_3}]$, according to $\prod_{i=1}^{N} p_{V_2|U,X_1}(v_{2,i}|u_i,x_{1,i})$.

• The $x^N$ is generated according to a new discrete memoryless channel (DMC) with inputs $x_1^N, u^N, v_1^N, v_2^N$ and output $x^N$. The transition probability of this new DMC is $p_{X|X_1,U,V_1,V_2}(x|x_1,u,v_1,v_2)$, and the probability $p_{X^N|X_1^N,U^N,V_1^N,V_2^N}(x^N|x_1^N,u^N,v_1^N,v_2^N)$ is calculated as

$$p_{X^N|X_1^N,U^N,V_1^N,V_2^N}(x^N|x_1^N,u^N,v_1^N,v_2^N) = \prod_{i=1}^{N} p_{X|X_1,U,V_1,V_2}(x_i|x_{1,i},u_i,v_{1,i},v_{2,i}).\tag{A39}$$

Denote $x^N$ by $x^N(a,w_0,w_{10},w_{20},w_{11},w_{22})$.

Encoding: Encoding involves the mapping of message indices to channel inputs, facilitated by the sequences generated above. We exploit the block Markov coding scheme; as argued in [35], the loss induced by this scheme is negligible as the number of blocks $n \to \infty$. For block $i$ ($1 \le i \le n$), encoding proceeds as follows. First, for convenience, define $w_{0,i}^* = (w_{0,i},w_{10,i},w_{20,i})$, where $w_{0,i}$, $w_{10,i}$ and $w_{20,i}$ are the messages transmitted in the $i$-th block. The messages $w_{11}$ and $w_{22}$ transmitted in the $i$-th block are denoted by $w_{11,i}$ and $w_{22,i}$, respectively.

• (Channel encoder)

1) The transmitter sends $(u^N(w_{0,1}^*|1),\ v_1^N(i_1',i_1'',i_1'''|1,w_{0,1}^*),\ v_2^N(j_1',j_1'',j_1'''|1,w_{0,1}^*))$ in the first block, $(u^N(w_{0,i}^*|a_{i-1}),\ v_1^N(i_i',i_i'',i_i'''|a_{i-1},w_{0,i}^*),\ v_2^N(j_i',j_i'',j_i'''|a_{i-1},w_{0,i}^*))$ from block 2 to $n-1$, and $(u^N(1|a_{n-1}),\ v_1^N(1,1,1|a_{n-1},1),\ v_2^N(1,1,1|a_{n-1},1))$ in block $n$. Here $i_i', i_i'', i_i''', j_i', j_i''$ and $j_i'''$ are the indexes for block $i$.

2) In the $i$-th block ($1 \le i \le n$), the indexes $i_i', i_i'', j_i'$ and $j_i''$ are determined by the following methods.

– If $R_{11} \le L_{11}+L_{12}$, define $\mathcal{W}_{11} = \mathcal{I}' \times \mathcal{K}_1$. Thus the index $i_i'$ is determined by a given message $w_{11,i}$. Evenly partition $\mathcal{I}''$ into $\mathcal{K}_1$ bins, and the index $i_i''$ is drawn at random (with uniform distribution) from the bin $k_1$.

Analogously, if $R_{22} \le L_{21}+L_{22}$, define $\mathcal{W}_{22} = \mathcal{J}' \times \mathcal{K}_2$. Thus the index $j_i'$ is determined by a given message $w_{22,i}$. Evenly partition $\mathcal{J}''$ into $\mathcal{K}_2$ bins, and the index $j_i''$ is drawn at random (with uniform distribution) from the bin $k_2$.

– If $L_{11}+L_{12} \le R_{11} \le L_{11}+L_{12}+L_3$, define $\mathcal{W}_{11} = \mathcal{I}' \times \mathcal{I}'' \times \mathcal{K}_1$. Thus the indexes $i_i'$ and $i_i''$ are determined by a given message $w_{11,i}$. Evenly partition $\mathcal{I}'''$ into $\mathcal{K}_1$ bins, and the codeword $v_1^N(i_i',i_i'',i_i'''|a_{i-1},w_{0,i}^*)$ will be drawn from the bin $k_1$.

Analogously, if $L_{21}+L_{22} \le R_{22} \le L_{21}+L_{22}+L_3$, define $\mathcal{W}_{22} = \mathcal{J}' \times \mathcal{J}'' \times \mathcal{K}_2$. Thus the indexes $j_i'$ and $j_i''$ are determined by a given message $w_{22,i}$. Evenly partition $\mathcal{J}'''$ into $\mathcal{K}_2$ bins, and the codeword $v_2^N(j_i',j_i'',j_i'''|a_{i-1},w_{0,i}^*)$ will be drawn from the bin $k_2$.

3) In the $i$-th block ($1 \le i \le n$), the indexes $i_i'''$ and $j_i'''$ are determined as follows. After the determination of $i_i', i_i'', j_i'$ and $j_i''$, the transmitter tries to find a pair $(v_1^N(i_i',i_i'',i_i'''|a_{i-1},w_{0,i}^*),\ v_2^N(j_i',j_i'',j_i'''|a_{i-1},w_{0,i}^*))$ such that $(u^N(w_{0,i}^*|a_{i-1}),\ x_1^N(a_{i-1}),\ v_1^N(i_i',i_i'',i_i'''|a_{i-1},w_{0,i}^*),\ v_2^N(j_i',j_i'',j_i'''|a_{i-1},w_{0,i}^*))$ are jointly typical. If there is more than one such pair, randomly choose one; if there is no such pair, an error is declared. Thus, all the indexes of $v_1^N$ and $v_2^N$ (in block $i$) are determined. One can show that such a pair exists with high probability for sufficiently large $N$ if (see [40])

$$I(V_1;Y|U,X_1)-\epsilon-R_{11}+I(V_2;Z|U,X_1)-\epsilon-R_{22} \ge I(V_1;V_2|U,X_1).\tag{A40}$$

4) In the $i$-th block ($1 \le i \le n$), the transmitter finally sends $x^N(a_{i-1},w_{0,i},w_{10,i},w_{20,i},w_{11,i},w_{22,i})$.

• (Relay encoder) The relay sends $x_1^N(1)$ in the first block, and $x_1^N(\hat{a}_{i-1})$ from block 2 to $n$.
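The index bookkeeping in step 2) (the case $R_{11} \le L_{11}+L_{12}$) amounts to simple integer arithmetic, which can be sketched as follows; the toy sizes and helper names below are ours, standing in for $2^{NL_{11}}$, $|\mathcal{K}_1|$ and $2^{NL_{12}}$:

```python
import random

# Sketch of step 2), case R11 <= L11 + L12: a message w11 in W11 = I' x K1
# fixes the index i' and a bin number k1; the index i'' is then drawn
# uniformly from bin k1 of I''. (Toy sizes stand in for 2^{N L11} etc.)
random.seed(1)
size_I1 = 8    # |I'|  ~ 2^{N L11}
size_K1 = 4    # |K1|, number of bins partitioning I''
size_I2 = 32   # |I''| ~ 2^{N L12}, assumed divisible by |K1|
bin_size = size_I2 // size_K1

def split_message(w11):
    """Map w11 in [0, |I'|*|K1|) to (i_prime, k1), as W11 = I' x K1."""
    return w11 // size_K1, w11 % size_K1

def draw_i_double_prime(k1):
    """Draw i'' uniformly from bin k1 of I'' (bins partition I'' evenly)."""
    return k1 * bin_size + random.randrange(bin_size)

w11 = 27
i_prime, k1 = split_message(w11)   # -> (6, 3)
i_dp = draw_i_double_prime(k1)

# A receiver that recovers i'' can recover the bin number k1:
assert i_dp // bin_size == k1
assert 0 <= i_prime < size_I1
```

The randomization of $i''$ within a bin is what carries the "dummy" randomness that protects $w_{11}$; the message itself only pins down the bin.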

Decoding: Decoding proceeds as follows.

1) (At the relay) At the end of block $i$ ($1 \le i \le n$), the relay already has an estimate of $a_{i-1}$ (denoted $\hat{a}_{i-1}$), which was sent in block $i-1$, and declares that it receives $\hat{a}_i$ if this is the only triple such that $(u^N(\hat{w}_{0,i}^*|\hat{a}_{i-1}),\ x_1^N(\hat{a}_{i-1}),\ y_1^N(i))$ are jointly typical. Here $y_1^N(i)$ denotes the output sequence $y_1^N$ in block $i$, and $\hat{a}_i$ is the index of the bin that $\hat{w}_{0,i}^*$ belongs to. Based on the AEP, the probability $Pr\{\hat{a}_i = a_i\}$ goes to 1 if

$$R_0+R_{10}+R_{20} \le I(U;Y_1|X_1).\tag{A41}$$

2) (At receiver 1) Receiver 1 decodes from the last block, i.e., block $n$. Suppose that at the end of block $n-1$ the relay has decoded successfully; then receiver 1 declares that $\check{a}_{n-1}$ is received if $(x_1^N(\check{a}_{n-1}),\ y^N(n))$ are jointly typical. By using (A38) and the AEP, it is easy to see that the probability $Pr\{\check{a}_{n-1} = a_{n-1}\}$ goes to 1. Having obtained $\check{a}_{n-1}$, receiver 1 can get an estimate of $a_i$ ($1 \le i \le n-2$) in a similar way.

Having $\check{a}_{i-1}$, receiver 1 can estimate the message $w_{0,i}^* = (w_{0,i},w_{10,i},w_{20,i})$ by finding a unique triple such that $(u^N(\check{w}_{0,i}^*|\check{a}_{i-1}),\ x_1^N(\check{a}_{i-1}),\ y^N(i))$ are jointly typical. Based on the AEP, the probability $Pr\{\check{w}_{0,i}^* = w_{0,i}^*\}$ goes to 1 if

$$R_0+R_{10}+R_{20}-R_r \le I(U;Y|X_1).\tag{A42}$$

After decoding $\check{w}_{0,i}^*$, receiver 1 tries to find a quadruple such that $(v_1^N(\check{i}_i',\check{i}_i'',\check{i}_i'''|\check{a}_{i-1},\check{w}_{0,i}^*),\ u^N(\check{w}_{0,i}^*|\check{a}_{i-1}),\ x_1^N(\check{a}_{i-1}),\ y^N(i))$ are jointly typical. Based on the AEP, the probability

$Pr\{\check{w}_{11,i} = w_{11,i}\}$ goes to 1 if

$$R_{11} \le I(V_1;Y|U,X_1).\tag{A43}$$

If such $v_1^N(\check{i}_i',\check{i}_i'',\check{i}_i'''|\check{a}_{i-1},\check{w}_{0,i}^*)$ exists and is unique, set $\check{i}_i' = i_i'$, $\check{i}_i'' = i_i''$ and $\check{i}_i''' = i_i'''$; otherwise, declare an error. From the values of $\check{i}_i', \check{i}_i'', \check{i}_i'''$ and the above encoding scheme, receiver 1 can calculate the message $\check{w}_{11,i}$.

(At receiver 2) The decoding scheme for receiver 2 is symmetric, and it is omitted here. Analogously, we have

$$R_0+R_{10}+R_{20}-R_r \le I(U;Z|X_1),\tag{A44}$$

and

$$R_{22} \le I(V_2;Z|U,X_1).\tag{A45}$$
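The joint typicality tests invoked repeatedly in the decoding steps above can be illustrated with a generic weak-typicality check; the toy pmf, parameter values and all names below are ours, not part of the paper's construction:

```python
import math
import random

# Minimal sketch of an epsilon-joint-typicality test: (x^N, y^N) is declared
# jointly typical if its empirical per-symbol log-probabilities are within
# epsilon of the entropies H(X), H(Y), H(X,Y). Toy binary joint pmf.
random.seed(3)
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p_x = {0: 0.5, 1: 0.5}
p_y = {0: 0.5, 1: 0.5}
H = lambda pmf: -sum(p * math.log2(p) for p in pmf.values())

def jointly_typical(xs, ys, eps):
    n = len(xs)
    sx = -sum(math.log2(p_x[x]) for x in xs) / n
    sy = -sum(math.log2(p_y[y]) for y in ys) / n
    sxy = -sum(math.log2(p_xy[(x, y)]) for x, y in zip(xs, ys)) / n
    return (abs(sx - H(p_x)) < eps and abs(sy - H(p_y)) < eps
            and abs(sxy - H(p_xy)) < eps)

# A long pair drawn i.i.d. from p_xy is jointly typical (with high probability):
n = 2000
pairs = random.choices(list(p_xy), weights=list(p_xy.values()), k=n)
xs, ys = zip(*pairs)
assert jointly_typical(xs, ys, eps=0.1)

# An independently generated y^N is (w.h.p.) NOT jointly typical with x^N:
ys_indep = [random.randrange(2) for _ in range(n)]
assert not jointly_typical(xs, ys_indep, eps=0.1)
```

This is exactly the dichotomy the AEP-based decoding arguments rely on: the true codeword passes the test, while independently generated codewords fail it with probability exponentially close to 1.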

The following Table I shows the transmitted codewords in the first three blocks.

TABLE I: Decode and forward strategy for the model of Figure 1

By using (A38), (A40), (A41), (A42), (A43), (A44) and (A45), it is easy to check that $P_{e1} \le \epsilon$ and $P_{e2} \le \epsilon$. Moreover, applying Fourier-Motzkin elimination to (A38), (A40), (A41), (A42), (A43), (A44) and (A45) with the definitions $R_1 = R_{10}+R_{11}$ and $R_2 = R_{20}+R_{22}$, we get

$$R_0 \le \min\{I(U;Y_1|X_1), I(U,X_1;Y), I(U,X_1;Z)\},$$
$$R_0+R_1 \le \min\{I(U;Y_1|X_1), I(U,X_1;Y), I(U,X_1;Z)\}+I(V_1;Y|U,X_1),$$
$$R_0+R_2 \le \min\{I(U;Y_1|X_1), I(U,X_1;Y), I(U,X_1;Z)\}+I(V_2;Z|U,X_1),$$
$$R_0+R_1+R_2 \le \min\{I(U;Y_1|X_1), I(U,X_1;Y), I(U,X_1;Z)\}+I(V_1;Y|U,X_1)+I(V_2;Z|U,X_1)-I(V_1;V_2|U,X_1).$$

Note that the above inequalities are the same as those in Theorem 2.


Equivocation Analysis: Now, it remains to prove $\lim_{N\to\infty}\Delta_1 \ge R_{e1} = I(V_1;Y|U,X_1)-I(V_1;V_2|U,X_1)-I(V_1;Z|U,X_1,V_2)$. The bound $\lim_{N\to\infty}\Delta_2 \ge R_{e2} = I(V_2;Z|U,X_1)-I(V_1;V_2|U,X_1)-I(V_2;Y|U,X_1,V_1)$ follows by symmetry.

$$\begin{aligned}
H(W_1|Z^N) &\ge H(W_1|Z^N,V_2^N,U^N,X_1^N)\\
&= H(W_{10},W_{11}|Z^N,V_2^N,U^N,X_1^N)\\
&\overset{(a)}{=} H(W_{11}|Z^N,V_2^N,U^N,X_1^N)\\
&= H(W_{11},Z^N|V_2^N,U^N,X_1^N)-H(Z^N|V_2^N,U^N,X_1^N)\\
&= H(W_{11},Z^N,V_1^N|V_2^N,U^N,X_1^N)-H(V_1^N|W_{11},Z^N,V_2^N,U^N,X_1^N)-H(Z^N|V_2^N,U^N,X_1^N)\\
&\ge H(Z^N,V_1^N|V_2^N,U^N,X_1^N)-H(V_1^N|W_{11},Z^N,V_2^N,U^N,X_1^N)-H(Z^N|V_2^N,U^N,X_1^N)\\
&= H(V_1^N|V_2^N,U^N,X_1^N)+H(Z^N|V_1^N,V_2^N,U^N,X_1^N)-H(V_1^N|W_{11},Z^N,V_2^N,U^N,X_1^N)\\
&\qquad -H(Z^N|V_2^N,U^N,X_1^N)\\
&= H(V_1^N|U^N,X_1^N)-I(V_1^N;V_2^N|U^N,X_1^N)-I(Z^N;V_1^N|V_2^N,U^N,X_1^N)\\
&\qquad -H(V_1^N|W_{11},Z^N,V_2^N,U^N,X_1^N),
\end{aligned}\tag{A46}$$

where (a) follows from the fact that given $U^N$, $W_{10}$ is uniquely determined. Consider the first term in (A46): the codeword generation and [24, Lemma 3] ensure that

$$H(V_1^N|U^N,X_1^N) \ge \log 2^{N(L_{11}+L_{12}+L_3)}-\delta = N(I(V_1;Y|U,X_1)-\epsilon)-\delta,\tag{A47}$$

where $\delta$ is small for sufficiently large $N$. For the second and third terms in (A46), using the same approach as that in [20, Lemma 3], we get

$$I(V_1^N;V_2^N|U^N,X_1^N) \le N(I(V_1;V_2|U,X_1)+\epsilon'),\tag{A48}$$

and

$$I(Z^N;V_1^N|V_2^N,U^N,X_1^N) \le N(I(V_1;Z|U,X_1,V_2)+\epsilon''),\tag{A49}$$

where $\epsilon', \epsilon'' \to 0$ as $N \to \infty$. Now, we consider the last term of (A46). For the case that $R_{11} \le L_{11}+L_{12}$: given $U^N$, $X_1^N$, $V_2^N$ and $W_{11}$, the total number of possible codewords of $V_1^N$ is

$$N_1 \le 2^{NL_{12}} = 2^{NI(V_1;Z|U,X_1,V_2)}.\tag{A50}$$

By using Fano's inequality and (A50), we have

$$H(V_1^N|W_{11},Z^N,V_2^N,U^N,X_1^N) \le N\epsilon''',\tag{A51}$$

where $\epsilon''' \to 0$.

For the case that $L_{11}+L_{12} \le R_{11} \le L_{11}+L_{12}+L_3$: given $U^N$, $X_1^N$, $V_2^N$ and $W_{11}$, $V_1^N$ is totally determined, and therefore

$$H(V_1^N|W_{11},Z^N,V_2^N,U^N,X_1^N) = 0.\tag{A52}$$

Substituting (A47), (A48), (A49) and (A51) (or (A52)) into (A46), and using the definition (2.3), we have $\lim_{N\to\infty}\Delta_1 \ge R_{e1} = I(V_1;Y|U,X_1)-I(V_1;V_2|U,X_1)-I(V_1;Z|U,X_1,V_2)$. This completes the proof of Theorem 2.

APPENDIX C
PROOF OF THEOREM 3

We consider the proof of Theorem 3 for the case $I(X_1;Y) \ge I(X_1;Z|U,V_2)$; the proof for $I(X_1;Z) \ge I(X_1;Y|U,V_1)$ follows by symmetry. In Theorem 3, the relay node does not attempt to decode the messages but sends codewords that are independent of the transmitter's messages, and these codewords aid in confusing the receivers. Since the channel between the relay and receiver 1 is better than the channel between the relay and receiver 2 ($I(X_1;Y) \ge I(X_1;Z|U,V_2) \ge I(X_1;Z)$), we allow receiver 1 to decode the relay codeword, while receiver 2 cannot decode it. Therefore, in this case, the relay codeword can be viewed as a noise signal that confuses receiver 2. Now we will prove that the quintuple $(R_0,R_1,R_2,R_{e1},R_{e2}) \in \mathcal{R}^{(Ai2)}$ with the conditions

$$R_{e1} = \min\{I(X_1;Z|U,V_1,V_2), I(X_1;Y)\}+I(V_1;Y|U,X_1)-I(V_1;V_2|U)-I(X_1,V_1;Z|U,V_2),\tag{A53}$$

and

$$R_{e2} = I(V_2;Z|U)-I(V_1;V_2|U)-I(V_2;Y|U,X_1,V_1),\tag{A54}$$

is achievable. Similar to the proof of Theorem 2, we split the confidential message $W_1$ into $W_{10}$ and $W_{11}$, and $W_2$ into $W_{20}$ and $W_{22}$; the definitions of these messages are the same as those in Appendix B. Here note that the formulas (A53) and (A54), combined with the rate splitting and the fact that $W_{10}$ and $W_{20}$ are decoded by both receivers, ensure that

$$R_{11} \ge R_{e1} = \min\{I(X_1;Z|U,V_1,V_2), I(X_1;Y)\}+I(V_1;Y|U,X_1)-I(V_1;V_2|U)-I(X_1,V_1;Z|U,V_2),\tag{A55}$$

and

$$R_{22} \ge R_{e2} = I(V_2;Z|U)-I(V_1;V_2|U)-I(V_2;Y|U,X_1,V_1).\tag{A56}$$

Code Construction: Fix the joint probability mass function

P_{Y,Z,Y1,X,X1,V1,V2,U}(y, z, y1, x, x1, v1, v2, u) = P_{Y,Z,Y1|X,X1}(y, z, y1|x, x1) P_{X|U,V1,V2}(x|u, v1, v2) P_{U,V1,V2}(u, v1, v2) P_{X1}(x1).

For arbitrary ε > 0, define

L11 = I(V1; Y|U, X1) − I(V1; V2|U) − I(V1; Z|U, V2), (A57)

L12 = I(V1; Z|U, V2), (A58)

L21 = I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1), (A59)

L22 = I(V2; Y|U, X1, V1), (A60)

L3 = I(V1; V2|U) − ε. (A61)

Note that

L11 + L12 + L3 = I(V1; Y|U, X1) − ε, (A62)

L21 + L22 + L3 = I(V2; Z|U) − ε, (A63)

L11 ≥ Re1. (A64)
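With toy numeric values for the mutual-information terms, the splitting parameters and the telescoping identities (A62)-(A63) can be sanity-checked in code. This is a minimal sketch; the function name `splitting_params` and the dictionary labels are illustrative assumptions, not the paper's notation.

```python
def splitting_params(mi, eps=1e-3):
    """Compute L11, L12, L21, L22, L3 from toy mutual-information values
    (dictionary keys are illustrative labels) and check (A62)-(A63)."""
    L11 = mi["I(V1;Y|U,X1)"] - mi["I(V1;V2|U)"] - mi["I(V1;Z|U,V2)"]
    L12 = mi["I(V1;Z|U,V2)"]
    L21 = mi["I(V2;Z|U)"] - mi["I(V1;V2|U)"] - mi["I(V2;Y|U,X1,V1)"]
    L22 = mi["I(V2;Y|U,X1,V1)"]
    L3 = mi["I(V1;V2|U)"] - eps
    # (A62) and (A63): the three-way splits telescope back to the full rates
    assert abs((L11 + L12 + L3) - (mi["I(V1;Y|U,X1)"] - eps)) < 1e-9
    assert abs((L21 + L22 + L3) - (mi["I(V2;Z|U)"] - eps)) < 1e-9
    return L11, L12, L21, L22, L3
```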



First, generate at random 2^{N Rr} i.i.d. sequences at the relay node, each drawn according to p_{X1^N}(x1^N) = ∏_{i=1}^{N} p_{X1}(x1,i); index them as x1^N(a), a ∈ [1, 2^{N Rr}], where

Rr = min{I(X1; Z|U, V1, V2), I(X1; Y)} − ε, (A65)

and ε → 0+. Note that I(X1; Z|U, V2) ≤ I(X1; Z|U, V1, V2) and I(X1; Z|U, V2) ≤ I(X1; Y), and thus

Rr ≥ I(X1; Z|U, V2) − ε, (A66)

and

Rr ≤ I(X1; Z|U, V1, V2) − ε. (A67)

• Generate at random 2^{N(R10+R20+R0)} i.i.d. sequences u^N(b) (b ∈ [1, 2^{N(R10+R20+R0)}]) according to ∏_{i=1}^{N} p_U(u_i).

• For the transmitted sequence u^N(b), generate 2^{N(L11+L12+L3)} i.i.d. sequences v1^N(i′, i″, i‴), with i′ ∈ I′ = [1, 2^{N L11}], i″ ∈ I″ = [1, 2^{N L12}] and i‴ ∈ I‴ = [1, 2^{N L3}], according to ∏_{i=1}^{N} p_{V1|U}(v1,i|u_i).

• Similarly, for the transmitted sequences u^N and x1^N, generate 2^{N(L21+L22+L3)} i.i.d. sequences v2^N(j′, j″, j‴), with j′ ∈ J′ = [1, 2^{N L21}], j″ ∈ J″ = [1, 2^{N L22}] and j‴ ∈ J‴ = [1, 2^{N L3}], according to ∏_{i=1}^{N} p_{V2|U}(v2,i|u_i).

• The x^N is generated according to a new discrete memoryless channel (DMC) with inputs u^N, v1^N, v2^N and output x^N. The transition probability of this new DMC is p_{X|U,V1,V2}(x|u, v1, v2). The probability p_{X^N|U^N,V1^N,V2^N}(x^N|u^N, v1^N, v2^N) is calculated as

p_{X^N|U^N,V1^N,V2^N}(x^N|u^N, v1^N, v2^N) = ∏_{i=1}^{N} p_{X|U,V1,V2}(x_i|u_i, v1,i, v2,i). (A68)

Denote x^N by x^N(w0, w10, w20, w11, w22).

Encoding: Similar to the definitions in Appendix B, define w*_{0,i} = (w0,i, w10,i, w20,i), where w0,i, w10,i and w20,i are the messages transmitted in the i-th block. The messages w11 and w22 transmitted in the i-th block are denoted by w11,i and w22,i, respectively.

• (Channel encoder)

1) The transmitter sends (u^N(w*_{0,i}), v1^N(i_i′, i_i″, i_i‴|w*_{0,i}), v2^N(j_i′, j_i″, j_i‴|w*_{0,i})) for the i-th block (1 ≤ i ≤ n). Here i_i′, i_i″, i_i‴, j_i′, j_i″ and j_i‴ are the indexes for block i.

2) The indexes i_i′, i_i″, j_i′ and j_i″ are determined by the following methods.

– If R11 ≤ L11, evenly partition I′ into |W11| bins, and the index i_i′ is drawn at random (with uniform distribution) from the bin w11. The index i_i″ is drawn at random (with uniform distribution) from I″. Note that R22 always satisfies R22 ≥ L21.

– If L11 ≤ R11 ≤ L11 + L12, define W11 = I′ × K1. Thus the index i_i′ is determined by a given message w11,i. Evenly partition I″ into |K1| bins, and the index i_i″ is drawn at random (with uniform distribution) from the bin k1.

Analogously, if R22 ≤ L21 + L22, define W22 = J′ × K2. Thus the index j_i′ is determined by a given message w22,i. Evenly partition J″ into |K2| bins, and the index j_i″ is drawn at random (with uniform distribution) from the bin k2.

– If L11 + L12 ≤ R11 ≤ L11 + L12 + L3, define W11 = I′ × I″ × K1. Thus the indexes i_i′ and i_i″ are determined by a given message w11,i. Evenly partition I‴ into |K1| bins, and the codeword v1^N(i_i′, i_i″, i_i‴|w*_{0,i}) will be drawn from the bin k1.

Analogously, if L21 + L22 ≤ R22 ≤ L21 + L22 + L3, define W22 = J′ × J″ × K2. Thus the indexes j_i′ and j_i″ are determined by a given message w22,i. Evenly partition J‴ into |K2| bins, and the codeword v2^N(j_i′, j_i″, j_i‴|w*_{0,i}) will be drawn from the bin k2.

3) The indexes i_i‴ and j_i‴ are determined as follows. After the determination of i_i′, i_i″, j_i′ and j_i″, the transmitter tries to find a pair (v1^N(i_i′, i_i″, i_i‴|w*_{0,i}), v2^N(j_i′, j_i″, j_i‴|w*_{0,i})) such that (u^N(w*_{0,i}), v1^N(i_i′, i_i″, i_i‴|w*_{0,i}), v2^N(j_i′, j_i″, j_i‴|w*_{0,i})) are jointly typical. If there is more than one such pair, randomly choose one; if there is no such pair, an error is declared. Thus, all the indexes of v1^N and v2^N (in block i) are determined. One can show that such a pair exists with high probability for sufficiently large N if (see [40])

I(V1; Y|U, X1) − ε − R11 + I(V2; Z|U) − ε − R22 ≥ I(V1; V2|U). (A69)

4) The transmitter finally sends x^N(w0,i, w10,i, w20,i, w11,i, w22,i).

• (Relay encoder)

In the i-th block, the relay uniformly picks a codeword x1^N(a_i) from a_i ∈ [1, 2^{N Rr}], and sends x1^N(a_i).
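The three-regime index selection in step 2) can be sketched in code. This is a toy sketch with made-up rates: `assign_indexes`, the contiguous integer bin layout, and the uniform draw of i‴ (which the actual scheme resolves by joint typicality within a bin) are simplifying assumptions of this illustration, not the paper's construction.

```python
import random

def assign_indexes(w11, R11, L11, L12, L3, N):
    """Toy sketch of choosing (i', i'', i''') for v1^N in the three regimes.
    Rates are per-symbol exponents; index sets have sizes 2^(N*rate)."""
    size1, size2, size3 = (2 ** round(N * L) for L in (L11, L12, L3))
    if R11 <= L11:
        # partition I' into 2^(N*R11) bins; i' is uniform inside bin w11
        bin_size = size1 // (2 ** round(N * R11))
        i1 = w11 * bin_size + random.randrange(bin_size)
        i2 = random.randrange(size2)
    elif R11 <= L11 + L12:
        # w11 = (i', k1): i' fixed by the message; i'' uniform inside bin k1
        i1, k1 = w11 % size1, w11 // size1
        bin_size = size2 // (2 ** round(N * (R11 - L11)))
        i2 = k1 * bin_size + random.randrange(bin_size)
    else:
        # w11 = (i', i'', k1): both i' and i'' fixed by the message
        i1 = w11 % size1
        i2 = (w11 // size1) % size2
    i3 = random.randrange(size3)  # resolved by joint typicality in the scheme
    return i1, i2, i3
```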

Decoding: Decoding proceeds as follows.

(At receiver 1) At the end of block i, receiver 1 declares that ǎ_i is received if (x1^N(ǎ_i), y^N(i)) are jointly typical. By using (A65) and the AEP, it is easy to see that the probability Pr{ǎ_i = a_i} goes to 1.

Having ǎ_i, receiver 1 can get the estimate of the message w*_{0,i} = (w0,i, w10,i, w20,i) by finding the unique triple such that (u^N(w̌*_{0,i}), x1^N(ǎ_i), y^N(i)) are jointly typical. Based on the AEP, the probability Pr{w̌*_{0,i} = w*_{0,i}} goes to 1 if

R0 + R10 + R20 ≤ I(U; Y|X1). (A70)

After decoding w̌*_{0,i}, receiver 1 tries to find a quadruple such that (v1^N(ǐ_i′, ǐ_i″, ǐ_i‴|w̌*_{0,i}), u^N(w̌*_{0,i}), x1^N(ǎ_i), y^N(i)) are jointly typical. Based on the AEP, the probability Pr{w̌11,i = w11,i} goes to 1 if

R11 ≤ I(V1; Y|U, X1). (A71)

If such v1^N(ǐ_i′, ǐ_i″, ǐ_i‴|w̌*_{0,i}) exists and is unique, set ǐ_i′ = i_i′, ǐ_i″ = i_i″ and ǐ_i‴ = i_i‴; otherwise, declare an error. From the values of ǐ_i′, ǐ_i″, ǐ_i‴ and the above encoding schemes, receiver 1 can calculate the message w̌11,i.

(At receiver 2) The decoding scheme for receiver 2 is as follows. Receiver 2 gets the estimate of the message w*_{0,i} by finding the unique pair such that (u^N(ŵ*_{0,i}), z^N(i)) are jointly typical. Based on the AEP, the probability Pr{ŵ*_{0,i} = w*_{0,i}} goes to 1 if

R0 + R10 + R20 ≤ I(U; Z). (A72)

After decoding ŵ*_{0,i}, receiver 2 tries to find a triple such that (v2^N(ĵ_i′, ĵ_i″, ĵ_i‴|ŵ*_{0,i}), u^N(ŵ*_{0,i}), z^N(i)) are jointly typical. Based on the AEP, the probability Pr{ŵ22,i = w22,i} goes to 1 if

R22 ≤ I(V2; Z|U). (A73)

If such v2^N(ĵ_i′, ĵ_i″, ĵ_i‴|ŵ*_{0,i}) exists and is unique, set ĵ_i′ = j_i′, ĵ_i″ = j_i″ and ĵ_i‴ = j_i‴; otherwise, declare an error. From the values of ĵ_i′, ĵ_i″, ĵ_i‴ and the above encoding schemes, receiver 2 can calculate the message ŵ22,i.

By using (A65), (A69), (A70), (A71), (A72) and (A73), it is easy to check that Pe1 ≤ ε and Pe2 ≤ ε. Moreover, applying Fourier-Motzkin elimination on (A65), (A69), (A70), (A71), (A72) and (A73) with the definitions R1 = R10 + R11 and R2 = R20 + R22, we get

R0 ≤ min{I(U ; Y |X1 ), I(U ; Z)},

R0 + R1 ≤ min{I(U ; Y |X1 ), I(U ; Z)} + I(V1 ; Y |U, X1 ),

R0 + R2 ≤ min{I(U ; Y |X1 ), I(U ; Z)} + I(V2 ; Z|U ),

R0 + R1 + R2 ≤ min{I(U; Y|X1), I(U; Z)} + I(V1; Y|U, X1) + I(V2; Z|U) − I(V1; V2|U).

Note that the above inequalities are the same as those in Theorem 3.

Equivocation Analysis: It remains to prove lim_{N→∞} Δ1 ≥ Re1 = min{I(X1; Z|U, V1, V2), max{I(X1; Y), I(X1; Z|U, V2)}} + I(V1; Y|U, X1) − I(V1; V2|U) − I(X1, V1; Z|U, V2) and lim_{N→∞} Δ2 ≥ Re2 = I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1). Under the case assumption I(X1; Y) ≥ I(X1; Z|U, V2), the max above equals I(X1; Y).

Proof of lim_{N→∞} Δ1 ≥ Re1 = min{I(X1; Z|U, V1, V2), I(X1; Y)} + I(V1; Y|U, X1) − I(V1; V2|U) − I(X1, V1; Z|U, V2):

H(W1|Z^N)

≥ H(W1|Z^N, V2^N, U^N)

= H(W10, W11|Z^N, V2^N, U^N)

(a)= H(W11|Z^N, V2^N, U^N)

= H(W11, Z^N|V2^N, U^N) − H(Z^N|V2^N, U^N)

= H(W11, Z^N, V1^N, X1^N|V2^N, U^N) − H(V1^N, X1^N|W11, Z^N, V2^N, U^N) − H(Z^N|V2^N, U^N)

≥ H(Z^N, V1^N, X1^N|V2^N, U^N) − H(V1^N, X1^N|W11, Z^N, V2^N, U^N) − H(Z^N|V2^N, U^N)

= H(V1^N, X1^N|V2^N, U^N) + H(Z^N|V1^N, V2^N, U^N, X1^N) − H(V1^N, X1^N|W11, Z^N, V2^N, U^N) − H(Z^N|V2^N, U^N)

(b)= H(X1^N) + H(V1^N|V2^N, U^N) + H(Z^N|V1^N, V2^N, U^N, X1^N) − H(V1^N, X1^N|W11, Z^N, V2^N, U^N) − H(Z^N|V2^N, U^N)

= H(X1^N) + H(V1^N|U^N) − I(V1^N; V2^N|U^N) + H(Z^N|V1^N, V2^N, U^N, X1^N) − H(V1^N, X1^N|W11, Z^N, V2^N, U^N) − H(Z^N|V2^N, U^N)

= H(X1^N) + H(V1^N|U^N) − I(V1^N; V2^N|U^N) − I(Z^N; X1^N, V1^N|V2^N, U^N) − H(V1^N, X1^N|W11, Z^N, V2^N, U^N), (A74)

where (a) follows from the fact that given U^N, W10 is uniquely determined, and (b) follows from the fact that X1^N is independent of V1^N, V2^N and U^N.

Consider the first term in (A74): the codeword generation and [24, Lemma 3] ensure that

H(X1^N) ≥ N Rr − δ = N(min{I(X1; Z|U, V1, V2), I(X1; Y)} − ε) − δ, (A75)

where δ is small for sufficiently large N. For the second term in (A74), we similarly have

H(V1^N|U^N) ≥ log 2^{N(L11+L12+L3)} − δ1 = N(I(V1; Y|U, X1) − ε) − δ1, (A76)

where δ1 is small for sufficiently large N. For the third and fourth terms in (A74), using the same approach as in [20, Lemma 3], we get

I(V1^N; V2^N|U^N) ≤ N(I(V1; V2|U) + ε′), (A77)

and

I(Z^N; X1^N, V1^N|V2^N, U^N) ≤ N(I(X1, V1; Z|U, V2) + ε″), (A78)

where ε′, ε″ → 0 as N → ∞. Now we consider the last term of (A74). Given W11, receiver 2 can do joint decoding.



• For the case that R11 ≤ L11: given U^N, V2^N and W11, and with ε‴ → 0+,

H(V1^N, X1^N|W11, Z^N, V2^N, U^N) ≤ Nε‴ (A79)

is guaranteed if Rr ≤ I(X1; Z|V1, V2, U) − ε and Rr ≥ I(X1; Z|U, V2) − ε (ε → 0+); this follows from the properties of the AEP (a similar argument is used in the proof of Theorem 3 in [29]). By using (A66) and (A67), (A79) is obtained.

• For the case that L11 ≤ R11 ≤ L11 + L12: given U^N, V2^N and W11, the total number of possible codewords of V1^N is

N1 ≤ 2^{N L12} = 2^{N I(V1; Z|U, V2)}. (A80)

By using Fano's inequality and (A80), we have

H(V1^N|W11, Z^N, V2^N, U^N) ≤ Nε‴, (A81)

where ε‴ → 0. Given U^N, V1^N, V2^N and W11, the total number of possible codewords of X1^N is

N2 ≤ 2^{N Rr} = 2^{N(min{I(X1; Y), I(X1; Z|V1, V2, U)} − ε)}. (A82)

By using Fano's inequality and (A82), we have

H(X1^N|W11, Z^N, V1^N, V2^N, U^N) ≤ Nε⁗, (A83)

where ε⁗ → 0. By using (A81) and (A83),

(1/N) H(V1^N, X1^N|W11, Z^N, V2^N, U^N) ≤ ε → 0 (A84)

is guaranteed.

• For the case that L11 + L12 ≤ R11 ≤ L11 + L12 + L3: given U^N, V2^N and W11, V1^N is totally determined, and therefore

H(V1^N|W11, Z^N, V2^N, U^N) = 0. (A85)

Similarly, noting that Rr = min{I(X1; Z|U, V1, V2), I(X1; Y)} − ε and using Fano's inequality, we obtain (A83). Thus

(1/N) H(V1^N, X1^N|W11, Z^N, V2^N, U^N) ≤ ε → 0 (A86)

is guaranteed.

Substituting (A75), (A76), (A77), (A78) and (A79) (or (A84), (A86)) into (A74), and using the definition (2.3), we have lim_{N→∞} Δ1 ≥ Re1 = min{I(X1; Z|U, V1, V2), I(X1; Y)} + I(V1; Y|U, X1) − I(V1; V2|U) − I(X1, V1; Z|U, V2).

Proof of lim_{N→∞} Δ2 ≥ Re2 = I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1):

H(W2|Y^N)

≥ H(W2|Y^N, V1^N, U^N, X1^N)

= H(W20, W22|Y^N, V1^N, U^N, X1^N)

(a)= H(W22|Y^N, V1^N, U^N, X1^N)

= H(W22, Y^N|V1^N, U^N, X1^N) − H(Y^N|V1^N, U^N, X1^N)

= H(W22, Y^N, V2^N|V1^N, U^N, X1^N) − H(V2^N|W22, Y^N, V1^N, U^N, X1^N) − H(Y^N|V1^N, U^N, X1^N)

≥ H(Y^N, V2^N|V1^N, U^N, X1^N) − H(V2^N|W22, Y^N, V1^N, U^N, X1^N) − H(Y^N|V1^N, U^N, X1^N)

= H(V2^N|V1^N, U^N, X1^N) + H(Y^N|V2^N, V1^N, U^N, X1^N) − H(V2^N|W22, Y^N, V1^N, U^N, X1^N) − H(Y^N|V1^N, U^N, X1^N)

(b)= H(V2^N|U^N) − I(V1^N; V2^N|U^N) − I(Y^N; V2^N|V1^N, U^N, X1^N) − H(V2^N|W22, Y^N, V1^N, U^N, X1^N), (A87)

where (a) follows from the fact that given U^N, W20 is uniquely determined, and (b) follows from the fact that X1^N is independent of V1^N, V2^N and U^N. For the first term in (A87), we have

H(V2^N|U^N) ≥ log 2^{N(L21+L22+L3)} − δ3 = N(I(V2; Z|U) − ε) − δ3, (A88)

where δ3 is small for sufficiently large N. For the second and third terms in (A87), using the same approach as in [20, Lemma 3], we get

I(V1^N; V2^N|U^N) ≤ N(I(V1; V2|U) + ε′), (A89)

and

I(Y^N; V2^N|V1^N, U^N, X1^N) ≤ N(I(V2; Y|U, V1, X1) + ε″), (A90)

where ε′, ε″ → 0 as N → ∞. Now we consider the last term of (A87).

• For the case that R22 ≤ L21 + L22: given U^N, V1^N and W22, the total number of possible codewords of V2^N is

N3 ≤ 2^{N L22} = 2^{N I(V2; Y|U, X1, V1)}. (A91)

By using Fano's inequality and (A91), we have

H(V2^N|W22, Y^N, V1^N, U^N, X1^N) ≤ Nε‴, where ε‴ → 0. (A92)



• For the case that L21 + L22 ≤ R22 ≤ L21 + L22 + L3: given U^N, V1^N and W22, V2^N is totally determined, and therefore

H(V2^N|W22, Y^N, V1^N, U^N, X1^N) = 0. (A93)
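The Fano-based list-size bounds used in the cases above (e.g., (A81), (A83), (A92)) all have the shape (h(Pe) + Pe·log2(list size))/N, which vanishes as the decoding error Pe → 0. A toy numeric illustration follows; the function name `fano_list_bound` is an assumption of this sketch, not from the paper.

```python
from math import log2

def fano_list_bound(pe, list_size, N):
    """Per-symbol Fano bound (h(pe) + pe*log2(list_size)) / N for a decoder
    that chooses among `list_size` candidate codewords."""
    h = 0.0 if pe in (0.0, 1.0) else -pe * log2(pe) - (1 - pe) * log2(1 - pe)
    return (h + pe * log2(list_size)) / N
```

For example, even with a list of 2^1000 candidates (N = 1000) and Pe = 0.01, the per-symbol equivocation bound is already small.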

Substituting (A88), (A89), (A90) and (A92) (or (A93)) into (A87), and using the definition (2.3), we have lim_{N→∞} Δ2 ≥ Re2 = I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1). This completes the proof of Theorem 3.

APPENDIX D
PROOF OF THEOREM 4

We consider the proof of Theorem 4 for the case I(X1; Y) ≥ I(X1; Z|U, V2); the proof for the case I(X1; Z) ≥ I(X1; Y|U, V1) follows by symmetry. Now we will prove that the quintuple (R0, R1, R2, Re1, Re2) ∈ R(Ai3) with the conditions

Re1 = R* + I(V1; Y, Ŷ1|U, X1) − I(V1; V2|U) − I(X1, V1; Z|U, V2), (A94)

and

Re2 = I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1), (A95)

is achievable, where min{I(X1; Z|U, V1, V2), I(X1; Y)} − R* ≥ I(Y1; Ŷ1|X1). Similar to the proof of Theorem 3, we split the confidential message W1 into W10 and W11, and W2 into W20 and W22; the definitions of these messages are the same as those in Appendix C. Note that the formulas (A94) and (A95), combined with the rate splitting and the fact that W10 and W20 are decoded by both receivers, ensure that

R11 ≥ Re1 = R* + I(V1; Y, Ŷ1|U, X1) − I(V1; V2|U) − I(X1, V1; Z|U, V2), (A96)

and

R22 ≥ Re2 = I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1). (A97)

1) Code Construction:

Construction of the relay code-book: We first generate at random 2^{N Rr} i.i.d. sequences x1^N at the relay node, each drawn according to p(x1^N) = ∏_{i=1}^{N} p(x1,i); index them as x1^N(s), s ∈ [1, 2^{N Rr}], where

Rr = min{I(X1; Z|U, V1, V2), I(X1; Y)} − ε, (A98)

and

I(X1; Z|U, V2) − ε ≤ Rr ≤ I(X1; Z|U, V1, V2) − ε. (A99)

For each x1^N(s), generate at random 2^{N(Rr − R*)} i.i.d. ŷ1^N, each with probability p(ŷ1^N|x1^N(s)) = ∏_{i=1}^{N} p(ŷ1,i|x1,i(s)). Label these ŷ1^N(m, s), m ∈ [1, 2^{N(Rr − R*)}], s ∈ [1, 2^{N Rr}]. Equally divide the 2^{N Rr} x1^N sequences into 2^{N(Rr − R*)} bins; hence there are 2^{N R*} x1^N sequences in each bin. Let f be this mapping, i.e., m = f(s).
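The equal-size binning m = f(s) above can be sketched directly. A minimal sketch: `make_bin_mapping` is an illustrative name, and the contiguous layout of the bins is an assumption (any equal partition of the codewords works for the scheme).

```python
def make_bin_mapping(num_codewords, num_bins):
    """m = f(s): equally divide the num_codewords x1^N sequences into
    num_bins bins (num_bins = 2^{N(Rr - R*)}), so each bin holds exactly
    num_codewords // num_bins sequences."""
    assert num_codewords % num_bins == 0
    bin_size = num_codewords // num_bins
    return lambda s: s // bin_size
```

For instance, with 16 codewords and 4 bins, each bin index is shared by exactly 4 codewords, so knowing m narrows s down to 2^{N R*} candidates.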

Construction of U^N: Generate at random 2^{N(R10+R20+R0)} i.i.d. sequences u^N(b) (b ∈ [1, 2^{N(R10+R20+R0)}]) according to ∏_{i=1}^{N} p_U(u_i).

Constructions of V1^N and V2^N: For arbitrary ε > 0, define

L11 = I(V1; Y, Ŷ1|U, X1) − I(V1; V2|U) − I(V1; Z|U, V2), (A100)

L12 = I(V1; Z|U, V2), (A101)

L21 = I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1), (A102)

L22 = I(V2; Y|U, X1, V1), (A103)

L3 = I(V1; V2|U) − ε. (A104)

Note that

L11 + L12 + L3 = I(V1; Y, Ŷ1|U, X1) − ε, (A105)

L21 + L22 + L3 = I(V2; Z|U) − ε. (A106)

• For the transmitted sequence u^N(b), generate 2^{N(L11+L12+L3)} i.i.d. sequences v1^N(i′, i″, i‴), with i′ ∈ I′ = [1, 2^{N L11}], i″ ∈ I″ = [1, 2^{N L12}] and i‴ ∈ I‴ = [1, 2^{N L3}], according to ∏_{i=1}^{N} p_{V1|U}(v1,i|u_i).

• Similarly, for the transmitted sequences u^N, generate 2^{N(L21+L22+L3)} i.i.d. sequences v2^N(j′, j″, j‴), with j′ ∈ J′ = [1, 2^{N L21}], j″ ∈ J″ = [1, 2^{N L22}] and j‴ ∈ J‴ = [1, 2^{N L3}], according to ∏_{i=1}^{N} p_{V2|U}(v2,i|u_i).

Construction of X^N: The x^N is generated according to a new discrete memoryless channel (DMC) with inputs u^N, v1^N, v2^N and output x^N. The transition probability of this new DMC is p_{X|U,V1,V2}(x|u, v1, v2). Denote x^N by x^N(w0, w10, w20, w11, w22).

2) Encoding: Similar to the definitions in Appendix C, define w*_{0,i} = (w0,i, w10,i, w20,i), where w0,i, w10,i and w20,i are the

messages transmitted in the i-th block. The messages w11 and w22 transmitted in the i-th block are denoted by w11,i and w22,i, respectively.

• (Channel encoder)

1) The transmitter sends (u^N(w*_{0,i}), v1^N(i_i′, i_i″, i_i‴|w*_{0,i}), v2^N(j_i′, j_i″, j_i‴|w*_{0,i})) for the i-th block (1 ≤ i ≤ n). Here i_i′, i_i″, i_i‴, j_i′, j_i″ and j_i‴ are the indexes for block i. Note in particular that for the n-th block, the transmitted messages are set to (w*_{0,n}, w11,n, w22,n) = (1, 1, 1).

2) The indexes i_i′, i_i″, j_i′, j_i″, i_i‴ and j_i‴ are determined exactly as in Appendix C, and we omit the details here.

3) The transmitter finally sends x^N(w0,i, w10,i, w20,i, w11,i, w22,i).

• (Relay encoder)


At the end of block i (2 ≤ i ≤ n), assume that (x1^N(s_i), y1^N(i), ŷ1^N(m_i, s_i)) are jointly typical; then the relay chooses s_{i+1} uniformly from bin m_i, and sends x1^N(s_{i+1}) in block i + 1. In the first block, the relay sends x1^N(1).

3) Decoding:

(At the relay) At the end of block i, the relay already has s_i; it then decides m_i by choosing m_i such that (x1^N(s_i), y1^N(i), ŷ1^N(m_i, s_i)) are jointly typical. Such an m_i exists if

Rr − R* ≥ I(Y1; Ŷ1|X1), (A107)

and N is sufficiently large. The relay then chooses s_{i+1} uniformly from bin m_i.

(At receiver 1) Receiver 1 does backward decoding. The decoding process starts at the last block n: receiver 1 decodes s_n by choosing the unique š_n such that (x1^N(š_n), y^N(n)) are jointly typical. Since Rr satisfies (A98), the probability Pr{š_n = s_n} goes to 1 for sufficiently large N.

Next, receiver 1 moves to block n − 1. It already has š_n, and hence also m̌_{n−1} = f(š_n). It first declares that š_{n−1} is received if š_{n−1} is the unique index such that (x1^N(š_{n−1}), y^N(n − 1)) are jointly typical. If (A98) is satisfied, š_{n−1} = s_{n−1} with high probability. After knowing š_{n−1}, the destination gets an estimate of w*_{0,n−1} by picking the unique w̌*_{0,n−1} such that (u^N(w̌*_{0,n−1}), ŷ1^N(m̌_{n−1}, š_{n−1}), y^N(n − 1), x1^N(š_{n−1})) are jointly typical. We will have w̌*_{0,n−1} = w*_{0,n−1} with high probability if

R0 + R10 + R20 ≤ I(U; Y, Ŷ1|X1), (A108)

and N is sufficiently large.

After decoding w̌*_{0,n−1}, receiver 1 tries to find a quadruple such that (v1^N(ǐ_{n−1}′, ǐ_{n−1}″, ǐ_{n−1}‴|w̌*_{0,n−1}), u^N(w̌*_{0,n−1}), x1^N(š_{n−1}), ŷ1^N(m̌_{n−1}, š_{n−1}), y^N(n − 1)) are jointly typical. Based on the AEP, the probability Pr{w̌11,n−1 = w11,n−1} goes to 1 if

R11 ≤ I(V1; Y, Ŷ1|U, X1). (A109)

If such v1^N(ǐ_{n−1}′, ǐ_{n−1}″, ǐ_{n−1}‴|w̌*_{0,n−1}) exists and is unique, set ǐ_{n−1}′ = i_{n−1}′, ǐ_{n−1}″ = i_{n−1}″ and ǐ_{n−1}‴ = i_{n−1}‴; otherwise, declare an error. From the values of ǐ_{n−1}′, ǐ_{n−1}″, ǐ_{n−1}‴ and the above encoding schemes, receiver 1 can calculate the message w̌11,n−1. The decoding scheme of receiver 1 in block i (1 ≤ i ≤ n − 2) is similar to that in block n − 1, and we omit it here.

(At receiver 2) In block i (1 ≤ i ≤ n − 1), since Rr satisfies (A98), and noting that I(X1; Y) ≥ I(X1; Z|V2, U) ≥ I(X1; Z) and I(X1; Z|V1, V2, U) ≥ I(X1; Z), receiver 2 cannot decode the relay codeword x1^N. The decoding scheme for receiver 2 is as follows. Receiver 2 gets the estimate of the message w*_{0,i} by finding the unique pair such that (u^N(ŵ*_{0,i}), z^N(i)) are jointly typical. Based on the AEP, the probability Pr{ŵ*_{0,i} = w*_{0,i}} goes to 1 if

R0 + R10 + R20 ≤ I(U; Z). (A110)

After decoding ŵ*_{0,i}, receiver 2 tries to find a triple such that (v2^N(ĵ_i′, ĵ_i″, ĵ_i‴|ŵ*_{0,i}), u^N(ŵ*_{0,i}), z^N(i)) are jointly typical. Based on the AEP, the probability Pr{ŵ22,i = w22,i} goes to 1 if

R22 ≤ I(V2; Z|U). (A111)

If such v2^N(ĵ_i′, ĵ_i″, ĵ_i‴|ŵ*_{0,i}) exists and is unique, set ĵ_i′ = j_i′, ĵ_i″ = j_i″ and ĵ_i‴ = j_i‴; otherwise, declare an error. From the values of ĵ_i′, ĵ_i″, ĵ_i‴ and the above encoding schemes, receiver 2 can calculate the message ŵ22,i.
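The bookkeeping of the relay's bin chaining and receiver 1's backward pass can be sketched as follows (noise-free, i.e., assuming all typicality tests succeed; `recover_bins_backward` and `f` are illustrative names from this sketch).

```python
def recover_bins_backward(s_list, f):
    """Noise-free bookkeeping of receiver 1's backward decoding: knowing
    the relay indices s_1..s_n (decoded from block n down to block 1),
    recover m_i = f(s_{i+1}) for i = n-1, ..., 1, since s_{i+1} was
    chosen by the relay from bin m_i. `f` is the bin mapping m = f(s)."""
    n = len(s_list)
    return [f(s_list[i]) for i in range(n - 1, 0, -1)]  # m_{n-1}, ..., m_1
```

The key point this mirrors is that each decoded s_{i+1} reveals the compression index m_i of the previous block, which is why decoding must run backwards.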

By using the above encoding-decoding scheme, it is easy to check that Pe1 ≤ ε and Pe2 ≤ ε. Moreover, applying Fourier-Motzkin elimination on the above inequalities with the definitions R1 = R10 + R11 and R2 = R20 + R22, we get

R0 ≤ min{I(U; Y, Ŷ1|X1), I(U; Z)},

R0 + R1 ≤ min{I(U ; Y, Yˆ1 |X1 ), I(U ; Z)} + I(V1 ; Y, Yˆ1 |U, X1 ),

R0 + R2 ≤ min{I(U ; Y, Yˆ1 |X1 ), I(U ; Z)} + I(V2 ; Z|U ),

R0 + R1 + R2 ≤ min{I(U ; Y, Yˆ1 |X1 ), I(U ; Z)} + I(V1 ; Y, Yˆ1 |U, X1 ) + I(V2 ; Z|U ) − I(V1 ; V2 |U ),

min{I(X1; Y), I(X1; Z|V1, V2, U)} − R* ≥ I(Y1; Ŷ1|X1).

Note that the above bounds are the same as those in Theorem 4.

Equivocation Analysis: It remains to prove lim_{N→∞} Δ1 ≥ Re1 = R* + I(V1; Y, Ŷ1|U, X1) − I(V1; V2|U) − I(X1, V1; Z|U, V2) and lim_{N→∞} Δ2 ≥ Re2 = I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1).

Proof of lim_{N→∞} Δ1 ≥ Re1 = R* + I(V1; Y, Ŷ1|U, X1) − I(V1; V2|U) − I(X1, V1; Z|U, V2):

H(W1|Z^N)

≥ H(W1|Z^N, V2^N, U^N)

= H(W10, W11|Z^N, V2^N, U^N)

(a)= H(W11|Z^N, V2^N, U^N)

= H(W11, Z^N|V2^N, U^N) − H(Z^N|V2^N, U^N)

= H(W11, Z^N, V1^N, X1^N|V2^N, U^N) − H(V1^N, X1^N|W11, Z^N, V2^N, U^N) − H(Z^N|V2^N, U^N)

≥ H(Z^N, V1^N, X1^N|V2^N, U^N) − H(V1^N, X1^N|W11, Z^N, V2^N, U^N) − H(Z^N|V2^N, U^N)

= H(V1^N, X1^N|V2^N, U^N) + H(Z^N|V1^N, V2^N, U^N, X1^N) − H(V1^N, X1^N|W11, Z^N, V2^N, U^N) − H(Z^N|V2^N, U^N)

(b)= H(X1^N) + H(V1^N|V2^N, U^N) + H(Z^N|V1^N, V2^N, U^N, X1^N) − H(V1^N, X1^N|W11, Z^N, V2^N, U^N) − H(Z^N|V2^N, U^N)

= H(X1^N) + H(V1^N|U^N) − I(V1^N; V2^N|U^N) + H(Z^N|V1^N, V2^N, U^N, X1^N) − H(V1^N, X1^N|W11, Z^N, V2^N, U^N) − H(Z^N|V2^N, U^N)

= H(X1^N) + H(V1^N|U^N) − I(V1^N; V2^N|U^N) − I(Z^N; X1^N, V1^N|V2^N, U^N) − H(V1^N, X1^N|W11, Z^N, V2^N, U^N), (A112)

where (a) follows from the fact that given U^N, W10 is uniquely determined, and (b) follows from the fact that X1^N is independent of V1^N, V2^N and U^N. Consider the first term in (A112): the codeword generation and [24, Lemma 3] ensure that

H(X1^N) ≥ N R* − δ, (A113)

where δ is small for sufficiently large N. For the second term in (A112), we similarly have

H(V1^N|U^N) ≥ log 2^{N(L11+L12+L3)} − δ1 = N(I(V1; Y, Ŷ1|U, X1) − ε) − δ1, (A114)

where δ1 is small for sufficiently large N. For the third and fourth terms in (A112), using the same approach as in [20, Lemma 3], we get

I(V1^N; V2^N|U^N) ≤ N(I(V1; V2|U) + ε′), (A115)

and

I(Z^N; X1^N, V1^N|V2^N, U^N) ≤ N(I(X1, V1; Z|U, V2) + ε″), (A116)

where ε′, ε″ → 0 as N → ∞. Now we consider the last term of (A112).

• For the case that R11 ≤ L11: given U^N, V2^N and W11, and with ε‴ → 0+,

H(V1^N, X1^N|W11, Z^N, V2^N, U^N) ≤ Nε‴ (A117)

is guaranteed if Rr ≤ I(X1; Z|V1, V2, U) − ε and Rr ≥ I(X1; Z|U, V2) − ε (ε → 0+); this follows from the properties of the AEP (a similar argument is used in the proof of Theorem 3 in [29]). By using (A99), (A117) is obtained.

• For the case that L11 ≤ R11 ≤ L11 + L12: given U^N, V2^N and W11, the total number of possible codewords of V1^N is

N1 ≤ 2^{N L12} = 2^{N I(V1; Z|U, V2)}. (A118)

By using Fano's inequality and (A118), we have

H(V1^N|W11, Z^N, V2^N, U^N) ≤ Nε‴, (A119)

where ε‴ → 0. Given U^N, V1^N, V2^N and W11, the total number of possible codewords of X1^N is

N2 ≤ 2^{N Rr} = 2^{N(min{I(X1; Y), I(X1; Z|V1, V2, U)} − ε)}. (A120)

By using Fano's inequality and (A120), we have

H(X1^N|W11, Z^N, V1^N, V2^N, U^N) ≤ Nε⁗, (A121)

where ε⁗ → 0. By using (A119) and (A121),

(1/N) H(V1^N, X1^N|W11, Z^N, V2^N, U^N) ≤ ε → 0 (A122)

is guaranteed.

• For the case that L11 + L12 ≤ R11 ≤ L11 + L12 + L3: given U^N, V2^N and W11, V1^N is totally determined, and therefore

H(V1^N|W11, Z^N, V2^N, U^N) = 0. (A123)

Similarly, noting that Rr = min{I(X1; Y), I(X1; Z|V1, V2, U)} − ε and using Fano's inequality, we obtain (A121). Thus

(1/N) H(V1^N, X1^N|W11, Z^N, V2^N, U^N) ≤ ε → 0 (A124)

is guaranteed.

Substituting (A113), (A114), (A115), (A116) and (A117) (or (A122), (A124)) into (A112), and using the definition (2.3), we have lim_{N→∞} Δ1 ≥ Re1 = R* + I(V1; Y, Ŷ1|U, X1) − I(V1; V2|U) − I(X1, V1; Z|U, V2).

Proof of lim_{N→∞} Δ2 ≥ Re2 = I(V2; Z|U) − I(V1; V2|U) − I(V2; Y|U, X1, V1): The proof is exactly the same as that in Appendix C, and we therefore omit it here. This completes the proof of Theorem 4.

APPENDIX E
PROOF OF THEOREM 11

The proof of Theorem 11 is a combination of the NF strategy [29] and Csiszár-Körner's techniques for broadcast channels with confidential messages [20]; see the remainder of this section. Theorem 11 is proved via the following two cases.

• (Case 1) If the channel from the relay to receiver 1 is less noisy than the channel from the relay to receiver 2 (I(X1; Y) ≥ I(X1; Z|U)), we allow receiver 1 to decode x1^N, while receiver 2 cannot decode it. For case 1, it is sufficient to show that the triple (R0, R1, Re) ∈ L9 with the condition

Re = min{I(X1; Z|U, V), I(X1; Y)} + I(V; Y|U, X1) − I(X1, V; Z|U), (A125)

is achievable.

• (Case 2) If the channel from the relay to receiver 1 is more noisy than the channel from the relay to receiver 2 (I(X1; Y) ≤ I(X1; Z)), we allow both receivers to decode x1^N. For case 2, it is sufficient to show that the triple (R0, R1, Re) ∈ L10 with the condition

Re = I(V; Y|U, X1) − I(V; Z|U, X1), (A126)

is achievable.

Now split the confidential message W1 into W10 and W11, as follows. Define the messages W0, W10, W11, taking values in the alphabets W0, W10, W11, respectively, where W0 = {1, 2, ..., 2^{N R0}}, W10 = {1, 2, ..., 2^{N R10}}, W11 = {1, 2, ..., 2^{N R11}}, and R10 + R11 = R1. Note that the formulas (A125) and (A126), combined with the rate splitting and the fact that W10 is decoded by both receivers, ensure that

R11 ≥ Re = min{I(X1; Z|U, V), I(X1; Y)} + I(V; Y|U, X1) − I(X1, V; Z|U), (A127)

and

R11 ≥ Re = I(V; Y|U, X1) − I(V; Z|U, X1), (A128)

respectively.

Code-book Construction for the Two Cases: First, we define some parameters that will be used in the construction of v^N, as follows.

• For case 1, fix the joint probability mass function P_{Y,Z,Y1,X,X1,V,U}(y, z, y1, x, x1, v, u), and define

L11 = I(V; Y|U, X1) − I(V; Z|U), (A129)

L12 = I(V; Z|U), (A130)

L13 = min{I(U; Y|X1), I(U; Z)}. (A131)

Note that L11 ≥ Re.

• For case 2, fix the joint probability mass function P_{Y,Z,Y1,X,X1,V,U}(y, z, y1, x, x1, v, u), and define

L21 = I(V; Y|U, X1) − I(V; Z|U, X1), (A132)

L22 = I(V; Z|U, X1), (A133)

L23 = min{I(U; Y|X1), I(U; Z|X1)}. (A134)

Then, the constructions of the code-books for the two cases are as follows.



• Code-book Construction for case 1:

– First, generate at random 2^{N Rr} i.i.d. sequences at the relay node, each drawn according to p_{X1^N}(x1^N) = ∏_{i=1}^{N} p_{X1}(x1,i); index them as x1^N(a), a ∈ [1, 2^{N Rr}], where

Rr = min{I(X1; Z|U, V), I(X1; Y)} − ε, (A135)

and ε is an arbitrarily small positive real number. Here note that

I(X1; Z|U) − ε ≤ Rr ≤ I(X1; Z|U, V) − ε. (A136)

– Generate at random 2^{N(R0+R10)} i.i.d. sequences u^N(b) (b ∈ [1, 2^{N(R0+R10)}]) according to ∏_{i=1}^{N} p_U(u_i).

– For the transmitted sequence u^N(b), generate 2^{N(L11+L12+L13)} i.i.d. sequences v^N(i′, i″, i‴), with i′ ∈ I′ = [1, 2^{N L11}], i″ ∈ I″ = [1, 2^{N L12}] and i‴ ∈ I‴ = [1, 2^{N L13}], according to ∏_{i=1}^{N} p_{V|U}(v_i|u_i).

– The x^N is generated according to a new discrete memoryless channel (DMC) with inputs u^N, v^N and output x^N. The transition probability of this new DMC is p_{X|U,V}(x|u, v). The probability p_{X^N|U^N,V^N}(x^N|u^N, v^N) is calculated as

p_{X^N|U^N,V^N}(x^N|u^N, v^N) = ∏_{i=1}^{N} p_{X|U,V}(x_i|u_i, v_i). (A137)

• Code-book Construction for case 2:

– First, generate at random 2^{N Rr} i.i.d. sequences at the relay node, each drawn according to p_{X1^N}(x1^N) = ∏_{i=1}^{N} p_{X1}(x1,i); index them as x1^N(a), a ∈ [1, 2^{N Rr}], where

Rr = min{I(X1; Y), I(X1; Z)} − ε = I(X1; Y) − ε, (A138)

and ε is an arbitrarily small positive real number.

– Generate at random 2^{N(R0+R10)} i.i.d. sequences u^N(b) (b ∈ [1, 2^{N(R0+R10)}]) according to ∏_{i=1}^{N} p_U(u_i).

– For the transmitted sequence u^N(b), generate 2^{N(L21+L22+L23)} i.i.d. sequences v^N(i′, i″, i‴), with i′ ∈ I′ = [1, 2^{N L21}], i″ ∈ I″ = [1, 2^{N L22}] and i‴ ∈ I‴ = [1, 2^{N L23}], according to ∏_{i=1}^{N} p_{V|U}(v_i|u_i).

– The x^N is generated exactly the same as that of case 1, and it is omitted here.

Encoding:

• (Channel encoder)

1) For a given message triple (w0, w10, w11), the transmitter sends u^N(w0, w10) and v^N(i′, i″, i‴|w0, w10).

2) The indexes i′, i″ and i‴ are determined by the following methods.

– Case 1:

∗ If R11 ≤ L11, evenly partition I′ into |W11| bins, and the index i′ is drawn at random (with uniform distribution) from the bin w11. The index i″ is drawn at random (with uniform distribution) from I″. Let W0 × W10 ⊆ I‴, and the index i‴ is determined by the messages w0 and w10.

∗ If L11 ≤ R11 ≤ L11 + L12, define W11 = I′ × K1. Thus the index i′ is determined by a given message w11. Evenly partition I″ into |K1| bins, and the index i″ is drawn at random (with uniform distribution) from the bin k1. Let W0 × W10 ⊆ I‴, and the index i‴ is determined by the messages w0 and w10.

∗ If L11 + L12 ≤ R11 ≤ L11 + L12 + L13, define W11 = I′ × I″ × K1. Thus the indexes i′ and i″ are determined by a given message w11. Define a one-to-one mapping g of K1 × W0 × W10 into I‴, i.e., i‴ = g(w0, w10, k1), where k1 ∈ K1.

– Case 2:

∗ If R11 ≤ L21 + L22, define W11 = I′ × K1. Thus the index i′ is determined by a given message w11. Evenly partition I″ into |K1| bins, and the index i″ is drawn at random (with uniform distribution) from the bin k1. Let W0 × W10 ⊆ I‴, and the index i‴ is determined by the messages w0 and w10.

∗ If L21 + L22 ≤ R11 ≤ L21 + L22 + L23, define W11 = I′ × I″ × K1. Thus the indexes i′ and i″ are determined by a given message w11. Define a one-to-one mapping g of K1 × W0 × W10 into I‴, i.e., i‴ = g(w0, w10, k1), where k1 ∈ K1.

• (Relay encoder)

The relay uniformly picks a codeword x1^N(a) from a ∈ [1, 2^{N Rr}], and sends x1^N(a).
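The one-to-one mapping g of K1 × W0 × W10 into I‴ used above only needs to be injective; one concrete choice is mixed-radix packing. A minimal sketch (this particular formula is an assumption of the sketch, not specified in the paper):

```python
def g(w0, w10, k1, num_w10, num_k1):
    """One concrete injective choice for the one-to-one mapping g of
    K1 x W0 x W10 into I''': mixed-radix packing of the three indices.
    `num_w10` = |W10| and `num_k1` = |K1| are the radix sizes."""
    return (w0 * num_w10 + w10) * num_k1 + k1
```

Distinct triples (w0, w10, k1) always map to distinct integers, which is all the encoding step requires.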

Decoding: Decoding proceeds as follows.
• Case 1:
(At receiver 1) Receiver 1 declares that ǎ is received if (x1^N(ǎ), y^N) are jointly typical. By using (A135) and the AEP, it is easy to see that the probability Pr{ǎ = a} goes to 1. Having ǎ, receiver 1 gets the estimation of the messages w0 and w10 by finding a unique triple such that (u^N(w̌0, w̌10), x1^N(ǎ), y^N) are jointly typical. Based on the AEP, the probability Pr{(w̌0, w̌10) = (w0, w10)} goes to 1 if

R0 + R10 ≤ I(U; Y|X1). (A139)

After decoding w̌0 and w̌10, receiver 1 tries to find a quadruple such that (v^N(ǐ', ǐ'', ǐ'''|w̌0, w̌10), u^N(w̌0, w̌10), x1^N(ǎ), y^N) are jointly typical. Based on the AEP, the probability Pr{w̌11 = w11} goes to 1 if

R11 ≤ I(V; Y|U, X1). (A140)

If such v^N(ǐ', ǐ'', ǐ'''|w̌0, w̌10) exists and is unique, set ǐ' = i', ǐ'' = i'' and ǐ''' = i'''; otherwise, declare an error. From the values of ǐ', ǐ'', ǐ''' and the above encoding scheme, receiver 1 can calculate the message w̌11.
(At receiver 2) Receiver 2 gets the estimation of the messages w0 and w10 by finding a unique pair such that (u^N(ŵ0, ŵ10), z^N) are jointly typical. Based on the AEP, the probability Pr{(ŵ0, ŵ10) = (w0, w10)} goes to 1 if

R0 + R10 ≤ I(U; Z). (A141)

• Case 2:
(At receiver 1) Receiver 1 declares that ǎ is received if (x1^N(ǎ), y^N) are jointly typical. By using (A138) and the AEP, it is easy to see that the probability Pr{ǎ = a} goes to 1. Having ǎ, receiver 1 gets the estimation of the messages w0 and w10 by finding a unique triple such that (u^N(w̌0, w̌10), x1^N(ǎ), y^N) are jointly typical. After decoding w̌0 and w̌10, receiver 1 tries to find a quadruple such that (v^N(ǐ', ǐ'', ǐ'''|w̌0, w̌10), u^N(w̌0, w̌10), x1^N(ǎ), y^N) are jointly typical. If such v^N(ǐ', ǐ'', ǐ'''|w̌0, w̌10) exists and is unique, set ǐ' = i', ǐ'' = i'' and ǐ''' = i'''; otherwise, declare an error. From the values of ǐ', ǐ'', ǐ''' and the above encoding scheme, receiver 1 can calculate the message w̌11.
(At receiver 2) Receiver 2 declares that â is received if (x1^N(â), z^N) are jointly typical. By using (A138) and the AEP, it is easy to see that the probability Pr{â = a} goes to 1. Having â, receiver 2 gets the estimation of the messages w0 and w10 by finding a unique triple such that (u^N(ŵ0, ŵ10), x1^N(â), z^N) are jointly typical. Based on the AEP, the probability Pr{(ŵ0, ŵ10) = (w0, w10)} goes to 1 if

R0 + R10 ≤ I(U; Z|X1). (A142)
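The joint-typicality tests used by both receivers can be sketched generically. The check below compares the empirical per-symbol log-likelihood of a sequence pair against the true entropies, which is the weak-typicality criterion the AEP arguments rely on; the alphabet and distribution are toy choices for illustration, not any channel from the paper:

```python
import math
from collections import Counter

def jointly_typical(x, y, p_xy, eps=0.1):
    """Weak joint typicality: (x, y) is typical for p_xy if the empirical
    values of -(1/n) log2 p are within eps of H(X), H(Y) and H(X, Y)."""
    n = len(x)
    # Marginals of the given joint distribution.
    p_x, p_y = {}, {}
    for (a, b), p in p_xy.items():
        p_x[a] = p_x.get(a, 0.0) + p
        p_y[b] = p_y.get(b, 0.0) + p
    def true_entropy(p):
        return -sum(v * math.log2(v) for v in p.values() if v > 0)
    def emp_rate(seq, p):
        # Empirical value of -(1/n) log2 p(seq): the AEP quantity.
        counts = Counter(seq)
        return -sum(c / n * math.log2(p[s]) for s, c in counts.items())
    return (abs(emp_rate(x, p_x) - true_entropy(p_x)) < eps and
            abs(emp_rate(y, p_y) - true_entropy(p_y)) < eps and
            abs(emp_rate(list(zip(x, y)), p_xy) - true_entropy(p_xy)) < eps)

# Toy joint distribution with correlated binary symbols.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
x = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
y = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]   # empirical pair frequencies match p_xy
```

With these toy sequences `jointly_typical(x, y, p_xy)` holds, while a pair such as all-zeros against all-ones fails the joint test even though each marginal test passes, which is exactly why the decoders above test the full tuple of sequences rather than each sequence alone.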

By using (A135), (A139), (A140) and (A141), it is easy to check that Pe1 ≤ ε and Pe2 ≤ ε. Moreover, applying Fourier-Motzkin elimination on (A135), (A139), (A140) and (A141) with the definition R1 = R10 + R11, we get

R0 ≤ min{I(U; Y|X1), I(U; Z)},
R0 + R1 ≤ min{I(U; Y|X1), I(U; Z)} + I(V; Y|U, X1).

Similarly, by using (A138), (A139), (A140) and (A142), it is easy to check that Pe1 ≤ ε and Pe2 ≤ ε. Moreover, applying Fourier-Motzkin elimination on (A138), (A139), (A140) and (A142) with the definition R1 = R10 + R11, we get

R0 ≤ min{I(U; Y|X1), I(U; Z|X1)},
R0 + R1 ≤ min{I(U; Y|X1), I(U; Z|X1)} + I(V; Y|U, X1).

Equivocation Analysis: It remains to prove limN→∞ ∆ ≥ Re = min{I(X1; Z|U, V), I(X1; Y)} + I(V; Y|U, X1) − I(X1, V; Z|U) for case 1, and limN→∞ ∆ ≥ Re = I(V; Y|U, X1) − I(V; Z|U, X1) for case 2.
Proof of limN→∞ ∆ ≥ Re = min{I(X1; Z|U, V), I(X1; Y)} + I(V; Y|U, X1) − I(X1, V; Z|U) for case 1:

H(W1|Z^N)
≥ H(W1|Z^N, U^N)
= H(W10, W11|Z^N, U^N)
(a) = H(W11|Z^N, U^N)
= H(W11, Z^N|U^N) − H(Z^N|U^N)
= H(W11, Z^N, V^N, X1^N|U^N) − H(V^N, X1^N|W11, Z^N, U^N) − H(Z^N|U^N)
≥ H(Z^N, V^N, X1^N|U^N) − H(V^N, X1^N|W11, Z^N, U^N) − H(Z^N|U^N)
= H(V^N, X1^N|U^N) + H(Z^N|V^N, U^N, X1^N) − H(V^N, X1^N|W11, Z^N, U^N) − H(Z^N|U^N)
(b) = H(X1^N) + H(V^N|U^N) + H(Z^N|V^N, U^N, X1^N) − H(V^N, X1^N|W11, Z^N, U^N) − H(Z^N|U^N)
= H(X1^N) + H(V^N|U^N) − I(Z^N; X1^N, V^N|U^N) − H(V^N, X1^N|W11, Z^N, U^N), (A143)

where (a) follows from the fact that given U^N, W10 is uniquely determined, and (b) follows from the fact that X1^N is independent of V^N and U^N. Consider the first term in (A143): the codeword generation and [24, Lemma 3] ensure that

H(X1^N) ≥ N Rr − δ = N(min{I(X1; Z|U, V), I(X1; Y)} − ε) − δ, (A144)

where δ is small for sufficiently large N. For the second term in (A143), similarly we have

H(V^N|U^N) ≥ log 2^{N(L11+L12)} − δ1 = N I(V; Y|U, X1) − δ1, (A145)

where δ1 is small for sufficiently large N. For the third term in (A143), using the same approach as that in [20, Lemma 3], we get

I(Z^N; X1^N, V^N|U^N) ≤ N(I(X1, V; Z|U) + ε''), (A146)

where ε'' → 0 as N → ∞. Now, we consider the last term of (A143). Given W11, receiver 2 can do joint decoding.
• For the case that R11 ≤ L11: given U^N, Z^N, W11 and ε''' → 0+,

H(V^N, X1^N|W11, Z^N, U^N) ≤ N ε''' (A147)

is guaranteed if Rr ≤ I(X1; Z|V, U) − ε and Rr ≥ I(X1; Z|U) − ε (ε → 0+); this follows from the properties of the AEP (a similar argument is used in the proof of Theorem 3 in [29]). By using (A136), (A147) is obtained.

• For the case that L11 ≤ R11 ≤ L11 + L12: given U^N and W11, the total number of possible codewords of V^N is

N1 ≤ 2^{N L12} = 2^{N I(V;Z|U)}. (A148)

By using Fano's inequality and (A148), we have

H(V^N|W11, Z^N, U^N) ≤ N ε''', (A149)

where ε''' → 0. Given U^N, V^N and W11, the total number of possible codewords of X1^N is

N2 ≤ 2^{N Rr} = 2^{N(min{I(X1;Y), I(X1;Z|V,U)} − ε)}. (A150)

By using Fano's inequality and (A150), we have

H(X1^N|W11, Z^N, V^N, U^N) ≤ N ε'''', (A151)

where ε'''' → 0. By using (A149) and (A151),

(1/N) H(V^N, X1^N|W11, Z^N, U^N) ≤ ε → 0 (A152)

is guaranteed.
• For the case that L11 + L12 ≤ R11 ≤ L11 + L12 + L13: given U^N and W11, V^N is totally determined, and therefore

H(V^N|W11, Z^N, U^N) = 0. (A153)

Similarly, noting that Rr = min{I(X1; Z|U, V), I(X1; Y)} − ε and using Fano's inequality, we have (A151). Thus

(1/N) H(V^N, X1^N|W11, Z^N, U^N) ≤ ε → 0 (A154)

is guaranteed.
Substituting (A144), (A145), (A146) and (A147) (or (A152), (A154)) into (A143), and using the definition (4.10), we have limN→∞ ∆ ≥ Re = min{I(X1; Z|U, V), I(X1; Y)} + I(V; Y|U, X1) − I(X1, V; Z|U). The proof of case 1 is completed.
Proof of limN→∞ ∆ ≥ Re = I(V; Y|U, X1) − I(V; Z|U, X1) for case 2:

H(W1|Z^N)
≥ H(W1|Z^N, U^N, X1^N)
= H(W10, W11|Z^N, U^N, X1^N)
(a) = H(W11|Z^N, U^N, X1^N)
= H(W11, Z^N|U^N, X1^N) − H(Z^N|U^N, X1^N)
= H(W11, Z^N, V^N|U^N, X1^N) − H(V^N|W11, Z^N, U^N, X1^N) − H(Z^N|U^N, X1^N)
≥ H(Z^N, V^N|U^N, X1^N) − H(V^N|W11, Z^N, U^N, X1^N) − H(Z^N|U^N, X1^N)
= H(V^N|U^N, X1^N) + H(Z^N|V^N, U^N, X1^N) − H(V^N|W11, Z^N, U^N, X1^N) − H(Z^N|U^N, X1^N)
(b) = H(V^N|U^N) + H(Z^N|V^N, U^N, X1^N) − H(V^N|W11, Z^N, U^N, X1^N) − H(Z^N|U^N, X1^N)
= H(V^N|U^N) − I(Z^N; V^N|U^N, X1^N) − H(V^N|W11, Z^N, U^N, X1^N), (A155)

where (a) follows from the fact that given U^N, W10 is uniquely determined, and (b) follows from the fact that X1^N is independent of V^N and U^N. For the first term in (A155), the codeword generation ensures that

H(V^N|U^N) ≥ log 2^{N(L21+L22)} − δ1 = N I(V; Y|U, X1) − δ1, (A156)

where δ1 is small for sufficiently large N. For the second term in (A155), using the same approach as that in [20, Lemma 3], we get

I(Z^N; V^N|U^N, X1^N) ≤ N(I(V; Z|U, X1) + ε''), (A157)

where ε'' → 0 as N → ∞. Now, we consider the last term of (A155).
• For the case that R11 ≤ L21 + L22: given U^N, X1^N and W11, the total number of possible codewords of V^N is

N1 ≤ 2^{N L22} = 2^{N I(V;Z|U,X1)}. (A158)

By using Fano's inequality and (A158), we have

H(V^N|W11, Z^N, U^N, X1^N) ≤ N ε''', (A159)

where ε''' → 0.
• For the case that L21 + L22 ≤ R11 ≤ L21 + L22 + L23: given U^N, X1^N and W11, V^N is totally determined, and therefore

H(V^N|W11, Z^N, U^N, X1^N) = 0. (A160)
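Before the substitution step, it may help to spell out the arithmetic explicitly; combining (A155) with the bounds (A156), (A157) and (A159) gives

```latex
\frac{1}{N} H(W_1 \mid Z^N)
  \;\ge\; \frac{1}{N}\Big( H(V^N \mid U^N)
        - I(Z^N; V^N \mid U^N, X_1^N)
        - H(V^N \mid W_{11}, Z^N, U^N, X_1^N) \Big) \\
  \;\ge\; I(V; Y \mid U, X_1) - I(V; Z \mid U, X_1)
        - \frac{\delta_1}{N} - \epsilon'' - \epsilon''',
```

and letting N → ∞ drives δ1/N, ε'' and ε''' to zero, leaving exactly Re = I(V; Y|U, X1) − I(V; Z|U, X1).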

Substituting (A156), (A157) and (A159) (or (A160)) into (A155), and using the definition (4.10), we have limN→∞ ∆ ≥ Re = I(V; Y|U, X1) − I(V; Z|U, X1). The proof of case 2 is completed.
The proof of Theorem 11 is completed.

APPENDIX F
PROOF OF THEOREM 12

Theorem 12 is proved by considering the following two cases.
• (Case 1) If I(X1; Y) ≥ I(X1; Z|U), we allow receiver 1 to decode x1^N, while receiver 2 cannot decode it.
• (Case 2) If I(X1; Y) ≤ I(X1; Z), we allow both receivers to decode x1^N.
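The case split above can be illustrated numerically. The mutual-information values below are arbitrary toy numbers, not derived from any channel in the paper; the sketch only shows how the case test selects the decoding strategy for the relay codeword:

```python
def theorem12_case(i_x1_y, i_x1_z_given_u, i_x1_z):
    """Return which case(s) of the split above apply, given assumed
    mutual-information values I(X1;Y), I(X1;Z|U) and I(X1;Z)."""
    cases = []
    if i_x1_y >= i_x1_z_given_u:
        cases.append(1)  # receiver 1 decodes x1^N, receiver 2 does not
    if i_x1_y <= i_x1_z:
        cases.append(2)  # both receivers decode x1^N
    return cases

# Toy example: I(X1;Y)=1.2, I(X1;Z|U)=0.8, I(X1;Z)=1.0 -> only case 1 applies.
assert theorem12_case(1.2, 0.8, 1.0) == [1]
```

Note that the two conditions are not mutually exclusive, so for some channels both strategies are available and the better of the two achievable regions can be taken.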

1) Letting V2 = const and removing Re2, the proof of case 1 follows along the lines of the proof of Theorem 4, and is therefore omitted here.
2) By allowing both receivers to decode the relay codeword x1^N, the proof of case 2 follows directly from the proof of Theorem 4 and the proof of case 2 in Theorem 11; thus, the proof is omitted here.
The proof of Theorem 12 is completed.

REFERENCES

[1] C. E. Shannon, "Communication theory of secrecy systems," The Bell System Technical Journal, vol. 28, pp. 656-714, 1949.
[2] A. D. Wyner, "The wire-tap channel," The Bell System Technical Journal, vol. 54, no. 8, pp. 1355-1387, 1975.
[3] S. K. Leung-Yan-Cheong and M. E. Hellman, "The Gaussian wire-tap channel," IEEE Trans. Inf. Theory, vol. IT-24, no. 4, pp. 451-456, July 1978.
[4] N. Merhav, "Shannon's secrecy system with informed receivers and its application to systematic coding for wiretapped channels," IEEE Trans. Inf. Theory, special issue on Information-Theoretic Security, vol. IT-54, no. 6, pp. 2723-2734, June 2008.
[5] R. Ahlswede and N. Cai, "Transmission, identification and common randomness capacities for wire-tap channels with secure feedback from the decoder," book chapter in General Theory of Information Transfer and Combinatorics, LNCS 4123, pp. 258-275, Berlin: Springer-Verlag, 2006.
[6] L. Lai, H. El Gamal and H. V. Poor, "The wiretap channel with feedback: encryption over the channel," IEEE Trans. Inf. Theory, vol. IT-54, pp. 5059-5067, 2008.
[7] E. Ardestanizadeh, M. Franceschetti, T. Javidi and Y. Kim, "Wiretap channel with secure rate-limited feedback," IEEE Trans. Inf. Theory, vol. IT-55, no. 12, pp. 5353-5361, December 2009.
[8] C. Mitrpant, A. J. Han Vinck and Y. Luo, "An achievable region for the Gaussian wiretap channel with side information," IEEE Trans. Inf. Theory, vol. IT-52, no. 5, pp. 2181-2190, 2006.
[9] Y. Chen and A. J. Han Vinck, "Wiretap channel with side information," IEEE Trans. Inf. Theory, vol. IT-54, no. 1, pp. 395-402, January 2008.
[10] Y. Liang, G. Kramer, H. V. Poor, and S. Shamai, "Compound wiretap channels," in Proceedings of the 45th Annual Allerton Conference on Communication, Control and Computing, USA, 2007.
[11] Y. Liang, G. Kramer, H. V. Poor, and S. Shamai, "Recent results on compound wiretap channels," in Proceedings of the IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), France, 2008.
[12] T. Liu, V. Prabhakaran, and S. Vishwanath, "The secrecy capacity of a class of parallel Gaussian compound wiretap channels," in Proceedings of the IEEE International Symposium on Information Theory (ISIT), Canada, 2008.
[13] A. Khisti, A. Tchamkerten, and G. Wornell, "Secure broadcasting," IEEE Trans. Inf. Theory, vol. IT-54, pp. 2453-2469, 2008.
[14] P. Wang, G. Yu, and Z. Zhang, "On the secrecy capacity of fading wireless channel with multiple eavesdroppers," in Proceedings of the IEEE International Symposium on Information Theory (ISIT), France, 2007.
[15] H. Yamamoto, "Coding theorem for secret sharing communication systems with two noisy channels," IEEE Trans. Inf. Theory, vol. IT-35, pp. 572-578, 1989.
[16] H. Yamamoto, "A coding theorem for secret sharing communication systems with two Gaussian wiretap channels," IEEE Trans. Inf. Theory, vol. IT-37, pp. 634-638, 1991.
[17] M. Cagalj, S. Capkun, and J. P. Hubaux, "Key agreement in peer-to-peer wireless networks," Proceedings of the IEEE, vol. 94, pp. 467-478, 2006.
[18] N. Cai, A. Winter, and R. W. Yeung, "Quantum privacy and quantum wiretap channels," Problems of Information Transmission, vol. 40, no. 4, pp. 318-336, 2004.
[19] N. Cai and R. W. Yeung, "Secure network coding," in Proceedings of the IEEE International Symposium on Information Theory (ISIT), 2002.
[20] I. Csiszár and J. Körner, "Broadcast channels with confidential messages," IEEE Trans. Inf. Theory, vol. IT-24, no. 3, pp. 339-348, May 1978.
[21] J. Körner and K. Marton, "General broadcast channels with degraded message sets," IEEE Trans. Inf. Theory, vol. IT-23, no. 1, pp. 60-64, January 1977.
[22] R. Liu, I. Maric, P. Spasojevic and R. D. Yates, "Discrete memoryless interference and broadcast channels with confidential messages: secrecy rate regions," IEEE Trans. Inf. Theory, vol. IT-54, no. 6, pp. 2493-2507, June 2008.
[23] J. Xu, Y. Cao, and B. Chen, "Capacity bounds for broadcast channels with confidential messages," IEEE Trans. Inf. Theory, vol. IT-55, no. 6, pp. 4529-4542, 2009.
[24] Y. Liang and H. V. Poor, "Multiple-access channels with confidential messages," IEEE Trans. Inf. Theory, vol. IT-54, no. 3, pp. 976-1002, March 2008.
[25] E. Tekin and A. Yener, "The Gaussian multiple access wire-tap channel," IEEE Trans. Inf. Theory, vol. IT-54, no. 12, pp. 5747-5755, December 2008.
[26] E. Tekin and A. Yener, "The general Gaussian multiple access and two-way wire-tap channels: achievable rates and cooperative jamming," IEEE Trans. Inf. Theory, vol. IT-54, no. 6, pp. 2735-2751, June 2008.
[27] E. Ekrem and S. Ulukus, "On the secrecy of multiple access wiretap channel," in Proc. Annual Allerton Conf. on Communications, Control and Computing, Monticello, IL, September 2008.
[28] Y. Liang, A. Somekh-Baruch, H. V. Poor, S. Shamai, and S. Verdú, "Capacity of cognitive interference channels with and without secrecy," IEEE Trans. Inf. Theory, vol. IT-55, pp. 604-619, 2009.
[29] L. Lai and H. El Gamal, "The relay-eavesdropper channel: cooperation for secrecy," IEEE Trans. Inf. Theory, vol. IT-54, no. 9, pp. 4005-4019, September 2008.
[30] Y. Oohama, "Coding for relay channels with confidential messages," in Proceedings of the IEEE Information Theory Workshop, Australia, 2001.
[31] E. Ekrem and S. Ulukus, "Secrecy in cooperative relay broadcast channels," IEEE Trans. Inf. Theory, vol. IT-57, pp. 137-155, 2011.
[32] G. Kramer, M. Gastpar and P. Gupta, "Cooperative strategies and capacity theorems for relay networks," IEEE Trans. Inf. Theory, vol. IT-51, pp. 3037-3063, 2005.
[33] Y. Liang, H. V. Poor and S. Shamai, "Secure communication over fading channels," IEEE Trans. Inf. Theory, vol. IT-54, pp. 2470-2492, 2008.
[34] C. Nair and A. El Gamal, "An outer bound to the capacity region of the broadcast channel," IEEE Trans. Inf. Theory, vol. IT-53, pp. 350-355, 2007.
[35] T. M. Cover and A. El Gamal, "Capacity theorems for the relay channel," IEEE Trans. Inf. Theory, vol. IT-25, pp. 572-584, 1979.
[36] K. Marton, "A coding theorem for the discrete memoryless broadcast channel," IEEE Trans. Inf. Theory, vol. IT-25, pp. 306-311, 1979.
[37] S. I. Gelfand and M. S. Pinsker, "Capacity of a broadcast channel with one deterministic component," Problems of Control and Information Theory, vol. 16, pp. 17-25, 1980.
[38] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems. London, U.K.: Academic, 1981.
[39] A. A. El Gamal and E. C. van der Meulen, "A proof of Marton's coding theorem for the discrete memoryless broadcast channel," IEEE Trans. Inf. Theory, vol. IT-27, pp. 120-122, 1981.
[40] S. Lall, "Advanced Topics in Computation for Control," lecture notes for Engr210b, Stanford University, Stanford, CA.