IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 7, JULY 2012
On the Generalized Degrees of Freedom of the Gaussian Interference Relay Channel

Anas Chaaban, Student Member, IEEE, and Aydin Sezgin, Member, IEEE
Abstract—The symmetric two-user Gaussian interference relay channel (IRC) is studied from a generalized degrees of freedom (GDoF) perspective. While it is known that the relay does not increase the DoF of the IRC, such a characterization has not been reported for the GDoF yet. The focus of this paper is on all cases where the interference link is stronger than the link from the source to the relay. This regime covers half the space of all possible parameters of the IRC. By using genie-aided approaches, new sum-capacity upper bounds are derived. These bounds are then compared with rates achieved by a novel transmission scheme, which is based on a functional decode-and-forward (FDF) strategy. It is shown that the GDoF of the IRC is achieved by FDF in the given regime, and that a relay can indeed increase the GDoF of the IRC. Finally, the FDF scheme is compared with other schemes such as decode-and-forward and compress-and-forward at low, moderate, and high signal-to-noise ratios.

Index Terms—Functional decode-and-forward (FDF), generalized degrees of freedom (GDoF), interference relay channel (IRC), lattice strategies, sum-capacity bounds.
Manuscript received August 25, 2010; revised February 03, 2012; accepted February 24, 2012. Date of publication March 22, 2012; date of current version June 12, 2012. This work was supported by the German Research Foundation, Deutsche Forschungsgemeinschaft (DFG), Germany, under Grant SE 1697/3. The material in this paper was partly presented at the 44th Annual Asilomar Conference on Signals, Systems, and Computers, 2010 [14], the ISIT, 2012 [1], and the SPAWC, 2012 [2]. The authors were with the Emmy-Noether Research Group on Wireless Networks, TAIT, Ulm University, 89081 Ulm, Germany. They are now with the Department of Electrical Engineering and Information Technology, Ruhr-Universität Bochum, 44801 Bochum, Germany (e-mail: [email protected]; [email protected]). Communicated by S. Jafar, Associate Editor for Communications. Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TIT.2012.2191712

I. INTRODUCTION

THE potential and limitations of two ingredients of wireless networks, namely interference and relaying, have been the focus of a large number of investigations by the research community, particularly in recent years. In this paper, we consider an elemental wireless network with two transmitters, their respective receivers, and one dedicated relay node. Due to the broadcast nature of the wireless channel, the signals from the undesired transmitter cause interference and limit the performance in comparison to interference-free communication. The relay node is deployed with the aim of improving the performance of the network in terms of coverage and achievable rates. The resulting setup is known as the interference relay channel (IRC). This setup has been studied in different variants, such as the IC with a full-duplex causal relay [1]–[3] and the IC with a cognitive relay [4]–[8]. In both variants, the impact of
relaying on the system performance was analyzed by studying upper bounds and achievable rate regions. It is worth noting that the capacity of the IRC remains an open problem in general. Several recent works study special cases of the IRC, e.g., strong/weak source-relay links and strong interference. For instance, in [2], new upper bounds were developed for the IC with a potent relay, i.e., a relay that has no power constraint. Clearly, an IC with a potent relay provides an upper bound for the IRC with a power constraint at the relay. The upper bounds given in [2] cover the case of weak interfering and source-relay links, and the case of strong interfering links. In [9], an achievable scheme for the IRC that uses block-Markov encoding at the sources and decode-and-forward at the relay was proposed. In [3], an achievable scheme similar to that in [9] was studied, with an additional component, namely rate splitting at the sources. The performance of this scheme was analyzed for the case when the source-relay links are strong, so that decode-and-forward at the relay does not limit the achievable rates. The IRC with strong interference was studied in [1], where a new upper bound was given and compared to an achievable rate. Given the difficulty of the problem, one way to gain insight into the behavior of the IRC is to resort to an approximate characterization of the sum capacity. The degrees of freedom (DoF) analysis, although rather coarse, provides such an approximation, which becomes tight at asymptotically high signal-to-noise power ratios (SNR). Interestingly enough (and rather counterintuitive at first sight), it was shown in [10] that relaying does not increase the DoF of the IRC. In other words, the DoF of the IRC is the same as that of the IC, i.e., 1. Note that the DoF is a special case of a more general metric, the so-called generalized degrees of freedom (GDoF) [11].
The GDoF is a much more powerful metric, as it allows different signal strengths and thus captures a large variety of scenarios. One question which immediately arises is whether the insight obtained for the DoF holds true for the GDoF as well, or whether the relay has benefits in terms of GDoF. In order to answer this question, we seek a GDoF characterization for the symmetric Gaussian IRC. The major milestones of this characterization can be summarized as follows.

A. Upper Bounds

We establish new upper bounds on the sum capacity of this setup based on genie-aided approaches. Namely, four new bounds are given. As we show in this paper, the given bounds are GDoF-tight and hence characterize the GDoF of the IRC for many cases. All the upper bounds we provide are given in closed form; that is, no optimization of the bounds is required, as is needed for the bound in [1], for instance.
B. Lower Bounds

We also establish new sum-capacity lower bounds for the IRC.

1) A new functional decode-and-forward (FDF) [13] strategy is devised. This strategy uses nested-lattice coding [14], [15] to enable the relay to decode a sum of the transmitted codewords. A cooperation strategy is then established by forwarding an index which refers to the decoded sum of transmitted codewords. This scheme is also combined with rate splitting and backward decoding. Two variants of the FDF scheme are given, one for the weak interference case and the other for the strong interference case. The sum-capacity lower bound obtained from this scheme is close to the provided upper bounds. In fact, we show that FDF achieves the GDoF of the IRC for half the space of all possible channel parameters, namely, for all cases where the source-relay link is weaker than the interference link. The other case is left open and is to be considered in future work.

2) Another lower bound that we analyze is obtained via a classical decode-and-forward (DF) scheme [3], [9]. This scheme combines superposition block-Markov encoding and rate splitting at the sources, DF at the relay, and Willems' backward decoding at the destinations [16]. Superposition block-Markov encoding is used to establish cooperation between the users and the relay. Rate splitting is used to combat interference, as in the IC [11].

3) Another classical lower bound is obtained by combining compress-and-forward (CF) [19]–[22] with rate splitting. Two variants of CF are considered, namely, CF with forward decoding (also stated in [3]) and CF with backward decoding. These schemes can outperform the DF scheme in some cases, especially when the channels from the sources to the relay are weak.

C. GDoF

Given the obtained upper and lower bounds, we consider the GDoF as an approximation of the sum capacity.
The bounds reveal that FDF achieves the GDoF of the IRC for all cases where the interference link is stronger than the link from the source to the relay. This regime covers half the space of all possible channel parameters. We cannot exclude the optimality of FDF for the opposite case in general; investigating that case, however, is beyond the scope of this paper. The main insights of this paper are as follows.

1) While a relay does not increase the DoF of the IRC, it does increase its GDoF.

2) While cooperation is essential in general for achieving the GDoF, it turns out that in some cases the GDoF is achieved without using the full power of the relay, or even with the relay switched off.

Of particular interest is the weak interference regime, since in this case (given that the source-relay links are weaker than the interference links) the source-relay channel is very weak. This might suggest that the benefits of the relay are limited here; however, we show that even in this case, the IRC has a larger GDoF than the IC.
We conclude this paper with a numerical comparison of the three schemes (FDF, DF, and CF) for some cases with high, moderate, and low SNR. While FDF outperforms the other schemes at asymptotically high SNR, such a simple conclusion cannot be made in the other regimes.

The rest of this paper is organized as follows. In Section II, we introduce the notation used in this paper and define the Gaussian IRC. We summarize our results in Section III. In Section IV, we state previously known upper bounds. New sum-capacity upper bounds for the IRC are given in Section V. Section VI describes the FDF transmission scheme and gives its achievable sum rate. Then, in Section VII, we characterize the GDoF of the IRC for a wide range of channel parameters. We introduce the DF and CF schemes in Section VIII and compare their performance with FDF. Finally, we conclude in Section IX.

II. NOTATION AND MODEL DEFINITION

A. Notation

Throughout this paper, we use the following notation. We use $x^n$ to denote the length-$n$ sequence $(x_1, \dots, x_n)$, and $x_i^n$ to denote the sequence $(x_i, \dots, x_n)$. A sequence is i.i.d. if its components are independent and identically distributed. We denote by $C(x)$ the quantity $\frac{1}{2}\log_2(1+x)$, and by $C^+(x)$ the quantity $\max\{0, C(x)\}$. $\mathcal{N}(\mu, \sigma^2)$ is used to denote a Gaussian distribution with mean $\mu$ and variance $\sigma^2$.

B. System Setup

We consider a symmetric Gaussian IRC as shown in Fig. 1. Transmitter $i$, $i \in \{1, 2\}$, has a message $W_i$, a random variable uniformly distributed over the set $\{1, \dots, 2^{nR_i}\}$. Each transmitter needs to communicate its message to its respective receiver. Transmitter $i$ encodes its message into an $n$-symbol codeword, each symbol of which is a real-valued random variable, and transmits this codeword. At time instant $k$, the input-output equations of this setup are given by¹
The coefficients represent the real-valued channel gains as shown in Fig. 1. The relay is causal, which means that the $k$th symbol of its transmit signal is a function of the previous observations at the relay, i.e.,

(1)

The source and relay signals must satisfy a power constraint given by

(2)

¹In order to simplify the exposition of this paper, we restrict ourselves to the study of the symmetric IRC.
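The symmetric input-output relations described above can be sketched as follows. This is a minimal illustration, not the paper's exact notation: the gain names `h_d` (direct), `h_c` (cross), `h_r` (source-relay), and `h_0` (relay-destination) and their numeric values are assumptions chosen for the example.

```python
import random

# Hypothetical symmetric channel gains (see Fig. 1 for the actual notation):
h_d, h_c, h_r, h_0 = 1.0, 0.5, 0.8, 0.7  # direct, cross, source-relay, relay-dest.

def irc_outputs(x1, x2, xr, noise=lambda: random.gauss(0.0, 1.0)):
    """One channel use of the symmetric Gaussian IRC (assumed form)."""
    y1 = h_d * x1 + h_c * x2 + h_0 * xr + noise()  # receiver 1
    y2 = h_c * x1 + h_d * x2 + h_0 * xr + noise()  # receiver 2 (symmetric)
    yr = h_r * x1 + h_r * x2 + noise()             # relay observation
    return y1, y2, yr

# Noiseless check of the linear part of the model:
y1, y2, yr = irc_outputs(1.0, -1.0, 2.0, noise=lambda: 0.0)
```

Note that the relay sees the *sum* of the two source signals through equal gains, which is exactly the structure the FDF scheme of Section VI exploits.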
Fig. 1. Two-user interference relay channel.
Remark 1: This model also covers the case when the relay has a different power than the other two transmitters, since that case can be modeled as an IRC with equal power at all nodes and with the relay channel gains adjusted accordingly. The receivers' additive noise is Gaussian with zero mean and unit variance.
After receiving its channel output, receiver $i$ uses a decoder to detect $W_i$. The message sets, encoders, and decoders define a code for the IRC. The performance of the code is measured by its error probability, defined as the probability of erroneous decoding averaged over all messages. A rate pair $(R_1, R_2)$ is said to be achievable if there exists a sequence of codes such that the average probability of error approaches zero as the code length grows. The capacity region of the IRC is defined as the closure of the set of achievable rate pairs. The sum capacity is defined as the maximum achievable sum rate
where the maximum is taken over all achievable rate pairs. This quantity, the sum capacity $C_\Sigma$, is the main focus of this paper, together with the GDoF (a high-SNR approximation of $C_\Sigma$). Before going into the details, we summarize the main result in the following section.
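The rate function and the high-SNR normalization used throughout can be checked numerically. The sketch below assumes the standard convention for real Gaussian channels, $C(x) = \frac{1}{2}\log_2(1+x)$, and normalizes sum rates by $\frac{1}{2}\log_2(\mathrm{SNR})$; these are common conventions, not necessarily the paper's exact symbols.

```python
import math

def C(x):
    # Standard AWGN rate function for real-valued signals (assumed convention).
    return 0.5 * math.log2(1 + x)

snr = 1e12  # a large but finite SNR, standing in for the high-SNR limit

# Interference-free sum rate of two parallel point-to-point links,
# normalized by (1/2)log2(SNR):
d_sum = 2 * C(snr) / (0.5 * math.log2(snr))
# d_sum approaches 2, the interference-free sum DoF; with interference,
# the 2-user IC drops to a sum DoF of 1, as recalled in the text.
```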
III. SUMMARY OF THE MAIN RESULT

The main result of this paper regarding the GDoF of the IRC is given in this section. Let us first define some parameters necessary for expressing the GDoF.

Definition 1: Let the following variables represent the strengths of the different channels (as in [11]): the interference, source-relay, and relay-destination links are each measured by the ratio of their log-received-power to that of the direct link. With these parameters held fixed, define the GDoF as the limit

$$d \triangleq \lim_{\mathrm{SNR} \to \infty} \frac{C_\Sigma(\mathrm{SNR})}{\frac{1}{2}\log_2(\mathrm{SNR})}.$$

The GDoF of the IRC, in the regime where the source-relay link is weaker than the interference link, is given by

(3)

The statement is obtained from the sum-capacity upper and lower bounds that we provide in this paper. Briefly, we obtain a GDoF upper bound from the cut-set bounds and from new genie-aided bounds: the first two lines in (3) follow from the cut-set bounds, and the rest from our new bounds. The achievability of this GDoF upper bound is proved by using a novel FDF scheme described in Section VI. This scheme establishes cooperation between the users and the relay by allowing the relay to forward the sum of its received codewords, a process enabled by nested-lattice codes.

IV. KNOWN UPPER BOUNDS

Several upper bounds for the IRC exist. In this section, we present only the cut-set bounds, which are necessary for the GDoF characterization pursued in this study. The cut-set bound [21] is given in the following lemma.

Lemma 1 ([1]): The achievable rates in the IRC are bounded by the region

(4)

(5)

(6)

maximized over all distributions of the inputs satisfying the power constraints, with the two source inputs independent. The given cut-set bound should be maximized over all such distributions of the triple of input variables. However, it can be noticed that the mutual information expressions given previously [(4)–(6)] are maximized by the Gaussian distribution. Then, using a Gaussian distribution for the inputs, evaluating the bounds (4) and (5), and maximizing them over the set of covariance matrices of the input triple satisfying the power constraints, we obtain the following simple sum-capacity upper bounds.
Theorem 1: The sum capacity of the IRC is upper bounded by

(7)

(8)

Proof: The proof is given in Appendix A.

Other upper bounds on the sum capacity of the IRC exist. For instance, Maric et al. [1] tightened the first sum-rate term in the cut-set upper bound (6), obtaining a new upper bound. This bound is derived by giving one receiver enough additional information to construct a less noisy version of the other receiver's received signal. In this way, it is guaranteed that this receiver, given its received signal and the additional genie information, is able to decode both messages, and thus an upper bound on the sum rate can be obtained. Other new bounds were derived by Tian et al. [2] using a potent-relay approach (a relay with no power constraint). Clearly, any rate pair achievable in the IRC is achievable in the IC with a potent relay; thus, the capacity of the IC with a potent relay serves as an upper bound for the capacity of the IRC. Finally, we remark that Cadambe et al. [10] showed that relaying does not increase the DoF of the X-channel. Since the IC can be viewed as a special case of the X-channel, relaying also does not increase the DoF of the IC. Since the IC has 1 DoF, it follows that the IRC also has 1 DoF.

For the purpose of this study, i.e., studying the GDoF of the IRC, more bounds are required in addition to the cut-set bounds in Theorem 1. In the following section, we introduce some new upper bounds that are instrumental for the characterization of the GDoF of the IRC.

V. NEW UPPER BOUNDS

The first upper bound we derive is important for the IRC with strong interference and for some cases in the weak interference regime. This bound is obtained by using a genie-aided method illustrated in Appendix B.

Theorem 2: The achievable sum rate of the IRC is upper bounded by

(9)

Proof: See Appendix B.

It can be noticed that the bounds in Theorem 2 have a structure similar to that of the Z-bound of the IC (or the one-sided IC) [11]; namely, they have the same general form. This kind of bound is useful for characterizing the GDoF in the strong interference scenario, and in some subregimes of the weak interference scenario (as in the IC), as we show in the following sections. If we specialize the X-channel upper bound given in [10] to the IRC, we obtain a bound which is a special case of the bound in Theorem 2: it becomes the same as (9) in one special case of the channel parameters, while otherwise Theorem 2 provides a tighter bound.

Next, we provide another upper bound based on a different genie-aided method.

Theorem 3: The sum capacity of the IRC is upper bounded by

Proof: The proof is provided in Appendix C.

Additional bounds are still required for characterizing the GDoF of the IRC. A sum-capacity upper bound, inspired by the weak interference upper bound of the IC in [11] but appropriately adapted to the IRC, is given in the following theorem.

Theorem 4: The achievable sum rate of the IRC is upper bounded by

(10)

Moreover, under an additional condition on the channel parameters, the bound tightens to

(11)

Proof: See Appendix D.

Note that when one particular term dominates this bound, it has the same behavior as the sum capacity of the IC with noisy interference [23]–[25]. In fact, we show in this paper that treating interference as noise (as in the IC) is GDoF-optimal in the IRC in some cases.

Finally, the last upper bound that we need in this paper is presented in the following theorem. This upper bound is also useful in the IRC with weak interference.

Theorem 5: The sum rate of the IRC is upper bounded by

Proof: The proof is given in Appendix E.

So far, we have presented the cut-set upper bounds and our new upper bounds. These bounds collectively characterize the GDoF of the IRC in the considered regime, as we show in the next sections. In order to test the tightness of these upper bounds, we need to
compare them with lower bounds obtained by considering transmission schemes for the IRC. Due to the complicated nature of the given problem, which combines the IC and the relay channel, both of which have unknown capacity in general, we resort to an approximate characterization of the sum capacity. One such approximation is provided by the DoF of the network, which is known from previous results to be 1 [10]. While the DoF provides interesting insights into the behavior of the system, the GDoF [11] is a much more powerful metric, as it allows different signal strengths and thus captures a large variety of scenarios. The focus of this paper is thus on the characterization of the GDoF. In fact, we show that the bounds we provide (Theorems 2–5), in addition to those obtained from the cut-set bounds (7) and (8), are GDoF-optimal as long as the source-relay link is weaker than the interference link. More details on this follow in Section VII.

VI. FUNCTIONAL DF

Various transmission schemes can be employed in the IRC. For instance, an IC-type HK scheme [22] which ignores the relay can be used. This scheme can be very close to optimal if the channels to/from the relay are very weak. Better performance might be obtained if we use the relay in a DF or CF fashion. Moreover, combinations of these schemes can also be used, leading to numerous possibilities; see, for instance, schemes that combine cooperative and noncooperative strategies in the context of the IC with source or destination cooperation in [19] and [20]. Besides these classical schemes, new ideas can be applied to the IRC to construct possibly more capable schemes. Namely, one can use nested-lattice coding and lattice alignment to establish a cooperation strategy between the relay and the users. This scheme will be denoted "functional decode-and-forward" (FDF), using the terminology in [13]. We give the achievable rates of two variants of FDF next; the details of the schemes are relegated to the Appendix.
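The linearity that FDF relies on can be illustrated with a toy one-dimensional nested-lattice example, far simpler than the actual coding of Appendix F. All parameters here (a fine lattice of integers, a coarse lattice of multiples of 5) are hypothetical choices for illustration only.

```python
# Toy nested lattices: fine lattice = integers Z, coarse lattice = 5Z.
# The codebook is the set of fine-lattice points in one coarse cell.
COARSE = 5
codebook = list(range(COARSE))          # {0, 1, 2, 3, 4}

def lattice_quantize(y):
    """Nearest fine-lattice point (here: nearest integer)."""
    return round(y)

# Two cooperative-public codewords and a noisy sum observed at the relay:
c1, c2 = 3, 4
y_relay = c1 + c2 + 0.3                  # received sum plus small noise
s = lattice_quantize(y_relay) % COARSE   # decoded modulo-sum: (3 + 4) mod 5

# Linearity: the modulo-sum of two codewords is again a codeword, so the
# relay can forward an index for it without decoding c1 or c2 individually.
assert s in codebook

# A receiver that knows the sum s and its own codeword c1 can extract c2:
c2_hat = (s - c1) % COARSE
```

The last step is exactly the interference-extraction property used by both FDF variants below: one decoded CP signal plus the relayed sum reveals the other CP signal.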
In FDF, the relay does not decode the transmitted signals of the users individually, but rather a linear combination thereof. This is rendered possible by lattice codes, which have the property of linearity: the sum of two codewords is a codeword.² Additionally, rate splitting is used at the transmitters such that each message is split into 1) a cooperative-public (CP) message;³ 2) a common (C) message; and 3) a private (P) message. The nomenclature CP is due to [20] and is used since the CP message is a public message (decoded by both users) and is cooperative in the sense that the relay is used for communicating this message. This message is encoded using lattices that align at the relay in such a way that the relay can decode the sum of the CP signals from transmitters 1 and 2. The C and P messages do not benefit from the relay, i.e., they are the same as the C and P messages in the IC in [11]. The relay decodes a linear combination of the CP signals, which are encoded using nested-lattice codes, and forwards it to the receivers in the next transmission block. The receivers then use backward decoding to extract their desired signals, in addition to some interfering signals that can be decoded as a by-product of this scheme (the CP and C messages of the interfering transmitter).

Starting from the last transmission block, receiver 1, for instance, decodes the relay signal, thus obtaining the sum of the CP signals. Then decoding proceeds backward to the previous block. In each block, the receiver attempts to decode either 1) the relay signal and the desired CP signal, or 2) the relay signal and the interfering CP signal. We call the former the weak interference FDF (WI-FDF) scheme, since it is more useful in the weak interference regime, and the latter the strong interference FDF (SI-FDF) scheme, since it is more useful in the strong interference regime. Notice that when a receiver has already decoded the relay signal (a linear combination of CP signals) and one of the CP signals (desired or undesired), it can extract the other CP signal (see [15]) and cancel its contribution from the received signal. This will prove very useful in the given scheme and allows us to obtain GDoF results for the IRC. Next, the receiver decodes the C messages jointly, subtracts their contribution, and then decodes its desired private message as in [11]. More details on this scheme are provided in Appendix F. The following sum rate is achieved by the WI-FDF scheme.

Theorem 6 (WI-FDF): The sum rate given by (12)–(19) at the bottom of the next page is achievable.

In this theorem, the scheme splits the message of each user into several CP parts in addition to the C and P parts, which will help us achieve a higher GDoF, as we see later (see Section VII-B). Each of the CP messages is encoded using a nested-lattice code. The P and C messages are encoded using Gaussian random codes. The relay decodes the sums of the nested-lattice codewords of the two users successively (see Table I illustrating this process); this gives the rate constraint (17). The relay then maps the decoded information to a message which is split into two parts, encodes each of them using a Gaussian random code, and sends them in the next block. The relay messages are cooperative messages that are meant to be decoded at both destinations.

Receiver 1 starts decoding from the last block and proceeds backward. Consider any block, and assume that the decoding process in the following block was successful. Then receiver 1 already knows the relay messages and the CP sums of that block. Having this in mind, decoding proceeds

²A statement to be made more precise in the Appendix.

³The CP message is also split into several parts, as detailed later.
TABLE I
SIGNALS SENT BY TRANSMITTER 1 AND BY THE RELAY, AND THOSE DECODED BY THE RELAY IN EACH BLOCK. TRANSMISSION BY TRANSMITTER 1 TAKES PLACE ONLY IN THE FIRST BLOCKS, WHERE IT SENDS PRIVATE, COMMON, AND COOPERATIVE-PUBLIC SIGNALS. THE RELAY DECODES THE SUM OF THE CP SIGNALS; THE ARROWS SHOW THE ORDER OF THE DECODING PROCESS. THE DECODED SUMS ARE RE-ENCODED AND FORWARDED IN THE NEXT BLOCK.
TABLE II
DECODING AT THE FIRST RECEIVER, SHOWING THE INTERFERENCE CANCELLATION OF THE COOPERATIVE-PUBLIC SIGNALS.
as follows (see Table II illustrating the decoding process). In each block, the relay message is decoded first, which gives the first term in the rate constraint (19). The contribution of this message in the received signal is canceled, and then the receiver decodes its desired CP messages. After decoding each of them, the corresponding interfering CP message is calculated from the known sum, and its contribution is canceled from the received signal. Thus, after decoding the desired CP messages, all interference from the undesired CP messages is canceled. This decoding can be
(12)–(19): rate constraints of Theorem 6.
done reliably if constraint (18) is satisfied. Next, the second relay message is decoded, which gives the second term in the rate constraint (19). The C messages are decoded jointly afterward, which gives (13)–(15). Finally, the private message is decoded, reliably so as long as (12) holds.

Remark 2: Note that we have used the same strategy for encoding and decoding the P and C messages as for the IC in [11]. This results in our scheme being at least as good as that in [11]: by switching the relay off and setting the powers of the CP signals to zero, our scheme reduces to the HK scheme of [11]. In WI-FDF, we have fixed the decoding order so that the relay and CP messages are decoded first, and the C and P messages are decoded last. Different decoding orders can also be used; however, we stick to this order in this paper because it turns out to be GDoF achieving.

Remark 3: The splitting of the relay message into two parts provides flexibility in the decoding order of the relay messages and the CP messages. Allocating all relay power to the first part is equivalent to decoding the relay message first and the CP messages afterward; allocating it all to the second part gives the opposite decoding order. If both parts have nonzero power, we have an intermediate strategy where one part of the relay message is decoded before the CP messages and the other part afterward.

Recall that in the WI-FDF scheme, we have forced the receiver to decode its desired CP messages first and then use the cooperation information, i.e., the relay messages, to extract the interfering CP signal and cancel it. Alternatively, the receiver can start by decoding the interfering CP signal, i.e., use SI-FDF, and then, given the relay messages, extract its desired CP messages. This gives the following alternative achievable rates.

Theorem 7 (SI-FDF): The sum rate given by (20)–(26), shown at the bottom of the page, is achievable.
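The backward-decoding bookkeeping shared by both FDF variants can be sketched schematically. The sketch below abstracts the channels away entirely, representing CP codewords by indices modulo a coarse-lattice size; block counts and seed are hypothetical.

```python
import random

COARSE = 5   # toy coarse-lattice size (hypothetical)
B = 4        # number of transmission blocks (hypothetical)

random.seed(1)
# CP codeword indices of the two users; transmission occurs in blocks 1..B-1.
cp1 = [random.randrange(COARSE) for _ in range(B - 1)]
cp2 = [random.randrange(COARSE) for _ in range(B - 1)]

# The relay decodes the modulo-sum in block b and forwards it in block b+1:
relay = [None] + [(cp1[b] + cp2[b]) % COARSE for b in range(B - 1)]

# Receiver 1, backward decoding (WI-FDF order): in each block it decodes the
# relay word and its own desired CP word, then extracts the interfering one.
decoded_interference = []
for b in range(B - 1, 0, -1):
    s = relay[b]                   # relay message of block b (decoded first)
    c1 = cp1[b - 1]                # desired CP word, decoded from Y1
    c2_hat = (s - c1) % COARSE     # interfering CP word, extracted from the sum
    decoded_interference.append(c2_hat)

# Proceeding backward recovers all of user 2's CP interference exactly:
assert decoded_interference[::-1] == cp2
```

SI-FDF follows the same bookkeeping with the roles of `c1` and `c2_hat` swapped: the interfering word is decoded from the channel and the desired one is extracted from the sum.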
Remark 4: One can also include a private message in the SI-FDF scheme. However, for the purpose of this paper, the private messages are not necessary; thus, we have removed them.

The achievability of these rates can be verified by using the same scheme as in Theorem 6, decoding the CP interference first instead of the desired CP messages. This was a brief discussion of our FDF scheme; more details are given in Appendix F. To examine the performance of the FDF scheme, one has to carefully choose the power allocations, plug them into the FDF rate constraints, and compare with the upper bounds. This is done in the next section, where we discuss the GDoF of the network.

VII. GDOF

Using the channel-strength parameters of Definition 1, we start by transforming the bounds in Theorem 1 into GDoF bounds, which gives

(27)
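The mechanical step behind this transformation — writing the interference power as a power of the SNR and dividing the capacity bound by $\frac{1}{2}\log_2(\mathrm{SNR})$ — can be checked numerically. The example bound below, $\frac{1}{2}\log_2(1+\mathrm{SNR}+\mathrm{INR})$, is a generic stand-in, not one of the paper's specific bounds (27)–(35).

```python
import math

def gdof_of_bound(bound, alpha, snr=1e12):
    """Evaluate a capacity bound at INR = SNR**alpha, normalized for GDoF."""
    inr = snr ** alpha
    return bound(snr, inr) / (0.5 * math.log2(snr))

# A generic sum-rate bound of the form (1/2)log2(1 + SNR + INR):
b = lambda snr, inr: 0.5 * math.log2(1 + snr + inr)

d_weak = gdof_of_bound(b, 0.5)    # close to max(1, 0.5) = 1
d_strong = gdof_of_bound(b, 1.5)  # close to max(1, 1.5) = 1.5
```

At finite SNR the ratio only approximates the limit; the approximation error vanishes as the SNR grows, which is the sense in which each bound in Theorems 1–5 "translates" to a GDoF bound.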
(20)–(26): rate constraints of Theorem 7.
Fig. 2. GDoF of the IRC for different values of the channel parameters. The GDoF of the IC is also shown (dash-dotted) for comparison. Notice the GDoF gain obtained by using a relay. The plots in (b) and (c) are particularly interesting: there the source-relay link is very weak, but the relay still increases the GDoF compared to the IC. The GDoF of the shaded area is not characterized in this paper.
Similarly, the second bound in Theorem 1 yields

(28)

Now consider the bounds from Theorem 2. The first bound provides the GDoF bound

(29)

Similarly, the second and third bounds in Theorem 2 become

(30)

(31)

The bounds in (29)–(31) can be combined into one bound given by

(32)

Now consider the bound in Theorem 3. This bound translates to the GDoF bound

(33)

Similarly, the bounds in Theorems 4 and 5 yield

(34)

(35)

In this paper, we focus on the case where the source-relay link is weaker than the interference link. With this in mind, we collect the bounds (27), (28), and (32)–(35) and obtain the following theorem.

Theorem 8: The GDoF of the IRC in this regime is given by the expression in (3).

The converse of this theorem is clearly given by the GDoF upper bounds presented in this section. We discuss the achievability in what follows. Before we start, let us restate the GDoF of the IC given in [11].
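For reference, the IC sum GDoF of [11] (the well-known "W-curve") can be written down directly. The sketch below uses the standard expression from the literature, with `alpha` the ratio of the interference level to the direct-signal level; it is a reconstruction of the known result, not the paper's own formula.

```python
def ic_sum_gdof(alpha):
    """Sum GDoF of the symmetric two-user IC (the Etkin-Tse-Wang W-curve)."""
    if alpha <= 0.5:
        d = 1 - alpha        # weak: treat interference as noise
    elif alpha <= 2 / 3:
        d = alpha            # moderately weak
    elif alpha <= 1:
        d = 1 - alpha / 2    # Han-Kobayashi regime
    elif alpha <= 2:
        d = alpha / 2        # strong interference
    else:
        d = 1                # very strong: interference-free per-user DoF
    return 2 * d             # two users

# Endpoints of the W-curve: no interference, equal-strength, very strong.
assert ic_sum_gdof(0) == 2 and ic_sum_gdof(1) == 1 and ic_sum_gdof(2) == 2
```

Comparing this curve with Theorem 8 is exactly how the figures below visualize the relay's GDoF gain.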
Fig. 3. GDoF of the IRC for fixed relay-link parameters. For part of the parameter range, the GDoF is the same as that of the IC.

Lemma 2 ([11]): The GDoF of the IC is given by the well-known "W-curve" of [11].

This GDoF is achievable using the HK transmission scheme [22]. The result was shown in [11] by fixing the decoding order of the HK scheme and carefully choosing the powers of the private and common signals. Clearly, this scheme can also be used to achieve the same GDoF in the IRC as in the IC, and it turns out that this is GDoF-optimal in the IRC in some cases. For example, if the relay links are weak enough, the GDoF in Theorem 8 becomes the same as that in Lemma 2 (see Fig. 3). As a result, HK achieves the GDoF of the IRC in this case, and the relay can be switched off without any impact on the GDoF of the IRC. As we shall see next, there are more cases with this property. Consequently, for proving the achievability of Theorem 8, we need only consider the remaining cases.

Remark 5: Since the FDF scheme can achieve any GDoF that is achievable by the HK scheme, whenever we say that HK achieves the GDoF of the IRC, it follows that FDF also achieves it.

A. Weak Interference

The GDoF for the weak interference (WI) IRC can be written as

(36)

(37)

(38)

(39)

In order to simplify this WI GDoF bound, we subdivide the weak interference regime into three cases, WI-1, WI-2, and WI-3, according to the interference strength. Keep in mind that in the first two cases, i.e., WI-1 and WI-2, we only need to consider the cases not already covered above, since otherwise the GDoF is the same as that of the IC, achieved by the HK scheme as discussed previously.

1) WI-1: By evaluating the WI GDoF bounds (36)–(39) for this case, we get a GDoF
which is the same as that of the IC. In Fig. 3, the portion of the GDoF curve that coincides with the IC curve belongs to this case WI-1. Here, the HK scheme suffices to achieve the GDoF; as a result, in case WI-1 the relay can be switched off without any impact on the GDoF of the network.

2) WI-2: In this case, the relay can play an important role in achieving the GDoF of the IRC, which is larger than that of the IC. In particular, we use the scheme described in Section VI, whose achievable rate is given in Theorem 6. Let us first write the WI GDoF upper bound for this case, shown in Fig. 4. For convenience, we express this upper bound as

(40)

Remark 6: One of the subcases of (40) is impossible here, since its defining condition contradicts the WI-2 assumptions.

The first and last cases in (40) can be achieved using the HK scheme; again, HK is GDoF-optimal in these cases. We achieve the upper bound in the second case by using the HK scheme in one subregime, and otherwise by using the WI-FDF scheme with an appropriate choice of the number of CP message parts and of the power allocation, tailored separately to each of the remaining subcases. Details on how to show this are given in Appendix G.

It is notable that in this case the relay does not use its full power to achieve the GDoF. In fact, as can be seen in Fig. 5, if we increase the power used by the relay, we decrease the GDoF achievable by FDF. The reasoning behind the choice of the power allocation is as follows. First, the private signal power is set in such a way that it arrives below the noise level at the undesired receiver (as in the IC [11]). Now we observe the difference between the total desired signal power and the desired private signal power, in logarithmic scale, at the receiver. This is given by
Fig. 4. GDoF of the IRC for different values of the channel parameters. The GDoF has three different trends, shown in panels (a)-(c). The GDoF of the IC is also shown (dash-dotted) for comparison.
Fig. 5. An example of the power levels of the transmitted and received signals at receiver 1, and the corresponding successive decoding steps: each signal is decoded while treating the not-yet-decoded signals as noise, and its contribution is subtracted before the next step. Notice that if we increase the power used by the relay, the achievable CP message GDoF decreases, since the noise seen when decoding the CP signal is higher, leading to a lower total GDoF. (a) Power levels of transmitted and received signals at receiver 1. (b) Decoding steps.
or equivalently, when normalized by
, we have
Our aim now is to send the CP signal such that we are still able to perform interference cancellation by proceeding backward from one transmission block to another. Thus, we need to be
Fig. 6. GDoF of the IRC for different values of the channel parameters. The GDoF of the IC is also shown (dash-dotted) for comparison.
able to decode the relay signal and the CP signal reliably at the destinations. This interval of size should be divided in such a way that maximizes the achievable GDoF for decoding the CP signal and the relay signal. At best, we can assign to each of them [see Fig. 5(a)]. In this case, the relay signal power as received at the receiver is
We first rewrite this bound for convenience as (41). In the first two cases in (41), we set the WI-FDF parameters to . If we set , then
which is the desired relay signal power as seen at the receiver. In this case, a CP GDoF of is achievable only if for the relay to be able to decode the sum of the CP messages reliably. Moreover, we must have
If these conditions hold, then we can achieve
otherwise, we achieve
which leads to the power allocation in the other two cases, (ii) and (iii), given previously in this section. A graphical illustration of the decoding steps is shown in Fig. 5(b). 3) WI-3: In this case, we have . Here again, we need WI-FDF (Theorem 6) to achieve the GDoF of the network. The GDoF upper bound for the case WI-3 is given by , with i) , and to achieve ; and ii) , and to achieve . The GDoF in this case is shown in Fig. 6. The same reasoning for the power allocation as in the WI-2 case applies here. This is where the choice of and becomes important, which is explained as follows. Why split the CP message into parts: in this discussion, we refer to Fig. 7, which captures the general idea and can be extended to other cases. The figure refers to a case where the GDoF is (case (i) above). The choice of the powers of the CP signals leads to
that is, after decoding , while decoding the th CP message, the interference power from the ( )th desired CP message is exactly the same as that of the th interfering CP message. This
Fig. 7. An example with the CP message split into two parts. (a) Power levels of the transmitted and received signals at receiver 1; at the receiver side, the interval between the private-signal level and the relay-signal level is divided into two parts, one assigned to the relay signal and one to the desired CP signals. (b) Decoding starts with the first desired CP signal, which is decoded while treating the other signals as noise; its contribution, and that of the corresponding interfering CP signal (reconstructed from the decoded relay signal), are then subtracted. (c) The second desired CP signal is decoded next, then the relay signal, and finally the private signal. By splitting the CP message into two pieces, it can be decoded as if there were no CP interference, yielding a higher total GDoF.
can be seen in Fig. 7(a) (receiver side), where both and arrive at the same power at receiver 1. Thus, after decoding , the th desired CP message can be decoded with a GDoF of as if the only interference came from the ( )th interfering CP message. Notice that without CP message splitting, that is all we could achieve. With splitting, however, once the th desired CP message is decoded, we can cancel the interference of the th interfering CP message,4 and then proceed to decode the ( )th desired CP message, achieving another . Now we ask: how many such CP messages can we have? In other words, what is the largest number of CP message splits that we should choose to increase the achievable GDoF? To answer this question, we first have to choose the
4Recall that in block , the receiver knows the sum of the desired and interfering CP signals from the decoded relay signal in block , which can be used together with the desired CP signal for interference cancellation.
power of the relay signal . This is done as follows. The WI-FDF is based on decoding two observations of the same signal, namely, the desired CP signal and its sum with the interfering one. To maximize the GDoF achievable by the CP messages, we try to divide the interval (from the power of the private signal to the power of the relay signal, as seen at the receiver) into two equal parts: one which is assigned to the relay signals and the other assigned to the desired CP signals, as shown in Fig. 7(a). Since only the relay signal can occupy the interval , which is allocated to , what remains for is
Thus, the signal
must be received with power
in order to be decoded with a GDoF of while treating the P signal as noise. The quantity is equivalent to , leading to . Now, the th CP message can have a nonzero GDoF if its power is larger than the power of the relay signal . As shown in Fig. 7(c), the power of is larger than that of , which allows achieving a nonzero . That is, it is required that . Using , we get . Therefore, the largest that we can choose is . This guarantees that while decoding the th desired CP message, the strongest interferer is the relay signal , since this gives , which follows from the choice of . In fact, is chosen as the largest number such that the received power of the th desired CP signal is larger than and the received power of the th interfering CP signal is smaller than . Following these guidelines, by splitting the CP message , the first CP messages have . The last one, , is in general less than , and depends on . In the third and last cases in (41), the common messages become necessary. If we set , and i) and to achieve ; ii) and to achieve . Details on how to show this are given in Appendix H. As a result, we have shown the achievability of the GDoF for the weak interference scenario with .
B. Strong Interference
We now consider the strong interference (SI) case , still with . We start by writing the GDoF upper bound for this case: . Here again, notice that if , then this upper bound becomes the same as that in Lemma 2, and the HK scheme is again GDoF-optimal. Thus, we need only consider the cases where . In order to simplify the proof, we split SI into two cases: 1) SI-1: ; and 2) SI-2: . 1) SI-1: If , then the GDoF upper bound becomes , which can be written as (42). Let us start with the first case in (42). Here, we need to achieve
This can be done by using the SI-FDF scheme with the following parameters:
In the second case, we need to achieve , which is possible with SI-FDF by setting and , and i) to achieve if ; and ii) and to achieve . 2) SI-2: If , then the GDoF upper bound becomes , which can be written as (43) with , , and . Observe that the first and second cases are always achievable in the case of SI by using the HK scheme. In the second and third cases, we use SI-FDF with , which achieves the upper bound. The achievability of the upper bound with the given parameter setup can be shown using the same method as in Appendix H. Several plots of the GDoF of the IRC with strong interference are shown in Fig. 8. With this, we end our discussion on the GDoF of the IRC for the case , or equivalently, . Since the GDoF is a high-SNR metric, let us see how the FDF scheme performs at low SNR, in comparison to two classical schemes: DF and CF.
VIII. COMPARISON WITH CLASSICAL SCHEMES
Let us first give the achievable rates of the DF scheme and the CF scheme. In Appendix I, we illustrate the DF scheme, which is a restricted version of the DF scheme in [3]. Namely, we restrict the decoding order such that the common messages are decoded before the private message.
A. DF
The DF scheme achieves the following sum rate.
Theorem 9: The sum rate is achievable, where and satisfy the rate constraints for decoding at the relay: (44) and for decoding at the receivers:
for all and . At high SNR, where the system is interference limited, the advantage of the FDF scheme over DF is obvious. The DF scheme suffers from a bottleneck at the relay: the achievable rate is always upper bounded by (44)
which gives a GDoF constraint of
Thus, it is not possible to achieve a GDoF higher than , which, in contrast, is achievable by FDF [e.g., the achievability of (43)]. Thus, FDF is clearly superior to DF.
B. CF
Another cooperative strategy that can be used in relay networks is the compress-and-forward strategy. In this strategy, the relay compresses its received observation and maps it to an index; it then uses a channel code to send this index to the receivers. This scheme was considered in ICs with source/destination cooperation in [20]-[22] and in the IRC in [3]. Two variants of this scheme exist, namely, CF with forward decoding (CFF) and CF with backward decoding (CFB). This scheme is described in Appendix J, and it achieves the rates given in the following theorems.
Fig. 8. GDoF of the IRC with strong interference for several values of the channel parameters. The GDoF of the IC is also shown (dash-dotted) for comparison. In all cases, the GDoF gain obtained by using the relay is apparent.
Theorem 10: CFF achieves
where is a Gaussian noise, independent of all other variables, with variance
where and (45)
such that
(46)
, where Theorem 11: CFB achieves and are bounded as in (45)–(47), with and given in (48) shown at the bottom of the page,
(47)
and
.
(48)
Fig. 9. Achievable sum rates of the different schemes in comparison with the sum-capacity upper bound, as a function of .
The analysis of the CF scheme for the IRC, in both its variants, is cumbersome and thus omitted. The FDF scheme is comparatively easier to analyze and more useful for drawing conclusions about the GDoF of the setup. Note that FDF has "selectivity" in forwarding, in the sense that the relay forwards only a part of the transmitted codewords. This is helpful especially in the cases where it is better not to forward everything
at the relay. In contrast, CF does not have this property: the relay has to forward (a compression index of) everything it observes.
C. Comparison
With the achievable sum rates of the two classical schemes, DF and CF, and that of the FDF scheme at hand, we can proceed to compare their performance in terms of sum rate. We start with a high-SNR evaluation. Fig. 9 shows the sum-capacity upper
Fig. 10. Achievable sum rates of the different schemes in comparison with the sum-capacity upper bound, as a function of .
and lower bounds as a function of for two cases of an IRC with , . In Fig. 9(a), we show the bounds for weak interference and weak relay channels. We can see the effect of the relay bottleneck in the DF scheme. DF performs very poorly if is weak. CF performs better than DF, and furthermore, CFF achieves rates very close to the upper bound. The achievable rate of WI-FDF and SI-FDF is also shown for two cases ( and
). WI-FDF performs better than CFF, and is therefore closer to the upper bound. It can be seen that the achievable rate of the WI-FDF scheme increases if we increase . SI-FDF (which is not meant to be used in the weak interference scenario) achieves lower rates than WI-FDF. Fig. 9(b) shows a scenario with strong interference. Here, the performance of CF and SI-FDF dominates the others. Although is stronger than in this example, the DF scheme still
suffers from the relay bottleneck. At lower values of , CFF is close to optimal. As increases, CFF is outperformed by SI-FDF and CFB. We can see that SI-FDF achieves rates that are also close to the upper bound, and outperforms the other schemes for a wide range of . Notice that WI-FDF and CFF saturate to the very strong IC sum capacity when (around 13 in this example), while SI-FDF and CFB increase further. A comparison of the achievable schemes at low to moderate SNR is shown in Fig. 10 as a function of . The first plot, Fig. 10(a), shows the bounds for an IRC with weak relay and cross channels. In this example, we can see that the performance is dominated by either WI-FDF or CFF. These two schemes have similar performance at low SNR, and different performance as the SNR increases. In the second plot, Fig. 10(b), we show an example with weak and strong . The performance of DF improves, compared to that in Fig. 10(a), since is larger. However, its performance is still worse than that of FDF and CF. FDF and CFF have nearly the same performance in this example.
IX. CONCLUSION
We have studied the symmetric Gaussian IRC. We gave new sum-capacity upper bounds for this setup. These bounds improve on previously known bounds for the IRC. Furthermore, we studied the achievable rates in this setup. A new scheme for the IRC, denoted FDF, was proposed. We have shown that FDF achieves the GDoF of the IRC in all cases where the channel from the transmitters to the relay is weaker than the cross link (from a transmitter to its undesired receiver). This provides the GDoF characterization of the IRC for half the space of all possible channel parameters (characterizing the GDoF of the remaining cases is ongoing work). It is known that the relay does not increase the DoF of the network (the IC in this case). However, we have shown that it does increase its GDoF. The IRC can have a higher GDoF than the IC in both the weak and the strong interference scenarios.
Additionally, we compared the performance of FDF with DF and CF numerically. FDF outperforms DF and CF in many cases. In some cases, the performance of FDF and CF is nearly the same. It is interesting to know whether the GDoF upper bounds given in this paper are achievable also if the channel from the sources to the relay is stronger than the cross link, and what is the optimal scheme in this case. This will be the topic of a future paper. APPENDIX A PROOF OF THEOREM 1 Consider the first term in the cut-set bounds (4), given by
This term has the form of an MISO channel bound, and can be maximized as follows:
where follows since conditioning does not increase entropy and follows by maximizing the entropy using and with a correlation coefficient between and equal to . Now consider the second term in (4), which has the form of an SIMO channel bound given by
Similarly, we can maximize this as follows:
Similarly, we can obtain the following bounds for
:
which proves the bounds given in Theorem 1. APPENDIX B PROOF OF THEOREM 2 The proof is based on a genie-aided approach. To get the first bound in the theorem, give , and (49) to receiver 2 as side information. Then we can bound
by Fano's inequality, where we write as and as . Next, we can write (50), where follows since messages and are independent. Now consider the first term in (50), i.e., . This can be bounded as
(51) by the chain rule and since conditioning does not increase entropy. Define (52) (53)
and consider next the second term in (50). Since conditioning does not increase entropy, this can be bounded as
which we write for convenience as follows:
(57) (54)
Now we investigate the terms of (57). For the first term, we have
where the last equality follows since is conditionally independent of given and . Consider now the third term in (50). This can be bounded as shown next: (58) where follows since the Gaussian distribution maximizes the differential entropy, with and , i.e., and are correlation coefficients. Notice that has to be fulfilled so that the covariance matrix of is positive semidefinite. Step follows by maximizing over such that . Similarly, for the second term in (57), we can show that
(55) Step follows since, knowing , we can construct using (1), and step follows since conditioning does not increase entropy. Finally, we write the fourth term in (50) as
(59)
where in we used and to denote the random variables and when the input is with , which maximizes the conditional differential entropy [26], and follows since the function in (59) is increasing in . Similarly, we can show that the third term in (57) is bounded by (56)
Plugging (51), (54), (55), and (56) into (50), we get
(60) Collecting (58)-(60) and letting , we get the desired bound
For , we can obtain the second bound in Theorem 2, (10), by enhancing the second receiver [20] where the noise is replaced by , and then proceeding with the same steps aforementioned. The third bound in Theorem 2, (11), is obtained similarly by enhancing receiver 1 where the noise is replaced by , where . APPENDIX C PROOF OF THEOREM 3 and To derive the bound in Theorem 3, we give as side information to receivers 1 and 2, respectively. Then, using Fano’s inequality and the chain rule, we have
Therefore,
which follows by using the Gaussian distribution for to maximize the upper bound. Now, by letting
and , we get
If , then the bound can be obtained by enhancing receiver 2 by replacing the noise by , and proceeding as earlier. APPENDIX D PROOF OF THEOREM 4 where
as
Before we proceed with the proof, we use the fact that the first symbol of is independent of and due to causality (1), to obtain the following lemma.
. Then, we proceed as follows:
Lemma 3: Let
. Then, it holds that
Proof: Using the chain rule, we have where , , and are defined in (49), (52), and (53), which follows since knowing , we can construct by (1). Then, since conditioning does not increase entropy, we obtain
Consider . Knowing , we can construct as follows:
Having , we are able to obtain using (1). Now knowing , we can construct , and so on. Using this induction, we can construct all symbols of . Thus, is conditionally independent of given . So , where follows since [21]. Now assume that . Then, we can write
where and are independent, which is possible since implies that the variance of is larger than that of . Then we can write the following inequality: . Step also follows since, knowing , we can also construct . This proves the statement of the lemma.
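The causality induction used in this proof, reconstructing the relay's transmit symbols step by step from strictly earlier observations, can be mimicked by a toy sketch (our own illustration; `relay_fn` is a hypothetical encoding function, not the paper's): each relay symbol is a deterministic function of the previously received symbols, so the first symbol depends on nothing.

```python
# Toy causal reconstruction: the relay's transmit symbol in slot i is a known
# deterministic function of its past received symbols, as imposed by causality (1).
def relay_fn(past):                        # hypothetical relay encoding function
    return 0.5 * sum(past)

def reconstruct_relay_inputs(received, n):
    """Rebuild the relay inputs by induction: symbol 1 depends on nothing,
    and symbol i depends only on received[0..i-2]."""
    xs = []
    for i in range(n):
        xs.append(relay_fn(received[:i]))  # uses strictly past observations only
    return xs

obs = [1.0, 2.0, 4.0]
xs = reconstruct_relay_inputs(obs, 3)
assert xs[0] == 0.0                        # first relay symbol is observation-independent
assert xs == [0.0, 0.5, 1.5]
```

Given the past observations (here supplied by side information), the whole relay sequence is recovered; this is exactly why conditioning on the observations determines the relay signals in the proof.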
Now we can proceed with the proof of Theorem 4. The proof starts by giving the genie signals and to receivers 1 and 2, respectively, where and are as in [11]
with and being Gaussian noises, independent of all other random variables, and with variance
Then, by using Fano's inequality, with as , we have . Additionally, since , we can write
(61) where the last step follows since conditioning does not increase entropy. Notice that a jointly Gaussian distribution which factors as maximizes (61). Hence, letting ,
where follows from the independence between and on the one hand, and on the other hand [due to causality (1)]. Now consider the quantity . It holds that
where is Gaussian with zero mean and covariance matrix , for , , and correlation coefficients . By evaluating this bound, we get , where follows from the independence of and all other random variables; follows by conditioning on defined in Lemma 3, which does not increase entropy; follows by using Lemma 3; and follows by using similar arguments as in [25, Lemma 6], where is i.i.d. Gaussian with zero mean and variance . Similarly,
Finally, we can write
follows since all the random variables in the expression are independent of and from the definition of , and follows since and are independent. Thus, from (62) and (63) we obtain
where follows since and follows by maximizing over . As a result, we obtain the desired bound
Similarly
where
APPENDIX E PROOF OF THEOREM 5 Give the genie-side information receivers 1 and 2, respectively, where
Then, we bound
as
. Then
Now, we proceed as in [20] to obtain and
to
by using Fano’s inequality as follows:
where
. We proceed
(62) as . Now consider the third term in (62). where This can be expressed as follows:
But knowing , we can construct by (1). Then, we obtain the equality in (63) shown at the bottom of the page where
(64)
(63)
Similarly
(65) Next, we have since and are independent. So (66) since conditioning does not increase entropy. Thus, we obtain the following expression: , which is the sum capacity of the multiple-access channel from both transmitters to the relay [27]. And similarly (67). By collecting (64)-(67) and letting , we get , which proves the statement of the theorem.
APPENDIX F
FUNCTIONAL DF
In this appendix, we describe our FDF transmission scheme. Before describing the scheme, preliminaries on lattice codes [28] are required. We start with a brief introduction to lattice codes, before proceeding to describe the achievable scheme.
1) Lattice Codes: An -dimensional lattice is a subset of such that , i.e., it is an additive subgroup of . The fundamental Voronoi region of is the set of all points in whose distance to the origin is smaller than that to any other lattice point. Thus, by quantizing points in to their closest lattice point, all points in are mapped to the all-zero vector. In this study, we need nested-lattice codes. Two lattices are required for nested-lattice codes: a coarse lattice which is "nested" in a fine lattice , i.e., . We denote a nested-lattice code using a fine lattice and a coarse lattice by the pair . The codewords are chosen as the fine lattice points that lie in . The number of fine lattice points that lie in is given by the ratio of the volume of to , which determines the rate of the code. The power of the code is determined by the second moment of the coarse lattice . It is shown that a nested-lattice code achieves the capacity of the point-to-point AWGN channel [29]. In the sequel, we are going to need the following result from [30]. Consider two nodes A and B, with messages and , respectively, where both messages have rate . The two nodes use the same nested-lattice codebook with second moment to encode their messages into codewords and , respectively, of length . The nodes then construct their transmit signals and as (68) and (69), where and are -dimensional dither vectors [15] uniformly distributed over , known at the relay and at nodes A and B. A relay node receives , where is an additive white Gaussian noise with i.i.d. components of zero mean and variance .
Lemma 4 ([30]): The relay can decode the sum from reliably as long as
Lemma 5 ([30]): Node A, knowing and , can extract and hence also . Now we can proceed with the description of our transmission scheme. Consider a block of transmission , where for some . 2) Message Splitting: User 1 wants to send a message . It starts by splitting the message into three parts: 1) a cooperative-public (CP) [20] message ; 2) a common (C) message ; and 3) a private (P) . The CP message refers to the message used for cooperation with the relay. As we will notice later, each receiver is able to decode the interfering CP message as a by-product of this scheme. Next, the CP message is subdivided into CP submessages , . Thus, the set of messages to be sent from the first transmitter becomes
The rates of these messages are denoted , respectively. 3) Encoding: The P message is encoded into a length- codeword using a Gaussian random code with power and rate into . That is, is i.i.d. . Similarly, the C message is encoded using a Gaussian code with power and rate into .
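The encoding step above can be sketched as a superposition of independently generated codeword components whose powers must respect the total power budget (a toy sketch of our own; the power values are illustrative, and Gaussian samples stand in for both the random Gaussian codewords and the dithered lattice codewords used for the CP submessages):

```python
import random

def build_transmit_signal(n, p_private, p_common, p_cp_list, p_total):
    """Superpose per-message codeword components (private, common, and the
    CP submessages), checking that the allocated powers fit the budget."""
    assert p_private + p_common + sum(p_cp_list) <= p_total + 1e-9, "power budget exceeded"
    components = [[random.gauss(0.0, p ** 0.5) for _ in range(n)]
                  for p in (p_private, p_common, *p_cp_list)]
    return [sum(samples) for samples in zip(*components)]  # component-wise sum

x = build_transmit_signal(8, p_private=0.2, p_common=0.3, p_cp_list=[0.25, 0.25], p_total=1.0)
assert len(x) == 8
```

Here the illustrative powers 0.2 + 0.3 + 0.25 + 0.25 = 1.0 exactly exhaust the budget; in the actual scheme, the power split across the private, common, and CP components is dictated by the GDoF analysis of Section VII.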
Each CP message is encoded into using a nested-lattice code [as in (68)], with rate and power . Thus
where is a random dither uniformly distributed over . We choose the powers so that . Moreover, in order to satisfy the power constraint, we set
The same is done at transmitter 2, where the same nestedlattices are used. Notice that this enables the relay to decode the sum
with some rate constraint that we specify next. Transmitter 1 then sends the sum of all codewords as
Similar encoding is done at transmitter 2. This is done for all blocks . The transmitters do not send any messages in the last block , which incurs some rate loss. However, this loss becomes negligible if we consider a large number of blocks . 4) Relay Processing: Decoding at the relay starts at the end of block where the relay decodes the sum
starting with and ending with . Decoding is done successively, where at each decoding step, the interference from already decoded signals is removed (see [31] for more details). Decoding this sum of codewords is possible as long as (see Lemma 4)
for all , noise.
,
,
. While decoding , , and with
Observe that the set of all possible values of has size
The relay maps the vector of all , , i.e., , into one message , where the message set has a size which is equal to the size of the Cartesian product of all , i.e.,
This relay message is then split into and with rates and , respectively. The message is to be decoded after decoding the CP messages. However, if the relay-destination channel is strong , then there is room for decoding a part of (namely ) before the CP messages, allowing us to achieve higher rates than if both and are decoded after the CP messages. These messages are then encoded to and , which are Gaussian codes with powers and , respectively, such that . These codewords are sent in block . Notice that the relay does not send any signal in block 1. 5) Decoding: The receivers wait until the end of block , where decoding starts. At the end of block , receiver 1 receives only
since the transmitters do not send in this block. Then, and are decoded successively in this order, which is reliably possible if (71) and (72). Now, by decoding and , the receiver knows . Consider next block . The received signal at receiver 1 is given by
Remark 7: The channel between the sources and the relay is similar to the doubly dirty MAC [31], except for the fact that the relay does not need to decode the individual messages, but only a function thereof. It is thus possible to increase the rate constraints at the relay if we encode the CP messages against "self" interference using the lattice DPC scheme in [31]. For instance, transmitter 1 can encode against the interference caused by , and similarly at transmitter 2, which achieves higher rates than (70). However, this will not be necessary for the purpose of this paper.
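The relay operation described above (Lemma 4) and the extraction step (Lemma 5) can be illustrated in one dimension (our own toy: a fine lattice (1/4)Z and a coarse lattice Z replace the high-dimensional dithered nested lattices of [30], and dithers are omitted): the relay quantizes its observation to the fine lattice and reduces modulo the coarse lattice, recovering the modulo-sum of the two codewords without decoding them individually; a node that knows its own codeword can then extract the other one.

```python
# 1-D nested-lattice toy: fine lattice (1/4)Z nested in the coarse lattice Z.
FINE = 0.25

def quantize_fine(x):
    """Quantize to the nearest fine-lattice point."""
    return round(x / FINE) * FINE

def mod_coarse(x):
    """Reduce modulo the coarse lattice Z (offset to the nearest integer)."""
    return x - round(x)

def relay_decode_sum(y):
    """Lemma 4 (toy version): map the noisy observation to the modulo-sum."""
    return mod_coarse(quantize_fine(y))

t1, t2 = 0.25, 0.75            # two fine-lattice codewords
y = t1 + t2 + 0.05             # received superposition plus small noise
s = relay_decode_sum(y)
assert s == mod_coarse(t1 + t2)            # the relay recovers (t1 + t2) mod 1

# Lemma 5 (toy version): knowing t1 and the modulo-sum, extract t2 modulo Z.
t2_hat = mod_coarse(s - t1)
assert t2_hat == mod_coarse(t2)
```

The small noise is absorbed by the fine-lattice quantizer; in the actual scheme this is only reliable up to the rate constraint of Lemma 4, which the toy does not model.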
Now, the receiver decodes the messages in this order:
Since the message is first decoded while treating the other signals as noise, this leads to the rate constraint
Notice that the rate constraints (73) and (75) are more binding than (71) and (72) which will be ignored. Now since we have
(73) Next, the receiver decodes while treating the other signals as noise. Thus, we have the following rate constraint:
The next step is to perform interference cancellation. Since the receiver now knows both and (decoded in block ), it can extract from (see Lemma 5). It thus removes its contribution from . After canceling the interference from the first interfering CP signal, the second desired CP signal is decoded while treating the other signals as noise. Next, the second interfering CP signal is canceled, and so on. This continues until all CP messages are decoded, leading to the rate constraint given in (74), shown at the bottom of the page. At this stage, since the first receiver knows the messages , , and , for , the received signal can be reduced to
we can write
(80)
As long as (80) is satisfied, there exists a split of into and which achieves a total CP rate as in (80), namely, setting to be equal to the first term in (80) and to the second one. Then decoding proceeds backward until block 1 is reached, where , , and are decoded and, as a by-product, and are also obtained. The same is done at the second receiver. Collecting the resulting bounds (70), (73), (74), (75), (76)-(78), and (79), we obtain the desired lower bound given in Theorem 6. APPENDIX G GDOF ACHIEVABILITY FOR CASE WI-2
Now, is decoded, with the rate constraint (75).
We only discuss the achievability of here. The achievability of and can be shown similarly. Let us start by writing the achievable rates given in Theorem 6 for the setting
Then, messages and are decoded jointly, in a MAC fashion, leading to the rate constraints [11] (76) (77)
We get (81)
(78) Finally, the receiver decodes its desired private message , and thus, must be bounded by (79)
(82)
(83)
(74)
(84) Now let us write (81) as
We then write the expressions of the achievable rates, given in Theorem 6, for the given setting, keeping only the relevant expressions: (90) (91)
where follows from the definition of . Therefore, a private rate
(92)
(93) is also achievable since it is less than (81). Thus, the “private" GDoF (as in [11])
(94)
(85) is achievable. By using the same procedure with the remaining expressions (82)–(84) we obtain
(95)
(86) (87)
(96)
(88) Finally, the upper bound is dominated by if and , in which case we obtain
(97)
(98) since (88) dominates (86) and (87). Hence, by combining (85) and (88), the achievable GDoF for both users becomes
. Let us now analyze these rate constraints. First, from (90) we obtain (99) From (91) and (92) we obtain
APPENDIX H GDOF ACHIEVABILITY FOR CASE WI-3 We discuss the achievability of the third case in (41), i.e., (89)
Since the bound (89) dominates if then and and hence
All other cases can be shown similarly. We repeat the parameters of the WI-FDF scheme in this case, given as
and
,
(100) Now let us consider the CP rates. By examining (94) we can conclude that (101) is achievable and similarly from (95) (102)
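The CP rates analyzed here come from the relay decoding a sum of lattice codewords (the FDF strategy). A standard benchmark for this operation is the rate for decoding the sum of two equal-power lattice codewords over a Gaussian MAC, 0.5·log2(1/2 + SNR), as in the lattice literature [17], [31]. The sketch below is our own illustration of that benchmark, not the paper's exact constraints (101)–(105); it checks that decoding the sum loses at most half a bit against the point-to-point rate, and hence costs nothing in GDoF:

```python
import math

def p2p_rate(snr):
    """Point-to-point AWGN rate 0.5 * log2(1 + SNR)."""
    return 0.5 * math.log2(1.0 + snr)

def sum_decoding_rate(snr):
    """Per-user rate when a relay decodes the sum of two equal-power
    lattice codewords (standard lattice/compute-and-forward benchmark)."""
    return 0.5 * math.log2(0.5 + snr)

for snr_db in (0.0, 20.0, 60.0):
    snr = 10.0 ** (snr_db / 10.0)
    gap = p2p_rate(snr) - sum_decoding_rate(snr)
    # The gap never exceeds half a bit and vanishes as SNR grows,
    # so both rates have the same GDoF.
    assert 0.0 <= gap <= 0.5
```

The vanishing gap at high SNR is exactly why lattice sum-decoding constraints such as (101)–(105) can support the full GDoF claimed in this appendix.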
Now check (96). Our choice of CP signal powers gives an expression that follows directly from the choice of the scheme parameters; thus, (103) is achievable. Similarly, from (97) we observe that (104) is achievable. Finally, from (98) we get (105). Consequently, by combining (101)–(105), we get (106). From (99), (100), and (106), we obtain the desired achievable GDoF, which is achieved by using the given parameters for the WI-FDF scheme.

APPENDIX I
DF

1) Transmitter Processing: Each transmitter sends its messages in a window of B blocks. If a rate pair is achievable in a block of n symbols, then over B blocks a fraction of this rate pair is achieved, a fraction which approaches one for large B. In each block, the message of each transmitter is split into two parts: a private (P) message, to be decoded only by the respective receiver, and a common (C) message, to be decoded by both receivers. The P and C messages are encoded, using a Gaussian random code, into two independent i.i.d. sequences with the corresponding powers. The transmit signal in each block is then a scaled superposition of the private and common codewords of the current and the previous block. At the second transmitter, a similar procedure is applied. The power constraint is satisfied if the powers of the superposed codewords sum to at most the power budget.

2) Relay Processing: The relay decodes in a forward manner, starting from block 1, where it decodes the P and C messages of both transmitters. The rate constraints for reliable decoding at the relay are given in (107)–(114). After decoding the P and C messages, the relay re-encodes them into fresh codewords.
The relay then transmits the sum of these codewords in the following block. The power constraint at the relay is satisfied if the sum of their powers does not exceed the relay's power budget.
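Since all codewords here are mutually independent Gaussian sequences, the power of a superposition is simply the sum of the component powers, which is what the power constraints at the transmitters and the relay amount to. A minimal numerical sanity check of this bookkeeping (our own notation; the private/common power split below is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000                    # block length (illustrative)
P = 10.0                       # power budget
P_p, P_c = 0.3 * P, 0.7 * P    # hypothetical private/common power split

# Independent Gaussian codewords, superposed into the transmit signal.
x_p = np.sqrt(P_p) * rng.standard_normal(n)
x_c = np.sqrt(P_c) * rng.standard_normal(n)
x = x_p + x_c

# The empirical power of the superposition matches P_p + P_c = P,
# i.e., independent components use up exactly the sum of their powers.
emp_power = float(np.mean(x ** 2))
assert abs(emp_power - P) / P < 0.02
```

The same additivity argument applies at the relay, where the re-encoded P and C codewords are superposed before transmission.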
3) Receiver Processing: As a result, the received signal at receiver 1 in each block is a superposition of the transmit signals and the relay signal. The receivers use Willems' backward decoding [16] to decode the signals, starting from the last block. Each receiver decodes both common messages and its private message; the decoding then proceeds to the previous block after subtracting the already decoded signals. Thus, in each block
, the known P and C signals (those decoded in the previously processed block) are subtracted first before decoding the P and C messages of the current block. This procedure continues until all messages are decoded. The decoding of the P and C messages at each receiver is done in a similar way as in [11]: each receiver decodes both C messages while treating the other signals as noise. This is possible with arbitrarily small error probability if the rates of the C messages satisfy (115), (116), and (117). Then, each receiver subtracts both decoded C signals and decodes its own P message, treating the remaining interference as noise. The error probability can be made arbitrarily small if the private message rate constraint (118) is satisfied. Thus, the sum rate is achievable if the rates satisfy (107)–(114), (115)–(117), and (118).

APPENDIX J
CF

The CF transmit strategy is performed blockwise, as for the DF strategy. We introduce rate splitting into the CF strategy, i.e., each transmitter sends a private (P) message and a common (C) message. In each block, each source node encodes its P and C messages, with the respective rates, into codewords with i.i.d. components as in (119) and (120), such that the power constraint is satisfied; the superposition of the two codewords is then sent. At the end of each block, the relay compresses its received signal using Wyner–Ziv coding [32], assigns the compressed observation to an index, and encodes this index into an i.i.d. codeword satisfying the relay power constraint. This codeword is sent in the next block. For decoding, two variants can be used: forward (CFF) or backward (CFB) decoding.

1) CFF: Let us consider blocks 1 and 2. Due to causality, the relay does not send any signal in block 1; the relay contribution first appears in the received signal of block 2. Recall that the relay codeword represents the compressed received signal of the previous block. The receiver decodes this codeword first, while treating the other signals as noise; reliable decoding is possible if (121) is satisfied. The same is done at the second receiver. Now, by applying [18, Proposition 1], the first receiver, knowing the compressed observation, is equivalent to a receiver with two receive antennas, where the received signal can be written as in (122), with an i.i.d. Gaussian compression noise ("f" for forward) that is independent of all other variables. Using (121), the variance of this compression noise can be written as given in (123). The same holds at the second receiver. Proceeding forward block by block, the received signal in each block can be written as in (122). The receivers then jointly decode the C messages first, strip them off the received signal, and decode the P message while treating the undesired P signal as noise. The achievable rates are thus bounded accordingly, with the input distributed as given in (119). Decoding proceeds in this manner until all blocks are decoded.

2) CFB: In this case, we start decoding from the last two blocks and proceed backward. Assume that the decoding of the C and P messages in the following block was successful. Then, the received signals at receiver 1 in the current and the following block can be written accordingly, where the relay codeword again represents the compressed received signal. The receiver decodes this codeword first and decompresses it into a noisy version of the relay observation.
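The compression-noise variances appearing in (123) and (124) play the role of the distortion in a quadratic-Gaussian Wyner–Ziv problem: with side information at the decoder, a Gaussian source of conditional variance σ² compressed at rate R can be reconstructed with distortion σ²·2^(−2R). The sketch below illustrates this standard rate–distortion relationship; it is our own illustration, not the paper's exact expressions:

```python
def wz_distortion(cond_var, rate):
    """Quadratic-Gaussian Wyner-Ziv distortion D(R) = sigma^2 * 2^(-2R),
    where sigma^2 is the source variance conditioned on the decoder's
    side information (here: the receiver's own observation)."""
    return cond_var * 2.0 ** (-2.0 * rate)

# Each extra bit of compression rate cuts the residual
# (compression-noise) variance by a factor of 4.
d1 = wz_distortion(1.0, 1.0)   # 0.25
d2 = wz_distortion(1.0, 2.0)   # 0.0625
print(d1, d2)
```

This is why a higher relay-to-destination rate translates into a smaller compression-noise variance in the equivalent two-antenna model of (122) and (125), and hence into larger achievable rates.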
4460
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 58, NO. 7, JULY 2012
It then decodes the common messages and its private message from its equivalent received signal, given by [18, Proposition 1] in (125), where the compression noise ("b" for backward) is i.i.d. Gaussian. Notice that, in this case, in addition to the undesired P signal, the relay signal of the following block is also treated as noise. The same is done at the second receiver. The resulting constraint for decoding the compressed observation is given in (126).5 As a result, using (126), the variance of the compression noise is as given in (124). The rate constraints for successful decoding are then (127), (128), and (129), where the input is distributed as in (119). Proceeding backward, the equivalent received signal in every block can be written as in (125), and decoding is reliable if (127)–(129) hold.

5The rate constraints for decoding can be larger, since there is no interference from the private messages in this block.

ACKNOWLEDGMENT

The authors would like to express their appreciation to Dr. D. Gündüz (CTTC) and Prof. D. Tuninetti (UIC) for fruitful discussions. They would also like to thank the reviewers and the editor for their invaluable comments, which helped to significantly improve the quality of this paper.

REFERENCES

[1] A. Chaaban and A. Sezgin, "Lattice coding and the generalized degrees of freedom of the interference channel with relay," presented at the IEEE Int. Symp. Inf. Theory (ISIT), Cambridge, MA, Jul. 1–6, 2012.
[2] A. Chaaban and A. Sezgin, "Relaying strategies for the interference relay channel," presented at the IEEE SPAWC, Çeşme, Turkey, Jun. 17–20, 2012.
[3] Y. Tian and A. Yener, "The Gaussian interference relay channel: Improved achievable rates and sum rate upper bounds using a potent relay," IEEE Trans. Inf. Theory, vol. 57, no. 5, pp. 2865–2879, May 2011.
[4] I. Maric, R. Dabora, and A. Goldsmith, "An outer bound for the Gaussian interference channel with a relay," in Proc. IEEE Inf. Theory Workshop, Taormina, Italy, Oct. 2009, pp. 569–573.
[5] O. Sahin and E. Erkip, "Achievable rates for the Gaussian interference relay channel," in Proc. IEEE Global Telecommun. Conf., Washington, DC, Nov. 2007, pp. 1627–1631.
[6] S. Rini, D. Tuninetti, N. Devroye, and A. Goldsmith, "The capacity of the interference channel with a cognitive relay in very strong interference," in Proc. IEEE Int. Symp. Inf. Theory, St. Petersburg, Russia, Jul.–Aug. 2011, pp. 2632–2636.
[7] S. Rini, D. Tuninetti, and N. Devroye, "Capacity to within 3 bits for a class of Gaussian interference channels with a cognitive relay," in Proc. IEEE Int. Symp. Inf. Theory, St. Petersburg, Russia, Jul.–Aug. 2011, pp. 2627–2631.
[8] S. Rini, D. Tuninetti, and N. Devroye, "Outer bounds for the interference channel with a cognitive relay," in Proc. IEEE Inf. Theory Workshop, Dublin, Ireland, Sep. 2010.
[9] O. Sahin and E. Erkip, "On achievable rates for interference relay channel with interference cancellation," presented at the 41st Annu. Asilomar Conf. Signals, Systems, and Computers, Pacific Grove, CA, Nov. 2007.
[10] S. Sridharan, S. Vishwanath, S. A. Jafar, and S. Shamai, "On the capacity of cognitive relay assisted Gaussian interference channel," in Proc. IEEE Int. Symp. Inf. Theory, Toronto, ON, Canada, Jul. 2008, pp. 549–553.
[11] I. Maric, R. Dabora, and A. Goldsmith, "Generalized relaying in the presence of interference," presented at the 42nd Annu. Asilomar Conf. Signals, Systems, and Computers, Pacific Grove, CA, Oct. 2008.
[12] V. R. Cadambe and S. A. Jafar, "Degrees of freedom of wireless networks with relays, feedback, cooperation and full duplex operation," IEEE Trans. Inf. Theory, vol. 55, no. 5, pp. 2334–2344, May 2009.
[13] R. H. Etkin, D. N. C. Tse, and H. Wang, "Gaussian interference channel capacity to within one bit," IEEE Trans. Inf. Theory, vol. 54, no. 12, pp. 5534–5562, Dec. 2008.
[14] A. Chaaban and A. Sezgin, "Achievable rates and upper bounds for the interference relay channel," presented at the 44th Annu. Asilomar Conf. Signals, Systems, and Computers, Pacific Grove, CA, Nov. 2010.
[15] L. Ong, C. M. Kellett, and S. J. Johnson, "Capacity theorems for the AWGN multi-way relay channel," in Proc. IEEE Int. Symp. Inf. Theory, Austin, TX, Jun. 2010, pp. 664–668.
[16] R. Zamir, "Lattices are everywhere," presented at the 4th Annu. Workshop on Information Theory and its Applications, La Jolla, CA, Feb. 2009.
[17] B. Nazer and M. Gastpar, "Compute-and-forward: Harnessing interference through structured codes," IEEE Trans. Inf. Theory, vol. 57, no. 10, pp. 6463–6486, Oct. 2011.
[18] F. M. J. Willems, "Informationtheoretical results for the discrete memoryless multiple access channel," Ph.D. dissertation, Katholieke Univ. Leuven, Leuven, Belgium, 1982.
[19] T. M. Cover and A. El-Gamal, "Capacity theorems for the relay channel," IEEE Trans. Inf. Theory, vol. 25, no. 5, pp. 572–584, Sep. 1979.
[20] A. Host-Madsen, "Capacity bounds for cooperative diversity," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1522–1544, Apr. 2006.
[21] V. M. Prabhakaran and P. Viswanath, "Interference channels with destination cooperation," IEEE Trans. Inf. Theory, vol. 57, no. 1, pp. 187–209, Jan. 2011.
[22] V. M. Prabhakaran and P. Viswanath, "Interference channels with source cooperation," IEEE Trans. Inf. Theory, vol. 57, no. 1, pp. 156–186, Jan. 2011.
[23] T. Cover and J. Thomas, Elements of Information Theory. New York: Wiley, 1991.
[24] T. S. Han and K. Kobayashi, "A new achievable rate region for the interference channel," IEEE Trans. Inf. Theory, vol. 27, no. 1, pp. 49–60, Jan. 1981.
[25] A. S. Motahari and A. K. Khandani, "Capacity bounds for the Gaussian interference channel," IEEE Trans. Inf. Theory, vol. 55, no. 2, pp. 620–643, Feb. 2009.
[26] X. Shang, G. Kramer, and B. Chen, "A new outer bound and the noisy-interference sum-rate capacity for Gaussian interference channels," IEEE Trans. Inf. Theory, vol. 55, no. 2, pp. 689–699, Feb. 2009.
[27] V. S. Annapureddy and V. V. Veeravalli, "Gaussian interference networks: Sum capacity in the low-interference regime and new outer bounds on the capacity region," IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3032–3050, Jul. 2009.
[28] J. A. Thomas, "Feedback can at most double Gaussian multiple access channel capacity," IEEE Trans. Inf. Theory, vol. 33, no. 5, pp. 711–716, Sep. 1987.
[29] R. Ahlswede, "Multi-way communication channels," in Proc. 2nd Int. Symp. Inf. Theory, Tsahkadsor, U.S.S.R., Sep. 1971, pp. 23–52.
[30] H. A. Loeliger, "Averaging bounds for lattices and linear codes," IEEE Trans. Inf. Theory, vol. 43, no. 6, pp. 1767–1773, Nov. 1997.
[31] U. Erez and R. Zamir, "Achieving 1/2 log(1 + SNR) on the AWGN channel with lattice encoding and decoding," IEEE Trans. Inf. Theory, vol. 50, no. 10, pp. 2293–2314, Oct. 2004.
[32] K. Narayanan, M. P. Wilson, and A. Sprintson, "Joint physical layer coding and network coding for bi-directional relaying," in Proc. 45th Allerton Conf., IL, Sep. 2007.
[33] T. Philosof, R. Zamir, U. Erez, and A. J. Khisti, "Lattice strategies for the dirty multiple access channel," IEEE Trans. Inf. Theory, vol. 57, no. 8, pp. 5006–5035, Aug. 2011.
[34] A. Wyner and J. Ziv, "The rate-distortion function for source coding with side information at the decoder," IEEE Trans. Inf. Theory, vol. 22, no. 1, pp. 1–10, Jan. 1976.
Anas Chaaban (S’09) was born in Doha, Qatar, on December 11, 1984. He received the Maîtrise ès Sciences degree in electronics from the Lebanese University, Lebanon, in 2006, and the M.Sc. degree in communications technology from the University of Ulm, Germany, in 2009. During 2008–2009, he was with the Daimler research group on machine vision, Ulm, Germany. From 2009 to 2011, he was a Research Assistant with the Emmy-Noether Research Group on Wireless Networks at the University of Ulm, Germany, which relocated to Ruhr-Universität Bochum, Germany, in 2011. His current research interests are in the area of network information theory, with a main focus on relaying and interference management.
Aydin Sezgin (S’01–M’05) received the Dipl.-Ing. (M.S.) degree in communications engineering from TFH Berlin in 2000 and the Dr.-Ing. (Ph.D.) degree in electrical engineering from TU Berlin in 2005. From 2001 to 2006, he was with the Heinrich-Hertz-Institut (HHI), Berlin. From 2006 to 2008, he was a Post-doc and Lecturer at the Information Systems Laboratory, Department of Electrical Engineering, Stanford University. From 2008 to 2009, he was a Post-doc at the Department of Electrical Engineering and Computer Science, University of California, Irvine. From 2009 to 2011, he was the Head of the Emmy-Noether Research Group on Wireless Networks at Ulm University, Ulm, Germany. In 2011, he was a Full Professor at the Department of Electrical Engineering and Information Technology, TU Darmstadt, Germany. He is currently a Full Professor at the Department of Electrical Engineering and Information Technology, Ruhr-University Bochum, Bochum, Germany. His current research interests are in the areas of information theory, communication theory, and signal processing, with a focus on applications to wireless communication systems. He is currently serving as an Editor for the IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS and as an Area Editor for the Elsevier Journal of Electronics and Communications.