Noisy-interference Sum-rate Capacity of Parallel Gaussian Interference Channels
arXiv:0903.0595v1 [cs.IT] 3 Mar 2009
Xiaohu Shang, Biao Chen, Gerhard Kramer and H. Vincent Poor
Abstract—The sum-rate capacity of the parallel Gaussian interference channel is shown to be achieved by independent transmission across sub-channels and treating interference as noise in each sub-channel if the channel coefficients and power constraints satisfy a certain condition. The condition requires the interference to be weak, a situation commonly encountered in, e.g., digital subscriber line transmission. The optimal power allocation is characterized by using the concavity of the sum-rate capacity as a function of the power constraints.
I. INTRODUCTION

Parallel Gaussian interference channels (PGICs) model the situation in which several transceiver pairs communicate through a number of independent sub-channels, with each sub-channel being a Gaussian interference channel (GIC). Fig. 1 illustrates a two-user PGIC where a pair of users, each subject to a total power constraint, has access to a set of m Gaussian interference channels. Existing systems that can be accurately modeled as PGICs include both wired systems such as digital subscriber lines (DSL) and wireless systems employing orthogonal frequency division multiple access (OFDMA). Both of these systems have been and will be major players in broadband systems. While there have been extensions of information theory for the classical single-channel GIC to the PGIC (e.g., [1]), most existing research, especially for DSL systems, often relies on the following two assumptions [2]–[4]:

X. Shang is with Princeton University, Department of EE, Engineering Quad, Princeton, NJ 08544. Email: [email protected]. B. Chen is with Syracuse University, Department of EECS, 335 Link Hall, Syracuse, NY 13244. Phone: (315) 443-3332. Email: [email protected]. G. Kramer was with Bell Labs, Alcatel-Lucent. He is now with the University of Southern California, Department of EE, 3740 McClintock Ave, Los Angeles, CA 90089. Email: [email protected]. H. V. Poor is with Princeton University, Department of EE, Engineering Quad, Princeton, NJ 08544. Email: [email protected].

This work was supported in part by the National Science Foundation under Grant CNS-06-25637.
March 3, 2009
DRAFT
Fig. 1. Illustration of a PGIC system where the two transceiver pairs have access to m independent parallel channels.
• Transmissions in sub-channels are independent of each other.
• Each receiver treats interference as noise.
These assumptions greatly simplify an otherwise intractable problem. The independent transmission assumption ensures that the total sum rate is expressed as a sum of all sub-channels' sum rates. The assumption of using single-user detection permits a simple closed-form expression for the rate pair of each sub-channel.¹ Our main goal in this paper is to provide a sound theoretical basis for such assumptions, i.e., to understand under what conditions such a transceiver structure leads to optimal throughput performance. It is certainly not obvious that this structure could ever be optimal, but we show that it is optimal for systems with weak interference, a situation encountered in many deployed systems such as DSL. Our approach to characterizing the sum-rate capacity of a two-user PGIC leverages recent breakthroughs in determining the sum-rate capacity of the GIC under noisy interference [6]–[8]. We determine conditions on the channel gains and power constraints such that there is no loss in terms of sum rate when we impose the above two assumptions. This is accomplished in two steps. First, under the independent transmission assumption, we find conditions such that the maximum sum rate can be achieved by treating interference as noise in each sub-channel. The key to establishing these conditions is the concavity of the sum-rate capacity in the power constraints for a GIC (cf. Lemma 2). Second, we show that with the same power constraints and channel gains obtained in the first step, independent transmission and single-user detection in each sub-channel achieve the sum-rate capacity of the PGIC. The proof utilizes a genie-aided approach that generalizes that of [6]. This paper is organized as follows. In Section II we introduce the system model and review recent results. In Section III, we consider the maximum sum rate of a special transmission scheme, i.e., independent transmission and single-user detection for each sub-channel. We obtain conditions on the
¹Even under this simplified assumption, finding the optimal power allocation is an NP-hard problem [5].
power constraints and channel coefficients under which the above strategy maximizes the total sum rate. We prove in Section IV that the maximum sum rate we obtain is the sum-rate capacity. Numerical results are given in Section V. Section VI concludes the paper.

II. SYSTEM MODEL AND PRELIMINARIES
The received signals of the $i$th sub-channel, $i = 1, \dots, m$, are defined as
$$Y_{1i} = \sqrt{c_i}\,X_{1i} + \sqrt{a_i}\,X_{2i} + Z_{1i}, \qquad Y_{2i} = \sqrt{d_i}\,X_{2i} + \sqrt{b_i}\,X_{1i} + Z_{2i}, \qquad (1)$$
where $0 \le a_i < d_i$, $0 \le b_i < c_i$; $Z_{1i}$ and $Z_{2i}$ are unit-variance Gaussian noises, and the total block power constraints are $P$ and $Q$ for users 1 and 2, respectively:
$$\frac{1}{n}\sum_{i=1}^m \sum_{j=1}^n E\!\left[X_{1ij}^2\right] \le P \quad \text{and} \quad \frac{1}{n}\sum_{i=1}^m \sum_{j=1}^n E\!\left[X_{2ij}^2\right] \le Q,$$
where $n$ is the block length, and $X_{1ij}$ and $X_{2ij}$, $j = 1, \dots, n$, are the user/channel input sequences for the $i$th sub-channel. We remark that this model is a special case of the multiple-input multiple-output (MIMO) GIC [9]. We denote the sum-rate capacity of the $i$th sub-channel as $C_i(P_i, Q_i)$, where $P_i$ and $Q_i$ are the respective powers allocated to the two users in this sub-channel.
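For concreteness, one use of a single sub-channel of the model (1) can be simulated as follows. This is only a sketch; the inputs and gains in the example are hypothetical, while the unit-variance Gaussian noise matches the definition above.

```python
import math
import random

def pgic_subchannel(x1, x2, a, b, c, d, rng=random):
    """One use of sub-channel i of the PGIC model (1):
    Y1 = sqrt(c) X1 + sqrt(a) X2 + Z1,  Y2 = sqrt(d) X2 + sqrt(b) X1 + Z2,
    with unit-variance Gaussian noise at each receiver."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rng.gauss(0.0, 1.0)
    y1 = math.sqrt(c) * x1 + math.sqrt(a) * x2 + z1  # received by user 1
    y2 = math.sqrt(d) * x2 + math.sqrt(b) * x1 + z2  # received by user 2
    return y1, y2

# hypothetical example use with weak cross gains a = b = 0.25
y1, y2 = pgic_subchannel(1.0, 2.0, a=0.25, b=0.25, c=1.0, d=1.0,
                         rng=random.Random(0))
```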
To find the sum-rate capacity of the PGIC, we need to solve three problems. The first problem is whether the sub-channels can be treated separately, as for the parallel Gaussian multiple-access channel [10] and the parallel Gaussian broadcast channel [11]–[13], i.e., whether the sum-rate capacity of the PGIC is of the form $\sum_{i=1}^m C_i(P_i, Q_i)$. Such a strategy is suboptimal for PGICs in general [14], [15]. The second problem is the optimal distribution of the input signals. It has been shown in [16]–[18] and [6]–[8], respectively, that Gaussian inputs are sum-rate optimal for a single-channel GIC under strong or noisy interference. However, whether this is still the case for PGICs is not known. The third problem is to find the optimal power allocation among sub-channels. Existing works on this problem treat the sub-channels separately, use Gaussian inputs, and use single-user detection at the receivers [2]–[4]. Before proceeding, we introduce the following notation.
• Bold fonts $x$ and $X$ denote vectors and matrices, respectively.
• $I$ denotes the identity matrix and $0$ denotes the zero matrix.
• $|X|$, $X^T$, $X^{-1}$ denote the determinant, transpose, and inverse of the matrix $X$, respectively.
• $x^n = [x_1^T, x_2^T, \dots, x_n^T]^T$ is a long vector which consists of the vectors $x_i$, $i = 1, \dots, n$.
• $x \sim \mathcal{N}(0, \Sigma)$ means that the random vector $x$ is Gaussian distributed with zero mean and covariance matrix $\Sigma$.
• $E(\cdot)$ denotes expectation; $\mathrm{Var}(\cdot)$ denotes variance; $\mathrm{Cov}(\cdot)$ denotes covariance matrix; $I(\cdot;\cdot)$ denotes mutual information; $h(\cdot)$ denotes differential entropy with logarithm base $e$, and $\log(\cdot) = \log_e(\cdot)$.

A. Noisy-interference sum-rate capacity

The noisy-interference sum-rate capacity for single-channel GICs [6]–[8] is summarized as follows.

Lemma 1: The sum-rate capacity of the $i$th sub-channel with $a_i < c_i$, $b_i < d_i$ and power allocation $p, q$ is
$$C_i(p,q) = \frac{1}{2}\log\left(1 + \frac{c_i p}{1 + a_i q}\right) + \frac{1}{2}\log\left(1 + \frac{d_i q}{1 + b_i p}\right) \qquad (2)$$
provided $(p,q) \in \mathcal{A}_i$:
$$\mathcal{A}_i = \left\{(\tilde p, \tilde q)\,:\, \sqrt{a_i c_i}\,(1 + b_i \tilde p) + \sqrt{b_i d_i}\,(1 + a_i \tilde q) \le \sqrt{c_i d_i},\ \tilde p \ge 0,\ \tilde q \ge 0\right\}. \qquad (3)$$
In the case of a symmetric GIC, i.e., $a_i = b_i$, $c_i = d_i$ and $p = q$, the noisy-interference condition reduces to
$$\frac{a_i}{c_i} \le \frac{1}{4}, \qquad p = q \le \frac{\sqrt{a_i c_i} - 2a_i}{2a_i^2}. \qquad (4)$$
In the case of a ZIC where $a_i = 0$, the noisy-interference condition reduces to
$$b_i < 1, \qquad p \ge 0, \qquad q \ge 0. \qquad (5)$$
The main difficulty in maximizing $\sum_{i=1}^m C_i(P_i, Q_i)$ is that $C_i(P_i, Q_i)$ is generally unknown if $(P_i, Q_i) \notin \mathcal{A}_i$. To solve this problem we use the following results.
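Lemma 1 and conditions (3) and (4) are straightforward to evaluate numerically. The following sketch (with hypothetical gains; rates are in nats since the logarithm base here is e) checks membership in the noisy-interference region and evaluates the closed-form sum rate (2):

```python
import math

def noisy_interference(p, q, a, b, c, d):
    """Membership in the region A_i of condition (3)."""
    return (p >= 0 and q >= 0 and
            math.sqrt(a * c) * (1 + b * p) + math.sqrt(b * d) * (1 + a * q)
            <= math.sqrt(c * d))

def sum_rate(p, q, a, b, c, d):
    """Sum-rate capacity (2) in nats; valid when noisy_interference holds."""
    return 0.5 * math.log(1 + c * p / (1 + a * q)) + \
           0.5 * math.log(1 + d * q / (1 + b * p))

# Symmetric example: a = b = 0.1, c = d = 1 satisfies a/c <= 1/4, and (4)
# gives the largest admissible equal power p = q = (sqrt(a c) - 2a)/(2 a^2).
a = b = 0.1
c = d = 1.0
p_max = (math.sqrt(a * c) - 2 * a) / (2 * a ** 2)
p = 0.999 * p_max                       # stay just inside the boundary of (3)
assert noisy_interference(p, p, a, b, c, d)
rate = sum_rate(p, p, a, b, c, d)
```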
B. Concavity of sum-rate capacity

The key to our study of the PGIC is the concavity of the sum-rate capacity as a function of the power constraints. We establish a slightly more general result by using a modified frequency-division multiplexing (FDM) argument [19].

Lemma 2: Let $C_\mu(p,q)$ denote the weighted sum-rate capacity of a GIC with powers $p$ and $q$:
$$C_\mu(p,q) = \max_{(R_1, R_2)\ \text{achievable}} \{R_1 + \mu R_2\},$$
where $\mu \ge 0$ is a constant. Then $C_\mu(p,q)$ is concave in the powers $(p,q)$, i.e., for any $0 \le \lambda \le 1$ we have
$$C_\mu(p,q) \ge \lambda C_\mu(p', q') + (1-\lambda)\,C_\mu(p'', q''), \qquad (6)$$
where $p'$, $p''$, $q'$, and $q''$ are chosen to satisfy
$$\lambda p' + (1-\lambda)p'' = p, \qquad \lambda q' + (1-\lambda)q'' = q. \qquad (7)$$
Proof: Consider a potentially suboptimal strategy that divides the total frequency band into two sub-bands: one with a fraction $\lambda$ and the other with a fraction $1-\lambda$ of the total bandwidth. Powers are allocated to these two sub-bands as $(\lambda p', \lambda q')$ and $((1-\lambda)p'', (1-\lambda)q'')$, where $p', q', p'', q''$ satisfy (7). The information transmitted in the two sub-bands is independent and the decoding is also independent. The maximum weighted sum rate for the first sub-band is then scaled by the factor $\lambda$ and becomes $\lambda C_\mu(p', q')$; similarly, the maximum weighted sum rate of the second sub-band is $(1-\lambda)C_\mu(p'', q'')$. Therefore, the right-hand side of (6) is an achievable weighted sum rate.

Lemma 2 provides a fundamental result for weighted sum-rate capacities. It applies not only to two-user GICs but also to many-user GICs, Gaussian multiple-access channels, and Gaussian broadcast channels.
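Inequality (6) can be spot-checked numerically in the regime where the sum-rate capacity is known in closed form. The sketch below takes $\mu = 1$ and hypothetical gains chosen so that all evaluated power pairs stay inside $\mathcal{A}_i$ of (3), where (2) gives the capacity:

```python
import math

def C(p, q, a=0.05, b=0.05, c=1.0, d=1.0):
    # sum-rate capacity (2); the points used below all satisfy (3)
    # for these gains, so the closed form applies
    return 0.5 * math.log(1 + c * p / (1 + a * q)) + \
           0.5 * math.log(1 + d * q / (1 + b * p))

lam = 0.3
p1, q1 = 1.0, 2.0
p2, q2 = 4.0, 0.5
p = lam * p1 + (1 - lam) * p2
q = lam * q1 + (1 - lam) * q2
# Concavity (6) with mu = 1: the capacity of the averaged powers
# dominates the average of the capacities.
assert C(p, q) >= lam * C(p1, q1) + (1 - lam) * C(p2, q2)
```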
C. Subgradient and subdifferential

To apply the concavity of the sum-rate capacity, we need several properties of subgradients and subdifferentials (see [20]).

Definition 1: If $f: \mathbb{R}^n \to \mathbb{R}$ is a real-valued concave function defined on a convex set $S \subset \mathbb{R}^n$, a vector $y$ is a subgradient at a point $x_0$ if
$$f(x) - f(x_0) \le y^T(x - x_0), \qquad \forall\, x \in S. \qquad (8)$$

Definition 2: For the concave function $f$ defined in Definition 1, the collection $\partial f(x_0)$ of all subgradients at the point $x_0$ is the subdifferential at this point. If the function $f$ is differentiable at $x_0$, then the subgradient and subdifferential both coincide with the gradient.

We introduce a lemma related to subdifferentials which we use to prove our main result.

Lemma 3: Let $f_i(x)$, $i = 1, \dots, m$, be finite, concave, real-valued functions on $S \subset \mathbb{R}^n$ and let $x_i^* \in S$, $i = 1, \dots, m$. If there is a vector $y$ such that $y \in \partial f_i(x_i^*)$, $i = 1, \dots, m$, and $\sum_{i=1}^m x_i^* = u$, then
$x^* = [x_1^{*T}, \dots, x_m^{*T}]^T$ is a solution of the following optimization problem:
$$\max \sum_{i=1}^m f_i(x_i) \quad \text{subject to} \quad \sum_{i=1}^m x_i = u, \quad x_i \in S. \qquad (9)$$
Proof: Since $y \in \partial f_i(x_i^*)$, $i = 1, \dots, m$, we have
$$f_i(x) \le f_i(x_i^*) + y^T(x - x_i^*), \qquad \forall\, x \in S. \qquad (10)$$
Let $\hat x_i$, $i = 1, \dots, m$, be any vectors satisfying $\hat x_i \in S$ and $\sum_{i=1}^m \hat x_i = u$; then using (10) we have
$$f_i(\hat x_i) \le f_i(x_i^*) + y^T(\hat x_i - x_i^*). \qquad (11)$$
Therefore
$$\sum_{i=1}^m f_i(\hat x_i) \le \sum_{i=1}^m f_i(x_i^*) + y^T\left(\sum_{i=1}^m \hat x_i - \sum_{i=1}^m x_i^*\right) = \sum_{i=1}^m f_i(x_i^*), \qquad (12)$$
where the last equality is from $\sum_{i=1}^m \hat x_i = \sum_{i=1}^m x_i^* = u$.
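Lemma 3 can be checked numerically. The following sketch uses hypothetical concave functions $f_i(x) = w_i \log(1+x)$ (not from the paper), picks a common derivative $y$ shared by all $f_i$ at candidate points, and verifies that the induced allocation dominates random feasible allocations with the same total budget:

```python
import math
import random

w = [1.0, 2.0, 3.0]
f = [lambda x, wi=wi: wi * math.log(1 + x) for wi in w]

# Common (sub)gradient y shared by all f_i at the optimum: f_i'(x_i*) = y,
# i.e. w_i / (1 + x_i*) = y, so x_i* = w_i / y - 1.
y = 0.5
x_star = [wi / y - 1 for wi in w]            # [1.0, 3.0, 5.0]
u = sum(x_star)                              # total budget u = 9
best = sum(fi(xi) for fi, xi in zip(f, x_star))

rng = random.Random(0)
for _ in range(1000):
    # random nonnegative allocation with the same total u
    cuts = sorted(rng.uniform(0, u) for _ in range(2))
    alloc = [cuts[0], cuts[1] - cuts[0], u - cuts[1]]
    # Lemma 3: the shared-subgradient point maximizes the sum
    assert best >= sum(fi(xi) for fi, xi in zip(f, alloc)) - 1e-12
```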
In the Appendix, we compute the subdifferential $\partial C_i(p,q)$ when $(p,q) \in \mathcal{A}_i$. We are also interested in the set of pairs
$$\mathcal{B}_i = \bigcup_{(p,q)\in\mathcal{A}_i} \partial C_i(p,q). \qquad (13)$$
The mapping from $\mathcal{A}_i$ to $\mathcal{B}_i$ is illustrated in Fig. 2. As seen from (3), $\mathcal{A}_i$ is a triangular region with the corner points $O(0,0)$,
$$S\left(0,\ \frac{\sqrt{c_i d_i} - \sqrt{a_i c_i} - \sqrt{b_i d_i}}{a_i\sqrt{b_i d_i}}\right) = (0, q_s), \quad \text{and} \quad T\left(\frac{\sqrt{c_i d_i} - \sqrt{a_i c_i} - \sqrt{b_i d_i}}{b_i\sqrt{a_i c_i}},\ 0\right) = (p_t, 0).$$
The corresponding points in $\mathcal{B}_i$ are respectively
$$O'\left(\frac{c_i}{2}, \frac{d_i}{2}\right), \quad S'\left(\frac{c_i}{2(1+a_i q_s)} - \frac{b_i d_i q_s}{2(1+d_i q_s)},\ \frac{d_i}{2(1+d_i q_s)}\right), \quad \text{and} \quad T'\left(\frac{c_i}{2(1+c_i p_t)},\ \frac{d_i}{2(1+b_i p_t)} - \frac{a_i c_i p_t}{2(1+c_i p_t)}\right).$$
Fig. 2. The mapping from $\mathcal{A}_i$ to $\mathcal{B}_i$.
Let $\mathcal{A}_i^{(1)}$ be the inner points of $\mathcal{A}_i$ and the line segment $ST$, and let $\mathcal{B}_i^{(1)}$ be the inner points of the closed area defined by $O'S'$, $O'T'$ and the curve $\widehat{S'T'}$. As shown in the Appendix, $\mathcal{A}_i^{(1)}$ maps to $\mathcal{B}_i^{(1)}$ and this mapping is one-to-one. Let $\mathcal{A}_i^{(2)}$ be the line segment $OT$ and $\mathcal{B}_i^{(2)}$ be the curve $\widehat{O'T'}$ and the points above it (labeled as region II in Fig. 2). $\mathcal{A}_i^{(2)}$ maps to $\mathcal{B}_i^{(2)}$ and this mapping is one-to-many. Specifically, let $(P_i, 0)$ be a point on $OT$. The partial derivatives of $C_i(p,q)$ with respect to $p$ and $q$ at this point are denoted by $K_p$ and $K_q$, respectively, where $K_p$ is a two-sided partial derivative and $K_q$ is a one-sided partial derivative. The point $(K_p, K_q)$ is on the curve $\widehat{O'T'}$ of $\mathcal{B}_i^{(2)}$. The subdifferential of $C_i(p,q)$ at the point $(P_i, 0)$ is a ray in $\mathcal{B}_i^{(2)}$ defined by $k_p = K_p$, $k_q \ge K_q$. Similarly to the above, let $\mathcal{A}_i^{(3)}$ be the line segment $OS$ and $\mathcal{B}_i^{(3)}$ be the curve $\widehat{O'S'}$ and the points to the right (labeled as region III in Fig. 2). $\mathcal{A}_i^{(3)}$ maps to $\mathcal{B}_i^{(3)}$ and this mapping is also one-to-many. Let $\mathcal{A}_i^{(4)}$ be the origin and let $\mathcal{B}_i^{(4)}$ be the collection of points $(k_p, k_q)$ satisfying $k_p \ge K_p$ and $k_q \ge K_q$ (labeled as region IV in Fig. 2), where $K_p$ and $K_q$ are the two one-sided partial derivatives at the origin. $\mathcal{A}_i^{(4)}$ maps to $\mathcal{B}_i^{(4)}$.
D. Concave-like property of conditional entropy

The following lemma is proved in [9] based on the fact that a Gaussian distribution maximizes conditional entropy under a covariance matrix constraint [21].

Lemma 4: [9, Lemma 2] Let $x_i^n = [x_{i,1}^T, \dots, x_{i,n}^T]^T$, $i = 1, \dots, k$, be $k$ long random vectors, each of which consists of $n$ vectors. Suppose the $x_{i,j}$, $i = 1, \dots, k$, all have the same length $L_j$, $j = 1, \dots, n$. Let $y^n = [y_1^T, \dots, y_n^T]^T$, where $y_j$ has length $L_j$, be a long Gaussian random vector with covariance matrix
$$\mathrm{Cov}(y^n) = \sum_{i=1}^k \lambda_i\, \mathrm{Cov}(x_i^n), \qquad (14)$$
where $\sum_{i=1}^k \lambda_i = 1$, $\lambda_i \ge 0$. Let $\mathcal{S}$ be a subset of $\{1, 2, \dots, n\}$ and $\mathcal{T}$ be a subset of the complement of $\mathcal{S}$. Then we have
$$\sum_{i=1}^k \lambda_i\, h(x_{i,\mathcal{S}} \mid x_{i,\mathcal{T}}) \le h(y_{\mathcal{S}} \mid y_{\mathcal{T}}). \qquad (15)$$
When the $x_i^n$, $i = 1, \dots, k$, are all Gaussian distributed, Lemma 4 shows that $h(x_{\mathcal{S}} \mid x_{\bar{\mathcal{S}}})$ is concave over the covariance matrices.

III. A LOWER BOUND FOR THE SUM-RATE CAPACITY
If the sum-rate capacity of a PGIC can be achieved by (1) transmitting independent symbol streams in each sub-channel and (2) treating interference as noise in each sub-channel, we say this PGIC has noisy interference. Before proceeding to the main theorem of the noisy-interference sum-rate capacity, we first consider the following optimization problem:
$$\max \sum_{i=1}^m C_i(P_i, Q_i) \quad \text{subject to} \quad \sum_{i=1}^m P_i = P, \quad \sum_{i=1}^m Q_i = Q, \quad P_i \ge 0, \quad Q_i \ge 0, \quad i = 1, \dots, m. \qquad (16)$$
Problem (16) is to find the maximum of the sum of the sum-rate capacities of the individual sub-channels and the corresponding power allocation. In general, the optimal solution of (16) is not the sum-rate capacity of the PGIC, since it presumes that the signals transmitted in each sub-channel are independent and no joint decoding across sub-channels is allowed. However, solving problem (16) is important for deriving the
sum-rate capacity of a PGIC. We are interested in the case where the optimal power allocations $P_i^*, Q_i^*$ satisfy the following noisy-interference conditions:
$$\sqrt{a_i c_i}\,(1 + b_i P_i^*) + \sqrt{b_i d_i}\,(1 + a_i Q_i^*) \le \sqrt{c_i d_i}, \qquad i = 1, \dots, m. \qquad (17)$$
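As a rough numerical sketch (with hypothetical gains, not taken from the paper), problem (16) can be attacked by brute force for $m = 2$ when the search is restricted to allocations satisfying (17), so that each $C_i$ is given by the closed form (2):

```python
import math

def Ci(p, q, a, b, c, d):
    # sum-rate capacity (2); used only where condition (17) holds
    return 0.5 * math.log(1 + c * p / (1 + a * q)) + \
           0.5 * math.log(1 + d * q / (1 + b * p))

def noisy(p, q, a, b, c, d):
    # per-sub-channel noisy-interference condition (17)
    return math.sqrt(a * c) * (1 + b * p) + math.sqrt(b * d) * (1 + a * q) \
           <= math.sqrt(c * d)

# hypothetical two-sub-channel example, total powers P and Q
chans = [(0.05, 0.05, 1.0, 1.0), (0.02, 0.02, 0.8, 0.8)]  # (a, b, c, d)
P = Q = 2.0
n = 40
best = -1.0
for ip in range(n + 1):
    for iq in range(n + 1):
        split = [(P * ip / n, Q * iq / n),
                 (P * (1 - ip / n), Q * (1 - iq / n))]
        if all(noisy(p, q, *ch) for (p, q), ch in zip(split, chans)):
            best = max(best, sum(Ci(p, q, *ch)
                                 for (p, q), ch in zip(split, chans)))
```

For these weak-interference gains every grid point satisfies (17), so the search covers the whole simplex on the grid.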
In such a case, it turns out that the sum of the sum-rate capacities in (16) is maximized when each sub-channel experiences noisy interference. For the rest of this section, we first consider the general PGIC and derive the optimal solution of problem (16) based on Lemmas 2 and 3. We further find conditions on the total powers $P$ and $Q$ such that the optimal solution of (16) satisfies (17). Then we focus on symmetric PGICs and provide some insights on this solution.

A. General parallel Gaussian interference channel

Theorem 1: For a PGIC defined in (1), if in the following set
$\sqrt{a_i c_i} + \sqrt{b_i d_i}$

Fig. 3. An illustration of sum-rate capacity achieving power allocation for a symmetric parallel Gaussian interference channel.
sub-channel, points $A_1$ and $A_2$ correspond to the power allocations $P_1$ and $P_2$, respectively, and the two supporting hyperplanes pass through $A_1$ and $A_2$. The power allocation satisfies $P = P_1 + P_2$, $P_i \in \mathcal{A}_i'$, $i = 1, 2$, and $k = \frac{\partial C_1(p)}{\partial p}\big|_{p=P_1} = \frac{\partial C_2(p)}{\partial p}\big|_{p=P_2}$ (we assume the subgradient is equal to the gradient in this case, and hence the supporting hyperplane is the tangent hyperplane), and the corresponding sum rate is $C_{s1}^* + C_{s2}^*$. Consider the power allocation $P_1 - \delta$ and $P_2 + \delta$ and the corresponding sum-rate capacities for the two sub-channels, $C_{s1}^* - \Delta R_1$ and $C_{s2}^* + \Delta R_2$, respectively. By concavity, $\Delta R_1 > k\delta$ and $\Delta R_2 < k\delta$. Therefore the new sum rate is $(C_{s1}^* - \Delta R_1) + (C_{s2}^* + \Delta R_2) < (C_{s1}^* - k\delta) + (C_{s2}^* + k\delta) = C_{s1}^* + C_{s2}^*$.

Remark 2: When $\min_i \left\{\frac{\sqrt{a_i c_i} - 2a_i}{2a_i^2}\right\} < P \le \bar P$, there exist power allocations such that some sub-channels do not have noisy interference. As such, the sum-rate capacities of those sub-channels are unknown. Surprisingly, in this case we do not need to derive upper bounds for those unknown sum-rate capacities. Instead, the concavity of the sum-rate capacity (as a function of the power) and the existing noisy-interference sum-rate capacity results ensure the validity of Theorem 2.

Remark 3: The parallel supporting hyperplanes condition for the optimal power allocation is applicable to a broad class of parallel channels in which 1) transmissions across sub-channels are independent, 2)
the capacity of each sub-channel is concave in its power constraint. For example, this condition applies to parallel multiple-access and broadcast channels. In particular, applying the condition to single-user parallel Gaussian channels, it is easy to verify that the parallel supporting hyperplanes condition reduces to the classic waterfilling interpretation.

Remark 4: Intuitively, since each sub-channel is a symmetric Gaussian IC with noisy interference, the power allocated to the two users in each sub-channel ought to be identical. While Theorem 2 does not explicitly address the power allocation scheme, we see from the proof of the theorem that this is indeed the case.

Remark 5: If (22) is satisfied, the optimal power allocation $P_i^*$ is unique, and there exists a $k^* \in [\hat w, \hat c]$ such that $P_i^*$ and $k^*$ satisfy (29). To see this, observe that in the proof of Theorem 2, $\sum_{i=1}^m P_i^*(k^*)$ is continuous and monotonically decreasing in $k^*$ when $k^* \in [\hat w, \hat c]$, and $P$ varies from 0 to $\bar P$ when $k^*$ varies from $\hat c$ to $\hat w$. Thus, if $0 \le P \le \bar P$, there exists a corresponding unique $k^*$ in $[\hat w, \hat c]$ that solves problem (24). Since the mapping from $k^*$ to $P_i^*$ in (29) is one-to-one, $P_i^*$ is also unique.
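The waterfilling reduction mentioned in Remark 3, and the common-slope quantity $k^*$ of Remark 5, can be sketched numerically for single-user parallel Gaussian channels with $C_i(p) = \frac{1}{2}\log(1 + c_i p)$. This is an illustrative aside with hypothetical gains, not the Theorem 2 setting itself:

```python
import math

def waterfill(c, P, iters=60):
    """Equal-slope power allocation for single-user parallel Gaussian
    channels C_i(p) = 0.5 log(1 + c_i p): the optimality condition
    dC_i/dp = c_i / (2 (1 + c_i p_i)) = k for active sub-channels gives
    p_i = max(0, 1/(2k) - 1/c_i).  Bisect on the common slope k so that
    the powers sum to P."""
    lo, hi = 1e-9, max(c) / 2.0        # the slope at p = 0 is c_i / 2
    p = [0.0] * len(c)
    for _ in range(iters):
        k = 0.5 * (lo + hi)
        p = [max(0.0, 1.0 / (2 * k) - 1.0 / ci) for ci in c]
        if sum(p) > P:
            lo = k                      # too much power: raise the slope
        else:
            hi = k
    return p

p = waterfill([1.0, 0.5, 0.1], P=2.0)
assert abs(sum(p) - 2.0) < 1e-6
# stronger sub-channels become active earlier and get at least as much power
assert p[0] >= p[1] >= p[2]
```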
Remark 6: As shown in (29), whether a sub-channel is active or not depends only on the direct channel gain $c_i$. The amount of power allocated to a sub-channel depends on both the direct channel gain $c_i$ and the interference channel gain $a_i$. When the total power constraint $P$ increases from 0 to $\bar P$, the corresponding $k^*$ decreases from $\hat c$ to $\hat w$. As such, from (29) the sub-channels with larger $c_i$ become active earlier than those with smaller $c_i$.

IV. NOISY-INTERFERENCE SUM-RATE CAPACITY
The following theorem gives the noisy-interference sum-rate capacity of a PGIC.

Theorem 3: For the PGIC defined in (1), if
$\sqrt{a_i c_i} + \sqrt{b_i d_i}$
$\epsilon > 0$ and $\epsilon \to 0$ as $n \to \infty$. From Fano's inequality, any achievable rates $R_1$ and $R_2$ for the PGIC must satisfy
$$\begin{aligned} n(R_1 + R_2) - n\epsilon &\le I(x_1^n; y_1^n) + I(x_2^n; y_2^n)\\ &\le I(x_1^n; y_1^n, x_{1i}^n + n_{1i}^n, x_{1j}^n + n_{1j}^n) + I(x_2^n; y_2^n, x_{2i}^n + n_{2i}^n, x_{2k}^n + n_{2k}^n)\\ &= h(y_1^n, x_{1i}^n + n_{1i}^n, x_{1j}^n + n_{1j}^n) - h(y_1^n, x_{1i}^n + n_{1i}^n, x_{1j}^n + n_{1j}^n \mid x_1^n)\\ &\quad + h(y_2^n, x_{2i}^n + n_{2i}^n, x_{2k}^n + n_{2k}^n) - h(y_2^n, x_{2i}^n + n_{2i}^n, x_{2k}^n + n_{2k}^n \mid x_2^n). \end{aligned} \qquad (52)$$
In (52), we provide the side information $x_{1i}^n + n_{1i}^n$ and $x_{1j}^n + n_{1j}^n$ to receiver one, and $x_{2i}^n + n_{2i}^n$ and $x_{2k}^n + n_{2k}^n$ to receiver two, respectively. For the first $m_1$ sub-channels, which have two-sided interference, both receivers have side information. For the sub-channels which have one-sided interference, only the receivers suffering from interference have the corresponding side information. For the sub-channels without interference, no side information is given. For the first term of (52), we have
$$\begin{aligned} h(y_1^n, x_{1i}^n + n_{1i}^n, x_{1j}^n + n_{1j}^n) &= h(y_{1i}^n, y_{1j}^n, y_{1k}^n, y_{1r}^n, x_{1i}^n + n_{1i}^n, x_{1j}^n + n_{1j}^n)\\ &\le h(y_{1k}^n, x_{1i}^n + n_{1i}^n) + h(y_{1i}^n \mid x_{1i}^n + n_{1i}^n) + h(y_{1j}^n, x_{1j}^n + n_{1j}^n) + h(y_{1r}^n)\\ &\le h(y_{1k}^n, x_{1i}^n + n_{1i}^n) + n\sum_i h(\hat Y_{1i}^* \mid \hat X_{1i}^* + N_{1i}) + n\sum_j h(\hat Y_{1j}^*, \hat X_{1j}^* + N_{1j}) + n\sum_r h(\hat Y_{1r}^*), \end{aligned} \qquad (53)$$
where the first inequality follows by the chain rule and the fact that conditioning does not increase entropy, and the second inequality is from Lemma 4. For the fourth term of (52), we have
$$\begin{aligned} -h(y_2^n, x_{2i}^n + n_{2i}^n, x_{2k}^n + n_{2k}^n \mid x_2^n) &= -h(y_{2i}^n, y_{2j}^n, y_{2k}^n, y_{2r}^n, x_{2i}^n + n_{2i}^n, x_{2k}^n + n_{2k}^n \mid x_{2i}^n, x_{2j}^n, x_{2k}^n, x_{2r}^n)\\ &= -h(B_i x_{1i}^n + z_{2i}^n, z_{2j}^n, B_k x_{1k}^n + z_{2k}^n, z_{2r}^n, n_{2i}^n, n_{2k}^n)\\ &= -h(B_i x_{1i}^n + z_{2i}^n, B_k x_{1k}^n + z_{2k}^n, n_{2i}^n, n_{2k}^n) - h(z_{2j}^n) - h(z_{2r}^n)\\ &= -h(B_i x_{1i}^n + z_{2i}^n, B_k x_{1k}^n + z_{2k}^n \mid n_{2i}^n, n_{2k}^n) - h(n_{2i}^n) - h(n_{2k}^n) - h(z_{2j}^n) - h(z_{2r}^n)\\ &= -h(B_i x_{1i}^n + z_{2i}^n, B_k x_{1k}^n + z_{2k}^n \mid n_{2i}^n, n_{2k}^n) - n\sum_i h(N_{2i}) - n\sum_k h(N_{2k}) - n\sum_j h(Z_{2j}) - n\sum_r h(Z_{2r}), \end{aligned} \qquad (54)$$
where the third equality holds since $z_{2j}^n$ and $z_{2r}^n$ are independent of all other variables. Combining the first terms of (53) and (54), we have
$$\begin{aligned} &h(y_{1k}^n, x_{1i}^n + n_{1i}^n) - h(B_i x_{1i}^n + z_{2i}^n, B_k x_{1k}^n + z_{2k}^n \mid n_{2i}^n, n_{2k}^n)\\ &\overset{(a)}{=} h(x_{1i}^n + n_{1i}^n, C_k x_{1k}^n + z_{1k}^n) - h(B_i x_{1i}^n + w_{2i}^n, B_k x_{1k}^n + w_{2k}^n)\\ &= h(x_{1i}^n + n_{1i}^n, x_{1k}^n + C_k^{-1} z_{1k}^n) - h(x_{1i}^n + B_i^{-1} w_{2i}^n, x_{1k}^n + B_k^{-1} w_{2k}^n) + n\log\frac{|C_k|}{|B_i|\cdot|B_k|}\\ &\overset{(b)}{=} n\log\frac{|C_k|}{|B_i|\cdot|B_k|}\\ &\overset{(c)}{=} n\sum_i h(\hat X_{1i}^* + N_{1i}) + n\sum_k h\Big(\hat X_{1k}^* + \tfrac{1}{\sqrt{c_k}} Z_{1k}\Big) - n\sum_i h\Big(\hat X_{1i}^* + \tfrac{1}{\sqrt{b_i}} W_{2i}\Big) - n\sum_k h\Big(\hat X_{1k}^* + \tfrac{1}{\sqrt{b_k}} W_{2k}\Big)\\ &\qquad + n\sum_k \log\sqrt{c_k} - n\sum_i \log\sqrt{b_i} - n\sum_k \log\sqrt{b_k}\\ &= n\sum_i h(\hat X_{1i}^* + N_{1i}) + n\sum_k h(\sqrt{c_k}\,\hat X_{1k}^* + Z_{1k}) - n\sum_i h(\sqrt{b_i}\,\hat X_{1i}^* + Z_{2i} \mid N_{2i}) - n\sum_k h(\sqrt{b_k}\,\hat X_{1k}^* + Z_{2k} \mid N_{2k}), \end{aligned} \qquad (55)$$
where in (a) we let $w_{2i}$ and $w_{2k}$ be independent Gaussian vectors with
$$\mathrm{Cov}(w_{2i}) = \mathrm{Cov}(z_{2i} \mid n_{2i}) = I_i - \mathrm{diag}(\rho_{2i}^2), \qquad \mathrm{Cov}(w_{2k}) = \mathrm{Cov}(z_{2k} \mid n_{2k}) = I_k - \mathrm{diag}(\rho_{2k}^2). \qquad (56)$$
The stacked vectors $w_{2i}^n$ and $w_{2k}^n$ each have independent and identically distributed (i.i.d.) entries. Equality (b) holds because of (45) and (49), which imply
$$\mathrm{Cov}(n_{1i}) = \mathrm{Cov}(B_i^{-1} w_{2i}), \qquad \mathrm{Cov}(C_k^{-1} z_{1k}) = \mathrm{Cov}(B_k^{-1} w_{2k}),$$
and hence
$$h(x_{1i}^n + n_{1i}^n, x_{1k}^n + C_k^{-1} z_{1k}^n) - h(x_{1i}^n + B_i^{-1} w_{2i}^n, x_{1k}^n + B_k^{-1} w_{2k}^n) = 0,$$
regardless of the distribution of $x_{1i}^n$ and $x_{1k}^n$. Equality (c) holds also because of (45) and (49), which imply
$$h(\hat X_{1i}^* + N_{1i}) - h\Big(\hat X_{1i}^* + \tfrac{1}{\sqrt{b_i}} W_{2i}\Big) = 0, \qquad h\Big(\hat X_{1k}^* + \tfrac{1}{\sqrt{c_k}} Z_{1k}\Big) - h\Big(\hat X_{1k}^* + \tfrac{1}{\sqrt{b_k}} W_{2k}\Big) = 0,$$
regardless of the distributions of $\hat X_{1i}$ and $\hat X_{1k}$.
Combining (53) and (54) and using (55), we have
$$\begin{aligned} &h(y_1^n, x_{1i}^n + n_{1i}^n, x_{1j}^n + n_{1j}^n) - h(y_2^n, x_{2i}^n + n_{2i}^n, x_{2k}^n + n_{2k}^n \mid x_2^n)\\ &\le n\sum_i h(\hat X_{1i}^* + N_{1i}) + n\sum_k h(\sqrt{c_k}\,\hat X_{1k}^* + Z_{1k}) + n\sum_i h(\hat Y_{1i}^* \mid \hat X_{1i}^* + N_{1i}) + n\sum_r h(\hat Y_{1r}^*)\\ &\quad + n\sum_j h(\hat Y_{1j}^*, \hat X_{1j}^* + N_{1j}) - n\sum_i h(\sqrt{b_i}\,\hat X_{1i}^* + Z_{2i} \mid N_{2i}) - n\sum_k h(\sqrt{b_k}\,\hat X_{1k}^* + Z_{2k} \mid N_{2k})\\ &\quad - n\sum_i h(N_{2i}) - n\sum_k h(N_{2k}) - n\sum_j h(Z_{2j}) - n\sum_r h(Z_{2r}). \end{aligned} \qquad (57)$$
Similarly, because of (44) and (47) we have
$$\begin{aligned} &h(y_2^n, x_{2i}^n + n_{2i}^n, x_{2k}^n + n_{2k}^n) - h(y_1^n, x_{1i}^n + n_{1i}^n, x_{1j}^n + n_{1j}^n \mid x_1^n)\\ &\le n\sum_i h(\hat X_{2i}^* + N_{2i}) + n\sum_j h(\sqrt{d_j}\,\hat X_{2j}^* + Z_{2j}) + n\sum_i h(\hat Y_{2i}^* \mid \hat X_{2i}^* + N_{2i}) + n\sum_r h(\hat Y_{2r}^*)\\ &\quad + n\sum_k h(\hat Y_{2k}^*, \hat X_{2k}^* + N_{2k}) - n\sum_i h(\sqrt{a_i}\,\hat X_{2i}^* + Z_{1i} \mid N_{1i}) - n\sum_j h(\sqrt{a_j}\,\hat X_{2j}^* + Z_{1j} \mid N_{1j})\\ &\quad - n\sum_i h(N_{1i}) - n\sum_j h(N_{1j}) - n\sum_k h(Z_{1k}) - n\sum_r h(Z_{1r}). \end{aligned} \qquad (58)$$
Substituting (57) and (58) into (52), we have
$$\begin{aligned} R_1 + R_2 - \epsilon &\le \sum_i \Big[ h(\hat X_{1i}^* + N_{1i}) + h(\hat Y_{1i}^* \mid \hat X_{1i}^* + N_{1i}) - h(N_{1i}) - h(\sqrt{a_i}\,\hat X_{2i}^* + Z_{1i} \mid N_{1i})\\ &\qquad\quad + h(\hat X_{2i}^* + N_{2i}) + h(\hat Y_{2i}^* \mid \hat X_{2i}^* + N_{2i}) - h(N_{2i}) - h(\sqrt{b_i}\,\hat X_{1i}^* + Z_{2i} \mid N_{2i}) \Big]\\ &\quad + \sum_j \Big[ h(\hat Y_{1j}^*, \hat X_{1j}^* + N_{1j}) - h(N_{1j}) - h(\sqrt{a_j}\,\hat X_{2j}^* + Z_{1j} \mid N_{1j}) + h(\sqrt{d_j}\,\hat X_{2j}^* + Z_{2j}) - h(Z_{2j}) \Big]\\ &\quad + \sum_k \Big[ h(\hat Y_{2k}^*, \hat X_{2k}^* + N_{2k}) - h(N_{2k}) - h(\sqrt{b_k}\,\hat X_{1k}^* + Z_{2k} \mid N_{2k}) + h(\sqrt{c_k}\,\hat X_{1k}^* + Z_{1k}) - h(Z_{1k}) \Big]\\ &\quad + \sum_r \Big[ h(\hat Y_{1r}^*) - h(Z_{1r}) \Big] + \sum_r \Big[ h(\hat Y_{2r}^*) - h(Z_{2r}) \Big]\\ &= \sum_i f_i(P_i, Q_i) + \sum_j f_j(P_j, Q_j) + \sum_k f_k(P_k, Q_k) + \sum_r f_r(P_r, Q_r)\\ &= \sum_{l=1}^m f_l(P_l, Q_l), \end{aligned} \qquad (59)$$
where
$$f_i(P_i, Q_i) = \frac{1}{2}\log\left[\frac{c_i P_i}{1 + a_i Q_i} + 1 + \left(\sqrt{c_i} - \frac{\rho_{1i}}{\sigma_{1i}}\right)^2 \frac{(1 + a_i Q_i) P_i}{1 + a_i Q_i - \rho_{1i}^2}\right] + \frac{1}{2}\log\left[\frac{d_i Q_i}{1 + b_i P_i} + 1 + \left(\sqrt{d_i} - \frac{\rho_{2i}}{\sigma_{2i}}\right)^2 \frac{(1 + b_i P_i) Q_i}{1 + b_i P_i - \rho_{2i}^2}\right], \qquad (60)$$
$$f_j(P_j, Q_j) = \frac{1}{2}\log\left[\frac{c_j P_j}{1 + a_j Q_j} + 1 + \left(\sqrt{c_j} - \frac{\rho_{1j}}{\sigma_{1j}}\right)^2 \frac{(1 + a_j Q_j) P_j}{1 + a_j Q_j - \rho_{1j}^2}\right] + \frac{1}{2}\log(1 + d_j Q_j), \qquad (61)$$
$$f_k(P_k, Q_k) = \frac{1}{2}\log\left[\frac{d_k Q_k}{1 + b_k P_k} + 1 + \left(\sqrt{d_k} - \frac{\rho_{2k}}{\sigma_{2k}}\right)^2 \frac{(1 + b_k P_k) Q_k}{1 + b_k P_k - \rho_{2k}^2}\right] + \frac{1}{2}\log(1 + c_k P_k), \qquad (62)$$
$$f_r(P_r, Q_r) = \frac{1}{2}\log(1 + P_r) + \frac{1}{2}\log(1 + Q_r). \qquad (63)$$
Next we will show that the $f_l(P_l, Q_l)$, $l = 1, \dots, m$, are all concave and non-decreasing functions of $(P_l, Q_l)$. From (59) and the fact that
$$h(\hat X_{1i}^* + N_{1i}) - h(\sqrt{b_i}\,\hat X_{1i}^* + Z_{2i} \mid N_{2i}) = -\log\sqrt{b_i}, \qquad h(\hat X_{2i}^* + N_{2i}) - h(\sqrt{a_i}\,\hat X_{2i}^* + Z_{1i} \mid N_{1i}) = -\log\sqrt{a_i},$$
we have
$$f_i(P_i, Q_i) = h(\hat Y_{1i}^* \mid \hat X_{1i}^* + N_{1i}) - h(N_{1i}) + h(\hat Y_{2i}^* \mid \hat X_{2i}^* + N_{2i}) - h(N_{2i}) - \log\sqrt{a_i b_i}. \qquad (64)$$
Define Gaussian variables $\hat X_{1i,t}^*$ and $\hat X_{2i,t}^*$ independent of $N_{1i}$, and $\hat Y_{1i,t}^* = \sqrt{c_i}\,\hat X_{1i,t}^* + \sqrt{a_i}\,\hat X_{2i,t}^* + Z_{1i}$, $t = 1, \dots, s$, where $s$ is an integer. Let $\mathrm{Var}(\hat X_{1i,t}^*) = P_{i,t}$, $\mathrm{Var}(\hat X_{2i,t}^*) = Q_{i,t}$ and
$$\sum_{t=1}^s \lambda_t\, \mathrm{Var}(\hat X_{1i,t}^*) = P_i = \mathrm{Var}(\hat X_{1i}^*), \qquad (65)$$
$$\sum_{t=1}^s \lambda_t\, \mathrm{Var}(\hat X_{2i,t}^*) = Q_i = \mathrm{Var}(\hat X_{2i}^*), \qquad (66)$$
where $\{\lambda_t\}$ is a non-negative sequence with $\sum_{t=1}^s \lambda_t = 1$. Then we have
$$\mathrm{Cov}\begin{pmatrix} \hat Y_{1i}^*\\ \hat X_{1i}^* + N_{1i} \end{pmatrix} = \sum_{t=1}^s \lambda_t\, \mathrm{Cov}\begin{pmatrix} \hat Y_{1i,t}^*\\ \hat X_{1i,t}^* + N_{1i} \end{pmatrix}. \qquad (67)$$
From Lemma 4 we have
$$h(\hat Y_{1i}^* \mid \hat X_{1i}^* + N_{1i}) \ge \sum_{t=1}^s \lambda_t\, h(\hat Y_{1i,t}^* \mid \hat X_{1i,t}^* + N_{1i}). \qquad (68)$$
Therefore, $h(\hat Y_{1i}^* \mid \hat X_{1i}^* + N_{1i})$ is a concave function of $(P_i, Q_i)$. For the same reason, $h(\hat Y_{2i}^* \mid \hat X_{2i}^* + N_{2i})$ is also a concave function of $(P_i, Q_i)$. Therefore $f_i(P_i, Q_i)$ is a concave function of $(P_i, Q_i)$. Similar steps show that $f_j$, $f_k$, and $f_r$ are concave in $(P_l, Q_l)$.

To show that $f_i(P_i, Q_i)$ is a non-decreasing function of $(P_i, Q_i)$, we let $\bar X_{1i}$, $\bar X_{2i}$, $U_1$ and $U_2$ be four independent Gaussian variables with
$$\hat X_{1i}^* = \bar X_{1i} + U_1, \qquad \hat X_{2i}^* = \bar X_{2i} + U_2.$$
Then
$$\begin{aligned} h(\hat Y_{1i}^* \mid \hat X_{1i}^* + N_{1i}) &= h(\sqrt{c_i}\,\hat X_{1i}^* + \sqrt{a_i}\,\hat X_{2i}^* + Z_{1i} \mid \hat X_{1i}^* + N_{1i})\\ &\ge h(\sqrt{c_i}\,\hat X_{1i}^* + \sqrt{a_i}\,\hat X_{2i}^* + Z_{1i} \mid \hat X_{1i}^* + N_{1i}, U_1, U_2)\\ &= h(\sqrt{c_i}\,\bar X_{1i} + \sqrt{a_i}\,\bar X_{2i} + Z_{1i} \mid \bar X_{1i} + N_{1i}). \end{aligned}$$
Therefore, $h(\hat Y_{1i}^* \mid \hat X_{1i}^* + N_{1i})$ is a non-decreasing function of $(P_i, Q_i)$. For the same reason, $h(\hat Y_{2i}^* \mid \hat X_{2i}^* + N_{2i})$
is also a non-decreasing function of $(P_i, Q_i)$. Therefore, $f_i$ is a non-decreasing function of $(P_i, Q_i)$. From (50) and (51) we have
$$\sum_{l=1}^m f_l(P_l^*, Q_l^*) = \frac{1}{2}\sum_{l=1}^m \left[\log\left(1 + \frac{c_l P_l^*}{1 + a_l Q_l^*}\right) + \log\left(1 + \frac{d_l Q_l^*}{1 + b_l P_l^*}\right)\right] = \sum_{l=1}^m C_l(P_l^*, Q_l^*). \qquad (69)$$
Next we will show that $\sum_{l=1}^m f_l(P_l^*, Q_l^*) \ge \sum_{l=1}^m f_l(P_l, Q_l)$ for any $P_l, Q_l$ that satisfy $\sum_{l=1}^m P_l = P$ and $\sum_{l=1}^m Q_l = Q$. Using (50) and (51), we have
$$\frac{\partial f_l(p,q)}{\partial p}\bigg|_{p = P_l^*,\ q = Q_l^*} = \frac{\partial C_l(p,q)}{\partial p}\bigg|_{p = P_l^*,\ q = Q_l^*}, \qquad (70)$$
$$\frac{\partial f_l(p,q)}{\partial q}\bigg|_{p = P_l^*,\ q = Q_l^*} = \frac{\partial C_l(p,q)}{\partial q}\bigg|_{p = P_l^*,\ q = Q_l^*}, \qquad (71)$$
for all $l = 1, \dots, m$. Therefore, $f_l$ and $C_l$ have the same partial derivatives at the point $(P_l^*, Q_l^*)$. From the Appendix, the subgradient of a function is determined by the derivatives; therefore, $f_l$ and $C_l$ have the same subgradient at the point $(P_l^*, Q_l^*)$ for each $l$. From (35), we have
$$(k_p^*, k_q^*) \in \partial f_l(P_l^*, Q_l^*), \qquad l = 1, \dots, m. \qquad (72)$$
Therefore, from Lemma 3 we have
$$\sum_{l=1}^m f_l(P_l^*, Q_l^*) \ge \sum_{l=1}^m f_l(P_l, Q_l), \qquad (73)$$
if $\sum_{l=1}^m P_l = P$ and $\sum_{l=1}^m Q_l = Q$. If $\sum_{l=1}^m P_l \le P$ and $\sum_{l=1}^m Q_l \le Q$, there exist two non-negative sequences $\{s_1, \dots, s_m\}$ and $\{t_1, \dots, t_m\}$ such that $\sum_{l=1}^m (P_l + s_l) = P$ and $\sum_{l=1}^m (Q_l + t_l) = Q$. Therefore we have from Lemma 3
$$\sum_{l=1}^m f_l(P_l^*, Q_l^*) \ge \sum_{l=1}^m f_l(P_l + s_l, Q_l + t_l). \qquad (74)$$
Since the $f_l(P_l, Q_l)$, $l = 1, \dots, m$, are all non-decreasing functions of $P_l$ and $Q_l$, we have
$$\sum_{l=1}^m f_l(P_l + s_l, Q_l + t_l) \ge \sum_{l=1}^m f_l(P_l, Q_l). \qquad (75)$$
Combining (74) and (75) we have
$$\sum_{l=1}^m f_l(P_l^*, Q_l^*) \ge \sum_{l=1}^m f_l(P_l, Q_l), \qquad (76)$$
for any $\sum_{l=1}^m P_l \le P$ and $\sum_{l=1}^m Q_l \le Q$. Therefore we have from (59) and (69) that
$$R_1 + R_2 - \epsilon \le \frac{1}{2}\sum_{l=1}^m \left[\log\left(1 + \frac{c_l P_l^*}{1 + a_l Q_l^*}\right) + \log\left(1 + \frac{d_l Q_l^*}{1 + b_l P_l^*}\right)\right]. \qquad (77)$$
The above sum rate is achievable by independent transmission across sub-channels and single-user detection in each sub-channel. Therefore (77) is the sum-rate capacity of the PGIC if the power constraints satisfy (18).

Remark 7: The main idea of the proof can be summarized as follows. We first assume an arbitrary power allocation $(P_l, Q_l)$, $l = 1, \dots, m$. Then we show that the sum rate for this power allocation is upper bounded by $\sum_{l=1}^m f_l(P_l, Q_l)$. This upper bound decomposes the sum-rate bound into the sum of the individual sub-channels' sum-rate capacity upper bounds. By Lemma 3, the maximum of $\sum_{l=1}^m f_l(P_l, Q_l)$ is $\sum_{l=1}^m C_l(P_l^*, Q_l^*)$, which is an achievable sum rate for a special power allocation. To ease the proof, the upper bound $f_l$ cannot be arbitrarily chosen. Compared to the sum-rate capacity $C_l$ for sub-channel $l$, $f_l$ has the following properties:
• $f_l$ is concave over the powers;
• $f_l$ is tight at the optimal point $(P_l^*, Q_l^*)$;
• $f_l$ and $C_l$ have the same subdifferentials at the optimal point $(P_l^*, Q_l^*)$.
Therefore we choose the noise vectors $n_1$ and $n_2$ such that the above conditions are satisfied. Fig. 4 illustrates such an upper bound.
Fig. 4. An illustration of the upper bound suitable for the proof of Theorem 1.
V. NUMERICAL EXAMPLES
Figs. 5 and 6 are examples for a symmetric PGIC with direct link gains $c_i = d_i = 1$. Fig. 5 shows the ratio of the maximum noisy-interference power constraint $\bar P$ for a two-channel symmetric PGIC, with $a$ varying from 0 to 0.25, to the sum $S_1 + S_2$ of the maximum noisy-interference power constraints of the individual sub-channels, where $S_i = \frac{\sqrt{a_i} - 2a_i}{2a_i^2}$, $i = 1, 2$. For $a_1 = a_2$ (i.e., the two sub-channels are identical) the ratio is 1 and $S_1 = S_2 = \bar P/2$, and when $a_1$ and $a_2$ are far apart, $\bar P \ll S_1 + S_2$. Thus, for $a_1 = a_2$ we achieve capacity despite the fact that we do not know the capacity if all power is placed in one sub-channel. Again, this is where Lemma 2 is useful.

Fig. 6 shows the maximum noisy-interference power constraint $\bar P$ for a two-channel symmetric PGIC with $a_1$ varying in $[0, \frac{1}{4}]$ and $a_2 = \frac{1}{8}$. When $a_1 = \frac{1}{4}$, the first sub-channel no longer has noisy interference, and therefore the maximum noisy-interference power constraint is $\bar P = 0$. Fig. 6 also shows that $\bar P$ decreases with $a_1$. The discontinuity at $a_1 = \frac{1}{8}$ is because the second sub-channel becomes the worse channel.
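The quantity $S_i$ above is easy to tabulate. A small sketch (for $c = d = 1$, consistent with (4); the sample values of $a$ are illustrative) confirms that $S$ decreases as the interference gain grows and vanishes at $a = 1/4$, matching the trends discussed for Figs. 5 and 6:

```python
import math

def S(a):
    """Maximum symmetric noisy-interference power of (4) for c = d = 1:
    S(a) = (sqrt(a) - 2a) / (2 a^2), positive only when a < 1/4."""
    return (math.sqrt(a) - 2 * a) / (2 * a ** 2)

samples = [0.05, 0.1, 0.125, 0.2, 0.25]
vals = [S(a) for a in samples]
# S shrinks as a grows, and the noisy-interference regime closes at a = 1/4
assert all(v1 > v2 for v1, v2 in zip(vals, vals[1:]))
assert abs(S(0.25)) < 1e-12
```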
Fig. 7 shows the regions $\mathcal{B}_1$ and $\mathcal{B}_2$ for a PGIC with two sub-channels. In this case $\mathcal{B}_1 \subset \mathcal{B}_2$. In Fig. 8, $O_1CMND$ is the noisy-interference power region for this PGIC. Points $A$ and $E$ coincide in Fig. 8 since in Fig. 7 $A, E \in \mathcal{B}_1^{(2)} \cap \mathcal{B}_2^{(4)}$, and thus the power allocation is $Q_1^* = P_2^* = Q_2^* = 0$ and $P_1^* > 0$. Similarly, points $C$ and $T$ coincide in Fig. 8 since $C, T \in \mathcal{B}_1^{(2)} \cap \mathcal{B}_2^{(2)}$, and thus $Q_1^* = Q_2^* = 0$ and $P_1^* > 0$, $P_2^* > 0$. Similar arguments apply to points $B, F$ and $D, S$. By considering the relationship between the power regions and the respective subdifferentials, we can determine the activity of the two users in each sub-channel. This activity is summarized in Tab. I, where 0 indicates inactive (zero power) and + indicates active (positive power) for the user in the corresponding sub-channel. In the following we illustrate the regions in Figs. 7 and 8.
We first remind the reader that Bi^(1) corresponds to the case where both users are active; Bi^(2) corresponds to the case where only user 1 is active; Bi^(3) corresponds to the case where only user 2 is active; and Bi^(4) corresponds to the case where both users are inactive in sub-channel i. Consider the following regions.
• The region O2MN in Tab. I denotes regions in both Figs. 7 and 8. In Fig. 7, it is the intersection of B1^(1) and B2^(1). Thus, to achieve the sum-rate capacity both users are active in both sub-channels. The corresponding power region O2MN is shown in Fig. 8.
• Region O1EO2F is the intersection of B1^(1) and B2^(4). So both users are active only in sub-channel 1. In this case, both of the power constraints P and Q are small, so that only the better sub-channel, which produces a larger sum-rate capacity than the other, is allocated power. Therefore, this two-channel PGIC behaves as a GIC.
• Region O1AE is the intersection of B1^(2) and B2^(4). So user 1 is active in sub-channel 1 and user 2 is inactive in both sub-channels. The power region O1AE is shown in Fig. 8. The overall power for user 2 is Q = 0.
• Similar to the above case, region AETC of Fig. 7 is the intersection of B1^(2) and B2^(2). The optimal power allocation makes user 1 active in both sub-channels while user 2 is inactive in both sub-channels. In Fig. 8, the overall power for user 2 is also Q = 0. Actually, regions O1AE and AETC are examples of single-user parallel Gaussian channels whose optimal power allocation is the waterfilling scheme. In the former case, the power constraint is so small that only the sub-channel with the larger direct link channel gain (sub-channel 1) is allocated power. In the latter case, the power constraint increases to a critical level (point A in Fig. 8) so that both sub-channels are allocated power.
• Region ETMO2 is the intersection of B1^(1) and B2^(2). User 1 is active in both sub-channels while user 2 is active only in sub-channel 1, which has the larger direct channel gain for user 2. In this case, the two-channel PGIC with two-sided interference works like a PGIC with one sub-channel having two-sided interference and the other having one-sided interference.
• Regions O1BF, BFSD and FO2NS are counterparts of regions O1AE, AETC and ETMO2, respectively, obtained by swapping the roles of the two users.
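The waterfilling allocation invoked above for regions O1AE and AETC can be sketched as follows. This is an illustrative helper (bisection on the water level), not code from the paper; it reproduces the two behaviors described in the bullets:

```python
def waterfill(gains, P, tol=1e-12):
    """Water-filling for single-user parallel Gaussian channels:
    maximize sum_i 0.5*log(1 + g_i * p_i) subject to sum_i p_i = P.
    Bisect on the water level mu, with p_i = max(mu - 1/g_i, 0)."""
    lo, hi = 0.0, P + 1.0 / min(gains)
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)  # candidate water level
        if sum(max(mu - 1.0 / g, 0.0) for g in gains) > P:
            hi = mu
        else:
            lo = mu
    return [max(lo - 1.0 / g, 0.0) for g in gains]

# Direct gains of the two sub-channels in Fig. 7's example: c1 = 4, c2 = 1.2.
p_small = waterfill([4.0, 1.2], 0.05)  # small P: only sub-channel 1 is used
p_large = waterfill([4.0, 1.2], 1.0)   # larger P: both sub-channels are used
```

For these gains the critical total power is 1/1.2 − 1/4 ≈ 0.58: below it only the stronger sub-channel is allocated power (region O1AE), above it both are (region AETC).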
Also plotted in Fig. 8 are the noisy-interference power regions for the individual sub-channels, where sub-channel 2 has a larger noisy-interference power region than that of sub-channel 1. In this case, the overall noisy-interference power region is larger than that of either of these two sub-channels.
Fig. 5. Ratio P̄/(S1 + S2) of the total power constraint to the sum of the individual sub-channel power constraints for different channel gains a1, a2 ∈ [0, 0.25].
VI. CONCLUSION
Based on the concavity of the sum-rate capacity in the power constraints, we have shown that the noisy-interference sum-rate capacity of a PGIC can be achieved by independent transmission across sub-channels and treating interference as noise in each sub-channel. The optimal power allocations have
Fig. 6. The maximum noisy-interference power constraint P̄ versus a1, with a2 = 1/8.
TABLE I
POWER CONSTRAINTS AND THE ACTIVENESS OF USERS

Region       | O1A(E)          | A(E)(T)C        | ETMO2           | O1EO2F
Intersection | B1^(2) ∩ B2^(4) | B1^(2) ∩ B2^(2) | B1^(1) ∩ B2^(2) | B1^(1) ∩ B2^(4)
(P1*, Q1*)   | (+, 0)          | (+, 0)          | (+, +)          | (+, +)
(P2*, Q2*)   | (0, 0)          | (+, 0)          | (+, 0)          | (0, 0)

Region       | O1B(F)          | B(F)(S)D        | FO2NS           | O2MN
Intersection | B1^(3) ∩ B2^(4) | B1^(3) ∩ B2^(3) | B1^(1) ∩ B2^(3) | B1^(1) ∩ B2^(1)
(P1*, Q1*)   | (0, +)          | (0, +)          | (+, +)          | (+, +)
(P2*, Q2*)   | (0, 0)          | (0, +)          | (0, +)          | (+, +)
27
C
2
A
O
1
1.8
1.6
E
1.2
k
q
1.4
T
1
0.8
O2 0.6
F B
M sub−channel 1
0.4
N
D
S sub−channel 2
0.2 0.2
0.4
0.6
0.8
1
1.2
1.4
1.6
1.8
2
kp Fig. 7.
B1 and B2 for the parallel Gaussian interference channel with a1 = b1 = 0.6, c1 = d1 = 4, a2 = b2 = 0.24, c2 =
d2 = 1.2.
the property that the sub-channel sum-rate capacity curves have parallel supporting hyperplanes at these powers. The methods introduced in this paper can also be used to obtain the optimal power allocation and capacity regions of parallel Gaussian multiple access and broadcast channels [22].

APPENDIX: SUBDIFFERENTIAL OF Ci(p, q)
The subdifferential ∂Ci(p, q) depends on the location of the point (p, q). If the sub-channel is a two-sided GIC, then from Lemma 1, Ai is defined in (3). We derive the subdifferentials as outlined in (78)-(88) below and present the evaluations in (89) below.
• If (p, q) is an interior point of Ai, then Ci(p, q) is differentiable. From the concavity of Ci(p, q), ∇Ci(p, q)
Fig. 8. Noisy-interference power region for the parallel Gaussian interference channel with a1 = b1 = 0.6, c1 = d1 = 4, a2 = b2 = 0.24, c2 = d2 = 1.2.
satisfies (8), and the subdifferential ∂Ci(p, q) consists of the unique vector [kp, kq]^T = ∇Ci(p, q), where

kp = ∂Ci(p, q)/∂p,   (78)
kq = ∂Ci(p, q)/∂q.   (79)
• If √(ai ci)(1 + bi p) + √(bi di)(1 + ai q) = √(ci di) with p > 0, q > 0, we can compute only one-sided partial derivatives since Ci(p, q) is unknown on the other side. The subdifferential includes a vector [kp, kq]^T where

kp = lim_{δ↑0} [Ci(p + δ, q) − Ci(p, q)]/δ,   (80)
kq = lim_{δ↑0} [Ci(p, q + δ) − Ci(p, q)]/δ,   (81)

where lim_{δ↑0} denotes lim_{δ≤0, δ→0}, and similarly lim_{δ↓0} denotes lim_{δ≥0, δ→0}.
• If q = 0, p > 0, we have only a one-sided partial derivative in q. From the concavity of Ci(p, q), all the vectors [kp, kq]^T satisfying the following conditions are subgradients:

kp = ∂Ci(p, 0)/∂p,  0 < p ≤ (√(ci di) − √(ai ci) − √(bi di)) / (bi √(ai ci)),   (82)
kq = lim_{δ↓0} [Ci(p, δ) − Ci(p, 0)]/δ.   (83)

If kq > lim_{δ↓0} [Ci(p, δ) − Ci(p, 0)]/δ, the vector [kp, kq]^T satisfies (8) for all points [p, q]^T ∈ Ai, and thus is also a subgradient. Therefore we can replace (83) with

kq ≥ lim_{δ↓0} [Ci(p, δ) − Ci(p, 0)]/δ.   (84)

• Similarly, if p = 0, q > 0, the subdifferential is the set of [kp, kq]^T with

kp ≥ lim_{δ↓0} [Ci(δ, q) − Ci(0, q)]/δ,   (85)
kq = ∂Ci(0, q)/∂q,  0 < q ≤ (√(ci di) − √(ai ci) − √(bi di)) / (ai √(bi di)).   (86)

• If p = q = 0, the subdifferential is the set of [kp, kq]^T with

kp ≥ lim_{δ↓0} [Ci(δ, 0) − Ci(0, 0)]/δ,   (87)
kq ≥ lim_{δ↓0} [Ci(0, δ) − Ci(0, 0)]/δ.   (88)

The evaluations in (89) use the partition Ai = ∪_{l=1}^{4} Ai^(l), where

Ai^(1) = {(p, q) | p > 0, q > 0},   (90)
Ai^(2) = {(p, q) | 0 < p ≤ (√(ci di) − √(ai ci) − √(bi di)) / (bi √(ai ci)), q = 0},   (91)
Ai^(3) = {(p, q) | p = 0, 0 < q ≤ (√(ci di) − √(ai ci) − √(bi di)) / (ai √(bi di))},   (92)
Ai^(4) = {(p, q) | p = q = 0},   (93)

and

ĉ = max_{i=1,…,m} ci,   (94)
d̂ = max_{i=1,…,m} di.   (95)

For the subgradient [kp, kq]^T, when p = 0 or q = 0, kp or kq varies from some constant to infinity. In (89) we upper bound kp and kq by ĉ/2 and d̂/2, respectively, for convenience and without loss of generality; the main reason is that we are interested only in ∪_{i=1}^{m} Bi. To relate the mapping of Ai^(j) to different regions of Bi, we further define

Bi^(j) = ∪_{[p,q]^T ∈ Ai^(j)} ∂Ci(p, q),   j = 1, …, 4.   (96)
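The partition (90)-(93) is easy to evaluate mechanically. Below is an illustrative sketch (function and variable names are ours, not from the paper) that classifies a point (p, q) for a two-sided sub-channel with parameters (ai, bi, ci, di):

```python
import math

def region(p, q, a, b, c, d):
    """Classify (p, q) into A_i^(1)..A_i^(4) of (90)-(93) for a
    two-sided GIC with (a, b, c, d) = (ai, bi, ci, di)."""
    num = math.sqrt(c * d) - math.sqrt(a * c) - math.sqrt(b * d)
    edge_p = num / (b * math.sqrt(a * c))  # right end of the q = 0 segment
    edge_q = num / (a * math.sqrt(b * d))  # top end of the p = 0 segment
    if p > 0 and q > 0:
        return 1
    if q == 0 and 0 < p <= edge_p:
        return 2
    if p == 0 and 0 < q <= edge_q:
        return 3
    if p == 0 and q == 0:
        return 4
    return None  # outside the axis segments of the noisy-interference region

# Sub-channel 1 of Figs. 7-8: a1 = b1 = 0.6, c1 = d1 = 4.
print(region(0.0, 0.0, 0.6, 0.6, 4.0, 4.0))  # -> 4
```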
If the sub-channel has one-sided interference (bi = 0, 0 < ai < 1) or no interference (ai = bi = 0), then from Lemma 1 we have Ai = {p ≥ 0, q ≥ 0}. We similarly obtain ∂Ci(p, q) as follows:

∂Ci(p, q) =
  { (kp, kq) | kp = ci / (2(1 + ci p + ai q)),
               kq = (1/2) [ ai/(1 + ci p + ai q) − ai/(1 + ai q) + di/(1 + di q) ] },
      p > 0, q > 0,
  { (kp, kq) | kp = ci / (2(1 + ci p)),
               di/2 − ai ci p / (2(1 + ci p)) ≤ kq ≤ d̂/2 },
      p > 0, q = 0,
  { (kp, kq) | ci / (2(1 + ai q)) ≤ kp ≤ ĉ/2,
               kq = di / (2(1 + di q)) },
      p = 0, q > 0,
  { (kp, kq) | ci/2 ≤ kp ≤ ĉ/2,  di/2 ≤ kq ≤ d̂/2 },
      p = q = 0.   (97)
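The interior branch of (97) is the gradient of the one-sided sum-rate with interference treated as noise, Ci(p, q) = ½ log(1 + ci p/(1 + ai q)) + ½ log(1 + di q) (natural-log rates). A quick finite-difference sanity check, with illustrative names:

```python
import math

def C(p, q, a, c, d):
    """One-sided GIC sum-rate with interference treated as noise:
    C = 0.5*log(1 + c*p/(1 + a*q)) + 0.5*log(1 + d*q)."""
    return 0.5 * math.log(1 + c * p / (1 + a * q)) + 0.5 * math.log(1 + d * q)

def grad(p, q, a, c, d):
    """Interior branch of (97): kp and kq in closed form."""
    kp = c / (2 * (1 + c * p + a * q))
    kq = 0.5 * (a / (1 + c * p + a * q) - a / (1 + a * q) + d / (1 + d * q))
    return kp, kq

# Compare with finite differences at an interior point
# (gains of sub-channel 2 in Fig. 7: a = 0.24, c = d = 1.2).
p, q, a, c, d = 0.3, 0.2, 0.24, 1.2, 1.2
h = 1e-6
kp, kq = grad(p, q, a, c, d)
print(abs((C(p + h, q, a, c, d) - C(p, q, a, c, d)) / h - kp) < 1e-4)  # -> True
print(abs((C(p, q + h, a, c, d) - C(p, q, a, c, d)) / h - kq) < 1e-4)  # -> True
```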
REFERENCES

[1] S. T. Chung and J. M. Cioffi, "The capacity region of frequency-selective Gaussian interference channels under strong interference," IEEE Trans. Communications, vol. 55, pp. 1812-1821, Sept. 2007.
[2] W. Yu, G. Ginis, and J. M. Cioffi, "Distributed multiuser power control for digital subscriber lines," IEEE Journal on Selected Areas in Communications, vol. 20, pp. 1105-1115, 2002.
[3] K. B. Song, S. T. Chung, G. Ginis, and J. M. Cioffi, "Dynamic spectrum management for next-generation DSL systems," IEEE Communications Magazine, vol. 40, pp. 101-109, 2002.
[4] R. Cendrillon, W. Yu, M. Moonen, J. Verlinden, and T. Bostoen, "Optimal multi-user spectrum management for digital subscriber lines," IEEE Trans. Communications, vol. 54, pp. 922-933, 2006.
[5] S. Hayashi and Z.-Q. Luo, "Spectrum management for interference-limited multiuser communication systems," submitted to IEEE Trans. Inf. Theory.
[6] X. Shang, G. Kramer, and B. Chen, "A new outer bound and the noisy-interference sum-rate capacity for Gaussian interference channels," IEEE Trans. Inf. Theory, vol. 55, no. 2, pp. 689-699, Feb. 2009.
[7] A. S. Motahari and A. K. Khandani, "Capacity bounds for the Gaussian interference channel," submitted to IEEE Trans. Inf. Theory. http://arxiv.org/abs/0801.1306, Jan. 2008.
[8] V. S. Annapureddy and V. Veeravalli, "Gaussian interference networks: Sum capacity in the low interference regime and new outer bounds on the capacity region," submitted to IEEE Trans. Inf. Theory. http://arxiv.org/abs/0802.3495, Feb. 2008.
[9] X. Shang, B. Chen, G. Kramer, and H. V. Poor, "On the capacity of MIMO interference channels," in Proc. of the Forty-sixth Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, Sep. 2008.
[10] D. N. C. Tse and S. V. Hanly, "Multiaccess fading channels, part I: Polymatroid structure, optimal resource allocation and throughput capacities," IEEE Trans. Inf. Theory, vol. 44, no. 7, pp. 2796-2815, Nov. 1998.
[11] A. El Gamal, "The capacity of the product and sum of two unmatched degraded broadcast channels," Problemy Peredachi Informatsii, vol. 6, no. 1, pp. 3-23, 1980.
[12] D. Hughes-Hartogs, The Capacity of a Degraded Spectral Gaussian Broadcast Channel, Ph.D. thesis, Stanford University, Stanford, CA, Jul. 1995.
[13] D. N. C. Tse, "Optimal power allocation over parallel Gaussian broadcast channels," available at http://www.eecs.berkeley.edu/~dtse/broadcast2.pdf, 1998.
[14] V. R. Cadambe and S. A. Jafar, "Multiple access outerbounds and the inseparability of parallel interference channels," submitted to IEEE Trans. Inf. Theory. http://arxiv.org/abs/0802.2125, Feb. 2008.
[15] L. Sankar, X. Shang, E. Erkip, and H. V. Poor, "Ergodic two-user interference channels: is separability optimal?," in Proceedings of the Forty-sixth Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, Sep. 2008.
[16] A. B. Carleial, "A case where interference does not reduce capacity," IEEE Trans. Inf. Theory, vol. 21, pp. 569-570, Sep. 1975.
[17] H. Sato, "The capacity of the Gaussian interference channel under strong interference," IEEE Trans. Inf. Theory, vol. 27, pp. 786-788, Nov. 1981.
[18] T. S. Han and K. Kobayashi, "A new achievable rate region for the interference channel," IEEE Trans. Inf. Theory, vol. 27, pp. 49-60, Jan. 1981.
[19] H. Sato, "On degraded Gaussian two-user channels," IEEE Trans. Inf. Theory, vol. 24, pp. 634-640, Sep. 1978.
[20] D. P. Bertsekas, Nonlinear Programming, Athena Scientific, Belmont, MA, 2003.
[21] J. A. Thomas, "Feedback can at most double Gaussian multiple access channel capacity," IEEE Trans. Inf. Theory, vol. 33, no. 5, pp. 711-716, Sep. 1987.
[22] X. Shang, On the Capacity of Gaussian Interference Channels, Ph.D. thesis, Syracuse University, Syracuse, NY, Aug. 2008.