On the Symbol Asynchronous Gaussian Multiple-Access Channel

Hon-Fah Chong and Mehul Motani
Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117576
Email: {chong.hon.fah,motani}@nus.edu.sg

Abstract—We consider the symbol asynchronous Gaussian multiple access channel in which each user is allowed to linearly modulate a set of orthogonal waveforms and the symbol periods for each user are not aligned at the receiver. This models the case in which asynchronous users may employ quadrature signaling in a multiple access scenario. The case in which each user is only allowed to linearly modulate a fixed waveform in each symbol period was considered by Verdu. He explicitly evaluated the capacity region of this class of multiple access channels for the case where the transmitters know the symbol period offset and also extended the result to the case where the transmitters have no knowledge of the offset. In this paper, we characterize the capacity region for the scenario in which each user is allowed to modulate K orthogonal waveforms and the users know the symbol period offset. We note that the orthogonal waveforms need not be identical for both users. Similar to the case where each user is allowed to modulate a fixed waveform, the result holds regardless of whether or not the transmitters are frame-asynchronous.

I. INTRODUCTION

In the information-theoretic study of multiple access channels (MAC), there are two types of asynchronism that may occur, namely, frame asynchronism and symbol asynchronism. In a discrete MAC, frame asynchronism occurs when the codewords for the different users are not aligned at the receiver, as shown in Fig. 1. The frame offset is an integer satisfying 0 ≤ ∆ < N, where N is the frame/block length. For a continuous-time waveform MAC, besides the possibility of frame asynchronism, symbol asynchronism may also occur when the symbol periods for the different users are not aligned at the receiver. A two-user MAC is said to be frame synchronous and symbol asynchronous if the offset between the two frames is less than a symbol period (see Fig. 2).

The capacity region of the discrete memoryless MAC (DMMAC) with frame synchronism was completely characterized by Ahlswede [1] and Liao [2] in the early 1970s and is given by the following:

Theorem 1: The capacity region of a DMMAC (X1 × X2, p(y|x1, x2), Y) is the closure of the convex hull of all (R1, R2) satisfying

0 ≤ R1 < I(X1; Y |X2)   (1)
0 ≤ R2 < I(X2; Y |X1)   (2)
R1 + R2 < I(X1 X2; Y)   (3)

for some product distribution p1(x1) p2(x2) on X1 × X2.
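As a concrete illustration of Theorem 1 (our own example, not from the paper), the bounds can be evaluated directly for the classical noiseless binary adder MAC Y = X1 + X2 with independent uniform binary inputs:

```python
from math import log2
from itertools import product

def adder_mac_rates():
    """Evaluate the Theorem 1 bounds for the noiseless binary adder MAC,
    Y = X1 + X2, with independent uniform binary inputs."""
    p = {(x1, x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}
    py = {}
    for (x1, x2), pr in p.items():
        py[x1 + x2] = py.get(x1 + x2, 0.0) + pr
    H = lambda dist: -sum(q * log2(q) for q in dist.values() if q > 0)
    # The channel is deterministic, so H(Y | X1, X2) = 0 and
    # I(X1 X2; Y) = H(Y); given X2, Y determines X1, so I(X1; Y | X2) = H(X1).
    I1 = 1.0            # I(X1; Y | X2) = H(X1) = 1 bit
    I2 = 1.0            # I(X2; Y | X1) = H(X2) = 1 bit
    Isum = H(py)        # I(X1 X2; Y) = H(Y) = 1.5 bits
    return I1, I2, Isum
```

Here the sum-rate bound (3) is the binding constraint: R1 + R2 < 1.5 bits even though each individual bound allows 1 bit.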

Fig. 1. Frame asynchronous discrete MAC

Proof: Refer to proof of [3, Theorem 15.3.1].

Cover [4] and Wyner [5] gave an explicit expression for the capacity region of the memoryless Gaussian MAC with both frame and symbol synchronism. The capacity region is given by the pentagon satisfying

0 ≤ R1 < (1/2) log2(1 + P1/N)   (4)
0 ≤ R2 < (1/2) log2(1 + P2/N)   (5)
R1 + R2 < (1/2) log2(1 + (P1 + P2)/N)   (6)

where P1 is the power constraint on sender 1 and P2 is the power constraint on sender 2.

The restriction of frame synchronism was first removed for the DMMAC by Cover, McEliece and Posner [6]. They determined the capacity region of the two-user frame asynchronous DMMAC for the case where the offset (∆) between the two frames as a fraction of the frame length (N) goes to zero as the frame length increases, i.e., ∆/N → 0 as N → ∞. They showed that the capacity region remains unchanged in this case and is given by Theorem 1. Poltyrev [7], and Hui and Humblet [8] determined the capacity region of the totally frame asynchronous DMMAC. They showed that the effect on the capacity region in this case is the removal of the convex hull operation from Theorem 1.

Verdu considered the continuous-time waveform memoryless Gaussian MAC with symbol asynchronism [9], where each transmitter is allowed to linearly modulate a fixed signature waveform. User j, j ∈ {1, 2}, transmits a codeword [bj(1), bj(2), ..., bj(N)] ∈ R^N by sending the signal

Σ_{n=1}^{N} bj(n) sj(t − nT)   (7)
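The pentagon bounds (4)-(6) are straightforward to evaluate numerically; a small sketch (our own illustration; the powers and noise level are example values):

```python
import math

def pentagon_bounds(P1, P2, N):
    """Corner bounds (4)-(6) of the synchronous Gaussian MAC, in bits per use."""
    r1 = 0.5 * math.log2(1 + P1 / N)            # single-user bound on R1
    r2 = 0.5 * math.log2(1 + P2 / N)            # single-user bound on R2
    rsum = 0.5 * math.log2(1 + (P1 + P2) / N)   # sum-rate bound
    return r1, r2, rsum

# Example: equal-power users at 10 dB SNR each.
r1, r2, rsum = pentagon_bounds(P1=10.0, P2=10.0, N=1.0)
```

By concavity of the logarithm, the sum-rate bound is always strictly smaller than r1 + r2, which is what gives the region its pentagon shape.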

Fig. 2. Frame synchronous symbol asynchronous continuous-time waveform MAC

Fig. 3. Quadrature Amplitude Modulation

where sj(t) is a fixed signature waveform in the interval [0, T). The channel output is given by

y(t) = Σ_{n=1}^{N} b1(n) s1(t − τ1 − nT) + Σ_{n=1}^{N} b2(n) s2(t − τ2 − nT) + n(t)   (8)

where τ1 , τ2 ∈ [ 0, T ) and n (t) is white Gaussian noise with power spectral density equal to σ 2 . Verdu first considered the case where the transmitters have knowledge of the symbol period offset before extending the result to the case where the transmitters have no knowledge of the offset. In this paper, we extend the result, where the transmitters have knowledge of the symbol period offset at the receiver, to the case where each user is allowed to modulate K orthogonal waveforms, instead of only one signature waveform. This allows us to model a larger class of asynchronous Gaussian MAC. For example, this encompasses the situation where each

user is allowed to linearly modulate two phase-quadrature carriers in each symbol period and the carrier frequency for each user may be different (see Fig. 3). This may also encompass the dual channel direct-sequence spread spectrum system. In this system, each user has two data streams that linearly modulate two phase-quadrature signals and each stream uses a different spreading code (see Fig. 4). Even though the two spreading codes for each user may not be orthogonal, from Lemma 1, the dual channel direct-sequence spread spectrum system is equivalent to the continuous-time waveform MAC where each user linearly modulates two orthogonal waveforms.

Fig. 4. Dual Channel Direct Sequence Spread Spectrum

Our extension closely follows the approach of Verdu. In [9], an explicit expression for the capacity region relied on the evaluation of the eigenvalues of Toeplitz matrices. In our case, the extension relies on the evaluation of the eigenvalues of block Toeplitz matrices rather than Toeplitz matrices. Hence, we will also give some relevant background information on the spectrum of Hermitian block Toeplitz matrices.

The paper is organized as follows:
• In Section II, we first introduce some of the mathematical preliminaries necessary to understand the paper.
• In Section III, we review some of the results in [9].
• In Section IV, we describe the channel model and also the equivalent channel model with discrete-time outputs.
• In Section V, we describe the main result, Theorem 4, for the case where the transmitters have knowledge of the mutual offset.
• In Section VI, we show that the capacity region given in Theorem 4 is obtained with stationary inputs. Hence, the same capacity region also holds for the case where the transmitters are frame-asynchronous.
• In Section VII, we give the main details of the proof of our main result.

II. MATHEMATICAL PRELIMINARIES

In this section, we give a brief review of some results on the spectrum of Hermitian block Toeplitz matrices [10].

A. Notation and preliminary considerations
We denote by R the set of real numbers, by C the set of complex numbers and by H^{K×K} the set of Hermitian matrices of size K × K. In the following, we will only consider Lebesgue-integrable Hermitian matrix-valued functions defined (almost everywhere) over the interval Q = (−π, π). If A is a square matrix, we denote by ‖A‖_F its Frobenius norm. In addition, if A ∈ H^{K×K}, we indicate by λk(A), k = 1, 2, ..., K, all the eigenvalues of A, counted with their multiplicities, numbered in non-decreasing order. We also indicate by σ(A) the spectrum of A, i.e., the set of eigenvalues of A, where it is again understood that each eigenvalue is counted according to its multiplicity.

For p ≥ 1, we denote by L^p(Q, H^{K×K}) the Banach space of all K × K Hermitian matrix-valued functions which are p-integrable on Q, that is, f ∈ L^p(Q, H^{K×K}) if and only if f(θ) ∈ H^{K×K} and

‖f‖_{L^p}^p = (1/2π) ∫_Q ‖f(θ)‖_F^p dθ < ∞.

We also denote by L^∞(Q, H^{K×K}) the Banach space of Hermitian matrix-valued functions which are essentially bounded over Q, i.e., f is measurable and

‖f‖_{L^∞} = inf {y ∈ R : ‖f(θ)‖_F < y for a.e. θ ∈ Q} < ∞.   (9)

B. Asymptotic spectra of Hermitian block Toeplitz matrices

A Hermitian matrix T_N has a N × N block Toeplitz structure with K × K blocks if

T_N = [ A0      A1    ···  A_{N−1} ]
      [ A_{−1}  ⋱     ⋱    ⋮       ]
      [ ⋮       ⋱     ⋱    A1      ]
      [ A_{−N+1} ···  A_{−1} A0    ]  ∈ C^{KN×KN}   (10)

with

An ∈ C^{K×K},  An = A*_{−n}.   (11)

We consider the case where the blocks An are the Fourier coefficients of a K × K Hermitian matrix-valued function f : Q → H^{K×K}, which is integrable on Q, that is f ∈ L^1(Q, H^{K×K}) and

An = (1/2π) ∫_{−π}^{π} f(θ) e^{−înθ} dθ,  n = 0, ±1, ±2, ...   (12)

where î is the imaginary unit. The integration is understood to be carried out on each entry of the K × K matrix. For every natural number N, we associate the block Toeplitz matrix (10) with the Hermitian matrix-valued function f, and we say that {T_N} is the set of block Toeplitz matrices generated by f. Each matrix T_N has a N × N block Toeplitz structure and each block (12) is a K × K matrix with complex entries (no structure is imposed upon these blocks).

If f : Q → H^{K×K} is a Hermitian matrix-valued measurable function such that f(θ) has real eigenvalues, for Q′ ⊆ Q, we set

inf_{Q′} f = sup {y ∈ R : λ1(f(θ)) > y for almost every θ ∈ Q′}   (13)
sup_{Q′} f = inf {y ∈ R : λK(f(θ)) < y for almost every θ ∈ Q′}.   (14)

Theorem 2: Suppose that f ∈ L^1(Q, H^{K×K}), and let {T_N} be the set of Hermitian block Toeplitz matrices generated by f. Then, for any natural number N, if λ is an eigenvalue of T_N, it holds that

inf_Q f ≤ λ ≤ sup_Q f.   (15)

Proof: Refer to proof of [10, Theorem 3.1].

Theorem 3: Suppose that f ∈ L^∞(Q, H^{K×K}) and that {T_N} is the set of block Toeplitz matrices generated by f; then for any function F, continuous on the interval [inf_Q f, sup_Q f], the following holds:

lim_{N→∞} (1/KN) Σ_{l=1}^{KN} F(λl(T_N)) = (1/2π) ∫_{−π}^{π} (1/K) Σ_{k=1}^{K} F(λk(f(θ))) dθ.

Proof: Refer to proof of [10, Theorem 3.3].

III. A REVIEW OF PAST RESULTS

Verdu first considered the frame synchronous and symbol asynchronous Gaussian MAC where each user knows the symbol period offset. In addition, each user is allowed to linearly modulate a fixed waveform sj(t), j ∈ {1, 2}, of unit energy defined on the interval [0, T). Hence, the channel output is given by (8) subject to the power constraint

(1/N) Σ_{n=1}^{N} bj²(n) ≤ Pj,  j ∈ {1, 2}.   (16)

Next, Verdu obtained a discrete channel model with an equivalent channel capacity by considering the projection of the channel output y(t) along the direction of the signals sj(t), j ∈ {1, 2}, and their T-shifts, i.e.,

yj(n) = ∫_{nT+τj}^{(n+1)T+τj} y(t) sj(t − nT − τj) dt.   (17)

By defining the cross correlations between the assigned waveforms as follows (assuming without loss of generality that τ1 ≤ τ2):

ρ12 = ∫_0^T s1(t) s2(t + τ1 − τ2) dt   (18)
ρ21 = ∫_0^T s1(t) s2(t + T + τ1 − τ2) dt   (19)
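The cross correlations (18)-(19) are easy to check numerically; the following sketch (our own illustration, with rectangular pulses as example waveforms) also verifies our observation that the factor 1 − ρ12² − ρ21² − 2ρ12ρ21 cos θ appearing in the sum-rate constraint (25) is the determinant of the 2×2 spectral symbol of the correlation matrix R in (21):

```python
import math
import cmath

T, tau = 1.0, 0.3          # example symbol period and offset tau2 - tau1
rect = lambda t: 1.0 / math.sqrt(T) if 0.0 <= t < T else 0.0  # unit-energy pulse

def riemann(f, a, b, n=20000):
    """Left-endpoint Riemann sum of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

# Cross-correlations (18)-(19) for s1 = s2 = rect:
rho12 = riemann(lambda t: rect(t) * rect(t - tau), 0.0, T)       # = (T - tau)/T
rho21 = riemann(lambda t: rect(t) * rect(t + T - tau), 0.0, T)   # = tau/T

def factor(theta):
    """The factor inside the sum-rate integrand (25)."""
    return 1.0 - rho12**2 - rho21**2 - 2.0 * rho12 * rho21 * math.cos(theta)

def symbol_det(theta):
    """det of the 2x2 symbol [[1, c], [conj(c), 1]], c = rho12 + rho21*e^{-i theta},
    of the tridiagonal matrix R in (21); equals factor(theta)."""
    c = rho12 + rho21 * cmath.exp(-1j * theta)
    return 1.0 - abs(c) ** 2
```

For rectangular pulses, ρ12 = (T − τ)/T and ρ21 = τ/T, so ρ12 + ρ21 = 1 and the factor vanishes at θ = 0, reflecting the loss due to asynchronism being largest at that frequency.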

the equivalent discrete MAC can be expressed as

Y^N = R [b1(1), b2(1), b1(2), b2(2), ..., b1(N), b2(N)]^T + [n1(1), n2(1), n1(2), n2(2), ..., n1(N), n2(N)]^T   (20)

where R is the 2N × 2N symmetric tridiagonal matrix with unit diagonal whose off-diagonal entries alternate between ρ12 and ρ21,

R = [ 1    ρ12                      ]
    [ ρ12  1    ρ21                 ]
    [      ρ21  1    ρ12            ]
    [           ρ12  1    ⋱         ]
    [                ⋱    ⋱    ρ12  ]
    [                     ρ12  1    ]   (21)

and the noise vector is Gaussian with zero mean and covariance matrix σ²R.

Verdu then made use of the limiting characterization of the discrete MAC with memory [11, Theorem 3] to characterize the capacity of this channel. The channel capacity of the frame synchronous and symbol asynchronous Gaussian MAC is given by

C = ∪ (R1, R2)   (22)

where the union is taken over all Sj(θ) ≥ 0, θ ∈ Q, satisfying (1/2π) ∫_{−π}^{π} Sj(θ) dθ = Pj, j ∈ {1, 2}, and

0 ≤ R1 ≤ (1/4π) ∫_{−π}^{π} log(1 + S1(θ)/σ²) dθ   (23)
0 ≤ R2 ≤ (1/4π) ∫_{−π}^{π} log(1 + S2(θ)/σ²) dθ   (24)
R1 + R2 ≤ (1/4π) ∫_{−π}^{π} log(1 + S1(θ)/σ² + S2(θ)/σ² + S1(θ) S2(θ)(1 − ρ12² − ρ21² − 2ρ12ρ21 cos θ)/σ⁴) dθ.   (25)

Finally, Verdu extended the result to the case where each user does not know the symbol period offset. Each user only knows that the cross correlation coefficients belong to an uncertainty set Ψ = {(ρ12, ρ21)}, where (ρ12, ρ21) depends on the symbol period offset and on the fixed waveform chosen by each user. In this case, the capacity region is given by

C = ∪ (R1, R2)   (26)

where the union is again taken over all Sj(θ) ≥ 0, θ ∈ Q, satisfying (1/2π) ∫_{−π}^{π} Sj(θ) dθ = Pj, j ∈ {1, 2}, and

0 ≤ R1 ≤ (1/4π) ∫_{−π}^{π} log(1 + S1(θ)/σ²) dθ   (27)
0 ≤ R2 ≤ (1/4π) ∫_{−π}^{π} log(1 + S2(θ)/σ²) dθ   (28)
R1 + R2 ≤ inf_{(ρ12,ρ21)∈Ψ} (1/4π) ∫_{−π}^{π} log(1 + S1(θ)/σ² + S2(θ)/σ² + S1(θ) S2(θ)(1 − ρ12² − ρ21² − 2ρ12ρ21 cos θ)/σ⁴) dθ.   (29)

For both cases, where the transmitters know the symbol period offset and where the transmitters only have knowledge of the uncertainty set Ψ, the channel capacities are achieved with stationary inputs. Hence, the channel capacities for both cases also hold regardless of whether the transmitters are frame asynchronous or not.

In this paper, we will only consider the case where the transmitters know the symbol period offset. However, each user is allowed to modulate K orthogonal waveforms instead of only one fixed waveform and the set of K waveforms need

not be identical for both users. Our proof follows along the lines of the proof of Verdu and we will give the relevant extensions where necessary.

Fig. 5. Cross correlation coefficients between waveform k for the first user and waveform k′ for the second user.

IV. CHANNEL MODEL

For simplicity of notation, we denote n as an element taking values in the set {1, 2, ..., N}, j as an element taking values in the set {1, 2}, and k and k′ as elements taking values in the set {1, 2, ..., K}. In this paper, we assume the two-user scenario in which user j linearly modulates each of its K orthogonal waveforms sjk(t). The waveforms occupy the symbol period [0, T) and are assumed to be of unit energy. The symbol periods of the K orthogonal waveforms are also assumed to be aligned for each user. Hence, we can write the channel output as follows:

y(t) = Σ_{n=1}^{N} Σ_{k=1}^{K} b1k(n) s1k(t − nT − τ1) + Σ_{n=1}^{N} Σ_{k=1}^{K} b2k(n) s2k(t − nT − τ2) + n(t)   (30)

where τ1, τ2 ∈ [0, T), n(t) is white Gaussian noise with power spectral density equal to σ², and b1k(n), b2k(n) ∈ R satisfy

(1/KN) Σ_{n=1}^{N} Σ_{k=1}^{K} bjk²(n) ≤ Pj.   (31)

We obtain an equivalent model with discrete-time outputs by passing the received waveform through a matched filter for each of the signals sjk(t) as follows:

yjk(n) = ∫_{nT+τj}^{(n+1)T+τj} y(t) sjk(t − nT − τj) dt.   (32)

The discrete outputs obtained, {y1k(n)} and {y2k(n)} for k ∈ {1, ..., K} and n ∈ {1, ..., N}, are sufficient statistics for the transmitted messages (see [9, Pg. 735] or [12]). Since the K assigned waveforms for each user are assumed to be orthogonal, we only need to define the cross correlations between assigned waveforms of different users. Referring to Fig. 5, we may define the cross correlation coefficients as follows (assume that τ1 ≤ τ2):

αkk′ = ∫_0^T s1k(t) s2k′(t + τ1 − τ2) dt   (33)
ρkk′ = ∫_0^T s1k(t) s2k′(t + T + τ1 − τ2) dt.   (34)

For compactness of description, let us define the matrices α = [αkk′] and ρ = [ρkk′] in R^{K×K}. We also denote X1(n), X2(n), N1(n), N2(n), Y1(n) and Y2(n) as column vectors in R^{K×1}, where

X1(n) = [b11(n), b12(n), ..., b1K(n)]^T   (35)
X2(n) = [b21(n), b22(n), ..., b2K(n)]^T   (36)
Y1(n) = [y11(n), y12(n), ..., y1K(n)]^T   (37)
Y2(n) = [y21(n), y22(n), ..., y2K(n)]^T.   (38)

It is easy to verify that the N-block asynchronous Gaussian MAC, Y^N = [Y1(1)^T, Y2(1)^T, ..., Y1(N)^T, Y2(N)^T]^T, can be written as follows:

Y^N = M [X1(1)^T, X2(1)^T, ..., X1(N)^T, X2(N)^T]^T + [N1(1)^T, N2(1)^T, ..., N1(N)^T, N2(N)^T]^T   (39)

where IK denotes the identity matrix of size K, the noise vector is Gaussian with zero mean and covariance matrix σ²M, and M is the 2KN × 2KN block tridiagonal matrix

M = [ IK   α                      ]
    [ αT   IK   ρ                 ]
    [      ρT   IK   α            ]
    [           αT   IK   ⋱       ]
    [                ⋱    ⋱    α  ]
    [                     αT   IK ]   (40)

We note that the noise sequence thus obtained is correlated. However, to invoke [11, Theorem 3] for the limiting characterization of capacity regions of discrete MACs with memory, we need the outputs to be conditionally independent given the inputs. Following the Gram-Schmidt orthonormalization procedure, we can obtain an equivalent discrete MAC (refer to Appendix A) with the same capacity region and where the noise process is independent. Hence, we can directly make use of coding theorems where the outputs are conditionally independent given the inputs.

V. CAPACITY REGION

Theorem 4: When the transmitters know the mutual offset, the capacity region of the energy-constrained asynchronous Gaussian MAC, where each user has K orthogonal waveforms to transmit, is given by

C = ∪ (R1, R2)   (41)

where the union is taken over all Sjk(θ) ≥ 0, θ ∈ Q, satisfying (1/2π) ∫_{−π}^{π} (1/K) Σ_{k=1}^{K} Sjk(θ) dθ = Pj, j ∈ {1, 2}, and

0 ≤ R1 ≤ (1/4π) ∫_{−π}^{π} Σ_{k=1}^{K} log(1 + S1k(θ)/σ²) dθ   (42)
0 ≤ R2 ≤ (1/4π) ∫_{−π}^{π} Σ_{k=1}^{K} log(1 + S2k(θ)/σ²) dθ   (43)
R1 + R2 ≤ (1/4π) ∫_{−π}^{π} Σ_{k=1}^{K} log(1 + S1k(θ)/σ² + S2k(θ)/σ² + S1k(θ) S2k(θ)(1 − dk²(θ))/σ⁴) dθ   (44)

where dk²(θ), k ∈ {1, 2, ..., K} and θ ∈ Q, are the eigenvalues of the following Hermitian matrix:

Γ(θ) = ααT + ρρT + αρT e^{îθ} + ραT e^{−îθ}.   (45)

Remark 1: We note that Γ is a Hermitian matrix-valued function Γ : Q → H^{K×K}. Hence, for every N, there is an associated block Toeplitz matrix generated by Γ (see Section II).
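The quantities dk²(θ) in Theorem 4 can be computed numerically. The sketch below (our own illustration, with arbitrary example matrices α and ρ) also checks the property used in (51): the eigenvalues of Γ(θ) are the squared singular values of αT + ρT e^{îθ}.

```python
import numpy as np

rng = np.random.default_rng(0)
K, theta = 2, 0.7
alpha = 0.1 * rng.normal(size=(K, K))   # example cross-correlation matrices
rho   = 0.1 * rng.normal(size=(K, K))   # (arbitrary values, for illustration)

# Gamma(theta) as in (45); it is Hermitian by construction.
Gamma = (alpha @ alpha.T + rho @ rho.T
         + (alpha @ rho.T) * np.exp(1j * theta)
         + (rho @ alpha.T) * np.exp(-1j * theta))

d2 = np.sort(np.linalg.eigvalsh(Gamma))      # eigenvalues d_k^2(theta), real

# They coincide with the squared singular values of alpha^T + rho^T e^{i theta},
# since Gamma(theta) = (alpha + rho e^{-i theta})(alpha^T + rho^T e^{i theta}).
A = alpha.T + rho.T * np.exp(1j * theta)
sv2 = np.sort(np.linalg.svd(A, compute_uv=False) ** 2)
```

In particular, the dk²(θ) are automatically real and non-negative, so the sum-rate integrand in (44) is well defined.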

VI. ACHIEVABILITY OF THE CAPACITY REGION BY STATIONARY INPUTS

We assume that the inputs to channel (39) are stationary Gaussian processes where the power spectral density matrices Sj(θ), j ∈ {1, 2}, are non-negative definite Hermitian matrices satisfying

(1/2π) ∫_{−π}^{π} (1/K) tr[Sj(θ)] dθ ≤ Pj.   (46)

The mutual information rates are given by

lim_{N→∞} (1/N) I(X1^N; Y^N |X2^N) = (1/4π) ∫_{−π}^{π} log det(IK + σ^{−2} S1(θ)) dθ   (47)
lim_{N→∞} (1/N) I(X2^N; Y^N |X1^N) = (1/4π) ∫_{−π}^{π} log det(IK + σ^{−2} S2(θ)) dθ   (48)
lim_{N→∞} (1/N) I(X1^N X2^N; Y^N) = (1/4π) ∫_{−π}^{π} log det(I2K + (1/σ²) [S1(θ) OK; OK S2(θ)] B(θ)) dθ   (49)

where

B(θ) = [ IK               α + ρ e^{−îθ} ]
       [ αT + ρT e^{îθ}   IK            ]   (50)

and OK denotes the zero matrix of size K × K. We note that it is difficult to obtain a closed-form expression directly as in [9, (3.9)]. However, we may perform a singular value decomposition of αT + ρT e^{îθ} as follows:

αT + ρT e^{îθ} = U(θ) D(θ) V*(θ)   (51)

where D(θ) is the diagonal matrix of the square roots of the eigenvalues of Γ(θ). Using the orthogonality of U(θ) and V(θ), we can express the determinant in the rate-sum constraint as

det(I2K + (1/σ²) [Λ1(θ) OK; OK Λ2(θ)] [IK D(θ); D(θ) IK])   (52)

where we have set Λ1(θ) = V*(θ) S1(θ) V(θ) and Λ2(θ) = U*(θ) S2(θ) U(θ). As the singular value decomposition is a continuous and well-behaved function of its matrix argument (see [13, Section 2.3]), we note that Sj(θ) is a continuous function of θ as long as Λj(θ) is a continuous function of θ. In addition, Sj(θ) (for a fixed θ) is a non-negative definite matrix if and only if Λj(θ) is a non-negative definite matrix. Finally, (46) is satisfied if and only if

(1/2π) ∫_{−π}^{π} (1/K) tr[Λj(θ)] dθ ≤ Pj   (53)

is satisfied. Hence, for a real and non-negative diagonal matrix Λj(θ) satisfying (53), j ∈ {1, 2} and θ ∈ Q, we have

lim_{N→∞} (1/N) I(X1^N; Y^N |X2^N) = (1/4π) ∫_{−π}^{π} Σ_{k=1}^{K} log(1 + Λ1k(θ)/σ²) dθ   (54)
lim_{N→∞} (1/N) I(X2^N; Y^N |X1^N) = (1/4π) ∫_{−π}^{π} Σ_{k=1}^{K} log(1 + Λ2k(θ)/σ²) dθ   (55)
lim_{N→∞} (1/N) I(X1^N X2^N; Y^N) = (1/4π) ∫_{−π}^{π} Σ_{k=1}^{K} log(1 + Λ1k(θ)/σ² + Λ2k(θ)/σ² + Λ1k(θ) Λ2k(θ)(1 − dk²(θ))/σ⁴) dθ   (56)

where dk²(θ) are the eigenvalues of Γ(θ) and

(1/2π) ∫_{−π}^{π} (1/K) Σ_{k=1}^{K} Λjk(θ) dθ ≤ Pj.   (57)

Since the capacity region is achieved with stationary inputs, Theorem 4 holds also for the case where the transmitters are frame asynchronous.
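The rotation argument behind (52) can be checked numerically; the sketch below (our own illustration, with arbitrary example matrices and σ² = 1) builds Sj(θ) from diagonal Λj(θ) via the SVD factors of (51) and compares the two determinants:

```python
import numpy as np

rng = np.random.default_rng(1)
K, sigma2, theta = 2, 1.0, 0.4
alpha = 0.1 * rng.normal(size=(K, K))   # arbitrary example matrices
rho   = 0.1 * rng.normal(size=(K, K))

A = alpha.T + rho.T * np.exp(1j * theta)   # the matrix decomposed in (51)
U, d, Vh = np.linalg.svd(A)                # A = U D V*
D = np.diag(d)
V = Vh.conj().T

# Choose diagonal non-negative Lambda_j and rotate back to S_j as in the text:
L1 = np.diag(rng.uniform(0.5, 2.0, K))
L2 = np.diag(rng.uniform(0.5, 2.0, K))
S1 = V @ L1 @ V.conj().T                   # so that Lambda1 = V* S1 V
S2 = U @ L2 @ U.conj().T                   # so that Lambda2 = U* S2 U

Z = np.zeros((K, K))
B = np.block([[np.eye(K), A.conj().T], [A, np.eye(K)]])   # B(theta) as in (50)

lhs = np.linalg.det(np.eye(2 * K) + np.block([[S1, Z], [Z, S2]]) @ B / sigma2)
rhs = np.linalg.det(np.eye(2 * K)
                    + np.block([[L1, Z], [Z, L2]])
                    @ np.block([[np.eye(K), D], [D, np.eye(K)]]) / sigma2)
```

The agreement follows because B(θ) = W [IK D; D IK] W* with W = blkdiag(V, U), the same rotation that diagonalizes the chosen Sj(θ).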

VII. PROOF OF THE CAPACITY REGION

The transmitters know the actual symbol period offset at the receiver and hence, the exact cross-correlation matrices α and ρ. Next, let us define the following vectors:

X1^N = [X1(1)^T, X1(2)^T, ..., X1(N)^T]^T   (58)
X2^N = [X2(1)^T, X2(2)^T, ..., X2(N)^T]^T   (59)
X^N = [X1(1)^T, X2(1)^T, ..., X1(N)^T, X2(N)^T]^T.   (60)

As the capacity region of the asynchronous Gaussian MAC is equivalent to that of a discrete MAC with memory where the outputs are conditionally independent given the inputs, [11, Theorem 3] is applicable in evaluating the capacity region of the asynchronous Gaussian MAC. Hence, a limiting characterization for the capacity region of the asynchronous Gaussian MAC is given by

C = closure( lim inf_{N→∞} C_N )   (61)

with

C_N = ∪_{X1^N, X2^N} { (R1, R2) : 0 ≤ R1 ≤ (1/N) I(X1^N; Y^N |X2^N), 0 ≤ R2 ≤ (1/N) I(X2^N; Y^N |X1^N), R1 + R2 ≤ (1/N) I(X1^N X2^N; Y^N) }   (62)

where X1^N and X2^N are independent random variables satisfying E[(Xj^N)^T Xj^N] ≤ Pj. We can bound the first two terms in (62) as follows:

(1/N) I(X1^N; Y^N |X2^N) ≤ (1/2N) log det(I_{KN} + σ^{−2} Σ1)   (63)
(1/N) I(X2^N; Y^N |X1^N) ≤ (1/2N) log det(I_{KN} + σ^{−2} Σ2)   (64)

where Σ1 and Σ2 represent the covariance matrices of X1^N and X2^N, respectively. We note from (39) that the output covariance matrix is given by M E[X^N X^{N T}] M + σ²M. Hence, we can also bound the third term as follows:

(1/N) I(X1^N X2^N; Y^N) ≤ (1/2N) log det(I_{2KN} + σ^{−2} E[X^N X^{N T}] M)
= (1/2N) log det(I_{2KN} + (1/σ²) [Σ1 O_{KN}; O_{KN} Σ2] [I_{KN} S^T; S I_{KN}])   (65)

where S is given by

S = I_N ⊗ α + Z_N ⊗ ρ   (66)

with Z_N the N × N matrix whose only nonzero entries are ones on the superdiagonal, and ⊗ denotes the Kronecker product. The equality above follows along the lines of the proof of the identity in [9, Lemma 1]. Refer to Lemma 2 in Appendix B for the proof that the equality still holds true in our present case (by replacing the scalars ρ12 and ρ21 in [9, Lemma 1] with their equivalent K × K matrices α and ρ, respectively).

Next, we can perform a singular value decomposition of the matrix S = UDV*, where U and V are orthogonal matrices and D is a diagonal matrix of the singular values {dl}_{l=1}^{KN} of S, so that {dl²} are the eigenvalues of S^T S, which is given by

S^T S = [ ααT   αρT                          ]
        [ ραT   ααT + ρρT   αρT              ]
        [       ραT         ααT + ρρT   ⋱    ]
        [                   ⋱            ⋱   ]   (67)

After obtaining (65), we can proceed exactly as in [9, Pg. 739-740] to obtain

C_N = ∪ (R1, R2)   (68)

where the union is taken over all φjl ≥ 0, l ∈ {1, 2, ..., KN}, satisfying (1/KN) Σ_{l=1}^{KN} φjl ≤ Pj, j ∈ {1, 2}, and

0 ≤ R1 ≤ (1/2N) Σ_{l=1}^{KN} log(1 + φ1l/σ²)
0 ≤ R2 ≤ (1/2N) Σ_{l=1}^{KN} log(1 + φ2l/σ²)
R1 + R2 ≤ (1/2N) Σ_{l=1}^{KN} log(1 + φ1l/σ² + φ2l/σ² + φ1l φ2l (1 − dl²)/σ⁴).   (69)

We note that the maximization over all matrices of size KN is now reduced to a maximization over all diagonal matrices of size KN. Since C_N is convex, each of the Pareto-optimal rate pairs (rate pairs on the boundary of the region) in C_N satisfies

max_{(R1,R2)∈C_N} αR1 + (1 − α)R2
= max { (1/2N) Σ_{l=1}^{KN} f1(φ1l, φ2l, dl) : φjl ≥ 0, (1/KN) Σ_{l=1}^{KN} φjl ≤ Pj/σ², j ∈ {1, 2} },  if 1/2 ≤ α ≤ 1
= max { (1/2N) Σ_{l=1}^{KN} f2(φ1l, φ2l, dl) : φjl ≥ 0, (1/KN) Σ_{l=1}^{KN} φjl ≤ Pj/σ², j ∈ {1, 2} },  if 0 ≤ α ≤ 1/2   (70)

where

f1(z1, z2, d) = (2α − 1) log(1 + z1) + (1 − α) log(1 + z1 + z2 + z1 z2 (1 − d²))   (71)
f2(z1, z2, d) = (1 − 2α) log(1 + z2) + α log(1 + z1 + z2 + z1 z2 (1 − d²)).   (72)

If we fix 1/2 ≤ α ≤ 1, following the KN-dimensional optimization in [9, Appendix IV], we have

max { (1/2N) Σ_{l=1}^{KN} f1(φ1l, φ2l, dl) : φjl ≥ 0, (1/KN) Σ_{l=1}^{KN} φjl ≤ Pj/σ² }
= (1/2N) Σ_{l=1}^{KN} f1(γ1(dl, β1, β2), γ2(dl, β1, β2), dl)
= (1/2N) Σ_{l=1}^{KN} g(dl, β1, β2)   (73)

where βj, j ∈ {1, 2}, are positive scalars (Lagrange multipliers in the KN-dimensional optimization problem) and γj(dl, β1, β2) are continuous functions of dl such that (see proof of [9, Lemma 3])

(1/KN) Σ_{l=1}^{KN} γj(dl, β1, β2) = Pj/σ².   (74)

Following the same approach with the convex set C given in Theorem 4, for every 1/2 ≤ α ≤ 1, each of its Pareto-optimal pairs attains

max_{(R1,R2)∈C} αR1 + (1 − α)R2
= max { (1/4π) ∫_Q Σ_{k=1}^{K} f1(Φ1k(θ), Φ2k(θ), dk(θ)) dθ : Φjk(θ) ≥ 0, (1/2π) ∫_Q (1/K) Σ_{k=1}^{K} Φjk(θ) dθ ≤ Pj/σ², j ∈ {1, 2}, θ ∈ Q }   (75)

and for every 0 ≤ α ≤ 1/2, each of its Pareto-optimal pairs attains

max_{(R1,R2)∈C} αR1 + (1 − α)R2
= max { (1/4π) ∫_Q Σ_{k=1}^{K} f2(Φ1k(θ), Φ2k(θ), dk(θ)) dθ : Φjk(θ) ≥ 0, (1/2π) ∫_Q (1/K) Σ_{k=1}^{K} Φjk(θ) dθ ≤ Pj/σ², j ∈ {1, 2}, θ ∈ Q }   (76)

where dk²(θ), k ∈ {1, 2, ..., K}, are the eigenvalues of the Hermitian matrix Γ(θ). By fixing 1/2 ≤ α ≤ 1 and proceeding as in the KN-dimensional optimization, we have

max_{(R1,R2)∈C} αR1 + (1 − α)R2 = (1/4π) ∫_{−π}^{π} Σ_{d²∈σ(Γ(θ))} g(d, β1, β2) dθ   (77)

where β1 and β2 are positive scalars (Lagrange multipliers of the infinite-dimensional optimization problem) such that

(1/2π) ∫_{−π}^{π} (1/K) Σ_{d²∈σ(Γ(θ))} γj(d, β1, β2) dθ = Pj/σ²,  j ∈ {1, 2}.   (78)

Next, instead of considering the singular values of S^T S, we can consider the singular values {d̂l}_{l=1}^{KN} of the block Toeplitz matrix T_N obtained by substituting the first K × K entries of S^T S with ααT + ρρT. It is indeed valid to carry out this replacement because T_N and S^T S differ only in the first K × K entries. Thus, they are asymptotically equivalent, and since g(·, β1, β2) and γj(·, β1, β2) are continuous functions, their averages evaluated at {dl}_{l=1}^{KN} and {d̂l}_{l=1}^{KN} coincide as N → ∞ [14, Corollary 2.1]. We note that the sequence of symmetric block Toeplitz matrices {T_N} can be generated by the K × K Hermitian matrix-valued function Γ. We can readily check that the functions Γ, γj and g satisfy the conditions of Theorem 3. Hence, for every fixed positive pair (β1, β2), we have the following:

lim_{N→∞} (1/2N) Σ_{dl²∈σ(T_N)} g(dl, β1, β2) = (1/4π) ∫_{−π}^{π} Σ_{d²∈σ(Γ(θ))} g(d, β1, β2) dθ   (79)

lim_{N→∞} (1/NK) Σ_{dl²∈σ(T_N)} γj(dl, β1, β2) = (1/2π) ∫_{−π}^{π} (1/K) Σ_{d²∈σ(Γ(θ))} γj(d, β1, β2) dθ,  j ∈ {1, 2}.   (80)

This completes the proof of the capacity region.

APPENDIX A
EQUIVALENT DISCRETE MAC WITH INDEPENDENT NOISE PROCESS

We first state the following lemma, necessary to obtain the equivalent discrete MAC model with an independent noise process:

Lemma 1: Given K finite-energy functions {sk(t)}_{k=1}^{K}, i.e., ∫_0^T |sk(t)|² dt < ∞, defined on [0, T), there exist M ≤ K orthonormal functions {φm(t)}_{m=1}^{M} defined on [0, T), i.e.,

∫_0^T φk(t) φm(t) dt = δkm = { 0, k ≠ m; 1, k = m }   (81)

such that every function in the set {sk(t)}_{k=1}^{K} can be represented in the form

sk(t) = Σ_{m=1}^{M} skm φm(t),  k ∈ {1, 2, ..., K}   (82)

where for each k and m, skm = ∫_0^T sk(t) φm(t) dt.

Proof: Refer to [15, Pg. 163].

Next, we define the following waveforms:

s1k^L(t) = { s1k(t), t ∈ [0, τ2 − τ1); 0, t ∈ [τ2 − τ1, T) }   (83)
s1k^R(t) = { 0, t ∈ [0, τ2 − τ1); s1k(t), t ∈ [τ2 − τ1, T) }   (84)
s2k^L(t) = { s2k(t + T − τ2 + τ1), t ∈ [0, τ2 − τ1); 0, t ∈ [τ2 − τ1, T) }   (85)
s2k^R(t) = { 0, t ∈ [0, τ2 − τ1); s2k(t − τ2 + τ1), t ∈ [τ2 − τ1, T) }   (86)

where k ∈ {1, 2, ..., K} (see also Fig. 5). From Lemma 1, there exist ML orthonormal functions {φm^L(t)}_{m=1}^{ML} defined on [0, T) such that every function in the set {sjk^L(t)}, j ∈ {1, 2}, k ∈ {1, ..., K}, can be represented in the form

sjk^L(t) = Σ_{m=1}^{ML} sjkm^L φm^L(t),  j ∈ {1, 2}; k ∈ {1, 2, ..., K}   (87)

where for each j, k and m, sjkm^L = ∫_0^T sjk^L(t) φm^L(t) dt. Similarly, there exist MR orthonormal functions {φm′^R(t)}_{m′=1}^{MR} defined on [0, T) such that every function in the set {sjk^R(t)}, j ∈ {1, 2}, k ∈ {1, ..., K}, can be represented in the form

sjk^R(t) = Σ_{m′=1}^{MR} sjkm′^R φm′^R(t),  j ∈ {1, 2}; k ∈ {1, 2, ..., K}   (88)

where for each j, k and m′, sjkm′^R = ∫_0^T sjk^R(t) φm′^R(t) dt.

The channel output can then be written as

y(t) = Σ_{n=1}^{N} Σ_{k=1}^{K} b1k(n) Σ_{m=1}^{ML} s1km^L φm^L(t − nT − τ1)
     + Σ_{n=1}^{N} Σ_{k=1}^{K} b1k(n) Σ_{m′=1}^{MR} s1km′^R φm′^R(t − nT − τ1)
     + Σ_{n=1}^{N} Σ_{k=1}^{K} b2k(n − 1) Σ_{m=1}^{ML} s2km^L φm^L(t − nT − τ1)
     + Σ_{n=1}^{N} Σ_{k=1}^{K} b2k(n) Σ_{m′=1}^{MR} s2km′^R φm′^R(t − nT − τ1)
     + n(t).   (89)

We then obtain an equivalent model with discrete-time outputs by passing the received waveform through a matched filter for each of the signals φm^L(t) and φm′^R(t) as follows:

ym^L(n) = ∫_{nT+τ1}^{(n+1)T+τ1} y(t) φm^L(t − nT − τ1) dt,  m ∈ {1, 2, ..., ML}   (90)
ym′^R(n) = ∫_{nT+τ1}^{(n+1)T+τ1} y(t) φm′^R(t − nT − τ1) dt,  m′ ∈ {1, 2, ..., MR}.   (91)

It is easy to verify that the noise process thus obtained is conditionally independent given the inputs.
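On a discretized grid, the orthonormalization step of Lemma 1 amounts to a QR factorization of the sampled waveforms; a minimal sketch (our own illustration; the waveforms are arbitrary examples, with the third a linear combination of the first two, so only two functions are genuinely needed and QR returns an orthonormal completion):

```python
import numpy as np

# Sample K = 3 finite-energy waveforms on [0, T).
n, T = 2000, 1.0
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n
S = np.stack([np.sin(2 * np.pi * t),
              np.sin(4 * np.pi * t),
              np.sin(2 * np.pi * t) + 0.5 * np.sin(4 * np.pi * t)])

# QR on the sampled waveforms; the sqrt(dt) weight makes l2 inner products
# approximate the integrals in (81).
Q, _ = np.linalg.qr(S.T * np.sqrt(dt))
phi = Q.T / np.sqrt(dt)              # orthonormal functions on the grid

gram = (phi * dt) @ phi.T            # approximates int phi_m phi_k dt
coeffs = (S * dt) @ phi.T            # s_km = int s_k phi_m dt, as in (82)
recon = coeffs @ phi                 # each s_k recovered from the phi_m
```

The Gram matrix is the identity and each waveform is recovered exactly from its expansion coefficients, which is the representation property (82).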

APPENDIX B
PROOF OF LEMMA 2

Lemma 2:

det(I_{2KN} + σ^{−2} E[X^N X^{N T}] M) = det(I_{2KN} + σ^{−2} [Σ1 O_{KN}; O_{KN} Σ2] [I_{KN} S^T; S I_{KN}]).   (92)

Proof: Let us assume that A, B, C, D are N × N matrices with elements in R^{K×K} and P is a 2N × 2N matrix with elements in R^{K×K}. If we assume that the Kronecker product is carried out entry by entry and that the multiplication of the elements is taken to be the matrix multiplication of the elements, it is then straightforward (see also [9, Appendix II]) to check that

A ⊗ [IK OK; OK OK] + B ⊗ [OK IK; OK OK] + C ⊗ [OK OK; IK OK] + D ⊗ [OK OK; OK IK] = P [A B; C D] P^T   (93)

where P is the permutation matrix whose only non-OK entries are

P_{2n′−1, n′} = IK,  n′ = 1, ..., N   (94)
P_{2(n′−N), n′} = IK,  n′ = N + 1, ..., 2N.   (95)

Therefore, we have

E[X^N X^{N T}] = Σ1 ⊗ [IK OK; OK OK] + Σ2 ⊗ [OK OK; OK IK] = P [Σ1 O_{KN}; O_{KN} Σ2] P^T

M = J_N ⊗ [IK OK; OK OK] + J_N ⊗ [OK OK; OK IK] + S ⊗ [OK IK; OK OK] + S^T ⊗ [OK OK; IK OK] = P [I_{KN} S^T; S I_{KN}] P^T   (96)

where J_N is a N × N matrix (with entries in R^{K×K}) whose diagonal entries are all IK and whose remaining entries are OK. We note that P is also unitary in our present case and we obtain

det(I_{2KN} + σ^{−2} E[X^N X^{N T}] M)
= det(I_{2KN} + σ^{−2} P [Σ1 O_{KN}; O_{KN} Σ2] P^T P [I_{KN} S^T; S I_{KN}] P^T)
= det(I_{2KN} + σ^{−2} P [Σ1 O_{KN}; O_{KN} Σ2] [I_{KN} S^T; S I_{KN}] P^T)
= det(I_{2KN} + σ^{−2} [Σ1 O_{KN}; O_{KN} Σ2] [I_{KN} S^T; S I_{KN}]).   (97)

REFERENCES

[1] R. Ahlswede, "The capacity region of a channel with two senders and two receivers," Annals Probabil., vol. 2, no. 5, pp. 805-814, 1974.
[2] H. Liao, "Multiple access channels," Ph.D. dissertation, University of Hawaii, Honolulu, HI, 1972.
[3] T. M. Cover and J. A. Thomas, Elements of Information Theory. Wiley Interscience, 2006.
[4] T. M. Cover, "Some advances in broadcast channels," Advances in Communication Systems, vol. 4, pp. 229-260, 1975.
[5] A. D. Wyner, "Recent results in the Shannon theory," IEEE Trans. Inform. Theory, vol. 20, no. 1, pp. 2-10, Jan. 1974.
[6] T. Cover, R. McEliece and E. Posner, "Asynchronous multiple-access channel capacity," IEEE Trans. Inform. Theory, vol. 27, no. 4, pp. 409-413, July 1981.
[7] G. S. Poltyrev, "Coding in an asynchronous multiple-access channel," Problems Inform. Transmission, pp. 12-21, July-Sept. 1983.
[8] J. Hui and P. Humblet, "The capacity region of the totally asynchronous multiple-access channel," IEEE Trans. Inform. Theory, vol. 31, no. 2, pp. 207-216, Mar. 1985.
[9] S. Verdu, "The capacity region of the symbol-asynchronous Gaussian multiple-access channel," IEEE Trans. Inform. Theory, vol. 35, no. 4, pp. 733-751, July 1989.
[10] M. Miranda and P. Tilli, "Asymptotic spectra of Hermitian block Toeplitz matrices and preconditioning results," SIAM J. Matrix Anal. Appl., vol. 21, no. 3, pp. 867-881, Feb. 2000.
[11] S. Verdu, "Multiple-access channels with memory with and without frame-synchronism," IEEE Trans. Inform. Theory, vol. 35, no. 3, pp. 605-619, May 1989.
[12] E. L. Lehmann, Testing Statistical Hypotheses. New York: Wiley, 1959.
[13] A. A. Maciejewski and C. A. Klein, "The singular value decomposition: Computation and applications to robotics," The Int'l. Journ. Robotics Res., vol. 8, no. 6, pp. 63-79, Dec. 1989.
[14] R. M. Gray, "On the asymptotic eigenvalue distribution of Toeplitz matrices," IEEE Trans. Inform. Theory, vol. IT-18, no. 6, pp. 725-730, Nov. 1972.
[15] J. G. Proakis, Digital Communications. McGraw-Hill International Edition, 2001.
