Linear Transceivers for Sending Correlated Sources Over the Gaussian MAC

Onkar Dabeer
School of Technology and Computer Science, Tata Institute of Fundamental Research, Mumbai, India
Email: [email protected]

Aline Roumy, Christine Guillemot
IRISA-INRIA, 35042 Rennes Cedex, France

Abstract— We consider the problem of sending two correlated random sources over a Gaussian multiple-access channel. The sources are assumed to be temporally memoryless. The performance criterion is the mean squared error (MSE), and we seek linear transceivers that minimize it. When the bandwidth expansion factor is unity, it is shown that uncoded transmission is the best linear code for any SNR. When the bandwidth expansion factor is two, we present a scheme that involves uncoded transmission of one source with appropriate power adaptation and uncoded transmission of the other source and its negative with appropriate power adaptation. This scheme is shown to strictly outperform coding strategies based on TDMA.

I. INTRODUCTION

Sensor networks have great potential in applications such as environmental monitoring, and currently there is vibrant research activity in this area. In one design of such networks, the sensors organize themselves into clusters and the data collected is sent to a cluster head for further processing. Since the sensors observe the same physical phenomenon, their observations are correlated. Moreover, due to their proximity to each other, they share the same wireless communication channel to the cluster head. This naturally leads to the problem of communicating correlated information sources over a multiple-access channel (MAC). This is a challenging problem for which the source-channel separation theorem does not hold in general [1] and a single-letter characterization of the capacity region is not known. In fact, it is known that source-channel separation can be substantially worse than joint source-channel coding [2]. In this paper, we look at linear (over the real field) joint source-channel codes for transmitting two correlated memoryless sources over a Gaussian MAC. One motivation for this is the simplicity of linear processing, which makes it attractive for building low-cost sensors. Another motivation is a recent result in [3], which shows that for transmitting memoryless, bivariate Gaussian sources over the Gaussian MAC, uncoded transmission is optimal below a certain SNR. It is then natural to ask if linear codes can improve over uncoded transmission for SNR greater than the threshold given in [3]. In this paper, we study the best linear transceivers for this problem.

This paper is organized as follows. The problem statement is given in Section II. In Section III-A, we present an upper bound on the MSE achievable with block linear codes, while in Section III-B we present a lower bound. In Section III-C we prove that when the source and channel bandwidths are the same, uncoded transmission is the best linear code for any SNR. In Section III-D we show that when the bandwidth expansion factor is two, the linear code corresponding to the upper bound of Section III-A uses uncoded transmission of one source with power adaptation, and uncoded transmission of the other source and its negative with power adaptation. This code is better than TDMA-based codes.

Notation: Bold lowercase letters denote vectors while bold capital letters denote matrices. $\mathbf{1}$ denotes the all-ones vector, $\mathbf{0}$ denotes the zero vector/matrix, $\mathbf{I}_n$ denotes the $n \times n$ identity matrix, and $\mathrm{diag}(\mathbf{x})$ denotes the diagonal matrix with diagonal $\mathbf{x}$. We use := and =: to define terms; for example, $2 + x^2 =: 2 + y$ implicitly defines $y := x^2$. The least achievable MSE using linear codes is denoted by MSE*.

II. PROBLEM STATEMENT

Suppose we have two correlated sources $\{s_{1,t}, s_{2,t}\}_{t=1}^{N}$ that are i.i.d. with zero mean and covariance matrix $\left[\begin{smallmatrix}1 & \rho \\ \rho & 1\end{smallmatrix}\right]$, $\rho \in [0, 1]$. These sources are to be transmitted over the Gaussian MAC. For simplicity, we write the source symbols in vector notation $\mathbf{s}_k$, $k = 1, 2$. The source vectors $\mathbf{s}_k$ are to be transmitted using $M$ uses of the channel. The bandwidth expansion factor is defined to be $B = M/N$, and in this paper, for simplicity, we only consider $B = 1, 2$. Each source is encoded separately and has a power constraint $P$. Let $\mathbf{x}_k = \mathbf{H}_k \mathbf{s}_k$ be the encoded signal, where $\mathbf{H}_k$ is a real $M \times N$ encoding matrix. To ensure the power constraint we need
$$E\left\{\|\mathbf{x}_k\|^2\right\} = \mathrm{tr}\left\{\mathbf{H}_k \mathbf{H}_k^T\right\} \le MP, \qquad k = 1, 2, \qquad (1)$$
where we have used the fact that the covariance matrix of $\mathbf{s}_k$ is the identity $\mathbf{I}_N$. The output of the Gaussian MAC is
$$\mathbf{y} = \sum_{k=1}^{2} \mathbf{x}_k + \mathbf{w},$$
where $\mathbf{w}$ is additive white Gaussian noise with covariance $\sigma^2 \mathbf{I}_M$.
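As an illustrative companion to this setup (not part of the original paper), the following Python sketch draws correlated source blocks, applies example encoders satisfying (1), and forms the MAC output. The block length, correlation, power, noise variance, and the particular choice of H_1, H_2 are assumptions made only for the example:

# A minimal numerical sketch of the setup above: two correlated i.i.d.
# sources, separate linear encoders H_1, H_2 of size M x N, and the
# Gaussian MAC y = H_1 s_1 + H_2 s_2 + w.
import numpy as np

rng = np.random.default_rng(0)
N, B = 4, 2              # source block length and bandwidth expansion factor
M = B * N                # number of channel uses
rho, P, sigma2 = 0.7, 1.0, 0.5

# Example encoders: uncoded transmission padded with zeros (one of many
# admissible choices); each satisfies tr{H_k H_k^T} <= M*P as in (1).
H1 = np.vstack([np.sqrt(P) * np.eye(N), np.zeros((M - N, N))])
H2 = np.vstack([np.zeros((M - N, N)), np.sqrt(P) * np.eye(N)])
assert np.trace(H1 @ H1.T) <= M * P and np.trace(H2 @ H2.T) <= M * P

# Draw correlated Gaussian source vectors and pass them through the MAC.
cov2 = np.array([[1.0, rho], [rho, 1.0]])
s12 = rng.multivariate_normal(np.zeros(2), cov2, size=N)   # shape (N, 2)
s1, s2 = s12[:, 0], s12[:, 1]
w = rng.normal(scale=np.sqrt(sigma2), size=M)
y = H1 @ s1 + H2 @ s2 + w
print(y.shape)   # (M,)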

Let
$$\mathbf{H} := [\mathbf{H}_1\ \ \mathbf{H}_2] \quad \text{and} \quad \mathbf{s} := \left[\mathbf{s}_1^T\ \ \mathbf{s}_2^T\right]^T.$$
Then $\mathbf{y} = \mathbf{H}\mathbf{s} + \mathbf{w}$. We note that $\mathbf{s}$ has zero mean and covariance matrix
$$\mathbf{C} = \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix} \otimes \mathbf{I}_N.$$

The receiver generates an estimate $\hat{\mathbf{s}}$ and the metric of interest to us is
$$\mathrm{MSE} = \frac{1}{2N}\,E\left[\|\hat{\mathbf{s}} - \mathbf{s}\|^2\right].$$
For a fixed transmitter, the MMSE linear receiver can be obtained by finding the MMSE estimate for each $s_i$, which is the $i$th component of $\mathbf{s}$. Let
$$\mathbf{r}_i = E[s_i \mathbf{y}] = \mathbf{H}\mathbf{C}\mathbf{e}_i, \qquad (2)$$
where $\mathbf{e}_i$ has 1 in the $i$th position and zero entries otherwise. Let
$$\mathbf{R} = E[\mathbf{y}\mathbf{y}^T] = \mathbf{H}\mathbf{C}\mathbf{H}^T + \sigma^2 \mathbf{I}_M. \qquad (3)$$
Then for the MMSE estimate $\hat{s}_i$:
$$E[(s_i - \hat{s}_i)^2] = 1 - \mathbf{r}_i^T \mathbf{R}^{-1} \mathbf{r}_i = 1 - \mathrm{tr}\left\{\mathbf{R}^{-1}\mathbf{r}_i\mathbf{r}_i^T\right\}.$$
Then the MSE is
$$\begin{aligned}
\mathrm{MSE} &= \frac{1}{2N}\sum_{i=1}^{2N} E[(s_i - \hat{s}_i)^2] \\
&= 1 - \mathrm{tr}\left\{\mathbf{R}^{-1}\left(\frac{1}{2N}\sum_{i=1}^{2N}\mathbf{r}_i\mathbf{r}_i^T\right)\right\} \\
&= 1 - \mathrm{tr}\left\{\mathbf{R}^{-1}\mathbf{H}\mathbf{C}\left(\frac{1}{2N}\sum_{i=1}^{2N}\mathbf{e}_i\mathbf{e}_i^T\right)\mathbf{C}\mathbf{H}^T\right\} \\
&= 1 - \frac{1}{2N}\,\mathrm{tr}\left\{\mathbf{R}^{-1}\mathbf{H}\mathbf{C}^2\mathbf{H}^T\right\}. \qquad (4)
\end{aligned}$$
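The chain of equalities leading to (4) can be checked numerically. The sketch below (with illustrative, assumed values for N, M, ρ, and σ², and a randomly drawn H) compares the average of the per-component errors with the compact expression (4):

# Sketch: check that averaging the per-component MMSE errors
# 1 - r_i^T R^{-1} r_i reproduces the compact form (4),
# MSE = 1 - tr{R^{-1} H C^2 H^T}/(2N), for an arbitrary H.
import numpy as np

rng = np.random.default_rng(1)
N, M, rho, sigma2 = 3, 6, 0.6, 0.4
C = np.kron(np.array([[1.0, rho], [rho, 1.0]]), np.eye(N))
H = rng.normal(size=(M, 2 * N))          # any encoding matrix H = [H_1 H_2]
R = H @ C @ H.T + sigma2 * np.eye(M)     # (3)
Rinv = np.linalg.inv(R)

per_symbol = [1.0 - (H @ C[:, i]) @ Rinv @ (H @ C[:, i]) for i in range(2 * N)]
mse_sum = np.mean(per_symbol)                                    # (1/2N) * sum
mse_compact = 1.0 - np.trace(Rinv @ H @ C @ C @ H.T) / (2 * N)   # (4)
print(np.isclose(mse_sum, mse_compact))   # True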

Problem Statement: Minimize the MSE (4) over all encoding matrices $\mathbf{H}_1$, $\mathbf{H}_2$ satisfying the power conditions (1). (We recall that the least achievable MSE is denoted by MSE*.)

To address this problem, it is convenient to express the MSE and the power constraints in a different form. Let $\mathbf{G} := \mathbf{H}\sqrt{\mathbf{C}}$; since $\mathbf{C}$ is invertible we work with $\mathbf{G}$ instead of $\mathbf{H}$. Using $\mathrm{tr}\{\mathbf{A}\mathbf{B}\} = \mathrm{tr}\{\mathbf{B}\mathbf{A}\}$ and (3), we get
$$\mathrm{MSE} = 1 - \frac{1}{2N}\,\mathrm{tr}\left\{\mathbf{G}^T\left(\mathbf{G}\mathbf{G}^T + \sigma^2\mathbf{I}_M\right)^{-1}\mathbf{G}\mathbf{C}\right\}. \qquad (5)$$

From the singular value decomposition we have $\mathbf{G} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T$, where $\mathbf{U}$ is an $M \times M$ orthogonal matrix, $\mathbf{V}$ is a $2N \times 2N$ orthogonal matrix, and $\boldsymbol{\Sigma}$ is an $M \times 2N$ matrix with zeros outside the main diagonal and the singular values of $\mathbf{G}$ along the main diagonal. Let $r = \min\{M, 2N\}$ and $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_r$ be the singular values of $\mathbf{G}$. We note that $r = N$ for $B = 1$ and $r = 2N$ for $B = 2$. Let $\theta_i := \lambda_i^2$ and, for a vector $\mathbf{x}$, $\|\mathbf{x}\|_C^2 := \mathbf{x}^T\mathbf{C}\mathbf{x}$. Then it is shown in Appendix I that
$$\mathrm{MSE} = 1 - \frac{1}{2N}\sum_{i=1}^{r}\frac{\theta_i}{\sigma^2 + \theta_i}\,\|\mathbf{v}_i\|_C^2, \qquad (6)$$
where $\mathbf{v}_i$ is the $i$th column of $\mathbf{V}$.
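The equivalence of (5) and (6) can likewise be verified numerically; the sketch below uses an arbitrary random G and illustrative, assumed parameter values:

# Sketch: numerical check that the SVD form (6) agrees with (5).
import numpy as np

rng = np.random.default_rng(2)
N, M, rho, sigma2 = 3, 6, 0.6, 0.4
C = np.kron(np.array([[1.0, rho], [rho, 1.0]]), np.eye(N))
G = rng.normal(size=(M, 2 * N))

# (5): MSE = 1 - tr{G^T (G G^T + sigma^2 I)^{-1} G C}/(2N)
mse5 = 1.0 - np.trace(G.T @ np.linalg.inv(G @ G.T + sigma2 * np.eye(M)) @ G @ C) / (2 * N)

# (6): MSE = 1 - (1/2N) sum_i theta_i/(sigma^2 + theta_i) * ||v_i||_C^2
U, lam, Vt = np.linalg.svd(G)             # G = U Sigma V^T, lam in descending order
theta = lam ** 2
v = Vt.T                                   # columns v_i of V
mse6 = 1.0 - sum(
    th / (sigma2 + th) * (v[:, i] @ C @ v[:, i]) for i, th in enumerate(theta)
) / (2 * N)
print(np.isclose(mse5, mse6))   # True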

We note that $\mathbf{C}$ has eigenvalue $(1+\rho)$ with eigenvectors
$$\frac{1}{\sqrt{2}}\begin{bmatrix}1\\1\end{bmatrix}\otimes\mathbf{e}_i = \frac{1}{\sqrt{2}}\begin{bmatrix}\mathbf{e}_i\\ \mathbf{e}_i\end{bmatrix}, \quad i = 1, \ldots, N,$$
and eigenvalue $(1-\rho)$ with eigenvectors
$$\frac{1}{\sqrt{2}}\begin{bmatrix}1\\-1\end{bmatrix}\otimes\mathbf{e}_i = \frac{1}{\sqrt{2}}\begin{bmatrix}\mathbf{e}_i\\ -\mathbf{e}_i\end{bmatrix}, \quad i = 1, \ldots, N.$$
Hence if $\mathbf{v}_i$ is partitioned into two $N \times 1$ vectors $\mathbf{v}_i^{(j)}$, $j = 1, 2$, then we can write
$$\|\mathbf{v}_i\|_C^2 = \frac{1+\rho}{2}\,\|\mathbf{v}_i^{(1)} + \mathbf{v}_i^{(2)}\|^2 + \frac{1-\rho}{2}\,\|\mathbf{v}_i^{(1)} - \mathbf{v}_i^{(2)}\|^2 \qquad (7)$$
in (6). Further it is shown in Appendix II that (1) becomes
$$\frac{1}{2}\sum_{i=1}^{r}\theta_i\left\|\frac{\mathbf{v}_i^{(1)} + \mathbf{v}_i^{(2)}}{\sqrt{2(1+\rho)}} \pm \frac{\mathbf{v}_i^{(1)} - \mathbf{v}_i^{(2)}}{\sqrt{2(1-\rho)}}\right\|^2 \le MP, \qquad (8)$$
where the $+$ and $-$ signs correspond to the constraints on the two encoders.
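The decomposition (7) is a simple identity that can be checked directly; in the sketch below, the dimensions and ρ are assumed only for illustration:

# Sketch: check the decomposition (7) of ||v||_C^2 for a random unit
# vector v partitioned into v^(1), v^(2).
import numpy as np

rng = np.random.default_rng(3)
N, rho = 4, 0.6
C = np.kron(np.array([[1.0, rho], [rho, 1.0]]), np.eye(N))

v = rng.normal(size=2 * N)
v /= np.linalg.norm(v)
v1, v2 = v[:N], v[N:]

lhs = v @ C @ v                                        # ||v||_C^2
rhs = (1 + rho) / 2 * np.sum((v1 + v2) ** 2) \
    + (1 - rho) / 2 * np.sum((v1 - v2) ** 2)           # right-hand side of (7)
print(np.isclose(lhs, rhs))   # True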

Equivalent Problem: Find an orthogonal matrix $\mathbf{V}$ and nonnegative $\{\theta_i\}$ such that (6) is minimized subject to (8). (We note that $\mathbf{U}$ does not influence the MSE or the power constraints, and hence we take it to be the identity.)

III. MAIN RESULTS

A. An Upper Bound on MSE*

One way to obtain an upper bound is to fix some orthogonal matrix $\mathbf{V}$ and then minimize with respect to $\{\theta_i\}$, which is the minimization of a convex function (6) under the linear constraints (8). Below we provide a way to choose a good $\mathbf{V}$. For simplicity let $a_i = \|\mathbf{v}_i\|_C^2$ and let $b_i := \sum_{j=1}^{i} a_j$, $i = 1, \ldots, r$, $b_0 := 0$. From (6) we get
$$\begin{aligned}
\mathrm{MSE} &= 1 - \frac{1}{2N}\sum_{i=1}^{r}\left(1 - \frac{\sigma^2}{\sigma^2 + \theta_i}\right)a_i \\
&= 1 - \frac{b_r}{2N} + \frac{\sigma^2}{2N}\sum_{i=1}^{r}\frac{b_i - b_{i-1}}{\sigma^2 + \theta_i} \\
&= 1 - \frac{b_r}{2N}\left(1 - \frac{\sigma^2}{\sigma^2 + \theta_r}\right) - \frac{\sigma^2}{2N}\sum_{i=1}^{r-1}\left(\frac{1}{\sigma^2 + \theta_{i+1}} - \frac{1}{\sigma^2 + \theta_i}\right)b_i.
\end{aligned}$$

Since $\theta_i \ge \theta_{i+1} \ge 0$, the coefficients of the $b_i$ are non-positive. Thus, in order to minimize the MSE, we want to choose the $b_i$ as large as possible under the conditions (8). Define $\mathbf{V}_i$ to be the $2N \times i$ matrix with columns $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_i$. Then we note that $\mathbf{V}_i^T\mathbf{V}_i = \mathbf{I}_i$ and
$$b_i = \mathrm{tr}\left\{\mathbf{V}_i^T\mathbf{C}\mathbf{V}_i\right\}.$$

Then from [4, (4), p. 72] we get that
$$b_i \le \xi_1 + \cdots + \xi_i,$$
where $\xi_1 = \xi_2 = \cdots = \xi_N = (1+\rho)$, $\xi_{N+1} = \xi_{N+2} = \cdots = \xi_{2N} = (1-\rho)$ are the eigenvalues of $\mathbf{C}$. We see that the upper bound on $b_i$ is achieved if we choose $\mathbf{v}_i$ to be the eigenvector of $\mathbf{C}$ corresponding to the eigenvalue $\xi_i$, $i = 1, \ldots, r$. Let $\mathbf{V}_C$ denote the orthogonal matrix whose columns are the eigenvectors of $\mathbf{C}$. For the choice $\mathbf{V} = \mathbf{V}_C$, we have
$$\mathrm{MSE} = 1 - \frac{1}{2N}\sum_{i=1}^{r}\xi_i + \frac{\sigma^2}{2N}\sum_{i=1}^{r}\frac{\xi_i}{\sigma^2 + \theta_i}. \qquad (9)$$
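To confirm the specialization (9), the following sketch builds G = UΣV_C^T with U = I for B = 2 and an arbitrary nonincreasing {θ_i} (an assumed choice for the example), and compares (5) with (9):

# Sketch: with V = V_C (the eigenvectors of C) and U = I, compare the
# general MSE expression (5) with the specialized form (9).
import numpy as np

rng = np.random.default_rng(4)
N, rho, sigma2 = 3, 0.6, 0.4
M, r = 2 * N, 2 * N                       # bandwidth expansion B = 2
C = np.kron(np.array([[1.0, rho], [rho, 1.0]]), np.eye(N))

# Columns of V_C: (e_i; e_i)/sqrt(2) (eigenvalue 1+rho), then (e_i; -e_i)/sqrt(2).
I = np.eye(N)
Vc = np.hstack([np.vstack([I, I]), np.vstack([I, -I])]) / np.sqrt(2)
xi = np.concatenate([np.full(N, 1 + rho), np.full(N, 1 - rho)])  # eigenvalues of C

theta = np.sort(rng.uniform(0.5, 2.0, size=r))[::-1]   # any theta_1 >= ... >= theta_r >= 0
G = np.diag(np.sqrt(theta)) @ Vc.T                      # G = U Sigma V_C^T with U = I

mse5 = 1.0 - np.trace(G.T @ np.linalg.inv(G @ G.T + sigma2 * np.eye(M)) @ G @ C) / (2 * N)
mse9 = 1.0 - xi.sum() / (2 * N) + sigma2 / (2 * N) * np.sum(xi / (sigma2 + theta))
print(np.isclose(mse5, mse9))   # True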

If we now minimize (9) over $\{\theta_i\}$ subject to (8), then we obtain an upper bound on the least achievable MSE. We note that this is a convex optimization problem. It is discussed in detail in subsequent sections for $B = 1, 2$.

Remark: The matrix $\mathbf{V}_C$ is obtained by ignoring the power constraints. It therefore leads to a lower bound on the MSE for any given $\{\theta_i\}$. However, minimizing this lower bound over $\{\theta_i\}$ subject to (8) does not lead to a lower bound on MSE*, since this excludes those $\{\theta_i\}$ that do not satisfy (8) when $\mathbf{V}_C$ is used.

B. A Lower Bound on MSE*

One way to obtain a lower bound is to minimize (6) under a constraint weaker than (8). We do so now. By expanding the norm and using $\|\mathbf{v}_i\| = 1$, it is easy to show that

$$\frac{1}{2}\left\|\frac{\mathbf{v}_i^{(1)} + \mathbf{v}_i^{(2)}}{\sqrt{2(1+\rho)}} + \frac{\mathbf{v}_i^{(1)} - \mathbf{v}_i^{(2)}}{\sqrt{2(1-\rho)}}\right\|^2 + \frac{1}{2}\left\|\frac{\mathbf{v}_i^{(1)} + \mathbf{v}_i^{(2)}}{\sqrt{2(1+\rho)}} - \frac{\mathbf{v}_i^{(1)} - \mathbf{v}_i^{(2)}}{\sqrt{2(1-\rho)}}\right\|^2 \ge \frac{1}{1+\rho}.$$

Hence the power constraints (8) lead to the following weaker constraint:
$$\sum_{i=1}^{r}\theta_i \le 2(1+\rho)MP. \qquad (10)$$
Minimizing (6) subject to (10) then yields a lower bound on MSE*.
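A quick numerical check of the inequality above: for a random unit vector v_i, the average of the two per-user terms appearing in (8) is at least 1/(1+ρ). The dimensions and ρ below are illustrative assumptions:

# Sketch: numerical check of the bound used to obtain (10).
import numpy as np

rng = np.random.default_rng(5)
N, rho = 4, 0.6
v = rng.normal(size=2 * N)
v /= np.linalg.norm(v)
v1, v2 = v[:N], v[N:]

a = (v1 + v2) / np.sqrt(2 * (1 + rho))
b = (v1 - v2) / np.sqrt(2 * (1 - rho))
lhs = 0.5 * np.sum((a + b) ** 2) + 0.5 * np.sum((a - b) ** 2)
print(lhs >= 1 / (1 + rho) - 1e-12)   # True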

C. Bandwidth Expansion of One

In this case $M = N$ and $r = N$. We first consider the upper bound derived in Section III-A. From (9), substituting for the eigenvalues and eigenvectors of $\mathbf{C}$, we get that
$$\mathrm{MSE} = 1 - \frac{1+\rho}{2} + \frac{1+\rho}{2N}\sum_{i=1}^{N}\frac{\sigma^2}{\sigma^2 + \theta_i}. \qquad (11)$$
Also the power constraints (8) become
$$\sum_{i=1}^{N}\frac{\theta_i}{1+\rho} \le 2MP = 2NP. \qquad (12)$$
Minimizing (11) under (12) gives $\theta_i^* = 2(1+\rho)P$ for every $i$, and hence
$$\mathrm{MSE} = 1 - \frac{(1+\rho)^2\gamma}{1 + 2(1+\rho)\gamma}, \qquad (13)$$
where $\gamma := P/\sigma^2$ is the SNR at each of the transmitters. The corresponding encoding matrix is given by
$$\mathbf{H}_k = \sqrt{P}\,\mathbf{I}_N.$$
This coincides with the lower bound obtained by minimizing (6) under (10); thus for $B = 1$ uncoded transmission is optimal.
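The B = 1 conclusion can be verified directly: with the uncoded encoders H_k = √P I_N, the MSE computed from (4) coincides with (13). The parameter values in this sketch are illustrative assumptions:

# Sketch: for B = 1, check that uncoded transmission H_k = sqrt(P) I_N
# achieves the MSE in (13).
import numpy as np

N, rho, P, sigma2 = 4, 0.6, 1.0, 0.5
gamma = P / sigma2
C = np.kron(np.array([[1.0, rho], [rho, 1.0]]), np.eye(N))
H = np.sqrt(P) * np.hstack([np.eye(N), np.eye(N)])       # H = [H_1 H_2], M = N

R = H @ C @ H.T + sigma2 * np.eye(N)                      # (3)
mse4 = 1.0 - np.trace(np.linalg.inv(R) @ H @ C @ C @ H.T) / (2 * N)   # (4)
mse13 = 1.0 - (1 + rho) ** 2 * gamma / (1 + 2 * (1 + rho) * gamma)    # (13)
print(np.isclose(mse4, mse13))   # True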

D. Bandwidth Expansion of Two

Now consider $B = 2$, $r = 2N$. We first consider the upper bound derived in Section III-A. Substituting for the eigenvalues and eigenvectors of $\mathbf{C}$, from (9) we get that
$$\mathrm{MSE} = \frac{(1+\rho)\sigma^2}{2N}\sum_{i=1}^{N}\frac{1}{\sigma^2 + \theta_i} + \frac{(1-\rho)\sigma^2}{2N}\sum_{i=N+1}^{2N}\frac{1}{\sigma^2 + \theta_i}, \qquad (14)$$
and the power constraints (8) are now given by
$$\sum_{i=1}^{N}\frac{\theta_i}{1+\rho} + \sum_{i=N+1}^{2N}\frac{\theta_i}{1-\rho} \le 2MP = 4NP. \qquad (15)$$
To find an upper bound on MSE* we have to minimize (14) under (15). Due to space constraints we do not give the details of this minimization, but the solution is given by
$$\theta_i^* = 2(1+\rho)Q_1^*, \quad i = 1, \ldots, N; \qquad \theta_i^* = 2(1-\rho)Q_2^*, \quad i = N+1, \ldots, 2N,$$
where for $\gamma \ge \frac{\rho}{2(1-\rho^2)}$, $Q_1^* = P + \ldots$ and for $\gamma \ldots$
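The minimization of (14) under (15) can also be carried out numerically. The sketch below assumes (by symmetry) equal θ_i within each group of eigen-directions, i.e., the parametrization θ_i = 2(1+ρ)Q_1 for i ≤ N and θ_i = 2(1−ρ)Q_2 for i > N, under which (15) reduces to Q_1 + Q_2 ≤ 2P; the parameter values and the grid search are assumptions made only for illustration:

# Sketch: brute-force minimization of (14) under (15) for B = 2, using
# equal theta within each group of eigen-directions of C.
import numpy as np

rho, P, sigma2 = 0.6, 1.0, 0.5

def mse14(Q1, Q2):
    # (14) evaluated with theta_i = 2(1+rho)Q1 for i <= N and 2(1-rho)Q2 for i > N
    return (1 + rho) * sigma2 / (2 * (sigma2 + 2 * (1 + rho) * Q1)) \
         + (1 - rho) * sigma2 / (2 * (sigma2 + 2 * (1 - rho) * Q2))

Q1_grid = np.linspace(0.0, 2 * P, 20001)
mse_vals = mse14(Q1_grid, 2 * P - Q1_grid)      # spend all available power: Q1 + Q2 = 2P
best = np.argmin(mse_vals)
print(Q1_grid[best], 2 * P - Q1_grid[best], mse_vals[best])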