Shaping Low-Density Lattice Codes Using Voronoi Integers

Nuwan S. Ferdinand*, Brian M. Kurkoski†, Behnaam Aazhang‡ and Matti Latva-aho*
*Centre for Wireless Communications, University of Oulu, Finland
†Japan Advanced Institute of Science and Technology, Nomi, Japan
‡Rice University, Texas, USA
Email: [email protected], [email protected], [email protected] and [email protected]

Abstract—A lattice code construction that employs two separate lattices, a high-dimension lattice for coding gain and a low-dimension lattice for shaping gain, is described. Systematic lattice encoding is a method to encode an integer sequence to a lattice point that is nearby that integer sequence. We describe the "Voronoi integers" Z^m/Λ_s, the set of integers inside the fundamental Voronoi region of a shaping lattice Λ_s, and a concrete scheme to label these integers. By first shaping the information using the Voronoi integers in low dimension, and then performing systematic lattice encoding using a high-dimension lattice, good shaping and coding gains can be simultaneously obtained. We concentrate on the case of using the E_8 lattice for shaping and low-density lattice codes (LDLC) with dimension n ≈ 10,000 for coding. While optimal shaping provides a well-known 1.53 dB gain, previously reported shaping gains with LDLC lattices are on the order of 0.4 dB. The proposed method preserves the shaping gain of the E_8 lattice, that is, as much as 0.65 dB. This shaping operation can be implemented with lower complexity than previous LDLC approaches.

I. INTRODUCTION

Lattice codes can achieve the capacity of the AWGN channel [1]; they use the same real algebra as the wireless channel; and they have algebraic structure that makes them suitable for physical-layer network coding [2], [3]. While there are numerous information-theoretic results that assume the existence of good lattices, there are relatively few concrete constructions that can practically achieve these results.

Progress has been made recently in developing lattices with high coding gain. Various high-gain, high-dimension lattice constructions have been described, but here we concentrate on low-density lattice codes (LDLC) [4]. LDLCs, inspired by LDPC codes, perform iterative lattice decoding, and come within only 0.5 dB of the unconstrained AWGN capacity at 10^{-5} symbol error rate for block length n = 10^5. Low-complexity and low-storage Gaussian mixture decoders [5]-[7] have similar performance.

However, to satisfy a power constraint, a subset of the lattice points must be selected; this is called shaping. At high rates, the contribution of a lattice code's shaping gain and coding gain can be separated [8]. Using an ideal spherically-shaped codebook provides 1.53 dB of gain over a hypercube-shaped codebook.

(This work is supported in part by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research (B) number 26289119; the Academy of Finland, JULIET project number 260755; and the US National Science Foundation.)

While the progress on finding

978-1-4799-5999-0/14/$31.00 ©2014 IEEE


lattices with high coding gain has been impressive, there has been considerably less progress on achieving the shaping limit, particularly using high-coding-gain lattices. Shaping requires a lattice quantization algorithm, which is computationally expensive as the dimension gets large. The most direct approach to shaping is to construct nested lattice codes (lattice coset codes), for example nested LDLC lattices [9] formed using a suboptimal and high-complexity M-algorithm [10] for quantization; this approach yields around 0.4 dB of the possible 1.53 dB shaping gain.

In this paper, shaping is not performed using nested lattices. Rather, the shaping lattice Λ_s and the coding lattice Λ_c are distinct. The coding lattice Λ_c is a high-dimension, high-coding-gain LDLC lattice. We employ systematic lattice encoding, a technique for mapping information integers onto lattice points such that the lattice point is near the corresponding integer sequence [9]. Concretely, an integer sequence c is encoded to a lattice point x ∈ Λ_c such that when x is rounded, the corresponding integer sequence can be recovered. LDLCs and systematic lattice encoding are reviewed in Sec. II.

With this systematic property, if the integers c satisfy a power constraint, then the transmitted lattice point x will approximately satisfy the same power constraint. Accordingly, shaping is performed by selecting integers c which satisfy a power constraint. The "Voronoi integers" Z^m/Λ_s are the set of integer vectors inside the fundamental Voronoi region of the shaping lattice Λ_s. The shaping lattice Λ_s is a low-dimension lattice which has an efficient shaping (quantization) algorithm. The Voronoi integers are described in Sec. III.

By using an LDLC lattice for Λ_c to systematically encode the Voronoi integers, the new code construction possesses the coding gains of Λ_c and the shaping gains of Λ_s. At the same time, the encoder-side shaping algorithm for Λ_s and the decoder-side belief-propagation decoding algorithm for Λ_c both retain their low-complexity characteristics. The new lattice code construction is given in Sec. IV. Sec. V contains numerical results, where we show that the 0.65 dB gain of the E_8 lattice is preserved when applied to this new construction, while still possessing significant coding gains.

II. LOW-DENSITY LATTICE CODES AND SYSTEMATIC LATTICE ENCODING

In this section, we provide an overview of LDLC and systematic lattice encoding.

A. LDLC: Encoding and Decoding

A low-density lattice code (LDLC) is a lattice Λ_c ⊂ R^n defined by a parity-check matrix H ∈ R^{n×n}:

    Λ_c = { x | x = H^{-1} c, c ∈ Z^n },    (1)

such that H is sparse and its inverse exists [4]. Lower-case boldface indicates column vectors, for example x = (x_1, x_2, ..., x_n)^t. The vector c ∈ Z^n represents the information conveyed by the codeword.

LDLCs can be decoded with a linear-complexity, iterative algorithm [4]-[6]. Similar to iterative decoders for low-density parity-check codes, the LDLC decoder is an iterative message-passing scheme described over a bipartite graph containing variable nodes, which ensure the decoded codeword sufficiently matches the received signal, and check nodes, which enforce the parity-check equations defined by H. The decoding algorithm input is y, which is a lattice point x ∈ Λ_c plus an AWGN sequence z with known variance σ². The output is the estimated lattice point x̂ and the corresponding integer sequence ĉ.

As in [9], we use codes based on a lower-triangular parity-check matrix H, which permits low-complexity encoding and decoding. As an example, the following H has maximum degree 3 with n = 6:

    H = [  h1    0    0    0    0    0
            0   h1    0    0    0    0
            0    0   h1    0    0    0
           h2    0   h3   h1    0    0
          -h3   h2    0    0   h1    0
            0  -h3    0  -h2    0   h1 ]

We assume h1 = 1 for simplicity; however, this can easily be generalized. Also, h1 > h2 > h3 > ... is used.

B. Systematic lattice encoding

It is essential that transmitters obey the power constraint in practical communication systems; hence, we need to combine an LDLC Λ_c with a shaping mechanism. Three shaping methods were described in [9]: 1) hypercube shaping, 2) nested lattice shaping and 3) systematic lattice encoding¹. Since systematic lattice encoding is of interest, it is reviewed here.

Systematic lattice encoding is as follows. Given c ∈ Z^n, find x ∈ Λ_c such that ⌊x⌉ = c, where ⌊·⌉ denotes elementwise rounding to the nearest integer. In particular, find the integer vector k = (k_1, k_2, ..., k_n)^t such that:

    Hx = c - k    (2)

and

    |x_i - c_i| ≤ 1/2  for all i = 1, ..., n.    (3)

Note that line i of (2) is given by:

    x_i + \sum_{j=1}^{i-1} H_{i,j} x_j = c_i - k_i.    (4)

This is called systematic encoding because the information integer vector can be recovered by simply rounding the lattice point.

Due to the triangular structure of H, encoding is straightforward, with the k_i and x_i found recursively. Clearly, x_1 = c_1 and k_1 = 0. Continuing recursively for i = 2, 3, ..., n:

    k_i = - ⌊ \sum_{j=1}^{i-1} H_{i,j} x_j ⌉,    (5)

and

    x_i = c_i - ( \sum_{j=1}^{i-1} H_{i,j} x_j - ⌊ \sum_{j=1}^{i-1} H_{i,j} x_j ⌉ ).    (6)

Here, x_i has been written to show that it is within 1/2 of c_i. This shaping method guarantees that |x_i - c_i| ≤ 1/2. Hence, the average energy of the codeword E[||x||²] will be similar to the average energy of the integer input E[||c||²].

The biggest drawback of the lower-triangular structure is that the codeword components associated with the low-degree columns of H are less protected against noise. To compensate, we allocate less information to those elements by using a coarse constellation. That is, for the lower-degree columns, we further restrict the possible values of the associated information integers, which results in a rate penalty [9].

¹This was called "systematic shaping" in [9]. Recognizing that this powerful technique may have applications beyond shaping, this paper uses "systematic lattice encoding" instead.
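The recursion (5)-(6) is direct to implement when H is lower triangular with unit diagonal. Below is a minimal sketch; the numerical values of H are illustrative only, not taken from an actual LDLC.

```python
import numpy as np

def systematic_encode(H, c):
    """Systematic lattice encoding per (5)-(6): given integers c, find
    x in the lattice {H^{-1}(c - k)} such that round(x) == c.
    Assumes H is lower triangular with unit diagonal."""
    n = len(c)
    x = np.zeros(n)
    k = np.zeros(n, dtype=int)
    for i in range(n):
        s = H[i, :i] @ x[:i]             # partial sum  sum_j H_{i,j} x_j
        k[i] = -np.rint(s)               # (5): k_i = -round(s)
        x[i] = c[i] - (s - np.rint(s))   # (6): x_i is within 1/2 of c_i
    return x, k

# Toy example (H values illustrative, not from the paper)
H = np.array([[ 1.0, 0.0, 0.0],
              [ 0.8, 1.0, 0.0],
              [-0.5, 0.8, 1.0]])
c = np.array([3, -2, 5])
x, k = systematic_encode(H, c)
assert np.array_equal(np.rint(x).astype(int), c)   # systematic: rounding recovers c
assert np.allclose(H @ x, c - k)                   # x = H^{-1}(c - k), i.e. (2)
assert np.all(np.abs(x - c) <= 0.5)                # the guarantee (3)
```

The three assertions check exactly the defining properties (2)-(3): the lattice relation Hx = c - k, and that rounding x recovers c.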

III. VORONOI INTEGERS: ENCODING Z^m/Λ

This section describes the Voronoi integers, a method to encode Z^m/Λ_s, where Λ_s is an m-dimension lattice. "Encoding" means mapping information integers u = (u_1, u_2, ..., u_m)^t to codewords c ∈ Z^m/Λ_s. Here, the codewords are also integer vectors. "Decoding" means the reverse mapping: given c, find the corresponding u. The mapping is bijective. Then, an example is given using the D_2 lattice [11, p. 117].

A. Preliminaries

The m×m generator matrix for Λ_s is G, and the basis vectors are in columns, so Gu is a lattice point for u ∈ Z^m. The generator matrix G should satisfy the following conditions. It is lower triangular, where the elements g_{ij} = 0 for j > i, and the diagonal elements g_{ii}, i = 1, ..., m, are positive integers. Also, for each column j, g_{ij}/g_{jj} is an integer, for all elements i in that column. Shortest-distance quantization is denoted by Q_Λ(·).

For the lattice Λ_s, the D_m and E_8 lattices [11] scaled by M ≥ 2 are of primary interest. These lattices satisfy the conditions above. The D_m and E_8 lattices also have efficient decoding algorithms [11], [12]. For example, if Λ_s = M·D_m and shortest-distance quantization for D_m is Q_D, then the quantization of x is given by Q_Λ(x) = Q_D(x/M)·M [13].

The region of R^m called the fundamental parallelotope (or fundamental parallelepiped) [11, p. 4] is given by:

    { Gs | 0 ≤ s_i < 1 },    (7)

where s_i is the i-th element of s. There is a shifted parallelotope region for each point of Λ_s. Any point in R^m is in exactly one such region.
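The efficient D_m quantizer mentioned above can be made concrete. Following the classic algorithm of [12], one rounds every coordinate and, if the rounded coordinates sum to an odd number, re-rounds the coordinate with the largest rounding error to its second-nearest integer. A sketch (function names are ours):

```python
import numpy as np

def quantize_Dm(x):
    """Nearest point of D_m = {v in Z^m : sum(v) even}, per [12]."""
    f = np.rint(x)                    # round each coordinate
    if int(np.sum(f)) % 2 == 0:
        return f
    # Parity is odd: move the worst-rounded coordinate to its
    # second-nearest integer, which flips the parity.
    err = x - f
    i = int(np.argmax(np.abs(err)))
    f[i] += np.sign(err[i]) if err[i] != 0 else 1.0
    return f

def quantize_scaled(x, M):
    """Q_{M D_m}(x) = M * Q_{D_m}(x / M), as noted in the text [13]."""
    return M * quantize_Dm(np.asarray(x, dtype=float) / M)

p = quantize_scaled([3.2, -1.1], 4)   # nearest point of 4*D_2
```

For the point (3.2, -1.1) this returns the 4D_2 point (4, -4), whose scaled-down version (1, -1) has even coordinate sum as required.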

B. Encoding

The codebook is Z^m/Λ_s, the set of integers inside the fundamental Voronoi region of Λ_s. The codebook has cardinality

    |det Λ_s| = \prod_{i=1}^{m} g_{ii},    (8)

since G is triangular. Thus the code rate R is:

    R = (1/m) log2 |det G|  bits/dimension.    (9)

Encode information u = (u_1, ..., u_m)^t to codeword c = (c_1, ..., c_m)^t ∈ Z^m/Λ_s as follows. Select u_i as:

    u_i ∈ {0, 1, ..., g_{ii} - 1},    (10)

and define d by scaling each position by g_{ii}:

    d = ( u_1/g_{11}, u_2/g_{22}, ..., u_m/g_{mm} )^t.    (11)

Then, encode to c ∈ Z^m/Λ_s as:

    c = Gd - Q_Λ(Gd).    (12)

Let us consider Gd. In general, Gd is not a point of Λ_s, but it is an integer vector and it is in the fundamental parallelotope. It is also labeled by a unique u. This can be clearly shown using an example of the scaled version of D_2, with:

    G = [ 4  0
          4  8 ].    (13)

Then Gd is written as:

    [ 4  0 ] [ u_1/4 ]   [ 1  0 ] [ u_1 ]
    [ 4  8 ] [ u_2/8 ] = [ 1  1 ] [ u_2 ].    (14)

The left-hand side shows Gd is in the fundamental parallelepiped. The right-hand side shows the point Gd is an integer vector; here the assumption of integral g_{ij}/g_{jj} for each column j is used.

[Fig. 1. The codebook Z^2/4D_2. Dashed lines show the Voronoi boundaries of 4D_2, and the corresponding information integers u are labeled as (u_1, u_2).]

C. Decoding

Decode codeword c to information u as follows. Any point c can be written as:

    c = Gb + Gd,    (15)

where b is a vector of integers and 0 ≤ d_i < 1. Here c is in the parallelepiped for Gb. More specifically:

    d = ( u_1/g_{11}, u_2/g_{22}, ..., u_m/g_{mm} )    (16)

is of interest, where 0 ≤ u_i/g_{ii} < 1 holds for all i ∈ {1, ..., m}.

Using the lower triangular structure of G, the first row of (15) is:

    c_1 = g_{11} (b_1 + d_1),    (17)

which has a unique solution since b_1 is an integer and d_1 is fractional. Continuing recursively for i = 2, 3, ..., m,

    c_i = g_{ii} (b_i + d_i) + \sum_{j=1}^{i-1} g_{ij} (b_j + d_j),    (18)

it is always possible to find unique b_i and d_i. A decoding algorithm is given as follows:

1) Input: c with elements c_i and generator matrix G with elements g_{ij}.
2) For each i = 1, 2, ..., m:
   a) find t_i, where t_i = b_i + d_i:

          t_i = ( c_i - \sum_{j=1}^{i-1} g_{ij} t_j ) / g_{ii},    (19)

   b) find the integer part b_i:

          b_i = ⌊t_i⌋,    (20)

   c) find the information integer u_i:

          u_i = (t_i - b_i) g_{ii}.    (21)

3) Output: integer vector u = (u_1, ..., u_m).

D. Invertibility

Given that u was encoded to c, it is shown that the decoding algorithm with input c gives u as the output; that is, the encoding is invertible. For any decoding algorithm input c, there exist unique integers b and fractions d such that c = Gb + Gd; the decoding algorithm gives an explicit method to find them. Since Q_Λ(Gd) is a lattice point, say Gb', the codeword is written as:

    c = Gd - Gb',    (22)

as in (12). Given this c, the decoding algorithm will find the corresponding d, as well as u, which are unique.
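The invertibility claim can be checked exhaustively in the D_2 example of (13). The sketch below uses a brute-force shortest-distance search in place of an efficient quantizer (adequate only in this toy dimension) and verifies that decoding inverts encoding over all |det G| = 32 labels:

```python
import numpy as np
from itertools import product

G = np.array([[4, 0],
              [4, 8]])   # generator of 4*D_2 from (13)

def quantize_bruteforce(p):
    """Shortest-distance quantizer Q_Lambda by exhaustive search over a
    small range of integer coefficients (toy-dimension stand-in)."""
    pts = [G @ np.array(b) for b in product(range(-4, 5), repeat=2)]
    return min(pts, key=lambda v: np.sum((p - v) ** 2))

def encode(u):
    """(10)-(12): u_i in {0, ..., g_ii - 1}  ->  c in Z^2 / Lambda_s."""
    d = u / np.diag(G)                                   # (11)
    Gd = G @ d
    return (Gd - quantize_bruteforce(Gd)).astype(int)    # (12)

def decode(c):
    """(19)-(21): recover u from c via the triangular structure of G."""
    m = len(c)
    t = np.zeros(m)
    u = np.zeros(m, dtype=int)
    for i in range(m):
        t[i] = (c[i] - G[i, :i] @ t[:i]) / G[i, i]   # (19)
        b = np.floor(t[i])                            # (20)
        u[i] = round((t[i] - b) * G[i, i])            # (21)
    return u

# Round trip over the whole codebook: |det G| = 4 * 8 = 32 labels
for u1, u2 in product(range(4), range(8)):
    u = np.array([u1, u2])
    assert np.array_equal(decode(encode(u)), u)
```

Since all divisions here are by powers of two, the floating-point arithmetic is exact and the round trip succeeds for every label.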

[Fig. 2. Encoder and decoder using Z^m/Λ_s integers in LDLC. (a) Encoder: a splitter divides u ∈ Z^n into blocks u_1, ..., u_{n/m}, each encoded to Z^m/Λ_s; the combined integers c ∈ Z^n are systematically encoded to x = H^{-1}(c - k), and the offset a is subtracted to give x' = x - a. (b) Decoder: the offset a is added to y = x' + z, LDLC decoding yields x̃, rounding gives c̃, and the inverse map Λ_s → Z^m on each block recovers ũ.]

E. Numerical Example

Fig. 1 shows an example of Z^2/4D_2, using the scaled D_2 lattice:

    G = 4 [ 1  0
            1  2 ].    (23)

The figure shows the Voronoi region and the elements c, which are integers inside the shaping region, labeled as u = (u_1, u_2). Note that for consistent tie-breaking, some elements on the Voronoi boundary are included, and some are not. In this case |det G| = 32, so the rate is R = 2.5 bits/dimension.

IV. PROPOSED CODE CONSTRUCTION AND DECODER

A. Code Construction

This section describes the code construction obtained by performing systematic lattice encoding on the Voronoi integers. Systematic lattice encoding does not change the structure of the lattice; it is only a mapping from integers to lattice points, so the high coding gain properties of Λ_c are retained. By using systematic lattice encoding, the average power of x is similar to the average power of c. The Voronoi integers give the same shaping gain observed in the Λ_s lattice with much less complexity. Thus, we can reach the goal of simultaneously having good coding gain along with some shaping gain.

The shaping lattice Λ_s has dimension m and the coding lattice Λ_c has dimension n > m. We require that n/m is an integer. The transmitter divides the integer vector u into n/m integer vectors such that u = [u_1, u_2, ..., u_{n/m}], where u_i ∈ Z^m. Then, these are encoded to Z^m/Λ_s to obtain c_i, for all i ∈ {1, ..., n/m}, as described in Section III-B.

It is evident [13] that the mean of c may not be zero; therefore, x also has non-zero mean. Hence, subtract the mean of Z^m/Λ_s to obtain the transmitted codeword x':

    x' = x - a,    (24)

where

    a = [ a_s  a_s  ...  a_s ]  (a_s repeated n/m times),    (25)

and a_s ∈ R^m is the mean of Z^m/Λ_s.

B. Channel and Decoding

The source transmits the codeword x' via an AWGN channel and the received signal is y:

    y = x' + z,    (26)

where z is the AWGN noise vector with variance σ_z².

At the decoder, the receiver adds the offset a to obtain y' = y + a = x + z and performs LDLC iterative decoding to obtain x̃. Next, it rounds to the nearest integer to find the integer vector c̃. Finally, it performs the inverse mapping for Voronoi integers as in Section III-C to obtain the estimated information vector ũ. The encoding and decoding block diagram is shown in Fig. 2.

Remark 1: The complexity of shaping using the M-algorithm is O(ndM) [9], where d is typically 7 and M is the depth of the search (M = 51 was used in the simulations in the following section). On the other hand, shaping using the E_8 algorithm, as proposed here, can be accomplished in about 72 steps per block [11, p. 450], so the complexity scales as 72·(n/m), that is, 9n for the E_8 lattice. Both shaping operations are linear in n, but for the proposed approach the coefficient on n is lower, and moreover the shaping gain is better.
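To make the Fig. 2 chain concrete, here is an end-to-end sketch in toy dimensions (m = 2, n = 4, shaping lattice 4D_2 from (13), and an illustrative 4×4 lower-triangular H standing in for a real LDLC parity-check matrix). Over a noiseless channel the decoder chain (add offset, round, invert the Voronoi map) recovers the information exactly:

```python
import numpy as np
from itertools import product

G = np.array([[4, 0], [4, 8]])          # shaping lattice generator, from (13)
H = np.array([[ 1.0,  0.0, 0.0, 0.0],   # illustrative toy H, not an actual LDLC
              [ 0.7,  1.0, 0.0, 0.0],
              [-0.4,  0.6, 1.0, 0.0],
              [ 0.3, -0.5, 0.8, 1.0]])

def quantize(p):                        # brute-force Q_{4D_2} (toy dimensions)
    pts = [G @ np.array(b) for b in product(range(-4, 5), repeat=2)]
    return min(pts, key=lambda v: np.sum((p - v) ** 2))

def voronoi_encode(u):                  # (10)-(12)
    Gd = G @ (u / np.diag(G))
    return (Gd - quantize(Gd)).astype(int)

def voronoi_decode(c):                  # (19)-(21)
    t, u = np.zeros(2), np.zeros(2, dtype=int)
    for i in range(2):
        t[i] = (c[i] - G[i, :i] @ t[:i]) / G[i, i]
        u[i] = round((t[i] - np.floor(t[i])) * G[i, i])
    return u

def systematic_encode(c):               # (5)-(6)
    x = np.zeros(len(c))
    for i in range(len(c)):
        s = H[i, :i] @ x[:i]
        x[i] = c[i] - (s - np.rint(s))
    return x

# Mean a_s of the Z^2/4D_2 codebook, for the offset in (24)-(25)
book = [voronoi_encode(np.array(u)) for u in product(range(4), range(8))]
a_s = np.mean(book, axis=0)

# --- Encoder (Fig. 2a): split u, shape each block, systematically encode
u = [np.array([2, 5]), np.array([1, 3])]             # n/m = 2 blocks
c = np.concatenate([voronoi_encode(ui) for ui in u])
x1 = systematic_encode(c) - np.tile(a_s, 2)          # x' = x - a

# --- Decoder (Fig. 2b), noiseless: add offset, round, invert the map
c_hat = np.rint(x1 + np.tile(a_s, 2)).astype(int)
u_hat = [voronoi_decode(c_hat[0:2]), voronoi_decode(c_hat[2:4])]
assert all(np.array_equal(a, b) for a, b in zip(u, u_hat))
```

With noise, the rounding step would be replaced by LDLC iterative decoding of x̃ before the inverse Voronoi map, as described above.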

V. NUMERICAL RESULTS

Efficient decoding and quantization schemes are available for the E_8 lattice [12]; further, the E_8 lattice has a shaping gain of 0.65 dB [11]. Hence, we use the E_8 lattice to obtain Voronoi integers, i.e., Z^8/M·E_8, in simulations.

Fig. 3 illustrates shaping gains for different constellation sizes. For the Z^8/M·E_8 codebook alone, we observe shaping gains of 0.55, 0.62, 0.64, and 0.65 dB for M = 4, 8, 16, and 32. Further, the shaping gain converges to the E_8 shaping bound. On the other hand, the Voronoi integers with systematic LDLC encoding achieve shaping gains of 0.27, 0.54, 0.62, and 0.65 dB. We conclude that the shaping gain of the proposed lattice code construction using Z^8/M·E_8 approaches the E_8 shaping bound for larger constellations. We have observed that the effect of the mean offset is significant for small constellations and numerically found the optimal mean offset for M = 4:

    a_opt = ( 0.1964, -0.0359, -0.0689, -0.0999, -0.1291, -0.1564, -0.1818, -0.2053 ).

One may notice that the gap between the shaping gains of c and x' is significant for small constellation sizes, i.e., M ≤ 4; however, it is less significant and asymptotically small for larger constellations. The reason for the larger gap at small M is that x'_i is uniformly distributed over c_i ± 1/2, and the effect of the additional 1/2 is significant for small constellations and less significant for larger ones.

[Fig. 3. Shaping gain with the E_8 lattice for different constellation sizes M; curves show x' (Z^8/M·E_8 with LDLC), c (Z^8/M·E_8), and the E_8 shaping gain bound, with offsets a_s = a_opt and a_s = 0. For M = 4, adding the offset a_opt increases the gain by 0.07 dB. As M gets large, the shaping gain of the new construction approaches that of the E_8 lattice alone.]

Fig. 4 shows the symbol error rate (SER) versus average SNR for different shaping methods for n = 10000. The rate is fixed at R = 4.935 bits/dimension; the slight penalty is due to the selection of constellation sizes for different rows of the LDLC parity-check matrix to protect the unprotected integers, as described in [9]. It is observed that the proposed method has a 0.645 dB gain over hypercube shaping and a 0.25 dB gain over high-complexity nested lattice shaping [9]. With the proposed shaping method, LDLC is only 0.55 dB away from the uniform input distribution at SER = 10^{-5} for n = 10000, which is 1.53 dB away from AWGN capacity. This shows that LDLC performs close to the uniform input distribution even with the inherent LDLC coding loss of 0.8 dB for n = 10000 and the rate penalty of 0.4 dB due to unprotected integers.

[Fig. 4. Symbol error rate versus average SNR for different shaping methods: hypercube shaping [9], nested lattice shaping [9], proposed shaping, uniform input capacity, and AWGN capacity; n = 10000 and R = 4.935 bits/dimension.]

REFERENCES

[1] U. Erez and R. Zamir, "Achieving (1/2) log(1 + SNR) on the AWGN channel with lattice encoding and decoding," IEEE Trans. Inform. Theory, vol. 50, no. 10, pp. 2293-2314, Oct. 2004.

[2] B. Nazer and M. Gastpar, "Compute-and-forward: Harnessing interference through structured codes," IEEE Trans. Inform. Theory, vol. 57, no. 10, pp. 6463-6486, Oct. 2011.
[3] N. Ferdinand, M. Nokleby, and B. Aazhang, "Low density lattice codes for the relay channel," in Proc. Int. Conf. Commun., June 2013, pp. 3035-3040.
[4] N. Sommer, M. Feder, and O. Shalvi, "Low-density lattice codes," IEEE Trans. Inform. Theory, vol. 54, no. 4, pp. 1561-1585, July 2008.
[5] B. Kurkoski and J. Dauwels, "Message-passing decoding of lattices using Gaussian mixtures," in Proc. Int. Symp. Information Theory (ISIT), July 2008, pp. 2489-2493.
[6] Y. Yona and M. Feder, "Efficient parametric decoder of low density lattice codes," in Proc. Int. Symp. Information Theory (ISIT), June 2009, pp. 744-748.
[7] B. Kurkoski and J. Dauwels, "Reduced-memory decoding of low-density lattice codes," IEEE Commun. Lett., vol. 14, no. 7, pp. 659-661, July 2010.
[8] G. D. Forney, Jr. and G. Ungerboeck, "Modulation and coding for linear Gaussian channels," IEEE Trans. Inform. Theory, vol. 44, no. 6, pp. 2384-2415, Oct. 1998.
[9] N. Sommer, M. Feder, and O. Shalvi, "Shaping methods for low-density lattice codes," in Proc. Information Theory Workshop (ITW), Oct. 2009, pp. 238-242.
[10] T. M. Aulin, "Breadth-first maximum likelihood sequence detection: Basics," IEEE Trans. Commun., vol. 47, no. 2, pp. 208-216, Feb. 1999.
[11] J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices and Groups, 3rd ed. New York, NY, USA: Springer-Verlag, 1999.
[12] J. Conway and N. Sloane, "Fast quantizing and decoding algorithms for lattice quantizers and codes," IEEE Trans. Inform. Theory, vol. 28, no. 2, pp. 227-232, Mar. 1982.
[13] J. Conway and N. Sloane, "A fast encoding method for lattice codes and quantizers," IEEE Trans. Inform. Theory, vol. 29, no. 6, pp. 820-824, Nov. 1983.