Procedia Computer Science 110 (2017) 266–273

The 12th International Conference on Future Networks and Communications (FNC 2017)

Systematic Polar Codes for Joint Source-Channel Coding in Wireless Sensor Networks and the Internet of Things

Charles Yaacoub*, Malak Sarkis

Department of Telecommunications, Faculty of Engineering, Holy Spirit University of Kaslik (USEK), P.O. Box 446, Jounieh, Lebanon

Abstract

The Internet of Things (IoT) is becoming an increasingly important topic of interest in the research community. Its requirements meet those of the next-generation 5G mobile communication system, which is expected to be an enabling technology for IoT, where networks of large numbers of sensors impose massive connectivity demands. As polar codes have strongly entered into action within the standardization of 5G, this paper proposes and investigates the use of systematic polar codes for joint source-channel coding of correlated sources, allowing, on one hand, the compression of the volume of data to be transmitted over the network and, on the other hand, the protection of this data from channel impairments. Results show that systematic polar codes can achieve distributed compression with rates close to the theoretical bound, with better error rates obtained for larger blocks. However, stronger compression and shorter block lengths allow for better robustness against transmission errors.

© 2017 The Authors. Published by Elsevier B.V.
Peer-review under responsibility of the Conference Program Chairs.

Keywords: channel coding; compression; distributed source coding; entropy; internet of things; systematic polar codes; wireless sensor networks

1. Introduction

Error correcting codes play a major role in the air interface of modern wireless and mobile communication systems, as they allow for protecting transmitted data from channel impairments. Research efforts for designing capacity-achieving codes that perform close to Shannon's theoretical limits1 led to the development of Turbo codes2, Low-Density Parity-Check (LDPC) codes3, and Polar codes4. LDPC and Turbo codes have been incorporated in different standards, such as Digital Video Broadcasting (DVB)5 and the third (3G) and fourth (4G) generations of mobile communication

* Corresponding author. Tel.: +961-6-600-960; fax: +961-9-600-970. E-mail address: [email protected]

1877-0509 © 2017 The Authors. Published by Elsevier B.V. Peer-review under responsibility of the Conference Program Chairs.
10.1016/j.procs.2017.06.094


systems (e.g. UMTS, LTE, LTE-A, ...), while the more recent polar coding is being considered for next-generation (5G) systems6. The expected 5G will be an enabling technology for applications with massive connectivity demands7, such as the Internet of Things (IoT), where large numbers of devices with multiple sensors and actuators exchange information and control commands, thus forming a wireless sensor network (WSN). The exchange of large amounts of information in the context of IoT raises the problem of data compression and error correction in WSNs, where distributed source coding (DSC) and joint source-channel coding (JSCC) techniques can be exploited.

Based on the Slepian-Wolf theorem18, it is known that, given two statistically correlated sources X and Y, both sources can be independently encoded and jointly decoded with the same compression efficiency as with joint encoding and decoding. Consequently, if Y is compressed down to its entropy H(Y), X can be independently compressed down to the conditional entropy H(X|Y), given that Y is available at the decoder as side information for decoding X. Source correlation can be modeled as a noisy channel with one source (X) as the channel input and the other source (Y) as the output; thus, channel coding techniques can be used to recover X by observing Y as a noisy version of X8-15. When a transmission channel is taken into account in a DSC application, the channel codes used for forward error correction (FEC) over the correlation channel can also be used as FEC codes for the transmission channel, thus allowing for joint source and channel coding.

Several channel coding techniques have been used in different DSC and JSCC applications, where LDPC and Turbo codes have proven to be the best performing codes. For instance, LDPC codes have been used8 for the compression of binary sources with side information at the decoder.
Farah et al.9 used non-binary Turbo codes for the compression of correlated sources, and extended their study to the case of joint source-channel coding. Aaron et al.10 and Yaacoub et al.11,12 used Turbo codes for distributed video coding (DVC), whereas Ascenso et al.13 used LDPC codes for DVC. Different schemes for DSC or JSCC in wireless sensor networks have also been proposed, based on Turbo14 and LDPC15 codes.

Since their invention by Arikan4, polar codes have been well investigated in the literature. The idea behind polar codes is to create J new channels from J independent copies of a channel using a linear transformation, such that the new channels are polarized. Data can then be transmitted over the synthesized good channels, whereas only zeros (frozen bits) are sent over the bad channels, with the same overall capacity. Recent studies16,17 have demonstrated the superior performance of polar codes compared to LDPC and Turbo codes in 5G test scenarios. Furthermore, polar codes can be constructed and decoded using simple algorithms that are more computationally efficient than Turbo and LDPC codecs6, which makes them suitable for a wide range of applications.

In contrast with previous studies19-22, where the use of polar codes in DSC applications was investigated, we previously proposed23 the use of polar codes in their systematic form for DSC, due to their superior error correction capability compared to non-systematic codes24 and their intuitive design approach for DSC, in which only parity information is transmitted to the decoder, where the side information replaces the missing systematic data. In this paper, we extend our study23 to the case of joint source-channel coding, where the correlation between different sources is modeled with a Gaussian channel, and the transmission channel is considered Gaussian as well.
While the Gaussian channel constitutes a simplified scenario compared to practical real-life systems, this paper presents a preliminary study in which a JSCC system is designed based on systematic polar codes, in line with the latest technologies of the future generation (5G) of mobile communications. Our study can be projected onto the context of IoT, where a network of wireless sensors communicates observed data to a central node (e.g. a relay node or base station) for decoding.

The remainder of this paper is organized as follows. In Section 2, the JSCC system considered in this study is presented, along with the source correlation model and a brief review of systematic polar encoding. Section 3 presents the simulation environment and setup, and discusses practical results. Finally, conclusions are drawn in Section 4.

2. System Description

Consider a network of wireless sensors observing a common source of information and transmitting their data to a central base station for decoding, as shown in the block diagram of Fig. 1 (showing a network of only 2 sensors, for simplicity). This model fits several practical scenarios, such as a network of surveillance cameras observing the same scene, a network of fire detection sensors monitoring a protected forest area, etc. In Fig. 1, one of the sensors (Sensor 2) employs conventional source and channel encoding (CSCE) techniques to transmit its observed


data (Y), with the corresponding conventional decoders (CSCD) at the base station. On the other hand, the other sensor (Sensor 1) independently encodes its data (X) using a Systematic Polar Encoder (SPE). At the output of the SPE, the systematic bits {ds} are dropped while only the parity bits {dp} are transmitted over a noisy channel to the base station. If the number of parity bits does not exceed the number of systematic bits, compression is achieved. At the receiver side, a Systematic Polar Decoder (SPD) uses the decoded source Y', an estimate of Y, as a noisy version of the systematic data needed to decode X.

Fig. 1. Block diagram of the proposed JSCC system model.

With conventional encoding, Y can be compressed to a rate close to its entropy bound H(Y) and correctly recovered at the decoder (Y' = Y). This can be achieved using any entropy coding scheme (e.g. Huffman coding25) with a suitable FEC code. As stated earlier, by exploiting the correlation between X and Y at the decoder, X can be compressed to a rate close to the conditional entropy H(X|Y), thus achieving stronger compression than the H(X) required when Y is not exploited for decoding X. For an (N, K) SPE, compression is achieved when N < 2K, and the compression rate of X is defined as:

R = (N − K)/K = N/K − 1.  (1)
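As a quick numerical illustration of Eq. (1) (the values of N and K below are arbitrary examples, not taken from the paper's simulations):

```python
def compression_rate(N: int, K: int) -> float:
    """Compression rate of an (N, K) systematic polar encoder, Eq. (1):
    parity bits transmitted per information bit, R = (N - K) / K."""
    return (N - K) / K

# Compression (R < 1) requires N < 2K: fewer parity bits than info bits.
N, K = 256, 160               # illustrative values only
R = compression_rate(N, K)
print(R)                      # 96 / 160 = 0.6
assert N < 2 * K and R < 1    # compression is indeed achieved here
```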

The case of a binary discrete memoryless source X is considered, with equally likely symbols '0' and '1'. In order to model the correlation between the sources, a virtual channel is used, taking the source X at its input and giving Y at its output. Y need not be discrete; a Gaussian correlation model is thus considered. The system model can thus be simplified as represented by the diagram of Fig. 2.

Fig. 2. Simplified JSCC system model.

The correlation channel is modeled as a Gaussian channel. The binary source X is fed to a binary pulse amplitude modulator (B-PAM) that outputs rectangular pulses of amplitudes ±√Eb and duration Tb, to which a Gaussian random variable (RV) is then added. The channel output is then sampled to obtain the source Y. This correlation channel model is borrowed from communications theory26, where Eb represents the bit energy and the additive Gaussian RV represents zero-mean additive white Gaussian noise (AWGN) with power spectral density N0/2. Therefore, the
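The correlation channel just described is straightforward to simulate. The sketch below (sample size, seed, and Eb/N0 values are our own illustrative choices, not the paper's settings) draws X, maps it through B-PAM, adds AWGN, and estimates the probability that a hard decision on Y disagrees with X:

```python
import numpy as np

def simulate_correlation_channel(n_bits, ebn0_db, rng):
    """B-PAM over AWGN: x in {0,1} -> ±sqrt(Eb) -> add noise -> y.
    Returns the empirical probability that sign(y) disagrees with x."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)        # Eb/N0 in linear scale
    eb, n0 = 1.0, 1.0 / ebn0               # normalize Eb = 1
    x = rng.integers(0, 2, n_bits)         # equally likely source bits
    s = np.sqrt(eb) * (1 - 2 * x)          # B-PAM: 0 -> +sqrt(Eb), 1 -> -sqrt(Eb)
    y = s + rng.normal(0.0, np.sqrt(n0 / 2), n_bits)  # AWGN with variance N0/2
    x_hat = (y < 0).astype(int)            # hard decision on the correlated source
    return np.mean(x_hat != x)

rng = np.random.default_rng(0)
p_low = simulate_correlation_channel(100_000, 0.0, rng)   # weaker correlation
p_high = simulate_correlation_channel(100_000, 6.0, rng)  # stronger correlation
print(p_low, p_high)
```

As expected, the higher Eb/N0 setting yields far fewer disagreements between X and the hard-decided Y, matching the interpretation of Eb/N0 as a correlation parameter.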


correlation between X and Y can be measured by the bit energy to noise density ratio Eb/N0 (i.e. the higher the ratio, the more the sources are correlated). According to Slepian-Wolf theory, the lower bound for R is H(X|Y). It was demonstrated23 that the compression bound can be expressed as:

H(X|Y) = −p log2(p) − (1 − p) log2(1 − p),  (2)

where

p = (1/2) [1 − √(1 − exp(−2Eb/N0))].  (3)

Practically, there is always a gap between the achievable rate, which depends on the code design, and the theoretical bound. In the case of JSCC, additional redundancy bits are required to overcome channel impairments, and thus the gap towards H(X|Y) further increases.

In this study, we consider different SPEs. For an (N, K) code, the number N of output bits (systematic + parity) is chosen as a power of 2 (N = 2^n, for n = 8, 10, 12, 14, 16, or 18), whereas for a given value of N, the compression rate is varied by varying the number K of input data bits. Define xin = [x{i}, 0{i}c], a vector of N bits that includes x{i}, the K information bits of the input vector x at positions defined by the set of indices {i}, and 0{i}c, a set of N−K zeros at the frozen bit indices {i}c. In a non-systematic polar encoder, the output codeword d is obtained by computing: d = xin · F⊗n,

(4)

where F⊗n is the n-fold Kronecker product of the kernel F defined as:

F = | 1 0 |
    | 1 1 | .
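A compact sketch of the non-systematic encoding of Eq. (4), together with a systematic construction consistent with Eq. (5), is given below. The two-pass encode-mask-encode method shown is a widely known systematic polar encoding trick (it relies on F⊗n being its own inverse mod 2 and on the information set being closed under bit-domination, which holds for polar codes); it is not necessarily the exact algorithm of Vangala et al.27, and the small (8, 4) code and its information set are illustrative only:

```python
import numpy as np

def polar_transform(n: int) -> np.ndarray:
    """F^(kron n) for the kernel F = [[1, 0], [1, 1]] (arithmetic mod 2)."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)
    return G

def encode_nonsystematic(x, info_set, n):
    """Eq. (4): K info bits at info_set, zeros at frozen indices, times F^(kron n)."""
    xin = np.zeros(2 ** n, dtype=np.uint8)
    xin[info_set] = x
    return xin.dot(polar_transform(n)) % 2

def encode_systematic(x, info_set, n):
    """Two-pass systematic encoding: encode, zero the frozen positions,
    encode again. The result d satisfies d[info_set] = x, and z = d * F^(kron n)
    has zeros at the frozen indices, i.e. d solves Eq. (5)."""
    G = polar_transform(n)
    v = np.zeros(2 ** n, dtype=np.uint8)
    v[info_set] = x
    u = v.dot(G) % 2
    frozen = np.setdiff1d(np.arange(2 ** n), info_set)
    u[frozen] = 0                  # mask: force the frozen positions of z to zero
    return u.dot(G) % 2            # second pass (F^(kron n) is involutory mod 2)

# Illustrative (8, 4) code; {3, 5, 6, 7} is a typical high-reliability index set.
n, info_set = 3, np.array([3, 5, 6, 7])
x = np.array([1, 0, 1, 1], dtype=np.uint8)
d = encode_systematic(x, info_set, n)
print(d[info_set])   # -> [1 0 1 1]: systematic bits reappear at the info indices
```

Note how the systematic bits sit at the information indices of d rather than at its first K positions, exactly as described in the text.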

In an SPE, the output codeword consists of systematic and parity bits such that d = [d{i}, d{i}c], where the systematic part is ds = d{i} = x{i} and the parity component is dp = d{i}c. Unlike in systematic linear block codes, the systematic bits of an SPE do not appear as the first K bits of the output codeword; rather, they appear at the information bit indices of the SPE output, and the parity bits are therefore placed at the frozen bit indices of d. Given the information vector x, the output codeword d of an SPE is the solution of: d = z · F⊗n,

(5)

where z = [z{i}, 0{i}c], with z{i} and d{i}c being the unknowns. Algorithms for the solution of Eq. 5 are proposed by Vangala et al.27 along with their source codes28, whereas successive cancellation27 is used at the decoder.

3. Practical Results

In our simulations, we first consider the case of DSC, where no noise is applied to the parity bits. Fig. 3 shows the gap between the achievable compression rate and the theoretical compression bound, for a target bit error rate (BER) of 1E-6, using SPEs with different values of n ∈ {8, 10, 12, 14, 16, 18}. It can be clearly observed that the gap towards H(X|Y) decreases as the correlation between X and Y (i.e. the ratio Eb/N0) increases. For example, for n = 12, the gap is reduced by 0.22 bits when Eb/N0 increases from 1.5 dB to 3 dB. On the other hand, these curves show that for a desired compression ratio, a greater energy to correlation-noise ratio is required as n decreases, to achieve the desired BER performance.

In Fig. 4, we plot the BER obtained with different SPEs as a function of the conditional entropy H(X|Y), for a compression rate of 0.64 with different values of n. The dotted line represents the actual compression rate, and the distance between this line and any data point represents the gap towards the compression bound for a given BER. For example, 0.28 bits (per input bit) are required in addition to H(X|Y) to achieve a target BER of 1E-6 for n = 18. This gap increases to 0.47 bits for n = 12, and 0.61 bits for n = 8. It can be clearly observed that for this compression rate, the BER curve corresponding to n = 18 is the closest to the dotted line; thus, n = 18 achieves the best compression performance in this case.


Fig. 3. Achievable compression rate in case of DSC, for a target BER of 1E-6.

Fig. 4. BER obtained with a compression rate of 0.64 (dotted line represents actual compression rate).

In Fig. 5, H(X|Y) is fixed to 0.085 (dotted line) and the BER is measured for different compression rates. As expected, a stronger compression results in an increased BER, regardless of the value of n. By observing the system behavior for larger values of n (i.e. n = 14, 16, and 18), it can be noticed that the BER increases with n for R < 0.4 (roughly). After this threshold (i.e. for R > 0.4), the BER sharply drops and decreases as n increases. This is because an arbitrarily low BER cannot be achieved as R approaches zero (very strong compression) in an (N, K) SPE with very large N; this contrasts with polar codes used in channel coding applications, where better performance is always obtained by increasing N, whose maximum value is bounded by physical constraints (e.g. memory requirements).

Fig. 5. BER obtained with H(X|Y) = 0.085 (represented with a dotted line).


After evaluating our SPE-based DSC system, we next study the case of JSCC, i.e. the influence of transmission channel errors on system performance. As mentioned earlier in the description of our JSCC scenario, we assume that Y is successfully recovered at the decoder (using conventional source and channel coding techniques), whereas the SPE is jointly used for both compression and forward error correction, for the transmission and reconstruction of the source X. The correlation channel is the same as that used for DSC, whereas the symbol energy to noise density ratio (Ec/N0) is varied on the transmission channel (i.e. the channel carrying the parity bits) in order to analyze our JSCC system performance in terms of BER. We consider codes with N = 2^12 and N = 2^16, compression rates of 0.45 and 0.64, and Ec/N0 = 1, 2, 3.5, and 5 dB. The BER is measured in each case and results are reported in Figures 6 through 9. In those figures, we do not represent the BER as a function of H(X|Y) as in Fig. 4, since the uncertainty about the transmitted source increases due to the noisy transmission and, consequently, H(X|Y) would not be the same. Therefore, we plot BER curves as a function of the correlation parameter Eb/N0, for different values of Ec/N0. Furthermore, we keep, for reference, the BER obtained in the case of DSC, represented as a dotted curve labeled "noise-free" on the figures. These reference curves represent the best achievable performance, obtained when no noise affects the transmission of source X.
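In this JSCC setting, the decoder's channel observations come from two different channels: the systematic positions are filled from the side information Y (correlation channel, parameterized by Eb/N0), while the parity positions are filled from the received parity symbols (transmission channel, parameterized by Ec/N0). The sketch below shows one plausible way to assemble the per-bit log-likelihood ratios (LLRs) fed to a successive cancellation decoder; the interface and index sets are our assumptions, while the BPSK-over-AWGN LLR formula L = 2y/σ² is standard:

```python
import numpy as np

def awgn_llr(y: np.ndarray, noise_var: float) -> np.ndarray:
    """LLR of a unit-energy antipodal symbol observed in AWGN with
    variance noise_var: L = 2*y/noise_var (positive favors bit 0)."""
    return 2.0 * y / noise_var

def assemble_llrs(y_side, r_parity, info_set, frozen_set, ebn0_db, ecn0_db):
    """Build the length-N LLR vector for the SPD: side information at the
    systematic (information) indices, received parity at the frozen indices."""
    N = len(y_side) + len(r_parity)
    llr = np.empty(N)
    # Correlation channel: noise variance N0/2 with Eb normalized to 1.
    llr[info_set] = awgn_llr(y_side, 0.5 / (10 ** (ebn0_db / 10)))
    # Transmission channel: noise variance N0/2 with Ec normalized to 1.
    llr[frozen_set] = awgn_llr(r_parity, 0.5 / (10 ** (ecn0_db / 10)))
    return llr

# Noiseless sanity check: observations at exactly ±1 give LLRs whose signs
# match the mapping used above (positive -> bit 0, negative -> bit 1).
info_set, frozen_set = np.array([3, 5, 6, 7]), np.array([0, 1, 2, 4])
llr = assemble_llrs(np.array([1.0, -1.0, 1.0, 1.0]),
                    np.array([1.0, 1.0, -1.0, -1.0]),
                    info_set, frozen_set, ebn0_db=3.0, ecn0_db=2.0)
print(np.sign(llr))
```

The two different scaling factors capture the fact that the decoder can weight side-information positions and transmitted-parity positions according to their respective reliabilities.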

Fig. 6. BER obtained with JSCC, n=12, compression rate = 0.45.

Fig. 7. BER obtained with JSCC, n=12, compression rate = 0.64.

Two major observations can be made from Fig. 6 through Fig. 9. The first is that, for the same input block length (i.e. fixing n = 12 or n = 16), the stronger the compression, the less sensitive the system is to noise. For example, for n = 12 at Eb/N0 = 0 dB, the BER increases from roughly 5E-3 to 1E-1 (i.e. by a factor of 20) when Ec/N0 decreases from 5 dB to 1 dB for a compression rate of 0.45, whereas with a compression rate of 0.64, at the same source correlation level (Eb/N0 = 0 dB) and the same noise levels, the BER increases by a factor of 100 (i.e. from 1E-4 to 1E-2). Thus, although the BER is lower in absolute terms for the weaker compression (rate 0.64), it increases faster with noise, indicating a higher sensitivity to transmission errors. Similarly, for n = 16 at Eb/N0 = 0 dB, when Ec/N0 decreases from 5 dB to 1 dB, the BER increases by a factor of 100 for a compression rate of 0.45, whereas it increases by a factor of 1000 when the compression rate is 0.64. The second observation is that, for the same compression rate, the system is more sensitive to


noise when the block length increases. For example, considering the same operating points discussed above, when the block length increases from n = 12 to n = 16, the BER degradation factor goes from 100 to 1000 for a compression rate of 0.64, and from 20 to 100 for a compression rate of 0.45.

Fig. 8. BER obtained with JSCC, n=16, compression rate = 0.45.

Fig. 9. BER obtained with JSCC, n=16, compression rate = 0.64.

4. Conclusion

In this paper, we investigated the use of systematic polar codes for the joint source-channel coding of correlated sources, in the context of wireless sensor networks and the Internet of Things. A Gaussian model has been considered to represent source correlation, and a Gaussian channel has been considered for transmission. A simple scenario of two correlated sources has been simulated for simplicity; however, the generalization to an arbitrary number of sources is straightforward. It has been shown that a better error rate can be obtained with less compression and longer blocks, whereas the system with stronger compression and shorter blocks was observed to be more robust to degradation due to transmission channel impairments. As future work, we aim at considering more practical scenarios with large numbers of sensors, multiple relay nodes, and fading channels.

References

1. C. Shannon. A mathematical theory of communication. Bell System Technical Journal, vol. 27, July and October 1948, pp. 379-423 and 623-656.
2. C. Berrou, A. Glavieux and P. Thitimajshima. Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes. International Conference on Communications, Geneva, May 1993, pp. 1064-1070.
3. R. Gallager. Information Theory and Reliable Communication. Wiley, 1968.
4. E. Arikan. Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels. IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051-3073, July 2009.


5. C. Douillard, M. Jezequel, C. Berrou, N. Brengarth, J. Tousch, and N. Pham. The Turbo Code Standard for DVB-RCS. 2nd International Symposium on Turbo Codes and Related Topics, Brest, France, Sep. 2000, pp. 535-538.
6. HUAWEI Technologies Co. White Paper. 5G: New Air Interface and Radio Access Virtualization. April 2015.
7. W. Tong, J. Ma, P. Zhu. Enabling technologies for 5G air-interface with emphasis on spectral efficiency in the presence of very large number of links. 2015 21st Asia-Pacific Conference on Communications (APCC), Kyoto, 2015, pp. 184-187. doi: 10.1109/APCC.2015.7412508
8. A. D. Liveris, Z. Xiong and C. N. Georghiades. Compression of binary sources with side information at the decoder using LDPC codes. IEEE Communications Letters, vol. 6, no. 10, pp. 440-442, Oct. 2002.
9. J. Farah, C. Yaacoub, N. Rachkidy and F. Marx. Binary and non-Binary Turbo Codes for the Compression of Correlated Sources Transmitted through Error-Prone Channels. 4th International Symposium on Turbo Codes & Related Topics / 6th International ITG-Conference on Source and Channel Coding (TURBOCODING), Munich, Germany, 2006, pp. 1-6.
10. A. Aaron, R. Zhang and B. Girod. Wyner-Ziv coding of motion video. Thirty-Sixth Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 2002, pp. 240-244, vol. 1.
11. C. Yaacoub, J. Farah and B. Pesquet-Popescu. Feedback channel suppression in distributed video coding with adaptive rate allocation and quantization for multiuser applications. EURASIP Journal on Wireless Communications and Networking, 2008.
12. C. Yaacoub, J. Farah and B. Pesquet-Popescu. New adaptive algorithms for GOP size control with return channel suppression in Wyner-Ziv video coding. International Journal of Digital Multimedia Broadcasting, 2009.
13. J. Ascenso and F. Pereira. Complexity efficient stopping criterion for LDPC based distributed video coding. 5th International Mobile Multimedia Communications Conference, MOBIMEDIA'09, London, UK, September 2009.
14. C. Yaacoub, J. Farah and B. Pesquet-Popescu. Joint Source-Channel Wyner-Ziv Coding in Wireless Video Sensor Networks. 2007 IEEE International Symposium on Signal Processing and Information Technology, Giza, 2007, pp. 225-228.
15. M. Sartipi and F. Fekri. Distributed source coding in wireless sensor networks using LDPC coding: the entire Slepian-Wolf rate region. Wireless Communications and Networking Conference, 2005, pp. 1939-1944, vol. 4.
16. B. Zhang, H. Shen, B. Ying. A 5G Trial of Polar Code. 2016 IEEE Globecom Workshops (GC Wkshps), Washington, DC, 2016, pp. 1-6. doi: 10.1109/GLOCOMW.2016.7848800
17. O. Iscan, D. Lentner, W. Xu. A Comparison of Channel Coding Schemes for 5G Short Message Transmission. 2016 IEEE Globecom Workshops (GC Wkshps), Washington, DC, 2016, pp. 1-6. doi: 10.1109/GLOCOMW.2016.7848804
18. D. Slepian and J. K. Wolf. Noiseless coding of correlated information sources. IEEE Transactions on Information Theory, vol. IT-19, July 1973, pp. 471-480.
19. S. Onay. Polar Codes for Distributed Source Coding. PhD Thesis, Bilkent University, Dec. 2014.
20. V. T. T. Trang, J. W. Kang, M. Jang, J.-H. Kim and S. H. Kim. The performance of polar codes in distributed source coding. 2012 Fourth International Conference on Communications and Electronics (ICCE), Hue, 2012, pp. 196-199.
21. X. Lv, R. Liu and R. Wang. A Novel Rate-Adaptive Distributed Source Coding Scheme Using Polar Codes. IEEE Communications Letters, vol. 17, no. 1, pp. 143-146, Jan. 2013.
22. S. B. Korada and R. L. Urbanke. Polar Codes are Optimal for Lossy Source Coding. IEEE Transactions on Information Theory, vol. 56, no. 4, pp. 1751-1768, April 2010.
23. C. Yaacoub, M. Sarkis. Distributed compression of correlated sources using systematic polar codes. 2016 9th International Symposium on Turbo Codes and Iterative Information Processing (ISTC), Brest, 2016, pp. 96-100. doi: 10.1109/ISTC.2016.7593084
24. E. Arikan. Systematic Polar Coding. IEEE Communications Letters, vol. 15, no. 8, pp. 860-862, Aug. 2011.
25. D. A. Huffman. A Method for the Construction of Minimum-Redundancy Codes. Proceedings of the I.R.E., pp. 1098-1102, Sept. 1952.
26. S. Haykin. Communication Systems. 4th ed., ch. 4, pp. 247-308, Wiley, 2001.
27. H. Vangala, Y. Hong and E. Viterbo. Efficient Algorithms for Systematic Polar Encoding. IEEE Communications Letters, vol. 20, no. 1, pp. 17-20, Jan. 2016.
28. H. Vangala, E. Viterbo and Y. Hong. Polar Coding Algorithms in MATLAB, 2015 [Online]. Available: http://www.ecse.monash.edu.au/staff/eviterbo/polarcodes.html
