Adv. Radio Sci., 10, 159–165, 2012
www.adv-radio-sci.net/10/159/2012/
doi:10.5194/ars-10-159-2012
© Author(s) 2012. CC Attribution 3.0 License.

Advances in Radio Science

Application of syndrome based Turbo decoding with adaptive computational complexity in LTE downlink

J. Geldmacher(1), K. Hueske(1), M. Kosakowski(2), and J. Götze(1)

(1) TU Dortmund University, Information Processing Lab, Otto-Hahn-Str. 4, 44227 Dortmund, Germany
(2) Research In Motion GmbH, Universitätsstr. 140, 44799 Bochum, Germany

Correspondence to: J. Geldmacher ([email protected])

Abstract. This paper describes the application of an adaptive complexity decoder for the Long Term Evolution (LTE) downlink Turbo code. The proposed decoding approach is based on the block syndrome decoding principle and enables an adaptive reduction of the decoding effort depending on the current SNR and iteration number, with negligible influence on the decoding performance. Numerical results in the context of the LTE downlink using typical mobile channels demonstrate the efficiency of the approach.

1 Introduction

Mobile communication usage has shifted from mainly speech-centered use to highly bandwidth-demanding data communication: A mobile phone is no longer just a phone, but has become a personal communication and multimedia device, which is also used for internet browsing, video and audio streaming, online-based navigation, etc. The resulting increasing bandwidth demands are met in part by more sophisticated signal processing in the physical layer, like MIMO transmission or advanced forward error correction (FEC) methods. Considering the Long Term Evolution (LTE), one of the most complex parts of these baseband processing components is the FEC, which is realised using Turbo coding (Berrou et al., 1993). Turbo Codes are used in several applications because of their excellent BER performance compared to classical codes like convolutional or block codes. In case of ideal uncorrelated Gaussian inputs and very large code block lengths, Turbo Codes can achieve close-to-capacity BERs. However, this good performance has to be paid for with high computational complexity: Turbo Codes are decoded iteratively by constituent decoders, which exchange their beliefs about the transmitted information bits in the form of Log Likelihood Ratios (LLRs). Typically the LogMAP or the MaxLog algorithm (Robertson et al., 1995)

are used in the constituent decoders to generate these LLRs by performing forward and backward recursions on a trellis representation of the code. Traditionally this trellis represents the encoder G of the code. In this paper, a different decoding approach is considered, which is based on the so-called syndrome former H^T of the underlying code. While this decoder is equivalent to the conventional decoder in terms of trellis complexity and decoding performance, it can be modified to achieve an adaptive decoding complexity based on the so-called Block Syndrome Decoding (BSD) approach. The BSD concept has been described earlier in the context of Viterbi decoding (Geldmacher et al., 2009) and Turbo equalization (Geldmacher et al., 2010; Hueske et al., 2010). Applying it to Turbo decoding is, however, not straightforward, as will be shown in this paper, but requires the extension of the Turbo decoder by means of a precorrection of the constituent decoders' inputs.

In the following, Sect. 2 summarizes the BSD concept. The syndrome based, adaptive complexity Turbo decoder is explained in Sect. 3, and numerical results for typical LTE scenarios are shown in Sect. 4. Conclusions are drawn in Sect. 5.

2 Block syndrome decoding

In syndrome decoding, instead of the trellis of the encoder, the trellis of the syndrome former H^T is used, where each path represents an admissible error sequence ε. Given the hard decision r of a received distorted sequence, an admissible error sequence ε is defined as any sequence that results in a valid code sequence when applied to r. This trellis can be constructed based on the syndrome former H^T of the code and the syndrome sequence b = rH^T of r (Forney Jr., 1970; Schalkwijk and Vinck, 1976). Consequently, in this case a MAP decoder estimates the LLRs of the error in code or information bits, and this approach is referred to as syndrome

Published by Copernicus Publications on behalf of the URSI Landesausschuss in der Bundesrepublik Deutschland e.V.


decoding. The resulting absolute LLRs are identical for both approaches, and therefore the choice of the trellis has no influence on the decoding performance. Also, the trellis complexities of syndrome former and encoder are known to be identical in their minimal realizations (Forney Jr., 1970).

However, there is an interesting property of syndrome based decoding that can be exploited for reducing the decoding complexity. As the syndrome based decoder outputs LLRs of the error bits and not of the code bits, the probability of states and transitions depends on the number of errors in the decoder's input sequence r: Fewer errors relate to a higher probability of the all-zero state in the trellis. This also means that if a subblock in the decoder's input sequence is error-free, then the zero state is the most likely state in this subblock, and consequently an estimation of error-free blocks can be interpreted as a state estimation. Decoding complexity can then be reduced by identifying error-free subblocks before the decoding starts, and switching off the decoder for these subblocks. Only the remaining erroneous subblocks are actually processed by the decoder.

In the context of syndrome decoding, a very simple yet efficient criterion for estimating error-free subblocks is the syndrome b of the decoder's input sequence. Because of the orthogonality of the syndrome former H^T to the code, the syndrome only depends on the channel error ε_c,

b = rH^T = (v + ε_c)H^T = uGH^T + ε_c H^T = ε_c H^T,   (1)

so that subsequences of a certain number of consecutive zeros in b indicate error-free subblocks in r with a certain probability. In (1), v = uG denotes the code sequence corresponding to the information sequence u.

In summary, the basic idea of BSD consists of two steps:

1. Preprocessing: Compute the syndrome b of the decoder's input sequence and identify subblocks of a predefined minimum number of consecutive zeros. These subblocks are considered to be error-free.

2. Decoding: Apply the syndrome based decoder only to the remaining erroneous subblocks.

The required minimum number of consecutive zeros is a design parameter, which is called ℓ_min in the following. It influences the complexity reduction and the loss of decoding performance: A smaller ℓ_min will result in the detection of more error-free subblocks, i.e. more reduction of decoding complexity, but also has a higher chance of identifying a subblock as error-free which actually contains errors, i.e. a higher degradation of decoding performance.

It is easy to see that the BSD concept results in a decoding complexity that adapts to the number of errors contained in the decoder's input sequence. In the worst case, if the input sequence contains many errors, no error-free subblocks are identified, and the decoder has to process the whole sequence. In this case the effort is equal to that of a conventional decoder plus the preprocessing overhead. However, if the decoder's input contains only few errors, more error-free subblocks can be identified and the decoding effort is reduced.

Considering a Turbo decoder, it is expected that the number of remaining errors in the a priori estimate of the systematic part decreases during the Turbo iterations. Additionally, if an estimate of the parity part is employed to correct errors in the parity part, then the overall number of errors in the constituent decoder's input decreases with ongoing iterations and increasing SNR. Thus, employing the BSD concept yields a complexity reduction with increasing SNR and ongoing iterations. The necessity of correcting the constituent decoder's input sequence requires a modified transition metric, which accounts for the precorrection. Therefore the syndrome decoding approach for Turbo Codes described in this work differs from other syndrome based decoding algorithms (Schalkwijk and Vinck, 1976; Yamada et al., 1983; Tajima et al., 2002; Minowa, 2003; Geldmacher et al., 2010).
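The preprocessing step can be made concrete with a short sketch. For a rate-1/2 convolutional code with generator polynomials g1 and g2, one valid syndrome former computes b = r^(1) ⊛ g2 ⊕ r^(2) ⊛ g1 (mod 2), which satisfies the orthogonality used in (1). The generators, sequences and helper names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def conv_mod2(a, g):
    """Modulo-2 convolution of a bit sequence with polynomial taps g."""
    return np.convolve(a, g) % 2

def zero_run_mask(b, l_min):
    """BSD preprocessing: mark positions inside a run of >= l_min syndrome zeros."""
    mask = np.zeros(len(b), dtype=bool)
    start = None
    for t, bit in enumerate(list(b) + [1]):   # sentinel 1 closes a trailing run
        if bit == 0 and start is None:
            start = t
        elif bit != 0 and start is not None:
            if t - start >= l_min:
                mask[start:t] = True
            start = None
    return mask

# Rate-1/2 convolutional code with illustrative generators g1 = 1+D+D^2, g2 = 1+D^2.
g1, g2 = [1, 1, 1], [1, 0, 1]
u = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0])   # zero-terminated info bits
v1, v2 = conv_mod2(u, g1), conv_mod2(u, g2)          # the two code bit streams

# Error-free reception: b = v1*g2 + v2*g1 = u*g1*g2 + u*g2*g1 = 0 (mod 2).
b_clean = (conv_mod2(v1, g2) + conv_mod2(v2, g1)) % 2

# A single channel error on stream 1 makes b = eps_c * g2: non-zero only
# around the error position, with zero runs everywhere else.
e1 = np.zeros_like(v1); e1[5] = 1
b = (conv_mod2((v1 + e1) % 2, g2) + conv_mod2(v2, g1)) % 2

print(np.nonzero(b)[0])            # syndrome ones appear only near t = 5
print(zero_run_mask(b, l_min=4))   # long zero runs flag error-free subblocks
```

Shrinking l_min in `zero_run_mask` marks more positions as error-free, mirroring the complexity/performance trade-off described above.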

3 Adaptive complexity Turbo decoder

First the operation of the syndrome based constituent decoder is described (Sect. 3.1), then the Turbo decoding framework with precorrection and the BSD extension are explained (Sect. 3.2 and Sect. 3.3).

3.1 Syndrome based Turbo decoder

It is assumed that a half-rate systematic terminated encoder is employed, and that on the receiver side depuncturing is realized by replacing punctured positions with zero-reliability bits. One constituent decoder takes the received soft decision sequence r̃, a precorrection sequence x and a priori LLRs L_A(ε) of the errors in the systematic part as input. For the systematic part, it generates an a posteriori hard decision estimate ε̂ of the error in the hard decision r of r̃ and extrinsic LLRs L_E(ε). The decoder performs the following steps:

1. Precorrection and syndrome computation: The precorrection sequence x is applied to r, and the syndrome b is computed as

b = (r ⊕ x)H^T.   (2)

2. Trellis operation: Given the trellis of H^T subject to b, the MaxLog algorithm is applied to generate an a posteriori estimate L(e). The initial state for the forward recursion is set to the zero state, while the initial state for the backward recursion is selected according to the final state of the syndrome former from (2). The transition metric at time instant t is given as

x̃_t^s ẽ_(p,q)^s (|r̃_t^s| − L_A(ε_t^s)/2) + x̃_t^p ẽ_(p,q)^p |r̃_t^p|,   (3)

where x̃_t^s, x̃_t^p and ẽ_(p,q)^s, ẽ_(p,q)^p represent the BPSK modulated systematic and parity bits of the precorrection sequence and of the error corresponding to the trellis transition from state p to q. This yields the a posteriori LLR L(e) of the incremental error e w.r.t. (r ⊕ x). Note that for LogMAP decoding a proper normalization using the channel reliability would be required.

In order to generate the LLR L(ε) of the absolute error in r, the sign of L(e), which is an estimate of the error in r ⊕ x, has to be flipped according to x,

L(ε_t^s) = −x̃_t^s L(e_t^s).   (4)

The hard decision ε̂ of L(ε) delivers the a posteriori estimate of the channel error ε_c. Applying ε̂ to the systematic part r^s of r results in the estimate û of the original information bits u.

3. Generation of extrinsic LLRs: The extrinsic LLRs L_E(ε), which are passed to the other constituent decoder, are generated by removing the a priori information L_A(ε) and the received systematic soft bits from L(ε_t^s),

L_E(ε_t^s) = L(ε_t^s) + |r̃_t^s| − L_A(ε_t^s).   (5)

3.2 Turbo decoder with precorrection

An overview of the syndrome based Turbo decoder with two constituent decoders is shown in Fig. 1. Like in the conventional Turbo decoder, both constituent decoders use the systematic part and their respective parity part of the received soft bits, and, as a priori values, the interleaved extrinsic informations L_E(ε) generated by the other decoder. Additionally, the precorrection sequence x is used.

Fig. 1. Syndrome based Turbo decoder with precorrection.

Generally, an arbitrary sequence can be selected for x without changing the absolute values of the generated LLRs. However, for the BSD concept it is crucial that the input (r ⊕ x) of the syndrome former contains as few errors as possible, such that the Hamming weight of b becomes small. This can be achieved by selecting x as an estimate of the channel error ε_c, which in the context of the Turbo decoding framework can be done as follows:

– The systematic part x^s is set according to the hard decision of the a priori values L_A(ε).

– For the parity part x^p, the output û from the previous iteration is reencoded, which yields an estimated code sequence v̂. The comparison of the parity part v̂^p of v̂ with r^p yields x^p = r^p ⊕ v̂^p.

Thus the sequence x depends on both constituent decoders, such that it can be used as a measure for the decoding progress. In case of convergence of the decoder, it leads to a decreasing Hamming weight of the syndrome sequence b.

The described syndrome based Turbo decoder is identical to the conventional decoder in terms of decoding performance and trellis complexity. Additional effort is required to compute the sequence x and the syndrome b. However, both are binary operations and can be considered of negligible complexity.

3.3 Block syndrome decoding of Turbo codes

Based on the described syndrome based Turbo decoder, the BSD concept (Geldmacher et al., 2009, 2010) as described in Sect. 2 can be applied to achieve a reduction of the decoding effort. Because of the precorrection, the syndrome sequence b of a constituent decoder shows subsequences of consecutive zeros that increase with ongoing iterations in case of convergence. This can be exploited to separate the input sequence into subblocks that are considered to be erroneous and subblocks that are considered to be error-free. Consequently, a reduction of the decoding effort can be achieved by only processing the erroneous subblocks and neglecting the supposedly error-free subblocks.

More precisely, the syndrome based Turbo decoding algorithm from Sect. 3.1 is extended as follows:

1. Preprocessing of syndrome: Identify subsequences of at least ℓ_min zeros in b. Consider the corresponding subblocks in r̃, except for a padding of ⌊ℓ_min/2⌋ positions at the beginning and end of each subblock, as error-free, and the remaining subblocks as erroneous.


2. Processing of subblocks:

(a) Erroneous blocks: The erroneous subblocks are processed by the syndrome based MAP decoder, which generates the extrinsic values L_E(ε_t^s) and the estimated errors ε_t^s for all t in these subblocks. Note that these blocks can be considered to be terminated in the zero state, because the zero state is the most likely state in the preceding and succeeding error-free blocks.

(b) Error-free blocks: No processing is required for the supposedly error-free blocks. Instead, the extrinsic LLR is set to a sufficiently large value, L_E(ε_t^s) = x̃_t^s c, with c > 0, and the estimated error is set to the systematic part of the precorrection sequence, ε_t^s = x_t^s. A reasonable choice for c is the largest value in the quantization range of the LLR values.

The choice of the design parameter ℓ_min affects the achievable reduction of decoding effort and the possible loss in decoding performance. Given an acceptable loss in block error rate (BLER), it may be selected heuristically.
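The extended algorithm above can be condensed into a hedged sketch of one constituent decoding pass. Here `map_decode_segment` stands in for the syndrome based MaxLog MAP decoder of Sect. 3.1, and `C_SAT` for the assumed top of the LLR quantization range; neither name is from the paper:

```python
import numpy as np

C_SAT = 127.0   # assumed saturation value c: top of the LLR quantization range

def bsd_pass(x_sys_bpsk, error_free_mask, map_decode_segment):
    """Sketch of one BSD constituent decoding pass (step 2 above).

    x_sys_bpsk:         BPSK (+1/-1) systematic precorrection bits
    error_free_mask:    boolean mask from the zero-run preprocessing (step 1)
    map_decode_segment: placeholder for the syndrome based MAP decoder,
                        called only on erroneous segments [lo, hi), which
                        may be treated as terminated in the zero state.
    """
    n = len(error_free_mask)
    L_E = np.empty(n)
    # (b) Error-free blocks: no decoding, emit saturated extrinsic LLRs.
    L_E[error_free_mask] = x_sys_bpsk[error_free_mask] * C_SAT
    # (a) Erroneous blocks: run the MAP decoder segment by segment.
    t = 0
    while t < n:
        if error_free_mask[t]:
            t += 1
            continue
        lo = t
        while t < n and not error_free_mask[t]:
            t += 1
        L_E[lo:t] = map_decode_segment(lo, t)
    return L_E

# Toy run: only positions 3-4 are flagged erroneous; the rest is skipped.
mask = np.array([True, True, True, False, False, True, True, True])
L_E = bsd_pass(np.ones(8), mask, lambda lo, hi: np.zeros(hi - lo))
print(L_E)   # saturated values outside the erroneous segment, decoded inside
```

The decoding effort of such a pass scales with the number of erroneous positions, which is the adaptivity exploited in the following section.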

4 Simulation results

To show the efficiency of the proposed BSD Turbo decoder, it is analyzed in the context of an LTE downlink scenario (Mehlführer et al., 2009). For the simulations presented in this section, the modulation and coding scheme (MCS) is set according to the Channel Quality Indicator (CQI). A single-user, single-antenna scenario is considered, where the carrier frequency is set to f_C = 2.1 GHz and the Doppler frequency is f_D,max = 0 Hz. Independent channel realizations are used for all simulations. The bandwidth is set to 5 MHz, and no control channel information is transmitted by the eNodeB, meaning all resources are used for payload data only. The decoder employs the MaxLog MAP algorithm and is set to a maximum iteration number of i_max = 8. CRC24 is employed for high SNR early termination (ET).

The computational effort will be measured as equivalent iterations of the conventional decoder until termination. Thus, for the conventional decoder this measure equals the average number of full iterations until ET. For the BSD approach, it corresponds to the averaged number of iterations, where each iteration is weighted with the percentage of erroneous blocks. For example, if the BSD decoder terminated after 3 full iterations, but each constituent decoder only processed 50% of the code block during each full iteration, then this would correspond to 1.5 equivalent iterations.

There are two factors which have a major influence on the performance of the BSD approach:

– Code rate: The code rate determines how many positions are punctured in the transmitted block. The higher the code rate, the more punctured positions and consequently the more depunctured, zero-reliability bits are input to the decoder. However, the reliability of the syndrome based criterion for the identification of error-free blocks depends on a good estimate of the parity sequence, because of the involved precorrection. Thus, if there are more zero-reliability parity positions, this criterion is less reliable and a larger value has to be selected for ℓ_min to avoid a loss of BLER.

– Channel type: The type of the channel influences the convergence behavior of the Turbo decoder. Naturally, the BSD approach is more effective if the decoder executes more iterations before termination, because in this case the BSD has more chances to identify error-free parts.

The following Sect. 4.1 compares BLER and equivalent iterations for different CQI values, i.e. different code rates. The influence of the transmission channel is analyzed in Sect. 4.2. A conventional Turbo decoder is used to provide reference results.

Fig. 2. Comparison of BLER for different CQI values (AWGN).

4.1 Code rates

To illustrate the performance of the BSD approach for different code rates, the CQI values 3, 9 and 13 are used under the assumption of AWGN transmission. The parameter ℓ_min is adjusted such that the BLER degradation is smaller than 0.1 dB compared to the reference; it is set to ℓ_min = 21 for CQI3, ℓ_min = 31 for CQI9 and ℓ_min = 91 for CQI13. As already mentioned, a larger value is required for a higher CQI (higher code rate). The resulting BLER for the three cases is shown in Fig. 2. It can be noted that there is a slight performance decrease, which however is still smaller than the required 0.1 dB at a working point of 10% BLER.

The equivalent numbers of iterations that are executed by the decoder are shown in Fig. 3.
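The equivalent-iterations measure defined in this section reduces to a sum of per-iteration processed fractions; a minimal sketch of that bookkeeping (the fractions below are illustrative):

```python
def equivalent_iterations(processed_fractions):
    """Each executed full iteration contributes the fraction of the code
    block actually processed by the constituent decoders in it; for the
    conventional decoder every fraction is 1.0."""
    return sum(processed_fractions)

# Worked example from the text: termination after 3 full iterations,
# with only 50% of the code block processed in each one.
print(equivalent_iterations([0.5, 0.5, 0.5]))   # 1.5
```

With all fractions equal to 1.0 the measure falls back to the plain iteration count of the conventional decoder.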


Fig. 3. Equivalent iterations for different CQI values (AWGN).

Fig. 4. Comparison of BLER for different channels (CQI9).

Again, the BSD approach is compared to the conventional Turbo decoder for the considered SNR range. For the conventional decoder the equivalent number of iterations only depends on the CRC24 based ET. For the BSD, it is additionally influenced by the amount of error-free subblocks that are identified during the iterations. Naturally, the number of iterations that are required to decode a block decreases with increasing SNR for both decoders, because the quality of the received values improves. For the BSD decoder the number of equivalent iterations is always smaller than that of the conventional decoder. For example, considering a working point of 10% BLER, the computational effort is reduced by about 0.7, 1.1 and 1 equivalent iterations at CQI3, CQI9 and CQI13. This corresponds to a reduction of decoding operations by about 20%.

4.2 Channels

In order to analyze the impact of different channel conditions, Figs. 4 and 5 compare the performance of BSD over the AWGN channel to the performance over the fading channels Typical Urban (TU) and PedestrianA (PedA). The BSD is shown along with the conventional decoder for a fixed CQI of 9. The BLER is shown in Fig. 4 for the three cases. As in the previous simulation, the parameter ℓ_min has been selected such that the impact on BLER performance is <