IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 51, NO. 10, OCTOBER 2003

Transactions Papers

Two Decoding Algorithms for Tailbiting Codes

Rose Y. Shao, Member, IEEE, Shu Lin, Life Fellow, IEEE, and Marc P. C. Fossorier, Senior Member, IEEE

Abstract—This paper presents two efficient Viterbi-decoding-based suboptimal algorithms for tailbiting codes. The first algorithm, called the wrap-around Viterbi algorithm (WAVA), falls into the circular decoding category. It processes the tailbiting trellis iteratively, explores the initial state of the transmitted sequence through continuous Viterbi decoding, and improves the decoding decision with iterations. A sufficient condition for the decision to be optimal is derived. For long tailbiting codes, the WAVA gives essentially optimal performance with about one Viterbi trial. For short- and medium-length tailbiting codes, simulations show that the WAVA achieves closer-to-optimum performance with fewer decoding stages compared with the other suboptimal circular decoding algorithms. The second algorithm, called the bidirectional Viterbi algorithm (BVA), employs two wrap-around Viterbi decoders to process the tailbiting trellis from both ends in opposite directions. The surviving paths from the two decoders are combined to form composite paths once the decoders meet in the middle of the trellis. The composite paths at each stage thereafter serve as candidates for the decision update. The bidirectional process improves the error performance and shortens the decoding latency of unidirectional decoding, with additional storage and computation requirements. Simulation results show that both proposed algorithms effectively achieve practically optimum performance for tailbiting codes of any length.

Index Terms—Bidirectional decoding, circular decoding, tailbiting codes, tailbiting trellises, Viterbi algorithm.

I. INTRODUCTION

A LINEAR block code can be represented by a trellis of finite length with multiple initial states and the same multiple final states [10]. Trellises of this type are called tailbiting trellises, and the codes so represented, tailbiting codes. For simplicity, trellises in the sequel are tailbiting trellises unless explicitly stated otherwise. The tailbiting technique was first introduced for terminating convolutional codes without code-rate loss [1], [2]. It defines a sequence of quasi-cyclic codes and, conversely, many quasi-cyclic codes can be viewed as convolutional tailbiting codes with small constraint lengths [1]. The importance of tailbiting codes further lies in the fact that short- to medium-length tailbiting codes achieve the best minimum distance of codes with given block lengths [9]. The recent work on tailbiting trellis representations of linear block codes with minimal state-space complexity [7] has made efficient decoding algorithms for tailbiting codes all the more desirable.

In the trellis of a tailbiting code, the paths with the same initial and final states are called tailbiting paths. There is a one-to-one correspondence between a codeword in the code and a tailbiting path in the trellis; therefore, "tailbiting paths" and "codewords" are interchangeable in this paper. Suppose the tailbiting trellis has a certain number of initial (or final) states; then it is composed of that many subtrellises, each having the same initial and final states. We call these subtrellises tailbiting subtrellises. Different from a decoder based on a conventional trellis (unique initial and final state), the decoder for a tailbiting trellis essentially needs to identify the initial state of the transmitted sequence. The maximum-likelihood decoder (MLD) of a tailbiting code finds the optimal codeword as follows: first, it applies the conventional Viterbi algorithm (VA) to each tailbiting subtrellis and finds the corresponding tailbiting candidate paths; then the best candidate is selected as the most likely codeword. We call one round of VA over the trellis one iteration or one Viterbi trial. Each iteration consists of one Viterbi update, or decoding stage, per trellis section, each corresponding to the computation of branch metrics and the application of the add–compare–select (ACS) procedure to that section.

Paper approved by V. K. Bhargava, the Editor for Coding and Communication Theory of the IEEE Communications Society. Manuscript received June 26, 2002; revised April 23, 2003. This work was supported in part by the National Science Foundation under Grant CCR-0096191 and Grant CCR-0117891, and in part by NASA under Grant NAG 5-10480 and Grant 5-12789. This paper was presented in part at the IEEE Information Theory Workshop, Kruger National Park, South Africa, June 1999, and in part at the IEEE International Symposium on Information Theory, Sorrento, Italy, June 2000. R. Y. Shao is with the Maxtor Corporation, Shrewsbury, MA 01545 USA (e-mail: [email protected]). S. Lin is with the Department of Electrical and Computer Engineering, University of California, Davis, CA 95616 USA. M. P. C. Fossorier is with the Department of Electrical Engineering, University of Hawaii at Manoa, Honolulu, HI 96822 USA. Digital Object Identifier 10.1109/TCOMM.2003.818084
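As a concrete illustration of this per-subtrellis MLD, the following Python sketch (ours, not the authors' implementation) decodes the rate-1/2 (7,5) tailbiting code used later in Fig. 1, with a hard-decision Hamming metric; the state convention (the two most recent input bits) and all helper names are our own assumptions.

```python
# Rate-1/2 feedforward convolutional code, generators (7, 5) octal:
# g1 = 1 + D + D^2, g2 = 1 + D^2 (memory 2, four states).
# A state is the 2-bit register (u[t-1], u[t-2]).
def branch(state, u):
    """Return (next_state, output pair) for input bit u from a 2-bit state."""
    reg = (u << 2) | state                    # bits: u, u[t-1], u[t-2]
    c1 = bin(reg & 0b111).count("1") & 1      # taps of g1 = 7 (octal)
    c2 = bin(reg & 0b101).count("1") & 1      # taps of g2 = 5 (octal)
    return reg >> 1, (c1, c2)

def va_subtrellis(r, s0):
    """One Viterbi trial over the tailbiting subtrellis of initial state s0:
    only paths starting AND ending in s0 are candidates.
    Metric: Hamming distance to the received pairs r (smaller is better)."""
    INF = float("inf")
    metric = {s: (0 if s == s0 else INF) for s in range(4)}
    path = {s: [] for s in range(4)}
    for pair in r:
        new_m = {s: INF for s in range(4)}
        new_p = {s: None for s in range(4)}
        for s in range(4):
            if metric[s] == INF:
                continue
            for u in (0, 1):
                ns, out = branch(s, u)
                d = metric[s] + (out[0] != pair[0]) + (out[1] != pair[1])
                if d < new_m[ns]:
                    new_m[ns], new_p[ns] = d, path[s] + [out]
        metric, path = new_m, new_p
    return metric[s0], path[s0]               # tailbiting candidate of this subtrellis

def mld_tailbiting(r):
    """MLD: one Viterbi trial per tailbiting subtrellis, keep the best candidate."""
    return min(va_subtrellis(r, s0) for s0 in range(4))
```

For the received sequence of Fig. 1, (00 10 10 00 00), this returns a tailbiting path at Hamming distance two, matching the ML distance quoted in Example 1 below.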
The complexity of the MLD is thus one Viterbi trial per tailbiting subtrellis, and the number of subtrellises grows exponentially with the memory order of a convolutional tailbiting code. A number of suboptimal circular Viterbi algorithms (CVAs) have been proposed to achieve near-optimal performance with a significant reduction in the number of Viterbi trials [2]–[5]. In these circular schemes, the decoder traverses the trellis more than once and terminates the decoding either when the designed stopping rule is satisfied, i.e., the decoding has converged, or when the preset maximum number of decoding stages is reached, even if no convergence is detected. Under the same circular Viterbi decoding principle, various CVAs handle the circular process differently and may employ different stopping rules. These differences result in various convergence speeds and performances (see Section IV). The most recently proposed CVA, by Anderson et al. [6], is devised with the circular strategy but a fixed number of decoding

0090-6778/03$17.00 © 2003 IEEE

SHAO et al.: TWO DECODING ALGORITHMS FOR TAILBITING CODES

TABLE I COMPARISON OF THE WAVA AND OTHER CIRCULAR ALGORITHMS BASED ON THE FEATURES DESCRIBED IN SECTION I

stages. The number of decoding stages is shown to be minimal under the bounded-distance criterion.

In this paper, we present two suboptimal CVAs for tailbiting codes: one is called the wrap-around Viterbi algorithm (WAVA), the other the bidirectional Viterbi algorithm (BVA). The WAVA has the following features: 1) the decoding starts with the assumption that all the initial states are equally probable to be the true initial state; 2) the same set of initial states is considered at all iterations; 3) it processes the tailbiting circle continuously, i.e., it accumulates path metrics throughout the circular decoding; 4) it records survivors one trellis circle long and checks the tailbiting condition at the trellis boundaries, and the decision sequences are output in the original order; 5) it updates the best tailbiting path at each iteration and outputs the best one at the end; and 6) it adaptively terminates the decoding within a preset maximum number of decoding stages. A comparison of the WAVA with other existing CVAs based on these features is given in Table I.

The BVA is devised based on the WAVA. It consists of two WAVA decoders that process the tailbiting trellis from both ends in opposite directions. The two decoders collaborate in searching for the best tailbiting path. The bidirectional process reduces the decoding delay and improves the decoding performance by revealing more tailbiting paths that may be discarded in unidirectional decoding.

II. WRAP-AROUND VITERBI ALGORITHM (WAVA)

Let C be a tailbiting code with an L-section trellis T. For each location j, 0 <= j <= L, let S_j denote the state space at location j; clearly, S_0 = S_L. The trellis T can be viewed as the union of the tailbiting subtrellises, one for each initial state, where a tailbiting subtrellis consists of all the tailbiting paths connecting an initial state to the same final state. Fig. 1 shows a five-section tailbiting trellis with four initial and final states.

Since only the tailbiting paths of T correspond to codewords in C, all the paths connecting any initial and final states form a super code of C. The most likely (ML) sequence in the super code with respect to a given received sequence may not be tailbiting; hence, it may not be the ML codeword of C. We define the most likely tailbiting path (MLTBP) as the path in T corresponding to the ML codeword in C.

In the WAVA, the state metric of a state s at iteration i, denoted M_i(s), accumulates branch metrics continuously, starting from the very first iteration. Equivalently, the surviving path entering state s at iteration i is said to have accumulative path metric M_i(s). At the end of iteration i, survivors of length L can be either directly output from path memories or obtained by tracing back, depending upon the VA implementation. Generally speaking, the state at a given location on a survivor is uniquely defined. To align our complexity analysis with the previous CVA papers [2]–[5], we assume that locating a state on a survivor is costless. The path metric of a survivor segment between two locations is the sum of the branch metrics along the segment. We only consider survivors of length up to L sections in this paper. For consistency, we say that a path p1 is better than a path p2 if p1 has a larger path metric than p2. The best path and the best tailbiting path up to iteration i, together with their metrics, are recorded by the decoder. Let I_max denote the preset maximum number of iterations.

WAVA
1) Start from all the states in S_0 with state metrics set to zero.
2) Process the trellis from location 0 to location L; record the best path, and the best tailbiting path if one exists.
3) At the end of the first iteration, if the best path is tailbiting, stop decoding and output it as the decoded sequence. Otherwise, go to step 4.
4) At iteration i, 2 <= i <= I_max, the state metrics of the states in S_0 are initialized by the state metrics of S_L at iteration i - 1. This is called wrap around. At the end of the iteration, if the best tailbiting path is recognized as the MLTBP, stop decoding and output it as the decoded sequence; otherwise, go to step 5.
5) Repeat step 4 until either the decoding process is terminated or iteration I_max is reached.
6) Output the best tailbiting path as the decoded codeword if it exists; otherwise, output the best path as the decoded sequence.

In the following, we derive a sufficient condition for identifying the MLTBP, which can be used to terminate the decoding process before reaching iteration I_max at step 3 or 4 of the WAVA.
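To make the wrap-around steps concrete, here is a minimal Python sketch of the WAVA with a simple termination rule (stop once the best survivor is tailbiting). The (7,5) trellis helper, the hard-decision Hamming metric (smaller accumulated metric = better path), and the tie-breaking are our own assumptions, not the authors' implementation.

```python
def branch(state, u):
    """(7,5) trellis section; a state is the 2-bit register (u[t-1], u[t-2])."""
    reg = (u << 2) | state
    c1 = bin(reg & 0b111).count("1") & 1
    c2 = bin(reg & 0b101).count("1") & 1
    return reg >> 1, (c1, c2)

def wava(r, i_max=4):
    """Wrap-around Viterbi algorithm, hard-decision Hamming metric.
    Accumulated metrics carry over between iterations (wrap around);
    survivors are one trellis circle long, and the tailbiting condition
    is checked at the circle boundary."""
    INF = float("inf")
    acc = {s: 0 for s in range(4)}                # step 1: all initial states, metric 0
    best_tb = (INF, None)                         # best tailbiting survivor so far
    for _ in range(i_max):
        start = {s: s for s in range(4)}          # state at the current circle start
        circ = {s: ([], 0) for s in range(4)}     # (circle path, circle distance)
        for pair in r:                            # one Viterbi update per section
            new = {}
            for s in range(4):
                for u in (0, 1):
                    ns, out = branch(s, u)
                    d = (out[0] != pair[0]) + (out[1] != pair[1])
                    m = acc[s] + d
                    if ns not in new or m < new[ns][0]:
                        p, cd = circ[s]
                        new[ns] = (m, start[s], p + [out], cd + d)
            acc = {s: new[s][0] for s in new}
            start = {s: new[s][1] for s in new}
            circ = {s: (new[s][2], new[s][3]) for s in new}
        for s in range(4):                        # record tailbiting survivors
            if start[s] == s and circ[s][1] < best_tb[0]:
                best_tb = (circ[s][1], circ[s][0])
        s_best = min(range(4), key=lambda s: acc[s])
        if start[s_best] == s_best:               # simple rule: best path is tailbiting
            return circ[s_best][1], circ[s_best][0]
        # otherwise wrap around: acc carries over as the next circle's initial metrics
    return best_tb                                # step 6 (best-path fallback omitted)
```

On the received sequence of Fig. 1, this sketch happens to terminate at the second iteration with a tailbiting path at distance two; as Example 1 below notes, other tie-break choices can change which survivors appear and when the decoding stops.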
Step 1 of the WAVA implies that the tailbiting code C is decoded by processing its super code. At the end of the first iteration, the best path is the ML sequence in the super code. If it is also tailbiting, it must be the MLTBP of the trellis, or the ML codeword in C. Although the metric of a specific path in T is always the same for a given received sequence, the WAVA decoder can come up with different survivors to a state at different iterations due to the wrap-around process, which assigns different initial state metrics.

Fig. 1. Five-section tailbiting trellis for the rate-1/2 convolutional code with (g1, g2) = (7, 5). The highlighted paths are the ML paths at distance two, given the received sequence (00 10 10 00 00).

Generalizing the VA properties to the circular decoding scenario, we have the following inequality for iteration i:

M_i(s) >= M(p), for any path p entering state s at iteration i.   (1)

In words, inequality (1) says that the surviving path at state s has the largest accumulative path metric among all paths entering s. Assume that the decoder fails to produce the MLTBP at the first iteration. Let B_i be the set of the final states of all the tailbiting survivors that the decoder has encountered up to iteration i, and let S \ B_i denote the boundary state space S = S_0 = S_L excluding B_i.

Lemma 1: The best tailbiting path up to iteration two, with metric M_tb, is the MLTBP if, for all the states s in S \ B_2,

M_tb >= M_2(s).   (2)

Proof: We need to prove that the best tailbiting path is the best among all the tailbiting paths in the trellis in terms of path metric. The set of tailbiting paths in T is the union of the tailbiting paths ending in B_2 and those ending in S \ B_2; we consider the former first. Since all the paths ending at a given state have the same initial state metric at a certain iteration, the tailbiting survivor at that state has the largest path metric among them, based on inequality (1). Since the best tailbiting path is the best among all the tailbiting survivors in the first two iterations, it is the best among all the tailbiting paths ending in B_2. Secondly, for a state s in S \ B_2, it follows from (1) that the tailbiting paths ending at s have path metrics not exceeding M_1(s) in the first iteration and M_2(s) in the second iteration. If condition (2) holds for all s in S \ B_2, the best tailbiting path is better than all the tailbiting paths ending in S \ B_2. Following the definition of the MLTBP, we see that (2) gives a sufficient condition for the best tailbiting path to be the MLTBP.

A generalization of Lemma 1 is given by Theorem 1.

Theorem 1: At the first iteration, if the best path is tailbiting, it is the MLTBP. Otherwise, the best tailbiting survivor up to iteration i, with metric M_tb, is the MLTBP if

M_tb >= M_i(s)   (3)

for all s in S \ B_i.
Based on Theorem 1, if the WAVA decoder stops at step 3, it produces the MLTBP. Furthermore, if step 4 is designed based on the sufficient condition (3), ML decoding is also guaranteed. In this case, the suboptimality of the WAVA manifests itself solely at step 6, after I_max iterations. To implement the sufficient condition, one can maintain, for each state in S \ B_i, a bound that the metrics of the tailbiting paths ending at that state cannot exceed. At the first iteration, the bound is simply the state metric; at the second iteration, the bound is updated whenever the new value is smaller than the stored one, and so on. In general, implementing the sufficient condition requires extra memory to store the bounds and additional computation to update the bounds and check whether the condition is satisfied. Instead, if we terminate the decoding as soon as the best path is tailbiting at any iteration, little suboptimality is seen in the early terminated blocks (see Section IV). We call this condition the simple termination condition.

Suppose that the survivor at a state is not tailbiting at iteration i - 1, but is tailbiting at iteration i. Based on inequality (1) and the wrap-around process, we have Lemma 2, which states a necessary condition for a state to be explored as the initial state of the transmitted tailbiting sequence.

Lemma 2: If the survivor at a state is tailbiting at iteration i, but not at iteration i - 1, then inequality (4), relating the survivor metrics at iterations i - 1 and i, holds.

Example 1 (Pseudocodewords) [6], [14]: Fig. 1 shows a five-section tailbiting trellis of a rate-1/2 convolutional code with feed-forward generator polynomials (g1, g2) = (7, 5). With the binary symmetric channel (BSC) and the received sequence (00 10 10 00 00), two ML codewords are at distance two from the received sequence. They are the all-zero sequence and 01 10 10 01 00. A path of two trellis circles in length, 00 11 10 00 10 00 10 11 00 00, is at distance three from the twice-repeated received sequence.
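The distances quoted in Example 1 can be verified directly with a few lines of Python (hard-decision Hamming distance; the helper below is ours):

```python
def hamdist(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

r = "00 10 10 00 00".replace(" ", "")

# the two ML tailbiting codewords of Example 1, both at distance two
assert hamdist("0000000000", r) == 2          # all-zero sequence
assert hamdist("0110100100", r) == 2          # 01 10 10 01 00

# the two-circle path, at distance three from the repeated received sequence
two_circles = "00 11 10 00 10 00 10 11 00 00".replace(" ", "")
assert hamdist(two_circles, r + r) == 3
```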
A tailbiting sequence with a span of two or more trellis circles is called a pseudocodeword. If a circular decoder determines survivors based on accumulative path metrics, or checks tailbiting conditions over more than one trellis circle, it can be trapped by such a pseudocodeword, even with an infinite number of iterations. For the WAVA, however, since the tailbiting condition is enforced over one trellis circle, a pseudocodeword can never be a tailbiting candidate. The first Viterbi trial of the WAVA gives the sequence 00 10 11 00 00 as the best path, with metric one, and 01 10 10 01 00 as the best tailbiting path, with metric two, assuming favorable tie-break rules. At iteration two, the all-zero path is also discovered as a candidate tailbiting path. Due to a tie in the path metrics, either of the two tailbiting paths can be output at the end of two WAVA iterations. Note that if the sufficient condition is applied, the MLTBP can be recognized and the decoding can be stopped at the second iteration. However, with the simple termination condition, the pseudocodeword delays the termination of the decoding process until iteration I_max. This example suggests that the WAVA can be less susceptible to pseudocodewords than other circular decoders.

Example 2 (Suboptimality of the WAVA): We decode the same tailbiting code given in Example 1, but over the additive white Gaussian noise (AWGN) channel with soft-decision Viterbi decoding. For the received sequence considered, the ML sequence is 00 01 01 00 10. With the WAVA, the first iteration gives four survivors: 00 10 11 00 00; 00 00 11 01 01; 00 11 01 01 00; 00 01 01 00 01. None of the survivors is tailbiting, and the survivor ending at state 0 is the best path. The second round of decoding gives almost the same set of survivors, except that the last one is replaced by the path 00 11 10 00 01, which is not better than the recorded best path. The same sets of survivors appear alternately in the following iterations. As a result, the sequence 00 10 11 00 00 is the decision output at any iteration. The optimal sequence never survives in the WAVA, and the circular decoding is periodically trapped with the same sets of survivors.

As we see, the WAVA is inherently suboptimal due to the circular decoding feature. Increasing the number of iterations can improve its optimality only up to a certain level, and applying the sufficient condition only guarantees that the early terminated blocks are optimal.

Fig. 2. Left: a case where the right decoder discovers a survivor that does not survive in the left decoder. Right: a case where a composite path from the BVA is discarded in both the left decoder and the right decoder, each time losing to a locally better path.
However, in light of Example 2, if the decoding process can explore more candidate paths, one can improve the optimality effectively. This motivates the design of bidirectional decoding schemes.

III. BIDIRECTIONAL VITERBI ALGORITHM (BVA)

In the WAVA, the decoder processes a trellis from left to right; we call it a left decoder, which produces left survivors. A right decoder, which gives right survivors, decodes the trellis from right to left. We attach superscripts to distinguish the left and right decoding-related variables: the state metric of s at iteration i from the left (or right) decoder is denoted M_i^L(s) (or M_i^R(s)), and the left (or right) survivors and their path metrics are defined accordingly, with the path metric again being the sum of the branch metrics along the survivor. Let B_i^L (or B_i^R) denote the set of final states of the left (or right) tailbiting survivors up to iteration i.

The left and the right decoders are expected to produce similar performance when they work independently. However, the survivors obtained from the left and right decoders after one iteration are not necessarily the same since, even with a symmetric trellis, the received sequence is usually not symmetric. Moreover, the initial state metrics after the first iteration are different, due to the wrap-around process. As a result, the left and right decoders generally do not produce the same survivors, and thus do not give the same best path and best tailbiting path up to a certain iteration. The left graph in Fig. 2 abstracts a case where a path survives in the right decoder but not in the left one.

If two WAVA decoders are employed to process the trellis from both ends, and the survivors from both decoders at each iteration are used to update the best path and the best tailbiting path, one can improve the performance of the unidirectional WAVA with the same number of iterations. We call this decoding scheme the left-right WAVA. A sufficient condition for the best tailbiting path after i iterations to be the MLTBP is given in Corollary 1 for the joint decoding. The proof of Corollary 1 is similar to that of Lemma 1 and is omitted here.

Corollary 1: The best tailbiting path with metric M_tb after i left and i right iterations is the MLTBP if, for all the states s in S \ (B_i^L ∪ B_i^R),

M_tb >= M_i^L(s) and M_tb >= M_i^R(s).   (5)

Provided two decoders are available, collaboration between the decoders after they have jointly covered the trellis circle is possible and proves to be beneficial. Such a decoding scheme is called the BVA. In the BVA, a composite path at a state, for a given iteration and decoding stage, is a path obtained by concatenating the left and right survivors at that state from the two decoders. The path metric of a composite path is defined as the sum of the path metrics of the two constituent survivors. Correspondingly, we keep track of the best composite path and its metric up to a given decoding stage, and of the best tailbiting composite path and its path metric.
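The composite path metric just defined can be illustrated with a short Python sketch for the first iteration (again over the (7,5) trellis of Fig. 1, with a hard-decision Hamming metric; the helper names are our own): the composite metric at a meeting state is simply the sum of the left and right state metrics, and its minimum over the states at any location equals the metric of the ML path in the super code.

```python
def branch(state, u):
    """(7,5) trellis section; a state is the 2-bit register (u[t-1], u[t-2])."""
    reg = (u << 2) | state
    c1 = bin(reg & 0b111).count("1") & 1
    c2 = bin(reg & 0b101).count("1") & 1
    return reg >> 1, (c1, c2)

def left_metrics(r):
    """ml[j][s]: best distance over sections 0..j-1 of any path ending
    in state s at location j (all initial states start at metric 0)."""
    ml = [[0] * 4] + [[float("inf")] * 4 for _ in r]
    for j, pair in enumerate(r):
        for s in range(4):
            for u in (0, 1):
                ns, out = branch(s, u)
                d = ml[j][s] + (out[0] != pair[0]) + (out[1] != pair[1])
                ml[j + 1][ns] = min(ml[j + 1][ns], d)
    return ml

def right_metrics(r):
    """mr[j][s]: best distance over sections j..L-1 of any path leaving
    state s at location j (all final states allowed)."""
    mr = [[float("inf")] * 4 for _ in r] + [[0] * 4]
    for j in range(len(r) - 1, -1, -1):
        for s in range(4):
            for u in (0, 1):
                ns, out = branch(s, u)
                d = (out[0] != r[j][0]) + (out[1] != r[j][1]) + mr[j + 1][ns]
                mr[j][s] = min(mr[j][s], d)
    return mr

r = [(0, 0), (1, 0), (1, 0), (0, 0), (0, 0)]   # received sequence of Fig. 1
ml, mr = left_metrics(r), right_metrics(r)
# composite metric at any location j: M(s) = ml[j][s] + mr[j][s];
# its minimum is the super-code ML metric (distance one here, cf. Example 1)
for j in range(len(r) + 1):
    assert min(ml[j][s] + mr[j][s] for s in range(4)) == min(ml[-1])
```

Identifying the best composite tailbiting path additionally requires tracking each survivor's boundary state, as in the WAVA; that bookkeeping is omitted here.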

BVA
1) Set the iteration counter for the left decoder and that for the right decoder to one. Initialize the state metrics at both ends of the trellis to zero.
2) The left and right decoders perform wrap-around Viterbi decoding through the trellis in opposite directions until they meet, with each section processed by exactly one of the two decoders.
3) Compute the composite paths for all the states at the current meeting location, and update the best composite path and the best tailbiting composite path. If the best composite path is tailbiting, stop the decoding and output it as the decoded sequence. Otherwise, go to step 4.
4) Repeat step 3 for the remaining decoding stages of the iteration, with the two decoders continuing past each other.
5) Increase the iteration counters and repeat steps 2-4 until the preset maximum number of iterations is reached.
6) Output the best tailbiting composite path if it exists; otherwise, output the best composite path.

Each iteration of the BVA can be decomposed into two phases: phase one, when the two decoders process the trellis independently; and phase two, when the state metrics and the survivors from both decoders are joined together to update the best composite path and the best tailbiting composite path. At the first iteration, the composite path at a state is the best among all the paths passing through that state, so the best composite path at any decoding stage represents the ML path in the whole trellis. Due to the uniqueness of the ML path, when the best composite path at any location happens to be also tailbiting, it must be the MLTBP. For the other iterations, a sufficient condition for the MLTBP, as in Corollary 1, can be readily derived. However, checking the sufficient condition at each decoding stage becomes more expensive in terms of computation and storage. Applying the simple termination condition to the BVA, we terminate the BVA whenever the best composite path is tailbiting at any decoding stage. The suboptimality therefrom is negligible, based on simulations.

The BVA further improves the performance of the left-right WAVA. The right graph in Fig. 2 exemplifies a case where the BVA discovers a composite tailbiting path that is discarded by both the left and right decoders. In other words, the BVA explores more candidate tailbiting paths and chooses from a larger pool of candidates than the unidirectional WAVA and the left-right WAVA. As a result, the BVA expedites the convergence, as well as improving the error performance of the WAVA and the left-right WAVA.

Example 3: If the BVA is applied to Example 1, the decoders find both distance-two ML tailbiting paths for the given received sequence at the first iteration, with a total of six decoding stages, compared with two iterations, or ten decoding stages, if the WAVA is used.

Example 4: If we apply the BVA to Example 2, the optimal sequence 00 01 01 00 10 is explored at the second iteration, with 16 decoding stages overall. Note that the left-right WAVA does not succeed in obtaining the optimal sequence in two iterations, or 20 decoding stages.

The BVA achieves improved performance with shorter latency at the cost of additional storage and computation requirements. During phase one, the state metrics obtained at each decoding stage by both decoders need to be stored for computing the composite path metrics during phase two. In the left-right WAVA, the two decoders store the current state metrics as well as the initial state metrics of an iteration. In the BVA, the memory requirement peaks at the midpoint of the trellis, when metrics for all the states in the trellis need to be stored, and then decreases to what is needed in the left-right WAVA, since state metrics already used in the composite-path-metric computation can be discarded. Computation-wise, the two decoders perform conventional Viterbi updates in phase one. In phase two, three more additions per state are needed to compute composite path metrics. A complete BVA iteration consists of the same number of decoding stages for the two phases, but the decoders go through more independent decoding stages than collaborating ones on average, since they have to fully cover the first phase before possible termination in the second phase once a new iteration starts.

IV. SIMULATION RESULTS

The WAVA and the BVA have been applied to decode various short-to-long tailbiting codes. The WAVA generally achieves near-optimum performance within a small preset maximum number of iterations, while the BVA achieves closer-to-optimum performance with even fewer iterations. The data shown in the figures and tables are based on at least 100 block-error events per simulation point.

We first check the effect of the simple termination condition on the optimality of the WAVA. Table II compares the numbers of decoded and ML-decoded blocks at each iteration for the (24,12) Golay code with the 16-state tailbiting trellis [7] when the simple termination condition is applied. At iteration three, part of the blocks are decoded because the condition is satisfied; the rest are decoded because the maximum number of iterations is reached. The number of blocks decoded as nontailbiting paths (when no best tailbiting path exists in step 6 of the WAVA) is shown in brackets. It is clear that all the decoded sequences are ML codewords at the first iteration. At iterations two and three, the number of non-ML-decoded blocks due to early termination is negligible. We apply the simple termination condition in all the simulations.

Convolutional tailbiting codes can be categorized into long, medium, and short codes, usually depending upon the ratio of the length of the trellis circle to the memory order of the code


TABLE II EFFECT OF THE SIMPLE TERMINATION RULE ON THE SUBOPTIMALITY OF THE WAVA

TABLE III NUMBER OF VITERBI TRIALS REQUIRED BY VARIOUS CIRCULAR ALGORITHMS FOR THE (64,32) CONVOLUTIONAL TAILBITING CODE WITH m = 7 AND (g1, g2) = (712, 476) OVER THE BSC WITH HARD-DECISION DECODING

[9], [12]. With code rate 1/2, long tailbiting codes have a trellis length typically larger than four to five times the memory order, while short tailbiting codes have a length around one to two times the memory order. Most of the previous CVAs achieve near-optimal performance for long tailbiting codes [2]–[6]. Simulations show that for long tailbiting codes, the WAVA provides error performances within 0.1 dB of the MLD at practical signal-to-noise ratios (SNRs), and the average number of Viterbi trials approaches one as this ratio grows. The performance of the rate-1/2 (64,32) convolutional tailbiting code is shown in Fig. 4. The WAVA gives almost optimal performance for this code, and the average number of Viterbi trials remains small over the simulated SNR range.

For short-to-medium-length tailbiting codes, larger gaps between the performances of the ML decoding and the suboptimal circular schemes are observed. Fig. 3 shows the bit-error rate (BER) of various circular algorithms for the rate-1/2, m = 7 convolutional tailbiting code (64,32). The performances of the algorithms other than the WAVA are taken from the corresponding papers [2], [3], and [5]. Comparisons are based on the BER for the BSC with hard-decision decoding, due to the availability of the comparison data. It should be mentioned, however, that some of the differences depicted in Fig. 3 may be due to different encodings, i.e., different mappings between information bits and codewords. Table III compares the average numbers of Viterbi trials at different SNRs. It is clear that the WAVA achieves closer-to-optimal performance, and it requires fewer than 1.5 Viterbi trials on average at each SNR point.

Fig. 3. BER performances of the (64,32) convolutional tailbiting code with (g1, g2) = (712, 476), decoded by various circular algorithms over the BSC with hard-decision decoding.

TABLE IV NUMBER OF DECODING STAGES REQUIRED TO OBTAIN THE PERFORMANCES SHOWN IN FIGS. 4 AND 5 FOR THE (24,12) GOLAY CODE WITH THE 64-STATE TAILBITING TRELLIS AND (g1, g2) = (414, 730)

Fig. 4. FER performances of the (24,12) Golay code with the 64-state tailbiting trellis and (g1, g2) = (414, 730), as well as those of the (64,32) tailbiting code with (g1, g2) = (554, 744), decoded by the BDD-CVA [11], [13], the WAVA, the left-right WAVA, and the BVA under the AWGN channel.

One exceptional CVA is the so-called bounded distance decoder (BDD)-CVA [6]. It achieves near-optimum performance when the code is long enough, while up to 0.4-dB degradation is observed in simulations compared with the MLD for short-to-medium tailbiting codes [6], [11], [12]. Fig. 4 shows the performances of the WAVA and the BDD-CVA for the (24,12) tailbiting code obtained by truncating a rate-1/2 convolutional code with (g1, g2) = (414, 730). This is a time-invariant tailbiting representation of the extended Golay code [9]. We see that both the BDD-CVA and the WAVA with a small iteration limit are about 0.4 dB away from the ML decoding around the frame-error rates (FERs) of interest, but with a larger iteration limit, the WAVA performs almost optimally. On the complexity side, the number of Viterbi updates is fixed at 112 for each SNR point in the BDD-CVA [13], while that in the WAVA averages from 32 down to 16 for SNRs from 1.0 to 5.0 dB. The curves for the left-right WAVA and the BVA fall on top of the ML performance curve.

Fig. 5. Percentage of ML-decoded blocks for the (24,12) Golay code with the 64-state tailbiting trellis and (g1, g2) = (414, 730), using the WAVA, left-right WAVA, and BVA under the AWGN channel.

To illustrate the fine differences in performances, we compare the percentages of ML-decoded blocks in Fig. 5. The curves for the WAVA show that its optimality improves only to a limited extent with more iterations. Table IV shows the decoding complexities in terms of the number of decoding stages. It is seen that the left-right WAVA improves the performance of the WAVA, and the BVA performs better than the left-right WAVA with the same number of iterations. To provide a similar level of optimality, the left-right WAVA almost doubles the number of Viterbi updates required by the WAVA. The BVA, on the other hand, requires about the same average number of decoding stages, only about 15%-30% of which are for the phase-two decoding. With two decoders working together and achieving a higher degree of optimality, the BVA requires about half the decoding delay of the WAVA at the expense of more storage and overall computation.

V. CONCLUSION

Two VA-based schemes have been proposed for decoding tailbiting codes. The WAVA is based on continuous Viterbi decoding. An optimal termination condition has been derived, and the properties of the algorithm have been discussed. For easier implementation, a simple termination rule, which causes negligible suboptimality, has been proposed. It has been pointed out that the WAVA is inherently suboptimal, and that exploring more candidate paths is an effective way to improve the optimality. We have compared the WAVA with other CVAs in terms of error performance and decoding complexity. Even though many of the circular decoding schemes perform near-optimally for long tailbiting codes, simulations have shown that the WAVA converges faster and performs better, especially for short-to-medium tailbiting codes.

The collaborative decoding in the BVA helps to discover more candidate tailbiting paths; therefore, the BVA provides a higher degree of optimality than the WAVA. The cooperation reduces the decoding delay and speeds up the convergence effectively. Simulations show that the BVA essentially performs ML decoding with far fewer decoding stages.

REFERENCES
[1] G. Solomon and H. van Tilborg, "A connection between block and convolutional codes," SIAM J. Appl. Math., vol. 37, no. 2, pp. 358-369, Oct. 1979.
[2] H. H. Ma and J. K. Wolf, "On tailbiting convolutional codes," IEEE Trans. Commun., vol. COM-34, pp. 104-111, Feb. 1986.
[3] Q. Wang and V. K. Bhargava, "An efficient maximum-likelihood decoding algorithm for generalized tailbiting convolutional codes including quasi-cyclic codes," IEEE Trans. Commun., vol. 37, pp. 875-879, Aug. 1989.


[4] K. S. Zigangirov and V. V. Chebyshov, “Study of decoding tailbiting convolutional codes,” in Proc. 4th Joint Swedish-Soviet Int. Workshop Information Theory, Gotland, Sweden, Aug. 1989, pp. 52–55.
[5] R. V. Cox and C. E. Sundberg, “An efficient adaptive circular Viterbi algorithm for decoding generalized tailbiting convolutional codes,” IEEE Trans. Veh. Technol., vol. 43, pp. 57–68, Feb. 1994.
[6] J. B. Anderson and S. M. Hladik, “An optimal circular Viterbi decoder for the bounded distance criterion,” IEEE Trans. Commun., vol. 50, pp. 1736–1742, Nov. 2002.
[7] A. R. Calderbank, G. D. Forney, Jr., and A. Vardy, “Minimal tailbiting trellises: Golay code and more,” IEEE Trans. Inform. Theory, vol. 45, pp. 1435–1455, Jul. 1999.
[8] J. B. Anderson and S. M. Hladik, “Tailbiting MAP decoders,” IEEE J. Select. Areas Commun., vol. 16, pp. 297–302, Feb. 1998.
[9] P. Stahl, J. B. Anderson, and R. Johannesson, “Optimal and near-optimal encoders for short and moderate-length tailbiting trellises,” IEEE Trans. Inform. Theory, vol. 45, pp. 2562–2571, Nov. 1999.
[10] R. Y. Shao, “Decoding of linear block codes based on their tailbiting trellises and efficient stopping criteria for turbo decoding,” Ph.D. dissertation, Univ. of Hawaii at Manoa, Honolulu, HI, 1999.
[11] K. E. Tepe, “Topics in BCJR and Turbo Decoding,” Communication, Information and Voice Processing Report Series, ECSE Dept., Rensselaer Polytech. Inst., Troy, NY, Rep. TR 98-1, 1998.
[12] J. B. Anderson and K. E. Tepe, “Properties of the tailbiting BCJR decoding,” in Codes, Systems and Graphical Models. New York: Springer-Verlag, 2001.
[13] J. B. Anderson and K. E. Tepe, private communication, 1999.
[14] G. D. Forney, “On iterative decoding of tailbiting codes,” in Proc. 1998 Information Theory Workshop, San Diego, CA, Feb. 1998, pp. 11–12.
[15] R. Johannesson and K. S. Zigangirov, Introduction to Convolutional Coding. Piscataway, NJ: IEEE Press, 1999.

Rose Y. Shao (S’96–M’00) received the B.S. and M.S. degrees from Xiamen University and Shanghai Academy of Space Technology (SAST), China, both in electrical engineering, in 1989 and 1992, respectively. She received the Ph.D. degree in electrical engineering in 1999 from the University of Hawaii at Manoa, Honolulu, HI. She was a Researcher with Shanghai Precision Instrument Institute, SAST, China, from 1992 to 1994. In 2000, she joined the Technology and Engineering Department, Quantum Corporation, Shrewsbury, MA, whose hard disk drive business was merged with Maxtor Corporation in 2001. She is currently with the Architecture Group of the Advanced Technology Department, Maxtor Corporation, Shrewsbury, MA. Her research interests include coding theory, coding techniques, detection and filtering, and their application to communication systems.


Shu Lin (S’62–M’65–SM’78–F’80–LF’00) received the B.S.E.E. degree from the National Taiwan University, Taipei, Taiwan, in 1959, and the M.S. and Ph.D. degrees in electrical engineering from Rice University, Houston, TX, in 1964 and 1965, respectively. In 1965, he joined the Faculty of the University of Hawaii, Honolulu, as an Assistant Professor of Electrical Engineering. He became an Associate Professor in 1969 and a Professor in 1973. In 1986, he joined Texas A&M University, College Station, as the Irma Runyon Chair Professor of Electrical Engineering. In 1987, he returned to the University of Hawaii. From 1978 to 1979, he was a Visiting Scientist at the IBM Thomas J. Watson Research Center, Yorktown Heights, NY, where he worked on error control protocols for data communication systems. He spent the academic year of 1996–1997 as a Visiting Professor at the Technical University of Munich, Munich, Germany. He retired from the University of Hawaii in 1999 and is currently a Visiting Professor at the University of California, Davis. He has published numerous technical papers in refereed journals. He is the author of the book, An Introduction to Error-Correcting Codes (Englewood Cliffs, NJ: Prentice-Hall, 1970). He also co-authored (with D. J. Costello) the book, Error Control Coding: Fundamentals and Applications (Englewood Cliffs, NJ: Prentice-Hall, 1982), and (with T. Kasami, T. Fujiwara, and M. Fossorier) the book, Trellises and Trellis-Based Decoding Algorithms (Boston, MA: Kluwer Academic, 1998). His current research areas include algebraic coding theory, coded modulation, error control systems, and satellite communications. He has served as the Principal Investigator on 25 research grants. Dr. Lin is a Member of the IEEE Information Theory and Communications Societies.
He served as the Associate Editor for Algebraic Coding Theory for the IEEE TRANSACTIONS ON INFORMATION THEORY from 1976 to 1978, and as the Program Co-Chairman of the IEEE International Symposium on Information Theory held in Kobe, Japan, in June 1988. He was the President of the IEEE Information Theory Society in 1991. In 1996, he was a recipient of the Alexander von Humboldt Research Prize for U.S. Senior Scientists.

Marc P. C. Fossorier (S’90–M’95–SM’00) was born in Annemasse, France, on March 8, 1964. He received the B.E. degree from the National Institute of Applied Sciences (I.N.S.A.) Lyon, France in 1987, and the M.S. and Ph.D. degrees from the University of Hawaii at Manoa, Honolulu, in 1991 and 1994, all in electrical engineering. In 1996, he joined the Faculty of the University of Hawaii, Honolulu, as an Assistant Professor of Electrical Engineering. He was promoted to Associate Professor in 1999. His research interests include decoding techniques for linear codes, communication algorithms, combining coding and equalization for ISI channels, magnetic recording and statistics. He coauthored (with S. Lin, T. Kasami and T. Fujiwara) the book, Trellises and Trellis-Based Decoding Algorithms, (New York: Kluwer Academic Publishers, 1998). Dr. Fossorier is a recipient of a 1998 NSF Career Development award. He has served as Editor for the IEEE TRANSACTIONS ON COMMUNICATIONS since 1996, as Associate Editor for the IEEE COMMUNICATIONS LETTERS since 1999, and is currently the Treasurer of the IEEE Information Theory Society. He was Program Co-Chairman for the 2000 International Symposium on Information Theory and Its Applications (ISITA) and Editor for the Proceedings of the 2003 and 1999 Symposium on Applied Algebra, Algebraic Algorithms and Error Correcting Codes (AAECC). He is a member of the IEEE Information Theory and IEEE Communications Societies.
