Modeling the decoder's error distribution in a satellite communication system

Dissertation presented by Milena Planells-Rodriguez to obtain the degree of Doctor of the Ecole Nationale Supérieure de Télécommunications de Paris

Members of the Jury: J.F. Helard, J.L. Vandendorpe, M.L. Boucheret, D. Roviras, F. Castanié, G. Maral, A. Duverdier, G. Mesnager
Acknowledgments

I would like to say a few words to express my gratitude to all the people who have helped me throughout the years that led to this PhD.

First of all, I thank Professor Maral for honoring me by presiding over the jury, and for the talks we have had about the history of the Catalan people and the Pyrenees. I would like to thank Professor Helard and Professor Vandendorpe for their suggestions and questions, which have improved this thesis and will be a source of inspiration for my future work. I do not forget the Centre National d'Études Spatiales: Alban Duverdier has been a great help, suggesting solutions and new directions to investigate. I also want to thank the rest of the Signal Processing Group, especially Caroline, Frederic, and Jon. Very special thanks go to Jean Adoubert, my best supporter and, at the same time, my skiing "teacher". I would like to thank Gilles Mesnager for his patience and his good humor.

The biggest thanks go to my two supervisors, Marie-Laure Boucheret and Daniel Roviras. They are great people and researchers: even while leading a team of several PhD students, they always find the time to help when you need them.

There are many people at TéSA who make it a very special place: Sylvie, the heart of the laboratory; Professor Lacaze, who trusted me from the beginning to tutor some of his courses; Corinne, Nathalie and Marie, always willing to help; Martial, who introduced me to famous restaurants in my own country that I did not know; and all the "Boys" who have posed for my calendar. I wish good luck to those who are still on the way to the PhD: Anne-Laure, Virginie, Audrey, Garmy, Ana, etc.

To all my friends from Barcelona, Paris and Toulouse: thank you! Climbing, skiing or just drinking a Voll Damm with you has been a great help.

My deep gratitude goes, of course, to my family, without whom I would not be here today. I know I will not be able to repay all their efforts and sacrifices, but I want them to know that I am aware of them.

Last, but not least, I would like to thank Paolo. Not only has he cooked dinner for me this last year (and he does it very well), but he has also attended all my rehearsals at home for these three years. I hope to make it up to you one day.
Résumé

In recent years, digital communications have developed considerably, both in terrestrial and satellite systems. A standard was recently approved by the European Telecommunications Standards Institute, establishing the specifications for a new generation of interactive transmission systems using geostationary satellites. This standard is called DVB-RCS (Digital Video Broadcasting - Return Channel by Satellite). It proposes two different channel coding schemes: on the one hand, the classical concatenation of a Reed-Solomon block code and a convolutional code separated by an interleaver; on the other hand, a more complex code from the turbocode family. The performance of these systems is generally evaluated by simulations, whose execution time is very long. The aim of this thesis is to study the behavior of the errors at the output of the two types of decoders, and then to derive a model able to reproduce this behavior as closely as possible. The Centre National d'Études Spatiales and Alcatel Space jointly funded this study.

The concatenation of a Reed-Solomon code and a convolutional code was already studied in the 1980s. The algorithm used for convolutional decoding is maximum likelihood, also known as the Viterbi algorithm. Errors at the output of this algorithm are known to appear in bursts, because of the memory of the code. A group of correctly decoded bits between two bursts is called a gap. The output of a maximum-likelihood decoder can be modeled by a two-state Markov chain: a first error-free state (good state) and a second state in which errors occur in bursts (bad state). In the literature, a geometric model is found for the gap distribution. This model has the constraint length of the code and the average gap length as parameters. This average length is computed empirically from simulations of the transmission chain for each signal to noise ratio. We have refined the gap distribution model by developing an analytical function describing the average gap length. Regarding the modeling of error bursts, the models proposed so far do not fit our simulations of the transmission chain. We have therefore developed a new model, based on the properties of the code, which gives very good results.

The second coding scheme, the turbocode, follows a recent principle. Turbocodes are built by concatenating simple codes (convolutional or block). Instead of the Viterbi algorithm, iterative algorithms are used for decoding. These algorithms rely on the Maximum A Posteriori (MAP) principle: information is exchanged between the constituent blocks of the turbocode. Many studies are still in progress to determine the convergence of these algorithms and their performance in terms of bit error rate. This thesis analyzes the behavior of the errors at the output of these iterative decoders and proposes a series of models for the different algorithms.
Abstract

In recent years, there has been a huge increase in multimedia communications, both for terrestrial and satellite applications. Recently approved by the European Telecommunications Standards Institute, the Digital Video Broadcasting - Return Channel by Satellite (DVB-RCS) standard establishes an open specification for a new generation of geostationary satellite interactive network systems. This standard defines two types of channel coding schemes: on the one hand, a classical concatenation of a Reed-Solomon code with a convolutional code and interleaving; on the other hand, a code from the turbocode family. The performance of such coding schemes is usually evaluated by computer simulations, which are very time-consuming. The aim of this dissertation is to study the behavior of the errors at the output of such coding schemes, and then to propose a model able to reproduce this behavior. This work has been financed jointly by the French space agency (C.N.E.S.) and Alcatel Space.

The concatenation of the Reed-Solomon and the convolutional code was already studied in the 1980s. The algorithm used for convolutional decoding is the maximum likelihood algorithm, also called the Viterbi algorithm. Errors at the output of this algorithm are known to be grouped in bursts, due to the memory of the code. A group of correct bits between bursts is called a gap. The output of a maximum-likelihood decoder can thus be modeled by a Markov chain with two states: a first state where no errors take place (good state) and a second state where errors appear in bursts (bad state). Classically, a geometric model is used for the gap distribution. This model has the constraint length of the code and the average gap length as parameters. The average gap length is computed empirically from simulations of the transmission chain for each bit energy to noise ratio (Eb/No).

We have refined the model of the gap distribution by developing an analytical function for the average gap length. Regarding the burst modeling, the previously proposed models did not fit the simulation results for low and average burst lengths. We have therefore developed a new model, based on the properties of the code, that fits the whole range of possible burst lengths. Our model gives a very good match between the simulated errors (obtained via time-consuming Monte Carlo simulations) and the errors generated by the mathematical model.

The second coding scheme, a code from the turbocode family, is a recent system. Turbocodes are built from the concatenation of simple codes (either convolutional or block). Instead of the Viterbi decoding algorithm, iterative decoding based on the successive decoding of each constituent code is used. These iterative decoding algorithms are based on the Maximum A Posteriori (MAP) principle: soft information is exchanged between the elementary decoders. Many studies are still being carried out on the convergence of these algorithms and the performance achieved in terms of Bit Error Rate. This dissertation analyses the behavior of the errors at the output of such iterative decoders and proposes a model that fits the errors simulated via Monte Carlo simulations quite well.
Contents

1 Introduction
  1.1 Framework
  1.2 Dissertation's outline
  1.3 Original contributions

2 Modeling the satellite transmission chain
  2.1 Introduction
  2.2 The satellite transmission system
  2.3 Outline of the error generator
  2.4 Statistical models for bursty channels
    2.4.1 Descriptive Models
    2.4.2 Hidden Markov Models
  2.5 Statistics for analysis and comparison purposes
  2.6 Conclusion

3 Forward link modeling
  3.1 Introduction
  3.2 The concatenated coding scheme
  3.3 The decoding scheme
  3.4 Error statistics of the convolutional decoder and comparison with mathematical models
    3.4.1 Bit Error Rate
    3.4.2 The average error burst length
    3.4.3 Gap length distribution
    3.4.4 Error burst length distribution
    3.4.5 Distribution of the average error density within a burst
  3.5 New analytical model for Viterbi decoding
    3.5.1 Refinements to the gap length modeling
    3.5.2 Error burst modeling by the extended Union-Bhattacharyya bound
    3.5.3 The convolutional error generator
  3.6 Analytical model for punctured codes
    3.6.1 Gap modeling
    3.6.2 Burst modeling
    3.6.3 Simulation results
  3.7 Modeling the error statistics of the Reed-Solomon
  3.8 Conclusion

4 Return link: Turbocodes
  4.1 Introduction
  4.2 Turbocode (1, 5/7)
    4.2.1 Decoding algorithms
    4.2.2 Max-Log-Map algorithm: error statistics and modeling
    4.2.3 Log-Map algorithm: error statistics and modeling
  4.3 Turbocode DVB-RCS
    4.3.1 Max-Log-Map algorithm: error statistics and modeling
    4.3.2 Modeling the Log-Map algorithm
  4.4 Conclusion

5 Conclusions and Further work
  5.1 Conclusions
  5.2 Further work

A Gilbert model

B Fritchman model

C The DVB-RCS standard
  C.1 Presentation
  C.2 System Overview
  C.3 Base-band physical layer specification
    C.3.1 Forward link
    C.3.2 Return link
List of Figures

2.1 Block diagram of the DVB-RCS satellite communication system.
2.2 Block diagram of the satellite chain emulator.
2.3 Gilbert model with two states.
2.4 Diagram of general Fritchman model.
3.1 Concatenated codes in the forward link of the DVB-RCS standard.
3.2 Framing structure.
3.3 Conceptual diagram of the convolutional interleaver and deinterleaver.
3.4 Conceptual diagram of the convolutional code K=7 (133,171).
3.5 Decoding scheme in Forward Link.
3.6 Example of Viterbi algorithm with a 4-state trellis.
3.7 Dependence of b̄ on the BLC value. Code rate 1/2.
3.8 Dependence of b̄ on the density threshold of the AEDC. Code rate 1/2.
3.9 Gap length distribution. BER = 2·10⁻⁴. Code rate 1/2.
3.10 Burst length distribution. BER = 2·10⁻⁴. Code rate 1/2.
3.11 Average error density. BER = 2·10⁻⁴. Code rate 1/2.
3.12 Evolution of average gap length with Eb/No.
3.13 Conceptual diagram of the convolutional code K=3, (G0, G1) = (7, 5).
3.14 Trellis of convolutional code (3, 1/2), (G0, G1) = (7, 5).
3.15 S matrix of convolutional code (3, 1/2).
3.16 Comparison between the analytical model for burst distribution and the simulation data. BER = 2·10⁻⁴. Code rate 1/2.
3.17 Conceptual diagram of the analytical model.
3.18 Convolutional error generator.
3.19 Cumulative relative frequency of gap length. BER = 2·10⁻⁴. Code rate 1/2.
3.20 Burst distribution. BER = 2·10⁻⁴. Code rate 1/2.
3.21 Cut-off rate evolution with Eb/No (dB).
3.22 Gap distribution. BER = 10⁻³. Code rate 1/2.
3.23 Burst distribution. BER = 10⁻³. Code rate 1/2.
3.24 Gap distribution. BER = 5·10⁻⁵. Code rate 1/2.
3.25 Burst distribution. BER = 5·10⁻⁵. Code rate 1/2.
3.26 Gap distribution. BER = 2·10⁻⁵. Code rate 1/2.
3.27 Burst distribution. BER = 2·10⁻⁵. Code rate 1/2.
3.28 Average gap length as a function of Eb/No.
3.29 Trellis of convolutional code K=3, code rate 1/2. Bits erased by a 3/4 puncturing matrix are substituted by x.
3.30 Burst distribution. BER = 2·10⁻⁴. Code rate 3/4.
3.31 Burst distribution. BER = 2·10⁻⁴. Code rate 7/8.
3.32 Burst distribution. BER = 10⁻³. Code rate 3/4.
3.33 Burst distribution. BER = 5·10⁻⁵. Code rate 3/4.
3.34 Burst distribution. BER = 2·10⁻⁵. Code rate 3/4.
3.35 Burst distribution. BER = 10⁻³. Code rate 7/8.
3.36 Burst distribution. BER = 5·10⁻⁵. Code rate 7/8.
3.37 Burst distribution. BER = 2·10⁻⁵. Code rate 7/8.
3.38 BER evolution with Eb/No (dB).
3.39 Error burst lengths before deinterleaving. BER = 2·10⁻⁴. Code rate 1/2.
3.40 Error burst lengths after deinterleaving. BER = 2·10⁻⁴. Code rate 1/2.
3.41 Number of byte errors per RS packet (204 bytes) after deinterleaving. BER = 2·10⁻⁴. Code rate 1/2.
3.42 DVB-S error generator model.
4.1 Turbocode with two RSC encoders in parallel concatenation.
4.2 Turbodecoder with two SISO modules in parallel concatenation.
4.3 (1, 5/7) Recursive Systematic Convolutional code.
4.4 Trellis of (1, 5/7) RSC code.
4.5 Visualization of the MAP algorithm.
4.6 Comparison between Viterbi and Max-Log-Map algorithms.
4.7 Comparison between Log-Map and Max-Log-Map algorithms.
4.8 Evolution of BER with iterations. Max-Log-Map decoding.
4.9 Evolution of the probability density function of the logarithm of the extrinsic information ratio with iterations.
4.10 Probability density function of the error burst length at the output of the first SISO decoder at iteration 1.
4.11 Probability density function of the error burst length at the output of the first SISO decoder at iteration 2. Parameters of the extrinsic function: µ_ext = 5.7759, σ²_ext = 15.4353.
4.12 Probability density function of the error burst length at the output of the first SISO decoder at iteration 3. Parameters of the extrinsic function: µ_ext = 11.5085, σ²_ext = 28.7676.
4.13 Probability density function of the error burst length at the output of the first SISO decoder at iteration 4. Parameters of the extrinsic function: µ_ext = 13.3, σ²_ext = 46.73.
4.14 Average bit error density within a burst.
4.15 Probability density function of the error burst length at the output of the first SISO decoder at iteration 1. Analytical model parameters: η_s = 1, η_p = 1.
4.16 Probability density function of the error burst length at the output of the first SISO decoder at iteration 2. Analytical model parameters: η_s = 1.8, η_p = 0.6.
4.17 Probability density function of the error burst length at the output of the first SISO decoder at iteration 3. Analytical model parameters: η_s = 2, η_p = 0.5.
4.18 Cumulative distribution of the gap length.
4.19 Probability density function of the number of bursts per packet.
4.20 Functional diagram of the Max-Log-Map error generator.
4.21 BER and FER results. Comparison between chain simulation and analytical model.
4.22 General model for FER and BER distributions.
4.23 Gap length distribution for frames.
4.24 HMM model results for turbocode (1, 5/7), r = 1/3.
4.25 Constituent RSC encoder of the turbocode used in DVB-RCS.
4.26 Trellis of the RSC code used in DVB-RCS.
4.27 Schematic of the generic turbodecoder of DVB-RCS.
4.28 BER performance for different coding rates.
4.29 The prologue in the decoding process.
4.30 BER evolution between decoders.
4.31 Burst distribution. First iteration. Analytical model parameters: η_s = 1.4, η_p = 1.4.
4.32 Burst distribution. Second iteration. Analytical model parameters: η_s = 1.65, η_p = 1.55.
4.33 Burst distribution. Sixth iteration. Analytical model parameters: η_s = 1.4, η_p = 1.5.
4.34 Bit error density inside a burst. Iteration 1.
4.35 Cumulative relative frequency of gap length.
4.36 Distribution of bursts per erroneous packet.
4.37 Burst distribution at iteration 5. Comparison between analytical and empirical results.
4.38 Gap length distribution for correct packets in the DVB-RCS decoding model.
4.39 HMM model results for turbocode (1, 5/7), r = 1/3.
A.1 Gilbert model with two states.
B.1 Diagram of general Fritchman model.
B.2 Fritchman model with one error state.
C.1 Example of system model with a regenerative satellite.
C.2 Functional block diagram of the transmission system.
C.3 Framing structure.
C.4 Conceptual diagram of the convolutional interleaver and deinterleaver.
C.5 Conceptual diagram of the convolutional code K=7 (133,171).
C.6 Composition of an ATM traffic burst.
C.7 Composition of the traffic burst carrying MPEG-2 transport packets.
C.8 Turbocode of DVB-RCS standard.
List of Tables

3.1 Puncturing Patterns.
3.2 Parameters of the transmission chain simulation for r = 1/2.
3.3 BER results for r = 1/2, Eb/No = 3.4 dB.
3.4 Values of average gap length constants for 1/2 coding rate.
3.5 BER comparison between simulation data and analytical model.
3.6 Values of average gap length constants for 3/4 and 7/8 coding rates.
4.1 Results from the analytical simulation for iteration 5.
C.1 Puncturing Patterns.
C.2 Puncturing Patterns for Turbo code.
C.3 Table of circular states.
C.4 Frame size and permutation parameters for Turbo.
Notations and acronyms

Notations

K — Constraint length of a convolutional code
Eb — Energy per useful bit
Ec — Energy per coded bit
Eb/No — Bit energy to noise ratio
G0, G1 — Generator polynomials of a convolutional code
r — Code rate
Ro — Cut-off rate
T — Number of bytes which can be corrected in an RS error protected packet
Ts — Symbol period

Acronyms

AEDC — Average Error Density Criterion
ATM — Asynchronous Transfer Mode
AWGN — Additive White Gaussian Noise
BER — Bit Error Rate
BLC — Burst Length Criterion
CCSDS — Consultative Committee for Space Data Systems
DSL — Digital Subscriber Line
DVB — Digital Video Broadcasting
DVB-S — Digital Video Broadcasting by Satellite
DVB-RCS — Digital Video Broadcasting - Return Channel on Satellite
DVB-DSNG — Digital Video Broadcasting - Digital Satellite News Gathering
ENST — Ecole Nationale Supérieure de Télécommunications
ETSI — European Telecommunications Standards Institute
FDMA — Frequency Division Multiple Access
FEC — Forward Error Correction
FER — Frame Error Rate
FIFO — First-In First-Out
HMM — Hidden Markov Models
ISI — Intersymbol Interference
JPEG — Joint Photographic Experts Group
MPEG — Moving Pictures Experts Group
MUX — Multiplex
PC — Personal Computer
PEP — Pairwise Error Probability
PRBS — Pseudo Random Binary Sequence
QPSK — Quaternary Phase Shift Keying
RS — Reed-Solomon
RSC — Recursive Systematic Convolutional
SISO — Soft Input Soft Output
SNR — Signal to Noise Ratio
SPW — Signal Processing Workstation
TDMA — Time Division Multiple Access
Chapter 1

Introduction

1.1 Framework

In recent years, multimedia communications have grown at high speed, connecting not only international corporations but also millions of individual users. At the same time, thanks to digital communications, we have witnessed the convergence of different applications (television, data transmission, telephone, etc.) into a single device, such as the mobile phone. The ultimate goal is total inter-connectivity between users, from their small terminals, through a huge network with high speed and capacity. In this network, satellite systems can play an important role as a complement to existing terrestrial solutions, thanks to their large coverage and easy deployment. Besides, new advances in technology allow the use of digital processing on board the satellite. Packet switching, which used to be performed on the ground, can now be carried out on board the satellite, improving the use of the large available bandwidth.

A recent standard defining a new generation of geostationary satellite interactive network systems has been approved by the European Telecommunications Standards Institute (ETSI): the Digital Video Broadcasting - Return Channel by Satellite (DVB-RCS) standard. This standard combines a classical broadcasting forward link from the provider to the users (keeping the specifications of the Digital Video Broadcasting by Satellite (DVB-S) standard) and a more complex return link from the users to the provider. It defines two types of channel coding schemes: on the one hand, the classical concatenation of the Reed-Solomon (204, 188) code with the convolutional code (171, 133) and interleaving for the forward link; on the other hand, a turbocode for the return link. To evaluate the performance of these coding schemes, computer simulations are widely used.

The knowledge of the error distribution at the output of the decoder is fundamental when analysing the performance of the system at higher levels. When simulating a network at the link or routing layer (layers 2 and 3 of the OSI model), huge amounts of packets are simulated. The precise simulation of the physical layer (Monte Carlo simulation of the data transmission) is impracticable because of its huge cost in time and computer resources. In these cases it is necessary to have a model of the errors occurring during the data transmission. These error models must be as close as possible to the real errors encountered in the physical layer. Our aim is to develop a mathematical model of the errors in the physical layer of the DVB-RCS system that matches what happens in the real physical layer very well. The advantage of the model is twofold: first, it saves time, as we obtain results faster than with the transmission chain simulation; second, the model can be used by people unacquainted with the processes that take place at the physical layer of a satellite transmission system.
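As an illustration of the kind of error generator this work targets, the two-state Gilbert model (introduced in Chapter 2 and detailed in Annex A) can be sketched as follows: a "good" state that produces no errors and a "bad" state that produces errors in bursts. The transition probabilities and error probability below are illustrative values, not fitted parameters.

```python
import random

def gilbert_errors(n_bits, p_gb, p_bg, p_err_bad, seed=0):
    """Generate an error sequence (1 = bit error) from a two-state
    Gilbert model: a 'good' state producing no errors and a 'bad'
    state producing errors with probability p_err_bad.

    p_gb: probability of moving from the good to the bad state.
    p_bg: probability of moving from the bad to the good state.
    """
    rng = random.Random(seed)
    errors = []
    bad = False  # start in the good (error-free) state
    for _ in range(n_bits):
        errors.append(1 if bad and rng.random() < p_err_bad else 0)
        # State transition for the next bit.
        if bad:
            bad = rng.random() >= p_bg
        else:
            bad = rng.random() < p_gb
    return errors

# Illustrative parameters: rare good-to-bad transitions produce long
# error-free gaps separated by short, dense error bursts.
seq = gilbert_errors(100_000, p_gb=1e-3, p_bg=0.2, p_err_bad=0.5)
print(sum(seq) / len(seq))  # empirical bit error rate
```

Such a generator produces bursty error sequences orders of magnitude faster than a Monte Carlo simulation of the full transmission chain, which is precisely the role of the models developed in the following chapters.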
1.2 Dissertation's outline

This work studies the error properties at the output of the decoder in a satellite transmission system. Chapter 1 introduces the framework of the study, describes the dissertation plan and points out the original contributions of this work.

Chapter 2 describes the satellite transmission system and the processes that take place in the receiver structure. Some teams have already worked on the modeling of the errors at the output of the convolutional code (171, 133) [33]-[22]. They use a descriptive model: the structure of the error sequences is reproduced by means of various statistics that describe the decoder's behavior. We also present another modeling approach: Hidden Markov Models (HMM). These models are not based on the structure of the code; they are general models that describe the error sequences in probabilistic terms extracted from an observation vector. As mentioned before, we focus our attention on the communication system defined by the DVB-RCS standard.

Chapter 3 describes the channel code used in the forward link (from the provider to the user). The application of the models described in Chapter 2 to the convolutional code (171, 133) is analysed. As the results are not satisfactory, a new model is created, integrating parts of the descriptive modeling approach with some new ideas. Although the model is developed for this specific convolutional code, it can easily be generalized to any similar coding scheme using convolutional codes.

The decoder used in the return link (from the user to the provider) is presented in Chapter 4. It is an iterative convolutional decoder. First, we analyse the properties of the errors at the output of the decoder for two different algorithms: the Log-Map and Max-Log-Map algorithms. Then, the model developed in Chapter 3 is modified to apply to Max-Log-Map decoding, while the Log-Map decoding errors are modeled by an HMM.

The conclusions of this work and some directions for future work are presented in Chapter 5. Annexes A and B describe in detail the HMMs mentioned in Chapter 2. Annex C presents a detailed description of the DVB-RCS system and the physical layer specifications.
1.3 Original contributions

The main contribution of this work is the conception of a new mathematical model for the error burst length at the output of a convolutional decoder (in particular the (K = 7, rate 1/2) code used in the DVB-RCS). The models found in the literature are not satisfactory because they fail to describe the short error burst lengths. Moreover, they are based on average statistics or probabilistic terms extracted from the results of the chain simulations, so when the signal to noise ratio changes, the parameters are no longer valid and new chain simulations have to be carried out. Our error burst length model is able to describe the whole range of burst lengths with accuracy. It has been incorporated into an error generator that reproduces the error statistics of the DVB-RCS physical layer, giving very good results.

The error behavior of the return link of the DVB-RCS has also been analysed, and we now understand better the processes that take place in an iterative decoder. There were no models describing the burst lengths at the output of a turbodecoder. We have developed two new models: one for Max-Log-Map decoding and another for Log-Map decoding. The Max-Log-Map algorithm turned out to bear many resemblances to the Viterbi algorithm; consequently, we have modified the Viterbi model to take into account the specific characteristics of the Max-Log-Map algorithm. The Log-Map algorithm has been modeled by a mixture of HMMs. We therefore now possess a very powerful tool that is able to describe the error distribution at the output of a turbodecoder with accuracy.
Chapter 2

Modeling the satellite transmission chain

2.1 Introduction

In this chapter we describe the physical layer of the DVB-RCS satellite communication system. The receiver structure and the processes that can cause errors are described in detail. Then, the outline and characteristics of the emulator of the satellite transmission chain are presented. A state of the art of the mathematical models used over the last decade follows, together with a description of the error statistics that will allow us to compare the mathematical models.
2.2 The satellite transmission system

The DVB-RCS satellite communication system can be represented as shown in Figure 2.1. The information bits are first encoded with a Forward Error Correction (FEC) code. In the forward link, the proposed channel code is a concatenation of a convolutional code with a block code and interleaving. The convolutional code has constraint length K = 7 and generator polynomials (G0 = 171, G1 = 133) in octal. The block code is a Reed-Solomon (204, 188). In the return link, several possibilities are defined: the concatenation of the convolutional code (171, 133) and the Reed-Solomon (204, 188) without interleaving, the Reed-Solomon code alone, the convolutional code alone, and a code from the turbocode family. The next two chapters describe these coding schemes in detail.
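As an illustration of the forward-link inner code, a rate-1/2 convolutional encoder with the generator polynomials above can be sketched as follows. The shift-register bit ordering and the absence of puncturing and trellis termination are simplifying assumptions of this sketch, not a statement of the full standard behavior.

```python
G0, G1 = 0o171, 0o133  # generator polynomials in octal, K = 7

def conv_encode(bits, g0=G0, g1=G1, k=7):
    """Rate-1/2 non-systematic convolutional encoder sketch.

    The register holds the current bit plus the k-1 previous bits;
    each information bit yields two coded bits (one per polynomial)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)
        out.append(bin(state & g0).count("1") % 2)  # parity with G0
        out.append(bin(state & g1).count("1") % 2)  # parity with G1
    return out

coded = conv_encode([1, 0, 1, 1, 0, 0, 1])
print(len(coded))  # two coded bits per information bit
```

The memory of this register (K - 1 = 6 bits) is what makes decoding errors appear in bursts, as analysed in Chapter 3.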
After encoding, the mapping assigns to two coded bits one of the four possible phases of the QPSK constellation, using Gray encoding.

[Figure 2.1: Block diagram of the DVB-RCS satellite communication system: the forward and return links between the provider and the users through a geostationary satellite, with the transmit chain (channel coding, puncturing, mapping, baseband shaping h(t), QPSK modulation) and the receive chain (carrier and clock recovery, matched filter h*(−t), sampling, QPSK demodulation, de-puncturing, channel decoding) on either side of the RF satellite channel.]

The I and Q signals
are then filtered by a square root raised cosine filter to satisfy the Nyquist criterion for zero intersymbol interference (ISI). The signal at the output of the QPSK modulator is given by:
s(t) = ∑_{k=−∞}^{∞} cos(2π f_c t + φ_k) h(t − kT_s) = Re[ ∑_{k=−∞}^{∞} e^{jφ_k} e^{j2π f_c t} h(t − kT_s) ]

where φ_k = 2πm/4 + π/4 with m ∈ {0, 1, 2, 3} is one of the four possible phases of the QPSK constellation, h(t) is the square root raised cosine filter, f_c is the carrier frequency and T_s is the symbol period.
The satellite channel can be approximated by an Additive White Gaussian Noise (AWGN) channel if we assume that the amplifiers on board the satellite work in their linear range. The received signal prior to demodulation, r(t), is given by:
r(t) = Re[ ∑_{k} e^{jφ_k} e^{j(2π f_c t + φ(t))} h(t − kT_s − τ) ] + n(t)

where τ is the propagation delay from the transmitter to the receiver; φ(t) is the carrier phase offset due to the propagation delay, the carrier offset between the transmit and receive oscillators, and the jitter or phase noise of the oscillators; and n(t) is the AWGN. If coherent demodulation is used, the parameters τ and φ(t) must be estimated by the carrier and clock recovery unit. The block diagram of the generic analog receiver is depicted below:
[Generic analog receiver: r(t) is multiplied by e^{−j2π f_c t − jϕ}, filtered by the matched filter h*(−t) and sampled at t = nT_s + τ.]
The received signal is first coherently demodulated to a complex baseband signal. The baseband signal is then further processed in a matched filter and sampled. The sampling instant is controlled by the timing recovery system. Defining n'(t) as the AWGN filtered by the matched filter, the signal at the input of the decoder is given by:

r_dem(nT_s) = Re[ ∑_{k} e^{jφ_k} e^{j(φ(t) − φ̂(t))} h((n − k)T_s + τ − τ̂) ] + n'(nT_s)
Even when the estimation of the carrier phase offset and of the timing is perfect, there is a residual phase noise at the output of the synchronization unit. The coded bits at the input of the decoder can be corrupted by this residual phase noise and by the colored Gaussian noise. A complete and meaningful analytical description of the bit error process at the output of the decoder is quite difficult, so computer simulations are used instead to simulate the chain shown in Figure 2.1. This approach requires enormous computer resources when working at low Bit Error Rate (BER).
2.3 Outline of the error generator
The aim of this dissertation is to study the behavior of the errors at the output of the decoder in this satellite communication system and to propose a model that reproduces this error behavior. The main advantage of the model is the reduction in time and resources with respect to the simulation of the transmission chain. Moreover, the performance of a communication system is usually given in terms of BER or FER, and little is known about the error distribution. Are the errors randomly distributed within a packet or do they follow a particular law? The knowledge of these error statistics will be very useful for designing the packet size or the overhead location. This dissertation tries to give some answers about the error behavior. For simplification purposes, we will assume that a perfect estimate of the received carrier phase and timing is available and that there is no residual phase noise at the output of the synchronization unit. Therefore, the coded bits at the input of the decoder are corrupted only by colored Gaussian noise. The satellite transmission chain emulator that we want to implement will present two values at its output:
- 0 if there is no error in decoding the transmitted bit.
- 1 if there has been an error in decoding the transmitted bit.
Thus, the output of the satellite transmission chain emulator will be a sequence of ones and zeros. It can be seen as an error generator whose output is added, using an XOR gate, to the original information bits to model the received decoded information bits, as shown in Figure 2.2. In this example, we see that the chain emulator produces an error at positions 4, 6 and 7 in the sequence. These errors are added to the information bits to yield the received decoded information bits.
[Figure 2.2: Block diagram of the satellite chain emulator: the error generator output (e.g. 0001011) is XORed with the information bits (e.g. 0100110) to yield the received information bits (e.g. 0101101).]
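The XOR combination of Figure 2.2 can be sketched in a few lines (a minimal illustration of ours; the function name is not from the standard):

```python
def apply_error_sequence(info_bits, error_bits):
    """Model the received decoded bits as in Figure 2.2: the error
    generator output is XORed onto the information bits."""
    assert len(info_bits) == len(error_bits)
    return [i ^ e for i, e in zip(info_bits, error_bits)]

# Example from the text: errors at positions 4, 6 and 7.
info = [0, 1, 0, 0, 1, 1, 0]    # 0100110
errors = [0, 0, 0, 1, 0, 1, 1]  # 0001011
print(apply_error_sequence(info, errors))  # [0, 1, 0, 1, 1, 0, 1]
```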
We will analyse the errors at the output of the decoders used in the DVB-RCS standard. Two different coding schemes are used; however, both schemes are based on convolutional codes. The convolutional encoder consists of a shift register with several stages, so it is characterized by a memory. Due to this memory effect, the errors at the output of the decoder appear in groups. These groups of errors are called bursts. In the next section, we describe the different models that can be used to model this type of bursty behavior. On the one hand, we present the results of two teams that have already worked on modeling the convolutional decoder (171,133). They have developed a mathematical model that generates errors by using a Monte Carlo technique. Their model is a descriptive model, since it reproduces the error sequences from various statistics that describe the decoder's behavior. On the other hand, we present the Hidden Markov Models. These models are not based on the structure of the code; they are general models that describe the error sequences by probabilistic terms extracted from an observation vector.
2.4 Statistical models for bursty channels
Mathematical models have been developed to model channels with memory [30]. These are parametrised models, capable of generating error sequences that are statistically similar to the actual error sequences. In order to reduce the computational complexity, however, most models are simplified to generate approximate error sequences, since simplicity and convenience of use are also important goals. Next, we describe the most popular models used over the last decade for modeling bursty channels.
2.4.1 Descriptive Models
A descriptive or data analytic approach to modeling consists of describing the structure of error sequences by various statistics such as the burst and gap distributions.

2.4.1.1 Burst and Gap definitions by the Burst Length Criterion
Let “1” represent an erroneous decoded bit and “0” a correct decoded bit. We denote a sequence of x successive correct (resp. incorrect) decoded bits by 0^x (resp. 1^x). We will take the following example to define the burst and gap concepts:
... 1 0^25 1 0^2 1 0^3 1 0^2 1 0^512 1 0^3 1 ...

Here 0^25 and 0^512 are gaps, while the segments 1 0^2 1 0^3 1 0^2 1 and 1 0^3 1 are bursts.
In a decoding system with memory, there are more errors in certain portions of the decoded data stream. These groups of errors are commonly referred to as bursts. Different definitions of an error burst can be found in the literature. In this section we explain the definition of error burst given by [22]. An error burst is defined as a group of bits in which two successive erroneous bits are always separated by less than a given number (X) of correct bits. Implicit in this definition is that a burst always starts and ends with an erroneous bit. The value of X must always be specified when describing an error burst. Here we will use the term “Burst Length Criterion” (BLC) for this quantity X. In convolutional decoding, errors happen due to excursions from the correct path in the code trellis structure. Denote the constraint length of the convolutional code under consideration by K. A string of K−1 correctly decoded bits will return the decoder to the correct decoding path (assuming a non-recursive code). Thus, errors corresponding to different bursts will be separated by no less than K−1 correct bits. This leads to a value of BLC equal to K−1. A gap is defined as the string of correctly decoded bits between two bursts.
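The BLC definition can be turned into a short segmentation routine (a sketch of ours; function and variable names are not from the thesis):

```python
def bursts_and_gaps(errors, blc):
    """Split a 0/1 error sequence into bursts and gaps using the Burst
    Length Criterion: inside a burst, two successive errors are separated
    by fewer than `blc` correct bits; bursts start and end with an error.
    Returns (burst_lengths, gap_lengths)."""
    positions = [i for i, e in enumerate(errors) if e == 1]
    if not positions:
        return [], []
    bounds = []                    # (first error, last error) of each burst
    start = prev = positions[0]
    for p in positions[1:]:
        if p - prev - 1 < blc:     # fewer than blc zeros: same burst
            prev = p
        else:                      # blc or more zeros: close the burst
            bounds.append((start, prev))
            start = prev = p
    bounds.append((start, prev))
    burst_lengths = [b - a + 1 for a, b in bounds]
    gap_lengths = [bounds[i + 1][0] - bounds[i][1] - 1
                   for i in range(len(bounds) - 1)]
    return burst_lengths, gap_lengths

# A burst "1 0 1", then 7 correct bits, then a burst "1" (BLC = 6, K = 7):
print(bursts_and_gaps([1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1], 6))  # ([3, 1], [7])
```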
2.4.1.2 A geometric model for burst and gap length
This model has been developed taking into account the properties inherent to convolutional codes [33]. A discrete random variable X is said to be geometrically distributed with parameter p ∈ (0, 1) if:

P(X = s) = p (1 − p)^s,    s = 0, 1, 2, ...    (2.1)
A random variable Y satisfies a modified geometric distribution of parameter p ∈ (0, 1) if there exists a positive integer d such that:

P(Y = s) = p (1 − p)^{s−d},    s = d, d+1, d+2, ...    (2.2)
In this case, Y will be called d-geometrically distributed. Omura [61] showed by a random coding argument that burst lengths for an “average convolutional code” have a distribution that may be over-bounded by an l-geometric distribution. In [33] it is shown that for convolutional codes of constraint lengths seven through ten, burst lengths are nearly 1-geometrically distributed and gap lengths are (K−1)-geometrically distributed (K is the constraint length of the code). Two parameters are needed to define these distributions: the average burst length, b̄, and the average gap length, ḡ:

b̄ = (Cumulative length of all bursts) / (Total number of bursts)    (2.3)

ḡ = (Cumulative length of all gaps) / (Total number of gaps)    (2.4)
Given these parameters, burst and gap lengths are distributed according to:

P_B(l) = (1/b̄) (1 − 1/b̄)^{l−1},    l ≥ 1    (2.5)

P_G(n) = q (1 − q)^{n−K+1},    n ≥ K − 1    (2.6)

where q = 1/(ḡ − K + 2).
The third parameter that may be important is the average density of errors in a burst, θ. If we define N_l as the number of bursts of length l, the average error density in a burst of length l is given by:

θ_l = (1/N_l) ∑_{n=1}^{N_l} (Number of errors in burst n) / (Length l of the burst)    (2.7)
The geometric model of convolutional burst error statistics states that these bursts occur randomly according to these two modified geometric distributions. Errors within a burst occur essentially randomly (except for the fact that each burst starts and ends with an error) with a determinate probability. At low Eb/No, [22] assumes that the decoder, once it has departed from the correct path, makes correct and incorrect decisions with equal probability. The average density of errors, θ_l, in a burst of length l bits is therefore:

θ_l = 1                if l = 1
θ_l = (l + 2)/(2l)     if l > 1    (2.8)
To generate error sequences similar to those produced by the convolutional decoder, the quantities b̄ and ḡ must be known. They have been tabulated for several Eb/No values [33]. A Monte Carlo software routine is used to generate Viterbi error sequences directly from these formulas.
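As an illustration, a Monte Carlo routine of this kind can be sketched as follows (our own sketch, not the tabulated routine of [33]): gap lengths are drawn from the (K−1)-geometric law of eqn. (2.6), burst lengths from the 1-geometric law of eqn. (2.5), and the bits inside a burst follow eqn. (2.8).

```python
import random

def geometric_burst(b_bar):
    """Draw a burst length from the 1-geometric law (2.5), mean b_bar."""
    l = 1
    while random.random() > 1.0 / b_bar:
        l += 1
    return l

def geometric_gap(g_bar, K):
    """Draw a gap length from the (K-1)-geometric law (2.6), mean g_bar."""
    q = 1.0 / (g_bar - K + 2)
    n = K - 1
    while random.random() > q:
        n += 1
    return n

def make_burst(l):
    """Fill a burst as in eqn. (2.8): first and last bits are errors,
    interior bits are errors with probability 1/2."""
    if l == 1:
        return [1]
    return [1] + [random.randint(0, 1) for _ in range(l - 2)] + [1]

def generate_errors(n_bits, b_bar, g_bar, K=7):
    """Alternate gaps and bursts until n_bits have been produced."""
    seq = []
    while len(seq) < n_bits:
        seq += [0] * geometric_gap(g_bar, K)
        seq += make_burst(geometric_burst(b_bar))
    return seq[:n_bits]
```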
2.4.2 Hidden Markov Models
Hidden Markov Models (HMM), or probabilistic functions of Markov chains, form the basis for the modeling of various systems: speech and image recognition, telecommunications, automatic control, and queuing systems. The popularity of these models stems from their ability to approximate a large variety of stochastic processes and from their relative simplicity. The model is used to describe the bursty nature of communication channel errors. The channel states are modeled by a Markov chain: we can observe the received symbols, but the state in which an error occurs is “hidden”. Thus, an HMM is characterized by a set of states, a set of state probabilities (the probability of being in state s_i at time t_i) and a set of transition probabilities. These parameters or probabilities are extracted from an observation or training vector. The burst and gap definitions of section 2.4.1.1 are no longer valid, since HMMs are not based on the properties of the code; another definition of burst and gap is therefore given below. Next, we present some HMMs.
2.4.2.1 Burst and Gap definition by the Average Error Density Criterion
Following the Average Error Density Criterion [58], a burst is defined as a region where the local bit error rate exceeds a certain threshold value ∆0. The burst is assumed to begin with an error and end with an error. If the successive inclusion of the next error keeps the density above the specified value, the burst is continued; the burst ends if further inclusion of an error would reduce the density below the specified value. Let the specified density be ∆0 = 0.5, and assume that the first “1” does not belong to the previous burst. Take the following sequence as an example:

... 0 0 0 1 1 0 0 1 0 1 0 0 0 1 ...

Start from the first error. If the second error is included, the density is 1; if the third error is included, the density is 3/5; if the fourth error is included, the density is 4/7; if the fifth error is included, the density is 5/11 < 0.5. Hence, the burst ends at the fourth error, i.e., it is 1 1 0 0 1 0 1.
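The AEDC grouping procedure can be sketched as follows (an illustration of ours; it reproduces the worked example above, treating densities equal to the threshold as acceptable):

```python
def aedc_bursts(errors, delta0):
    """Group the errors of a 0/1 sequence into bursts by the Average
    Error Density Criterion: starting from an error, the next error is
    included as long as the resulting local density stays >= delta0."""
    positions = [i for i, e in enumerate(errors) if e == 1]
    bursts = []
    i = 0
    while i < len(positions):
        start = end = positions[i]
        n_err = 1
        j = i + 1
        while j < len(positions):
            # density if the candidate error were included in the burst
            density = (n_err + 1) / (positions[j] - start + 1)
            if density >= delta0:
                end = positions[j]
                n_err += 1
                j += 1
            else:
                break
        bursts.append(errors[start:end + 1])
        i = j
    return bursts

# The example from the text: the first burst is 1 1 0 0 1 0 1.
seq = [0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1]
print(aedc_bursts(seq, 0.5))  # [[1, 1, 0, 0, 1, 0, 1], [1]]
```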
A gap is defined as the region of error-free bits between two errors. The length of the gap is the number of those error-free bits.

2.4.2.2 Gilbert Model
The Gilbert model [59] consists of two states (see Figure 2.3):

- A good state G with a null error probability.
- A bad state B with an error probability equal to p_b.

[Figure 2.3: Gilbert model with two states: G and B have self-transition probabilities g and b, and the transition probabilities between them are 1−g and 1−b.]
Good and Bad states are hidden Markov states: the observation is the error sequence, not the G and B states. The probabilities of changing state, 1−g and 1−b, are small, producing runs of consecutive correct bits (gaps) and consecutive error bits (bursts). Therefore, the model is able to describe the bursty nature of convolutional decoded bit errors. The mapping of the states onto the digit sequence is probabilistic, characterized by the probability p_b of producing an error in the B state. The inverse mapping, i.e. the reconstruction of the state sequence from the digit sequence, is not possible, since there is no way of distinguishing the 0's in a sequence as originating from either the G state or the B state. Elliot [18] suggested a modification to Gilbert's model by introducing a parameter p_g, the probability of error in the G state. In this way it is possible to visualize channels as making transitions between long periods in a good state with a small probability of error and a bad state with a relatively larger probability of error. Details of the parameters and the computation of the transition probabilities of both models are given in Annex A. While Gilbert's model is conceptually satisfying in its ability to generate burst errors, it has some limitations as far as its suitability to represent real channels is concerned. The limitations arise from its renewal nature and from the assumption of geometric distributions for the run lengths of G and B. One way of overcoming these limitations is to enlarge the number of states.

2.4.2.3 Fritchman model
[Figure 2.4: Diagram of the general Fritchman model: a finite Markov chain whose states 1, 2, ..., N are partitioned into a set of error-free states and a set of error states.]
Fritchman investigated the characteristics of a very general case: a finite-state Markov chain model with N states, partitioned into two types of states, error-free states and error states [58]. In the good states no errors are produced, whereas the bad states always produce an error. There are no transitions within the set of good states and no transitions within the set of bad states, since such transitions are not observable in the output. The state transition diagram of Fritchman's model is shown in Figure 2.4. Details of the computation of the state transition probabilities are given in Annexe B. The Fritchman model is able to represent sequences of errors with different periods of good and bad states.
2.5 Statistics for analysis and comparison purposes
In this section, we briefly describe the statistics that we will use to compare the different models with each other and with the results obtained from the Monte Carlo simulation of the transmission chain of Figure 2.1. These statistics are:
- The Bit Error Rate (forward and return link) and the Frame Error Rate (return link).

- The average error burst length defined in section 2.4.1.1:

  b̄ = (Cumulative length of all bursts) / (Total number of bursts)    (2.3)
The average error burst length depends on the value of the BLC or the average density threshold ∆0 used to define the error burst lengths, the type of convolutional code and the Eb/No value. At moderate to high signal to noise ratios, the error event probability is low and error events are largely separated by long gaps. For very small values of BLC (or, respectively, high values of ∆0), error events can be split into multiple bursts, reducing b̄. For very large values of BLC (or small values of ∆0), separate events may be combined, increasing b̄. The average burst length is however essentially independent of BLC or ∆0 over a very wide range.
- The gap distribution, i.e. the cumulative distribution of the gap length, P(gap length ≤ n).

- The burst distribution, i.e. the probability density function of the error burst length, P(error burst length = l).

- The distribution of the average error density within a burst, i.e. the average error density within a burst of length l versus the length l of the burst, given in section 2.4.1.1:

  θ_l = (1/N_l) ∑_{n=1}^{N_l} (Number of errors in burst n) / (Length l of the burst)    (2.7)

- The number of error bursts per packet (in the return link).

2.6 Conclusion
We have described the physical layer of the DVB-RCS satellite communication system and the different processes that cause errors in the transmitted bits: the phase noise, the AWGN and imperfect estimates of the carrier phase and timing. The need for a model has been discussed and the principal statistical models used over the last decade have been introduced. In the next chapter, we obtain the error statistics of the Monte Carlo simulation of the forward link transmission chain of the DVB-RCS system. From the chain simulation, we extract the parameters of the models presented in this chapter. The different model results are compared using the statistics defined in the last section of this chapter.
Chapter 3: Forward link modeling

3.1 Introduction
In this chapter we describe in detail the channel code used in the forward link (provider to user). The decoding algorithm (Viterbi) is explained with an example. A Monte Carlo simulation of the transmission chain is carried out and several error statistics are evaluated. Then, the application of the models defined in the previous chapter is analysed, comparing the error statistics of the different model results. As the model results are not close enough to the transmission chain simulation results, a new mathematical model is developed and validated.
3.2 The concatenated coding scheme
The forward link of the DVB-RCS standard is established from the service provider to the users to broadcast video, audio and data. The standard specifies the equipment performing the adaptation of the baseband TV signals to the satellite channel characteristics, as shown in Figure 2.1. In this chapter we describe the channel code of the transmission system: a concatenated coding scheme. A detailed description of the physical layer of the DVB-RCS standard can be found in Annexe C. The satellite communication channel can be characterized by a white Gaussian random process. In the late 70's, convolutional codes with Viterbi decoding were employed for channel coding in deep space. This type of code is able to correct random errors. Viterbi decoding is a maximum likelihood algorithm: it chooses the most probable sequence at the receiver. The complexity of the algorithm grows exponentially with the number of registers of the code, which is the reason why convolutional codes' constraint lengths are normally small. A way of improving performance without increasing the constraint length of the code was found in the 80's: the concatenation of codes. With two-level coding, we can achieve good performance with relatively small complexity. The outer code is chosen to be able to correct the errors at the output of the inner decoder. It is known that errors at the output of a convolutional decoder appear in bursts. That is the reason why the outer code must be able to correct error bursts.
The DVB-RCS standard [20] has adopted a convolutional code with constraint length K = 7 as the inner code and a Reed-Solomon (n=204, k=188, T=8) block code as the outer code. The block diagram of this coding scheme is shown in Figure 3.1.
[Figure 3.1: Concatenated codes in the forward link of the DVB-RCS standard: information bits → outer Reed-Solomon (204,188) code → convolutional interleaver → inner convolutional (171,133) code → modulation.]
The framing organization is shown in Figure 3.2. Reed-Solomon codes do not work on bits but on symbols; this code works on 8-bit symbols, i.e., on bytes. The encoder takes 188 bytes and adds (n−k) redundant bytes at the end of the packet. It is a systematic code because the input data form part of the codeword. The correction capacity (T), the number of bytes which can be corrected in an RS(204,188) error protected packet, is 8 bytes. It makes no difference whether there is one bit error or several bit errors in the same symbol. Thus, the Reed-Solomon code is usually employed to correct bit error bursts.

[Figure 3.2: Framing structure. Services (data, audio and video coders) are multiplexed into 188-byte MPEG-2 transport MUX packets; the RS(204,188) encoder appends 16 redundancy bytes to form 204-byte Reed-Solomon error protected packets, transmitted continuously and interleaved before entering the convolutional encoder.]

In a concatenation with the convolutional code, an interleaver is used between the two codes. The interleaver is needed because the errors at the output of the Viterbi decoder appear in bursts that can be several bytes long. As mentioned before, the Reed-Solomon code copes well with several bit errors in the same symbol; however, it does not work as well when the erroneous symbols of the packet are correlated with each other. Thus, the function of the interleaver is to break up the bursts at the output of the Viterbi decoder so that bit errors do not spread over consecutive bytes. The conceptual scheme of the interleaver of the DVB-RCS standard is shown in Figure 3.3. Convolutional interleaving with depth 12 is applied to the RS error protected packets. The interleaver is composed of 12 branches, cyclically connected to the input byte-stream by the input switch. Each branch is a First-In, First-Out (FIFO) shift register with 17×j cells, j being the branch index. Each cell of the FIFO contains 1 byte.
[Figure 3.3: Conceptual diagram of the convolutional interleaver and deinterleaver. The interleaver branches contain 0, 17, 17×2, ..., 17×11 cells, mirrored in reverse order on the deinterleaver side; sync words are routed through the zero-delay branch.]
The interleaved frame is encoded by the convolutional code shown in Figure 3.4. The generator polynomials are G0 = 171 and G1 = 133 in octal representation. The encoder register is initialized to all zeroes before encoding the first data bit. At the end of each data block (i.e. a packet of TV programs), the encoder is flushed with 6 zero bits and the output is generated until the encoder reaches its all-zero state. This way, the initial and final states (the all-zero state) are known by the decoder at reception.

[Diagram: the serial input bit-stream feeds a shift register of six 1-bit delays; modulo-2 adders produce the X output (171 octal) and the Y output (133 octal).]
Figure 3.4: Conceptual diagram of the convolutional code K=7 (133,171).
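The encoder of Figure 3.4 can be sketched directly from the generator polynomials (an illustration of ours; the bit ordering inside the register is a convention we chose). Note that the Hamming weight of its impulse response equals the free distance of the unpunctured code, d_free = 10:

```python
G0, G1 = 0o171, 0o133  # generator polynomials in octal

def conv_encode(bits):
    """Rate-1/2, K=7 convolutional encoder (171,133): the register is
    initialised to zero and flushed with 6 zero bits at the end."""
    state = 0  # 7-bit window: newest input bit plus 6 memory bits
    out = []
    for b in bits + [0] * 6:   # flush back to the all-zero state
        state = (state >> 1) | (b << 6)
        x = bin(state & G0).count("1") & 1  # X output (171 octal)
        y = bin(state & G1).count("1") & 1  # Y output (133 octal)
        out += [x, y]
    return out
```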
A subsequent puncturing process allows for coding rates of 2/3, 3/4, 5/6 and 7/8. The coded bits are deleted following the puncturing patterns shown in Table 3.1.
Original code: K = 7, G0 = 171oct (X), G1 = 133oct (Y)

Code rate | Puncturing pattern      | Transmitted sequence                  | d_free
1/2       | X: 1, Y: 1              | C1 = X1, C2 = Y1                      | 10
2/3       | X: 10, Y: 11            | C1 = X1 Y2 Y3, C2 = Y1 X3 Y4          | 6
3/4       | X: 101, Y: 110          | C1 = X1 Y2, C2 = Y1 X3                | 5
5/6       | X: 10101, Y: 11010      | C1 = X1 Y2 Y4, C2 = Y1 X3 X5          | 4
7/8       | X: 1000101, Y: 1111010  | C1 = X1 Y2 Y4 Y6, C2 = Y1 Y3 X5 X7    | 3
NOTE: 1 = transmitted bit, 0 = non-transmitted bit.

Table 3.1: Puncturing Patterns.
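Applying a puncturing pattern amounts to masking the X and Y streams and keeping only the bits marked “1” (a sketch of ours; the multiplexing order of the surviving bits is a simplification):

```python
PATTERNS = {  # (X pattern, Y pattern) from Table 3.1; 1 = transmitted
    "1/2": ("1", "1"),
    "2/3": ("10", "11"),
    "3/4": ("101", "110"),
    "5/6": ("10101", "11010"),
    "7/8": ("1000101", "1111010"),
}

def puncture(x_bits, y_bits, rate):
    """Delete coded bits according to the puncturing pattern of the
    requested rate; the patterns repeat periodically along the streams."""
    px, py = PATTERNS[rate]
    out = []
    for k in range(len(x_bits)):
        if px[k % len(px)] == "1":
            out.append(x_bits[k])
        if py[k % len(py)] == "1":
            out.append(y_bits[k])
    return out
```

For example, at rate 3/4 every 3 information bits produce 6 coded bits of which only 4 are transmitted, which is exactly the 3/4 rate of Table 3.1.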
3.3 The decoding scheme
The decoding scheme that manufacturers have to implement in the reception chain is given in Figure 3.5. Classically, a Viterbi algorithm is used for the convolutional decoding. The demodulator usually gives soft inputs to the decoder. The Viterbi algorithm recursively searches through the trellis for the most likely sequence, i.e., the sequence that maximizes the likelihood function.
[Figure 3.5: Decoding scheme in the forward link: inner decoding (Viterbi algorithm) → convolutional de-interleaver → outer decoding.]
We define N_k as the number of survivor paths at step k of the trellis, i as the index of the survivor paths (i = 0, ..., N_k − 1), L as the number of trellis sections or branches, r_k as the received bits at step k of the trellis, c_k^i as the bits of the considered trellis transition (of the i-th path) and σ² as the variance of the additive Gaussian noise. In the case of AWGN, the metric of a trellis path i given by the maximum likelihood criterion is:

λ_ML(r, c^i) = ∏_{k=0}^{L−1} (1/√(2πσ²)) exp(−(r_k − c_k^i)²/(2σ²))    (3.1)
By taking the logarithm of eqn. (3.1) and neglecting the terms that are common to all branches, we find that an equivalent maximum likelihood algorithm selects the path in the trellis that minimizes the Euclidean distance metric:
λ_viterbi(r, c^i) = D(r, c^i) = ∑_{k=0}^{L−1} |r_k − c_k^i|²    (3.2)
The Viterbi algorithm is a sequential trellis search algorithm for performing maximum likelihood detection. The path metric is decomposed as the sum of branch metrics. The branch metric for a given transition is thus the Euclidean distance between the received bits and the bits associated with the considered trellis transition. At each step k of the trellis, the algorithm compares the metrics of the paths arriving at the same state and only the path with the lowest metric is selected. The other paths are discarded.
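This add-compare-select recursion can be sketched for a generic trellis (a minimal illustration of ours, not the DVB-RCS receiver code; the trellis is passed in as transition tables, coded bits 0/1 are mapped to ±1 symbols, and the traceback runs over the whole sequence instead of a truncated path memory):

```python
def viterbi(received, next_state, output, n_states):
    """Minimal Viterbi decoder: next_state[s][b] is the successor of
    state s on input bit b, output[s][b] the coded bits emitted on that
    transition, and `received` a list of per-step soft symbol lists.
    The decoder starts from the all-zero state."""
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)
    history = []
    for r in received:
        new = [INF] * n_states
        back = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                ns = next_state[s][b]
                # branch metric: squared Euclidean distance, eqn. (3.2),
                # with coded bits 0/1 mapped to -1/+1 symbols
                d = sum((rk - (2 * c - 1)) ** 2
                        for rk, c in zip(r, output[s][b]))
                if metric[s] + d < new[ns]:
                    new[ns] = metric[s] + d
                    back[ns] = (s, b)
        history.append(back)
        metric = new
    # traceback from the state with the minimum accumulated metric
    s = min(range(n_states), key=lambda i: metric[i])
    decoded = []
    for back in reversed(history):
        s, b = back[s]
        decoded.append(b)
    return decoded[::-1]
```

On a noiseless channel the decoder returns exactly the transmitted bits; with noise, the survivor with the minimum accumulated metric is selected, as described above.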
[Figure 3.6: Example of the Viterbi algorithm on a 4-state trellis over five steps (k = 0 to k = 5): the first trellis shows all branch metrics, the following trellises the survivor paths selected at each step, and the final survivor path has accumulated metric λ = 3.]
An example of the steps of the Viterbi algorithm is shown in Figure 3.6. The first trellis presents all the branch metrics λ_viterbi(r, c^i). The following trellises show the selection of the survivor paths at each step of the decoding process. Finally, at step k = 5, the only survivor path is the one with accumulated metric λ = 3.
Instead of waiting until the end of the received sequence, the decision about the transmitted bits is forced after a delay, called the path memory length, long enough (usually a delay of 5·K for rate-1/2 codes, K being the constraint length of the code) to assume that the surviving paths are reliable enough. The selected path is the one with the minimum accumulated metric.
Block codes like the RS code are characterized by the minimum distance of the code, d_min, which is the minimum Hamming distance between any two different code words. The RS (204,188) decoder takes the 204-byte packet at its input and finds the code word that is closest to it in the Hamming distance sense. The RS (204,188) has minimum distance d_min = 17 bytes. If d_min − 1 or fewer byte errors occur, the input packet has been converted into a non-codeword packet and the error is detected. But detection is not the same as correction. Correction is only possible if the received 204-byte packet is closer to the correct code word than to any other code word. Thus, the correction capability of the RS (204,188) is T = (d_min − 1)/2 = 8 bytes.
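These bounds follow directly from d_min; as a quick numerical check (our illustration, using the fact that Reed-Solomon codes attain d_min = n − k + 1):

```python
n, k = 204, 188
d_min = n - k + 1             # RS codes are MDS: d_min = n - k + 1 = 17 bytes
detect = d_min - 1            # up to 16 byte errors are always detected
correct = (d_min - 1) // 2    # T = 8 byte errors can be corrected
print(d_min, detect, correct)  # 17 16 8
```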
3.4 Error statistics of the convolutional decoder and comparison with mathematical models
The French space agency (C.N.E.S.) has carried out Monte Carlo simulations of the forward link of the DVB-RCS system. In the receiver chain, the demodulator supplied the Viterbi decoder with soft decisions. The Viterbi decoding and the RS decoding gave hard-decision outputs. The simulations of this communication system have been carried out for different values of Eb/No and different coding rates. The positions of the bit errors at the output of the Viterbi decoder have been stored in files. We have recovered these files to study the error statistics and extract the parameters of the models presented in chapter 2. First of all, we present the results of the transmission system without puncturing (code rate = 1/2).

Eb/No (dB)           | 1.9      | 2.8    | 3.4     | 3.85     | 4.1
BER                  | 10^-2    | 10^-3  | 2×10^-4 | 5×10^-5  | 2×10^-5
nb of processed bits | 0.5×10^6 | 5×10^6 | 25×10^6 | 100×10^6 | 250×10^6
Table 3.2: Parameters of the transmission chain simulation for r = 1/2.

From these results, we have extracted the parameters of the models presented in chapter 2. The models in competition are:
- The geometric model, a probabilistic model based on eqns. (2.5), (2.6) and (2.8).

- The Gilbert model and the Gilbert-Elliot model, presented in section 2.4.2.2 and detailed in Annexe A.

- A Fritchman model composed of one error state and two error-free states, presented in section 2.4.2.3. See Annexe B for computation details.
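As an illustration of how the two-state HMM competitors generate error sequences, a Gilbert-Elliot simulator can be sketched as follows (the parameter values in the test are placeholders of ours, not the fitted ones; setting p_g = 0 recovers the original Gilbert model):

```python
import random

def gilbert_elliott(n_bits, g, b, p_g, p_b):
    """Two-state HMM error generator: stay in state G with probability g
    and in state B with probability b; emit an error with probability
    p_g in G and p_b in B (p_g = 0 gives the Gilbert model)."""
    state = "G"
    seq = []
    for _ in range(n_bits):
        p_err = p_g if state == "G" else p_b
        seq.append(1 if random.random() < p_err else 0)
        stay = g if state == "G" else b
        if random.random() >= stay:
            state = "B" if state == "G" else "G"
    return seq
```

With g and b close to 1, the chain stays in each state for long runs, producing the gap/burst alternation described in section 2.4.2.2.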
To compare the models, we have chosen to work at Eb/No = 3.4 dB, which corresponds approximately to a BER of 2×10^-4. This is the BER required at the output of the Viterbi decoder to allow a quasi-error-free transmission after the RS decoding in the forward link of the DVB-RCS system [19].
3.4.1 Bit Error Rate
We have run the mathematical models to obtain error sequences with approximately 5000 bit errors. Table 3.3 shows the results of the different models in terms of BER.

    | real chain | Geometric | Gilbert    | Gilbert-Elliot | Fritchman
BER | 1.9×10^-4  | 1.8×10^-4 | 1.75×10^-4 | 2.2×10^-4      | 2×10^-4
Table 3.3: BER results for r = 1/2, Eb/No = 3.4 dB.

We see that all the models are able to give a correct BER, since it is one of the parameters that must be introduced in the models.
3.4.2 The average error burst length
The expression of the average error burst length was given in section 2.4.1.1:

b̄ = (Cumulative length of all bursts) / (Total number of bursts)    (2.3)
To measure the length of an error burst, we first have to group errors into bursts. In chapter 2, two different definitions of an error burst were presented. We will now see the equivalence between these two criteria. The average error burst length of the transmission chain simulation results has been evaluated using, on the one hand, the BLC definition and, on the other hand, the AEDC definition of an error burst.
Figure 3.7 validates BLC = 6 as the correct value, since the average error burst length reaches a steady value there. To obtain the same average error burst length with the AEDC definition, the density threshold must be set to ∆0 = 0.4 (see Figure 3.8).

[Figure 3.7: Dependence of b̄ on the BLC value, for BER = 10^-2, 10^-3, 2×10^-4, 5×10^-5 and 2×10^-5. Code rate 1/2.]
[Figure 3.8: Dependence of b̄ on the density threshold of the AEDC, for BER = 10^-2 down to 2×10^-5. Code rate 1/2.]
We observe that when the BER becomes lower, the average error burst length decreases. At very low BER, the average burst length is given by the length of the minimum-distance path of the trellis.
3.4.3 Gap length distribution
Figure 3.9 shows the cumulative distribution of the gap length at the output of the Viterbi decoder. The results from the transmission chain simulation are given by the line with the cross marker. We have also added the results of the mathematical models. We observe that the geometric model and the Fritchman model with three states yield good results, but the Gilbert and Gilbert-Elliot models tend to produce gaps that are shorter than the gaps of the transmission chain simulation.

[Figure 3.9: Gap length distribution (cumulative relative frequency versus gap length) for the simulation data and the geometric, Gilbert, Gilbert-Elliot and Fritchman models. BER = 2×10^-4. Code rate 1/2.]

3.4.4 Error burst length distribution
We have analysed the distribution of the error burst length. The probability density function of the burst length is shown in Figure 3.10. All models are quite accurate when describing the probability of long error bursts, but they do not fit at all when describing the probability of short error bursts. All the models give very high probabilities to short bursts, following a geometric law. However, the simulation results of the transmission chain indicate that the burst of length 1 is not the most probable one: the most probable burst length is 5 bits. Even the Fritchman model with 3 states, which represents the gap distribution with accuracy, fails to describe the burst distribution.
[Figure 3.10 plot: probability density function (0–0.4) versus burst length (0–30) for the simulation data and the geometric, Gilbert, Gilbert-Elliot and Fritchman models.]

Figure 3.10: Burst length distribution. BER = 2x10^-4. Code rate 1/2.

3.4.5 Distribution of the average error density within a burst
The average error density for different burst lengths is shown in Figure 3.11. We see that the mathematical models agree with the results of the transmission chain simulation, except the Gilbert-Elliot model, which gives a higher average error density than the other models.
[Figure 3.11 plot: average error density (0.4–1.1) versus burst length (0–24) for the transmission chain simulation and the Gilbert, Gilbert-Elliot, Fritchman and geometric models.]

Figure 3.11: Average error density. BER = 2x10^-4. Code rate 1/2.

3.5 New analytical model for Viterbi decoding
In the previous section we have studied the error statistics of the transmission chain simulation and compared them with the results of the mathematical models. We can conclude that the geometric model and the Fritchman model are the closest to the simulation data for the gap length distribution and the average error density. However, they fail to describe the burst length distribution. In this section, we propose a new mathematical model that keeps the approach of the geometric model, i.e. the error bursts and gaps are defined using the BLC definitions and the average error density is given by eqn. (2.8). We make some refinements to the geometric model of the gap distribution and develop a new mathematical model for the burst distribution.
3.5.1 Refinements to the gap length modeling
We have seen that even though the geometric model is not appropriate for the burst distribution, it is suitable for the gap distribution. We adopt this solution for the gap modeling. Hence, the probability of having a gap of length $n$ is still:
$$P_G(n) = q\,(1-q)^{\,n-(K-1)}, \qquad n \ge K-1 \qquad (2.6)$$

where $q = 1/(\bar{g} - K + 2)$ and $\bar{g}$ is the average gap length.
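As a quick numerical sanity check, the sketch below evaluates this geometric gap model, assuming the form $P_G(n) = q(1-q)^{n-(K-1)}$ for $n \ge K-1$ with $q = 1/(\bar{g} - K + 2)$, on a truncated support and verifies that the pmf is normalized and reproduces the average gap length. The parameter values are illustrative, not taken from the simulations:

```python
import numpy as np

# Geometric gap model: P_G(n) = q (1-q)^(n-(K-1)), n >= K-1,
# with q = 1 / (gbar - K + 2). Illustrative values only.
K = 7            # constraint length of the (171,133) code
gbar = 120.0     # average gap length (in practice given by eqn (3.3))
q = 1.0 / (gbar - K + 2)

n = np.arange(K - 1, 20000)           # truncate the infinite support
P = q * (1 - q) ** (n - (K - 1))

print(P.sum())        # ~1: the pmf is normalized
print((n * P).sum())  # ~gbar: the model reproduces the average gap length
```

The mean works out because a geometric law started at $K-1$ has mean $(K-2) + 1/q$, which equals $\bar{g}$ with the choice of $q$ above.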
In the literature, $\bar{g}$ is computed empirically for each $E_b/N_0$ value. We have studied the evolution of $\bar{g}$ with $E_b/N_0$ (see Figure 3.12). Given its shape, we propose to model it by means of the exponential law given by eqn. (3.3).

$$\bar{g} = A\,e^{B\,E_b/N_0} \qquad (3.3)$$
The A and B parameters have been computed for the convolutional code (171,133) with K = 7, using a least-squares fit to the results of the transmission chain simulation. They are given in Table 3.4. The fitting of the A and B parameters for other convolutional codes is left for future work.
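The fit itself can be done by linearizing the exponential law: taking logarithms gives $\ln \bar{g} = \ln A + B \cdot (E_b/N_0)$, an ordinary linear least-squares problem. The sketch below illustrates this on noiseless synthetic data generated from the Table 3.4 values; the real fit would use the simulated $\bar{g}$ measurements instead:

```python
import numpy as np

# Fit gbar = A * exp(B * x) by linear least squares on ln(gbar).
x = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0])   # Eb/No values
gbar = 0.134 * np.exp(5.523 * x)               # noiseless toy data

B_fit, lnA_fit = np.polyfit(x, np.log(gbar), 1)  # slope, intercept
A_fit = np.exp(lnA_fit)
print(A_fit, B_fit)  # recovers A ~ 0.134, B ~ 5.523 on this toy data
```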
[Figure 3.12 plot: average gap length (0–2x10^5) versus Eb/No (1.5–4.5 dB) for the simulation data and the exponential model.]

Figure 3.12: Evolution of average gap length with Eb/No.

Code rate   A       B
1/2         0.134   5.523

Table 3.4: Values of average gap length constants for 1/2 coding rate.
3.5.2 Error burst modeling by the extended Union-Bhattacharyya bound
In order to find a better model for the bursts, we propose to analyse the error performance of the Viterbi algorithm on an AWGN channel with soft-decision decoding [41]. In deriving the probability of error, the linearity property of convolutional codes can be employed: it is sufficient to analyse the case where the all-zero sequence is transmitted and study the probability of deciding in favor of another sequence. The Viterbi algorithm has already been explained in section 3.3. The Viterbi metric for the i-th path consisting of L branches of the trellis is given by eqn. (3.1). Suppose that there is a path i consisting of L branches that merges with the all-zero path at a given step of the trellis. This path differs from the all-zero path in d bits, i.e., there are d 1s in path i and the rest are 0s. The algorithm will choose the incorrect sequence, and thus produce an error burst, if the accumulated metric of path i is lower than the accumulated metric of the all-zero path:
$$\mathrm{PEP}(d) = P\left( \sum_{k=1}^{L} r_k \left( c_k^i - c_k^0 \right) > 0 \right) \qquad (3.4)$$
PEP stands for Pairwise Error Probability, $r_k$ are the received samples at step k of the trellis, $c_k^i$ and $c_k^0$ are the coded bits of path i and of the all-zero path respectively, and L is the number of trellis sections. Since the coded bits in the two paths are identical except in the d positions, the sum must be taken over the set of d received bits in which the two paths differ. The $r_k$ are independent and identically distributed Gaussian random variables with mean $\sqrt{E_c}$ and variance $N_0/2$. Consequently, the PEP can be written as:
$$\mathrm{PEP}(d) = Q\left(\sqrt{\frac{2 E_c d}{N_0}}\right) = Q\left(\sqrt{\frac{2 E_b r d}{N_0}}\right) \qquad (3.5)$$
where r is the code rate and $E_b/N_0$ is the energy-per-useful-bit to noise ratio. Until this point, we have considered only one path of distance d and length L that merges with the all-zero path. But there are several paths of length L, with different distances, that can merge with the all-zero path at the given step. Summing the PEP over all possible path distances, we obtain an upper bound on the probability that an error burst of length L takes place at a given step of the trellis:
$$P(\text{error burst of length } L) \le \sum_{d \ge d_{free}} a(d, L)\,\mathrm{PEP}(d) \qquad (3.6)$$

where $d_{free}$ is the minimum distance of the code, $a(d, L)$ represents the number of incorrect paths of length L with distance d, and L is the number of branches of the paths. This bound is called a union bound for obvious reasons. Equality would occur only if the events that result in the error probabilities PEP were disjoint.
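Numerically, the union bound is a weighted sum of Q-function terms, $\sum_d a(d,L)\,Q(\sqrt{2 E_b r d / N_0})$. The sketch below evaluates it for a hypothetical partial spectrum $a(d,L)$; the true spectrum must be enumerated from the code's trellis and is not reproduced here:

```python
import math

def qfunc(x):
    """Gaussian tail Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def burst_union_bound(a_dL, r, ebno_linear):
    """Union bound of the form sum_d a(d,L) * Q(sqrt(2 r d Eb/No)).

    a_dL maps a distance d to the number of incorrect length-L paths
    a(d, L); r is the code rate; ebno_linear is Eb/No in linear scale.
    """
    return sum(mult * qfunc(math.sqrt(2.0 * r * d * ebno_linear))
               for d, mult in a_dL.items())

# Hypothetical partial spectrum, for illustration only:
a_dL = {10: 11, 12: 38}
print(burst_union_bound(a_dL, r=0.5, ebno_linear=10 ** (3.0 / 10)))
```

As expected, the bound decreases as $E_b/N_0$ grows, since every Q-function term shrinks.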
Next, we introduce some modifications in the PEP expression to simplify the calculation of eqn. (3.6). Upper-bounding the Q function by an exponential, (3.5) can be written as:

$$\mathrm{PEP}(d) \le e^{-E_b r d / N_0} \qquad (3.7)$$
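The exponential upper bound $Q(\sqrt{2x}) \le e^{-x}$, valid for $x \ge 0$, can be verified numerically:

```python
import math

def qfunc(x):
    """Gaussian tail Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Check Q(sqrt(2x)) <= exp(-x), the bound behind eqn (3.7),
# at a few sample points:
for x in (0.5, 1.0, 2.0, 5.0):
    exact = qfunc(math.sqrt(2.0 * x))
    bound = math.exp(-x)
    print(x, exact, bound, exact <= bound)
```

The bound is loose by a constant factor but, crucially for what follows, it factors over the per-branch distances.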
This expression is the Bhattacharyya bound [61] for convolutional codes. Let us decompose the accumulated Hamming distance of path i into the Hamming distances of its branches: $d = d_0 + d_1 + \cdots + d_{L-1}$.
Using the properties of the exponential:

$$\mathrm{PEP}(d) \le e^{-\frac{E_b r}{N_0} \sum_{k=0}^{L-1} d_k} = \prod_{k=0}^{L-1} e^{-E_b r d_k / N_0} \qquad (3.8)$$
Thus, combining (3.8) and (3.6), we find:

$$P(\text{error burst of length } L) \le \sum_{d \ge d_{free}} a(d, L) \prod_{k=0}^{L-1} e^{-E_b r d_k / N_0} \qquad (3.9)$$
Thus, the probability of an error burst of length L is obtained by summing the PEP over the possible paths of length L. This upper bound can be computed using the transition matrix S [29]. $S(i, j)$ represents the transition from state i at decoding time k to state j at decoding time k+1, for $i, j = 0, 1, \ldots, 2^M - 1$, with $M = K - 1$ being the memory of the code. $S(i, j)$ is equal to $e^{-E_b r / N_0}$ raised to the Hamming distance of the trellis transition, with $S(i, j) = 0$ for impossible transitions.
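A transition matrix of this kind can be built directly from the generator polynomials. The sketch below does so for the small (7,5) K=3 rate-1/2 code rather than the (171,133) code, to keep the matrix readable; the shift-register bit ordering is one of several equivalent conventions, so the matrix is illustrative. Powers of S then accumulate the per-branch factors $D^{d_k}$ along all length-L paths:

```python
import numpy as np

def transition_matrix(g1, g2, K, D):
    """Trellis transition matrix S for a rate-1/2 feedforward code.

    S[i, j] = D ** (Hamming weight of the branch output) for the valid
    transition i -> j, and 0 otherwise, with D = exp(-Eb * r / No).
    g1, g2 are generator polynomials as integers; the memory is M = K - 1,
    giving 2**M states.
    """
    M = K - 1
    S = np.zeros((2 ** M, 2 ** M))
    for state in range(2 ** M):
        for bit in (0, 1):
            reg = (bit << M) | state   # shift-register contents
            out = (bin(reg & g1).count("1") & 1) + (bin(reg & g2).count("1") & 1)
            nxt = reg >> 1             # next state (oldest bit shifted out)
            S[state, nxt] = D ** out
    return S

# Toy example: the (7,5) K=3 code at Eb/No = 2 dB, rate 1/2.
D = np.exp(-0.5 * 10 ** (2.0 / 10))
S = transition_matrix(0b111, 0b101, 3, D)
print(S)
```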