UNIVERSITY OF BUEA

FACULTY OF ENGINEERING AND TECHNOLOGY

DEPARTMENT OF ELECTRICAL AND ELECTRONICS ENGINEERING

BURST ERROR CORRECTION USING NONLINEAR CONVOLUTIONAL CODES IN WIRELESS COMMUNICATION CHANNELS

By

Bisong Emmanuel Eseme B. Eng.

A Thesis Submitted to the Faculty of Engineering and Technology of the University of Buea in Partial Fulfilment of the Requirements for the Award of the Master of Engineering (M.Eng) Degree in Telecommunications and Networks

October, 2017


DEDICATION

This thesis is dedicated to my parents: Mr. Bisong Charles Bisong and Mrs. Bisong Helen Manyi.



ACKNOWLEDGMENT

I am deeply grateful for the efforts of my supervisor, Dr. Sone M. Ekonde, who was always attentive to the challenges and problems I faced during this thesis. His efforts have contributed enormously to this academic work; I simply say thank you. I am grateful to the Dean of the Faculty of Engineering and Technology, Prof. Emmanuel Tanyi, for his great academic insights in research. I thank the HOD of Electrical and Electronic Engineering, Dr. Tsafack Pierre, for his contributions during the research progress follow-up seminars. I immensely thank Dr. Ngwashi Divine, the Post Graduate Coordinator, for his endless advice on pertinent research issues. I am grateful to all my lecturers, whose names I may not have mentioned, for their efforts in one way or another towards realizing this academic piece of work. To all my friends, I say thank you for the support. I am thankful to my parents for their fervent support throughout my studies, especially during my research period. Finally, I am grateful to God Almighty for his unmerited favour and the privilege of reaching this level in my academics.


ABSTRACT

Classical security for wireless transmission channels is implemented at the transport and data link layers. In telecommunication systems, an increase in security adversely affects throughput, either through the avalanche effect in symmetric encryption schemes or through heavy modular exponentiation in public-key encryption schemes. These adverse effects could be circumvented if the classical security schemes implemented at the upper layers were associated with data transmission at the physical layer. Recently, convolutional coding, an efficient forward error correction scheme, has been associated with classical cryptographic algorithms such as RSA public-key cryptography to enhance security and throughput. In that research, a non-linear convolutional coding scheme was used to implement RSA cryptography with small key lengths, minimizing modular exponentiation and thereby increasing throughput. In addition, it was shown that small key lengths of the order of 256 bits and 512 bits can attain security levels comparable to the traditional 1024 bits. That research did not, however, explore the error correction capability of the non-linear convolutional coding scheme. This thesis presents a scheme of non-linear convolutional codes for the correction of burst errors in a wireless communication channel. Interleaving and deinterleaving make very long burst errors appear as random errors within the channel, thereby increasing the performance of the convolutional code. The performance of the proposed non-linear convolutional burst-error-correcting scheme is tested with different burst lengths and constraint lengths to determine how these factors affect performance within the wireless channel. This thesis shows that the longer the burst length, the greater the strength of the code, and the longer the constraint length, the greater the ability of the code to correct errors.


TABLE OF CONTENTS

DEDICATION .......... ii
ACKNOWLEDGMENT .......... iv
ABSTRACT .......... v
TABLE OF CONTENTS .......... vi
LIST OF FIGURES .......... ix
LIST OF TABLES .......... xii
LIST OF ABBREVIATIONS AND SYMBOLS .......... xiii
1.1 BACKGROUND .......... 1
1.2 DIGITAL COMMUNICATION SYSTEM .......... 2
1.3 TYPES OF ERRORS .......... 3
1.3.1 RANDOM ERRORS .......... 3
1.3.2 BURST ERRORS .......... 3
1.4 ERROR CONTROL TECHNIQUE .......... 4
1.5 TYPES OF CODES .......... 4
1.5.1 BLOCK CODES .......... 5
1.5.2 CONVOLUTIONAL CODES .......... 5
1.5.3 CONCATENATED CODES .......... 6
1.6 OBJECTIVE OF THE THESIS .......... 6
1.7 PROBLEM STATEMENT .......... 7
1.8 SCOPE OF THE THESIS .......... 7
1.9 METHODOLOGY .......... 8
1.10 THESIS OVERVIEW .......... 8
CHAPTER TWO .......... 10
LITERATURE REVIEW .......... 10
2.1 INTRODUCTION .......... 10
2.1.1 BURST-ERROR CORRECTION .......... 10
2.1.2 KNOWN CODES AND CODING TECHNIQUES FOR CORRECTING BURSTS .......... 12
2.1.2.1 Fire Codes .......... 12
2.1.2.2 Binary RS Codes .......... 15
2.1.2.3 Interleaving Technique .......... 16
2.1.2.4 Concatenated Coding Scheme .......... 19
2.1.2.5 Cascaded Coding Scheme: Product Code .......... 21
2.2 CODING AND DECODING WITH CONVOLUTIONAL CODES .......... 22
2.2.1 CODE PARAMETERS AND THE STRUCTURE OF THE CONVOLUTIONAL CODE .......... 26
2.2.2 STATE OF THE CODE .......... 26
2.2.3 PUNCTURED CODES .......... 27
2.2.4 SYSTEMATIC VERSUS NON-SYSTEMATIC CONVOLUTIONAL CODE .......... 30
2.3 CODING AN INCOMING SEQUENCE .......... 30
2.3.1 THE ENCODER DESIGN .......... 34
2.3.1.1 The State Diagram .......... 35
2.3.1.2 The Tree Diagram .......... 37
2.3.1.3 The Trellis Diagram .......... 37
2.3.1.4 Trellis Truncation and Termination .......... 38
2.3.1.5 Interleaved Convolutional Code .......... 39
2.3.1.6 Trellis Coded Modulation (TCM) .......... 42
2.4 THE DECODING ALGORITHMS AND/OR THEOREMS .......... 44
2.4.1 CHOICE OF DECODING ALGORITHM .......... 44
2.4.2 DECODING OF CONVOLUTIONAL CODES USING VITERBI ALGORITHM .......... 45
2.4.3 BRANCH METRIC AND PATH METRIC CALCULATION .......... 45
2.4.4 TRACE BACK .......... 46
2.5 DECODING OF CONVOLUTIONAL CODES USING VITERBI ALGORITHM .......... 46
2.5.1 HARD DECISION DECODING OF CONVOLUTIONAL CODE USING VITERBI ALGORITHM .......... 47
2.5.2 SOFT DECISION DECODING OF CONVOLUTIONAL CODE USING VITERBI ALGORITHM .......... 53
2.6 DECODING BURST ERROR USING AN INTERLEAVED SYSTEM .......... 55
CHAPTER THREE .......... 57
ENCODING CONVOLUTIONAL CODES .......... 57
3.1 THE (2, 1, 3) CONVOLUTIONAL CODE ENCODER .......... 57
3.2 THE (2, 1, 3) CONVOLUTIONAL CODE STATE DIAGRAM .......... 59
3.3 THE (2, 1, 3) CONVOLUTIONAL CODE TRELLIS DIAGRAM .......... 62
3.4 THE (2, 1, 3) CONVOLUTIONAL INTERLEAVED CODE .......... 63
CHAPTER 4 .......... 68
DECODING CONVOLUTIONAL CODES .......... 68
4.1 VITERBI ALGORITHM FOR (2, 1, 3) CONVOLUTIONAL CODE .......... 68
4.2 DEINTERLEAVING FOR (2, 1, 3) CONVOLUTIONAL CODE .......... 71
4.3 NON-LINEAR CONVOLUTIONAL CODES .......... 74
4.3.1 LINEAR DYNAMIC CONVOLUTIONAL TRANSDUCER .......... 75
CHAPTER 5 .......... 82
RESULTS AND DISCUSSIONS .......... 82
5.1 QUANTIFICATION OF THROUGHPUT .......... 82
5.2 THROUGHPUT ANALYSIS .......... 83
5.3 DISCUSSION AND PROPOSAL FOR FUTURE WORK .......... 87
REFERENCES .......... 89
APPENDIX .......... 93


LIST OF FIGURES

Figure 1.1: Digital Communication System .......... 2
Figure 1.2: Shift Registers for Convolutional Encoding .......... 5
Figure 1.3: Block diagram of serially concatenated codes .......... 6
Figure 1.4: Methodology Flow Diagram .......... 8
Figure 2.1: Syndrome register .......... 14
Figure 2.2: l high-order parity-bit positions .......... 15
Figure 2.3: Error-trapping decoder .......... 15
Figure 2.4: An interleaved array .......... 16
Figure 2.5: A burst of length λl .......... 18
Figure 2.5b: Structure of a convolutional interleaver .......... 19
Figure 2.6: Concatenated coding .......... 20
Figure 2.7: Code array for the product code C1 × C2 .......... 22
Figure 2.8: Block diagram of an (n, k, K) convolutional encoder .......... 23
Figure 2.9: Block diagram of shift registers with generator sequences .......... 24
Figure 2.10: A (2, 1, 3) binary systematic feedforward convolutional encoder .......... 25
Figure 2.11: The structure of the (2, 1, 4) code .......... 27
Figure 2.12: Two (2, 1, 3) convolutional codes produce 4 output bits; bit number 3 is "punctured", so the combination is effectively a (3, 2, 3) code .......... 28
Figure 2.13: A (4, 3, 3) convolutional code with 9 memory registers, 3 input bits and 4 output bits; the shaded registers contain "old" bits representing the current state .......... 29
Figure 2.14: A (4, 3, 3) convolutional code with 9 memory registers, 3 input bits and 4 output bits; the shaded registers contain "old" bits representing the current state .......... 30
Figure 2.15: A sequence consisting of a single 1 bit as it goes through the encoder; the single bit produces 8 bits of output .......... 31
Figure 2.16: Encoding sequence for verification for the (2,1,4) code .......... 33
Figure 2.17: The state diagram for the (2,1,4) code .......... 36
Figure 2.18: Trellis diagram of the (2, 1, 2) convolutional code .......... 38
Figure 2.19: Trellis truncation .......... 39
Figure 2.20: An (n, k, K) convolutional coding system with interleaving degree λ .......... 40
Figure 2.21: Interleaving techniques .......... 40
Figure 2.22: Deinterleaving techniques .......... 41
Figure 2.23: A (2, 1, 1) convolutional encoding system .......... 42
Figure 2.24: A (2, 1, 1) convolutional coding system with interleaving degree λ = 9 .......... 42
Figure 2.25: Block diagram showing the process of the Viterbi algorithm .......... 46
Figure 2.26: A (2, 1, 2) convolutional encoder .......... 47
Figure 2.27: Trellis diagram of the (2,1,2) convolutional encoder .......... 48
Figure 2.28: First stage of the trellis diagram .......... 49
Figure 2.29: State representation of the trellis diagram .......... 50
Figure 2.30: Path metric calculation .......... 51
Figure 2.31: Final path metric calculation .......... 52
Figure 2.32: Trellis structure for hard decision decoding .......... 52
Figure 2.33: Trellis diagram for soft decision decoding .......... 54
Figure 2.34: A simple feedback decoding system for the (2, 1, 1) convolutional code .......... 56
Figure 2.35: Decoding burst error for the (2, 1, 1) convolutional code using λ = 9 .......... 56
Figure 3.1: The structure of the (2, 1, 3) code .......... 57
Figure 3.2: A sequence consisting of a 1 bit as it goes through the encoder; the single bit produces 8 bits of output .......... 58
Figure 3.3: Block diagram for a (2, 1, 3) convolutional encoder .......... 59
Figure 3.4: State diagram of the (2, 1, 3) convolutional code .......... 60
Figure 3.5: Encoding process for the (2,1,3) code for the message (10111101) .......... 61
Figure 3.6: Encoding the message (11011101) with the (2,1,3) code using the trellis diagram .......... 62
Figure 3.7: A (2, 1, 3) convolutional encoding system .......... 63
Figure 3.8: A (2, 1, 3) convolutional coding system with interleaving degree λ = 5 .......... 64
Figure 4.1: Viterbi decoder for the (2,1,3) code for the message (11011101) .......... 69
Figure 4.2: Viterbi decoding of the (2,1,3) convolutional code .......... 70
Figure 4.3: Viterbi decoding for the (2,1,3) code after tracing back .......... 71
Figure 4.4: Deinterleaving block diagram for the (2,1,3) convolutional code .......... 72
Figure 4.5: Deinterleaving block diagram for the (2, 1, 3) convolutional code with interleaving degree λ = 5 .......... 73
Figure 4.6: (2, 2, 2) convolutional encoder .......... 75
Figure 4.7: Initial structure of the cascade used to encrypt the first two input bits .......... 79
Figure 4.8: Structure of the cascade used to encrypt the second two input bits .......... 80
Figure 4.9: Structure of the cascade used to encrypt the third two input bits .......... 81
Figure 5.1: BER vs Eb/N0 in an AWGN channel .......... 84
Figure 5.2: Normalised throughput versus number of states .......... 85
Figure 5.3: Uncoded BER versus Eb/No for QPSK .......... 86
Figure 5.4: Burst length and memory registers for different key lengths .......... 87


LIST OF TABLES

Table 2.1: Output bits and the encoder bits through the (2,1,4) code, input bits: 101100 .......... 34
Table 2.2: Lookup table for the (2,1,4) code encoder .......... 35
Table 2.3: Comparing the encoder output and received message .......... 48
Table 3.1: Transition (state) table for the (2, 1, 3) convolutional code .......... 60
Table 3.2: Code sequence (output bits) and encoder bits for the message (10111101) using the (2,1,3) code .......... 62
Table 3.3: Convolutional interleaved code with interleaving degree of 5 .......... 65
Table 4.1: Summary of the (2,1,3) code encoded information .......... 68
Table 5.1: Bit-error probability, Pe, for trellis-coded 16-PSK modulation .......... 82


LIST OF ABBREVIATIONS AND SYMBOLS

AWGN    Additive White Gaussian Noise
B       Length of a burst error
BER     Bit Error Rate
BSC     Binary Symmetric Channel
CRC     Cyclic Redundancy Check
dmin    Minimum distance of a code
Eb      Energy per bit
EDAC    Error Detection and Correction
Es      Average signal energy
FEC     Forward Error Correction
g(x)    Generator polynomial of an (n, k) code
GF      Generating function
GF      Galois Field
k       Length of the message for an (n, k) code
K       Constraint length of an (n, k, K) convolutional code
mod     Modulo
n       Length of the codeword for an (n, k) code
NASA    National Aeronautics and Space Administration
N0      Noise power spectral density
Pe      Probability of error
PSK     Phase Shift Keying
QAM     Quadrature Amplitude Modulation
QPSK    Quadrature Phase Shift Keying
R       Code rate of an (n, k) code
RS      Reed-Solomon
SNR     Signal-to-noise ratio
TCM     Trellis Coded Modulation
U       Message sequence
V       Codeword generated by an (n, k) code
XOR     Exclusive OR
λ       Interleaving degree


CHAPTER 1
INTRODUCTION

1.1 Background

Power and bandwidth constraints in wireless communication channels are not the only parameters that govern the reliable and secure sending and receiving of data. In these channels, errors occur due to fading and need to be corrected; error correction codes are therefore used to improve the reliability and performance of digital communication systems. In telecommunications, security and throughput involve more than just the transport and data link layers, where classical security for wireless transmission channels is implemented. Coding techniques, which are principally used at the physical layer for error detection and correction, can equally be used to enhance security, and convolutional coding is a good candidate in this respect, ensuring both security and throughput in telecommunication systems. Convolutional codes alone can easily be broken; hence there is a need to couple convolutional codes with product ciphers to obtain a non-linear multi-stage transducer. Using error-correcting codes does not mean that all the errors present in the system can be corrected, and the appropriate type of coding depends on the nature of the channel used. It is shown in [1] that convolutional codes are suitable for wireless fading channels. Studies have also shown that in telecommunication systems the noise present in the channel is generally assumed to be Additive White Gaussian Noise (AWGN), which has a Gaussian distribution and leads to random errors. In real life, however, the channel noise is not always AWGN: the distribution of errors is more complex, and the error rate varies over time due to fading, which leads to burst errors [2].


1.2 Digital Communication System

In the flow diagram shown in Figure 1.1, the source and destination are physically separated points. As the signal travels through the communication channel, noise interferes with it and disturbs the message content, so the message delivered to the receiver may not match the original message. The transmission rate of the signal is thus adversely affected by the presence of noise and disturbances in the communication channel.

[Figure 1.1: Digital Communication System. Block diagram: Information Source, Source Encoder, Channel Encoder, Modulator, Channel (with additive Noise), Demodulator, Channel Decoder, Source Decoder, Destination.]

The message signal produced by the information source is fed into the source encoder. The resulting binary sequence is passed through the channel encoder, where extra bits, called redundant bits, are added to the input sequence to counter the errors introduced by the channel; the sequence of bits obtained after adding the redundant bits is called a codeword. The modulator converts the low-frequency components of the signal into high-frequency components. Although the carrier wave used in digital modulation is continuous, the output of the modulator is a random digital pulse train. The signal then passes through the channel, where unwanted noise and interference are added to the modulated signal. At the receiving end, the demodulator converts the received continuous signal back into a binary sequence, which the channel decoder then decodes. The decoder checks for both error types present in the received signal, random errors caused by AWGN and burst errors caused by fading, and tries to recover the original information using error-correcting codes such as convolutional codes. If no errors are found, the received signal is the same as the original signal. At the final stage, the source decoder converts the received information back into the form in which it was originally sent.

1.3 Types of Errors

There are two main types of error in telecommunication systems: random errors and burst errors.

1.3.1 Random Errors

These errors are isolated events: an error in one interval does not affect the performance of the system in the following intervals. They are uncorrelated and arise from noise in channels without memory, and are typical of transmission channels that require line-of-sight transmission [3]. A random-error-correcting code based on minimum distance coding can provide a strict guarantee on the number of detectable errors, but it may not protect against a preimage attack.

1.3.2 Burst Errors

These errors are generated by impulsive noise in the channel, produced for example by lightning and switching transients. In transmission systems, a signal is a string of data consisting of different bits, and a burst error appears when bits are shifted from their real positions during transmission. A channel with memory, such as a mobile telephony channel, is subject to burst errors due to interference and signal fading [4]. This fading is caused by prominent terrain contours (hills, forests, clumps of buildings, etc.) between the transmitter and receiver. The statistics of large-scale fading provide a way of computing an estimate of path loss as a function of distance. Small-scale fading, caused by the constructive or destructive superposition of multipath propagation signals, depends on the speed of the transmitter or receiver and on the bandwidth of the transmitted signal [5]. It is also known as multipath fading or Rayleigh fading, and can amount to 20-30 dB of variation over a fraction of a wavelength. When the multipath signals arrive at the receiver, they create constructive and destructive interference in space; as the receiver moves through space, it experiences the peaks and nulls of multipath fading, often losing the signal momentarily and producing burst errors.

An error pattern $e = (e_0, e_1, e_2, \ldots, e_{n-1})$ is said to be a burst of length $l$ if its nonzero components are confined to $l$ consecutive positions, say $e_j, e_{j+1}, \ldots, e_{j+l-1}$, the first and last of which are nonzero, i.e., $e_j = e_{j+l-1} = 1$. For example, the error pattern e = (0001001001000) is a burst of length 7 [6].
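As a quick illustration of this definition, the minimal Python sketch below (function name hypothetical) measures the burst length of an error pattern as the span from its first to its last nonzero component:

```python
def burst_length(e):
    """Burst length of error pattern e: the span from the first to the
    last nonzero component, inclusive (0 if e is error-free)."""
    nonzero = [i for i, bit in enumerate(e) if bit != 0]
    if not nonzero:
        return 0
    return nonzero[-1] - nonzero[0] + 1

# The error pattern from the text: nonzero components confined to
# 7 consecutive positions, hence a burst of length 7.
e = [0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
print(burst_length(e))  # -> 7
```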


1.4 Error Control Technique

In digital communication systems, error control techniques play an important role wherever reliable and efficient data transfer is required. By using various coding schemes, channel errors are reduced to an acceptable level to ensure the quality of data transmission [7]. For this reason, different types of coding schemes have been designed to detect and correct errors. All of these schemes share the common feature of adding extra redundancy to the original information during transmission, redundancy which is in turn removed by the decoder at the receiver end. Such schemes include Fire codes, RS codes, interleaving, product codes, and concatenated and cascaded codes. Two types of error control technique are used for reliable and efficient data transmission: Automatic Repeat Request (ARQ) [8] and Forward Error Correction (FEC) [9].

1.5 Types of Codes

Different FEC coding methods are used in digital communication systems. They are mainly classified into block codes and convolutional codes.

1.5.1 Block Codes

In block codes, the binary information sequence is divided into message blocks of fixed length. Each message block contains k information bits, so there are a total of $2^k$ distinct messages. The encoder transforms each message block into a codeword of length n (n > k); corresponding to the $2^k$ possible messages there are therefore $2^k$ codewords, and this set of $2^k$ codewords is called a block code [10]. An important parameter of a block code is its minimum distance, denoted dmin: the minimum Hamming distance between any two codewords. A block code with minimum distance dmin allows detection of all error patterns with dmin - 1 or fewer errors [11].
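To make dmin concrete, the sketch below enumerates the codewords of a small linear block code and takes the minimum weight over the nonzero codewords, which for a linear code equals dmin. The generator matrix used here, that of the systematic (7, 4) Hamming code, is an illustrative assumption, not a code used elsewhere in this thesis:

```python
from itertools import product

# Example generator matrix (an assumption for illustration): the
# systematic (7, 4) Hamming code, which has dmin = 3.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 0, 1],
]

def codewords(G):
    """All 2^k codewords of the linear block code generated by G."""
    k, n = len(G), len(G[0])
    for msg in product([0, 1], repeat=k):
        yield tuple(sum(m * g for m, g in zip(msg, col)) % 2
                    for col in zip(*G))

# For a linear code, dmin equals the minimum Hamming weight
# over all nonzero codewords.
dmin = min(sum(c) for c in codewords(G) if any(c))
print(dmin)  # -> 3
```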

1.5.2 Convolutional Codes

Convolutional codes are widely used for reliable data transfer in digital communication systems. They are generated before transmission by passing the information sequence through a linear finite-state shift register. Figure 1.2 shows the general structure of a convolutional encoder: the shift register contains K stages of k bits each, k bits at a time are shifted into the encoder, and n denotes the number of encoder output bits corresponding to the k input bits. The ratio k/n is the code rate, denoted R.

[Figure 1.2: Shift Registers for Convolutional Encoding. k input bits enter a register of K stages, each holding k bits; n modulo-2 adders produce the encoded sequence sent to the modulator.]


These coding methods are implemented so as to provide a low bit error rate under certain system constraints, such as power, bandwidth and channel complexity [12].

1.5.3 Concatenated Codes

Concatenated codes are formed by combining two codes linked to each other. They are suitable for designing long codes and provide a reliable coding method that reduces decoding complexity. A concatenated code is very useful for correcting a combination of random and burst errors, since it has two levels of coding: in general, the first level corrects the random errors but may not correct the burst errors, and the second level then corrects the remaining part of the burst errors present in the code [13].

[Figure 1.3: Block diagram of serially concatenated codes. An uncoded message passes through the outer code encoder (n2, k2) and the inner code encoder (n1, k1) into the channel; the inner code decoder followed by the outer code decoder produce the decoded message.]

1.6 Objective of the Thesis

The aim of this thesis is to design a new coding scheme using non-linear convolutional codes, which could be derived from standard techniques: Fire codes, RS codes, interleaving, product codes, and concatenated and cascaded codes. This coding scheme is intended for correcting burst errors, and should be efficient and suitable to implement in wireless transmission systems. To achieve this goal, the first important step of the approach developed in this thesis is the introduction of convolutional codes coupled with product ciphers. The individual performance of these codes is investigated and verified for the correction of random and burst errors. The techniques of interleaved convolutional coding and Trellis Coded Modulation (TCM) are used to detect and correct burst errors. Similarly, burst decoding of cyclic codes based on circulant parity-check matrices is also studied and verified. In the next stage, two codes are combined to form a new scheme of serially concatenated codes, which is then used to detect and correct both random and burst errors.

1.7 Problem Statement

Classical security for wireless transmission channels is at the transport and data link layers. In telecommunication systems, an increase in security adversely affects throughput, either through the avalanche effect in symmetric encryption schemes or through heavy modular exponentiation in public-key encryption schemes. There is therefore a need to develop adequate coding schemes which enhance security without considerably compromising throughput. Convolutional codes can handle security and throughput at the physical layer; and since convolutional codes alone can easily be broken, different stages of convolutional codes must be coupled with product ciphers to form a non-linear structure that is resilient to attacks and increases security. Recent research efforts show how security increases by using non-linear convolutional codes in wireless channels [12], [14], [15]. There is also ample material showing that a non-linear convolutional code in a wireless channel improves both security and throughput, by correcting errors and adding security: convolutional codes have memory, and product ciphers serve as very strong keys. There is thus a need for a secure coding scheme that enhances security and corrects errors.

1.8 Scope of the Thesis

This research is geared at investigating the error correction capability of a non-linear convolutional cryptosystem. The new scheme is compared with existing schemes; from this comparison, through simulations and/or graphing, a comparative improvement in security can be inferred.


1.9 Methodology

The research method of this thesis comprises an in-depth study of error-correcting codes, the encoding and decoding of convolutional codes, and the concatenation of convolutional codes to form a trellis, followed by simulation, graphing and comparison of results with existing solutions to show improvements.

[Figure 1.4: Methodology Flow Diagram. Standard techniques (Fire codes, RS codes, interleaving, product codes, concatenated and cascaded codes) lead to one or more suitable standard techniques for convolutional codes, then to the non-linear convolutional coding scheme, and finally to simulation and analysis of the non-linear convolutional coding scheme.]

1.10 Thesis Overview

This thesis gives an in-depth look at burst error correction using convolutional codes in wireless communication. The approach is first to lay out a general frame for convolutional codes in wireless communication, and then to focus on the security and throughput issues that this thesis handles. The thesis consists of five chapters, covered as follows. Chapter one introduces the wireless communication channel and convolutional codes in wireless communication; it presents the objective of the thesis, namely how to better correct burst errors while at the same time optimizing throughput, the problem statement the thesis addresses, the scope of the project, the methodology used, and this overview. Chapter two presents the literature review and related existing theory on convolutional codes, the trellis diagram, the Viterbi algorithm and Trellis Coded Modulation. Chapter three reports on the encoding of convolutional codes, a robust family of codes for error detection and correction, using the trellis diagram and the interleaving technique. Chapter four presents the decoding of convolutional codes using the Viterbi algorithm and deinterleaving. Chapter five discusses the results, the conclusion and the future scope of studies.


CHAPTER TWO
LITERATURE REVIEW

2.1 Introduction

In information theory and coding theory, with applications in computer science and telecommunication, error detection and correction (or error control) techniques enable reliable delivery of digital data over unreliable communication channels. Many such channels are subject to noise, and errors may thus be introduced during transmission from source to destination. Error detection techniques allow such errors to be detected, while error correction enables reconstruction of the original data in many cases. The general definitions of the terms are as follows: error detection is the detection of errors caused by noise or other impairments during transmission from the transmitter to the receiver; error correction is the detection of errors and reconstruction of the original, error-free data [8]. The integrity of received data is a critical consideration in the design of digital communication and storage systems. Many applications require the absolute validity of the received message, allowing no room for errors encountered during transmission and/or storage. Reliability considerations frequently require that Forward Error Correction (FEC) techniques be used where Error Detection and Correction (EDAC) strategies are required.

2.1.1 Burst-Error Correction

In burst-error channels, errors occur in clusters. An error pattern $e = (e_0, e_1, e_2, \ldots, e_{n-1})$ is said to be a burst of length $l$ if its non-zero components are confined to $l$ consecutive positions, say $e_j, e_{j+1}, \ldots, e_{j+l-1}$, the first and last of which are non-zero, i.e., $e_j = e_{j+l-1} = 1$. For example, the error pattern e = (0000101100100000) is a burst of length 7. A linear code which is capable of correcting all error bursts of length l or


less, but not all error bursts of length l + 1, is called an l-burst-error-correcting code; the code is said to have burst-error-correcting capability l. For an l-burst-error-correcting code, all error bursts of length l or less can be used as coset leaders of a standard array [8].

Reiger bound: the burst-error-correcting capability l of an (n, k) code is at most $\lfloor (n-k)/2 \rfloor$, i.e., $l \le (n-k)/2$. Codes that meet the Reiger bound are called optimal codes. A cyclic burst-error-correcting code can also correct bursts with one part at one end of the word and one part at the other end; these are called end-around bursts [16].

Theorem 2.1: if $b \le n/2$, a binary burst-b-error-correcting code has at most $2^{n-2b}$ codewords.

Proof: if $M > 2^{n-2b}$, then by the pigeon-hole principle there must be two distinct codewords which agree in their first n - 2b coordinates. These two codewords can be represented schematically as follows:

X = ******** AAAAAA
Y = ******** BBBBBB    (2.1)

where * denotes agreement and the A's and B's are arbitrary. But then the word

Z = ******** AAABBB    (2.2)

differs from both X and Y by a burst of length $\le b$, a contradiction [17].

Corollary (the Reiger bound): if $0 \le b \le n/2$, a binary (n, k) linear burst-error-correcting code must satisfy $r \ge 2b$, where r = n - k is the code's redundancy.

Proof: the number of codewords in an (n, k) binary linear code is $2^k$, which by Theorem 2.1 must be at most $2^{n-2b}$ [17]. This is equivalent to the statement of the corollary.

The following series of examples discusses burst-error-correcting codes. In this discussion, when we say that a particular bound (either the Abramson bound or the Reiger bound) is tight, we mean that there exists a code whose redundancy is equal to the value of the bound; if no such code exists, we say that the bound is loose.

Example 2.1: the (n, 1) binary repetition code with $g(x) = x^{n-1} + x^{n-2} + \cdots + x + 1$,

where n is odd, can correct all error patterns of weight $\le (n-1)/2$, and is a burst-$((n-1)/2)$-error-correcting code. Since r = n - 1, the Reiger bound is tight [16].

Example 2.2: the (n, n) code, consisting of all possible codewords of length n, is a cyclic code with g(x) = 1. It is (trivially) a b = 0 burst-error-correcting code, and since r = 0 too, the Reiger bound is again tight [16].

Example 2.3: any binary Hamming code, with $n = 2^m - 1$, r = m, and g(x) a primitive polynomial of degree m, is a b = 1 burst-error-correcting code (any error vector of weight 1 is ipso facto a burst of length 1). The strong Abramson bound is tight for all these codes [16].

2.1.2 Known Codes and Coding Techniques for Correcting Bursts

The following codes and techniques are known to correct burst errors: Fire codes, binary RS codes, the interleaving technique, product codes, and concatenated and cascaded codes [12].

2.1.2.1 Fire Codes

Fire codes are cyclic codes discovered by P. Fire in 1959. Let $p(X)$ be a binary irreducible polynomial of degree m, and let ρ be the smallest integer such that $p(X)$ divides $X^\rho + 1$; the integer ρ is called the period of $p(X)$ [8]. The construction is as follows:

- Let $l \le m$ be such that $2l - 1$ is not divisible by ρ.
- Let $n = \mathrm{LCM}(2l - 1, \rho)$.
- Define the polynomial $g(X) = (X^{2l-1} + 1)\,p(X)$.

Then $g(X)$ is a factor of $X^n + 1$, and has degree $2l - 1 + m$.


The cyclic code generated by $g(X) = (X^{2l-1} + 1)\,p(X)$ is a Fire code, which is capable of correcting any single burst of errors of length l or less (including end-around bursts). The code has the following parameters [17]: $n = \mathrm{LCM}(2l-1, \rho)$ and $n - k = 2l - 1 + m$.

Example: the polynomial $p(X) = 1 + X^2 + X^5$ is irreducible and has period ρ = 31.

Solution: let l = m = 5. Clearly ρ = 31 does not divide 2l - 1 = 9. Then

$g(x) = (x^9 + 1)(1 + x^2 + x^5) = 1 + x^2 + x^5 + x^9 + x^{11} + x^{14}$

generates a Fire code with parameters n = LCM(9, 31) = 279 and n - k = 2l - 1 + m = 9 + 5 = 14. This code is capable of correcting any error burst of length 5 or less [17].
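A minimal sketch of this construction, assuming $p(X)$, its degree m and its period ρ are already known, computes the Fire code parameters and reproduces the worked example (function name hypothetical):

```python
from math import gcd

def fire_code_params(l, m, rho):
    """n and n-k of the Fire code g(X) = (X^(2l-1) + 1) p(X), where
    p(X) is irreducible of degree m with period rho (assumed given)."""
    assert l <= m and (2 * l - 1) % rho != 0, "need l <= m and rho not dividing 2l-1"
    n = (2 * l - 1) * rho // gcd(2 * l - 1, rho)  # LCM(2l-1, rho)
    redundancy = 2 * l - 1 + m                    # n - k
    return n, redundancy

# Worked example from the text: p(X) = 1 + X^2 + X^5, m = 5, rho = 31, l = 5.
n, r = fire_code_params(l=5, m=5, rho=31)
print(n, r)  # -> 279 14
```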

Decoding of Burst-Error-Correcting Codes

Decoding consists of two basic steps: (1) error-pattern determination and (2) burst-location determination. Both steps can easily be achieved by error-trapping decoding, whose basic concept is to trap the error burst in a (syndrome) shift register by cyclically shifting the received vector r. Let $r(x)$ and $e(x)$ be the received and error polynomials respectively, and let $s(x) = s_0 + s_1 x + \cdots + s_{n-k-1} x^{n-k-1}$ be the syndrome of $r(x)$, i.e., the remainder obtained on dividing $r(x)$ by the generator polynomial $g(x)$. Recall that $s(x)$ is also the remainder of the error polynomial $e(x)$ divided by $g(x)$: $e(x) = a(x)\,g(x) + s(x)$. Suppose the errors in $e(x)$ are confined to the $l$ high-order parity-bit positions $x^{n-k-l}, x^{n-k-l+1}, \ldots, x^{n-k-1}$ (positions $x^0$ through $x^{n-k-1}$ form the n - k parity section of the word, and the remaining k positions carry the message). Then

$e(x) = e_{n-k-l}\,x^{n-k-l} + e_{n-k-l+1}\,x^{n-k-l+1} + \cdots + e_{n-k-1}\,x^{n-k-1}.$

Dividing $e(x)$ by the generator polynomial $g(x)$, we find that $e(x) = 0 \cdot g(x) + s(x)$, so $e(x) = s(x)$. The $l$ high-order syndrome bits $s_{n-k-l}, s_{n-k-l+1}, \ldots, s_{n-k-1}$ are then identical to the errors in $e(x)$, and the $n-k-l$ low-order syndrome bits are zero, i.e., $s_0 = s_1 = \cdots = s_{n-k-l-1} = 0$. Thus, when the received polynomial $r(x)$ has been completely shifted into the syndrome register, the error pattern is trapped in the $l$ high-order stages of the syndrome register and the $n-k-l$ low-order stages contain zeros.

Figure 2.1: Syndrome register

Suppose the errors in $e(x)$ are not confined to the $l$ high-order parity-bit positions, but are confined to $l$ consecutive positions (including the end-around case), as illustrated in Figure 2.2.


Figure 2.2: l high-order parity-bit positions

Then, after a certain number of cyclic shifts of $r(x)$, say $i$ shifts, the errors in $e(x)$ will be shifted into the $l$ high-order parity-bit positions of $r^{(i)}(x)$. At that instant the errors are trapped in the $l$ high-order stages of the syndrome register, and the other $n-k-l$ low-order stages of the syndrome register contain zeros. Knowing the number of shifts $i$ (stored in a counter), we can determine the location of the burst in $e(x)$; error correction is then done by adding the error pattern to $r(x)$ at the right location. A general error-trapping decoder is shown in Figure 2.3.

[Figure 2.3: Error-trapping decoder. The input r(X) enters a gated syndrome register; its n - k - l low-order stages feed a test-for-zeros block, its l high-order stages hold the trapped burst, and a buffer register holds the received word until the gated output applies the correction.]
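The sketch below mimics this procedure in software rather than hardware: it recomputes the syndrome of each cyclic shift by polynomial division instead of updating a feedback register, which is equivalent but far less efficient. The function names are hypothetical, and the demo uses the small cyclic code with $g(x) = 1 + x^2 + x^3 + x^4$ and burst-correcting capability l = 2 that appears later in the interleaving example:

```python
def poly_remainder(r, g):
    """Remainder of r(x) divided by g(x) over GF(2).
    Polynomials are lists of bits, lowest degree first."""
    r = r[:]
    for i in range(len(r) - len(g), -1, -1):
        if r[i + len(g) - 1]:           # leading coefficient to clear
            for j, gb in enumerate(g):
                r[i + j] ^= gb          # subtract g(x) * x^i (XOR in GF(2))
    return r[:len(g) - 1]

def trap_burst(received, g, l):
    """Error-trapping sketch: cyclically shift the received word until the
    n-k-l low-order syndrome bits are all zero; the l high-order syndrome
    bits then equal the burst, and the shift count locates it."""
    n = len(received)
    for i in range(n):
        shifted = received[i:] + received[:i]
        s = poly_remainder(shifted, g)
        if not any(s[:len(s) - l]):
            return i, s[len(s) - l:]    # shift count, trapped error pattern
    return None

g = [1, 0, 1, 1, 1]           # g(x) = 1 + x^2 + x^3 + x^4 of the (7,3) code
r = [0, 0, 1, 1, 0, 0, 0]     # all-zero codeword hit by a burst at positions 2-3
print(trap_burst(r, g, l=2))  # -> (0, [1, 1])
```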

2.1.2.2 Binary RS Codes

Consider a t-symbol-error-correcting RS code C of length $2^m - 1$ with symbols from GF($2^m$). The binary code derived from C by representing each code symbol by an m-bit byte has length $n = m(2^m - 1)$ and $n - k = 2mt$ parity bits. This binary RS code is capable of correcting any single burst of length $m(t-1) + 1$ or less, because such a burst can affect only t or fewer symbols of the original RS code C [13].

Example: consider the NASA-standard (255, 223) RS code over GF($2^8$), which is capable of correcting t = 16 symbol errors.

Solution: the binary code derived from this RS code has length n = 8 × 255 = 2040 and dimension k = 8 × 223 = 1784; hence it is a (2040, 1784) binary RS code. This code is capable of correcting any single burst of length l = 8 × (16 - 1) + 1 = 121 or less.
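The burst-correcting length of the binary image is simple arithmetic; a short check (function name hypothetical) reproduces the example:

```python
def rs_binary_burst_capability(m, t):
    """Single-burst-correcting length of the binary image of a
    t-error-correcting RS code over GF(2^m): l = m(t - 1) + 1."""
    return m * (t - 1) + 1

# NASA-standard (255, 223) RS code over GF(2^8): t = 16.
print(rs_binary_burst_capability(m=8, t=16))  # -> 121
```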

2.1.2.3 Interleaving Technique

Let C be an (n, k) linear code. Suppose we take λ codewords from C and arrange them as the λ rows of a λ × n array, as shown in Figure 2.4; this structure is called a block interleaver.

[Figure 2.4: An interleaved array of λ rows, each row an n-bit codeword of C.]

We then transmit this code array column by column, in a serial manner, obtaining a vector of λn digits. Note that two consecutive bits of the same codeword are now separated by λ - 1 positions. The above process simply interleaves λ codewords of C; the parameter λ is called the interleaving degree (or depth). There are $(2^k)^\lambda = 2^{k\lambda}$ such interleaved sequences, and they form a (λn, λk) linear code, called an


Interleaved Code, denoted C(λ). If the base code C is a cyclic code with generator polynomial g(x), then the interleaved code C(λ) is also cyclic; the generator polynomial of C(λ) is $g(x^\lambda)$ [18].

Example: suppose symbols are transmitted column by column from the 4 × 4 array

1  5   9  13
2  6  10  14
3  7  11  15
4  8  12  16

so that the transmitted order is 1, 5, 9, 13, 2, 6, 10, 14, and so on. A burst of three consecutive errors in the transmitted sequence (say, hitting symbols 1, 5 and 9) is written by columns into the 4 × 4 de-interleaver at the receiver:

1   2   3   4
5   6   7   8
9  10  11  12
13  14  15  16

The de-interleaved sequence in this case is (1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16), which confirms that the errors are now separated by four positions.

Error Correction Capability of an Interleaved Code

A pattern of errors is correctable for the whole array if and only if the pattern of errors in each row is a correctable pattern for the base code C. Suppose C is a single-error-correcting code. Then a burst of length λ or less, no matter where it starts, will affect no more than one digit in each row, and this single bit error in each row will be corrected by the base code C. Hence the interleaved code C(λ) is capable of correcting any error burst of length λ or less.

Decoding of Interleaved Code

At the receiving end, the received interleaved sequence is de-interleaved and rearranged back into a rectangular array of λ rows; each row is then decoded with the base code C. Suppose the base code C is capable of correcting any burst of length l or less, and consider any burst of length λl or less. No matter where this burst starts in the interleaved code sequence, it will result in a burst of length l or less in each row of the corresponding code array, as shown in Figure 2.5.
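A minimal block-interleaver sketch (function names hypothetical) reproduces this example, writing rows and transmitting column by column:

```python
def interleave(codewords):
    """Write lambda codewords as rows, transmit column by column."""
    return [row[i] for i in range(len(codewords[0])) for row in codewords]

def deinterleave(seq, lam):
    """Rebuild the lambda rows from a column-by-column transmission."""
    n = len(seq) // lam
    return [[seq[i * lam + r] for i in range(n)] for r in range(lam)]

rows = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
tx = interleave(rows)           # [1, 5, 9, 13, 2, 6, 10, 14, ...]
print(tx)
print(deinterleave(tx, lam=4))  # recovers the original rows
```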


Figure 2.5: A burst of length λl

As a result, the burst in each row will be corrected by the base code C. Hence the interleaved code C(λ) is capable of correcting any single error burst of length λl or less. Interleaving is a very effective technique for constructing long, powerful burst-error-correcting codes from good short codes; if the base code is an optimal burst-error-correcting code, the interleaved code is also optimal.

Example: consider the (7,3) cyclic code C generated by $g(x) = (x + 1)(x^3 + x + 1) = 1 + x^2 + x^3 + x^4$. This code is capable of correcting any burst of length l = 2 or less, and it is optimal since its burst-correcting efficiency is

$z = \frac{2l}{n-k} = \frac{2 \times 2}{7 - 3} = 1.$

Suppose we interleave this code to a depth λ = 10. The interleaved code C(10) is a (70,30) code which is capable of correcting any burst of length 20 or less. The burst-correcting efficiency of C(10) is

$z = \frac{2l}{n-k} = \frac{2 \times 20}{70 - 30} = 1,$

hence C(10) is also optimal. The generator polynomial of C(10) is $g(x^{10}) = 1 + x^{20} + x^{30} + x^{40}$.


Convolutional Interleaver

A convolutional interleaver consists of a set of shift registers, each with a fixed delay. In a typical convolutional interleaver the delays are nonnegative integer multiples of a fixed integer (although a general multiplexed interleaver allows arbitrary delay values). Each new symbol from the input signal feeds into the next shift register, and the oldest symbol in that register becomes part of the output signal. Figure 2.5b depicts the structure of a convolutional interleaver as a set of shift registers with delay values D(1), D(2), ..., D(N); each register is parameterized by its delay, measured in samples.

Figure 2.5b: Structure of a convolutional interleaver

A convolutional interleaver can be used in place of a block interleaver in much the same way, and convolutional interleavers are better matched for use with the class of convolutional codes.
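A software sketch of this structure, assuming the common special case of branch delays D(i) = i × d rather than arbitrary values (class and parameter names are hypothetical), could look as follows:

```python
from collections import deque

class ConvolutionalInterleaver:
    """Sketch of the shift-register structure in Figure 2.5b: branch i
    delays its symbols by i*d samples; branches are served round-robin."""
    def __init__(self, branches, d, fill=0):
        self.regs = [deque([fill] * (i * d)) for i in range(branches)]
        self.turn = 0
    def push(self, symbol):
        reg = self.regs[self.turn]
        self.turn = (self.turn + 1) % len(self.regs)
        if not reg:                  # branch 0 has zero delay
            return symbol
        reg.append(symbol)           # newest symbol in, oldest symbol out
        return reg.popleft()

ilv = ConvolutionalInterleaver(branches=3, d=1)
print([ilv.push(s) for s in range(12)])
# -> [0, 0, 0, 3, 1, 0, 6, 4, 2, 9, 7, 5]
```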

2.1.2.4 Concatenated Coding Scheme

Concatenation is a very effective method of constructing long, powerful codes from shorter codes. It was devised by Forney in 1965 and is often used to achieve high reliability with reduced decoding complexity. A simple concatenated code is formed from two codes: an (n1, k1) binary code C1 and an (n2, k2) non-binary code C2 with symbols from GF($2^{k_1}$), say an RS code. Concatenated codes are effective against a mixture of random errors and burst errors: scattered random errors are corrected by C1, while bursts may affect relatively few bytes, but probably so badly that C1 cannot correct them; these few bytes can then be corrected by C2 [13].

[Figure 2.6: Concatenated coding. The message passes through the outer code encoder (n2, k2) and the inner code encoder (n1, k1) into the channel; at the receiver, the inner code decoder is followed by the outer code decoder.]

Encoding of concatenated codes

Encoding consists of two stages, the outer code encoding and the inner code encoding, as shown in Figure 2.6. First, a message of k1k2 bits is divided into k2 bytes of k1 bits each, and each k1-bit byte is regarded as a symbol in GF($2^{k_1}$). This k2-byte message is encoded into an n2-byte codeword v in C2. Each k1-bit byte of v is then encoded into an n1-bit codeword w in C1, resulting in a string of n2 codewords of C1, a total of n1n2 bits. C1 is called the inner code and C2 the outer code. If the minimum distances of the inner and outer codes are d1 and d2 respectively, the minimum distance of their concatenation is at least d1d2 [19].

Decoding of concatenated codes

Decoding of a concatenated code also consists of two stages, the inner code decoding and the outer code decoding. First, decoding is done for each inner codeword as it arrives, and the parity bits are removed. After n2 inner codewords have been decoded, we obtain a sequence of n2 k1-bit bytes, which is then decoded based on the outer code C2 to give the k1k2 decoded message bits. The decoding implementation is the straightforward combination of the implementations for the inner and outer codes [19].

Error correction capability of concatenated codes

Concatenated codes are effective against a mixture of random errors and bursts. In general, the inner code is a random-error-correcting code and the outer code is an RS code: scattered random errors are corrected by the inner code, and bursts are then corrected by the outer code. Various forms of concatenated coding are used or proposed for error control in data communications, especially in space and satellite communications. In many applications, concatenated coding offers a way of obtaining the best of two worlds, performance and complexity [20].
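The overall parameters of a concatenation follow directly from the two component codes; the sketch below computes them, using illustrative component codes that are an assumption, not codes taken from this thesis:

```python
def concatenated_params(n1, k1, n2, k2, d1, d2):
    """Overall length, dimension (in bits) and minimum-distance lower
    bound of an inner (n1, k1) code concatenated with an outer (n2, k2)
    code over GF(2^k1)."""
    return n1 * n2, k1 * k2, d1 * d2

# Illustrative numbers only (an assumption): inner (7, 4) code with d1 = 3,
# outer (15, 11) RS code over GF(2^4) with d2 = 5.
print(concatenated_params(7, 4, 15, 11, 3, 5))  # -> (105, 44, 15)
```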

2.1.2.5 Cascaded Coding Scheme: Product Code

A simple generalization of concatenated coding is shown in Figure 2.7; the resulting two-dimensional code is called a product code. The outer code C2 is an (n2, k2) RS code with symbols from GF($2^m$), and the inner code C1 is an (n1, k1) binary linear code with k1 = λm, where λ is a positive integer. The outer code C2 is interleaved to a depth of λ. If the code C1 has minimum weight d1 and the code C2 has minimum weight d2, the minimum weight of the product code is exactly d1d2 [20].


Figure 2.7: Code array for the product code C1 × C2

Encoding of product codes

A message of k2 m-bit bytes (k2m bits) is first encoded into an n2-byte codeword in C2. This codeword is temporarily stored in a buffer as a row of an array, as shown in Figure 2.7. After λ outer codewords have been formed, the buffer stores a λ × n2 array. Each column of the array consists of λ m-bit bytes (λm bits) and is encoded into an n1-bit codeword in C1, which is transmitted serially. Note that the outer code is interleaved to a depth of λ and each inner codeword carries λ bytes of message bits [7].

2.2 Coding and Decoding with Convolutional Codes

An (n, k, K) convolutional code is represented by three parameters: n is the number of encoder output bits, k is the number of input bits shifted into the encoder at a time, and K is the constraint length, which is related to the number of memory elements of the encoder [8]. The coding rate of a convolutional code is the ratio of input bits to output bits, R = k/n; it determines the information content and the coding overhead. The encoder output depends on the input message and on the previous state of the encoder stored in its memory [13].

[Figure 2.8: Block diagram of an (n, k, K) convolutional encoder. The input bit u(n) is shifted through memory elements holding u(n-1) and u(n-2); two modulo-2 adders form the output bits v1 and v2, which are multiplexed into the codeword.]

In Figure 2.8 the input bits u(n) pass through the encoder, which computes the output bits using modulo-2 additions (equivalent to the XOR operation) of the input bit and the previous contents of the registers; the encoder contains two memory elements for manipulating the incoming bits. Finally, multiplexing the output bits v1 and v2 generates the codeword. The coding rate R and the constraint length K determine the performance of the convolutional code. The longer the constraint length, the more powerful the code and the greater the coding gain, at the price of a more complex decoder and longer decoding time; and the smaller the coding rate, the more powerful the code, thanks to the extra redundancy, though the less bandwidth-efficient it is compared with a larger coding rate [21]. A convolutional code can be represented by its generator sequences g(1), g(2), ..., g(n), also called the impulse responses of the encoder. As an example, consider the (2, 1, 3) convolutional code encoder shown in Figure 2.9. There is one generator polynomial for each of the n modulo-2 adders; each polynomial has degree at most K × k - 1 and describes the connections of the shift registers to the corresponding modulo-2 adder.

[Figure 2.9: Block diagram of shift registers with generator sequences. The input u feeds a shift register; taps $g_0^{(1)}, g_1^{(1)}, g_2^{(1)}, g_3^{(1)}$ form output v1, and taps $g_0^{(2)}, g_1^{(2)}, g_2^{(2)}, g_3^{(2)}$ form output v2.]

For the two modulo-2 adders, the generator polynomials $g_1(X)$ and $g_2(X)$ are given by

$g_1(X) = g_0^{(1)} + g_1^{(1)} X + g_2^{(1)} X^2 + g_3^{(1)} X^3.$

A coefficient of 1, as in $g_0^{(1)} = 1$, indicates that the corresponding shift-register stage is connected to the modulo-2 adder, while a coefficient of 0, as in $g_1^{(1)} = 0$, indicates that it is not. In this example $g_2^{(1)} = 1$ and $g_3^{(1)} = 1$, so

$g_1(X) = 1 + X^2 + X^3.$

Similarly,

$g_2(X) = g_0^{(2)} + g_1^{(2)} X + g_2^{(2)} X^2 + g_3^{(2)} X^3 = 1 + X + X^2 + X^3.$

The generator sequences obtained from these polynomials are g1 = (1, 0, 1, 1) and g2 = (1, 1, 1, 1), and the output sequence is given by V(X) = u(X) g1(X) multiplexed with u(X) g2(X).
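Since each output stream is the GF(2) convolution of the input with a generator sequence, a rate-1/n encoder can be sketched in a few lines. The function below (name hypothetical, zero-termination assumed) uses the g1 and g2 just derived:

```python
def conv_encode(u, generators):
    """Encode bit sequence u with a rate-1/n convolutional code given by
    generator sequences (convolution with u over GF(2), zero-terminated)."""
    mem = max(len(g) for g in generators) - 1
    u = list(u) + [0] * mem                       # flush the registers
    out = []
    for t in range(len(u)):
        for g in generators:                      # n output bits per input bit
            out.append(sum(g[j] * u[t - j] for j in range(len(g))
                           if t - j >= 0) % 2)
    return out

# Generator sequences derived above: g1 = (1, 0, 1, 1), i.e. 1 + X^2 + X^3,
# and g2 = (1, 1, 1, 1).
print(conv_encode([1, 0, 1], [(1, 0, 1, 1), (1, 1, 1, 1)]))
```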


Example: consider the (2, 1, 3) binary systematic feedforward convolutional encoder shown in Figure 2.10, with generator sequences g1 = (1, 0, 0, 0) and g2 = (1, 1, 0, 1).

[Figure 2.10: A (2, 1, 3) binary systematic feedforward convolutional encoder with input u and outputs v1 and v2.]

The polynomial form of the generators can be written as $g_1(x) = 1$ and $g_2(x) = 1 + x + x^3$. For an input sequence $u(x) = 1 + x + x^3$, the output information sequences are

$v_1(x) = u(x)\,g_1(x) = (1 + x + x^3)(1) = 1 + x + x^3$

and

$v_2(x) = u(x)\,g_2(x) = (1 + x + x^3)(1 + x + x^3) = 1 + x^2 + x^6.$

Hence the codeword is given by $V(x) = [1 + x + x^3,\; 1 + x^2 + x^6]$, i.e., v1 = (1 1 0 1 0 0 0) and v2 = (1 0 1 0 0 0 1); after multiplexing it becomes

v = (11, 10, 01, 10, 00, 00, 01).
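The polynomial products in this example are easy to verify mechanically; a short sketch (function name hypothetical) reproduces $v_2(x)$ and the multiplexed pairs:

```python
def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as bit lists, lowest degree first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

u  = [1, 1, 0, 1]            # u(x)  = 1 + x + x^3
g2 = [1, 1, 0, 1]            # g2(x) = 1 + x + x^3
print(poly_mul_gf2(u, g2))   # -> [1, 0, 1, 0, 0, 0, 1], i.e. 1 + x^2 + x^6

v1 = u + [0, 0, 0]           # v1(x) = u(x) * 1, padded to degree 6
v2 = poly_mul_gf2(u, g2)
print(list(zip(v1, v2)))     # multiplexed pairs (11, 10, 01, 10, 00, 00, 01)
```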


Convolutional codes are also described by the three parameters (n, k, m), where n is the number of output bits, k the number of input bits and m the number of memory registers. The quantity k/n is the code rate, a measure of the efficiency of the code, with typically 1 ≤ k ≤ 8, 1 ≤ n ≤ 8 and 1 ≤ m ≤ 10. The code rate ranges from 1/8 to 7/8, except for deep-space applications, where code rates as low as 1/100 or even lower have been employed. In the code parameters (n, k, l), l is the constraint length of the code, given by l = k(m - 1); l represents the number of bits in the encoder memory that affect the generation of the n output bits.

2.2.1 Code Parameters and the Structure of the Convolutional Code

Three different notations are used to represent convolutional codes; they all have the same interpretation. The first two notations, (n, k, K) and (n, k, l), are equivalent: n is the number of output bits of the encoder, k is the number of input bits of the encoder, and K and l both represent the constraint length, which is related to the number of memory elements of the encoder. In the third notation, (n, k, m), the only difference is m, the number of memory elements in the encoder. The procedure for obtaining the code structure from the parameters (n, k, m) is as follows:

1. First draw the m boxes (memory registers).
2. Draw the modulo-2 adders for the n outputs.
3. Connect the memory registers to the adders using the generator polynomials.

The polynomials give the code a unique error protection quality.

2.2.2 State of the Code

The combinations of bits in the shaded registers shown in figure 2.11 are called the states of the code, defined by:

Number of states = 2^l, where l is the constraint length of the code, given by l = k(m − 1). The state of the code indicates what is in the memory registers, and the order of the polynomials is given by the degree km. The (2, 1, 4) code in figure 2.11 has a constraint length of three; the shaded registers hold the state bits, while the unshaded register holds the incoming bit. This means that 3 bits, or 8 different combinations of these bits, can be present in the memory registers. These 8 combinations determine the outputs v1 and v2, the coded sequence.


Figure 2.11: The structure of the (2, 1, 4) code.
The output bits depend on the state, which changes at each time step. The (2, 1, 4) code in figure 2.11 outputs 2 bits for every input bit; it is a rate ½ code. Its constraint length is 3, and the total number of states is 8. The 8 states of this (2, 1, 4) code are: 000, 001, 010, 011, 100, 101, 110, 111 [17].

2.2.3 Punctured Codes

For the special case of k = 1, codes of rates 1/2, 1/3, 1/4, 1/5, 1/7 are sometimes called mother codes. We can combine these single-bit-input codes to produce punctured codes with rates other than 1/n. By using two rate ½ codes together as shown in figure 2.12 and simply not transmitting one of the output bits, we can convert this rate ½ implementation into a rate 2/3 code: 2 bits come in and 3 bits go out. This concept is called


puncturing. On the receiver side, dummy bits that do not affect the decoding metric are inserted in the appropriate places before decoding; a small sketch of this is given below.
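A minimal Python sketch of this puncturing and dummy-bit insertion is shown here, assuming the pattern [1, 1, 0, 1] (transmit bits 1, 2 and 4 of every four, drop bit 3) suggested by figure 2.12; the function names and the use of None as the dummy symbol are illustrative assumptions.

def puncture(bits, pattern):
    """Transmit only the positions where the repeating pattern is 1."""
    return [b for i, b in enumerate(bits) if pattern[i % len(pattern)]]

def depuncture(rx, pattern, erasure=None):
    """Re-insert dummy symbols (erasures) at the punctured positions.

    Assumes the pattern starts with a 1, so the stream ends cleanly."""
    out, it = [], iter(rx)
    while True:
        for p in pattern:
            if p:
                try:
                    out.append(next(it))
                except StopIteration:
                    return out
            else:
                out.append(erasure)   # dummy bit: ignored by the decoding metric

encoded = [1, 1, 0, 1, 1, 0, 1, 1]   # 4 output bits per 2 input bits
pattern = [1, 1, 0, 1]               # drop bit 3 of every 4 -> rate 2/3
tx = puncture(encoded, pattern)      # 6 of the 8 bits are transmitted
rx = depuncture(tx, pattern)         # dummies (None) restored for the decoder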

Figure 2.12: Two (2, 1, 3) convolutional codes produce 4 output bits; bit number 3 is "punctured", so the combination is effectively a (3, 2, 3) code.
This technique allows us to produce codes of many different rates using just one piece of hardware. Although we could also construct a rate 2/3 code directly, the advantage of a punctured code is that the rate can be changed dynamically (through software) depending on the channel conditions, such as rain. A fixed implementation, although easier, does not allow this flexibility.
Structure for k > 1
Alternatively, we can create codes where k is more than 1 bit, such as (4, 3, 3). This code takes in 3 bits and outputs 4 bits. The number of memory registers is 9, the constraint length is 3 × 2 = 6, the code has 64 states, and it requires polynomials of 9th order. The shaded boxes represent the current state. The procedure for drawing the structure of an (n, k, m) code where k is greater than 1 is as follows: first draw k sets of m boxes, then draw n adders, and finally connect the n adders to the memory registers using the coefficients of the nth (km)-degree polynomial. The result is a structure like the one in figure 2.13 for the (4, 3, 3) code.


Figure 2.13: This (4, 3, 3) convolutional code has 9 memory registers, 3 input bits and 4 output bits. The shaded registers contain "old" bits representing the current state.
As an example, consider the code structure for the (4, 3, 3) code. The analysis of the code is as follows:
(4, 3, 3) = (n, k, m)
n = 4 outputs, k = 3 inputs, m = 3 memory registers
Constraint length: l = k(m − 1) = 3(3 − 1) = 6
Polynomial degree: km = 3 × 3 = 9
Code rate: k/n = 3/4
It should be noted that the links between the mod-2 adders and the memory registers are set by the generator polynomials and are otherwise arbitrary.


Figure 2.14: This (4, 3, 3) convolutional code contains 9 memory registers, 3 input bits and 4 output bits. The shaded registers contain "old" bits representing the current state.

2.2.4 Systematic Versus Non-Systematic Convolutional Codes

A special form of convolutional code in which the output bits contain an easily recognizable sequence of the input bits is called the systematic form. Systematic codes are often preferred over non-systematic codes because they allow a quick look at the message bits, and they require less encoding hardware. Another important property of systematic codes is that they are non-catastrophic, which means that errors cannot propagate catastrophically. All these properties make them very desirable. Systematic codes are also used in Trellis Coded Modulation (TCM). The error protection properties of systematic codes, however, are the same as those of non-systematic codes.

2.3 Coding an Incoming Sequence
The output sequence v can be computed by convolving the input sequence with the impulse response g, i.e. v = u ∗ g, or in a more generic form

v_l^j = Σ_{i=0}^{m} u_{l−i} · g_i^j

where v_l^j is output bit l from encoder output j, u_{l−i} is the input bit, and g_i^j is the i-th term in polynomial j [22]. Let's encode the two-bit sequence 10 with the (2, 1, 4) code and see how the process works.


Figure 2.15: A sequence consisting of a single 1 bit as it goes through the encoder. The single bit produces 8 bits of output.
First we pass a single 1 bit through this encoder, as shown in figure 2.15.
a) At time t = 0, the initial state of the encoder is all zero (the bits in the three right-most register positions). The input bit 1 causes the two bits 11 to be output. How did we compute that? By a mod-2 sum of all the register bits for the first output bit, and a mod-2 sum of three bits, per the polynomial coefficients, for the second output bit.
b) At t = 1, the 1 bit moves forward one register. The input register is now filled with a flush bit of 0, and the encoder is in state 100. The output bits are again 11 by the same arithmetic.
c) The 1 bit moves forward again. Now the encoder state is 010, and another flush bit is moved into the input register. The output bits are now 10.
d) At t = 3, the 1 bit moves to the last register and the encoder state is 001. The output bits are 11, and the encoder is then flushed to the all-zero state, ready for the next sequence.


Note that a single bit has produced an 8-bit output although nominally the code rate is ½. This shows that for small sequences the overhead is much higher than the nominal rate, which only applies to long sequences. If we did the same thing with a 0 bit, we would get an 8-bit all-zero sequence. What we have just produced is the impulse response of this encoder: the 1 bit has a response of 11 11 10 11, and the 0 bit similarly has an impulse response of 00 00 00 00 (not shown, but this is obvious). Convolving the input sequence with the code polynomials produces these output sequences, which is why these codes are called convolutional codes. From the principle of linear superposition, we can now produce a coded sequence from the above two impulse responses. Say we have an input sequence of 1011 and want to know what the coded sequence would be. We can calculate the output by just adding shifted versions of the individual impulse responses:

Input bit | Shifted impulse response
1         | 11 11 10 11
0         |    00 00 00 00
1         |       11 11 10 11
1         |          11 11 10 11
Sum for 1011: 11 11 01 11 01 01 11

We obtained the response to the sequence 1011 by adding the shifted versions of the responses for 1 and 0; a small sketch of this superposition is given below.
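The following Python sketch reproduces this superposition for the (2, 1, 4) code, taking the impulse response 11 11 10 11 from the text; the function name is illustrative.

def encode_by_superposition(message, impulse, pair_len=2):
    """Encode by XOR-adding shifted copies of the encoder's impulse response."""
    n_pairs = len(message) + len(impulse) // pair_len - 1
    out = [0] * (n_pairs * pair_len)
    for t, bit in enumerate(message):
        if bit:   # a 0 bit contributes an all-zero response
            for i, h in enumerate(impulse):
                out[t * pair_len + i] ^= h
    return out

impulse_1 = [1, 1, 1, 1, 1, 0, 1, 1]   # response of a single 1 bit: 11 11 10 11
coded = encode_by_superposition([1, 0, 1, 1], impulse_1)
pairs = ["".join(map(str, coded[i:i + 2])) for i in range(0, len(coded), 2)]
print(pairs)   # ['11', '11', '01', '11', '01', '01', '11']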


In figure 2.16, we manually put the sequence 1011 through the encoder to verify the above and, sure enough, we get the same answer. This shows that the convolution model is correct.

Figure 2.16: Encoding sequence for verification for the (2, 1, 4) code.
The result of the above encoding at each time interval is shown in table 2.1.


Table 2.1: Output bits and encoder bits through the (2, 1, 4) code, input bits: 101100.

The encoded sequence is 11 11 01 11 01 01 11.
2.3.1 The Encoder Design
The encoder for a convolutional code can use a lookup table to do the encoding. The lookup table consists of four items [22]:
1. The input bit.
2. The state of the encoder, which is one of the 8 possible states for the example (2, 1, 4) code.
3. The output bits. For the (2, 1, 4) code, since 2 bits are output, the possibilities are 00, 01, 10 and 11.
4. The next state, which will be the current state for the next bit.
Table 2.2 shows the lookup table for the (2, 1, 4) code.


Table 2.2: Lookup table for the (2, 1, 4) code encoder (one row per state and input bit; a sketch that generates it is given below).
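Since the body of table 2.2 is not reproduced here, the following Python sketch generates it from the generator sequences (1, 1, 1, 1) and (1, 1, 0, 1) of figure 2.11; the register convention (newest bit entering on the left) is an assumption consistent with the worked example that follows.

G1 = (1, 1, 1, 1)   # first mod-2 adder taps over [input, s1, s2, s3]
G2 = (1, 1, 0, 1)   # second mod-2 adder taps

def step(state, bit):
    """One encoder step: returns (output pair, next state) for a 3-bit state."""
    regs = (bit,) + state                       # the new bit enters on the left
    v1 = sum(g & r for g, r in zip(G1, regs)) % 2
    v2 = sum(g & r for g, r in zip(G2, regs)) % 2
    return (v1, v2), (bit,) + state[:2]         # shift: drop the oldest bit

for s in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    for u in (0, 1):
        out, nxt = step(s, u)
        print(f"state={s} input={u} -> output={out} next={nxt}")

For instance, state 000 with input 1 yields output (1, 1) and next state (1, 0, 0), matching the state-diagram walkthrough later in this section.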

This lookup table uniquely describes the (2, 1, 4) code; it is different for each code, depending on the parameters and polynomials used. There are three ways in which an encoder can be represented graphically in order to gain an understanding of its operation:
1. The state diagram
2. The tree diagram
3. The trellis diagram

2.3.1.1 The State Diagram

The state diagram for the (2, 1, 4) code is shown in figure 2.17. Each circle represents a state; at any one time, the encoder resides in one of these states. The lines to and from each state show the transitions that are possible as bits arrive. Only two events can happen at each step: arrival of a 1 bit or arrival of a 0 bit, and each of these events causes the encoder to jump into a different state. The state diagram does not have time as a dimension, and hence it tends not to be intuitive [17].


Figure 2.17: The state diagram for the (2, 1, 4) code.
Comparing the state diagram to the encoder lookup table reveals that the state diagram contains the same information as the lookup table, but in graphic form. The solid lines indicate the arrival of a 0 and the dashed lines the arrival of a 1. The output bits for each case are shown on the line, and the arrow indicates the state transition. Some encoder states allow outputs of 11 and 00, and some allow 01 and 10; no state allows all four options. How do we encode the sequence 1011 using the state diagram?
1. Start at state 000. The arrival of a 1 bit outputs 11 and puts us in state 100.
2. The arrival of the next 0 bit outputs 11 and puts us in state 010.
3. The arrival of the next 1 bit outputs 01 and puts us in state 101.


4. The last bit 1 takes us to state 110 and outputs 11. So now we have 11 11 01 11. But this is not the end: we have to take the encoder back to the all-zero state.
5. From state 110 we go to state 011, outputting 01.
6. From state 011 we go to state 001, outputting 01, and then
7. to state 000 with a final output of 11.
The final answer is 11 11 01 11 01 01 11, the same answer as we got by adding up the individual impulse responses for the bits 1011000.

2.3.1.2 The Tree Diagram

The tree diagram attempts to show the passage of time as we go deeper into the tree branches. It is somewhat better than the state diagram, but still not the preferred approach for representing convolutional codes. Here, instead of jumping from one state to another, we go down branches of the tree depending on whether a 1 or a 0 is received [22].

2.3.1.3 The Trellis Diagram

A trellis diagram is an extended representation of the state diagram [18]: for each instant of time it shows all the possible states, and a unique path through the trellis represents the input bits and output bits. A trellis diagram consists of nodes, representing the states of the encoder, and branches, representing the state transitions. The initial node of the trellis diagram is the starting node. A combination of consecutive branches that connects the initial node to another node in the trellis is called a path, and the number of branches comprising a path is called the length of the path. For example, consider the trellis diagram of the (2, 1, 2) convolutional code in figure 2.18.


Figure 2.18: Trellis diagram of the (2, 1, 2) convolutional code.
In figure 2.18, the continuous lines indicate the change of state when the input is 0 and the dotted lines indicate the change of state when the input is 1. It is always assumed that the encoder is cleared, so that its initial state is (00). After the second stage, each node in the trellis has 2^k incoming and 2^k outgoing paths. For example, if the input message for the encoder is u = (1,0,1,1,0,0), from the trellis diagram the output of the encoder is v = (11,01,00,10,10,01). The path traversed by the transmitted bits and the corresponding state transitions are shown in broken and dotted lines, respectively.

2.3.1.4 Trellis Truncation and Termination

Trellis truncation is the simple method in which the encoder is reset to zero at the beginning of the bit-shifting operation; after the message bits are shifted in, no extra bits are shifted to return the encoder to its initial state. A typical example of trellis truncation is the trellis diagram shown in figure 2.19. Trellis termination is the more usual technique. Like


trellis truncation, the encoder is reset to zero at the beginning of each shifting operation. After all message bits are shifted into the encoder, additional zero bits are inserted until the encoder returns to the zero state; that is, the transmission of the message bits ends with the final state of the encoder set to zero.

Figure 2.19: Trellis truncation.
Figure 2.19 shows trellis truncation for the message u = (1,0,1,1,0,0) applied to the (2, 1, 2) convolutional code.

2.3.1.5 Interleaved Convolutional Codes

Burst errors are mostly found in storage and wireless communication systems, and they are corrected by an interleaving technique. Interleaved convolutional codes for correcting burst errors were first introduced by Hagelbarger [24]. The general idea behind interleaving is to spread a long burst of errors so that it appears as random errors; for some communication systems, a long burst error is not otherwise correctable. Simple interleaving of convolutional codes can be achieved using a degree of interleaving denoted by λ, where λ − 1 should be a multiple of n [10]. The encoded


bit streams are interleaved by applying delays before passing them through the channel. The output bit streams from the channel are then deinterleaved and decoded. A simple block diagram of an interleaved convolutional coding system and the corresponding deinterleaving used in correcting burst errors is shown in figure 2.20.


Figure 2.20: An (n, k, K) convolutional coding system with interleaving degree λ.
Let X1, X2, X3, …, X16 be the message encoded by the encoder. In figure 2.21 the number of horizontal lines gives the degree of interleaving; let the encoded message be interleaved using an interleaver with degree three and delay one. The process of interleaving is shown in figure 2.21. Initially, the registers of the interleaver are assumed to be cleared, i.e. they contain bit '0'. While interleaving, extra zeros are padded to the end of the message to clear all the registers. After interleaving, the output sequence becomes X1, 0, 0, X4, X2, 0, X7, X5, X3, X10, X8, X6, X13, X11, X9, X16, X14, X12, 0, 0, X15.

Figure 2.21: Interleaving Techniques.



Figure 2.22: Deinterleaving techniques.
Once the code is interleaved, it is passed through the channel, and the output sequence from the channel is deinterleaved. Deinterleaving is itself a kind of interleaving, which recovers the original information from the interleaved stream; the degree and delays used in deinterleaving are the same as those used in interleaving. For example, the bit stream obtained after passing through the channel is X1, 0, 0, X4, X2, 0, X7, X5, X3, X10, X8, X6, X13, X11, X9, X16, X14, X12, 0, 0, X15. The process of deinterleaving is described in figure 2.22. Again, as in interleaving, extra zero bits are padded to clear the registers and to obtain the complete output of the deinterleaver. The output sequence from the deinterleaver is 0, 0, 0, 0, 0, 0, X1, X2, X3, X4, X5, X6, X7, X8, X9, X10, X11, X12, X13, X14, X15, X16. After deinterleaving, the extra zeros that were padded during interleaving are discarded. A small sketch of this interleaver is given below.
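The following Python sketch is a minimal model of the degree-three, delay-one interleaver described above; representing each branch as a FIFO and trimming the trailing flush padding are illustrative choices, not from the original text.

from collections import deque
from math import ceil

def conv_interleave(symbols, degree=3, pad=0):
    """Convolutional interleaver: branch i delays its symbols by i rounds."""
    lines = [deque([pad] * i) for i in range(degree)]    # one FIFO per branch
    rounds = ceil(len(symbols) / degree) + degree - 1    # extra rounds flush the FIFOs
    stream = list(symbols) + [pad] * (rounds * degree - len(symbols))
    out = []
    for r in range(rounds):
        for i in range(degree):
            lines[i].append(stream[r * degree + i])
            out.append(lines[i].popleft())
    while out and out[-1] == pad:                        # drop trailing flush padding
        out.pop()
    return out

msg = [f"X{i}" for i in range(1, 17)]
print(conv_interleave(msg))
# ['X1', 0, 0, 'X4', 'X2', 0, 'X7', 'X5', 'X3', 'X10', 'X8', 'X6',
#  'X13', 'X11', 'X9', 'X16', 'X14', 'X12', 0, 0, 'X15']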


Example: Consider a (2, 1, 1) convolutional coding system with interleaving degree λ = 9, and let the message to be encoded be u = [1 0 1 0 1 0 1 0 1 0]. The original convolutional encoder is shown in figure 2.23. Here the message signal is fed into one shift register, and the delayed version of this register's output is added modulo-2 with the original message to give output v2, which is fed into the multiplexer; the original message, without passing through any circuit, is fed forward to the multiplexer as output v1. The codeword thus obtained, the encoded message, is passed through the channel.


Figure 2.23: A (2, 1, 1) convolutional encoding system.
Now, to use the convolutional coding system with interleaving degree λ = 9, the encoding circuit has to change into the form shown in figure 2.24: λ = 9 registers replace the single shift register of the encoder in figure 2.23, and a delay equal to (λ − 1)/n is added to the output v2. In this example the delay is equal to (9 − 1)/2 = 4. The complete encoding calculation is given in Appendix A1.


Figure 2.24: A (2, 1, 1) convolutional coding system with interleaving degree λ = 9.

2.3.1.6 Trellis Coded Modulation (TCM)

Traditionally, coding and modulation have been considered two separate parts of a digital communication system. In 1982, Ungerboeck proposed TCM to optimize coding and modulation jointly. With TCM, we can obtain additional noise performance without asking for



more bandwidth. Conventionally, both blocks (encoder and modulator) are optimized independently, even though their objective is the same, namely to correct errors introduced by the channel. It is possible to obtain coding gain without bandwidth expansion if the channel encoder is integrated with the modulator [18].
Example: Consider data transmission over a channel with a throughput of 2 bits/s/Hz. One possible solution is to use uncoded QPSK. Another possibility is to first use a rate 2/3 convolutional encoder (which converts 2 uncoded bits into 3 coded bits) and then use an 8-PSK signal set, which has a throughput of 3 bits/s/Hz. This coded 8-PSK scheme yields the same information throughput as the uncoded QPSK (2 bits/s/Hz), and both the QPSK and the 8-PSK schemes require the same bandwidth. The coding gain provided by the encoder can outweigh the performance loss due to the denser 8-PSK signal set, so this is indeed achievable.
a) The free Euclidean distance: The minimum Euclidean distance between any two paths in the trellis is called the free Euclidean distance, dfree, of the TCM scheme. dfree could also be defined in terms of the Hamming distance between any two paths in the trellis. For a linear convolutional code, the minimum free distance in terms of Hamming weight can be calculated as the minimum weight of a path that deviates from the all-zero path and later merges back into it further down the trellis; this is a consequence of the linearity of convolutional codes. However, the same does not apply in the case of TCM schemes, which are non-linear [18].
b) Asymptotic coding gain: The difference between the SNR values required by the uncoded and coded schemes to achieve the same error probability is defined as the coding gain, g, where

g  SNR |coded  SNR |uncoded . At high SNR, the coding gain can be expressed as

2.1

44

g 00  10 log

2

(d free / Es ) coded (d 2 free / Es )uncoded

where g00 represents the asymptotic gain and Es is the

average signal energy. For uncoded schemes, dfree is simply the minimum Euclidean distance between the signal points. c) Mapping by set Partitioning The mapping on set partitioning is based on successive partitioning of the expanded 2m+1 ary signal set into subsets with increasing minimum Euclidean distances. Each time we partition the set, we reduce the number of the signal points in the subset but increase the minimum distance between the signal points in the subset. The mapping is done in such a manner so as to maximize the minimum Euclidean distance between the different paths in the trellis [18].

2.4 The Decoding Algorithms
There are several different approaches to the decoding of convolutional codes. These are grouped into two basic categories:
1. Sequential decoding – the Fano algorithm
2. Maximum likelihood decoding – Viterbi decoding
Both of these methods represent different approaches to the same basic idea behind decoding [22].

2.4.1 Choice of Decoding Algorithm

Viterbi decoding is the best-known implementation of maximum likelihood decoding. Here, the options are narrowed systematically at each time tick. The main assumptions used to reduce the choices are as follows:
1. Errors occur infrequently; the probability of error is small.
2. The probability of two errors in a row is much smaller than that of a single error; that is, errors are distributed randomly.


Viterbi decoding is quite important since it also applies to the decoding of block codes. This form of trellis decoding is also used for Trellis-Coded Modulation (TCM) [22].

2.4.2 Decoding of Convolutional Codes using the Viterbi Algorithm

The Viterbi algorithm was first proposed by Andrew Viterbi as a decoding algorithm for convolutional codes in 1967 [25]. It is a maximum likelihood decoding process which uses the trellis structure to find the codeword closest to the received message. The Viterbi algorithm searches all the likely paths in the trellis and compares the metrics of each path; the metrics used are branch metrics and path metrics. In an (n, k, K) convolutional code there are always 2^((K−1)·k) possible states, and each node has 2^k merging paths. The path with the minimum Hamming distance is selected as the surviving path, and the single survivor path with the minimum metric gives the decoded output. The important parameters that need to be considered in the Viterbi algorithm are described below.

2.4.3 Branch Metric and Path Metric Calculation

a) Branch metric calculation
This is the calculation of a distance between the received pair of bits and all the possible output bits of each state, i.e. "00", "01", "10", "11" for the (2, 1, 2) code. The branch metric calculation differs between decoding systems. In hard-decision decoding, a branch metric is the Hamming distance between the received pair of bits and the output bits of each state; for each state there is more than one possible branch metric. For soft-decision decoding, a branch metric is measured using the Euclidean distance, calculated using the formula:

BM = (x − x1)² + (y − y1)²    (2.1)


where x and y are the first and second received bits in the pair, and x1 and y1 are the first and second bits of each branch of the possible states in the trellis diagram. For example, if the input message for the decoder is 11, 01, 00, 10 and the branch value for the first state S0 corresponding to input bit 0 is 00, the branch metric is calculated using Equation (2.1) with x = 1, y = 1, x1 = 0 and y1 = 0.
b) Path metric calculation
This is the sum of the branch metrics along the corresponding path of each encoder state, used for finding the survivor path [22]. It is calculated using a procedure called Add-Compare-Select [23], repeated for every encoder state. In this procedure, the branch metrics that lie along a path are added at each node. There are two paths ending in, and two paths emerging from, any state; the metrics are compared and the path with the greater metric is dropped, so that the survivor path with the lowest metric is selected.


Figure 2.25: Block diagram showing the process of the Viterbi algorithm (encoded stream → branch metric calculation → path metric calculation → trace back → decoded stream).

2.4.4 Trace Back

This is the technique used to find the output from the decoder. It stores the maximum likelihood path calculated from the path metrics; the process starts from the end of the trellis and ends at the first stage of the trellis.

2.5 Decoding of Convolutional Codes using the Viterbi Algorithm

There are two different approaches to decoding convolutional codes using the Viterbi algorithm:
1. Hard decision decoding


2. Soft decision decoding
2.5.1 Hard Decision Decoding of Convolutional Codes using the Viterbi Algorithm
In hard decision decoding of convolutional codes, the message received from the noisy channel is quantized into two levels of binary data, either '0' or '1' [10]. These binary data are fed into the decoder as input. The next step follows the procedure of finding the most likely path along the trellis structure, which involves finding the minimum Hamming distance between the surviving paths. The process of hard decision decoding is described below with an example. Consider the encoder for the (2, 1, 2) convolutional code shown in figure 2.26. Let u = 0,1,1,1,0,1,0,0 be the input bits to the encoder, so that v = 00,11,10,01,10,00,01,11 is the output from the encoder. Figure 2.27 shows the states of the trellis reached during the encoding of the message.

Figure 2.26: A (2, 1, 2) convolutional encoder.
Now suppose the received message contains a couple of bit errors due to channel noise; the input to the decoder is assumed to be 00, 01, 10, 01, 11, 00, 01, 11. Table 2.3 compares the encoder output and the received message so that the bit errors are easy to identify.


Figure 2.27: Trellis diagram of the (2, 1, 2) convolutional encoder.
Table 2.3: Comparing the encoder output and received message.

Time            t=0  t=1  t=2  t=3  t=4  t=5  t=6  t=7
Encoder output  00   11   10   01   10   00   01   11
Received        00   01   10   01   11   00   01   11

From table 2.3 we can see that at time slots t = 1 and t = 4 there is an error in the received pair of message bits. The implementation of the Viterbi algorithm used for decoding the noisy message back to the original data is explained below. For each time slot, the received pair of symbols is compared with all the possible output values of the branches emerging from each state in order to calculate the metric, called the distance. Going from the initial state at t = 0 to the next state at t = 1, there are only two symbols we can see in the


trellis diagram, i.e. 00 and 11, because we always assume that the encoder starts from the all-zeroes state. For one input bit, either '0' or '1', there are only two states to transition to and two possible encoder outputs, 00 and 11. As described above for hard decision decoding, the Hamming distance is used as the metric between the received symbol and the branch output symbol. It is calculated simply by counting the differing bits between the received symbol pair and each possible output symbol pair; since symbols in this example contain only two bits, the possible distances between them are 0, 1 or 2. The metric value calculated for each branch emerging from a state is called a branch metric. We can see from the trellis diagram that at time instants t = 1 and t = 2 there is only one branch arriving at each state. However, from t = 3 there are always two branches coming into, and two branches going out of, each state, so that more than one surviving path exists between nodes. To find the most likely path, metrics called path metrics are calculated: a path metric is the summation of all the branch metrics along the path between two states. The path having the larger metric is discarded, and the one with the lowest metric is chosen as the most likely path. A sketch of the complete procedure is given below.
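The following Python sketch is a minimal hard-decision Viterbi decoder for this (2, 1, 2) example; the generator taps v1 = u ⊕ s2 and v2 = u ⊕ s1 ⊕ s2 are inferred from the encoder output quoted above and should be read as an assumption.

def build_trellis():
    """(state, input) -> (output pair, next state) for the (2, 1, 2) code."""
    trellis = {}
    for s1 in (0, 1):
        for s2 in (0, 1):
            for u in (0, 1):
                out = (u ^ s2, u ^ s1 ^ s2)        # assumed generator taps
                trellis[((s1, s2), u)] = (out, (u, s1))
    return trellis

def viterbi_hard(received, trellis):
    """received: list of 2-bit tuples; returns the decoded input bits."""
    paths = {(0, 0): (0, [])}                      # state -> (path metric, inputs)
    for pair in received:
        nxt = {}
        for (state, u), (out, ns) in trellis.items():
            if state not in paths:
                continue
            metric, bits = paths[state]
            bm = (out[0] != pair[0]) + (out[1] != pair[1])   # Hamming distance
            cand = (metric + bm, bits + [u])
            if ns not in nxt or cand[0] < nxt[ns][0]:        # add-compare-select
                nxt[ns] = cand
        paths = nxt
    return min(paths.values())[1]                  # survivor with the lowest metric

rx = [(0,0), (0,1), (1,0), (0,1), (1,1), (0,0), (0,1), (1,1)]
print(viterbi_hard(rx, build_trellis()))   # expected: [0, 1, 1, 1, 0, 1, 0, 0]

Both errors lie well apart, so the transmitted path remains the unique lowest-metric survivor and the original message is recovered.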

Figure 2.28: First stage of the trellis diagram.


The symbol received at t = 1 is 00. The possible output values of the branches from the initial state are 00 and 11; the Hamming distance between 00 and 00 is zero, and the distance between 00 and 11 is two. Therefore, the branch metric from the initial state at t = 0 to the initial state at t = 1 is zero, and the branch from state 00 to state 10 has metric two. Figure 2.28 shows the results at t = 1. The symbol received at t = 2 is 01. The possible branch outputs going from state 00 to 00 and from state 00 to 10 are 00 and 11, respectively; from state 10 to 01 the output is 01, and from 10 to 11 it is 10. The branch metrics and path metrics for each state at t = 2 are calculated and shown in figure 2.29.

Figure 2.29: State representation of the trellis diagram.


Figure 2.30: Path metric calculation.
At time instant t = 3, each state has two incoming paths. To choose the most likely path between the two, adders, comparators and selectors are used: the path metrics are calculated for each path, the path with the lowest metric is selected as the survivor path, and the one with the highest metric is rejected. If the decoder gets the same metric value for both paths, the algorithm can be made to simply choose one of them. The path metric calculation for each state at t = 3 is shown in figure 2.30. From this calculation, the path with metric 3 is chosen for state 00; similarly, for state 01, state 10 and state 11, paths with metrics 3, 2 and 1 are chosen, respectively. The same process is repeated for the other time instants and the surviving paths. The path metrics for all time instants up to t = 8 are shown in figure 2.31.


Figure 2.31: Final path metric calculation.
From the above diagram, for each instant of time the path with the lowest metric is selected. When the same path metric value occurs going from one state to another, the predecessor that follows the lowest metric is selected as the survivor path. Finally, the surviving paths of the trellis are drawn and the decoded message is revealed. Figure 2.32 shows the decoded message and the path through the trellis.

Figure 2.32: Trellis structure for hard decision decoding.


2.5.2 Soft Decision Decoding of Convolutional Codes using the Viterbi Algorithm

Unlike hard decision decoding, in soft decision decoding the bits received by the decoder from the channel are quantized into more than two levels [24]. A longer soft decision level increases performance at the expense of increased computational complexity. Decoding is carried out according to the quantization level: the pattern of zeros and ones in the quantization level determines the strongest and weakest bits. For example, if we use a 3-bit quantization level for soft decision decoding, then '100' is the least likely 1, '111' is the most likely 1, '000' is the most likely 0 and '011' is the least likely 0. To calculate the branch and path metrics, soft Viterbi decoding uses the squared Euclidean distance rather than the Hamming distance. The branch metric of the trellis diagram for any time instant t can be calculated using the formula:

M_{i,j,t} = Σ_{k=0}^{n−1} (y_{k,t} − C_{i,j,t})²    (2.2)

where y_{k,t} is the k-th received value and C_{i,j,t} is the corresponding output transition value for the transition from state j to state i. Expanding equation (2.2) gives:

M_{i,j,t} = Σ_{k=0}^{n−1} (y_{k,t})² − Σ_{k=0}^{n−1} 2·y_{k,t}·C_{i,j,t} + Σ_{k=0}^{n−1} (C_{i,j,t})²    (2.3)

Two of the terms in equation (2.3) are constant for all paths at a given time. Therefore, after eliminating them, the equation becomes:

M_{i,j,t} = Σ_{k=0}^{n−1} −2·y_{k,t}·C_{i,j,t}    (2.4)

The negative sign in equation (2.4) means that the metric is minimum when the correlation is maximum. Ignoring the coefficient 2, the value of the branch metric is given by:

M_{i,j,t} = −Σ_{k=0}^{n−1} y_{k,t}·C_{i,j,t}    (2.5)

A small sketch of this branch metric is given below.

Again, consider u = 0,1,0,1,1,1,0,0 as the input bits to the encoder shown in figure 2.26; hence v = 00,11,10,00,01,10,01,11 is the output from the encoder. The soft information received from the channel, (−1.5, −1.3, −1.9, −0.7, 1.7, −2.4, 0.4, −1.2, −0.9, 1.1, 1.6, 0.6, −0.4, 2.4, 0.7, 0.6), is taken as the input to the decoder. Figure 2.33 shows the trellis diagram of this code reached during the decoding of the message.

Figure 2.33: Trellis diagram for soft decision decoding.
From figure 2.33 we can see that the soft decision method searches for the survivor path among the paths leaving each state, including the survivor path at the previous time instant [13]. At the next time instant, the path metrics of the possible states are calculated based on the state selected at the previous time and the state with the highest path metric. If the path metric of the present state does not originate from the previously selected state, the decision is made for another path with the highest path metric. The decoder then traces back until it finds the path with the highest path metric. As mentioned before, the procedure is based on maximum likelihood decoded


information. To finish the decoding, the process is repeated for every time instant. For the above example, the soft decision decoding recovered the input message as shown in the diagram by the bold arrow line; the output obtained from the decoder is 0,1,0,1,1,1,0,0.

2.6 Decoding Burst Errors Using an Interleaved System

A simple feedback decoding circuit for correcting a single error is given in figure 2.34. This single-error-correcting decoder circuit can be converted into a burst-error-correcting decoder by the interleaving technique. The interleaving degree λ introduced during convolutional encoding has to be used again after the codeword is received. In the previous example of an interleaved convolutional coding system, an interleaving degree λ = 9 was used; since λ − 1 has to be a multiple of n, in this case n = 2 and λ − 1 = 8 is a multiple of 2. Hence this code can correct all bursts of errors having length equal to or less than λ = 9. The total memory required in the interleaved decoder is [8]

(λ − 1)(n − 1)/2 + λnm

For example, with λ = 9, n = 2 and m = 1: (9 − 1)(2 − 1)/2 + (9 × 2 × 1) = 22 memory elements are required. The complete interleaved decoder for λ = 9 is given in figure 2.35. For the input message u = [1 0 1 0 1 0 1 0 1 0], the complete calculation is given in Appendix A2. The calculation has been done for burst lengths equal to, less than and greater than λ = 9; when the burst length is greater than 9, the decoder is unable to correct the burst errors. Decoding uses the logical relations given below:
S1 = r3 ⊕ r12

S2 = S1 ⊕ V2
M = S2 ⊕ r21
S3 = S2 ⊕ M


S4 = r12 ⊕ M


Figure 2.34: A simple feedback decoding system for the (2, 1, 1) convolutional code.


Figure 2.35: Decoding burst errors for the (2, 1, 1) convolutional code using λ = 9.



CHAPTER THREE
ENCODING CONVOLUTIONAL CODES
3.1 The (2, 1, 3) Convolutional Code Encoder
Convolutional codes are found in applications which require good performance, high throughput and low computational complexity. The encoder converts a message of any length into a single codeword. The code structure for the (2, 1, 3) code and its analysis are given as:
(2, 1, 3) = (n, k, m)

n  2  outputs   k  1  inputs m  3  memory _ registers  The constraint length is l =k (m-1) =1(3-1) =2 The polynomial degree = (km) = 1x3=3 The code rate = k/n=1/2 The state of the (2,1,3) code =2l=22=4 It should be noted that, the links are arbitrary between the mod-2 adder and memory (1, 0, 1) +


Figure 3.1: The structure of the (2, 1, 3) code.
The output bits depend on the state, which changes at each time step. The (2, 1, 3) code in figure 3.1 outputs 2 bits for every input bit; it is a rate ½ code. Its constraint length is 2, and the total number of states is 4. The 4 states of this (2, 1, 3) code are: 00, 01, 10, 11.


Encoding the eight-bit sequence 10111101 with the (2, 1, 3) code

[Figure panels: a) at t = 0, input state 00, input bit 1, output 11; b) at t = 1, input state 10, input bit 0, output 01; c) at t = 2, input state 01, input bit 0, output 11; d) at t = 3, input state 00, input bit 0, output 00.]

Figure 3.2: A sequence consisting of a single 1 bit as it goes through the encoder; the single bit produces 8 bits of output.
First we pass a single 1 bit through this encoder, as shown in figure 3.2. This shows that for small sequences the overhead is much higher than the nominal rate, which only applies to long sequences. If we did the same with a 0 bit, we would get an 8-bit all-zero sequence. The impulse response of this encoder to a 1 bit is 11 01 11 00; the 0 bit similarly has an impulse response of 00 00 00 00 (not shown, but this is obvious). For the input sequence 10111101, the coded sequence is obtained as follows.


Each 1 bit in the input contributes a copy of the impulse response 11 01 11 00 shifted to its own time slot, and each 0 bit contributes 00 00 00 00. Adding (modulo-2) the shifted responses for the bits 1, 0, 1, 1, 1, 1, 0, 1 gives:
10111101 → 11 01 00 10 01 01 10 00 01 11 00
Therefore the encoded sequence for the message 10111101 is 11 01 00 10 01 01 10 00 01 11 00; a sketch verifying this with a shift-register simulation is given below.
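As a cross-check, the following Python sketch simulates the (2, 1, 3) shift-register encoder directly; applying the generator taps (1, 0, 1) and (1, 1, 1) across the input bit and the two state bits is a convention inferred from the worked example.

G1 = (1, 0, 1)   # taps over [input, s1, s2] for output v1
G2 = (1, 1, 1)   # taps over [input, s1, s2] for output v2

def encode_213(bits, flush=3):
    state = [0, 0]                        # the two state registers (s1, s2)
    pairs = []
    for u in bits + [0] * flush:          # flush bits return the encoder to state 00
        regs = [u] + state
        v1 = sum(g & r for g, r in zip(G1, regs)) % 2
        v2 = sum(g & r for g, r in zip(G2, regs)) % 2
        pairs.append(f"{v1}{v2}")
        state = [u, state[0]]             # shift the new bit in
    return pairs

print(encode_213([1, 0, 1, 1, 1, 1, 0, 1]))
# ['11', '01', '00', '10', '01', '01', '10', '00', '01', '11', '00']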

3.2 The (2, 1, 3) Convolutional Code State Diagram

A state transition table is used to construct the state diagram. The block diagram and the transition table for the (2, 1, 3) convolutional code are shown in figure 3.3 and table 3.1, respectively.


Figure 3.3: Block diagram of the (2, 1, 3) convolutional encoder.


Table 3.1: Transition (state) table for the (2, 1, 3) convolutional code.

Input bit  Current state  Next state  Output
0          00             00          00
1          00             10          11
0          01             00          11
1          01             10          00
0          10             01          01
1          10             11          10
0          11             01          10
1          11             11          01

From the state table, the state diagram is constructed by drawing, for each current state, the transition to the next state together with the corresponding input and output.

Figure 3.4: State diagram of the (2, 1, 3) convolutional code.
In figure 3.4, the dotted lines represent the change of state when the input is 1, and the continuous lines represent the change of state when the input is 0.
Getting the encoded sequence: putting the message 10111101 through the encoder gives the coded sequence and the encoder bits.

[Figure panels: a) at t = 0, input state 00, input bit 1, output 11; b) at t = 1, input state 10, input bit 0, output 01; c) at t = 2, input state 01, input bit 1, output 00; d) at t = 3, input state 10, input bit 1, output 10; e) at t = 4, input state 11, input bit 1, output 01; f) at t = 5, input state 11, input bit 1, output 01; g) at t = 6, input state 11, input bit 0, output 10; h) at t = 7, input state 01, input bit 1, output 00.]
Figure 3.5: Encoding process of the (2, 1, 3) code for the message 10111101.
The encoding process of the message 10111101 is shown in table 3.2.



Table 3.2: Code sequence (output bits) and encoder bits for the message 10111101 using the (2, 1, 3) code.

Time          0   1   2   3   4   5   6   7   8   9   10
Input bit     1   0   1   1   1   1   0   1   0   0   0
Output bits   11  01  00  10  01  01  10  00  01  11  00
Encoder bits  00  10  01  10  11  11  11  01  10  01  00

From table 3.2, the following is inferred:
1. The encoder bits are 00 10 01 10 11 11 11 01 10 01 00.
2. The coded sequence is 11 01 00 10 01 01 10 00 01 11 00.

3.3 The (2, 1, 3) Convolutional Code Trellis Diagram

The trellis diagram of the (2, 1, 3) convolutional code is used to encode the message (10111101) using the state (transition) table shown in table 3.1; this is demonstrated in figure 3.6. The broken lines represent input bit 1, while the solid lines represent input bit 0.

Figure 3.6: Encoding the message (10111101) with the (2, 1, 3) code using the trellis diagram.

From the trellis diagram, the message is encoded and the following information is obtained from the traceback line in blue:
1. The encoder bits are 00 10 01 10 11 11 11 01 10 01 00.
2. The coded sequence is 11 01 00 10 11 11 10 00 01 11 00.

3.4 The (2, 1, 3) Interleaved Convolutional Code

Consider the (2, 1, 3) convolutional coding system with interleaving degree λ = 5, and let the message to be encoded be u = [1 0 1 1 1 1 0 1]. The original convolutional encoder is shown in figure 3.7. Here the message signal is fed into three shift registers, and the delayed versions of these registers' outputs are added modulo-2 with the original message to give output v2, which is fed into the multiplexer; the original message, without passing through any circuit, is fed forward to the multiplexer as output v1. The codeword thus obtained, the encoded message, is passed through the channel.


Figure 3.7: A (2, 1, 3) convolutional encoding system.
Now, to use the convolutional coding system with interleaving degree λ = 5, the encoding circuit has to change into the form shown in figure 3.8: λ = 5 registers replace each shift register of the encoder in figure 3.7, and a delay equal to (λ − 1)/n is added to the output v2. In this example the delay is equal to (5 − 1)/2 = 2. The complete encoding calculation is shown in table 3.3.


Figure 3.8: A (2, 1, 3) convolutional coding system with interleaving degree λ = 5.
The complete encoding calculation is shown in table 3.3 below and also in Appendix 3A. It uses the following data: n = 2, λ = 5, mλ = 3 × 5 = 15, and delay = (λ − 1)/n = (5 − 1)/2 = 2.


Table 3.3: Convolutional interleaved code with interleaving degree λ = 5.
[The table tracks, at every shift, the input U (the message followed by flush zeros), the contents of the fifteen interleaver registers r1–r15, the adder outputs S1 and S2, the delayed outputs Y1 and Y2, and the multiplexed outputs V1 and V2.]
The resulting codeword is
1 0 1 1 0 1 1 0 1 1 1 1 0 1 1 0 0 1 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 1 1 0 1 1 1 1 0 1 1 0 0 1 0 0

Block interleaver: A block interleaver of size M1 × N1 separates the symbols of any burst error pattern of length less than M1 by at least N1 symbols. For example, if a sequence containing a burst of three consecutive errors is written by columns into a 4 × 4 interleaver and read out as (1 5 9 13 2 6 10 14 3 7 11 15 4 8 12 16), then these errors will be separated by at least four intervals [18].



The de-interleaved sequence in this case is (1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16), which confirms that the errors are separated by four positions. In a given application of error control coding, a block interleaver is selected to have a number of rows that should ideally be larger than the longest burst of errors expected, and in practice at least as large as the length of most expected bursts. The other parameter of a block interleaver is the number of columns of the permutation matrix, N1, which is normally selected to be equal to or larger than the block or decoding length of the code that is used. In this way, a burst of N1 errors will produce only one error per code vector. For error-correcting codes able to correct any error pattern of size t or less, the value of N1 can be set to be larger than the expected burst length divided by t [22].
For an error pattern ē = (00000010111101000000), from the Reiger bound theorem l ≤ (n − k)/2; for l = 8, n = 21 and k = 7. Since l = 8, the error pattern is written into λ = 8 rows, as follows.
[Figure: matrix A holds the input data (the error pattern ē written into the interleaver array); matrices B and C hold the interleaved data read out of the array; matrix D holds the de-interleaved data, which reproduces ē with the burst dispersed. A is the input data, B and C are the interleaved data, and D is the de-interleaved data.]
It is important to note that, during deinterleaving, if the interleaved data is loaded from the top, then the de-interleaved data is taken from the bottom, as above; but if the interleaved data is loaded from the bottom, then the de-interleaved data is taken from the top, as below.
[Figure: matrix E holds the de-interleaved data obtained when the interleaved data D is loaded from the bottom; D is the interleaved data and E is the de-interleaved data. In both cases the de-interleaved output reproduces the original error pattern ē.]
A sketch of a block interleaver is given below.
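The following Python sketch is a minimal block interleaver and de-interleaver (write by rows, read by columns); the 4 × 5 shape used for the 20-bit error pattern is an illustrative assumption.

def block_interleave(data, rows, cols):
    """Write the data into the array by rows, then read it out by columns."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(data, rows, cols):
    """Inverse operation: write by columns, read by rows."""
    assert len(data) == rows * cols
    return [data[c * rows + r] for r in range(rows) for c in range(cols)]

burst = [int(b) for b in "00000010111101000000"]        # error pattern from the text
tx = block_interleave(burst, rows=4, cols=5)
print(block_deinterleave(tx, rows=4, cols=5) == burst)  # True: pattern recovered

In the interleaved stream the burst is broken up, so after de-interleaving each code vector sees only a few of the errors, which is exactly the dispersion effect described above.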


CHAPTER FOUR
DECODING CONVOLUTIONAL CODES
4.1 Viterbi Algorithm for the (2, 1, 3) Convolutional Code

The (2, 1, 3) convolutional code encoded in chapter three will now be decoded using the Viterbi algorithm. In this algorithm, branch metrics and path metrics are calculated as shown in figure 4.1, and the path with the larger Hamming distance is discarded. This is done by comparing the received codeword with the branch bits; the number of differing bits is added to the previous Hamming distance to obtain the resulting path metric. If two paths have the same Hamming distance, one of them is chosen at random and the other is discarded. The process continues until the trellis returns to state 00. After the Viterbi algorithm is done, the message is obtained by tracing back from state 00 along the paths that were not discarded, back to the starting point; the input bits along this traceback path give the message that was sent. Table 4.1 summarizes the encoded information. The message is assumed to have been transmitted over a noisy channel, and hence an error bit was introduced into the received codeword; this error bit needs to be corrected by the Viterbi algorithm.
Table 4.1: Summary of the (2, 1, 3) code encoded information.

Time                            t0  t1  t2  t3  t4  t5  t6  t7  t8  t9  t10
Message                         1   0   1   1   1   1   0   1   0   0   0
Encoded sequence                11  01  00  10  11  11  10  00  01  11  00
Encoder bits                    00  10  01  10  11  11  11  01  10  01  00
Received codeword (has error)   11  01  00  10  11  10  10  00  01  11  00

From table 4.1, the error bit occurred at time t5, where 10 was received instead of 11. The Viterbi algorithm decodes the received codeword as shown in figure 4.1, and the message is recovered as it was sent. The Viterbi decoder follows these steps:
1. Calculation of the branch metrics (distances from the received data).
2. Calculation of the path metrics (add, compare, select).
3. On reaching the end of the trellis, start from the best state.
4. Trace back (using the stored path metrics).
5. Decide the data bits while tracing back.
A compact sketch of these steps is given below.
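The following Python sketch applies these steps to the (2, 1, 3) code using the transition table 3.1. As an assumption, the transmitted codeword is taken from the chapter-three encoder and a single bit is flipped to model the channel error (the figure's received word differs slightly):

TRELLIS = {  # (state, input) -> (output, next state), from table 3.1
    ("00", 0): ("00", "00"), ("00", 1): ("11", "10"),
    ("01", 0): ("11", "00"), ("01", 1): ("00", "10"),
    ("10", 0): ("01", "01"), ("10", 1): ("10", "11"),
    ("11", 0): ("10", "01"), ("11", 1): ("01", "11"),
}

def viterbi(pairs):
    paths = {"00": (0, [])}                      # state -> (metric, decoded bits)
    for rx in pairs:
        nxt = {}
        for (s, u), (out, ns) in TRELLIS.items():
            if s in paths:
                m = paths[s][0] + sum(a != b for a, b in zip(out, rx))
                if ns not in nxt or m < nxt[ns][0]:          # add-compare-select
                    nxt[ns] = (m, paths[s][1] + [u])
        paths = nxt
    return paths["00"][1]                        # the terminated trellis ends in 00

sent = ["11","01","00","10","01","01","10","00","01","11","00"]
sent[5] = "11"                                   # flip one bit of the t5 pair
print(viterbi(sent))   # expected: [1,0,1,1,1,1,0,1,0,0,0] (message + 3 flush zeros)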

Figure 4.1: Viterbi decoder for the (2, 1, 3) code for the message (10111101), showing the received codeword (with the error) against the trellis.
The branch metrics and path metrics are calculated over all the branches, and the path with the smallest path metric is selected. The complete process is shown in figure 4.2.


[Figure: trellis with the branch metrics and accumulated path metrics at each time instant. The surviving path metrics per state, read from the figure, are:]

State / Time   t0  t1  t2  t3  t4  t5  t6  t7  t8  t9  t10  t11
00             0   0   1   1   3   2   2   1   1   2   1    1
01             0   0   0   2   2   1   0   1   1   1   3    –
10             0   0   1   0   2   3   2   1   1   2   –    –
11             0   0   2   2   0   0   1   2   2   3   –    –

Figure 4.2: Viterbi decoding of the (2, 1, 3) convolutional code.
After Viterbi decoding and tracing back, the message 10111101000 is obtained through the traced path. The error is corrected and the message is received as it was sent; the last three zero bits are the flush bits of the register.

[Figure: trellis traceback, with the surviving path in bold and the error bit marked (a 0 received instead of a 1).]
Figure 4.3: Viterbi decoding for the (2, 1, 3) code after tracing back.
From figure 4.3 we can see that, after tracing back, the message is obtained as 10111101 followed by the three flush bits 000. It is clear that the error has been corrected.

4.2 Deinterleaving for the (2, 1, 3) Convolutional Code

The interleaving degree λ introduced during convolutional encoding has to be used again after the codeword is received. In the example of the interleaved convolutional coding system, an interleaving degree λ = 5 was used; since λ − 1 has to be a multiple of n, in this case n = 2 and λ − 1 = 4 is a multiple of 2. Hence this code can correct all bursts of errors having length equal to or less than λ = 5. The total memory required in the interleaved decoder is

(λ − 1)(n − 1)/2 + λnm

Now, with λ = 5, n = 2 and m = 3: (5 − 1)(2 − 1)/2 + (5 × 2 × 3) = 32 memory elements are needed. The complete interleaved decoder for λ = 5 is given in figure 4.5. For the input message u = [1 0 1 1 1 1 0 1], the complete calculation is given in Appendix A4. The calculation has been done for burst lengths equal to, less than and greater than λ = 5; when the burst length is greater than 5, the decoder is unable to correct the burst errors. Decoding uses the logical relations given below:
S1 = r2 ⊕ r17
S2 = V2 ⊕ S1
M = S2 ⊕ r32
S3 = S2 ⊕ M


S4 = r17 ⊕ M
S4 = u
Register chain: V1→r1, r7→r8, r12→r13, S3→r18, r22→r23, r27→r28


Figure 4.4: Deinterleaving block diagram for the (2, 1, 3) convolutional code.



Figure 4.5: Deinterleaving block diagram for the (2, 1, 3) convolutional code with interleaving degree λ = 5.



4.3 Non-Linear Convolutional Codes
The non-linearity of convolutional codes can be explained using a (2, 2, 2) convolutional code. The code has 2 input bits, 2 output bits and 2 memory registers, with the following code properties:
Constraint length: L = k(m − 1) = 2(2 − 1) = 2
Code rate: k/n = 2/2 = 1
Number of states: 2^L = 2² = 4
Polynomial degree: km = 2 × 2 = 4
The procedure for drawing the structure is as follows:
1) Draw k sets of m boxes.
2) Draw n adders.
3) Connect the n adders to the memory registers using the coefficients of the nth (km)-degree polynomial.
The encoder structure is illustrated using the (2, 2, 2) convolutional code encoder in figure 4.6. Let the elements of the generator matrices of the (2, 2, 2) convolutional code be given as [19], [27]:

1 1 0 0  1 1 g m,0   , g m,1   , g m,2      0 1 0 0  0 1

(4.1)

From the generator matrices in (4.1), g_{m,0} gives the connections of the input data registers to the modulo-2 adders, g_{m,1} gives the connections of the first two registers, M_1^1 and M_1^2 (which contain the previous input bits), to the modulo-2 adders, and g_{m,2} gives the connections of the second two registers, M_2^1 and M_2^2, to the modulo-2 adders. This gives:
V1 = (10 00 10)
V2 = (11 00 11)
The corresponding convolutional encoder is represented graphically in figure 4.6.



Figure 4.6: (2, 2, 2) convolutional encoder.
The input vector is given as u, and the output vector as v. From figure 4.6, the two output bits can be computed from the two input bits u[1] and u[2] as follows:

v[1] = u[1] ⊕ M_2^1  and  v[2] = u[1] ⊕ u[2] ⊕ M_2^1 ⊕ M_2^2    (4.2)

where the symbol ⊕ stands for mod-2 addition. Given that the two output bits are v[1] = 0 and v[2] = 1, mod-2 addition gives that the first

where the symbol  stands for mod-2 addition. Given that the two output bits are v[1] = 0 and v[2] = 1, from mod-2 addition the first 1

input bit u[1] = 1 since M 2 = 1. This input bit can then be used to compute the second input using the equation for v[2]. It is easily seen that, the second input bit u[2] = 0. Hence, t2 is globally invertible since at each step the current block of k output bits can be uniquely decoded. 4.3.1 Linear dynamic convolutional transducer Let (t, f, S1, S2, S3) be a (2, 2, 2) convolutional cryptosystem with 3 states [19], [30] where t is the transducer function, f is the set of functions used to switch between any of the states S1, S2 and S3. Arbitrary state matrices corresponding to the different states are as follows

76

1 1 1 0 0  1 1 1 S1  g 1m,0   , g m,1   , g m,2     , 0 1 0 0  0 1 1 1 0 0  2 1 1 S 2  g 2m,0   , g 2m,1   , g m,2     , 0 1 0 0  1 1

(4.3)

1 1 0 0  3 1 0 S 3  g 3m,0   , g 3m,1   , g m,2      0 1 0 0  1 1  The contents in the state matrices indicate the connections between the registers and the mod-2 adder. A „1‟ in the state matrix indicates that the corresponding shift register is connected to the modulo-2 adder and a „0‟ in a given position indicates that no connection exists between that shift register and the modulo-2 adder. The transition function, f gives a set of transition conditions for switching from one state to another. For example, let f1 be the transition function that compares the input data [0 0] with the present state S1 and switches to the next state S2. The rest of the transition conditions are shown in table 4.2. F

[0 0]

[0 1]

[1 0]

S1

[1 1]

S2

S3

S3

S1

S2

S3

S1

S2

S3

S3

S3

S2

S1

S2

Table 4.2: Transition function The generator matrix, gm can be obtained from the state matrices in the different states in (4.3) as [29], [30], [31]

gm = [ g^1_{m,0}  g^2_{m,1}  g^3_{m,2} ]
     [    0       g^2_{m,0}  g^3_{m,1} ]
     [    0          0       g^3_{m,0} ]

   = [1 1 0 0 1 0]
     [0 1 0 0 1 1]
     [0 0 1 1 0 0]
     [0 0 0 1 0 0]
     [0 0 0 0 1 1]
     [0 0 0 0 0 1]    (4.4)

Using the generator matrix, the convolutional transducer function is given as [31], [32] where „u‟ is the input vector t(u) = u x gm

(4.5)

where „u‟ is the input vector . Let the input vector be given as u = [0 1 1 1 0 1]. Based on this input vector, the convolutional transducer function is computed as

t(u) = [0\ 1\ 1\ 1\ 0\ 1] \times \begin{pmatrix} 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} = [0\ 1\ 1\ 0\ 1\ 0]        (4.6)

The input vector u = [0 1 1 1 0 1] is encrypted to the output vector v = [0 1 1 0 1 0]. This is a linear convolutional transducer since the input vector is encrypted directly to the output vector. The same result can be obtained by using the transition function, f, the state matrices S1, S2 and S3, and mod-2 addition. This is the method adopted in the implementation of the new multi-level convolutional cryptosystem. It leads to a dynamic cryptosystem due to the switching between the states dictated by the transition functions.
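The matrix computation in (4.6) is easy to check numerically. The short Python sketch below (written for illustration here, not code from this thesis) performs the multiplication over GF(2):

import numpy as np

gm = np.array([[1, 1, 0, 0, 1, 0],
               [0, 1, 0, 0, 1, 1],
               [0, 0, 1, 1, 0, 0],
               [0, 0, 0, 1, 0, 0],
               [0, 0, 0, 0, 1, 1],
               [0, 0, 0, 0, 0, 1]])  # generator matrix from (4.4)

u = np.array([0, 1, 1, 1, 0, 1])     # input vector
v = (u @ gm) % 2                     # t(u) = u x gm, reduced mod 2
print(v)                             # -> [0 1 1 0 1 0], matching (4.6)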

4.3.2 Non-Linear dynamic convolutional transducer

Coupling the linear structure in figure 4.6 onto itself leads to a cascaded non-linear structure. Meta S-boxes (or meta substitutions) [33], [20] and a permutation set are used to link one linear transducer stage to the next in the cascade. Let the S-box, S, and the permutation set, Per, be chosen as shown in table 4.3.


(a) S-box, S:
Input:   00  01  10  11
Output:  00  11  10  01

(b) Permutation set, Per:
Input:   1  2
Output:  2  1

Table 4.3: (a) S-box, S and (b) permutation set, Per for the non-linear cascade

In addition to the S-box and the permutation set, each transducer stage has a transition function. Let f1 be the transition function of transducer stage 1 and f2 that of transducer stage 2. For simplicity, assume both transition functions are identical and equivalent to the entries for the linear convolutional transducer shown in table 4.2:

f1(1, {[0 0]}) = {2};  f1(1, {[0 1]}) = {3};  f1(1, {[1 0]}) = {3};  f1(1, {[1 1]}) = {1};
f1(2, {[0 0]}) = {3};  f1(2, {[0 1]}) = {1};  f1(2, {[1 0]}) = {2};  f1(2, {[1 1]}) = {3};
f1(3, {[0 0]}) = {3};  f1(3, {[0 1]}) = {2};  f1(3, {[1 0]}) = {1};  f1(3, {[1 1]}) = {2};

The same transitions apply to transducer stage 2 by changing f1 to f2, since they are identical in this case. The operation of f1(1, {[0 0]}) = {2}, for example, is as follows: the present state is '1' and, if the input data is [0 0], the transducer stage switches to state '2'. Assuming the initial state of each linear transducer stage to be the first state, S1, the initial structure of the cascade using the S-box and permutation set is as shown in figure 4.7. In figure 4.7, the numbers '1' and '2' at the outputs of the S-box indicate the interconnections due to the permutation set. It is seen in table 4.3 that the '1' output from the S-box is connected to the '2' input of the next stage. The same applies to the '2' output, which is connected to the '1' input of the next stage. A small sketch of this coupling step is given below.
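The coupling between stages can be stated compactly in Python. The snippet below is an illustration based on table 4.3 (not code from this thesis): the S-box maps a 2-bit pair, then the permutation swaps the two lines before they enter the next stage.

SBOX = {(0, 0): (0, 0), (0, 1): (1, 1), (1, 0): (1, 0), (1, 1): (0, 1)}

def couple(pair):
    s = SBOX[pair]
    return (s[1], s[0])   # Per: output 1 -> input 2, output 2 -> input 1

print(couple((0, 1)))     # S-box gives (1, 1); the swap leaves (1, 1)
print(couple((1, 0)))     # S-box gives (1, 0); the swap gives (0, 1)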


Figure 4.7: Initial structure of cascade used to encrypt the first two input bits

The output vector cannot be computed using the convolutional transducer function t(u) = u \times g_m, where u is the input vector and g_m the generator matrix, since the cryptosystem resulting from the coupling of stages is not linear. The output vector is therefore computed using the other method, which involves the transition function, f, the state matrices S1, S2 and S3, and mod-2 addition. This method is illustrated in detail since it forms the basis for the computation of the output vector of the new multi-level convolutional cryptosystem. Using the input vector u = [0 1 1 1 0 1] with u_1^1 = 0 and u_1^2 = 1, the outputs of the first transducer, o_1^1 and o_1^2, can be computed as follows:

o_1^1 = u_1^1 ⊕ M_2^1 = 0 ⊕ 0 = 0        (4.7a)
o_1^2 = u_1^1 ⊕ u_1^2 ⊕ M_2^1 ⊕ M_2^2 = 0 ⊕ 1 ⊕ 0 ⊕ 0 = 1        (4.7b)

The output of the S-box as a result of o_1^1 and o_1^2 is [1 1]. Using the permutation set on the outputs of the S-box, the input to the second transducer stage will be [1 1]. This input is used to compute the first two bits of the ciphertext, v_1^1 and v_1^2, as follows:

v_1^1 = 1 ⊕ M_2^1 = 1 ⊕ 0 = 1        (4.8a)
v_1^2 = 1 ⊕ 1 ⊕ M_2^1 ⊕ M_2^2 = 1 ⊕ 1 ⊕ 0 ⊕ 0 = 0        (4.8b)

After the first encryption process, the transition functions are used to compare the inputs to the different transducer stages with their present states in order to determine the next state for each transducer stage. Using the same transition function, the first transducer stage switches to the third state, S3, while the second transducer stage remains in the first state, S1. The new structure of the cascade used to encrypt the next set of input bits is shown in figure 4.8.


Figure 4.8: Structure of cascade used to encrypt the second two input bits

Using the input vector u = [0 1 1 1 0 1] with u_2^1 = 1 and u_2^2 = 1, the outputs of the first transducer, o_2^1 and o_2^2, can be computed (with the first stage now in state S3) as follows:

o_2^1 = u_2^1 ⊕ M_2^1 ⊕ M_2^2 = 1 ⊕ 0 ⊕ 0 = 1
o_2^2 = u_2^1 ⊕ u_2^2 ⊕ M_2^2 = 1 ⊕ 1 ⊕ 0 = 0        (4.9)

The output of the S-box as a result of o_2^1 and o_2^2 is [1 0]. Using the permutation set on the outputs of the S-box, the input to the second transducer stage will be [0 1]. This input is used to compute the second two bits of the ciphertext, v_2^1 and v_2^2, as follows:

v_2^1 = 0 ⊕ M_2^1 = 0 ⊕ 0 = 0        (4.10a)
v_2^2 = 0 ⊕ 1 ⊕ M_2^1 ⊕ M_2^2 = 0 ⊕ 1 ⊕ 0 ⊕ 0 = 1        (4.10b)

Similarly, after the second encryption process, the transition functions are used to compare the inputs to the different transducer stages with their present states in order to determine the next state for each transducer stage. Using the same transition function, the first transducer stage switches to the second state, S2, while the second transducer stage switches to the third state, S3. The new structure of the cascade used to encrypt the next set of input bits is shown in figure 4.9.


Using the input vector u = [0 1 1 1 0 1] with u_3^1 = 0 and u_3^2 = 1, the outputs of the first transducer, o_3^1 and o_3^2, can be computed as follows:

o_3^1 = u_3^1 ⊕ M_2^1 ⊕ M_2^2 = 0 ⊕ 0 ⊕ 1 = 1
o_3^2 = u_3^1 ⊕ u_3^2 ⊕ M_2^1 ⊕ M_2^2 = 0 ⊕ 1 ⊕ 0 ⊕ 1 = 0        (4.11)

The output of the S-box as a result of o_3^1 and o_3^2 is [1 0]. Using the permutation set on the outputs of the S-box, the input to the second transducer stage will be [0 1]. This input is used to compute the third two bits of the ciphertext, v_3^1 and v_3^2, as follows:

v_3^1 = 0 ⊕ M_2^1 ⊕ M_2^2 = 0 ⊕ 1 ⊕ 1 = 0
v_3^2 = 0 ⊕ 1 ⊕ M_2^2 = 0 ⊕ 1 ⊕ 1 = 0        (4.12)


Figure 4.9: Structure of cascade used to encrypt the third two input bits

Hence, using the non-linear (2, 2, 2) dynamic convolutional transducer with 3 states, the input vector u = [0 1 1 1 0 1] is encrypted to the output vector v = [1 0 0 1 0 0]. The encrypted message sent through the channel is decrypted at the receiving end.
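The full encryption walk-through above can be condensed into a short simulation. The Python sketch below is our own illustrative reconstruction, not code from this thesis: it assumes each stage computes its output pair as v = u·g_{m,0} ⊕ M_1·g_{m,1} ⊕ M_2·g_{m,2} over GF(2), with M_1 and M_2 holding the inputs from one and two steps back, which is consistent with (4.2) and with equations (4.7) to (4.12); the stages are coupled by the S-box and permutation of table 4.3 and switched by the transition function of table 4.2.

import numpy as np

# State matrices from (4.3): state -> (g_{m,0}, g_{m,1}, g_{m,2}).
STATES = {
    1: (np.array([[1, 1], [0, 1]]), np.zeros((2, 2), int), np.array([[1, 1], [0, 1]])),
    2: (np.array([[1, 1], [0, 1]]), np.zeros((2, 2), int), np.array([[1, 1], [1, 1]])),
    3: (np.array([[1, 1], [0, 1]]), np.zeros((2, 2), int), np.array([[1, 0], [1, 1]])),
}

# Transition function of table 4.2: (present state, input pair) -> next state.
F = {(1, (0, 0)): 2, (1, (0, 1)): 3, (1, (1, 0)): 3, (1, (1, 1)): 1,
     (2, (0, 0)): 3, (2, (0, 1)): 1, (2, (1, 0)): 2, (2, (1, 1)): 3,
     (3, (0, 0)): 3, (3, (0, 1)): 2, (3, (1, 0)): 1, (3, (1, 1)): 2}

# S-box of table 4.3(a); the permutation of table 4.3(b) swaps the two lines.
SBOX = {(0, 0): (0, 0), (0, 1): (1, 1), (1, 0): (1, 0), (1, 1): (0, 1)}

class Stage:
    """One (2, 2, 2) transducer stage with delay pairs M1 (one step back)
    and M2 (two steps back), starting in state S1 with cleared registers."""
    def __init__(self):
        self.state = 1
        self.M1 = np.zeros(2, int)
        self.M2 = np.zeros(2, int)

    def step(self, pair):
        u = np.array(pair)
        g0, g1, g2 = STATES[self.state]
        v = (u @ g0 + self.M1 @ g1 + self.M2 @ g2) % 2  # mod-2 output pair
        self.state = F[(self.state, pair)]              # switch per table 4.2
        self.M2, self.M1 = self.M1, u                   # shift the registers
        return tuple(int(x) for x in v)

def encrypt(bits):
    s1, s2 = Stage(), Stage()
    out = []
    for i in range(0, len(bits), 2):
        o = s1.step(tuple(bits[i:i + 2]))   # first linear stage
        o1, o2 = SBOX[o]                    # meta S-box
        out.extend(s2.step((o2, o1)))       # permutation, then second stage
    return out

print(encrypt([0, 1, 1, 1, 0, 1]))          # -> [1, 0, 0, 1, 0, 0]

Running the sketch reproduces the worked example exactly: the plaintext [0 1 1 1 0 1] encrypts to the ciphertext [1 0 0 1 0 0].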


CHAPTER 5

RESULTS AND DISCUSSIONS

Both throughput and security need to be quantified in order to establish a trade-off. This chapter presents the quantification of throughput and security when using convolutional codes.

5.1 Quantification of Throughput

The throughput of a wireless network can be adversely affected when public-key cryptography is implemented, through both modular exponentiation and the bit error rate. The data throughput, T, is given as [14]

T = R(1 - P_e)^N        (5.1)

where P_e is the bit error probability for QPSK, N is the number of bits in the block length, and R is a fixed transmission rate for the frames. For P_e