Chapter Three Channel Coding

Information and Coding Theory

Asst. Prof. Dr. Ali Kadhim Al-Janabi

3.1 Introduction

[Figure: block diagram of the communication system — the channel encoder maps the message block M to the codeword C; after transmission through the channel, the channel decoder maps the received codeword C′ to the recovered block M′, which is delivered to the sink decoder.]

 Any physical channel may add noise to the signal passed through it. Noise is anything that causes errors in the signal. For binary transmission, an error complements the bit value (0 to 1, or 1 to 0). That is, an error occurs when a transmitted bit is a 1 and its corresponding received bit becomes a 0, or a transmitted 0 becomes a 1.
 For a binary channel, the error is measured by the bit error rate (BER), denoted Pe. For example, Pe = 10⁻³ means that there is, on average, one bit error for every 1000 bits transmitted.
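As a concrete illustration of the BER idea, the following sketch (illustrative code, not from the text) measures the error rate by comparing a transmitted bit sequence with its received copy:

```python
# Illustrative sketch: measuring the bit error rate (BER) by comparing
# a transmitted bit sequence with its received, possibly corrupted, copy.

def bit_error_rate(sent, received):
    """Fraction of positions where the received bit differs from the sent bit."""
    assert len(sent) == len(received)
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

sent     = [1, 0, 1, 1, 0, 0, 1, 0]
received = [1, 0, 0, 1, 0, 0, 1, 1]    # two bits flipped by channel noise
print(bit_error_rate(sent, received))  # 2 errors / 8 bits = 0.25
```

In practice the BER is estimated over millions of bits; a rate of 10⁻³ corresponds to one flipped bit per 1000 transmitted.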

 Pe is a function of the transmission rate, the modulation type, and the signal-to-noise ratio (SNR).
 Pe is the sum of the conditional error probabilities: Pe = p(0/1) + p(1/0).
 The combined objective of the channel encoder at the transmitter and the channel decoder at the receiver is to minimize the effect of the channel noise. The major engineering problem is to design and implement the channel encoder/decoder pair such that: (1) information can be transmitted in a noisy environment as fast as possible; (2) a reliable reproduction of the information can be obtained at the output of the channel decoder; (3) the cost of implementing the encoder and decoder falls within acceptable limits.
 During a fixed time period, the source encoder sends a block of data M of size k bits. The channel encoder adds r check bits to M in a controlled way that permits the channel decoder to determine whether the received block is correct or was subjected to one or more errors during transmission through the channel. The output codeword of the channel encoder, C, is of size n = k + r bits. C is then modulated by the modulator and transmitted through the channel. At the receiving end, the demodulator delivers the received codeword C′ of size n bits. The channel decoder determines whether C′ is correct or contains errors. If C′ is correct, the decoder discards the check bits to obtain the recovered block M′ of size k bits and
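The encoder/decoder flow described above can be sketched with the simplest possible check: a single even-parity bit (k message bits, r = 1, n = k + 1). This is only a minimal illustration of the M → C → C′ → M′ structure, not a practical channel code:

```python
# Minimal sketch of the encoder/decoder structure: a single even-parity
# check bit (k message bits, r = 1, so n = k + 1). Illustrative only.

def encode(m):
    """Append one even-parity check bit: codeword C has n = k + 1 bits."""
    parity = sum(m) % 2
    return m + [parity]

def decode(c):
    """Return (message, ok): ok is False if the parity check fails."""
    ok = (sum(c) % 2 == 0)   # a valid codeword has even overall parity
    return c[:-1], ok        # discard the check bit to recover M'

m = [1, 0, 1, 1]             # k = 4 message bits
c = encode(m)                # n = 5 bits
print(c)                     # [1, 0, 1, 1, 1]

c_err = c.copy()
c_err[2] ^= 1                # the channel flips one bit
print(decode(c))             # ([1, 0, 1, 1], True)
print(decode(c_err))         # ([1, 0, 0, 1], False) -> error detected
```

Real channel codes use r > 1 check bits computed from structured combinations of the message bits, which is what allows error positions to be located, not just detected.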

Information and Coding Theory

Asst. Prof. Dr. Ali Kadhim Al-Janabi

delivers M′ to the sink decoder. If C′ contains one or more errors, the decoder processes C′ according to the error control coding mode in use.
 There are two error control coding modes:
1) Error Detection Coding (EDC): the EDC mode can detect that C′ was subjected to one or more errors, but it cannot correct them because EDC cannot specify the positions of the errors. Correction is performed by requesting a repetition of the transmission. These schemes are known as Automatic Repeat reQuest (ARQ) schemes, and they are used with duplex (two-way) communication systems.
2) Error Correction Coding (ECC): the ECC mode can detect that C′ was subjected to one or more errors and can also correct them, because the system can specify the positions of the errors. ECC is more complex than EDC, and it is used when duplex transmission is not possible, such as in TV broadcasting, email, SMS, etc. This method is known as Forward Error Correction (FEC).
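The ARQ idea behind the EDC mode can be sketched as a toy retransmission loop. All names here are illustrative (a single parity bit stands in for the detection code, and a binary symmetric channel models the noise); this is not a real protocol implementation:

```python
# Toy Automatic Repeat reQuest (ARQ) loop illustrating the EDC mode:
# the receiver detects errors (via a single parity bit) and keeps asking
# the transmitter to resend until a clean-looking codeword arrives.

import random

def parity_encode(m):
    return m + [sum(m) % 2]

def parity_ok(c):
    return sum(c) % 2 == 0

def noisy_channel(c, pe, rng):
    """Binary symmetric channel: each bit flips independently with prob. pe."""
    return [b ^ (rng.random() < pe) for b in c]

rng = random.Random(1)                 # fixed seed for reproducibility
m = [1, 0, 1, 1, 0, 0, 1, 0]
attempts = 0
while True:
    attempts += 1
    received = noisy_channel(parity_encode(m), pe=0.1, rng=rng)
    if parity_ok(received):            # no detected error -> accept the block
        break                          # (an even number of flips would slip by)
print("delivered after", attempts, "transmission(s)")
```

Note the limitation in the comment: a single parity bit misses any even number of flips, which is why practical ARQ systems use stronger detection codes such as CRCs.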

3.2 Types of Errors
Many factors can alter one or more information (message) bits during digital transmission and storage. Sources of errors include white noise (e.g., a hissing noise on the phone), impulse noise (e.g., a scratch on a CD/DVD), crosstalk (e.g., hearing another phone conversation), echo (e.g., hearing the talker's or listener's voice again), interference (e.g., unwanted signals due to frequency reuse in cellular systems), multipath fading (e.g., due to reflected and refracted paths in mobile systems), and thermal and shot noise (e.g., due to the transmitting and receiving equipment), to name just a few.
* There are two types of errors: random (single-bit) errors and burst (multiple-bit) errors.
 Random errors: the bit errors occur independently of each other, and usually, when one bit has changed, its neighboring bits remain correct (unchanged). Causes of random errors include shot noise in the transmitting and receiving equipment as well as thermal noise in a free-space communication channel. Transmission errors due to additive white Gaussian noise (AWGN) are generally referred to as random errors, though it is possible that the AWGN channel, due to imperfect filtering and timing operations at the receiver, introduces errors that are localized in a short interval.
 Burst errors: two or more bits in a message have changed, i.e., clusters of errors occur. Burst errors are not independent and tend to be spatially concentrated. The length of a burst is measured from the first corrupted bit to the last corrupted bit; some bits in between may not have been corrupted. Sources of burst errors include magnetic recording channels (tape or disk), impulse noise due to lightning and switching transients in radio channels, and mobile fading channels where the signal power can wax and wane in time.
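The two error types can be contrasted with a small simulation (illustrative code; the positions, lengths, and probabilities are arbitrary choices):

```python
# Sketch contrasting the two error types: independent (random) bit flips
# versus a single contiguous burst of flips.

import random

def random_errors(bits, pe, rng):
    """Flip each bit independently with probability pe (random errors)."""
    return [b ^ (rng.random() < pe) for b in bits]

def burst_errors(bits, start, length):
    """Flip a contiguous run of bits (a burst error of the given length)."""
    return [b ^ (start <= i < start + length) for i, b in enumerate(bits)]

bits = [0] * 20                                 # all-zero block: 1s mark flips
rng = random.Random(42)
scattered = random_errors(bits, 0.1, rng)       # flips land at independent spots
burst = burst_errors(bits, start=5, length=4)   # flips bits 5..8 only
print(scattered)
print(burst)    # [0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```

The contiguous run produced by `burst_errors` is what interleaving (mentioned below) spreads out so that a random-error code can handle it.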


A burst error is more common than a random error. It is important to highlight that most channel coding systems are designed to deal with random errors; however, they can be employed against burst errors as well by using additional steps such as an interleaving process.
* For a binary channel with BER = Pe, the probability of having i random errors in a block of b bits, P(i, b), is given by the binomial distribution:

P(i, b) = C(b, i) × Pe^i × (1 − Pe)^(b−i)

where C(b, i) = b! / (i! (b − i)!) = b(b − 1)…(b − i + 1) / i! is the number of ways of placing the i errors among the b bit positions.

Note that ∑ P(i, b) over i = 0, 1, …, b equals 1, and that for Pe ≪ 1 the factor (1 − Pe)^(b−i) ≈ 1, giving the approximation P(i, b) ≈ C(b, i) Pe^i.
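The binomial error-probability formula can be checked numerically (a sketch assuming a memoryless binary channel with BER Pe, which is the standard setting for this formula):

```python
# Numerical check of the binomial error-probability formula for a
# memoryless binary channel with bit error rate pe.

from math import comb

def p_errors(i, b, pe):
    """P(i, b): probability of exactly i bit errors in a block of b bits."""
    return comb(b, i) * pe**i * (1 - pe)**(b - i)

pe, b = 0.01, 10
total = sum(p_errors(i, b, pe) for i in range(b + 1))
print(total)   # the probabilities over all i sum to 1 (up to float rounding)
```

`math.comb(b, i)` computes C(b, i) directly, so the factorial formula never needs to be evaluated by hand.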

Example: find the probability of having 0, 1, and 2 random errors in a block of 10 bits transmitted through a BSC channel with p(0/1) = 0.005.
ANS. Pe = p(0/1) + p(1/0). For a BSC, p(0/1) = p(1/0), so Pe = 2 × p(0/1) = 2 × 0.005 = 0.01. Note that 0! = 1. Using the exact equation:

P(0, 10) = C(10, 0) (0.01)^0 (1 − 0.01)^10 = (0.99)^10 = 0.904
P(1, 10) = C(10, 1) (0.01)^1 (1 − 0.01)^9 = 0.1 × (0.99)^9 = 0.091
P(2, 10) = C(10, 2) (0.01)^2 (1 − 0.01)^8 = 45 × (0.01)^2 × (0.99)^8 = 0.00415
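These values can be re-checked with the exact formula (Pe = 0.01, b = 10):

```python
# Re-checking the worked example with the exact binomial formula.
from math import comb

def p_errors(i, b, pe):
    """Probability of exactly i bit errors in a block of b bits."""
    return comb(b, i) * pe**i * (1 - pe)**(b - i)

print(round(p_errors(0, 10, 0.01), 3))  # 0.904
print(round(p_errors(1, 10, 0.01), 3))  # 0.091
print(round(p_errors(2, 10, 0.01), 5))  # 0.00415
```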

* From the above example, the following can be concluded: 1) P(2, 10)