Codebook Organization to Enhance Maximum A Posteriori Detection of Progressive Transmission of Vector Quantized Images Over Noisy Channels

Ren-Yuh Wang, Eve A. Riskin, and Richard Ladner(1)

(1) This work was supported by NSF Grants No. MIP-9110508 and CCR-9108314, an NSF Young Investigator Award, and a Sloan Research Fellowship. Ren-Yuh Wang was with the Department of Electrical Engineering, Box 352500, University of Washington, Seattle, WA 98195-2500. He is now with FutureTel, Inc., 1092 E. Arques Avenue, Sunnyvale, CA 94086. Eve A. Riskin is with the Department of Electrical Engineering, University of Washington, and Richard Ladner is with the Department of Computer Science and Engineering, University of Washington. This work appeared in part in the 1993 Proceedings of the International Conference on Acoustics, Speech, and Signal Processing.
Abstract

We describe a new way to organize a full search vector quantization codebook so that images encoded with it can be sent progressively and have resilience to channel noise. The codebook organization guarantees that the most significant bits (MSB's) of the codeword index are most important to the overall image quality and are highly correlated. Simulations show that the effective channel error rates of the MSB's can be substantially lowered by implementing a Maximum A Posteriori (MAP) detector similar to one suggested by Phamdo and Farvardin [12]. The performance of the scheme is close to that of Pseudo-Gray [22] coding at lower bit error rates and outperforms it at higher error rates. No extra bits are used for channel error correction.
Contents

1 Introduction 1
2 Principal Component Partitioning 3
3 Distortion From Errors in Different Codeword Index Bits 6
4 Decreasing Average Distortion by Local Switching 8
5 MAP Detection for Images 15
5.1 Applying MAP detection to images coded with organized VQ codebooks 15
5.2 Prediction of Effective Error Rate 18
5.3 Fast MAP Decoding Scheme 21
6 Results 23
7 Conclusion And Future Work 27
8 Acknowledgements 27
1 Introduction

Vector quantization (VQ) [1], [5], [6] is a lossy compression technique that has been used extensively for image compression. Full search VQ leads to higher image quality than is given by the popular tree-structured VQ (TSVQ), but TSVQ is amenable to progressive transmission due to its built-in successive approximation character. In a progressive image transmission system [18], the decoder reconstructs increasingly better reproductions of the transmitted image as bits arrive, to allow for early recognition of the image. In [18], Tzou describes TSVQ's suitability for progressive transmission. Due to the successive approximation nature of TSVQ, the farther down the tree the vector is encoded, the better the reproduction. In [16], we organized a full search VQ so that images coded with it could be sent progressively as in TSVQ. Related work on ordered VQ codebooks includes that of Poggi and Cammorota on reducing the bit rate of standard and progressive transmission VQ [2, 14], that of Nasrabadi and Feng on lowering the bit rate in finite-state VQ [10], and our earlier work [15].

One problem of transmitting any VQ codeword index is that VQ is sensitive to channel errors. When VQ codeword indexes are transmitted over a noisy channel, channel errors can cause a significant increase in distortion if the VQ codebook is not organized for resilience to channel noise. There has been significant previous research on the transmission of VQ indexes over noisy channels. Phamdo, Farvardin and Moriya [13] designed a channel-matched TSVQ scheme that does not need extra bits for error correction. They developed an iterative algorithm to design a set of codebooks to minimize a modified distortion for different channel error rates.
Their experiments showed substantial improvements over ordinary TSVQ when the channel is very noisy, but because their scheme uses a different codebook for each channel error rate, if there is a channel mismatch, the encoder does not use the best codebook for the channel. Phamdo and Farvardin also implemented Maximum A Posteriori (MAP) detection of Markov sources, in which the correlation between successive VQ indexes is used to correct channel errors [12]. In other work, Sayood and Borkenhagen [17] used residual redundancy to correct channel errors in the design of a joint source-channel DPCM image coding system. Zeger and Gersho [22] developed Pseudo-Gray coding to reassign the indexes of a VQ codebook to reduce the distortion introduced by noisy channels. They rearrange the codebook so that codewords with similar indexes lie near each other in the input space. They iteratively switch the codewords when the switch lowers the distortion caused by channel noise. They obtain a reordered codebook with distortion at a local minimum. Cheng and Kingsbury designed robust VQs using a codebook organization technique based on minimum cost perfect matching [3], and Knagenhjelm used a Kohonen method for designing VQs robust to channel noise [8]. Hung and Meng [7] adaptively change the codebook at the receiver, depending on the channel bit error rate, without suffering performance degradation for noiseless transmission. They also developed a modified annealing method to generate VQ codebooks that improve channel error robustness and have little performance degradation for noiseless transmission.

In this paper we extend our methods for using an organized fixed rate full search codebook for progressive transmission [16] to protect against channel errors. First, a full search progressive transmission tree is constructed using the method of Principal Component Partitioning (PCP) [16]. The result is an assignment of codeword indexes which allows for progressive transmission and at the same time gives protection against channel noise. Second, the full search progressive transmission tree is modified by switching nodes in a way reminiscent of the switching that is done in Pseudo-Gray Coding [22]. The switching improves the ability to protect against channel noise without affecting the progressive transmission performance over noiseless channels. Finally, because the PCP method causes the most significant bits of the codeword indexes to become highly correlated, we use a MAP detection scheme to further decrease distortion for noisy channels.

The remainder of the paper is organized as follows. In Section 2, we describe Principal Component Partitioning (PCP), our codebook organization technique. A complete description of it can be found in [16]. In Section 3, we derive an equation which can be used to calculate the distortion introduced by errors in each bit position of the codeword indexes, and we describe our switching algorithm in Section 4. In Section 5, we apply the MAP detector to correct channel errors in progressive image transmission, predict the effective and critical channel error rates of our MAP scheme, and propose a fast decoding scheme for the MAP detector. In Section 6, we present the results of using our MAP detector on images transmitted over simulated noisy channels. Finally, we conclude in Section 7.
2 Principal Component Partitioning

We developed Principal Component Partitioning (PCP) in [16] to build a full search progressive transmission tree that organizes a full search VQ codebook for progressive transmission. The tree gives full search VQ the successive approximation character that is built into TSVQ. The progressive transmission tree is a balanced tree whose terminal nodes, or leaves, are labeled by VQ codewords and whose internal nodes are labeled by intermediate codewords derived from the leaf codewords. The tree is used to reassign the original codeword indexes to new indexes that can be used for progressive transmission. To build the tree, we initially find a hyperplane perpendicular to the first principal component of the training set used to design the full search codebook. The hyperplane partitions the codewords into two equal size sets. An iterative process is used to adjust the hyperplane [16], and the iteration terminates with two equal size sets of codewords which are used to build the next layer of the tree. The recursive application of PCP leads to a top-down construction of the full search progressive transmission tree. Wu and Zhang also used a method of principal components to build TSVQ's [21].
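As an illustration of the splitting step, the partition can be sketched as follows. This is our own sketch, not code from the paper: it projects the codewords onto the first principal component (found here by power iteration) and cuts the sorted order in half, omitting the iterative hyperplane adjustment of [16]. All function names are ours.

```python
def first_principal_component(vectors, iters=200):
    """Dominant eigenvector of the covariance matrix, by power iteration."""
    dim, n = len(vectors[0]), len(vectors)
    mean = [sum(v[d] for v in vectors) / n for d in range(dim)]
    centered = [[v[d] - mean[d] for d in range(dim)] for v in vectors]
    cov = [[sum(c[a] * c[b] for c in centered) / n for b in range(dim)]
           for a in range(dim)]
    w = [1.0] * dim
    for _ in range(iters):
        w = [sum(cov[a][b] * w[b] for b in range(dim)) for a in range(dim)]
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]
    return w, mean

def pcp_split(codewords):
    """One PCP step (sketch): order the codewords by their projection onto
    the first principal component and split into two equal halves."""
    w, mean = first_principal_component(codewords)
    proj = [sum((v[d] - mean[d]) * w[d] for d in range(len(w)))
            for v in codewords]
    order = sorted(range(len(codewords)), key=lambda i: proj[i])
    half = len(codewords) // 2
    return order[:half], order[half:]   # codeword index sets for the two subtrees

# Toy example: 8 two-dimensional codewords forming two clusters along x.
codebook = [(0.0, 0.0), (0.0, 1.0), (10.0, 0.0), (10.0, 1.0),
            (0.0, 2.0), (0.0, 3.0), (10.0, 2.0), (10.0, 3.0)]
left, right = pcp_split(codebook)
```

The sign of the principal component (and hence which half is "left") is arbitrary, but each recursive call yields two equal-size sets, as the PCP construction requires.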
Fig. 1 (a) is a 256 × 256 magnetic resonance (MR) image displayed at 8 bits per pixel (bpp). The image is coded to 2 bpp in Fig. 1 (b) with a size 256 codebook with vector dimension 4 designed on a training set of 20 MR images with the generalized Lloyd algorithm (GLA) [9]. We will use this MR image codebook throughout the paper.

Figure 1: (a) The original 256 × 256 magnetic resonance image at 8 bpp (left) and (b) the coded image at 2 bpp with a GLA VQ codebook of size 256 and vector dimension 4 (right).

Figure 2: The intermediate images from 1 (top left) to 8 bpv (bottom right) coded by a 4-dimensional codebook of size 256 organized by PCP.

Fig. 2 is an example of the quality of intermediate progressive transmission images resulting from organizing the codebook with PCP. These images range from 1 bit per vector (bpv) to 8 bpv. As a by-product of the PCP codebook organization, we have found that the PCP codebook indexes have natural resilience to channel noise. This is because the codewords whose indexes differ only in the least significant bits (LSB's) lie near each other in the full search progressive transmission tree. Thus, errors occurring in the LSB's of the codeword index produce little distortion. We shall examine this further in Section 3 and see that errors in differing bits of the codeword index have different effects on the final image. We shall see in Section 5 that they can be protected to a varying extent by a MAP detector.
3 Distortion From Errors in Different Codeword Index Bits

In this section, we derive an equation for calculating the distortions that are introduced by errors in each bit position of the codeword index. The equation is then used to calculate the distortion that is introduced by channel noise for our MR image codebook.

Given a size $N$ codebook $\{C_0, C_1, \ldots, C_{N-1}\}$ designed on a training set $T$, we define $T_i$ as the subset of $T$ whose nearest codeword is $C_i$, the centroid of $T_i$. We define $W_i$ as the number of vectors in $T_i$ and $D_i$ as the total distortion measured between $T_i$ and $C_i$. When there are channel errors such that every input in $T_i$ is mapped to $C_j$, we incur an increase in the total distortion of amount $(D_{j|i} - D_i)$, where

$$D_{j|i} = \sum_{X \in T_i} \|X - C_j\|^2 = \sum_{X \in T_i} \|(X - C_i) + (C_i - C_j)\|^2 = D_i + W_i \|C_i - C_j\|^2 \qquad (1)$$

and $\|C_i - C_j\|^2$ is the squared distance between $C_i$ and $C_j$. We define $\epsilon_i$ to be the probability that the $i$-th bit ($8$ = most significant bit (MSB), $1$ = least significant bit (LSB)) of the codeword index is in error. We use the binary symmetric channel (BSC) and, in the analysis that follows, we assume that at most one bit of a codeword index is in error at a time. The average distortion of a vector transmitted over a BSC channel is then

$$D_{av} = \frac{1}{W} \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} Q_{j|i} D_{j|i} \qquad (2)$$

where $W = \sum_{j=0}^{N-1} W_j$ is the size of the training set. The quantity $Q_{j|i}$ is the conditional probability that the index of $C_j$ is received given that the index of $C_i$ was transmitted. By Equation 1,

$$D_{av} = \frac{1}{W} \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} Q_{j|i} \left( D_i + W_i \|C_i - C_j\|^2 \right)$$

and

$$D_{av} = \frac{1}{W} \sum_{i=0}^{N-1} D_i \sum_{j=0}^{N-1} Q_{j|i} + \frac{1}{W} \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} Q_{j|i} W_i \|C_i - C_j\|^2 . \qquad (3)$$

Equation 3 is the same as Equation 7 in Farvardin [4]. Because $\sum_{j=0}^{N-1} Q_{j|i} = 1$, the first term of Equation 3 is exactly the average distortion when the codebook is used with a noiseless channel. We define this as $D = \sum_{i=0}^{N-1} D_i / W$. For $i_k$, which we define to differ from index $i$ only in the $k$-th bit,

$$Q_{i_k|i} = \frac{\epsilon_k}{1 - \epsilon_k}\, Q , \qquad (4)$$

where $Q = \prod_{j=1}^{\log_2 N} (1 - \epsilon_j)$ is the probability that a codeword index is transmitted with no error. Since we assumed the probability of more than one error per codeword to be highly unlikely, we get

$$\sum_{j=0,\, j \neq i}^{N-1} Q_{j|i} \simeq \sum_{k=1}^{\log_2 N} Q_{i_k|i} . \qquad (5)$$

Equation 3 then reduces to

$$D_{av} \simeq D + \frac{Q}{W} \sum_{k=1}^{\log_2 N} \frac{\epsilon_k}{1 - \epsilon_k} \sum_{i=0}^{N-1} W_i \|C_i - C_{i_k}\|^2 . \qquad (6)$$

We organized the MR codebook from Section 2 with the PCP algorithm on the same MR training set. For this codebook and data set, we calculate:

$$D_{av} \simeq D + A \left( 135 \frac{\epsilon_8}{1-\epsilon_8} + 57 \frac{\epsilon_7}{1-\epsilon_7} + 16 \frac{\epsilon_6}{1-\epsilon_6} + 5.6 \frac{\epsilon_5}{1-\epsilon_5} + 3.8 \frac{\epsilon_4}{1-\epsilon_4} + 3.1 \frac{\epsilon_3}{1-\epsilon_3} + 1.6 \frac{\epsilon_2}{1-\epsilon_2} + 1 \frac{\epsilon_1}{1-\epsilon_1} \right) \qquad (7)$$

where

$$A = \frac{Q}{W} \sum_{i=0}^{N-1} W_i \|C_i - C_{i_1}\|^2 \qquad (8)$$

is normalized against the bit sensitivity of the LSB. Let us define the bit sensitivity for the $k$-th LSB to be

$$S_k = \sum_{i=0}^{N-1} W_i \|C_i - C_{i_k}\|^2 . \qquad (9)$$

Equation 7 shows that for our data set, $S_8 = 135\, S_1$. This means that an error in the MSB will result in an increase in distortion that is 135 times larger than that caused by an error in the LSB (if the channel error rates of the MSB and LSB are the same). Fig. 3 shows the MR images at 2 bpp with 50% errors in each bit of the codeword index. We can clearly see that errors occurring in the MSB of the codeword index (top left) would make the images look much worse than errors in the LSB (bottom right). Thus, the MSB's of the codeword index must be carefully protected against channel noise. Generally speaking, using the PCP method to construct the full search progressive transmission tree will always have the effect that errors in the MSB's will cause more distortion than errors in the LSB's. Due to the PCP, the MSB separates the codewords into two equal size sets which are fairly well separated. Thus, an error in the MSB will cause the decoded codeword to be far from the one that was transmitted. Each succeeding split separates codewords that are closer together. Thus channel errors occurring in the LSB's are less detrimental to the image quality than are errors occurring in the MSB's.
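For concreteness, the bit sensitivities of Equation 9 can be computed directly from a codebook and training-set counts; the sketch below is our own illustration on a hypothetical toy codebook, using the fact that $i_k$ is just index $i$ with its $k$-th bit flipped.

```python
def bit_sensitivity(codebook, weights, k):
    """S_k = sum_i W_i * ||C_i - C_{i_k}||^2, where index i_k differs
    from i only in the k-th bit (k = 1 is the LSB)."""
    total = 0.0
    for i, (c, w) in enumerate(zip(codebook, weights)):
        ik = i ^ (1 << (k - 1))          # flip the k-th bit of the index
        d2 = sum((a - b) ** 2 for a, b in zip(c, codebook[ik]))
        total += w * d2
    return total

# Toy size-4 "codebook" of scalars indexed so that the MSB separates two
# well-separated clusters, as PCP tends to produce.
codebook = [(0.0,), (1.0,), (10.0,), (11.0,)]
weights = [1, 1, 1, 1]
s1 = bit_sensitivity(codebook, weights, 1)   # LSB flips: 0<->1, 10<->11
s2 = bit_sensitivity(codebook, weights, 2)   # MSB flips: 0<->10, 1<->11
```

On this toy codebook the MSB sensitivity is 100 times the LSB sensitivity, mirroring (on a smaller scale) the 135:1 ratio measured for the MR codebook.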
4 Decreasing Average Distortion by Local Switching

After a codebook is organized by PCP, the average distortion between the original image and the decoded image transmitted over noisy channels is expressed as Equation 6. The first
Figure 3: The coded images at 2 bpp with 50% errors occurring in the codeword index from the MSB (top left) to LSB (bottom right) (codebook size 256).
Figure 4: An example of a size 4 codebook organized by PCP and the rate 0 ($\{C_{0123}\}$) and rate 1 ($\{C_{01}, C_{23}\}$) intermediate codebooks.
term on the right hand side of Equation 6 is the average distortion for a noiseless channel and is fixed for a given codebook. In this section, we develop a method of switching nodes in the full search progressive transmission tree (related to Pseudo-Gray Coding [22]) which further decreases the second term of Equation 6, the distortion due to channel errors. The switching method does not affect the progressive transmission performance obtained from the PCP alone over noiseless channels.

We describe the switching method on a simple example of the size four codebook in Fig. 4. The four codewords $\{C_0, C_1, C_2, C_3\}$ are organized by the PCP, and a progressive transmission tree is fit to the codebook to obtain intermediate codewords $\{C_{01}, C_{23}\}$ and $\{C_{0123}\}$. Two possible forms of arranging the codebook are shown in Fig. 5(a) and Fig. 5(b). We wish to select the codebook arrangement with the smaller $D_{av}$ in Equation 6. That is, we compare the values of

$$(W_0 + W_2)\|C_0 - C_2\|^2 + (W_1 + W_3)\|C_1 - C_3\|^2$$

and

$$(W_0 + W_3)\|C_0 - C_3\|^2 + (W_1 + W_2)\|C_1 - C_2\|^2 .$$

The codebook arrangement in Fig. 5(b) is obtained by just switching $C_2$ and $C_3$ in Fig. 5(a); this clearly does not change the intermediate codeword $C_{23}$. Similarly, if we were to switch
Figure 5: Two possible codebook organizations by PCP.

codewords $C_0$ and $C_1$, the intermediate codeword $C_{01}$ would not change. Thus, the progressive transmission performance over noiseless channels is unchanged by this local switching (LS).

In the general case, there are $N$ codewords organized as leaves of a full search progressive transmission tree generated by the PCP. By a local switch, we mean the exchange of a left subtree with a right subtree as described in Fig. 6. A local switch has the effect that for some $i$ and $x$, where $1 \le i \le \log_2 N$ and $0 \le x < 2^{\log_2 N - i}$, and for all $y$, $0 \le y < 2^{i-1}$, the codewords with indexes $y + 2^i x$ and $y + 2^{i-1} + 2^i x$ are switched. We call $i$ the level of the local switch. The root is at level $\log_2 N$ and the leaves are at level 0. By switching nodes in the full search progressive transmission tree in this manner, it is possible to further reduce $D_{av}$. Indeed, there is an optimal set of local switches which minimizes the average distortion $D_{av}$, but finding the optimal set of local switches requires searching exponentially many possibilities. We are left to explore alternative heuristics that lead to suboptimal solutions.
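The index pairs exchanged by a local switch follow directly from the formula above; the short sketch below (function names are ours, not the paper's) enumerates them and applies the switch to an index assignment.

```python
def local_switch_pairs(i, x):
    """Index pairs exchanged by a local switch at level i, subtree x:
    (y + 2^i * x, y + 2^(i-1) + 2^i * x) for 0 <= y < 2^(i-1)."""
    return [(y + (1 << i) * x, y + (1 << (i - 1)) + (1 << i) * x)
            for y in range(1 << (i - 1))]

def apply_local_switch(indexes, i, x):
    """Relabel codeword indexes after the switch; 'indexes' is a sketch
    of the assignment mapping index -> codeword id."""
    out = list(indexes)
    for a, b in local_switch_pairs(i, x):
        out[a], out[b] = out[b], out[a]
    return out

# Level-2 switch in the right half of a tree with N = 8 leaves:
pairs = local_switch_pairs(2, 1)
```

For $N = 8$, $i = 2$, $x = 1$ the exchanged pairs are $(4, 6)$ and $(5, 7)$: the left and right subtrees of the second level-2 node swap places, and all indexes outside that node are untouched.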
Recall from Section 3 that the bit sensitivity of the $k$-th bit is defined to be

$$S_k = \sum_{i=0}^{N-1} W_i \|C_i - C_{i_k}\|^2 .$$

If we assume that the transmission error rate in all bit positions is identical, minimizing $D_{av}$ is equivalent to minimizing $\sum_{k=1}^{\log_2 N} S_k$, according to Equation 6 and Equation 9. Thus, the goal of our heuristics is to minimize this sum. As a practical matter, a local switch at level $\log_2 N$ does not change the value of $\sum_{k=1}^{\log_2 N} S_k$. More generally, a local switch at level $i$ does not change the value of $\sum_{k=i+1}^{\log_2 N} S_k$. Thus, to test whether a local switch at level $i$ decreases the average distortion $D_{av}$, it is only necessary to test whether it decreases the value of $\sum_{k=1}^{i} S_k$.
There are a number of heuristics to apply LS to decrease $D_{av}$. Any heuristic we describe starts with a full search progressive transmission tree produced by the PCP as the initial tree. Listed below are four possibilities.

Simple Greedy Heuristic: In this approach, simply search the full search progressive transmission tree for the local switch which produces the largest decrease in $D_{av}$. Apply this local switch and repeat the process until no local switch reduces $D_{av}$.

Simple Greedy Heuristic with Simulated Annealing: In this approach, apply random switches to the initial tree without regard to decreasing $D_{av}$ (this is the annealing step). With the result of this annealing step, apply the simple greedy heuristic. Repeat this algorithm a number of times with different starting trees and choose the tree with the smallest $D_{av}$. Our experience is that the initial tree is a good starting point for the simple greedy heuristic, so the annealing step should only be performed on a small percentage of the nodes. A good choice appears to be randomly switching about 5% of the nodes of the initial tree. The number of times the annealing should be repeated depends on the number of nodes in the initial tree. For our trees with 256 leaves, 100 annealing trials were adequate.
Figure 6: A local switch at a node switches its left and right subtrees.
Top Down Greedy Heuristic: In this approach, simply search the tree level by level, starting with the root, to find any local switch which decreases $D_{av}$. Repeat this process until no local switch decreases $D_{av}$. Starting the search at the top of the tree has the advantage that usually, local switches nearer the root cause larger decreases in $D_{av}$ than switches lower down the tree, because more codewords are affected by the switch.

Top Down Greedy Heuristic with Simulated Annealing: This approach is similar to the Simple Greedy Heuristic with Simulated Annealing method, except that the top down greedy heuristic is used in place of the simple greedy heuristic.

All these heuristics, and most of the others we have tried, give very similar results. The simple greedy and top down greedy heuristics take about the same number of iterations before convergence. Adding simulated annealing increases the execution time by a large factor with only small decreases in the average distortion. To obtain the best result, we used the Top Down Greedy Heuristic with Simulated Annealing method to further organize our MR codebook from Section 2. We calculate the new
average distortion over noisy channels by Equation 6:

$$D_{av} \simeq D + A \left( 123 \frac{\epsilon_8}{1-\epsilon_8} + 48 \frac{\epsilon_7}{1-\epsilon_7} + 13 \frac{\epsilon_6}{1-\epsilon_6} + 5.0 \frac{\epsilon_5}{1-\epsilon_5} + 2.9 \frac{\epsilon_4}{1-\epsilon_4} + 2.7 \frac{\epsilon_3}{1-\epsilon_3} + 1.6 \frac{\epsilon_2}{1-\epsilon_2} + 1 \frac{\epsilon_1}{1-\epsilon_1} \right) \qquad (10)$$

Comparing Equation 10 and Equation 7, we see that errors introduce less distortion for our codebook once it has been further organized by the LS algorithm. If the error rates are the same on each bit, that is, $\epsilon_i = \epsilon,\ i = 1, 2, \ldots, 8$, Equation 7 becomes

$$D_{av} \simeq D + 223\, A\, \frac{\epsilon}{1-\epsilon}$$

and Equation 10 becomes

$$D_{av} \simeq D + 197\, A\, \frac{\epsilon}{1-\epsilon} .$$
We can see that the LS algorithm decreases the increase in distortion caused by channel errors by 11.6% in this case.

As we defined in Section 3, the bit sensitivity $S_k$ is the amount of increased distortion caused by channel errors occurring in the $k$-th LSB of the codeword index. For our medical image codebook, the $S_k$'s for a codebook organized with the PCP and the LS algorithm are in Equation 10. As we can see, the $S_k$'s decrease monotonically, in a roughly exponential manner, from the MSB to the LSB. This shows that the MSB's, whose bit sensitivities are larger, are more important to the image quality than the LSB's. While this monotonic effect is of course dependent on the data set and cannot be proved in general for real images, we prove a precise exponential relationship among the bit sensitivities for a 2-dimensional lattice VQ [19].
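Under the equal-error-rate assumption, the channel terms of Equations 7 and 10 can be compared numerically. The sketch below is ours, using only the normalized coefficients stated in the two equations; it reproduces the 11.6% reduction.

```python
def distortion_increase(sensitivities, eps):
    """Channel-error term of D_av in units of A (Equations 7 and 10),
    with the same error rate eps on every bit position."""
    return sum(s * eps / (1 - eps) for s in sensitivities)

pcp_only = [1, 1.6, 3.1, 3.8, 5.6, 16, 57, 135]   # Eq. 7 coefficients, LSB..MSB
pcp_ls   = [1, 1.6, 2.7, 2.9, 5.0, 13, 48, 123]   # Eq. 10 coefficients, LSB..MSB

eps = 0.01
reduction = 1 - distortion_increase(pcp_ls, eps) / distortion_increase(pcp_only, eps)
```

Because the common factor $\epsilon/(1-\epsilon)$ cancels in the ratio, the reduction is the same for any $\epsilon$: $1 - 197.2/223.1 \approx 11.6\%$.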
5 MAP Detection for Images

In Section 3, we showed that the MSB's of codeword indexes from a codebook organized by the PCP are much more important to the image quality than the LSB's. Another consequence of the full search progressive transmission tree is that it results in a high amount of correlation between the MSB's of the codeword indexes. This is because images vary slowly and neighboring input vectors are likely to be coded from the same region of the organized codebook. This is the same observation used by Neuhoff and Moayeri in their interblock noiseless coding for TSVQ [11]. Here we illustrate the high correlation between MSB's of the codeword indexes from our organized VQ. Fig. 7 is a display of the MSB (top left) to the LSB (bottom right) of the codeword indexes for a magnetic resonance brain image coded with our organized VQ (0 = black and 1 = white). The MSB is obviously highly correlated and is even a coarse approximation to the original image. An image coded with an unorganized codebook is not likely to display such high correlation. We see from Fig. 7 that the LSB is much more random but fortunately, this bit is not important to the image quality. In this section, we use this correlation in the MSB's of the codeword index to apply MAP detection to images with channel errors. In addition, we predict the effective error rate, describe the fast MAP decoding scheme, and predict the critical channel error rate based on training data.
5.1 Applying MAP detection to images coded with organized VQ codebooks

We send one bit plane of the codeword index at a time, from the MSB to the LSB. Because the MSB's of the codeword indexes of an image are highly correlated, we can use the redundancy between them to correct channel errors for progressive transmission over noisy communication channels. To implement this, we use a variation of Phamdo and Farvardin's
Figure 7: MSB's (top left) to LSB's (bottom right) of image coded with organized full search codebook of size 256 (0 = black, 1 = white). Notice that the MSB's are much more highly correlated than the LSB's.
Figure 8: $T_i$ and $R_i$ are the transmitted and received bits over a BSC.
MAP detector [12]. The received bit plane is simply scanned with a sliding 3 × 3 bit block, and the middle bit is either changed or unchanged based on the channel bit error rate (BER) and conditional probabilities, calculated from training data, of the received 3 × 3 bit block.

In Fig. 8, $T_0$ and its eight neighboring bits $T_i$, $1 \le i \le 8$, are a block of bits from one bit plane of the codeword index. They are transmitted over a noisy BSC, and the corresponding received bit of $T_i$ is the random variable $R_i$, which may be flipped by the channel noise. It is also helpful to recognize that $T_i$ is also a random variable, namely, that the probability of the bit $T_i$, $0 \le i \le 8$, is statistically determined by the input images. Let $r_i$, $0 \le i \le 8$, be a fixed set of 9 received bits. Given knowledge of the error rate $\epsilon$ and the training set, if

$$\Pr(T_0 = \bar{r}_0 \mid R_i = r_i,\ 0 \le i \le 8) > \Pr(T_0 = r_0 \mid R_i = r_i,\ 0 \le i \le 8), \qquad (11)$$

the received bit $r_0$ is decoded to $\bar{r}_0$; otherwise it is unchanged. In other words, we change the central bit of the 3 × 3 block if the probability that the transmitted central bit is the same as the received central bit, conditioned on the received 3 × 3 block, is less than 0.5.
The effect of the error rate $\epsilon$ on the probabilities in Equation 11 can be factored out so that the probabilities (which we estimate as relative frequencies) depend only on the training set. The resulting inequality has many terms but, if the error rate $\epsilon$ is much less than 1, the following inequality (where the symbol $\wedge$ represents a logical AND) can be used in its place with negligible loss in performance:

$$\Pr(T_i = r_i,\ 0 \le i \le 8)\,(1-\epsilon)^9 + \sum_{j=1}^{8} \Pr(T_j = \bar{r}_j \wedge T_i = r_i,\ i \neq j,\ 0 \le i \le 8)\,\epsilon (1-\epsilon)^8$$
$$\ge \Pr(T_0 = \bar{r}_0 \wedge T_i = r_i,\ 1 \le i \le 8)\,\epsilon (1-\epsilon)^8 + \sum_{j=1}^{8} \Pr(T_0 = \bar{r}_0 \wedge T_j = \bar{r}_j \wedge T_i = r_i,\ i \neq j,\ 1 \le i \le 8)\,\epsilon^2 (1-\epsilon)^7 \qquad (12)$$

The probabilities in Equation 12 depend only on the training data. As examples, $\Pr(T_i = r_i,\ 0 \le i \le 8)$ is simply the percentage of 3 × 3 blocks from the training set which are identical to the block which contains exactly $r_0, r_1, \ldots, r_8$, and $\Pr(T_j = \bar{r}_j \wedge T_i = r_i,\ i \neq j,\ 0 \le i \le 8)$ is the percentage of 3 × 3 blocks from the training set which are identical to the block which contains $r_0, \ldots, r_{j-1}, \bar{r}_j, r_{j+1}, \ldots, r_8$. The first term on the left side in Equation 12 is the probability of transmitting $T_i$, $0 \le i \le 8$, without channel errors. The second term on the left side is for when $T_0$ is transmitted correctly but there is one error in $T_i$, $1 \le i \le 8$. In the first term on the right side, $T_0$ is transmitted incorrectly but all other bits are transmitted correctly. Finally, the second term on the right side is due to transmitting $T_0$ incorrectly along with one additional error in $T_i$, $1 \le i \le 8$. Our simulation results show that Equation 12 is a good approximate decision criterion. Adding more terms to allow for more channel errors requires higher decoding complexity and gave little performance improvement.
5.2 Prediction of Effective Error Rate

The MAP decoder changes or does not change the received data based on Equation 12. Thus, it not only corrects errors but may also introduce errors. As a result, the effective error rate of our MAP scheme consists of two parts. One is due to the MAP decoder not correcting an error caused by channel noise, and the other is due to the MAP decoder introducing errors. When we design the MAP detector, we can calculate these two probabilities based on the training set and predict the effective error rate in advance. Our results show that the predicted effective error rates are very close to the simulated values.

For a bit $T_0$ transmitted over a BSC with channel error rate $\epsilon_{channel}$ and decoded with our MAP scheme in Fig. 9, the received bit is $R_0$, and $R_0$ is decoded to $D_0$ by the MAP scheme. Because the effective error rate $\epsilon_{effective}$ is comprised of two terms, which depend only on $\epsilon_{channel}$ and the training set, it can be calculated ahead of time as follows:

$$\epsilon_{effective} = \Pr(D_0 \neq T_0) = \Pr(R_0 = T_0 \wedge D_0 \neq R_0) + \Pr(R_0 \neq T_0 \wedge D_0 = R_0)$$
$$= (1 - \epsilon_{channel})\,\Pr(D_0 \neq R_0 \mid R_0 = T_0) + \epsilon_{channel}\,\Pr(D_0 = R_0 \mid R_0 \neq T_0) . \qquad (13)$$

Figure 9: Bit $T_0$ is transmitted over noisy channels and received as $R_0$ and then decoded as $D_0$.
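Equation 13 is straightforward to evaluate once the two conditional probabilities have been estimated from training data. The sketch below is ours, and the numeric values in the example are hypothetical illustrations, not measurements from the paper.

```python
def effective_error_rate(eps_channel, p_introduce, p_miss):
    """Equation 13: eps_effective combines errors the MAP decoder introduces
    on clean bits, p_introduce = Pr(D0 != R0 | R0 = T0), and channel errors
    it fails to correct, p_miss = Pr(D0 = R0 | R0 != T0). Both conditionals
    are estimated from the training set."""
    return (1 - eps_channel) * p_introduce + eps_channel * p_miss

# Hypothetical illustration: the decoder rarely corrupts clean bits but
# leaves 40% of channel errors uncorrected.
rate = effective_error_rate(0.01, p_introduce=1e-4, p_miss=0.4)
```

As a sanity check, a decoder that never changes any bit ($p_{introduce} = 0$, $p_{miss} = 1$) has an effective error rate equal to the channel error rate.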
For a fixed 3 × 3 vector $r_0, r_1, \ldots, r_8$ of bits, we define the integer

$$r = \sum_{i=0}^{8} r_i 2^i$$

to simply be the natural interpretation of the 3 × 3 binary vector as an integer. We also define $r(j_1, j_2, \ldots, j_n)$ to differ from $r$ in the $j_1$-th, $j_2$-th, $\ldots$, and $j_n$-th bits. Finally, we define the indicator function

$$I\{r\} = \begin{cases} 1, & \text{if } r_0 \text{ is changed by the MAP detector due to } r \text{ being received} \\ 0, & \text{otherwise.} \end{cases}$$

For $r = 0, 1, \ldots, 511$, define $\Pr[r] = \Pr(T_i = r_i,\ 0 \le i \le 8)$ to be the a priori probability of transmitting the 3 × 3 binary block $r$. By considering the probabilities of all 512 patterns and their probabilities of being changed by the MAP detector, we get the following equation:

$$\Pr(D_0 \neq R_0 \mid R_0 = T_0) = \Pr(R_0 \text{ is changed by MAP} \mid R_0 = T_0)$$
[Figure 10 plot: the ratio of decoded error rate to channel error rate versus channel error rate, for the first (MSB) through fourth bits of the codeword index; solid lines are the simulated values.]
Figure 10: Simulated and estimated effective error rates over different channel error rates for Magnetic Resonance training data.

$$= \sum_{r=0}^{511} \Pr[r] \Big( q^8 I\{r\} + \epsilon\, q^7 \sum_{j_1=1}^{8} I\{r(j_1)\} + \cdots$$