Combined Source/Channel Decoding: When Minimizing Bit Error Rate is Suboptimal

Tim Fingscheidt, Thomas Hindelang, Richard V. Cox, Nambi Seshadri
AT&T Labs – Research, Florham Park, NJ 07932, USA
[email protected]

Abstract:
Convolutional coding as a means of error correction is widely used in mobile communications, where the effects of channel noise have to be taken into account. Channel decoders usually aim at minimizing the frame, symbol, or residual bit error rate of the source bits. In this contribution we show how the residual bit error rate can be reduced by using a priori knowledge about the source statistics within the channel decoding process. This usually helps the source decoder to perform better. If the source decoder also uses a priori knowledge, however, we show that the best overall system performance is reached at a higher residual bit error rate after channel decoding. This gives new insight into optimization criteria for combined source/channel decoding.

I. INTRODUCTION

Real-world source coding usually leaves a certain amount of residual redundancy within the bit stream, mostly due to constraints on coding delay and complexity. In recent years, interest has grown in decoding techniques that are able to make use of this residual source redundancy. Bahl et al. laid the foundations of symbol-by-symbol channel decoding that is able to exploit a priori knowledge about the source bits [1]. Hagenauer investigated sequence-estimating channel decoding algorithms and proposed the source-controlled channel decoding technique [2]. Further work on channel decoding that exploits source statistics to reduce the residual path, symbol, or bit error rate is reported in [3–5]. Alternatively, the source redundancy can of course be used in the source decoding process [6–14], often referred to as soft decision or softbit decoding. In contrast to channel decoding, these techniques have the advantage of easy adaptation to arbitrary error criteria, e.g. the signal-to-noise ratio rather than the bit error rate. In this paper we investigate combined source/channel decoding in the sense that both the channel decoder and the source decoder are able to exploit source statistics.
Furthermore, the channel decoder yields soft decisions (softbits) for use in the softbit source decoder. In section II we briefly review the softbit decoding technique [11, 12], whereas in section III we discuss source-controlled channel decoding according to [5, 15], both presented in a uniform manner. Simulation results will be presented answering the question whether a priori knowledge can be used twice, in the channel decoder

(Author was on leave from the Institute for Communications Engineering (LNT), Munich University of Technology (TUM), Germany.)
as well as in the source decoder. Moreover, indications will be given that in a combined source/channel decoding scheme, the minimization of the residual bit error rate after channel decoding is not always optimum.

II. SOURCE DECODING USING SOURCE A PRIORI KNOWLEDGE

Let us assume a speech or image signal $\tilde{s}$ being source coded. The source coder usually generates so-called codec parameters, e.g. pitch, spectral coefficients, etc. In the following we focus on a single codec parameter $\tilde{v}_n \in \mathbb{R}$ at time instant $n = 0$, as depicted in Fig. 1. The parameter is scalar quantized, $Q[\tilde{v}_0] = v_0$, and coded by the bit combination $x_0 = \{x_0(0), x_0(1), \ldots, x_0(M-1)\}$ consisting of $M$ bits $x_0(m) \in \{0, 1\}$, $m = 0, 1, \ldots, M-1$. When all parameters belonging to a frame are coded, their bits are multiplexed, building a frame of $K$ source bits. This frame is then subject to convolutional encoding, resulting in a frame of channel bits $y_0$. In the following we assume the regarded parameter $\tilde{v}_n$ is generated only once per frame; thus the parameter time instant $n = 0$ denotes the current frame, $n = -1$ the previous frame, etc. After transmission over an equivalent channel comprising modulation, the physical channel, and soft demodulation, the received frame $\hat{y}_0$ is fed into a channel decoder. To allow combined source/channel decoding, the channel decoder is required to yield a soft output (softbits) [1, 11], e.g. in the form of $M$ decoder probabilities $P^d(x_0(m))$ giving a likelihood for every source bit $x_0(m)$. These probabilities are the interface to the source decoder, which uses them in the form
$$P^d(x_0) = \prod_{m=0}^{M-1} P^d(x_0(m)). \qquad (1)$$
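As a concrete illustration, eq. (1) simply multiplies the per-bit decoder probabilities to obtain a parameter-level probability. A minimal sketch (data values hypothetical):

```python
import math

def pd_parameter(x0, pd_bit):
    """Eq. (1): Pd(x0) as the product of per-bit decoder probabilities.
    x0: tuple of M bits; pd_bit[m][b] = Pd(x0(m) = b)."""
    return math.prod(pd_bit[m][b] for m, b in enumerate(x0))
```

For example, with `pd_bit = [[0.9, 0.1], [0.2, 0.8]]` the combination `(0, 1)` gets probability 0.9 * 0.8 = 0.72; since each bit's probabilities sum to one, the parameter-level probabilities sum to one over all combinations.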
Different meanings of $P^d(x_0)$, depending on the type of a priori knowledge used, will be discussed in section III. The source decoding of a parameter can be done in several ways [11]. Applying
$$\tilde{v}_0^{(est)} = v(x_0') \quad \text{with} \quad x_0' = \arg\max_{x_0} P^d(x_0) \qquad \text{SD/HB} \quad (2)$$

with $\tilde{v}_0^{(est)}$ being the decoded parameter at frame $n = 0$ is equivalent to hard-output channel decoding and source decoding by table lookup. The table lookup is symbolized by $v(x_0')$. We call this straightforward technique source decoding by hardbit decoding, abbreviated SD/HB.
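SD/HB, eq. (2), can be sketched as follows, under the assumption that the quantizer reproduction values $v(x_0)$ are stored in a table indexed by the bit combination; the 2-bit table and soft outputs below are hypothetical:

```python
import math
from itertools import product

def sd_hb(pd_bit, table):
    """SD/HB, eq. (2): pick the bit combination x0' maximizing Pd(x0),
    then decode by table lookup v(x0')."""
    M = len(pd_bit)
    x0_best = max(product((0, 1), repeat=M),
                  key=lambda x0: math.prod(pd_bit[m][b] for m, b in enumerate(x0)))
    return table[x0_best]

# Hypothetical 2-bit quantizer table (natural binary mapping of the index)
table = {(0, 0): -1.0, (0, 1): -0.3, (1, 0): 0.3, (1, 1): 1.0}
pd_bit = [[0.9, 0.1], [0.3, 0.7]]   # soft outputs for the two bits
v_hat = sd_hb(pd_bit, table)        # most likely combination is (0, 1)
```

Because $P^d(x_0)$ factorizes over the bits, the argmax here reduces to a bit-by-bit hard decision, which is exactly why SD/HB is equivalent to hard-output channel decoding plus table lookup.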
Fig. 1: Transmission system

Often a squared error measure like the parameter SNR, defined as

$$\text{SNR} = 10 \log_{10} \frac{E\{\tilde{v}^2\}}{E\{(\tilde{v}^{(est)} - \tilde{v})^2\}}, \qquad (3)$$

describes the transmission quality reasonably well. Thus a mean-square estimation, called source decoding by softbit decoding (SD/SB), according to

$$\tilde{v}_0^{(est)} = \sum_{x_0} v(x_0) \cdot P^d(x_0) \qquad \text{SD/SB} \quad (4)$$

is appropriate.
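Compared with the hard argmax of eq. (2), the SD/SB estimate of eq. (4) is a $P^d$-weighted mean over all reproduction values. A sketch with the same kind of hypothetical table:

```python
import math
from itertools import product

def sd_sb(pd_bit, table):
    """SD/SB, eq. (4): mean-square estimate, summing v(x0) * Pd(x0)
    over all 2^M bit combinations."""
    M = len(pd_bit)
    return sum(table[x0] * math.prod(pd_bit[m][b] for m, b in enumerate(x0))
               for x0 in product((0, 1), repeat=M))

# Hypothetical 2-bit reproduction table v(x0)
table = {(0, 0): -1.0, (0, 1): -0.3, (1, 0): 0.3, (1, 1): 1.0}
```

With completely uninformative soft outputs (all bit probabilities 0.5) the estimate collapses to the mean of the table values, which is the best guess in the mean-square sense; with perfectly reliable bits it reduces to the SD/HB table lookup.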
Modelling the quantized parameter as a Markov chain, the parameter a priori knowledge $P(x_n)$ (0th-order Markov, modelling the parameter distribution) or alternatively $P(x_n | x_{n-1})$ (1st-order Markov, modelling the parameter correlation over time) can be exploited. Assuming the Markov chain to be homogeneous, the a priori knowledge can be written as $P(x_0) = P(x_n)$ and $P(x_0 | x_{-1}) = P(x_n | x_{n-1})$. It can be measured by applying a large database to the source encoder and counting how often (pairs of) different levels of the parameter quantizer output occur. The a priori knowledge then has to be stored in the decoder ROM. Using 0th-order a priori knowledge (SD/AK0), eq. (4) becomes

$$\tilde{v}_0^{(est)} = \sum_{x_0} v(x_0) \cdot C_0 \, P^d(x_0) \, P(x_0) \qquad \text{SD/AK0} \quad (5)$$
with the constant $C_0$ normalizing the sum over the product of probabilities to one. Finally, the usage of 1st-order a priori knowledge (SD/AK1) in eq. (5) leads to

$$\tilde{v}_0^{(est)} = \sum_{x_0} v(x_0) \cdot C_0 \, P^d(x_0) \, P^p(x_0) \qquad \text{SD/AK1} \quad (6)$$

with the constant $C_0$ as used in eq. (5).
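Eqs. (5) and (6) differ from eq. (4) only in the additional weighting by a prior, the measured distribution $P(x_0)$ for SD/AK0 or the prediction probabilities $P^p(x_0)$ for SD/AK1, followed by the normalization $C_0$. One sketch covers both (all numbers hypothetical):

```python
import math
from itertools import product

def sd_ak(pd_bit, prior, table):
    """Eqs. (5)/(6): softbit estimate with a priori weighting; `prior` is
    P(x0) for SD/AK0 or the prediction probabilities Pp(x0) for SD/AK1."""
    M = len(pd_bit)
    w = {x0: prior[x0] * math.prod(pd_bit[m][b] for m, b in enumerate(x0))
         for x0 in product((0, 1), repeat=M)}
    c0 = 1.0 / sum(w.values())          # normalization constant C0
    return sum(table[x0] * c0 * p for x0, p in w.items())
```

With a 1-bit parameter, uninformative soft outputs `[[0.5, 0.5]]`, prior `{(0,): 0.8, (1,): 0.2}` and table `{(0,): -1.0, (1,): 1.0}`, the estimate is -0.8 + 0.2 = -0.6: the prior pulls the estimate toward the more probable level even when the channel tells us nothing.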
A recursive computation yields the prediction probabilities¹

$$P^p(x_0) = \sum_{x_{-1}} P(x_0 | x_{-1}) \cdot C_{-1} \, P^d(x_{-1}) \, P^p(x_{-1}) \qquad (7)$$

with $C_{-1}$ being the normalization constant of the previous frame. If frame $n = 0$ is the first transmitted frame ever, the recursion (7) can be initialized with $C_{-1} \, P^d(x_{-1}) \, P^p(x_{-1}) = P(x_{-1})$.

Fig. 2: Channel decoding using a priori knowledge
¹ Here and in the following, the term prediction probabilities denotes the probability distribution of $x_0$ or $x_0(m)$ as predicted by bit combinations in the previous frame and/or neighbouring bits in the same frame. They exploit the a priori knowledge, which in our terminology exclusively relates to the decoder ROM tables describing the statistics of the parameter.

III. CHANNEL DECODING USING SOURCE A PRIORI KNOWLEDGE

In Fig. 2 possible realizations of the channel decoder block of Fig. 1 are depicted. They differ only in the type of a priori knowledge used about the regarded parameter, and thus in the interpretation of $P^d(x_0)$. For simplicity, some (de-)multiplexing schemes which are necessary in the channel decoder itself are not shown. Furthermore, only probabilities related to the regarded parameter $x_0$ are depicted in Fig. 2. If both switches are in the off position, conventional channel decoding is carried out, yielding (besides information on the other source bits) the decoder probabilities $P^d(x_0(m)) = P(x_0(m) | \hat{y}_0)$, $m = 0, 1, \ldots, M-1$, assuming $x_0(m) = 0$ and $x_0(m) = 1$ to be equiprobable. Because this assumption does not rely on any a priori knowledge about the source, we call this type of channel decoding CD/noAK.

A. Using Interframe Correlation
If the lower branch is switched on, correlation in time (interframe correlation) is exploited. The main channel decoder delivers probabilities $P^d(x_0(m)) \approx P(x_0(m) | \hat{Y}_0)$ with the term $\hat{Y}_0 = \hat{y}_0, \hat{y}_{-1}, \ldots$ denoting the whole history of received frames.
Based on $P^d(x_{-1}) \approx P(x_{-1} | \hat{Y}_{-1})$ as generated after decoding of the previous frame, the prediction probabilities [15] can be computed as

$$P^p_{-1}(x_0(m) | \hat{Y}_{-1}) = \sum_{x_{-1}} P(x_0(m) | x_{-1}) \cdot P(x_{-1} | \hat{Y}_{-1}), \quad x_0(m) \in \{0, 1\}. \qquad \text{CD/AK1} \quad (8)$$
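The CD/AK1 prediction step of eq. (8) can be sketched as follows; the transition table and previous-frame posterior below are hypothetical, and the parameter-level recursion of eq. (7) has the same sum-over-previous-combinations structure:

```python
def cd_ak1_prediction(bit_trans, post_prev):
    """Eq. (8): Pp-1(x0(m)|Y-1) = sum over x-1 of P(x0(m)|x-1) * P(x-1|Y-1).
    bit_trans[x_prev] = [P(x0(m)=0 | x-1=x_prev), P(x0(m)=1 | x-1=x_prev)];
    post_prev[x_prev] = P(x-1=x_prev | Y-1)."""
    pp = [0.0, 0.0]
    for x_prev, post in post_prev.items():
        for b in (0, 1):
            pp[b] += bit_trans[x_prev][b] * post
    return pp

# Hypothetical example: two previous-frame combinations with posterior 0.75/0.25
pp = cd_ak1_prediction({(0,): [0.9, 0.1], (1,): [0.2, 0.8]},
                       {(0,): 0.75, (1,): 0.25})
```

Since each row of the transition table and the posterior each sum to one, the resulting prediction probabilities are again a proper distribution over the bit value.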
They are taken into account at the respective trellis transitions (eqs. (4) ff. in [1]) when the metric is generated within the main channel decoder for the current frame. The a priori knowledge in eq. (8) can easily be derived from $P(x_0 | x_{-1}) = P(x_{0 \setminus m}, x_0(m) | x_{-1})$, the a priori knowledge used in SD/AK1, eq. (6), by

$$P(x_0(m) | x_{-1}) = \sum_{x_{0 \setminus m}} P(x_{0 \setminus m}, x_0(m) | x_{-1}) \qquad (9)$$

with $x_{0 \setminus m} = \{x_0(0), \ldots, x_0(m-1), x_0(m+1), \ldots, x_0(M-1)\}$. Note that while in the SD/AK1 scheme the parameter a priori knowledge is of size $2^M \times 2^M$, in CD/AK1 the dimensions are reduced to $M \cdot 2^M$, denoting a mixed form of bit and parameter a priori knowledge.
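The marginalization of eq. (9) can be sketched as follows (the M = 2 parameter-level table is hypothetical):

```python
def bit_prior_from_parameter_prior(cond, m):
    """Eq. (9): derive P(x0(m)|x-1) from P(x0|x-1) by summing over the
    other M-1 bits. cond[x_prev][x0] = P(x0 | x-1 = x_prev)."""
    out = {}
    for x_prev, dist in cond.items():
        p = [0.0, 0.0]
        for x0, prob in dist.items():
            p[x0[m]] += prob   # accumulate on the value of bit m
        out[x_prev] = p
    return out

# Hypothetical parameter-level a priori table for M = 2, one previous combination
cond = {(0, 0): {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.3, (1, 1): 0.2}}
bit0 = bit_prior_from_parameter_prior(cond, 0)   # prior of the first bit
```

This is exactly the size reduction mentioned above: one list of two numbers per bit and previous combination, instead of a full table over all current combinations.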
B. Using Intraframe Correlation

If the upper branch in Fig. 2 is switched on and the lower one is switched off, correlations among the bits $x_0$ are exploited in terms of a priori knowledge about a bit $x_0(m)$ given the $M-1$ other bits $x_{0 \setminus m}$ (intraframe correlation). In this case the main channel decoder delivers probabilities $P^d(x_0(m)) \approx P(x_0(m) | \hat{y}_0)$ dependent only on the currently received frame $\hat{y}_0$. At first a preliminary decoding step is performed, yielding an estimate $P(x_0(m) | \hat{y}_0)$ as given by the CD/noAK technique, assuming $x_0(m) = 0$ and $x_0(m) = 1$ to be equiprobable, i.e. no a priori knowledge is used here. Multiplication of the soft-output bit probabilities $P(x_0(m) | \hat{y}_0)$, $m = 0, 1, \ldots, M-1$, according to eq. (1) approximates $P(x_0 | \hat{y}_0)$ better if bit errors are independent. This can be achieved by separating the bits $x_0(m)$, $m = 0, 1, \ldots, M-1$, from each other by at least 5 times the constraint length of the code; the (de)multiplexer in Fig. 1 should take care of that. The next step is then to compute

$$P(x_{0 \setminus m} | \hat{y}_0) = \sum_{x_0(m)=0}^{1} P(x_0 | \hat{y}_0). \qquad (10)$$

Finally, the prediction probability estimate [5]

$$P^p_0(x_0(m) | \hat{y}_0) = \sum_{x_{0 \setminus m}} P(x_0(m) | x_{0 \setminus m}) \cdot P(x_{0 \setminus m} | \hat{y}_0), \quad x_0(m) \in \{0, 1\} \qquad \text{CD/AK0} \quad (11)$$

is used in the main decoding step to modify the metric. The a priori knowledge in eq. (11) can easily be derived from the a priori knowledge used in SD/AK0, eq. (5), by

$$P(x_0(m) | x_{0 \setminus m}) = \frac{P(x_0)}{\sum_{x_0(m)=0}^{1} P(x_0)}. \qquad (12)$$
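The CD/AK0 chain of eqs. (10)-(12) can be sketched in one function; `p_joint` plays the role of the preliminary estimate $P(x_0 | \hat{y}_0)$ and `prior` that of $P(x_0)$, both with hypothetical values:

```python
from itertools import product

def cd_ak0_prediction(p_joint, prior, m, M):
    """Eqs. (10)-(12): Pp0(x0(m)|y0) = sum over x0\\m of
    P(x0(m)|x0\\m) * P(x0\\m|y0), both factors obtained by
    marginalizing / conditioning over bit m."""
    pp = [0.0, 0.0]
    for rest in product((0, 1), repeat=M - 1):
        # reassemble the full combination with bit value b at position m
        full = lambda b: rest[:m] + (b,) + rest[m:]
        p_rest = sum(p_joint[full(b)] for b in (0, 1))       # eq. (10)
        denom = sum(prior[full(b)] for b in (0, 1))          # eq. (12) denominator
        for b in (0, 1):
            pp[b] += (prior[full(b)] / denom) * p_rest       # eqs. (11), (12)
    return pp

# Hypothetical M = 2 example: uninformative preliminary estimate, skewed prior
pp = cd_ak0_prediction({x0: 0.25 for x0 in product((0, 1), repeat=2)},
                       {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.3, (1, 1): 0.2},
                       m=0, M=2)
```

Even with an uninformative preliminary estimate, the intraframe prior skews the prediction probability of bit 0, which is the redundancy CD/AK0 feeds into the metric.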
C. Using Inter- and Intraframe Correlation

The above approaches can also be combined to exploit both the CD/AK0 and the CD/AK1 type of a priori knowledge. This means both switches in Fig. 2 are in the on position. The main channel decoder then delivers estimates $P^d(x_0(m)) \approx P(x_0(m) | \hat{Y}_0)$, which are formally the same as in the CD/AK1 case, but in general a better approximation. In this case the prediction probabilities can be written as

$$P^p_{01}(x_0(m) | \hat{Y}_0) = P^p_{01}(x_0(m) | \hat{y}_0, \hat{Y}_{-1}).$$

Their computation is convenient if $\hat{y}_0$ and $\hat{Y}_{-1}$ are considered as statistically independent, which is a quite simplifying assumption. Then they can be divided into a CD/AK1 part according to eq. (8) and a CD/AK0 part according to eq. (11), which can be calculated separately. Multiplication of both parts and application of a correction factor yields

$$P^p_{01}(x_0(m) | \hat{Y}_0) = \frac{P^p_0(x_0(m) | \hat{y}_0) \cdot P^p_{-1}(x_0(m) | \hat{Y}_{-1})}{P(x_0(m))}, \quad x_0(m) \in \{0, 1\} \qquad \text{CD/AK0+1} \quad (13)$$

which is to be used to modify the decoder metric. The inverse of the correction factor is the unconditioned probability that bit no. $m$ has a certain value, given by

$$P(x_0(m)) = \sum_{x_{0 \setminus m}} P(x_0). \qquad (14)$$
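Combining both parts as in eq. (13), with the correction factor derived via eq. (14), can be sketched as follows (inputs are made-up illustrative numbers):

```python
def cd_ak01_prediction(pp0, pp1, prior_param, m):
    """Eq. (13): multiply the CD/AK0 and CD/AK1 prediction probabilities and
    divide by the unconditioned bit probability P(x0(m)) of eq. (14).
    prior_param[x0] = P(x0); pp0/pp1 are [P(bit=0), P(bit=1)] lists."""
    p_bit = [0.0, 0.0]
    for x0, p in prior_param.items():      # eq. (14): marginalize P(x0)
        p_bit[x0[m]] += p
    return [pp0[b] * pp1[b] / p_bit[b] for b in (0, 1)]

# Hypothetical CD/AK0 and CD/AK1 outputs for one bit, uniform bit prior
pp01 = cd_ak01_prediction([0.8, 0.2], [0.6, 0.4], {(0,): 0.5, (1,): 0.5}, 0)
```

Note the sketch leaves the result unnormalized, as in eq. (13); within the decoder metric only the relative weighting of the two bit hypotheses matters.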
Note that in contrast to the channel decoding schemes, the SD/AK1 technique already includes the distribution-related redundancy used in SD/AK0.

IV. SIMULATION RESULTS

We performed simulations according to Fig. 1. A Gaussian parameter with correlation 0.8 was Lloyd-Max quantized with $M = 3$ bits. We examined two kinds of bit mapping: natural binary as well as folded binary. The three bits were multiplexed with 237 random bits, building a frame of $K = 240$ bits, with the parameter bits located at positions 60, 119, 178. The recursive systematic convolutional code that is used has constraint length 5 and a code rate $r = 1/2$. The equivalent channel is an AWGN channel with BPSK modulation and coherent demodulation. The soft output of the channel is computed as given in [11], assuming the channel quality is perfectly known at the receiver. The channel quality is given in terms of $E_s/N_0$ with $E_s$ denoting the energy of a BPSK symbol and $N_0/2$ being the power spectral density of the additive Gaussian noise. The decoding quality is defined in terms of the parameter SNR according to eq. (3) and is measured over 80,000 frames, i.e. 80,000 parameters. Channel decoding is done by bit error rate minimizing symbol-by-symbol decoders according to [1]. They are able to provide a good soft output to the subsequent channel or source decoding stage.
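The parameter model and the quality measure of this setup can be sketched in a few lines; the uniform 3-bit quantizer below is merely a stand-in for the Lloyd-Max quantizer of the actual simulations, and all constants are illustrative:

```python
import math
import random

def parameter_snr(orig, est):
    """Parameter SNR of eq. (3): 10*log10( E{v^2} / E{(v_est - v)^2} )."""
    num = sum(v * v for v in orig) / len(orig)
    den = sum((e - v) ** 2 for e, v in zip(est, orig)) / len(orig)
    return 10.0 * math.log10(num / den)

# AR(1) Gaussian parameter with correlation 0.8, scaled to unit variance
random.seed(0)
rho, v, vals = 0.8, 0.0, []
for _ in range(20000):
    v = rho * v + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    vals.append(v)

# Stand-in uniform 3-bit quantizer (8 reproduction levels, step 0.75)
levels = [(i - 3.5) * 0.75 for i in range(8)]
quant = [min(levels, key=lambda l: abs(l - x)) for x in vals]
snr = parameter_snr(vals, quant)   # error-free channel: quantization SNR only
```

On an error-free channel this gives the quantization SNR, i.e. the ceiling that all the decoding schemes in the figures below approach at high $E_s/N_0$.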
Fig. 3: Parameter SNR for natural binary bit mapping, different types of a priori knowledge used in channel decoder, no a priori knowledge used in source decoder

Fig. 4: Parameter SNR for natural binary bit mapping, different types of a priori knowledge used in channel decoder as well as in source decoder

Fig. 5: Bit error rates for natural binary bit mapping, different types of a priori knowledge used within channel decoder

Fig. 6: Parameter SNR for folded binary bit mapping, different types of a priori knowledge used in channel decoder, no a priori knowledge used in source decoder

Fig. 7: Parameter SNR for folded binary bit mapping, different types of a priori knowledge used in channel decoder as well as in source decoder

Fig. 8: Bit error rates for folded binary bit mapping, different types of a priori knowledge used within channel decoder
Figs. 3 and 4 show the parameter SNR versus channel quality for several combinations of channel and source decoding algorithms. Fig. 5 depicts the residual bit error rate (BER) after channel decoding if hard decisions on the soft outputs were carried out. For the MSB, center bit, and LSB the BERs of all four channel decoding schemes are shown for the channel qualities of $E_s/N_0 = -3$ and $-1$ dB. Comparing the four solid curves with the four dashed curves in Fig. 3 shows the advantage of parameter estimation by SD/SB over a simple table lookup by SD/HB. Further considerable gains are achieved by increasing amounts of a priori knowledge used within the channel decoder. These gains can also be measured in terms of the residual bit error rate after channel decoding, as depicted in Fig. 5. It turns out that natural binary bit mapping exhibits distribution-related (used by CD/AK0) as well as correlation-related (used by CD/AK1) redundancy for the MSB. Employing CD/AK0+1, the BER can be reduced significantly, by about 50%. The center bit shows only some distribution-related redundancy. Fig. 4 shows the performance if a priori knowledge is also used in the source decoder. The two lower curves indicate that if distribution-related a priori knowledge is used in the source decoder (SD/AK0), it shouldn't be used in the channel decoder (CD/AK0). Furthermore, it turns out that the best overall performance is achieved without any usage of a priori knowledge in the channel decoder at all (SD/AK1, CD/noAK). The SNR of the best scheme with a priori knowledge used in the channel decoder (SD/AK1, CD/AK0+1) is worse at all channel qualities, although, according to Fig. 5, it reduces the bit error rate of the MSB by about 50%! Figs. 6, 7, and 8 show simulation results for folded binary bit mapping. Here the center bit shows both types of residual redundancy, whereas the MSB reveals only correlation-related redundancy. Employing CD/AK0+1, the BER of both bits can be decreased by about 30%. The SNR performance of the (SD/AK1, CD/noAK) scheme, which is still the best one, turns out to be even better than that for natural binary bit mapping. These results disprove the common opinion that the reduction of bit error rate should be the principal goal of channel decoding.
Seen from a combined source/channel decoding point of view, we showed for the given system settings that it is advantageous to use the source redundancy within the source decoder alone, employing softbit source decoding [11]. We expect the reason for this to be that the channel decoder's a priori knowledge does not really exploit the parameter statistics, but instead is tied to the bit level. However, if we take complexity into account, the CD/AK0 scheme with its two channel decoding steps, as well as the SD/AK1 scheme in the case of high bit rate parameters (e.g. $M > 5$), might require too much computational effort. Thus for low-complexity applications the combination of CD/AK1 and SD/AK0, or even SD/SB, is a good choice, providing a performance that still comes reasonably close to the overall optimum.

V. CONCLUSIONS

We proposed a combined source/channel decoder that is able to use different kinds of a priori knowledge about the source to achieve a better performance. We showed by simulation that a priori knowledge shouldn't be used twice, in the channel decoder as well as in the source decoder. The optimum system performance in terms
of SNR is instead achieved when only the source decoder exploits the source statistics. This is true despite the higher residual bit error rate after channel decoding compared to a system where a priori knowledge is used within the channel decoder.

VI. REFERENCES

[1] L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Transactions on Information Theory, vol. 20, pp. 284–287, Mar. 1974.
[2] J. Hagenauer, "Source-Controlled Channel Decoding," IEEE Transactions on Communications, vol. 43, pp. 2449–2457, Sept. 1995.
[3] F. Alajaji, N. Phamdo, and T. Fuja, "Channel Codes that Exploit the Residual Redundancy in CELP-Encoded Speech," IEEE Transactions on Speech and Audio Processing, vol. 4, pp. 325–336, Sept. 1996.
[4] S. Heinen, A. Geiler, and P. Vary, "MAP Channel Decoding by Exploiting Multilevel Source A Priori Knowledge," in Proc. of EPMCC, (Bonn, Germany), pp. 467–473, Oct. 1997.
[5] T. Hindelang and A. Ruscitto, "Kanaldecodierung mit Apriori-Wissen bei nicht binären Quellensymbolen," in Proc. of 2. ITG-Fachtagung "Codierung für Quelle, Kanal und Übertragung", (Aachen, Germany), pp. 163–167, VDE-Verlag, Mar. 1998.
[6] K. Sayood and J. Borkenhagen, "Use of Residual Redundancy in the Design of Joint Source/Channel Coders," IEEE Transactions on Communications, vol. 39, pp. 838–846, June 1991.
[7] N. Phamdo and N. Farvardin, "Optimal Detection of Discrete Markov Sources Over Discrete Memoryless Channels – Applications to Combined Source-Channel Coding," IEEE Transactions on Information Theory, vol. 40, pp. 186–193, Jan. 1994.
[8] C. Gerlach, "A Probabilistic Framework for Optimum Speech Extrapolation in Digital Mobile Radio," in Proc. of ICASSP'93, vol. 2, (Minneapolis, Minnesota), pp. 419–422, Apr. 1993.
[9] V. Cuperman, F.-H. Liu, and P. Ho, "Robust Vector Quantization for Noisy Channels Using Soft Decision and Sequential Decoding," European Transactions on Telecommunications, vol. 5, pp. 7–18, Sept. 1994.
[10] T. Fingscheidt and P. Vary, "Error Concealment by Softbit Speech Decoding," in Proc. of ITG-Fachtagung "Sprachkommunikation", (Frankfurt a.M., Germany), pp. 7–10, VDE-Verlag, Sept. 1996.
[11] T. Fingscheidt and P. Vary, "Robust Speech Decoding: A Universal Approach to Bit Error Concealment," in Proc. of ICASSP'97, vol. 3, (Munich, Germany), pp. 1667–1670, Apr. 1997.
[12] T. Fingscheidt, Softbit-Sprachdecodierung in digitalen Mobilfunksystemen. PhD thesis, Aachener Beiträge zu digitalen Nachrichtensystemen, edited by P. Vary, vol. 9, (ISBN 3-86073-438-5), 1998.
[13] T. Fingscheidt and P. Vary, "Softbit Speech Decoding: A New Approach to Error Concealment," IEEE Transactions on Acoustics, Speech and Signal Processing (submitted), 1999.
[14] N. Görtz, "Joint Source Channel Decoding Using Bit-Reliability Information and Source Statistics," in Proc. of IEEE International Symposium on Information Theory, (MIT, Massachusetts), p. 9, Aug. 1998.
[15] T. Hindelang, J. Hagenauer, and S. Heinen, "Source Controlled Channel Decoding of Quantized and Correlated Symbols," in Proc. of 3. ITG Conference "Source and Channel Coding", (Munich, Germany), VDE-Verlag, Jan. 2000.