Rate-Distortion Performance of Source Coders in the Low Bit-Rate Region for Highly Correlated Gauss-Markov Source
Majid Foodeei (1) and Eric Dubois (1,2)
(1) Electrical Engineering, McGill University, Montreal, Quebec, CANADA H3A 2A7
(2) INRS-Telecommunications, Universite du Quebec, Verdun, Quebec, CANADA H3E 1H6

ABSTRACT
The discrete-time Gauss-Markov process is commonly used as a model in many source coding applications. A highly correlated Gauss-Markov process models the intensity along the motion trajectories encountered in time-varying image coding. We investigate the rate-distortion performance of lossy coding schemes operating on such sources in the practically important low bit-rate region of less than one bit per sample. Additional lossless entropy coding is assumed. Low-complexity, low bit-rate Code-Excited Linear Predictive (CELP) coders are designed, analyzed, and simulated. Performance close to the R(D) bound is achieved using this coder. For the highly correlated Gauss-Markov source, the CELP coder performs better than the optimized Entropy Constrained DPCM (ECDPCM) and is expected to perform better than Entropy Constrained Vector Quantization (ECVQ) and Entropy Constrained Block Transform Quantization (ECBTQ). Results include comparisons with these and other previously reported coders.
1. Introduction

The discrete-time Gauss-Markov process is both a good mathematical model for certain real sources and a well-defined standard for the comparison of various source coding schemes. This model with a high correlation coefficient is used for the intensity along motion trajectories encountered in time-varying image coding. For such applications, we are interested in the investigation of coding schemes which perform well in the low bit-rate region of less than one bit per sample.

The use of block coding or Vector Quantization (VQ) is advantageous over scalar quantization. For VQ, the source sequence is buffered into vectors of length n samples (n = 1 for scalar). Each vector is represented by one of the m reproduction vectors from the codebook. The index of this reproduction vector is transmitted over the channel. This amounts to a bit rate of log2(m)/n bits per sample. A set of m codebook vectors which minimizes the average distortion D between the source and the reproduction sequence has to be designed. For sufficiently large n, rate-distortion theory guarantees that there exists a codebook of size m for which the rate log2(m)/n is arbitrarily close to R(D), the source Rate-Distortion Function (RDF), i.e. the rate-distortion performance bound of any quantization scheme. In practical situations where an arbitrarily long vector length is not possible, additional lossless or entropy coding reduces the average transmission rate log2(m)/n to the entropy of the index, at the cost of additional complexity. The entropy coder removes the redundancies due to the nonuniform probability distribution of the encoded signal.

There are two classical views of optimum quantization. One is the (generalized) Lloyd-Max fixed-rate optimization, where the optimization goal is to minimize the average distortion for a fixed number of indices m. The second alternative is the entropy constrained optimization procedure, where the minimization of average distortion is subject to an entropy constraint. The entropy constrained optimum quantizer is known to perform better, at the cost of additional entropy coding complexity. This scheme with a variable rate output has concomitant difficulties (e.g. noisy channel performance and buffer problems).

Assuming entropy coding is used, the question is: "How close to the source R(D) bound can one get using practical coding schemes?" There is a wealth of work in the coding literature attempting to answer this question, especially for the high bit-rate or high-resolution region (also referred to as asymptotic coding). Although the high-resolution results can be informative even in the low bit-rate region, the validity of the results is undermined as the rate is decreased. The bit-rate region of 1-3 bits per sample, which was considered the low bit-rate region a decade ago, has probably now been replaced by the region of a fraction of a bit per sample. This paper's contribution is an attempt to answer the above question using a designed Code-Excited Linear Predictive (CELP) coder for this region. The CELP coder is already used in many low bit-rate speech applications.

[Footnote: This research was supported in part by a grant from the Canadian Institute for Telecommunications Research (CITR) under the NCE program of the Government of Canada.]
Even without an entropy constrained optimization strategy, the results obtained using the designed CELP coder are close to the source RDF. We have also compared the CELP performance with entropy constrained schemes such as the Entropy Constrained DPCM coder (ECDPCM) [1], Entropy Constrained VQ (ECVQ) [2], and the Entropy Constrained Block Transform Quantization (ECBTQ) of [3]. The comparison confirms the potential of schemes such as CELP for the low bit-rate region. Although the remaining gap between the CELP performance and the RDF bound is small (as low as 0.1 bit per sample), future work is to study entropy constrained or fixed-rate optimum design schemes applied to CELP.
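The VQ rate arithmetic above (a fixed rate of log2(m)/n bits per sample, reduced by entropy coding to the first-order entropy of the index sequence) can be sketched as follows. This is a minimal illustration with a random Gaussian codebook; all names and sizes are hypothetical, and a real codebook would be designed to minimize the average distortion D.

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Map each length-n source vector to the index of the nearest
    (minimum squared error) reproduction vector in the codebook."""
    # vectors: (num_vectors, n), codebook: (m, n)
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

def index_entropy_bits(indices, m):
    """First-order entropy of the index sequence, in bits per index."""
    p = np.bincount(indices, minlength=m) / len(indices)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
n, m = 4, 16
source = rng.standard_normal((1000, n))
codebook = rng.standard_normal((m, n))  # illustrative, not a designed codebook

idx = vq_encode(source, codebook)
fixed_rate = np.log2(m) / n                    # bits/sample without entropy coding
entropy_rate = index_entropy_bits(idx, m) / n  # bits/sample after ideal entropy coding
```

The entropy rate can never exceed the fixed rate; the difference is exactly the shape redundancy the entropy coder removes.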
2. Preliminaries

Source coding can be considered as a combination of a lossy and a lossless (entropy) coding stage used to obtain a more compact signal representation to be transmitted over the channel (or storage medium). The lossy coder maps the input signal sequence {X_n}, n = 0, 1, ..., to the output reconstruction sequence {X^_n} with an average distortion D. For a given rate, the coding performance is measured by the normalized average distortion D/var(X), or by the Signal-to-Noise Ratio, SNR = 10 log10(var(X)/D), in dB. The lossy encoder output (the index sequence) is the input to the lossless encoder. When lossless coding is used, the first-order entropy of the lossy encoder output can be considered as the coder rate (entropy rate).

The discrete asymptotically stationary Gauss-Markov process {X_n} (a first-order autoregressive AR(1) process with a Gaussian innovation process) is defined as

    X_n = a X_{n-1} + Z_n,   n = 1, 2, ...     (1)

where a is the correlation coefficient and {Z_n} is the iid Gaussian innovation sequence. We are interested in highly correlated sources for which a ranges between 0.9 and 1.0.

The RDF or R(D) is defined as the rate-distortion performance bound for any quantization scheme. Calculation of R(D) can be considered as a minimization problem for which closed-form solutions often do not exist [4]. Blahut provides iterative algorithms to obtain a solution to the problem using a double minimization formulation. The Shannon Lower Bound R_SLB(D) is the argument of this minimization in the special case of difference distortion measures. For the memoryless Gaussian process (e.g. the innovation process {Z_n}), R_SLB(D) and R(D) coincide. In the high bit-rate region we have a simple non-parametric formulation, while for the low bit-rate region we may use the parametric formulae or the closed-form expression derived from the knowledge of the source power spectral density [4]. The R(D) curves for various values of the correlation coefficient a are shown in Fig. 2, along with what may be considered as the maximum potential memory gain for the Gauss-Markov process at various rates (the gain obtained by removing memory, i.e. transforming the Gauss-Markov process to its innovation process).

We now consider ways of classifying various source redundancies and the potential gains obtained by exploiting these redundancies. For many sources, the main redundancy is the signal's nonlinear and linear dependencies (memory). If the quantization output has a nonuniform probability distribution, there is an additional redundancy which is sometimes referred to as the shape redundancy. Since in this study additional entropy coding is assumed, the effect of this redundancy is removed. As mentioned earlier, use of a longer block size will improve the coding performance. This gain, which is sometimes referred to as the dimensionality gain, is exploited by the choice of space filling. There are gains due to all three categories when VQ is used [5]. The scalar quantization notions of granular region and overload region are also extended to VQ. Related to these notions, the VQ boundary gain and granular gain are defined to alternatively classify the VQ gains [6]. The bounded VQ does not have an overload region. Also, as in the scalar case, the distortion due to the overload region can be considered negligible.

Coding schemes can be combined to remove different kinds of redundancies. Predictive Coding (PC), Transform Coding (TC), and Vector Quantization (VQ) are the commonly used coding techniques. Differential Pulse Code Modulation (DPCM) is the simplest and most widely used linear predictive coding scheme. Predictive Vector Quantization (PVQ) combines the VQ and DPCM advantages. The schemes can be adaptive or nonadaptive. Entropy constrained or fixed-rate optimization can be used. Error minimization procedures can be in an open or closed-loop fashion. These procedures may sometimes interact with the entropy coding block [7]. The choice depends on the application and is especially determined by the tolerable coding complexity, storage, and delay.
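The parametric computation of R(D) from the source power spectral density mentioned above can be sketched numerically. This is a hedged illustration of the standard water-filling solution for a stationary Gaussian source, here specialized to the AR(1) spectrum S(w) = var(Z) / (1 + a^2 - 2a cos w); the function name and discretization are our own, not from the paper.

```python
import numpy as np

def gauss_markov_rd_point(a, theta, var_z=1.0, num=4096):
    """One (D, R) point on the Gauss-Markov rate-distortion function,
    from the parametric (water-filling) solution at water level theta:
    D = mean over frequency of min(theta, S(w)),
    R = mean over frequency of max(0, 0.5*log2(S(w)/theta)) bits/sample."""
    w = np.linspace(-np.pi, np.pi, num, endpoint=False)
    S = var_z / (1.0 + a**2 - 2.0 * a * np.cos(w))  # AR(1) power spectrum
    D = float(np.mean(np.minimum(theta, S)))
    R = float(np.mean(np.maximum(0.0, 0.5 * np.log2(S / theta))))
    return D, R

# sweeping theta traces out the R(D) curve for a given correlation a
d1, r1 = gauss_markov_rd_point(a=0.9, theta=0.01)
d2, r2 = gauss_markov_rd_point(a=0.9, theta=0.1)
```

Smaller water levels theta give smaller distortion at higher rate, which is how curves such as those in Fig. 2 are generated.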
3. Design and analysis of the CELP coder

In this section, we first present the CELP coder structure and design. The coder features and a brief analysis of its superior performance, along with the paper's investigation methods, are then given.

The CELP coder's main features are the implicit use of PC and VQ, a closed-loop analysis-by-synthesis configuration, (optional) adaptation of the prediction filter, (optional) gain scaling and adaptation, and (optional) perceptual weighting (noise shaping). These features are particularly useful for the very low bit-rate regions. Coarse quantization results in a violation of the additive quantization noise model assumptions in DPCM and other similar coding schemes: the signal and quantization noise are correlated, and the noise process is no longer an uncorrelated white process. The closed-loop configuration compensates for some of these undesired effects.

The CELP coder is a relatively complex coder. However, here the complexity of the designed coder is quite low. First, the optional features are not used; this is justified since the signal is often stationary. Second, the prediction order of one simplifies the operations. Third, there are effective ways to reduce the coder complexity (see below). Finally, with the obtained low block length and codebook size (e.g. 4-6 and 8-16), coding complexity is not high. The excellent results with such low complexity were not fully expected. It should also be noted that there are other coders with similar characteristics, such as the predictive trellis or tree coder [8], which also use PC, delayed decision coding (VQ, tree, and trellis coding), and a closed-loop structure.

[Footnote: The scalar quantizer may ignore the overload effect by selecting a fixed overload factor (the ratio of the highest threshold level to the rms value) with a reasonable value in the 2-4 range, as used in most DPCM coders.]
The block diagram of the CELP encoder used here is shown in Fig. 1. The encoder, which has an analysis-by-synthesis structure (a copy of the decoder exists at the encoder), uses an exhaustive search through the residual signal codebook. This results in the minimization of the MSE. Each residual vector from the codebook is passed through the synthesis filter (the inverse of the prediction error filter, 1/(1 - P(z))) to obtain candidates for the current input signal vector. The index of the one which yields minimum MSE is sent to the entropy coder block. The filter Zero-State Response (ZSR) and Zero-Input Response (ZIR) are separated to reduce the complexity. Here, the predictor coefficient has the same value as the Gauss-Markov coefficient a. Adaptation of the predictor coefficient and the use of gain adaptation, which are often used in the CELP structure, are not utilized. These are especially beneficial in the case of a nonstationary signal (e.g. slowly varying a). The codebook can be designed using closed-loop LBG-like algorithms for the CELP coder (see for example [9]).

The full analysis of the various components of the CELP coder and comparison with alternative coding schemes is not within the scope of this paper and is postponed to a paper to be submitted shortly. To summarize briefly, however, for the highly correlated signal the use of PC is advantageous over TC. For example, for the Gauss-Markov source with correlation coefficient a ≥ 0.95, the required delay for TC is not practical. We will come back to this point when the CELP performance is compared with coding schemes using TC. Unlike PVQ, the analysis-by-synthesis feature of CELP allows for a better combination of the advantages of PC and VQ. The analysis reveals that the CELP advantage over PVQ could be up to a few dB.

In our experiments, we seek to obtain the rate-distortion performance of the CELP coder, with the codebook size limited to m and the vector size limited to n. The following exhaustive simulations were used. Using a Gauss-Markov process training sequence of 100,000 samples, codebooks were designed for various pairs of vector size and codebook size (n ≤ 8 and m ≤ 512). For each pair of m and n values, the rate (first-order entropy of the index sequence) and average distortion were estimated. Points on the rate-distortion curve were obtained using this exhaustive algorithm. The input signal test sequence was different from the training sequence used for codebook design.
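The closed-loop search with ZIR/ZSR separation described above can be sketched as follows for the first-order case. This is a simplified illustration under our own assumptions (a random residual codebook instead of a trained one, no gain scaling); a real design would use a closed-loop LBG-like algorithm as in [9]. All function names are hypothetical.

```python
import numpy as np

def synth_zsr(residual, a):
    """Zero-state response of the synthesis filter 1/(1 - a z^-1)."""
    y = np.empty_like(residual)
    prev = 0.0
    for i, r in enumerate(residual):
        prev = r + a * prev
        y[i] = prev
    return y

def celp_encode(x, codebook, a):
    """Analysis-by-synthesis search: for each input vector, remove the
    filter zero-input response, match against the precomputed zero-state
    responses of all codebook residual vectors, and keep the minimum-MSE
    index. The filter memory carries across blocks."""
    n = codebook.shape[1]
    zsr = np.array([synth_zsr(c, a) for c in codebook])  # (m, n), precomputed
    state = 0.0
    indices, recon = [], []
    for vec in x.reshape(-1, n):
        zir = state * a ** np.arange(1, n + 1)   # decaying filter memory
        err = (((vec - zir)[None, :] - zsr) ** 2).sum(axis=1)
        j = int(err.argmin())
        y = zir + zsr[j]                         # reconstructed vector
        indices.append(j)
        recon.append(y)
        state = y[-1]
    return np.array(indices), np.concatenate(recon)

# toy AR(1) source (stands in for the 100,000-sample training sequence)
rng = np.random.default_rng(1)
a, n, m, num = 0.9, 4, 16, 1000
z = rng.standard_normal(num)
x = np.zeros(num)
for i in range(1, num):
    x[i] = a * x[i - 1] + z[i]
codebook = rng.standard_normal((m, n))           # illustrative only

idx, xhat = celp_encode(x, codebook, a)
snr_db = 10 * np.log10(x.var() / np.mean((x - xhat) ** 2))
```

Because the synthesis filter is linear, its output is the sum of the ZIR (from past memory) and the ZSR (from the candidate residual), which is what lets the search precompute all m filtered codevectors once.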
4. Previous work

In this section a brief review of high-resolution asymptotic coding results, and of numerical schemes which also apply to lower bit-rates, is presented. Through this review, the choice of the CELP coder and of the other coders used for comparison (ECVQ, ECDPCM, and ECBTQ) is justified.

First we review the asymptotic results. One of the simplest quantizers is the uniform quantizer, a regular quantizer with equally spaced decision levels and reproduction values at the midpoints of the quantization intervals. This regularity of the uniform quantizer is also linked to the concept of lattice quantizers. Bennett introduced the companding model of the scalar quantizer and provided formulations for the asymptotic (high number of quantization levels) quantizer distortion in terms of the input Probability Density Function (PDF). Gish and Pierce [10], under weak assumptions, showed that Entropy Constrained Scalar Quantization (ECSQ) with uniform quantization is asymptotically optimum regardless of the source PDF and error criterion. They showed that this quantizer yields performance within 0.255 bit of the RDF bound (0.2546 bit above the Shannon Lower Bound R_SLB(D)). These results were based on asymptotic approximations. Zador [11], Gersho [12] and others studied the extension of these results to block quantization (ECVQ) and obtained formulations for the asymptotic quantization distortion D with various degrees of generality [13]. The Zador study provides bounds for the asymptotic ECVQ performance. The Gersho conjecture is that the optimum high-resolution ECVQ has the form of a lattice.

Numerical methods were devised for the cases where the high-resolution approximation is not used [14], [15], [16], [17]. The last three references used a Lagrangian formulation to obtain the optimum ECSQ. Lloyd-Max optimum quantization ideas and their generalizations (the necessary conditions of nearest neighbor and centroid) were used to obtain optimality conditions for the optimum ECSQ [4]. As shown by the iterative algorithms of Farvardin [17], the Gish and Pierce asymptotic gap of 0.255 bit becomes even smaller for all memoryless sources. This is true for sources with various PDFs, except possibly for the uniform distribution (a 0.3 bit gap in that case). Rates below one bit per sample were also considered. The other important result from his work is that if there is no constraint on the number of quantization levels, the performance of the optimum quantizer and of the Uniform Threshold Quantizer (UTQ) are almost the same. The above optimization procedure requires knowledge of the PDF. Farvardin extended his work to the Gauss-Markov source (with memory) to obtain the rate-distortion performance of the DPCM method.
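The ECSQ/UTQ behavior described above is easy to observe empirically. The sketch below (our own illustration, not Farvardin's algorithm) traces entropy-distortion points of an unbounded uniform threshold quantizer on a memoryless Gaussian source; in the Lagrangian view, each slope lambda selects the step size minimizing D + lambda*H along this curve. At high rate the measured entropy should sit roughly 0.255 bit above the Gaussian RDF, per Gish and Pierce.

```python
import numpy as np

def utq(x, step):
    """Uniform threshold quantizer: round to the nearest multiple of the
    step size; indices are integers, reproductions the interval midpoints."""
    idx = np.round(x / step).astype(int)
    return idx, idx * step

def entropy_bits(idx):
    """First-order entropy of the index sequence, in bits per sample."""
    _, counts = np.unique(idx, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def ecsq_curve(x, steps):
    """(entropy, distortion) points of the UTQ for a range of step sizes."""
    pts = []
    for s in steps:
        idx, xq = utq(x, s)
        pts.append((entropy_bits(idx), float(np.mean((x - xq) ** 2))))
    return pts

rng = np.random.default_rng(2)
x = rng.standard_normal(20000)                    # memoryless Gaussian source
pts = ecsq_curve(x, steps=[0.25, 0.5, 1.0, 2.0])  # small step = high rate
```

Larger step sizes give lower entropy and higher distortion, tracing the operational entropy-distortion curve.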
He used an iterative algorithm where the estimate of the PDF of the quantization error at each stage is used for the optimization of the quantizer structure (optimum ECDPCM). His results are used as a reference in this work. The ECSQ of Farvardin was extended to the vector case (ECVQ) by Chou et al. [2], and their results, which also cover the region of less than one bit per sample, show the superior performance of such coders over ECSQ, lattice, and other schemes at the cost of increased complexity. For the memoryless Gaussian case, the performance improvement is small. For the Gauss-Markov source, a significant gain is seen over other methods; as an example, at 0.75 bit per sample there is a 1.6 dB advantage over the next competitor (the entropy-coded D4 lattice). Some of these results are also referenced in the next section.

For the highly correlated Gauss-Markov source, higher-dimension ECVQ is probably not a cost-effective alternative, because much of the total possible gain for a fixed dimension may be used up as memory gain. The alternative is to remove most of the dependencies (at least the linear dependencies) using methods such as PC or TC, and to use VQ more effectively to remove the other redundancies. The ECDPCM coder benefits from PC and ECSQ. The ECBTQ coder [3] combines the advantages of TC and ECSQ (UTQ). In the ECBTQ method [3], the Gauss-Markov source vectors (size n) are decorrelated using the optimum Karhunen-Loeve (KL) unitary transformation. Using the UTQ scheme (UTQ performs close to optimum), an iterative algorithm then solves the entropy constrained Lagrangian problem. This results in the optimum step-size vector (size n) for the quantization of the transform coefficients.
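The KL decorrelation stage of ECBTQ can be sketched for the AR(1) source, whose n x n covariance matrix is Toeplitz with entries proportional to a^|i-j|. The sketch below (our own illustration; the entropy constrained step-size optimization itself is omitted) computes the KL basis and applies it to source blocks; the transform-domain variances equal the covariance eigenvalues, which is what the per-coefficient step-size vector exploits.

```python
import numpy as np

def ar1_covariance(n, a, var_x=1.0):
    """Toeplitz covariance of the AR(1) process: var_x * a^|i-j|."""
    i = np.arange(n)
    return var_x * a ** np.abs(i[:, None] - i[None, :])

def klt(n, a):
    """Karhunen-Loeve transform for the AR(1) source: rows of the returned
    matrix are the covariance eigenvectors, strongest coefficient first."""
    eigval, eigvec = np.linalg.eigh(ar1_covariance(n, a))
    order = np.argsort(eigval)[::-1]
    return eigval[order], eigvec[:, order].T

n, a = 8, 0.9
lam, T = klt(n, a)

# apply the transform to blocks of a toy AR(1) realization
rng = np.random.default_rng(3)
num = 5001
z = rng.standard_normal(num)
x = np.zeros(num)
for i in range(1, num):
    x[i] = a * x[i - 1] + z[i]
blocks = x[1:].reshape(-1, n)   # (625, 8)
coeffs = blocks @ T.T           # decorrelated transform coefficients
```

In the full ECBTQ scheme, each of the n coefficient streams would then be quantized by a UTQ whose step size comes from the Lagrangian optimization.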
5. Results

We are interested in the rate-distortion performance for the highly correlated Gauss-Markov process with coefficient a in the range 0.9-1.0 (the discrete-time Wiener process, a = 1.0, is not considered). The entropy rate of interest is one bit per sample and below. We have used the simulation strategy described in the previous section for the CELP coder. The maximum vector size n = 8 and maximum codebook size m = 256 were used, although the simulations revealed that most points on the rate-distortion curves were obtained using dimensions lower than these maximum values. A combination of the coding schemes reviewed above can be used to remove the redundancies due to signal correlations; for comparison we used the results reported (tables and graphs) for the ECDPCM [1], the ECVQ [2], and the ECBTQ [3]. We also used a DPCM coder with a uniform scalar quantizer (UDPCM) with a load factor equal to 4 for some references. This coder of course only performs well at higher rates. The curves for this quantizer were obtained by varying the number of quantizer levels m.

In Fig. 3 the rate-distortion performance of the CELP coder is shown. The R(D) bound and the UDPCM are also included for reference. In the low bit-rate region, the CELP coder curve is consistently close to the R(D) bound for all a values. The performance of the UDPCM (and also the ECDPCM, as seen later in Fig. 4) degrades as the correlation coefficient a approaches 1.0. The degradation of CELP performance at higher rates (which is not of concern here) is probably due to the low dimensionality and codebook design problems for higher dimensions. The lower left graph shows the rate as a function of the correlation coefficient a for CELP, UDPCM and R(D) at a fixed normalized average distortion (0.03) in the region of interest. To show the gap between the coders' rate performance and R(D) more clearly, the lower right curves show these gaps for the two coders.
As seen there, for higher correlations the CELP gap decreases while the UDPCM gap increases. Fig. 4 compares the CELP performance with the available ECDPCM [1] performance results for a = 0.5, 0.8, 0.9. This figure also contains comparisons with the ECBTQ [3] and ECVQ [2] for a = 0.9. The trend of the results for a ≥ 0.9 is possible to predict. Again, the CELP performance is particularly better than the ECDPCM for the higher correlation coefficients, for the reasons discussed earlier. The bottom right graph in Fig. 4 compares the CELP performance with the ECVQ [2] for a = 0.9. The performance of the other coding schemes used by Chou et al. [2] is also shown for comparison. As expected, for n = 8 the CELP coder outperformed the ECVQ. The runner-up competitor to the ECVQ is the D4 lattice in the low bit-rate region. The effective higher dimensionality of CELP results in its better performance. As mentioned earlier, the use of VQ alone for sources with memory is not efficient, and as with TC for a ≥ 0.95, the required block length will not be practical. For n = 8 and a ≥ 0.9, the ECVQ performance is expected to degrade. Finally, as the results in Fig. 4 show, the performance of the more recent ECBTQ [3] and of CELP are comparable. The ECBTQ has a block length n = 8 while the CELP coder's maximum block length is 8 (often the block length is only 4). The difference between the ECBTQ and CELP performance is negligible (for higher correlation coefficients a, CELP performs slightly better, while for a = 0.5 the opposite holds). However, as mentioned in Section 3, and as with the ECVQ, for the highly correlated source with a ≥ 0.95, which is of interest here, the block length required by TC will not be practical. As seen in Fig. 3, for a = 0.99 the CELP coder continues to perform well.
6. Conclusion

We have investigated the rate-distortion performance of a CELP coder in the low bit-rate region of less than 1 bit per sample for highly correlated Gauss-Markov processes. For reference, the RDF bound and some previously reported entropy constrained coders were used. Despite its high performance, the main disadvantage of the entropy-coded coding class considered in this paper is the difficulties inherent in variable-length coding. As an alternative to the entropy-coded systems, fixed-rate structured VQ may be considered. Recent work in this area [18], [19], [20] has narrowed the gap between the entropy constrained methods and the fixed-rate methods. As a result of the lower complexities achieved for such coders, higher dimensions can be used to improve the performance.

The low complexity CELP coder performs very well for the region of interest in comparison with the selected entropy constrained coding schemes. The VQ, PC, and closed-loop analysis-by-synthesis features in CELP allow for a closely coupled memory/dimensionality gain. The complexity of the CELP coder is not very high. The analysis of the CELP coder performance which was briefly discussed here is postponed to a paper to be submitted shortly. Also under investigation are the design algorithm and analysis of the new coding method of Entropy Constrained CELP (EC-CELP). Future work may include extension of the studies to the discrete Wiener process case. The ultimate goal is the investigation of efficient coding of fields (the 2-dimensional image plane) of motion trajectories, where each trajectory is modeled by a discrete-time Gauss-Markov process.
REFERENCES
[1] N. Farvardin and J. W. Modestino, "Rate-distortion performance of DPCM schemes for autoregressive sources," IEEE Trans. Inform. Theory, vol. 31, pp. 402-418, May 1985.
[2] P. A. Chou, T. Lookabaugh, and R. M. Gray, "Entropy-constrained vector quantization," IEEE Trans. Acoust. Speech Signal Process., pp. 31-42, Jan. 1989.
[3] N. Farvardin and F. Y. Lin, "Performance of entropy-constrained block transform quantizers," IEEE Trans. Inform. Theory, vol. 37, pp. 1433-1439, Sept. 1991.
[4] T. Berger, Rate Distortion Theory: A Mathematical Basis for Data Compression. Prentice-Hall, 1972.
[5] T. D. Lookabaugh and R. M. Gray, "High-resolution quantization theory and the vector quantization advantage," IEEE Trans. Inform. Theory, vol. 35, pp. 1020-1033, Sept. 1989.
[6] M. V. Eyuboglu and G. D. Forney, Jr., "Lattice and trellis quantization with lattice- and trellis-bounded codebooks: high-rate theory for memoryless sources," IEEE Trans. Inform. Theory, vol. 39, pp. 46-59, Jan. 1993.
[7] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression. Boston: Kluwer Academic Press, 1990.
[8] E. Ayanoglu and R. M. Gray, "The design of predictive trellis waveform coders using the generalized Lloyd algorithm," IEEE Trans. Commun., vol. 34, pp. 1073-1080, Nov. 1986.
[9] M. Foodeei, "Low-delay speech coding at 16 kb/s and below," Master's thesis, McGill University, May 1991.
[10] H. Gish and J. N. Pierce, "Asymptotically efficient quantizing," IEEE Trans. Inform. Theory, vol. 14, pp. 676-683, Sept. 1968.
[11] P. L. Zador, "Asymptotic quantization error of continuous signals and the quantization dimension," IEEE Trans. Inform. Theory, vol. 28, pp. 373-380, July 1982.
[12] A. Gersho, "Asymptotically optimal block quantization," IEEE Trans. Inform. Theory, vol. 25, pp. 373-380, July 1979.
[13] R. M. Gray, Source Coding Theory. Boston: Kluwer Academic Press, 1990.
[14] R. C. Wood, "On optimum quantization," IEEE Trans. Inform. Theory, vol. 15, pp. 248-252, Mar. 1969.
[15] T. Berger, "Optimum quantizers and permutation codes," IEEE Trans. Inform. Theory, vol. 18, pp. 759-765, Nov. 1972.
[16] P. Noll and R. Zelinski, "Bounds on quantizer performance in the low bit-rate region," IEEE Trans. Commun., vol. 26, pp. 300-304, Feb. 1978.
[17] N. Farvardin and J. W. Modestino, "Optimum quantizer performance for a class of non-Gaussian memoryless sources," IEEE Trans. Inform. Theory, vol. 30, pp. 485-497, May 1984.
[18] R. Laroia, Design and Analysis of a Fixed-Rate Structured Vector Quantizer Derived from Variable-Length Quantizers. PhD thesis, University of Maryland, 1992.
[19] A. S. Balamesh and D. L. Neuhoff, "Block-constrained methods of fixed-rate, entropy-coded, scalar quantization," IEEE Trans. Inform. Theory, 1992.
[20] A. K. Khandani, P. Kabal, and E. Dubois, "Efficient decomposition algorithm for the fixed rate, entropy-coded vector quantization," in Conf. on Inform. Sci. and Sys., (Johns Hopkins University), 1993.

[Fig. 1 block diagram: Input Signal; synthesis filter 1/(1-P(Z)) with separate Zero Input Response (ZIR) and Zero State Response (ZSR) paths; residual Codebook; Min. MSE Search Module; Index to Entropy Coder.]
Fig. 1 CELP encoder block diagram.

[Fig. 2 plot: Rate versus D; curves labeled "RDF, left to right a=.99,.95,.9,.5,.2,.0" and "Mem. Gain (dB), top to bottom a=.99,.95,.9,.5,.2".]
Fig. 2 R(D) and maximum memory gain over innovation source for Gauss-Markov processes with various values of a.

[Fig. 3 plots: "RDF versus CELP" and "RDF versus CELP and UDPCM" panels, Rate versus D/var_x for several a values (including a=0.2, 0.9, 0.99); lower panels at fixed D=0.03 show Rate versus a and the RDF gap versus a over 0.9-1.0.]
Fig. 3 UDPCM (dashed) and CELP (dotted) performance (solid line is the R(D)). Top graphs show the results for various values of a and bottom graphs indicate how the relative performance of CELP improves as a increases.

[Fig. 4 plots: Rate versus D/var_x panels titled "a=0.5 (CELP, ECBTQ, ECDPCM)", "a=.8 (CELP, ECBTQ, ECDPCM)", "a=.9 (CELP, ECBTQ, ECDPCM)", and "a=.9 (CELP, ECVQ, D4, A2)".]
Fig. 4 Two top graphs and bottom left graph show CELP performance (dotted) versus ECDPCM (circles) and ECBTQ (stars) (solid line is RDF) for various values of a. Bottom right graph shows CELP performance (dotted) versus the ECVQ performance (x's) for a = 0.9 (solid line is R(D), D4 lattice is dashdot, and A2 lattice is dashed curve).