Turing Complexity of Fixed-Rate Quantization. Dennis Hui. Advanced Development and Research, Ericsson Inc., Research Triangle Park, NC 27513, USA.
Killarney, Ireland, June 22-26, 1998.
Abstract: The rate at which quantization complexity grows, as the rate of quantization increases, is investigated under a Turing machine framework. It is shown that the problem of asymptotically optimal scalar quantization has polynomial encoding complexity if the distribution function corresponding to the one-third power of the source density is polynomially computable with high probability.
I. INTRODUCTION

The classical theory of lossy source coding focuses on the performance, in terms of rate and distortion, of quantization. Although the celebrated work of Shannon [1] described the fundamental performance limits of quantization, the important issue of complexity, in terms of the implementation costs of encoding and decoding, was not addressed. On the one hand, it seems evident that quantizers need to become very complex in order to achieve optimal performance. For instance, it is generally understood that the complexity of an optimal full-search unstructured quantizer with dimension k and rate R increases exponentially with the dimension-rate product kR. On the other hand, there are many reduced-complexity full-search methods and sub-optimal structured quantization schemes whose complexities are much lower than that of full search. It remains unclear whether the exponential growth in complexity is indeed necessary to achieve optimal or nearly optimal performance. Here we present a framework for studying how the complexity of fixed-rate quantization increases with rate R for a given dimension k. We assess quantization complexity on a hypothetical machine, namely an oracle Turing machine, in such a way that both hardware and operating costs of quantization are simultaneously accounted for. Our goal is to investigate the possibility of asymptotically optimal quantization with polynomial (as opposed to exponential) growth in complexity, and to identify conditions on the source density under which this is possible. As an initial effort, we focus on the encoding complexity of quantization and the mean-squared distortion measure.
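To make the exponential benchmark concrete, the following Python sketch (our illustration, not part of the original development; all names are ours) implements full-search nearest-neighbor encoding: the codebook of a dimension-k, rate-R unstructured quantizer holds 2^{kR} codewords, and every encoding operation visits all of them.

```python
import numpy as np

def full_search_encode(x, codebook):
    """Return the index of the codeword nearest to sample vector x.

    For an unstructured codebook of dimension k and rate R, the codebook
    holds 2**(k*R) codewords, so each encoding examines 2**(k*R)
    candidates; this is the exponential dimension-rate growth noted above.
    """
    dists = np.sum((codebook - x) ** 2, axis=1)   # squared distance to each codeword
    return int(np.argmin(dists))

k, R = 4, 3                                        # dimension and rate (bits/sample)
rng = np.random.default_rng(0)
codebook = rng.standard_normal((2 ** (k * R), k))  # 2**(k*R) = 4096 codewords
index = full_search_encode(rng.standard_normal(k), codebook)
```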
II. TURING MACHINE BASICS

In short, an ordinary Turing machine (TM), denoted by M, is simply a finite-state machine augmented by a finite number of primitive data storage devices, called tapes, each capable of storing an arbitrarily long string of symbols (from a finite alphabet) arranged side by side, cf. [2]. The machine M can read from or write onto each tape via its associated tape head, which always points to one of the symbols stored on the tape. A typical task performed by a TM M involves computing a function from the integers to the integers. The input integer, represented by a finite string of symbols, is stored on a special tape, called the input tape, before the operation of M starts, while the output integer is written on another tape, called the output tape. All tapes except for the
input tape are assumed to be blank initially. The operation of M starts at some initial state. At each step of operation, M reads the symbols to which the tape heads are pointing, overwrites each of these symbols with a new symbol, moves each tape head to an adjacent symbol, and then changes to a new state. The operation continues until some special state, called the halting state, is reached. Whatever is on the output tape at that time is considered the output. The complexity of a task is assessed by the least number of steps required by any TM performing the task. However, for the task of quantizing real source samples, the ordinary TM model is inadequate. A model of machines that can also perform tasks involving operations with real-valued arguments is needed. Clearly, no realistic machine can acquire a real argument perfectly, but only an approximation. Nevertheless, the approximation to the argument can be accurate enough for the machine to satisfy a desired output requirement. This is similar to issues arising in computational analysis, cf. [3], where an oracle Turing machine (OTM) is adopted as the model for machines that compute functions with real-valued arguments. Basically, the difference between an OTM and an ordinary TM is that, in addition to the input tape which is capable of storing integer arguments, an OTM M can also obtain an arbitrarily precise dyadic approximation φ(x, n) of a real argument x, whenever it is needed, from a fictitious device, called an oracle function φ, which has exclusive access to the actual argument x and which is not considered part of M. The approximation φ(x, n) is accurate to within 2^{-n}, where n is the desired precision specified by M to φ during the operation of M. The main reason for including an oracle in the model is that the cost of acquiring approximations to the input x can be counted through the desired accuracy n when assessing complexity, while the cost of generating such approximations is not counted. Besides its interaction with an oracle function, an OTM M operates in the same manner as an ordinary TM. In particular, the input tape of M may contain a finite string s of symbols before the operation of M starts. We refer to such a string s as an initialization string. We also refer to the ordered pair (M, s) as an initialized OTM.
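The oracle-query mechanism can be modeled concretely. The following toy Python sketch is ours (a real oracle has exclusive access to the exact real argument, which no finite program does): queried at precision n, it returns a dyadic rational within 2^{-n} of x.

```python
from fractions import Fraction

def make_oracle(x):
    """Model an oracle function phi for the real argument x: phi(n) returns
    a dyadic rational m / 2**n within 2**-n of x.  Only the requested
    precision n is charged to the querying machine, not the oracle's work."""
    def phi(n):
        m = round(x * 2 ** n)          # nearest numerator at precision n
        return Fraction(m, 2 ** n)     # dyadic approximation of x
    return phi

phi = make_oracle(3.14159)
approx = phi(10)                       # accurate to within 2**-10
assert abs(float(approx) - 3.14159) <= 2 ** -10
```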
III. FORMULATION

Instead of assuming a different encoding machine for each rate R, whose relative complexities would be difficult to assess, we envision a single machine, namely a Turing encoder defined below, which can perform k-dimensional quantization encoding at any specified rate R. For simplicity, we present our formulation in the scalar (k = 1) case; the generalization to the vector case is straightforward. We define a Turing encoder to be an ordered pair E = (M, {s(R)}_{R=1}^∞), where M is an OTM and s(R) is an initialization string for M to operate at rate R, such that, for any source sample x and any oracle function φ, the machine M operating with each initialization string s(R) halts with an output M(s(R), φ, x) that is an integer between 0 and 2^R - 1. Note that each initialized OTM (M, s(R)) is a "true" encoder of rate R. The complexity C(E, R) of a Turing encoder E = (M, {s(R)}_{R=1}^∞) operating at rate R is measured by the maximum number of steps required for the initialized OTM (M, s(R)) to encode an arbitrary source sample x. Since the number of tape symbols accessed by M during its operation is no more than the number of steps required for M to halt, the increase in C(E, R), as rate R increases, simultaneously accounts for the increase in both the hardware and operating costs required for performing encoding operations. On the other hand, the distortion performance D(E, p, R) of a Turing encoder E operating at rate R is defined as the mean squared error that results when the initialized OTM (M, s(R)) is used with an optimum decoder to quantize a source with density p. More precisely, consider any decoder d_R of rate R as a mapping from {0, 1, ..., 2^R - 1} to ℜ. Then, we define

$$ D(E, p, R) \;=\; \min_{d_R} \sup_{\phi} \int \bigl( x - d_R\bigl( M(s(R), \phi, x) \bigr) \bigr)^2 \, p(x) \, dx . $$

Using the above definitions, we now define a class of source densities that are intrinsically easier to quantize than others.

Definition 1: A source density p is polynomially asymptotically optimally quantizable (PAOQ) if there exist a Turing encoder E and a polynomial γ(R) such that C(E, R) ≤ γ(R) for all R and

$$ \lim_{R \to \infty} \frac{D(E, p, R)}{D^*(p, R)} \;=\; 1, $$

where D*(p, R) is the least possible distortion when quantizing the source density p at rate less than or equal to R.
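As a concrete (and deliberately simple) illustration of these definitions, the sketch below models an initialized OTM encoding one sample of a source supported on (0, 1), such as a uniform source: it queries the oracle at precision R + 1 and truncates to R bits, so its work grows only polynomially in R. This is our toy model, not the paper's construction; the output index may differ from the exact value floor(x · 2^R) by one cell, which perturbs the distortion only at the 2^{-R} scale.

```python
from fractions import Fraction

def turing_encoder(phi, R):
    """Toy model of an initialized OTM (M, s(R)): encode one sample of a
    source on (0, 1) into an integer in {0, ..., 2**R - 1} using a single
    oracle query of precision R + 1."""
    q = phi(R + 1)                        # dyadic approximation within 2**-(R+1)
    index = int(q * 2 ** R)               # truncate to R bits
    return min(max(index, 0), 2 ** R - 1)

# Using the oracle model from Section II with sample x = 0.7 and rate 6:
phi = lambda n: Fraction(round(0.7 * 2 ** n), 2 ** n)
idx = turing_encoder(phi, 6)
assert 0 <= idx < 2 ** 6                  # a valid rate-6 index
```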
IV. RESULTS FOR SCALAR QUANTIZATION

To state our main result, we need a way to measure the complexity of real functions. To this end, we adopt definitions of computable real functions from computational analysis, cf. [3]. Basically, a real function f: ℜ → ℜ is said to be (everywhere) computable with polynomial complexity if there is an OTM that is capable of providing, for every x ∈ ℜ, a dyadic approximation to f(x) to within an error of 2^{-n} for any desired precision n. This notion of everywhere computability may be extended to computability with high probability, which is formally defined below.

Definition 2: A real function f: ℜ → ℜ is said to be polynomially computable in probability with respect to a density p if there exist an OTM M, a sequence of initialization strings {s(n)}_{n=1}^∞, and a polynomial function ν(n) such that, for any oracle function φ, any precision n, and any x ∈ ℜ, M halts in fewer than ν(n) steps, and its output M(s(n), φ, x) is a dyadic rational which satisfies

$$ P\bigl( \{ x : | M(s(n), \phi, x) - f(x) | \le 2^{-n} \} \bigr) \;\ge\; 1 - 2^{-n} $$

for all n, where P is the probability measure corresponding to the density p.

Our main result, presented below, relates the complexity of quantization to the complexity of a real function, namely the distribution function of the well-known optimal point density for the source.

Theorem 1: Suppose a source density p satisfies some technical conditions not stated here. Then the source density p is PAOQ if the distribution function

$$ F(x) \;=\; c \int_{-\infty}^{x} p(t)^{1/3} \, dt $$

is polynomially computable in probability, where c is the normalization constant.

Unfortunately, no simple characterization of distribution functions that are polynomially computable in probability is known to date. For a specific source density p, one may individually determine the computability of F(x) and apply Theorem 1 to draw inferences about the complexity of quantization. For instance, one can easily show that uniform densities are PAOQ. More interestingly, we show that source densities of the form

$$ p(x) \;=\; \mu \exp\bigl( -\kappa |x|^{\alpha} \bigr) $$

are PAOQ, where α ≥ 1 is an integer and μ and κ are real constants. It follows that both Gaussian (α = 2) and Laplacian (α = 1) densities are PAOQ.
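For intuition behind the one-third power, recall the classical Panter-Dite asymptotics, D*(p, R) ≈ (1/12) 2^{-2R} (∫ p(t)^{1/3} dt)^3, in which the same quantity ∫ p^{1/3} appears. The following Python sketch (ours, for illustration) evaluates the ideal compressor F of Theorem 1 for a Gaussian source: since p^{1/3} is proportional to a Gaussian density with variance 3σ², F reduces to the N(0, 3σ²) distribution function, which is computable to precision 2^{-n} in time polynomial in n.

```python
from math import erf, sqrt

def F_gauss(x, sigma=1.0):
    """Ideal compressor F(x) = c * integral_{-inf}^{x} p(t)**(1/3) dt for a
    zero-mean Gaussian density p with standard deviation sigma.  Because
    p**(1/3) is proportional to a Gaussian with variance 3*sigma**2, F is
    exactly the N(0, 3*sigma**2) distribution function."""
    s = sqrt(3.0) * sigma                          # std. dev. after the 1/3 power
    return 0.5 * (1.0 + erf(x / (s * sqrt(2.0))))

assert abs(F_gauss(0.0) - 0.5) < 1e-12             # symmetry check: F(0) = 1/2
```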
V. SKETCH OF PROOF

Here we briefly sketch the proof of Theorem 1. The proof uses the companding implementation of scalar quantizers. Consider a quantizer with quantization rule given by Q_R = G ∘ U_R ∘ F, where U_R denotes a uniform quantizer of rate R on the interval (0, 1), F denotes the "ideal" compressor function defined in Theorem 1, and G denotes the inverse function of F. We first use the assumed technical conditions to show that the distortion performance of Q_R is asymptotically optimal. Now consider another quantizer Q_{R,n,φ} with the "ideal" compressor replaced by an OTM M that computes F in a polynomial number of steps, i.e., Q_{R,n,φ} = G ∘ U_R ∘ F_{n,φ}, where F_{n,φ} denotes the function mapping generated by the OTM M with oracle function φ and precision n. A Turing encoder for Q_{R,n,φ} that operates in ν(R, n) steps can easily be constructed using the OTM M, where ν(R, n) denotes some polynomial function in R and n. When n = 2R, we show that the percentage difference between the distortions of Q_R and Q_{R,n,φ} converges to zero uniformly over all oracle functions φ. Combining this with the asymptotic optimality of Q_R mentioned above, Theorem 1 follows.
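To make the companding construction concrete, here is a short Python illustration (our own sketch; the bisection routine is a stand-in for the expander G = F^{-1}, and the Gaussian compressor matches the example of Section IV):

```python
from math import erf, floor, sqrt

def compand_encode(x, F, R):
    """Companding rule Q_R = G o U_R o F: compress x with F, then apply a
    uniform quantizer of rate R on the interval (0, 1)."""
    u = min(max(F(x), 0.0), 1.0)
    return min(int(floor(u * 2 ** R)), 2 ** R - 1)  # index in {0, ..., 2**R - 1}

def expand(F, u, lo=-50.0, hi=50.0, iters=60):
    """Bisection inverse of a strictly increasing F; plays the role of G."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) < u else (lo, mid)
    return 0.5 * (lo + hi)

# Ideal compressor for a unit-variance Gaussian source (see Theorem 1).
F = lambda x: 0.5 * (1.0 + erf(x / sqrt(6.0)))
R = 8
i = compand_encode(1.3, F, R)
x_hat = expand(F, (i + 0.5) / 2 ** R)               # reconstruct at the cell midpoint
```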
REFERENCES

[1] C. E. Shannon, "Coding theorems for a discrete source with a fidelity criterion," IRE Nat. Conv. Rec., pp. 142-163, 1959.
[2] C. H. Papadimitriou, Computational Complexity, Addison-Wesley, 1994.
[3] K. Ko, Complexity Theory of Real Functions, Birkhäuser, 1991.