On the Encoding Complexity of Scalar Quantizers - IEEE Xplore

On the Encoding Complexity of Scalar Quantizers

Dennis Hui and David L. Neuhoff
Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109

Abstract -- It is shown that, as rate increases, the problem of asymptotically optimal scalar quantization has polynomial-time (or space) encoding complexity if the distribution function corresponding to the one-third power of the source density is polynomial-time (or space) computable in the Turing sense.

I. INTRODUCTION

Shannon's distortion-rate theory describes the optimal tradeoff between rate and distortion of vector quantizers. While it does not address the question of complexity, it is evident that quantizers generally need to become very complex in order to approach the optimal performance tradeoff, namely, the distortion-rate function. It is well known that a full-search unstructured quantizer with dimension k and rate R has storage and arithmetic complexity increasing exponentially with the dimension-rate product kR. However, there are many reduced-complexity full-search methods, and the question of how fast complexity must increase as performance approaches the rate-distortion function is open. Moreover, there are many structured vector quantization techniques whose complexities are substantially less than that of full search, but whose performance does not approach the distortion-rate function. It is unclear whether there exist structured quantizers with significantly reduced complexity and distortion close to the optimum. The approach taken in this paper is to consider how the complexity of (asymptotically) optimal quantization with a given dimension k increases with rate R. Specifically, as an initial effort, we focus on the encoding complexity of scalar quantization.

II. PROBLEM FORMULATION

In stating and deriving the main result we adopt a Turing-like framework for evaluating complexity. Instead of assuming a different encoding machine for each R, whose relative complexities would be difficult, if not impossible, to assess, we envision one machine, namely an oracle Turing machine M, cf. [1,2], that is capable of encoding at any integer rate. That is, when rate R is specified, its output in response to a source sample x is an index I, 1 ≤ I ≤ 2^R. We let d(M, p, R) denote the mean-squared error (MSE) that results when this Turing encoder is used with an optimum decoder. In the context of encoding, an oracle Turing machine consists of a finite-state machine, an unlimited tape memory, and an oracle that provides a dyadic approximation to the source sample x to the required precision. The time (space) complexity of encoding at rate R with this machine, denoted c(M, R), is the maximum number of steps (alternatively, the maximum amount of tape memory) required to encode an arbitrary input sample. We say a source density p is asymptotically optimally quantizable in polynomial time (or space), abbreviated PTIME-AOQ (or PSPACE-AOQ), if there exists a Turing encoder M and a polynomial g such that

    lim_{R→∞} d(M, p, R) / D*(p, R) = 1   and   c(M, R) ≤ g(R) for all R,

where D*(p, R) is the mean-squared error of the optimum quantizer of rate R.

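In software terms, the oracle model above can be mimicked by handing the encoder a callback that returns dyadic approximations of x on demand, so that the encoder's work depends on the rate R rather than on any fixed representation of the real number x. The following is a minimal sketch under our own illustrative choices (a uniform source on [0, 1) and floor-style rounding); it is not the paper's construction:

```python
from fractions import Fraction

def make_oracle(x: Fraction):
    """Wrap a source sample x as an oracle: oracle(n) returns a dyadic
    rational d with |d - x| <= 2**(-n) (here d = floor(x * 2**n) / 2**n,
    for x >= 0)."""
    def oracle(n: int) -> Fraction:
        return Fraction(int(x * 2 ** n), 2 ** n)
    return oracle

def uniform_encode(oracle, R: int) -> int:
    """Rate-R encoder for a uniform source on [0, 1): it queries the
    oracle for only R fractional bits of x, so its work grows with the
    rate R, not with the precision of x itself."""
    d = oracle(R)                                # dyadic approximation of x
    return min(int(d * 2 ** R), 2 ** R - 1) + 1  # index I with 1 <= I <= 2**R

x = Fraction(5, 7)                     # an exact rational stands in for a real
I = uniform_encode(make_oracle(x), 4)  # 5/7 ≈ 0.714 lies in cell I = 12 of 16
```

The `Fraction` type keeps the oracle's answers exact dyadic rationals, matching the model's requirement that the approximation error be at most 2^{-n}.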
Intuitively, it is easy to see that some source densities are intrinsically easier to quantize than others. For instance, sources with uniform density can be optimally quantized by simple uniform quantizers. On the other hand, it is also known that the optimal quantization point density for a given source is directly related to the one-third power of the source density. Therefore, it seems reasonable that the possibility of optimal quantization with polynomial complexity should depend on the "complexity" of the desired point density. In order to rigorously analyze this relationship, we adopt the framework of Turing complexity for real-valued functions, cf. [2]. In this theory, a real-valued function f: R → R is said to be polynomial-time (space) computable if there is an oracle Turing machine M that is capable of providing, for any x, a dyadic approximation to f(x) to within an error of 2^{-n} for any pre-specified integer n, and its time (space) complexity is bounded from above by a polynomial function of n. We are now ready to present the main results of this paper.

III. RESULTS

Proposition 1: Suppose λ(x) is a desired quantization point density such that ∫ p(x) λ(x)^{-2} dx < ∞ and the function F(x) = ∫_{-∞}^{x} λ(y) dy is polynomial-time (alternatively, space) computable. Then there exists a Turing encoder M that runs in polynomial time (space) with the resulting MSE satisfying

    lim_{R→∞} d(M, p, R) / D̄(λ, p, R) = 1,

where D̄(λ, p, R) = (1/12) 2^{-2R} ∫ p(y) λ(y)^{-2} dy is the Bennett integral prediction for the MSE of a quantizer with a given point density.

Corollary 2: If the source density p is such that the function F(x) = κ ∫_{-∞}^{x} p(y)^{1/3} dy, where κ = (∫_{-∞}^{∞} p(y)^{1/3} dy)^{-1}, is polynomial-time (space) computable, then p is PTIME-AOQ (PSPACE-AOQ).

By applying Corollary 2, one can easily show that Gaussian, Laplacian and uniform source densities with zero mean and unit variance are PTIME-AOQ and PSPACE-AOQ. On the other hand, it is also possible to construct a source density p for which the function F in Corollary 2 is not computable in polynomial time.

REFERENCES

[1] M.R. Garey and D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman, 1979.
[2] Ker-I Ko, Complexity Theory of Real Functions, Birkhäuser, 1991.

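The encoder behind Proposition 1 and Corollary 2 can be sketched as a compander: build F from the normalized one-third power of p, then emit the index I = ⌈2^R F(x)⌉, which requires F(x) to only about R bits of accuracy. The numerical grid below is our own illustrative stand-in for the paper's polynomial-time oracle computation of F (the grid bounds and clamping are assumptions of this sketch, not the paper's construction):

```python
import numpy as np

def make_F(p, lo=-10.0, hi=10.0, grid=100_001):
    """Numerical stand-in for F(x) = kappa * int_{-inf}^{x} p(y)**(1/3) dy,
    with kappa chosen so that F ranges over [0, 1]."""
    x = np.linspace(lo, hi, grid)
    lam = p(x) ** (1.0 / 3.0)
    # cumulative trapezoid rule for the integral of lam up to each grid point
    cdf = np.concatenate(([0.0], np.cumsum((lam[1:] + lam[:-1]) / 2.0 * np.diff(x))))
    cdf /= cdf[-1]                              # kappa normalization
    return lambda t: float(np.interp(t, x, cdf))

def encode(F, x, R):
    """Compander encoder: I = ceil(2**R * F(x)), clamped so 1 <= I <= 2**R.
    Only about R bits of F(x) matter, so a polynomial-time F yields
    encoding time polynomial in the rate R."""
    u = min(max(F(x), 0.0), 1.0)
    return max(1, min(int(np.ceil(u * 2 ** R)), 2 ** R))

gauss = lambda x: np.exp(-np.asarray(x) ** 2 / 2) / np.sqrt(2 * np.pi)
F = make_F(gauss)
I = encode(F, 0.0, 3)   # x = 0 sits at F = 1/2, on the boundary of cells 4 and 5
```

Since p^{1/3} for a standard Gaussian is again bell-shaped (with a wider spread), the resulting cells are narrow near the mean and wide in the tails, as the optimal point density prescribes.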