IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 44, NO. 9, SEPTEMBER 1997
A Structured Matrix Approach for Spatial-Domain Approximation of 2-D IIR Filters

Arnab K. Shaw and Srikanth Pokala

Abstract—This brief addresses least-squares (LS) approximation of a prescribed spatial response by a quarter-plane 2-D IIR filter. Using a structured matrix representation, it is shown that the spatial-domain error vector between the desired and estimated responses is linearly related to the numerator coefficients and nonlinearly related to the denominator coefficients. The numerator and denominator estimation problems are theoretically decoupled without affecting the optimality properties. The decoupled denominator criterion possesses a quasi-quadratic form, and its optimization is computationally efficient, requiring very few iterations. The numerator is estimated linearly in a single step. The effectiveness of the algorithm is demonstrated with several examples.
Index Terms—2-D digital filters, IIR filters, structured matrix approximation.

Manuscript received March 6, 1995; revised July 30, 1996. This work was supported in part by AFOSR-F49620-93-1-0014. This paper was recommended by Associate Editor T. Hinamoto. The authors are with the Electrical Engineering Department, Wright State University, Dayton, OH 45435 USA. Publisher Item Identifier S 1057-7130(97)03642-2.

I. INTRODUCTION

The least-squares (LS) spatial-domain design of both 1- and 2-D IIR filters is essentially a nonlinear optimization problem. Among existing methods, variations of general nonlinear optimization methods have been used in [1]–[4], [18], and [24] for $\ell_2$-norm minimization. These methods perform well if good initial estimates are available, though in many cases a large number of iterations may be needed for convergence [3]. Another class of 2-D filter synthesis methods attempts to exploit the special matrix structures inherent in this LS approximation problem. These algorithms modify as well as linearize the true nonlinear problem in order to obtain simplified solutions [13], [20], [24]. These methods are suboptimal in the sense that they do not optimize the true fitting error criterion. However, the 1-D method due to Evans and Fischl (EFM) [6], [11] is optimal because it does optimize the true LS error criterion. There have been some previous attempts to generalize EFM for designing 2-D IIR filters, though only for separable-denominator cases [9], [10], [22]. It should also be emphasized that in [9] and [10] the complete optimal criterion had not been minimized and certain restrictions were imposed on the choice of numerator/denominator orders.

In this brief, we develop a structured matrix approximation framework to formulate the most general 2-D quarter-plane filter synthesis algorithm for optimal least-squares design of 2-D recursive filters from prescribed spatial response data. The proposed method (OM-2D) is an extension of a recently proposed 1-D optimal method (OM) [21] that, unlike [10] and [22], imposes no restriction on the numerator or denominator orders. Another significant difference in OM-2D is that the 2-D denominator is nonseparable. The exact matrix structures inherent in this LS problem are utilized to show that the $\ell_2$ error has a purely linear relationship with the 2-D numerator parameters, whereas the 2-D denominator coefficients are nonlinearly related to the error. More interestingly, the matrix-vector representation reveals that these two sets of parameters appear separately in the 2-D LS criterion. Consequently, the numerator and denominator estimation problems are mathematically decoupled without affecting any optimality properties. In the decoupled form, the numerator estimation problem is purely linear, whereas the denominator criterion possesses a quasi-quadratic relationship with the unknown coefficients, which naturally leads to a convenient iterative algorithm. Decoupled estimation reduces the computational complexity because there is no need for iterating on the numerators as in [25] or other standard nonlinear optimization methods [1]–[4], [18], [24]. Simulation results for several common filter design problems indicate that the performance of the proposed OM-2D is superior to that of various algorithms reported in [1], [3], [9], [13], and [18].

II. PROBLEM DEFINITION

A general quarter-plane 2-D LSI system is characterized by the transfer function
$$H(z_1, z_2) = \frac{A(z_1, z_2)}{B(z_1, z_2)} = \frac{\displaystyle\sum_{i=0}^{n_1}\sum_{j=0}^{n_2} a(i,j)\, z_1^{-i} z_2^{-j}}{\displaystyle\sum_{i=0}^{m_1}\sum_{j=0}^{m_2} b(i,j)\, z_1^{-i} z_2^{-j}} \tag{1}$$
where $\{a(i,j)\}$ and $\{b(i,j)\}$ denote the numerator and denominator coefficients, respectively. Let the $k_1 \times k_2$ significant grid of the impulse (spatial) response of the above filter be given by $H$, where
$$H(i,j) \triangleq h(i-1, j-1), \qquad i = 1, \ldots, k_1,\; j = 1, \ldots, k_2 \tag{2}$$
and let the corresponding prescribed 1-quadrant spatial response matrix be given by $X$, where
$$X(i,j) \triangleq x(i-1, j-1), \qquad i = 1, \ldots, k_1,\; j = 1, \ldots, k_2. \tag{3}$$
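For concreteness, the following minimal numpy sketch (not part of the original brief; the function and variable names are illustrative) computes the $k_1 \times k_2$ impulse-response grid of the quarter-plane filter in (1) by direct 2-D recursion, which is how the grid $H$ in (2) can be generated for a candidate coefficient set.

```python
import numpy as np

def impulse_response(a, b, k1, k2):
    """k1 x k2 impulse-response grid of the quarter-plane filter (1),
    computed by direct 2-D recursion.  a has shape (n1+1, n2+1),
    b has shape (m1+1, m2+1), and b[0, 0] = 1 as assumed in (6)."""
    n1, n2 = a.shape[0] - 1, a.shape[1] - 1
    m1, m2 = b.shape[0] - 1, b.shape[1] - 1
    h = np.zeros((k1, k2))
    for i in range(k1):
        for j in range(k2):
            # numerator term a(i, j), zero outside its support
            acc = a[i, j] if (i <= n1 and j <= n2) else 0.0
            # feedback from previously computed outputs
            for p in range(min(i, m1) + 1):
                for q in range(min(j, m2) + 1):
                    if p == 0 and q == 0:
                        continue
                    acc -= b[p, q] * h[i - p, j - q]
            h[i, j] = acc / b[0, 0]
    return h

# Example: a first-order filter evaluated on an 8 x 8 grid
a = np.array([[1.0, 0.3], [0.2, 0.1]])
b = np.array([[1.0, -0.4], [-0.3, 0.12]])
H = impulse_response(a, b, 8, 8)   # H[i, j] = h(i, j) of (2), 0-based
```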
Define the coefficient vectors
$$\mathbf{a} \triangleq [a(0,0)\; \cdots\; a(n_1,0)\; \cdots\; a(0,n_2)\; \cdots\; a(n_1,n_2)]^T \tag{4}$$
$$\mathbf{b} \triangleq [b(0,0)\; \cdots\; b(m_1,0)\; \cdots\; b(0,m_2)\; \cdots\; b(m_1,m_2)]^T. \tag{5}$$
In vector form, the spatial responses are defined as $\mathbf{x} = \mathrm{vec}(X)$ and $\mathbf{h} = \mathrm{vec}(H)$, where vec denotes the operation of stacking the columns of a matrix. The problem addressed in this brief is to estimate the optimal $\mathbf{a}$ and $\mathbf{b}$ that minimize the $\ell_2$-norm of the error between $\mathbf{x}$ and $\mathbf{h}$, i.e.,
$$\min_{\mathbf{a},\mathbf{b}} \|\mathbf{e}\|^2 \triangleq \min_{\mathbf{a},\mathbf{b}} \|\mathbf{x} - \mathbf{h}\|^2, \qquad \text{with } b(0,0) = 1. \tag{6}$$
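As a small illustration (again a sketch with made-up data, not the authors' code), column stacking realizes the coefficient vectors (4)–(5) as well as $\mathbf{x} = \mathrm{vec}(X)$ and $\mathbf{h} = \mathrm{vec}(H)$, and the criterion (6) is then an ordinary vector norm:

```python
import numpy as np

def vec(M):
    """vec() of the text: stack the columns of a matrix."""
    return M.reshape(-1, order='F')

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))        # a(i, j) grid with n1 = 2, n2 = 1
a_vec = vec(A)                         # [a(0,0) a(1,0) a(2,0) a(0,1) ...]^T as in (4)

X = rng.standard_normal((8, 8))        # prescribed response grid
Hgrid = rng.standard_normal((8, 8))    # stand-in for a candidate filter's grid
err = np.linalg.norm(vec(X) - vec(Hgrid)) ** 2   # the l2 criterion of (6)
```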
III. STRUCTURED FORMULATION OF THE ERROR CRITERION

Let $H_b(z_1,z_2)$ denote the inverse filter of $B(z_1,z_2)$, i.e., $B(z_1,z_2)H_b(z_1,z_2) = 1$. Then (1) can be rewritten as
$$H(z_1,z_2) = \frac{A(z_1,z_2)}{B(z_1,z_2)} = H_b(z_1,z_2)\,A(z_1,z_2). \tag{7}$$
The significant $k_1 \times k_2$ samples of the impulse response of this filter can be expressed, in matrix notation, in terms of the numerator coefficients $\{a(i,j)\}$ and the inverse-filter coefficients $\{h_b(i,j)\}$ as
$$\mathbf{h} = \mathbf{H}_b\, \mathbf{a} \tag{8}$$
where
$$\mathbf{H}_b \triangleq \begin{bmatrix}
H_0 & \mathbf{0} & \cdots & \mathbf{0} \\
H_1 & H_0 & & \vdots \\
\vdots & & \ddots & \mathbf{0} \\
H_{n_2} & H_{n_2-1} & \cdots & H_0 \\
\vdots & \vdots & & \vdots \\
H_{k_2-1} & H_{k_2-2} & \cdots & H_{k_2-n_2-1}
\end{bmatrix} \in \mathbb{R}^{k_1 k_2 \times (n_1+1)(n_2+1)} \tag{9}$$
with each $k_1 \times (n_1+1)$ block defined by
$$H_l(i,j) \triangleq \begin{cases} h_b(i-j,\, l), & (i-j) \ge 0 \\ 0, & (i-j) < 0. \end{cases}$$
The criterion in (6) thus becomes
$$\min_{\mathbf{a},\mathbf{b}} \|\mathbf{e}\|^2 = \min_{\mathbf{a},\mathbf{b}} \|\mathbf{x} - \mathbf{H}_b\, \mathbf{a}\|^2. \tag{10}$$
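The block matrix in (9) can be formed numerically by shifting the truncated inverse-filter response: the column of $\mathbf{H}_b$ associated with $a(p,q)$ is the vec of $h_b$ delayed by $(p,q)$. The sketch below (illustrative names; it assumes the column-major vec convention used above) builds $\mathbf{H}_b$ this way and can be checked against the direct recursion of the full filter, since (8) states $\mathbf{h} = \mathbf{H}_b\mathbf{a}$.

```python
import numpy as np

def inverse_filter_coeffs(b, k1, k2):
    """First k1 x k2 samples of h_b, the impulse response of 1/B(z1, z2)."""
    hb = np.zeros((k1, k2))
    m1, m2 = b.shape[0] - 1, b.shape[1] - 1
    for i in range(k1):
        for j in range(k2):
            acc = 1.0 if (i == 0 and j == 0) else 0.0
            for p in range(min(i, m1) + 1):
                for q in range(min(j, m2) + 1):
                    if p == 0 and q == 0:
                        continue
                    acc -= b[p, q] * hb[i - p, j - q]
            hb[i, j] = acc / b[0, 0]
    return hb

def build_Hb(hb, n1, n2):
    """H_b of (9): the column for a(p, q) holds vec of h_b shifted by (p, q)."""
    k1, k2 = hb.shape
    cols = []
    for q in range(n2 + 1):              # column-blocks, as in (9)
        for p in range(n1 + 1):
            shifted = np.zeros((k1, k2))
            shifted[p:, q:] = hb[:k1 - p, :k2 - q]
            cols.append(shifted.reshape(-1, order='F'))
    return np.column_stack(cols)         # k1 k2 x (n1+1)(n2+1)

# Consistency check of (8): H_b a reproduces the filter's impulse response.
b = np.array([[1.0, -0.4], [-0.3, 0.1]])
A = np.array([[1.0, 0.5], [0.2, -0.1]])  # numerator grid, n1 = n2 = 1
k1 = k2 = 8
Hb = build_Hb(inverse_filter_coeffs(b, k1, k2), 1, 1)
h_vec = Hb @ A.reshape(-1, order='F')
# h_vec.reshape((k1, k2), order='F') matches impulse_response(A, b, k1, k2)
# from the earlier sketch.
```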
A. Decoupling Numerator and Denominator Estimation

If the denominator, i.e., $\mathbf{H}_b$, is known, the LS estimate of the numerator is obtained from (10) as
$$\hat{\mathbf{a}} = (\mathbf{H}_b^T \mathbf{H}_b)^{-1} \mathbf{H}_b^T\, \mathbf{x} \triangleq \mathbf{H}_b^{\#}\, \mathbf{x} \tag{11}$$
where $\mathbf{H}_b^{\#}$ is the pseudoinverse of $\mathbf{H}_b$. Plugging this back in (10), we get
$$\min_{\mathbf{b}} \|\mathbf{e}_b\|^2 = \min_{\mathbf{b}} \|\mathbf{x} - P_{H_b}\mathbf{x}\|^2 = \min_{\mathbf{b}} \|(I_{k_1 k_2} - P_{H_b})\,\mathbf{x}\|^2 \tag{12}$$
where $P_{H_b} = \mathbf{H}_b(\mathbf{H}_b^T \mathbf{H}_b)^{-1}\mathbf{H}_b^T$ denotes the projection matrix of $\mathbf{H}_b$. Note that the criterion in (12) is a function of $\{h_b(i,j)\}$, or equivalently, of the denominator coefficients $\{b(i,j)\}$ only, and the minimization needs to be done w.r.t. only these coefficients. Clearly, the numerator and denominator problems are theoretically decoupled in (11) and (12), respectively. Indeed, in a more general setting of nonlinear optimization problems with separable parameters, it has been shown in [8, Th. 2.1] that if the denominator is estimated by minimizing the criterion in (12) and the numerator is then computed by (11), the resulting estimates are identical to the unique and global minimizers of the original criterion in (6). Note, however, that in (12) the denominator coefficients are related to the error criterion in a highly nonlinear manner. Thus, its direct minimization can only be accomplished via standard nonlinear optimization methods [7], [12], [15]. Instead, we will reparameterize (12) so as to relate it directly to the denominator coefficients in $\mathbf{b}$.

B. Reparametrization of the Denominator Criterion

Reparameterization of (12) in terms of $\mathbf{b}$ necessitates identifying the basis space orthogonal to $\mathbf{H}_b$. For that, consider the inverse filter relation $B(z_1,z_2)H_b(z_1,z_2) = 1$, which represents a 2-D convolution of the coefficients in $B(z_1,z_2)$ and $H_b(z_1,z_2)$. In matrix form, this convolution can be expressed as
$$\mathbf{B}_f\, \mathbf{H}_f = I_{k_1 k_2} \tag{13}$$
where
$$\mathbf{H}_f \triangleq \begin{bmatrix}
H_0^f & \mathbf{0} & \cdots & \mathbf{0} \\
H_1^f & H_0^f & & \vdots \\
\vdots & & \ddots & \mathbf{0} \\
H_{k_2-1}^f & H_{k_2-2}^f & \cdots & H_0^f
\end{bmatrix} \tag{14}$$
$$\mathbf{B}_f \triangleq \begin{bmatrix}
B_0 & \mathbf{0} & \cdots & & \mathbf{0} \\
B_1 & B_0 & & & \vdots \\
\vdots & & \ddots & & \\
B_{m_2} & B_{m_2-1} & \cdots & B_0 & \\
 & \ddots & & & \ddots \\
\mathbf{0} & \cdots & B_{m_2} & \cdots & B_0
\end{bmatrix} \tag{15}$$
with the $k_1 \times k_1$ blocks defined by
$$H_l^f(i,j) \triangleq \begin{cases} h_b(i-j,\, l), & (i-j) \ge 0 \\ 0, & (i-j) < 0 \end{cases} \qquad
B_l(i,j) \triangleq \begin{cases} b(i-j,\, l), & 0 \le (i-j) \le m_1 \\ 0, & \text{otherwise.} \end{cases} \tag{16}$$
Note that $\mathbf{B}_f \in \mathbb{R}^{k_1 k_2 \times k_1 k_2}$ and $\mathbf{H}_f \in \mathbb{R}^{k_1 k_2 \times k_1 k_2}$. The matrix $\mathbf{H}_f$ can be viewed as having $k_2$ column-blocks, where each column-block contains $k_1$ columns. Comparing the structures of $\mathbf{H}_f$ in (14) and $\mathbf{H}_b$ in (9), it is evident that $\mathbf{H}_b$ can be formed using the first $(n_1+1)$ columns of the first $(n_2+1)$ column-blocks of $\mathbf{H}_f$. If we "take out" the corresponding rows (a total of $(n_1+1)(n_2+1)$) from $\mathbf{B}_f$, the remaining matrix, call it $\mathbf{B}$, possesses the following orthogonal relationship:
$$\mathbf{B}\, \mathbf{H}_b = \mathbf{0}, \qquad \mathbf{B} \triangleq \begin{bmatrix}
D_0 & \mathbf{0} & \cdots & & \mathbf{0} \\
D_1 & D_0 & & & \vdots \\
\vdots & & \ddots & & \\
D_{m_2} & D_{m_2-1} & \cdots & D_0 & \\
 & \ddots & & & \ddots \\
\mathbf{0} & \cdots & D_{m_2} & \cdots & D_0
\end{bmatrix} \in \mathbb{R}^{[k_1 k_2 - (n_1+1)(n_2+1)] \times k_1 k_2} \tag{17}$$
where the blocks $D_i$ are the row-deleted counterparts of the $B_i$ blocks:
$$D_i \triangleq \begin{bmatrix}
b(n_1{+}1, i) & \cdots & b(0,i) & & \mathbf{0} \\
\vdots & \ddots & & \ddots & \\
b(m_1, i) & \cdots & & \cdots & b(0,i)
\end{bmatrix} \in \mathbb{R}^{(k_1-n_1-1) \times k_1}, \quad \text{for the first } (n_2+1) \text{ row-blocks}$$
$$D_i \triangleq \begin{bmatrix}
b(0,i) & & & \mathbf{0} \\
b(1,i) & b(0,i) & & \\
\vdots & & \ddots & \\
b(m_1,i) & b(m_1{-}1,i) & \cdots & b(0,i) \\
 & \ddots & & \vdots \\
\mathbf{0} & & \cdots & b(0,i)
\end{bmatrix} \in \mathbb{R}^{k_1 \times k_1}, \quad \text{otherwise.} \tag{18}$$
By their very structures, the matrices $\mathbf{B}$ and $\mathbf{H}_b$ have full rank and
$$\mathrm{rank}(\mathbf{B}) + \mathrm{rank}(\mathbf{H}_b) = \{k_1 k_2 - (n_1+1)(n_2+1)\} + \{(n_1+1)(n_2+1)\} = k_1 k_2. \tag{19}$$
Hence, using a theorem on projection matrices of orthogonally related matrices [17], we have
$$P_{\mathbf{B}} + P_{H_b} = I_{k_1 k_2}, \qquad P_{\mathbf{B}} \triangleq \mathbf{B}^T(\mathbf{B}\mathbf{B}^T)^{-1}\mathbf{B}. \tag{20}$$
Using the above relation in (12), and after some simplification, we get
$$\min_{\mathbf{b}} \|\mathbf{e}_b\|^2 = \min_{\mathbf{b}}\, \mathbf{x}^T \mathbf{B}^T (\mathbf{B}\mathbf{B}^T)^{-1} \mathbf{B}\, \mathbf{x} = \min_{\mathbf{b}}\, \mathbf{b}^T X_s^T (\mathbf{B}\mathbf{B}^T)^{-1} X_s\, \mathbf{b} \tag{21}$$
where the second equality uses the structured-matrix identity $\mathbf{B}\mathbf{x} = X_s\mathbf{b}$, with
$$X_s \triangleq \begin{bmatrix}
J_0 & \mathbf{0} & \cdots & \mathbf{0} \\
J_1 & J_0 & & \vdots \\
\vdots & & \ddots & \\
J_{m_2} & J_{m_2-1} & \cdots & J_0 \\
\vdots & & & \vdots \\
J_{k_2-1} & J_{k_2-2} & \cdots & J_{k_2-m_2-1}
\end{bmatrix} \in \mathbb{R}^{[k_1 k_2 - (n_1+1)(n_2+1)] \times (m_1+1)(m_2+1)} \tag{22}$$
and the blocks $J_i$ built from the prescribed data:
$$J_i \triangleq \begin{bmatrix}
x(n_1{+}1, i) & x(n_1, i) & \cdots \\
\vdots & & \ddots \\
x(k_1{-}1, i) & x(k_1{-}2, i) & \cdots & x(k_1{-}m_1{-}1, i)
\end{bmatrix} \in \mathbb{R}^{(k_1-n_1-1) \times (m_1+1)}, \quad \text{for the first } (n_2+1) \text{ row-blocks}$$
$$J_i \triangleq \begin{bmatrix}
x(0,i) & & \mathbf{0} \\
x(1,i) & x(0,i) & \\
\vdots & & \ddots \\
x(k_1{-}1,i) & x(k_1{-}2,i) & \cdots & x(k_1{-}m_1{-}1,i)
\end{bmatrix} \in \mathbb{R}^{k_1 \times (m_1+1)}, \quad \text{otherwise.} \tag{23}$$
The significance of the reparameterized criterion in (21) is that the denominator coefficients appear in it explicitly. Furthermore, with respect to the unknowns in $\mathbf{b}$, the criterion has an obvious weighted-quadratic structure which is amenable to the formulation of a convenient iterative algorithm, as outlined in the next section.
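Because the matrices in (15)–(23) are built purely from convolution structure, the key identities $\mathbf{B}\mathbf{H}_b = \mathbf{0}$ and $\mathbf{B}\mathbf{x} = X_s\mathbf{b}$ are easy to confirm numerically. The sketch below (illustrative, not the authors' code) forms $\mathbf{B}_f$, "takes out" the rows indexed by $i \le n_1$, $j \le n_2$, and builds $X_s$ column by column from unit coefficient grids rather than through the explicit block form (22)–(23); both constructions are equivalent because $\mathbf{B}\mathbf{x}$ is linear in $\mathbf{b}$.

```python
import numpy as np

def build_Bf(b, k1, k2):
    """B_f of (15): 2-D convolution matrix of b acting on vec'd k1 x k2 grids
    (column-major vec, so grid point (i, j) maps to row j*k1 + i)."""
    m1, m2 = b.shape[0] - 1, b.shape[1] - 1
    Bf = np.zeros((k1 * k2, k1 * k2))
    for j in range(k2):
        for i in range(k1):
            r = j * k1 + i
            for q in range(min(j, m2) + 1):
                for p in range(min(i, m1) + 1):
                    Bf[r, (j - q) * k1 + (i - p)] = b[p, q]
    return Bf

def take_out_rows(Bf, k1, k2, n1, n2):
    """Remove the (n1+1)(n2+1) rows of B_f that correspond to grid points
    with i <= n1 and j <= n2, yielding the matrix B of (17)."""
    keep = [j * k1 + i for j in range(k2) for i in range(k1)
            if not (i <= n1 and j <= n2)]
    return Bf[keep, :]

def build_Xs(X, k1, k2, n1, n2, m1, m2):
    """X_s with B x = X_s b: the column for b(p, q) is the 'taken-out'
    convolution matrix of a unit coefficient at (p, q) applied to x."""
    x = X.reshape(-1, order='F')
    cols = []
    for q in range(m2 + 1):
        for p in range(m1 + 1):
            e = np.zeros((m1 + 1, m2 + 1))
            e[p, q] = 1.0
            cols.append(take_out_rows(build_Bf(e, k1, k2), k1, k2, n1, n2) @ x)
    return np.column_stack(cols)

# Numerical check of B x = X_s b for a small example.
rng = np.random.default_rng(2)
k1 = k2 = 7; n1 = n2 = 1; m1 = m2 = 1
b = np.array([[1.0, -0.35], [-0.25, 0.1]])
X = rng.standard_normal((k1, k2))
B = take_out_rows(build_Bf(b, k1, k2), k1, k2, n1, n2)
Xs = build_Xs(X, k1, k2, n1, n2, m1, m2)
assert np.allclose(B @ X.reshape(-1, order='F'), Xs @ b.reshape(-1, order='F'))
# B @ build_Hb(...) from the earlier sketch is likewise ~0, as in (17).
```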
C. Computational Algorithm for Estimating the Denominator

It can be seen from (21) that the (decoupled and reparameterized) error vector indeed has a quasi-linear relation to the denominator coefficients, given by
$$\mathbf{e}_b = \mathbf{B}^T(\mathbf{B}\mathbf{B}^T)^{-1} X_s\, \mathbf{b} \triangleq W X_s\, \mathbf{b}. \tag{24}$$
Assuming $W$ to be independent of $\mathbf{b}$, minimization of $\|\mathbf{e}_b\|^2$ can be accomplished using the following iterative algorithm:
$$\hat{\mathbf{b}}^{(i+1)} = \begin{bmatrix} 1 \\ -\bigl[G^{(i)}\bigr]^{\#}\,\mathbf{g}^{(i)} \end{bmatrix} \tag{25}$$
where
$$\bigl[\mathbf{g}^{(i)} \mid G^{(i)}\bigr] \triangleq W^{(i)} X_s, \qquad W^{(i)} \triangleq \bigl[\mathbf{B}^{(i)}\bigr]^T\Bigl(\mathbf{B}^{(i)}\bigl[\mathbf{B}^{(i)}\bigr]^T\Bigr)^{-1} \tag{26}$$
with $b(0,0) = 1$. Note that at the $(i+1)$th iteration, $\mathbf{B}$ is formed using the estimate of $\mathbf{b}$ from the previous iteration. The iterations are terminated when $\|\hat{\mathbf{b}}^{(i+1)} - \hat{\mathbf{b}}^{(i)}\| < \epsilon$, where $\epsilon$ is a small constant. Note that the estimate of $\mathbf{b}$ given above is iterative because the weighting matrix $W$ does depend on $\mathbf{b}$. In order to achieve a local optimum, the iterative algorithm in OM [21] also includes a second phase, where the variation of $W$ w.r.t. $\mathbf{b}$ is taken into account. However, our simulations indicate that the algorithm in (25) by itself produces excellent estimates and hence, for all practical purposes, there is no need to invoke the second phase.

For the initial estimate, a "natural" candidate here is to optimize the subproblem obtained by setting $(\mathbf{B}\mathbf{B}^T)^{-1}$ in (21) to an identity matrix [11], i.e., to minimize $\mathbf{b}^T X_s^T X_s\, \mathbf{b}$ with $b(0,0) = 1$. Interestingly enough, this "equation error" criterion corresponds to the exact "2-D linear prediction" criterion for this problem [16]. The result of this minimization is given by
$$\hat{\mathbf{b}}^{(0)} = \begin{bmatrix} 1 \\ -G^{\#}\,\mathbf{g} \end{bmatrix}, \qquad \text{where } [\mathbf{g} \mid G] \triangleq X_s. \tag{27}$$
Once the denominator coefficients are estimated, the numerator is found using (11).
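A compact sketch of the resulting procedure is given below. It is illustrative only and reuses the helper functions build_Bf, take_out_rows and build_Xs from the previous sketch (so it is not standalone); lstsq plays the role of the pseudoinverse in (25) and (27).

```python
import numpy as np

def om2d_denominator(X, n1, n2, m1, m2, eps=1e-5, max_iter=50):
    """Iterative denominator estimate following (25)-(27); assumes the
    helper sketches build_Bf, take_out_rows and build_Xs defined earlier."""
    k1, k2 = X.shape
    Xs = build_Xs(X, k1, k2, n1, n2, m1, m2)

    def solve(Y):
        # minimize ||g + G b_r||^2 with [g | G] = Y and b(0,0) fixed at 1
        g, G = Y[:, 0], Y[:, 1:]
        br = -np.linalg.lstsq(G, g, rcond=None)[0]
        return np.concatenate(([1.0], br))

    bvec = solve(Xs)                      # (27): equation-error initialization
    for _ in range(max_iter):
        Bmat = take_out_rows(
            build_Bf(bvec.reshape((m1 + 1, m2 + 1), order='F'), k1, k2),
            k1, k2, n1, n2)
        W = Bmat.T @ np.linalg.inv(Bmat @ Bmat.T)   # weighting of (24)/(26)
        b_new = solve(W @ Xs)                       # update (25)
        if np.linalg.norm(b_new - bvec) < eps:      # stopping rule of the text
            bvec = b_new
            break
        bvec = b_new
    return bvec.reshape((m1 + 1, m2 + 1), order='F')
```

The numerator then follows in a single step from (11), e.g. as np.linalg.pinv(Hb) @ X.reshape(-1, order='F'), with $\mathbf{H}_b$ rebuilt from the final denominator via the earlier build_Hb sketch.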
IV. SIMULATION RESULTS

We present several filter design examples in this section and compare the results of the proposed method with those of several others for which coefficients are readily available in [1], [3], [9], and [18]. Results of an equation-error-based Modified Prony's method (MPM) [13] have also been included for comparison purposes. One of the performance measures used for comparison of the various methods is defined as
$$\text{Closeness in dB} \triangleq -10 \log_{10} \frac{\displaystyle\sum_{i=0}^{k_1-1}\sum_{j=0}^{k_2-1} \bigl(x(i,j) - h(i,j)\bigr)^2}{\displaystyle\sum_{i=0}^{k_1-1}\sum_{j=0}^{k_2-1} x^2(i,j)}. \tag{28}$$
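For reference, (28) is straightforward to compute; a one-function sketch (illustrative naming) is:

```python
import numpy as np

def closeness_db(X, H):
    """Closeness measure (28): -10 log10 of the normalized squared error
    between the prescribed grid X and the designed response grid H."""
    return -10.0 * np.log10(np.sum((X - H) ** 2) / np.sum(X ** 2))
```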
Note that in all simulations, the value $\epsilon = 10^{-5}$ has been used as the stopping criterion.

A. Quarter-Plane Designs

1) Gaussian Filter Design: The impulse responses of the filters designed by MPM and the proposed method to approximate a Gaussian response, given by
$$h(i,j) = 0.256322\, \exp\bigl[-0.103203\,\{(i-4)^2 + (j-4)^2\}\bigr] \tag{29}$$
where $(i,j) \in S_f$, $S_f = \{(i,j)\,|\,0 \le i \le 14,\; 0 \le j \le 14\}$, are shown in Fig. 1(b) and (c), respectively. It can be seen that the impulse response of the filter designed by OM-2D matches very closely the ideal Gaussian response given in Fig. 1(a). This particular design has been studied extensively, and comparisons in terms of a few other performance measures are also available in the literature. Table I summarizes the comparison of the performance of OM-2D with three other existing methods. Clearly, when compared with the methods in [1], [9], and [18], the errors for the proposed design are much smaller in all the cases. The Closeness in dB values are also given in Table II.

TABLE I: COMPARISON OF THE ERRORS FOR A FEW METHODS FOR THE GAUSSIAN CASE (DEFINITIONS ARE GIVEN IN [1], [9], AND [18])

Fig. 1. Gaussian filter.

2) Laplacian-of-Gaussian Filter Design: The filter responses for OM-2D and MPM for the Laplacian-of-Gaussian response, defined by
$$h(i,j) = \Bigl(1 - \tfrac{1}{4}\bigl((i-4)^2 + (j-4)^2\bigr)\Bigr)\, \exp\Bigl[-\tfrac{1}{4}\bigl((i-4)^2 + (j-4)^2\bigr)\Bigr] \tag{30}$$
where $(i,j) \in S_f$, $S_f = \{(i,j)\,|\,0 \le i \le 21,\; 0 \le j \le 21\}$, are shown in Fig. 2.

Fig. 2. Laplacian-of-Gaussian filter.

Fig. 3. Lowpass filter.
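The two prescribed responses (29) and (30) can be generated directly on their supports; the following sketch (illustrative, with grid sizes taken from $S_f$ above) does so with numpy:

```python
import numpy as np

def gaussian_response(k=15):
    """Desired Gaussian response (29) on S_f = {0 <= i, j <= 14}."""
    i, j = np.meshgrid(np.arange(k), np.arange(k), indexing='ij')
    return 0.256322 * np.exp(-0.103203 * ((i - 4) ** 2 + (j - 4) ** 2))

def log_response(k=22):
    """Desired Laplacian-of-Gaussian response (30) on {0 <= i, j <= 21}."""
    i, j = np.meshgrid(np.arange(k), np.arange(k), indexing='ij')
    r2 = (i - 4) ** 2 + (j - 4) ** 2
    return (1.0 - 0.25 * r2) * np.exp(-0.25 * r2)
```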
B. Zero-Phase Filter Designs

Zero-phase lowpass, bandpass, bandstop, and fan filters, of the form given in (31), have also been designed. These filters are of the form explained in [13, pp. 283–285],
$$H(z_1, z_2) = H^{I}(z_1, z_2) + H^{II}(z_1^{-1}, z_2) + H^{III}(z_1^{-1}, z_2^{-1}) + H^{IV}(z_1, z_2^{-1}) \tag{31}$$
and we attempt to approximate one quadrant of the ideal impulse response, which is assumed to possess fourfold symmetry (i.e., $h(n_1, n_2) = h(n_1, -n_2) = h(-n_1, -n_2) = h(-n_1, n_2)$). The specifications for the lowpass and the bandpass filters have been taken from [13, pp. 283–285], while those for the bandstop example are from [3]. For the fan filter, the desired 1-quadrant response has been obtained by applying a circularly symmetric Kaiser window of size 15 × 15 with parameter 2 to the ideal 90° fan filter spatial response. The orders chosen to design the filters are given in Table II. The results, including the comparison with MPM, are shown in Figs. 3–6. It may be noted that for the results from [3], the Broyden–Fletcher–Goldfarb–Shanno (BFGS) and the modified Newton–Raphson (MNR) algorithms were used for the lowpass and the bandstop cases, respectively. Also, for both cases in [3], the estimates from Shanks' method were used as the starting points. As can be seen from both the figures and the Closeness in dB values in Table II, the performance of the proposed algorithm is superior in all the cases.

TABLE II: COMPARISON OF THE CLOSENESS IN DECIBEL VALUES OF DIFFERENT METHODS

Fig. 4. Bandpass filter.

Fig. 5. Bandstop filter.
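Assuming, as the text implies, that the same designed quarter-plane filter is used for all four terms of (31), the composite zero-phase frequency response can be evaluated as below (an illustrative sketch; the imaginary part cancels by symmetry because the coefficients are real):

```python
import numpy as np

def rational_freq_resp(a, b, W1, W2):
    """Frequency response of A(z1,z2)/B(z1,z2) in (1) at z1 = e^{jW1}, z2 = e^{jW2}."""
    num = np.zeros(W1.shape, dtype=complex)
    den = np.zeros(W1.shape, dtype=complex)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            num += a[i, j] * np.exp(-1j * (i * W1 + j * W2))
    for i in range(b.shape[0]):
        for j in range(b.shape[1]):
            den += b[i, j] * np.exp(-1j * (i * W1 + j * W2))
    return num / den

def zero_phase_response(a, b, n=64):
    """Composite response (31): one quarter-plane filter plus its three
    mirror images, which yields a real (zero-phase) frequency response."""
    w = np.linspace(-np.pi, np.pi, n)
    W1, W2 = np.meshgrid(w, w, indexing='ij')
    Hq = lambda s1, s2: rational_freq_resp(a, b, s1 * W1, s2 * W2)
    Htot = Hq(1, 1) + Hq(-1, 1) + Hq(-1, -1) + Hq(1, -1)
    return Htot.real   # imaginary part is numerically ~0
```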
Fig. 6. Fan filter.

C. Discussion

The number of iterations and the total CPU times (using MATLAB on a multiuser Unix-based mainframe computer) needed by OM-2D for convergence for all six examples are summarized in Table III. When compared with some of the general optimization methods [3], convergence appears to be significantly faster. Note also that Table II includes a comparison with the method in [23], which is a direct extension of OM [21] for the design of the most general separable-denominator filters with arbitrary numerator and denominator orders.

TABLE III: CPU TIME AND NUMBER OF ITERATIONS FOR THE PROPOSED METHOD

Some comments on stability are in order. As explained in several standard 2-D texts, least-squares spatial-domain design methods do not, in general, guarantee stability, and the proposed approach is no exception (see [13, Sec. 5.1–5.2] and [5, Sec. 5.4]). However, since the design is accomplished by reducing the total squared error, according to [13, p. 274] the resulting filter is likely to be stable. Finally, if a filter turns unstable at a particular iteration, the algorithm can be terminated and the estimates at the previous iteration can be considered the best stable solution attainable by the algorithm. Alternatively, an unstable final design can be stabilized using one of the several stabilization techniques presented in [13, Sec. 5.4] or [5, Sec. 5.7]. It may be added, though, that no stabilization measures had to be invoked for any of the designs reported in this brief.

V. CONCLUDING REMARKS

We have presented an algorithm for 2-D quarter-plane IIR filter design which minimizes the true least-squares error. The denominator and numerator estimation problems are decoupled, leading to an iterative algorithm of reduced dimensionality for the estimation of the denominator. The numerator can be estimated in a single step once the final estimate of the denominator has been obtained. Computer simulations on the design of several filters have demonstrated the superiority of OM-2D over some existing methods. It should be possible to extend the algorithm to the design of the most general 2-D recursive filters, i.e., nonsymmetric half-plane (NSHP) filters, and further work is in progress.
REFERENCES

[1] S. Aly and M. Fahmy, “Spatial-domain design of two-dimensional recursive digital filters,” IEEE Trans. Circuits Syst., vol. CAS-27, pp. 892–901, Oct. 1980.
[2] M. S. Bertran, “Approximation of digital filters in one and two dimensions,” IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-23, pp. 438–443, Oct. 1975.
[3] T. Bose and M.-Q. Chen, “Design of two-dimensional digital filters in the spatial domain,” IEEE Trans. Signal Processing, vol. 41, pp. 1464–1469, Mar. 1993.
[4] J. A. Cadzow, “Recursive digital filter synthesis via gradient based algorithms,” IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-24, pp. 349–355, June 1976.
[5] D. E. Dudgeon and R. M. Mersereau, Multidimensional Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1984.
[6] A. G. Evans and R. Fischl, “Optimal least squares time-domain synthesis of recursive digital filters,” IEEE Trans. Audio Electroacoust., vol. AU-21, pp. 61–65, 1973.
[7] R. Fletcher and M. J. D. Powell, “A rapidly convergent descent method for minimization,” Comput. J., vol. 6, pp. 163–168, 1963.
[8] G. H. Golub and V. Pereyra, “The differentiation of pseudoinverses and nonlinear problems whose variables separate,” SIAM J. Numer. Analysis, vol. 10, no. 2, pp. 413–432, Apr. 1973.
[9] T. Hinamoto and S. Maekawa, “Spatial-domain design of a class of two-dimensional recursive digital filters,” IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-32, Feb. 1984.
[10] T. Hinamoto, “Design of 2-D separable-denominator recursive digital filters,” IEEE Trans. Circuits Syst., vol. CAS-31, pp. 925–933, Nov. 1984.
[11] R. Kumaresan, L. L. Scharf, and A. K. Shaw, “An algorithm for pole-zero modeling and spectral estimation,” IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, pp. 637–640, June 1986.
[12] K. Levenberg, “A method for the solution of certain non-linear problems in least squares,” Quart. Appl. Math., vol. 2, pp. 164–168, 1944.
[13] J. S. Lim, Two-Dimensional Signal and Image Processing. Englewood Cliffs, NJ: Prentice-Hall, 1990.
[14] D. G. Luenberger, Optimization by Vector Space Methods. New York: Wiley, 1969.
[15] D. W. Marquardt, “An algorithm for least squares estimation of nonlinear parameters,” J. Soc. Ind. Appl. Math., vol. 11, pp. 431–444, 1963.
[16] T. L. Marzetta, “Two-dimensional linear prediction: Autocorrelation arrays, minimum-phase prediction error filters, and reflection coefficient arrays,” IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-28, pp. 725–733, Dec. 1980.
[17] C. R. Rao and S. K. Mitra, Generalized Inverse of Matrices and Its Applications. New York: Wiley, 1971.
[18] D. M. Raymond and M. M. Fahmy, “Spatial-domain design of two-dimensional recursive digital filters,” IEEE Trans. Circuits Syst., vol. 36, June 1989.
[19] J. L. Shanks, “Recursion filters for digital processing,” Geophys., vol. 32, pp. 33–51, 1967.
[20] J. L. Shanks, S. Treitel, and J. H. Justice, “Stability and synthesis of two-dimensional recursive filters,” IEEE Trans. Audio Electroacoust., vol. AU-20, pp. 115–128, 1972.
[21] A. K. Shaw, “Optimal identification of discrete-time systems from impulse response,” IEEE Trans. Signal Processing, vol. 42, pp. 113–120, Jan. 1994.
[22] ——, “Design of denominator separable 2-D IIR filters,” Signal Processing, vol. 42, no. 1, pp. 191–206, Feb. 1995.
[23] A. K. Shaw and S. Pokala, “A structured approach for denominator separable 2-D IIR filter design in the spatial-domain,” unpublished.
[24] G. A. Shaw and R. M. Mersereau, “Design, stability and performance of two-dimensional recursive digital filters,” Tech. Rep. E21-B05-1, Georgia Inst. Technol., School Elect. Eng., 1979.
[25] K. Steiglitz and L. E. McBride, “A technique for the identification of linear systems,” IEEE Trans. Automat. Contr., vol. AC-10, pp. 461–464, 1965.
[26] S. G. Tzafestas, Ed., Multidimensional Systems. New York: Marcel Dekker, 1986.
Floating-Point Roundoff Noises of First- and Second-Order Sections in Parallel Form Digital Filters

Chimin Tsai

Abstract—Assuming wide-sense stationary white noise input, we investigated the floating-point roundoff noises of first- and second-order digital subfilters. For first-order subfilters, the roundoff noise of the parallel form 3P realization is invariably smaller than that of the corresponding parallel form 1P realization. The floating-point roundoff noise of a second-order subfilter depends on the order of additions. Criteria are developed to find the best order and to make the choice between parallel form 1P and 3P realizations.

Index Terms—Digital filter wordlength effects, finite wordlength effects, floating-point arithmetic, IIR digital filters.

Manuscript received March 1, 1995; revised November 20, 1996. This paper was recommended by Associate Editor B. Kim. The author is with the Department of Electrical Engineering, Chung-Hua Polytechnic Institute, Hsin-Chu 30067, Taiwan, R.O.C. Publisher Item Identifier S 1057-7130(97)03662-8.

I. INTRODUCTION

As the finite wordlength effects of fixed-point digital filters have been investigated intensively and are reasonably well understood,