Fast Algorithms to Design Discrete Wiener Filters in Lag and Length Coordinates
(Presented at the 61st Annual International Meeting, SEG)
Running title: Wiener Filters in Lag and Length Coordinates
Milton J. Porsani
Programa de Pesquisa e Pós-Graduação em Geofísica, Universidade Federal da Bahia, Campus Universitário da Federação, Salvador-BA, Brasil
January 18, 2014
ABSTRACT

Two recursive algorithms for designing discrete Wiener filters for wavelets of different phase characteristics are the Simpson and the Manolakis recursions. Both procedures are very efficient; however, both work with a filter of pre-fixed length. Two fast algorithms to design discrete Wiener filters in lag and length coordinates are presented in this paper. The recursion methods of Levinson and Manolakis are combined to generate two fast algorithms which calculate the minimized total squared error (MTSE) for spiking and shaping filters. For a spiking filter of length $n$ and a wavelet of $m$ data points, $5mn + \tfrac{5}{2}n^2 + \tfrac{3}{2}n$ "operations" are required to obtain the $(m+n-1)\times n$ map of MTSEs (one "operation" is defined here as one multiplication and one addition). For a shaping filter, $4mn + \tfrac{3}{2}(n^2+n)$ "operations" are required to obtain the corresponding $m\times n$ map. These algorithms may be seen as a Levinson recursion on two variables: the length $j$ of the filter and the lag $k$ of the desired signal. Numerical examples for spiking and shaping filters are presented.

INTRODUCTION

Two particularly efficient methods for obtaining Wiener filters for wavelets of different phase characteristics are the Simpson (Simpson et al., 1963) and the Manolakis (Manolakis et al., 1983) recursions. These recursions have found considerable application in seismology, particularly in obtaining causal spiking filters when the seismic wavelet is known but is not necessarily
minimum-phase. For a pre-fixed length of the filter, the desired signal is successively shifted to the right or to the left, and for each new time position the MTSE is calculated. In the Simpson recursion, $4n-2$ "operations" are required to compute the MTSE for each position; in the Manolakis recursion only $2n$ "operations" are required. The optimum spiking or shaping filter is taken to be the one associated with the minimum of the sequence of MTSE values or, equivalently, with the maximum of the sequence of performance values (see Robinson and Treitel, 1980). Manolakis's approach thus permits the MTSE to be computed twice as fast as Simpson's approach; however, both recursions are restricted to a pre-fixed length of the modeling operator.

Another method used for computing spiking and shaping filters is the usual Levinson recursion (LR) (Levinson, 1947). In this case the relative position between the input and the desired signal is held fixed, while the filter length is successively increased by means of Levinson's relationship. Many different applications of this scheme to solve systems of normal equations (NE) have been published in the geophysical and electrical engineering literature (Gangi and Shapiro, 1977; Papoulis, 1985; Delsarte and Genin, 1986; Krishna and Morgera, 1987; Eisner and Hampson, 1990; Porsani and Ulrych, 1995). The Levinson principle used in the LR consists of constructing the solution of the Toeplitz NE of order $j$ from a linear combination of the forward and backward (FB) solutions of subsystems of lesser order. This basic principle has many useful geophysical applications even when the systems of equations are not symmetric or not even Toeplitz. One of its most important applications is to
obtain spiking and shaping filters for use in seismic data processing (Levinson, 1947; Simpson et al., 1963; Manolakis et al., 1983). In this case, for a filter of length $n$, the LR requires $n^2+n$ "operations" to obtain the unit-delay prediction error operator (PEO) and $2n^2+n$ "operations" to compute a shaping filter.

In this paper I use a compact representation of the NE and the recursive expressions obtained from the LR (Porsani, 1986) to develop algorithms for designing optimum filters which are not restricted to a pre-fixed length of the modeling operator. For a pre-defined value of the MTSE, the optimum length and the optimum position of the desired output signal are obtained. These algorithms are fast and allow the computation of the two-dimensional surface of values which describes the MTSE as a function of all possible positions in the (lag $\times$ length) interval. From the usual LR for the PEO and for the shaping filter, key expressions are derived for use in the new algorithms. The expressions derived by Manolakis et al. (1983) that will be combined with the LR are presented next. Finally, the Levinson and the Manolakis recursions are combined to generate two fast algorithms for spiking and shaping filters which compute MTSEs in lag and length coordinates. Numerical examples for spiking and shaping filters are presented to illustrate the applicability of these new algorithms. To aid comprehension, the table of notation used in this paper is presented below.
MTSE: minimized total squared error
NE: normal equations
FB: forward and backward
LR: Levinson's recursion
PEO: prediction error operator
MEO: modeling error operator
$\mathbf{R}$: denotes a matrix
$\mathbf{r}$: denotes a vector
LEVINSON'S RECURSION FOR THE PEO

Let $\{x_t\}$, $t = 0, \ldots, m-1$, represent the input signal. Define $\tilde{\mathbf{g}}_j = (\tilde g_{j,1}, \ldots, \tilde g_{j,j})^T$ as the unit-delay prediction operator, such that $\hat x_t = \sum_{i=1}^{j} \tilde g_{j,i}\, x_{t-i}$, and $(1\;\; -\tilde{\mathbf{g}}_j^T)$ as the PEO. The error vector associated with the input signal may then be written as
$$
\mathbf{e}_{\rightarrow,j} = \mathbf{X}_{j+1} \begin{bmatrix} 1 \\ \mathbf{g}_j \end{bmatrix},
$$
where, for the sake of simplicity, $\mathbf{g}_j = -\tilde{\mathbf{g}}_j$, and the subscript $\rightarrow$ indicates forward prediction. Here
$$
\mathbf{X}_{j+1}^T = \begin{bmatrix}
x_0 & \cdots & \cdots & x_{m-1} & & 0 \\
 & \ddots & & & \ddots & \\
0 & & x_0 & \cdots & \cdots & x_{m-1}
\end{bmatrix}
$$
is the regression matrix corresponding to the input trace $x_t$. Minimizing the quadratic form $Q(\mathbf{g}_j) = \mathbf{e}_{\rightarrow,j}^T \mathbf{e}_{\rightarrow,j}$ with respect to the parameters $g_{j,i}$, $i = 1, \ldots, j$, yields the NE,
$$
\mathbf{r}_{xx,j} + \mathbf{R}_{g,j}\, \mathbf{g}_j = \mathbf{0}_j,
$$
where
$$
\mathbf{R}_{g,j} = \mathbf{X}_j^T \mathbf{X}_j = \begin{bmatrix}
r_0 & \cdots & r_{j-1} \\
\vdots & \ddots & \vdots \\
r_{j-1} & \cdots & r_0
\end{bmatrix}
$$
is the well-known Toeplitz autocorrelation matrix and $\mathbf{r}_{xx,j} = (r_1, \ldots, r_j)^T$ is the vector of autocorrelation coefficients, $r_\tau = \sum_t x_t x_{t-\tau}$. Solving the NE, one obtains the expression for the MTSE, $E_{g,j}$,
$$
r_0 + \mathbf{r}_{xx,j}^T \mathbf{g}_j = E_{g,j}.
$$
Grouping the NE together with the expression for the MTSE, one obtains the expanded NE, in which the Toeplitz character is preserved,
$$
\mathbf{R}_{g,j+1} \begin{bmatrix} 1 \\ \mathbf{g}_j \end{bmatrix}
= \begin{bmatrix} r_0 & \mathbf{r}_{xx,j}^T \\ \mathbf{r}_{xx,j} & \mathbf{R}_{g,j} \end{bmatrix}
\begin{bmatrix} 1 \\ \mathbf{g}_j \end{bmatrix}
= \begin{bmatrix} E_{g,j} \\ \mathbf{0}_j \end{bmatrix}. \tag{1}
$$
The Toeplitz symmetry implies that all minor subsystems of the same order that occur along the main diagonal are equal. Additionally, the persymmetric property of the band-structured matrix $\mathbf{R}_{g,j}$ (i.e., $\mathbf{J}_j \mathbf{R}_{g,j} \mathbf{J}_j = \mathbf{R}_{g,j}$, where $\mathbf{J}_j$ is the reverse identity, or exchange, matrix of order $j$) simplifies the relationship between the solutions of order $j-1$ and order $j$, and the Levinson relationship for the PEO may be written as
$$
\begin{bmatrix} 1 \\ \mathbf{g}_j \end{bmatrix}
= \begin{bmatrix} 1 \\ \mathbf{g}_{j-1} \\ 0 \end{bmatrix}
+ g_{j,j} \begin{bmatrix} 0 \\ \mathbf{J}_{j-1}\mathbf{g}_{j-1} \\ 1 \end{bmatrix}.
$$
Using this relationship in the quadratic form $Q(\mathbf{g}_j)$, and considering the PEO of order $j-1$, $(1\;\;\mathbf{g}_{j-1}^T)$, as known, one obtains the compact $(2\times 2)$ form of the expanded NE,
$$
\begin{bmatrix} E_{g,j-1} & \Delta_{g,j-1} \\ \Delta_{g,j-1} & E_{g,j-1} \end{bmatrix}
\begin{bmatrix} 1 \\ g_{j,j} \end{bmatrix}
= \begin{bmatrix} E_{g,j} \\ 0 \end{bmatrix}, \tag{2}
$$
where $E_{g,j-1}$ is the MTSE of order $j-1$ and $\Delta_{g,j-1}$ is obtained by multiplying $(1\;\;\mathbf{g}_{j-1}^T\;\;0)$ by the last row of the expanded matrix $\mathbf{R}_{g,j+1}$. As shown in Porsani (1986) and Porsani and Ulrych (1991), this simplified representation of the expanded NE is the heart of all algorithms which employ the basic principle of Levinson for constructing the solution of the NE from a linear combination of solutions of lesser order. From the equation corresponding to the last row of equation (2) one may obtain the expression for $g_{j,j}$. Knowing $g_{j,j}$, $E_{g,j}$ may be updated. The algorithm is initialized with $E_{g,0} = r_0$. For $j = 1, \ldots, n$ the quantities $\Delta_{g,j-1}$, $g_{j,j}$, $E_{g,j}$ and the PEO, $(1\;\;\mathbf{g}_j^T)$, are computed as presented below.

Initialization: $E_{g,0} = r_0$

DO $j = 1, n$
$$
\Delta_{g,j-1} = r_j + \sum_{i=1}^{j-1} g_{j-1,i}\, r_{j-i}
$$
$$
\begin{bmatrix} E_{g,j-1} & \Delta_{g,j-1} \\ \Delta_{g,j-1} & E_{g,j-1} \end{bmatrix}
\begin{bmatrix} 1 \\ g_{j,j} \end{bmatrix}
= \begin{bmatrix} E_{g,j} \\ 0 \end{bmatrix}
$$
$$
\begin{bmatrix} 1 \\ \mathbf{g}_j \end{bmatrix}
= \begin{bmatrix} 1 \\ \mathbf{g}_{j-1} \\ 0 \end{bmatrix}
+ g_{j,j} \begin{bmatrix} 0 \\ \mathbf{J}_{j-1}\mathbf{g}_{j-1} \\ 1 \end{bmatrix}
$$
ENDDO

A total of $n^2 + n$ "operations" is required.
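For readers who want to experiment with the recursion, the sketch below implements it with NumPy from a given set of autocorrelation coefficients. It is a minimal illustration of the steps above, not code from the paper; the function name and array layout are my own, and the sign convention follows $\mathbf{g}_j = -\tilde{\mathbf{g}}_j$.

```python
import numpy as np

def levinson_peo(r, n):
    """Levinson recursion for the PEO.

    r : autocorrelation coefficients, r[0] = r_0, ..., r[n] = r_n.
    n : maximum order.
    Returns the PEO (1, g_{n,1}, ..., g_{n,n}) and the MTSEs E_g[0..n].
    """
    g = np.zeros(n + 1)
    g[0] = 1.0                         # PEO of order 0
    E = np.zeros(n + 1)
    E[0] = r[0]                        # E_{g,0} = r_0
    for j in range(1, n + 1):
        # Delta_{g,j-1} = r_j + sum_{i=1}^{j-1} g_{j-1,i} r_{j-i}
        delta = r[j] + np.dot(g[1:j], r[j - 1:0:-1])
        gjj = -delta / E[j - 1]        # second row of the compact system (2)
        E[j] = E[j - 1] + gjj * delta  # first row of (2): E_{g,j} = E_{g,j-1}(1 - g_{j,j}^2)
        # Levinson relationship: [1; g_j] = [1; g_{j-1}; 0] + g_{j,j} [0; J g_{j-1}; 1]
        g[1:j + 1] = g[1:j + 1] + gjj * np.append(g[j - 1:0:-1], 1.0)
    return g, E

# usage with a hypothetical short wavelet
x = np.array([1.0, -0.5, 0.2])
r = np.correlate(x, x, mode="full")[len(x) - 1:]   # r_0, r_1, r_2
g, E = levinson_peo(r, 2)
```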
Expression for $\Delta_{g,j-1}$ using FB errors
An important point related to the efficiency of the algorithm for the spiking filter, to be presented later, concerns the quantity $\Delta_{g,j-1}$ written in terms of the FB prediction errors. Minimizing the quadratic form $Q(\mathbf{g}_j)$ as suggested earlier, one can verify that $\Delta_{g,j-1}$ may be written as
$$
\begin{aligned}
\Delta_{g,j-1}
&= [\,1\;\;\mathbf{g}_{j-1}^T\;\;0\,]\, \mathbf{R}_{g,j+1} \begin{bmatrix} 0 \\ \mathbf{J}_{j-1}\mathbf{g}_{j-1} \\ 1 \end{bmatrix} \\
&= [\,1\;\;\mathbf{g}_{j-1}^T\;\;0\,]\, \mathbf{X}_{j+1}^T \mathbf{X}_{j+1} \begin{bmatrix} 0 \\ \mathbf{J}_{j-1}\mathbf{g}_{j-1} \\ 1 \end{bmatrix} \\
&= \mathbf{e}_{\rightarrow,j-1}^T \begin{bmatrix} 0 \\ \mathbf{e}_{\leftarrow,j-1} \end{bmatrix},
\end{aligned} \tag{3}
$$
where $\mathbf{e}_{\rightarrow,j-1}$ and $\mathbf{e}_{\leftarrow,j-1}$ represent the vectors associated with the forward and backward prediction errors, respectively.

Expressions for updating the FB errors

The FB prediction errors of order $j$ may be updated from the FB prediction errors of order $j-1$. Post-multiplying the matrix $\mathbf{X}_{j+1}$ by the FB PEOs, and considering Levinson's relationship, the expressions for updating $\mathbf{e}_{\rightarrow,j}$ and $\mathbf{e}_{\leftarrow,j}$ are obtained as
$$
\begin{aligned}
[\,\mathbf{e}_{\rightarrow,j}\;\;\mathbf{e}_{\leftarrow,j}\,]
&= \mathbf{X}_{j+1} \begin{bmatrix} 1 & \mathbf{J}_j\mathbf{g}_j \\ \mathbf{g}_j & 1 \end{bmatrix}
= \mathbf{X}_{j+1} \begin{bmatrix} 1 & 0 \\ \mathbf{g}_{j-1} & \mathbf{J}_{j-1}\mathbf{g}_{j-1} \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & g_{j,j} \\ g_{j,j} & 1 \end{bmatrix} \\
&= \begin{bmatrix} \mathbf{e}_{\rightarrow,j-1} & 0 \\ 0 & \mathbf{e}_{\leftarrow,j-1} \end{bmatrix}
\begin{bmatrix} 1 & g_{j,j} \\ g_{j,j} & 1 \end{bmatrix}.
\end{aligned} \tag{4}
$$
Equations (3) and (4) allow us to perform the LR directly on an input trace and do not require explicit knowledge of the autocorrelation coefficients. These relations are used in Burg's algorithm (Burg, 1975) for estimating the reflection coefficients $\{g_{j,j},\ j = 1, \ldots, n\}$, which are employed in the theory of maximum entropy spectrum analysis (see Burg, 1975; Ulrych and Clayton, 1976; Marple, 1980). As will be shown later, these expressions are very useful in designing a fast algorithm to compute the optimum spiking filter in lag and length coordinates.
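A trace-domain counterpart, in the spirit of Burg's algorithm, is sketched below: $\Delta_{g,j-1}$ is taken from the FB error vectors via equation (3) and the errors are updated with equation (4), so no autocorrelation is formed explicitly. Again this is only an illustration under my own naming conventions; the error vectors are the full (transient-included) convolutions of the trace with the FB PEOs.

```python
import numpy as np

def levinson_peo_from_trace(x, n):
    """PEO and MTSE computed directly from the trace using equations (3)-(4)."""
    x = np.asarray(x, dtype=float)
    e_fwd = x.copy()                       # forward prediction error, order 0
    e_bwd = x.copy()                       # backward prediction error, order 0
    E = float(np.dot(x, x))                # E_{g,0} = r_0
    g = np.array([1.0])                    # PEO of order 0
    for j in range(1, n + 1):
        # equation (3): Delta_{g,j-1} = e_fwd^T [0; e_bwd]
        delta = np.dot(e_fwd[1:], e_bwd[:-1])
        gjj = -delta / E                   # reflection coefficient g_{j,j}
        E = E + gjj * delta                # E_{g,j}
        # equation (4): update the zero-padded FB error vectors
        e_fwd, e_bwd = (np.append(e_fwd, 0.0) + gjj * np.append(0.0, e_bwd),
                        gjj * np.append(e_fwd, 0.0) + np.append(0.0, e_bwd))
        # Levinson relationship for the PEO coefficients
        g = np.append(g, 0.0) + gjj * np.append(0.0, g[::-1])
    return g, E
```

For the same trace, this returns the same PEO and $E_{g,n}$ as the autocorrelation-based version shown earlier.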
LEVINSON'S RECURSION FOR THE SHAPING FILTER

As is well known, the least-squares Wiener shaping filter uses the past values of the input wavelet $x_t$ to optimally shape it into a desired signal $y_t$. In this case, too, the coefficient matrix is the symmetric Toeplitz autocorrelation matrix $\mathbf{R}_{g,j}$, and the LR may be applied. Let $\tilde{\mathbf{h}}_j$ represent the modeling operator, such that the estimated vector $\tilde{\mathbf{y}}_j$ is obtained as $\mathbf{X}_j \tilde{\mathbf{h}}_j = \tilde{\mathbf{y}}_j$. The corresponding modeling error is
$$
\mathbf{e}_{h,j} = [\,\mathbf{y}\;\;\mathbf{X}_j\,] \begin{bmatrix} 1 \\ \mathbf{h}_j \end{bmatrix},
$$
where $\mathbf{y} = (y_0, \ldots, y_p)^T$ corresponds to the desired signal and $\mathbf{X}_j$ is the regression matrix previously defined. $(1\;\;\mathbf{h}_j^T) = (1, h_{j,1}, \ldots, h_{j,j})$ is the modeling error operator (MEO), in which, for the sake of simplicity, $\mathbf{h}_j = -\tilde{\mathbf{h}}_j$. The corresponding NE in expanded form may be written as
$$
\mathbf{R}_{h,j+1} \begin{bmatrix} 1 \\ \mathbf{h}_j \end{bmatrix}
= \begin{bmatrix} r_{y,0} & \mathbf{r}_{xy,j}^T \\ \mathbf{r}_{xy,j} & \mathbf{R}_{g,j} \end{bmatrix}
\begin{bmatrix} 1 \\ \mathbf{h}_j \end{bmatrix}
= \begin{bmatrix} E_{h,j} \\ \mathbf{0}_j \end{bmatrix}, \tag{5}
$$
where $E_{h,j}$ is the MTSE, $r_{y,0} = \sum_t y_t^2$ is the total energy of the desired signal, and $\mathbf{r}_{xy,j} = (r_{xy,0}, \ldots, r_{xy,j-1})^T$ is the crosscorrelation vector between the input and the desired signal, with coefficients given by $r_{xy,\tau} = \sum_t y_{t+\tau}\, x_t$.
Since $\mathbf{R}_{g,j}$ is the Toeplitz autocorrelation matrix, all the backward solutions are available from the LR for the PEO described earlier. Consequently, the Levinson relationship for the construction of the MEO, $(1\;\;\mathbf{h}_j^T)$, takes the form
$$
\begin{bmatrix} 1 \\ \mathbf{h}_j \end{bmatrix}
= \begin{bmatrix} 1 \\ \mathbf{h}_{j-1} \\ 0 \end{bmatrix}
+ h_{j,j} \begin{bmatrix} 0 \\ \mathbf{J}_{j-1}\mathbf{g}_{j-1} \\ 1 \end{bmatrix}. \tag{6}
$$
Using equation (6) in the quadratic form associated with the error vector $\mathbf{e}_{h,j}$, and minimizing it with respect to the parameters $h_{j,i}$, $i = 1, \ldots, j$, one obtains for the expanded NE the compact form
$$
\begin{bmatrix} E_{h,j-1} & \Delta_{h,j-1} \\ \Delta_{h,j-1} & E_{g,j-1} \end{bmatrix}
\begin{bmatrix} 1 \\ h_{j,j} \end{bmatrix}
= \begin{bmatrix} E_{h,j} \\ 0 \end{bmatrix}, \tag{7}
$$
where $E_{h,j-1}$ corresponds to the MTSE of modeling at order $j-1$, and $E_{g,j-1}$ corresponds to the MTSE of prediction at order $j-1$.

Expressions for $\Delta_{h,j-1}$

As may be verified from the symmetry of $\mathbf{R}_{h,j+1}$, $\Delta_{h,j-1}$ may be obtained by multiplying $(1\;\;\mathbf{h}_{j-1}^T\;\;0)$ by the last row of the $\mathbf{R}_{h,j+1}$ matrix or, equivalently, by multiplying $(0\;\;\mathbf{g}_{j-1}^T\mathbf{J}_{j-1}\;\;1)$ by its first row. This results in
$$
\Delta_{h,j-1} = r_{xy,j-1} + \mathbf{r}_{xx,j-1}^T \mathbf{J}_{j-1} \mathbf{h}_{j-1}, \tag{8a}
$$
$$
\Delta_{h,j-1} = \mathbf{r}_{xy,j}^T\, \mathbf{J}_j \begin{bmatrix} 1 \\ \mathbf{g}_{j-1} \end{bmatrix}. \tag{8b}
$$
Equation (8b) is of fundamental importance in the algorithms for the optimum Wiener filters to be presented. Expression (8a) is the one commonly used in the LR (see, for example, Robinson, 1967; Wiggins and Robinson, 1965; Robinson and Treitel, 1980). While (8a) is expressed in terms of the MEO and the autocorrelation vector, (8b) is expressed in terms of the PEO and the crosscorrelation vector (Porsani, 1986). The full recursion for obtaining the shaping filters, which uses expression (8b), is presented below.
Initialization:
$$
E_{g,0} = r_0, \qquad
\begin{bmatrix} r_{y,0} & r_{xy,0} \\ r_{xy,0} & r_0 \end{bmatrix}
\begin{bmatrix} 1 \\ h_{1,1} \end{bmatrix}
= \begin{bmatrix} E_{h,1} \\ 0 \end{bmatrix}
$$
DO $j = 1, n$
$$
\Delta_{g,j-1} = r_j + \sum_{i=1}^{j-1} g_{j-1,i}\, r_{j-i}
$$
$$
\begin{bmatrix} E_{g,j-1} & \Delta_{g,j-1} \\ \Delta_{g,j-1} & E_{g,j-1} \end{bmatrix}
\begin{bmatrix} 1 \\ g_{j,j} \end{bmatrix}
= \begin{bmatrix} E_{g,j} \\ 0 \end{bmatrix}
$$
$$
\begin{bmatrix} 1 \\ \mathbf{g}_j \end{bmatrix}
= \begin{bmatrix} 1 \\ \mathbf{g}_{j-1} \\ 0 \end{bmatrix}
+ g_{j,j} \begin{bmatrix} 0 \\ \mathbf{J}_{j-1}\mathbf{g}_{j-1} \\ 1 \end{bmatrix}
$$
$$
\Delta_{h,j} = r_{xy,j} + \sum_{i=1}^{j} g_{j,i}\, r_{xy,j-i}
$$
$$
\begin{bmatrix} E_{h,j} & \Delta_{h,j} \\ \Delta_{h,j} & E_{g,j} \end{bmatrix}
\begin{bmatrix} 1 \\ h_{j+1,j+1} \end{bmatrix}
= \begin{bmatrix} E_{h,j+1} \\ 0 \end{bmatrix}
$$
$$
\begin{bmatrix} 1 \\ \mathbf{h}_{j+1} \end{bmatrix}
= \begin{bmatrix} 1 \\ \mathbf{h}_j \\ 0 \end{bmatrix}
+ h_{j+1,j+1} \begin{bmatrix} 0 \\ \mathbf{J}_j\mathbf{g}_j \\ 1 \end{bmatrix}
$$
ENDDO

A total of $2n^2 + n$ "operations" is required.
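The following NumPy sketch mirrors this recursion: the PEO is advanced with the steps of the previous section, and the MEO is extended at each order using expression (8b), so the MTSE $E_{h,j}$ is available for every filter length at a fixed lag. Function and variable names are mine; the crosscorrelation convention is $r_{xy,\tau} = \sum_t y_{t+\tau} x_t$ as defined above.

```python
import numpy as np

def shaping_filter_lr(r, rxy, ry0, n):
    """Wiener shaping filter by the Levinson recursion, fixed lag, growing length.

    r   : autocorrelation of the input, indices 0..n-1 (at least)
    rxy : crosscorrelation between input and desired signal, indices 0..n-1
    ry0 : energy of the desired signal
    Returns the MEO (1, h_{n,1}, ..., h_{n,n}) and E_h[j] for j = 0..n.
    """
    g = np.array([1.0])                    # PEO of order 0
    Eg = r[0]                              # E_{g,0}
    h = np.array([1.0, -rxy[0] / r[0]])    # 2x2 initialization: h_{1,1}
    Eh = np.zeros(n + 1)
    Eh[0] = ry0
    Eh[1] = ry0 + h[1] * rxy[0]            # E_{h,1}
    for j in range(1, n):
        # advance the PEO (same steps as in the recursion for the PEO)
        delta_g = r[j] + np.dot(g[1:j], r[j - 1:0:-1])
        gjj = -delta_g / Eg
        Eg = Eg + gjj * delta_g
        g = np.append(g, 0.0) + gjj * np.append(0.0, g[::-1])
        # expression (8b): Delta_{h,j} = r_{xy,j} + sum_i g_{j,i} r_{xy,j-i}
        delta_h = rxy[j] + np.dot(g[1:j + 1], rxy[j - 1::-1])
        hjj = -delta_h / Eg                # from the compact system (7)
        Eh[j + 1] = Eh[j] + hjj * delta_h
        # Levinson relationship (6): extend the MEO with the backward PEO
        h = np.append(h, 0.0) + hjj * np.append(0.0, g[::-1])
    return h, Eh
```

A natural use of the returned curve Eh is to pick, for this fixed lag, the length beyond which the MTSE stops decreasing appreciably.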
For each length of the shaping filter, the MTSE or, equivalently, the performance factor may be recursively calculated,
$$
P_j = E_{h,j}/r_{y,0} = 1 - \tilde{\mathbf{y}}_j^T \tilde{\mathbf{y}}_j / \mathbf{y}^T\mathbf{y}.
$$
In the present case, the shaping filter increases its length while the relative position between the input and the desired signal is held fixed. Using this algorithm to compute the performance values for $m$ positions of the desired signal, $2mn^2 + mn$ "operations" are required. Below, I present the Manolakis et al. (1983) algorithm, in which the filter length is held fixed while the relative position between input and desired signal is changed sample by sample.

THE MANOLAKIS RECURSION

To obtain the optimum spiking or shaping filter of pre-fixed length, Manolakis et al. (1983) derived a recursive expression to calculate the MTSE, ${}^{k}E_{h,n}$, that makes the recursion twice as fast as Simpson's approach (Simpson et al., 1963). Let the vectors ${}^{k}\mathbf{e}_{h,n}$ and ${}^{k+1}\mathbf{e}_{f,n}$ represent the forward and backward modeling errors for a filter of $n$ coefficients and for the desired signal at positions $k$ and $k+1$, respectively,
$$
{}^{k}\mathbf{e}_{h,n} = [\,\mathbf{y}\;\;\mathbf{X}_n\,] \begin{bmatrix} 1 \\ {}^{k}\mathbf{h}_n \end{bmatrix}, \tag{9}
$$
$$
{}^{k+1}\mathbf{e}_{f,n} = [\,\mathbf{X}_n\;\;\mathbf{y}\,] \begin{bmatrix} {}^{k+1}\mathbf{f}_n \\ 1 \end{bmatrix}. \tag{10}
$$
The corresponding NEs in expanded form are
$$
\begin{bmatrix} r_{y,0} & {}^{k}\mathbf{r}_{xy,n}^T \\ {}^{k}\mathbf{r}_{xy,n} & \mathbf{R}_{g,n} \end{bmatrix}
\begin{bmatrix} 1 \\ {}^{k}\mathbf{h}_n \end{bmatrix}
= \begin{bmatrix} {}^{k}E_{h,n} \\ \mathbf{0}_n \end{bmatrix}, \tag{11}
$$
and
$$
\begin{bmatrix} \mathbf{R}_{g,n} & {}^{k+1}\mathbf{r}_{yx,n} \\ {}^{k+1}\mathbf{r}_{yx,n}^T & r_{y,0} \end{bmatrix}
\begin{bmatrix} {}^{k+1}\mathbf{f}_n \\ 1 \end{bmatrix}
= \begin{bmatrix} \mathbf{0}_n \\ {}^{k+1}E_{f,n} \end{bmatrix}, \tag{12}
$$
where ${}^{k}E_{h,n}$ and ${}^{k+1}E_{f,n}$ are the MTSEs associated with the desired signal at positions $k$ and $k+1$, respectively. As a consequence of the definition of the FB error vectors presented earlier, one may verify that the $n-1$ first elements of the crosscorrelation vector ${}^{k}\mathbf{r}_{xy,n}$ in equation (11) are equal to the $n-1$ last elements of ${}^{k+1}\mathbf{r}_{yx,n}$ in equation (12), as indicated below:
$$
{}^{k}\mathbf{r}_{xy,n}^T = \left({}^{k}\mathbf{r}_{xy,n-1}^T,\; {}^{k}r_{xy,n-1}\right), \qquad
{}^{k+1}\mathbf{r}_{yx,n}^T = \left({}^{k+1}r_{yx,n-1},\; {}^{k+1}\mathbf{r}_{yx,n-1}^T\right),
\qquad
{}^{k}\mathbf{r}_{xy,n-1} = {}^{k+1}\mathbf{r}_{yx,n-1}.
$$
Assuming that the solutions of the FB subsystems of equations (11) and (12) are known, and using the LR, one obtains the corresponding compact forms of the expanded NEs,
$$
\begin{bmatrix} {}^{k}E_{h,n-1} & {}^{k}\Delta_{h,n-1} \\ {}^{k}\Delta_{h,n-1} & E_{g,n-1} \end{bmatrix}
\begin{bmatrix} 1 \\ {}^{k}h_{n,n} \end{bmatrix}
= \begin{bmatrix} {}^{k}E_{h,n} \\ 0 \end{bmatrix}, \tag{13}
$$
$$
\begin{bmatrix} E_{g,n-1} & {}^{k+1}\Delta_{f,n-1} \\ {}^{k+1}\Delta_{f,n-1} & {}^{k+1}E_{f,n-1} \end{bmatrix}
\begin{bmatrix} {}^{k+1}f_{n,n} \\ 1 \end{bmatrix}
= \begin{bmatrix} 0 \\ {}^{k+1}E_{f,n} \end{bmatrix}, \tag{14}
$$
where ${}^{k}\Delta_{h,n-1}$ and ${}^{k+1}\Delta_{f,n-1}$ may be calculated in terms of the PEO and the crosscorrelation vectors,
$$
{}^{k}\Delta_{h,n-1} = {}^{k}\mathbf{r}_{xy,n}^T\, \mathbf{J}_n \begin{bmatrix} 1 \\ \mathbf{g}_{n-1} \end{bmatrix}, \tag{15}
$$
and
$$
{}^{k+1}\Delta_{f,n-1} = {}^{k+1}\mathbf{r}_{yx,n}^T \begin{bmatrix} 1 \\ \mathbf{g}_{n-1} \end{bmatrix}. \tag{16}
$$
From equations (13) and (14) above, the expressions for ${}^{k}E_{h,n}$ and ${}^{k+1}E_{f,n}$ may be written as
$$
{}^{k}E_{h,n} = {}^{k}E_{h,n-1} + {}^{k}h_{n,n}\, {}^{k}\Delta_{h,n-1},
\qquad
{}^{k+1}E_{f,n} = {}^{k+1}E_{f,n-1} + {}^{k+1}f_{n,n}\, {}^{k+1}\Delta_{f,n-1}. \tag{17}
$$
The subtraction of ${}^{k}E_{h,n}$ from ${}^{k+1}E_{f,n}$ results in
$$
{}^{k+1}E_{f,n} = {}^{k}E_{h,n} + \left({}^{k+1}E_{f,n-1} - {}^{k}E_{h,n-1}\right)
+ {}^{k+1}f_{n,n}\, {}^{k+1}\Delta_{f,n-1} - {}^{k}h_{n,n}\, {}^{k}\Delta_{h,n-1}.
$$
The definitions given in equations (9) and (10) above imply that ${}^{k+1}E_{f,n-1} = {}^{k}E_{h,n-1}$ and ${}^{k+1}E_{f,n} = {}^{k+1}E_{h,n}$, and the Manolakis equation for the MTSE may be obtained as
$$
{}^{k+1}E_{h,n} = {}^{k}E_{h,n} + {}^{k+1}f_{n,n}\, {}^{k+1}\Delta_{f,n-1} - {}^{k}h_{n,n}\, {}^{k}\Delta_{h,n-1}. \tag{18}
$$
For a pre-fixed length $n$, this expression allows us to compute the MTSE at position $k+1$ using the MTSE at position $k$. Using the expressions for the coefficients ${}^{k}h_{n,n}$ and ${}^{k+1}f_{n,n}$, which may be obtained from the compact NE, another form of equation (18) is
$$
{}^{k+1}E_{h,n} = {}^{k}E_{h,n}
+ \frac{\left({}^{k}\Delta_{h,n-1} - {}^{k+1}\Delta_{f,n-1}\right)\left({}^{k}\Delta_{h,n-1} + {}^{k+1}\Delta_{f,n-1}\right)}{E_{g,n-1}}. \tag{19}
$$
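As a concrete illustration of equation (19), the sketch below solves the NE once at the first lag and then slides the desired signal one sample at a time, each step costing O(n). The conventions are my own assumptions, not the paper's notation: lag $k$ places the first sample of $y$ at output sample $k$, and the quantities $\Delta_h$ and $\Delta_f$ are obtained by filtering the crosscorrelation with the backward and forward PEOs, which evaluates equations (15)-(16) for all lags at once.

```python
import numpy as np

def mtse_lag_sweep(x, y, n, lags):
    """MTSE of the length-n shaping filter for consecutive integer lags,
    using the Manolakis update (19) after a single direct solve at lags[0]."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    m, p = len(x), len(y)
    r = np.correlate(x, x, mode="full")[m - 1:]           # r_0, r_1, ...
    c = np.correlate(x, y, mode="full")                   # crosscorrelation sequence
    c = np.concatenate([np.zeros(n), c, np.zeros(n)])     # pad so out-of-range lags read 0
    off = p - 1 + n                                       # array index of lag 0
    toe = lambda q: np.array([[q[abs(i - l)] for l in range(len(q))] for i in range(len(q))])
    # PEO of order n-1 and its error power E_{g,n-1}
    g = np.ones(n)
    if n > 1:
        g[1:] = np.linalg.solve(toe(r[:n - 1]), -r[1:n])
    Eg = r[0] + np.dot(r[1:n], g[1:])
    # Delta_h(k) / Delta_f(k): crosscorrelation filtered by the backward / forward PEO
    d_bwd = np.convolve(c, g[::-1])
    d_fwd = np.convolve(c, g)
    # direct solve of the n x n normal equations at the first lag only
    k0 = lags[0]
    p0 = c[k0 + off - np.arange(n)]                       # right-hand side for lag k0
    h0 = np.linalg.solve(toe(r[:n]), p0)
    E = [float(np.dot(y, y) - np.dot(p0, h0))]
    for k in lags[:-1]:                                   # equation (19)
        dh, df = d_bwd[k + off], d_fwd[k + 1 + off]
        E.append(E[-1] + (dh - df) * (dh + df) / Eg)
    return np.array(E)

# example: slide a hypothetical desired wavelet across the output positions
x = np.array([1.0, -0.6, 0.3, 0.1])
y = np.array([0.5, 1.0, 0.5])
print(mtse_lag_sweep(x, y, n=3, lags=range(0, 6)))
```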
The algorithm which uses the Manolakis expression is initialized with ${}^{0}E_{h,n}$. For $k = 0, \ldots, m$, the quantities ${}^{k}\Delta_{h,n-1}$, ${}^{k+1}\Delta_{f,n-1}$ and ${}^{k+1}E_{h,n}$ are calculated. Only $2n$ "operations" are required to compute the MTSE, ${}^{k+1}E_{h,n}$, at each new time position of the desired signal. Using the Manolakis algorithm, $mn^2 + m$ "operations" are required to generate the surface of performance values, where $n$ is the maximum filter length and $m$ is the maximum shift.

THE COMBINED ALGORITHM

In this section two efficient algorithms, one for the spiking filter and one for the shaping filter, are presented. Both algorithms are as efficient as the LR and give the values of the MTSE, ${}^{k}E_{h,j}$, for $j = 1, 2, \ldots, n$ and $k = 0, 1, \ldots, m$. These values correspond to the surface of MTSEs for the modeling of the input signal $x_t$ into the desired signal $y_t$ as a function of the length $j$ of the filter and the position $k$ of the desired signal. Using the Levinson relationship for the PEO in equations (15) and (16) above, at order $j+1$ and lag $k+1$, the recursive relations to update the quantities ${}^{k+1}\Delta_{h,j}$ and ${}^{k+1}\Delta_{f,j}$ may be derived (see Appendix), resulting in
$$
{}^{k+1}\Delta_{h,j} = {}^{k}\Delta_{h,j-1} + g_{j,j}\, {}^{k+1}\Delta_{f,j-1}.
$$
Analogously, for ${}^{k+1}\Delta_{f,j}$,
$$
{}^{k+1}\Delta_{f,j} = {}^{k+1}\Delta_{f,j-1} + g_{j,j}\, {}^{k}\Delta_{h,j-1}.
$$
In matrix notation,
$$
\left[\,{}^{k+1}\Delta_{f,j}\;\;\; {}^{k+1}\Delta_{h,j}\,\right]
= \left[\,{}^{k+1}\Delta_{f,j-1}\;\;\; {}^{k}\Delta_{h,j-1}\,\right]
\begin{bmatrix} 1 & g_{j,j} \\ g_{j,j} & 1 \end{bmatrix}. \tag{20}
$$
Only two "operations" are required to update ${}^{k+1}\Delta_{f,j}$ and ${}^{k+1}\Delta_{h,j}$. The coefficient $g_{j,j}$ used in equation (20) is available from the LR for the PEO presented earlier. Therefore, coupling equation (20) with the LR, one obtains efficient algorithms to design optimum Wiener filters which are not restricted to a pre-fixed length of the modeling operator. The algorithm for the optimum spiking filter is presented below.

The algorithm for the spiking filter

In the event that the desired signal $y_t$ is an impulse, the coefficients of the crosscorrelation vector are elements of the input signal $x_t$. In this case, ${}^{k+1}\Delta_{f,j}$ and ${}^{k+1}\Delta_{h,j}$ are equal to the FB prediction errors of the signal $x_t$, as shown in the Appendix. Another interesting point is that the prediction errors may also be utilized to obtain the expression for $\Delta_{g,j-1}$, as presented in equation (3). This makes the spiking-filter algorithm not dependent explicitly on the autocorrelation coefficients of the input wavelet. As in the Burg algorithm (Burg, 1975), the FB prediction errors may be updated by the relation
$$
\left[\,{}^{k+1}e_{\rightarrow,j}\;\;\; {}^{k+1}e_{\leftarrow,j}\,\right]
= \left[\,{}^{k+1}e_{\rightarrow,j-1}\;\;\; {}^{k}e_{\leftarrow,j-1}\,\right]
\begin{bmatrix} 1 & g_{j,j} \\ g_{j,j} & 1 \end{bmatrix}.
$$
Consequently, with the increase of the length $j$ of the filter, and using the Manolakis equation, one can recursively compute MTSEs for the impulse moving from the first position of the signal $x_t$ to position $m+j-2$. To initialize the Manolakis expression one needs ${}^{0}E_{h,j}$, which corresponds to the MTSE for a filter of $j$ coefficients and the impulse (desired signal) at position $k = 0$. From the compact representation of the NE, one may obtain the expression for ${}^{0}E_{h,j}$,
$$
{}^{0}E_{h,j} = {}^{0}E_{h,j-1} - \frac{\left({}^{0}\Delta_{h,j-1}\right)^2}{E_{g,j-1}}, \tag{21}
$$
where ${}^{0}E_{h,0}$ represents the energy of the desired signal and may be set equal to 1. For an impulse in the first position, the crosscorrelation vector is equal to the first element of the input signal; consequently, ${}^{0}\Delta_{h,j-1} = x_0\, g_{j-1,j-1} = {}^{0}e_{\leftarrow,j-1}$. The steps of the algorithm to obtain ${}^{k}E_{h,j}$, $[(k = 0, \ldots, m+j-2),\ j = 1, \ldots, n]$, are presented below.

Initialization:
$$
g_{0,0} = 1, \quad E_{g,0} = r_0, \quad
{}^{k}e_{\rightarrow,0} = {}^{k}e_{\leftarrow,0} = x_k, \quad k = 0, \ldots, m-1, \quad
{}^{0}E_{h,0} = r_{y,0} = 1
$$
DO $j = 1, \ldots, n$
$$
{}^{0}E_{h,j} = {}^{0}E_{h,j-1} - \frac{\left({}^{0}e_{\leftarrow,j-1}\right)^2}{E_{g,j-1}}
$$
DO $k = 0, \ldots, m+j-2$
$$
{}^{k+1}E_{h,j} = {}^{k}E_{h,j}
+ \frac{\left({}^{k}e_{\leftarrow,j-1} - {}^{k+1}e_{\rightarrow,j-1}\right)\left({}^{k}e_{\leftarrow,j-1} + {}^{k+1}e_{\rightarrow,j-1}\right)}{E_{g,j-1}}
$$
ENDDO
$$
\Delta_{g,j-1} = \mathbf{e}_{\rightarrow,j-1}^T \begin{bmatrix} 0 \\ \mathbf{e}_{\leftarrow,j-1} \end{bmatrix}
$$
$$
\begin{bmatrix} E_{g,j-1} & \Delta_{g,j-1} \\ \Delta_{g,j-1} & E_{g,j-1} \end{bmatrix}
\begin{bmatrix} 1 \\ g_{j,j} \end{bmatrix}
= \begin{bmatrix} E_{g,j} \\ 0 \end{bmatrix}
$$
DO $k = 0, \ldots, m+j-2$
$$
\left[\,{}^{k+1}e_{\rightarrow,j}\;\;\; {}^{k+1}e_{\leftarrow,j}\,\right]
= \left[\,{}^{k+1}e_{\rightarrow,j-1}\;\;\; {}^{k}e_{\leftarrow,j-1}\,\right]
\begin{bmatrix} 1 & g_{j,j} \\ g_{j,j} & 1 \end{bmatrix}
$$
ENDDO
ENDDO

A total of $\tfrac{5}{2}n^2 + \tfrac{3}{2}n + 5mn$ "operations" is required to generate the $(m+n-1)\times n$ values of the MTSE.
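A NumPy sketch of the spiking-filter recursion follows. It reproduces the structure above: equation (21) seeds each order at $k = 0$, the lag sweep uses equation (19) with $\Delta_h$ and $\Delta_f$ replaced by the backward and forward prediction errors, and the Burg-type step of equations (3)-(4) advances the order. The function name, array layout and the brute-force check are mine.

```python
import numpy as np

def spiking_mtse_surface(x, n):
    """Surface of MTSEs for the spiking filter in lag/length coordinates.

    Returns E with E[k, j-1] = MTSE of the length-j filter whose desired unit
    spike sits at output sample k (k = 0..m+j-2); unused entries are NaN."""
    x = np.asarray(x, dtype=float)
    m = len(x)
    e_fwd = x.copy()                       # order-0 forward prediction error
    e_bwd = x.copy()                       # order-0 backward prediction error
    Eg = float(np.dot(x, x))               # E_{g,0} = r_0
    E = np.full((m + n - 1, n), np.nan)
    Eh0 = 1.0                              # 0E_{h,0}: energy of the unit spike
    for j in range(1, n + 1):
        nk = m + j - 1                     # number of spike positions for length j
        Eh = np.empty(nk)
        Eh[0] = Eh0 - e_bwd[0] ** 2 / Eg   # equation (21) at k = 0
        for k in range(nk - 1):            # lag sweep, equation (19)
            dh, df = e_bwd[k], e_fwd[k + 1]
            Eh[k + 1] = Eh[k] + (dh - df) * (dh + df) / Eg
        E[:nk, j - 1] = Eh
        Eh0 = Eh[0]
        # Burg/Levinson step: equations (3) and (4)
        delta = np.dot(e_fwd[1:], e_bwd[:-1])
        gjj = -delta / Eg
        Eg = Eg + gjj * delta
        e_fwd, e_bwd = (np.append(e_fwd, 0.0) + gjj * np.append(0.0, e_bwd),
                        gjj * np.append(e_fwd, 0.0) + np.append(0.0, e_bwd))
    return E

# check one entry against a direct least-squares solve (hypothetical wavelet)
x = np.array([1.0, -0.5, 0.3, 0.2])
E = spiking_mtse_surface(x, 3)
j, k = 3, 2
X = np.column_stack([np.concatenate([np.zeros(i), x, np.zeros(j - 1 - i)]) for i in range(j)])
d = np.zeros(len(x) + j - 1)
d[k] = 1.0
f, *_ = np.linalg.lstsq(X, d, rcond=None)
print(E[k, j - 1], np.sum((d - X @ f) ** 2))   # the two values should agree
```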
The algorithm for the shaping filter

The values ${}^{0}E_{h,j}$, $j = 1, \ldots, n$, needed to initialize the Manolakis expression, can be recursively calculated from equation (21) above, in which ${}^{0}E_{h,0} = r_{y,0}$ and ${}^{0}\Delta_{h,j-1}$ is obtained from
$$
{}^{0}\Delta_{h,j-1} = {}^{0}\mathbf{r}_{xy,j}^T\, \mathbf{J}_j \begin{bmatrix} 1 \\ \mathbf{g}_{j-1} \end{bmatrix}.
$$
In this case the crosscorrelation vector is no longer equal to the input signal $x_t$, as it is for the spiking filter. Consequently, we require the autocorrelation of $x_t$ and the corresponding PEO. The algorithm for the calculation of ${}^{k}E_{h,j}$, $[(k = 0, \ldots, m),\ j = 1, \ldots, n]$, is presented below ($i$ is the initial position of the desired signal, $y_i = x_0$).

Initialization:
$$
r_{xx,\tau} = \sum_t x_t\, x_{t+\tau}, \quad
r_{xy,\tau} = \sum_t y_{\tau+i+t}\, x_t, \quad
r_{yx,\tau} = \sum_t y_t\, x_{\tau+i+t}, \qquad \tau = 0, \ldots, n-1,
$$
$$
{}^{0}E_{h,0} = r_{y,0} = \sum_t y_t^2, \quad g_{0,0} = 1, \quad E_{g,0} = r_0, \quad
{}^{k}\Delta_{h,0} = {}^{k}\Delta_{f,0} = r_{yx,k}, \quad k = 0, \ldots, m-1
$$
DO $j = 1, \ldots, n$
$$
{}^{0}E_{h,j} = {}^{0}E_{h,j-1} - \frac{\left({}^{0}\Delta_{h,j-1}\right)^2}{E_{g,j-1}}
$$
DO $k = 0, \ldots, m-1$
$$
{}^{k+1}E_{h,j} = {}^{k}E_{h,j}
+ \frac{\left({}^{k}\Delta_{h,j-1} - {}^{k+1}\Delta_{f,j-1}\right)\left({}^{k}\Delta_{h,j-1} + {}^{k+1}\Delta_{f,j-1}\right)}{E_{g,j-1}}
$$
ENDDO
$$
\Delta_{g,j-1} = \mathbf{r}_{xx,j}^T\, \mathbf{J}_j \begin{bmatrix} 1 \\ \mathbf{g}_{j-1} \end{bmatrix}
$$
$$
\begin{bmatrix} E_{g,j-1} & \Delta_{g,j-1} \\ \Delta_{g,j-1} & E_{g,j-1} \end{bmatrix}
\begin{bmatrix} 1 \\ g_{j,j} \end{bmatrix}
= \begin{bmatrix} E_{g,j} \\ 0 \end{bmatrix}
$$
$$
\begin{bmatrix} 1 \\ \mathbf{g}_j \end{bmatrix}
= \begin{bmatrix} 1 \\ \mathbf{g}_{j-1} \\ 0 \end{bmatrix}
+ g_{j,j} \begin{bmatrix} 0 \\ \mathbf{J}_{j-1}\mathbf{g}_{j-1} \\ 1 \end{bmatrix}
$$
DO $k = 0, \ldots, m-1$
$$
\left[\,{}^{k+1}\Delta_{f,j}\;\;\; {}^{k+1}\Delta_{h,j}\,\right]
= \left[\,{}^{k+1}\Delta_{f,j-1}\;\;\; {}^{k}\Delta_{h,j-1}\,\right]
\begin{bmatrix} 1 & g_{j,j} \\ g_{j,j} & 1 \end{bmatrix}
$$
ENDDO
ENDDO

The total number of "operations" (except for those in the autocorrelation and crosscorrelation) is equal to $\tfrac{3}{2}n^2 + \tfrac{3}{2}n + 4nm$.
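The shaping-filter counterpart is sketched below. The PEO step is again Burg-type on the trace; the quantities $\Delta_f$ and $\Delta_h$ are carried as whole sequences obtained by filtering the crosscorrelation of $x$ and $y$ with the forward and backward PEOs, and are advanced with the two-term update of equation (20). For brevity the sketch applies the length update (21) at every lag instead of sweeping lags with (19); the two arrangements produce the same surface. Lag $k$ here means that the first sample of $y$ is placed at output sample $k$ (a convention of mine, not the paper's $i$-notation).

```python
import numpy as np

def shaping_mtse_surface(x, y, n):
    """m x n map of MTSEs for the Wiener shaping filter in lag (k) and length (j)
    coordinates: E[k, j-1] is the MTSE of the length-j filter shaping x into y
    placed at output sample k, for k = 0..m-1."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    m, p = len(x), len(y)
    off = p - 1                                 # array index of lag 0 in the sequences below
    c = np.correlate(x, y, mode="full")         # c[tau + off] = sum_t y_t x_{t+tau}
    e_fwd, e_bwd = x.copy(), x.copy()           # order-0 FB prediction errors
    d_fwd, d_bwd = c.copy(), c.copy()           # order-0 Delta_f / Delta_h sequences
    Eg = float(np.dot(x, x))                    # E_{g,0} = r_0
    Eh = np.full(m, float(np.dot(y, y)))        # 0E_{h,0} = r_{y,0}, for every lag
    E = np.zeros((m, n))
    for j in range(1, n + 1):
        # length update, equation (21), applied at lags k = 0..m-1
        Eh = Eh - d_bwd[off:off + m] ** 2 / Eg
        E[:, j - 1] = Eh
        # Burg/Levinson step on the trace: reflection coefficient g_{j,j}
        delta = np.dot(e_fwd[1:], e_bwd[:-1])
        gjj = -delta / Eg
        Eg = Eg + gjj * delta
        e_fwd, e_bwd = (np.append(e_fwd, 0.0) + gjj * np.append(0.0, e_bwd),
                        gjj * np.append(e_fwd, 0.0) + np.append(0.0, e_bwd))
        # equation (20): the same two-term update advances Delta_f and Delta_h
        d_fwd, d_bwd = (np.append(d_fwd, 0.0) + gjj * np.append(0.0, d_bwd),
                        gjj * np.append(d_fwd, 0.0) + np.append(0.0, d_bwd))
    return E

# locate the smallest MTSE in the map for a hypothetical pair of wavelets
x = np.array([1.0, -0.6, 0.2])
y = np.array([0.3, 1.0, 0.3])
E = shaping_mtse_surface(x, y, 4)
k_best, j_best = np.unravel_index(np.argmin(E), E.shape)
print(k_best, j_best + 1, E[k_best, j_best])
```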
NUMERICAL EXAMPLES

Spiking filter

Figure 1A shows a mixed-phase wavelet formed by the convolution of 10 dipoles, $(1, A_j e^{i\theta_j})$, where $(A_j, \theta_j) = \{(.5, \pm 50^\circ), (.5, \pm 50^\circ), (.2, \pm 30^\circ), (.2, \pm 30^\circ), (.8, 0^\circ), (.8, 180^\circ)\}$. Figure 1B represents the surface of performance factors associated with the spiking filter for the wavelet shown in Figure 1A. The length of the filter was increased up to 20 coefficients, and the desired signal, which is a spike of unit amplitude, was shifted from position $k = 0$ to $k = 29$. The spikes in Figure 1C indicate the path, in lag and length coordinates, of the maximum performance factors of the spiking filter. Figure 1D is a plot of the optimum performance factors represented in Figure 1C versus the length of the filter. Figure 1E shows the output of the spiking filter for the desired signal at the positions indicated in Figure 1C.

Shaping filter

Figure 2A shows the symmetric wavelet formed by the convolution of 6 dipoles, $(A_j, \theta_j) = \{(2, \pm 80^\circ), (.5, \pm 80^\circ), (2, 0^\circ), (5, 180^\circ)\}$. Figure 2B represents the surface formed by the performance factors for the shaping filter which has as input the mixed-phase wavelet (Figure 1A) and as desired signal
the symmetric wavelet shown in Figure 2A. The spikes in Figure 2C indicate the path of the optimum performance factors for the shaping filter. The behavior of the optimum performance factors versus the length of the filter is illustrated in Figure 2D. Figure 2E shows the output of the shaping filter for the desired signal (Figure 2A) shifted to the positions indicated in Figure 2C.

CONCLUSIONS

As shown in equations (2), (7), (13) and (14), the compact $(2\times 2)$ form obtained for the expanded NE provides a structural representation of the Levinson and Manolakis algorithms. Even when non-symmetric Toeplitz systems are present, as in extended Yule-Walker equations or in Hankel systems (Porsani and Ulrych, 1995), or even in simply symmetric systems, the compact representation of the NE is very useful in deriving new Levinson-type algorithms. The efficiency of the algorithms presented is determined by the use of the Levinson relationship together with equations (15) and (16) for obtaining ${}^{k}\Delta_{h,j-1}$ and ${}^{k}\Delta_{f,j-1}$. These expressions use only PEOs and crosscorrelation coefficients and allow us to update the values of the MTSE as the length of the modeling operator increases. For a wavelet of length $m$ and a filter of $n$ coefficients, $5mn + O(n^2)$ and $4mn + O(n^2)$ "operations" are required to compute the $(m+n-1)\times n$ values of the MTSE for spiking and shaping filters, respectively.

If so desired, the continuity of the recursion may be
monitored from the minimum value of the sequence of MTSEs $\{{}^{0}E_{h,j}, \ldots, {}^{m+j-2}E_{h,j}\}$, interrupting the recursion when this minimum satisfies some a priori value of the MTSE. Another possibility is to interrupt the recursion when the minimum value of ${}^{k}E_{h,j}$ does not change significantly between filter lengths $j$ and $j+1$. Knowing $k_{\min}$, the lag index associated with the minimum for the desired signal, and knowing the length $j$ of the optimum filter, the LR can be used to obtain the corresponding modeling operator.

Using the FB least-squares modeling coupled with the compact representations of the expanded NEs, the Levinson and Manolakis recursions were merged into one unified algorithm for designing discrete Wiener filters that are not restricted to a pre-fixed length of the spiking or shaping filter. These algorithms may be seen as a Levinson recursion on two variables: the length $j$ of the filter and the lag $k$ of the desired signal. A point of importance in both algorithms is the use of the Levinson relationship in a manner analogous to that of the Burg algorithm. This approach makes the algorithms as efficient as the Levinson recursion. The algorithms presented generate the surface of performance factors (or, equivalently, of MTSEs) for the Wiener filters. This surface gives the map of maximum performance factors in lag and length coordinates and may be used to design Wiener filters.

ACKNOWLEDGEMENTS

Comments by Tadeusz Jan Ulrych, Arthur Weglein, Elizabeth S. Ramos and anonymous reviewers helped to improve the
manuscript. I wish to express my gratitude to the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) for their kind support.

REFERENCES

Burg, J. P., 1975, Maximum entropy spectrum analysis: Ph.D. dissertation, Dept. of Geophysics, Stanford University, Stanford, CA, May.

Delsarte, P., and Genin, V., 1986, The split Levinson algorithm: IEEE Trans. Acoust., Speech, and Signal Processing, 34, 470-478.

Eisner, E., and Hampson, G., 1990, Decomposition into minimum and maximum phase components: Geophysics, 55, 897-901.

Gangi, A. F., and Shapiro, J. N., 1977, A propagating algorithm for determining nth-order polynomial least-squares fits: Geophysics, 42, 1265-1276.

Krishna, H., and Morgera, S. D., 1987, The Levinson recurrence and fast algorithms for solving Toeplitz systems of linear equations: IEEE Trans. Acoust., Speech, and Signal Processing, 35, 839-848.

Levinson, N., 1947, The Wiener RMS (root mean square) error criterion in filter design and prediction: J. Math. Phys., 25, 261-278.

Manolakis, D., Kalouptsidis, N., and Carayannis, G., 1983, Fast algorithms for discrete-time Wiener filters with optimum lag: IEEE Trans. Acoust., Speech, and Signal Processing, 31, 168-179.

Marple, L., 1980, A new autoregressive spectrum analysis algorithm: IEEE Trans. Acoust., Speech, and Signal Processing, 28, 233-243.

Papoulis, A., 1985, Levinson's algorithm, Wold's decomposition, and spectral estimation: SIAM Review, 27, 405-418.

Porsani, M. J., 1986, Desenvolvimento de algoritmos tipo-Levinson para o processamento de dados sísmicos: Ph.D. thesis, PPPG, Universidade Federal da Bahia, Salvador, Brazil.

Porsani, M. J., and Ulrych, T. J., 1991, Levinson-type extensions for non-Toeplitz systems: IEEE Trans. Acoust., Speech, and Signal Processing, 39, 366-375.

Porsani, M. J., and Ulrych, T. J., 1995, Levinson-type algorithms for polynomial fitting and for Cholesky and Q factors of Hankel and Vandermonde matrices: IEEE Trans. Signal Processing, 43, 63-70.

Robinson, E. A., 1967, Predictive decomposition of time series with applications to seismic exploration: Geophysics, 32, 418-484.

Robinson, E. A., and Treitel, S., 1980, Geophysical signal analysis: Prentice-Hall, Englewood Cliffs, N.J., 466 p.

Simpson, S. M., Robinson, E. A., Wiggins, R. A., and Wunsch, C. I., 1963, Studies in optimum filtering of single and multiple stochastic processes: Massachusetts Institute of Technology, Cambridge.

Ulrych, T. J., and Clayton, R. W., 1976, Time series modeling and maximum entropy: Phys. Earth Planetary Interiors, 12, 188-200.

Wiggins, R. A., and Robinson, E. A., 1965, Recursive solution to the multichannel filtering problems: J. Geophysical Res., 70, 1885-1891.
APPENDIX

DERIVATION OF THE EQUATIONS FOR $\Delta_h$ AND $\Delta_f$

Using the Levinson relationship for the PEO in equations (15) and (16), at order $j+1$ and lag $k+1$, the recursive relations to update the quantities ${}^{k+1}\Delta_{h,j}$ and ${}^{k+1}\Delta_{f,j}$ may be derived as presented below:
$$
\begin{aligned}
{}^{k+1}\Delta_{h,j}
&= {}^{k+1}\mathbf{r}_{xy,j+1}^T\, \mathbf{J}_{j+1} \begin{bmatrix} 1 \\ \mathbf{g}_j \end{bmatrix} \\
&= {}^{k+1}\mathbf{r}_{xy,j+1}^T\, \mathbf{J}_{j+1}
\left( \begin{bmatrix} 1 \\ \mathbf{g}_{j-1} \\ 0 \end{bmatrix}
+ g_{j,j} \begin{bmatrix} 0 \\ \mathbf{J}_{j-1}\mathbf{g}_{j-1} \\ 1 \end{bmatrix} \right) \\
&= {}^{k}\mathbf{r}_{xy,j}^T\, \mathbf{J}_j \begin{bmatrix} 1 \\ \mathbf{g}_{j-1} \end{bmatrix}
+ g_{j,j}\, {}^{k+1}\mathbf{r}_{yx,j}^T \begin{bmatrix} 1 \\ \mathbf{g}_{j-1} \end{bmatrix} \\
&= {}^{k}\Delta_{h,j-1} + g_{j,j}\, {}^{k+1}\Delta_{f,j-1}.
\end{aligned}
$$
Analogously, for ${}^{k+1}\Delta_{f,j}$,
$$
\begin{aligned}
{}^{k+1}\Delta_{f,j}
&= {}^{k+1}\mathbf{r}_{yx,j+1}^T \begin{bmatrix} 1 \\ \mathbf{g}_j \end{bmatrix} \\
&= {}^{k+1}\mathbf{r}_{yx,j+1}^T
\left( \begin{bmatrix} 1 \\ \mathbf{g}_{j-1} \\ 0 \end{bmatrix}
+ g_{j,j} \begin{bmatrix} 0 \\ \mathbf{J}_{j-1}\mathbf{g}_{j-1} \\ 1 \end{bmatrix} \right) \\
&= {}^{k+1}\mathbf{r}_{yx,j}^T \begin{bmatrix} 1 \\ \mathbf{g}_{j-1} \end{bmatrix}
+ g_{j,j}\, {}^{k}\mathbf{r}_{xy,j}^T\, \mathbf{J}_j \begin{bmatrix} 1 \\ \mathbf{g}_{j-1} \end{bmatrix} \\
&= {}^{k+1}\Delta_{f,j-1} + g_{j,j}\, {}^{k}\Delta_{h,j-1}.
\end{aligned}
$$
The correspondence between the FB prediction errors and $\Delta_f$ and $\Delta_h$

In the event that the desired signal $y_t$ is an impulse, the coefficients of the crosscorrelation vectors are elements of the input signal $x_t$. Evaluating equations (15) and (16) with these vectors, the products reduce to the PEO of order $j-1$ applied to windows of the trace around sample $k$, so that
$$
{}^{k}\Delta_{h,j-1} = {}^{k}e_{\leftarrow,j-1}, \qquad
{}^{k+1}\Delta_{f,j-1} = {}^{k+1}e_{\rightarrow,j-1};
$$
that is, ${}^{k+1}\Delta_{f,j}$ and ${}^{k+1}\Delta_{h,j}$ are equal to the forward and backward prediction errors of the signal $x_t$.
CAPTIONS

Figure 1A: Mixed-phase wavelet.
Figure 1B: Surface of performance values associated with the spiking filter, for the wavelet shown in Figure 1A.
Figure 1C: Maximum performance factors for the spiking filter.
Figure 1D: Maximum performance factors versus length of the spiking filter.
Figure 1E: Output of the spiking filter associated with the maximum performance factors indicated in Figure 1C.
Figure 2A: Symmetric wavelet (desired signal).
Figure 2B: Surface of performance factors associated with the filter to shape the wavelet in Figure 1A into that in Figure 2A.
Figure 2C: Maximum performance factors for the shaping filter.
Figure 2D: Maximum performance factors versus length of the shaping filter.
Figure 2E: Output of the shaping filter associated with the maximum performance values indicated in Figure 2C.