Levinson-type Algorithms for Polynomial Fitting and for Cholesky and Q Factors of Hankel and Vandermonde Matrices

Milton J. Porsani and Tad J. Ulrych

1995

Abstract

This paper presents Levinson-type algorithms for (i) polynomial fitting, (ii) obtaining a Q factorization of Vandermonde matrices and a Cholesky factorization of Hankel matrices, and (iii) obtaining the inverse of Hankel matrices. The algorithm for the least-squares solution of Hankel systems of equations requires $3n^2 + 9n + 3$ multiply and divide operations (MDO). The algorithm for obtaining an orthogonal representation of an $(m \times n)$ Vandermonde matrix $X$, and for computing the Cholesky factor, $F$, of Hankel matrices, requires $5mn + n^2 + 2n - 3m$ MDO. The algorithm for generating the inverse of Hankel matrices requires $3(n^2 + n - 2)/2$ MDO. Our algorithms have been tested by fitting polynomials of various orders, and Fortran versions of all subroutines are provided in the appendix.

1 Introduction

Many examples of least-squares (LS) polynomial fitting exist in the literature. In geophysics, a field with which we are particularly familiar, applications include (1) the determination of traveltimes of rays in multilayered media with dipping interfaces [1], (2) the correction of systematic error in magnetic surveys [2], and (3) the estimation of reflectivity polynomials [3]. The most commonly used method (see, e.g., [4]) of fitting the data to a polynomial is the LS method, in which the sum of the squared differences between calculated and observed values is minimized. The propagating algorithm presented in [5] requires $(9n^2 + n - 10)/2$ MDO for the solution of the normal equations (NE), plus $n^2/2$ multiplications to compute the mean-square error during the recursion. The Levinson-type algorithm for LS polynomial fitting presented in this paper requires only $3n^2 + 9n + 3$ MDO. Another particular characteristic of our algorithm is that the minimized total squared error (MTSE) is available as an output at each polynomial order. Similarly to Levinson's recursion for Wiener shaping filters, the algorithms require the backward solutions related to the coefficient matrix which, in the case of polynomial fitting, is a Hankel matrix.

The Hankel structure of the coefficient matrix $R$ for LS polynomial fitting, and the relationship between symmetric Hankel matrices and nonsymmetric Toeplitz matrices $\bar{R}$ ($\bar{R} = RJ$, where $J$ is the reverse identity matrix), make the algorithm for solving Hankel systems also useful for solving nonsymmetric Toeplitz systems. Systems of equations associated with a nonsymmetric Toeplitz autocovariance matrix occur in the important problem of seismic deconvolution when the primary reflectivity function cannot be assumed to be white [6]. Other recent geophysical applications of nonsymmetric Toeplitz systems are related to the decomposition of wavelets into minimum- and maximum-phase components [7].

In the QR decomposition, the matrix $X$ is written as the product of an orthogonal matrix $Q$ with an upper triangular matrix $R$ ($X = QR$). The matrix $Q$ may be obtained using the Gram-Schmidt, Householder or Givens algorithms. Since the condition number of $X^TX$ is the square of the condition number of $X$, the solution of the NE will in general be less accurate than that produced by the square-root algorithm which uses an orthogonal representation of $X$. The fast QR algorithm for the factorization of Vandermonde matrices presented in [8] requires $5mn + 7n^2/2 + O(m)$ MDO. The algorithm presented here for obtaining simultaneously the Q factorization of the Vandermonde matrix and the Cholesky factor $F$ of the inverse of the Hankel matrix requires $5mn + n^2 + 2n - 3m$ MDO, allowing the LS solution to be obtained faster than with the approach presented in [8].

Levinson-type algorithms are recursive in order and do not require explicit knowledge of the inverse of the coefficient matrix. In some cases knowledge of the inverse matrix is nevertheless necessary; an example is the solution of a family of NEs with the same coefficient matrix. As will be shown, the cross-band property of Hankel matrices allows us to develop an algorithm for obtaining the inverse of a Hankel matrix which begins with the first and last columns of the inverse of order $n$. All remaining columns of the inverse of the Hankel matrix are generated recursively. $2n^2 + 4n + 1$ MDO are required to compute the first and last columns of the inverse, and only $3(n^2 + n - 2)/2$ MDO to generate the remaining columns. A similar approach is described in [9] for symmetric Toeplitz matrices. Other procedures that use the inverse matrix of order $j$ to obtain the inverse of order $j + 1$ are presented in [5], [10-11].

The paper is organized as follows. The basic Levinson principle for the solution of linear systems of equations is presented in section 2. Section 3 deals with the application of Levinson's principle, and the algorithm for recursive polynomial fitting is presented. Section 4 applies Levinson's relationship directly to the Vandermonde system of equations, and a fast algorithm for obtaining the Q matrix and the Cholesky factor is presented. Section 5 presents an algorithm to recursively generate the inverse of a Hankel matrix.

2 Levinson's basic principle

Levinson's principle [12] consists of a recursive solution for order $j$ using a linear combination of the forward and backward (F & B) solutions of subsystems of lesser order. This basic principle has many useful applications even when the systems of equations are not Toeplitz. Symmetric or nonsymmetric Toeplitz, tridiagonal, Hessenberg, Vandermonde and Hankel systems can be solved with $O(n^2)$ MDO [5-6], [9] and [13]. In the general case of a symmetric matrix with no special structure, the application of Levinson's principle gives the solution with $O(n^3)$ MDO [9], [13].

Let $\mathbf{h}_j = (h_{j,1}, \ldots, h_{j,j})^T$ and $\mathbf{f}_j = (f_{j,j}, \ldots, f_{j,1})^T$ be the solutions of the two subsystems represented below,

$$\begin{bmatrix} \mathbf{u}_j & R_j & \mathbf{v}_j \end{bmatrix} \begin{bmatrix} 1 & 0 \\ \mathbf{h}_j & \mathbf{f}_j \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} \mathbf{0}_j & \mathbf{0}_j \end{bmatrix} \qquad (1)$$

where $\mathbf{u}_j = (u_1, \ldots, u_j)^T$ and $\mathbf{v}_j = (v_1, \ldots, v_j)^T$ are the first and last columns of the $j \times (j+2)$ matrix, $R_j$ is any $j \times j$ invertible matrix, and $\mathbf{0}_j$ is a vector with $j$ null elements. For order $j + 1$, the system may be written as

$$\begin{bmatrix} \mathbf{u}_j & R_j & \mathbf{v}_j \\ u_{j+1} & \mathbf{r}_j^T & v_{j+1} \end{bmatrix} \begin{bmatrix} 1 \\ \mathbf{h}_{j+1} \end{bmatrix} = \begin{bmatrix} \mathbf{0}_j \\ 0 \end{bmatrix} \qquad (2)$$

where $\mathbf{r}_j = (r_1, \ldots, r_j)^T$. Taking (1) into consideration, one may verify by simple algebra that Levinson's relationship between $\mathbf{h}_j$ and $\mathbf{f}_j$, which increases the order of the solution vector to $\mathbf{h}_{j+1}$, may be expressed as

$$\begin{bmatrix} 1 \\ \mathbf{h}_{j+1} \end{bmatrix} = \begin{bmatrix} 1 \\ \mathbf{h}_j \\ 0 \end{bmatrix} + \alpha_{j+1} \begin{bmatrix} 0 \\ \mathbf{f}_j \\ 1 \end{bmatrix} \qquad (3)$$

where

$$\alpha_{j+1} = - \frac{\begin{bmatrix} u_{j+1} & \mathbf{r}_j^T \end{bmatrix} \begin{bmatrix} 1 \\ \mathbf{h}_j \end{bmatrix}}{\begin{bmatrix} \mathbf{r}_j^T & v_{j+1} \end{bmatrix} \begin{bmatrix} \mathbf{f}_j \\ 1 \end{bmatrix}} \qquad (4)$$

We would like to point out that equations (1)-(4) remain true even when $f_{j,0}$ is different from one, as will be demonstrated later on. Since 1947, this basic and simple idea has been intensively applied to the development of recursive and adaptive filters [13-21].
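The verification of (3)-(4), a brief expansion of the simple algebra referred to above, runs as follows. Substituting (3) into (2), the first $j$ equations vanish because of (1), and the last equation determines $\alpha_{j+1}$:

$$\begin{bmatrix} u_{j+1} & \mathbf{r}_j^T & v_{j+1} \end{bmatrix}\left(\begin{bmatrix} 1 \\ \mathbf{h}_j \\ 0 \end{bmatrix} + \alpha_{j+1}\begin{bmatrix} 0 \\ \mathbf{f}_j \\ 1 \end{bmatrix}\right) = \begin{bmatrix} u_{j+1} & \mathbf{r}_j^T \end{bmatrix}\begin{bmatrix} 1 \\ \mathbf{h}_j \end{bmatrix} + \alpha_{j+1}\begin{bmatrix} \mathbf{r}_j^T & v_{j+1} \end{bmatrix}\begin{bmatrix} \mathbf{f}_j \\ 1 \end{bmatrix} = 0,$$

which is precisely (4).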

3 A recursive algorithm for LS fitting

Let $X_{j+1} = [\,{}^{0}\mathbf{x} \;\; {}^{1}\mathbf{x} \;\cdots\; {}^{j}\mathbf{x}\,]$ represent the Vandermonde matrix, where column ${}^{k}\mathbf{x}$ contains the independent variable $x$ raised to the exponent $k$, $k = 0, 1, \ldots, j$. $X\mathbf{h} = \mathbf{y}$ represents the overdetermined Vandermonde system of equations to be solved, where the vector $\mathbf{y} = (y_1, \ldots, y_m)^T$ represents our data. The coefficient matrix $X^TX$ is a Hankel matrix. For the solution of the Hankel system of equations corresponding to the NE, a Levinson-type algorithm named the propagating algorithm, presented in [5], requires $(9n^2 + n - 10)/2$ MDO. We derive below a Levinson-type algorithm that requires $3n^2 + 9n + 3$ MDO.

Using matrix notation we can represent the NE, together with the expression for the minimized total squared error (MTSE), as

$$\begin{bmatrix} \mathbf{y}^T\mathbf{y} & \mathbf{y}^T X_{j+1} \\ X_{j+1}^T\mathbf{y} & X_{j+1}^T X_{j+1} \end{bmatrix} \begin{bmatrix} 1 \\ \mathbf{h}_{j+1} \end{bmatrix} = \begin{bmatrix} E_{h,j+1} \\ \mathbf{0}_{j+1} \end{bmatrix} \qquad (5)$$

$E_{h,j+1}$ represents the MTSE at stage $j+1$. The coefficient matrix $X_{j+1}^T X_{j+1} = R_{j+1}$ is a Hankel matrix, with equal values along the cross diagonal and along each subcross diagonal. This type of matrix has only $2j+1$ independent elements, as shown below:

$$R_{j+1} = X_{j+1}^T X_{j+1} = \begin{bmatrix} r_0 & r_1 & \cdots & r_j \\ r_1 & r_2 & \cdots & r_{j+1} \\ \vdots & \vdots & \ddots & \vdots \\ r_j & r_{j+1} & \cdots & r_{2j} \end{bmatrix}$$

where $r_j = \sum_i x_i^j$.
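In practice the $2n-1$ independent Hankel entries $r_k = \sum_i x_i^k$, the cross products ${}^{k}\mathbf{x}^T\mathbf{y}$ and the data energy $\mathbf{y}^T\mathbf{y}$ can all be accumulated in a single pass over the data. The following helper is an illustrative sketch (ours, not part of the paper's appendix; the name bldneq and the argument order are hypothetical) of how the inputs of subroutine LSpoly of Appendix A may be built:

      subroutine bldneq(m,n,x,y,r,yx,yy)
c     Illustrative sketch (not from the paper): accumulates the
c     Hankel entries r(k) = sum_i x(i)**(k-1), k=1,...,2n-1, the
c     cross correlations yx(k) = sum_i y(i)*x(i)**(k-1), k=1,...,n,
c     and the data energy yy = sum_i y(i)**2 used in equation (5).
      dimension x(1),y(1),r(1),yx(1)
      do 1 k=1,2*n-1
    1 r(k)=0.
      do 2 k=1,n
    2 yx(k)=0.
      yy=0.
      do 3 i=1,m
      p=1.
      do 4 k=1,2*n-1
      r(k)=r(k)+p
      if(k.le.n) yx(k)=yx(k)+y(i)*p
    4 p=p*x(i)
    3 yy=yy+y(i)*y(i)
      return
      end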

The application of Levinson's principle [12] to the development of recursive algorithms requires, at each stage of the process, knowledge of the F & B solutions of the minor subsystems. Applying Levinson's principle to construct the solution at stage $j+1$ as a linear combination of the F & B solutions at stage $j$, one writes the relationship

$$\begin{bmatrix} 1 \\ \mathbf{h}_{j+1} \end{bmatrix} = \begin{bmatrix} 1 \\ \mathbf{h}_j \\ 0 \end{bmatrix} + \alpha_{j+1} \begin{bmatrix} 0 \\ \mathbf{f}_j \\ f_{j,0} \end{bmatrix} \qquad (6)$$

where $(1 \;\; \mathbf{h}_j^T)^T$ is the modelling error operator (MEO), the negative of whose values $\mathbf{h}_j$ is the solution of the NE at stage $j$, and $(\mathbf{f}_j^T \;\; f_{j,0})^T = (f_{j,j}, \ldots, f_{j,0})^T$ is the unnormalized backward MEO associated with the subsystem

$$R_{j+1} \begin{bmatrix} \mathbf{f}_j \\ f_{j,0} \end{bmatrix} = \begin{bmatrix} \mathbf{0}_j \\ E_{f,j} \end{bmatrix} \qquad (7)$$

$E_{f,j}$ is the MTSE related to the unnormalized backward MEO. In some cases, as in the Levinson recursion for Wiener filters, it is convenient to normalize the backward MEO so that $f_{j,0} = 1$, $j = 1, \ldots, n$. In the present case, however, the Levinson-type algorithm for polynomial fitting may be faster than the propagating algorithm [5] if $f_{j,0}$ is left free during the recursion. Using relation (6) in the quadratic form $Q(\mathbf{h}_{j+1})$ and minimizing it with respect to the parameter $\alpha_{j+1}$, one obtains for (5) the simplified $(2 \times 2)$ form

$$\begin{bmatrix} E_{h,j} & \Delta_{f,j} \\ \Delta_{h,j} & E_{f,j} \end{bmatrix} \begin{bmatrix} 1 \\ \alpha_{j+1} \end{bmatrix} = \begin{bmatrix} E_{h,j+1} \\ 0 \end{bmatrix} \qquad (8)$$

where

$$\Delta_{h,j} = \begin{bmatrix} {}^{j}\mathbf{x}^T\mathbf{y} & {}^{j}\mathbf{x}^T X_j \end{bmatrix} \begin{bmatrix} 1 \\ \mathbf{h}_j \end{bmatrix}, \qquad \Delta_{f,j} = \mathbf{y}^T X_{j+1} \begin{bmatrix} \mathbf{f}_j \\ f_{j,0} \end{bmatrix} = f_{j,0}\,\Delta_{h,j}.$$

The coefficient $\alpha_{j+1}$ required in (6), and the MTSE $E_{h,j+1}$, can be readily obtained from (8). Assuming that at each stage the backward MEO $(\mathbf{f}_j^T \;\; f_{j,0})^T$ and $E_{f,j}$ are known, the procedure to continue the recursion is very simple.

Initialization:
$$E_{h,0} = \mathbf{y}^T\mathbf{y}, \qquad \Delta_{h,0} = {}^{0}\mathbf{x}^T\mathbf{y}$$

DO $j = 1, n-1$
$$\Delta_{h,j} = \begin{bmatrix} {}^{j}\mathbf{x}^T\mathbf{y} & {}^{j}\mathbf{x}^T X_j \end{bmatrix}\begin{bmatrix} 1 \\ \mathbf{h}_j \end{bmatrix}$$
$$\alpha_{j+1} = -\frac{\Delta_{h,j}}{E_{f,j}}$$
$$\begin{bmatrix} 1 \\ \mathbf{h}_{j+1} \end{bmatrix} = \begin{bmatrix} 1 \\ \mathbf{h}_j \\ 0 \end{bmatrix} + \alpha_{j+1}\begin{bmatrix} 0 \\ \mathbf{f}_j \\ f_{j,0} \end{bmatrix}$$
$$E_{h,j+1} = E_{h,j} + \alpha_{j+1}\,\Delta_{h,j}\,f_{j,0}$$
ENDDO

The change of sign of the vector $\mathbf{h}_n$ results in the solution of the NE for the Vandermonde system of equations. If $(\mathbf{f}_j^T \;\; f_{j,0})^T$, $j = 1, \ldots, n-1$, is known, the procedure above may be applied to obtain the solution of any linear system of equations. Only two MDO are required to update the MTSE, $E_{h,j+1}$, which may be used for monitoring the fitting.

Next we present the algorithm for obtaining the backward MEO $(\mathbf{f}_j^T \;\; f_{j,0})^T$ and $E_{f,j}$, which are required by the procedure presented above. The recursive algorithm for the backward MEO, which is used in equation (8), requires, at stage $j$, the backward MEO of order $j-1$ and the first column of the inverse of the coefficient matrix, $(g_{j-1,0} \;\; \mathbf{g}_{j-1}^T)^T = (g_{j-1,0}, \ldots, g_{j-1,j-1})^T$, as shown below:

$$R_j \begin{bmatrix} g_{j-1,0} & \mathbf{f}_{j-1} \\ \mathbf{g}_{j-1} & f_{j-1,0} \end{bmatrix} = \begin{bmatrix} 1 & \mathbf{0}_{j-1} \\ \mathbf{0}_{j-1} & E_{f,j-1} \end{bmatrix} \qquad (9)$$

The cross-band structure of the Hankel matrix allows one to define the following relationships:

$$\begin{bmatrix} \mathbf{f}_j \\ f_{j,0} \end{bmatrix} = \begin{bmatrix} \mathbf{f}_{j-1} \\ f_{j-1,0} \\ 0 \end{bmatrix} + \beta_j \begin{bmatrix} 0 \\ g_{j-1,0} \\ \mathbf{g}_{j-1} \end{bmatrix} \qquad (10)$$

and

$$\begin{bmatrix} g_{j,0} \\ \mathbf{g}_j \end{bmatrix} = \begin{bmatrix} g_{j-1,0} \\ \mathbf{g}_{j-1} \\ 0 \end{bmatrix} + \gamma_j \begin{bmatrix} \mathbf{f}_j \\ f_{j,0} \end{bmatrix} \qquad (11)$$

Pre-multiplying (10) by $R_{j+1}$ and considering (9) results, for the first $j$ equations, in

$$\begin{bmatrix} \mathbf{0}_{j-1} \\ E_{f,j-1} \end{bmatrix} + \beta_j \begin{bmatrix} \mathbf{0}_{j-1} \\ \Delta_{g,j-1} \end{bmatrix} = \mathbf{0}_j,$$

where

$$\Delta_{g,j-1} = \begin{bmatrix} r_j & \cdots & r_{2j-1} \end{bmatrix} \begin{bmatrix} g_{j-1,0} \\ \vdots \\ g_{j-1,j-1} \end{bmatrix}.$$

Calculating $\beta_j$ so as to solve $E_{f,j-1} + \beta_j \Delta_{g,j-1} = 0$,

$$\beta_j = -\frac{E_{f,j-1}}{\Delta_{g,j-1}}. \qquad (12)$$

Pre-multiplying (11) by $R_{j+1}$ and considering (9) results in

$$\begin{bmatrix} 1 \\ \mathbf{0}_{j-1} \\ \Delta_{g,j-1} \end{bmatrix} + \gamma_j \begin{bmatrix} \mathbf{0}_j \\ E_{f,j} \end{bmatrix} = \begin{bmatrix} 1 \\ \mathbf{0}_j \end{bmatrix}$$

Calculating $\gamma_j$ so as to solve $\Delta_{g,j-1} + \gamma_j E_{f,j} = 0$,

$$\gamma_j = -\frac{\Delta_{g,j-1}}{E_{f,j}}. \qquad (13)$$

Returning with $(\mathbf{f}_j^T \;\; f_{j,0})^T$ and $\gamma_j$ to (11), one obtains $(g_{j,0} \;\; \mathbf{g}_j^T)^T$, which will be required at the next stage of the recursion. The steps of the algorithm for obtaining the backward MEO of order $n$ are presented below; $2n^2 + 4n + 1$ MDO are required.

Initialization:
$$E_{f,0} = r_0, \qquad f_{0,0} = 1, \qquad g_{0,0} = \frac{1}{r_0}$$

DO $j = 1, n$
$$\Delta_{g,j-1} = \begin{bmatrix} r_j & \cdots & r_{2j-1} \end{bmatrix}\begin{bmatrix} g_{j-1,0} \\ \vdots \\ g_{j-1,j-1} \end{bmatrix}$$
$$\begin{bmatrix} \mathbf{f}_j \\ f_{j,0} \end{bmatrix} = \begin{bmatrix} \mathbf{f}_{j-1} \\ f_{j-1,0} \\ 0 \end{bmatrix} - \frac{E_{f,j-1}}{\Delta_{g,j-1}}\begin{bmatrix} 0 \\ g_{j-1,0} \\ \mathbf{g}_{j-1} \end{bmatrix}$$
$$E_{f,j} = \begin{bmatrix} r_j & \cdots & r_{2j} \end{bmatrix}\begin{bmatrix} f_{j,j} \\ \vdots \\ f_{j,0} \end{bmatrix}$$
$$\begin{bmatrix} g_{j,0} \\ \mathbf{g}_j \end{bmatrix} = \begin{bmatrix} g_{j-1,0} \\ \mathbf{g}_{j-1} \\ 0 \end{bmatrix} - \frac{\Delta_{g,j-1}}{E_{f,j}}\begin{bmatrix} \mathbf{f}_j \\ f_{j,0} \end{bmatrix}$$
ENDDO

Combining both procedures we obtain the Levinson-type algorithm for the solution of the Hankel system of NE for LS polynomial fitting. The steps of the algorithm are presented below.

Initialization:
$$E_{h,0} = \mathbf{y}^T\mathbf{y}, \qquad h_{1,1} = -\frac{{}^{0}\mathbf{x}^T\mathbf{y}}{{}^{0}\mathbf{x}^T\,{}^{0}\mathbf{x}}, \qquad E_{h,1} = E_{h,0} + \mathbf{y}^T\,{}^{0}\mathbf{x}\;h_{1,1}$$
$$f_{0,0} = 1, \qquad g_{0,0} = \frac{1}{r_0}, \qquad E_{f,0} = r_0$$

DO $j = 1, n-1$
$$\Delta_{g,j-1} = \begin{bmatrix} r_j & \cdots & r_{2j-1} \end{bmatrix}\begin{bmatrix} g_{j-1,0} \\ \vdots \\ g_{j-1,j-1} \end{bmatrix}$$
$$\begin{bmatrix} \mathbf{f}_j \\ f_{j,0} \end{bmatrix} = \begin{bmatrix} \mathbf{f}_{j-1} \\ f_{j-1,0} \\ 0 \end{bmatrix} - \frac{E_{f,j-1}}{\Delta_{g,j-1}}\begin{bmatrix} 0 \\ g_{j-1,0} \\ \mathbf{g}_{j-1} \end{bmatrix}$$
$$E_{f,j} = \begin{bmatrix} r_j & \cdots & r_{2j} \end{bmatrix}\begin{bmatrix} f_{j,j} \\ \vdots \\ f_{j,0} \end{bmatrix}$$
$$\begin{bmatrix} g_{j,0} \\ \mathbf{g}_j \end{bmatrix} = \begin{bmatrix} g_{j-1,0} \\ \mathbf{g}_{j-1} \\ 0 \end{bmatrix} - \frac{\Delta_{g,j-1}}{E_{f,j}}\begin{bmatrix} \mathbf{f}_j \\ f_{j,0} \end{bmatrix}$$
$$\Delta_{h,j} = \begin{bmatrix} {}^{j}\mathbf{x}^T\mathbf{y} & {}^{j}\mathbf{x}^T X_j \end{bmatrix}\begin{bmatrix} 1 \\ \mathbf{h}_j \end{bmatrix}$$
$$\alpha_{j+1} = -\frac{\Delta_{h,j}}{E_{f,j}}$$
$$\begin{bmatrix} 1 \\ \mathbf{h}_{j+1} \end{bmatrix} = \begin{bmatrix} 1 \\ \mathbf{h}_j \\ 0 \end{bmatrix} + \alpha_{j+1}\begin{bmatrix} 0 \\ \mathbf{f}_j \\ f_{j,0} \end{bmatrix}$$
$$E_{h,j+1} = E_{h,j} + \alpha_{j+1}\,\Delta_{h,j}\,f_{j,0}$$
ENDDO

The algorithm presented solves the expanded NE (5) and does not require the computation of the inverse of the coefficient matrix. Due to numerical limitations inherent in this type of recursive algorithm (which implicitly inverts the Hankel matrix), it is recommended that double precision arithmetic be used [5]. Subroutine LSpoly of Appendix A implements this combined recursion.

4 LS solution using Q factorization

As shown above, the LS solution of the overdetermined system $X\mathbf{h} = \mathbf{y}$ may be obtained from a Levinson-type algorithm that recursively solves the NE associated with the Hankel system of equations. We present in this section an algorithm that solves the Vandermonde system of equations directly, by using Levinson's relationship. Our approach generates the orthogonal matrix $Q$ and the Cholesky factor $F$ such that $XF = Q$. The Levinson-type algorithm presented is computationally less expensive than the algorithm presented in [8]. The solution of the NE may be expressed in terms of the Cholesky factor of the inverse of the Hankel matrix, $R^{-1} = F\,\Sigma_f^{-1}\,F^T$:

$$\mathbf{h} = \left[ F\,\Sigma_f^{-1}\,F^T \right] X^T\mathbf{y} = F\,\Sigma_f^{-1}\,Q^T\mathbf{y}, \qquad Q = XF \qquad (14)$$

where $F$ represents the upper triangular matrix formed by lining up in columns the backward MEOs $(\mathbf{f}_j^T \;\; f_{j,0})^T$, $j = 1, \ldots, n$, and $\Sigma_f$ is a diagonal matrix with the $E_{f,j}$ on its diagonal. Using equations (10) and (11) we may derive a recursive procedure to generate simultaneously the orthogonal matrix $Q$, the upper triangular matrix $F$, and the elements $E_{f,j}$ that are required in (14).
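Once $Q$, $F$ and the $E_{f,j}$ are available (for example from subroutine QFAC of Appendix A), equation (14) reduces to one $m \times n$ matrix-vector product, $n$ divisions, and one triangular back-multiplication. The following subroutine is an illustrative sketch of this final step (ours, not part of the paper's appendix; the name qsolve and the work array b are hypothetical):

      subroutine qsolve(m,n,Q,F,Ef,y,h,b)
c     Illustrative sketch (not from the paper): evaluates equation
c     (14), h = F * inv(diag(Ef)) * Q**T * y, from the outputs of
c     subroutine QFAC; b is a work vector of length n.
      dimension Q(m,1),F(n,1),Ef(1),y(1),h(1),b(1)
      do 1 j=1,n
      s=0.
      do 2 i=1,m
    2 s=s+Q(i,j)*y(i)
    1 b(j)=s/Ef(j)
      do 3 i=1,n
      s=0.
c     only the upper triangle of F is defined (F(i,j), i<=j)
      do 4 j=i,n
    4 s=s+F(i,j)*b(j)
    3 h(i)=s
      return
      end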

We obtain the relation for the backward error of order $j$, $\mathbf{e}_{f,j}$, by pre-multiplying (10) by $X_{j+1}$:

$$\mathbf{e}_{f,j} = \mathbf{e}_{f,j-1} + \beta_j\,\mathbf{e}'_{g,j-1} \qquad (15)$$

where

$$\mathbf{e}'_{g,j-1} = \begin{bmatrix} {}^{1}\mathbf{x} & \cdots & {}^{j}\mathbf{x} \end{bmatrix}\begin{bmatrix} g_{j-1,0} \\ \vdots \\ g_{j-1,j-1} \end{bmatrix} = {}^{1}\mathbf{x} : \left( \begin{bmatrix} {}^{0}\mathbf{x} & \cdots & {}^{j-1}\mathbf{x} \end{bmatrix}\begin{bmatrix} g_{j-1,0} \\ \vdots \\ g_{j-1,j-1} \end{bmatrix} \right) = {}^{1}\mathbf{x} : \mathbf{e}_{g,j-1}.$$

(The symbol $:$ represents the point-by-point product between each element of ${}^{1}\mathbf{x}$ and each element of the error vector $\mathbf{e}_{g,j-1}$.) Pre-multiplying (15) by ${}^{j-1}\mathbf{x}^T$ and taking into consideration that ${}^{j-1}\mathbf{x}^T\mathbf{e}_{f,j} = 0$, we obtain the expression for $\beta_j$ in terms of inner products between the error vectors and the ${}^{j-1}\mathbf{x}$ column of the $X$ matrix:

$$\beta_j = -\frac{{}^{j-1}\mathbf{x}^T\,\mathbf{e}_{f,j-1}}{{}^{j-1}\mathbf{x}^T\,\mathbf{e}'_{g,j-1}} = -\frac{\Delta_{f,j-1}}{\Delta'_{g,j-1}}.$$

Pre-multiplying (11) by $X_{j+1}$ we obtain the relation to update $\mathbf{e}_{g,j}$:

$$\mathbf{e}_{g,j} = \mathbf{e}_{g,j-1} + \gamma_j\,\mathbf{e}_{f,j}. \qquad (16)$$

Similarly, using the orthogonality condition between the vectors $\mathbf{e}_{g,j}$ and ${}^{j}\mathbf{x}$ in (16), we may rewrite equation (13) in terms of inner products between the error vectors and ${}^{j}\mathbf{x}$:

$$\gamma_j = -\frac{{}^{j}\mathbf{x}^T\,\mathbf{e}_{g,j-1}}{{}^{j}\mathbf{x}^T\,\mathbf{e}_{f,j}} = -\frac{\Delta_{g,j-1}}{\Delta_{f,j}}.$$

As we may verify, $E_{f,j} = f_{j,0}\,\Delta_{f,j}$ and

$$\Delta'_{g,j-1} = {}^{j-1}\mathbf{x}^T\,\mathbf{e}'_{g,j-1} = {}^{j-1}\mathbf{x}^T\left({}^{1}\mathbf{x} : \mathbf{e}_{g,j-1}\right) = {}^{j}\mathbf{x}^T\,\mathbf{e}_{g,j-1} = \Delta_{g,j-1}.$$

The steps of the algorithm to generate (i) an orthogonal factorization of $X_n$, (ii) a Cholesky factor $F$ such that $XF = Q$, and (iii) the backward MTSE $E_{f,j}$, equal to the energy of column $j$ of the $Q$ matrix, are given below. All these quantities may be used in (14) to obtain the LS solution associated with the overdetermined Vandermonde system of equations $X\mathbf{h} = \mathbf{y}$.

Initialization:
$$E_{f,0} = \Delta_{f,0} = {}^{0}\mathbf{x}^T\,{}^{0}\mathbf{x}, \qquad f_{0,0} = 1, \qquad \mathbf{e}_{f,0} = {}^{0}\mathbf{x}, \qquad g_{0,0} = \frac{1}{E_{f,0}}, \qquad \mathbf{e}_{g,0} = \frac{{}^{0}\mathbf{x}}{E_{f,0}}$$

DO $j = 1, n-1$
$$\mathbf{e}'_{g,j-1} = {}^{1}\mathbf{x} : \mathbf{e}_{g,j-1}$$
$$\Delta_{g,j-1} = {}^{j-1}\mathbf{x}^T\,\mathbf{e}'_{g,j-1}$$
$$\beta_j = -\frac{\Delta_{f,j-1}}{\Delta_{g,j-1}}$$
$$\begin{bmatrix} \mathbf{f}_j \\ f_{j,0} \end{bmatrix} = \begin{bmatrix} \mathbf{f}_{j-1} \\ f_{j-1,0} \\ 0 \end{bmatrix} + \beta_j\begin{bmatrix} 0 \\ g_{j-1,0} \\ \mathbf{g}_{j-1} \end{bmatrix}$$
$$\mathbf{e}_{f,j} = \mathbf{e}_{f,j-1} + \beta_j\,\mathbf{e}'_{g,j-1}$$
$$\Delta_{f,j} = {}^{j}\mathbf{x}^T\,\mathbf{e}_{f,j}, \qquad E_{f,j} = f_{j,0}\,\Delta_{f,j}$$
$$\gamma_j = -\frac{\Delta_{g,j-1}}{\Delta_{f,j}}$$
$$\begin{bmatrix} g_{j,0} \\ \mathbf{g}_j \end{bmatrix} = \begin{bmatrix} g_{j-1,0} \\ \mathbf{g}_{j-1} \\ 0 \end{bmatrix} + \gamma_j\begin{bmatrix} \mathbf{f}_j \\ f_{j,0} \end{bmatrix}$$
$$\mathbf{e}_{g,j} = \mathbf{e}_{g,j-1} + \gamma_j\,\mathbf{e}_{f,j}$$

ENDDO

$5mn + n^2 + 2n - 3m$ MDO are required to obtain the orthogonal matrix $Q = [\,\mathbf{e}_{f,0} \;\cdots\; \mathbf{e}_{f,n-1}\,]$, the $F$ matrix and the quantities $E_{f,j}$; the computation required to generate the columns of the $X$ matrix is not included. The algorithm presented allows the LS solution of the overdetermined Vandermonde system of equations to be obtained faster than with the Demeure approach [8], which requires $5mn + 7n^2/2 + O(m)$ MDO to obtain the QR factorization of the $X$ matrix.

5 Inverse of Hankel matrices

There are cases where knowledge of the inverse matrix is necessary. As an example we may mention the need to solve a family of NEs with the same coefficient matrix; in this case it is more efficient to compute the inverse only once, so that the solution of the entire family of equations may be obtained efficiently. Next we present a Levinson-type algorithm for obtaining the inverse of a Hankel matrix. The same approach may also be used to obtain the inverse of symmetric Toeplitz matrices, as presented in [9]. A different approach for obtaining the inverse of Hankel matrices is presented in [10-11], in which the inverse of order $j$ is used to calculate the inverse of order $j+1$.

From the algorithm for the Hankel systems presented in section 3, the first column of the inverse of the Hankel matrix is known, and the last column of the inverse may be calculated by simply normalizing the backward MEO:

$${}^{n}\mathbf{g}_n = \frac{1}{E_{f,n}}\,(f_{n,n}, \ldots, f_{n,0})^T.$$

Thus, assuming that the first and last columns of the inverse of the Hankel matrix, ${}^{0}\mathbf{g}_n$ and ${}^{n}\mathbf{g}_n$ respectively, are known,

$$R_{n+1}\,\begin{bmatrix} {}^{0}\mathbf{g}_n & {}^{n}\mathbf{g}_n \end{bmatrix} = \begin{bmatrix} 1 & \mathbf{0}_n \\ \mathbf{0}_n & 1 \end{bmatrix},$$

the inverse matrix $G_{n+1} = [\,{}^{0}\mathbf{g}_n \;\cdots\; {}^{n}\mathbf{g}_n\,]$ will be complete when every ${}^{k}\mathbf{g}_n$ $(k = 0, 1, \ldots, n)$ is known such that

$$R_{n+1}\,{}^{k}\mathbf{g}_n = {}^{k}\boldsymbol{\delta}_{n+1}, \qquad (17)$$

where ${}^{k}\boldsymbol{\delta}_{n+1} = (0, \ldots, 0, 1, 0, \ldots, 0)^T$, with the unit element in position $k$, relates to column $k$ of the identity matrix. ${}^{k}\mathbf{g}_n$ may be obtained from ${}^{k+1}\mathbf{g}_n$ as follows. Multiplying ${}^{0}\mathbf{g}_n$ by the last $n$ lines of (17) one obtains

$$\begin{bmatrix} r_{n+1} \\ \vdots \\ r_{2n} \end{bmatrix} = -\frac{1}{{}^{0}g_{n,n}}\,\widetilde{R}_n\begin{bmatrix} {}^{0}g_{n,0} \\ \vdots \\ {}^{0}g_{n,n-1} \end{bmatrix} \qquad (18)$$

where

$$\widetilde{R}_n = \begin{bmatrix} r_1 & \cdots & r_n \\ \vdots & & \vdots \\ r_n & \cdots & r_{2n-1} \end{bmatrix}.$$

Now let us rewrite (17) in an expanded form,

$$\begin{bmatrix} \oplus & -{}^{k}\boldsymbol{\delta}_{n+1}^T \\ -{}^{k}\boldsymbol{\delta}_{n+1} & R_{n+1} \end{bmatrix}\begin{bmatrix} 1 \\ {}^{k}\mathbf{g}_n \end{bmatrix} = \begin{bmatrix} \oplus \\ \mathbf{0}_{n+1} \end{bmatrix} \qquad (19)$$

where $\oplus$ is not explicitly required. The relationship for $(1 \;\; {}^{k}\mathbf{g}_n^T)^T$ may be written as

$$\begin{bmatrix} 1 \\ {}^{k}\mathbf{g}_n \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ {}^{k}\mathbf{g}_{n-1} \end{bmatrix} - {}^{k}\Delta_{g,n-1}\begin{bmatrix} 0 \\ {}^{n}\mathbf{g}_n \end{bmatrix} \qquad (20)$$

where ${}^{k}\mathbf{g}_{n-1}$ solves the upper subsystem of (19) $(\widetilde{R}_n\,{}^{k}\mathbf{g}_{n-1} = {}^{k}\boldsymbol{\delta}_n)$ and

$${}^{k}\Delta_{g,n-1} = \begin{bmatrix} r_{n+1} & \cdots & r_{2n} \end{bmatrix}\begin{bmatrix} {}^{k}g_{n-1,0} \\ \vdots \\ {}^{k}g_{n-1,n-1} \end{bmatrix}.$$

Simplified expression for ${}^{k}\Delta_{g,n-1}$. Let $S$ represent a symmetric matrix and define the scalars $a$ and $b$ as $\mathbf{u}^T S\,\mathbf{v} = a$ and $\mathbf{v}^T S\,\mathbf{u} = b$ ($\mathbf{u}$ and $\mathbf{v}$ are arbitrary vectors). As a consequence of the symmetry of $S$, $a$ and $b$ are necessarily equal. Using this property with $S$ equal to the symmetric bordered matrix of (19), $\mathbf{u} = (0 \;\; {}^{n}\mathbf{g}_n^T)^T$ and $\mathbf{v} = (1 \;\; 0 \;\; {}^{k}\mathbf{g}_{n-1}^T)^T$, we may write

$$a = \begin{bmatrix} 0 & {}^{n}\mathbf{g}_n^T \end{bmatrix} S \begin{bmatrix} 1 \\ 0 \\ {}^{k}\mathbf{g}_{n-1} \end{bmatrix}, \qquad b = \begin{bmatrix} 1 & 0 & {}^{k}\mathbf{g}_{n-1}^T \end{bmatrix} S \begin{bmatrix} 0 \\ {}^{n}\mathbf{g}_n \end{bmatrix}, \qquad S = \begin{bmatrix} \oplus & -{}^{k}\boldsymbol{\delta}_{n+1}^T \\ -{}^{k}\boldsymbol{\delta}_{n+1} & R_{n+1} \end{bmatrix}.$$

Considering that

$$S\begin{bmatrix} 1 \\ 0 \\ {}^{k}\mathbf{g}_{n-1} \end{bmatrix} = \begin{bmatrix} \oplus - {}^{k}g_{n-1,k} \\ \mathbf{0}_n \\ {}^{k}\Delta_{g,n-1} \end{bmatrix} \qquad \text{and} \qquad S\begin{bmatrix} 0 \\ {}^{n}\mathbf{g}_n \end{bmatrix} = \begin{bmatrix} -{}^{n}g_{n,k} \\ \mathbf{0}_n \\ 1 \end{bmatrix},$$

and calculating $a$ and $b$ results in

$$a = \begin{bmatrix} 0 & {}^{n}\mathbf{g}_n^T \end{bmatrix}\begin{bmatrix} \oplus - {}^{k}g_{n-1,k} \\ \mathbf{0}_n \\ {}^{k}\Delta_{g,n-1} \end{bmatrix} = {}^{n}g_{n,n}\,{}^{k}\Delta_{g,n-1}$$

$$b = \begin{bmatrix} 1 & 0 & {}^{k}\mathbf{g}_{n-1}^T \end{bmatrix}\begin{bmatrix} -{}^{n}g_{n,k} \\ \mathbf{0}_n \\ 1 \end{bmatrix} = -{}^{n}g_{n,k} + {}^{k}g_{n-1,n-1}.$$

Since $a = b$,

$${}^{k}\Delta_{g,n-1} = \frac{{}^{k}g_{n-1,n-1} - {}^{n}g_{n,k}}{{}^{n}g_{n,n}}. \qquad (21)$$

Using $(1 \;\; {}^{k+1}\mathbf{g}_n^T)^T$ in the last $n$ lines of (19) and taking into account (18),

$$\widetilde{R}_n\left\{\begin{bmatrix} {}^{k+1}g_{n,0} \\ \vdots \\ {}^{k+1}g_{n,n-1} \end{bmatrix} - \frac{{}^{k+1}g_{n,n}}{{}^{0}g_{n,n}}\begin{bmatrix} {}^{0}g_{n,0} \\ \vdots \\ {}^{0}g_{n,n-1} \end{bmatrix}\right\} = {}^{k}\boldsymbol{\delta}_n. \qquad (22)$$

The term between braces corresponds to ${}^{k}\mathbf{g}_{n-1}$, which is required in (20) to obtain ${}^{k}\mathbf{g}_n$. The algorithm for obtaining the inverse of the Hankel matrix thus starts with the first and last columns of the inverse of order $n$, and all remaining columns of the inverse are generated recursively. The steps of the algorithm to compute the columns of the inverse of the Hankel matrix are given below; $3(n^2 + n - 2)/2$ MDO are required.

Initialization: solve the Hankel system

$$R_{n+1}\begin{bmatrix} {}^{0}g_{n,0} & f_{n,n} \\ \vdots & \vdots \\ {}^{0}g_{n,n} & f_{n,0} \end{bmatrix} = \begin{bmatrix} 1 & \mathbf{0}_n \\ \mathbf{0}_n & E_{f,n} \end{bmatrix}, \qquad {}^{n}\mathbf{g}_n = \left(\frac{f_{n,n}}{E_{f,n}}, \ldots, \frac{f_{n,0}}{E_{f,n}}\right)^T$$



DO k= n-1, 2  k+1 g

n,0

k

gn−1 =  





1 gn

k





 .. − . k+1 gn,n−1 

1 = 0  −  k gn−1

k

k+1

 0 g

n,0





0

gn,n  .  ..   0g n,n 0 gn,n−1

gn−1,n−1 − kg n,n

n

gn,k  

n

gn

  

ENDDO
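As a small worked illustration of the recursion (ours, not part of the original text), take $n = 2$ and $(r_0, \ldots, r_4) = (1, 2, 3, 4, 6)$, so that

$$R_3 = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 3 & 4 \\ 3 & 4 & 6 \end{bmatrix}, \qquad {}^{0}\mathbf{g}_2 = \begin{bmatrix} -2 \\ 0 \\ 1 \end{bmatrix}, \qquad {}^{2}\mathbf{g}_2 = \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}.$$

For $k = 1$, the first step gives ${}^{1}\mathbf{g}_1 = (1, -2)^T - \frac{1}{1}(-2, 0)^T = (3, -2)^T$, and since ${}^{1}g_{1,1} - {}^{2}g_{2,1} = -2 - (-2) = 0$, the second step gives ${}^{1}\mathbf{g}_2 = (0, 3, -2)^T$, which is indeed the middle column of $R_3^{-1}$.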

6 Conclusions

We have presented Levinson-type algorithms to solve Hankel and Vandermonde systems of equations. These types of systems are related to the LS problem of polynomial fitting, which has many important applications in geophysics. The algorithm presented for obtaining the LS polynomial fit requires only $3n^2 + 9n + 3$ MDO and is faster than the propagating algorithm [5], which requires $(9n^2 + n - 10)/2$ MDO for solving the NE plus $n^2/2$ multiplications to compute the MTSE. Moreover, at each polynomial order the MTSE is obtained as an output of our algorithm. Furthermore, another algorithm which directly solves an $(m \times n)$ Vandermonde system was developed; $5mn + n^2 + 2n - 3m$ MDO are required to generate the orthogonal $Q$ matrix and the Cholesky factor $F$ such that $XF = Q$. We also derived a very compact and fast algorithm for obtaining the inverse of the Hankel matrix: $2n^2 + 4n + 1$ MDO are required for the computation of the first and last columns of the inverse, and $3(n^2 + n - 2)/2$ MDO are required to generate the remaining columns.

Finally, the algorithm presented to compute the coefficients for LS polynomial fitting can be applied to solve nonsymmetric Toeplitz systems. As may be verified, when one multiplies the symmetric Hankel matrix $R$ by the reverse identity matrix $J$, the result is a nonsymmetric Toeplitz matrix $\bar{R}$ $(\bar{R} = RJ)$. Consequently, the reverse of the solution vector of the Hankel system corresponds to the solution vector of the nonsymmetric Toeplitz system $(\bar{\mathbf{h}} = J\mathbf{h})$. For the same reason, the algorithm for obtaining the inverse of a symmetric Hankel matrix can be applied to generate the inverse of nonsymmetric Toeplitz matrices.
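As a concrete illustration of this remark (ours, for clarity), for a $3 \times 3$ Hankel matrix,

$$\bar{R} = RJ = \begin{bmatrix} r_0 & r_1 & r_2 \\ r_1 & r_2 & r_3 \\ r_2 & r_3 & r_4 \end{bmatrix}\begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix} = \begin{bmatrix} r_2 & r_1 & r_0 \\ r_3 & r_2 & r_1 \\ r_4 & r_3 & r_2 \end{bmatrix},$$

which is Toeplitz but not symmetric. Since $JJ = I$, $R\mathbf{h} = \mathbf{c}$ implies $\bar{R}(J\mathbf{h}) = RJJ\mathbf{h} = \mathbf{c}$; the nonsymmetric Toeplitz system is therefore solved by the reversed vector $\bar{\mathbf{h}} = J\mathbf{h}$.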

7 Acknowledgements

Special thanks are due to Dr. Arthur B. Weglein and Dr. Olivar L. Lima for their constructive suggestions on this manuscript. We also wish to express our gratitude to PPPG/UFBA and the Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq for their generous support. T.J.U. also wishes to gratefully acknowledge the support of NSERC grant # 581804.

8 References

[1] A. F. Gangi and S. J. Yang, “Traveltime curves for reflections in dipping layers,” Geophysics, vol. 41, pp. 425-440, 1976.

[2] D. R. Richard, “Correction of systematic error in magnetic surveys: An application of ridge regression and sparse matrix theory,” Geophysics, vol. 50, pp. 1721-1731, 1985.

[3] B. Ursin and T. Dahal, “Least-squares estimation of reflectivity polynomials,” 60th SEG Meeting, San Francisco, Expanded Abstracts, pp. 1069-1071, 1990.

[4] A. Ralston, A First Course in Numerical Analysis, New York, McGraw-Hill Book Co., Inc., pp. 228-260, 1965.

[5] A. F. Gangi and J. N. Shapiro, “A propagating algorithm for determining nth-order polynomial least-squares fits,” Geophysics, vol. 42, pp. 1265-1276, 1977.

[6] M. J. Porsani, T. J. Ulrych, J. Pessoa, W. S. Leaney and O. G. Jensen, “Extended Yule-Walker equations, non-white deconvolutions and roots of polynomials,” 59th Annual International SEG Meeting, Dallas, Texas, pp. 1198-1202, 1989.

[7] E. Eisner and G. Hampson, “Decomposition into minimum and maximum phase components,” Geophysics, vol. 55, pp. 897-901, 1990.

[8] C. J. Demeure, “Fast QR factorization of Vandermonde matrices,” Linear Algebra and its Applications, pp. 165-194, 1989.

[9] M. J. Porsani, “Desenvolvimento de Algoritmos tipo-Levinson para o Processamento de Dados Sísmicos,” Ph.D. dissertation, PPPG/UFBA, Salvador, Bahia, Brazil, 1986.

[10] W. F. Trench, “An algorithm for the inversion of finite Hankel matrices,” J. Soc. Indust. Appl. Math., vol. 13, pp. 1102-1107, 1965.

[11] J. L. Phillips, “The triangular decomposition of Hankel matrices,” Mathematics of Computation, vol. 25, pp. 599-602, 1971.

[12] N. Levinson, “The Wiener RMS (root mean square) error criterion in filter design and prediction,” J. Math. Phys., vol. 25, pp. 261-278, 1947.

[13] M. J. Porsani and T. J. Ulrych, “Levinson-type extensions for non-Toeplitz systems,” IEEE Trans. Signal Processing, vol. 39, pp. 366-375, 1991.

[14] J. P. Burg, “Maximum entropy spectral analysis,” Ph.D. dissertation, Dept. of Geophysics, Stanford Univ., Stanford, CA, 1975.

[15] M. Morf, B. Dickinson, T. Kailath and A. Vieira, “Recursive solution of covariance equations for linear prediction,” IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-25, pp. 429-433, 1977.

[16] B. Friedlander, M. Morf, T. Kailath and L. Ljung, “New inversion formulas for matrices classified in terms of their distance from Toeplitz matrices,” Linear Algebra Appl., vol. 27, pp. 31-60, 1979.

[17] L. Marple, “A new autoregressive spectrum analysis algorithm,” IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-28, pp. 233-243, Aug. 1980.

[18] D. Manolakis, N. Kalouptsidis and G. Carayannis, “Fast algorithms for discrete-time Wiener filters with optimum lag,” IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-31, pp. 168-179, 1983.

[19] P. Delsarte and Y. V. Genin, “The split Levinson algorithm,” IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, pp. 470-478, June 1986.

[20] P. C. Rialan and L. L. Scharf, “Fast algorithms for computing QR and Cholesky factors of Toeplitz operators,” IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-36, pp. 123-142, 1988.

[21] H. Lev-Ari, T. Kailath and J. M. Cioffi, “Adaptive recursive-least-squares lattice and transversal filters for continuous-time signal processing,” IEEE Trans. Circuits Syst., vol. 39, no. 2, pp. 81-88, Feb. 1992.

9 APPENDIX A - Fortran Subroutines

      subroutine LSpoly(n,f,g,h,Ef,Eh,r,yx,yy,rxy)
C+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
c     Calculates the coefficients for LS polynomial fitting
c     Method: Levinson's principle applied to Hankel systems
c     input parameters:
c     n  = number of coefficients of the polynomial
c          ex: n=3, y = h(1) + h(2)*x + h(3)*x**2
c     r  = vector with the elements of the first and last lines
c          of the Hankel matrix. Ex: n=3, r(1), ..., r(5).
c     yx = vector with the cross-correlation elements
c     yy = energy of the observations
c     | yy    yx(1) yx(2) yx(3)| | 1    |   | Eh(3) |
c     | yx(1) r(1)  r(2)  r(3) | | h(1) | = | 0     |
c     | yx(2) r(2)  r(3)  r(4) | | h(2) |   | 0     |
c     | yx(3) r(3)  r(4)  r(5) | | h(3) |   | 0     |
c     output parameters:
c     f  = backward MEO
c     g  = first column of the inverse of the coef. matrix (Hankel)
c     h  = coefficients for LS polynomial fitting
c     Eh = error energy associated with the fitting
c     rxy= vector with the correlation coefficients (j=1,...,n-1)
C+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
      dimension r(1),f(1),g(1),h(1),yx(1),rxy(1)
      Eh0=yy
      h(1)=-yx(1)/r(1)
      Eh=Eh0+h(1)*yx(1)
      f(1)=1.
      g(1)=1./r(1)
      Ef=r(1)
      do 1 j=1,n-1
      j1=j+1
      deltg=r(j1)*g(1)
      do 2 i=2,j
    2 deltg=deltg+g(i)*r(j+i)
      beta=-Ef/deltg
      f(j1)=f(j)
      do 3 i=1,j-1
    3 f(j1-i)=f(j-i)+beta*g(i)
      f(1)=beta*g(j)
      Ef=r(j1)
      do 4 i=1,j
    4 Ef=Ef+f(i)*r(j1+j1-i)
      gamma=-deltg/Ef
c     extend g with a zero before the order update (eq. 11)
      g(j1)=0.
      do 5 i=1,j1
    5 g(i)=g(i)+gamma*f(j-i+2)
      delth=yx(j1)
      do 6 i=1,j
    6 delth=delth+h(i)*r(j+i)
      alf=-delth/Ef
c     extend h with a zero before the order update (eq. 6)
      h(j1)=0.
      do 7 i=1,j1
    7 h(i)=h(i)+alf*f(j-i+2)
      Eh=Eh+alf*delth*f(1)
      rxy(j)=1.-Eh/Eh0
    1 continue
      do 8 i=1,n
    8 h(i)=-h(i)
      return
      end
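The following short driver is an illustrative usage sketch (ours, not part of the original appendix): it generates samples of a known cubic, builds the normal-equation quantities with the bldneq sketch of section 3, and calls LSpoly. Single precision is used for brevity; as noted in section 3, double precision is recommended in practice.

      program tstfit
c     Illustrative driver (not from the paper): fit a cubic
c     (n=4 coefficients) to m=50 samples of y = 1 + 2x - x**3;
c     the printed h(1),...,h(4) should approach 1, 2, 0, -1.
      dimension x(50),y(50),r(7),yx(4),f(5),g(5),h(5),rxy(5)
      m=50
      n=4
      do 1 i=1,m
      x(i)=0.1*i
    1 y(i)=1.+2.*x(i)-x(i)**3
      call bldneq(m,n,x,y,r,yx,yy)
      call LSpoly(n,f,g,h,Ef,Eh,r,yx,yy,rxy)
      write(*,*) (h(i),i=1,n)
      end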


      subroutine Hankel(n,f,g,Ef,r)
C+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
c     Calculates the backward MEO (which, normalized by Ef, gives
c     the last column of the inverse) and the first column of the
c     inverse of the symmetric Hankel matrix
c     Method: Levinson's principle applied to Hankel systems
c     input parameters:
c     n  = dimension of the Hankel matrix
c     r  = vector with the first line of the Hankel matrix
c          followed by the elements of the last line
c          Ex: n=3 ( r(1), ..., r(3), r(4), r(5) ).
c     | r(1) r(2) r(3)| | g(1) f(3)|   | 1  0  |
c     | r(2) r(3) r(4)| | g(2) f(2)| = | 0  0  |
c     | r(3) r(4) r(5)| | g(3) f(1)|   | 0  Ef |
c     output parameters:
c     f  = backward MEO
c     g  = first column of the inverse matrix
c     Ef = error energy for the backward MEO
C+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
      dimension r(1),f(1),g(1)
      f(1)=1.
      g(1)=1./r(1)
      Ef=r(1)
      do 1 j=1,n-1
      j1=j+1
      deltg=r(j1)*g(1)
      do 2 i=2,j
    2 deltg=deltg+g(i)*r(j+i)
      beta=-Ef/deltg
      f(j1)=f(j)
      do 3 i=1,j-1
    3 f(j1-i)=f(j-i)+beta*g(i)
      f(1)=beta*g(j)
      Ef=r(j1)
      do 4 i=1,j
    4 Ef=Ef+f(i)*r(j1+j1-i)
      gamma=-deltg/Ef
c     extend g with a zero before the order update (eq. 11)
      g(j1)=0.
      do 5 i=1,j1
    5 g(i)=g(i)+gamma*f(j-i+2)
    1 continue
      return
      end


      subroutine invhank(n,g,f,Ef,H)
C+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
c     Generates the inverse of the symmetric Hankel matrix
c     input parameters:
c     n  = dimension of the Hankel matrix
c     f  = backward MEO
c     g  = first column of the inverse matrix
c     Ef = error energy for the backward MEO
c     (g, f, Ef) -- are outputs of subroutine Hankel
c     output parameters:
c     H  = inverse of the Hankel matrix (n x n)
C+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
      dimension f(1),g(1),H(n,1)
c     first and last columns of the inverse (section 5)
      do 1 i=1,n
      H(i,1)=g(i)
    1 H(i,n)=f(n-i+1)/Ef
c     first and last rows follow by symmetry
      do 2 i=1,n-1
      H(n,i)=H(i,n)
    2 H(1,i)=H(i,1)
c     generate the remaining columns recursively, eqs. (20)-(22)
      do 3 k=n-1,2,-1
      hkn=H(n-1,k+1)-(H(n,k+1)/H(n,1))*H(n-1,1)
      deltk=(hkn-H(k,n))/H(n,n)
      do 4 i=2,k
      hk=H(i-1,k+1)-(H(n,k+1)/H(n,1))*H(i-1,1)
      H(i,k)=hk-deltk*H(i,n)
    4 H(k,i)=H(i,k)
    3 continue
      return
      end
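A minimal usage sketch (ours, not part of the original appendix): invert the 3 x 3 Hankel matrix defined by r = (1, 2, 3, 4, 6) by calling Hankel and then invhank (for this matrix the exact inverse has rows (-2, 0, 1), (0, 3, -2) and (1, -2, 1)):

      program tstinv
c     Illustrative driver (not from the paper): inverts the 3x3
c     Hankel matrix with rows (1,2,3), (2,3,4), (3,4,6).
      dimension r(5),f(4),g(4),Hinv(3,3)
      data r/1.,2.,3.,4.,6./
      n=3
      call Hankel(n,f,g,Ef,r)
      call invhank(n,g,f,Ef,Hinv)
      do 1 i=1,n
    1 write(*,*) (Hinv(i,j),j=1,n)
      end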


      subroutine QFAC(m,n,x,F,Q,Ef)
C+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
C     purpose: To generate (i) the orthogonal Q factorization of the
C     Vandermonde matrix X = [1 x x**2 .... x**(n-1)], (ii) the
C     Cholesky factor F, such that XF = Q, and (iii) the elements
C     Ef_j of the diagonal matrix Q**T Q, equal to the energy of
C     column j of Q
C     input parameters:
C     m,n  = number of rows and columns of the Vandermonde matrix
C     x(i) i=1, ..., m  independent variable array
C     output: Q(m,n)  orthogonal matrix
C             F(n,n)  upper triangular matrix (only the upper
C                     triangle is defined)
C             Ef(i), i=1, ..., n  elements of the diagonal matrix
C     work arrays: efj(i), egj(i), elgj(i), xj(i) i=1, ..., m
C                  fa(i), ga(i), i=1, ..., n
C     (fixed local dimensions assume m <= 100 and n <= 20)
C+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
      dimension efj(100),egj(100),elgj(100),xj(100),fa(20),ga(20)
      dimension x(1),F(n,1),Q(m,1),Ef(1)
      deltf=m
      Ef(1)=deltf
      fa(1)=1.
      ga(1)=1./Ef(1)
      F(1,1)=1.
      do 1 i=1,m
      xj(i)=1.
      efj(i)=1.
      egj(i)=1./Ef(1)
    1 Q(i,1)=efj(i)
      do 2 j=1,n-1
      deltg=0.
      do 3 i=1,m
      elgj(i)=x(i)*egj(i)
    3 deltg=deltg+xj(i)*elgj(i)
      beta=-deltf/deltg
      fa(j+1)=fa(j)
      do 4 i=1,j-1
    4 fa(j+1-i)=fa(j-i)+beta*ga(i)
      fa(1)=beta*ga(j)
      deltf=0.
      do 5 i=1,m
      efj(i)=efj(i)+beta*elgj(i)
      xj(i)=xj(i)*x(i)
    5 deltf=deltf+xj(i)*efj(i)
      Ef(j+1)=deltf*fa(1)
      gamma=-deltg/deltf
c     extend ga with a zero before the order update (eq. 11)
      ga(j+1)=0.
      do 6 i=1,j+1
      ga(i)=ga(i)+gamma*fa(j+2-i)
    6 F(i,j+1)=fa(j+2-i)
      do 7 i=1,m
      egj(i)=egj(i)+gamma*efj(i)
    7 Q(i,j+1)=efj(i)
    2 continue
      return
      end
