Computing the block factorization of complex Hankel matrices

S. Belhaj

Laboratoire de Mathématiques, CNRS UMR 6623, UFR des Sciences et Techniques, Université de Franche-Comté, 25030 Besançon cedex, France & Laboratoire LAMSIN, École Nationale d'Ingénieurs de Tunis, BP 37, 1002 Tunis Belvédère, Tunisie. E-mail: [email protected]

Abstract: In this paper, we present an algorithm for finding an approximate block diagonalization of complex Hankel matrices. Our method is based on inversion of an upper triangular Toeplitz matrix, computed by simple forward substitution. We also consider an approximate block diagonalization of complex Hankel matrices via Schur complementation. An application of our algorithm to computing the approximate polynomial quotients and remainders appearing in the Euclidean algorithm is also given. We have implemented our algorithms in Matlab. Numerical examples are included; they show the effectiveness of our strategy.

Key words: Hankel matrix, Block diagonalization, Triangular Toeplitz matrix, Schur complementation, Euclidean algorithm

AMS subject classifications: 15A23, 15B05, 65Fxx, 11Cxx

1 Introduction

A square Hankel matrix is of the form

    H = [ h1    h2    ···   hn
          h2    h3    ···   hn+1
          ⋮     ⋮     ⋱     ⋮
          hn    hn+1  ···   h2n−1 ],                          (1)

in which the entries {hk}, k = 1, ..., 2n − 1, are complex-valued scalars and all elements along the same anti-diagonal are identical. This matrix is symmetric (but not Hermitian if complex): Hᵗ = H. Throughout this paper, the notation Mᵗ denotes the transpose of M, not the conjugate transpose. Hankel matrices play an important role in signal processing [11]. In fact, the scalars defined in (1) represent a signal generated by a sum of a finite number r of exponentials,

    hk = Σ_{l=1}^{r} λlᵏ dl,        k = 1, ..., 2n − 1,

where λl and dl are the underlying modes and weights, respectively. When the signal is corrupted by noise, "perturbed" Hankel matrices are produced. In this paper, we consider the problem of reducing a "perturbed" Hankel matrix to block diagonal form, and we discuss the complex case. There is an extensive literature on block factorization of a symmetric matrix with Hankel structure; for some references, see Phillips [15], Rissanen [16], Kung [12], Gragg and Lindquist [9], Pal and Kailath [14], Bini and Pan [7], Bultheel and Van Barel [6], Ben Atti and Diaz-Toca [4] and, recently, Belhaj [1]. Although the algorithm described in [1] seems to be the first to study the approximate setting, it does not preserve the block diagonal form in the complex case. Indeed, when the coefficients of the "perturbed" Hankel matrix are complex, we lose the block diagonalization. Consequently, we reconsider this problem in the complex case. Thus, we present an algorithm for finding an approximate block diagonalization of complex Hankel matrices in which the successive transformation matrices are upper triangular Toeplitz matrices. Such a diagonalization returns an approximate block diagonal matrix

    Dε = Aᵗ H A = [ lH1     Θ1,2   ···    Θ1,n
                    Θ2,1    lH2    ⋱      ⋮
                    ⋮       ⋱      ⋱      Θn−1,n
                    Θn,1    ···    Θn,n−1 lHn ],              (2)

where every block lHj, j = 1, 2, ..., n, is an approximate lower Hankel matrix, all entries of the Θj,k, j, k = 1, 2, ..., n with j ≠ k, are very close to zero, and A is an approximate upper triangular matrix. The new algorithm is based on revisiting Belhaj's method [1] for computing an approximate block diagonalization of real Hankel matrices. We also compare our approach to an approximate factorization variant of the customary fast method based on Schur complementation (see [9], [7] and [6], for instance) adapted to complex Hankel matrices. In addition, we apply our algorithm to compute the approximate polynomial quotients and remainders appearing in the Euclidean algorithm. Our paper is organized as follows: classical results are introduced in Section 2. The approximate block diagonalization of complex Hankel matrices is described in Section 3. The classical approximate block diagonalization of complex Hankel matrices based on Schur complementation is given in Section 4, followed in Section 5 by an illustrative numerical comparison of our process with the classical one and an application of our algorithm to the approximate polynomial quotients and remainders of the Euclidean algorithm. Finally, a summary and future work are presented in Section 6.

2 Classical results

We will give in this section a brief presentation of Toeplitz and Hankel matrices.

2.1 Toeplitz and Hankel matrices

Definition 2.1 T = (ti,j) is a Toeplitz matrix if ti,j = ti+k,j+k for all positive k, that is, if the entries of T are invariant under shifts along the diagonal direction. A Toeplitz matrix is therefore completely defined by its first row and first column.

Definition 2.2 H = (hi,j) is a Hankel matrix if hi,j = hi−k,j+k for all positive k, that is, if the entries of H are invariant under shifts along the anti-diagonal direction. A Hankel matrix is completely defined by its first row and last column.

A Toeplitz or Hankel matrix of size n is thus completely specified by 2n − 1 parameters and requires less storage space than an ordinary dense matrix. Moreover, many computations with Toeplitz or Hankel matrices can be performed faster; this is the case, for instance, for the sum and the product by a scalar. Less trivial examples are given by the following results (see [7]):

Proposition 2.1 The multiplication of a Hankel or Toeplitz matrix of size n by a vector can be reduced to the multiplication of two polynomials of degree at most 2n and performed with a computational cost of O(n log n).


Proposition 2.2 A nonsingular linear system of n equations with a Hankel or Toeplitz matrix can be solved with a computational cost of O(n log² n).
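To make Proposition 2.1 concrete, the following MATLAB sketch (our illustration, not code from the paper; the function name hankel_matvec is ours) multiplies the n × n Hankel matrix H(S), with first row S(1:n) and last column S(n:2n−1), by a vector through one FFT-based convolution:

    function y = hankel_matvec(S, x)
    % y = H(S)*x in O(n log n): (H*x)(i) = sum_j S(i+j-1)*x(j), which is
    % a single linear convolution of S with the reversed vector x.
    n = length(x);
    m = 3*n - 2;                                     % full convolution length
    c = ifft(fft(S(:), m) .* fft(flipud(x(:)), m));  % conv(S, flip(x))
    y = c(n:2*n-1);                                  % the n central coefficients
    end

For instance, with S = [1 2 3] and x = [1; 1] the call returns [3; 5], which equals [1 2; 2 3]·[1; 1].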

2.2 Inversion of a complex triangular Toeplitz matrix

To compute the inverse of a real upper (or lower) triangular Toeplitz matrix Tn = uT(t0, t1, ..., tn−1) quickly and efficiently, Lin, Ching and Ng [13] propose to revise Bini's algorithm [5], embedding the n × n triangular Toeplitz matrix into a triangular Toeplitz matrix of size 2n. Although this technique seems to be the fastest in the real case, it cannot be applied in the complex case. Instead, we propose to compute the inverse of an upper (or lower) triangular Toeplitz matrix by simple forward substitution, which requires about n(n + 1) arithmetic operations [10]. The fast algorithm to compute the inverse of a complex triangular Toeplitz matrix is as follows:

Algorithm 2.1 (Inversion via forward substitution) Given a list t = [t0, t1, ..., tn−1], this algorithm computes the list s defining the inverse of an upper (or lower) triangular Toeplitz matrix by simple forward substitution.

1. Set n = length(t) and s = zeros(n, 1)
2. Set s(1) = 1/t(1)
3. For k = 2 : n
       s(k : n) = s(k : n) + s(k − 1) ∗ t(2 : n − k + 2)
       s(k) = −s(k)/t(1)
   EndFor
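A direct MATLAB transcription of Algorithm 2.1 (a sketch; the function name inv_triang_toeplitz is ours):

    function s = inv_triang_toeplitz(t)
    % Forward substitution inverse of a triangular Toeplitz matrix
    % (Algorithm 2.1); valid for complex entries.  The returned list s
    % satisfies uT(s) = uT(t)^(-1).
    t = t(:);
    n = length(t);
    s = zeros(n, 1);
    s(1) = 1 / t(1);
    for k = 2:n
        % fold in the convolution contribution of the entry fixed at step k-1
        s(k:n) = s(k:n) + s(k-1) * t(2:n-k+2);
        s(k)   = -s(k) / t(1);
    end
    end

For instance, inv_triang_toeplitz([1 2 3]) returns [1; −2; 1], and indeed uT(1, −2, 1) uT(1, 2, 3) = I.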



Moreover, we will use the following notations:

• H(S) ∈ R^(n×n) denotes the Hankel matrix associated to a list S of length 2n − 1: the first row is given by the first n terms of S and the last column by the last n terms of S.
• lH(S) ∈ R^(n×n) denotes the Hankel triangular matrix (with respect to the anti-diagonal) associated to a list S of length n such that the last column is defined by S.
• uH(S) ∈ R^(n×n) denotes the Hankel triangular matrix (with respect to the anti-diagonal) associated to a list S of length n such that the first column is defined by S.
• H(S; m; n) ∈ R^(m×n) denotes the Hankel matrix associated to a list S of length m + n − 1: the first row is given by the first n terms of S and the last column by the last m terms of S.
• T(S; m; n) ∈ R^(m×n) denotes the Toeplitz matrix associated to a list S of length m + n − 1: the first row is given by the first m terms of S and the last column by the last n terms of S.
• uT(S) ∈ R^(n×n) denotes the upper triangular Toeplitz matrix associated to a list S such that the first row is defined by S.
• lT(S) ∈ R^(n×n) denotes the lower triangular Toeplitz matrix associated to a list S such that the last row is defined by S.
• Let p ∈ N and let Σp = [εj,k] ∈ R^(p×p), where all entries of Σp are zero except εj+k,j = εk for j, k = 1, 2, ..., p:

    Σp = [ 0      ···   ···   ···   0
           ε1     0                 ⋮
           ε2     ε1    0           ⋮
           ⋮      ⋱     ⋱     ⋱     ⋮
           εp−1   ···   ε2    ε1    0 ]

• Jp = lH(1, 0, ..., 0) with p − 1 zeros, p ∈ N (the exchange matrix of order p).
• Given P ∈ R^(n×m), P̃ = Jm Pᵗ Jn.
• T(S; m; n) = Jm H(S; m; n).
• uT(S) = Jl lH(S), where l is the length of S.
• Let a ∈ R and µ > 0; V(a, µ) = (a − µ, a + µ) is a neighborhood of a.
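For concreteness, Jp and Σp are one line each of MATLAB (a sketch with illustrative values; the variable names are ours):

    p = 4;
    eps_list = [1e-13, 2e-13, -1e-13];              % illustrative eps_1, ..., eps_{p-1}
    Jp = flipud(eye(p));                            % Jp = lH(1, 0, ..., 0)
    Sigma_p = toeplitz([0, eps_list], zeros(1, p)); % first column (0, eps_1, ..., eps_{p-1})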

3 Block factorization of Hankel matrices via Toeplitz matrices

Our main result is presented in this section.

3.1 The algorithm for a real Hankel matrix

We first recall the algorithm for the approximate block diagonalization of a real Hankel matrix presented in [1].

Lemma 3.1 Let n ∈ N* and let h = H(h1, ..., h2n−1) be a square Hankel matrix of order n. Suppose that hj = εj with εj ∈ V(0, µ) for j = 1, 2, ..., p − 1 and hp ∉ V(0, µ). Then h has the form

    h = H(ε1, ..., εp−1, hp, ..., h2n−1) = [ h11   h12
                                             h21   h22 ],     (3)

where

    h11 = lH(hp, ..., h2p−1) + Jp Σp,      h12 = H(hp+1, ..., hn+p−1; p; n − p),
    h21 = h12ᵗ,                            h22 = H(h2p+1, ..., h2n−1).

Under these conditions, we can construct from h the following matrices:

• A square lower Hankel triangular matrix H of order 2n − p,

    H = lH(hp, ..., h2n−1) = [ 0      0      H13
                               0      h11    h12
                               H31    h12ᵗ   h22 ],

  where H31 = H13 = lH(hp, ..., hn−1).

• A square upper triangular Toeplitz matrix T,

    T = J2n−p H = uT(hp, ..., h2n−1) = [ t11   t12   t13
                                         0     t22   t23
                                         0     0     t33 ],

  where t11 = t33 = uT(hp, ..., hn−1), t22 = Jp h11, t13 = Jn−p h22, t12 = Jn−p h12ᵗ, and t23 = Jp h12.

Lemma 3.2 Let T be a nonsingular upper triangular Toeplitz matrix with nonzero diagonal. Then T⁻¹ = uT(µ1, ..., µ2n−p) and it has the block decomposition

    T⁻¹ = [ (T⁻¹)11   (T⁻¹)12   (T⁻¹)13          [ t11⁻¹   P̃       Q
            0         (T⁻¹)22   (T⁻¹)23     =      0       t22⁻¹   P
            0         0         (T⁻¹)33 ]          0       0       t11⁻¹ ],

where P = T(µ2, ..., µn; p; n − p), P̃ = Jn−p Pᵗ Jp, and

    t22 P + t23 t11⁻¹ = 0(p, n−p),      h11 P + h12 t11⁻¹ = 0(p, n−p),
    t11 P̃ + t12 t22⁻¹ = 0(n−p, p),      t11 Q + t12 P + t13 t11⁻¹ = 0(n−p, n−p).

Then we have the following result.

Theorem 3.1 Let h = H(ε1, ..., εp−1, hp, ..., h2n−1), where εj ∈ V(0, µ) for j = 1, 2, ..., p − 1, with hp ∉ V(0, µ), and let

    T = uT(hp, ..., h2n−1),      T⁻¹ = uT(µ1, ..., µ2n−p),
    t = uT(hp, ..., hn+p−1),     t⁻¹ = uT(µ1, ..., µn).

Then

    h′ = (t⁻¹)ᵗ h t⁻¹ = [ h′11    0′
                          (0′)ᵗ   h′22 ],                     (4)

where

    h′11 = lH(µ1, ..., µp) + (t22⁻¹)ᵗ Jp Σp t22⁻¹,            (5)
    h′22 = −H(µp+2, ..., µ2n−p) + Pᵗ Jp Σp P,                 (6)
    0′ = (t22⁻¹)ᵗ Jp Σp P.                                    (7)

Remark 3.1 Theorem 3.1 gives an approximate reduction of a real Hankel matrix; if εj = 0, Theorem 3.1 yields the exact case. To iterate this result, h′22 must be a Hankel matrix, but while −H(µp+2, ..., µ2n−p) is a Hankel matrix, Pᵗ Jp Σp P is only symmetric. If we choose εj very close to zero, we can conclude that Σp ≈ 0p, and the process converges to the exact case. To solve this problem, we can choose h′22 = −H(µp+2, ..., µ2n−p) + Θ, where Θ is the Hankel matrix built from the first column and the last row of Pᵗ Jp Σp P.
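In MATLAB, the matrix Θ of Remark 3.1 can be formed directly with the built-in hankel function; a one-line sketch, writing M for Pᵗ Jp Σp P (the symbol M here is ours):

    Theta = hankel(M(:,1), M(end,:));   % first column and last row of M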

3.2 The algorithm for a complex Hankel matrix

We revise Theorem 3.1 in order to obtain an algorithm for the approximate block diagonalization of a complex Hankel matrix.

Corollary 3.1 Let h = H(ε1, ..., εp−1, hp, ..., h2n−1), where εj = εj^(r) + iεj^(i) with εj^(r) ∈ V(0, µ) and εj^(i) ∈ V(0, µ) for j = 1, 2, ..., p − 1, with hp ∉ V(0, µ), and let

    T = uT(hp, ..., h2n−1),      T⁻¹ = uT(µ1, ..., µ2n−p),
    t = uT(hp, ..., hn+p−1),     t⁻¹ = uT(µ1, ..., µn).

Then

    h′ = (t⁻¹)ᵗ h t⁻¹ = [ h′11    0′
                          (0′)ᵗ   h′22 ],                     (8)

where

    h′11 = lH(µ1, ..., µp) + (t22⁻¹)ᵗ Jp Σp t22⁻¹,            (9)
    h′22 = −H(µp+2, ..., µ2n−p) + Pᵗ Jp Σp P,                 (10)
    0′ = (t22⁻¹)ᵗ Jp Σp P.                                    (11)

PROOF. We apply the same proof as in [1].

To compute the approximate block diagonalization of a complex Hankel matrix, we apply at every step the following algorithm, which counts the size of the blocks as defined in (2).

Algorithm 3.1 (Block-size count) Given a list S = [ε1, ..., εp−1, hp, ..., h2n−1] where εj = εj^(r) + iεj^(i), εj^(r) ∈ V(0, µ) and εj^(i) ∈ V(0, µ) for j = 1, 2, ..., p − 1, with hp ∉ V(0, µ), this algorithm counts the block size c as defined in (2), with tolerance ε, a small positive number, 0 < ε ≪ 1.

1. Set c = 0
2. For k = 1 : length(S)
       If (|real(S(k))| ≤ ε and |Im(S(k))| ≤ ε)
           c = c + 1
       Else
           Break
       EndIf
   EndFor

Remark 3.2 Algorithm 3.1 avoids the SVD-based techniques for counting block sizes [10], which require more arithmetic operations than our approach.

We then obtain the following fast algorithm for the block factorization:

Algorithm 3.2 (Approximate block diagonalization of a complex Hankel matrix via Toeplitz matrices) Given a list S = [ε1, ..., εp−1, hp, ..., h2n−1] where εj = εj^(r) + iεj^(i), εj^(r) ∈ V(0, µ) and εj^(i) ∈ V(0, µ) for j = 1, 2, ..., p − 1, with hp ∉ V(0, µ), this algorithm computes the approximate block diagonalization of the complex Hankel matrix via upper triangular Toeplitz matrices.

1. Find p by calling Algorithm 3.1
2. Define an upper triangular Toeplitz matrix t = uT(hp, ..., hn+p−1)
3. Compute t⁻¹ via Algorithm 2.1
4. Compute h′ = (t⁻¹)ᵗ h t⁻¹
5. Set h′11 = h′(1 : p, 1 : p) and h′22 = h′(p + 1 : n, p + 1 : n)
6. Recursively apply Algorithm 3.2 to S = [h′22(1 : n − p, 1); h′22(n − p, 2 : n − p)], obtaining (8).
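The following MATLAB sketch renders Algorithms 3.1 and 3.2 together (our own rendering, not the author's code; it reuses inv_triang_toeplitz from Section 2.2 and drops the near-zero off-diagonal blocks, so it returns the block diagonal part of Dε in (2)):

    function D = block_diag_hankel(S, tol)
    % Approximate block diagonalization of the complex Hankel matrix H(S).
    % S: defining list of length 2n-1 (complex allowed); tol: tolerance.
    S = S(:);
    n = (length(S) + 1) / 2;
    c = 0;                               % Algorithm 3.1: leading near-zero run
    while c < length(S) && abs(real(S(c+1))) <= tol && abs(imag(S(c+1))) <= tol
        c = c + 1;
    end
    p = c + 1;                           % index of the first significant entry
    h = hankel(S(1:n), S(n:2*n-1));
    if p > n                             % no significant pivot: stop splitting
        D = h;
        return
    end
    s    = inv_triang_toeplitz(S(p:n+p-1));   % t^(-1) = uT(mu_1, ..., mu_n)
    tinv = toeplitz([s(1); zeros(n-1,1)], s);
    hp   = tinv.' * h * tinv;            % h' = (t^(-1))^t h t^(-1), transpose only
    if p == n
        D = hp;
        return
    end
    h22 = hp(p+1:n, p+1:n);
    S22 = [h22(:,1); h22(end, 2:end).']; % defining list of the trailing block
    D   = blkdiag(hp(1:p, 1:p), block_diag_hankel(S22, tol));
    end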

4 Block factorization of Hankel matrices via Schur complementation

Now the algorithm for the approximate block factorization of complex Hankel matrices via Schur complementation (see [9], [7] and [6]) is introduced. We also clarify the correlation between the classical approximate factorization method and the approximate diagonalization obtained with our approach.

4.1 The algorithm for a complex Hankel matrix

The classical Schur algorithm is one of the best known techniques to compute the decomposition

    h = LDLᵗ                                                  (12)

of a Hankel matrix. Then we have the following result.

Theorem 4.1 Let h be a complex symmetric matrix with the partition

    h = [ h11    h21ᵗ
          h21    h22 ],                                       (13)

where h11 = lH(h1, ..., hp) + Jp Σp, h21 ∈ C^((n−p)×p) and h22 ∈ C^((n−p)×(n−p)) with n > p. Consider the Schur complement of h11,

    hsc = h22 − h21 (lH(h1, ..., hp))⁻¹ h21ᵗ,

and the elimination matrix

    l = [ I(p×p)                       0(p×(n−p))
          h21 (lH(h1, ..., hp))⁻¹      I((n−p)×(n−p)) ].

Then

    l h′ lᵗ = h,   where   h′ = [ h′11    0′
                                  (0′)ᵗ   h′sc ]              (14)

and

    h′11 = h11,
    0′ = −Jp Σp (h21 (lH(h1, ..., hp))⁻¹)ᵗ,
    h′sc = hsc + h21 (lH(h1, ..., hp))⁻¹ Jp Σp (h21 (lH(h1, ..., hp))⁻¹)ᵗ.

PROOF. We apply the same proof as in [1].

The iteration of Theorem 4.1 defines the approximate block diagonal matrix Dε of (2), where every block is an approximate lower Hankel matrix. Thus, we propose the following algorithm.

Algorithm 4.1 (Approximate block factorization of a Hankel matrix via Schur complementation) Given a complex symmetric matrix h as defined in (13), this algorithm computes the reduction of h via Schur complementation.

1. Define a complex symmetric matrix h as in (13)
2. Set lh11 = lH(h1, ..., hp) and ut11 = uT(h1, ..., hp)
3. Compute (ut11)⁻¹ via Algorithm 2.1
4. Compute (lh11)⁻¹ = (ut11)⁻¹ Jp
5. Set h11 = h(1 : p, 1 : p), h21 = h(p + 1 : n, 1 : p) and h22 = h(p + 1 : n, p + 1 : n)
6. Compute hsc = h22 − h21 (lh11)⁻¹ h21ᵗ
7. Set l = [I(p, p), 0(p, n − p); h21 (lh11)⁻¹, I(n − p, n − p)]
8. Set invl = [I(p, p), 0(p, n − p); −h21 (lh11)⁻¹, I(n − p, n − p)]
9. Compute h′ = (invl) h (invl)ᵗ
10. Set h′11 = h′(1 : p, 1 : p) and h′sc = h′(p + 1 : n, p + 1 : n).
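A minimal MATLAB sketch of one elimination step of Algorithm 4.1 (our own rendering; h is the n × n complex Hankel matrix, hl = [h1, ..., hp] holds the defining entries of its leading block, and inv_triang_toeplitz is the routine from Section 2.2):

    p      = length(hl);
    n      = size(h, 1);
    Jp     = flipud(eye(p));                          % exchange matrix
    s      = inv_triang_toeplitz(hl);                 % (ut11)^(-1), Algorithm 2.1
    inv_lh = toeplitz([s(1); zeros(p-1,1)], s) * Jp;  % (lh11)^(-1) = (ut11)^(-1) Jp
    h21    = h(p+1:n, 1:p);
    invl   = [eye(p), zeros(p, n-p); -h21*inv_lh, eye(n-p)];
    hp     = invl * h * invl.';                       % transpose, not conjugate
    h11    = hp(1:p, 1:p);                            % leading block
    hsc    = hp(p+1:n, p+1:n);                        % approximate Schur complement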

5 Numerical examples

A numerical example is introduced in this section to show the execution steps of our processes. We also perform an experimental analysis which proves the accuracy and effectiveness, as well as the computational cost and running time, of our approximate block diagonalization with respect to the classical approximate factorization method.

• All algorithms were implemented in Matlab 7.0.4.287 (R2007A) and run on an Intel(R) Core(TM)2 CPU T5600 laptop with a 1.83GHz processor and 2046Mb of RAM.
• The lists S which construct the Hankel matrices are produced using a Maple file recovered from Diaz-Toca [4].

• A "perturbed" Hankel matrix is an "exact" Hankel matrix with k × 10⁻¹³ added to all its entries, where k = k^(r) + ik^(i) ∈ C and k^(r), k^(i) are taken randomly in (−1, 1).
• All algorithms could be applied to any randomly generated complex Hankel matrix of size n × n. In the examples, we introduce a complex Hankel matrix via a perturbation of another complex Hankel matrix, whose diagonalization with blocks of various sizes is known.
• The relative accuracies of the Schur approximate method and of our approximate method are, respectively (see the one-line MATLAB equivalents after this list):

    Error^R_Sc = ‖D_Sc approx − D_Sc‖₁ / ‖D_Sc‖₁,      Error^R_Our = ‖D_Our approx − D_Our‖₁ / ‖D_Our‖₁.      (15)

• The absolute accuracies of the Schur approximate method and of our approximate method are, respectively:

    Error^A_Sc = ‖D_Sc approx − D_Sc‖₁,      Error^A_Our = ‖D_Our approx − D_Our‖₁.      (16)

• Gd and Bd represent the numbers of Good and Bad perturbations among the 500 randomly selected perturbations, respectively.
• u(x) and v(x) are introduced via a perturbation, adding k × 10⁻¹³ (with k = k^(r) + ik^(i) ∈ C, k^(r), k^(i) taken randomly in (−1, 1)) to all coefficients of "exact" input polynomials whose sequence of polynomial quotients and remainders is "exactly" known.
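The accuracy measures (15)-(16) are one line each in MATLAB (a sketch; D_exact and D_approx stand for the exact and computed block diagonal matrices):

    errA = norm(D_approx - D_exact, 1);                     % absolute accuracy, (16)
    errR = norm(D_approx - D_exact, 1) / norm(D_exact, 1);  % relative accuracy, (15)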

5.1 Comparison steps

We compare our approach with the classical approximate block factorization of complex Hankel matrices (Algorithm 4.1). We also describe the differences and the similarities between these two methods via an example. Consider the following list S of length 13, which defines a Hankel matrix:

S = [0, 16 − 16 i, − 61 + 16 i, 16 + 49 i, − 16 − 18 108 45 45 1620 45 19 59 i, − 103 + 108 i, − 143 + 49 i, − 3911 − 173 i, − 10892 + 6889 2025 i, − 815831 900 i, − 242717 48600 − 5083067 48600 i, − 4592431 243000 + 9589 243000 i, − 42814517 13500 496841 729000 6750 7561091 121500 i].


Step 1.

    h′11 = [ 1.0801e-24 + 1.800e-12i    3 + 3i
             3 + 3i                     3 + 3i ],

and h′22 is the 5 × 5 Hankel-like block defined (first row, then remainder of the last column) by the list

    [2.5e-11 + 2.0e-11i, 2.5e-11 + 2.1e-11i, −0.1 − 0.3i, 2.5e-11 + 2.0e-11i, −0.28 + 0.46i, 0.4 − 0.3i, 0.906 − 0.142i, −1.6653 − 0.376i, −0.51422 − 0.58378i].

Step 2.

    h′11 = [ −7.9978e-11 − 3.1498e-10i    −7.9913e-11 − 3.1505e-10i    −1 + 3i
             −7.9913e-11 − 3.1505e-10i    −1 + 3i                      3.9584e-10 − 1.2961e-10i
             −1 + 3i                      3.9584e-10 − 1.2961e-10i     −5 + 2i ],

    h′22 = [ −8.8373e-08 − 1.7003e-07i    15 − 3.6869e-07i
             15 − 3.6869e-07i             47 + i ].

Step 3. Set h′11 = h′22. End.

Table 1 describes the steps executed with Algorithm 3.2.

Step 1.

    h′11 = [ 1.0e-13 + 1.0e-13i      0.16667 − 0.16667i
             0.16667 − 0.16667i      −0.16667 + 0.16667i ],

and h′Sc is a full symmetric 5 × 5 matrix (with no Hankel structure) whose leading entry is 1.1e-12 − 7.3e-13i and whose remaining entries include 6.6e-16 − 6.7e-13i, −0.016 + 0.005i, 0.029 − 0.019i, 0.033 − 0.011i, 0.118 + 0.033i, 0.012 − 0.014i, −0.095 + 0.057i, 0.016 − 0.049i, −0.099 + 0.092i, 0.060 − 0.038i, 0.117 − 0.235i and −0.530 + 0.256i.

Step 2.

    h′11 = [ 1.1335e-12 − 7.3319e-13i    6.6613e-13 − 6.7224e-13i    −0.016667 + 0.0055556i
             6.6613e-13 − 6.7213e-13i    −0.016667 + 0.0055556i      0.033333 − 0.011111i
             −0.016667 + 0.0055556i      0.033333 − 0.0111110i       0.016296 − 0.0492590i ],

    h′Sc = [ 1.1745e-11 + 1.2695e-11i      0.00022222 + 0.00029630i
             0.00022222 + 0.00029630i      −0.00112100 − 0.00153580i ].

Step 3. Set h′11 = h′Sc. End.

Table 2 describes the steps executed with Algorithm 4.1.

   



Table 3. The proposed block diagonal matrices as well as the upper triangular matrices. Table 3 reports D_Our and D_Sc together with A_Our and A_Sc, as defined in (2), for this example: D_Our is the direct sum of the three blocks h′11 computed above with Algorithm 3.2, with off-diagonal entries ranging from about 1e-16 to 1e-9 in modulus, and A_Our is an approximate upper triangular matrix with entries such as −12 + 6i and −32 + 24i; D_Sc is the direct sum of the three blocks computed with Algorithm 4.1, and A_Sc is approximately unit upper triangular with entries such as −1.8333 + 1.8333i.

Tables 1 and 2 describe the steps executed with Algorithm 3.2 and Algorithm 4.1, respectively. In Table 3, we report the block diagonal matrices as well as the upper triangular matrices, as defined in (2), obtained with the two approximate methods. Thus, all the properties introduced in [1] for the two methods applied to a Hankel matrix with real coefficients remain valid for a Hankel matrix with complex coefficients.

5.2 Accuracy and conditioning issues

5.2.1 Conditioning analysis

We provide statistical results on the behavior of the two methods applied to Hankel matrices with complex coefficients, measuring the conditioning of our strategy. To do this, we perturbed our Hankel sequences 500 times, adding k × 10⁻¹³ to all their entries, with k = k^(r) + ik^(i) ∈ C where k^(r), k^(i) are taken randomly in (−1, 1). The tests measure the absolute accuracy given by (16).

Fig. 1: log₁₀(Error^A_Our) (left) and log₁₀(Error^A_Sc) (right) for a Hankel matrix of size 672.

Figure 1 presents the behavior of the absolute accuracy given by (16) for a Hankel matrix of size 672. The plots for the other Hankel matrices were also produced but are not displayed here. Subsequently, we derive the behavior of Error^A_Our and Error^A_Sc over the 500 random perturbations for a given tolerance ε, for a Hankel matrix of size 672.

Error^A_Our:

    ε      ..., 10⁻¹³    10⁻¹²    10⁻¹¹    10⁻¹⁰, ...
    Gd     0             1        70       500
    Bd     500           499      430      0

Error^A_Sc:

    ε      ..., 10⁻¹³    10⁻¹²    10⁻¹¹    10⁻¹⁰    10⁻⁹, ...
    Gd     0             1        11       311      500
    Bd     500           499      489      189      0

We do the same for the other Hankel matrices. Thus, the conclusion, out of 500 random perturbations, is:

• For |ε| > 10⁻¹⁰: both our approximate block diagonalization and the approximate block diagonalization via Schur complementation are often good compared to the exact case.
• For 10⁻¹³ < |ε| < 10⁻¹⁰: both approximate block diagonalizations are often quite good compared to the exact case.
• Beyond ε = 10⁻¹³: both approximate block diagonalizations are often bad compared to the exact case.

This statistical analysis under perturbation of the input, applied to the Hankel sequences, confirms that our two approaches are well conditioned and that the above perturbations do not make the result diverge, starting from a fixed tolerance ε with |ε| > 10⁻¹⁰.

5.2.2 Accuracy

Some numerical results are presented to illustrate the accuracy of our algorithm with respect to the Schur approximate algorithm, for eleven different sequences defining Hankel matrices as in (3).

Thus, the relative accuracies of our algorithm and of the Schur complementation algorithm (15), for a fixed tolerance ε = 10⁻⁵, are given in the following tables:

    n       ‖D_Our‖₁               ‖D_Our approx‖₁         Error^R_Our
    7       8.4852813742385700     8.4852813742339990      5.706774496276349e-13
    14      23.000000000000000     23.000000221676345      9.697543542689079e-09
    28      23.000000000000000     23.000000192887367      8.393834710724900e-09
    56      10.000000000003041     10.000000003571856      2.729703512635876e-08
    112     12.000000000000000     12.000000000082560      3.166115405614226e-08
    224     15.000000000000000     15.000000000104892      5.453164647193843e-06
    336     23.236067977499783     23.236068029831394      3.982568713893646e-08
    448     23.236067977499783     23.236068720381791      3.475003625625449e-07
    672     12.000000000000000     12.000000000003219      3.739884346894936e-06
    896     12.000000000000000     12.000000000334289      9.270214916390855e-07
    1120    13.000000000000000     13.000000053823175      9.069382833634979e-06

    n       ‖D_Sc‖₁                ‖D_Sc approx‖₁          Error^R_Sc
    7       310.9203966784305      310.92039667837630      2.213926888702992e-13
    14      36.00000000000000      36.000000000122682      2.192140674609713e-08
    28      8.000000000000000      8.0000000000226170      1.179146052406410e-09
    56      7.228045704137195      7.2280457034719530      6.270644695469078e-08
    112     138.0665927567458      138.06659276952500      2.809358098521050e-08
    224     92.00000000000000      92.000000003949523      1.858406019570196e-06
    336     3.000000000000000      3.0000000000011880      1.716036350304927e-07
    448     5.500000000000000      5.5000000000201320      9.448631662438197e-07
    672     1161.000000000000      1160.9756804024230      4.563332870795622e-05
    896     1161.000000000000      1161.0275947690960      3.972672785950243e-05
    1120    63.00000000000000      63.000000095200178      1.611067508970338e-05

5.3 Computational cost and time

In the following table, we give the computational times (in seconds) of our approximate block diagonalization and of the Schur complementation approximate approach, for a fixed tolerance ε = 10⁻⁵ and eleven different sizes of "perturbed" Hankel matrices.

    n       Time_Sc approximate    Time_Our approximate
    7       0.011534               0.008796
    14      0.004748               0.002351
    28      0.013804               0.002813
    56      0.012679               0.011248
    112     0.037908               0.035668
    224     0.268646               0.249651
    336     0.880709               0.816654
    448     1.593345               1.580948
    672     6.347841               6.006527
    896     14.153723              14.048224
    1120    28.963264              28.857474

We conclude that the Schur factorization requires slightly more time than our approximate approach, especially as the size n of the Hankel matrices grows.

5.4 Application to the Euclidean algorithm

Let

    u(x) = Σ_{k=0}^{n} uk xᵏ    and    v(x) = Σ_{k=0}^{m} vk xᵏ

be two polynomials in C[x] of degrees n and m, respectively, with m < n. The power series expansion R(x) of the function v(x)/u(x) at infinity,

    R(x) = Σ_{k=0}^{∞} hk x⁻ᵏ,

defines the n × n complex Hankel matrix H = H(u, v), as in (1), associated to u(x) and v(x). In this section we present an application of Algorithm 3.2 to the Hankel matrix H(u, v): computing the approximate polynomial quotients and remainders appearing in the Euclidean algorithm. Recall that the classical Euclidean algorithm applied to u(x) and v(x) returns a sequence of polynomial quotients qk(x) and polynomial remainders rk(x),

such that r₋₁(x) = u(x), r₀(x) = v(x),

    r_{k−2}(x) = r_{k−1}(x) qk(x) − rk(x),      k = 1, ..., K,

where −rk(x) is the polynomial remainder of the division of r_{k−2}(x) by r_{k−1}(x), and rK(x) is the greatest common divisor (GCD) of u(x) and v(x). If we apply Corollary 3.1 to H(u, v), we obtain the following equality (see the proof in [2,3]):

    (t⁻¹)ᵗ H(u, v) t⁻¹ = [ H̃(q, 1)    0′
                           (0′)ᵗ      H̃(v, r) ],

where

    H̃(q, 1) = H(q, 1) + (t22⁻¹)ᵗ Jn−m Σn−m t22⁻¹,
    H̃(v, r) = H(v, r) + Pᵗ Jn−m Σn−m P,      0′ = (t22⁻¹)ᵗ Jn−m Σn−m P,

and q(x) and r(x) are the polynomial quotient and remainder of the division u(x)/v(x).
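The quotient/remainder pair produced by the block factorization can be cross-checked with MATLAB's built-in deconv, which divides polynomials given as coefficient vectors (highest degree first); a sketch with illustrative coefficients:

    u = [1, 2+1i, 0, -3];    % illustrative coefficients of u(x)
    v = [1, -1i];            % illustrative coefficients of v(x)
    [q, r] = deconv(u, v);   % q(x), r(x) such that u = conv(v, q) + r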

5.4.1 Example

Let

    u(x) = 2ix⁷ + (6i − 7)x⁶ − (17 + 5i)x⁵ − (64 + 21i)x⁴ − (211i + 93)x³ − (329 + 334i)x² − (490 + 645i)x − 751,
    v(x) = 2ix⁵ + (−1 + 6i)x⁴ + (1 + 30i)x³ + (15 + 42i)x² + (44 + 76i)x + 67.

Then H(u, v) = H(0, 1, −3i, 2, −39i, −94, −1 − 152i, −1502 + 10i, 50 + 2837i, −8069140i, −171 + 55458i, 76781 + 58i, 2920 + (765777/2)i). In the following, we show the connection between our algorithm (Algorithm 3.2) applied to H(u, v) and the Euclidean algorithm, step by step.

Step 1: Block factorization of H(u, v)
    H̃(q₁, 1) = H(0, 1, 3i),
    H̃(v, r₁) = H((−9.6809 − 6.3949i)e-13, (1 + 5.5397i)e-13, (−4.8404 − 3.1974i)e-12, −8.6832e-17, (9.6875 + 6.3949i)e-13, −3.011e-13 + 2.5686e-12i, −2 + 8.0178e-11i, −1 − 4.7778e-10i, 12 − 0.5i),
    q₁(x) = x² + (−1.5e-13 + 3i)x − 11 − 4.7398e-13i,
    r₁(x) = (3.4904e-12 + 2i)x³ + (3 + 4i)x² + (6 + 10i)x + 14 + 2.555e-11i.

Step 1: Euclidean algorithm
    r₋₁(x) = r₀(x) q₁(x) − r₁(x),
    q₁(x) = x² + (−1.501e-13 + 3i)x − 11 − 4.7606e-13i,
    r₁(x) = (1.1227e-11 + 2i)x³ + (3 + 4i)x² + (6 + 10i)x + 14 − 3.1896e-11i.

Step 2: Block factorization of H(u, v)
    H̃(q₂, 1) = H(8.4427e-12 − 4.4421e-11i, 1 − 1.5502e-10i, 1 + 2i),
    H̃(r₁, r₂) = H(8.4427e-12 − 4.4421e-11i, 1 − 1.5502e-10i, 4.2213e-11 − 2.221e-10i, 8.4427e-11 − 4.4421e-10i, −1.8705e-10 − 0.5i),
    q₂(x) = (1 + 3.1296e-12i)x² + (1 + 2i)x + 5 + 3.1296e-12i,
    r₂(x) = (2.8121e-10 − 2i)x − 3 − 4.7869e-10i.

Step 2: Euclidean algorithm
    r₀(x) = r₁(x) q₂(x) − r₂(x),
    q₂(x) = (1 + 5.5633e-12i)x² + (1 + 2i)x + 5 + 9.468e-12i,
    r₂(x) = (2.8301e-11 − 2i)x − 3 + 2.693e-11i.

Step 3: Block factorization of H(u, v)
    H̃(q₃, 1) = H(8.4427e-12 − 4.4421e-11i, 1 − 1.5502e-10i, 2 − 5.2256e-10i),
    H̃(r₂, r₃) = H(1 + 5.147e-10i),
    q₃(x) = (1 − 6.6174e-11i)x² + (2 − 3.4487e-10i)x + 5 − 7.694e-10i,
    r₃(x) = 1 + 5.147e-10i.

Step 3: Euclidean algorithm
    r₁(x) = r₂(x) q₃(x) − r₃(x),
    q₃(x) = (1 − 1.9764e-11i)x² + (2 − 2.6777e-11i)x + 5 − 1.3356e-11i,
    r₃(x) = 1 − 1.4282e-10i.

6 Conclusion

This paper proposed a new method for an approximate block factorization of complex Hankel matrices. An illustrative numerical comparison of our approach with the classical Schur complementation method shows that our algorithms are well conditioned and accurate. Moreover, an application of our algorithm to computing the approximate polynomial quotients and remainders appearing in the Euclidean algorithm was given. A theoretical error analysis of the propagation of the error introduced at each step of our algorithms is an interesting direction for future work.

Acknowledgements

I would like to thank Prof. P. Van Dooren, who suggested this problem to me, as well as Prof. Clemens H. Cap, Prof. Xiaojun Chen and the referee for improvements in the presentation of the paper.

References

[1] S. Belhaj, A fast method to block-diagonalize a Hankel matrix, Numer. Algor. 47, 15-34 (2008).
[2] S. Belhaj, Block factorization of Hankel matrices and the Euclidean algorithm, accepted for publication in MMNP journal.
[3] S. Belhaj, Block diagonalization of Hankel and Bézout matrices: connection with the Euclidean algorithm, submitted.
[4] N. Ben Atti, G. M. Diaz-Toca, Block diagonalization and LU-equivalence of Hankel matrices, Linear Algebra Appl. 412 (2006) 247-269.
[5] D. Bini, Parallel solution of certain Toeplitz linear systems, SIAM J. Comput. 13 (1984) 268-276.
[6] A. Bultheel, M. Van Barel, Linear Algebra, Rational Approximation and Orthogonal Polynomials, Studies in Computational Mathematics, vol. 6, Elsevier/North-Holland, Amsterdam, 1997.
[7] D. Bini, V. Pan, Polynomial and Matrix Computations, Vol. 1, Fundamental Algorithms, Birkhäuser, 1994.
[8] G. M. Diaz-Toca, N. Ben Atti, Block LU factorization of Hankel and Bezout matrices and Euclidean algorithm, Int. J. Comput. Math. 86, 135-149 (2009).
[9] W. B. Gragg, A. Lindquist, On the partial realization problem, Linear Algebra Appl. 50 (1983) 277-319.
[10] G. Golub, Ch. Van Loan, Matrix Computations, 3rd edition, Johns Hopkins Univ. Press, Baltimore and London, 1996.
[11] A. K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, 1989.
[12] S. Y. Kung, Multivariable and Multidimensional Systems, PhD thesis, Stanford University, Stanford, CA, June 1977.
[13] Fu-Rong Lin, Wai-Ki Ching, M. K. Ng, Fast inversion of triangular Toeplitz matrices, Theoretical Computer Science 315 (2004) 511-523.
[14] D. Pal, T. Kailath, Fast triangular factorization of Hankel and related matrices with arbitrary rank profile, SIAM J. Matrix Anal. Appl. 16 (1990) 451-478.
[15] J. Phillips, The triangular decomposition of Hankel matrices, Math. Comp. 25 (1971) 599-602.
[16] J. Rissanen, Algorithms for triangular decomposition of block Hankel and Toeplitz matrices with applications to factoring positive polynomials, Mathematics of Computation 27 (1973) 147-154.

