ISSN 0965-5425, Computational Mathematics and Mathematical Physics, 2008, Vol. 48, No. 12, pp. 2140–2145. © Pleiades Publishing, Ltd., 2008. Original Russian Text © B.M. Podlevskii, 2008, published in Zhurnal Vychislitel’noi Matematiki i Matematicheskoi Fiziki, 2008, Vol. 48, No. 12, pp. 2107–2112.

Newton’s Method as a Tool for Finding the Eigenvalues of Certain Two-Parameter (Multiparameter) Spectral Problems

B. M. Podlevskii

Institute of Applied Mathematics and Mechanics, National Academy of Sciences of Ukraine, ul. Naukova 3-b, Lvov, 79000 Ukraine
e-mail: [email protected]

Received November 15, 2007; in final form, May 20, 2008

Abstract—An iterative algorithm is examined for finding the eigenvalues of the two-parameter (multiparameter) algebraic eigenvalue problem. This algorithm uses Newton’s method and an efficient numerical procedure for differentiating determinants. Some numerical examples are given.

DOI: 10.1134/S0965542508120038

Keywords: two-parameter (multiparameter) eigenvalue problems, Newton’s method, determinant differentiation.

INTRODUCTION

Multiparameter eigenvalue problems are an extension of the classical one-parameter problem Ax = λx. Their abstract formulation is given by

Ax = Σ_{j=1}^{m} λj Bj x

or

Ai xi = Σ_{j=1}^{m} λj Bij xi,   i = 1, 2, …, m.   (1)

Here, λj ∈ R (j = 1, 2, …, m) are spectral parameters, while A, Bj, Ai, and Bij (i, j = 1, 2, …, m) are linear operators acting in separable Hilbert spaces.

A source of problems of this kind is classical analysis. In particular, such problems arise when boundary value problems for partial differential equations are solved by separating the variables. This largely explains the interest of researchers in the various aspects of spectral theory, as well as in numerical methods for solving such problems. Recently, a rather complete theory has been developed for the problems in this class (for instance, see [1–4], where a comprehensive list of references is also given). Some numerical methods were also constructed (e.g., see [5–11]). Nevertheless, there still are many open issues related to multiparameter eigenvalue problems. In particular, efficient numerical methods and algorithms are required for solving the eigenvalue problems arising from algebraic and integral equations.

In this paper, we propose a numerical algorithm for solving the multiparameter algebraic eigenvalue problem. This algorithm uses Newton’s method and a numerical procedure for differentiating determinants. The problem to be solved is to find the values of the spectral parameters λ1, …, λm for which system (1) admits a nontrivial solution x1, …, xm (xi ≠ 0, i = 1, 2, …, m).


1. TWO-PARAMETER ALGEBRAIC EIGENVALUE PROBLEM

The two-parameter algebraic eigenvalue problem is a particular case of problem (1). It can be written as a system of two homogeneous linear equations

(A1 + λB1 + µC1)x = 0,
(A2 + λB2 + µC2)y = 0,   (2)

where Ai, Bi, and Ci are square n-by-n matrices. The problem is to find the eigentuples (which, in our case, means the eigenpairs (λ, µ)) such that system (2) has nontrivial solutions x ≠ 0 and y ≠ 0. The required eigenpairs are obviously solutions to the system of the two nonlinear algebraic equations

f(λ, µ) ≡ det(A1 + λB1 + µC1) = 0,
g(λ, µ) ≡ det(A2 + λB2 + µC2) = 0.   (3)
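For small matrices, the determinants in (3) can also be evaluated directly with a general-purpose routine; the short Python sketch below is our illustration, not the algorithm proposed in this paper (which avoids expanding the determinants), and is handy only as a cross-check in the test examples of Section 3.

```python
import numpy as np

def f_and_g(A1, B1, C1, A2, B2, C2, lam, mu):
    """Evaluate the left-hand sides of system (3) at a given point (lam, mu)
    by forming the matrices of (2) and taking their determinants directly."""
    f = np.linalg.det(A1 + lam * B1 + mu * C1)
    g = np.linalg.det(A2 + lam * B2 + mu * C2)
    return f, g
```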

In this paper, we propose an iterative process that makes it possible to calculate the eigenpairs of system (3) by Newton’s method without expanding the determinants in (3). This means that the left-hand sides in system (3) are not calculated explicitly; instead, an algorithm is proposed for calculating the functions f(λ, µ) and g(λ, µ), as well as the entries of the Jacobian matrix of system (3), for fixed values of λ and µ.

Thus, suppose that an approximation (λk, µk) is available for an eigenpair of system (3). The iterations of Newton’s method can be written as

λ_{k+1} = λk + ∆λk,   µ_{k+1} = µk + ∆µk,   k = 0, 1, …,   (4)

where the corrections ∆λk and ∆µk solve the system of two linear equations

f'λ ∆λk + f'µ ∆µk = –f,
g'λ ∆λk + g'µ ∆µk = –g,   (5)

where all six coefficients (the functions f and g and the four partial derivatives f'λ, f'µ, g'λ, and g'µ) are calculated at the point (λk, µk).

We do not discuss the advantages and disadvantages of Newton’s method; rather, we proceed directly to the description of an efficient algorithm for calculating the coefficients of system (5). This algorithm can also be used in other methods requiring the derivatives.

2. CALCULATING THE DERIVATIVE OF A DETERMINANT

Let A(λ) be a square matrix whose entries are analytic functions of the parameter λ. Define

f(λ) = det A(λ),   A = A(λ0),   B = A'(λ0).

The problem is to calculate f(λ0) and f'(λ0) using A and B, which can easily be calculated for any given λ = λ0. We assume that the triangular decomposition

A(λ) = L(λ)U(λ)   (6)

exists for all λ in a certain neighborhood of λ0. Here, L(λ) is a unit lower triangular matrix and U(λ) is an upper triangular matrix. Let lik(λ) and uik(λ) be the entries of L(λ) and U(λ), respectively. By assumption, these entries are differentiable functions of λ. Therefore, differentiating (6), we obtain

B(λ) = M(λ)U(λ) + L(λ)V(λ),   (7)

where

B(λ) = A'(λ),   M(λ) = L'(λ),   V(λ) = U'(λ).   (8)

Moreover, M(λ) is a lower triangular matrix with a zero main diagonal, and V(λ) is an upper triangular matrix.


Relations (6)–(8) imply that

f(λ) = det L(λ) det U(λ) = ∏_{i=1}^{n} uii(λ)   (since det L(λ) = 1)

and

f'(λ) = Σ_{k=1}^{n} vkk(λ) ∏_{i=1, i≠k}^{n} uii(λ).

Thus, to find f(λ0) and f'(λ0) for a fixed λ = λ0, one should calculate the decompositions

A = LU,   B = MU + LV;   (9)

then, we have

f(λ0) = ∏_{i=1}^{n} uii,   f'(λ0) = Σ_{k=1}^{n} vkk ∏_{i=1, i≠k}^{n} uii.   (10)
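As a quick illustration (ours, not spelled out in the paper), for n = 2 formulas (10) reduce to the product rule applied to f = u11 u22, since vii = u'ii(λ0):

f(λ0) = u11 u22,   f'(λ0) = v11 u22 + v22 u11.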

Observe that the entries of the matrices appearing in decomposition (9) can be directly determined by the recursive formulas (r = 1, 2, …, n)

urk = ark – Σ_{j=1}^{r–1} lrj ujk,   k = r, …, n,

lir = (air – Σ_{j=1}^{r–1} lij ujr)/urr,   i = r + 1, …, n,

vrk = brk – Σ_{j=1}^{r–1} (mrj ujk + lrj vjk),   k = r, …, n,

mir = (bir – Σ_{j=1}^{r–1} (mij ujr + lij vjr) – lir vrr)/urr,   i = r + 1, …, n.
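The recursion translates almost line by line into code. The following Python/NumPy sketch is our own illustration (the routine name det_and_derivative is not from the paper); it builds decomposition (9) without pivoting, so it assumes all pivots urr are nonzero, and then applies formulas (10). A pivoted variant is sketched further below.

```python
import numpy as np

def det_and_derivative(A, B):
    """Given A = A(lambda0) and B = A'(lambda0), compute f(lambda0) = det A
    and f'(lambda0) via the decompositions A = LU and B = MU + LV of (9),
    followed by formulas (10). No pivoting: all u_rr must be nonzero."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    M = np.zeros((n, n))          # strictly lower triangular (M = L')
    V = np.zeros((n, n))          # upper triangular (V = U')
    for r in range(n):
        for k in range(r, n):     # row r of U and V
            U[r, k] = A[r, k] - L[r, :r] @ U[:r, k]
            V[r, k] = B[r, k] - M[r, :r] @ U[:r, k] - L[r, :r] @ V[:r, k]
        for i in range(r + 1, n): # column r of L and M
            L[i, r] = (A[i, r] - L[i, :r] @ U[:r, r]) / U[r, r]
            M[i, r] = (B[i, r] - M[i, :r] @ U[:r, r]
                       - L[i, :r] @ V[:r, r] - L[i, r] * V[r, r]) / U[r, r]
    u, v = np.diag(U), np.diag(V)
    f = np.prod(u)
    f_prime = sum(v[k] * np.prod(np.delete(u, k)) for k in range(n))
    return f, f_prime
```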

Now, the above algorithm for finding f(λk) and f'(λk) for the given A = A(λk) and B = A'(λk) can be applied to calculating the coefficients of system (5) as shown below.

Step 1. Calculate the matrix A = A1 + λkB1 + µkC1.
Step 2. Take B1 as the matrix B; thus, B = B1. Using (9) and (10), calculate f(λk, µk) and f'λ(λk, µk).
Step 3. Take C1 as the matrix B; thus, B = C1. Using (9) and (10), calculate f(λk, µk) and f'µ(λk, µk).

The coefficients in the second equation in (5) are found similarly. Namely,

Step 4. Calculate the matrix A = A2 + λkB2 + µkC2.
Step 5. Take B2 as the matrix B; thus, B = B2. Using (9) and (10), calculate g(λk, µk) and g'λ(λk, µk).
Step 6. Take C2 as the matrix B; thus, B = C2. Using (9) and (10), calculate g(λk, µk) and g'µ(λk, µk).

Now, it remains to perform two steps prescribed by Newton’s method.

Step 7. Form the Jacobian matrix from the coefficients f'λ, f'µ, g'λ, and g'µ and solve system (5) with respect to ∆λk and ∆µk.
Step 8. Calculate the next approximation to λ and µ using formula (4).
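Putting Steps 1–8 together, one Newton iteration of (4)–(5) can be organized as in the following Python sketch. It reuses the det_and_derivative routine sketched above; the helper names, the tolerance, and the iteration cap are our own choices and are not prescribed by the paper.

```python
import numpy as np

def newton_step(A1, B1, C1, A2, B2, C2, lam, mu):
    """One iteration of (4)-(5) for the two-parameter problem (2)."""
    F1 = A1 + lam * B1 + mu * C1                 # Step 1
    f, f_lam = det_and_derivative(F1, B1)        # Step 2: B = B1 gives f, f'_lambda
    _, f_mu = det_and_derivative(F1, C1)         # Step 3: B = C1 gives f'_mu
    F2 = A2 + lam * B2 + mu * C2                 # Step 4
    g, g_lam = det_and_derivative(F2, B2)        # Step 5
    _, g_mu = det_and_derivative(F2, C2)         # Step 6
    J = np.array([[f_lam, f_mu], [g_lam, g_mu]]) # Step 7: Jacobian of (3)
    d_lam, d_mu = np.linalg.solve(J, [-f, -g])   #         corrections from (5)
    return lam + d_lam, mu + d_mu                # Step 8: update (4)

def solve_two_parameter(A1, B1, C1, A2, B2, C2, lam, mu, tol=1e-6, maxit=100):
    """Iterate Newton steps until the corrections drop below tol."""
    for _ in range(maxit):
        new_lam, new_mu = newton_step(A1, B1, C1, A2, B2, C2, lam, mu)
        if max(abs(new_lam - lam), abs(new_mu - mu)) < tol:
            return new_lam, new_mu
        lam, mu = new_lam, new_mu
    return lam, mu
```

The same structure carries over to the m-parameter case with an m × m Jacobian, in line with the remark on extending the algorithm made at the end of this section.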

Table 1. Initial approximations to eigenpairs (λ0, µ0), number of iteration steps N, calculated eigenpairs (λN, µN), and exact eigenpairs (λ, µ)

λ0        µ0        N     λN          µN            λ         µ
  1.0      0.1       5     0.707107    0.0           1/√2      0
 10.0     –3.0       8     0.707107    0.0           1/√2      0
 –1.0      0.1       5    –0.707107    0.0          –1/√2      0
–11.0      1.0       8    –0.707107    0.0          –1/√2      0
  1.0     –1.0       5     0.577350   –0.577350      1/√3     –1/√3
  0.5     –0.5       4     0.577350   –0.577350      1/√3     –1/√3
 –1.0      1.0       5    –0.577350    0.577350     –1/√3      1/√3
 –5.0     50.0      11    –0.577350    0.577350     –1/√3      1/√3

Note that this version of our algorithm (Step 1 to Step 6) is numerically unstable (especially if urr = 0 for some r, in which case the algorithm fails). However, it can be modified so as to work in any case and remain numerically stable (under the natural assumption that the matrix A in decomposition (9) is nonsingular). To this end, one can use the well-known technique of pivoting, which exploits permutations of the rows (and/or columns) of A (e.g., see [12, p. 44]). In this case, decomposition (9) can be written as

PA = LU,   PB = MU + LV.

Here, P is a permutation matrix and det P = (–1)^s, where s is the number of pairwise (say, row) permutations. Relations (10) take the form

f(λ0) = (–1)^s ∏_{i=1}^{n} uii,   f'(λ0) = (–1)^s Σ_{k=1}^{n} vkk ∏_{i=1, i≠k}^{n} uii.
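A minimal way to realize this pivoted variant, assuming SciPy is available and reusing the unpivoted det_and_derivative sketch above, is to pick the row permutation by an ordinary partially pivoted LU factorization of A, run the same recursion on the permuted matrices PA and PB, and restore the sign (–1)^s from the pivot sequence. The routine name det_and_derivative_pivoted is ours.

```python
import numpy as np
from scipy.linalg import lu_factor

def det_and_derivative_pivoted(A, B):
    """Pivoted version: PA = LU, PB = MU + LV with P chosen by partial
    pivoting on A; the results are multiplied by det P = (-1)**s."""
    n = A.shape[0]
    _, piv = lu_factor(A)                 # pivot sequence chosen for A
    perm = np.arange(n)
    for i, p in enumerate(piv):           # turn the swap list into a permutation
        perm[i], perm[p] = perm[p], perm[i]
    s = int(np.sum(piv != np.arange(n)))  # number of actual row swaps
    f, f_prime = det_and_derivative(A[perm], B[perm])
    return (-1) ** s * f, (-1) ** s * f_prime
```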

Also, note that our algorithm can easily and completely be extended to the multiparameter eigenvalue problem.

3. TEST EXAMPLES

The test examples for the proposed algorithm are matrix two-parameter problems of form (2) with matrices of orders n = 2 and n = 3 whose eigenvalues can be found analytically. This makes it possible to compare the exact eigenvalues with their approximations obtained using our algorithm and get some idea of its efficiency.

Example 1: n = 2,

A1 = | 0  1 |     B1 = | –1   0 |     C1 = | 2   0 |
     | 1  0 |,         |  0  –2 |,         | 0  –1 |,

A2 = | 0  0 |     B2 = | 0   0 |     C2 = | –1   0 |
     | 0  0 |,         | 0  –1 |,         |  0  –1 |.

In this case, determinants (3) have the form

f(λ, µ) = (2λ + µ)(2µ – λ) + 1,   g(λ, µ) = µ(λ + µ).


It is easy to verify that the eigenpairs are (λ1,2, µ1,2) = (±1/√2, 0) and (λ3,4, µ3,4) = (±1/√3, ∓1/√3).
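As a usage illustration, the data of Example 1 can be fed to the solve_two_parameter sketch given after Step 8 (that routine and its defaults are ours, not the paper's); according to Table 1, the first initial guess below converges in a few steps.

```python
import numpy as np

# Matrices of Example 1 (n = 2).
A1 = np.array([[0., 1.], [1., 0.]])
B1 = np.array([[-1., 0.], [0., -2.]])
C1 = np.array([[2., 0.], [0., -1.]])
A2 = np.zeros((2, 2))
B2 = np.array([[0., 0.], [0., -1.]])
C2 = np.array([[-1., 0.], [0., -1.]])

# Table 1 reports that (1.0, 0.1) leads to (0.707107, 0.0) in 5 iterations.
lam, mu = solve_two_parameter(A1, B1, C1, A2, B2, C2, lam=1.0, mu=0.1)
print(lam, mu)   # expected to be close to (1/sqrt(2), 0)
```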

Table 1 presents all the eigenpairs of problem (2) calculated using iterative process (4) and the algorithm for determining the coefficients of system (5), which was described above. The calculations were carried out to an accuracy of 10^–6. For comparison purposes, we show the results obtained with various initial approximations for each pair of eigenvalues.

Example 2: n = 3,

A1 = | 1  1   0 |     B1 = | 1  0  0 |     C1 = | 1  0  0 |
     | 0  1  –1 |          | 0  1  1 |          | 2  0  0 |
     | 0  2   0 |,         | 0  0  0 |,         | 0  1  0 |,

A2 = | 1  1  0 |     B2 = | –1  2  0 |     C2 = | 1   3  0 |
     | 0  2  0 |          |  0  0  0 |          | 0  –1  0 |
     | 1  0  1 |,         |  0  0  1 |,         | 1   0  0 |.

In this case, determinants (3) have the form

f(λ, µ) = –(λ – 1)(µ + 2)(λ + µ + 1),   g(λ, µ) = (λ + 1)(µ – 2)(λ – µ – 1).

From these formulas, we can determine all the eigenpairs. They are (1, 0), (1, 2), (0, –1), (–1, 0), (–1, –2), and (–3, 2). As in Example 1, the eigenpairs of problem (2) were calculated using iterative process (4) and our algorithm for determining the coefficients of system (5). The results are presented in Table 2.

In closing, we note that, in numerous experiments (including those discussed above), we observed rapid convergence to the eigenpairs irrespective of the chosen initial approximations (even if they were far from the eigenvalues).

Number of Calculated values of eigenpairs iteration steps

Exact values of eigenpairs

λ0

µ0

N

λN

µN

λ

µ

1.0 55.0

–0.5 5.0

4 18

1.0 1.0

0.0 0.0

1

0

2.0 5.0

3.0 5.0

5 7

1.0 1.0

2.0 2.0

1

2

0.2 10–5

–0.2 10–5

22 56

–1.0 –1.0

–2.0 –2.0

–1

–2

–2.2 –5.0

–0.2 5.0

7 7

–3.0 –3.0

2.0 2.0

–3

2

0.1 –0.1

–1.5 –0.5

5 6

0.0 0.0

–1.0 –1.0

0

–1

–1.5 –1.0

0.2 1.0

5 5

–1.0 –1.0

0.0 0.0

–1

0


REFERENCES

1. B. D. Sleeman, Multiparameter Spectral Theory in Hilbert Space (Pitman Press, London, 1978).
2. H. Volkmer, “Multiparameter Eigenvalue Problems and Expansion Theorem,” Lect. Notes Math. 1336 (1988).
3. G. F. Roach, “A Fredholm Theory for Multiparameter Problems,” Nieuw Arch. Wisk. 24, 49–76 (1976).
4. Yu. M. Berezansky and A. Yu. Konstantinov, “Expansion in Eigenvectors of Multiparameter Spectral Problems,” Ukr. Mat. Zh. 44, 901–913 (1992).
5. L. Fox, L. Hayes, and D. F. Mayers, “The Double Eigenvalue Problem,” in Topics in Numerical Analysis: Proc. of the Irish Academy Conference on Numerical Analysis, Dublin, 1972, pp. 93–112 (1972).
6. H. Henke, “Eine Methode zur Lösung spezieller Randwertaufgaben der Mathieuschen Differentialgleichung,” Z. Angew. Math. Mech. 52, 250–251 (1972).
7. P. A. Binding and P. J. Browne, “A Variational Approach to Multiparameter Eigenvalue Problems for Matrices,” J. Math. Analys. 8, 763–777 (1977).
8. A. A. Abramov and V. I. Ul’yanova, “A Method for Solving Self-Adjoint Multiparameter Spectral Problems for Weakly Coupled Sets of Ordinary Differential Equations,” Zh. Vychisl. Mat. Mat. Fiz. 37, 566–571 (1997) [Comput. Math. Math. Phys. 37, 552–557 (1997)].
9. A. A. Abramov, V. I. Ul’yanova, and L. F. Yukhno, “A Method for Solving the Multiparameter Eigenvalue Problem for Certain Systems of Differential Equations,” Zh. Vychisl. Mat. Mat. Fiz. 40, 21–29 (2000) [Comput. Math. Math. Phys. 40, 18–26 (2000)].
10. P. O. Savenko and L. P. Protsakh, “Variational Approach to Eigenvalue Problems with a Nonlinear Vector Spectral Parameter,” Mat. Metodi Fiz.-Mekh. Polya 47 (3), 7–15 (2004).
11. B. M. Podlevskii, “Variational Approach to Solving Two-Parameter Eigenvalue Problems,” Mat. Metodi Fiz.-Mekh. Polya 48 (1), 31–35 (2005).
12. J. Rice, Matrix Computations and Mathematical Software (McGraw-Hill, New York, 1981; Mir, Moscow, 1984).

