Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2013, Article ID 548487, 8 pages. http://dx.doi.org/10.1155/2013/548487
Research Article

Augmented Arnoldi-Tikhonov Regularization Methods for Solving Large-Scale Linear Ill-Posed Systems

Yiqin Lin,^1 Liang Bao,^2 and Yanhua Cao^3

^1 Department of Mathematics and Computational Science, Hunan University of Science and Engineering, Yongzhou 425100, China
^2 Department of Mathematics, East China University of Science and Technology, Shanghai 200237, China
^3 Department of Mathematics, North China Electric Power University, Beijing 102206, China

Correspondence should be addressed to Liang Bao; [email protected]

Received 1 November 2012; Revised 18 March 2013; Accepted 19 March 2013

Academic Editor: Hung Nguyen-Xuan

Copyright © 2013 Yiqin Lin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We propose an augmented Arnoldi-Tikhonov regularization method for the solution of large-scale linear ill-posed systems. The method augments the Krylov subspace by a user-supplied low-dimensional subspace that contains a rough approximation of the desired solution. The augmentation is implemented by a modified Arnoldi process. Some useful results are also presented. Numerical experiments illustrate that the augmented method outperforms the corresponding method without augmentation on some real-world examples.
1. Introduction

We consider the iterative solution of a large system of linear equations

$$ Ax = b, \tag{1} $$

where $A \in \mathbb{R}^{n\times n}$ is nonsymmetric and nonsingular and $b \in \mathbb{R}^n$. We further assume that the coefficient matrix $A$ is of ill-determined rank; that is, all its singular values decay gradually to zero, with no gap anywhere in the spectrum. Such systems are often referred to as linear discrete ill-posed problems and arise from the discretization of ill-posed problems such as Fredholm integral equations of the first kind with a smooth kernel. The right-hand side $b$ of (1) is assumed to be contaminated by an error $e \in \mathbb{R}^n$, which may stem from discretization or measurement inaccuracies. Thus, $b = \hat{b} + e$, where $\hat{b}$ is the unknown error-free right-hand side vector. We would like to compute the solution $\hat{x}$ of the linear system of equations with the error-free right-hand side $\hat{b}$,

$$ A\hat{x} = \hat{b}. \tag{2} $$

However, since the right-hand side in (2) is not available, we seek to determine an approximation of $\hat{x}$ by solving the available system (1) or a modification of it. Due to the ill-conditioning of $A$, the system (1) has to be regularized in order to compute a useful approximation of $\hat{x}$. Perhaps the best-known regularization method is Tikhonov regularization [1-3], which in its simplest form is based on the minimization problem

$$ \min_{x\in\mathbb{R}^n} \left\{ \|Ax - b\|^2 + \frac{1}{\mu}\|x\|^2 \right\}, \tag{3} $$

where $\mu > 0$ is a regularization parameter. Here and throughout this paper, $\|\cdot\|$ denotes the Euclidean vector norm or the associated induced matrix norm. After regularizing the system (1), we need to compute the solution $x_\mu$ of the minimization problem (3). Such a vector $x_\mu$ is also the solution of

$$ \left( A^T A + \frac{1}{\mu} I \right) x = A^T b. \tag{4} $$

Here and in the following, $I$ denotes the identity matrix, whose dimension conforms with the context. If $\mu$ is far from zero, then, due to the ill-conditioning of $A$, $x_\mu$ is computed inaccurately, while if $\mu$ is close to zero, $x_\mu$ is computed accurately, but the error $x_\mu - \hat{x}$ is quite large.
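For concreteness, the following is a minimal NumPy sketch (ours, not part of the original paper) that solves the regularized normal equations (4) for a small dense problem. The Hilbert-type test matrix, the noise level, and the values of `mu` are illustrative assumptions only; note that in the convention of (3), a larger `mu` means weaker regularization.

```python
import numpy as np

def tikhonov_solve(A, b, mu):
    """Solve (A^T A + (1/mu) I) x = A^T b, i.e., the Tikhonov problem (3)-(4)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + (1.0 / mu) * np.eye(n), A.T @ b)

# Illustrative ill-conditioned test problem: a Hilbert matrix.
n = 50
A = 1.0 / (np.add.outer(np.arange(n), np.arange(n)) + 1.0)
x_true = np.ones(n)
b_exact = A @ x_true
rng = np.random.default_rng(0)
e = rng.standard_normal(n)
e *= 1e-4 * np.linalg.norm(b_exact) / np.linalg.norm(e)  # small relative noise
b = b_exact + e

for mu in (1e2, 1e6, 1e10):   # weak -> very weak regularization
    x_mu = tikhonov_solve(A, b, mu)
    print(mu, np.linalg.norm(x_mu - x_true) / np.linalg.norm(x_true))
```

Running such a loop makes the trade-off described above visible: the relative error first decreases and then grows as `mu` increases.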
Thus, the choice of a good value for $\mu$ is fairly important. Several methods have been proposed to obtain an effective value for $\mu$. For example, if the norm of the error $e$, or a fairly accurate estimate of it, is known, the regularization parameter is quite easy to determine by application of the discrepancy principle. The discrepancy principle proposes that $\mu$ be chosen so that the discrepancy $b - Ax_\mu$ satisfies

$$ \|b - Ax_\mu\| = \eta\varepsilon, \tag{5} $$

where $\varepsilon = \|e\|$ and $\eta > 1$ is a constant; see, for example, [4] for further details on the discrepancy principle.
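Since the residual norm $\|b - Ax_\mu\|$ is monotone in $\mu$ (a classical property of Tikhonov regularization), the discrepancy equation (5) can be solved with a scalar root-finder once the SVD of $A$ is available. A hedged sketch, ours rather than the paper's; the bracket endpoints `lo` and `hi` are illustrative assumptions and must enclose a sign change:

```python
import numpy as np
from scipy.optimize import brentq

def discrepancy_mu(A, b, eps, eta=1.01, lo=1e-8, hi=1e12):
    """Find mu with ||b - A x_mu|| = eta*eps for the Tikhonov problem (3)-(4)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b

    def residual_norm(mu):
        # With x_mu = V diag(s/(s^2 + 1/mu)) U^T b, the residual coefficients
        # in the range of U are beta / (mu * s^2 + 1).
        r_range = beta / (mu * s**2 + 1.0)
        r_orth2 = max(np.linalg.norm(b)**2 - np.linalg.norm(beta)**2, 0.0)
        return np.sqrt(np.linalg.norm(r_range)**2 + r_orth2)

    return brentq(lambda mu: residual_norm(mu) - eta * eps, lo, hi)
```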
The singular value decomposition (SVD) [5] of $A$ can be used to determine the solution $x_\mu$ of the minimization problem (3). For an overview of numerical methods for computing the SVD, we refer to [6]. We remark that the computational effort required to compute the SVD is quite high, even for moderately sized matrices.

Many numerical methods using Krylov subspaces have been proposed for the solution of large-scale Tikhonov regularization problems (3). The main idea of such algorithms is to first project the large problem onto some Krylov subspace to produce a problem of small size and then solve the small problem by the SVD. For instance, several well-established methods based on the Lanczos bidiagonalization process have been proposed for the solution of the minimization problem (3); see [7-10] and the references therein. These methods use the Lanczos bidiagonalization process to construct a basis of the Krylov subspace

$$ \mathcal{K}_m(A^TA, A^Tb) = \mathrm{span}\left\{ A^Tb, A^TAA^Tb, \ldots, (A^TA)^{m-1}A^Tb \right\}. \tag{6} $$

We remark that each Lanczos bidiagonalization step needs two matrix-vector product evaluations, one with $A$ and the other with $A^T$. Other methods using the Krylov subspace

$$ \mathcal{K}_m(A, Ab) = \mathrm{span}\left\{ Ab, A^2b, \ldots, A^mb \right\} \tag{7} $$

as the projection subspace have also been designed. For example, Lewis and Reichel [11] proposed to exploit the Arnoldi process to produce a basis of the Krylov subspace $\mathcal{K}_m(A, Ab)$ to obtain an approximation of the solution of the Tikhonov regularization problem (3). Since each Arnoldi decomposition step requires only one matrix-vector evaluation with $A$, the approach based on the Arnoldi process may require fewer matrix-vector product evaluations than that based on the Lanczos bidiagonalization process. Moreover, methods based on the Arnoldi process do not require the adjoint matrix $A^T$ and, hence, are more appropriate for problems for which the adjoint is difficult to evaluate; for such problems we refer to [12]. A similar Tikhonov regularization method based on a generalized Krylov subspace is proposed in [13].

Some numerical methods that do not use the Tikhonov regularization technique have also been proposed for solving the large-scale linear discrete ill-posed problem (1). These methods include the range-restricted GMRES (RRGMRES) method [14, 15], the augmented range-restricted GMRES (ARRGMRES) method [16], and the flexible GMRES (FGMRES) method [17]. The RRGMRES method determines the $m$th approximation $x_m$ of (1) by solving the minimization problem

$$ \min_{x\in\mathcal{K}_m(A,Ab)} \|Ax - b\|, \quad m = 1, 2, \ldots, \quad x_0 = 0. \tag{8} $$

The regularization is implemented by choosing a suitable dimension $m$; see, for example, [18]. The ARRGMRES method augments the Krylov subspace $\mathcal{K}_m(A, Ab)$ by a low-dimensional user-supplied subspace, determined by vectors that are able to represent known features of the desired solution. The augmented method can yield approximate solutions of higher accuracy than the RRGMRES method if the Krylov subspace $\mathcal{K}_m(A, Ab)$ does not allow representation of the known features.

In this paper, we propose a new iterative method, named the augmented Arnoldi-Tikhonov regularization method, for solving large-scale linear ill-posed systems (1). The new method combines the Tikhonov regularization technique with the augmentation technique; the augmentation is implemented by a modified Arnoldi process. The following summarizes the structure of this paper. Section 2 describes the augmented Arnoldi-Tikhonov regularization method and some useful results. Some real-world examples are presented in Section 3. Section 4 contains the conclusions.
2. Augmented Arnoldi-Tikhonov Regularization Method

We attempt to improve the Arnoldi-Tikhonov regularization method proposed in [11] by augmenting the Krylov subspace $\mathcal{K}_m(A, Ab)$ by a $k$-dimensional subspace $\mathcal{W}$, which contains a rough approximation of the desired solution of (2). The projection subspace we exploit in the following is therefore of the form

$$ \mathcal{K}_m(A, Ab) + \mathcal{W} = \mathrm{span}\left\{ Ab, A^2b, \ldots, A^mb \right\} + \mathcal{W}. \tag{9} $$

Let $W$ be the $n \times k$ matrix whose columns form an orthonormal basis of the space $\mathcal{W}$. For the purpose of augmentation by $\mathcal{W}$, we apply the modified Arnoldi process [16] to construct the modified Arnoldi decomposition

$$ A[W, V_m] = [Q, V_{m+1}]H, \tag{10} $$

where $Q \in \mathbb{R}^{n\times k}$, $V_{m+1} \in \mathbb{R}^{n\times(m+1)}$, $[Q, V_{m+1}]$ has orthonormal columns, and $H \in \mathbb{R}^{(k+m+1)\times(k+m)}$ is an upper Hessenberg matrix. We point out that the leading principal $k \times k$ submatrix of $H$ is the upper triangular matrix in the QR factorization [5] of $AW$; that is, $H$ is of the form
$$ H = \begin{bmatrix}
h_{11} & h_{12} & \cdots & h_{1k} & h_{1,k+1} & h_{1,k+2} & \cdots & h_{1,k+m} \\
 & h_{22} & \cdots & h_{2k} & h_{2,k+1} & h_{2,k+2} & \cdots & h_{2,k+m} \\
 & & \ddots & \vdots & \vdots & \vdots & & \vdots \\
 & & & h_{kk} & h_{k,k+1} & h_{k,k+2} & \cdots & h_{k,k+m} \\
 & & & & h_{k+1,k+1} & h_{k+1,k+2} & \cdots & h_{k+1,k+m} \\
 & & & & h_{k+2,k+1} & h_{k+2,k+2} & \cdots & h_{k+2,k+m} \\
 & & & & & \ddots & \ddots & \vdots \\
 & & & & & & h_{k+m,k+m-1} & h_{k+m,k+m} \\
 & & & & & & & h_{k+m+1,k+m}
\end{bmatrix}. \tag{11} $$

Input: $A \in \mathbb{R}^{n\times n}$, $b \in \mathbb{R}^n$, $W \in \mathbb{R}^{n\times k}$, $m$.
Output: $Q \in \mathbb{R}^{n\times k}$, $V_{m+1} \in \mathbb{R}^{n\times(m+1)}$, $H \in \mathbb{R}^{(k+m+1)\times(k+m)}$.
(1) Compute the QR factorization $AW = QT$ and set the leading $k \times k$ block of $H$ to $T$.
(2) Compute $v_1 = Ab - Q(Q^TAb)$, $v_1 := v_1/\|v_1\|$, and set $V_1 = [v_1]$.
(3) For $j = 1, 2, \ldots, m$ do:
      $v_{j+1} := Av_j$;
      For $i = 1, 2, \ldots, k$ do:
        $h_{i,k+j} := Q(:,i)^T v_{j+1}$; $v_{j+1} := v_{j+1} - h_{i,k+j}Q(:,i)$;
      End for
      For $i = 1, 2, \ldots, j$ do:
        $h_{k+i,k+j} := v_i^T v_{j+1}$; $v_{j+1} := v_{j+1} - h_{k+i,k+j}v_i$;
      End for
      $h_{k+j+1,k+j} := \|v_{j+1}\|$; $v_{j+1} := v_{j+1}/h_{k+j+1,k+j}$;
      $V_{j+1} := [V_j, v_{j+1}]$;
    End for

Algorithm 1: Modified Arnoldi process.
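The following is a minimal NumPy sketch of Algorithm 1, our transcription rather than the authors' code. It returns $Q$, $V_{m+1}$, and $H$ satisfying the modified Arnoldi relation (10), which can be checked numerically as in the usage comment; it uses classical Gram-Schmidt in matrix form without the reorthogonalization discussed below.

```python
import numpy as np

def modified_arnoldi(A, b, W, m):
    """Sketch of Algorithm 1: build A [W, V_m] = [Q, V_{m+1}] H, eq. (10)."""
    n, k = W.shape
    Q, T = np.linalg.qr(A @ W)             # step (1): QR factorization of A*W
    H = np.zeros((k + m + 1, k + m))
    H[:k, :k] = T                          # leading k-by-k block is upper triangular
    V = np.zeros((n, m + 1))
    v = A @ b
    v -= Q @ (Q.T @ v)                     # step (2): orthogonalize Ab against Q
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):                     # step (3)
        v = A @ V[:, j]
        h1 = Q.T @ v                       # coefficients against Q
        v -= Q @ h1
        H[:k, k + j] = h1
        h2 = V[:, : j + 1].T @ v           # coefficients against v_1, ..., v_j
        v -= V[:, : j + 1] @ h2
        H[k : k + j + 1, k + j] = h2
        H[k + j + 1, k + j] = np.linalg.norm(v)
        V[:, j + 1] = v / H[k + j + 1, k + j]
    return Q, V, H

# Usage: relation (10) should hold to roughly machine precision.
# rng = np.random.default_rng(1); A = rng.standard_normal((60, 60))
# b = rng.standard_normal(60); W, _ = np.linalg.qr(rng.standard_normal((60, 2)))
# Q, V, H = modified_arnoldi(A, b, W, 5)
# print(np.linalg.norm(A @ np.hstack([W, V[:, :5]]) - np.hstack([Q, V]) @ H))
```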
The modified Arnoldi relation (10) is shared by other methods such as GMRES-E [19] and FGMRES [20], which are also augmented-type methods. The modified Arnoldi process is outlined in Algorithm 1. We remark that the modified Arnoldi process in this paper is slightly different from the one used by Morgan [19] to augment the Krylov subspace with some approximate eigenvectors. In his method, the augmenting vectors are put after the Krylov vectors, while in Algorithm 1 the augmenting subspace, which contains a rough approximation of the solution, is included in the projection subspace from the beginning. In general, this can give better results at the start of the regularization method proposed in this paper. We also remark that a loss of orthogonality can occur as the algorithm progresses; see [21]. A remedy is the so-called reorthogonalization, in which the current vector is orthogonalized again against the previously created vectors. One can choose between a selective reorthogonalization and a full reorthogonalization against all vectors in the current augmented subspace.
In this paper we use only the full reorthogonalization, which can be carried out as a classical or modified Gram-Schmidt orthogonalization; see [21] for details.

We now seek to determine an approximate solution $x_{m,\mu}$ of (3) in the augmented Krylov subspace $\mathcal{K}_m(A, Ab) + \mathcal{W}$. After computing the QR factorization $[W, V_m] = UR$, with $U \in \mathbb{R}^{n\times(k+m)}$ having orthonormal columns and $R \in \mathbb{R}^{(k+m)\times(k+m)}$ being an upper triangular matrix, we substitute $x = [W, V_m]y$, $y \in \mathbb{R}^{k+m}$, into (3). This yields the reduced minimization problem

$$ \begin{aligned}
\min_{y\in\mathbb{R}^{k+m}} & \left\{ \|A[W,V_m]y - b\|^2 + \frac{1}{\mu}\|[W,V_m]y\|^2 \right\} \\
&= \min_{y\in\mathbb{R}^{k+m}} \left\{ \|[Q,V_{m+1}]Hy - b\|^2 + \frac{1}{\mu}\|URy\|^2 \right\} \\
&= \min_{y\in\mathbb{R}^{k+m}} \left\{ \|Hy - [Q,V_{m+1}]^Tb\|^2 + \|(I-P)b\|^2 + \frac{1}{\mu}\|Ry\|^2 \right\},
\end{aligned} \tag{12} $$

in which $P = [Q,V_{m+1}][Q,V_{m+1}]^T$ is the orthogonal projector onto $\mathrm{span}([Q,V_{m+1}])$. Obviously, the reduced minimization problem (12) is equivalent to

$$ \min_{y\in\mathbb{R}^{k+m}} \left\{ \|Hy - [Q,V_{m+1}]^Tb\|^2 + \frac{1}{\mu}\|Ry\|^2 \right\} = \min_{y\in\mathbb{R}^{k+m}} \left\| \begin{bmatrix} H \\ \frac{1}{\sqrt{\mu}}R \end{bmatrix} y - \begin{bmatrix} [Q,V_{m+1}]^Tb \\ 0 \end{bmatrix} \right\|^2. \tag{13} $$
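Once $Q$, $V_{m+1}$, and $H$ are available, (13) is a small stacked least-squares problem. The paper solves it with a sequence of Givens rotations (see below); the sketch that follows is our own helper, using `numpy.linalg.lstsq` as a stand-in for that QR solve:

```python
import numpy as np

def solve_reduced(b, W, V, Q, H, mu):
    """Solve the stacked least-squares problem (13) and return x = [W, V_m] y.

    W, V, Q, H come from the modified Arnoldi process (Algorithm 1);
    mu > 0 is the Tikhonov regularization parameter.
    """
    m = V.shape[1] - 1
    WV = np.hstack([W, V[:, :m]])                 # [W, V_m]
    R = np.linalg.qr(WV, mode='r')                # [W, V_m] = U R, R upper triangular
    c = np.hstack([Q, V]).T @ b                   # [Q, V_{m+1}]^T b
    K = np.vstack([H, R / np.sqrt(mu)])           # stacked matrix of (13)
    rhs = np.concatenate([c, np.zeros(R.shape[0])])
    y, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return WV @ y                                 # x_{m,mu} of (16)
```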
The normal equations of the minimization problem (13) are

$$ \left( H^TH + \frac{1}{\mu}R^TR \right) y = H^T[Q,V_{m+1}]^Tb. \tag{14} $$

We denote the solution of the minimization problem (13) by $y_{m,\mu}$. Then, from (14) it follows that

$$ y_{m,\mu} = \left( H^TH + \frac{1}{\mu}R^TR \right)^{-1} H^T[Q,V_{m+1}]^Tb. \tag{15} $$

The corresponding approximate solution of (3) is

$$ x_{m,\mu} = [W, V_m]\,y_{m,\mu}. \tag{16} $$

Since the matrix $H^TH + (1/\mu)R^TR$ has a larger condition number than the stacked matrix $[H^T, (1/\sqrt{\mu})R^T]^T$, we instead compute the solution $y_{m,\mu}$ of (13) from the QR factorization of $[H^T, (1/\sqrt{\mu})R^T]^T$, which can be implemented by a sequence of Givens rotations.

Define

$$ \varphi_m(\mu) = \left\| b - Ax_{m,\mu} \right\|^2. \tag{17} $$

Substituting $x_{m,\mu} = [W,V_m]y_{m,\mu}$ into (17) and using the modified Arnoldi decomposition (10) yield

$$ \varphi_m(\mu) = \|b - A[W,V_m]y_{m,\mu}\|^2 = \|b - [Q,V_{m+1}]Hy_{m,\mu}\|^2 = \|[Q,V_{m+1}]^Tb - Hy_{m,\mu}\|^2 + \|(I-P)b\|^2. \tag{18} $$

Substituting $y_{m,\mu}$ from (15) into (18), we obtain

$$ \varphi_m(\mu) = \left\| \left[ I - H\left( H^TH + \frac{1}{\mu}R^TR \right)^{-1}H^T \right][Q,V_{m+1}]^Tb \right\|^2 + \|(I-P)b\|^2. \tag{19} $$

Note that

$$ I - H\left( H^TH + \frac{1}{\mu}R^TR \right)^{-1}H^T = \left[ \mu HR^{-1}(HR^{-1})^T + I \right]^{-1}. \tag{20} $$

Therefore, the function $\varphi_m(\mu)$ can be expressed as

$$ \varphi_m(\mu) = \left\| \left[ \mu HR^{-1}(HR^{-1})^T + I \right]^{-1}[Q,V_{m+1}]^Tb \right\|^2 + \|(I-P)b\|^2. \tag{21} $$

Concerning the properties of $\varphi_m(\mu)$, we have the following results, which are similar to those of Theorem 2.1 in [11] for the Arnoldi-Tikhonov regularization method.

Theorem 1. The function $\varphi_m(\mu)$ has the representation

$$ \varphi_m(\mu) = b^T[Q,V_{m+1}]\left[ \mu HR^{-1}(HR^{-1})^T + I \right]^{-2}[Q,V_{m+1}]^Tb + \|(I-P)b\|^2. \tag{22} $$

Assume that $Ab \neq 0$ and $(HR^{-1})^T[Q,V_{m+1}]^Tb \neq 0$. Then $\varphi_m$ is strictly decreasing and convex for $\mu \ge 0$ with $\varphi_m(0) = \|b\|^2$. Moreover, the equation

$$ \varphi_m(\mu) = \tau \tag{23} $$

has a unique solution $\mu_{\tau,m}$, such that $0 < \mu_{\tau,m} < \infty$, for any $\tau$ with

$$ \left\| P_{\mathcal{N}((HR^{-1})^T)}[Q,V_{m+1}]^Tb \right\|^2 + \|(I-P)b\|^2 < \tau < \|b\|^2, \tag{24} $$

where $P_{\mathcal{N}((HR^{-1})^T)}$ denotes the orthogonal projector onto $\mathcal{N}((HR^{-1})^T)$.

Proof. The proof follows the same argument as that of Theorem 2.1 in [11] and is therefore omitted.

We also easily obtain the following theorem, whose proof is almost the same as that of Corollary 2.2 in [11].

Theorem 2. Assume that the modified Arnoldi process breaks down at step $l$. Then the sequence $\{\delta_m\}_{m=0}^{l}$ defined by

$$ \delta_0 = \|b\|^2, \qquad \delta_m = \left\| P_{\mathcal{N}((HR^{-1})^T)}[Q,V_{m+1}]^Tb \right\|^2 + \|(I-P)b\|^2, \quad 0 < m < l, \qquad \delta_l = 0, \tag{25} $$

is decreasing.
We apply the discrepancy principle to the discrepancy $b - Ax_{m,\mu}$ to determine an appropriate regularization parameter $\mu$, so that it satisfies (5). In order for the equation

$$ \varphi_m(\mu) = \eta^2\varepsilon^2 \tag{26} $$

to have a solution, it follows from Theorem 1 that the input parameter $m$ of the modified Arnoldi process should be chosen so that $\delta_m < \eta^2\varepsilon^2$. To simplify the computations, we ignore the first term in $\delta_m$. Then the smallest iteration number, denoted by $m_{\min}$, of the modified Arnoldi process is chosen so that

$$ \|(I-P)b\|^2 < \eta^2\varepsilon^2. \tag{27} $$

We can improve the quality of the computed solution by choosing the actual number of iteration steps $m$ somewhat larger than $m_{\min}$. After choosing the number $m$ of modified Arnoldi steps, the regularization parameter $\mu_\varepsilon$ is determined by solving the nonlinear equation $\varphi_m(\mu) = \eta^2\varepsilon^2$. Many numerical methods have been proposed for the solution of a nonlinear equation, including Newton's method [22], the super-Newton method [23], and Halley's method [24]. For the specific nonlinear equation $\varphi_m(\mu) = \eta^2\varepsilon^2$, Reichel and Shyshkov proposed a new zero-finder in the recent paper [25]. In this paper, we make use of Newton's method to obtain the regularization parameter $\mu_\varepsilon$. In Algorithm 2, we outline the augmented Arnoldi-Tikhonov regularization method, which is used to solve large-scale linear ill-posed systems (1).

Input: $A \in \mathbb{R}^{n\times n}$, $b \in \mathbb{R}^n$, $W \in \mathbb{R}^{n\times k}$, $\eta$, $\varepsilon$, $p_0$.
Output: $x_{m,\mu}$, $\mu_\varepsilon$, $m = m_{\min} + p_0$.
(1) Compute the modified Arnoldi decomposition $A[W, V_m] = [Q, V_{m+1}]H$, with $m = m_{\min} + p_0$, where $m_{\min}$ is the smallest number such that $\|(I-P)b\|^2 < \eta^2\varepsilon^2$.
(2) Compute the solution $\mu_\varepsilon$ of the equation $\varphi_m(\mu) = \eta^2\varepsilon^2$ by Newton's method.
(3) Compute the solution $y_{m,\mu}$ of the least-squares problem (13) and obtain the approximate solution $x_{m,\mu} = [W, V_m]y_{m,\mu}$.

Algorithm 2: Augmented Arnoldi-Tikhonov regularization method.

Newton's method requires evaluating $\varphi_m(\mu)$ and its first derivative with respect to $\mu$ at the approximations $\mu_i$ of $\mu_\varepsilon$, $i = 0, 1, 2, \ldots$. Let

$$ z_{m,\mu} = \left[ \mu HR^{-1}(HR^{-1})^T + I \right]^{-1}[Q,V_{m+1}]^Tb; \tag{28} $$

that is, $z_{m,\mu}$ satisfies the system of linear equations

$$ \left[ \mu HR^{-1}(HR^{-1})^T + I \right]z = [Q,V_{m+1}]^Tb. \tag{29} $$

Note that the above linear system is the normal equations of the least-squares problem

$$ \min_{z\in\mathbb{R}^{k+m+1}} \left\| \begin{bmatrix} \mu^{1/2}(HR^{-1})^T \\ I \end{bmatrix}z - \begin{bmatrix} 0 \\ [Q,V_{m+1}]^Tb \end{bmatrix} \right\|. \tag{30} $$

For numerical stability, we compute the vector $z_{m,\mu}$ by solving this least-squares problem. Then $\varphi_m(\mu)$ is evaluated by computing

$$ \varphi_m(\mu) = z_{m,\mu}^Tz_{m,\mu} + \|(I-P)b\|^2. \tag{31} $$

It is easy to show that the first derivative of $\varphi_m(\mu)$ can be written as

$$ \varphi_m'(\mu) = -2z_{m,\mu}^Tw_{m,\mu}, \tag{32} $$

where

$$ w_{m,\mu} = \left[ \mu HR^{-1}(HR^{-1})^T + I \right]^{-1}HR^{-1}(HR^{-1})^Tz_{m,\mu}. \tag{33} $$

Hence, we may compute $w_{m,\mu}$ by solving a least-squares problem analogous to (30), with the vector $[Q,V_{m+1}]^Tb$ replaced by $HR^{-1}(HR^{-1})^Tz_{m,\mu}$. The Newton iteration for solving the nonlinear equation $\varphi_m(\mu) = \eta^2\varepsilon^2$ is presented in Algorithm 3.

(1) Set $\mu_0 = 0$ and $i = 0$.
(2) Solve the least-squares problem
    $\min_{z\in\mathbb{R}^{k+m+1}} \left\| \begin{bmatrix} \mu_i^{1/2}(HR^{-1})^T \\ I \end{bmatrix}z - \begin{bmatrix} 0 \\ [Q,V_{m+1}]^Tb \end{bmatrix} \right\|$
    to obtain $z_{\mu_i,m}$.
(3) Compute $\varphi_m(\mu_i) = z_{\mu_i,m}^Tz_{\mu_i,m} + \|(I-P)b\|^2$.
(4) Solve the least-squares problem
    $\min_{z\in\mathbb{R}^{k+m+1}} \left\| \begin{bmatrix} \mu_i^{1/2}(HR^{-1})^T \\ I \end{bmatrix}z - \begin{bmatrix} 0 \\ HR^{-1}(HR^{-1})^Tz_{\mu_i,m} \end{bmatrix} \right\|$
    to obtain $w_{\mu_i,m}$.
(5) Compute $\varphi_m'(\mu_i) = -2z_{\mu_i,m}^Tw_{\mu_i,m}$.
(6) Compute the new approximation
    $\mu_{i+1} = \mu_i - \dfrac{\varphi_m(\mu_i) - \eta^2\varepsilon^2}{\varphi_m'(\mu_i)}$.
(7) If $|\mu_{i+1} - \mu_i| < 10^{-6}$, stop; else set $\mu_i := \mu_{i+1}$, $i := i + 1$, and go to (2).

Algorithm 3: Newton's method for $\varphi_m(\mu) = \eta^2\varepsilon^2$.

Let $x_m$ be the $m$th iterate produced by the augmented RRGMRES method in [16]. Then $x_m$ satisfies

$$ \begin{aligned}
\|Ax_m - b\|^2 &= \min_{x\in\mathcal{K}_m(A,Ab)+\mathcal{W}} \|Ax - b\|^2 = \min_{y\in\mathbb{R}^{k+m}} \|A[W,V_m]y - b\|^2 \\
&= \min_{y\in\mathbb{R}^{k+m}} \|[Q,V_{m+1}]Hy - b\|^2 = \min_{y\in\mathbb{R}^{k+m}} \|Hy - [Q,V_{m+1}]^Tb\|^2 + \|(I-P)b\|^2.
\end{aligned} \tag{34} $$

As $\mu \to \infty$, the reduced minimization problem (12) becomes the above minimization problem, which shows that

$$ x_m = \lim_{\mu\to\infty} x_{m,\mu}. \tag{35} $$

Therefore, we obtain the following result.

Theorem 3. Let $x_m$ be the $m$th iterate determined by augmented RRGMRES applied to (1) with initial iterate $x_0 = 0$. Then

$$ \|Ax_m - b\|^2 = \varphi_m(\infty). \tag{36} $$
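For completeness, below is a compact NumPy transcription of Algorithm 3 under the formulas (28)-(33); it is a sketch with our helper names, `numpy.linalg.lstsq` stands in for the Givens-based least-squares solves, the tolerance $10^{-6}$ follows step (7), and `max_iter` is our own safeguard.

```python
import numpy as np

def newton_mu(H, R, c, Pb_norm2, eta, eps, tol=1e-6, max_iter=50):
    """Sketch of Algorithm 3: solve phi_m(mu) = eta^2 * eps^2 by Newton's method.

    H, R: matrices from (10) and [W, V_m] = U R; c = [Q, V_{m+1}]^T b;
    Pb_norm2 = ||(I - P) b||^2. Assumes eta^2*eps^2 < phi_m(0) = ||b||^2.
    """
    G = np.linalg.solve(R.T, H.T).T         # G = H R^{-1}
    p = H.shape[0]                          # p = k + m + 1
    target = (eta * eps) ** 2

    def solve_ls(mu, rhs):
        # Least-squares formulation (30): min || [sqrt(mu) G^T; I] z - [0; rhs] ||.
        K = np.vstack([np.sqrt(mu) * G.T, np.eye(p)])
        f = np.concatenate([np.zeros(G.shape[1]), rhs])
        z, *_ = np.linalg.lstsq(K, f, rcond=None)
        return z

    mu = 0.0                                # step (1)
    for _ in range(max_iter):
        z = solve_ls(mu, c)                 # steps (2): z via (28)-(30)
        phi = z @ z + Pb_norm2              # step (3): eq. (31)
        w = solve_ls(mu, G @ (G.T @ z))     # step (4): w via (33)
        dphi = -2.0 * (z @ w)               # step (5): eq. (32)
        mu_new = mu - (phi - target) / dphi # step (6): Newton update
        if abs(mu_new - mu) < tol:          # step (7)
            return mu_new
        mu = mu_new
    return mu
```

Since $\varphi_m$ is strictly decreasing and convex (Theorem 1), the Newton iterates starting from $\mu_0 = 0$ increase monotonically toward the root.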
3. Numerical Experiments

In this section, we present some numerical examples to illustrate the performance of the augmented Arnoldi-Tikhonov regularization method for the solution of large-scale linear ill-posed systems.
We compare the augmented Arnoldi-Tikhonov regularization method, implemented by Algorithm 2 and denoted by AATRM, with the Arnoldi-Tikhonov regularization method proposed in [11], denoted by ATRM. In all the following tables, MV denotes the number of matrix-vector products and RERR denotes the relative error $\|x_{m,\mu} - \hat{x}\|/\|\hat{x}\|$, where $\hat{x}$ is the exact solution of the error-free linear system (2). Note that the number of matrix-vector products in Algorithm 2 is $k + m_{\min} + p_0$. In all the examples, $\eta = 1.01$. All the numerical experiments are performed in Matlab on a PC in the usual double precision, where the floating point relative accuracy is $2.22 \cdot 10^{-16}$.

Example 4. The first example considered is a Fredholm integral equation of the first kind, which takes the generic form

$$ \int_0^1 k(s,t)\,x(t)\,dt = f(s), \quad 0 \le s \le 1. \tag{37} $$

Here, both the kernel $k(s,t)$ and the right-hand side $f(s)$ are known functions, while $x(t)$ is the unknown function. For this test, the kernel and the right-hand side are chosen as

$$ k(s,t) = \begin{cases} s(t-1), & s < t, \\ t(s-1), & s \ge t, \end{cases} \qquad f(s) = \frac{s^3 - s}{6}. \tag{38} $$

With this choice, the exact solution of (37) is $x(t) = t$. We use the Matlab program deriv2 from the regularization package [26] to discretize the integral equation (37) and to generate a system of linear equations (2) with coefficient matrix $A \in \mathbb{R}^{200\times200}$ and solution $\hat{x} \in \mathbb{R}^{200}$. The condition number of $A$ is $4.9 \cdot 10^4$. The right-hand side $b$ is given by $b = A\hat{x} + e$, where the elements of the error vector $e$ are generated from a normal distribution with mean zero, scaled so that $\|e\| = 10^{-2}\|A\hat{x}\|$. The augmentation subspace is one-dimensional and is spanned by the vector $w = [1, 2, \ldots, 200]^T$. Numerical results are reported in Table 1 for several choices of the number $p_0$ of additional iterations. From Table 1 we see that for $0 \le p_0 \le 3$, AATRM has smaller relative errors than ATRM, and the smallest relative error is attained by AATRM with $p_0 = 1$. The exact solution $\hat{x}$ and the approximate solutions generated by AATRM and ATRM with $p_0 = 1$ are depicted in Figure 1.

Table 1: Computational results of Example 4.

    p_0   Method   MV   RERR
    0     AATRM    2    1.95 · 10^-2
    1     AATRM    3    1.17 · 10^-2
    2     AATRM    4    6.25 · 10^-2
    3     AATRM    5    1.19 · 10^-1
    0     ATRM     4    3.14 · 10^-1
    1     ATRM     5    2.94 · 10^-1
    2     ATRM     6    2.87 · 10^-1
    3     ATRM     7    2.89 · 10^-1

[Figure 1: Example 4. The exact solution $\hat{x}$ and the approximate solutions computed by AATRM and ATRM with $p_0 = 1$.]
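For readers without the regularization package, a midpoint-rule collocation of (37)-(38) produces a comparable test problem. This construction is ours, not the deriv2 code, so the condition number and the computed errors will differ somewhat from Table 1:

```python
import numpy as np

n = 200
t = (np.arange(n) + 0.5) / n                 # midpoints of n subintervals
h = 1.0 / n
S, T = np.meshgrid(t, t, indexing='ij')
K = np.where(S < T, S * (T - 1.0), T * (S - 1.0))   # kernel of (38)
A = h * K                                    # midpoint-rule discretization of (37)
x_hat = t.copy()                             # exact solution x(t) = t
b_exact = A @ x_hat
rng = np.random.default_rng(0)
e = rng.standard_normal(n)
e *= 1e-2 * np.linalg.norm(b_exact) / np.linalg.norm(e)  # ||e|| = 1e-2 ||A x_hat||
b = b_exact + e
w = np.arange(1, n + 1, dtype=float)         # augmentation vector [1, 2, ..., 200]^T
W = (w / np.linalg.norm(w)).reshape(-1, 1)   # orthonormal basis of span{w}
```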
Example 5. This example, taken again from the regularization package [26], is the inversion of the Laplace transform

$$ \int_0^\infty e^{-st}\,x(s)\,ds = f(t), \quad t \ge 0, \tag{39} $$

where the right-hand side $f(t)$ and the exact solution $x(t)$ are given by

$$ f(t) = \frac{1}{t + 1/2}, \qquad x(t) = e^{-t/2}. \tag{40} $$

The system of linear equations (2) with coefficient matrix $A \in \mathbb{R}^{250\times250}$ and solution $\hat{x} \in \mathbb{R}^{250}$ is obtained by using the Matlab program ilaplace from the regularization package [26]. In the same way as in Example 4, we construct the right-hand side $b$ of the system of linear equations (1). The augmentation subspace is also one-dimensional and is spanned by the vector $w = [1, 1/2, \ldots, 1/250]^T$. Numerical results are reported in Table 2 for $p_0 = 1, 2, 3$. Table 2 shows that AATRM with $p_0 = 2$ has the smallest relative error and that AATRM works slightly better than ATRM for this problem.

Table 2: Computational results of Example 5.

    p_0   Method   MV   RERR
    1     AATRM    5    3.3 · 10^-1
    2     AATRM    6    9.1 · 10^-2
    3     AATRM    7    2.7 · 10^-1
    1     ATRM     6    5.7 · 10^-1
    2     ATRM     7    3.1 · 10^-1
    3     ATRM     8    4.5 · 10^-1

Example 6. This example is the same as Example 5 except that the right-hand side $f(t)$ and the exact solution $x(t)$ are given by

$$ f(t) = \frac{2}{(t + 1/2)^3}, \qquad x(t) = t^2 e^{-t/2}. \tag{41} $$

By using the same Matlab program as in Example 5, we generate a system with $A \in \mathbb{R}^{1000\times1000}$. The augmentation subspace is one-dimensional and is spanned by the vector $w = [1, 1, \ldots, 1]^T$. In Table 3, we report numerical results for $p_0 = 1, 2, 3$. We observe from Table 3 that for this example AATRM has almost the same relative errors as ATRM, and the approximate solution is slightly improved by the augmentation space spanned by $w = [1, 1, \ldots, 1]^T$.

Table 3: Computational results of Example 6.

    p_0   Method   MV   RERR
    1     AATRM    10   2.2 · 10^-1
    2     AATRM    11   1.1 · 10^-1
    3     AATRM    12   2.1 · 10^-1
    1     ATRM     10   4.9 · 10^-1
    2     ATRM     11   1.2 · 10^-1
    3     ATRM     12   2.3 · 10^-1

4. Conclusions

In this paper we have proposed an iterative method for solving large-scale linear ill-posed systems. The method is based on the Tikhonov regularization technique and the augmented Arnoldi technique. The augmentation subspace is a user-supplied low-dimensional subspace which should contain a rough approximation of the desired solution. Numerical experiments show that the augmented method is effective for some practical problems.

Acknowledgments

Yiqin Lin is supported by the National Natural Science Foundation of China under Grant 10801048, the Natural Science Foundation of Hunan Province under Grant 11JJ4009, the Scientific Research Foundation of Education Bureau of Hunan Province for Outstanding Young Scholars in University under Grant 10B038, the Science and Technology Planning Project of Hunan Province under Grant 2010JT4042, and the Chinese Postdoctoral Science Foundation under Grant 2012M511386. Liang Bao is supported by the National Natural Science Foundation of China under Grants 10926150 and 11101149 and the Fundamental Research Funds for the Central Universities.
References

[1] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, vol. 375 of Mathematics and Its Applications, Kluwer Academic, Dordrecht, The Netherlands, 1996.
[2] A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems, John Wiley & Sons, New York, NY, USA, 1977.
[3] A. N. Tikhonov, A. V. Goncharsky, V. V. Stepanov, and A. G. Yagola, Numerical Methods for the Solution of Ill-Posed Problems, vol. 328 of Mathematics and Its Applications, Kluwer Academic, Dordrecht, The Netherlands, 1995.
[4] P. C. Hansen, Rank-Deficient and Discrete Ill-Posed Problems, SIAM Monographs on Mathematical Modeling and Computation, SIAM, Philadelphia, Pa, USA, 1998.
[5] G. H. Golub and C. F. van Loan, Matrix Computations, Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
[6] Å. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, Pa, USA, 1996.
[7] Å. Björck, "A bidiagonalization algorithm for solving large and sparse ill-posed systems of linear equations," BIT Numerical Mathematics, vol. 28, no. 3, pp. 659-670, 1988.
[8] D. Calvetti, S. Morigi, L. Reichel, and F. Sgallari, "Tikhonov regularization and the L-curve for large discrete ill-posed problems," Journal of Computational and Applied Mathematics, vol. 123, no. 1-2, pp. 423-446, 2000.
[9] D. Calvetti and L. Reichel, "Tikhonov regularization of large linear problems," BIT Numerical Mathematics, vol. 43, no. 2, pp. 263-283, 2003.
[10] D. P. O'Leary and J. A. Simmons, "A bidiagonalization-regularization procedure for large scale discretizations of ill-posed problems," SIAM Journal on Scientific and Statistical Computing, vol. 2, no. 4, pp. 474-489, 1981.
[11] B. Lewis and L. Reichel, "Arnoldi-Tikhonov regularization methods," Journal of Computational and Applied Mathematics, vol. 226, no. 1, pp. 92-102, 2009.
[12] T. F. Chan and K. R. Jackson, "Nonlinearly preconditioned Krylov subspace methods for discrete Newton algorithms," SIAM Journal on Scientific and Statistical Computing, vol. 5, no. 3, pp. 533-542, 1984.
[13] L. Reichel, F. Sgallari, and Q. Ye, "Tikhonov regularization based on generalized Krylov subspace methods," Applied Numerical Mathematics, vol. 62, no. 9, pp. 1215-1228, 2012.
[14] D. Calvetti, B. Lewis, and L. Reichel, "On the choice of subspace for iterative methods for linear discrete ill-posed problems," International Journal of Applied Mathematics and Computer Science, vol. 11, no. 5, pp. 1069-1092, 2001.
[15] L. Reichel and Q. Ye, "Breakdown-free GMRES for singular systems," SIAM Journal on Matrix Analysis and Applications, vol. 26, no. 4, pp. 1001-1021, 2005.
[16] J. Baglama and L. Reichel, "Augmented GMRES-type methods," Numerical Linear Algebra with Applications, vol. 14, no. 4, pp. 337-350, 2007.
[17] K. Morikuni, L. Reichel, and K. Hayami, "FGMRES for linear discrete ill-posed problems," Tech. Rep., NII, 2012.
[18] D. Calvetti, B. Lewis, and L. Reichel, "On the regularizing properties of the GMRES method," Numerische Mathematik, vol. 91, no. 4, pp. 605-625, 2002.
[19] R. B. Morgan, "A restarted GMRES method augmented with eigenvectors," SIAM Journal on Matrix Analysis and Applications, vol. 16, no. 4, pp. 1154-1171, 1995.
[20] Y. Saad, "A flexible inner-outer preconditioned GMRES algorithm," SIAM Journal on Scientific Computing, vol. 14, no. 2, pp. 461-469, 1993.
[21] B. N. Parlett, The Symmetric Eigenvalue Problem, vol. 20 of Classics in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1998.
[22] C. T. Kelley, Solving Nonlinear Equations with Newton's Method, vol. 1 of Fundamentals of Algorithms, SIAM, Philadelphia, Pa, USA, 2003.
[23] J. A. Ezquerro and M. A. Hernández, "On a convex acceleration of Newton's method," Journal of Optimization Theory and Applications, vol. 100, no. 2, pp. 311-326, 1999.
[24] W. Gander, "On Halley's iteration method," The American Mathematical Monthly, vol. 92, no. 2, pp. 131-134, 1985.
[25] L. Reichel and A. Shyshkov, "A new zero-finder for Tikhonov regularization," BIT Numerical Mathematics, vol. 48, no. 3, pp. 627-643, 2008.
[26] P. C. Hansen, "Regularization tools: a Matlab package for analysis and solution of discrete ill-posed problems," Numerical Algorithms, vol. 6, no. 1-2, pp. 1-35, 1994.