Journal of Interpolation and Approximation in Scientific Computing, Volume 2016, Issue 2 (2016) 66-76
Available online at www.ispacs.com/jiasc
Article ID jiasc-00094, 11 Pages, doi:10.5899/2016/jiasc-00094
Research Article

A new approach for computing the solution of Sylvester matrix equation

Amir Sadeghi*

Department of Mathematics, Robat Karim Branch, Islamic Azad University, Tehran, Iran.

Copyright 2016 © Amir Sadeghi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The aim of this article is to modify the well-known homotopy perturbation method (HPM) to yield the solution of the Sylvester matrix equation. Moreover, conditions are deduced under which the homotopy series converges. Numerical implementations are conducted to measure the accuracy and speed of the homotopy series. Finally, some applications of this linear matrix equation are given.

Keywords: Sylvester matrix equation, Homotopy perturbation method, Convergence, Block diagonalization, Matrix sign function.

1 Introduction

The linear matrix equation
\[
AX + XB = C, \tag{1.1}
\]
where $A = [a_{ij}] \in \mathbb{R}^{n\times n}$, $B = [b_{ij}] \in \mathbb{R}^{m\times m}$, and $C = [c_{ij}] \in \mathbb{R}^{n\times m}$, is called the Sylvester matrix equation. This matrix equation plays a significant role in several applications in science and engineering, such as the evaluation of implicit numerical schemes for partial differential equations, decoupling techniques for ordinary differential equations, image restoration, signal processing, filtering, model reduction, block-diagonalization of matrices, computation of matrix functions, and control theory [2, 3, 5, 25]. There are various approaches, both direct and iterative, for solving this equation. The Bartels-Stewart method [1] and the Hessenberg-Schur method [15] are based on transforming the coefficient matrices to Schur and Hessenberg form, respectively, and then solving the corresponding linear system of equations by a backward substitution process. These approaches are categorized as direct methods. Some iterative methods for solving the Sylvester equation have also been proposed that are more appropriate for large sparse systems [20, 31]. Zhang et al. [34] proposed a numerical procedure for solving matrix equations of the form $(A \otimes B)X = F$ by employing the well-known Gaussian elimination for the linear system $Ax = b$; they showed that the modified algorithm has high computational efficiency. Ding et al. [9] presented iterative solutions of the matrix equation $AXB = F$ and the generalized Sylvester matrix equation $AXB + CXD = F$ by improving the well-known Jacobi and Gauss-Seidel iterations for $Ax = b$, and showed that the iterative solution always converges to the exact solution for any initial values. Their strategy is to regard the unknown matrix $X$ to be solved as the parameters of a system to be identified, and to attain the recursive solutions by utilizing the hierarchical identification principle.

* Corresponding author. Email address: [email protected]


Moreover, Ding and Zhang [11] solved the coupled matrix equations $A_iXB_i = F_i$, for $i = 1, 2, \ldots, p$, by constructing an objective function and using a gradient search; they stated that the gradient solution is convergent for any initial values. J. Ding et al. [10] introduced two iterative algorithms for solving the linear matrix equations $A_1XB_1 = F_1$ and $A_2XB_2 = F_2$, and proved that the iterative solutions obtained by the proposed algorithms converge to their true values for any initial value. Toutounian et al. [32] proposed an iterative method for solving general coupled matrix equations (including generalized coupled Lyapunov and Sylvester matrix equations) by extending the idea of the LSMR method. They claimed that this iterative method yields a solution group within finitely many iteration steps in the absence of round-off errors, and that the minimum Frobenius norm solution or the minimum Frobenius norm least-squares solution group can be derived when an appropriate initial iterative matrix group is selected. Furthermore, the optimal approximation solution group to a given matrix group in the Frobenius norm can be obtained by finding the least Frobenius norm solution group of new general coupled matrix equations. For more information about sophisticated techniques for solving matrix equations, one can refer to [6, 7, 8, 12, 14, 27, 28, 29].

It is well known that the basic idea of using homotopy to propose a general method for nonlinear problems was introduced by Liao [22] in 1992. Following him, an analytic approach based on the same theory, the so-called homotopy perturbation method (HPM), was provided by He [16] in 1998, along with recent developments [16, 17]. As can be seen in several publications, in most cases HPM gives very rapid convergence of the solution series, and usually only a few iterations lead to very accurate solutions. In the setting of linear algebra, Keramati [21] first applied HPM to solve the linear system of equations $Ax = b$. He showed that the splitting matrix of this method is only the identity matrix. However, the method does not converge for some systems when the spectral radius is greater than one. In order to resolve this issue, Liu [23] added an auxiliary parameter and an auxiliary matrix to the homotopy method, adjusting the Richardson method, the Jacobi method, and the Gauss-Seidel method to choose the splitting matrix. Moreover, Edalatpanah and Rashidi [13] focused on a modification of HPM for solving systems of linear equations by choosing an auxiliary matrix that increases the rate of convergence. In a recent paper, Saeidian et al. [30] proposed an iterative scheme for solving linear systems of equations based on the concept of homotopy, and showed that their modified method converges in more cases. To the best of our knowledge, however, HPM has not been extended to solve Sylvester matrix equations. In this article, we extend the application of HPM to find an appropriate approximation to the solution of the Sylvester matrix equation. Furthermore, the convergence conditions of the method are analyzed in detail. Numerical examples and applications are provided to illustrate the properties of the modified method.

2 The solution of Sylvester matrix equation

In this section, we solve the Sylvester matrix equation by a new approach. First, we investigate the existence and uniqueness of the solution, and then we modify HPM for equation (1.1). Finally, a convergence analysis is given.
2.1 Existence and uniqueness of the solution

The first important question is: when does a solution of equation (1.1) exist? Rewriting the matrices in (1.1) in terms of their columns ($x_1, \ldots, x_m$ for $X$ and $c_1, \ldots, c_m$ for $C$), it is easily seen that
\[
\begin{pmatrix}
A + b_{11}I & b_{21}I & \cdots & b_{m1}I \\
b_{12}I & A + b_{22}I & \cdots & b_{m2}I \\
\vdots & \vdots & \ddots & \vdots \\
b_{1m}I & b_{2m}I & \cdots & A + b_{mm}I
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix}
=
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_m \end{pmatrix}. \tag{2.2}
\]
Applying the Kronecker product and the vec operator, the linear system (2.2) can be written as
\[
\left[ (I_m \otimes A) + (B^t \otimes I_n) \right] \mathrm{vec}(X) = \mathrm{vec}(C), \tag{2.3}
\]
which, in Kronecker sum notation, reads $(A \oplus B^t)\,\mathrm{vec}(X) = \mathrm{vec}(C)$. Therefore, the solution of system (2.2) exists if and only if the coefficient matrix $(A \oplus B^t)$ is nonsingular; in other words, if and only if it has no zero eigenvalues.


But we know that
\[
(A \oplus B)\underbrace{(x \otimes y)}_{\text{eigenvector}} = \underbrace{(\lambda + \mu)}_{\text{eigenvalue}} \underbrace{(x \otimes y)}_{\text{eigenvector}}.
\]
Hence, the eigenvalues of $(A \oplus B^t)$ are $(\lambda_i + \mu_j)$, where $\lambda_i \in \sigma(A)$ and $\mu_j \in \sigma(B)$ ($\sigma(W)$ denotes the spectrum of a matrix $W$). In summary, we have the following theorem.

Theorem 2.1. [24] The Sylvester matrix equation (1.1) has a unique solution if and only if $A$ and $-B$ have no eigenvalue in common; in other words, $\sigma(A) \cap \sigma(-B) = \emptyset$.

Theorem 2.2. [24] Suppose $A$ and $B$ are asymptotically stable (a matrix is asymptotically stable if all its eigenvalues have real parts in the open left half-plane). Then the unique solution of the Sylvester equation (1.1) is given by
\[
X = -\int_0^{+\infty} e^{\tau A}\, C\, e^{\tau B}\, d\tau. \tag{2.4}
\]
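Both the solvability condition of Theorem 2.1 and the Kronecker formulation (2.3) are easy to exercise numerically. The following MATLAB sketch (all variable names are ours) checks the eigenvalue condition on a small random instance, solves the vectorized system (2.3) directly, and compares against MATLAB's built-in Bartels-Stewart solver `sylvester`, available from release R2014a; it is an illustration, not part of the proposed method.

```matlab
% Sketch: check Theorem 2.1 and the Kronecker form (2.3) on a small
% random instance, then compare with MATLAB's built-in solver.
n = 5; m = 4;
A = randn(n); B = randn(m); C = randn(n, m);

% Unique solution iff no sum lambda_i(A) + mu_j(B) vanishes (Theorem 2.1).
gap = min(min(abs(bsxfun(@plus, eig(A), eig(B).'))));

if gap > 0
    % Kronecker formulation (2.3): (I_m (x) A + B^t (x) I_n) vec(X) = vec(C).
    K  = kron(eye(m), A) + kron(B.', eye(n));
    X1 = reshape(K \ C(:), n, m);     % C(:) is vec(C)
    X2 = sylvester(A, B, C);          % Bartels-Stewart solve of A*X + X*B = C
    norm(X1 - X2, inf)                % the two solutions should agree
end
```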

In the next subsection, the improvement of HPM is given.

2.2 Modification of homotopy perturbation method

In this subsection, we are ready to modify HPM in order to yield the solution of the Sylvester matrix equation. In order to match the dimensions of the matrices in the homotopy function, $m = n$ is assumed. The general homotopy approach to obtaining the solution of (1.1) can be described as follows. Let
\[
L(U) = AU + UB - C, \tag{2.5}
\]
\[
F(U) = U - W_0. \tag{2.6}
\]

Now, the homotopy $H(U, p)$ is defined by $H(U, 0) = F(U)$ and $H(U, 1) = L(U)$, and a convex homotopy can be expressed as
\[
H(U, p) = (1 - p)F(U) + pL(U) = 0, \tag{2.7}
\]
where $F$ is an operator with known solution $W_0$. In this case, HPM utilizes the homotopy parameter $p$ as an expanding parameter to obtain
\[
U = \sum_{i=0}^{\infty} p^i U_i = U_0 + pU_1 + p^2 U_2 + \cdots, \tag{2.8}
\]
and it gives an approximation to the solution of (1.1) as
\[
V = \lim_{p \to 1} \left( \sum_{i=0}^{\infty} p^i U_i \right) = \sum_{i=0}^{\infty} U_i = U_0 + U_1 + U_2 + \cdots. \tag{2.9}
\]

By substituting (2.5) and (2.6) into (2.7) and equating the terms with identical powers of $p$, we obtain
\[
\begin{cases}
p^0:\ U_0 - W_0 = 0, \\
p^1:\ U_1 + W_0 - U_0 + AU_0 + U_0 B - C = 0, \\
p^2:\ U_2 - U_1 + AU_1 + U_1 B = 0, \\
\quad\vdots \\
p^i:\ U_{i+1} - U_i + AU_i + U_i B = 0, \qquad i = 1, 2, \ldots.
\end{cases} \tag{2.10}
\]

In other words, we have
\[
\begin{cases}
p^0:\ U_0 = W_0, \\
p^1:\ U_1 = -W_0 + U_0 - AU_0 - U_0 B + C, \\
p^2:\ U_2 = U_1 - AU_1 - U_1 B, \\
\quad\vdots \\
p^i:\ U_{i+1} = U_i - AU_i - U_i B, \qquad i = 1, 2, \ldots,
\end{cases} \tag{2.11}
\]
or
\[
\begin{cases}
p^0:\ U_0 = W_0, \\
p^1:\ U_1 = -W_0 + U_0 - AU_0 - U_0 B + C, \\
p^2:\ U_2 = U_1 - (AU_1 + U_1 B), \\
\quad\vdots \\
p^i:\ U_{i+1} = U_i - (AU_i + U_i B), \qquad i = 1, 2, \ldots.
\end{cases} \tag{2.12}
\]

Taking $U_0 = W_0 = 0$ and applying the vec operator and the Kronecker product, we obtain
\[
\begin{cases}
\mathrm{vec}(U_1) = \mathrm{vec}(C), \\
\mathrm{vec}(U_2) = -\left((A \oplus B^t) - I_{n^2}\right) \mathrm{vec}(U_1), \\
\quad\vdots \\
\mathrm{vec}(U_{i+1}) = (-1)^i \left((A \oplus B^t) - I_{n^2}\right)^i \mathrm{vec}(U_1), \qquad i = 1, 2, \ldots.
\end{cases} \tag{2.13}
\]

Consequently, by the linearity of the vec operator, the solution takes the form
\[
\mathrm{vec}(V) = \mathrm{vec}\!\left( \sum_{i=0}^{\infty} U_i \right) = \sum_{i=0}^{\infty} \mathrm{vec}(U_i),
\]
that is,
\[
\mathrm{vec}(V) \approx \sum_{k=0}^{\infty} (-1)^k \left( (A \oplus B^t) - I_{n^2} \right)^k \mathrm{vec}(C). \tag{2.14}
\]
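In practice the partial sums of (2.14) need not be formed in the $n^2$-dimensional Kronecker space: by (2.12), each term obeys $U_{i+1} = U_i - (AU_i + U_iB)$, so the series can be accumulated with matrix products only. The following MATLAB sketch illustrates this; the truncation length and the test matrices (chosen so that the convergence condition of Theorem 2.3 below holds) are our choices.

```matlab
% Sketch: truncated homotopy series (2.14), accumulated via the matrix
% recursion U_{i+1} = U_i - (A*U_i + U_i*B) of (2.12), with U_1 = C.
% Test matrices chosen so that rho((A (+) B^t) - I) < 1 (Theorem 2.3).
n = 20;
A = 0.5*eye(n) + 0.02*randn(n);
B = 0.5*eye(n) + 0.02*randn(n);
C = randn(n);

nterms = 25;                       % truncation length (our choice)
U = C;                             % U_1 = C, taking W_0 = 0
V = U;                             % partial sum S_1
for i = 1:nterms-1
    U = U - (A*U + U*B);           % next term of the series
    V = V + U;                     % accumulate the partial sum
end
norm(A*V + V*B - C, inf) / norm(C, inf)   % relative residual of V
```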

2.3 Convergence analysis

In the following theorem, we verify that the sequence $\mathrm{vec}(V)$ is convergent. Before this, we recall that $\rho(W)$ denotes the spectral radius, defined by $\rho(W) = \max_{\lambda_i \in \sigma(W)} |\lambda_i|$.

Theorem 2.3. The sequence $S_m = \sum_{k=0}^{m} (-1)^k \left( (A \oplus B^t) - I_{n^2} \right)^k \mathrm{vec}(C)$ is a Cauchy sequence if
\[
\rho\!\left( (A \oplus B^t) - I_{n^2} \right) < 1. \tag{2.15}
\]

Proof. It is obvious that
\[
S_{m+q} - S_m = \sum_{k=1}^{q} (-1)^{m+k} \left( (A \oplus B^t) - I_{n^2} \right)^{m+k} \mathrm{vec}(C).
\]
Taking norms and setting $\xi = \|(A \oplus B^t) - I_{n^2}\|$, we can write
\[
\|S_{m+q} - S_m\| \leq \|\mathrm{vec}(C)\| \sum_{k=1}^{q} \|(A \oplus B^t) - I_{n^2}\|^{m+k}
= \|\mathrm{vec}(C)\|\, \xi^{m+1} \left( \frac{\xi^q - 1}{\xi - 1} \right).
\]


Now, if $\xi < 1$, then $\lim_{m \to \infty} \xi^m = 0$. Consequently,
\[
\lim_{m \to \infty} \|S_{m+q} - S_m\| = 0.
\]
Thus, $S_m$ is a Cauchy sequence.

Corollary 2.1. The sequence $S_m = \sum_{k=0}^{m} (-1)^k \left( (A \oplus B^t) - I_{n^2} \right)^k \mathrm{vec}(C)$ is convergent if
\[
\|(A \oplus B^t) - I_{n^2}\| < 1. \tag{2.16}
\]

Proof. The relation $\rho\!\left( (A \oplus B^t) - I_{n^2} \right) \leq \|(A \oplus B^t) - I_{n^2}\|$ completes the proof.

Now, consider the matrix $D = \mathrm{diag}\!\left( \frac{1}{\lambda_{11}}, \ldots, \frac{1}{\lambda_{n^2,n^2}} \right)$, where $[\lambda_{ij}]$ are the elements of the matrix $(A \oplus B^t)$. If the matrix $(A \oplus B^t)$ is strictly row diagonally dominant (SRDD), then we have
\[
\rho\!\left( D(A \oplus B^t) - I_{n^2} \right) < 1.
\]
Indeed, if we set $M = D(A \oplus B^t) - I_{n^2}$, then it can easily be shown that
\[
[m_{ij}] =
\begin{cases}
0, & i = j, \\[4pt]
\dfrac{\lambda_{ij}}{\lambda_{ii}}, & i \neq j.
\end{cases}
\]
Since $(A \oplus B^t)$ is SRDD, it is clear that $|\lambda_{ii}| > \sum_{j=1, j\neq i}^{n} |\lambda_{ij}|$. Hence,
\[
\|M\|_\infty = \|D(A \oplus B^t) - I_{n^2}\|_\infty = \max_i \sum_{j=1,\, j \neq i}^{n} \left| \frac{\lambda_{ij}}{\lambda_{ii}} \right| < 1.
\]
Therefore, it is concluded that

\[
\rho\!\left( D(A \oplus B^t) - I_{n^2} \right) \leq \|D(A \oplus B^t) - I_{n^2}\|_\infty < 1.
\]

Remark 2.1. An important question may arise: is the matrix $(A \oplus B^t)$ SRDD whenever $A$ and $B^t$ are SRDD matrices? To answer this question, consider the $i$th block row of the system (2.2), namely $\begin{bmatrix} b_{1i}I & \cdots & A + b_{ii}I & \cdots & b_{mi}I \end{bmatrix}$, or in open notation
\[
\begin{pmatrix}
b_{1i} & \cdots & a_{11} + b_{ii} & \cdots & a_{1m} & \cdots & b_{mi} \\
\vdots & & \vdots & \ddots & \vdots & & \vdots \\
b_{1i} & \cdots & a_{m1} & \cdots & a_{mm} + b_{ii} & \cdots & b_{mi}
\end{pmatrix}.
\]
Thus, it is easy to show that if the diagonal elements of $A$ and $B^t$ are positive, then the matrix $(A \oplus B^t)$ will be SRDD.

If $\rho\!\left( (A \oplus B^t) - I_{n^2} \right) > 1$, by pre-multiplying both sides of equation (1.1) in vectorized form by the matrix $D \in \mathbb{R}^{n^2 \times n^2}$ and using the convex homotopy function once again, we can easily verify that
\[
\mathrm{vec}(U) = \sum_{k=0}^{\infty} (-1)^k \left( D(A \oplus B^t) - I_{n^2} \right)^k D\,\mathrm{vec}(C). \tag{2.17}
\]
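A minimal MATLAB sketch of the scaled series (2.17) follows, continuing the variables $A$, $B$, $C$, $n$ from the sketch in Section 2.2. It assumes the diagonal of $(A \oplus B^t)$ is nonzero (as the definition of $D$ requires), forms the $n^2 \times n^2$ Kronecker matrix explicitly, and is therefore intended for moderate $n$ only; the truncation length is our choice.

```matlab
% Sketch: diagonally scaled series (2.17), D = diag(1./diag(A (+) B^t)).
% Assumes m = n as in Section 2.2 and nonzero diagonal of A (+) B^t.
K  = kron(eye(n), A) + kron(B.', eye(n));   % A (+) B^t, size n^2-by-n^2
d  = diag(K);                               % lambda_ii (assumed nonzero)
DK = bsxfun(@times, 1 ./ d, K);             % D*(A (+) B^t): row i scaled by 1/lambda_ii
t  = C(:) ./ d;                             % first term, D*vec(C)
v  = t;                                     % partial sum
for k = 1:40                                % truncation length (our choice)
    t = -(DK * t - t);                      % t <- -(D*(A (+) B^t) - I)*t
    v = v + t;
end
U = reshape(v, n, n);                       % approximate solution of (1.1)
norm(A*U + U*B - C, inf)                    % residual check
```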

k t k In this part, we will show that the series Sm = ∑m k=0 (−1) ( (A ⊕ B ) − In2 ) .vec(C) is converges. Thus, first we need the following definition.

Definition 2.1. [26] Let $A$, $M$, $N$ be three matrices satisfying $A = M - N$. The pair of matrices $(M, N)$ is a regular splitting of $A$ if $M$ is nonsingular, and $M^{-1}$ and $N$ are nonnegative.

Theorem 2.4. [26] Let $(M, N)$ be a regular splitting of a matrix $A$. Then $\rho(M^{-1}N) < 1$ if and only if $A$ is nonsingular and $A^{-1}$ is nonnegative.

Theorem 2.5. Let $D$ be a nonsingular matrix such that $D^{-1}$ and $D^{-1} - (A \oplus B^t)$ are nonnegative. Then the sequence
\[
\mathrm{vec}(U) = \sum_{k=0}^{\infty} (-1)^k \left( D(A \oplus B^t) - I_{n^2} \right)^k D\,\mathrm{vec}(C) \tag{2.18}
\]
converges if $(A \oplus B^t)$ is nonsingular and $(A \oplus B^t)^{-1}$ is nonnegative.

Proof. Suppose that $D$ is a nonsingular matrix such that $D^{-1}$ and $D^{-1} - (A \oplus B^t)$ are nonnegative. By employing Theorem 2.4, it follows that $\rho\!\left( D(A \oplus B^t) - I_{n^2} \right) < 1$, since $M = D^{-1}$ and $N = D^{-1} - (A \oplus B^t)$ form a regular splitting of $(A \oplus B^t)$. This implies that the series
\[
S_m = \sum_{k=0}^{m} (-1)^k \left( D(A \oplus B^t) - I_{n^2} \right)^k D\,\mathrm{vec}(C) \tag{2.19}
\]

converges, by employing Theorem 2.3.

3 Numerical experiments and applications

In this section, some numerical examples and applications of the Sylvester matrix equation are given. All computations have been carried out in MATLAB R2014a, with unit roundoff $u \approx 10^{-16}$. Moreover, the relative error of the approximations is measured by
\[
\mathrm{Res}(U) = \frac{\|U - X\|_\infty}{\|X\|_\infty}, \tag{3.20}
\]

where $U$ is the approximate solution produced by HPM.

Example 3.1. (Block diagonalization) One of the most important applications of equation (1.1) is the block diagonalization of matrices [2, 4, 24]. Namely, if $X$ is a solution of (1.1), the similarity transformation defined by $Z = \begin{pmatrix} I & X \\ 0 & I \end{pmatrix}$ can be employed to block-diagonalize the block upper triangular matrix $\begin{pmatrix} A & C \\ 0 & -B \end{pmatrix}$. In other words, we have
\[
Z^{-1} \begin{pmatrix} A & C \\ 0 & -B \end{pmatrix} Z
= \begin{pmatrix} I & -X \\ 0 & I \end{pmatrix}
\begin{pmatrix} A & C \\ 0 & -B \end{pmatrix}
\begin{pmatrix} I & X \\ 0 & I \end{pmatrix}
= \begin{pmatrix} A & 0 \\ 0 & -B \end{pmatrix}. \tag{3.21}
\]

In this example, consider the $n \times n$ matrices
\[
A = \mathrm{Heptadiag}_n(-3, -2, -1, 25, -1, -2, -3),
\]
\[
B = \mathrm{Heptadiag}_n\!\left( -2 - \frac{\sigma}{\omega+2},\; -1 - \frac{\sigma}{\omega+1},\; 0,\; 40,\; 0,\; -1 + \frac{\sigma}{\omega+1},\; -2 + \frac{\sigma}{\omega+2} \right),
\]
with $\sigma = 10$, $\omega = 100$, and $n = 30$. Furthermore, we take as the exact solution the matrix
\[
X = \mathrm{Hendecadiag}_n(3, 2, 0, 1, -1, 30, -1, 1, 0, 2, 3),
\]
and we obtain the matrix $C$ by setting $C = AX + XB$. The structure of the $30 \times 30$ matrices $A$, $B$, and $X$ can be seen in Figure 1. Now, we approximate the solution of (1.1) by applying HPM with eight terms, and we measure the relative error of the estimate. In Figure 2, it can be seen that the approximate solution is in very good agreement with the exact solution. Finally, we have investigated the CPU time and the error as the dimension of the matrices increases. The results are reported in Figure 3.
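The banded test matrices of this example can be generated with `spdiags`; the sketch below reflects our reading of the Heptadiag/Hendecadiag notation as constant diagonals listed from lowest to highest, and is an illustration rather than the paper's own code.

```matlab
% Sketch: test matrices of Example 3.1 via spdiags (our reading of the
% Heptadiag/Hendecadiag notation: constant diagonals, lowest to highest).
n = 30; sigma = 10; omega = 100;
e = ones(n, 1);
A = full(spdiags(e * [-3 -2 -1 25 -1 -2 -3], -3:3, n, n));
bd = [-2 - sigma/(omega+2), -1 - sigma/(omega+1), 0, 40, 0, ...
       -1 + sigma/(omega+1), -2 + sigma/(omega+2)];
B = full(spdiags(e * bd, -3:3, n, n));
X = full(spdiags(e * [3 2 0 1 -1 30 -1 1 0 2 3], -5:5, n, n));
C = A*X + X*B;               % right-hand side consistent with the exact X
% Here A (+) B^t is SRDD (its diagonal is about 25 + 40 = 65), so the
% diagonally scaled series (2.17) sketched in Section 2.3 is applicable.
```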


Figure 1: The structure of the matrices $A$, $B$, and $X$, respectively, in Example 3.1.

Figure 2: Comparison of the exact and approximate solutions in Example 3.1.

Figure 3: Behavior of the error and the CPU time as the dimension increases in Example 3.1.

Example 3.2. (Matrix sign function) Another application of equation (1.1) is a special decoupling property of the solution, used for computing the matrix sign function. If $A = Z\,\mathrm{diag}(J_1, J_2)\,Z^{-1}$ is a Jordan canonical form of $A$, where $J_1$ is a $p \times p$ Jordan block corresponding to the eigenvalues in the open right half-plane, $J_2$ is a $q \times q$ Jordan block corresponding to the eigenvalues in the open left half-plane, and $Z$ is a nonsingular matrix, then the matrix sign function is defined as follows [4, 27]:
\[
\mathrm{sign}(A) = Z \begin{pmatrix} I_p & 0 \\ 0 & -I_q \end{pmatrix} Z^{-1}. \tag{3.22}
\]

Considering $T = \begin{pmatrix} A & C \\ 0 & -B \end{pmatrix}$ and $Z = \begin{pmatrix} I_m & X \\ 0 & I_n \end{pmatrix}$, the matrix sign function gives an expression for the solution of (1.1):
\[
\mathrm{sign}(T) = Z\, \mathrm{sign}\!\begin{pmatrix} A & 0 \\ 0 & -B \end{pmatrix} Z^{-1}
= Z \begin{pmatrix} -I_m & 0 \\ 0 & I_n \end{pmatrix} Z^{-1}
= \begin{pmatrix} -I_m & 2X \\ 0 & I_n \end{pmatrix}. \tag{3.23}
\]
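The identity (3.23) can be exercised in either direction. The MATLAB sketch below computes $\mathrm{sign}(T)$ by the basic Newton iteration $S \leftarrow (S + S^{-1})/2$ (the method mentioned in this example) and recovers $X$ from the $(1,2)$ block; the random stable test pair, iteration cap, and tolerance are our choices, and the sketch presumes $A$ and $B$ asymptotically stable so that the middle factor in (3.23) is $\mathrm{diag}(-I_m, I_n)$.

```matlab
% Sketch: recover X from sign(T) via (3.23), with sign(T) computed by the
% Newton iteration S <- (S + inv(S))/2. Assumes A and B asymptotically
% stable; the shifted random matrices below are stable with high probability.
m = 6; n = 5;
A = -5*eye(m) + randn(m);
B = -5*eye(n) + randn(n);
X = randn(m, n);
C = A*X + X*B;                          % so X solves (1.1) by construction
T = [A, C; zeros(n, m), -B];
S = T;
for k = 1:100                           % iteration cap (our choice)
    Snew = (S + inv(S)) / 2;            % Newton step for the sign function
    if norm(Snew - S, inf) < 1e-13 * norm(S, inf), S = Snew; break, end
    S = Snew;
end
Xhat = S(1:m, m+1:end) / 2;             % (1,2) block of sign(T) equals 2X
norm(Xhat - X, inf)                     % should be near machine precision
```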

Continuing this example, consider the matrices
\[
A = \mathrm{Pentadiag}_n\!\left( -2 - \frac{\sigma}{\omega+2},\; -1 - \frac{\sigma}{\omega+1},\; 50,\; -1 + \frac{2\sigma}{\omega+1},\; -2 + \frac{2\sigma}{\omega+2} \right),
\]
\[
B = \mathrm{Tridiag}_n\!\left( -10 - \frac{4\sigma}{\omega+1},\; 100,\; -10 - \frac{4\sigma}{\omega+1} \right),
\]
with $\sigma = 10$ and $\omega = 10$. Furthermore, we take as the exact solution the matrix
\[
X = \mathrm{Pentadiag}_n(-2, -1, 10, -1, -2).
\]
Now, we first approximate $X$ by HPM with eight terms, and then we compute the matrix sign function by (3.23), denoting the result by $\widehat{\mathrm{sign}}(T)$. In addition, we use Higham's Matrix Function Toolbox [19] to obtain a good approximation to the matrix sign function, denoted $\mathrm{sign}(T)$. It must be mentioned that in this example we have used Newton's method for computing the matrix sign function. Increasing the dimension of the matrices, we measure the difference $\|\widehat{\mathrm{sign}}(T) - \mathrm{sign}(T)\|_\infty$. The results are reported in Figure 4. It can be observed that the error of the HPM approximation is quite acceptable.

Figure 4: Comparison of the error in the solution of (1.1) and in the computation of the sign function.

4 Conclusion

In this work, we have generalized the homotopy perturbation method to solve a particular linear matrix equation. Numerical implementations reveal that, as more terms of the approximation series are taken, the error decreases. Furthermore, the more strictly row diagonally dominant the matrix $(A \oplus B^t)$ is, the faster the method converges and the more sharply the error declines. Moreover, we have seen that as the dimension of the matrices grows, the approximation error increases significantly. Finally, some relevant applications of the Sylvester matrix equation, in conjunction with numerical tests, have been given.


Acknowledgments

The author would like to thank an anonymous referee whose pertinent and detailed suggestions considerably improved the presentation and the correctness of the paper. This work is supported by Robat Karim Branch, Islamic Azad University, Tehran, Iran.

References

[1] R. Bartels, G. Stewart, Solution of the matrix equation AX + XB = C, Communications of the ACM, 15 (9) (1972) 820-826.

[2] P. Benner, Factorized solution of Sylvester equations with applications in control, In: Proceedings of the international symposium of mathematical theory of networks and systems, MTNS, (2004).

[3] P. Benner, Large-scale matrix equations of special type, Numerical Linear Algebra with Applications, 15 (2008) 747-754. http://dx.doi.org/10.1002/nla.621

[4] U. Baur, Low rank solution of data-sparse Sylvester equations, Special Issue on Large-Scale Matrix Equations of Special Type, Numerical Linear Algebra with Applications, 15 (2008) 837-851. http://dx.doi.org/10.1002/nla.605

[5] B. N. Datta, K. Datta, Theoretical and computational aspects of some linear algebra problems in control theory, In: C. I. Byrnes, A. Lindquist (eds), Computational and combinatorial methods in systems theory, Elsevier, Amsterdam, (1986) 201-212.

[6] F. Ding, T. Chen, Iterative least-squares solutions of coupled Sylvester matrix equations, Systems and Control Letters, 54 (2) (2005) 95-107. http://dx.doi.org/10.1016/j.sysconle.2004.06.008

[7] F. Ding, T. Chen, Gradient based iterative algorithms for solving a class of matrix equations, IEEE Transactions on Automatic Control, 50 (8) (2005) 1216-1221. http://dx.doi.org/10.1109/TAC.2005.852558

[8] F. Ding, T. Chen, On iterative solutions of general coupled matrix equations, SIAM Journal on Control and Optimization, 44 (6) (2006) 2269-2284. http://dx.doi.org/10.1137/S0363012904441350

[9] F. Ding, X. P. Liu, J. Ding, Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle, Applied Mathematics and Computation, 197 (1) (2008) 41-50. http://dx.doi.org/10.1016/j.amc.2007.07.040

[10] J. Ding, Y. Liu, F. Ding, Iterative solutions to matrix equations of the form A_iXB_i = F_i, Computers and Mathematics with Applications, 59 (11) (2010) 3500-3507. http://dx.doi.org/10.1016/j.camwa.2010.03.041

[11] F. Ding, H. Zhang, Gradient-based iterative algorithm for a class of the coupled matrix equations related to control systems, IET Control Theory and Applications, 8 (15) (2014) 1588-1595. http://dx.doi.org/10.1049/iet-cta.2013.1044

[12] F. Ding, Y. Wang, J. Ding, Recursive least squares parameter identification algorithms for systems with colored noise using the filtering technique and the auxiliary model, Digital Signal Processing, 37 (2015) 100-108. http://dx.doi.org/10.1016/j.dsp.2014.10.005


[13] S. A. Edalatpanah, M. M. Rashidi, On the application of homotopy perturbation method for solving systems of linear equations, International Scholarly Research Notices, Volume 2014 (2014), Article ID 143512, 5 pages. http://dx.doi.org/10.1155/2014/143512

[14] M. A. Fariborzi Araghi, M. Hosseinzadeh, ABC method for solving fuzzy Sylvester matrix equation, International Journal of Mathematical Modelling and Computations, 2 (2012) 231-237.

[15] G. Golub, S. Nash, C. Van Loan, A Hessenberg-Schur method for the problem AX + XB = C, IEEE Transactions on Automatic Control, 24 (1979) 909-913. http://dx.doi.org/10.1109/TAC.1979.1102170

[16] J. H. He, Homotopy perturbation technique, Computer Methods in Applied Mechanics and Engineering, 178 (1999) 257-262. http://dx.doi.org/10.1016/s0045-7825(99)00018-3

[17] J. H. He, A coupling method of a homotopy technique and a perturbation technique for non-linear problems, International Journal of Non-Linear Mechanics, 35 (1) (2000) 37-43. http://dx.doi.org/10.1016/S0020-7462(98)00085-7

[18] J. H. He, Homotopy perturbation method: a new non-linear analytical technique, Applied Mathematics and Computation, 135 (1) (2003) 73-79. http://dx.doi.org/10.1016/S0096-3003(01)00312-5

[19] N. J. Higham, The Matrix Function Toolbox, http://www.ma.man.ac.uk/~higham/mftoolbox (retrieved on November 3, 2009).

[20] K. Jbilou, A. Messaoudi, H. Sadok, Global FOM and GMRES algorithms for matrix equations, Applied Numerical Mathematics, 31 (1999) 49-63. http://dx.doi.org/10.1016/S0168-9274(98)00094-4

[21] B. Keramati, An approach to the solution of linear system of equations by He's homotopy perturbation method, Chaos, Solitons and Fractals, 41 (1) (2009) 152-156. http://dx.doi.org/10.1016/j.chaos.2007.11.020

[22] S. J. Liao, The proposed homotopy analysis technique for the solution of nonlinear problems, Ph.D. Thesis, Shanghai Jiao Tong University, (1992).

[23] H. K. Liu, Application of homotopy perturbation methods for solving systems of linear equations, Applied Mathematics and Computation, 217 (12) (2011) 5259-5264. http://dx.doi.org/10.1016/j.amc.2010.11.024

[24] A. J. Laub, Matrix Analysis for Scientists and Engineers, SIAM, Philadelphia, PA, (2005). http://dx.doi.org/10.1137/1.9780898717907

[25] A. J. Laub, M. T. Heath, C. Paige, R. C. Ward, Computation of system balancing transformations and other applications of simultaneous diagonalization algorithms, IEEE Transactions on Automatic Control, 32 (1987) 115-122. http://dx.doi.org/10.1109/TAC.1987.1104549

[26] Y. Saad, Iterative Methods for Sparse Linear Systems, second ed., SIAM, (2003). http://dx.doi.org/10.1137/1.9780898718003

[27] A. Sadeghi, M. I. Ahmad, A. Ahmad, M. E. Abbasnejad, A note on solving the fuzzy Sylvester matrix equation, Journal of Computational Analysis and Applications, 15 (1) (2013) 10-22.

[28] A. Sadeghi, S. Abbasbandy, M. E. Abbasnejad, The common solution of the pair of fuzzy matrix equations, World Applied Sciences, 15 (2) (2011) 232-238.


[29] A. Sadeghi, M. E. Abbasnejad, M. I. Ahmad, On solving systems of fuzzy matrix equation, Far East Journal of Applied Mathematics, 59 (1) (2011) 31-44.

[30] J. Saeidian, E. Babolian, A. Aziz, On a homotopy based method for solving systems of linear equations, TWMS Journal of Pure and Applied Mathematics, 6 (1) (2015) 15-26.

[31] V. Simoncini, On the numerical solution of AX − XB = C, BIT, 36 (1996) 814-830. http://dx.doi.org/10.1007/BF01733793

[32] F. Toutounian, D. Khojasteh Salkuyeh, M. Mojarrab, LSMR iterative method for general coupled matrix equations, Journal of the Franklin Institute - Engineering and Applied Mathematics, 351 (1) (2014) 340-357.

[33] L. Xie, Y. J. Liu, H. Z. Yang, Gradient based and least squares based iterative algorithms for matrix equations AXB + CX^T D = F, Applied Mathematics and Computation, 217 (5) (2010) 2191-2199. http://dx.doi.org/10.1016/j.amc.2010.07.019

[34] H. M. Zhang, H. C. Yin, R. Ding, A numerical algorithm for solving a class of matrix equations, Journal of Mathematical Modeling, 2 (1) (2014) 41-45.
