Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2013, Article ID 697947, 11 pages. http://dx.doi.org/10.1155/2013/697947

Research Article

An Iterative Method for the Least-Squares Problems of a General Matrix Equation Subject to Submatrix Constraints

Li-fang Dai, Mao-lin Liang, and Yong-hong Shen

School of Mathematics and Statistics, Tianshui Normal University, Tianshui, Gansu 741001, China

Correspondence should be addressed to Li-fang Dai; [email protected]

Received 26 July 2013; Accepted 22 October 2013

Academic Editor: Debasish Roy

Copyright © 2013 Li-fang Dai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract. An iterative algorithm is proposed for solving the least-squares problem of the general matrix equation $\sum_{i=1}^{t} A_i X_i B_i = F$, where the $X_i$ ($i = 1, 2, \dots, t$) are centro-symmetric matrices to be determined, each with a given central principal submatrix. For any initial iterative matrices, we show that a least-squares solution can be obtained by this method within finitely many iteration steps in the absence of roundoff errors. Meanwhile, the unique optimal approximation solution group for given matrices $\widetilde{X}_i$ can also be obtained via the least-norm least-squares solution of the matrix equation $\sum_{i=1}^{t} A_i \overline{X}_i B_i = \overline{F}$, in which $\overline{X}_i = X_i - \widetilde{X}_i$ and $\overline{F} = F - \sum_{i=1}^{t} A_i \widetilde{X}_i B_i$. Numerical examples illustrate the efficiency of the algorithm.
1. Introduction

Throughout this paper, we denote the set of all $m \times n$ real matrices by $\mathrm{R}^{m \times n}$. The symbol $A^{T}$ represents the transpose of the matrix $A$; $J_n$ and $I_n$ stand for the reverse unit matrix and the identity matrix of order $n$, respectively. For $A, B \in \mathrm{R}^{m \times n}$, the inner product of $A$ and $B$ is defined by $\langle A, B \rangle = \operatorname{trace}(B^{T} A)$, which induces the Frobenius norm $\|A\| = \sqrt{\langle A, A \rangle}$.

A matrix $A = (a_{ij}) \in \mathrm{R}^{n \times n}$ is called centro-symmetric (centro-skew symmetric) if and only if

  $a_{ij} = a_{n-i+1,\,n-j+1}$  ($a_{ij} = -a_{n-i+1,\,n-j+1}$),  $i, j = 1, 2, \dots, n$,  (1)

which can be characterized equivalently by $J_n A J_n = A$ ($J_n A J_n = -A$). The set of all centro-symmetric (centro-skew symmetric) matrices of order $n$ is denoted by $\mathrm{CSR}^{n \times n}$ ($\mathrm{CASR}^{n \times n}$). This kind of matrix plays an important role in many applications (see, e.g., [1-4]) and has been widely investigated (see, e.g., [5-7]) by means of the generalized inverse, the generalized singular value decomposition (GSVD) [8], and so forth. For further results, we refer the reader to [9-16] and the references therein.

We first recall the concept of the central principal submatrix, originally put forward by Yin [17].

Definition 1. Let $A \in \mathrm{R}^{n \times n}$. If $n - p$ is even, the $p \times p$ central principal submatrix of $A$, denoted by $A[p]$, is obtained by deleting the first and last $(n-p)/2$ rows and columns of $A$, namely, $A[p] = (a_{ij})_{(n-p)/2 < i,\,j \le n-(n-p)/2}$.

Evidently, a matrix of odd (even) order only has central principal submatrices of odd (even) order.

Now, the first problem to be studied here can be stated as follows.

Problem 2. Given $A_i \in \mathrm{R}^{p \times m_i}$, $B_i \in \mathrm{R}^{m_i \times q}$, $\mathcal{X}_i \in \mathrm{CSR}^{q_i \times q_i}$ ($i = 1, 2, \dots, t$), and $F \in \mathrm{R}^{p \times q}$, find a least-squares solution $X_i \in \Pi_i$ of the matrix equation

  $A_1 X_1 B_1 + A_2 X_2 B_2 + \cdots + A_t X_t B_t = F$,  (2)

in which $\Pi_i = \{X_i \mid X_i \in \mathrm{CSR}^{m_i \times m_i} \text{ with } X_i[q_i] = \mathcal{X}_i\}$, $m_i > q_i$, $i \in \Lambda \triangleq \{1, 2, \dots, t\}$, and $X_i[q_i]$ represents the $q_i \times q_i$ central principal submatrix of $X_i$.

Problem 2 is the submatrix-constrained version of matrix equation (2); it originally arises from a practical subsystem expansion problem and has been investigated deeply (see, e.g., [7, 18-22]).
In those works, generalized inverses or complicated matrix decompositions such as the canonical correlation decomposition (CCD) [23] and the GSVD are employed. However, it is almost impossible to solve (2) by these methods, and iteration offers an efficient alternative. Recently, various iterative methods have been constructed. Zhou and Duan [24] studied the generalized Sylvester matrix equation

  $\sum_{k=0}^{p} A_k X F_k + \sum_{k=0}^{p} B_k Y F_k = R$  (3)

by means of the so-called generalized Sylvester mapping, which has pretty properties. Wu et al. [25] presented a finite iterative method for a class of complex matrix equations involving the conjugate and the transpose of the unknown solution. Motivated by the well-known Jacobi and Gauss-Seidel iteration methods, Ding and Chen [26] proposed a general family of iterative methods for linear matrix equations; these methods were also extended to the coupled Sylvester matrix equations

  $\sum_{j=1}^{p} A_{ij} X_j B_{ij} = C_i$,  $i = 1, 2, \dots, p$.  (4)

Although these iterative algorithms are efficient, they run into difficulties for constrained matrix equation problems (i.e., finding solutions in sets of matrices with special structure, for instance, symmetric, centro-symmetric, or bisymmetric matrices) and for submatrix-constrained problems, because they cannot preserve the special structure of the unknown matrix during the iteration. Based on the classical conjugate gradient (CG) method, Peng et al. [27] gave an iterative method for the bisymmetric solution of matrix equation (2). A similar method was constructed in [28] to solve the matrix equations (4) with generalized bisymmetric unknowns. In particular, Li et al. [29] proposed an elegant algorithm for solving the generalized Sylvester (Lyapunov) matrix equation $AXB + CYD = E$ with bisymmetric $X$ and symmetric $Y$, where the two unknown matrices contain a given central principal submatrix and a given leading principal submatrix, respectively. This method avoids the difficulties of numerical instability and computational complexity and solves the problem completely. Borrowing the ideas of that algorithm, we will solve Problem 2 by iteration.

The second problem to be considered is the optimal approximation problem.

Problem 3. Let $S_E$ be the solution set of Problem 2. For given matrices $\widetilde{X}_i \in \mathrm{R}^{m_i \times m_i}$, find $(\widehat{X}_1, \dots, \widehat{X}_t) \in S_E$ such that

  $\sum_{i=1}^{t} \|\widehat{X}_i - \widetilde{X}_i\|^2 = \min_{(X_1, X_2, \dots, X_t) \in S_E} \sum_{i=1}^{t} \|X_i - \widetilde{X}_i\|^2$.  (5)

This problem occurs frequently in experimental design (see, for instance, [30]). Here, the preliminary estimate $\widetilde{X}_i$ of the unknown matrix $X_i$ can be obtained from experiments, but it may not satisfy the structural and/or spectral requirements. The best estimate of $X_i$ is the matrix $\widehat{X}_i$ that satisfies both requirements, that is, the optimal approximation (see, e.g., [31, 32]). For this problem, we also refer the reader to [9-11, 13, 15, 16, 20-23, 27-29, 33-36] and the references therein.

The rest of this paper is outlined as follows. In Section 2, an iterative algorithm is proposed to solve Problem 2, and its properties are investigated. In Section 3, we consider the optimal approximation Problem 3 by means of this iterative algorithm. In Section 4, some numerical examples are given to verify the efficiency of the algorithm.

2. The Algorithm for Problem 2 and Its Properties

According to the definition of a centro-symmetric matrix, when $n - q$ is even, a centro-symmetric matrix $X \in \mathrm{CSR}^{n \times n}$ can be partitioned into smaller submatrices, namely,
  $X = \begin{pmatrix} X_{11} & X_{12} & X_{13} \\ X_{21} & X_{22} & J_q X_{21} J_{(n-q)/2} \\ J_{(n-q)/2} X_{13} J_{(n-q)/2} & J_{(n-q)/2} X_{12} J_q & J_{(n-q)/2} X_{11} J_{(n-q)/2} \end{pmatrix}$,  (6)

where $X_{11} \in \mathrm{R}^{(n-q)/2 \times (n-q)/2}$, $X_{12} \in \mathrm{R}^{(n-q)/2 \times q}$, $X_{13} \in \mathrm{R}^{(n-q)/2 \times (n-q)/2}$, $X_{21} \in \mathrm{R}^{q \times (n-q)/2}$, and $X_{22} \in \mathrm{CSR}^{q \times q}$.
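For instance, the partition (6) can be realized programmatically; the following NumPy sketch (the function name is ours) assembles a centro-symmetric matrix from the free blocks. Note that $X_{22}$ must itself be centro-symmetric for the result to satisfy $J_n X J_n = X$.

```python
import numpy as np

def centro_from_blocks(X11, X12, X13, X21, X22):
    """Assemble the centro-symmetric matrix (6) from its free blocks."""
    J = lambda s: np.eye(s)[::-1]                    # reverse unit matrix J_s
    k, q = X11.shape[0], X22.shape[0]
    top = np.hstack([X11, X12, X13])
    mid = np.hstack([X21, X22, J(q) @ X21 @ J(k)])
    bot = np.hstack([J(k) @ X13 @ J(k), J(k) @ X12 @ J(q), J(k) @ X11 @ J(k)])
    return np.vstack([top, mid, bot])
```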
đĂđ . Now, for some fixed positive integer đ â Î, we define two matrix sets. đđ Ăđđ = {đđ | đđ â CSđ
đđ Ăđđ with đđ [đđ ] = 0} , CSđ
â,đ đ
0 0 0 { đđ Ăđđ 0 đđ 0) â CSđ
đđ Ăđđ , = đ | đ = ( CSđ
⏌,đ đ { đ đ 0 0 0 {
(7)
} đđ = đđ [đđ ] â CSđ
đđ Ăđđ } . } đ Ăđ
đđ Ăđđ đ đ and CSđ
⏌,đ are linear It is clear that both CSđ
â,đ đ đ đđ Ăđđ . subspaces of đ
In addition, for any matrix đ â đ
đđ Ăđđ , it has uniquely decomposition in direct sum, that is, đ = đ1 â đ2 , here đ1 = (đ + đ˝đđ đđ˝đđ )/2, đ2 = (đ â đ˝đđ đđ˝đđ )/2. Furthermore, đ1 = đ11 + đ12 is also the direct sum decomposition of đ1 if đ11 â đđ Ăđđ đđ Ăđđ , đ12 â CSđ
⏌,đ , since â¨đ11 , đ12 ⊠= 0. Hence, we CSđ
â,đ đ đ obtain the following. đ Ăđ
đđ Ăđđ đ đ âCSđ
⏌,đ âCASđ
đđ Ăđđ . Lemma 4. Consider đ
đđ Ăđđ = CSđ
â,đ đ đ
Lemma 4 reveals that any matrix đ â đ
đđ Ăđđ can be đđ Ăđđ , uniquely written as đ = đ1 + đ2 + đ3 , where đ1 â CSđ
â,đ đ đđ Ăđđ đđ Ăđđ đ2 â CSđ
⏌,đđ , đ3 â CASđ
. Then, we can define the following linear projection operators: đđ Ăđđ Lđ : đ
đđ Ăđđ ół¨â CSđ
â,đ đ
đ ół¨â đ1 for đ â Î.
(8)
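As an illustration, $\mathcal{L}_i(X)$ can be computed by forming the centro-symmetric part $(X + J X J)/2$ and zeroing its central $q_i \times q_i$ block. A minimal NumPy sketch (helper names ours), together with a numerical check of the inner-product property stated next:

```python
import numpy as np

def proj_L(X, q):
    """L_i: centro-symmetric part of X with the central q x q block set to zero,
    i.e., the component of X in CSR_{0,q}."""
    n = X.shape[0]
    k = (n - q) // 2
    Y = 0.5 * (X + X[::-1, ::-1])   # centro-symmetric part (X + J X J)/2
    Y[k:n - k, k:n - k] = 0.0       # kill the central principal block
    return Y

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 6))
Y = proj_L(rng.standard_normal((6, 6)), 2)   # an arbitrary element of CSR_{0,2}
assert np.isclose(np.trace(Y.T @ X), np.trace(Y.T @ proj_L(X, 2)))  # <X,Y> = <L(X),Y>
```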
According to the definition of $\mathcal{L}_i$, if $X \in \mathrm{R}^{m_i \times m_i}$ and $Y \in \mathrm{CSR}_{0,q_i}^{m_i \times m_i}$, we have

  $\langle X, Y \rangle = \langle X_1, Y \rangle = \langle \mathcal{L}_i(X), Y \rangle$.  (9)

This property will be employed frequently in the remainder of the paper. The following theorem is essential for solving Problem 2: it transforms Problem 2 equivalently into the least-squares problem of another matrix equation.

Theorem 5. Any solution group of Problem 2 can be obtained by

  $(X_1, X_2, \dots, X_t) = (\overline{X}_1 + X_1^{c}, \overline{X}_2 + X_2^{c}, \dots, \overline{X}_t + X_t^{c})$,  (10)

where $\overline{X}_i \in \mathrm{CSR}_{0,q_i}^{m_i \times m_i}$ is a least-squares solution of the matrix equation

  $A_1 \overline{X}_1 B_1 + A_2 \overline{X}_2 B_2 + \cdots + A_t \overline{X}_t B_t = G$,  (11)

with

  $G = F - (A_1 X_1^{c} B_1 + A_2 X_2^{c} B_2 + \cdots + A_t X_t^{c} B_t)$,  $X_i^{c} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \mathcal{X}_i & 0 \\ 0 & 0 & 0 \end{pmatrix} \in \mathrm{CSR}_{c,q_i}^{m_i \times m_i}$,  (12)

and $\mathcal{X}_i \in \mathrm{CSR}^{q_i \times q_i}$ is the given central principal submatrix of $X_i$ in Problem 2.

Proof. Noting the definition of $\mathrm{CSR}_{0,q_i}^{m_i \times m_i}$, we have

  $\min_{X_i \in \Pi_i} \left\| \sum_{i=1}^{t} A_i X_i B_i - F \right\| \iff \min_{\overline{X}_i \in \mathrm{CSR}_{0,q_i}^{m_i \times m_i}} \left\| \sum_{i=1}^{t} A_i (\overline{X}_i + X_i^{c}) B_i - F \right\| \iff \min_{\overline{X}_i \in \mathrm{CSR}_{0,q_i}^{m_i \times m_i}} \left\| \sum_{i=1}^{t} A_i \overline{X}_i B_i - G \right\|$.

The proof is completed.

Remark 6. It follows from Theorem 5 that Problem 2 can be solved completely by finding the least-squares solutions of matrix equation (11) in the subspaces $\mathrm{CSR}_{0,q_i}^{m_i \times m_i}$.
In the remaining part of this section, we establish an iterative algorithm for (11) and analyze its properties. For convenience of expression, we define the matrix function

  $\mathcal{F}(\overline{X}_1, \overline{X}_2, \dots, \overline{X}_t) = A_1 \overline{X}_1 B_1 + A_2 \overline{X}_2 B_2 + \cdots + A_t \overline{X}_t B_t$;  (13)

then matrix equation (11) can be written compactly as

  $\mathcal{F}(\overline{X}_1, \overline{X}_2, \dots, \overline{X}_t) = G$.  (14)

Moreover, one easily verifies that

  $\langle Y, \mathcal{F}(\overline{X}_1, \dots, \overline{X}_t) \rangle = \sum_{i=1}^{t} \langle A_i^{T} Y B_i^{T}, \overline{X}_i \rangle$  (15)

holds for arbitrary $Y \in \mathrm{R}^{p \times q}$.

The iterative algorithm for the least-squares problem of matrix equation (11) can now be stated.

Algorithm 7. Consider the following.

Step 1. Input $A_i \in \mathrm{R}^{p \times m_i}$, $B_i \in \mathrm{R}^{m_i \times q}$, $F \in \mathrm{R}^{p \times q}$, and $\mathcal{X}_i \in \mathrm{CSR}^{q_i \times q_i}$ for $i \in \Lambda$; choose arbitrary matrices $\overline{X}_i^{(0)} \in \mathrm{CSR}_{0,q_i}^{m_i \times m_i}$.

Step 2. Calculate

  $R_0 = G - \mathcal{F}(\overline{X}_1^{(0)}, \overline{X}_2^{(0)}, \dots, \overline{X}_t^{(0)})$,
  $P_i^{(0)} = \mathcal{L}_i(A_i^{T} R_0 B_i^{T})$,
  $Q_i^{(0)} = P_i^{(0)}$,  $k := 0$.  (16)

Step 3. Calculate

  $\alpha_k = \dfrac{\sum_{i=1}^{t} \|P_i^{(k)}\|^2}{\|\mathcal{F}(Q_1^{(k)}, Q_2^{(k)}, \dots, Q_t^{(k)})\|^2}$,
  $\overline{X}_i^{(k+1)} = \overline{X}_i^{(k)} + \alpha_k Q_i^{(k)}$,
  $R_{k+1} = G - \mathcal{F}(\overline{X}_1^{(k+1)}, \dots, \overline{X}_t^{(k+1)}) = R_k - \alpha_k \mathcal{F}(Q_1^{(k)}, \dots, Q_t^{(k)})$.  (17)

Step 4. Calculate

  $P_i^{(k+1)} = \mathcal{L}_i(A_i^{T} R_{k+1} B_i^{T}) = P_i^{(k)} - \alpha_k \mathcal{L}_i(A_i^{T} \mathcal{F}(Q_1^{(k)}, \dots, Q_t^{(k)}) B_i^{T})$,
  $\beta_k = \dfrac{\sum_{i=1}^{t} \|P_i^{(k+1)}\|^2}{\sum_{i=1}^{t} \|P_i^{(k)}\|^2}$,
  $Q_i^{(k+1)} = P_i^{(k+1)} + \beta_k Q_i^{(k)}$.  (18)

Step 5. If $\sum_{i=1}^{t} \|P_i^{(k+1)}\|^2 = 0$, stop; otherwise set $k := k + 1$ and go to Step 3.
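A compact implementation sketch of Algorithm 7 follows (NumPy; all names are ours). The stopping test uses a small tolerance in place of an exact zero, anticipating the roundoff discussion in Section 4, and the default iteration cap is the finite-termination bound established in Remark 10 below; $G$ is assumed to have been formed by the shift of Theorem 5 (e.g., by shifted_rhs above).

```python
import numpy as np

def proj_L(X, q):                                   # as sketched earlier
    n = X.shape[0]
    k = (n - q) // 2
    Y = 0.5 * (X + X[::-1, ::-1])
    Y[k:n - k, k:n - k] = 0.0
    return Y

def F_map(A_list, X_list, B_list):
    """F(X_1,...,X_t) = sum_i A_i X_i B_i, cf. (13)."""
    return sum(A @ X @ B for A, X, B in zip(A_list, X_list, B_list))

def algorithm7(A_list, B_list, q_list, G, X0_list=None, tol=1e-8, max_iter=None):
    """CG-style iteration for the least-squares problem of (11); a sketch."""
    t = len(A_list)
    m = [A.shape[1] for A in A_list]
    X = [np.zeros((mi, mi)) if X0_list is None else X0_list[i].copy()
         for i, mi in enumerate(m)]
    R = G - F_map(A_list, X, B_list)                              # R_0
    P = [proj_L(A_list[i].T @ R @ B_list[i].T, q_list[i]) for i in range(t)]
    Q = [Pi.copy() for Pi in P]
    if max_iter is None:                                          # Remark 10 bound
        max_iter = sum((mi + qi) * (mi - qi) // 2 for mi, qi in zip(m, q_list))
    for _ in range(max_iter):
        p_norm2 = sum(np.sum(Pi * Pi) for Pi in P)                # sum_i ||P_i^(k)||^2
        if p_norm2 < tol:                                         # Step 5 stopping test
            break
        FQ = F_map(A_list, Q, B_list)
        fq_norm2 = np.sum(FQ * FQ)
        if fq_norm2 == 0.0:        # breakdown case of Remark 10: iterate already optimal
            break
        alpha = p_norm2 / fq_norm2                                # alpha_k
        for i in range(t):
            X[i] = X[i] + alpha * Q[i]                            # Step 3
        R = R - alpha * FQ
        P_new = [proj_L(A_list[i].T @ R @ B_list[i].T, q_list[i]) for i in range(t)]
        beta = sum(np.sum(Pi * Pi) for Pi in P_new) / p_norm2     # beta_k
        Q = [P_new[i] + beta * Q[i] for i in range(t)]            # Step 4
        P = P_new
    return X
```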
From Algorithm 7, we can see that $\overline{X}_i^{(k)}, P_i^{(k)}, Q_i^{(k)} \in \mathrm{CSR}_{0,q_i}^{m_i \times m_i}$. In particular, $(\overline{X}_1^{(k)}, \overline{X}_2^{(k)}, \dots, \overline{X}_t^{(k)})$ is a least-squares solution group of matrix equation (11) if $P_i^{(k)} = 0$ for all $i \in \Lambda$. The following lemma explains why.

Lemma 8. If $\mathcal{L}_i(A_i^{T} R_k B_i^{T}) = 0$ ($i = 1, 2, \dots, t$) hold simultaneously for some positive integer $k$, then $(\overline{X}_1^{(k)}, \overline{X}_2^{(k)}, \dots, \overline{X}_t^{(k)})$ generated by Algorithm 7 is a least-squares solution group of matrix equation (11).

Proof. Let $\mathbb{L} = \{ L \mid L = \mathcal{F}(\overline{X}_1, \dots, \overline{X}_t),\ \overline{X}_i \in \mathrm{CSR}_{0,q_i}^{m_i \times m_i} \}$ and $\widetilde{G} = \mathcal{F}(\overline{X}_1^{(k)}, \dots, \overline{X}_t^{(k)})$. Obviously, $\widetilde{G} \in \mathbb{L}$. By the projection theorem, $(\overline{X}_1^{(k)}, \dots, \overline{X}_t^{(k)})$ is a least-squares solution group of (11) if and only if $G - \widetilde{G} \perp \mathbb{L}$; that is, for
any matrices $\overline{X}_i \in \mathrm{CSR}_{0,q_i}^{m_i \times m_i}$ it must hold, noting Lemma 4, that $\langle G - \widetilde{G}, \mathcal{F}(\overline{X}_1, \dots, \overline{X}_t) \rangle = 0$. Indeed,

  $\langle G - \widetilde{G}, \mathcal{F}(\overline{X}_1, \dots, \overline{X}_t) \rangle = \langle R_k, \mathcal{F}(\overline{X}_1, \dots, \overline{X}_t) \rangle = \sum_{i=1}^{t} \langle A_i^{T} R_k B_i^{T}, \overline{X}_i \rangle = \sum_{i=1}^{t} \langle \mathcal{L}_i(A_i^{T} R_k B_i^{T}), \overline{X}_i \rangle = 0$,  (19)

which completes the proof.

In addition, the sequences $\{P_i^{(k)}\}$ and $\{Q_i^{(k)}\}$ generated by Algorithm 7 enjoy the following orthogonality properties.

Lemma 9. Suppose that the sequences $\{P_i^{(k)}\}$ and $\{Q_i^{(k)}\}$ generated by Algorithm 7 are nonzero for $k \le N$. Then

  (1) $\sum_{i=1}^{t} \langle P_i^{(l)}, P_i^{(j)} \rangle = 0$,  (20)
  (2) $\sum_{i=1}^{t} \langle Q_i^{(l)}, P_i^{(j)} \rangle = 0$,  (21)
  (3) $\langle \mathcal{F}(Q_1^{(l)}, \dots, Q_t^{(l)}), \mathcal{F}(Q_1^{(j)}, \dots, Q_t^{(j)}) \rangle = 0$,  (22)

where $l, j = 1, 2, \dots, N$, $l \ne j$.

Proof. In view of the symmetry of the inner product, we only need to prove (20)-(22) for $l < j$. For brevity, write $\mathcal{F}(Q^{(k)}) = \mathcal{F}(Q_1^{(k)}, \dots, Q_t^{(k)})$. According to Algorithm 7, when $j = 1$ we have

  $\sum_{i=1}^{t} \langle P_i^{(0)}, P_i^{(1)} \rangle$
   $= \sum_{i=1}^{t} \langle P_i^{(0)}, P_i^{(0)} - \alpha_0 \mathcal{L}_i(A_i^{T} \mathcal{F}(Q^{(0)}) B_i^{T}) \rangle$
   $= \sum_{i=1}^{t} \|P_i^{(0)}\|^2 - \alpha_0 \sum_{i=1}^{t} \langle P_i^{(0)}, A_i^{T} \mathcal{F}(Q^{(0)}) B_i^{T} \rangle$  [by (9)]
   $= \sum_{i=1}^{t} \|P_i^{(0)}\|^2 - \alpha_0 \sum_{i=1}^{t} \langle A_i P_i^{(0)} B_i, \mathcal{F}(Q^{(0)}) \rangle$
   $= \sum_{i=1}^{t} \|P_i^{(0)}\|^2 - \alpha_0 \|\mathcal{F}(Q^{(0)})\|^2$  [since $Q_i^{(0)} = P_i^{(0)}$]
   $= 0$,  (23)

which, again because $Q_i^{(0)} = P_i^{(0)}$, also gives $\sum_{i=1}^{t} \langle Q_i^{(0)}, P_i^{(1)} \rangle = 0$. Moreover,

  $\langle \mathcal{F}(Q^{(0)}), \mathcal{F}(Q^{(1)}) \rangle$
   $= \langle \mathcal{F}(Q^{(0)}), \mathcal{F}(P_1^{(1)} + \beta_0 Q_1^{(0)}, \dots, P_t^{(1)} + \beta_0 Q_t^{(0)}) \rangle$
   $= \langle \mathcal{F}(Q^{(0)}), \mathcal{F}(P_1^{(1)}, \dots, P_t^{(1)}) \rangle + \beta_0 \|\mathcal{F}(Q^{(0)})\|^2$
   $= \dfrac{1}{\alpha_0} \langle R_0 - R_1, \mathcal{F}(P_1^{(1)}, \dots, P_t^{(1)}) \rangle + \beta_0 \|\mathcal{F}(Q^{(0)})\|^2$
   $= \dfrac{1}{\alpha_0} \left[ \sum_{i=1}^{t} \langle P_i^{(0)}, P_i^{(1)} \rangle - \sum_{i=1}^{t} \langle P_i^{(1)}, P_i^{(1)} \rangle \right] + \beta_0 \|\mathcal{F}(Q^{(0)})\|^2$
   $= -\dfrac{1}{\alpha_0} \sum_{i=1}^{t} \|P_i^{(1)}\|^2 + \beta_0 \|\mathcal{F}(Q^{(0)})\|^2$
   $= 0$,  (24)

where the last equality follows from the definitions of $\alpha_0$ and $\beta_0$.
Assume now that (20), (21), and (22) hold up to some positive integer $s$ ($< N$); that is, for $l = 1, 2, \dots, s-1$,

  $\sum_{i=1}^{t} \langle P_i^{(l)}, P_i^{(s)} \rangle = 0$, $\sum_{i=1}^{t} \langle Q_i^{(l)}, P_i^{(s)} \rangle = 0$, $\langle \mathcal{F}(Q^{(l)}), \mathcal{F}(Q^{(s)}) \rangle = 0$.  (25)

Then, arguing as in the base case and using these assumptions, for $l < s$ we have

  $\sum_{i=1}^{t} \langle P_i^{(l)}, P_i^{(s+1)} \rangle$
   $= \sum_{i=1}^{t} \langle P_i^{(l)}, P_i^{(s)} - \alpha_s \mathcal{L}_i(A_i^{T} \mathcal{F}(Q^{(s)}) B_i^{T}) \rangle$
   $= -\alpha_s \sum_{i=1}^{t} \langle A_i P_i^{(l)} B_i, \mathcal{F}(Q^{(s)}) \rangle$
   $= -\alpha_s \sum_{i=1}^{t} \langle A_i (Q_i^{(l)} - \beta_{l-1} Q_i^{(l-1)}) B_i, \mathcal{F}(Q^{(s)}) \rangle$
   $= -\alpha_s \langle \mathcal{F}(Q^{(l)}), \mathcal{F}(Q^{(s)}) \rangle + \alpha_s \beta_{l-1} \langle \mathcal{F}(Q^{(l-1)}), \mathcal{F}(Q^{(s)}) \rangle$
   $= 0$.  (26)

Furthermore,

  $\langle \mathcal{F}(Q^{(l)}), \mathcal{F}(Q^{(s+1)}) \rangle$
   $= \langle \mathcal{F}(Q^{(l)}), \mathcal{F}(P_1^{(s+1)}, \dots, P_t^{(s+1)}) \rangle + \beta_s \langle \mathcal{F}(Q^{(l)}), \mathcal{F}(Q^{(s)}) \rangle$
   $= \dfrac{1}{\alpha_l} \langle R_l - R_{l+1}, \mathcal{F}(P_1^{(s+1)}, \dots, P_t^{(s+1)}) \rangle$
   $= \dfrac{1}{\alpha_l} \left[ \sum_{i=1}^{t} \langle P_i^{(l)}, P_i^{(s+1)} \rangle - \sum_{i=1}^{t} \langle P_i^{(l+1)}, P_i^{(s+1)} \rangle \right]$
   $= 0$.  (27)

The last equality holds because $\sum_{i=1}^{t} \langle P_i^{(s)}, P_i^{(s+1)} \rangle = 0$ as well. In fact, from the induction hypothesis we deduce

  $\sum_{i=1}^{t} \langle P_i^{(s)}, P_i^{(s+1)} \rangle$
   $= \sum_{i=1}^{t} \langle P_i^{(s)}, P_i^{(s)} - \alpha_s \mathcal{L}_i(A_i^{T} \mathcal{F}(Q^{(s)}) B_i^{T}) \rangle$
   $= \sum_{i=1}^{t} \|P_i^{(s)}\|^2 - \alpha_s \sum_{i=1}^{t} \langle A_i (Q_i^{(s)} - \beta_{s-1} Q_i^{(s-1)}) B_i, \mathcal{F}(Q^{(s)}) \rangle$
   $= \sum_{i=1}^{t} \|P_i^{(s)}\|^2 - \alpha_s \|\mathcal{F}(Q^{(s)})\|^2 + \alpha_s \beta_{s-1} \langle \mathcal{F}(Q^{(s-1)}), \mathcal{F}(Q^{(s)}) \rangle$
   $= \sum_{i=1}^{t} \|P_i^{(s)}\|^2 - \alpha_s \|\mathcal{F}(Q^{(s)})\|^2$
   $= 0$.  (28)

Moreover,

  $\sum_{i=1}^{t} \langle Q_i^{(s)}, P_i^{(s)} \rangle = \sum_{i=1}^{t} \langle P_i^{(s)} + \beta_{s-1} Q_i^{(s-1)}, P_i^{(s)} \rangle = \sum_{i=1}^{t} \|P_i^{(s)}\|^2 + \beta_{s-1} \sum_{i=1}^{t} \langle Q_i^{(s-1)}, P_i^{(s)} \rangle = \sum_{i=1}^{t} \|P_i^{(s)}\|^2$.  (29)
It follows from (29) that

  $\sum_{i=1}^{t} \langle Q_i^{(s)}, P_i^{(s+1)} \rangle$
   $= \sum_{i=1}^{t} \langle Q_i^{(s)}, P_i^{(s)} - \alpha_s \mathcal{L}_i(A_i^{T} \mathcal{F}(Q^{(s)}) B_i^{T}) \rangle$
   $= \sum_{i=1}^{t} \langle Q_i^{(s)}, P_i^{(s)} \rangle - \alpha_s \sum_{i=1}^{t} \langle A_i Q_i^{(s)} B_i, \mathcal{F}(Q^{(s)}) \rangle$
   $= \sum_{i=1}^{t} \|P_i^{(s)}\|^2 - \alpha_s \|\mathcal{F}(Q^{(s)})\|^2 = 0$,

and, finally,

  $\langle \mathcal{F}(Q^{(s)}), \mathcal{F}(Q^{(s+1)}) \rangle$
   $= \langle \mathcal{F}(Q^{(s)}), \mathcal{F}(P_1^{(s+1)}, \dots, P_t^{(s+1)}) \rangle + \beta_s \|\mathcal{F}(Q^{(s)})\|^2$
   $= \dfrac{1}{\alpha_s} \langle R_s - R_{s+1}, \mathcal{F}(P_1^{(s+1)}, \dots, P_t^{(s+1)}) \rangle + \beta_s \|\mathcal{F}(Q^{(s)})\|^2$
   $= -\dfrac{1}{\alpha_s} \sum_{i=1}^{t} \|P_i^{(s+1)}\|^2 + \beta_s \|\mathcal{F}(Q^{(s)})\|^2$
   $= 0$.  (30)

Therefore, the conclusions hold for arbitrary integers $l \ne j \le N$. The proof is completed.

Remark 10. We know from Lemma 4 and Lemma 9 that the block-diagonal matrices

  $\operatorname{diag}(P_1^{(0)}, \dots, P_t^{(0)}),\ \operatorname{diag}(P_1^{(1)}, \dots, P_t^{(1)}),\ \dots$  (31)

are orthogonal to each other in $\mathrm{R}^{\sum_{i=1}^{t} m_i \times \sum_{i=1}^{t} m_i}$, and each block $P_i^{(k)}$ lies in the subspace $\mathrm{CSR}_{0,q_i}^{m_i \times m_i}$, whose dimension is $(m_i + q_i)(m_i - q_i)/2$. Hence, in the absence of roundoff errors the iteration terminates in at most $\sum_{i=1}^{t} (m_i + q_i)(m_i - q_i)/2$ steps: there exists a positive integer $k \le \sum_{i=1}^{t} (m_i + q_i)(m_i - q_i)/2$ such that $\sum_{i=1}^{t} \|P_i^{(k)}\|^2 = 0$, and in this case $(\overline{X}_1^{(k)}, \dots, \overline{X}_t^{(k)})$ can be regarded as a least-squares solution group of matrix equation (11). In addition, we should point out that if $\alpha_k = 0$ or $\alpha_k = \infty$, the iteration would break down before $P_i^{(k)} = 0$ with $k < \sum_{i=1}^{t} (m_i + q_i)(m_i - q_i)/2$. Actually, $\alpha_k = 0$ implies $\sum_{i=1}^{t} \|P_i^{(k)}\|^2 = 0$, so $P_i^{(k)} = 0$ for all $i \in \Lambda$; while $\alpha_k = \infty$ leads to $\mathcal{F}(Q_1^{(k)}, \dots, Q_t^{(k)}) = 0$, and taking the inner product with $R_k$ on both sides, it follows from Algorithm 7 and (21) that

  $\langle R_k, \mathcal{F}(Q_1^{(k)}, \dots, Q_t^{(k)}) \rangle = \sum_{i=1}^{t} \langle P_i^{(k)}, Q_i^{(k)} \rangle = \sum_{i=1}^{t} \langle P_i^{(k)}, P_i^{(k)} + \beta_{k-1} Q_i^{(k-1)} \rangle = \sum_{i=1}^{t} \|P_i^{(k)}\|^2 = 0$,  (32)

which leads to the same situation as $\alpha_k = 0$. Hence, if there exists a positive integer $k$ such that $\alpha_k = 0$ or $\alpha_k = \infty$, then the corresponding matrix group $(\overline{X}_1^{(k)}, \dots, \overline{X}_t^{(k)})$ is already a least-squares solution of matrix equation (11).

Together with the above analysis and Lemma 9, we can conclude the following theorem.

Theorem 11. For any initial iteration matrices $\overline{X}_i^{(0)} \in \mathrm{CSR}_{0,q_i}^{m_i \times m_i}$, $i = 1, 2, \dots, t$, a least-squares solution of matrix equation (11) can be obtained within finitely many iteration steps. Moreover, suppose that $(\check{X}_1, \check{X}_2, \dots, \check{X}_t)$ is a least-squares solution group of (11); then the general solution of Problem 2 can be expressed as $X_i = \check{X}_i + Z_i + X_i^{c}$ ($i = 1, 2, \dots, t$), where the $Z_i \in \mathrm{CSR}_{0,q_i}^{m_i \times m_i}$ satisfy the homogeneous equation

  $\mathcal{F}(Z_1, Z_2, \dots, Z_t) = 0$,  (33)

and $X_i^{c}$ is as in Theorem 5.
In order to show the validity of Theorem 11, it is adequate to prove the following conclusion.

Proposition 12. Every least-squares solution group of matrix equation (11) can be expressed as $(\check{X}_1 + Z_1, \dots, \check{X}_t + Z_t)$, where the $Z_i$ satisfy equality (33).

Proof. According to the assumptions, we have

  $\min_{\overline{X}_i \in \mathrm{CSR}_{0,q_i}^{m_i \times m_i}} \|\mathcal{F}(\overline{X}_1, \dots, \overline{X}_t) - G\| = \|\mathcal{F}(\check{X}_1, \dots, \check{X}_t) - G\|$.  (34)

On the other hand, noting that $\mathcal{F}(Z_1, \dots, Z_t) = 0$,

  $\|\mathcal{F}(\check{X}_1 + Z_1, \dots, \check{X}_t + Z_t) - G\|^2 = \|\mathcal{F}(\check{X}_1, \dots, \check{X}_t) - G + \mathcal{F}(Z_1, \dots, Z_t)\|^2 = \|\mathcal{F}(\check{X}_1, \dots, \check{X}_t) - G\|^2$.  (35)

The proof is completed.

Next, we show that the unique least-norm least-squares solution of matrix equation (11) can be derived by choosing a special kind of initial iteration matrices.

Theorem 13. Let the initial iteration matrices be $\overline{X}_i^{(0)} = \mathcal{L}_i(A_i^{T} H B_i^{T})$ with arbitrary $H \in \mathrm{R}^{p \times q}$, $i = 1, 2, \dots, t$; then $(\overline{X}_1^{*}, \dots, \overline{X}_t^{*})$ generated by Algorithm 7 is the least-norm least-squares solution group of matrix equation (11). Furthermore, the least-norm solution group of Problem 2 can be expressed as

  $(X_1^{*}, X_2^{*}, \dots, X_t^{*}) = (\overline{X}_1^{*} + X_1^{c}, \overline{X}_2^{*} + X_2^{c}, \dots, \overline{X}_t^{*} + X_t^{c})$.  (36)

Proof. From Algorithm 7 and Theorem 11, with the initial iteration matrices $\overline{X}_i^{(0)} = \mathcal{L}_i(A_i^{T} H B_i^{T})$ we obtain a least-squares solution $\overline{X}_i^{*}$ of matrix equation (11), and there exists a matrix $H^{*}$ such that $\overline{X}_i^{*} = \mathcal{L}_i(A_i^{T} H^{*} B_i^{T})$. Hence, it is enough to prove that $\overline{X}_i^{*}$ is the least-norm solution. In fact, noting (33), (9), and Proposition 12, we have

  $\sum_{i=1}^{t} \|\overline{X}_i^{*} + Z_i\|^2$
   $= \sum_{i=1}^{t} \left( \|\overline{X}_i^{*}\|^2 + \|Z_i\|^2 + 2 \langle \overline{X}_i^{*}, Z_i \rangle \right)$
   $= \sum_{i=1}^{t} \|\overline{X}_i^{*}\|^2 + \sum_{i=1}^{t} \|Z_i\|^2 + 2 \sum_{i=1}^{t} \langle \mathcal{L}_i(A_i^{T} H^{*} B_i^{T}), Z_i \rangle$
   $= \sum_{i=1}^{t} \|\overline{X}_i^{*}\|^2 + \sum_{i=1}^{t} \|Z_i\|^2 + 2 \langle H^{*}, \mathcal{F}(Z_1, \dots, Z_t) \rangle$
   $= \sum_{i=1}^{t} \|\overline{X}_i^{*}\|^2 + \sum_{i=1}^{t} \|Z_i\|^2$
   $\ge \sum_{i=1}^{t} \|\overline{X}_i^{*}\|^2$,  (37)

as required.

Theorems 11 and 13 display the efficiency of Algorithm 7. Actually, the iteration sequence $\{\overline{X}_i^{(k)}\}$ converges smoothly to the solution; this is the minimization property of Algorithm 7 stated next.

Theorem 14. For any initial iteration matrices $\overline{X}_i^{(0)}$, the iterates $\overline{X}_i^{(k)}$ generated by Algorithm 7 satisfy the minimization problem

  $\|\mathcal{F}(\overline{X}_1^{(k)}, \dots, \overline{X}_t^{(k)}) - G\| = \min_{\overline{X}_i \in \mathbb{L}_i} \|\mathcal{F}(\overline{X}_1, \dots, \overline{X}_t) - G\|$,  (38)

where $\mathbb{L}_i = \overline{X}_i^{(0)} + \operatorname{span}\{Q_i^{(0)}, Q_i^{(1)}, \dots, Q_i^{(k-1)}\}$, $i = 1, 2, \dots, t$.

Proof. By the definition of $\mathbb{L}_i$, there exist real numbers $d_0, d_1, \dots, d_{k-1}$ such that $\overline{X}_i = \overline{X}_i^{(0)} + \sum_{j=0}^{k-1} d_j Q_i^{(j)}$. Define the function of $k$ variables

  $f(d_0, d_1, \dots, d_{k-1}) = \left\| \sum_{i=1}^{t} \left[ A_i \left( \overline{X}_i^{(0)} + \sum_{j=0}^{k-1} d_j Q_i^{(j)} \right) B_i \right] - G \right\|^2$.  (39)

In addition, from Algorithm 7 we know that

  $R_0 = R_k + \alpha_{k-1} \mathcal{F}(Q_1^{(k-1)}, \dots, Q_t^{(k-1)}) + \cdots + \alpha_0 \mathcal{F}(Q_1^{(0)}, \dots, Q_t^{(0)})$.  (40)

Noting (22) and taking the inner product with $\mathcal{F}(Q_1^{(j)}, \dots, Q_t^{(j)})$ on both sides of (40) yields

  $\langle \mathcal{F}(Q_1^{(j)}, \dots, Q_t^{(j)}), R_0 \rangle = \langle \mathcal{F}(Q_1^{(j)}, \dots, Q_t^{(j)}), R_j \rangle$.  (41)

Hence, by a simple calculation using (40) and (41), the function can be rewritten as

  $f(d_0, d_1, \dots, d_{k-1}) = \|R_0\|^2 + \sum_{j=0}^{k-1} d_j^2 \|\mathcal{F}(Q_1^{(j)}, \dots, Q_t^{(j)})\|^2 - 2 \sum_{j=0}^{k-1} d_j \langle \mathcal{F}(Q_1^{(j)}, \dots, Q_t^{(j)}), R_j \rangle$.  (42)

Then

  $\min f(d_0, d_1, \dots, d_{k-1}) \iff \min_{\overline{X}_i \in \mathbb{L}_i} \|\mathcal{F}(\overline{X}_1, \dots, \overline{X}_t) - G\|$.  (43)
Since $f(d_0, d_1, \dots, d_{k-1})$ attains its minimum only if $\partial f / \partial d_j = 0$, it follows from (42) and (29) that

  $d_j = \dfrac{\langle \mathcal{F}(Q_1^{(j)}, \dots, Q_t^{(j)}), R_j \rangle}{\|\mathcal{F}(Q_1^{(j)}, \dots, Q_t^{(j)})\|^2} = \dfrac{\sum_{i=1}^{t} \langle P_i^{(j)}, Q_i^{(j)} \rangle}{\|\mathcal{F}(Q_1^{(j)}, \dots, Q_t^{(j)})\|^2} = \dfrac{\sum_{i=1}^{t} \|P_i^{(j)}\|^2}{\|\mathcal{F}(Q_1^{(j)}, \dots, Q_t^{(j)})\|^2} = \alpha_j$.  (44)

Combined with (43), this completes the proof.

Theorem 14 reveals that the sequence

  $\{\|\mathcal{F}(\overline{X}_1^{(k)}, \dots, \overline{X}_t^{(k)}) - G\|\}$  (45)

decreases monotonically with increasing $k$. This descent property of the residual norm of matrix equation (11) accounts for the smooth convergence of Algorithm 7.

3. The Solution of Problem 3

In this section, we discuss the optimal approximation Problem 3. Since the least-squares problem is always consistent, it is easy to verify that the solution set $S_E$ of Problem 2 is a nonempty closed convex cone, so the optimal approximation solution is unique.

Without loss of generality, we may assume that the given matrices satisfy $\widetilde{X}_i \in \mathrm{CSR}_{0,q_i}^{m_i \times m_i}$. In fact, by Lemma 4, an arbitrary $\widetilde{X}_i \in \mathrm{R}^{m_i \times m_i}$ can be decomposed as

  $\widetilde{X}_i = \widetilde{X}_{i1} + \widetilde{X}_{i2} + \widetilde{X}_{i3}$, with $\widetilde{X}_{i1} \in \mathrm{CSR}_{0,q_i}^{m_i \times m_i}$, $\widetilde{X}_{i2} \in \mathrm{CSR}_{c,q_i}^{m_i \times m_i}$, $\widetilde{X}_{i3} \in \mathrm{CASR}^{m_i \times m_i}$.  (46)

Furthermore, if $X_i \in \mathrm{CSR}_{0,q_i}^{m_i \times m_i}$, then

  $\|X_i - \widetilde{X}_i\|^2 = \|X_i - \widetilde{X}_{i1}\|^2 + \|\widetilde{X}_{i2}\|^2 + \|\widetilde{X}_{i3}\|^2$,  (47)

which meets the claim.

Denote $\overline{X}_i = X_i - \widetilde{X}_i$ and $\overline{F} = F - \mathcal{F}(\widetilde{X}_1, \widetilde{X}_2, \dots, \widetilde{X}_t)$; then solving Problem 3 is equivalent to finding the least-norm solution of the new matrix equation

  $\mathcal{F}(\overline{X}_1, \overline{X}_2, \dots, \overline{X}_t) = \overline{F}$.  (48)

Furthermore, in analogy with the construction of (11), this problem is transformed equivalently into finding the least-norm least-squares solution of the matrix equation

  $\mathcal{F}(\overline{X}_1, \overline{X}_2, \dots, \overline{X}_t) = \overline{G}$, with $\overline{X}_i \in \mathrm{CSR}_{0,q_i}^{m_i \times m_i}$,  (49)

in which $\overline{G} = \overline{F} - \mathcal{F}(X_1^{c}, X_2^{c}, \dots, X_t^{c})$. Therefore, we can apply Algorithm 7 to derive the required solution of matrix equation (49). Actually, it follows from Theorem 13 that if the initial iteration matrices are chosen as $\overline{X}_i^{(0)} = \mathcal{L}_i(A_i^{T} \widetilde{H} B_i^{T})$ with arbitrary $\widetilde{H} \in \mathrm{R}^{p \times q}$, or especially $\overline{X}_i^{(0)} = 0 \in \mathrm{R}^{m_i \times m_i}$, then the iterates $\overline{X}_i^{*}$ constitute the least-norm least-squares solution of (49). In this case, the unique optimal approximation solution of Problem 3 can be obtained by

  $(\widehat{X}_1, \widehat{X}_2, \dots, \widehat{X}_t) = (\overline{X}_1^{*} + \widetilde{X}_1 + X_1^{c}, \overline{X}_2^{*} + \widetilde{X}_2 + X_2^{c}, \dots, \overline{X}_t^{*} + \widetilde{X}_t + X_t^{c})$.  (50)
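Assuming the hypothetical helpers sketched in Section 2 (proj_L, embed_central, F_map, algorithm7), the whole Problem 3 workflow of this section might read as follows:

```python
def solve_problem3(A_list, B_list, Xq_list, F, Xtilde_list, tol=1e-8):
    """Sketch of the Problem 3 procedure: shift, solve (49), and shift back."""
    t = len(A_list)
    q_list = [Xq.shape[0] for Xq in Xq_list]
    # WLOG replace each Xtilde_i by its component in CSR_{0,q_i} (Lemma 4, (46)-(47)).
    Xt1 = [proj_L(Xt, q) for Xt, q in zip(Xtilde_list, q_list)]
    Xc = [embed_central(Xq, A.shape[1]) for Xq, A in zip(Xq_list, A_list)]
    # Gbar = F - F(Xtilde_1,...,Xtilde_t) - F(X_1^c,...,X_t^c), cf. (48)-(49).
    Gbar = F - F_map(A_list, Xt1, B_list) - F_map(A_list, Xc, B_list)
    # Zero initial guesses yield the least-norm least-squares solution (Theorem 13).
    Xbar = algorithm7(A_list, B_list, q_list, Gbar, X0_list=None, tol=tol)
    # X_hat_i = Xbar_i + Xtilde_i + X_i^c, cf. (50).
    return [Xbar[i] + Xt1[i] + Xc[i] for i in range(t)]
```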
Combined with (43), we complete the proof. óľŠ óľŠ {óľŠóľŠóľŠóľŠF (đ1(đ) , đ2(đ) , . . . , đđĄ(đ) ) â đşóľŠóľŠóľŠóľŠ}
(0)
from Theorem 13 that if let the initial iteration matrices đđ = Ě đ ) with arbitrary đť Ě â đ
đĂđ , or especially đ(0) = Lđ (đđđ đťđ đ đ â 0 â đ
đđ Ăđđ , then the iteration solutions đđ consist of the leastnorm least-squares solution of which. In this case, the unique optimal approximation solution to Problem 3 can be obtained by
(49)
in which đş = đš â F(đ1⏌ , đ2⏌ , . . . , đđĄâŹŚ ). Therefore, we can apply Algorithm 7 to derive the required solution of matrix equation (49). Virtually, it follows
đ đ zeros ( ) hankel (1 : ) [ ] 2 2 ], đ3 = [ đ [ hilb ( đ ) ones ( ) ] 2 2 [ ] đ1 = eye (đ) ,
(51)
đ2 = ones (đ) ,
đ3 = tridiag ([7, 1, â1] , đ) , 3 [â2 [ đš=[ [â1 [ [
â2 3 â2 d
â1 â2 d â2 â1
] d ] â2 â1] ], 3 â2] â2 3 ]
where toeplitz(đ), hilb(đ), hankel(đ), zeros(đ), and eye(đ)
where $\texttt{toeplitz}(n)$, $\texttt{hilb}(n)$, $\texttt{hankel}(n)$, $\texttt{zeros}(n)$, and $\texttt{eye}(n)$ denote the Toeplitz matrix, Hilbert matrix, Hankel matrix, zero matrix, and identity matrix of order $n$, respectively; every entry of $\texttt{ones}(\cdot)$ equals one, and $\texttt{tridiag}([7, 1, -1], n)$ represents the $n \times n$ tridiagonal matrix produced by the vector $[7, 1, -1]$. Let the given central principal submatrices be

  $\mathcal{X}_1 = \texttt{zeros}(n/2)$, $\mathcal{X}_2 = \texttt{ones}(n/2) \ast 10$, $\mathcal{X}_3 = \texttt{toeplitz}(1{:}n/2)$,  (52)

so that $q_i = q = n/2$ for $i = 1, 2, 3$.

By using Algorithm 7, we obtain the solution of Problem 2. To save space, we do not report the explicit data of the solution; instead, bar graphs of the components of the solution matrices are given. Let $n = 20$. Figure 1 shows the bar graphs of the computed solutions $Z_1, Z_2, Z_3$ when we choose the initial iterative matrices

  $X_i^{(0)} = \begin{pmatrix} \texttt{ones}(\frac{n-q}{2}) & \texttt{ones}(\frac{n-q}{2}, q) & \texttt{ones}(\frac{n-q}{2}) \\ \texttt{ones}(q, \frac{n-q}{2}) & \texttt{zeros}(q) & \texttt{ones}(q, \frac{n-q}{2}) \\ \texttt{ones}(\frac{n-q}{2}) & \texttt{ones}(\frac{n-q}{2}, q) & \texttt{ones}(\frac{n-q}{2}) \end{pmatrix}$,  $i = 1, 2, 3$,  (53)

and the termination condition $\|P_1^{(k)}\| + \|P_2^{(k)}\| + \|P_3^{(k)}\| < \varepsilon = 1.0 \times 10^{-12}$.

[Figure 1: Bar graphs of the solution components $Z_1$, $Z_2$, $Z_3$ (panels (a), (b), (c)) when $n = 20$.]

Moreover, for the two cases $r = 20$, $q = 10$ and $r = 40$, $q = 20$, the convergence curves of the Frobenius norm of the residual, denoted by RES $= \|R_k\|$, and of the termination quantity, denoted by TC $= \|P_1^{(k)}\| + \|P_2^{(k)}\| + \|P_3^{(k)}\|$, are plotted against the iteration number $k$ in Figures 2 and 3, respectively.

[Figure 2: The convergence curve of the Frobenius norm of the residual (legend: $r = 20$, $q = 10$; $r = 40$, $q = 20$; horizontal axis: iteration number $k$).]

[Figure 3: The convergence curve of the Frobenius norm of the termination condition (legend: $r = 20$, $q = 10$; $r = 40$, $q = 20$; horizontal axis: iteration number $k$).]
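For reference, the example data can be reproduced, for instance, in NumPy/SciPy as below. This is a sketch: the reading of $\texttt{tridiag}([7, 1, -1], n)$ as (sub-, main, super-)diagonal values and the exact block layouts of $A_2$ and $A_3$ are our assumptions where the source is ambiguous.

```python
import numpy as np
from scipy.linalg import hilbert, hankel, toeplitz

def example1_data(n):
    """Build A_1, A_2, A_3, B_1, B_2, B_3, and F of Example 1 (n even)."""
    h = n // 2
    v = np.arange(1, h + 1, dtype=float)            # the vector 1:n/2
    A1 = np.block([[np.ones((h, h)), hilbert(h)],
                   [hankel(v),       np.zeros((h, h))]])
    A2 = np.block([[hilbert(h),      toeplitz(v)],
                   [np.ones((h, h)), hankel(v)]])
    A3 = np.block([[np.zeros((h, h)), hankel(v)],
                   [hilbert(h),       np.ones((h, h))]])
    B1 = np.eye(n)
    B2 = np.ones((n, n))
    B3 = (7 * np.diag(np.ones(n - 1), -1) + np.eye(n)
          - np.diag(np.ones(n - 1), 1))             # tridiag([7, 1, -1], n), assumed
    F = (3 * np.eye(n)
         - 2 * np.diag(np.ones(n - 1), 1) - 2 * np.diag(np.ones(n - 1), -1)
         - np.diag(np.ones(n - 2), 2) - np.diag(np.ones(n - 2), -2))
    return [A1, A2, A3], [B1, B2, B3], F
```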
From Figure 2, we can see that the residual norm of Algorithm 7 is monotonically decreasing, in accordance with the theory established in Theorem 14; that is, the algorithm is numerically stable. Figure 3 shows that the termination quantity $\|P_1^{(k)}\| + \|P_2^{(k)}\| + \|P_3^{(k)}\|$ oscillates back and forth while approaching zero as the iteration proceeds. Hence, the iterative Algorithm 7 is efficient, but this quantity lacks smooth convergence. Of course, for a problem with large and sparse matrices, Algorithm 7 may not terminate in a finite number of steps because of roundoff errors. How to establish an efficient and smooth algorithm is an important problem that we should study in future work.
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to express their sincere gratitude to the editor and two anonymous reviewers for their valuable comments and suggestions, which have helped immensely in improving the quality of the paper. Mao-lin Liang acknowledges the support of the scientific foundation of Tianshui Normal University (no. TSA1104). Yong-hong Shen is supported by the "QingLan" Talent Engineering Funds of Tianshui Normal University.

References

[1] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, Md, USA, 1996.
[2] L. Datta and S. D. Morgera, "On the reducibility of centrosymmetric matrices—applications in engineering problems," Circuits, Systems, and Signal Processing, vol. 8, no. 1, pp. 71–96, 1989.
[3] J. Respondek, "Controllability of dynamical systems with constraints," Systems & Control Letters, vol. 54, no. 4, pp. 293–314, 2005.
[4] I. S. Pressman, "Matrices with multiple symmetry properties: applications of centro-Hermitian and per-Hermitian matrices," Linear Algebra and Its Applications, vol. 284, no. 1–3, pp. 239–258, 1998.
[5] F.-Z. Zhou, X.-Y. Hu, and L. Zhang, "The solvability conditions for the inverse eigenvalue problems of centro-symmetric matrices," Linear Algebra and Its Applications, vol. 364, pp. 147–160, 2003.
[6] D. Boley and G. H. Golub, "A survey of matrix inverse eigenvalue problems," Inverse Problems, vol. 3, no. 4, pp. 595–622, 1987.
[7] Z.-J. Bai, "The inverse eigenproblem of centrosymmetric matrices with a submatrix constraint and its approximation," SIAM Journal on Matrix Analysis and Applications, vol. 26, no. 4, pp. 1100–1114, 2005.
[8] C. C. Paige, "Computing the generalized singular value decomposition," SIAM Journal on Scientific and Statistical Computing, vol. 7, no. 4, pp. 1126–1146, 1986.
[9] X.-P. Pan, X.-Y. Hu, and L. Zhang, "A class of constrained inverse eigenproblem and associated approximation problem for skew symmetric and centrosymmetric matrices," Linear Algebra and Its Applications, vol. 408, pp. 66–77, 2005.
[10] F.-Z. Zhou, L. Zhang, and X.-Y. Hu, "Least-square solutions for inverse problems of centrosymmetric matrices," Computers & Mathematics with Applications, vol. 45, no. 10-11, pp. 1581–1589, 2003.
[11] F.-L. Li, X.-Y. Hu, and L. Zhang, "Left and right inverse eigenpairs problem of skew-centrosymmetric matrices," Applied Mathematics and Computation, vol. 177, no. 1, pp. 105–110, 2006.
[12] O. Rojo and H. Rojo, "Some results on symmetric circulant matrices and on symmetric centrosymmetric matrices," Linear Algebra and Its Applications, vol. 392, pp. 211–233, 2004.
[13] D. X. Xie, X. Y. Hu, and Y.-P. Sheng, "The solvability conditions for the inverse eigenproblems of symmetric and generalized centro-symmetric matrices and their approximations," Linear Algebra and Its Applications, vol. 418, no. 1, pp. 142–152, 2006.
[14] Z. Y. Liu and H. Faßbender, "Some properties of generalized K-centrosymmetric H-matrices," Journal of Computational and Applied Mathematics, vol. 215, no. 1, pp. 38–48, 2008.
[15] F.-L. Li, X.-Y. Hu, and L. Zhang, "Left and right inverse eigenpairs problem of generalized centrosymmetric matrices and its optimal approximation problem," Applied Mathematics and Computation, vol. 212, no. 2, pp. 481–487, 2009.
[16] M.-L. Liang, C.-H. You, and L.-F. Dai, "An efficient algorithm for the generalized centro-symmetric solution of matrix equation AXB = C," Numerical Algorithms, vol. 44, no. 2, pp. 173–184, 2007.
[17] Q. X. Yin, "Construction of real antisymmetric and bi-antisymmetric matrices with prescribed spectrum data," Linear Algebra and Its Applications, vol. 389, pp. 95–106, 2004.
[18] P. Deift and T. Nanda, "On the determination of a tridiagonal matrix from its spectrum and a submatrix," Linear Algebra and Its Applications, vol. 60, pp. 43–55, 1984.
[19] L. S. Gong, X. Y. Hu, and L. Zhang, "The expansion problem of anti-symmetric matrix under a linear constraint and the optimal approximation," Journal of Computational and Applied Mathematics, vol. 197, no. 1, pp. 44–52, 2006.
[20] Y. X. Yuan and H. Dai, "Inverse problems for symmetric matrices with a submatrix constraint," Applied Numerical Mathematics, vol. 57, no. 5–7, pp. 646–656, 2007.
[21] L. J. Zhao, X. Y. Hu, and L. Zhang, "Least squares solutions to AX = B for bisymmetric matrices under a central principal submatrix constraint and the optimal approximation," Linear Algebra and Its Applications, vol. 428, no. 4, pp. 871–880, 2008.
[22] A.-P. Liao and Y. Lei, "Least-squares solutions of matrix inverse problem for bi-symmetric matrices with a submatrix constraint," Numerical Linear Algebra with Applications, vol. 14, no. 5, pp. 425–444, 2007.
[23] G. P. Xu, M. S. Wei, and D. S. Zheng, "On solutions of matrix equation AXB + CYD = F," Linear Algebra and Its Applications, vol. 279, no. 1–3, pp. 93–109, 1998.
[24] B. Zhou and G.-R. Duan, "On the generalized Sylvester mapping and matrix equations," Systems & Control Letters, vol. 57, no. 3, pp. 200–208, 2008.
[25] A.-G. Wu, G. Feng, G.-R. Duan, and W.-J. Wu, "Finite iterative solutions to a class of complex matrix equations with conjugate and transpose of the unknowns," Mathematical and Computer Modelling, vol. 52, no. 9-10, pp. 1463–1478, 2010.
[26] F. Ding and T. Chen, "Iterative least-squares solutions of coupled Sylvester matrix equations," Systems & Control Letters, vol. 54, no. 2, pp. 95–107, 2005.
[27] Z.-H. Peng, X.-Y. Hu, and L. Zhang, "The bisymmetric solutions of the matrix equation A_1 X_1 B_1 + A_2 X_2 B_2 + ··· + A_l X_l B_l = C," Linear Algebra and Its Applications, vol. 426, no. 2-3, pp. 583–595, 2007.
[28] M. Dehghan and M. Hajarian, "The general coupled matrix equations over generalized bisymmetric matrices," Linear Algebra and Its Applications, vol. 432, no. 6, pp. 1531–1552, 2010.
[29] J.-F. Li, X.-Y. Hu, and L. Zhang, "The submatrix constraint problem of matrix equation AXB + CYD = E," Applied Mathematics and Computation, vol. 215, no. 7, pp. 2578–2590, 2009.
[30] T. Meng, "Experimental design and decision support," in Expert Systems, the Technology of Knowledge Management and Decision Making for the 21st Century, C. Leondes, Ed., Academic Press, 2001.
[31] M. Baruch, "Optimization procedure to correct stiffness and flexibility matrices using vibration tests," AIAA Journal, vol. 16, no. 11, pp. 1208–1210, 1978.
[32] N. J. Higham, "Computing a nearest symmetric positive semidefinite matrix," Linear Algebra and Its Applications, vol. 103, pp. 103–118, 1988.
[33] Z.-Y. Peng, X.-Y. Hu, and L. Zhang, "The nearest bisymmetric solutions of linear matrix equations," Journal of Computational Mathematics, vol. 22, no. 6, pp. 873–880, 2004.
[34] A.-P. Liao, Z.-Z. Bai, and Y. Lei, "Best approximate solution of matrix equation AXB + CYD = E," SIAM Journal on Matrix Analysis and Applications, vol. 27, no. 3, pp. 675–688, 2005.
[35] Z.-Y. Peng and Y.-X. Peng, "An efficient iterative method for solving the matrix equation AXB + CYD = E," Numerical Linear Algebra with Applications, vol. 13, no. 6, pp. 473–485, 2006.
[36] Y.-B. Deng, Z.-Z. Bai, and Y.-H. Gao, "Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations," Numerical Linear Algebra with Applications, vol. 13, no. 10, pp. 801–823, 2006.