Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 697947, 11 pages
http://dx.doi.org/10.1155/2013/697947

Research Article

An Iterative Method for the Least-Squares Problems of a General Matrix Equation Subject to Submatrix Constraints

Li-fang Dai, Mao-lin Liang, and Yong-hong Shen
School of Mathematics and Statistics, Tianshui Normal University, Tianshui, Gansu 741001, China

Correspondence should be addressed to Li-fang Dai; [email protected]

Received 26 July 2013; Accepted 22 October 2013

Academic Editor: Debasish Roy

Copyright © 2013 Li-fang Dai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

An iterative algorithm is proposed for solving the least-squares problem of the general matrix equation $\sum_{i=1}^{t} M_i Z_i N_i = F$, where the $Z_i$ $(i = 1, 2, \ldots, t)$ are unknown centro-symmetric matrices with given central principal submatrices. For any initial iterative matrices, we show that a least-squares solution can be derived by this method within finitely many iteration steps in the absence of roundoff errors. Meanwhile, the unique optimal approximation solution group for given matrices $\tilde{Z}_i$ can also be obtained by finding the least-norm least-squares solution of the matrix equation $\sum_{i=1}^{t} M_i \bar{Z}_i N_i = \bar{F}$, in which $\bar{Z}_i = Z_i - \tilde{Z}_i$ and $\bar{F} = F - \sum_{i=1}^{t} M_i \tilde{Z}_i N_i$. Numerical examples illustrate the efficiency of this algorithm.

1. Introduction

Throughout this paper, we denote the set of all $m \times n$ real matrices by $R^{m \times n}$. The symbol $A^T$ represents the transpose of a matrix $A$; $J_n$ and $I_n$ stand for the $n \times n$ reverse unit matrix and the identity matrix, respectively. For $A, B \in R^{m \times n}$, the inner product of $A$ and $B$ is defined by $\langle A, B \rangle = \operatorname{trace}(B^T A)$, which induces the Frobenius norm $\|A\| = \sqrt{\langle A, A \rangle}$. A matrix $A = (a_{ij}) \in R^{n \times n}$ is called centro-symmetric (centro-skew-symmetric) if and only if

$$a_{ij} = a_{n-i+1,\,n-j+1} \quad (a_{ij} = -a_{n-i+1,\,n-j+1}), \quad i, j = 1, 2, \ldots, n, \tag{1}$$

which can be characterized equivalently by $J_n A J_n = A$ ($J_n A J_n = -A$). The sets of all centro-symmetric and centro-skew-symmetric matrices are denoted by $\mathrm{CS}R^{n \times n}$ and $\mathrm{CAS}R^{n \times n}$, respectively. Matrices of this kind play an important role in many applications (see, e.g., [1-4]) and have been investigated frequently and widely (see, e.g., [5-7]) by means of the generalized inverse, the generalized singular value decomposition (GSVD) [8], and so forth. For more results, we refer the reader to [9-16] and the references therein.
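As a quick illustration of the characterization $J_n A J_n = A$, the following Matlab sketch (the helper name is ours) tests centro-symmetry numerically:

```matlab
function tf = is_centrosymmetric(A, tol)
% Test the characterization J_n*A*J_n == A of a centro-symmetric matrix.
if nargin < 2, tol = 1e-12; end
n = size(A, 1);
J = fliplr(eye(n));                 % reverse unit matrix J_n
tf = norm(J*A*J - A, 'fro') <= tol * max(1, norm(A, 'fro'));
end
```

For instance, `is_centrosymmetric(toeplitz(1:5))` returns true, since every symmetric Toeplitz matrix is centro-symmetric.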

We first introduce the concept of the central principal submatrix, which was originally put forward by Yin [17].

Definition 1. Let $A \in R^{n \times n}$ with $n - p$ even. The $p \times p$ central principal submatrix of $A$, denoted by $A[p]$, is obtained by deleting the first and last $(n-p)/2$ rows and columns of $A$; namely, $A[p] = (a_{ij})$, $(n-p)/2 < i, j \leq n - (n-p)/2$.

Evidently, a matrix of odd (even) order has central principal submatrices only of odd (even) order; for example, the central principal submatrix $A[1]$ of a $3 \times 3$ matrix $A$ is its middle entry $a_{22}$. Now, the first problem to be studied here can be stated as follows.

Problem 2. Given $M_i \in R^{m \times p_i}$, $N_i \in R^{p_i \times n}$, $\mathbb{X}_i \in \mathrm{CS}R^{q_i \times q_i}$ $(i = 1, 2, \ldots, t)$, and $F \in R^{m \times n}$, find a least-squares solution $Z_i \in \Pi_i$ of the matrix equation

$$M_1 Z_1 N_1 + M_2 Z_2 N_2 + \cdots + M_t Z_t N_t = F, \tag{2}$$

in which $\Pi_i = \{ Z_i \mid Z_i \in \mathrm{CS}R^{p_i \times p_i} \text{ with } Z_i[q_i] = \mathbb{X}_i \}$, $p_i > q_i$, $i \in \Gamma \triangleq \{1, 2, \ldots, t\}$, and $Z_i[q_i]$ denotes the $q_i \times q_i$ central principal submatrix of $Z_i$.
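Definition 1 amounts to cropping an equal border from each side of the matrix; a minimal Matlab sketch (the helper name is ours):

```matlab
function Ap = central_principal_submatrix(A, p)
% A[p] of Definition 1: delete the first and last (n-p)/2 rows and
% columns of A; requires n - p to be even and nonnegative.
n = size(A, 1);
k = (n - p) / 2;
assert(k == round(k) && k >= 0, 'n - p must be even and nonnegative');
Ap = A(k+1 : n-k, k+1 : n-k);
end
```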


Problem 2 is the submatrix-constrained version of matrix equation (2). It originally arises from a practical subsystem expansion problem and has been investigated in depth (see, e.g., [7, 18-22]); in those works, generalized inverses or complicated matrix decompositions such as the canonical correlation decomposition (CCD) [23] and the GSVD are employed. However, it is almost impossible to solve (2) by such methods, and iterative methods offer an efficient alternative. Recently, various iteration methods have been constructed. Zhou and Duan [24] studied the generalized Sylvester matrix equation

$$\sum_{i=0}^{\phi} A_i X F^i + \sum_{k=0}^{\psi} B_k Y F^k = R \tag{3}$$

by the so-called generalized Sylvester mapping, which has attractive properties. Wu et al. [25] presented a finite iterative method for a class of complex matrix equations involving the conjugate and transpose of the unknown matrix. Motivated by the well-known Jacobi and Gauss-Seidel iteration methods, Ding and Chen [26] proposed a general family of iterative methods for linear matrix equations; these methods were also extended to the coupled Sylvester matrix equations

$$\sum_{j=1}^{p} A_{ij} X_j B_{ij} = C_i, \quad i = 1, 2, \ldots, p. \tag{4}$$

Although these iterative algorithms are efficient, difficulties remain for constrained matrix equation problems (i.e., finding solutions within sets of matrices with special structure, such as symmetric, centro-symmetric, or bisymmetric matrices) and for submatrix-constrained problems, since those methods cannot preserve the special structure of the unknown matrices during the iteration. Based on the classical conjugate gradient (CG) method, Peng et al. [27] gave an iterative method for finding the bisymmetric solution of matrix equation (2). A similar method was constructed in [28] to solve matrix equations (4) with generalized bisymmetric $X_j$. In particular, Li et al. [29] proposed an elegant algorithm for solving the generalized Sylvester (Lyapunov) matrix equation $AXB + CYD = E$ with bisymmetric $X$ and symmetric $Y$, where the two unknown matrices contain a given central principal submatrix and a given leading principal submatrix, respectively. That method avoids the numerical instability and computational complexity of decomposition-based approaches and solves the problem completely. Borrowing the idea of this iterative algorithm, we solve Problem 2 by iteration.

The second problem to be considered is the optimal approximation problem.

Problem 3. Let $S_E$ be the solution set of Problem 2. For given matrices $\tilde{Z}_i \in R^{p_i \times p_i}$, find $(\hat{Z}_1, \ldots, \hat{Z}_t) \in S_E$ such that

$$\sum_{i=1}^{t} \|\hat{Z}_i - \tilde{Z}_i\|^2 = \min_{(Z_1, Z_2, \ldots, Z_t) \in S_E} \sum_{i=1}^{t} \|Z_i - \tilde{Z}_i\|^2. \tag{5}$$

This problem occurs frequently in experimental design (see, for instance, [30]). Here, the preliminary estimate $\tilde{Z}_i$ of the unknown matrix $Z_i$ can be obtained from experiments, but it may not satisfy the structural and/or spectral requirements. The best estimate of $Z_i$ is the matrix $\hat{Z}_i$ that satisfies both requirements, namely, the optimal approximation of $Z_i$ (see, e.g., [31, 32]). For related work, we also refer the reader to [9-11, 13, 15, 16, 20-23, 27-29, 33-36] and the references therein.

The rest of this paper is outlined as follows. In Section 2, an iterative algorithm is proposed to solve Problem 2, and its properties are investigated. In Section 3, we consider the optimal approximation Problem 3 by means of this iterative algorithm. In Section 4, some numerical examples are given to verify the efficiency of the algorithm.

2. The Algorithm for Problem 2 and Its Properties

According to the definition of a centro-symmetric matrix, when $n - q$ is even, a centro-symmetric matrix $Z \in \mathrm{CS}R^{n \times n}$ can be partitioned into smaller submatrices, namely,

$$Z = \begin{pmatrix} Z_{11} & Z_{12} & Z_{13} \\ Z_{21} & Z_{22} & J_q Z_{21} J_{(n-q)/2} \\ J_{(n-q)/2} Z_{13} J_{(n-q)/2} & J_{(n-q)/2} Z_{12} J_q & J_{(n-q)/2} Z_{11} J_{(n-q)/2} \end{pmatrix}, \tag{6}$$

where $Z_{11} \in R^{(n-q)/2 \times (n-q)/2}$, $Z_{12} \in R^{(n-q)/2 \times q}$, $Z_{13} \in R^{(n-q)/2 \times (n-q)/2}$, $Z_{21} \in R^{q \times (n-q)/2}$, and $Z_{22} \in \mathrm{CS}R^{q \times q}$.

Now, for a fixed index $i \in \Gamma$, we define two matrix sets:

$$\mathrm{CS}R^{p_i \times p_i}_{\star, q_i} = \{ Z_i \mid Z_i \in \mathrm{CS}R^{p_i \times p_i} \text{ with } Z_i[q_i] = 0 \},$$
$$\mathrm{CS}R^{p_i \times p_i}_{\diamond, q_i} = \left\{ Z_i \;\middle|\; Z_i = \begin{pmatrix} 0 & 0 & 0 \\ 0 & X_i & 0 \\ 0 & 0 & 0 \end{pmatrix} \in \mathrm{CS}R^{p_i \times p_i}, \ X_i = Z_i[q_i] \in \mathrm{CS}R^{q_i \times q_i} \right\}. \tag{7}$$

It is clear that both $\mathrm{CS}R^{p_i \times p_i}_{\star, q_i}$ and $\mathrm{CS}R^{p_i \times p_i}_{\diamond, q_i}$ are linear subspaces of $R^{p_i \times p_i}$. In addition, any matrix $X \in R^{p_i \times p_i}$ has a unique direct-sum decomposition $X = X_1 \oplus X_2$, where $X_1 = (X + J_{p_i} X J_{p_i})/2 \in \mathrm{CS}R^{p_i \times p_i}$ and $X_2 = (X - J_{p_i} X J_{p_i})/2 \in \mathrm{CAS}R^{p_i \times p_i}$. Furthermore, $X_1 = X_{11} + X_{12}$ is also a direct-sum decomposition of $X_1$ with $X_{11} \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}$ and $X_{12} \in \mathrm{CS}R^{p_i \times p_i}_{\diamond, q_i}$, since $\langle X_{11}, X_{12} \rangle = 0$. Hence, we obtain the following.

Lemma 4. $R^{p_i \times p_i} = \mathrm{CS}R^{p_i \times p_i}_{\star, q_i} \oplus \mathrm{CS}R^{p_i \times p_i}_{\diamond, q_i} \oplus \mathrm{CAS}R^{p_i \times p_i}$.

Lemma 4 reveals that any matrix $W \in R^{p_i \times p_i}$ can be uniquely written as $W = W_1 + W_2 + W_3$, where $W_1 \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}$, $W_2 \in \mathrm{CS}R^{p_i \times p_i}_{\diamond, q_i}$, and $W_3 \in \mathrm{CAS}R^{p_i \times p_i}$. We can then define the linear projection operators

$$\mathcal{L}_i : R^{p_i \times p_i} \longrightarrow \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}, \quad W \longmapsto W_1, \quad \text{for } i \in \Gamma. \tag{8}$$
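In Matlab terms, the three components of Lemma 4 and the operator $\mathcal{L}_i$ of (8) can be sketched as follows (the helper name is ours, not the authors'):

```matlab
function [W1, W2, W3] = lemma4_split(W, q)
% Orthogonal decomposition of Lemma 4: W = W1 + W2 + W3, where
% W1 is centro-symmetric with zero central q x q block (the *-subspace),
% W2 has only the central q x q block nonzero (the diamond subspace),
% W3 is the centro-skew-symmetric part.
p = size(W, 1);
J = fliplr(eye(p));
S  = (W + J*W*J) / 2;               % centro-symmetric part of W
W3 = (W - J*W*J) / 2;               % centro-skew-symmetric part of W
c  = (p - q) / 2;
W2 = zeros(p);
W2(c+1:c+q, c+1:c+q) = S(c+1:c+q, c+1:c+q);
W1 = S - W2;                        % this is L_i(W) in (8)
end
```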


According to the definition of $\mathcal{L}_i$, if $W \in R^{p_i \times p_i}$ and $Y \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}$, we have

$$\langle W, Y \rangle = \langle W_1, Y \rangle = \langle \mathcal{L}_i(W), Y \rangle. \tag{9}$$

This property will be employed frequently in what follows. The next theorem is essential for solving Problem 2: it transforms Problem 2 equivalently into the least-squares problem of another matrix equation.

Theorem 5. Any solution group of Problem 2 can be obtained by

$$(Z_1, Z_2, \ldots, Z_t) = (Y_1 + Z_1^\diamond, Y_2 + Z_2^\diamond, \ldots, Y_t + Z_t^\diamond), \tag{10}$$

where $Y_i \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}$ is a least-squares solution of the matrix equation

$$M_1 Y_1 N_1 + M_2 Y_2 N_2 + \cdots + M_t Y_t N_t = G, \tag{11}$$

in which

$$G = F - (M_1 Z_1^\diamond N_1 + M_2 Z_2^\diamond N_2 + \cdots + M_t Z_t^\diamond N_t), \quad Z_i^\diamond = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \mathbb{X}_i & 0 \\ 0 & 0 & 0 \end{pmatrix} \in \mathrm{CS}R^{p_i \times p_i}_{\diamond, q_i}, \tag{12}$$

and $\mathbb{X}_i \in \mathrm{CS}R^{q_i \times q_i}$ is the given central principal submatrix of $Z_i$ in Problem 2.

Proof. Noting the definition of $\mathrm{CS}R^{p_i \times p_i}_{\star, q_i}$, we have

$$\min_{Z_i \in \Pi_i} \Big\| \sum_{i=1}^{t} M_i Z_i N_i - F \Big\| \iff \min_{Y_i \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}} \Big\| \sum_{i=1}^{t} M_i (Y_i + Z_i^\diamond) N_i - F \Big\| \iff \min_{Y_i \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}} \Big\| \sum_{i=1}^{t} M_i Y_i N_i - G \Big\|.$$

The proof is completed.

Remark 6. It follows from Theorem 5 that Problem 2 can be solved completely by finding the least-squares solution of matrix equation (11) in the subspaces $\mathrm{CS}R^{p_i \times p_i}_{\star, q_i}$.

In the remainder of this section, we establish an iterative algorithm for (11) and analyze its properties. For convenience of expression, we define the matrix function

$$\mathcal{F}(Z_1, Z_2, \ldots, Z_t) = M_1 Z_1 N_1 + M_2 Z_2 N_2 + \cdots + M_t Z_t N_t; \tag{13}$$

then matrix equation (11) can be written compactly as

$$\mathcal{F}(Y_1, Y_2, \ldots, Y_t) = G. \tag{14}$$

Moreover, we can easily verify that

$$\langle X, \mathcal{F}(Z_1, Z_2, \ldots, Z_t) \rangle = \sum_{i=1}^{t} \langle M_i^T X N_i^T, Z_i \rangle \tag{15}$$

holds for arbitrary $X \in R^{m \times n}$.

The iterative algorithm for the least-squares problem of matrix equation (11) can be expressed as follows.

Algorithm 7. Consider the following.

Step 1. Let $M_i \in R^{m \times p_i}$, $N_i \in R^{p_i \times n}$, $F \in R^{m \times n}$, and $\mathbb{X}_i \in \mathrm{CS}R^{q_i \times q_i}$ for $i \in \Gamma$. Input arbitrary matrices $Y_i^{(0)} \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}$ and compute

$$R_0 = G - \mathcal{F}(Y_1^{(0)}, Y_2^{(0)}, \ldots, Y_t^{(0)}). \tag{16}$$

Step 2. Calculate

$$P_i^{(0)} = \mathcal{L}_i(M_i^T R_0 N_i^T), \quad Q_i^{(0)} = P_i^{(0)}, \quad k = 0.$$

Step 3. Calculate

$$Y_i^{(k+1)} = Y_i^{(k)} + \alpha_k Q_i^{(k)}, \quad \alpha_k = \frac{\sum_{i=1}^{t} \|P_i^{(k)}\|^2}{\|\mathcal{F}(Q_1^{(k)}, Q_2^{(k)}, \ldots, Q_t^{(k)})\|^2}. \tag{17}$$

Step 4. Calculate

$$R_{k+1} = G - \mathcal{F}(Y_1^{(k+1)}, \ldots, Y_t^{(k+1)}) = R_k - \alpha_k \mathcal{F}(Q_1^{(k)}, \ldots, Q_t^{(k)}),$$
$$P_i^{(k+1)} = \mathcal{L}_i(M_i^T R_{k+1} N_i^T) = P_i^{(k)} - \alpha_k \mathcal{L}_i(M_i^T \mathcal{F}(Q_1^{(k)}, \ldots, Q_t^{(k)}) N_i^T),$$
$$Q_i^{(k+1)} = P_i^{(k+1)} + \beta_k Q_i^{(k)}, \quad \beta_k = \frac{\sum_{i=1}^{t} \|P_i^{(k+1)}\|^2}{\sum_{i=1}^{t} \|P_i^{(k)}\|^2}. \tag{18}$$

Step 5. If $\sum_{i=1}^{t} \|P_i^{(k)}\|^2 = 0$, stop; otherwise, set $k := k + 1$ and go to Step 3.

From Algorithm 7, we can see that $Y_i^{(k)}, P_i^{(k)}, Q_i^{(k)} \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}$. In particular, $(Y_1^{(k)}, Y_2^{(k)}, \ldots, Y_t^{(k)})$ is a least-squares solution group of matrix equation (11) if $P_i^{(k)} = 0$ for all $i \in \Gamma$. The following lemma explains why.

Lemma 8. If $\mathcal{L}_i(M_i^T R_k N_i^T) = 0$ $(i = 1, 2, \ldots, t)$ hold simultaneously for some positive integer $k$, then $(Y_1^{(k)}, Y_2^{(k)}, \ldots, Y_t^{(k)})$ generated by Algorithm 7 is a least-squares solution group of matrix equation (11).

Proof. Let $\mathbb{L} = \{ L \mid L = \mathcal{F}(Y_1, \ldots, Y_t), \ Y_i \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i} \}$ and $\hat{G} = \mathcal{F}(Y_1^{(k)}, \ldots, Y_t^{(k)})$. Obviously, $\hat{G} \in \mathbb{L}$. By the projection theorem, $(Y_1^{(k)}, \ldots, Y_t^{(k)})$ is a least-squares solution group of (11) if and only if $G - \hat{G} \perp \mathbb{L}$; that is to say, for any matrices $Y_i \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}$, $\langle G - \hat{G}, \mathcal{F}(Y_1, \ldots, Y_t) \rangle = 0$. Noting Lemma 4 and (9), we have

$$\langle G - \hat{G}, \mathcal{F}(Y_1, \ldots, Y_t) \rangle = \langle R_k, \mathcal{F}(Y_1, \ldots, Y_t) \rangle = \sum_{i=1}^{t} \langle M_i^T R_k N_i^T, Y_i \rangle = \sum_{i=1}^{t} \langle \mathcal{L}_i(M_i^T R_k N_i^T), Y_i \rangle = 0, \tag{19}$$

which completes the proof.
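For concreteness, the following Matlab sketch of Algorithm 7 is ours, written for $t = 2$ with the initial matrices $Y_i^{(0)} = 0$; `proj_star` realizes the operator $\mathcal{L}_i$ exactly as in the Lemma 4 decomposition sketched earlier, and the helper names are hypothetical:

```matlab
function [Y1, Y2] = cs_lsq(M, N, G, q, maxit, tol)
% Sketch of Algorithm 7 (t = 2): M, N are 1x2 cell arrays with
% M{i} (m x p_i) and N{i} (p_i x n); G is the right-hand side of (11).
p = [size(M{1},2), size(M{2},2)];
Y = {zeros(p(1)), zeros(p(2))};            % Y_i^(0) = 0 (least-norm choice)
R = G - M{1}*Y{1}*N{1} - M{2}*Y{2}*N{2};   % R_0, Step 1
for i = 1:2, P{i} = proj_star(M{i}'*R*N{i}', q(i)); Q{i} = P{i}; end
normP = norm(P{1},'fro')^2 + norm(P{2},'fro')^2;
for k = 1:maxit
    if normP < tol, break; end             % Step 5 stopping rule
    FQ = M{1}*Q{1}*N{1} + M{2}*Q{2}*N{2};  % F(Q_1^(k), Q_2^(k))
    alpha = normP / norm(FQ,'fro')^2;      % Step 3
    for i = 1:2, Y{i} = Y{i} + alpha*Q{i}; end
    R = R - alpha*FQ;                      % Step 4
    newP = 0;
    for i = 1:2
        P{i} = proj_star(M{i}'*R*N{i}', q(i));
        newP = newP + norm(P{i},'fro')^2;
    end
    beta = newP / normP;
    for i = 1:2, Q{i} = P{i} + beta*Q{i}; end
    normP = newP;
end
Y1 = Y{1}; Y2 = Y{2};
end

function W1 = proj_star(W, q)
% L_i of (8): centro-symmetric part of W with its central q x q block zeroed.
p = size(W,1); J = fliplr(eye(p));
W1 = (W + J*W*J)/2;
c = (p-q)/2;
W1(c+1:c+q, c+1:c+q) = 0;
end
```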


In addition, the sequences $\{Y_i^{(l)}\}$, $\{P_i^{(l)}\}$, $\{Q_i^{(l)}\}$ generated by Algorithm 7 satisfy the following orthogonality relations.

Lemma 9. Suppose that the sequences $\{Y_i^{(l)}\}$, $\{P_i^{(l)}\}$, $\{Q_i^{(l)}\}$ generated by Algorithm 7 are nonzero for $l \leq k$; then

$$(1) \ \sum_{i=1}^{t} \langle P_i^{(j)}, P_i^{(l)} \rangle = 0, \tag{20}$$

$$(2) \ \sum_{i=1}^{t} \langle Q_i^{(j)}, P_i^{(l)} \rangle = 0, \tag{21}$$

$$(3) \ \langle \mathcal{F}(Q_1^{(j)}, Q_2^{(j)}, \ldots, Q_t^{(j)}), \mathcal{F}(Q_1^{(l)}, Q_2^{(l)}, \ldots, Q_t^{(l)}) \rangle = 0, \tag{22}$$

where $j, l = 1, 2, \ldots, k$ and $j \neq l \leq k$.

Proof. In view of the symmetry of the inner product, we only prove (20)-(22) for $j < l$. According to Algorithm 7, when $k = 1$ we have

$$\sum_{i=1}^{t} \langle P_i^{(0)}, P_i^{(1)} \rangle = \sum_{i=1}^{t} \langle P_i^{(0)}, P_i^{(0)} - \alpha_0 \mathcal{L}_i(M_i^T \mathcal{F}(Q_1^{(0)}, \ldots, Q_t^{(0)}) N_i^T) \rangle$$
$$= \sum_{i=1}^{t} \|P_i^{(0)}\|^2 - \alpha_0 \sum_{i=1}^{t} \langle M_i P_i^{(0)} N_i, \mathcal{F}(Q_1^{(0)}, \ldots, Q_t^{(0)}) \rangle = \sum_{i=1}^{t} \|P_i^{(0)}\|^2 - \alpha_0 \|\mathcal{F}(Q_1^{(0)}, \ldots, Q_t^{(0)})\|^2 = 0, \tag{23}$$

which also deduces that

$$\sum_{i=1}^{t} \langle Q_i^{(0)}, P_i^{(1)} \rangle = \sum_{i=1}^{t} \langle Q_i^{(0)}, P_i^{(0)} \rangle - \alpha_0 \sum_{i=1}^{t} \langle M_i Q_i^{(0)} N_i, \mathcal{F}(Q_1^{(0)}, \ldots, Q_t^{(0)}) \rangle = 0,$$

and, using (23) together with $R_1 = R_0 - \alpha_0 \mathcal{F}(Q_1^{(0)}, \ldots, Q_t^{(0)})$,

$$\langle \mathcal{F}(Q_1^{(0)}, \ldots, Q_t^{(0)}), \mathcal{F}(Q_1^{(1)}, \ldots, Q_t^{(1)}) \rangle = \langle \mathcal{F}(Q_1^{(0)}, \ldots, Q_t^{(0)}), \mathcal{F}(P_1^{(1)}, \ldots, P_t^{(1)}) \rangle + \beta_0 \|\mathcal{F}(Q_1^{(0)}, \ldots, Q_t^{(0)})\|^2$$
$$= \frac{1}{\alpha_0} \langle R_0 - R_1, \mathcal{F}(P_1^{(1)}, \ldots, P_t^{(1)}) \rangle + \beta_0 \|\mathcal{F}(Q_1^{(0)}, \ldots, Q_t^{(0)})\|^2$$
$$= \frac{1}{\alpha_0} \Big[ \sum_{i=1}^{t} \langle M_i^T R_0 N_i^T, P_i^{(1)} \rangle - \sum_{i=1}^{t} \langle M_i^T R_1 N_i^T, P_i^{(1)} \rangle \Big] + \beta_0 \|\mathcal{F}(Q_1^{(0)}, \ldots, Q_t^{(0)})\|^2$$
$$= -\frac{1}{\alpha_0} \sum_{i=1}^{t} \|P_i^{(1)}\|^2 + \beta_0 \|\mathcal{F}(Q_1^{(0)}, \ldots, Q_t^{(0)})\|^2 = 0. \tag{24}$$


Assume that (20), (21), and (22) hold for a positive integer $s$ $(< k)$; that is, for $j = 1, 2, \ldots, s - 1$,

$$\sum_{i=1}^{t} \langle P_i^{(j)}, P_i^{(s)} \rangle = 0, \quad \sum_{i=1}^{t} \langle Q_i^{(j)}, P_i^{(s)} \rangle = 0, \quad \langle \mathcal{F}(Q_1^{(j)}, \ldots, Q_t^{(j)}), \mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)}) \rangle = 0. \tag{25}$$

Then, arguing as above and using these assumptions, we have

$$\sum_{i=1}^{t} \langle P_i^{(j)}, P_i^{(s+1)} \rangle = \sum_{i=1}^{t} \langle P_i^{(j)}, P_i^{(s)} - \alpha_s \mathcal{L}_i(M_i^T \mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)}) N_i^T) \rangle$$
$$= \sum_{i=1}^{t} \langle P_i^{(j)}, P_i^{(s)} \rangle - \alpha_s \sum_{i=1}^{t} \langle M_i P_i^{(j)} N_i, \mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)}) \rangle$$
$$= -\alpha_s \sum_{i=1}^{t} \langle M_i (Q_i^{(j)} - \beta_{j-1} Q_i^{(j-1)}) N_i, \mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)}) \rangle$$
$$= -\alpha_s \langle \mathcal{F}(Q_1^{(j)}, \ldots, Q_t^{(j)}), \mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)}) \rangle + \alpha_s \beta_{j-1} \langle \mathcal{F}(Q_1^{(j-1)}, \ldots, Q_t^{(j-1)}), \mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)}) \rangle = 0. \tag{26}$$

Furthermore,

$$\sum_{i=1}^{t} \langle Q_i^{(j)}, P_i^{(s+1)} \rangle = \sum_{i=1}^{t} \langle Q_i^{(j)}, P_i^{(s)} \rangle - \alpha_s \sum_{i=1}^{t} \langle M_i Q_i^{(j)} N_i, \mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)}) \rangle = 0,$$

and

$$\langle \mathcal{F}(Q_1^{(j)}, \ldots, Q_t^{(j)}), \mathcal{F}(Q_1^{(s+1)}, \ldots, Q_t^{(s+1)}) \rangle = \langle \mathcal{F}(Q_1^{(j)}, \ldots, Q_t^{(j)}), \mathcal{F}(P_1^{(s+1)}, \ldots, P_t^{(s+1)}) \rangle + \beta_s \langle \mathcal{F}(Q_1^{(j)}, \ldots, Q_t^{(j)}), \mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)}) \rangle$$
$$= \frac{1}{\alpha_j} \langle R_j - R_{j+1}, \mathcal{F}(P_1^{(s+1)}, \ldots, P_t^{(s+1)}) \rangle = \frac{1}{\alpha_j} \Big[ \sum_{i=1}^{t} \langle P_i^{(j)}, P_i^{(s+1)} \rangle - \sum_{i=1}^{t} \langle P_i^{(j+1)}, P_i^{(s+1)} \rangle \Big] = 0. \tag{27}$$

The last equality holds because $\sum_{i=1}^{t} \langle P_i^{(s)}, P_i^{(s+1)} \rangle = 0$. In fact, from the hypothesis we deduce

$$\sum_{i=1}^{t} \langle P_i^{(s)}, P_i^{(s+1)} \rangle = \sum_{i=1}^{t} \langle P_i^{(s)}, P_i^{(s)} - \alpha_s \mathcal{L}_i(M_i^T \mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)}) N_i^T) \rangle$$
$$= \sum_{i=1}^{t} \|P_i^{(s)}\|^2 - \alpha_s \sum_{i=1}^{t} \langle M_i (Q_i^{(s)} - \beta_{s-1} Q_i^{(s-1)}) N_i, \mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)}) \rangle$$
$$= \sum_{i=1}^{t} \|P_i^{(s)}\|^2 - \alpha_s \|\mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)})\|^2 + \alpha_s \beta_{s-1} \langle \mathcal{F}(Q_1^{(s-1)}, \ldots, Q_t^{(s-1)}), \mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)}) \rangle = 0. \tag{28}$$

Moreover,


$$\sum_{i=1}^{t} \langle Q_i^{(s)}, P_i^{(s)} \rangle = \sum_{i=1}^{t} \langle P_i^{(s)} + \beta_{s-1} Q_i^{(s-1)}, P_i^{(s)} \rangle = \sum_{i=1}^{t} \|P_i^{(s)}\|^2 + \beta_{s-1} \sum_{i=1}^{t} \langle Q_i^{(s-1)}, P_i^{(s)} \rangle = \sum_{i=1}^{t} \|P_i^{(s)}\|^2. \tag{29}$$

It follows from (29) that

$$\sum_{i=1}^{t} \langle Q_i^{(s)}, P_i^{(s+1)} \rangle = \sum_{i=1}^{t} \langle Q_i^{(s)}, P_i^{(s)} \rangle - \alpha_s \sum_{i=1}^{t} \langle M_i Q_i^{(s)} N_i, \mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)}) \rangle = \sum_{i=1}^{t} \|P_i^{(s)}\|^2 - \alpha_s \|\mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)})\|^2 = 0,$$

and

$$\langle \mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)}), \mathcal{F}(Q_1^{(s+1)}, \ldots, Q_t^{(s+1)}) \rangle = \langle \mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)}), \mathcal{F}(P_1^{(s+1)}, \ldots, P_t^{(s+1)}) \rangle + \beta_s \|\mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)})\|^2$$
$$= \frac{1}{\alpha_s} \langle R_s - R_{s+1}, \mathcal{F}(P_1^{(s+1)}, \ldots, P_t^{(s+1)}) \rangle + \beta_s \|\mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)})\|^2 = -\frac{1}{\alpha_s} \sum_{i=1}^{t} \|P_i^{(s+1)}\|^2 + \beta_s \|\mathcal{F}(Q_1^{(s)}, \ldots, Q_t^{(s)})\|^2 = 0. \tag{30}$$

Therefore, the conclusions hold for arbitrary integers $j \neq l \leq k$. The proof is completed.

Remark 10. We know from Lemma 9 that the block-diagonal matrix sequences

$$\begin{pmatrix} P_{0,1} & & \\ & \ddots & \\ & & P_{0,t} \end{pmatrix}, \ \begin{pmatrix} P_{1,1} & & \\ & \ddots & \\ & & P_{1,t} \end{pmatrix}, \ \ldots, \ \begin{pmatrix} P_{k,1} & & \\ & \ddots & \\ & & P_{k,t} \end{pmatrix}, \ \ldots \tag{31}$$

(where $P_{k,i} = P_i^{(k)}$) are orthogonal to each other in the matrix space $R^{\sum_{i=1}^{t} p_i \times \sum_{i=1}^{t} p_i}$. Hence, the iteration terminates in at most $\sum_{i=1}^{t} (p_i + q_i)(p_i - q_i)/2$ steps in the absence of roundoff errors; that is, there exists a positive integer $k \leq \sum_{i=1}^{t} (p_i + q_i)(p_i - q_i)/2$ such that $\sum_{i=1}^{t} \|P_i^{(k)}\|^2 = 0$, in which case $(Y_1^{(k)}, Y_2^{(k)}, \ldots, Y_t^{(k)})$ can be regarded as a least-squares solution group of matrix equation (11).

In addition, we should point out that if $\alpha_k = 0$ or $\alpha_k = \infty$, the iteration would break down before $P_i^{(k)} = 0$ with $k < \sum_{i=1}^{t} (p_i + q_i)(p_i - q_i)/2$; however, neither case causes difficulty. Indeed, $\alpha_k = 0$ implies $\sum_{i=1}^{t} \|P_i^{(k)}\|^2 = 0$, so $P_i^{(k)} = 0$ for all $i \in \Gamma$. On the other hand, $\alpha_k = \infty$ implies $\mathcal{F}(Q_1^{(k)}, \ldots, Q_t^{(k)}) = 0$; taking the inner product with $R_k$ on both sides, it follows from Algorithm 7 and Lemma 9 that

$$\langle R_k, \mathcal{F}(Q_1^{(k)}, \ldots, Q_t^{(k)}) \rangle = \sum_{i=1}^{t} \langle P_i^{(k)}, P_i^{(k)} + \beta_{k-1} Q_i^{(k-1)} \rangle = \sum_{i=1}^{t} \|P_i^{(k)}\|^2 + \beta_{k-1} \sum_{i=1}^{t} \langle P_i^{(k)}, Q_i^{(k-1)} \rangle = \sum_{i=1}^{t} \|P_i^{(k)}\|^2 = 0, \tag{32}$$

which leads to the same situation as $\alpha_k = 0$. Hence, if there exists a positive integer $s$ such that $\alpha_s = 0$ or $\alpha_s = \infty$, then the corresponding matrix group $(Y_1^{(s)}, Y_2^{(s)}, \ldots, Y_t^{(s)})$ is already a least-squares solution of matrix equation (11).

Together with the above analysis and Lemma 9, we can conclude the following theorem.

Theorem 11. For any initial iteration matrices $Y_i^{(0)} \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}$, $i = 1, 2, \ldots, t$, a least-squares solution of matrix equation (11) can be obtained within finitely many iteration steps. Moreover, suppose that $(\breve{Y}_1, \breve{Y}_2, \ldots, \breve{Y}_t)$ is a least-squares solution group of (11); then the general solution of Problem 2 can be expressed as $Z_i = \breve{Y}_i + X_i + Z_i^\diamond$ $(i = 1, 2, \ldots, t)$, where the $X_i \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}$ satisfy the homogeneous equation

$$\mathcal{F}(X_1, X_2, \ldots, X_t) = 0, \tag{33}$$

and $Z_i^\diamond$ is as in Theorem 5.
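In computational terms, Theorem 11 (with $X_i = 0$) says that a Problem 2 solution is recovered by padding the prescribed central block back into the algorithm's output, as in (10). A minimal Matlab sketch, with a helper name of our own choosing:

```matlab
function Z = assemble_solution(Y, X)
% Z = Y + Z^<>: embed the prescribed central principal submatrix X
% into the centro-symmetric matrix Y whose central block is zero
% (Theorem 5). Requires size(Y,1) - size(X,1) to be even.
p = size(Y, 1); q = size(X, 1);
c = (p - q) / 2;
Z = Y;
Z(c+1:c+q, c+1:c+q) = Z(c+1:c+q, c+1:c+q) + X;
end
```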


In order to show the validity of Theorem 11, it is adequate to prove the following proposition.

Proposition 12. Every least-squares solution group of matrix equation (11) can be expressed as $(\breve{Y}_1 + X_1, \ldots, \breve{Y}_t + X_t)$, where $(\breve{Y}_1, \ldots, \breve{Y}_t)$ is a fixed least-squares solution group of (11) and the $X_i$ satisfy equality (33).

Proof. According to the assumptions, we obtain

$$\min_{Y_i \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}} \|\mathcal{F}(Y_1, Y_2, \ldots, Y_t) - G\| = \|\mathcal{F}(\breve{Y}_1, \breve{Y}_2, \ldots, \breve{Y}_t) - G\|. \tag{34}$$

On the other hand, noting that $\mathcal{F}(X_1, X_2, \ldots, X_t) = 0$, we have

$$\|\mathcal{F}(\breve{Y}_1 + X_1, \ldots, \breve{Y}_t + X_t) - G\|^2 = \|\mathcal{F}(\breve{Y}_1, \ldots, \breve{Y}_t) - G + \mathcal{F}(X_1, \ldots, X_t)\|^2 = \|\mathcal{F}(\breve{Y}_1, \ldots, \breve{Y}_t) - G\|^2. \tag{35}$$

The proof is completed.

Next, we show that the unique least-norm least-squares solution of matrix equation (11) can be derived by choosing a special kind of initial iteration matrices.

Theorem 13. Let the initial iteration matrices be $Y_i^{(0)} = \mathcal{L}_i(M_i^T H N_i^T)$ with arbitrary $H \in R^{m \times n}$, $i = 1, 2, \ldots, t$. Then $(Y_1^*, Y_2^*, \ldots, Y_t^*)$ generated by Algorithm 7 is the least-norm least-squares solution group of matrix equation (11). Furthermore, the least-norm solution group of Problem 2 can be expressed as

$$(Z_1^*, Z_2^*, \ldots, Z_t^*) = (Y_1^* + Z_1^\diamond, Y_2^* + Z_2^\diamond, \ldots, Y_t^* + Z_t^\diamond). \tag{36}$$

Proof. From Algorithm 7 and Theorem 11, for initial iteration matrices $Y_i^{(0)} = \mathcal{L}_i(M_i^T H N_i^T)$ we obtain a least-squares solution $Y_i^*$ of matrix equation (11), and there exists a matrix $H^*$ such that $Y_i^* = \mathcal{L}_i(M_i^T H^* N_i^T)$. Hence, it is enough to prove that $Y_i^*$ is the least-norm solution. In fact, noting (33) and Proposition 12, we have

$$\sum_{i=1}^{t} \|Y_i^* + X_i\|^2 = \sum_{i=1}^{t} (\|Y_i^*\|^2 + \|X_i\|^2 + 2\langle Y_i^*, X_i \rangle) = \sum_{i=1}^{t} \|Y_i^*\|^2 + \sum_{i=1}^{t} \|X_i\|^2 + 2\sum_{i=1}^{t} \langle \mathcal{L}_i(M_i^T H^* N_i^T), X_i \rangle \geq \sum_{i=1}^{t} \|Y_i^*\|^2, \tag{37}$$

as required, where the cross term vanishes because $\sum_{i=1}^{t} \langle \mathcal{L}_i(M_i^T H^* N_i^T), X_i \rangle = \sum_{i=1}^{t} \langle M_i^T H^* N_i^T, X_i \rangle = \langle H^*, \mathcal{F}(X_1, \ldots, X_t) \rangle = 0$ by (9), (15), and (33).

Theorems 11 and 13 display the efficiency of Algorithm 7. In fact, the iteration sequence $\{Y_i^{(k)}\}$ converges smoothly to the solution $Y_i$, which is the minimization property of Algorithm 7 stated next.

Theorem 14. For any initial iteration matrices $Y_i^{(0)}$, the iterates $Y_i^{(k)}$ generated by Algorithm 7 satisfy the minimization problem

$$\|\mathcal{F}(Y_1^{(k)}, Y_2^{(k)}, \ldots, Y_t^{(k)}) - G\| = \min_{Y_i \in \mathbb{L}_i} \|\mathcal{F}(Y_1, Y_2, \ldots, Y_t) - G\|, \tag{38}$$

where $\mathbb{L}_i = Y_i^{(0)} + \operatorname{span}\{Q_i^{(0)}, Q_i^{(1)}, \ldots, Q_i^{(k-1)}\}$, $i = 1, 2, \ldots, t$.

Proof. By the definition of $\mathbb{L}_i$, there exist real numbers $a_0, a_1, \ldots, a_{k-1}$ such that $Y_i = Y_i^{(0)} + \sum_{l=0}^{k-1} a_l Q_i^{(l)}$. Define the function of $k$ variables

$$f(a_0, a_1, \ldots, a_{k-1}) = \Big\| \sum_{i=1}^{t} M_i \Big( Y_i^{(0)} + \sum_{l=0}^{k-1} a_l Q_i^{(l)} \Big) N_i - G \Big\|^2. \tag{39}$$

In addition, from Algorithm 7 we know that

$$R_0 = R_l + \alpha_{l-1} \mathcal{F}(Q_1^{(l-1)}, \ldots, Q_t^{(l-1)}) + \cdots + \alpha_0 \mathcal{F}(Q_1^{(0)}, \ldots, Q_t^{(0)}). \tag{40}$$

Noting (22) and taking the inner product with $\mathcal{F}(Q_1^{(l)}, \ldots, Q_t^{(l)})$ on both sides of (40) yields

$$\langle \mathcal{F}(Q_1^{(l)}, \ldots, Q_t^{(l)}), R_0 \rangle = \langle \mathcal{F}(Q_1^{(l)}, \ldots, Q_t^{(l)}), R_l \rangle. \tag{41}$$

Hence, by a simple calculation using (40) and (41), the function can be rewritten as

$$f(a_0, a_1, \ldots, a_{k-1}) = \|R_0\|^2 + \sum_{l=0}^{k-1} a_l^2 \|\mathcal{F}(Q_1^{(l)}, \ldots, Q_t^{(l)})\|^2 - 2\sum_{l=0}^{k-1} a_l \langle \mathcal{F}(Q_1^{(l)}, \ldots, Q_t^{(l)}), R_l \rangle. \tag{42}$$

Then

$$\min_{a_l} f(a_0, a_1, \ldots, a_{k-1}) \iff \min_{Y_i \in \mathbb{L}_i} \|\mathcal{F}(Y_1, Y_2, \ldots, Y_t) - G\|. \tag{43}$$


Since $f(a_0, a_1, \ldots, a_{k-1})$ attains its minimum only if $\partial f / \partial a_l = 0$, it follows from (29) that

$$a_l = \frac{\langle \mathcal{F}(Q_1^{(l)}, \ldots, Q_t^{(l)}), R_l \rangle}{\|\mathcal{F}(Q_1^{(l)}, \ldots, Q_t^{(l)})\|^2} = \frac{\sum_{i=1}^{t} \langle Q_i^{(l)}, P_i^{(l)} \rangle}{\|\mathcal{F}(Q_1^{(l)}, \ldots, Q_t^{(l)})\|^2} = \frac{\sum_{i=1}^{t} \|P_i^{(l)}\|^2}{\|\mathcal{F}(Q_1^{(l)}, \ldots, Q_t^{(l)})\|^2} = \alpha_l. \tag{44}$$

Combined with (43), this completes the proof.

Theorem 14 reveals that the sequence

$$\{ \|\mathcal{F}(Y_1^{(k)}, Y_2^{(k)}, \ldots, Y_t^{(k)}) - G\| \} \tag{45}$$

decreases monotonically as $k$ increases. This descent property of the residual norm of matrix equation (11) accounts for the smooth convergence of Algorithm 7.

3. The Solution of Problem 3

In this section, we discuss the optimal approximation Problem 3. Since the least-squares problem is always consistent, it is easy to verify that the solution set $S_E$ of Problem 2 is a nonempty closed convex set, so the optimal approximation solution is unique.

Without loss of generality, we may assume that the given matrices satisfy $\tilde{Z}_i \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}$. In fact, from Lemma 4, an arbitrary $\tilde{Z}_i \in R^{p_i \times p_i}$ can be decomposed as

$$\tilde{Z}_i = \tilde{Z}_{i1} + \tilde{Z}_{i2} + \tilde{Z}_{i3}, \quad \tilde{Z}_{i1} \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}, \ \tilde{Z}_{i2} \in \mathrm{CS}R^{p_i \times p_i}_{\diamond, q_i}, \ \tilde{Z}_{i3} \in \mathrm{CAS}R^{p_i \times p_i}. \tag{46}$$

Furthermore, if $Z_i \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}$, then

$$\|Z_i - \tilde{Z}_i\|^2 = \|Z_i - \tilde{Z}_{i1}\|^2 + \|\tilde{Z}_{i2}\|^2 + \|\tilde{Z}_{i3}\|^2, \tag{47}$$

which meets the claim.

Denote $\bar{Z}_i = Z_i - \tilde{Z}_i$ and $\bar{F} = F - \mathcal{F}(\tilde{Z}_1, \tilde{Z}_2, \ldots, \tilde{Z}_t)$; then solving Problem 3 is equivalent to finding the least-norm solution of the new matrix equation

$$\mathcal{F}(\bar{Z}_1, \bar{Z}_2, \ldots, \bar{Z}_t) = \bar{F}. \tag{48}$$

Furthermore, similar to the construction of (11), this problem is transformed equivalently into finding the least-norm least-squares solution of the matrix equation

$$\mathcal{F}(\bar{Y}_1, \bar{Y}_2, \ldots, \bar{Y}_t) = \bar{G}, \quad \text{with } \bar{Y}_i \in \mathrm{CS}R^{p_i \times p_i}_{\star, q_i}, \tag{49}$$

in which $\bar{G} = \bar{F} - \mathcal{F}(\bar{Z}_1^\diamond, \bar{Z}_2^\diamond, \ldots, \bar{Z}_t^\diamond)$ and $\bar{Z}_i^\diamond$ carries the prescribed central principal submatrix of $\bar{Z}_i$. Therefore, we can apply Algorithm 7 to derive the required solution of matrix equation (49). In fact, it follows from Theorem 13 that if the initial iteration matrices are chosen as $\bar{Y}_i^{(0)} = \mathcal{L}_i(M_i^T \tilde{H} N_i^T)$ with arbitrary $\tilde{H} \in R^{m \times n}$, or especially $\bar{Y}_i^{(0)} = 0 \in R^{p_i \times p_i}$, then the iteration solutions $\bar{Y}_i^*$ constitute the least-norm least-squares solution of (49). In this case, the unique optimal approximation solution of Problem 3 can be obtained by

$$(\hat{Z}_1, \hat{Z}_2, \ldots, \hat{Z}_t) = (\bar{Y}_1^* + \tilde{Z}_1 + \bar{Z}_1^\diamond, \ \bar{Y}_2^* + \tilde{Z}_2 + \bar{Z}_2^\diamond, \ \ldots, \ \bar{Y}_t^* + \tilde{Z}_t + \bar{Z}_t^\diamond). \tag{50}$$
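Put together, the Problem 3 recipe above is a short driver around Algorithm 7. The following sketch (for $t = 2$, using the hypothetical helpers `cs_lsq` and `assemble_solution` from earlier, and assuming $\tilde{Z}_i$ has already been reduced to $\mathrm{CS}R_{\star,q_i}$ as in (46), so that $\bar{Z}_i^\diamond = Z_i^\diamond$) is ours, not the authors' code:

```matlab
% Given M, N (1x2 cells), F, prescribed central blocks X (1x2 cell),
% and reduced preliminary estimates Ztil (1x2 cell): solve Problem 3
% via (48)-(50) with the zero initial guess (least-norm, Theorem 13).
p = [size(Ztil{1},1), size(Ztil{2},1)];
q = [size(X{1},1),    size(X{2},1)];
Fbar = F - M{1}*Ztil{1}*N{1} - M{2}*Ztil{2}*N{2};    % (48)
Zd1 = assemble_solution(zeros(p(1)), X{1});          % Z_1^<> padding
Zd2 = assemble_solution(zeros(p(2)), X{2});          % Z_2^<> padding
Gbar = Fbar - M{1}*Zd1*N{1} - M{2}*Zd2*N{2};         % right side of (49)
[Y1, Y2] = cs_lsq(M, N, Gbar, q, 500, 1e-8);         % Y^(0) = 0 inside
Zhat1 = Y1 + Zd1 + Ztil{1};                          % (50)
Zhat2 = Y2 + Zd2 + Ztil{2};
```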

4. Numerical Example

In this section, we illustrate the efficiency and reliability of Algorithm 7 with numerical experiments, all performed in Matlab 6.5. Because of the influence of roundoff errors, $P_i^{(k)}$ may not reach exactly zero within finitely many iteration steps, so the iteration is terminated once $\sum_{i=1}^{t} \|P_i^{(k)}\|^2 < \epsilon$, for example, $\epsilon = 1.0 \times 10^{-8}$. At that point, $(Y_1^{(k)}, Y_2^{(k)}, \ldots, Y_t^{(k)})$ can be regarded as a least-squares solution of matrix equation (11), and $Y_i^{(k)} + Z_i^\diamond$ $(i = 1, 2, \ldots, t)$ constitute a solution group of Problem 2. In particular, taking the initial iteration matrices $Y_i^{(0)} = 0$ yields the least-norm solution via (36).

Example 1. Input matrices $M_1, M_2, M_3, N_1, N_2, N_3$, and $F$ as follows (in Matlab notation):

$$M_1 = \begin{pmatrix} \mathrm{ones}(\tfrac{r}{2}) & \mathrm{hilb}(\tfrac{r}{2}) \\ \mathrm{hankel}(1{:}\tfrac{r}{2}) & \mathrm{zeros}(\tfrac{r}{2}) \end{pmatrix}, \quad M_2 = \begin{pmatrix} \mathrm{hilb}(\tfrac{r}{2}) & \mathrm{toeplitz}(1{:}\tfrac{r}{2}) \\ \mathrm{ones}(\tfrac{r}{2}) & \mathrm{hankel}(1{:}\tfrac{r}{2}) \end{pmatrix}, \quad M_3 = \begin{pmatrix} \mathrm{zeros}(\tfrac{r}{2}) & \mathrm{hankel}(1{:}\tfrac{r}{2}) \\ \mathrm{hilb}(\tfrac{r}{2}) & \mathrm{ones}(\tfrac{r}{2}) \end{pmatrix},$$

$$N_1 = \mathrm{eye}(r), \quad N_2 = \mathrm{ones}(r), \quad N_3 = \mathrm{tridiag}([7, 1, -1], r), \quad F = \begin{pmatrix} 3 & -2 & -1 & & \\ -2 & 3 & -2 & \ddots & \\ -1 & -2 & \ddots & \ddots & -1 \\ & \ddots & \ddots & 3 & -2 \\ & & -1 & -2 & 3 \end{pmatrix}, \tag{51}$$

where $\mathrm{toeplitz}(k)$, $\mathrm{hilb}(k)$, $\mathrm{hankel}(k)$, $\mathrm{zeros}(k)$, and $\mathrm{eye}(k)$ denote, respectively, the Toeplitz, Hilbert, Hankel, null, and identity matrices of order $k$; all elements of $\mathrm{ones}(\cdot)$ equal one; and $\mathrm{tridiag}([7, 1, -1], r)$ represents the $r \times r$ tridiagonal matrix produced by the vector $[7, 1, -1]$.
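A runnable construction of the Example 1 data is sketched below; note that `tridiag` is not a Matlab builtin, so the line building $N_3$ is our assumption about the intended convention (subdiagonal 7, diagonal 1, superdiagonal -1), which the paper does not spell out:

```matlab
r = 20;                                    % problem size used in Figure 1
v = 1:r/2;
M1 = [ones(r/2),  hilb(r/2);   hankel(v),  zeros(r/2)];
M2 = [hilb(r/2),  toeplitz(v); ones(r/2),  hankel(v)];
M3 = [zeros(r/2), hankel(v);   hilb(r/2),  ones(r/2)];
N1 = eye(r);
N2 = ones(r);
% tridiag([7,1,-1], r): assumed here to mean 7 on the subdiagonal,
% 1 on the diagonal, and -1 on the superdiagonal.
N3 = eye(r) + 7*diag(ones(r-1,1),-1) - diag(ones(r-1,1),1);
% F: banded symmetric Toeplitz matrix with 3 on the diagonal,
% -2 on the first and -1 on the second off-diagonals, as in (51).
F = toeplitz([3, -2, -1, zeros(1, r-3)]);
```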


Let the given central principal submatrices be

$$\mathbb{X}_1 = \mathrm{zeros}\Big(\frac{r}{2}\Big), \quad \mathbb{X}_2 = \mathrm{ones}\Big(\frac{r}{2}\Big) * 10, \quad \mathbb{X}_3 = \mathrm{toeplitz}\Big(1{:}\frac{r}{2}\Big). \tag{52}$$

By using Algorithm 7, we obtain the solution to Problem 2. To save space, we do not report the explicit entries of the solution; instead, bar graphs of the solution matrices are given. Let $r = 20$; Figure 1 shows the bar graphs of $Z_1, Z_2, Z_3$ when we choose the initial iteration matrices

$$Y_i = \begin{pmatrix} \mathrm{ones}(\frac{r-q}{2}) & \mathrm{ones}(\frac{r-q}{2}, q) & \mathrm{ones}(\frac{r-q}{2}) \\ \mathrm{ones}(q, \frac{r-q}{2}) & \mathrm{zeros}(q) & \mathrm{ones}(q, \frac{r-q}{2}) \\ \mathrm{ones}(\frac{r-q}{2}) & \mathrm{ones}(\frac{r-q}{2}, q) & \mathrm{ones}(\frac{r-q}{2}) \end{pmatrix}, \quad i = 1, 2, 3, \tag{53}$$

and the termination condition $\|P_1^{(k)}\| + \|P_2^{(k)}\| + \|P_3^{(k)}\| < \epsilon = 1.0 \times 10^{-12}$. Moreover, for $r = 20$ and $r = 40$, the convergence curves of the Frobenius norm of the residual, denoted by RES $= \|R_k\|$, and of the termination quantity, denoted by TC $= \|P_1^{(k)}\| + \|P_2^{(k)}\| + \|P_3^{(k)}\|$, are plotted in Figures 2 and 3, respectively.

Figure 1: The bar graphs of $Z_1$, $Z_2$, and $Z_3$ when $r = 20$.

Figure 2: The convergence curve of the Frobenius norm of the residual versus the iteration number $k$ ($r = 20$, $q = 10$ and $r = 40$, $q = 20$).

Figure 3: The convergence curve of the Frobenius norm of the termination condition versus the iteration number $k$ ($r = 20$, $q = 10$ and $r = 40$, $q = 20$).

From Figure 2, we can see that the residual norm produced by Algorithm 7 decreases monotonically, in accordance with the theory established in Theorem 14; that is, the algorithm is numerically stable. Figure 3 shows that the termination quantity $\|P_1^{(k)}\| + \|P_2^{(k)}\| + \|P_3^{(k)}\|$ oscillates back and forth while approaching zero as the iteration proceeds. Hence, the iterative Algorithm 7 is efficient, although the termination quantity itself does not decrease smoothly. Of course, for problems with large and sparse matrices, Algorithm 7 may fail to terminate in a finite number of steps because of roundoff errors. How to establish an efficient algorithm with smooth convergence is an important problem for future work.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to express their sincere gratitude to the editor and two anonymous reviewers for their valuable comments and suggestions, which have helped immensely in improving the quality of the paper. Mao-lin Liang acknowledges the support of the scientific foundation of Tianshui Normal University (no. TSA1104). Yong-hong Shen is supported by the "QingLan" Talent Engineering Funds of Tianshui Normal University.

References

[1] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, Md, USA, 1996.
[2] L. Datta and S. D. Morgera, "On the reducibility of centrosymmetric matrices—applications in engineering problems," Circuits, Systems, and Signal Processing, vol. 8, no. 1, pp. 71–96, 1989.
[3] J. Respondek, "Controllability of dynamical systems with constraints," Systems & Control Letters, vol. 54, no. 4, pp. 293–314, 2005.
[4] I. S. Pressman, "Matrices with multiple symmetry properties: applications of centro-Hermitian and per-Hermitian matrices," Linear Algebra and its Applications, vol. 284, no. 1–3, pp. 239–258, 1998.
[5] F.-Z. Zhou, X.-Y. Hu, and L. Zhang, "The solvability conditions for the inverse eigenvalue problems of centro-symmetric matrices," Linear Algebra and Its Applications, vol. 364, pp. 147–160, 2003.
[6] D. Boley and G. H. Golub, "A survey of matrix inverse eigenvalue problems," Inverse Problems, vol. 3, no. 4, pp. 595–622, 1987.
[7] Z.-J. Bai, "The inverse eigenproblem of centrosymmetric matrices with a submatrix constraint and its approximation," SIAM Journal on Matrix Analysis and Applications, vol. 26, no. 4, pp. 1100–1114, 2005.
[8] C. C. Paige, "Computing the generalized singular value decomposition," SIAM Journal on Scientific and Statistical Computing, vol. 7, no. 4, pp. 1126–1146, 1986.
[9] X.-P. Pan, X.-Y. Hu, and L. Zhang, "A class of constrained inverse eigenproblem and associated approximation problem for skew symmetric and centrosymmetric matrices," Linear Algebra and its Applications, vol. 408, pp. 66–77, 2005.
[10] F.-Z. Zhou, L. Zhang, and X.-Y. Hu, "Least-square solutions for inverse problems of centrosymmetric matrices," Computers & Mathematics with Applications, vol. 45, no. 10-11, pp. 1581–1589, 2003.
[11] F.-L. Li, X.-Y. Hu, and L. Zhang, "Left and right inverse eigenpairs problem of skew-centrosymmetric matrices," Applied Mathematics and Computation, vol. 177, no. 1, pp. 105–110, 2006.
[12] O. Rojo and H. Rojo, "Some results on symmetric circulant matrices and on symmetric centrosymmetric matrices," Linear Algebra and its Applications, vol. 392, pp. 211–233, 2004.
[13] D. X. Xie, X. Y. Hu, and Y.-P. Sheng, "The solvability conditions for the inverse eigenproblems of symmetric and generalized centro-symmetric matrices and their approximations," Linear Algebra and Its Applications, vol. 418, no. 1, pp. 142–152, 2006.
[14] Z. Y. Liu and H. Faßbender, "Some properties of generalized K-centrosymmetric H-matrices," Journal of Computational and Applied Mathematics, vol. 215, no. 1, pp. 38–48, 2008.
[15] F.-L. Li, X.-Y. Hu, and L. Zhang, "Left and right inverse eigenpairs problem of generalized centrosymmetric matrices and its optimal approximation problem," Applied Mathematics and Computation, vol. 212, no. 2, pp. 481–487, 2009.
[16] M.-L. Liang, C.-H. You, and L.-F. Dai, "An efficient algorithm for the generalized centro-symmetric solution of matrix equation AXB = C," Numerical Algorithms, vol. 44, no. 2, pp. 173–184, 2007.
[17] Q. X. Yin, "Construction of real antisymmetric and bi-antisymmetric matrices with prescribed spectrum data," Linear Algebra and Its Applications, vol. 389, pp. 95–106, 2004.
[18] P. Deift and T. Nanda, "On the determination of a tridiagonal matrix from its spectrum and a submatrix," Linear Algebra and its Applications, vol. 60, pp. 43–55, 1984.
[19] L. S. Gong, X. Y. Hu, and L. Zhang, "The expansion problem of anti-symmetric matrix under a linear constraint and the optimal approximation," Journal of Computational and Applied Mathematics, vol. 197, no. 1, pp. 44–52, 2006.
[20] Y. X. Yuan and H. Dai, "Inverse problems for symmetric matrices with a submatrix constraint," Applied Numerical Mathematics, vol. 57, no. 5–7, pp. 646–656, 2007.
[21] L. J. Zhao, X. Y. Hu, and L. Zhang, "Least squares solutions to AX = B for bisymmetric matrices under a central principal submatrix constraint and the optimal approximation," Linear Algebra and its Applications, vol. 428, no. 4, pp. 871–880, 2008.
[22] A.-P. Liao and Y. Lei, "Least-squares solutions of matrix inverse problem for bi-symmetric matrices with a submatrix constraint," Numerical Linear Algebra with Applications, vol. 14, no. 5, pp. 425–444, 2007.
[23] G. P. Xu, M. S. Wei, and D. S. Zheng, "On solutions of matrix equation AXB + CYD = F," Linear Algebra and Its Applications, vol. 279, no. 1–3, pp. 93–109, 1998.
[24] B. Zhou and G.-R. Duan, "On the generalized Sylvester mapping and matrix equations," Systems & Control Letters, vol. 57, no. 3, pp. 200–208, 2008.
[25] A.-G. Wu, G. Feng, G.-R. Duan, and W.-J. Wu, "Finite iterative solutions to a class of complex matrix equations with conjugate and transpose of the unknowns," Mathematical and Computer Modelling, vol. 52, no. 9-10, pp. 1463–1478, 2010.
[26] F. Ding and T. Chen, "Iterative least-squares solutions of coupled Sylvester matrix equations," Systems & Control Letters, vol. 54, no. 2, pp. 95–107, 2005.
[27] Z.-H. Peng, X.-Y. Hu, and L. Zhang, "The bisymmetric solutions of the matrix equation A1X1B1 + A2X2B2 + ⋅⋅⋅ + AlXlBl = C," Linear Algebra and its Applications, vol. 426, no. 2-3, pp. 583–595, 2007.
[28] M. Dehghan and M. Hajarian, "The general coupled matrix equations over generalized bisymmetric matrices," Linear Algebra and Its Applications, vol. 432, no. 6, pp. 1531–1552, 2010.
[29] J.-F. Li, X.-Y. Hu, and L. Zhang, "The submatrix constraint problem of matrix equation AXB + CYD = E," Applied Mathematics and Computation, vol. 215, no. 7, pp. 2578–2590, 2009.
[30] T. Meng, "Experimental design and decision support," in Expert Systems, the Technology of Knowledge Management and Decision Making for the 21st Century, C. Leondes, Ed., Academic Press, 2001.
[31] M. Baruch, "Optimization procedure to correct stiffness and flexibility matrices using vibration tests," AIAA Journal, vol. 16, no. 11, pp. 1208–1210, 1978.
[32] N. J. Higham, "Computing a nearest symmetric positive semidefinite matrix," Linear Algebra and its Applications, vol. 103, pp. 103–118, 1988.
[33] Z.-Y. Peng, X.-Y. Hu, and L. Zhang, "The nearest bisymmetric solutions of linear matrix equations," Journal of Computational Mathematics, vol. 22, no. 6, pp. 873–880, 2004.
[34] A.-P. Liao, Z.-Z. Bai, and Y. Lei, "Best approximate solution of matrix equation AXB + CYD = E," SIAM Journal on Matrix Analysis and Applications, vol. 27, no. 3, pp. 675–688, 2005.
[35] Z.-Y. Peng and Y.-X. Peng, "An efficient iterative method for solving the matrix equation AXB + CYD = E," Numerical Linear Algebra with Applications, vol. 13, no. 6, pp. 473–485, 2006.
[36] Y.-B. Deng, Z.-Z. Bai, and Y.-H. Gao, "Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations," Numerical Linear Algebra with Applications, vol. 13, no. 10, pp. 801–823, 2006.
