A simultaneous decomposition of five real quaternion matrices with applications

Zhuo-Heng He, Qing-Wen Wang

arXiv:1306.5074v1 [math.RA] 21 Jun 2013

Department of Mathematics, Shanghai University, Shanghai 200444, P.R. China. E-mail: [email protected]

Abstract: In this paper, we construct a simultaneous decomposition of five real quaternion matrices in which three of the matrices have the same number of columns, while three of them have the same number of rows. Using this simultaneous matrix decomposition, we derive the maximal and minimal ranks of some real quaternion matrix expressions. We also show how to choose the variable real quaternion matrices such that the real quaternion matrix expressions achieve their maximal and minimal ranks. As an application, we give a solvability condition and the general solution to the real quaternion matrix equation BXD + CYE = A. Moreover, we give a simultaneous decomposition of seven real quaternion matrices.

Keywords: Matrix decomposition; Matrix equation; Quaternion; Rank

2010 AMS Subject Classifications: 15A24, 15A09, 15A03

1. Introduction

Let $\mathbb{R}$ be the field of real numbers, and let $\mathbb{H}^{m\times n}$ be the set of all $m\times n$ matrices over the real quaternion algebra
$$\mathbb{H}=\left\{a_0+a_1i+a_2j+a_3k \mid i^2=j^2=k^2=ijk=-1,\ a_0,a_1,a_2,a_3\in\mathbb{R}\right\}.$$

For a quaternion matrix $A$, we denote by $A^*$, $\mathcal{R}(A)$ and $\mathcal{N}(A)$ the conjugate transpose, the column right space and the row left space of $A$, respectively, and by $\dim\mathcal{R}(A)$ the dimension of $\mathcal{R}(A)$. By [12], $\dim\mathcal{R}(A)=\dim\mathcal{N}(A)$; this common dimension is called the rank of the quaternion matrix $A$ and is denoted by $r(A)$. It is easy to see that for any nonsingular matrices $P$ and $Q$ of appropriate sizes, $A$ and $PAQ$ have the same rank [34]. Moreover, for $A\in\mathbb{H}^{m\times n}$, by [12] there exist invertible matrices $P$ and $Q$ such that
$$PAQ=\begin{pmatrix}I_r&0\\0&0\end{pmatrix},$$
where $r=r(A)$ and $I_r$ is the $r\times r$ identity matrix.

There is no need to emphasize the importance of matrix decomposition. Matrix decomposition is not only a basic tool in matrix theory, but also an applicable tool in other areas of mathematics. Many papers have presented different matrix decompositions for different purposes (e.g. [2], [4], [13]-[18], [24], [25], [31], [33]): for instance, the generalized singular value decomposition ([8], [18]), the restricted singular value decomposition of matrix triplets [32], the decomposition of three matrices in which two have the same number of rows while two have the same number of columns ([4], [11]), the decomposition of the matrix triplet $(A,B,C)$ ([13], [15], [26]), and the decomposition of a matrix quaternity in which two of the matrices have the same number of columns while three of them have the same number of rows [27].

The research on maximal and minimal ranks of partial matrices started in the late 1980s. Optimization problems on ranks of matrix expressions attract much attention from both theoretical and practical points of view; minimal and maximal ranks can be used, e.g., in control theory ([5], [6]). Cohen et al. [10] and H.J. Woerdeman [28]-[30] considered the maximal and minimal ranks of the $3\times 3$ partial block matrix
$$\begin{pmatrix}A_{11}&A_{12}&X\\A_{21}&A_{22}&A_{23}\\Y&A_{32}&A_{33}\end{pmatrix}$$
over $\mathbb{C}$. Cohen and Dancis [9] gave the minimal rank and extremal inertias of the Hermitian matrix
$$\begin{pmatrix}A&B&X\\B^*&C&D\\X^*&D^*&E\end{pmatrix}.$$
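The rank of a quaternion matrix $A=A_1+A_2j$ can also be computed numerically through its complex adjoint representation $\chi(A)=\begin{pmatrix}A_1&A_2\\-\overline{A_2}&\overline{A_1}\end{pmatrix}$, for which $r(A)=r(\chi(A))/2$. The following Python sketch (the helper names are ours, not from the paper) also illustrates that $r(A)=r(PA)$ for a generic nonsingular quaternion $P$:

```python
import numpy as np

def quat_mul(A1, A2, B1, B2):
    """Product of quaternion matrices A = A1 + A2*j and B = B1 + B2*j
    (A1, A2, B1, B2 complex arrays), using the rule j*z = conj(z)*j."""
    return A1 @ B1 - A2 @ B2.conj(), A1 @ B2 + A2 @ B1.conj()

def quat_rank(A1, A2):
    """r(A) for A = A1 + A2*j, via the complex adjoint
    chi(A) = [[A1, A2], [-conj(A2), conj(A1)]]: r(A) = rank(chi(A)) / 2."""
    chi = np.block([[A1, A2], [-A2.conj(), A1.conj()]])
    return np.linalg.matrix_rank(chi) // 2

rng = np.random.default_rng(0)
c = lambda *s: rng.normal(size=s) + 1j * rng.normal(size=s)

# A rank-2 quaternion 4 x 5 matrix, built as a product of rank-2 factors.
A1, A2 = quat_mul(c(4, 2), c(4, 2), c(2, 5), c(2, 5))

# A generic 4 x 4 quaternion matrix P is nonsingular, and r(PA) = r(A).
P1, P2 = c(4, 4), c(4, 4)
PA1, PA2 = quat_mul(P1, P2, A1, A2)
print(quat_rank(A1, A2), quat_rank(PA1, PA2))
```

The adjoint map is multiplicative, so rank invariance under nonsingular left and right factors carries over directly from the complex case.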

Liu [16] obtained the maximal and minimal ranks of $A-BXC$ using the restricted singular value decomposition (RSVD) of the matrix triplet $(C,A,B)$. Liu and Tian [14] derived the maximal and minimal ranks of $A-BX-CY$ using the QQ-SVD of the matrix triplet $(C,A,B)$. Tian ([19], [20]) studied the maximal and minimal ranks of the matrix expression
$$p(X,Y)=A-BXD-CYE \tag{1.1}$$
using generalized inverses of matrices. Chu, Hung and Woerdeman [7] also considered the maximal and minimal ranks of (1.1). The aim of this paper is to revisit the maximal and minimal ranks of the real quaternion matrix expression (1.1) through a simultaneous decomposition of the real quaternion matrix array
$$\begin{pmatrix}A&B&C\\D&&\\E&&\end{pmatrix}. \tag{1.2}$$

Our method is different from the methods mentioned in [7], [19] and [20]. We can also derive the maximal and minimal ranks of the real quaternion matrix expression
$$f_2(X_1,X_2,X_3,X_4)=A-B_1X_1-X_2C_2-B_3X_3C_3-B_4X_4C_4. \tag{1.3}$$
We also consider the solvability condition and the minimal rank of the general solution to the real quaternion matrix equation
$$BXD+CYE=A. \tag{1.4}$$

Moreover, we also give a simultaneous decomposition of the real quaternion matrix array
$$\begin{pmatrix}A&B&C&D\\E&&&\\F&&&\\G&&&\end{pmatrix},$$
which plays an important role in investigating the extreme ranks of the matrix expression $A-BXE-CYF-DZG$.

2. A simultaneous decomposition of the real quaternion matrix array (1.2)

Now we give the main theorem of this section.

Theorem 2.1. Let $A\in\mathbb{H}^{m\times n}$, $B\in\mathbb{H}^{m\times p_1}$, $C\in\mathbb{H}^{m\times p_2}$, $D\in\mathbb{H}^{q_1\times n}$ and $E\in\mathbb{H}^{q_2\times n}$ be given. Then there exist nonsingular matrices $P\in\mathbb{H}^{m\times m}$, $Q\in\mathbb{H}^{n\times n}$, $T_1\in\mathbb{H}^{p_1\times p_1}$, $T_2\in\mathbb{H}^{p_2\times p_2}$, $V_1\in\mathbb{H}^{q_1\times q_1}$, $V_2\in\mathbb{H}^{q_2\times q_2}$ such that
$$A=PS_AQ,\quad B=PS_BT_1,\quad C=PS_CT_2,\quad D=V_1S_DQ,\quad E=V_2S_EQ, \tag{2.1}$$

where
$$S_A=\begin{pmatrix}0&0&0&0&0&I_{m_1}&0\\0&A_1&A_2&A_3&0&0&0\\0&A_4&A_5&A_6&0&0&0\\0&A_7&A_8&A_9&0&0&0\\0&0&0&0&I_{m_5}&0&0\\I_{m_6}&0&0&0&0&0&0\\0&0&0&0&0&0&0\end{pmatrix},\quad
S_B=\begin{pmatrix}0&0&B_1\\I_{m_2}&0&0\\0&I_{m_3}&0\\0&0&0\\0&0&0\\0&0&0\\0&0&0\end{pmatrix},\quad
S_C=\begin{pmatrix}C_1&C_2&0\\0&0&0\\0&I_{m_3}&0\\0&0&I_{m_4}\\0&0&0\\0&0&0\\0&0&0\end{pmatrix},$$
$$S_D=\begin{pmatrix}0&I_{n_2}&0&0&0&0&0\\0&0&I_{n_3}&0&0&0&0\\D_1&0&0&0&0&0&0\end{pmatrix},\qquad
S_E=\begin{pmatrix}E_1&0&0&0&0&0&0\\E_2&0&I_{n_3}&0&0&0&0\\0&0&0&I_{n_4}&0&0&0\end{pmatrix},$$
where all blocks are of conformal sizes and
$$m_1+m_5+m_6=r\begin{pmatrix}A&B&C\end{pmatrix}+r\begin{pmatrix}A\\D\\E\end{pmatrix}-r\begin{pmatrix}A&B&C\\D&0&0\\E&0&0\end{pmatrix},\tag{2.2}$$
$$m_2=r\begin{pmatrix}A&B&C\\D&0&0\\E&0&0\end{pmatrix}-r\begin{pmatrix}A&C\\D&0\\E&0\end{pmatrix},\qquad m_4=r\begin{pmatrix}A&B&C\\D&0&0\\E&0&0\end{pmatrix}-r\begin{pmatrix}A&B\\D&0\\E&0\end{pmatrix},\tag{2.3}$$
$$m_3=r\begin{pmatrix}A&B\\D&0\\E&0\end{pmatrix}+r\begin{pmatrix}A&C\\D&0\\E&0\end{pmatrix}-r\begin{pmatrix}A&B&C\\D&0&0\\E&0&0\end{pmatrix}-r\begin{pmatrix}A\\D\\E\end{pmatrix},\tag{2.4}$$
$$n_2=r\begin{pmatrix}A&B&C\\D&0&0\\E&0&0\end{pmatrix}-r\begin{pmatrix}A&B&C\\E&0&0\end{pmatrix},\qquad n_4=r\begin{pmatrix}A&B&C\\D&0&0\\E&0&0\end{pmatrix}-r\begin{pmatrix}A&B&C\\D&0&0\end{pmatrix},\tag{2.5}$$
$$n_3=r\begin{pmatrix}A&B&C\\D&0&0\end{pmatrix}+r\begin{pmatrix}A&B&C\\E&0&0\end{pmatrix}-r\begin{pmatrix}A&B&C\\D&0&0\\E&0&0\end{pmatrix}-r\begin{pmatrix}A&B&C\end{pmatrix}.\tag{2.6}$$
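The quantities in (2.2)-(2.6) are block sizes, so each right-hand side must evaluate to a nonnegative integer. Over $\mathbb{R}$ this can be sanity-checked numerically; the following sketch (our own illustration, with hypothetical variable names and random data) evaluates each formula:

```python
import numpy as np

rng = np.random.default_rng(1)
rank = np.linalg.matrix_rank
Z = np.zeros
row = lambda *Ms: np.hstack(Ms)   # horizontal block concatenation
col = lambda *Ms: np.vstack(Ms)   # vertical block concatenation

m, n, p1, p2, q1, q2 = 5, 6, 2, 3, 2, 3
A = rng.normal(size=(m, 2)) @ rng.normal(size=(2, n))   # a low-rank A
B, C = rng.normal(size=(m, p1)), rng.normal(size=(m, p2))
D, E = rng.normal(size=(q1, n)), rng.normal(size=(q2, n))

ABC_DE = col(row(A, B, C), row(D, Z((q1, p1 + p2))), row(E, Z((q2, p1 + p2))))
AB_DE = col(row(A, B), row(D, Z((q1, p1))), row(E, Z((q2, p1))))
AC_DE = col(row(A, C), row(D, Z((q1, p2))), row(E, Z((q2, p2))))
ABC_D = col(row(A, B, C), row(D, Z((q1, p1 + p2))))
ABC_E = col(row(A, B, C), row(E, Z((q2, p1 + p2))))

m156 = rank(row(A, B, C)) + rank(col(A, D, E)) - rank(ABC_DE)       # (2.2)
m2 = rank(ABC_DE) - rank(AC_DE)                                     # (2.3)
m4 = rank(ABC_DE) - rank(AB_DE)
m3 = rank(AB_DE) + rank(AC_DE) - rank(ABC_DE) - rank(col(A, D, E))  # (2.4)
n2 = rank(ABC_DE) - rank(ABC_E)                                     # (2.5)
n4 = rank(ABC_DE) - rank(ABC_D)
n3 = rank(ABC_D) + rank(ABC_E) - rank(ABC_DE) - rank(row(A, B, C))  # (2.6)
print(m156, m2, m3, m4, n2, n3, n4)
```

Nonnegativity of $m_3$ and $n_3$ reflects the submodularity of rank over column and row augmentations.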

Proof. The proof is inspired by [15] and [26]. The whole procedure is constructive and consists of seven steps.

Step 1. We can find two nonsingular matrices $P_1$ and $Q_1$ such that
$$P_1\begin{pmatrix}B&C\end{pmatrix}=\begin{pmatrix}B^{(1)}&C^{(1)}\\0&0\end{pmatrix},\qquad \begin{pmatrix}D\\E\end{pmatrix}Q_1=\begin{pmatrix}D^{(1)}&0\\E^{(1)}&0\end{pmatrix},$$
where $\begin{pmatrix}B^{(1)}&C^{(1)}\end{pmatrix}$ has full row rank and $\begin{pmatrix}D^{(1)}\\E^{(1)}\end{pmatrix}$ has full column rank. Denote
$$P_1AQ_1=\begin{pmatrix}A_1^{(1)}&A_2^{(1)}\\A_3^{(1)}&A_4^{(1)}\end{pmatrix}.$$

Step 2. There exist two nonsingular matrices $P_2$ and $Q_2$ such that
$$P_2A_4^{(1)}Q_2=\begin{pmatrix}I_{m_5}&0\\0&0\end{pmatrix},\qquad m_5=r(A_4^{(1)}).$$
Then
$$\operatorname{diag}(I,P_2)\begin{pmatrix}A_1^{(1)}&A_2^{(1)}\\A_3^{(1)}&A_4^{(1)}\end{pmatrix}\operatorname{diag}(I,Q_2):=\begin{pmatrix}A_1^{(2)}&A_2^{(2)}&A_3^{(2)}\\A_4^{(2)}&I_{m_5}&0\\A_5^{(2)}&0&0\end{pmatrix},$$
$$\operatorname{diag}(I,P_2)\begin{pmatrix}B^{(1)}&C^{(1)}\\0&0\end{pmatrix}:=\begin{pmatrix}B^{(2)}&C^{(2)}\\0&0\\0&0\end{pmatrix},\qquad \begin{pmatrix}D^{(1)}&0\\E^{(1)}&0\end{pmatrix}\operatorname{diag}(I,Q_2):=\begin{pmatrix}D^{(2)}&0&0\\E^{(2)}&0&0\end{pmatrix},$$
where $\begin{pmatrix}B^{(2)}&C^{(2)}\end{pmatrix}$ has full row rank and $\begin{pmatrix}D^{(2)}\\E^{(2)}\end{pmatrix}$ has full column rank.
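Over $\mathbb{R}$, the compressions used in Steps 1 and 2 can be realized numerically with a singular value decomposition. The following sketch (the helper name `rank_canonical` is ours) produces nonsingular $P$ and $Q$ with $PAQ=\operatorname{diag}(I_r,0)$:

```python
import numpy as np

def rank_canonical(A, tol=1e-10):
    """Nonsingular P, Q with P @ A @ Q = [[I_r, 0], [0, 0]] (real case),
    built from a full SVD A = U S V^T. This is a numerical stand-in for
    the compression steps used in the constructive proof."""
    U, s, Vt = np.linalg.svd(A)            # full SVD: U is m x m, Vt is n x n
    r = int((s > tol).sum())
    m = A.shape[0]
    E = np.eye(m)
    E[:r, :r] = np.diag(1.0 / s[:r])       # rescale the leading rows
    P = E @ U.T                            # P A = [[I_r, 0], [0, ~0]] V^T
    Q = Vt.T                               # (P A) Q = [[I_r, 0], [0, ~0]]
    return P, Q, r

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 3)) @ rng.normal(size=(3, 7))   # rank 3
P, Q, r = rank_canonical(A)
D = P @ A @ Q
print(r)
```

Both $P$ and $Q$ are invertible by construction, so the transformation is rank-preserving, exactly as required in the proof.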

Step 3. Set
$$P_3=\begin{pmatrix}I&-A_2^{(2)}&0\\0&I&0\\0&0&I\end{pmatrix},\qquad Q_3=\begin{pmatrix}I&0&0\\-A_4^{(2)}&I&0\\0&0&I\end{pmatrix}.$$
Then we have
$$P_3\begin{pmatrix}A_1^{(2)}&A_2^{(2)}&A_3^{(2)}\\A_4^{(2)}&I_{m_5}&0\\A_5^{(2)}&0&0\end{pmatrix}Q_3:=\begin{pmatrix}A_1^{(3)}&0&A_2^{(3)}\\0&I_{m_5}&0\\A_3^{(3)}&0&0\end{pmatrix},$$
$$P_3\begin{pmatrix}B^{(2)}&C^{(2)}\\0&0\\0&0\end{pmatrix}:=\begin{pmatrix}B^{(3)}&C^{(3)}\\0&0\\0&0\end{pmatrix},\qquad \begin{pmatrix}D^{(2)}&0&0\\E^{(2)}&0&0\end{pmatrix}Q_3:=\begin{pmatrix}D^{(3)}&0&0\\E^{(3)}&0&0\end{pmatrix}.$$

Step 4. We can choose nonsingular matrices $P_4$, $Q_4$, $P_5$ and $Q_5$ such that
$$P_4A_2^{(3)}Q_4=\begin{pmatrix}I_{m_1}&0\\0&0\end{pmatrix},\qquad P_5A_3^{(3)}Q_5=\begin{pmatrix}I_{m_6}&0\\0&0\end{pmatrix},\qquad r(A_2^{(3)})=m_1,\quad r(A_3^{(3)})=m_6.$$
Then
$$\operatorname{diag}(P_4,I,P_5)\begin{pmatrix}A_1^{(3)}&0&A_2^{(3)}\\0&I_{m_5}&0\\A_3^{(3)}&0&0\end{pmatrix}\operatorname{diag}(Q_4,I,Q_5):=\begin{pmatrix}A_1^{(4)}&A_2^{(4)}&0&I_{m_1}&0\\A_3^{(4)}&A_4^{(4)}&0&0&0\\0&0&I_{m_5}&0&0\\I_{m_6}&0&0&0&0\\0&0&0&0&0\end{pmatrix},$$
$$\operatorname{diag}(P_4,I,P_5)\begin{pmatrix}B^{(3)}&C^{(3)}\\0&0\\0&0\end{pmatrix}:=\begin{pmatrix}B_1^{(4)}&C_1^{(4)}\\B_2^{(4)}&C_2^{(4)}\\0&0\\0&0\\0&0\end{pmatrix},\qquad \begin{pmatrix}D^{(3)}&0&0\\E^{(3)}&0&0\end{pmatrix}\operatorname{diag}(Q_4,I,Q_5):=\begin{pmatrix}D_1^{(4)}&D_2^{(4)}&0&0&0\\E_1^{(4)}&E_2^{(4)}&0&0&0\end{pmatrix}.$$
Especially, we have
$$r\begin{pmatrix}B&C\end{pmatrix}=r\begin{pmatrix}B^{(3)}&C^{(3)}\end{pmatrix}=r\begin{pmatrix}B_1^{(4)}&C_1^{(4)}\\B_2^{(4)}&C_2^{(4)}\end{pmatrix},\qquad r\begin{pmatrix}D\\E\end{pmatrix}=r\begin{pmatrix}D^{(3)}\\E^{(3)}\end{pmatrix}=r\begin{pmatrix}D_1^{(4)}&D_2^{(4)}\\E_1^{(4)}&E_2^{(4)}\end{pmatrix},$$
i.e., $\begin{pmatrix}B_2^{(4)}&C_2^{(4)}\end{pmatrix}$ has full row rank and $\begin{pmatrix}D_2^{(4)}\\E_2^{(4)}\end{pmatrix}$ has full column rank.

Step 5. Set
$$P_6=\begin{pmatrix}I&0&0&-A_1^{(4)}&0\\0&I&0&-A_3^{(4)}&0\\0&0&I&0&0\\0&0&0&I&0\\0&0&0&0&I\end{pmatrix},\qquad Q_6=\begin{pmatrix}I&0&0&0&0\\0&I&0&0&0\\0&0&I&0&0\\-A_2^{(4)}&0&0&I&0\\0&0&0&0&I\end{pmatrix}.$$
Then we have
$$P_6\begin{pmatrix}A_1^{(4)}&A_2^{(4)}&0&I_{m_1}&0\\A_3^{(4)}&A_4^{(4)}&0&0&0\\0&0&I_{m_5}&0&0\\I_{m_6}&0&0&0&0\\0&0&0&0&0\end{pmatrix}Q_6:=\begin{pmatrix}0&0&0&I_{m_1}&0\\0&A_1^{(5)}&0&0&0\\0&0&I_{m_5}&0&0\\I_{m_6}&0&0&0&0\\0&0&0&0&0\end{pmatrix},$$
$$P_6\begin{pmatrix}B_1^{(4)}&C_1^{(4)}\\B_2^{(4)}&C_2^{(4)}\\0&0\\0&0\\0&0\end{pmatrix}:=\begin{pmatrix}B_1^{(5)}&C_1^{(5)}\\B_2^{(5)}&C_2^{(5)}\\0&0\\0&0\\0&0\end{pmatrix},\qquad \begin{pmatrix}D_1^{(4)}&D_2^{(4)}&0&0&0\\E_1^{(4)}&E_2^{(4)}&0&0&0\end{pmatrix}Q_6:=\begin{pmatrix}D_1^{(5)}&D_2^{(5)}&0&0&0\\E_1^{(5)}&E_2^{(5)}&0&0&0\end{pmatrix},$$
where $\begin{pmatrix}B_2^{(5)}&C_2^{(5)}\end{pmatrix}$ has full row rank and $\begin{pmatrix}D_2^{(5)}\\E_2^{(5)}\end{pmatrix}$ has full column rank.

Step 6. We can find six nonsingular matrices $P_7$, $Q_7$, $W_B$, $W_C$, $W_D$, $W_E$ such that
$$P_7\begin{pmatrix}B_2^{(5)}&C_2^{(5)}\end{pmatrix}\begin{pmatrix}W_B&0\\0&W_C\end{pmatrix}=\begin{pmatrix}I_{m_2}&0&0&0&0&0\\0&I_{m_3}&0&0&I_{m_3}&0\\0&0&0&0&0&I_{m_4}\end{pmatrix},$$
$$\begin{pmatrix}W_D&0\\0&W_E\end{pmatrix}\begin{pmatrix}D_2^{(5)}\\E_2^{(5)}\end{pmatrix}Q_7=\begin{pmatrix}I_{n_2}&0&0\\0&I_{n_3}&0\\0&0&0\\0&0&0\\0&I_{n_3}&0\\0&0&I_{n_4}\end{pmatrix}.$$

Then
$$\operatorname{diag}(I,P_7,I,I,I)\begin{pmatrix}0&0&0&I_{m_1}&0\\0&A_1^{(5)}&0&0&0\\0&0&I_{m_5}&0&0\\I_{m_6}&0&0&0&0\\0&0&0&0&0\end{pmatrix}\operatorname{diag}(I,Q_7,I,I,I):=\begin{pmatrix}0&0&0&0&0&I_{m_1}&0\\0&A_1^{(6)}&A_2^{(6)}&A_3^{(6)}&0&0&0\\0&A_4^{(6)}&A_5^{(6)}&A_6^{(6)}&0&0&0\\0&A_7^{(6)}&A_8^{(6)}&A_9^{(6)}&0&0&0\\0&0&0&0&I_{m_5}&0&0\\I_{m_6}&0&0&0&0&0&0\\0&0&0&0&0&0&0\end{pmatrix},$$
$$\operatorname{diag}(I,P_7,I,I,I)\begin{pmatrix}B_1^{(5)}&C_1^{(5)}\\B_2^{(5)}&C_2^{(5)}\\0&0\\0&0\\0&0\end{pmatrix}\begin{pmatrix}W_B&0\\0&W_C\end{pmatrix}:=\begin{pmatrix}B_1^{(6)}&B_2^{(6)}&B_3^{(6)}&C_1^{(6)}&C_2^{(6)}&C_3^{(6)}\\I_{m_2}&0&0&0&0&0\\0&I_{m_3}&0&0&I_{m_3}&0\\0&0&0&0&0&I_{m_4}\\0&0&0&0&0&0\\0&0&0&0&0&0\\0&0&0&0&0&0\end{pmatrix},$$
$$\begin{pmatrix}W_D&0\\0&W_E\end{pmatrix}\begin{pmatrix}D_1^{(5)}&D_2^{(5)}&0&0&0\\E_1^{(5)}&E_2^{(5)}&0&0&0\end{pmatrix}\operatorname{diag}(I,Q_7,I,I,I):=\begin{pmatrix}D_1^{(6)}&I_{n_2}&0&0&0&0&0\\D_2^{(6)}&0&I_{n_3}&0&0&0&0\\D_3^{(6)}&0&0&0&0&0&0\\E_1^{(6)}&0&0&0&0&0&0\\E_2^{(6)}&0&I_{n_3}&0&0&0&0\\E_3^{(6)}&0&0&I_{n_4}&0&0&0\end{pmatrix}.$$

Step 7. Set
$$P_8=\begin{pmatrix}I&-B_1^{(6)}&-B_2^{(6)}&-C_3^{(6)}&0&0&0\\0&I&0&0&0&0&0\\0&0&I&0&0&0&0\\0&0&0&I&0&0&0\\0&0&0&0&I&0&0\\0&0&0&0&0&I&0\\0&0&0&0&0&0&I\end{pmatrix},\qquad Q_8=\begin{pmatrix}I&0&0&0&0&0&0\\-D_1^{(6)}&I&0&0&0&0&0\\-D_2^{(6)}&0&I&0&0&0&0\\-E_3^{(6)}&0&0&I&0&0&0\\0&0&0&0&I&0&0\\0&0&0&0&0&I&0\\0&0&0&0&0&0&I\end{pmatrix}.$$

Then we have
$$P_8\begin{pmatrix}B_1^{(6)}&B_2^{(6)}&B_3^{(6)}&C_1^{(6)}&C_2^{(6)}&C_3^{(6)}\\I_{m_2}&0&0&0&0&0\\0&I_{m_3}&0&0&I_{m_3}&0\\0&0&0&0&0&I_{m_4}\\0&0&0&0&0&0\\0&0&0&0&0&0\\0&0&0&0&0&0\end{pmatrix}:=\begin{pmatrix}0&0&B_1&C_1&C_2&0\\I_{m_2}&0&0&0&0&0\\0&I_{m_3}&0&0&I_{m_3}&0\\0&0&0&0&0&I_{m_4}\\0&0&0&0&0&0\\0&0&0&0&0&0\\0&0&0&0&0&0\end{pmatrix},$$
$$\begin{pmatrix}D_1^{(6)}&I_{n_2}&0&0&0&0&0\\D_2^{(6)}&0&I_{n_3}&0&0&0&0\\D_3^{(6)}&0&0&0&0&0&0\\E_1^{(6)}&0&0&0&0&0&0\\E_2^{(6)}&0&I_{n_3}&0&0&0&0\\E_3^{(6)}&0&0&I_{n_4}&0&0&0\end{pmatrix}Q_8:=\begin{pmatrix}0&I_{n_2}&0&0&0&0&0\\0&0&I_{n_3}&0&0&0&0\\D_1&0&0&0&0&0&0\\E_1&0&0&0&0&0&0\\E_2&0&I_{n_3}&0&0&0&0\\0&0&0&I_{n_4}&0&0&0\end{pmatrix}$$
(the remaining nonzero blocks being renamed $B_1$, $C_1$, $C_2$ and $D_1$, $E_1$, $E_2$, respectively), and
$$P_8\begin{pmatrix}0&0&0&0&0&I_{m_1}&0\\0&A_1^{(6)}&A_2^{(6)}&A_3^{(6)}&0&0&0\\0&A_4^{(6)}&A_5^{(6)}&A_6^{(6)}&0&0&0\\0&A_7^{(6)}&A_8^{(6)}&A_9^{(6)}&0&0&0\\0&0&0&0&I_{m_5}&0&0\\I_{m_6}&0&0&0&0&0&0\\0&0&0&0&0&0&0\end{pmatrix}Q_8:=\begin{pmatrix}\varphi_1&\varphi_2&\varphi_3&\varphi_4&0&I_{m_1}&0\\\varphi_5&A_1^{(6)}&A_2^{(6)}&A_3^{(6)}&0&0&0\\\varphi_6&A_4^{(6)}&A_5^{(6)}&A_6^{(6)}&0&0&0\\\varphi_7&A_7^{(6)}&A_8^{(6)}&A_9^{(6)}&0&0&0\\0&0&0&0&I_{m_5}&0&0\\I_{m_6}&0&0&0&0&0&0\\0&0&0&0&0&0&0\end{pmatrix},$$
where
$$\varphi_1=-\varphi_2D_1^{(6)}-\varphi_3D_2^{(6)}-\varphi_4E_3^{(6)},$$
$$\varphi_2=-B_1^{(6)}A_1^{(6)}-B_2^{(6)}A_4^{(6)}-C_3^{(6)}A_7^{(6)},\qquad \varphi_3=-B_1^{(6)}A_2^{(6)}-B_2^{(6)}A_5^{(6)}-C_3^{(6)}A_8^{(6)},$$
$$\varphi_4=-B_1^{(6)}A_3^{(6)}-B_2^{(6)}A_6^{(6)}-C_3^{(6)}A_9^{(6)},\qquad \varphi_5=-A_1^{(6)}D_1^{(6)}-A_2^{(6)}D_2^{(6)}-A_3^{(6)}E_3^{(6)},$$
$$\varphi_6=-A_4^{(6)}D_1^{(6)}-A_5^{(6)}D_2^{(6)}-A_6^{(6)}E_3^{(6)},\qquad \varphi_7=-A_7^{(6)}D_1^{(6)}-A_8^{(6)}D_2^{(6)}-A_9^{(6)}E_3^{(6)}.$$

Using $I_{m_1}$ and $I_{m_6}$ as the pivots to eliminate the blocks $\varphi_1,\ldots,\varphi_7$, we obtain
$$\begin{pmatrix}I&0&0&0&0&-\varphi_1&0\\0&I&0&0&0&-\varphi_5&0\\0&0&I&0&0&-\varphi_6&0\\0&0&0&I&0&-\varphi_7&0\\0&0&0&0&I&0&0\\0&0&0&0&0&I&0\\0&0&0&0&0&0&I\end{pmatrix}\begin{pmatrix}\varphi_1&\varphi_2&\varphi_3&\varphi_4&0&I_{m_1}&0\\\varphi_5&A_1^{(6)}&A_2^{(6)}&A_3^{(6)}&0&0&0\\\varphi_6&A_4^{(6)}&A_5^{(6)}&A_6^{(6)}&0&0&0\\\varphi_7&A_7^{(6)}&A_8^{(6)}&A_9^{(6)}&0&0&0\\0&0&0&0&I_{m_5}&0&0\\I_{m_6}&0&0&0&0&0&0\\0&0&0&0&0&0&0\end{pmatrix}\begin{pmatrix}I&0&0&0&0&0&0\\0&I&0&0&0&0&0\\0&0&I&0&0&0&0\\0&0&0&I&0&0&0\\0&0&0&0&I&0&0\\0&-\varphi_2&-\varphi_3&-\varphi_4&0&I&0\\0&0&0&0&0&0&I\end{pmatrix}$$
$$:=\begin{pmatrix}0&0&0&0&0&I_{m_1}&0\\0&A_1&A_2&A_3&0&0&0\\0&A_4&A_5&A_6&0&0&0\\0&A_7&A_8&A_9&0&0&0\\0&0&0&0&I_{m_5}&0&0\\I_{m_6}&0&0&0&0&0&0\\0&0&0&0&0&0&0\end{pmatrix},$$
where we write $A_i$ for $A_i^{(6)}$. After Step 7, we have established the decomposition. □

3. Maximal and minimal ranks of (1.1) and (1.3)

As applications of Theorem 2.1, in this section we consider the maximal and minimal ranks of (1.1) and (1.3) over $\mathbb{H}$. We also show how to choose the variable real quaternion matrices such that the real quaternion matrix expressions achieve their maximal and minimal ranks.

Theorem 3.1. Let $A$, $B$, $C$, $D$, $E$ be given quaternion matrices of appropriate sizes, and let $p(X,Y)$ be as given in (1.1). Then
$$\max_{X,Y}r[p(X,Y)]=\min\left\{r\begin{pmatrix}A\\D\\E\end{pmatrix},\ r\begin{pmatrix}A&B&C\end{pmatrix},\ r\begin{pmatrix}A&B\\E&0\end{pmatrix},\ r\begin{pmatrix}A&C\\D&0\end{pmatrix}\right\},$$
$$\min_{X,Y}r[p(X,Y)]=r\begin{pmatrix}A\\D\\E\end{pmatrix}+r\begin{pmatrix}A&B&C\end{pmatrix}+\max\left\{r\begin{pmatrix}A&B\\E&0\end{pmatrix}-r\begin{pmatrix}A&B&C\\E&0&0\end{pmatrix}-r\begin{pmatrix}A&B\\D&0\\E&0\end{pmatrix},\ r\begin{pmatrix}A&C\\D&0\end{pmatrix}-r\begin{pmatrix}A&B&C\\D&0&0\end{pmatrix}-r\begin{pmatrix}A&C\\D&0\\E&0\end{pmatrix}\right\}.$$

Proof. From Theorem 2.1, we can rewrite the expression $p(X,Y)$ in (1.1) in the canonical form
$$p(X,Y)=P(S_A-S_BT_1XV_1S_D-S_CT_2YV_2S_E)Q.$$

Because $P$ and $Q$ are nonsingular, the rank of $p(X,Y)$ can be written as
$$r[p(X,Y)]=r(S_A-S_BT_1XV_1S_D-S_CT_2YV_2S_E).$$
Put $\widehat{X}=T_1XV_1$, $\widehat{Y}=T_2YV_2$, and partition
$$\widehat{X}=\begin{pmatrix}X_1&X_2&X_3\\X_4&X_5&X_6\\X_7&X_8&X_9\end{pmatrix},\qquad \widehat{Y}=\begin{pmatrix}Y_1&Y_2&Y_3\\Y_4&Y_5&Y_6\\Y_7&Y_8&Y_9\end{pmatrix}.$$
Hence
$$S_B\widehat{X}S_D=\begin{pmatrix}*&*&*&0&0&0&0\\ *&X_1&X_2&0&0&0&0\\ *&X_4&X_5&0&0&0&0\\0&0&0&0&0&0&0\\0&0&0&0&0&0&0\\0&0&0&0&0&0&0\\0&0&0&0&0&0&0\end{pmatrix},\qquad S_C\widehat{Y}S_E=\begin{pmatrix}*&0&*&*&0&0&0\\0&0&0&0&0&0&0\\ *&0&Y_5&Y_6&0&0&0\\ *&0&Y_8&Y_9&0&0&0\\0&0&0&0&0&0&0\\0&0&0&0&0&0&0\\0&0&0&0&0&0&0\end{pmatrix}.$$
Through this notation, the matrix expression $A-BXD-CYE$ can be rewritten as
$$S_A-S_B\widehat{X}S_D-S_C\widehat{Y}S_E=\begin{pmatrix}*&*&*&*&0&I_{m_1}&0\\ *&A_1-X_1&A_2-X_2&A_3&0&0&0\\ *&A_4-X_4&A_5-X_5-Y_5&A_6-Y_6&0&0&0\\ *&A_7&A_8-Y_8&A_9-Y_9&0&0&0\\0&0&0&0&I_{m_5}&0&0\\I_{m_6}&0&0&0&0&0&0\\0&0&0&0&0&0&0\end{pmatrix}.$$
Obviously,
$$r(S_A-S_B\widehat{X}S_D-S_C\widehat{Y}S_E)=m_1+m_5+m_6+r(\Omega),$$
where
$$\Omega=\begin{pmatrix}A_1-X_1&A_2-X_2&A_3\\A_4-X_4&A_5-X_5-Y_5&A_6-Y_6\\A_7&A_8-Y_8&A_9-Y_9\end{pmatrix}:=\begin{pmatrix}Z_1&Z_2&A_3\\Z_3&Z_4&Z_5\\A_7&Z_6&Z_7\end{pmatrix},\tag{3.1}$$
and $Z_1,\ldots,Z_7$ are arbitrary real quaternion matrices. It is easily seen from (3.1) that
$$\max\{r(A_3),r(A_7)\}\le r(\Omega)\le\min\{m_2+m_3+m_4,\ n_2+n_3+n_4,\ r(A_3)+n_2+n_3+m_3+m_4,\ r(A_7)+n_3+n_4+m_2+m_3\}.\tag{3.2}$$
Now we choose $Z_1,\ldots,Z_7$ such that $\Omega$ reaches the upper and lower bounds in (3.2). There exist nonsingular matrices $P_1$, $Q_1$, $P_2$, $Q_2$ such that
$$P_1A_3Q_1=\begin{pmatrix}I_a&0\\0&0\end{pmatrix},\qquad P_2A_7Q_2=\begin{pmatrix}I_b&0\\0&0\end{pmatrix}.$$

11

Case 1. Assume that b = r(A7 ) ≤ r(A3 ) = a. Set Z2 = 0, Z3 = 0, Z4 = 0, Z5 = 0, Z6 = 0. Then Ω= Denote

"

Z1 A3 A7 Z7

#

.



 " # W1 W2 U U U   1 2 3 P1 Z1 Q2 = W3 W4  , P2 Z7 Q1 = , U4 U5 U6 W5 W6

where W1 , · · · , W4 , U1 , · · · , U4 are arbitrary  W1  W3  r(Ω) = r  W5   Ib 0

Set

real quaternion matrices. Then we have  W 2 Ib 0 0  W4 0 Ia−b 0   W6 0 0 0 .  0 U1 U2 U3  0 U4 U5 U6

W1 = Ib , W2 = 0, W3 = 0, W4 = 0, W5 = 0, W6 = 0, U1 = Ib , U2 = 0, U3 = 0, U4 = 0, U5 = 0, U6 = 0. Then r(Ω) = a = r(A3 ). The case r(A7 ) ≥ r(A3 ) can be shown similarly. Hence, max {r(A3 ), r(A7 )} ≤ r(Ω). Case 2. Assume that m2 + m3 + m4 ≤ min {n2 + n3 + n4 , r(A3 ) + n2 + n3 + m3 + m4 , r(A7 ) + n3 + n4 + m2 + m3 } . Then



   Ω=   

W1 W2 W3 Ia 0 W4 W5 W6 0 0 W7 W8 W9 W10 W11 Ib 0 W12 W13 W14 0 0 W15 W16 W17



   .   

where W1 , · · · , W17 are arbitrary real quaternion matrices. Set   i h i W11 h h i   W4 W5 W6 = Im4 −b 0 , W14  = Im3 +b+m2 −a 0 , W17 Wi = 0, i = 1, 2, 3, 7, 8, 9, 10, 12, 13, 15, 16.

Then we have r(Ω) = m2 + m3 + m4 . The case n2 + n3 + n4 ≤ min {m2 + m3 + m4 , r(A3 ) + n2 + n3 + m3 + m4 , r(A7 ) + n3 + n4 + m2 + m3 } . can be shown similarly.

Case 3. Assume that
$$r(A_3)+n_2+n_3+m_3+m_4\le\min\{m_2+m_3+m_4,\ n_2+n_3+n_4,\ r(A_7)+n_3+n_4+m_2+m_3\}.$$
Then we have $m_2-a\ge n_2+n_3$ and $n_4-a\ge m_3+m_4$. Set
$$\begin{pmatrix}W_4&W_5&W_6\end{pmatrix}=\begin{pmatrix}I_{n_2+n_3}\\0\end{pmatrix},\qquad \begin{pmatrix}W_{11}\\W_{14}\\W_{17}\end{pmatrix}=\begin{pmatrix}I_{m_3+m_4}&0\end{pmatrix},\qquad W_i=0,\ i=1,2,3,7,8,9,10,12,13,15,16.$$
Then we have
$$r(\Omega)=a+n_2+n_3+m_3+m_4=r(A_3)+n_2+n_3+m_3+m_4.$$
The case
$$r(A_7)+n_3+n_4+m_2+m_3\le\min\{m_2+m_3+m_4,\ n_2+n_3+n_4,\ r(A_3)+n_2+n_3+m_3+m_4\}$$
can be shown similarly. Hence
$$m_1+m_5+m_6+\max\{r(A_3),r(A_7)\}\le r(A-BXD-CYE)$$
$$\le m_1+m_5+m_6+\min\{m_2+m_3+m_4,\ n_2+n_3+n_4,\ r(A_3)+n_2+n_3+m_3+m_4,\ r(A_7)+n_3+n_4+m_2+m_3\}.$$
It follows from Theorem 2.1 that
$$m_1+m_2+m_3+r(A_7)+m_5+m_6+n_3+n_4=r\begin{pmatrix}A&B\\E&0\end{pmatrix},\tag{3.3}$$
$$m_1+n_2+n_3+m_3+m_4+r(A_3)+m_5+m_6=r\begin{pmatrix}A&C\\D&0\end{pmatrix}.\tag{3.4}$$
Combining (2.2)-(2.6), (3.3) and (3.4), we obtain the results. □
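Over $\mathbb{R}$, the maximal-rank formula of Theorem 3.1 (due to Tian [19], [20] in the complex case) can be sanity-checked by sampling, since generic choices of $X$ and $Y$ attain the maximum. A sketch with hypothetical random data:

```python
import numpy as np

rng = np.random.default_rng(3)
rank = np.linalg.matrix_rank

m, n = 5, 6
A = rng.normal(size=(m, 2)) @ rng.normal(size=(2, n))   # r(A) = 2
B = rng.normal(size=(m, 2)); C = rng.normal(size=(m, 2))
D = rng.normal(size=(2, n)); E = rng.normal(size=(2, n))
Z = np.zeros((2, 2))

# The four terms of the max-rank formula in Theorem 3.1.
bound = min(
    rank(np.vstack([A, D, E])),
    rank(np.hstack([A, B, C])),
    rank(np.block([[A, B], [E, Z]])),
    rank(np.block([[A, C], [D, Z]])),
)
# Random X, Y generically attain the maximal rank of A - BXD - CYE.
ranks = [rank(A - B @ rng.normal(size=(2, 2)) @ D
               - C @ rng.normal(size=(2, 2)) @ E) for _ in range(50)]
print(bound, max(ranks))
```

The sampled maximum coincides with the formula's value; no sampled rank exceeds it, as the theorem guarantees.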

Corollary 3.2. Let $A$, $B$, $C$, $D$, $E$ be given, and let $p(X,Y)$ be as given in (1.1). Assume that $\mathcal{R}(B)\subseteq\mathcal{R}(C)$ and $\mathcal{R}(E^*)\subseteq\mathcal{R}(D^*)$. Then
$$\max_{X,Y}r[p(X,Y)]=\min\left\{r\begin{pmatrix}A\\D\end{pmatrix},\ r\begin{pmatrix}A&C\end{pmatrix},\ r\begin{pmatrix}A&B\\E&0\end{pmatrix}\right\},$$
$$\min_{X,Y}r[p(X,Y)]=r\begin{pmatrix}A\\D\end{pmatrix}+r\begin{pmatrix}A&C\end{pmatrix}+r\begin{pmatrix}A&B\\E&0\end{pmatrix}-r\begin{pmatrix}A&C\\E&0\end{pmatrix}-r\begin{pmatrix}A&B\\D&0\end{pmatrix}.$$
By the method mentioned in the proof of Theorem 3.1, we can choose variable real quaternion matrices $X$ and $Y$ such that the real quaternion matrix expression attains its maximal and minimal ranks.

Corollary 3.3. Let $A\in\mathbb{H}^{m\times n}$, $B\in\mathbb{H}^{m\times k}$, $C\in\mathbb{H}^{l\times n}$ be given, and let
$$f_1(X,Y)=A-BX-YC.\tag{3.5}$$
Then
$$\max_{X,Y}r[f_1(X,Y)]=\min\left\{m,\ n,\ r\begin{pmatrix}A&B\\C&0\end{pmatrix}\right\},\qquad \min_{X,Y}r[f_1(X,Y)]=r\begin{pmatrix}A&B\\C&0\end{pmatrix}-r(B)-r(C).$$
By the method mentioned in the proof of Theorem 3.1, we can choose variable real quaternion matrices $X$ and $Y$ such that the real quaternion matrix expression (3.5) attains its maximal and minimal ranks.

Theorem 3.4. Let $A\in\mathbb{H}^{m\times n}$, $B_1\in\mathbb{H}^{m\times k}$, $C_2\in\mathbb{H}^{l\times n}$, $B_3\in\mathbb{H}^{m\times k_3}$, $C_3\in\mathbb{H}^{l_3\times n}$, $B_4\in\mathbb{H}^{m\times k_4}$, $C_4\in\mathbb{H}^{l_4\times n}$ be given, and let $f_2(X_1,X_2,X_3,X_4)$ be as given in (1.3). Then
$$\max_{\{X_i\}}r[f_2]=\min\left\{m,\ n,\ r\begin{pmatrix}A&B_1&B_3&B_4\\C_2&0&0&0\end{pmatrix},\ r\begin{pmatrix}A&B_1\\C_2&0\\C_3&0\\C_4&0\end{pmatrix},\ r\begin{pmatrix}A&B_1&B_3\\C_2&0&0\\C_4&0&0\end{pmatrix},\ r\begin{pmatrix}A&B_1&B_4\\C_2&0&0\\C_3&0&0\end{pmatrix}\right\},$$
$$\min_{\{X_i\}}r[f_2]=r\begin{pmatrix}A&B_1\\C_2&0\\C_3&0\\C_4&0\end{pmatrix}+r\begin{pmatrix}A&B_1&B_3&B_4\\C_2&0&0&0\end{pmatrix}-r(B_1)-r(C_2)$$
$$+\max\left\{r\begin{pmatrix}A&B_1&B_3\\C_2&0&0\\C_4&0&0\end{pmatrix}-r\begin{pmatrix}A&B_1&B_3&B_4\\C_2&0&0&0\\C_4&0&0&0\end{pmatrix}-r\begin{pmatrix}A&B_1&B_3\\C_2&0&0\\C_3&0&0\\C_4&0&0\end{pmatrix},\ r\begin{pmatrix}A&B_1&B_4\\C_2&0&0\\C_3&0&0\end{pmatrix}-r\begin{pmatrix}A&B_1&B_3&B_4\\C_2&0&0&0\\C_3&0&0&0\end{pmatrix}-r\begin{pmatrix}A&B_1&B_4\\C_2&0&0\\C_3&0&0\\C_4&0&0\end{pmatrix}\right\}.$$
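Over $\mathbb{R}$, the minimal rank in Corollary 3.3 is attained by the classical choice $X=B^{+}A$ and $Y=(A-BB^{+}A)C^{+}$ (a standard construction using Moore-Penrose inverses, not taken from the paper), since then $A-BX-YC=(I-BB^{+})A(I-C^{+}C)$:

```python
import numpy as np

rng = np.random.default_rng(4)
rank = np.linalg.matrix_rank

m, n, k, l = 6, 7, 2, 3
A = rng.normal(size=(m, n))
B = rng.normal(size=(m, k)); C = rng.normal(size=(l, n))

# Minimizing choice: X = B^+ A, Y = (A - B B^+ A) C^+, giving
# A - BX - YC = (I - B B^+) A (I - C^+ C).
X = np.linalg.pinv(B) @ A
Y = (A - B @ X) @ np.linalg.pinv(C)
M = A - B @ X - Y @ C

block = np.block([[A, B], [C, np.zeros((l, k))]])
min_rank = rank(block) - rank(B) - rank(C)
print(rank(M), min_rank)
```

The equality $r((I-BB^{+})A(I-C^{+}C))=r\begin{pmatrix}A&B\\C&0\end{pmatrix}-r(B)-r(C)$ is a well-known rank identity, so the computed rank of $M$ matches the corollary's minimal value.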

Proof. Our proof is similar to that in [23]. Note that
$$\begin{pmatrix}A-B_3X_3C_3-B_4X_4C_4&B_1\\C_2&0\end{pmatrix}=\begin{pmatrix}A&B_1\\C_2&0\end{pmatrix}-\begin{pmatrix}B_3\\0\end{pmatrix}X_3\begin{pmatrix}C_3&0\end{pmatrix}-\begin{pmatrix}B_4\\0\end{pmatrix}X_4\begin{pmatrix}C_4&0\end{pmatrix}.\tag{3.6}$$

It follows from Theorem 3.1 that there exist $\widehat{X}_3,\widehat{X}_4$ and $\widetilde{X}_3,\widetilde{X}_4$ such that the real quaternion matrix (3.6) attains its maximal and minimal ranks, i.e.,
$$r\begin{pmatrix}A-B_3\widehat{X}_3C_3-B_4\widehat{X}_4C_4&B_1\\C_2&0\end{pmatrix}=\max_{X_3,X_4}r\begin{pmatrix}A-B_3X_3C_3-B_4X_4C_4&B_1\\C_2&0\end{pmatrix}$$
$$=\min\left\{r\begin{pmatrix}A&B_1\\C_2&0\\C_3&0\\C_4&0\end{pmatrix},\ r\begin{pmatrix}A&B_1&B_3&B_4\\C_2&0&0&0\end{pmatrix},\ r\begin{pmatrix}A&B_1&B_3\\C_2&0&0\\C_4&0&0\end{pmatrix},\ r\begin{pmatrix}A&B_1&B_4\\C_2&0&0\\C_3&0&0\end{pmatrix}\right\}$$
and
$$r\begin{pmatrix}A-B_3\widetilde{X}_3C_3-B_4\widetilde{X}_4C_4&B_1\\C_2&0\end{pmatrix}=\min_{X_3,X_4}r\begin{pmatrix}A-B_3X_3C_3-B_4X_4C_4&B_1\\C_2&0\end{pmatrix}$$
$$=r\begin{pmatrix}A&B_1\\C_2&0\\C_3&0\\C_4&0\end{pmatrix}+r\begin{pmatrix}A&B_1&B_3&B_4\\C_2&0&0&0\end{pmatrix}+\max\{t_1,\ t_2\},$$
where $t_1$ and $t_2$ denote the two terms inside the braces in the expression for $\min r[f_2]$ in the statement of the theorem. By Corollary 3.3, we can then find $\widehat{X}_1,\widehat{X}_2$ and $\widetilde{X}_1,\widetilde{X}_2$ such that
$$r[f_2(\widehat{X}_1,\widehat{X}_2,\widehat{X}_3,\widehat{X}_4)]=\min\left\{m,\ n,\ r\begin{pmatrix}A-B_3\widehat{X}_3C_3-B_4\widehat{X}_4C_4&B_1\\C_2&0\end{pmatrix}\right\},$$
$$r[f_2(\widetilde{X}_1,\widetilde{X}_2,\widetilde{X}_3,\widetilde{X}_4)]=r\begin{pmatrix}A-B_3\widetilde{X}_3C_3-B_4\widetilde{X}_4C_4&B_1\\C_2&0\end{pmatrix}-r(B_1)-r(C_2).$$
Substituting the two displayed rank values into these formulas gives the asserted maximal and minimal ranks of $f_2$. □

Theorem 3.5. Let
$$f_3(X_1,X_2,X_3,X_4)=A-B_1X_1C_1-B_2X_2C_2-B_3X_3C_3-B_4X_4C_4\tag{3.7}$$
be a linear matrix expression with four two-sided terms over $\mathbb{H}$, where $\mathcal{R}(B_i)\subseteq\mathcal{R}(B_2)$ and $\mathcal{R}(C_j^*)\subseteq\mathcal{R}(C_1^*)$ for $i=1,3,4$ and $j=2,3,4$. Then
$$\max_{\{X_i\}}r[f_3]=\min\left\{r\begin{pmatrix}A&B_2\end{pmatrix},\ r\begin{pmatrix}A\\C_1\end{pmatrix},\ r\begin{pmatrix}A&B_1&B_3&B_4\\C_2&0&0&0\end{pmatrix},\ r\begin{pmatrix}A&B_1\\C_2&0\\C_3&0\\C_4&0\end{pmatrix},\ r\begin{pmatrix}A&B_1&B_3\\C_2&0&0\\C_4&0&0\end{pmatrix},\ r\begin{pmatrix}A&B_1&B_4\\C_2&0&0\\C_3&0&0\end{pmatrix}\right\},$$
$$\min_{\{X_i\}}r[f_3]=r\begin{pmatrix}A&B_1\\C_2&0\\C_3&0\\C_4&0\end{pmatrix}+r\begin{pmatrix}A&B_1&B_3&B_4\\C_2&0&0&0\end{pmatrix}+r\begin{pmatrix}A&B_2\end{pmatrix}+r\begin{pmatrix}A\\C_1\end{pmatrix}-r\begin{pmatrix}A&B_2\\C_2&0\end{pmatrix}-r\begin{pmatrix}A&B_1\\C_1&0\end{pmatrix}$$
$$+\max\left\{r\begin{pmatrix}A&B_1&B_3\\C_2&0&0\\C_4&0&0\end{pmatrix}-r\begin{pmatrix}A&B_1&B_3&B_4\\C_2&0&0&0\\C_4&0&0&0\end{pmatrix}-r\begin{pmatrix}A&B_1&B_3\\C_2&0&0\\C_3&0&0\\C_4&0&0\end{pmatrix},\ r\begin{pmatrix}A&B_1&B_4\\C_2&0&0\\C_3&0&0\end{pmatrix}-r\begin{pmatrix}A&B_1&B_3&B_4\\C_2&0&0&0\\C_3&0&0&0\end{pmatrix}-r\begin{pmatrix}A&B_1&B_4\\C_2&0&0\\C_3&0&0\\C_4&0&0\end{pmatrix}\right\}.$$

Proof. Applying the method of Theorem 3.4 together with Corollary 3.2, we obtain the results. □

Remark 3.1. Tian [22] derived the maximal and minimal ranks of (1.1), (1.3), (3.5) and (3.7) over $\mathbb{C}$.

4. Minimal rank of the general solution to the real quaternion matrix equation (1.4)

In this section we consider a solvability condition and the general solution to the real quaternion matrix equation (1.4). Moreover, we study the minimal rank of the general solution to (1.4).

Theorem 4.1. Let $A$, $B$, $C$, $D$, $E$ be given. Then the real quaternion matrix equation (1.4) is consistent if and only if
$$r\begin{pmatrix}A&C&B\end{pmatrix}=r\begin{pmatrix}C&B\end{pmatrix},\quad r\begin{pmatrix}A\\D\\E\end{pmatrix}=r\begin{pmatrix}D\\E\end{pmatrix},\quad r\begin{pmatrix}A&B\\E&0\end{pmatrix}=r\begin{pmatrix}0&B\\E&0\end{pmatrix},\quad r\begin{pmatrix}A&C\\D&0\end{pmatrix}=r\begin{pmatrix}0&C\\D&0\end{pmatrix}.\tag{4.1}$$
In this case, the general solution to (1.4) can be expressed as
$$X=T_1^{-1}\begin{pmatrix}A_1&A_2&X_3\\A_4&X_5&X_6\\X_7&X_8&X_9\end{pmatrix}V_1^{-1},\qquad Y=T_2^{-1}\begin{pmatrix}Y_1&Y_2&Y_3\\Y_4&A_5-X_5&A_6\\Y_7&A_8&A_9\end{pmatrix}V_2^{-1},$$
where $A_1$, $A_2$, $A_4$, $A_5$, $A_6$, $A_8$, $A_9$, $T_1$, $V_1$, $T_2$, $V_2$ are defined as in Theorem 2.1, and $X_3$, $X_5$, $X_6$, $X_7$, $X_8$, $X_9$, $Y_1$, $Y_2$, $Y_3$, $Y_4$, $Y_7$ are arbitrary real quaternion matrices.

Remark 4.1. J.K. Baksalary and R. Kala [1] and A.B. Özgüler [17] derived the solvability condition (4.1).

Theorem 4.2. Let $A\in\mathbb{H}^{m\times n}$, $B\in\mathbb{H}^{m\times p_1}$, $C\in\mathbb{H}^{m\times p_2}$, $D\in\mathbb{H}^{q_1\times n}$ and $E\in\mathbb{H}^{q_2\times n}$ be given. Suppose that the real quaternion matrix equation (1.4) is consistent. Then
$$\min_{BXD+CYE=A}r(X)=r\begin{pmatrix}A&C\end{pmatrix}+r\begin{pmatrix}A\\E\end{pmatrix}-r\begin{pmatrix}A&C\\E&0\end{pmatrix},\qquad \min_{BXD+CYE=A}r(Y)=r\begin{pmatrix}A&B\end{pmatrix}+r\begin{pmatrix}A\\D\end{pmatrix}-r\begin{pmatrix}A&B\\D&0\end{pmatrix}.$$

Proof. It follows from Theorem 4.1 that the solution $X$ of (1.4) can be expressed as
$$X=T_1^{-1}\begin{pmatrix}A_1&A_2&X_3\\A_4&X_5&X_6\\X_7&X_8&X_9\end{pmatrix}V_1^{-1},$$
where $A_1$, $A_2$, $A_4$, $T_1$, $V_1$ are defined as in Theorem 2.1, and $X_3$, $X_5$, $X_6$, $X_7$, $X_8$, $X_9$ are arbitrary real quaternion matrices. Since the real quaternion matrix equation (1.4) is consistent, i.e., the

equations in (4.1) hold, we get $A_3=0$, $A_7=0$, $B_1=0$, $D_1=0$, where $A_3$, $A_7$, $B_1$ and $D_1$ are defined as in Theorem 2.1. Note that
$$\begin{pmatrix}A_1&A_2&X_3\\A_4&X_5&X_6\\X_7&X_8&X_9\end{pmatrix}=\begin{pmatrix}A_1&A_2&X_3\\A_4&0&0\\X_7&0&0\end{pmatrix}+\begin{pmatrix}0&0\\I&0\\0&I\end{pmatrix}\begin{pmatrix}X_5&X_6\\X_8&X_9\end{pmatrix}\begin{pmatrix}0&I&0\\0&0&I\end{pmatrix}.\tag{4.2}$$
First, we consider the minimal rank of $X$. Applying Theorem 3.1 to (4.2) yields
$$\min_{BXD+CYE=A}r(X)=\min_{X_3,X_5,X_6,X_7,X_8,X_9}r\begin{pmatrix}A_1&A_2&X_3\\A_4&X_5&X_6\\X_7&X_8&X_9\end{pmatrix}=\min_{X_3,X_7}\left\{r\begin{pmatrix}A_1&A_2&X_3\end{pmatrix}+r\begin{pmatrix}A_1\\A_4\\X_7\end{pmatrix}-r(A_1)\right\}$$
$$=r\begin{pmatrix}A_1&A_2\end{pmatrix}+r\begin{pmatrix}A_1\\A_4\end{pmatrix}-r(A_1)\qquad(\text{set }X_3=0,\ X_7=0).$$
Applying (2.2)-(2.6) to $r\begin{pmatrix}A&C\end{pmatrix}+r\begin{pmatrix}A\\E\end{pmatrix}-r\begin{pmatrix}A&C\\E&0\end{pmatrix}$ yields
$$r\begin{pmatrix}A&C\end{pmatrix}+r\begin{pmatrix}A\\E\end{pmatrix}-r\begin{pmatrix}A&C\\E&0\end{pmatrix}=r\begin{pmatrix}A_1&A_2\end{pmatrix}+r\begin{pmatrix}A_1\\A_4\end{pmatrix}-r(A_1)=\min_{BXD+CYE=A}r(X).$$
The formula for $\min r(Y)$ can be shown similarly. □

Remark 4.2. Tian [21] gave the minimal rank of the general solution to the matrix equation (1.4). Our method is different from that in [21].

5. A simultaneous decomposition of seven real quaternion matrices

In this section, we give a simultaneous decomposition of the real quaternion matrix array
$$\begin{pmatrix}A&B&C&D\\E&&&\\F&&&\\G&&&\end{pmatrix}.\tag{5.1}$$
The proof will be given in another paper.

Theorem 5.1. Let $A\in\mathbb{H}^{m\times n}$, $B\in\mathbb{H}^{m\times p_1}$, $C\in\mathbb{H}^{m\times p_2}$, $D\in\mathbb{H}^{m\times p_3}$, $E\in\mathbb{H}^{q_1\times n}$, $F\in\mathbb{H}^{q_2\times n}$ and $G\in\mathbb{H}^{q_3\times n}$ be given. Then there exist nonsingular matrices $P\in\mathbb{H}^{m\times m}$, $Q\in\mathbb{H}^{n\times n}$, $T_1\in\mathbb{H}^{p_1\times p_1}$, $T_2\in\mathbb{H}^{p_2\times p_2}$, $T_3\in\mathbb{H}^{p_3\times p_3}$, $V_1\in\mathbb{H}^{q_1\times q_1}$, $V_2\in\mathbb{H}^{q_2\times q_2}$, $V_3\in\mathbb{H}^{q_3\times q_3}$ such that
$$A=PS_AQ,\quad B=PS_BT_1,\quad C=PS_CT_2,\quad D=PS_DT_3,\quad E=V_1S_EQ,\quad F=V_2S_FQ,\quad G=V_3S_GQ,\tag{5.2}$$

where $S_A$, $S_B$, $S_C$, $S_D$, $S_E$, $S_F$ and $S_G$ are block matrices in a canonical form analogous to (2.1): $S_A$ is built from zero blocks, the identity blocks $I_{m_1},\ldots,I_{m_8}$, $I_{p_1}$, $I_{p_2}$, $I_{p_3}$ and a $9\times 9$ array of unreduced blocks $(A_{ij})_{1\le i,j\le 9}$; $S_B$, $S_C$ and $S_D$ are built from the blocks $B_1$; $C_1$, $C_2$, $C_3$; $D_1,\ldots,D_5$ together with identity blocks; and $S_E$, $S_F$ and $S_G$ are built from the blocks $E_1$; $F_1$, $F_2$, $F_3$; $G_1,\ldots,G_5$ together with identity blocks $I_{n_1},\ldots,I_{n_8}$, all blocks being of conformal sizes. It is easy to derive the numbers $p_1,\ldots,p_3$, $m_1,\ldots,m_8$ and $n_1,\ldots,n_8$.

Remark 5.1. Tian [21] derived the maximal rank of the matrix expression
$$k(X,Y,Z)=A-BXE-CYF-DZG.\tag{5.3}$$
However, Tian did not give the proof. Applying Theorem 5.1, we can find the maximal and minimal ranks of the matrix expression (5.3). Moreover, we can give a solvability condition and the general solution to the matrix equation
$$BXE+CYF+DZG=A.\tag{5.4}$$

The corresponding results and their applications will be given in another paper.

Remark 5.2. Similarly, we can establish a simultaneous decomposition of $[A,B,C,D]$, where $A=A^*$. Using the results on this simultaneous decomposition, we can derive the maximal rank and extremal inertias of the Hermitian matrix expression
$$h(X,Y,Z)=A-BXB^*-CYC^*-DZD^*,\qquad X=X^*,\ Y=Y^*,\ Z=Z^*.\tag{5.5}$$
In addition, we can give a solvability condition and the general solution to the matrix equation
$$BXB^*+CYC^*+DZD^*=A,\qquad X=X^*,\ Y=Y^*,\ Z=Z^*.\tag{5.6}$$
The corresponding results and their applications will be given in another paper.

Remark 5.3. Tian [23] derived the maximal and minimal inertias of the Hermitian matrix expression
$$f_k(X_1,X_2,\cdots,X_k)=A-B_1X_1B_1^*-\cdots-B_kX_kB_k^*,\qquad X_i=X_i^*\tag{5.7}$$

using generalized inverses of matrices. Chen, He and Wang [3] derived the maximal rank of the Hermitian matrix expression (5.7) through a simultaneous decomposition of the matrix $[A,B_1,\cdots,B_k]$, where $A$ is Hermitian. The approach in this paper is different from that in [3].

6. Conclusions

We have established a simultaneous decomposition of five real quaternion matrices in which three of the matrices have the same number of columns, while three of them have the same number of rows. Using this simultaneous matrix decomposition, we have presented the maximal and minimal ranks of the real quaternion matrix expressions (1.1) and (1.3). We have given a necessary and sufficient condition for the existence of the general solution to the real quaternion matrix equation (1.4), together with an expression of the general solution when (1.4) is solvable. Moreover, we have derived the minimal rank of the general solution to (1.4). The results of this paper may be generalized to an arbitrary division ring (with an involutive anti-automorphism).

References

[1] J.K. Baksalary, R. Kala, The matrix equation AXB + CYD = E, Linear Algebra Appl. 30 (1980) 141-147.
[2] B. De Moor, H.Y. Zha, A tree of generalizations of the ordinary singular value decomposition, Linear Algebra Appl. 147 (1991) 469-500.
[3] D. Chen, Z.H. He, Q.W. Wang, A decomposition of multiple matrices and its applications, submitted.
[4] D.L. Chu, B. De Moor, On a variational formulation of the QSVD and the RSVD, Linear Algebra Appl. 311 (2000) 61-78.
[5] D.L. Chu, H. Chan, D.W.C. Ho, Regularization of singular systems by derivative and proportional output feedback, SIAM J. Matrix Anal. Appl. 19 (1998) 21-38.
[6] D.L. Chu, L. De Lathauwer, B. De Moor, On the computation of the restricted singular value decomposition via the cosine-sine decomposition, SIAM J. Matrix Anal. Appl. 22 (2) (2000) 550-601.
[7] D.L. Chu, Y.S. Hung, H.J. Woerdeman, Inertia and rank characterizations of some matrix expressions, SIAM J. Matrix Anal. Appl. 31 (2009) 1187-1226.
[8] C.F. Van Loan, Generalizing the singular value decomposition, SIAM J. Numer. Anal. 13 (1976) 76-83.
[9] N. Cohen, J. Dancis, Inertias of block band matrix completions, SIAM J. Matrix Anal. Appl. 19 (1998) 583-612.
[10] N. Cohen, C.R. Johnson, L. Rodman, H.J. Woerdeman, Ranks of completions of partial matrices, Oper. Theory: Adv. Appl. 40 (1989) 165-185.
[11] W.H. Gustafson, Quivers and matrix equations, Linear Algebra Appl. 231 (1995) 159-174.
[12] T.W. Hungerford, Algebra, Springer-Verlag, New York, 1980.
[13] Y.H. Liu, Y.G. Tian, Max-min problems on the ranks and inertias of the matrix expressions A - BXC +/- (BXC)*, J. Optim. Theory Appl. 148 (3) (2011) 593-622.
[14] Y.H. Liu, Y.G. Tian, More on extremal ranks of the matrix pencil A - BX - YC with applications, in: Proceedings of the 14th Conference of the International Linear Algebra Society, World Academic Press, Liverpool, UK, 2007, pp. 192-199.
[15] Y.H. Liu, Y.G. Tian, A simultaneous decomposition of a matrix triplet with applications, Numer. Linear Algebra Appl. 18 (1) (2011) 69-85.
[16] Y.H. Liu, How to use RSVD to solve the matrix equation A = BXC, Linear and Multilinear Algebra 58 (5) (2010) 537-543.
[17] A.B. Özgüler, The matrix equation AXB + CYD = E over a principal ideal domain, SIAM J. Matrix Anal. Appl. 12 (1991) 581-591.
[18] C.C. Paige, M.A. Saunders, Towards a generalized singular value decomposition, SIAM J. Numer. Anal. 18 (1981) 398-405.
[19] Y.G. Tian, Upper and lower bounds for ranks of matrix expressions using generalized inverses, Linear Algebra Appl. 355 (2002) 187-214.
[20] Y.G. Tian, The minimal rank completion of a 3 x 3 partial block matrix, Linear and Multilinear Algebra 50 (2002) 125-131.
[21] Y.G. Tian, Ranks and independence of solutions of the matrix equation AXB + CYD = M, Acta Math. Univ. Comenianae 1 (2006) 75-84.
[22] Y.G. Tian, Rank equalities related to generalized inverses and their applications, M.Sc. Thesis, Department of Mathematics and Statistics, Concordia University, Montréal, April 1999.
[23] Y.G. Tian, Equalities and inequalities for inertias of Hermitian matrices with applications, Linear Algebra Appl. 433 (2010) 263-269.
[24] C.C. Took, D.P. Mandic, F.Z. Zhang, On the unitary diagonalization of a special class of quaternion matrices, Appl. Math. Lett. 24 (2011) 1806-1809.
[25] C.F. Van Loan, Generalizing the singular value decomposition, SIAM J. Numer. Anal. 13 (1976) 76-83.
[26] Q.W. Wang, J.W. van der Woude, S.W. Yu, An equivalence canonical form of a matrix triplet over an arbitrary division ring with applications, Sci. China Math. 54 (5) (2011) 907-924.
[27] Q.W. Wang, X. Zhang, J.W. van der Woude, A new simultaneous decomposition of a matrix quaternity over an arbitrary division ring with applications, Comm. Algebra 40 (2012) 2309-2342.
[28] H.J. Woerdeman, The lower order of triangular operators and minimal rank extensions, Integral Equations Operator Theory 10 (1987) 859-879.
[29] H.J. Woerdeman, Minimal rank completions for block matrices, Linear Algebra Appl. 121 (1989) 105-122.
[30] H.J. Woerdeman, Minimal rank completions of partial banded matrices, Linear and Multilinear Algebra 36 (1993) 59-69.
[31] G.P. Xu, M.S. Wei, D.S. Zheng, On solutions of matrix equation AXB + CYD = F, Linear Algebra Appl. 279 (1998) 93-109.
[32] H.Y. Zha, The restricted singular value decomposition of matrix triplets, SIAM J. Matrix Anal. Appl. 12 (1991) 172-194.
[33] R.A. Horn, F. Zhang, A generalization of the complex Autonne-Takagi factorization to quaternion matrices, Linear and Multilinear Algebra 60 (11-12) (2012) 1239-1244.
[34] F.Z. Zhang, Quaternions and matrices of quaternions, Linear Algebra Appl. 251 (1997) 21-57.
