P†-matrices: A generalization of P-matrices
M. Rajesh Kannan and K.C. Sivakumar
Department of Mathematics, Indian Institute of Technology Madras, Chennai 600 036, India.

Abstract

For A, B ∈ Rn×n, let r(A, B) (c(A, B)) be the set of matrices whose rows (columns) are independent convex combinations of the rows (columns) of A and B. Johnson and Tsatsomeros have shown that the set r(A, B) (c(A, B)) consists entirely of nonsingular matrices if and only if BA−1 (B−1A) is a P-matrix. For A, B ∈ Rn×n, let i(A, B) = {C ∈ Rn×n : min{aij, bij} ≤ cij ≤ max{aij, bij}}. Rohn has shown that if all the matrices in i(A, B) are invertible, then BA−1, A−1B, AB−1 and B−1A are P-matrices. In this article, we define a new class of matrices called P†-matrices and present certain extensions of the above results to the singular case, where the usual inverse is replaced by the Moore-Penrose generalized inverse. The case of the group inverse is briefly discussed.
AMS Subject Classification (2010): 15A09. Keywords: Interval hull of matrices; P†-matrix; Moore-Penrose inverse; P#-matrix; group inverse; range kernel regularity; range-symmetric matrix.
1
Introductory Remarks
Let Rm×n denote the set of all m × n matrices over the reals. A ∈ Rn×n is said to be a P-matrix (P0-matrix) [5] if all its principal minors are positive (nonnegative). There are numerous ways to describe a P-matrix. For our discussion, we consider the following equivalent conditions on A ∈ Rn×n:

1. All principal minors of A are positive.
2. (non-sign-reversal property) x ∈ Rn and xi(Ax)i ≤ 0 for all i imply x = 0.
3. All the real eigenvalues of A and of its principal submatrices are positive.

For a proof we refer to [4]. It is clear that every P-matrix is invertible. Next, we develop some notation. Let diag(t1, t2, ..., tn) denote the diagonal matrix whose diagonal entries are t1, t2, ..., tn. Let E denote the matrix whose entries are all one, and let ◦ denote the Hadamard (entrywise) product of matrices. For any A, B ∈ Rn×n we define the following sets:

h(A, B) = {C : C = tA + (1 − t)B, t ∈ [0, 1]},
r(A, B) = {C : C = TA + (I − T)B, T = diag(t1, t2, ..., tn), ti ∈ [0, 1], 1 ≤ i ≤ n},
c(A, B) = {C : C = AT + B(I − T), T = diag(t1, t2, ..., tn), ti ∈ [0, 1], 1 ≤ i ≤ n},
i(A, B) = {C : C = T ◦ A + (E − T) ◦ B, T = (tij), tij ∈ [0, 1], 1 ≤ i, j ≤ n},

and we write J = i(A, B) if A ≤ B. For A, B ∈ Rn×n, the interval hull of the matrices A and B can equivalently be described as i(A, B) = {C ∈ Rn×n : min{aij, bij} ≤ cij ≤ max{aij, bij}}. From the definitions it is clear that h(A, B) ⊆ r(A, B) ⊆ i(A, B) and h(A, B) ⊆ c(A, B) ⊆ i(A, B).

For A ∈ Rm×n, we denote the transpose of A, the range space of A and the null space of A by AT, R(A) and N(A), respectively. For a given A ∈ Rm×n, the unique matrix X ∈ Rn×m satisfying AXA = A, XAX = X, (AX)T = AX and (XA)T = XA is called the Moore-Penrose inverse of A and is denoted by A†. For complementary subspaces L and M of Rn, the projection of Rn on L along M will be denoted by PL,M. If, in addition, L and M are orthogonal, then we denote this by PL. When m = n, the smallest nonnegative integer k such that rank(Ak+1) = rank(Ak), denoted by Ind(A), is called the index of A. There is a unique matrix AD ∈ Rn×n satisfying the equations Ak+1AD = Ak, AD AAD = AD and AAD = AD A; AD is called the Drazin
inverse of A. If Ind(A) = 1, then AD is called the group inverse of A and is denoted by A#. Let us also mention that the group inverse has been studied for linear operators over finite dimensional spaces in [10]. It is shown there that the group inverse of a linear map T : V → V, where V is a finite dimensional vector space, exists if and only if N(T 2) = N(T), which is equivalent to the statement R(T 2) = R(T). Of course, if A is invertible, then all these generalized inverses coincide with A−1. Some of the well known properties of A† and A# that will be used frequently are ([1]): R(AT) = R(A†); N(AT) = N(A†); AA† = PR(A); A†A = PR(AT); R(A) = R(A#); N(A) = N(A#); AA# = PR(A),N(A). In particular, if x ∈ R(AT) then x = A†Ax, and if x ∈ R(A) then x = A#Ax.

The interval hull i(A, B) is said to be range kernel regular if R(A) = R(B) and N(A) = N(B). For a range kernel regular interval hull i(A, B), we define K = {C ∈ i(A, B) : R(A) = R(C) and N(A) = N(C)}. Obviously, when A and B are invertible, K contains only invertible matrices. This will be used in what follows.

This paper deals with a certain class of matrices, called P†-matrices, and mainly with their relations to subsets of the interval hull of matrices. Let us begin with the definition of P†-matrices.

Definition 1.1. A square matrix A is said to be a P†-matrix if for each nonzero x ∈ R(AT) there is an i ∈ {1, 2, ..., n} such that xi(Ax)i > 0. Equivalently, for any x ∈ R(AT), the inequalities xi(Ax)i ≤ 0 for i = 1, 2, ..., n imply x = 0.

Next, we discuss an important motivation for introducing the notion of P†-matrices. In their study of the subsets of the interval hull of matrices defined above, Johnson and Tsatsomeros [3] were interested in characterizations that guarantee that these subsets contain only invertible matrices. Among other results, they proved the following:

Theorem 1.1. (Theorem 3.3, [3]) Let A, B ∈ Rn×n be such that A and B are invertible. Then r(A, B) ⊆ K (i.e., r(A, B) consists of only invertible matrices) if and only if BA−1 is a P-matrix.

To explore the extent to which Theorem 1.1 could be generalized to singular matrices, let us consider the question: does Theorem 1.1 have an analogue for P0-matrices? The next example demonstrates that there is no verbatim analogue for P0-matrices, where the condition that BA−1 is a P-matrix is replaced (naturally) by the condition that BA† is a P0-matrix. To reiterate, our interest is in the case where both A and B are singular matrices which have the same range space and null space.
Example 1.1. Let

A = [1/2 1/2 0; −2 −1 0; 0 0 0] and B = [2 1 0; 2 2 0; 0 0 0].

Then R(A) = R(B) and N(A) = N(B), and BA† = [0 −1 0; 4 0 0; 0 0 0] is a P0-matrix. But C = [1/2 1/2 0; 2 2 0; 0 0 0] ∈ r(A, B) (take T = diag(1, 0, 0)) and R(A) ≠ R(C), so that C ∉ K.

Interestingly, for the new class, viz., P†-matrices, among other results, the following analogue holds (Theorem 3.2, Section 3):

Theorem 1.2. Let A, B ∈ Rn×n be such that R(A) = R(B) and N(A) = N(B). Then r(A, B) ⊆ K if and only if BA† is a P†-matrix.

From the preceding discussion, it appears that, in so far as generalizing results which establish relationships between interval matrices and P-matrices is concerned, the notion of a P†-matrix is quite natural to consider. Further, in order to reinforce the fact that our generalization is appropriate, we consider yet another, seemingly similar extension of a P-matrix, which we call a P#-matrix. (This notion is treated, very briefly, in the last section.)

Definition 1.2. A square matrix A is said to be a P#-matrix if for each nonzero x ∈ R(A) there is an i ∈ {1, 2, ..., n} such that xi(Ax)i > 0. Equivalently, for any x ∈ R(A), the inequalities xi(Ax)i ≤ 0 for i = 1, 2, ..., n imply x = 0.

Let us consider the Lyapunov transformation, the Stein transformation and the multiplicative transformation on the space S n of real symmetric n × n matrices. These are some of the most widely studied maps in the context of semidefinite linear complementarity problems. Formally, for a given A ∈ Rn×n, define:
(a) the Lyapunov transformation LA : S n → S n, by LA(X) = AX + XAT;
(b) the Stein transformation SA : S n → S n, by SA(X) = X − AXAT;
(c) the multiplicative transformation MA : S n → S n, by MA(X) = AXAT.

The operators mentioned above have been studied, among other things, in the context of knowing when they are P-operators. Here, an operator T on S n is said to be a P-operator if the following implication holds for every X ∈ S n: X and T(X) commute and XT(X) ≤ 0 =⇒ X = 0, where for Y ∈ S n we use Y ≥ 0 to denote that Y is positive semidefinite, i.e., uT Y u ≥ 0 for all u ∈ Rn. Also, −Y ≥ 0 is denoted by Y ≤ 0. In this context, the following results are rather well known [13].

Theorem 1.3. Let A ∈ Rn×n. We have the following:
(a) If LA is a P-operator, then both LA−1 and A−1 exist.
(b) If SA is a P-operator, then both SA−1 and (I − A)−1 exist.
(c) If MA is a P-operator, then both MA−1 and A−1 exist.

Interestingly, we have an analogue of the above result for the case of P#-operators. We present this notion next and mention an extension of the theorem above for such operators.

Definition 1.3. An operator T on S n is said to be a P#-operator if for every X ∈ S n: X ∈ R(T), XT(X) = T(X)X and XT(X) ≤ 0 =⇒ X = 0.

Theorem 1.4. Let A ∈ Rn×n. We have the following:
(a) If LA is a P#-operator, then both LA# and A# exist.
(b) If SA is a P#-operator, then both SA# and (I − A)# exist.
(c) If MA is a P#-operator, then both MA# and A# exist.
The proof is postponed to the concluding section. It would be equally interesting to know what information one could deduce for the Moore-Penrose inverse A† (which, unlike the group inverse, always exists; see the last paragraph), when the operators considered above are P†-operators.

Now, we summarize the contents of the article. First, we present the important notion of P†-matrices, primarily motivated by the second equivalent condition for P-matrices. We introduce this as a plausible generalization for possibly singular matrices. This is followed by two immediate consequences (Theorem 2.3 and Theorem 2.4). In Theorem 2.5 we present a necessary and sufficient condition for a matrix to be a P†-matrix. The next result shows that, if A is a singular M-matrix such that A† ≥ 0, then A is permutationally similar to a P†-matrix (Theorem 2.7). The main results of this article appear in Sections 3 and 4; these results are briefly discussed next. In Section 3, in Theorem 3.2, under certain assumptions, we characterize the inclusion condition r(A, B) ⊆ K in terms of P†-matrices. An analogous result for c(A, B) ⊆ K is presented next. The next result, Theorem 3.5, provides a necessary condition for the equality i(A, B) = K to hold and generalizes a corresponding result in [11]. The last main result of this section extends a result in [3] for range-symmetric matrices. In the next section, we study the inclusion h(A, B) ⊆ K. The first result, Theorem 4.1, presents a characterization in terms of a certain constrained eigenvalue condition. More importantly, the role of this inclusion in the context of a certain generalized regularity condition is studied next; this is done in Theorem 4.3. A new characterization for the nonnegativity of the Moore-Penrose inverse of certain elements of an interval is proved in Theorem 4.5. The concluding section discusses the notion of P#-matrices and certain results for some P#-operators on S n. Let us mention here that traditional approaches to P-matrices make rather heavy use of determinants. In contrast, our proofs are purely linear algebraic in nature and thus have the potential to be extended even to infinite dimensional spaces.
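The sign condition of Definition 1.1 also lends itself to a simple numerical probe: sample vectors from R(AT) (via the orthogonal projector A†A) and search for a violation. The following Python sketch does this; the function name, tolerances and trial count are our own illustrative choices, and a randomized search is of course only a sanity check, not a proof.

    import numpy as np

    def p_dagger_violation(A, trials=10_000, tol=1e-12, seed=0):
        """Randomized search for a unit vector x in R(A^T) with
        x_i (Ax)_i <= 0 for every i (negating Definition 1.1).
        Returns such an x if found, else None."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        P = np.linalg.pinv(A) @ A           # orthogonal projector onto R(A^T)
        for _ in range(trials):
            x = P @ rng.standard_normal(n)  # random vector in R(A^T)
            nrm = np.linalg.norm(x)
            if nrm < 1e-8:
                continue
            x = x / nrm
            if np.all(x * (A @ x) <= tol):
                return x
        return None

    print(p_dagger_violation(np.eye(3)))                       # None: I is a P-matrix
    print(p_dagger_violation(np.array([[0., 1.], [0., 0.]])))  # a witness: not P†

For the identity (a P-matrix) no violation exists, while for the matrix [0 1; 0 0] (revisited in Section 2) a witness is found at once.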
2
P†-Matrices
First, we present the important notion of P†-matrices, primarily motivated by the second equivalent condition for P-matrices mentioned in the introduction. This is
followed by two immediate results, the first of which shows that the class of P†-matrices is closed with respect to the Moore-Penrose inverse (Theorem 2.3). Next, in Theorem 2.5, a well known characterization of P-matrices is extended. The case of singular Z-matrices is studied next (Theorem 2.7), where it is shown that, if A is a singular Z-matrix such that A† ≥ 0, then A is permutationally similar to a P†-matrix. Let us recall the definition of a P†-matrix.

Definition 2.1. A square matrix A is said to be a P†-matrix if for each nonzero x ∈ R(AT) there is an i ∈ {1, 2, ..., n} such that xi(Ax)i > 0. Equivalently, for any x ∈ R(AT), the inequalities xi(Ax)i ≤ 0 for i = 1, 2, ..., n imply x = 0.

Trivially, every P-matrix is a P†-matrix. Recall that a matrix A is called a P0-matrix if all the principal minors of A are nonnegative. We show that the class of all singular P0-matrices is different from the class of all P†-matrices. Let A1 = [0 1; 0 0]. Then A1 is a P0-matrix, but not a P†-matrix. Let A2 = [1 1; −1 −1]. Then A2 is a P†-matrix, but not a P0-matrix. The matrix A2 also shows that a principal submatrix of a P†-matrix need not be a P†-matrix. Also, it is known that if A is a P0-matrix, then A† need not be a P0-matrix.
For example [8], let A = [1 0.9 1.8; 0.9 1 1.81; 1.81 1.8 3.429]. Then A is a P0-matrix. But

A† = [7.5161 −7.929 0.38; −7.2441 7.589 −0.414; −0.47961 0.4529 −0.072]

is not a P0-matrix.
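This computation is easy to reproduce; the following sketch (the minor-enumeration helper is our own) confirms that A is a singular P0-matrix while pinv(A) has a negative principal minor:

    import numpy as np
    from itertools import combinations

    def smallest_principal_minor(M):
        """Minimum of det(M[S, S]) over all nonempty index sets S."""
        n = M.shape[0]
        return min(np.linalg.det(M[np.ix_(list(S), list(S))])
                   for k in range(1, n + 1)
                   for S in combinations(range(n), k))

    A = np.array([[1.0,  0.9,  1.8],
                  [0.9,  1.0,  1.81],
                  [1.81, 1.8,  3.429]])
    print(smallest_principal_minor(A))                  # ~0: nonnegative up to roundoff
    print(smallest_principal_minor(np.linalg.pinv(A)))  # clearly negative: not P0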
Let us also mention some of the important classes of matrices that have been shown to have the property that their Moore-Penrose inverses are P0-matrices.

Theorem 2.1. (Theorem 2.6, [8]) Let Ã ∈ Rn×n be a singular M-matrix which is permutationally similar to A = [A11 0; A21 A22], where A11 is a singular irreducible M-matrix of order k, 2 ≤ k ≤ n − 1, and A22 is a nonsingular M-matrix. Then Ã† is a P0-matrix.

Theorem 2.2. (Corollary 3.2, [9]) If A ∈ Rn×n is a singular M-matrix of rank n − 1, then A† is a P0-matrix.

It is well known that, if A is a P-matrix, then A−1 is a P-matrix and the real parts of the eigenvalues of A are positive. The next two results extend these properties to P†-matrices and, in some sense, justify the nomenclature P†-matrix.

Theorem 2.3. A is a P†-matrix if and only if A† is a P†-matrix.

Proof. Clearly, it is enough to prove the necessity part. Let 0 ≠ y ∈ R((A†)T) = R(A). Then y = Ax for some x ∈ Rn. So, yi(A†y)i = (Ax)i(A†Ax)i = (A†Ax)i(A(A†Ax))i = ui(Au)i, where u = A†Ax ∈ R(AT); note that Au = AA†Ax = Ax = y ≠ 0, so u ≠ 0. Since A is a P†-matrix, it follows that there exists at least one i such that ui(Au)i > 0. Thus A† is a P†-matrix.

Theorem 2.4. Let A be a P†-matrix. Suppose Ax = λx, with 0 ≠ x ∈ R(AT) and λ ∈ R. Then λ > 0.

Proof. Since A is a P†-matrix and 0 ≠ x ∈ R(AT), there is some i ∈ {1, 2, ..., n} with λxi² = xi(Ax)i > 0. Hence λ > 0.

Theorem 2.5. Let A ∈ Rn×n. Then A is a P†-matrix if and only if for each 0 ≠ x ∈ R(AT) there is a positive diagonal matrix Dx ∈ Rn×n such that xT(Dx Ax) > 0.

Proof. Necessity: Let 0 ≠ x ∈ R(AT) be given. Then there exists i0 ∈ {1, ..., n} such that xi0(Ax)i0 > 0, and hence there exists ε > 0 such that xi0(Ax)i0 + ε Σ_{j≠i0} xj(Ax)j > 0. Now, set Dx = diag(d1, ..., dn) with di0 = 1 and dj = ε for all j ≠ i0; then xT(Dx Ax) = xi0(Ax)i0 + ε Σ_{j≠i0} xj(Ax)j > 0.
Sufficiency: Suppose that for each 0 ≠ x ∈ R(AT) there exists a positive diagonal matrix Dx ∈ Rn×n such that xT(Dx Ax) > 0. Since xT(Dx Ax) = Σi di xi(Ax)i > 0 and each di > 0, we must have xi(Ax)i > 0 for some i. Hence A is a P†-matrix.

Remark: A matrix A is called a Z-matrix if all the off-diagonal entries of A are nonpositive. It follows that a Z-matrix A can be written as A = sI − B, where B ≥ 0. A Z-matrix A is called an M-matrix if, in the above decomposition, we also have s ≥ ρ(B), where ρ(·) denotes the spectral radius. An M-matrix A is invertible if and only if s > ρ(B), which holds if and only if A−1 ≥ 0. It is also known that if A is a Z-matrix, then A is a P-matrix if and only if A is an invertible M-matrix. For details we refer to Chapter 6 of [2]. However, a verbatim statement for the Moore-Penrose inverse is not true for P†-matrices, as the following example shows. Let A = [1 −1; −1 1]. Then A is a Z-matrix, and it can be verified that A is a P†-matrix. But A† = [1/4 −1/4; −1/4 1/4] ≱ 0.
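A quick numerical confirmation of this example:

    import numpy as np

    A = np.array([[ 1.0, -1.0],
                  [-1.0,  1.0]])   # singular Z-matrix: A = sI - B with s = rho(B) = 1
    Ad = np.linalg.pinv(A)
    print(Ad)                      # [[ 0.25 -0.25], [-0.25  0.25]]
    print(bool(np.all(Ad >= 0)))   # False: A† is not nonnegative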
For the singular case (when s = ρ(B)), the following result is quite well known.

Theorem 2.6. (Theorem 3.9, [6]) Let A be a Z-matrix such that all its principal minors are nonnegative. Then A† ≥ 0 if and only if there exists a permutation matrix S such that SAST = [T 0; 0 0], where T is an invertible M-matrix.
Note that the matrix T in Theorem 2.6 is a P-matrix. As a consequence, we have the following result.

Theorem 2.7. Let A ∈ Rn×n be a Z-matrix with all principal minors nonnegative and A† ≥ 0. Then there exists a permutation matrix S such that SAST is a P†-matrix.

Proof. By Theorem 2.6, we have C = SAST = [T 0; 0 0], with T ∈ Rm×m being an
invertible M-matrix, i.e., a P-matrix. Now we shall verify that C is a P†-matrix. Let 0 ≠ x = (x1, ..., xn)T ∈ R(CT). Define u = (x1, ..., xm)T; then 0 ≠ u ∈ R(TT). Hence there exists at least one i ∈ {1, ..., m} such that ui(Tu)i > 0. Since xi(Cx)i = ui(Tu)i for each 1 ≤ i ≤ m, it then follows that C is a P†-matrix.

Now we shall discuss some examples of P†-matrices. It can easily be shown that a rank one matrix A is a P†-matrix if and only if at least one of the diagonal entries of A is positive. The first example below shows that this is not true for matrices of higher rank.
Let A = [1 0 1 0; 1 0 1 0; 0 1 0 0; 0 1 0 0]. Consider the vector z = (0, 1, 0, 0)T ∈ R(AT); then zi(Az)i = 0 for all i, even though a11 = 1 > 0. Hence A is not a P†-matrix.

Next, we present three classes of examples, each giving rise to matrices A of any given order n × n with the property that A has rank k and is a P†-matrix. Interestingly, these matrices also demonstrate that P†-matrices are not necessarily direct sums of P-matrices, underscoring the fact that they are not trivial extensions of P-matrices. The details of the calculations are presented for the first class; the other two are similar, and a numerical sanity check is given after the list below.

Let A ∈ Rn×n be defined as follows: Ae1 = 0 and Aek = Σ_{j=1}^k ej, k ≥ 2. Then AT e1 = AT e2 = Σ_{j=2}^n ej and AT ek = Σ_{j=k}^n ej, k ≥ 3. Let y ∈ Rn. Then Ay = (z1, z2, ..., zn)T, where zi = Σ_{j=2}^n yj for i ≤ 2 and zi = Σ_{j=i}^n yj for i ≥ 3. Now, let y ∈ R(AT); y = AT x for some x ∈ Rn. Then y1 = 0 and yk = Σ_{j=1}^k xj, k ≥ 2. Also, y1(Ay)1 = 0 and yk(Ay)k = (Σ_{j=1}^k xj)[(n − k + 1) Σ_{j=1}^k xj + Σ_{j=k+1}^n (n − j + 1)xj], k ≥ 2. Suppose yi(Ay)i ≤ 0 for all i. Now, yn(Ay)n ≤ 0 implies Σ_{j=1}^n xj = 0, and then yn−1(Ay)n−1 ≤ 0 implies Σ_{j=1}^{n−1} xj = 0; recursively, we can prove that Σ_{j=1}^k xj = 0 for all k ≥ 2. Thus y = 0, and hence A is a P†-matrix. Clearly, A is not a direct sum of P-matrices. The other two classes of matrices are given next.

1. Define Aek = 0 for 1 ≤ k ≤ i − 1 and Aek = Σ_{j=1}^k ej, k ≥ i.
2. Define Aek = 0 for 1 ≤ k ≤ i − 1 and Aek = Σ_{j=i}^k ej, k ≥ i.
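The first class can be sanity-checked numerically along the lines of the sketch in the introduction (randomized, hence only heuristic); the constructor below, our own, builds the matrix for a given n:

    import numpy as np

    def first_class(n):
        """A e_1 = 0 and A e_k = e_1 + ... + e_k for k >= 2 (columns of A)."""
        A = np.zeros((n, n))
        for k in range(1, n):          # 0-based column k corresponds to e_{k+1}
            A[:k + 1, k] = 1.0
        return A

    A = first_class(6)
    P = np.linalg.pinv(A) @ A          # orthogonal projector onto R(A^T)
    rng = np.random.default_rng(1)
    found = False
    for _ in range(10_000):
        y = P @ rng.standard_normal(6)
        if np.linalg.norm(y) > 1e-8 and np.all(y * (A @ y) <= 1e-12):
            found = True
            break
    print(found)   # False: no sign violation found, consistent with A being P†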
3

Relationship with subsets of i(A, B)
In Theorem 3.2, under certain assumptions, we characterize the inclusion condition r(A, B) ⊆ K in terms of P†-matrices. An analogous result for c(A, B) ⊆ K is presented next. The next result, Theorem 3.5, provides a necessary condition for the equality i(A, B) = K to hold and generalizes a corresponding result in [11]. The last main result of this section extends a result in [3] for range-symmetric matrices. In [3], the following result was proved.

Theorem 3.1. (Theorem 3.3, [3]) Let A, B ∈ Rn×n be such that A and B are invertible. Then r(A, B) ⊆ K if and only if BA−1 is a P-matrix.

The following result generalizes the theorem above and also simplifies the proof given there.

Theorem 3.2. Let A, B ∈ Rn×n be such that R(A) = R(B) and N(A) = N(B). Then r(A, B) ⊆ K if and only if BA† (and hence AB†) is a P†-matrix.

Proof. Necessity: Let r(A, B) ⊆ K and suppose that BA† is not a P†-matrix. Then there exists 0 ≠ x ∈ R((BA†)T) ⊆ R(A) such that xi(BA†x)i ≤ 0 for all i. For 1 ≤ i ≤ n, consider the function fi : [0, 1] → R defined by fi(t) = txi + (1 − t)(BA†x)i. Since fi(1) = xi, fi(0) = (BA†x)i and xi(BA†x)i ≤ 0, the intermediate value theorem yields ti ∈ [0, 1] such that tixi + (1 − ti)(BA†x)i = 0. Let T = diag(t1, t2, ..., tn). Then Tx + (I − T)(BA†)x = 0. Also, x = Ay for some y ∈ R(AT), and A†Ay = y. Hence, 0 = Tx + (I − T)BA†x = (TA + (I − T)B)y. Since TA + (I − T)B ∈ r(A, B) ⊆ K, this implies that y ∈ N(A). But y ∈ R(AT) ∩ N(A) = {0}, so x = Ay = 0, a contradiction. So, BA† is a P†-matrix.

Sufficiency: Note first that, for C = TA + (I − T)B ∈ r(A, B), we always have N(A) ⊆ N(C) and R(C) ⊆ R(A); hence C ∈ K as soon as N(C) ⊆ N(A). Suppose then that (TA + (I − T)B)x = 0, where T = diag(t1, t2, ..., tn), ti ∈ [0, 1], for some x ∉ N(A); splitting off the N(A)-component of x, we may assume that 0 ≠ x ∈ R(AT). Since x ∈ R(AT), we have x = A†y for some 0 ≠ y ∈ R(A). Thus y = AA†y, so that (TA + (I − T)B)A†y = 0, i.e., Ty + (I − T)BA†y = 0. Since ti ≥ 0 and 1 − ti ≥ 0, it follows that yi(BA†y)i ≤ 0 for each i. Moreover, R((BA†)T) = (A†)T R(BT) = (A†)T R(AT) = R(A), so that 0 ≠ y ∈ R((BA†)T). That is, BA† is not a P†-matrix.
It can be shown that AB† is the Moore-Penrose inverse of BA† under the assumptions R(A) = R(B) and N(A) = N(B); the last part now follows from Theorem 2.3.

In a similar way, the following result can be established.

Theorem 3.3. Let A, B ∈ Rn×n be such that R(A) = R(B) and N(A) = N(B). Then c(A, B) ⊆ K if and only if B†A (and hence A†B) is a P†-matrix.

Combining the previous two results, we have the following:

Corollary 3.1. Let A, B ∈ Rn×n be such that R(A) = R(B) and N(A) = N(B). Then r(A, B) ⊆ K and c(A, B) ⊆ K if and only if B†A, A†B, AB† and BA† are P†-matrices.

Rohn [11] and Johnson and Tsatsomeros [3] proved the following result for the interval hull of matrices.

Theorem 3.4. (Theorem 3.2, [3]) Let A, B ∈ Rn×n be such that each matrix in i(A, B) is invertible. Then BA−1, A−1B, AB−1 and B−1A are P-matrices.

The next result is a generalization for P†-matrices.

Theorem 3.5. Let A, B ∈ Rn×n be such that R(A) = R(B) and N(A) = N(B), and further let i(A, B) = K. Then BA†, A†B, AB† and B†A are P†-matrices.

Proof. We only prove that BA† is a P†-matrix; the proofs for the other matrices are similar. Suppose that BA† is not a P†-matrix. Then there exists a nonzero x ∈ R((BA†)T) ⊆ R(A) such that xi(BA†x)i ≤ 0 for all i. Let Ci denote the i-th row of the matrix C ∈ Rn×n defined by Ci = tiAi + (1 − ti)Bi, where Ai, Bi are the i-th rows of A, B, respectively, ti = 1 if xi = 0, and ti is an arbitrary root in [0, 1] of the continuous function φi(t) = xi((B + t(A − B))A†x)i if xi ≠ 0. Since φi(0) = xi(BA†x)i ≤ 0 and φi(1) = xi(AA†x)i = xi² ≥ 0, such a root exists. Since each Ci is a convex combination of Ai and Bi, we have C ∈ i(A, B). Now, we shall prove that A†x ∈ N(C). If xi = 0, then (CA†x)i = Ai(A†x) = (AA†x)i = xi = 0, and if xi ≠ 0, then (CA†x)i = φi(ti)/xi = 0. Hence A†x ∈ N(C). If A†x ∈ N(A), then x = AA†x = 0, a contradiction; so N(C) ≠ N(A), again a contradiction (to i(A, B) = K). Hence BA† must be a P†-matrix.
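The matrices of Example 1.1 illustrate Theorem 3.2 concretely: since r(A, B) ⊄ K there, BA† must fail the P† sign condition, and x = e2 is an explicit witness. A short numerical check:

    import numpy as np

    A = np.array([[0.5,  0.5, 0.0], [-2.0, -1.0, 0.0], [0.0, 0.0, 0.0]])
    B = np.array([[2.0,  1.0, 0.0], [ 2.0,  2.0, 0.0], [0.0, 0.0, 0.0]])
    C = np.array([[0.5,  0.5, 0.0], [ 2.0,  2.0, 0.0], [0.0, 0.0, 0.0]])

    F = B @ np.linalg.pinv(A)          # BA† = [[0, -1, 0], [4, 0, 0], [0, 0, 0]]
    x = np.array([0.0, 1.0, 0.0])      # x = e2
    print(np.allclose(np.linalg.pinv(F) @ F @ x, x))  # True: x lies in R(F^T)
    print(x * (F @ x))                 # [0. 0. 0.]: the sign condition fails
    print(np.linalg.matrix_rank(C), np.linalg.matrix_rank(A))  # 1 2: C not in K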
In [3], the following result for real P-matrices was also proved.

Theorem 3.6. (Theorem 3.8, [3]) The following are equivalent for F ∈ Rn×n:
(a) F is a P-matrix.
(b) There exist A, B ∈ Rn×n such that F = BA−1 and all the matrices in r(A, B) are invertible.
(c) There exist A, B ∈ Rn×n such that F = B−1A and all the matrices in c(A, B) are invertible.

The next result generalizes this. We need the following notation. For a given F ∈ Rn×n, define KF = {C ∈ Rn×n : R(C) = R(F) and N(C) = N(F)}; KF is nonempty, as F ∈ KF. Let us recall that a real n × n matrix F is said to be range-symmetric if R(FT) = R(F). This is equivalent to the condition that N(FT) = N(F).

Theorem 3.7. For a range-symmetric matrix F ∈ Rn×n, the following are equivalent:
(a) F is a P†-matrix.
(b) There exist A, B ∈ Rn×n such that F = BA† and r(A, B) ⊆ KF.
(c) There exist A, B ∈ Rn×n such that F = B†A and c(A, B) ⊆ KF.

Proof. (a) =⇒ (b): Suppose that F is a P†-matrix and B ∈ KF. Set A = F†B. Then N(B) ⊆ N(A). On the other hand, if Ax = 0, then F†Bx = 0, so that y = Bx ∈ R(B) and y ∈ N(F†) = N(FT) = N(BT); hence y = Bx = 0, i.e., N(A) ⊆ N(B). We also have AT = BT(F†)T. If x ∈ N(AT), then (F†)Tx ∈ N(BT) = N(FT), so that x ∈ N(F) = N(FT) = N(BT). Thus N(AT) ⊆ N(BT), i.e., R(B) ⊆ R(A), and by the rank-nullity theorem it follows that R(A) = R(B). So, by Theorem 3.2, r(A, B) ⊆ KF.
(b) =⇒ (a): Follows from Theorem 3.2.
(b) ⇐⇒ (c): Follows similarly, from Theorem 3.3.
4
On the inclusion h(A, B) ⊆ K
In this section we study the inclusion h(A, B) ⊆ K. The first result, Theorem 4.1, presents a characterization in terms of a certain constrained eigenvalue condition. More importantly, the role of the inclusion in the context of a certain generalized regularity condition for interval matrices J is studied next. A new characterization for the nonnegativity of the Moore-Penrose inverse of certain elements of an interval is proved in Theorem 4.5. We start with a characterization for the inclusion h(A, B) ⊆ K to hold. This result generalizes Observation 3.1 in [3].

Theorem 4.1. Let A, B ∈ Rn×n be such that R(A) = R(B) and N(A) = N(B). Then the following are equivalent:
(a) h(A, B) ⊆ K.
(b) A†Bx = λx with 0 ≠ x ∈ Rn and 0 ≠ λ ∈ R implies λ > 0.

Proof. (a) =⇒ (b): Suppose that (a) holds. If possible, assume that A†Bx = λx holds for some x ≠ 0 and some λ < 0. Then x = (1/λ)A†Bx ∈ R(A†) = R(AT), and Bx = PR(A)Bx = AA†Bx = λAx. If Bx = 0, then x = 0; so Bx ≠ 0 and hence Ax ≠ 0. Set t = −λ/(1 − λ); then t ∈ (0, 1) and C = tA + (1 − t)B ∈ h(A, B). Since A†Ax = x, we get A†Cx = (1/(1 − λ))(−λI + A†B)x = 0, so Cx ∈ N(A†) = N(AT). Also Cx ∈ R(C) ⊆ R(A), so that Cx = 0. Hence 0 ≠ x ∈ N(C) while x ∉ N(A), so that N(C) ≠ N(A) and h(A, B) ⊄ K, a contradiction.

(b) =⇒ (a): Suppose that (b) holds and, if possible, assume that h(A, B) ⊄ K. Since A, B ∈ K, there is then some t ∈ (0, 1) with C = tA + (1 − t)B ∉ K. We have N(A) ⊆ N(C) and R(C) ⊆ R(A); if N(A) = N(C), a rank comparison would give R(C) = R(A) and C ∈ K. So N(A) ≠ N(C): there is some x ∈ N(C) with x ∉ N(A). Let x = x1 + x2, where x1 ∈ N(A) and 0 ≠ x2 ∈ R(AT). Then (tA + (1 − t)B)x2 = 0. Premultiplying by A†, we get (tA†A + (1 − t)A†B)x2 = 0, i.e., A†Bx2 = λx2 with λ = −t/(1 − t) < 0, a contradiction.
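Condition (b) is straightforward to test numerically via the eigenvalues of A†B. For the pair of Example 1.1, for instance:

    import numpy as np

    A = np.array([[0.5,  0.5, 0.0], [-2.0, -1.0, 0.0], [0.0, 0.0, 0.0]])
    B = np.array([[2.0,  1.0, 0.0], [ 2.0,  2.0, 0.0], [0.0, 0.0, 0.0]])

    ev = np.linalg.eigvals(np.linalg.pinv(A) @ B)
    print(np.round(ev, 10))            # 0 and ±2i: no nonzero real eigenvalue
    real_nz = ev[(np.abs(ev.imag) < 1e-10) & (np.abs(ev.real) > 1e-10)].real
    print(bool(np.all(real_nz > 0)))   # True (vacuously): condition (b) holds

Thus, by Theorem 4.1, h(A, B) ⊆ K for this pair, even though Example 1.1 shows that r(A, B) ⊄ K.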
follows that A† Bx2 = λx2 , a contradiction. Let us recall that J = [A, B] = {C ∈ Rn×n : A ≤ C ≤ B}. Then J is said to be regular [12] if every C ∈ J is invertible. There are many characterizations for regularity of J [12]. In [7] the authors introduced the notion of range kernel regularity. Specifically, J is said to be range kernel regular if R(A) = R(B) and N (A) = N (B).
The following result, proved in [7], provides sufficient conditions under which J is range kernel regular. Note that the matrices Jc = ½(B + A) and ∆ = ½(B − A) are referred to as the centre and the radius of the interval matrix J, respectively. Then ∆ ≥ 0, A = Jc − ∆, B = Jc + ∆, and an alternative description of the interval J is given by J = [Jc − ∆, Jc + ∆].

Theorem 4.2. (Theorem 3.8, [7]) Let J = [A, B]. Suppose that N(Jc) = N(A), R(Jc) = R(A) and ρ(|Jc†|∆) < 1. Then K contains the line segment λA + (1 − λ)B, λ ∈ [0, 1], i.e., h(A, B) ⊆ K. In particular, J is range kernel regular.

Next, we show that the following converse is true:

Theorem 4.3. Let J = [A, B]. If h(A, B) ⊆ K, then ρ(Jc†∆) < 1.

Proof. Let us first observe that, since Jc ∈ K, we have R(Jc) = R(A) and N(Jc) = N(A). Suppose, to the contrary, that α = ρ(Jc†∆) ≥ 1. Then there exists 0 ≠ x ∈ Rn such that Jc†∆x = αx, and x ∈ R(Jc†) = R(JcT) = R(AT). Also, since R(∆) ⊆ R(A) = R(Jc), we have (Jc − A)x = ∆x = PR(Jc)∆x = JcJc†∆x = αJcx, so that (Jc − A − αJc)x = 0. Dividing by −α and setting µ = 1 − 1/α ∈ [0, 1), we then have (µJc + (1 − µ)A)x = 0. Set S = µJc + (1 − µ)A; then S = (1 − µ/2)A + (µ/2)B ∈ h(A, B) ⊆ K, and x ∈ N(S) = N(A). But x ∈ R(AT), and thus x = 0, a contradiction.

Under the assumption of range kernel regularity, the following result, showing precisely when the matrices in K have a nonnegative Moore-Penrose inverse, was proved in [7].

Theorem 4.4. (Theorem 3.5, [7]) Let J = [A, B] be range kernel regular. Then the following statements are equivalent:
(a) C† ≥ 0 whenever C ∈ K.
(b) A† ≥ 0 and B† ≥ 0.
(c) B† ≥ 0 and ρ(B†(B − A)) < 1.

Using this, in the next result we provide yet another characterization.
Theorem 4.5. Let J = [A, B] be range kernel regular. Then the following statements are equivalent:
(a) A† ≥ 0 and B† ≥ 0.
(b) h(A, B) ⊆ K and C† ≥ 0 for all C ∈ h(A, B).

Proof. (a) =⇒ (b): Let C = λA + (1 − λ)B for some λ ∈ [0, 1]. Then N(A) ⊆ N(C) and R(C) ⊆ R(A). Also, we have 0 ≤ B†(B − C) ≤ B†(B − A), and ρ(B†(B − A)) < 1 by Theorem 4.4. Thus ρ(B†(B − C)) < 1, and I − B†(B − C) is invertible. Now, I − B†(B − C) = I − PR(BT) + B†C = PN(B) + B†C = D, say. Then BD = B(PN(B) + B†C) = BB†C = C. So B = CD−1, and hence R(B) ⊆ R(C); since R(C) ⊆ R(A) = R(B), we get R(B) = R(C). By the rank-nullity theorem, it follows that N(A) = N(C). This proves that C ∈ K, and by Theorem 4.4, C† ≥ 0.
(b) =⇒ (a): Obvious, since A, B ∈ h(A, B).
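The equivalences of Theorems 4.4 and 4.5 can be exercised on a small toy interval (our own illustrative pair, not drawn from the paper): with A = [1 1; 1 1] and B = 2A, the interval J = [A, B] is range kernel regular and A†, B† ≥ 0.

    import numpy as np

    A = np.array([[1.0, 1.0], [1.0, 1.0]])
    B = 2.0 * A                        # A <= B, R(A) = R(B), N(A) = N(B)
    Bd = np.linalg.pinv(B)
    print(bool(np.all(np.linalg.pinv(A) >= 0)), bool(np.all(Bd >= 0)))  # (b)
    print(max(abs(np.linalg.eigvals(Bd @ (B - A)))))   # 0.5 < 1: condition (c)
    for t in np.linspace(0.0, 1.0, 5):                 # C runs over h(A, B)
        C = t * A + (1 - t) * B
        print(bool(np.all(np.linalg.pinv(C) >= -1e-12)))  # True: C† >= 0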
5
P#-matrices
In this short section, we briefly discuss another extension of a P-matrix, namely a P#-matrix. Here, unlike the case of P†-matrices, where we confined our attention to vectors in the subspace R(AT), we restrict our attention to the subspace R(A); the notion is hence closely related to the group inverse. All the results that we have derived for P†-matrices extend in a straightforward manner to the case of P#-matrices. In this section, we first prove the existence of the group inverse of A when A is a P#-matrix, and then prove Theorem 5.2, mentioned in the introduction as Theorem 1.4.

Definition 5.1. A square matrix A is said to be a P#-matrix if for each nonzero x ∈ R(A) there is an i ∈ {1, 2, ..., n} such that xi(Ax)i > 0. Equivalently, for any x ∈ R(A), the inequalities xi(Ax)i ≤ 0 for i = 1, 2, ..., n imply x = 0.

Theorem 5.1. A is a P#-matrix if and only if A# is a P#-matrix.

Proof. It suffices to show that A# exists; the rest of the proof is similar to that of Theorem 2.3. So, suppose that A is a P#-matrix and let x ∈ N(A) ∩ R(A). Then xi(Ax)i = 0 for each i ∈ {1, 2, ..., n}, and hence x = 0; thus N(A) ∩ R(A) = {0}. It follows that R(A) and N(A) are complementary subspaces, and hence A# exists.
We conclude the paper with a result for the three operators mentioned in the introduction.

Theorem 5.2. Let A ∈ Rn×n. We have the following:
(a) If LA is a P#-operator, then both LA# and A# exist.
(b) If SA is a P#-operator, then both SA# and (I − A)# exist.
(c) If MA is a P#-operator, then both MA# and A# exist.

Proof. (a) First we shall prove that LA# exists. It is enough to prove that R(LA) and N(LA) are complementary subspaces. Let X ∈ R(LA) ∩ N(LA). Then XLA(X) = LA(X)X = 0, and hence X = 0, since LA is a P#-operator. So LA# exists. Next, we prove the existence of A#. Let y ∈ R(A) ∩ N(A); then Ay = 0 and y = Ax for some x ∈ Rn. We must show that y = 0. Set X = yyT. Then AX = AyyT = 0 and XAT = yyT AT = y(Ay)T = 0, so that LA(X) = 0. Set U = ½(xyT + yxT). Then U ∈ S n. Also, AU = ½(AxyT + AyxT) = ½yyT = ½X. Further, UAT = ½(xyT AT + y(Ax)T) = ½(x(Ay)T + yyT) = ½X. Thus X = LA(U), so that X ∈ R(LA). It now follows that X = 0, and so y = 0, as we set out to prove.

(b) The existence of SA# is proved as in (a) above. We prove the existence of (I − A)#. Let y ∈ R(I − A) ∩ N(I − A). Then (I − A)y = 0 and y = (I − A)x for some x ∈ Rn. We must show that y = 0. Set X = yyT. Then SA(X) = yyT − AyyT AT = yyT − Ay(Ay)T = 0, since Ay = y. Set U = ½(xyT + yxT). Then U ∈ S n. Also, SA(U) = ½(xyT + yxT − A(xyT + yxT)AT) = ½(xyT + yxT − (Ax)(Ay)T − (Ay)(Ax)T) = ½(xyT + yxT − (Ax)yT − y(Ax)T) = ½(y(x − Ax)T + (x − Ax)yT) = yyT = X. Thus X = SA(U), so that X ∈ R(SA). It now follows that X = 0, and so y = 0.

(c) Again, we only prove the existence of A#. Let y ∈ R(A) ∩ N(A); then Ay = 0 and y = Ax for some x ∈ Rn. We must show that y = 0. Set X = yyT. Then MA(X) = AyyT AT = 0. Set U = xxT. Then U ∈ S n. Also, MA(U) = AxxT AT = yyT = X. Thus X = MA(U), so that X ∈ R(MA). It now follows that X = 0, and so y = 0.
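The rank computations behind Theorem 5.2 can also be probed numerically. Using vec(MA(X)) = (A ⊗ A) vec(X) (here on all of Rn×n rather than on S n, which suffices for a rank comparison), the following sketch checks Ind(A) = 1 and Ind(MA) = 1 for a sample index-one singular matrix of our own choosing:

    import numpy as np

    A = np.array([[2.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [1.0, 0.0, 3.0]])
    rank = np.linalg.matrix_rank
    print(rank(A @ A) == rank(A))   # True: Ind(A) = 1, so A# exists

    M = np.kron(A, A)               # matrix of X -> A X A^T in the vec basis
    print(rank(M @ M) == rank(M))   # True: the multiplication operator has index one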
6
Acknowledgements
The authors thank Prof. T. Parthasarathy, Prof. M. Seetharama Gowda, Prof. T. Szulc, Prof. J. Tao and Prof. P. Veeramani for comments and suggestions on an earlier version of the paper. The first author thanks the University Grants Commission (UGC) for financial support in the form of a Senior Research Fellowship. The authors are grateful to the referees and the handling editor for their careful reading and helpful comments and suggestions that improved the quality of the paper.
References

[1] Ben-Israel, A., and T.N.E. Greville, Generalized Inverses: Theory and Applications, Pure and Applied Mathematics, Wiley-Interscience, New York, 2003.
[2] Berman, A., and R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, SIAM, 1994.
[3] Johnson, C.R., and M.J. Tsatsomeros, Convex sets of nonsingular and P-matrices, Lin. Mult. Alg., 38 (1995), 233-239.
[4] Cottle, R.W., J.S. Pang, and R.E. Stone, The Linear Complementarity Problem, Academic, New York, 1992.
[5] Fiedler, M., and V. Ptak, On matrices with non-positive off-diagonal elements and positive principal minors, Czech. J. Math., 12 (1962), 382-400.
[6] Kuo, I-wen, The Moore-Penrose inverses of singular M-matrices, Lin. Alg. Appl., 17 (1977), 1-14.
[7] Rajesh Kannan, M., and K.C. Sivakumar, Moore-Penrose inverse positivity of interval matrices, Lin. Alg. Appl., 436 (2012), 571-578.
[8] Mohan, S.R., M. Neumann, and K.G. Ramamurthy, Nonnegativity of principal minors of generalized inverses of M-matrices, Lin. Alg. Appl., 58 (1984), 247-259.
[9] Ramamurthy, K.G., and S.R. Mohan, Nonnegativity of principal minors of generalized inverses of P0-matrices, Lin. Alg. Appl., 65 (1985), 125-131.
[10] Robert, P., The group inverse of a linear transformation, J. Math. Anal. Appl., 22 (1968), 658-669.
[11] Rohn, J., A theorem on P-matrices, Lin. Mult. Alg., 30 (1991), 209-211.
[12] Rohn, J., Systems of linear interval equations, Lin. Alg. Appl., 126 (1989), 39-78.
[13] Gowda, M.S., and T. Parthasarathy, Complementarity forms of theorems of Lyapunov and Stein, and related results, Lin. Alg. Appl., 320 (2000), 131-144.