TAMKANG JOURNAL OF MATHEMATICS Volume 21, Number 2; Summer 1990
LINEAR TRANSFORMATIONS WHICH MAP THE CLASS OF INVERSE M-MATRICES ONTO ITSELF

BIT-SHUN TAM* AND PO-HONG LIOU
Abstract. The purpose of this paper is to characterize those linear transformations on the space of $n \times n$ real matrices which map the class of $n \times n$ inverse M-matrices (or the closure of this class) onto itself. As a by-product of our approach, we also obtain a sufficient condition for an inverse M-matrix (resp. M-matrix) to have all positive powers being inverse M-matrices (resp. M-matrices).
1. Introduction.
In [1] Berman, Hershkowitz and Johnson characterize those linear transformations $\mathcal{L}$ on $R^{n,n}$, the space of $n \times n$ real matrices, which map a class in $R^{n,n}$ onto itself. The classes studied generalize the notion of positivity to $n \times n$ matrices and include: matrices with nonnegative principal minors, M-matrices, totally nonnegative matrices, D-stable matrices, etc. As shown in their paper, the treatment of each class is somewhat different, but there are strong similarities among the proofs for each class. In this paper we consider the same problem for another class of positivity, namely $M_n^{-1}$, the class of $n \times n$ inverse M-matrices. We will show that the set of all linear transformations $\mathcal{L}$ on $R^{n,n}$ which map the class $M_n^{-1}$ (or its closure $\overline{M_n^{-1}}$) onto itself is the same as that for $M_n$, the class of $n \times n$ M-matrices (see Theorem 4.1). We employ the same strategy of proof as that given by Berman, Hershkowitz and Johnson [1], while differing in details. One chief difference is that in our treatment we rely heavily on the graph-theoretic properties of the matrices concerned. We need results which can tell from the directed graph of a nonnegative matrix (or equivalently, from its zero and nonzero pattern) whether the matrix is in the class $\overline{M_n^{-1}}$. In Section 3, we give a sufficient condition for a matrix in the class $M_n^{-1}$ (or $\overline{M_n^{-1}}$, $M_n$) to have all positive powers in the same class. The result has independent interest of its own. We need (part of) this result in our proof of a characterization of linear transformations on the said classes of matrices in Section 4. We will fix our notation and terminology in Section 2. Some basic properties of matrices in the class $\overline{M_n^{-1}}$ will also be given.

Received April 7, 1989.
*The research of this author was partially supported by the National Science Council of the Republic of China.
2. Preliminaries
Unless otherwise stated, all matrices under consideration are square real matrices. By an M-matrix we mean a nonsingular M-matrix, that is, a matrix of the form $\lambda I - P$, where $P$ is an (entrywise) nonnegative matrix and $\lambda > \rho(P)$ (the spectral radius of $P$). There is an extensive literature on characterizations of M-matrices among Z-matrices (i.e. matrices with nonpositive off-diagonal entries). See Berman and Plemmons [3, Chapter 6] for the details. We state only two characterizations which will be of particular use to us. If $A$ is a Z-matrix, then each of the following is equivalent to the statement that $A$ is an M-matrix: (1) $A$ is inverse-nonnegative; that is, $A^{-1}$ exists and is nonnegative; and (2) $A$ is a P-matrix; that is, all of its principal minors are positive.

The inverse of an M-matrix is called an inverse M-matrix. Clearly, every inverse M-matrix is nonnegative. The classes of $n \times n$ M-matrices, inverse M-matrices, and limits of inverse M-matrices will be denoted respectively by $M_n$, $M_n^{-1}$ and $\overline{M_n^{-1}}$. For an excellent survey article on inverse M-matrices, see Johnson [8]. It is well-known (see [8]) that the class $M_n^{-1}$ is closed under each of the following operations: transposition, permutation similarity, addition of a nonnegative diagonal matrix, multiplication by a positive diagonal matrix, and extraction of principal submatrices. By limiting arguments, one readily shows that the class $\overline{M_n^{-1}}$ is also closed under each of the above operations. It is also known that if $A \in M_n^{-1}$, then $A$ is a P-matrix, and $\operatorname{adj} A$, the (classical) adjoint of $A$, is a Z-matrix. By limiting arguments, one also readily shows that if $A \in \overline{M_n^{-1}}$, then $A$ is a $P_0$-matrix (i.e. all of its principal minors are nonnegative) and $\operatorname{adj} A$ is a Z-matrix.
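The following small sketch (our addition, not part of the original paper) illustrates characterization (1) numerically in Python/NumPy; the helper names `is_z_matrix`, `is_m_matrix` and `is_inverse_m_matrix` are ad hoc.

```python
import numpy as np

def is_z_matrix(A, tol=1e-12):
    """All off-diagonal entries are <= 0 (up to a tolerance)."""
    off = A - np.diag(np.diag(A))
    return bool(np.all(off <= tol))

def is_m_matrix(A, tol=1e-12):
    """A Z-matrix is a (nonsingular) M-matrix iff A^{-1} exists and is
    entrywise nonnegative -- characterization (1) above."""
    if not is_z_matrix(A, tol):
        return False
    try:
        Ainv = np.linalg.inv(A)
    except np.linalg.LinAlgError:
        return False
    return bool(np.all(Ainv >= -tol))

def is_inverse_m_matrix(A, tol=1e-12):
    """A is an inverse M-matrix iff A is nonsingular and A^{-1} is an M-matrix."""
    try:
        Ainv = np.linalg.inv(A)
    except np.linalg.LinAlgError:
        return False
    return is_m_matrix(Ainv, tol)

# lambda*I - P with lambda > rho(P) is an M-matrix; its inverse is an
# inverse M-matrix and hence entrywise nonnegative.
P = np.array([[0.0, 1.0], [2.0, 0.0]])
lam = 2.0                                      # rho(P) = sqrt(2) < 2
A = lam * np.eye(2) - P
print(is_m_matrix(A))                          # True
print(is_inverse_m_matrix(np.linalg.inv(A)))   # True
```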
Observation 2.1. If $A \in \overline{M_n^{-1}}$ and if $A$ is nonsingular, then $A \in M_n^{-1}$.

Proof. From the preceding discussion, when the hypotheses are satisfied, we have $\det A > 0$ and $\operatorname{adj} A$ is a Z-matrix; hence $A^{-1}$ is a Z-matrix. But $A$ is nonnegative, so it is an inverse M-matrix.

We assume elementary knowledge of graph theory. To fix notation and terminology, we give some definitions. Let $D$ be a directed graph with vertex set $\{1, \ldots, n\}$. We denote by $(i,j)$ the arc with initial vertex $i$ and terminal vertex $j$; if $i = j$, we call the arc a loop at $i$. A path from $i$ to $j$ is a sequence of arcs $(i_0, i_1), \ldots, (i_{m-1}, i_m)$ in $D$ with $i = i_0$ and $j = i_m$. If either the vertices $i_0, \ldots, i_m$ are distinct, or the vertices $i_0, \ldots, i_{m-1}$ are distinct and $i_0 = i_m$, then the path is said to be simple; in the latter case, the path is also called a circuit. The directed graph $D$ is said to be transitive if whenever $(i,j)$ and $(j,k)$ are arcs in $D$, $(i,k)$ is also an arc in $D$. For any $A \in R^{n,n}$, we denote its associated directed graph by $D(A)$. The vertex set of $D(A)$ is $\{1, \ldots, n\}$, and $(i,j)$ is an arc of $D(A)$ if and only if $a_{ij} \neq 0$.
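As a small illustration (ours, not the authors'), the directed graph $D(A)$ and its transitivity can be read off the zero pattern of $A$; the function names below are our own.

```python
import numpy as np

def arcs(A, tol=1e-12):
    """Arc set of D(A): (i, j) is an arc iff a_ij != 0."""
    n = A.shape[0]
    return {(i, j) for i in range(n) for j in range(n) if abs(A[i, j]) > tol}

def is_transitive(A, tol=1e-12):
    """D(A) is transitive iff (i, j) and (j, k) being arcs implies (i, k) is an arc."""
    E = arcs(A, tol)
    return all((i, k) in E for (i, j) in E for (jj, k) in E if j == jj)

# Example: the inverse of a triangular M-matrix; its directed graph is
# transitive (see the discussion of transitivity later in this section).
A = np.linalg.inv(np.array([[3.0, -1.0, 0.0],
                            [0.0, 3.0, -1.0],
                            [0.0, 0.0, 3.0]]))
print(is_transitive(A))   # True
```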
The class $\overline{M_n^{-1}}$ was first studied by Fiedler, Johnson and Markham. In their paper [4, Theorem 2] they gave several equivalent conditions for a matrix to be in $\overline{M_n^{-1}}$. We give below one of their equivalent conditions which is of particular use to us, and offer
an independent proof. For results on the explicit form of a matrix in the class $\overline{M_n^{-1}}$, see Fiedler and Markham [5].

Lemma 2.2. Let $A$ be a nonnegative matrix. Then $A \in \overline{M_n^{-1}}$ if and only if $A + \alpha I_n \in M_n^{-1}$ for all $\alpha > 0$, where $I_n$ denotes the $n \times n$ identity matrix.

Proof. We need only consider the "only if" part. Suppose that there exists a sequence of matrices $(A_k)_{k \in \mathbb{N}}$ in $M_n^{-1}$ converging to $A$. Let $\alpha > 0$ be given. Since $A + \alpha I_n$ is a nonnegative matrix, in order to prove that this matrix is an inverse M-matrix, it suffices to show that $(A + \alpha I_n)^{-1}$ is a Z-matrix. From the expansion $\det(A + \alpha I_n) = \alpha^n + \sum_{k=1}^{n} c_k \alpha^{n-k}$, where $c_k$ is the sum of all principal minors of $A$ of order $k$, and the fact that $A$ is a $P_0$-matrix, it is clear that $\det(A + \alpha I_n) > 0$. Since the class $M_n^{-1}$ is closed under the addition of a nonnegative diagonal matrix, each $A_k + \alpha I_n$ is an inverse M-matrix and hence $\operatorname{adj}(A_k + \alpha I_n)$ is a Z-matrix. Passing to the limit, we deduce that $\operatorname{adj}(A + \alpha I_n)$ is a Z-matrix. But we have shown that $\det(A + \alpha I_n) > 0$; hence $(A + \alpha I_n)^{-1}$ is a Z-matrix. The proof is complete.

It is well-known that the directed graph of an inverse M-matrix is always transitive. Indeed, if $A = (\lambda I - P)^{-1}$, where $P$ is nonnegative and $\lambda > \rho(P)$, then $A = \sum_{k=0}^{\infty} P^k / \lambda^{k+1}$, and hence $D(A)$ is equal to the transitive closure of $D(P)$, because the $(i,j)$ entry of $P^k$ is positive if and only if there is a path of length $k$ from $i$ to $j$. Since the transitivity property of a graph is not affected by the addition or omission of loops, by the above lemma it is clear that the directed graph of every matrix in the class $\overline{M_n^{-1}}$ is also transitive.

3. Power Invariant Inverse M-matrices

Theorem 3.1. Let $A$ be an $n \times n$ matrix in the class $M_n^{-1}$ (resp. $\overline{M_n^{-1}}$, $M_n$). Suppose that every simple path of length two in the directed graph $D(A)$ is a circuit. Then for all positive integers $k$, $A^k \in M_n^{-1}$ (resp. $\overline{M_n^{-1}}$, $M_n$).

Proof. First, observe that if $A$ is a $2 \times 2$ M-matrix, then for all positive integers $k$, $A^k$ is a Z-matrix as well as a P-matrix, and hence is an M-matrix. It follows that the classes $M_2^{-1}$ and $\overline{M_2^{-1}}$ are also closed under taking positive powers.

Let $A \in M_n^{-1}$ be such that every simple path of length two in the directed graph $D(A)$ is a circuit. Then for any $i, j, k$ with $j \neq i, k$, if $(i,j)$ and $(j,k)$ are arcs of $D(A)$, then $i = k$. It follows that $A$ is permutation similar to a direct sum of $2 \times 2$ matrices in $M_2^{-1}$ and a matrix $B$ of the form
$$\begin{pmatrix} D_1 & B_{12} \\ 0 & D_2 \end{pmatrix},$$
where $B_{12}$ is a nonnegative matrix and $D_1, D_2$ are positive diagonal matrices. In view of the beginning remark, it suffices to show that all positive powers of the matrix $B$ are inverse M-matrices. Note that the inverse of $B$ is a Z-matrix of the form
$$\begin{pmatrix} D_1^{-1} & C_{12} \\ 0 & D_2^{-1} \end{pmatrix},$$
where $C_{12} = -D_1^{-1} B_{12} D_2^{-1}$ is nonpositive. By direct calculations one easily shows that all positive powers of the matrix $B$ are still of
the same form as $B$, and hence their inverses are necessarily Z-matrices. It follows that all positive powers of $B$ are inverse M-matrices.

Now let $A$ be an $n \times n$ matrix in the class $\overline{M_n^{-1}}$ whose directed graph $D(A)$ satisfies the hypothesis of the theorem. Then by Lemma 2.2, for every $\alpha > 0$, $A + \alpha I_n$ is an inverse M-matrix, whose directed graph also satisfies the hypothesis of the theorem. Since we have proved our theorem for matrices in the class $M_n^{-1}$, for each positive integer $k$, $(A + \alpha I_n)^k \in M_n^{-1}$, where $\alpha$ is any positive number. Letting $\alpha$ approach zero, we obtain $A^k \in \overline{M_n^{-1}}$.

Finally, let $A$ be an $n \times n$ M-matrix which satisfies the given hypothesis. Then $A^{-1}$ is an inverse M-matrix and, as we have shown before, the directed graph $D(A^{-1})$ is equal to the transitive closure of $D(A)$. From the given assumptions on $D(A)$, it follows that $D(A) = D(A^{-1})$. So $(A^{-1})^k$ is an inverse M-matrix for each positive integer $k$. Hence $A^k$ is an M-matrix for each positive integer $k$.
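A quick numerical sanity check of Theorem 3.1 (our sketch, not the authors') on a block matrix $B$ of the type occurring in the proof; the helper `is_inv_m` is ad hoc and repeats the compact test from the sketch in Section 2.

```python
import numpy as np

def is_inv_m(A, tol=1e-12):
    """Nonnegative A is an inverse M-matrix iff A is nonsingular and
    A^{-1} is a Z-matrix (cf. Observation 2.1)."""
    Ainv = np.linalg.inv(A)
    off = Ainv - np.diag(np.diag(Ainv))
    return bool(np.all(A >= -tol) and np.all(off <= tol))

# B has the block form (D1, B12; 0, D2) from the proof: positive diagonal
# blocks and a nonnegative off-diagonal block, so every simple path of
# length two in D(B) is (vacuously) a circuit.
B = np.array([[2.0, 0.0, 1.0, 3.0],
              [0.0, 1.0, 2.0, 0.5],
              [0.0, 0.0, 4.0, 0.0],
              [0.0, 0.0, 0.0, 5.0]])

for k in range(1, 6):
    assert is_inv_m(np.linalg.matrix_power(B, k))
print("B, B^2, ..., B^5 are all inverse M-matrices")
```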
The following result will be of particular use to us.

Theorem 3.2. Let $A$ be an $n \times n$ nonnegative matrix. Suppose that in the directed graph $D(A)$ of $A$, no vertex has both positive in-degree and positive out-degree, where the degrees contributed by loops are not counted. Then $A^k \in \overline{M_n^{-1}}$ for every positive integer $k$.

Proof. Let $A$ be an $n \times n$ nonnegative matrix whose directed graph satisfies the given hypothesis. Then $A$ is permutation similar to a matrix of the form
$$\begin{pmatrix} D_1 & A_{12} \\ 0 & D_2 \end{pmatrix},$$
where $A_{12}$ is a nonnegative matrix and $D_1, D_2$ are nonnegative diagonal matrices. As the proof of Theorem 3.1 (for the class $M_n^{-1}$) shows, for any $\alpha > 0$, all positive powers of the matrix $A + \alpha I_n$ are inverse M-matrices. Letting $\alpha$ approach zero, we deduce that all powers of $A$ are in the class $\overline{M_n^{-1}}$.
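The degree condition of Theorem 3.2 is easy to test from the zero pattern alone; here is a small sketch (our addition), ignoring loops as the theorem requires.

```python
import numpy as np

def satisfies_thm_3_2(A, tol=1e-12):
    """No vertex of D(A) has both positive in-degree and positive
    out-degree, where loops are not counted."""
    B = (np.abs(A) > tol).astype(int)
    np.fill_diagonal(B, 0)                     # discard loops
    out_deg, in_deg = B.sum(axis=1), B.sum(axis=0)
    return bool(np.all((out_deg == 0) | (in_deg == 0)))

E13 = np.zeros((3, 3)); E13[0, 2] = 1.0        # the matrix E_{13} of Example 3.4 (0-based indices)
print(satisfies_thm_3_2(E13))                  # True
print(satisfies_thm_3_2(E13 + E13.T))          # False: vertices 1 and 3 each get an in-arc and an out-arc
```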
Here we want to make two remarks. First, it is interesting to compare the following result of Friedland, Hershkowitz and H. Schneider [6, Theorem 2.18] (after reformulation) with Theorem 3.1: Let $S$ be an $n \times n$ $\{-1, 0, 1\}$-matrix. Then every $n \times n$ real matrix with sign pattern matrix $S$ is a ZM-matrix (that is, all of its positive powers are Z-matrices) if and only if $S$ is a Z-matrix and every simple path of length two in $D(S)$ is a circuit. Indeed, we could have used this result to deduce Theorem 3.1. But to make the paper more self-contained, we have chosen to offer an independent alternative proof. Second, the condition given in Theorem 3.1 is sufficient but not necessary for an M-matrix to have all positive powers being M-matrices. As an illustration, we give an example below.

Example 3.3. Let
$$A = \begin{pmatrix} 2 & 0 & 0 \\ -1 & 1 & 0 \\ -1 & -1 & 1 \end{pmatrix}.$$
Then $A$ is a Z-matrix as well as a P-matrix and hence is an M-matrix. As can be readily seen, the directed graph of $A$ does not satisfy the hypothesis of Theorem 3.1. However, an easy induction shows that for each positive integer $k$,
$$A^k = \begin{pmatrix} 2^k & 0 & 0 \\ 1 - 2^k & 1 & 0 \\ -k & -k & 1 \end{pmatrix},$$
which is a Z-matrix whose inverse $(A^{-1})^k$ is nonnegative. Therefore, $A^k$ is an M-matrix for each positive integer $k$.
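A direct numerical check of Example 3.3 (our sketch): the powers of $A$ stay Z-matrices with nonnegative inverses, hence M-matrices, even though the path condition of Theorem 3.1 fails.

```python
import numpy as np

A = np.array([[ 2.0,  0.0, 0.0],
              [-1.0,  1.0, 0.0],
              [-1.0, -1.0, 1.0]])

for k in range(1, 8):
    Ak = np.linalg.matrix_power(A, k)
    off = Ak - np.diag(np.diag(Ak))
    assert np.all(off <= 0)                        # A^k is a Z-matrix
    assert np.all(np.linalg.inv(Ak) >= -1e-12)     # and inverse-nonnegative, hence an M-matrix
print("A, A^2, ..., A^7 are all M-matrices")
```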
Now we give examples of matrices in the class $\overline{M_n^{-1}}$, and some not in this class.

Example 3.4. We denote by $E_{ij}$ the $n \times n$ matrix with 1 at its $(i,j)$th entry and zero elsewhere. Then in the directed graph $D(E_{ij})$ there is only one arc, namely $(i,j)$. Clearly the hypothesis of Theorem 3.2 is satisfied, and hence $E_{ij} \in \overline{M_n^{-1}}$. For the same reason, we have $E_{ij} + E_{kw} \in \overline{M_n^{-1}}$, provided that $j \neq i, k$ and $w \neq i, k$.
Example 3.5. For $i \neq j$, the matrix $E_{ij} + E_{ji} \notin \overline{M_n^{-1}}$, because its principal minor determined by the indices $i, j$ is negative. However, the matrix $E_{ij} + E_{ji} + E_{ii} + E_{jj} \in \overline{M_n^{-1}}$, provided that $i \neq j$, because it is permutation similar to the direct sum of a zero matrix and the matrix $\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$. And by Lemma 2.2, the latter matrix can be shown to belong to $\overline{M_2^{-1}}$.

Example 3.6. Let $i, j, k$ be distinct indices. Then the matrix $E_{ij} + E_{jk} + E_{ik} \notin \overline{M_n^{-1}}$, even though its directed graph is transitive. This is because its principal submatrix determined by the indices $i, j, k$ is permutation similar to the matrix
$$\begin{pmatrix} 0 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix},$$
which has a positive $(3,1)$-cofactor and hence is not in $\overline{M_3^{-1}}$. However, the matrix $E_{ij} + E_{jk} + E_{ik} + E_{jj} \in \overline{M_n^{-1}}$, as can be shown by Lemma 2.2.
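A numerical illustration of Example 3.6 (our sketch): adding $\alpha I$ and inverting, as in Lemma 2.2, separates the two patterns; the helper name `inv_is_z` is ad hoc.

```python
import numpy as np

def inv_is_z(A, tol=1e-12):
    """True iff A^{-1} exists and is a Z-matrix (nonpositive off-diagonal)."""
    Ainv = np.linalg.inv(A)
    off = Ainv - np.diag(np.diag(Ainv))
    return bool(np.all(off <= tol))

I3 = np.eye(3)
T = np.array([[0., 1., 1.],      # E_12 + E_23 + E_13, as in Example 3.6
              [0., 0., 1.],
              [0., 0., 0.]])
E22 = np.zeros((3, 3)); E22[1, 1] = 1.0

for alpha in (0.1, 0.5, 1.0, 10.0):
    print(alpha, inv_is_z(T + alpha * I3), inv_is_z(T + E22 + alpha * I3))
# For small alpha the first test fails (the inverse has a positive
# off-diagonal entry), so E_12 + E_23 + E_13 is not in the closure;
# the second passes for every alpha > 0, consistent with Lemma 2.2.
```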
4. Linear Transformations on the Class of Inverse M-matrices

Theorem 4.1. Let $\mathcal{L} : R^{n,n} \to R^{n,n}$ be a linear transformation. Then $\mathcal{L}$ maps the class $M_n^{-1}$ onto itself if and only if $\mathcal{L}$ is a composition of one or more of the following types of transformations:

(i) positive diagonal equivalence: $A \to FAE$, in which $E$ and $F$ are diagonal matrices with positive main diagonal entries;
(ii) transposition: $A \to A^T$, where $A^T$ denotes the transpose of $A$; and
(iii) permutation similarity: $A \to P^T A P$, in which $P$ is a permutation matrix.

Moreover, the theorem also holds if $M_n^{-1}$ is replaced by $\overline{M_n^{-1}}$.

Proof. Since the class $M_n^{-1}$ is closed under the (linear) operations of multiplication by a positive diagonal matrix, taking transposes and permutation similarity, it is readily seen that if $\mathcal{L}$ is a linear transformation which is a composition of one or more types of transformations (i), (ii) or (iii), then $\mathcal{L}(M_n^{-1}) = M_n^{-1}$.

Next, we show that if $\mathcal{L}$ satisfies $\mathcal{L}(M_n^{-1}) = M_n^{-1}$, then we also have $\mathcal{L}(\overline{M_n^{-1}}) = \overline{M_n^{-1}}$. This is because $\mathcal{L}$ then maps $\operatorname{span} M_n^{-1}$ isomorphically (and hence homeomorphically) onto itself, and as $\overline{M_n^{-1}} \subset \operatorname{span} M_n^{-1}$, we have $\mathcal{L}(\overline{M_n^{-1}}) = \overline{\mathcal{L}(M_n^{-1})} = \overline{M_n^{-1}}$.

To complete the proof, it remains to show that if $\mathcal{L}(\overline{M_n^{-1}}) = \overline{M_n^{-1}}$ then $\mathcal{L}$ is a composition of the types of transformations specified in the theorem. Since the proof of this is fairly long, we break down the argument into several steps.
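Before the steps, here is a small numerical illustration (ours, not part of the original proof) of the easy direction just discussed: applying (i), (ii) and (iii) to an inverse M-matrix again gives an inverse M-matrix. The helper `is_inv_m` repeats the compact test used in the earlier sketches.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_inv_m(A, tol=1e-10):
    """Nonnegative A is an inverse M-matrix iff A^{-1} exists and is a Z-matrix."""
    Ainv = np.linalg.inv(A)
    off = Ainv - np.diag(np.diag(Ainv))
    return bool(np.all(A >= -tol) and np.all(off <= tol))

n = 4
P = rng.random((n, n))
A = np.linalg.inv(2.0 * n * np.eye(n) - P)     # a random inverse M-matrix

E = np.diag(rng.random(n) + 0.1)               # positive diagonal matrices
F = np.diag(rng.random(n) + 0.1)
Pmat = np.eye(n)[rng.permutation(n)]           # a permutation matrix

for image in (F @ A @ E, A.T, Pmat.T @ A @ Pmat):   # types (i), (ii), (iii)
    assert is_inv_m(image)
print("all three images are again inverse M-matrices")
```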
Step 1. $\mathcal{L}$ is nonsingular.

This follows from the fact that $\overline{M_n^{-1}}$ contains a basis of $R^{n,n}$, namely $\{E_{ij} : 1 \le i, j \le n\}$ (see Example 3.4), and $\mathcal{L}$ maps the class $\overline{M_n^{-1}}$ onto itself.

Step 2. $\mathcal{L}$ carries each $E_{ij}$ to a positive multiple of some $E_{kw}$, where $1 \le i, j, k, w \le n$, and for distinct $E_{ij}$ the corresponding $E_{kw}$ are distinct (up to multiples).

Let $L$ be a representative matrix of $\mathcal{L}$ relative to the basis $\{E_{ij} : 1 \le i, j \le n\}$ (arranged in some order). Since matrices in $\overline{M_n^{-1}}$ are nonnegative, each $\mathcal{L}(E_{ij})$ is a nonnegative matrix; hence $L$ is also a nonnegative matrix. Similarly, from $\mathcal{L}^{-1}(\overline{M_n^{-1}}) = \overline{M_n^{-1}}$ we can also deduce that $L^{-1}$ is a nonnegative matrix. As is well-known (see, for instance, Berman and Plemmons [3, Chapter 3, Theorem 4.6]), a nonsingular nonnegative matrix with a nonnegative inverse is necessarily monomial (that is, the product of a permutation matrix and a positive diagonal matrix). Thus our assertion follows.

Step 3. $\mathcal{L}$ carries each $E_{ii}$ to a positive multiple of some $E_{jj}$. Hence $\mathcal{L}$ also carries each $E_{ij}$, $i \neq j$, to a positive multiple of some $E_{kw}$, $k \neq w$.

Suppose that for some $i$, $1 \le i \le n$, $\mathcal{L}(E_{ii}) = aE_{kw}$ for some $k, w$, $1 \le k, w \le n$, $k \neq w$, where $a > 0$. Since $\overline{M_n^{-1}}$ is closed under addition of a nonnegative diagonal matrix and $\mathcal{L}^{-1}(\overline{M_n^{-1}}) = \overline{M_n^{-1}}$, we have $\mathcal{L}^{-1}(E_{wk}) + E_{ii} \in \overline{M_n^{-1}}$ and hence $\mathcal{L}[\mathcal{L}^{-1}(E_{wk}) + E_{ii}] = E_{wk} + aE_{kw} \in \overline{M_n^{-1}}$, which is a contradiction, since the principal minor of this matrix determined by the indices $w, k$ is negative.

Step 4. If $\mathcal{L}(E_{ij}) = aE_{pq}$, where $i \neq j$, $p \neq q$ and $a > 0$, then $\mathcal{L}(E_{ji}) = bE_{qp}$ for some $b > 0$.

Assume that the contrary holds. Then for some $(k, w) \neq (j, i)$, $k \neq w$, we have $\mathcal{L}(E_{kw}) = bE_{qp}$, where $b > 0$. We first consider the case when $k \neq j$ and $w \neq i$. Then $E_{ij} + E_{kw} \in \overline{M_n^{-1}}$ (see Example 3.4), which is a contradiction, since $\mathcal{L}(E_{ij} + E_{kw}) = aE_{pq} + bE_{qp} \notin \overline{M_n^{-1}}$. Next, we consider the case when $k = j$ and $w \neq i$. (The remaining case, when $k \neq j$ and $w = i$, can be treated similarly.) Then the indices $i, j, w$ are distinct and we have $B \in \overline{M_n^{-1}}$, where $B$ is the matrix $E_{ij} + E_{jw} + E_{iw} + E_{jj}$ (see Example 3.6). Now $\mathcal{L}(B) = aE_{pq} + bE_{qp} + \mathcal{L}(E_{iw}) + \mathcal{L}(E_{jj})$. The principal minor of $\mathcal{L}(B)$ which is determined by the indices $p, q$ is negative: the off-diagonal entries of the submatrix are $a$ and $b$, and it has at most one nonzero main diagonal entry. So again we arrive at a contradiction.

Step 5. If $\mathcal{L}(E_{ii}) = aE_{ss}$ and $\mathcal{L}(E_{jj}) = bE_{tt}$, where $i \neq j$ and $a, b > 0$, then either $\mathcal{L}(E_{ij}) = cE_{st}$ and $\mathcal{L}(E_{ji}) = dE_{ts}$, or $\mathcal{L}(E_{ij}) = cE_{ts}$ and $\mathcal{L}(E_{ji}) = dE_{st}$, where $c, d > 0$.

Assume that the contrary holds. Then there exist $p, q$, $p \neq q$, $\{p, q\} \neq \{i, j\}$, such that $\mathcal{L}(E_{pq}) = cE_{st}$ and $\mathcal{L}(E_{qp}) = dE_{ts}$, where $c, d > 0$. Choose some $\alpha > 0$ such that $\alpha^2 cd > ab$. There are two cases to be considered.

First, assume that $\{p, q\} \cap \{i, j\} = \emptyset$. Note that the matrix $B = E_{ii} + E_{jj} + \alpha E_{qp} + \alpha E_{pq} + \alpha E_{pp} + \alpha E_{qq}$ belongs to $\overline{M_n^{-1}}$, since it is permutation similar to a direct sum of the matrices $\begin{pmatrix} \alpha & \alpha \\ \alpha & \alpha \end{pmatrix}$, $I_2$ and a zero matrix. But $\mathcal{L}(B) \notin \overline{M_n^{-1}}$, since the principal minor of this matrix determined by the indices $s, t$ is $ab - \alpha^2 cd < 0$.
Now consider the remaining case when $\{p, q\} \cap \{i, j\}$ contains exactly one element, say $p = i$. Let $C$ denote the matrix $E_{ii} + E_{jj} + \alpha^2 E_{qq} + \alpha(E_{iq} + E_{qi})$. Then $C$ is permutation similar to the direct sum of a zero matrix and the matrix
$$\begin{pmatrix} 1 & 0 & \alpha \\ 0 & 1 & 0 \\ \alpha & 0 & \alpha^2 \end{pmatrix}.$$
By Lemma 2.2, one readily shows that the latter matrix belongs to $\overline{M_3^{-1}}$. Hence $C \in \overline{M_n^{-1}}$, which is a contradiction, since $\mathcal{L}(C) \notin \overline{M_n^{-1}}$: the principal submatrix of $\mathcal{L}(C)$ determined by the indices $s, t$ is permutation similar to the matrix $\begin{pmatrix} a & \alpha c \\ \alpha d & b \end{pmatrix}$, which has a negative determinant.
Step 6. The theorem is true for $n = 2$.

By applying a permutation similarity and/or a transposition, we may assume without loss of generality that $\mathcal{L}(E_{11}) = aE_{11}$, $\mathcal{L}(E_{22}) = bE_{22}$, $\mathcal{L}(E_{12}) = cE_{12}$ and $\mathcal{L}(E_{21}) = dE_{21}$, where $a, b, c, d > 0$. Now $\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \in \overline{M_2^{-1}}$ and $\mathcal{L}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} a & c \\ d & b \end{pmatrix} \in \overline{M_2^{-1}}$. Hence $ab \ge cd > 0$. Similarly, from $\mathcal{L}^{-1}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} a^{-1} & c^{-1} \\ d^{-1} & b^{-1} \end{pmatrix}$ we also obtain $a^{-1}b^{-1} - c^{-1}d^{-1} \ge 0$. It follows that $ab = cd$. So for any $B = (b_{ij}) \in R^{2,2}$, we may write
$$\mathcal{L}(B) = \begin{pmatrix} ab_{11} & cb_{12} \\ db_{21} & bb_{22} \end{pmatrix} = \begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix} B \begin{pmatrix} 1 & 0 \\ 0 & c/a \end{pmatrix}.$$
Hence the theorem holds in this case.

We now proceed to prove the theorem by induction on $n$. Let $n \ge 3$ and assume that the theorem holds for matrices of order less than $n$.

Step 7. There is no loss of generality in assuming $\mathcal{L}(E_{ij}) = E_{ij}$ for $1 \le i, j \le n-1$.

Note that the linear transformation $\mathcal{L}$ satisfies $\mathcal{L}(\overline{M_n^{-1}}) = \overline{M_n^{-1}}$ if and only if its composition with a permutation similarity also has this property. Thus without loss of generality we may assume that $\mathcal{L}$ carries each $E_{ii}$ to a positive multiple of itself. Denote the set $\{1, 2, \ldots, n-1\}$ by $\alpha$. Let $\mathcal{L}_\alpha$ denote the reduction of $\mathcal{L}$ to $R^{(n-1),(n-1)}$; that is, the linear transformation which maps $A[\alpha]$ to $\mathcal{L}(A)[\alpha]$, where we use $A[\alpha]$ to denote the principal submatrix of $A$ with row and column indices taken from $\alpha$. It is not difficult to see that $\mathcal{L}_\alpha(\overline{M_{n-1}^{-1}}) = \overline{M_{n-1}^{-1}}$. So by the induction assumption $\mathcal{L}_\alpha$ is a composition of the types of transformations specified in the theorem. Applying appropriate transformations of these types to $\mathcal{L}$, we may assume that $\mathcal{L}_\alpha$ is the identity transformation; or in other words, $\mathcal{L}(E_{ij}) = E_{ij}$ for $1 \le i, j \le n-1$.

Step 8. For each $i$, $1 \le i \le n-1$, $\mathcal{L}(E_{in})$ is a positive multiple of $E_{in}$ and $\mathcal{L}(E_{ni})$ is a positive multiple of $E_{ni}$.

Let $\mathcal{L}(E_{nn}) = dE_{nn}$, where $d > 0$. By Step 5, for each $i$, $1 \le i \le n-1$, $\mathcal{L}(E_{in})$ is either a positive multiple of itself or a positive multiple of $E_{ni}$. Assume that for some such $i$, the latter case happens.
Since $n \ge 3$, we can choose some $j$, $1 \le j \le n$, $j \neq i, n$. Then $\mathcal{L}(E_{jn})$ is also either a positive multiple of itself or of $E_{nj}$. In the first case, let $B = E_{ij} + E_{jn} + E_{in} + E_{jj}$. Then $B \in \overline{M_n^{-1}}$ and $\mathcal{L}(B) = E_{ij} + c_j E_{jn} + c_i E_{ni} + E_{jj}$ for some $c_i, c_j > 0$. Note that the graph of the matrix $\mathcal{L}(B)$ is not transitive, so $\mathcal{L}(B) \notin \overline{M_n^{-1}}$. In the second case, let $\mathcal{L}(E_{ni}) = k_i E_{in}$ and $\mathcal{L}(E_{jn}) = c_j E_{nj}$, where $k_i, c_j > 0$. Then the nonnegative matrix $E_{jn} + E_{ni} + E_{ji}$ has a transitive graph. As is well-known (see Johnson [8, Theorem 7]), $E_{jn} + E_{ni} + E_{ji} + \alpha I_n \in M_n^{-1}$ for $\alpha > 0$ sufficiently large. However, the matrix $\mathcal{L}(E_{jn} + E_{ni} + E_{ji} + \alpha I_n) = c_j E_{nj} + k_i E_{in} + E_{ji} + \alpha \mathcal{L}(I_n)$ does not have a transitive graph. So in either case, we arrive at a contradiction.

Step 9. By Step 8 we may assume that $\mathcal{L}(E_{in}) = c_i E_{in}$ and $\mathcal{L}(E_{ni}) = k_i E_{ni}$, where $c_i, k_i > 0$, for $1 \le i \le n-1$. Then we will have $c_1 = c_2 = \cdots = c_{n-1}$ and $k_1 = k_2 = \cdots = k_{n-1}$.

Again let $\mathcal{L}(E_{nn}) = dE_{nn}$. Consider any $i$, $1 \le i \le n-1$. Let $\mathcal{L}_{\{i,n\}}$ have the same meaning as that for $\mathcal{L}_\alpha$ in Step 7. Then $\mathcal{L}_{\{i,n\}}(\overline{M_2^{-1}}) = \overline{M_2^{-1}}$, and by the argument given in Step 6, we have $c_i k_i = d$ (noting that we have assumed $\mathcal{L}(E_{ii}) = E_{ii}$). Suppose that $c_i > c_j$ for some $i, j$, $1 \le i, j \le n-1$. Clearly $E_{ji} + E_{in} + E_{jn} + E_{ii} \in \overline{M_n^{-1}}$. However, $\mathcal{L}(E_{ji} + E_{in} + E_{jn} + E_{ii}) = E_{ji} + c_i E_{in} + c_j E_{jn} + E_{ii} \notin \overline{M_n^{-1}}$, because its $\{i, j, n\}$-principal submatrix is permutation similar to the $3 \times 3$ matrix
$$\begin{pmatrix} 1 & 0 & c_i \\ 1 & 0 & c_j \\ 0 & 0 & 0 \end{pmatrix},$$
whose $(3,2)$-cofactor equals $c_i - c_j > 0$, so that its adjoint is not a Z-matrix. So we arrive at a contradiction.

It is now clear that $\mathcal{L}$ is a positive diagonal equivalence, and our theorem follows.
In this paper we have characterized the linear transformations which map the matrix class $M_n^{-1}$ or $\overline{M_n^{-1}}$ onto itself. One may also be interested in linear transformations which map these matrix classes into themselves. The classification problem is then more difficult and more complicated, as illustrated by the following examples. For progress on the linear-preserver "into" problem for the class of P-matrices, or M-matrices, see Hershkowitz and Johnson [7] and their remarks at the end.

Example 4.2. Let $\mathcal{L}$ be a linear transformation on $R^{n,n}$ given by: for any $A = (a_{ij}) \in R^{n,n}$,
$$\mathcal{L}(A) = \begin{pmatrix} D_1 & L_{12} \\ 0 & D_2 \end{pmatrix},$$
where $D_1 = \operatorname{diag}(a_{11}, \ldots, a_{ss})$, $D_2 = \operatorname{diag}(a_{s+1,s+1}, \ldots, a_{nn})$, $s$ is a fixed positive integer less than or equal to $n-1$, and $L_{12}$ is an $s \times (n-s)$ matrix each of whose entries is an arbitrary nonnegative linear functional of the entries of $A$. Then $\mathcal{L}(M_n^{-1}) \subset M_n^{-1}$ and $\mathcal{L}(\overline{M_n^{-1}}) \subset \overline{M_n^{-1}}$, as can be shown by Theorem 3.2 and Observation 2.1. Note that the nullspace of $\mathcal{L}$ is the intersection of the nullspaces of the $n + s(n-s)$ linear functionals on $R^{n,n}$ that give the (possibly) nonzero entries of $\mathcal{L}(A)$; since $n + s(n-s) < n^2$, this nullspace is nonzero. Hence $\mathcal{L}$ is always singular.

Example 4.3. Let $\mathcal{L}$ be the linear transformation on $R^{n,n}$ given by: for any $A = (a_{ij}) \in R^{n,n}$, $\mathcal{L}(A) = (b_{ij})$, where $b_{ij}$ equals $a_{ij}$ if $i \neq j$, and equals $2a_{ij}$ if $i = j$. Then $\mathcal{L}(M_n^{-1}) \subset M_n^{-1}$ (since $M_n^{-1}$ is closed under the addition of a nonnegative diagonal matrix), but $\mathcal{L}(M_n^{-1}) \neq M_n^{-1}$, and $\mathcal{L}$ is nonsingular.
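A quick check of Example 4.3 (our sketch): the map that doubles the diagonal sends inverse M-matrices to inverse M-matrices, since it merely adds a nonnegative diagonal matrix.

```python
import numpy as np

def double_diagonal(A):
    """The map of Example 4.3: b_ij = a_ij for i != j, and b_ii = 2 a_ii."""
    return A + np.diag(np.diag(A))

# a sample inverse M-matrix (inverse of a diagonally dominant Z-matrix)
A = np.linalg.inv(np.array([[ 3.0, -1.0, -0.5],
                            [-1.0,  3.0, -1.0],
                            [-0.5, -1.0,  3.0]]))
B = double_diagonal(A)
Binv = np.linalg.inv(B)
off = Binv - np.diag(np.diag(Binv))
print(bool(np.all(B >= 0) and np.all(off <= 1e-12)))   # True: B is again an inverse M-matrix
```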
The authors would like to thank Jen-chung Chuan for a helpful remark on Example 3.3.

References

[1] A. Berman, D. Hershkowitz and C. R. Johnson, "Linear transformations that preserve certain positivity classes of matrices," Linear Algebra Appl. 68: 9-29 (1985).
[2] A. Berman and O. Kessler, "Matrices with a transitive graph and inverse M-matrices," Linear Algebra Appl. 71: 175-185 (1985).
[3] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, 1979.
[4] M. Fiedler, C. R. Johnson and T. L. Markham, "Notes on inverse M-matrices," Linear Algebra Appl. 91: 75-81 (1987).
[5] M. Fiedler and T. L. Markham, "A characterization of the closure of inverse M-matrices," Linear Algebra Appl. 105: 209-223 (1988).
[6] S. Friedland, D. Hershkowitz and H. Schneider, "Matrices whose powers are M-matrices or Z-matrices," Trans. Amer. Math. Soc. 300: 343-366 (1987).
[7] D. Hershkowitz and C. R. Johnson, "Linear transformations that map the P-matrices into themselves," Linear Algebra Appl. 74: 23-38 (1986).
[8] C. R. Johnson, "Inverse M-matrices," Linear Algebra Appl. 47: 195-216 (1982).
[9] C. R. Johnson, "Closure properties of certain positivity classes of matrices under various algebraic operations," Linear Algebra Appl. 97: 243-247 (1987).
[10] M. Lewin and M. Neumann, "On the inverse M-matrix problem for (0,1)-matrices," Linear Algebra Appl. 30: 41-50 (1980).
Department of Mathematics, Tamkang University, Tamsui, Taiwan 25137, Republic of China.