Journal of Mathematical Sciences, Vol. 114, No. 6, 2003
RATIONAL PROCEDURES IN THE PROBLEM OF COMMON INVARIANT SUBSPACES OF TWO MATRICES

Yu. A. Al’pin and Kh. D. Ikramov
UDC 519.6
The paper discusses criteria for the existence of common invariant subspaces of two complex matrices verifiable in a finite number of arithmetic operations. Bibliography: 20 titles.
1. Introduction

Let A and B be given n × n complex matrices. Is it possible to verify whether A and B have (or do not have) a common invariant subspace of dimension p, where 1 ≤ p < n, by a procedure using only a finite number of arithmetic operations with the entries of the matrices? (It is these procedures that we call rational.)

One can readily indicate a rationally verifiable sufficient condition, namely, that A and B commute. It is well known that, in this case, both matrices have common invariant subspaces of any dimension. However, a nontrivial common invariant subspace for A and B can exist even for noncommuting matrices. A rational necessary and sufficient condition for the case p = 1 is given in [1] (see also the review [2]). It turns out that further progress in the problem of common invariant subspaces is related to the study of the algebra A(A, B), that is, the subalgebra of the full algebra M_n of n × n complex matrices generated by A and B. The presentation below (which follows that of [3–7]) will reveal the importance of the semisimplicity property of the algebra A(A, B) and of such concepts as the radical and the socle of an algebra. The style of our presentation is such that, in many cases, we restrict ourselves to stating a result, discussing it, and then referring to more complete publications.

We use the standard interpretation of a matrix A as a linear operator acting in C^n, the space of complex column vectors, by the rule x ↦ Ax. All the subspaces considered in the paper are subspaces of C^n; we adopt this convention without mentioning it again. Necessary facts from the theory of algebras are recalled only to the extent of generality sufficient for our purposes, i.e., for finite-dimensional associative algebras of complex matrices. A more general exposition of this theory can be found in [8–11].

2. Shemesh's criterion. The condensed form of a pair of matrices and its applications

We begin with the problem of the existence, for a pair of matrices, of a common one-dimensional invariant subspace or, in other words, of a common eigenvector. Surprisingly, such a natural problem was formulated and solved relatively recently (as compared with the age of matrix theory). Here is the solution given in [1].

Theorem 1. A common eigenvector for A and B exists if and only if the subspace

N_1 = \bigcap_{k,l=1}^{n-1} \ker\,[A^k, B^l] \qquad (1)
is of positive dimension.

The symbol [F, G] is the standard notation for the commutator FG − GF of the matrices F and G. The genuine meaning of the subspace N_1 is revealed by a remark in [1], which we formulate as the following theorem.

Theorem 2. The subspace N_1 is invariant with respect to both matrices A and B; moreover, A and B commute on N_1. Every common invariant subspace L of A and B on which A and B commute is contained in N_1.

Matrices A and B are said to be partially commuting if they have an eigenvector in common, i.e., if the subspace N_1 is of positive dimension.

Translated from Zapiski Nauchnykh Seminarov POMI, Vol. 268, 2000, pp. 9–23. Original article submitted October 2, 2000.
It is essential that the subspace N_1 can be found rationally. Indeed, the calculation of a basis in N_1 amounts to solving a system of linear equations Lx = 0 with the coefficient matrix

L = \begin{pmatrix} [A, B] \\ \vdots \\ [A^{n-1}, B^{n-1}] \end{pmatrix}.

This matrix is built up from the commutators occurring in (1).
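For illustration, here is a minimal sketch of this computation in Python with SymPy, chosen because its exact rational arithmetic matches the spirit of a rational procedure; the helper names are ours, not the paper's.

```python
# A minimal sketch of the rational computation of N1 (Shemesh's criterion),
# assuming SymPy is available; illustrative names, not the authors' code.
from sympy import Matrix

def commutator(F, G):
    """The commutator [F, G] = FG - GF."""
    return F * G - G * F

def shemesh_subspace(A, B):
    """Basis of N1: the null space of the matrix L obtained by stacking
    the commutators [A^k, B^l], 1 <= k, l <= n - 1, from (1)."""
    n = A.rows
    blocks = [commutator(A**k, B**l)
              for k in range(1, n) for l in range(1, n)]
    L = Matrix.vstack(*blocks)
    return L.nullspace()   # nonempty iff A and B share an eigenvector

if __name__ == "__main__":
    # A noncommuting pair with the common eigenvector e1.
    A = Matrix([[1, 1, 0], [0, 2, 0], [0, 0, 3]])
    B = Matrix([[1, 0, 1], [0, 1, 0], [0, 0, 2]])
    print(shemesh_subspace(A, B))   # nonempty: a common eigenvector exists
```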
Theorem 2 will be used for simultaneously reducing A and B to a special block triangular form by a similarity transformation.

Theorem 3 [3]. Let A and B be partially commuting matrices. Then there exists a similarity transformation that brings A and B to the forms

R = \begin{pmatrix} R_{11} & R_{12} & \cdots & R_{1,k-1} & R_{1,k} \\ & R_{22} & \cdots & R_{2,k-1} & R_{2,k} \\ & & \ddots & \vdots & \vdots \\ & & & R_{k-1,k-1} & R_{k-1,k} \\ & & & & R_{kk} \end{pmatrix}, \qquad
S = \begin{pmatrix} S_{11} & S_{12} & \cdots & S_{1,k-1} & S_{1,k} \\ & S_{22} & \cdots & S_{2,k-1} & S_{2,k} \\ & & \ddots & \vdots & \vdots \\ & & & S_{k-1,k-1} & S_{k-1,k} \\ & & & & S_{kk} \end{pmatrix},
respectively, where the diagonal blocks R_{ii} and S_{ii} (i = 1, …, k − 1) commute, and the blocks R_{kk} and S_{kk} either commute or have no common eigenvector (i.e., they are not partially commuting). The matrix P of this transformation can be computed rationally.

Proof. Choose a matrix P_1 such that its first columns constitute a basis in N_1. Then, by Theorem 2, the similarity transformation with P_1 reduces A and B to the following forms:

R_1 = P_1^{-1} A P_1 = \begin{pmatrix} R_{11} & \tilde R_{12} \\ 0 & \tilde R_{22} \end{pmatrix}, \qquad
S_1 = P_1^{-1} B P_1 = \begin{pmatrix} S_{11} & \tilde S_{12} \\ 0 & \tilde S_{22} \end{pmatrix}.

Here, R_{11} and S_{11} are the matrices describing the restrictions of the operators A and B to the invariant subspace N_1. These matrices commute.

If \tilde R_{22} and \tilde S_{22} are partially commuting matrices, then an analogous transformation is applied to these matrices as well. Let P_2 be the corresponding transformation matrix for R_1 and S_1. Then the second step yields the matrices R_2 and S_2 that are similar to A and B, respectively, and have the following forms:

R_2 = P_2^{-1} R_1 P_2 = \begin{pmatrix} R_{11} & R_{12} & \tilde R_{13} \\ 0 & R_{22} & \tilde R_{23} \\ 0 & 0 & \tilde R_{33} \end{pmatrix}, \qquad
S_2 = P_2^{-1} S_1 P_2 = \begin{pmatrix} S_{11} & S_{12} & \tilde S_{13} \\ 0 & S_{22} & \tilde S_{23} \\ 0 & 0 & \tilde S_{33} \end{pmatrix}.

Here, the matrices in the pairs R_{11}, S_{11} and R_{22}, S_{22} commute. The commutator [R_2, S_2] has the block form

\begin{pmatrix} 0 & C_{12} & \tilde C_{13} \\ 0 & 0 & \tilde C_{23} \\ 0 & 0 & \tilde C_{33} \end{pmatrix}.

Note that C_{12} ≠ 0. Indeed, otherwise the submatrices

\begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix}, \qquad \begin{pmatrix} S_{11} & S_{12} \\ 0 & S_{22} \end{pmatrix}

would commute. In this case, there would exist a common invariant subspace L of A and B for which N_1 would be a proper subspace and on which A and B would commute. This contradicts Theorem 2.

Repeating this procedure of extracting commuting blocks as long as possible, we eventually arrive at the matrices R and S described in the theorem. As is easily seen from the definition of the matrices P_1, P_2, …, P_{k-1}, these matrices can be computed rationally. Therefore, the resulting transformation matrix P = P_1 P_2 · … · P_{k-1} can be computed rationally as well.
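The proof above is effective, and the following hedged sketch (building on shemesh_subspace() from the earlier fragment) mimics it step by step. The labels "a" and "b" anticipate the two types of condensed form defined in the next paragraph; as before, this is our illustration, not the authors' code.

```python
# Illustrative reduction to condensed form (proof of Theorem 3): at every
# step the subspace N1 of the current trailing blocks is split off.
from sympy import Matrix, diag, eye

def extend_to_basis(cols, n):
    """Complete the given independent columns to an invertible n x n matrix."""
    M = Matrix.hstack(*cols)
    for j in range(n):
        cand = Matrix.hstack(M, eye(n)[:, j])
        if cand.rank() > M.rank():
            M = cand
    return M

def condensed_form(A, B):
    """Return (R, S, P, sizes, kind): R = P**-1 * A * P and S = P**-1 * B * P
    are in condensed form, with diagonal block orders listed in `sizes`."""
    n = A.rows
    P = eye(n)
    done, sizes, kind = 0, [], "a"
    while done < n:
        R, S = P.inv() * A * P, P.inv() * B * P
        A2, B2 = R[done:, done:], S[done:, done:]
        if A2 * B2 == B2 * A2:
            sizes.append(n - done)       # trailing blocks commute: type (a)
            kind = "a"
            break
        basis = shemesh_subspace(A2, B2)
        if not basis:
            sizes.append(n - done)       # no common eigenvector: type (b)
            kind = "b"
            break
        P1 = extend_to_basis(basis, n - done)
        # Pad P1 with an identity on the part that is already condensed.
        P = P * (P1 if done == 0 else diag(eye(done), P1))
        sizes.append(len(basis))
        done += len(basis)
    return P.inv() * A * P, P.inv() * B * P, P, sizes, kind
```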
We say that A and B can be brought to the condensed form of type (a) if the blocks R_{kk} and S_{kk} are commuting, and to the condensed form of type (b) if these blocks are not partially commuting. The proof of Theorem 3 reveals that there is a certain freedom in the choice of the matrices P_1, P_2, …; thus, the condensed form is not uniquely determined. However, the number k and the orders of the diagonal blocks do not depend on the choice of particular transformations. The number k is called the index of the condensed form. If A and B commute, we set k = 1.

Consider the commutator of matrices in condensed form. We have

T = [R, S] = \begin{pmatrix} 0 & T_{12} & \cdots & T_{1,k-1} & T_{1,k} \\ & 0 & \cdots & T_{2,k-1} & T_{2,k} \\ & & \ddots & \vdots & \vdots \\ & & & 0 & T_{k-1,k} \\ & & & & T_{k,k} \end{pmatrix}.
Here, T_{12}, …, T_{k-2,k-1} are nonzero blocks, and T_{k-1,k} and T_{k,k} cannot vanish simultaneously. This observation entails the following bound for the rank of the commutator.

Theorem 4 [3]. Let A and B be partially commuting matrices with the index of the condensed form equal to k. Then rank [A, B] ≥ k − 1.

Consider two applications of the condensed form.

1. The triangularization problem for a pair of matrices. How can one determine whether two given matrices can be brought to triangular form by the same similarity transformation? This is an old problem of linear algebra, for which the following criterion of McCoy [12, 13] is known.

Theorem 5. Matrices A and B are simultaneously triangularizable if and only if the matrix

p(A, B)[A, B] \qquad (2)

is nilpotent for any complex polynomial p(s, t) in the noncommuting variables s and t.

An application of McCoy's theorem presumes that infinitely many matrices (2) be checked for nilpotency. A more elementary proof of this theorem, given in [14, 15], offers no rational procedure for verifying its condition. The following assertion shows that the reduction of A and B to a condensed form can be regarded as the required rational procedure.

Theorem 6. Matrices A and B are simultaneously triangularizable if and only if they can be reduced to a condensed form of type (a).

Proof. Sufficiency. Let Q_i be the matrix of a similarity transformation that brings the commuting blocks R_{ii} and S_{ii} to triangular form. Then the block diagonal matrix diag(Q_1, …, Q_k) brings A and B to triangular form.

Necessity. Obviously, simultaneously triangularizable matrices A and B must be partially commuting. We will show that the application of the process described in the proof of Theorem 3 to A and B yields exactly the condensed form of type (a). Suppose the contrary, i.e., let the process result in a form of type (b), where the matrices R_{kk} and S_{kk} have no common eigenvector. Matrices having no eigenvector in common cannot be simultaneously triangularizable. By Theorem 5, there exists a polynomial p_0(s, t) such that the matrix p_0(R_{kk}, S_{kk})[R_{kk}, S_{kk}] is not nilpotent. But then the matrix p_0(R, S)[R, S] is not nilpotent either, and the same is true if, in the latter expression, R and S are replaced by the similar matrices A and B, respectively. Thus, A and B do not satisfy the condition of McCoy's theorem, which contradicts the assumption that A and B are simultaneously triangularizable.

A different rational solution of the problem of simultaneous triangularization, related to the theory of Lie algebras and ideals in associative algebras, can be found in [16].
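Combined with the reduction sketched earlier, Theorem 6 yields a one-line rational test; the sketch below is again our illustration. The example pair has a rank-one commutator, i.e., it is a Laffey pair in the sense of Theorem 7 of the next subsection.

```python
# Rational triangularizability test via Theorem 6: the pair is
# simultaneously triangularizable iff its condensed form is of type (a).
def simultaneously_triangularizable(A, B):
    *_, kind = condensed_form(A, B)
    return kind == "a"

if __name__ == "__main__":
    from sympy import Matrix
    A = Matrix([[0, 1], [0, 0]])
    B = Matrix([[1, 0], [0, 0]])
    print((A * B - B * A).rank())                  # 1: a Laffey pair
    print(simultaneously_triangularizable(A, B))   # True
```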
2. Laffey's pairs. Another application of the condensed form concerns the following remarkable result of Laffey [17].

Theorem 7. If rank [A, B] = 1 (in this case, A and B are said to form a Laffey pair), then A and B are simultaneously triangularizable.

It turns out that Laffey pairs have an especially simple condensed form.

Theorem 8 [3]. Let A and B form a Laffey pair. Then A and B can be brought to the following form of type (a):

R = \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix}, \qquad S = \begin{pmatrix} S_{11} & S_{12} \\ 0 & S_{22} \end{pmatrix}.

Proof. The fact that a Laffey pair can be brought to the form of type (a) is immediately implied by Theorems 6 and 7. It remains to prove that the index of the form is equal to 2. Applying Theorem 4 to the case under consideration, we have

k − 1 ≤ rank [A, B] = 1, whence k ≤ 2.

Since A and B do not commute (i.e., k ≠ 1), we conclude that k = 2.

Remark. If the rank of the commutator is interpreted as a measure of the noncommutativity of matrices, then Laffey pairs are the closest ones to pairs of commuting matrices. Theorem 8 supports this characterization from a different point of view; namely, among all noncommuting pairs, Laffey pairs are the pairs whose condensed form has the minimal index.

3. The standard polynomials. An extension of Shemesh's theorem

It is possible to generalize Theorem 2 on the basis of the notion of the standard polynomial. Recall that the standard polynomial of degree r is the polynomial in the noncommuting variables x_1, …, x_r of the form

S_r(x_1, …, x_r) = \sum_{\sigma} \operatorname{sign}(\sigma)\, x_{\sigma(1)} x_{\sigma(2)} \cdots x_{\sigma(r)},

where the summation is over all permutations σ of the symbols 1, …, r. In particular, for r = 2 we have S_2(x_1, x_2) = x_1 x_2 − x_2 x_1.

By the classical Amitsur–Levitzki theorem [9], the equality S_{2k}(C_1, …, C_{2k}) = 0 holds for any (2k)-tuple of matrices C_1, …, C_{2k} ∈ M_l, where l ≤ k. Moreover, for every l ≥ k + 1, there exists a (2k)-tuple of l × l matrices G_1, …, G_{2k} such that S_{2k}(G_1, …, G_{2k}) ≠ 0. In other words, the algebra M_l satisfies the identity S_{2k} = 0 for l ≤ k and does not satisfy this identity for l ≥ k + 1.

The matrices p(A, B) (see Theorem 5) form a subalgebra of the full matrix algebra M_n generated by the matrices A and B. This subalgebra will be denoted by A = A(A, B). Alternatively, A can be defined as the minimal (w.r.t. inclusion) subset of M_n that contains the matrices I, A, and B and is closed under the operations of matrix addition, matrix multiplication, and multiplication by complex scalars. If a subspace L is invariant w.r.t. the matrices A and B, then, obviously, it is also invariant w.r.t. any matrix from A(A, B), i.e., L is invariant w.r.t. the entire algebra A.

By Burnside's theorem (see, for example, [8]), the only (nonzero) subalgebra of M_n that has no nontrivial invariant subspace is M_n itself. Consider the restrictions of the operators from A(A, B) to the invariant subspace L. These restrictions constitute an algebra, which will be denoted by A|L. The algebra A|L may be considered as the subalgebra of M_p, where p = dim L, generated by the matrices A|L and B|L. Burnside's theorem implies that A|L coincides with M_p if and only if L contains no A-invariant subspace except for {0} and L itself. In this case, L is called an irreducible invariant subspace.

The following generalization of Theorem 2 is valid.
Theorem 9 [4, 5]. Define

N_k = \bigcap \ker\bigl(S_{2k}(C_1, …, C_{2k})\, C_{2k+1}\bigr), \qquad (3)

where the intersection is taken over all (2k + 1)-tuples of matrices C_1, …, C_{2k+1} ∈ A(A, B). Then N_k is an invariant subspace of the algebra A, and A satisfies the identity S_{2k} = 0 on this subspace. This means that, for all C_1, …, C_{2k} ∈ A,

S_{2k}(C_1, …, C_{2k})\, x = 0 \qquad (4)

for all x ∈ N_k. Any invariant subspace L of A on which the identity S_{2k} = 0 is satisfied is contained in N_k.

In the case k = 1, representation (3) gives the following description of N_1, different from the description (1):

N_1 = \bigcap \ker([C_1, C_2]\, C_3), \qquad C_1, C_2, C_3 ∈ A.
Observe that S_{2k} is a linear function of each of its arguments. Thus, in order to determine N_k, it suffices to take the intersection in (3) over the finite set of (2k + 1)-tuples C_1, …, C_{2k+1} in which every C_i independently runs over the elements of a certain basis of A. It was shown in [3] that a basis of A(A, B) can be computed rationally. Arguments similar to those presented above for N_1 reduce the computation of a basis of N_k to the solution of a certain system of linear homogeneous equations. Hence, N_k can be computed rationally. Note that the following inclusions obviously hold:

{0} ≡ N_0 ⊂ N_1 ⊂ … ⊂ N_n = C^n. \qquad (5)
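The passage above contains all the ingredients of the rational computation of N_k: a basis of A(A, B) and the multilinearity of S_{2k}. The sketch below, under the same SymPy conventions as before, implements them naively; the brute-force enumeration of basis tuples is exact but exponentially expensive, so it is illustrative only.

```python
# Illustrative rational computation of a basis of A(A, B) and of the
# subspace N_k from (3); naive, exact, and exponentially expensive.
from itertools import permutations, product
from sympy import Matrix, eye
from sympy.combinatorics import Permutation

def algebra_basis(A, B):
    """Linearly independent matrices spanning A(A, B): close the words in
    A and B under right multiplication, keeping only new directions."""
    n = A.rows
    basis, stacked = [], Matrix.zeros(0, n * n)
    frontier = [eye(n), A, B]
    while frontier:
        next_frontier = []
        for C in frontier:
            cand = Matrix.vstack(stacked, Matrix([list(C)]))  # flatten C
            if cand.rank() > stacked.rank():
                stacked = cand
                basis.append(C)
                next_frontier += [C * A, C * B]
        frontier = next_frontier
    return basis

def standard_poly(mats):
    """S_r(C_1, ..., C_r): the signed sum over all products C_s(1)...C_s(r)."""
    n = mats[0].rows
    total = Matrix.zeros(n, n)
    for sigma in permutations(range(len(mats))):
        term = eye(n)
        for i in sigma:
            term *= mats[i]
        total += Permutation(sigma).signature() * term
    return total

def N_k(A, B, k):
    """Basis of N_k: by multilinearity, it suffices to let each C_i in (3)
    run over a basis of A(A, B)."""
    E = algebra_basis(A, B)
    rows = [standard_poly(list(tup)) * C
            for tup in product(E, repeat=2 * k) for C in E]
    return Matrix.vstack(*rows).nullspace()
```

Since S_2 is exactly the commutator, N_k(A, B, 1) reproduces the subspace N_1 in the alternative form given above.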
The theorems below reveal the advantages of using the subspaces N_k in studying the problem of invariant subspaces.

Theorem 10. Let L be an invariant subspace of the algebra A(A, B).
(1) If dim L ≤ k, then L ⊆ N_k.
(2) If dim L ≥ k + 1 and L is irreducible, then L ⊈ N_k.

Proof. In case (1), the algebra A|L satisfies the identity S_{2k} = 0 as a subalgebra of M_l with l = dim L ≤ k. Thus, equality (4) is valid for all x ∈ L. By Theorem 9, L ⊆ N_k. In case (2), we have A|L = M_l. By the Amitsur–Levitzki theorem, A|L does not satisfy the identity S_{2k} = 0. Then Theorem 9 implies that L ⊈ N_k.

Now it is easy to prove the following extension of Theorem 1.

Theorem 11. A nontrivial subspace of dimension not exceeding k invariant w.r.t. the algebra A(A, B) exists if and only if N_k ≠ {0}.

Indeed, if such an invariant subspace exists, then, by Theorem 10, it must be contained in N_k; hence, N_k ≠ {0}. Conversely, assume that N_k ≠ {0}. Then N_k contains a nontrivial irreducible subspace whose dimension, in view of Theorem 10, does not exceed k.

The existence of an invariant subspace of dimension exactly equal to k turns out to be a more difficult problem. A rational algorithm for solving this problem is proposed in [18]. However, this algorithm has an essential limitation; namely, at least one of the matrices generating the given algebra must have no multiple eigenvalues. We will show that, for arbitrary matrices, the following two problems can be solved rationally: (a) Does there exist an irreducible invariant subspace of dimension k? (b) Does there exist a two-dimensional invariant subspace? First we will consider semisimple algebras.

4. The case of semisimple algebras and ∗-algebras

A matrix C ∈ A is called a properly nilpotent element of an algebra A if the matrix CQ is nilpotent for every Q ∈ A. The set R of all properly nilpotent matrices is called the radical of A. If the radical contains only the zero matrix, then the algebra A is said to be semisimple. Using the well-known properties of the trace, one can characterize the radical as the set of matrices C ∈ A such that tr(CQ) = 0 for all Q ∈ A (see, for example, [10]).
If a basis E_1, …, E_t of A is known, then it suffices to verify the equalities tr(C E_j) = 0, j = 1, …, t. For a matrix C = x_1 E_1 + … + x_t E_t to belong to the radical, it is necessary and sufficient that the scalars x_1, …, x_t form a solution of the system of linear equations

\sum_{i=1}^{t} x_i \operatorname{tr}(E_i E_j) = 0, \qquad j = 1, …, t.
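A sketch of this computation, reusing algebra_basis() from the previous fragment; the determinant test anticipates the remark on semisimplicity that follows, and the names are again ours.

```python
# Rational computation of the radical of A(A, B) via the trace form.
from sympy import Matrix

def gram_matrix(E):
    """The matrix (tr(E_i E_j)) of the trace form on a basis E of A(A, B)."""
    t = len(E)
    return Matrix(t, t, lambda i, j: (E[i] * E[j]).trace())

def radical_basis(A, B):
    """Matrices C = x_1 E_1 + ... + x_t E_t whose coefficient vectors x
    solve the homogeneous system sum_i x_i tr(E_i E_j) = 0."""
    E = algebra_basis(A, B)
    n = A.rows
    return [sum((x[i] * E[i] for i in range(len(E))), Matrix.zeros(n, n))
            for x in gram_matrix(E).nullspace()]

def is_semisimple(A, B):
    """The radical is {0} iff the trace-form Gram matrix is nonsingular."""
    return gram_matrix(algebra_basis(A, B)).det() != 0
```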
Thus, the semisimplicity of A is equivalent to the nonsingularity of the coefficient matrix (tr(E_i E_j)). Combining these two facts, namely, that a basis of the algebra A(A, B) can be found rationally and that, with a known basis, the calculation of the radical reduces to the solution of a system of linear equations, we conclude that the problem of finding the radical of A(A, B) and, in particular, that of verifying whether A(A, B) is a semisimple algebra can both be solved rationally.

We will also need the following geometric characterization of semisimplicity: if L and U are invariant subspaces of a semisimple algebra A and L ⊆ U, then there exists a direct complement of L in U that is A-invariant. This implies that any invariant subspace of a semisimple algebra is a direct sum of irreducible invariant subspaces.

Theorem 12. Let A(A, B) be a semisimple algebra. An irreducible A-invariant subspace of dimension k exists if and only if dim N_{k-1} < dim N_k.

Proof. Assume that the subspace in question exists. By Theorem 10, this subspace is contained in N_k but not in N_{k-1}. Thus, dim N_{k-1} < dim N_k. Conversely, assume that the latter inequality holds. Let Φ_k be an A-invariant subspace such that

N_{k-1} ⊕ Φ_k = N_k. \qquad (6)

The same Theorem 10 now implies that all the irreducible invariant subspaces contained in Φ_k are of dimension k, which completes the proof.

Since bases of the subspaces N_k and, therefore, the dimensions of these subspaces can be calculated rationally, the condition of Theorem 12 is rationally verifiable. The number of summands in the decomposition of Φ_k into a direct sum of k-dimensional subspaces can also be easily determined, because it is equal to dim Φ_k / k. In accordance with (6), we have dim Φ_k = dim N_k − dim N_{k-1}.

Consider an important particular case of semisimple algebras, namely, the case of ∗-algebras. A ∗-algebra A is characterized by the following property: if G ∈ A, then the adjoint matrix G* also belongs to A. We will now regard C^n as a unitary space with the standard scalar product. There is another property that is characteristic of ∗-algebras: if L and U are invariant subspaces of a ∗-algebra A and L ⊆ U, then the orthogonal complement of L in U is also A-invariant. In particular, Φ_k is the orthogonal complement of N_{k-1} in N_k. Since the latter two subspaces can be computed rationally, the same holds for Φ_k. Thus, the orthogonal decomposition

C^n = Φ_{k_1} ⊕ Φ_{k_2} ⊕ ⋯ ⊕ Φ_{k_f} \qquad (7)

can be computed rationally (only nonzero summands are kept in (7)). With decomposition (7) one can associate the corresponding block diagonal forms of the matrices A and B (also computable rationally):

A ∼ A_1 ⊕ A_2 ⊕ ⋯ ⊕ A_f, \qquad B ∼ B_1 ⊕ B_2 ⊕ ⋯ ⊕ B_f, \qquad (8)

where the symbol ∼ denotes unitary similarity. Since the subspace Φ_{k_i} can be decomposed into an orthogonal sum of subspaces of dimension k_i, the matrices A_i and B_i (i = 1, …, f) can be brought to direct sums of k_i × k_i blocks by the same unitary similarity transformation:

A_i ∼ A_{i1} ⊕ A_{i2} ⊕ ⋯ ⊕ A_{il_i}, \qquad B_i ∼ B_{i1} ⊕ B_{i2} ⊕ ⋯ ⊕ B_{il_i}. \qquad (9)
In general, the direct sums (9) cannot be computed rationally. For example, let A = B and let A be a normal matrix. Then (9) is the spectral decomposition of A, which usually cannot be computed even in radicals. However, one can indicate a rationally computable parameter of decomposition (9). Let μ_i be the largest number of pairs in the sequence (A_{im}, B_{im}), m = 1, …, l_i, no two of which can be transformed into one another by the same unitary similarity. Then, by the generalized Burnside theorem [8],

dim A(A_i, B_i) = μ_i k_i^2.

The left-hand side of this equality can be computed rationally. Hence, μ_i is rationally computable as well.
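As a small hedged illustration, μ_i falls straight out of the basis computation sketched earlier; the extraction of the blocks A_i, B_i themselves is assumed to have been done elsewhere.

```python
# mu_i from the generalized Burnside theorem: dim A(A_i, B_i) = mu_i * k_i^2,
# so mu_i is the basis size divided by k_i^2; assumes algebra_basis() above.
def mu(Ai, Bi, ki):
    return len(algebra_basis(Ai, Bi)) // (ki * ki)
```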
An interesting class of ∗-algebras is formed by the algebras of the type A(A, A*), which have been studied in [19, 20, 6]. In this section, we have mainly followed the exposition in [6], examining, however, the more general ∗-algebras of the type A(A, B). Now we will indicate two properties that are specific to the case A = A(A, A*). First, for these algebras, the subspace N_1 can be characterized as the maximal subspace L invariant w.r.t. both A and A* such that the restriction A|L is a normal matrix. Second, the following assertion is valid.

Theorem 13 [20]. Let d be the number of distinct nonzero eigenvalues of an n × n matrix A. For A to be a normal matrix it is necessary and sufficient that dim A(A, A*) = d.

5. A criterion for the existence of a two-dimensional invariant subspace for a pair of matrices

First we will show how one can verify whether arbitrary matrices A and B (for which A(A, B) is not necessarily a semisimple algebra) have a common irreducible invariant subspace of dimension k, where 1 < k < n. To this end, the notion of the socle of an algebra A will be used. The socle of A is the sum of all the irreducible invariant subspaces of A. Another characterization of the socle is that it is the maximal A-invariant subspace L such that the algebra A|L is semisimple [9]. Finally, the socle of A can be described as the set

Λ = \bigcap_{Q ∈ R} \ker Q

(see [5]). If a basis R_1, …, R_m of the radical R is known, then the above formula can be replaced by the formula

Λ = \bigcap_{j=1}^{m} \ker R_j \quad \text{or, equivalently,} \quad Λ = \ker \begin{pmatrix} R_1 \\ \vdots \\ R_m \end{pmatrix}.
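A sketch of the corresponding computation, stacking a basis of the radical exactly as in the last formula; it reuses radical_basis() from the earlier fragment.

```python
# Rational computation of the socle: the null space of the stacked radical
# basis; if the radical is zero, the socle is all of C^n.
from sympy import Matrix, eye

def socle(A, B):
    R = radical_basis(A, B)
    if not R:                      # semisimple algebra: socle = C^n
        n = A.rows
        return [eye(n)[:, j] for j in range(n)]
    return Matrix.vstack(*R).nullspace()
```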
It was observed in the preceding section that a basis of the radical of the algebra A(A, B) can be computed rationally. Since finding the null space of a matrix amounts to solving a system of linear equations, we arrive at the following conclusion: the socle of A(A, B) can be computed by a rational procedure.

It is now clear what rational procedure can be used to verify whether the matrices A and B have a common irreducible invariant subspace of dimension k. First, one should calculate the socle Λ of A(A, B) and the restriction A|Λ. Then the above problem must be solved for the semisimple algebra A|Λ with the help of Theorem 12. Since the socle contains all the irreducible A-invariant subspaces, this solution will also be valid for the original algebra A(A, B).

Now we turn to the problem of the existence of a two-dimensional invariant subspace. This problem can also be resolved rationally. Moreover, it is possible to find out (a) whether there exists an irreducible two-dimensional subspace and (b) whether there exists a reducible subspace of the same dimension. A rational procedure that solves the problem of an irreducible subspace of any given dimension has already been described in this section. The existence of a reducible subspace of dimension 2 can be verified (rationally) by using the condensed form. As is easy to see, the affirmative answer is obtained precisely in the case where the combined dimension of the commuting diagonal blocks in the condensed form is not less than 2. Thus, a reducible two-dimensional invariant subspace does not exist only in the following two cases: (1) A and B have no eigenvector in common; and (2) the condensed form of A and B is of the type

\begin{pmatrix} α & a \\ 0 & A_{22} \end{pmatrix}, \qquad \begin{pmatrix} β & b \\ 0 & B_{22} \end{pmatrix},

where α and β are scalars and the blocks A_{22} and B_{22} share no eigenvector. The solution of the two-dimensional subspace problem in the form of a step-by-step algorithm is given in [4, 5]. It can be seen that, in the case of subspaces of dimension at least 3, an attempt at a similar subdivision into rationally verifiable subcases does not succeed.

In conclusion, we indicate how the notion of the socle can be applied to computing invariant subspaces of a single matrix rationally (see also [7]). Consider the algebra A generated by a matrix A, i.e., the algebra of all polynomials in A. It is well known that the irreducible invariant subspaces of a complex matrix are necessarily of dimension 1.
Hence, the socle of a monogenic algebra A is a direct sum of one-dimensional invariant subspaces, i.e., a sum of the eigenspaces of A. Thus, despite the fact that, in general, the eigenvectors themselves cannot be found even in radicals, the subspace spanned by all of the eigenvectors of a matrix can be computed rationally.

Translated by Kh. D. Ikramov.

REFERENCES

1. D. Shemesh, "Common eigenvectors of two matrices," Linear Algebra Appl., 62, 11–18 (1984).
2. Kh. D. Ikramov, N. V. Savel’eva, and V. N. Chugunov, "On rational criteria for the existence of common eigenvectors or invariant subspaces," Programmirovanie, No. 3, 43–57 (1997).
3. Yu. A. Alpin, L. Elsner, and Kh. D. Ikramov, "On condensed forms for partially commuting matrices," Linear Algebra Appl., 306, 165–182 (2000).
4. Yu. A. Al’pin and Kh. D. Ikramov, "On a rational procedure for verifying the existence of a two-dimensional common invariant subspace for a given pair of matrices," Dokl. Ross. Akad. Nauk, 371, No. 4, 439–441 (2000).
5. Yu. A. Alpin, A. George, and Kh. D. Ikramov, "Solving the two-dimensional CIS problem by a rational algorithm," Linear Algebra Appl., 312, 115–123 (2000).
6. Yu. A. Al’pin and Kh. D. Ikramov, "On the solution of the generalized Djoković problem," Dokl. Ross. Akad. Nauk, 372, No. 5, 583–585 (2000).
7. Yu. A. Al’pin and Kh. D. Ikramov, "The sum of the eigenspaces of a matrix can be found by a rational computation," Dokl. Ross. Akad. Nauk, 371, No. 5, 583–584 (2000).
8. B. L. van der Waerden, Algebra [Russian translation], Nauka, Moscow (1979).
9. R. S. Pierce, Associative Algebras, Springer-Verlag (1982).
10. N. G. Chebotarev, Introduction to the Theory of Algebras [in Russian], Gostekhizdat, Moscow (1949).
11. Yu. A. Drozd and V. V. Kirichenko, Finite-Dimensional Algebras, Springer-Verlag, Berlin (1980).
12. N. H. McCoy, "On the characteristic roots of matrix polynomials," Bull. Amer. Math. Soc., 42, 592–600 (1936).
13. R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press (1985).
14. M. P. Drazin, J. W. Dungey, and K. W. Gruenberg, "Some theorems on commutative matrices," J. London Math. Soc., 26, 221–228 (1951).
15. V. V. Prasolov, Problems and Theorems in Linear Algebra, American Mathematical Society, Providence, RI (1994).
16. Yu. A. Al’pin and N. A. Koreshkov, "On simultaneous triangularization of matrices," Mat. Zametki, 68, No. 5, 648–652 (2000).
17. T. J. Laffey, "Simultaneous triangularization of matrices – low-rank cases and the nonderogatory case," Linear and Multilinear Algebra, 6, 269–305 (1978).
18. A. George and Kh. D. Ikramov, "Common invariant subspaces of two matrices," Linear Algebra Appl., 287, 171–179 (1999).
19. D. Ž. Djoković, "Unitary similarity of projectors," Aequationes Math., 42, 220–224 (1991).
20. Kh. D. Ikramov and V. N. Chugunov, "On algebras generated by pairs of adjoint matrices," Vestn. Mosk. Univ., Ser. XV Vychisl. Mat. Kibern., No. 1, 49–50 (1999).