Some commutation theorems in finite-dimensional vector spaces

Alen Alexanderian∗

∗ The University of Texas at Austin, USA. E-mail: [email protected]. Last revised: July 12, 2013.

Abstract. We discuss some useful commutation theorems on finite-dimensional vector spaces.

1 Basic notation and definitions

In what follows, V denotes a finite-dimensional vector space over the field of real numbers. We denote by Lin(V) the space of linear transformations mapping V into V, and by GL(V) the group of invertible transformations in Lin(V).

1.1 Classes of matrices

When V = R^n, we use the following notation for various classes of linear transformations on V:

• GL(n) is the group of invertible n × n matrices with real entries.
• S(n) is the space of symmetric n × n matrices with real entries.
• Dev(n) is the subspace of S(n) consisting of real symmetric matrices with vanishing trace.
• Sph(n) is the subspace of S(n) consisting of matrices of the form αI, where α ∈ R is a constant and I denotes the identity matrix.
• O(n) is the group of orthogonal n × n matrices with real entries.

Given any A ∈ S(n), let D = A − (1/n) tr(A) I and S = (1/n) tr(A) I. Note that A = D + S with D ∈ Dev(n) and S ∈ Sph(n); this leads to the following observation:

Remark 1.1. With the usual inner product on GL(n) given by

⟨A, B⟩ = tr(AB^T),    A, B ∈ GL(n),

S(n) is the orthogonal direct sum of Dev(n) and Sph(n).
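As a quick numerical illustration of Remark 1.1, the following NumPy sketch (our own; none of the variable names come from the note) splits a symmetric matrix into its deviatoric and spherical parts and checks that they are orthogonal under the trace inner product.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4

    # A random symmetric matrix A in S(n).
    M = rng.standard_normal((n, n))
    A = (M + M.T) / 2

    # Spherical and deviatoric parts: A = D + S.
    S = (np.trace(A) / n) * np.eye(n)   # S in Sph(n)
    D = A - S                           # D in Dev(n): tr(D) = 0

    assert np.isclose(np.trace(D), 0.0)
    assert np.allclose(A, D + S)
    # Orthogonality with respect to <A, B> = tr(A B^T):
    assert np.isclose(np.trace(D @ S.T), 0.0)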

1.2 Group representations

Definition 1.2. Let (G, ·) be a group. A representation of G is a finite-dimensional vector space V along with a mapping ρ : G → GL(V) satisfying,

ρ(g1 · g2) = ρ(g1) ◦ ρ(g2)    for all g1, g2 ∈ G.


In other words, ρ is a group homomorphism from G into GL(V). We use the notation (V, ρ) to denote a representation of G. In what follows, when talking about a group (G, ·), if the group operation · is clear from the context, we will omit it and refer to the group simply as G.

Example 1.3. (R^n, 1_{O(n)}), where 1_{O(n)} is the identity map on O(n), is a representation of O(n).

Example 1.4. Define the mapping ρ : O(n) → GL(S(n)) as follows: for every Q ∈ O(n),

ρ(Q)E = QEQ^T,    ∀E ∈ S(n).

Then, (S(n), ρ) is a representation of O(n).
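Example 1.4 can be checked numerically; the sketch below (with helper names of our own choosing) verifies that ρ(Q) maps S(n) into S(n) and that ρ(Q1 Q2) = ρ(Q1) ◦ ρ(Q2).

    import numpy as np

    rng = np.random.default_rng(1)
    n = 3

    def random_orthogonal(n):
        # A random Q in O(n) from the QR factorization of a Gaussian matrix.
        Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
        return Q

    def rho(Q, E):
        # The action of Example 1.4: rho(Q) E = Q E Q^T.
        return Q @ E @ Q.T

    Q1, Q2 = random_orthogonal(n), random_orthogonal(n)
    M = rng.standard_normal((n, n))
    E = (M + M.T) / 2                  # an element of S(n)

    assert np.allclose(rho(Q1, E), rho(Q1, E).T)              # rho(Q1) maps S(n) into S(n)
    assert np.allclose(rho(Q1 @ Q2, E), rho(Q1, rho(Q2, E)))  # rho(Q1 Q2) = rho(Q1) o rho(Q2)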

1.3 Invariant subspaces and irreducible representations

Definition 1.5. Let V be a finite-dimensional vector space and let A : V → V be a linear transformation. We say a subspace U ⊆ V is invariant under A if

AU ⊆ U.

In finite dimensions, if a subspace U is invariant under an invertible linear transformation A, then it is simple to show that AU = U. That is, we have

AU ⊆ U ⇐⇒ AU = U.

Furthermore, we have the following simple result.

Lemma 1.6. Let V be a finite-dimensional inner product space, and let A : V → V be a linear transformation. Suppose A has an invariant subspace U. Then,

1. If A is invertible, then A^{-1} leaves U invariant also.
2. If A is orthogonal, then A leaves U^⊥ invariant also.

Proof. Proof of (1) is trivial: since A is invertible, AU = U, and hence A^{-1}U = U. To show (2), note that since A is orthogonal, A^{-1} = A^T, and thus by (1), A^T leaves U invariant. Consequently, if we let v ∈ U^⊥ be fixed but arbitrary, then for all u ∈ U,

⟨u, Av⟩ = ⟨A^T u, v⟩ = 0.

Therefore, A leaves U^⊥ invariant also.
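As a small sanity check of Lemma 1.6(2), the following sketch (an example we construct by hand, not taken from the note) builds an orthogonal map on R^3 with a prescribed two-dimensional invariant subspace U and confirms that U^⊥ is invariant as well.

    import numpy as np

    rng = np.random.default_rng(2)
    t = 0.7

    # A block-orthogonal map that leaves span{e1, e2} invariant ...
    B = np.array([[np.cos(t), -np.sin(t), 0.0],
                  [np.sin(t),  np.cos(t), 0.0],
                  [0.0,        0.0,      -1.0]])
    # ... conjugated by a random orthogonal W, so that the invariant subspace
    # U = span{W e1, W e2} is not a coordinate subspace.
    W, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    A = W @ B @ W.T

    U = W[:, :2]        # orthonormal basis of U
    Uperp = W[:, 2:]    # orthonormal basis of U^perp

    assert np.allclose(A @ A.T, np.eye(3))        # A is orthogonal
    assert np.allclose(Uperp.T @ A @ U, 0.0)      # A U is contained in U
    assert np.allclose(U.T @ A @ Uperp, 0.0)      # A U^perp is contained in U^perp (Lemma 1.6(2))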

Next, we introduce the notion of a subspace invariant under a group.

Definition 1.7. Let G be a group with representation (V, ρ). A subspace U of V is said to be invariant under G if

ρ(g)U ⊆ U,    ∀g ∈ G.

Definition 1.8. We say that the representation (V, ρ) of a group G is irreducible if the only subspaces of V invariant under G are {0} and V. In other words, (V, ρ) is an irreducible representation of G if for any subspace U ⊆ V,

(ρ(g)U ⊆ U,  ∀g ∈ G)  =⇒  U = {0} or U = V.


2 Self-adjoint linear transformations and the Spectral Theorem

Let us collect some classical results regarding self-adjoint linear maps on finite-dimensional inner product spaces. Recall that a linear mapping A on an inner product space (V, ⟨·, ·⟩) is called self-adjoint if

⟨Ax, y⟩ = ⟨x, Ay⟩,    ∀x, y ∈ V.

A standard reference for the following is [2].

Theorem 2.1 (Spectral Theorem). Let A be a self-adjoint linear transformation on an n-dimensional inner product space. Then, there exist real numbers α1, . . . , αr and perpendicular projections E1, . . . , Er (with 0 < r ≤ n) such that

1. The αj are pairwise distinct.
2. The Ej are non-zero and pairwise orthogonal.
3. Σ_{j=1}^r Ej = I.
4. Σ_{j=1}^r αj Ej = A.

Note that in the above theorem, the αj are eigenvalues of A and the Ej are perpendicular projections to the corresponding eigenspaces,

Vj = {x ∈ V : Ax = αj x}.

Remark 2.2. The representation A = Σ_j αj Ej in the above theorem is called the spectral form of A.

The following result is not used directly in this note, but is still worth mentioning.

Theorem 2.3. Let V be a finite-dimensional inner product space and B ∈ Lin(V). If Σ_{j=1}^r αj Ej is the spectral form of a self-adjoint linear transformation A ∈ Lin(V), then AB = BA if and only if Ej B = B Ej for j = 1, . . . , r.
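As a concrete illustration of Theorems 2.1 and 2.3, the following NumPy sketch (our own construction) computes the spectral form of a symmetric matrix, verifies the two sums in the Spectral Theorem, and checks that a B which commutes with A also commutes with every Ej.

    import numpy as np

    rng = np.random.default_rng(0)

    # A symmetric matrix with a repeated eigenvalue, for illustration.
    Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
    A = Q @ np.diag([2.0, 2.0, -1.0, 5.0]) @ Q.T

    # Spectral form: group the eigenvectors by (numerically) equal eigenvalues.
    evals, evecs = np.linalg.eigh(A)
    alphas, projections = [], []
    used = np.zeros(len(evals), dtype=bool)
    for j, a in enumerate(evals):
        if used[j]:
            continue
        idx = np.isclose(evals, a)
        used |= idx
        V = evecs[:, idx]              # orthonormal basis of the eigenspace for alpha = a
        alphas.append(a)
        projections.append(V @ V.T)    # perpendicular projection E_j onto that eigenspace

    # The two identities in the Spectral Theorem.
    assert np.allclose(sum(projections), np.eye(4))
    assert np.allclose(sum(a * E for a, E in zip(alphas, projections)), A)

    # Theorem 2.3: this B commutes with A (it is a polynomial in A), hence with every E_j.
    B = A @ A + 3.0 * A + np.eye(4)
    assert np.allclose(A @ B, B @ A)
    assert all(np.allclose(E @ B, B @ E) for E in projections)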

3 Commutation Theorems

The first two theorems below can be found in [1].

Theorem 3.1 (Commutation Theorem I). Let V be a finite-dimensional vector space, and let A, B ∈ Lin(V). If AB = BA, then B leaves the eigenspaces of A invariant.

Proof. Let α be an eigenvalue of A, and let

Vα = {x ∈ V : Ax = αx}

be the corresponding eigenspace. For any x ∈ Vα,

ABx = BAx = B(αx) = αBx.

That is, BVα ⊆ Vα.

Theorem 3.2 (Commutation Theorem II). Let V be a finite-dimensional inner product space, let A be a self-adjoint linear map on V, and suppose B leaves the eigenspaces of A invariant. Then, BA = AB.


Proof. Let Σ_{j=1}^r αj Ej be the spectral form of A. For each x ∈ V, let xj = Ej x and note

x = (Σ_j Ej) x = Σ_j Ej x = Σ_j xj.

Now, since B leaves the eigenspaces of A invariant, we know that ABxj = αj Bxj. Next, let x ∈ V be arbitrary:

BAx = BA Σ_j xj = B Σ_j Axj = B Σ_j αj xj = Σ_j αj Bxj = Σ_j ABxj = ABx.
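Both commutation theorems can be checked numerically. The sketch below (our own construction) first takes a B that commutes with a symmetric A and verifies that it maps each eigenspace of A into itself (Theorem 3.1), and then builds a B2 that acts block-wise within the eigenspaces and verifies that it commutes with A (Theorem 3.2).

    import numpy as np

    rng = np.random.default_rng(3)
    n = 5

    # A symmetric A with a repeated eigenvalue, so it has a nontrivial eigenspace.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = Q @ np.diag([1.0, 1.0, 1.0, 4.0, -2.0]) @ Q.T
    evals, evecs = np.linalg.eigh(A)
    distinct = np.unique(np.round(evals, 8))

    # Theorem 3.1: B commutes with A (a polynomial in A), so it leaves each eigenspace invariant.
    B = 2.0 * A @ A - A + 3.0 * np.eye(n)
    for a in distinct:
        V = evecs[:, np.isclose(evals, a)]     # orthonormal basis of the eigenspace V_a
        P = V @ V.T                            # perpendicular projection onto V_a
        assert np.allclose((np.eye(n) - P) @ B @ P, 0.0)   # B V_a is contained in V_a

    # Theorem 3.2: a B2 acting block-wise within the eigenspaces commutes with A.
    blocks = []
    for a in distinct:
        V = evecs[:, np.isclose(evals, a)]
        k = V.shape[1]
        blocks.append(V @ rng.standard_normal((k, k)) @ V.T)   # arbitrary action inside V_a
    B2 = sum(blocks)
    assert np.allclose(A @ B2, B2 @ A)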

We also have the following useful result, which is a consequence of Commutation Theorem I.

Theorem 3.3 (Commutation Theorem III). Suppose G is a group with an irreducible representation (V, ρ), where V is an inner product space, and let A : V → V be a self-adjoint linear transformation. If Aρ(g) = ρ(g)A for all g ∈ G, then A = αI, where α ∈ R is a constant.

Proof. Using the Spectral Theorem, we know A has the spectral form

A = Σ_j αj Ej.

Since Aρ(g) = ρ(g)A for every g ∈ G, we know by Commutation Theorem I that for every g ∈ G, ρ(g) leaves the eigenspaces of A invariant. Thus, the eigenspaces Vj of A are invariant under G. Therefore, since the representation (V, ρ) is irreducible and each Vj is non-zero, V1 = V2 = · · · = Vr = V; since the αj are distinct, this forces r = 1. That is, every vector in V is an eigenvector of A with the same eigenvalue:

Ax = αx,    ∀x ∈ V.
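As a concrete instance of Commutation Theorem III, consider the standard (irreducible) orthogonal representation of SO(2) on R^2. The sketch below (our own construction) imposes the commutation constraints on a symmetric 2 × 2 matrix for several sampled rotation angles; the resulting solution space is one-dimensional and spanned by the identity.

    import numpy as np

    rng = np.random.default_rng(4)

    def rot(t):
        # Rotation by angle t: the standard orthogonal representation of SO(2) on R^2.
        return np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]])

    # Basis of the symmetric 2 x 2 matrices; write A = a*S1 + b*S2 + c*S3.
    sym_basis = [np.array([[1.0, 0.0], [0.0, 0.0]]),
                 np.array([[0.0, 1.0], [1.0, 0.0]]),
                 np.array([[0.0, 0.0], [0.0, 1.0]])]

    # Stack the linear constraints A R(t) - R(t) A = 0 for several sampled angles.
    rows = []
    for t in rng.uniform(0.0, 2.0 * np.pi, size=8):
        R = rot(t)
        rows.append(np.column_stack([(S @ R - R @ S).ravel() for S in sym_basis]))
    M = np.vstack(rows)

    # The null space of M consists of the symmetric A commuting with every sampled rotation.
    _, s, Vt = np.linalg.svd(M)
    null_dim = int(np.sum(s < 1e-10))
    A = sum(c * S for c, S in zip(Vt[-1], sym_basis))

    print(null_dim)       # 1: a one-dimensional family ...
    print(A / A[0, 0])    # ... which is a multiple of the identity, as Theorem 3.3 predicts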

We also have the following result.

Theorem 3.4 (Commutation Theorem IV). Suppose G is a group with a representation (V, ρ), where V is an n-dimensional inner product space. Let A : V → V be a self-adjoint linear transformation such that Aρ(g) = ρ(g)A for all g ∈ G. If A has an eigenpair (α, v) such that span{ρ(g)v : g ∈ G} = V, then A = αI.

Proof. By assumption, there exist g1, . . . , gn in G such that

B = {ρ(g1)v, . . . , ρ(gn)v}

is a basis for V. Next, applying Commutation Theorem I gives that the eigenspaces of A are invariant under G; in particular, the eigenspace Vα is invariant under G. Therefore, B ⊂ Vα, and thus Vα = V. That is, Ax = αx for every x ∈ V.

Let us get back to Commutation Theorem III. A natural question is the following: is the converse of the theorem true? In other words, is it true that if only multiples of the identity commute with ρ(g) for every g ∈ G, then (V, ρ) must be irreducible? We address that question in the next section.


4 Orthogonal representations and their decomposition

Definition 4.1 (Orthogonal representation). Let G be a group with a representation (V, ρ), where V is a finite-dimensional inner product space. We say (V, ρ) is an orthogonal representation if each ρ(g) is an orthogonal linear operator on V: for every g ∈ G,

⟨ρ(g)u, ρ(g)v⟩ = ⟨u, v⟩,    ∀u, v ∈ V.

Note that for an orthogonal representation (V, ρ) of a group G,

ρ(g)^T = ρ(g)^{-1} = ρ(g^{-1})

for every g ∈ G.

Lemma 4.2. Let G be a group with an orthogonal representation (V, ρ). Suppose U is a proper subspace of V which is invariant under G. Then, U^⊥ is also invariant under G.

Proof. Let g ∈ G and v ∈ U^⊥ be fixed but arbitrary. Then,

⟨ρ(g)v, u⟩ = ⟨v, ρ(g)^{-1}u⟩ = ⟨v, ρ(g^{-1})u⟩ = 0,    ∀u ∈ U;

that is, ρ(g)v ∈ U^⊥ also.

Theorem 4.3 (Decomposition of orthogonal representations). Let G be a group with a reducible orthogonal representation (V, ρ), where V is an n-dimensional inner product space. Then, there exist pairwise orthogonal subspaces Ui, i = 1, . . . , r, of V (r ≤ n) such that

V = U1 ⊕ U2 ⊕ · · · ⊕ Ur,

and each (Ui, ρ) is an irreducible representation of G.

Proof. Since (V, ρ) is a reducible representation, there exists a non-zero proper subspace U1 of V which is invariant under G. Without loss of generality (by reducing U1 further if necessary), we may assume (U1, ρ) is irreducible. By Lemma 4.2, we know U1^⊥ is invariant under G; if (U1^⊥, ρ) is irreducible then we are done, otherwise we continue reducing U1^⊥ in the same way, and the process ends in a finite number of steps since V is finite-dimensional.

Finally, we have the converse to Commutation Theorem III:

Theorem 4.4. Let G be a group with an orthogonal representation (V, ρ). If the only self-adjoint transformations that commute with ρ(g) for every g ∈ G are constant multiples of the identity, then (V, ρ) is irreducible.

Proof. Assume the hypotheses of the theorem hold, but suppose to the contrary that (V, ρ) is reducible. Then, by Theorem 4.3, there exists an orthogonal decomposition of V,

V = U1 ⊕ U2 ⊕ · · · ⊕ Ur,

such that each (Ui, ρ) is irreducible (with r ≥ 2, since (V, ρ) is reducible). Now, let a1, . . . , ar be distinct real numbers, and define Ai : Ui → Ui by

Ai x = ai x,    x ∈ Ui.

Let Ei be the perpendicular projection into Ui. Given x ∈ V, write xi = Ei x, and note that we can write

x = Σ_i Ei x = Σ_i xi.



Finally, define the mapping A : V → V by Ax = Σ_i Ai xi, that is, A = Σ_i ai Ei. Then, A is a self-adjoint (essentially block-diagonal) linear transformation with real eigenvalues ai and eigenspaces Ui, i = 1, . . . , r. Now, for every g ∈ G, ρ(g) leaves each eigenspace Ui of A invariant, and therefore Aρ(g) = ρ(g)A by Commutation Theorem II. That is, we have a self-adjoint linear map on V which is not a multiple of the identity (as r ≥ 2 and the ai are distinct), but which commutes with ρ(g) for every g ∈ G; this, however, contradicts the hypotheses of the theorem.

Remark 4.5. The above theorem provides a theoretical means to test whether a group representation is irreducible.
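Remark 4.5 can be turned into a simple numerical test, sketched below with NumPy (the function name and the example representation are our own): compute the dimension of the space of self-adjoint maps commuting with a sampling of the ρ(g). A dimension of one is what Theorem 4.4 requires for irreducibility, while a dimension greater than one exhibits a non-scalar self-adjoint map in the commutant, so the representation is reducible by Commutation Theorem III. The example below is the reducible orthogonal representation of SO(2) on R^3 that rotates the first two coordinates and fixes the third; the test reports dimension 2.

    import numpy as np

    rng = np.random.default_rng(5)

    def symmetric_commutant_dim(reps, tol=1e-10):
        # Dimension of {self-adjoint A : A rho(g) = rho(g) A for every sampled rho(g)}.
        n = reps[0].shape[0]
        basis = []                      # basis of the symmetric n x n matrices
        for i in range(n):
            for j in range(i, n):
                S = np.zeros((n, n))
                S[i, j] = S[j, i] = 1.0
                basis.append(S)
        # Linear constraints S R - R S = 0 for every sampled representation matrix R.
        rows = [np.column_stack([(S @ R - R @ S).ravel() for S in basis]) for R in reps]
        s = np.linalg.svd(np.vstack(rows), compute_uv=False)
        return len(basis) - int(np.sum(s > tol))

    def rho(t):
        # A reducible orthogonal representation of SO(2) on R^3:
        # rotate in the e1-e2 plane and fix e3.
        R = np.eye(3)
        R[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
        return R

    reps = [rho(t) for t in rng.uniform(0.0, 2.0 * np.pi, size=8)]
    print(symmetric_commutant_dim(reps))   # 2 > 1, so the representation is reducible

In practice, one would apply such a test to a set of generators of G (or a sufficiently rich sample of group elements), since commuting with those matrices already forces commutation with everything they generate.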

References

[1] M. E. Gurtin, An Introduction to Continuum Mechanics, Academic Press, 1981.

[2] P. Halmos, Finite-Dimensional Vector Spaces, D. Van Nostrand Company, Inc., 1958.
