Discrete Trigonometric Matrix Functions

Douglas R. Anderson
Department of Mathematics and Computer Science, Concordia College, Moorhead, MN 56562
[email protected]

Abstract: We explore a pair of matrix solutions to a certain discrete system which have various properties similar to the familiar continuous trigonometric functions, including basic identities and sum and difference of two angles formulas. We then examine separation properties of these matrices. An oscillation result is also given.
Key words: difference equations, discrete symplectic systems, generalized zeros.
AMS Subject Classification: 39A10.

In this paper we define the discrete trigonometric matrix functions. The continuous trigonometric matrix functions have been studied by Barrett [3], Etgen [5], and Reid [11]. We will assume the reader has had only a first course in difference equations. Elementary books on the subject include Elaydi [4], Jerri [7], Kelley and Peterson [8], and Mickens [10]. Some other books in the area include Agarwal [1], Ahlbrandt and Peterson [2], and Lakshmikantham and Trigiante [9].

Let Q(t) be an n × n Hermitian matrix function on the discrete interval [a, ∞) ≡ {a, a+1, ...}. We define the discrete sine and cosine matrix functions S(t) = S(t; a, Q) and C(t) = C(t; a, Q) to be the unique solution Y(t) = S(t), Z(t) = C(t) of the initial value problem

    Y(t+1) =  cos Q(t) Y(t) + sin Q(t) Z(t)
    Z(t+1) = −sin Q(t) Y(t) + cos Q(t) Z(t),            (1)

    Y(a) = 0,   Z(a) = I,                               (2)

where the coefficient matrices in (1) are defined by their Maclaurin series. It is then easy to verify that the pair Y(t) = C(t), Z(t) = −S(t) also solves (1), with Y(a) = I, Z(a) = 0. By checking initial conditions it is easy to characterize all solutions of (1), as in the following lemma.

Lemma 1  The pair Y(t), Z(t) solves (1) if and only if

    Y(t) =  C(t) Y(a) + S(t) Z(a)
    Z(t) = −S(t) Y(a) + C(t) Z(a).

The interested reader may note that the system (1) is what Ahlbrandt and Peterson [2] call a symplectic system. However, to keep things simple, we will not appeal to their development, but rather derive all of our results directly from the system (1). Note that if Q(t) is a diagonal matrix, then the pair

    S(t) = S(t; a, Q) = sin( Σ_{τ=a}^{t−1} Q(τ) )
                                                        (3)
    C(t) = C(t; a, Q) = cos( Σ_{τ=a}^{t−1} Q(τ) )
solves (1) and (2) on [a, ∞). (Here it is understood that a sum from a to a−1 is 0.)

Theorem 2  For any solution pair Y(t), Z(t) of (1),

    rank [ Y(t) ]
         [ Z(t) ]

is constant for t ≥ a.

Proof: Written as a matrix system, (1) becomes

    [ Y(t+1) ]   [  cos Q(t)  sin Q(t) ] [ Y(t) ]
    [ Z(t+1) ] = [ −sin Q(t)  cos Q(t) ] [ Z(t) ].      (4)

Since the coefficient matrix is nonsingular,

    ker [ Y(t+1) ]       [ Y(t) ]
        [ Z(t+1) ] = ker [ Z(t) ].

As t was arbitrary, the rank of the stacked matrix is constant for t ≥ a.  □
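As a quick numerical sanity check (not part of the paper), the following sketch, assuming NumPy is available, iterates the system (1) with initial conditions (2) for a constant diagonal Q and verifies both the closed form (3) and the rank constancy of Theorem 2:

```python
import numpy as np

def funm_h(Q, f):
    # apply a scalar function to a Hermitian matrix via its eigendecomposition
    w, V = np.linalg.eigh(Q)
    return (V * f(w)) @ V.conj().T

n, steps = 2, 8
Q = np.diag([0.4, 1.3])                      # a constant diagonal (hence Hermitian) Q(t)
S, C = np.zeros((n, n)), np.eye(n)           # initial conditions (2): S(a) = 0, C(a) = I
acc = np.zeros((n, n))                       # running sum of Q(tau)
ok3, ranks = True, set()
for _ in range(steps):
    cQ, sQ = funm_h(Q, np.cos), funm_h(Q, np.sin)
    S, C = cQ @ S + sQ @ C, -sQ @ S + cQ @ C     # one step of system (1)
    acc = acc + Q
    ok3 = ok3 and np.allclose(S, funm_h(acc, np.sin)) and np.allclose(C, funm_h(acc, np.cos))
    ranks.add(int(np.linalg.matrix_rank(np.vstack([S, C]))))
print(ok3, ranks)
```

The rank of the stacked matrix stays equal to n at every step, as Theorem 2 predicts.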
Lemma 3  If Y(t), Z(t) solve (1) for t ≥ a, then

    Y(t) = cos Q(t) Y(t+1) − sin Q(t) Z(t+1)
    Z(t) = sin Q(t) Y(t+1) + cos Q(t) Z(t+1).

Proof: First note that

    [  cos Q(t)  sin Q(t) ]^{−1}    [ cos Q(t)  −sin Q(t) ]
    [ −sin Q(t)  cos Q(t) ]      =  [ sin Q(t)   cos Q(t) ].

Using this and (4) we get the desired result.  □
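A small numerical check of Lemma 3 (a sketch assuming NumPy, not from the paper): step an arbitrary pair forward by (1), then recover it with the inverse rotation.

```python
import numpy as np

def funm_h(Q, f):
    # apply a scalar function to a Hermitian matrix via its eigendecomposition
    w, V = np.linalg.eigh(Q)
    return (V * f(w)) @ V.conj().T

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
Q = (A + A.T) / 2                            # a random real symmetric (Hermitian) Q(t)
cQ, sQ = funm_h(Q, np.cos), funm_h(Q, np.sin)

Y = rng.standard_normal((3, 3))              # arbitrary values of a solution pair at time t
Z = rng.standard_normal((3, 3))
Y1, Z1 = cQ @ Y + sQ @ Z, -sQ @ Y + cQ @ Z   # forward step (1)
# Lemma 3: the step is undone by the transposed rotation
back_Y = cQ @ Y1 - sQ @ Z1
back_Z = sQ @ Y1 + cQ @ Z1
print(np.allclose(back_Y, Y), np.allclose(back_Z, Z))
```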
Definition  For any n × n matrix pairs of solutions Y1(t), Z1(t) and Y2(t), Z2(t) of (1), define

    {Y1(t), Y2(t); Z1(t), Z2(t)} := Y1^*(t) Z2(t) − Z1^*(t) Y2(t),      (5)

where ^* denotes the conjugate transpose of the matrix.

Lemma 4  If Y1(t), Z1(t) and Y2(t), Z2(t) are n × n matrix pairs of solutions of (1), then, for t ≥ a,

    {Y1(t), Y2(t); Z1(t), Z2(t)} = A,

where A is an n × n constant matrix.

Proof: By (5),

    {Y1(t+1), Y2(t+1); Z1(t+1), Z2(t+1)}
      = Y1^*(t+1) Z2(t+1) − Z1^*(t+1) Y2(t+1)

      = [ Y1^*(t+1)  Z1^*(t+1) ] [  0  I ] [ Y2(t+1) ]
                                 [ −I  0 ] [ Z2(t+1) ]

      = [ Y1^*(t)  Z1^*(t) ] [ cos Q(t)  −sin Q(t) ] [  0  I ] [  cos Q(t)  sin Q(t) ] [ Y2(t) ]
                             [ sin Q(t)   cos Q(t) ] [ −I  0 ] [ −sin Q(t)  cos Q(t) ] [ Z2(t) ]

      = [ Y1^*(t)  Z1^*(t) ] [  0  I ] [ Y2(t) ]
                             [ −I  0 ] [ Z2(t) ]

      = Y1^*(t) Z2(t) − Z1^*(t) Y2(t)
      = {Y1(t), Y2(t); Z1(t), Z2(t)}.

As t was arbitrary, {Y1(t), Y2(t); Z1(t), Z2(t)} is a constant matrix.  □

Remark: In the following, tr A := Σ_{i=1}^{n} a_ii denotes the trace of an n × n matrix A, and ||A||_2 := ( Σ_{i,j=1}^{n} |a_ij|^2 )^{1/2} is the Euclidean norm of A.
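Lemma 4 can be checked numerically. The sketch below (assuming NumPy; not from the paper) steps two random solution pairs through a time-varying Hermitian Q(t) and confirms that the bracket (5) never changes:

```python
import numpy as np

def funm_h(Q, f):
    # apply a scalar function to a Hermitian matrix via its eigendecomposition
    w, V = np.linalg.eigh(Q)
    return (V * f(w)) @ V.conj().T

def bracket(Y1, Z1, Y2, Z2):
    # the bracket (5): {Y1, Y2; Z1, Z2} = Y1* Z2 - Z1* Y2
    return Y1.conj().T @ Z2 - Z1.conj().T @ Y2

rng = np.random.default_rng(2)
n = 3
# two solution pairs of (1) from random initial values
Y1, Z1 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
Y2, Z2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A0 = bracket(Y1, Z1, Y2, Z2)
constant = True
for t in range(6):
    M = rng.standard_normal((n, n))
    Q = (M + M.T) / 2                                   # Hermitian Q(t), varying with t
    cQ, sQ = funm_h(Q, np.cos), funm_h(Q, np.sin)
    Y1, Z1 = cQ @ Y1 + sQ @ Z1, -sQ @ Y1 + cQ @ Z1      # step both pairs by (1)
    Y2, Z2 = cQ @ Y2 + sQ @ Z2, -sQ @ Y2 + cQ @ Z2
    constant = constant and np.allclose(bracket(Y1, Z1, Y2, Z2), A0)
print(constant)
```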
Theorem 5  For all t ≥ a, the trigonometric matrix functions S(t), C(t) satisfy the following:

    S^*(t) S(t) + C^*(t) C(t) = I,          (6)
    S(t) S^*(t) + C(t) C^*(t) = I,          (7)
    S^*(t) C(t) = C^*(t) S(t),              (8)
    S(t) C^*(t) = C(t) S^*(t),              (9)
    ||S(t)||_2^2 + ||C(t)||_2^2 = n.        (10)
Proof: Since C(t), −S(t) and S(t), C(t) solve (1) with C(a) = I, S(a) = 0, Lemma 4 and (5) give that

    A1 = {C(t), S(t); −S(t), C(t)} = C^*(t) C(t) + S^*(t) S(t)

for all t. With t = a, we have that A1 = I by the initial conditions. Hence, (6) holds. Moreover, S(t), C(t) and S(t), C(t) are two solution pairs of (1) and (2), so again by Lemma 4 we have

    A2 = {S(t), S(t); C(t), C(t)} = {S(a), S(a); C(a), C(a)} = {0, 0; I, I}.

By (5), this last expression must be 0. But then S^*(t) C(t) − C^*(t) S(t) = 0, which is (8). Finally, (6) and (8) yield

    [ S^*(t)   C^*(t) ] [ S(t)   C(t) ]
    [ C^*(t)  −S^*(t) ] [ C(t)  −S(t) ]  =  I.

It follows that

    I  =  [ S(t)   C(t) ] [ S^*(t)   C^*(t) ]
          [ C(t)  −S(t) ] [ C^*(t)  −S^*(t) ]

       =  [ S(t)S^*(t) + C(t)C^*(t)    S(t)C^*(t) − C(t)S^*(t) ]
          [ C(t)S^*(t) − S(t)C^*(t)    C(t)C^*(t) + S(t)S^*(t) ],

so that (7) and (9) follow. To get (10), take the trace of (7) and use the fact that tr(AA^*) = ||A||_2^2.  □

Definition  For those values of t such that C^{−1}(t) exists, define the discrete tangent matrix function to be T(t) := C^{−1}(t) S(t). In the same way, for those values of t such that S^{−1}(t) exists, define the discrete cotangent matrix function to be COT(t) := S^{−1}(t) C(t).
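The five identities of Theorem 5 are easy to confirm numerically. A minimal sketch (assuming NumPy; the complex Hermitian Q(t) and the step count are illustrative choices, not from the paper):

```python
import numpy as np

def funm_h(Q, f):
    # apply a scalar function to a (complex) Hermitian matrix via eigendecomposition
    w, V = np.linalg.eigh(Q)
    return (V * f(w)) @ V.conj().T

rng = np.random.default_rng(3)
n = 3
S, C = np.zeros((n, n), complex), np.eye(n, dtype=complex)   # (2): S(a) = 0, C(a) = I
for t in range(5):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q = (M + M.conj().T) / 2                                 # complex Hermitian Q(t)
    cQ, sQ = funm_h(Q, np.cos), funm_h(Q, np.sin)
    S, C = cQ @ S + sQ @ C, -sQ @ S + cQ @ C                 # step (1)

Sh, Ch = S.conj().T, C.conj().T
ok6 = np.allclose(Sh @ S + Ch @ C, np.eye(n))                # (6)
ok7 = np.allclose(S @ Sh + C @ Ch, np.eye(n))                # (7)
ok8 = np.allclose(Sh @ C, Ch @ S)                            # (8)
ok9 = np.allclose(S @ Ch, C @ Sh)                            # (9)
ok10 = np.isclose(np.linalg.norm(S)**2 + np.linalg.norm(C)**2, n)  # (10)
print(ok6, ok7, ok8, ok9, ok10)
```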
Corollary 6  For any t ≥ a such that C(t) is nonsingular,

    T^*(t) = T(t),                          (11)
    I + T^2(t) = [C^*(t) C(t)]^{−1}.        (12)

Moreover, for any t ≥ a such that S(t) is nonsingular,

    COT^*(t) = COT(t),                      (13)
    COT^2(t) + I = [S^*(t) S(t)]^{−1}.      (14)

Proof: If C(t) is invertible, then T(t) is defined, and from (9) above we get that

    C(t) S^*(t)          = S(t) C^*(t)
    S^*(t)               = C^{−1}(t) S(t) C^*(t)
    S^*(t) C^{*−1}(t)    = C^{−1}(t) S(t)
    [C^{−1}(t) S(t)]^*   = C^{−1}(t) S(t)
    T^*(t)               = T(t).

From (7) we have the equalities

    C^*(t) + T(t) S^*(t)            = C^{−1}(t)
    I + T(t) S^*(t) C^{*−1}(t)      = C^{−1}(t) C^{*−1}(t)
    I + T(t) T^*(t)                 = C^{−1}(t) C^{*−1}(t)
    I + T^2(t)                      = [C^*(t) C(t)]^{−1}.

The COT(t) properties follow in the same way.  □
Lemma 7  For t ≥ a, S(t) and C(t) satisfy

    S(t+1) S^*(t) + C(t+1) C^*(t) = cos Q(t)
    S(t+1) C^*(t) − C(t+1) S^*(t) = sin Q(t).

Proof: Since Y(t) = S(t), Z(t) = C(t) solves system (1),

    S(t+1) =  cos Q(t) S(t) + sin Q(t) C(t)
    C(t+1) = −sin Q(t) S(t) + cos Q(t) C(t).        (15)

Right multiplication of the first line of (15) by S^*(t) and the second by C^*(t) gives

    S(t+1) S^*(t) = sin Q(t) C(t) S^*(t) + cos Q(t) S(t) S^*(t)
    C(t+1) C^*(t) = cos Q(t) C(t) C^*(t) − sin Q(t) S(t) C^*(t).

Adding these two equations and using (7) and (9) yields the first result. Next, use (15) again to obtain

    S(t+1) C^*(t)   = sin Q(t) C(t) C^*(t) + cos Q(t) S(t) C^*(t)
    −C(t+1) S^*(t)  = −cos Q(t) C(t) S^*(t) + sin Q(t) S(t) S^*(t).

As before, add the two equations to see that the second result holds with the aid of (7) and (9).  □

Corollary 8  The matrices C(t) and S(t) satisfy

    S(t) S^*(t+1) + C(t) C^*(t+1) = cos Q(t)
    C(t) S^*(t+1) − S(t) C^*(t+1) = sin Q(t)

for t ≥ a.

Proof: Take conjugate transposes of the system in the above lemma.  □
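Lemma 7's two product formulas can be verified step by step; a sketch assuming NumPy (the random symmetric Q(t) is an illustrative choice, not from the paper):

```python
import numpy as np

def funm_h(Q, f):
    # apply a scalar function to a Hermitian matrix via its eigendecomposition
    w, V = np.linalg.eigh(Q)
    return (V * f(w)) @ V.conj().T

rng = np.random.default_rng(4)
n = 3
S, C = np.zeros((n, n)), np.eye(n)       # (2): S(a) = 0, C(a) = I
lemma7_holds = True
for t in range(5):
    M = rng.standard_normal((n, n))
    Q = (M + M.T) / 2                    # real symmetric (Hermitian) Q(t)
    cQ, sQ = funm_h(Q, np.cos), funm_h(Q, np.sin)
    S1, C1 = cQ @ S + sQ @ C, -sQ @ S + cQ @ C    # S(t+1), C(t+1)
    # Lemma 7: products of consecutive values recover cos Q(t) and sin Q(t)
    lemma7_holds = lemma7_holds and np.allclose(S1 @ S.T + C1 @ C.T, cQ) \
                                and np.allclose(S1 @ C.T - C1 @ S.T, sQ)
    S, C = S1, C1
print(lemma7_holds)
```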
Definition  Let S(t; s), C(t; s) be the unique solution of (1) and (2) with a replaced by s, for any s in the discrete interval [a, ∞).

Theorem 9 (Difference Formulas)  If t, s ∈ [a, ∞), then

    S(t; s) = S(t; a) C^*(s; a) − C(t; a) S^*(s; a)
    C(t; s) = C(t; a) C^*(s; a) + S(t; a) S^*(s; a).        (16)

Proof: Set A(t) := S(t; a) C^*(s; a) − C(t; a) S^*(s; a) and B(t) := C(t; a) C^*(s; a) + S(t; a) S^*(s; a). First note that

    A(t+1) = S(t+1; a) C^*(s; a) − C(t+1; a) S^*(s; a)
           = [cos Q(t) S(t; a) + sin Q(t) C(t; a)] C^*(s; a)
             − [−sin Q(t) S(t; a) + cos Q(t) C(t; a)] S^*(s; a)
           = cos Q(t) [S(t; a) C^*(s; a) − C(t; a) S^*(s; a)]
             + sin Q(t) [C(t; a) C^*(s; a) + S(t; a) S^*(s; a)]
           = cos Q(t) A(t) + sin Q(t) B(t).

Also, A(s) = S(s; a) C^*(s; a) − C(s; a) S^*(s; a) = 0 by (9). Similarly,

    B(t+1) = −sin Q(t) A(t) + cos Q(t) B(t)

and

    B(s) = C(s; a) C^*(s; a) + S(s; a) S^*(s; a) = I

by (7). Thus A(t), B(t) solve the system (1) with Y(s) = 0, Z(s) = I. But the unique solution to this IVP is S(t; s), C(t; s); hence A(t) = S(t; s) and B(t) = C(t; s).  □

We call these difference formulas, for if Q(t) = q(t) is a scalar function, then

    s(t; a, q) = sin( Σ_{τ=a}^{t−1} q(τ) ),   c(t; a, q) = cos( Σ_{τ=a}^{t−1} q(τ) ),

and the theorem says that

    sin(α − β) = sin α cos β − cos α sin β
    cos(α − β) = cos α cos β + sin α sin β,

where α = Σ_{τ=a}^{t−1} q(τ) and β = Σ_{τ=a}^{s−1} q(τ).

Corollary 10 (Addition Formulas)  For t, s ∈ [a, ∞),

    S(t; a) = S(t; s) C(s; a) + C(t; s) S(s; a)
    C(t; a) = C(t; s) C(s; a) − S(t; s) S(s; a).

Proof: By (16), we have

    −S(t; s) S(s; a) = −S(t; a) C^*(s; a) S(s; a) + C(t; a) S^*(s; a) S(s; a)
    C(t; s) C(s; a)  =  C(t; a) C^*(s; a) C(s; a) + S(t; a) S^*(s; a) C(s; a).

Adding vertically on either side of the equality results in

    C(t; s) C(s; a) − S(t; s) S(s; a)
       = C(t; a) [C^*(s; a) C(s; a) + S^*(s; a) S(s; a)]
         + S(t; a) [S^*(s; a) C(s; a) − C^*(s; a) S(s; a)]
       = C(t; a)

by (6) and (8). The first equation follows with a similar method.  □

Corollary 11  For t, s ∈ [a, ∞), S(s; t) = −S^*(t; s) and C(s; t) = C^*(t; s).

Proof: Switch s and t in the first line of (16) to see that

    S(s; t) = S(s; a) C^*(t; a) − C(s; a) S^*(t; a)
            = [C(t; a) S^*(s; a) − S(t; a) C^*(s; a)]^*
            = −S^*(t; s)

by using (16) again. Likewise the relationship for C(s; t) can be derived.  □
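The difference formulas of Theorem 9 can be tested directly: compute S(t; s), C(t; s) by iterating (1) from time s, and compare with the combination of the base-a solutions. A sketch assuming NumPy (the time-varying symmetric Q(t) is an illustrative choice, not from the paper):

```python
import numpy as np

def funm_h(Q, f):
    # apply a scalar function to a Hermitian matrix via its eigendecomposition
    w, V = np.linalg.eigh(Q)
    return (V * f(w)) @ V.conj().T

rng = np.random.default_rng(5)
n, a, s, t = 3, 0, 3, 7
Qs = {u: (lambda M: (M + M.T) / 2)(rng.standard_normal((n, n))) for u in range(a, t)}

def SC(t_end, t_start):
    # S(t_end; t_start), C(t_end; t_start) by iterating (1) from conditions (2)
    S, C = np.zeros((n, n)), np.eye(n)
    for u in range(t_start, t_end):
        cQ, sQ = funm_h(Qs[u], np.cos), funm_h(Qs[u], np.sin)
        S, C = cQ @ S + sQ @ C, -sQ @ S + cQ @ C
    return S, C

S_ta, C_ta = SC(t, a)
S_sa, C_sa = SC(s, a)
S_ts, C_ts = SC(t, s)
ok_S = np.allclose(S_ts, S_ta @ C_sa.T - C_ta @ S_sa.T)   # first line of (16)
ok_C = np.allclose(C_ts, C_ta @ C_sa.T + S_ta @ S_sa.T)   # second line of (16)
print(ok_S, ok_C)
```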
Lemma 12  If C(t) is nonsingular for t = t0, t0+1, then

    ΔT(t0) = C^{−1}(t0+1) sin Q(t0) C^{*−1}(t0).        (17)

Similarly, if S(t) is nonsingular for t = t0, t0+1, then

    ΔCOT(t0) = −S^{−1}(t0+1) sin Q(t0) S^{*−1}(t0).     (18)

Proof: First note that because C(t) is nonsingular for t = t0, t0+1, T(t) exists and is Hermitian by (11) for t = t0, t0+1. Consequently, by the second equality in Lemma 7,

    C^{−1}(t0+1) S(t0+1) C^*(t0) − S^*(t0)  = C^{−1}(t0+1) sin Q(t0)
    T(t0+1) − S^*(t0) C^{*−1}(t0)           = C^{−1}(t0+1) sin Q(t0) C^{*−1}(t0)
    T(t0+1) − [C^{−1}(t0) S(t0)]^*          = C^{−1}(t0+1) sin Q(t0) C^{*−1}(t0),

and we have by (11) that ΔT(t0) = C^{−1}(t0+1) sin Q(t0) C^{*−1}(t0). In the same way we can derive the equation for ΔCOT(t0).  □
Corollary 13  If C(t) is nonsingular on [a, b], then

    C^{−1}(t+1) sin Q(t) C^{*−1}(t)                                 (19)

is Hermitian on [a, b−1], and

    T(t) = Σ_{τ=a}^{t−1} C^{−1}(τ+1) sin Q(τ) C^{*−1}(τ)            (20)

on [a, b]. If S(t) is nonsingular on [a+1, b], then

    S^{−1}(t+1) sin Q(t) S^{*−1}(t)                                 (21)

is Hermitian on [a+1, b−1], and

    COT(t) = −Σ_{τ=a+1}^{t−1} [ S^{−1}(τ+1) sin Q(τ) S^{*−1}(τ) ] + COT(a+1)     (22)

on [a+2, b].

Proof: As in (17) above, ΔT(t) = C^{−1}(t+1) sin Q(t) C^{*−1}(t); since the left-hand side is Hermitian on [a, b−1], so is the right. Now sum both sides of (17) from a to t−1 and use T(a) = S(a) = 0 to get

    T(t) = Σ_{τ=a}^{t−1} C^{−1}(τ+1) sin Q(τ) C^{*−1}(τ)

for t ∈ [a, b]. If S(t) is invertible on [a+1, b], we may likewise obtain the other results.  □
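Both the difference formula (17) and the summed form (20) can be checked numerically; a sketch assuming NumPy, where the small scale 0.1 on Q(t) is an illustrative choice (not from the paper) made so that C(t) stays nonsingular:

```python
import numpy as np

def funm_h(Q, f):
    # apply a scalar function to a Hermitian matrix via its eigendecomposition
    w, V = np.linalg.eigh(Q)
    return (V * f(w)) @ V.conj().T

rng = np.random.default_rng(6)
n, steps = 3, 4
S, C = np.zeros((n, n)), np.eye(n)        # (2): S(a) = 0, C(a) = I
acc = np.zeros((n, n))                    # running sum in (20)
ok17, ok20 = True, True
T_prev = np.zeros((n, n))                 # T(a) = 0 since S(a) = 0
for _ in range(steps):
    M = rng.standard_normal((n, n))
    Q = 0.1 * (M + M.T) / 2               # small Hermitian Q(t) keeps C(t) nonsingular
    cQ, sQ = funm_h(Q, np.cos), funm_h(Q, np.sin)
    S1, C1 = cQ @ S + sQ @ C, -sQ @ S + cQ @ C
    term = np.linalg.inv(C1) @ sQ @ np.linalg.inv(C.T)   # C^{-1}(t+1) sin Q(t) C^{*-1}(t)
    T_new = np.linalg.inv(C1) @ S1                       # T(t+1)
    ok17 = ok17 and np.allclose(T_new - T_prev, term)    # (17)
    acc = acc + term
    ok20 = ok20 and np.allclose(T_new, acc)              # (20)
    S, C, T_prev = S1, C1, T_new
print(ok17, ok20)
```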
Remark: Before examining the implications of invertibility and introducing the idea of a generalized zero for the S and C matrix functions, we have the following example, which illustrates that both S(t) and C(t) can be singular at the same points. Suppose S(t), C(t) solves (1) and (2) with a = 0 and

    Q(t) = [ π    0  ]
           [ 0   π/2 ].

Then by (3),

    S(t) = [ 0      0      ]        C(t) = [ cos(πt)     0      ]
           [ 0  sin(πt/2)  ]  and          [    0    cos(πt/2)  ].
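A quick numerical confirmation of this example (a sketch assuming NumPy; not from the paper), using the closed form (3) with a = 0:

```python
import numpy as np

odd_both_singular = []
for t in range(7):
    # closed form (3) with a = 0 and Q = diag(pi, pi/2)
    S = np.diag([np.sin(np.pi * t), np.sin(np.pi * t / 2)])
    C = np.diag([np.cos(np.pi * t), np.cos(np.pi * t / 2)])
    if t % 2 == 1:
        odd_both_singular.append(
            bool(np.isclose(np.linalg.det(S), 0) and np.isclose(np.linalg.det(C), 0)))
result = all(odd_both_singular)   # both matrices singular at every odd t checked
print(result)
```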
Finally, note that for all odd integers t, S(t) and C(t) are both singular.

Definition  Assume sin Q(t) is nonsingular on [a, ∞). We say C(t) has a generalized zero at a if C(a) is singular. Otherwise, C(t) has a generalized zero at t0, for t0 > a, provided det C(t0−1) ≠ 0 and either C(t0) is singular or C^{−1}(t0) sin Q(t0−1) C^{*−1}(t0−1) is nonsingular and not positive definite. In other words, if C(t) does not have a generalized zero at t0 and C(t0−1) is nonsingular, then

    C^{−1}(t0) sin Q(t0−1) C^{*−1}(t0−1) > 0.           (23)
(Here the inequality A > 0 signifies that A is an n × n positive definite Hermitian matrix.) In the same way, S(t) may have generalized zeros; note that by the initial conditions given in (2), S(t) has a generalized zero at a, while C(t) does not.

Theorem 14  If C(t) has no generalized zeros on [a, b], then S(t) has no generalized zeros on [a+1, b].

Proof: Since C(t) has no generalized zeros on [a, b], C^{−1}(t+1) sin Q(t) C^{*−1}(t) > 0 for t ∈ [a, b−1]; in particular, sin Q(t) and C(t) are nonsingular on [a, b]. By (20),

    T(t) = Σ_{τ=a}^{t−1} C^{−1}(τ+1) sin Q(τ) C^{*−1}(τ) > 0

on [a+1, b], and S(t) = C(t) T(t) on [a, b]. Using (11),

    S^*(t) (sin Q(t))^{−1} S(t+1)
       = T(t) C^*(t) (sin Q(t))^{−1} C(t+1) T(t+1)
       = T(t) C^*(t) (sin Q(t))^{−1} C(t+1) [ C^{−1}(t+1) sin Q(t) C^{*−1}(t) + T(t) ]
       = T(t) + T(t) C^*(t) (sin Q(t))^{−1} C(t+1) T(t)
       = T(t) + T^*(t) [ C^{−1}(t+1) sin Q(t) C^{*−1}(t) ]^{−1} T(t).

Hence S^*(t) (sin Q(t))^{−1} S(t+1) > 0 for t ∈ [a+1, b−1], so that S^{−1}(t+1) sin Q(t) S^{*−1}(t) > 0 on [a+1, b−1]; i.e., S(t) has no generalized zeros on [a+2, b]. Finally, as S(a) = 0 and S(a+1) = sin Q(a) is nonsingular, S(t) has no generalized zero at a+1 either.  □

Theorem 15  If C(t) has no generalized zeros on [t0, t1] for any t0 ≥ a with t1 > t0 + n, then S(t) has at most n singularities on [t0, t1]. Likewise, if S(t) has no generalized zeros on [t0, t1] for t0 > a, then C(t) has at most n singularities on [t0, t1].

Proof: Let α, β ∈ [t0, t1] with β > α. Sum (17) from α to β−1 to see that

    T(β) − T(α) = Σ_{τ=α}^{β−1} C^{−1}(τ+1) sin Q(τ) C^{*−1}(τ) > 0,

and thus

    T(β) > T(α).                                        (24)

Following Etgen [5], for each t ∈ [t0, t1] let λi(t) be the ith eigenvalue of T(t); the functions λi(t) are increasing over [t0, t1] by (24). The singularities of S(t) are precisely those of T(t) = C^{−1}(t) S(t), and since T(t) is Hermitian, the singularities of T(t) occur at the zeros of the λi(t). Since the λi(t) are increasing for all i = 1, ..., n, S(t) has at most n singularities in [t0, t1]. To see the second half of the theorem, sum (18) as above to note that COT(t) is decreasing on [t0, t1].  □

Theorem 16  If there is a t0 ≥ a such that S(t0; a), C(t0; a) are nonsingular, and if there is a t1 > t0 + n such that C(t; t0) has no generalized zeros on [t0, t1], then each of S(t; a), C(t; a) has at most n singularities on [t0, t1].

Proof: Since C(t; t0) has no generalized zeros on [t0, t1], C(t; t0) is nonsingular on [t0, t1]. Let

    A(t) := −S^*(t0; a) C^{−1}(t; t0) C(t; a)

for t ∈ [t0, t1]. By Corollary 10,

    A(t) = −S^*(t0; a) [C(t0; a) − C^{−1}(t; t0) S(t; t0) S(t0; a)]
         = −S^*(t0; a) C(t0; a) + S^*(t0; a) C^{−1}(t; t0) S(t; t0) S(t0; a).

Then taking the difference of both sides produces

    ΔA(t) = Δ[ S^*(t0; a) C^{−1}(t; t0) S(t; t0) S(t0; a) ] = S^*(t0; a) [ΔT(t; t0)] S(t0; a).

By (17),

    ΔA(t) = S^*(t0; a) [ C^{−1}(t+1; t0) sin Q(t) C^{*−1}(t; t0) ] S(t0; a),

so that ΔA(t) > 0 on [t0, t1−1], since S(t0; a) is nonsingular and C^{−1}(t+1; t0) sin Q(t) C^{*−1}(t; t0) > 0 on [t0, t1−1] by hypothesis. Hence A(t) is increasing on [t0, t1], and as in the proof of Theorem 15, the functions λi(t), where λi(t) is the ith eigenvalue of A(t), are increasing on [t0, t1]. By the definition of A(t), the zeros of these functions are exactly the singularities of C(t; a). Consequently, C(t; a) has at most n singularities on [t0, t1]. Letting B(t) := C^*(t0; a) C^{−1}(t; t0) S(t; a) and continuing as above gives the similar result for S(t; a).  □
Lemma 17  If there is an integer t0 ≥ a such that C(t) ≡ C(t; a) is nonsingular on [t0, ∞), then for

    U(t) ≡ U(t; t0) := C(t) Σ_{τ=t0}^{t−1} C^{−1}(τ+1) sin Q(τ) C^{*−1}(τ),        (25)

||U(t)||_2 is bounded on [t0, ∞).

Proof: Note that if C(t; a) were invertible on [a, ∞), then U(t; a) = S(t; a) ≡ S(t) by (20). Now set

    V(t) ≡ V(t; t0) := −S(t) Σ_{τ=t0}^{t−1} [ C^{−1}(τ+1) sin Q(τ) C^{*−1}(τ) ] + C^{*−1}(t)

for t in [t0, ∞). We first show for t ∈ [t0, ∞) that U(t), V(t) is a solution of (1). To this end, it is easy to see that U(t+1) = cos Q(t) U(t) + sin Q(t) V(t). Consider

    V(t+1) = −S(t+1) Σ_{τ=t0}^{t} [ C^{−1}(τ+1) sin Q(τ) C^{*−1}(τ) ] + C^{*−1}(t+1)

           = −S(t+1) Σ_{τ=t0}^{t−1} [ C^{−1}(τ+1) sin Q(τ) C^{*−1}(τ) ]
             − S(t+1) C^{−1}(t+1) sin Q(t) C^{*−1}(t) + C^{*−1}(t+1)

           = −sin Q(t) U(t) − cos Q(t) S(t) Σ_{τ=t0}^{t−1} [ C^{−1}(τ+1) sin Q(τ) C^{*−1}(τ) ]
             − S(t+1) C^{−1}(t+1) sin Q(t) C^{*−1}(t) + C^{*−1}(t+1).              (26)

By Lemma 3, C(t) = sin Q(t) S(t+1) + cos Q(t) C(t+1), so that

    C^*(t) = C^*(t+1) cos Q(t) + S^*(t+1) sin Q(t)
           = C^*(t+1) cos Q(t) + S^*(t+1) C(t+1) C^{−1}(t+1) sin Q(t)
           = C^*(t+1) cos Q(t) + C^*(t+1) S(t+1) C^{−1}(t+1) sin Q(t)

by (8). Solving for C^{*−1}(t+1) results in

    C^{*−1}(t+1) = cos Q(t) C^{*−1}(t) + S(t+1) C^{−1}(t+1) sin Q(t) C^{*−1}(t).   (27)

Substitute (27) into (26) for C^{*−1}(t+1) to see that V(t+1) = −sin Q(t) U(t) + cos Q(t) V(t). Therefore U(t), V(t) solves (1) on [t0, ∞); checking initial conditions gives that U(t) = S(t; t0) C^{*−1}(t0) and V(t) = C(t; t0) C^{*−1}(t0). By (10), ||S(t; t0)||_2 is bounded for all t ∈ [t0, ∞), so ||U(t)||_2 is bounded as well.  □
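The identity at the heart of Lemma 17's proof, U(t; t0) = S(t; t0) C^{*−1}(t0), can be checked numerically. A sketch assuming NumPy, where the small scale 0.1 on Q(t) is an illustrative choice (not from the paper) made so that C(t; a) stays nonsingular:

```python
import numpy as np

def funm_h(Q, f):
    # apply a scalar function to a Hermitian matrix via its eigendecomposition
    w, V = np.linalg.eigh(Q)
    return (V * f(w)) @ V.conj().T

rng = np.random.default_rng(7)
n, a, t0, t_max = 3, 0, 2, 7
Qs = [0.1 * (lambda M: (M + M.T) / 2)(rng.standard_normal((n, n))) for _ in range(t_max)]

def SC(t_end, t_start):
    # S(t_end; t_start), C(t_end; t_start) by iterating (1) from conditions (2)
    S, C = np.zeros((n, n)), np.eye(n)
    for u in range(t_start, t_end):
        cQ, sQ = funm_h(Qs[u], np.cos), funm_h(Qs[u], np.sin)
        S, C = cQ @ S + sQ @ C, -sQ @ S + cQ @ C
    return S, C

ok25 = True
acc = np.zeros((n, n))
for t in range(t0, t_max):
    S_t, C_t = SC(t, a)
    if t > t0:
        # extend the sum in (25) by the tau = t-1 term
        C_prev = SC(t - 1, a)[1]
        sQ = funm_h(Qs[t - 1], np.sin)
        acc = acc + np.linalg.inv(C_t) @ sQ @ np.linalg.inv(C_prev.T)
    U = C_t @ acc                            # U(t; t0) from (25)
    S_tt0 = SC(t, t0)[0]
    C_t0 = SC(t0, a)[1]
    ok25 = ok25 and np.allclose(U, S_tt0 @ np.linalg.inv(C_t0.T))
print(ok25)
```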
We now state and prove the following oscillation result.

Theorem 18  If Σ_{τ=a}^{∞} tr sin Q(τ) = ∞, then C(t) and S(t) have infinitely many generalized zeros on [b, ∞) for any integer b ≥ a.

Proof: Let b ≥ a and assume that Σ_{τ=a}^{∞} tr sin Q(τ) = ∞. Suppose that C(t) has only a finite number of generalized zeros on [b, ∞). Let c, where c ≥ b, be the last generalized zero of C(t) and fix t0 > c; then C(t) is nonsingular and C^{−1}(t+1) sin Q(t) C^{*−1}(t) > 0 on [t0, ∞) by definition. Thus sin Q(t) is invertible, and

    0 < C^{−1}(t0+1) sin Q(t0) C^{*−1}(t0)
      = C^{−1}(t0) C(t0) C^{−1}(t0+1) sin Q(t0) C^{*−1}(t0)
      = C^{−1}(t0) [ (sin Q(t0))^{−1} C(t0+1) C^{−1}(t0) ]^{−1} C^{*−1}(t0),

which yields

    (sin Q(t))^{−1} C(t+1) C^{−1}(t) > 0                (28)

on [t0, ∞). Now put

    W(t) := Σ_{τ=t0}^{t−1} C^{−1}(τ+1) sin Q(τ) C^{*−1}(τ);      (29)

then W(t) = W^*(t) by (19), and W(t) > 0 on [t0+1, ∞). Using (25), we have that

    U(t+1) = sin Q(t) C^{*−1}(t) + C(t+1) W(t)          (30)

and

    W(t) = C^{−1}(t) U(t)                               (31)

on [t0+1, ∞), since C(t) is invertible on [t0, ∞). Then (30) gives that

    U^*(t) (sin Q(t))^{−1} U(t+1) = U^*(t) [ C^{*−1}(t) + (sin Q(t))^{−1} C(t+1) W(t) ],

and by (31) we have that

    U^*(t) (sin Q(t))^{−1} U(t+1) = W^*(t) + U^*(t) [ (sin Q(t))^{−1} C(t+1) C^{−1}(t) ] U(t).

Using (28) and the fact that W^*(t) > 0, we have U^*(t) (sin Q(t))^{−1} U(t+1) > 0 on [t0+1, ∞); hence,

    U^{−1}(t+1) sin Q(t) U^{*−1}(t) > 0                 (32)

on [t0+1, ∞). Now let t1 be any integer greater than t0. Since W(t) > 0 on [t1, ∞), W^{−1}(t) = U^{−1}(t) C(t) > 0 on [t1, ∞). From (31) we get for all t ≥ t0 + 1 that

    I = U^{−1}(t) C(t) W(t),                            (33)

and, multiplying (30) by U^{−1}(t+1),

    I = U^{−1}(t+1) C(t+1) W(t) + U^{−1}(t+1) sin Q(t) C^{*−1}(t).      (34)

Using (33) and (34) results in

    0 = Δ[ U^{−1}(t) C(t) ] W(t) + U^{−1}(t+1) sin Q(t) C^{*−1}(t).     (35)

Since W^{−1}(t) = U^{−1}(t) C(t), from (35) we have

    ΔW^{−1}(t) = −U^{−1}(t+1) sin Q(t) C^{*−1}(t) W^{−1}(t)
               = −U^{−1}(t+1) sin Q(t) [C(t) W(t)]^{*−1},

using (19). Hence ΔW^{−1}(t) = −U^{−1}(t+1) sin Q(t) U^{*−1}(t) for all t in [t1, ∞). Thus,

    Σ_{τ=t1}^{t−1} ΔW^{−1}(τ) = W^{−1}(t) − W^{−1}(t1)
                              = −Σ_{τ=t1}^{t−1} U^{−1}(τ+1) sin Q(τ) U^{*−1}(τ)

on [t1, ∞), so that

    W^{−1}(t) = W^{−1}(t1) − Σ_{τ=t1}^{t−1} U^{−1}(τ+1) sin Q(τ) U^{*−1}(τ).       (36)

But, since W^{−1}(t) > 0 on [t1, ∞), it follows from (36) that

    tr W^{−1}(t1) − Σ_{τ=t1}^{t−1} tr[ U^{−1}(τ+1) sin Q(τ) U^{*−1}(τ) ] > 0.

Consequently,

    Σ_{τ=t1}^{t−1} tr[ U^{−1}(τ+1) sin Q(τ) U^{*−1}(τ) ] < tr W^{−1}(t1) < ∞       (37)

for all t ≥ t1. Using (32) on [t1, ∞) and the inequality tr A ≥ ||A||_2 for A > 0, we have

    tr[ U^{−1}(t+1) sin Q(t) U^{*−1}(t) ]
       ≥ || U^{−1}(t+1) sin Q(t) U^{*−1}(t) ||_2
       ≥ || sin Q(t) ||_2 · || U(t+1) ||_2^{−1} · || U^*(t) ||_2^{−1}
       ≥ (1/n) · tr sin Q(t) · || U(t+1) ||_2^{−1} · || U(t) ||_2^{−1}

after employing the formula ||A||_2 ≥ (1/n) tr A for an n × n matrix A. Therefore by (37),

    Σ_{τ=t1}^{∞} tr sin Q(τ) · || U(τ+1) ||_2^{−1} · || U(τ) ||_2^{−1} < ∞.

By Lemma 17, however, ||U(t)||_2 is bounded on [t1, ∞). As a result,

    Σ_{τ=t1}^{∞} tr sin Q(τ) < ∞,

a contradiction of the hypothesis of the theorem.

Now suppose S(t) has only a finite number of generalized zeros on [b, ∞); in other words, there exists t0 ≥ a such that S^{−1}(t+1) sin Q(t) S^{*−1}(t) > 0 on [t0, ∞). To again reach a contradiction, mimic the proof of Lemma 17 and the argument above with

    Û(t) := S(t) Σ_{τ=t0}^{t−1} S^{−1}(τ+1) sin Q(τ) S^{*−1}(τ)

and

    V̂(t) := C(t) Σ_{τ=t0}^{t−1} [ S^{−1}(τ+1) sin Q(τ) S^{*−1}(τ) ] + S^{*−1}(t);

Û(t), V̂(t) is the solution S(t; t0) S^{*−1}(t0), C(t; t0) S^{*−1}(t0).  □
Examples: The following are examples where the sum of the trace of sin Q(t) is infinite, and thus the matrix functions S, C have infinitely many generalized zeros.

(a) Consider the scalar function Q(t) ≡ π/2. Then

    Σ_{τ=0}^{∞} tr sin Q(τ) = Σ_{τ=0}^{∞} 1 = ∞

and sin Q(t) is invertible; both S(t) = sin(πt/2) and C(t) = cos(πt/2) have infinitely many singularities, and thus infinitely many generalized zeros, on [b, ∞) for any b > 0.
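Example (a) can be reproduced directly from the scalar recursion; a sketch assuming NumPy (not part of the paper):

```python
import numpy as np

# scalar recursion (1)-(2) with q(t) = pi/2 and a = 0
q = np.pi / 2
s, c = 0.0, 1.0                 # (2): s(0) = 0, c(0) = 1
closed_form = True
zeros_of_s = []
for t in range(1, 13):
    s, c = np.cos(q) * s + np.sin(q) * c, -np.sin(q) * s + np.cos(q) * c
    closed_form = closed_form and np.isclose(s, np.sin(np.pi * t / 2)) \
                              and np.isclose(c, np.cos(np.pi * t / 2))
    if np.isclose(s, 0, atol=1e-9):
        zeros_of_s.append(t)    # s vanishes at every even t
print(closed_form, zeros_of_s)
```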
(b) Consider the 2 × 2 matrix

    Q(t) = [ π/(3t)    0   ]
           [   0     1/t²  ];

note that sin(π/(3t)) ≥ 1/t and sin(1/t²) > 0 for t ≥ 2, so that sin Q(t) is nonsingular and Σ_{τ=2}^{∞} tr sin Q(τ) diverges. Now C(t), for example, looks like

    C(t) = [ cos( Σ_{τ=2}^{t−1} π/(3τ) )              0                ]
           [              0                cos( Σ_{τ=2}^{t−1} 1/τ² )  ],

and C^{−1}(t+1) sin Q(t) C^{*−1}(t) has

    λ1(t) = sin(π/(3t)) / [ cos( Σ_{τ=2}^{t} π/(3τ) ) · cos( Σ_{τ=2}^{t−1} π/(3τ) ) ]

and

    λ2(t) = sin(1/t²) / [ cos( Σ_{τ=2}^{t} 1/τ² ) · cos( Σ_{τ=2}^{t−1} 1/τ² ) ]

as its eigenvalues. Since 0