NABLA DYNAMIC EQUATIONS ON TIME SCALES DOUGLAS ANDERSON, JOHN BULLOCK1 AND LYNN ERBE1 , ALLAN PETERSON1 , HOAINAM TRAN1
1. preliminaries about time scales The following definitions can be found in Bohner and Peterson [4] and Agarwal and Bohner [2]. A time scale T is defined to be any closed subset of R. Then the forward and backwards jump operators σ, ρ : T → T σ(t) = inf{s ∈ T : s > t}
and
ρ(t) = sup{s ∈ T : s < t}
(supplemented by inf ∅ := sup T and sup ∅ := inf T) are well defined. A point t ∈ T is called left-dense if t > inf T and ρ(t) = t, left-scattered if ρ(t) < t, right-dense if t < sup T and σ(t) = t, right-scattered if σ(t) > t. If T has a right-scattered minimum m, define Tκ := T − {m}; otherwise, set Tκ = T. The backwards graininess ν : Tκ → R+ 0 is defined by ν(t) = t − ρ(t). For f : T → R and t ∈ Tκ , define the nabla derivative [3] of f at t, denoted f ∇ (t), to be the number (provided it exists) with the property that given any ε > 0, there is a neighborhood U of t such that |f (ρ(t)) − f (s) − f ∇ (t)[ρ(t) − s]| ≤ ε|ρ(t) − s| for all s ∈ U . For T = R, we have f ∇ = f 0 , the usual derivative, and for T = Z we have the backward difference operator, f ∇ (t) = ∇f (t) := f (t) − f (t − 1). A function f : T → R is left-dense continuous or ld-continuous provided it is continuous at left-dense points in T and its right-sided limits exist (finite) at right-dense points in T. If T = R, then f is ld-continuous if and only if f is continuous. If T = Z, then any function is ld-continuous. It is known [4] that if f is ld-continuous, then there is a function F (t) such that F ∇ (t) = f (t). In this case, we define Z b f (t)∇t = F (b) − F (a). a
Remark 1. The following theorems delineate several properties of the nabla derivative; they are found in [3] or [4, p 331-333]. Theorem 2. Assume f : T → R is a function and let t ∈ Tκ . Then we have the following: (i) If f is nabla differentiable at t, then f is continuous at t. (ii) If f is continuous at t and t is left-scattered, then f is nabla differentiable at t with f ∇ (t) =
f (t) − f (ρ(t)) . ν(t)
2
PETERSON AND ANDERSON, ET AL
(iii) If t is left-dense, then f is nabla differentiable at t iff the limit lim
s→t
f (t) − f (s) t−s
exists as a finite number. In this case f ∇ (t) = lim
s→t
f (t) − f (s) . t−s
(iv) If f is nabla differentiable at t, then f ρ (t) = f (t) − ν(t)f ∇ (t). Theorem 3. Assume f, g : T → R are nabla differentiable at t ∈ Tκ . Then: (i) The sum f + g : T → R is nabla differentiable at t with (f + g)∇ (t) = f ∇ (t) + g ∇ (t). (ii) The product f g : T → R is nabla differentiable at t, and we get the product rules (f g)∇ (t) = f ∇ (t)g(t) + f ρ (t)g ∇ (t) = f (t)g ∇ (t) + f ∇ (t)g ρ (t). (iii) If g(t)g ρ (t) 6= 0, then
f g
is nabla differentiable at t, and we get the quotient rule
∇ f f ∇ (t)g(t) − f (t)g ∇ (t) (t) = . g g(t)g ρ (t) (iv) If f and f ∇ are continuous, then Z t ∇ Z t = f (ρ(t), t) + f ∇ (t, s)∇s. f (t, s)∇s a
a
2. introduction to nabla equations Definition 4. The function p is ν-regressive if 1 − ν(t)p(t) 6= 0 for all t ∈ Tκ . Define the ν-regressive class of functions on Tκ to be Rν = {p : T → R | p is ld continuous and ν-regressive}. If p, q ∈ Rν , then (p ⊕ν q)(t) := p(t) + q(t) − p(t)q(t)ν(t) for all t ∈ Tκ . Theorem 5. The set {Rν , ⊕ν } is an Abelian group.
NABLA DYNAMIC EQUATIONS
3
Proof. Suppose p and q are in Rν . To prove that we have closure under the addition ⊕ν , note first that p⊕ν q is also ld-continuous. It only remains to show that p⊕ν q is ν-regressive on Tκ , but this follows from 1 − ν(p ⊕ν q) = = = 6=
1 − ν(p + q − pqν) 1 − νp − νq + νpqν (1 − νp)(1 − νq) 0.
Hence Rν is closed under the addition ⊕ν . Since 0 ∈ Rν and p ⊕ν 0 = 0 ⊕ν p = p, 0 is the additive identity for ⊕ν . To find the additive inverse of p ∈ Rν under ⊕ν , we must solve p ⊕ν w = 0 for w. Hence we must solve p + w − pwν = 0 for w. Thus w := −
p ∈ Rν 1 − pν
is the additive inverse of p under the addition ⊕ν . A straightforward calculation verifies that associativity holds. Hence (Rν , ⊕ν ) is a group. Since p ⊕ν w = p + w − pwν = w + p − wpν = w ⊕ν p, the commutative law holds, and hence (Rν , ⊕ν ) is an Abelian group. Definition 6. For p ∈ Rν , define circle minus p by p , ν p := − 1 − pν and the generalized square of p by 2 p (t) := −p(t)( ν p) =
Theorem 7. For p ∈ Rν , 2 = p 2, (i) ( ν p) 2 (ii) 1 − pν = p , 2 p
2 · ν, (iii) p + ( ν p) = −p 2 = p + p2 . (iv) p ⊕ν p
p2 . 1 − pν
4
PETERSON AND ANDERSON, ET AL
Proof. Throughout this proof assume p ∈ Rν . To prove part (i), consider 2 ( ν p) =
= = =
( ν p)2 1 − ( ν p)ν p2 (1−pν)2 pν 1 + 1−pν p2
pν)2
(1 − p2 1 − pν
·
1 − pν 1 − pν + pν
2 = p . 2 . To prove part (iii), note that Formula (ii) follows immediately from the definition of p −p p + ( ν p) = p + 1 − pν p − p2 ν − p = 1 − pν −p2 = ·ν 1 − pν 2 = −p · ν.
To prove (iv), we have 2 2 2 p ⊕ν p = p + p − p · p ·ν 2 p p2 = p+ −p· ν 1 − pν 1 − pν p 2 − p3 ν = p+ 1 − pν 2 = p+p .
3. The Nabla Exponential Function Definition 8. For h > 0, let Zh :=
z∈C:
and
Ch :=
π −π < Im(z) < h h
1 z∈C:z= 6 h
.
Define the ν-cylinder transformation ξˆh : Ch → Zh by 1 (1) ξˆh (z) := − Log(1 − zh), h
NABLA DYNAMIC EQUATIONS
5
where Log is the principal logarithm function. For h = 0, we define ξˆ0 (z) = z for all z ∈ C0 := C. Definition 9. If p ∈ Rν , then we define the nabla exponential function by Z t ˆ (2) eˆp (t, s) := exp ξν(τ ) (p(τ ))∇τ for s, t ∈ T, s
where the ν-cylinder transformation ξˆh is as in (1). Lemma 10 (semigroup property). If p ∈ Rν , then the semigroup property eˆp (t, u)ˆ ep (u, s) = eˆp (t, s)
for all
s, t, u ∈ T
is satisfied. Proof. Suppose p ∈ Rν . Let s, t, u ∈ T. Then we have by Definition 9 Z u Z t ˆ ˆ ξν(τ ) (p(τ ))∇τ ξν(τ ) (p(τ ))∇τ exp eˆp (t, u)ˆ ep (u, s) = exp s u Z t Z u ˆ ˆ ξν(τ ) (p(τ ))∇τ ξν(τ ) (p(τ ))∇τ + = exp s u Z t = exp ξˆν(τ ) (p(τ ))∇τ s
= eˆp (t, s). Definition 11. If p ∈ Rν , then the first order linear dynamic equation (3)
y ∇ = p(t)y
is called ν-regressive. Theorem 12. Suppose (3) is ν-regressive and fix t0 ∈ T. Then eˆp (·, t0 ) is a solution of the initial value problem (4)
y ∇ = p(t)y,
y(t0 ) = 1
on T. Proof. Fix t0 ∈ Tκ and assume (3) is ν-regressive. First note that eˆp (t0 , t0 ) = 1. It remains to show that eˆp (t, t0 ) satisfies the dynamic equation y ∇ = p(t)y. Fix t ∈ Tκ . There are two cases.
6
PETERSON AND ANDERSON, ET AL
Case 1. Assume ρ(t) < t. In this case R R t ρ(t) exp t0 ξˆν(τ ) (p(τ ))∇τ − exp t0 ξˆν(τ ) (p(τ ))∇τ eˆ∇ p (t, t0 ) = ν(t) 1 − exp −ν(t)ξˆν(t) (p(t)) = eˆp (t, t0 ) ν(t) eˆp (t, t0 ) = {1 − exp[Log(1 − p(t)ν(t))]} ν(t) = p(t) · eˆp (t, t0 ). Case 2. Assume ρ(t) = t. If y(t) := eˆp (t, t0 ), then we want to show that y ∇ (t) = p(t)y(t). Using Lemma 10 we obtain |y(t) − y(s) − p(t)y(t)(t − s)| = |ˆ ep (t, t0 ) − eˆp (s, t0 ) − p(t)ˆ ep (t, t0 )(t − s)| = |ˆ ep (t, t0 )| · |1 − eˆp (s, t) − p(t)(t − s)| Z t Z t = |ˆ ep (t, t0 )| 1 − ξˆν(τ ) (p(τ ))∇τ − eˆp (s, t) + ξˆν(τ ) (p(τ ))∇τ − p(t)(t − s) s s Z t ≤ |ˆ ep (t, t0 )| · 1 − ξˆν(τ ) (p(τ ))∇τ − eˆp (s, t) s Z t ˆ +|ˆ ep (t, t0 )| · ξν(τ ) (p(τ ))∇τ − p(t)(t − s) s Z t ˆ ≤ |ˆ ep (t, t0 )| · 1 − ξν(τ ) (p(τ ))∇τ − eˆp (s, t) s Z t +|ˆ ep (t, t0 )| · [ξˆν(τ ) (p(τ )) − ξˆ0 (p(t))]∇τ . s
Let ε > 0 be given. We now show that there is a neighborhood U of t so that the right-hand side of the last inequality is less than ε|t − s|, and the proof will be complete. Since ρ(t) = t and p ∈ Cld , it follows that lim ξˆν(τ ) (p(τ )) = ξˆ0 (p(t)).
(5)
τ →t
This implies that there is a neighborhood U1 of t such that ε ˆ for all ξν(τ ) (p(τ )) − ξˆ0 (p(t)) < 3|ˆ ep (t, t0 )|
τ ∈ U1 .
Let s ∈ U1 . Then (6)
Z t ε |ˆ ep (t, t0 )| · [ξˆν(τ ) (p(τ )) − ξˆ0 (p(t))]∇τ < |t − s|. 3 s
Next, by L’Hˆopital’s rule 1 − z − e−z = 0, z→0 z lim
NABLA DYNAMIC EQUATIONS
7
so there is a neighborhood U2 of t so that if s ∈ U2 , s 6= t, then 1 − R t ξˆ (p(τ ))∇τ − eˆ (s, t) p s ν(τ ) < ε∗ , Rt ˆ s ξν(τ ) (p(τ ))∇τ where
ε ε = min 1, 1 + 3|p(t)ˆ ep (t, t0 )| ∗
.
Let s ∈ U := U1 ∩ U2 . Then Z t Z t ∗ |ˆ ep (t, t0 )| · 1 − ξˆν(τ ) (p(τ ))∇τ − eˆp (s, t) < |ˆ ep (t, t0 )|ε ξˆν(τ ) (p(τ ))∇τ s s Z t ∗ ≤ |ˆ ep (t, t0 )| · ε [ξˆν(τ ) (p(τ )) − ξˆ0 (p(t))]∇τ + |p(t)||t − s| s Z t ˆ ˆ ep (t, t0 )|ε∗ |p(t)||t − s| ≤ |ˆ ep (t, t0 )| · [ξν(τ ) (p(τ )) − ξ0 (p(t))]∇τ + |ˆ s
≤ ≤ =
ε |t − s| + |ˆ ep (t, t0 )|ε∗ |p(t)||t − s| 3 ε ε |t − s| + |t − s| 3 3 2ε |t − s|, 3
using (6).
Theorem 13. If (3) is ν-regressive, then eˆp (·, t0 ) is the unique solution of the IVP (4). Proof. Assume y is a solution of (4). By the definition (2), the exponential never vanishes. Using the nabla quotient rule, ∇ y ∇ (t)ˆ ep (t, t0 ) − y(t)ˆ e∇ y p (t, t0 ) (t) = eˆp (·, t0 ) eˆp (t, t0 )ˆ ep (ρ(t), t0 ) p(t)y(t)ˆ ep (t, t0 ) − y(t)p(t)ˆ ep (t, t0 ) = eˆp (t, t0 )ˆ ep (ρ(t), t0 ) = 0, so that y is a constant multiple of eˆp (·, t0 ); the initial condition shows that they are equal. Theorem 14. Let p, q ∈ Rν and s, t, u ∈ T. Then (i) eˆ0 (t, s) ≡ 1 and eˆp (t, t) ≡ 1, (ii) eˆp (ρ(t), s) = (1 − ν(t)p(t))ˆ ep (t, s), 1 (iii) eˆp (t,s) = eˆ ν p (t, s), 1 (iv) eˆp (t, s) = eˆp (s,t) = eˆ ν p (s, t), (v) eˆp (t, u)ˆ ep (u, s) = eˆp (t, s), (vi) eˆp (t, s)ˆ eq (t, s) = eˆp⊕ν q (t, s), eˆ (t,s) (vii) eˆpq (t,s) = eˆp ν q (t, s).
8
PETERSON AND ANDERSON, ET AL
(viii)
1 eˆp (t,s)
∇
=
−p(t) . eˆρp (t,s)
Proof. Part (i). Assume p, q ∈ Rν and s, t ∈ T. Then eˆ0 (t, s) is a solution of the IVP x∇ = 0 · x, x(t0 ) = 1, so x∇ = 0 · x if and only if x(t) = c for some constant c ∈ R. But x(t0 ) = 1 = c, and since the system above has a unique solution, eˆ0 (t, s) ≡ 1. Moreover, eˆp (t, t) ≡ 1 follows directly from (2). Part (ii). We have eˆp (ρ(t), s) = eˆρp (t, s) = eˆp (t, s) − ν(t)ˆ e∇ p (t, s) = eˆp (t, s) − ν(t)p(t)ˆ ep (t, s) = (1 − ν(t)p(t))ˆ ep (t, s). Part (iii). Let x(t) =
1 eˆp (t,s) .
Then x∇ (t) =
(7)
−ˆ e∇ p (t, s) , eˆp (t, s)ˆ ep (ρ(t), s)
so that −p(t)ˆ ep (t, s) eˆp (t, s)(1 − ν(t)p(t))ˆ ep (t, s) −p(t) 1 = · 1 − ν(t)p(t) eˆp (t, s) = ν p(t) · x(t).
x∇ (t) =
But x(t) is a solution to initial value problem x∇ (t) = ν p(t)x(t), Therefore,
1 eˆp (t,s)
x(t0 ) = 1.
= eˆ ν p (t, s) by uniqueness.
Part (iv). This follows from (iii) and (2). Part (v). This semigroup property was already shown above in Lemma 10 . Part (vi). We know eˆp⊕ν q (t, s) is the unique solution to the initial value problem x∇ (t) = (p ⊕ν q)(t)x(t),
(8)
x(s) = 1.
If we define (9)
x(t) := eˆp (t, s)ˆ eq (t, s),
then x∇ (t) = = = = = =
p(t)ˆ ep (t, s)ˆ eq (t, s) + q(t)ˆ ep (ρ(t), s)ˆ eq (t, s) p(t)ˆ ep (t, s)ˆ eq (t, s) + q(t)ˆ ep (t, s)(1 − ν(t)p(t))ˆ eq (t, s) eˆp (t, s)ˆ eq (t, s)(p(t) + q(t)(1 − ν(t)p(t))) (p(t) + q(t) − p(t)q(t)ν(t))ˆ ep (t, s)ˆ eq (t, s) (p ⊕ν q)(t)ˆ ep (t, s)ˆ eq (t, s) (p ⊕ν q)(t)x(t).
NABLA DYNAMIC EQUATIONS
9
It is obvious that x(s) = 1 by (2). Thus, eˆp (t, s)ˆ eq (t, s) = eˆp⊕ν q (t, s). Part (vii). This follows easily using parts (iii) and (vi) of this theorem. Part (viii). We calculate ∇ −ˆ e∇ 1 p (t, s) = eˆp (t, s) eˆp (t, s)ˆ ep (ρ(t), s) −p(t)ˆ ep (t, s) = eˆp (t, s)ˆ ep (ρ(t), s) −p(t) . = eˆp (ρ(t), s) Lemma 15. Let p ∈ Rν . Suppose there exists a sequence of distinct points {tn }n∈N ⊂ Tκ such that 1 − ν(tn )p(tn ) < 0 for all n ∈ N. Then limn→∞ |tn | = ∞. In particular, if there exists a bounded set J ⊂ Tκ , then the cardinality of the set of points such that 1 − ν(t)p(t) < 0 and t ∈ J is finite. Proof. Let J ⊂ Tκ be a bounded set and assume there exists an infinite sequence of distinct points {tn }n∈N ⊂ J such that 1 − ν(tn )p(tn ) < 0
for all
n ∈ N.
Since the sequence {tn }n∈N is bounded, it has a convergent subsequence. We assume without loss of generality that the sequence {tn }n∈N is itself convergent, i.e., lim tn = t0 .
n→∞
Since T is closed, t0 ∈ T. Since 1 − ν(tn )p(tn ) < 0, we have ν(tn ) > 0 and 1 for all n ∈ N. (10) p(tn ) > ν(tn ) There is a subsequence {tnk }k∈N of {tn }n∈N with limk→∞ tnk = t0 such that {tnk }k∈N is either strictly decreasing or strictly increasing. If {tnk }k∈N is strictly decreasing, then limk→∞ ν(tnk ) = 0 because of 0 < ν(tnk ) = tnk − ρ(tnk ) ≤ tnk − tnk+1 . If {tnk }k∈N is strictly increasing, then limk→∞ ν(tnk ) = 0 because of 0 < ν(tnk ) = tnk − ρ(tnk ) ≤ tnk − tnk−1 . So in either case limk→∞ ν(tnk ) = 0 and hence using (10), lim p(tnk ) = ∞.
k→∞
But this contradicts the fact that p ∈ Cld . Theorem 16. Assume p ∈ Rν and t0 ∈ T.
10
PETERSON AND ANDERSON, ET AL
(i) If 1 − ν(t)p(t) > 0 for t ∈ T, then eˆp (t, t0 ) > 0 for all t ∈ T. (ii) If 1 − ν(t)p(t) < 0 for t > inf T, then eˆp (t, t0 ) = (−1)nt α(t, t0 ) for all t ∈ T, where Z t log |1 − ν(τ )p(τ )| α(t, t0 ) := exp − ∇τ > 0 ν(τ ) t0 and
( card (t0 , t] nt = card (t, t0 ]
if if
t ≥ t0 t < t0 .
Proof. Part (i) can be shown directly using Definition 9: Since 1 − ν(t)p(t) > 0, we have Log[1 − ν(t)p(t)] ∈ R for all t ∈ T and therefore ξˆν(t) (p(t)) ∈ R Hence
for all
t ∈ T.
Z t ˆ ξν(τ ) (p(τ ))∇τ > 0 eˆp (t, t0 ) = exp − t0
for all t ∈ T. Part (ii) follows similarly: Since 1 − ν(t)p(t) < 0, for t > inf T we have Log[1 − ν(t)p(t)] = log |1 − ν(t)p(t)| + iπ
for all
t > inf T.
It follows from Lemma 15 that nt < ∞ and Z t ˆ eˆp (t, t0 ) = exp − ξν(τ ) (p(τ ))∇τ t
Z 0t Log[1 − ν(τ )p(τ )] = exp − ∇τ ν(τ ) t0 Z t log |1 − ν(τ )p(τ )| + iπ = exp − ∇τ ν(τ ) t0 Z t iπ log |1 − ν(τ )p(τ )| = exp − + ∇τ ν(τ ) ν(τ ) t0 Z t ∇τ = α(t, t0 ) exp −iπ . t0 ν(τ ) By Euler’s formula,
Z
t
exp −iπ t0
∇τ ν(τ )
Z = cos π
t
t0
∇τ ν(τ )
= cos(±nt π) = (−1)nt . This leads to the desired result. In view of Theorem 16 (i), we make the following definition.
NABLA DYNAMIC EQUATIONS
11
Definition 17. We define the set R+ ν of all positively ν-regressive elements of Rν by + R+ ν = Rν (T, R) = {p ∈ Rν : 1 − ν(t)p(t) > 0 for all t ∈ T}.
Exercise 18. Assume p ∈ Cld and p(t) ≤ 0 for all t ∈ T. Show that p ∈ R+ ν. Lemma 19. R+ ν is a subgroup of Rν . + Proof. Obviously we have that R+ ν ⊂ Rν and that the constant function 0 ∈ Rν . Now let + p, q ∈ Rν . Then 1 − νp > 0 and 1 − νq > 0 on T. Therefore 1 − ν(p ⊕ν q) = (1 − νp)(1 − νq) > 0 on T. Hence we have p ⊕ν q ∈ R+ ν. + Next, let p ∈ Rν . Then 1 − νp > 0 on T. This implies that νp 1 1 − ν( ν p) = 1 + = > 0 on T. 1 − νp 1 − νp Hence ν p ∈ R+ ν. + These calculations establish that Rν is a subgroup of Rν .
Theorem 20 (Sign of the Nabla Exponential Function). Let p ∈ Rν and t0 ∈ T. ˆp (t, t0 ) > 0 for all t ∈ T. (i) If p ∈ R+ ν , then e (ii) If 1 − ν(t)p(t) < 0 for some t ∈ Tκ , then eˆp (ρ(t), t0 )ˆ ep (t, t0 ) < 0. (iii) If 1 − ν(t)p(t) < 0 for all t ∈ Tκ , then eˆp (t, t0 ) changes sign at every point t ∈ T. (iv) Assume there exist sets T = {ti : i ∈ N} ⊂ Tκ and S = {si : i ∈ N} ⊂ Tκ with · · · < s2 < s1 < t0 ≤ t1 < t2 < · · · such that 1−ν(t)p(t) < 0 for all t ∈ S∪T and 1−ν(t)p(t) > 0 for all t ∈ Tκ \(S∪T ). Furthermore if card T = ∞, then limn→∞ tn = ∞, and if card S = ∞, then limn→∞ sn = −∞. If T 6= ∅ and S 6= ∅, then eˆp (t, t0 ) > 0
on
[s1 , ρ(t1 )].
If card T = ∞, then (−1)i eˆp (t, t0 ) > 0
on
[ti , ρ(ti+1 )]
for all
i ∈ N.
If card T = N ∈ N, then (−1)i eˆp (t, t0 ) > 0
on
[ti , ρ(ti+1 )]
for all
0≤i≤N −1
and (−1)N eˆp (t, t0 ) > 0
on
[tN , ∞).
12
PETERSON AND ANDERSON, ET AL
If T = ∅ and S 6= ∅, then eˆp (t, t0 ) > 0
[s1 , ∞).
on
If card S = ∞, then (−1)i eˆ(t, t0 ) > 0
on
[si+1 , ρ(si )]
for all
i ∈ N.
If card S = M ∈ N, then (−1)i eˆp (t, t0 ) > 0
on
[si+1 , ρ(si )]
1≤i≤M −1
for all
and (−1)M eˆp (t, t0 ) > 0 If S = ∅ and T 6= ∅, then eˆp (t, t0 ) > 0
on on
(−∞, ρ(sM )]. (−∞, ρ(t1 )].
In particular, the exponential function eˆp (·, t0 ) is a real-valued function that is never equal to zero but can be negative. Proof. While (i) is just Theorem 16 (i), (ii) follows immediately from Theorem 14 (ii). Part (iii) is clear by Theorem 16 (ii). Now we prove (iv). By Lemma 15, the set of points in T where 1 − ν(t)p(t) < 0 is countable. If card T = ∞, then limn→∞ tn = ∞ by Lemma 15. We can assume t0 ≤ t1 < t2 < · · · . Consider the case where card T = ∞ with t0 ≤ t1 < t2 < · · · such that 1 − ν(t)p(t) < 0 for all t ∈ T and 1 − ν(t)p(t) > 0 for all t ∈ [t0 , ∞) \ T . We prove the conclusion of (iv) for this case by mathematical induction with respect to the intervals [t0 , ρ(t1 )], [t1 , ρ(t2 )], [t2 , ρ(t3 )], . . .. First we prove that eˆp (t, t0 ) > 0
on
[t0 , ρ(t1 )].
If t1 = t0 , then eˆp (t1 , t0 ) = 1 > 0. Hence we can assume t1 > t0 . Since 1 − ν(t1 )p(t1 ) < 0, t1 is left-scattered. Now 1 − ν(t)p(t) > 0 on [t0 , ρ(t1 )], implies that −
eˆp (t, t0 ) = e
Rt t0
ξˆν(s) (p(s))∇s
> 0,
on
[t0 , ρ(t1 )].
Assume i ≥ 0 and (−1)i eˆp (t, t0 ) > 0 on [ti , ρ(ti+1 )]. It remains to show that (−1)i+1 eˆp (t, t0 ) > 0
on
[ti+1 , ρ(ti+2 )].
First note that 0 < (−1)i eˆp (ρ(ti+1 ), t0 ) = (−1)i [1 − ν(ti+1 )p(ti+1 )]ˆ ep (ti+1 , t0 ) = −[1 − ν(ti+1 )p(ti+1 )](−1)i+1 eˆp (ti+1 , t0 ) implies that (−1)i+1 eˆp (ti+1 , t0 ) > 0. Since 1 − ν(ti+1 )p(ti+2 ) < 0, we have that ρ(ti+2 ) > ti+2 . By the semigroup property (Theorem 14, (iv)) eˆp (t, t0 ) = eˆp (t, ti+1 )ˆ ep (ti+1 , t0 ). Since 1 − ν(t)p(t) > 0 on (ti+1 , ρ(ti+2 )] it follows that eˆp (t, t0 ) > 0
on
[ti+1 , ρ(ti+2 )].
NABLA DYNAMIC EQUATIONS
13
The remaining cases are similar and hence are omitted.
Exercise 21. Show that if the constant a > 1 and T = Z, then the exponential function eˆa (·, 0) changes sign at every point in Z. In this case we say the exponential function eˆa (·, 0) is strongly oscillatory on Z.
4. Examples of Exponential Functions Example 22. Let T = hZ for h > 0. Let α ∈ Rν be constant, i.e., 1 α ∈ Cˆh := C \ . h Then (11)
eˆα (t, t0 ) =
1 1 − αh
t−t0 h
for all
t ∈ T.
To show this we note that y defined by the right-hand side of (11) satisfies y(t0 ) = 1 and y ∇ (t) = =
y(t) − y(t − h) h t−t0 h 1 1 h 1 − αh
−
1 1 − αh
t−h−t0 h
1 {1 − (1 − αh)}y(t) h = αy(t) =
for all t ∈ T. Exercise 23. Show that if T = R or T = hZ, h > 0, and α ∈ Rν is a constant, then eˆα (t + s, 0) = eˆα (t, 0)ˆ eα (s, 0) for all s, t ∈ T. Example 24. Consider the time scale T = N2 = {n2 : n ∈ N}. We claim that (12)
eˆ−1 (t, 1) =
√
2
2 √
t(
t)!
for
t ∈ T.
14
PETERSON AND ANDERSON, ET AL
Let y be defined by the right-hand side of (12). Clearly, y(1) = 1, and for t ∈ T we have y(t) − y(ρ(t)) = = = = =
2 √
2 − √ p ρ(t) 2 t)! 2 ( ρ(t))! 2 2 √ √ − √ √ t t−1 2 ( t)! 2 ( t − 1)! √ 2−4 t √ √ 2 t ( t)! 2 −ν(t) √ √ t 2 ( t)! −ν(t)y(t). √
t(
It follows that y ∇ (t) = −y(t). Example 25. Let H0 := 0, Hn =
Pn
1 k=1 k ,
k ∈ N be the harmonic numbers [4], and
T = {t = Hn : n ∈ N0 }. Let α ∈ Rν be a constant. We claim that (13)
eˆα (Hn , 0) =
n! , (1 − α)n
where “t to the n rising” is defined by (14)
tn = t(t + 1)(t + 2) · · · (t + n − 1)
for n ∈ N and t0 := 1. To show this claim, suppose y is defined by the right-hand side of (13) and consider y(t) − y(ρ(t)) = y(Hn ) − y(Hn−1 ) (n − 1)! n! − = n (1 − α) (1 − α)n−1 [1 − n−α n ]n! = (1 − α)n n! α = n (1 − α)n n! = αν(t) (1 − α)n = αν(t)y(t). It follows that y ∇ (t) = αy(t). Example 26. We consider the time scale T = q N0 . Let p ∈ Rν . The problem y ∇ = p(t)y,
y(1) = 1
NABLA DYNAMIC EQUATIONS
15
can be equivalently rewritten as y=
1 1−
(q−1) q tp(t)
yρ,
y(1) = 1.
The solution of this problem is (15)
eˆp (t, 1) =
1
Y s∈T∩(1,t] 1 −
(q−1) q sp(s)
.
If α ∈ Rν is constant, then we have 1
Y
eˆα (t, 1) =
s∈T∩(1,t]
(q−1) q sα
1−
.
For q = 2 this simplifies as (16)
Y
eˆα (t, 1) =
s∈T∩(1,t]
Now consider the special case of (15) when q t−1 p(t) = q − 1 t2
1 . 1 − 12 sp(s)
t ∈ T = q N0 .
for
Using (15), we find eˆp (t, 1) =
Y s∈T∩(1,t]
=
=
Y
1 1 − s−1 s s
s∈T∩(1,t] n Y k
q
k=1
= q n(n+1)/2 where t = q n . Substituting t = q n we finally get that √ ln2 (t) eˆp (t, 1) = te 2 ln(q) . Exercise 27. Find eˆα (t, t0 ) for t, t0 ∈ q N0 with constant α ∈ Rν . Exercise 28. Find the exponential function eˆ λ (·, 1), where λ ∈ Rν is a constant for T = t [1, ∞) and T = N. P Example 29. Let ∞ k=0 αk = L be a convergent series with α0 ∈ R and αk > 0 for k ≥ 1 and put (n−1 ) X T= αk : n ∈ N ∪ {L}. k=0
16
PETERSON AND ANDERSON, ET AL
Table 1. Exponential Functions T
eˆα (t, t0 )
R
eα(t−t0 ) t−t0 1 1−α
Z
hZ
1 nZ
Q
q N0
1 k=1 k
Then
: n∈N
n n−α
s∈[t0 ,t)
Q
2N0 Pn
1 1−αh
n(t−t0 )
1 1−(q−1)αs
s∈[t0 ,t)
1 1−αs
(n0 +1)n−n0 (n0 +1−α)n−n0
(t−t0 )/h
if t ≥ t0
if t ≥ t0
if t =
Pn
1 k=1 k
n−1 Y
n−1 X 1 eˆ1 (t, α0 ) = for t = αk , n ∈ N, 1 − αk k=1 k=0 Q where we use the convention that 0k=1 (· · · ) = 1. Therefore
lim eˆ1 (t, α0 ) = eˆ1 (L, α0 ) =
t→L
∞ Y k=1
1 . 1 − αk
In particular, if αk =
1 4k 2
for all
k ∈ N,
then, using that the Wallis product ∞ ∞ ∞ Y Y Y 1 1 4k 2 π = = = 1 2 1 − αk 4k − 1 2 1 − 4k2 k=1 k=1 k=1 converges, we find that lim eˆ1 (t, α0 ) = eˆ1 (L, α0 ) =
t→L
π . 2
5. Nonhomogeneous First Order Linear Equations In this section we first study the first order nonhomogeneous linear equation (17)
y ∇ = p(t)y + f (t)
and the corresponding homogeneous equation (18)
y ∇ = p(t)y
NABLA DYNAMIC EQUATIONS
17
Table 2. Exponential Functions
Pn
T
t0
p(t)
eˆp (t, t0 )
R
0
1
et
Z
0
−1
1 t 2
hZ
0
1
1 nZ
0
1
q N0
1
1
2N0
1
−1
q N0
1
1−t (q−1)t2
2N0
1
1−t t2
N2
1
−1
0
1
1 k=1 k
: n∈N
Q
s∈[1,t)
Q
1 1−h
t/h
n n−1
nt
1 1−(q−1)s
s∈[1,t)
√
1 1+s
if t ≥ 1
if t ≥ 1
ln2 (t)
− 2 ln(q)
te
√
−
te
2
ln2 (t) ln(4)
√ 2√ t ( t)!
n + 1 if t =
Pn
1 k=1 k
on a time scale T. Using Theorem 12 one can easily prove the following theorem. Theorem 30. Suppose (18) is ν-regressive. Let t0 ∈ T and y0 ∈ R. The unique solution of the initial value problem (19)
y ∇ = p(t)y,
y(t0 ) = y0
is given by y(t) = eˆp (t, t0 )y0 . Definition 31. For p ∈ Rν we define an operator L1 : C1ld → Cld by L1 y(t) = y ∇ (t) − p(t)y(t),
t ∈ Tκ .
Then (18) can be written in the form L1 y = 0 and (17) can be written in the form L1 y = f (t). Since L1 is a linear operator we say that (17) is a linear dynamic equation. We say y is a solution of (17) on T provided y ∈ C1ld and L1 y(t) = f (t) for t ∈ Tκ . Definition 32. The adjoint operator L∗1 : C1ld → Cld is defined by L∗1 x(t) = x∇ (t) + p(t)xρ (t),
t ∈ Tκ .
Example 33. It is easy to varify that the function t
x(t) = (1 − αh) h ,
t ∈ hZ
18
PETERSON AND ANDERSON, ET AL
is a solution of the adjoint equation x∇ + αxρ = 0,
t ∈ hZ,
where α is a ν-regressive constant. Theorem 34 (Lagrange Identity). If x, y ∈ C1ld , then xρ L1 y + yL∗1 x = (xy)∇
on
Tκ .
Proof. Assume x, y ∈ C1ld and consider (xy)∇ = xρ y ∇ + x∇ y = xρ (y ∇ − py) + y(x∇ + pxρ ) = xρ L1 y + yL∗1 x on Tκ .
The next result follows immediately from the Lagrange identity. Corollary 35 (Abel’s formula). If x and y are solutions of L1 y = 0 and L∗1 x = 0, respectively, then x(t)y(t) = C for t ∈ T, where C is a constant. It follows from this corollary that if a nontrivial y satisfies L1 y = 0, then x := the adjoint equation L∗1 x = 0.
1 y
satisfies
Exercise 36. Show directly by substitution into L∗1 x = 0 that if y is a nontrivial solution of L1 y = 0, then x := y1 is a nontrivial solution of the adjoint equation L∗1 x = 0. For later reference we also cite the following existence and uniqueness result for the adjoint initial value problem. Theorem 37. Suppose p ∈ Rν . Let t0 ∈ T and x0 ∈ R. The unique solution of the initial value problem (20)
x∇ = −p(t)xρ ,
x(t0 ) = x0
is given by x(t) = eˆ ν p (t, t0 )x0 . Exercise 38. Prove Theorem 37. Show directly that the function x given in Theorem 37 solves the initial value problem (20). We now turn our attention to the nonhomogeneous problem (21)
x∇ = −p(t)xρ + f (t),
x(t0 ) = x0 ,
where we assume that p ∈ Rν and f ∈ Cld . Let us assume that x is a solution of (21). We multiply both sides of the dynamic equation in (21) by the so-called integrating factor eˆp (t, t0 ) and obtain eˆp (t, t0 )x∇ (t) + p(t)ˆ ep (t, t0 )xρ (t) = eˆp (t, t0 )f (t).
NABLA DYNAMIC EQUATIONS
19
Using the product rule we get that [ˆ ep (t, t0 )x]∇ = eˆp (t, t0 )f (t). Taking the nabla integral of both sides we obtain Z
t
eˆp (t, t0 )x(t) − eˆp (t0 , t0 )x(t0 ) =
eˆp (τ, t0 )f (τ )∇τ. t0
Solving for x(t) we have t
Z
eˆ ν p (t, τ )f (τ )∇τ.
x(t) = eˆ ν p (t, t0 )x0 + t0
Before we state the variation of constants formula we give the following definition. Definition 39. The equation (17) is called regressive provided (18) is regressive and f : T → R is ld-continuous. Now we give the variation of constants formula for the adjoint equation L∗1 x = f . Theorem 40 (Variation of Constants). Suppose (17) is regressive. Let t0 ∈ T and x0 ∈ R. The unique solution of the initial value problem (22)
x∇ = −p(t)xρ + f (t),
x(t0 ) = x0
is given by t
Z (23)
x(t) = eˆ ν p (t, t0 )x0 +
eˆ ν p (t, τ )f (τ )∇τ. t0
Proof. First, it is easily verified that x given by (23) solves the initial value problem (22) (see Exercise 42 below). Finally, if x is a solution of (22), then we have seen above that (23) holds. Remark 41. Because of Theorem 14 (v), an alternative form of the solution of the initial value problem (22) is given by Z t x(t) = eˆ ν p (t, t0 ) x0 + eˆ ν p (t0 , τ )f (τ )∇τ . t0
Exercise 42. Verify directly that the function x given in Theorem 40 solves the IVP (22). Next we give the variation of constants formula for L1 y = f . Theorem 43 (Variation of Constants). Suppose (17) is regressive. Let t0 ∈ T and y0 ∈ R. The unique solution of the initial value problem (24)
y ∇ = p(t)y + f (t),
y(t0 ) = y0
is given by Z
t
y(t) = eˆp (t, t0 )y0 +
eˆp (t, ρ(τ ))f (τ )∇τ. t0
20
PETERSON AND ANDERSON, ET AL
Proof. We equivalently rewrite y ∇ = p(t)y + f (t) as y ∇ = p(t) y ρ + ν(t)y ∇ + f (t), i.e., [1 − ν(t)p(t)]y ∇ = p(t)y ρ + f (t), i.e., (use p ∈ Rν ) y ∇ = −( ν p)(t)y ρ +
f (t) 1 − ν(t)p(t)
and apply Theorem 40 to find the solution of (24) as (use ( ν ( ν p))(t) = p(t)) Z t f (τ ) eˆp (t, τ ) y(t) = eˆp (t, t0 )y0 + ∇τ. 1 − ν(τ )p(τ ) t0 For the final calculation eˆp (t, τ ) eˆp (t, τ ) = = eˆp (t, ρ(τ )), 1 − ν(τ )p(τ ) eˆp (ρ(τ ), τ ) we use Theorem 14 (ii) and (v).
Remark 44. Because of Theorem 14 (v), an alternative form of the solution of the initial value problem (24) is given by Z t y(t) = eˆp (t, t0 ) y0 + eˆp (t0 , ρ(τ ))f (τ )∇τ . t0
Exercise 45. Use the variation of constants formula from Theorem 43 to solve the following initial value problems on the indicated time scales: (i) y ∇ = 2y + t, y(0) = 0, where T = R; (ii) y ∇ = 2y + 3t , y(0) = 0, where T = Z; (iii) y ∇ = p(t)y + eˆp (ρ(t), t0 ), y(t0 ) = 0, where T is an arbitrary time scale and p ∈ Rν . 6. Wronskians In this section we consider the second order linear nabla dynamic equation (25)
y ∇∇ + p(t)y ∇ + q(t)y = f (t),
where we assume that p, q, f ∈ Cld . If we introduce the operator L2 : C2ld → Cld by L2 y(t) = y ∇∇ (t) + p(t)y ∇ (t) + q(t)y(t) for t ∈ Tκ2 , then (25) can be rewritten as L2 y = f . If y ∈ C2ld and L2 y(t) = f (t) for all t ∈ Tκ2 , then we say y is a solution of L2 y = f on T. The fact that L2 is a linear operator (see Theorem 46) is why we call equation (25) a linear equation. If f (t) = 0 for all t ∈ Tκ2 , then we get the homogeneous dynamic equation L2 y = 0. Otherwise we say the equation L2 y = f is nonhomogeneous. The following principle of superposition is easy to prove and is left as an exercise.
NABLA DYNAMIC EQUATIONS
21
Theorem 46. The operator L2 : C2ld → Cld is a linear operator, i.e., L2 (αy1 + βy2 ) = αL2 (y1 ) + βL2 (y2 )
for all
α, β ∈ R and y1 , y2 ∈ C2ld .
If y1 and y2 solve the homogeneous equation L2 y = 0, then so does y = αy1 + βy2 , where α and β are any real constants. Exercise 47. Prove Theorem 46. Show also that the sum of a solution of the homogeneous equation L2 y = 0 and the nonhomogeneous equation (25) is a solution of (25). Definition 48. Equation (25) is called ν-regressive provided p, q, f ∈ Cld such that the ν-regressivity condition (26)
1 + ν(t)p(t) + ν 2 (t)q(t) 6= 0
for all
t ∈ Tκ
holds. Theorem 49. Assume that the dynamic equation (25) is ν-regressive. If t0 ∈ Tκ , then the initial value problem L2 y = f (t), y(t0 ) = y0 , y ∇ (t0 ) = y0∇ , where y0 and y0∇ are given constants, has a unique solution, and this solution is defined on the whole time scale T. To motivate the next definition we now try to solve the initial value problem (27)
L2 y = 0,
y(t0 ) = y0 ,
y ∇ (t0 ) = y0∇ ,
where t0 ∈ Tκ and y0 , y0∇ ∈ R. If y1 and y2 are two solutions of L2 y = 0, then by Theorem 46 y(t) := αy1 (t) + βy2 (t) is a solution of L2 y = 0 for all α, β ∈ R. Then we want to see if we can pick α and β so that y0 = y(t0 ) = αy1 (t0 ) + βy2 (t0 ) and y0∇ = y ∇ (t0 ) = αy1∇ (t0 ) + βy2∇ (t0 ), i.e., (28)
y1 (t0 ) y2 (t0 ) α y0 = . ∇ ∇ y1 (t0 ) y2 (t0 ) β y0∇
System (28) has a unique solution provided the occurring 2 × 2-matrix is invertible. This leads to the following definition. Definition 50. For two nabla differentiable functions y1 and y2 we define the nabla Wronskian W = W (y1 , y2 ) by y1 (t) y2 (t) . W (t) = det ∇ ∇ y1 (t) y2 (t)
22
PETERSON AND ANDERSON, ET AL
We say that two solutions y1 and y2 of L2 y = 0 form a fundamental set of solutions (or a fundamental system) for L2 y = 0 provided W (y1 , y2 )(t) 6= 0 for all t ∈ Tκ . Theorem 51. If the pair of functions y1 , y2 forms a fundamental system of solutions for L2 y = 0, then y(t) = αy1 (t) + βy2 (t), where α and β are constants, is a general solution of L2 y = 0. By a general solution we mean every function of this form is a solution and every solution is in this form. Proof. Assume that the pair of functions y1 , y2 is a fundamental system of solutions for L2 y = 0. By Theorem 46, any function of the form y(t) = αy1 (t) + βy2 (t), where α and β are constants, is a solution of L2 y = 0. Now we show that any solution of L2 y = 0 is of this form. Let y0 (t) be a fixed but arbitrary solution of L2 y = 0. Fix t0 ∈ Tκ and let y0 := y0 (t0 ), y0∇ := y0∇ (t0 ). Let u(t) = αy1 (t) + βy2 (t). We want to pick α and β so that u(t0 ) = y0 = αy1 (t0 ) + βy2 (t0 ) and u∇ (t0 ) = y0∇ = αy1∇ (t0 ) + βy2∇ (t0 ). Hence we want to pick α and β so that (28) is satisfied. Since the determinant of the coefficient matrix in (28) is nonzero, there is a unique set of constants α0 , β0 so that (28) is satisfied. It follows that u(t) = α0 y1 (t) + β0 y2 (t) solves the initial value problem (27). By the uniqueness theorem (Theorem 49), y0 (t) = u(t) = α0 y1 (t) + β0 y2 (t). It is an important fact that the nabla Wronskian of two solutions of (25) is nonzero at a single point t0 if and only if it is nonzero for all t. This is a consequence of the subsequent Abel’s formula. Before we prove Abel’s theorem we prove the following lemma concerning nabla Wronskians. Lemma 52. Let y1 and y2 be twice nabla differentiable. Then ρ ρ y1 y 2 ; (i) W (y1 , y2 ) = det ∇ ∇ y 1 y2
NABLA DYNAMIC EQUATIONS
(ii) W ∇ (y1 , y2 ) = det (iii)
W ∇ (y
23
y1ρ
y2ρ
;
y1∇∇ y2∇∇
ρ y2ρ y1 + (−p − νq)W (y1 , y2 ). 1 , y2 ) = det Ly1 Ly2
Proof. By Definition 50 we have
y1 y2 W (y1 , y2 ) = det y1∇ y2∇ ρ y 1
= det
+ νy1∇ y1∇
+ νy2∇ y2∇
ρ y1
= det
y2ρ
y1∇
y2ρ y2∇
.
This proves part (i). For part (ii), we use the product formula to calculate [W (y1 , y2 )]∇ = (y1 y2∇ − y2 y1∇ )∇ = y1ρ y2∇∇ + y1∇ y2∇ − y2ρ y1∇∇ − y2∇ y1∇ = y1ρ y2∇∇ − y2ρ y1∇∇ ρ y2ρ y1 . = det ∇∇ ∇∇ y1 y2
Finally we employ part (ii) to obtain ρ y2ρ y1 [W (y1 , y2 )] = det ∇∇ ∇∇ y1 y2 ∇
y1ρ y2ρ = det ∇ ∇ Ly1 − py1 − qy1 Ly2 − py2 − qy2
24
PETERSON AND ANDERSON, ET AL
y1ρ
y2ρ
y1ρ
y2ρ
y1ρ
y2ρ
+ det = det −py1∇ − qy1 −py2∇ − qy2 Ly1 Ly2 y1ρ
y2ρ
+ det = det ρ ρ ∇ ∇ ∇ ∇ −py1 − qy1 − qνy1 −py2 − qy2 − qνy2 Ly1 Ly2 ρ y2ρ y1ρ y2ρ y1 + det = det ∇ ∇ ∇ ∇ Ly1 Ly2 −py1 − qνy1 −py2 − qνy2 ρ ρ ρ y2ρ y1 y1 y 2 . = det + (−p − νq) det ∇ ∇ Ly1 Ly2 y1 y 2
From here, part (iii) follows by applying part (i).
Theorem 53 (Abel’s Theorem). Let t0 ∈ Tκ and assume that L2 y = 0 is ν-regressive. Suppose that y1 and y2 are two solutions of L2 y = 0. Then their Wronskian W = W (y1 , y2 ) satisfies W (t) = eˆ−p−νq (t, t0 )W (t0 )
for
t ∈ Tκ .
Proof. Using the fact that y1 and y2 are solutions of L2 y = 0 and Lemma 52 (iii), we see that W is a solution of the initial value problem (29)
W ∇ = [−p(t) − ν(t)q(t)]W,
W (t0 ) = W0 ,
where we put W0 = W (t0 ). Using condition (26) we get that the coefficient function −p−νq is ν-regressive. Since p, q, ν ∈ Cld , we have that −p − νq ∈ Cld . Altogether −p − νq ∈ Rν . By Theorem 30, the unique solution of (29) is W (t) = eˆ−p−νq (t, t0 )W0 for t ∈ Tκ . Alternatively, we may consider a linear dynamic equation of the form (30)
ρ
x∇∇ + p(t)x∇ + q(t)xρ = 0.
Definition 54. We say that (30) is ν-regressive provided p ∈ Rν and q ∈ Cld . Theorem 55. If (30) is ν-regressive, then it is equivalent to a ν-regressive equation of the form L2 y = 0. Conversely, if L2 y = 0 is ν-regressive, then it is equivalent to a ν-regressive equation of the form (30).
NABLA DYNAMIC EQUATIONS
25
Proof. Assume (30) is ν-regressive. Then p, q ∈ Cld and 1 − ν(t)p(t) 6= 0 for t ∈ Tκ . We write ρ
x∇∇ + px∇ + qxρ = x∇∇ + p(x∇ − νx∇∇ ) + q(x − νx∇ ) = x∇∇ − νpx∇∇ + (p − νq)x∇ + qx = (1 − νp)x∇∇ + (p − νq)x∇ + qx p − νq ∇ q ∇∇ = [1 − νp] x + x + x . 1 − νp 1 − νp Hence equation (30) is equivalent to y ∇∇ + p1 (t)y ∇ + q1 (t)y = 0, where p1 :=
p − νq 1 − νp
and
q1 :=
q . 1 − νp
Note that p1 , q1 ∈ Cld , and since q p − νq + ν2 1 − νp 1 − νp 1 − νp + νp − ν 2 q + ν 2 q = 1 − νp 1 = 1 − νp 6 = 0,
1 + νp1 + ν 2 q1 = 1 + ν
we get that (30) is equivalent to a ν-regressive equation of the form (25). Next assume that (25) is ν-regressive. Then 1 + ν(t)p(t) + ν 2 (t)q(t) 6= 0 for t ∈ Tκ and p, q ∈ Cld . Consider ρ
y ∇∇ + py ∇ + qy = y ∇∇ + p(y ∇ + νy ∇∇ ) + q(y ρ + νy ∇ ) ρ
ρ
= (1 + νp)y ∇∇ + py ∇ + qy ρ + νq(y ∇ + νy ∇∇ ) ρ
= (1 + νp + ν 2 q)y ∇∇ + (p + νq)y ∇ + qy ρ q p + νq ∇ρ ρ 2 ∇∇ = (1 + νp + ν q) y + y + y . 1 + νp + ν 2 q 1 + νp + ν 2 q Hence L2 y = 0 is equivalent to the equation ρ
x∇∇ + p2 (t)x∇ + q2 (t)xρ = 0,
(31) where
p + νq 1 + νp + ν 2 q Note that p2 , q2 ∈ Cld , and since p2 :=
and
q2 :=
q . 1 + νp + ν 2 q
p + νq 1 + νp + ν 2 q 1 = 1 + νp + ν 2 q 6= 0,
1 − νp2 = 1 − ν
26
PETERSON AND ANDERSON, ET AL
we have p2 ∈ Rν so that (31) is ν-regressive.
Theorem 56 (Abel’s Theorem for (30)). Assume that (30) is ν-regressive and let t0 ∈ Tκ . Suppose that x1 and x2 are two solutions of equation (30). Then their Wronskian W = W (x1 , x2 ) satisfies W (t) = eˆ ν p (t, t0 )W (t0 ) for t ∈ Tκ . Proof. Let x1 and x2 be two solutions of (30). We use Theorem 52 (ii) to find W
∇
ρ xρ2 x1 = det x∇∇ x∇∇ 1 2
= det
xρ1
xρ2
ρ
ρ
ρ ρ ∇ −px∇ 1 − qx1 −px2 − qx2
ρ x1 = det ρ −px∇ 1 ρ
x1 = −p det ρ x∇ 1
xρ2
ρ
−px∇ 2
xρ2 ρ ∇ x2 ρ
x1 x2 = −p det ∇ ∇ x1 x2 = −pW ρ . Hence W is a solution of the initial value problem W ∇ = −p(t)W ρ ,
W (t0 ) = W0 ,
where we put W0 = W (t0 ). Since p ∈ C1ld we get from Theorem 37 that the unique solution of this initial value problem is given by W (t) = eˆ ν p (t, t0 )W0 . Corollary 57. Assume q ∈ Cld . Then the Wronskian of any two solutions of x∇∇ + q(t)xρ = 0 is independent of t.
NABLA DYNAMIC EQUATIONS
27
Proof. Let t0 ∈ T. The above assumptions ensure that (30) with p ≡ 0 is ν-regressive. Since ν p ≡ 0, we have eˆ ν p (t, t0 ) ≡ 1, and by Theorem 56, W (x1 , x2 )(t) ≡ W (x1 , x2 )(t0 ) follows, where x1 and x2 can be any two solutions of (30).
7. Nabla Hyperbolic and Trigonometric Functions Here and in the following section we consider the second order linear dynamic homogeneous equation with constant coefficients y ∇∇ + αy ∇ + βy = 0
(32)
with
α, β ∈ R,
on a time scale T. We assume throughout that the dynamic equation (32) is ν-regressive, i.e., 1 + αν(t) + βν 2 (t) 6= 0 for t ∈ Tκ , i.e., − βν − α ∈ Rν . We try to find numbers λ ∈ C with 1 − λν(t) 6= 0 for t ∈ Tκ such that y(t) = eˆλ (t, t0 ) is a solution of (32). Note that if y(t) = eˆλ (t, t0 ), then y ∇∇ (t) + αy ∇ (t) + βy(t) = λ2 eˆλ (t, t0 ) + αλˆ eλ (t, t0 ) + βˆ eλ (t, t0 ) 2 = λ + αλ + β eˆλ (t, t0 ). Since eˆλ (t, t0 ) does not vanish, y(t) = eˆλ (t, t0 ) is a solution of (32) iff λ satisfies the so-called characteristic equation λ2 + αλ + β = 0.
(33)
The solutions λ1 and λ2 of the characteristic equation (33) are given by p p −α + α2 − 4β −α − α2 − 4β and λ2 = . (34) λ1 = 2 2 Exercise 58. Let λ1 and λ2 be defined as in (34). Show that (32) is regressive iff λ1 , λ2 ∈ Rν . Theorem 59. Suppose α2 − 4β 6= 0. If −βν − α ∈ Rν , then a general solution of (32) is given by y(t) = c1 eˆλ1 (t, t0 ) + c2 eˆλ2 (t, t0 ), where t0 ∈ Tκ and λ1 , λ2 are given in (34). The solution y of the initial value problem (35)
y ∇∇ + αy ∇ + βy = 0,
y(t0 ) = y0 ,
y ∇ (t0 ) = y0∇
is given by y(t) = y0
eˆλ1 (t, t0 ) + eˆλ2 (t, t0 ) αy0 + 2y0∇ eˆλ2 (t, t0 ) − eˆλ1 (t, t0 ) +p . 2 2 α2 − 4β
28
PETERSON AND ANDERSON, ET AL
Proof. Since λ1 and λ2 given in (34) are solutions of the characteristic equation (33), we find that (note λ1 , λ2 ∈ Rν according to Exercise (58)) both eˆλ1 (·, t0 ) and eˆλ2 (·, t0 ) solve the dynamic equation (32). Moreover, the Wronskian of these two solutions is found to be eˆλ2 (t, t0 ) eˆλ1 (t, t0 ) = (λ2 − λ1 )ˆ det eλ1 (t, t0 )ˆ eλ2 (t, t0 ) λ1 eˆλ1 (t, t0 ) λ2 eˆλ2 (t, t0 ) p α2 − 4βˆ eλ1 ⊕ν λ2 (t, t0 ), = which does not vanish since α2 − 4β 6= 0. Hence the pair of functions eˆλ1 (·, t0 )
and
eˆλ2 (·, t0 ),
is a fundamental system of (32). The proof of the last statement in this theorem is straight forward and so will be omitted. When α = 0 and β < 0, we are interested in the nabla hyperbolic functions defined as follows. Definition 60 (Nabla Hyperbolic Functions). If p ∈ Cld and νp2 ∈ Rν , then we define the d p (·, t0 ) and sinh d p (·, t0 ) by nabla hyperbolic functions cosh d p (t, t0 ) := eˆp (t, t0 ) + eˆ−p (t, t0 ) cosh 2
and
d p (t, t0 ) := eˆp (t, t0 ) − eˆ−p (t, t0 ) sinh 2
for t ∈ T. Note that the ν-regressivity condition on νp2 is equivalent to 0 6= 1 − ν 2 p2 = (1 − νp)(1 + νp) and hence is equivalent to both p and −p being ν-regressive. In the following, if f is a function in two variables, then by f ∇ we mean the nabla derivative with respect to the first variable. Lemma 61. Let p ∈ Cld . If νp2 ∈ Rν and t0 ∈ Tκ , then for t ∈ T we have d ∇ (t, t0 ) = p(t)sinh d p (t, t0 ); (i) cosh p ∇ d p (t, t0 ); d (t, t0 ) = p(t)cosh (ii) sinh p
(iii) (iv) (v) (vi)
d 2 (t, t0 ) − sinh d 2 (t, t0 ) = eˆνp2 (t, t0 ); cosh p p d d coshp (t, t0 ) + sinhp (t, t0 ) = eˆp (t, t0 ); d p (t, t0 ) − sinh d p (t, t0 ) = eˆ−p (t, t0 ); cosh d p (t, t0 ), sinh d p (t, t0 )] = p(t)ˆ W [cosh eνp2 (t, t0 ).
NABLA DYNAMIC EQUATIONS
29
Proof. Using Definition 60, the first two formulas are easily verified, while the formula in (iii) follows from 2 2 eˆp + eˆ−p 2 eˆp − eˆ−p 2 d d coshp − sinhp = − 2 2 2 2 eˆp + 2ˆ ep eˆ−p + eˆ−p eˆ2p − 2ˆ ep eˆ−p + eˆ2−p = − 4 4 = eˆp eˆ−p = eˆ−p ⊕ν p = eˆνp2 , where we have used Theorem 14 (vi). The Euler type formulas (iv) and (v) follow immediately from Definition 60. Finally note that
d d sinhp (t0 , t0 ) coshp (t0 , t0 ) d p (t, t0 ), sinh d p (t, t0 ) W cosh = det d p (t, t0 ) p(t)cosh d p (t, t0 ) p(t)sinh h i d 2 (t, t0 ) − sinh d 2 (t, t0 ) = p(t)ˆ = p(t) cosh eνp2 (t, t0 ) p p so that (vi) holds. d γ (·, t0 ) and sinh d γ (·, t0 ) are Exercise 62. Show that if γ > 0 with γ 2 ν ∈ Rν , then cosh solutions of y ∇∇ − γ 2 y = 0.
(36)
Theorem 63. If γ > 0 with γ 2 ν ∈ Rν , then a general solution of (36) is given by d γ (t, t0 ) + c2 sinh d γ (t, t0 ), y(t) = c1 cosh where c1 and c2 are constants. d γ (·, t0 ) and sinh d γ (·, t0 ) are solutions of (36). From Theorem 61, Proof. From Exercise 62, cosh (vi) we get that these solutions are a fundamental set of solutions and hence the conclusion of this theorem follows from Theorem 51. Exercise 64. Show that if γ > 0 and γ 2 ν ∈ Rν , then the solution of the IVP y ∇∇ − γ 2 y = 0,
y(t0 ) = y0 ,
y ∇ (t0 ) = y0∇
is given by ∇ d γ (t, t0 ). d γ (t, t0 ) + y0 sinh y(t) = y0 cosh γ
30
PETERSON AND ANDERSON, ET AL
Theorem 65. Suppose α2 − 4β > 0. Define p α α2 − 4β p=− and q = . 2 2 If p and −βν − α are ν-regressive and t0 ∈ Tκ , then a general solution of (32) is given by d q/(1−νp) (t, t0 ) + c2 eˆp (t, t0 )sinh d q/(1−νp) (t, t0 ), y(t) = c1 eˆp (t, t0 )cosh where t0 ∈ T, and the Wronskian d q/(1−νp) (t, t0 ), eˆp (t, t0 )sinh d q/(1−νp) (t, t0 ] = qˆ W [ˆ ep (t, t0 )cosh e−νβ−α (t, t0 ) for t ∈ Tκ . The solution of the IVP (35) is given by ∇ − py y 0 0 d q/(1−νp) (t, t0 ) . d q/(1−νp) (t, t0 ) + sinh y(t) = eˆp (t, t0 ) y0 cosh q Proof. In this proof we use the convention that eˆp = eˆp (·, t0 ) d p and sinh d p . We apply Theorem 59 to find two solutions of (32) as and similarly for cosh eˆp+q
and
eˆp−q .
By Theorem 46 we can construct two other solutions of (32) by y1 =
eˆp+q + eˆp−q 2
and
y2 =
eˆp+q − eˆp−q . 2
We use the formulas p ⊕ν
q 1 − νp
=p+
q νpq − =p+q 1 − νp 1 − νp
and q νpq q =p− + =p−q 1 − νp 1 − νp 1 − νp to obtain, by using Theorem 14 (vi),
p ⊕ν
−
y1 = = = = =
eˆp+q + eˆp−q 2 eˆp⊕ν (q/(1−νp)) + eˆp⊕ν (−q/(1−νp)) 2 eˆp eˆq/(1−νp) + eˆp eˆ−q/(1−νp) 2 eˆq/(1−νp) + eˆ−q/(1−νp) eˆp 2 d eˆp coshq/(1−νp)
and similarly y2 =
eˆp+q − eˆp−q d q/(1−νp) . = eˆp sinh 2
NABLA DYNAMIC EQUATIONS
31
Instead of using Abel’s Theorem (Theorem 53) we will calculate this Wronskian directly. First we find, by using Theorem 14 (ii) and Lemma 61, that ∇
d d y1∇ = eˆ∇ ˆρp cosh p coshq/(1−νp) + e q/(1−νp) q ρ d q/(1−νp) + eˆ d q/(1−νp) = pˆ ep cosh sinh p 1 − νp d q/(1−νp) + qˆ d q/(1−νp) = pˆ ep cosh ep sinh and similarly that d q/(1−νp) + qˆ d q/(1−νp) . y2∇ = pˆ ep sinh ep cosh Then the Wronskian of y1 and y2 is given by d q/(1−νp) d q/(1−νp) eˆp cosh eˆp sinh det d q/(1−νp) + qˆ d q/(1−νp) pˆ d q/(1−νp) + qˆ d q/(1−νp) pˆ ep cosh ep sinh ep sinh ep cosh d q/(1−νp) eˆp sinh d q/(1−νp) eˆp cosh = det d q/(1−νp) qˆ d q/(1−νp) qˆ ep sinh ep cosh h i d2 d2 = qˆ e2p cosh − sinh q/(1−νp) q/(1−νp) = = = =
qˆ e2p eˆνq2 /(1−νp)2 qˆ ep(2−νp)⊕ν (νq2 /(1−νp)2 ) qˆ e2p−ν(p2 −q2 ) qˆ e−νβ−α ,
where we have used Lemma 61.
When α = 0 and β > 0, we are interested in the nabla trigonometric functions defined as follows. Definition 66 (Nabla Trigonometric Functions). If p ∈ Cld and −νp2 ∈ Rν , then we define c p (·, t0 ) by the nabla trigonometric functions cos c p (·, t0 ) and sin cos c p (t, t0 ) :=
eˆip (t, t0 ) + eˆ−ip (t, t0 ) 2
and
c p (t, t0 ) := eˆip (t, t0 ) − eˆ−ip (t, t0 ) sin 2i
for t ∈ T. Note that −νp2 is ν-regressive iff both ip and −ip are ν-regressive, so cos c p (·, t0 ) and c p (·, t0 ) in Definition 66 are well defined. sin The proofs of Lemma 67 and Theorem 70 are similar to the proofs of Lemma 61 and Theorem 65 and hence will left as exercises (Exercises 68 and 71). Lemma 67. Let p ∈ Cld . If −νp2 ∈ Rν and t0 ∈ Tκ , then we have for t ∈ Tκ c (i) cos c∇ p (t, t0 ) = −p(t)sinp (t, t0 );
32
PETERSON AND ANDERSON, ET AL
c∇ (ii) sin c p (t, t0 ); p (t, t0 ) = p(t)cos 2 2 c p (t, t0 ) = eˆ−νp2 (t, t0 ); (iii) cos c p (t, t0 ) + sin c p (t, t0 ); (iv) eˆip (t, t0 ) = cos c p (t, t0 ) + isin c p (t, t0 ); (v) eˆ−ip (t, t0 ) = cos c p (t, t0 ) − isin c p (t, t0 )] = p(t)ˆ (vi) W [cos c p (t, t0 ), sin e−νp2 (t, t0 ). We remark that for p ∈ R, the regressivity condition on −νp2 is always satisfied. Exercise 68. Prove Lemma 67. Theorem 69. Assume γ > 0 and t0 ∈ Tκ . Then c γ (t, t0 ) y(t) = c1 cos c γ (t, t0 ) + c2 sin is a general solution of y ∇∇ + γ 2 y = 0.
(37)
Proof. Note that the equation (37) is ν-regressive because 1 + γ 2 ν 2 (t) 6= 0 for t ∈ Tκ . c γ (t, t0 ) are solutions of Using Lemma 67,(i), (ii), we can easily show that cos c γ (t, t0 ) and sin c γ (·, t0 ) form a fundamental set of (37). Using Lemma 67,(vi) we get that cos c γ (·, t0 ) and sin solutions of (37). It follows that c γ (t, t0 ) y(t) = c1 cos c γ (t, t0 ) + c2 sin is a general solution of (37).
Theorem 70. Suppose α2 − 4β < 0. Define α p=− 2
p and
q=
4β − α2 . 2
If p and −νβ − α are regressive, then a general solution of (32) is given by c q/(1−νp) (t, t0 ), y(t) = c1 eˆp (t, t0 )cos c q/(1−νp) (t, t0 ) + c2 eˆp (t, t0 )sin where t0 ∈ T, and the Wronskian c q/(1−νp) (t, t0 )] = qˆ W [ˆ ep (t, t0 )cos c q/(1−νp) (t, t0 ), eˆp (t, t0 )sin e−νβ−α (t, t0 ) for t ∈ Tκ . The solution of the initial value problem (35) is given by y(t) = y0 eˆp (t, t0 )cos c q/(1−νp) (t, t0 ) + Exercise 71. Prove Theorem 70.
y0∇ − py0 c q/(1−νp) (t, t0 ). eˆp (t, t0 )sin q
NABLA DYNAMIC EQUATIONS
33
8. Reduction of Order Now we consider the nabla dynamic equation (32) in the case that α2 − 4β = 0. From (34) we have λ1 = λ2 = p, where α p=− . 2 In this case α = −2p, β = p2 , and so the dynamic equation (32) is of the form (38)
y ∇∇ − 2py ∇ + p2 y = 0.
Hence one solution y1 of (32) is given by y1 (t) = eˆp (t, t0 ), where t0 ∈ Tκ . We will now find a second linearly independent solution of (38) using the so-called method of reduction of order. We will look for a second linearly independent solution of the form (39)
y(t) = u(t)ˆ ep (t, t0 ).
By the product rule and Theorem 14 (ii), y ∇ (t) = u∇ (t)ˆ eρp (t, t0 ) + u(t)ˆ e∇ p (t, t0 ) (40)
= u∇ (t)[1 − ν(t)p]ˆ ep (t, t0 ) + pˆ ep (t, t0 )u(t).
Now we must be careful because there are many time scales where the graininess function ν is not nabla differentiable. We assume u is a function such that u∇ (1 − νp) is nabla differentiable. Then from (40) we get using the product rule that ∇ y ∇∇ (t) = u∇ (t)[1 − ν(t)p] eˆρp (t, t0 ) + u∇ (t)[1 − ν(t)p]ˆ e∇ p (t, t0 ) +pˆ eρp (t, t0 )u∇ (t) + pˆ e∇ p (t, t0 )u(t) ∇ ∇ ρ = u (t)[1 − ν(t)p] eˆp (t, t0 ) + u∇ (t)[1 − ν(t)p]pˆ ep (t, t0 ) (41)
+p[1 − ν(t)p]ˆ ep (t, t0 )u∇ (t) + p2 eˆp (t, t0 )u(t) ∇ = u∇ (t)[1 − ν(t)p] eˆρp (t, t0 ) +2p[1 − ν(t)p]ˆ ep (t, t0 )u∇ (t) + p2 eˆp (t, t0 )u(t).
Using (39), (40), and (41) we obtain y ∇∇ (t) − 2py ∇ (t) + p2 y(t) =
∇ u∇ (t)[1 − ν(t)p] eˆρp (t, t0 ) +2p[1 − ν(t)p]ˆ ep (t, t0 )u∇ (t) + p2 eˆp (t, t0 )u(t) −2pu∇ (t)[1 − ν(t)p]ˆ ep (t, t0 ) − 2p2 eˆp (t, t0 )u(t)
+p2 u(t)ˆ ep (t, t0 ) ∇ ∇ = u (t)[1 − ν(t)p] eˆρp (t, t0 ). Hence for y(t) = u(t)ˆ ep (t, t0 ) to be a solution we want to choose u so that ∇ ∇ u (t)[1 − ν(t)p] = 0.
34
PETERSON AND ANDERSON, ET AL
We will get that the above equation is true if we choose u so that u∇ (t)[1 − ν(t)p] = 1. Recall in the steps above that we assumed u∇ (t)[1 − ν(t)p] to be differentiable, and that is true in this case. We then solve for u∇ (t) to get 1 u∇ (t) = . 1 − ν(t)p Hence if we take Z
t
1 ∇τ, 1 − ν(τ )p t0 we can check that all of the above steps are valid and so Z t 1 ∇τ (42) y(t) = eˆp (t, t0 ) 1 − ν(τ )p t0 u(t) =
is a solution of (32). This leads to the following theorem. Theorem 72. Suppose α2 − 4β = 0. Define α p=− . 2 If p ∈ Rν , then a general solution of (32) is given by Z t y(t) = c1 eˆp (t, t0 ) + c2 eˆp (t, t0 ) t0
where t0 ∈ T, and the Wronskian Z W eˆp (t, t0 ), eˆp (t, t0 )
t
t0
1 ∇τ, 1 − pν(τ )
1 ∇τ = eˆ−α−να2 /4 (t, t0 ). 1 − pν(τ )
The solution of the initial value problem (35) is given by Z t ∇τ ∇ . eˆp (t, t0 ) y0 + (y0 − py0 ) t0 1 − pν(τ ) In the next exercise we use a slight variation of the method of reduction of order to find a second linearly independent solution of (38). We also call this the method of reduction of order and we will use this method several times later in this chapter. Exercise 73. In this exercise we outline another method for finding the solution (42) of the dynamic equation (38). The reader should fill in the missing steps. Suppose y is the solution of (38) satisfying the initial conditions y(t0 ) = 0
and
y ∇ (t0 ) = 1
and consider W (ˆ ep (·, t0 ), y)(t) = eˆp (t, t0 )y ∇ (t) − eˆ∇ p (t, t0 )y(t) = eˆp (t, t0 )y ∇ (t) − pˆ ep (t, t0 )y(t) = (y ∇ (t) − py(t))ˆ ep (t, t0 ).
NABLA DYNAMIC EQUATIONS
35
In particular, W (ˆ ep (·, t0 ), y)(t0 ) = y ∇ (t0 ) − py(t0 ) = 1. On the other hand, by Abel’s theorem (Theorem 53), we have W (ˆ ep (·, t0 ), y)(t) = W (ˆ ep (·, t0 ), y)(t0 )ˆ e−νβ−α (t, t0 ) = eˆ−νβ−α (t, t0 ). Hence y is a solution of the first order linear equation (y ∇ − py)ˆ ep (t, t0 ) = eˆ−νβ−α (t, t0 ) or, equivalently according to Theorem 14 (vii), y ∇ − py = eˆ(−νβ−α) ν p (t, t0 ) = eˆp (t, t0 ). Since y(t0 ) = 0, we have by the variation of constants formula given in Theorem 43 that Z t Z t 1 eˆp (t, ρ(τ ))ˆ ep (τ, t0 )∇τ = eˆp (t, t0 ) y(t) = ∇τ. 1 − ν(τ )p t0 t0 9. Nabla Riccati Equation In this section we will introduce and briefly study the nabla Riccati equation associated with the second order linear equation ρ
M2 x = x∇∇ + p(t)x∇ + q(t)xρ = 0. 1 ∩ R we define the nabla Riccati We assume throughout that p and q be in Cld . For z ∈ Cld ν operator R1 by 2 R1 z = z ∇ + q + pz ρ + z ,
where the nabla generalized square of z ∈ Rν is defined by 2 z := (−z)( ν z) =
z2 . 1 − νz
1 ∩ R and R z(t) = 0 for t ∈ T . We say z is a solution of R1 z = 0 on Tκ provided z ∈ Cld ν 1 κ2 First we prove the following factorization theorem relating the second order linear operator M2 and the first order nonlinear Riccati operator R1 . 2 and x(t) 6= 0 for t ∈ T. Then z := Theorem 74. Assume x ∈ Cld
x∇ x
∈ Rν and
xρ (t)R1 z(t) = M2 x(t) for t ∈ Tκ2 . 2 and x(t) 6= 0 for t ∈ T and let z := Proof. Assume x ∈ Cld
1 − ν(t)z(t) = 1 − ν(t) so z ∈ Rν . Next consider
x∇ x .
First note that
x∇ (t) x(t) − ν(t)x∇ (t) xρ (t) = = 6= 0 x(t) x(t) x(t)
36
PETERSON AND ANDERSON, ET AL
2 xρ R1 z = xρ [z ∇ + q + pz ρ + z ] ( ∇ ) ρ x∇ x∇ ρ
2 = x +q+p ρ +z x x ∇∇ ρ xx − (x∇ )2 x∇ (x∇ )2 ρ = x +q+p ρ + xxρ x x[x − νx∇ ]
= x∇∇ −
(x∇ )2 (x∇ )2 ρ + qxρ + px∇ + x x
= M2 x. x∇
2 , x(t) 6= 0 for t ∈ T, and z := Corollary 75. Assume x ∈ Cld x . Then x is a solution of M2 x = 0 on T iff z is a solution of R1 z = 0 on Tκ . In particular if z is a solution of R1 z = 0 on Tκ , then x = eˆz (·, t0 ) is a solution of M2 x = 0 on T.
Proof. The first statement follows from Theorem 74. Now assume z is a solution of R1 z = 0, then z ∈ Rν and it follows from Theorem 74 that the solution x = eˆz (·, 0) of the IVP x∇ = z(t)x,
x(t0 ) = 1
is a solution M2 x = 0.
10. ν-Regressive Matrices
Definition 76. Let A be an m × n-matrix-valued function on T. We say that A is ldcontinuous on T if each entry of A is ld-continuous on T, and the class of all such ldcontinuous m × n-matrix-valued functions on T is denoted by Cld = Cld (T) = Cld (T, Rm×n ). We say that A is nabla differentiable on T provided each entry of A is nabla differentiable on T, and in this case we put A∇ = (a∇ ij )1≤i≤m,1≤j≤n ,
where
A = (aij )1≤i≤m,1≤j≤n .
Theorem 77. If A is nabla differentiable at t ∈ Tκ , then Aρ (t) = A(t) − ν(t)A∇ (t). Proof. We obtain Aρ = (aρij ) = (aij − νa∇ ij ) = (aij ) − ν(a∇ ij ) = A − νA∇ at t.
Theorem 78. Suppose A and B are nabla differentiable n × n-matrix-valued functions. Then
NABLA DYNAMIC EQUATIONS
(i) (ii) (iii) (iv) (v)
37
(A + B)∇ = A∇ + B ∇ ; (αA)∇ = αA∇ if α is constant; (AB)∇ = A∇ B ρ + AB ∇ = Aρ B ∇ + A∇ B; (A−1 )∇ = −(Aρ )−1 A∇ A−1 = −A−1 A∇ (Aρ )−1 if AAρ is invertible; (AB −1 )∇ = (A∇ −AB −1 B ∇ )(B ρ )−1 = (A∇ −(AB −1 )ρ B ∇ )B −1 if BB ρ is invertible.
Proof. We only show the first parts of (iii) and (iv) and leave the remainder of the proof as an exercise. Let 1 ≤ i, j ≤ n. We calculate the ijth entry of (AB)∇ : !∇ n n X X aik bkj = (aik bkj )∇ k=1
k=1
=
n X
ρ ∇ a∇ ik bkj + aik bkj
k=1
=
n X
ρ a∇ ik bkj
+
k=1
and this is the ijth entry of the matrix (iii) to differentiate I = AA−1 :
A∇ B ρ + AB ∇
n X
aik b∇ kj ,
k=1
(see Definition 76). Next, we use part
0 = A∇ A−1 + Aρ (A−1 )∇ , and solving for (A−1 )∇ yields the first formula in (iv).
We consider the linear system of dynamic equations y ∇ = A(t)y,
(43)
where A is an n × n-matrix-valued function on T. A vector-valued y : T → Rn is said to be a solution of (43) on T provided y ∇ (t) = A(t)y(t) holds for each t ∈ Tκ . In order to state the main theorem on solvability of initial value problems involving equation (43), we make the following definition. Definition 79 (Regressivity). An n × n-matrix-valued function A on a time scale T is called ν-regressive (with respect to T) provided (44)
I − ν(t)A(t)
is invertible for all
t ∈ Tκ ,
and the class of all such ν-regressive and ld-continuous matrix functions is denoted by Rν = Rν (T) = Rν (T, Rn×n ). We say that the system (43) is ν-regressive provided A ∈ Rν . Exercise 80. Show that the n×n-matrix-valued function A is ν-regressive iff the eigenvalues λi (t) of A(t) are ν-regressive for all 1 ≤ i ≤ n. Exercise 81. Show that a 2 × 2-matrix-valued function A is ν-regressive iff the scalar-valued function tr A − ν det A is ν-regressive (where tr A denotes the trace of the matrix A, i.e., the sum of the diagonal elements of A). Derive similar characterizations for the 3 × 3-case, the 4 × 4-case, and the general n × n-case.
38
PETERSON AND ANDERSON, ET AL
Now the existence and uniqueness theorem for initial value problems for equation (43) reads as follows. Theorem 82 (Existence and Uniqueness Theorem). Let A ∈ Rν be an n × n-matrix-valued function on T and suppose that f : T → Rn is ld-continuous. Let t0 ∈ T and y0 ∈ Rn . Then the initial value problem y ∇ = A(t)y + f (t), y(t0 ) = y0 has a unique solution y : T → Rn . It follows from Theorem 82 that the matrix initial value problem Y ∇ = A(t)Y,
(45)
Y (t0 ) = Y0 ,
where Y0 is a constant n × n-matrix, has a unique (matrix-valued) solution Y . Before we define and give some properties of the important nabla exponential matrix function we define the “ν circle plus” addition ⊕ν and the “ν circle minus” subtraction ν . Definition 83. Assume A and B are ν-regressive n×n-matrix-valued functions on T. Then we define A ⊕ν B by (A ⊕ν B)(t) := A(t) + B(t) − ν(t)A(t)B(t)
for all
t ∈ Tκ ,
and we define ν A by ( ν A)(t) := −[I − ν(t)A(t)]−1 A(t)
for all
t ∈ Tκ .
for all
t ∈ Tκ .
Exercise 84. Show that if A is ν-regressive on T, then ( ν A)(t) = −A(t)[I − ν(t)A(t)]−1 Lemma 85. (Rν (T, Rn×n ), ⊕ν ) is a group. Proof. Let A, B ∈ Rν . Then I − ν(t)A(t) and I − ν(t)B(t) are invertible for all t ∈ Tκ , and therefore I − ν(t)(A ⊕ν B)(t) = I − ν(t)[A(t) + B(t) − ν(t)A(t)B(t)] = I − ν(t)A(t) − ν(t)B(t) + ν 2 (t)A(t)B(t) = [I − ν(t)A(t)][I − ν(t)B(t)] is also invertible for each t ∈ Tκ , being the product of two invertible matrices. Hence (note that A ⊕ν B ∈ Cld ) A ⊕ν B ∈ Rν . Next, if A ∈ Rν , then ν A defined in Definition 83 satisfies (A ⊕ν ( ν A))(t) = 0 for all t ∈ Tκ . It remains to show that ν A ∈ Rν . But this follows since because of Exercise 84 I − ν(t)( ν A)(t) = I + ν(t)A(t)[I − ν(t)A(t)]−1 = [I − ν(t)A(t)][I − ν(t)A(t)]−1 + ν(t)A(t)[I − ν(t)A(t)]−1 = [I − ν(t)A(t) + ν(t)A(t)][I − ν(t)A(t)]−1 = [I − ν(t)A(t)]−1
NABLA DYNAMIC EQUATIONS
39
is invertible for all t ∈ Tκ . In Exercise 86, the reader is asked to check that ⊕ν satisfies the associative law. Exercise 86. Show that ⊕ν satisfies the associative law, i.e., A ⊕ν (B ⊕ν C) = (A ⊕ν B) ⊕ν C
for all
A, B, C ∈ Rν .
Definition 87. If the matrix-valued functions A and B are ν-regressive on T, then we define A ν B by (A ν B)(t) = (A ⊕ν ( ν B))(t)
for all
t ∈ Tκ .
If A is a matrix, then we let A∗ denote its conjugate transpose. Exercise 88. Show that (A∗ )∇ = (A∇ )∗ holds for any nabla differentiable matrix-valued function A. Exercise 89. Suppose A and B are ν-regressive matrix-valued functions. Show that (i) A∗ is ν-regressive; (ii) ν A∗ = ( ν A)∗ ; (iii) (A ⊕ν B)∗ = B ∗ ⊕ν A∗ . Definition 90 (Nabla Matrix Exponential Function). Let t0 ∈ T and assume that A ∈ Rν is an n × n-matrix-valued function. The unique matrix-valued solution of the IVP Y ∇ = A(t)Y,
Y (t0 ) = I,
where I denotes as usual the n × n-identity matrix, is called the nabla matrix exponential function (at t0 ), and it is denoted by eˆA (·, t0 ). In the following theorem we give some properties of the nabla matrix exponential function. Theorem 91. If A, B ∈ Rν are matrix-valued functions on T, then (i) (ii) (iii) (iv) (v) (vi)
eˆ0 (t, s) ≡ I and eˆA (t, t) ≡ I; eˆA (ρ(t), s) = (I − ν(t)A(t))ˆ eA (t, s); ∗ (t, s) = e ˆ (t, s); eˆ−1 ∗ ν A A eˆA (t, s) = eˆ−1 ˆ∗ ν A∗ (s, t); A (s, t) = e eˆA (t, τ )ˆ eA (τ, s) = eˆA (t, s); eˆA (t, s)ˆ eB (t, s) = eˆA⊕ν B (t, s) if eˆA (t, s) and B(t) commute.
Proof. Part (i). Consider the initial value problem Y ∇ = 0,
Y (s) = I,
which has exactly one solution by Theorem 82. Since Y (t) ≡ I is a solution of this IVP, we have eˆ0 (t, s) = Y (t) ≡ I. Clearly eˆA (t, t) = I. Part (ii). According to Theorem 77 we have eˆA (ρ(t), s) = eˆA (t, s) − ν(t)ˆ e∇ A (t, s) = eˆA (t, s) − ν(t)A(t)ˆ eA (t, s) = (I − ν(t)A(t))ˆ eA (t, s).
40
PETERSON AND ANDERSON, ET AL ∗ Part (iii). Let Y (t) = (ˆ e−1 A (t, s)) . Then
∗ Y ∇ (t) = − eˆ−1 e∇ e−1 A (t, s)ˆ A (ρ(t), s)ˆ A (t, s) ∗ = − eˆ−1 eA (t, s)ˆ e−1 A (ρ(t), s)A(t)ˆ A (t, s) ∗ = − eˆ−1 A (ρ(t), s)A(t) ∗ −1 = − eˆ−1 A (t, s)(I − ν(t)A(t)) A(t) ∗ = eˆ−1 A (t, s)( ν A)(t) ∗ = ( ν A∗ )(t)(ˆ e−1 A (t, s)) = ( ν A∗ )(t)Y (t),
where we have used (ii) and Exercise 89 (ii). Also, ∗ −1 ∗ Y (s) = (ˆ e−1 ) = I. A (s, s)) = (I
Hence Y solves the initial value problem Y ∇ = ( ν A∗ )(t)Y,
Y (s) = I,
which has exactly one solution according to Theorem 82, and therefore we have eˆ ν A∗ (t, s) = ∗ ˆ∗ ν A∗ (t, s). ˆ−1 Y (t) = (ˆ e−1 A (t, s) = e A (t, s)) so that e Part (iv). This is Exercise 92. Part (v). Consider Y (t) = eˆA (t, τ )ˆ eA (τ, s). Then Y ∇ (t) = eˆ∇ eA (τ, s) = A(t)ˆ eA (t, τ )ˆ eA (τ, s) = A(t)Y (t) A (t, τ )ˆ and Y (s) = eˆA (s, s)ˆ eA (s, s) = I according to (iv). Hence Y solves the IVP Y ∇ = A(t)Y,
Y (s) = I,
which has exactly one solution according to Theorem 82, and therefore we have ê_A(t, s) = Y(t) = ê_A(t, τ) ê_A(τ, s).

Part (vi). Let Y(t) = ê_A(t, s) ê_B(t, s) and suppose that ê_A(t, s) and B(t) commute. We use Theorem 77 and Theorem 78 (iii) to calculate
Y ∇(t) = ê_A^∇(t, s) ê_B^ρ(t, s) + ê_A(t, s) ê_B^∇(t, s)
= A(t) ê_A(t, s)(I − ν(t)B(t)) ê_B(t, s) + ê_A(t, s)B(t) ê_B(t, s)
= A(t)(I − ν(t)B(t)) ê_A(t, s) ê_B(t, s) + B(t) ê_A(t, s) ê_B(t, s)
= [A(t)(I − ν(t)B(t)) + B(t)] ê_A(t, s) ê_B(t, s)
= (A ⊕ν B)(t) ê_A(t, s) ê_B(t, s)
= (A ⊕ν B)(t)Y(t).
Also Y(s) = ê_A(s, s) ê_B(s, s) = I · I = I. So Y solves the initial value problem
Y ∇ = (A ⊕ν B)(t)Y,   Y(s) = I,
which has exactly one solution according to Theorem 82, and therefore we have ê_{A ⊕ν B}(t, s) = Y(t) = ê_A(t, s) ê_B(t, s).

Exercise 92. Prove part (iv) of Theorem 91.
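The following sketch (my own, not from the paper) spot-checks some parts of Theorem 91 numerically on T = Z, where ν(t) = 1 and the nabla exponential can be built step by step from Y(t) = (I − A(t))^{-1} Y(t − 1).

```python
import numpy as np

# A minimal sketch (my own, assuming T = Z): Y^nabla(t) = Y(t) - Y(t-1), so
# Y^nabla = A(t)Y gives Y(t) = (I - A(t))^{-1} Y(t-1); iterating yields e_A(t, t0).
# We use this to spot-check parts (ii), (v) and (vi) of Theorem 91.
rng = np.random.default_rng(1)
n, t0, t1 = 3, 0, 6
I = np.eye(n)
A = {t: 0.1 * rng.normal(size=(n, n)) for t in range(t0, t1 + 1)}

def e_hat(M, t, s):
    """Nabla matrix exponential e_M(t, s) on T = Z for t >= s."""
    Y = np.eye(n)
    for tau in range(s + 1, t + 1):
        Y = np.linalg.inv(I - M[tau]) @ Y
    return Y

# (ii): e_A(rho(t), s) = (I - A(t)) e_A(t, s), with rho(t) = t - 1 on Z
assert np.allclose(e_hat(A, t1 - 1, t0), (I - A[t1]) @ e_hat(A, t1, t0))
# (v): e_A(t, tau) e_A(tau, s) = e_A(t, s)
assert np.allclose(e_hat(A, t1, 3) @ e_hat(A, 3, t0), e_hat(A, t1, t0))
# (vi) requires e_A(t, s) and B(t) to commute, so we take diagonal D1, D2 here:
D1 = {t: 0.1 * np.diag(rng.normal(size=n)) for t in range(t0, t1 + 1)}
D2 = {t: 0.1 * np.diag(rng.normal(size=n)) for t in range(t0, t1 + 1)}
D12 = {t: D1[t] + D2[t] - D1[t] @ D2[t] for t in range(t0, t1 + 1)}  # D1 (+)_nu D2
assert np.allclose(e_hat(D1, t1, t0) @ e_hat(D2, t1, t0), e_hat(D12, t1, t0))
print("Theorem 91 (ii), (v), (vi) verified on T = Z")
```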
Theorem 93. If A ∈ Rν and a, b, c ∈ T, then
[ê_A(c, ·)]∇ = −[ê_A(c, ·)]^ρ A
and
∫_a^b ê_A(c, ρ(t))A(t)∇t = ê_A(c, a) − ê_A(c, b).
Proof. We use the properties given in Theorem 91 and Exercise 89 (ii) to find
ê_A(c, ρ(t))A(t) = ê*_{⊖ν A*}(ρ(t), c)A(t)
= {[I − ν(t)(⊖ν A*)(t)] ê_{⊖ν A*}(t, c)}* A(t)
= ê*_{⊖ν A*}(t, c) [I − ν(t)(⊖ν A*)(t)]* A(t)
= ê*_{⊖ν A*}(t, c) [I + ν(t){A*(t)[I − ν(t)A*(t)]^{-1}}]* A(t)
= ê*_{⊖ν A*}(t, c) {[I − ν(t)A*(t) + ν(t)A*(t)][I − ν(t)A*(t)]^{-1}}* A(t)
= ê*_{⊖ν A*}(t, c)[I − ν(t)A(t)]^{-1} A(t)
= {A*(t)[I − ν(t)A*(t)]^{-1} ê_{⊖ν A*}(t, c)}*
= −{(⊖ν A*)(t) ê_{⊖ν A*}(t, c)}*
= −{ê_{⊖ν A*}^∇(t, c)}*
= −[ê_A(c, ·)]∇(t).
This proves the first claim; integrating it from a to b yields the second.
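A short numerical check of the integral identity in Theorem 93 (my own sketch, taking T = Z, where ρ(t) = t − 1 and the nabla integral is a sum):

```python
import numpy as np

# A small check (my own, assuming T = Z) of Theorem 93:
#   int_a^b e_A(c, rho(t)) A(t) nabla t = e_A(c, a) - e_A(c, b),
# where on Z the nabla integral is sum_{t=a+1}^{b} and rho(t) = t - 1.
rng = np.random.default_rng(2)
n, a, b, c = 2, 0, 5, 8
I = np.eye(n)
A = {t: 0.1 * rng.normal(size=(n, n)) for t in range(a, c + 1)}

def e_hat(t, s):
    """e_A(t, s) on T = Z for t >= s (same construction as above)."""
    Y = np.eye(n)
    for tau in range(s + 1, t + 1):
        Y = np.linalg.inv(I - A[tau]) @ Y
    return Y

lhs = sum(e_hat(c, t - 1) @ A[t] for t in range(a + 1, b + 1))
rhs = e_hat(c, a) - e_hat(c, b)
assert np.allclose(lhs, rhs)
print("Theorem 93 integral identity verified on T = Z")
```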
Together with the homogeneous equation (43) we also consider the nonhomogeneous equation (46)
y ∇ = A(t)y + f (t),
where f : T → Rn is a vector-valued function. We have the following result for initial value problems for equation (46). Theorem 94 (Variation of Constants). Let A ∈ Rν be an n × n-matrix-valued function on T and suppose that f : T → Rn is ld-continuous. Let t0 ∈ T and y0 ∈ Rn . Then the initial value problem (47)
y ∇ = A(t)y + f (t),
y(t0 ) = y0
has a unique solution y : T → Rn. Moreover, this solution is given by
(48)   y(t) = ê_A(t, t0)y0 + ∫_{t0}^{t} ê_A(t, ρ(τ))f(τ)∇τ.

Proof. First, y given by (48) is well defined and can be rewritten because of Theorem 91 (v) as
y(t) = ê_A(t, t0) [ y0 + ∫_{t0}^{t} ê_A(t0, ρ(τ))f(τ)∇τ ].
We use the product rule to differentiate y:
y∇(t) = A(t) ê_A(t, t0) [ y0 + ∫_{t0}^{t} ê_A(t0, ρ(τ))f(τ)∇τ ] + ê_A(ρ(t), t0) ê_A(t0, ρ(t))f(t)
= A(t)y(t) + f(t).
Obviously, y(t0) = y0. Therefore y is a solution of (47).

Now we show that y is the only solution of (47). Assume ỹ is another solution of (47) and put u(t) = ê_A(t0, t)ỹ(t). By Theorem 91 (v) we have ỹ(t) = ê_A(t, t0)u(t) and therefore
A(t) ê_A(t, t0)u(t) + f(t) = A(t)ỹ(t) + f(t) = ỹ∇(t) = A(t) ê_A(t, t0)u(t) + ê_A(ρ(t), t0)u∇(t),
so u∇(t) = ê_A(t0, ρ(t))f(t). Since u(t0) must be equal to y0, this yields
u(t) = y0 + ∫_{t0}^{t} ê_A(t0, ρ(τ))f(τ)∇τ
and therefore ỹ = y, where y is given by (48).
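The variation of constants formula (48) is easy to test on T = Z, where the IVP can also be solved by the direct backward recursion y(t) = (I − A(t))^{-1}(y(t − 1) + f(t)); the following hedged sketch (my own) compares the two.

```python
import numpy as np

# A hedged sketch (my own, assuming T = Z) checking formula (48):
#   y^nabla = A(t)y + f(t), y(t0) = y0, i.e. y(t) = (I - A(t))^{-1}(y(t-1) + f(t)).
rng = np.random.default_rng(3)
n, t0, t1 = 2, 0, 7
I = np.eye(n)
A = {t: 0.1 * rng.normal(size=(n, n)) for t in range(t0, t1 + 1)}
f = {t: rng.normal(size=n) for t in range(t0, t1 + 1)}
y0 = rng.normal(size=n)

def e_hat(t, s):                       # nabla exponential on Z, t >= s
    Y = np.eye(n)
    for tau in range(s + 1, t + 1):
        Y = np.linalg.inv(I - A[tau]) @ Y
    return Y

# direct recursion
y = y0.copy()
for t in range(t0 + 1, t1 + 1):
    y = np.linalg.solve(I - A[t], y + f[t])

# formula (48): y(t1) = e_A(t1, t0) y0 + sum_{tau=t0+1}^{t1} e_A(t1, rho(tau)) f(tau)
y48 = e_hat(t1, t0) @ y0 + sum(e_hat(t1, tau - 1) @ f[tau]
                               for tau in range(t0 + 1, t1 + 1))
assert np.allclose(y, y48)
print("variation of constants formula (48) verified on T = Z")
```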
In order to apply Theorem 91 (vi), it is important to know under what conditions the two matrices ê_A and B commute. We investigate this problem next.

Theorem 95. Suppose A ∈ Rν and C is nabla differentiable. If C is a solution of the nabla dynamic equation
C ∇ = A(t)C − C^ρ A(t),
then
C(t) ê_A(t, s) = ê_A(t, s)C(s).

Proof. Fix s ∈ T and consider F(t) = C(t) ê_A(t, s) − ê_A(t, s)C(s). Then F(s) = 0 and
F ∇(t) = C(ρ(t))A(t) ê_A(t, s) + C ∇(t) ê_A(t, s) − A(t) ê_A(t, s)C(s)
= [C ∇(t) + C(ρ(t))A(t) − A(t)C(t)] ê_A(t, s) + A(t)[C(t) ê_A(t, s) − ê_A(t, s)C(s)]
= A(t)F(t).
Hence F solves the initial value problem
F ∇ = A(t)F,   F(s) = 0.
By the variation of constants theorem (Theorem 94) we have F ≡ 0, which means that C(t) ê_A(t, s) = ê_A(t, s)C(s).

Corollary 96. Suppose A ∈ Rν and C is a constant matrix. If C commutes with A, then C commutes with ê_A. In particular, if A is a constant matrix, then A commutes with ê_A.
Proof. This follows from Theorem 95 since C ∇ ≡ 0 in this case.
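A tiny illustration of Corollary 96 (my own, assuming T = Z, where the exponential of a constant matrix is a power of (I − A)^{-1}):

```python
import numpy as np

# A quick numerical illustration (my own, assuming T = Z) of Corollary 96:
# for a constant regressive matrix A, e_A(t, s) = (I - A)^{-(t-s)} for t >= s,
# which is a function of A and hence commutes with A.
A = np.array([[0.1, 0.3],
              [0.0, -0.2]])
I = np.eye(2)
e_A = np.linalg.matrix_power(np.linalg.inv(I - A), 5)   # e_A(t, s) with t - s = 5
assert np.allclose(A @ e_A, e_A @ A)
print("constant A commutes with e_A(t, s) on T = Z")
```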
As in the scalar case, along with (43), we consider its adjoint equation
(49)   x∇ = −A*(t)x^ρ.
We have the following result on initial value problems involving the adjoint equation.

Theorem 97 (Variation of Constants). Let A ∈ Rν be an n × n-matrix-valued function on T and suppose that f : T → Rn is ld-continuous. Let t0 ∈ T and x0 ∈ Rn. Then the initial value problem
(50)   x∇ = −A*(t)x^ρ + f(t),   x(t0) = x0
has a unique solution x : T → Rn. Moreover, this solution is given by
(51)   x(t) = ê_{⊖ν A*}(t, t0)x0 + ∫_{t0}^{t} ê_{⊖ν A*}(t, τ)f(τ)∇τ.
Proof. We rewrite
x∇ = −A*(t)x^ρ + f(t) = −A*(t)[x − ν(t)x∇] + f(t) = −A*(t)x + ν(t)A*(t)x∇ + f(t),
i.e.,
[I − ν(t)A*(t)] x∇ = −A*(t)x + f(t).
Since A is regressive, A* is regressive as well (see Exercise 89 (i)), and hence the above is equivalent to
x∇ = −[I − ν(t)A*(t)]^{-1} A*(t)x + [I − ν(t)A*(t)]^{-1} f(t) = (⊖ν A*)(t)x + [I − ν(t)A*(t)]^{-1} f(t).
Hence we may apply Theorem 94 to obtain the solution of (50) as
x(t) = ê_{⊖ν A*}(t, t0)x0 + ∫_{t0}^{t} ê_{⊖ν A*}(t, ρ(τ)) [I − ν(τ)A*(τ)]^{-1} f(τ)∇τ
= ê_{⊖ν A*}(t, t0)x0 + ∫_{t0}^{t} ê*_A(ρ(τ), t) [I − ν(τ)A*(τ)]^{-1} f(τ)∇τ
= ê_{⊖ν A*}(t, t0)x0 + ∫_{t0}^{t} {[I − ν(τ)A(τ)]^{-1} ê_A(ρ(τ), t)}* f(τ)∇τ
= ê_{⊖ν A*}(t, t0)x0 + ∫_{t0}^{t} {ê_A(τ, t)}* f(τ)∇τ
= ê_{⊖ν A*}(t, t0)x0 + ∫_{t0}^{t} ê_{⊖ν A*}(t, τ)f(τ)∇τ,
where we have applied Theorem 91 (ii) and (iii).

We now present the dynamic nabla equation version of Liouville's formula.
Theorem 98 (Liouville's Formula). Let A ∈ Rν be a 2 × 2-matrix-valued function and assume that X is a solution of X∇ = A(t)X. Then X satisfies Liouville's formula
(52)   det X(t) = ê_{tr A − ν det A}(t, t0) det X(t0)   for t ∈ T.

Proof. Note that A ∈ Rν implies tr A − ν det A ∈ Rν by Exercise 81. We put
A = [a11, a12; a21, a22]   and   X = [x11, x12; x21, x22].
Then
(det X)∇ = (x11 x22 − x12 x21)∇
= x11∇ x22 + x11^ρ x22∇ − x12∇ x21 − x12^ρ x21∇
= det [x11∇, x12∇; x21, x22] + det [x11^ρ, x12^ρ; x21∇, x22∇]
= det [a11 x11 + a12 x21, a11 x12 + a12 x22; x21, x22] + det [x11 − ν x11∇, x12 − ν x12∇; x21∇, x22∇]
= a11 det X + det [x11, x12; x21∇, x22∇] − ν det [x11∇, x12∇; x21∇, x22∇]
= a11 det X + det [x11, x12; a21 x11 + a22 x21, a21 x12 + a22 x22] − ν det [x11∇, x12∇; x21∇, x22∇]
= (a11 + a22) det X − ν det [a11 x11 + a12 x21, a11 x12 + a12 x22; x21∇, x22∇]
= tr A det X − ν ( a11 det [x11, x12; x21∇, x22∇] + a12 det [x21, x22; x21∇, x22∇] )
= tr A det X − ν (a11 a22 det X − a12 a21 det X)
= (tr A − ν det A) det X.
Solving this first order scalar nabla dynamic equation (use Theorem 30) we obtain (52).
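The following sketch (my own, not from the paper) checks Liouville's formula (52) numerically on T = Z, where ν ≡ 1 and the scalar nabla exponential is a product of factors 1/(1 − p(s)).

```python
import numpy as np

# A small sketch (my own, assuming T = Z) checking Liouville's formula (52):
# det X(t) = e_{tr A - nu det A}(t, t0) det X(t0), with nu = 1 on Z and
# scalar nabla exponential e_p(t, t0) = prod_{s=t0+1}^{t} 1/(1 - p(s)).
rng = np.random.default_rng(4)
t0, t1 = 0, 6
A = {t: 0.2 * rng.normal(size=(2, 2)) for t in range(t0, t1 + 1)}

X = np.eye(2)                              # X(t0) = I
e_p = 1.0                                  # scalar exponential accumulator
for t in range(t0 + 1, t1 + 1):
    X = np.linalg.solve(np.eye(2) - A[t], X)        # X(t) from X^nabla = A(t)X
    p = np.trace(A[t]) - np.linalg.det(A[t])        # p = tr A - nu det A
    e_p /= 1.0 - p

assert np.isclose(np.linalg.det(X), e_p)            # det X(t0) = 1
print("Liouville's formula (52) verified on T = Z")
```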
11. Polynomials and Taylor's Formula

The generalized polynomials, that also occur in Taylor's formula, are the functions ĥ_k : T² → R, k ∈ N0, defined recursively as follows: The function ĥ_0 is
(53)   ĥ_0(t, s) ≡ 1   for all s, t ∈ T,
and, given ĥ_k for k ∈ N0, the function ĥ_{k+1} is
(54)   ĥ_{k+1}(t, s) = ∫_s^t ĥ_k(τ, s)∇τ   for all s, t ∈ T.
Note that the functions ĥ_k are all well defined. If we let ĥ_k^∇(t, s) denote for each fixed s the nabla derivative of ĥ_k(t, s) with respect to t, then
(55)   ĥ_k^∇(t, s) = ĥ_{k−1}(t, s)   for k ∈ N, t ∈ Tκ.
The above definition implies
ĥ_1(t, s) = t − s   for all s, t ∈ T.
Finding the ĥ_k for k > 1 is not easy in general. But for a particular given time scale it might be easy to find these functions. We will consider several examples first before we present Taylor's formula in general.

Example 99. For the cases T = R and T = Z it is easy to find the functions ĥ_k.

First, consider T = R. Then ρ(t) = t for t ∈ R, so that
ĥ_k(t, s) = (t − s)^k / k!
for s, t ∈ R, k ∈ N0. We note that, for an (n + 1) times differentiable function f : R → R, the following well-known Taylor's formula holds: Let α ∈ R be arbitrary. Then, for all t ∈ R, the representations
f(t) = Σ_{k=0}^{n} ((t − α)^k / k!) f^{(k)}(α) + (1/n!) ∫_α^t (t − τ)^n f^{(n+1)}(τ)dτ
(56)   = Σ_{k=0}^{n} ĥ_k(t, α) f^{(k)}(α) + ∫_α^t ĥ_n(t, ρ(τ)) f^{(n+1)}(τ)dτ
are valid, where f^{(k)} denotes as usual the kth derivative of f.
Next, consider T = Z. Then ρ(t) = t − 1, ∇f(t) = f(t) − f(t − 1) for t ∈ Z, and [4, p. 333]
∫_a^b f(t)∇t = Σ_{t=a+1}^{b} f(t).
Note that
(57)   Σ_{s=a+1}^{b} ∇f(s) = f(b) − f(a).
Define the discrete factorial function "t to the k rising" as in (14) by
t^{\overline{k}} := t(t + 1) · · · (t + k − 1) for t ∈ Z;
then [t^{\overline{k}}]∇ = ∇t^{\overline{k}} = k t^{\overline{k−1}}. For s, t ∈ Z, we claim that for k ∈ N0 we have
(58)   ĥ_k(t, s) = (t − s)^{\overline{k}} / k!   for all s, t ∈ Z.
If k = 0 or k = 1 it is true; assume (58) holds for k replaced by m. Then
ĥ_{m+1}(t, s) = ∫_s^t ĥ_m(τ, s)∇τ = Σ_{τ=s+1}^{t} (τ − s)^{\overline{m}} / m! = (t − s)^{\overline{m+1}} / (m + 1)!,
which is (58) with k replaced by m + 1. Hence by mathematical induction we get that (58) holds for all k ∈ N0.

Therefore the discrete nabla version of Taylor's formula reads as follows: Let f : Z → R be a function, and let α ∈ Z. Then, for all t ∈ Z with t > α + n, the representations
(59)   f(t) = Σ_{k=0}^{n} ((t − α)^{\overline{k}} / k!) ∇^k f(α) + (1/n!) Σ_{τ=α+1}^{t} (t − τ + 1)^{\overline{n}} ∇^{n+1} f(τ)
           = Σ_{k=0}^{n} ĥ_k(t, α) f^{∇^k}(α) + Σ_{τ=α+1}^{t} ĥ_n(t, ρ(τ)) f^{∇^{n+1}}(τ)
hold, where ∇^k is the k-times iterated backward difference operator.
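A quick computational check of the closed form (58) (my own sketch, not from the paper), using the recursion (54) on T = Z, where the nabla integral is the sum Σ_{τ=s+1}^{t}:

```python
from math import comb

# A hedged sketch (my own, assuming T = Z): compute h_k(t, s) by the recursion
# (54), h_{k+1}(t, s) = sum_{tau=s+1}^{t} h_k(tau, s), and compare with the
# closed form (58), h_k(t, s) = (t-s)^{rising k}/k! = binom(t - s + k - 1, k).
def h_hat(k, t, s):
    """h_k(t, s) on T = Z via the recursive definition (53)-(54), t >= s."""
    if k == 0:
        return 1
    return sum(h_hat(k - 1, tau, s) for tau in range(s + 1, t + 1))

def rising_over_fact(k, x):
    """x^{rising k} / k! = binomial(x + k - 1, k) for integer x >= 0."""
    return comb(x + k - 1, k) if k > 0 else 1

s = 2
for t in range(s, s + 8):
    for k in range(5):
        assert h_hat(k, t, s) == rising_over_fact(k, t - s)
print("closed form (58) matches the recursion on T = Z")
```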
Example 100. We consider the time scale
T = q^Z   for some q > 1.
Here ν(t) = t − ρ(t) = t − t/q = (q − 1)t/q. The claim is that
(60)   ĥ_k(t, s) = Π_{r=0}^{k−1} (q^r t − s)/(Σ_{j=0}^{r} q^j)   for all s, t ∈ T
holds for all k ∈ N0. Obviously, for k = 0 (observe that the empty product is considered to be 1, as usual), the claim (60) holds. Now we assume (60) holds with k replaced by some m ∈ N0. Then
[ Π_{r=0}^{m} (q^r t − s)/(Σ_{j=0}^{r} q^j) ]^∇
= { Π_{r=0}^{m} (q^r t − s)/(Σ_{j=0}^{r} q^j) − Π_{r=0}^{m} (q^r ρ(t) − s)/(Σ_{j=0}^{r} q^j) } / ν(t)
= { [(q^m t − s)/(Σ_{j=0}^{m} q^j)] ĥ_m(t, s) − [(q^{m−1} t − s)/(Σ_{j=0}^{m} q^j)] ĥ_m(ρ(t), s) } / ν(t)
= (1/(Σ_{j=0}^{m} q^j)) { (q^m t − s) ĥ_m(t, s) − (q^{m−1} t − s)[ĥ_m(t, s) − ν(t) ĥ_m^∇(t, s)] } / ν(t)
= [(q^m t − q^{m−1} t)/ν(t)] ĥ_m(t, s)/(Σ_{j=0}^{m} q^j) + [(q^{m−1} t − s)/(Σ_{j=0}^{m} q^j)] ĥ_m^∇(t, s)
= (1/(Σ_{j=0}^{m} q^j)) { q^m ĥ_m(t, s) + (q^{m−1} t − s) ĥ_{m−1}(t, s) }
= (1/(Σ_{j=0}^{m} q^j)) { q^m ĥ_m(t, s) + (q^{m−1} t − s) Π_{r=0}^{m−2} (q^r t − s)/(Σ_{j=0}^{r} q^j) }
= (1/(Σ_{j=0}^{m} q^j)) { q^m + Σ_{j=0}^{m−1} q^j } ĥ_m(t, s)
= ĥ_m(t, s),
where we used the induction hypothesis, ĥ_m(ρ(t), s) = ĥ_m(t, s) − ν(t) ĥ_m^∇(t, s), ĥ_m^∇ = ĥ_{m−1}, and (q^m t − q^{m−1} t)/ν(t) = q^m. Since the product on the left-hand side also vanishes at t = s, it equals ∫_s^t ĥ_m(τ, s)∇τ = ĥ_{m+1}(t, s), so that (60) follows with k replaced by m + 1. Hence, by the principle of mathematical induction, (60) holds for all k ∈ N0.
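The product formula (60) can also be checked numerically (my own sketch, not from the paper), using the recursion (54) on T = q^Z, where the nabla integral from s to t is the sum of ν(τ)ĥ_k(τ, s) over grid points τ ∈ (s, t]:

```python
import math

# A hedged numerical sketch (my own) for Example 100 on T = q^Z: compute
# h_k(t, s) from (54) with nu(tau) = (q - 1) tau / q and compare with (60).
q = 2.0

def grid_points(s, t):
    """Points of q^Z lying in (s, t], assuming s <= t are powers of q."""
    pts, tau = [], t
    while tau > s:
        pts.append(tau)
        tau /= q
    return pts

def h_hat(k, t, s):
    """h_k(t, s) via (53)-(54) on T = q^Z."""
    if k == 0:
        return 1.0
    return sum((q - 1) * tau / q * h_hat(k - 1, tau, s) for tau in grid_points(s, t))

def h_formula(k, t, s):
    """Closed form (60): product of (q^r t - s)/(1 + q + ... + q^r)."""
    val = 1.0
    for r in range(k):
        val *= (q**r * t - s) / sum(q**j for j in range(r + 1))
    return val

s, t = q**1, q**5
for k in range(5):
    assert math.isclose(h_hat(k, t, s), h_formula(k, t, s), rel_tol=1e-9)
print("product formula (60) matches the recursion on T = q^Z")
```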
For n ∈ N0 and ld-continuous functions p_i : T → R, 1 ≤ i ≤ n, we consider the nth order linear dynamic equation
(61)   Ly = 0,   where   Ly = y^{∇^n} + Σ_{i=1}^{n} p_i y^{∇^{n−i}}.
A function y : T → R is said to be a solution of the equation (61) on T provided y is n times nabla differentiable on Tκ^n and satisfies Ly(t) = 0 for all t ∈ Tκ^n. It follows that y^{∇^n} is an ld-continuous function on Tκ^n.
Now let f : T → R be ld-continuous and consider the nonhomogeneous equation
(62)   y^{∇^n} + Σ_{i=1}^{n} p_i(t) y^{∇^{n−i}} = f(t).
Definition 101. We define the Cauchy function y : T × Tκ^n → R for the linear dynamic equation (61) to be, for each fixed s ∈ Tκ^n, the solution of the initial value problem
Ly = 0,   y^{∇^i}(ρ(s), s) = 0, 0 ≤ i ≤ n − 2,   y^{∇^{n−1}}(ρ(s), s) = 1.

Remark 102. Note that
y(t, s) := ĥ_{n−1}(t, ρ(s))
is the Cauchy function for y^{∇^n} = 0.
Theorem 103 (Variation of Constants). Let α ∈ Tκ^n and t ∈ T. If f ∈ Cld, then the solution of the initial value problem
Ly = f(t),   y^{∇^i}(α) = 0, 0 ≤ i ≤ n − 1
is given by
y(t) = ∫_α^t y(t, τ)f(τ)∇τ,
where y(t, τ) is the Cauchy function for (61).

Proof. With y defined as above we have
y^{∇^i}(t) = ∫_α^t y^{∇^i}(t, τ)f(τ)∇τ + y^{∇^{i−1}}(ρ(t), t)f(t) = ∫_α^t y^{∇^i}(t, τ)f(τ)∇τ
for 0 ≤ i ≤ n − 1 and
y^{∇^n}(t) = ∫_α^t y^{∇^n}(t, τ)f(τ)∇τ + y^{∇^{n−1}}(ρ(t), t)f(t) = ∫_α^t y^{∇^n}(t, τ)f(τ)∇τ + f(t).
It follows from these equations that
y^{∇^i}(α) = 0,   0 ≤ i ≤ n − 1,
and
Ly(t) = ∫_α^t Ly(t, τ)f(τ)∇τ + f(t) = f(t),
and the proof is complete.
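For the special case Ly = y^{∇^n} (all p_i ≡ 0) on T = Z, Remark 102 and Theorem 103 can be tested directly; the following hedged sketch (my own, with an arbitrary forcing term f) does so.

```python
from math import comb

# A hedged sketch (my own, assuming T = Z and p_i = 0): by Remark 102 the Cauchy
# function of y^{nabla^n} = 0 is y(t, s) = h_{n-1}(t, rho(s)) =
# (t - s + 1)^{rising (n-1)}/(n-1)!, and Theorem 103 gives
#   y(t) = sum_{tau=alpha+1}^{t} y(t, tau) f(tau)
# as the solution of y^{nabla^n} = f with zero initial values at alpha.
def cauchy(n, t, s):
    """h_{n-1}(t, s - 1) on T = Z in rising-factorial (binomial) form."""
    x = t - s + 1
    return comb(x + n - 2, n - 1) if n > 1 else 1

def f(t):
    return t**2 - 3 * t + 1                 # an arbitrary forcing term

n, alpha, T = 3, 0, 12
y = {t: sum(cauchy(n, t, tau) * f(tau) for tau in range(alpha + 1, t + 1))
     for t in range(alpha - n, T + 1)}

def nabla(g):                               # backward difference of a table
    return {t: g[t] - g[t - 1] for t in g if t - 1 in g}

d = dict(y)
for _ in range(n):
    d = nabla(d)
assert all(d[t] == f(t) for t in range(alpha + 1, T + 1))        # Ly = f
assert all(y[t] == 0 for t in range(alpha - n + 1, alpha + 1))   # zero initial values
print("Theorem 103 verified for y^{nabla^3} = f on T = Z")
```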
Theorem 104 (Taylor's Formula). Let n ∈ N. Suppose the function f is such that f^{∇^{n+1}} is ld-continuous on Tκ^{n+1}. Let α ∈ Tκ^n, t ∈ T, and define the functions ĥ_k by (53) and (54), i.e.,
ĥ_0(r, s) ≡ 1   and   ĥ_{k+1}(r, s) = ∫_s^r ĥ_k(τ, s)∇τ for k ∈ N0.
Then we have
f(t) = Σ_{k=0}^{n} ĥ_k(t, α) f^{∇^k}(α) + ∫_α^t ĥ_n(t, ρ(τ)) f^{∇^{n+1}}(τ)∇τ.
Proof. Let g(t) := f^{∇^{n+1}}(t). Then f solves the initial value problem
x^{∇^{n+1}} = g(t),   x^{∇^k}(α) = f^{∇^k}(α), 0 ≤ k ≤ n.
Note that the Cauchy function for y^{∇^{n+1}} = 0 is y(t, s) = ĥ_n(t, ρ(s)). By the variation of constants formula in Theorem 103,
f(t) = u(t) + ∫_α^t ĥ_n(t, ρ(τ))g(τ)∇τ,
where u solves the initial value problem
(63)   u^{∇^{n+1}} = 0,   u^{∇^m}(α) = f^{∇^m}(α), 0 ≤ m ≤ n.
To validate the claim that u(t) = Σ_{k=0}^{n} ĥ_k(t, α) f^{∇^k}(α), set
w(t) := Σ_{k=0}^{n} ĥ_k(t, α) f^{∇^k}(α).
By the properties of the ĥ_k given in (53) and (55), w^{∇^{n+1}} = 0. We have moreover that
w^{∇^m}(t) = Σ_{k=m}^{n} ĥ_{k−m}(t, α) f^{∇^k}(α),
so that
w^{∇^m}(α) = Σ_{k=m}^{n} ĥ_{k−m}(α, α) f^{∇^k}(α) = f^{∇^m}(α)
for 0 ≤ m ≤ n. We consequently have that w also solves (63), whence u ≡ w by uniqueness.

Remark 105. The reader may compare Example 99 (i.e., the cases T = R and T = Z) to the above presented theory. Taylor's formula corresponds to formulas (56) and (59). The following corollary is useful in proving the positivity of certain Green's functions.

Corollary 106. Let α, β ∈ T be fixed. For any t ∈ T and any positive integer n,
ĥ_n(t, β) = Σ_{k=0}^{n} ĥ_k(t, α) ĥ_{n−k}(α, β).

Proof. Take f = ĥ_n(·, β) in Taylor's formula.
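On T = Z, Corollary 106 becomes a Vandermonde-type convolution of rising factorials; the following sketch (my own, using the closed form (58)) verifies it exactly for a range of integers.

```python
from fractions import Fraction
from math import factorial

# A quick check (my own, assuming T = Z) of Corollary 106: with the closed form
# (58), h_k(t, s) = (t - s)^{rising k}/k!, we verify
#   h_n(t, beta) = sum_{k=0}^{n} h_k(t, alpha) h_{n-k}(alpha, beta)
# exactly for several integers t (including t < alpha).
def h(k, t, s):
    val = Fraction(1)
    for j in range(k):
        val *= t - s + j
    return val / factorial(k)

alpha, beta = 3, 1
for t in range(-3, 9):
    for n in range(1, 6):
        assert h(n, t, beta) == sum(h(k, t, alpha) * h(n - k, alpha, beta)
                                    for k in range(n + 1))
print("Corollary 106 verified on T = Z")
```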
Example 107. For T = Z, consider f(t) = ê_{−1}(t, 0) = (1/2)^t for t ∈ Z. If we expand f about 0, then Taylor's formula (59) for f is given by
(1/2)^t = P_n(t) + E_n(t),
where the Taylor polynomial P_n is given by
P_n(t) = Σ_{k=0}^{n} (−1)^k t^{\overline{k}} / k!
and the error term E_n is given by
E_n(t) = ((−1)^{n+1}/n!) Σ_{τ=1}^{t} (t + 1 − τ)^{\overline{n}} (1/2)^τ   if t > 0,
E_n(t) = 0   if t = 0,
E_n(t) = −((−1)^{n+1}/n!) Σ_{τ=t+1}^{0} (t + 1 − τ)^{\overline{n}} (1/2)^τ   if t < 0.
Then P_n(1) = (1 − (−1)^{n+1})/2 and E_n(1) = (−1)^{n+1}/2, so that the Taylor polynomial will not converge to f at 1 as n → ∞. Note, however, that
P_{|t|}(t) = Σ_{k=0}^{|t|} (−1)^k t^{\overline{k}} / k! = Σ_{k=0}^{∞} (−1)^k t^{\overline{k}} / k! = Σ_{k=0}^{|t|} C(|t|, k) = 2^{|t|} = (1/2)^t
for any nonpositive integer t.
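Example 107 lends itself to an exact computational check; the following sketch (my own, using exact rational arithmetic) confirms f = P_n + E_n, the non-convergence at t = 1, and exactness at nonpositive integers.

```python
from fractions import Fraction
from math import factorial

# A hedged check (my own, assuming T = Z) of Example 107: expand f(t) = (1/2)^t
# about 0 via the nabla Taylor formula (59) and confirm f = P_n + E_n exactly,
# plus P_{|t|}(t) = 2^{|t|} = (1/2)^t for nonpositive integers t.
def rising(x, k):                                   # x^{rising k}
    out = 1
    for j in range(k):
        out *= x + j
    return out

def f(t):
    return Fraction(1, 2) ** t

def P(n, t):
    return sum(Fraction((-1) ** k * rising(t, k), factorial(k)) for k in range(n + 1))

def E(n, t):                                        # error term of (59), alpha = 0
    sgn, taus = (1, range(1, t + 1)) if t > 0 else (-1, range(t + 1, 1))
    return sgn * Fraction((-1) ** (n + 1), factorial(n)) * \
        sum(rising(t + 1 - tau, n) * f(tau) for tau in taus)

for n in range(0, 6):
    for t in range(-4, 5):
        assert P(n, t) + E(n, t) == f(t)            # Taylor's formula (59)
assert P(1, 1) == 0 and E(1, 1) == Fraction(1, 2)   # no convergence at t = 1
for t in range(-5, 1):
    assert P(-t, t) == 2 ** (-t)                    # exactness at nonpositive t
print("Example 107 verified")
```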
References

[1] R. P. Agarwal. Difference Equations and Inequalities, 2nd edition. Marcel Dekker, New York, 2000.
[2] R. P. Agarwal and M. Bohner. Basic calculus on time scales and some of its applications. Results Math., 35(1-2):3-22, 1999.
[3] F. M. Atici and G. Sh. Guseinov. On Green's functions and positive solutions for boundary value problems on time scales. J. Comput. Appl. Math., 2001. Special Issue on "Dynamic Equations on Time Scales," edited by R. P. Agarwal, M. Bohner, and D. O'Regan. To appear.
[4] M. Bohner and A. Peterson. Dynamic Equations on Time Scales: An Introduction with Applications. Birkhäuser, New York, 2001.
[5] S. Hilger. Ein Maßkettenkalkül mit Anwendung auf Zentrumsmannigfaltigkeiten. PhD thesis, Universität Würzburg, 1988.

Department of Mathematics and Computer Science, Concordia College, Moorhead, Minnesota 56560, USA
E-mail address:
[email protected],
[email protected] Department of Mathematics and Statistics, University of Nebraska–Lincoln, Lincoln, Nebraska 68588-0323 usa E-mail address:
[email protected],
[email protected],
[email protected]