CHAPTER 9
Stochastic Integrals
9.1. Construction of the stochastic integrals with respect to martingales

Let $I = [0,\infty)$, $M \in \mathcal{M}^{2,\mathrm{loc}}$.

Goal. Define an integral $\int H_s\,dM_s$ for a martingale $M$ and some suitable process $H$.

Idea: Consider the Riemann-Stieltjes integral
\[
\int_0^t f(s)\,d\alpha(s) = \lim_{n\to\infty} \sum_{i=1}^{\infty} f(t_i^*)\big(\alpha(t_{i+1}) - \alpha(t_i)\big),
\]
where $f$ is continuous, $\alpha$ is of bounded variation and $t_i^* \in [t_i, t_{i+1}]$.

Question. Can $\int H_s\,dM_s$ be defined pathwise in this Riemann-Stieltjes sense?

Answer. No. By Remark 7.43, if $M$ is a continuous non-constant local martingale, then $M$ is not of bounded variation. Moreover, in general
\[
\lim_{n\to\infty} \sum_{i=1}^{n} H_{t_i}\big(M_{t_{i+1}} - M_{t_i}\big) \neq \lim_{n\to\infty} \sum_{i=1}^{n} H_{t_{i+1}}\big(M_{t_{i+1}} - M_{t_i}\big),
\]
so the value of the approximating sums depends on where the integrand is evaluated, and a pathwise definition breaks down; a different construction is needed.
Question. Which evaluation point should we use? The natural candidates are the left endpoint ($H_{t_i}$), the right endpoint ($H_{t_{i+1}}$), and the midpoint ($H_{(t_i+t_{i+1})/2}$). Do these choices give the same integral?

Answer. No; in general they lead to three different integrals.
• $H_{t_i}$ (left endpoint): Itô integral
• $H_{(t_i+t_{i+1})/2}$ (midpoint): Fisk-Stratonovich integral
• $H_{t_{i+1}}$ (right endpoint): backward Itô integral
In these notes we work with the left endpoint (i.e., the Itô integral); it has a natural interpretation in which $H$ is a strategy chosen before each price move and $H \cdot M$ is the resulting profit.
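To see concretely that the three evaluation points give different limits, here is a small numerical sketch in Python (not part of the original notes; the grid size, seed, and the Brownian-bridge midpoints are my own choices). It computes the three Riemann sums for $\int_0^1 W_s\,dW_s$ along one simulated path and compares them with the limiting values $\tfrac12(W_1^2-1)$, $\tfrac12 W_1^2$ and $\tfrac12(W_1^2+1)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000                      # number of subintervals of [0, 1]
dt = 1.0 / n

# Brownian path on the grid; midpoint values are filled in with the Brownian-bridge
# law: mean = average of the endpoints, variance = dt/4.
dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))
W_mid = 0.5 * (W[:-1] + W[1:]) + rng.normal(0.0, np.sqrt(dt / 4.0), n)

left  = np.sum(W[:-1] * dW)      # H_{t_i}:             Ito sum
mid   = np.sum(W_mid  * dW)      # H_{(t_i+t_{i+1})/2}: Fisk-Stratonovich sum
right = np.sum(W[1:]  * dW)      # H_{t_{i+1}}:         backward Ito sum

print("left  :", left,  "  expected (W_1^2 - 1)/2 =", 0.5 * (W[-1] ** 2 - 1))
print("mid   :", mid,   "  expected  W_1^2 / 2    =", 0.5 * W[-1] ** 2)
print("right :", right, "  expected (W_1^2 + 1)/2 =", 0.5 * (W[-1] ** 2 + 1))
```

The three sums differ by quantities of order one rather than of the order of the mesh, which is exactly why the choice of evaluation point is part of the definition of the integral.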
9.1.1. Simple processes.
Notation 9.1. Define $\mathcal{E}^b$ = the collection of all bounded previsible simple processes, i.e., all processes $H$ of the form
\[
H_t(\omega) = \sum_{i=0}^{n-1} h_i(\omega)\, I_{(t_i, t_{i+1}]}(t)
\]
with $n \ge 1$, $0 \le t_0 < t_1 < \cdots < t_n \le \infty$, and $h_i$ bounded and $\mathcal{F}_{t_i}$-measurable for all $i$.

Definition 9.2. For $H \in \mathcal{E}^b$, we define the stochastic integral by
\[
(H \cdot M)_t := \int_0^t H_s\,dM_s = \sum_{i=0}^{n-1} h_i \big(M_{t_{i+1}\wedge t} - M_{t_i \wedge t}\big)
\]
for $0 \le t \le \infty$.
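Definition 9.2 is easy to transcribe into code. The following sketch (hypothetical helper names, not part of the notes) evaluates $(H\cdot M)_t=\sum_i h_i(M_{t_{i+1}\wedge t}-M_{t_i\wedge t})$ for a simple process against a simulated Brownian path; it can be used to reproduce the values worked out in Example 9.3 below.

```python
import numpy as np

def elementary_integral(h, times, M, t):
    """(H . M)_t for the simple process H_s = sum_i h[i] * 1_{(times[i], times[i+1]]}(s).

    h     : list of values h_i
    times : 0 <= t_0 < t_1 < ... < t_n
    M     : callable returning the integrator at a given time
    t     : evaluation time
    """
    return sum(hi * (M(min(times[i + 1], t)) - M(min(times[i], t)))
               for i, hi in enumerate(h))

# Integrator: one Brownian path on a fine grid, interpolated between grid points.
rng = np.random.default_rng(1)
grid = np.linspace(0.0, 4.0, 4001)
W_path = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(np.diff(grid))))))
W = lambda s: float(np.interp(s, grid, W_path))

# H as in Example 9.3(i): the integral equals 0, W_2 - W_1, W_3 - W_1 respectively.
print(elementary_integral([1.0], [1.0, 3.0], W, 0.5))
print(elementary_integral([1.0], [1.0, 3.0], W, 2.0), W(2.0) - W(1.0))
print(elementary_integral([1.0], [1.0, 3.0], W, 4.0), W(3.0) - W(1.0))
```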
Example 9.3. Let M = W = standard one-dimensional Brownian motion.
(i) $H = I_{[1,3)}$, then
\[
(H \cdot W)_{1/2} = \int_0^{1/2} H_s\,dW_s = 0, \qquad
(H \cdot W)_2 = \int_0^2 H_s\,dW_s = W_{3\wedge 2} - W_{1\wedge 2} = W_2 - W_1,
\]
\[
(H \cdot W)_4 = \int_0^4 H_s\,dW_s = \int_0^4 I_{[1,3)}(s)\,dW_s = W_3 - W_1.
\]
(ii) $H = 4 I_{[1,2)} - 3 I_{[2,3)}$, then
\[
(H \cdot W)_2 = 4(W_2 - W_1), \qquad
(H \cdot W)_3 = 4(W_2 - W_1) - 3(W_3 - W_2) = -3W_3 + 7W_2 - 4W_1.
\]
(iii) $H = W_1 I_{[1,2)} - (2W_2 - W_1) I_{[2,3)}$, then
\[
(H \cdot W)_2 = W_1(W_2 - W_1), \qquad
(H \cdot W)_3 = W_1(W_2 - W_1) - (2W_2 - W_1)(W_3 - W_2).
\]

Proposition 9.4. For $H \in \mathcal{E}^b$, $H \cdot M \in \mathcal{M}_0^2$. Moreover, if $M$ is continuous, $H \cdot M$ is continuous, i.e., $H \cdot M \in \mathcal{M}_0^{2,c}$, and
\[
E\big[(H \cdot M)_\infty^2\big] = E\Big[\int_0^\infty H_s^2\,d\langle M\rangle_s\Big].
\]
Proof. Let $s \le t$. If $s = t_k$, $t = t_l$ with $k < l$, then
\[
E\big[(H \cdot M)_t - (H \cdot M)_s \mid \mathcal{F}_s\big]
= \sum_{i=k}^{l-1} E\big[h_i (M_{t_{i+1}} - M_{t_i}) \mid \mathcal{F}_{t_k}\big]
= \sum_{i=k}^{l-1} E\Big[E\big[h_i (M_{t_{i+1}} - M_{t_i}) \mid \mathcal{F}_{t_i}\big] \,\Big|\, \mathcal{F}_{t_k}\Big].
\]
Since $h_i$ is $\mathcal{F}_{t_i}$-measurable and $M$ is a martingale, we have
\[
E\big[h_i (M_{t_{i+1}} - M_{t_i}) \mid \mathcal{F}_{t_i}\big]
= h_i\, E\big[M_{t_{i+1}} - M_{t_i} \mid \mathcal{F}_{t_i}\big] = 0.
\]
Hence, $E[(H \cdot M)_t - (H \cdot M)_s \mid \mathcal{F}_s] = 0$. (We treated the case $s = t_k$, $t = t_l$; for general $s$ and $t$ one inserts them into the partition and works with the times $t_i \wedge t$.) Moreover, since $M$ is a martingale the cross terms vanish after conditioning, so
\[
E\big[(H \cdot M)_\infty^2\big]
= E\Big[\sum_{i=0}^{n-1} (h_i)^2 (M_{t_{i+1}} - M_{t_i})^2\Big]
= \sum_{i=0}^{n-1} E\Big[(h_i)^2\, E\big[(M_{t_{i+1}} - M_{t_i})^2 \mid \mathcal{F}_{t_i}\big]\Big].
\]
Since $M$ and $M^2 - \langle M\rangle$ are martingales,
\[
E\big[(M_{t_{i+1}} - M_{t_i})^2 \mid \mathcal{F}_{t_i}\big]
= E\big[M_{t_{i+1}}^2 - M_{t_i}^2 \mid \mathcal{F}_{t_i}\big]
= E\big[\langle M\rangle_{t_{i+1}} - \langle M\rangle_{t_i} \mid \mathcal{F}_{t_i}\big].
\]
Thus,
\[
E\big[(H \cdot M)_\infty^2\big]
= \sum_{i=0}^{n-1} E\big[(h_i)^2 (\langle M\rangle_{t_{i+1}} - \langle M\rangle_{t_i})\big]
= E\Big[\sum_{i=0}^{n-1} (h_i)^2 (\langle M\rangle_{t_{i+1}} - \langle M\rangle_{t_i})\Big]
= E\Big[\int_0^\infty H_s^2\,d\langle M\rangle_s\Big]. \qed
\]
Notation 9.5. For $a < b$, we denote
\[
\int_a^b H_s\,dM_s = (H \cdot M)_b - (H \cdot M)_a.
\]
Lemma 9.6. Let $H \in \mathcal{E}^b$ and let $W$ be the standard Brownian motion. Then
\[
E\Big[\int_a^b H_s\,dW_s\Big] = 0, \qquad
E\Big[\Big(\int_a^b H_s\,dW_s\Big)^2\Big] = E\Big[\int_a^b H_s^2\,ds\Big].
\]
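As a sanity check on Lemma 9.6, one can estimate both sides by Monte Carlo for a concrete simple process, say $H = 1_{(0,1]} - 2\cdot 1_{(1,2]}$, for which $\int_0^2 H_s\,dW_s = (W_1-W_0) - 2(W_2-W_1)$ and $\int_0^2 H_s^2\,ds = 5$. This is a hypothetical illustration, not part of the notes.

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths = 200_000

# Independent Brownian increments over (0,1] and (1,2].
dW1 = rng.normal(0.0, 1.0, n_paths)   # W_1 - W_0
dW2 = rng.normal(0.0, 1.0, n_paths)   # W_2 - W_1
I = 1.0 * dW1 - 2.0 * dW2             # int_0^2 H_s dW_s for H = 1_(0,1] - 2*1_(1,2]

print("E[I]    ~", I.mean(), "   (Lemma 9.6: exactly 0)")
print("E[I^2]  ~", (I ** 2).mean(), "   (Lemma 9.6: int_0^2 H_s^2 ds = 1 + 4 = 5)")
```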
Lemma 9.7. Let $H^1, H^2 \in \mathcal{E}^b$ and $c_1, c_2 \in \mathbb{R}$. Then $c_1 H^1 + c_2 H^2 \in \mathcal{E}^b$ and
\[
(c_1 H^1 + c_2 H^2) \cdot M = c_1\, H^1 \cdot M + c_2\, H^2 \cdot M.
\]

9.1.2. Square-integrable processes.

Theorem 9.8. If $M \in \mathcal{M}_0^{2,c}$ and $H$ is progressively measurable and satisfies
\[
E\Big[\int_0^T H_s^2\,d\langle M\rangle_s\Big] < \infty \quad \text{for each } T > 0, \tag{9.1}
\]
then there exists a sequence of simple processes $H^{(n)}$ such that
\[
\sup_{T>0} \lim_{n\to\infty} E\Big[\int_0^T |H_s^{(n)} - H_s|^2\,d\langle M\rangle_s\Big] = 0. \tag{9.2}
\]
In words: for any progressively measurable $H$ satisfying (9.1), we can find simple processes $H^{(n)}$ approximating $H$. Why is this useful? So far we can only construct the stochastic integral of a simple process; the approximation lets us extend the construction by a limiting procedure.

Notation 9.9. For $0 < T < \infty$, denote
\[
L_T^* = \text{the class of all progressively measurable processes such that (9.1) holds}, \qquad
L^* = \bigcap_{T \ge 0} L_T^*.
\]
Note that, in contrast with $\mathcal{E}^b$, membership in $L_T^*$ is governed by the integrability condition (9.1) rather than by boundedness.

Remark 9.10. For any process $H \in L_T^*$, there exists a sequence of simple processes $H^{(n)}$ such that (9.2) holds. By Lemma 9.6 and Equation (9.2), we have
\[
E\Big[\Big(\int_0^T H_s^{(n)}\,dM_s - \int_0^T H_s^{(m)}\,dM_s\Big)^2\Big]
= E\Big[\Big(\int_0^T (H_s^{(n)} - H_s^{(m)})\,dM_s\Big)^2\Big]
= E\Big[\int_0^T |H_s^{(n)} - H_s^{(m)}|^2\,d\langle M\rangle_s\Big] \longrightarrow 0
\]
as $m, n \to \infty$. Thus, $\big(\int_0^T H_s^{(n)}\,dM_s\big)_{n\in\mathbb{N}}$ is a Cauchy sequence in $L^2(\Omega, \mathcal{F}, P)$, which implies that this sequence converges in $L^2$.
This allows us to define the stochastic integral for every $H \in L_T^*$.

Definition 9.11. For $H \in L_T^*$, the stochastic integral of $H$ with respect to the martingale $M \in \mathcal{M}^{2,c}$ is defined by
\[
\int_0^T H_s\,dM_s := \lim_{n\to\infty} \int_0^T H_s^{(n)}\,dM_s \quad \text{(limit in the $L^2$-sense)},
\]
where $H^{(n)}$ is a sequence of simple processes satisfying (9.2).

Remark 9.12.
(1) Definition 9.11 is well-defined.
Suppose there exists another sequence of simple processes $K^{(n)}$ converging to $H$ in the sense of (9.2). Then the sequence $Z^{(n)}$ with $Z^{(2n-1)} = H^{(n)}$ and $Z^{(2n)} = K^{(n)}$ also converges to $H$ in the sense of (9.2). Thus, $\int_0^T Z_s^{(n)}\,dM_s$ converges in the $L^2$-sense as $n \to \infty$. This means
\[
\lim_{n\to\infty} \int_0^T H_s^{(n)}\,dM_s = \lim_{n\to\infty} \int_0^T Z_s^{(n)}\,dM_s = \lim_{n\to\infty} \int_0^T K_s^{(n)}\,dM_s.
\]
Hence, the definition is well-defined.
(2) If $t \mapsto \langle M\rangle_t(\omega)$ is absolutely continuous for $P$-a.e. $\omega$ (for the definition of absolute continuity, see Appendix C), then $\int_0^T H_s\,dM_s$ is well-defined if $H$ is bounded, measurable and $\mathbb{F}$-adapted.

Proposition 9.13. Let $H, K \in L_T^*$, $\alpha, \beta \in \mathbb{R}$, $M \in \mathcal{M}^{2,c}$. Then
(1) If $H$ is $(\mathcal{F}_t)$-adapted, then $\big(\int_0^t H_s\,dM_s\big)_{0\le t\le T}$ is a square-integrable martingale.
(2) $\displaystyle\int_0^T (\alpha H_s + \beta K_s)\,dM_s = \alpha \int_0^T H_s\,dM_s + \beta \int_0^T K_s\,dM_s$.
(3) $\displaystyle E\Big[\Big(\int_0^T H_s\,dM_s\Big)^2\Big] = E\Big[\int_0^T H_s^2\,d\langle M\rangle_s\Big]$.
(4) $\displaystyle E\Big[\Big(\int_s^t H_u\,dM_u\Big)^2 \,\Big|\, \mathcal{F}_s\Big] = E\Big[\int_s^t H_u^2\,d\langle M\rangle_u \,\Big|\, \mathcal{F}_s\Big]$.
(5) $\displaystyle \Big\langle \int_0^\cdot H_s\,dM_s \Big\rangle_t = \int_0^t H_s^2\,d\langle M\rangle_s$.
Proof. (1) If $H^{(n)}$ is a simple process satisfying (9.2), then
\[
\int_0^T H_s^{(n)}\,dM_s = \sum_{i=0}^{n-1} h_i \big(M_{t_{i+1}\wedge T} - M_{t_i\wedge T}\big) \in \mathcal{F}_T
\quad\text{and}\quad
\int_0^T H_s\,dM_s = \lim_{n\to\infty} \int_0^T H_s^{(n)}\,dM_s.
\]
This implies that $\int_0^T H_s\,dM_s$ is $\mathcal{F}_T$-measurable. It remains to show that
\[
E\Big[\int_0^t H_u\,dM_u \,\Big|\, \mathcal{F}_s\Big] = \int_0^s H_u\,dM_u. \tag{9.3}
\]
For any $A \in \mathcal{F}_s$,
\[
\int_A \int_s^t H_u\,dM_u\,dP = E\Big[\int_s^t H_u I_A\,dM_u\Big],
\]
while $E\big[\int_s^t H_u^{(n)} I_A\,dM_u\big] = 0$ for every $n$, since $H^{(n)} I_A$ restricted to $(s,t]$ is again a bounded simple process. Due to the Cauchy-Schwarz inequality and (9.2), we have
\[
\Big| E\Big[\int_s^t H_u I_A\,dM_u\Big] - E\Big[\int_s^t H_u^{(n)} I_A\,dM_u\Big] \Big|
\le E\Big[\Big|\int_s^t (H_u - H_u^{(n)}) I_A\,dM_u\Big|\Big]
\le \Big( E\Big[\Big(\int_s^t (H_u - H_u^{(n)}) I_A\,dM_u\Big)^2\Big] \Big)^{1/2}
= \Big( E\Big[\int_s^t (H_u - H_u^{(n)})^2 I_A\,d\langle M\rangle_u\Big] \Big)^{1/2},
\]
which approaches $0$ as $n \to \infty$. Hence,
\[
0 = E\Big[\int_s^t H_u I_A\,dM_u\Big] = \int_A \int_s^t H_u\,dM_u\,dP,
\]
which implies (9.3).

(2), (3) Similar arguments.

(4) For $A \in \mathcal{F}_s$,
\[
\int_A \Big(\int_s^t H_u\,dM_u\Big)^2 dP
= E\Big[ I_A \Big(\int_s^t H_u\,dM_u\Big)^2 \Big]
= E\Big[ \Big(\int_s^t H_u I_A\,dM_u\Big)^2 \Big]
= E\Big[ \int_s^t H_u^2 I_A^2\,d\langle M\rangle_u \Big]
= \int_A \int_s^t H_u^2\,d\langle M\rangle_u\,dP.
\]
Thus,
\[
E\Big[\Big(\int_s^t H_u\,dM_u\Big)^2 \,\Big|\, \mathcal{F}_s\Big] = E\Big[\int_s^t H_u^2\,d\langle M\rangle_u \,\Big|\, \mathcal{F}_s\Big].
\]
(5) By (1) and (4), we have
\[
E\Big[\int_0^t H_u^2\,d\langle M\rangle_u - \int_0^s H_u^2\,d\langle M\rangle_u \,\Big|\, \mathcal{F}_s\Big]
= E\Big[\Big(\int_0^t H_u\,dM_u - \int_0^s H_u\,dM_u\Big)^2 \,\Big|\, \mathcal{F}_s\Big]
= E\Big[\Big(\int_0^t H_u\,dM_u\Big)^2 - \Big(\int_0^s H_u\,dM_u\Big)^2 \,\Big|\, \mathcal{F}_s\Big].
\]
This implies that
\[
E\Big[\Big(\int_0^t H_u\,dM_u\Big)^2 - \int_0^t H_u^2\,d\langle M\rangle_u \,\Big|\, \mathcal{F}_s\Big]
= \Big(\int_0^s H_u\,dM_u\Big)^2 - \int_0^s H_u^2\,d\langle M\rangle_u,
\]
i.e.,
\[
\Big\langle \int_0^\cdot H_s\,dM_s \Big\rangle_t = \int_0^t H_s^2\,d\langle M\rangle_s. \qed
\]
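Property (5) can also be checked numerically path by path: the sum of squared increments of $I_t=\int_0^t H_s\,dW_s$ over a fine grid should match $\int_0^t H_s^2\,ds$. The sketch below is hypothetical (the integrand $H_s=\cos(W_s)$ is chosen only for illustration) and takes $M = W$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 100_000, 1.0
dt = T / n

dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))

H = np.cos(W[:-1])                  # adapted integrand, evaluated at left endpoints
dI = H * dW                         # increments of I_t = int_0^t H_s dW_s

print("sum of (Delta I)^2 :", np.sum(dI ** 2))        # ~ <I>_T
print("int_0^T H_s^2 ds   :", np.sum(H ** 2) * dt)    # should nearly agree
```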
Corollary 9.14. If $H \in L_T^*$ and $M = W$ = Brownian motion, then
\[
E\Big[\int_s^t H_u\,dW_u \,\Big|\, \mathcal{F}_s\Big] = 0, \qquad
E\Big[\Big(\int_s^t H_u\,dW_u\Big)^2 \,\Big|\, \mathcal{F}_s\Big] = E\Big[\int_s^t H_u^2\,du \,\Big|\, \mathcal{F}_s\Big],
\]
and in particular $E\big[\big(\int_s^t H_u\,dW_u\big)^2\big] = \int_s^t E[H_u^2]\,du$.
Theorem 9.15. Let $H, K \in L^*$ and $M \in \mathcal{M}_0^{2,c}$. Then
(1) for stopping times $S \le T$,
\[
E\Big[\int_0^{t\wedge T} H_u\,dM_u \,\Big|\, \mathcal{F}_S\Big] = \int_0^{t\wedge S} H_u\,dM_u, \quad P\text{-a.s.}
\]
(2) for a stopping time $T$,
\[
\int_0^{t\wedge T} H_u\,dM_u = \int_0^t H_u I_{[[0,T]]}(u)\,dM_u = \int_0^t H_u\,dM_{u\wedge T}.
\]
(3) for stopping times $S \le T$,
\[
E\Big[\int_{S\wedge t}^{T\wedge t} H_u\,dM_u \cdot \int_{S\wedge t}^{T\wedge t} K_u\,dM_u \,\Big|\, \mathcal{F}_S\Big]
= E\Big[\int_{S\wedge t}^{T\wedge t} H_u K_u\,d\langle M\rangle_u \,\Big|\, \mathcal{F}_S\Big].
\]
In particular, if $S$ and $T$ are constant,
\[
E\Big[\int_s^t H_u\,dM_u \cdot \int_s^t K_u\,dM_u \,\Big|\, \mathcal{F}_s\Big]
= E\Big[\int_s^t H_u K_u\,d\langle M\rangle_u \,\Big|\, \mathcal{F}_s\Big].
\]
(4) Let $H \in L^*(M)$ and $G \in L^*\big(\int_0^\cdot H\,dM\big)$, i.e.,
\[
E\Big[\int_0^T H_u^2\,d\langle M\rangle_u\Big] < \infty
\quad\text{and}\quad
E\Big[\int_0^T G_u^2\,d\Big\langle \int_0^\cdot H\,dM \Big\rangle_u\Big] < \infty
\]
for all $T > 0$. Then $GH \in L^*(M)$ and
\[
\int_0^t G_u\,d\Big(\int_0^u H_v\,dM_v\Big) = \int_0^t G_u H_u\,dM_u.
\]
(5) If $M, N \in \mathcal{M}^{2,c}$ and $H \in L^*(M)$, then
\[
\Big\langle \int_0^\cdot H_u\,dM_u,\, N \Big\rangle_t = \int_0^t H_u\,d\langle M, N\rangle_u.
\]
(6) If $M, N \in \mathcal{M}^{2,c}$, $H \in L^*(M)$, $K \in L^*(N)$, then
\[
\Big\langle \int_0^\cdot H_u\,dM_u,\, \int_0^\cdot K_u\,dN_u \Big\rangle_t = \int_0^t H_u K_u\,d\langle M, N\rangle_u,
\]
i.e.,
\[
E\Big[\int_s^t H_u\,dM_u \cdot \int_s^t K_u\,dN_u \,\Big|\, \mathcal{F}_s\Big]
= E\Big[\int_s^t H_u K_u\,d\langle M, N\rangle_u \,\Big|\, \mathcal{F}_s\Big].
\]
In particular, taking $K \equiv 1$ recovers (5):
\[
\Big\langle \int_0^\cdot H_u\,dM_u,\, N \Big\rangle_t = \int_0^t H_u\,d\langle M, N\rangle_u, \quad P\text{-a.s., for all } 0 \le t < \infty.
\]

Proposition 9.16 (Kunita-Watanabe). If $M, N \in \mathcal{M}^{2,c}$, $H \in L^*(M)$, $K \in L^*(N)$, then
\[
\int_0^t |H_u K_u|\,d\langle M, N\rangle_u
\le \Big(\int_0^t H_u^2\,d\langle M\rangle_u\Big)^{1/2} \Big(\int_0^t K_u^2\,d\langle N\rangle_u\Big)^{1/2}, \quad P\text{-a.s.}
\]

Question. Can the condition (9.1) in Theorem 9.8 be weakened, say to
\[
P\Big(\int_0^T H_s^2\,d\langle M\rangle_s < \infty\Big) = 1? \tag{9.4}
\]
Can a stochastic integral still be defined under (9.4)? The answer is yes, as explained below.
Remark 9.17. Under the condition (9.4), there exists a sequence of simple processes $H^{(n)}$ such that
\[
\int_0^t H_s\,dM_s = \lim_{n\to\infty} \int_0^t H_s^{(n)}\,dM_s \quad \text{in probability}.
\]
However, $\int_0^t H_s\,dM_s$ is not a martingale in general; it is a local martingale. Thus even if $M$ itself is a square-integrable martingale, under (9.4) the integral may only be a local martingale. To work with genuine martingales one localizes by means of stopping times.
9.2. Stochastic integral with respect to local martingales

The construction of the previous section extends from square-integrable martingales to local martingales by localization.

Definition 9.18. For $M \in \mathcal{M}^{c,\mathrm{loc}}$ and $X \in L^*(M)$, the stochastic integral of $X$ with respect to $M$ is defined by
\[
\int_0^t X_s\,dM_s = \int_0^t X_s I_{\{T_n \ge s\}}\,dM_{s\wedge T_n} \quad \text{on } \{0 \le t \le T_n\},
\]
where $(T_n)$ is a nondecreasing sequence of stopping times such that $(M_{t\wedge T_n}, \mathcal{F}_t)$ is a martingale for each $n \ge 1$ and $P\big(\lim_{n\to\infty} T_n = \infty\big) = 1$.
Theorem 9.19. Let M ∈ Mc,loc and X, Y ∈ L∗ (M ). Then
(1) $\displaystyle\int_0^t X_u\,dM_u$ is a continuous local martingale.
(2) $\displaystyle\int_0^t (\alpha X_s + \beta Y_s)\,dM_s = \alpha \int_0^t X_s\,dM_s + \beta \int_0^t Y_s\,dM_s$ for all $\alpha, \beta \in \mathbb{R}$.
(3) $\displaystyle\Big\langle \int_0^\cdot X_s\,dM_s \Big\rangle_t = \int_0^t X_s^2\,d\langle M\rangle_s$, $P$-a.s.
(4) $\displaystyle\int_0^{t\wedge T} X_s\,dM_s = \int_0^t X_s I_{\{s\le T\}}\,dM_s$, $P$-a.s., for every stopping time $T$.
Remark 9.20. If $M \in \mathcal{M}^{c,\mathrm{loc}}$ and $X \in L^*(M)$, the following statements are false in general.
(1) $\displaystyle\int_0^t X_u\,dM_u$ is a martingale.
(2) $\displaystyle E\Big[\Big(\int_0^t X_u\,dM_u\Big)^2\Big] = E\Big[\int_0^t X_u^2\,d\langle M\rangle_u\Big]$.
(3) $\displaystyle E\Big[\Big(\int_s^t X_u\,dM_u\Big)^2 \,\Big|\, \mathcal{F}_s\Big] = E\Big[\int_s^t X_u^2\,d\langle M\rangle_u \,\Big|\, \mathcal{F}_s\Big]$.
That is, the identities for expectations and conditional expectations from Proposition 9.13 need not carry over to the local-martingale case. The next question is: for how large a class of integrators $M$ can a stochastic integral still be defined? The next section treats semimartingales.
9.3. Stochastic integral with respect to semimartingales

Recall: A stochastic process $X = (X_t)_{t\ge 0}$ is called a semimartingale if $X$ is an adapted process with the decomposition
\[
X_t = X_0 + M_t^X + A_t^X,
\]
where $M^X = (M_t^X)$ is a local martingale with $M_0^X = 0$ and $A^X = (A_t^X)$ is an adapted, càdlàg process of bounded variation, i.e., there exist nondecreasing, adapted processes $A^+, A^-$ such that $A_t^X = A_t^+ - A_t^-$.

In general, this decomposition is not unique. One can use a similar method to define the stochastic integral; we omit the details and only record the following remark.
Remark 9.21. For $H \in L^*(M^X)$,
\[
\int_0^t H_s\,dX_s = \int_0^t H_s\,dM_s^X + \int_0^t H_s\,dA_s^X
\]
(the first integral on the right-hand side is a stochastic integral, the second a Riemann-Stieltjes integral).

Before going further we discuss the quadratic variation and the cross variation of semimartingales; for martingales these notions were introduced in Chapter 7, and we now extend them. Assume that $X$ is càdlàg. Mimicking the construction for martingales, let $(\tau_n)$ be a sequence of partitions of $[0, t]$ and let
\[
q_n(t) = \sum_{t_i \in \tau_n,\, t_i \le t} \big(X_{t_{i+1}\wedge t} - X_{t_i\wedge t}\big)^2, \quad \text{for } t \ge 0.
\]
Definition 9.22.
(1) The quadratic variation of a semimartingale $X$ is defined by
\[
[X, X]_t = \lim_{n\to\infty} q_n(t).
\]
(2) The cross variation / quadratic covariation of semimartingales $X$ and $Y$ is defined by
\[
[X, Y]_t = \tfrac{1}{4}\big([X+Y, X+Y]_t - [X-Y, X-Y]_t\big).
\]
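The definition of $[X,X]$ as a limit of $q_n(t)$ can be visualized for $X=W$: along one Brownian path, refining the partition drives $q_n(1)$ towards $[W,W]_1 = 1$. A hypothetical sketch (grid sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n_fine = 2 ** 18
dW = rng.normal(0.0, np.sqrt(1.0 / n_fine), n_fine)
W = np.concatenate(([0.0], np.cumsum(dW)))        # one Brownian path on [0, 1]

for k in (2 ** 6, 2 ** 10, 2 ** 14, 2 ** 18):     # coarser to finer partitions
    X = W[:: n_fine // k]                          # path sampled along the partition
    qn = np.sum(np.diff(X) ** 2)                   # q_n(1)
    print(f"{k:7d} intervals: q_n(1) = {qn:.4f}   (expected [W,W]_1 = 1)")
```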
Remark 9.23. If $X$ and $Y$ are continuous martingales, then $\langle X, Y\rangle_t = [X, Y]_t$ and $\langle X\rangle_t = [X, X]_t$.
Theorem 9.24. If $X$ and $Y$ are semimartingales (not necessarily continuous) and if $M^{X,c}$ and $M^{Y,c}$ denote their continuous martingale parts, then
\[
[X, Y]_t = \langle M^{X,c}, M^{Y,c}\rangle_t + \sum_{s \le t} \Delta X_s\, \Delta Y_s,
\]
where $\Delta X_s = X_s - X_{s-}$ (and similarly for $\Delta Y_s$).

Remark 9.25. If $X$ and $Y$ are continuous semimartingales, then $[X, Y]_t = \langle M^X, M^Y\rangle_t$.

Corollary 9.26. If $X$ and $Y$ are semimartingales and $H \in L_T^*$, then
\[
\Big[\int_0^\cdot H_u\,dX_u,\, Y\Big]_t = \int_0^t H_u\,d[X, Y]_u.
\]
9.4. Itô formula

Recall. In calculus, if $F, G \in C^1$, the chain rule gives $(F \circ G)' = (F' \circ G)\cdot G'$, or in differential form
\[
\frac{d}{dt} F(G(t)) = F'(G(t)) \cdot \frac{dG(t)}{dt} = F'(G(t)) \cdot G'(t).
\]
This implies that
\[
F(G(t)) - F(G(0)) = \int_0^t F'(G(s))\, G'(s)\,ds = \int_0^t F'(G(s))\,dG(s).
\]
We now ask what happens when $G$ is replaced by a stochastic process such as a semimartingale; as we will see, an additional second-order correction term appears.
Theorem 9.27 (one-dimensional Itô formula, continuous form). Let $f : \mathbb{R} \to \mathbb{R}$ be a $C^2$-function and let $X = (X_t, \mathcal{F}_t)$ be a continuous semimartingale with the decomposition $X_t = X_0 + M_t + A_t$, where $M$ is a local martingale and $A$ is of bounded variation. Then
\[
f(X_t) = f(X_0) + \int_0^t f'(X_s)\,dX_s + \frac{1}{2}\int_0^t f''(X_s)\,d[X, X]_s
= f(X_0) + \int_0^t f'(X_s)\,dM_s + \int_0^t f'(X_s)\,dA_s + \frac{1}{2}\int_0^t f''(X_s)\,d\langle M\rangle_s, \tag{9.5}
\]
$P$-a.s.

Remark 9.28. (9.5) in differential form:
\[
df(X_t) = f'(X_t)\,dX_t + \frac{1}{2} f''(X_t)\,d[X, X]_t
= f'(X_t)\,dM_t + f'(X_t)\,dA_t + \frac{1}{2} f''(X_t)\,d\langle M\rangle_t.
\]
Note that the part $\int_0^t f'(X_s)\,dM_s$ is a local martingale, while $\int_0^t f'(X_s)\,dA_s + \frac{1}{2}\int_0^t f''(X_s)\,d\langle M\rangle_s$ is of bounded variation. This means that if $X$ is a (continuous) semimartingale and $f \in C^2$, then $f(X)$ is again a semimartingale.

Proof of Theorem 9.27. We only sketch the main idea. By Taylor expansion,
\[
f(X_{t_{i+1}\wedge t}) - f(X_{t_i\wedge t}) = f'(X_{t_i})\,\Delta_i X + \frac{1}{2} f''(X_{t_i})\,(\Delta_i X)^2 + R_i,
\]
where $\Delta_i X = X_{t_{i+1}\wedge t} - X_{t_i\wedge t}$ and $R_i$ is the error term. Summing over $i$, we get
\[
f(X_t) - f(X_0) = \sum_i f'(X_{t_i})\,\Delta_i X + \frac{1}{2} \sum_i f''(X_{t_i})\,(\Delta_i X)^2 + \sum_i R_i.
\]
Due to the definition of the stochastic integral and of the Riemann-Stieltjes integral, we have
\[
\sum_i f'(X_{t_i})\,\Delta_i X \longrightarrow \int_0^t f'(X_s)\,dX_s, \qquad
\sum_i f''(X_{t_i})\,(\Delta_i X)^2 \longrightarrow \int_0^t f''(X_s)\,d[X, X]_s,
\]
as $n \to \infty$, and
\[
\Big|\sum_i R_i\Big| \le \frac{1}{2} \sum_i \big|f''(\tilde X_{t_i}) - f''(X_{t_i})\big|\,(\Delta_i X)^2 \le \frac{\varepsilon}{2} \sum_i (\Delta_i X)^2
\]
for $n$ large enough, where $\tilde X_{t_i}$ lies between $X_{t_i\wedge t}$ and $X_{t_{i+1}\wedge t}$; since $\sum_i (\Delta_i X)^2 \to [X,X]_t$ and $\varepsilon > 0$ is arbitrary, $\sum_i R_i \to 0$. $\qed$

A natural question at this point: how does one compute $[X, X]$ in practice?

Remark 9.29. If $X$ is a continuous semimartingale, then in differential notation we write $(dX_t)^2 = d[X, X]_t$. Thus, by Remark 9.28,
\[
df(X_t) = f'(X_t)\,dX_t + \frac{1}{2} f''(X_t)\,(dX_t)^2.
\]
Moreover, if $X = W$ is a standard Brownian motion, then we have
\[
(dt)^2 = dt \cdot dW_t = dW_t \cdot dt = 0, \qquad (dW_t)^2 = d\langle W\rangle_t = dt.
\]

Example 9.30.
(1) Consider $X = W$ = standard Brownian motion and $f(x) = x^2$, so $f'(x) = 2x$, $f''(x) = 2$. Then
\[
W_t^2 = W_0^2 + \int_0^t f'(W_s)\,dW_s + \frac{1}{2}\int_0^t f''(W_s)\,d\langle W\rangle_s
= 2\int_0^t W_s\,dW_s + \frac{1}{2}\int_0^t 2\,ds
= 2\int_0^t W_s\,dW_s + t,
\]
i.e.,
\[
\int_0^t W_s\,dW_s = \frac{1}{2}\big(W_t^2 - t\big).
\]
(2) Let $W$ be the standard Brownian motion and $X \in L^*$. Consider the exponential martingale
\[
Z_t = \exp\Big(\int_0^t X_u\,dW_u - \frac{1}{2}\int_0^t X_u^2\,du\Big). \tag{9.6}
\]
Set
\[
Y_t := \int_0^t X_u\,dW_u - \frac{1}{2}\int_0^t X_u^2\,du,
\quad\text{i.e.,}\quad
dY_t = X_t\,dW_t - \frac{1}{2} X_t^2\,dt.
\]
For $f(x) = e^x$ we have $f'(x) = e^x$ and $f''(x) = e^x$. Thus,
\[
df(Y_t) = f'(Y_t)\,dY_t + \frac{1}{2} f''(Y_t)\,(dY_t)^2.
\]
This implies that
\[
dZ_t = Z_t \Big(X_t\,dW_t - \frac{1}{2} X_t^2\,dt\Big) + \frac{1}{2} Z_t\,(dY_t)^2
= Z_t \Big(X_t\,dW_t - \frac{1}{2} X_t^2\,dt\Big) + \frac{1}{2} Z_t X_t^2\,dt
= Z_t X_t\,dW_t.
\]
Hence, the stochastic process $(Z_t)$ satisfies the stochastic differential equation $dZ_t = Z_t X_t\,dW_t$, i.e.,
\[
Z_t = 1 + \int_0^t Z_u X_u\,dW_u. \tag{9.7}
\]
In other words, if $(Z_t)$ is of the form (9.6), then $(Z_t)$ is a solution to (9.7). A special case: if $X_t \equiv \sigma$, then
\[
Z_t = \exp\Big(\sigma W_t - \frac{1}{2}\sigma^2 t\Big)
\]
is a solution to the stochastic differential equation $dZ_t = \sigma Z_t\,dW_t$.
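The special case $X_t\equiv\sigma$ is easy to test numerically (a hypothetical sketch; the step size, $\sigma$ and the Euler-Maruyama scheme are my own choices): integrating $dZ_t=\sigma Z_t\,dW_t$ along a simulated path should reproduce the closed form $Z_t=\exp(\sigma W_t-\tfrac12\sigma^2 t)$.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma, T, n = 0.4, 1.0, 100_000
dt = T / n

dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))
t = np.linspace(0.0, T, n + 1)

# Euler-Maruyama for dZ = sigma * Z * dW with Z_0 = 1:  Z_{i+1} = Z_i (1 + sigma dW_i)
Z = np.concatenate(([1.0], np.cumprod(1.0 + sigma * dW)))

Z_exact = np.exp(sigma * W - 0.5 * sigma ** 2 * t)
print("Z_T (Euler-Maruyama):", Z[-1])
print("Z_T (closed form)   :", Z_exact[-1])   # should be close for small dt
```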
Stochastic differential equations of this kind will be studied in more detail in Chapter 10.

Theorem 9.31 (multi-dimensional Itô formula, continuous local martingales). Let $X = (X^1, X^2, \ldots, X^n)$ be a vector of local martingales in $\mathcal{M}^{c,\mathrm{loc}}$ and let $f : [0,\infty) \times \mathbb{R}^n \to \mathbb{R}$ be a $C^{1,2}$-function. Then
\[
f(t, X_t) = f(0, X_0) + \int_0^t \frac{\partial}{\partial t} f(s, X_s)\,ds
+ \sum_{i=1}^n \int_0^t \frac{\partial}{\partial x_i} f(s, X_s)\,dX_s^i
+ \frac{1}{2} \sum_{i,j=1}^n \int_0^t \frac{\partial^2}{\partial x_i \partial x_j} f(s, X_s)\,d\langle X^i, X^j\rangle_s
\]
for all $t$. (Here $\frac{\partial}{\partial x_i} f(s, X_s)$ is shorthand for the partial derivative $\frac{\partial}{\partial x_i} f(t, x_1, \ldots, x_n)$ evaluated at $(t, x_1, \ldots, x_n) = (s, X_s^1, \ldots, X_s^n)$, and similarly for $\frac{\partial}{\partial t} f(s, X_s)$.)

Example 9.32. Let $W = (W^1, W^2, \ldots, W^n)$ be an $n$-dimensional Brownian motion with $n \ge 2$. Consider
\[
R_t = |W_t| = \sqrt{(W_t^1)^2 + \cdots + (W_t^n)^2}
\]
(called a Bessel process; it is the distance of $W_t$ from the origin). Let $f(t, x) = (x_1^2 + \cdots + x_n^2)^{1/2}$. Then
\[
\frac{\partial f}{\partial x_i} = \frac{x_i}{f(t,x)}, \qquad
\frac{\partial^2 f}{\partial x_i \partial x_j} =
\begin{cases}
-\dfrac{x_i x_j}{(f(t,x))^3}, & \text{if } i \neq j,\\[1ex]
\dfrac{(f(t,x))^2 - x_i^2}{(f(t,x))^3}, & \text{if } i = j.
\end{cases}
\]
By Theorem 9.31 (applicable along the path, since $R_t > 0$ for all $t > 0$ a.s. when $n \ge 2$),
\[
dR_t = df(t, W_t)
= \sum_{i=1}^n \frac{\partial f}{\partial x_i}(t, W_t)\,dW_t^i
+ \frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 f}{\partial x_i \partial x_j}(t, W_t)\,d\langle W^i, W^j\rangle_t
= \sum_{i=1}^n \frac{W_t^i}{R_t}\,dW_t^i + \frac{1}{2} \sum_{i=1}^n \frac{R_t^2 - (W_t^i)^2}{R_t^3}\,dt.
\]
Since
\[
\sum_{i=1}^n \frac{R_t^2 - (W_t^i)^2}{R_t^3} = \frac{n}{R_t} - \sum_{i=1}^n \frac{(W_t^i)^2}{R_t^3} = \frac{n-1}{R_t},
\]
we have
\[
dR_t = \sum_{i=1}^n \frac{W_t^i}{R_t}\,dW_t^i + \frac{n-1}{2R_t}\,dt,
\quad\text{i.e.,}\quad
R_t\,dR_t = \sum_{i=1}^n W_t^i\,dW_t^i + \frac{1}{2}(n-1)\,dt.
\]
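A quick consistency check on the Bessel computation (hypothetical sketch, not in the notes): combining $R_t\,dR_t=\sum_i W_t^i\,dW_t^i+\tfrac12(n-1)\,dt$ with $d\langle R\rangle_t=dt$ gives $d(R_t^2)=2\sum_i W_t^i\,dW_t^i+n\,dt$, hence $E[R_T^2]=nT$, which is easy to verify by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(6)
n_dim, T, n_paths = 3, 2.0, 500_000

# Endpoints W_T of an n_dim-dimensional Brownian motion across many paths.
W_T = rng.normal(0.0, np.sqrt(T), size=(n_paths, n_dim))
R_T_sq = np.sum(W_T ** 2, axis=1)

print("E[R_T^2] ~", R_T_sq.mean(), "   vs  n*T =", n_dim * T)
```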
Theorem 9.33 (Itô formula, general form). Let $X = (X^1, X^2, \ldots, X^n)$ be an $n$-dimensional semimartingale with decomposition
\[
X_t^i = X_0^i + M_t^i + A_t^i \quad\text{for } 1 \le i \le n,
\]
where $M^i$ is a local martingale and $A^i$ is of bounded variation. Let $f : \mathbb{R}^n \to \mathbb{R}$ be a $C^2$-function. Then $f(X)$ is a semimartingale and
\[
f(X_t) = f(X_0) + \sum_{i=1}^n \int_0^t \frac{\partial}{\partial x_i} f(X_{u-})\,dX_u^i
+ \frac{1}{2} \sum_{i,j=1}^n \int_0^t \frac{\partial^2}{\partial x_i \partial x_j} f(X_u)\,d\langle M^{i,c}, M^{j,c}\rangle_u
+ \sum_{s \le t} \Big( f(X_s) - f(X_{s-}) - \sum_{i=1}^n \frac{\partial}{\partial x_i} f(X_{s-})\,\Delta X_s^i \Big).
\]
9.5. Integration by parts

Recall: In calculus,
\[
\int f(x)\,dg(x) = f(x)g(x) - \int g(x)\,df(x).
\]
Is there an analogue in stochastic analysis? Let us first look at an example.

Example 9.34. Find $\int_0^t s\,dW_s$.
Let $f(t, x) = tx$; then
\[
\frac{\partial f}{\partial t} = x, \qquad \frac{\partial f}{\partial x} = t, \qquad \frac{\partial^2 f}{\partial x^2} = 0.
\]
By Itô's formula,
\[
f(t, W_t) = f(0, W_0) + \int_0^t \frac{\partial f}{\partial s}(s, W_s)\,ds + \int_0^t \frac{\partial f}{\partial x}(s, W_s)\,dW_s + \frac{1}{2}\int_0^t \frac{\partial^2 f}{\partial x^2}(s, W_s)\,d\langle W\rangle_s.
\]
Thus,
\[
t W_t = \int_0^t W_s\,ds + \int_0^t s\,dW_s,
\quad\text{i.e.,}\quad
\int_0^t W_s\,ds = t W_t - \int_0^t s\,dW_s.
\]
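The identity of Example 9.34 can be checked pathwise (hypothetical sketch): along a simulated Brownian path, the left-point sums for $\int_0^T s\,dW_s$ should nearly equal $T W_T-\int_0^T W_s\,ds$.

```python
import numpy as np

rng = np.random.default_rng(7)
T, n = 1.0, 200_000
dt = T / n
t = np.linspace(0.0, T, n + 1)

dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))

ito_side   = np.sum(t[:-1] * dW)                 # int_0^T s dW_s  (left-point sums)
parts_side = T * W[-1] - np.sum(W[:-1]) * dt     # T*W_T - int_0^T W_s ds

print("int_0^T s dW_s          :", ito_side)
print("T*W_T - int_0^T W_s ds  :", parts_side)   # should nearly coincide
```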
Theorem 9.35. Suppose that $f(s, \omega)$ is continuous and of bounded variation with respect to $s \in [0, t]$ for almost every $\omega \in \Omega$. Then
\[
\int_0^t f(s)\,dW_s = f(t) W_t - \int_0^t W_s\,df(s).
\]
What does the general statement look like?

Theorem 9.36 (Integration by parts). Suppose $X$ and $Y$ are continuous semimartingales. Then
\[
\int_0^t X_s\,dY_s = X_t Y_t - X_0 Y_0 - \int_0^t Y_s\,dX_s - [X, Y]_t. \tag{9.8}
\]

Proof. Apply the multi-dimensional Itô formula (Theorem 9.31) with $f(t, x, y) = xy$. $\qed$

Remark 9.37. (9.8) in differential form:
\[
d(X_t Y_t) = X_t\,dY_t + Y_t\,dX_t + d[X, Y]_t = X_t\,dY_t + Y_t\,dX_t + (dX_t)(dY_t).
\]
The differential form is often more convenient, and we will use both forms interchangeably.

Remark 9.38. If $X$ and $Y$ are (not necessarily continuous) semimartingales, then
\[
X_t Y_t = X_0 Y_0 + \int_0^t X_{u-}\,dY_u + \int_0^t Y_{u-}\,dX_u + [X, Y]_t.
\]
In particular,
\[
X_t^2 = X_0^2 + 2\int_0^t X_{u-}\,dX_u + [X, X]_t.
\]
9.6. Martingale representation theorem

Recall. By Proposition 9.13(1), if $X \in L^*$ and $W$ is a Brownian motion, then $\big(\int_0^t X_u\,dW_u\big)_{t\ge 0}$ is a martingale.

Question. If $M$ is a martingale, does there exist $X \in L^*$ and a Brownian motion $W$ such that
\[
M_t = M_0 + \int_0^t X_s\,dW_s\,?
\]

Answer. Not in general. A first necessary condition is that $M$ is continuous; but even continuity is not enough, as the following example shows.

Example 9.39. Let $W^1$ and $W^2$ be two independent Brownian motions. Set $\mathcal{F}_t = \sigma(W_s^1, W_s^2 : s \le t)$; then $W^1$ and $W^2$ are martingales with respect to $\mathbb{F} = (\mathcal{F}_t)_{t\ge 0}$. Suppose there exists $H \in L^*$ such that
\[
W_t^2 = \int_0^t H_s\,dW_s^1.
\]
Then
\[
t = \langle W^2\rangle_t = \Big\langle \int_0^\cdot H_s\,dW_s^1,\, W^2 \Big\rangle_t = \int_0^t H_s\,d\langle W^1, W^2\rangle_s = 0,
\]
which is obviously a contradiction.

Theorem 9.40 (Martingale Representation Theorem). Let $(W_t)$ be an $n$-dimensional Brownian motion with respect to $(\mathcal{F}_t^W)$ and let $(M_t)$ be a square-integrable martingale with respect to $P$ and $(\mathcal{F}_t^W)$. Then there exist unique $H^i \in L^*$, $1 \le i \le n$, such that
\[
M_t = E[M_0] + \sum_{i=1}^n \int_0^t H_s^i\,dW_s^i, \quad P\text{-a.s.},
\]
for all $t \ge 0$.
Remark 9.41. The condition “M is a martingale with respect to the filtration (FtW )” is important!
9.7. Changes of measures

9.7.1. Absolutely continuous probability measures.

Definition 9.42. Let $P$ and $Q$ be two probability measures on a measurable space $(\Omega, \mathcal{F})$.
(1) $Q$ is said to be absolutely continuous with respect to $P$ on the $\sigma$-algebra $\mathcal{F}$, and we write $Q \ll P$, if for $A \in \mathcal{F}$,
\[
P(A) = 0 \implies Q(A) = 0.
\]
(2) If both $P \ll Q$ and $Q \ll P$ hold, we say that $Q$ and $P$ are equivalent and we write $P \sim Q$.

Theorem 9.43 (Radon-Nikodym Theorem). $Q$ is absolutely continuous with respect to $P$ on $\mathcal{F}$ if and only if there exists an $\mathcal{F}$-measurable function $Z \ge 0$ such that
\[
\int_\Omega X\,dQ = \int_\Omega X Z\,dP \tag{9.9}
\]
for all $\mathcal{F}$-measurable functions $X \ge 0$.

Proof. Omitted. $\qed$
Remark 9.44. Z is unique up to a null set.
Definition 9.45. The function $Z$ is called the Radon-Nikodym density or Radon-Nikodym derivative of $Q$ with respect to $P$, and we write
\[
Z := \frac{dQ}{dP}.
\]

Notation 9.46. In general, we write $E_Q[X] = \int_\Omega X\,dQ$. Therefore, (9.9) implies
\[
E_Q[X] = \int_\Omega X\,dQ = \int_\Omega X \cdot \frac{dQ}{dP}\,dP = E\Big[X \cdot \frac{dQ}{dP}\Big] = E_P\Big[X \cdot \frac{dQ}{dP}\Big].
\]
Remark 9.47. The probability measure $Q$ is absolutely continuous with respect to $P$ on $\mathcal{F}$ if and only if there exists an $\mathcal{F}$-measurable random variable $Z$ such that
\[
Q(A) = \int_A Z\,dP = \int_A \frac{dQ}{dP}\,dP \quad \text{for all } A \in \mathcal{F}.
\]
Example 9.48.
(1) Let Ω = {1, 2, 3, 4} and F = σ({1}, {2}, {3}, {4}).
(i) Consider
\[
P_1(\{1\}) = \tfrac{1}{2}, \quad P_1(\{2\}) = \tfrac{1}{3}, \quad P_1(\{3\}) = \tfrac{1}{6}, \quad P_1(\{4\}) = 0,
\]
\[
P_2(\{1\}) = \tfrac{1}{3}, \quad P_2(\{2\}) = \tfrac{1}{4}, \quad P_2(\{3\}) = \tfrac{1}{6}, \quad P_2(\{4\}) = \tfrac{1}{4}.
\]
Then $P_1 \ll P_2$ and its Radon-Nikodym density is given by
\[
\frac{dP_1}{dP_2}(1) = \frac{P_1(\{1\})}{P_2(\{1\})} = \frac{1/2}{1/3} = \frac{3}{2}, \quad
\frac{dP_1}{dP_2}(2) = \frac{P_1(\{2\})}{P_2(\{2\})} = \frac{1/3}{1/4} = \frac{4}{3}, \quad
\frac{dP_1}{dP_2}(3) = \frac{P_1(\{3\})}{P_2(\{3\})} = \frac{1/6}{1/6} = 1, \quad
\frac{dP_1}{dP_2}(4) = \frac{P_1(\{4\})}{P_2(\{4\})} = \frac{0}{1/4} = 0.
\]
To see why $\frac{dP_1}{dP_2}(i) = \frac{P_1(\{i\})}{P_2(\{i\})}$, apply Remark 9.47 with $A = \{i\}$:
\[
P_1(\{i\}) = \frac{dP_1}{dP_2}(i)\,P_2(\{i\}).
\]
On a discrete probability space the Radon-Nikodym density is thus simply the ratio of the point masses. Furthermore, $P_2$ is not absolutely continuous with respect to $P_1$ (since $P_1(\{4\}) = 0$ but $P_2(\{4\}) = \tfrac14 > 0$); thus $P_1$ and $P_2$ are not equivalent. Moreover, consider a random variable $X$ with
\[
X(1) = 3, \quad X(2) = 4, \quad X(3) = 1, \quad X(4) = 6.
\]
Then
\[
E_{P_1}[X] = X(1)P_1(\{1\}) + X(2)P_1(\{2\}) + X(3)P_1(\{3\}) + X(4)P_1(\{4\})
= 3\cdot\tfrac{1}{2} + 4\cdot\tfrac{1}{3} + 1\cdot\tfrac{1}{6} + 6\cdot 0 = 3.
\]
Alternatively, by Theorem 9.43,
\[
E_{P_1}[X] = E_{P_2}\Big[X \cdot \frac{dP_1}{dP_2}\Big]
= \sum_{i=1}^4 X(i)\,\frac{dP_1}{dP_2}(i)\,P_2(\{i\})
= 3\cdot\tfrac{3}{2}\cdot\tfrac{1}{3} + 4\cdot\tfrac{4}{3}\cdot\tfrac{1}{4} + 1\cdot 1\cdot\tfrac{1}{6} + 6\cdot 0\cdot\tfrac{1}{4} = 3.
\]
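The discrete computation above can be mirrored in a few lines of Python using exact fractions (a hypothetical sketch; the dictionaries simply encode the point masses of $P_1$, $P_2$ and the values of $X$).

```python
from fractions import Fraction as F

P1 = {1: F(1, 2), 2: F(1, 3), 3: F(1, 6), 4: F(0)}
P2 = {1: F(1, 3), 2: F(1, 4), 3: F(1, 6), 4: F(1, 4)}
X  = {1: 3, 2: 4, 3: 1, 4: 6}

# On a discrete space the Radon-Nikodym density is the ratio of point masses.
dP1_dP2 = {w: P1[w] / P2[w] for w in P1}
print("dP1/dP2 :", dP1_dP2)                                              # 3/2, 4/3, 1, 0

print("E_P1[X] directly     :", sum(X[w] * P1[w] for w in P1))               # 3
print("E_P1[X] via Thm 9.43 :", sum(X[w] * dP1_dP2[w] * P2[w] for w in P2))  # 3
```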
What exactly does a change of measure change? Not the random variable $X$ itself, but its distribution: under $P_2$ the random variable $X$ has one distribution, and under $P_1$ it has another. This is the essence of a change of measure.
(ii) Moreover, consider
\[
P_3(\{1\}) = 0, \quad P_3(\{2\}) = \tfrac{1}{2}, \quad P_3(\{3\}) = \tfrac{1}{4}, \quad P_3(\{4\}) = \tfrac{1}{4}.
\]
Then $P_1 \not\ll P_3$, $P_3 \not\ll P_1$, $P_2 \not\ll P_3$, but $P_3 \ll P_2$, and the corresponding Radon-Nikodym density is given by
\[
\frac{dP_3}{dP_2}(1) = 0, \quad
\frac{dP_3}{dP_2}(2) = \frac{1/2}{1/4} = 2, \quad
\frac{dP_3}{dP_2}(3) = \frac{1/4}{1/6} = \frac{3}{2}, \quad
\frac{dP_3}{dP_2}(4) = \frac{1/4}{1/4} = 1.
\]

(2) Let $\Omega = \mathbb{R}$, let $\mathcal{F}$ be the collection of all Borel sets on $\mathbb{R}$, and let $P$ be a probability measure on $(\Omega, \mathcal{F})$. Consider a standard normally distributed random variable $X$ on $(\Omega, \mathcal{F}, P)$, i.e., $X \sim N(0, 1)$ under $P$. Explicitly, we have
\[
E_P[X] = 0, \qquad E_P[X^2] = 1.
\]
Consider a probability measure $Q$ on $\mathcal{F}$ defined by
\[
Q(A) = \int_A \exp\Big(X - \frac{1}{2}\Big)\,dP.
\]
Check: $Q$ is a probability measure. (Exercise!) Then $Q \ll P$ and the Radon-Nikodym density is given by
\[
Z = \frac{dQ}{dP} = \exp\Big(X - \frac{1}{2}\Big).
\]
Then, under the probability measure $Q$,
\[
Q(X \le x) = E_Q[I_{\{X \le x\}}] = \int_\Omega I_{\{X \le x\}} \exp\Big(X - \frac{1}{2}\Big)\,dP
= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{t - 1/2}\, e^{-t^2/2}\,dt
= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x \exp\Big(-\frac{1}{2}(t-1)^2\Big)\,dt.
\]
This means that the probability density function of $X$ under $Q$ is given by
\[
\frac{1}{\sqrt{2\pi}} \exp\Big(-\frac{1}{2}(t-1)^2\Big).
\]
Thus, $X \sim N(1, 1)$ under the probability measure $Q$, i.e., $E_Q[X] = 1$ and $\mathrm{Var}_Q(X) = 1$.
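The change of measure in part (2) is convenient to verify by Monte Carlo (hypothetical sketch): sampling $X\sim N(0,1)$ under $P$ and reweighting by $Z=e^{X-1/2}$ should reproduce the $N(1,1)$ statistics under $Q$.

```python
import numpy as np

rng = np.random.default_rng(8)
X = rng.normal(0.0, 1.0, 1_000_000)   # X ~ N(0,1) under P
Z = np.exp(X - 0.5)                   # dQ/dP

print("E_P[Z]    ~", Z.mean(), "(should be 1: Q is a probability measure)")
print("E_Q[X]    ~", (X * Z).mean(), "(should be 1)")
print("Var_Q(X)  ~", (X**2 * Z).mean() - (X * Z).mean()**2, "(should be 1)")
print("Q(X <= 0) ~", ((X <= 0.0) * Z).mean(), "(should be Phi(-1) ~ 0.1587)")
```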
Theorem 9.49. Let $P$, $Q_1$, and $Q_2$ be probability measures on a measurable space $(\Omega, \mathcal{F})$.
(1) If $Q_1 \ll P$ and $Q_2 \ll P$, then $\frac{Q_1 + Q_2}{2} \ll P$ and
\[
\frac{d\big(\frac{Q_1 + Q_2}{2}\big)}{dP} = \frac{1}{2}\Big(\frac{dQ_1}{dP} + \frac{dQ_2}{dP}\Big).
\]
(2) If $Q_1 \ll P \ll Q_2$, then
\[
\frac{dQ_1}{dQ_2} = \frac{dQ_1}{dP} \cdot \frac{dP}{dQ_2}, \quad Q_2\text{-a.s.}
\]
(3) If $Q_1 \sim P$, then $\frac{dQ_1}{dP} > 0$ $P$-a.s., and
\[
\frac{dP}{dQ_1} = \frac{1}{\frac{dQ_1}{dP}} = \Big(\frac{dQ_1}{dP}\Big)^{-1}.
\]
The proofs all rest on the uniqueness part of the Radon-Nikodym Theorem.

Proof. Let $X \ge 0$ be a random variable on $(\Omega, \mathcal{F})$.
(1) By the linearity of integration, we have
\[
\int_\Omega X\,d\Big(\frac{Q_1 + Q_2}{2}\Big)
= \frac{1}{2}\Big(\int_\Omega X\,dQ_1 + \int_\Omega X\,dQ_2\Big)
= \frac{1}{2}\Big(\int_\Omega X\,\frac{dQ_1}{dP}\,dP + \int_\Omega X\,\frac{dQ_2}{dP}\,dP\Big)
= \int_\Omega X \cdot \frac{1}{2}\Big(\frac{dQ_1}{dP} + \frac{dQ_2}{dP}\Big)\,dP.
\]
By the Radon-Nikodym Theorem, we get the desired result.
(2) Using the change of measures twice, we get
\[
\int_\Omega X\,dQ_1
= \int_\Omega X \cdot \frac{dQ_1}{dP}\,dP
= \int_\Omega X \cdot \frac{dQ_1}{dP} \cdot \frac{dP}{dQ_2}\,dQ_2.
\]
Due to the uniqueness in the Radon-Nikodym Theorem, we have the desired result.
(3) Since $Q_1 \sim P$,
\[
\int_\Omega X\,dP
= \int_\Omega X \cdot \frac{dP}{dQ_1}\,dQ_1
= \int_\Omega X \cdot \frac{dP}{dQ_1} \cdot \frac{dQ_1}{dP}\,dP.
\]
Since this holds for all $X \ge 0$, we get $\frac{dP}{dQ_1}\cdot\frac{dQ_1}{dP} = 1$ $P$-a.s., which implies the wanted result. $\qed$
Remark 9.50. Interpretation of conditional expectation: Consider $X \ge 0$ and let
\[
Q(A) = \frac{1}{E_P[X]} \int_A X\,dP, \quad \text{for } A \in \mathcal{F}.
\]
Clearly, $Q$ is a probability measure and $Q \ll P$ on $\mathcal{F}$. By the Radon-Nikodym Theorem, there exists a unique nonnegative $\mathcal{F}$-measurable random variable $Z$ such that $Q(A) = \int_A Z\,dP$. Denote $E_P[X \mid \mathcal{F}] = Z\,E_P[X]$; then
\[
\frac{1}{E_P[X]} \int_A X\,dP = Q(A) = \int_A \frac{E_P[X \mid \mathcal{F}]}{E_P[X]}\,dP.
\]
Hence,
\[
\int_A X\,dP = \int_A E_P[X \mid \mathcal{F}]\,dP
\]
and the conditional expectation $E_P[X \mid \mathcal{F}]$ is unique.
9.7.2. Conditional expectation.

Proposition 9.51. Suppose that $P$ and $Q$ are two probability measures on a measurable space $(\Omega, \mathcal{F})$ and that $Q \ll P$ on $\mathcal{F}$ with density $Z$. If $\mathcal{G}$ is a $\sigma$-algebra contained in $\mathcal{F}$, then
(1) $Q \ll P$ on $\mathcal{G}$;
(2) the corresponding density is given by
\[
\frac{dQ}{dP}\Big|_{\mathcal{G}} = E[Z \mid \mathcal{G}], \quad P\text{-a.s.}
\]

Proof. For $A \in \mathcal{G} \subset \mathcal{F}$, by Remark 9.47,
\[
Q(A) = \int_A Z\,dP = \int_A E[Z \mid \mathcal{G}]\,dP. \qed
\]
Proposition 9.52. Suppose that $Q \sim P$ on $\mathcal{F}$ with density $Z = \frac{dQ}{dP}$ and that $\mathcal{G} \subset \mathcal{F}$ is another $\sigma$-algebra. Then, for any $\mathcal{F}$-measurable random variable $Y \ge 0$,
\[
E_Q[Y \mid \mathcal{G}] = \frac{E[YZ \mid \mathcal{G}]}{E[Z \mid \mathcal{G}]}, \quad Q\text{-a.s.}
\]
This formula describes how conditional expectations transform under a change of measure.

Proof. For $A \in \mathcal{G}$,
\[
Q(A) = \int_A Z\,dP = \int_A E[Z \mid \mathcal{G}]\,dP.
\]
Thus,
\[
\int_A E_Q[Y \mid \mathcal{G}]\,dQ = \int_A Y\,dQ = \int_A YZ\,dP = \int_A E[YZ \mid \mathcal{G}]\,dP = \int_A \frac{E[YZ \mid \mathcal{G}]}{E[Z \mid \mathcal{G}]}\,dQ.
\]
Then we get the desired result. $\qed$
9.8. Girsanov's theorem

We have seen that a change of measure leaves a random variable untouched but changes its distribution. Girsanov's theorem addresses the analogous question for stochastic processes: when the measure is changed, how does the law of a stochastic process change? In particular, under which measure does a given process remain (or become) a Brownian motion?
Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $\mathbb{F} = (\mathcal{F}_t)_{t\ge 0}$ be a filtration satisfying the usual conditions. Let $(W_t)$ be a Brownian motion on $(\Omega, \mathcal{F}, P)$. The starting point is a generalization of the Gaussian change of measure in Example 9.48(2).

Example 9.53. Let $Z_1, Z_2, \ldots, Z_n$ be independent standard normally distributed random variables on $(\Omega, \mathcal{F}, P)$ and let $\mu_i \in \mathbb{R}$ for all $i$. Consider a new probability measure $\tilde{P}$ given by
\[
d\tilde{P}(\omega) = \exp\Big(\sum_{i=1}^n \mu_i Z_i - \frac{1}{2}\sum_{i=1}^n \mu_i^2\Big)\,dP(\omega),
\quad\text{i.e.,}\quad
\tilde{P}(A) = \int_A \exp\Big(\sum_{i=1}^n \mu_i Z_i - \frac{1}{2}\sum_{i=1}^n \mu_i^2\Big)\,dP.
\]
Consider the characteristic function
\[
E_{\tilde{P}}\big[\exp(i t_1 Z_1 + i t_2 Z_2 + \cdots + i t_n Z_n)\big]
= \int_\Omega \exp\Big(i\sum_{i=1}^n t_i Z_i\Big)\,d\tilde{P}
= \int_\Omega \exp\Big(i\sum_{i=1}^n t_i Z_i\Big)\,\frac{d\tilde{P}}{dP}\,dP
= E\Big[\prod_{i=1}^n \exp\Big(i t_i Z_i + \mu_i Z_i - \frac{1}{2}\mu_i^2\Big)\Big].
\]
Since
\[
E\Big[\exp\Big(i t_i Z_i + \mu_i Z_i - \frac{1}{2}\mu_i^2\Big)\Big]
= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty \exp\Big(i t_i z_i + \mu_i z_i - \frac{1}{2}\mu_i^2\Big)\exp\Big(-\frac{z_i^2}{2}\Big)\,dz_i
= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty \exp\Big(-\frac{1}{2}\big(z_i - (i t_i + \mu_i)\big)^2 + \frac{-t_i^2 + 2 i t_i \mu_i}{2}\Big)\,dz_i
= \exp\Big(-\frac{t_i^2}{2} + i t_i \mu_i\Big),
\]
and due to the independence of $Z_1, Z_2, \ldots, Z_n$ under $P$, we get
\[
E_{\tilde{P}}\big[\exp(i t_1 Z_1 + i t_2 Z_2 + \cdots + i t_n Z_n)\big]
= \prod_{i=1}^n E\Big[\exp\Big(i t_i Z_i + \mu_i Z_i - \frac{1}{2}\mu_i^2\Big)\Big]
= \prod_{i=1}^n \exp\Big(-\frac{t_i^2}{2} + i t_i \mu_i\Big).
\]
Thus, $Z_i \sim N(\mu_i, 1)$ for all $i$ and $Z_1, Z_2, \ldots, Z_n$ are independent under $\tilde{P}$ (see Appendix F).
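Example 9.53 lends itself to the same kind of numerical check (a hypothetical sketch; the vector of shifts $\mu$ is arbitrary): sampling the $Z_i$ under $P$ and reweighting by $d\tilde P/dP$ should give mean $\mu_i$ and unit variance for each coordinate.

```python
import numpy as np

rng = np.random.default_rng(9)
mu = np.array([0.5, -1.0, 2.0])                      # the shifts mu_i
n_paths = 1_000_000

Z = rng.normal(0.0, 1.0, size=(n_paths, mu.size))    # Z_i i.i.d. N(0,1) under P
density = np.exp(Z @ mu - 0.5 * np.sum(mu ** 2))     # dP~/dP

print("E_P[density] ~", density.mean(), "(should be 1)")
for i in range(mu.size):
    m = np.mean(Z[:, i] * density)                   # E_{P~}[Z_i]
    v = np.mean(Z[:, i] ** 2 * density) - m ** 2     # Var_{P~}(Z_i)
    print(f"coordinate {i + 1}: mean ~ {m:.3f} (mu_i = {mu[i]}),  var ~ {v:.3f}")
```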
How does a change of measure act on a whole stochastic process? Let $X = (X_t)_{t\ge 0}$ be a measurable, $\mathbb{F}$-adapted stochastic process satisfying
\[
P\Big(\int_0^T X_u^2\,du < \infty\Big) = 1 \quad \text{for all } 0 \le T < \infty.
\]
Define
\[
Z_t(X) = \exp\Big(\int_0^t X_u\,dW_u - \frac{1}{2}\int_0^t X_u^2\,du\Big). \tag{9.10}
\]
By Example 9.30(2), $(Z_t)$ satisfies
\[
Z_t(X) = 1 + \int_0^t Z_u X_u\,dW_u.
\]
This implies that $(Z_t)$ is a continuous local martingale with $E[Z_0] = 1$. For general $X$, however, $(Z_t)$ need not be a true martingale. The following proposition gives a sufficient condition.
T 1 2 E exp |Xs | ds < ∞, for all 0 ≤ T < ∞, (9.11) 2 0 then Zt defined by (9.10) is a martingale. Moreover, the condition (9.11) is called the Novikov condition. Remark 9.55. ddddddd:
(1) Why do we care whether $(Z_t)$ is a true martingale rather than only a local martingale? Because $Z_t$ will be used as the density of a new probability measure, and for that we need $E[Z_t] = 1$ for every $t$.
(2) We already know that $\int_0^t X_u\,dW_u$ is a martingale whenever $X \in L^*$. Why is a separate condition needed here? Because the relevant integral is $\int_0^t Z_u X_u\,dW_u$, so the condition $E\big[\int_0^T X_u^2\,du\big] < \infty$ alone does not suffice; one needs control of $ZX$, and the Novikov condition $E\big[\exp\big(\frac{1}{2}\int_0^T X_u^2\,du\big)\big] < \infty$ provides it.
9.9. Local times
For $c > 0$, consider the $C^1$-function
\[
g_c(x) =
\begin{cases}
\dfrac{x^2 + c^2}{2c}, & \text{if } |x| \le c,\\[1ex]
|x|, & \text{if } |x| > c,
\end{cases}
\qquad
g_c'(x) =
\begin{cases}
x/c, & \text{if } |x| \le c,\\
1, & \text{if } x > c,\\
-1, & \text{if } x < -c,
\end{cases}
\qquad
g_c''(x) = \frac{1}{c}\, I_{\{|x| < c\}}(x), \quad x \neq \pm c.
\]
By Itô's formula, we have
\[
g_c(W_t) = g_c(0) + \int_0^t g_c'(W_s)\,dW_s + \frac{1}{2}\int_0^t g_c''(W_s)\,ds.
\]
It is easy to see that
\[
\frac{1}{2}\int_0^t g_c''(W_s)\,ds = \frac{1}{2}\cdot\frac{1}{c}\int_0^t I_{\{|W_s| < c\}}\,ds.
\]