SOLUTIONS OF SOME ELLIPTIC EQUATIONS ASSOCIATED WITH A PIECEWISE CONTINUOUS PROCESS

B. IFTIMIE, I. MOLNAR and C. VÂRSAN
We provide a quasimartingale representation and a stochastic rule of differentiation for test functions belonging to some class of polynomials, which depend on the solution of a stochastic dynamic system defined via a piecewise continuous function λ(t). This is achieved by applying the classical Itô differentiation rule on each continuity interval. The quasimartingale representation on each continuity interval is obtained by solving some possibly degenerate elliptic equations.

AMS 2000 Subject Classification: 35B40, 35J70, 60G44, 91B28.

Key words: stochastic rule of differentiation, piecewise continuous process, degenerate elliptic equation.

1. INTRODUCTION
The evolution of a multi-dimensional piecewise continuous process {(z(t), µ(t)); t ≥ 0} is analyzed in connection with the stochastic rule of derivation, using polynomial solutions of some (possibly degenerate) elliptic equations. In our setting, the process {z(t) ∈ R^n; t ≥ 0} is generated as a solution of a stochastic differential system depending on a piecewise constant process {µ(t) ∈ S ⊆ R^d; t ≥ 0}.

The set of test functions ϕ ∈ P_p(z; µ) consists of all polynomial functions of degree p in the state variables z = (z_1, ..., z_n), whose coefficients are continuous bounded functions of µ ∈ R^d. It agrees with the elliptic operators L_µ depending on the parameter µ ∈ R^d which determine the drift part when the weighted functional {exp(γt)ϕ(z(t); µ(t)); t ≥ 0} is considered and the decomposition formula

exp(γt)ϕ(z(t); µ(t)) = ψ_t(z(·), µ(·)) + ∫_0^t exp(γs)[γϕ + L_µ(ϕ)](z(s); µ(s)) ds + M_t(z(·), µ(·))

is valid. Here, L_µ : P_p(z; µ) → P_p(z; µ) is an elliptic operator, {M_t(z(·), µ(·)); t ≥ 0} is a continuous martingale and {ψ_t(z(·); µ(·)); t ≥ 0} is a piecewise constant process.
The above decomposition formula is meaningful when describing the asymptotic behaviour of the augmented process {exp(γt)||z(t)||²; t ≥ 0}, provided the nontrivial solution ϕ_γ ∈ P_2(z; µ) of the elliptic equation

γϕ_γ(z; µ) + ||z||² + L_µ(ϕ_γ)(z; µ) = 0, ∀z ∈ R^n, µ ∈ R^d,

satisfies the nonsingularity condition

δ(γ)||z||² ≤ ϕ_γ(z; µ) ≤ (1/C(γ))||z||²

for some positive constants δ(γ), C(γ).
2. DEFINITIONS, SETTING OF THE PROBLEM AND AUXILIARY RESULTS
Let W(t) be a standard m-dimensional Wiener process over a complete probability space {Ω, F, {F_t}_{t≥0}, P}. On the same probability space we are given piecewise constant adapted processes {µ(t); t ≥ 0} and {y(t); t ≥ 0} of dimension d, respectively n. Set λ(t) = (y(t), µ(t)), t ≥ 0, and

λ(t, ω) = λ_k(ω) = (y_k(ω), µ_k(ω)), t ∈ [t_k(ω), t_{k+1}(ω)),

where {t_k; k ≥ 0} is an increasing sequence of positive adapted random variables with t_0 = 0, t_k → ∞ a.s. as k → ∞, and (y_k, µ_k), k ≥ 0, are multidimensional F_{t_k}-measurable random variables. We assume that the process W(t) and the sequence {(t_k, λ_k); k ≥ 1} are independent.

We are next given a finite set of vector fields g_i(z; µ) ∈ R^n, linear in z, defined as
(1) g_i(z; µ) = a_i(µ) + A_i(µ)z, i = 0, 1, ..., m, µ ∈ R^d, z ∈ R^n,

where the (n × n)-matrices A_i(µ) and the n-dimensional vectors a_i(µ) are assumed to be continuous and bounded.

For each x ∈ R^n define a piecewise continuous process {z(t, x); t ≥ 0} as the unique solution of the dynamical system
(2) dz(t) = g_0(z(t); µ(t))dt + Σ_{j=1}^m g_j(z(t); µ(t))dW_j(t), t ∈ [t_k, t_{k+1}),
    z(t_k) = z_-(t_k) + y(t_k) for any k ≥ 1, z(0) = x,

where z_-(t_k) = lim_{t↑t_k} z(t).

By definition, the unique solution of system (2) is a piecewise continuous and {F_t}-adapted process {z(t, x); t ≥ 0} such that, at each jump time t_k, the jump z(t_k, x) − z_-(t_k, x) = y(t_k) occurs.
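For readers who want to experiment with system (2), the following minimal sketch (not part of the paper) simulates one path by an Euler–Maruyama scheme between the jump times t_k and applies the reset z(t_k) = z_-(t_k) + y(t_k) at each t_k. Every concrete choice below — the dimensions, the exponential inter-jump times, the maps a_i, A_i and the law of (y_k, µ_k) — is an illustrative assumption, not something prescribed by the text.

```python
import numpy as np

# Illustrative simulation of one path of system (2); all concrete choices
# (dimensions, jump-time law, a_i, A_i, the law of (y_k, mu_k)) are assumptions.
rng = np.random.default_rng(0)
n, m, T, dt = 2, 2, 5.0, 1e-3

def a(i, mu):                      # a_i(mu): bounded, continuous (illustrative)
    return 0.1 * np.sin(mu + i) * np.ones(n)

def A(i, mu):                      # A_i(mu): bounded, continuous (illustrative)
    return -0.5 * (i + 1) * np.cos(mu) * np.eye(n)

def g(i, z, mu):                   # g_i(z; mu) = a_i(mu) + A_i(mu) z, as in (1)
    return a(i, mu) + A(i, mu) @ z

# piecewise constant data: jump times t_k with exponential gaps, marks (y_k, mu_k)
jumps, t_k = [], 0.0
while t_k < T:
    t_k += rng.exponential(1.0)
    jumps.append((t_k, 0.2 * rng.standard_normal(n), rng.choice([-1.0, 1.0])))

z, mu = np.ones(n), rng.choice([-1.0, 1.0])      # z(0) = x, mu(0) = mu_0
t, k, path = 0.0, 0, []
while t < T:
    if k < len(jumps) and t >= jumps[k][0]:      # jump time reached:
        _, y_k, mu_k = jumps[k]
        z, mu, k = z + y_k, mu_k, k + 1          # z(t_k) = z_-(t_k) + y(t_k)
    dW = rng.standard_normal(m) * np.sqrt(dt)    # Wiener increments
    z = z + g(0, z, mu) * dt + sum(g(j + 1, z, mu) * dW[j] for j in range(m))
    t += dt
    path.append((t, z.copy()))
```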
Definition 1. Denote by P_p(z; µ) the set consisting of the polynomial functions of degree p in the variables (z_1, z_2, ..., z_n) = z, whose coefficients are continuous bounded functions of µ ∈ R^d; P_p(z) ⊆ P_p(z; µ) consists of polynomials with constant coefficients.

Consider the piecewise continuous scalar process
(3) V_z^γ(t, x) = exp(γt) ϕ(z(t, x); µ(t)) + ∫_0^t exp(γs) f(z(s, x)) ds, t ≥ 0,

where γ is a constant, f ∈ P_p(z) and ϕ ∈ P_p(z; µ).
We provide a stochastic rule of differentiation involving the piecewise continuous process {V_z^γ(t, x); t ≥ 0}, which contains both a continuous component and a piecewise constant process defined as

(4) D_z^γ(t, x) = Σ_{0<t_k≤t} exp(γt_k)ψ_k(z(·, x)) for t ≥ t_1, and D_z^γ(t, x) = 0 for 0 ≤ t < t_1,

where ψ_k(z(·, x)) = ϕ(z(t_k, x); µ(t_k)) − ϕ(z_-(t_k, x); µ(t_{k-1})), k ≥ 1.
Lemma 1. Let f ∈ P_p(z), ϕ ∈ P_p(z; µ) and define the piecewise continuous process {V_z^γ(t, x); t ≥ 0} as in (3), corresponding to the unique solution {z(t, x); t ≥ 0} of system (2). Then V_z^γ(t, x) satisfies the integral equation

(5) V_z^γ(t, x) = ϕ(z(0, x); µ(0)) + ∫_0^t exp(γs)[γϕ + f + L(ϕ)](z(s, x); µ(s)) ds
    + Σ_{j=1}^m ∫_0^t exp(γs)⟨∇_z ϕ(z(s, x); µ(s)), g_j(z(s, x); µ(s))⟩ dW_j(s) + D_z^γ(t, x),

for t ≥ 0 and x ∈ R^n, where the piecewise constant process {D_z^γ(t, x); t ≥ 0} is defined by (4) and the elliptic operator L : P_p(z; µ) → P_p(z; µ) is given by

(6) L(ϕ)(z; µ) = ⟨∇_z ϕ(z; µ), g_0(z; µ)⟩ + (1/2) Σ_{j=1}^m ⟨∇_z∇_z ϕ(z; µ) g_j(z; µ), g_j(z; µ)⟩.
Proof. The stochastic rule of differentiation from (5) can be obtained by applying the standard rule of stochastic differentiation associated with the continuous process {V_z^γ(t, x); t ∈ [t_k, t_{k+1})} for each k. In this respect, rewrite

(7) V_z^γ(t, x) = V_z^γ(t_k, x) + U_k(t, z(t, x)) − U_k(t_k, z(t_k, x)) + ∫_{t_k}^t exp(γs) f(z(s, x)) ds

for t ∈ [t_k, t_{k+1}), where U_k(t, z) = exp(γt)ϕ(z; µ(t_k)). According to the stochastic rule of differentiation, we get

U_k(t, z(t, x)) − U_k(t_k, z(t_k, x)) = ∫_{t_k}^t exp(γs)[γϕ + L(ϕ)](z(s, x); µ(t_k)) ds
    + Σ_{j=1}^m ∫_{t_k}^t exp(γs)⟨∇_z ϕ(z(s, x); µ(t_k)), g_j(z(s, x); µ(t_k))⟩ dW_j(s)

for every t ∈ [t_k, t_{k+1}).
Using also (7), it follows that

(8) V_z^γ(t, x) = V_z^γ(t_k, x) + ∫_{t_k}^t exp(γs)[γϕ + f + L(ϕ)](z(s, x); µ_k) ds
    + Σ_{j=1}^m ∫_{t_k}^t exp(γs)⟨∇_z ϕ(z(s, x); µ_k), g_j(z(s, x); µ_k)⟩ dW_j(s)

for t ∈ [t_k, t_{k+1}), where, by definition, µ_k = µ(t_k) and

(9) V_z^γ(t_k, x) = U_k(t_k, z(t_k, x)) + ∫_0^{t_k} exp(γs) f(z(s, x)) ds.

On the other hand, we notice that

U_k(t_k, z(t_k, x)) = U_{k-1}(t_k, z_-(t_k, x)) + exp(γt_k) ψ_k(z(·, x))

for k ≥ 1. U_{k-1}(t_k, z_-(t_k, x)) is computed as

U_{k-1}(t_k, z_-(t_k, x)) = U_{k-1}(t_{k-1}, z(t_{k-1}, x)) + ∫_{t_{k-1}}^{t_k} exp(γs)[γϕ + L(ϕ)](z(s, x); µ_{k-1}) ds
    + Σ_{j=1}^m ∫_{t_{k-1}}^{t_k} exp(γs)⟨∇_z ϕ(z(s, x); µ_{k-1}), g_j(z(s, x); µ_{k-1})⟩ dW_j(s).

Inserting now the last two formulas into (9), we get

(10) V_z^γ(t_k, x) = V_z^γ(t_{k-1}, x) + ∫_{t_{k-1}}^{t_k} exp(γs)[γϕ + f + L(ϕ)](z(s, x); µ_{k-1}) ds
    + Σ_{j=1}^m ∫_{t_{k-1}}^{t_k} exp(γs)⟨∇_z ϕ(z(s, x); µ_{k-1}), g_j(z(s, x); µ_{k-1})⟩ dW_j(s) + exp(γt_k)ψ_k(z(·, x)).

Now, iterating the last formula, we can rewrite equation (8) as it appears in (5).
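Since the operator L in (6) maps quadratic polynomials to quadratic polynomials when the fields are linear as in (1), it can be handled purely at the level of coefficient triples (constant, linear and quadratic part). The sketch below is our own illustration, not code from the paper: it implements that coefficient-level action and checks it against the pointwise definition (6) at a random point; all numerical data are arbitrary test values.

```python
import numpy as np

# Coefficient-level form of L from (6) for phi(z; mu) = const + <h0, z> + <H z, z>
# (H symmetric) and linear fields g_i(z; mu) = a_i(mu) + A_i(mu) z, at a fixed mu:
#   constant part of L(phi):  <h0, a0> + sum_j <H a_j, a_j>
#   linear part:              A0^T h0 + 2 H a0 + 2 sum_j A_j^T H a_j
#   quadratic part:           H A0 + A0^T H + sum_j A_j^T H A_j
def apply_L(h0, H, a, A):
    a0, A0 = a[0], A[0]
    C_new = h0 @ a0 + sum(aj @ (H @ aj) for aj in a[1:])
    h_new = A0.T @ h0 + 2 * H @ a0 + 2 * sum(Aj.T @ (H @ aj) for aj, Aj in zip(a[1:], A[1:]))
    H_new = H @ A0 + A0.T @ H + sum(Aj.T @ H @ Aj for Aj in A[1:])
    return C_new, h_new, H_new

# sanity check against definition (6) at a random point (arbitrary test data)
rng = np.random.default_rng(1)
n, m = 3, 2
a = [rng.standard_normal(n) for _ in range(m + 1)]
A = [rng.standard_normal((n, n)) for _ in range(m + 1)]
h0 = rng.standard_normal(n)
H = rng.standard_normal((n, n)); H = 0.5 * (H + H.T)

z = rng.standard_normal(n)
grad = h0 + 2 * H @ z                                              # nabla_z phi
g = [a[i] + A[i] @ z for i in range(m + 1)]                        # g_i(z; mu)
lhs = grad @ g[0] + 0.5 * sum(gj @ (2 * H @ gj) for gj in g[1:])   # definition (6)
Cn, hn, Hn = apply_L(h0, H, a, A)
assert np.isclose(lhs, Cn + hn @ z + z @ (Hn @ z))
```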
Problem A. For a fixed f ∈ P_p(z), find a constant γ ≠ 0 and ϕ_γ ∈ P_p(z; µ) such that the piecewise continuous process {V_z^γ(t, x); t ≥ 0} defined by (3) is a quasi-martingale, i.e.,

(11) V_z^γ(t, x) = ϕ_γ(z(0, x); µ(0)) + D_z^γ(t, x)
     + Σ_{j=1}^m ∫_0^t exp(γs)⟨∇_z ϕ_γ(z(s, x); µ(s)), g_j(z(s, x); µ(s))⟩ dW_j(s).
Problem B. For a fixed f ∈ P_p(z), find a constant γ ≠ 0 and ϕ_γ ∈ P_p(z; µ) solution of the elliptic equation

(12) γϕ_γ(z; µ) + f(z) + L(ϕ_γ)(z; µ) = 0 for any z ∈ R^n, µ ∈ R^d,

where the linear operator L : P_p(z; µ) → P_p(z; µ) is defined by (6).

Remark 1. Notice that each solution (γ, ϕ_γ) of Problem B is a solution of Problem A. In this respect, the scalar process {V_z^γ(t, x); t ≥ 0} defined by (3) is a quasi-martingale expressed as in (11), provided the stochastic rule of differentiation from Lemma 1 is used for the given solution of Problem B.

Lemma 2. Let f ∈ P_2(z) and the vector fields g_i(z; µ), i = 0, 1, ..., m, be given as in (1). Let γ < 0 be such that

(13) |γ| > 4(||g_0|| + (1/2)Σ_{j=1}^m ||g_j||²),

where ||g_j|| = sup_{µ∈R^d}(||a_j(µ)|| + ||A_j(µ)||). Then there exists ϕ_γ ∈ P_2(z; µ) satisfying the elliptic equation (12). In addition, ϕ_γ is the sum of the convergent series

(14) ϕ_γ(z; µ) = (1/|γ|) Σ_{k=0}^∞ L_{|γ|}^k(f)(z; µ), (z, µ) ∈ R^n × R^d,

where L_{|γ|} = (1/|γ|)L.

Proof.
Estimates of the polynomials L_{|γ|}^k(f)(z; µ) ∈ P_2(z; µ), k ≥ 1, can be obtained by an induction argument. Write each ϕ ∈ P_2(z; µ) as

ϕ(z; µ) = C(µ) + ⟨h_0(µ), z⟩ + ⟨H(µ)z, z⟩,

where H(µ) = (h_1(µ), ..., h_n(µ)) and C(µ) ∈ R, h_i(µ) ∈ R^n, i = 0, 1, ..., n, are continuous bounded functions with respect to µ. Denote ||ϕ|| = ||C|| + Σ_{i=0}^n ||h_i||, where ||C|| = sup_{µ∈R^d} |C(µ)| and ||h_i|| = sup_{µ∈R^d} ||h_i(µ)||, using ||h_i(µ)|| = Σ_{j=1}^n |h_i^j(µ)|, i = 0, 1, ..., n.
It is easily seen that

|ϕ(z; µ)| ≤ ||ϕ||(1 + ||z||²), ∀(z, µ) ∈ R^n × R^d,

and an elementary computation shows that

(15) (a) ||L_{|γ|}(ϕ)|| ≤ (4/|γ|)(||g_0|| + (1/2)Σ_{j=1}^m ||g_j||²)||ϕ||,
     (b) |L_{|γ|}(ϕ)(z; µ)| ≤ ||ϕ|| (4/|γ|)(||g_0|| + (1/2)Σ_{j=1}^m ||g_j||²)(1 + ||z||²)

for any (z, µ). Using an induction argument, we can easily prove that

(16) |L_{|γ|}^k(f)(z; µ)| ≤ [(4/|γ|)(||g_0|| + (1/2)Σ_{j=1}^m ||g_j||²)]^k ||f|| (1 + ||z||²)

for any (z, µ) ∈ R^n × R^d and k ≥ 0. In this respect, recall that L_{|γ|}^{k+1} : P_2(z; µ) → P_2(z; µ) can be written as L_{|γ|}^{k+1}(f)(z; µ) = L_{|γ|}(L_{|γ|}^k(f))(z; µ), k ≥ 0, and assume that L_{|γ|}^k(f) ∈ P_2(z; µ) and

||L_{|γ|}^k(f)|| ≤ [(4/|γ|)(||g_0|| + (1/2)Σ_{j=1}^m ||g_j||²)]^k ||f||

for some fixed k ≥ 1. Then, using estimates (15) for ϕ = L_{|γ|}^k(f), we get

||L_{|γ|}^{k+1}(f)|| = ||L_{|γ|}(ϕ)|| ≤ (4/|γ|)(||g_0|| + (1/2)Σ_{j=1}^m ||g_j||²)||ϕ||
    ≤ [(4/|γ|)(||g_0|| + (1/2)Σ_{j=1}^m ||g_j||²)]^{k+1} ||f|| = ρ^{k+1}||f||

and

|L_{|γ|}^{k+1}(f)(z; µ)| ≤ ρ^{k+1}||f||(1 + ||z||²)

for any (z, µ), where ρ := (4/|γ|)[||g_0|| + (1/2)Σ_{j=1}^m ||g_j||²].
This shows that estimate (16) is satisfied for any k ≥ 0 and, assuming that ρ < 1 (see (13)), we easily get that the series in (14) is convergent. In addition, using (16), we obtain

|ϕ_γ(z; µ)| ≤ (1/|γ|)(Σ_{k=0}^∞ ρ^k)||f||(1 + ||z||²),

where

(1/|γ|)Σ_{k=0}^∞ ρ^k = (1/|γ|)·1/(1 − ρ) = 1/[|γ| − 4(||g_0|| + (1/2)Σ_{j=1}^m ||g_j||²)] > 0.

By definition, ϕ_γ ∈ P_2(z; µ), and using (14) we get that ϕ_γ satisfies the elliptic equation (12), as the following simple computation shows:

L(ϕ_γ)(z; µ) = L_{|γ|}[Σ_{k=0}^∞ L_{|γ|}^k(f)(z; µ)] = Σ_{k=1}^∞ L_{|γ|}^k(f)(z; µ) = −γϕ_γ(z; µ) − f(z).

The proof is complete.
Remark 2. The functional ϕ_γ ∈ P_2(z; µ) constructed in Lemma 2 is meaningful for the asymptotic analysis of the piecewise continuous process {V_z^γ(t, x); t ≥ 0} provided ϕ_γ(z; µ) is a positive quadratic form satisfying

δ(γ)||z||² ≤ ϕ_γ(z; µ) ≤ (1/C(γ))||z||²

for any µ ∈ R^d. Here, δ(γ) and C(γ) are positive constants.

This can be achieved by imposing the following additional conditions on the vector fields g_i(z; µ), i = 0, 1, ..., m, given in (1):

(17) (a) g_i(z; µ) = A_i(µ)z, i = 0, 1, ..., m,
     (b) A_j(µ)A_k(µ) = A_k(µ)A_j(µ) for any j, k ∈ {0, 1, ..., m}, µ ∈ R^d,

where, recall, the A_i(µ) are continuous and bounded symmetric matrices.
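One concrete way to meet (17) — purely an illustration of ours, not a construction used in the paper — is to let all A_i(µ) share a fixed eigenbasis: such matrices are symmetric, bounded whenever their eigenvalue functions are, and automatically commute.

```python
import numpy as np

# Illustrative family satisfying (17): A_i(mu) = T diag(d_i(mu)) T^T with a fixed
# orthogonal T and bounded continuous eigenvalue functions d_i(mu) (all assumptions).
rng = np.random.default_rng(3)
n, m = 4, 2
T, _ = np.linalg.qr(rng.standard_normal((n, n)))    # shared orthonormal eigenbasis

def A(i, mu):
    d = np.cos(mu + i) * np.linspace(0.2, 1.0, n)   # bounded continuous eigenvalues d_i(mu)
    return T @ np.diag(d) @ T.T

mu = 0.7
for i in range(m + 1):
    assert np.allclose(A(i, mu), A(i, mu).T)                               # symmetry
    for j in range(m + 1):
        assert np.allclose(A(i, mu) @ A(j, mu), A(j, mu) @ A(i, mu))       # (17)(b)
```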
Lemma 3. Assume that the vector fields g_i(z; µ) given by (1) satisfy assumptions (17). Let f(z) = ||z||² and let the constant γ fulfil

(18) γ < 0, |γ| > 4(||g_0|| + (1/2)Σ_{j=1}^m ||g_j||²).

Then there exists ϕ_γ ∈ P_2(z; µ) satisfying the equations

(19) (a) γϕ_γ(z; µ) + ||z||² + L(ϕ_γ)(z; µ) = 0, ∀(z, µ) ∈ R^n × R^d,
     (b) ϕ_γ(z; µ) = ⟨R_γ(µ)z, z⟩, δ(γ)||z||² ≤ ϕ_γ(z; µ) ≤ (1/C(γ))||z||²,

where

L(ϕ_γ)(z; µ) = ⟨∇_z ϕ_γ(z; µ), A_0(µ)z⟩ + (1/2)Σ_{j=1}^m ⟨∇_z∇_z ϕ_γ(z; µ) A_j(µ)z, A_j(µ)z⟩,

C(γ), δ(γ) are positive constants, R_γ(µ) = [|γ|I_n − B(µ)]^{-1} and B(µ) = 2A_0(µ) + Σ_{j=1}^m A_j²(µ).
Proof. The assumptions of Lemma 2 are satisfied. Let ϕ_γ ∈ P_2(z; µ) be the solution of the elliptic equation (12) given by the series

ϕ_γ(z; µ) = (1/|γ|)[Σ_{k=0}^∞ L_{|γ|}^k(f)(z; µ)], (z, µ) ∈ R^n × R^d,

where f(z) = ||z||² and the linear operator L_{|γ|} = (1/|γ|)L satisfies

L_{|γ|}(ϕ)(z; µ) = (1/|γ|)[⟨∇_z ϕ(z; µ), A_0(µ)z⟩ + (1/2)Σ_{j=1}^m ⟨∇_z∇_z ϕ(z; µ) A_j(µ)z, A_j(µ)z⟩].

By definition,

L_{|γ|}(f)(z; µ) = (1/|γ|)⟨(2A_0(µ) + Σ_{j=1}^m A_j²(µ))z, z⟩ = (1/|γ|)⟨B(µ)z, z⟩.

It is obvious that the symmetric matrix B(µ) commutes with each A_i(µ), i = 0, 1, ..., m. Therefore,

L_{|γ|}^k(f)(z; µ) = ⟨(B(µ)/|γ|)^k z, z⟩.

Here, the matrix B(µ) satisfies

||B(µ)|| ≤ 2||A_0(µ)|| + Σ_{j=1}^m ||A_j(µ)||² ≤ 2||g_0|| + Σ_{j=1}^m ||g_j||² = ν

for any µ ∈ R^d. Define ||B|| = sup_{µ∈R^d} ||B(µ)||. Using (18), we get

(20) ρ = ||B||/|γ| ≤ ν/|γ| < 1/2

and the matrix series

(21) R_γ(µ) = (1/|γ|)Σ_{k=0}^∞ (B(µ)/|γ|)^k, µ ∈ R^d,
is convergent if |γ| > 2ν. Hence

ϕ_γ(z; µ) = (1/|γ|)⟨Σ_{k=0}^∞ (B(µ)/|γ|)^k z, z⟩ = ⟨R_γ(µ)z, z⟩,

provided |γ| > 2ν. Recall that the symmetric matrix R_γ(µ) fulfils

(22) R_γ(µ) = (1/|γ|)[I_n − B(µ)/|γ|]^{-1}.
Conclusion (19) is a direct application of the following argument. Let z = T(µ)y be an orthogonal mapping, i.e., T*(µ) = [T(µ)]^{-1}, such that

T^{-1}(µ)B(µ)T(µ) = D(µ) = diag(γ_1(µ), ..., γ_n(µ)),

and write ϕ_γ(T(µ)y; µ) = ⟨T^{-1}(µ)R_γ(µ)T(µ)y, y⟩ as

ϕ_γ(T(µ)y; µ) = (1/|γ|)⟨Σ_{k=0}^∞ (D(µ)/|γ|)^k y, y⟩ = ⟨Q_γ(µ)y, y⟩, y ∈ R^n,

where Q_γ(µ) = diag((|γ| − γ_1(µ))^{-1}, ..., (|γ| − γ_n(µ))^{-1}). Using B(µ)e_k(µ) = γ_k(µ)e_k(µ), k = 1, ..., n, where T(µ) = col[e_1(µ), ..., e_n(µ)], it is easily seen that ||B(µ)|| ≥ max_{1≤k≤n} |γ_k(µ)| = |γ̂(µ)| and |γ̂(µ)| ≤ ||B|| for any µ ∈ R^d. The hypothesis |γ| > 2ν (see (18)) implies that the matrix Q_γ(µ) is positive definite, uniformly with respect to µ, and

δ(γ)||y||² ≤ ϕ_γ(T(µ)y; µ) ≤ (1/C(γ))||y||²,

with some positive constants δ(γ), C(γ). Take now y = T^{-1}(µ)z. Using ||y||² = ||z||², we obtain

(1/C(γ))||z||² ≥ ϕ_γ(z; µ) = ⟨R_γ(µ)z, z⟩ ≥ δ(γ)||z||², ∀z ∈ R^n.
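As a quick numerical companion to Lemma 3 — again an illustration with assumed data, reusing the shared-eigenbasis family from the sketch after (17) — one can form B(µ), invert |γ|I_n − B(µ) and read off admissible constants δ(γ) and 1/C(γ) from the extreme eigenvalues of R_γ(µ):

```python
import numpy as np

# Illustrative check of Lemma 3 (all concrete choices are assumptions): with
# commuting symmetric A_i(mu) and |gamma| > 2*nu, the matrix
#   R_gamma(mu) = (|gamma| I - B(mu))^{-1},  B(mu) = 2 A_0(mu) + sum_j A_j(mu)^2,
# is symmetric positive definite, so phi_gamma(z; mu) = <R_gamma(mu) z, z> obeys
#   delta(gamma) ||z||^2 <= phi_gamma(z; mu) <= (1/C(gamma)) ||z||^2.
rng = np.random.default_rng(4)
n, m = 4, 2
T, _ = np.linalg.qr(rng.standard_normal((n, n)))

def A(i, mu):
    return T @ np.diag(np.cos(mu + i) * np.linspace(0.2, 1.0, n)) @ T.T

def B(mu):
    return 2 * A(0, mu) + sum(A(j, mu) @ A(j, mu) for j in range(1, m + 1))

nu = 2 * 1.0 + m * 1.0**2          # crude bound, since ||A_i(mu)|| <= 1 by construction
gamma = -3.0 * nu                  # |gamma| > 2*nu

for mu in np.linspace(-2.0, 2.0, 5):
    R = np.linalg.inv(abs(gamma) * np.eye(n) - B(mu))
    eig = np.linalg.eigvalsh(R)    # eigenvalues of the symmetric matrix R_gamma(mu)
    assert eig.min() > 0.0         # positive definite, uniformly in mu
    # eig.min() plays the role of delta(gamma), eig.max() that of 1/C(gamma)
    print(f"mu={mu:+.2f}  delta={eig.min():.4f}  1/C={eig.max():.4f}")
```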
Remark 3. The stochastic rule of differentiation from Lemma 1 and the result stated in Lemma 3 are relevant for the piecewise continuous process {z(t, x); t ≥ 0} defined as the unique solution of system (2). In this respect, we use the decomposition

(23) z(t, x) = z_c(t, x) + y(t), t ≥ 0,

where the continuous component {z_c(t, x); t ≥ 0} is the unique solution of the linear stochastic system

(24) dz_c(t) = A_0(µ(t))z_c(t)dt + Σ_{j=1}^m A_j(µ(t))z_c(t)dW_j(t), t ≥ 0, z_c(0) = x.
The stochastic rule of differentiation written for z_c(t, x) reads as

(25) V_{z_c}^γ(t, x) = ϕ_γ(x; µ(0)) + D_{z_c}^γ(t, x) + ∫_0^t exp(γs)[γϕ_γ + f + L(ϕ_γ)](z_c(s, x); µ(s)) ds
     + Σ_{j=1}^m ∫_0^t exp(γs)⟨∇_z ϕ_γ(z_c(s, x); µ(s)), A_j(µ(s))z_c(s, x)⟩ dW_j(s),

where

D_{z_c}^γ(t, x) = Σ_{0<t_k≤t} exp(γt_k) ψ_k(z_c(·, x)), k ≥ 1, for any t ≥ 0, x ∈ R^n.

Lemma 4. Let hypotheses (17) and (18) hold and set U_γ(t, x) = exp(γt)ϕ_γ(z_c(t, x); µ(t)), t ≥ 0, x ∈ R^n, for f(z) = ||z||² and ϕ_γ(z; µ) = ⟨R_γ(µ)z, z⟩. Then the piecewise continuous process U_γ(t, x) satisfies the integral equation

(26) U_γ(t, x) = ϕ_γ(z_c(0, x); µ(0)) + D_{z_c}^γ(t, x) + ∫_0^t [−C(γ)U_γ(s, x) + exp(γs)β_γ(z_c(s, x); µ(s))] ds
     + Σ_{j=1}^m ∫_0^t exp(γs)⟨∇_z ϕ_γ(z_c(s, x); µ(s)), A_j(µ(s))z_c(s, x)⟩ dW_j(s),

where

β_γ(z; µ) = C(γ)ϕ_γ(z; µ) − ||z||² ≤ 0, ∀(z, µ) ∈ R^n × R^d,
D_{z_c}^γ(t, x) = Σ_{0<t_k≤t} exp(γt_k)[ϕ_γ(z_c(t_k, x); µ(t_k)) − ϕ_γ(z_c(t_k, x); µ(t_{k-1}))].
Proof. The stochastic rule of differentiation given in Lemma 1 holds for the continuous process {z_c(t, x); t ≥ 0}. If we apply it for f(z) = ||z||² and ϕ_γ(z; µ) = ⟨R_γ(µ)z, z⟩, we get equation (25). Using the elliptic equation

γϕ_γ(z; µ) + ||z||² + L(ϕ_γ)(z; µ) = 0, ∀(z, µ) ∈ R^n × R^d,

and the integral equation (25), we obtain that the piecewise continuous process {U_γ(t, x); t ≥ 0} satisfies

(27) U_γ(t, x) = ϕ_γ(x; µ(0)) + D_{z_c}^γ(t, x) − ∫_0^t exp(γs)||z_c(s, x)||² ds
     + Σ_{j=1}^m ∫_0^t exp(γs)⟨∇_z ϕ_γ(z_c(s, x); µ(s)), A_j(µ(s))z_c(s, x)⟩ dW_j(s).

Conclusion (19) of Lemma 3 shows that the equation

−||z||² = −C(γ)ϕ_γ(z; µ) + β_γ(z; µ)

is satisfied, where β_γ(z; µ) ≤ 0 for any (z, µ) ∈ R^n × R^d. Rewrite now (27) using the last formula. We thus get conclusion (26).
Remark 4. Assume that the set S ⊆ R^d is chosen such that R_γ(µ(t_k)) = R_γ(µ(0)) for any k ≥ 0 (recall µ(t_k) ∈ S). Then the piecewise constant process {D_{z_c}^γ(t, x); t ≥ 0} vanishes and the integral equation (26) can be written accordingly. In addition, the same equation reduces to

(28) U_γ(t, x) = ϕ_γ(x; µ(0)) − ∫_0^t C(γ)U_γ(s, x) ds + 2Σ_{j=1}^m ∫_0^t U_γ(s, x)θ̂_j(s) dW_j(s) + ∫_0^t exp(γs)β_γ(z_c(s, x); µ(s)) ds,

where θ_j : S → R, j = 1, ..., m, are given such that A_j(µ) = θ_j(µ)I_n and θ̂_j(t) = θ_j(µ(t)).

If it is the case, then define the exponential martingale

(29) ξ(t) = exp{2Σ_{j=1}^m [∫_0^t θ̂_j(s) dW_j(s) − ∫_0^t (θ̂_j(s))² ds]}, t ≥ 0,

which satisfies the equation

(30) dξ(t) = 2ξ(t)Σ_{j=1}^m θ̂_j(t) dW_j(t), t ≥ 0; ξ(0) = 1.
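To see (29)–(30) in action, the short sketch below (illustrative choices for m, θ̂ and the time grid; not from the paper) compares the closed form (29) with an Euler discretization of (30) and checks that the sample mean of ξ(T) stays near 1, as expected of an exponential martingale.

```python
import numpy as np

# Illustrative sketch (assumed concrete data): simulate the exponential martingale
# (29) with m = 1 and a piecewise constant theta_hat, and compare the closed form
# with an Euler scheme for the SDE (30): d xi = 2 xi theta_hat(t) dW(t), xi(0) = 1.
rng = np.random.default_rng(5)
T, dt, paths = 1.0, 1e-3, 20_000
steps = int(T / dt)
t = np.arange(steps) * dt
theta = np.where(t < 0.5, 0.4, -0.2)                  # theta_hat(t) = theta(mu(t)), piecewise constant

dW = rng.standard_normal((paths, steps)) * np.sqrt(dt)

# closed form (29): xi(T) = exp(2 [ int theta dW - int theta^2 ds ])
xi_closed = np.exp(2.0 * ((dW * theta).sum(axis=1) - (theta**2).sum() * dt))

# Euler scheme for (30)
xi = np.ones(paths)
for k in range(steps):
    xi = xi + 2.0 * xi * theta[k] * dW[:, k]

print(np.mean(xi_closed), np.mean(xi))                # both ~ 1 (martingale property)
```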
Lemma 5. Let hypotheses (17), (18) and (19) hold. In addition, assume that A_j(µ) = θ_j(µ)I_n, j = 1, ..., m, and that the matrix B(µ) = 2A_0(µ) + Σ_{j=1}^m A_j²(µ) is constant. Denote h_γ(t, x) = exp(γt)||z_c(t, x)||², t ≥ 0. Then

(31) 0 ≤ U_γ(t, x) ≤ ϕ_γ(x; µ(0)) exp(−C(γ)t)ξ(t),
(32) 0 ≤ h_γ(t, x) ≤ (ϕ_γ(x; µ(0))/δ(γ)) exp(−C(γ)t)ξ(t),

for any t ≥ 0, x ∈ R^n.
Proof. By hypothesis, the conclusions of Lemma 4 hold. Using Remark 4, we rewrite the integral equation (26) as in (28). Set Φ_γ(t) = exp(−C(γ)t)ξ(t), t ≥ 0. Represent the solution U_γ(t, x) of equation (26) as

(33) U_γ(t, x) = Φ_γ(t)[ϕ_γ(x; µ(0)) + ∫_0^t Φ_γ(s)^{-1} exp(γs) β_γ(z_c(s, x); µ(s)) ds].

Recall that β_γ(z; µ) ≤ 0 for any (z, µ) ∈ R^n × S. Thus, (31) follows. Notice that h_γ(t, x) ≤ (1/δ(γ))U_γ(t, x). We easily get (32).
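For completeness, here is a sketch of why the representation (33) solves (28); it is standard Itô calculus, not reproduced verbatim from the paper.

```latex
% Sketch (standard Ito calculus): why (33) solves (28).
% By (30) and the product rule,
%   d\Phi_\gamma(t) = d\bigl(e^{-C(\gamma)t}\xi(t)\bigr)
%                   = -C(\gamma)\,\Phi_\gamma(t)\,dt
%                     + 2\,\Phi_\gamma(t)\sum_{j=1}^{m}\hat\theta_j(t)\,dW_j(t).
% Put Y(t) = \varphi_\gamma(x;\mu(0))
%            + \int_0^t \Phi_\gamma(s)^{-1} e^{\gamma s}\beta_\gamma(z_c(s,x);\mu(s))\,ds,
% a process of finite variation, so d[\Phi_\gamma, Y](t) = 0 and
%   d\bigl(\Phi_\gamma(t)Y(t)\bigr)
%     = Y(t)\,d\Phi_\gamma(t) + \Phi_\gamma(t)\,dY(t)
%     = -C(\gamma)\,\Phi_\gamma(t)Y(t)\,dt
%       + e^{\gamma t}\beta_\gamma(z_c(t,x);\mu(t))\,dt
%       + 2\,\Phi_\gamma(t)Y(t)\sum_{j=1}^{m}\hat\theta_j(t)\,dW_j(t).
% Hence \Phi_\gamma(t)Y(t) satisfies the same equation (28) as U_\gamma(t,x), with the
% same initial value \varphi_\gamma(x;\mu(0)); uniqueness of the solution gives (33).
```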
Remark 5. A relaxation of the assumptions made consists of admitting a common eigen-subspace generated by the constant vectors {q_1, ..., q_d} ⊆ R^n, d ≤ n. In this respect, real symmetric (d × d)-matrices B_i(µ) will exist such that

Q*A_i(µ)z = B_i(µ)y, i = 0, 1, ..., m, µ ∈ S, z ∈ R^n,

where Q = [q_1, ..., q_d] and y = col(q_1*z, ..., q_d*z). The original vector fields g_i(z; µ) = a_i(µ) + A_i(µ)z will be replaced by

h_i(y; µ) = b_i(µ) + B_i(µ)y, b_i(µ) = Q*a_i(µ),

where b_i(µ) ∈ R^d and the (d × d)-matrices B_i(µ) are continuous and bounded.

For each x ∈ R^n, define a piecewise continuous process {ẑ(t, x); t ≥ 0} as the unique solution of the dynamic system

(34) dẑ(t) = h_0(ẑ(t); µ(t))dt + Σ_{j=1}^m h_j(ẑ(t); µ(t))dW_j(t), t ∈ [t_k, t_{k+1}),
     ẑ(t_k) = ẑ_-(t_k) + Q*µ(t_k) for any k ≥ 1, ẑ(0) = Q*[x + µ(0)].
Remark 6. The analysis contained in Lemmas 1–5 has an adequate version for the piecewise continuous process {ẑ(t, x); t ≥ 0} and, in particular, the decomposition

ẑ(t, x) = ẑ_c(t, x) + Q*µ(t)

can be used. Here, the continuous component ẑ_c(t, x) stands for the unique solution of the linear stochastic system

(35) dẑ_c(t) = h_0(ẑ_c(t); µ(t))dt + Σ_{j=1}^m h_j(ẑ_c(t); µ(t))dW_j(t), t ≥ 0, ẑ_c(0) = Q*x.
Problem C. Prove that, under suitable conditions imposed on the matrices B_i(µ), we get the asymptotic behaviour of the continuous process {z̃(t, x); t ≥ 0} satisfying:

(a) exp(γt)E||z̃(t, x)||² → 0 as t → ∞ for each x ∈ R^n, provided γ < 0 and |γ| > ||B_0|| + (1/2)Σ_{j=1}^m ||B_j||², where h_i(y; µ) = B_i(µ)y and ||B_i|| = sup_{µ∈S} ||B_i(µ)||;

(b) for any γ as above, lim_{t↑∞} exp(γt)||z̃(t, x)||² = 0, P-a.s.
3. SOLUTIONS OF PROBLEM C

Assume that there exist vectors q_1, q_2, ..., q_d ∈ R^n, d ≤ n, and symmetric (d × d)-matrices B_i(µ) such that

Q*g_i(z; µ) = B_i(µ)y, i ∈ {0, 1, ..., m}, µ ∈ S, z ∈ R^n,

where g_i(z; µ) = a_i(µ) + A_i(µ)z, Q = [q_1, q_2, ..., q_d] and y = col(q_1*z, q_2*z, ..., q_d*z). Associate the linear stochastic system

(36) dz̃(t) = B_0(µ(t))z̃(t)dt + Σ_{j=1}^m B_j(µ(t))z̃(t)dW_j(t), t ≥ 0, z̃(0) = Q*x, x ∈ R^n.

Moreover, assume the matrices B_i(µ) are bounded, continuous and mutually commuting, i.e.,

(a) B_i(µ)B_j(µ) = B_j(µ)B_i(µ) for any µ ∈ S;
(b) B(µ) = 2B_0(µ) + Σ_{j=1}^m B_j²(µ) is a constant matrix.

Theorem 1. Let z̃(t) be the unique solution of system (36) and γ < 0 such that |γ| > 4[||B_0|| + (1/2)Σ_{j=1}^m ||B_j||²]. Then

(37) lim_{t↑∞} exp(γt)E||z̃(t, x)||² = 0 for any x ∈ R^n.
Proof. Let ϕ_γ ∈ P_2(z; µ) be the positive definite quadratic functional ϕ_γ(y; µ) = ⟨R_γ(µ)y, y⟩ such that the conclusions in Lemma 3 hold. Here, z ∈ R^n is replaced by y ∈ R^d and the equations mentioned there are replaced by

(38) (a) γϕ_γ(y; µ) + ||y||² + L(ϕ_γ)(y; µ) = 0, ∀(y, µ) ∈ R^d × S,
     (b) δ(γ)||y||² ≤ ϕ_γ(y; µ) ≤ (1/C(γ))||y||²,

where

L(ϕ_γ)(y; µ) = ⟨∇_y ϕ_γ(y; µ), B_0(µ)y⟩ + (1/2)Σ_{j=1}^m ⟨∇_y∇_y ϕ_γ(y; µ) B_j(µ)y, B_j(µ)y⟩,

C(γ), δ(γ) are positive constants, R_γ(µ) = [|γ|I_d − B(µ)]^{-1} and B(µ) = 2B_0(µ) + Σ_{j=1}^m B_j²(µ).

Set Û_γ(t, x) = exp(γt)ϕ_γ(z̃(t, x); µ(t)), t ≥ 0, x ∈ R^n. Then the piecewise continuous process {Û_γ(t, x); t ≥ 0} fulfils all the conditions needed to apply Lemma 4. As a consequence, Û_γ(t, x) is a solution of the integral equation

(39) Û_γ(t, x) = ϕ_γ(z̃(0, x); µ(0)) + D̂_y^γ(t, x) + ∫_0^t [−C(γ)Û_γ(s, x) + exp(γs)β_γ(z̃(s, x); µ(s))] ds
     + Σ_{j=1}^m ∫_0^t exp(γs)⟨∇_y ϕ_γ(z̃(s, x); µ(s)), B_j(µ(s))z̃(s, x)⟩ dW_j(s),

where

β_γ(y; µ) = C(γ)ϕ_γ(y; µ) − ||y||² ≤ 0, ∀(y, µ) ∈ R^d × S,
D̂_y^γ(t, x) = Σ_{0<t_k≤t} exp(γt_k)[ϕ_γ(z̃(t_k, x); µ(t_k)) − ϕ_γ(z̃(t_k, x); µ(t_{k-1}))].

Taking now the expectation (with respect to the probability space {Ω, F, {F_t}_{t≥0}, P}) in equation (39) and setting v(t, x) = E Û_γ(t, x), we get

(40) v(t, x) = v(0, x) + D(t, x) + ∫_0^t [−C(γ) v(s, x) + exp(γs) β(s, x)] ds,

where

D(t, x) = Σ_{0<t_k≤t} exp(γt_k) E{⟨[R_γ(µ(t_k)) − R_γ(µ(t_{k-1}))] z̃(t_k, x), z̃(t_k, x)⟩}

and β(s, x) = E β_γ(z̃(s, x); µ(s)) ≤ 0 for any s ≥ 0, x ∈ R^n.
In addition, since B(µ) is a constant matrix, R_γ(µ) = [|γ|I_d − B(µ)]^{-1} is also constant for any µ ∈ S and γ < 0. Therefore, the piecewise constant component {D(t, x); t ≥ 0} vanishes and, by a standard integral representation formula, we have

(41) 0 ≤ v(t, x) = exp[−C(γ)t][v(0, x) + ∫_0^t exp(C(γ)s) exp(γs) β(s, x) ds] ≤ exp(−C(γ)t) v(0, x)

for any t ≥ 0, x ∈ R^n. Notice that

exp(γt)E||z̃(t, x)||² ≤ (1/δ(γ)) v(t, x).
Using (41), we get (37).
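A Monte Carlo sanity check of (37) can be set up along the following lines; every concrete ingredient (the commuting B_i built from a shared eigenbasis, the resampling rule for µ(t), the step sizes) is an assumption of this sketch rather than part of the paper.

```python
import numpy as np

# Illustrative Monte Carlo sketch for conclusion (37): simulate the linear system
# (36) with commuting symmetric B_j driven by a piecewise constant mu(t), and watch
# exp(gamma*t) E||z~(t)||^2 decay when gamma < 0 satisfies the bound of Theorem 1.
rng = np.random.default_rng(6)
d, m, T, dt, paths = 3, 2, 4.0, 1e-3, 2000
steps = int(T / dt)

Tbasis, _ = np.linalg.qr(rng.standard_normal((d, d)))    # shared eigenbasis => commuting B_i
def Bmat(i, mu):
    return Tbasis @ np.diag(np.cos(mu + i) * np.linspace(0.2, 1.0, d)) @ Tbasis.T

gamma = -4.0 * (1.0 + 0.5 * m)     # |gamma| > 4(||B_0|| + 1/2 sum_j ||B_j||^2), using ||B_i|| <= 1

# piecewise constant mu(t): resample at exponential times (an illustrative choice)
mu_path = np.empty(steps)
t_next, mu_cur = 0.0, rng.uniform(-1.0, 1.0)
for k in range(steps):
    if k * dt >= t_next:
        mu_cur, t_next = rng.uniform(-1.0, 1.0), t_next + rng.exponential(0.5)
    mu_path[k] = mu_cur

z = np.tile(rng.standard_normal(d), (paths, 1))          # z~(0) = Q* x, same start for all paths
for k in range(steps):
    if k % 1000 == 0:
        t = k * dt
        print(f"t={t:.1f}  exp(gamma t) E||z||^2 = {np.exp(gamma * t) * np.mean((z**2).sum(axis=1)):.3e}")
    dW = rng.standard_normal((paths, m)) * np.sqrt(dt)
    drift = z @ Bmat(0, mu_path[k]).T * dt
    diff = sum((z @ Bmat(j + 1, mu_path[k]).T) * dW[:, [j]] for j in range(m))
    z = z + drift + diff
```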
Theorem 2. Let the assumptions of Theorem 1 hold. Moreover, let B_j(µ) = θ_j(µ)I_d, j = 1, ..., m, and assume the θ_j(µ) continuous and bounded. Define the exponential martingale

(42) ξ(t) = exp{2Σ_{j=1}^m [∫_0^t θ̂_j(s) dW_j(s) − ∫_0^t (θ̂_j(s))² ds]}, t ≥ 0,

where θ̂_j(t) = θ_j(µ(t)). Then

exp(γt)||z̃(t, x)||² ≤ exp[−C(γ)t] ξ(t) (||z̃(0, x)||²/δ(γ)) for any t ≥ 0, x ∈ R^n.

Proof. By hypothesis, Û_γ(t, x) defined in Theorem 1 satisfies the integral equation (39). Since the matrix R_γ(µ) is constant, we get that D̂_y^γ(t, x) = 0 for any t ≥ 0, x ∈ R^n. Hence, (39) becomes
(43) Û_γ(t, x) = ϕ_γ(z̃(0, x); µ(0)) + ∫_0^t [−C(γ)Û_γ(s, x) + exp(γs)β_γ(z̃(s, x); µ(s))] ds + 2Σ_{j=1}^m ∫_0^t Û_γ(s, x) θ̂_j(s) dW_j(s).

A standard integral representation formula allows us to write

(44) 0 ≤ Û_γ(t, x) ≤ exp[−C(γ)t] ξ(t) ϕ_γ(z̃(0, x); µ(0)), ∀t ≥ 0.

Notice that

exp(γt)||z̃(t, x)||² ≤ (1/δ(γ)) Û_γ(t, x), t ≥ 0, x ∈ R^n,

and estimate (44) leads us to the conclusion.

Acknowledgement. The authors were supported by Contract 2-CEx06-11-18/2006.
Received 12 November 2007
B. Iftimie
Academy of Economic Studies
Department of Mathematics
Piaţa Romană 6
010374 Bucharest, Romania

and

I. Molnar, C. Vârsan
Romanian Academy
S. Stoilow Institute of Mathematics
P.O. Box 1-764
014700 Bucharest, Romania