Backward Stochastic Partial Differential Equations with Jumps and Application to Optimal Control of Random Jump Fields

Bernt Øksendal, Frank Proske and Tusheng Zhang

April 2006

MIMS EPrint: 2006.54

Manchester Institute for Mathematical Sciences, School of Mathematics, The University of Manchester

Reports available from: http://www.manchester.ac.uk/mims/eprints
And by contacting: The MIMS Secretary, School of Mathematics, The University of Manchester, Manchester, M13 9PL, UK

ISSN 1749-9097

First version: 27 September 2005

Research Report No. 12, 2005, Probability and Statistics Group, School of Mathematics, The University of Manchester

Backward Stochastic Partial Differential Equations with Jumps and Application to Optimal Control of Random Jump Fields

Bernt Øksendal 1,2, Frank Proske 1, Tusheng Zhang 3

1 CMA, Department of Mathematics, University of Oslo, P.O. Box 1053 Blindern, N-0316 Oslo, Norway. Email: [email protected], [email protected]
2 Norwegian School of Economics and Business Administration, Helleveien 30, N-5045 Bergen, Norway
3 School of Mathematics, University of Manchester, Oxford Road, Manchester M13 9PL, England, U.K. Email: [email protected]

June 7, 2005

Abstract. We prove an existence and uniqueness result for a general class of backward stochastic partial differential equations with jumps. This is a type of equation which appears as the adjoint equation in the maximum principle approach to optimal control of systems described by stochastic partial differential equations driven by Lévy processes.

AMS Subject Classification: Primary 60H15; Secondary 93E20, 35R60.

1 Introduction

Let B_t, t ≥ 0, be an m-dimensional Brownian motion and let η(t) = ∫_0^t ∫_{R^n} z Ñ(ds, dz), t ≥ 0, be a pure jump Lévy process, both defined on a filtered probability space (Ω, F, F_t, P). Fix T > 0 and let φ(ω) be an F_T-measurable random variable. Let b : [0, T] × R^n × R^{n×m} → R^n be a given vector field. Consider the problem of finding three F_t-adapted processes p(t) ∈ R^n, q(t) ∈ R^{n×m} and r(t, z) ∈ R^{n×m} such that

dp(t) = b(t, p(t), q(t)) dt + q(t) dB_t + ∫_{R^n} r(t, z) Ñ(dt, dz),  t ∈ (0, T),   (1.1)

p(T) = φ  a.s.   (1.2)

This is a backward stochastic (ordinary) differential equation (BSDE). It is called backward because it is the terminal value p(T) = φ that is given, not the initial value p(0). Still, p(t) is required to be F_t-adapted. In general this is only possible if we are also free to choose q(t) and r(t, z) (in an F_t-adapted way). The theory of BSDEs in the continuous case (η = 0) is now well developed; see e.g. [EPQ], [MY], [PP1], [PP2] and [YZ] and the references therein. In the jump case (η ≠ 0) BSDEs have also been studied; see [FØS], [NS], [S] and the references therein.

There are many applications of this theory. Examples include the following:

(i) The problem of finding a replicating portfolio of a given contingent claim in a complete financial market can be transformed into a problem of solving a BSDE.



(ii) The maximum principle method for solving a stochastic control problem involves a BSDE for the adjoint processes p(t), q(t), r(t, z).

For more information about these and other applications of BSDEs we refer to [EPQ] and [YZ] and the references therein.

The purpose of this paper is to study backward stochastic partial differential equations (BSPDEs) with jumps. They are defined in a similar way as BSDEs, but with the basic equation being a stochastic partial differential equation rather than a stochastic ordinary differential equation. More precisely, we will study a class of BSPDEs which includes the following: Find adapted processes Y(t, x), Z(t, x), Q(t, x, z) such that

dY(t, x) = AY(t, x) dt + b(t, x, Y(t, x), Z(t, x)) dt + Z(t, x) dB_t + ∫_{R^n} Q(t, x, z) Ñ(dt, dz),  (t, x) ∈ (0, T) × R^n,   (1.3)

Y(T, x) = φ(x, ω).   (1.4)

Here dY(t, x) denotes the Itô differential with respect to t, A is a partial differential operator with respect to x, and Ñ(dt, dz) is the compensated Poisson random measure associated with a Lévy process η(·). The function b : [0, T] × R^n × R × R → R is given, and so is the terminal value function φ(x) = φ(x, ω). We assume that φ(x) is F_T-measurable for all x and that

E[∫_{R^n} φ(x)² dx] < ∞,   (1.5)

where E denotes expectation with respect to P. We are seeking processes Y(t, x), Z(t, x) and Q(t, x, z) such that (1.3) and (1.4) hold. The processes Y(t, x), Z(t, x) and Q(t, x, z) are required to be F_t-adapted, i.e. Y(t, x), Z(t, x) and Q(t, x, z) are F_t-measurable for all x ∈ R^n, z ∈ R, and we also require that

E[∫_{R^n} (∫_0^T {Y(t, x)² + Z(t, x)² + ∫ Q(t, x, z)² ν(dz)} dt) dx] < ∞,   (1.6)

where ν(·) is the Lévy measure of the underlying Lévy process.

Equations of this type are of interest because they appear as adjoint equations in a maximum principle approach to optimal control of stochastic partial differential equations. See Section 2.

Example 1.1 Consider the following BSPDE:

dY(t, x) = −½ ΔY(t, x) dt + Z(t, x) dB_t + ∫_{R^n} Q(t, x, z) Ñ(dt, dz),  (t, x) ∈ (0, T) × R^n,   (1.7)

Y(T, x) = φ(x).   (1.8)

Here ΔY(t, x) = Σ_{i=1}^n ∂²Y(t, x)/∂x_i² is the Laplacian with respect to x applied to Y(t, x), and φ(x) satisfies E[∫_{R^n} φ(x)² dx] < ∞.

In this simple case we are able to find the solution explicitly. We first use the Itô representation theorem to write, for almost all x,

φ(x) = h(x) + ∫_0^T g(s, x, ω) dB_s + ∫_0^T ∫_{R^n} k(s, x, z, ω) Ñ(ds, dz),

where h(x) = E[φ(x)], g(s, x, ·) and k(s, x, z, ·) are F_s-measurable for all s, x, z, and

E[∫_{R^n} (∫_0^T {g²(s, x, ·) + ∫_{R^n} k²(s, x, z) ν(dz)} ds) dx] < ∞.

Let

R_t f(x) = ∫_{R^n} (2πt)^{−n/2} f(y) exp(−|x − y|²/(2t)) dy,  t > 0,

be the transition operator for Brownian motion, defined for all measurable f : R^n → R such that the integral converges. Then it is well known that

∂/∂t (R_t f(x)) = ½ Δ(R_t f(x)).   (1.9)
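As a quick numerical sanity check of (1.9) (an aside, not part of the original argument), the sketch below approximates R_t f in dimension one by direct quadrature of the Gaussian convolution and compares the time derivative with one half of a finite-difference Laplacian. The grid, the time point and the test function are arbitrary illustrative choices.

```python
import numpy as np

# Discretise R_t f(x) = \int (2*pi*t)^{-1/2} f(y) exp(-|x-y|^2/(2t)) dy  (n = 1)
x = np.linspace(-10.0, 10.0, 801)
dx = x[1] - x[0]
f = np.exp(-x**2)          # arbitrary test function

def heat_semigroup(f, t):
    # R_t f via direct quadrature of the Gaussian convolution
    kernel = np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return kernel @ f * dx

t, dt = 1.0, 1e-3
u_minus, u, u_plus = (heat_semigroup(f, s) for s in (t - dt, t, t + dt))

time_deriv = (u_plus - u_minus) / (2.0 * dt)                     # d/dt R_t f
laplacian = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2   # second difference in x

# Compare away from the boundary of the truncated grid
interior = slice(50, -50)
err = np.max(np.abs(time_deriv[interior] - 0.5 * laplacian[interior]))
print("max |d/dt R_t f - 1/2 Laplacian R_t f| on the interior:", err)
```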

Now define

Y(t, x) = R_{T−t}(∫_0^t g(s, ·, ω) dB_s + ∫_0^t ∫_{R^n} k(s, ·, z, ω) Ñ(ds, dz) + h(·))(x)
        = ∫_0^t (R_{T−t} g(s, ·, ω))(x) dB_s + ∫_0^t ∫_{R^n} (R_{T−t} k(s, ·, z, ω))(x) Ñ(ds, dz) + (R_{T−t} h)(x).   (1.10)

Then

dY(t, x) = [−½ ∫_0^t Δ(R_{T−t} g(s, ·, ω))(x) dB_s − ½ ∫_0^t ∫_{R^n} Δ(R_{T−t} k(s, ·, z, ω))(x) Ñ(ds, dz) − ½ Δ(R_{T−t} h)(x)] dt + (R_{T−t} g(t, ·, ω))(x) dB_t + ∫_{R^n} (R_{T−t} k(t, ·, z, ω))(x) Ñ(dt, dz)

         = −½ ΔY(t, x) dt + Z(t, x) dB_t + ∫_{R^n} Q(t, x, z) Ñ(dt, dz),

where

Z(t, x) = (R_{T−t} g(t, ·, ω))(x)   (1.11)

and

Q(t, x, z) = (R_{T−t} k(t, ·, z))(x).   (1.12)

Hence the processes Y(t, x), Z(t, x) and Q(t, x, z) given by (1.10)–(1.12) solve the BSPDE (1.7)–(1.8).

In the general case it is not possible to find explicit solutions of a BSPDE. However, in Sections 3 and 4 we will prove an existence and uniqueness result for a general class of such equations. We will achieve this by regarding a BSPDE of the type (1.3)–(1.4) as a special case of a backward stochastic evolution equation for Hilbert space valued processes. This, in turn, is studied by taking finite dimensional projections and then passing to the limit. This is the well known Galerkin approximation method, which has been used by several authors in other connections; see e.g. [B1], [B2] and [P]. We also refer the reader to [PZ] for the general theory of stochastic evolution equations on Hilbert spaces.

The rest of the paper is organized as follows. In Section 2 we prove a (sufficient) maximum principle for optimal control of random jump fields, i.e. solutions of SPDEs driven by Lévy processes (Theorem 2.1). This principle involves a BSPDE of the form (1.3)–(1.4) for the associated adjoint processes. In Section 3 we give the precise framework of our general existence and uniqueness result. The existence and uniqueness result and its proof are given in Section 4.
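The Galerkin idea mentioned above can be previewed on a toy finite-dimensional computation (this is only an illustration, not the construction used in Section 4): project a differential operator onto the span of the first n elements of an orthonormal basis and work with the resulting matrix. Here the operator is A = −d²/dx² on (0, π) with the sine basis, an assumed example for which P_n A is simply diagonal.

```python
import numpy as np

# Orthonormal basis of H = L^2(0, pi): e_k(x) = sqrt(2/pi) * sin(k x), k = 1, 2, ...
# For A = -d^2/dx^2 (Dirichlet), A e_k = k^2 e_k, so the Galerkin matrix
# A_n = P_n A restricted to V_n = span(e_1, ..., e_n) is diag(1, 4, ..., n^2).

def galerkin_matrix(n, n_grid=2000):
    x = np.linspace(0.0, np.pi, n_grid)
    e = np.sqrt(2.0 / np.pi) * np.sin(np.outer(np.arange(1, n + 1), x))  # basis values
    Ae = np.array([(k**2) * e[k - 1] for k in range(1, n + 1)])          # A applied to e_k
    # (A_n)_{ij} = <A e_j, e_i>, computed by the trapezoidal rule
    return np.trapz(e[:, None, :] * Ae[None, :, :], x, axis=-1)

An = galerkin_matrix(5)
print(np.round(An, 3))   # approximately diag(1, 4, 9, 16, 25)
```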

2 The stochastic maximum principle

In this section we prove a verification theorem for optimal control of a process described by a stochastic partial differential equation (SPDE) driven by a Brownian motion B(t) and a compensated Poisson random measure Ñ(dt, dz). We call such a process a (controlled) random jump field. The verification theorem has the form of a sufficient stochastic maximum principle, and the adjoint equation for this principle turns out to be a backward SPDE driven by B(·) and Ñ(·, ·). This part of the paper is an extension of [Ø] to the case including jumps and an extension of [FØS] to SPDE control.

Let Γ(t, x) = Γ^{(u)}(t, x); t ∈ [0, T], x ∈ R^k, be the solution of a (controlled) stochastic reaction-diffusion equation of the form

dΓ(t, x) = [(LΓ)(t, x) + b(t, x, Γ(t, x), u(t, x))] dt + σ(t, x, Γ(t, x), u(t, x)) dB(t) + ∫_R θ(t, x, Γ(t, x), u(t, x), z) Ñ(dt, dz);  (t, x) ∈ [0, T] × G,   (2.1)

Γ(0, x) = ξ(x);  x ∈ G,   (2.2)

Γ(t, x) = η(t, x);  (t, x) ∈ (0, T) × ∂G.   (2.3)

Here dΓ(t, x) = d_t Γ(t, x) is the differential with respect to t and L = L_x is a given partial differential operator of order m acting on the variable x ∈ R^k. We assume that G ⊂ R^k and U ⊂ R^l are given open and closed sets, respectively, and that b : [0, T] × G × R × U → R, σ : [0, T] × G × R × U → R, θ : [0, T] × G × R × U × R → R, ξ : G → R and η : (0, T) × ∂G → R are given measurable functions. The process u : [0, T] × G → U is called an admissible control if the equation (2.1)–(2.3) has a unique continuous solution Γ(t, x) = Γ^{(u)}(t, x) which satisfies

E[∫_0^T (∫_G |f(t, x, Γ(t, x), u(t, x))| dx) dt + ∫_G |g(x, Γ(T, x))| dx] < ∞,   (2.4)

where f : [0, T] × G × R × U → R and g : G × R → R are given C¹ functions. The set of all admissible controls is denoted by A. If u ∈ A we define the performance of u, J(u), by

J(u) = E[∫_0^T (∫_G f(t, x, Γ(t, x), u(t, x)) dx) dt + ∫_G g(x, Γ(T, x)) dx].   (2.5)

We consider the problem of finding J* ∈ R and u* ∈ A such that

J* := sup_{u∈A} J(u) = J(u*).   (2.6)

J* is called the value of the problem and u* ∈ A (if it exists) is called an optimal control.

In the following we let L* denote the adjoint of the operator L, defined by

(L*φ₁, φ₂) = (φ₁, Lφ₂);  φ₁, φ₂ ∈ C_0^∞(G),   (2.7)

where (φ, ψ) = (φ, ψ)_{L²(G)} = ∫_G φ(x)ψ(x) dx and C_0^∞(G) is the set of infinitely differentiable functions with compact support in G.

We now formulate a (sufficient) stochastic maximum principle for the problem (2.6). Let R denote the set of functions r : [0, T] × G × R × Ω → R. Define the Hamiltonian H : [0, T] × G × R × U × R × R × R → R by

H(t, x, γ, u, p, q, r(t, x, ·)) = f(t, x, γ, u) + b(t, x, γ, u)p + σ(t, x, γ, u)q + ∫_R θ(t, x, γ, u, z) r(t, x, z) ν(dz).   (2.8)

For each u ∈ A consider the following adjoint backward SPDE in the three unknown adapted processes p(t, x), q(t, x) and r(t, x, z):

dp(t, x) = −{∂H/∂γ (t, x, Γ^{(u)}(t, x), u(t, x), p(t, x), q(t, x), r(t, x, ·)) + L*p(t, x)} dt + q(t, x) dB(t) + ∫_R r(t, x, z) Ñ(dt, dz);  t < T,   (2.9)

p(T, x) = ∂g/∂γ (x, Γ^{(u)}(T, x));  x ∈ G,   (2.10)

p(t, x) = 0;  (t, x) ∈ (0, T) × ∂G.   (2.11)
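To make the role of the Hamiltonian (2.8) and of the maximization over controls concrete, here is a small sketch for a hypothetical linear-quadratic specification (all coefficient and parameter choices below are illustrative assumptions, not taken from the paper): f = −(u² + γ²)/2, b = u, constant σ, θ(z) = z and a toy finite Lévy measure. For this choice H is concave in (γ, u) and the maximizing control is u = p, which is the kind of candidate the maximum condition would single out.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical coefficients of a controlled SPDE (illustrative only)
def f(t, x, gamma, u):        # profit rate
    return -0.5 * (u**2 + gamma**2)

def b(t, x, gamma, u):        # drift coefficient
    return u

def sigma(t, x, gamma, u):    # diffusion coefficient
    return 0.3

def theta(t, x, gamma, u, z): # jump coefficient
    return z

nu_points = np.array([-1.0, 1.0])    # support of a toy (finite) Levy measure
nu_weights = np.array([0.5, 0.5])

def hamiltonian(t, x, gamma, u, p, q, r):
    # H = f + b*p + sigma*q + \int theta(z) r(z) nu(dz), cf. (2.8)
    jump_term = np.sum(theta(t, x, gamma, u, nu_points) * r(nu_points) * nu_weights)
    return f(t, x, gamma, u) + b(t, x, gamma, u) * p + sigma(t, x, gamma, u) * q + jump_term

# Maximise u -> H(t, x, gamma, u, p, q, r), cf. the maximum condition below
t, x, gamma, p, q = 0.5, 0.0, 1.2, 0.7, 0.1
r = lambda z: 0.0 * z
res = minimize_scalar(lambda u: -hamiltonian(t, x, gamma, u, p, q, r))
print("numerical maximiser:", res.x, " analytical maximiser u* = p:", p)
```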

The following result may be regarded as a synthesis of Theorem 2.1 in [Ø] and Theorem 2.1 in [FØS]:

Theorem 2.1 (Sufficient SPDE maximum principle for optimal control of reaction-diffusion jump fields)

Let û ∈ A with corresponding solution Γ̂(t, x) of (2.1)–(2.3) and let p̂(t, x), q̂(t, x), r̂(t, x, ·) be a solution of the associated adjoint backward SPDE (2.9)–(2.11). Suppose that the following conditions (2.12)–(2.15) hold:

The functions

(γ, u) ↦ H(γ, u) := H(t, x, γ, u, p̂(t, x), q̂(t, x), r̂(t, x, ·));  (γ, u) ∈ R × U,

and

γ ↦ g(x, γ);  γ ∈ R,

are concave for all (t, x) ∈ [0, T] × G.   (2.12)

H(t, x, Γ̂(t, x), û(t, x), p̂(t, x), q̂(t, x), r̂(t, x, ·)) = sup_{v∈U} H(t, x, Γ̂(t, x), v, p̂(t, x), q̂(t, x), r̂(t, x, ·))   (2.13)

for all (t, x) ∈ [0, T] × G.

For all u ∈ A,

E[∫_G ∫_0^T (Γ(t, x) − Γ̂(t, x))² {q̂(t, x)² + ∫_R r̂(t, x, z)² ν(dz)} dt dx] < ∞   (2.14)

and

E[∫_G ∫_0^T p̂(t, x)² {σ(t, x, Γ(t, x), u(t, x))² + ∫_R θ(t, x, Γ(t, x), u(t, x), z)² ν(dz)} dt dx] < ∞.   (2.15)

Then û(t, x) is an optimal control for the stochastic control problem (2.6).

Proof. Let u be an arbitrary admissible control with corresponding solution Γ(t, x) = Γ^{(u)}(t, x) of (2.1)–(2.3). Then by (2.5),

J(û) − J(u) = E[∫_0^T ∫_G {f̂ − f} dx dt + ∫_G {ĝ − g} dx],

where

f̂ = f(t, x, Γ̂(t, x), û(t, x)),  f = f(t, x, Γ(t, x), u(t, x)),
ĝ = g(x, Γ̂(T, x))  and  g = g(x, Γ(T, x)).

Similarly we put

b̂ = b(t, x, Γ̂(t, x), û(t, x)),  b = b(t, x, Γ(t, x), u(t, x)),
σ̂ = σ(t, x, Γ̂(t, x), û(t, x)),  σ = σ(t, x, Γ(t, x), u(t, x)),
θ̂ = θ(t, x, Γ̂(t, x), û(t, x), z),  θ = θ(t, x, Γ(t, x), u(t, x), z).   (2.16)

Moreover, we set

Ĥ = H(t, x, Γ̂(t, x), û(t, x), p̂(t, x), q̂(t, x), r̂(t, x, ·)),
H = H(t, x, Γ(t, x), u(t, x), p̂(t, x), q̂(t, x), r̂(t, x, ·)).

Combining this notation with (2.8) and (2.16) we get

J(û) − J(u) = I₁ + I₂,   (2.17)

where

I₁ = E[∫_0^T ∫_G {Ĥ − H − (b̂ − b)p̂ − (σ̂ − σ)q̂ − ∫_R (θ̂ − θ)r̂ ν(dz)} dx dt]   (2.18)

and

I₂ = E[∫_G {ĝ − g} dx].   (2.19)

Since γ ↦ g(x, γ) is concave we have

g − ĝ ≤ ∂g/∂γ (x, Γ̂(T, x)) · (Γ(T, x) − Γ̂(T, x)).

Therefore, if we put

Γ̃(t, x) = Γ(t, x) − Γ̂(t, x),

we get, by the Itô formula (or integration by parts formula) for jump diffusions ([ØS, Ex. 1.7]),

I₂ ≥ −E[∫_G ∂g/∂γ (x, Γ̂(T, x)) · Γ̃(T, x) dx]
   = −E[∫_G p̂(T, x) · Γ̃(T, x) dx]
   = −E[∫_G (p̂(0, x) · Γ̃(0, x) + ∫_0^T {Γ̃(t, x) dp̂(t, x) + p̂(t, x) dΓ̃(t, x) + (σ − σ̂)q̂(t, x) dt} + ∫_0^T ∫_R (θ − θ̂)r̂(t, x, z) N(dt, dz)) dx]
   = −E[∫_G ∫_0^T {Γ̃(t, x) · (−(∂H/∂γ)^∧ − L*p̂(t, x)) + p̂(t, x)[LΓ̃(t, x) + (b − b̂)] + (σ − σ̂)q̂(t, x) + ∫_R (θ − θ̂)r̂(t, x, z) ν(dz)} dt dx],   (2.20)

where

(∂H/∂γ)^∧ = ∂H/∂γ (t, x, Γ̂(t, x), û(t, x), p̂(t, x), q̂(t, x), r̂(t, x, ·)).

Combining (2.17)–(2.20) we get

J(û) − J(u) ≥ E[∫_0^T (∫_G {Γ̃ L*p̂ − p̂ LΓ̃} dx) dt] + E[∫_G (∫_0^T {Ĥ − H + (∂H/∂γ)^∧ · Γ̃(t, x)} dt) dx].   (2.21)

Since Γ̃(t, x) = p̂(t, x) = 0 for all (t, x) ∈ (0, T) × ∂G, we get by an extension of (2.7) that

∫_G {Γ̃ L*p̂ − p̂ LΓ̃} dx = 0  for all t ∈ (0, T).

Combining this with (2.21) we get

J(û) − J(u) ≥ E[∫_G (∫_0^T {Ĥ − H + (∂H/∂γ)^∧ · Γ̃(t, x)} dt) dx].   (2.22)

From the concavity assumption in (2.12) we deduce that

H − Ĥ ≤ ∂H/∂γ (Γ̂, û) · (Γ − Γ̂) + ∂H/∂u (Γ̂, û) · (u − û).   (2.23)

From the maximality assumption in (2.13) we get that

∂H/∂u (Γ̂, û) · (u − û) ≤ 0.   (2.24)

Combining (2.23) and (2.24) we get

H − Ĥ − ∂H/∂γ (Γ̂, û) · (Γ − Γ̂) ≤ 0,

which substituted in (2.22) gives

J(û) − J(u) ≥ 0.

Since u ∈ A was arbitrary, this proves Theorem 2.1. □
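The step from the concavity assumption to (2.23) is the standard gradient inequality for concave functions; the following throwaway check illustrates it numerically for an arbitrary concave function of (γ, u) (nothing here is specific to the paper's Hamiltonian; the function and points are assumptions of the sketch).

```python
import numpy as np

# Gradient inequality for a concave function: h(v) - h(v0) <= grad h(v0) . (v - v0)
def h(v):                      # arbitrary concave function of (gamma, u)
    gamma, u = v
    return -(gamma**2) - 0.5 * (u - 1.0)**2 + gamma

def grad_h(v):
    gamma, u = v
    return np.array([-2.0 * gamma + 1.0, -(u - 1.0)])

rng = np.random.default_rng(0)
v0 = np.array([0.3, -0.2])
for _ in range(1000):
    v = rng.normal(size=2) * 3.0
    assert h(v) - h(v0) <= grad_h(v0) @ (v - v0) + 1e-12
print("gradient inequality verified on 1000 random points")
```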

3 Framework

We now present the general setting in which we will prove our main existence and uniqueness result for backward SPDEs with jumps.

Let V, H be two separable Hilbert spaces such that V is continuously and densely imbedded in H. Identifying H with its dual we have

V ⊂ H ≅ H* ⊂ V*,   (3.1)

where V* stands for the topological dual of V. Let A be a bounded linear operator from V to V* satisfying the following coercivity hypothesis: there exist constants α > 0 and λ ≥ 0 such that

2⟨Au, u⟩ + λ|u|²_H ≥ α‖u‖²_V  for all u ∈ V,   (3.2)
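For orientation (a numerical aside, not part of the paper's argument), (3.2) can be checked by hand for the Dirichlet Laplacian A = −Δ with H = L² and V = H¹₀, since 2⟨Au, u⟩ = 2‖∇u‖² and ‖u‖²_V = ‖u‖²_{L²} + ‖∇u‖², so α = 1 and λ = 1 work. The sketch below verifies the same inequality for a standard finite-difference discretisation on random vectors; the grid size and test vectors are arbitrary choices.

```python
import numpy as np

n, L = 200, 1.0
h = L / (n + 1)
# Dirichlet Laplacian A = -d^2/dx^2 on (0, L), standard second-order finite differences
main = 2.0 * np.ones(n) / h**2
off = -1.0 * np.ones(n - 1) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

def norms(u):
    l2 = h * np.sum(u**2)                                     # |u|_H^2
    grad = np.diff(np.concatenate(([0.0], u, [0.0]))) / h     # Dirichlet boundary values
    h1 = l2 + h * np.sum(grad**2)                             # ||u||_V^2
    return l2, h1

alpha, lam = 1.0, 1.0
rng = np.random.default_rng(1)
for _ in range(500):
    u = rng.normal(size=n)
    l2, h1 = norms(u)
    lhs = 2.0 * h * u @ (A @ u) + lam * l2                    # 2<Au,u> + lambda |u|_H^2
    assert lhs >= alpha * h1 - 1e-9 * h1
print("coercivity (3.2) holds with alpha = 1, lambda = 1 for the discrete Laplacian")
```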

where ⟨Au, u⟩ = Au(u) denotes the action of Au ∈ V* on u ∈ V. Remark that A is in general not bounded as an operator from H into H.

Let K be another separable Hilbert space and let (Ω, F, P) be a probability space. Let {B_t, t ≥ 0} be a cylindrical Brownian motion with covariance space K on (Ω, F, P), i.e. for any k ∈ K, ⟨B_t, k⟩ is a real-valued Brownian motion with E[⟨B_t, k⟩²] = t|k|²_K. Let (X, B(X)) be a measurable space, where X is a topological vector space, and let η(t) be a Lévy process on X. Denote by ν(dx) the Lévy measure of η and by L²(ν) the L²-space of square integrable H-valued measurable functions associated with ν. Set p(t) = Δη(t) = η(t) − η(t−). Then p = (p(t), t ∈ D_p) is a stationary Poisson point process on X with characteristic measure ν; see [IW] for details on Poisson point processes. Denote by N(dt, dx) the Poisson counting measure associated with the Lévy process, i.e. N(t, A) = Σ_{s∈D_p, s≤t} 1_A(p(s)), and let Ñ(dt, dx) := N(dt, dx) − dt ν(dx) be the compensated Poisson measure. Define

F_t = σ(B_s, N(s, A); A ∈ B(X), s ≤ t).

P 2 Recall that a linear operator S from K into H is called Hilbert-Schmidt if ∞ i=1 |Ski |H < ∞ for some orthonormal basis {ki , i ≥ 1} of K. L2 (K, H) will denote the Hilbert space of Hilbert-Schmidt operators from K into H equipped with the inner product hS 1 , S2 iL2 (K,H) = P ∞ i=1 hS1 ki , S2 ki iH . Let b(t, y, z, q, ω) be a measurable mapping from [0, T ] × H × L 2 (K, H) × L2 (ν) × Ω into H such that b(t, y, z, q, ω) is F t -adapted,i.e., b(t, y, z, q, ·) is Ft -measurable for all t, y, z, q. Suppose we are given an F T -measurable, H-valued random variable φ(ω). We are looking for Ft -adapted processes Yt , Zt , Qt with values in H, L2 (K, H) and L2 (ν) respectively, such that the following backward stochastic evolution equation holds: dYt = AYt dt + b(t, Yt , Zt , Qt )dt + Zt dBt Z e (dt, dx), t ∈ (0, T ) + Qt (x)N

(3.3)

X

YT

= φ(ω) a.s.

(3.4)
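As an illustration of the jump-noise term ∫_X Q_t(x) Ñ(dt, dx) in (3.3) (purely a simulation aside, under the simplifying assumptions of a finite Lévy measure and a deterministic integrand), the sketch below simulates a compound Poisson process, forms the compensated integral ∫_0^T ∫ q(x) Ñ(ds, dx) = Σ q(jump sizes) − T ∫ q(x) ν(dx), and checks that its sample mean is close to zero, which is exactly what the compensation is designed to guarantee.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 2.0
# Toy Levy measure nu on X = R: total mass 3, jump sizes ~ N(0.5, 1)  (illustrative)
intensity, jump_mean, jump_std = 3.0, 0.5, 1.0

def q(x):                    # deterministic integrand for simplicity
    return np.sin(x) + x**2

# \int q d nu = intensity * E[q(xi)], xi ~ N(0.5, 1), estimated by Monte Carlo
xi = rng.normal(jump_mean, jump_std, size=1_000_000)
nu_q = intensity * np.mean(q(xi))

def compensated_integral():
    n_jumps = rng.poisson(intensity * T)
    jumps = rng.normal(jump_mean, jump_std, size=n_jumps)
    return np.sum(q(jumps)) - T * nu_q     # \int\int q dN - \int\int q dt nu(dx)

samples = np.array([compensated_integral() for _ in range(20_000)])
print("sample mean (should be ~ 0):", samples.mean())
print("sample std:", samples.std())
```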

From now on we assume that the following conditions (3.5) and (3.6) hold: there exists a constant c < ∞ such that

|b(t, y₁, z₁, q₁)(ω) − b(t, y₂, z₂, q₂)(ω)|_H ≤ c(|y₁ − y₂|_H + |z₁ − z₂|_{L²(K,H)} + |q₁ − q₂|_{L²(ν)})   (3.5)

for all t, y₁, z₁, q₁, y₂, z₂, q₂, and

E[∫_0^T |b(t, 0, 0, 0)|²_H dt] < ∞.   (3.6)

4 Existence and Uniqueness

We now state and prove the main existence and uniqueness result of this paper.

Theorem 4.1 Assume that E[|φ|²_H] < ∞. Then there exists a unique H × L²(K, H) × L²(ν)-valued progressively measurable process (Y_t, Z_t, Q_t) such that

(i) E[∫_0^T |Y_t|²_H dt] < ∞,  E[∫_0^T |Z_t|²_{L²(K,H)} dt] < ∞,  E[∫_0^T |Q_t|²_{L²(ν)} dt] < ∞;

(ii) φ = Y_t + ∫_t^T AY_s ds + ∫_t^T b(s, Y_s, Z_s, Q_s) ds + ∫_t^T Z_s dB_s + ∫_t^T ∫_X Q_s(x) Ñ(ds, dx),  0 ≤ t ≤ T.
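The strategy of the proof below (freeze part of the unknowns in the drift, solve the resulting simpler backward equation, and iterate) can be previewed on a noise-free, finite-dimensional analogue in which each step is an explicit backward ODE solve. This is only a structural sketch under those simplifying assumptions, not the Hilbert-space construction carried out in the lemmas that follow; the matrix, drift and step sizes are arbitrary choices.

```python
import numpy as np

# Backward terminal-value problem  dY/dt = A Y + b(t, Y),  Y(T) = phi,
# solved by Picard iteration: freeze Y^n in the drift, integrate backwards for Y^{n+1}.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
phi = np.array([1.0, 0.0])
T, n_steps = 1.0, 2000
ts = np.linspace(0.0, T, n_steps + 1)
dt = ts[1] - ts[0]

def b(t, y):                       # Lipschitz drift, cf. hypothesis (3.5)
    return 0.3 * np.tanh(y) + np.array([np.sin(t), 0.0])

def backward_solve(y_frozen):
    # one Picard step: dY/dt = A Y + b(t, Y_frozen(t)), Y(T) = phi (explicit Euler backwards)
    y = np.zeros((n_steps + 1, 2))
    y[-1] = phi
    for i in range(n_steps, 0, -1):
        y[i - 1] = y[i] - dt * (A @ y[i] + b(ts[i], y_frozen[i]))
    return y

y = np.zeros((n_steps + 1, 2))     # Y^0 = 0
for k in range(30):
    y_new = backward_solve(y)
    gap = np.max(np.abs(y_new - y))
    y = y_new
print("Picard increment after 30 iterations:", gap, " Y(0) ~", y[0])
```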

As the proof is long, we split it into lemmas.

Lemma 4.2 Assume that E[|φ|²_H] < ∞, that b(t, y, z, q, ω) = b(t, ω) is independent of y, z and q, and that E[∫_0^T |b(t)|²_H dt] < ∞. Then there exists a unique H × L²(K, H) × L²(ν)-valued progressively measurable process (Y_t, Z_t, Q_t) such that

(i) E[∫_0^T |Y_t|²_H dt] < ∞,  E[∫_0^T |Z_t|²_{L²(K,H)} dt] < ∞,  E[∫_0^T |Q_t|²_{L²(ν)} dt] < ∞;

(ii) φ = Y_t + ∫_t^T AY_s ds + ∫_t^T b(s) ds + ∫_t^T Z_s dB_s + ∫_t^T ∫_X Q_s(x) Ñ(ds, dx),  0 ≤ t ≤ T.

Proof. Existence of a solution. Set D(A) = {v ∈ V; Av ∈ H}. Then D(A) is a dense subspace of H. Thus we can choose and fix an orthonormal basis {e₁, ..., e_n, ...} of H such that e_i ∈ D(A). Set V_n = span(e₁, e₂, ..., e_n), denote by P_n the projection operator from H onto V_n and put A_n = P_n A. Then A_n is a bounded linear operator from V_n to V_n. For the cylindrical Brownian motion B_t it is well known that the following decomposition holds:

B_t = Σ_{i=1}^∞ β^i_t k_i,   (4.1)

where {k₁, k₂, ..., k_i, ...} is an orthonormal basis of K and β^i_t, i = 1, 2, 3, ..., are independent standard Brownian motions. Set B^n_t = (β^1_t, ..., β^n_t). Define F^n_t = σ(B^n_s, N(s, A); A ∈ B(X), s ≤ t), completed by the probability measure P, and put φ_n = E[P_n φ | F^n_T] and b_n(t) = E[P_n b(t) | F^n_t]. Consider the following backward stochastic differential equation on the finite dimensional space V_n:

dY^n_t = A_n Y^n_t dt + b_n(t) dt + Z^n_t dB^n_t + ∫_X Q^n_t(x) Ñ(dt, dx),  t < T,   (4.2)

Y^n_T = φ_n.   (4.3)

Applying Itô's formula to |Y^n_t|²_H we get

E[|Y^n_t|²_H] = E[|φ_n|²_H] − 2E[∫_t^T ⟨Y^n_s, P_n AY^n_s⟩ ds] − 2E[∫_t^T ⟨Y^n_s, b_n(s)⟩ ds] − E[∫_t^T |Z^n_s|²_{L²(K_n,V_n)} ds] − E[∫_t^T ds ∫_X |Q^n_s(x)|²_H ν(dx)],   (4.4)

where |Z^n_s|²_{L²(K_n,V_n)} = Σ_{i,j=1}^n (Z^n_s(i, j))² stands for the Hilbert–Schmidt norm. It follows from (3.2) that

E[|Y^n_t|²_H] ≤ E[|φ|²_H] − αE[∫_t^T ‖Y^n_s‖²_V ds] + λE[∫_t^T |Y^n_s|²_H ds] + E[∫_t^T |Y^n_s|²_H ds] + E[∫_t^T |b(s)|²_H ds] − E[∫_t^T |Z^n_s|²_{L²(K_n,V_n)} ds] − E[∫_t^T ds ∫_X |Q^n_s(x)|²_H ν(dx)].

Hence,

E[|Y^n_t|²_H] + αE[∫_t^T ‖Y^n_s‖²_V ds] + E[∫_t^T |Z̄^n_s|²_{L²(K,H)} ds] + E[∫_t^T ds ∫_X |Q^n_s(x)|²_H ν(dx)] ≤ E[|φ|²_H] + (λ + 1)E[∫_t^T |Y^n_s|²_H ds] + E[∫_t^T |b(s)|²_H ds],

where Z̄^n_s = Z^n_s P̄_n and P̄_n is the projection from K onto K_n = span(k₁, ..., k_n). In particular,

E[|Y^n_t|²_H] ≤ E[|φ|²_H] + (λ + 1)E[∫_t^T |Y^n_s|²_H ds] + E[∫_t^T |b(s)|²_H ds].   (4.5)

Set Ȳ^n_t = ∫_t^T E[|Y^n_s|²_H] ds. Then (4.5) implies that

−d/dt (e^{(λ+1)t} Ȳ^n_t) ≤ e^{(λ+1)t} (E[|φ|²_H] + E[∫_0^T |b(s)|²_H ds]).

Hence,

∫_0^T E[|Y^n_s|²_H] ds ≤ C(E[|φ|²_H] + E[∫_0^T |b(s)|²_H ds]),

where C is an appropriate constant. This further implies that

sup_n {∫_0^T E[|Y^n_s|²_H] ds} < ∞,

sup_n {∫_0^T E[‖Y^n_s‖²_V] ds} < ∞,   (4.6)

sup_n {∫_0^T E[|Z̄^n_s|²_{L²(K,H)}] ds} < ∞,   (4.7)

sup_n {∫_0^T E[|Q^n_s|²_{L²(ν)}] ds} < ∞.   (4.8)

For a separable Hilbert space L, we denote by M²([0, T], L) the Hilbert space of progressively measurable, square integrable, L-valued processes equipped with the inner product ⟨a, b⟩_M = E[∫_0^T ⟨a_t, b_t⟩_L dt]. By the weak compactness of bounded sets in a Hilbert space, it follows from (4.6), (4.7) and (4.8) that a subsequence {n_k, k ≥ 1} can be selected so that Y^{n_k}, k ≥ 1, converges weakly to some limit Y in M²([0, T], V), Z̄^{n_k}, k ≥ 1, converges weakly to some limit Z in M²([0, T], L²(K, H)) and Q^{n_k}, k ≥ 1, converges weakly to some limit Q in M²([0, T], L²(ν)). Let us prove that (a version of) (Y, Z, Q) is a solution to the backward stochastic evolution equation (3.3)–(3.4). For n ≥ i ≥ 1, we have that

d⟨Y^n_t, e_i⟩ = ⟨P_n AY^n_t, e_i⟩ dt + ⟨b_n(t), e_i⟩ dt + ⟨Z̄^n_t dB_t, e_i⟩ + ∫_X ⟨Q^n_t(x), e_i⟩ Ñ(dt, dx)
             = ⟨AY^n_t, e_i⟩ dt + ⟨b_n(t), e_i⟩ dt + ⟨Z̄^n_t dB_t, e_i⟩ + ∫_X ⟨Q^n_t(x), e_i⟩ Ñ(dt, dx).   (4.9)

Let h(t) be an absolutely continuous function from [0, T] to R with h′(·) ∈ L²([0, T]) and h(0) = 0. By the Itô formula,

⟨Y^n_T, e_i⟩ h(T) = ∫_0^T h(t)⟨AY^n_t, e_i⟩ dt + ∫_0^T h(t)⟨b_n(t), e_i⟩ dt + ∫_0^T h(t) d⟨∫_0^t Z̄^n_s dB_s, e_i⟩ + ∫_0^T ∫_X h(t)⟨Q^n_t(x), e_i⟩ Ñ(dt, dx) + ∫_0^T ⟨Y^n_t, e_i⟩ h′(t) dt.   (4.10)

Replacing n by n_k in (4.10) and letting k → ∞ we obtain

⟨φ, e_i⟩ h(T) = ∫_0^T h(t)⟨AY_t, e_i⟩ dt + ∫_0^T h(t)⟨b(t), e_i⟩ dt + ∫_0^T ∫_X h(t)⟨Q_t(x), e_i⟩ Ñ(dt, dx) + ∫_0^T h(t) d⟨∫_0^t Z_s dB_s, e_i⟩ + ∫_0^T ⟨Y_t, e_i⟩ h′(t) dt.   (4.11)

In passing from (4.10) to (4.11) we have used the fact that the linear mapping G from M²([0, T], L²(K, H)) into L²(Ω) defined by

G(Z) = ∫_0^T h(t) d⟨∫_0^t Z_s dB_s, e_i⟩ = Σ_{j=1}^∞ ∫_0^T h(t)⟨Z_t(k_j), e_i⟩ dβ^j_t

is continuous, and also that the linear mapping F from M²([0, T], L²(ν)) into L²(Ω) defined by

F(Q) = ∫_0^T ∫_X h(t)⟨Q_t(x), e_i⟩ Ñ(dt, dx)

is continuous. So the convergence of (4.10) to (4.11) takes place weakly in L²(Ω).

Fix t ∈ (0, T) and choose, for n ≥ 1,

h_n(s) = { 1, s ≥ t + 1/(2n);  1 − n(t + 1/(2n) − s), t − 1/(2n) ≤ s ≤ t + 1/(2n);  0, s ≤ t − 1/(2n) }.

With h(·) replaced by h_n(·) in (4.11), it follows that

⟨φ, e_i⟩ = ∫_0^T h_n(s)⟨AY_s, e_i⟩ ds + ∫_0^T h_n(s)⟨b(s), e_i⟩ ds + ∫_0^T ∫_X h_n(s)⟨Q_s(x), e_i⟩ Ñ(ds, dx) + ∫_0^T h_n(s) d⟨∫_0^s Z_u dB_u, e_i⟩ + n ∫_{t−1/(2n)}^{t+1/(2n)} ⟨Y_s, e_i⟩ ds.   (4.12)

Sending n to infinity in (4.12) we get that

⟨φ, e_i⟩ = ∫_t^T ⟨AY_s, e_i⟩ ds + ∫_t^T ⟨b(s), e_i⟩ ds + ∫_t^T ∫_X ⟨Q_s(x), e_i⟩ Ñ(ds, dx) + ∫_t^T d⟨∫_0^s Z_u dB_u, e_i⟩ + ⟨Y_t, e_i⟩

for almost all t ∈ [0, T] (with respect to Lebesgue measure). As i is arbitrary, this implies that

φ = Y_t + ∫_t^T AY_s ds + ∫_t^T b(s) ds + ∫_t^T Z_s dB_s + ∫_t^T ∫_X Q_s(x) Ñ(ds, dx)   (4.13)

for almost all t ∈ [0, T] (with respect to Lebesgue measure). For t ∈ [0, T], define

Ŷ_t = φ − ∫_t^T AY_s ds − ∫_t^T b(s) ds − ∫_t^T Z_s dB_s − ∫_t^T ∫_X Q_s(x) Ñ(ds, dx).

Then we see that (Ŷ_t, Z_t, Q_t) also satisfies (ii) in Theorem 4.1 with Y replaced by Ŷ for all t ∈ [0, T]. Hence (Ŷ_t, Z_t, Q_t) is a solution to the equations (3.3) and (3.4).

Uniqueness. Let (Y_t, Z_t, Q_t) and (Ȳ_t, Z̄_t, Q̄_t) be two solutions of the equation (3.3). Then

(Y_t − Ȳ_t) + ∫_t^T A(Y_s − Ȳ_s) ds + ∫_t^T (Z_s − Z̄_s) dB_s + ∫_t^T ∫_X (Q_s(x) − Q̄_s(x)) Ñ(ds, dx) = 0.

Applying Itô's formula, we get

¯ s (x))N e (ds, dx) = 0 (Qs (x) − Q

Z T 2 ¯ 0 = |Yt − Yt |H + 2 hYs − Y¯s , dMs i t Z TZ ¯ s (x)|2H − |Ys− − Y¯s− |2H ]N e (ds, dx) + [|Ys− − Y¯s− + Qs (x) − Q t X Z T Z T ¯ ¯ +2 hA(Ys − Ys ), Ys − Ys hds + |Zs − Z¯s |2L2 (K,H) ds t t Z TZ ¯ s (x)|2H dsν(dx) + |Qs (x) − Q t

where Mt =

Rt

0 (Zs

(4.14)

X

− Z¯s )dBs . By (3.2),we get that

Z T Z T 2 ¯ ¯ ¯ |Zs − Z¯s |2L2 (K,H) ds] E[hA(Ys − Ys ), Ys − Ys i]ds − E[ E[|Yt − Yt |H ] = −2 t t Z TZ ¯ s (x)|2 dsν(dx)] − E[ |Qs (x) − Q H t X Z T Z T 2 ¯ ≤ −α E[||Ys − Ys ||V ]ds + λ E[|Ys − Y¯s |2H ]ds t t Z T E[|Ys − Y¯s |2H ]ds. ≤λ t

By a Gronwall type inequality, it follows that E[|Y t − Y¯t |2H ] = 0 , which proves the uniqueness.

13

Lemma 4.3 Assume that E[|φ|2H ] < ∞, and that b(t, y, z, q, ω) = b(t, z, q, ω) is independent of y. Then there exists a unique H × L 2 (K, H) × L2 (ν)-valued progressively measurable process (Yt , Zt , Qt ) such that RT RT |Yt |2H dt] < ∞, E[ 0 |Zt |2L2 (K,H) dt] < ∞, E[ 0 |Qt |2L2 (ν) dt] < ∞. RT RT RT RT R e (ds, dx); 0 ≤ t ≤ T. (ii) φ = Yt + t AYs ds + t b(s, Zs , Qs )ds + t Zs dBs + t X Qs (x)N (i)

E[

RT 0

Proof. Set Zt0 = 0, Q0t = 0. Denote by (Ytn , Ztn , Qnt ) the unique solution of the backward stochastic evolution equation: Z n−1 n−1 n n n e (dt, dx) Qnt (x)N (4.15) dYt = AYt dt + b(t, Zt , Qt )dt + Zt dBt + X

YTn = φ(ω).

(4.16)

existence of such a solution (Ytn , Ztn , Qnt ) has been proved in Lemma 4.2. Putting M tn = RThe t n o’s formula we get that 0 Zs dBs , and by Itˆ 0 = |YTn+1 − YTn |2H

Z T hA(Ysn+1 − Ysn ), Ysn+1 − Ysn ids = |Ytn+1 − Ytn |2H + 2 t Z T hb(t, Zsn , Qns ) − b(t, Zsn−1 , Qsn−1 ), Ysn+1 − Ysn ids +2 t Z TZ n 2 e n+1 n n+1 |H ]N (ds, dx) − Ys− − Qns |2H − |Ys− + Qn+1 − Ys− [|Ys− + s t X Z TZ + |Qn+1 − Qns |2H ]dsν(dx) s t X Z T Z T +2 hYsn+1 − Ysn , d(Msn+1 − Msn )i + |Zsn+1 − Zsn |2L2 (K,H) ds t

t

In virtue of (3.2), for ε > 0, E[|Ytn+1



Ytn |2H ] Z

+ E[

Z

T t

|Zsn+1



Zsn |2L2 (K,H) ds]

+ E[

Z

T t

Z

X

|Qn+1 − Qns |2H ]dsν(dx)] s

T

= −2E[ hA(Ysn+1 − Ysn ), Ysn+1 − Ysn ids] t Z T hb(t, Zsn , Qns ) − b(t, Zsn−1 , Qsn−1 ), Ysn+1 − Ysn ids] − 2E[ t Z T Z T n+1 n 2 ||Ysn+1 − Ysn ||2V ds] |Ys − Ys |H ds] − αE[ ≤ λE[ t t Z T Z T 1 n n n−1 n−1 2 |Ysn+1 − Ysn |2H ds] + εE[ |b(t, Zs , Qs ) − b(t, Zs , Qs )|H ds] + E[ ε t t


(4.17)

Choose ε
ds] t Z T ¯ s ), Ys − Y¯s ids] − 2E[ hb(t, Zs , Qs ) − b(t, Z¯s , Q t Z T Z T 2 ¯ ||Ys − Y¯s ||2V ds] |Ys − Ys |H ds] − αE[ ≤ λE[ t t Z T Z T 2 1 ¯ |Ys − Y¯s |2H ds] |Zs − Zs |L2 (K,H) ds] + cE[ + 2 E[ t t Z TZ 1 ¯ s (x)|2H dsν(dx)] + E[ |Qs (x) − Q (4.27) 2 t X Consequently, E[|Yt − Y¯t |2H ] ≤ (λ + c)E[ By Gronwall’s inequality ,

Z

T t

|Ys − Y¯s |2H ds]

(4.28)

Yt = Y¯t

¯ t by (4.27). which further implies Zt = Z¯t and Qt = Q Proof of Theorem 4.1. Let Yt0 = 0. Define , for n ≥ 1, (Ytn+1 , Ztn+1 , Qn+1 ) to be the solution of the equation: t dYtn+1 = AYtn+1 dt + b(t, Ytn , Ztn+1 , Qn+1 )dt t Z e (dt, dx) Qn+1 (x)N + Ztn+1 dBt + t

(4.29)

X

YTn+1 = φ

(4.30)

The existence of (Ytn+1 , Ztn+1 , Qn+1 ) is contained in Lemma 4.3. t Using the similar arguments as in the proof of Lemma 4.3 it can be shown that (Y tn+1 , Ztn+1 , Qn+1 ) t converges to some limit (Yt , Zt , Qt ), and moreover (Yt , Zt , Qt ) is the unique solution to equation (3.3). We omit the details avoiding the repeating. Example 4.2 Let H = L2 (Rd ), and set V = H21 (Rd ) = {u ∈ L2 (Rd ); ∇u ∈ L2 (Rd → Rd )} Denote by a(x) = (aij (x)) a matrix-valued function on R d satisfing the uniform ellipticity condition: 1 Id ≤ a(x) ≤ cId for some constant c ∈ (0, ∞). c 17

Let f (x) be a vector field on R d with f ∈ Lp (Rd ) for some p > d. Define Au = −div(a(x)∇u(x)) + f (x) · ∇u(x) Then (3.2) is fulfilled for (H, V, A). Thus, for any choice of cylindrical Brownian motion B, any drift coefficient b(t, y, z, ω) satisfying (3.5) and (3.6) and terminal random variable φ, the main result in Section 4 applies.

Acknowledgement This work was initiated when the first and the third author attended a workshop on SPDEs in Warwick in May, 2001. We would like to thank the organizers Jo Kennedy, Roger Tribe and Jerzy Zabczyk, as well as David Elworthy, for the invitation and hospitality. We also thank the Mathematics Institute at University of Warwick for providing the stimulating research environment.

References [B1]

A.Bensoussan: Maximum principle and dynamic programming approaches of the optimal control of partially observed diffusions. Stochastics 9 (1983), 169-222.

[B2]

A.Bensoussan: Stochastic maximum principle for systems with partial information and application to the separation principle. In M.Davis and R.Elliott (editors): Applied Stochastic Analysis. Gordon and Breach 1991, pp 157-172.

[EPQ] N. El Karoui, S. Peng and M.C. Queuez: Backward stochastic differential equations in finance. Mathematical Finance 7 (1997), 1–71. [FØS] N.C. Framstad, B. Øksendal and A. Sulem: Sufficient stochastic maximum principle for the optimal control of jump diffusions and applications to finance. J. Optim. Theory and Appl. 121 (2004), 77-98 and Vol. 124, No. 2 (2005) (Errata). [MY] J. Ma and J. Yong: Forward-Backward Stochastic Differential Equations and Their Applications. Springer LNM 1702, Springer-Verlag 1999. [NS]

D. Nualart and W. Schoutens: Backward stochastic differential equations and Feynman-Kac formula for L´evy processes, with applications in finance. Bernoulli 7 (2001), 761-776.

[Ø]

B. Øksendal: Optimal control of stochastic partial differential equations. Stoch. Anal. and Appl. (to appear).

[ØS]

B. Øksendal, Sulem, A.: Applied Stochastic Control of Jump Diffusions. SpringerVerlag 2004 (to appear).

[P]

E. Pardoux: Stochastic partial differential equations and filtering of diffussion processes. Stochastics 3 (1979), 127–167.

18

[PP1] E. Pardoux and S. Peng: Adapted solutions of backward stochastic differential equations. Systems and Control Letters 14 (1990), 55–61. [PP2] E. Pardoux and S. Peng: Backward doubly stochastic differential equations and systems of quasilinear stochastic partial differential equations, Probability Theory and Related Fields 98 (1994), 209-227. [PZ]

G.D. Prato and J. Zabczyk: Stochastic equations in infinite dimensions. Cambridge University Press, 1992.

[S]

R. Situ: On solutions of backward stochastic differential equations with jumps and applications. Stochastic Processes and their Applications 66(1997), 209-236.

[YZ]

J.Yong and X.Y.Zhou: Stochastic Controls. Springer-Verlag 1999.

19