Kingdom of Saudi Arabia Ministry of Education Qassim University College of Science Department of Mathematics
Forward-backward stochastic differential equations and applications to stochastic optimal control
A Research Project Submitted in Partial Fulfillment of the Requirements for the Master's Degree
By Nawal Mohammed Alharbi ID: 331217500
Supervisor Professor AbdulRahman Al-Hussein
Rajab 1436 (May 2015)
In the name of Allah Most Gracious Most Merciful
Gratitude
In the name of Allah Most Gracious Most Merciful. First, I thank Allah Almighty, Who has blessed me with everything I have achieved in this work. I would like to express my sincere thanks and gratitude to my father and mother. Many thanks to my supervisor, Professor AbdulRahman Al-Hussein, for leading me into the subject; I owe my interest in writing this dissertation to his teaching and continuous encouragement. Many thanks also to Professor Said Kouachi for his support in explaining how to use Scientific WorkPlace 5.5. Finally, I thank everyone who supported me during the preparation of this work.
Abstract
This dissertation is concerned with an existence and uniqueness result for fully coupled forward-backward stochastic differential equations (FBSDEs) with an arbitrarily large time duration, and with a stochastic optimal control problem in which the controlled system is described by a fully coupled nonlinear FBSDE driven by a Wiener process in a Hilbert space. It is required that all admissible control processes be adapted to a given subfiltration of the natural filtration generated by the Wiener process. For this type of partial information control problem, sufficient and necessary conditions of optimality are derived. The control domain is convex, and the forward diffusion coefficient of the system contains the control variable.
Contents

1 Introduction

2 Stochastic Calculus
   2.1 Preliminaries in Measure Theory
   2.2 Wiener Process
   2.3 Martingales
   2.4 Stochastic Integration
      2.4.1 Stochastic integration with respect to 1-dimensional Brownian motion
      2.4.2 Stochastic integration with respect to infinite dimensional Wiener processes
   2.5 Itô's Formula

3 Forward-Backward Stochastic Differential Equations
   3.1 Basic Definitions and Assumptions
   3.2 Main Theorem

4 Partial Information Necessary Maximum Principle
   4.1 Statement of the Control Problem
   4.2 A Partial Information Optimal Control Sufficient Maximum Principle
   4.3 A Partial Information Necessary Maximum Principle

References
1. Introduction

Riemann integration, created by Bernhard Riemann, was the first rigorous definition of the integral of a function defined on an interval. In dealing with random functions, such as functions of a Brownian motion, the Riemann integral is not available, because the sample paths of Brownian motion have unbounded variation. So we need a different idea in order to define the integral. In 1944 Kiyosi Itô introduced the stochastic integral and a formula, known since then as Itô's formula. The stochastic integral is the counterpart of the Riemann integral for random functions. Stochastic calculus has nowadays become one of the most important and attractive fields due to its applications in different areas such as PDEs, differential geometry, finance, Malliavin calculus and potential theory. We refer the reader to [5], [8], [9], [11] and [12] for more information and applications.

This dissertation consists mainly of three parts. In the first part we study briefly stochastic calculus in finite and infinite dimensional spaces. In the second part, we establish existence and uniqueness results for fully coupled forward-backward stochastic differential equations (FBSDEs) along the outline of Peng and Wu in [11]. In particular, we generalize their work to infinite dimensions. On the other hand, a maximum principle for an optimal control problem of infinite dimensional fully coupled FBSDEs with partial information was proved by Al-Dafas in [2]. We complete the work of Al-Dafas by adding a partial information necessary maximum principle for the same FBSDE in infinite dimensions. This gives a generalization of the work of Meng in [9] to infinite dimensions.
2. Stochastic Calculus

2.1. Preliminaries in Measure Theory

We start this section with some information from measure theory and integration. More details and proofs can be found, for example, in [4], [1] and [8].

Definition 2.1. Let $\Omega$ be a nonempty set and $\mathcal{F}$ be a collection of subsets of $\Omega$. We say that $\mathcal{F}$ is a $\sigma$-field ($\sigma$-algebra) if:
(i) $\Omega \in \mathcal{F}$;
(ii) $A \in \mathcal{F} \Rightarrow A^c \in \mathcal{F}$;
(iii) $A_1, A_2, \ldots, A_n, \ldots \in \mathcal{F} \Rightarrow \bigcup_{j=1}^{\infty} A_j \in \mathcal{F}$.
In this case, the pair $(\Omega, \mathcal{F})$ is called a measurable space.

Definition 2.2. The $\sigma$-algebra generated by the open sets of $\mathbb{R}^n$ is called the Borel $\sigma$-algebra, and its members are called Borel sets or Borel measurable sets. We write $\mathcal{B}(\mathbb{R}^n)$.

Definition 2.3. Let $(\Omega, \mathcal{F})$ be a measurable space. A measure on $\mathcal{F}$ is a function $\mu : \mathcal{F} \to [0, \infty]$ with the properties:
(i) $\mu(\emptyset) = 0$;
(ii) if $A_j \in \mathcal{F}$ for all $j \geq 1$ and $A_i \cap A_j = \emptyset$ for all $i \neq j$, then
\[ \mu\Big(\bigcup_{j=1}^{\infty} A_j\Big) = \sum_{j=1}^{\infty} \mu(A_j). \]
The triplet $(\Omega, \mathcal{F}, \mu)$ is called a measure space, and the members of $\mathcal{F}$ are called measurable sets. If $\mu(\Omega) = 1$, then $\mu$ is called a probability measure, and $(\Omega, \mathcal{F}, \mu)$ a probability space. A measurable set $X$ is called a null set if $\mu(X) = 0$. A subset of a null set is called a negligible set. A negligible set need not be measurable, but every measurable negligible set is automatically a null set. A measure space is called complete if every negligible set is measurable.

Definition 2.4. (i) Let $(\Omega, \mathcal{F})$ be a measurable space. Then a function $f : \Omega \to \mathbb{R}$ is called $\langle \mathcal{F}, \mathcal{B}(\mathbb{R}) \rangle$-measurable if for each $a$ in $\mathbb{R}$
\[ f^{-1}((-\infty, a]) := \{\omega : f(\omega) \leq a\} \in \mathcal{F}. \]
(ii) Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. Then a function $X : \Omega \to \mathbb{R}$ is called a random variable if
\[ X^{-1}((-\infty, a]) := \{\omega : X(\omega) \leq a\} \in \mathcal{F} \quad \text{for each } a \text{ in } \mathbb{R}. \tag{2.1} \]
The following definition generalizes (2.1) to maps between two measurable spaces.

Definition 2.5. Let $(\Omega_i, \mathcal{F}_i)$, $i = 1, 2$, be measurable spaces. Then a mapping $T : \Omega_1 \to \Omega_2$ is called measurable with respect to the $\sigma$-algebras $(\mathcal{F}_1, \mathcal{F}_2)$ (or $(\mathcal{F}_1, \mathcal{F}_2)$-measurable) if
\[ T^{-1}(A) \in \mathcal{F}_1 \quad \text{for all } A \in \mathcal{F}_2. \tag{2.2} \]
Thus, $X$ is a random variable on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ if and only if $X$ is $(\mathcal{F}, \mathcal{B}(\mathbb{R}))$-measurable.

Definition 2.6. A function $\varphi : \Omega \to \mathbb{R}$ is called a simple function if there exist a partition $(A_i)_{1 \leq i \leq k}$ of $\Omega$ and real numbers $\{c_1, c_2, \ldots, c_k\}$ such that
\[ \varphi(\omega) = \sum_{i=1}^{k} c_i 1_{A_i}(\omega). \tag{2.3} \]

Definition 2.7 (The integral of a simple nonnegative function). Let $\varphi : \Omega \to [0, \infty)$ be a simple nonnegative function on $(\Omega, \mathcal{F}, \mu)$ with the representation (2.3). The integral of $\varphi$ over a measurable set $E$, denoted by $\int_E \varphi \, d\mu$, is defined as
\[ \int_E \varphi \, d\mu = \sum_{i=1}^{k} c_i \, \mu(A_i \cap E). \tag{2.4} \]
In the following we define integration over a measure space $(\Omega, \mathcal{F}, \mu)$.

(i) If $f \geq 0$ is measurable, we define the integral of $f$ over a measurable set $E$ by
\[ \int_E f \, d\mu = \sup_{\varphi} \int_E \varphi \, d\mu, \]
where the supremum is taken over all simple functions $\varphi$ with $0 \leq \varphi \leq f$.

(ii) If $f$ is measurable and at least one of the quantities $\int_E f^+ \, d\mu$ or $\int_E f^- \, d\mu$ is finite, we define the integral of $f$ over $E$ to be
\[ \int_E f \, d\mu = \int_E f^+ \, d\mu - \int_E f^- \, d\mu. \]
The functions $f^+$ and $f^-$ are measurable and represent the positive and negative parts of $f$, respectively: $f^+ = \max(f, 0)$, $f^- = -\min(f, 0)$.

(iii) If
\[ \int_E |f| \, d\mu = \int_E f^+ \, d\mu + \int_E f^- \, d\mu < \infty, \]
we say that the function $f$ is integrable over the set $E$. If $E = \Omega$ we denote this collection of functions by $L^1(\Omega, \mathcal{F}, \mu)$. The extension to $L^p(\Omega, \mathcal{F}, \mu)$, for $0 < p < \infty$, can be made in an obvious way by considering the integral of the $p$-th power of $|f|$. Using Bochner's integral (see e.g. [6]), one can define the space $L^p(\Omega, \mathcal{F}, \mu; K)$, for $0 < p < \infty$, when the mapping $f$ takes values in a separable Hilbert (or even Banach) space $K$; see Definition 2.11 below.

Definition 2.8. Let $V$ be a vector space over $\mathbb{K} = \mathbb{C}$ or $\mathbb{R}$. A mapping $\|\cdot\| : V \to [0, \infty)$ is called a norm if:
(i) $\|x\| = 0 \Leftrightarrow x = 0$;
(ii) $\|cx\| = |c| \, \|x\|$ for all $c \in \mathbb{K}$ and $x \in V$;
(iii) $\|x + y\| \leq \|x\| + \|y\|$ for all $x, y \in V$ (triangle inequality).
The pair $(V, \|\cdot\|)$ is called a normed space.

Remark 2.1. Recall that a norm induces a metric, which is a mapping on $V \times V$, by setting $d(x, y) = \|x - y\|$. So, every normed space is a metric space. For more examples of normed spaces see [7].

Definition 2.9. A normed space $V$ is called complete if every Cauchy sequence of elements of $V$ converges to a limit point in $V$. A complete normed space is called a Banach space. A Banach space is said to be separable if it has a countable dense subset.

Definition 2.10. Consider an $\mathbb{F}$-vector space $V$, where $\mathbb{F} = \mathbb{R}$ or $\mathbb{C}$. A scalar product is a mapping $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{F}$ with:
(i) $\langle x_1 + x_2, y \rangle = \langle x_1, y \rangle + \langle x_2, y \rangle$ for all $x_1, x_2, y \in V$;
(ii) $\langle cx, y \rangle = c \langle x, y \rangle$ for all $x, y \in V$ and $c \in \mathbb{F}$;
(iii) $\langle x, y \rangle = \overline{\langle y, x \rangle}$ for all $x, y \in V$;
(iv) $\langle x, x \rangle \geq 0$ for all $x \in V$;
(v) $\langle x, x \rangle = 0 \Leftrightarrow x = 0$.
Note that $\|x\| = \langle x, x \rangle^{1/2}$ defines a norm.

Definition 2.11. A complete normed space $(V, \|\cdot\|)$ is called a Hilbert space if there exists a scalar product such that $\|x\| = \langle x, x \rangle^{1/2}$ for all $x \in V$.
We close this subsection by introducing the notions of nuclear and Hilbert-Schmidt operators. An element $T \in L(H_1, H_2)$, the space of all bounded linear operators from $H_1$ to $H_2$, is said to be a nuclear operator if there exist two sequences $\{a_j\} \subset H_2$ and $\{\varphi_j\} \subset H_1^*$ such that
\[ \sum_{j=1}^{\infty} \|a_j\| \, \|\varphi_j\| < \infty, \]
and $T$ has the representation
\[ T x = \sum_{j=1}^{\infty} a_j \varphi_j(x), \quad x \in H_1. \]
The space of all nuclear operators from $H_1$ into $H_2$, endowed with the norm
\[ \|T\|_1 = \inf\Big\{ \sum_{j=1}^{\infty} \|a_j\| \, \|\varphi_j\| : \; T x = \sum_{j=1}^{\infty} a_j \varphi_j(x) \Big\}, \]
is a Banach space, and is denoted by $L_1(H_1, H_2)$. We shall write $L_1(H_1)$ instead of $L_1(H_1, H_1)$.

The space of Hilbert-Schmidt operators from $H_1$ into $H_2$ is defined by
\[ \Big\{ \Psi \in L(H_1, H_2) \ \text{s.t.} \ \sum_{j=1}^{\infty} \langle \Psi e_j, \Psi e_j \rangle_{H_2} < \infty \Big\}, \]
where $\{e_j\}$ is an orthonormal basis of $H_1$. It is denoted by $L_2(H_1, H_2)$, and it is a Hilbert space endowed with the norm
\[ \|\Psi\|_{L_2(H_1, H_2)} = \sqrt{\operatorname{tr}(\Psi \Psi^*)}. \]
We shall write $L_2(H_1)$ for $L_2(H_1, H_1)$.

2.2. Wiener Process
Definition 2.12. A stochastic process $X : [0, T] \times \Omega \to H$ is a collection $\{X(t, \omega) : (t, \omega) \in [0, T] \times \Omega\}$ such that
(i) for each $t$, $X(t, \cdot)$ is an $H$-valued random variable on $\Omega$;
(ii) for each $\omega$, $X(\cdot, \omega)$ is a measurable function ($t \mapsto X(t, \omega)$ is called a sample path).
Thus a stochastic process $X(t, \omega)$, or $\{X(t, \omega) : (t, \omega) \in [0, T] \times \Omega\}$, can be expressed as $X(t)(\omega)$ or simply as $X(t)$ or $X_t$.
Definition 2.13. For a random variable $X \in L^1 = L^1(\Omega, \mathcal{F}, \mathbb{P})$, we define the expectation of $X$ by
\[ \mathbb{E}[X] = \int_\Omega X \, d\mathbb{P}. \]
We say $X$ has a probability distribution or probability density function if there exists a Borel function $f(x) : \mathbb{R} \to [0, \infty]$ such that
\[ \mathbb{P}(X \in B) = \int_B f(x) \, dx, \quad B \in \mathcal{B}(\mathbb{R}). \]
$\mathbb{E}[X^2]$ is called the mean square or second moment of the continuous random variable $X$, given by the equation
\[ \mathbb{E}[X^2] = \int_{-\infty}^{\infty} x^2 f(x) \, dx. \]
The expected value of the squared deviation from the mean $\mathbb{E}[X]$ is called the variance of the random variable $X$, denoted by $\operatorname{var}(X)$.

Example 2.1. Let $X$ be uniformly distributed on $[0, 1]$:
\[ f(x) = \begin{cases} 1, & 0 \leq x \leq 1, \\ 0, & \text{otherwise.} \end{cases} \]
Then
\[ \mathbb{E}[X] = \int_{-\infty}^{\infty} x f(x) \, dx = \int_0^1 x \, dx = \frac{1}{2}, \]
which is the mean of a random variable with the uniform probability density function, and
\[ \mathbb{E}[X^2] = \int_{-\infty}^{\infty} x^2 f(x) \, dx = \int_0^1 x^2 \, dx = \frac{1}{3}. \]
The variance is then
\[ \operatorname{var}(X) = \mathbb{E}[X^2] - \mathbb{E}[X]^2 = \frac{1}{3} - \frac{1}{4} = \frac{1}{12}. \]
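The computation in Example 2.1 can be checked numerically. The following Monte Carlo sketch (not part of the dissertation; it is only an illustration) estimates the mean and variance of a uniform random variable on $[0,1]$ and compares them with $1/2$ and $1/12$:

```python
import random
import statistics

# Monte Carlo check of Example 2.1: for X uniform on [0, 1]
# we expect E[X] = 1/2 and var(X) = 1/12.
random.seed(0)
samples = [random.random() for _ in range(100_000)]

mean = statistics.fmean(samples)
var = statistics.fmean(x * x for x in samples) - mean ** 2

print(mean, var)  # mean close to 0.5, var close to 1/12 ≈ 0.0833
```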
Let us now introduce an important process.

Definition 2.14. A stochastic process $B(t, \omega)$ is called a Brownian motion if it satisfies the following conditions:
(i) $\mathbb{P}\{\omega \,|\, B(0, \omega) = 0\} = 1$;
(ii) for any $0 \leq s < t$, the random variable $B(t) - B(s)$ is normally distributed with mean $0$ and variance $t - s$, i.e., for any $a < b$,
\[ \mathbb{P}\big(a \leq B(t) - B(s) \leq b\big) = \frac{1}{\sqrt{2\pi (t - s)}} \int_a^b e^{-x^2 / 2(t-s)} \, dx; \]
(iii) $B(t, \omega)$ has independent increments, i.e., for any $0 \leq t_1 < t_2 < \cdots < t_n$, the random variables
\[ B(t_1), \; B(t_2) - B(t_1), \; \ldots, \; B(t_n) - B(t_{n-1}) \]
are independent.

A filtration is an increasing family $\{\mathcal{F}_t, t \geq 0\}$ of sub-$\sigma$-algebras of $\mathcal{F}$: for $s \leq t$, $\mathcal{F}_s \subset \mathcal{F}_t$. By using the preceding formal expansion (2.6) of $W$ we can define the completed natural filtration $\{\mathcal{F}_t, t \geq 0\}$ of $W$ by
\[ \mathcal{F}_t = \sigma\{w_j(s); \; 0 \leq s \leq t, \; j = 1, 2, \ldots\} \vee \mathcal{N}, \quad t \geq 0, \]
where $\mathcal{N}$ is the collection of all $\mathbb{P}$-null sets of $\mathcal{F}$.

2.3. Martingales

First we start with the conditional expectation.

Definition 2.15. Let $\mathcal{G}$ be a $\sigma$-field with $\mathcal{G} \subset \mathcal{F}$. For an integrable random variable $X$ (i.e. $\mathbb{E}[|X|] < \infty$), we say that $Y$ is the conditional expectation of $X$, and denote it by $\mathbb{E}[X \,|\, \mathcal{G}]$, if $Y$ is the unique (a.s., i.e. up to changes on events of probability zero) random variable satisfying:
(i) $Y$ is $\mathcal{G}$-measurable;
(ii) $\int_E X \, d\mathbb{P} = \int_E Y \, d\mathbb{P}$ for all $E \in \mathcal{G}$.
Note that the conditional expectation $\mathbb{E}[X \,|\, \mathcal{G}]$ is a random variable, while the expectation $\mathbb{E}[X]$ is deterministic. Let us give some properties of conditional expectation.
(i) Measurability: if the random variable $X$ is $\mathcal{G}$-measurable, then $\mathbb{E}[X \,|\, \mathcal{G}] = X$.
(ii) Computing expectations by conditioning: $\mathbb{E}[\mathbb{E}[X \,|\, \mathcal{G}]] = \mathbb{E}[X]$.
(iii) Independence: if $X$ and $\mathcal{G}$ are independent, then $\mathbb{E}[X \,|\, \mathcal{G}] = \mathbb{E}[X]$.
(iv) Linearity: $\mathbb{E}[aX + bZ \,|\, \mathcal{G}] = a\,\mathbb{E}[X \,|\, \mathcal{G}] + b\,\mathbb{E}[Z \,|\, \mathcal{G}]$ for all $a, b \in \mathbb{R}$ and $X, Z \in L^1(\Omega, \mathcal{F}, \mathbb{P})$.
(v) Tower property: if $\mathcal{G}_1 \subset \mathcal{G}_2 \subset \mathcal{F}$, then $\mathbb{E}[\mathbb{E}[X \,|\, \mathcal{G}_2] \,|\, \mathcal{G}_1] = \mathbb{E}[X \,|\, \mathcal{G}_1]$.
For the proof of these properties, and for further properties, we refer the reader to [10, pp. 41-48].

Definition 2.16. A stochastic process $\{X_t, t \geq 0\}$ is called a martingale with respect to $\{\mathcal{F}_t, t \geq 0\}$ if:
(i) $X_t$ is adapted to $\{\mathcal{F}_t, t \geq 0\}$ (i.e. $X_t$ is $\mathcal{F}_t$-measurable for all $t \geq 0$);
(ii) $\mathbb{E}[|X_t|] < \infty$ for all $t \geq 0$;
(iii) $\mathbb{E}[X_t \,|\, \mathcal{F}_s] = X_s$ for all $t \geq s$.

Example 2.2. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and let $\{W_t, t \geq 0\}$ be a standard Brownian motion. We want to show that $X_t = W_t^2 - t$ is a martingale. We argue as follows:

1. Note that for $s \leq t$, since $W_t - W_s \perp \mathcal{F}_s$ (i.e. $W_t - W_s$ is independent of $\mathcal{F}_s$), we have
\[
\begin{aligned}
\mathbb{E}[W_t^2 - t \,|\, \mathcal{F}_s] &= \mathbb{E}\big[(W_t - W_s + W_s)^2 \,|\, \mathcal{F}_s\big] - t \\
&= \mathbb{E}\big[(W_t - W_s)^2 \,|\, \mathcal{F}_s\big] + 2\,\mathbb{E}\big[W_s (W_t - W_s) \,|\, \mathcal{F}_s\big] + \mathbb{E}\big[W_s^2 \,|\, \mathcal{F}_s\big] - t \\
&= (t - s) + 0 + W_s^2 - t = W_s^2 - s.
\end{aligned}
\]
2. Since $|X_t| = |W_t^2 - t| \leq W_t^2 + t$, we can write
\[ \mathbb{E}(|X_t|) \leq \mathbb{E}(W_t^2 + t) = 2t < \infty, \]
for all $t \geq 0$.
3. Since $X_t = W_t^2 - t$ is a function of $W_t$, it is $\mathcal{F}_t$-adapted.

From (1)-(3) we obtain that $X_t = W_t^2 - t$ is a martingale.
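A consequence of Example 2.2 that is easy to test numerically is that a martingale has constant expectation, so $\mathbb{E}[W_t^2 - t] = \mathbb{E}[X_0] = 0$ for every $t$. The following Monte Carlo sketch (an illustration, not part of the dissertation) checks this for a few values of $t$:

```python
import random
import statistics

# Monte Carlo illustration of Example 2.2: X_t = W_t^2 - t is a
# martingale, hence E[X_t] = E[X_0] = 0 for every t.
random.seed(1)

def sample_X(t: float) -> float:
    w_t = random.gauss(0.0, t ** 0.5)  # W_t ~ N(0, t)
    return w_t * w_t - t

for t in (0.5, 1.0, 2.0):
    m = statistics.fmean(sample_X(t) for _ in range(50_000))
    print(t, m)  # each sample mean is close to 0
```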
2.4. Stochastic Integration

Let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \geq 0}, \mathbb{P})$ be a complete filtered probability space which satisfies the usual conditions, i.e. the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ is complete, the $\sigma$-algebras $\mathcal{F}_t$ contain all the sets in $\mathcal{F}$ of zero probability, and the filtration $(\mathcal{F}_t)$ is right-continuous, in the sense that, for every $t$, the $\sigma$-algebra $\mathcal{F}_{t+} = \bigcap_{s > t} \mathcal{F}_s$ is equal to $\mathcal{F}_t$. Let $\tilde{H}$ be a separable Hilbert space. Denote by $L^2_{\mathcal{F}}(0, T; \tilde{H})$ the space of all progressively measurable processes $\tilde{f}$ with values in $\tilde{H}$ (i.e. for all $t \in [0, T]$, the process $\tilde{f}|_{[0,t]}$ is $\mathcal{B}([0, t]) \otimes \mathcal{F}_t$-measurable) such that
\[ \mathbb{E}\Big[\int_0^T \big\|\tilde{f}(s)\big\|_{\tilde{H}}^2 \, ds\Big] < \infty. \]
It is easy to see that $L^2_{\mathcal{F}}(0, T; \tilde{H})$ is a Hilbert space with the norm
\[ \big\|\tilde{f}\big\| = \Big(\mathbb{E}\Big[\int_0^T \big\|\tilde{f}(s)\big\|_{\tilde{H}}^2 \, ds\Big]\Big)^{1/2}. \]
2.4.1. Stochastic integration with respect to 1-dimensional Brownian motion

This subsection is based on the books of Kuo, [8], and Oksendal, [12]. For $\varphi \in L^2_{\mathcal{F}}(0, T; \mathbb{R})$ we want to define the stochastic integral
\[ I(\varphi) = \int_0^T \varphi(t) \, dB(t), \]
where $B$ is a Brownian motion in $\mathbb{R}$. We do this in two steps.

Step 1: Suppose $\varphi$ is a step function given by
\[ \varphi(t, \omega) = \sum_{j \geq 0} \xi_j(\omega) 1_{[t_j, t_{j+1})}(t), \]
where $\{t_0 < t_1 < \cdots\}$ is a suitable partition of $[0, T]$ satisfying $\sup_{j \geq 0} |t_{j+1} - t_j| \to 0$, and the $\xi_j$ are $\mathcal{F}_{t_j}$-measurable random variables satisfying $\mathbb{E}[\xi_j^2] < \infty$. In this case the stochastic integral is defined by
\[ \int_0^T \varphi(t) \, dB(t) = \sum_{j \geq 0} \xi_j(\omega) \big(B(t_{j+1}) - B(t_j)\big). \tag{2.7} \]
Lemma 2.1. For a step function $\varphi$ the random variable $I(\varphi)$ is Gaussian with mean $0$ and variance
\[ \mathbb{E}\big(I(\varphi)^2\big) = \mathbb{E} \int_0^T (\varphi(t))^2 \, dt. \tag{2.8} \]

The proof of this lemma can be found in [8, p. 10]. Formula (2.8) is often called the isometry property of the stochastic integral $I(\varphi)$.

Step 2: We will use $L^2(\Omega, \mathcal{F}, \mathbb{P})$ to denote the Hilbert space of square integrable real-valued random variables on $\Omega$ with inner product $\langle X, Y \rangle = \mathbb{E}(XY)$. Let $\varphi \in L^2_{\mathcal{F}}(0, T; \mathbb{R})$. Choose a sequence $\{\varphi_n\}_{n=1}^{\infty}$ of step functions such that $\varphi_n \to \varphi$ in $L^2_{\mathcal{F}}(0, T; \mathbb{R})$. By Lemma 2.1 the sequence $\{I(\varphi_n)\}_{n=1}^{\infty}$ is Cauchy in $L^2(\Omega)$, hence it converges in $L^2(\Omega)$. Define
\[ I(\varphi) = \int_0^T \varphi(t) \, dB(t) = \lim_{n \to \infty} \int_0^T \varphi_n(t) \, dB(t) = \lim_{n \to \infty} I(\varphi_n) \quad \text{in } L^2(\Omega, \mathcal{F}, \mathbb{P}). \tag{2.9} \]
Hence $\int_0^T \varphi(t) \, dB(t)$ is well defined; for more details see [8, p. 11] and [12, Chapter 3].

An isometry property as in Lemma 2.1 is also valid for the situation of $L^2_{\mathcal{F}}(0, T; H)$. In particular,
\[ \mathbb{E} \Big| \int_0^T \varphi(s) \, dW(s) \Big|_H^2 = \mathbb{E} \int_0^T |\varphi(s)|_{L_2(H)}^2 \, ds, \]
for $\varphi \in L^2_{\mathcal{F}}(0, T; L_2(H))$.

Extensions that define stochastic integration for processes outside $L^2_{\mathcal{F}}(0, T; \mathbb{R})$, and in particular in $L_{\mathcal{F}}(\Omega; L^p[a, b])$ ($1 \leq p \leq \infty$), the space of all $\{\mathcal{F}_t\}$-adapted stochastic processes $f(t)$ such that $\int_a^b |f(t)|^p \, dt < \infty$ a.s., are important. Here we write briefly $L^p[a, b] = L^p([a, b], \mathcal{B}([a, b]), \text{Leb.})$, where Leb. denotes Lebesgue measure on $([a, b], \mathcal{B}([a, b]))$. We record this briefly in the following theorem.

Theorem 2.17. Suppose $f \in L_{\mathcal{F}}(\Omega; L^p[a, b])$ is an $\{\mathcal{F}_t\}$-adapted stochastic process and assume that $\mathbb{E}(f(t) f(s))$ is a continuous function of $t$ and $s$. Then
\[ \int_a^b f(t) \, dB(t) = \lim_{\|\triangle_n\| \to 0} \sum_{i=1}^{n} f(t_{i-1}) \big(B_{t_i} - B_{t_{i-1}}\big), \]
in probability, where $\triangle_n = \{t_0, t_1, \ldots, t_n\}$ is a partition of the finite interval $[a, b]$ and $\|\triangle_n\| = \max_{1 \leq i \leq n} (t_i - t_{i-1})$.
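The isometry (2.8) and the left-endpoint Riemann sums of Theorem 2.17 can be illustrated numerically. In the sketch below (an illustration, not part of the dissertation) we take the deterministic integrand $\varphi(t) = t$ on $[0,1]$, so $I(\varphi)$ should be Gaussian with mean $0$ and variance $\int_0^1 t^2\,dt = 1/3$:

```python
import random
import statistics

# Left-endpoint Riemann-Ito sums for I = ∫_0^1 t dB(t).
# By the isometry property, E[I] = 0 and E[I^2] = ∫_0^1 t^2 dt = 1/3.
random.seed(2)
N = 200          # partition size
DT = 1.0 / N

def ito_riemann_sum() -> float:
    total, t = 0.0, 0.0
    for _ in range(N):
        db = random.gauss(0.0, DT ** 0.5)  # increment B(t_{i}) - B(t_{i-1})
        total += t * db                    # left endpoint f(t_{i-1})
        t += DT
    return total

samples = [ito_riemann_sum() for _ in range(20_000)]
print(statistics.fmean(samples))                 # ≈ 0
print(statistics.fmean(x * x for x in samples))  # ≈ 1/3
```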
More details on this fact, as well as the proof of this theorem, can be found in [8, pp. 43-57], [12] or [15].

Example 2.3. Assume $B_0 = 0$. We claim that
\[ \int_0^t B_s \, dB_s = \frac{1}{2} B_t^2 - \frac{1}{2} t. \]
To prove this identity, put $\varphi_n(t) = \sum_{j \geq 0} B_{t_j}(\omega) 1_{[t_j, t_{j+1})}(t)$, where $t_j = t_j^{(n)} = j \, 2^{-n} t$ for $0 \leq j \leq 2^n$, and write $B_j = B_{t_j}$. Then
\[
\begin{aligned}
\mathbb{E}\Big[\int_0^t (\varphi_n - B_s)^2 \, ds\Big] &= \sum_j \mathbb{E}\Big[\int_{t_j}^{t_{j+1}} (B_j - B_s)^2 \, ds\Big] \\
&= \sum_j \int_{t_j}^{t_{j+1}} (s - t_j) \, ds = \sum_j \frac{1}{2} (t_{j+1} - t_j)^2 \to 0, \quad \text{as } \triangle t_j \to 0.
\end{aligned}
\]
So (2.9) holds here, and then by Theorem 2.17
\[ \int_0^t B_s \, dB_s = \lim_{\triangle t_j \to 0} \sum_j B_j \, \triangle B_j. \]
Now
\[ \triangle (B_j^2) = B_{j+1}^2 - B_j^2 = (B_{j+1} - B_j)^2 + 2 B_j (B_{j+1} - B_j) = (\triangle B_j)^2 + 2 B_j (\triangle B_j), \]
and therefore, since $B_0 = 0$,
\[ B_t^2 = \sum_j (\triangle B_j)^2 + 2 \sum_j B_j (\triangle B_j), \]
so that
\[ \sum_j B_j (\triangle B_j) = \frac{1}{2} B_t^2 - \frac{1}{2} \sum_j (\triangle B_j)^2. \]
Since $\sum_j (\triangle B_j)^2 \to t$ in $L^2(\Omega, \mathcal{F}, \mathbb{P})$ as $\|\triangle t_j\| \to 0$, we get the desired identity.
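The identity of Example 2.3 can be seen numerically: the left-endpoint sums $\sum_j B_j \, \triangle B_j$ should approximate $\frac{1}{2} B_t^2 - \frac{1}{2} t$ pathwise, with a mean-square error that vanishes with the mesh. A minimal sketch (not part of the dissertation):

```python
import random
import statistics

# Pathwise check of Example 2.3: the discrepancy between the
# Riemann-Ito sums and (1/2) B_T^2 - (1/2) T equals
# (1/2)(T - sum (dB_j)^2), which is small in L^2 for a fine mesh.
random.seed(3)
T = 1.0
N = 1000
DT = T / N

def squared_error_one_path() -> float:
    b = 0.0
    ito_sum = 0.0
    for _ in range(N):
        db = random.gauss(0.0, DT ** 0.5)
        ito_sum += b * db          # B_j * (B_{j+1} - B_j)
        b += db
    return (ito_sum - (0.5 * b * b - 0.5 * T)) ** 2

mse = statistics.fmean(squared_error_one_path() for _ in range(500))
print(mse)  # small mean-square error
```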
2.4.2. Stochastic integration with respect to infinite dimensional Wiener processes

Let $W$ be a cylindrical Wiener process on a separable Hilbert space $H$, and let $K$ be a separable Hilbert space. By using (2.7) we can define the stochastic integral of a process $\Phi \in L^2_{\mathcal{F}}(0, T; L_2(H, K))$ with respect to $W$ by
\[ \int_0^T \Phi(s) \, dW(s) = \sum_{j=1}^{\infty} \int_0^T \big(\Phi(s) e_j\big) \, dw_j(s), \tag{2.10} \]
where each integral on the right hand side makes sense as a stochastic integral with respect to a 1-dimensional Brownian motion. Observe that the sum in (2.10) exists in $L^2(\Omega, \mathcal{F}, \mathbb{P}; K)$ since
\[ \mathbb{E} \Big| \sum_{j=1}^{\infty} \int_0^T \big(\Phi(s) e_j\big) \, dw_j(s) \Big|_K^2 = \sum_{j=1}^{\infty} \mathbb{E} \int_0^T \big|\Phi(s) e_j\big|_K^2 \, ds < \infty. \]
Thereby the integral $\int_0^T \Phi(s) \, dW(s)$ is well defined and belongs to $L^2(\Omega, \mathcal{F}_T, \mathbb{P}; K)$.

Theorem 2.18 (Martingale representation theorem). Let $\{M(t), 0 \leq t \leq T\}$ be a martingale in a separable Hilbert space $K$ with respect to the completed natural filtration $\{\mathcal{F}_t\}_{0 \leq t \leq T}$ of a cylindrical Wiener process $W$ on $H$. Assume that $M$ is square integrable, i.e.
\[ \sup_{0 \leq t \leq T} \mathbb{E} |M(t)|_K^2 < \infty. \]
Then there is a unique stochastic process $R \in L^2_{\mathcal{F}}(0, T; L_2(H, K))$ such that, for all $0 \leq t \leq T$, we have a.s.
\[ M(t) = M(0) + \int_0^t R(s) \, dW(s). \tag{2.11} \]
In particular, $M$ has a continuous modification. Note that $M(0)$ in (2.11) equals $\mathbb{E}(M(t))$ for all $t$.

Theorem 2.18 is very useful for establishing the existence and uniqueness of solutions of infinite dimensional backward stochastic differential equations (BSDEs) and forward-backward stochastic differential equations (FBSDEs), which will be discussed in Section 3. For the proof of this theorem one can see [3].
2.5. Itô's Formula

Recall that if $f$ and $g$ are differentiable functions of $t$, then $f(g(t))$ is also differentiable and has derivative
\[ \frac{d}{dt} f(g(t)) = f'(g(t)) \, g'(t), \quad \forall t. \]
In terms of the fundamental theorem of calculus this equality says that
\[ f(g(t)) - f(g(a)) = \int_a^t f'(g(s)) \, g'(s) \, ds. \tag{2.12} \]
Itô calculus deals with random functions, i.e. stochastic processes. In the following we introduce versions of (2.12) called Itô's formula. We follow mainly [8].

Theorem 2.19 (Itô's formula in the simplest form). Let $f$ be a $C^2$ function and $B(t)$ be a Brownian motion in $\mathbb{R}$. Then
\[ f(B(t)) = f(B(a)) + \int_a^t f'(B(s)) \, dB(s) + \frac{1}{2} \int_a^t f''(B(s)) \, ds, \]
for any $a \geq 0$, where the first integral is a stochastic integral (as defined in Subsection 2.4.1) and the second integral is a Riemann integral for each sample path of $B(s)$.

We prefer to omit the proof of this theorem, because it is lengthy and beyond the scope of our work here; for convenience we refer the reader to [3]. Similarly for the following two theorems.

Definition 2.20. An Itô process is a stochastic process of the form
\[ X_t = X_a + \int_a^t f(s) \, dB(s) + \int_a^t g(s) \, ds, \quad a \leq t \leq b, \]
where $X_a$ is $\mathcal{F}_a$-measurable, $f \in L_{\mathcal{F}}(\Omega; L^2[a, b])$, and $g \in L_{\mathcal{F}}(\Omega; L^1[a, b])$.

Theorem 2.21 (Itô's formula in the general form). Let $X_t$ be an Itô process given by
\[ X_t = X_a + \int_a^t f(s) \, dB(s) + \int_a^t g(s) \, ds, \quad a \leq t \leq b. \tag{2.13} \]
Suppose $\theta(t, x)$ is a continuous function with continuous partial derivatives $\frac{\partial \theta}{\partial t}$, $\frac{\partial \theta}{\partial x}$ and $\frac{\partial^2 \theta}{\partial x^2}$. Then $\theta(t, X_t)$ is also an Itô process, and
\[ \theta(t, X_t) = \theta(a, X_a) + \int_a^t \frac{\partial \theta}{\partial x}(s, X_s) f(s) \, dB(s) + \int_a^t \Big[ \frac{\partial \theta}{\partial s}(s, X_s) + \frac{\partial \theta}{\partial x}(s, X_s) g(s) + \frac{1}{2} \frac{\partial^2 \theta}{\partial x^2}(s, X_s) f(s)^2 \Big] ds. \tag{2.14} \]
This formula (2.14) is sometimes written in its differential form (as in Example 2.4 below), giving an equivalent formula which is useful in studying stochastic differential equations (SDEs).

In infinite dimensions we have the following version of Itô's formula in separable Hilbert spaces $H$ and $K$.
Theorem 2.22. Let $\{x(t), t \in [0, T]\}$ be a $K$-valued process given by
\[ x(t) = x(0) + \int_0^t b(s) \, ds + \int_0^t \Phi(s) \, dW(s), \]
where $b(\cdot) \in L^2_{\mathcal{F}}(0, T; K)$ and $\Phi(\cdot) \in L^2_{\mathcal{F}}(0, T; L_2(H, K))$. Suppose that $\theta \in C^{1,2}([0, T] \times K)$. Then $\mathbb{P}$-a.s., for all $t \in [0, T]$,
\[
\begin{aligned}
\theta(t, x(t)) = \theta(0, x(0)) &+ \int_0^t \frac{\partial \theta}{\partial s}(s, x(s)) \, ds + \int_0^t D\theta(s, x(s))(b(s)) \, ds \\
&+ \int_0^t D\theta(s, x(s))\big(\Phi(s) \, dW(s)\big) + \frac{1}{2} \int_0^t \operatorname{tr}\big[D^2\theta(s, x(s)) \, \Phi(s) \Phi(s)^*\big] \, ds,
\end{aligned}
\]
where $D$ denotes the Fréchet derivative of $\theta$ with respect to $x$. We refer the reader to [5, p. 105] for the proof of this theorem.

Example 2.4. We want to evaluate
\[ I_t = \int_0^t B_s^n \, dB_s, \quad \text{for all } n \geq 1. \]
Let $X_t = B_t$ and $\theta(t, x) = \frac{1}{n+1} x^{n+1}$. Then $\theta(t, X_t) = \frac{1}{n+1} B_t^{n+1}$, and by Itô's formula,
\[ d\theta(t, X_t) = \frac{\partial \theta}{\partial t}(t, X_t) \, dt + \frac{\partial \theta}{\partial x}(t, X_t) \, dB(t) + \frac{1}{2} \frac{\partial^2 \theta}{\partial x^2}(t, X_t) \, dt = B_t^n \, dB(t) + \frac{1}{2} n B_t^{n-1} \, dt. \]
Hence
\[ d\Big(\frac{1}{n+1} B_t^{n+1}\Big) = B_t^n \, dB_t + \frac{1}{2} n B_t^{n-1} \, dt, \]
so that
\[ \int_0^t B_s^n \, dB_s = \frac{1}{n+1} B_t^{n+1} - \frac{n}{2} \int_0^t B_s^{n-1} \, ds. \]
If $n = 1$,
\[ \int_0^t B_s \, dB_s = \frac{1}{2} B_t^2 - \frac{1}{2} t. \]
Example 2.5. We want to compute
\[ I_t = \int_0^t e^{B_s} \, dB_s. \]
Let $X_t = B_t$ and $\theta(t, x) = e^x$. Then $\theta(t, X_t) = e^{B_t}$, and by Itô's formula,
\[ d\theta(t, X_t) = \frac{\partial \theta}{\partial t}(t, X_t) \, dt + \frac{\partial \theta}{\partial x}(t, X_t) \, dB(t) + \frac{1}{2} \frac{\partial^2 \theta}{\partial x^2}(t, X_t) \, dt = e^{B_t} \, dB(t) + \frac{1}{2} e^{B_t} \, dt. \]
Hence, since $B_0 = 0$,
\[ I_t = \int_0^t e^{B_s} \, dB_s = e^{B_t} - 1 - \frac{1}{2} \int_0^t e^{B_s} \, ds. \]
This equality is the Doob-Meyer decomposition of the submartingale $e^{B_t}$.
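Since a stochastic integral has mean zero, the decomposition of Example 2.5 forces $\mathbb{E}\big[e^{B_1} - 1 - \frac{1}{2}\int_0^1 e^{B_s}\,ds\big] = 0$, and this is easy to check by simulation. A minimal sketch (an illustration, not part of the dissertation):

```python
import math
import random
import statistics

# Monte Carlo sanity check of Example 2.5: the right hand side
# e^{B_1} - 1 - (1/2) ∫_0^1 e^{B_s} ds equals a stochastic integral,
# so its expectation is 0.
random.seed(4)
N = 200
DT = 1.0 / N

def rhs_one_path() -> float:
    b = 0.0
    integral = 0.0
    for _ in range(N):
        integral += math.exp(b) * DT       # left-endpoint rule for ∫ e^{B_s} ds
        b += random.gauss(0.0, DT ** 0.5)
    return math.exp(b) - 1.0 - 0.5 * integral

mean = statistics.fmean(rhs_one_path() for _ in range(20_000))
print(mean)  # ≈ 0
```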
3. Forward-Backward Stochastic Differential Equations
In this section we shall discuss the existence and uniqueness of the solution of infinite dimensional forward-backward stochastic differential equations (FBSDEs). We shall assume from here on, as done before, that we are given a complete filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \geq 0}, \mathbb{P})$, where $(\mathcal{F}_t)_{t \geq 0}$ is the completed natural filtration of a cylindrical Wiener process $W$ on a separable Hilbert space $H$.

3.1. Basic Definitions and Assumptions

Denote by $S^2(0, T; H)$ the set of continuous processes $\{X_t, t \in [0, T]\}$ with values in $H$ such that $X_t$ is $\mathcal{F}_t$-measurable for each $t \in [0, T]$ and
\[ \mathbb{E} \sup_{0 \leq t \leq T} |X_t|_H^2 < \infty. \]
We set $M^2 = S^2(0, T; H) \times S^2(0, T; H) \times L^2_{\mathcal{F}}(0, T; L_2(H))$. This is a Banach space with respect to the norm $\|\cdot\|_{M^2}$ given, for $v = (x, y, z)$, by
\[ \|v\|_{M^2}^2 = \mathbb{E} \Big[ \sup_{0 \leq t \leq T} |x_t|_H^2 + \sup_{0 \leq t \leq T} |y_t|_H^2 + \int_0^T \|z_t\|_{L_2(H)}^2 \, dt \Big]. \]
Consider the following FBSDE:
\[
\begin{cases}
dx_t = b(t, x_t, y_t, z_t) \, dt + \sigma(t, x_t, y_t, z_t) \, dW_t, \\
dy_t = -f(t, x_t, y_t, z_t) \, dt + z_t \, dW_t, \quad t \in (0, T), \\
x_0 = a \in H, \\
y_T = \Phi(x_T),
\end{cases} \tag{3.1}
\]
where the coefficients
\[ b, f : [0, T] \times H \times H \times L_2(H) \to H, \qquad \sigma : [0, T] \times H \times H \times L_2(H) \to L_2(H), \]
are measurable functions with respect to $(t, x, y, z)$, $a$ is a given element of $H$, and $\xi \in L^2(\Omega, \mathcal{F}_T, \mathbb{P}; H)$. A solution of (3.1) is a triple $(x, y, z)$ of stochastic processes which belongs to $M^2$ and satisfies the following equations:
\[
\begin{cases}
x_t = a + \int_0^t b(s, x_s, y_s, z_s) \, ds + \int_0^t \sigma(s, x_s, y_s, z_s) \, dW_s, \\
y_t = \Phi(x_T) + \int_t^T f(s, x_s, y_s, z_s) \, ds - \int_t^T z_s \, dW_s, \quad t \in [0, T].
\end{cases}
\]
For $u = (x, y, z) \in H \times H \times L_2(H)$ and $t \in [0, T]$ denote
\[ A(t, u) = (-f, b, \sigma)(t, u), \]
and, for $Y = (x, y, z) \in H \times H \times L_2(H)$, define
\[ \langle A, Y \rangle = \langle x, -f \rangle_H + \langle y, b \rangle_H + \langle z, \sigma \rangle_{L_2(H)}. \]
The following theorem contains a result on the existence and uniqueness of the solution of (3.1). We assume that:

(H1)
(i) $A(t, u)$ is uniformly Lipschitz with respect to $u$;
(ii) for each $u \in H \times H \times L_2(H)$, $A(\cdot, u)$ is in $M^2$;
(iii) $\Phi(x)$ is uniformly Lipschitz with respect to $x \in H$;
(iv) for each $x$, $\Phi(x) \in L^2(\Omega, \mathcal{F}_T, \mathbb{P}; H)$.

The following monotonicity conditions are our main assumptions:

(H2)
(i) $\langle A(t, u) - A(t, \bar{u}), u - \bar{u} \rangle \leq -\beta \, \big(|x - \bar{x}|_H^2 + |y - \bar{y}|_H^2\big)$ for some $\beta > 0$;
(ii) $\langle \Phi(x) - \Phi(\bar{x}), x - \bar{x} \rangle \geq 0$,
for all $u = (x, y, z), \, \bar{u} = (\bar{x}, \bar{y}, \bar{z}) \in H \times H \times L_2(H)$ and all $t \in [0, T]$.
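As a simple illustration of (H1) and (H2) (this example is not taken from the source; it assumes $H = \mathbb{R}$ with a one dimensional Wiener process), consider the linear coefficients $b(t,x,y,z) = -y$, $\sigma(t,x,y,z) = -z$, $f(t,x,y,z) = x$ and $\Phi(x) = x$:

```latex
% Illustrative example (not from the source): H = \mathbb{R},
% b = -y, \sigma = -z, f = x, \Phi(x) = x.
% Then A(t,u) = (-f, b, \sigma)(t,u) = (-x, -y, -z), so
\langle A(t,u) - A(t,\bar{u}),\, u - \bar{u} \rangle
  = -|x - \bar{x}|^2 - |y - \bar{y}|^2 - |z - \bar{z}|^2
  \le -\beta \big( |x - \bar{x}|^2 + |y - \bar{y}|^2 \big),
  \qquad \beta = 1,
% while
\langle \Phi(x) - \Phi(\bar{x}),\, x - \bar{x} \rangle = |x - \bar{x}|^2 \ge 0,
% so (H2) holds; (H1) is clear since all the maps are linear.
```

All the maps being linear with coefficient $1$, the Lipschitz conditions in (H1) hold trivially, so this system falls within the scope of the theory below.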
3.2. Main Theorem

In this subsection we prove the existence and uniqueness of the solution of FBSDE (3.1).

Theorem 3.1. Assume (H1) and (H2). Then there exists a unique adapted solution of FBSDE (3.1) with $\Phi(x) = \xi \in L^2(\Omega, \mathcal{F}_T, \mathbb{P}; H)$.
Uniqueness. Let $u_s = (x_s, y_s, z_s)$ and $u'_s = (x'_s, y'_s, z'_s)$ be two solutions of (3.1). We set $\hat{u} = (x - x', y - y', z - z') = (\hat{x}, \hat{y}, \hat{z})$. We apply Itô's formula to $\langle \hat{x}_s, \hat{y}_s \rangle$ to deduce
\[
\begin{aligned}
\mathbb{E} \langle \Phi(x_T) - \Phi(x'_T), \hat{x}_T \rangle - \mathbb{E} \langle \hat{y}_0, \hat{x}_0 \rangle
&= \mathbb{E} \int_0^T \langle A(s, u_s) - A(s, u'_s), \hat{u}_s \rangle \, ds \\
&\leq -\beta \, \mathbb{E} \int_0^T \big( \langle \hat{x}_s, \hat{x}_s \rangle + \langle \hat{y}_s, \hat{y}_s \rangle \big) \, ds.
\end{aligned}
\]
Since $\hat{x}_0 = 0$ and, in the present case, $\Phi(x) = \xi$ does not depend on $x$, the left hand side vanishes, so the monotonicity of $A(t, \cdot)$ implies
\[ \mathbb{E} \int_0^T \big( \langle \hat{x}_s, \hat{x}_s \rangle + \langle \hat{y}_s, \hat{y}_s \rangle \big) \, ds \leq 0. \]
Since $\beta > 0$, it follows that $\langle \hat{x}_s, \hat{x}_s \rangle = 0$ and $\langle \hat{y}_s, \hat{y}_s \rangle = 0$ a.s., so we have $\hat{x}_s = 0$ a.s. and $\hat{y}_s = 0$ a.s. Thus $x = x'$ and $y = y'$ a.s., and then, comparing the martingale parts of $y$ and $y'$, also $z = z'$ a.s.

Existence. We shall prove an existence result for FBSDE (3.1) for the case where $y_T$ does not depend on $x$, i.e., $y_T = \xi$. In order to do so we need to establish first the following two lemmas. Let us start with a family of FBSDEs parametrized by $\alpha \in [0, 1]$:
\[
\begin{cases}
dx_t = \big[ (\alpha - 1) \beta y_t + \alpha \, b(t, u_t) + \phi_t \big] dt + \big[ (\alpha - 1) \beta z_t + \alpha \, \sigma(t, u_t) + \psi_t \big] dW_t, \\
dy_t = -\big[ (1 - \alpha) \beta x_t + \alpha f(t, u_t) + \gamma_t \big] dt + z_t \, dW_t, \\
x_0 = a, \quad y_T = \xi,
\end{cases} \tag{3.2}
\]
where $\phi$, $\psi$ and $\gamma$ are given processes in $M^2$ taking their values in $H$, $L_2(H)$ and $H$, respectively. Clearly, when $\alpha = 1$ the existence of the solution of (3.2) implies that of (3.1) for $y_T = \xi$. The following lemma gives an a priori estimate for the "existence interval" of (3.2) with respect to $\alpha \in [0, 1]$.
Lemma 3.1. Assume (H1) and (H2). Then there exists a positive constant $\delta_0$ such that if, a priori, for some $\alpha_0 \in [0, 1)$ there exists a solution $(x^{\alpha_0}, y^{\alpha_0}, z^{\alpha_0})$ of (3.2), then for each $\delta \in [0, \delta_0]$ there exists a solution $(x^{\alpha_0 + \delta}, y^{\alpha_0 + \delta}, z^{\alpha_0 + \delta})$ of (3.2) for $\alpha = \alpha_0 + \delta$.

Proof. By assumption, for each triple of processes $\phi, \psi, \gamma$ and each $\xi \in L^2(\Omega, \mathcal{F}_T, \mathbb{P}; H)$, there exists a (unique) solution of (3.2) for $\alpha = \alpha_0$. Thus, for each triple $u_s = (x_s, y_s, z_s) \in M^2$, there exists a unique triple $U_s = (X_s, Y_s, Z_s) \in M^2$ satisfying the following FBSDE:
\[
\begin{cases}
dX_t = \big[ (\alpha_0 - 1) \beta Y_t + \alpha_0 b(t, U_t) + \delta (\beta y_t + b(t, u_t)) + \phi_t \big] dt \\
\qquad\quad + \big[ (\alpha_0 - 1) \beta Z_t + \alpha_0 \sigma(t, U_t) + \delta (\beta z_t + \sigma(t, u_t)) + \psi_t \big] dW_t, \\
dY_t = -\big[ (1 - \alpha_0) \beta X_t + \alpha_0 f(t, U_t) + \delta (-\beta x_t + f(t, u_t)) + \gamma_t \big] dt + Z_t \, dW_t, \\
X_0 = a, \quad Y_T = \xi.
\end{cases} \tag{3.3}
\]
We are now going to prove that the mapping defined by
\[ I_{\alpha_0 + \delta}(u) = U : M^2 \to M^2 \]
is a contraction. Let $u' = (x', y', z') \in M^2$, and denote $U' = (X', Y', Z') = I_{\alpha_0 + \delta}(u')$. Denote also
\[ \hat{u} = (x - x', y - y', z - z') = (\hat{x}, \hat{y}, \hat{z}), \qquad \hat{U} = (X - X', Y - Y', Z - Z') = (\hat{X}, \hat{Y}, \hat{Z}). \]
Applying Itô's formula to $\langle \hat{X}_s, \hat{Y}_s \rangle$, and using $\hat{X}_0 = 0$ and $\hat{Y}_T = 0$, we get
\[
\begin{aligned}
0 = \; & \mathbb{E} \int_0^T \alpha_0 \langle A(s, U_s) - A(s, U'_s), \hat{U}_s \rangle \, ds \\
& - (1 - \alpha_0) \beta \, \mathbb{E} \int_0^T \big( \langle \hat{X}_s, \hat{X}_s \rangle + \langle \hat{Y}_s, \hat{Y}_s \rangle + \langle \hat{Z}_s, \hat{Z}_s \rangle \big) \, ds \\
& + \delta \, \mathbb{E} \int_0^T \big( \beta (\langle \hat{X}_s, \hat{x}_s \rangle + \langle \hat{Y}_s, \hat{y}_s \rangle + \langle \hat{Z}_s, \hat{z}_s \rangle) - \langle \hat{X}_s, \bar{f}_s \rangle + \langle \hat{Y}_s, \bar{b}_s \rangle + \langle \hat{Z}_s, \bar{\sigma}_s \rangle \big) \, ds,
\end{aligned}
\]
where
\[ \bar{f}_s = f(s, u_s) - f(s, u'_s), \quad \bar{b}_s = b(s, u_s) - b(s, u'_s), \quad \bar{\sigma}_s = \sigma(s, u_s) - \sigma(s, u'_s). \]
From (H1) and (H2), we can get
\[ \mathbb{E} \int_0^T \big( \langle \hat{X}_s, \hat{X}_s \rangle + \langle \hat{Y}_s, \hat{Y}_s \rangle \big) \, ds \leq \delta C_1 \, \mathbb{E} \int_0^T \big( |\hat{u}_s|^2 + |\hat{U}_s|^2 \big) \, ds. \]
On the other hand, since $X$ and $X'$ are solutions of SDEs of Itô type, applying the usual technique, the estimates for the difference $\hat{X} = X - X'$ are obtained by
\[
\begin{aligned}
\sup_{0 \leq s \leq T} \mathbb{E} |\hat{X}_s|^2 &\leq \delta^2 C_1 \, \mathbb{E} \int_0^T |\hat{u}_s|^2 \, ds + C_1 \, \mathbb{E} \int_0^T \big( |\hat{Y}_s|^2 + |\hat{Z}_s|^2 \big) \, ds, \\
\mathbb{E} \int_0^T |\hat{X}_s|^2 \, ds &\leq \delta^2 C_1 T \, \mathbb{E} \int_0^T |\hat{u}_s|^2 \, ds + C_1 T \, \mathbb{E} \int_0^T \big( |\hat{Y}_s|^2 + |\hat{Z}_s|^2 \big) \, ds.
\end{aligned}
\]
Similarly, for the difference of the solutions $(\hat{Y}_s, \hat{Z}_s) = (Y - Y', Z - Z')$, we apply the usual technique to the BSDE of (3.3):
\[ \mathbb{E} \int_0^T \big( |\hat{Y}_s|^2 + |\hat{Z}_s|^2 \big) \, ds \leq \delta^2 C_1 \, \mathbb{E} \int_0^T |\hat{u}_s|^2 \, ds + C_1 \, \mathbb{E} \int_0^T |\hat{X}_s|^2 \, ds. \]
Here the constant $C_1$ depends on the Lipschitz constants of $b$, $\sigma$ and $f$, as well as on $\beta$ and $T$. Next, applying Itô's formula to $|\hat{Y}_s|^2$ in the same BSDE once more, it yields
\[
\begin{aligned}
\mathbb{E} |\hat{Y}_0|^2 + \mathbb{E} \int_0^T |\hat{Z}_s|^2 \, ds = \; & \mathbb{E} \int_0^T 2 \big\langle \hat{Y}_s, \; \alpha_0 \big(f(s, U_s) - f(s, U'_s)\big) + (1 - \alpha_0) \beta \hat{X}_s + \delta (-\beta \hat{x}_s + \bar{f}_s) \big\rangle \, ds \\
\leq \; & \frac{1}{4} \, \mathbb{E} \int_0^T |\hat{Z}_s|^2 \, ds + \frac{1}{4L} \, \mathbb{E} \int_0^T |\hat{X}_s|^2 \, ds + C_2 \, \mathbb{E} \int_0^T |\hat{Y}_s|^2 \, ds + \delta^2 C_2 \, \mathbb{E} \int_0^T |\hat{u}_s|^2 \, ds.
\end{aligned}
\]
Here $L = \max(T C_1, 1)$, and $C_2$ is a sufficiently large constant which depends on $L$, $\beta$, and the Lipschitz constants of $b$, $\sigma$ and $f$. Combining the above estimates, it is clear that, since $\beta > 0$, we always have
\[ \mathbb{E} \int_0^T |\hat{U}_s|^2 \, ds \leq \delta C \, \mathbb{E} \int_0^T |\hat{u}_s|^2 \, ds, \]
where the constant $C$ depends only on $C_1$, $C_2$, $L$, $\beta$ and $T$. We now choose $\delta_0 = \frac{1}{2C}$. It is clear that, for each fixed $\delta \in [0, \delta_0]$, the mapping $I_{\alpha_0 + \delta}$ is a contraction, since
\[ \mathbb{E} \int_0^T |\hat{U}_s|^2 \, ds \leq \frac{1}{2} \, \mathbb{E} \int_0^T |\hat{u}_s|^2 \, ds. \]
Hence $I_{\alpha_0 + \delta}$ has a unique fixed point $u^{\alpha_0 + \delta} = (x^{\alpha_0 + \delta}, y^{\alpha_0 + \delta}, z^{\alpha_0 + \delta})$, which is the solution of (3.2) for $\alpha = \alpha_0 + \delta$.
It remains to prove that (3.2) for $\alpha = 0$, i.e.
\[
\begin{cases}
dx_t^0 = [-\beta y_t^0 + \phi_t] \, dt + [-\beta z_t^0 + \psi_t] \, dW_t, \\
dy_t^0 = -[\beta x_t^0 + \gamma_t] \, dt + z_t^0 \, dW_t, \\
x_0^0 = a, \quad y_T^0 = \xi,
\end{cases}
\]
has a unique solution. We will treat a more general situation which can also be used in the proof of Theorem 3.2.

Lemma 3.2. The following equation has a unique solution:
\[
\begin{cases}
dx_t = [-y_t + \phi_t] \, dt + [-z_t + \psi_t] \, dW_t, \\
dy_t = -[x_t + \gamma_t] \, dt + z_t \, dW_t, \\
x_0 = a, \quad y_T = \xi + e x_T,
\end{cases} \tag{3.4}
\]
where $e$ is a nonnegative constant.
Proof. The uniqueness of $(x, y, z)$ follows as in Theorem 3.1. In order to solve (3.4), we introduce the following ODE, known as the (matrix) Riccati equation:
\[ \dot{K}(t) = K^2(t) + I, \quad t \in [0, T), \qquad K(T) = eI, \]
where $K(t)$ takes values in the space of symmetric linear operators on $H$. It is well known (see, e.g., [13, pp. 263-265]) that this equation has a unique $C^1$ nonnegative solution $K(\cdot)$, which is a symmetric linear operator on $H$. We then consider the solution $(p, q) \in M^2$ of the following simple linear backward stochastic differential equation (BSDE):
\[
\begin{cases}
dp_t = \big[\, K(t) p_t + K(t) \phi_t + \gamma_t \,\big] \, dt + \big[\, K(t) \psi_t - (I + K(t)) q_t \,\big] \, dW_t, \quad t \in [0, T], \\
p_T = \xi.
\end{cases}
\]
We now let $x_t$ be the solution of the SDE:
\[
\begin{cases}
dx_t = \big[\, -(K(t) x_t + p_t) + \phi_t \,\big] \, dt + \big[\, \psi_t - q_t \,\big] \, dW_t, \quad t \in [0, T], \\
x_0 = a.
\end{cases}
\]
Then it is easy to check that $(x_t, y_t, z_t) = (x_t, K(t) x_t + p_t, q_t)$ is the solution of (3.4). The proof of Lemma 3.2 is complete.

We now proceed to give the proof of the existence part of Theorem 3.1. By Lemma 3.2 (with $e = 0$), FBSDE (3.2) for $\alpha_0 = 0$ has a unique solution. It then follows from Lemma 3.1 that there exists a positive constant $\delta_0 = \delta_0(C_1, C_2, L, T)$ such that for each $\delta \in [0, \delta_0]$, (3.2) for $\alpha = \alpha_0 + \delta$ has a unique solution. Since $\delta_0$ depends only on $(C_1, C_2, L, T)$, we can repeat this process $N$ times with $1 \leq N \delta_0 < 1 + \delta_0$. It then follows that, in particular, (3.2) for $\alpha = 1$ with $\phi_s \equiv 0$, $\psi_s \equiv 0$ and $\gamma_s \equiv 0$ has a unique solution. Thus the proof of Theorem 3.1 is complete.

Now we can consider the FBSDE (3.1) for $y_T = \Phi(x_T)$. In fact, assumption (H2) has to be strengthened to the following:

(H3)
(i) $\langle A(t, u) - A(t, \bar{u}), u - \bar{u} \rangle \leq -\beta \, \big(|\hat{x}|_H^2 + |\hat{y}|_H^2 + \|\hat{z}\|_{L_2(H)}^2\big)$;
(ii) $\langle \Phi(x) - \Phi(\bar{x}), x - \bar{x} \rangle \geq \beta_1 |\hat{x}|^2$,
for all $u = (x, y, z), \, \bar{u} = (\bar{x}, \bar{y}, \bar{z}) \in H \times H \times L_2(H)$ and all $t \in [0, T]$, with $\hat{x} = x - \bar{x}$, $\hat{y} = y - \bar{y}$, $\hat{z} = z - \bar{z}$,
where $\beta$ and $\beta_1$ are given nonnegative constants with $\beta + \beta_1 > 0$. We have the following main result of this section.
Theorem 3.2. Let (H1) and (H3) hold. Then there exists a unique adapted solution (X; Y; Z) of FBSDE (3.1) with yT = (xT ). Actually the method to prove the existence here is similar to that of Theorem 3.1. We now consider the following for each 2 [0; 1] : 8 > 1) (yt ) + b(t; Xt ) + t ]dt dxt = [( > > < +[( 1) (zt ) + (t; Xt ) + t ]dWt; (3.5) > dyt = [(1 ) xt + f ((t; Xt ) + t ]dt zt dWt ; > > : x = a; y = (x ) + (1 )xT + ; 0 T T where ; and are given processes in M2 with values in H, L2 (H), and H; respectively, 2 L2 ( ; FT ; P; H): Note that the existence of (3.5) for = 1 implies the existence of (3.1). In order to obtain this conclusion, we also need the following lemma.
Lemma 3.3. Suppose (H1) and (H3). Then there exists a positive constant $\delta_0$ such that if, a priori, for some $\alpha_0 \in [0,1)$ there exists a solution of (3.5) for $\alpha = \alpha_0$, then for each $\delta \in [0, \delta_0]$ there exists a solution $(X^{\alpha_0+\delta}, Y^{\alpha_0+\delta}, Z^{\alpha_0+\delta})$ of (3.5) for $\alpha = \alpha_0 + \delta$.

Proof. Since for each $\varphi, \gamma \in M^2$ with values in $H$, $\psi \in M^2$ with values in $L_2(H)$, $\xi \in L^2(\Omega, \mathcal F_T, P; H)$ and $\alpha_0 \in [0,1)$ there exists a (unique) solution of (3.5), for each $u = (x, y, z) \in M^2$ and $x_T \in L^2(\Omega, \mathcal F_T, P; H)$ there exists a unique triple $U_s = (X_s, Y_s, Z_s) \in M^2$ satisfying the following FBSDE:
\[
\begin{cases}
dX_t = \big[(1-\alpha_0)(-Y_t) + \alpha_0 b(t, U_t) + \delta(y_t + b(t, u_t)) + \varphi_t\big]\,dt\\
\qquad\quad + \big[(1-\alpha_0)(-Z_t) + \alpha_0 \sigma(t, U_t) + \delta(z_t + \sigma(t, u_t)) + \psi_t\big]\,dW_t,\\
-dY_t = \big[(1-\alpha_0)X_t + \alpha_0 f(t, U_t) + \delta(-x_t + f(t, u_t)) + \gamma_t\big]\,dt - Z_t\,dW_t,\\
X_0 = a, \quad Y_T = \alpha_0\Phi(X_T) + (1-\alpha_0)X_T + \delta(\Phi(x_T) - x_T) + \xi.
\end{cases}
\]
We now proceed to prove that, if $\delta$ is sufficiently small, the mapping defined by
\[
I_{\alpha_0+\delta}(u, x_T) = (U, X_T): \; M^2 \times L^2(\Omega, \mathcal F_T, P; H) \to M^2 \times L^2(\Omega, \mathcal F_T, P; H)
\]
is a contraction. Let $u' = (x', y', z') \in M^2$, and $(U', X_T') = I_{\alpha_0+\delta}(u', x_T')$. Denote
\[
\hat u = (\hat x, \hat y, \hat z) = (x - x',\, y - y',\, z - z'), \qquad \hat U = (\hat X, \hat Y, \hat Z) = (X - X',\, Y - Y',\, Z - Z').
\]
By applying Itô's formula to $\langle \hat X, \hat Y\rangle$ one obtains
\[
\begin{aligned}
&\alpha_0 E\langle \Phi(X_T) - \Phi(X_T'), \hat X_T\rangle + (1-\alpha_0)E\langle \hat X_T, \hat X_T\rangle + \delta E\langle \Phi(x_T) - \Phi(x_T') - \hat x_T, \hat X_T\rangle\\
&\quad = E\int_0^T \langle \alpha_0\big(A(s, U_s) - A(s, U_s')\big), \hat U_s\rangle\,ds - (1-\alpha_0)E\int_0^T \big(\langle \hat X_s, \hat X_s\rangle + \langle \hat Y_s, \hat Y_s\rangle + \langle \hat Z_s, \hat Z_s\rangle\big)\,ds\\
&\qquad + \delta E\int_0^T \big(\langle \hat X_s, \hat x_s\rangle + \langle \hat Y_s, \hat y_s\rangle + \langle \hat Z_s, \hat z_s\rangle + \langle \hat X_s, \hat f_s\rangle + \langle \hat Y_s, \hat b_s\rangle + \langle \hat Z_s, \hat\sigma_s\rangle\big)\,ds,
\end{aligned}
\]
where
\[
\hat f_s = f(s, u_s) - f(s, u_s'), \qquad \hat b_s = b(s, u_s) - b(s, u_s'), \qquad \hat\sigma_s = \sigma(s, u_s) - \sigma(s, u_s').
\]
From (H1) and (H3), we can get
\[
\begin{aligned}
&\big(\alpha_0\beta_1 + (1-\alpha_0)\big)E|\hat X_T|^2 + \big(\alpha_0\beta + (1-\alpha_0)\big)E\int_0^T \big(|\hat X_s|^2 + |\hat Y_s|^2 + \|\hat Z_s\|^2\big)\,ds\\
&\quad \le \delta K_1 E\int_0^T \big(|\hat u_s|^2 + |\hat U_s|^2\big)\,ds + \delta K_1 E|\hat X_T|^2 + \delta K_1 E\int_0^T |\hat X_s|^2\,ds + \delta K_1 E|\hat x_T|^2.
\end{aligned}
\]
Next apply the same technique as in Lemma 3.1 to derive the estimates:
\[
\begin{aligned}
\sup_{0\le s\le T} E|\hat X_s|^2 &\le K_1\delta\, E\int_0^T |\hat u_s|^2\,ds + K_1 E\int_0^T \big(|\hat Y_s|^2 + \|\hat Z_s\|^2\big)\,ds,\\
E\int_0^T |\hat X_s|^2\,ds &\le K_1 T\delta\, E\int_0^T |\hat u_s|^2\,ds + K_1 T\, E\int_0^T \big(|\hat Y_s|^2 + \|\hat Z_s\|^2\big)\,ds,\\
E\int_0^T \big(|\hat Y_s|^2 + \|\hat Z_s\|^2\big)\,ds &\le K_1\delta\, E\int_0^T |\hat u_s|^2\,ds + K_1\delta\, E|\hat x_T|^2 + K_1 E\int_0^T |\hat X_s|^2\,ds + K_1 E|\hat X_T|^2.
\end{aligned}
\]
Here the constant $K_1$ depends on the Lipschitz constants and $T$. If $\beta_1 > 0$, then $\alpha_0\beta_1 + (1-\alpha_0) \ge \mu$, where $\mu = \min(1, \beta_1) > 0$. Now combine the above four estimates to see that, whether $\beta > 0$ or $\beta_1 > 0$, we always have
\[
E\int_0^T |\hat U_s|^2\,ds + E|\hat X_T|^2 \le \delta K \Big(E\int_0^T |\hat u_s|^2\,ds + E|\hat x_T|^2\Big).
\]
Here the constant $K$ depends only on the Lipschitz constants, $\beta$, $\beta_1$, $T$ and $\mu$. We now choose $\delta_0 = \frac{1}{2K}$. It is clear that for each fixed $\delta \in [0, \delta_0]$ the mapping $I_{\alpha_0+\delta}$ is a contraction, since then
\[
E\int_0^T |\hat U_s|^2\,ds + E|\hat X_T|^2 \le \frac{1}{2}\Big(E\int_0^T |\hat u_s|^2\,ds + E|\hat x_T|^2\Big).
\]
Hence this mapping has a unique fixed point $U^{\alpha_0+\delta} = (X^{\alpha_0+\delta}, Y^{\alpha_0+\delta}, Z^{\alpha_0+\delta})$,
which is the solution of (3.5) for $\alpha = \alpha_0 + \delta$. The proof is complete.

We now give the proof of Theorem 3.2.

Proof of Theorem 3.2. The uniqueness part is similar to the proof of uniqueness in Theorem 3.1. By Lemma 3.2, (3.5) for $\alpha_0 = 0$ has a unique solution. It then follows from Lemma 3.3 that there exists a positive constant $\delta_0$, depending only on the Lipschitz constants and $T$, such that for each $\delta \in [0, \delta_0]$, (3.5) for $\alpha = \alpha_0 + \delta$ has a unique solution. We can repeat this process $N$ times with $1 \le N\delta_0 < 1 + \delta_0$. Hence, in particular, FBSDE (3.5) for $\alpha = 1$ with $\varphi = \psi = \gamma = 0$ and $\xi = 0$ has a unique solution. This completes the proof.

Remark 3.1. If (3.1) satisfies (H1) and the following monotonicity conditions:
\[
\langle A(t,u) - A(t,\bar u),\, u - \bar u\rangle \le -\beta|\hat y|^2, \qquad \langle \Phi(x) - \Phi(\bar x),\, \hat x\rangle \ge \beta_1|\hat x|^2,
\]
where $\beta > 0$ and $\beta_1 > 0$, then one can mimic the proof of Theorem 3.2 and get the same conclusion.

Example 3.1. Consider the following FBSDE:
\[
\begin{cases}
dx_t = \big(P x_t - BN^{-1}B^T y_t - BN^{-1}D^T z_t\big)\,dt + \big(C x_t - DN^{-1}B^T y_t - DN^{-1}D^T z_t\big)\,dW_t,\\
-dy_t = \big(P^T y_t + C^T z_t + R x_t\big)\,dt - z_t\,dW_t,\\
x_0 = a, \quad y_T = Q x_T,
\end{cases}
\tag{3.6}
\]
where $P: H \to H$, $B: H \to H$, $C: H \to L_2(H)$, $D: H \to L_2(H)$ are bounded linear operators, $Q: H \to H$ and $R: H \to H$ are nonnegative symmetric operators, and $N: H \to H$ is a positive linear operator. We assume moreover that $R$, $B$, $N$ and $D$ are strictly monotone. Then (3.6) satisfies assumptions (H1) and (H3). Let us show this precisely. We have
\[
A(t, u) = \begin{pmatrix} -f \\ b \\ \sigma \end{pmatrix}(t, u)
= \begin{pmatrix} -(P^T y_t + C^T z_t + R x_t) \\ P x_t - BN^{-1}B^T y_t - BN^{-1}D^T z_t \\ C x_t - DN^{-1}B^T y_t - DN^{-1}D^T z_t \end{pmatrix}.
\]
By the above assumptions,
\[
\begin{aligned}
\langle A(t,u) - A(t,\bar u),\, u - \bar u\rangle
&= \langle -\big(P^T(y_t-\bar y_t) + C^T(z_t-\bar z_t) + R(x_t-\bar x_t)\big),\, x_t-\bar x_t\rangle_H\\
&\quad + \langle P(x_t-\bar x_t) - BN^{-1}B^T(y_t-\bar y_t) - BN^{-1}D^T(z_t-\bar z_t),\, y_t-\bar y_t\rangle_H\\
&\quad + \langle C(x_t-\bar x_t) - DN^{-1}B^T(y_t-\bar y_t) - DN^{-1}D^T(z_t-\bar z_t),\, z_t-\bar z_t\rangle_{L_2(H)}\\
&\le -\langle R(x_t-\bar x_t), x_t-\bar x_t\rangle_H - \langle BN^{-1}B^T(y_t-\bar y_t), y_t-\bar y_t\rangle_H\\
&\quad - \langle DN^{-1}D^T(z_t-\bar z_t), z_t-\bar z_t\rangle_{L_2(H)}\\
&\le -\beta_1|x_t-\bar x_t|_H^2 - \beta_2|y_t-\bar y_t|_H^2 - \beta_3\|z_t-\bar z_t\|_{L_2(H)}^2\\
&\le -\beta\big(|x_t-\bar x_t|_H^2 + |y_t-\bar y_t|_H^2 + \|z_t-\bar z_t\|_{L_2(H)}^2\big),
\end{aligned}
\]
where $\beta_1, \beta_2, \beta_3$ are positive constants resulting from the (strict) monotonicity properties of the operators $R$, $BN^{-1}B^T$ and $DN^{-1}D^T$, respectively, and $\beta = \min\{\beta_1, \beta_2, \beta_3\}$. Moreover,
\[
\langle Q x_T - Q\bar x_T,\, x_T - \bar x_T\rangle = \langle Q(x_T - \bar x_T),\, x_T - \bar x_T\rangle_H \ge 0,
\]
since $Q$ is nonnegative, so (H3)(ii) holds with constant $0$ there (which suffices because $\beta > 0$). Thus, according to Theorem 3.2, there exists a unique solution to (3.6).
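In the scalar case $H = \mathbb{R}$, with positive numbers standing in for the operators in (3.6), the monotonicity of $A$ can be verified directly: the $P$- and $C$-cross terms cancel, and the inner product collapses to $-R\hat x^2 - (B\hat y + D\hat z)^2/N \le 0$. The following Python sketch checks this algebra numerically; all coefficient values are hypothetical choices, not data from the text.

```python
import random

# Hypothetical scalar coefficients standing in for the operators in (3.6)
P, B, C, D, N, R = 0.7, 1.3, -0.4, 0.9, 2.0, 1.5   # N > 0, R > 0

def A(x, y, z):
    """Scalar analogue of the column vector A(t,u) = (-f, b, sigma)(t,u)."""
    f = P * y + C * z + R * x
    b = P * x - (B / N) * B * y - (B / N) * D * z
    s = C * x - (D / N) * B * y - (D / N) * D * z
    return (-f, b, s)

random.seed(0)
for _ in range(1000):
    u  = [random.uniform(-5, 5) for _ in range(3)]
    ub = [random.uniform(-5, 5) for _ in range(3)]
    du = [a - b_ for a, b_ in zip(u, ub)]
    Au, Aub = A(*u), A(*ub)
    inner = sum((a - b_) * d for a, b_, d in zip(Au, Aub, du))
    # the P- and C-cross terms cancel, leaving
    #   -R*dx^2 - (B*dy + D*dz)^2 / N  <=  0,
    # i.e. A is monotone (in the decreasing sense)
    expected = -R * du[0] ** 2 - (B * du[1] + D * du[2]) ** 2 / N
    assert abs(inner - expected) < 1e-9
    assert inner <= 1e-12
```

Note that the exact remainder is $-(B\hat y + D\hat z)^2/N$ rather than a sum of separate $\hat y^2$ and $\hat z^2$ terms, so in this scalar sketch the coercivity in $(\hat y, \hat z)$ degenerates along the line $B\hat y + D\hat z = 0$; monotonicity itself, however, always holds.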
4. Partial Information Necessary Maximum Principle

In this section we introduce an application of FBSDEs to control theory. We state and prove a partial information necessary maximum principle for the partial information optimal control problem. Let us start with our control problem.

4.1. Statement of the Control Problem

Consider the following controlled $H$-valued fully coupled nonlinear FBSDE:
\[
\begin{cases}
dx_t = b(t, x_t, y_t, z_t, \nu_t)\,dt + \sigma(t, x_t, y_t, z_t, \nu_t)\,dW_t,\\
dy_t = f(t, x_t, y_t, z_t, \nu_t)\,dt + z_t\,dW_t,\\
x(0) = a,\\
y_T = \xi,
\end{cases}
\tag{4.1}
\]
where
\[
\begin{aligned}
b &: [0,T] \times H \times H \times L_2(H) \times O \to H,\\
\sigma &: [0,T] \times H \times H \times L_2(H) \times O \to L_2(H),\\
f &: [0,T] \times H \times H \times L_2(H) \times O \to H
\end{aligned}
\]
are $C^1$ functions with respect to $(x, y, z, \nu)$, $T > 0$ is a constant, and $\xi$ is a random variable in $L^2(\Omega, \mathcal F_T, P; H)$. The control process $\nu(\cdot): [0,T] \times \Omega \to O$ in the system (4.1) is required to be $\{\mathcal E_t\}_{t \ge 0}$-adapted, where $\mathcal E_t \subseteq \mathcal F_t$ for all $t \in [0,T]$ is a given subfiltration representing the information available to the controller at time $t$. Here $O$ is a separable Hilbert space and $\nu(t) \in U$ for a.e. $t$, a.s., with $U$ being a nonempty convex subset of $O$. In addition we require that the process $\nu(\cdot)$ gives rise to a unique (strong) solution of the FBSDE (4.1), in which case we say that $\nu(\cdot)$ is an admissible control process. We denote the set of admissible controls by $U_{ad}$.

If $\nu(\cdot) \in U_{ad}$ and $(x_t, y_t, z_t) = \big(x_t^{(\nu)}, y_t^{(\nu)}, z_t^{(\nu)}\big)$ is the corresponding strong solution of (4.1), we call $(\nu_t, x_t, y_t, z_t)$ an admissible quaternion. Suppose the performance functional obtained by applying the control $\nu(\cdot) \in U_{ad}$ has the form
\[
J(\nu(\cdot)) = E\Big[\int_0^T l(t, x_t, y_t, z_t, \nu_t)\,dt + \Phi(x_T) + h(y_0)\Big],
\tag{4.2}
\]
where $l$, $\Phi$ and $h$ are given $C^1$ functions with respect to $(x, y, z, \nu)$. We require also that
\[
E\Big[\int_0^T |l(t, x_t, y_t, z_t, \nu_t)|\,dt + |\Phi(x_T)| + |h(y_0)|\Big] < \infty.
\tag{4.3}
\]
The partial information optimal control problem is to minimize $J(\nu(\cdot))$ over $\nu(\cdot) \in U_{ad}$, i.e. we seek $u(\cdot) \in U_{ad}$ such that
\[
J(u(\cdot)) = \inf_{\nu(\cdot) \in U_{ad}} J(\nu(\cdot)).
\tag{4.4}
\]
Such a control $u(\cdot)$ is called an optimal control. If $(x_t, y_t, z_t) = \big(x_t^{(u)}, y_t^{(u)}, z_t^{(u)}\big)$ is the corresponding solution of (4.1), then $(u_t, x_t, y_t, z_t)$ is called an optimal quaternion. We can introduce the following adjoint forward-backward stochastic differential equation of the system (4.1) corresponding to an admissible quaternion $(\nu_t, x_t, y_t, z_t)$:
\[
\begin{cases}
dk_t = -\big[b_y^*(t, x_t, y_t, z_t, \nu_t)p_t + \sigma_y^*(\cdots)q_t + f_y^*(\cdots)k_t + l_y(\cdots)\big]\,dt\\
\qquad\quad - \big[b_z^*(\cdots)p_t + \sigma_z^*(\cdots)q_t + f_z^*(\cdots)k_t + l_z(\cdots)\big]\,dW_t,\\
dp_t = -\big[b_x^*(\cdots)p_t + \sigma_x^*(\cdots)q_t + f_x^*(\cdots)k_t + l_x(\cdots)\big]\,dt + q_t\,dW_t,\\
k_0 = -h_y(y_0), \quad p_T = \Phi_x(x_T), \quad 0 \le t \le T,
\end{cases}
\tag{4.5}
\]
where $(p(\cdot), q(\cdot), k(\cdot))$ takes values in $H \times L_2(H) \times H$, all derivatives are evaluated at $(t, x_t, y_t, z_t, \nu_t)$, and $b_x^*$, $\sigma_x^*$, $f_x^*$, etc., denote the adjoints (transposes of the gradients) of $b$, $\sigma$ and $f$ with respect to $x$, etc. Define the Hamiltonian
\[
H: [0,T] \times H \times H \times L_2(H) \times O \times H \times L_2(H) \times H \to \mathbb R
\]
by
\[
H(t, x, y, z, \nu, p, q, k) = \langle k, f(t, x, y, z, \nu)\rangle + \langle p, b(t, x, y, z, \nu)\rangle + \langle q, \sigma(t, x, y, z, \nu)\rangle + l(t, x, y, z, \nu).
\]
We can rewrite equation (4.5) in the Hamiltonian system form:
\[
\begin{cases}
dk_t = -H_y(t, x_t, y_t, z_t, \nu_t, p_t, q_t, k_t)\,dt - H_z(t, x_t, y_t, z_t, \nu_t, p_t, q_t, k_t)\,dW_t,\\
dp_t = -H_x(t, x_t, y_t, z_t, \nu_t, p_t, q_t, k_t)\,dt + q_t\,dW_t,\\
k_0 = -h_y(y_0), \quad p_T = \Phi_x(x_T), \quad 0 \le t \le T.
\end{cases}
\tag{4.6}
\]
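To make the Hamiltonian concrete, here is a scalar ($H = O = \mathbb{R}$) Python sketch with hypothetical linear-quadratic data $b = x + \nu$, $\sigma = \nu$, $f = -y$, $l = \nu^2/2$; none of these choices come from the text. For such data the stationarity condition $H_\nu = 0$ reduces to $p + q + \nu = 0$.

```python
def hamiltonian(x, y, z, v, p, q, k):
    """Scalar sketch of H = <k,f> + <p,b> + <q,sigma> + l for the
    hypothetical data b = x + v, sigma = v, f = -y, l = v**2 / 2."""
    b, sigma, f, l = x + v, v, -y, 0.5 * v ** 2
    return k * f + p * b + q * sigma + l

def hamiltonian_v(x, y, z, v, p, q, k):
    # derivative in v: b_v = 1, sigma_v = 1, f_v = 0, l_v = v
    return p + q + v

# stationarity H_v = 0 gives the candidate control v* = -(p + q)
p, q = 0.8, -0.3
v_star = -(p + q)
assert abs(hamiltonian_v(0.0, 0.0, 0.0, v_star, p, q, 1.0)) < 1e-12

# v* minimizes v -> H here, since l is strictly convex in v
H_at = lambda v: hamiltonian(1.0, 1.0, 0.0, v, p, q, 1.0)
assert H_at(v_star) < H_at(v_star + 0.1) and H_at(v_star) < H_at(v_star - 0.1)
```

This is only a finite-dimensional caricature of the conditional-expectation conditions below, but it shows how the adjoint variables $(p, q, k)$ enter the Hamiltonian linearly while the running cost supplies convexity in the control.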
4.2. A Partial Information Sufficient Maximum Principle

The main results of this section are based on the works of Al-Dafas [2] and Meng [9].

Theorem 4.1. Let $(\hat u_t, \hat x_t, \hat y_t, \hat z_t)$ be an admissible quaternion of the control problem (4.1)-(4.4). Suppose that there exists a solution $(\hat p_t, \hat q_t, \hat k_t)$ of the corresponding adjoint FBSDE (4.6) such that for an arbitrary admissible control $\nu(\cdot) \in U_{ad}$ we have:
\[
\begin{aligned}
&E\Big[\int_0^T (\hat x_t - x_t^{(\nu)})^*\, \hat q_t \hat q_t^*\, (\hat x_t - x_t^{(\nu)})\,dt\Big] < +\infty,\\
&E\Big[\int_0^T (\hat y_t - y_t^{(\nu)})^*\, H_z H_z^*(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t, \hat p_t, \hat q_t, \hat k_t)\, (\hat y_t - y_t^{(\nu)})\,dt\Big] < +\infty,\\
&E\Big[\int_0^T \hat p_t^*\, (\sigma\sigma^*)(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t)\, \hat p_t\,dt\Big] < +\infty,\\
&E\Big[\int_0^T \hat k_t^*\, (\hat z_t - z_t^{(\nu)})(\hat z_t - z_t^{(\nu)})^*\, \hat k_t\,dt\Big] < +\infty,\\
&E\int_0^T \big|H_\nu(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t, \hat p_t, \hat q_t, \hat k_t)\big|_O^2\,dt < +\infty.
\end{aligned}
\]
Moreover, suppose that for all $t \in [0,T]$, $H(t, x, y, z, \nu, \hat p_t, \hat q_t, \hat k_t)$ is convex in $(x, y, z, \nu)$, the functions $h$ and $\Phi$ are convex, and the following partial information minimum condition holds:
\[
E\big[H(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t, \hat p_t, \hat q_t, \hat k_t) \mid \mathcal E_t\big] = \min_{\nu \in U} E\big[H(t, \hat x_t, \hat y_t, \hat z_t, \nu, \hat p_t, \hat q_t, \hat k_t) \mid \mathcal E_t\big].
\]
Then $\hat u(\cdot)$ is a partial information optimal control.

The proof of this theorem can be found in [2].

4.3. A Partial Information Necessary Maximum Principle

In this subsection we prove that if $\hat u(\cdot)$ is a local optimal control of the partial information forward-backward optimal control problem (4.1)-(4.4) in some sense, then $\hat u(\cdot)$ satisfies the partial information maximum condition in some local form. In addition to the assumptions of Subsection 4.2, we introduce the following notion and assumptions. Let $\{e_i\}_{i \ge 1}$ be a complete orthonormal system of $O$. For a bounded $\mathcal E_t$-measurable random variable $\alpha = \alpha(\omega)$, let $\alpha_i(\omega) = \langle \alpha(\omega), e_i\rangle_O$. Denote, for $s \in [0,T]$,
\[
\beta^i(s) = \langle \alpha(\omega), e_i\rangle_O\, \chi_{[t,\, t+r]}(s)\, e_i.
\tag{4.7}
\]
(P1) For all $t, r$ such that $0 \le t < t + r \le T$, and all bounded $\mathcal E_t$-measurable $\alpha = \alpha(\omega)$, the control $\beta^i(s) = \alpha_i \chi_{[t, t+r]}(s) e_i \in U \subseteq O$, $s \in [0,T]$, belongs to $U_{ad}$ for all $i \ge 1$.

(P2) For all $u(\cdot), \beta(\cdot) \in U_{ad}$ with $\beta(\cdot)$ bounded, there exists $\varepsilon > 0$ such that $u(\cdot) + \lambda\beta(\cdot) \in U_{ad}$ for all $\lambda \in (-\varepsilon, \varepsilon)$.

For given $u(\cdot), \beta(\cdot) \in U_{ad}$ with a bounded $\beta(\cdot)$, we define the processes $(X_t^1, Y_t^1, Z_t^1)$ by
\[
X_t^1 = X_t^{(u,\beta)} = \frac{d}{d\lambda}\, x_t^{(u+\lambda\beta)}\Big|_{\lambda=0}, \qquad
Y_t^1 = Y_t^{(u,\beta)} = \frac{d}{d\lambda}\, y_t^{(u+\lambda\beta)}\Big|_{\lambda=0}, \qquad
Z_t^1 = Z_t^{(u,\beta)} = \frac{d}{d\lambda}\, z_t^{(u+\lambda\beta)}\Big|_{\lambda=0}.
\tag{4.8}
\]
Note that $(X_t^1, Y_t^1, Z_t^1)$ satisfies the following linear FBSDE:
\[
\begin{cases}
dX_t^1 = \big[b_x(t, x_t, y_t, z_t, u_t)X_t^1 + b_y(\cdots)Y_t^1 + b_z(\cdots)Z_t^1 + b_\nu(\cdots)\beta_t\big]\,dt\\
\qquad\quad + \big[\sigma_x(t, x_t, y_t, z_t, u_t)X_t^1 + \sigma_y(\cdots)Y_t^1 + \sigma_z(\cdots)Z_t^1 + \sigma_\nu(\cdots)\beta_t\big]\,dW_t,\\
dY_t^1 = \big[f_x(t, x_t, y_t, z_t, u_t)X_t^1 + f_y(\cdots)Y_t^1 + f_z(\cdots)Z_t^1 + f_\nu(\cdots)\beta_t\big]\,dt + Z_t^1\,dW_t,\\
X_0^1 = 0, \quad Y_T^1 = 0,
\end{cases}
\tag{4.9}
\]
where $(x_t, y_t, z_t) = \big(x_t^{(u)}, y_t^{(u)}, z_t^{(u)}\big)$.

Theorem 4.2. Let $\hat u(\cdot) \in U_{ad}$ be a local minimum for the performance functional $J(\nu(\cdot))$, in the sense that for all bounded $\beta(\cdot) \in U_{ad}$ there exists $\varepsilon > 0$ such that $\hat u(\cdot) + \lambda\beta(\cdot) \in U_{ad}$ for all $\lambda \in (-\varepsilon, \varepsilon)$, and
\[
g(\lambda) := J(\hat u(\cdot) + \lambda\beta(\cdot))
\]
is minimal at $\lambda = 0$. Suppose that there exists a solution $(\hat p_t, \hat q_t, \hat k_t)$ of the adjoint FBSDE (4.6) corresponding to the admissible quaternion $(\hat u_t, \hat x_t, \hat y_t, \hat z_t)$, that is,
\[
\begin{cases}
\hat p_t = \Phi_x(\hat x_T) + \displaystyle\int_t^T H_x(s, \hat x_s, \hat y_s, \hat z_s, \hat u_s, \hat p_s, \hat q_s, \hat k_s)\,ds - \int_t^T \hat q_s\,dW_s,\\[2mm]
\hat k_t = -h_y(\hat y_0) - \displaystyle\int_0^t H_y(s, \hat x_s, \hat y_s, \hat z_s, \hat u_s, \hat p_s, \hat q_s, \hat k_s)\,ds - \int_0^t H_z(s, \hat x_s, \hat y_s, \hat z_s, \hat u_s, \hat p_s, \hat q_s, \hat k_s)\,dW_s.
\end{cases}
\tag{4.10}
\]
Moreover, suppose that, with $\hat X_t^1 = X_t^{(\hat u, \beta)}$, $\hat Y_t^1 = Y_t^{(\hat u, \beta)}$, $\hat Z_t^1 = Z_t^{(\hat u, \beta)}$,
we have
\[
\begin{cases}
\text{(i)}\ \ E\Big[\displaystyle\int_0^T (\hat X_t^1)^*\, \hat q_t \hat q_t^*\, \hat X_t^1\,dt\Big] < +\infty,\\[2mm]
\text{(ii)}\ E\Big[\displaystyle\int_0^T (\hat Y_t^1)^*\, H_z H_z^*(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t, \hat p_t, \hat q_t, \hat k_t)\, \hat Y_t^1\,dt\Big] < +\infty,\\[2mm]
\text{(iii)}\ E\Big[\displaystyle\int_0^T \hat p_t^*\, \hat\Lambda_t^1 (\hat\Lambda_t^1)^*\, \hat p_t\,dt\Big] < +\infty,\\[2mm]
\text{(iv)}\ E\Big[\displaystyle\int_0^T \hat k_t^*\, \hat Z_t^1 (\hat Z_t^1)^*\, \hat k_t\,dt\Big] < +\infty,
\end{cases}
\tag{4.11}
\]
where
\[
\hat\Lambda_t^1 = \sigma_x(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t)\hat X_t^1 + \sigma_y(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t)\hat Y_t^1 + \sigma_z(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t)\hat Z_t^1 + \sigma_\nu(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t)\beta_t.
\]
Then $\hat u(\cdot)$ is a stationary point for $E[H \mid \mathcal E_t]$ in the sense that for almost all $t \in [0,T]$ we have
\[
E\big[H_\nu(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t, \hat p_t, \hat q_t, \hat k_t) \mid \mathcal E_t\big] = 0.
\tag{4.12}
\]
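The first variation processes in (4.8)-(4.9) can be checked numerically in a deterministic toy model (no noise, $H = O = \mathbb{R}$): for $dx/dt = -x + u_t$, the variation in a direction $\beta$ solves $dX^1/dt = -X^1 + \beta_t$, $X^1(0) = 0$, and agrees with the difference quotient $(x^{(u+\lambda\beta)} - x^{(u)})/\lambda$. The dynamics, control and perturbation in the Python sketch below are hypothetical.

```python
# Deterministic toy version of (4.8)-(4.9): for dx/dt = -x + u, the first
# variation X1 in the direction beta solves dX1/dt = -X1 + beta, X1(0) = 0.
# We check X1 against (x^{u + lam*beta} - x^{u}) / lam via explicit Euler.

def euler_x(u, a=1.0, T=1.0, n=2000):
    dt, x = T / n, a
    for i in range(n):
        x += dt * (-x + u(i * dt))
    return x

def euler_X1(u, beta, T=1.0, n=2000):
    dt, X1 = T / n, 0.0
    for i in range(n):
        X1 += dt * (-X1 + beta(i * dt))
    return X1

u    = lambda t: 0.5 * t          # hypothetical admissible control
beta = lambda t: 1.0 + t * t      # bounded perturbation direction

lam = 1e-4
fd = (euler_x(lambda t: u(t) + lam * beta(t)) - euler_x(u)) / lam
assert abs(fd - euler_X1(u, beta)) < 1e-7
```

Because the toy dynamics are affine in the control, the difference quotient equals the variational solution up to floating-point rounding; for the nonlinear coefficients of (4.1) the agreement would only be to first order in $\lambda$.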
We shall divide the proof of this theorem into several lemmas, as follows.

Lemma 4.1. Under the conditions of Theorem 4.2, we have
\[
\begin{aligned}
0 &= E\int_0^T \langle l_x(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t), \hat X_t^1\rangle\,dt + E\int_0^T \langle l_y(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t), \hat Y_t^1\rangle\,dt\\
&\quad + E\int_0^T \langle l_z(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t), \hat Z_t^1\rangle\,dt + E\int_0^T \langle l_\nu(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t), \beta_t\rangle\,dt\\
&\quad + E\langle \Phi_x(\hat x_T), \hat X_T^1\rangle + E\langle h_y(\hat y_0), \hat Y_0^1\rangle.
\end{aligned}
\tag{4.13}
\]
Proof. Since
\[
g(\lambda) = J(\hat u(\cdot) + \lambda\beta(\cdot)) = E\Big[\int_0^T l\big(t, x_t^\lambda, y_t^\lambda, z_t^\lambda, \hat u_t + \lambda\beta_t\big)\,dt + \Phi(x_T^\lambda) + h(y_0^\lambda)\Big],
\]
where $(x^\lambda, y^\lambda, z^\lambda)$ denotes the solution of (4.1) corresponding to the control $\hat u(\cdot) + \lambda\beta(\cdot)$, we have, from the fact that $g(\lambda)$ is minimal at $\lambda = 0$,
\[
\begin{aligned}
0 = g'(0) &= E\int_0^T \langle l_x(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t), \hat X_t^1\rangle\,dt + E\int_0^T \langle l_y(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t), \hat Y_t^1\rangle\,dt\\
&\quad + E\int_0^T \langle l_z(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t), \hat Z_t^1\rangle\,dt + E\int_0^T \langle l_\nu(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t), \beta_t\rangle\,dt\\
&\quad + E\langle \Phi_x(\hat x_T), \hat X_T^1\rangle + E\langle h_y(\hat y_0), \hat Y_0^1\rangle,
\end{aligned}
\]
as required.

Lemma 4.2. Under the conditions of Theorem 4.2, we have
\[
\begin{aligned}
&E\langle \Phi_x(\hat x_T), \hat X_T^1\rangle + E\langle h_y(\hat y_0), \hat Y_0^1\rangle\\
&\quad = -E\int_0^T \langle l_x(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t), \hat X_t^1\rangle\,dt - E\int_0^T \langle l_y(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t), \hat Y_t^1\rangle\,dt\\
&\qquad - E\int_0^T \langle l_z(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t), \hat Z_t^1\rangle\,dt + E\int_0^T \langle \hat p_t, b_\nu(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t)\beta_t\rangle\,dt\\
&\qquad + E\int_0^T \langle \hat q_t, \sigma_\nu(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t)\beta_t\rangle\,dt + E\int_0^T \langle \hat k_t, f_\nu(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t)\beta_t\rangle\,dt.
\end{aligned}
\tag{4.14}
\]
Proof. Applying Itô's formula to compute $\langle \hat p_t, \hat X_t^1\rangle + \langle \hat k_t, \hat Y_t^1\rangle$ yields (recall that $\hat X_0^1 = 0$ and $\hat Y_T^1 = 0$):
\[
\begin{aligned}
E\langle \Phi_x(\hat x_T), \hat X_T^1\rangle + E\langle h_y(\hat y_0), \hat Y_0^1\rangle
&= E\langle \hat p_T, \hat X_T^1\rangle - E\langle \hat k_0, \hat Y_0^1\rangle\\
&= -E\int_0^T \langle H_x(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t, \hat p_t, \hat q_t, \hat k_t), \hat X_t^1\rangle\,dt\\
&\quad - E\int_0^T \langle H_y(t, \cdots), \hat Y_t^1\rangle\,dt - E\int_0^T \langle H_z(t, \cdots), \hat Z_t^1\rangle\,dt\\
&\quad + E\int_0^T \langle \hat p_t,\, b_x(\cdots)\hat X_t^1 + b_y(\cdots)\hat Y_t^1 + b_z(\cdots)\hat Z_t^1 + b_\nu(\cdots)\beta_t\rangle\,dt\\
&\quad + E\int_0^T \langle \hat q_t,\, \sigma_x(\cdots)\hat X_t^1 + \sigma_y(\cdots)\hat Y_t^1 + \sigma_z(\cdots)\hat Z_t^1 + \sigma_\nu(\cdots)\beta_t\rangle\,dt\\
&\quad + E\int_0^T \langle \hat k_t,\, f_x(\cdots)\hat X_t^1 + f_y(\cdots)\hat Y_t^1 + f_z(\cdots)\hat Z_t^1 + f_\nu(\cdots)\beta_t\rangle\,dt\\
&= -E\int_0^T \langle l_x(\cdots), \hat X_t^1\rangle\,dt - E\int_0^T \langle l_y(\cdots), \hat Y_t^1\rangle\,dt - E\int_0^T \langle l_z(\cdots), \hat Z_t^1\rangle\,dt\\
&\quad + E\int_0^T \langle \hat p_t, b_\nu(\cdots)\beta_t\rangle\,dt + E\int_0^T \langle \hat q_t, \sigma_\nu(\cdots)\beta_t\rangle\,dt + E\int_0^T \langle \hat k_t, f_\nu(\cdots)\beta_t\rangle\,dt,
\end{aligned}
\]
where all derivatives are evaluated along $(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t)$. Note that the $L^2$ conditions (4.11) ensure that the other terms, involving stochastic integrals with respect to the cylindrical Wiener process $W$, have zero expectations.

Lemma 4.3. Under the conditions of Theorem 4.2, we have
\[
E\int_0^T \langle H_\nu(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t, \hat p_t, \hat q_t, \hat k_t), \beta_t\rangle\,dt = 0.
\tag{4.15}
\]
Proof. By substituting (4.14) into (4.13) we deduce (4.15).

We are now ready to complete the proof of Theorem 4.2. Fix $t \in [0,T]$ and apply Lemma 4.3 to the perturbation $\beta^i$ presented in (4.7), which satisfies (P1) and (P2). It then follows from (4.15) that
\[
E\Big[\int_t^{t+r} \frac{\partial H}{\partial \nu_i}(s, \hat x_s, \hat y_s, \hat z_s, \hat u_s, \hat p_s, \hat q_s, \hat k_s)\,\alpha_i\,ds\Big] = 0.
\]
Differentiating with respect to $r$ at $r = 0$ gives
\[
E\Big[\frac{\partial H}{\partial \nu_i}(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t, \hat p_t, \hat q_t, \hat k_t)\,\alpha_i\Big] = 0.
\]
Since this holds for all bounded $\mathcal E_t$-measurable $\alpha_i$, we conclude that
\[
E\Big[\frac{\partial H}{\partial \nu_i}(t, \hat x_t, \hat y_t, \hat z_t, \hat u_t, \hat p_t, \hat q_t, \hat k_t) \,\Big|\, \mathcal E_t\Big] = 0.
\]
The proof of Theorem 4.2 is complete.
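The differentiation step above is a one-dimensional fact: if $r \mapsto \int_t^{t+r} g(s)\,ds$ vanishes for all small $r \ge 0$, then its derivative at $r = 0$, namely $g(t)$, vanishes. Equivalently, the averages $\frac{1}{r}\int_t^{t+r} g(s)\,ds$ converge to $g(t)$ as $r \downarrow 0$. A small numerical illustration in Python, with a hypothetical smooth integrand $g = \sin$:

```python
import math

# Localization step: the averages (1/r) * \int_t^{t+r} g(s) ds converge to
# g(t) as r -> 0 (here g is smooth, so this is elementary calculus).

def average(g, t, r, n=10_000):
    """Midpoint-rule approximation of (1/r) * \int_t^{t+r} g(s) ds."""
    h = r / n
    return sum(g(t + (i + 0.5) * h) for i in range(n)) * h / r

g = math.sin
t = 0.7
errs = [abs(average(g, t, r) - g(t)) for r in (0.1, 0.01, 0.001)]
assert errs[0] > errs[1] > errs[2]    # error shrinks as the window shrinks
assert errs[2] < 1e-3
```

In the proof this is applied inside the expectation, which is legitimate under the integrability conditions (4.11) imposed in Theorem 4.2.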
REFERENCES
[1] M. Adams and V. Guillemin, Measure theory and probability, 1st edition, Birkhäuser, Boston, (1996).
[2] N. A. Al-Dafas, Stochastic calculus and application to optimal stochastic control, MSc dissertation, Qassim University, Saudi Arabia, (2013).
[3] A. Al-Hussein, Martingale representation theorem in infinite dimensions, Arab J. Math. Sc., 10 (2004), no. 1, 1-18.
[4] K. B. Athreya and S. N. Lahiri, Measure theory and probability theory, Springer Texts in Statistics, New York, (2006).
[5] G. Da Prato and J. Zabczyk, Stochastic equations in infinite dimensions, Encyclopedia of Mathematics and its Applications, 44, Cambridge University Press, Cambridge, (1992).
[6] N. Dunford and J. T. Schwartz, Linear operators, Part II, Interscience, (1963).
[7] A. M. Krall, Hilbert space, boundary value problems and orthogonal polynomials, Birkhäuser Verlag, Basel-Boston-Berlin, (2002).
[8] H.-H. Kuo, Introduction to stochastic integration, Universitext, Springer, New York, (2006).
[9] Q. X. Meng, A maximum principle for optimal control problem of fully coupled forward-backward stochastic systems with partial information, Sci. China Ser. A, 52 (2009), no. 7, 1579-1588, DOI: 10.1007/s11425-009-0114-7.
[10] E. C. D. Nel and S. Ólafsson, Problems and solutions in mathematical finance, Volume 1: Stochastic Calculus, Wiley Finance Series, (2014).
[11] S. Peng and Z. Wu, Fully coupled forward-backward stochastic differential equations and applications to optimal control, SIAM J. Control Optim., 37 (1999), 825-843.
[12] B. Øksendal, Stochastic differential equations: an introduction with applications, 5th edition, Springer, Berlin, (2000).
[13] F. Riesz and B. Sz.-Nagy, Functional analysis, Ungar, New York, (1955).
[14] J. L. Speyer and W. H. Chung, Stochastic processes, estimation, and control, University of California, Los Angeles, (2008).
[15] D. Williams, Probability with martingales, Cambridge Mathematical Textbooks, Cambridge University Press, Cambridge, (1992).
Kingdom of Saudi Arabia
Ministry of Education
Qassim University
College of Science
Department of Mathematics

Forward-backward stochastic differential equations and applications to stochastic optimal control

A research project in partial fulfillment of the requirements of the Master's degree

Prepared by
Nawal bint Mohammed Alharbi
University ID: 331217500

Supervised by
Professor AbdulRahman bin Sulaiman Al-Hussein

Rajab 1436 H (May 2015)
Gratitude and Dedication

First, I thank Allah the Almighty for all the blessings He has bestowed upon me to accomplish this work. Second, I extend my sincere thanks to both my father and my mother. I also extend many thanks to Professor AbdulRahman bin Sulaiman Al-Hussein, who supervised the completion of this work and to whom I owe much, as he supported me through his guidance and continuous encouragement. I also thank Professor Said Kouachi for his kind initiative in explaining how to use the program Scientific Workplace 5.5 in writing research papers, articles and the like. Finally, I offer my sincere prayers and gratitude to everyone who offered me help and assistance.
Abstract

This dissertation aims to prove the existence and uniqueness of solutions of fully coupled forward-backward stochastic differential equations over arbitrarily large time durations, and then to study a stochastic optimal control problem. The controlled system there is described by a system of stochastic differential equations of the above type, driven by a Wiener process in a Hilbert space. In this study we consider admissible control processes that are adapted to a subfiltration of the natural filtration of the Wiener process. We derive sufficient conditions, as well as necessary conditions, of optimality for the partial information control problem under study. The control domain here is convex, and the diffusion coefficient (the coefficient of the Wiener process) in the forward equation of our stochastic system also contains a control variable.
Introduction

The Riemann integral, invented by Bernhard Riemann, is considered the first rigorous definition of the integral of functions defined on intervals. Unfortunately, however, the Riemann integral cannot be used when dealing with certain random functions, such as Brownian motion, because they have unbounded variation. Hence the need arose for a different idea for defining the integral with respect to such a motion. In 1944 the Japanese mathematician Kiyosi Itô introduced the definition of the stochastic integral and gave his famous formula, known ever since as Itô's formula; the stochastic integral is regarded as the counterpart of the Riemann integral for random functions. From that time on, stochastic calculus and stochastic analysis arose.

It is worth mentioning here that stochastic calculus and stochastic analysis have now become among the most important branches of mathematics, thanks to their applications in various fields such as medicine, differential geometry, economics and Malliavin calculus, in addition to potential theory. The reader may consult [5], [8], [9], [11], [12] for further details and applications.

This dissertation is based on three main parts. In the first part we briefly study stochastic calculus in finite dimensional spaces as well as in infinite dimensional Hilbert spaces. In the second part we prove existence and uniqueness results for the solutions of a fully coupled system of forward-backward stochastic differential equations, following the broad lines of the paper of Peng and Wu given in reference [11] of the list of references, where we generalize their work to infinite dimensional spaces. On the other hand, N. Al-Dafas studied, in reference [2], sufficient conditions of optimality in the form of a maximum principle for the same partial information stochastic control problem in a Hilbert space. In this dissertation we complete the work of Al-Dafas by establishing the necessary conditions of optimality, in the form of a maximum principle, for the same optimal control problem, associated precisely with a fully coupled system of forward-backward stochastic differential equations in infinite dimensional Hilbert spaces. This last work is a generalization of the work of Meng in reference [9] to infinite dimensional spaces.