American Journal of Operations Research, 2013, 3, 445-453
Published Online November 2013 (http://www.scirp.org/journal/ajor)
http://dx.doi.org/10.4236/ajor.2013.36043

Relationship between Maximum Principle and Dynamic Programming in Stochastic Differential Games and Applications

Jingtao Shi
School of Mathematics, Shandong University, Jinan, China
Email: [email protected]

Received July 22, 2013; revised August 22, 2013; accepted August 30, 2013

Copyright © 2013 Jingtao Shi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

ABSTRACT

This paper is concerned with the relationship between the maximum principle and dynamic programming in zero-sum stochastic differential games. Under the assumption that the value function is smooth enough, relations among the adjoint processes, the generalized Hamiltonian function and the value function are given. A portfolio optimization problem under model uncertainty in the financial market is discussed to show the applications of our result.

Keywords: Stochastic Optimal Control; Stochastic Differential Games; Dynamic Programming; Maximum Principle; Portfolio Optimization; Model Uncertainty
1. Introduction

Game theory has been an active area of research and a useful tool in many applications, particularly in biology and economics. Among others, there are two main approaches to the study of differential game problems. One approach is Bellman's dynamic programming, which relates the saddle points or Nash equilibrium points to some partial differential equations (PDEs) known as the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equations (see Elliott [1], Fleming and Souganidis [2], Buckdahn et al. [3], Mataramvura and Oksendal [4]). The other approach is Pontryagin's maximum principle, which characterizes solutions to the differential game via a Hamiltonian function and adjoint processes (see Tang and Li [5], An and Oksendal [6]). Hence a natural question arises: are there any relations between these two methods? For stochastic control problems, this topic has been discussed by many authors (see Bensoussan [7], Zhou [8], Yong and Zhou [9], Framstad et al. [10], Shi and Wu [11], Donnelly [12], etc.). However, to the best of our knowledge, the relationship between the maximum principle and dynamic programming for stochastic differential games is quite lacking in the literature.

In this paper, we consider one kind of zero-sum stochastic differential game problem within the framework of Mataramvura and Oksendal [4] and An and Oksendal [6]; however, we do not consider jumps. This more general case will appear in our forthcoming paper. For the problem considered here, [4] related its saddle point to an HJBI equation and obtained a stochastic verification theorem, while [6] proved both sufficient and necessary maximum principles, which state conditions of optimality via a Hamiltonian function and an adjoint equation. The main contribution of this paper is that we connect the maximum principle of [6] with the dynamic programming of [4], and obtain relations among the adjoint processes, the generalized Hamiltonian function and the value function, under the assumption that the value function is smooth enough. As an application, we discuss a portfolio optimization problem under model uncertainty in the financial market. In this problem, the optimal portfolio strategy for the trader (representative agent) and the "worst case scenario" (see Peskir and Shorish [13], Korn and Menkens [14]) for the market, derived independently from the maximum principle and the dynamic programming approaches, coincide. The relation obtained in our main result is illustrated.

The rest of this paper is organized as follows. In Section 2, we state our zero-sum stochastic differential game problem. Under suitable assumptions, we reformulate the sufficient maximum principle of [6] in terms of the adjoint equation
and the Hamiltonian function, and the stochastic verification theorem of [4] in terms of the HJBI equation. In Section 3, we prove the relationship between the maximum principle and dynamic programming for our zero-sum stochastic differential game problem, under the assumption that the value function is smooth enough. A portfolio optimization problem under model uncertainty in the financial market is discussed in Section 4, to show the applications of our result.

Notations: Throughout this paper, we denote by $\mathbb{R}^n$ the $n$-dimensional Euclidean space, by $\mathbb{R}^{n\times d}$ the space of $n\times d$ matrices, and by $\mathcal{S}^n$ the space of $n\times n$ symmetric matrices. $\langle\cdot,\cdot\rangle$ and $|\cdot|$ denote the scalar product and the norm in Euclidean space, respectively. The superscript $\top$ denotes the transpose of a matrix.
2. Problem Statement and Preliminaries

Let $T>0$ be given, and suppose that the dynamics of a stochastic system are described by a stochastic differential equation (SDE) on a complete probability space $(\Omega,\mathcal{F},P)$ of the form

$$ \begin{cases} \mathrm{d}X^{t,x;u}(s) = b\big(s,X^{t,x;u}(s),u(s)\big)\,\mathrm{d}s + \sigma\big(s,X^{t,x;u}(s),u(s)\big)\,\mathrm{d}W(s), \quad s\in[t,T],\\ X^{t,x;u}(t) = x, \end{cases} \tag{1} $$

with initial time $t\in[0,T)$ and initial state $x\in\mathbb{R}^n$. Here $W(t),\ 0\le t\le T$, is a $d$-dimensional standard Brownian motion. For given $t\in[0,T)$, we suppose that the filtration $\{\mathcal{F}^t_s\}_{t\le s\le T}$ is generated as

$$ \mathcal{F}^t_s = \sigma\{W(r)-W(t);\ t\le r\le s\} \vee \mathcal{N}, $$

where $\mathcal{N}$ contains all $P$-null sets in $\mathcal{F}$ and $\sigma_1\vee\sigma_2$ denotes the $\sigma$-field generated by $\sigma_1\cup\sigma_2$. In particular, if the initial time $t=0$, we write $\mathcal{F}_s \equiv \mathcal{F}^0_s$. In the above,

$$ b: [0,T]\times\mathbb{R}^n\times U \to \mathbb{R}^n, \qquad \sigma: [0,T]\times\mathbb{R}^n\times U \to \mathbb{R}^{n\times d} $$

are given continuous functions, where $U\subseteq\mathbb{R}^k$ is nonempty and convex. The $U$-valued process $u(t)=u(t,\omega)$ is the control process. For any $t\in[0,T)$, we denote by $\mathcal{U}[t,T]$ the set of $U$-valued, $\mathcal{F}^t_s$-adapted processes. For given $u(\cdot)\in\mathcal{U}[t,T]$ and $x\in\mathbb{R}^n$, an $\mathbb{R}^n$-valued process $X^{t,x;u}(\cdot)$ is called a solution to (1) if it is an $\mathcal{F}^t_s$-adapted process such that (1) holds. We refer to such $u(\cdot)\in\mathcal{U}[t,T]$ as an admissible control and to $(X^{t,x;u}(\cdot),u(\cdot))$ as an admissible pair. We make the following assumption.

(H1) There exists a constant $C>0$ such that for all $s\in[0,T]$, $x,\hat{x}\in\mathbb{R}^n$, $u,\hat{u}\in U$, we have

$$ \big|b(s,x,u)-b(s,\hat{x},\hat{u})\big| + \big|\sigma(s,x,u)-\sigma(s,\hat{x},\hat{u})\big| \le C\big(|x-\hat{x}|+|u-\hat{u}|\big), $$
$$ \big|b(s,x,u)\big| + \big|\sigma(s,x,u)\big| \le C\big(1+|x|\big). $$

For any $u(\cdot)\in\mathcal{U}[t,T]$, under (H1) it is obvious that SDE (1) admits a unique solution $X^{t,x;u}(\cdot)$. Let $f:[0,T]\times\mathbb{R}^n\times U\to\mathbb{R}$ and $g:\mathbb{R}^n\to\mathbb{R}$ be continuous functions. For any $(t,x)\in[0,T]\times\mathbb{R}^n$ and admissible control $u(\cdot)\in\mathcal{U}[t,T]$, we define the following performance functional:

$$ J\big(t,x;u(\cdot)\big) = E\bigg[\int_t^T f\big(s,X^{t,x;u}(s),u(s)\big)\,\mathrm{d}s + g\big(X^{t,x;u}(T)\big)\bigg]. \tag{2} $$

Now suppose that the control process has the form $u(\cdot)=(\theta(\cdot),\pi(\cdot))$, where $\theta$ and $\pi$ take values in two sets $K_1$ and $K_2$, respectively. We let $\Theta[t,T]$ and $\Pi[t,T]$ be given families of admissible controls $\theta=\{\theta(s)\}_{t\le s\le T}$ and $\pi=\{\pi(s)\}_{t\le s\le T}$, respectively. The zero-sum stochastic differential game problem is to find $(\theta^*,\pi^*)\in\Theta[t,T]\times\Pi[t,T]$ such that

$$ V(t,x) \equiv J(t,x;\theta^*,\pi^*) = \sup_{\pi\in\Pi[t,T]}\ \inf_{\theta\in\Theta[t,T]} J(t,x;\theta,\pi) \tag{3} $$
$$ = \inf_{\theta\in\Theta[t,T]}\ \sup_{\pi\in\Pi[t,T]} J(t,x;\theta,\pi), \tag{4} $$

for given $(t,x)\in[0,T]\times\mathbb{R}^n$, with $V(T,x)=g(x)$, $x\in\mathbb{R}^n$.

Such a control process (pair) $(\theta^*,\pi^*)$ is called an optimal control or a saddle point of our zero-sum stochastic differential game problem (if it exists), and the corresponding solution $X^*(\cdot)\equiv X^{t,x;u^*}(\cdot)$ of (1) is called the optimal state. The intuitive idea is that there are two players, I and II. Player I controls $\theta$ and Player II controls $\pi$. The actions of the two players are antagonistic: the payoff $J(t,x;\theta,\pi)$ is a cost for Player I and a reward for Player II.

We now define the Hamiltonian function $H:[0,T]\times\mathbb{R}^n\times K_1\times K_2\times\mathbb{R}^n\times\mathbb{R}^{n\times d}\to\mathbb{R}$ by

$$ H(s,x,\theta,\pi,p,q) := \big\langle b(s,x,\theta,\pi),p\big\rangle + \mathrm{tr}\big(\sigma^\top(s,x,\theta,\pi)\,q\big) + f(s,x,\theta,\pi). \tag{5} $$
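As a computational aside (not part of the paper's development), the state Equation (1) and the performance functional (2) can be approximated by an Euler-Maruyama scheme combined with Monte Carlo averaging. The following is a minimal sketch in the one-dimensional case $n=d=1$; the coefficients and the feedback control used in the example are illustrative placeholders, not data from the paper.

```python
# Minimal sketch: Euler-Maruyama simulation of SDE (1) and Monte Carlo
# estimation of the performance functional (2), for n = d = 1.
import numpy as np

def simulate_J(b, sigma, f, g, u, t0=0.0, x0=1.0, T=1.0,
               n_steps=200, n_paths=10_000, seed=0):
    """Estimate J(t0, x0; u) = E[ int_{t0}^T f(s, X, u) ds + g(X(T)) ]."""
    rng = np.random.default_rng(seed)
    dt = (T - t0) / n_steps
    X = np.full(n_paths, float(x0))
    J = np.zeros(n_paths)
    s = t0
    for _ in range(n_steps):
        us = u(s, X)
        J += f(s, X, us) * dt                               # running cost
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = X + b(s, X, us) * dt + sigma(s, X, us) * dW     # Euler-Maruyama step
        s += dt
    return (J + g(X)).mean()

# Illustrative linear coefficients and a constant control (placeholders):
est = simulate_J(b=lambda s, x, u: u * x,
                 sigma=lambda s, x, u: 0.2 * x,
                 f=lambda s, x, u: -u**2,
                 g=lambda x: x,
                 u=lambda s, x: 0.1)
print(est)
```

For the game problem (3)-(4), such a routine would be called with $u=(\theta,\pi)$ and the two components optimized against each other; the sketch only shows the forward evaluation of $J$.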
In addition, we need the following assumption.

(H2) $b,\sigma,f$ are continuously differentiable in $(x,\theta,\pi)$ and $g$ is continuously differentiable in $x$. Moreover, $b_x,\sigma_x$ are bounded, and there exists a constant $C>0$ such that for all $s\in[0,T]$, $(\theta,\pi)\in K_1\times K_2$, we have

$$ \big|f_x(s,x,\theta,\pi)\big| + \big|g_x(x)\big| \le C\big(1+|x|\big), \quad \forall x\in\mathbb{R}^n. $$

The adjoint equation in the unknown $\mathcal{F}_s$-adapted processes $(p(\cdot),q(\cdot))$, valued in $\mathbb{R}^n\times\mathbb{R}^{n\times d}$, is the backward stochastic differential equation (BSDE) of the form

$$ \begin{cases} \mathrm{d}p(s) = -H_x\big(s,X^{0,x;\theta,\pi}(s),\theta(s),\pi(s),p(s),q(s)\big)\,\mathrm{d}s + q(s)\,\mathrm{d}W(s),\\ p(T) = g_x\big(X^{0,x;\theta,\pi}(T)\big). \end{cases} \tag{6} $$

For any $(\theta,\pi)\in\Theta[0,T]\times\Pi[0,T]$, under (H2) we know that BSDE (6) admits a unique $\mathcal{F}_s$-adapted solution $(p(\cdot),q(\cdot))$. We can now state the following sufficient maximum principle, which is Corollary 2.1 in An and Oksendal [6].

Lemma 2.1 Let (H1), (H2) hold and let $x\in\mathbb{R}^n$ be fixed. Suppose that $(\hat\theta,\hat\pi)\in\Theta[0,T]\times\Pi[0,T]$, with corresponding state process $\hat{X}^{0,x}(\cdot)\equiv X^{0,x;\hat\theta,\hat\pi}(\cdot)$. Let $X^{0,x;\pi}(\cdot)$ and $X^{0,x;\theta}(\cdot)$ denote $X^{0,x;\hat\theta,\pi}(\cdot)$ and $X^{0,x;\theta,\hat\pi}(\cdot)$, respectively. Suppose that there exists a solution $(\hat p(\cdot),\hat q(\cdot))$ to the corresponding adjoint Equation (6). Moreover, suppose that for all $s\in[0,T]$ the following minimum/maximum conditions hold:

$$ E\Big[H\big(s,\hat{X}^{0,x}(s),\hat\theta(s),\hat\pi(s),\hat p(s),\hat q(s)\big)\Big] = \sup_{\pi\in K_2} E\Big[H\big(s,X^{0,x;\pi}(s),\hat\theta(s),\pi,\hat p(s),\hat q(s)\big)\Big] \tag{7} $$
$$ = \inf_{\theta\in K_1} E\Big[H\big(s,X^{0,x;\theta}(s),\theta,\hat\pi(s),\hat p(s),\hat q(s)\big)\Big]. \tag{8} $$

1) Suppose that for all $s\in[0,T]$, $g(x)$ is concave and $(x,\pi)\mapsto H(s,x,\hat\theta(s),\pi,\hat p(s),\hat q(s))$ is concave. Then $J(0,x;\hat\theta,\hat\pi)\ge J(0,x;\hat\theta,\pi)$ for all $\pi$, and
$$ J(0,x;\hat\theta,\hat\pi) = \sup_\pi J(0,x;\hat\theta,\pi). $$

2) Suppose that for all $s\in[0,T]$, $g(x)$ is convex and $(x,\theta)\mapsto H(s,x,\theta,\hat\pi(s),\hat p(s),\hat q(s))$ is convex. Then $J(0,x;\hat\theta,\hat\pi)\le J(0,x;\theta,\hat\pi)$ for all $\theta$, and
$$ J(0,x;\hat\theta,\hat\pi) = \inf_\theta J(0,x;\theta,\hat\pi). $$

3) If both Cases 1) and 2) hold (which implies, in particular, that $g$ is an affine function), then $(\theta^*,\pi^*)=(\hat\theta,\hat\pi)$ is an optimal control (saddle point) and

$$ V(x) = \sup_\pi \inf_\theta J(0,x;\theta,\pi) = \inf_\theta \sup_\pi J(0,x;\theta,\pi) = J(0,x;\hat\theta,\hat\pi). $$

Next, when the control process $u=(\theta,\pi)$ is Markovian, we can define the generator $\mathcal{A}^{\theta,\pi}$ of the diffusion system (1) by

$$ \mathcal{A}^{\theta,\pi}\varphi(t,x) = \frac{\partial\varphi}{\partial t}(t,x) + \Big\langle b\big(t,x,\theta(x),\pi(x)\big),\frac{\partial\varphi}{\partial x}(t,x)\Big\rangle + \frac{1}{2}\mathrm{tr}\Big[\sigma\sigma^\top\big(t,x,\theta(x),\pi(x)\big)\frac{\partial^2\varphi}{\partial x^2}(t,x)\Big], \tag{9} $$

where $\varphi(\cdot,\cdot)\in C^{1,2}([0,T]\times\mathbb{R}^n;\mathbb{R})$. The following result is a stochastic verification theorem of optimality, which is an immediate corollary of Theorem 3.2 in Mataramvura and Oksendal [4].

Lemma 2.2 Let (H1), (H2) hold and let $(t,x)\in[0,T]\times\mathbb{R}^n$ be fixed. Suppose that there exist a function $V(\cdot,\cdot)\in C^{1,2}([0,T]\times\mathbb{R}^n;\mathbb{R})$ and a Markovian control process $(\hat\theta(x),\hat\pi(x))$ such that

$$ \begin{cases} 1)\ \mathcal{A}^{\theta,\hat\pi}V(t,x) + f\big(t,x,\theta,\hat\pi(x)\big) \ge 0, \quad \forall\theta\in K_1,\ x\in\mathbb{R}^n,\\[0.5ex] 2)\ \mathcal{A}^{\hat\theta,\pi}V(t,x) + f\big(t,x,\hat\theta(x),\pi\big) \le 0, \quad \forall\pi\in K_2,\ x\in\mathbb{R}^n,\\[0.5ex] 3)\ \mathcal{A}^{\hat\theta,\hat\pi}V(t,x) + f\big(t,x,\hat\theta(x),\hat\pi(x)\big) = 0, \quad x\in\mathbb{R}^n, \end{cases} \tag{10} $$

4) for all $(\theta,\pi)$, $\displaystyle\lim_{t\to T^-} V\big(t,X^{0,x;\theta,\pi}(t)\big) = g\big(X^{0,x;\theta,\pi}(T)\big)$,

5) the family $\big\{V\big(\tau,X^{0,x;\theta,\pi}(\tau)\big)\big\}_{\tau\in\mathcal{K}}$ is uniformly integrable for all $x\in\mathbb{R}^n$ and $(\theta,\pi)\in\Theta[0,T]\times\Pi[0,T]$, where $\mathcal{K}$ is the set of stopping times $\tau\le T$.
Then

$$ V(x) = \sup_\pi \inf_\theta J(0,x;\theta,\pi) = \inf_\theta \sup_\pi J(0,x;\theta,\pi) = \sup_\pi J(0,x;\hat\theta,\pi) = \inf_\theta J(0,x;\theta,\hat\pi) = J(0,x;\hat\theta,\hat\pi), \tag{11} $$

and $(\theta^*,\pi^*)=(\hat\theta,\hat\pi)$ is an optimal Markovian control.
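A compact way to read conditions 1)-3) of Lemma 2.2 (our paraphrase, not a statement from [4]) is that the candidate pair $(\hat\theta,\hat\pi)$ is a pointwise saddle point, with value zero, of the map $(\theta,\pi)\mapsto\mathcal{A}^{\theta,\pi}V(t,x)+f(t,x,\theta,\pi)$:

$$ \min_{\theta\in K_1}\Big[\mathcal{A}^{\theta,\hat\pi}V(t,x) + f\big(t,x,\theta,\hat\pi(x)\big)\Big] = \max_{\pi\in K_2}\Big[\mathcal{A}^{\hat\theta,\pi}V(t,x) + f\big(t,x,\hat\theta(x),\pi\big)\Big] = 0, $$

which is exactly the HJBI equation associated with the game (3)-(4).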
3. Main Result

In this section, we investigate the relationship between the maximum principle and dynamic programming for our zero-sum stochastic differential game problem. The main contribution is that we find the connection between the value function $V$, the adjoint processes $(p,q)$ and the following generalized Hamiltonian function $G:[0,T]\times\mathbb{R}^n\times K_1\times K_2\times\mathbb{R}^n\times\mathcal{S}^n\to\mathbb{R}$, defined by

$$ G(t,x,\theta,\pi,p,A) := \big\langle b(t,x,\theta,\pi),p\big\rangle + \frac{1}{2}\mathrm{tr}\big[\sigma\sigma^\top(t,x,\theta,\pi)\,A\big] + f(t,x,\theta,\pi). \tag{12} $$

Our main result is the following.

Theorem 3.1 Let (H1), (H2) hold and let $(t,x)\in[0,T]\times\mathbb{R}^n$ be fixed. Suppose that $(\theta^*,\pi^*)$ is an optimal Markovian control and $X^*(\cdot)$ is the corresponding optimal state. Suppose that the value function $V(\cdot,\cdot)\in C^{1,2}([0,T]\times\mathbb{R}^n;\mathbb{R})$. Then

$$ -\frac{\partial V}{\partial s}\big(s,X^{t,x;\theta,\pi^*}(s)\big) \le G\Big(s,X^{t,x;\theta,\pi^*}(s),\theta(s),\pi^*(s),\frac{\partial V}{\partial x}\big(s,X^{t,x;\theta,\pi^*}(s)\big),\frac{\partial^2 V}{\partial x^2}\big(s,X^{t,x;\theta,\pi^*}(s)\big)\Big), \quad \forall\theta,\ s\in[t,T],\ P\text{-a.s.}, \tag{13} $$

$$ -\frac{\partial V}{\partial s}\big(s,X^{t,x;\theta^*,\pi}(s)\big) \ge G\Big(s,X^{t,x;\theta^*,\pi}(s),\theta^*(s),\pi(s),\frac{\partial V}{\partial x}\big(s,X^{t,x;\theta^*,\pi}(s)\big),\frac{\partial^2 V}{\partial x^2}\big(s,X^{t,x;\theta^*,\pi}(s)\big)\Big), \quad \forall\pi,\ s\in[t,T],\ P\text{-a.s.}, \tag{14} $$

and

$$ -\frac{\partial V}{\partial s}\big(s,X^*(s)\big) = G\Big(s,X^*(s),\theta^*(s),\pi^*(s),\frac{\partial V}{\partial x}\big(s,X^*(s)\big),\frac{\partial^2 V}{\partial x^2}\big(s,X^*(s)\big)\Big), \quad s\in[t,T],\ P\text{-a.s.} \tag{15} $$

Further, suppose that $V(\cdot,\cdot)\in C^{1,3}([0,T]\times\mathbb{R}^n;\mathbb{R})$ and $V_{sx}$ is also continuous. For any $s\in[t,T]$, define

$$ p(s) = \frac{\partial V}{\partial x}\big(s,X^{t,x;\theta^*,\pi^*}(s)\big), \qquad q(s) = \frac{\partial^2 V}{\partial x^2}\big(s,X^{t,x;\theta^*,\pi^*}(s)\big)\,\sigma\big(s,X^{t,x;\theta^*,\pi^*}(s),\theta^*(s),\pi^*(s)\big); \tag{16} $$

then $(p(\cdot),q(\cdot))$ solves the adjoint Equation (6).

Proof. Relations (13)-(15) can be obtained from the HJBI Equation (10), by the definitions of the generator $\mathcal{A}^{\theta,\pi}$ in (9) and the generalized Hamiltonian function $G$ in (12). We proceed to prove the second part. If $V(\cdot,\cdot)\in C^{1,3}([0,T]\times\mathbb{R}^n;\mathbb{R})$ and $V_{sx}$ is also continuous, then from (15) we have

$$ \frac{\partial}{\partial x}\bigg[\frac{\partial V}{\partial s}(s,x) + G\Big(s,x,\theta^*(s),\pi^*(s),\frac{\partial V}{\partial x}(s,x),\frac{\partial^2 V}{\partial x^2}(s,x)\Big)\bigg]\bigg|_{x=X^*(s)} = 0. $$
This is equivalent to

$$ \begin{aligned} &\frac{\partial^2 V}{\partial s\partial x}\big(s,X^*(s)\big) + \frac{\partial^2 V}{\partial x^2}\big(s,X^*(s)\big)\,b\big(s,X^*(s),\theta^*(s),\pi^*(s)\big) + b_x^\top\big(s,X^*(s),\theta^*(s),\pi^*(s)\big)\frac{\partial V}{\partial x}\big(s,X^*(s)\big)\\ &\quad + \frac{1}{2}\mathrm{tr}\Big[\sigma\sigma^\top\big(s,X^*(s),\theta^*(s),\pi^*(s)\big)\frac{\partial^3 V}{\partial x^3}\big(s,X^*(s)\big)\Big]\\ &\quad + \mathrm{tr}\Big[\sigma_x^\top\big(s,X^*(s),\theta^*(s),\pi^*(s)\big)\frac{\partial^2 V}{\partial x^2}\big(s,X^*(s)\big)\,\sigma\big(s,X^*(s),\theta^*(s),\pi^*(s)\big)\Big] + f_x\big(s,X^*(s),\theta^*(s),\pi^*(s)\big) = 0, \end{aligned} $$

where

$$ \mathrm{tr}\Big[\sigma\sigma^\top\frac{\partial^3 V}{\partial x^3}\Big] := \Big(\mathrm{tr}\Big[\sigma\sigma^\top\frac{\partial^2 V_{x_1}}{\partial x^2}\Big],\cdots,\mathrm{tr}\Big[\sigma\sigma^\top\frac{\partial^2 V_{x_n}}{\partial x^2}\Big]\Big)^\top, \quad \text{with } \frac{\partial V}{\partial x} = \big(V_{x_1},\cdots,V_{x_n}\big)^\top. $$

On the other hand, applying Ito's formula to $\frac{\partial V}{\partial x}(s,X^*(s))$, we get

$$ \begin{aligned} \mathrm{d}\frac{\partial V}{\partial x}\big(s,X^*(s)\big) &= \bigg\{\frac{\partial^2 V}{\partial s\partial x}\big(s,X^*(s)\big) + \frac{\partial^2 V}{\partial x^2}\big(s,X^*(s)\big)\,b\big(s,X^*(s),\theta^*(s),\pi^*(s)\big)\\ &\qquad + \frac{1}{2}\mathrm{tr}\Big[\sigma\sigma^\top\big(s,X^*(s),\theta^*(s),\pi^*(s)\big)\frac{\partial^3 V}{\partial x^3}\big(s,X^*(s)\big)\Big]\bigg\}\,\mathrm{d}s + \frac{\partial^2 V}{\partial x^2}\big(s,X^*(s)\big)\,\sigma\big(s,X^*(s),\theta^*(s),\pi^*(s)\big)\,\mathrm{d}W(s)\\ &= -\bigg\{b_x^\top\big(s,X^*(s),\theta^*(s),\pi^*(s)\big)\frac{\partial V}{\partial x}\big(s,X^*(s)\big) + \mathrm{tr}\Big[\sigma_x^\top\frac{\partial^2 V}{\partial x^2}\,\sigma\Big]\big(s,X^*(s),\theta^*(s),\pi^*(s)\big) + f_x\big(s,X^*(s),\theta^*(s),\pi^*(s)\big)\bigg\}\,\mathrm{d}s\\ &\quad + \frac{\partial^2 V}{\partial x^2}\big(s,X^*(s)\big)\,\sigma\big(s,X^*(s),\theta^*(s),\pi^*(s)\big)\,\mathrm{d}W(s), \end{aligned} $$

where the second equality uses the identity displayed above. Note that

$$ \frac{\partial V}{\partial x}\big(T,X^*(T)\big) = g_x\big(X^*(T)\big). $$

Hence, comparing with (6) and recalling the definition of $H$ in (5), by the uniqueness of the solutions to (6) we obtain (16). The proof is complete. □
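The cancellation used in the proof can be checked symbolically. The following sketch (ours, for the scalar case $n=d=1$, with the controls already substituted into the coefficients, which are assumed smooth) verifies that the Ito drift of $p(s)=V_x(s,X^*(s))$ differs from the adjoint drift $-H_x$ exactly by the $x$-derivative of the HJBI identity, which vanishes.

```python
# Symbolic check (1-d, illustrative) of the identity behind Theorem 3.1:
# d/dx of [V_s + b V_x + (1/2) sigma^2 V_xx + f] = 0 implies that the Ito
# drift of V_x(s, X*(s)) equals the adjoint drift -H_x with q = V_xx * sigma.
import sympy as sp

s, x = sp.symbols('s x')
V = sp.Function('V')(s, x)        # value function, assumed C^{1,3}
b = sp.Function('b')(s, x)        # drift along the optimal controls
sig = sp.Function('sigma')(s, x)  # diffusion along the optimal controls
f = sp.Function('f')(s, x)        # running cost along the optimal controls

# HJBI identity (15) in the scalar case:
hjbi = sp.diff(V, s) + b*sp.diff(V, x) + sp.Rational(1, 2)*sig**2*sp.diff(V, x, 2) + f

# Ito drift of p(s) = V_x(s, X*(s)):
ito_drift_p = sp.diff(V, x, s) + b*sp.diff(V, x, 2) + sp.Rational(1, 2)*sig**2*sp.diff(V, x, 3)

# Adjoint drift -H_x evaluated at p = V_x, q = V_xx * sigma (relation (16)):
p, q = sp.diff(V, x), sp.diff(V, x, 2)*sig
adjoint_drift = -(sp.diff(b, x)*p + sp.diff(sig, x)*q + sp.diff(f, x))

# The two drifts agree modulo d/dx(HJBI), which is zero along X*:
assert sp.simplify(ito_drift_p - adjoint_drift - sp.diff(hjbi, x)) == 0
print("Ito drift of V_x equals the adjoint drift, as in (16)")
```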
4. Applications

In this section, we discuss a portfolio optimization problem under model uncertainty in the financial market, where the problem is put into the framework of a zero-sum stochastic differential game. The optimal portfolio strategy for the investor and the "worst case scenario" for the market, derived independently from the maximum principle and the dynamic programming approaches, coincide. The relation obtained in our main result, Theorem 3.1, is illustrated.

Suppose that investors have two kinds of securities in the market for possible investment choice:

1) a risk-free security (e.g. a bond), whose price $S_0(t)$ at time $t$ is given by

$$ \mathrm{d}S_0(t) = \rho(t)S_0(t)\,\mathrm{d}t, \quad S_0(0) > 0, $$

where $\rho(t)$ is a deterministic function;

2) a risky security (e.g. a stock), whose price $S_1(t)$ at time $t$ is given by

$$ \mathrm{d}S_1(t) = \mu(t)S_1(t)\,\mathrm{d}t + \sigma(t)S_1(t)\,\mathrm{d}W(t), \quad S_1(0) > 0, $$

where $W(\cdot)$ is a one-dimensional Brownian motion and $\mu(t)$, $\sigma(t)\ne 0$ are deterministic functions.

Let $\pi(t)$ be a portfolio for the investor in the market, namely the proportion of the wealth invested in the risky security at time $t$. Given the initial wealth $Y^\pi(0)=y>0$, we assume that $\pi(\cdot)$ is self-financing, which means that the corresponding wealth process $Y^\pi(\cdot)$ admits the following dynamics:

$$ \begin{cases} \mathrm{d}Y^\pi(t) = Y^\pi(t)\big[\rho(t)+(\mu(t)-\rho(t))\pi(t)\big]\,\mathrm{d}t + Y^\pi(t)\sigma(t)\pi(t)\,\mathrm{d}W(t),\\ Y^\pi(0) = y. \end{cases} \tag{17} $$

A portfolio $\pi$ is admissible if it is an $\mathcal{F}_t$-adapted process and satisfies

$$ \int_0^T \Big[\big|\rho(t)+(\mu(t)-\rho(t))\pi(t)\big| + \sigma^2(t)\pi^2(t)\Big]\,\mathrm{d}t < \infty, \quad P\text{-a.s.} $$

The family of admissible portfolios is denoted by $\Pi$.
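For concreteness, the wealth Equation (17) is straightforward to simulate; the following minimal sketch (with illustrative constant parameters of our choosing, not values from the paper) produces one Euler-Maruyama path of $Y^\pi$ under a constant-proportion portfolio.

```python
# Minimal sketch: one Euler-Maruyama path of the wealth dynamics (17)
# under a constant-proportion portfolio pi. Parameters are illustrative.
import numpy as np

def simulate_wealth(pi, rho, mu, sigma, y0=1.0, T=1.0, n_steps=250, seed=0):
    """dY = Y [rho + (mu - rho) pi] dt + Y sigma pi dW, Y(0) = y0."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    Y = np.empty(n_steps + 1)
    Y[0] = y0
    for k in range(n_steps):
        t = k * dt
        dW = rng.normal(0.0, np.sqrt(dt))
        Y[k + 1] = Y[k] * (1 + (rho(t) + (mu(t) - rho(t)) * pi) * dt
                             + sigma(t) * pi * dW)
    return Y

path = simulate_wealth(pi=0.5, rho=lambda t: 0.02,
                       mu=lambda t: 0.07, sigma=lambda t: 0.2)
print(path[-1])
```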
Now we introduce a family $\mathcal{Q}$ of measures $Q_\theta$, parameterized by processes $\theta(t)$, such that

$$ \mathrm{d}Q_\theta = Z^\theta(T)\,\mathrm{d}P, \quad \text{on } \mathcal{F}_T, \tag{18} $$

where

$$ \mathrm{d}Z^\theta(t) = Z^\theta(t)\theta(t)\,\mathrm{d}W(t), \quad Z^\theta(0) = 1. \tag{19} $$

We assume that

$$ \int_0^T \theta^2(t)\,\mathrm{d}t < \infty, \quad P\text{-a.s.} \tag{20} $$

If $\theta(t)$ satisfies

$$ E\big[Z^\theta(T)\big] = 1, \tag{21} $$

then $Q_\theta$ is a probability measure. If, in addition,

$$ \rho(t) - \mu(t) = \sigma(t)\theta(t), \quad t\in[0,T], \tag{22} $$

then $\mathrm{d}Q_\theta = Z^\theta(T)\,\mathrm{d}P$ is an equivalent local martingale measure. But here we do not assume that (22) holds. All $\theta$ satisfying (20) and (21) are called admissible controls of the market. The family of admissible controls is denoted by $\Theta$.

The problem is to find $(\theta^*,\pi^*)\in\Theta\times\Pi$ such that

$$ \inf_{\theta\in\Theta}\ \sup_{\pi\in\Pi} E_{Q_\theta}\big[U\big(Y^\pi(T)\big)\big] = E_{Q_{\theta^*}}\big[U\big(Y^{\pi^*}(T)\big)\big], \tag{23} $$

where $U:[0,\infty)\to\mathbb{R}$ is a given utility function, which is increasing, concave and twice continuously differentiable on $(0,\infty)$. We can consider this problem as a zero-sum stochastic differential game between the agent and the market: the agent wants to maximize his/her expected utility over all portfolios $\pi$, and the market wants to minimize the maximal expected utility of the agent over all "scenarios", represented by the probability measures $Q_\theta$, $\theta\in\Theta$.

To put the problem into a Markovian framework so that we can apply dynamic programming, define

$$ V(t,x_1,x_2) = \inf_{\theta}\ \sup_{\pi} E_{Q_\theta}\big[U\big(Y^\pi(T)\big)\big] = E_{Q_{\theta^*}}\big[U\big(Y^{\pi^*}(T)\big)\big], \tag{24} $$

where $t\in[0,T]$ denotes the initial time of the investment, and $x_1=X_1(t)$, $x_2=X_2(t)$ are the initial values of the process $X(s):=X^{\theta,\pi}(s):=(X_1(s),X_2(s))^\top\in\mathbb{R}^2$ given by

$$ \mathrm{d}X(s) = \begin{pmatrix}\mathrm{d}X_1(s)\\ \mathrm{d}X_2(s)\end{pmatrix} = \begin{pmatrix}\mathrm{d}Z^\theta(s)\\ \mathrm{d}Y^\pi(s)\end{pmatrix} = \begin{pmatrix}0\\ Y^\pi(s)\big[\rho(s)+(\mu(s)-\rho(s))\pi(s)\big]\end{pmatrix}\mathrm{d}s + \begin{pmatrix}Z^\theta(s)\theta(s)\\ Y^\pi(s)\sigma(s)\pi(s)\end{pmatrix}\mathrm{d}W(s), \quad s\in[t,T], \tag{25} $$

which is a 2-dimensional process combining the Radon-Nikodym process $Z^\theta(\cdot)$ with the wealth process $Y^\pi(\cdot)$. Note that $E_{Q_\theta}[U(Y^\pi(T))] = E[Z^\theta(T)U(Y^\pi(T))]$.
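The identity $E_{Q_\theta}[\,\cdot\,]=E[Z^\theta(T)\,\cdot\,]$ also suggests a direct way to evaluate the game payoff numerically: simulate $(Z^\theta,Y^\pi)$ jointly under $P$ and weight the terminal utility by $Z^\theta(T)$. The following sketch (ours; illustrative constant parameters and logarithmic utility, not choices made in the paper) uses the exact exponential one-step solutions of the linear SDEs (19) and (17), which avoids negative values of $Z^\theta$ and $Y^\pi$ in discrete time.

```python
# Minimal sketch: Monte Carlo evaluation of E_{Q_theta}[U(Y^pi(T))] via
# E[Z^theta(T) U(Y^pi(T))], simulating (Z, Y) jointly under P.
import numpy as np

def game_payoff(theta, pi, rho, mu, sigma, U, y0=1.0, T=1.0,
                n_steps=250, n_paths=20_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    Z = np.ones(n_paths)            # Radon-Nikodym process, dZ = Z theta dW
    Y = np.full(n_paths, y0)        # wealth process (17)
    for k in range(n_steps):
        t = k * dt
        th, p = theta(t), pi(t)
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        # exact one-step solutions of the linear SDEs (deterministic coefficients):
        Z = Z * np.exp(th * dW - 0.5 * th**2 * dt)
        drift = rho(t) + (mu(t) - rho(t)) * p - 0.5 * (sigma(t) * p)**2
        Y = Y * np.exp(drift * dt + sigma(t) * p * dW)
    return (Z * U(Y)).mean()

# Under the worst-case scenario theta(t) = (rho - mu)/sigma derived below,
# no portfolio should beat pi = 0 (up to Monte Carlo error):
rho, mu, sig = (lambda t: 0.02), (lambda t: 0.07), (lambda t: 0.2)
theta_hat = lambda t: (rho(t) - mu(t)) / sig(t)
for pi_const in (0.0, 0.5, 1.0):
    print(pi_const, game_payoff(theta_hat, lambda t, p=pi_const: p,
                                rho, mu, sig, np.log))
```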
4.1. Maximum Principle Approach

To solve our problem by the maximum principle approach, that is, applying Lemma 2.1, we write down the Hamiltonian function (5) as

$$ H(s,x_1,x_2,\theta,\pi,p_1,p_2,q_1,q_2) = x_2\big[\rho(s)+(\mu(s)-\rho(s))\pi\big]p_2 + x_1\theta q_1 + x_2\sigma(s)\pi q_2. \tag{26} $$

The adjoint Equations (6) are, with $g(x_1,x_2):=x_1U(x_2)$,

$$ \begin{cases} \mathrm{d}p_1(s) = -\theta(s)q_1(s)\,\mathrm{d}s + q_1(s)\,\mathrm{d}W(s),\\ p_1(T) = \dfrac{\partial g}{\partial x_1}\big(X(T)\big) = U\big(X_2(T)\big), \end{cases} \tag{27} $$

and

$$ \begin{cases} \mathrm{d}p_2(s) = -\Big\{\big[\rho(s)+(\mu(s)-\rho(s))\pi(s)\big]p_2(s) + \sigma(s)\pi(s)q_2(s)\Big\}\,\mathrm{d}s + q_2(s)\,\mathrm{d}W(s),\\ p_2(T) = \dfrac{\partial g}{\partial x_2}\big(X(T)\big) = X_1(T)U'\big(X_2(T)\big). \end{cases} \tag{28} $$

Let $(\hat\theta,\hat\pi)$ be a candidate optimal control (saddle point), let $\hat{X}(\cdot)=(\hat{X}_1(\cdot),\hat{X}_2(\cdot))^\top$ be the corresponding state process, and let $\hat p(\cdot)=(\hat p_1(\cdot),\hat p_2(\cdot))^\top$, $\hat q(\cdot)=(\hat q_1(\cdot),\hat q_2(\cdot))^\top$ be the corresponding solution to the adjoint equations. By (7) in Lemma 2.1, we first maximize the Hamiltonian function $H$ over all $\pi$. This gives the following condition for a maximum point $\hat\pi$:

$$ \hat{X}_2(s)\big[(\mu(s)-\rho(s))\hat p_2(s) + \sigma(s)\hat q_2(s)\big] = 0. \tag{29} $$

Then we minimize $H$ over all $\theta$, and get the following condition for a minimum point $\hat\theta$:

$$ \hat{X}_1(s)\hat q_1(s) = 0. \tag{30} $$

We try a process $\hat p_1(\cdot)$ of the form

$$ \hat p_1(s) = U\big(f(s)\hat{X}_2(s)\big), \tag{31} $$

with a deterministic differentiable function $f$ satisfying $f(T)=1$. Differentiating (31) by Ito's formula and using (17), we get

$$ \begin{aligned} \mathrm{d}\hat p_1(s) &= \Big\{f'(s)\hat{X}_2(s)U'\big(f(s)\hat{X}_2(s)\big) + f(s)\hat{X}_2(s)\big[\rho(s)+(\mu(s)-\rho(s))\hat\pi(s)\big]U'\big(f(s)\hat{X}_2(s)\big)\\ &\qquad + \frac{1}{2}f^2(s)\hat{X}_2^2(s)\sigma^2(s)\hat\pi^2(s)U''\big(f(s)\hat{X}_2(s)\big)\Big\}\,\mathrm{d}s + f(s)\hat{X}_2(s)\sigma(s)\hat\pi(s)U'\big(f(s)\hat{X}_2(s)\big)\,\mathrm{d}W(s). \end{aligned} \tag{32} $$

Comparing this with the adjoint Equation (27) by equating the $\mathrm{d}s$ and $\mathrm{d}W(s)$ coefficients, respectively, we get

$$ \hat q_1(s) = f(s)\hat{X}_2(s)\sigma(s)\hat\pi(s)U'\big(f(s)\hat{X}_2(s)\big), \tag{33} $$

and

$$ f'(s)\hat{X}_2(s)U' + f(s)\hat{X}_2(s)\big[\rho(s)+(\mu(s)-\rho(s))\hat\pi(s)\big]U' + \frac{1}{2}f^2(s)\hat{X}_2^2(s)\sigma^2(s)\hat\pi^2(s)U'' = -\hat\theta(s)\hat q_1(s), \tag{34} $$

where $U'$ and $U''$ are evaluated at $f(s)\hat{X}_2(s)$. Substituting (33) into (30), we have

$$ \hat{X}_1(s)f(s)\hat{X}_2(s)\sigma(s)\hat\pi(s)U'\big(f(s)\hat{X}_2(s)\big) = 0, \tag{35} $$

or

$$ \hat\pi(s) = 0, \quad s\in[t,T]. \tag{36} $$

Now we try a process $\hat p_2(\cdot)$ of the form

$$ \hat p_2(s) = \hat{X}_1(s)f(s)U'\big(f(s)\hat{X}_2(s)\big). \tag{37} $$

Differentiating (37) by Ito's formula and using (17), (36), we get

$$ \begin{aligned} \mathrm{d}\hat p_2(s) &= \hat{X}_1(s)\Big\{f'(s)U'\big(f(s)\hat{X}_2(s)\big) + f(s)\hat{X}_2(s)\big[f'(s)+\rho(s)f(s)\big]U''\big(f(s)\hat{X}_2(s)\big)\Big\}\,\mathrm{d}s\\ &\quad + f(s)\hat\theta(s)\hat{X}_1(s)U'\big(f(s)\hat{X}_2(s)\big)\,\mathrm{d}W(s). \end{aligned} \tag{38} $$

Comparing this with the adjoint Equation (28), in which now $\hat\pi=0$, we have

$$ \hat q_2(s) = f(s)\hat{X}_1(s)\hat\theta(s)U'\big(f(s)\hat{X}_2(s)\big), \tag{39} $$

and

$$ f'(s)U'\big(f(s)\hat{X}_2(s)\big) + \rho(s)f(s)U'\big(f(s)\hat{X}_2(s)\big) + f(s)\hat{X}_2(s)\big[f'(s)+\rho(s)f(s)\big]U''\big(f(s)\hat{X}_2(s)\big) = 0, \tag{40} $$

i.e.,

$$ \big[f'(s)+\rho(s)f(s)\big]\Big[U'\big(f(s)\hat{X}_2(s)\big) + f(s)\hat{X}_2(s)U''\big(f(s)\hat{X}_2(s)\big)\Big] = 0. \tag{41} $$

This holds for a general utility $U$ if

$$ f'(s) + \rho(s)f(s) = 0, \tag{42} $$

i.e.,

$$ f(s) = \exp\Big(\int_s^T \rho(r)\,\mathrm{d}r\Big), \quad s\in[t,T]. \tag{43} $$

Substituting (39) into (29), we have

$$ \hat{X}_1(s)\hat{X}_2(s)f(s)U'\big(f(s)\hat{X}_2(s)\big)\big[\mu(s)-\rho(s)+\sigma(s)\hat\theta(s)\big] = 0, $$

or

$$ \sigma(s)\hat\theta(s) = \rho(s) - \mu(s), \quad s\in[t,T]. $$

Letting $t=0$, we have proved the following theorem.

Theorem 4.1 The optimal portfolio strategy $\hat\pi$ for the agent is

$$ \hat\pi(t) = 0, \quad t\in[0,T]. \tag{44} $$

The optimal "scenario", that is, the optimal probability measure for the market, is to choose $\hat\theta$ such that

$$ \hat\theta(t) = \sigma^{-1}(t)\big(\rho(t)-\mu(t)\big), \quad t\in[0,T]. \tag{45} $$

That is, the market minimizes the maximal expected utility of the agent by choosing a scenario (represented by a probability law $\mathrm{d}Q_{\hat\theta}=Z^{\hat\theta}(T)\,\mathrm{d}P$) which is an equivalent martingale measure for the market (see (22)). In this case, the optimal portfolio strategy for the agent is to place all the money in the risk-free security, i.e., to choose $\hat\pi(t)=0$ for all $t$. This result is the counterpart of Theorem 4.1 in An and Oksendal [6] without jumps and with complete information.
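As an illustration (this specific choice of utility is ours, not one fixed in the paper), take logarithmic utility $U(x)=\ln x$, so $U'(y)=1/y$ and $U''(y)=-1/y^2$. Then the candidate solution above becomes fully explicit:

$$ \hat p_1(s) = \ln\big(f(s)\hat{X}_2(s)\big), \quad \hat q_1(s) = 0, \quad \hat p_2(s) = \frac{\hat{X}_1(s)}{\hat{X}_2(s)}, \quad \hat q_2(s) = \frac{\hat\theta(s)\hat{X}_1(s)}{\hat{X}_2(s)}, \quad f(s) = \exp\Big(\int_s^T \rho(r)\,\mathrm{d}r\Big). $$

Note that for this utility the second factor in (41) vanishes identically, since $U'(y)+yU''(y)=1/y-1/y=0$; the choice (42) is the one that works for every utility.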
4.2. Dynamic Programming Approach

To solve our problem by the dynamic programming approach, that is, applying Lemma 2.2, we write down the generator $\mathcal{A}^{\theta,\pi}$ of the diffusion system (25) as

$$ \begin{aligned} \mathcal{A}^{\theta,\pi}\varphi(t,x_1,x_2) &= \frac{\partial\varphi}{\partial t}(t,x_1,x_2) + x_2\big[\rho(t)+(\mu(t)-\rho(t))\pi\big]\frac{\partial\varphi}{\partial x_2}(t,x_1,x_2) + \frac{1}{2}x_1^2\theta^2\frac{\partial^2\varphi}{\partial x_1^2}(t,x_1,x_2)\\ &\quad + \frac{1}{2}x_2^2\sigma^2(t)\pi^2\frac{\partial^2\varphi}{\partial x_2^2}(t,x_1,x_2) + \theta\sigma(t)\pi x_1x_2\frac{\partial^2\varphi}{\partial x_1\partial x_2}(t,x_1,x_2), \end{aligned} \tag{46} $$

for $\varphi(\cdot,\cdot,\cdot)\in C^{1,2,2}([0,T]\times\mathbb{R}^2;\mathbb{R})$. Applied to our setting, the HJBI Equation (10) takes the following form:

$$ \begin{cases} \text{i)}\ \ \mathcal{A}^{\theta,\hat\pi}V(t,x_1,x_2) \ge 0, \quad \forall\theta,\ (x_1,x_2)\in\mathbb{R}^2,\\ \text{ii)}\ \mathcal{A}^{\hat\theta,\pi}V(t,x_1,x_2) \le 0, \quad \forall\pi,\ (x_1,x_2)\in\mathbb{R}^2,\\ \text{iii)}\ \mathcal{A}^{\hat\theta,\hat\pi}V(t,x_1,x_2) = 0, \quad (x_1,x_2)\in\mathbb{R}^2. \end{cases} \tag{47} $$

We try a $V$ of the form

$$ V(t,x_1,x_2) = x_1U\big(f(t)x_2\big), \tag{48} $$

for some deterministic function $f$ with $f(T)=1$. Note that conditions i), ii), iii) in (47) can be rewritten as

$$ \inf_\theta \mathcal{A}^{\theta,\hat\pi}V(t,x_1,x_2) = \mathcal{A}^{\hat\theta,\hat\pi}V(t,x_1,x_2) = 0, \tag{49} $$
$$ \sup_\pi \mathcal{A}^{\hat\theta,\pi}V(t,x_1,x_2) = \mathcal{A}^{\hat\theta,\hat\pi}V(t,x_1,x_2) = 0. \tag{50} $$

Maximizing $\mathcal{A}^{\hat\theta,\pi}V(t,x_1,x_2)$ over all $\pi$ gives the following first-order condition for a maximum point $\hat\pi$:

$$ 0 = \big[\mu(t)-\rho(t)+\hat\theta(t)\sigma(t)\big]U'\big(f(t)x_2\big) + x_2\hat\pi(t)\sigma^2(t)f(t)U''\big(f(t)x_2\big). \tag{51} $$

We then minimize $\mathcal{A}^{\theta,\hat\pi}V(t,x_1,x_2)$ over all $\theta$ and get the following first-order condition for a minimum point $\hat\theta$:

$$ \hat\pi(t)\sigma(t)x_1x_2f(t)U'\big(f(t)x_2\big) = 0. \tag{52} $$

From (52) we conclude that

$$ \hat\pi(t) = 0, \quad t\in[0,T], \tag{53} $$

which, substituted into (51), gives

$$ \hat\theta(t)\sigma(t) = \rho(t) - \mu(t), \quad t\in[0,T]. \tag{54} $$

And the HJBI Equation iii), $\mathcal{A}^{\hat\theta,\hat\pi}V(t,x_1,x_2)=0$, states that with these values of $\hat\pi$ and $\hat\theta$ we should have

$$ x_1x_2f'(t)U'\big(f(t)x_2\big) + x_1x_2\rho(t)f(t)U'\big(f(t)x_2\big) = 0, $$

or

$$ f'(t) + \rho(t)f(t) = 0, \tag{55} $$

i.e.,

$$ f(t) = \exp\Big(\int_t^T \rho(r)\,\mathrm{d}r\Big), \quad t\in[0,T]. \tag{56} $$

We have proved the following result.

Theorem 4.2 The optimal portfolio strategy $\hat\pi$ for the agent is

$$ \hat\pi(t) = 0, \quad t\in[0,T] \tag{57} $$

(i.e., to put all the wealth in the risk-free security), and the optimal "scenario" for the market is to choose $\hat\theta$ such that

$$ \hat\theta(t) = \sigma^{-1}(t)\big(\rho(t)-\mu(t)\big), \quad t\in[0,T] \tag{58} $$

(i.e., the market chooses an equivalent martingale measure, or risk-neutral measure, $Q_{\hat\theta}$ for the market). This result is the counterpart of Theorem 2.2 in Oksendal and Sulem [15] without jumps.
4.3. Relationship between Maximum Principle and Dynamic Programming

We now verify the relationships in Theorem 3.1. In fact, relations (13), (14), (15) are obvious from (47). We only need to verify the following relations (see (16)):

$$ \hat p_1(t) = \frac{\partial V}{\partial x_1}(t,x_1,x_2), \qquad \hat p_2(t) = \frac{\partial V}{\partial x_2}(t,x_1,x_2), $$

and

$$ \begin{pmatrix}\hat q_1(t)\\ \hat q_2(t)\end{pmatrix} = \begin{pmatrix}\dfrac{\partial^2 V}{\partial x_1^2}(t,x_1,x_2) & \dfrac{\partial^2 V}{\partial x_1\partial x_2}(t,x_1,x_2)\\[1.5ex] \dfrac{\partial^2 V}{\partial x_1\partial x_2}(t,x_1,x_2) & \dfrac{\partial^2 V}{\partial x_2^2}(t,x_1,x_2)\end{pmatrix} \begin{pmatrix}x_1\hat\theta(t)\\ x_2\sigma(t)\hat\pi(t)\end{pmatrix}, $$

where the second factor is the diffusion coefficient of (25) along $(\hat\theta,\hat\pi)$. Noting that $\hat\pi(t)\equiv 0$, the above relations are easily obtained from (48), (31), (37), (33) and (39): indeed, $\frac{\partial V}{\partial x_1} = U(f(t)x_2) = \hat p_1(t)$, $\frac{\partial V}{\partial x_2} = x_1f(t)U'(f(t)x_2) = \hat p_2(t)$, $\hat q_1(t) = 0$, and $\hat q_2(t) = f(t)x_1\hat\theta(t)U'(f(t)x_2)$.
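These verifications can also be checked symbolically. The following sketch (ours, with the illustrative choices of logarithmic utility and constant coefficients, neither of which is imposed by the paper) confirms that the ansatz (48) satisfies the HJBI Equation iii) in (47) and that the derivatives of $V$ reproduce the adjoint processes of Subsection 4.1 as in Theorem 3.1.

```python
# Symbolic check of Subsections 4.2-4.3 for U(x) = log(x) and constant
# coefficients: V(t,x1,x2) = x1 U(f(t) x2) with f(t) = exp(rho (T - t)).
import sympy as sp

t, x1, x2, T = sp.symbols('t x1 x2 T', positive=True)
rho, mu, sig = sp.symbols('rho mu sigma', positive=True)
f = sp.exp(rho*(T - t))                 # solution (56) of f' + rho f = 0, f(T) = 1
V = x1*sp.log(f*x2)                     # ansatz (48) with U = log

theta_hat = (rho - mu)/sig              # worst-case scenario (45)/(58)
pi_hat = 0                              # optimal portfolio (44)/(57)

def A(theta, pi, phi):                  # generator (46) of the pair (Z, Y)
    return (sp.diff(phi, t)
            + x2*(rho + (mu - rho)*pi)*sp.diff(phi, x2)
            + sp.Rational(1, 2)*x1**2*theta**2*sp.diff(phi, x1, 2)
            + sp.Rational(1, 2)*x2**2*sig**2*pi**2*sp.diff(phi, x2, 2)
            + theta*sig*pi*x1*x2*sp.diff(phi, x1, x2))

assert sp.simplify(A(theta_hat, pi_hat, V)) == 0       # HJBI condition iii)

# Theorem 3.1 relations: p = grad V, q = (Hessian V) * diffusion coefficient
p1, p2 = sp.diff(V, x1), sp.diff(V, x2)
assert sp.simplify(p1 - sp.log(f*x2)) == 0             # matches (31), U = log
assert sp.simplify(p2 - x1/x2) == 0                    # matches (37), U = log
q1 = sp.diff(V, x1, 2)*x1*theta_hat + sp.diff(V, x1, x2)*x2*sig*pi_hat
q2 = sp.diff(V, x1, x2)*x1*theta_hat + sp.diff(V, x2, 2)*x2*sig*pi_hat
assert sp.simplify(q1) == 0                            # matches (33) with pi = 0
assert sp.simplify(q2 - theta_hat*x1/x2) == 0          # matches (39), U = log
print("HJBI and Theorem 3.1 relations verified for U = log")
```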
5. Conclusions and Future Works
In this paper, we have discussed the relationship between the maximum principle and dynamic programming in zero-sum stochastic differential games. Under the assumption that the value function is smooth, relations among the adjoint processes, the generalized Hamiltonian function and the value function are given. A portfolio optimization problem under model uncertainty in the financial market is discussed to show the applications of our result.

Many interesting and challenging problems remain open. For example, what is the relationship between the maximum principle and dynamic programming for stochastic differential games without the restrictive assumption that the value function is smooth? This problem may be solved within the framework of viscosity solution theory (Yong and Zhou [9]). Another topic is to continue to investigate the relationship between the maximum principle and dynamic programming for forward-backward stochastic differential games, and then to study its applications to stochastic recursive utility optimization problems under model uncertainty (Oksendal and Sulem [16]). Such topics will be studied in our future work.
6. Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant Nos. 11301011 and 11201264) and the Natural Science Foundation of Shandong Province (ZR2011AQ012).
REFERENCES

[1] R. J. Elliott, "The Existence of Value in Stochastic Differential Games," SIAM Journal on Control and Optimization, Vol. 14, No. 1, 1976, pp. 85-94. http://dx.doi.org/10.1137/0314006

[2] W. H. Fleming and P. E. Souganidis, "On the Existence of Value Function of Two-Player, Zero-Sum Stochastic Differential Games," Indiana University Mathematics Journal, Vol. 38, No. 2, 1989, pp. 293-314. http://dx.doi.org/10.1512/iumj.1989.38.38015

[3] R. Buckdahn, P. Cardaliaguet and C. Rainer, "Nash Equilibrium Payoffs for Nonzero-Sum Stochastic Differential Games," SIAM Journal on Control and Optimization, Vol. 43, No. 2, 2004, pp. 624-642. http://dx.doi.org/10.1137/S0363012902411556

[4] S. Mataramvura and B. Oksendal, "Risk Minimizing Portfolios and HJBI Equations for Stochastic Differential Games," Stochastics: An International Journal of Probability and Stochastic Processes, Vol. 80, No. 4, 2008, pp. 317-337. http://dx.doi.org/10.1080/17442500701655408

[5] S. J. Tang and X. J. Li, "Differential Games of N Players in Stochastic Systems with Controlled Diffusion Terms," Automation and Remote Control, Vol. 69, No. 5, 2008, pp. 874-890. http://dx.doi.org/10.1134/S0005117908050123

[6] T. T. K. An and B. Oksendal, "Maximum Principle for Stochastic Differential Games with Partial Information," Journal of Optimization Theory and Applications, Vol. 139, No. 3, 2008, pp. 463-483. http://dx.doi.org/10.1007/s10957-008-9398-y

[7] A. Bensoussan, "Maximum Principle and Dynamic Programming Approaches of the Optimal Control of Partially Observed Diffusions," Stochastics, Vol. 9, No. 3, 1983, pp. 169-222. http://dx.doi.org/10.1080/17442508308833253

[8] X. Y. Zhou, "The Connection between the Maximum Principle and Dynamic Programming in Stochastic Control," Stochastics & Stochastics Reports, Vol. 31, No. 1-4, 1990, pp. 1-13. http://dx.doi.org/10.1080/17442509008833645

[9] J. M. Yong and X. Y. Zhou, "Stochastic Controls: Hamiltonian Systems and HJB Equations," Springer-Verlag, New York, 1999. http://dx.doi.org/10.1007/978-1-4612-1466-3

[10] N. C. Framstad, B. Oksendal and A. Sulem, "A Sufficient Stochastic Maximum Principle for Optimal Control of Jump Diffusions and Applications to Finance," Journal of Optimization Theory and Applications, Vol. 121, No. 1, 2004, pp. 77-98. http://dx.doi.org/10.1023/B:JOTA.0000026132.62934.96

[11] J. T. Shi and Z. Wu, "Relationship between MP and DPP for the Optimal Control Problem of Jump Diffusions," Applied Mathematics and Optimization, Vol. 63, No. 2, 2011, pp. 151-189. http://dx.doi.org/10.1007/s00245-010-9115-8

[12] C. Donnelly, "Sufficient Stochastic Maximum Principle in a Regime-Switching Diffusion Model," Applied Mathematics and Optimization, Vol. 64, No. 2, 2011, pp. 155-169. http://dx.doi.org/10.1007/s00245-010-9130-9

[13] G. Peskir and J. Shorish, "Market Forces and Dynamic Asset Pricing," Stochastic Analysis and Applications, Vol. 20, No. 5, 2002, pp. 1027-1082. http://dx.doi.org/10.1081/SAP-120014553

[14] R. Korn and O. Menkens, "Worst-Case Scenario Portfolio Optimization: A New Stochastic Control Approach," Mathematical Methods of Operations Research, Vol. 62, No. 1, 2005, pp. 123-140. http://dx.doi.org/10.1007/s00186-005-0444-3

[15] B. Oksendal and A. Sulem, "A Game Theoretical Approach to Martingale Measures in Incomplete Markets," Pure Mathematics, E-Print of University of Oslo, No. 24, 2006.

[16] B. Oksendal and A. Sulem, "Forward-Backward SDE Games and Stochastic Control under Model Uncertainty," Journal of Optimization Theory and Applications, in press.