© 2004 Society for Industrial and Applied Mathematics

SIAM J. CONTROL OPTIM. Vol. 43, No. 2, pp. 708–730

SINGULAR STOCHASTIC CONTROL PROBLEMS∗

F. DUFOUR† AND B. MILLER‡

Abstract. In this paper, we study an optimal singular stochastic control problem. By using a time transformation, this problem is shown to be equivalent to an auxiliary control problem defined as a combination of an optimal stopping problem and a classical control problem. For this auxiliary control problem, the controller must choose a stopping time (optimal stopping), and the new control variables belong to a compact set. This equivalence is obtained by showing that the (discontinuous) state process governed by a singular control is given by a time transformation of an auxiliary state process governed by a classical bounded control. It is proved that the value functions for these two problems are equal. For a general form of the cost, the existence of an optimal singular control is established under certain technical hypotheses. Moreover, the problem of approximating a singular optimal control by absolutely continuous controls is discussed in the same class of admissible controls.

Key words. nonlinear stochastic systems, optimal control, singular control, time change

AMS subject classifications. 49J30, 49N25, 93E20

DOI. 10.1137/S0363012902412719

1. Introduction. In this paper, the existence of optimal singular controls is studied for the nonlinear stochastic system defined by the following equation:

(1)  $$x_t = \zeta + \int_0^t A(s, x_s)\,ds + \int_0^t B(s)\,du_s + \int_0^t D(s, x_s)\,dW_s,$$

where the functions $A$, $B$, $D$ are deterministic, $\{W_t\}$ is a Brownian motion, and $\{u_t\}$ is the control. All the processes are assumed to be defined on a probability space $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\})$. Let $K \subset \mathbb{R}^p$ be a closed convex cone and $T$ be the finite horizon. The class of admissible controls, labeled $\mathcal{C}_a$, is defined as the class of $K$-valued, corlol (continuous on the right with left-hand limits), $\{\mathcal{F}_t\}$-progressively measurable processes for which almost every sample path is of finite variation on the interval $[0, T]$:

(2)  $$v_T^u < \infty, \quad P\text{-a.s.},$$

where $v_t^u = |u_0| + \lim_{n\to\infty}\sum_{k=1}^{n} |u_{tk/n} - u_{t(k-1)/n}|$ ($|z|$ is the norm of the vector $z$). For an admissible control $u$, the cost is given by

(3)  $$J[u] = E\bigg[\int_0^T k(t, x_t)'\,du_t^c + \sum_{0\le t\le T} \int_0^{\Delta v_t^u} k\Big(t,\ x_{t-} + B(t)\frac{\Delta u_t}{\Delta v_t^u}\,s\Big)'\,\frac{\Delta u_t}{\Delta v_t^u}\,ds + g(x_T, v_T^u)\bigg],$$

∗Received by the editors August 7, 2002; accepted for publication (in revised form) December 5, 2003; published electronically August 4, 2004. This research was supported by a CNRS/Russian Academy of Sciences cooperation (number PECO/NEI 9570) and in part by the Nonlinear Control Network and by Russian Basic Investigation Foundation grant 02-01-00361.
http://www.siam.org/journals/sicon/43-2/41271.html
†Corresponding author. MAB, Université Bordeaux I, 351 cours de la Libération, 33405 Talence Cedex, France ([email protected]), and GRAPE, Université Bordeaux IV, France.
‡Institute for Information Transmission Problems, 19 Bolshoy Karetny per., Moscow 127994, Russia ([email protected]).


where $g$, $k$ are deterministic functions and $u^c$ is the continuous part of $u$ ($\Delta z_t$ denotes $z_t - z_{t-}$ for a corlol process $\{z_t\}$). The interested reader may consult [42] for a nice interpretation of the cost defined by (3).

Singular stochastic control problems have received considerable attention in the literature. The authors do not pretend to present here an exhaustive panorama of singular control problems; however, the interested reader may consult the work of Boetius [9], especially the sections at the end of the chapters, for an interesting and complete survey of stochastic singular control problems, including theoretical results and applications. This problem was first introduced by Bather and Chernoff [6] in 1967, who considered a simplified model for the control of a spaceship. It was noted for this special model that there was a connection between the singular control problem and an optimal stopping problem. This link was established through the derivative of the value function of the initial singular control problem and the value function of the corresponding optimal stopping problem. After this seminal work, this connection and its properties were extensively studied in different contexts, but mainly in the one-dimensional case or in the multidimensional linear case. Two approaches were used: one is based on the theory of partial differential equations and on variational arguments, and can be found in the works of Alvarez [1, 2], Chow, Menaldi, and Robin [13], Karatzas [27], Karatzas and Shreve [31], and Menaldi and Taksar [36]. The other approach is related to probabilistic methods; see, for example, Baldursson [4], Boetius [8, 9], Boetius and Kohlmann [10], El Karoui and Karatzas [17, 18], Karatzas [28], and Karatzas and Shreve [29, 30]. Other problems, such as the dynamic programming principle, have been studied in a general context, for example, by Boetius [9], Haussmann and Suo [24], Fleming and Soner [21], and Zhu [43], as well as the stochastic maximum principle in [11]. Singular control problems correspond to many applications in diverse areas such as mathematical finance (see, for example, Baldursson and Karatzas [5], Chiarolla and Haussmann [12], Kobila [34], and Karatzas and Wang [33]), manufacturing systems (see, for example, Shreve, Lehoczky, and Gaver [41]), and queuing systems (see, for example, Martins and Kushner [35]).

In this paper, we focus our attention on the existence problem and on the connection between singular control and optimal stopping problems. As we have already mentioned, the connection between singular stochastic control problems and optimal stopping problems has only been studied in the one-dimensional case or in the multidimensional linear case. A generalization of this connection to the multidimensional nonlinear case has been proposed by Benth and Reikvam [7], but under a very strong hypothesis, namely, the model of the state process must have a special structure in order to ensure that the $i$th component of $x(t)$ depends only on the $i$th component of the initial state process $\zeta$. In the work of Boetius and Kohlmann [10], the result of Karatzas and Shreve [29] was generalized to a nonlinear one-dimensional state process. A multidimensional problem (see section 5.2 in [10]) was also considered, but again under a strong hypothesis, since the control process could influence the state through only one variable.

In the present paper, this link is revisited by using a completely different approach. It will be shown that a multidimensional and nonlinear singular control problem can be converted into an auxiliary control problem where the control variables are of the classical type and where the controller must choose a stopping time (optimal stopping as described by Haussmann and Lepeltier, p. 851 in [22]). Consequently, this auxiliary problem combines classical control and optimal stopping. Moreover, it will


be shown that these two optimization problems have the same value function. It must be pointed out that our result differs from existing results in the literature on two points. First, our auxiliary equivalent control problem is defined as a combination of optimal stopping and classical control, whereas in the literature the equivalent control problem is formulated as a pure optimal stopping problem (the controller cannot influence the trajectory of the state process). Second, our connection is obtained directly through the value function, whereas in the literature this link was established through the gradient of the value function of the singular control problem and the value function of the related optimal stopping problem. In this auxiliary control problem, the state processes are defined on a probability space $(\Omega, \mathcal{G}, Q, \{\mathcal{G}_t\})$ by the following equations:

(4)  $$\xi_t = \zeta + \int_0^t A(\eta_s, \xi_s)(1-\theta_s)\,ds + \int_0^t \theta_s B(\eta_s)\alpha_s\,ds + \int_0^t D(\eta_s, \xi_s)\sqrt{1-\theta_s}\,dV_s,$$

(5)  $$\eta_t = \int_0^t (1-\theta_s)\,ds,$$

where $\{V_t\}$ is a Brownian motion. The new control variables are $\{(\alpha_t, \theta_t)\}$ and $\rho$, which is a $\{\mathcal{G}_t\}$-stopping time to be chosen by the controller. A key feature of this formulation is that the process $\{(\alpha_t, \theta_t)\}$ takes its values in a compact set (labeled $B$; for the definition of this set, see the notation section at the end of the introduction). The cost is defined by

(6)  $$J[\alpha, \theta, \rho] = E\Big[g(\xi_\rho,\ \rho - \eta_\rho) + \int_0^\rho \theta_s\,k(\eta_s, \xi_s)'\alpha_s\,ds + G(\eta_\rho)\Big]$$

(the function $G$ is equal to $+\infty$ everywhere except at $T$, where it takes the value zero).

The idea used to show this equivalence result is based on a time transformation. This method, originally developed in deterministic control theory (for a complete exposition on the subject, see the recent book [37] and the references therein), has been introduced recently in the stochastic context in [15, 38] and in [3]. In the latter reference, Alvarez, Gyllenberg, and Shepp used a time change technique to show that a one-dimensional singular control problem subject to a state-dependent killing rate is equivalent to an associated one-dimensional singular control problem with a constant discount rate. Note that in that work, the time change does not depend explicitly on the control but is related to the integral of the discount factor, which depends on the state process.

Our result provides a set of weak hypotheses (Assumptions A1–A5, defined below) ensuring the existence of an optimal singular control for a general model, but it has the drawback that it provides no information about the nature and the properties of the optimal control. The existence of singular stochastic controls has been investigated by Haussmann and Suo for a general nonlinear model [23]. Using a compactification method, those authors show an existence result under certain technical conditions. Our result can be viewed as extending the work of Haussmann and Suo [23] in several directions. We assume that the functions $A$ and $D$ satisfy a Lipschitz condition but are not necessarily bounded as in [23], where these functions are required to be bounded continuous (so that, for example, linear control problems cannot be considered).


Moreover, an important difference is that the form of the cost presented in our paper is more general. The part of the cost depending on the singular control in [23] has the following form:

(7)  $$E\bigg[\int_{[0,T)} c(s)'\,du_s\bigg],$$

where $c(\cdot)$ is lower semicontinuous and each of its components is strictly positive. In our work, we propose a more general form given by

(8)  $$E\bigg[\int_0^T k(t, x_t)'\,du_t^c + \sum_{0\le t\le T}\int_0^{\Delta v_t^u} k\Big(t,\ x_{t-} + B(t)\frac{\Delta u_t}{\Delta v_t^u}\,s\Big)'\,\frac{\Delta u_t}{\Delta v_t^u}\,ds\bigg].$$

It must be pointed out that the cost defined by (8) depends explicitly on the state process $\{x_t\}$, which is not the case in (7). Singular control problems defined with such general cost functions (see (3)) have been studied by many authors (for example, Zhu [43] studied a finite horizon problem, and Taksar [42] and Davis and Zervos [14] analyzed infinite horizon problems). To the best knowledge of the authors, the work presented in this paper is the first attempt to derive an existence result for a multidimensional nonlinear model with such a general form of the cost. Zhu [43] derived a dynamic programming principle for a nonlinear model where $D$ (in our notation) is assumed to be time independent (see the remark in [43, p. 229]) and nondegenerate. Taksar [42] considered a singular control problem where the state process satisfies an equation in which $A$ and $D$ are time independent, $A$ is bounded, and $D$ is nondegenerate; the author showed the equivalence between the original singular problem and a linear programming problem. In [14], Davis and Zervos proved a verification theorem for a nonlinear time-independent model, and two special one-dimensional cases were explicitly solved by the authors. The hypotheses used here are weaker than the assumptions previously cited.

Another important difference is that in [23], the variation of an admissible control needs to be integrable: $E[v_T^u] < \infty$ (see the proof of Proposition 3.4, p. 34, in Suo's thesis). Here a weaker hypothesis is introduced, in the sense that the variation of an admissible control needs only to satisfy the following assumption: $E[g(x_T, v_T^u)] < +\infty$, where $g$ must satisfy $\lim_{t\to+\infty}\inf_{x\in\mathbb{R}^n} g(x,t) = +\infty$.

An interesting point is that a large number of results presented in the literature concern singular control problems where the control process is left continuous, rejecting the possibility of a jump at the terminal time (see, for example, Karatzas and Shreve [29]). In contrast, we consider singular controls which are continuous on the right and have limits on the left, allowing a jump at the terminal time. Our approach is more direct than in [23], where a left continuous control process is considered and a modification is proposed to allow jumps at the terminal time (see Remark 2.2 and section 4 in [23]). The advantage of our approach is that one can find an optimal solution to those singular control problems which do not admit optimal solutions when the control is assumed to be left continuous. In order to illustrate this point, the well-known example of nonexistence of Karatzas and Shreve in [29] is revisited in section 6. In section 6, we also discuss some extensions and generalizations of the model initially presented in section 2. In particular, it is shown how our results can be modified in order to study other singular control problems, such as monotone follower problems (where the process $\{u_t\}$ is assumed to be nondecreasing).


The problem of approximating a singular optimal control by absolutely continuous controls is studied in the final section. The main difficulty is that the control may have a jump at the terminal time and, consequently, so may the associated state process. Therefore, it is difficult to find a sequence of absolutely continuous control processes $\{v_n(t)\}$ defined on $[0,T]$ and a sequence of filtrations $\{\mathcal{F}_t^n\}$ defined on the probability space $(\Omega, \mathcal{F}, P)$ satisfying both of the following conditions: $\{v_n(t)\}$ is $\{\mathcal{F}_t^n\}$-progressively measurable and $\lim_{n\to+\infty} x_n(T) = x(T)$, where $\{x_n(t)\}$ is the state process controlled by $\{v_n(t)\}$. Our equivalence result provides a way of overcoming this difficulty. However, it must be pointed out that although this problem is simpler, it remains difficult to solve, mainly because one needs to approximate the combination of an optimal stopping problem (where the stopping time is not necessarily bounded) and a classical control problem under the strong admissibility condition on the control and the stopping time given by $E[G(\eta_\rho)] = 0$.

In [15], the authors considered a general stochastic control problem where the controls have to satisfy an integral constraint. It was shown that there exists an optimal control within the class of generalized controls leading to impulse actions. The problem studied in [15] is related to singular control problems in the sense that the optimal generalized state process is given in terms of stochastic differential equations governed by a measure. However, the main difference is that the jump of the optimal state process cannot be explicitly expressed in terms of the jump of the optimal control process, contrary to the case of the singular control problem (where $\Delta x_t = B(t)\Delta u_t$). Although the general idea of time change is used both here and in [15], the results presented in [15] cannot be applied to solve the problem studied in this paper. Indeed, in this work the control process is required only to be of finite variation, contrary to [15], where the control is assumed to satisfy the integral constraint $P\big(\int_0^T |u_s|\,ds < M\big) = 1$ for a constant $M$ (a crucial hypothesis in [15]). Finally, it must be pointed out that in the present paper the stopping time is not necessarily bounded, a difficulty which makes the technique proposed in [15] inapplicable.

The paper is organized as follows. In section 2, we formulate the singular control problem. The description of the time transformation is presented in section 3. In section 4, an auxiliary control problem is introduced that will be shown to be equivalent to the original one. On the basis of known results, the existence theorem is proved for the auxiliary problem, and consequently for the original problem, in section 5. In section 6, we show how our existence result can be modified and applied to other problems, and we revisit a well-known example found in the literature. In the last part of the paper, it is shown that the optimal singular control can be approximated by continuous controls in a sense that will be defined below (see Theorem 7.5).

Now, we present some notation and terminology.

Notation. $\mathbb{N}_N$ is the set of the first $N$ integers; that is, $\mathbb{N}_N = \{1, \ldots, N\}$. $\mathbb{N}^* := \{k \in \mathbb{N} : k > 0\}$ and $\mathbb{R}_+ := \{x \in \mathbb{R} : x \ge 0\}$. For a vector $x$ in $\mathbb{R}^p$, the $i$th component of $x$ is denoted by $x^i$, $|x| = \sum_{i=1}^p |x^i|$ is the norm of $x$, and $0_p$ is the zero vector in $\mathbb{R}^p$. If $A$ is an $m \times n$ matrix, the norm of $A$ is defined by $|A| = \max_{|x|\le 1} |Ax|$, and $(\,)'$ denotes the transpose operation. The indicator function of a set $A$ is denoted by $I_A(x)$. The function $\delta$ defined on $\mathbb{N} \times \mathbb{N}$ is such that $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ otherwise. For $x \in \mathbb{R}$, $x^+$ is defined by $x^+ = x^{-1}$ if $x \ne 0$ and by $x^+ = 0$ if $x = 0$. If $X$ is a metric space, then $\mathcal{B}(X)$ denotes its associated Borel $\sigma$-field. A process is said to be corlol if it is continuous on the right and has limits on the left.
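The convention for $x^+$ is the scalar pseudo-inverse; it reappears in the proof of Theorem 4.2, where the product $(1-\theta)(1-\theta)^+$ acts as the indicator of $\{\theta \ne 1\}$. A minimal Python illustration (ours, not part of the paper):

```python
def pinv(x: float) -> float:
    """Scalar pseudo-inverse: x^+ = 1/x if x != 0, and 0 if x == 0."""
    return 1.0 / x if x != 0 else 0.0

# (1 - theta) * pinv(1 - theta) equals 1 when theta != 1 and 0 when theta == 1,
# i.e., it is the indicator of {theta != 1} used in the proof of Theorem 4.2.
for theta in (0.0, 0.5, 1.0):
    print(theta, (1 - theta) * pinv(1 - theta))
```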


On the probability space $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\})$, the mathematical expectation is denoted by $E_P[\,\cdot\,]$, and for an $\mathbb{R}^p$-valued corlol process $\{u_t\}$, the total variation of $\{u_t\}$ on the interval $[0,t]$ is defined by

(9)  $$v_t^u := |u_0| + \lim_{n\to\infty} \sum_{k=1}^{n} |u_{tk/n} - u_{t(k-1)/n}|,$$

and $\{u_t\}$ is said to be of finite variation on $[0,t]$ if $v_t^u < +\infty$, $P$-a.s. Let $\{w_t\}$ be a real-valued corlol process of finite variation on $[0,T]$. Then $\{w_t\}$ is the distribution function of a signed measure defined on $[0,T]$; this measure is denoted by $dw$.

In order to define the state processes, let us introduce the following data:
• $T$ is a fixed real number.
• $K$ is a subset of $\mathbb{R}^p$.
• $B := \{(x, y) \in K \times [0,1] : |x| \le 1\}$.
• $A : [0,T] \times \mathbb{R}^n \to \mathbb{R}^n$.
• $B : [0,T] \to \mathbb{R}^{n \times p}$.
• $D : [0,T] \times \mathbb{R}^n \to \mathbb{R}^{n \times m}$.
• $g : \mathbb{R}^n \times \mathbb{R}_+ \to \mathbb{R}_+$.
• $k : \mathbb{R}_+ \times \mathbb{R}^n \to \mathbb{R}^p_+$.
• $\zeta$ is a fixed vector in $\mathbb{R}^n$.
• $G : \mathbb{R}_+ \to \mathbb{R}_+$ such that $G(T) = 0$ and $G(t) = \infty$ for $t \ne T$.

The following assumptions will be used in the paper.

Assumption A1. The functions $A(\cdot,\cdot)$, $B(\cdot)$, and $D(\cdot,\cdot)$ are continuous, and for all $t \in \mathbb{R}_+$ there is a constant $L_1$ such that, for all $(x,y) \in \mathbb{R}^n \times \mathbb{R}^n$,
$$|A(t,x) - A(t,y)| + |D(t,x) - D(t,y)| \le L_1\,|x - y|.$$

Assumption A2. The function $g$ is lower semicontinuous, satisfies $\lim_{t\to+\infty}\inf_{x\in\mathbb{R}^n} g(x,t) = +\infty$, and is nondecreasing in its second argument: $(\forall x \in \mathbb{R}^n)$, $(\forall (y_1, y_2) \in \mathbb{R}_+ \times \mathbb{R}_+)$, if $y_1 \le y_2$, then $g(x, y_1) \le g(x, y_2)$.

Assumption A3. Each component of $k$ is lower semicontinuous, and there exists a continuously differentiable function $L : [0,T] \times \mathbb{R}^n \to \mathbb{R}$ such that

(10)  $$\nabla_x L(t,x)\,B(t) = k(t,x)'.$$

Assumption A4. $K$ is a closed convex cone.

Assumption A5. For all $(t,x) \in [0,T] \times \mathbb{R}^n$, the set $K(t,x)$, defined by
$$K(t,x) := \big\{\big(A(t,x)(1-\theta) + \theta B(t)\alpha,\ (1-\theta)D(t,x)D(t,x)',\ 1-\theta\big) : (\alpha, \theta) \in B\big\},$$
is convex.

Unless clearly mentioned, we shall always follow the convention that $X_{0-} = 0$, $P$-a.s., for any corlol process $\{X_t\}$ defined on a probability space $(\Omega, \mathcal{F}, P)$.
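To make Assumption A3 concrete, consider the hypothetical data $n = p = 2$, $B(t) \equiv I$, and $k(t,x) = (e^{x_1}, e^{x_2})'$, whose components lie in $\mathbb{R}_+$; then $L(t,x) = e^{x_1} + e^{x_2}$ satisfies (10). A short symbolic check (the data are illustrative assumptions, not taken from the paper):

```python
import sympy as sp

# Hypothetical data for Assumption A3 (n = p = 2, B(t) = identity):
# k(t, x) = (exp(x1), exp(x2))' has components in R_+, and
# L(t, x) = exp(x1) + exp(x2) is a potential for it.
x1, x2 = sp.symbols('x1 x2')
L = sp.exp(x1) + sp.exp(x2)
B = sp.eye(2)
k = sp.Matrix([sp.exp(x1), sp.exp(x2)])
grad_L = sp.Matrix([[sp.diff(L, x1), sp.diff(L, x2)]])   # row vector (1 x n)
print(sp.simplify(grad_L * B - k.T))                     # zero row vector: (10) holds
```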


2. Preliminaries and statement of the problem. In this section, we formulate the stochastic control problem presented in the introduction using the formulation described in [20] and in [22].

Definition 2.1. A singular control is defined by the following term:
$$C := (\Omega, \mathcal{F}, P, \{\mathcal{F}_t\}, \{u_t\}, \{W_t\}, \{x_t\}),$$
where
(i) $(\Omega, \mathcal{F}, P)$ is a complete probability space with a right continuous complete filtration $\{\mathcal{F}_t\}$;
(ii) $\{u_t\}$ is an $\mathbb{R}^p$-valued, corlol, $\{\mathcal{F}_t\}$-progressively measurable process such that

(11)  $$(\forall A \in \mathcal{B}([0,T]) \otimes \mathcal{F}), \qquad \int_0^T I_A\,du_t \in K,$$

(12)  $$v_T^u < +\infty;$$

(iii) $\{W_t\}$ is a standard $m$-dimensional $\{\mathcal{F}_t\}$-Brownian motion;
(iv) $\{x_t\}$ is an $\mathbb{R}^n$-valued, corlol, $\{\mathcal{F}_t\}$-progressively measurable process such that $(\forall t \in [0,T])$

(13)  $$x_t = \zeta + \int_0^t A(s, x_s)\,ds + \int_{[0,t]} B(s)\,du_s + \int_0^t D(s, x_s)\,dW_s$$

and $x_{0-} = \zeta$. We write $\mathcal{C}$ for the set of controls satisfying the previous conditions. The cost is given by

(14)  $$J[C] := E_P\bigg[\int_0^T k(t, x_t)'\,du_t^c + \sum_{0\le t\le T}\int_0^{\Delta v_t^u} k\Big(t,\ x_{t-} + B(t)\frac{\Delta u_t}{\Delta v_t^u}\,s\Big)'\,\frac{\Delta u_t}{\Delta v_t^u}\,ds + g(x_T, v_T^u)\bigg].$$

The set $\mathcal{C}_a$ of admissible controls is defined by

(15)  $$\mathcal{C}_a := \{C \in \mathcal{C} : J[C] < \infty\}.$$

The singular control problem is defined by the minimization of $J[C]$ on $\mathcal{C}_a$.
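For intuition about Definition 2.1 and the cost (14), the following sketch simulates a one-dimensional instance of (13) by an Euler–Maruyama scheme under a control with one jump and evaluates one Monte Carlo sample of (14). All coefficient choices ($A(t,x) = -x$, $B \equiv 1$, $D \equiv 0.3$, $k$, $g$) are assumptions made for this illustration only.

```python
import numpy as np

# Illustrative one-dimensional coefficients -- assumptions for this sketch only
A = lambda t, x: -x          # drift A(t, x)
B = lambda t: 1.0            # control coefficient B(t) (scalar here)
D = lambda t, x: 0.3         # diffusion D(t, x)
k = lambda t, x: 1.0 + x**2  # running cost density k(t, x)
g = lambda x, v: x**2 + v    # terminal cost g(x_T, v_T^u), nondecreasing in v

T, N, zeta = 1.0, 1000, 0.5
dt = T / N
rng = np.random.default_rng(0)

# Finite-variation control: drift at unit rate on [0, 0.5), one jump of size c at t = 0.5
c, t_jump = 0.4, 0.5
x, cost, var = zeta, 0.0, 0.0
for i in range(N):
    t = i * dt
    dW = np.sqrt(dt) * rng.standard_normal()
    du_c = dt if t < t_jump else 0.0          # continuous part du^c of the control
    x += A(t, x) * dt + B(t) * du_c + D(t, x) * dW
    cost += k(t, x) * du_c                    # first term of the cost (14)
    var += du_c
    if abs(t - t_jump) < dt / 2:              # the jump: Delta u = Delta v^u = c
        # jump term of (14): k integrated along the segment from x to x + B(t) c
        # (here Delta u / Delta v^u = 1 since p = 1 and the jump is positive)
        s_grid = np.linspace(0.0, c, 200)
        cost += np.mean([k(t, x + B(t) * s) for s in s_grid]) * c
        x += B(t) * c                         # state jumps by B(t) Delta u
        var += c
cost += g(x, var)
print(f"one sample of the cost J[C]: {cost:.4f}")
```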

3. Time transformation. In this section, it is assumed that on a probability space $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\})$ satisfying the usual hypotheses (completeness and right continuity), there exists a process $\{u_t\}$ satisfying item (ii) of Definition 2.1. Let us define the process $\{\Gamma_t\}$ by

(16)  $$\Gamma_t := t + v_t^u.$$

$\{\Gamma_t\}$ is a corlol, strictly increasing, $\{\mathcal{F}_t\}$-progressively measurable process. Denote by $\{\eta_t\}$ the right inverse of $\{\Gamma_t\}$:

(17)  $$\eta_t := \inf\{s \ge 0 : \Gamma_s > t\}.$$

Therefore, applying Proposition 1.1 of Chapter V in [40], $\{\eta_t\}$ is a time change satisfying

(18)  $$\eta_{\Gamma_t} = t.$$
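For intuition, the next sketch computes $\Gamma_t = t + v_t^u$ and its right inverse $\eta$ for a control with a single jump (an assumed toy example, not from the paper): the jump of $u$ opens an interval of $\Gamma$-time on which $\eta$ stays flat, which is exactly the stretch of artificial time exploited in section 4.

```python
import numpy as np

# Toy control on [0, 1]: unit-rate ramp on [0, 0.5), jump of size 0.4 at t = 0.5
def v(t):                       # total variation v_t^u of this control
    return min(t, 0.5) + (0.4 if t >= 0.5 else 0.0)

def Gamma(t):                   # (16): Gamma_t = t + v_t^u
    return t + v(t)

def eta(s, grid):               # (17): right inverse, eta_s = inf{r : Gamma_r > s}
    for r in grid:
        if Gamma(r) > s:
            return r
    return grid[-1]

grid = np.linspace(0.0, 1.0, 20001)
# eta is flat on the jump interval [Gamma_{0.5-}, Gamma_{0.5}] = [1.0, 1.4]
for s in [0.9, 1.0, 1.2, 1.4, 1.5]:
    print(f"eta({s:.1f}) = {eta(s, grid):.3f}")
# and (18): eta_{Gamma_t} = t
t = 0.3
print(f"eta(Gamma({t})) = {eta(Gamma(t), grid):.3f}")
```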


Proposition 3.1. There exists a $B$-valued, $\{\mathcal{F}_t\}$-progressively measurable process $\{(\alpha_t, \theta_t)\}$ such that $(\forall j \in \mathbb{N}_p)$

(19)–(20)  $$v_t^u = \int_0^t \theta_s\,d\Gamma_s = \int_0^{\Gamma_t} \theta_{\eta_s}\,ds,$$

(21)–(22)  $$u_t = \int_0^t \alpha_s\,dv_s^u = \int_0^{\Gamma_t} \theta_{\eta_s}\alpha_{\eta_s}\,ds.$$

Proof. By definition, the measure $dv^u$ is absolutely continuous with respect to the measure $d\Gamma$ for almost all $\omega \in \Omega$. Consequently, using Proposition 3.13 of Chapter I in [26], it follows that there exists an $\{\mathcal{F}_t\}$-optional process $\{\tilde\theta_t\}$ such that

(23)  $$v_t^u = \int_0^t \tilde\theta_s\,d\Gamma_s.$$

Using the fact that $(\forall A \in \mathcal{B}([0,T]))$ $dv^u(A) \le d\Gamma(A)$, we obtain from (23) that

(24)  $$v_t^u = \int_0^t \theta_s\,d\Gamma_s,$$

where $\theta_s := (0 \vee \tilde\theta_s) \wedge 1$. Moreover, using Proposition 4.9 in [40, p. 8] and (24), we obtain (20). Using the same arguments, it can be shown that there exists a process $\{\alpha_t\}$ such that $|\alpha_t| \le 1$ and which satisfies (21). By combining (19), (21), and [40, Proposition 4.9, p. 8], we have (22). Finally, using (11), it can be shown easily that $\{\alpha_t\}$ is a $K$-valued process, giving the result.

Proposition 3.2. The process $\{\theta_t\}$ satisfies the following equality:

(25)  $$\eta_t = \int_0^t (1 - \theta_{\eta_s})\,ds.$$

Proof. Combining (16) and (20), we obtain that
$$\int_0^{\Gamma_t} (1 - \theta_{\eta_s})\,ds = \Gamma_t - v_t^u = t.$$
Consequently, using (18), we have
$$\int_0^{\Gamma_t} (1 - \theta_{\eta_s})\,ds = \eta_{\Gamma_t}.$$
Since $\{\int_0^t (1 - \theta_{\eta_s})\,ds\}$ and $\{\eta_t\}$ are increasing continuous processes and $\{\Gamma_t\}$ is a strictly increasing corlol process, the result follows.
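Numerically, Propositions 3.1 and 3.2 say that in $\Gamma$-time the density $\theta_{\eta_s}$ splits the clock between variation and calendar time. The sketch below tabulates $\theta_{\eta_s}$ for the same toy control as in the previous sketch (again an assumption for illustration) and checks (20) and (25):

```python
import numpy as np

# Toy control: ramp at unit rate on [0, 0.5), jump 0.4 at t = 0.5, constant after.
# In Gamma-time, theta_{eta_s} = dv^u/dGamma: 1/2 while both clocks advance,
# 1 across the jump interval [1.0, 1.4], 0 when only calendar time advances.
def theta_bar(s):
    if s < 1.0:
        return 0.5
    if s <= 1.4:
        return 1.0
    return 0.0

S = np.linspace(0.0, 1.9, 19001)          # Gamma-time grid; Gamma_1 = 1 + 0.9 = 1.9
ds = S[1] - S[0]
eta = np.cumsum([(1.0 - theta_bar(s)) * ds for s in S])   # (25): calendar time
var = np.cumsum([theta_bar(s) * ds for s in S])           # (20): variation v^u

print(f"eta at s = 1.9 (should be ~1.0): {eta[-1]:.4f}")
print(f"v^u at s = 1.9 (should be ~0.9): {var[-1]:.4f}")
print(f"eta flat on the jump interval: eta(1.0) = {eta[10000]:.4f}, eta(1.4) = {eta[14000]:.4f}")
```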


4. Auxiliary control and equivalence result. We introduce an auxiliary control problem given in terms of a classical control problem and an optimal stopping problem, and it is shown that this problem is equivalent to the initial one. The new control variables are defined by $\{(\alpha_t, \theta_t)\}$ and by $\rho$, which is a $\{\mathcal{G}_t\}$-stopping time to be chosen by the controller (see Definition 4.1 below). A key property of the auxiliary control problem is that the new control variables $\{(\alpha_t, \theta_t)\}$ take their values in a compact set.

Definition 4.1. An auxiliary control is defined by the following term:
$$\Psi := (\Omega, \mathcal{G}, Q, \{\mathcal{G}_t\}, \{(\alpha_t, \theta_t)\}, \{V_t\}, \{\Lambda_t\}, \rho),$$
where
(i) $(\Omega, \mathcal{G}, Q)$ is a complete probability space with a right continuous complete filtration $\{\mathcal{G}_t\}$;
(ii) $\{(\alpha_t, \theta_t)\}$ is a $B$-valued, $\{\mathcal{G}_t\}$-progressively measurable process;
(iii) $\{V_t\}$ is a standard $m$-dimensional $\{\mathcal{G}_t\}$-Brownian motion;
(iv) $\rho$ is a $\{\mathcal{G}_t\}$-stopping time such that

(26)  $$\rho < +\infty, \quad Q\text{-a.s.};$$

(v) $\{\Lambda_t := (\xi_t, \eta_t)'\}$ is an $\mathbb{R}^{n+1}$-valued, $\{\mathcal{G}_t\}$-progressively measurable process such that

(27)  $$\xi_t = \zeta + \int_0^t A(\eta_s, \xi_s)(1-\theta_s)\,ds + \int_0^t \theta_s B(\eta_s)\alpha_s\,ds + \int_0^t D(\eta_s, \xi_s)\sqrt{1-\theta_s}\,dV_s,$$

(28)  $$\eta_t = \int_0^t (1-\theta_s)\,ds$$

for $t \in [0, \rho]$. We write $\Upsilon$ for the set of controls satisfying the previous conditions. The cost is given by

(29)  $$\mathcal{M}[\Psi] := E_Q\Big[g(\xi_\rho,\ \rho - \eta_\rho) + \int_0^\rho \theta_s\,k(\eta_s, \xi_s)'\alpha_s\,ds + G(\eta_\rho)\Big].$$

The set $\Upsilon_a$ of admissible auxiliary controls is defined by

(30)  $$\Upsilon_a := \{\Psi \in \Upsilon : \mathcal{M}[\Psi] < \infty\}.$$

The auxiliary control problem is defined by the minimization of $\mathcal{M}[\Psi]$ on $\Upsilon_a$.
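To see how the auxiliary dynamics (27)–(28) encode a jump of the original problem, the following sketch (with illustrative coefficients, assumed as in the earlier sketches) runs an Euler scheme in which $\theta$ is set to $1$ on a stretch of auxiliary time: the clock $\eta$ freezes, the Brownian term vanishes since $\sqrt{1-\theta} = 0$, and $\xi$ moves ballistically by $\int \theta B(\eta)\alpha\,ds$, which is exactly the jump $\Delta x_t = B(t)\Delta u_t$ of the original state.

```python
import numpy as np

# Illustrative coefficients, as in the earlier sketches (assumptions, not from the paper)
A = lambda t, x: -x
B = lambda t: 1.0
D = lambda t, x: 0.3

rng = np.random.default_rng(1)
ds_, xi, eta = 1e-3, 0.5, 0.0

def step(xi, eta, alpha, theta):
    """One Euler step of the auxiliary dynamics (27)-(28)."""
    dV = np.sqrt(ds_) * rng.standard_normal()
    xi += (A(eta, xi) * (1 - theta) + theta * B(eta) * alpha) * ds_ \
          + D(eta, xi) * np.sqrt(1 - theta) * dV
    eta += (1 - theta) * ds_
    return xi, eta

# Phase 1: classical motion (theta = 0) until calendar time eta = 0.5
while eta < 0.5:
    xi, eta = step(xi, eta, alpha=0.0, theta=0.0)
x_before = xi
# Phase 2: theta = 1 for 0.4 units of auxiliary time -> a jump of size B * 0.4 in x
for _ in range(400):
    xi, eta = step(xi, eta, alpha=1.0, theta=1.0)
print(f"eta frozen at {eta:.3f}; jump in xi: {xi - x_before:.3f} (= B * 0.4)")
```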

In this section, the equivalence between the auxiliary and the initial control problems is shown.

Theorem 4.2. Assume Assumption A1, and let $C$ be an element of $\mathcal{C}_a$. Then there exists an auxiliary control $\Psi$ in $\Upsilon_a$ such that

(31)  $$\mathcal{M}[\Psi] = J[C].$$

Proof. Let us define $C$ as $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\}, \{u_t\}, \{W_t\}, \{x_t\})$. Define the process $\{\Gamma_t\}$ by (16) and its right inverse $\{\eta_t\}$ by (17). Clearly, the probability space


$(\Omega, \mathcal{F}, P, \{\mathcal{F}_{\eta_t}\})$ satisfies the usual hypotheses. Since $\{u_t\}$ satisfies item (ii) of Definition 2.1, we can apply the results of the previous section to obtain the existence of an $\{\mathcal{F}_t\}$-progressively measurable process $\{(\alpha_t, \theta_t)\}$ satisfying (19) and (21) (see Proposition 3.1). Therefore, the process $\{(\bar\alpha_t, \bar\theta_t)\}$, where $\bar\alpha_t := \alpha_{\eta_t}$ and $\bar\theta_t := \theta_{\eta_t}$, is $\{\mathcal{F}_{\eta_t}\}$-progressively measurable. Using Proposition 1.1 in [40, Chapter V], $\Gamma_T$ is an $\{\mathcal{F}_{\eta_t}\}$-stopping time, and $\Gamma_T < +\infty$ by definition since $v_T^u < +\infty$. Let us introduce the following stochastic differential equation:

(32)  $$\bar\xi_t = \zeta + \int_0^t A(\eta_s, \bar\xi_s)(1-\bar\theta_s)\,ds + \int_0^t \bar\theta_s B(\eta_s)\bar\alpha_s\,ds + \int_0^t D(\eta_s, \bar\xi_s)\sqrt{(1-\bar\theta_s)(1-\bar\theta_s)^+}\,dW_{\eta_s}.$$

Using Assumption A1 and [39, Theorem 6, p. 194], it follows that the solution to the previous equation exists and is unique and continuous on the probability space $(\Omega, \mathcal{F}, P, \{\mathcal{F}_{\eta_t}\})$. Using [40, Proposition 4.9, p. 8], Propositions 3.1 and 3.2, and (18), it follows that, for all $t \in [0, \Gamma_T]$,

(33)  $$\int_0^{\Gamma_t} A(\eta_s, \bar\xi_s)(1-\bar\theta_s)\,ds = \int_0^t A(s, \bar\xi_{\Gamma_s})\,ds \qquad\text{and}\qquad \int_0^{\Gamma_t} \bar\theta_s B(\eta_s)\bar\alpha_s\,ds = \int_{[0,t]} B(s)\,du_s.$$

Define the process $\{\bar W_t\}$ by $\bar W_t := \int_0^t \sqrt{(1-\bar\theta_s)(1-\bar\theta_s)^+}\,dW_{\eta_s}$. It is a continuous $\{\mathcal{F}_{\eta_t}\}$-local martingale such that
$$\langle \bar W^i, \bar W^j\rangle_t = \int_0^t (1-\bar\theta_s)(1-\bar\theta_s)^+\,d\langle W^i_{\eta}, W^j_{\eta}\rangle_s = \delta_{ij}\int_0^t (1-\bar\theta_s)(1-\bar\theta_s)^+\,d\eta_s = \delta_{ij}\,\eta_t,$$
where the last equality has been obtained using Proposition 3.2. Moreover, it is easy to show that $\bar W_{\Gamma_t} = W_t$. Now using [32, Proposition 4.8, p. 176] and (18), it follows that

(34)  $$\int_0^{\Gamma_t} D(\eta_s, \bar\xi_s)\sqrt{(1-\bar\theta_s)(1-\bar\theta_s)^+}\,dW_{\eta_s} = \int_0^{\Gamma_t} D(\eta_s, \bar\xi_s)\,d\bar W_s = \int_0^t D(s, \bar\xi_{\Gamma_s})\,dW_s.$$

Finally, combining (33) and (34), we obtain that $\{\bar\xi_{\Gamma_t}\}$ is a corlol process which satisfies (13). From Theorem 6 in [39, p. 194], it follows that (13) has a unique solution and, consequently, $\bar\xi_{\Gamma_t} = x_t$.

Let $(\tilde\Omega, \tilde{\mathcal{F}}, \tilde P, \{\tilde{\mathcal{F}}_t\})$ be a filtered probability space supporting a standard $m$-dimensional Brownian motion $\{\tilde W_t\}$, and set
$$\hat\Omega = \Omega \times \tilde\Omega, \qquad \hat{\mathcal{F}} = \mathcal{F} \otimes \tilde{\mathcal{F}}, \qquad \hat P = P \otimes \tilde P, \qquad \hat{\mathcal{F}}_t = \mathcal{F}_{\eta_t} \otimes \tilde{\mathcal{F}}_t.$$
A process $X$ defined on $\Omega$ may be viewed as being defined on $\hat\Omega$ by setting $\hat X(\omega, \tilde\omega) = X(\omega)$. For simplicity of exposition, the same notation will be used in the rest of the paper to identify the processes $X$ and $\hat X$. This approach will also be applied to the processes defined on $\tilde\Omega$.


Let us introduce the process $\{V_t\}$ defined on $(\hat\Omega, \hat{\mathcal{F}}, \hat P, \{\hat{\mathcal{F}}_t\})$ by

(35)  $$V_t := \int_0^t \sqrt{(1-\bar\theta_s)^+}\,dW_{\eta_s} + \int_0^t \sqrt{1 - (1-\bar\theta_s)(1-\bar\theta_s)^+}\,d\tilde W_s.$$

Clearly, $\{W_{\eta_t}\}$ and $\{\tilde W_t\}$ are two independent continuous $\{\hat{\mathcal{F}}_t\}$-martingales. Therefore, $\{V_t\}$ is a continuous $\{\hat{\mathcal{F}}_t\}$-local martingale such that
$$\langle V^i, V^j\rangle_t = \delta_{ij}\bigg[\int_0^t (1-\bar\theta_s)^+\,d\eta_s + \int_0^t \big(1 - (1-\bar\theta_s)(1-\bar\theta_s)^+\big)\,ds\bigg].$$
However, using Proposition 3.2, it follows that $\langle V^i, V^j\rangle_t = \delta_{ij}\,t$, which, by Lévy's characterization theorem, gives that $\{V_t\}$ is a standard $m$-dimensional $\{\hat{\mathcal{F}}_t\}$-Brownian motion. On $(\hat\Omega, \hat{\mathcal{F}}, \hat P, \{\hat{\mathcal{F}}_t\})$, let us consider the equation

(36)  $$\xi_t = \zeta + \int_0^t A(\eta_s, \xi_s)(1-\bar\theta_s)\,ds + \int_0^t \bar\theta_s B(\eta_s)\bar\alpha_s\,ds + \int_0^t D(\eta_s, \xi_s)\sqrt{1-\bar\theta_s}\,dV_s.$$

Since
$$\int_0^t D(\eta_s, \bar\xi_s)\sqrt{1-\bar\theta_s}\,dV_s = \int_0^t D(\eta_s, \bar\xi_s)\sqrt{(1-\bar\theta_s)(1-\bar\theta_s)^+}\,dW_{\eta_s},$$

0

it can be shown that {ξ t }, defined by (32), is a solution to (36). Let (Ω, G, Q) be the completion of the probability space  (Ω, F, P ). Denote by N the σ-field, generated by all Q-null sets. Introduce Gt = s>t (Fs ∨ N ). Then the probability space (Ω, G, Q, {Gt }) satisfies the usual hypotheses (item (i) in Definition 4.1). Clearly, ΓT is a {Gt }-stopping time and {(αt , θt )} is a {Gt }-progressively measurable process. Moreover, using Lemmas A.1 and A.2 in [15], it follows that {Vt } is a {Gt }-Brownian motion. Let us define the auxiliary control Ψ by    Ω, G, Q, {Gt }, {(αt , θt )}, {Vt }, {(ξ t , ηt ) }, ΓT . To complete the proof, it remains to be shown that the costs are equal: M[Ψ] = J[C]. Since ηΓT = T , we have by definition of {Γt } that ΓT − ηΓT = vTu . Therefore, (37)

EP [g(xT , vTu )] = EQ [g(ξΓT , ΓT − ηΓT )].

Denote by $\{\tau_n\}_{n\in\mathbb{N}^*}$ the sequence of $\{\mathcal{F}_t\}$-stopping times which exhausts the jumps of $\{\Gamma_t\}$. Since $\{\eta_t\}$ is a time change on $(\Omega, \mathcal{F}, P, \{\mathcal{F}_t\})$, and using [25, Lemma 10.5(a)], it follows that $\{\Gamma_{\tau_n}\}_{n\in\mathbb{N}^*}$ is a sequence of $\{\mathcal{F}_{\eta_t}\}$-stopping times. We have that $\{\Gamma_{\tau_n-} > t\} = \{\eta_t < \tau_n\} \in \mathcal{F}_{\eta_t}$; therefore, $\{\Gamma_{\tau_n-}\}_{n\in\mathbb{N}^*}$ is also a sequence of $\{\mathcal{F}_{\eta_t}\}$-stopping times. Remark that
$$\bigcup_{n=1}^{\infty} [\![\Gamma_{\tau_n-}, \Gamma_{\tau_n}]\!] \subset \{(t,\omega) \in \mathbb{R}_+ \times \Omega : \bar\theta_t = 1\}.$$
Define
$$\mathcal{D} := \{(t,\omega) \in \mathbb{R}_+ \times \Omega : \bar\theta_t = 1\} - \bigcup_{n=1}^{\infty} [\![\Gamma_{\tau_n-}, \Gamma_{\tau_n}]\!].$$


Consequently,

$$(\forall t \in [0,T]), \qquad u_t = \int_0^{\Gamma_t} I_{\{\bar\theta_s\,\ldots\}}\;\ldots$$
