
Optimal Advertising under Uncertainty with Carryover Effects Fausto Gozzi, Carlo Marinelli, Sergei Savin

no. 265

This work was produced with the support of the Collaborative Research Centre (Sonderforschungsbereich) 611 at the University of Bonn, funded by the Deutsche Forschungsgemeinschaft, and has been reproduced as a manuscript. Bonn, April 2006

Optimal advertising under uncertainty with carryover effects

Fausto Gozzi, Facoltà di Economia, Università degli Studi Sociali "Guido Carli", 00198 Roma, Italy. e-mail: [email protected]

Carlo Marinelli, Institut für Angewandte Mathematik, Universität Bonn, D-53115 Bonn, Germany. e-mail: [email protected]

Sergei Savin, Graduate School of Business, Columbia University, New York, NY 10027, USA. e-mail: [email protected]

March 1, 2006

Abstract We consider a class of dynamic advertising problems under uncertainty in the presence of carryover and distributed forgetting effects, generalizing the classical model of Nerlove and Arrow. In particular, we allow the dynamics of the product goodwill to depend on its past values, as well as previous advertising levels. The optimal advertising model is formulated as an infinite dimensional stochastic control problem to which we associate, through the dynamic programming principle, a Hamilton-Jacobi-Bellman (HJB) equation. In the absence of carryover advertising effects, the value function of the problem and the optimal advertising policy can be characterized (in some simple cases even explicitly) in terms of the solution of the associated HJB equation. In the general case, due to the lack of a solvability theory for the corresponding HJB equation, only partial results can be obtained. In particular, we fully characterize the value function and the optimal state-control pair in two special cases where the HJB equation admits a closed-form solution.

1 Introduction

Analysis of advertising policies has always occupied a front-and-center place in marketing research. The sheer size of the advertising market (over $143 billion in the US in 2005, Media Intelligence [30]) and the strong body of evidence of systematic over-advertising by firms across many industries (Aaker and Carman [1], Lodish, Abraham, Kalmenson, Livelsberger, Lubetkin, Richardson, and Stevens [27], Luo and Donthu [28], Herrington and Dempsey [21]) have caused a renewal of attention to the proper accounting for the so-called "carryover" or "distributed lag" advertising effects. The term "carryover" designates an empirically observed advertising feature under which the advertising influence on product sales or goodwill level is not immediate, but rather is spread over some period of time: according to a survey of recent empirical "carryover" research by Leone [26], delayed advertising effects can last between 6 and 9 months in different settings.

Dynamic models related to advertising have a long history that begins at least with the seminal papers of Vidale and Wolfe [34] and Nerlove and Arrow [31]. Since then a considerable amount of work has been devoted to the extension of these models, both in the monopolistic and the competitive settings, mainly under deterministic assumptions. Models with uncertainty have instead received relatively less attention (see Feichtinger, Hartl, and Sethi [11] for a review of the existing work up to 1994, Prasad and Sethi [32] for a Vidale-Wolfe-like model in the competitive setting, and Grosset and Viscolani [18] and Marinelli [29] for recent work on the case of a monopolistic firm). One of the first and most important sub-streams of the advertising literature was formed by papers focused on empirical studies of the distributed advertising lag (see e.g. Griliches [17], Bass [3], Bass and Parsons [5]). One of the important early empirical results was obtained by Bass and Clark [4], who established that the initially adopted models with monotone decreasing lags (see Koyck [23]) are often inferior in their explanatory power to models with more general lag distributions.
Despite the wide and growing empirical literature on the measurement of carryover effects, there are practically no analytical studies that incorporate a distributed lag structure into the optimal advertising modeling framework in the stochastic setting. The only papers dealing with optimal dynamic advertising with distributed lags we are aware of are the works of Bass and Parsons [5] (which provides a numerical solution to a discrete-time deterministic example), and of Hartl [19] and Hartl and Sethi [20] (which apply a version of the maximum principle in the deterministic setting). The creation of models which incorporate the treatment of "carryover" effects in stochastic settings has long been advocated in the advertising modeling literature (see e.g. [19], [11] and references therein). In this work we study a rather broad class of stochastic models deriving from that of Nerlove and Arrow [31], incorporating both advertising lags and distributed "churn" ([32]), or "forgetting", effects. Based on this model, we formulate an optimization program that seeks to maximize the goodwill level at a given time T > 0 net of the cumulative cost of advertising until T. This optimization problem is studied using techniques of stochastic optimal control in infinite dimensions. In particular, we specify the goodwill dynamics in terms of a controlled stochastic delay differential equation (SDDE), which can be rewritten as a regular stochastic differential equation (without delays) in a suitable Hilbert space. This allows us to associate to the original control problem for the SDDE an equivalent infinite dimensional control problem for the "lifted" stochastic equation. In the absence of carryover effects, we are able to solve the original problem in terms of the solution of the associated abstract problem, using the L² approach developed by Goldys and Gozzi [14] (see also Fuhrman and Tessitore [13] for an alternative approach).
On the other hand, in the presence of carryover, the problem becomes very challenging: none of the presently available techniques for the solution of infinite-dimensional stochastic control problems (with the possible exception of the viscosity approach) is applicable. Nonetheless, by using a suitable infinite-dimensional Markovian reformulation, we can obtain an associated HJB equation. Under specific assumptions on the objective function, we derive closed-form solutions for the HJB equation, which are shown to coincide with the respective value functions. Optimal advertising policies are then obtained in feedback form using a verification theorem for smooth solutions of the HJB equation. Other approaches (in addition to those in [14] and [13]) to optimal control problems for systems described by stochastic differential equations with delay in the state term are possible: for instance, see Elsanosi [8] and Larssen [24] for a more direct application of the dynamic programming principle without appealing to infinite dimensional analysis, and Kolmanovskiĭ and Shaĭkhet [22] for the linear-quadratic case. See also Elsanosi, Øksendal, and Sulem [10] for some solvable control problems of optimal harvesting, and Elsanosi and Larssen [9] for an application in financial mathematics. However, we would like to point out that none of the methods just mentioned applies to the control of stochastic differential equations with delay in the control terms.

The paper is organized as follows: in section 2 we formulate the optimal advertising problem as an optimal control problem for a stochastic delay differential equation with delays both in the state and the control. This non-Markovian optimization problem is then "lifted" in two different ways to infinite dimensional Markovian control problems in sections 3 and 4, which are then studied via the dynamic programming principle. In particular, in section 3 we consider the case of distributed forgetting (churn) in the absence of advertising carryover, and in section 4 our model is extended to include distributed lag effects.
We also obtain a sharper characterization of the value function and of the optimal trajectory-control pair for special choices of the reward and cost functions.

We conclude the introduction by fixing some notation and recalling some notions that will be needed. Given a function f : U → R, with U a convex subset of a Hilbert space with inner product ⟨·,·⟩, we denote the Legendre transform of f by f*(y) := sup_{x∈U} (⟨x,y⟩ − f(x)). Recall also that D⁻f*(y) = arg max_{x∈U} (⟨x,y⟩ − f(x)), where D⁻ stands for the subdifferential operator (see e.g. Barbu and Precupanu [2], p. 91). Throughout the paper, X will be the Hilbert space defined as X = R × L²([−r,0]; R), with inner product
$$\langle x, y \rangle = x_0 y_0 + \int_{-r}^0 x_1(\xi) y_1(\xi)\, d\xi$$
and norm
$$|x| = \Big( |x_0|^2 + \int_{-r}^0 |x_1(\xi)|^2\, d\xi \Big)^{1/2},$$
where r > 0, and x_0 and x_1(·) denote the R-valued and the L²([−r,0]; R)-valued components, respectively, of the generic element x of X.

2 The Model

Our general reference model for the dynamics of the stock of advertising goodwill y(t), 0 ≤ t ≤ T, of the product to be launched is given by the following controlled stochastic delay differential equation (SDDE), where z models the intensity of advertising spending:
$$\begin{cases} dy(t) = \Big[ a_0 y(t) + \int_{-r}^0 a_1(\xi) y(t+\xi)\, d\xi + b_0 z(t) + \int_{-r}^0 b_1(\xi) z(t+\xi)\, d\xi \Big] dt + \sigma\, dW_0(t), & 0 \le t \le T, \\ y(0) = \eta^0; \quad y(\xi) = \eta(\xi),\ z(\xi) = \delta(\xi) \quad \forall \xi \in [-r, 0], \end{cases} \qquad (1)$$
where the Brownian motion W_0 is defined on a filtered probability space (Ω, 𝓕, 𝔽 = (𝓕_t)_{t≥0}, P), with 𝔽 being the completion of the filtration generated by W_0, and z belongs to 𝓤 := L²_𝔽([0,T]; R₊), the space of square integrable non-negative processes adapted to 𝔽. Moreover, we shall assume that the following conditions are verified: (i) a_0 ≤ 0; (ii) a_1(·) ∈ L²([−r,0]; R); (iii) b_0 ≥ 0; (iv) b_1(·) ∈ L²([−r,0]; R₊); (v) η⁰ ≥ 0; (vi) η(·) ≥ 0, with η(0) = η⁰; (vii) δ(·) ≥ 0.

Here a_0 is a constant factor of image deterioration in the absence of advertising, b_0 is a constant advertising effectiveness factor, a_1(·) is the distribution of the forgetting time, and b_1(·) is the density function of the time lag between the advertising expenditure z and the corresponding effect on the goodwill level. Moreover, η⁰ is the level of goodwill at the beginning of the advertising campaign, and η(·) and δ(·) are the histories of the goodwill level and of the advertising expenditure, respectively, before time zero (one can assume η(·) = δ(·) = 0, for instance). Note that when a_1(·), b_1(·), and σ are identically zero, equation (1) reduces to the classical model of Nerlove and Arrow [31]. Finally, we define the objective functional
$$J(t, x; z(\cdot)) = \mathbb{E}_{t,x}\Big[ \varphi_0(y(T)) - \int_t^T h_0(z(s))\, ds \Big], \qquad (2)$$
where φ_0 is a concave utility function, h_0 is a convex cost function, and the dynamics of y is determined by (1). We also define the value function V for this problem as follows:
$$V(t, x) = \sup_{z(\cdot) \in \mathcal{U}} J(t, x; z(\cdot)).$$
We shall say that z* ∈ 𝓤 is an optimal control if V(t, x) = J(t, x; z*(·)). The problems we will deal with are the maximization of the objective functional J over all admissible controls 𝓤, and the characterization of the value function V and of the optimal control z*.
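Although the analysis below proceeds via dynamic programming in infinite dimensions, the state equation (1) itself is easy to simulate directly. The following sketch discretizes the delay integrals with left Riemann sums and takes an Euler-Maruyama step; all coefficient values, kernels, and the constant spending policy are illustrative assumptions, not part of the model:

```python
import math
import random

def simulate_goodwill(T=1.0, r=0.5, dt=0.001, a0=-1.0, b0=1.0, sigma=0.2,
                      a1=lambda xi: -0.5, b1=lambda xi: 1.0,
                      eta0=1.0, eta=lambda xi: 1.0, delta=lambda xi: 0.0,
                      z=lambda t: 0.5, seed=0):
    """Euler-Maruyama scheme for the SDDE (1).

    The histories eta (goodwill) and delta (spending) are sampled on a grid
    of step dt on [-r, 0]; the delay integrals become left Riemann sums.
    Returns the goodwill path y(0), y(dt), ..., y(T)."""
    rng = random.Random(seed)
    n_lag, n = int(round(r / dt)), int(round(T / dt))
    # buffers holding y and z on the last n_lag grid points before current time
    y = [eta(-r + k * dt) for k in range(n_lag)] + [eta0]
    zs = [delta(-r + k * dt) for k in range(n_lag)]
    for i in range(n):
        t = i * dt
        # int_{-r}^0 a1(xi) y(t+xi) dxi  and  int_{-r}^0 b1(xi) z(t+xi) dxi
        carry_y = dt * sum(a1(-r + k * dt) * y[-1 - n_lag + k] for k in range(n_lag))
        carry_z = dt * sum(b1(-r + k * dt) * zs[-n_lag + k] for k in range(n_lag))
        drift = a0 * y[-1] + carry_y + b0 * z(t) + carry_z
        y.append(y[-1] + drift * dt + sigma * rng.gauss(0.0, math.sqrt(dt)))
        zs.append(z(t))
    return y[n_lag:]
```

With a_1 = b_1 = 0 and σ = 0 the scheme reduces to plain Euler for the deterministic Nerlove-Arrow dynamics, which gives a simple sanity check.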


3 Advertising in the absence of carryover effects

In a model for the dynamics of goodwill with a distributed forgetting factor, but without lags in the effect of advertising expenditure, i.e. with b_1(·) = 0 in (1), it is possible to apply both the approach of Hamilton-Jacobi equations in L² spaces developed by Goldys and Gozzi [14] and the backward SDE approach of Fuhrman and Tessitore [13]. We follow here the first approach, showing that both the value function and the optimal advertising policy can be characterized in terms of the solution of a Hamilton-Jacobi-Bellman equation in infinite dimensions. In particular, we assume, for the sake of simplicity, that the goodwill evolves according to the following equation, where the distribution of the forgetting factor is concentrated at a point:
$$\begin{cases} dy(t) = [a_0 y(t) + a_1 y(t-r) + b_0 u_0(t)]\, dt + \sigma\, dW_0(t), & 0 \le t \le T, \\ y(0) = x; \quad y(\xi) = \eta(\xi) \quad \forall \xi \in [-r, 0]. \end{cases} \qquad (3)$$
The problem of interest is to maximize, over all positive strategies u_0(·) adapted to the filtration generated by y(·), the objective functional
$$J(t, x; u_0(\cdot)) = \mathbb{E}_{t,x}\Big[ \varphi_0(y(T)) - \int_t^T h_0(u_0(s))\, ds \Big],$$

with y obeying the SDDE (3). By the representation theorems for solutions of stochastic delay equations of Chojnowska-Michalik [6], one can associate to (3) the following stochastic evolution equation on the Hilbert space X:
$$\begin{cases} dY(t) = (AY(t) + \sqrt{Q}\, z(t))\, dt + \sqrt{Q}\, dW(t), \\ Y(0) = y, \end{cases} \qquad (4)$$
where A : D(A) ⊂ X → X is defined as A : (x_0, x_1(·)) ↦ (a_0 x_0 + a_1 x_1(−r), x_1′(·)) on its domain D(A) = R × W^{1,2}([−r,0]; R); z = (σ⁻¹ b_0 u_0, z_1(·)) ∈ R₊ × L²([−r,0]; R), with z_1(·) a fictitious control; Q : X → X is defined as Q : (x_0, x_1(·)) ↦ (σ² x_0, 0); W is an X-valued cylindrical Wiener process with W = (W_0, W_1), where W_1 is a (fictitious) cylindrical Wiener process taking values in L²([−r,0]; R). Finally, y = (x, η(·)). In particular, the X-valued mild solution Y(t) = (Y_0(t), Y_1(t)) of (4) is such that Y_0(t) solves the original stochastic delay equation (3). Defining
$$h(x_0, x_1(\cdot)) = -h_0(\sigma b_0^{-1} x_0), \qquad \varphi(x_0, x_1(\cdot)) = \varphi_0(x_0),$$
thanks to the above mentioned equivalence between the SDDE (3) and the abstract SDE (4), we are led to the equivalent problem of maximizing, over all strategies z : [0,T] → R₊ × L²([−r,0]; R) adapted to the filtration generated by Y(·), the objective functional
$$J(t, y; z(\cdot)) = \mathbb{E}_{t,y}\Big[ \varphi(Y(T)) + \int_t^T h(z(s))\, ds \Big], \qquad (5)$$


with Y subject to (4).

Remark 1 Note that the operator A just introduced does not coincide with the A introduced in section 4. In fact, A here is exactly the A* of section 4. Similarly, the initial datum y of this section differs from that of section 4. Note also that the reformulation carried out in this section does not extend to the more general case of delay in the control, which explains why the more elaborate reformulation of section 4 is needed.

The operator A is the infinitesimal generator of a strongly continuous semigroup S(t) (see again Chojnowska-Michalik [6]). More precisely, one has S(t)(x_0, x_1(·)) = (y(t), y(t+ξ)|_{ξ∈[−r,0]}), where y(·) is the solution of the deterministic delay equation
$$\begin{cases} \dfrac{dy(t)}{dt} = a_0 y(t) + a_1 y(t-r), & 0 \le t \le T, \\ y(0) = x_0; \quad y(\xi) = x_1(\xi) \quad \forall \xi \in [-r, 0]. \end{cases} \qquad (6)$$

In order to apply the results of [14], we need to assume conditions on the coefficients of (3) such that the uncontrolled version of (4), i.e.
$$\begin{cases} dY(t) = AY(t)\, dt + \sqrt{Q}\, dW(t), \\ Y(0) = y, \end{cases} \qquad (7)$$
admits an invariant measure. It is known (see Da Prato and Zabczyk [7]) that
$$a_0 < 1, \qquad a_0 < -a_1 < \sqrt{\gamma^2 + a_0^2}, \qquad (8)$$
where γ cot γ = a_0, γ ∈ ]0, π[, ensures that (7) admits a unique non-degenerate invariant measure µ.

Remark 2 The deterioration factor a_0 is always assumed to be negative, hence the first condition in (8) is not a real constraint. In general, however, it is not clear what sign a_1 should have. If a_1 is also negative, i.e. it can again be interpreted as a deterioration factor, condition (8) says that a_1 cannot be "much more negative" than a_0. On the other hand, if a_1 is positive, then the second condition in (8) implies that the improvement effect as measured by a_1 cannot exceed the deterioration effect as measured by |a_0|. Thus it is fair to say that in both cases condition (8) appears not really restrictive.

Let us now define the Hamiltonians H_0, H : X → R as
$$H_0(p) = \sup_{z \in U} \big( \langle \sqrt{Q}\, z, p \rangle + h(z) \big) \qquad (9)$$
and H(p) = H_0(Q^{-1/2} p). The Hamilton-Jacobi-Bellman equation associated to the control problem (5) can be written as
$$\begin{cases} v_t + \tfrac{1}{2} \operatorname{Tr}(Q v_{xx}) + \langle Ax, v_x \rangle + H_0(v_x) = 0, \\ v(T, x) = \varphi(x). \end{cases} \qquad (10)$$
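For concrete coefficients, condition (8) can be checked numerically: on ]0, π[ the map γ ↦ γ cot γ decreases from 1 to −∞, so γ can be found by bisection. A small sketch (the sample values of a_0 and a_1 in the usage below are illustrative):

```python
import math

def gamma_root(a0, tol=1e-12):
    """Solve gamma * cot(gamma) = a0 on ]0, pi[ by bisection; the left-hand
    side decreases monotonically from 1 to -infinity on this interval."""
    f = lambda g: g * math.cos(g) / math.sin(g) - a0
    lo, hi = 1e-9, math.pi - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid        # f is positive to the left of the root
        else:
            hi = mid
    return 0.5 * (lo + hi)

def condition_8_holds(a0, a1):
    """Check condition (8): a0 < 1 and a0 < -a1 < sqrt(gamma^2 + a0^2)."""
    g = gamma_root(a0)
    return a0 < 1 and a0 < -a1 < math.sqrt(g * g + a0 * a0)
```

For instance, with a_0 = −1 one gets γ ≈ 2.03, so a_1 = −0.5 satisfies (8) while a_1 = −5 does not.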

Remark 3 The HJB equation (10) is "genuinely" infinite dimensional, i.e. it reduces to a finite dimensional one only in very special cases. For example, by the results of Larssen and Risebro [25], (10) reduces to a finite dimensional PDE if and only if a_0 = −a_1. However, under this assumption, we cannot guarantee the existence of a non-degenerate invariant measure for the Ornstein-Uhlenbeck semigroup associated to (7). Even more extreme would be the situation of distributed forgetting time: in this case the HJB equation is finite dimensional only if the term accounting for distributed forgetting vanishes altogether!

We have the following result, which is a slight modification of theorems 3.7 and 5.7 of [14].

Proposition 4 If the Hamiltonian H is Lipschitz, φ ∈ L²(X, µ), and the operator A satisfies (8), then (10) admits a unique mild solution v in the space L²(X, µ). Moreover, v coincides (µ-a.e.) with the value function
$$V(t, y) = \sup_{z \in \mathcal{U}} \mathbb{E}^{t,y}\Big[ \varphi(Y(T)) + \int_t^T h(z(s))\, ds \Big].$$
Moreover, if there exists a solution Y*(·) of the closed-loop differential inclusion
$$\begin{cases} dY(t) \in \big[ AY(t) + \sqrt{Q}\, D^- H(\widetilde{D}_Q v(t, Y(t))) \big]\, dt + \sqrt{Q}\, dW(t), \\ Y(0) = y, \end{cases} \qquad (11)$$
then the following feedback relation defines an optimal control:
$$z^*(t) \in D^- H(\widetilde{D}_Q v(t, Y^*(t))). \qquad (12)$$

The gradient operator D̃_Q is, roughly speaking, a "weakly closable" extension of the gradient Q^{1/2}D acting on W^{1,2}_Q(X, µ). For the exact definition and construction of D̃_Q we refer the reader to [14]. Note that H is Lipschitz if, e.g., h is bounded below and |z| is bounded.

We now make explicit the Hamiltonian (9), the HJB equation (10), the feedback expression for the optimal control (12), and the closed loop equation (11), using the definitions of the operators A, Q and H_0. In particular, the Hamiltonian H_0(p) does not depend on p_1, since
$$H_0(p) = H_0(p_0) = \sup_{z_0 \in U} \big( \sigma z_0 p_0 - h_0(\sigma b_0^{-1} z_0) \big).$$
Thus, recalling that H(q) = H_0(Q^{-1/2} q), we have
$$H(q) = H(q_0) = \sup_{z_0 \in U} \big( z_0 q_0 - h_0(\sigma b_0^{-1} z_0) \big) = h_1^*(q_0),$$
where we have set h_1(x) = h_0(σ b_0⁻¹ x). Therefore one has
$$\arg\max_{z_0 \in U} \big( z_0 q_0 - h_0(\sigma b_0^{-1} z_0) \big) = D^- H(q_0).$$
Moreover, the HJB equation becomes
$$\begin{cases} \dfrac{\partial v}{\partial t} + \dfrac{1}{2}\sigma^2 \dfrac{\partial^2 v}{\partial x_0^2} + (a_0 x_0 + a_1 x_1(-r)) \dfrac{\partial v}{\partial x_0} + \int_{-r}^0 x_1'(\xi)\, \dfrac{\partial v}{\partial x_1}(\xi)\, d\xi + H\Big( \sigma \dfrac{\partial v}{\partial x_0} \Big) = 0, \\ v(T, x_0, x_1) = \varphi_0(x_0). \end{cases} \qquad (13)$$
The feedback control (12) can be rewritten as
$$z^*(t) \in D_0^- H\Big( \sigma \frac{\partial v}{\partial x_0}(t, Y_0^*(t), Y_1^*(t)) \Big),$$
where D_0⁻ stands for the subdifferential operator with respect to the R-valued component, and Y_0*(t), Y_1*(t) solve the closed loop differential inclusions
$$\begin{cases} dY_0(t) \in \Big[ a_0 Y_0(t) + a_1 Y_0(t-r) + \sigma D_0^- H\Big( \sigma \frac{\partial v}{\partial x_0}(t, Y_0(t), Y_1(t)) \Big) \Big]\, dt + \sigma\, dW_0(t), \\ dY_1(t)(\xi) = \dfrac{d}{d\xi} Y_1(t)(\xi)\, dt. \end{cases} \qquad (14)$$

3.1 Properties of the value function

Proposition 5 If φ_0 is concave, then the value function V(t, x) is concave with respect to x.

Proof. Let x¹, x² ∈ X. Then
$$\begin{aligned} \lambda V(t, x^1) + (1-\lambda) V(t, x^2) &= \lambda \sup_{z \in \mathcal{U}} \mathbb{E}\Big[ \int_t^T h(z(s))\, ds + \varphi(x(T; x^1, z)) \Big] \\ &\quad + (1-\lambda) \sup_{z \in \mathcal{U}} \mathbb{E}\Big[ \int_t^T h(z(s))\, ds + \varphi(x(T; x^2, z)) \Big] \\ &= \sup_{z^1, z^2 \in \mathcal{U}} \mathbb{E}\Big[ \int_t^T \big[ \lambda h(z^1(s)) + (1-\lambda) h(z^2(s)) \big]\, ds \\ &\qquad\qquad + \lambda \varphi(x(T; x^1, z^1)) + (1-\lambda) \varphi(x(T; x^2, z^2)) \Big]. \end{aligned}$$
Since 𝓤 is a convex set and h is concave, z_λ := λz¹ + (1−λ)z² is admissible for any choice of z¹, z² ∈ 𝓤, and one has
$$h(z_\lambda(s)) \ge \lambda h(z^1(s)) + (1-\lambda) h(z^2(s)). \qquad (15)$$
Moreover, by linearity of the state equation, it is easy to prove that x(T; x_λ, z_λ) = λx(T; x¹, z¹) + (1−λ)x(T; x², z²), hence, by the concavity of φ,
$$\varphi(x(T; x_\lambda, z_\lambda)) \ge \lambda \varphi(x(T; x^1, z^1)) + (1-\lambda) \varphi(x(T; x^2, z^2)). \qquad (16)$$
Therefore, as a consequence of (15) and (16), we obtain
$$\begin{aligned} \lambda V(t, x^1) + (1-\lambda) V(t, x^2) &\le \sup_{z_\lambda \in \mathcal{U}} \mathbb{E}\Big[ \int_t^T h(z_\lambda(s))\, ds + \varphi(x(T; x_\lambda, z_\lambda)) \Big] \\ &\le \sup_{z \in \mathcal{U}} \mathbb{E}\Big[ \int_t^T h(z(s))\, ds + \varphi(x(T; x_\lambda, z)) \Big] \\ &= V(t, \lambda x^1 + (1-\lambda) x^2), \end{aligned}$$
and the proof is finished.

Proposition 6 If a_1 ≥ 0 and φ_0 is strictly increasing, then the value function V(t, x) is strictly increasing with respect to x in the following sense: given x¹, x² ∈ X with x_0^1 > x_0^2 and x_1^1(ξ) > x_1^2(ξ) ξ-a.e., then V(t, x¹) > V(t, x²) for every t ∈ [0, T]. In particular, V(t, x¹) = V(t, x²) if and only if x¹ = x².

Proof. Let x¹ ≥ x² in the sense just defined. One has
$$J(t, x^1; z) - J(t, x^2; z) = \mathbb{E}\big[ \varphi(Y(T; x^1, z)) - \varphi(Y(T; x^2, z)) \big] = \mathbb{E}\big[ \varphi(e^{A(T-t)} x^1 + \zeta) - \varphi(e^{A(T-t)} x^2 + \zeta) \big],$$
where ζ := ∫_t^T e^{A(T−s)} √Q z(s) ds + ∫_t^T e^{A(T−s)} √Q dW(s). The assumption a_1 ≥ 0, together with (6), implies that the semigroup generated by A is positivity preserving, i.e. x¹ ≥ x² implies e^{A(T−t)} x¹ ≥ e^{A(T−t)} x². Therefore, by the monotonicity of φ_0, one also has φ(e^{A(T−t)} x¹ + ζ) ≥ φ(e^{A(T−t)} x² + ζ) a.s., hence J(t, x¹; z) ≥ J(t, x²; z), and finally V(t, x¹) = sup_{z∈𝓤} J(t, x¹; z) ≥ sup_{z∈𝓤} J(t, x²; z) = V(t, x²).

Remark 7 In the above proof the positivity preserving property of the semigroup e^{tA} is crucial, and the assumption a_1 ≥ 0 is "sharp" in the following sense: if a_1 < 0, one can find x > 0 such that e^{tA} inverts the sign, i.e. e^{tA} x < 0. Moreover, under the assumptions of the theorem, the value function is increasing with respect to the real valued component of the initial datum. By this we mean that given x¹ ≥ x² with x_0^1 > x_0^2 and x_1^1 = x_1^2 a.e., then V(t, x¹) > V(t, x²). Therefore one also has D_0⁻V > 0. The subdifferential D⁻ can be replaced by the derivative if we can guarantee that V is continuously differentiable with respect to x_0. Conditions for the continuous differentiability of V with respect to x are the subject of the following proposition.

Proposition 8 If φ_0 ∈ C¹(R) and h_0 is strictly convex, then V ∈ C^{0,1}(R × X); in particular, the value function V(t, x) is continuously differentiable with respect to x.

Proof.
By theorem 26.3 of Rockafellar [33], the strict convexity of h_0 implies that H is continuously differentiable. Then the smoothness of V follows from the regularity results for solutions of semilinear partial differential equations in Hilbert spaces obtained through the backward stochastic differential equations approach: see theorem 4.3.1 of Fuhrman and Tessitore [12] (see also [13]).


Remark 9 Note that if the Hamiltonian H and the terminal condition φ that appear in the HJB equation are smooth, then we can dispense with the assumption about the existence of an invariant measure for the uncontrolled state equation, as follows from the BSDE approach of [12]. The approach we used here (cf. [14]), while requiring the existence of the above mentioned invariant measure, allows for more singular data (it is enough, for instance, to have φ ∈ L²(X, µ), where µ is an invariant measure of (7)).

3.2 Special cases

In this subsection we assume U = [0, R], where R can also be infinity. In general, obtaining explicit representations of the value function V as a solution of (10) and of the optimal control (12) is a very difficult task. However, due to the special structure of the operators A and Q, we can obtain somewhat less abstract expressions. For instance, let us further assume that the cost of the control is given by h_0(z_0) = βz_0², so that
$$H_0(p) = \sup_{0 \le z_0 \le \tilde{R}} \big( \langle \sqrt{Q}\, z, p \rangle + h(z) \big) = \sup_{0 \le z_0 \le \tilde{R}} \big( \sigma p_0 z_0 - \tilde{\beta} z_0^2 \big),$$
where R̃ = σ⁻¹ b_0 R and β̃ := σ² b_0⁻² β. Then one has
$$H_0(p) = \begin{cases} 0, & p_0 < 0, \\ \dfrac{(\sigma p_0)^2}{4\tilde{\beta}}, & 0 \le p_0 \le \dfrac{2\tilde{\beta}\tilde{R}}{\sigma}, \\ \sigma p_0 \tilde{R} - \tilde{\beta}\tilde{R}^2, & p_0 > \dfrac{2\tilde{\beta}\tilde{R}}{\sigma}, \end{cases}$$
or equivalently
$$H(q) = \begin{cases} 0, & q_0 < 0, \\ q_0^2 / 4\tilde{\beta}, & 0 \le q_0 \le 2\tilde{\beta}\tilde{R}, \\ q_0 \tilde{R} - \tilde{\beta}\tilde{R}^2, & q_0 > 2\tilde{\beta}\tilde{R}. \end{cases}$$
One also has
$$DH(q) = \begin{cases} 0, & q_0 < 0, \\ q_0 / 2\tilde{\beta}, & 0 \le q_0 \le 2\tilde{\beta}\tilde{R}, \\ \tilde{R}, & q_0 > 2\tilde{\beta}\tilde{R}. \end{cases}$$
Applying the results of proposition 4, one obtains the following expression for the feedback control:
$$z^*(t) = DH(\sigma D_0 V(t, Y_0^*(t), Y_1^*(t))) = \begin{cases} 0, & D_0 V^* < 0, \\ \dfrac{\sigma D_0 V^*}{2\tilde{\beta}}, & 0 \le \sigma D_0 V^* \le 2\tilde{\beta}\tilde{R}, \\ \tilde{R}, & \sigma D_0 V^* > 2\tilde{\beta}\tilde{R}, \end{cases}$$
where we have set, for simplicity of notation, V* = V(t, Y_0*(t), Y_1*(t)). Finally, after obvious simplifications,
$$u_0^*(t) = \sigma b_0^{-1} z_0^*(t) = \begin{cases} 0, & D_0 V^* < 0, \\ \dfrac{b_0 D_0 V^*}{2\beta}, & 0 \le D_0 V^* \le 2 b_0^{-1} \beta R, \\ R, & D_0 V^* > 2 b_0^{-1} \beta R. \end{cases}$$
Note that, as we observed above, if φ_0 is increasing, then we have D_0⁻V* ≥ 0, hence the optimal control is either linear in D_0 V* or constant for D_0 V* over a threshold.

Let us now consider, as a further example, the case of h_0 linear, i.e. h_0 = βz_0. One has, with the same notation as above,
$$H_0(p) = \sup_{0 \le z_0 \le \tilde{R}} \big( \sigma p_0 z_0 - \tilde{\beta} z_0 \big) = \begin{cases} 0, & p_0 \le \tilde{\beta}/\sigma, \\ (\sigma p_0 - \tilde{\beta}) \tilde{R}, & p_0 > \tilde{\beta}/\sigma, \end{cases}$$
and
$$H(q) = \begin{cases} 0, & q_0 \le \tilde{\beta}, \\ (q_0 - \tilde{\beta}) \tilde{R}, & q_0 > \tilde{\beta}, \end{cases}$$
thus
$$D^- H(q) = \begin{cases} 0, & q_0 < \tilde{\beta}, \\ [0, \tilde{R}], & q_0 = \tilde{\beta}, \\ \tilde{R}, & q_0 > \tilde{\beta}. \end{cases}$$
As is typical of control problems with a linear cost function, an application of proposition 4 yields the following bang-bang expression for the optimal strategy:
$$z^*(t) \in D^- H(\sigma D_0 V(t, Y_0^*(t), Y_1^*(t))) = \begin{cases} 0, & \sigma D_0 V^* < \tilde{\beta}, \\ \rho, & \sigma D_0 V^* = \tilde{\beta}, \\ \tilde{R}, & \sigma D_0 V^* > \tilde{\beta}, \end{cases}$$
where ρ is an arbitrary element of [0, R̃]. Finally,
$$u_0^*(t) = \sigma b_0^{-1} z_0^*(t) = \begin{cases} 0, & D_0 V^* < \sigma b_0^{-2} \beta, \\ \rho, & D_0 V^* = \sigma b_0^{-2} \beta, \\ R, & D_0 V^* > \sigma b_0^{-2} \beta. \end{cases}$$
In general, a fully explicit solution of the HJB equation for arbitrary φ_0 is not available, hence the above expressions of the optimal control strategy in terms of the value function, and their corresponding qualitative properties, are the "best" that one can expect, at least in the cases we have considered. However, in some very special situations, the HJB equation is solvable explicitly: we refer to the following section, where the case of φ_0 linear, h_0 quadratic, and R = +∞ is explicitly solved.
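Both feedback rules reduce to simple scalar maps of D_0 V*, which can be coded directly; a sketch (the function names are ours, and at the bang-bang threshold, where any value in [0, R] is optimal, we return 0 by convention):

```python
def u_star_quadratic(D0V, b0, beta, R):
    """Feedback map for quadratic cost h0(u) = beta * u^2:
    zero for negative D0V, linear in D0V up to the saturation level R."""
    if D0V < 0:
        return 0.0
    return min(b0 * D0V / (2.0 * beta), R)

def u_star_linear(D0V, b0, beta, sigma, R):
    """Bang-bang feedback map for linear cost, with threshold
    sigma * b0^{-2} * beta as in the text; any value in [0, R] is optimal
    exactly at the threshold (0 is returned there by convention)."""
    return R if D0V > sigma * beta / (b0 * b0) else 0.0
```

Note that in `u_star_quadratic` the clipping at R occurs exactly at D_0 V* = 2 b_0⁻¹ β R, matching the threshold above.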


4 Advertising in the presence of carryover effects

In this section we deal with the general problem introduced in section 2, with both a_1 and b_1 different from zero; that is, we explicitly assume here that the effect of advertising on the goodwill is subject to delays. The first main difference with respect to the case of no delays in the effect of advertising, treated in the previous section, is that a completely different reformulation in infinite dimensions is needed. In particular, an equivalence theorem holds (proposition 10 below), for the proof of which we refer to Gozzi and Marinelli [15]. Let us define an operator A : D(A) ⊂ X → X as follows:
$$A : (x_0, x_1(\xi)) \mapsto \Big( a_0 x_0 + x_1(0),\ a_1(\xi) x_0 - \frac{dx_1(\xi)}{d\xi} \Big) \quad \text{a.e. } \xi \in [-r, 0],$$
with domain
$$D(A) = \big\{ (x_0, x_1(\cdot)) \in X : x_1(\cdot) \in W^{1,2}([-r, 0]; \mathbb{R}),\ x_1(-r) = 0 \big\}.$$
Let us also define the bounded linear control operator B : U → X as
$$B : u \mapsto \big( b_0 u,\ b_1(\cdot) u \big), \qquad (17)$$
where U := R₊, and finally the operator Q : R → X as Q : x_0 ↦ (σ² x_0, 0). Sometimes it will be useful to identify the operator B with the element (b_0, b_1(·)) ∈ X.

Proposition 10 Let Y(·) be the weak solution of the abstract evolution equation
$$\begin{cases} dY(s) = (AY(s) + Bz(s))\, ds + \sqrt{Q}\, dW_0(s), \\ Y(0) = x \in X, \end{cases} \qquad (18)$$
with arbitrary initial datum x ∈ X and control z(·) ∈ 𝓤. Then, for t ≥ r, one has, P-a.s., Y(t) = M(Y_0(t), Y_0(t + ·), z(t + ·)), where
$$M : X \times L^2([-r, 0]; \mathbb{R}) \to X, \qquad (x_0, x_1(\cdot), v(\cdot)) \mapsto (x_0, m(\cdot)),$$
$$m(\xi) := \int_{-r}^{\xi} a_1(\zeta) x_1(\zeta - \xi)\, d\zeta + \int_{-r}^{\xi} b_1(\zeta) v(\zeta - \xi)\, d\zeta.$$
Moreover, let {y(t), t ≥ −r} be a continuous solution of the stochastic delay differential equation (1), and let Y(·) be the weak solution of the abstract evolution equation (18) with initial condition x = M(η⁰, η(·), δ(·)). Then, for t ≥ 0, one has, P-a.s., Y(t) = M(y(t), y(t + ·), z(t + ·)), hence y(t) = Y_0(t), P-a.s., for all t ≥ 0.
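The map M of Proposition 10 involves only two one-dimensional integrals, so it is straightforward to evaluate numerically; a sketch using the trapezoidal rule (the grid size n is an arbitrary choice):

```python
def M_map(x0, x1, v, a1, b1, r, n=2000):
    """Evaluate M(x0, x1, v) = (x0, m(.)) of Proposition 10, where
    m(xi) = int_{-r}^{xi} a1(z) x1(z - xi) dz + int_{-r}^{xi} b1(z) v(z - xi) dz,
    with each integral approximated by the trapezoidal rule on [-r, xi]."""
    def m(xi):
        k = max(2, int(n * (xi + r) / r) + 1)    # number of grid points
        h = (xi + r) / (k - 1)
        f = [a1(-r + j * h) * x1(-r + j * h - xi)
             + b1(-r + j * h) * v(-r + j * h - xi) for j in range(k)]
        return h * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])
    return x0, m
```

Note that the arguments z − ξ of x1 and v stay in [−r, 0] for ζ ∈ [−r, ξ], so only the given histories are needed.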

Using this equivalence result, we can now give a Hilbert space formulation of our problem. In particular, for 0 ≤ t ≤ T, X is as before our state space, the control space is U = R₊, and the space of U-valued adapted strategies is denoted again by 𝓤. Then our aim is to maximize over 𝓤 the objective functional
$$J(t, x; z(\cdot)) = \mathbb{E}^{t,x}\Big[ \varphi(Y(T)) + \int_t^T h(z(s))\, ds \Big], \qquad (19)$$
where the functions h : U → R and φ : X → R are defined as
$$h(z) = -h_0(z), \qquad \varphi(x_0, x_1(\cdot)) = \varphi_0(x_0),$$
and Y follows the dynamics (18). The value function is defined, as before, as V(t, x) = sup_{z(·)∈𝓤} J(t, x; z(·)), and we denote again by (Y*, z*) an optimal pair. The HJB equation associated to this problem can be written as
$$\begin{cases} v_t + \tfrac{1}{2} \operatorname{Tr}(Q v_{xx}) + \langle Ax, v_x \rangle + H_0(v_x) = 0, \\ v(T) = \varphi, \end{cases} \qquad (20)$$
where H_0(p) = sup_{z∈U} (⟨Bz, p⟩ + h(z)). If the cost function h is one of the two cases considered in the previous section, we can use similar arguments to obtain less abstract expressions for the Hamiltonian H_0. The main difference would be that in the present case an explicit dependence on the control operator B, hence on the integrated history of the control, will appear. However, knowing the Hamiltonian in this case does not allow us to deduce the form of the feedback control in terms of the value function, because we cannot prove existence and uniqueness of regular solutions of the HJB equation, hence we cannot formulate a corresponding verification theorem under general assumptions. Nevertheless, if we know a priori that a smooth solution exists, then we can apply the verification theorem proved in [15]. Here we limit ourselves to the discussion of a particular case, for which we obtain a smooth solution in closed form, and hence we can fully characterize the optimal strategy.

4.1 The case of quadratic cost and linear reward

Let us assume h(z) = −βz² and φ(x) = γx_0, with β, γ > 0. Then the current-value Hamiltonian is
$$H_{CV}(p, z) = \langle Bz, p \rangle + h(z) = \langle B, p \rangle z - \beta z^2,$$
and
$$H_0(p) = \sup_{z \in U} H_{CV}(p, z) = \begin{cases} \dfrac{\langle B, p \rangle^2}{4\beta}, & \langle B, p \rangle \ge 0, \\ 0, & \langle B, p \rangle < 0, \end{cases}$$
or equivalently, in more compact notation,
$$H_0(p) = \frac{(\langle B, p \rangle^+)^2}{4\beta}.$$
We conjecture a solution (in the integral sense, cf. [15]) of the HJB equation (20) of the form
$$v(t, x) = \langle w(t), x \rangle + c(t), \qquad t \in [0, T],\ x \in X,$$
where w(·) : [0, T] → X and c(·) : [0, T] → R are given functions whose properties we will study below. Arguing as in [15], we obtain that w = (w_0, w_1) satisfies
$$\begin{cases} w_0'(t) + a_0 w_0(t) + \int_{-r}^0 a_1(\xi) w_1(t, \xi)\, d\xi = 0, & t \in [0, T[, \\ w_0(T) = \gamma, \end{cases} \qquad (21)$$
and
$$w_1(t, \xi) = w_0(t - \xi)\, I_{\{t - \xi \in [0, T]\}}, \qquad (22)$$
from which one can solve equation (21), obtaining w_0(·), and finally
$$c(t) = \int_t^T \frac{(\langle B, w(s) \rangle^+)^2}{4\beta}\, ds, \qquad t \in [0, T].$$
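When integrated backward from the terminal condition w_0(T) = γ, the system (21)-(22) only ever refers to values of w_0 at later times (since w_1(t, ξ) = w_0(t − ξ) with ξ ≤ 0), so it can be solved by a backward marching scheme; a sketch with an Euler step (the step size and the scheme are our choices):

```python
import math

def solve_w0(T, r, a0, a1, gamma, dt=1e-3):
    """Backward Euler integration of (21)-(22): returns w0 on the grid
    0, dt, ..., T. The delay integral uses w1(t, xi) = w0(t - xi) 1_{t-xi<=T},
    which only involves already-computed later values of w0."""
    n, n_lag = int(round(T / dt)), int(round(r / dt))
    w = [0.0] * (n + 1)
    w[n] = gamma                       # terminal condition w0(T) = gamma
    for i in range(n, 0, -1):          # march from t = i*dt down to (i-1)*dt
        # int_{-r}^0 a1(xi) w0(t - xi) dxi, with t - xi = (i + k) * dt
        lag = dt * sum(a1(-k * dt) * w[i + k]
                       for k in range(1, n_lag + 1) if i + k <= n)
        w[i - 1] = w[i] + dt * (a0 * w[i] + lag)    # w0' = -a0 w0 - lag
    return w
```

For a_1 ≡ 0 this reproduces (up to the discretization error) the explicit solution w_0(t) = γe^{(T−t)a_0} obtained in section 4.1.2.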

Since a maximizer of the current-value Hamiltonian H_{CV} is given by z* = ⟨B, p⟩⁺/(2β), it is immediately seen that the control
$$z^*(t) = \frac{\langle B, v_x(t) \rangle^+}{2\beta} = \frac{\langle B, w(t) \rangle^+}{2\beta}, \qquad t \in [0, T],$$

is admissible; hence one can apply the verification theorem in [15], concluding that z*(·) is optimal.

4.1.1 Qualitative properties of the optimal pair and of the value function

We can also give a rather simple characterization of the optimal trajectory, which can be numerically approximated simply by solving a linear ODE with delay. In particular, let w = (w_0, w_1) be the solution of (21)-(22). Then, setting z*(t) = (1/2β)⟨B, w(t)⟩⁺, the optimal trajectory is the R-valued component Y_0 of the (mild) solution of the abstract SDE
$$dY(t) = [AY(t) + Bz^*(t)]\, dt + \sqrt{Q}\, dW(t),$$
which is given by
$$Y(t) = e^{tA} Y(0) + \int_0^t e^{(t-s)A} B z^*(s)\, ds + \int_0^t e^{(t-s)A} \sqrt{Q}\, dW(s).$$
In particular, Y is an X-valued Gaussian process with mean
$$\mu_t = e^{tA} Y(0) + \int_0^t e^{(t-s)A} B z^*(s)\, ds$$

and covariance operator
$$Q_t = \int_0^t e^{(t-s)A} Q\, e^{(t-s)A^*}\, ds.$$

It follows that Y_0 is a Gaussian process itself, with mean
$$\mathbb{E} Y_0(t) = \langle \mu_t, e_1 \rangle_X = \big\langle e^{tA} Y(0), e_1 \big\rangle_X + \Big\langle \int_0^t e^{(t-s)A} B z^*(s)\, ds,\ e_1 \Big\rangle_X = \big\langle Y(0), e^{tA^*} e_1 \big\rangle_X + \int_0^t \big\langle B z^*(s), e^{(t-s)A^*} e_1 \big\rangle_X\, ds, \qquad (23)$$
where e_1 = (1, 0) ∈ X. Since Y(0) is given as in Proposition 10 and Bz*(·) is also easy to compute (z* is one dimensional and B is just multiplication by a fixed vector in X), we are left with the problem of computing e^{tA*} e_1. However, as one can prove by a direct calculation, the semigroup e^{tA*} is given by
$$e^{tA^*}(x_0, x_1(\cdot)) = \big( \phi(t),\ \phi(t + \xi)|_{\xi \in [-r, 0]} \big),$$
where φ(·) solves the linear ODE with delay
$$\begin{cases} \dfrac{d\phi(t)}{dt} = a_0 \phi(t) + \int_{-r}^0 a_1(\xi) \phi(t + \xi)\, d\xi, & 0 \le t \le T, \\ \phi(0) = x_0; \quad \phi(\xi) = x_1(\xi) \quad \forall \xi \in [-r, 0]. \end{cases} \qquad (24)$$
Therefore e^{tA*} e_1 is given by (φ(t), φ(t + ξ)|_{ξ∈[−r,0]}), where φ solves (24) with initial condition x_0 = 1, x_1(·) = 0. Such a φ can be computed numerically by discretizing (24), and then 𝔼Y_0(t) can be obtained by approximating the integrals in (23) with finite sums. Analogously, one can write the variance of the optimal trajectory in such a way that it can be easily approximated by numerical methods. In particular, one has
$$\operatorname{Var} Y_0(t) = \langle Q_t e_1, e_1 \rangle = \Big\langle \int_0^t e^{(t-s)A} Q e^{(t-s)A^*}\, ds\ e_1,\ e_1 \Big\rangle = \int_0^t \big\langle e^{(t-s)A} Q e^{(t-s)A^*} e_1, e_1 \big\rangle\, ds = \int_0^t \big\langle \sqrt{Q}\, e^{(t-s)A^*} e_1, \sqrt{Q}\, e^{(t-s)A^*} e_1 \big\rangle\, ds = \int_0^t \big| \sqrt{Q}\, e^{(t-s)A^*} e_1 \big|^2\, ds. \qquad (25)$$

(t−s)A∗

Setting ψ(s) = e has

e1 , which can be approximated as indicated before, one finally Z t 2 Var Y0 (t) = σ ψ(s)2 ds. 0
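To make the numerical recipe concrete, here is a minimal sketch of the scheme described above: an explicit Euler discretization of the delay ODE (24) with initial condition $x_0 = 1$, $x_1(\cdot) = 0$, followed by a finite-sum approximation of $\operatorname{Var} Y_0(t) = \sigma^2\int_0^t \psi(s)^2\,ds$. All parameter values (`a0`, the constant kernel value for `a1`, `r`, `T`, `sigma`) are illustrative placeholders, not taken from the model calibration.

```python
import numpy as np

# Illustrative placeholder parameters (not from the paper)
a0, r, T, sigma = -0.5, 1.0, 5.0, 0.2
a1_const = 0.1          # constant delay kernel a1(xi) = a1_const, for illustration

dt = 1e-3
n_lag = int(round(r / dt))   # grid points covering the lag window [-r, 0)
n = int(round(T / dt))

# phi on a grid of [-r, T]; index n_lag corresponds to time 0
phi = np.zeros(n_lag + n + 1)
phi[n_lag] = 1.0             # phi(0) = x_0 = 1, phi = 0 on [-r, 0)

for k in range(n):
    # left Riemann sum for the distributed-delay term int_{-r}^0 a1(xi) phi(t_k + xi) dxi
    delay_term = a1_const * dt * phi[k : k + n_lag].sum()
    phi[n_lag + k + 1] = phi[n_lag + k] + dt * (a0 * phi[n_lag + k] + delay_term)

# Var Y0(T) = sigma^2 * int_0^T psi(s)^2 ds with psi(s) = phi(T - s),
# i.e. sigma^2 * int_0^T phi(u)^2 du, approximated by a Riemann sum
var_T = sigma**2 * dt * (phi[n_lag : n_lag + n] ** 2).sum()
```

With $a_1 \equiv 0$ the same scheme reproduces $\phi(t) = e^{a_0 t}$, and hence the closed-form variance derived below in Section 4.1.2; the discretization error vanishes as `dt` tends to zero.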

One can also perform simple comparative statics on the value function. For instance, we can compute explicitly its sensitivity with respect to the (maximal) delay $r$:
$$\begin{aligned}
\frac{\partial V}{\partial r}(t, x; r) &= \frac{\partial}{\partial r}\langle w_1(t), x_1\rangle_{L^2([-r,0])} + \frac{\partial c}{\partial r}(t; r) \\
&= w_1(t, -r)\,x_1(-r) + \frac{1}{2\beta}\int_t^T \langle B, w(s)\rangle\,\frac{\partial}{\partial r}\Big(\int_{-r}^0 b_1(\xi)\,w_1(s, \xi)\,d\xi\Big)\,ds \\
&= w_1(t, -r)\,x_1(-r) + \frac{b_1(-r)}{2\beta}\int_t^T \langle B, w(s)\rangle\,w_1(s, -r)\,ds,
\end{aligned}$$

where we have used the fact that $\langle B, w(t)\rangle^+ = \langle B, w(t)\rangle$. Note that in the above expression everything can be computed explicitly, as soon as we fix the delay kernel $b_1$. Let us consider, as an example, the special case $b_1(\xi) = b_1\,\mathbf{1}_{[-r,0]}(\xi)$, where, with a slight abuse of notation, $b_1$ on the right-hand side is a positive constant. One has
$$\frac{\partial V}{\partial r}(t, x; r) = w_1(t, -r)\,x_1(-r) + \frac{b_1}{2\beta}\int_t^T w_1(s, -r)\Big(b_0\,w_0(s) + b_1\int_{-r}^0 w_1(s, \xi)\,d\xi\Big)\,ds.$$

Furthermore, if we consider the special case of delay in the control only, that is $a_1(\cdot) \equiv 0$, we obtain, after some calculations,
$$\frac{\partial V}{\partial r}(t, x; r) = \gamma\,e^{a_0(T-t+r)}\,x_1(-r) - \gamma^2\,\frac{b_1}{4\beta a_0}\,e^{a_0 r}\Big(b_0 - \frac{b_1}{a_0}\big(1 - e^{a_0 r}\big)\Big)\big(1 - e^{2a_0(T-t)}\big)$$
for $t \in [r, T]$.

4.1.2  Delay in the control only

In this subsection we consider the special case $a_1(\cdot) \equiv 0$, for which an explicit solution of (21) is easily obtained. This closed-form solvability then "propagates" to other quantities of interest. In fact, note that (21) reduces to
$$\begin{cases} w_0'(t) + a_0\,w_0(t) = 0, \\ w_0(T) = \gamma, \end{cases} \qquad (26)$$
yielding $w_0(t) = \gamma\,e^{(T-t)a_0}$, and therefore
$$w_1(t, \xi) = \gamma\,e^{(T-(t+\xi))a_0}\,\mathbf{1}_{\{t+\xi\in[0,T]\}}, \qquad c(t) = \int_t^T \frac{\langle B, w(s)\rangle^2}{4\beta}\,ds.$$
That is, the last three formulae explicitly give a solution of the HJB equation (20) in our specific case. As a consequence, we can also determine the unique optimal feedback control $z^*$ as follows:
$$z^*(t) = \arg\max_z H_{CV}(v_x, z) = \frac{\langle B, v_x\rangle^+}{2\beta} = \frac{\langle B, w(t)\rangle^+}{2\beta} = \frac{\gamma\,e^{(T-t)a_0}}{2\beta}\Big(b_0 + \int_{-r}^0 b_1(\xi)\,e^{-a_0\xi}\,\mathbf{1}_{\{t+\xi\in[0,T]\}}\,d\xi\Big).$$
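As a small illustration, the feedback above can be evaluated in closed form for a constant kernel $b_1(\xi) = b_1$: the indicator truncates the integral at $\xi = -\min(r,t)$, which gives $b_1\big(e^{a_0\min(r,t)} - 1\big)/a_0$. The sketch below (with purely illustrative placeholder parameter values) cross-checks this closed form against a direct quadrature of the integral.

```python
import numpy as np

# Illustrative placeholder parameters (not from the paper)
a0, b0, b1, beta, gamma, r, T = -0.5, 1.0, 0.3, 2.0, 1.5, 1.0, 5.0

def z_star(t):
    """Optimal feedback for a1 = 0 and constant kernel b1(xi) = b1.

    The indicator 1_{t + xi in [0, T]} truncates the integral at xi = -min(r, t),
    giving int b1 * exp(-a0 * xi) dxi = b1 * (exp(a0 * min(r, t)) - 1) / a0.
    """
    m = min(r, t)
    integral = b1 * (np.exp(a0 * m) - 1.0) / a0
    return gamma * np.exp((T - t) * a0) / (2.0 * beta) * (b0 + integral)

# Direct Riemann-sum quadrature of the integral at t = 2.0 as a cross-check
t = 2.0
n = 200000
dxi = r / n
xi = -r + dxi * np.arange(n)                  # left endpoints on [-r, 0)
mask = (t + xi >= 0.0) & (t + xi <= T)        # the indicator 1_{t + xi in [0, T]}
num_int = dxi * (b1 * np.exp(-a0 * xi) * mask).sum()
z_num = gamma * np.exp((T - t) * a0) / (2.0 * beta) * (b0 + num_int)
```

For $t \ge r$ the mask is identically one and the quadrature agrees with the closed form up to discretization error.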

The optimal trajectory can be characterized in a completely similar way as in 4.1.1, with the difference that now we can write explicitly
$$e^{tA^*}e_1 = \big(e^{a_0 t},\ e^{a_0(t+\xi)}|_{\xi\in[-r,0]}\big),$$
which simplifies (23) in the present case. Even simpler is the expression for the variance of the optimal trajectory, which can be obtained from (25):
$$\operatorname{Var} Y_0(t) = \sigma^2\int_0^t e^{2a_0(t-s)}\,ds = \frac{\sigma^2}{2a_0}\big(e^{2a_0 t} - 1\big).$$
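As a quick sanity check, the closed-form variance can be compared with a direct numerical approximation of $\sigma^2\int_0^t e^{2a_0(t-s)}\,ds$; the values of `sigma`, `a0` and `t` below are illustrative placeholders.

```python
import math

sigma, a0, t = 0.2, -0.5, 3.0   # illustrative placeholder values

# Closed form: Var Y0(t) = sigma^2 / (2 a0) * (exp(2 a0 t) - 1)
closed = sigma**2 / (2.0 * a0) * (math.exp(2.0 * a0 * t) - 1.0)

# Midpoint-rule approximation of sigma^2 * int_0^t exp(2 a0 (t - s)) ds
n = 200000
ds = t / n
approx = sigma**2 * ds * sum(
    math.exp(2.0 * a0 * (t - (k + 0.5) * ds)) for k in range(n)
)
```

The two values agree to well within the quadrature error, as expected.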

5  Concluding remarks

A number of deterministic advertising models allowing for delay effects have been proposed in the literature. However, the corresponding problems in the stochastic setting have not been investigated. One reason is certainly that a theory of continuous-time stochastic control with delays has been developed only recently, following two approaches. The first approach is based on the solution of an associated infinite-dimensional Hamilton-Jacobi-Bellman equation in spaces of integrable functions (see [14]). The other relies on the analysis of an appropriate infinite-dimensional forward-backward stochastic differential equation (see [13]). Neither approach, however, can be applied to problems with a distributed lag in the effect of advertising. Problems with memory effects in both the state and the control were first studied by Vinter and Kwong [35] (in a deterministic LQ setting) and by Gozzi and Marinelli [15] (in the case of linear stochastic dynamics and a general objective function). A general theory of solvability of the corresponding HJB equations is currently under development (in the framework of the viscosity approach), while an infinite-dimensional Markovian reformulation and a "smooth" verification theorem have been proved in [15]. In order to construct optimal strategies in feedback form, one would also need a regularity theory for viscosity solutions of the HJB equation, or a suitable non-smooth verification theorem (see, e.g., Gozzi, Świech and Zhou [16] for the analysis of the finite-dimensional case). In this paper we have concentrated on deriving qualitative properties of the value function (such as convexity, monotonicity with respect to initial conditions, and smoothness). For specific choices of the reward and cost functions, we obtain more explicit characterizations of the value function and the optimal state-control pair.
While our work makes a substantial initial step in the analysis of stochastic advertising problems with delays, more remains to be done. Potential extensions of the present work include the analysis of problems with budget constraints, as well as problems of advertising through multiple media outlets with different delay characteristics.

References

[1] D. Aaker and J. M. Carman. Are you overadvertising? Journal of Advertising Research, 22:57–70, 1982.
[2] V. Barbu and Th. Precupanu. Convexity and optimization in Banach spaces. D. Reidel, Dordrecht, second edition, 1986.
[3] F. M. Bass. Optimal advertising expenditure implications of simultaneous-equation regression analysis. Oper. Res., 19:822–831, 1971.
[4] F. M. Bass and D. G. Clark. Testing distributed lag models of advertising effect. Journal of Marketing Research, 9:298–308, 1972.
[5] F. M. Bass and L. J. Parsons. Simultaneous-equation regression analysis of sales and advertising. Applied Economics, 1:103–124, 1969.
[6] A. Chojnowska-Michalik. Representation theorem for general stochastic delay equations. Bull. Acad. Polon. Sci. Sér. Sci. Math. Astronom. Phys., 26(7):635–642, 1978.
[7] G. Da Prato and J. Zabczyk. Ergodicity for infinite-dimensional systems. Cambridge University Press, 1996.
[8] I. Elsanosi. Stochastic control for systems with memory. Dr. Scient. thesis, University of Oslo, 2000.
[9] I. Elsanosi and B. Larssen. Optimal consumption under partial observation for a stochastic system with delay. Preprint, University of Oslo, 2001.
[10] I. Elsanosi, B. Øksendal, and A. Sulem. Some solvable stochastic control problems with delay. Stochastics Stochastics Rep., 71(1-2):69–89, 2000.
[11] G. Feichtinger, R. Hartl, and S. Sethi. Dynamic optimal control models in advertising: recent developments. Management Sci., 40:195–226, 1994.
[12] M. Fuhrman and G. Tessitore. Backward stochastic differential equations in finite and infinite dimensions. Lecture notes, Politecnico di Milano, 2004.
[13] M. Fuhrman and G. Tessitore. Nonlinear Kolmogorov equations in infinite dimensional spaces: the backward stochastic differential equations approach and applications to optimal control. Ann. Probab., 30(3):1397–1465, 2002.
[14] B. Goldys and F. Gozzi. Second order parabolic Hamilton-Jacobi equations in Hilbert spaces and stochastic control: L² approach. Working paper, 2003.
[15] F. Gozzi and C. Marinelli. Stochastic optimal control of delay equations arising in advertising models. In G. Da Prato and L. Tubaro, editors, Stochastic partial differential equations and applications. Marcel Dekker, 2005.
[16] F. Gozzi, A. Świech, and X. Y. Zhou. A corrected proof of the stochastic verification theorem within the framework of viscosity solutions. SIAM J. Control Optim., 43(6):2009–2019, 2005.
[17] Z. Griliches. Distributed lags: a survey. Econometrica, 35:16–49, 1967.
[18] L. Grosset and B. Viscolani. Advertising for a new product introduction: a stochastic approach. Top, 12(1):149–167, 2004.
[19] R. F. Hartl. Optimal dynamic advertising policies for hereditary processes. J. Optim. Theory Appl., 43(1):51–72, 1984.

[20] R. F. Hartl and S. P. Sethi. Optimal control of a class of systems with continuous lags: dynamic programming approach and economic interpretations. J. Optim. Theory Appl., 43(1):73–88, 1984.
[21] J. D. Herrington and W. A. Dempsey. Comparing the current effects and carryover of national-, regional-, and local-sponsor advertising. Journal of Advertising Research, 45:60–72, 2005.
[22] V. B. Kolmanovskiĭ and L. E. Shaĭkhet. Control of systems with aftereffect. AMS, Providence, RI, 1996.
[23] L. M. Koyck. Distributed Lags and Investment Analysis. North-Holland, Amsterdam, 1954.
[24] B. Larssen. Dynamic programming in stochastic control of systems with delay. Stoch. Stoch. Rep., 74(3-4):651–673, 2002.
[25] B. Larssen and N. H. Risebro. When are HJB-equations in stochastic control of delay systems finite dimensional? Stochastic Anal. Appl., 21(3):643–671, 2003.
[26] R. P. Leone. Generalizing what is known about temporal aggregation and advertising carryover. Marketing Sci., 14:G141–G150, 1995.
[27] L. M. Lodish, M. Abraham, S. Kalmenson, J. Livelsberger, B. Lubetkin, B. Richardson, and M. E. Stevens. How TV advertising works: a meta-analysis of 389 real world split cable TV advertising experiments. Journal of Marketing Research, 32:125–139, 1995.
[28] X. Luo and N. Donthu. Benchmarking advertising efficiency. Journal of Advertising Research, 41:7–18, 2001.
[29] C. Marinelli. The stochastic goodwill problem. European J. Oper. Res., 2006. In press.
[30] TNS Media Intelligence. TNS Media Intelligence reports U.S. advertising expenditures increased by 3.0 percent in 2005. http://www.tns-mi.com/news/02282006.htm (March 1, 2006).
[31] M. Nerlove and J. K. Arrow. Optimal advertising policy under dynamic conditions. Economica, 29:129–142, 1962.
[32] A. Prasad and S. P. Sethi. Competitive advertising under uncertainty: a stochastic differential game approach. J. Optim. Theory Appl., 123(1):163–185, 2004.
[33] R. T. Rockafellar. Convex analysis. Princeton University Press, Princeton, NJ, 1997. Reprint of the 1970 original.
[34] M. L. Vidale and H. B. Wolfe. An operations-research study of sales response to advertising. Oper. Res., 5:370–381, 1957.
[35] R. B. Vinter and R. H. Kwong. The infinite time quadratic control problem for linear systems with state and control delays: an evolution equation approach. SIAM J. Control Optim., 19(1):139–153, 1981.
