Time Optimal Control Problem for Prey-Predator Systems

Narcisa Apreutesei¹, Gabriel Dimitriu² and Răzvan Ştefănescu²

¹ "Gh. Asachi" Technical University, Department of Mathematics, 700506 Iaşi, Romania, [email protected]
² "Gr. T. Popa" University of Medicine and Pharmacy, Department of Mathematics and Informatics, 700115 Iaşi, Romania, [email protected], [email protected]
Abstract. Of concern is the time optimal control problem for a prey-predator system with a general functional response of the predator. We show that the optimal control is bang-bang and then determine the number of its switching points. Numerical simulations are also presented.
1 Introduction
The evolution of a Volterra prey-predator system with a general functional response is described by the differential system

$$y_1' = a y_1 - y_2\,F(y_1, y_2), \qquad y_2' = y_2\bigl(-c + b\,F(y_1, y_2)\bigr), \qquad t > 0,$$

where a, b, c > 0 are given constants ([6], [8]). Here y_1 and y_2 represent the number of individuals of the prey and predator populations, respectively. The predator functional response F(y_1, y_2) models various types of functional responses, such as Holling type II or III, Ivlev, Hassell-Varley, DeAngelis, Beddington, etc. (see [3], [6]).

One separates the prey from the predators with the aid of a control function u, whose significance is the rate of mixture of the two species. In other words, 1 − u(t) denotes the separation rate between the two species at the moment t. Suppose that u : [0, +∞) → IR, with 0 ≤ u(t) ≤ 1 a.e. on [0, +∞). The controlled system is

$$y_1' = a y_1 - u\, y_2\,F(y_1, y_2), \qquad y_2' = y_2\bigl(-c + b u\,F(y_1, y_2)\bigr), \qquad t > 0. \tag{1}$$

Under some regularity conditions on the function F, the initial value conditions

$$y_1(0) = y_{10} > 0, \qquad y_2(0) = y_{20} > 0 \tag{2}$$

ensure the existence and uniqueness of a positive solution to the Cauchy problem (1)-(2). Such control functions were introduced in [9] for linear F, in a different context. A similar problem was analyzed in [1] for a three-species ecosystem.
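As an illustration, the controlled system (1)-(2) can be integrated numerically for any admissible control. The sketch below (with example constants a = b = c = 1 and a Holling type III response with d = m = 1; both choices are ours for illustration, not fixed by the model) simulates a bang-bang control with a single switching point:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Example constants; any a, b, c > 0 are admissible in the model
a, b, c = 1.0, 1.0, 1.0

def F(y1, y2):
    # one admissible functional response (Holling type III with d = m = 1)
    return y1**2 / (1.0 + y1**2)

def controlled_rhs(t, y, u):
    # right-hand side of the controlled system (1); u maps time to [0, 1]
    y1, y2 = y
    ut = u(t)
    return [a * y1 - ut * y2 * F(y1, y2), y2 * (-c + b * ut * F(y1, y2))]

# a bang-bang control: full separation (u = 0), then full mixing (u = 1)
u = lambda t: 0.0 if t < 0.5 else 1.0

# max_step keeps the integrator from stepping over the control discontinuity
sol = solve_ivp(controlled_rhs, (0.0, 1.0), [2.0, 2.0], args=(u,), max_step=0.01)
```

With these parameters the prey population grows throughout, while the predator population decays under both control values, consistent with the sign structure of (1).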
Our goal is to find the shortest time T and the optimal control u such that, after a time interval of length T, the function y = (y_1, y_2) reaches a prescribed value y^T = (y_1^T, y_2^T). We can write the optimal control problem in the form

$$\min\bigl\{T : \exists\, u : [0, T] \to [0, 1],\ y = (y_1, y_2) \text{ verifies } (1)\text{-}(2),\ y(T) = y^T\bigr\}. \tag{3}$$

Optimal control of non-homogeneous prey-predator models over infinite and finite time intervals was found in [4]. Basic notions and results concerning optimal control problems can be found in the books [2], [7]. The structure of the paper is the following. The next section deals with the maximum principle for the above time optimal control problem. We prove that the control u is bang-bang and find the number of switching points of u according to the sign of a certain function at t = 0. Section 3 is devoted to numerical simulations. Some concluding remarks are drawn in the last section.
2 The number of switching points of the optimal control
Assume that the predator functional response F : [0, ∞)² → IR satisfies the following conditions:

(H) F(y_1, y_2) > 0 for y_1, y_2 > 0; F ∈ C¹([0, ∞)²); F is bounded.

Since the right-hand side of (1) is an affine function in u, we can derive the existence of a time-optimal control (via Theorem 2.1 and Corollary 2.1 from [2], pp. 44-45). The Hamiltonian function associated to this problem is

$$H(y, p, u) = a y_1 p_1 - c y_2 p_2 + u\, y_2 F(y_1, y_2)(b p_2 - p_1), \tag{4}$$

where p = (p_1, p_2) is the solution of the adjoint system

$$\begin{cases} p_1' = -a p_1 + u\, y_2 \dfrac{\partial F}{\partial y_1}(y_1, y_2)\,(p_1 - b p_2) \\[2mm] p_2' = c p_2 + u\left[F(y_1, y_2) + y_2 \dfrac{\partial F}{\partial y_2}(y_1, y_2)\right](p_1 - b p_2) \end{cases} \qquad t > 0. \tag{5}$$

Since y_1, y_2, F(y_1, y_2) > 0, the Hamiltonian H becomes maximal if

$$u(t) = \begin{cases} 0, & \text{if } (b p_2 - p_1)(t) < 0 \\ 1, & \text{if } (b p_2 - p_1)(t) > 0, \end{cases} \tag{6}$$

so u is a bang-bang control. Observe that

$$u(t)(b p_2 - p_1)(t) \ge 0, \qquad t \ge 0. \tag{7}$$

We also have

$$a y_1 p_1 - c y_2 p_2 + u\, y_2 F(y_1, y_2)(b p_2 - p_1) = 1 \quad \text{on } [0, T]. \tag{8}$$

To find the number of switching points of u, observe that (5) leads to

$$(b p_2 - p_1)' = a p_1 + b c p_2 - u\left[b F + b y_2 \frac{\partial F}{\partial y_2} - y_2 \frac{\partial F}{\partial y_1}\right](b p_2 - p_1), \qquad t > 0. \tag{9}$$

We can now state the main result.
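For completeness, identity (9) is obtained by differentiating the switching function and substituting the adjoint system (5):

```latex
\begin{align*}
(b p_2 - p_1)'
&= b\,p_2' - p_1' \\
&= b c p_2 + u\left[b F + b y_2 \frac{\partial F}{\partial y_2}\right](p_1 - b p_2)
   + a p_1 - u\, y_2 \frac{\partial F}{\partial y_1}(p_1 - b p_2) \\
&= a p_1 + b c p_2
   - u\left[b F + b y_2 \frac{\partial F}{\partial y_2}
   - y_2 \frac{\partial F}{\partial y_1}\right](b p_2 - p_1).
\end{align*}
```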
Theorem 1. Let a, b, c > 0 be given constants and (y, u, T) be an optimal triplet for problem (3). If hypotheses (H) hold, then the time optimal control u has at most two switching points. They are the zeros of the function b p_2 - p_1, where p = (p_1, p_2) is a solution of the adjoint system (5). More exactly, if μ = b p_2 - p_1 is the switching function, we have:

Case 1. If μ(0) > 0, then u has at most one switching point τ_1. The optimal control is either u = 1 on [0, +∞) or

$$u(t) = \begin{cases} 1, & t \in [0, \tau_1) \\ 0, & t \in [\tau_1, +\infty). \end{cases} \tag{10}$$

Case 2. If μ(0) < 0, then u has at most two switching points τ_1 < τ_2. We have either u(t) = 0 for all t ∈ [0, +∞), or

$$u(t) = \begin{cases} 0, & t \in [0, \tau_1) \\ 1, & t \in [\tau_1, +\infty) \end{cases} \qquad \text{or} \qquad u(t) = \begin{cases} 0, & t \in [0, \tau_1) \\ 1, & t \in [\tau_1, \tau_2) \\ 0, & t \in [\tau_2, +\infty). \end{cases} \tag{11}$$

Case 3. Let μ(0) = 0. If p_1(0) < 0, p_2(0) < 0, then u = 0 on [0, +∞). If p_1(0) > 0, p_2(0) > 0, then u has at most one switching point τ_1, namely either u = 1 on [0, +∞) or u has the form (10).

The crucial point in the proof of the theorem is the following auxiliary result.

Lemma 1. Let μ = b p_2 - p_1 be the switching function. If μ(τ) = 0, then μ'(τ) ≠ 0. In addition, we have:
(i) If τ is a switching point with μ'(τ) > 0, then p_i(τ) > 0, i = 1, 2, and u(t) = 0 for all t ∈ (θ, τ], for some θ.
(ii) If τ is a switching point with μ'(τ) < 0, then p_i(τ) < 0, i = 1, 2, and u(t) = 0 for all t ≥ τ.

Proof. If μ(τ) = 0, then p_1(τ) = b p_2(τ) and by (9) we have

$$\mu'(\tau) = (a p_1 + b c p_2)(\tau) = (a + c)\, b\, p_2(\tau). \tag{12}$$

According to (8), p_1(τ) and p_2(τ) cannot be zero. Therefore μ'(τ) ≠ 0, so the first claim is proved.
(i) Let τ ∈ (0, T) be a switching point with μ'(τ) > 0. Then by (12) we derive that p_2(τ) > 0. Since μ(τ) = 0, it follows that p_1(τ) > 0. Moreover, since μ switches from negative values to positive ones, we have μ(t) < 0 for all t to the left of τ, say on (θ, τ) for some θ. This implies that u(t) = 0 for t ∈ (θ, τ].
(ii) If τ is now a switching point with μ'(τ) < 0, by (12) we get p_2(τ) < 0, p_1(τ) < 0, and since μ is decreasing at τ, it follows that μ(t) < 0 for t > τ, t close to τ. Let (τ, τ_0) be the maximal interval on which μ(t) < 0. Then u(t) = 0 on [τ, τ_0). The adjoint system for u = 0, namely p_1' = -a p_1, p_2' = c p_2,
for t ∈ [τ, τ_0), has the solution

$$p_1(t) = p_1(\tau)\, e^{-a(t-\tau)}, \qquad p_2(t) = p_2(\tau)\, e^{c(t-\tau)}, \qquad t \in [\tau, \tau_0).$$

Then

$$\mu'(t) = a\, p_1(\tau)\, e^{-a(t-\tau)} + b c\, p_2(\tau)\, e^{c(t-\tau)} < 0, \qquad t \in [\tau, \tau_0),$$
hence μ = b p_2 - p_1 is decreasing and negative on (τ, τ_0). This means that τ_0 = +∞ and u(t) = 0 on [τ, +∞).

Proof of Theorem 1. We discuss three cases according to the sign of μ(0) = (b p_2 - p_1)(0).

Case 1. Suppose first that μ(0) > 0. Then μ > 0 on a right neighbourhood [0, τ_1) of 0, which can be chosen maximal. If τ_1 = +∞, then u = 1 on [0, +∞). If τ_1 < +∞, then u = 1 on [0, τ_1), μ(τ_1) = 0 and μ'(τ_1) < 0. According to Lemma 1, p_1(τ_1) < 0, p_2(τ_1) < 0 and u(t) = 0 for all t ≥ τ_1. Therefore in this case, either u = 1 on [0, +∞) or (10) holds.

Case 2. Assume now that μ(0) < 0. Then μ < 0 on a right neighbourhood [0, τ_1) of 0, which can be taken maximal. If τ_1 = +∞, the optimal control is u = 0 on [0, +∞). If τ_1 is finite, then it verifies the equality

$$p_1(\tau_1) = b\, p_2(\tau_1) \tag{13}$$

and μ'(τ_1) > 0. Lemma 1 implies that p_1(τ_1) > 0, p_2(τ_1) > 0 and u(t) = 0 for t ∈ [0, τ_1). The solution of the adjoint system

$$p_1' = -a p_1, \qquad p_2' = c p_2, \qquad t \in [0, \tau_1)$$

has the form

$$p_1(t) = p_1(0)\, e^{-a t}, \qquad p_2(t) = p_2(0)\, e^{c t}, \qquad t \in [0, \tau_1). \tag{14}$$

Depending on p_1(0) and p_2(0), from (13)-(14) one can easily calculate τ_1 or see that there is no switching point. By (13) and (14) we deduce that p_1(0), p_2(0), p_1(τ_1), and p_2(τ_1) have the same sign, namely plus. For t > τ_1, t close to τ_1, μ > 0. Let (τ_1, τ_2) be the maximal interval on which μ > 0. Thus u = 1 on (τ_1, τ_2). We have again two possibilities. The first one is τ_2 = +∞. It follows that u = 1 on (τ_1, +∞), i.e. u admits one switching point τ_1 > 0 and has the first form in (11). If τ_2 < +∞, then μ(τ_2) = 0, μ'(τ_2) < 0, μ > 0 on (τ_1, τ_2) and μ < 0 on a maximal interval (τ_2, τ_3), with τ_2 < τ_3. By Lemma 1, u(t) = 0 for all t ≥ τ_2. So τ_3 = +∞ and in this subcase the control u has two switching points τ_1 < τ_2; it has the second form in (11).

Case 3. Assume that μ(0) = 0. By (9) one derives that μ'(0) = (b p_2 - p_1)'(0) = (a + c)\, b\, p_2(0). But p_1(0) and p_2(0) are different from zero (see (8)) and they have the same sign. Let p_1(0) < 0, p_2(0) < 0. This implies that the function t → μ(t) is strictly
decreasing in a right neighbourhood of 0. For t > 0, t close to 0, we have μ < 0. Let [0, τ_1) be the maximal interval on which this inequality holds. One gets u = 0 on [0, τ_1). Arguing as in the proof of Lemma 1 (ii), one derives that μ is strictly decreasing and negative on [0, τ_1). Consequently, τ_1 = +∞ and u = 0 on [0, +∞). If p_1(0) > 0, p_2(0) > 0, then t → μ(t) is strictly increasing in a right neighbourhood of 0. Let [0, τ_1) be maximal such that μ > 0 on [0, τ_1). Repeating the reasoning from Case 2, it follows that either τ_1 = +∞ (and u = 1 on [0, +∞)) or else τ_1 < +∞ and u has the form (10). The theorem is now proved.
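In Case 2 the switching time τ_1 can in fact be written in closed form. Combining (13) with the explicit solutions (14),

```latex
p_1(0)\,e^{-a\tau_1} = b\,p_2(0)\,e^{c\tau_1}
\;\Longrightarrow\;
e^{(a+c)\tau_1} = \frac{p_1(0)}{b\,p_2(0)}
\;\Longrightarrow\;
\tau_1 = \frac{1}{a+c}\,\ln\frac{p_1(0)}{b\,p_2(0)},
```

which yields an admissible switching point τ_1 > 0 precisely when p_1(0) > b p_2(0) > 0, consistent with μ(0) < 0 and the positive sign of p_1(0), p_2(0) established above.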
3 Numerical simulations
We have used the Matlab version of the software MISER3 for solving the optimal control problem (3). The software makes extensive use of GUIs (Graphical User Interfaces) to allow the user to input all the data relevant to a specific problem, which is then automatically saved to a data input file. The MISER3 software can search numerically for an optimal final time by transforming problem (3) into a canonical problem form, which in turn is addressed as a combined optimal parameter selection and optimal control problem. In particular, the unspecified terminal time problem is one in which the problem is defined over the interval [t_0, t_f] and T = t_f is free to vary. In this case, MISER3 treats T = t_f as an unknown system parameter (denoted by z), maps the interval [t_0, t_f] onto [0, 1], and optimizes the transformed problem. Further details may be found in [5].

We have chosen for our numerical experiments the function F of the form

$$F(y_1, y_2) = \frac{d\, y_1^2}{1 + m\, y_1^2},$$

representing the Holling type III functional response. Here d and m are positive constants. Using the notations of MISER3, a typical continuous-time optimal control problem is put in the form: minimize an objective functional

$$G_0(u, z) = \varphi_0(y(t_f), z) + \int_{t_s}^{t_f} g_0(t, y, u, z)\, dt$$

over n_c control functions u(t) and n_z system parameters z, subject to the dynamics of the n_s state variables

$$\dot{y}(t) = f(t, y, u, z), \qquad y(t_s) = y^0(z),$$

subject to n_{gl} linear all-time control-only constraints

$$\sum_{i=1}^{n_c} \alpha_{ki}\, u_i(t) + \beta_k \ge 0 \qquad \text{or} \qquad \sum_{i=1}^{n_c} \alpha_{ki}\, u_i(t) + \beta_k = 0,$$

n_{gc} constraints of the canonical form

$$G_k(u, z) = \varphi_k(y(\tau_k), z) + \int_{t_s}^{\tau_k} g_k(t, y, u, z)\, dt \ge 0, \qquad k = 1, 2, \ldots, n_{gc},$$
and n_{gz} system parameter only constraints

$$G_k(z) \ge 0, \qquad k = 1, 2, \ldots, n_{gz}.$$

The numerical examples have used: n_s = 2 (number of states), n_c = 1 (number of controls), n_z = 1 (number of system parameters), t_s = 0 (objective functional integral lower limit), t_f = 1 (objective functional integral upper limit) and characteristic times τ_k = 1, for k = 1, 2. With these notations, the initial time optimal control problem (3) takes the form: minimize G_0(u, z) := z with respect to the control variable u ∈ [0, 1] and the system parameter z ∈ Z, satisfying the scaled state system

$$\frac{dy_1}{dt} = z\,[a y_1 - u\, y_2 F(y_1, y_2)], \qquad \frac{dy_2}{dt} = z\,[y_2(-c + b u\, F(y_1, y_2))], \qquad t > 0,$$

where the initial value conditions are given by y_1(0) = y_{10}, y_2(0) = y_{20}, and the constraints are G_1 := y_1(1) - y_1^T = 0 and G_2 := y_2(1) - y_2^T = 0. We have used the interval Z = [0, 100] as the variation range of the system parameter z, and the values of the optimal control u have been calculated at 11 points called internal knots (including the characteristic times).

Figure 1 (upper plots) illustrates the case defined by the following values of the system coefficients: a = b = c = d = m = 1, y_{10} = y_{20} = 2, y_1^T = 4.5 and y_2^T = 1. The value of the objective function (precisely, the optimal time T defined by the system parameter z) was 0.8563. The optimization finished in 20 iterations and the elapsed time was 25.69 seconds. In both figures the dashed line represents the component y_1 of the state, and the continuous line indicates the other component y_2. The lower plots in Figure 1 were obtained with the following values of the system coefficients: a = b = 1, c = 0.5, d = 1, m = 0.55, y_{10} = 1, y_{20} = 2, y_1^T = 1.8 and y_2^T = 12.5. The value of the objective function was minimized to 6.4375. The optimization routine terminated in 16 iterations. In both cases of Figure 1 the bang-bang optimal control presented only one switching point.

Figure 2 (upper plots) illustrates the case defined by the following values of the system coefficients: a = 3, b = 1, c = 0.5, d = 1, m = 0.55, y_{10} = 2, y_{20} = 1, y_1^T = 1.5 and y_2^T = 2.5. Starting with 3, the final value of the objective function was minimized to 0.5137, attained in 33 iterations. The case represented in Figure 2 (lower plots) was constructed with the following values of the system coefficients: a = 0.3, b = 2, c = 0.5, d = 100, m = 5.5, y_{10} = 0.2, y_{20} = 0.1, y_1^T = 0.07 and y_2^T = 0.27. The algorithm started with 3 as the initial value of z, and converged in 98 iterations. The optimal value of the objective function was 0.7934. This time, the bang-bang optimal control has two switching points.
As a general remark, we observed in our numerical experiments a very high sensitivity of the results to the initial conditions and to the starting point of the minimization procedure.
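The first experiment of Figure 1 (a = b = c = d = m = 1, y⁰ = (2, 2), y^T = (4.5, 1)) can be cross-checked without MISER3. Assuming the one-switch structure u = 0 on [0, τ_1) and u = 1 on [τ_1, T] (the first form in (11)), the endpoint condition y(T) = y^T gives two equations in the two unknowns (τ_1, T), which a shooting/root-finding sketch can solve. This is an illustrative reconstruction, not the MISER3 algorithm:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Coefficients of the first experiment reported for Figure 1
a = b = c = d = m = 1.0
y0 = np.array([2.0, 2.0])          # initial state (y10, y20)
yT = np.array([4.5, 1.0])          # target state (y1^T, y2^T)

def F(y1):
    # Holling type III functional response d*y1^2 / (1 + m*y1^2)
    return d * y1**2 / (1.0 + m * y1**2)

def rhs_u1(t, y):
    # controlled system (1) with u = 1
    y1, y2 = y
    return [a * y1 - y2 * F(y1), y2 * (-c + b * F(y1))]

def residual(p):
    tau, T = p
    # phase 1 (u = 0): the system decouples, y1 = y10 e^{at}, y2 = y20 e^{-ct}
    y_tau = [y0[0] * np.exp(a * tau), y0[1] * np.exp(-c * tau)]
    # phase 2 (u = 1): integrate numerically up to the candidate final time T
    sol = solve_ivp(rhs_u1, (tau, T), y_tau, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1] - yT        # endpoint mismatch y(T) - y^T

# two equations, two unknowns (switch time, final time)
tau1, T = fsolve(residual, [0.7, 0.86])
print(tau1, T)   # T should come out close to the 0.8563 reported in the text
```

A solution with 0 < τ_1 < T confirms that the prescribed state is reachable with a single switching point for this data.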
Fig. 1. The plots for the solution vector y = (y1 , y2 ) (left side) and the bang-bang control u (right side) with one switching point.
4 Conclusions
The main result of the paper shows that, in order to reach a prescribed value of the state in a minimal time, the control function should be bang-bang. In the first case, for example, the two species should be either completely mixed (u = 1) all the time, or completely mixed on a certain time interval [0, τ_1) and afterwards completely separated (u = 0). The other two cases can be interpreted similarly. Numerically, we investigated a prey-predator system with the Holling type III functional response. The separation can be carried out, for instance, by attracting the predators with different food resources or with the aid of a hunter population. We can also take into account the usage of natural or artificial dams in their habitat.

Acknowledgments. The paper was supported by the project ID 342/2008, CNCSIS, Romania.
References

1. N. Apreutesei. Necessary optimality conditions for a Lotka-Volterra three species system. Math. Modelling Natural Phen., 1 (2006), 123–135.
Fig. 2. The plots for the solution vector y = (y1 , y2 ) (left side) and the bang-bang control u (right side) with no switching point and two switching points, respectively.
2. V. Barbu. Mathematical Methods in Optimization of Differential Systems. Kluwer Academic Publishers, Dordrecht, 1994.
3. F. Brauer, C. Castillo-Chavez. Mathematical Models in Population Biology and Epidemiology. Springer Verlag, New York, Berlin, 2000.
4. A. El-Gohary, A.S. Al-Ruzaiza. Optimal control of non-homogenous prey-predator models during infinite and finite time intervals. Applied Math. Comput., 146 (2) (2003), 495–508.
5. L.S. Jennings, M.E. Fisher, K.L. Teo, C.J. Goh. MISER3 Optimal Control Software: Theory and User Manual, Version 3. Department of Mathematics, The University of Western Australia, Nedlands, WA 6907, Australia, 2004.
6. J.D. Murray. Mathematical Biology. Springer Verlag, Berlin-Heidelberg-New York, third edition, 2002.
7. E. Sontag. Mathematical Control Theory: Deterministic Finite Dimensional Systems. Springer-Verlag, 1998.
8. V. Volterra. Variazioni e fluttuazioni del numero d'individui in specie animali conviventi. Mem. Acad. Lincei, 2 (1926), 31–113. (Variations and fluctuations of the number of individuals in animal species living together, translation in R. N. Chapman: Animal Ecology, McGraw Hill, New York (1931), 409–448.)
9. S. Yosida. An optimal control problem of the prey-predator system. Funkc. Ekvacioj, 25 (1982), 283–293.