Dynamic Optimization of Constrained Semi-batch Processes using Pontryagin’s Minimum Principle and Parsimonious Parameterization

Erdal Aydin,a,b Dominique Bonvin,c Kai Sundmacher a,d,*

a Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstraße 1, 39106 Magdeburg, Germany
b International Max Planck Research School (IMPRS) for Advanced Methods in Process and Systems Engineering, Magdeburg, Germany
c Laboratoire d’Automatique, Ecole Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland
d Otto-von-Guericke University Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany

[email protected]

Abstract

This paper proposes a PMP-based solution scheme with parsimonious parameterization of sensitivity-seeking arcs in order to reduce the computational complexity of constrained dynamic optimization problems. We tested our method on a binary batch distillation column and a two-phase semi-batch reactor for the hydroformylation of 1-dodecene. The performance of the proposed solution scheme is compared with the fully parameterized PMP-based and the direct simultaneous solution approaches. The results show that the alternative parameterization applied together with PMP can reduce the computational time significantly.

Keywords: Pontryagin’s Minimum Principle, alternative parameterization, constrained dynamic optimization, semi-batch process, numerical optimization

1. Introduction

Dynamic optimization is an important task in the batch chemical industry. Given a reliable process model, dynamic optimization can be considered a promising tool for reducing production costs, improving product quality and meeting safety and environmental restrictions. The dynamic optimization methods available in the literature can be classified as direct or indirect approaches (Srinivasan et al., 2003). Direct methods are often the methods of choice, but they may exhibit certain limitations related to feasibility and computational burden. In indirect optimization methods, the original problem is reformulated as the minimization of a Hamiltonian function. The reformulated problem is then solved to satisfy the necessary conditions of optimality given by Pontryagin’s Minimum Principle (PMP) (Bryson, 1975).

The optimal inputs of dynamic optimization problems are composed of different arcs (Srinivasan et al., 2003). An arc can be either on a lower or an upper bound (𝑢min or 𝑢max), on a path constraint (𝑢path), or inside the feasible region as a sensitivity-seeking arc (𝑢sens). Since the fine shape of a sensitivity-seeking arc often contributes negligibly to the cost, it is difficult to compute that part of the input accurately. Consequently, most direct schemes require a very fine input discretization for an accurate solution. Alternatively, instead of full parameterization, the sensitivity-seeking arcs can be parameterized parsimoniously, which significantly reduces the number of decision variables in the dynamic optimization problem (Welz et al., 2005; Schlegel et al., 2005; Welz et al., 2006).

2. Solution Methodology

The dynamic optimization problem for batch processes is often stated as follows (Srinivasan et al., 2003):

$$
\begin{aligned}
\min_{t_f,\,u(t)} \;\; & J = \phi\!\left(x(t_f)\right) \\
\text{s.t.} \;\; & \dot{x} = F(x,u), \quad x(0) = x_0, \\
& S(x,u) \le 0, \\
& T\!\left(x(t_f)\right) \le 0
\end{aligned}
\tag{1}
$$

where J is a scalar performance index that depends on the values of the states at the final time t_f, φ is the objective function, x is the nx-dimensional state vector with the initial conditions x_0, u is the nu-dimensional input vector, S is the nS-dimensional vector of inequality path constraints that include input bounds, and T is the nT-dimensional vector of inequality terminal constraints. Assuming that the final time t_f is fixed, the dynamic optimization problem (1) can be reformulated using PMP as follows (Srinivasan et al., 2003):

$$
\begin{aligned}
\min_{u(t)} \;\; & H(t) = \lambda^T F(x,u) + \mu^T S(x,u) \\
\text{s.t.} \;\; & \dot{x} = F(x,u), \quad x(0) = x_0, \\
& \dot{\lambda}^T = -\frac{\partial H}{\partial x}, \quad \lambda^T(t_f) = \left.\frac{\partial \phi}{\partial x}\right|_{t_f} + \upsilon^T \left.\frac{\partial T}{\partial x}\right|_{t_f}, \\
& \mu^T S(x,u) = 0, \\
& \upsilon^T T\!\left(x(t_f)\right) = 0, \\
& \frac{\partial H(t)}{\partial u} = \lambda^T \frac{\partial F(x,u)}{\partial u} + \mu^T \frac{\partial S(x,u)}{\partial u} = 0
\end{aligned}
\tag{2}
$$

where H is the Hamiltonian function, λ is the nx-dimensional vector of Lagrange multipliers (or co-states) for the system equations, μ is the nS-dimensional vector of Lagrange multipliers for the path constraints, and υ is the nT-dimensional vector of Lagrange multipliers for the terminal constraints. μᵀS = 0 and υᵀT = 0 are the complementary slackness conditions that must be satisfied at the optimum.

One can use an effective control vector iteration algorithm to solve the constrained problem (2) (Aydin et al., 2017). The algorithm parameterizes the inputs using N piecewise-constant elements and integrates the state equations forward in time and the co-state equations backward in time, leading to a gradient-based control vector iteration approach. Pure state path constraints can be handled by indirectly adjoining them to the Hamiltonian function. If a path or terminal constraint is violated, the corresponding Lagrange multiplier is penalized so as to keep the optimization iterates within the feasible region.

A parameterization of the sensitivity-seeking arcs in terms of switching times can further decrease the complexity of the problem and therefore also the computational time. Given the optimal solution structure (the types and sequence of arcs), it is possible to reformulate problem (2) using a parsimonious parameterization of the input u(t) in terms of a small parameter vector π. For example, a sensitivity-seeking arc can be expressed as a linear arc between the two switching times t1 and t2 that represent the beginning and the end of the arc, thus giving π = (t1, t2)ᵀ. This reformulation allows writing:

$$
\begin{aligned}
\min_{\pi} \;\; & \tilde{H}(t) = \lambda^T \tilde{F}(x,\pi) + \mu^T \tilde{S}(x,\pi) \\
\text{s.t.} \;\; & \dot{x} = \tilde{F}(x,\pi), \quad x(0) = x_0, \\
& \dot{\lambda}^T = -\frac{\partial \tilde{H}}{\partial x}, \quad \lambda^T(t_f) = \left.\frac{\partial \phi}{\partial x}\right|_{t_f} + \upsilon^T \left.\frac{\partial T}{\partial x}\right|_{t_f}, \\
& \mu^T \tilde{S}(x,\pi) = 0, \\
& \upsilon^T T\!\left(x(t_f)\right) = 0, \\
& \frac{\partial \tilde{H}(t)}{\partial \pi} = \lambda^T \frac{\partial \tilde{F}(x,\pi)}{\partial \pi} + \mu^T \frac{\partial \tilde{S}(x,\pi)}{\partial \pi} = 0
\end{aligned}
\tag{3}
$$

If the solution structure consists of the three arcs 𝑢max, 𝑢sens and 𝑢min, the sensitivity-seeking arc 𝑢sens can be approximated using linear interpolation between the two switching times t1 and t2, thus giving:

$$
\tilde{u}(t) =
\begin{cases}
u_{\max}, & 0 \le t < t_1, \\[4pt]
u_{\mathrm{sens}}(t) = u_{\max} + \dfrac{u_{\min} - u_{\max}}{t_2 - t_1}\,(t - t_1), & t_1 \le t < t_2, \\[4pt]
u_{\min}, & t_2 \le t \le t_f
\end{cases}
$$
Then, problem (3) is solved using the proposed PMP-based algorithm (Aydin et al., 2017). The overall algorithm can be formulated as follows.

PMP-based Solution Algorithm

• Select values for the penalty term K > 0, the step size, the threshold and the number of discrete input values N.
• Initialize the iteration counter h = 0 and the input vector π0.
• Discretize the Lagrange multipliers for the path constraints as μ(t) → M, where M is an (nS × N) matrix and N is the number of discrete time instants.

do h = 1 → ∞

I. Solve the state equations by forward integration and the co-state equations by backward integration. If the jth path constraint is such that S̃j(x, π) < 0 at the time instant k, set Mh(j, k) = 0; otherwise, set Mh(j, k) = K, for j = 1, …, nS, k = 1, …, N.

II. If the ith terminal constraint is such that Ti(x(t_f)) < 0, set υh(i) = 0; otherwise, set υh(i) = K, for i = 1, …, nT.

III. Evaluate the gradient (∂H̃/∂π)h using the analytical expressions given in Eq. (3).

IV. If ‖(∂H̃/∂π)h‖ is smaller than the threshold, stop. Otherwise, update πh+1 using the step size and the gradient (∂H̃/∂π)h, set h = h + 1 and return to Step I.
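For concreteness, a minimal Python sketch of this iteration loop is given below. The problem-specific operations (forward integration of the states, backward integration of the co-states, constraint evaluation and the analytical gradient of Eq. (3)) are assumed to be available as user-supplied callables; all function names, signatures and the fixed-step gradient update are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def pmp_parsimonious_iteration(pi0, integrate_states, integrate_costates,
                               path_constraints, terminal_constraints, gradient,
                               K=1e3, step_size=1e-2, threshold=1e-6, max_iter=500):
    """Gradient-based iteration on the parsimonious parameters pi (Steps I-IV).

    Assumed (hypothetical) callables:
      integrate_states(pi)             -> states x on the time grid (forward integration)
      integrate_costates(x, pi, M, nu) -> co-states lambda (backward integration)
      path_constraints(x, pi)          -> array S_tilde of shape (nS, N)
      terminal_constraints(x)          -> array T of shape (nT,)
      gradient(x, lam, M, nu, pi)      -> analytical dH_tilde/dpi from Eq. (3)
    """
    pi = np.asarray(pi0, dtype=float)
    for _ in range(max_iter):
        # Steps I-II: integrate the states forward and set the penalty multipliers
        # (0 where a constraint is satisfied, K where it is violated).
        x = integrate_states(pi)
        M = np.where(path_constraints(x, pi) < 0.0, 0.0, K)
        nu = np.where(terminal_constraints(x) < 0.0, 0.0, K)

        # Step I (cont.): integrate the co-states backward with the current multipliers.
        lam = integrate_costates(x, pi, M, nu)

        # Step III: analytical gradient of the Hamiltonian with respect to pi, Eq. (3).
        g = np.asarray(gradient(x, lam, M, nu, pi))

        # Step IV: stop when the gradient norm is below the threshold,
        # otherwise take a gradient step on pi (e.g. the switching times).
        if np.linalg.norm(g) < threshold:
            break
        pi = pi - step_size * g
    return pi
```

A line search or an adaptive step size could replace the fixed step used here; the sketch only mirrors the structure of Steps I-IV.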