Run-to-Run Optimization of Batch Processes with Self-Optimizing Control Strategy

Lingjian Ye,1,* Hongwei Guan,2 Xiaofeng Yuan3 and Xiushui Ma1
1. Ningbo Institute of Technology, Zhejiang University, 315100, Ningbo, Zhejiang, China
2. Ningbo Dahongying University, 315175, Ningbo, Zhejiang, China
3. Institute of Industrial Process Control, Zhejiang University, 310027, Hangzhou, Zhejiang, China

This paper deals with the run-to-run optimization of batch processes in the presence of uncertainty using a tailored self-optimizing control (SOC) strategy. Firstly, the dynamic programming problem for the batch process is transformed into a static nonlinear programming (NLP) problem using the control parameterization method. Then, combinations of output measurements are selected as controlled variables (CVs), which are controlled batch-wise to account for uncertainties. Although existing SOC methods appear directly applicable to such a static NLP formulation, a major obstacle is that the number of control parameters required to maintain satisfactory optimizing performance is generally large, which makes the control parameters inappropriate as manipulated variables for closed-loop optimization. To circumvent this difficulty, it is proposed to use the so-called latent effective manipulated variables as the control system's manipulated variables instead; these are linear combinations of the original control parameters that are fewer in number yet implicitly dominate optimal operation over the whole uncertain space. This way, the run-to-run self-optimizing control system is designed with fewer process-dependent CVs and operated with minimal complexity. A simulated fed-batch reactor is provided to illustrate the proposed methodology.

Keywords: batch process, self-optimizing control, real-time optimization, dynamic optimization

INTRODUCTION

Chemical processes commonly suffer from various uncertainties and disturbances, which are detrimental to maintaining optimality of plant operation. In an increasingly competitive market, the real-time optimization (RTO) technique is becoming ever more important for chemical enterprises to improve their profitability by restoring optimal operation in the face of uncertainty. In industrial practice, measurement-based optimization has been demonstrated to be an efficient method for RTO.[1] The philosophy behind measurement-based optimization is that unknown disturbances entering the process affect the system states, which are in turn (partially) observed through the output variables; hence informative measurements can reasonably be incorporated to account for uncertainty.

Among the various approaches reported in the literature, self-optimizing control (SOC)[2] constitutes an important branch of measurement-based optimization. SOC was first proposed in the context of plant-wide control,[2] where it aimed to identify a promising control structure by selecting appropriate controlled variables (CVs), through which the plant can more easily be maintained near the optimum in spite of uncertainties. Four general rules for CV selection were characterized by Skogestad,[2] the most important being that "the optimal values of selected CVs should be insensitive to disturbances." Therefore, when these CVs are maintained at their optimally insensitive set-points, the plant is automatically driven toward the optimum by the action of the feedback controllers. In later developments of the SOC methodology, combinations of measurements were typically considered as CVs to improve self-optimizing performance, compared to controlling individual measurements. Halvorsen et al.[3] mathematically formulated the CV selection problem, and both a minimum singular value rule and an exact local method were derived. Afterwards, detailed methods were developed to compute the measurement-combined CVs, e.g. the (extended) null space methods,[4,5] the eigenvalue decomposition methods,[6,7] the necessary conditions of optimality (NCO) approximation method,[8] and the CV adaptation method,[9] among others. These methods have their own advantages and disadvantages. For example, both the (extended) null space methods[4,5] and the eigenvalue decomposition methods[6,7] give an explicit solution for the combination matrix; however, since they are derived via linearization around the nominal operating point, their self-optimizing performance is only locally guaranteed. The NCO approximation method[8] uses a function of the measurement variables as an NCO estimator, so that the NCO are indirectly satisfied via CV tracking. However, this method risks fitting operating points far away from the optimum that are not closely relevant to optimal operation.[9] More recently, a global SOC method was developed that approximately minimizes the global average loss over the entire uncertainty space.[10] In that approach, the original optimization problem is solved for a set of expected disturbance scenarios, and the corresponding optimal measurements are combined to search for CV directions that minimize the overall economic loss.

All of the above-mentioned methods were proposed for SOC of continuous chemical plants. In modern industry, the batch process has become increasingly popular for manufacturing chemical products, because it is more flexible and smaller in scale to meet customized demands. Since SOC has several

∗ Author to whom correspondence may be addressed. E-mail address: [email protected] Can. J. Chem. Eng. xx:1–13, 2016 © 2016 Canadian Society for Chemical Engineering DOI 10.1002/cjce.22692 Published online in Wiley Online Library (wileyonlinelibrary.com).


distinct advantages compared to other RTO methodologies,[8,11] for example a simple control policy and a fast optimizing speed, the application of self-optimizing control to batch processes has great potential. To the best of the authors' knowledge, however, little effort has been devoted to such a control strategy for batch processes. In this direction, Dahl-Olsen et al.[12] made a first attempt by considering a single measurement as the CV for dynamic optimization of a batch process, where the maximum gain rule was applied. Hu et al.[13] introduced the concept of dynamic SOC and extended the exact local method. Jaschke et al.[14] and Ye et al.[15] suggested controlling the Hamiltonian function to achieve self-optimizing control of a batch process. Nonetheless, an analytical expression of the Hamiltonian function is required there, which may not be realistic for more complex batch systems.

For RTO of a batch process, there are two basic routines: run-to-run (batch-to-batch) optimization and within-batch optimization. In the former, the plant operation is updated iteratively in a batch-wise manner, taking advantage of the repetitive nature of the batch process. Optimizing actions for the next batch are computed at the end of the previous one, and the performance is expected to improve gradually as the iteration evolves. A shortcoming of this routine is that uncertainties occurring within a single run are not accounted for.[16] In the latter, optimizing control is performed within the duration of the batch, thus simultaneously handling uncertainties that occur during the current batch. Generally speaking, within-batch optimization has better potential than the run-to-run scheme in terms of achievable optimizing performance, because the control inputs are adjusted adaptively during a single batch, hence more disturbances can be handled. Nonetheless, devising such an effective solution is more challenging.

In this study, the run-to-run routine is investigated for optimizing a batch process with a tailored SOC strategy. In particular, the SOC methodology developed for continuous processes is extended to batch optimization by exploiting the fact that, from a batch-wise view, the dynamic programming problem is transformed into a static one via input parameterization. Trajectory parameterization is a common way to implement numerical optimization for dynamic programming: the time horizon is discretized into a finite number of small grids and the continuous variable arcs are represented as piece-wise polynomials, whose coefficients are taken as the decision variables of the formulated nonlinear programming (NLP) problem. An overview of dynamic optimization for batch processes can be found in Srinivasan et al.[17]

Since a static NLP formulation is under consideration, existing SOC methods can readily be applied to derive CVs for the batch process.[4-8,10] However, it turns out to be problematic to use the control parameters, i.e. the decision variables of the NLP, as adjustable manipulated variables (MVs) in the self-optimizing control system for RTO. The reason is that input parameterization generally requires a large number of control parameters so that the input arcs are finely resolved and a satisfactory optimizing performance is guaranteed. In the context of SOC, the same number of CVs is typically selected to form a square control problem, and specifying a large number of CVs is particularly undesirable, both in the design phase and in online control. To address this problem, it is proposed to instead search for latent effective manipulated variables to serve as the MVs of the control system. The latent effective MVs should be small in number but capture most of the variation of optimal operation over the entire uncertainty space. By reducing the number of MVs, and thus the number of required CVs, the control system can be designed more easily and operated with minimal complexity.

The rest of this paper is organized as follows: the next section reviews the concept of SOC together with a global CV selection method developed for continuous processes.[10] The Batch-to-Batch Self-Optimizing Control section extends the static SOC methodology to batch optimization; there, the concept of latent effective manipulated variables is proposed and the run-to-run self-optimizing scheme for batch processes is elaborated. In the Case Study section, the developed methodology is applied to a simulated batch reactor, and the final section concludes this work.

SELF-OPTIMIZING CONTROL OF CONTINUOUS PROCESSES

General Descriptions

In the presence of uncertainties and disturbances, the SOC strategy[2] achieves near-optimal operation by controlling a set of specially selected CVs, which may be either individual measurements or mapping functions of them (artificial variables). A general rule for selecting CVs is that their optimal values should be insensitive to disturbances; this way, the input variables are automatically adjusted toward the optimum through set-point tracking of the CVs by feedback controllers. Furthermore, if the economic loss is acceptable, one can avoid devising an optimization layer, which is commonly necessary in the well-known two-layer control architecture. Generally, SOC addresses parametric uncertainties with expected ranges, so plant-model mismatch and structural uncertainties are not accounted for. In other words, the rigorous plant model is assumed transparent to the designer, while some parameters/disturbances are uncertain and unknown during operation. Consequently, in the face of plant-model mismatch or structural uncertainties, degraded RTO performance is possible.
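To make the idea concrete, the following toy sketch (entirely illustrative, not from the paper) shows why holding a well-chosen CV at a disturbance-invariant set-point recovers optimality without re-optimization. The cost $J(u,d) = (u-d)^2$, the measurements $y_1 = u$ and $y_2 = d$, and the CV $c = y_1 - y_2$ are all assumptions of this example.

```python
import numpy as np

# Toy problem: J(u, d) = (u - d)^2 with measurements y1 = u, y2 = d.
# The CV c = y1 - y2 has optimal value 0 for every disturbance d, so
# holding c at 0 is self-optimizing: feedback rejects d without any
# re-optimization.

def cost(u, d):
    return (u - d) ** 2

d_samples = np.linspace(-1.0, 1.0, 21)   # expected disturbance range

# Policy 1: keep the input fixed at its nominal optimum (u = 0).
loss_fixed = [cost(0.0, d) - 0.0 for d in d_samples]

# Policy 2: control c = y1 - y2 to its set-point 0, which implies u = d.
loss_soc = [cost(d, d) - 0.0 for d in d_samples]

print(max(loss_fixed))   # 1.0 (worst case at |d| = 1)
print(max(loss_soc))     # 0.0: the CV set-point is disturbance-invariant
```

In this contrived example the loss under the self-optimizing policy is exactly zero; for a real plant the selected CVs only approximate this property, which is what the loss minimization below formalizes.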
Another assumption is that, for a constrained process, the active constraints are directly controlled to guarantee operational feasibility, hence the CV selection problem is dealt with in the reduced unconstrained space. Furthermore, these active constraints are assumed to remain unchanged over the whole uncertain space. This assumption is not general, but rather a simplification adopted for CV selection in many SOC methods. Several works have dealt with changing active constraints in the framework of SOC. For example, Cao[18] proposed a cascade control approach in which the variable associated with the constraint is controlled in the inner loop, while the self-optimizing CV is controlled in the outer loop. Hu et al.[19] proposed an explicit constraint handling approach, adding the constraint to the optimization problem for CV selection such that it is always satisfied over the whole operating space. Manum and Skogestad[20] developed a CV switching strategy with online detection of the critical region.

Problem Presentation

Consider the following unconstrained static optimization problem:

$$\min_{u} \; J(u, d) \tag{1}$$

with noisy measurements:

$$y_m = y + n = f(u, d) + n \tag{2}$$

where $J$ is the scalar cost function to be minimized, generally an economic index of plant operation, and $u \in \mathbb{R}^{n_u}$ and $d \in \mathbb{R}^{n_d}$ are the manipulated variables and uncertain disturbances, respectively. $y_m, y, n \in \mathbb{R}^{n_y}$ are the measured outputs, the true outputs, and their measurement noises/errors. $d$ and $n$ are the sources of uncertainty, spanning the variation regions $D$ and $N$, respectively. $f$ is the input-output mapping between $u$, $d$, and $y$. Note that $f$ may not be known explicitly; for most chemical plants it is more likely defined implicitly by a set of coupled model equations.

Define the economic loss $L$ as the difference between $J$ and its optimal value $J^{\mathrm{opt}}$:

$$L = J(u, d) - J^{\mathrm{opt}}(d) \tag{3}$$

and the average loss over the entire uncertain operating region:

$$L_{av} = E[L] = \int_{d \in D,\, n \in N} \rho(d)\,\rho(n)\, L \;\mathrm{d}n\,\mathrm{d}d \tag{4}$$

where $E[\cdot]$ and $\rho(\cdot)$ represent the expected value and the occurrence probability density of a random variable, respectively. Consider selecting $n_c$ CVs ($n_c = n_u$ to form a square control problem) as linear combinations of the output measurements, $c = Hy$, held at constant set-points $c_s$. Here, $H$ is an $(n_c \times n_y)$-dimensional combination matrix as follows:

$$H = \begin{bmatrix} h_1 \\ \vdots \\ h_{n_c} \end{bmatrix} = \begin{bmatrix} h_{11} & \cdots & h_{1 n_y} \\ \vdots & & \vdots \\ h_{n_c 1} & \cdots & h_{n_c n_y} \end{bmatrix}$$

where $h_i$ ($i = 1, \ldots, n_c$) is a row vector of $n_y$ coefficients associated with the output measurements. The objective is to identify an optimal combination matrix $H$ such that the average loss $L_{av}$ is minimized under closed-loop control, i.e. the following optimization problem is considered:

$$\min_{H} \; L_{av} \quad \text{s.t.} \quad \text{Equation (2) and } H y_m = c_s \tag{5}$$

where the constraints are the process model and the feedback control actions, respectively. Note that since the available measurements contain noise, the controllers can only control the measured CVs, defined as $c_m \triangleq H y_m$, at their set-points $c_s$.

A Globally Approximate Solution

In general, solving the rigorous optimization problem (5) is numerically intractable, hence certain simplifications are typically introduced to ease the problem. For example, the local SOC methods assume that the economic loss can be locally approximated with a quadratic function and, furthermore, use a linearized input-output model around the nominal point over the whole operating space.[3-7] To reduce the error caused by linearization, a global SOC method (gSOC)[10] has recently been developed, which extends the self-optimizing performance to the whole operating space instead of a small region around the nominal point. A short introduction to this method is given below; for more detailed derivations and discussions, readers are referred to Ye et al.[10]

Before proceeding to the CV selection, it is worth introducing a property concerning the set-points of CVs: arbitrary nonzero set-points $c_s$ can be generalized to zero. This holds because one can add an artificial measurement, $y_0 = 1$, to the measurement vector to give $\bar{y} = [y_0 \; y^T]^T$. With $\bar{y}$ as the new measurement set, the set-points of the CVs can be unified to 0 by considering an augmented combination matrix $\bar{H} = [h_0 \; H]$, where $h_0 = -c_s$. This further indicates that, by solving for the new combination matrix $\bar{H}$, $c_s$ is automatically determined as the negative of the coefficient associated with the constant 1. Therefore, without loss of generality, the notations $\bar{y}$ and $\bar{H}$ are simplified to $y$ and $H$ in the remainder of this paper.

To evaluate the economic loss, a quadratic function in terms of $c$ is considered:[3]

$$L = \tfrac{1}{2} e_c^T J_{cc} e_c \tag{6}$$

where $e_c \triangleq c - c^{\mathrm{opt}}$ is the deviation of $c$ from its optimal value $c^{\mathrm{opt}}$, and $J_{cc}$ is the Hessian of $J$ with respect to $c$, which can be evaluated as $J_{cc} = (HG^y)^{-T} J_{uu} (HG^y)^{-1}$,[3] where $G^y$ and $J_{uu}$ are the sensitivity matrix of $y$ and the Hessian of $J$, both defined with respect to $u$. Since $c^{\mathrm{opt}} = H y^{\mathrm{opt}}$ and, considering the control effect, $c_m = H y_m = 0$, the true value of $c$ is $c = c_m - Hn = -Hn$ (note that the CV set-points have been generalized to 0). The deviation term is therefore $e_c = -H(y^{\mathrm{opt}} + n)$. By substituting these results, i.e. $L = \tfrac{1}{2}\left(H(y^{\mathrm{opt}} + n)\right)^T J_{cc} H(y^{\mathrm{opt}} + n)$, into Equation (4), the average loss can be derived as follows:[10]

$$L_{av} = E(L_d) + E(L_n) \tag{7}$$

where:

$$L_d = \tfrac{1}{2} (y^{\mathrm{opt}})^T H^T J_{cc} H y^{\mathrm{opt}}, \qquad L_n = \tfrac{1}{2} \mathrm{tr}(W^2 H^T J_{cc} H) \tag{8}$$

where $\mathrm{tr}(\cdot)$ stands for the trace of a matrix and $W^2 \triangleq E(nn^T)$ is the covariance matrix of the measurement errors. Here, $L_d$ is the loss induced by the disturbances $d$, while $L_n$ is caused by the measurement errors. Note that in the above equations the overall effect of the measurement errors has been conveniently captured by the constant matrix $W$, because the terms $y^{\mathrm{opt}}$ and $J_{cc}$ are only affected by the disturbance variable $d$.

To simplify the problem, the Hessian $J_{cc}$ is approximately assumed stationary across operating conditions, although rigorously it varies. Based on this assumption, it is possible to further enforce $J_{cc} \approx I$ by incorporating the constraint $H G^y_{\mathrm{ref}} = J_{uu,\mathrm{ref}}^{1/2}$, where the subscript $(\cdot)_{\mathrm{ref}}$ denotes a chosen reference operating point. Enforcing such a constraint entails no loss of generality; rather, it guarantees the uniqueness of the solution $H$.[4,5,10] With this treatment, the weight $J_{cc}$ at the reference point is exactly $I$, whilst $J_{cc}$ at other operating conditions is close to $I$.

Based on the above conditions, the average loss $L_{av}$ in Equation (4) is approximately estimated using the Monte Carlo method by sampling the entire disturbance space $D$:[10]

$$L_{av} \approx \frac{1}{2N} \sum_{i=1}^{N} (y^{\mathrm{opt}}_{(i)})^T H^T H y^{\mathrm{opt}}_{(i)} + \frac{1}{2} \mathrm{tr}(H W^2 H^T) = \frac{1}{2N} \| Y H^T \|_F^2 + \frac{1}{2} \| W H^T \|_F^2 = \frac{1}{2} \| \tilde{Y} H^T \|_F^2 \tag{9}$$

where $N$ is the number of sampled disturbance scenarios, the subscript $(\cdot)_{(i)}$ denotes the corresponding variables under the $i$th disturbance scenario $d_{(i)}$, and the intermediate matrices $Y$ and $\tilde{Y}$ are constructed as follows:

$$Y = \begin{bmatrix} (y^{\mathrm{opt}}_{(1)})^T \\ \vdots \\ (y^{\mathrm{opt}}_{(N)})^T \end{bmatrix}, \qquad \tilde{Y} = \begin{bmatrix} \frac{1}{\sqrt{N}} Y \\ W \end{bmatrix} \tag{10}$$
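As a numerical sketch, the stacking in Equation (10) and the analytical combination matrix that the following derivation arrives at (Equation (12)) take only a few lines. The toy dimensions and the random stand-ins for $y^{\mathrm{opt}}_{(i)}$, $W$, $G^y_{\mathrm{ref}}$, and $J_{uu,\mathrm{ref}}$ below are assumptions of this illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, ny, nu = 50, 4, 2            # scenarios, measurements, inputs (toy sizes)

# Stand-ins for the quantities the method assumes available:
Yopt = rng.standard_normal((N, ny))     # rows: (y_(i)^opt)^T
W = 0.05 * np.eye(ny)                   # measurement-error magnitude matrix
Gy_ref = rng.standard_normal((ny, nu))  # output sensitivity at reference
Juu_half = np.eye(nu)                   # J_uu,ref^(1/2), identity for sketch

# Equation (10): stack the scaled optimal outputs and W.
Ytil = np.vstack([Yopt / np.sqrt(N), W])

# Equation (12): analytical combination matrix (columns of H^T).
A = np.linalg.inv(Ytil.T @ Ytil)
HT = A @ Gy_ref @ np.linalg.inv(Gy_ref.T @ A @ Gy_ref) @ Juu_half
H = HT.T                                 # H is (nu x ny)

# Equation (9): resulting average-loss estimate for this H.
Lav = 0.5 * np.linalg.norm(Ytil @ HT, 'fro') ** 2
print(H.shape)   # (2, 4)
```

Note that the computed $H$ satisfies the constraint $H G^y_{\mathrm{ref}} = J_{uu,\mathrm{ref}}^{1/2}$ by construction, which can be checked numerically.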

Finally, by combining Equation (9) and the constraint introduced above, the following optimization problem is formulated:

$$\min_{H} \; L_{av} = \tfrac{1}{2} \| \tilde{Y} H^T \|_F^2 \quad \text{s.t.} \quad H G^y_{\mathrm{ref}} = J_{uu,\mathrm{ref}}^{1/2} \tag{11}$$

which is a convex formulation. An analytical solution to this problem has been derived as follows:[10]

$$H^T = (\tilde{Y}^T \tilde{Y})^{-1} G^y_{\mathrm{ref}} \left( (G^y_{\mathrm{ref}})^T (\tilde{Y}^T \tilde{Y})^{-1} G^y_{\mathrm{ref}} \right)^{-1} J_{uu,\mathrm{ref}}^{1/2} \tag{12}$$

or in a simpler form:[21,22]

$$H^T = (\tilde{Y}^T \tilde{Y})^{-1} G^y_{\mathrm{ref}} \tag{13}$$

due to the fact that the performance of $H$ is equivalent to that of $BH$, where $B$ is any $n_c \times n_c$ non-singular matrix.

To sum up, the above gSOC algorithm for global CV selection involves the following steps:

Step 1: Sample the disturbance space $D$ using Monte Carlo simulation to generate a sequence of $N$ disturbance scenarios $\{d_{(i)}\}$, $i = 1, \ldots, N$.
Step 2: Choose a reference point (the optimal point for a particular $d_{\mathrm{ref}}$) and evaluate the sensitivity matrix of the measurements, $G^y_{\mathrm{ref}}$, and the Hessian of the cost function, $J_{uu,\mathrm{ref}}$.
Step 3: For each disturbance scenario $d_{(i)}$, solve the original optimization problem (1) and store the corresponding optimal measurement values $y^{\mathrm{opt}}_{(i)}$. Then construct the matrices $Y$ and $\tilde{Y}$ according to Equation (10).
Step 4: Calculate the combination matrix $H$ using Equation (12).

Concerning this gSOC algorithm, several remarks are in order:

1. In the algorithm, the economic loss is evaluated with a second-order Taylor expansion as $L = \tfrac{1}{2} e_c^T J_{cc} e_c$, a weighted Euclidean norm. It is therefore reasonable to assume the weight $J_{cc}$ constant and focus on minimizing $e_c$; this is termed the gSOC-II algorithm. Another algorithm (gSOC-I), which uses the rigorous $J_{cc}$ varying with the operating point, was also presented in Ye et al.;[10] it cannot solve the optimal combination matrix analytically, only through numerical optimization.
2. The average loss $L_{av}$ is estimated from $N$ finite disturbance scenarios randomly sampled by the Monte Carlo method over the whole disturbance space. $N$ should be large enough for the average loss to be estimated reliably; on the other hand, it should be small enough to limit the computation cost. In practice, such a compromise can always be made reasonably.
3. In Step 2, the choice of reference point affects the final resultant CVs (note that the analytical expression of $H$ contains terms evaluated at the reference point); however, the loss differences are generally small. For simplicity, the nominal point can typically be selected.
4. Compared to previous linear SOC methods,[3-7] the rigorous nonlinear model is used in Step 3 to generate the optimal measurements over the entire disturbance space, hence the obtained self-optimizing performance is globally valid rather than limited to the nominal point, because the loss is evaluated and minimized in a global sense. This is the most attractive advantage of the methodology.

BATCH-TO-BATCH SELF-OPTIMIZING CONTROL

Static Formulation of Batch Processes

In this paper, we are concerned with the real-time optimization of batch processes using the SOC strategy. In particular, the following type of batch process optimization is considered:

$$\min_{u(t)} \; J(x(t_f)) \tag{14}$$

$$\text{s.t.} \quad \dot{x}(t) = F(x(t), u(t), d), \qquad x(0) = x_0 \tag{15}$$

where $u(t) \in \mathbb{R}^{n_{u(t)}}$ and $x(t) \in \mathbb{R}^{n_x}$ are the input variables and system states (with initial condition $x_0$), both time-dependent, $d \in \mathbb{R}^{n_d}$ are external uncertain disturbances, $t_f$ is the fixed duration of a batch run, and $F$ denotes the dynamic process model equations. Optimization of such a batch process calls for dynamic programming, which falls into two solution directions, namely analytical and numerical methods.[17] The former builds upon Pontryagin's Minimum Principle (PMP) or the Hamilton-Jacobi-Bellman (HJB) equation to derive the optimal $u(t)$ analytically by satisfying the first-order necessary conditions of optimality. However, the optimal $u(t)$ is then generally expressed as a function of the system states. Furthermore, since the analytical derivations involve complex symbolic computations, the analytical route is commonly restricted to small systems and is not suitable for large-scale plants. Alternatively, a numerical approach converts the dynamic optimization into an NLP problem by parameterizing the variable trajectories. Depending on which variables are parameterized, numerical optimization is further distinguished into the sequential approach and the simultaneous approach,[23] which parameterize only the input variables, or both the input and state variables, respectively. The parameterization scheme discretizes the infinite-dimensional variables (in time) into a finite set of control parameters, which are taken as the decision variables of the NLP formulation. Although less rigorous, the performance loss can be negligible provided a fine parameterization scheme is used. Besides, since numerical NLP algorithms are by now well developed, optimization of large-scale plants can be solved efficiently. In this paper, the numerical solution method, more precisely the sequential approach, is considered.
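The sequential approach can be sketched on a toy one-state batch system as follows; the model, horizon, bounds, and target below are assumptions of this illustration, not the paper's example. The input trajectory is parameterized as M piecewise-constant arcs, the model is integrated arc by arc, and the resulting NLP is handed to a standard solver.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Toy batch system (illustrative only):
#   dx/dt = -u(t) * x + d,  x(0) = 1,  minimize J = (x(tf) - 0.2)^2
# with u(t) parameterized as M piecewise-constant arcs on [0, tf].

M, tf, d = 5, 1.0, 0.1
t_grid = np.linspace(0.0, tf, M + 1)

def simulate(u_params, d):
    """Integrate the model arc by arc and return the final state x(tf)."""
    x = 1.0
    for k in range(M):
        sol = solve_ivp(lambda t, x: -u_params[k] * x + d,
                        (t_grid[k], t_grid[k + 1]), [x], rtol=1e-8)
        x = sol.y[0, -1]
    return x

def J(u_params):
    """Terminal cost of problem (14) as a function of the parameters."""
    return (simulate(u_params, d) - 0.2) ** 2

res = minimize(J, x0=np.ones(M), bounds=[(0.0, 5.0)] * M)
print(res.fun < 1e-4)   # True: the parameterized policy reaches the target
```

This is exactly the static NLP of the next paragraph: the solver sees only the finite vector of control parameters, and the ODE integration plays the role of the mapping $f(u, d)$.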
In this approach, the whole time horizon is divided into many small grids, where the inputs are parameterized into piece-wise polynomial functions whose coefficients serve as the decision variables for optimization. Hence, the dynamic optimization problem (14) is equivalently formulated as the following NLP problem:

$$\min_{u} \; J(x(t_f)) \tag{16}$$

$$\text{s.t.} \quad x(t_f) = f(u, d)$$

with the measurements expressed as follows:

$$y_m = y + n = \bar{x}(t_f) + n \tag{17}$$

In the above formulation, $u \in \mathbb{R}^{n_u}$ ($n_u = M n_{u(t)}$) are the new decision variables (often referred to as the control parameters) obtained by parameterizing the original input trajectories, and $M$ is the number of control parameters for each input arc. $f$ denotes the mapping between the final state $x(t_f)$ and $(u, d)$; it is established by calculating $x(t_f)$ through integration of the dynamic model $F$ over the time horizon $[0, t_f]$. A subset of the final states $x(t_f)$, say $\bar{x}(t_f)$, is assumed to be measured with noise $n$; hence the true measurements are $y = \bar{x}(t_f)$ and the noisy measurements are $y_m$. Note that throughout the NLP formulation the same notations $\{u, y, n, y_m, f\}$ are used, for consistency with those defined for the continuous process. The static formulation is visually indicated in Figure 1, where for simple illustration the polynomial function is taken as the special case of a constant value in each time grid.

Figure 1. Static formulation of batch process using input parameterization.

Latent Effective Manipulated Variables

With a static formulation, the existing SOC methods[4-8,10] can in principle be directly applied to optimize the batch process by controlling combinations of measurements from batch to batch, because problems (16) and (1) are essentially identical. A problem here, however, is that the number of parameterized decision variables is typically large, because a fine parameterization is required to guarantee the optimization performance. If $u$ is used directly as the MVs of the self-optimizing control system, one faces at least the following difficulties: (1) in the context of SOC, the same number of CVs must be designed to form a square control problem, and solving for these CVs is not an easy task; in particular, the required sensitivity matrices ($G^y_{\mathrm{ref}}$ and $J_{uu,\mathrm{ref}}$) are typically computed by finite differences, which requires perturbing all MVs in a particular ordered sequence; (2) online control of a large-scale multivariable system is also difficult because of possibly complicated interactions between the MVs and CVs, which challenges both the design of a decentralized control structure and the achievement of a guaranteed control performance. For these reasons, it is highly desirable from a practical viewpoint to design a control system with a minimal number of adjustable MVs and process-dependent CVs.

The main idea here is to first project the control parameters $u$ onto a low-dimensional space using principal component analysis (PCA), and then deal with the control design problem in that low-dimensional space. PCA is a well-known dimension reduction technique that compresses correlated variables into a compact subspace spanned by so-called latent variables. Assume that the NLP problem (16) has been solved under $N$ disturbance scenarios and all obtained optimal control parameters $u^{\mathrm{opt}}$ are collected in a matrix $U \in \mathbb{R}^{N \times n_u}$. Firstly, $U$ is normalized to $\bar{U}$ by scaling all control parameters to zero mean and unit variance. Precisely, the rows of $\bar{U}$, $\bar{u}^T_{(i)}$, are computed as follows:

$$\bar{u}^T_{(i)} = (u_{(i)} - \mu_u) \oslash \sigma_u \tag{18}$$

where $\mu_u$ and $\sigma_u$ are the vectors of means and standard deviations of the control parameters, and $\oslash$ stands for Hadamard (element-wise) division. The decomposition of $\bar{U}$ by PCA is represented as follows:

$$\bar{U} = VP^T + E = VP^T + \tilde{V}\tilde{P}^T \tag{19}$$

where $V \in \mathbb{R}^{N \times k}$ and $P \in \mathbb{R}^{n_u \times k}$ are the score and loading matrices of the latent principal components, respectively, while $\tilde{V} \in \mathbb{R}^{N \times (n_u - k)}$ and $\tilde{P} \in \mathbb{R}^{n_u \times (n_u - k)}$ are the score and loading matrices of the residual components, and $E \in \mathbb{R}^{N \times n_u}$ is the residual matrix. $k \le n_u$ is the number of principal components; commonly, it is determined by the cumulative percent variance or by cross-validation. To obtain the matrices in Equation (19), let $\Sigma = \bar{U}^T \bar{U} / (N - 1)$; a symmetric eigenvalue decomposition is performed on $\Sigma$:

$$\Sigma = [P \;\; \tilde{P}] \, \Lambda \, [P \;\; \tilde{P}]^T \tag{20}$$

$$V = \bar{U}P, \qquad \tilde{V} = \bar{U}\tilde{P}$$

where $\Lambda$ is a diagonal matrix whose elements are the eigenvalues of $\Sigma$:

$$\Lambda = \mathrm{diag}\{\lambda_1, \lambda_2, \ldots, \lambda_{n_u}\} \tag{21}$$

Based on the above relationships, $\bar{u}$ in the original space can be approximated with $\hat{u}$, a reconstruction from the latent variables $v$:

$$\bar{u} \approx \hat{u} = Pv \tag{22}$$

where the residual $e$ is:

$$e = \bar{u} - \hat{u} = \tilde{P}\tilde{v} \tag{23}$$

Using the approximately reconstructed $\bar{u}$, the anti-scaled variable is further computed as $u = \sigma_u \circ \bar{u} + \mu_u$ ($\circ$ stands for the Hadamard product) and applied to the real plant. In this work, $v$ are defined as the latent effective manipulated variables, which are used as the adjusted MVs for run-to-run optimization of the batch process. The term "latent" implies that $v$ are linear combinations of the original control parameters $u$, hence lack a clear physical interpretation. Meanwhile, "effective" means that although fewer in number, $v$ implicitly dominate the variation space of optimal operation, from which the original $u$ can be reconstructed with adequate accuracy.
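A minimal numerical sketch of Equations (18)-(22) follows; the synthetic matrix U, whose rows vary mostly along two directions (mimicking the low-dimensional structure of Equation (28) below), and all toy sizes are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, nu, k = 100, 10, 2
# Synthetic U: rows are "optimal control parameters" lying close to a
# 2-dimensional subspace, plus a small residual.
basis = rng.standard_normal((2, nu))
U = rng.standard_normal((N, 2)) @ basis + 0.01 * rng.standard_normal((N, nu))

mu, sigma = U.mean(axis=0), U.std(axis=0)
Ubar = (U - mu) / sigma                      # Equation (18): normalization

Sigma = Ubar.T @ Ubar / (N - 1)              # covariance of control params
lam, Pfull = np.linalg.eigh(Sigma)           # eigendecomposition, Eq. (20)
order = np.argsort(lam)[::-1]                # sort by decreasing eigenvalue
P = Pfull[:, order[:k]]                      # loading matrix, k components

V = Ubar @ P                                 # scores: latent effective MVs
Uhat = V @ P.T                               # Equation (22): reconstruction
u_applied = sigma * Uhat + mu                # anti-scale before applying

cpv = lam[order[:k]].sum() / lam.sum()       # cumulative percent variance
print(cpv > 0.95)                            # True: 2 latents dominate
```

The cumulative percent variance computed in the last lines is the quantity used in the text to choose $k$: here the two retained components explain nearly all of the variation, so the residual $e$ of Equation (23) is negligible.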


A Local Interpretation

Dimension reduction of $u$ is possible because the optimal operation of a plant is related to the disturbance that has occurred.[9] In this subsection, an interpretation is provided using a local analysis, from which the rationale can be easily understood. Firstly, define $J_u = \partial J / \partial u$ as the partial gradient of the cost function $J$ with respect to $u$. Then, consider a reference point, for example the optimal operating point $u^{\mathrm{opt}}_{\mathrm{ref}}$ under the disturbance scenario $d_{\mathrm{ref}}$. The first-order expansion of $J_u$ around $u^{\mathrm{opt}}_{\mathrm{ref}}$ reads:

$$J_u = J_{u,\mathrm{ref}} + J_{uu} \Delta u^{\mathrm{opt}} + J_{ud}^T \Delta d \tag{24}$$

where $J_{uu} \in \mathbb{R}^{n_u \times n_u}$ and $J_{ud} \in \mathbb{R}^{n_d \times n_u}$ are the second-order sensitivity matrices, and the deviation terms are defined as $\Delta u^{\mathrm{opt}} = u^{\mathrm{opt}} - u^{\mathrm{opt}}_{\mathrm{ref}}$ and $\Delta d = d - d_{\mathrm{ref}}$, respectively. In the above equation, $J_{u,\mathrm{ref}} = 0$ because $u^{\mathrm{opt}}_{\mathrm{ref}}$ is optimal with regard to $d_{\mathrm{ref}}$. Furthermore, once the operating condition has changed, the first-order optimality condition must again be satisfied to restore optimality, i.e. $J_u = 0$. Equation (24) then becomes:

$$J_{uu} \Delta u^{\mathrm{opt}} + J_{ud}^T \Delta d = 0 \tag{25}$$

Next, let $J_{ud}^{\mathrm{ns}} \in \mathbb{R}^{n_u \times (n_u - n_d)}$ be the null space of $J_{ud}$ such that $J_{ud} J_{ud}^{\mathrm{ns}} = 0$. Left-multiplying both sides of the above equation by $(J_{ud}^{\mathrm{ns}})^T$ gives:

$$(J_{ud}^{\mathrm{ns}})^T J_{uu} \Delta u^{\mathrm{opt}} \equiv C \Delta u^{\mathrm{opt}} = 0 \tag{26}$$

where $C = (J_{ud}^{\mathrm{ns}})^T J_{uu} \in \mathbb{R}^{(n_u - n_d) \times n_u}$. This relationship further indicates that $\Delta u^{\mathrm{opt}}$ lies in the subspace spanned by the orthogonal complement of $C$:

$$\Delta u^{\mathrm{opt}} = (C^T)^{\mathrm{ns}} s \tag{27}$$

where $s$ is an arbitrary independent $n_d$-dimensional vector. This local analysis shows that the optimal operation is dominated by exactly $n_d$ independent variables. For numerical batch optimization, it is reasonable to assume that the input arcs are intensively parameterized so that $n_u > n_d$, which makes dimension reduction of the control parameters possible. Equivalently, $\Delta u^{\mathrm{opt}}$ can also be solved from Equation (25) as:

$$\Delta u^{\mathrm{opt}} = -(J_{uu})^{-1} J_{ud}^T \Delta d \tag{28}$$

which clearly indicates that the variation of the optimal operation depends purely on the disturbance changes. Indeed, if no disturbance occurs, one does not need to perform RTO; it is then more logical to implement a fixed operating policy that already minimizes the cost function, and there is no degree of freedom left for adjusting the MVs. On the other hand, since $d$ is not measurable online, Equation (28) cannot be used directly to find the new optimal operation. Instead, the proposed method seeks latent variables that are linear combinations/projections of the known control parameters.

Remark 1. The local analysis implies that the dimension of the independent/latent variables is exactly $n_d$, such that $\Delta u^{\mathrm{opt}}$ can be recomputed accurately. However, it turns out that even with $k < n_d$, the method is still highly efficient, as will be seen in the later case study. This is because the most effective latent variables, those maximizing the variation range of the optimal operations, are extracted one by one as $k$ increases. Therefore, one is allowed to pursue a simple SOC control system by discarding the directions that have little impact on optimal operation; such directions are easily identified in the PCA algorithm.

Run-to-Run Optimization for Batch Process

In the following, $u$ is replaced with the latent variables $v$ as the MVs, which leads to the following optimization problem:

$$\min_{v} \; J(x(t_f)) \tag{29}$$

$$\text{s.t.} \quad x(t_f) = f_v(v, d)$$

THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING


with the measurements defined the same as in Equation (17). Here, fv is the modified mapping function after MV replacement, which is, equivalently, a composite of the original mapping: fv(v, d) = f(u(v), d), where the inner function u(v) is defined in the PCA step. Note that the modification of the plant model may introduce some error, which can nevertheless be accepted, as indicated by the cumulative percent variance method or verified through re-optimization. Compared to Equation (16), the number of operable MVs has been reduced from nu to k (1 ≤ k ≤ nu), which can be chosen by the designer according to the problem under consideration. In general, when a small k is selected, the resultant SOC control system is simple, with a small number of process-dependent CVs, but the residual e may be large, and hence also the performance loss; when k is large, the converse holds. In practice, a reasonable trade-off should be made between economic performance and control complexity.

To sum up, the proposed run-to-run self-optimizing control strategy for a batch process is illustrated in Figure 2. With the addition of input parameterization and MV reconstruction, the diagram in the dashed box can be regarded as an extended system, which takes v as the operable MVs and ym as the measurements. This small-scale system is therefore treated with the CV selection method described in the second section to obtain the self-optimizing CVs. As indicated in Figure 2, as soon as the jth batch finishes, the CV values c^j are computed from the measurements sampled in that batch, i.e. c^j = H ym^j, and passed to the batch-wise controllers for set-point tracking, which produce a new control signal v^(j+1). At the beginning of the (j+1)th batch, v^(j+1) is first reconstructed into control parameters as ū^(j+1) = P v^(j+1), then anti-scaled as ũ^(j+1) = σ_ũ ∘ ū^(j+1) + μ_ũ, which is further used to generate the full input trajectory u^(j+1)(t) for the forthcoming batch.
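The batch-to-batch update loop just described can be sketched as follows. This is a minimal illustration, not the paper's implementation: the loading matrix P, the scaling constants, the CV combination matrix H, the controller tunings, and the plant mapping are all hypothetical placeholders.

```python
import numpy as np

# Illustrative dimensions: k = 2 latent MVs, 51 control parameters, 7 measurements
rng = np.random.default_rng(0)
P = rng.standard_normal((51, 2))      # PCA loading matrix (hypothetical values)
sigma_u, mu_u = 0.2, 0.0005           # anti-scaling constants (hypothetical)
H = rng.standard_normal((2, 7))       # CV combination matrix (hypothetical)
Kp, Ti = 0.5, 4.0                     # batch-wise PI tunings (hypothetical)

def plant_batch(u_params):
    """Placeholder for one batch run: maps the input profile to 7
    end-of-batch measurements (a stand-in for the true plant)."""
    return np.tanh(u_params[:7])

v = np.zeros(2)                       # latent effective MVs for the first batch
c_hist = []
for j in range(20):                   # 20 batches, as in the case study
    u_bar = P @ v                     # reconstruct scaled control parameters
    u_tilde = sigma_u * u_bar + mu_u  # anti-scale to physical units
    ym = plant_batch(u_tilde)         # run the batch, sample the measurements
    c = H @ ym                        # evaluate the self-optimizing CVs
    c_hist.append(c)
    # batch-wise PI update of v toward the set-point c = 0
    v = v + Kp * (c + np.sum(c_hist, axis=0) / Ti)
```

In a real design, P comes from the PCA step, H from the CV selection method, and plant_batch from running the actual batch and sampling its end-of-batch measurements.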
Some important features of the proposed SOC control system are as follows:

1. The control system is run-to-run self-optimizing in the face of uncertainties, i.e. from batch to batch, the plant is automatically driven near its optimum by tracking constant set-points of the selected CVs. This feature is inherited from the original SOC strategy; furthermore, the economic performance in this paper is enhanced by a recently developed global CV selection method.

2. Even if an intense parameterization is used to improve the optimizing performance, the number of required process-dependent CVs can be reduced by identifying the smallest number of latent effective MVs. In this way, CV selection for achieving SOC is significantly simplified. In addition, controller design is alleviated because a much smaller multivariable system is under consideration.


Figure 2. Block diagram of run-to-run optimization for batch process.

3. All variables that determine the complete input arc for the current batch are generated at the end of the previous one. Consequently, input corrections cannot be made in response to mid-course disturbances until the next batch. In this situation, within-batch optimization techniques are better suited to further enhance plant operation; however, within-batch optimization is not explored in this study and remains a subject of future research.

CASE STUDY

Process Description

In this section, a fed-batch reactor process is considered,[24] as shown in Figure 3. In the reactor, two reactions occur, A + B → C and 2B → D, where A and B are the reactants, C is the product, and D is the undesired byproduct of the side reaction. For operation, reactant A is fed once at the beginning of a batch, while B is fed continuously during the reaction at a feed rate u(t), which is the input variable and is constrained within 0 ≤ u(t) ≤ 0.001 L·min−1. The first-principles model for this reactor is developed from material balances and is described by the following differential equations:

ċ_A = −k1 c_A c_B − (c_A/V) u,    c_A(0) = c_A0    (30)
ċ_B = −k1 c_A c_B − 2 k2 c_B² − ((c_B − c_Bin)/V) u,    c_B(0) = c_B0    (31)
V̇ = u,    V(0) = V0    (32)
ċ_C = k1 c_A c_B − (c_C/V) u,    c_C(0) = c_C0    (33)
ċ_D = k2 c_B² − (c_D/V) u,    c_D(0) = c_D0    (34)

where c_X and c_X0 denote the concentration of material X in the reactor and its initial value, V is the reactor holdup with initial value V0, k1 and k2 are the kinetic coefficients of the two reactions, respectively, and c_Bin is the concentration of B in the feed. It is assumed that the 5 states are measured only at the end of a batch run (denoted cAf, cBf, Vf, cCf, and cDf). The measurement noises are of Gaussian type, all with zero means and standard deviations of 5 % of the nominal values. The nominal values of the involved parameters are listed in Table 1. The expected disturbances comprise 3 variables, k1, k2, and cBin, all with ±50 % variations about their nominal values, which span quite a large operating space. The operational objective is to maximize the product C with minimal feed of material B, hence the negative profit is taken as the cost function:

J = −[ c_C(tf) V(tf) − ω ∫₀^tf u(t)² dt ]    (35)
where ω is the penalty weight for B, taken as 2500 L²·min−1·mol−1 hereafter. Besides the input constraint, the end qualities of B and D are not allowed to exceed 0.025 mol·L−1 and 0.15 mol·L−1, respectively.

Selection of Latent Effective Manipulated Variables

Firstly, numerical optimization for the nominal operating condition is performed. Using the sequential approach, the continuous feed rate u(t) is parameterized into piecewise linear functions by discretizing the entire operation time [0, tf] into 50 small time grids, which guarantees adequate precision. Hence, there are 51 decision variables u = [ui]^T (i = 1, ..., 51) for numerical optimization. The GPOPS toolbox[25] is used as the numerical solver, which gives a minimum objective function J^opt = −0.2401 mol in the nominal case. The optimal input trajectory, together with all 5 system states, is illustrated in Figure 4.
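The model (30)–(34) and cost (35) can be simulated directly; below is a minimal sketch assuming a piecewise-constant feed profile held over the 50 time grids. Parameter values follow Table 1, but the feed profile itself is an arbitrary illustrative choice, not the optimized arc.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Nominal parameters (Table 1)
k1, k2, cBin = 0.053, 0.128, 5.0
tf, omega = 250.0, 2500.0
x0 = [0.72, 0.05, 0.0, 0.0, 1.0]      # cA, cB, cC, cD, V at t = 0

def rhs(t, x, u_of_t):
    """Material balances of Equations (30)-(34)."""
    cA, cB, cC, cD, V = x
    u = u_of_t(t)
    return [-k1*cA*cB - cA/V*u,
            -k1*cA*cB - 2*k2*cB**2 - (cB - cBin)/V*u,
            k1*cA*cB - cC/V*u,
            k2*cB**2 - cD/V*u,
            u]

def batch_cost(u_grid):
    """Simulate one batch under a piecewise-constant feed profile and
    evaluate the cost J of Equation (35)."""
    n = len(u_grid)
    edges = np.linspace(0.0, tf, n + 1)
    u_of_t = lambda t: u_grid[min(np.searchsorted(edges, t, 'right') - 1, n - 1)]
    sol = solve_ivp(rhs, (0.0, tf), x0, args=(u_of_t,),
                    max_step=1.0, rtol=1e-8, atol=1e-10)
    cC_f, V_f = sol.y[2, -1], sol.y[4, -1]
    penalty = omega * np.sum(np.asarray(u_grid)**2) * (tf / n)
    return -(cC_f * V_f - penalty)

# Arbitrary illustrative feed profile within 0 <= u(t) <= 0.001 L/min
u_demo = np.full(50, 5e-4)
J_demo = batch_cost(u_demo)
```

An optimizer (GPOPS in the paper, or any NLP solver over u_grid) then minimizes batch_cost subject to the input and end-quality constraints.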

Figure 3. Fed-batch reactor.

Table 1. Parameter values for the reactor

Variable | Description | Value
k1 | Kinetic coefficient (main) | 0.053 L/(mol·min)
k2 | Kinetic coefficient (side) | 0.128 L/(mol·min)
cBin | Inlet concentration of B | 5 mol/L
cA0 | Initial concentration (A) | 0.72 mol/L
cB0 | Initial concentration (B) | 0.05 mol/L
cC0 | Initial concentration (C) | 0 mol/L
cD0 | Initial concentration (D) | 0 mol/L
V0 | Initial holdup | 1 L
tf | Batch duration | 250 min
ω | Penalty coefficient | 2500 L²/(min·mol)

Figure 4. Optimal input and system state trajectories.

As can be seen from the results, the optimal arc of the B feed rate exhibits the following feature: in the early phase of the reaction, more B is fed into the reactor to produce the desired product C; this influence becomes less significant as the remaining reaction time shrinks. Finally, the penalized feed of B dominates the objective function, and u(t) drops sharply. In the nominal case, the terminal concentration of B is actively constrained at its maximum allowed value of 0.025 mol·L−1.
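As noted earlier, the latent effective MVs are obtained by a PCA step over scaled optimal control parameters collected from many disturbance scenarios. The sketch below illustrates that step on synthetic stand-in data (200 arcs of 51 parameters built from 3 underlying directions, mimicking nd = 3); the data and the 90 % variance threshold are illustrative only, not the paper's results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for 200 optimal input arcs of 51 parameters each,
# generated from 3 underlying directions plus a small residual
t = np.linspace(0.0, 1.0, 51)
basis = np.stack([np.exp(-3*t), 1.0 - t, t*np.exp(-3*t)])   # (3, 51)
scores = rng.standard_normal((200, 3))
U = scores @ basis + 0.01*rng.standard_normal((200, 51))    # (200, 51)

# PCA via SVD of the centered and scaled data matrix
mu = U.mean(axis=0)
sigma = U.std(axis=0)
Z = (U - mu) / sigma
_, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)
cpv = np.cumsum(explained)                # cumulative percent variance

k = int(np.searchsorted(cpv, 0.90) + 1)   # smallest k explaining >= 90 %
P = Vt[:k].T                              # loading matrix, (51, k)
V_latent = Z @ P                          # latent effective MVs per scenario
```

With the paper's data, the first two components explained 73.2 % and 21.2 % of the variance, motivating the choice k = 2.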

Next, to explore the characteristics of optimal operation, 200 disturbance scenarios are randomly generated within the expected ranges, and the cost function is optimized for each using the GPOPS toolbox. It is worth mentioning that under different disturbances the constraint on the B concentration can be active or inactive, which constitutes an active-set change problem. To keep the problem tractable within the unchanging active-set assumption, a penalty function approach has been incorporated such that the constraint is always inactive in the operating space of interest. Figure 5a shows 50 groups of obtained optimal input arcs. One finds that their shapes are generally similar, namely u(t) starts high and decreases afterwards. Physically, this trend is determined by the intrinsic mechanism of the reaction process, even though various disturbances are involved. Mathematically, this feature leads to correlated optimal inputs. To inspect this phenomenon in detail, the distributions of u are plotted in selected pairs in Figure 5: u1 against u26, u6 against u31, u11 against u36, u16 against u41, and u21 against u46, where ui denotes the ith parameterized control variable of the input arc. As indicated in the figure, the optimal decision variables are correlated with each other to a certain extent, and some of them are approximately dominated by linear relationships, for example u1 against u26 and u6 against u31; the pair u21 against u46 exhibits no clear linearity.

Figure 5. Distribution of optimal inputs: (a) input arcs; (b–f) various input pairs.

Then, PCA is performed to rigorously find the latent effective manipulated variables. For selection of the latent variable number k, the cumulative percent variance method is used. The first principal component explains 73.2 % of the variance and the second 21.2 %, which jointly occupy 94.4 % of the data information. As k increases, the explained variance accumulates to 98.6 % and 99.5 % for k = 3 and k = 4, respectively; for k > 4, there is little room for further improvement. Consequently, k = 2 is recommended as a reasonable trade-off between the desired economic performance and control complexity. Notice that this selection of k is smaller than the number of disturbances, which is 3.

The achievable economic performance of using the latent effective manipulated variables is further verified through re-optimization. As a comparison, a heuristic parameterization scheme based on process knowledge[24] is considered, where the input arc is parameterized with 2 variables, Fs and ts: the feed is maintained at a constant rate Fs until time instant ts and is then switched to 0. Note that this scheme also contains 2 decision variables and formulates a reduced problem of the same scale as the one considered in this paper. Random disturbance scenarios have been tested by re-optimization under both parameterization schemes, and their achievable minimal costs are compared with those obtained using the original fine control parameterization. Figure 6 shows the calculated economic losses through re-optimization for 100 groups of random disturbances; the scheme proposed in this paper significantly reduces the economic loss compared to the heuristic parameterization scheme. The average loss (2.6 × 10−4) is only about 1/10 of that of the heuristic approach (2.5 × 10−3), obtained with the same number of control parameters.

Figure 6. Economic losses for two parameterization schemes under 100 random disturbances.

Selection of Controlled Variables

With 2 selected latent effective manipulated variables (denoted v1 and v2), the same number of CVs needs to be identified for the purpose of self-optimizing control. Concerning the candidate measurements for CV selection, note that u is essentially redundant; the latent variables v1 and v2 are therefore considered as measurements instead. The overall measurement set is as follows:

y = [cAf  cBf  Vf  cCf  cDf  v1  v2]^T    (36)

To implement the CV selection method described previously, the reference values of v1 and v2 are first calculated as −8.13 and −0.42 using the loading matrix from the PCA. Then, the involved sensitivity matrices are evaluated by finite differences around the nominal point, as follows:

Gy,ref = [ 0.0033   2.7 × 10−5   0.0021   −0.0018   −0.0014   1   0
          −0.0024  −0.0018      −0.0013  −0.0014   −0.001    0   1 ]^T × 10−4

Jvv,ref = [ 1.05   0.089
            0.089  1.95 ]    (37)

In practice, it is not necessary to use all measurements to constitute the CVs; in most cases a few measurements suffice without sacrificing much economic performance. For this simple reactor case, enumeration has been used for this purpose. Table 2 summarizes the most promising measurement subsets and corresponding CVs as the number of used measurements ny increases. The results reveal that the concentration of B and the first latent variable, v1, are always chosen with the highest priority among all measurements. When ny = 2, the best measurement subset is [cBf , v1] with a minimal average loss of 0.0024. As ny increases, the minimal average loss is further reduced. For example, when

Table 2. Best measurement subsets for CV selection

ny | Measurement subset | Average loss (mol)
2 | cBf , v1 | 0.0024
3 | cBf , cCf , v1 | 0.000 77
4 | cBf , cCf , cDf , v1 | 0.000 57
5 | cBf , cCf , cDf , v1 , v2 | 0.000 48
6 | cAf , cBf , cCf , cDf , v1 , v2 | 0.000 44
7 | All measurements | 0.000 44

ny = 3, the best subset is [cBf , cCf , v1] with a minimal average loss of 0.000 77, which is less than one third of the previous case. When ny = 4, the loss is reduced to 0.000 57. However, as ny increases further, the gained benefit becomes marginal; when all measurements are selected, the average loss is 0.000 44. Therefore, ny = 3 is a good choice to maintain both optimality and simplicity. The finally selected self-optimizing CVs are:

c1 = −0.215 − 0.847 cBf + 0.821 cCf + 0.012 v1    (38)
c2 = 0.112 − 7.506 cBf + 0.223 cCf + 0.001 v1    (39)

which apply to all batches, hence the superscript j is omitted.

Closed-Loop Validation

To evaluate the real self-optimizing control performance, closed-loop validation is carried out for the following 3 illustrative disturbance cases:

1. Disturbance case (D1): [k1 , k2 , cBin ] = [0.053, 0.128, 5] (nominal);
2. Disturbance case (D2): [k1 , k2 , cBin ] = [0.0795, 0.192, 5];
3. Disturbance case (D3): [k1 , k2 , cBin ] = [0.0265, 0.192, 2.5].

Note that, except for D1, the disturbance variables are set to either their minimum or maximum values to represent significantly uncertain conditions. The true minimal costs for these disturbance cases, computed assuming the uncertain parameters are known, are −0.2401, −0.2739, and −0.077 mol, respectively. Note also that for the first two cases the constraint on the B concentration is active, while for the last case the optimal value of the B concentration is 0.015 mol/L, inside the feasible region.

Two PI controllers have been tuned to track c = 0 from batch to batch; namely, each latent effective MV vi for the (j + 1)th batch is adapted as:

v_i^(j+1) = v_i^j + Kp ( c_i^j + (1/Ti) Σ_{k=1}^{j} c_i^k )    (40)

where Kp and Ti are the proportional gain and integral time, tuned by trial and error. It should be highlighted that although the design of the self-optimizing CVs has a priori incorporated the effect of measurement noise, in practice the noise occurs at high frequency and differs significantly from the other disturbances, which stay constant over a number of batches. Among the many solutions for handling measurement noise, a linear smoothing filter (a moving average over 5 samples) is used here as a comparison, which is simple yet effective for Gaussian noise. The closed-loop performances for D1–D3 are shown in Figures 7–9, where the run-to-run evolutions of the CVs, the latent effective

Figure 7. Closed-loop performance for disturbance D1.


Figure 8. Closed-loop performance for disturbance D2.

MVs, the final concentration of B, and the cost function are illustrated over 20 batches. In all cases, the system is initiated from a random starting point far from the optimum. For D1, the self-optimizing CVs c are soon driven toward 0 by manipulating v under both schemes. However, in the no-filtering case, significant fluctuations around the set-point 0 are observed due to the random noise, and v is also regulated more aggressively. With filtering of the measurements, both c and v evolve more smoothly around the set-point. From the evolving trend of cBf, one sees that the system initially operated in an infeasible region with cBf > 0.025 but was regulated into the feasible region within 5 batches as the self-optimizing CVs were controlled. The final converged value of cBf is about 0.023 mol/L for both the filtering and no-filtering cases, although the latter exhibits more obvious fluctuation of cBf, which is not surprising. In the bottom-right panel, where J is plotted, it is clear that the cost is minimized along with tracking of the CVs. Note, however, that although J is −0.24 and very close to the optimal value at the 2nd batch, this actually benefits from the high, infeasible value of cBf. As the CVs converge to their set-points, J asymptotically approaches about −0.2392 mol, indicating a loss of less than 0.001 mol.

Similar results are observed for D2, which has the reaction parameters set at their boundaries. The reactor is automatically driven toward the optimum by tracking c = 0 from batch to batch, and the smoothing filter again improves the control quality by reducing the fluctuations. In


this case, cBf is at first far from the desired value of 0.025 and then approaches it as c is controlled. The measured values of cBf show very slight violations of the upper bound; such violations are so small that they can be considered acceptable. The achieved economic cost for both schemes is −0.2733, which is very close to the optimal value of −0.2739.

D3 is an extreme case with the minimum main reaction rate k1, the maximum side reaction rate k2, and the least material B in the feed; therefore, very little product C can be produced. The optimal cost is calculated as −0.077 mol, with cBf equal to 0.015 at the optimum. The closed-loop validation shows that the system is able to adjust cBf toward its optimal value, thereby avoiding the set-point retuning of cBf that would be required in other control policies. The ultimate value of cBf is around 0.013, close to the optimum. The achieved cost is −0.070 mol, about a 10 % loss; however, since the CV selection method aims to minimize the absolute value of the economic loss, a loss of 0.007 mol is comparable to the other cases.

Besides the disturbances investigated above, the designed self-optimizing control system has also been validated with a number of random disturbance scenarios, with satisfactory results. These experiments verify that by controlling a reduced 2 × 2 plant, the control system is efficiently self-optimizing, even though the input arc was originally parameterized over many small time grids, requiring more than 50 decision variables to determine the input arc in the first place.
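The moving-average filter used in the filtering scheme is simple to state; below is a sketch with the 5-sample window from the text, applied to a synthetic batch-wise CV sequence (the signal itself is illustrative, not the paper's data).

```python
import numpy as np

def smooth(signal, window=5):
    """Causal moving average over the most recent `window` batches."""
    signal = np.asarray(signal, dtype=float)
    out = np.empty_like(signal)
    for i in range(len(signal)):
        out[i] = signal[max(0, i - window + 1):i + 1].mean()
    return out

# Synthetic CV sequence: decaying batch-wise trend plus Gaussian noise
rng = np.random.default_rng(2)
true_c = 0.5 * np.exp(-0.3 * np.arange(20))
noisy_c = true_c + 0.05 * rng.standard_normal(20)
filtered_c = smooth(noisy_c)

# Average deviations from the underlying trend, raw vs. filtered
err_raw = np.mean(np.abs(noisy_c - true_c))
err_filt = np.mean(np.abs(filtered_c - true_c))
```

Note the trade-off visible in the closed-loop results: the filter attenuates batch-to-batch fluctuation but introduces lag with respect to the underlying trend.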


Figure 9. Closed-loop performance for disturbance D3.

CONCLUSIONS

In this study, a run-to-run self-optimizing control strategy for batch processes is presented. In the approach, the batch optimization problem is first transformed into a static NLP problem through input parameterization, to which the enhanced gSOC method is then applied. Since SOC is an optimization scheme realized by means of feedback control, it is highly desirable to implement a control system with minimal complexity. To this end, the so-called latent effective manipulated variables were proposed as the adjusted MVs of the self-optimizing control system; they are compressed from the space of original control parameters and are more suitable for feedback control. The developed methodology is applied to a simulated fed-batch reactor and satisfactory performance is achieved.

It is worth mentioning that a requirement for successful latent variable extraction is that the number of control parameters, nu, be larger than the number of disturbances. When this assumption does not hold (for example, when disturbances occur dynamically during the batch), the proposed run-to-run self-optimizing approach may fail. Extending the current method to cover such situations is a direction of future work.

ACKNOWLEDGEMENTS

The authors gratefully acknowledge the financial support from the National Natural Science Foundation of China (NSFC) (61673349, 61304081), Ningbo Natural Science Foundation (2015A610151),


Scientific Research Fund of Zhejiang Provincial Education Department (Y201432757), and Ningbo Innovation Team (2012B82002).

REFERENCES

[1] G. Francois, D. Bonvin, Control and Optimisation of Process Systems 2013, 43, 1.
[2] S. Skogestad, J. Process Contr. 2000, 10, 487.
[3] I. J. Halvorsen, S. Skogestad, J. C. Morud, V. Alstad, Ind. Eng. Chem. Res. 2003, 42, 3273.
[4] V. Alstad, S. Skogestad, Ind. Eng. Chem. Res. 2007, 46, 846.
[5] V. Alstad, S. Skogestad, E. S. Hori, J. Process Contr. 2009, 19, 138.
[6] V. Kariwala, Ind. Eng. Chem. Res. 2007, 46, 3629.
[7] V. Kariwala, Y. Cao, S. Janardhanan, Ind. Eng. Chem. Res. 2008, 47, 1150.
[8] L. Ye, Y. Cao, Y. Li, Z. Song, Ind. Eng. Chem. Res. 2013, 52, 798.
[9] L. Ye, Y. Cao, X. Ma, Z. Song, Ind. Eng. Chem. Res. 2014, 53, 14695.
[10] L. Ye, Y. Cao, Y. Xiao, Ind. Eng. Chem. Res. 2015, 54, 12040.
[11] J. Jaschke, S. Skogestad, J. Process Contr. 2011, 21, 1407.
[12] H. Dahl-Olsen, S. Narasimhan, S. Skogestad, “Optimal output selection for control of batch processes,” American Control Conference, Seattle, 11–13 June 2008.


[13] W. Hu, J. Mao, G. Xiao, V. Kariwala, “Selection of Self-Optimizing Controlled Variables for Dynamic Processes,” 8th International Symposium on Advanced Control of Chemical Processes (ADCHEM), IFAC, Singapore, 10–13 July 2012.
[14] J. Jaschke, M. Fikar, S. Skogestad, “Self-optimizing invariants in dynamic optimization,” 50th IEEE Conference on Decision and Control, Orlando, 12–15 December 2011.
[15] L. Ye, V. Kariwala, Y. Cao, “Dynamic optimization for batch processes with uncertainties via approximating invariant,” 8th IEEE Conference on Industrial Electronics and Applications (ICIEA), Melbourne, 19–21 June 2013.
[16] J. Zhang, T. I. Meas. Control 2005, 27, 391.
[17] B. Srinivasan, S. Palanki, D. Bonvin, Comput. Chem. Eng. 2003, 27, 1.
[18] Y. Cao, “Constrained self-optimizing control via differentiation,” 7th International Symposium on Advanced Control of Chemical Processes (ADCHEM), IFAC, Hong Kong, 11–14 January 2004.
[19] W. Hu, L. M. Umar, G. Xiao, V. Kariwala, J. Process Contr. 2012, 22, 488.
[20] H. Manum, S. Skogestad, J. Process Contr. 2012, 22, 873.
[21] G. P. Rangaiah, V. Kariwala, Plantwide Control: Recent Developments and Applications, John Wiley & Sons, Hoboken 2012.
[22] R. Yelchuru, S. Skogestad, J. Process Contr. 2012, 22, 995.
[23] L. T. Biegler, Chemical Engineering and Processing: Process Intensification 2007, 46, 1043.
[24] B. Chachuat, B. Srinivasan, D. Bonvin, Comput. Chem. Eng. 2009, 33, 1557.
[25] A. V. Rao, D. A. Benson, C. Darby, M. A. Patterson, C. Francolin, I. Sanders, G. T. Huntington, ACM Transactions on Mathematical Software (TOMS) 2010, 37, 22.

Manuscript received March 21, 2016; revised manuscript received June 1, 2016; accepted for publication July 16, 2016.
