Reduced-Order Modeling and ROM-Based Optimization of Batch Chromatography Peter Benner, Lihong Feng, Suzhou Li, and Yongjin Zhang
Abstract A reduced basis method is applied to batch chromatography, and the underlying optimization problem is solved efficiently based on the resulting reduced-order model. A technique of adaptive snapshot selection is proposed to reduce the complexity and runtime of generating the reduced basis. With the help of an output-oriented error bound, the construction of the reduced model is managed automatically. Numerical examples demonstrate the performance of the adaptive technique in reducing the offline time. The ROM-based optimization is successful in terms of both the accuracy of the optimal solution and the runtime needed to obtain it.
1 Introduction

Reduced basis methods (RBMs) have proved to be powerful tools for the rapid and reliable evaluation of the output response associated with parametrized partial differential equations (PDEs) [1, 5, 9, 11, 12]. The reduced basis (RB), used to construct the reduced-order model (ROM), is computed from snapshots, that is, the solutions of the PDEs at certain selected parameter samples and/or chosen time steps. An efficient and rigorous a posteriori error estimation is crucial for RBMs because it enables the automatic generation of the RB, and in turn of a reliable ROM, with the help of a greedy algorithm. The efficiency of RBMs is ensured by the strategy of offline-online decomposition. During the offline stage, all high-dimensional, parameter-independent terms are precomputed and a (parametric) ROM is obtained a priori; during the online stage, a reliable output response can be obtained rapidly from the ROM for any feasible parameter. Although the offline cost is usually not taken into consideration, it is typically high, especially for time-dependent PDEs.
P. Benner, L. Feng, S. Li, Y. Zhang
Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstrasse 1, 39106 Magdeburg, Germany
To reduce the cost and complexity of the offline stage, we propose a technique of adaptive snapshot selection (ASS) for the generation of the RB. For time-dependent problems, if the dynamics (rather than the solution at the final time) is of interest, the solutions at the time instances of the evolution process should be collected as snapshots. However, the trajectory for a given parameter might contain a large number of time steps, e.g. in the simulation of batch chromatography. In such a case, if the solutions at all time steps are taken as snapshots, the subsequent computation becomes very expensive because the number of snapshots is too large; if one simply selects a subset of the solutions, e.g. the solutions at every second or every few time steps, the final RB approximation might be of low accuracy, because important information may be lost by such a naive snapshot selection. We propose instead to select the snapshots adaptively according to the variation of the solution in the evolution process. The idea is to make full use of the behavior of the trajectory and to discard the redundant (linearly dependent) information adaptively. This enables the generation of the RB from a small number of snapshots that contain only the "useful" information. In addition, the technique is easily combined with other algorithms for the generation of the RB, e.g. the POD-Greedy algorithm [9].

Batch chromatography is an important chemical separation process that is widely used in industry. Much effort has been devoted to the optimization of batch chromatography over the last decades [4, 6, 7]. Notably, all these studies are based on the finely discretized full-order model (FOM), which must be solved repeatedly in the optimization process, so that the runtime for obtaining the optimal solution becomes very long. In this paper, an RB method is introduced to generate a surrogate ROM for batch chromatography. The nonlinear terms in the FOM are treated by empirical operator interpolation [1, 2]. With the help of the ASS and an output error bound derived in the vector space setting in [13], the ROM is constructed efficiently in a goal-oriented fashion. The resulting ROM is then used for the rapid evaluation of the output response during the optimization process.

This paper is organized as follows. A brief review of the RBM is given in Sect. 2. The ASS technique is presented in detail for the construction of the ROM in Sect. 3. Section 4 shows the numerical results. Conclusions are drawn in section "Conclusions and Perspective".
2 Reduced Basis Method and Empirical Interpolation

Consider a parametrized evolution problem defined over the spatial domain $\Omega \subset \mathbb{R}^d$ and the parameter domain $\mathcal{P} \subset \mathbb{R}^p$,

$$\partial_t u(t, x; \mu) + \mathcal{L}[u(t, x; \mu)] = 0, \qquad t \in [0, T], \quad x \in \Omega, \quad \mu \in \mathcal{P}, \qquad (1)$$
where $\mathcal{L}[\cdot]$ is a spatial differential operator. Let $W^{\mathcal{N}} \subset L^2(\Omega)$ be an $\mathcal{N}$-dimensional discrete space in which an approximate numerical solution to Eq. (1) is sought. Let $0 = t^0 < t^1 < \ldots < t^K = T$ be $K+1$ time instants in $[0, T]$. Given
$\mu \in \mathcal{P}$ with suitable initial and boundary conditions, the numerical solution at time $t = t^n$, $u^n(\mu)$, can be obtained by using suitable numerical methods, e.g. the finite volume method. Assume that $u^n(\mu) \in W^{\mathcal{N}}$ satisfies the following form,

$$\mathcal{L}_I(t^n)[u^{n+1}(\mu)] = \mathcal{L}_E(t^n)[u^n(\mu)] + g(u^n(\mu); \mu), \qquad (2)$$
where $\mathcal{L}_I(t^n)[\cdot]$, $\mathcal{L}_E(t^n)[\cdot]$ are linear implicit and explicit operators, respectively, and $g(\cdot; \mu)$ is a nonlinear $\mu$-dependent operator. To make the RBM feasible, we assume that $\mathcal{L}_I(t^n)$, $\mathcal{L}_E(t^n)$ are time-independent. By convention, $u^n(\mu)$ is considered as the "true" solution, assuming that the numerical solution is a faithful approximation of the exact (analytical) solution $u(t^n, x; \mu)$. RBMs aim to find a suitable low-dimensional subspace $W^N \subset W^{\mathcal{N}}$ and to solve the resulting ROM for the RB approximation $\hat{u}^n(\mu) \in W^N$. In addition to, or instead of, the field variable itself, outputs of interest can be approximated cheaply by $\hat{y}(\mu) = y(\hat{u}(\mu))$. More precisely, given an RB matrix $V := [V_1, \ldots, V_N]$, Galerkin projection is employed to generate the ROM:

$$V^T \mathcal{L}_I(t^n)[V a^{n+1}(\mu)] = V^T \mathcal{L}_E(t^n)[V a^n(\mu)] + V^T g(V a^n(\mu); \mu), \qquad (3)$$
where $a^n(\mu) = (a_1^n(\mu), \ldots, a_N^n(\mu))^T \in \mathbb{R}^N$ is the vector of weights in the expansion $\hat{u}^n(\mu) := V a^n(\mu) = \sum_{i=1}^{N} a_i^n(\mu) V_i$, i.e. the vector of unknowns of the ROM. Thanks to the linearity of the operators $\mathcal{L}_I$ and $\mathcal{L}_E$, the ROM (3) can be rewritten as
$$\left[V^T \mathcal{L}_I(t^n) V\right] a^{n+1}(\mu) = \left[V^T \mathcal{L}_E(t^n) V\right] a^n(\mu) + V^T g(V a^n(\mu); \mu), \qquad (4)$$
where $V^T \mathcal{L}_I(t^n) V$ and $V^T \mathcal{L}_E(t^n) V$ can be precomputed and stored for the construction of the ROM. However, the last term in (4), $V^T g(V a^n(\mu); \mu)$, cannot be treated analogously because of the nonlinearity of $g$. This can be tackled by using the technique of empirical (operator) interpolation (EI), see e.g. [1, 2] for details. The POD-Greedy algorithm [9], shown in Algorithm 2.1, is often used to generate the RB for time-dependent problems. Note that $\Delta_N(\mu^{\max})$ is an indicator of the error of the ROM. As mentioned above, an efficient and rigorous a posteriori error estimator is desired for an efficient construction of the ROM. In this paper, we use an output error bound to compute the error indicator $\Delta_N(\mu^{\max})$. Due to space limitations, the derivation of the output error bound is given in the more detailed paper [13]. For some problems, like the batch chromatographic model under consideration, the implementation of Step 4 in Algorithm 2.1 is costly because the number of time steps $K$ is very large. In this work, we propose to use a technique we call ASS to reduce the cost, which is addressed in Sect. 3.
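To illustrate the offline-online split behind (3) and (4), the following Python sketch precomputes the reduced linear operators once and reuses them in the online time stepping. The matrices `L_I`, `L_E` and the callable `g` are generic stand-ins introduced for illustration, and the empirical interpolation of the nonlinear term is omitted for brevity, so the evaluation of `g` here still scales with the full dimension.

```python
import numpy as np

def build_rom(V, L_I, L_E):
    """Offline: project the linear operators once onto the RB space.
    V is the (full dimension)-by-N reduced basis matrix; L_I, L_E are the
    (sparse or dense) implicit/explicit operators of the scheme (2)."""
    LI_r = V.T @ (L_I @ V)          # reduced implicit operator, N-by-N
    LE_r = V.T @ (L_E @ V)          # reduced explicit operator, N-by-N
    return LI_r, LE_r

def rom_step(a_n, mu, LI_r, LE_r, V, g):
    """Online: one time step of the ROM (4) for the coefficient vector a^n(mu).
    Without EI, g must still be evaluated on the lifted state V a^n(mu)."""
    rhs = LE_r @ a_n + V.T @ g(V @ a_n, mu)
    return np.linalg.solve(LI_r, rhs)   # only an N-by-N system is solved online
```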
Algorithm 2.1: RB generation using POD-Greedy
Require: $P_{\text{train}}$, $\mu^0$, $\mathrm{tol}_{RB}\,(<1)$
Ensure: RB $V = [V_1, \ldots, V_N]$
1: Initialization: $N = 0$, $V = [\,]$, $\mu^{\max} = \mu^0$, $\Delta_N(\mu^{\max}) = 1$
2: while the error $\Delta_N(\mu^{\max}) > \mathrm{tol}_{RB}$ do
3:   Compute the trajectory $S_{\mu^{\max}} := \{u^n(\mu^{\max})\}_{n=0}^{K}$.
4:   Enrich the RB, e.g. $V := [V, V_{N+1}]$, where $V_{N+1}$ is the first POD mode of the matrix $\bar{U} = [\bar{u}^0, \ldots, \bar{u}^K]$ with $\bar{u}^n := u^n(\mu^{\max}) - \Pi_{W^N}[u^n(\mu^{\max})]$, $n = 0, \ldots, K$. $\Pi_{W^N}[u]$ is the projection of $u$ onto the current space $W^N := \mathrm{span}\{V_1, \ldots, V_N\}$.
5:   $N = N + 1$
6:   Find $\mu^{\max} := \arg\max_{\mu \in P_{\text{train}}} \Delta_N(\mu)$.
7: end while
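For concreteness, a minimal Python sketch of Algorithm 2.1 is given below. The callables `trajectory(mu)` (returning the snapshot matrix $[u^0(\mu), \ldots, u^K(\mu)]$) and `error_indicator(V, mu)` (returning $\Delta_N(\mu)$) are hypothetical interfaces standing in for the FOM solver and the output error bound of [13].

```python
import numpy as np

def pod_greedy(P_train, mu0, trajectory, error_indicator, tol_rb=1e-6, max_basis=100):
    """Sketch of Algorithm 2.1 (POD-Greedy): enrich the basis with the first POD
    mode of the projection error of the worst-approximated trajectory."""
    V = None                                        # N = 0, empty basis
    mu_max = mu0
    for _ in range(max_basis):
        U = trajectory(mu_max)                      # Step 3: full trajectory at mu_max
        if V is not None:
            U = U - V @ (V.T @ U)                   # residuals w.r.t. the current space W_N
        # Step 4: first POD mode = leading left singular vector of the residual matrix
        v_new = np.linalg.svd(U, full_matrices=False)[0][:, :1]
        V = v_new if V is None else np.hstack([V, v_new])
        # Step 6: next worst-approximated parameter in the training set
        errors = np.array([error_indicator(V, mu) for mu in P_train])
        mu_max = P_train[int(np.argmax(errors))]
        if errors.max() <= tol_rb:                  # Step 2: stopping criterion
            break
    return V
```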
3 Adaptive Snapshot Selection

For the generation of the RB, a training set $P_{\text{train}}$ of parameters must be determined. On the one hand, the training set should capture as much information about the parametric system as possible; on the other hand, the RB should be generated efficiently. To this end, much effort has been spent in recent years on choosing the training set adaptively [3, 8]. There, the authors aim at an "optimal" training set in the sense that the original manifold $\mathcal{M} = \{u(\mu) \mid \mu \in \mathcal{P}\}$ is well represented by the submanifold $\hat{\mathcal{M}} = \{u(\mu) \mid \mu \in P_{\text{train}}\}$ induced by a sample set of the smallest possible size. For time-dependent problems, even with an "optimal" training set, the number of snapshots can be huge if the total number of time steps for a single parameter is large. A large number of snapshots makes the generation of the RB time-consuming, because the POD mode in Step 4 of Algorithm 2.1 is hard to compute from the singular value decomposition of $\bar{U}$, due to the large size of $\bar{U}$. As a straightforward way to avoid using the solutions at all time instances as snapshots, one can simply pick the solutions at certain time instances (e.g. every second or every few time steps). However, the results might be of low accuracy because important information may be lost during such a trivial snapshot selection.

For an "optimal" or otherwise selected training set, we propose to select the snapshots adaptively according to the variation of the solution trajectory $\{u^n(\mu)\}_{n=0}^{K}$. The idea is to discard the redundant ((almost) linearly dependent) information from the trajectory. In fact, the linear dependency of two non-zero vectors $v_1$ and $v_2$ is reflected by the angle $\theta$ between them. More precisely, they are linearly dependent if and only if $|\cos(\theta)| = 1$ ($\theta = 0$ or $\pi$). In other words, the value $1 - |\cos(\theta)|$ is large if the linear relevance between the two vectors is weak. This implies that the quantity $1 - \frac{|\langle v_1, v_2\rangle|}{\|v_1\|\,\|v_2\|}$ (with $\cos(\theta) = \frac{\langle v_1, v_2\rangle}{\|v_1\|\,\|v_2\|}$) is a good indicator for the linear dependency of $v_1$ and $v_2$. Given a parameter $\mu$ and the initial vector $u^0(\mu)$, the numerical solution $u^n(\mu)$ $(n = 1, \ldots, K)$ can be obtained, e.g. by using the evolution scheme (2). Define the indicator $\mathrm{Ind}(u^n(\mu), u^m(\mu)) = 1 - \frac{|\langle u^n(\mu), u^m(\mu)\rangle|}{\|u^n(\mu)\|\,\|u^m(\mu)\|}$, which is used to
Algorithm 3.1: Adaptive snapshot selection (ASS)
Require: Initial vector $u^0(\mu)$, $\mathrm{tol}_{ASS}$
Ensure: Selected snapshot matrix $S^A = [u^{n_1}(\mu), u^{n_2}(\mu), \ldots, u^{n_\ell}(\mu)]$
1: Initialization: $j = 1$, $n_j = 0$, $S^A = [u^{n_j}(\mu)]$
2: for $n = 1, \ldots, K$ do
3:   Compute the vector $u^n(\mu)$.
4:   if $\mathrm{Ind}(u^n(\mu), u^{n_j}(\mu)) > \mathrm{tol}_{ASS}$ then
5:     $j = j + 1$
6:     $n_j = n$
7:     $S^A = [S^A, u^{n_j}(\mu)]$
8:   end if
9: end for
Algorithm 3.2: RB generation using ASS-POD-Greedy
Require: $P_{\text{train}}$, $\mu^0$, $\mathrm{tol}_{RB}\,(<1)$
Ensure: RB $V = [V_1, \ldots, V_N]$
1: Initialization: $N = 0$, $V = [\,]$, $\mu^{\max} = \mu^0$, $\Delta_N(\mu^{\max}) = 1$
2: while the error $\Delta_N(\mu^{\max}) > \mathrm{tol}_{RB}$ do
3:   Compute the trajectory $S_{\mu^{\max}} := \{u^n(\mu^{\max})\}_{n=0}^{K}$ and adaptively select snapshots using Algorithm 3.1 to get $S^A_{\mu^{\max}} := \{u^{n_1}(\mu^{\max}), \ldots, u^{n_\ell}(\mu^{\max})\}$.
4:   Enrich the RB, e.g. $V := [V, V_{N+1}]$, where $V_{N+1}$ is the first POD mode of the matrix $\bar{U}^A = [\bar{u}^{n_1}, \ldots, \bar{u}^{n_\ell}]$ with $\bar{u}^{n_s} := u^{n_s}(\mu^{\max}) - \Pi_{W^N}[u^{n_s}(\mu^{\max})]$, $s = 1, \ldots, \ell$, $\ell \le K$. $\Pi_{W^N}[u]$ is the projection of $u$ onto the space $W^N := \mathrm{span}\{V_1, \ldots, V_N\}$.
5:   $N = N + 1$
6:   Find $\mu^{\max} := \arg\max_{\mu \in P_{\text{train}}} \Delta_N(\mu)$.
7: end while
measure the linear dependency of the two vectors. When $\mathrm{Ind}(u^n(\mu), u^m(\mu))$ is large, the linear relevance between $u^n(\mu)$ and $u^m(\mu)$ is weak. Algorithm 3.1 shows the realization of the ASS: $u^n(\mu)$ is taken as a new snapshot only when $u^n(\mu)$ and $u^{n_j}(\mu)$ are "sufficiently" linearly independent, which is checked by testing whether $\mathrm{Ind}(u^n(\mu), u^{n_j}(\mu))$ is large enough. Here, $u^{n_j}(\mu)$ is the last selected snapshot. Note that the inner product $\langle \cdot, \cdot \rangle : W^{\mathcal{N}} \times W^{\mathcal{N}} \to \mathbb{R}$ used above is defined according to the solution space, and the norm $\|\cdot\|$ is the one induced by this inner product.

Remark 1 To detect linear dependency, it is also possible to check the angle between the tested vector $u^n(\mu)$ and the subspace spanned by the already selected snapshots $S^A$. More redundant information can then be discarded, but at a higher cost. Since the data will be compressed further anyway, e.g. by the POD-Greedy algorithm, we simply choose the economical variant shown in Algorithm 3.1. Note that the tolerance $\mathrm{tol}_{ASS}$ is prespecified and problem-dependent; in our experience, values of order $\mathcal{O}(10^{-4})$ give good results for the numerical examples studied in Sect. 4. The ASS technique can easily be combined with other algorithms for the generation of the RB and/or the collateral reduced basis (CRB) for EI. For example, Algorithm 3.2 shows the combination with the POD-Greedy algorithm (Algorithm 2.1).
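A minimal Python sketch of Algorithm 3.1 is given below. For simplicity it uses the Euclidean inner product, whereas in practice the inner product induced by the solution space should be used as noted above; `stepper` is a hypothetical callable standing in for one step of the evolution scheme (2).

```python
import numpy as np

def adaptive_snapshot_selection(u0, stepper, K, tol_ass=5e-4):
    """Sketch of Algorithm 3.1: keep u^n as a snapshot only if it is
    "sufficiently" linearly independent of the last selected snapshot,
    measured by Ind(v, w) = 1 - |<v, w>| / (||v|| ||w||)."""
    def ind(v, w):
        return 1.0 - abs(v @ w) / (np.linalg.norm(v) * np.linalg.norm(w))

    snapshots = [u0]           # S^A, initialized with the initial vector u^0(mu)
    u_last = u0                # last selected snapshot u^{n_j}(mu)
    u = u0
    for n in range(1, K + 1):
        u = stepper(u)                      # compute u^n(mu)
        if ind(u, u_last) > tol_ass:        # weak linear relevance -> keep it
            snapshots.append(u.copy())
            u_last = u
    return np.column_stack(snapshots)       # selected snapshot matrix S^A
```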
4 Numerical Experiments

In this paper, we use the RBM and the ASS presented in the previous sections to generate a surrogate ROM for batch chromatography. The governing equations of batch chromatography can be written as follows,

$$\begin{cases}
\dfrac{\partial c_z}{\partial t} + \dfrac{1-\varepsilon}{\varepsilon}\,\dfrac{\partial q_z}{\partial t} = -\dfrac{\partial c_z}{\partial x} + \dfrac{1}{\mathrm{Pe}}\,\dfrac{\partial^2 c_z}{\partial x^2}, & 0 < x < 1, \\[2mm]
\dfrac{\partial q_z}{\partial t} = \dfrac{L}{Q/(\varepsilon A_c)}\,\kappa_z\,\bigl(q_z^{\mathrm{Eq}} - q_z\bigr), & 0 \le x \le 1.
\end{cases} \qquad (5)$$
Here $c_z$, $q_z$ ($z = a, b$) are the unknowns of the system, and $q_z^{\mathrm{Eq}}$ is a nonlinear function of $c_a$ and $c_b$. A detailed description of the model parameters and of the initial and boundary conditions can be found in [13]. The feed flow rate $Q$ and the injection period $t_{\mathrm{in}}$ (entering the boundary conditions) are considered as the operating variables, denoted as $\mu := (Q, t_{\mathrm{in}})$. In this section, we first illustrate the performance of the ASS for the construction of the ROM, and then show the results of the ROM-based optimization. The parameter domain of $\mu$ is $\mathcal{P} = [0.0667, 0.1667] \times [0.5, 2.0]$, and $\mathcal{N} = 1{,}000$ for the FOM. We employ the tolerances $\mathrm{tol}_{RB} = 1.0 \times 10^{-6}$ and $\mathrm{tol}_{ASS} = 5.0 \times 10^{-4}$ unless stated otherwise. All computations were done on a PC with an Intel Core(TM)2 Quad CPU at 2.83 GHz and 4.00 GB RAM.
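To make the structure of (5) concrete, the following Python sketch assembles the semi-discrete right-hand side of one component on a uniform grid, using a first-order upwind stencil for convection and a central stencil for dispersion. This is only an illustration under these assumptions, not the finite volume scheme actually used for the FOM, whose details (including the isotherm $q_z^{\mathrm{Eq}}$ and the boundary treatment) are given in [13].

```python
import numpy as np

def transport_dispersive_rhs(c, q, c_in, iso_eq, Pe, F, kappa, dx):
    """Semi-discrete RHS of one component of model (5):
    dc/dt = -dc/dx + (1/Pe) d2c/dx2 - F dq/dt,   dq/dt = kappa (q_eq(c) - q).
    c, q   : nodal liquid/solid-phase concentrations (1D arrays)
    c_in   : inlet concentration (e.g. feed concentration during t_in)
    iso_eq : callable returning the equilibrium loading q^Eq(c) (assumed interface)
    F      : phase ratio (1 - eps)/eps
    kappa  : lumped mass-transfer coefficient L*kappa_z / (Q/(eps*A_c))
    """
    # simple ghost values for inlet/outlet (an assumption of this sketch)
    c_ext = np.concatenate(([c_in], c, [c[-1]]))
    conv = -(c_ext[1:-1] - c_ext[:-2]) / dx                          # upwind -dc/dx
    disp = (c_ext[2:] - 2.0 * c_ext[1:-1] + c_ext[:-2]) / (Pe * dx**2)
    dqdt = kappa * (iso_eq(c) - q)                                   # solid phase
    dcdt = conv + disp - F * dqdt                                    # liquid phase
    return dcdt, dqdt
```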
4.1 Performance of the Adaptive Snapshot Selection

To investigate the performance of the ASS, we compare the runtime of the generation of the RB for different threshold values $\mathrm{tol}_{ASS}$. As shown in Algorithm 3.2, the ASS can be combined with the POD-Greedy algorithm for the generation of the RB. For the computation of the error indicator $\Delta_N(\mu^{\max})$ in Algorithm 3.2, EI is involved for an efficient offline-online decomposition. To generate a CRB efficiently, the ASS is also employed. The training set for the generation of the CRB is $P_{\text{train}}^{\text{CRB}} \subset \mathcal{P}$, with 25 uniform sample points. For each $\mu \in P_{\text{train}}^{\text{CRB}}$, Algorithm 3.1 is used to choose the snapshots adaptively. Table 1 shows the results. It is seen that the larger the tolerance, the more runtime is saved. In particular, for $\mathrm{tol}_{ASS} = 5.0 \times 10^{-4}$ the runtime is reduced by 93.1 % compared to the case without ASS.

Table 1 Runtime for the generation of the CRB with different $\mathrm{tol}_{ASS}$ (reduction w.r.t. no ASS in parentheses)

                 No ASS        ASS                  ASS                  ASS
tol_ASS          –             1.0 × 10^-4          5.0 × 10^-4          1.0 × 10^-3
Runtime (h)      25.30 (–)     2.56 (89.9 %)        1.74 (93.1 %)        1.04 (95.9 %)
Table 2 Comparison of the detailed and reduced simulations over a validation set $P_{\text{val}}$ with 400 random sample points. FOM: $\mathcal{N} = 1{,}000$; ROM: $N = 43$

Simulations                Max. error       Aver. runtime (s)/SpF
FOM                        –                91.65/–
ROM (POD-Greedy)           4.0 × 10^-7      3.45/27
ROM (ASS-POD-Greedy)       7.7 × 10^-7      3.45/27
With the precomputed CRB ($\mathrm{tol}_{ASS} = 5.0 \times 10^{-4}$), we run Algorithms 2.1 and 3.2 to generate the RB with the same tolerance $\mathrm{tol}_{RB}$. The runtime for the former (Algorithm 2.1) is 5.24 h, while it is only 3.10 h for the latter (Algorithm 3.2); the runtime of the RB construction with the ASS is thus reduced by 40.9 %. Since the CRB is obtained a priori, its runtime is not included here. The training set is $P_{\text{train}} \subset \mathcal{P}$, with 64 uniform sample points. Moreover, the resulting ROM with ASS is almost as accurate as that without ASS, as shown in Table 2.
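The comparison reported in Table 2 can be sketched as follows: loop over the validation set, record the output error of the ROM against the FOM, and compute the speedup factor (SpF) from the accumulated runtimes. The callables `solve_fom`, `solve_rom`, and `output` are hypothetical interfaces for the detailed simulation, the reduced simulation, and the output functional.

```python
import time
import numpy as np

def validate_rom(P_val, solve_fom, solve_rom, output):
    """Return the maximal true output error and the speedup factor over P_val."""
    errors, t_fom, t_rom = [], 0.0, 0.0
    for mu in P_val:
        t0 = time.time(); y = output(solve_fom(mu)); t_fom += time.time() - t0
        t0 = time.time(); y_rb = output(solve_rom(mu)); t_rom += time.time() - t0
        errors.append(np.max(np.abs(np.asarray(y) - np.asarray(y_rb))))
    return max(errors), t_fom / t_rom   # max. error, SpF (ratio of average runtimes)
```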
4.2 ROM-Based Optimization

The optimization of batch chromatography aims to find $\mu_{\mathrm{opt}} \in \mathcal{P}$ such that

$$\mu_{\mathrm{opt}} := \arg\min_{\mu \in \mathcal{P}} \{-\mathrm{Pr}(c_z(\mu), q_z(\mu))\},$$
$$\text{s.t.} \quad \mathrm{Rec}_{\min} - \mathrm{Rec}(c_z(\mu), q_z(\mu)) \le 0, \quad \mu \in \mathcal{P},$$
$$c_z(\mu),\, q_z(\mu) \ \text{are the solutions to the system (5)}, \quad z = a, b.$$

More details about the optimization problem, e.g. the definitions of the production rate $\mathrm{Pr}$ and the recovery yield $\mathrm{Rec}$, can be found in [13]. Before solving the ROM-based optimization, we first assess the reliability of the resulting ROM. We performed the detailed and reduced simulations over a validation set $P_{\text{val}} \subset \mathcal{P}$ with 400 random sample points. From Table 2, it is seen that the detailed simulation is accelerated by a factor of 27 on average, and the maximal true error is below the prespecified tolerance. The global optimizer NLOPT_GN_DIRECT_L [10] is employed to solve the optimization problem. Let $\mu^k$ be the vector of parameters determined by the optimizer at the $k$-th iteration. When $\|\mu^{k+1} - \mu^k\| < \varepsilon_{\mathrm{opt}}$, the iteration is stopped and the optimal solution is obtained. Table 3 shows the results. It is seen that the optimal solution of the ROM-based optimization converges to the FOM-based one. Moreover, the runtime for obtaining the optimal solution is reduced considerably; the speedup factor (SpF) is 29.
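For illustration, the following sketch shows how such a ROM-based optimization can be set up with the NLopt Python interface and the DIRECT-L algorithm. The callable `rom_outputs(mu)`, returning the pair ($\mathrm{Pr}$, $\mathrm{Rec}$) evaluated from the ROM, is a hypothetical interface, and the recovery constraint is imposed through a simple penalty because DIRECT-L itself only handles bound constraints; this penalty treatment is an assumption of the sketch, not necessarily the approach taken in the paper.

```python
import nlopt

def optimize_operating_conditions(rom_outputs, rec_min, eps_opt=1e-4):
    """Maximize the production rate Pr subject to Rec >= rec_min over
    mu = (Q, t_in) in P = [0.0667, 0.1667] x [0.5, 2.0] using NLOPT_GN_DIRECT_L."""
    def objective(mu, grad):                 # grad is unused: DIRECT is derivative-free
        pr, rec = rom_outputs(mu)
        return -pr + 1e3 * max(0.0, rec_min - rec)   # minimize -Pr, penalize infeasibility

    opt = nlopt.opt(nlopt.GN_DIRECT_L, 2)    # two operating variables: Q and t_in
    opt.set_lower_bounds([0.0667, 0.5])
    opt.set_upper_bounds([0.1667, 2.0])
    opt.set_min_objective(objective)
    opt.set_xtol_abs(eps_opt)                # stop when the iterates stop moving (~ eps_opt)
    return opt.optimize([0.1, 1.0])          # arbitrary starting point within the bounds
```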
Table 3 Optimization results based on the ROM and the FOM, $\varepsilon_{\mathrm{opt}} = 1.0 \times 10^{-4}$

Simulations              FOM-based Opt.          ROM-based Opt.
Obj. (Pr)                0.020271                0.020276
Opt. solution (μ)        (0.07969, 1.05514)      (0.07969, 1.05514)
#Iterations              211                     211
Runtime (h)/SpF          10.60/–                 0.36/29
Conclusions and Perspective

We present a reduced basis method for batch chromatography and solve the underlying optimization problem efficiently based on a surrogate ROM. The ASS technique is introduced for the efficient construction of the ROM. Numerical examples demonstrate that it significantly reduces the offline time without sacrificing the accuracy of the ROM. In addition, the ASS might be applied to other snapshot-based model reduction methods.
References

1. M. Barrault, Y. Maday, N.C. Nguyen, A.T. Patera, An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations. Comptes Rendus Math. Acad. Sci. Paris Ser. I 339(9), 667–672 (2004)
2. M. Drohmann, B. Haasdonk, M. Ohlberger, Reduced basis approximation for nonlinear parametrized evolution equations based on empirical operator interpolation. SIAM J. Sci. Comput. 34(2), 937–969 (2012)
3. J.L. Eftang, D.J. Knezevic, A.T. Patera, An hp certified reduced basis method for parametrized parabolic partial differential equations. Math. Comput. Model. Dyn. Syst. 17(4), 395–422 (2011)
4. W. Gao, S. Engell, Iterative set-point optimization of batch chromatography. Comput. Chem. Eng. 29(6), 1401–1409 (2005)
5. M.A. Grepl, Reduced-basis approximation and a posteriori error estimation for parabolic partial differential equations. Ph.D. thesis, Massachusetts Institute of Technology, 2005
6. D. Gromov, S. Li, J. Raisch, A hierarchical approach to optimal control of a hybrid chromatographic batch process. Adv. Control Chem. Process. 7, 339–344 (2009)
7. G. Guiochon, A. Felinger, D.G. Shirazi, A.M. Katti, Fundamentals of Preparative and Nonlinear Chromatography (Academic, Boston, 2006)
8. B. Haasdonk, M. Dihlmann, M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space. Math. Comput. Model. Dyn. Syst. 17(4), 423–442 (2011)
9. B. Haasdonk, M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations. ESAIM Math. Model. Numer. Anal. 42(2), 277–302 (2008)
10. S.G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt
11. A.K. Noor, J.M. Peters, Reduced basis technique for nonlinear analysis of structures. AIAA J. 18(4), 145–161 (1980)
12. A.T. Patera, G. Rozza, Reduced Basis Approximation and A Posteriori Error Estimation for Parametrized Partial Differential Equations. MIT Pappalardo Graduate Monographs in Mechanical Engineering, Cambridge, MA (2007). Available from http://augustine.mit.edu/methodology/methodology_book.htm
13. Y. Zhang, L. Feng, S. Li, P. Benner, Accelerating PDE constrained optimization by the reduced basis method: application to batch chromatography. Preprint MPIMD/14-09, Max Planck Institute Magdeburg Preprints, 2014