Application of Numerical Methods in Chemical Process Engineering

Frerich J. Keil

Technical University of Hamburg-Harburg, Dept. of Chemical Engineering, Eißendorfer Str. 38, D-21073 Hamburg, Germany

Introduction

Numerical methods in chemical engineering deal with a broad range of problems, from calculations on the atomic or molecular level to the optimization of complete chemical plants. From an engineer's point of view, we will expound the following subjects:
– quantum mechanical calculations of atoms and molecules
– numerical treatment of chemical reaction kinetics
– transport processes
– mathematical description of unit operations
– stationary and instationary simulation and optimization of chemical plants
Because of this extensive field we will have to refer to other overview papers.

Quantum mechanics of atoms and molecules, Monte Carlo and Molecular Dynamics Methods

Exact solutions for many-electron systems are essentially unattainable, except for some trivial cases, since an exact solution would imply an exact Hamiltonian, which is unavailable. In practice model Hamiltonians are in use, both on the nonrelativistic and the relativistic level. In short, one can say that the energy of an n-electron system can be partitioned into the energy of one electron moving in the average field of the remaining (n-1) electrons and the nuclei. This is the Hartree-Fock (HF-SCF) model. The next, more advanced level takes the electron correlation into account. The Hartree-Fock approach allows two electrons with different spins to be found at the same spatial point, which shows that the Coulomb hole cannot be represented at the HF level. Various methods have been proposed for calculating electron correlation effects. In common use is the Møller-Plesset perturbation theory of second order (MP2). In this approach, the total Hamiltonian of the system is partitioned into two pieces: a zeroth-order part, H0, which is the Hartree-Fock Hamiltonian, and a perturbation, V. The exact energy is then expressed as an infinite sum of contributions of increasing complexity.

Going beyond HF-SCF or MP2 is difficult and a matter for specialists in quantum chemistry research (for details see [1-4]). The energy E_R of a molecular system is obtained as a solution of the electronic part of the Schrödinger equation for a fixed configuration R of the nuclei (Born-Oppenheimer approximation):

H(r; R) Ψ_R(r) = E_R Ψ_R(r)    (1)

The n-electron wave function Ψ_R(r) describes the motion of the electrons in the field of the nuclei. Due to the electron-electron interaction term in the Hamiltonian, this equation cannot be solved without approximations. The HF approximation assumes that the n-particle wave function Ψ(r) can be written as an antisymmetrized product of one-electron functions ψ_i(r_1) (so-called orbitals):

Ψ(r_1, r_2, ..., r_n) = (1/√(n!)) det[ ψ_j(r_i) ]_{i,j = 1,...,n}    (2)

Eq. (2) is called a Slater determinant. The set of orbitals that yields the lowest energy of a molecular system in the sense of a variational principle is given by the following set of HF integrodifferential equations:

F(r_1) ψ_i(r_1) = E_i ψ_i(r_1)    (3)

The orbitals ψ_i(r_1) are called molecular orbitals (MOs), and F(r_1) is the Fock operator, which comprises the differential operator of the kinetic energy and the electron-electron interaction term. The expansion of the MOs ψ_i into a finite series of basis functions χ_ν(r),

ψ_i(r_1) = χ(r_1) c_i = (χ_1(r_1), χ_2(r_1), ..., χ_m(r_1)) (c_1i, c_2i, ..., c_mi)^T    (4)

leads to a transformation of Eq. (3) into the so-called Roothaan matrix equations:

F C = S C E    (5)

with

F_μν ≡ ⟨χ_μ | F | χ_ν⟩    (Fock matrix)    (6)
S_μν ≡ ⟨χ_μ | χ_ν⟩    (overlap matrix)    (7)

The HF solution is given as m column vectors c_i of coefficients related to the chosen basis set χ:

C = (c_1, c_2, ..., c_m),    E_ij = E_i δ_ij    (8)

The one-electron density function, ρ(r), is represented in this basis set by the density matrix

R = ∑_k^n c_k c_k^+    (9)

The Fock matrix elements are

F_μν = ⟨μ | h | ν⟩ + G_μν(R)    (10)

with

G_μν(R) = ∑_λ ∑_σ R_λσ [ ⟨μν | λσ⟩ − ⟨μλ | νσ⟩ ]    (11)

and

⟨μν | λσ⟩ = ∫∫ χ_μ(r_1) χ_ν(r_1) (1/r_12) χ_λ(r_2) χ_σ(r_2) dr_1 dr_2    (12)
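To make the structure of Eqs. (5)-(11) concrete, the following minimal sketch in Python/NumPy iterates the Roothaan equations to self-consistency. The one- and two-electron integrals are randomly generated toy data standing in for real ones, and all names are illustrative rather than taken from any production code; real programs add convergence acceleration such as DIIS.

```python
import numpy as np
from scipy.linalg import eigh

m, n_occ = 6, 3                       # basis size and number of occupied orbitals (toy values)
rng = np.random.default_rng(0)

# Hypothetical one-electron integrals h and overlap matrix S (symmetric, S positive definite).
h = rng.standard_normal((m, m)); h = 0.5 * (h + h.T)
A = rng.standard_normal((m, m)); S = A @ A.T + m * np.eye(m)

# Hypothetical two-electron integrals with the permutational symmetry of Eq. (12).
eri = rng.random((m, m, m, m))
eri = eri + eri.transpose(1, 0, 2, 3)
eri = eri + eri.transpose(0, 1, 3, 2)
eri = eri + eri.transpose(2, 3, 0, 1)

def density(C):
    """Density matrix R of Eq. (9), built from the occupied columns of C."""
    Cocc = C[:, :n_occ]
    return Cocc @ Cocc.T

def fock(R):
    """F = h + G(R), Eqs. (10)-(11), written as two tensor contractions."""
    coulomb = np.einsum("mnls,ls->mn", eri, R)
    exchange = np.einsum("mlns,ls->mn", eri, R)
    return h + coulomb - exchange

E_orb, C = eigh(h, S)                 # initial guess: core Hamiltonian only
R = density(C)
for it in range(200):
    E_orb, C = eigh(fock(R), S)       # Roothaan equations (5): F C = S C E
    R_new = density(C)
    if np.linalg.norm(R_new - R) < 1e-8:
        break
    R = 0.5 * (R + R_new)             # simple damping to stabilize the iteration
```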

In practice billions of integrals of type (12) have to be calculated, stored and reread in each iteration. Eq. (5) is a pseudo-eigenvalue equation because C has to be known beforehand. Therefore, one starts with an assumption for C and solves Eq. (5) iteratively until convergence has been achieved. Matrix eigenvalues are mostly computed according to the method of Davidson [5]. In its brute-force form the Hartree-Fock SCF method needs a computational effort of order N^4 (N = number of functions in the basis set χ(r_1)). Due to some tricks one can reduce this to order N^3. With the MP2 approach about N^5 integrals have to be calculated. More sophisticated methods need on the order of N^7 integral computations. Quantum chemical computations are a standard tool, even for experimentalists (see e.g. [2, 6, 7]). Black-box programs are commercially available (e.g. GAUSSIAN, TURBOMOLE). With the development of materials science, fine chemistry, molecular biology and many branches of condensed-matter physics, the problem of how to deal with the quantum mechanics of many-particle systems formed by thousands of electrons and hundreds of nuclei has attained relevance. An alternative to ab-initio methods is the density functional approach [8-10], which gives results of an accuracy comparable to ab-initio methods. The density-functional method bypasses the calculation of the n-electron wave function by using the electron density ρ(r). The energy of a many-electron system is a unique functional of the electron density. The computational work grows like N^3 instead of N^4 in HF. Unlike in classical physics, in quantum mechanics relativistic effects may be quite striking, especially for heavy atoms; Pyykkö [11] gives a review of relativistic quantum chemistry.

The parallelization of a molecular quantum chemical code is at least in parts trivial; especially the computation of the integrals (matrix elements) can be done independently. Depending on the exact structure of the integral package, one easily develops a strategy for the type of granularity best suited to a certain parallel computer. Quantum chemical computations are in use for the determination of thermophysical data, the arrangement of molecules on solid catalyst surfaces, the relative stability of zeolites, etc.

Monte Carlo methods are a class of techniques that can be used to simulate the behaviour of a physical system [12-16]. They are distinguished from other simulation methods, such as molecular dynamics, by being stochastic, that is, nondeterministic in some manner. This stochastic behaviour in Monte Carlo methods generally results from the use of random number sequences. Monte Carlo methods are used for the evaluation of multidimensional integrals with complicated boundary conditions, where grid methods become inefficient, for the solution of differential equations (e.g. the Schrödinger equation), and for the study of systems with a large number of strongly coupled degrees of freedom, such as liquids, phase transitions, disordered materials, and strongly coupled solids. A striking application of Monte Carlo methods is the computation of diffusion and reaction processes in zeolites [17]. For an N-particle system, the average of a function F(q^N), which depends only on the configurational variables q^N = (q_1, q_2, ..., q_N), is given by

⟨F⟩ = ∫ F(q^N) p(q^N) dq^N    (13)

where p(q^N) is the probability density. Because of the many thousands of degrees of freedom of the system, it is practically impossible to evaluate averages from this integral directly. A solution to the problem is to limit the sampling to a confined region of the configuration space. An algorithm which provides this approach is the Metropolis algorithm [18]. This algorithm generates a random walk in the configuration space with the constraint that the configurations have to be states of an irreducible Markov chain [19]. Let us suppose we have N molecules in a given configuration, confined in a cubic box. In the Monte Carlo procedure the following steps are performed many times: (1) a molecule is chosen randomly; (2) a trial move is attempted: the molecule is displaced and rotated around the three rotational axes, so that a new set of coordinates is generated from the old one; (3) the change in energy of the system (ΔE) due to the trial move is calculated; if the energy is decreased, the move is accepted with probability equal to one. If the energy is increased, the move is accepted with probability exp(-ΔE/kT) and rejected with probability 1 - exp(-ΔE/kT), where k is the Boltzmann constant and T is the temperature of the system. In short, a Monte Carlo program runs like this: (1) input: T, N and the initial configuration; (2) calculation of the system interaction energy; (3) generation of trial-move coordinates; (4) computation of the trial-move energy difference; (5) acceptance or rejection of the move; (6) update of the coordinate and energy arrays; (7) after the prescribed number of moves has been completed: output of the final results.

Steps (1) and (7) occur once; steps (3) to (6) take place inside a loop over a given number of moves. In step (6) the total energy of the system is evaluated after each move. In order to avoid accumulation of round-off errors, step (2) must be repeated from time to time. For a single move, steps (2) and (4) take by far most of the computing time. Therefore, these steps are worth modifying to execute in parallel.

For the computation of dynamic properties of physical systems with a very large number of degrees of freedom, the molecular dynamics (MD) approach is the most widely used simulation technique. Monte Carlo and molecular dynamics methods are linked via statistical mechanics and the assumption of ergodicity. Thus, if equilibrium properties of an ergodic system are desired, then either of these methods may be employed. However, if time-dependent (dynamical) properties are required, then MD simulations are the sole option. An introduction to MD is given in [12, 20, 21]. MD calculates the motion of an ensemble of atoms and/or molecules by integrating Newton's equations. From the motion of the ensemble, microscopic and macroscopic information can be extracted, e.g. transport coefficients, phase diagrams, and structural properties. The physics of the model is contained in a potential energy functional for the system, from which force equations for each atom and molecule are derived. MD simulations are large with respect to the number of time steps and the number of atoms and/or molecules. Let us consider an ensemble of N molecules in a fixed volume V with a fixed total energy E. This is a microcanonical ensemble of classical statistical mechanics. Typical values of N used in simulations of chemical interest are of the order of hundreds to a few thousand. In order to simulate an "infinite" system, periodic boundary conditions are invariably imposed. Thus a typical MD system consists of N molecules enclosed in a cubic box with each side of length L. MD solves the equations of motion for a molecule i:

f_i = ∑_{j(≠i)} f_ij = −∑_{j(≠i)} ∂V_ij/∂q_i    (14)

where V_ij is the pair potential function between molecules i and j. A restriction to binary molecular interactions is not necessary. The summation over j is usually confined to the molecules within a spherical cutoff radius R of particle i. The radius is usually chosen to be half the length of the box. For long-range forces, such as Coulombic or dipole-dipole forces, interactions beyond the cutoff are usually calculated according to the Ewald summation technique or the reaction field method [21]. For systems consisting of many different types of complex molecules, many different types of potential energy functions must be evaluated at each time step. For a given pair of interacting particles, the appropriate potential energy function and parameters must be identified according to particle type. The evaluation of the total force is by far the most time-consuming step.

It requires the computation of a wide range of expressions which consist of terms containing fractional powers or transcendental functions. The method of table look-up and so-called Lagrangian particle tracking are important for calculating many different types of interactions of arbitrary complexity in large MD simulations. Optimal programming structures for these techniques have recently been published by Dunn and Lambrakos [22]. The integration of the equations of motion is done by the Verlet, leap-frog Verlet, Gear fixed or variable time step, or e.g. Gauss-Radau algorithms. A comparison of different algorithms in MD simulations has been published by Bolton and Nordholm [23]. The Gauss-Radau algorithm turned out to be the most efficient and accurate one. MD computations are inherently parallel [24], and special-purpose hardware for MD calculations is also available [25].
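As an illustration of the integration step just mentioned, the following sketch implements the velocity-Verlet scheme for N particles interacting through a Lennard-Jones pair potential with a spherical cutoff and minimum-image periodic boundaries. The potential, parameters and box handling are illustrative assumptions, not taken from any of the cited codes.

```python
import numpy as np

def lj_forces(x, box, rc, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces, Eq. (14), with minimum-image periodic
    boundaries and a spherical cutoff rc (toy stand-in for real pair potentials)."""
    n = len(x)
    f = np.zeros_like(x)
    for i in range(n - 1):
        d = x[i] - x[i + 1:]                  # displacement vectors to all j > i
        d -= box * np.round(d / box)          # minimum-image convention
        r2 = np.sum(d * d, axis=1)
        mask = r2 < rc * rc
        r2, dm = r2[mask], d[mask]
        sr6 = (sigma * sigma / r2) ** 3
        fij = (24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r2)[:, None] * dm   # force on i from j
        f[i] += fij.sum(axis=0)
        f[np.arange(i + 1, n)[mask]] -= fij   # Newton's third law
    return f

def velocity_verlet(x, v, m, dt, nsteps, box, rc):
    """Velocity-Verlet integration of Newton's equations (m: particle mass, scalar here)."""
    f = lj_forces(x, box, rc)
    for _ in range(nsteps):
        v += 0.5 * dt * f / m
        x = (x + dt * v) % box                # advance positions and wrap into the box
        f = lj_forces(x, box, rc)
        v += 0.5 * dt * f / m
    return x, v, f

# Usage with an invented simple-cubic start configuration (64 particles, box length 8).
pts = np.arange(4) * 2.0
x0 = np.array([[a, b, c] for a in pts for b in pts for c in pts])
x, v, f = velocity_verlet(x0, np.zeros_like(x0), m=1.0, dt=0.005, nsteps=100, box=8.0, rc=2.5)
```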

Numerical treatment of chemical reaction kinetics

In chemical reaction engineering the numerical simulation and identification of reaction systems is of outstanding importance. Evaluating reaction rate parameters is a common problem for the chemical engineer. Based on proposed chemical mechanisms and carefully performed measurements of flow rates, pressures, temperatures and compositions, the rate constants have to be determined. Details of numerical methods to tackle this problem are given by Bock [26] and by Deuflhard and Nowak [27, 28]. In general a system of chemical reactions is described by a set of differential equations which corresponds to a proposed chemical reaction mechanism. The set of differential equations governs the n_C concentrations C of the involved species and may be written as

Ċ = f(C, k, t)    (15)

The rate equations on the right-hand side are polynomial in C. In general they are nonlinear in C and linear in the rate parameters k (constant for T = const.). The parameters k are to be determined. They follow the Arrhenius equation:

k_ij(T) = A_ij T^β_ij exp(−E_ij/T)    (16)

If the parameters A_ij, β_ij and E_ij are all known, and the initial concentrations and a temperature profile are given, the rate equations predict the behaviour of the reaction. For very large systems the program LARKIN is available; it integrates the (in general stiff) system of equations [27]. The initial value problems may be solved by routines like METAN1 [29] or SODEX [30, 31]. Both methods are based on a semi-implicit midpoint rule. The mathematical problem of identification of rate parameters is a so-called inverse problem, which may be stated as follows [26]:

Find parameters k and a solution C(k, t) of the differential equations

Ċ = f(C, k, t)    (17)

such that

‖r_1(C_0, ..., C_k, k)‖_2^2 = MIN!    (18)

subject to the constraints

r_2(C_0, ..., C_k, k) = 0    (19)
r_3(C_0, ..., C_k, k) ≥ 0    (20)

where C_j = C(k, t_j), with the t_j known. This problem is a constrained, overdetermined multipoint boundary value problem. Bock [26] has used a multiple shooting technique to solve it. The time interval is divided into pieces (e.g. according to the measurement points):

t_0 = τ_0 < τ_1 < ... < τ_m = t_k    (21)

The concentrations Cv_i at these points are treated as additional parameters, so that an augmented variable vector

(Cv_0, ..., Cv_m, k)    (22)

is obtained. For a given estimate of this vector the solutions C(Cv_i, k, t) of m initial value problems on the time subintervals are computed:

Ċ = f(C, k, t),  t ∈ [τ_i, τ_{i+1}],  C(τ_i) = Cv_i    (23)

which leads to a discontinuous trajectory. This approach reduces the influence of poor estimates of the rate constants k. Information about C(t) taken from measurements can be brought in, so that the initial trajectory is close to the observed data. A large constrained nonlinear least-squares problem is obtained [26]: find parameters k and concentrations Cv_0, ..., Cv_m such that

‖R_1(Cv_0, ..., Cv_m, k)‖_2^2 = MIN!    (25)

with the constraints

R_2(Cv_0, ..., Cv_m, k) = 0    (26)
R_3(Cv_0, ..., Cv_m, k) ≥ 0    (27)

In order to ensure continuity of the final solution, the additional smoothing conditions

h_i(Cv_i, Cv_{i+1}, k) := C(τ_{i+1}; Cv_i, k) − Cv_{i+1} = 0    (28)

have to be met.

The least-squares problem has been solved by a generalized Gauss-Newton method [26, 53]. The algorithm for the inverse problem of kinetic parameter identification is available as a code called PARFIT. Nowak and Deuflhard [27] have developed a software package PARKIN for the identification of kinetic parameters. In principle, these programs can be combined with any integrator for stiff problems. Available codes are [30, 31] e.g. EULEX, ODEX, DIFEX1 or STIFF3 [32]. Further information is available from the e-mail addresses elib@zib-berlin.de and hairer@uni2a.unige.ch.
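To make the multiple-shooting formulation of Eqs. (21)-(28) concrete, here is a deliberately simplified sketch in Python/SciPy for a hypothetical two-step mechanism A -> B -> C. The node values Cv_i and the (log-transformed) rate constants form the augmented variable vector of Eq. (22); the smoothing conditions (28) are appended as weighted residuals rather than treated as exact constraints, and a general-purpose trust-region least-squares solver stands in for the generalized Gauss-Newton method of PARFIT/PARKIN. Mechanism, data and weights are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, c, k):
    """Hypothetical mechanism A -> B -> C; states c = [cA, cB, cC], parameters k = [k1, k2]."""
    return [-k[0] * c[0], k[0] * c[0] - k[1] * c[1], k[1] * c[1]]

def shoot(cv, k, t0, t1):
    """Integrate one shooting interval [tau_i, tau_i+1] starting from the node value Cv_i."""
    sol = solve_ivp(rhs, (t0, t1), cv, args=(k,), method="BDF", rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

def residuals(p, taus, c_meas, n_states):
    """R1: node values minus measurements; smoothing conditions (28) appended as residuals."""
    m = len(taus)
    cv = p[: m * n_states].reshape(m, n_states)      # augmented variables Cv_0 ... Cv_m-1
    k = np.exp(p[m * n_states:])                     # log-parameterised rate constants
    r_fit = (cv - c_meas).ravel()
    r_cont = np.concatenate([shoot(cv[i], k, taus[i], taus[i + 1]) - cv[i + 1]
                             for i in range(m - 1)])
    return np.concatenate([r_fit, 1e2 * r_cont])     # weight pushes (28) toward zero

# Synthetic "measurements" at the shooting nodes, for illustration only.
taus = np.linspace(0.0, 5.0, 11)
true = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0], t_eval=taus, args=([2.0, 0.5],),
                 method="BDF", rtol=1e-10, atol=1e-12)
c_meas = true.y.T + 1e-3 * np.random.default_rng(0).standard_normal(true.y.T.shape)

p0 = np.concatenate([c_meas.ravel(), np.log([1.0, 1.0])])   # start the nodes at the data
fit = least_squares(residuals, p0, args=(taus, c_meas, 3))
k_est = np.exp(fit.x[-2:])                                  # recovered rate constants
```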

Transport processes

Multicomponent mass transfer combined with heat and momentum transport is omnipresent in chemical process engineering [42-44]. Diffusion/reaction processes have attracted many mathematicians and form a field of mathematical research of their own [45]. As an example, diffusion and reaction in catalyst particles will be discussed in more detail. Catalytic gas/solid reactions take place within porous solid supports in which catalytically active sites are found. The reacting gases have to diffuse into these porous solids, adsorb on the active sites and react, and the products have to diffuse back to the outer surface. First of all, the porous material has to be described. In the past this has mostly been done by continuum models. In a continuum model, the porous material is treated as a continuum within which temperature and fluid species concentrations are defined as smooth functions of time and position. These pointwise functions are governed by suitable energy and species concentration balances. Continuum models are sets of ordinary or partial differential equations which include "effective" parameters like transport coefficients (diffusivities, reactivity coefficients etc.). A spatial averaging technique is used, which is meaningful when the characteristic length for the variation in macroscopic concentrations is much larger than the linear dimension of a statistically adequate material region. In many practical systems, the length scale over which the inhomogeneities are important is small enough to fully justify the continuum treatment. Continuum models cannot satisfactorily describe the connectivity of the material from a global standpoint. The majority of continuum models are capillary models that treat the pore space as a collection of capillaries of one or more radii, each of which satisfies the flux relations for an infinitely long cylinder. The models differ from each other by the way the fluxes in capillaries of different sizes are combined with each other. Multicomponent diffusion is mostly described by the so-called dusty gas model [46], which is based on the Stefan-Maxwell equations. In practice the active sites are not homogeneously distributed within the solid pellet. Furthermore, due to deposits (e.g. coke) within the pores, the structure of the pores changes with time. This demands a local description of the pore space.

At present network models are preferred for this purpose (see e.g. Rieckmann and Keil [47]). These models lead to very large linear systems of equations with about 10^6 unknowns. For the simple case of a first-order reaction A -> B one obtains a symmetric matrix A in the system

A x = b    (29)

where A ∈ R^(n×n).
Similar expressions may be found for multicomponent mixtures, and an algorithm is given to find T_c and P_c where the second- and third-order terms are zero. Michelsen [55] has also formulated nonlinear equations for the calculation of phase envelopes and critical points of multicomponent mixtures. The nonlinear equations may be solved by, e.g., a differential arclength homotopy continuation algorithm published by Allgower and Georg [56] that integrates the ODEs along the arclength using an Euler predictor step followed by Newton correction steps. This approach was implemented by Wayburn and Seader [57], with attention to the methods for step-size adjustment and parameter-variable exchange to avoid singularities near limit points. Kovach and Seider [58] extended the turning-point algorithm to bypass the limit points when multiple solutions are encountered as a second liquid phase is introduced on the trays of a heterogeneous azeotropic distillation tower. A widely distributed program package for the implementation of Newton and fixed-point homotopies is HOMPACK [62]. A similar program called CONSOL has been described in a book written by Morgan [63]. Seader et al. [64] have extended these methods to systems of nonlinear equations with transcendental terms. They have applied a global fixed-point homotopy to find, from a single arbitrary starting guess, all solutions to several sets of nonlinear equations, even when transcendental terms are present. However, depending upon the starting guess selected, the homotopy path may consist of branches that are only connected at infinities, which traverse the dependent variables and/or the homotopy parameter from −∞ to +∞. By using mapping functions, the fixed-point homotopy path may be transformed into a finite domain, wherein all solutions lie on the path.
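The basic predictor-corrector idea behind such homotopy methods can be sketched in a few lines. The function below tracks the fixed-point homotopy H(x, t) = t·f(x) + (1 − t)(x − x0) from t = 0 to t = 1 with an Euler predictor and a Newton corrector; it parameterises the path by t only, whereas codes such as HOMPACK use arclength parameterisation so that turning points can be passed. The example system and starting point are arbitrary.

```python
import numpy as np

def fixed_point_homotopy(f, jac, x0, steps=50, newton_iters=20, tol=1e-10):
    """Track H(x, t) = t*f(x) + (1 - t)*(x - x0) = 0 from t = 0 to t = 1,
    using an Euler predictor along dx/dt = -H_x^{-1} H_t and a Newton
    corrector at each fixed t (natural parameterisation in t only)."""
    x = np.asarray(x0, dtype=float).copy()
    n = len(x)
    for k in range(1, steps + 1):
        t_prev, t = (k - 1) / steps, k / steps
        Hx = t_prev * jac(x) + (1.0 - t_prev) * np.eye(n)
        Ht = f(x) - (x - x0)                               # dH/dt at the current point
        x = x + (t - t_prev) * np.linalg.solve(Hx, -Ht)    # Euler predictor
        for _ in range(newton_iters):                      # Newton corrector at fixed t
            Hx = t * jac(x) + (1.0 - t) * np.eye(n)
            H = t * f(x) + (1.0 - t) * (x - x0)
            dx = np.linalg.solve(Hx, -H)
            x = x + dx
            if np.linalg.norm(dx) < tol:
                break
    return x

# Arbitrary example system; whether the t-parameterised path reaches t = 1
# without a turning point depends on the system and on x0.
f = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] * x[1] - 1.0])
jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [x[1], x[0]]])
x_end = fixed_point_homotopy(f, jac, np.array([2.0, 1.0]))
```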

Paloschi [68] has presented a hybrid algorithm that combines a Newton iteration process with the continuation code PITCON [69]. A collection of nonlinear model problems is given by Moré [70]. As homotopy-continuation algorithms follow the steady-state solution branches, they bracket limit points, where the determinant of the Jacobian |J| is equal to zero. But normally they pass over higher-order singular points, e.g. Hopf bifurcation points. By evaluating the eigenvalues of the Jacobian, a local stability analysis can be performed along the solution branches, and e.g. the Hopf bifurcation points can be located. Alternatively, the necessary and sufficient conditions for these points can be solved directly [65]. At a Hopf bifurcation point, two complex eigenvalues become purely imaginary, the steady-state branch destabilizes or remains unstable, and a branch of periodic solutions is created. In order to trace the periodic branches, a very comprehensive software package called AUTO is available [66]. Kubicek and Marek [65] give a printout of their program DERPAR. Holodniok and Kubicek have described a similar program DERPER [67]. The above mentioned mathematical methods may also be used to model unit operations without phase transitions. In reaction engineering, PDEs with two spatial variables dominate. For time-dependent problems the time is an additional variable. Nonlinear mathematical methods are also increasingly employed in chemical process control. A survey of this subject is given by Bequette [74].
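A minimal version of the eigenvalue-based stability check mentioned above might look as follows: given a computed steady-state branch and a routine returning the Jacobian, it reports the parameter intervals in which the leading real part of a complex eigenvalue pair changes sign, which is where a Hopf bifurcation is to be suspected and then refined. The interface (a list of (parameter, state) pairs and a `jacobian(x, p)` callback) is an assumption made only for this sketch.

```python
import numpy as np

def hopf_candidates(branch, jacobian):
    """Scan a steady-state branch [(p, x), ...] for sign changes of the largest
    real part among complex eigenvalues of the Jacobian J(x, p)."""
    candidates, prev = [], None
    for p, x in branch:
        eig = np.linalg.eigvals(jacobian(x, p))
        cplx = eig[np.abs(eig.imag) > 1e-10]        # keep complex-conjugate pairs only
        lead = cplx.real.max() if cplx.size else np.nan
        if prev is not None and np.isfinite(prev[1]) and np.isfinite(lead) \
                and prev[1] < 0.0 <= lead:
            candidates.append((prev[0], p))         # a pair crossed the imaginary axis
        prev = (p, lead)
    return candidates
```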

Stationary and instationary simulation and optimization of chemical plants

Widely distributed software packages like ASPEN+ [75] and SPEEDUP [76] make stationary and instationary process simulations available to many engineers. An extensive summary of this subject is given in a book edited by Schuler [77]. A pioneering book about process flowsheeting has been written by Westerberg et al. [78]. In Germany the DIVA simulator [77, 79] for the instationary simulation of complete chemical plants is in industrial use. A chemical process plant consists of many unit operations connected by process streams. Each process unit may be modelled by a set of equations (ODEs, PDEs, DAEs, algebraic equations), which include material, energy and momentum balances, phase and chemical equilibrium relations, rate equations and physical property correlations. These equations relate the outlet stream variables to the inlet stream variables for a given set of equipment parameters. At present, there are three approaches to flowsheet calculations: the sequential modular, the equation oriented, and the simultaneous modular strategy. The equation oriented approach is the most straightforward. All process equations are organized according to structure and ease of solution.

Very efficient methods exist for partitioning and tearing large sets of algebraic equations (see e.g. [80]). The most common simulation strategy in industrial environments is the sequential modular strategy. Here, equations are grouped in modules according to the physical processes they represent, and the modules are solved sequentially, following the way material flows through the process. Due to recycles in flowsheets, iterations are required to converge the steady-state equations of the process. The simultaneous modular strategy attempts to bridge the gap between equation-solving and sequential modular strategies. It retains the modularity of the process but allows more flexible specification of calculation procedures and additional conditions. Instationary simulation of chemical processes leads to very large systems of differential-algebraic equations (DAEs). The special features of DAEs have been outlined by Petzold [81]. Instructive examples of DAEs are given by e.g. Pantelides et al. [82] and Byrne and Ponzi [83]. An extensive survey of the numerical solution of DAEs is given by Bock et al. [84]. These authors give a list of special difficulties connected with the solution of DAEs:
– very large implicit, resp. linearly-implicit, models with several tens of thousands to hundreds of thousands of variables
– stiff and highly nonlinear differential equations which require special discretization techniques
– highly nonlinear algebraic constraints whose consistent initialization presents difficulties
– discontinuities and non-differentiabilities of the model equations
– implicitly defined delays, e.g. due to transport in pipes
– spatially distributed reactive systems with convective flow and diffusion, which have to be suitably discretized.
DAE systems can be classified according to their so-called index. The index may be defined as the minimum number of differentiations with respect to time that the system equations have to undergo to convert the system into a set of ODEs. Thus, by definition, any ODE system has index zero. Problems of index one may be solved by software packages such as LSODI [85, 86] for linearly-implicit DAEs and DASSL [87] for implicit DAEs. Both programs implement the backward differentiation formulas (BDF) of Gear and are available in ODEPACK. DASSL provides an option for consistent initialization which is not very reliable. Semi-explicit systems with index one can be solved by LIMEX [88]. An implicit Runge-Kutta code is RADAU5 [89]. Caracotsios and Stewart [90] have developed a robust numerical method for integration and parametric sensitivity analysis of nonlinear initial-boundary-value problems in a timelike dimension t and a space dimension x. Mixed systems of PDEs and algebraic equations can be treated. Parametric derivatives of the calculated states are obtained directly via the local Jacobian of the state equations. Initial and boundary conditions are efficiently reconciled. The method is able to handle jump conditions induced by changes of equation forms at given t-values, or at unknown t-values dependent on the solution.

Transition points of the latter kind are computed via a Newton scheme coupled with the step selection strategy of the integrator. The algorithm is an extension of DASSL. The extensions include robust initialization, automatic optimization of the initial t-step, and row scaling of each iteration matrix to stabilize the corrector computations. The program works on partial differential-algebraic systems (PDAEs) with index 0 or 1. It is available as the PDASAC package from Prof. Stewart (University of Wisconsin-Madison). In process models, algebraic conditions quite often appear which are not of index one. Those problems are not well-posed and are neither true initial-value nor true boundary-value problems [84]. Therefore, methods of index reduction have been developed [91, 92]. Unger et al. [38] have suggested a new algorithm which determines structural properties of DAEs in order to obtain consistent initial conditions as well as an efficient and reliable solution. The problem of consistent initialization of DAEs is also discussed by Pantelides [93].
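For a semi-explicit index-one DAE the basic structure of a BDF-type integrator can be illustrated with the lowest-order member of the family, the backward Euler method: at every step a coupled nonlinear system in the differential and algebraic unknowns is solved by Newton's method. The sketch below uses a fixed step, a finite-difference Jacobian and an invented toy problem; production codes such as DASSL add variable order and step size, analytic or banded Jacobians, and error control.

```python
import numpy as np

def backward_euler_dae(f, g, y0, z0, t_grid, newton_tol=1e-10, newton_max=25):
    """Backward-Euler integration of the semi-explicit index-one DAE
        y' = f(y, z),   0 = g(y, z),   with dg/dz nonsingular.
    Each step solves the coupled nonlinear system for (y_{k+1}, z_{k+1})
    by a plain Newton iteration with a finite-difference Jacobian."""
    ny, nz = len(y0), len(z0)
    Y, Z = [np.asarray(y0, float)], [np.asarray(z0, float)]
    for k in range(len(t_grid) - 1):
        h = t_grid[k + 1] - t_grid[k]
        y_old = Y[-1]
        u = np.concatenate([Y[-1], Z[-1]])          # predictor: previous values

        def residual(u):
            y, z = u[:ny], u[ny:]
            return np.concatenate([y - y_old - h * f(y, z), g(y, z)])

        for _ in range(newton_max):
            r = residual(u)
            if np.linalg.norm(r) < newton_tol:
                break
            J = np.empty((ny + nz, ny + nz))        # finite-difference Jacobian
            for j in range(ny + nz):
                du = np.zeros(ny + nz); du[j] = 1e-7
                J[:, j] = (residual(u + du) - r) / 1e-7
            u = u + np.linalg.solve(J, -r)
        Y.append(u[:ny]); Z.append(u[ny:])
    return np.array(Y), np.array(Z)

# Toy index-one example: y' = -y + z, 0 = z - y**2 (dg/dz = 1 is nonsingular),
# with consistent initial values y0 = 1, z0 = y0**2 = 1.
f = lambda y, z: -y + z
g = lambda y, z: z - y**2
Y, Z = backward_euler_dae(f, g, np.array([1.0]), np.array([1.0]), np.linspace(0.0, 2.0, 201))
```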

An important field in process engineering is process optimization. Mostly, the objective is to optimize a given economic measure subject to satisfying the performance equations describing the physical behavior of the system as well as other specified constraints. The application of general purpose optimization algorithms to chemical processes is often a nontrivial matter. The very large systems of DAEs cannot be incorporated readily within an optimization algorithm, and many design problems involve the optimization of both continuous and discrete parameters. Process flowsheets are mostly optimized by either feasible or infeasible path strategies. The optimization problem may be stated as follows:

Minimize Φ(z)    (32a)
subject to g(z) ≤ 0    (32b)
h(z) = 0    (32c)
c(z) = 0    (32d)

where z represents all of the continuous variables in a given process, Φ is a performance index, and the inequalities g(z) ≤ 0 represent design limitations on process variables. The equalities h(z) = 0 simulate the process (e.g. heat, mass and momentum balances), while c(z) = 0 are conditions imposed by the designer. With the feasible path approach the optimization algorithm automatically performs case studies by varying the input data. There are several drawbacks: first, the process equations (32c) have to be solved every time the performance index is evaluated; efficient gradient-based optimization techniques can only be used with great difficulty, because derivatives can only be evaluated by perturbing the entire flowsheet with respect to the decision variables, which is very time consuming. Second, process units are often described by discrete and discontinuous relations or by functions that may be nondifferentiable at certain points.

To overcome these problems, quadratic module models can be constructed at each iteration, which lead to quadratic optimization problems (see e.g. [94, 95]).

However, this strategy still requires the simulation of the flowsheet at every iteration. With the infeasible path strategy we need to consider the structure of simultaneous modular simulators and how it can be used for optimization. The simulation problem can be written as:

Minimize Φ(x, y)    (33a)
subject to g(x, y) ≤ 0    (33b)
c(x, y) ≤ 0    (33c)
h(x, y) = y − w(x, y) = 0    (33d)

The function w is determined from the tear variables y by evaluating the process modules; x is the vector of design variables. In the infeasible path strategy this constraint is converged simultaneously with the optimization problem. Sequential quadratic programming (SQP) is mostly used for the optimization of this type of problem (see e.g. [96, 97]). The work required is not excessive because the flowsheet converges while the optimum is found [97]. The reliability of the SQP algorithm depends on the nonlinear process units and the recycle structure of the flowsheet. When multiple solutions exist for the subproblems, local solutions can be obtained for the nonconvex NLPs, and care must be exercised to locate the global optimum.
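A toy version of such an infeasible-path optimization, with a single design variable x and a single tear variable y, can be written with an off-the-shelf SQP routine. The objective, the hypothetical module response w(x, y) and the bounds below are invented purely to show how the tear equation h = y − w(x, y) = 0 is handed to the optimizer as an equality constraint instead of being converged beforehand.

```python
import numpy as np
from scipy.optimize import minimize

def phi(v):                          # economic objective Phi(x, y); v = [x, y]
    x, y = v
    return (x - 1.0) ** 2 + 0.5 * (y - 2.0) ** 2

def w(x, y):                         # hypothetical module response for the tear stream
    return 1.0 + 0.3 * np.tanh(x + 0.1 * y)

constraints = [{"type": "eq", "fun": lambda v: v[1] - w(v[0], v[1])}]   # h(x, y) = 0
bounds = [(0.0, 5.0), (0.0, 5.0)]                                       # simple bounds as g <= 0

res = minimize(phi, x0=[0.5, 0.5], method="SLSQP",
               constraints=constraints, bounds=bounds)
x_opt, y_opt = res.x                 # optimum and converged tear variable obtained together
```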

Global optimization is still an unsolved problem. For certain classes of nonconvex NLPs, Floudas and Visweswaran [98] have found a reliable method. An introduction to global optimization is given by Horst et al. [99, 100]. Ryoo and Sahinidis [101] have published a global optimization algorithm for NLPs and MINLPs (mixed-integer nonlinear programs). For certain applications iterative dynamic programming (IDP) has proved to be a very useful global optimization procedure [102-104]. Even for MINLPs the IDP may be used [105]. During the last few years, process synthesis by algorithmic methods has made considerable progress. The problem of chemical process synthesis can be stated as follows: given a set of reactants with amounts specified, determine both the cost-optimal structure and the optimal design variables of each process unit that can produce a set of specified products. In general, the process synthesis problem can be expressed as a MINLP problem in which the integer and continuous variables occur nonlinearly in the performance index. The mathematical form is:

Minimize z = f(x, y)
subject to g_j(x, y) ≤ 0,  j ∈ J,  x ∈ X,  y ∈ Y    (34)

Here f and g are convex, differentiable functions, and x and y are the discrete and continuous variables, respectively. The set X is commonly assumed to be a compact set, e.g. X = {x | x ∈ …
