Gasoline Blending System Modeling via Static and Dynamic Neural Networks

Wen Yu, América Morales

Abstract— Gasoline blending is an important unit operation in the gasoline industry. A good model of the blending system is beneficial for supervisory operation, prediction of gasoline qualities, and model-based optimal control. The gasoline blending process involves two types of properties: the static blending property and the dynamic property of the blending tanks, and in practice the blending does not follow the ideal mixing rule. We propose static and dynamic neural networks to approximate these two types of blending properties. The input-to-state stability approach is applied to obtain robust learning algorithms for the two neural networks. Numerical simulations are provided to illustrate the neuro modeling approaches.


I. INTRODUCTION

Gasoline blending is the process of combining a number of feedstocks, produced by other refinery units, together with small amounts of additives to make a mixture meeting certain quality specifications. A simplified petroleum refinery flowsheet is shown in Fig. 1, where the bold lines mark the streams used for gasoline blending. Specifications on gasoline qualities include octane number, Reid vapor pressure, volatility, etc. The blending process involves two types of properties: the static blending property and the dynamic property of the blending tanks. In order to improve gasoline qualities, many advanced control techniques have been proposed recently [2][18][24]. All of these approaches need good models of the blending operation, so modeling of gasoline blending becomes a key problem for supervisory operation, prediction of gasoline qualities, and model-based optimal control. These approaches usually use a linear model plus a nonlinear uncertainty model; the blending models they use are either static [18] or dynamic [2]. The exact mathematical model for gasoline blending is too complex to be handled analytically, and many attempts have been made to introduce simplified models in order to construct "model-based" controllers [8]. A common method of approximating the blending operation is to use a linear (ideal) model [18] or to regard the blending operation as having a sufficiently small nonlinear uncertainty [1]. In practice, however, most real blending is non-ideal, so a realistic model of gasoline blending based on operation data is very important for engineers.

Recent results show that the neural network technique is very effective for modeling a broad category of complex nonlinear systems when complete model information is not available. Neural networks may be classified as feedforward (static) and recurrent (dynamic) networks. Most publications on nonlinear system identification use feedforward networks, for example multilayer perceptrons (MLP), which are implemented for the approximation of nonlinear functions. Static neural networks are suitable for modeling static nonlinear systems [12]. Since recurrent networks incorporate feedback, they have powerful representation capabilities and can successfully overcome the disadvantages of feedforward networks for modeling dynamic systems [7]. The neuro modeling approach exploits these features of neural networks, but the lack of a mathematical model of the plant makes it hard to obtain theoretical results on stable learning.

Wen Yu is with the Departamento de Control Automatico, CINVESTAV-IPN, Av. IPN 2508, México D.F., 07360, México ([email protected]). América Morales is with the Instituto Mexicano del Petróleo, Eje Central Lázaro Cárdenas 152, México D.F., 07730, México ([email protected]).

Fig. 1. Simplified petroleum refinery flowsheet.


It is very important to assure the stability of neuro modeling in theory before using it in real applications. The Lyapunov approach can be used directly to obtain robust training algorithms for continuous-time neural networks [21][22]. Discrete-time neural networks are more convenient for real applications. Two types of stability for discrete-time neural networks have been studied: the stability of the networks themselves [4][20], and the stability of the learning algorithms [12][17]. In [17] it was assumed that the neural network can represent the nonlinear system exactly, and it was concluded that a backpropagation-type algorithm guarantees exact convergence. Gersgorin's theorem was used to derive stability conditions for network learning in [12]. It is well known that normal identification algorithms are stable for ideal plants [10]; in the presence of disturbances or unmodeled dynamics, these adaptive procedures can easily become unstable. The lack of robustness in parameter identification was demonstrated in [3] and became a hot issue in the 1980s, and several robust modification techniques were proposed in [10]. The weight-adjusting algorithm of a neural network is a type of parameter identification, and the normal gradient algorithm is stable only when the neural network model matches the nonlinear plant exactly [17]. Generally, some modification of the normal gradient algorithm or of backpropagation is required so that the learning process is stable. For example, in [12] some hard restrictions were added to the learning law, and in [20] dynamic backpropagation was modified with NLq stability constraints. Another general method is to apply the robust modification techniques of robust adaptive control [10]: [14] applied σ-modification, [11] used a modified δ-rule, and [19] used a dead-zone in the weight tuning algorithms. Input-to-state stability (ISS) is another elegant approach to stability analysis besides the Lyapunov method; it leads to general conclusions on stability by using input and state characteristics. In this paper, the input-to-state stability approach is applied to obtain new stable learning laws, which do not need robust modifications, for static and dynamic modeling of the gasoline blending operation. Numerical simulations are provided to illustrate the neuro modeling approach.

II. MATHEMATICAL MODELS OF GASOLINE BLENDING OPERATION

In order to model the gasoline blending operation from input/output data, we first study several simple mathematical models of gasoline blending. A standard gasoline blending system is shown in Fig. 2, where $p_i$ and $q_i$ $(i = 1, \cdots, n)$ are the property and flow rate of the $i$th feedstock, and $p_f$ and $q_f$ are the property and flow rate of the outlet product. This system includes the static properties ($p_i \to p_f$) and the dynamic properties of the tank.


Fig. 2. Gasoline blending system.

A. Static properties of gasoline blending

Gasoline blending combines feedstocks produced by catalytic reforming, catalytic cracking, alkylation and hydrocracking processes. Several properties are important in characterizing automotive gasoline, such as Research Octane Number (RON), Reid Vapor Pressure (RVP), ASTM distillation points, viscosity, flash point and aniline point. Octane numbers are measures of a fuel's antiknock properties. Two standard test procedures are used to characterize the antiknock properties of fuels for spark engines: the ASTM D-908 test gives the Research Octane Number (RON) and the ASTM D-357 test gives the Motor Octane Number (MON) [5], which represents engine performance under severe high-speed conditions. Other important properties that affect engine performance are volatility and boiling range. Reid vapor pressure gives an indication of the volatility of a gasoline blend and is approximately the vapor pressure of the gasoline at 38°C. In this paper we introduce five models for RON and three models for RVP.

1) Ideal model for octane number (linear model) [5]:
$$ RON_1 = \sum_{i=1}^{n} x_i p_i = \frac{1}{q_f}\sum_{i=1}^{n} p_i q_i \qquad (1) $$
where $x_i = q_i / q_f$ is the volume fraction of component $i$, $p_i$ is the blending octane number of the component, $p_f = RON_1$, $q_i$ is the flow rate of the $i$th feedstock, and $q_f$ satisfies the mass balance $q_f = \sum_{i=1}^{n} q_i$.
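As a quick numerical illustration of the ideal rule (1), the following minimal sketch (Python with NumPy) computes $RON_1$ and $q_f$ from component flow rates and blending octane numbers; the values in the example call are simply the feedstock numbers quoted later in the simulation section and are otherwise arbitrary.

```python
import numpy as np

def ideal_ron(q, p):
    """Ideal (linear) blending rule (1): RON_1 = sum_i x_i p_i with x_i = q_i/q_f."""
    q = np.asarray(q, dtype=float)   # flow rates of the n feedstocks
    p = np.asarray(p, dtype=float)   # blending octane numbers of the feedstocks
    q_f = q.sum()                    # mass balance: q_f = sum_i q_i
    x = q / q_f                      # volume fractions
    return float(x @ p), float(q_f)

ron1, qf = ideal_ron(q=[3.0, 2.0, 1.0, 1.0, 0.5],
                     p=[90.1, 75.7, 93.8, 82.9, 95.0])
```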


2) Ethyl model [6][24]: The Ethyl method is one of the oldest models available in the literature and has been used as a benchmark against which newer models are compared. The blending nonlinearity is modeled explicitly as a function of the component sensitivity, the olefin content and the aromatic content of the components:
$$ RON_2 = \sum_{i=1}^{n} x_i p_i + \alpha_1\left(r^T \mathrm{diag}(s)\, x - \frac{(p^T x)(s^T x)}{e^T x}\right) + \alpha_2\left(O_s^T x - \frac{(O^T x)^2}{e^T x}\right) + \alpha_3\left(A_s^T x - \frac{(A^T x)^2}{e^T x}\right) \qquad (2) $$
where $m$ is the MON, $s$ is the sensitivity, $s = p - m$, $O$ is the olefin content of each feedstock, $O_s$ is the olefin content squared, $A_s$ is the square of the aromatics content, and $\alpha_i$ $(i = 1, \cdots, 3)$ are correlation coefficients.

3) Interaction model [18][1]: Since an adequate equation of state for liquids is rarely known, modeling of the nonlinear part relies on empirical equations; the following second-coefficient equation is proposed:
$$ RON_3 = \sum_{i=1}^{n} x_i p_i + \sum_{i=1}^{n}\sum_{k=i+1}^{n} I_{i,k}\, x_i x_k \qquad (3) $$
where $I_{i,k}$ is the interaction coefficient between components $i$ and $k$, given by $I_{i,k} = 4 O_{i,k} - 2\,(O_i + O_k)$, and $O_{i,k}$ is the octane number of a 50:50 blend of $i$ and $k$. $I_{i,k}$ depends mainly on temperature and pressure. The above equation can be generalized to account for third coefficients; however, few data are available for the third coefficients of pure materials. From a practical viewpoint, it is expected that the second-coefficient model retains the main nonlinear interactions of homogeneous mixtures.

4) Excess model [16]:
$$ RON_4 = \sum_{i=1}^{n} x_i p_i + \sum_{i=1}^{n} x_{ij}\, O_{ij}^{E} \qquad (4) $$
where $O_{ij}^{E}$ is the excess octane number associated with component $i$ in blend $j$, and $x_{ij}$ is the volume fraction of component $i$ in blend $j$.

5) Zahed model [23]:
$$ RON_5 = M_0 + \sum_{i=1}^{n} M_i\,\bigl(x_i p_i^{l}\bigr)^{k} \qquad (5) $$
where $M_i$ $(i = 0, \cdots, n)$, $l$ and $k$ are constants.
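Before moving on to the RVP models, here is a minimal sketch of the interaction model (3), which is also the model used later in the simulation section. The 50:50 blend octane numbers `O_pair` are hypothetical user-supplied measurements, and the pure-component octane numbers stand in for $O_i$ and $O_k$.

```python
import numpy as np

def interaction_ron(x, p, O_pair):
    """Interaction model (3): RON_3 = sum_i x_i p_i + sum_{i<k} I_{i,k} x_i x_k,
    with I_{i,k} = 4*O_pair[i,k] - 2*(p[i] + p[k])."""
    x, p, O_pair = np.asarray(x, float), np.asarray(p, float), np.asarray(O_pair, float)
    ron = float(x @ p)                       # linear (ideal) part
    n = len(x)
    for i in range(n):
        for k in range(i + 1, n):
            I_ik = 4.0 * O_pair[i, k] - 2.0 * (p[i] + p[k])
            ron += I_ik * x[i] * x[k]        # pairwise nonlinear correction
    return ron
```

The same loop applies to the RVP interaction model (7) below, with Reid vapor pressures in place of octane numbers.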


1) Ideal model for Reid vapor pressure (linear model) [5]:
$$ RVP_1 = \sum_{i=1}^{n} x_i p_i \qquad (6) $$
where $x_i$ is the volume fraction of component $i$, $p_i$ is the blending Reid vapor pressure of the component, and $p_f = RVP_1$.

2) Interaction model [18][1]:
$$ RVP_2 = \sum_{i=1}^{n} x_i p_i + \sum_{i=1}^{n}\sum_{k=i+1}^{n} I_{i,k}\, x_i x_k \qquad (7) $$
where $I_{i,k}$ is the interaction coefficient between components $i$ and $k$, given by $I_{i,k} = 4 P_{i,k} - 2\,(P_i + P_k)$, and $P_{i,k}$ is the Reid vapor pressure of a 50:50 blend of $i$ and $k$.

3) Blending index [18]:
$$ RVP_3 = \left(\sum_{i=1}^{n} x_i p_{bi}\right)^{0.8}, \qquad p_{bi} = p_i^{1.25} \qquad (8) $$
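The blending-index model (8) reduces to a one-line computation; the sketch below assumes the vapor pressures are given in consistent units.

```python
import numpy as np

def rvp_blending_index(x, p_rvp):
    """RVP blending index (8): RVP_3 = (sum_i x_i * p_i**1.25) ** 0.8."""
    x = np.asarray(x, float)
    p_b = np.asarray(p_rvp, float) ** 1.25   # component blending indices p_bi
    return float((x @ p_b) ** 0.8)
```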

We can see that the RON models (1), (2), (3) and (4) and the RVP models (6) and (7) use linear models plus nonlinear modifications, while models (5) and (8) are purely nonlinear. All of the above models are only suitable under special conditions, and their parameters are determined from experimental data.

B. Dynamic properties of gasoline blending

The dynamic properties of the gasoline blending system are caused by the dynamic behavior of the tank system. We use the following assumptions for the tank [9]: 1) perfect mixing; 2) isothermal operation; 3) no chemical reaction takes place in the liquid of the tank. The flow dynamics is that of a simple holding tank, see Fig. 2. From a mass balance on this system, we have
$$ \frac{d}{dt} m = \sum_{i=1}^{n} q_i - q_f $$
where $m$ is the total mass in the tank, $m = \rho_f V = \rho_f A h$, $\rho_f$ is the density of the outlet product, and $V$, $A$ and $h$ are the volume, surface area and liquid height of the tank. In the case of perfect mixing, $\frac{d}{dt}\rho_f = 0$, so
$$ \frac{d}{dt} m = \rho_f A \frac{d h}{dt} = \sum_{i=1}^{n} q_i - q_f \qquad (9) $$
The outlet mass flow rate can be written in terms of the exit velocity $v_f$:
$$ q_f = \rho_f A_f v_f \qquad (10) $$


where $A_f$ is the cross-sectional area at the exit. Since the energy storage is the sum of internal energy, kinetic energy, potential energy and flow work, an energy balance at the exit port (points $a_1$ and $a_2$) gives
$$ \frac{dE}{dt} = 0 = q_f (u_1 - u_2) + \frac{q_f}{2}\left(v_1^2 - v_2^2\right) + q_f g (z_1 - z_2) + \frac{q_f}{\rho_f}(P_1 - P_2) $$
If we also assume negligible changes in internal energy ($u_1 = u_2$) and elevation ($z_1 = z_2$), and that $v_1$ is small compared to $v_2$, the conservation of energy equation states that
$$ v_f = v_2 = \sqrt{\frac{2}{\rho_f}(P_1 - P_2)} = \sqrt{\frac{2}{\rho_f}\left(P_a + \rho_f g h - P_a\right)} = \sqrt{2 g h} \qquad (11) $$
where $P_a$ is the exhaust pressure. From (9), (10) and (11),
$$ \rho_f A \frac{dh}{dt} = -\rho_f A_f \sqrt{2 g h} + \sum_{i=1}^{n} q_i \qquad (12) $$
Thus the state equation for this first-order system is nonlinear. This expression says that the rate of change of mass within the tank is equal to the mass flow rate in minus the mass flow rate out, where the mass in the tank and the exit flow rate are both written in terms of the height of the fluid in the tank. If we define
$$ R = \frac{h}{\rho_f A_f \sqrt{2 g h}}, \qquad C = \frac{\rho_f A h}{h} = \rho_f A $$
then (12) becomes
$$ \dot{x} = -\frac{1}{R(x)\,C}\, x + \frac{1}{C}\, u \qquad (13) $$
where $u = \sum_{i=1}^{n} q_i$ and $x = h$.

If chemical reactions take place in the liquid in the tank, a CSTR (continuous stirred-tank reactor) can be used to model the dynamic properties of the blending system [8]. Chemical reactions are either exothermic (release energy) or endothermic (require energy input) and therefore require that energy be removed from or added to the reactor for a constant temperature to be maintained. Exothermic reactions are the most interesting systems to study because of potential safety problems (rapid increases in temperature, sometimes called "ignition" behavior) and the possibility of exotic behavior such as multiple steady states (for the same value of the input variable there may be several possible values of the output variable). We consider that $m$ elements of the $n$ inlet flows undergo a chemical reaction, which is first-order, exothermic and irreversible: $A_1 + A_2 + \cdots + A_m \to B$. In Fig. 2 we see that $m$ fluid streams are continuously fed to the reactor and a fluid stream is continuously removed from the reactor. Since the reactor is perfectly mixed, the exit stream has the same concentration and temperature as the reactor fluid. Gasoline blending tanks typically have more complicated kinetics than we study here, but the characteristic behavior is similar. The overall material balance (9), written in terms of volumes, is
$$ \frac{d}{dt}(\rho_f V) = -\rho_f V_f + \sum_{i=1}^{n} \frac{d}{dt}(\rho_i V_i) \qquad (14) $$
where $V$, $V_i$ and $V_f$ are the volumes of the tank, inlet and outlet. The balance on component $A_i$ $(i = 1, \cdots, m)$ is
$$ \frac{d}{dt}(c_{ri} V_i) = -c_{ri} V_i + c_i V_i - r_i V \qquad (15) $$
where $c_{ri}$ and $c_i$ are the concentrations of $A_i$ in the reactor and in the feed stream, and $r_i$ is the rate of reaction of $A_i$. The balance on component $B$ is
$$ \frac{d}{dt}(c_f V_f) = -c_{rf} V_f + c_f V_f + r_f V $$
where $c_{rf}$ and $c_f$ are the concentrations of $B$ in the reactor and in the outlet stream, and $r_f$ is the rate of reaction of $B$. Two dynamic properties of the gasoline blending tank are discussed in this paper: the tank level $h$, described by (13), and the concentration of each element, given by (15).
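To make the tank dynamics concrete, here is a minimal forward-Euler integration of the level balance (12). It is only a sketch (the simulation section integrates the same model with a Runge-Kutta-type difference scheme), and the parameter values in the example call are the ones quoted there.

```python
import numpy as np

def simulate_tank_level(h0, q_in, A, A_f, rho_f, dt, g=9.8):
    """Forward-Euler integration of (12):
       rho_f*A*dh/dt = -rho_f*A_f*sqrt(2*g*h) + sum_i q_i(k)."""
    q_in = np.asarray(q_in, float)           # (N, n) feedstock flow rates over time
    h = np.empty(len(q_in) + 1)
    h[0] = h0
    for k, q in enumerate(q_in):
        outflow = rho_f * A_f * np.sqrt(2.0 * g * max(h[k], 0.0))
        h[k + 1] = max(h[k] + dt * (q.sum() - outflow) / (rho_f * A), 0.0)
    return h

levels = simulate_tank_level(h0=2.0,
                             q_in=np.tile([3.0, 2.0, 1.0, 1.0, 0.5], (600, 1)),
                             A=30.0, A_f=2.0, rho_f=0.85, dt=0.01)
```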

Fig. 3. Gasoline blending system modeling via neural networks.

III. MODELING OF GASOLINE BLENDING PROCESS VIA FEEDFORWARD AND RECURRENT NEURAL NETWORKS

In Section II we discussed several mathematical models for the static and dynamic properties of gasoline blending. These models only work under special conditions, which in real applications correspond to historical data. We therefore want to model the gasoline blending process from input/output data. Neural network modeling is a "black-box" approach, which makes it suitable for gasoline blending modeling. We will use feedforward networks to model the static properties of the blending system, Research Octane Number (RON) and Reid Vapor Pressure (RVP), and recurrent neural networks to model the dynamic properties of the blending system: tank level, concentration and temperature, see Fig. 3.


A. Modeling static properties via feedforward neural networks

The static properties RON and RVP can be written in the following general form:
$$ y(k) = \Phi\left[u(k), u(k-1), u(k-2), \cdots\right] = \Phi\left[X(k)\right] \qquad (16) $$
where
$$ X(k) = \left[u(k), u(k-1), u(k-2), \cdots\right]^T \qquad (17) $$
$\Phi(\cdot)$ is an unknown nonlinear function representing the blending operation, $u(k)$ is a measurable scalar input (it can be $x_i$, the volume fraction of a component), and $y(k)$ is the RON or RVP value at time $k$. This is a NARMA model [15]. We consider a multilayer neural network (multilayer perceptron, see the MLP part of Fig. 4) to model the static operation of the blending (16):
$$ \hat{y}(k) = V_k\, \phi\left[W_k X(k)\right] \qquad (18) $$
where the scalar output is $\hat{y}(k)$ and the vector input $X(k) \in R^{n\times 1}$ is defined in (17); the weights of the output layer are $V_k \in R^{1\times m}$, the weights of the hidden layer are $W_k \in R^{m\times n}$, and $\phi$ is an $m$-dimensional vector function whose elements $\phi_i(\cdot)$ are typically sigmoid functions. The identified blending system (16) can be written as
$$ y(k) = V^{*} \phi\left[W^{*} X(k)\right] - \mu(k) $$
where $V^{*}$ and $W^{*}$ are the unknown weights which minimize the modeling error $\mu(k)$. The nonlinear plant (16) may also be expressed as
$$ y(k) = V^{0} \phi\left[W^{*} X(k)\right] - \delta(k) \qquad (19) $$
where $V^{0}$ is a known matrix chosen by the user. In general, $\|\delta(k)\| \geq \|\mu(k)\|$. Using a Taylor series around the point $W_k X(k)$, the identification error can be represented as
$$ \begin{aligned} e(k) &= V_k \phi\left[W_k X(k)\right] - V^{0} \phi\left[W^{*} X(k)\right] + \delta(k) \\ &= V_k \phi\left[W_k X(k)\right] - V^{0} \phi\left[W_k X(k)\right] + V^{0} \phi\left[W_k X(k)\right] - V^{0} \phi\left[W^{*} X(k)\right] + \delta(k) \\ &= \tilde{V}_k \phi\left[W_k X(k)\right] + V^{0} \phi'\, \tilde{W}_k X(k) + \zeta(k) \end{aligned} \qquad (20) $$
where $\phi'$ is the derivative of the nonlinear activation function $\phi(\cdot)$ at the point $W_k X(k)$, $\tilde{W}_k = W_k - W^{*}$, $\tilde{V}_k = V_k - V^{0}$, $\zeta(k) = V^{0}\varepsilon(k) + \delta(k)$, and $\varepsilon(k)$ is the second-order approximation error of the Taylor series. In this paper we are only interested in open-loop identification, so we may assume that the plant (16) is bounded-input bounded-output stable, i.e., $y(k)$ and $u(k)$ in (16) are bounded. Since $X(k) = [u(k), u(k-1), u(k-2), \cdots]^T$, $X(k)$ is bounded. By the boundedness of the sigmoid function $\phi$ we assume that $\delta(k)$ in (19) is bounded; $\varepsilon(k)$ is also bounded, so $\zeta(k)$ in (20) is bounded. The following theorem gives a stable backpropagation-like algorithm for the discrete-time multilayer neural network.

Theorem 1: If the multilayer neural network (18) is used to identify the nonlinear process (16), the following backpropagation-like algorithm makes the identification error $e(k)$ bounded:
$$ W_{k+1} = W_k - \eta_k\, e(k)\, \phi' V^{0T} X^T(k), \qquad V_{k+1} = V_k - \eta_k\, e(k)\, \phi^T \qquad (21) $$
where $\eta_k = \dfrac{\eta}{1 + \left\|\phi' V^{0T} X^T(k)\right\|^2 + \left\|\phi\right\|^2}$, $0 < \eta \leq 1$. The average identification error satisfies
$$ J = \limsup_{T\to\infty} \frac{1}{T}\sum_{k=1}^{T} e^2(k) \leq \frac{\eta}{\pi}\,\bar{\zeta} \qquad (22) $$
where $\pi = \dfrac{\eta}{1+\kappa}\left[1 - \dfrac{\kappa}{1+\kappa}\right] > 0$, $\kappa = \max_k\left(\left\|\phi' V^{0T} X^T(k)\right\|^2 + \left\|\phi\right\|^2\right)$, and $\bar{\zeta} = \max_k\left[\zeta^2(k)\right]$.

Remark 2: $V^{0}$ does not affect the stability of the neuro identification, but it influences the identification accuracy, see (22). We design an off-line method to find a better value for $V^{0}$. If we let $V^{0} = V_0$, the algorithm (21) makes the identification error convergent, i.e., $V_k$ will make the identification error smaller than that obtained with $V_0$. $V^{0}$ may be selected by the following steps: 1) start from any initial value $V^{0} = V_0$; 2) carry out the identification with this $V^{0}$ until time $T_0$; 3) if $\|e(T_0)\| < \|e(0)\|$, take $V_{T_0}$ as the new $V^{0}$, i.e., $V^{0} = V_{T_0}$, and go to step 2 to repeat the identification process; 4) if $\|e(T_0)\| \geq \|e(0)\|$, stop the off-line identification; $V_{T_0}$ is then the final value of $V^{0}$. With this prior knowledge of $V^{0}$, we may start the identification (21).
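The sketch below implements the backpropagation-like law (21) with the normalized gain $\eta_k$ for a single-output MLP. For simplicity the fixed matrix $V^{0}$ is drawn at random rather than refined with the off-line procedure of Remark 2, and all dimensions and data are illustrative.

```python
import numpy as np

def train_static_mlp(X, y, m=9, eta=1.0, seed=0):
    """Stable learning law (21) for the static model (18): y_hat = V_k * tanh(W_k X)."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.uniform(0.0, 1.0, (m, n))        # hidden-layer weights W_k
    V = rng.uniform(0.0, 1.0, (1, m))        # output-layer weights V_k
    V0 = rng.uniform(0.0, 1.0, (1, m))       # fixed matrix V^0 chosen by the user (Remark 2)
    errors = []
    for Xk, yk in zip(X, y):
        z = W @ Xk
        phi = np.tanh(z)                     # phi(W_k X(k))
        dphi = 1.0 / np.cosh(z) ** 2         # phi' = sech^2
        e = float(V @ phi) - yk              # e(k) = y_hat(k) - y(k)
        G = np.outer(dphi * V0.ravel(), Xk)  # phi' V0^T X^T(k), an (m, n) matrix
        eta_k = eta / (1.0 + np.sum(G ** 2) + np.sum(phi ** 2))
        W -= eta_k * e * G                   # W_{k+1} = W_k - eta_k e(k) phi' V0^T X^T(k)
        V -= eta_k * e * phi.reshape(1, -1)  # V_{k+1} = V_k - eta_k e(k) phi^T
        errors.append(e)
    return W, V, np.asarray(errors)
```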

B. Modeling dynamic properties via dynamic neural networks

The dynamic properties of the blending system, such as the tank level and the concentration of each element, can be written in the following discrete-time form:
$$ x(k+1) = f\left[x(k)\right] \qquad (23) $$
The discrete-time multilayer recurrent neural network is represented as
$$ \beta\,\hat{x}(k+1) = A\,\hat{x}(k) + V_k\, \sigma\left[W_k\, x(k)\right] \qquad (24) $$


Fig. 4. Discrete-time multilayer recurrent neural networks.

Here the weights of the output layer are $V_k \in R^{1\times m}$, the weights of the hidden layer are $W_k \in R^{m\times n}$, $\sigma$ is an $m$-dimensional vector function whose elements $\sigma_i(\cdot)$ are typically sigmoid functions, and $\beta > 1$ is a positive design parameter. The structure of the discrete-time multilayer recurrent neural network is shown in Fig. 4, where MLP denotes the discrete-time multilayer perceptron. The identified blending system (23) can be written as
$$ \beta\, x(k+1) = A\, x(k) + V^{*} \sigma\left[W^{*} x(k)\right] - \mu(k) \qquad (25) $$
where $V^{*}$ and $W^{*}$ are the unknown weights which minimize the modeling error $\mu(k)$. In the case of two independent variables, a smooth function $f$ has the Taylor formula
$$ f(x_1, x_2) = \sum_{k=0}^{l-1} \frac{1}{k!}\left[\left(x_1 - x_1^0\right)\frac{\partial}{\partial x_1} + \left(x_2 - x_2^0\right)\frac{\partial}{\partial x_2}\right]^k f + R_l $$
where $R_l$ is the remainder of the Taylor formula. If we let $x_1$ and $x_2$ correspond to $V^{*}$ and $W^{*}$, and $x_1^0$ and $x_2^0$ correspond to $V_k$ and $W_k$, then using a Taylor series around the point $W_k x(k)$ and $V_k$, the identification error $e(k) = \hat{x}(k) - x(k)$ can be represented as
$$ \beta\, e(k+1) = A\, e(k) + \tilde{V}_k \sigma\left[W_k x(k)\right] + V_k \sigma'\, \tilde{W}_k x(k) + \zeta(k) \qquad (26) $$
where $\sigma'$ is the derivative of the nonlinear activation function $\sigma(\cdot)$ at the point $W_k x(k)$, $\tilde{W}_k = W_k - W^{*}$, $\tilde{V}_k = V_k - V^{*}$, $\zeta(k) = R_1 + \mu(k)$, and $R_1$ is the second-order approximation error of the Taylor series. In this paper we are only interested in open-loop identification, so we may assume that the plant (23) is bounded stable, i.e., $x(k)$ in (23) is bounded. By the boundedness of the sigmoid function $\sigma$ we assume that $\mu(k)$ in (25) is bounded; $R_1$ is also bounded, so $\zeta(k)$ in (26) is bounded. The following theorem gives a stable backpropagation-like algorithm for the discrete-time recurrent neural network.

Theorem 3: If the recurrent neural network (24) is used to identify the nonlinear plant (23) and $A$ is selected such that $-1 < \lambda(A) < 0$, the following gradient updating law, without robust modification, makes the identification error $e(k)$ bounded:
$$ W_{k+1} = W_k - \eta_k\, e(k)\, \sigma' V_k^T x(k), \qquad V_{k+1} = V_k - \eta_k\, e(k)\, \sigma^T\left[W_k x(k)\right] \qquad (27) $$
where $\eta_k$ satisfies
$$ \eta_k = \begin{cases} \dfrac{\eta}{1 + \left\|\sigma' V_k^T x(k)\right\|^2 + \left\|\sigma\right\|^2} & \text{if } \beta\left\|e(k+1)\right\| \geq \left\|e(k)\right\| \\[2mm] 0 & \text{if } \beta\left\|e(k+1)\right\| < \left\|e(k)\right\| \end{cases} $$
with $0 < \eta \leq 1$. The average identification error satisfies
$$ J = \limsup_{T\to\infty} \frac{1}{T}\sum_{k=1}^{T} e^2(k) \leq \frac{\eta}{\pi}\,\bar{\zeta} \qquad (28) $$
where $\pi = \dfrac{\eta}{1+\kappa}\left[1 - \dfrac{\kappa}{1+\kappa}\right] > 0$, $\kappa = \max_k\left(\left\|\sigma' V_k^T x(k)\right\|^2 + \left\|\sigma\right\|^2\right)$, and $\bar{\zeta} = \max_k\left[\zeta^2(k)\right]$.

Remark 4: The condition $\beta\left\|e(k+1)\right\| < \left\|e(k)\right\|$ is a dead-zone. This technique is commonly used in system identification; it guarantees that the identification error is stable with respect to unmodeled dynamics. In order to avoid the situation in which $\beta\left\|e(k+1)\right\| < \left\|e(k)\right\|$ (so that $\eta_k = 0$) while we still want to train the network, we should select $\beta$ large enough so that the dead-zone becomes small. But a large $\beta$ means that the stable eigenvalues of the dynamic neural network
$$ \hat{x}(k+1) = \frac{A}{\beta}\,\hat{x}(k) + \frac{1}{\beta}\, V_k \sigma\left[W_k x(k)\right] $$
are near the unit circle, which affects the stability of the neural network. The trade-off between a small dead-zone and a stable neural network is resolved by simulation in this paper.

IV. NUMERICAL SIMULATION

We first use a feedforward neural network to model a static property of gasoline blending, the octane number (RON). We use two kinds of mathematical model, the interaction model (3) and the Zahed model (5), to approximate two groups of real data. A static neural network is used to model the blending octane number. We consider 5 components for gasoline blending, $x = [x_1, x_2, x_3, x_4, x_5]^T$. The interaction model can be rewritten as
$$ RON_3 = x^T p + x^T \Gamma\, x \qquad (29) $$


where the property vector of the feedstocks is $p = [p_1, p_2, p_3, p_4, p_5]^T = [90.1, 75.7, 93.8, 82.9, 95]^T$ and $\Gamma$ is the interaction matrix, defined as
$$ \Gamma = \begin{bmatrix} 0 & -6.0 & -8.25 & 8.25 & -6 \\ 0 & 0 & -7.8 & 9 & 11 \\ 0 & 0 & 0 & 11 & -9 \\ 0 & 0 & 0 & 0 & 11.5 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} $$

Fig. 5. Training phase.

Fig. 6. Testing phase.


A standard form of (29) is


$$ y(k) = \Phi\left[X(k)\right] $$
with $y(k) = RON_3$ and $X(k) = x$. We first select $x_1, x_2, x_3, x_4$ as random numbers in $[0, 1]$ and $x_5 = 1 - (x_1 + x_2 + x_3 + x_4)$ to train the following neural network model:
$$ \hat{y}(k) = V_k\, \phi\left[W_k X(k)\right] \qquad (30) $$
where $W_k \in R^{9\times 5}$, $V_k \in R^{1\times 9}$, and the initial values of the elements of $V^{0T}$, $W_k$ and $V_k$ are random numbers in $[0, 1]$. The learning algorithm is
$$ W_{k+1} = W_k - \eta_k\, e(k)\, \phi' V^{0T} X^T(k), \qquad V_{k+1} = V_k - \eta_k\, e(k)\, \phi^T \qquad (31) $$
where
$$ \eta_k = \frac{1}{1 + \left\|\phi' V^{0T} X^T(k)\right\|^2 + \left\|\phi\right\|^2}, \qquad e(k) = \hat{y}(k) - y(k), $$
$$ \phi(x) = \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}, \qquad \phi'(x) = \operatorname{sech}^2(x) = \left(\frac{2}{e^{x} + e^{-x}}\right)^2 $$
We use 1500 data points for training; the training results are shown in Fig. 5. We find that after $k > 1300$ the weights have converged. Then we use different flow rates of the feedstocks to test our neural model. We select
$$ \begin{aligned} x_1(k) &= 0.1\left[1 + \sin\left(\tfrac{2\pi}{20} k\right)\right], & x_2(k) &= 0.2\left[1 + \cos\left(\tfrac{2\pi}{20} k\right)\right], \\ x_3(k) &= 0.1\left[1 + \sin\left(\tfrac{2\pi}{30} k\right)\right], & x_4(k) &= 0.1\left[1 + \sin\left(\tfrac{2\pi}{50} k\right)\right], \\ x_5(k) &= 1 - \left[x_1(k) + x_2(k) + x_3(k) + x_4(k)\right] \end{aligned} $$
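For reproducibility, the data generation just described can be sketched as follows; `p` and `Gamma` are the values quoted above, the training fractions are uniform random numbers, and the testing fractions follow the sinusoidal schedules.

```python
import numpy as np

p = np.array([90.1, 75.7, 93.8, 82.9, 95.0])
Gamma = np.array([[0.0, -6.0, -8.25, 8.25, -6.0],
                  [0.0,  0.0, -7.80, 9.00, 11.0],
                  [0.0,  0.0,  0.00, 11.0, -9.0],
                  [0.0,  0.0,  0.00, 0.00, 11.5],
                  [0.0,  0.0,  0.00, 0.00,  0.0]])

def ron3(x):
    """Plant (29): RON_3 = x^T p + x^T Gamma x."""
    return float(x @ p + x @ Gamma @ x)

rng = np.random.default_rng(0)

def training_sample():
    x = rng.uniform(0.0, 1.0, 4)                       # x1..x4 random in [0, 1]
    x = np.append(x, 1.0 - x.sum())                    # x5 = 1 - (x1+x2+x3+x4)
    return x, ron3(x)

def testing_sample(k):
    x1 = 0.1 * (1.0 + np.sin(2.0 * np.pi * k / 20.0))
    x2 = 0.2 * (1.0 + np.cos(2.0 * np.pi * k / 20.0))
    x3 = 0.1 * (1.0 + np.sin(2.0 * np.pi * k / 30.0))
    x4 = 0.1 * (1.0 + np.sin(2.0 * np.pi * k / 50.0))
    x = np.array([x1, x2, x3, x4, 1.0 - (x1 + x2 + x3 + x4)])
    return x, ron3(x)
```

Feeding 1500 such training samples to the earlier sketch of the learning law (21) reproduces the kind of experiment described here, although the exact error values naturally depend on the random initialization.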

Fig. 7. Average modeling error.

These data are fed to the blending system (29) and the blending model (30) at the same time; the octane number of the gasoline blend is shown in Fig. 6. Let us define the mean squared error over a finite time as
$$ J(N) = \frac{1}{2N}\sum_{k=1}^{N} e^2(k) \qquad (32) $$
where $N$ is the simulation time. The average modeling error is shown in Fig. 7. In the training phase $J_1(1500) = 0.0058$, in the testing phase $J_1(1500) = 0.0087$. Modeling errors depend on the complexity of the particular model selected and on how close it is to the actual plant. In this example the modeling error is larger in the testing phase than in the training phase; this worse result is due to the fact that the neural network cannot match the plant exactly. From the identification point of view, it is because the model is not close to the plant. We should mention that the model structure influences the modeling error, but it does not destroy the stability of the identification process.
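The finite-time index (32) is computed directly from the stored error sequence, for example:

```python
import numpy as np

def mse_finite(e):
    """Mean squared error (32): J(N) = (1/(2N)) * sum_{k=1}^{N} e(k)^2."""
    e = np.asarray(e, dtype=float)
    return float(np.sum(e ** 2) / (2.0 * len(e)))
```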


Fig. 8. Neuro modeling via the Zahed approach.

Fig. 9. Performance comparison.

Fig. 10. Dynamic property modeling via recurrent neural networks.

It is interesting to see what happens if we do not change the neural network but the data come from another model, the Zahed model [23]:
$$ RON_5 = M_0 + \sum_{i=1}^{n} M_i\,\bigl(x_i p_i^{l}\bigr)^{k} $$
with $M_0 = 0$, $M_i = 1$, $l = 1.25$, $k = 0.8$, $n = 5$, $p = [90.1, 75.7, 93.8, 82.9, 95]^T$. The modeling result is shown in Fig. 8. The same neural model can also model $RON_5$; only the weights are different.

Now we compare our algorithm (31) with the normal backpropagation algorithm [15]. We use the same multilayer neural network as [15], $\Pi_{4,9,1}$, and a fixed learning rate $\eta = 0.05$. We found that for $\eta > 0.1$ the normal backpropagation algorithm becomes unstable. The performance comparison can be made with the mean squared error (32); the comparison results are shown in Fig. 9. We can see that the stable algorithm proposed in this paper has almost the same convergence rate as the normal backpropagation algorithm, while the normal backpropagation algorithm for multilayer neural networks has a slow convergence speed and a large identification error, $J_1(1500) = 0.078$.

The second part of the simulation uses a recurrent neural network to model the dynamic properties of gasoline blending. The behavior of the tank can be expressed as
$$ \dot{x} = -\frac{A_f \sqrt{2g}}{A}\sqrt{x} + \frac{1}{\rho_f A}\sum_{i=1}^{n} q_i \qquad (33) $$
We select the parameters as $A = 30$, $A_f = 2$, $g = 9.8$, $\rho_f = 0.85$, $q = [q_1, q_2, q_3, q_4, q_5]^T = [3 + a_1,\ 2,\ 1 + a_3,\ 1,\ 0.5]^T$, where $a_1$ and $a_3$ are square waves with amplitude 10 and frequency 0.5 Hz, and $x(0) = 2$. The following difference technique is used to obtain the discrete-time states of the system (33). Write
$$ \dot{x} = \alpha(x)\, x + \lambda u $$
where $\alpha(x) = -\dfrac{A_f \sqrt{2g}}{A\sqrt{x}}$, $\lambda = \dfrac{1}{\rho_f A}$, $u = \sum_{i=1}^{n} q_i$. Define $s_1 = \alpha x_k$, $s_2 = \alpha\left(x_k + s_1\right)$, $s_3 = \alpha\left(x_k + \tfrac{s_1 + s_2}{4}\right)$. If $\left|\tfrac{s_1 - 2 s_3 + s_2}{3}\right| \leq \tfrac{|x_k|}{1000}$ or $\left|\tfrac{s_1 - 2 s_3 + s_2}{3}\right| < 1$, then $x_{k+1} = x_k + \tfrac{s_1 + 4 s_3 + s_2}{6}$, $k = 0, 1, 2, \cdots$. A multilayer recurrent neural network of the form (24) is used to identify (33):
$$ \beta\,\hat{x}(k+1) = A\,\hat{x}(k) + V_k \sigma\left[W_k x(k)\right] + a u $$
where $\sigma(\cdot) = \tanh(\cdot)$ and $A = 0.8$. Model complexity is important in the context of system identification; here it corresponds to the number of hidden units of the neuro model. In this simulation different numbers of hidden nodes were tested; the simulations show that with more than 20 hidden nodes the identification accuracy does not improve much. In [15], 20 hidden nodes were also used for the first hidden layer, so $W_k \in R^{20\times 1}$, $V_k \in R^{1\times 20}$. The learning algorithm for $W_k$ and $V_k$ is (27) with $\eta_k = \dfrac{1}{1 + \left\|\sigma' V_k^T x(k)\right\|^2 + \left\|\sigma\right\|^2}$, $\sigma'(\cdot) = \operatorname{sech}^2(\cdot)$, $\beta = 4$. The initial conditions for $W_k$ and $V_k$ are random numbers, i.e., $W_0 = \mathrm{rand}(\cdot)$, $V_0 = \mathrm{rand}(\cdot)$. The on-line identification result is shown in Fig. 10. The total simulation time is 60, and there is only one instant at which $\beta\left\|e(k+1)\right\| < \left\|e(k)\right\|$.
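A sketch of this second experiment is given below: the recurrent model (24), augmented with the input term, is trained with the dead-zone law (27) on a measured level sequence. The hidden-layer size, $A = 0.8$ and $\beta = 4$ follow the text; the input gain `a` and the way the level data `x_seq`, `u_seq` are produced (for instance with the Euler sketch of Section II-B) are assumptions of the sketch.

```python
import numpy as np

def identify_tank_level(x_seq, u_seq, m=20, A=0.8, beta=4.0, a=0.1, eta=1.0, seed=0):
    """Recurrent model (24) with input, beta*x_hat(k+1) = A*x_hat(k) + V sigma(W x(k)) + a*u(k),
    trained with the dead-zone gradient law (27)."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.0, 1.0, (m, 1))               # W_k in R^{20x1}
    V = rng.uniform(0.0, 1.0, (1, m))               # V_k in R^{1x20}
    x_hat = np.empty_like(np.asarray(x_seq, float))
    x_hat[0] = x_seq[0]
    for k in range(len(x_seq) - 1):
        z = (W * x_seq[k]).ravel()                  # W_k x(k)
        sig = np.tanh(z)                            # sigma(W_k x(k))
        dsig = 1.0 / np.cosh(z) ** 2                # sigma' = sech^2
        x_hat[k + 1] = (A * x_hat[k] + float(V @ sig) + a * u_seq[k]) / beta
        e_k = x_hat[k] - x_seq[k]                   # e(k)
        e_next = x_hat[k + 1] - x_seq[k + 1]        # e(k+1)
        G = (dsig * V.ravel()).reshape(m, 1) * x_seq[k]   # sigma' V_k^T x(k)
        if beta * abs(e_next) >= abs(e_k):          # outside the dead-zone: adapt
            eta_k = eta / (1.0 + np.sum(G ** 2) + np.sum(sig ** 2))
        else:                                       # inside the dead-zone: freeze the weights
            eta_k = 0.0
        W -= eta_k * e_k * G                        # W_{k+1} = W_k - eta_k e(k) sigma' V_k^T x(k)
        V -= eta_k * e_k * sig.reshape(1, -1)       # V_{k+1} = V_k - eta_k e(k) sigma^T
    return x_hat

# usage (x_seq, u_seq would come, e.g., from integrating the tank model):
# x_hat = identify_tank_level(x_seq, u_seq)
```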


V. CONCLUSION

In this paper we present static and dynamic neural networks for modeling the gasoline blending process. We first give detailed mathematical models for the static and dynamic properties of gasoline blending systems. The input-to-state stability approach is then applied to obtain robust learning algorithms for the two neural networks; only input/output data are used. Numerical simulations are provided to illustrate the neuro modeling approaches. The gasoline blending process exhibits both static and dynamic properties: we use static neural networks to model its static behavior and dynamic neural networks to approximate its dynamic behavior. This is more effective than using only static or only dynamic neural networks to model both the static and the dynamic properties of gasoline blending.

REFERENCES

[1] J. Alvarez-Ramirez, A. Morales, R. Suarez, Robustness of a class of bias update controllers for blending systems, Industrial Engineering Chemistry Research, Vol. 41, No. 19, 4786-4793, 2002.
[2] D-M. Chang, C-C. Yu and I-L. Chien, Coordinated control of blending systems, IEEE Trans. Control Systems Technology, Vol. 6, No. 4, 495-506, 1998.
[3] B. Egardt, Stability of Adaptive Controllers, Lecture Notes in Control and Information Sciences, Vol. 20, Springer-Verlag, Berlin, 1979.
[4] Z. Feng and A. N. Michel, Robustness analysis of a class of discrete-time systems with applications to neural networks, Proc. of American Control Conference, 3479-3483, San Diego, 1999.
[5] J. H. Gary, G. E. Handwerk, Petroleum Refining Technology and Economics, Marcel Dekker, New York, 1994.
[6] W. C. Healy, C. W. Maassen and R. T. Peterson, A new approach to blending octanes, Proc. 24th Meeting of American Petroleum Institute's Division of Refining, New York, 1959.
[7] J. J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. of the National Academy of Sciences, USA, Vol. 81, 3088-3092, 1984.
[8] W. L. Luyben, Process Modeling, Simulation and Control for Chemical Engineers, 2nd edition, McGraw-Hill, Inc., 1990.
[9] K. Murakami and D. E. Seborg, Constrained parameter estimation with applications to blending operations, Journal of Process Control, Vol. 10, 195-202, 2000.
[10] P. A. Ioannou and J. Sun, Robust Adaptive Control, Prentice-Hall, Inc., Upper Saddle River, NJ, 1996.
[11] S. Jagannathan and F. L. Lewis, Identification of nonlinear dynamical systems using multilayered neural networks, Automatica, Vol. 32, No. 12, 1707-1712, 1996.
[12] L. Jin and M. M. Gupta, Stable dynamic backpropagation learning in recurrent neural networks, IEEE Trans. Neural Networks, Vol. 10, No. 6, 1321-1334, 1999.
[13] Z. P. Jiang and Y. Wang, Input-to-state stability for discrete-time nonlinear systems, Automatica, Vol. 37, No. 2, 857-869, 2001.
[14] E. B. Kosmatopoulos, M. M. Polycarpou, M. A. Christodoulou and P. A. Ioannou, High-order neural network structures for identification of dynamical systems, IEEE Trans. on Neural Networks, Vol. 6, No. 2, 422-431, 1995.


[15] K. S. Narendra and K. Parthasarathy, Identification and control of dynamical systems using neural networks, IEEE Trans. Neural Networks, Vol. 1, No. 1, 4-27, 1990.
[16] A. Muller, New method produces accurate octane blending values, Oil & Gas J., Vol. 23, No. 3, 80-90, 1992.
[17] M. M. Polycarpou and P. A. Ioannou, Learning and convergence analysis of neural-type structured networks, IEEE Trans. Neural Networks, Vol. 3, No. 1, 39-50, 1992.
[18] A. Singh, J. F. Forbes, P. J. Vermeer, S. S. Woo, Model-based real-time optimization of automotive gasoline blending operations, Journal of Process Control, Vol. 10, 43-58, 2000.
[19] Q. Song, Robust training algorithm of multilayered neural networks for identification of nonlinear dynamic systems, IEE Proceedings - Control Theory and Applications, Vol. 145, No. 1, 41-46, 1998.
[20] J. A. K. Suykens, J. Vandewalle, B. De Moor, NLq theory: checking and imposing stability of recurrent neural networks for nonlinear modelling, IEEE Transactions on Signal Processing (special issue on neural networks for signal processing), Vol. 45, No. 11, 2682-2691, 1997.
[21] J. A. K. Suykens, J. Vandewalle and B. De Moor, Lur'e systems with multilayer perceptron and recurrent neural networks; absolute stability and dissipativity, IEEE Trans. on Automatic Control, Vol. 44, 770-774, 1999.
[22] W. Yu, A. S. Poznyak and X. Li, Multilayer dynamic neural networks for nonlinear system on-line identification, International Journal of Control, Vol. 74, No. 18, 1858-1864, 2001.
[23] A. H. Zahed, S. A. Mullah and M. D. Bashir, Predict octane number for gasoline blends, Hydrocarbon Processing, No. 5, 85-87, 1993.
[24] Y. Zhang, D. Monder and J. F. Forbes, Real-time optimization under parametric uncertainty: a probability constrained approach, Journal of Process Control, Vol. 12, 373-389, 2002.

APPENDIX

Proof (Theorem 1): We select a positive definite matrix $L_k$ as
$$ L_k = \left\|\tilde{W}_k\right\|^2 + \left\|\tilde{V}_k\right\|^2 \qquad (34) $$
where $\tilde{W}_k = W_k - W^{*}$ and $\tilde{V}_k = V_k - V^{0}$. From the updating law (21), we have
$$ \tilde{W}_{k+1} = \tilde{W}_k - \eta_k\, e(k)\, \phi' V^{0T} X^T(k), \qquad \tilde{V}_{k+1} = \tilde{V}_k - \eta_k\, e(k)\, \phi^T $$
Since $\phi'$ is a diagonal matrix, using (20) we have
$$ \begin{aligned} \Delta L_k &= \left\|\tilde{W}_k - \eta_k e(k) \phi' V^{0T} X^T(k)\right\|^2 + \left\|\tilde{V}_k - \eta_k e(k) \phi^T\right\|^2 - \left\|\tilde{W}_k\right\|^2 - \left\|\tilde{V}_k\right\|^2 \\ &= \eta_k^2 e^2(k)\left(\left\|\phi' V^{0T} X^T(k)\right\|^2 + \left\|\phi\right\|^2\right) - 2\eta_k \left\|e(k)\right\| \left\|V^{0}\phi'\tilde{W}_k X(k) + \tilde{V}_k \phi\right\| \\ &= \eta_k^2 e^2(k)\left(\left\|\phi' V^{0T} X^T(k)\right\|^2 + \left\|\phi\right\|^2\right) - 2\eta_k \left\|e(k)\left[e(k) - \zeta(k)\right]\right\| \\ &\leq -\eta_k e^2(k)\left[1 - \eta_k\left(\left\|\phi' V^{0T} X^T(k)\right\|^2 + \left\|\phi\right\|^2\right)\right] + \eta\,\zeta^2(k) \\ &\leq -\pi e^2(k) + \eta\,\zeta^2(k) \end{aligned} \qquad (35) $$
where $\pi$ is defined in (22). Because
$$ n\left[\min\left(\tilde{w}_i^2\right) + \min\left(\tilde{v}_i^2\right)\right] \leq L_k \leq n\left[\max\left(\tilde{w}_i^2\right) + \max\left(\tilde{v}_i^2\right)\right] $$
where $n[\min(\tilde{w}_i^2) + \min(\tilde{v}_i^2)]$ and $n[\max(\tilde{w}_i^2) + \max(\tilde{v}_i^2)]$ are $K_\infty$-functions, $\pi e^2(k)$ is a $K_\infty$-function, and $\eta\zeta^2(k)$ is a $K$-function. From (20) and (34) we know that $\Delta L_k$ is a function of $e(k)$ and $\zeta(k)$, so $L_k$ admits a smooth ISS-Lyapunov function and the dynamics of the identification error are input-to-state stable. Because the "input" $\zeta(k)$ is bounded and the dynamics are ISS, the "state" $e(k)$ is bounded. (35) can be rewritten as
$$ \Delta L_k \leq -\pi e^2(k) + \eta\,\zeta^2(k) \leq -\pi e^2(k) + \eta\,\bar{\zeta} \qquad (36) $$
Summing (36) from 1 up to $T$, and using $L_T > 0$ and the fact that $L_1$ is a constant, we obtain
$$ L_T - L_1 \leq -\pi \sum_{k=1}^{T} e^2(k) + T\eta\,\bar{\zeta}, \qquad \pi \sum_{k=1}^{T} e^2(k) \leq L_1 - L_T + T\eta\,\bar{\zeta} \leq L_1 + T\eta\,\bar{\zeta} $$
so (22) is established.

Proof (Theorem 3): We select a positive definite matrix $L_k$ as
$$ L_k = \left\|\tilde{W}_k\right\|^2 + \left\|\tilde{V}_k\right\|^2 \qquad (37) $$
From the updating law (27), we have
$$ \tilde{W}_{k+1} = \tilde{W}_k - \eta_k\, e(k)\, \sigma' V_k^T x(k), \qquad \tilde{V}_{k+1} = \tilde{V}_k - \eta_k\, e(k)\, \sigma^T $$
Since $\sigma'$ is a diagonal matrix, using (26) we have
$$ \begin{aligned} \Delta L_k &= \left\|\tilde{W}_k - \eta_k e(k) \sigma' V_k^T x(k)\right\|^2 + \left\|\tilde{V}_k - \eta_k e(k) \sigma^T\right\|^2 - \left\|\tilde{W}_k\right\|^2 - \left\|\tilde{V}_k\right\|^2 \\ &= \eta_k^2 e^2(k)\left(\left\|\sigma' V_k^T x(k)\right\|^2 + \left\|\sigma\right\|^2\right) - 2\eta_k \left\|e(k)\right\| \left\|V_k \sigma'\tilde{W}_k x(k) + \tilde{V}_k \sigma\right\| \\ &= \eta_k^2 e^2(k)\left(\left\|\sigma' V_k^T x(k)\right\|^2 + \left\|\sigma\right\|^2\right) - 2\eta_k \left\|e(k)\left[\beta e(k+1) - A e(k) - \zeta(k)\right]\right\| \end{aligned} \qquad (38) $$
Since $\eta_k \geq 0$, if $\beta\left\|e(k+1)\right\| \geq \left\|e(k)\right\|$, then
$$ \Delta L_k \leq -\eta_k e^2(k)\left[1 - \eta_k\left(\left\|\sigma' V_k^T x(k)\right\|^2 + \left\|\sigma\right\|^2\right)\right] + \eta\,\zeta^2(k) \leq -\pi e^2(k) + \eta\,\zeta^2(k) $$
where $\pi$ is defined in (28). Because
$$ n\left[\min\left(\tilde{w}_i^2\right) + \min\left(\tilde{v}_i^2\right)\right] \leq L_k \leq n\left[\max\left(\tilde{w}_i^2\right) + \max\left(\tilde{v}_i^2\right)\right] $$
where $n[\min(\tilde{w}_i^2) + \min(\tilde{v}_i^2)]$ and $n[\max(\tilde{w}_i^2) + \max(\tilde{v}_i^2)]$ are $K_\infty$-functions, $\pi e^2(k)$ is a $K_\infty$-function, and $\eta\zeta^2(k)$ is a $K$-function. From (26) and (37) we know that $\Delta L_k$ is a function of $e(k)$ and $\zeta(k)$, so $L_k$ admits a smooth ISS-Lyapunov function; the dynamics of the identification error are input-to-state stable. Because the "input" $\zeta(k)$ is bounded and the dynamics are ISS, the "state" $e(k)$ is bounded. If $\beta\left\|e(k+1)\right\| < \left\|e(k)\right\|$, then $\eta_k = 0$ and $\Delta L_k = 0$, so $L_k$ is constant; since $W_k$ and $V_k$ remain constant, $e(k)$ is bounded. (38) can be rewritten as
$$ \Delta L_k \leq -\pi e^2(k) + \eta\,\zeta^2(k) \leq -\pi e^2(k) + \eta\,\bar{\zeta} \qquad (39) $$
Summing (39) from 1 up to $T$, and using $L_T > 0$ and the fact that $L_1$ is a constant, we obtain
$$ L_T - L_1 \leq -\pi \sum_{k=1}^{T} e^2(k) + T\eta\,\bar{\zeta}, \qquad \pi \sum_{k=1}^{T} e^2(k) \leq L_1 - L_T + T\eta\,\bar{\zeta} \leq L_1 + T\eta\,\bar{\zeta} $$
so (28) is established.