
A computational intelligence scheme for prediction of equilibrium water dew point of natural gas in TEG dehydration systems


Mohammad Ali Ahmadi a, Reza Soleimani b, Alireza Bahadori c,*

a Department of Petroleum Engineering, Ahwaz Faculty of Petroleum Engineering, Petroleum University of Technology (PUT), Iran
b Department of Gas Engineering, Ahwaz Faculty of Petroleum Engineering, Petroleum University of Technology (PUT), Ahwaz, Iran
c Southern Cross University, School of Environment, Science and Engineering, Lismore, NSW, Australia
* Corresponding author. Tel.: +61 2 6626 9412.

Highlights

• Particle swarm optimization (PSO) is used to estimate the water dew point of natural gas in equilibrium with TEG.
• The model has been developed and tested using 70 series of the data.
• The back-propagation (BP) algorithm is used to estimate the water dew point of natural gas in equilibrium with TEG.
• PSO-ANN accomplishes more reliable outputs compared with BP-ANN in terms of statistical criteria.

Article info

Article history: Received 4 November 2013; Received in revised form 24 July 2014; Accepted 24 July 2014; Available online xxxx

Keywords: Gas dehydration; Triethylene glycol; Equilibrium water dew point; Prediction; Particle swarm optimization; Artificial neural network

Abstract

Raw natural gases are frequently saturated with water during production operations. It is crucial to remove water from natural gas by a dehydration process, both to eliminate safety concerns and for economic reasons. Triethylene glycol (TEG) dehydration units are the most common type of natural gas dehydration. Assessing a TEG system begins with ascertaining the minimum TEG concentration needed to meet the water content and dew point specifications of the pipeline system. A flexible and reliable method for modeling such a process is therefore essential from a gas engineering viewpoint, and the current contribution is an attempt in this respect. Artificial neural networks (ANNs) trained with particle swarm optimization (PSO) and the back-propagation (BP) algorithm were employed to estimate the equilibrium water dew point of a natural gas stream in equilibrium with a TEG solution at different TEG concentrations and temperatures. PSO and BP were used to optimize the weights and biases of the networks. The models were built upon a literature database covering VLE data for the TEG–water system for contactor temperatures between 10 °C and 80 °C and TEG concentrations ranging from 90.00 to 99.999 wt%. Results showed that PSO-ANN produces more reliable outputs than BP-ANN in terms of statistical criteria.

© 2014 Published by Elsevier Ltd.


1. Introduction


All natural gas streams contain significant amounts of water vapor as they exit oil and gas reservoirs. Water vapor in natural gas can cause several operational problems during the processing and transmission of natural gas, such as line plugging due to the formation of gas hydrates, reduction of line capacity due to the formation of free water (liquid), corrosion, and a decrease in the heating value of the gas.


Various techniques can be employed to dehydrate natural gas. Among these, glycol absorption processes, in which glycol acts as a liquid desiccant (absorption liquid), are the most common dehydration processes used in the gas industry, since they best match the criteria for commercial application. In a typical TEG system, shown in Fig. 1, water-free TEG (lean or dry TEG) enters at the top of the TEG contactor, where it flows countercurrent to the wet natural gas stream moving up the tower. Elimination of water from natural gas via TEG is based on physical absorption. In a TEG system, specifying the minimum concentration of TEG needed to meet the water dew point of the exit gas has always been operationally important.


Nomenclature

Acronyms
ANN artificial neural network
TEG triethylene glycol
VLE vapor–liquid equilibrium
BP back-propagation
MEG monoethylene glycol
FFNN feed-forward neural network
GA genetic algorithm
ICA imperialist competitive algorithm
MSE mean square error
PA pruning algorithm
DEG diethylene glycol
TREG tetraethylene glycol
PSO particle swarm optimization
HGAPSO hybrid genetic algorithm and particle swarm optimization
R2 correlation coefficient
MLP multilayer perceptron
TST Twu–Sim–Tassone
SPSO stochastic particle swarm optimization
UPSO unified particle swarm optimization

Symbols
bH bias associated with hidden neurons
bO bias associated with the output neuron
c1, c2 trust parameters
wt% weight percent
°C degrees centigrade
kPa kilopascals
psia pounds per square inch absolute
K number of input training data
A input signal (vector)
W vector of weights and biases
grad gradient of the performance function
r1, r2 random numbers
SH hidden neuron's net input signal
Td equilibrium water dew point temperature
T contactor temperature
vi velocity of the ith particle
wH weight between input and hidden layer
xi position of the ith particle
xg gbest value
xi,p pbest value of particle i
Ypre predicted output
Yexp actual output
OH output of the hidden neuron

Greek symbols
φ activation function
ω inertia weight
α learning rate

Subscripts
i particle i
j input j
k kth iteration (in Eq. (7))
m number of neurons in the input layer
z zth experimental data

Superscripts
n iteration number
max maximum
min minimum
pre predicted
exp experimental

Indeed, the single change that can be made in a TEG system that produces the largest effect on dew point depression is the degree of TEG concentration (purity). To exploit this, a liquid–vapor equilibrium relation/model for the water–TEG system is needed. Several equilibrium correlations [1–7] for estimating the equilibrium water dew point of natural gas in a TEG dehydration system can be found in the literature. Generally, the correlations presented by Worley [4], Rosman [5] and Parrish et al. [1] work satisfactorily and are suitable for most TEG system designs. However, according to the literature [8], previously published correlations are unable to estimate precisely the equilibrium water concentration in the vapor phase above TEG solutions. Parrish et al. [1] and Won [7] generated correlations in which the equilibrium concentration of water in the vapor phase was determined at 100% TEG (infinite dilution). Moreover, the other approaches rely on extrapolating data at lower concentrations to predict equilibrium in the infinite-dilution region [8]. The effect of pressure on the TEG–water equilibrium is small up to about 13,800 kPa (2000 psia) [1]. Recently, Bahadori and Vuthaluru [9] proposed a simple correlation for the prompt prediction of the equilibrium water dew point of a natural gas stream in equilibrium with a TEG solution in terms of TEG concentration and contactor temperature. In addition, Twu et al. [10] employed the Twu–Sim–Tassone (TST) equation of state (EOS) [11] to describe the phase behavior of the water–TEG system. Furthermore, they presented an approach for employing the TST EOS to determine the water content and water dew point of natural gas systems.

Although these methods (i.e., the TST equation of state and the simple correlation) have good predictive capability, their application is typically limited to the systems for which they were adapted. In fact, the aforementioned schemes require tunable parameters that must be adjusted against experimental data points; without experimental data and adjusted parameters, these models are not reliable. In such circumstances, it is preferable to develop and employ general models capable of predicting the phase behavior of such systems. Among the various predictive methods, the artificial neural network (ANN) is a competent method that enjoys great flexibility and is capable of capturing multiple mechanisms of action [12]. ANNs are computational schemes, implemented in either hardware or software, that imitate the computational abilities of the human brain by using a number of interconnected artificial neurons. The uniqueness of an ANN lies in its ability to acquire and create interrelationships between dependent and independent variables without any prior knowledge or assumptions about the form of the relationship [13]. In the last two decades, ANNs have become one of the most successful and widely applied techniques in many fields, including chemistry, biology, materials science and engineering. In particular, ANNs have a successful track record in modeling vapor–liquid equilibrium (VLE) [14–24].


Fig. 1. Basic TEG dehydration unit.


Artificial-intelligence-based approaches to various complicated engineering problems have attracted noticeable attention in recent years: back-propagation (BP) feed-forward neural networks [25], coupled genetic algorithms (GA) and fuzzy logic [26], particle swarm optimization (PSO) [27–29], hybrids of PSO and GA (HGAPSO) [30,31], unified particle swarm optimization (UPSO) [32], fuzzy decision trees (FDT) [33,34], the imperialist competitive algorithm (ICA) [35–37], least-squares support vector machines (LS-SVM) [38–40], and pruning algorithms (PA) [41] have all been applied to determine network structures and the parameters involved. In this study, PSO is employed to find the optimum values of the interconnection weights of a feed-forward neural network in order to predict the equilibrium water dew point temperature of a natural gas stream in equilibrium with a TEG solution at different TEG concentrations and contactor temperatures. The modeling results confirm the integrity of the suggested hybrid model and show its ability to estimate the water dew point with adequate precision in comparison with the real recorded data published in the previous literature (see Appendix A) [1,6].

2. Artificial neural network

Artificial neural networks (ANNs), usually referred to simply as neural networks (NNs), are an attempt at mimicking the information-processing capabilities of biological nervous systems. The pioneering picture of neural networks came into being in the 1940s with McCulloch and Pitts [42], who illustrated that networks of artificial neurons could, in principle, compute any arithmetic or logical function. The fundamental processing element of an NN is the neuron (node), in which simple computations are carried out on a vector of input values. A neuron executes a nonlinear transformation of the weighted sum of its incoming inputs to yield its output (see Fig. 2). One of the most conventional ANN approaches is the multilayer perceptron (MLP), which belongs to a common category of configurations named "feed-forward NNs", a simple class of NNs capable of approximating general types of functions, including integrable and continuous functions [43]. In a feed-forward NN, signals travel from the input layer, via the hidden layers, to the output layer. In the MLP configuration, the neurons are assembled into layers. The first and last layers are named the input and output layers, respectively, because they present the inputs and outputs of the overall network; the remaining layers are named hidden layers.

In an NN, each neuron, except those located in the input layer, receives and processes inputs from other neurons. The processed information is available at the output end of the neuron. Fig. 2 demonstrates how a hidden-layer neuron in an MLP handles this information. Here, each input to the 3rd hidden neuron in a 3-layer feed-forward neural network is denoted by a1, a2, a3, ..., am; collectively they are referred to as the input vector. Every input is multiplied by a relevant weight wH3,1, wH3,2, ..., wH3,m; the weights represent the synaptic links of biological networks and act to attenuate or amplify the input signals to the neuron. In fact, the weight factors are adjustable parameters of the network that specify the strength of the input signals. The weighted inputs are applied to the summation block, labeled Σ. The neuron also has a bias, bH3, which is combined with the weighted inputs to form the net input. A bias is a weight that does not join the input and output of two neurons; instead, it multiplies a unit signal fed to the neuron and thus contributes a component of the neuron's output that is independent of the input signals. The algebraic formulation for the net input can be expressed as:

S^H_3 = \mathrm{NET} = \sum_{j=1}^{m} w^H_{3,j}\, a_j + b^H_3 \qquad (1)

The neuron then applies a mapping or activation function to NET to generate an output O^H_3:

O^H_3 = \varphi(\mathrm{NET}) = \varphi\!\left(\sum_{j=1}^{m} w^H_{3,j}\, a_j + b^H_3\right) \qquad (2)

where \varphi stands for the neuron transfer function or activation function. Three of the most commonly used activation functions are shown below.

Log-sigmoid function (logsig): \varphi(s) = \frac{1}{1 + e^{-s}} \qquad (3)

Hyperbolic tangent function (tansig): \varphi(s) = \frac{e^{s} - e^{-s}}{e^{s} + e^{-s}} \qquad (4)

Linear function (purelin): \varphi(s) = s \qquad (5)

Fig. 2. Schematic of an artificial neuron within the hidden layer in a 3-layer feed-forward neural network.
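As a minimal illustration of Eqs. (1)-(5), the short Python sketch below evaluates one hidden neuron for each activation function; the variable names and numbers are illustrative only, not the authors' code.

```python
import numpy as np

def logsig(s):
    # Eq. (3): log-sigmoid activation
    return 1.0 / (1.0 + np.exp(-s))

def tansig(s):
    # Eq. (4): hyperbolic tangent activation
    return np.tanh(s)

def purelin(s):
    # Eq. (5): linear activation
    return s

def neuron_output(a, w, b, activation=logsig):
    # Eqs. (1)-(2): weighted sum of inputs plus bias, then the activation
    net = np.dot(w, a) + b
    return activation(net)

# Example with m = 3 inputs to one hidden neuron
a = np.array([0.2, -0.5, 1.0])   # input vector a_1..a_m
w = np.array([0.4, 0.1, -0.3])   # weights w_{3,1}..w_{3,m}
b = 0.05                         # bias b_3^H
for f in (logsig, tansig, purelin):
    print(f.__name__, neuron_output(a, w, b, f))
```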

It is worth mentioning that w and b are both adaptable variables of the neuron. The principal concept of an NN is that these variables can be modified so that the network exhibits some desired behavior; the weights and thresholds are updated during the training process. Therefore, the network can be trained to do a specific job by adjusting its bias and weight factors. There are numerous categories of approaches for training NNs. The back-propagation (BP) approach is one of the most conventional training methods for MLP-FFNNs. ANN training by means of BP, which is one of the gradient descent algorithms, is an iterative optimization approach in which the chosen objective function is minimized by properly updating the interconnection weights. The mean squared error (MSE) is a frequently employed objective function, formulated as:

\mathrm{MSE} = \frac{1}{K} \sum_{l=1}^{K} \left(Y^{\mathrm{exp}}_l - Y^{\mathrm{pre}}_l\right)^2 \qquad (6)

where K denotes the number of training samples, and Y^exp_l and Y^pre_l are the recorded values and the estimated data, respectively. The straightforward implementation of BP learning iteratively adjusts the network biases and interconnection weights along the direction in which the objective function declines most quickly (as shown in the following equation, the gradient enters with a negative sign). One iteration of this strategy can be written as:

W_{k+1} = W_k - a_k\, \mathrm{grad}_k \qquad (7)

in which W_k stands for the vector of current biases and weights, grad_k represents the current gradient of the performance function, and the parameter a_k denotes the learning rate. It is worth mentioning that this training algorithm requires the activation functions \varphi to be differentiable, since the weight-update rule is based on the gradient of the performance function, which is described in terms of the activation functions and weights. Interested readers are referred to the literature [44–48] for more detailed technical descriptions of the BP training approach. Fig. 3 presents the flowchart of training an MLP feed-forward neural network with the BP algorithm. In this study, the ANN paradigm trained with BP applied the Levenberg–Marquardt algorithm.

Fig. 3. The flowchart of ANN trained with back-propagation algorithm [57].
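The update of Eqs. (6) and (7) can be sketched as below. Note that the paper itself uses the Levenberg–Marquardt algorithm, so this is only the plain gradient-descent idea, with a finite-difference gradient standing in for the analytically back-propagated one; all names here are illustrative.

```python
import numpy as np

def mse(W, predict, X, Y):
    # Eq. (6): mean squared error over the K training samples
    return np.mean((Y - predict(W, X)) ** 2)

def numerical_grad(f, W, eps=1e-6):
    # Finite-difference estimate of grad_k (BP computes this analytically)
    g = np.zeros_like(W)
    for i in range(W.size):
        dW = np.zeros_like(W)
        dW[i] = eps
        g[i] = (f(W + dW) - f(W - dW)) / (2.0 * eps)
    return g

def train_gradient_descent(W, predict, X, Y, lr=0.01, iters=1000):
    # Eq. (7): W_{k+1} = W_k - a_k * grad_k
    for _ in range(iters):
        W = W - lr * numerical_grad(lambda w: mse(w, predict, X, Y), W)
    return W
```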



3. Particle swarm optimization (PSO)


PSO is a stochastic population-based search approach introduced by Kennedy and Eberhart in 1995 [49]. It is modeled on the social behavior of certain kinds of animals (such as bird flocks, fish schools and insect swarms), whose collective behavior can be exploited to solve difficult problems, mostly optimization problems [50]. The algorithm can be readily implemented and is computationally inexpensive, since its CPU speed and memory requirements are low [51]. PSO conducts its search for the optima using a population (swarm) of particles, each of which represents a candidate solution to the optimization problem. In a PSO scheme, every particle is "flown" through a hyper-dimensional search space, iteratively modifying its position according to its own flight experience as well as the flight experience of the other particles, since the particles of a swarm communicate good positions to one another. A particle thus uses the best position experienced by itself and the best position found by the other particles to guide itself toward an optimal solution. The effectiveness of each particle (i.e., the "nearness" of the particle to the global optimum) is evaluated through an objective function associated with the problem being solved [50]. After these two best positions are found, every particle in the swarm is adjusted at each iteration by executing the formulas below:

v^{n+1}_i = \omega v^{n}_i + c_1 r^{n}_1 \left[x^{n}_{i,p} - x^{n}_i\right] + c_2 r^{n}_2 \left[x^{n}_g - x^{n}_i\right] \qquad (8)

x^{n+1}_i = x^{n}_i + v^{n+1}_i \qquad (9)

where n stands for the iteration number and the index of the particle is denoted by i. v^n_i represents the velocity of particle i at the nth iteration and v^{n+1}_i its velocity at iteration n + 1. The individual best position x^n_{i,p} associated with particle i is the best position the particle has visited since the first time step (pbest), and x^n_g is the best value obtained so far (i.e., at the nth iteration) by any particle in the swarm (gbest). c_1 and c_2 are the acceleration factors related to pbest and gbest, respectively, and their values are typically set to 2. r^n_1 and r^n_2 are random values uniformly distributed in the range [0, 1] [52]. x^{n+1}_i and x^n_i are the positions of particle i at iterations n + 1 and n, respectively. ω is the inertia weight, introduced by Shi and Eberhart [53], which controls the exploration and exploitation of the search space [54]. Generally, the inertia weight is calculated by a linearly declining methodology in which an initially large inertia weight is linearly reduced to a smaller value [50]:

\omega^{n} = \omega_{\max} - \left(\frac{\omega_{\max} - \omega_{\min}}{n_{\max}}\right) n \qquad (10)

where ω_max, ω_min, n and n_max are the initial inertia weight, the final inertia weight, the current iteration number and the total number of iterations (the maximum number of iterations used in PSO), respectively. Usually, ω_max and ω_min are set to 0.9 and 0.4, respectively [27,50,55].

PSO shares various common points with evolutionary approaches such as genetic algorithms (GAs). However, PSO enjoys noticeable advantages. The two main advantages of PSO over GAs are [31]:

• The memory of PSO: the knowledge of good solutions is retained by all particles, whereas in GA the previous knowledge of the problem is destroyed as soon as a new population is generated.
• GA uses filtering operations such as selection, whereas PSO does not use one; all the particles of the swarm are retained throughout the search process so that they can impart their knowledge successfully.

PSO is mainly used for the optimization of functions with continuous-valued variables, and optimizing the weights and biases of an NN was one of its first applications. The first studies on training MLP feed-forward neural networks using PSO [55,56] illustrated that PSO is a competent alternative for training neural networks. Numerous investigations have further surveyed the ability of PSO as a training approach for a number of different neural network configurations, and have also demonstrated for particular applications that neural networks trained with PSO afford more precise outputs.
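A compact sketch of the particle updates of Eqs. (8)-(10) is given below, using the swarm settings reported in Table 1 (22 particles, 200 iterations, c1 = c2 = 2, ω declining from 0.9 to 0.4); the sphere-function objective at the end is only a placeholder, not the paper's network-training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, dim, n_particles=22, n_max=200,
        c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # initial positions in [-1, 1]
    v = np.zeros((n_particles, dim))                 # initial velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for n in range(n_max):
        # Eq. (10): linearly declining inertia weight
        w = w_max - (w_max - w_min) / n_max * n
        r1 = rng.random((n_particles, 1))
        r2 = rng.random((n_particles, 1))
        # Eq. (8): velocity update; Eq. (9): position update
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Usage: minimize a simple sphere function in 5 dimensions
best = pso(lambda p: np.sum(p ** 2), dim=5)
```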


4. Implementation of ANN training using PSO algorithm


To employ PSO for training a neural network, an appropriate representation and fitness function are necessary. Since the main objective is to minimize the error, the fitness function is simply the adopted error measure (e.g., the MSE). Every particle represents a candidate solution to the optimization problem; since the interconnection weights of a neural network under training constitute such a solution, a single particle represents one complete network, and each component of a particle's position vector represents a single neural network bias or weight. With this representation, the PSO approach can be employed to find the best weights for the neural network by minimizing the fitness function [50]. In fact, the fitness function for each particle is obtained by setting the interconnection weights of the ANN as determined by the parameters of the particle and evaluating the training error of the ANN. In the same way, the fitness values of all the particles in the swarm are established. The gbest particle is defined as the particle having the lowest fitness value, and its fitness is compared with a pre-defined precision. If the pre-defined precision is fulfilled, the training process is stopped; otherwise, the new positions and velocities of the particles are computed again according to Eqs. (8) and (9), and the same procedure is repeated until the pre-defined precision is achieved [57]. The flowchart of PSO-ANN is shown in Fig. 4. It should be mentioned that each weight in the constructed NN is initially set in the range [−1, 1], and each initial particle is a set of weights generated randomly in the range [−1, 1].


Fig. 4. The flowchart of ANN optimized with PSO algorithm [57].
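The flowchart of Fig. 4 can be realized by combining the pso routine sketched in Section 3 with a small 2:7:1 network (logsig hidden layer, purelin output, as in Table 1). The helper names below are hypothetical, and the two sample points are illustrative values taken from Table A1, not the full training set.

```python
import numpy as np

# 2:7:1 architecture as in Table 1: 2 inputs, 7 hidden neurons, 1 output
N_IN, N_HID, N_OUT = 2, 7, 1
N_W = N_HID * N_IN + N_HID + N_OUT * N_HID + N_OUT  # weights plus biases

def unpack(particle):
    # One particle is one complete network: slice the flat position vector
    i = N_HID * N_IN
    W1 = particle[:i].reshape(N_HID, N_IN)
    b1 = particle[i:i + N_HID]; i += N_HID
    W2 = particle[i:i + N_OUT * N_HID].reshape(N_OUT, N_HID); i += N_OUT * N_HID
    b2 = particle[i:i + N_OUT]
    return W1, b1, W2, b2

def forward(particle, X):
    # logsig hidden layer and purelin output layer (Table 1)
    W1, b1, W2, b2 = unpack(particle)
    H = 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))
    return H @ W2.T + b2

def fitness(particle, X, Y):
    # Eq. (6): training MSE serves as the PSO objective
    return np.mean((forward(particle, X).ravel() - Y) ** 2)

# Two illustrative samples: (contactor T in degC, TEG wt%) -> dew point (degC)
X = np.array([[30.0, 99.0], [40.0, 99.9]])
Y = np.array([-15.0, -34.0])
best_net = pso(lambda p: fitness(p, X, Y), dim=N_W)
```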


Fig. 5. Variation of (a) R2 and (b) MSE with the number of hidden neurons.

Table 1. Details of the trained ANN with PSO for the estimation of the water dew point of a natural gas stream in equilibrium with a TEG solution.

Input layer: 2
Hidden layer: 7
Output layer: 1
Hidden layer activation function: logsig
Output layer activation function: purelin
Number of data used for training: 130
Number of data used for testing: 44
Number of max iterations: 200
c1 and c2 in Eq. (8): 2
Number of particles: 22


5. Results and discussion



As mentioned, ANNs were applied to construct reliable paradigms for predicting the equilibrium water dew point temperature (Td). They were supplied with the contactor temperature (T) and TEG concentration (wt%) data as input variables. The whole database was split into two parts by a random number generator: the first, used in the training process, includes 75% of the entire database and amounts to 130 data lines; the remaining 44 samples were saved for validating and testing the trained networks. It should be mentioned that the first set is the training data bank, employed for optimizing the network biases and weights, whereas the testing set provides a wholly independent assessment of network integrity.
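A minimal sketch of this random 75/25 split, assuming the database is held in plain arrays:

```python
import numpy as np

rng = np.random.default_rng(42)

def train_test_split(X, Y, train_frac=0.75):
    # Randomly assign 75% of the database to training, the rest to testing
    idx = rng.permutation(len(X))
    n_train = int(train_frac * len(X))
    tr, te = idx[:n_train], idx[n_train:]
    return X[tr], Y[tr], X[te], Y[te]
```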

Fig. 6. Actual versus predicted equilibrium water dew point using the BP-ANN model: (a) training and (b) testing.

Fig. 7. Actual versus predicted equilibrium water dew point using the PSO-ANN model: (a) training and (b) testing.

The number of hidden neurons has a critical impact on estimation integrity and precision. Many sources (for example Ref. [58]) have claimed that a feed-forward network with one hidden layer and enough neurons in that layer can fit any finite input-output mapping problem. In this respect, networks with one hidden layer and various numbers of hidden neurons were examined here.


Fig. 8. Regression plots of the BP-ANN model for: (a) the training data set (y = 0.5066x + 7.7335, R² = 0.9679) and (b) the testing data set (y = 0.4954x + 8.8406, R² = 0.9751).

Fig. 9. Regression plots of the PSO-ANN model for: (a) the training data set (y = 0.9944x + 0.1149, R² = 0.998) and (b) the testing data set (y = 0.9987x + 0.1402, R² = 0.9996).

complication of the network, though the more complex networks are effective in estimate within the restrictions of the data bank employed for their training, they travail from absence of adequate extension. Specification of the number of neurons in the hidden layer is performed on the basis of a trial and error approach. Fig. 5a shows the change of R2 versus the hidden neurons’ number throughout the hidden layer. As demonstrated in Fig. 5a, it is observable that rising the hidden neurons’ number from 1 to 7 improved the coefficient of determination; conversely, no improvement followed in an additional rise from 7 to 10. Fig. 5b shows the influence of the neurons’ number on MSE. According to Fig. 5a and b, the highest R2 is observed and the MSE get the minimum when 7 neurons employed in the hidden layer. Therefore, a three-layer network with a 2 (input units):7 (neurons in hidden layer):1 (output neuron) architecture is the most appropriate. The details of the PSO-optimized network used in this study to predict equilibrium water dew point temperature were given in Table 1. With the purpose of gauging the effectiveness of the PSO-ANN approach, a BP-ANN scheme was performed with the same data banks utilized in the PSO-ANN approach. The PSO-optimized network trained via 50 generations conformed by a BP training algorithm. For the BP training algorithm the values of momentum correction factor and learning coefficient are assigned to 0.001 and 0.7, correspondingly.


As can be seen in Figs. 6 and 7, a comparison between predicted and actual equilibrium water dew point during the testing and training steps for both hybrid PSO-ANN and common BP-ANN approaches is executed. As shown in Fig. 7, there are not major differences between the outputs of the PSO-optimized network and the references values of equilibrium water dew point. It is clear that the PSO-ANN approach depicts a higher integrity in estimation of equilibrium water dew point temperature compared with BPANN, with lower MSE for the training and test sets 43.935 and 13.472 in contrast to 551.13 and 527.098 for BP-ANN, respectively. The performance of trained networks with PSO and conventional BP can be also evaluated by conducting an analysis of regression between the models outcomes and the relevant object. The cross plots of actual equilibrium water dew point versus predicted values of training and testing data set using PSO-ANN and BP-ANN approaches are depicted in Figs. 8 and 9. It can be seen that the fitting obtained by PSO-ANN is excellent since the regression line (the best linear fit) overlaps with the diagonal (perfect fit), as a result of a slope value close to 1 and minor value of the y-intercept (see Fig. 9) [59]. The training and testing correlation coefficients (R2) of PSO-ANN were found to be greater than 0.99 while those of BP-ANN model are not as favorably as PSO-ANN model. This means that the proposed hybrid PSO-ANN

Please cite this article in press as: Ahmadi MA et al. A computational intelligence scheme for prediction of equilibrium water dew point of natural gas in TEG dehydration systems. Fuel (2014), http://dx.doi.org/10.1016/j.fuel.2014.07.072

407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429

JFUE 8331

No. of Pages 10, Model 5G

5 August 2014 8

M.A. Ahmadi et al. / Fuel xxx (2014) xxx–xxx


where K denotes the number of training or testing samples, and Y^exp_l, Y^pre_l and \bar{Y}^exp are the experimental response, the predicted response and the mean of the experimental responses, respectively. Fig. 10 shows the performance plots for the training, validation and test data subsets of the best models introduced for predicting the equilibrium water dew point; the performance plot traces the value of the performance function (MSE) against the number of epochs. As can be seen, the validation and test data sets show similar trends; therefore, PSO-ANN can predict an unseen data set as well as the data set used for its validation [60]. Fig. 11 plots the actual data and the percentage error between the actual and estimated equilibrium water dew point temperatures during the testing and training steps for both the PSO-ANN and BP-ANN approaches. As shown in Fig. 11a, poor results are obtained with the BP-ANN model; by contrast, the agreement between the actual equilibrium water dew point values and those predicted by PSO-ANN is acceptable. Considering the performance of PSO-ANN globally, the effectiveness of the model is obvious, since the vast majority of the training and testing data fall in the region bounded by a relative deviation of less than 20%; in fact, only for six data points was the deviation between the experimental and estimated equilibrium water dew point temperature found to be ≥10% across the training and testing development. According to Fig. 11b, for the training data the relative deviations lie in the span −18.96% to 16.33%, the magnitude of the minimum relative deviation is 0.0334%, and the average magnitude of deviation is 2.909%; for the testing data bank the relative deviations lie in the span −14.92% to 7.545%, the magnitude of the minimum relative deviation is 0.0099%, and the average absolute deviation is 1.676%.

Fig. 10. Performance plot of: (a) PSO-ANN model and (b) ANN model.

Fig. 11. Percent deviation between the actual and predicted values against the actual data during the training and testing process: (a) BP-ANN model and (b) PSO-ANN model.
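The statistical criteria used above (Eqs. (6) and (11), plus the percent deviation of Fig. 11) can be computed with a small helper such as the following, assuming plain arrays of targets and predictions:

```python
import numpy as np

def mse(y_exp, y_pre):
    # Eq. (6)
    y_exp, y_pre = np.asarray(y_exp), np.asarray(y_pre)
    return np.mean((y_exp - y_pre) ** 2)

def r_squared(y_exp, y_pre):
    # Eq. (11): 1 - SS_res / SS_tot
    y_exp, y_pre = np.asarray(y_exp), np.asarray(y_pre)
    ss_res = np.sum((y_exp - y_pre) ** 2)
    ss_tot = np.sum((y_exp - y_exp.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def percent_deviation(y_exp, y_pre):
    # Relative deviation (%) as plotted in Fig. 11
    y_exp, y_pre = np.asarray(y_exp), np.asarray(y_pre)
    return 100.0 * (y_pre - y_exp) / y_exp
```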


6. Conclusions



1. Based on a literature database, the feasibility of using an ANN scheme trained with an evolutionary algorithm, viz. PSO, to predict the equilibrium water dew point versus contactor temperature at different concentrations of TEG was examined. The proposed PSO-ANN approach produced high reliability, with an MSE and R2 of 13.472 and 0.998, respectively.
2. The use of PSO increased the global searching capability for choosing appropriate initial weights of the ANN.
3. To specify the optimal structure of the PSO-ANN approach, various three-layer feed-forward networks with different numbers of hidden neurons were tested. The tuning parameters of the proposed hybrid model, including the acceleration constants c1 and c2, the maximum number of iterations, the number of particles, and the time interval, were selected carefully.
4. According to the graphical representations together with the statistical error analysis, the optimum PSO-ANN scheme performs much more accurately than the common back-propagation NN approach for equilibrium water dew point prediction, because, unlike the PSO algorithm, back-propagation algorithms risk becoming trapped in, or oscillating around, a local minimum.



Appendix A


This appendix provides some of the data used in this study. Table A1 reports the contactor temperature, the concentration of TEG and the corresponding equilibrium water dew point temperature.



Table A1. Data used in this study [1,6]. For each TEG purity, the contactor temperatures T (°C) and the corresponding equilibrium water dew point temperatures Td (°C) are listed in order. (Minus signs, lost in the source extraction, have been restored where the monotonic trend of Td with T makes them unambiguous.)

TEG purity 90 wt%: T = 10, 15, 20, 25, 30, 35, 37; Td = −6, −1, 3, 8.5, 13, 18, 20
TEG purity 95 wt%: T = 10, 15, 20, 25, 30, 35, 40, 45; Td = −12, −8, −4, 1, 5, 9.5, 14, 19
TEG purity 97 wt%: T = 10, 15, 20, 25, 30, 35, 40, 45, 50, 55; Td = −18, −13.5, −10, −6, −2, 2, 6, 11.5, 15, 19.5
TEG purity 98 wt%: T = 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60; Td = −22, −18, −14.5, −11, −7, −2.5, 1.5, 6, 9.5, 13.5, 17.5
TEG purity 99 wt%: T = 10 to 70 in 5 °C steps; Td = −30, −26.5, −22.5, −19, −15, −11, −8, −4, −0.25, 3.5, 7.5, 11.5, 14.5
TEG purity 99.5 wt%: T = 10 to 70 in 5 °C steps; Td = −37.5, −34, −30, −27, −23, −19.5, −16.5, −12.5, −9, −6, −2.5, 1, 4.5
TEG purity 99.8 wt%: T = 10 to 75 in 5 °C steps; Td = −46.5, −43.5, −40, −36.5, −33.5, −30, −26.5, −24, −20.5, −17, −14, −11, −8.5, −5.5
TEG purity 99.9 wt%: T = 10 to 75 in 5 °C steps; Td = −52.5, −49.8, −47, −43.5, −40.5, −37.5, −34, −31.5, −28, −25, −22.5, −19.5, −17, −14
TEG purity 99.95 wt%: T = 10 to 75 in 5 °C steps; Td = −59, −56, −54, −50, −47.5, −44, −42, −38.5, −36, −33.5, −30, −27.5, −25, −22.5
TEG purity 99.97 wt%: T = 10 to 75 in 5 °C steps; Td = −63, −60, −57.5, −54.5, −52, −49, −47, −44.5, −41, −38.5, −36, −33, −31, −28
TEG purity 99.98 wt%: T = 10 to 75 in 5 °C steps; Td = −66.5, −63.5, −61, −58, −55, −52.5, −50, −47.5, −45, −42.5, −40, −37.5, −35, −32.5
TEG purity 99.99 wt%: T = 10 to 75 in 5 °C steps; Td = −72, −69, −66.5, −63.5, −61.5, −59, −56.5, −54, −52, −49, −47, −44, −42, −39.5
TEG purity 99.995 wt%: T = 10 to 75 in 5 °C steps; Td = −77, −74, −72, −69, −67, −64.9, −62.5, −60, −57.5, −55, −53, −51, −48, −47
TEG purity 99.997 wt%: T = 15 to 75 in 5 °C steps; Td = −78, −76, −73, −71.5, −68, −67, −64, −62, −60, −57.5, −55, −53, −51.5

References

[1] Parrish WR, Won KW, Baltatu ME. Phase behavior of the triethylene glycol–water system and dehydration/regeneration design for extremely low dew point requirements. In: 65th GPA annual convention, San Antonio, TX; 1986.
[2] Townsend FM. Vapor–liquid equilibrium data for DEG and TEG–water–natural gas system. In: Gas conditioning conference, University of Oklahoma, Norman, OK; 1953.
[3] Scauzillo FR. Equilibrium ratios of water in the water–triethylene glycol–natural gas system. J Petrol Technol 1961;13:697–702.
[4] Worley S. Super dehydration with glycols. In: Gas conditioning conference, University of Oklahoma, Norman, OK; 1967.

[5] Rosman A. Water equilibrium in the dehydration of natural gas with triethylene glycol. SPE J 1973;13:297–306.
[6] Herskowitz M, Gottlieb M. Vapor–liquid equilibrium in aqueous solutions of various glycols and polyethylene glycols. 1. Triethylene glycol. J Chem Eng Data 1984;29:173–5.
[7] Won KW. Thermodynamic basis of the glycol dew-point chart and its application to dehydration. In: 73rd GPA annual convention, New Orleans, LA; 1994. p. 108–33.
[8] Gas Processors Suppliers Association. Engineering data book, FPS version, Sections 16–26; 1998.
[9] Bahadori A, Vuthaluru HB. Rapid estimation of equilibrium water dew point of natural gas in TEG dehydration systems. J Nat Gas Sci Eng 2009;1:68–71.


[10] Twu CH, Tassone V, Sim WD, Watanasiri S. Advanced equation of state method for modeling TEG–water for glycol gas dehydration. Fluid Phase Equilib 2005;228–229:213–21.
[11] Twu CH, Sim WD, Tassone V. A versatile liquid activity model for SRK, PR and a new cubic equation-of-state TST. Fluid Phase Equilib 2002;194–197:385–99.
[12] Carrera G, Aires-de-Sousa J. Estimation of melting points of pyridinium bromide ionic liquids with decision trees and neural networks. Green Chem 2005;7:20–7.
[13] Chen H, Kim AS. Prediction of permeate flux decline in crossflow membrane filtration of colloidal suspension: a radial basis function neural network approach. Desalination 2006;192:415–28.
[14] Urata S, Takada A, Murata J, Hiaki T, Sekiya A. Prediction of vapor–liquid equilibrium for binary systems containing HFEs by using artificial neural network. Fluid Phase Equilib 2002;199:63–78.
[15] Mohanty S. Estimation of vapour liquid equilibria of binary systems, carbon dioxide–ethyl caproate, ethyl caprylate and ethyl caprate using artificial neural networks. Fluid Phase Equilib 2005;235:92–8.
[16] Bahadori A. Estimation of hydrate inhibitor loss in hydrocarbon liquid phase. Pet Sci Technol 2009;27(9):943–51.
[17] Mohanty S. Estimation of vapour liquid equilibria for the system carbon dioxide–difluoromethane using artificial neural networks. Int J Refrig 2006;29:243–9.
[18] Ghanadzadeh H, Ahmadifar H. Estimation of (vapour + liquid) equilibrium of binary systems (tert-butanol + 2-ethyl-1-hexanol) and (n-butanol + 2-ethyl-1-hexanol) using an artificial neural network. J Chem Thermodyn 2008;40:1152–6.
[19] Bahadori A, Mokhatab S, Towler BF. Rapidly estimating natural gas compressibility factor. J Nat Gas Chem 2007;16(4):349–53.
[20] Ketabchi S, Ghanadzadeh H, Ghanadzadeh A, Fallahi S, Ganji M. Estimation of VLE of binary systems (tert-butanol + 2-ethyl-1-hexanol) and (n-butanol + 2-ethyl-1-hexanol) using GMDH-type neural network. J Chem Thermodyn 2010;42:1352–5.
[21] Guimaraes PRB, McGreavy C. Flow of information through an artificial neural network. Comput Chem Eng 19(Suppl. 1):741–6.
[22] Bahadori A. New model predicts solubility in glycols. Oil Gas J 105(8):50–5.
[23] Sharma R, Singhal D, Ghosh R, Dwivedi A. Potential applications of artificial neural networks to thermodynamics: vapor–liquid equilibrium predictions. Comput Chem Eng 1999;23:385–90.
[24] Lashkarbolooki M, Vaferi B, Shariati A, Zeinolabedini Hezave A. Investigating vapor–liquid equilibria of binary mixtures containing supercritical or near-critical carbon dioxide and a cyclic compound using cascade neural network. Fluid Phase Equilib 2013;343:24–9.
[25] Bahadori A, Vuthaluru HB. A novel correlation for estimation of hydrate forming condition of natural gases. J Nat Gas Chem 2009;18(4):453–7.
[26] Potukuchi S, Wexler AS. Predicting vapor pressures using neural networks. Atmos Environ 1997;31:741–53.
[27] Soleimani R, Shoushtari NA, Mirza B, Salahi A. Experimental investigation, modeling and optimization of membrane separation using artificial neural network and multi-objective optimization using genetic algorithm. Chem Eng Res Des 2013;91:883–903.
[28] Ebadi M, Ahmadi MA, Hikoei KF, Salari Z. Evolving genetic algorithm, fuzzy logic and Kalman filter for prediction of asphaltene precipitation due to natural depletion. Int J Comput Appl 2011;35(1):12–6.
[29] Zendehboudi S, Ahmadi MA, James L, Chatzis I. Prediction of condensate-to-gas ratio for retrograde gas condensate reservoirs using artificial neural network with particle swarm optimization. Energy Fuels 2012;26:3432–47.
[30] Ahmadi MA, Shadizadeh SR. New approach for prediction of asphaltene precipitation due to natural depletion by using evolutionary algorithm concept. Fuel 2012;102:716–23.
[31] Zendehboudi S, Ahmadi MA, Bahadori A, Shafiei A, Babadagli T. A developed smart technique to predict minimum miscible pressure – EOR implications. Can J Chem Eng 2013;91:1325–37.
[32] Ali Ahmadi M, Zendehboudi S, Lohi A, Elkamel A, Chatzis I. Reservoir permeability prediction by neural networks combined with hybrid genetic algorithm and particle swarm optimization. Geophys Prospect 2013;61:582–98.
[33] Ali Ahmadi M, Golshadi M. Neural network based swarm concept for prediction asphaltene precipitation due to natural depletion. J Pet Sci Eng 2012;98–99:40–9.
[34] Ahmadi MA. Neural network based unified particle swarm optimization for prediction of asphaltene precipitation. Fluid Phase Equilib 2012;314:46–51.
Energy Fuels 2012;26:3432–47. [30] Ahmadi MA, Shadizadeh SR. New approach for prediction of asphaltene precipitation due to natural depletion by using evolutionary algorithm concept. Fuel 2012;102:716–23. [31] Zendehboudi S, Ahmadi MA, Bahadori A, Shafiei A, Babadagli T. A developed smart technique to predict minimum miscible pressure—eor implications. Can J Chem Eng 2013;91:1325–37. [32] Ali Ahmadi M, Zendehboudi S, Lohi A, Elkamel A, Chatzis I. Reservoir permeability prediction by neural networks combined with hybrid genetic algorithm and particle swarm optimization. Geophys Prospect 2013;61:582–98. [33] Ali Ahmadi M, Golshadi M. Neural network based swarm concept for prediction asphaltene precipitation due to natural depletion. J Pet Sci Eng 2012;98–99:40–9. [34] Ahmadi MA. Neural network based unified particle swarm optimization for prediction of asphaltene precipitation. Fluid Phase Equilib 2012;314:46–51.

[35] Ebadi M, Ahmadi MA, Gerami S, Askarinezhad R. Application fuzzy decision tree analysis for prediction condensate gas ratio: case study. Int J Comput Appl 2012;39(8):23–8. [36] Ebadi M, Ahmadi MA, Hikoei KF. Application of fuzzy decision tree analysis for prediction asphaltene precipitation due natural depletion; case study. Aust J Basic Appl Sci 2012;6(1):190–7. [37] Ahmadi MA, Ebadi M, Shokrollahi A, Majidi SMJ. Evolving artificial neural network and imperialist competitive algorithm for prediction oil flow rate of the reservoir. Appl Soft Comput 2013;13:1085–98. [38] Zendehboudi S, Ahmadi MA, Mohammadzadeh O, Bahadori A, Chatzis I. Thermodynamic investigation of asphaltene precipitation during primary oil production: laboratory and smart technique. Ind Eng Chem Res 2013;52:6009–31. [39] Ahmadi M. Prediction of asphaltene precipitation using artificial neural network optimized by imperialist competitive algorithm. J Petrol Explor Prod Technol 2011;1:99–106. [40] Fazeli H, Soleimani R, Ahmadi MA, Badrnezhad R, Mohammadi AH. Experimental study and modeling of ultrafiltration of refinery effluents. Energy Fuels 2013;27:3523–37. [41] Ahmadi MA, Ebadi M, Hosseini SM. Prediction breakthrough time of water coning in the fractured reservoirs by implementing low parameter support vector machine approach. Fuel 2014;117:579–89. [42] Ahmadi MA, Ebadi M. Evolving smart approach for determination dew point pressure of condensate gas reservoirs. Fuel 2014;117(Part B):1074–84. [43] Reed R. Pruning algorithms – a survey. IEEE Trans Neural Netw 1993;4:740–7. [44] McCulloch W, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys 1943;5:115–33; Bahadori A, Vuthaluru HB. Prediction of silica carry-over and solubility in steam of boilers using simple correlation. Appl Therm Eng 2010;30(2– 3):250–3. [45] Scarselli F, Chung Tsoi A. Universal approximation using feedforward neural networks: a survey of some existing methods, and some new results. Neural Networks 1998;11:15–37. [46] Hagan MT, Demuth HB, Beale M. Neural network design. PWS Publishing Co.; 1996. [47] Baughman DR, Liu YA. Neural networks in bioprocessing and chemical engineering. Academic Press; 1995. [48] Freeman JA, Skapura DM. Neural networks: algorithms, applications, and programming techniques. Addison-Wesley; 1991. [49] Haykin SS. Neural networks: a comprehensive foundation. Prentice Hall; 1999. [50] Mehra P, Wah BW. Artificial neural networks: concepts and theory. IEEE Computer Soc. Press; 1992. [51] Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, vol. 4; 1995. p. 1942–8. [52] Engelbrecht AP. Computational intelligence: an introduction. Wiley; 2007. [53] Eberhart RC, Simpson PK, Dobbins R, Dobbins RW. Computational intelligence PC tools. AP Professional; 1996. [54] Bahadori A. Determination of well placement and breakthrough time in horizontal wells for homogeneous and anisotropic reservoirs. J Petrol Sci Eng 2010;75(1–2):196–202. [55] Yuhui S, Eberhart R. A modified particle swarm optimizer. In: The 1998 IEEE international conference on evolutionary computation proceedings, 1998. IEEE world congress on computational intelligence; 1998. p. 69–73. [56] Sivanandam SN, Deepa SN. Introduction to genetic algorithms. Springer; 2007. [57] Eberhart R, Kennedy J. A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, 1995 MHS ’95; 1995. p. 39–43. 
[58] Kennedy J. The particle swarm: social adaptation of knowledge. In: IEEE international conference on evolutionary computation; 1997. p. 303–308. [59] Geethanjali M, Raja Slochanal SM, Bhavani R. PSO trained ANN-based differential protection scheme for power transformers. Neurocomputing 2008;71:904–18. [60] Bahadori A, Vuthaluru HB. Predicting emissivities of combustion gases. Chem Eng Prog 2009;105(6):38–41. [61] Bahadori A, Vuthaluru HB. Prediction of silica carry-over and solubility in steam of boilers using simple correlation. Appl Therm Eng 2010;30(2– 3):250–3. [62] Ghandehari S, Montazer-Rahmati MM, Asghari M. A comparison between semi-theoretical and empirical modeling of cross-flow microfiltration using ANN. Desalination 2011;277:348–55.
