Multidimensional Minimization Training Algorithms for Steam Boiler Drum Level Trip Using an Artificial Intelligent Monitoring System

Firas Basim Ismail Alnaimi (1), Hussain H. Al-Kayiem (2)
Mechanical Engineering Department, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak, Malaysia
Emails: 1: [email protected], 2: [email protected]

Abstract— This paper deals with the fault detection and diagnosis of a steam boiler using a developed artificial neural network model. The water low level trip of the steam boiler is artificially monitored and analyzed in this study using two different interpretation algorithms. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton and Levenberg-Marquardt algorithms are adopted as training algorithms of the developed neural network model. Real site data were captured from a coal-fired thermal power plant in Perak state, Malaysia. Among the three power units in the plant, the boiler drum data of unit 3 were considered. The selection of the relevant variables to train and validate the neural networks is based on merging the theoretical basis with the operators' experience, and the procedure is described in the paper. Results are obtained from one-hidden-layer and two-hidden-layer neural network structures for both adopted algorithms. Detailed comparisons have been made based on the Root Mean Square Error. The results demonstrate that one hidden layer with one neuron, trained with the BFGS algorithm, provides the optimum NN structure.

Keywords— Steam Boiler, Drum Level Trip, Fault Detection and Diagnosis (FDD), Artificial Neural Networks (ANN).

INTRODUCTION

Computational intelligence systems nowadays treat fault detection and diagnosis as a very important feature in a number of processes and applications. Such a system consists of many interdependent working modules, and its normal behavior depends on the early detection and diagnosis of any possible trips. Fault detection and diagnosis techniques can be divided into two major categories: estimation methods and pattern recognition methods. Estimation methods assume the existence of a mathematical model that describes the process satisfactorily. In practice, a mathematical model cannot describe exactly what is going on in a complex process, so fault detection can be quite difficult: some errors in the model may be interpreted as faults, or some faults may go undetected when they occur. The estimation methods are divided into two categories according to what is estimated: state variable estimation or process parameter estimation. Pattern recognition methods for fault detection and diagnosis do not need a mathematical model of the

process. The operation is classified from measurement data: a category is assigned to a pattern on the basis of certain features in that pattern. There are two different techniques of pattern recognition: template matching, and feature extraction and classification [1]. Kalogirou [2] reported a brief presentation of ANN applications in different energy systems. Mellit [3] described how neural networks can address a number of problems in photovoltaic system applications. Kesgin and Heperkan [4] used soft computing techniques for the simulation of thermodynamic systems; such techniques are particularly useful for implementing complex mappings and system identification in system modeling. Many studies on the capability of ANNs to replicate an established correspondence between points of an input range, in order to interpret the behavior of phenomena involved in energy conversion plants, have been carried out by Boccaletti et al. [5], Mathioudakis et al. [6], and Fantozzi and Desideri [7]. The steam boiler is undoubtedly an important process of a thermal power plant (TPP). Physical modeling of a steam boiler is difficult and complicated [8]. However, ANN models can be developed, by defining the objectives and training with on-line real data from an on-service plant, with less effort but with great utility. Irwin et al. [9], Lu and Hogg [10], and Chu et al. [11] reported the modeling of conventional coal-fired boilers using ANNs trained with real plant data. In this paper, a Fault Detection and Diagnosis Neural Network (FDDNN) model of the steam boiler drum is developed with real-life data. The objective of the FDDNN model is to detect and diagnose the steam boiler drum level trip using two multidimensional minimization training algorithms. The required neural network input parameters were initially selected on the basis of plant operator experience.
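As an illustration of the estimation (model-based) category described above, a minimal residual-threshold check is sketched below. Python is used here purely for illustration (the paper's code was written in MATLAB), and the signal values and threshold are hypothetical, not plant data:

```python
import numpy as np

def detect_fault(measured, predicted, threshold):
    """Flag a fault when the residual between the measurement and the
    model prediction exceeds a fixed threshold (estimation-style FDD)."""
    residual = np.abs(np.asarray(measured) - np.asarray(predicted))
    return residual > threshold

# Hypothetical drum-level readings (mm): the model tracks the measurement
# until the last two points drift away, which the residual test flags.
predicted = np.array([0.0, 1.0, 0.5, 0.8, 1.2])
measured = np.array([0.2, 1.1, 0.4, 9.5, 10.3])
flags = detect_fault(measured, predicted, threshold=3.0)
print(flags.tolist())  # -> [False, False, False, True, True]
```

As the introduction notes, the weakness of this approach is that model error and true faults are indistinguishable in the residual, which motivates the data-driven pattern recognition approach adopted in this paper.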

DESCRIPTION OF THE STEAM BOILER UNIT

A schematic diagram of the steam boiler is shown in Fig. 1. The thermal power station (3 × 700 MW) boilers are of the sub-critical pressure, single reheat and controlled circulation type. Each boiler is fired with pulverized coal to produce steam for the continuous generation of 700 MW. The combustion circuit consists of a single furnace with direct tangential firing and balanced draught. The coal milling plant consists of 7 vertical bowl mills. Light fuel oil burners are available for boiler start-up and for coal ignition or combustion stabilization.

The boiler is designed to comply with the Malaysian environmental requirements. NOx control is achieved with a low-NOx combustion burner system including over-fire air (OFA) ports. An electrostatic precipitator (ESP) removes dust from the flue gas at the boiler outlet, and a Flue Gas Desulphurization (FGD) plant scrubs the flue gas and controls the SO2 emission level at the stack. The major auxiliary equipment consists of three boiler circulating pumps, two forced draft fans, two steam air preheaters, one soot blowing system, two electrostatic precipitators, one coal milling plant consisting of 7 vertical bowl mills, and one wet Flue Gas Desulphurisation plant.

Figure 1. Schematic diagram of the steam boiler, as seen in the monitoring and control room of the plant.

INTEGRATED REAL DATA PREPARATION FRAMEWORK FOR FDDNN MODEL

The FDDNN model data preparation for complex real data is of high importance. However, no standard real data preparation framework for an FDDNN model has so far been proposed. In view of the importance of real data preparation, the authors suggest an integrated real data preparation scheme for the FDDNN model. As shown in Fig. 2, the integrated scheme consists of three inter-phases: NN data pre-analysis, in which the real data of the steam boiler are captured, identified and sampled; NN data pre-processing, in which the data are tested, checked and processed, and in which some real data may be restricted or transformed to make them more useful for the NN model; and NN data post-analysis, in which some real data are validated and readjusted. In almost all existing studies, NN data preparation includes only the second inter-phase, data pre-processing. The proposed real data preparation framework for the FDDNN model is therefore broader than others, and this would fill the literature gap. In the following sections, the three inter-phases of the integrated real data preparation framework are described briefly.

A. Inter-Phase (1)

1) Data Requirement Analysis: First of all, a full understanding of the real data requirements of the work, in conjunction with the domain of the problem definitions and the expected objectives, is important. If the scope of the problem is outside one's field of expertise, interviewing specialists or domain experts may provide insight into the underlying process so that some potential problems may be avoided [12,13,14]. Once the real data requirements are understood, the real data are captured from the various field sources.

2) Data Acquisition: This is an important step, because its results constrain the subsequent inter-phases. Data were captured from the thermal power station control room. The real data related to the steam boiler drum consist of 93 variables and were accumulated at 1-minute intervals over a period of one month for the trip under study. Each of the 93 variables was addressed to a file consisting of 5200 data points.

B. Inter-Phase (2)

In this step, the on-line data were pre-processed: the 93 variables were reduced to 32 variables by adopting the plant operator experience. Also, the mean values of the variables provided by multiple sensors were used. The resulting set of variables is shown in Table 1.

C. Inter-Phase (3)

The real steam boiler drum data were randomized before being subdivided into three different groups for NN training: the first 60% of the acquired data were used for training, the following 15% for cross-validation, and the remaining 25% for testing. The randomization process was applied to each of the three sets.
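The randomize-then-split step described above can be sketched as follows. This is a Python illustration (the paper used MATLAB); the data here are synthetic, with the array shape matching the 5200-sample, 32-variable set described in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the acquired plant data: rows are 1-minute
# samples, columns are the 32 selected boiler variables.
data = rng.normal(size=(5200, 32))

# Randomize the sample order before splitting, as in Inter-Phase (3).
shuffled = rng.permutation(data)

n = len(shuffled)
n_train = int(0.60 * n)            # first 60% for training
n_val = int(0.15 * n)              # next 15% for cross-validation
train = shuffled[:n_train]
val = shuffled[n_train:n_train + n_val]
test = shuffled[n_train + n_val:]  # remaining 25% for testing

print(len(train), len(val), len(test))  # -> 3120 780 1300
```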

NEURAL NETWORKS TOPOLOGIES

The feed-forward NN structure selection in modeling is defined by the choice of the number of hidden layers of the network, the number of nodes in each hidden layer, and the types of activation function of the hidden and output nodes. Models were built with one-hidden-layer (1HL) and two-hidden-layer (2HL) networks; the activation functions of the hidden and output nodes were logistic (Logsig), hyperbolic tangent (Tansig) or linear summation (Purelin) functions, and the error estimator used here was the Root Mean Squared Error (RMSE). A developed method of designing the most suitable network structure using two multidimensional minimization algorithms for NN training is presented. The prediction of the RMSE was repeated for 1 to 10 assumed neurons in each hidden layer, for every layer/algorithm combination.
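The structure-selection loop — training candidate 1HL networks with 1 to 10 hidden neurons and keeping the one with minimum RMSE — can be sketched as below. This is a toy stand-in in Python (plain gradient descent on synthetic data), not the MATLAB TRAINLM/TRAINBFG runs used in the paper:

```python
import numpy as np

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def train_1hl(X, y, n_hidden, epochs=500, lr=0.05, seed=0):
    """Minimal 1HL feed-forward net (Tansig hidden, Purelin output),
    trained with plain gradient descent; returns the training RMSE."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)           # hidden activations
        out = H @ W2 + b2                  # linear output
        err = out - y
        dW2 = H.T @ err / len(X)
        db2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1 - H ** 2)   # back-propagate through tanh
        dW1 = X.T @ dH / len(X)
        db1 = dH.mean(axis=0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return rmse(y, np.tanh(X @ W1 + b1) @ W2 + b2)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                       # toy inputs
y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(200, 1))

# Repeat the RMSE prediction for 1 to 10 hidden neurons, as in the paper.
scores = {n: train_1hl(X, y, n) for n in range(1, 11)}
best = min(scores, key=scores.get)
print(best, round(scores[best], 3))
```

In the paper the same loop is run per activation-function combination and per training algorithm, and the minimum RMSE column of Tables 2-4 corresponds to `scores[best]` here.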

TABLE 1: SELECTED THERMAL POWER PLANT BOILER PARAMETERS

Sym.  Description                                                              Unit
V1    Total combined steam flow                                                t/h
V2    Feed water flow                                                          t/h
V3    Boiler drum pressure                                                     barg
V4    Superheater steam pressure                                               barg
V5    Superheater steam temperature                                            °C
V6    High temperature reheater outlet temperature                             °C
V7    High temperature superheater exchange metal temperature                  °C
V8    Intermediate temperature (A) superheater exchange metal temperature      °C
V9    High temperature superheater inlet header metal temperature              °C
V10   Final superheater outlet temperature                                     °C
V11   Superheater steam pressure transmitter (control)                         bar
V12   Feed water valve station                                                 t/h
V13   Feed water control valve position                                        %
V14   Drum level corrected (control)                                           mm
V15   Drum level compensated (from protection)                                 mm
V16   Feed water flow transmitter                                              %
V17   Boiler circulation pump 1 pressure                                       bar
V18   Boiler circulation pump 2 pressure                                       bar
V19   Low temperature superheater left wall outlet before superheater dryer    °C
V20   Low temperature superheater right wall outlet before superheater dryer   °C
V21   Low temperature superheater left wall after superheater dryer            °C
V22   Low temperature superheater right wall exchange metal temperature        °C
V23   Intermediate temperature (B) superheater exchange metal temperature      °C
V24   Intermediate temperature superheater outlet before superheater dryer     °C
V25   Intermediate temperature superheater outlet header metal temperature     °C
V26   High temperature superheater outlet header metal temperature             °C
V27   High temperature reheater outlet steam pressure                          bar
V28   Superheated steam from intermediate temperature outlet pressure          bar
V29   Superheater water injection compensated flow                             ton/hr
V30   Economizer inlet pressure                                                bar
V31   Economizer inlet temperature                                             °C
V32   Economizer outlet temperature                                            °C

Figure 2. Integrated data preparation scheme for the Neural Network data analysis.

The basic methodology used here for training was the back-propagation (Bp) training algorithm. The Bp algorithm has several modifications according to the multidimensional minimization algorithm it uses to minimize the error estimator. Two different optimization algorithms were considered: the BFGS quasi-Newton (TRAINBFG) and Levenberg-Marquardt (TRAINLM) algorithms. Their main difference is the way they approximate the inverse of the Hessian matrix. In the back-propagation algorithm [12], the vector of the network parameters is updated in each epoch with the formula:

    w(k+1) = w(k) - α(k) M(k) g(k)        (1)

where α is the learning rate, g is the gradient of the error function and M is an approximation of the inverse of the Hessian matrix; all these quantities refer to the k-th iteration (epoch). M must be positive definite in order to ensure descent. A crucial point is the choice of the learning rate α, which should minimize the error of the next iteration (epoch). In the large majority of works in the literature, a fixed learning rate is used during the training of the NN. Here, an on-line adjustable learning rate was used: the best learning rate was calculated with an approximate line search using a cubic interpolation.

A. Levenberg-Marquardt algorithm

The Levenberg-Marquardt algorithm was designed to approach second-order training speed without having to compute the Hessian matrix. When the performance function has the form of a sum of squares (as is typical in training feed-forward networks), the Hessian matrix can be approximated. In the Levenberg-Marquardt algorithm the inverse of the Hessian is approximated by the quantity:

    M = [J^T J + μI]^(-1)        (2)

where J is the Jacobian matrix [Jij], with

    Jij = ∂e(i)/∂w(j)        (3)

The Jacobian matrix can be computed through a standard back-propagation technique that is much less complex than computing the Hessian matrix. This algorithm [13] appears to be the fastest method for training moderate-sized feed-forward neural networks (up to several hundred weights). The BFGS quasi-Newton and Levenberg-Marquardt training algorithms for the one-hidden-layer (1HL) and two-hidden-layer (2HL) networks, together with the cubic interpolation line search used by these algorithms, were written in MATLAB code.
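A single Levenberg-Marquardt-style parameter update, using the approximation M = [J^T J + μI]^(-1), can be sketched as follows. This is a Python illustration on a toy linear least-squares problem, not the paper's MATLAB implementation; μ is held fixed here, whereas practical implementations adapt it between iterations:

```python
import numpy as np

def lm_step(w, jacobian, residuals, mu):
    """One Levenberg-Marquardt update:
    w_new = w - (J^T J + mu*I)^(-1) J^T e."""
    J = jacobian(w)
    e = residuals(w)
    H_approx = J.T @ J + mu * np.eye(len(w))  # approximate Hessian
    return w - np.linalg.solve(H_approx, J.T @ e)

# Toy least-squares problem: fit y = a*x + b (exact fit: a = 2, b = 1).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
residuals = lambda w: w[0] * x + w[1] - y
jacobian = lambda w: np.stack([x, np.ones_like(x)], axis=1)

w = np.zeros(2)
for _ in range(20):
    w = lm_step(w, jacobian, residuals, mu=0.1)
print(np.round(w, 3))  # -> [2. 1.]
```

In network training the residual vector e holds the output errors over the training set and each column of J holds the derivatives of those errors with respect to one weight, obtained by back-propagation.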

B. BFGS quasi-Newton algorithm

Newton's method is an alternative to conjugate gradient methods for fast optimization. Quasi-Newton methods update an approximation of the Hessian matrix at each iteration of the algorithm; the update is computed as a function of the gradient. In quasi-Newton algorithms the positive definite approximation of the inverse of the Hessian, M(k), satisfies the secant condition:

    M(k+1) y(k) = s(k)        (4)

where

    s(k) = w(k+1) - w(k)        (5)

and

    y(k) = g(k+1) - g(k)        (6)

In the algorithm used in this work the approximation was made using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method [14], [15].
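The BFGS update of the inverse-Hessian approximation, built from the difference vectors s and y defined above, can be sketched as follows. This is a Python illustration on a small quadratic test function; the crude backtracking line search is an illustrative stand-in for the cubic-interpolation search used in the paper:

```python
import numpy as np

def bfgs_inverse_update(M, s, y):
    """BFGS update of the inverse-Hessian approximation M,
    with s = w_{k+1} - w_k and y = g_{k+1} - g_k."""
    ys = y @ s
    if ys <= 1e-12:                 # skip update if the curvature condition fails
        return M
    rho = 1.0 / ys
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ M @ V.T + rho * np.outer(s, s)

# Quadratic test problem f(w) = 0.5 w^T A w - b^T w, minimum at w* = [0, 2].
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 2.0])
f = lambda w: 0.5 * w @ A @ w - b @ w
grad = lambda w: A @ w - b

w, M, g = np.zeros(2), np.eye(2), grad(np.zeros(2))
for _ in range(20):
    if np.linalg.norm(g) < 1e-10:
        break
    d = -M @ g                      # quasi-Newton descent direction
    t = 1.0                         # crude backtracking line search
    while f(w + t * d) > f(w) + 1e-4 * t * (g @ d):
        t *= 0.5
    w_new = w + t * d
    g_new = grad(w_new)
    M = bfgs_inverse_update(M, w_new - w, g_new - g)
    w, g = w_new, g_new
print(np.round(w, 3))
```

The update keeps M positive definite as long as y·s > 0, which is what guarantees that -M g remains a descent direction from one epoch to the next.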

FDDNN MODELING AND RESULTS

The development of the FDDNN model (Fig. 3) specifies the type of transfer function, the number of hidden layers, the number of neurons, the normalization range and other parameters. The input parameters were defined in the previous phase. The accuracy of the trained NN on real data has to be checked by observing the error of the output parameters (steam boiler trip) on a data set never "seen" by the NN model before; this procedure is called validation of the developed NN model. The feed-forward NN methodology was used. The neural network employed in this work has 32 input nodes, corresponding to the 32 steam boiler variables, and two outputs (one corresponding to the steam boiler drum low water level trip and the other to normal operation of the steam boiler drum).

Figure 3. FDDNN structure.

The FDDNN model was trained with real plant data obtained from the selected thermal power station steam boiler systems, for both normal and faulty situations. Approximately 10 continuous days of data for a trip formed the final training set, with a time step of 1 minute; thus the training set was based on 1050 entries for each input of the neural network. The training process included two basic parts. The first part, the preliminary training process, determined the best combination of network architecture and training algorithm; this was achieved by training several candidate network topologies (both 1HL and 2HL networks) with the two training algorithms and comparing the results. The second part, the basic training process, focused on training the best combination of architecture and algorithm. Based on the preliminary training, Tables 2, 3 and 4 show the most suitable network architecture/algorithm combinations for the low water level trip of the steam boiler drum using the 1HL and 2HL networks. The goal of the basic training process was to further train the selected NN with the best possible algorithm for this system and architecture.

Figure 4. Performance of the (1HL) structures of the two candidate training algorithms (TrainLM and TrainBFGS) for the boiler drum level trip: RMSE versus number of hidden layer nodes (1-10).

Figure 5. Performance of the (2HL) structures of the two candidate training algorithms (TrainLM and TrainBFGS) for the boiler drum level trip: RMSE versus number of hidden layer nodes (1-10).

Fig. 4 and Fig. 5 show the performance of the (1HL) and (2HL) architectures using the two candidate training algorithms for the drum level trip. The BFGS training algorithm gave the optimum NN structure using one hidden neuron, with an RMSE close to 0.1544. Many different random initial network parameters were tested in that training. Several values of the regularization penalty coefficient (λ), varying by factors of 5, were also tried, and the logistic, hyperbolic tangent and linear summation activation functions (for the hidden nodes) were tested. The results of these tests showed that the value of λ leading to the lowest root mean squared error (RMSE) was λ = 0.03, and that the networks with linear summation and logistic activation functions performed better than those with the hyperbolic tangent activation function.

CONCLUSIONS

An FDDNN model has been developed and coded in MATLAB. Real steam boiler data were acquired from a thermal power plant in Perak, Malaysia. After classification and multiple phases of treatment, the data were used to train and validate the NN code, and also in the selection procedure for the optimum NN structure. In the selection procedure, (1HL) and (2HL) cases were tested and compared; in each case, two different multidimensional minimization algorithms were adopted. It can be concluded, based on the lowest RMSE, that one hidden layer with one neuron using the BFGS training algorithm provides the optimum NN structure.

ACKNOWLEDGMENT

The authors acknowledge Universiti Teknologi PETRONAS for sponsoring the project. The assistance of the power station management in the data collection, and the experience provided for much of the decision making, is highly appreciated.

REFERENCES

[1] K. P. Ferentinos, "Neural network fault detection and diagnosis in deep-trough hydroponic systems", Ph.D. dissertation, Cornell University, 2002.
[2] S. A. Kalogirou, "Applications of artificial neural networks for energy systems", Applied Energy, Vol. 67, pp. 17-35, 2000.
[3] A. Mellit and S. A. Kalogirou, "Artificial intelligence techniques for photovoltaic applications: a review", Progress in Energy and Combustion Science, Vol. 34, pp. 574-632, 2008.
[4] U. Kesgin and H. Heperkan, "Simulation of thermodynamic systems using soft computing techniques", International Journal of Energy Research, Vol. 29, pp. 581-611, 2005.
[5] C. Boccaletti, G. Cerri and B. Seyedan, "Neural network simulator of a gas turbine with a waste heat recovery section", Transactions of the ASME: Journal of Engineering for Gas Turbines and Power, Vol. 123, pp. 371-376, 2001.
[6] K. Mathioudakis, A. Stamatis, A. Tsalavoutas and N. Aretakis, "Performance analysis of industrial gas turbines for engine condition monitoring", Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, Vol. 215, pp. 175-184, 2001.
[7] F. Fantozzi and U. Desideri, "Simulation of power plant transients with an artificial neural network model for a biomass-fuelled boiler", Proceedings of the ASME Turbo Expo, Atlanta, Georgia, USA, 2003.
[8] S. De, M. Kaiadi, M. Fast and M. Assadi, "Development of an artificial neural network model for the steam process of a coal biomass co-fired combined heat and power (CHP) plant in Sweden", Energy, Vol. 32, pp. 2099-2109, 2007.
[9] G. Irwin, M. Brown, B. Hogg and E. Swidenbank, "Neural network modelling of a 200 MW boiler system", IEE Proceedings - Control Theory and Applications, Vol. 142(6), pp. 529-536, 1999.
[10] S. Lu and B. W. Hogg, "Dynamic and non-linear modelling of power plant by physical principles and neural networks", International Journal of Electrical Power & Energy Systems, Vol. 22, pp. 67-78, 2000.
[11] J. Z. Chu, S. S. Shieh, C. I. Chien, H. P. Wan and H. H. Ko, "Constrained optimization of combustion in a simulated coal-fired boiler using artificial neural network model and information analysis", Fuel, Vol. 82, pp. 693-703, 2003.
[12] H. Demuth and M. Beale, "Neural Network Toolbox for Use with MATLAB", The MathWorks Inc., 1998.
[13] R. Fletcher, "Practical Methods of Optimization", Wiley, New York, 1987.
[14] R. Stein, "Selecting data for neural networks", AI Expert, Vol. 2, pp. 42-47, 1993.
[15] J. Suykens, J. Vandewalle and B. De Moor, "Artificial Neural Networks for Modelling and Control of Non-Linear Systems", Kluwer Academic Publishers, The Netherlands, 1996.

TABLE 2: THE MOST SUITABLE STRUCTURE (1HL) FOR BOILER DRUM LOW LEVEL TRIP USING (TRAINLM)

NNHL1    1       2       3       4       5       6       7       8       9       10      Mini.
L+L    0.321   0.635   0.562   0.356   0.757   0.686   0.573   0.887   0.743   0.648   0.321
L+T    0.268   0.368   0.785   0.531   0.704   0.930   0.713   0.931   0.900   0.932   0.268
L+P    0.272   0.616   0.534   1.161   2.917   0.818   1.992   53.829  0.837   11.734  0.272
T+T    0.272   0.616   0.534   1.161   2.917   0.818   1.992   53.829  0.837   11.734  0.272
T+L    0.272   0.361   0.537   0.725   0.858   0.884   0.822   0.860   0.790   0.680   0.272
T+P    0.289   0.742   0.732   0.607   0.750   1.033   1.778   3.019   3.111   2.859   0.289
P+P    0.268   0.269   0.273   0.269   0.269   0.269   0.269   0.270   0.269   0.269   0.268
P+L    0.221   0.221   0.221   0.221   0.253   0.221   0.361   0.221   0.221   0.221   0.221
P+T    0.296   0.296   0.298   0.296   0.296   0.296   0.296   0.296   0.299   0.296   0.296

TABLE 3: THE MOST SUITABLE STRUCTURE (1HL) FOR BOILER DRUM LOW LEVEL TRIP USING (TRAINBFG)

NNHL1    1       2       3       4       5       6       7       8       9       10      Mini.
L+L    0.184   0.684   0.406   0.610   0.505   0.361   0.573   0.612   0.578   0.449   0.184
L+T    0.368   0.174   0.476   0.693   0.700   0.741   0.572   0.774   0.562   0.574   0.174
L+P    0.468   0.490   0.458   0.616   0.435   0.504   0.468   0.580   0.727   0.412   0.412
T+T    0.492   0.478   0.781   0.249   0.488   0.619   0.561   0.659   0.611   0.640   0.249
T+L    0.368   0.491   0.455   0.775   0.587   0.562   0.459   0.406   0.778   0.515   0.368
T+P    0.154   0.486   0.614   0.674   0.520   0.249   1.501   0.523   0.429   1.010   0.154
P+P    0.486   0.352   0.285   0.372   0.340   0.312   0.304   0.346   0.337   0.318   0.285
P+L    0.361   0.361   0.361   0.232   0.361   0.361   0.259   0.361   0.245   0.224   0.224
P+T    0.395   0.261   0.322   0.358   0.932   0.437   0.932   0.932   0.344   0.429   0.261

TABLE 4: THE MOST SUITABLE STRUCTURE (2HL) FOR BOILER DRUM LOW LEVEL TRIP USING (TRAINBFG), ACTIVATION COMBINATION P+T+L (RMSE VALUES)

NNHL1 \ NNHL2    1       2       3       4       5       6       7       8       9       10      Mini.
1              0.244   0.570   0.368   0.368   0.529   0.808   0.368   0.368   0.368   0.368   0.244
2              0.368   0.361   0.118   0.163   0.449   0.249   0.737   0.533   0.368   0.510   0.118
3              0.368   0.368   0.361   0.422   0.361   0.564   0.789   0.544   0.454   0.579   0.361
4              0.267   0.522   0.692   0.932   0.644   0.626   0.694   0.512   0.395   0.932   0.267
5              0.368   0.251   0.625   0.368   0.819   0.493   0.457   0.534   0.237   0.815   0.237
6              0.577   0.361   0.463   0.368   0.425   0.721   0.664   0.376   0.685   0.722   0.361
7              0.368   0.906   0.236   0.562   0.566   0.631   0.611   0.515   0.365   0.490   0.236
8              0.291   0.472   0.346   0.473   0.516   0.162   0.725   0.624   0.275   0.581   0.162
9              0.368   0.270   0.296   0.605   0.374   0.430   0.789   0.742   0.388   0.907   0.270
10             0.368   0.623   0.571   0.368   0.374   0.410   0.606   0.598   0.463   0.573   0.368

* RMSE: Root Mean Square Error
* NNHL1: number of neurons in the first hidden layer
* NNHL2: number of neurons in the second hidden layer
* L: Logistic, P: Purelin, T: Tangent
* TrainBFG: BFGS quasi-Newton back-propagation training algorithm
* Trainrp: resilient back-propagation training algorithm
