J. Cent. South Univ. Technol. (2010) 17: 1238−1242 DOI: 10.1007/s11771−010−0625−y

Fault detection method with PCA and LDA and its application to induction motor

JUNG D Y1, LEE S M2, WANG Hong-mei(王洪梅)3, KIM J H3, LEE S H2
1. Daeho Tech Company Limited, Changwon 641-465, Korea;
2. Department of Electronics Engineering, Inha University, Incheon 402-751, Korea;
3. School of Mechatronics, Changwon National University, Changwon 641-773, Korea

© Central South University Press and Springer-Verlag Berlin Heidelberg 2010

Abstract: A feature extraction and fusion algorithm combining principal component analysis (PCA) and linear discriminant analysis (LDA) was constructed to detect fault states of an induction motor. Feature vectors were extracted with PCA and LDA from experimentally measured current signals, and reference data were used to produce matching values. In the diagnostic step, the two matching values obtained by PCA and LDA were combined by a probability model, and the faulted signal was finally diagnosed. Because the proposed algorithm retains the merits of both PCA and LDA, it performs well in noisy environments. Simulations executed under various noise conditions demonstrate the suitability of the proposed algorithm and show better performance than conventional PCA or LDA used alone.

Key words: principal component analysis (PCA); linear discriminant analysis (LDA); induction motor; fault diagnosis; fusion algorithm

1 Introduction

To reduce maintenance cost and prevent unscheduled downtime of induction motors, fault detection techniques have been studied by numerous researchers [1−7]. Faults of an induction machine are classified into bearing faults, coupling and rotor bar faults, air gap, rotor, end ring and stator faults, etc [1]. Various measurements (vibration signals, stator currents, light, sound, and heat) are required to monitor the status of the motor or to detect faults. It is also well known that the current signal is useful for detecting faults because of its low measurement cost [3−4]. The faults of an induction motor can be derived analytically or heuristically [5−7]; in both cases, features of the faulty or healthy motor are needed. Characteristic extraction for healthy and faulty induction motors has been performed by synchronizing and analyzing stator currents, and the characteristic values obtained from the stator current can be used in the frequency domain and the time domain at the same time. In the frequency domain, Fourier and wavelet transforms of the signal are effective for extracting characteristics [6]. However, neither method alone yields a complete result because of insufficient information; to obtain sufficient performance, two methods should be applied simultaneously [8]. Furthermore, a method robust to realistic circumstances, i.e., the noise-added case, is needed, and the signals should be easy to process. Among the candidate methods, principal component analysis (PCA) and linear discriminant analysis (LDA) are considered in this work.

2 Characteristic extraction methods

PCA projects high-dimensional data onto a lower-dimensional space by a linear transformation [9]. This approach seeks the projection that best represents the data in a least-squares sense; however, the components obtained by PCA carry no discrimination information between data in different classes. LDA, in contrast, seeks an orientation in which the projected samples are well separated, which is exactly the goal of discriminant analysis. Both PCA and LDA are applied here to discriminate healthy from faulty induction motors. The procedure is illustrated in Fig.1, and the ratings and specifications of the experimental motor are listed in Table 1.
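To make the contrast concrete, the following sketch (not from the paper; the synthetic data, variable names and values are illustrative assumptions) computes a PCA direction as the leading eigenvector of the total scatter matrix and a two-class Fisher/LDA direction:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic 2-D classes, stand-ins for healthy/faulty feature clouds
X1 = rng.normal([0.0, 0.0], 0.5, size=(100, 2))
X2 = rng.normal([2.0, 0.5], 0.5, size=(100, 2))
X = np.vstack([X1, X2])
m = X.mean(axis=0)

# PCA: leading eigenvector of the total scatter matrix S = sum (x-m)(x-m)^T
S = (X - m).T @ (X - m)
eigvals, eigvecs = np.linalg.eigh(S)
e = eigvecs[:, -1]                 # direction of maximum variance
alpha = (X - m) @ e                # principal components (feature values)

# Two-class Fisher/LDA direction: w = Sw^{-1} (m1 - m2)
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
w = np.linalg.solve(Sw, m1 - m2)   # direction that best separates the classes
proj = X @ w                       # well-separated 1-D class projections
```

PCA keeps the direction of maximum variance regardless of labels, while the LDA direction uses the class labels through the within-class scatter, which is why the two projections behave differently under noise.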

Foundation item: Project supported by the Second Stage of Brain Korea 21 Project; Project(2010-0020163) supported by Priority Research Centers Program through the National Research Foundation (NRF) of Korea funded by the Ministry of Education, Science and Technology Received date: 2010−06−29; Accepted date: 2010−09−22 Corresponding author: LEE S H, PhD, Professor; Tel: +82−32−8608829; E-mail: [email protected]

Fig.1 Schematic of fault diagnosis system for induction motor

Table 1 Test motor specifications
Motor specification        Value
Rated voltage/V            220
Number of slots            34
Rated speed/(r·min−1)      3 450
Number of poles            4
Rated power/kW             0.37
Number of rotor bars       24

Five faulty conditions were categorized: bearing fault, bowed rotor, broken rotor bar, static eccentricity, and dynamic eccentricity. Together with the healthy condition, six patterns in total were tested by the mixed PCA and LDA method. The following sections explain the properties of the two analyses; the mixed algorithm itself is then illustrated graphically.

2.1 Principal component analysis (PCA)
Consider n d-dimensional samples x1, x2, ···, xn represented by a single vector x0. Suppose that x0 is to be found such that the sum of the squared distances between x0 and the various xk is as small as possible [10]; then x0 is the sample mean m. Each datum xk is represented as

xk = m + αk e    (1)

where m is the sample mean vector and e is the unit vector in the direction of the projection line. The optimal set of coefficients αk is obtained by minimizing the squared-error criterion function

J1(α1, ···, αn, e) = ∑_{k=1}^{n} ||(m + αk e) − xk||²    (2)

where ||·|| denotes the 2-norm and ||e|| = 1. Setting ∂J1/∂αk = 0 gives

αk = eT(xk − m)    (3)

where αk is the basis or feature value of xk, i.e., the principal component. In order to find e, first define the scatter matrix

S = ∑_{k=1}^{n} (xk − m)(xk − m)T    (4)

Substituting Eq.(3) into Eq.(2), the following is derived:

J1(e) = ∑_{k=1}^{n} αk² − 2∑_{k=1}^{n} αk eT(xk − m) + ∑_{k=1}^{n} ||xk − m||²
      = −∑_{k=1}^{n} [eT(xk − m)]² + ∑_{k=1}^{n} ||xk − m||²
      = −∑_{k=1}^{n} eT(xk − m)(xk − m)T e + ∑_{k=1}^{n} ||xk − m||²
      = −eTSe + ∑_{k=1}^{n} ||xk − m||²

Clearly, the vector e that minimizes J1 also maximizes eTSe. The method of Lagrange multipliers is used to maximize eTSe subject to the constraint ||e|| = 1. Let λ be the undetermined multiplier and

L = eTSe − λ(eTe − 1)

Differentiating with respect to e and setting ∂L/∂e = 0, it can be seen that e must be an eigenvector of the scatter matrix:

Se = λe    (5)

where λ is the eigenvalue of S, and e is the eigenvector corresponding to λ. Since eTSe = λeTe = λ, to maximize eTSe the eigenvector corresponding to the largest eigenvalue of the scatter matrix S is selected. The principal value αk is now taken as the characteristic value used to classify patterns as healthy or faulty. From Eq.(3), the principal value αk of a known vector x is calculated, and the principal value α*k of an unknown vector is obtained in the same way.

2.2 Linear discriminant analysis (LDA)
LDA obtains the directions that are efficient for discrimination [11]. For this discrimination analysis, first the between-class scatter (BCS) matrix SB

and the within-class scatter (WCS) matrix SW are defined by

SB = ∑_{i=1}^{c} ni(mi − m)(mi − m)T    (6)

SW = ∑_{i=1}^{c} ∑_{x∈Ci} (x − mi)(x − mi)T    (7)

where c is the number of classes, mi denotes the mean of the samples in class Ci, m denotes the mean of all samples, and ni is the number of signals in class Ci. In terms of SW and SB, the criterion can be written as

J(W) = |WTSBW| / |WTSWW|    (8)

where W = [w1, w2, ···, wc−1]. The rectangular matrix W that maximizes Eq.(8) has columns that are the generalized eigenvectors corresponding to the largest eigenvalues in

SBwi = λiSWwi,  i = 1, 2, ···, c − 1    (9)

Solving this as a conventional eigenvalue problem requires an unnecessary computation of the inverse of SW. Instead, the eigenvalues are found as the roots of the characteristic polynomial det(SB − λiSW) = 0, and the eigenvectors are then solved directly from [12]

(SB − λiSW)wi = 0,  i = 1, 2, ···, c − 1    (10)

For the training data xi, the feature vector Ti is obtained as

Ti = WTαi = WTeT(xi − m)    (11)

that is, the PCA feature value αi is projected into the LDA space by the matrix W. Generally, since the number of training classes c is less than the number of data points of the signal, the WCS matrix SW becomes singular, which means that the projection matrix W has to be chosen properly [13]. Next, the distance between the training PCA feature value α and the test PCA feature value αt is computed as DPCA, and the LDA feature distance DLDA is computed likewise:

DPCA = (α − αt)T(α − αt)    (12)

DLDA = (T − Tt)T(T − Tt)    (13)

where T and Tt are the training and test LDA feature vectors, respectively (the superscript t denotes a test quantity, not a transpose). When the Euclidean distance satisfies min DLDA < Tth, where Tth is a predetermined threshold, the fault decision is made directly from the LDA distance; Tth is chosen by iterative experiments as the value at which DLDA becomes larger than DPCA as the noise rises. Otherwise (min DLDA ≥ Tth), a new distance DSUM is calculated from DPCA and DLDA:

DSUM = DPCA + DLDA    (14)

In order to obtain more reliable data, the bootstrap method is applied to DSUM, and a Gaussian distribution is obtained for each fault case. With this result, the condition (healthy, fault 1, ···, fault N) with the minimum distance is taken as the diagnosed state. Each signal has 128 data points, there are 54 training vectors (9 signals × 6 cases), and the mean m of the xi is of size 1 × 128. The sampling frequency is 3 kHz and the sampling time is 0.13 ms (=1 000/(60 × 128) ms).

Fig.3 shows the result for the noise-free case, and Fig.4 shows the result at a signal-to-noise ratio (ηSNR) of 5. As shown in Figs.3−4, it is hard to discriminate the faults when noise is present. Under the noise-free condition, the LDA result of Fig.5 is superior to the PCA result of Fig.3. When ηSNR is 5, the PCA and LDA results are illustrated in Figs.4 and 6; neither case can discriminate the faults, and hence the mixed algorithm above is applied. The fault detection result of PCA under the noise-free condition shows over 90% recognition, whereas LDA shows a perfect result under the same condition, as can be seen by comparing Fig.3 and Fig.5. However, detection is not easy in the noisy case, so it is better to apply the mixed detection algorithm under such conditions. In the experimental results, PCA, LDA, and the proposed mixed detection are compared in a result table and graphically.

Fig.2 Fusion algorithm for fault diagnosis
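The decision rule of the fusion step can be sketched as follows. This is a simplified reading of the scheme; the threshold and the distance values are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def diagnose(d_pca, d_lda, t_th):
    """Fusion rule: if the best LDA match is confident (its minimum distance
    is below the threshold t_th), trust LDA alone; otherwise fall back to the
    summed distance D_SUM = D_PCA + D_LDA and pick the minimum.
    d_pca, d_lda: distances from the test feature to each reference class."""
    d_pca = np.asarray(d_pca, dtype=float)
    d_lda = np.asarray(d_lda, dtype=float)
    if d_lda.min() < t_th:
        return int(np.argmin(d_lda))   # LDA decides directly
    d_sum = d_pca + d_lda              # fused distance
    return int(np.argmin(d_sum))

# Illustrative distances to 6 reference classes (healthy + 5 faults)
d_pca = [4.0, 1.2, 3.5, 5.0, 4.4, 3.9]
d_lda = [6.0, 2.5, 0.4, 7.0, 6.1, 5.5]
print(diagnose(d_pca, d_lda, t_th=1.0))   # LDA confident -> class 2
```

When LDA is confident the fusion adds nothing; the summed distance only takes over in the noisy regime where the LDA distances alone are unreliable.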


Fig.3 Feature vectors by PCA

Fig.4 Feature vectors by PCA at ηSNR=5

Fig.5 Feature vectors by LDA

Fig.6 Feature vectors by LDA at ηSNR=5

3 Experimental results
For the extraction of the current characteristics, a 3-phase, 220 V, 1.85 kW, 4-pole induction motor was considered. The experimental system is illustrated in Fig.7; it contains a 5 kW permanent magnet synchronous motor, the induction motor, a PWM inverter, a PWM converter, and a digital board containing a TMS320VC33 DSP chip. A data acquisition device from NI Co. was used to acquire the data.

Fig.7 Experimental system for extraction of current characteristics

Table 2 shows the recognition results for the noise-free cases of PCA and LDA. The LDA results are perfect for every case, whereas PCA shows two detection errors each for the bowed rotor and static eccentricity cases. Hence, LDA has the advantage in the noise-free case because it maximizes the discrimination between the classes [14].

Table 2 Recognition results for 9 cases of healthy and 45 cases of 5 faults
                          LDA                      PCA
Operating condition       Recognition   Error      Recognition   Error
Healthy                   9             0          9             0
Faulted bearing           9             0          9             0
Bowed rotor               9             0          7             2
Broken rotor bar          9             0          9             0
Static eccentricity       9             0          7             2
Dynamic eccentricity      9             0          9             0

1242

J. Cent. South Univ. Technol. (2010) 17: 1238−1242

Next, noise was added to the signal, with ηSNR considered from 5 to 35 (at ηSNR=40 there are no changes). Nine signals per condition, 54 cases in total, were tested [15]. The recognition results under the noise condition are listed in Table 3. The results indicate that the performance of LDA is better than that of PCA from the noise-free condition down to ηSNR=25; however, the LDA recognition rate deteriorates rapidly as the noise grows (ηSNR<25). When ηSNR is 5, the recognition rate of LDA is 22% less than that of PCA.
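Such noisy test signals can be generated by scaling white Gaussian noise to a target ηSNR. A minimal sketch follows; the 3 kHz sampling and 128-point framing are from the paper, while the 60 Hz sinusoidal stand-in for the stator current is an assumption:

```python
import numpy as np

def add_noise(signal, snr_db, rng=None):
    """Add white Gaussian noise so that 10*log10(Ps/Pn) equals snr_db."""
    if rng is None:
        rng = np.random.default_rng(0)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))   # target noise power
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)

fs, n = 3000, 128                        # 3 kHz sampling, 128-point frames
t = np.arange(n) / fs
current = np.sin(2 * np.pi * 60 * t)     # stand-in for a 60 Hz stator current
noisy = add_noise(current, snr_db=5)
```

Sweeping `snr_db` from 5 to 35 reproduces the range of test conditions considered in Table 3.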

Table 3 Recognition rate versus ηSNR
ηSNR    Recognition ratio/%
        PCA      LDA      Proposed
5       60.56    38.52    62.96
10      72.78    51.23    77.59
15      82.78    67.96    84.82
20      88.89    85.56    90.74
25      92.22    95.17    95.17
30      91.30    98.70    98.70
35      92.60    100      100

By these recognition results, the proposed mixed algorithm shows performance superior to that of single PCA or LDA; in particular, it has a strong point when noise is added to the considered signals.

4 Conclusions
(1) Using the PCA property of extracting characteristics from high-dimensional data into a low-dimensional space, characteristic vectors are obtained and illustrated for the five induction motor fault cases and the healthy case. Using the LDA property, feature vectors with discrimination between data in different classes are obtained. Finally, a mixed algorithm based on the PCA and LDA methods is proposed; to overcome the insufficient-data condition, the bootstrap method is also employed.
(2) By the comparison of each case, LDA shows good performance when the signal is noise free; however, the LDA result deteriorates rapidly from ηSNR=25 to ηSNR=5. Over the whole range of ηSNR, the mixed algorithm shows better recognition than the PCA or LDA algorithm alone. Of the total 108 data of the 6 cases, 54 data are applied to PCA and LDA for training, and the remaining 54 data are used to verify the detection algorithm. By these results, the proposed mixed algorithm has better performance than either single method, with or without noise.

References
[1] CHO H C, KNOWLES J, FADALI M S, LEE K S. Fault detection and isolation of induction motors using recurrent neural networks and dynamic Bayesian modeling [J]. IEEE Transactions on Control Systems Technology, 2010, 18(2): 430−437.
[2] DIDIER G, TERNISIEN E, CASPARY O, RAZIK H. Fault detection of broken rotor bars in induction motor using a global fault index [J]. IEEE Transactions on Industry Applications, 2006, 42(1): 79−88.
[3] DE ANGELO C H, BOSSIO G R, GIACCONE S J, VALLA M I, SOLSONA J A, GARCIA G O. Online model-based stator-fault detection and identification in induction motors [J]. IEEE Transactions on Industrial Electronics, 2009, 56(11): 4671−4680.
[4] KIM K, PARLOS A G, MOHAN-BHARADWAJ R. Sensorless fault diagnosis of induction motors [J]. IEEE Transactions on Industrial Electronics, 2003, 50(5): 1038−1051.
[5] ZIDANI F, BENBOUZID M E H, DIALLO D, NAIT-SAID M S. Induction motor stator faults diagnosis by a current Concordia pattern-based fuzzy decision system [J]. IEEE Transactions on Energy Conversion, 2003, 18(4): 469−475.
[6] HAJI M, TOLIYAT H A. Pattern recognition—A technique for induction machines rotor broken bar detection [J]. IEEE Transactions on Energy Conversion, 2001, 16(4): 312−317.
[7] BAYINDIR R, SEFA I, COLAK I, BEKTAS A. Fault detection and protection of induction motors using sensors [J]. IEEE Transactions on Energy Conversion, 2008, 23(3): 734−741.
[8] TURK M A, PENTLAND A P. Face recognition using eigenfaces [C]// Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Maui, 1991: 586−591.
[9] BELHUMEUR P N, HESPANHA J P, KRIEGMAN D J. Eigenfaces vs Fisherfaces: Recognition using class specific linear projection [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997, 19(7): 711−720.
[10] DUDA R O, HART P E, STORK D G. Pattern classification [M]. 2nd ed. New York: John Wiley & Sons Inc, 2002.
[11] ZHANG Jian-yang, YANG Jing-yu. Constructing PCA baseline algorithms to reevaluate ICA-based face-recognition performance [J]. IEEE Transactions on Systems, Man, and Cybernetics, 2007, 37(4): 1015−1021.
[12] GOOD R P, KOST D, CHERRY G A. Introducing a unified PCA algorithm for model size reduction [J]. IEEE Transactions on Semiconductor Manufacturing, 2010, 23(2): 201−209.
[13] ULFARSSON M O, SOLO V. Sparse variable PCA using geodesic steepest descent [J]. IEEE Transactions on Signal Processing, 2008, 56(12): 5823−5832.
[14] PENG De-zhong, ZHANG Yi. Dynamics of generalized PCA and MCA learning algorithms [J]. IEEE Transactions on Neural Networks, 2007, 18(6): 1777−1784.
[15] CHAMUNDEESWARI V V, SINGH D, SINGH K. An analysis of texture measures in PCA-based unsupervised classification of SAR images [J]. IEEE Geoscience and Remote Sensing Letters, 2009, 6(2): 214−218.
(Edited by CHEN Wei-ping)
