
EXPERIMENTAL VALIDATION OF LS-SVM BASED FAULT IDENTIFICATION IN ANALOG CIRCUITS USING FREQUENCY FEATURES

Arvind Sai Sarathi Vasan (a), Bing Long (a,b), and Michael Pecht (a,c)

(a) Center for Advanced Life Cycle Engineering (CALCE), University of Maryland, College Park, MD 20742, USA
(b) School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China
(c) Center for Prognostics and System Health Management, City University of Hong Kong, Kowloon, Hong Kong

Analog circuits are widely used in diverse fields such as avionics, telecommunications, and healthcare. Detecting and identifying soft faults in analog circuits subject to component variation within the standard tolerance range is critical for the development of reliable electronic systems, and it forms the primary focus of this paper. We experimentally demonstrate a reliable and accurate (99%) fault diagnostic framework consisting of a sweep signal generator, a spectral estimator, and a least squares support vector machine. The proposed method is completely automated and can be extended to test other analog circuits whose performance is mainly determined by their frequency characteristics.

Keywords: analog circuits, frequency features, least squares support vector machines, soft fault diagnosis, tolerance, wavelet features.

1. INTRODUCTION

A survey conducted in the area of diagnostic testing of electronic circuits and devices indicated that 80% of faults occur in the analog segment [1-2]. Existing diagnostic testing methods for analog circuits depend largely on simulation data, and their accuracy is affected by component tolerances, the complex nature of the fault mechanisms, nonlinearity issues, and the deviation of component parameters due to operational and environmental stresses [3]. Detecting and isolating soft faults when components vary within their tolerance range is a challenging fault diagnosis problem [4]. Soft faults refer to situations in which the circuit topology is not affected, but the deviation of circuit elements from their nominal values changes the operational range of the circuit and thereby degrades its performance [3]. Component tolerance, on the other hand, does not constitute a fault, but it can affect the performance of diagnostic techniques through small changes in the circuit's component parameters within their standard tolerance range [5].

Over the past ten years, various diagnostic methods have been proposed. These are based either on diagnostic equations derived at selected nodes of the circuit [5-9] or on simulated fault data (data-driven methods) [10-13]. Fault diagnosis based on the former approach suffers from drawbacks associated with the estimation procedure when dealing with nonlinear circuits and is limited by the poor accessibility of the internal nodes of modern integrated circuits. Neural networks (NNs) are the most popular data-driven methods used for soft fault diagnosis in analog circuits. The advantage of NNs is that they do not need a fault model, which is hard to obtain for analog circuits. However, an NN is hard to control and requires a sufficiently large amount of fault data for training. It also has other disadvantages, such as a low convergence rate, local optimal solutions, and poor generalization when the number of fault samples is limited [13-15]. Feeding the extracted features directly as the input to an NN can result in a large NN, even for a small circuit [10]. Improved performance with NNs has been reported [11-12], but at the cost of additional computations for data preprocessing.

In recent years, researchers have demonstrated the support vector machine (SVM) as an effective tool for the diagnostics of analog circuits [9][14-15][19-20]. The SVM is a machine learning tool that accounts for the trade-off between learning ability and generalization ability by minimizing the structural risk [16-18]. Most of the aforementioned methods [5-13] are based on simulation data, which are not always consistent with real data [11-12]. They are also validated using data from faulty circuits in which the faulty components deviate from their nominal values by 50%. The diagnosability of these approaches under component variations just outside the tolerance range is still questionable.

In our research, we experimentally demonstrate 99% diagnosability using the least squares support vector machine (LS-SVM), even at these previously unexplored component variation levels. Conventional frequency-domain features and wavelet features are extracted from the spectral estimate of the circuit's response to a sweep signal to demonstrate reliable and accurate fault diagnosis with reduced test time.

The material in this work is arranged as follows. Section 2 gives a brief description of our diagnostic framework, followed by discussions of spectrum estimation and feature selection in Sections 3 and 4, respectively. In Section 5, the multiclass LS-SVM for fault classification is discussed. This is followed by a detailed description of our experiments demonstrating the strength of our diagnostic technique in Section 6. Finally, conclusions are drawn in Section 7.

2. DESCRIPTION OF THE FAULT DIAGNOSTIC FRAMEWORK

Fig. 1 shows the proposed diagnostic system for a circuit under test (CUT). It uses a sweep signal generator as the test signal source, followed by a spectral estimator, a feature extractor, and finally a fault classifier based on a multiclass LS-SVM. For most electronic circuits, the behavioral characteristics are accompanied by a unique frequency response, and this approach exploits that property to generate fault signatures. To obtain the frequency-domain features, the CUT is excited by a sweep signal whose frequency bandwidth is larger than that of the CUT. The power spectral density (PSD) of the CUT response is then estimated, and frequency-domain features are extracted from it. It is not practically feasible to obtain the true PSD from the observation; only an estimate of the PSD can be obtained. Hence, we employ a nonparametric spectrum estimator based on Welch's method [22] to estimate the PSD. Two types of features, namely 1) conventional frequency features and 2) wavelet features, are then extracted from the estimated PSD and used as signatures for the fault classifier. In this work, we use a one-against-one multiclass least squares support vector machine (LS-SVM) for fault classification.

The analog circuit fault diagnosis procedure involves two phases: the training phase and the diagnostic phase. The most frequently occurring faults are investigated to define the faults of interest. The CUT is then simulated for these hypothesized faulty conditions and excited by the test stimuli to develop signatures representing the fault conditions. These signatures are stored in a dictionary for use during the online identification of faults. During the diagnostic phase, the CUT is excited by the test stimuli and its signatures are acquired. These signatures are compared with those stored in the fault dictionary for identification. In general, the fault classifier organizes the signatures according to a specific criterion in order to assign the faulty CUT to one of the pre-stored fault classes.

Figure 1. Overview of the proposed diagnostic setup.
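As a rough illustration of the flow in Fig. 1, the sketch below strings the stages together in Python. It is not the authors' implementation; `acquire_cut_response`, `extract_features`, and `classify` are hypothetical placeholders for the acquisition, feature-extraction, and classification steps detailed in Sections 3-5, and the sweep parameters are arbitrary.

```python
import numpy as np
from scipy.signal import chirp, welch

fs = 1_000_000                      # sampling rate in Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)

# 1) Sweep (chirp) test stimulus covering more bandwidth than the CUT
stimulus = chirp(t, f0=100, f1=100_000, t1=t[-1], method='linear')

# 2) Measured CUT response (hypothetical data-acquisition call)
response = acquire_cut_response(stimulus)

# 3) Nonparametric PSD estimate of the response (Welch's method, Section 3)
f, pxx = welch(response, fs=fs, window='hann', nperseg=4096, noverlap=2048)

# 4) Fault signature and classification (Sections 4 and 5)
features = extract_features(f, pxx)     # conventional frequency + wavelet features
fault_class = classify(features)        # trained one-against-one LS-SVM
```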

3. SPECTRUM ESTIMATION

Let x(n), n = 0, ..., N-1, be the observed response of the CUT. Take segments x_i(n) of length L whose starting points are D samples apart. If K such segments cover all N data points, then (K-1)D + L = N. A data window w(n) is then applied to each segment, producing a set of K modified periodograms that are averaged. The modified periodograms obtained by applying the window are given by

$$ I_i\!\left(e^{j\omega}\right) = \frac{1}{LU}\left|\sum_{n=0}^{L-1} x_i(n)\, w(n)\, e^{-jn\omega}\right|^{2}, \qquad i = 1, 2, \dots, K, \quad \text{where } U = \frac{1}{L}\sum_{n=0}^{L-1} w^{2}(n). \tag{1} $$

Finally, the average of these K modified periodograms gives Welch's estimate of the power spectral density [22].
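For reference, Welch's estimate in (1) is available off the shelf. A minimal sketch using SciPy follows; the segment length L, spacing D, and Hann window are illustrative choices, not the settings used in our experiments.

```python
import numpy as np
from scipy.signal import get_window, welch

def welch_psd(x, fs, L=1024, D=512, window='hann'):
    """Welch PSD: average of windowed, modified periodograms of length-L
    segments whose starting points are D samples apart (noverlap = L - D)."""
    w = get_window(window, L)
    f, pxx = welch(x, fs=fs, window=w, nperseg=L, noverlap=L - D,
                   detrend=False, scaling='density')
    return f, pxx

# Example: synthetic 25 kHz tone in noise
fs = 1_000_000
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 25_000 * t) + 0.1 * np.random.randn(t.size)
f, pxx = welch_psd(x, fs)
```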

4. FEATURE SELECTION AND EXTRACTION

In the proposed diagnostic approach, two types of features are extracted from the PSD of the CUT response to the sweep test signal. Both feature types are explained with reference to the well-known frequency responses of analog filter circuits. In general, four types of filters are used to process analog signals: low-pass (LP), high-pass (HP), band-pass (BP), and band-stop (BS) filters. Among these, the method for selecting the frequency features of LP filters also applies to HP filters, and the frequency features of BP filters also apply to BS filters.

4.1 Conventional Frequency Features

The conventional frequency features are the characteristic quantities that identify the circuit's behavior in the frequency domain. For a BP filter, the operational characteristics are defined by 1) the center frequency (f_0); 2) the lower and upper 3 dB pass-band limits (f_l and f_u); and 3) the maximum of the frequency response (H(f_0)). Similarly, for an LP filter, the center frequency and the maximum of the frequency response (H(f_0)) are part of the feature vector, while the 3 dB cut-off frequency (f_cut) is used as the third element instead of the 3 dB pass-band limits. Additional features can further improve the classification accuracy. Hence, the frequency response at 1) the upper 3 dB pass-band limit (H(f_u)) for BP filters and 2) the 3 dB cut-off frequency (H(f_cut)) for LP filters is used as an additional feature element. Thus, the conventional frequency features for the BP and LP filters are expressed as

$$ \mathrm{BPF} = \left[\, f_0,\; f_l,\; f_u,\; H(f_0),\; H(f_u) \,\right] \quad \text{and} \quad \mathrm{LPF} = \left[\, f_0,\; f_{\mathrm{cut}},\; H(f_0),\; H(f_{\mathrm{cut}}) \,\right]. \tag{2} $$
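The BP-filter features in (2) can be read directly off the estimated PSD. The sketch below is one possible implementation (the -3 dB crossing search and the use of a dB scale are assumptions, not the exact procedure used in our setup); the arrays f and pxx are the frequency grid and PSD from the Welch step.

```python
import numpy as np

def bp_frequency_features(f, pxx):
    """Extract [f0, fl, fu, H(f0), H(fu)] from a band-pass PSD estimate."""
    h = 10 * np.log10(pxx)              # response magnitude in dB
    i0 = np.argmax(h)                   # peak -> center frequency f0
    f0, h0 = f[i0], h[i0]
    half_power = h0 - 3.0               # -3 dB level

    # lower 3 dB limit: last point at/below -3 dB before the peak
    below = np.where(h[:i0] <= half_power)[0]
    fl = f[below[-1]] if below.size else f[0]

    # upper 3 dB limit: first point at/below -3 dB after the peak
    above = np.where(h[i0:] <= half_power)[0]
    iu = i0 + above[0] if above.size else len(f) - 1

    return np.array([f0, fl, f[iu], h0, h[iu]])
```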

Similarly, for the HP and BS filters, the features are chosen as they are for the LP and BP filters, except that the upper cut-off frequency is replaced by a lower cut-off frequency for HP filters and the 3 dB pass-band limits are replaced by the 3 dB stop-band limits for BS filters. Also, the minimum of the frequency response is used in place of the maximum for BS filters.

4.2 Wavelet Features

To improve the classification accuracy, additional features are extracted using the wavelet transform, where the frequency response P(f) is decomposed into so-called approximation and detail signals using the multiresolution representation [22]

$$ P(f) = \sum_{j=1}^{J}\sum_{k} W_k^{\,j}\, \psi_j\!\left(f - 2^{j}k\right) + \sum_{k} V_k^{\,J}\, \phi_J\!\left(f - 2^{J}k\right), \tag{3} $$

where ψ_j(f) is the family of wavelet functions and φ_J(f) is the family of scaling functions. The detail and approximation signals are expressed using the detail (W_k^j) and approximation (V_k^J) wavelet coefficients, which are obtained through high-pass and low-pass filtering, respectively. Based on earlier research [19], we selected the Haar wavelet and the coefficients of the detail signals of levels 1 through 5 for extracting two types of wavelet features. In the first type, the energy contained in the detail signal at each level of decomposition is used as a wavelet feature:

$$ E_j = \sum_{k} \left(W_k^{\,j}\right)^{2}, \qquad j = 1, 2, \dots, J, \tag{4} $$

where W_k^j is the k-th detail coefficient at the j-th level of decomposition and J is the predefined decomposition level. The energy indicators are easy to implement, but they fail to utilize the complete information from the wavelet transform. Hence, in the second type, we use the mean (μ_j) and standard deviation (σ_j) of the detail coefficients at each decomposition level as wavelet features:

$$ \mu_j = \frac{1}{n}\sum_{k=1}^{n} W_k^{\,j} \quad \text{and} \quad \sigma_j = \sqrt{E\!\left\{\left(W_k^{\,j} - \mu_j\right)^{2}\right\}}, \tag{5} $$

where n denotes the predefined number of coefficients used as features. Thus, the final feature vector for a BP filter, including the features extracted using the wavelet decomposition, is either

$$ \mathrm{BPF} = \left[\, f_0, f_l, f_u, H(f_0), H(f_u), E_1, \dots, E_5 \,\right] \quad \text{or} \quad \mathrm{BPF} = \left[\, f_0, f_l, f_u, H(f_0), H(f_u), \mu_1, \dots, \mu_5, \sigma_1, \dots, \sigma_5 \,\right]. \tag{7} $$
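One way to compute (4) and (5) and assemble the feature vector in (7) is with PyWavelets, using the Haar wavelet and the detail coefficients of levels 1 through 5 as stated above. This is a sketch; the exact number of coefficients kept per level and the column ordering are assumptions.

```python
import numpy as np
import pywt

def wavelet_features(pxx, levels=5, use_energy=True):
    """Wavelet features of the PSD: either detail-signal energies E_j (4)
    or per-level means and standard deviations of the detail coefficients (5)."""
    coeffs = pywt.wavedec(pxx, 'haar', level=levels)
    details = coeffs[1:]                       # [cD_levels, ..., cD_1]
    if use_energy:
        return np.array([np.sum(d ** 2) for d in details])          # E_j
    mu = np.array([np.mean(d) for d in details])                    # mu_j
    sigma = np.array([np.std(d) for d in details])                  # sigma_j
    return np.concatenate([mu, sigma])

# Final BP feature vector, e.g. conventional + mean/std wavelet features (7)
# bpf = np.concatenate([bp_frequency_features(f, pxx),
#                       wavelet_features(pxx, use_energy=False)])
```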

Table 1. Test conditions evaluated

Condition | Feature type | Wavelet feature type | Normalization
C1 | Conventional frequency features only | - | No
C2 | Conventional frequency features only | - | All
C3 | Conventional frequency features only | - | Partial
C4 | Wavelet features only | Energy indicator of detail coefficients (E_j) | No
C5 | Wavelet features only | Energy indicator of detail coefficients (E_j) | All
C6 | Wavelet features only | Mean (μ_j) and standard deviation (σ_j) of detail coefficients | No
C7 | Wavelet features only | Mean (μ_j) and standard deviation (σ_j) of detail coefficients | All
C8 | Conventional frequency & wavelet features | Energy indicator of detail coefficients (E_j) | No
C9 | Conventional frequency & wavelet features | Energy indicator of detail coefficients (E_j) | All
C10 | Conventional frequency & wavelet features | Energy indicator of detail coefficients (E_j) | Partial
C11 | Conventional frequency & wavelet features | Mean (μ_j) and standard deviation (σ_j) of detail coefficients | No
C12 | Conventional frequency & wavelet features | Mean (μ_j) and standard deviation (σ_j) of detail coefficients | All
C13 | Conventional frequency & wavelet features | Mean (μ_j) and standard deviation (σ_j) of detail coefficients | Partial

In this work, we evaluated different test conditions based on 1) the feature type, 2) the wavelet feature type, and 3) the normalization method. By feature type we mean that the feature vector comprises the conventional frequency features of the filter circuit, the wavelet features, or a combination of both. When wavelet features are chosen, either the energy indicators or the mean and standard deviation of the detail-signal coefficients are used as features. Normalization is then applied to enhance these features; data normalization is performed to avoid large dynamic ranges in one or more dimensions. We evaluated the effects of 1) no normalization; 2) complete normalization of all the elements of the feature vector with respect to the maximum of the individual vector elements; and 3) partial normalization, in which f_0, f_l, and f_u are normalized with respect to 10 kHz and the wavelet features are normalized with respect to the maxima of the individual wavelet features. Table 1 summarizes the test conditions evaluated in this work.
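A possible sketch of the partial-normalization step described above: the frequency features are divided by 10 kHz and each wavelet feature by its own maximum over the training set. The column layout (f_0, f_l, f_u, H(f_0), H(f_u), then wavelet features) and the handling of the H(·) columns are assumptions.

```python
import numpy as np

def partial_normalize(X, freq_cols=(0, 1, 2), wavelet_cols=slice(5, None),
                      f_scale=10_000.0):
    """Partial normalization of a feature matrix X (samples x features):
    f0, fl, fu are divided by 10 kHz; wavelet features are divided by their
    per-column maxima; any remaining columns are left untouched."""
    Xn = X.astype(float).copy()
    Xn[:, list(freq_cols)] /= f_scale
    col_max = np.abs(Xn[:, wavelet_cols]).max(axis=0)
    Xn[:, wavelet_cols] /= np.where(col_max == 0, 1.0, col_max)
    return Xn
```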

5. LS-SVM BASED FAULT CLASSIFIER

5.1 Least Squares Support Vector Machine Classifiers

The support vector machine (SVM) theory developed by Vapnik [21] provides better generalization ability than conventional methods. It uses a kernel function to map the input samples to a higher-dimensional feature space in which the overlapped samples become linearly separable. This involves solving a complex quadratic programming (QP) problem. The least squares support vector machine [17] reduces the complexity and computation involved in the QP problem. Given a set of data vectors (x_1, y_1), ..., (x_n, y_n), with x_i ∈ R^N and y_i ∈ {-1, +1}, i = 1, ..., n, the aim is to construct a classifier of the form w^T Γ(x_i) + b subject to the constraints y_i(w^T Γ(x_i) + b) ≥ 1, where Γ(x) is a nonlinear function that maps the input space to a higher-dimensional space, w is an M-dimensional weight vector perpendicular to the separating hyperplane, and b is the bias term. To allow some degree of tolerance to misclassification, slack variables (ξ_i) are introduced. In the standard SVM, the classification problem is given by the saddle point of the Lagrangian function, whose computation leads to the solution of the following QP problem, on which the classifier design is based:

$$ \max_{\alpha_i}\; Q_1\!\left(\alpha_i; \Gamma(\mathbf{x}_i)\right) = -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} y_i y_j\, \Gamma(\mathbf{x}_i)^{T}\Gamma(\mathbf{x}_j)\, \alpha_i \alpha_j + \sum_{i=1}^{n}\alpha_i, \tag{8} $$

subject to the constraint Σ_{i=1}^{n} α_i y_i = 0, with the Lagrange multipliers bounded by 0 ≤ α_i ≤ C, i = 1, ..., n. To reduce the complexity, the modification introduced by Suykens and Vandewalle [17] replaces the inequality constraints with the equality constraints y_i(w^T Γ(x_i) + b) = 1 - ξ_i and changes the cost function to

$$ \min_{\mathbf{w},\, \xi,\, b}\; J_2(\mathbf{w}, \xi) = \frac{1}{2}\left\|\mathbf{w}\right\|^{2} + \frac{1}{2}\gamma\sum_{i=1}^{n}\xi_i^{2}. \tag{9} $$

This transforms the problem into an (n+1) × (n+1) linear system:

$$ \begin{bmatrix} \Omega + \gamma^{-1}\mathbf{I} & \mathbf{Y} \\ \mathbf{Y}^{T} & 0 \end{bmatrix} \begin{bmatrix} \boldsymbol{\alpha} \\ b \end{bmatrix} = \begin{bmatrix} \mathbf{1} \\ 0 \end{bmatrix}, \tag{10} $$

where Ω_ij = y_i y_j K(x_i, x_j), Y = [y_1, ..., y_n]^T, α = [α_1, ..., α_n]^T, 1 = [1, ..., 1]^T, and I is the n × n identity matrix.
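A minimal NumPy sketch of a binary LS-SVM trained by solving the linear system in (10); the RBF kernel and the hyperparameters gamma and sigma are illustrative choices, and this is not the implementation used in our experiments.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the (n+1)x(n+1) LS-SVM system (10) for alpha and b (y in {-1,+1})."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    Omega = np.outer(y, y) * K + np.eye(n) / gamma    # Omega + I/gamma
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = Omega
    A[:n, n] = y            # Y column
    A[n, :n] = y            # Y^T row
    rhs = np.concatenate([np.ones(n), [0.0]])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n]  # alpha, b

def lssvm_decision(Xtr, ytr, alpha, b, Xte, sigma=1.0):
    """Decision value f(x) = sum_i alpha_i y_i K(x_i, x) + b."""
    return rbf_kernel(Xte, Xtr, sigma) @ (alpha * ytr) + b
```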

5.2 Multi-class SVM

Multiple-fault classification problems are typically solved by combining multiple binary LS-SVM classifiers [18]. The one-against-one LS-SVM (OAO LS-SVM) provides the best balance between the sample numbers of the classes under consideration. For C-class classification, the OAO LS-SVM consists of C(C-1)/2 classifiers, each trained on the data from two classes according to the LS-SVM algorithm explained above. Once all C(C-1)/2 classifiers are constructed, voting is used during testing, based on the decision function

$$ f_{ij}(\mathbf{x}) = \mathbf{w}_{ij}^{T}\,\Gamma(\mathbf{x}) + b_{ij}, \tag{11} $$

where w_ij and b_ij are the weight vector and bias term of the binary classifier separating the i-th and j-th classes. The data are then assigned to their corresponding classes based on the rule

$$ \arg\max_{i = 1, \dots, C} f_i(\mathbf{x}), \quad \text{where } f_i(\mathbf{x}) = \sum_{\substack{j = 1 \\ j \neq i}}^{C} \operatorname{sign}\!\left(f_{ij}(\mathbf{x})\right). \tag{12} $$

If the maximum in (12) is attained for a single i, then x is classified into class i; if it is attained for multiple values of i, then x is unclassifiable.
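A sketch of the one-against-one voting scheme in (11)-(12), reusing `lssvm_train` and `lssvm_decision` from the previous sketch. Integer class labels are assumed, the kernel width must match the one used during training, and the value -1 is used here merely to flag unclassifiable (tied) samples.

```python
import numpy as np
from itertools import combinations

def oao_train(X, y, **lssvm_kw):
    """Train C(C-1)/2 binary LS-SVMs, one for every pair of classes."""
    classes = np.unique(y)
    models = {}
    for ci, cj in combinations(classes, 2):
        mask = (y == ci) | (y == cj)
        Xij = X[mask]
        yij = np.where(y[mask] == ci, 1.0, -1.0)   # ci -> +1, cj -> -1
        models[(ci, cj)] = (Xij, yij, *lssvm_train(Xij, yij, **lssvm_kw))
    return classes, models

def oao_predict(classes, models, Xte, sigma=1.0):
    """Vote with sign(f_ij(x)) as in (12); ties are returned as -1 (unclassifiable)."""
    idx = {c: k for k, c in enumerate(classes)}
    votes = np.zeros((len(Xte), len(classes)))
    for (ci, cj), (Xij, yij, alpha, b) in models.items():
        s = np.sign(lssvm_decision(Xij, yij, alpha, b, Xte, sigma))
        votes[:, idx[ci]] += (s > 0)
        votes[:, idx[cj]] += (s < 0)
    best = votes.argmax(axis=1)
    tied = (votes == votes.max(axis=1, keepdims=True)).sum(axis=1) > 1
    return np.where(tied, -1, classes[best])
```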

6. EXPERIMENTATION AND PERFORMANCE RESULTS

The sample circuit built to examine fault diagnosis in analog filter circuits using the LS-SVM is a Sallen-Key band-pass filter (Fig. 2) with component tolerances of 10%. This filter was selected because it has been the most commonly used circuit for evaluating the performance of fault diagnostic approaches, which makes it straightforward to compare our performance results with those reported earlier in [10]-[12] and [23]-[25]. The frequency responses of the circuit with C1, C2, R2, and R3 varying within their tolerances belong to the no-fault (NF) class. When any of these four components varies beyond its tolerance range while the other components vary within their tolerances, we obtain faulty responses from which the fault classes are constructed (Table 2). These faulty responses, along with the NF-class responses, are used to generate the features that train the one-against-one multiclass LS-SVM classifier. As shown in Table 2, we consider component values ranging from just outside the tolerance band (>10% above or below the nominal value) to values as high as 200% of the nominal value.

The experimental setup for evaluating the proposed framework is shown in Fig. 3. Once the response signal is obtained, its power spectrum is estimated using Welch's method, as explained in Section 3. The features are then extracted from the spectral estimate and supplied as input to the LS-SVM classifier. A one-against-one multiclass LS-SVM was employed in this work, in which 36 binary LS-SVMs were trained and optimized. The classifier was trained using 30 data sets obtained from each intentionally induced fault class. During the diagnostic phase, for each of the nine fault conditions, four different fault values were used to produce ten faulty responses each, which were analyzed using the trained LS-SVM classifier. The classifier was tested for all the test conditions listed in Table 1.

Table 3 shows the test accuracy and test time of the designed LS-SVM classifier. In this work, test accuracy is defined as the ratio of correctly classified samples to all tested samples, and test time is defined as the time consumed in testing all of the samples. Among the evaluated test conditions (C1 through C13), condition C13 outperforms all others in terms of both test accuracy (~99%) and test time (about 6.2 seconds for testing 360 fault samples). From test conditions C10 and C13 (Table 3), we can infer that combining conventional frequency features with wavelet features and performing partial normalization leads to the best classification accuracy. When only conventional frequency features (C1) or only wavelet features (C4, C6) are considered, the wavelet features provide better test accuracy, and in less testing time. However, when only wavelet features were used, normalizing the feature vectors resulted in poor test accuracy. Furthermore, comparing test conditions C4 and C5 with C6 and C7 shows that wavelet features extracted from the statistical properties of the detail coefficients provide better classification information than the energy contained in the detail coefficients.

Figure 2. 25-kHz Sallen-Key bandpass filter used in our study.

Table 2. Fault classes for the Sallen-Key band-pass filter

Fault class | Nominal value | Faulty values
NF | - | -
R2 ⇑ (F5) | 1 kΩ | 1.241 kΩ, 1.613 kΩ, 2.217 kΩ, 3.477 kΩ
R2 ⇓ (F6) | 1 kΩ | 43.8 Ω, 372.3 Ω, 669 Ω, 836 Ω
R3 ⇑ (F1) | 2 kΩ | 2.311 kΩ, 2.529 kΩ, 3.655 kΩ, 4.447 kΩ
R3 ⇓ (F2) | 2 kΩ | 200 Ω, 0.883 kΩ, 1.5 kΩ, 1.794 kΩ
C1 ⇑ (F7) | 5 nF | 6 nF, 7 nF, 8 nF, 10 nF
C1 ⇓ (F8) | 5 nF | 1.5 nF, 1.875 nF, 2.5 nF, 3.5 nF
C2 ⇑ (F3) | 5 nF | 6 nF, 7 nF, 8 nF, 10 nF
C2 ⇓ (F4) | 5 nF | 1.5 nF, 1.67 nF, 2.5 nF, 3 nF

Figure 3. Experimental setup for demonstrating the proposed fault diagnostic approach.

Table 3. Classifier performance for the Sallen-Key band-pass filter

Condition | Test accuracy | Test time (s)
C1 | 0.547 | 29.281
C2 | 0.811 | 14.835
C3 | 0.969 | 8.845
C4 | 0.878 | 15.397
C5 | 0.569 | 38.095
C6 | 0.936 | 11.450
C7 | 0.775 | 17.300
C8 | 0.547 | 28.797
C9 | 0.833 | 13.525
C10 | 0.964 | 7.706
C11 | 0.480 | 29.843
C12 | 0.908 | 11.949
C13 | 0.989 | 6.193

To show the significance of the proposed diagnostic framework, we compare our method with existing diagnostic approaches. Most of the diagnostic approaches presented in the past, such as those in [3], [8]-[12], [19], [20], and [23]-[26], considered a CUT to be faulty only when the value of a critical component was 50% higher or lower than its nominal value. They do not explicitly demonstrate the performance of their diagnostic approaches when there are small variations in the value of a critical component just outside the tolerance range. Here, we have demonstrated classification accuracy of around 99% even when component values vary only 20% above or below their nominal values. Moreover, if, as in the aforementioned methods, only a 50% variation is considered a faulty condition, then our method achieves a test accuracy of 100%. In the recently published works of Yuan et al. [24] and Xiao and He [25], 100% diagnosability has been claimed for the same Sallen-Key band-pass filter. However, both of these works are based on data collected from simulation studies, which are sometimes not consistent with real data obtained in the field. Even though a tolerance level can be specified during simulation, simulations do not cover all of the values within the standard tolerance range of every single component that naturally arise in an actual circuit. This tolerance problem can obscure the separability of the fault classes and thus degrade the performance of the classifier in real-life applications. As an example, Aminian and Aminian [11] demonstrated a test accuracy of 100% for a Sallen-Key band-pass filter using data collected from a simulation study; however, the performance dropped to less than 95% when the approach was experimentally verified [12].

7. CONCLUSIONS

In this study, we have proposed and experimentally validated a systematic approach for performing soft-fault diagnosis on actual analog filter circuits affected by component tolerances. The proposed approach is completely automated and is capable of detecting and isolating faults in real time, and thus it can be used to evaluate the reliability of electronic systems during field operation. The proposed approach overcomes issues confronted by existing diagnostic methods. In contrast to diagnostic methods based on circuit nodal equations, the proposed approach is not limited by the inaccessibility of the internal nodes of analog circuits, as in modern integrated circuits, because it monitors only the response of the circuit at the output node. The generalization capability of the LS-SVM and the use of the structural risk minimization concept ensure that component tolerances do not affect the separability of the extracted features, which is a major drawback of neural network-based diagnostic techniques. Furthermore, the use of the LS-SVM transforms the classification problem into a system of linear equations rather than a quadratic programming problem, which reduces the computation time involved in classification. For an analog filter circuit, extracting features from the frequency domain proved to be efficient, as it avoided the additional computations involved in data preprocessing for reducing the number of features. Extracting features using the wavelet transform and performing partial normalization on these features had a significant effect on both the test accuracy (99%) and the testing time, which was reduced by roughly 50% (e.g., C13: 6.193 s vs. C2: 14.835 s; C10: 7.706 s vs. C9: 13.525 s; and C3: 8.845 s vs. C2: 14.835 s). Thus, the use of the LS-SVM leads to reliable classification in reduced testing time compared with previously reported diagnostic approaches. Our proposed approach can be extended to other analog circuits whose performance is defined in the frequency domain. Future work includes the investigation of different mother wavelets, the possibility of detecting and isolating multiple faults (which might generate refusal areas during classification with the least squares support vector machine), and the detection of hard faults.

8. REFERENCES

1. Birolini A. (1997) Quality and reliability of technical systems. New York: Springer-Verlag.
2. Li F & Woo P-Y. (2002) Fault detection for linear analog IC: the method of short-circuit admittance parameters. IEEE Trans. Circuits Syst., 49 (1), 105-108.
3. Alippi C, Catelani M, Fort A, & Mugnaini M. (2002) SBT soft fault diagnosis in analog electronic circuits: a sensitivity-based approach by randomized algorithms. IEEE Trans. Instrum. Meas., 51 (5), 1116-1125.
4. Williams A & Taylor F. (2006) Electronic filter design handbook. New York: McGraw-Hill.
5. Yang C, Tian S, Long B, & Chen F. (2011) Methods of handling the tolerance and test-points selection problem for analog-circuit fault diagnosis. IEEE Trans. Instrum. Meas., 60 (1), 176-185.
6. Mei H, Hong W, Geng H, & Shiyuan Y. (2007) Soft fault diagnosis for analog circuits based on slope fault feature and BP neural network. Tsinghua Science and Technology, 12 (S1), 26-31.
7. Halgas S. (2008) Multiple soft fault diagnosis of nonlinear circuits using the fault dictionary approach. Bull. Pol. Acad. Sci., 56 (1), 53-57.
8. Zhou L, Shi Y, Zhao G, Zhang W, Tang H, & Su L. (2010) Soft-fault diagnosis of analog circuit with tolerance using mathematical programming. J. Commun. Comp., 7 (5), 50-59.
9. Cui J & Wang Y. (2011) A novel approach of analog circuit fault diagnosis using support vector machines classifier. Measurement, 44, 281-289.
10. Spina R & Upadhyaya S. (1997) Linear circuit fault diagnosis using neuromorphic analyzers. IEEE Trans. Circuits Syst. II, 44, 188-196.
11. Aminian M & Aminian F. (2000) Neural-network based analog-circuit fault diagnosis using wavelet transform as preprocessor. IEEE Trans. Circuits Syst. II, 47 (2), 151-156.
12. Aminian F & Aminian M. (2002) Analog fault diagnosis of actual circuits using neural networks. IEEE Trans. Instrum. Meas., 51 (3), 544-550.
13. Mohamed EA, Abdelaziz AY, & Mostafa AS. (2005) A neural network-based scheme for fault diagnosis of power transformers. Electr. Power Syst. Res., 75 (1), 29-39.
14. Huang J, Hu X, & Yang F. (2011) Support vector machine with genetic algorithm for machinery fault diagnosis of high voltage circuit breaker. Measurement, 44, 1018-1027.
15. Mao X, Wang L, & Li C. (2008) SVM classifier for analog fault diagnosis using fractal features. Proceedings of the 2nd IEEE International Symposium on Intelligent Information Technology and Application, 553-557.
16. Scholkopf B & Smola A. (2002) Learning with kernels: support vector machines, regularization, optimization, and beyond. Cambridge, MA: MIT Press.
17. Suykens J & Vandewalle J. (1999) Least squares support vector machine classifiers. Neural Process. Lett., 9 (3), 293-300.
18. Hsu C-W & Lin C-J. (2002) A comparison of methods for multi-class support vector machines. IEEE Trans. Neural Networks, 13 (2), 415-425.
19. Long B, Tian SL, & Wang HJ. (2008) Least squares support vector machine based analog circuit fault diagnosis using wavelet transform as preprocessor. Proceedings of the International Conference on Communications, Circuits and Systems, 1026-1029.
20. Lei Z, Ligang H, Wang Z, & Wuchen W. (2010) Applying wavelet support vector machine to analog circuit fault diagnosis. Proceedings of the 2nd Workshop on Education, Technology and Computer Science, 75-78.
21. Vapnik V. (1995) The nature of statistical learning theory. New York: Springer-Verlag.
22. Welch P. (1967) The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms. IEEE Trans. Audio Electroacoust., 15 (2), 70-73.
23. Aminian M & Aminian F. (2007) A modular fault-diagnostic system for analog electronic circuits using neural networks with wavelet transform as a preprocessor. IEEE Trans. Instrum. Meas., 56 (5), 1546-1554.
24. Yuan L, He Y, Huang J, & Sun Y. (2010) A new neural-network-based fault diagnosis approach for analog circuits by using kurtosis and entropy as a preprocessor. IEEE Trans. Instrum. Meas., 59 (3), 586-595.
25. Xiao Y & He Y. (2011) A novel approach for analog fault diagnosis based on neural networks and improved kernel PCA. Neurocomputing, 74, 1102-1115.
26. Toczek W, Zielonko R, & Adamczyk A. (1998) A method for fault diagnosis of nonlinear electronic circuits. Measurement, 24, 79-86.

ACKNOWLEDGMENTS

The authors would like to thank the more than 100 companies and organizations that annually support research activities at the Prognostics and Health Management Group within the Center for Advanced Life Cycle Engineering at the University of Maryland.
