Florida State University Libraries, 2016

FLORIDA STATE UNIVERSITY FAMU-FSU COLLEGE OF ENGINEERING

STRUCTURAL HEALTH MONITORING WITH LAMB-WAVE SENSORS: PROBLEMS IN DAMAGE MONITORING, PROGNOSTICS AND MULTISENSORY DECISION FUSION

By SPANDAN MISHRA

A Dissertation submitted to the Department of Industrial and Manufacturing Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy

2016

Copyright © 2016 Spandan Mishra. All Rights Reserved.

Spandan Mishra defended this dissertation on June 10, 2016. The members of the supervisory committee were:

O. Arda Vanli, Professor Directing Dissertation

Fred W. Huffer, University Representative

Okenwa Okoli, Committee Member

Sungmoon Jung, Committee Member

Chiwoo Park, Committee Member

The Graduate School has verified and approved the above-named committee members, and certifies that the dissertation has been approved in accordance with university requirements.


I dedicate this dissertation to the memory of my mother, Nilam Mishra. I miss her every day. Although she is not here to give me strength and support, I always feel her presence. I also dedicate this dissertation to my father, Mahesh Prasad Mishra, and my sister, Stuti Mishra, without whose support it would have been impossible for me to complete this uphill task.


ACKNOWLEDGMENTS

I would like to express my sincere gratitude to my Ph.D. supervisor, Dr. Arda Vanli, for providing me the opportunity to come to Florida State University and be part of his research group. I will forever be indebted to him for his guidance, mentorship, and support throughout my studies. The past four and a half years of research experience under his guidance have been a tremendous asset to my career and life. I am also deeply indebted to Dr. Sungmoon Jung for guiding me in our work on the FL Sea Grant Project. I am also thankful to my doctoral committee members, Dr. Chiwoo Park, Dr. Fred Huffer, and Dr. Okenwa Okoli; their insightful advice and comments have served as valuable inputs for the engineering and scientific significance of this research. My keen appreciation goes to Dr. Richard Liang for his support and help throughout this research.

This research was partially funded by the National Sea Grant College Program of the U.S. Department of Commerce's National Oceanic and Atmospheric Administration (NOAA), Grant No. NA14OAR4170108, subcontracted to Florida State University. This support is gratefully acknowledged.

I would also like to thank all of my colleagues here in the High Performance Materials Institute. A special thanks to Mr. Frank Allen and Ms. Judy Gardner for assigning me an office in HPMI and managing this amazing research institute. I have also been fortunate to have a person like Mr. Jim Horne in our department, who has a solution for any technical issue that I have ever had. I would also like to express my cordial appreciation to all the faculty and staff at the Department of Industrial and Manufacturing Engineering at the FAMU-FSU College of Engineering for creating a family-like environment and work culture. Last but not least, I will always be indebted to my girlfriend Isabela, who has always been there for me in times of need. I would also like to thank my friends Aschkan, Grzegorz, Garth, and Biswas, whom I met here in Tallahassee and who made my stay very pleasant and a time I will cherish for the rest of my life.


TABLE OF CONTENTS

List of Tables
List of Figures
List of Abbreviations
Abstract

1 Introduction
  1.1 Need for the Study
  1.2 General Outline and Components of the Study
  1.3 Research Hypothesis and Limitations
  1.4 Dissertation Outline
2 Literature Review
  2.1 Theoretical Framework of Lamb-Wave
  2.2 Statistical Learning Approaches in Structural Health Monitoring
    2.2.1 Supervised Damage Detection Methods
    2.2.2 Unsupervised Damage Detection Methods
  2.3 Principal Component Based Monitoring
    2.3.1 Principal Component Analysis
    2.3.2 Statistical Control Charts Using Principal Components
    2.3.3 Principal Component Regression
  2.4 Prognostics and Remaining Useful Life Estimation
  2.5 Decision Fusion
3 Experimental Lamb-Wave and Degradation Data Used in the Research
4 A Multivariate Cumulative Sum Method for Continuous Damage Monitoring
  4.1 Proposed Methodology: Multivariate Cumulative Sum Monitoring with Principal Components
    4.1.1 Comparison to Existing Methods
  4.2 Results and Discussion
  4.3 Conclusion
5 Remaining Useful Life Estimation with Lamb-Wave Sensors Based on Wiener Process and Principal Components Regression
  5.1 Proposed Approach
    5.1.1 Principal Component Regression
    5.1.2 Wiener Process Based Degradation Model
  5.2 Results and Discussion
    5.2.1 Remaining Useful Life Prediction Using Wiener Process Model
  5.3 Conclusion
6 Regularized Linear Discriminant Analysis Based Bayesian Multi-Sensory Decision Fusion
  6.1 Proposed Approach
    6.1.1 Regularized Linear Discriminant Analysis
    6.1.2 Bayesian Decision Fusion
  6.2 Results and Discussion
    6.2.1 Sensor-level Damage Detection
    6.2.2 Comparison to PCA Based Approach
    6.2.3 Decision Fusion
  6.3 Conclusion
7 Conclusion and Future Work
  7.1 Summary
  7.2 Impact of the Research and Future Work

Appendix A Sensor Level Performance Metrics

Bibliography
Biographical Sketch

LIST OF TABLES

4.1 Misdetection rates for the MCUSUM and T² charts from fatigue loading tests.

5.1 Variance explained by different loading vectors and the number of principal components corresponding to each window length.

5.2 Variance explained by different loading vectors and the number of principal components at various positions of the window.

6.1 Actual observations versus observations predicted by each sensor.

6.2 A rough guide for classifying the accuracy of a classification method.

6.3 Comparison between the average performance of the RLDA method and PCA-based discriminant analysis.

6.4 Probability of detection (Pd), probability of false alarm (Pf), and average probability of error (Pe) of each sensor, estimated using training data.

6.5 False positives for baseline signals and false negatives for damage signals after the final fusion for coupon L2S17. False positives are counted out of 59 baseline signals and false negatives out of 9 damage signals.

6.6 False positives for baseline signals and false negatives for damage signals after the final fusion for coupon L3S18. False positives are counted out of 83 baseline signals and false negatives out of 11 damage signals.

6.7 Precision, recall and F-measure of the decision fusion for coupon L2S17.

6.8 Precision, recall and F-measure of the decision fusion for coupon L3S18.

A.1 Actuator-sensor paths and their distances from the notch.

A.2 Decision fusion scores for the Bayesian decision fusion rule (BR), Chair-Varshney's rule (CVR), the Ideal Sensor rule (ISR), and the Majority rule (MR) on sample L2S17. The threshold for BR, CVR, and ISR is 0, while the threshold for MR is 12.

A.3 Decision fusion scores for the Bayesian decision fusion rule (BR), Chair-Varshney's rule (CVR), the Ideal Sensor rule (ISR), and the Majority rule (MR) on sample L3S18. The threshold for BR, CVR, and ISR is 0, while the threshold for MR is 12.

A.4 Area under the curve values for all sensors.

A.5 Performance of the individual sensors used in the decision-making process, using the regularized linear discriminant analysis method.

A.6 Performance of the individual sensors using principal-component-based features in generalized discriminant analysis.

LIST OF FIGURES

1.1 Flow chart depicting various phases of the research.

2.1 (a) Symmetric mode. (b) Anti-symmetric mode [127].

2.2 Twenty different simulated paths generated by a Wiener process with drift parameter µ = 4.84 × 10⁻⁴ mm² and process variance σ² = 84 × 10⁻⁴.

2.3 Architecture of a centralized data fusion system.

2.4 Architecture of a distributed detection and multi-sensory decision fusion system.

3.1 Dog-bone shaped specimen used in experiments. (a) Dimensions (in mm) of the specimen and the two arrays of sensors. The path between sensors 1 and 7 is analyzed in this research. (b) X-ray image of the first coupon (L1S11) under baseline condition. (c) X-ray image of the first coupon (L1S11) after 100K cycles of fatigue loading. Delamination can be seen as the light gray-colored region centered around the notch [111].

3.2 (a) Dog-bone shaped specimen. (b) Delamination area with increasing number of cycles; the two ellipsoidal regions indicate sensor readings corresponding to baseline and damage classes.

3.3 (a) Actuator signal and sensor signal at 300 kHz frequency. The actuator window Wa and sensor window Ws used in the dispersion calculation are shown. (b) Dispersion values for different frequencies.

3.4 (a) Lamb-wave sensor data at baseline, after 200,000 cycles, and after 750,000 cycles for specimen L2S17. (b) X-ray delamination measurements at all 13 measured points. Circled points correspond to the 3 measured instances of the panel.

3.5 (a) Lamb-wave sensor data at baseline, after 150,000 cycles, and after 500,000 cycles for specimen L3S18. (b) X-ray delamination measurements at all 13 measured points. Circled points correspond to the 3 measured instances of the panel.

4.1 Original Lamb-wave signal represented by the black curve; the red curve is the signal reconstructed from AR(1) coefficients.

4.2 AR(1) coefficients from all five windows for baseline and fatigue signals. Most of the difference between baseline and damage signals occurs in windows 3 and 4.

4.3 Principal component scores of the raw signal; 5 PCs are used. (a) Baseline. (b) Fatigue loading applied.

4.4 Cumulative variance explained by additional principal components.

4.5 Control charts plotted from the principal component scores (coupon 1). (a) PCA-based fast initial response multivariate CUSUM (FIR MCUSUM) chart. (b) PCA-based MCUSUM. (c) PCA-based Hotelling's T² chart. (d) AR-based Hotelling's T² chart.

4.6 Control charts plotted from the principal component scores (coupon 2). (a) PCA-based fast initial response MCUSUM (FIR MCUSUM) chart. (b) PCA-based MCUSUM. (c) PCA-based Hotelling's T² chart. (d) AR-based Hotelling's T² chart.

4.7 Control charts plotted from the principal component scores (coupon 3). (a) PCA-based fast initial response multivariate CUSUM (FIR CUSUM) chart. (b) PCA-based MCUSUM. (c) PCA-based Hotelling's T² chart. (d) AR-based Hotelling's T² chart.

5.1 Flowchart of the proposed methodology.

5.2 Simulated delamination area growth from the fitted Wiener process along with the actual measured delamination areas. (a) Coupon one. (b) Coupon two.

5.3 (a) Estimated minimum squared prediction error for increasing number of principal components. (b) Mean squared prediction error for increasing number of principal components at different positions of the moving window.

5.4 (a) Fitted versus actual values for different window lengths; the mean squared prediction error (MSPE) is highest for the window of length 1400. (b) Fitted versus actual values for different positions of the moving window; position 3 has the least MSPE.

5.5 Delamination area prediction from the one-step-ahead forecast of the Wiener process model and the PC score of the Lamb-wave signal. The thin solid lines indicate predictions and the dashed lines indicate the 95% prediction interval. (a) Coupon one. (b) Coupon two.

5.6 Delamination area prediction from the one-step-ahead forecast of the Wiener process model and the PC score of the Lamb-wave signal. Red and green lines indicate the one-step-ahead and two-step-ahead forecasts, respectively. The dashed lines for both colors indicate the 95% prediction intervals.

5.7 Probability distribution function of the failure time for 3000 mm² and 1000 mm² delamination failure thresholds. (a) Coupon one. (b) Coupon two.

5.8 Probability distribution function of the failure time estimated using the moving window approach and the Wiener process. (a) Coupon 1 with 3000 mm² and 1000 mm² delamination failure thresholds. (b) Coupon 2 with 800 mm² and 600 mm² delamination thresholds.

5.9 Time-to-failure distribution using the Gamma process. (a) Coupon 1 at damage thresholds 1000 and 3000 mm². (b) Coupon 2 (L3S18) at damage thresholds 600 and 800 mm².

5.10 (a) Optimum window size for the Lamb-wave signal using the increasing window technique. (b) Optimum window size using the moving window technique.

6.1 Regularized linear discriminant analysis (RLDA) based Bayesian decision fusion is a three-stage process. In stage 1, the parameters of RLDA and Bayesian decision fusion are estimated using the training data. In stage 2, the sensor-level decision on any new incoming data is made using the parameters estimated in stage 1. In stage 3, the local sensor-level decisions are fused using the Bayesian decision fusion method, with parameters obtained from stage 1.

6.2 Classification error contours for different regularization (γ) and threshold (∆) parameters for sensors 1-7. (a) Specimen L2S17. (b) Specimen L3S18.

6.3 Scatter plot of the threshold parameter (∆) versus the regularization parameter (γ) for all sensors participating in the SHM system.

6.4 Accuracy of all sensors participating in the multi-sensory SHM system, plotted against their distance from the damage.

6.5 Receiver operating curve of the Bayesian method compared with the optimum Chair-Varshney rule, the ideal sensor rule, and the majority rule (also known as the counting rule). The Bayesian method is represented by a solid red line, the majority rule by a broken blue line, Chair-Varshney's rule by a broken green line, and the ideal sensor rule by a broken black line. (a) For specimen L2S17 (based on area under the curve), the ideal sensor rule seems to perform slightly better than the proposed Bayesian rule, as it hugs the north-west corner of the graph more closely. (b) For specimen L3S18, Bayesian decision fusion performs best among all decision fusion rules.

LIST OF ABBREVIATIONS

• AR: Auto-regressive
• ASTM: American Society for Testing and Materials
• ANN: Artificial Neural Network
• CDF: Cumulative Distribution Function
• CFRC: Carbon-fiber Reinforced Composite
• CM: Condition Monitoring
• DSF: Damage Sensitive Features
• FIR CUSUM chart: Fast Initial Response Cumulative-sum Chart
• LCL: Lower Control Limit
• MCUSUM chart: Multivariate Cumulative-sum Chart
• NDE: Non-destructive Evaluation
• PCA: Principal Component Analysis
• PCR: Principal Component Regression
• PDF: Probability Density Function
• PHM: Prognostics and Health Monitoring
• RLDA: Regularized Linear Discriminant Analysis
• RUL: Remaining Useful Life
• SHM: Structural Health Monitoring
• SVM: Support Vector Machine
• UCL: Upper Control Limit


ABSTRACT

Carbon fiber reinforced composites (CFRC) have several desirable traits that can be exploited in the design of advanced structures and systems. Applications requiring a high strength-to-weight ratio and a high stiffness-to-weight ratio, such as airplane fuselages, wind-turbine blades, and boats, have found profound use for CFRC. Furthermore, low density, good vibration-damping ability, easy manufacturability, the electrical conductivity of carbon fiber, as well as high thermal conductivity and a smooth surface finish provide additional benefits to users. Applications of CFRC are relevant to aerospace, the military, wind turbines, robotics, sports equipment, and more. However, among the many advantages of CFRC there are a few disadvantages: CFRC undergo completely different failure patterns compared to metals. Once the yield strength is exceeded, CFRC fail suddenly and catastrophically. The inherent anisotropic nature of CFRC makes it very difficult for traditional condition monitoring methods to assess the condition of the structure. The complex failure patterns, including delamination, micro-cracks, and matrix-cracks, require specialized sensing and monitoring schemes for composite structures.

This Ph.D. research focuses on developing an integrated structural health monitoring methodology for damage monitoring, remaining useful life (RUL) estimation, and decision fusion using Lamb-wave data. The main objective of this research is to develop an integrated damage detection method that utilizes Lamb-wave sensor data to infer the state of the damage condition and make an accurate prognosis of the structure. Slow fatigue loading results in very unique failure patterns in CFRC structures: fatigue damage first manifests itself as fiber breakage, then slowly progresses to matrix cracks, and ultimately leads to delamination damage. This type of failure process is very difficult to monitor using traditional damage monitoring methods such as X-ray, ultrasonic, and infrared evaluation.

For this research, we have used a principal component (PC) based multivariate cumulative sum (MCUSUM) chart to monitor the structure. The MCUSUM chart is very useful for monitoring structures undergoing slow and gradual change. For remaining useful life estimation, we have proposed to use a Wiener process model coupled with principal component regression (PCR). For damage detection and classification we studied discriminant analysis, which, in spite of its popular use in image analysis and in gene data classification problems, has not been widely used for damage classification. In this research, we showed that discriminant analysis is useful for detecting known damage modes while dealing with the high dimensionality of Lamb-wave data. We modified standard Gaussian discriminant analysis by introducing regularization parameters to directly process raw Lamb-wave data without requiring an intermediate feature extraction step.

CHAPTER 1

INTRODUCTION

According to the American Society of Civil Engineers' (ASCE) Bridge Report [2], over two hundred million trips are made daily across structurally deficient bridges in the United States. This equates to one in nine bridges across the nation being structurally deficient. In addition to bridges, many other structures, such as federally-owned and privately-owned buildings, dams, and roads, among others, which were built over the last half-century, are approaching, or have already exceeded, their intended design life. Currently, there is no scientific approach to confidently predict when a structure that experiences a natural hazard, such as an earthquake or hurricane, will be completely safe for public use.

Traditionally, various non-destructive testing techniques such as X-ray, ultrasound, and eddy-current evaluations [117], use of natural frequency [30], and fiber optics [159] have been used for damage monitoring. Damage can be defined as any change introduced into the structure that adversely affects its performance. While the traditional non-destructive techniques are very rigorous and enable multi-point inspection of structures, they require halting the daily operation of the structure, which entails a significant increase in operational cost [55].

Structural Health Monitoring (SHM) is an emerging interdisciplinary field of study for the monitoring of civil, mechanical, and aerospace structures. SHM has the potential to bring a paradigm shift in current maintenance strategies by replacing rigid practices, such as time-based maintenance, with more flexible, condition-based maintenance strategies. The availability of quantitative information about the damage makes the condition-based approach more accurate and cost effective, along with saving both operational and maintenance costs. The SHM process involves: observation and analysis of equally-spaced

dynamic responses; extraction of features from these responses; and statistical analysis of these features to assess the system's health. Guided waves, such as Lamb-waves, have been widely adopted by the non-destructive evaluation (NDE) industry due to their ability to combine detection accuracy with significant travel range. This Ph.D. research focuses on the development and application of statistical learning techniques for SHM of CFRC structures using guided Lamb-waves.

1.1 Need for the Study

An SHM system with a dense distribution of sensors and actuators is very efficient at monitoring structures. High-density sensor systems are biologically inspired techniques in which dense networks of sensors are permanently attached to the structure [6, 37]. An SHM system with high sensor density makes up for the lack of sensing capacity of individual sensors; however, creating such a system is highly costly. Therefore, SHM systems with sparse sensor distribution and high detection accuracy are needed for an economically sustainable monitoring system. Lamb-waves have the ability to travel long distances with minimum dispersion. This property enables the user to monitor the structure with a sparse sensor density.

This research focuses on answering three major research questions related to data-driven SHM. 1) Which statistical approach can be used to quantify the minuscule growth of damage occurring during fatigue loading? 2) Can Lamb-wave sensor data be used for the online prognosis of structures? 3) What is the most efficient way to fuse data from multiple sensors?

SHM systems generate a large amount of information and rely heavily on the data obtained from the sensors. However, recent statistical learning methods have so far seen limited use in SHM. Therefore, the outcomes of this research will contribute to the growing body of statistical learning methods in SHM and help existing SHM systems make more accurate and robust decisions about structural health.

The major challenges in current sensor-based health monitoring systems are:

• Identifying features that are sensitive to damage and robust to variations in operational and environmental conditions.
• Monitoring and quantifying damage based on the identified features.
• Generating a prognosis of the damage while considering uncertainty in the loading condition.

This research has addressed some aspects of the aforementioned challenges by proposing the following methods: the problem of damage monitoring is approached using a principal component analysis (PCA) based multivariate cumulative sum (MCUSUM) chart; useful life prediction is studied using a principal component regression (PCR) based Wiener process; and finally, decision fusion with multiple Lamb-wave sensors is accomplished using a supervised Bayesian learning algorithm.
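As a concrete illustration of the monitoring approach, the following minimal sketch combines PCA feature extraction with Crosier's multivariate CUSUM statistic. The function names, the use of Crosier's formulation, and the synthetic signals are illustrative assumptions on the editor's part, not the specific implementation or data of this research.

```python
import numpy as np

def fit_pca(baseline, n_pc):
    """Estimate the mean and leading PCA loadings from baseline signals
    (one signal per row)."""
    mean = baseline.mean(axis=0)
    _, _, vt = np.linalg.svd(baseline - mean, full_matrices=False)
    return mean, vt[:n_pc]

def pc_scores(signals, mean, loadings):
    """Project signals onto the principal-component loadings."""
    return (signals - mean) @ loadings.T

def mcusum(scores, mu, cov_inv, k):
    """Crosier's multivariate CUSUM statistic for a sequence of score vectors:
    accumulate deviations from mu, shrinking the accumulated vector by k."""
    s = np.zeros(scores.shape[1])
    stats = []
    for x in scores:
        c_vec = s + x - mu
        c = np.sqrt(c_vec @ cov_inv @ c_vec)
        s = np.zeros_like(s) if c <= k else c_vec * (1.0 - k / c)
        stats.append(float(np.sqrt(s @ cov_inv @ s)))
    return np.array(stats)

# Synthetic illustration: "damage" modeled as a shift along the first loading
rng = np.random.default_rng(1)
baseline = rng.normal(size=(60, 500))            # 60 baseline signals
mean, loadings = fit_pca(baseline, n_pc=3)
train = pc_scores(baseline, mean, loadings)
mu, cov_inv = train.mean(axis=0), np.linalg.inv(np.cov(train.T))

monitored = rng.normal(size=(60, 500))
monitored[30:] += 12.0 * loadings[0]             # shift appears at sample 30
stats = mcusum(pc_scores(monitored, mean, loadings), mu, cov_inv, k=1.0)
```

Because the CUSUM accumulates small consistent deviations, the chart statistic stays near zero for the first 30 in-control samples and climbs steadily afterward, which is the behavior that makes this family of charts attractive for slow, gradual damage growth.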

1.2 General Outline and Components of the Study

In this section we discuss the different components of the proposed research and how they are interrelated. The main focus of this study is to develop an integrated SHM algorithm that monitors the damage, prognoses the structural health, and fuses decisions from multiple damage detection sensors. The data referenced in this study were obtained from experiments performed in a controlled laboratory environment. In addition, this study lays a strong foundation for evolving the presented methods to work in environments with varying ambient environmental and loading conditions.

We have proposed to use principal component analysis (PCA) as the damage sensitive feature (DSF) extraction method due to the flexibility it offers: using PCA, one can decide how much information (variance) to retain in the feature vector. These feature vectors are then used in a multivariate CUSUM chart for novelty monitoring. The next step of the research is to quantify the severity of the damage. Because augmentation of

the damage is a cumulative and monotonic process, we have proposed a Principal Component Regression (PCR) based method to quantify the damage based on Lamb-wave readings. The predictions from the PCR model are further used in a Wiener process model to predict the useful life of the structure. Finally, we develop a supervised decision fusion algorithm. We present a regularized discriminant analysis of raw Lamb-wave data which does not require extracting features from the Lamb-waves. Using raw sensor data for damage detection reduces the loss of information that would otherwise occur during the feature extraction process. Figure 1.1 gives a pictorial view of the different phases of this research.

Figure 1.1: Flow chart depicting various phases of the research.
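The prognosis branch of the flow chart (PCR followed by a Wiener process) can be sketched as follows. For a Wiener degradation process X(t) = x₀ + µt + σB(t), the first time the process crosses a failure threshold D follows an inverse Gaussian distribution. The function names and all numeric values below are illustrative assumptions, not the implementation or data of Chapter 5.

```python
import numpy as np

def fit_pcr(scores, damage):
    """Ordinary least-squares fit of damage size on PC scores (with intercept)."""
    A = np.column_stack([np.ones(len(scores)), scores])
    beta, *_ = np.linalg.lstsq(A, damage, rcond=None)
    return beta

def rul_pdf(t, x_now, mu, sigma, threshold):
    """Inverse-Gaussian density of the time until a Wiener process with drift mu
    and diffusion sigma, currently at x_now, first crosses the failure threshold."""
    d = threshold - x_now
    return d / np.sqrt(2.0 * np.pi * sigma**2 * t**3) * \
        np.exp(-((d - mu * t) ** 2) / (2.0 * sigma**2 * t))

# PCR step: recover a known linear damage-vs-score relationship
rng = np.random.default_rng(0)
scores = rng.normal(size=(30, 2))
damage = 5.0 + scores @ np.array([2.0, -1.0])
beta = fit_pcr(scores, damage)                     # recovers [5.0, 2.0, -1.0]

# RUL step: damage now 200 mm^2, drift 4 mm^2 per kilocycle, threshold 1000 mm^2
t = np.linspace(1e-3, 500.0, 10_000)
pdf = rul_pdf(t, x_now=200.0, mu=4.0, sigma=20.0, threshold=1000.0)
mean_rul = (1000.0 - 200.0) / 4.0                  # expected RUL: 200 kilocycles
```

The expected remaining life is simply the remaining damage margin divided by the drift, while the full density quantifies the uncertainty around that estimate.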

1.3 Research Hypothesis and Limitations

The main hypothesis that we had before starting this research is that there are specific windows or regions of the Lamb-wave signal that contain most of the information carried by the signal. All three dimension-reduction techniques discussed in this research, 1) principal component analysis, 2) moving window / increasing window, and 3) regularization, seek to identify the region of the Lamb-wave signal that contains the largest amount of information. Another major hypothesis is that the large principal components eliminate the noise in the Lamb-wave signal that arises from environmental and operational variation. In comparison to other feature extraction techniques (such as autoregressive (AR) modeling), PCA results in minimal loss of information and is computationally more efficient than AR models. We have also developed new hypotheses to guide our future work: although regularized linear discriminant analysis (RLDA) is efficient at handling the high-dimensional Lamb-wave signal and results in very little loss of information, it is still not tailored to process signals containing environmental noise.

The four major limitations of this research are as follows:

• The data studied in this research were obtained from anisotropic CFRC plates with different ply layup configurations.
• The fatigue experiments used to obtain the data were conducted under constant environmental conditions, which is unrealistic for real-world applications, where SHM systems must endure varying environmental and operational conditions.
• During our analysis we assume that the rate at which the structure deteriorates is constant.
• More importantly, the study does not consider damage occurring due to impact loading, which may lead to a completely different failure pattern.
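The windowing hypothesis stated at the beginning of this section can be illustrated with a small sketch that scores candidate windows of a signal by how strongly baseline and damaged signal groups differ within each window. The scoring rule here (mean absolute difference of the group averages) and all numbers are illustrative assumptions, not the moving/increasing window procedure used later.

```python
import numpy as np

def window_scores(baseline, damaged, n_windows):
    """Split the signals into equal windows and score each window by the mean
    absolute difference between the baseline and damaged group averages."""
    length = baseline.shape[1] // n_windows
    scores = []
    for w in range(n_windows):
        seg = slice(w * length, (w + 1) * length)
        diff = baseline[:, seg].mean(axis=0) - damaged[:, seg].mean(axis=0)
        scores.append(float(np.abs(diff).mean()))
    return np.array(scores)

# Synthetic example: the "damage" alters only the third of five windows
rng = np.random.default_rng(2)
baseline = rng.normal(scale=0.1, size=(20, 1000))
damaged = rng.normal(scale=0.1, size=(20, 1000))
damaged[:, 400:600] += 0.5
scores = window_scores(baseline, damaged, n_windows=5)
best = int(np.argmax(scores))                      # index 2, i.e. the third window
```

Selecting the highest-scoring window keeps the damage-sensitive region of the signal and discards samples dominated by noise, which is the intuition behind all three dimension-reduction techniques listed above.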

1.4 Dissertation Outline

The rest of the dissertation is organized as follows. Chapter 2 presents an extensive literature review of the physics of guided Lamb-waves and different statistical learning algorithms. Chapter 3 gives a detailed explanation of the data sets used for this dissertation and the experiments performed to obtain them. Chapter 4 discusses the multivariate cumulative-sum (MCUSUM) based statistical control chart and compares it to other popularly used damage monitoring methods. Chapter 5 presents a data-driven prognostic algorithm based on principal component regression and the Wiener process. In Chapter 6 we present a supervised Bayesian multi-sensory decision fusion algorithm and compare its performance with some state-of-the-art decision fusion methods. Finally, Chapter 7 summarizes the findings of this research and presents an outline for future work.


CHAPTER 2

LITERATURE REVIEW

2.1 Theoretical Framework of Lamb-Wave

Lamb-waves are guided ultrasonic waves that can travel long distances with minimum dispersion and can be excited and detected using piezoelectric sensors. The feature that makes Lamb-waves very useful for damage detection is their sensitivity to changes in the density of the transmitting material: as soon as the wave encounters a change in the density of the medium, a portion of the wave is reflected, and this portion is directly proportional to the change in density. The non-dispersive nature of Lamb-waves helps eliminate noise from the signal by reducing the amount of boundary reflections. Lamb-waves can be efficiently generated using piezoelectric transducers, which are devices that convert electrical pulses into mechanical vibration and vice versa [28].

There are two popularly used schemes to generate Lamb-wave signals: 1) passive schemes and 2) active schemes. Passive schemes, including acoustic emission and strain/load monitoring, have been demonstrated to have the drawback of requiring high sensor densities on the structure [14, 33, 74, 87, 106, 112, 113]. In contrast, active schemes using guided-wave based structural health monitoring are increasingly popular for monitoring structures with sparsely distributed sensors [104]. In active sensing, piezoelectric actuator patches permanently attached to, or embedded in, the structure are used to interrogate it: the actuators generate sinusoidal waves and the sensors measure the resulting response. The waves propagating through the structure are detected by a set of receiving sensors. Any damage to the structure will result in changes in the received waves due to reflection or scattering of the waveform. The presence of damage is then determined by comparing the detected waveform to a baseline measured from the undamaged structure.
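A minimal sketch of this baseline-comparison idea is given below, using one common damage index: one minus the normalized zero-lag correlation between the baseline and the measured waveform. The index choice and the synthetic tone burst are illustrative assumptions by the editor, not the specific metric used in this dissertation.

```python
import numpy as np

def damage_index(baseline, measured):
    """One minus the normalized zero-lag correlation between a baseline and a
    measured waveform: ~0 for an unchanged signal, growing as damage-induced
    reflection and scattering decorrelate the two."""
    b = baseline - baseline.mean()
    m = measured - measured.mean()
    rho = (b @ m) / (np.linalg.norm(b) * np.linalg.norm(m))
    return 1.0 - rho

# Synthetic 300 kHz tone burst, and an attenuated, time-shifted version of it
t = np.linspace(0.0, 1e-4, 2000)
burst = np.sin(2.0 * np.pi * 3e5 * t) * np.hanning(t.size)
di_same = damage_index(burst, burst)                     # essentially zero
di_changed = damage_index(burst, 0.7 * np.roll(burst, 60))
```

Note that a pure amplitude change leaves this particular index at zero (correlation is scale-invariant), which is why practical systems typically track several complementary indices rather than a single one.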

Lamb-waves were first modeled mathematically by Horace Lamb [77]. However, the credit for laying the theoretical foundation of Lamb-waves goes to Gazis [44, 45], who developed the dispersion equations for the Lamb-wave. Similarly, Worlton [155] was the first person to experimentally reproduce Lamb-waves and to recognize their potential for non-destructive testing (NDT). Guided Lamb-waves are waves that are guided by the finite dimensions of the test structure. In reality, most structures that we deal with, such as cylindrical pipes, the fuselage of an airplane, and composite plates cut into thin strips, have finite dimensions; therefore the term "guided Lamb-waves" is practically more relevant than "Lamb-waves". In general, guided waves are any kind of stress waves forced to follow a path defined by the material boundaries of the surface; they have the ability to interact with defects in the structure because their propagation properties are highly sensitive to defects in the material [104, 123, p. 136]. Guided waves can usually travel long distances in solids; however, some attenuation of the wave occurs due to scattering and absorption by the material. Guided Lamb-waves, which will heretofore be referred to simply as Lamb-waves, are guided between the two free, parallel surfaces of a plate or a shell, and their sensitivity to damage depends on the driving frequency [1, 48, 142]. A piezoelectric transducer is a device that converts electrical pulses to mechanical vibrations and converts mechanical vibrations back into electrical energy [28]; such transducers are well proven and the most widely used sensors for guided wave generation [29, 64, 65, 68, 96]. Many factors, including the material, the mechanical and electrical construction, and the external mechanical and electrical load conditions, influence the behavior of these transducers. The sensitivity of an acoustic sensor is determined by its bandwidth and resonant frequency.
A very interesting property of Lamb-waves is that they propagate at a velocity that depends on the frequency of the wave, with minimal dispersion [28, 107]. This property makes Lamb-wave based SHM systems more suitable for detecting global damage than the vibration based techniques (passive schemes), which are more sensitive to local damage [69]. Lamb-waves have two basic modes of displacement pattern: the symmetric mode and the anti-symmetric mode. While the symmetric Lamb-wave mode is a radial in-plane displacement of particles, the anti-symmetric mode is an out-of-plane displacement of particles, as illustrated in Figure 2.1. For an isotropic and homogeneous plate of thickness 2h (half-thickness h), the general wave equation for the Lamb-wave can be written as Equation (2.1a) [127, p. 20].

Figure 2.1: (a) Symmetric mode (radial in-plane motion) (b) Anti-symmetric mode (out-of-plane motion) [127]

tan(qh)/tan(ph) = 4k²qpµ / [(λk² + λp² + 2µp²)(k² − q²)]    (2.1a)

p² = ω²/c_L² − k²,   q² = ω²/c_T² − k²,   k = 2π/λ_wave    (2.1b)

where k, ω and λ_wave are the wave-number, circular frequency and wavelength of the wave, respectively, and c_L and c_T are the velocities of the longitudinal and transverse/shear modes, respectively. Equation (2.1a) can be split into two parts, the symmetric and anti-symmetric modes [127, p. 20]:

tan(qh)/tan(ph) = −4k²qp / (k² − q²)²    (symmetric mode)    (2.2a)

tan(qh)/tan(ph) = −(k² − q²)² / (4k²qp)    (anti-symmetric mode)    (2.2b)
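As a numeric illustration of Equation (2.1b), the sketch below evaluates the wavenumber and the p, q parameters for an assumed trial phase velocity; the aluminum-like bulk velocities, frequency, and phase velocity are illustrative textbook-style assumptions, not values from this dissertation.

```python
import numpy as np

# Illustrative (assumed) values: aluminum-like bulk velocities, a 500 kHz
# excitation, and a trial phase velocity lying between c_T and c_L.
c_L, c_T = 6320.0, 3130.0          # longitudinal and shear velocities (m/s)
f = 500e3                          # excitation frequency (Hz)
c_p = 5000.0                       # trial phase velocity of the mode (m/s)

omega = 2 * np.pi * f              # circular frequency
k = omega / c_p                    # wavenumber of the guided mode
lam_wave = 2 * np.pi / k           # wavelength, since k = 2*pi/lambda_wave
p2 = (omega / c_L) ** 2 - k ** 2   # p^2 from Eq. (2.1b)
q2 = (omega / c_T) ** 2 - k ** 2   # q^2 from Eq. (2.1b)
```

For c_T < c_p < c_L the code gives p² < 0 (p imaginary) while q² > 0, which is one reason the roots of Equations (2.2a) and (2.2b) are in practice found numerically rather than in closed form.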

The number of propagating modes increases, and the wave field becomes more complicated, with an increase in frequency. Thus, in order to simplify the detection process, it is important to excite only the lowest order modes (A0 and S0). Both of these modes, the fundamental anti-symmetric (A0) and symmetric (S0) modes, have some very interesting properties that are extremely useful for non-destructive evaluation of the structure. Some notable properties of the A0 and S0 modes are as follows: the S0 mode travels faster than the A0 mode; the A0 mode loses more energy to boundary reflections; and for low frequency-thickness values (usually around the 1 MHz-mm range) only the lowest order modes are excited [69, 80]. The anti-symmetric mode travels along the thickness of the structure, hence it is widely used to detect delamination damage; the symmetric mode, which travels along the longitudinal dimension of the structure, is used to detect damage related to holes and matrix cracking in composites [56]. Two main approaches used for damage identification are: 1) model-driven methods and 2) data-driven methods. Model-driven methods establish a physical model of the structure, typically analyzed with the finite element method. Data-driven methods seek to establish a statistical model of the system; deviations from normality are signaled by data or statistics that appear in regions of very low density.

2.2 Statistical Learning Approaches in Structural Health Monitoring

2.2.1 Supervised Damage Detection Methods

Although a majority of damage detection methods in structural health monitoring have used unsupervised learning methods, which do not need a mapping function to be trained on input-output pairs, this research uses supervised regularized discriminant analysis because the results of the training data can be utilized as prior information for multi-sensory decision fusion. Supervised learning is a technique for deducing a mapping function from training data-sets consisting of input-output pairs. Discriminant analysis is a powerful supervised classification method which uses the concept of decision boundaries to discriminate between features from different classes. Farrar et al.

[36] used Fisher's discriminant method to identify structural damage in physical systems. Worden et al. [153] used the kernel density estimation method to estimate the density of the damage-sensitive features and then used it to find the Bayesian posterior probability of each class to make decisions about the health of the structure. Larrosa et al. [78] used time-domain analysis and the short-time Fourier transform combined with Gaussian discriminant analysis to characterize damage in composites. Cremona et al. [19] used supervised learning methods such as Bayesian decision trees and support-vector machines to discriminate between structural features of pristine and damaged structures. Cheung et al. [13] used auto-regressive (AR) coefficients as the DSF, and a supervised form of the Mahalanobis distance to quantify the damage extent. Soft computing techniques, including the artificial neural network (ANN), have been successfully used by a few researchers. Baniotopoulos [3], Engelhardt et al. [34], Stavroulakis and Antes [124], Stavroulakis [125], and Ziemiański and Harpula [160] have used ANNs for damage identification. Damage-sensitive features such as modal shapes, natural frequencies, displacements, acceleration spectra and strain are used to train the network of an ANN [126]. The support vector machine (SVM) is a classification technique that separates two classes of data using a hyper-plane. Vapnik [138, 139] discusses the basic foundations of support vector machines. Worden and Manson [152] have demonstrated the use of SVMs to locate damage in an aircraft wing. Similarly, Bornn et al. [7] have used auto-regressive support vector machines for damage detection; the authors used the support vector machine to create a nonlinear estimate of the time-series model that provides an alternative to linear auto-regressive (AR) models. Song et al. [122] have used a procedure which involves extracting independent components from measured sensor data and then using them as input data for an SVM classifier. The major limitation associated with the aforementioned supervised damage detection methods is that their classification accuracy significantly decreases with an increase in the dimension of the input data.
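The AR-coefficient damage-sensitive feature mentioned above can be sketched as follows; the least-squares fitting routine and the toy series are illustrative assumptions, not the procedure of any cited paper.

```python
import numpy as np

def ar_coefficients(x, order):
    """Least-squares fit of an AR(order) model x[t] = sum_j a_j * x[t-j] + e[t];
    the fitted coefficient vector serves as the damage-sensitive feature."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Lag matrix: column j holds x[t-j-1] for t = order .. n-1
    X = np.column_stack([x[order - j - 1: n - j - 1] for j in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Noise-free AR(1) series with coefficient 0.5; the fit should recover it
series = 0.5 ** np.arange(50)
a = ar_coefficients(series, order=1)
```

In a damage detection setting the coefficient vectors from baseline and test signals would then be compared, e.g. with a discriminant or a Mahalanobis distance.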


2.2.2 Unsupervised Damage Detection Methods

Unsupervised learning techniques do not require a mapping function to be trained on input-output pairs. These algorithms are designed to identify the patterns present in the data as they are fed in. Unsupervised methods are especially useful for structural health monitoring systems for which data from damaged structures are scarce or non-existent; methods such as statistical process control or outlier analysis have commonly been used in SHM for novelty or damage detection [91, 157]. In outlier analysis, if the discordancy measure exceeds a threshold, the data are flagged as discordant or novel. The Mahalanobis squared distance is a popular discordancy measure for high-dimensional data [38]. One limitation of the Mahalanobis distance metric is the requirement that the data have an elliptical shape in the feature space and be uni-modal; any departure from these assumptions will lead to inaccurate detection performance. Cluster analysis can be used as an alternative to outlier analysis; it allows one to distinguish between different groups within the data without the need to define baseline and damaged signals. Although it is very efficient at detecting damage, its computational complexity has limited its application in SHM systems for large scale structures [26, 121]. Cury [22], Cury et al. [23], and Santos et al. [110] have used symbolic dissimilarity measures to overcome the computational complexity of cluster analysis. Statistical process control charts are a more direct method of detecting changes in the statistical behavior of a process. They provide a framework to monitor future data and to make sure that any new incoming data are consistent with past data.
If the mean (µ) and standard deviation (σ) of the data are known, control charts can be created by drawing horizontal lines through the upper control limit (UCL) and lower control limit (LCL) at µ + kσ and µ − kσ respectively, where k is a number chosen so that, when the structure is in good condition, a large percentage of observations will fall within the control limits [38].
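The µ ± kσ control limits just described can be sketched in a few lines; the simulated in-control feature values and the shifted observation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.normal(loc=10.0, scale=2.0, size=500)  # in-control feature values

mu, sigma, k = baseline.mean(), baseline.std(ddof=1), 3.0
ucl, lcl = mu + k * sigma, mu - k * sigma             # control limits

def out_of_control(x):
    """Flag observations falling outside the mu +/- k*sigma limits."""
    x = np.asarray(x, dtype=float)
    return (x > ucl) | (x < lcl)

new_data = np.array([9.5, 10.8, 21.0])  # the last value simulates a damage shift
flags = out_of_control(new_data)
```

With k = 3, roughly 99.7% of in-control normal observations fall within the limits, so a flagged point is a strong indication of a statistical change.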


Soft computing techniques have been applied in an unsupervised fashion as well. The auto-associative neural network (AANN) was first proposed by Pomerleau [101]. An AANN is trained only on normal-condition features and does not require any information regarding the damaged condition. Surace and Worden [128] and Worden [151] have used AANNs to detect damage in structures. Nonparametric density estimation, such as kernel density estimation, is another popularly used method: once the PDF is known, new data can be accepted or rejected on the basis of the PDF magnitude at the feature [38].
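The kernel-density novelty check can be sketched as below; the Gaussian kernel, bandwidth, and simulated baseline features are illustrative assumptions.

```python
import numpy as np

def gaussian_kde_pdf(x, train, bandwidth):
    """Kernel density estimate of the baseline feature PDF evaluated at x."""
    x = np.atleast_1d(x).astype(float)[:, None]
    kernels = np.exp(-0.5 * ((x - train[None, :]) / bandwidth) ** 2)
    return kernels.sum(axis=1) / (train.size * bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, 400)   # features from the undamaged state
h = 0.3                             # assumed bandwidth

pdf_normal = gaussian_kde_pdf(0.1, train, h)[0]  # high density: accepted
pdf_novel = gaussian_kde_pdf(6.0, train, h)[0]   # low density: flagged as novel
is_novel = pdf_novel < 1e-3
```

The rejection threshold (here 1e-3) would in practice be calibrated from the density values of held-out baseline features.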

2.3 Principal Component Based Monitoring

This section reviews the literature on PCA as a feature extraction method and its application in SHM. The part of SHM that is highly emphasized is feature extraction, which allows one to distinguish between undamaged and damaged structures [30, 119]. Feature extraction involves reducing the amount of resources required to describe a large set of data. In Chapter 4, we will explore PCA as a feature extraction tool for Lamb-wave data and its application in multi-variate statistical control charts.
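The PCA feature extraction idea can be sketched with an SVD-based projection; the simulated amplitude-scaled signals below are illustrative assumptions standing in for Lamb-wave measurements.

```python
import numpy as np

def principal_scores(X, n_components):
    """Project mean-centered signals X (rows = signals, columns = time samples)
    onto the leading principal directions; the scores are the features."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T              # principal component scores
    explained = s[:n_components] ** 2 / np.sum(s ** 2)
    return scores, explained

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 800)
signals = np.array([a * np.sin(2 * np.pi * 5 * t) + 0.01 * rng.normal(size=t.size)
                    for a in (1.0, 0.9, 1.1, 0.5)])  # last row mimics attenuation
scores, explained = principal_scores(signals, 2)      # 800 samples -> 2 features
```

Here 800-sample signals are reduced to two scores per signal, and the first component captures nearly all of the amplitude-related variance.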

2.3.1 Principal Component Analysis

Principal component analysis is an orthogonal transformation that converts a set of high-dimensional and highly correlated variables into a lower-dimensional, linearly uncorrelated set. In this research, the method extracts features of Lamb-wave signals, called principal scores (computed using eigenvectors), that are sensitive to simulated through-hole cracks and delamination damages in a set of experiments. Manson et al. [86] used the first two principal scores of acoustic emission (AE) signals to visualize the effect of crack and friction related events. Worden et al. [154] used various dimension reduction methods, including simple projection, linear and nonlinear principal components and Sammon mapping [109], to reduce the dimension of both synthesized and experimental data. PCA was used to cluster damages into different classes; the authors showed that PCA is able to clearly separate no-damage cases from damaged cases. Pullin et al. [103] also applied PCA to acoustic signals to differentiate fatigue crack propagation signals from the background noise of a landing gear component. The authors applied PCA to six different artificial signals, namely Hann waves 1-3 and saw-tooth waves 1-3, and showed that a plot of the first and second principal components successfully differentiates between signals that are distinctly different from each other. Furthermore, the paper applies PCA to signals emanating from an artificial fracture source and to the rest of the test signal, which is presumed to be noise; the principal component scores depict a clear separation between the artificial source and the landing gear noise. Pavlopoulou et al. [99] used non-linear principal component analysis and principal curves for damage prognosis. This is a generalization of linear PCA: instead of employing only orthogonal projections of the data onto axes, the data can also be projected onto curves or surfaces. Cross et al. [21] proposed a method to filter out environmental variations from the data. For an undamaged structure subject to variable environmental conditions, the data are projected onto minor components that account for very small variances, so that the dimensions of the data that carry any dependence on environmental factors can be discarded. A carbon fiber reinforced composite (CFRC) specimen was scanned with Lamb waves in an undamaged state under normal operating conditions, an undamaged state under cyclic temperature variation, and a damaged state under cyclic variation; the authors showed how all three states cluster separately after principal component analysis. Bellino et al. [4] used PCA to detect damage in time-varying systems. The authors claimed that the method can not only detect the presence of damage but also properly distinguish among different crack depths; environmental conditions such as mass and velocity that affect the modal parameters of a linear time variant (LTV) system, in which the modal parameters change with time, were successfully removed by the PCA method, making it possible to detect damage. PCA was used in conjunction with the Mahalanobis squared distance, which was used to define a novelty index. Viet Hà and Golinval [141] applied PCA to the frequency response of a beam at different locations, not only to identify damage but also to locate and evaluate it. Kessler and Agrawal [67] also used Lamb-wave testing coupled with principal component analysis to detect the presence, type and severity of damage. The authors reduced the raw time-series data from 800 data points to 20, which they claimed explains 70% of the data variance. The features extracted by principal component analysis were used to train pattern recognition algorithms such as K-nearest neighbor and neural networks for further classification.

2.3.2 Statistical Control Charts Using Principal Components

Statistical process control methods have been used extensively for variation reduction in the manufacturing industry. The Shewhart control charts are employed as the main tool to detect shifts from an in-control statistical model and to make sure the process continues to operate in a stable manner [89]. When multiple correlated quality characteristics are of interest, multivariate control charts should be used to monitor all characteristics simultaneously. Hotelling's T² is the multivariate counterpart of the Shewhart chart for monitoring the mean vector of a process [85]. While the Shewhart charts can detect large shifts reasonably well, cumulative sum procedures are recommended to better detect small shifts [149]. The cumulative sum utilizes the entire history of the observed data, in contrast to the Hotelling's T² or Shewhart charts, which utilize only the current data point, and is therefore more sensitive to gradually developing small shifts in the signal mean. Multivariate cumulative sum (MCUSUM) methods have been proposed by Woodall and Ncube [150], who used multiple univariate CUSUMs to test shifts in the mean of a multivariate normal variable, and by Crosier [20], who accumulated deviations of the vectors from the baseline and produced a quadratic form to find a scalar monitoring statistic. Pignatiello and Runger [100] compared and outlined the benefits of various MCUSUM approaches.


Statistical process control techniques have been utilized in structural health monitoring by many authors. Sohn et al. [118] used Shewhart X̄ charts to monitor the coefficients of an auto-regressive time series model fitted to the measured vibration time history from an undamaged structure. The control limits of the charts are used to detect deviations of the coefficients from those of the initial structure for damage detection. For monitoring multiple features extracted from sensor signals, Worden et al. [153] and Sohn et al. [120] extended the univariate control charts to multivariate charts by using a Mahalanobis distance; it quantifies the distance between a potential outlier vector and the in-control sample mean vector, a measure similar to the Hotelling T² statistic. Mujica et al. [92] used PCA in conjunction with a T² statistic to extract features from a multi-sensory arrangement on a turbine blade and to detect variation due to damage, in the subspace of the dominant principal components, that is greater than what can be explained by common cause variation. Deraemaeker et al. [27] applied factor analysis to subdue the effects of environmental fluctuations on the data and used multivariate control charts for damage detection. Kullaa [76] used a missing data model to eliminate environmental and operational variances and a Hotelling T² chart to monitor changes in modal parameters and to detect possible damage in the structure.
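A minimal sketch of Hotelling's T² monitoring on multivariate features is given below; the simulated baseline scores and the magnitude of the injected mean shift are illustrative assumptions.

```python
import numpy as np

def hotelling_t2(scores, baseline_mean, baseline_cov):
    """Hotelling's T^2 statistic for each row of `scores`, relative to the
    in-control (baseline) mean vector and covariance matrix."""
    diff = scores - baseline_mean
    inv_cov = np.linalg.inv(baseline_cov)
    # Row-wise quadratic form: diff_i^T * inv_cov * diff_i
    return np.einsum('ij,jk,ik->i', diff, inv_cov, diff)

rng = np.random.default_rng(3)
baseline = rng.normal(size=(200, 3))               # in-control PC scores
mu = baseline.mean(axis=0)
S = np.cov(baseline, rowvar=False)

in_control = hotelling_t2(baseline[:5], mu, S)     # small statistics
shifted = hotelling_t2(baseline[:5] + 5.0, mu, S)  # simulated mean shift
```

In practice an upper control limit for T² would be set from the F (or chi-squared) distribution of the statistic under in-control conditions.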

2.3.3 Principal Component Regression

Lamb-wave sensor data are highly multi-collinear. Hence, the precision of a regression model estimated from raw sensor data significantly decreases, resulting in a model that over-fits the data: small changes in the input data can lead to large changes in the model. To overcome this problem, a subset of predictor variables that does not contain multicollinearity can be used in the regression model. The principal goal is to shrink the solution coefficient vector away from the ordinary least squares (OLS) solution towards directions of larger sample spread [42]. Principal component regression (PCR) starts by using the principal components (PCs) of the predictor variables in place of the predictor variables; as the PCs are uncorrelated, there is no multicollinearity issue in the regression model [61]. The idea of PCR was first proposed by Kendall [66]; Hotelling [54] and Jeffers [59] are also regarded as early contributors to PCR. Jolliffe [62], in his pioneering work, provided an extensive review of the different approaches to PCR. In PCR, the principal component scores are used as regressors to build a prediction model; when a large degree of multicollinearity (near linear dependence) exists among the regressor variables, the PC scores provide an uncorrelated basis for the regression [90, p. 355]. PCR has been extensively used in chemometrics research for the calibration of spectral data with corresponding property measurements [47]. Frank and Friedman [42] discussed the statistical properties of PCR. Geladi [47] used PCR for calibration between a desired property of a sample and its spectrum or variables. Chang et al. [10] used PCR to predict diverse soil properties using near-infrared reflectance spectroscopy. Hernandez-Arteseros et al. [52] applied PCR to luminescence data for the screening of Ciprofloxacin and Enrofloxacin in animal tissues. Fekedulegn et al. [40] used PCR as a means to cope with multicollinearity among independent variables in ecological data. Tan et al. [129] proposed the use of total PCR to classify tumors, by extracting latent variables underlying DNA micro-array data from the augmented subspace of both the independent and dependent data. Due to the presence of multicollinearity between variables, PCR is widely used in chemometrics; however, PCR has not been used in SHM applications, which also involve a large number of interrelated variables.
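A compact PCR sketch on deliberately collinear predictors is shown below; the simulated design matrix, the nearly duplicated column, and the number of retained components are illustrative assumptions.

```python
import numpy as np

def pcr_fit(X, y, n_components):
    """Principal component regression: regress y on the leading PC scores of X,
    then map the coefficients back to the original predictor space."""
    x_mean = X.mean(axis=0)
    Xc = X - x_mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                  # retained loading vectors
    Z = Xc @ V                               # uncorrelated PC scores
    gamma, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
    beta = V @ gamma                         # coefficients in original space
    intercept = y.mean() - x_mean @ beta
    return beta, intercept

rng = np.random.default_rng(4)
base = rng.normal(size=(100, 2))
# Third column nearly duplicates the first -> severe multicollinearity
X = np.column_stack([base, base[:, 0] + 1e-6 * rng.normal(size=100)])
y = X @ np.array([1.0, 2.0, 0.0]) + 0.01 * rng.normal(size=100)

beta, b0 = pcr_fit(X, y, n_components=2)
pred = X @ beta + b0
```

Discarding the near-degenerate third direction stabilizes the fit, which is the mechanism exploited for multi-collinear Lamb-wave features in later chapters.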

2.4 Prognostics and Remaining Useful Life Estimation

Prognostics is the study of making predictions about a performance metric of a system based on historical data observed from the system. Statistical prognostic methods can be broadly classified based on the type of condition monitoring (CM) data available from the system under consideration: direct CM based methods and indirect CM based methods. In the direct CM based methods, the underlying cause of the deterioration of the structure is directly observed and used to model the failure. In the indirect CM based methods, indirect observations that indicate the underlying state of the structure are used for the modeling, e.g. vibration and oil based monitoring [115]. A regression-based degradation model is used to estimate the general path with indirect CM data, as discussed by Lu and Meeker [84]. Gebraeel et al. [46] focused on developing a model that predicts the remaining useful life (RUL) of the structure using sensor observations; a mapping function that links the historical data with the real-time CM data is developed and continuously updated using a Bayesian framework. With direct CM data no intermediary regression model is required: the states of the structure are directly observed and are modeled as a stochastic process to predict future values. The Markov process is a widely used stochastic process that is popular in degradation modeling. Proschan [102] states that a Markov process is a process such that, given the value of X(t), the values of X(τ) for τ > t are independent of the values of X(u), u < t. The process is said to have the Markov property if [105]:

P(X(t + s) = j | X(s) = i, X(u) = x(u), 0 ≤ u < s) = P(X(t + s) = j | X(s) = i)

for all possible x(u), 0 ≤ u < s. The Markov property states that the conditional probability of any future event, given the past events and the present state, is independent of the past and depends only on the present state [53]. Esary and Marshall [35] modeled a continuous wear process, relaxing the assumption that deterioration occurs only at discrete time points; the authors modeled the continuous wear process as a non-decreasing Markov process. Cinlar [15] improved on the work of Esary and Marshall [35] by demonstrating that the wear process is a Markov additive


process. Kharoufeh [70] used sensor data to estimate the full and residual lifetime distributions for a single-unit system subjected to a stochastically evolving environment. Classes of Markov processes which are useful for modeling stochastic deterioration are 1) discrete-time Markov processes with a finite or countable state space and 2) continuous-time Markov processes with independent increments. The Wiener process (Brownian motion with drift), the compound Poisson process, and the gamma process are three kinds of continuous-time Markov processes [136]. Brownian motion is a stochastic process with independent, real-valued increments having a normal distribution with mean µt and variance σ²t for all t ≥ 0. The compound Poisson process is a stochastic process with independent and identically distributed jumps which occur according to a Poisson process [108]. The gamma process is a stochastic process with independent, non-negative increments having a gamma distribution with an identical scale parameter. Both the gamma process and the compound Poisson process are jump processes; however, according to Singpurwalla et al. [116], the main difference between them is that the compound Poisson process has a finite number of jumps in a finite time interval, whereas the gamma process has an infinite number of jumps in a finite time interval. The authors presented closed-form solutions for the cumulative distribution function and moments of the system lifetime via an analysis of a bivariate Markov process. Kharoufeh and Cox [71] presented the full degradation procedure for estimating the full and residual lifetime distributions for a single-unit system subjected to Markovian deterioration. The inability of the Markov process to retain memory of previous states poses a major challenge to its implementation. The Wiener process was first introduced as a mathematical model for Brownian motion, the random zigzag motion of microscopic particles suspended in a liquid.
Figure 2.2 shows twenty random Wiener process realizations. A random walk is the simplest form of Brownian motion: a particle starts from the origin and, in each unit of time, moves one step to the left or one step to the right with equal probabilities 1/2. A Brownian motion with drift is a continuous-time stochastic process X(t), t ≥ 0 with drift parameter µ and variance

Figure 2.2: Twenty different simulated paths generated by a Wiener process with drift parameter µ = 4.84 × 10⁻⁴ mm² and process variance σ² = 84 × 10⁻⁴

parameter σ², σ > 0, having the property X(t) ∼ N(µt, σ²t) for all t > 0 [63]. When µ = 0 the process X(t) is simply called a Brownian motion process. For µ ≠ 0 the process is called a Wiener process with drift µ and variance parameter σ² [18]. For such a process, a small increment X(t + ∆t) − X(t) over a very small time interval ∆t is independent of X(t), and hence the process is a Markov process. Brownian motion with drift is also called the Wiener process and can be represented as Y(t) = µt + σB(t), where µ is the drift parameter, σ is the diffusion coefficient and B(t) is the standard Brownian motion [115]. A characteristic feature of this process in structural reliability is that the structure's reliability can increase or decrease with time. There are different variations of the standard Wiener process in the literature. Doksum and Høyland [31] applied the Wiener process to a variable-stress accelerated life testing experiment; the fatigue failure model they proposed transforms a non-stationary Wiener process into a stationary one. Whitmore [146] used


a statistical model based on the Wiener process to describe a degradation process that accounts for error due to the randomness of the degradation process and an imperfect measurement procedure. Whitmore et al. [147] used a bivariate Wiener process which combines the degradation process and covariates. The major advantage of using the Wiener process is that the first passage time (FPT) can be formulated analytically using the inverse Gaussian distribution [18, 41, 135]. Liao and Elsayed [81] and Tseng et al. [134] used the Wiener process to model the lifetime of the light intensity of LED lamps. Tseng and Peng [133] proposed an integrated Wiener process to model the cumulative degradation path of a product's quality characteristics. Park and Padgett [97, 98] applied Brownian motion and geometric Brownian motion to model degradation under accelerated life testing and to further infer the lifetime of the structure. Gamma process based degradation modeling is another method used to model monotonically increasing degradation, i.e. non-negative increments Y(tᵢ + ∆t) − Y(tᵢ), where ∆t is a small increment in time. Similar to the Wiener process, the gamma degradation model assumes that all future degradation is independent of the current state of degradation. van Noortwijk et al. [137] present a gamma process based method to combine two stochastic processes, a deteriorating structure and a fluctuating load, to compute the time-dependent reliability of the structure: the stochastic load process is generated using a Poisson process and the deterioration using a gamma process, and the two processes are combined to evaluate the time-dependent reliability of the structure. Lawless and Crowder [79] used a gamma process model with covariates and a random effect to characterize the different rates among different individuals. Tseng et al. [132] used a step-stress accelerated degradation testing model for degradation following a gamma process.
This method is especially useful for highly reliable products which are very unlikely to fail in a short period of time. The major challenge associated with the gamma process model is that the noise in the process must follow a gamma distribution, leaving no room for other distributions.
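The Wiener degradation model and its first-passage-time density can be sketched as follows; the drift and variance echo the Figure 2.2 caption, while the time grid and failure threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate Wiener degradation paths Y(t) = mu*t + sigma*B(t), as in Figure 2.2
mu, sigma = 4.84e-4, np.sqrt(84e-4)    # drift and diffusion (illustrative)
dt, n_steps, n_paths = 1.0, 2000, 20
steps = mu * dt + sigma * np.sqrt(dt) * rng.normal(size=(n_paths, n_steps))
paths = np.cumsum(steps, axis=1)

def fpt_pdf(t, d, mu, sigma):
    """Inverse Gaussian density of the first passage time of Y(t) to a
    fixed failure threshold d > 0 (the closed form noted in the text)."""
    t = np.asarray(t, dtype=float)
    return (d / (sigma * np.sqrt(2 * np.pi * t ** 3))
            * np.exp(-(d - mu * t) ** 2 / (2 * sigma ** 2 * t)))
```

Evaluating `fpt_pdf` at the current time gives the remaining-useful-life distribution once the drift and diffusion parameters have been estimated from observed degradation increments.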

2.5 Decision Fusion

Classical studies in decision fusion are based on hypothesis testing and on estimating models where H1 represents the presence of an event and H0 represents its absence. Sensor observations can be used to arrive at a decision about an event using two different methods, namely data fusion and decision fusion. Data fusion is one in which each sensor sends its raw measurements to the fusion center, which makes the decision about the phenomenon under inspection. In other words, data fusion is an inference problem wherein data from different decentralized units are transmitted to the fusion center, and the fusion center makes the final inference about the phenomenon of interest. Figure 2.3 shows the commonly used system architecture for a data fusion system. In decision fusion, on the other hand, each sensor sends its local decision derived by independent processing of its own measurements. Figure 2.4 shows the schematic of a decision fusion system.

Figure 2.3: Architecture of a centralized data fusion system

One might always expect data fusion to perform better than decision fusion, but Kokar et al. [72] have shown that decision fusion has the same behavior as data fusion under certain

Figure 2.4: Architecture of distributed detection and multi-sensory decision fusion system

regularity conditions. Furthermore, Clouqueur et al. [17] have shown that, in the presence of faulty sensors in the network, a decision fusion based algorithm may perform better than a data fusion algorithm. This research is focused on the decentralized detection approach; the sensor setting in which local sensors preprocess their observations before transmitting data to a fusion center is termed decentralized detection [9]. The need for decision fusion arises in distributed sensor networks, where preliminary processing of the data is carried out at each sensor and condensed information is sent from each sensor to a central processing unit, also known as the fusion center. The decision process consists of (1) decoding information, interpretation and association using previous experience and (2) perception of the interpreted and associated sensory impressions that leads to meaning. The sensing process always precedes the decision process, and together they are called the information process cycle [82]. The decision fusion process is divided into a hierarchy of four levels. Levels 1 and 2 are concerned with the formation of tracks and information and with the fusion of information from several sources. Probabilistic methods such as the recursive Bayesian method,

fuzzy logic, the Kalman filter, etc. are used for these levels. Levels 3 and 4 build on the level 1-2 methods; they are concerned with the extraction of high level knowledge from the low level fusion process, the incorporation of human judgment, and the formulation of decisions and actions [32]. There are three essential components of the decision fusion process: 1) the state, 2) the observation model and 3) the decision rule. The state simply denotes the condition of the phenomenon of interest; the observation model (δ = δ(x) ∈ X) describes what observation we will make for each state of nature (say x ∈ Z); and the decision rule is a function that maps an observation to a state. The decision fusion configuration is one in which a summary message from each sensor is sent to a central processor, where the final decision is made. Decision making problems may be classified as static or dynamic. In the static framework, each decision maker makes only one decision; dynamic frameworks are scenarios in which constantly changing information calls for a dynamic decision process. Kokar et al. [73] have done a comparative study between data fusion and decision fusion approaches; the authors formulate decision fusion as a special case of data fusion. Veeravalli [140] provided an introduction to the sequential decision fusion problem, in which each of the sensors receives a sequence of observations that are quantized at each time step and sent to the fusion center to make the final decision. Optimal decision fusion for multiple-sensor detection systems was studied by Chair and Varshney [8] and Thomopoulos et al. [131] in the 1980s. Equation (2.3) is the optimal fusion rule based on the individual sensor decisions U1, U2, . . . , UK:

Λ_CV = Σᵢ₌₁ᴷ [ Uᵢ ln(p_dᵢ/p_fᵢ) + (1 − Uᵢ) ln((1 − p_dᵢ)/(1 − p_fᵢ)) ]    (2.3)

where Ui takes two values {0, 1}. Probability of false alarm (pfi ) at each sensor is computed from a threshold and the probability of detection (pdi ) at each sensor is decided by each sensor’s distance to the target and amplitude of target’s signal. Chair-Varshney approach involves calculation of probability of detection (pdi ) of each sensor, which is dependent on each sensor’s distance to the target, and amplitude of target’s signal. Unfortunately, it is almost impossible to know the exact location of sensor in a practical situation. Niu and 24

Varshney [94] proposed a sub-optimal counting rule. Counting rule is based on the intuitive approach of counting the number of decisions made by the individual sensors in the Region of Interest (ROI) and making the final decision at the fusion center based on the count statistic (Λ), which is the sum of local decisions (Ui ). The counting statistic is compared against a threshold to arrive at a final decision. Niu and Varshney [93] also proposed a generalized likelihood ratio test (GLRT) based decision fusion method that uses quantified data from local sensors to localize and detect target. The target coordinates (xt , yt ) are estimated by maximizing a log likelihood function. The GLRT based decision fusion method is expressed as [93]: p(zf | (xt , yt ), H1 ) =

K h Y i

pdi (xt , yt )Ui [1 − pdi (xt , yt )]1−Ui

i

(2.4)
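As a concrete illustration, Equation (2.3) can be evaluated directly from the binary sensor decisions. The sketch below is not part of the original study; the detection and false-alarm probabilities are made-up values, and the zero decision threshold is only illustrative.

```python
import numpy as np

def chair_varshney(U, p_d, p_f):
    """Chair-Varshney fusion statistic (Equation 2.3): a weighted sum of
    the local binary decisions U_i, where the weights come from each
    sensor's detection probability p_d[i] and false-alarm probability p_f[i]."""
    U = np.asarray(U, dtype=float)
    p_d = np.asarray(p_d, dtype=float)
    p_f = np.asarray(p_f, dtype=float)
    return float(np.sum(U * np.log(p_d / p_f)
                        + (1.0 - U) * np.log((1.0 - p_d) / (1.0 - p_f))))

# Three sensors: two vote "damage" (1), one votes "no damage" (0).
U = [1, 1, 0]
p_d = [0.9, 0.8, 0.7]   # made-up detection probabilities
p_f = [0.1, 0.2, 0.3]   # made-up false-alarm probabilities
score = chair_varshney(U, p_d, p_f)
print(score > 0.0)      # True: the fused decision favors H1 (damage)
```

Reliable votes (high p_{di}, low p_{fi}) receive large weights, so a single confident sensor can outweigh several unreliable ones.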

Krishnamachari and Iyengar [75] proposed seminal work on decision fusion for a sensor network with stochastically uncorrelated sensor faults and spatially correlated event measurements. Assuming that the false alarm probability is equal to the probability of misdetection, the majority voting rule is the optimal decision fusion rule, but it may not be optimal when the two probabilities differ [75]. Chen and Varshney [11] proposed a hierarchical model for distributed detection, a Bayesian sampling approach using a hierarchical model for decision fusion. Jiang et al. [60] and Wang et al. [144] proposed a multi-level decision fusion method for structural damage detection, which utilizes a fuzzy neural network for damage assessment. Zein-Sabatto et al. [158] analyzed various decision fusion strategies based on fuzzy logic and a Bayesian approach for their capability to handle uncertainties; the authors generated synthetic decisions from an unbiased process to compare the decision fusion algorithms fairly. Ciuonzo and Rossi [16] studied the sensor decision fusion problem in which the false-alarm probability of each sensor is known while the detection probability is unknown. The authors also studied different fusion rules, including the clairvoyant likelihood ratio test (CLRT), the locally-optimum detection (LOD) rule, the ideal sensors (IS) rule, the counting rule (CR), and the Wu rule [156]. The CLRT statistic is

Λ_LRT = Σ_{i=1}^{K} [ U_i ln(α_i(P_D)/α_i(P_F)) + (1 − U_i) ln(β_i(P_D)/β_i(P_F)) ]    (2.5)

where α_i(P_1) ≜ P(U_i = 1; P_1) = (1 − 2P_{e,i}) P_1 + P_{e,i} and β_i(P_1) ≜ P(U_i = 0; P_1) = 1 − α_i(P_1); here P_1 is the probability of hypothesis H_1, P_0 is the probability of hypothesis H_0, and P_{e,i} is the probability of error of the i-th sensor. Similarly, the ideal sensor rule assumes that the probability of detection P_D and the probability of false alarm P_F are ideal, (P_D, P_F) = (1, 0). The likelihood ratio test statistic for the ideal sensor rule can be expressed as

Λ_IS = Σ_{i=1}^{K} (2U_i − 1) ln((1 − P_{e,i})/P_{e,i})    (2.6)

where P_{e,i} is the bit error probability (BEP) of the i-th sensor. The decision score for the counting rule (CR) is expressed as:

Λ_CR = Σ_{i=1}^{K} U_i    (2.7)

The counting rule, also known as the majority rule, assumes that the probability of error is zero (P_{e,i} = 0). The Wu rule [156], which was shown to outperform the generalized likelihood ratio test (GLRT), obtains the maximum likelihood estimate of the probability of detection, P̂_D ≜ (1/N) Σ_{i=1}^{N} [(1 + 2P_{e,i}) U_i − P_{e,i}], and then the statistic is calculated as

Λ_Wu ≜ P̂_D − P_F.    (2.8)
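The ideal-sensor, counting, and Wu statistics of Equations (2.6)-(2.8) are similarly short to compute. The sketch below uses hypothetical bit-error and false-alarm probabilities, not values from any experiment:

```python
import numpy as np

def lambda_is(U, p_e):
    """Ideal-sensor rule (Eq. 2.6): sign-weighted sum with BEP-based weights."""
    U = np.asarray(U, dtype=float)
    p_e = np.asarray(p_e, dtype=float)
    return float(np.sum((2.0 * U - 1.0) * np.log((1.0 - p_e) / p_e)))

def lambda_cr(U):
    """Counting rule (Eq. 2.7): the number of local 'damage' decisions."""
    return int(np.sum(U))

def lambda_wu(U, p_e, p_f):
    """Wu rule (Eq. 2.8): ML estimate of P_D minus the known P_F."""
    U = np.asarray(U, dtype=float)
    p_e = np.asarray(p_e, dtype=float)
    p_d_hat = float(np.mean((1.0 + 2.0 * p_e) * U - p_e))
    return p_d_hat - p_f

U = [1, 0, 1, 1]                 # local decisions from K = 4 sensors
p_e = [0.05, 0.05, 0.10, 0.10]   # hypothetical bit-error probabilities
print(lambda_cr(U))              # 3: three of the four sensors report damage
```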

Structural health monitoring is a continuous monitoring process that involves combining a constant influx of sensor-level decisions using a decision fusion rule to ensure the integrity of the structure. Several authors in different fields have developed decision fusion rules. Chen and Varshney [12] proposed decision fusion using a hierarchical Bayesian model, which uses a Gibbs sampler to obtain the posterior likelihood of an observation to make the final decision. Mikhail et al. [88] provide a thorough review of popular decision fusion methodologies in SHM: the majority voting rule, the fuzzy logic method, and Dempster's combination rule [114], among others. The fuzzy logic method uses fuzzy sets, which smooth the boundary between different classes through a mapping function [95]. The Dempster-Shafer rule deals with degrees of belief extracted from multiple evidence sources and represents both uncertainty and imprecision [5]. Wang et al. [144] have also demonstrated decision fusion using the Dempster-Shafer rule, fuzzy-reference, and Bayesian inference techniques. It is important to note that if the sensors measure the same physical phenomenon, the usual practice in multi-sensor fusion is to combine the raw data; conversely, if the sensor data are non-commensurate, the data must be fused at the feature level or decision level [50]. In this research, even though each sensor monitors the same event, we prefer fusing the data at the decision level because the combined raw Lamb-wave data are very hard to manage, and combining the raw data would further exacerbate the problem of high dimensionality. The architecture that we follow for multi-sensor data fusion is called decision-level fusion, in which each sensor uses its own observation for the identity declaration process. In this research, the identity declaration is performed by a regularized discriminant analysis method, and the identity declarations provided by the individual sensors are then combined using the proposed Bayesian decision fusion method to make the final inference about the condition of the structure.


CHAPTER 3

EXPERIMENTAL LAMB-WAVE AND DEGRADATION DATA USED IN THE RESEARCH

In this chapter we present the experimental data set used in the study. A fatigue data set provided by the Prognostic Center of Excellence at NASA Ames Research Center [111] is used. The data correspond to Lamb-wave sensor measurements and to X-ray images of the specimens that capture internal damage growth from run-to-failure experiments on carbon fiber composite specimens under tension-tension fatigue loading. A tension-tension load cycle with a stress ratio of 0.14 at a frequency of 5 Hz was applied.

Two sets of six piezoelectric sensors (SMART Layers® from Acellent Technologies) were attached at both ends of a dog-bone shaped specimen, 6 in by 10 in (152.4 mm by 355.6 mm). The dog-bone shaped carbon-fiber reinforced polymer (CFRP) specimen is made of Torayca T700G unidirectional carbon-prepreg material. A notch (of size 0.20 in by 0.75 in) is introduced to create a stress concentration and accelerate delamination growth at this site. All tests were performed on an MTS machine following ASTM standards [25] [24]. To illustrate the PCA based MCUSUM chart in Chapter 4, we used the samples labeled L1S11, L1S12, and L1S19; the fiber ply layup configuration for all of these specimens is [0_2/90_4]. X-ray images of sample L1S11 before and after the damage are shown in Figure 3.1. To illustrate the application of the PCR based Wiener process for prognosis of the structure in Chapter 5, we used data from two different fiber layup configurations, [0/90_2/45/45/90]_s and [90_2/45/45]_2s, labeled L2S17 and L3S18, respectively [111]. For both Chapter 4 and Chapter 5, the sensor data corresponding to the path shown between

Figure 3.1: Dog-bone shaped specimen used in experiments. (a) Dimensions (in mm) of the specimen and the two arrays of sensors. The path between sensors 1 and 7 is analyzed in this research. (b) X-ray image of the first coupon (L1S11) under baseline condition. (c) X-ray image of the first coupon (L1S11) after 100K cycles of fatigue loading. Delamination can be seen as the light gray-colored region centered around the notch [111].

transducers 1 and 7 (see Figure 3.1a) was chosen because it is one of the longest paths traversing close to the notch area. We used specimens L2S17 and L3S18 to illustrate the application of the regularized linear discriminant analysis based decision fusion algorithm in Chapter 6. The X-ray image collected from specimen L2S17 at 750K cycles of fatigue loading is shown in Figure 3.2a. Sensors at the bottom are numbered 1 to 6 from left to right, and those at the top 7 to 12 from right to left. We used the signals corresponding to the path from actuator 1 to sensor 7. Each of the blue, red, green, and orange lines in Figure 3.2a represents a sensor-actuator path and can be treated as a separate damage detection entity. As can be seen in the figure, each of these lines intersects the magenta line passing through the center of the specimen at a different distance from the damage. Table A.1 lists the distance of each actuator-sensor path from the damage; this distance can affect the detection accuracy of each path. Delamination damage appears to start developing after about 10 to 100 cycles based on the images. However, to make the detection process challenging, we categorize all the data from 0-600K cycles as the baseline class and the rest as the damage class. Figure 3.2b shows the delamination area and loading cycles corresponding to the baseline (0-600K loading cycles) and damage (600K to 900K loading cycles) classes.

Figure 3.2: (a) Dog-bone shaped specimen. (b) Delamination area with increasing number of cycles; the two ellipsoidal regions indicate sensor readings corresponding to the baseline and damage classes.

For Lamb-wave sensing, a five-cycle tone-burst actuation excited at 7 different interrogation frequencies in the range of 150-450 kHz, with an average voltage of 50 V and a gain of 20 dB, was used. According to Saxena et al. [111], the 150-450 kHz frequencies were selected so that the fundamental symmetric and anti-symmetric modes are as distinguishable as possible. The signal consists of a mixture of both the A0 and S0 modes. However, due to the small size of the specimen and the low velocity of the A0 mode, the A0 mode usually gets corrupted by boundary reflections and is very hard to distinguish. In Chapter 5 we present two new methods to approximate the position and length of the window in a Lamb-wave signal that minimizes the mean squared prediction error; details of the proposed method will be discussed in Section 5.2. We considered the data corresponding to the 300 kHz actuation frequency as it produced minimum dispersion, calculated as the percent change in the time window between the first peak and last peak of the actuator and sensor waveforms [57]. The dispersion calculation is shown in Equation (3.1):

D = (W_{S0} − W_a)/W_a × 100    (3.1)

where D is the dispersion measurement, W_{S0} is the length of the S0 window, and W_a is the length of the actuator window. A Hamming window is used to window both the actuator and sensor S0 wave packets; the points at 5% of the maximum amplitude of the signal are identified as the beginning and end of the signal window. Figure 3.3a shows the actuator window (W_a) and sensor window (W_{S0}) for 300 kHz. Figure 3.3b shows the dispersion for different frequencies, indicating that 300 kHz actuation results in acceptable dispersion, characterized by the relative spacing of the peaks of the actuator and sensor signals. Sensor signals were acquired at a sampling rate of 1.2 MHz for a duration of 1667 microseconds, which results in a p = 2000 dimensional raw time-history vector x for each sensor measurement.
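The 5%-amplitude windowing and the dispersion measure of Equation (3.1) can be sketched as follows. The tone bursts here are synthetic stand-ins for the experimental actuator and sensor waveforms; the sensor burst is given a wider envelope simply to mimic a dispersed wave packet.

```python
import numpy as np

def window_length(signal, dt, level=0.05):
    """Window length bounded by the first and last samples whose
    absolute amplitude reaches `level` of the signal maximum."""
    env = np.abs(np.asarray(signal, dtype=float))
    idx = np.where(env >= level * env.max())[0]
    return (idx[-1] - idx[0]) * dt

def dispersion(actuator, sensor, dt):
    """Percent growth of the sensor S0 window relative to the
    actuator window (Equation 3.1)."""
    w_a = window_length(actuator, dt)
    w_s0 = window_length(sensor, dt)
    return (w_s0 - w_a) / w_a * 100.0

fs = 1.2e6                       # 1.2 MHz sampling rate, as in the tests
t = np.arange(0.0, 100e-6, 1.0 / fs)

def tone_burst(width):
    """300 kHz sinusoid under a Gaussian envelope centered at 30 us."""
    return np.sin(2 * np.pi * 300e3 * t) * np.exp(-((t - 30e-6) / width) ** 2)

actuator = tone_burst(8e-6)
sensor = tone_burst(10e-6)       # wider envelope -> positive dispersion
d = dispersion(actuator, sensor, 1.0 / fs)
print(d > 0.0)                   # True: the sensor window is wider
```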

Figure 3.3: (a) Actuator signal and sensor signal at 300 kHz frequency. The actuator window W_a and sensor window W_{S0} used in the dispersion calculation are shown. (b) Dispersion values for different frequencies.


Figure 3.4a shows Lamb-wave measurements at three different cycles during the fatigue test for the first coupon type (L2S17). With an increasing number of cycles, the amplitude of the sensor signal shows a gradual decrease, an indication of the scattering of the signal by the delamination damage. A small abnormality can be seen in Figure 3.4: ideally, the amplitude of the signal should decrease as the number of loading cycles increases, yet the amplitude at 200K cycles is much higher than that of the baseline specimen. The specimen is small and the damage had grown significantly by 200K cycles, so interference between boundary reflections, damage reflections, and the incident waves causes the signal amplitude to be higher than baseline. An X-ray image of the coupon was taken at each sampling instant and the actual delamination area was measured. Figure 3.1 shows the X-ray image of the L1S11 coupon at baseline and after 100K cycles of loading. Lamb-wave data were collected at 0, 0.001, 50, 100, 200, 400, 500, 575, 650, 700, 750, 800, and 950 thousand cycles (a total of 13 damage conditions). Figure 3.4b shows the delamination area measurements for all 13 cycles for the same coupon; we used the image analysis software ImageJ to extract the delamination area information from the X-ray images. Figure 3.5 shows the sensor and X-ray data for the second coupon type (L3S18). For this coupon, readings were made at 0, 0.001, 50, 100, 150, 200, 250, 300, 400, 500, 600, 700, and 750 thousand cycles, a total of 13 damage conditions.


Figure 3.4: (a) Lamb-wave sensor data at baseline, after 200,000 cycles, and after 750,000 cycles for specimen L2S17. (b) X-ray delamination measurements at all 13 measured points. Circled points correspond to the 3 instances shown in panel (a).

Figure 3.5: (a) Lamb-wave sensor data at baseline, after 150,000 cycles, and after 500,000 cycles for specimen L3S18. (b) X-ray delamination measurements at all 13 measured points. Circled points correspond to the 3 instances shown in panel (a).


CHAPTER 4

A MULTIVARIATE CUMULATIVE SUM METHOD FOR CONTINUOUS DAMAGE MONITORING

This chapter proposes a new damage monitoring method based on a fast initial response (FIR) multivariate cumulative sum (MCUSUM) test statistic applied to Lamb-wave sensing data for health monitoring of CFRC structures. The MCUSUM monitoring method is applied to features extracted with Principal Component Analysis (PCA) to improve the robustness of detection and the sensitivity to small damages. The method is illustrated with measured sensor data from fatigue loading of carbon fiber materials, and the performance of the proposed PCA based MCUSUM, with and without the fast initial response (FIR) modification, is compared with existing Mahalanobis distance based monitoring techniques commonly applied in the health monitoring literature. It is shown that the PCA based MCUSUM, both with and without FIR, can significantly improve the misdetection rate when monitoring gradually developing damages.

4.1 Proposed Methodology: Multivariate Cumulative Sum Monitoring with Principal Components

Lamb-wave sensor data is a high-dimensional vector (on the order of thousands of samples, depending on the sampling frequency), and some form of dimension reduction is required to practically monitor fewer variables and to achieve robust and repeatable detection performance. Principal Component Analysis (PCA) is a popular multivariate statistical method for dimension reduction in process monitoring and fault diagnosis applications [58]. The method transforms a set of correlated variables into a smaller number of uncorrelated new variables. The original vector of variables x = (x_1, ..., x_p) is projected onto a vector of new variables z = (z_1, ..., z_r) called the principal components. In the new coordinate system, z_1 is a linear combination of the original variables x_1, ..., x_p that explains the maximum possible variance; z_2, another linear combination, is orthogonal to z_1 and explains most of the remaining variance, and so on. If the original set of p variables is actually a linear combination of r underlying variables, then the first r principal components are sufficient to explain all of the variance and the remaining p − r principal components are negligible. The monitored area of the structure is assumed to have incurred damage when the sensor data vector deviates significantly from the baseline (no damage) condition. The baseline condition is represented by collecting a set of N observations with the sensor under the no-damage condition; the data are arranged in a p × N matrix X = [x_1, x_2, ..., x_N] in which each p × 1 column vector x_j represents an observation, j = 1, 2, ..., N. In practice, the sensor data must be scaled in some meaningful way to account for differences in the measurement units of the variables. A typical approach is scaling so that all variables have zero mean and unit variance, x_ij = (x̃_ij − μ_i)/σ_i, where i = 1, 2, ..., p, j = 1, 2, ..., N, x̃_ij is the original data, and μ_i and σ_i are the sample mean and standard deviation along the i-th dimension. The covariance matrix of the sensor data, C = X X^T/(N − 1), is decomposed, using singular value decomposition, as C = V D V^T into an orthogonal eigenvector matrix V = [v_1 v_2 ... v_p] and a diagonal eigenvalue matrix D = diag(λ_1, λ_2, ..., λ_p); both matrices are of size p × p and the eigenvalues are in descending order λ_1 > λ_2 > ... > λ_p.
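A minimal sketch of this decomposition and projection, with random numbers standing in for a standardized sensor matrix X (the dimensions are illustrative, much smaller than the p = 2000 of the actual Lamb-wave data):

```python
import numpy as np

rng = np.random.default_rng(0)
p, N, r = 20, 14, 5              # illustrative dimensions only

# Columns are observations; center each variable as described in the text.
X = rng.standard_normal((p, N))
X = X - X.mean(axis=1, keepdims=True)

# Covariance matrix and its eigendecomposition C = V D V^T.
C = X @ X.T / (N - 1)
eigvals, V = np.linalg.eigh(C)   # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]
eigvals, V = eigvals[order], V[:, order]

# Keep the r leading eigenvectors and project: z = V_r^T x.
V_r = V[:, :r]
Z = V_r.T @ X                    # r x N matrix of principal component scores

# Fraction of total variance captured by the first r components.
explained = eigvals[:r].sum() / eigvals.sum()
print(Z.shape)                   # (5, 14)
```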
The i-th principal component is the linear combination

z_i = v_i^T x = v_{1i} x_1 + v_{2i} x_2 + ... + v_{pi} x_p    (4.1)

in which v_i is the i-th column of the matrix V (i = 1, ..., r), also called the i-th principal component loading vector or eigenvector. The principal component scores of an observation vector

are the inner products of the observation vector with the principal component loading vectors. For example, for the j-th (j = 1, ..., N) sensor observation x_j, the score for the i-th (i = 1, ..., r) principal component (PC) is z_ij = v_i^T x_j. In practice the first r principal components are sufficient to represent most of the variability of the original data; thus the eigenvectors associated with the eigenvalues λ_{r+1}, ..., λ_p are discarded and a reduced eigenvector matrix V of size p × r is formed. The transformation to the principal component scores is obtained through the matrix multiplication z = V^T x, in which x is the vector of raw sensor data of size p × 1 and z is the r × 1 vector of principal component scores in the reduced dimension. In this research, we propose two different types of MCUSUM charts for detecting and monitoring small damages with Lamb-wave sensors: (1) a PCA based fast initial response multivariate cumulative sum (FIR MCUSUM) chart, and (2) a PCA based MCUSUM chart without any head-start. In order to detect small shifts, a commonly used approach is a cumulative sum statistic, which utilizes not only the most recent sensor measurement but also past observations to more quickly expose slowly accumulating changes [150]. PCA is conducted on the raw sensor data to find the feature vector on which the MCUSUM chart operates. We follow the formulation studied by Pignatiello and Runger [100], in which a cumulative multivariate difference vector between the observed PC score vectors z_i and the expected (baseline) scores at time t is defined as:

s_t = Σ_{i=t−n_t+1}^{t} (z_i − μ_{z0})    (4.2)

where μ_{z0} = 0 for baseline PC scores. The cumulative sum (CUSUM) statistic to be monitored is

MC_t = max{0, ||s_t|| − k n_t}    (4.3)

in which the norm of s_t is found as ||s_t|| = (s_t^T D^{-1} s_t)^{1/2}, with D being the matrix of eigenvalues, and n_t is the number of measurements since the last renewal (zero value) of the CUSUM, defined as:

n_t = n_{t−1} + 1 if MC_{t−1} > 0, and n_t = 1 otherwise.    (4.4)

The MCUSUM statistic accumulates differences that are larger than k, a chart parameter that must be specified by the user, usually taken as one half of the desired change in the mean vector [100]. If the MCUSUM statistic exceeds the upper threshold h of the chart, that is, if MC_t > h, then an out-of-control alarm is signaled. During the fatigue loading process, the structure is very responsive to the loadings occurring during the first few stages of damage. We therefore modify the MCUSUM chart to improve its sensitivity at the start-up of the process. This is especially applicable in our research because the baseline data used to design the control limits do not come from a perfectly pristine structure: they are obtained while the structure is already undergoing fatigue loading, so all the baseline data carry traces of damage, which is accounted for by the FIR MCUSUM. The cumulative sum statistic of Equation (4.3) is modified to take the form

MC_t^{FIR} = max{0, ||s_t|| − k n_t + h/2}    (4.5)

where the added nonzero value, typically h/2, is called a 50% head-start [89, p. 325]. The rest of the formulation is the same as for the PCA based MCUSUM without a head-start. There is no closed-form expression for the reference distribution of the statistic, therefore the threshold achieving a desired false alarm rate under the hypothesis of no damage is found by simulation [100]. In our study, we conduct a Monte Carlo simulation to generate replicated realizations of the in-control process (under the hypothesis of no damage) with increasing threshold values h, to find the value that gives an average run length (the average time to signal an alarm) of 200 samples [83]. This value, which corresponds to a false alarm probability of α = 0.005, is commonly used to tune control charts [89].
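The recursion of Equations (4.2)-(4.5) can be sketched as follows. The PC score stream is simulated rather than measured, and the chart parameters are simply set to the values used later in this chapter (k = 0.5, h = 6.64) instead of being calibrated by Monte Carlo simulation.

```python
import numpy as np

def mcusum(scores, eigvals, k=0.5, h=6.64, fir=True):
    """PCA based MCUSUM statistic MC_t. `scores` is an (n, r) array of PC
    score vectors z_t; `eigvals` holds the r leading eigenvalues (the
    diagonal of D). With fir=True the 50% head-start h/2 of Eq. (4.5)
    is added; with fir=False this reduces to Eq. (4.3)."""
    D_inv = np.diag(1.0 / np.asarray(eigvals, dtype=float))
    head = h / 2.0 if fir else 0.0
    mc, n_t = 0.0, 0
    s = np.zeros(len(eigvals))
    stats = []
    for z in np.asarray(scores, dtype=float):
        if mc > 0.0:                 # Eq. (4.4): keep accumulating
            n_t += 1
            s = s + z                # s_t sums (z_i - mu_z0), mu_z0 = 0
        else:                        # renewal: restart the cumulative sum
            n_t, s = 1, z.copy()
        norm = float(np.sqrt(s @ D_inv @ s))
        mc = max(0.0, norm - k * n_t + head)
        stats.append(mc)
    return np.array(stats)

rng = np.random.default_rng(1)
lam = np.array([4.0, 2.0, 1.0])      # illustrative eigenvalues
baseline = rng.normal(0.0, np.sqrt(lam), size=(20, 3))
damaged = rng.normal(0.0, np.sqrt(lam), size=(20, 3)) + np.array([5.0, 0.0, 0.0])
stats = mcusum(np.vstack([baseline, damaged]), lam)
print(stats[-1] > 6.64)              # the sustained shift drives MC_t past h
```

The head-start term keeps the statistic elevated at start-up, which is exactly the behavior exploited when the baseline itself carries traces of damage.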


4.1.1 Comparison to Existing Methods

PCA based Hotelling's T^2. To detect a damage with a given confidence level, the PCA method is followed by a decision-making procedure based on Hotelling's T^2, a statistic for testing differences between the mean values of two data groups [85]. The sensor data collected from the baseline (undamaged) structure are used to establish an upper control limit (UCL) for the T^2 statistic at the 100(1 − α)% confidence level, which allows one to control the false alarm probability (the probability that an alarm is generated when in fact there is no damage) at α. It is assumed that under the baseline (no damage) condition the raw sensor data x follow a p-dimensional multivariate Normal distribution with mean vector μ_0 and covariance matrix C. To determine when the sensor data indicate damage, deviations from baseline are monitored by calculating the Hotelling's T^2 statistic, defined as:

T^2 = (x − μ_0)^T C^{-1} (x − μ_0) = Σ_{i=1}^{r} (v_i^T x)^2 / λ_i    (4.6)

in which the term after the first equality sign represents the test statistic in terms of the original sensor vector x, and the term after the second equality sign is the representation based on the r principal components [85]. Under the hypothesis of no damage, the T^2 statistic follows an F distribution, and to control the false alarm rate at α the upper control limit (UCL) of the monitoring statistic is set at

UCL_F = [(N − 1)(N + 1) r / (N(N − r))] F_α(r, N − r)    (4.7)
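Since C = V D V^T, the two forms in Equation (4.6) coincide when all p components are retained, which provides a quick numerical sanity check (synthetic data with illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(2)
p, N = 5, 200
mu0 = np.zeros(p)

# Baseline data (variables in rows) and its covariance matrix.
X = rng.multivariate_normal(mu0, np.diag([5.0, 4.0, 3.0, 2.0, 1.0]), size=N).T
C = np.cov(X)
lam, V = np.linalg.eigh(C)       # C = V diag(lam) V^T

x = rng.standard_normal(p)       # a new observation to be tested

# Form 1: Mahalanobis distance in the original coordinates.
t2_direct = float((x - mu0) @ np.linalg.inv(C) @ (x - mu0))

# Form 2: sum over principal components, (v_i^T x)^2 / lambda_i, with r = p.
scores = V.T @ (x - mu0)
t2_pca = float(np.sum(scores**2 / lam))

print(np.isclose(t2_direct, t2_pca))  # True: the two forms are equivalent
```

With r < p the PCA form becomes an approximation that discards the directions of smallest baseline variance.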

where N is the sample size and F_α(r, N − r) is the upper 100α% point of the F distribution with r and N − r degrees of freedom. As soon as the test statistic exceeds the control limit, i.e., T^2 ≥ UCL, an alarm is given indicating that damage has initiated. Otherwise, i.e., T^2 < UCL, it is assumed that the structure still operates in the baseline no-damage condition. Many authors who have studied Hotelling's T^2 control charts have concluded that the chart is quite effective for detecting large and sustained shifts (three standard deviations or larger from baseline); however, it may take a long time to signal an alarm for relatively small or gradually developing shifts (on the order of one or two standard deviations) [83]. This is a limitation for SHM applications, where it is important to detect cracks or delaminations from the onset, continuously monitor them as they grow, and react on time by scheduling repair or replacement if the growth becomes rapid.

AR model based Hotelling's T^2. Wang and Ong [145] developed an autoregressive (AR) coefficient based Hotelling's T^2 control chart for damage monitoring. A series of autoregressive models is fitted to response time histories of the structure to be monitored, and the coefficients of these AR models are extracted to form a set of multivariate data features. The Hotelling's T^2 statistic is calculated from these AR coefficients using Equation (4.8):

T_φ^2 = (φ − μ_φ)^T C_φ^{-1} (φ − μ_φ)    (4.8)

where φ is the vector of AR coefficients with dimension p, μ_φ is the mean of the in-control AR coefficients, and C_φ is the variance-covariance matrix of the in-control AR coefficients. For the detailed procedure, readers should refer to Wang and Ong [145]. The upper control limit of the Hotelling's control chart with subgroup size n = 1 is [89]

UCL = [p(N + 1)(N − 1)/(N^2 − Np)] F_{α, p, N−p}    (4.9)

Each time-series signal is divided into p time windows, and N preliminary in-control (baseline) subgroups were drawn. Figure 4.1 shows the five windows of the signal separated by dotted vertical lines; each segment of the signal has a length of 400 sample points. The AR coefficient from each window was used to reconstruct the original signal. It can be seen that the reconstructed signal very closely resembles the original signal; therefore the AR(1) coefficients used for the Hotelling's T^2 chart are a very good representation of the features of the original Lamb-wave signal. Figure 4.2 shows a graphical representation of the AR(1) coefficients that constitute the baseline and fatigue signals.
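The windowed AR(1) feature extraction can be sketched as follows. The conditional least-squares fit below is one simple estimator of the AR(1) coefficient (not necessarily the one used in [145]), and the signal is a synthetic AR(1) series rather than a Lamb-wave measurement.

```python
import numpy as np

def ar1_coefficient(y):
    """Conditional least-squares AR(1) coefficient: regress y_t on y_{t-1}."""
    y = np.asarray(y, dtype=float)
    return float(np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1]))

def ar1_features(signal, n_windows=5):
    """Split the signal into equal windows and return one AR(1)
    coefficient per window as the feature vector phi."""
    windows = np.array_split(np.asarray(signal, dtype=float), n_windows)
    return np.array([ar1_coefficient(w) for w in windows])

# Synthetic AR(1) series of length 2000 with true coefficient 0.8.
rng = np.random.default_rng(3)
y = np.zeros(2000)
for i in range(1, len(y)):
    y[i] = 0.8 * y[i - 1] + rng.standard_normal()

phi = ar1_features(y, n_windows=5)   # 5 windows of 400 samples each
print(phi.shape)                     # (5,)
```

The five coefficients form the feature vector φ that enters Equation (4.8).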


Figure 4.1: Original Lamb-wave signal (black curve) and the signal reconstructed from AR(1) coefficients (red curve).

4.2 Results and Discussion

In this section we illustrate the PCA based MCUSUM damage monitoring method using the fatigue aging data for continuous monitoring presented in Chapter 3. We find the principal component loading vectors v_1, ..., v_p of the 2000 × 14 baseline data matrix X. Figure 4.3 shows the first five principal component scores of the baseline and fatigue data. It can be seen that the baseline principal component scores are less variable than the fatigue principal component scores. The variations in the principal component scores occur due to the changes in the physical and mechanical properties of the structure during loading. Figure 4.4a shows the

Figure 4.2: AR(1) coefficients from all five windows for the baseline and fatigue signals. It can be seen that most of the difference between the baseline and damage signals occurs in windows 3 and 4.

proportion of variance explained by additional principal components, indicating that r = 5 principal components are sufficient to explain about 97% of the variation; this is selected as the reduced-dimension representation of the sensor data. The UCL for the T^2 control chart is found as the 99.5 percent point (vertical red line) of the distribution, which results in a 0.005 false alarm rate (the area under the curve to the right of the UCL). Figure 4.5, Figure 4.6, and Figure 4.7 present a comparative study of the different types of control charts for all three coupons. The control charts used for this study are: the PCA based FIR MCUSUM, the PCA based MCUSUM, the PCA based Hotelling's T^2 chart, and

Figure 4.3: Principal component scores of the raw signal; 5 PCs are used. (a) Baseline. (b) Fatigue loading applied.

Figure 4.4: Cumulative variance explained by additional principal components.

the AR based Hotelling's T^2 chart. While all the PCA based charts are obtained by applying principal component analysis (again using 5 PCs) to the Lamb-wave data from the tests,

the autoregressive coefficient based control chart is designed using only AR(1) coefficients. The vertical line shows the actual separation between the baseline and fatigue loading conditions, and the horizontal red line is the upper control limit of each chart. The T^2 statistic is computed using Equation (4.6). For both the FIR MCUSUM and MCUSUM charts, the difference vector s_t was found from Equation (4.2) and the cumulative sum statistics MC_t^{FIR} and MC_t were found by applying Equation (4.5) and Equation (4.3), respectively, in which the subscript t denotes the sample or measurement number. The control limit of the PCA based T^2 chart for coupon 1 is found using Equation (4.7) and the F distribution with N = 14 and r = 5. For both MCUSUM charts we set k = 0.5 to make the chart sensitive to shifts of one half of one standard deviation. The upper control limit is found as h = 6.64 by running 1000 Monte Carlo simulations. For the AR based Hotelling's T^2 chart, the Lamb-wave signal is divided into m = 5 segments and the upper control limit is estimated using Equation (4.9). An AR(1) model is fitted to each of the five segments separated by the vertical dotted lines shown in Figure 4.1, and the AR(1) coefficients of the segments are concatenated to construct the feature vector of dimension p = 5. It should be noted that the AR models fitted to the segments are restricted to order 1 due to the small-sample problem that arises from the limited number of baseline samples; if a larger number of sensor measurements were available, a higher-order AR model could be fitted. It can be seen that for all three coupons, the charts do not signal any false alarms under the no-damage condition: none of the test statistics plotted in Figure 4.5, Figure 4.6, and Figure 4.7 to the left of the vertical line crosses the horizontal red line. Coupon 1 has 14 baseline signals and 24 damaged signals; the UCL for the PCA based T^2 chart and the AR based T^2 chart is 57.81. The PCA based T^2 chart (Figure 4.5c) for the first coupon (L1S11) has 14 misdetections out of 24 damaged signals. On the other hand, the AR based T^2 chart does not miss any damaged signals. The PCA based MCUSUM chart without any head-start, shown in Figure 4.5a, has only 1 misdetection. The FIR MCUSUM statistic plotted in Figure 4.5b shows that it correctly detects all the signals from the damaged structure.

Coupon 2 has 13 baseline samples and 24 fatigue loading samples (for cycles 10 to 10,000); the UCL for both T 2 charts is 67.05. As shown in Figure 4.6, the PCA based T 2 chart and the MCUSUM chart without any head-start detect the shift on time at sample 14. However, the PCA based T 2 chart has 9 later misdetections (at samples 15, 18, 21, 22, 23, 24, 27, 30 and 31) and the AR based T 2 control chart has 1 misdetection at sample 37. Both MCUSUM charts, with and without FIR (shown in Figure 4.6a and Figure 4.6b), increase consistently with no misdetection. Coupon 3, which has 10 baseline samples and 26 fatigue loading samples, was more challenging for all methods except FIR MCUSUM. The UCL for both T 2 charts is 147.90. The PCA based T 2 chart does not detect any damage at all, while the AR based T 2 chart performs comparatively better with only 5 misdetections out of 26. The PCA based MCUSUM performs second best with only 3 out of 24 misdetections. By contrast, the PCA based FIR MCUSUM chart signals the first alarm at sample 11 and therefore has no misdetections. Overall, for all three specimens, the MCUSUM chart has a much faster reaction time to small fatigue damages (which results in a lower misdetection rate) than the traditional control chart. No false alarms were observed in any of the charts. A summary of misdetection rates is given in Table 4.1. We note that the coupons have identical layups and are expected to have similar damage propagation behavior; however, there are some differences in detection performance, especially for the T 2 chart. The multivariate MCUSUM, both with and without head-start, consistently outperformed the T 2 chart, and the approach was successfully applied on three different specimens.

Table 4.1: Misdetection rates for the MCUSUM and T 2 charts from fatigue loading tests

Coupon | FIR MCUSUM | MCUSUM | PCA based T 2 | AR based T 2
1 | 0 out of 24 (0%) | 1 out of 24 | 2 out of 24 (8.3%) | 0 out of 24 (0%)
2 | 0 out of 24 (0%) | 0 out of 24 | 1 out of 24 (4.2%) | 1 out of 24 (4.2%)
3 | 0 out of 26 (0%) | 3 out of 24 | 20 out of 26 (76.9%) | 5 out of 26 (7.7%)
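The MCUSUM recursion itself is not reproduced in this section, so the following sketch illustrates one standard construction of the chart, Crosier's multivariate CUSUM applied to PC scores, with the fast-initial-response (FIR) feature implemented as a nonzero initial cumulative vector. The function name, the head-start fraction, and the control-chart constants are illustrative assumptions, not the exact formulation used in this dissertation.

```python
import numpy as np

def mcusum(scores, mu0, cov, k, h, headstart=0.5):
    """Crosier-style multivariate CUSUM on principal-component scores.

    scores    : (n, a) array of PC scores over time
    mu0       : (a,) in-control mean vector
    cov       : (a, a) in-control covariance of the scores
    k         : reference value (allowance)
    h         : control limit; an alarm is raised when Y_t > h
    headstart : fraction of h used for the FIR head-start ||s_0||
    """
    cov_inv = np.linalg.inv(cov)
    a = len(mu0)
    # FIR head-start: a nonzero starting vector makes the chart react
    # faster when the process is already out of control at start-up.
    s = np.full(a, headstart * h / np.sqrt(a)) if headstart > 0 else np.zeros(a)
    stats, alarms = [], []
    for x in scores:
        d = s + (x - mu0)
        c = np.sqrt(d @ cov_inv @ d)                    # accumulated-shift distance
        s = np.zeros(a) if c <= k else d * (1 - k / c)  # shrink toward zero by k
        y = np.sqrt(s @ cov_inv @ s)                    # plotted MCUSUM statistic
        stats.append(y)
        alarms.append(y > h)
    return np.array(stats), np.array(alarms)
```

With a persistent small mean shift, the statistic accumulates roughly (shift size minus k) per step, which is why the chart reacts faster than a memoryless T 2 chart.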

Figure 4.5: Control charts plotted from the principal component scores (coupon 1). (a) PCA based fast initial response multivariate CUSUM (FIR MCUSUM) chart. (b) PCA based MCUSUM. (c) PCA based Hotelling's T 2 chart. (d) AR based Hotelling's T 2 chart.

4.3 Conclusion

Guided-wave sensing based health monitoring has received increasing interest in recent years due to the low-cost implementation of these sensing systems and the boost in monitoring capacity provided by guided waves. However, challenges remain in processing high-dimensional sensor data, and there is a need for accurate and reliable damage detection


Figure 4.6: Control charts plotted from the principal component scores (coupon 2). (a) PCA based fast initial response MCUSUM (FIR MCUSUM) chart. (b) PCA based MCUSUM. (c) PCA based Hotelling's T 2 chart. (d) AR based Hotelling's T 2 chart.

and monitoring methods, especially for composite materials with anisotropic properties and multiple failure modes. In this research, we studied a novel multivariate damage monitoring method for Lamb-wave sensing data. A multivariate cumulative sum test statistic was applied to the features extracted with principal components analysis in order to improve the robustness of detection and the sensitivity to small damages.


Figure 4.7: Control charts plotted from the principal component scores (coupon 3). (a) PCA based fast initial response multivariate CUSUM (FIR MCUSUM) chart. (b) PCA based MCUSUM. (c) PCA based Hotelling's T 2 chart. (d) AR based Hotelling's T 2 chart.

The monitoring performance of the proposed PCA based FIR CUSUM approach was compared with the existing Mahalanobis distance based monitoring techniques that use PCA based features and AR based features in the health monitoring literature. The results confirmed our expectation that the existing monitoring methods work reasonably well for relatively large changes in the structural condition; however, supplementing them with

a statistic that accumulates information over time can make the monitoring much more sensitive to gradually developing damages. This can enhance the ability to continuously monitor growing damages and react to them by scheduling repair or replacement operations before they reach critical size and result in breakdown. Fatigue loading data from multiple specimens showed that both FIR MCUSUM and MCUSUM approaches have significantly lower misdetection rates.


CHAPTER 5

REMAINING USEFUL LIFE ESTIMATION WITH LAMB-WAVE SENSORS BASED ON WIENER PROCESS AND PRINCIPAL COMPONENTS REGRESSION

In this chapter, we present the application of moving window based principal components regression and Wiener process modeling methods to fatigue aging data. The objective of this chapter is to develop a Lamb-wave health monitoring and prognostics approach for composite structures. The method utilizes two different windowing techniques: 1) an increasing window and 2) a moving window. Both windowing techniques identify the portion of the Lamb-wave signal that is most sensitive to fatigue loading. In addition, principal components regression (PCR) is used to extract features of the sensor time-domain signal that are sensitive to delamination growth and to fit a model between the features and the measured delamination areas. A Wiener process is used to model the delamination area predicted by the regression as a stochastic growth process. The Wiener process model enables one to develop the predictive distribution of the future delamination, which, in turn, is used to find the probability of failure at a given time and the remaining useful life (RUL).

5.1 Proposed Approach

In our proposed method, we develop a mapping function between principal component (PC) based features of Lamb-wave signal and the measured delamination areas. This is followed by a Wiener process model to predict future delamination levels under the same


loading and probability of failure. Figure 5.1 shows a flowchart of the approach. We next describe the formulations for the proposed PCR and Wiener modeling approaches.

Figure 5.1: Flowchart of the proposed methodology.

5.1.1 Principal Component Regression

As discussed in Section 4.1, principal component analysis (PCA) is a popular multivariate statistical analysis method used for dimension reduction [58]. The method transforms a set of correlated variables to a smaller number of uncorrelated new variables. We assume that the sensor training data is recorded in a p × N matrix X whose columns represent N damage conditions, each with p dimensions. That is, X = [x1 , x2 , . . . , xN ], where the column vector xj (j = 1, . . . , N ) is a p × 1 vector. In our Lamb-wave training data, each sensor observation xj is a 2000 × 1 vector, thus p = 2000 and N is the number of damage conditions in the data. In practice the data must be scaled in some meaningful way to account for differences in measurement units. A typical approach is scaling so that all variables have zero mean and unit variance. That is, xij = (x̃ij − µi )/σi , where x̃ij is the original data, µi is the sample mean and σi is the sample standard deviation along the i-th dimension (i = 1, 2, . . . , p), and j denotes the damage condition (j = 1, 2, . . . , N ).
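To make the preprocessing concrete, the sketch below standardizes such a p × N matrix and computes the principal-component loadings, scores and variances defined in the remainder of this section. It uses an SVD of the scaled data, which is mathematically equivalent to the eigendecomposition of the sample covariance C but avoids forming the p × p matrix when p = 2000; the function and variable names are ours, not the dissertation's.

```python
import numpy as np

def pca_scores(X, a):
    """PCA of a p x N sensor data matrix via SVD.

    X : (p, N) matrix; columns are damage conditions, rows are time samples.
    a : number of principal components to retain.
    Returns (V, Z, lam): p x a loadings, N x a scores, and the PC variances.
    """
    p, N = X.shape
    # Standardize each row (time sample) to zero mean and unit variance.
    # (Rows with zero variance would need special handling in practice.)
    Xs = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, ddof=1, keepdims=True)
    # SVD avoids forming the p x p covariance C = Xs Xs^T / (N - 1).
    U, svals, _ = np.linalg.svd(Xs, full_matrices=False)
    V = U[:, :a]                     # loading vectors (eigenvectors of C)
    Z = Xs.T @ V                     # N x a principal-component scores
    lam = svals[:a] ** 2 / (N - 1)   # eigenvalues = variances of the PCs
    return V, Z, lam
```

The variance of each score column equals the corresponding eigenvalue λi, which is the property used later when deciding how many components to retain.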

The first principal component (PC) is the linear combination with the maximum variance, defined as z1 = v T1 x with v 1 being the p × 1 loading vector of the first PC; the second principal component is the linear combination z2 = v T2 x, which has the second largest variance, where v 2 is orthogonal to v 1 (i.e., v T1 v 2 = 0). Further PCs are defined similarly. It can be shown that the loading vectors v i are the eigenvectors of the sample covariance matrix C = 1/(N − 1)XX T, in which X is the matrix of mean centered and scaled sensor data, and the corresponding eigenvalues λi (for i = 1, 2, . . . , p) are the variances of the PCs, i.e., V ar(zi ) = λi . The principal component scores are defined as the observed values of the principal components: z i = X T v i is the N × 1 vector of scores for the i-th principal component. In practice, only a few significant PCs are sufficient to explain most of the variability in the data. The covariance matrix C has a total of p principal components. If only the first a principal components are sufficient to explain most of the variance and the remaining p − a principal components are negligible, then Z = [z 1 , . . . , z a ] is the N × a matrix of principal component scores for the retained PCs. Letting V = [v 1 v 2 . . . v a ] be the p × a matrix of loading vectors of the retained principal components, the scores of the principal components can be obtained as Z = X T V . In our study, we use principal component regression (PCR) to represent the relation between the measured delamination areas and the measured Lamb-wave signals. PCR is a technique for analyzing multiple regression data that suffer from multicollinearity. Under multicollinearity, ordinary least squares still yields unbiased estimates but with a significant increase in the variance of the prediction. PCR adds a degree of bias to the regression model, which results in a reduction of the standard error. The PCR model for the Lamb-wave data can be represented as: ym = Zθ + f

(5.1)

in which y m is the vector of the measured delamination areas (mean centered), Z is the matrix of principal component scores of the sensor data matrix X, θ = (θ1 , . . . , θa ) is the

a × 1 vector of regression coefficients and f is the vector of residuals. From Equation 5.1 it can be seen that the parameter vector θ characterizes the relationship between the principal components and the delamination area. Figure 3.4 and Figure 3.5 present measurements from the training sensor data X (p × N matrix) and the vector of corresponding delamination areas y m (N × 1 vector) for coupon 1 (L2S17) and coupon 2 (L3S18), respectively. Ordinary least squares estimation is used to find the coefficients from the observed delamination areas and scores using: θ = (Z T Z)−1 Z T y m .

(5.2)

The delamination area for a new Lamb-wave sensor vector is predicted using the fitted principal component regression as: y = θ T z(x0 ) = θ1 z1 + θ2 z2 + . . . + θa za

(5.3)

in which y is the predicted delamination area and z = (z1 , . . . , za ) = V T x0 is the vector of principal component scores found for the new sensor reading x0 . Selecting the number of components to be used in the regression is an important consideration. Principal components with the largest variances explain a large proportion of the variability in the sensor data, but they are not necessarily the best predictors. We observe in Section 5.2 that some PCR models with large PC scores have very high prediction error. The number of components to be used in PCR is determined according to the minimum mean squared prediction error (MSPE):

MSPE = (1/N ) ∑i=1,...,N (ym,i − yi )2    (5.4)

where ym,i and yi are the measured and the predicted delamination areas, respectively, for the i-th condition with the PCR model. Starting with a model with no variables, we add to the model the PC resulting in the smallest MSPE and increase the number of PCs in each step until the quality of prediction can no longer be meaningfully improved.
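The greedy selection just described can be sketched as follows, with a leave-one-out loop standing in for the cross-validation used later in Section 5.2; `fit_pcr`, `loo_mspe` and `forward_select` are hypothetical helper names introduced for illustration.

```python
import numpy as np

def fit_pcr(Z, y):
    """Ordinary least squares of delamination area y on PC scores Z."""
    theta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return theta

def loo_mspe(Z, y):
    """Leave-one-out mean squared prediction error for a score subset Z."""
    N = len(y)
    errs = []
    for i in range(N):
        keep = np.arange(N) != i
        theta = fit_pcr(Z[keep], y[keep])
        errs.append((y[i] - Z[i] @ theta) ** 2)
    return np.mean(errs)

def forward_select(Z_all, y, max_pc):
    """Greedy forward selection: repeatedly add the PC that most
    reduces the cross-validated MSPE, stopping when no PC helps."""
    selected, trace = [], []
    candidates = list(range(Z_all.shape[1]))
    while len(selected) < max_pc:
        scored = [(loo_mspe(Z_all[:, selected + [j]], y), j) for j in candidates]
        mspe, j = min(scored)
        if trace and mspe >= trace[-1]:
            break   # prediction quality no longer improves
        selected.append(j)
        candidates.remove(j)
        trace.append(mspe)
    return selected, trace
```

Note that the components are tried in order of prediction benefit, not variance, reflecting the observation above that the largest-variance PCs are not necessarily the best predictors.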


5.1.2 Wiener Process Based Degradation Model

In order to predict the future delamination area based on the monitored sensor data, we model the delamination growth as a stochastic process. While the experimental data set used in this study contains the actual delamination areas corresponding to each Lamb-wave measurement, in practice, during the operation of the structure, the delamination area is unobservable and has to be predicted from Lamb-wave signals. The experimental data are used to estimate the principal component regression model that predicts the delamination area given a Lamb-wave observation. Therefore, in order to make decisions for a particular structure, the random growth model is fitted to the predictions of the PCR model obtained from Lamb-wave condition monitoring signals. An effective model for random growth is the Wiener process [148], in which the delamination area growth over time is modeled as yt = y0 + δt + σBt

(5.5)

where yt is the PCR model-predicted delamination area (Equation 5.3) from Lamb wave measurement at the time instant t, Bt is the standard Brownian motion, y0 is the initial degradation level, σ is a dispersion parameter and δ is the drift rate. The Brownian motion is given as Bt = Bt−1 + ǫt

(5.6)

in which ǫt ∼ N (0, 1) is the random error, normally distributed with zero mean and unit variance. It can be shown that Bt = B0 + ∑i=1,...,t ǫi ; therefore, assuming without loss of generality that B0 = 0, we have Bt ∼ N (0, t). By inserting the probability distribution of

Brownian motion Bt in Equation (5.5) we get the predictive distribution of the delamination area as: yt ∼ N (y0 + δt, σ 2 t)

(5.7)

and the incremental change in the area is: ∆y = yt − yt−1 ∼ N (δ∆t, σ 2 ∆t).

(5.8)

In order to predict a future delamination area, we estimate the parameters of the Wiener process from a set of past delamination predictions y = (y0 , . . . , yn ) obtained at the time points t0 , t1 , . . . , tn from the PCR model and the observed Lamb-wave data X. The estimating equations for the two parameters of the model are:

δ̂ = (yn − y0 )/(tn − t0 )    (5.9)

and

σ̂ 2 = (1/n) ∑i=1,...,n (1/∆ti ) (∆yi − δ̂ ∆ti )2    (5.10)

in which ∆yi = yi − yi−1 and ∆ti = ti − ti−1 for i = 1, 2, ..., n.
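Equations (5.9) and (5.10) translate directly into code. The sketch below assumes the delamination predictions and time points are available as arrays; the function name is ours.

```python
import numpy as np

def fit_wiener(y, t):
    """Estimates for the Wiener degradation model (Equations 5.9-5.10).

    y : delamination predictions (y0, ..., yn) from the PCR model
    t : corresponding time points (t0, ..., tn)
    Returns (delta, sigma2): the drift rate and dispersion parameter.
    """
    y, t = np.asarray(y, float), np.asarray(t, float)
    dy, dt = np.diff(y), np.diff(t)
    delta = (y[-1] - y[0]) / (t[-1] - t[0])          # Equation (5.9)
    sigma2 = np.mean((dy - delta * dt) ** 2 / dt)    # Equation (5.10)
    return delta, sigma2
```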

As an example of the Wiener process path in modeling the delamination, we simulated three sample paths with the delamination model (parameter estimation with the measured fatigue data is explained below). Figure 5.2 shows three simulated degradation trends for both coupon types, based on the parameters estimated from the observed data, along with the measured delamination areas. The model provides a reasonable simulation of the delamination area compared to the actual observed delamination areas.
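Such sample paths can be generated from Equation (5.5) by accumulating independent Gaussian increments. The following sketch assumes a time grid starting at zero; the function name and defaults are illustrative.

```python
import numpy as np

def simulate_wiener(y0, delta, sigma2, t, n_paths=3, seed=0):
    """Simulate paths y_t = y0 + delta*t + sigma*B_t on the grid t.

    The Brownian motion B_t is built from independent N(0, dt) increments;
    the grid is assumed to start at t[0] = 0.
    """
    rng = np.random.default_rng(seed)
    dt = np.diff(t)
    paths = np.empty((n_paths, len(t)))
    paths[:, 0] = y0
    for k in range(n_paths):
        # sigma * (B_t increments), each ~ N(0, sigma^2 * dt)
        noise = rng.normal(0.0, np.sqrt(sigma2 * dt))
        paths[k, 1:] = y0 + delta * t[1:] + np.cumsum(noise)
    return paths
```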

Figure 5.2: Simulated delamination area growth from the fitted Wiener process along with the actual measured delamination areas (a) Coupon one. (b) Coupon two.


Probabilistic inferences on the future degradation level yf at a future time tf are made using the predictive distribution, obtained for this process as:

yf ∼ N (yf −1 + δ̂∆t, σ̂ 2 ∆t),  where ∆t = tf − tf −1    (5.11)

In order to quantify the prediction uncertainty one can obtain, from the predictive distribution, the upper and lower limits of the (1 − α)100% prediction interval as:

{LP L, U P L} = yf −1 + δ̂∆t ± z1−α/2 √(σ̂ 2 ∆t)    (5.12)

where α is the significance level and z1−α/2 is the 1 − α/2 quantile of the standard normal distribution. The significance level implies that one can have high confidence that the future delamination level will lie between the prediction limits. Typical choices of α are 0.05 and 0.01, corresponding to 95% and 99% confidence levels in prediction, respectively. The remaining useful life is estimated with respect to a failure threshold L; that is, when the delamination area exceeds this value the component is assumed to have failed. The probability that the degradation measure exceeds (is greater than or equal to) this threshold at any point in time is used as a measure of reliability. Denoting the failure time as T , we use the cumulative distribution function F (t) = P (T ≤ t) to make predictive statements about T at time instant t (remaining life = T − t). From the degradation measure, the failure time distribution function is found as P (T ≤ t) = P (yf ≥ L), the distribution function of the first passage time of the delamination to the threshold. The first passage time of the Wiener process of Equation (5.11) is shown to follow an Inverse Gaussian distribution [41]:

F (t) = Φ[(δ̂t − L)/√(σ̂ 2 t)] + exp(2δ̂L/σ̂ 2 ) Φ[−(δ̂t + L)/√(σ̂ 2 t)]    (5.13)

in which Φ[.] is the cumulative distribution function of the standard normal distribution.

Gamma process based degradation model. The gamma process is a popularly used stochastic process for modeling strictly increasing degradation data. A gamma


process {y(t); y(0) = 0} has independent, non-negative increments ∆y(t) = y(t + ∆t) − y(t) that follow a gamma distribution [143]:

∆y(t) ∼ Ga(α(Λ(t + ∆t) − Λ(t)), β)    (5.14)

where Ga(·) represents the gamma probability density function with shape parameter α(Λ(t + ∆t) − Λ(t)) and rate parameter β, and Λ(t) = t^c is a monotone increasing function of time. The mean and variance of the gamma process are obtained as αΛ(t)/β and αΛ(t)/β 2, respectively. Let ξ be the first passage time of the degradation process to the failure threshold L. The cumulative distribution function of the first passage time ξ is:

F (t) = Γ(αΛ(t), lβ )/Γ(αΛ(t))    (5.15)

where lβ = (L − y0 )β and Γ(αΛ(t), lβ ) is the (upper) incomplete gamma function [143]. Therefore, the failure time distribution based on the gamma process can be written as:

F (t) = P (T ≤ t) = P (yt ≥ L)    (5.16)
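Both failure-time distributions reduce to standard special functions, so they can be evaluated directly; the sketch below is our illustration of Equations (5.13) and (5.15) using SciPy, where `gammaincc` is the regularized upper incomplete gamma function. The grid-search helper for the remaining useful life is a hypothetical convenience, and the exponential term in the Wiener CDF can overflow for very large drift-to-dispersion ratios.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import gammaincc

def wiener_failure_cdf(t, L, delta, sigma2, y0=0.0):
    """Inverse Gaussian first-passage CDF (Equation 5.13):
    probability that the delamination reaches threshold L by time t."""
    t = np.asarray(t, float)
    Lr = L - y0                       # remaining distance to the threshold
    s = np.sqrt(sigma2 * t)
    return (norm.cdf((delta * t - Lr) / s)
            + np.exp(2 * delta * Lr / sigma2) * norm.cdf(-(delta * t + Lr) / s))

def gamma_failure_cdf(t, L, alpha, beta, c=1.0, y0=0.0):
    """Gamma-process first-passage CDF (Equation 5.15) with Lambda(t) = t^c."""
    t = np.asarray(t, float)
    return gammaincc(alpha * t ** c, (L - y0) * beta)

def rul_at_probability(cdf, horizon, p=0.1, n=100000):
    """Smallest time on a grid at which the failure probability reaches p."""
    grid = np.linspace(horizon / n, horizon, n)
    probs = cdf(grid)
    idx = np.searchsorted(probs, p)
    return grid[idx] if idx < n else np.inf
```

Inverting the CDF at a chosen failure probability (e.g. 10−1) is how the remaining useful life values quoted in Section 5.2 are read off the curves.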

An important distinction must be made between traditional PCA based damage detection and our proposed principal components regression approach. PCA is an unsupervised learning method used to group objects according to their multivariate measurements and a similarity measure. In health monitoring, PCA is applied to sensor data collected from the undamaged structure, and deviations from this baseline are flagged as damage. Principal components regression, by contrast, is a supervised learning method in which a dependent variable (in our case, the delamination size) is used to aid the grouping. The regression model is built on sensor data collected under actual operation and therefore may correspond to undamaged as well as damaged structure.


5.2 Results and Discussion

In this section, we apply the PCA based Wiener degradation model to the fatigue test data to make inferences about the remaining useful life of two specimens with different ply layup configurations. Principal component regression on Lamb-wave data: We first build the principal component regression model with the Lamb-wave data collected under the fatigue loading conditions. We used the sensor data and the corresponding delamination measurements to construct the model. Some of the data points were held out from the training data to test the adequacy of the model predictions. For the first coupon we used the first 11 samples for training (the last 2 are left out) and for the second coupon we used the first 10 samples for training (the last 3 are left out). Thus, N = 11 for the first coupon and N = 10 for the second coupon. The Lamb-wave data consist of 2000 data points; however, this window length may be too long, and focusing the data analysis on a segment that contains the richest information about delamination is desirable for better efficiency of the statistical algorithm. In order to find a window of the signal that is most sensitive to the delamination damage, we compared two different methods. The first approach estimates models on increasing window lengths of p = 400, 600, 800, . . . , 2000 (the starting point is sample 1 for all windows). The second approach identifies a moving window of fixed length 400, at the positions 0-400, 400-800, 800-1200, 1200-1600, and 1600-2000. In both approaches, principal components analysis is applied on the resulting data matrix. Thus, the data matrix X has dimension p × N , with p being the window length and N the number of training samples. We calculate the mean squared prediction error (MSPE) for each window and the window with the smallest MSPE is selected for model estimation.
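The window search can be organized as a simple loop over candidate row ranges, scoring each with a cross-validated MSPE; in the sketch below the scoring function is a pluggable callable (in our setting it would wrap the PCR fit and its leave-one-out error), and all names are illustrative.

```python
import numpy as np

def select_window(X, y, windows, mspe_fn):
    """Pick the signal window minimizing cross-validated MSPE.

    X        : (p, N) Lamb-wave data; rows are time samples
    y        : (N,) measured delamination areas
    windows  : list of (start, end) row ranges to evaluate
    mspe_fn  : callable (X_window, y) -> MSPE, e.g. a PCR fit scored
               by leave-one-out cross validation
    """
    scores = {(s, e): mspe_fn(X[s:e, :], y) for s, e in windows}
    best = min(scores, key=scores.get)
    return best, scores

# The two candidate sets compared in this section: increasing windows
# 1-400, 1-600, ..., 1-2000 and moving windows of fixed length 400.
increasing = [(0, e) for e in range(400, 2001, 200)]
moving = [(s, s + 400) for s in range(0, 2000, 400)]
```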
The MSPE is calculated by leave-one-out cross validation; for each damage condition the model is fitted by leaving one point out of the data and the MSPE is calculated

as the average of the squared prediction errors of the resulting models. This research compares the prognostic capacity of both windowing techniques. The mean squared prediction error (MSPE) for the PCR model with an increasing number of principal components (up to 10 components; since we have N training samples we can add up to N − 1 principal components to the model) and varying window lengths is found using Equation 5.4 and, for the coupon 1 (L2S17) type, is plotted in Figure 5.3a. The variances explained by the principal components and the number of principal components found from the cross-validation study (the minimum points of the plots in Figure 5.3a and Figure 5.3b) are summarized in Table 5.1 for different window lengths. From Figure 5.3a, it can be seen that for the window lengths 400, 600, 800, 1000, and 1200, the MSPE is minimized with 6, 5, 1, 7 and 9 principal components, respectively, and for windows of 1400 and longer, 10 principal components are needed. For window lengths less than 1200 the number of principal components needed is quite erratic, but it stabilizes after 1200. For all cases considered, a model with window length 1400 and 10 components achieves the minimum MSPE overall (MSPE = 1.41 × 10^5 ), see Figure 5.3a. Similarly, for the moving window, it can be seen in Figure 5.3b that for window positions 1, 2, 3, 4, and 5 the MSPE is minimized by 7, 6, 9, 7 and 2 principal components, respectively. The smallest MSPE is obtained at position 3, where 9 principal components give an MSPE of 1.27 × 10^5 . Figure 5.10a shows the optimum length of the increasing window and Figure 5.10b shows the optimum position of the moving window. Figure 5.4a shows the fitted versus actual delamination areas for the models with the optimal number of principal components, indicating that a very good fit is achieved between predicted and observed values, as measured by the small MSPE and high coefficient of determination R2 , for window sizes greater than 1200.
The cross-validation study indicates that window length p = 1400 and number of components a = 10 is best for the first coupon. Similar results were obtained with the second coupon type. Similarly, Figure 5.4b shows the fitted versus actual value plot for the moving window technique. Window position

Figure 5.3: (a) Mean squared prediction error for increasing number of principal components at different window lengths. (b) Mean squared prediction error for increasing number of principal components at different positions of the moving window.

Table 5.1: Variance explained by different loading vectors and the number of principal components corresponding to each window length.

Window length | λ1 | λ2 | λ3 | λ4 | λ5 | λ6 | λ7 | λ8 | λ9 | λ10 | # PC
400 | 64.11 | 16.1 | 4.15 | 0.92 | 0.58 | 0.34 | 0.21 | 0.17 | 0.07 | 0.04 | 6
600 | 77.54 | 20.49 | 12.17 | 2.04 | 1.77 | 0.85 | 0.73 | 0.37 | 0.28 | 0.21 | 5
800 | 62.9 | 60.42 | 27.89 | 17.15 | 7.24 | 6.65 | 2.86 | 2.11 | 0.86 | 0.49 | 1
1000 | 67.15 | 51.76 | 43.58 | 35.74 | 18.59 | 12.25 | 7.62 | 6.94 | 4.04 | 2.39 | 7
1200 | 74.53 | 54.07 | 47.93 | 39.98 | 19.8 | 14.28 | 10.31 | 7.77 | 5.92 | 3.76 | 9
1400 | 80.55 | 58.79 | 50.7 | 42.94 | 21.06 | 16.32 | 11.74 | 9.40 | 7.17 | 5.4 | 10
1600 | 86.09 | 62.4 | 53.71 | 46.07 | 22.96 | 17.84 | 14.38 | 10.01 | 7.61 | 7.09 | 10
1800 | 90.85 | 67.21 | 56.38 | 48.07 | 24.91 | 19.37 | 16.39 | 10.66 | 8.39 | 7.61 | 10
2000 | 95.70 | 70.8 | 59.36 | 50.72 | 26.25 | 20.49 | 17.46 | 11.41 | 9.04 | 8.19 | 10

1 and position 3 seem to produce better agreement between the true and fitted values. Window position 3 is preferred to window position 1 because it has the smaller MSPE. The mean squared prediction errors (MSPE) and the number of PCs used for each window position are shown in Table 5.2.

Table 5.2: Variance explained by different loading vectors and the number of principal components at various positions of the window.

Window position | λ1 | λ2 | λ3 | λ4 | λ5 | λ6 | λ7 | λ8 | λ9 | λ10 | # PC
0-400 | 62.15 | 19.74 | 11.89 | 2.00 | 0.91 | 0.51 | 0.27 | 0.19 | 0.16 | 0.06 | 7
400-800 | 59.44 | 25.47 | 10.63 | 7.09 | 6.45 | 3.03 | 1.82 | 0.83 | 0.34 | 0.13 | 6
800-1200 | 43.62 | 32.00 | 30.89 | 14.86 | 12.05 | 9.08 | 6.88 | 3.81 | 2.23 | 1.32 | 9
1200-1600 | 48.02 | 29.62 | 22.63 | 21.36 | 13.15 | 4.76 | 4.69 | 4.14 | 3.33 | 1.16 | 7
1600-2000 | 45.58 | 36.40 | 22.09 | 16.26 | 11.51 | 8.71 | 4.38 | 2.90 | 2.54 | 1.19 | 2

Based on a comparison between the MSPE values in Figure 5.3a and Figure 5.3b and the fitted vs. actual value plots in Figure 5.4a and Figure 5.4b, we conclude that the moving window approach performs better than the increasing window technique. This is because the moving window approach focuses on a specific region of the Lamb-wave signal and, in doing so, filters out the noise


occurring due to environmental factors and boundary reflections. This noise, if not removed, will obscure the damage detection power of the Lamb-wave sensors. We find the eigenvectors of the data covariance matrix, represented in a p × a loading vector matrix V , and the N × a score matrix from Z = X T V , thus reducing the raw time-history data from p to a dimensions (p = 1400 and a = 10 for the first coupon). We regress the delamination areas y m on the score matrix Z and obtain the a × 1 coefficient vector θ.

5.2.1 Remaining Useful Life Prediction Using Wiener Process Model

The Wiener process model (Equation (5.5)) is fitted to the predictions of the PCR model. Equations (5.9) and (5.10) are used to find the parameter estimates δ̂ and σ̂. The predictions and prediction intervals of delamination areas are found using Equation (5.12). Figure 5.5 shows the forecasts and the 95% prediction intervals of the Wiener process for the two coupon types and 13 damage conditions (the thin solid lines show the predictions, the dotted lines show the prediction intervals, and the thick lines show the measurements). For the first coupon, the predictions for the first 11 samples are one-step-ahead predictions (the observations available up to the current time instant are used to predict the next sample), while the forecasts of samples 12 and 13 are one-step-ahead and two-step-ahead forecasts, respectively (observations up to sample 11 are used). In order to forecast the delamination for the unobserved samples 12 and 13, we fit the Wiener process to the PCR model delamination predictions of the first 11 samples. For the second coupon, the first 10 samples are predicted and the last 3 samples are forecasted. It can be seen that the forecasts very closely follow the trend and the prediction intervals encompass the future delamination values. The prediction intervals are narrower for the second coupon than those of the first coupon, indicating that there is less sampling variability in the sensor and X-ray measurements for the second coupon data. Surprisingly, Figure 5.6 shows that although the moving window technique has a better MSPE than the increasing window technique, the one-step-ahead predictions for the first 11

samples and the forecasts for samples 12 and 13 are not very different from those of the increasing window technique. The probability that the coupon will fail in the future is calculated based on the Inverse Gaussian distribution given in Equation (5.13). To see the effect of the choice of the failure threshold, we consider two different values in this equation: L = 3000 mm2 and 1000 mm2 for the first coupon and L = 800 mm2 and 600 mm2 for the second coupon. The failure time distribution function F (t) of the first coupon after observing the Lamb-wave data up to 750,000 cycles (after the 11th measurement is taken) is plotted in Figure 5.7a, which gives the probability that the coupon will fail in the next ten million cycles. It can be seen that for the threshold L = 3000 mm2 the median failure time (the time by which the component will fail with probability 50%) is about 880,000 cycles and the probability of failure within the next 1.5 million cycles is almost 99%. If we assume 10−1 to be the threshold failure probability, then it can be seen from the curves that the remaining useful life for the 3000 mm2 threshold is about 475,000 cycles and for the 1000 mm2 threshold, about 105,000 cycles. Similarly for the second coupon, the failure time distribution based on the data at 500,000 cycles (after the 10th measurement is taken) is plotted in Figure 5.7b. The median failure time for a failure threshold of 800 mm2 is 1.7 million cycles, and for a failure threshold of 600 mm2 it is 1.3 million cycles. The remaining useful life of the coupon for a failure probability of 10−1 is 1 million cycles for the 600 mm2 threshold and 1.4 million cycles for the 800 mm2 threshold. Figure 5.8a and Figure 5.8b show the time-to-failure distributions of coupon 1 and coupon 2 estimated using the moving window approach. Although the MSPE of the moving window technique is smaller than that of the increasing window technique, the forecast is not significantly different from the one obtained using the increasing window technique.
Figure 5.9a and Figure 5.9b show the time-to-failure distributions for coupon 1 and coupon 2 estimated using the gamma process based degradation model of Equation (5.16). The median failure time at a failure threshold of 1000 mm2 is 680,000 cycles and the median failure time for 3000 mm2 is more than 900,000 cycles. For coupon 2, a failure threshold of 600 mm2 has a median failure time of 725,000 cycles

and the median failure time for 800 mm2 is more than 800,000 cycles. We conclude that gamma based model gives a less conservative estimate of the failure time distribution than the estimate provided by PCR based Wiener process. In other words, gamma based degradation model provides an estimate of remaining useful life of the structure which is more than the estimate of PCR based Wiener process. When preparing a maintenance strategy for critical components, it is preferable to chose a maintenance strategy that is developed with due consideration of the worst case scenario over the ones that does not. Therefore, PCA based Wiener process should be preferred over Gamma based degradation model. Regarding PCR based Wiener process, both increasing window and moving window have similar prediction performance. The reason for such similar performance of two techniques can be inferred from Figure 5.10a and Figure 5.10b. Figure 5.10a shows the optimum window of length 1400 samples obtained using increasing window technique and Figure 5.10a shows the optimum window obtained using moving window technique at position 800-1200 samples. Both these techniques are roughly focusing on the first half of the second wave packet which is less noisy and makes clear distinction between baseline and fatigue loading condition. Despite the fact that both increasing window technique and moving window technique have similar prediction accuracy, we would recommend the first one over later because moving window technique significantly reduces the dimensionality of the Lamb-wave signal required for diagnostics of the structures.

63

Figure 5.4: (a) Fitted values versus actual values for different window lengths; the mean squared prediction error (MSPE) is lowest for the window of length 1400. (b) Fitted values versus actual values for different positions of the moving window; position 3 has the least MSPE.


Figure 5.5: Delamination area prediction from one-step ahead forecast of Wiener process model and PC score of Lamb-wave signal. The thin solid lines indicate predictions and the dashed lines indicate the 95% prediction interval. (a) Coupon one. (b) Coupon two.

Figure 5.6: Delamination area prediction from one-step ahead forecast of Wiener process model and PC score of Lamb-wave signal. Red and green lines indicate the one-step-ahead predictions and two-step-ahead forecasts, respectively. The dashed lines for both colors indicate the 95% prediction intervals.


Figure 5.7: Probability distribution function of the failure time for 3000 mm2 and 1000 mm2 delamination failure thresholds. (a) Coupon one. (b) Coupon two.

Figure 5.8: Probability distribution function of the failure time estimated using the moving window approach and the Wiener process. (a) Coupon 1 with 3000 mm2 and 1000 mm2 delamination failure thresholds. (b) Coupon 2 with 800 mm2 and 600 mm2 delamination thresholds.


Figure 5.9: Time-to-failure distribution using the gamma process. (a) Coupon 1 at damage thresholds 1000 and 3000 mm2. (b) Coupon 2 (L3S18) at damage thresholds 600 and 800 mm2.

Figure 5.10: (a) Optimum window size for the Lamb-wave signal using the increasing window technique. (b) Optimum window position obtained using the moving window technique.


5.3 Conclusion

There have been many studies on dimension reduction of guided-wave sensor data for detecting and localizing structural damage. However, methods for continuous monitoring and for prediction of future health are relatively few. In this chapter, we compared two dimensionality reduction approaches based on windowing of the Lamb-wave signal, namely the moving-window and increasing-window techniques. We also presented a new degradation modeling approach for reliability prediction of composite structures from Lamb-wave sensor data. A Wiener process is used to model the stochastic growth of the delamination area in carbon fiber composites under fatigue loading. The delamination area is found from Lamb-wave sensing data using principal components regression and used to fit the Wiener process model. Since the Wiener process is a stochastic model, it provides the predictive distribution of the future delamination area, from which the probability that the delamination will exceed a critical value is computed and used as a reliability metric. The effectiveness of the proposed method is illustrated on real Lamb-wave data collected under fatigue loading conditions. The features of the Lamb-wave signal extracted via principal components were able to track the delamination growth in the two coupon cases reasonably well, and the Wiener process modeling provided adequate forecasts of future delamination levels and remaining useful life predictions. The material properties expected to influence the sensor measurements include ply orientations, number of plies, resin content, resin type and fiber type. Therefore, as long as the algorithm is trained on a prototype with similar material properties, it is expected to perform consistently with the training results and to be portable to a new structure.
We note, however, that the data used in training were collected under well-controlled experimental conditions, which allowed the modeling assumptions to be satisfied, including linearity of the delamination growth rate and normality of the model error. In actual applications of

the approach, the modeling assumptions have to be checked carefully to produce repeatable results.


CHAPTER 6

REGULARIZED LINEAR DISCRIMINANT ANALYSIS BASED BAYESIAN MULTI-SENSORY DECISION FUSION

In this chapter, we present a Bayesian decision fusion algorithm based on regularized linear discriminant analysis for multi-sensory decision fusion. The algorithm detects damage without any intermediate feature extraction step and is therefore more efficient in handling high-dimensional data. A robust discriminant model is obtained by shrinking the covariance matrix toward a diagonal matrix and by thresholding redundant predictors without hurting the predictive power of the model. The shrinkage and threshold parameters of the discriminant function (decision boundary) are estimated to minimize the classification error. Furthermore, we show how the damage classification achieved by the proposed method can be extended to multiple sensors through a Bayesian decision fusion formulation: the detection probability of each sensor is used as a prior in estimating the posterior detection probability of the entire network, and the posterior detection probability serves as the quantitative basis for the final decision about the damage.

6.1 Proposed Approach

The algorithm works in three stages. In stage 1, the parameters of regularized linear discriminant analysis (RLDA) and of the Bayesian decision fusion method are obtained using the training data. In stage 2, the sensors embedded in the specimen process Lamb-wave signals to make inferences about the condition of the structure, as explained in Section 6.1.1. In stage 3, multi-sensor decision fusion is performed. Figure 6.1 gives a pictorial representation of

the damage detection algorithm. The decisions made by all the sensors participating in the SHM process are fused at the fusion center to arrive at a single conclusion about the condition of the structure, as described in Section 6.1.2. In this chapter we use the data from all 24 sensor-actuator pairs shown in Figure 3.2a for training and validating the algorithm. Table A.1 gives the definitions and distances of the sensor-actuator paths.

Figure 6.1: The regularized linear discriminant analysis (RLDA) based Bayesian decision fusion algorithm is a three-stage process. In stage 1, the parameters of RLDA and of Bayesian decision fusion are estimated using the training data. In stage 2, the sensor-level decision on any new incoming data is made using the parameters estimated in stage 1. In stage 3, the local sensor-level decisions are fused using the Bayesian decision fusion method, whose parameters were also obtained in stage 1.

6.1.1 Regularized Linear Discriminant Analysis

Consider initially the single-sensor case. The $n \times p$ data matrix $X$ represents the collection of observations, where $p$ is the dimension of the sensor signal and $n$ is the number of sensor observations. Let $f_k(x)$ denote the class-conditional density of a single observation $X$ in class $G = k$ (for $k = 1, \ldots, K$) with prior $\pi_k$, where $\sum_{k=1}^{K} \pi_k = 1$ [51]. From Bayes' theorem,

$$\Pr(G = k \mid X = x) = \frac{f_k(x)\,\pi_k}{\sum_{l=1}^{K} f_l(x)\,\pi_l} \qquad (6.1)$$

If we assume $f_k(x)$ to be a multivariate normal density,

$$f_k(x) = \frac{1}{(2\pi)^{p/2}\,|\Sigma_k|^{1/2}} \exp\left(-\tfrac{1}{2}(x-\mu_k)^T \Sigma_k^{-1}(x-\mu_k)\right). \qquad (6.2)$$

We adopt the multivariate normal distribution for several reasons. First, it is the natural distribution for the sum of a large number of independent random variables. Second, the Gaussian distribution has maximum entropy (uncertainty) for a given mean and variance. It can also model many situations in which the data have been corrupted by random processes. The discriminant score for class $k$ can be written, up to an additive constant, as [51]

$$\delta_k(x) = \log f_k(x) + \log \pi_k = -\tfrac{1}{2}\log|\Sigma_k| - \tfrac{1}{2}(x-\mu_k)^T \Sigma_k^{-1}(x-\mu_k) + \log \pi_k \qquad (6.3)$$

If each class has its own distinct covariance estimate, Equation 6.3 gives a quadratic discriminant analysis (QDA) function. A special case, linear discriminant analysis (LDA), arises when all classes share a common covariance matrix ($\Sigma_k = \Sigma$ for all $k$). The normalization factor $\log|\Sigma|$ and the quadratic term $x^T\Sigma^{-1}x$ are then common to all classes and cancel, so the discriminant score for each class reduces to

$$\delta_k(x) = x^T \Sigma^{-1}\mu_k - \tfrac{1}{2}\mu_k^T \Sigma^{-1}\mu_k + \log \pi_k \qquad (6.4)$$

Usually, when $p$ is very large, LDA is preferred over QDA, because QDA has many more parameters and is computationally expensive: LDA requires $(K-1)(1+p)$ predictor parameters, whereas QDA requires $(K-1)\left[p(p+3)/2 + 1\right]$ [51]. Taking the natural logarithm of Equation 6.1 and forming the ratio between two classes gives the decision boundary between class $k$ and class $l$ for the linear case:

$$\log\frac{\Pr(G=k \mid X=x)}{\Pr(G=l \mid X=x)} = \delta_k(x) - \delta_l(x) = \log\frac{\pi_k}{\pi_l} - \tfrac{1}{2}(\mu_k+\mu_l)^T\Sigma^{-1}(\mu_k-\mu_l) + x^T\Sigma^{-1}(\mu_k-\mu_l). \qquad (6.5)$$
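Equation (6.4) can be evaluated directly once the class means, the pooled covariance, and the priors are estimated. The following sketch, with made-up two-class statistics, computes the linear discriminant score for each class and classifies by the largest score:

```python
import numpy as np

def lda_scores(x, means, pooled_cov, priors):
    """Linear discriminant scores delta_k(x) = x'S^-1 mu_k - 0.5 mu_k'S^-1 mu_k + log pi_k."""
    cov_inv = np.linalg.inv(pooled_cov)
    scores = []
    for mu, pi in zip(means, priors):
        scores.append(x @ cov_inv @ mu - 0.5 * mu @ cov_inv @ mu + np.log(pi))
    return np.array(scores)

# Two hypothetical classes in 2-D sharing an identity covariance
means = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
pooled_cov = np.eye(2)
priors = [0.5, 0.5]
x = np.array([1.8, 2.1])
k_hat = int(np.argmax(lda_scores(x, means, pooled_cov, priors)))  # predicted class index
```

A point near the second class mean is assigned to class 1, since its score dominates.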

In the small-sample-size setting, where the dimension of the data is very large compared to the number of samples available ($p \gg n$), the resulting covariance estimate $\hat\Sigma$ is ill-conditioned or nearly singular, and hence not all of its parameters are identifiable [43]. The problem can be understood more clearly from the spectral decomposition of the covariance estimate, $\Sigma = \sum_{i=1}^{p} e_i v_i v_i^T$, where $e_i$ is the $i$-th eigenvalue of $\Sigma$ and $v_i$ the corresponding eigenvector, so that $\Sigma^{-1} = \sum_{i=1}^{p} v_i v_i^T / e_i$. The discriminant score in Equation (6.3), scaled by $-2$ and dropping constants, then takes the form

$$\delta(x) = \sum_{i=1}^{p} \frac{\left[v_i^T(x-\mu_k)\right]^2}{e_i} + \sum_{i=1}^{p} \log e_i - 2\log \pi_k. \qquad (6.6)$$

The discriminant score is heavily weighted by the smallest eigenvalues and the directions of their associated eigenvectors. Moreover, sample eigenvalues are biased: the largest estimates are biased high and the smallest are biased low, a phenomenon that becomes more pronounced as the sample size decreases [43]. Regularization can be applied to reduce the variance of the parameters associated with the ill-conditioned covariance matrix at the expense of increased bias; it consists of adding extra information to solve an ill-posed problem. We use the diagonal elements of the covariance estimate to regularize it. The regularized covariance matrix is defined as

$$\hat\Sigma(\gamma) = (1-\gamma)\hat\Sigma + \gamma\,\mathrm{diag}(\hat\Sigma) \qquad (6.7)$$

where the value of $\gamma$ is chosen by testing the performance of the model on the validation data. In addition to regularization, the coefficients of the linear term in Equation 6.4 can be thresholded so that coefficients smaller than a preassigned threshold are eliminated [49]: if $\mu_k^* = \Sigma^{-1}\mu_k$, the thresholded coefficients are $\mathrm{sgn}(\mu_k^*)(|\mu_k^*| - \Delta)_+$. The discriminant score is thus determined by the pair of regularization parameter $\gamma$ and threshold parameter $\Delta$. After regularization, the sensor-level decision function is

$$\log\frac{\Pr_k(G \mid X)}{\Pr_l(G \mid X)} \underset{z_i=1}{\overset{z_i=0}{\gtrless}} 0 \qquad (6.8)$$

where $\Pr_k(G \mid X)$ is found from Equation 6.1 using the regularized parameters.
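A minimal sketch of the regularized estimation of Equation (6.7) together with the coefficient thresholding follows. The function and variable names are ours, and the soft-threshold form assumes the $(|\cdot|-\Delta)_+$ rule quoted above:

```python
import numpy as np

def rlda_direction(X0, X1, gamma, delta):
    """Regularized, soft-thresholded discriminant direction for two classes.

    X0, X1: (n0, p) and (n1, p) arrays of baseline and damage training signals.
    gamma:  shrinkage weight toward the diagonal of the pooled covariance (Eq. 6.7).
    delta:  soft threshold applied to the coefficients Sigma^-1 (mu1 - mu0).
    """
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    n0, n1 = len(X0), len(X1)
    pooled = (np.cov(X0, rowvar=False) * (n0 - 1)
              + np.cov(X1, rowvar=False) * (n1 - 1)) / (n0 + n1 - 2)
    # Eq. (6.7): shrink the pooled covariance toward its diagonal
    sigma_reg = (1 - gamma) * pooled + gamma * np.diag(np.diag(pooled))
    beta = np.linalg.solve(sigma_reg, mu1 - mu0)
    # soft-threshold: eliminate coefficients smaller than delta in magnitude
    beta = np.sign(beta) * np.maximum(np.abs(beta) - delta, 0.0)
    return beta
```

A new signal x would then be scored by the inner product x @ beta and compared against a decision threshold, in the spirit of Equation (6.8).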

6.1.2 Bayesian Decision Fusion

Suppose there is a population of objects, a fraction $\xi$ of which are defective. Note that the object being classified in the decision fusion and the training data are sampled from the same population, or are assumed to be drawn from very similar populations. A training sample of size $m$ is drawn at random from this population and tested; it contains $m_1$ defective objects and $m_0$ non-defective ones (with $m = m_1 + m_0$). In the testing, there are a total of $mk$ sensor readings, with $Z_{01}$ false positives, $Z_{00}$ true negatives, $Z_{11}$ true positives, and $Z_{10}$ false negatives. Later, a new object is drawn from this population and tested, resulting in sensor readings $z_1, z_2, \ldots, z_k$. We wish to classify this new object as $W = 1$ or $W = 0$ on the basis of the sensor readings and the training data. Define parameters $\theta_0$ and $\theta_1$ as shown in Table 6.1.

Table 6.1: Actual observation versus the observation predicted by each sensor.

    Actual case      Predicted $z_i = 0$              Predicted $z_i = 1$
    $W = 0$          $1-\theta_0$ (true negative)     $\theta_0$ (false positive)
    $W = 1$          $1-\theta_1$ (false negative)    $\theta_1$ (true positive)

Assume a prior distribution for $(\theta_0, \theta_1, \xi)$ in which these parameters are independent with $\theta_0 \sim \mathrm{Beta}(\alpha_0, \beta_0)$, $\theta_1 \sim \mathrm{Beta}(\alpha_1, \beta_1)$, and $\xi \sim \mathrm{Beta}(\alpha_\xi, \beta_\xi)$. For convenience, define $\psi = (\theta_0, \theta_1, \xi)$ and $Z = (z_1, z_2, \ldots, z_k)$, and let $D$ denote the training data. We will classify the new object as $W = 1$ or $W = 0$ depending on

which class has the greater posterior probability given $Z$ and $D$. The posterior probability of class $W$ is found as follows:

$$P(W \mid Z, D) = \frac{P(W, Z \mid D)}{P(Z \mid D)} \propto P(W, Z \mid D) = \int P(W, Z \mid \psi, D)\,P(\psi \mid D)\,d\psi = \int P(W, Z \mid \psi)\,P(\psi \mid D)\,d\psi$$
$$= \int P(W \mid \psi)\,P(Z \mid W, \psi)\,P(\psi \mid D)\,d\psi = E^*\!\left[P(W \mid \psi)\,P(Z \mid W, \psi)\right] \qquad (6.9)$$

where $E^*$ denotes expectation with respect to $P(\psi \mid D)$, the posterior distribution of $\psi$ given the training data. It is immediate that

$$P(W \mid \psi)\,P(Z \mid W, \psi) = \begin{cases} \xi\,\theta_1^{T}(1-\theta_1)^{k-T} & \text{if } W = 1 \\ (1-\xi)\,\theta_0^{T}(1-\theta_0)^{k-T} & \text{if } W = 0 \end{cases}$$

where $T = \sum_{i=1}^{k} z_i$. Standard arguments show that $P(\psi \mid D) \propto P(D \mid \psi)\,P(\psi)$, and the posteriors of the parameters $\theta_0, \theta_1, \xi$ (given $D$) are independent with $\theta_0 \mid D \sim \mathrm{Beta}(\alpha_0 + Z_{01}, \beta_0 + Z_{00})$, $\theta_1 \mid D \sim \mathrm{Beta}(\alpha_1 + Z_{11}, \beta_1 + Z_{10})$, and $\xi \mid D \sim \mathrm{Beta}(\alpha_\xi + m_1, \beta_\xi + m_0)$. Computing the expectations in Equation (6.9) then leads to

$$P(W=1 \mid Z, D) \propto A_1 = \frac{\alpha_\xi + m_1}{\alpha_\xi + \beta_\xi + m} \cdot \frac{B(\alpha_1 + Z_{11} + T,\; \beta_1 + Z_{10} + k - T)}{B(\alpha_1 + Z_{11},\; \beta_1 + Z_{10})} \qquad (6.10)$$

$$P(W=0 \mid Z, D) \propto A_0 = \left(1 - \frac{\alpha_\xi + m_1}{\alpha_\xi + \beta_\xi + m}\right) \frac{B(\alpha_0 + Z_{01} + T,\; \beta_0 + Z_{00} + k - T)}{B(\alpha_0 + Z_{01},\; \beta_0 + Z_{00})} \qquad (6.11)$$

by which we mean that

$$P(W=1 \mid Z, D) = \frac{A_1}{A_1 + A_0} \quad \text{and} \quad P(W=0 \mid Z, D) = \frac{A_0}{A_1 + A_0}. \qquad (6.12)$$

In these computations, we repeatedly use the formula for the moments of a Beta distribution: if $X \sim \mathrm{Beta}(\alpha, \beta)$, then

$$E\left[X^j(1-X)^k\right] = \frac{B(\alpha+j,\; \beta+k)}{B(\alpha, \beta)}$$

where $B(\cdot,\cdot)$ denotes the Beta function, $B(\alpha, \beta) = \dfrac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}$.

Log-likelihood ratio test. We use the posterior probabilities $P(W=0 \mid Z, D)$ and $P(W=1 \mid Z, D)$ in a log-likelihood ratio test to make the final hypothesis about the state of the structure:

$$\log\frac{P(W=1 \mid Z, D)}{P(W=0 \mid Z, D)} \underset{W=0}{\overset{W=1}{\gtrless}} 0 \qquad (6.13)$$

The estimates of $P(W=0 \mid Z, D)$ and $P(W=1 \mid Z, D)$ are calculated using Equation (6.12). Most importantly, Equations (6.10) and (6.11) show that the final marginal posterior probability is largely driven by the results of the training data ($Z_{00}, Z_{11}, Z_{01}$ and $Z_{10}$). In particular, if the training set is sufficiently large, the hyperparameters $\alpha_\xi, \beta_\xi, \alpha_0, \beta_0, \alpha_1$ and $\beta_1$ are outweighed by the training counts from both the baseline and the damage data; for small sample sizes, the prior distribution plays a much larger role. This behavior matches the intuition behind the decision-making process: when a sufficient amount of data is present, we want the training data to be the basis for the decision.
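Equations (6.10)-(6.13) translate directly into code. The sketch below evaluates the Beta-function ratios on the log scale for numerical stability; the training counts are hypothetical, and uniform Beta(1, 1) priors are assumed by default:

```python
from math import lgamma, exp

def log_beta(a, b):
    """log B(a, b) computed via log-gamma, for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bayes_fusion(z, counts, m1, m0, a_xi=1.0, b_xi=1.0,
                 a0=1.0, b0=1.0, a1=1.0, b1=1.0):
    """Posterior probability P(W=1 | Z, D) from Equations (6.10)-(6.12).

    z:      list of k local sensor decisions (0/1) for the new object.
    counts: dict with training counts Z01, Z00, Z11, Z10.
    m1, m0: number of defective / non-defective training objects.
    The remaining arguments are the Beta hyperparameters.
    """
    k, T = len(z), sum(z)
    m = m1 + m0
    xi_mean = (a_xi + m1) / (a_xi + b_xi + m)
    A1 = xi_mean * exp(log_beta(a1 + counts["Z11"] + T, b1 + counts["Z10"] + k - T)
                       - log_beta(a1 + counts["Z11"], b1 + counts["Z10"]))
    A0 = (1 - xi_mean) * exp(log_beta(a0 + counts["Z01"] + T, b0 + counts["Z00"] + k - T)
                             - log_beta(a0 + counts["Z01"], b0 + counts["Z00"]))
    return A1 / (A1 + A0)

# Hypothetical training counts for a 3-sensor network
counts = {"Z11": 50, "Z10": 5, "Z01": 8, "Z00": 70}
p_damage = bayes_fusion([1, 1, 0], counts, m1=20, m0=30)
decision = 1 if p_damage > 0.5 else 0   # the log-likelihood ratio test of Eq. (6.13)
```

With two of three sensors firing and reliable training counts, the fused posterior favors damage.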

6.2 Results and Discussion

The results of sensor-level damage detection and of decision fusion are discussed separately in the following two subsections.

6.2.1 Sensor-level Damage Detection

The method presented in this section uses a supervised approach for damage detection. Instead of acting as a black box that detects a novelty, the regularized discriminant analysis algorithm learns directly from Lamb-wave signals before being deployed in real time. The regularized discriminant score, obtained as the inner product of the raw sensor signal with the decision boundary in Equation (6.4), is the basis for sensor-level damage detection. We applied the methodology to two composite specimens: 1) L2S17 and 2) L3S18. The algorithm searches a 2-dimensional Euclidean space for the values of the regularization parameter (γ) and threshold parameter (∆) that minimize the classification error. Figure 6.2a and Figure 6.2b show the classification-error contours for Sensor 1-7 of specimens L2S17 and L3S18, respectively. The optimum (γ, ∆) pair for Sensor 1-7 is (0.74, 0.11) in specimen L2S17 and (0.28, 0.24) in specimen L3S18. Note that ∆ is a threshold parameter and does not help make the covariance matrix Σ in Equation 6.5 invertible; it eliminates linear predictors that do not contribute significantly to the predictive capacity of the discriminant, thereby reducing the computation time. Figure 6.3a and Figure 6.3b show scatter plots of the optimum (γ, ∆) pairs for all sensors listed in Table A.1. The regularization parameter values for most sensors are concentrated between 0.8 and 1, whereas the threshold parameter values are distributed roughly uniformly between 0 and 1.

Figure 6.2: Classification-error contours for different regularization (γ) and threshold (∆) parameters for Sensor 1-7: (a) specimen L2S17, (b) specimen L3S18.

After the regularization and threshold parameters have been estimated, the algorithm is tested on specimen L2S17, consisting of s1 = 59 baseline signals (W = 0) and s2 = 9 damage signals (W = 1), and on specimen L3S18, consisting of s1 = 89 baseline signals and s2 = 11 damage signals. For a damage detection method to be considered good, it must correctly detect as many states (baseline or damage) as possible. We use the accuracy metric to assess the performance of the sensor-level decision-making process.

Figure 6.3: Scatter plots of the threshold parameter (∆) versus the regularization parameter (γ) for all sensors participating in the SHM system.

The accuracy of a test is determined by how well it classifies data from two separate classes [39]. The overall accuracy, shown in Equation (6.14), is the average of the fraction of damage signals and the fraction of baseline signals correctly classified:

$$\text{Accuracy} = \frac{TP/s_2 + TN/s_1}{2} \qquad (6.14)$$

where $s_1$ is the size of the baseline test data, $s_2$ is the size of the damage test data, and TP, FP, TN, FN denote true positives, false positives, true negatives, and false negatives, respectively. In addition to accuracy, we also estimate three further metrics:

$$\text{Precision} = \frac{TP/s_2}{TP/s_2 + FP/s_1}, \qquad \text{Recall} = \frac{TP}{TP + FN}, \qquad \text{F-measure} = \frac{2}{1/\text{Precision} + 1/\text{Recall}}$$

Precision measures the positive predictive power of a classifier, recall measures its true positive rate, and the F-measure is the harmonic mean of precision and recall [39].
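These metrics follow directly from the classification counts. The sketch below mirrors the class-size-normalized forms used above; the counts themselves are hypothetical:

```python
def sensor_metrics(tp, fp, tn, fn, s1, s2):
    """Accuracy (Eq. 6.14), precision, recall and F-measure from classification counts.

    s1: number of baseline test signals; s2: number of damage test signals.
    """
    accuracy = (tp / s2 + tn / s1) / 2
    precision = (tp / s2) / (tp / s2 + fp / s1)
    recall = tp / (tp + fn)
    f_measure = 2 / (1 / precision + 1 / recall)
    return accuracy, precision, recall, f_measure

# Hypothetical sensor results on 59 baseline and 9 damage test signals
acc, prec, rec, f1 = sensor_metrics(tp=8, fp=10, tn=49, fn=1, s1=59, s2=9)
```

Because the rates are normalized by the class sizes, a sensor is not rewarded merely for the baseline class being much larger than the damage class.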

Figure 6.4: Accuracy of all sensors participating in the multi-sensory SHM system, plotted against their distance from damage.

Table 6.2 provides a rough guide that can be used as a benchmark for the accuracy of a classification method [130]. Figure 6.4 shows that the accuracy of RLDA decreases as the distance from the damage increases. This inverse relationship between accuracy and distance is expected; however, it appears to be quite weak, likely because the specimen is small and significant boundary reflections corrupt the actual signal. The receiver operating characteristic (ROC) curve, which plots the true positive rate against the false positive rate, is another tool used to visualize performance and select a classifier. The area under the curve (AUC) measures the ability of the test to correctly classify cases with and without damage. The AUC values of all 24 sensors for specimens L2S17 and L3S18 are shown in Table A.4a and Table A.4b, respectively. For specimen L2S17, 14 of 24 sensors have AUC values above 0.9, which is considered excellent performance (Class A); 7 sensors have values above 0.8, considered good (Class B); and 3 sensors have AUC values above 0.7 (Class C). The sensor-level damage classification for specimen L3S18 is even better: 21 of 24 sensors have AUC values of 0.9 or higher.

Table 6.2: A rough guide for classifying the accuracy of a classification method.

    Area under the curve (AUC)    Category
    0.90-1.00                     excellent (A)
    0.80-0.90                     good (B)
    0.70-0.80                     fair (C)
    0.60-0.70                     poor (D)
    0.50-0.60                     fail (F)

Table A.5 shows the accuracy, precision, recall, and F-measure for all 24 sensors used in the analysis. The average accuracy is 66% for specimen L2S17 and 73% for specimen L3S18; the average precision is 62% and 69%, respectively; and the average recall is 89% and 76%. The average F-measure, the harmonic mean of precision and recall, is 73% for specimen L2S17 and 80% for specimen L3S18.

6.2.2 Comparison to PCA Based Approach

We compared the performance of the regularized discriminant analysis method with a principal component (PC) based Gaussian discriminant analysis method. Three PCs (capturing more than 85% of the data variance) of the Lamb-wave sensor data were extracted as damage-sensitive features (DSFs). These DSFs were then used in Equation (6.3) to calculate the discriminant scores. Using the three largest PCs avoided the small-sample-size problem that arose when the raw Lamb-wave sensor data were used, so no regularization was needed. Table A.6 gives the accuracy, precision, recall, and F-measure for all 24 sensors using the PC-based discriminant analysis method. Average accuracy is 59% and 69% for L2S17 and L3S18, respectively; average precision is 25% and 30%; average recall is 92% and 97%; and average F-measure is 39% and 45%. It can be seen that the RLDA-based classification outperforms PC-based discriminant analysis. Even though PCA minimizes the loss of information and filters out environmental noise from the sensor signal, its performance does not match that of RLDA. The flexibility of RLDA in using the raw Lamb-wave sensor data directly in the classification is its major advantage. Table 6.3 compares the average performance metrics (accuracy, precision, recall, and F-measure) of RLDA and PCA-based sensor-level damage classification for specimens L2S17 and L3S18; the values are obtained from Table A.6 and Table A.5. The F-measure of RLDA, the harmonic mean of precision and recall, is almost twice that of PCA-based discriminant analysis, and the accuracy of RLDA is also higher.

Table 6.3: Comparison of the average performance of the RLDA method to PCA-based discriminant analysis.

                          L2S17                      L3S18
                          PCA-based DA    RLDA       PCA-based DA    RLDA
    Average accuracy      59%             66%        69%             73%
    Average precision     25%             62%        30%             69%
    Average recall        92%             89%        97%             76%
    Average F-measure     39%             73%        45%             80%

6.2.3 Decision Fusion

The proposed Bayesian decision fusion method (given in Equation (6.13)) was compared with five existing decision fusion methods: Chair-Varshney's optimum decision fusion rule ($\Lambda_{CV}$) [8], the ideal sensor rule ($\Lambda_{IS}$) [16], the AND rule, the OR rule, and the majority rule ($\Lambda_{CR}$). A detailed discussion of the Chair-Varshney and ideal sensor rules is presented in Section 2.5. The


following definitions are used in implementing the existing methods:

$$\text{Probability of detection } (P_{d_i}) = \frac{TP_i}{TP_i + FN_i}, \qquad \text{Probability of false alarm } (P_{fa_i}) = \frac{FP_i}{FP_i + TN_i}$$
$$\text{Probability of error } (P_{e_i}) = \frac{1 - P_{d_i} + P_{fa_i}}{2}$$

where $i$ denotes the $i$-th sensor, $i = 1, \ldots, 24$. The probability of detection ($P_d$) and probability of false alarm ($P_f$) of each sensor, required to compute Chair-Varshney's optimal decision fusion score (see Equation (2.3)), are given in Table 6.4. The probability of detection is empirically estimated as the fraction of true positives among the damage signals in the training data, and the probability of false alarm as the fraction of false detections among the baseline signals in the training data. The probability of error ($P_e$) values for all sensors, required to compute the ideal sensor decision fusion score (see Equation (2.6)), are shown in the last two columns of Table 6.4. Table A.2 shows the decision fusion scores of the Bayesian decision fusion rule, Chair-Varshney's rule, the ideal sensor rule, and the majority rule for the first coupon, L2S17. The damage detection threshold for the Bayesian, Chair-Varshney, and ideal sensor rules is 0, while for the majority rule it is 12. Using these thresholds, the fused decision for specimen L2S17 under the Bayesian rule yields 14 false positives out of 59 baseline signals and no false negatives among the damage signals. Chair-Varshney's optimum decision fusion rule performs second best with 27 false positives and 0 false negatives, followed by the ideal sensor rule, the majority rule, and the OR rule. Table 6.5 and Table 6.6 summarize the fusion performance of the decision rules for the two coupons in terms of false positives and false negatives. The data sets used to estimate the false positives and false negatives do not contain equal numbers of baseline and damage data, so relying solely on Table 6.5 and Table 6.6 can be misleading; hence, we use the estimates presented in Table 6.7 to make further judgments about the decision fusion methods.
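For comparison, the majority rule and the Chair-Varshney statistic can be written compactly. The sketch below uses the standard Chair-Varshney log-likelihood form, which should be checked against Equation (2.3); the sensor decisions and the $P_d$/$P_{fa}$ values are hypothetical:

```python
from math import log

def majority_rule(z, threshold):
    """Declare damage (1) when the number of positive local decisions exceeds threshold."""
    return 1 if sum(z) > threshold else 0

def chair_varshney(z, pd, pfa, prior_logodds=0.0):
    """Chair-Varshney fused log-likelihood statistic (standard form; compare Eq. (2.3)).

    Each sensor decision contributes log(Pd/Pfa) when it fires and
    log((1-Pd)/(1-Pfa)) when it does not; declare damage when the score exceeds 0.
    """
    score = prior_logodds
    for zi, d, f in zip(z, pd, pfa):
        score += log(d / f) if zi else log((1 - d) / (1 - f))
    return score

# Hypothetical 4-sensor network: three sensors fire, one does not
z = [1, 1, 0, 1]
pd = [0.95, 0.90, 0.99, 0.92]
pfa = [0.15, 0.05, 0.10, 0.20]
fused = 1 if chair_varshney(z, pd, pfa) > 0 else 0
```

The weighting makes a highly reliable silent sensor (high $P_d$, low $P_{fa}$) pull the score strongly toward the baseline decision, unlike the unweighted majority rule.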

Table 6.4: Probability of detection (Pd), probability of false alarm (Pf), and average probability of error (Pe) of each sensor, estimated using the training data.

                     Pd                  Pf                  Pe
    Sensor ID    L2S17   L3S18       L2S17   L3S18       L2S17   L3S18
    Sensor 1-7   0.95    1.00        0.15    0.20        0.10    0.10
    Sensor 1-8   1.00    1.00        0.15    0.10        0.08    0.05
    Sensor 1-9   1.00    0.95        0.15    0.00        0.08    0.03
    Sensor 1-10  0.90    1.00        0.15    0.15        0.13    0.10
    Sensor 1-11  0.95    1.00        0.00    0.00        0.03    0.00
    Sensor 1-12  1.00    1.00        0.10    0.05        0.05    0.03
    Sensor 2-7   0.90    1.00        0.05    0.00        0.08    0.00
    Sensor 2-8   0.90    1.00        0.00    0.15        0.05    0.08
    Sensor 2-9   1.00    0.95        0.00    0.05        0.00    0.05
    Sensor 2-10  1.00    1.00        0.20    0.05        0.10    0.03
    Sensor 2-11  1.00    1.00        0.00    0.00        0.00    0.00
    Sensor 2-12  1.00    1.00        0.00    0.10        0.00    0.05
    Sensor 3-7   0.95    1.00        0.15    0.00        0.10    0.00
    Sensor 3-8   1.00    1.00        0.00    0.10        0.00    0.05
    Sensor 3-9   1.00    1.00        0.20    0.00        0.10    0.00
    Sensor 3-10  1.00    1.00        0.05    0.05        0.03    0.03
    Sensor 3-11  1.00    0.95        0.00    0.00        0.00    0.03
    Sensor 3-12  1.00    1.00        0.00    0.05        0.00    0.03
    Sensor 4-7   0.95    0.90        0.15    0.00        0.10    0.05
    Sensor 4-8   0.90    0.85        0.00    0.00        0.05    0.08
    Sensor 4-9   1.00    1.00        0.05    0.00        0.03    0.00
    Sensor 4-10  0.95    1.00        0.20    0.20        0.13    0.10
    Sensor 4-11  1.00    1.00        0.15    0.00        0.08    0.00
    Sensor 4-12  0.95    1.00        0.05    0.05        0.05    0.03

For a decision fusion rule to have good damage detection capability, a high value of the F-measure, the harmonic mean of precision and recall, is preferred. The proposed Bayesian decision fusion method has the highest value of every performance metric among the six decision fusion methods. Based on F-measure and accuracy, Chair-Varshney's rule is the next best decision fusion method after the Bayesian rule. Table A.3 shows the decision fusion scores for the Bayesian decision fusion rule, Chair-Varshney's rule, the ideal sensor rule, and the majority rule for the second coupon, L3S18. As for specimen L2S17, the damage detection threshold for the Bayesian, Chair-Varshney, and ideal sensor rules is 0, and for the majority rule it is 12. Table 6.6 shows the false positives and false negatives for specimen L3S18; Bayesian decision fusion outperforms all the remaining decision fusion methods with

only two false positives (2.4%). Table 6.8 shows the precision, recall, F-measure, and accuracy for specimen L3S18. The Bayesian method has an accuracy and an F-measure of 99%, followed by the ideal sensor rule and Chair-Varshney's rule. ROC curves, which illustrate the performance of the different decision fusion methods over a range of thresholds, are shown in Figure 6.5a and Figure 6.5b. According to Figure 6.5a, the ideal sensor rule performs slightly better than the Bayesian decision fusion rule, followed by Chair-Varshney's optimum decision fusion rule. For specimen L3S18, Figure 6.5b shows that the Bayesian decision fusion rule outperforms all other decision fusion rules.

Table 6.5: False positives for the baseline signals and false negatives for the damage signals after the final fusion for coupon L2S17. False positives are counted out of 59 baseline signals and false negatives out of 9 damage signals.

    Decision fusion              False Positives    False Negatives
    Proposed Bayesian method     14 (23.70%)        0 (0%)
    Chair-Varshney rule          27 (45.76%)        0 (0%)
    Ideal rule                   34 (57.63%)        0 (0%)
    AND rule                     0 (0%)             9 (100%)
    OR rule                      59 (100%)          0 (0%)
    Majority rule                39 (57%)           0 (0%)

Table 6.6: False positives for the baseline signals and false negatives for the damage signals after the final fusion for coupon L3S18. False positives are counted out of 83 baseline signals and false negatives out of 11 damage signals.

    Decision fusion              False Positives    False Negatives
    Proposed Bayesian method     2 (2.41%)          0 (0%)
    Chair-Varshney rule          7 (8.44%)          1 (9.09%)
    Ideal rule                   6 (7.23%)          0 (0%)
    AND rule                     0 (0%)             11 (100%)
    OR rule                      83 (100%)          0 (0%)
    Majority rule                4 (4.82%)          0 (0%)


Table 6.7: Precision, recall, F-measure, and accuracy of the decision fusion for coupon L2S17.

    Decision fusion              Precision    Recall    F-measure    Accuracy
    Proposed Bayesian method     0.81         1         0.89         0.88
    Chair-Varshney rule          0.69         1         0.81         0.77
    Ideal rule                   0.63         1         0.78         0.71
    AND rule                     0            0         0            0.5
    OR rule                      0.5          1         0.66         0.5
    Majority rule                0.60         1         0.75         0.67

Table 6.8: Precision, recall, F-measure, and accuracy of the decision fusion for coupon L3S18.

    Decision fusion              Precision    Recall    F-measure    Accuracy
    Proposed Bayesian method     0.98         1         0.99         0.99
    Chair-Varshney rule          0.92         0.91      0.91         0.91
    Ideal rule                   0.93         1         0.96         0.96
    AND rule                     0            0         0            0.5
    OR rule                      0.5          1         0.66         0.50
    Majority rule                0.91         1         0.95         0.95

Figure 6.5: Receiver operating curves of the Bayesian method compared with the optimum Chair-Varshney rule, the ideal sensor rule, and the majority rule (also known as the counting rule). The Bayesian method is represented by a solid red line, the majority rule by a broken blue line, Chair-Varshney's rule by a broken green line, and the ideal sensor rule by a broken black line. (a) For specimen L2S17 (based on area under the curve), the ideal sensor rule performs slightly better than the proposed Bayesian rule, as it hugs the north-west corner of the graph more closely. (b) For specimen L3S18, Bayesian decision fusion performs best among all the decision fusion rules.


6.3 Conclusion

This chapter presented a multi-sensory damage detection technique that can process high-dimensional Lamb-wave signals without compromising the accuracy of the damage detection system. We compared the performance of RLDA-based sensor-level classification with PC-based discriminant analysis and found that RLDA performs better at sensor-level damage classification. The proposed methodology uses a regularized discriminant analysis procedure to make sensor-level baseline-versus-damage decisions directly from the Lamb-wave data. In addition, a Bayesian decision fusion approach was presented to fuse decisions from multiple sensors. To verify the validity of the proposed hypotheses, a case study was presented using real Lamb-wave data collected under fatigue damage conditions. We demonstrated that the individual sensor decisions made by the proposed regularized discriminant analysis range from good to excellent (with areas under the ROC curve between 0.76 and 1). The Bayesian decision fusion method also significantly improves the accuracy and robustness of the local sensor-level decisions: the average sensor-level accuracy was improved from 66% to 88% for specimen L2S17 and from 73% to 99% for specimen L3S18. Although PCA minimizes the loss of information incurred during feature extraction and filters out environmental noise, it still discards some information; RLDA avoids this loss by eliminating the feature extraction step from the SHM process altogether. Future work will focus on developing an unsupervised, one-class discriminant analysis method and on making it robust to environmental and operational variability.


CHAPTER 7

CONCLUSION AND FUTURE WORK

The research presented in this dissertation focused on developing an integrated damage detection methodology that is efficient at handling the high dimensionality of Lamb-wave sensor data and at reducing the number of misdetections. The first two phases of this research (PCA-based MCUSUM and PCR-based Wiener process model) focus on identifying damage-sensitive features (DSFs) with which to monitor and prognose the structure without compromising the accuracy of the method. In the last phase, we present a supervised Bayesian decision fusion method that processes high-dimensional Lamb-wave signals without requiring any intermediate feature extraction step.

7.1 Summary

Slow fatigue loading conditions present a unique failure pattern: the physical properties of the structure deteriorate very slowly over time and are difficult to detect even with state-of-the-art methodologies. We presented two novel PCA-based statistical control charts, namely FIR MCUSUM and MCUSUM, and compared their performance to Hotelling's T² chart. FIR MCUSUM is especially useful for online monitoring scenarios in which structures undergo constant change and, consequently, pure baseline data are difficult to obtain; in such scenarios, FIR MCUSUM can be used with impure baseline data to calibrate the control limits of the process. We also verified that both PCA-based MCUSUM charts, with and without FIR, are more efficient at monitoring small changes than Hotelling's T² chart. Once damage has been identified, the next step in SHM is prognostics. Most prognostic methods are based on finite element modeling of the structure. The major

limitation of a finite-element-based degradation model is that it does not incorporate sensor measurements, which makes it inapplicable to real-time monitoring of structures. We presented a principal component regression model combined with the Wiener process to fill this gap in the literature. Principal component regression constantly updates the regression model based on new Lamb-wave sensor data, which the Wiener process then uses to draw inferences about the remaining useful life of the structure. To decrease the computational cost of online prognosis, we presented two windowing approaches: 1) increasing-window and 2) moving-window. We recommend the moving-window over the increasing-window approach because it enables us to choose the portion of the Lamb-wave signal that is most sensitive to the damage while also reducing the computational cost. Lastly, we presented a supervised damage identification method that can also classify different types of damage. This approach does not require a DSF extraction process, an intermediary step that leads to a significant loss of information. The training information used in this process is also used to calibrate the parameters of the Bayesian decision fusion method. We demonstrated that the damage detection capacity of this algorithm exceeds that of state-of-the-art decision fusion methods, with almost a 50% improvement in the accuracy of the damage detection at the decision fusion center.

7.2 Impact of the Research and Future Work

This research takes us one step closer to an integrated damage monitoring and prognostics algorithm for structures such as wind-turbine blades, airplane fuselages, and bridges, resulting in improved safety for the public as well as a reduction in immediate maintenance costs. We demonstrated how the damage monitoring, prognosis, and decision fusion processes are interconnected. As future work, we plan to extend this algorithm so that it can operate on structures under variable ambient and loading conditions. Currently, the Bayesian decision fusion method relies on a supervised training procedure that requires knowledge of both baseline and damage data. However, such baseline and damage data are not easily available in real-time monitoring. We plan to develop a one-class discriminant function that will require only baseline information yet provide sufficient training information to calibrate the Bayesian decision function.
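For context, the Chair-Varshney rule that the proposed Bayesian fusion method is compared against can be sketched in a few lines. The per-sensor detection and false-alarm probabilities below are illustrative placeholders, not values estimated from our training data.

```python
import math

def chair_varshney(decisions, p_d, p_f, prior_ratio=1.0):
    """Fuse binary sensor decisions (1 = damage, 0 = baseline) using the
    Chair-Varshney log-likelihood-ratio weights; damage is declared when the
    fused score exceeds 0."""
    score = math.log(prior_ratio)
    for u, pd, pf in zip(decisions, p_d, p_f):
        if u == 1:
            score += math.log(pd / pf)           # weight for a "damage" vote
        else:
            score += math.log((1 - pd) / (1 - pf))  # weight for a "baseline" vote
    return score

# Illustrative probabilities for three sensors (hypothetical values).
p_d = [0.95, 0.80, 0.90]   # detection probabilities
p_f = [0.05, 0.20, 0.10]   # false-alarm probabilities
print(chair_varshney([1, 1, 0], p_d, p_f))   # positive score -> damage declared
```

Note that each sensor's vote is weighted by how reliable that sensor is, which is exactly why the rule needs training data to estimate the detection and false-alarm probabilities, the limitation the proposed one-class extension aims to relax.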


APPENDIX A
SENSOR LEVEL PERFORMANCE METRICS

Table A.1: Actuator-sensor paths and their distances from the notch.

Sensor path    Distance from notch
Sensor 1-7     67.45 mm
Sensor 1-8     86.12 mm
Sensor 1-9     103.88 mm
Sensor 1-10    127.97 mm
Sensor 1-11    147.24 mm
Sensor 1-12    164.71 mm
Sensor 2-7     50.89 mm
Sensor 2-8     69.26 mm
Sensor 2-9     87.02 mm
Sensor 2-10    111.41 mm
Sensor 2-11    130.38 mm
Sensor 2-12    147.54 mm
Sensor 3-7     34.02 mm
Sensor 3-8     52.69 mm
Sensor 3-9     70.46 mm
Sensor 3-10    95.45 mm
Sensor 3-11    113.82 mm
Sensor 3-12    130.38 mm
Sensor 4-7     10.54 mm
Sensor 4-8     29.81 mm
Sensor 4-9     48.18 mm
Sensor 4-10    72.87 mm
Sensor 4-11    91.23 mm
Sensor 4-12    109.00 mm

Table A.2: Decision fusion scores for the Bayesian decision fusion rule (BR), Chair-Varshney's rule (CVR), Ideal Sensor rule (ISR), and Majority rule (MR) on sample L2S17. The thresholds for BR, CVR, and ISR are 0, while the threshold for MR is 12.

S.N.  Data type  BR      CVR     ISR     MR
1     Baseline   -26.48  -14.28  -8.82   12
2     Baseline   -55.30  -18.61  -22.70  7
3     Baseline   -43.91  -18.61  -17.68  9
4     Baseline   -38.16  -18.61  -17.68  10
5     Baseline   -32.35  -12.72  -11.79  11
6     Baseline   -20.55  -12.72  -2.37   13
7     Baseline   -20.55  -12.72  -5.30   13
8     Baseline   -38.16  -12.72  -26.50  10
9     Baseline   -43.91  -18.61  -27.13  9
10    Baseline   -38.16  -18.61  -12.47  10
11    Baseline   -20.55  -9.60   -5.00   13
12    Baseline   -20.55  -9.60   -2.75   13
13    Baseline   -32.35  -9.60   3.14    11
14    Baseline   -20.55  -13.93  3.46    13
15    Baseline   -32.35  -14.28  -15.28  11
16    Baseline   -38.16  -14.28  -20.30  10
17    Baseline   -32.35  -3.71   -10.89  11
18    Baseline   -20.55  0.57    -4.19   13
19    Baseline   -14.55  0.57    8.16    14
20    Baseline   10.40   10.04   27.99   18
21    Baseline   -8.46   1.43    14.68   15
22    Baseline   -14.55  -3.71   6.73    14
23    Baseline   -32.35  -8.39   -2.93   11
24    Baseline   -8.46   0.22    6.22    15
25    Baseline   -26.48  -4.11   -3.56   12
26    Baseline   -32.35  -10.00  -2.12   11
27    Baseline   -32.35  -0.99   -6.19   11
28    Baseline   -38.16  -9.60   -7.95   10
29    Baseline   -20.55  -4.92   8.16    13
30    Baseline   -8.46   -3.71   7.36    15
31    Baseline   -38.16  -3.71   -5.00   10
32    Baseline   -43.91  -8.04   -14.78  9
33    Baseline   -2.29   -0.99   8.52    16
34    Baseline   -14.55  -0.52   -0.53   14
35    Baseline   -26.48  -0.52   8.08    12
36    Baseline   -20.55  -0.99   8.89    13
37    Baseline   -8.46   4.90    14.15   15
38    Baseline   -2.29   4.90    17.08   16
39    Baseline   -20.55  4.90    -1.16   13
40    Baseline   -14.55  4.90    9.75    14
41    Baseline   -14.55  4.90    10.62   14
42    Baseline   -20.55  4.90    9.75    13
43    Baseline   -8.46   0.57    11.75   15
44    Baseline   -20.55  4.90    2.43    13
45    Baseline   3.99    4.90    14.78   17
46    Baseline   -2.29   4.90    14.78   16
47    Baseline   10.40   4.90    14.78   18
48    Baseline   -2.29   -0.99   17.08   16
49    Baseline   10.40   4.90    20.67   18
50    Baseline   3.99    4.90    20.67   17
51    Baseline   10.40   4.90    20.67   18
52    Baseline   10.40   4.90    27.99   18
53    Baseline   16.96   4.90    27.36   19
54    Baseline   30.67   4.90    32.39   21
55    Baseline   10.40   4.90    20.67   18
56    Baseline   10.40   4.90    30.95   18
57    Baseline   23.70   4.90    38.28   20
58    Baseline   16.96   4.90    26.55   19
59    Baseline   37.96   4.90    38.28   22
60    Damage     23.70   4.90    33.88   20
61    Damage     37.96   4.90    38.28   22
62    Damage     37.96   4.90    38.28   22
63    Damage     45.75   10.04   43.30   23
64    Damage     30.67   4.90    33.88   21
65    Damage     45.75   10.04   43.30   23
66    Damage     37.96   4.90    38.28   22
67    Damage     30.67   4.90    38.28   21
68    Damage     16.96   10.04   38.91   19

Table A.3: Decision fusion scores for the Bayesian decision fusion rule (BR), Chair-Varshney's rule (CVR), Ideal Sensor rule (ISR), and Majority rule (MR) on sample L3S18. The thresholds for BR, CVR, and ISR are 0, while the threshold for MR is 12.

S.N.  Data type  BR      CVR    ISR     MR
1     Baseline   -53.22  -5.78  -20.53  7
2     Baseline   -65.20  -5.78  -32.88  5
3     Baseline   -77.09  -5.78  -40.21  3
4     Baseline   -77.09  -5.78  -40.21  3
5     Baseline   -71.15  -5.78  -35.81  4
6     Baseline   -77.09  -5.78  -38.77  3
7     Baseline   -71.15  -5.78  -34.37  4
8     Baseline   -59.22  -1.10  -22.65  6
9     Baseline   -71.15  -1.10  -43.19  4
10    Baseline   -65.20  -5.78  -35.24  5
11    Baseline   -65.20  -5.78  -35.24  5
12    Baseline   -65.20  -5.78  -35.24  5
13    Baseline   -65.20  -5.78  -32.88  5
14    Baseline   -77.09  -5.78  -44.66  3
15    Baseline   -65.20  -5.78  -32.88  5
16    Baseline   -65.20  -5.78  -33.74  5
17    Baseline   -59.22  -5.78  -27.85  6
18    Baseline   -71.15  -5.78  -38.77  4
19    Baseline   -65.20  -5.78  -33.74  5
20    Baseline   -71.15  -5.78  -38.77  4
21    Baseline   -71.15  -5.78  -38.77  4
22    Baseline   -71.15  -5.78  -40.21  4
23    Baseline   -47.19  -5.78  -23.46  8
24    Baseline   -65.20  -5.78  -33.74  5
25    Baseline   -59.22  -5.78  -32.88  6
26    Baseline   -71.15  -5.78  -41.07  4
27    Baseline   -71.15  -5.78  -41.07  4
28    Baseline   -47.19  -1.10  -27.89  8
29    Baseline   -41.13  -5.78  -31.42  9
30    Baseline   -41.13  -5.78  -21.16  9
31    Baseline   -59.22  -5.78  -32.88  6
32    Baseline   -77.09  -5.78  -46.09  3
33    Baseline   -59.22  -1.10  -29.35  6
34    Baseline   -47.19  -1.10  -27.89  8
35    Baseline   -41.13  -5.78  -29.98  9
36    Baseline   -41.13  -5.78  -29.98  9
37    Baseline   -41.13  -5.78  -29.98  9
38    Baseline   -28.89  -5.78  -16.76  11
39    Baseline   -3.74   -1.10  7.31    15
40    Baseline   -41.13  -1.10  -16.13  9
41    Baseline   -65.20  -1.10  -36.68  5
42    Baseline   -59.22  -1.10  -29.35  6
43    Baseline   -71.15  -1.10  -42.56  4
44    Baseline   -35.03  -1.10  -19.12  10
45    Baseline   -28.89  -5.78  -15.32  11
46    Baseline   -53.22  -5.78  -27.05  7
47    Baseline   -47.19  -5.78  -18.22  8
48    Baseline   -53.22  -1.10  -17.63  7
49    Baseline   -22.70  4.79   1.48    12
50    Baseline   -22.70  -1.10  -7.34   12
51    Baseline   -41.13  -5.78  -8.00   9
52    Baseline   -47.19  -5.78  -12.39  8
53    Baseline   -10.13  -1.10  7.31    14
54    Baseline   -28.89  -1.10  -4.41   11
55    Baseline   -35.03  -1.10  -4.41   10
56    Baseline   -35.03  -1.10  -11.74  10
57    Baseline   -28.89  -1.10  -15.32  11
58    Baseline   -35.03  -1.10  -10.93  10
59    Baseline   -41.13  0.11   -15.27  9
60    Baseline   -28.89  4.79   -13.18  11
61    Baseline   -41.13  -1.10  -24.95  9
62    Baseline   -41.13  -1.10  -12.39  9
63    Baseline   -41.13  -1.10  -24.09  9
64    Baseline   -47.19  -1.10  -29.98  8
65    Baseline   -41.13  -5.78  -6.50   9
66    Baseline   -22.70  0.11   5.22    12
67    Baseline   -59.22  -1.10  -32.28  6
68    Baseline   -47.19  -1.10  -23.46  8
69    Baseline   -22.70  4.79   -1.45   12
70    Baseline   -41.13  -1.10  -18.26  9
71    Baseline   -47.19  -5.78  -16.76  8
72    Baseline   -41.13  -5.78  -9.44   9
73    Baseline   -22.70  -1.10  -0.02   12
74    Baseline   -47.19  -5.78  -24.11  8
75    Baseline   -65.20  -1.10  -34.37  5
76    Baseline   -47.19  -1.10  -22.65  8
77    Baseline   -53.22  -5.78  -29.98  7
78    Baseline   -53.22  -5.78  -29.98  7
79    Baseline   -41.13  -5.78  -15.32  9
80    Baseline   -47.19  -5.78  -14.69  8
81    Baseline   -47.19  -5.78  -14.69  8
82    Baseline   16.01   0.11   22.02   18
83    Baseline   37.12   4.79   38.77   21
84    Damage     29.87   -1.10  32.88   20
85    Damage     52.74   4.79   46.09   23
86    Damage     44.68   4.79   38.77   22
87    Damage     44.68   4.79   38.77   22
88    Damage     44.68   4.79   38.77   22
89    Damage     44.68   4.79   38.77   22
90    Damage     52.74   4.79   46.09   23
91    Damage     52.74   4.79   46.09   23
92    Damage     37.12   4.79   32.88   21
93    Damage     52.74   4.79   46.09   23
94    Damage     52.74   4.79   46.09   23


Table A.4: Area under the curve values for all sensors, panels (a) and (b).

Sensor path    AUC (a)  AUC (b)
Sensor 1-7     0.96     0.92
Sensor 1-8     0.96     0.97
Sensor 1-9     0.96     0.98
Sensor 1-10    0.98     0.98
Sensor 1-11    0.99     0.92
Sensor 1-12    0.97     0.87
Sensor 2-7     0.95     0.93
Sensor 2-8     0.96     0.83
Sensor 2-9     0.79     0.94
Sensor 2-10    0.81     0.97
Sensor 2-11    0.91     0.85
Sensor 2-12    0.98     0.84
Sensor 3-7     0.98     0.86
Sensor 3-8     0.86     0.78
Sensor 3-9     0.98     0.74
Sensor 3-10    0.81     0.90
Sensor 3-11    0.98     0.97
Sensor 3-12    1        0.89
Sensor 4-7     0.98     0.90
Sensor 4-8     0.97     0.94
Sensor 4-9     0.94     0.94
Sensor 4-10    0.90     0.89
Sensor 4-11    0.97     0.95
Sensor 4-12    0.93     1


Table A.5: Performance of individual sensors used in the decision-making process with the regularized linear discriminant analysis method. Each metric is reported for samples L2S17 and L3S18.

Sensor ID      Distance (mm)  Accuracy       Precision      Recall         F-measure
                              L2S17  L3S18   L2S17  L3S18   L2S17  L3S18   L2S17  L3S18
Sensor 1-7     67.45          0.66   0.73    0.62   0.69    0.89   0.76    0.73   0.80
Sensor 1-8     86.12          0.68   0.87    0.61   0.91    1      0.82    0.76   0.86
Sensor 1-9     103.88         0.77   0.67    0.69   0.60    1      1       0.81   0.75
Sensor 1-10    127.97         0.64   0.95    0.58   0.90    1      1       0.73   0.95
Sensor 1-11    147.24         0.59   0.79    0.55   0.77    1      0.82    0.71   0.79
Sensor 1-12    164.71         0.74   0.94    0.66   0.97    1      0.91    0.79   0.94
Sensor 2-7     50.89          0.70   0.91    0.63   0.85    1      1       0.77   0.92
Sensor 2-8     69.26          0.68   0.45    0.60   0       1      0       0.76   0
Sensor 2-9     87.02          0.71   0.88    0.63   0.81    1      1       0.78   0.90
Sensor 2-10    111.41         0.64   0.58    0.58   0.54    1      1       0.74   0.70
Sensor 2-11    130.38         0.52   0.81    0.51   0.96    0.89   0.64    0.65   0.77
Sensor 2-12    147.54         0.56   0.60    0.54   0.56    0.78   1       0.64   0.72
Sensor 3-7     34.02          0.50   0.5     0      0       0      0       0      0
Sensor 3-8     52.69          0.46   0.78    0.48   0.69    0.89   1       0.63   0.82
Sensor 3-9     70.46          0.55   0.60    0.53   0.56    1      1       0.69   0.72
Sensor 3-10    95.45          0.76   0.81    0.75   0.72    0.78   1       0.77   0.84
Sensor 3-11    113.82         0.70   0.84    0.68   0.86    0.78   0.82    0.72   0.84
Sensor 3-12    130.38         0.65   0.77    0.59   0.69    1      1       0.74   0.81
Sensor 4-7     10.54          0.63   0.48    0.83   0       0.33   0       0.48   0
Sensor 4-8     29.81          0.69   0.67    0.62   0.61    1      1       0.77   0.75
Sensor 4-9     48.18          0.80   0.78    0.71   0.70    1      1       0.83   0.82
Sensor 4-10    72.87          0.71   0.92    0.63   0.94    1      0.91    0.78   0.92
Sensor 4-11    91.23          0.70   0.77    0.63   0.68    1      1       0.77   0.81
Sensor 4-12    109.00         0.72   0.62    0.64   0.92    1      0.27    0.78   0.42
Average        88.84          0.65   0.5     0.59   0       1      0       0.74   0
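The accuracy, precision, recall, and F-measure entries in Tables A.5 and A.6 follow the standard confusion-matrix definitions. A brief sketch with made-up labels (1 = damage, 0 = baseline); the zero entries in the tables arise when a sensor flags nothing, in which case precision and recall are taken as 0:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F-measure for binary damage labels
    (1 = damage, 0 = baseline)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0   # 0 when nothing is flagged,
    rec = tp / (tp + fn) if tp + fn else 0.0    # matching the zero table entries
    f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f

# Made-up labels for illustration: 6 baseline runs followed by 4 damage runs.
y_true = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 1, 0, 0, 0, 1, 1, 0, 1]
print(classification_metrics(y_true, y_pred))   # -> (0.8, 0.75, 0.75, 0.75)
```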


Table A.6: Performance of individual sensors using principal-component-based features in generalized discriminant analysis. Each metric is reported for samples L2S17 and L3S18.

Sensor ID      Distance (mm)  Accuracy       Precision      Recall         F-measure
                              L2S17  L3S18   L2S17  L3S18   L2S17  L3S18   L2S17  L3S18
Sensor 1-7     67.45          0.59   0.69    0.25   0.30    0.92   0.97    0.39   0.45
Sensor 1-8     86.12          0.29   0.59    0.10   0.22    0.56   1       0.17   0.36
Sensor 1-9     103.88         0.50   0.77    0.10   0.33    0.33   1       0.15   0.50
Sensor 1-10    127.97         0.72   0.60    0.29   0.22    0.78   1       0.42   0.37
Sensor 1-11    147.24         0.65   0.39    0.27   0.17    1      1       0.43   0.28
Sensor 1-12    164.71         0.51   0.77    0.21   0.32    1      0.91    0.35   0.48
Sensor 2-7     50.89          0.40   0.78    0.17   0.33    1      0.91    0.28   0.49
Sensor 2-8     69.26          0.76   0.87    0.35   0.48    0.89   1       0.50   0.64
Sensor 2-9     87.02          0.50   0.60    0.21   0.20    0.89   0.82    0.35   0.32
Sensor 2-10    111.41         0.76   0.55    0.33   0.20    1      0.91    0.47   0.38
Sensor 2-11    130.38         0.75   0.62    0.35   0.23    0.78   1       0.51   0.49
Sensor 2-12    147.54         0.56   0.75    0.23   0.32    1      1       0.38   0.49
Sensor 3-7     34.02          0.59   0.75    0.24   0.32    1      1       0.39   0.33
Sensor 3-8     52.69          0.49   0.53    0.20   0.20    1      1       0.34   0.55
Sensor 3-9     70.46          0.63   0.81    0.26   0.38    1      1       0.42   0.48
Sensor 3-10    95.45          0.59   0.74    0.24   0.31    1      1       0.39   0.46
Sensor 3-11    113.82         0.66   0.72    0.28   0.30    1      1       0.44   0.54
Sensor 3-12    130.38         0.78   0.82    0.38   0.38    1      0.91    0.55   0.52
Sensor 4-7     10.54          0.49   0.78    0.20   0.34    1      1       0.34   0.27
Sensor 4-8     29.81          0.63   0.37    0.23   0.16    1      1       0.36   0.75
Sensor 4-9     48.18          0.72   0.81    0.32   0.38    0.78   1       0.49   0.55
Sensor 4-10    72.87          0.74   0.63    0.33   0.24    1      0.91    0.50   0.39
Sensor 4-11    91.23          0.60   0.79    0.25   0.35    1      1       0.40   0.52
Sensor 4-12    109.00         0.63   0.83    0.26   0.40    1      0.91    0.42   0.55
Average        88.84          0.40   0.82    0.18   0.35    1      0.91    0.30   0.54

BIBLIOGRAPHY [1] Alleyne, D. N. and Cawley, P. (1992). The interaction of lamb waves with defects. Ultrasonics, Ferroelectrics and Frequency Control, IEEE Transactions on, 39(3):381–397. [2] ASCE. (2013). 2013 Report Card for America’s Infrastructure. [3] Baniotopoulos, C. (1998). Analysis of above-ground pipelines on unilateral supports: a neural network approach. International journal of pressure vessels and piping, 75(1):43–48. [4] Bellino, A., Fasana, A., Garibaldi, L., and Marchesiello, S. (2010). Pca-based detection of damage in time-varying systems. Mechanical Systems and Signal Processing, 24(7):2250– 2260. [5] Bloch, I. (1996). Some aspects of dempster-shafer evidence theory for classification of multi-modality medical images taking partial volume effect into account. Pattern Recognition Letters, 17(8):905–919. [6] Boller, C. (2000). Next generation structural health monitoring and its integration into aircraft design. International Journal of Systems Science, 31(11):1333–1349. [7] Bornn, L., Farrar, C. R., Park, G., and Farinholt, K. (2009). Structural health monitoring with autoregressive support vector machines. Journal of Vibration and Acoustics, 131(2):021004. [8] Chair, Z. and Varshney, P. (1986). Optimal data fusion in multiple sensor detection systems. Aerospace and Electronic Systems, IEEE Transactions on, (1):98–101. [9] Chamberland, J. and Veeravalli, V. (2007). Wireless sensors in distributed detection applications. Signal Processing Magazine, IEEE, 24(3):16–25. [10] Chang, C.-W., Laird, D. A., Mausbach, M. J., and Hurburgh, C. R. (2001). Near-infrared reflectance spectroscopy–principal components regression analyses of soil properties. Soil Science Society of America Journal, 65(2):480–490. [11] Chen, B. and Varshney, P. (2002a). A bayesian sampling approach to decision fusion using hierarchical models. Signal Processing, IEEE Transactions on, 50(8):1809–1818.

100

[12] Chen, B. and Varshney, P. K. (2002b). A bayesian sampling approach to decision fusion using hierarchical models. Signal Processing, IEEE Transactions on, 50(8):1809–1818. [13] Cheung, A., Cabrera, C., P. Sarabandi, P., Nair, K. K., Kiremidjian, A., and Wenzel, H. (2008). The application of statistical pattern recognition methods for damage detection to field data. Smart Materials and Structures, 17(6):065023. [14] Chiu, W. K., Galea, S., Koss, L., and Rajic, N. (2000). Damage detection in bonded repairs using piezoceramics. Smart Materials and Structures, 9(4):466. [15] Cinlar, E. (1977). Shock and wear models and markov additive processes. The theory and applications of reliability, 1:193–214. [16] Ciuonzo, D. and Rossi, P. S. (2013). Decision fusion with unknown sensor detection probability. arXiv preprint arXiv:1312.2227. [17] Clouqueur, T., Saluja, K., and Ramanathan, P. (2004). Fault tolerance in collaborative sensor networks for target detection. Computers, IEEE Transactions on, 53(3):320–333. [18] Cox, D. and Miller, H. (1965). The Theory of Stochastic Processes. Chapman and Hall, London. [19] Cremona, C., Cury, A., and Orcesi, A. (2012). Supervised learning algorithms for damage detection and long term bridge monitoring. In IABMAS 2012-International Conference on Bridge Maintenance, Safety and Management, pages pp–2144. [20] Crosier, R. B. (1988). Multivariate generalizations of cumulative sum quality-control schemes. Technometrics, 30(3):291–303. [21] Cross, E. J., Manson, G., Worden, K., and Pierce, S. G. (2012). Features for damage detection with insensitivity to environmental and operational variations. Proceedings of the Royal Society a-Mathematical Physical and Engineering Sciences, 468(2148):4098–4122. [22] Cury, A. (2010). Techniques d’anormalit´e appliqu´ees a` la surveillance de sant´e structurale. PhD thesis, Universit´e Paris-Est. [23] Cury, A., Cr´emona, C., and Diday, E. (2010). 
Application of symbolic data analysis for structural modification assessment. Engineering Structures, 32(3):762–775.

101

[24] D3039 / D3039M-14, A. (2014). Standard test method for tensile properties of polymer matrix composite materials. [25] D3479/D3479M-12, A. (2012). Standard test method for tension-tension fatigue of polymer matrix composite materials. [26] da Silva, S., Junior, M. D., Junior, V. L., and Brennan, M. J. (2008). Structural damage detection by fuzzy clustering. Mechanical Systems and Signal Processing, 22(7):1636–1649. [27] Deraemaeker, A., Reynders, E., De Roeck, G., and Kullaa, J. (2008). Vibration-based structural health monitoring using output-only measurements under changing environment. Mechanical systems and signal processing, 22(1):34–56. [28] Diamanti, K., Hodgkinson, J., and Soutis, C. (2004). Application of a lamb wave technique for the non-destructive inspection of composite structures. [29] Diaz Valdes, S. and Soutis, C. (2000). Health monitoring of composites using lamb waves generated by piezoelectric devices. Plastics, Rubber and Composites, 29(9):475–481. [30] Doebling, S. W., Farrar, C. R., Prime, M. B., and Shevitz, D. W. (1996). Damage identification and health monitoring of structural and mechanical systems from changes in their vibration characteristics: a literature review. Technical report, Los Alamos National Lab., NM (United States). [31] Doksum, K. A. and Hbyland, A. (1992). Models for variable-stress accelerated life testing experiments based on wener processes and the inverse gaussian distribution. Technometrics, 34(1):74–82. [32] D.Whyte, H. (2001). Multi Sensor Data Fusion. Australian Centre for Field Robotics, NSW, Australia, 1.2 edition. [33] Ellerbrock, P. J. (1997). Dc-xa structural health-monitoring fiber optic-based strain measurement system. In Smart Structures and Materials’ 97, pages 207–218. International Society for Optics and Photonics. [34] Engelhardt, M., Stavroulakis, G., and Antes, H. (2003). Crack identification as an optimization task. PAMM, 3(1):511–512.

102

[35] Esary, J. and Marshall, A. (1973). Shock models and wear processes. The annals of probability, pages 627–649. [36] Farrar, C. R., Nix, D. A., Duffey, T. A., Cornwell, P. J., and Pardoen, G. C. (1999). Damage identification with linear discriminant operators. In Proc. 17th International Modal Analysis Conference, pages 599–607. Citeseer. [37] Farrar, C. R. and Worden, K. (2007). An introduction to structural health monitoring. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 365(1851):303–315. [38] Farrar, C. R. and Worden, K. (2012). Structural Health Monitoring: A Machine Learning Perspective. John Wiley & Sons. [39] Fawcett, T. (2006). An introduction to roc analysis. Pattern recognition letters, 27(8):861– 874. [40] Fekedulegn, B. D., Colbert, J., Hicks Jr, R., Schuckers, M. E., et al. (2002). Coping with multicollinearity: An example on application of principal components regression in dendroecology. [41] Folks, J. and Chhikara, R. (1978). The inverse gaussian distribution and its statistical application–a review. Journal of the Royal Statistical Society. Series B (Methodological), pages 263–289. [42] Frank, L. E. and Friedman, J. H. (1993). A statistical view of some chemometrics regression tools. Technometrics, 35(2):109–135. [43] Friedman, J. (1989). Regularized discriminant analysis. Journal of American Statistical Association, 84(405):165–175. [44] Gazis, D. C. (1958). Exact analysis of the plane-strain vibrations of thick-walled hollow cylinders. The Journal of the Acoustical Society of America, 30(8):786–794. [45] Gazis, D. C. (1959). Three-dimensional investigation of the propagation of waves in hollow circular cylinders. i. analytical foundation. The journal of the Acoustical Society of America, 31(5):568–573.

103

[46] Gebraeel, N. Z., Lawley, M. A., Li, R., and Ryan, J. K. (2005). Residual-life distributions from component degradation signals: A bayesian approach. IiE Transactions, 37(6):543– 557. [47] Geladi, P. (2003). Chemometrics in spectroscopy. part 1. classical chemometrics. Spectrochimica Acta Part B: Atomic Spectroscopy, 58(5):767–782. [48] Giurgiutiu, V. (2005). Tuned lamb wave excitation and detection with piezoelectric wafer active sensors for structural health monitoring. Journal of intelligent material systems and structures, 16(4):291–305. [49] Guo, Y., Hastie, T., and Tibshirani, R. (2007). Regularized linear discriminant analysis and its application in microarrays. Biostatistics, 8(1):86–100. [50] Hall, D. L. and Llinas, J. (1997). An introduction to multisensor data fusion. Proceedings of the IEEE, 85(1):6–23. [51] Hastie, T., Tibshirani, R., and Friedman, J. (2001). The Elements of Statistical Learning. Springer Series in Statistics. Springer New York Inc., New York, NY, USA. [52] Hernandez-Arteseros, J., Compano, R., Ferrer, R., and Prat, M. (2000). Application of principal component regression to luminescence data for the screening of ciprofloxacin and enrofloxacin in animal tissues. Analyst, 125(6):1155–1158. [53] Hillier, F. S. (1995). Introduction to operations research. Tata McGraw-Hill Education. [54] Hotelling, H. (1957). The relations of the newer multivariate statistical methods to factor analysis. British Journal of Statistical Psychology, 10(2):69–79. [55] Hull, B. and John, V. (1988). Non-destructive Testing. Springer. [56] Ihn, J.-B. and Chang, F.-K. (2004a). Detection and monitoring of hidden fatigue crack growth using a built-in piezoelectric sensor/actuator network: I. diagnostics. Smart Materials and Structures, 13(3):609. [57] Ihn, J.-B. and Chang, F.-K. (2004b). Detection and monitoring of hidden fatigue crack growth using a built-in piezoelectric sensor/actuator network: I. diagnostics. Smart Materials and Structures, 13(3):609.

104

[58] Jackson, J. E. (1991). A User’s Guide to Principal Components. Wiley Series in Probability and Statistics. Wiley-Interscience. [59] Jeffers, J. (1967). Two case studies in the application of principal component analysis. Applied Statistics, pages 225–236. [60] Jiang, S.-F., Zhang, C.-M., and Zhang, S. (2011). Two-stage structural damage detection using fuzzy neural networks and data fusion techniques. Expert systems with applications, 38(1):511–519. [61] Jolliffe, I. (2002). Principal component analysis. Wiley Online Library. [62] Jolliffe, I. T. (1982). A note on the use of principal components in regression. Applied Statistics, pages 300–303. [63] Karlin, S. and Taylor, H. (1975). A first course in stochastic process. Academic press, San Diego, second edition. [64] Kehlenbach, M. and Das, S. (2002). Identifying damage in plates by analyzing lamb wave propagation characteristics. In Proc. SPIE, volume 4702, pages 364–75. [65] Keilers, C. H., Chang, F.-K., et al. (1995). Identifying delamination in composite beams using built-in piezoelectrics: part iexperiments and analysis. Journal of Intelligent Material Systems and Structures, 6(5):649–663. [66] Kendall, M. (1957). A course in multivariate analysis. Griffin Statistical Monographs. [67] Kessler, S. S. and Agrawal, P. (2007). Application of pattern recognition for damage classification in composite laminates. In Proceedings of the 6th International Workshop on Structural Health Monitoring, Stanford University. [68] Kessler, S. S. and Spearing, S. M. (2002). Design of a piezoelectric-based structural health monitoring system for damage detection in composite materials. In SPIE’s 9th Annual International Symposium on Smart Structures and Materials, pages 86–96. International Society for Optics and Photonics. [69] Kessler, S. S., Spearing, S. M., and Soutis, C. (2002). Damage detection in composite materials using lamb wave methods. Smart Materials and Structures, 11(2):269.

105

[70] Kharoufeh, J. P. (2003). Explicit results for wear processes in a markovian environment. Operations Research Letters, 31(3):237–244. [71] Kharoufeh, J. P. and Cox, S. M. (2005). Stochastic models for degradation-based reliability. IIE Transactions, 37(6):533–542. [72] Kokar, M. M., Tomasik, J. A., and Weyman, J. (2001a). Data vs. decision fusion in the category theory framework. FUSION 2001. [73] Kokar, M. M., Tomasik, J. A., and Weyman, J. (2001b). Data vs. decision fusion in the category theory framework. FUSION 2001. [74] Kollar, L. P. and Van Steenkiste, R. J. (1998). Calculation of the stresses and strains in embedded fiber optic sensors. Journal of Composite Materials, 32(18):1647–1679. [75] Krishnamachari, B. and Iyengar, S. (2004). Distributed bayesian algorithms for faulttolerant event region detection in wireless sensor networks. Computers, IEEE Transactions on, 53(3):241–250. [76] Kullaa, J. (2003). Damage detection of the z24 bridge using control charts. Mechanical Systems and Signal Processing, 17(1):163–170. [77] Lamb, H. (1917). On waves in an elastic plate. Proceedings of the Royal Society of London. Series A, Containing papers of a mathematical and physical character, pages 114–128. [78] Larrosa, C., Lonkar, K., and Chang, F.-K. (2014). In situ damage classification for composite laminates using gaussian discriminant analysis. Structural Health Monitoring, 13(2):190–204. [79] Lawless, J. and Crowder, M. (2004). Covariates and random effects in a gamma process model with application to degradation and failure. Lifetime Data Analysis, 10(3):213–227. [80] Lee, B. and Staszewski, W. (2007). Lamb wave propagation modelling for damage detection:i. two dimensional analysis. Smart Materials and Structure, 16:249–259. [81] Liao, H. and Elsayed, E. A. (2006). Reliability inference for field conditions from accelerated degradation testing. Naval Research Logistics (NRL), 53(6):576–587.

106

[82] Liggins II, M., Hall, D., and Llinas, J. (2008). Handbook of multisensor data fusion: theory and practice. CRC press. [83] Lowry, C. A. and Montgomery, D. C. (1995). A review of multivariate control charts. IIE transactions, 27(6):800–810. [84] Lu, C. J. and Meeker, W. O. (1993). Using degradation measures to estimate a time-tofailure distribution. Technometrics, 35(2):161–174. [85] MacGregor, J. and Kourti, T. (1995). Statistical process control of multivariate processes. Control Engineering Practice, 3(3):403–414. [86] Manson, G., Worden, K., Holford, K., and Pullin, R. (2001). Visualisation and dimension reduction of acoustic emission data for damage detection. Journal of Intelligent Material Systems and Structures, 12(8):529–536. [87] Marantidis, C., Van Way, C. B., and Kudva, J. N. (1994). Acoustic-emission sensing in an on-board smart structural health monitoring system for military aircraft. In 1994 North American Conference on Smart Structures and Materials, pages 258–264. International Society for Optics and Photonics. [88] Mikhail, M., Zein-Sabatto, S., and Bodruzzaman, M. (2012). Decision fusion methodologies in structural health monitoring systems. In Southeastcon, 2012 Proceedings of IEEE, pages 1–6. IEEE. [89] Montgomery, D. C. (2007). Introduction to statistical quality control. John Wiley & Sons. [90] Montgomery, D. C., Peck, E. A., and Vining, G. G. (2012). Introduction to linear regression analysis, volume 821. John Wiley & Sons. [91] Mujica, L., Rodellar, J., Fernandez, A., and Guemes, A. (2010a). Q-statistic and t2-statistic pca-based measures for damage assessment in structures. Structural Health Monitoring, page 1475921710388972. [92] Mujica, L., Rodellar, J., Fernandez, A., and Guemes, A. (2010b). Q-statistic and t2-statistic pca-based measures for damage assessment in structures. Structural Health Monitoring, page 1475921710388972.

107

[93] Niu, R. and Varshney, P. (2006). Joint detection and localization in sensor networks based on local decisions. In Signals, Systems and Computers, 2006. ACSSC ’06. Fortieth Asilomar Conference on, pages 525–529. [94] Niu, R. and Varshney, P. K. (2008). Performance analysis of distributed detection in a random sensor field. Signal Processing, IEEE Transactions on, 56(1):339–349. [95] O’Hagan, M. (1993). A fuzzy decision maker. Proc. Fuzzy Logic ?93 (Computer. [96] Osmont, D. L., Dupont, M., Gouyon, R., Lemistre, M. B., and Balageas, D. L. (2000). Piezoelectric transducer network for dual-mode (active/passive) detection, localization, and evaluation of impact damages in carbon/epoxy composite plates. In Symposium on Applied Photonics, pages 130–137. International Society for Optics and Photonics. [97] Park, C. and Padgett, W. (2005a). Accelerated degradation models for failure based on geometric brownian motion and gamma processes. Lifetime Data Analysis, 11(4):511–527. [98] Park, C. and Padgett, W. J. (2005b). New cumulative damage models for failure using stochastic processes as initial damage. Reliability, IEEE Transactions on, 54(3):530–540. [99] Pavlopoulou, S., Worden, K., and Soutis, C. (2013). Structural health monitoring and damage prognosis in composite repaired structures through the excitation of guided ultrasonic waves. In Conference on Health Monitoring of Structural and Biological Systems, volume 8695 of Proceedings of SPIE, BELLINGHAM. Spie-Int Soc Optical Engineering. [100] Pignatiello, J. J. and Runger, G. C. (1990). Comparisons of multivariate cusum charts. Journal of quality technology, 22(3):173–186. [101] Pomerleau, D. (1993). Input reconstruction reliability estimation. In Advances in Neural Information Processing Systems 5, [NIPS Conference], pages 279–286, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. [102] Proschan, F. (1996). Mathematical theory of refiabifity. [103] Pullin, R., Eaton, M. J., Hensman, J. J., Holford, K. 
M., Worden, K., and Evans, S. L. (2008). A principal component analysis of acoustic emission signals from a landing gear component. Applied Mechanics and Materials, 13:41–47.

108

[104] Raghavan, A. and Cesnik, C. E. (2007). Review of guided-wave structural health monitoring. Shock and Vibration Digest, 39(2):91–116. [105] Rausand, M. and Høyland, A. (2004). System reliability theory: models, statistical methods, and applications, volume 396. John Wiley & Sons. [106] Rees, D., Chiu, W., and Jones, R. (1992). A numerical study of crack monitoring in patched structures using a piezoelectric sensor. Smart materials and Structures, 1(3):202. [107] Rose, J. L. (2004). Ultrasonic waves in solid media. Cambridge university press. [108] Ross, S. M. (1970). Applied probability models with optimization applications. Dover publications Inc. [109] Sammon, J. W. (1969). A nonlinear mapping for data structure analysis. IEEE Transactions on computers, (5):401–409. [110] Santos, J. P., Orcesi, A., Silveira, P., and Weichao, G. (2012). Real time assessment of rehabilitation works under operational loads. In ICDS12-International Conference Durable Structures, page 10p. [111] Saxena, A., Goebel, K., Larrosa, C. C., Janapati, V., Roy, S., and Chang, F.-K. (2011). Accelerated aging experiments for prognostics of damage growth in composite materials. Technical report, DTIC Document. [112] Schoess, J. N. and Zook, J. D. (1998). Test results of a resonant integrated microbeam sensor (rims) for acoustic emission monitoring. In 5th Annual International Symposium on Smart Structures and Materials, pages 326–332. International Society for Optics and Photonics. [113] Seydel, R. and Chang, F.-K. (2001). Impact identification of stiffened composite panels: I. system development. Smart Materials and Structures, 10(2):354. [114] Shafer, G. et al. (1976). A mathematical theory of evidence, volume 1. Princeton university press Princeton. [115] Si, X.-S., Wang, W., Hu, C.-H., and Zhou, D.-H. (2011). Remaining useful life estimation–a review on the statistical data driven approaches. European Journal of Operational Research, 213(1):1–14. 109

[116] Singpurwalla, N. D., Wilson, S. P., et al. (1998). Failure models indexed by two scales. Advances in Applied Probability, 30(4):1058–1072.

[117] Sodano, H. A. (2007). Development of an automated eddy current structural health monitoring technique with an extended sensing region for corrosion detection. Structural Health Monitoring, 6(2):111–119.

[118] Sohn, H., Czarnecki, J. A., and Farrar, C. R. (2000). Structural health monitoring using statistical process control. Journal of Structural Engineering, 126(11):1356–1363.

[119] Sohn, H., Farrar, C. R., Hemez, F. M., Shunk, D. D., Stinemates, D. W., Nadler, B. R., and Czarnecki, J. J. (2004). A review of structural health monitoring literature: 1996-2001. Los Alamos National Laboratory, Los Alamos, NM.

[120] Sohn, H., Farrar, C. R., Hunter, N. F., and Worden, K. (2001). Structural health monitoring using statistical pattern recognition techniques. Journal of Dynamic Systems, Measurement, and Control, 123(4):706–711.

[121] Sohn, H., Kim, S. D., and Harries, K. (2008). Reference-free damage classification based on cluster analysis. Computer-Aided Civil and Infrastructure Engineering, 23(5):324–338.

[122] Song, H., Zhong, L., and Han, B. (2006). Structural damage detection by integrating independent component analysis and support vector machine. International Journal of Systems Science, 37(13):961–967.

[123] Staszewski, W. J., Boller, C., Grondel, S., Biemans, C., O’Brien, E., and Tomlinson, G. (2004). Health Monitoring of Aerospace Structures: Smart Sensor Technologies and Signal Processing. John Wiley & Sons.

[124] Stavroulakis, G. and Antes, H. (1997). Nondestructive elastostatic identification of unilateral cracks through bem and neural networks. Computational Mechanics, 20(5):439–451.

[125] Stavroulakis, G. E. (2013). Inverse and crack identification problems in engineering mechanics, volume 46. Springer Science & Business Media.

[126] Su, Z. and Ye, L. (2005). Quantitative damage prediction for composite laminates based on wave propagation and artificial neural networks. Structural Health Monitoring, 4(1):57–66.

[127] Su, Z. and Ye, L. (2009). Fundamentals and analysis of lamb waves. In Identification of Damage Using Lamb Waves, pages 15–58. Springer.

[128] Surace, C. and Worden, K. (1997). Some aspects of novelty detection methods. In Proceedings of the 3rd International Conference on Modern Practice in Stress and Vibration Analysis, pages 89–84, Dublin.

[129] Tan, Y., Shi, L., Tong, W., and Wang, C. (2005). Multi-class cancer classification by total principal component regression (tpcr) using microarray gene expression data. Nucleic Acids Research, 33(1):56–65.

[130] Tape, T. G. (2016). Interpreting diagnostic tests. Online. University of Nebraska Medical Center.

[131] Thomopoulos, S. C., Viswanathan, R., and Bougoulias, D. C. (1987). Optimal decision fusion in multiple sensor systems. Aerospace and Electronic Systems, IEEE Transactions on, (5):644–653.

[132] Tseng, S.-T., Balakrishnan, N., and Tsai, C.-C. (2009). Optimal step-stress accelerated degradation test plan for gamma degradation processes. Reliability, IEEE Transactions on, 58(4):611–618.

[133] Tseng, S.-T. and Peng, C.-Y. (2004). Optimal burn-in policy by using an integrated wiener process. IIE Transactions, 36(12):1161–1170.

[134] Tseng, S.-T., Tang, J., and Ku, I.-H. (2003). Determination of burn-in parameters and residual life for highly reliable products. Naval Research Logistics (NRL), 50(1):1–14.

[135] Tweedie, M. (1945). Inverse statistical variates. Nature, 155(3937):453–453.

[136] Van Noortwijk, J. (2009). A survey of the application of gamma processes in maintenance. Reliability Engineering & System Safety, 94(1):2–21.

[137] van Noortwijk, J. M., van der Weide, J. A., Kallen, M.-J., and Pandey, M. D. (2007). Gamma processes and peaks-over-threshold distributions for time-dependent reliability. Reliability Engineering & System Safety, 92(12):1651–1658.

[138] Vapnik, V. (1998). Statistical learning theory, volume 1. Wiley, New York.


[139] Vapnik, V. (2013). The nature of statistical learning theory. Springer Science & Business Media.

[140] Veeravalli, V. V. (1999). Sequential decision fusion: theory and applications. Journal of the Franklin Institute, 336(2):301–322.

[141] Viet Hà, N. and Golinval, J.-C. (2010). Localization and quantification of damage in beam-like structures using sensitivities of principal component analysis results. Mechanical Systems and Signal Processing, 24(6):1831–1843.

[142] Viktorov, I. A. (1967). Rayleigh and Lamb Waves: Physical Theory and Applications. Plenum Press, New York.

[143] Wang, H., Xu, T., and Mi, Q. (2015). Lifetime prediction based on gamma processes from accelerated degradation data. Chinese Journal of Aeronautics, 28(1):172–179.

[144] Wang, X., Foliente, G., Su, Z., and Ye, L. (2006). Multilevel decision fusion in a distributed active sensor network for structural damage detection. Structural Health Monitoring, 5(1):45–58.

[145] Wang, Z. and Ong, K. (2008). Autoregressive coefficients based Hotelling's T2 control chart for structural health monitoring. Computers & Structures, 86(19):1918–1935.

[146] Whitmore, G. (1995). Estimating degradation by a wiener diffusion process subject to measurement error. Lifetime Data Analysis, 1(3):307–319.

[147] Whitmore, G., Crowder, M., and Lawless, J. (1998). Failure inference from a marker process based on a bivariate wiener model. Lifetime Data Analysis, 4(3):229–251.

[148] Whitmore, G. and Schenkelberg, F. (1997). Modelling accelerated degradation data using wiener diffusion with a time scale transformation. Lifetime Data Analysis, 3(1):27–45.

[149] Woodall, W. H. and Adams, B. M. (1993). The statistical design of cusum charts. Quality Engineering, 5(4):559–570.

[150] Woodall, W. H. and Ncube, M. M. (1985). Multivariate cusum quality-control procedures. Technometrics, 27(3):285–292.


[151] Worden, K. (1997). Structural fault detection using a novelty measure. Journal of Sound and Vibration, 201(1):85–101.

[152] Worden, K. and Manson, G. (2007). The application of machine learning to structural health monitoring. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 365(1851):515–537.

[153] Worden, K., Manson, G., and Fieller, N. (2000). Damage detection using outlier analysis. Journal of Sound and Vibration, 229(3):647–667.

[154] Worden, K., Manson, G., and Spie (1999). Visualisation and dimension reduction of high-dimensional data for damage detection. In 17th International Modal Analysis Conference (IMAC) on Modal Analysis - Reducing the Time to Market, volume 3727 of Proceedings of the Society of Photo-Optical Instrumentation Engineers (SPIE), pages 1576–1585, Bethel. Society for Experimental Mechanics Inc.

[155] Worlton, D. (1961). Experimental confirmation of lamb waves at megacycle frequencies. Journal of Applied Physics, 32(6):967–971.

[156] Wu, J.-Y., Wu, C.-W., Wang, T.-Y., and Lee, T.-S. (2010). Channel-aware decision fusion with unknown local sensor detection probability. Signal Processing, IEEE Transactions on, 58(3):1457–1463.

[157] Yan, A.-M., Kerschen, G., De Boe, P., and Golinval, J.-C. (2005). Structural damage diagnosis under varying environmental conditions, part i: a linear analysis. Mechanical Systems and Signal Processing, 19(4):847–864.

[158] Zein-Sabatto, S., Mikhail, M., Bodruzzaman, M., DeSimio, M., Derriso, M., and Behbahani, A. (2012). Analysis of decision fusion algorithms in handling uncertainties for integrated health monitoring systems. In SPIE Defense, Security, and Sensing, pages 84070A–84070A. International Society for Optics and Photonics.

[159] Zhou, G. and Sim, L. (2002). Damage detection and assessment in fibre-reinforced composite structures with embedded fibre optic sensors-review. Smart Materials and Structures, 11(6):925.

[160] Ziemiański, L. and Harpula, G. (1999). The use of neural networks for damage detection in eight storey frame structure. Networks, 9:22–2.


BIOGRAPHICAL SKETCH

Spandan Mishra was born on January 28, 1988 in Birgunj, Nepal. He attended Gyan-Jyoti High Secondary School, where he was awarded the Mahatma Gandhi Scholarship by the Embassy of India, Kathmandu. Spandan was awarded a fellowship by the Government of Nepal to pursue an undergraduate degree in industrial engineering at Tribhuvan University. He graduated in June 2010 with a B.E. in Industrial Engineering with distinction from Thapathali Campus, Tribhuvan University. In January 2012, Mishra enrolled as a doctoral student in the Department of Industrial and Manufacturing Engineering at the FAMU-FSU College of Engineering. Under the direction of Dr. Arda Vanli, his doctoral research focused on developing data-driven methodologies for structural health monitoring of carbon-fiber reinforced composite structures. Spandan was actively involved in FSU student government and served as a graduate student senator in the 65th and 66th FSU Student Senate.

