Predicting Time to Failure Using the IMM and Excitable Tests

Ethan Phelps (1), Peter Willett (2), Thiagalingam Kirubarajan (3) and Craig Brideau (2)

ABSTRACT

Prognostics, which refers to the inference of an expected time-to-failure for a system, is made difficult by the need to track and predict the trajectories of real-valued system parameters over essentially unbounded domains, and by the need to prescribe a subset of these domains in which an alarm should be raised. In this paper we propose an idea whereby these problems are avoided: instead of physical system or sensor parameters, a vector corresponding to the failure probabilities of the system's sensors (which of course are bounded within the unit hypercube) is tracked. With the help of a system diagnosis model, the corresponding fault signatures can be identified as terminal states for these probability vectors. To perform the tracking, Kalman filters and interacting multiple model (IMM) estimators are implemented for each sensor. The work that has been completed thus far shows promising results in both large- and small-scale systems, with the impending failures being detected quickly and the prediction of the time until this failure occurs being determined accurately.

1. INTRODUCTION

1.1 PROGNOSTICS BACKGROUND

Most condition-based maintenance systems and tools rely heavily on the ability to diagnose the current working state of a plant or process. Often, this involves the identification of useful features (for example, harmonic power, or temperature) and a match of these to an archive of "training" data from other versions of the system. To some extent, then, diagnostics is a matter of learning from data; of course, if the training data is relevant and correctly labeled, then diagnostics amounts to statistical classification. Prognosis, on the other hand, refers to the ability to monitor the health of a system or component and predict its remaining useful life; applications include mechanical systems such as engines, bearings, or gearboxes. Prognostics is perhaps more ambitious than diagnostics. The latter involves the comparison of current sensor signatures to a suite of past ones, and may be thought of as asking the question "Where are we?" A prognostic algorithm may have as ammunition some prior observed data, but its primary question of "Where are we going?" reveals that the system may be in uncharted waters. The need for prognostics has become apparent in recent years (e.g., [He04], [La05], [Le00], and many others in this manuscript's bibliography). The aerospace industry and the military have shown particular interest in prognostics, due to the high criticality of their systems. The consequences of a failure of an aircraft during flight are very severe, and to avoid them, routine (often unnecessary) maintenance and replacement of parts is prescribed. Besides being very expensive, these methods are disadvantageous in other ways. The benefits of a prognostics solution are many, including increased safety, reduced expense of operation and maintenance, and higher availability of the system due to increased uptime. The objective of prognostics is to predict how long it will be until the system fails (the time to failure, along with confidence bounds that reflect the uncertainty) and, if possible, also to identify the component that will fail first. To predict the time to failure, one needs to track the health of the system, and predict when it will reach a level deemed unacceptable. There has been much research on finding signals rich in information about the system's health. Components of turbine engines, such as bearings, gears, and fan blades, have often been the focus of prognostics research, and signals used for prognostics have included vibration (position, velocity, or acceleration), electrical signals, temperature, pressure, and even color. By using sensors to measure these signals, and then processing the signals, features can be extracted to be used to predict the time to failure. Various processing techniques have been found to extract useful information from the signals.

(1) Raytheon Company. [email protected].
(2) ECE Department, University of Connecticut. {willett,brideau}@engr.uconn.edu.
(3) ECE Department, McMaster University. [email protected].

1.2 PRIOR WORK

In the literature there is much information on what kinds of physical signals contain useful prognostic information in various systems, and what kinds of sensors can be used to measure them. Some signals that are often measured for determining the health of a system are:

• Vibration and acoustic emission (high frequency vibration) [Bo00], [Li97], [Go00], [So00]: Vibration of gear shafts or bearings can reveal the level of wear or damage. High frequency vibration, in the ultrasonic range, sometimes referred to as acoustic emission, can contain useful information as well.

• Contamination of oil (particle count) [Mi00], [Po00], [Ed00], [Kh04]: Particles of metal and debris in the oil that lubricates a bearing or gearbox are a definite sign of degradation. Counting the number and size of these particles gives a clue to the health of the system.

• Detection of foreign objects entering an engine [Sh00], [Ro01]: For turbine engines, foreign objects that enter the engine can be detected and classified as damaging or non-damaging, based on their size and composition.

• Blade tip clearance [Ja99], [Fl00], [Do99]: In turbine engines, the clearance of the blade tips must be kept low for the engine to be efficient, but not so low that there is rubbing against the casing; this is made difficult by thermal expansion. Sensors can be used to monitor the clearance of the blades.

Prognostic and diagnostic information is extracted from the signals through processing. Some forms of processing in the literature are:

• Thresholding [Ha99], [Sw01] or adaptive thresholding [De00]: Sensor outputs can be simply thresholded to give an alarm. A more sophisticated method uses past data to develop adaptive thresholds that change based on the operating mode of the system.

• Spectral analysis [Go00], [Ha00]: Spectral analysis is often performed on vibration data. For rotating machinery, the harmonics of the frequency of rotation are observed: many sorts of damage can be recognized by their harmonic signatures.

• High pass filtering (SWAN, HFRT) [Bo00], [Li97]: Acoustic emission signals are often indicative of events that cause damage. To isolate the acoustic emission in a vibration signal, the measured signal is passed through a high-pass filter. Usually the envelope (power) is taken, showing the magnitude of the acoustic emission.

• ARMA modeling [Ga97]: Autoregressive and autoregressive moving-average modeling has been applied to signals for information reduction and classification.

• Univariate statistics [Ha99], [En00], [Ha00]: The kurtosis and the RMS value of vibration signals often change distinctly preceding a failure.

After processing the signals, the information is used to make decisions such as "Has a failure occurred?" and "If a failure has occurred, which one?" Techniques for diagnostic information fusion and decision might be physical models/expert knowledge/rule-based [Ro00], [Ga01]; neural networks trained by real data [Br00], [Ga01], [Br99], [Ja99]; fuzzy logic [Br00], [Ga01], [Br99]; or statistical (Bayesian, Dempster-Shafer) [Ro01]. The literature in prognostic inference is somewhat sparser, but we have:

• Extrapolation of parameter trajectory [Do00], [Sw01]: If there is a region or level of a certain parameter at which failure can be assumed, then the parameter can be tracked and the trajectory extrapolated to find when a failure will occur. Depending on the characteristics of the trajectory, linear extrapolation or a more complex method could be used.

• Model based prediction [Ro00]: If the physics of the system can be modeled accurately, then physical models can be used to predict the time to failure. This tends to be very expensive and difficult for complex systems.

• Hybrid modeling [Kh04]: A state transition matrix is used to relate individual subsystems to each other where coupling exists. In this way, the coupling that contributes to the degradation of a connected subsystem can be modeled. Prognosis is then performed based on a "bathtub curve", which is used to model such damage as fatigue.

• Comparison with seeded fault data (data mining) [Br00]: If the current situation can be matched to previously recorded data, obtained from seeded fault tests, then the previously recorded data can be used to predict the time of failure.

Most of the methods for prognostics require large investments, for instance: performing seeded fault tests to acquire a library of data, or creating a model of the physics of the system. Additionally, these solutions are very application specific. Changes to the system would require extensive recalculations in many cases to verify the overall integrity of the new system model. It would be desirable to have a prognostic solution that is more “universal”, in that it does not require large amounts of previous data or detailed physical models, and can be more easily applied to different systems. One example of a method that is to some extent universal is that by Swanson [Sw01]: in that case a parameter that is trending away from its nominal value is checked, via comparison of the directions and magnitudes of its velocity and acceleration, for stability. This is, we feel, a major step forward, and this paper is informed by its reliance on a parameter’s trajectory in making prognostic predictions.


1.3 PROPOSED APPROACH

To reiterate, most visions of a prognostic system are based either on extensive physics-based modeling of the meaning of sensor parameters, or on learning from seeded-fault trials. These are expensive: we would like a prognostics solution that is general, in that it can be applied to many systems with minimal cost (time, effort) of adaptation. It is appealing to track system parameters, as is done in [Sw01], and to estimate/predict the time until one or more of them enters "bad" or unsatisfactory territory. In [Po99], for example, the trending is linear and the alarm threshold is expert-tunable; defining the region of undesirable operation (which indicates a problem in the system) directly in terms of parameter values usually involves an appeal to judgment and experience, and may not always be feasible. Conversely, many or most multiple-fault system-level diagnostic approaches (such as TEAMS, which we shall discuss later) rely on a suite of semi-intelligent sensors that are able to render a binary (acceptable/unacceptable) decision on a particular observed feature (say, temperature): these diagnostic engines make sense out of an otherwise confusing vector of binary sensor outputs and offer an explanation in terms of underlying fault conditions. If prognosis is to be based on parameter tracking, such sensors hold little appeal. Indeed, if such binary-output sensors make a sudden jump from a no-fault signature to a signature where a fault is present, then there is little we can do, since there was no prior indication of a problem in the system. If, on the other hand, sensors are re-tuned such that each alerts more and more often as a fault condition becomes incipient, then the situation becomes interesting. That is, we submit that there may be virtue to be found in the necessity of dealing with binary sensors. Whereas direct parameter tracking suffers from the fact that parameters can wander over a wide range of real values, with no way of knowing which values are bad, the probabilities of each sensor raising an alert are known to be constrained between zero and unity. This allows us to examine the data over a more manageable range.

Figure 1: Illustration of threshold levels. [Figure: a raw sensor signal over time, crossing a low threshold increasingly often as the time of failure approaches, but crossing a high threshold only near failure.]

With this approach, we are positing a sensor that is tuned differently from the way sensors are currently tuned. Most sensors measure a signal and give an alarm when a threshold is crossed. Alarms are disquieting, and to avoid frequent false alarms the threshold is typically set high. Our "excitable" kind of sensor gives alarms frequently, and as the system approaches failure, the alarms become more frequent. Normal sensors could be made to behave more like this if the thresholds were set lower (see Figure 1); and we believe that setting the thresholds lower would preserve more prognostic information, as long as the alarms could be dealt with intelligently. It should also be mentioned that if analog sensors are being used in the system, the thresholds could be set in software, eliminating the need for different sensors. Our procedure is:

1. We evaluate the system's "dependency matrix," whose rows are vectors relating each fault to its observed binary sensor signature. We refer to a binary 1 as indicating an "alarm."
2. We estimate sensor alarm probabilities from observations. To reduce computation and noise, and to mitigate non-Gaussianity, we employ measurement averaging over a number of observations.
3. For each sensor, these probabilities are then tracked: the above estimates are inputs to banks of Kalman filters, combined via an interacting multiple model (IMM) estimator [Ba01]. The IMM accounts for different sorts of "motion" that the trajectories can undergo, i.e., no-motion (no fault state), linear path to failure, and nonlinear path(s).
4. After a sensor's failure probability begins to increase, we extrapolate it forward.
5. Once the trajectories have been extrapolated into the future, we attempt to determine what failure mode the model is approaching by passing the expected fault signature, based on this extrapolation and the dependency matrix, to an inference engine. We use TEAMS [De97] for this.
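As a small aside, the value of low thresholds is easy to demonstrate numerically. The following minimal sketch (the signal shape, thresholds, and batch size are all assumed for illustration) shows a drifting, noisy signal whose low-threshold alarm rate, estimated by batch averaging as in step 2, rises steadily long before the high threshold ever fires.

```python
# Illustration only: a drifting analog signal monitored with a low ("excitable")
# threshold yields a rising alarm probability well before a high threshold fires.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)                  # normalized time; failure at t = 1
signal = 0.2 + 0.8 * t**2 + 0.1 * rng.standard_normal(t.size)

high_thr, low_thr = 0.9, 0.3                     # hypothetical threshold settings
alarms = (signal > low_thr).astype(float)        # binary "excitable" sensor output

navg = 100                                       # batch size for step-2 averaging
p_hat = alarms.reshape(-1, navg).mean(axis=1)    # estimated alarm probabilities

print("alarm probability over time:", np.round(p_hat, 2))
print("first high-threshold alarm at t =", t[np.argmax(signal > high_thr)])
```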

Figure 2: Evolution of the measurement vector. [Figure: a sequence of binary measurement vectors z over time, each drawn from the current probability vector; the initial state is p = (0.01, 0.01, 0.01, 0.01, 0.01)' and a terminal state is pterm1 = (0.99, 0.01, 0.01, 0.95, 0.01)'.]

It is to be noted that if a failure happens very quickly compared to the sensor sampling time, then neither our approach to prognosis, nor indeed any other prognosis approach, will work: in order that prognostic information be useful, there must be some sort of trend. The strength of our approach is that the trend need not be of a predetermined structure or path.


In the following section we will give details on the measurement model and on TEAMS. Then, in Section 3 there will be a review of tracking algorithms [Ph01], [Ph02] and models. In Section 4 we describe how prognostic inference (time to failure and prediction of failure type) is to overlay the trackers, followed by a description of how the algorithms were tested. In Section 5 we present simulations, and conclusions are drawn in Section 6.

2. PROBLEM FORMULATION

2.1 MEASUREMENT MODEL

The modeling begins with a set of sensors, and implicit to each is an analog signal containing information about the prognostic health of the system. The sensors are sampled at a rate much higher than the rate of the system's evolution to failure. Consecutive samples must be independent: the sampling interval ought to exceed the decorrelation time of the underlying analog signals. Our prognostic inference is not concerned with these analog signals, but observes only their thresholded versions: that is, it sees only binary signals. For each signal, the threshold level is chosen such that it is crossed frequently during the evolution of the system. (For instance, a good choice for the threshold level may be such that at initialization the threshold is exceeded 50% of the time; if the analog signal is too smooth or not "noisy" enough, the signal can be dithered.) As illustrated in Figure 2, let the set of binary random signals be denoted as {z1, z2, …, zns}, where zi ∈ {0, 1} and ns is the number of sensors. Because the system is assumed to evolve slowly with respect to the data acquisition rate, and the signals are random, over any small amount of time a probability can be assigned to each signal. Define the vector of probabilities p = [p1 p2 … pns]', where pi = Pr(zi = 1), the probability that sensor i has raised a fault flag. The probability vector represents a position in an ns-dimensional space, that being the unit hypercube with one vertex at the origin (0, 0, …, 0) and the opposite at (1, 1, …, 1). The probability vector is estimated from the most recent n measurements, which are the binary values determined from the sensors. Key to our approach is that the measurement vector is viewed as a direct measurement of the (desired) probability vector. Note, however, that the elements of the measurement vector z take only discrete values, while the elements of the probability vector p have values that are continuous. Define the measurement noise as w = z − p. The elements of the measurement noise vector are assumed independent of each other. The measurement noise has the unusual form:

$$ w_i = \begin{cases} 1 - p_i & \text{with probability } p_i \\ -p_i & \text{with probability } 1 - p_i \end{cases} \qquad (1) $$

This distribution is essentially a Bernoulli distribution, just shifted to have a mean of zero. The variance is pi(1 − pi). The measurement noise can be assumed to be white, under the assumption that the analog signals decorrelate within the sample time. The vertices of the unit hypercube represent all the possible 2^ns sensor signatures. Each failure mode should have associated with it a sensor signature; the sensor signature can be thought of here as a terminal state (see Figure 3), and it is reasonable to assume that the population of these 2^ns possible sensor signatures by valid fault modes should be relatively sparse (see the Documatch, LGCU and X38 systems described in [Er05]). As the system progresses towards failure, the probability vector would tend to approach one of the terminal states. The main goal is to predict when the probability vector will reach the boundary of the unit hypercube, which corresponds to at least one sensor's binary signal having a probability of 1. A "time to failure" (TTF) estimate is desired.
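As a small numerical check of this measurement model, the sketch below (the probability values and navg are assumed for illustration) draws Bernoulli sensor outputs and confirms that a batch average acts as a direct, nearly Gaussian measurement of p whose noise variance is p(1 − p)/navg, the quantity that reappears as R in (10).

```python
# Sketch of the Section 2.1 measurement model with assumed numbers: each sensor
# emits z_i ~ Bernoulli(p_i); a batch average is the measurement handed to the filter.
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.01, 0.01, 0.5, 0.95])     # hypothetical alarm probabilities
navg = 1000

z = (rng.random((navg, p.size)) < p).astype(float)   # binary sensor outputs
z_avg = z.mean(axis=0)                               # averaged measurement of p
noise_std = np.sqrt(p * (1.0 - p) / navg)            # stddev of the averaged noise

print("true p     :", p)
print("measured p :", np.round(z_avg, 3))
print("noise std  :", np.round(noise_std, 4))
```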


Figure 3: Sample trajectory of the probability vector in a 3-dimensional state space. The terminal states are available from a component-failure/sensor-signature dependency modeling tool such as TEAMS.

2.2 TEAMS

An important secondary goal is to predict which terminal state (fault mode) the probability vector is heading towards. Typically, there will be a subset of terminal states that are already classified as known faults, either from past experience with the system or from modeling the system; indeed, here the set of terminal states, or equivalently fault-mode sensor-signatures, is found using TEAMS, software made by Qualtech Systems Inc. It should be stressed that the approach of this paper requires only that there be a model of component-failure/sensor-signature dependencies: TEAMS is efficient at finding this, but there are competing tools that we could use.

           Test 1   Test 2   Test 3
No fault     0        0        0
Fault 1      0        1        0
Fault 2      1        1        0
Fault 3      0        0        1
Fault 4      1        0        0

Figure 4: Example of the D-matrix whose rows indicate the terminal states as in Figure 3.


TEAMS ("Testability Engineering and Maintenance System") uses dependency modeling and a more advanced technique, multi-signal modeling, to generate test sequences and perform testability analysis of the system. From the model, TEAMS generates a dependency matrix (D-matrix), a rather small example of which is given in Figure 4. The D-matrix contains the fault signatures for all single component faults. Tests are enumerated along the columns of the D-matrix, and faults are enumerated along the rows. All entries in the D-matrix are binary. A value of 1 in the ith row and jth column means that test j can detect a failure of component i. The set of known terminal states is taken directly from the rows of the D-matrix. If the terminal state that the probability vector is approaching can be estimated correctly, and it belongs to the set of known terminal states, then the impending failure is known. For instance, if the sensor corresponding to test 3 were determined to be approaching failure while the sensors corresponding to tests 1 and 2 were not, then the system would conclude that fault 3 is imminent, with the time until failure given by the calculation methods discussed later. If instead the sensor for test 2 indicated an impending failure, TEAMS would require knowledge of test 1 to determine where the failure lies: if test 1 did not also show an imminent failure, then fault 1 would be determined to be the cause, since fault 2 requires both tests 1 and 2 to show failures. If both tests 1 and 2 showed failures, TEAMS would report that some combination of faults 1, 2, and 4 could be responsible.
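The lookup just described can be imitated in a few lines. The following toy stand-in (TEAMS itself performs far more sophisticated multi-fault inference) checks an extrapolated binary signature against the D-matrix of Figure 4, reporting both exact single-fault matches and the faults whose signatures could combine to explain the alarms.

```python
# Toy D-matrix lookup; the matrix values are taken from Figure 4.
import numpy as np

D = np.array([[0, 0, 0],    # No fault
              [0, 1, 0],    # Fault 1
              [1, 1, 0],    # Fault 2
              [0, 0, 1],    # Fault 3
              [1, 0, 0]])   # Fault 4
labels = ["No fault", "Fault 1", "Fault 2", "Fault 3", "Fault 4"]

def candidate_faults(signature):
    """Exact single-fault matches, plus faults covered by the alarming tests."""
    exact = [labels[i] for i in range(len(D)) if np.array_equal(D[i], signature)]
    covered = [labels[i] for i in range(1, len(D))
               if D[i].any() and np.all(D[i] <= signature)]
    return exact, covered

print(candidate_faults(np.array([0, 0, 1])))  # Fault 3, unambiguously
print(candidate_faults(np.array([1, 1, 0])))  # Fault 2 exact; faults 1, 2, 4 possible
```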

[Figure: one cycle of the Kalman filter, comprising the state prediction $\hat{x}(k+1|k) = F(k)\hat{x}(k|k) + G(k)u(k)$ and state prediction covariance $P(k+1|k) = F(k)P(k|k)F(k)' + Q(k)$; the measurement prediction $\hat{z}(k+1|k) = H(k+1)\hat{x}(k+1|k)$, measurement residual $\nu(k+1) = z(k+1) - \hat{z}(k+1|k)$, and residual covariance $S(k+1) = R(k+1) + H(k+1)P(k+1|k)H(k+1)'$; the filter gain $W(k+1) = P(k+1|k)H(k+1)'S(k+1)^{-1}$; and the updated state estimate $\hat{x}(k+1|k+1) = \hat{x}(k+1|k) + W(k+1)\nu(k+1)$ with updated covariance $P(k+1|k+1) = P(k+1|k) - W(k+1)S(k+1)W(k+1)'$.]

Figure 5: One cycle of the Kalman Filter. Diagram taken from [Ba01].


3. TRACKING ALGORITHMS

To predict the future trajectory of the probability vector, its current value must first be estimated. Statistical tracking algorithms are used to estimate the probability vector by removing the measurement noise. The tracking algorithms used are the (extended) Kalman filter with the IMM (Interacting Multiple Model) estimator [Ba01] overlay. Reviews of the Kalman filter and IMM estimator are given in the next sections, with a description of the implementation following.

3.1 THE KALMAN FILTER

For Kalman filtering the system is described by two linear equations: the plant equation, describing how the state evolves, and the measurement equation, describing the measurement of the state. We use the discrete-time Kalman filter, the common form. The equations are:

Plant equation:
$$ x(k+1) = F(k)x(k) + G(k)u(k) + v(k) \qquad (2) $$

Measurement equation:
$$ z(k) = H(k)x(k) + w(k) \qquad (3) $$

in which k is the time index, x is the state vector, u is the input vector, and z is the measurement vector. The process noise is represented by v, the measurement noise by w, with covariances Q and R given by:

$$ Q = E[vv'] \qquad (4) $$
$$ R = E[ww'] \qquad (5) $$

The operation of the Kalman filter is well known, and is nicely encapsulated in Figure 5, taken from [Ba01]. At the end of each cycle, we have an updated state estimate $\hat{x}(k+1|k+1)$ (the nomenclature (m|n) denotes estimation of the quantity at time m given data up to and including time n), and an updated state covariance $P(k+1|k+1)$; both of these are used at the beginning of the next cycle as k is incremented.
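For concreteness, the cycle of Figure 5 transcribes almost line-for-line into code. The sketch below is a bare implementation (no input term u, no numerical hardening such as a Joseph-form covariance update); the example values at the bottom are purely illustrative.

```python
# One Kalman filter cycle, following Figure 5 directly.
import numpy as np

def kf_cycle(x, P, z, F, H, Q, R):
    """Return x(k+1|k+1) and P(k+1|k+1) from x(k|k), P(k|k) and measurement z(k+1)."""
    x_pred = F @ x                           # state prediction
    P_pred = F @ P @ F.T + Q                 # state prediction covariance
    z_pred = H @ x_pred                      # measurement prediction
    S = R + H @ P_pred @ H.T                 # residual (innovation) covariance
    W = P_pred @ H.T @ np.linalg.inv(S)      # filter gain
    nu = z - z_pred                          # measurement residual
    return x_pred + W @ nu, P_pred - W @ S @ W.T

# Illustrative call with a second-order (constant velocity) model:
T = 1.0
F = np.array([[1.0, T], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
x, P = kf_cycle(np.zeros(2), np.eye(2), np.array([0.4]),
                F, H, Q=0.01 * np.eye(2), R=np.array([[0.05]]))
```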

Figure 6: Overview of the Interacting Multiple Model algorithm. Here there are assumed to be 2 modes.


The Kalman filter normally predicts one step ahead, but it can easily be used to do an "open loop" prediction for more than one time step.

3.2 THE IMM

The shortcoming of the Kalman filter is that it can track the trajectory evolution only in one mode (the one it is tuned to). When the trajectory deviates significantly from the assumed model (i.e., in target tracking terms: when there is a maneuver), the filter may be unable to adjust. The Interacting Multiple Model (IMM) estimator overcomes this by using multiple Kalman filters in parallel [Ba01], each matched to a different possible trajectory evolution. In the IMM estimator it is assumed that at any time the target trajectory evolves according to one of a finite number of models, which differ in their noise levels and/or structures. By probabilistically combining the estimates of the filters, typically Kalman (or EK) filters matched to these modes, an overall estimate is found. The steps of the IMM estimator (see Figure 6) are as follows [Ba01]:

1. Mode interaction or mixing: The mode-conditioned state estimates and the associated covariances from the previous iteration are mixed to obtain the initial conditions for the mode-matched filters.
2. Mode-conditioned filtering: A Kalman filter is used for each mode to calculate the mode-conditioned state estimates and covariances. In addition, the likelihood of each mode is also evaluated.
3. Mode update: The mode probabilities are updated based on the likelihood of each mode.
4. State combination: The mode-conditioned estimates and covariances are combined using a probabilistic weighting to find the overall estimate based on the mode probabilities.
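These four steps can be sketched compactly. The snippet below assumes every mode shares the measurement matrix H and noise covariance R (true of the per-dimension trackers used here, where only F and Q differ across modes); it is meant to show the bookkeeping, not to be production code.

```python
# One IMM cycle: mixing, mode-matched filtering, mode update, combination.
import numpy as np

def imm_cycle(xs, Ps, mu, z, Fs, Qs, H, R, Pi):
    """xs, Ps: per-mode estimates; mu: mode probabilities; Pi[i, j] = p_ij."""
    r = len(xs)
    # 1. Mixing: predicted mode probabilities c and mixed initial conditions.
    c = Pi.T @ mu
    w = (Pi * mu[:, None]) / c[None, :]               # w[i, j] = mu_{i|j}
    x0 = [sum(w[i, j] * xs[i] for i in range(r)) for j in range(r)]
    P0 = [sum(w[i, j] * (Ps[i] + np.outer(xs[i] - x0[j], xs[i] - x0[j]))
              for i in range(r)) for j in range(r)]
    # 2. Mode-conditioned Kalman filtering, with per-mode Gaussian likelihoods.
    L = np.empty(r)
    for j in range(r):
        xp = Fs[j] @ x0[j]
        Pp = Fs[j] @ P0[j] @ Fs[j].T + Qs[j]
        S = H @ Pp @ H.T + R
        nu = z - H @ xp
        W = Pp @ H.T @ np.linalg.inv(S)
        xs[j], Ps[j] = xp + W @ nu, Pp - W @ S @ W.T
        L[j] = np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) \
               / np.sqrt(np.linalg.det(2.0 * np.pi * S))
    # 3. Mode probability update.
    mu = L * c
    mu /= mu.sum()
    # 4. Combination into the overall estimate.
    x = sum(mu[j] * xs[j] for j in range(r))
    P = sum(mu[j] * (Ps[j] + np.outer(xs[j] - x, xs[j] - x)) for j in range(r))
    return xs, Ps, mu, x, P
```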

3.3 MEASUREMENT AVERAGING

In order to improve the performance of the Kalman filter, and to make use of a high sampling rate, measurement averaging has been incorporated into the algorithms. The Kalman filter expects the measurement noise to be Gaussian; but since our measurement noise is far from Gaussian (specifically, it is Bernoulli), we take batches of a number navg of consecutive measurements, take the average of each batch, and give that to the filter as the measurement. As navg is increased, the measurement noise becomes more nearly Gaussian (via the central limit theorem; see for example [Pa91]), the case for which the Kalman filter is optimal. A disadvantage of measurement averaging is that it introduces a lag; provided navg is not excessively high, this does not adversely affect the performance. The higher the sampling rate is, compared to the rate of evolution of the system, the higher navg can be without negatively affecting the performance; but the need for independence between samples supplies a trade-off. In practice, navg can be chosen based on a required minimum number of predictions per average life-cycle time, or else a maximum time between predictions.

3.4 TRAJECTORY MODELS

We wish to track the probabilities {pi} as they evolve: those which appear to be heading to unity are those that correspond to the TEAMS-generated fault-mode sensor signature (i.e., its row in the D-matrix). There is, naturally, some concomitant coupling between the pi's through the D-matrix, and one could take advantage of this in tracking: a trajectory would preferentially head towards a vertex corresponding to a row in the D-matrix. However, we have found [Ph01] that this coupling is relatively weak, and further that to take advantage of it would require an IMM of high model-order complexity. Since high model-order IMMs are usually rejected for reasons of performance, we adopt the usual tracking expediency [Ba01] of operating a separate tracker in each dimension.

To model the system, in each dimension, we use kinematic (inertial) state models [Ba01], also called polynomial models. We expect that the motion of the state might remain stationary for a time, and then move along a trajectory to some terminal value. Because it is not known beforehand when the state will start moving, or what type of trajectory the state will move along, we choose to use multiple models for the trajectory. If only a single model were to be used, the Kalman filter would not be able to provide satisfactory noise reduction while tracking the state through such a wide range of motion. For multiple models, we use the Interacting Multiple Model (IMM) estimator. The set of models we use consists of direct discrete time kinematic models of varying orders [Ba01]. We use a first order kinematic model for the situation that the state remains in one position: that is, the no-fault condition, at least as measured by the sensor that corresponds to that dimension. We use a second order kinematic model for the case when the state exhibits linear motion. We also use a third order model to account for parabolic motion. (We do not expect the state to be limited to these types of motion, but as has usually been found in target tracking experience, the process noise will take care of this, and in any case the IMM renders the Kalman filter in a sense adaptive.) By considering each dimension separately (using an IMM for each one), we allow the IMM to focus on the motion in each dimension, rather than the "average" motion over all the dimensions. Each dimension corresponds to the probability of failure for a given sensor. An added benefit of filtering each dimension separately is a large reduction in computational cost. For a third order kinematic model, the augmented state vector is given by (6):

$$ x_d = \begin{bmatrix} p_d \\ \dot{p}_d \\ \ddot{p}_d \end{bmatrix} \qquad (6) $$

in which d is the dimension. The system matrices used in the Kalman filter for the third order kinematic model are as follows in (7) – (10). The system matrix is:

$$ F = \begin{bmatrix} 1 & T & T^2/2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix} \qquad (7) $$

Here, T is the time between the intervals at which the measurements given to the filters are taken. We will call it the estimation interval time. The estimation interval time is given by T = navg·T0, where T0 is the sampling interval time. The measurement matrix is:

$$ H = [\,1 \;\; 0 \;\; 0\,] \qquad (8) $$

The process noise covariance matrix is:

$$ Q = \sigma^2 \begin{bmatrix} T^6/36 & T^5/12 & T^4/6 \\ T^5/12 & T^4/4 & T^3/2 \\ T^4/6 & T^3/2 & T^2 \end{bmatrix} \qquad (9) $$

where σ is the standard deviation of the white noise input sequence to the plant equation. In our case, R happens to be a scalar, and it is calculated as:

$$ R = \frac{1}{n_{avg}} p_1 (1 - p_1) \qquad (10) $$

The Kalman filter we are using is actually an Extended Kalman filter (EKF) [Ba01] since the measurement noise covariance matrix R is re-calculated at each interval, and it depends on the position element of the estimated state vector.
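The model matrices of (7)-(10) assemble mechanically; in the sketch below the numerical values of T, σ, and navg are illustrative only.

```python
# Third-order kinematic model matrices, equations (7)-(10).
import numpy as np

def third_order_model(T, sigma_v):
    F = np.array([[1.0, T, T**2 / 2],
                  [0.0, 1.0, T],
                  [0.0, 0.0, 1.0]])                        # system matrix (7)
    H = np.array([[1.0, 0.0, 0.0]])                        # measurement matrix (8)
    Q = sigma_v**2 * np.array([[T**6 / 36, T**5 / 12, T**4 / 6],
                               [T**5 / 12, T**4 / 4,  T**3 / 2],
                               [T**4 / 6,  T**3 / 2,  T**2]])  # process noise (9)
    return F, H, Q

def measurement_var(p1, navg):
    return p1 * (1.0 - p1) / navg                          # scalar R of (10)

F, H, Q = third_order_model(T=20000.0, sigma_v=2e-18)      # values illustrative only
print(measurement_var(0.5, navg=1000))                     # worst-case R = 2.5e-4
```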

4. PROGNOSTIC ESTIMATION ALGORITHMS

Two separate algorithms have been developed for estimating the time to failure (TTF).

• The deterministic method extrapolates the trajectory of the state and finds when it will reach the terminal value. This method of using the Kalman filter to track a parameter and predict the time to failure is similar to that proposed by Swanson [Sw01], although the parameters being tracked and the mode of tracking are rather different.

• The probabilistic method predicts the future values of the state and covariance, and creates a cumulative distribution function (cdf) of the time to failure. Extrapolation here is, and must be, identical to that in the deterministic method; the difference is that the model's uncertainty (i.e., covariance) is also calculated and used.

Both of these methods operate in one dimension of the probability space. To find the overall TTF, the minimum is taken over all dimensions.

4.1 DETERMINISTIC METHOD FOR TTF

The deterministic method takes the updated state estimate from the IMM at every estimation interval, and extrapolates the future trajectory. According to the model, MMSE extrapolation is done by assuming the highest order term of the state will remain constant, and ignoring future process noise inputs. In the current implementation, using models of up to third order, the equation of motion is calculated as:

$$ \hat{x}_1(0) = \begin{bmatrix} \hat{p}_1(0) \\ \hat{\dot{p}}_1(0) \\ \hat{\ddot{p}}_1(0) \end{bmatrix} \;\Rightarrow\; \hat{p}_1(t) = \frac{1}{2}\hat{\ddot{p}}_1(0)t^2 + \hat{\dot{p}}_1(0)t + \hat{p}_1(0) \qquad (11) $$

The highest order term in the state is acceleration, so the trajectory is calculated based on constant acceleration. To find the TTF, the position is set equal to 1, and the equation is solved. If there are no solutions that are real and non-negative, then the TTF is set to infinity.
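In code, the deterministic method reduces to a root-finding step. The sketch below (the state values in the example call are assumed) returns the smallest real, non-negative root of (11) with the position set to the terminal value, or infinity if none exists.

```python
# Deterministic TTF: solve 0.5*a*t^2 + v*t + (p0 - 1) = 0 for the smallest
# real non-negative root; np.roots trims the leading zero when a == 0.
import numpy as np

def deterministic_ttf(p0, v, a, terminal=1.0):
    roots = np.roots([0.5 * a, v, p0 - terminal])
    valid = [r.real for r in roots if abs(r.imag) < 1e-12 and r.real >= 0.0]
    return min(valid) if valid else np.inf

print(deterministic_ttf(p0=0.3, v=1e-7, a=2e-14))   # assumed state estimate
```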

4.2 PROBABILISTIC METHOD FOR TTF

The probabilistic method uses the estimates of the state x(k|k) and the state covariance P(k|k), and projects them into the future by using the same state prediction equations that are used in the Kalman filter. The state prediction equations can be thought of as the Kalman filter without measurements. The equations for one step into the future are:

$$ \hat{x}(k+1|k) = F\hat{x}(k|k) \qquad (12) $$
$$ P(k+1|k) = FP(k|k)F' + Q \qquad (13) $$

These equations can be used recursively to project the estimates any number of steps into the future. The estimated state and state covariance specify a probability density function (pdf) for the state. The position element of the state has a Gaussian pdf

$$ f(p_1) = \frac{1}{\sqrt{2\pi P_{11}}} \exp\!\left( \frac{-(p_1 - \hat{p}_1)^2}{2P_{11}} \right) \qquad (14) $$

whose mean is the first element of the state estimate, and whose variance is the (1,1) (northwest) element of the state covariance matrix. Let the distance between the mean and the terminal value be called δ. The event that the true position is further from the mean than δ is likened to the event that failure has occurred. Thus, the probability that failure has occurred is found by integrating the two "tails" of the pdf. This can be done via the complementary error function. The result of this (the probability that failure has occurred) becomes one point on the cdf of the time to failure. To get the next point, the whole process is repeated for one more step into the future. Once the cdf has been constructed out to an arbitrary look-ahead time, the cdf is interpolated at a value of 0.5 to find the TTF estimate. Confidence bounds can also be found by interpolating at other values, for instance 0.1 and 0.9 for an 80% confidence region.
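A sketch of this recursion follows, assuming SciPy is available for the complementary error function; F and Q would come from the kinematic model of Section 3.4, and the higher-order state elements are carried along in the projection.

```python
# Probabilistic TTF: project (x, P) open-loop via (12)-(13), accumulate the
# two-tailed failure probability from the Gaussian pdf (14), and read the cdf
# at the 0.5 level (0.1 and 0.9 would give 80% confidence bounds).
import numpy as np
from scipy.special import erfc

def probabilistic_ttf(x, P, F, Q, T, level=0.5, terminal=1.0, max_steps=500):
    cdf_prev = 0.0
    for k in range(1, max_steps + 1):
        x, P = F @ x, F @ P @ F.T + Q                 # state prediction equations
        delta = abs(terminal - x[0])                  # mean-to-terminal distance
        cdf = erfc(delta / np.sqrt(2.0 * P[0, 0]))    # two-tail probability
        if cdf >= level:                              # interpolate within the step
            return T * (k - 1 + (level - cdf_prev) / (cdf - cdf_prev))
        cdf_prev = cdf
    return np.inf
```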

4.3 TERMINAL STATE ESTIMATION

Because the models in the IMM can be classified as "normal operation" or "failure", the mode probabilities offer a very convenient way to detect failures. Mode probabilities of the IMM have been used before to diagnose faults. In [Me95], [Me98], [Ra98], [Zh97] raw sensor data outputs were assumed to have been modeled both in normal and all abnormal operation modes, and a jump from one mode to another would announce itself via the IMM mode probabilities; this is the usual application of the IMM to system diagnosis, and while it has shown very nice results, its weaknesses are a need for precise modeling in the domain of the raw signal, and a lack of prognostic information. The approach of this paper uses a simple suite of relatively untuned kinematic models on the sensor-alarm probabilities: there is little need for modeling beyond what is known in the system dependency matrix, and the prognostic information is rich.

The terminal state estimation algorithm uses the mode probabilities of the IMM to distinguish in which dimensions the positions are moving. The assumption is made that the positions will move much more in dimensions where they are approaching a terminal state than in dimensions where they are not. Therefore, the set of dimensions in which motion occurs can be used to determine which terminal state is in effect. Let {T1, T2, …, TM} be the set of known terminal state vectors. Also, let S be an N-dimensional vector (N being the number of dimensions) defined as Si = 1 − P1(i), where P1(i) is the IMM mode probability of model 1 (non-moving) for the ith dimension. So, Si is the probability that motion is occurring in dimension i. From Bayes' rule, the probability that the jth terminal state is in effect, given the value of the random vector S, is:

$$ P(T_j \mid S) = \frac{f(S \mid T_j) P(T_j)}{f(S)} \qquad (15) $$

Assume that P(Tj) and f(S) are uniform and that the different elements of S are independent of each other; there results:

$$ P(T_j \mid S) \propto f(S \mid T_j) = \prod_{i=1}^{N} f(S_i \mid T_j) \qquad (16) $$

Since the pdf's in the product are unknown, the following simple ad hoc form is used:

$$ f(S_i \mid T_j) = \begin{cases} 2S_i & \text{if } T_j(i) = 1 \\ 2(1 - S_i) & \text{if } T_j(i) = 0 \end{cases} \qquad (17) $$

where Tj(i) represents the ith element of the jth terminal state. This form is simple, yet contains the information that if a dimension has a terminal value of 1, then the probability of motion in that dimension is expected to be high, and vice versa. The end result is a set of (non-normalized) probabilities:

$$ P(T_j \mid S) \propto \prod_{i=1}^{N} \begin{cases} S_i & \text{if } T_j(i) = 1 \\ 1 - S_i & \text{if } T_j(i) = 0 \end{cases} \qquad (18) $$

These are calculated for each known terminal state, and then normalized by their sum, resulting in an estimate of the probabilities of each terminal state being in effect.
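Equation (18), with the final normalization, is nearly a one-liner in practice. In the sketch below the terminal states are the fault rows of Figure 4 and the motion probabilities S are assumed values.

```python
# Terminal-state probabilities via (18), normalized over the known states.
import numpy as np

Tstates = np.array([[0, 1, 0],   # fault 1
                    [1, 1, 0],   # fault 2
                    [0, 0, 1],   # fault 3
                    [1, 0, 0]])  # fault 4
S = np.array([0.1, 0.9, 0.05])   # assumed: strong motion in dimension 2 only

post = np.prod(np.where(Tstates == 1, S, 1.0 - S), axis=1)
post /= post.sum()
print(np.round(post, 3))         # fault 1 (signature 0 1 0) dominates
```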

5. RESULTS

5.1 TEST DATA GENERATOR

The trajectories must be created in a way that is not overly suited to the models used in the tracking algorithms. A data generator was written that used polynomials to describe the trajectories. The order of the polynomials was deliberately chosen higher than the kinematic models could accommodate, in order that there be mismatch between ground truth and the kinematic modeling. The trajectories begin at values given by the initial probability vector p0, which is a parameter. They remain constant until a time tmove. The number of points on the trajectory is determined by the sampling time dt. Between tmove and the time tend, the dimensions that have non-zero elements in the terminal state Tf approach their terminal values via trajectories that are described by polynomials. The polynomial coefficients for each trajectory are chosen randomly (but positive, for monotonicity) and such that the paths increase monotonically from 0 to 1 over the range 0 to 1. The polynomials are then shifted and scaled along the time axis so that they begin at tmove and the earliest one finishes at tend. The others finish later, at randomly chosen times. Next, the magnitudes of the polynomials are shifted and scaled before they are concatenated to the end of the trajectories. Once the trajectories are created, every point in the trajectories is a value between 0 and 1. If a point in the trajectory has a value p, then its corresponding measured value is a 1 with probability p, and otherwise a 0.
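One way to realize such a generator (all parameters below are assumed) is to draw positive polynomial coefficients, which guarantees monotonicity on [0, 1], rescale the path, and then sample binary measurements from the resulting probability trajectory:

```python
# Monotone polynomial trajectory plus Bernoulli measurements, as in Section 5.1.
import numpy as np

rng = np.random.default_rng(2)

def trajectory(n_points, i_move, order=6, p0=0.01):
    coeffs = rng.random(order)                 # positive => monotone on [0, 1]
    s = np.linspace(0.0, 1.0, n_points - i_move)
    path = sum(c * s**(k + 1) for k, c in enumerate(coeffs))
    path /= path[-1]                           # rescale to end exactly at 1
    return np.concatenate([np.full(i_move, p0), p0 + (1.0 - p0) * path])

p = trajectory(n_points=1000, i_move=300)      # constant until i_move, then rises
z = (rng.random(p.size) < p).astype(int)       # measured value: 1 w.p. p, else 0
```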


5.2 PARAMETER CHOICES AND TUNING RATIONALE

The algorithms are applied to a system that represents a 1553 serial digital multiplex data bus (MIL-STD-1553). The 1553 is widely used in military aircraft to allow the various systems to communicate with each other. The TEAMS model of the 1553 can be considered medium-sized, with 61 tests and 174 components. Each test translates into a dimension, so the state space is 61-dimensional. Of the 174 components, there are only 52 unique terminal states, reflecting the fact that the system does not have perfect fault isolation. In generating the data, we chose a sampling time of 20 seconds, and an expected time to failure (ETTF) of 3,600,000 seconds (1,000 hours). The terminal state was chosen randomly. Sixth order polynomials were used for the trajectories: this is three times the order of the trajectory that our highest order kinematic model can track. The system operated nominally until a time of 0.3*ETTF, when the states started to move. At a time of 1.1*ETTF, one of the states (the fastest moving one) reaches its terminal value, signifying a failure. (In an actual implementation of these algorithms, the sampling time and ETTF would be predetermined by the system and its sensors.) The critical tuning parameters are: the averaging number navg, the expected time of evolution (ETOE), the process noise covariance scaling, and the IMM transition matrix. The averaging number used was 1,000. With this averaging number and the sampling time of 20 seconds, the estimation interval time is 20,000 seconds. This corresponds to 180 estimation intervals in a time equal to the ETTF. Increasing the value of the averaging number reduces the computational requirements, but also can cause lag in the tracking. The authors have found that to avoid lag, the averaging number should be small enough that there are at least 50 estimation intervals during the evolution of the states. The ETOE is similar to the ETTF, except that it is the expected time to failure once the states begin to move (damage occurs). The ETOE used in this case was 3,600,000 seconds (equal to the ETTF). The actual time of evolution is 0.8*ETTF, so in this case the ETOE is mismatched by 25%. The next critical tuning parameter is the scaling of the process noise covariance matrices. Because the models are kinematic, these matrices are predetermined except for a scaling factor σv², which represents the variance of the process noise input. The authors have found that a good choice of σv (standard deviation) is:

$$ \sigma_v = \frac{100}{\mathrm{ETOE}^N} \qquad (19) $$

where N is the order of the kinematic model. This quantity is computed for the first, second, and third order kinematic models. Additionally, in order to make the first order (non-moving) model more distinguishable to the IMM, the process noise standard deviation σv for the first model is then divided by 1,000. The last critical parameter is the IMM transition matrix. The mean sojourn time is the expected number of consecutive estimation intervals during which a certain model is in effect [Ba01], and that of model i is defined as:

$$ \tau_i = \frac{1}{1 - p_{ii}} \qquad (20) $$

where pii is the element of the transition matrix corresponding to the transition from model i to model i. In order to choose nominal values for the transition matrix, the mean sojourn time for all three models was calculated as τ = (ETOE/T)/3. The diagonal elements of the transition matrix were then set to pii = 1 − 1/τ. The transition matrix used was:

$$ P = \begin{bmatrix} 0.9833 & 0.0083 & 0.0083 \\ 0.0008 & 0.9833 & 0.0158 \\ 0.0008 & 0.0158 & 0.9833 \end{bmatrix} \qquad (21) $$

where pij = Pr(model j in effect at time k+1 | model i in effect at time k).
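The construction of (20) and (21) is easily reproduced. In the sketch below the even split of the off-diagonal mass is an assumed simplification; the paper's matrix (21) instead skews the off-diagonal mass toward transitions between the two moving models.

```python
# Transition matrix from a common mean sojourn time tau = (ETOE/T)/3.
import numpy as np

ETOE, T = 3.6e6, 2.0e4
tau = (ETOE / T) / 3.0                  # mean sojourn time: 60 intervals here
p_stay = 1.0 - 1.0 / tau                # diagonal elements, ~0.9833

Pi = np.full((3, 3), (1.0 - p_stay) / 2.0)
np.fill_diagonal(Pi, p_stay)
print(np.round(Pi, 4))                  # rows sum to 1; compare with (21)
```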

Figure 7: True (dashed) and estimated (solid) trajectories of the sensor probabilities from one run. In this case the row of the D-matrix corresponding to the fault simulated implies that three sensors raise alarms, and these can be seen accurately tracked in the above.

5.3 TYPICAL PERFORMANCE BASED ON A SINGLE RUN

In this section, typical single run results will be presented. A prerequisite for the prognostic algorithms is to have good tracking of the states. With the IMM, and the measurement averaging that the algorithms use, good estimates of the positions of the states are easily obtained. The true positions of the states and their estimated values are shown in Figure 7.


Figure 8: True (dashed) and estimated (solid) velocities of the sensor probabilities from one run. As with the previous figure, the row of the D-matrix corresponding to the fault simulated implies that three sensors raise alarms, and even these can be seen accurately tracked.


It is helpful to also measure the tracking performance by the velocity estimates, because the velocities are not directly measured. The true velocities and their estimates are shown in Figure 8.

Figure 9: IMM mode probabilities for the fastest evolving sensor. One Monte Carlo run.

In Figure 9 the mode probabilities of the IMM assigned to the fastest moving dimension are shown. The mode probabilities are all initialized at values of 1/3. Early on, the mode probability for model 1 (non-moving) increases to about 0.8: the state has not started to move yet. At a time of about 10^6 seconds, the state slowly starts to move. Near the time of 1.6×10^6 seconds, the IMM starts to be able to distinguish that the state is moving, and the mode probabilities for model 2 (linear motion) and model 3 (parabolic motion) increase. By the time of 2×10^6 seconds, mode probability 1 has dropped nearly to zero, indicating a high certainty that the state is moving.

Figure 10: Deterministically estimated (solid) and true (dashed) TTF. One Monte Carlo run.


The deterministic estimate of the TTF is shown in Figure 10. One can see that this estimate is quite variable at first, and then gradually converges, but it is always higher than the correct value. Towards the end, however, the TTF estimate is quite close to the true TTF.


Figure 11: Probabilistically estimated TTF (solid) along with its estimated 80% confidence bounds (upper and lower, dotted), and the true TTF (dashed). One Monte Carlo run.

The probabilistic estimate of the TTF is shown in Figure 11. The estimate for the threshold of 0.5 may be interpreted as the actual TTF estimate, and those for the other thresholds can be thought of as 80% confidence bounds. These latter converge very closely towards the end, indicating a high certainty of the estimate.


Figure 12: Estimated likelihoods of the true terminal state and others. One Monte Carlo run.

The results of the terminal state estimation are shown in Figure 12. This figure shows estimates of the probabilities corresponding to all 52 unique rows of the 1553 bus D-matrix: clearly, the true fault is unambiguously identified. In fact, although at first the likelihoods of the various terminal states are quite variable, near the time when the IMM recognizes the movement of the states, the terminal state estimation algorithm recognizes the destination; that is, the type of fault that appears to be incipient is identified too.


Figure 13: Deterministic TTF estimate: 90%, 50%, 10% percentiles, and true TTF (500 runs).

Figure 14: Probabilistic TTF estimate: 90%, 50%, 10% percentiles, and true TTF (500 runs).

5.4 MONTE CARLO PERFORMANCE

The following Monte Carlo results were based on 500 runs. The terminal state, trajectories, and instances of measurement noise were varied randomly between runs. The time when the states begin moving, and the time of failure, were kept the same between runs, in order to allow comparison. In Figure 13 are the 90%, 50%, and 10% percentiles for the deterministic TTF estimate. The deterministic estimate tends always to be above the true TTF, because the highest order model used is third order, while the actual motion is of a higher order. The variance is large at the beginning and throughout most of the time. After the states start to move, the estimate approaches the true value, and the variance approaches zero. The deterministic estimate is valuable near the time of failure, where it is close to the true TTF.


Figure 15: Normalized root mean square error of TTF estimates (500 runs).

The percentiles for the probabilistic TTF estimate are shown in Figure 14. (The confidence bounds of the probabilistic TTF estimation are not considered here.) The probabilistic estimate is very close to the true TTF over the entire evolution time. The probabilistic estimates have a relatively small variance. At first, they remain at a constant level, determined by the size of the process noise covariance Q. The probabilistic estimate becomes accurate after the true TTF drops below that level. From that time on, the median value of the estimate is very close to the true value. To further compare the two methods of estimating the TTF, Figure 15 displays the root mean square error (RMSE). The RMSE is normalized by dividing by the actual TTF. The normalized RMSE of the deterministic estimate starts very high, and steadily decreases. The normalized RMSE of the probabilistic estimate is much lower, almost always below 18%. The sharp increases at the right of the graph are due only to the fact that the true TTF approaches zero there. The results of the terminal state estimation can be evaluated from Figure 16, which shows the number of occurrences of correct estimation of the terminal state, as a function of time. Additional lines on the graph show the number of occurrences where the terminal state was correct and its likelihood greater than a certain value. Halfway into the evolution time (2.5×10^6 seconds), the estimation is almost always correct. The "knee" at the top of the graph would be sharper were it not for the fact that some terminal states differ only in one dimension, making them difficult to distinguish.

5.5 COMPUTATION

The problem being studied is one of slowly degrading components. Even in a large model (containing thousands of components and tests) we have found the computational time required for the Kalman/IMM update to be under a second. Considering that the normal failure times that we are predicting are of the order of hours, days, weeks, or even months or years, these computational times seem well within acceptable ranges. It can also be noted that the only stage where the computational time is even this high is the updating of the Kalman filter and IMM estimates, which occurs only once in every navg measurements.


Figure 16: Occurrences of correct terminal state estimation (500 runs).

6. CONCLUSION

We have proposed an approach to prognostics that uses probabilities of binary random signals to track the system's health. Practical present-day sensors are typically equipped with a high threshold to discourage distracting false alarms, but let us imagine (propose) that thresholds be set low enough that sensor alerts are common. Our contention is that we ought to be able to track these alert probabilities; and unlike direct parameter (such as temperature) tracking, a probability is constrained to be between zero and unity, so it is quite clear, when we extrapolate, where a dangerous operating point is. This paper's purpose is to explore what might be done with such low-threshold sensors: should we recommend them? As for tracking: measurement averaging, Kalman filters using robust kinematic models, and the IMM are used. As an overlay, two separate algorithms estimate the time to failure: the deterministic algorithm has low computational requirements but is accurate only in the near term; the probabilistic algorithm is more demanding but correspondingly more accurate, and also gives estimated confidence bounds. A wrapper algorithm that predicts the failure mode is also given: the key here is matching the extrapolations from the trackers to rows of the D-matrix from a dependency modeling engine such as Qualtech Systems' TEAMS. On simulated data for a realistic model, the algorithms produced results sufficiently impressive that we believe the idea may be extremely valuable. Further research is required to apply this method to real data.


7. REFERENCES

[Ba01] Y. Bar-Shalom, X. Li and T. Kirubarajan, Estimation with Applications to Tracking and Navigation: Algorithms and Software, New York, NY: John Wiley & Sons, 2001.
[Bo00] D. Board, "Stress Wave Analysis of Turbine Engine Faults", IEEE 2000 Aerospace Conference Proceedings, CD-ROM, March 2000.
[Br99] T. Brotherton, G. Chadderdon, and P. Grabill, "Automated Rule Extraction for Engine Vibration Analysis", IEEE 1999 Aerospace Conference Proceedings, CD-ROM, March 1999.
[Br00] T. Brotherton, G. Jahns, J. Jacobs, and D. Wroblewski, "Prognosis of Faults in Gas Turbine Engines", IEEE 2000 Aerospace Conference Proceedings, CD-ROM, March 2000.
[De00] D. DeCoste, "Learning Envelopes for Fault Detection and State Summarization", IEEE 2000 Aerospace Conference Proceedings, CD-ROM, March 2000.
[De97] S. Deb, K.R. Pattipati, R. Shrestha, "QSI's Integrated Diagnostics Toolset", Proceedings of IEEE AUTOTESTCON, Anaheim, CA, 1997.
[Do99] M. Dowell and G. Sylvester, "Turbomachinery Prognostics and Health Management via Eddy Current Sensing: Current Developments", IEEE 1999 Aerospace Conference Proceedings, CD-ROM, March 1999.
[Do00] D. Dousis, "Design and Implementation of the V-22 Tiltrotor Aircraft Vibration Monitoring and Diagnostic System", IEEE 2000 Aerospace Conference Proceedings, CD-ROM, March 2000.
[Ed00] J. Edmonds, M. Resner, and K. Shkarlet, "Detection of Precursor Wear Debris in Lubrication Systems", IEEE 2000 Aerospace Conference Proceedings, CD-ROM, March 2000.
[En00] S. Engel, B. Gilmartin, K. Bongort, and A. Hess, "Prognostics, The Real Issues Involved With Predicting Life Remaining", IEEE 2000 Aerospace Conference Proceedings, CD-ROM, March 2000.
[Er05] O. Erdinc, C. Brideau, P. Willett and T. Kirubarajan, "Real-Time Diagnosis with Sensors of Uncertain Quality", submitted to IEEE Transactions on Systems, Man & Cybernetics, Part B: Cybernetics, March 2005.
[Fl00] A. von Flotow, M. Mercadal, and P. Tappert, "Health Monitoring and Prognostics of Blades and Disks with Blade Tip Sensors", IEEE 2000 Aerospace Conference Proceedings, CD-ROM, March 2000.
[Ga97] A. Garga, B. Elverson, and D. Lang, "AR Modeling with Dimension Reduction for Machinery Fault Classification", Proceedings of the 51st Meeting of the Society for MFPT, Vol. 51, pp. 309-318, 1997.
[Ga01] A. Garga, K. McClintic, R. Campbell, C. Yang, M. Lebold, T. Hay and C. Byington, "Hybrid Reasoning for Prognostic Learning in Systems", IEEE 2001 Aerospace Conference Proceedings, CD-ROM, March 2001.
[Go00] T. Goodenow, W. Hardman, and M. Karchnak, "Acoustic Emissions in Broadband Vibration as an Indicator of Bearing Stress", IEEE 2000 Aerospace Conference Proceedings, CD-ROM, March 2000.
[Ha99] W. Hardman, A. Hess, and J. Sheaffer, "SH-60 Helicopter Integrated Diagnostic System (HIDS) Program – Diagnostic and Prognostic Development Experience", IEEE 1999 Aerospace Conference Proceedings, CD-ROM, March 1999.
[Ha00] W. Hardman, A. Hess, and J. Sheaffer, "A Helicopter Powertrain Diagnostics and Prognostics Demonstration", IEEE 2000 Aerospace Conference Proceedings, CD-ROM, March 2000.
[He04] A. Hess, G. Calvello and T. Dabney, "PHM: A Key Enabler for JSF Autonomic Logistics Support Concept", IEEE 2004 Aerospace Conference Proceedings, CD-ROM, March 2004.
[Ja99] L. Jaw, "Neural Networks for Model-Based Prognostics", IEEE 1999 Aerospace Conference Proceedings, CD-ROM, March 1999.
[Kh04] A. Khalak, G. Colman, A. Hess, "Damage Prediction for Interconnected, Degrading Systems", IEEE 2004 Aerospace Conference Proceedings, CD-ROM, March 2004.
[La05] P. Lall, M. Islam, M. Rahim and J. Suhling, "Prognostics and Health Management of Electronic Packaging", IEEE Transactions on Components and Packaging Technologies, to appear.
[Le00] S. Leader, R. Friend, "A Probabilistic Diagnostic and Prognostic System for Engine Health and Usage Management", IEEE 2000 Aerospace Conference Proceedings, CD-ROM, March 2000.
[Li97] Y. Li, J. Shiroishi, S. Danyluk, T. Kurfess, and S. Y. Liang, "Bearing Fault Detection via High Frequency Resonance Technique with Adaptive Line Enhancer", Proceedings of the 51st Meeting of the Society for MFPT, Vol. 51, pp. 763-772, 1997.
[Me95] T. Menke and P. Maybeck, "Sensor/Actuator Failure Detection in the Vista F-16 by Multiple Model Adaptive Estimation", IEEE Transactions on Aerospace and Electronic Systems, 31(4): pp. 1218-1229, Oct. 1995.
[Me98] R. Mehra, C. Rago, S. Seereeram, "Autonomous Failure Detection, Identification and Fault-tolerant Estimation with Aerospace Applications", IEEE 1998 Aerospace Conference Proceedings, CD-ROM, March 1998.
[Mi00] J. Miller and D. Kitaljevich, "In-line Oil Debris Monitor for Aircraft Engine Condition Assessment", IEEE 2000 Aerospace Conference Proceedings, CD-ROM, March 2000.
[Pa91] A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, Inc., 1991.
[Ph01] E. Phelps, T. Kirubarajan and P. Willett, "A Statistical Approach to Prognostics", Proceedings of the 2001 SPIE Aerosense Conference on Component and Systems Diagnostics, Prognostics, and Health Management, Orlando FL, April 2001.
[Ph02] E. Phelps, T. Kirubarajan and P. Willett, "Useful Lifetime Tracking via the IMM", Proceedings of the 2002 SPIE Aerosense Conference on Component and Systems Diagnostics, Prognostics, and Health Management, Orlando FL, April 2002.
[Po99] H. Powrie and C. Fisher, "Engine Health Monitoring: Towards Total Prognostics", IEEE 1999 Aerospace Conference Proceedings, CD-ROM, March 1999.
[Po00] H. Powrie, "Use of Electrostatic Technology for Aero Engine Oil System Monitoring", IEEE 2000 Aerospace Conference Proceedings, CD-ROM, March 2000.
[Ra98] C. Rago, R. Prasanth, R. Mehra, and R. Fortenbaugh, "Failure Detection and Identification and Fault Tolerant Control using the IMM-KF with applications to the Eagle-Eye UAV", Proceedings of the 37th IEEE Conference on Decision & Control, Tampa FL, December 1998.
[Ro00] M. Roemer and G. Kacprzynski, "Advanced Diagnostics and Prognostics for Gas Turbine Engine Risk Assessment", IEEE 2000 Aerospace Conference Proceedings, CD-ROM, March 2000.
[Ro01] M. Roemer, R. Howe and R. Friend, "Advanced Test Cell Diagnostics for Gas Turbine Engines", IEEE 2001 Aerospace Conference Proceedings, CD-ROM, March 2001.
[Sh00] D. Shepard, P. Tait, and R. King, "Foreign Object Detection Using Radar", IEEE 2000 Aerospace Conference Proceedings, CD-ROM, March 2000.
[So00] H. Sonnichsen, "Real-time Detection of Developing Cracks in Jet Engine Rotors", IEEE 2000 Aerospace Conference Proceedings, CD-ROM, March 2000.
[Sw01] D. Swanson, "A General Prognostic Tracking Algorithm for Predictive Maintenance", IEEE 2001 Aerospace Conference Proceedings, CD-ROM, March 2001.
[Zh97] Y. Zhang and X. Li, "Detection and Diagnosis of Sensor and Actuator Failures Using Interacting Multiple-Model Estimator", Proceedings of the 36th IEEE Conference on Decision & Control, December 1997.
