Data fusion applications in intelligent condition monitoring

A STARR*, R WILLETTS*, P HANNAH*, W HU*, D BANJEVIC**, A.K.S. JARDINE**
*Maintenance Engineering Research Group, Manchester School of Engineering, University of Manchester, Oxford Road, Manchester M13 9PL, UK
**CBM Laboratory, Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, M5S 3G8, CANADA

Abstract: Condition monitoring allows maintenance actions to be based on changes in a trend of machine or process parameters. The identification, monitoring and fusion of data from key parameters can provide greater confidence in decisions. Data fusion is a relatively new term to the condition monitoring community; in defence and other applications the field is mature and has seen extensive application. The advances in methods and frameworks for applications, in terms of condition monitoring, include:
• the crystallisation of a cohesive scheme for problem definition;
• structured solution selection;
• comparisons with dissimilar application fields which have similar problem structures or solution methods;
• the blending of quantitative and qualitative methods.
This paper briefly reviews architectures or frameworks, and draws examples from manufacturing and plant applications. The chief application shown establishes the key vibration monitoring parameters for equipment within a paper mill and aids the maintenance optimisation process.

Key-Words: Data fusion; Intelligence; Condition-based maintenance; Optimising maintenance decisions

1 Introduction

Data fusion may be defined as the process of combining data and knowledge from different sources with the aim of maximising the useful information content, for improved reliability or discriminant capability, whilst minimising the quantity of data ultimately retained. Most data fusion users find that the field is wider than they thought. Llinas described data fusion as a "cottage industry" [1]. The field is not simply about the core algorithms, but also about the way the problem is formulated and the choice of methods. The range of applications is vast, and fusion users in widely differing disciplines can shed light on structurally similar problems.
The sensor and signal processing communities have been using fusion to synthesise the results of two or more sensors for some years. This simple step recognises the limitations of a single sensor but exploits the capability of another similar or dissimilar sensor to calibrate, to add dimensionality, or simply to increase statistical significance or robustness to cope with sensor uncertainty. In many such applications the fusion process is necessary to gain sufficient detail in the required domain.
Crucially for condition monitoring, the encompassing philosophy of data fusion allows us to cross some boundaries where recent applications have faltered:
• it is possible to deal with the selection of data processing methods based on problem characteristics, e.g. data or knowledge density; the relationships are becoming clearer;
• we can merge qualitative and quantitative information, e.g. data and expert knowledge.
There are many problems to be overcome. A number of proposed frameworks exist, and each needs work before it could be called generic. There is much to be learnt from existing methods for technique selection, even if those are heuristic and empirical. Integrated approaches will involve multi-level fusion: sensor data, novelty detection, feature classification, diagnosis and decision making [2].
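To make the simplest form of sensor-level fusion concrete, the sketch below (not taken from the paper; the sensor values and noise variances are hypothetical) combines two redundant readings of the same quantity by inverse-variance weighting, so that the less noisy sensor dominates and the fused uncertainty is reduced:

```python
import numpy as np

def fuse_readings(means, variances):
    """Inverse-variance weighted fusion of redundant sensor readings.

    Each reading is weighted by the reciprocal of its noise variance, so
    the more reliable sensor dominates; the fused variance is smaller
    than that of the best single sensor.
    """
    means = np.asarray(means, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused_mean = np.sum(weights * means) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)
    return fused_mean, fused_variance

# Hypothetical example: two temperature probes on the same bearing housing
mean, var = fuse_readings(means=[71.2, 69.8], variances=[0.5, 2.0])
print(f"fused estimate {mean:.2f} degC, variance {var:.2f}")
```

The same weighting idea generalises to dissimilar sensors once their outputs are mapped into a common quantity.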

2 Structures in data fusion


Several framework architectures have been proposed. The layout of these architectures varies in relation to the field of application. The US Department of Defense Joint Directors of Laboratories (JDL) architecture [1] assumes a level distribution for the fusion process, characterising the data from the source signal level to a refinement level, where the fusion of information takes place in terms of data association, state estimation or object classification (see fig. 1). Situation assessment, at a higher level of inference, fuses the object representations provided by the refinement and can draw a course of action. It is clear that this architecture can be adapted, in principle, to condition monitoring problems.
The strategy for data fusion varies across applications. Three stages are commonly identified, although it is not always necessary to apply all of them:
• Pre-processing, i.e. reduction of the quantity of data whilst retaining useful information and improving its quality, with minimal loss of detail. This may include feature extraction and sensor validation. Techniques used include dimension reduction, gating, thresholding, Fourier transforms, averaging, and image processing.
• Data alignment, where the techniques must fuse the results of multiple sensors, or features already extracted in pre-processing. These include association metrics, batch and sequential estimation processes, grouping techniques, and model-based methods.
• Post-processing, combining the mathematical data with knowledge, and decision making. Techniques could be classified as knowledge-based, cognitive-based, heuristic, and statistical.
A method map of these techniques is included in [3]. Much research focuses on methods applied to particular problems, or on aspects of the architecture; examples include architectural issues dealing with multiple sensors in similar or dissimilar domains [4].
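As an illustration of the pre-processing stage only (using techniques named above, not a prescribed implementation; the sampling rate, band edges and alarm level are assumed for the example), a raw vibration record can be reduced to a few band energies plus a thresholded overall level:

```python
import numpy as np

def preprocess(signal, fs, bands, rms_alarm):
    """Reduce a raw vibration record to a few features: Fourier transform,
    dimension reduction to band energies, and a simple threshold (gate)
    on the overall RMS level."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    features = {}
    for lo, hi in bands:                       # keep only band energies
        mask = (freqs >= lo) & (freqs < hi)
        features[f"band_{lo}_{hi}_Hz"] = float(np.sum(spectrum[mask] ** 2))
    rms = float(np.sqrt(np.mean(signal ** 2)))
    features["rms"] = rms
    features["alarm"] = bool(rms > rms_alarm)  # thresholding
    return features

fs = 5000.0                                    # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
vibration = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 1200 * t)
print(preprocess(vibration, fs, bands=[(0, 500), (500, 2000)], rms_alarm=1.0))
```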

3 A condition monitoring perspective

Data fusion has clear applications in condition monitoring because a large quantity of data must be processed for a good assessment of machine health. The data arriving at the fusion centre may contain vibration, temperature, pressure, oil analysis and other measurements, which encapsulate the properties of the system and aid in its condition assessment [5].
An important aspect of condition monitoring is the fidelity of information received by the sensor units. The data acquired must be consistent and low in noise. Sensor complementarity, rather than redundancy, is important. These aspects should be considered at the source level to reduce pre-processing. The sample cycle should be small enough to catch machine faults, and input frequencies should be carefully selected to achieve the desired capabilities.
After the data has been acquired, it passes to pre-processing for digital conversion and manipulation. At this stage spectral analysis, correlation, image processing, time averaging, thresholding, and dimension reduction techniques may be implemented. Processed data is passed to the fusion centre and routed according to the level of fusion sought, i.e. raw data, feature, or decision level fusion. The data will thus reach the data alignment stage or the post-processing stage accordingly. No single fusion technique has been proposed; selection must be made depending upon the application. The fusion process could combine different sets of data, or could consider several systems.
Condition monitoring needs an output which contains explicit information for health assessment of the machine. A fault indicator, with a range of intermediate conditions, aids decision making. This information can be derived from the best estimate of value or state.
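A fault indicator of the kind described, with a small set of intermediate conditions, might be realised as simply as the following sketch; the thresholds are hypothetical and would in practice be set from baseline data:

```python
def fault_indicator(health_estimate, thresholds=(0.3, 0.6, 0.85)):
    """Map a fused, normalised health estimate (0 = as new, 1 = failed)
    onto a small set of intermediate conditions for decision making."""
    if health_estimate < thresholds[0]:
        return "normal"
    if health_estimate < thresholds[1]:
        return "degraded: increase monitoring"
    if health_estimate < thresholds[2]:
        return "warning: plan maintenance"
    return "alarm: intervene now"

print(fault_indicator(0.72))   # -> "warning: plan maintenance"
```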

4 The need for integrated parameters

Condition monitoring establishes the health of plant by regular measurement of parameters.

Figure 1: JDL data fusion architecture. Sensors and preliminary filtering feed level 1 processing (data alignment, association, tracking, identification), followed by situation assessment (level 2) and threat assessment (level 3).

Figure 2: Part of a diagnostic reasoning procedure for a machine tool. The rule tree spans sub-system, sub-sub-system, component/sensor and fault levels. Rules R111 (spindle motor), R211 (motor drive), R212 (control circuit), R311 (mechanical parts), R312 (motor temperature), R313 (motor connection), R321 (circuit connection), R322 (power connection), R323 (control amplifier), R421 (tool edge), R422 (cooling system), R423 (feed load), R451 (fuse) and R452 (power switch) are each annotated with a probability and a repair cost, e.g. 0.4 (£10).

By trending the data, the normal condition is established and thus any deviation from the norm can be identified. Kennedy [6] states, however, that the output of condition monitoring is data. Maintenance decisions are seldom based on a single parameter. They are usually based on the fusion of knowledge and experience of relationships between many parameters. The relationships can be complex, and each monitoring technique has its own data format and analysis.

4.1 Intelligent monitoring and diagnostics

Artificial Intelligence (AI), particularly expert systems and neural networks, has proved to be a popular environment for fusion research. More traditional environments such as multivariate statistics have also been used. A combination of neural networks and expert systems has been used for diagnostics on bearings [7]. Neural networks were used to combine data from three sensors for five different types of machine tool wear [8]. A generic diagnostic system was designed around fusion of several sensor sources [9]. Tiger, a hybrid expert system, monitors the operating conditions of gas turbines [10]. ADVISOR, based on the combination of neural networks and expert systems, diagnoses failures in rotating machinery [11].

4.2 Multivariate statistical systems

EXAKT™, developed by the CBM Consortium at the University of Toronto, optimises the maintenance interval of groups of machines by modelling the relationships between parameters and the probability of change. The Proportional Hazards Model (PHM) produces a statistical model of the plant item based on parameter history and the risk of failure. A Markov chain describes the behaviour of the condition over time. This is used with the PHM, the working age of the units, and the costs incurred by failure and replacement to predict optimum life.
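EXAKT's internal implementation is not reproduced here; as an illustration of the underlying idea only, the sketch below evaluates a proportional hazards model with a Weibull baseline, h(t, Z) = (beta/eta)(t/eta)^(beta-1) exp(gamma·Z), for hypothetical shape, scale and covariate coefficients:

```python
import math

def phm_hazard(t, covariates, beta, eta, gammas):
    """Proportional hazards model with a Weibull baseline:
    h(t, Z) = (beta/eta) * (t/eta)**(beta - 1) * exp(sum(gamma_i * z_i)).
    The baseline depends only on working age t; the measured condition
    parameters Z scale the instantaneous risk of failure."""
    baseline = (beta / eta) * (t / eta) ** (beta - 1)
    risk = math.exp(sum(g * z for g, z in zip(gammas, covariates)))
    return baseline * risk

# Hypothetical coefficients for two vibration covariates
h = phm_hazard(t=900.0, covariates=[1.8, 3.2],
               beta=2.1, eta=2500.0, gammas=[0.6, 0.25])
print(f"instantaneous hazard: {h:.5f} per day")
```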

5 Example: Knowledge based fusion

In complex systems decisions are based on measured parameters, but are related to knowledge about the system. In manufacturing systems, for example, operational faults account for about 70% of failures. Rapid diagnosis is critical for improving availability and productivity. Diagnosis of complex systems is challenging because there are many different faults, and training is likely to be forgotten before it can be applied.
Hierarchical diagnosis models, based on fault tree analysis, logical control and sequential control, can be built around the operation of the Programmable Logic Controller (PLC). With these models working together, operational faults can be diagnosed completely. The models have been successfully applied to a PLC-controlled flexible manufacturing system and have achieved good results, fully described in [12-14]. The model combines knowledge about the operating procedure with inputs from sensors. The stage of the programme indicates the broad area of the fault, and the status of the measured inputs localises the solution. Here a system fault in a flexible manufacturing system (FMS) is traced:
• In a functional tree, we first establish that the FMS has failed because of a machine tool; the tree further establishes that this is a milling machine, which has failed during the machining process.
• According to the principles of operation of the machining process, the highest probability lies in a spindle failure (0.4) and, in this case, the most likely fault is the spindle motor (0.45).
• The rule tree (shown in figure 2) analyses the potential faults in the spindle motor and their cost-weighted risk. Here several potential faults are compared, and the most likely ones are investigated: a fault could exist in the mechanical drive system (R211) or in the control circuit (R212). A first likely fault is a high temperature cut-out (R312) caused by a blunt tool (R421). Other faults are investigated in order of probability until the fault is found.
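The investigation order implied by the rule tree can be expressed programmatically; the sketch below uses hypothetical probabilities and inspection costs in the style of figure 2 (not the exact values) and ranks candidate faults so that the most probable, cheapest-to-check ones are examined first:

```python
def rank_candidate_faults(candidates):
    """Order candidate faults for investigation: highest probability
    first, cheapest inspection cost as the tie-breaker."""
    return sorted(candidates, key=lambda c: (-c["probability"], c["cost"]))

# Hypothetical leaf rules of a spindle-motor branch, in the style of figure 2
candidates = [
    {"rule": "R421 tool edge (blunt tool)", "probability": 0.4, "cost": 10},
    {"rule": "R422 cooling system",         "probability": 0.4, "cost": 30},
    {"rule": "R423 feed load",              "probability": 0.3, "cost": 30},
    {"rule": "R451 fuse",                   "probability": 0.2, "cost": 10},
]
for c in rank_candidate_faults(candidates):
    print(f'{c["rule"]}: p={c["probability"]}, cost £{c["cost"]}')
```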

6 Case study: Paper mill

The data in this case study was collected over 10 years. The paper mill collected on-line production control data and manually collected condition monitoring data. Readings were taken at predefined intervals to monitor changes in machine health and identify fault conditions, and were recorded in a proprietary condition monitoring management package, MIMIC™. Data was collected from 18 identical units, each with 8 measurement parameters relating to different failure modes. These parameters are used to monitor vibration signals in both the vertical and axial directions; Table 2 shows them together with a measurement description. There was an average of 140 data points for each parameter, with an average monitoring interval of 21 days. A total of 64 defined failure events was obtained from the job history system.

6.1 Building the models

A number of steps are involved in the building of a PHM model:
1. Data preparation, which formats the original measurement and failure history.
2. Establishing the optimum combination of parameters, or covariates, which are the most relevant for the defined groups of machines.
3. Defining a series of equally spaced bands for each covariate, from zero to an upper level which encompasses approximately 98% of the measurements. These bands form the basis of a probability matrix, which defines the probability of the measurement changing state in the next monitoring interval.
From these stages a decision model is built, which combines the above information with the costs of preventive maintenance and failures. The model calculates a number of features that are used to predict the optimal replacement period. The primary feature is the cost function, which is based upon the total costs that would be incurred at different risk values. This is then used to determine the optimal replacement interval at the least cost, and to predict remaining life.
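Steps 2 and 3 can be illustrated with a short sketch: covariate readings are assigned to equally spaced bands and a transition probability matrix is estimated from successive readings. The band count, upper level and sample history below are assumed for the example and do not reproduce EXAKT's algorithm:

```python
import numpy as np

def band_indices(values, upper, n_bands):
    """Assign each covariate reading to one of n equally spaced bands
    between zero and an upper level covering ~98% of measurements."""
    edges = np.linspace(0.0, upper, n_bands + 1)
    return np.clip(np.digitize(values, edges) - 1, 0, n_bands - 1)

def transition_matrix(band_sequence, n_bands):
    """Estimate the probability of moving between bands over one
    monitoring interval (a simple Markov-chain estimate)."""
    counts = np.zeros((n_bands, n_bands))
    for a, b in zip(band_sequence[:-1], band_sequence[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

readings = [0.4, 0.6, 0.5, 1.1, 1.4, 2.0, 2.3]   # one invented covariate history
bands = band_indices(readings, upper=2.5, n_bands=5)
print(transition_matrix(bands, n_bands=5))
```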

6.2 Data Preparation

The PHM model needs a complete data set to allow any analysis to be undertaken. This means that any missing data has to be predicted and included. During this case study two different but common situations were identified:
• Only one or two data points were missing: these values were interpolated from the surrounding points to give an estimated value.
• A large number of points were missing: an average value was calculated based on the readings observed in other identical units.
A Weibull analysis of the 8 inputs showed that the best model for the defined failure modes would be based upon the combination of vertical acceleration and the axial gear mesh feature.
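The two gap-filling rules might be implemented along the following lines; the series values are invented and the routine is only a sketch of the approach described above:

```python
import numpy as np
import pandas as pd

def fill_missing(unit_history, fleet_histories, max_gap=2):
    """Fill gaps in one unit's condition-parameter history: interpolate
    runs of up to max_gap missing points from the surrounding readings,
    and replace longer runs with the average of identical units."""
    series = unit_history.copy()
    fleet_mean = pd.concat(fleet_histories, axis=1).mean(axis=1)
    is_na = series.isna()
    run_id = (is_na != is_na.shift()).cumsum()         # label runs of NaNs
    run_len = is_na.groupby(run_id).transform("sum")   # length of each run
    short_gap = is_na & (run_len <= max_gap)
    long_gap = is_na & (run_len > max_gap)
    series[short_gap] = series.interpolate(limit_area="inside")[short_gap]
    series[long_gap] = fleet_mean[long_gap]
    return series

unit = pd.Series([2.1, np.nan, 2.4, np.nan, np.nan, np.nan, 3.0])
others = [pd.Series([2.0, 2.2, 2.3, 2.5, 2.6, 2.7, 2.9]),
          pd.Series([2.2, 2.3, 2.5, 2.6, 2.8, 2.9, 3.1])]
print(fill_missing(unit, others))
```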

Table 2: Description of Measurement Parameters

Parameter/Failure Mode           Direction   Units
Overall Vibration                Vertical    Velocity (mm/s)
Overall Vibration                Axial       Velocity (mm/s)
Gear Mesh                        Vertical    Velocity (mm/s)
Gear Mesh                        Axial       Velocity (mm/s)
Harmonics                        Vertical    Velocity (mm/s)
Harmonics                        Axial       Velocity (mm/s)
Acceleration                     Vertical    Acceleration (g)
High frequency bearing damage    Vertical    ESP Envelope (g)

Figure 3: Comparison of costs for the optimal policy and for replacement on failure.

6.3 Decision Model

The decision model is based upon the combination of the above stages with the cost of either replacement at failure or preventive maintenance. It is estimated that an unexpected failure of these units would result in a 24-hour shutdown of the entire plant, incurring a cost of £120k; a fixed-time replacement (preventive maintenance) would result in a 4-hour shutdown with an incurred cost of £20k. From this information a cost function was determined that identified the incurred cost per day at an optimal risk factor for that group of machines. Figure 3 shows the cost function. Here the average cost reaches a minimum (indicated by a small square) before increasing risk causes the predicted cost to rise; failure causes a step rise in cost. The expected time between replacements decreases from 3081 days to 1765 days, but there would be an expected saving in incurred costs of £14 per day, or 37%. This estimated saving may be altered by the real decisions made.
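The shape of such a cost function can be illustrated with a classical age-based replacement calculation: the expected cost per day of replacing at age T combines the preventive and failure costs from the case study with an assumed life distribution. The Weibull parameters below are illustrative only, and EXAKT optimises on the PHM risk level rather than on age alone:

```python
import numpy as np

def cost_per_day(T, beta, eta, cost_preventive, cost_failure):
    """Expected cost per day for an age-based replacement at age T,
    assuming a Weibull life distribution:
        C(T) = [Cf*(1 - R(T)) + Cp*R(T)] / E[min(T, life)].
    Replace too early and preventive costs dominate; too late and
    failure costs dominate."""
    survival = lambda t: np.exp(-(t / eta) ** beta)
    expected_cost = cost_failure * (1 - survival(T)) + cost_preventive * survival(T)
    ts = np.linspace(0.0, T, 2000)
    expected_cycle = np.trapz(survival(ts), ts)    # E[min(T, life)]
    return expected_cost / expected_cycle

# Costs from the case study; Weibull parameters are illustrative only
intervals = np.arange(200, 4001, 50)
costs = [cost_per_day(T, beta=2.1, eta=2500.0,
                      cost_preventive=20_000, cost_failure=120_000)
         for T in intervals]
best = intervals[int(np.argmin(costs))]
print(f"optimal replacement interval ~ {best} days at £{min(costs):.0f}/day")
```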

6.4 Justification of replacement policy

To ensure that the defined model is predicting the correct information, a manual analysis is made of the results obtained. This comprises:
• obtaining information from the model;
• analysing decision reports for each unit;
• building a report code matrix;
• cost analysis.
Analysis of the decision made at the end of each history can be used to validate the developed model. Figure 4 shows a decision graph, where the decision to replace is close. The composite covariate is calculated from a combination of two parameters. This value is plotted against bands which take into account the usual statistical variation of the composite covariate, as well as age. The present decision of this graph is 'don't replace'; however, the chart shows a number of high measured points, which is an indication of the likelihood of an event occurring soon. Note that the analysis has been applied to historical data; the decision to replace might have been made earlier had the model been running in real time.

Figure 4: Decision graph.
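In spirit, the decision graph compares the composite covariate against a threshold that falls with working age; a deliberately simplified sketch of that comparison (all parameter values hypothetical, and not the EXAKT decision algorithm) is:

```python
import math

def replace_decision(age, covariates, gammas, beta, eta, hazard_limit):
    """Simplified reading of the decision graph: compute the composite
    covariate (gamma . Z) and compare it with the covariate level at
    which the Weibull-PHM hazard would reach hazard_limit at this age.
    All parameter values are hypothetical."""
    composite = sum(g * z for g, z in zip(gammas, covariates))
    baseline = (beta / eta) * (age / eta) ** (beta - 1)
    threshold = math.log(hazard_limit / baseline)   # falls as the baseline rises with age
    return "replace" if composite >= threshold else "don't replace"

print(replace_decision(age=1200.0, covariates=[1.8, 3.2],
                       gammas=[0.6, 0.25], beta=2.1, eta=2500.0,
                       hazard_limit=0.002))
```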

6.5 Discussion of the case study

The results identified a number of features that need further consideration. The first was that the parameters presently being monitored were found to be not well related to the machines under investigation: only 2 of the original 8 parameters were used for the modelling process. The reason for this reduction is that the parameters are defined from commonly known failure modes, for example misalignment and bearing damage. A requirement of the system is that the database contains information regarding the type of event that resulted in the end of that particular history. If that type of failure mode has not been evident in the event history of the machine prior to the modelling process, its significance in the model would be greatly reduced.

7 Conclusions and further work

Data fusion is widely used by scientists of many disciplines, and there are many individual examples of successful application. There is a need, however, for a clear strategy with which to define and classify the problem and hence select fusion techniques. A picture is emerging of a flexible strategy which incorporates a number of steps. This needs to be refined by encapsulating expertise from a variety of sources, which defines the patterns and interconnections between solution steps, and matches the solution to the problem characteristics.
A necessary prerequisite for an engineering solution is a full statement of the problem: its definition and classification. This enables a specification to be drawn up and a solution devised. Problem definition is an area of uncertainty in condition monitoring.
Maintenance decisions are seldom based upon the change in a single parameter, but upon analysis of the relationships between different parameters. Traditional condition monitoring systems often rely on user experience during the analysis of these trends, but this can be a difficult task. EXAKT aids and automates the process of analysis. A case study has demonstrated the combination of parameters measured on a paper machine, where the model predicts a potential saving of 37% in maintenance cost by optimising the replacement interval. The model combines two vibration parameters and time. The choice of which parameters to use depends not only upon the successful identification of failure modes, but also upon the failures observed during the plant's working life.
Statistical models have been shown to be effective for the decision process within the level 1 data fusion framework. However, there are a number of areas where intelligent systems could be better used to aid the development of this type of system. These include problems with missing values in the initial data, novelty detection within parameter association, and diagnosis of root causes.

8 Acknowledgements

The authors would like to thank the following bodies for their support and collaboration, particularly the Faraday Intersect partnership: Engineering and Physical Sciences Research Council (EPSRC), Corus plc, DERA, Rolls-Royce plc, National Physical Laboratory, University of Warwick, University of Toronto, WM Engineering Ltd.

References:
[1] Waltz E L, Llinas J, 1990, Multisensor data fusion, Norwood MA, Artech.
[2] Starr A G, Ball A D, 2000, Systems integration in maintenance engineering, Proceedings of the Institution of Mechanical Engineers Part E - Journal of Process Mechanical Engineering, v214, pp79-95.
[3] Hannah P, Starr A G, Bryanston-Cross P, 2001, Condition monitoring and diagnostic engineering - a data fusion approach, Proc. 14th International Congress on Condition Monitoring and Diagnostic Engineering Management (COMADEM 2001), Manchester UK, ISBN 0-08-0440363, pp275-282.
[4] Grime S, Durrant-Whyte H F, 1994, Data fusion in decentralized sensor networks, Control Engineering Practice, Vol.2, No.5, pp849-863, Pergamon, Oxford, UK, ISSN 0967-0661.
[5] Stevens P W, et al, 1996, Multidisciplinary research approach to rotorcraft health and usage monitoring, Proc. Annual Forum, American Helicopter Society, Vol.2, pp1732-1751, American Helicopter Society, Alexandria, VA, USA, ISSN 0733-4249.
[6] Kennedy I, 1998, Holistic condition monitoring, Wolfson Maintenance, Proc. 1st Seminar on Advances in Maintenance Engineering, University of Manchester.
[7] Harris T J, Kirkham C, 1997, A hybrid neural network system for generic bearing fault detection, Proceedings of COMADEM 97, Vol.2, pp85-93, ISBN 951-38-4563-X.
[8] Smith G T, 1996, Condition monitoring of machine tools, Southampton Institute, in Handbook of Condition Monitoring, Rao (Ed.), 1st edition, pp171-207, ISBN 185617 234 1.
[9] Taylor O, MacIntyre J, 1998, Modified Kohonen network for data fusion and novelty detection within condition monitoring, Proceedings of EuroFusion98, pp145-155.
[10] Travé-Massuyès L, Milne R, 1997, Gas-turbine condition monitoring using qualitative model-based diagnosis, IEEE Expert, May/June 1997, pp22-31.
[11] Andersson C, Witfelt C, 2000, Advisor: a Prolog implementation of an automated neural network for diagnosis of rotating machinery, http://www.visualprolog.com/vip/articles/CarstenAnderson/advisor.html
[12] Hu W, Starr A, Zhou Z, Leung A, 2001, An intelligent integrated system scheme for machine tool diagnostics, Int. J. Advanced Manufacturing Technology, ISSN 0268-3768, v18, no11, pp836-841.
[13] Hu W, Starr A G, Leung A Y T, 2000, A systematic approach to integrated fault diagnosis of flexible manufacturing systems, Int. J. Machine Tools & Manufacture, ISSN 0890-6955, v40, pp1587-1602.
[14] Hu W, Starr A G, Leung A Y T, 2001, A multi-sensor based system for manufacturing process monitoring, Journal of Engineering Production, Institution of Mechanical Engineers Proceedings Part B, v215, pp1165-1175.
