
PREPRINT

A DECISION SUPPORT SYSTEM FOR APPLYING FAILURE MODE AND EFFECTS ANALYSIS (FMEA).

Javier Puente (*), Raúl Pino (*), Paolo Priore (*), David de la Fuente (*)

E.T.S. Ingenieros Industriales de Gijón Organización de Empresas Dpto. Administración de Empresas y Contabilidad Campus de Viesques s/n 33204. Gijón e-mail: [email protected]

INFORMATION ABOUT THE FINAL PUBLISHED VERSION To cite this document: Javier Puente, Raúl Pino, Paolo Priore, David de la Fuente, (2002) "A decision support system for applying failure mode and effects analysis", International Journal of Quality & Reliability Management, Vol. 19 Issue: 2, pp.137-150

Permanent link to this document: https://doi.org/10.1108/02656710210413480


Abstract This study describes an alternative way of applying Failure Mode and Effects Analysis (FMEA) to a wide variety of problems. It presents a methodology based on a decision system supported by qualitative rules which provides a ranking of the risks of potential causes of production system failures. By providing an illustrative example, it highlights the advantages of this flexible system over the traditional FMEA model. Finally, a fuzzy decision model is proposed, which improves the initial decision system by introducing the element of uncertainty.

Keywords: FMEA, Decision Support System, Advanced Quality Tools, Risk Priority, Fuzzy Decision Support Systems.

Introduction
Failure Mode and Effects Analysis (FMEA) first emerged from studies done by NASA in 1963; it then spread to the car manufacturing industry, where it served to quantify and rank potential defects at the design stage of a product so that they were not passed on to the customer. The FMEA method is based on a session of systematic brainstorming aimed at uncovering the failures that might occur in a system or process (Clifton, 1990). It is important if high levels of reliability are to be achieved in production processes, even at the early design stage, and if costly, complicated corrective action at later production stages or during after-sales service is to be avoided; it involves bearing in mind both the clients' reliability criteria and potential flaws in the operation of the process (Teng and Ho, 1996).

FMEA divides into design FMEA and production FMEA. In either of these two stages, potential latent problems can be analysed, possible defects can be pinpointed before they are passed on to the customer, their effects on the overall system can be studied, and the right control decisions can be taken. Later modifications during the production phase, and the added costs these generate, can thus be avoided (Juran, 1989).

This study focuses on FMEA from a new perspective based on two decision support systems. The first allows classes, or value intervals, to be assigned not only to the Detection, Frequency and Severity indexes but also to the resulting risk priority of each cause of potential failure. The second extends the first by means of a fuzzy treatment of all the variables considered in the decision system.

This paper is structured as follows: first, the traditional FMEA methodology is reviewed; an alternative method for assigning criticality to potential failures, which overcomes some of the criticism aimed at the traditional model, is then proposed; an example of how the proposed system is applied follows; a fuzzy decision system that incorporates uncertainty into the FMEA model is then described, and comparative results of the traditional and proposed systems are provided, before the main conclusions of this study are drawn.

Review of the FMEA Procedure.
The FMEA procedure is very well documented in the literature on quality (MIL-HDBK-338-IA, 1988; MIL-STD-1629A, 1980; Chrysler Corporation, Ford Motor Company, General Motors Corporation, 1995; McDermott et al., 1996). It basically consists of two stages. During the first stage, possible failure modes of a product or process and their prejudicial effects are identified; this stage is related to the detailed design stage and includes the definition of potential failures in the components of the product, the sub-assembly, the final assembly and the manufacturing process. During the second stage, the engineering team that developed the FMEA determines the critical level (or risk score) of these failures and proceeds to rank them, reviewing each design detail and proposing the relevant modifications. The most critical failures head the ranking, and are therefore considered first during design review or during corrective actions taken to minimise the likelihood of their occurrence.

According to McDermott et al. (1996), FMEA methodology pursues a multitude of aims. It attempts to identify what possible failures may occur in the design and manufacture of products, and to pinpoint their source. It also aims not only to make sure that resources are available when necessary, but also to establish preventive and corrective action so that failure does not occur, as well as endowing projects, processes and production equipment with greater reliability. McDermott et al. (1996) also state that FMEA methodology analyses and evaluates with sufficient foresight the effectiveness of the actions undertaken and of the resources made available, besides educating staff in and familiarising them with team work, so that the work force itself foresees defects, detects the causes leading to them, proposes preventive action and evaluates the results.

Dale and Shaw (1990) provide a list of the advantages that accrue. Client specifications can be catered for; costs and launch times are cut (as redesign and modifications are avoided and many tests are eliminated); and product/process quality and reliability improve, which leads to greater safety and responsibility during the manufacturing process and to greater customer satisfaction. In short, the focus is on problem prevention rather than problem correction, and the number of customer complaints drops.

Any FMEA should start with a flow chart that clearly defines the activity or function (design or process) to be analysed. All the information on components in the design or on the functioning of the process must therefore be collected. Basic tools such as brainstorming sessions and cause-effect diagrams can then be used to determine, for each function analysed, the relationship between potential failure modes, their effects, and the causes leading to them. Figure 1 shows the headings of an FMEA report with the most usual sections to be filled in. In this report, the first columns tend to deal with the mode, effects and causes of failures related to a particular function.

[Figure 1 is a report template whose fields are: description, type and aim of the FMEA (descriptive information); reference number of the part and the part's function (2); potential failure mode (3); potential effect of failure (4); main cause of failure (5); existing conditions (6): present control mechanisms (7), Frequency (8), Severity (9), Detection (10) and Risk Priority Number (11); recommended state and action (12); person(s) in charge of rectifying the failure (13); and results (14): action taken and the re-evaluated Frequency, Severity, Detection and RPN.]

Figure 1: The fields of an FMEA report

Control action that already exists to detect or prevent each of the potential root causes of a failure mode (the existing Control Plan) should also be identified. To evaluate the criticality of a cause of a possible defect, the Risk Priority Number (RPN) for the failure is next calculated as the product of three indexes: the Detection Index “D”, reflecting the likelihood of a possible flaw not being detected, the Frequency Index “F”, reflecting the likelihood of the failure occurring, and the Severity Index “S”, reflecting how serious the failure is. Having obtained the RPN, a ranking of the causes of the failures is drawn up, and corrective action is taken by the relevant department beginning with the riskiest, as shown by the RPN. However, on occasions, the Frequency Index “F” or the Severity Index “S” may be very high, though the RPN itself fails to reflect this. Despite the failure of the RPN to show this up, when, for example, the probability of a potential failure cause occurring is small but it has catastrophic consequences, it should nevertheless remain in the top part of the ranking for analysis. Generally, control for this type of failure will involve 100% inspection (Teng and Ho, 1996).
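The RPN calculation and ranking just described can be sketched in a few lines (a minimal illustration; the failure-cause scores are the five-cause example used in Table I below):

```python
# Traditional FMEA: Risk Priority Number (RPN) = D * F * S,
# causes ranked in decreasing order of RPN.
causes = {            # (D, F, S) scores, each on the 1-10 scale
    "a1": (1, 4, 8),
    "a2": (5, 2, 4),
    "a3": (1, 7, 10),
    "b1": (7, 7, 1),
    "b2": (3, 7, 3),
}

def rpn(d, f, s):
    """Traditional FMEA risk priority number."""
    return d * f * s

ranking = sorted(causes, key=lambda c: rpn(*causes[c]), reverse=True)
print(ranking)  # -> ['a3', 'b2', 'b1', 'a2', 'a1']
```

The most critical cause heads the list, so corrective action would start with "a3" (RPN = 70) under the traditional method.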

RPN values should be recalculated after a certain time lapse to see if they have gone down, and to check the efficiency of the corrective action for each failure cause. All this data is recorded in the relevant columns of the FMEA report (see Figure 1).

Table I provides an example of an application of the traditional FMEA method for five causes of failure. The ranking is assigned according to the RPN associated with each cause of failure, in decreasing order. Each RPN is the product of the "D", "F" and "S" numerical indexes.

Potential      Potential       Detection  Frequency  Severity  RPN        Ranking
Failure Mode   Failure Cause                                   (tradit.)  (traditional)
A              a1              1          4          8         32         5th
A              a2              5          2          4         40         4th
A              a3              1          7          10        70         1st
B              b1              7          7          1         49         3rd
B              b2              3          7          3         63         2nd

Table I: An example of an FMEA application. Traditional calculation.

Tables II, III and IV show the usual values assigned to the above-mentioned "D", "F" and "S" indexes, as a function of the probability of occurrence and of non-detection of the failures considered. These values range between 1 and 10 (Ben-Daya and Raouf, 1996; Chang et al., 1999).

Ammerman (1998) proposes an order of priorities that must be followed when deciding on corrective action. This order of preference is generally the following: changes in product design, the service or the general process; changes in the manufacturing process; and an increase in control and inspection measures. Design is most effective at reducing risk indexes, mainly because of the repercussions it has on reducing the frequency of failures (the "F" index). In contrast, controls mainly affect detection (the "D" index). Thus, according to the above-mentioned source, the following order of priorities can be established for applying corrective action:

1. Eliminate the cause of the failure. The design of a part might be changed, for example, so that another piece that is similar and easily mistaken for it cannot be incorrectly assembled.
2. Reduce the frequency or likelihood of occurrence. Instead of trying to eliminate the root cause of the failure, the system is strengthened so that it can "resist". This is Taguchi's principle applied to parameter design (Roy, 2001).
3. Reduce the severity of the failure. This can only be achieved with failure-free design or by using redundant systems.
4. Increase the likelihood of detection, by increasing controls or by designing an improvement of the existing controls.

Reducing the frequency of a failure or defect is always a preventive measure, whereas increasing detection controls is a contingent action aimed at "limiting" already existing failures; the latter should only be seen as a temporary solution that allows time for truly preventive measures to be established to resolve the problem once and for all.

Category     Score   Likelihood of occurrence
Remote       1       0
Low          2       1/20000
Low          3       1/10000
Moderate     4       1/2000
Moderate     5       1/1000
Moderate     6       1/200
High         7       1/100
High         8       1/20
Very high    9       1/10
Very high    10      1/2

Table II. The Frequency Index "F"

Category     Score   Likelihood of non-detection (%)
Remote       1       0-5
Low          2       6-15
Low          3       16-25
Moderate     4       26-35
Moderate     5       36-45
Moderate     6       46-55
High         7       56-65
High         8       66-75
Very high    9       76-85
Very high    10      86-100

Table III. The Non-Detection Index "D"

Score   Effect on the customer
1       The customer will remain unaware of it
2-3     A minor problem
4-6     Dissatisfaction
7-8     High level of dissatisfaction
9-10    Serious consequences for safety

Table IV. The Severity Index "S"
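Assigning an "F" score from an observed failure probability can be expressed directly from Table II. A small sketch follows; how to treat probabilities that fall between two tabulated rows is our assumption (a value is given the score of the smallest tabulated likelihood not below it), since the table lists only one likelihood per score:

```python
# Map an observed likelihood of occurrence onto the 1-10 Frequency
# Index "F" using the tabulated values of Table II.
F_TABLE = [  # (score, likelihood of occurrence)
    (1, 0.0), (2, 1/20000), (3, 1/10000), (4, 1/2000), (5, 1/1000),
    (6, 1/200), (7, 1/100), (8, 1/20), (9, 1/10), (10, 1/2),
]

def frequency_index(p):
    """Return the F score (1-10) for a failure probability p."""
    for score, likelihood in F_TABLE:
        if p <= likelihood:
            return score
    return 10  # anything more frequent than 1/2 gets the top score

print(frequency_index(1/1000))  # -> 5
print(frequency_index(0.03))    # -> 8 (between 1/100 and 1/20)
```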

The work team is critical to the success of the FMEA process. The failures listed on the FMEA report should include those observed both externally, by the customer, and internally (by the company's production department, internal customers, etc.). Moreover, the information used in the FMEA should come not only from the production lines themselves and from customers, as has just been mentioned, but also from similar data about related products or processes (Teng and Ho, 1996). Numerous researchers, Gilchrist (1993), Ben-Daya and Raouf (1996) and Deng (1989) amongst them, criticise the rationale underlying traditional FMEA. Some of the criticisms are:

- RPN evaluation does not fulfil the usual measurement requirements.
- There is no precise algebraic rule for assigning a score to the Frequency Index "F" and the Detection Index "D", as traditional scoring is based on the probability of occurrence and the probability of non-detection of failures. Tables II and III above show how traditional FMEA employs five categories for each failure parameter, and how the score for each index can range over a ten-point scale of evaluation. This simplifies calculation, but converts probability into another scoring system. Calculating the product of the three indexes can thus cause problems, as the likelihood of non-detection and its corresponding score follow a linear law, whereas the relationship between the likelihood of occurrence and its score does not respect this linearity.
- Different scores for the "F" and "D" indexes can give the same RPN result, yet the risk involved may be completely different.
- In view of the above factors, there is no rationale in obtaining the RPN as a product of the "D", "F" and "S" indexes.
- The RPN does not take into account the quantity produced.
- The RPN does not cater for a possible weighting of the importance of the "D", "F" and "S" indexes.
- The RPN cannot measure the effectiveness of proposed corrective measures.

A Proposal for a new model to assign a risk category to potential failures.
An attempt will now be made to alleviate the problems surrounding the traditional methodology outlined above. When applying FMEA, the risk priority for the different causes of failure is reached as a function of the values of the "D", "F" and "S" indexes. It therefore seems appropriate to propose a decision support system based on qualitative rules that can assign risk evaluations, or categories, to each potential cause of failure. This is what this study now moves on to do. The proposed system assigns a risk priority class to each of the causes of failure in an FMEA, depending on the importance given to the three basic characteristics related to a failure mode, failure detection (D), failure frequency (F) and failure severity (S), by means of a knowledge base supported by a qualitative rule base.

The three indexes "D", "F" and "S" taken from the traditional methodology are the input variables of the decision system, so that the integer scores (between 1 and 10) assigned to these indexes also have a corresponding qualitative class. It was decided to maintain the traditional categories of importance (from very low, "VL", to very high, "VH"), as well as the scoring for the three indexes (between 1 and 10), as shown in Table V. The output variable of the decision system is the risk priority category (RPC) assigned to the cause of a failure. Here, the traditional domain of the Risk Priority Number (RPN), from 1 to 1000, was divided into nine class intervals so as to discriminate risks further, each class interval corresponding to a different RPC (very low, "VL"; between very low and low, "VL-L"; low, "L"; ...; very high, "VH"). The correspondence between the RPCs and their class intervals is shown in Table VI.

Category   D scores   F scores   S scores
VL         1          1          1
L          2, 3       2, 3       2, 3
M          4, 5, 6    4, 5, 6    4, 5, 6
H          7, 8       7, 8       7, 8
VH         9, 10      9, 10      9, 10

Table V: Categories for the "D", "F" and "S" indexes.

RPN (class interval)   Class score (class mark)   Category (RPC)
1-50                   25                         VL
50-100                 75                         VL-L
100-150                125                        L
150-250                200                        L-M
250-350                300                        M
350-450                400                        M-H
450-600                525                        H
600-800                700                        H-VH
800-1000               900                        VH

Table VI: Categories for the output variable of the decision system.
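The interval mapping of Table VI can be expressed directly in code (a small sketch; since the printed intervals share their endpoints, we assume each boundary value belongs to the lower interval):

```python
# Map a numeric risk value in the 1-1000 domain onto one of the nine
# risk priority categories (RPC) of Table VI. Boundary values
# (50, 100, ...) are assigned to the lower interval.
RPC_INTERVALS = [
    (50, "VL"), (100, "VL-L"), (150, "L"), (250, "L-M"), (350, "M"),
    (450, "M-H"), (600, "H"), (800, "H-VH"), (1000, "VH"),
]

def rpc_category(value):
    """Return the RPC label for a risk value between 1 and 1000."""
    for upper, label in RPC_INTERVALS:
        if value <= upper:
            return label
    raise ValueError("risk value outside the 1-1000 domain")

print(rpc_category(32))   # -> VL
print(rpc_category(535))  # -> H
```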

The structure of the decision system's rules is of the type "if ((D=VL) & (F=M) & (S=H)) then (RPC=H)"; this means that if non-detection is very low, frequency is moderate and severity is high for a cause of failure, then the risk priority category should be high. As each of the three input variables can take one of five categories or classes, there are as many as 125 rules available to assign an RPC to each of the causes of failure analysed in the FMEA. The rules are shown in Figure 2 in the form of a three-dimensional graph: each level (related to one of the five severity categories) shows the RPC matrix corresponding to the 25 possible combinations of the detection and frequency categories.

[Figure 2, not reproduced here, stacks five 5x5 matrices, one per severity category from "VL" (bottom) to "VH" (top); each matrix gives the RPC for the 25 combinations of the detection (D) and frequency (F) categories.]

Figure 2: Rule Base of the decision system proposed.

The procedure for constructing the rules is as follows. In the matrix that corresponds to the "VL" severity category (the bottom level), the lowest RPC, "VL", corresponds to the cell at the origin of the graph. Then, as one progresses along the main diagonal, the risk priority increases its category sequentially (thus, after "VL" come "VL-L", "L", "L-M" and finally "M"). Moreover, each matrix is symmetrical with respect to the main diagonal, and the risk priority tends to increase as the detection (D) and frequency (F) variables increase. The step up to the next severity level, "L", involves a matrix structure similar to that of the lower level, although the root cell is given an RPC identical to the severity evaluation of the level it is at (since the root cell for the matrix S="VL" is "VL", the root cell for the matrix S="L" is "L", and so on).
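The construction just described pins down the root cells, the bottom level's diagonal progression and the symmetry of each level, but not every off-diagonal cell, which in the paper is given by Figure 2. One simple completion consistent with those constraints (an illustrative assumption, not necessarily the authors' exact matrices) is RPC index = 2*S + min(D, F), with category indices 0-4 for the inputs and 0-8 for the output:

```python
# Illustrative reconstruction of the 125-rule base. Input categories
# (VL, L, M, H, VH) are indexed 0-4; output RPCs (VL ... VH) 0-8.
# rpc_rule reproduces the properties stated in the text: the root cell
# of severity level s is the category 2*s (VL, L, M, H, VH), the main
# diagonal of the bottom level increases one category per step, each
# matrix is symmetric, and risk never decreases as D or F grows.
# The cap at 8 collapses the highest severity levels towards VH;
# Figure 2's true off-diagonal values may differ from this sketch.
RPC_LABELS = ["VL", "VL-L", "L", "L-M", "M", "M-H", "H", "H-VH", "VH"]

def rpc_rule(d, f, s):
    """Return the RPC label for input category indices d, f, s (0-4)."""
    return RPC_LABELS[min(8, 2 * s + min(d, f))]

# The example rule from the text: D=VL, F=M, S=H  ->  RPC=H
print(rpc_rule(0, 2, 3))  # -> H
```

Note how severity dominates: the severity category alone fixes the root value of each matrix, which is how the weighting of "S" discussed below enters the rule base.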

The aim of this procedure for assigning RPCs is to alleviate some of the problems surrounding FMEA commented on in the previous section (Gilchrist, 1993; Ben-Daya and Raouf, 1996; Deng, 1989):

- Firstly, the RPN is not used as a measurement of the risk associated with a cause of failure; instead, an intuitive decision system based on risk priority classes or categories (RPC) is applied.
- Moreover, the definition of the categories of the variables involved in the FMEA respects the numeric correspondence assigned to the traditional model's "D", "F" and "S" indexes. However, the product of these indexes is not used to calculate the risk associated with a failure; instead, the RPC associated with a failure is obtained from a rule base containing expert knowledge on assigning risk to each combination of the "D", "F" and "S" categories.
- Furthermore, establishing the rule base allows any input variable seen as more important to be weighted (in our particular case the severity variable has been weighted as more important, as it can have a direct influence on the safety of handling the system being evaluated); in contrast, the traditional system assigns equal importance to all three indexes. The problem of different implications of risk for the same RPN is thus solved.
- The system being proposed is also not limited to a single possible model for a particular problem. Starting from the model, new rule bases are easily generated according to the problem being studied and to the analysts' experience with similar problems analysed in the past; one might, for example, weight not only the severity index but also the frequency index as more important (which would involve considering asymmetrical RPC matrixes with greater risk implications above the main diagonal). It would even be possible to consider a different number of categories for the variables of the model depending on the problem under analysis, or to consider new input variables for risk evaluation, such as cost or the quantity manufactured, although the method would become less intuitive when handling larger numbers of dimensions.
- Finally, the method gains in rationale with this proposal, and is easy to implement (a single spreadsheet is all that is required).

Illustrative example. Comparative analysis of results.
Table VII shows the solution to the same problem obtained by applying both the traditional methodology and the method proposed in this study. Table V gives the categories corresponding to the numeric values assigned to the "D", "F" and "S" indexes for each cause of failure. Once these have been obtained, the rule base (Figure 2) provides the risk priority category (RPC) assigned to each cause of failure. Finally, the ranking is obtained by listing these categories in descending order.

It can be seen that the rankings according to the traditional FMEA model (based on the product of the three indexes) and according to the model being proposed (based on the RPC that the decision system assigns) differ for every failure cause except "a3". This is because the new model gives a marked weighting to the severity index, whereas the RPN-based model simply calculates the product of the three indexes with no consideration of the importance assigned to any particular index.

Traditional calculation:

Detection  Frequency  Severity  RPN   Ranking (traditional)
1          4          8         32    5th
5          2          4         40    4th
1          7          10        70    1st
7          7          1         49    3rd
3          7          3         63    2nd

Calculation using the system being proposed (rule base):

Detection  Frequency  Severity   RPC   Ranking (System 1)
1 (VL)     4 (M)      8 (H)      H     2nd
5 (M)      2 (L)      4 (M)      M-H   3rd
1 (VL)     7 (H)      10 (VH)    VH    1st
7 (H)      7 (H)      1 (VL)     L-M   5th
3 (L)      7 (H)      3 (L)      M     4th

Table VII: A comparison of ranking using the traditional and proposed methods.

Furthermore, the flexibility of the system being proposed must also be remembered: other decision variables, such as the cost associated with a failure cause or the production quantity, can be incorporated; a higher or lower number of categories can be assigned both to the input and the output variables; the numeric range associated with the categories can be varied from one problem to another depending on expert knowledge; and the rule base can have a different structure for each problem, depending on whether it is of interest to weight one of the "D", "F" or "S" indexes.

Nevertheless, despite the claims made above in favour of this new model, it also has its drawbacks. Firstly, a discrete score between 1 and 10 must be chosen for the "D", "F" and "S" indexes, which reduces the continuity of the FMEA decision model. Moreover, the risk priority category (RPC) assigned may be the same for different causes of failure whose input variables fall into similar combinations of classes, which detracts from the risk priority ranking's capacity to discriminate.

We therefore propose a fuzzy decision system to limit some of the above-mentioned drawbacks. Fuzzy modelling (Zadeh, 1965) of the classes associated with the variables of the decision system would achieve greater continuity and the fuzzy decision system (Cox, 1994) itself would also optimise risk discrimination of different causes of failure, as the defuzzified output of such a system would enable a risk priorities ranking to be established that would be less ambiguous

for similar combinations of the "D", "F" and "S" indexes. However, before showing the results obtained with this proposal, brief consideration will be given to describing what it consists of.

FMEA as a Fuzzy Decision Support System.
Fuzzy decision support systems are based on the theory of fuzzy sets (Zadeh, 1965), and allow an uncertainty component to be incorporated into models, making them more effective in terms of approximating reality (Kaufmann and Gupta, 1991). Linguistic variables can be used to handle qualitative or quantitative information, so that its content can be labelled taking words from common or natural language as values. This contrasts with numeric variables, which can only take numbers as values (Driankov et al., 1996). All decision problems require a knowledge base provided by an expert who is able to explain how the system works through a set of linguistic rules involving the system's input and output variables; the system's variables, that is, the form and range of the labels for each variable, must therefore be defined in fuzzy form. Fuzzy decision support systems depend on this to model systems in a process with five stages: the fuzzification of the input variables, the application of fuzzy operators (AND/OR) to each rule's antecedent, the implication process from each rule's antecedent to its consequent, the aggregation of the consequents, and the defuzzification process (Cox, 1994).

A fuzzy decision support system was applied to the FMEA problem (MATLAB 5.3, 'Fuzzy' Toolbox v2.0). The Mamdani model (Mamdani and Gaines, 1981) was chosen, as it allows a continuous domain, in this particular case between 1 and 1000, to be evaluated for the output variable. The input fuzzy variables are detection, 'D', frequency, 'F', and severity, 'S', for each failure cause. The output fuzzy variable of the system is the fuzzy risk priority category (FRPC) associated with the failure cause. Figure 3 shows the labels and ranges defined by the expert for the variables of the FMEA system. Each fuzzy rule of the decision system relates the three input variables to the single output variable, maintaining a structure identical to the one proposed for the model in the previous section (125 rules); the only difference is that now each label is a fuzzy number. Defining the variables and rules of the new decision system fuzzily means that the model can process imprecise information related to evaluating both the "D", "F" and "S" indexes and the "RPC" variable. This brings the model closer to the reality of decision-taking when assigning risk priorities to causes of failure.

[Figure 3, not reproduced here, plots the membership functions defined by the expert: five labels (VL, L, M, H, VH) over the 1-10 domain for the D, F and S variables, and nine labels (VL, VL-L, L, L-M, M, M-H, H, H-VH, VH) over the 1-1000 domain for the FRPC variable, with membership grades between 0 and 1.]

Figure 3: Labels for D, F, S and FRPC variables

In order to make the fuzzy method being proposed consistent and highly compatible with the initial decision system, the classification error over the failure causes analysed must be minimised. A study was therefore carried out of the parameters of the fuzzy decision system, considering the classification errors of the fuzzy system for the 1000 cases corresponding to all the integer inputs in the 1-10 range for the "D", "F" and "S" input variables. Two types of error were analysed. The first (E1) is the "mean absolute percentage error", MAPE (Makridakis et al., 1982), which quantifies error according to the expression:

    E1 = MAPE = (1/N) * SUM_{i=1..N} |x_i - s_i| / [(x_i + s_i)/2]

where N represents the number of examples analysed (in this case, 1000); x_i is the numeric classification value according to the qualitative-interval rule-based system of the previous section (taking the class mark of the associated interval as the numeric value of the output label; see Table VI); and s_i is the numeric classification value returned by the fuzzy system being proposed.

The second error analysed (E2) is the classification error itself: a check is made as to whether the fuzzy system output coincides with the output label from the initial system based on qualitative rules and intervals. As the fuzzy system actually returns a number (the fuzzy risk priority number, FRPN), this number is assigned the label corresponding to the interval it belongs to, as shown in Table VI. Error is quantified as the proportion of "misclassifications".
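Both errors are straightforward to compute. A sketch follows, in which `system1`, `system2` and `to_label` are hypothetical placeholders for the interval-based model, the fuzzy model and the Table VI labelling function respectively:

```python
# Compute E1 (symmetric MAPE) and E2 (misclassification proportion)
# between two risk models over all 1000 integer (D, F, S) combinations.
def classification_errors(system1, system2, to_label):
    cases = [(d, f, s) for d in range(1, 11)
                       for f in range(1, 11)
                       for s in range(1, 11)]
    # E1: mean of |x_i - s_i| / ((x_i + s_i) / 2) over the 1000 cases.
    e1 = sum(abs(system1(*c) - system2(*c))
             / ((system1(*c) + system2(*c)) / 2)
             for c in cases) / len(cases)
    # E2: proportion of cases whose labels disagree.
    e2 = sum(to_label(system1(*c)) != to_label(system2(*c))
             for c in cases) / len(cases)
    return 100 * e1, 100 * e2  # both as percentages
```

Two identical models would, of course, yield E1 = E2 = 0; the study below searches the fuzzy system's parameters for the configuration that brings both errors as close to zero as possible.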

The results obtained when the 1000 input examples (all possible discrete combinations of the 'D', 'F' and 'S' indexes) are analysed for the different parameters of the fuzzy decision system are provided in Table VIII. In each cell, the first value is the MAPE (E1) and the second is the proportion of 'misclassified' cases (E2), both in %.

AND     Implication  Aggregation  Centroid    Bisector    MOM        LOM        SOM
MIN     MIN          MAX          12.05/33    7.40/8.9    2.38/0     9.7/0      8.85/0
MIN     MIN          SUM          14.03/38.4  11.16/24.7  5.78/0.2   9.83/0.2   9.48/0
MIN     MIN          PROBOR       14.30/38.3  11.63/25.4  5.44/0     5.44/0     9.17/0
PROD    PROD         MAX          9.47/24.7   4.72/2.4    3.24/0     4.45/0     2.81/0
PROD    PROD         SUM          10.07/25.7  5.92/7.9    3.93/1.2   5.07/1.8   3.33/1.2
PROD    PROD         PROBOR       10.36/25.4  6.00/8.3    3.85/1.2   4.93/1.02  3.33/1.2

Table VIII: MAPE and classification errors (%) for different fuzzy configuration parameters.

Calculation with System 1 (rule base) and System 2 (fuzzy inference process) for the five failure causes:

Detection  Frequency  Severity   RPC, System 1   FRPN, System 2   Ranking    Ranking
                                 (class mark)    (class)          System 1   System 2
1 (VL)     4 (M)      8 (H)      H (525)         535 (H)          2nd        2nd
5 (M)      2 (L)      4 (M)      M-H (400)       405 (M-H)        3rd        3rd
1 (VL)     7 (H)      10 (VH)    VH (900)        925 (VH)         1st        1st
7 (H)      7 (H)      1 (VL)     L-M (200)       205 (L-M)        5th        5th
3 (L)      7 (H)      3 (L)      M (300)         300 (M)          4th        4th

Table IX: A comparison of the results from Systems 1 and 2.

Given these results, the parameter structures that perform best (in terms of compatibility of the criticality classification) are those that minimise both error types. The set of parameters finally chosen for the decision system is the following: "MIN" for the "and" operator, "MIN" as the implication method, "MAX" as the aggregation method, and "MOM" (the mean of the maxima) as the defuzzification method.
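A Mamdani system with exactly these parameters (MIN "and", MIN implication, MAX aggregation, MOM defuzzification) can be sketched in plain Python. The triangular membership functions below, evenly spaced over the 1-10 and 1-1000 domains, and the rule base (the illustrative 2*s + min(d, f) completion sketched earlier) are our assumptions; Figure 3's exact shapes and Figure 2's exact matrices are not reproduced here, so the numbers produced will not match Table IX:

```python
# Minimal Mamdani-style fuzzy FMEA sketch: MIN "and", MIN implication,
# MAX aggregation, MOM (mean of maxima) defuzzification.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Five input labels (VL..VH) peaking evenly over 1..10,
# nine output labels (VL..VH) peaking evenly over 1..1000.
IN_PEAKS = [1 + 2.25 * i for i in range(5)]
OUT_PEAKS = [1 + (999 / 8) * i for i in range(9)]
W_IN, W_OUT = 2.25, 999 / 8

def in_mu(label, x):
    p = IN_PEAKS[label]
    return tri(x, p - W_IN, p, p + W_IN)

def out_mu(label, y):
    p = OUT_PEAKS[label]
    return tri(y, p - W_OUT, p, p + W_OUT)

def frpn(d, f, s, steps=2000):
    """Fuzzy risk priority number for crisp scores d, f, s in [1, 10]."""
    ys = [1 + (999 * k) / steps for k in range(steps + 1)]
    agg = [0.0] * len(ys)
    for di in range(5):
        for fi in range(5):
            for si in range(5):
                # Rule firing strength: MIN as the fuzzy AND operator.
                w = min(in_mu(di, d), in_mu(fi, f), in_mu(si, s))
                if w == 0.0:
                    continue
                out = min(8, 2 * si + min(di, fi))  # illustrative rules
                for k, y in enumerate(ys):
                    # MIN implication clipped consequent, MAX aggregation.
                    agg[k] = max(agg[k], min(w, out_mu(out, y)))
    peak = max(agg)
    maxima = [y for k, y in enumerate(ys) if agg[k] == peak]
    return sum(maxima) / len(maxima)  # MOM defuzzification
```

Crisp inputs that fall between two label peaks activate several rules at once, so nearby (D, F, S) triples receive distinct FRPN values, which is precisely the extra discriminating power the fuzzy system is introduced for.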

Table IX shows the comparative results of the initial decision system (System 1) and the fuzzy system (System 2) for the example of the five causes of failure used throughout this study. The results show the high level of risk classification compatibility between Systems 1 and 2, as the classification errors obtained were E1 = MAPE = 1.67% and E2 = 0%.

Conclusions.
This study highlights a number of drawbacks inherent to the traditional FMEA model, such as calculating the risk priority number (RPN) as the product of the scores given to the "D", "F" and "S" indexes. Given these drawbacks, a first proposal is to structure expert knowledge in the form of qualitative decision rules whereby a risk priority category can be assigned to each cause of failure. This effectively mitigates one of the main criticisms aimed at the traditional model, since the structure of the rule system being proposed allows considerable weighting of the severity index "S" associated with a cause of failure. The method being proposed is, moreover, flexible and easily implemented, making it a useful new tool for most risk classification problems. In addition to this proposal, a fuzzy decision system is proposed which increases the continuity of the FMEA decision model (by admitting continuous values for the 'D', 'F' and 'S' indexes) and which improves the discrimination of the risk of different causes of failure. In this latter model, a study of the parameters of the fuzzy system is carried out to optimise its structure so as to minimise the risk classification errors of the failure causes.

References

Ammerman, M. (1998), The Root Cause Analysis Handbook: A Simplified Approach to Identifying, Correcting, and Reporting Workplace Errors, Productivity Inc.

Ben-Daya, M. and Raouf, A. (1996), "A revised failure mode and effects analysis model", International Journal of Quality & Reliability Management, Vol. 13 No. 1, pp. 43-7.

Chang, C., Wei, C. and Lee, Y. (1999), "Failure mode and effects analysis using fuzzy method and grey theory", Kybernetes, Vol. 28 No. 9, pp. 1072-80.

Chrysler Corporation, Ford Motor Company, General Motors Corporation (1995), Potential Failure Mode and Effects Analysis, Reference Manual, 2nd ed., February.

Clifton, J.J. (1990), "Risk prediction", in Keller, A.Z. and Wilson, H.C. (Eds), Disaster Prevention, Planning and Limitation Unit, University of Bradford and the British Library.

Cox, E. (1994), The Fuzzy Systems Handbook, Academic Press, London.

Dale, B. and Shaw, P. (1990), "Failure mode and effects analysis in the UK motor industry: a state-of-the-art study", Quality and Reliability Engineering International, Vol. 6, pp. 179-88.

Deng, J. (1989), "Introduction to grey system theory", Journal of Grey System, Vol. 1 No. 1.

Driankov, D., Hellendoorn, H. and Reinfrank, M. (1996), An Introduction to Fuzzy Control, 2nd ed., Springer.

Gilchrist, W. (1993), "Modelling failure mode and effects analysis", International Journal of Quality & Reliability Management, Vol. 10 No. 5, pp. 16-23.

Juran, J.M. (1989), Quality Control Handbook, McGraw-Hill, New York.

Kaufmann, A. and Gupta, M. (1991), Introduction to Fuzzy Arithmetic: Theory and Applications, Van Nostrand Reinhold.

Makridakis, S., Anderson, A., Carbone, R., Fildes, R., Hibon, M., Lewandowski, R., Newton, J., Parzen, E. and Winkler, R. (1982), "The accuracy of extrapolation (time series) methods: results of a forecasting competition", Journal of Forecasting, Vol. 1, pp. 111-53.

Mamdani, E.H. and Gaines, B.R. (1981), Fuzzy Reasoning and its Applications, Academic Press, New York.

McDermott, R.E., Mikulak, R.J. and Beauregard, M.R. (1996), The Basics of FMEA, Productivity Inc.

MIL-HDBK-338-IA (1988), Military Handbook: Electronic Reliability Design Handbook, Department of Defense, Washington, DC.

MIL-STD-1629A (1980), Military Standard: Procedures for Performing a Failure Mode, Effects and Criticality Analysis, Department of Defense, Washington, DC.

Roy, R.K. (2001), Design of Experiments Using the Taguchi Approach: 16 Steps to Product and Process Improvement, John Wiley & Sons.

Teng, S. and Ho, S. (1996), "Failure mode and effects analysis: an integrated approach for product design and process control", International Journal of Quality & Reliability Management, Vol. 13 No. 1, pp. 8-26.

Zadeh, L.A. (1965), "Fuzzy sets", Information and Control, Vol. 8, pp. 338-53.