Reliability Engineering and System Safety 65 (1999) 1–9

Team errors: definition and taxonomy

Kunihide Sasou a,*, James Reason b

a Human Factors Research Center, Central Research Institute of Electric Power Industry, 2-11-1 Iwato-kita, Komae-shi, Tokyo 201-8511, Japan
b Department of Psychology, University of Manchester, Oxford Road, Manchester M13 9PL, UK

Received 23 September 1997; revised 27 August 1998

Abstract

In error analysis and error management, the focus is usually on the individuals who have made errors. In large complex systems, however, most people work in teams or groups. Given this working environment, insufficient emphasis has been placed on "team errors". This paper proposes a definition and taxonomy of team errors and applies them to events that have occurred in the nuclear power, aviation and shipping industries. The paper also discusses the relations between team errors and performance shaping factors (PSFs). The proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication and resource/task management, excessive authority gradient and excessive professional courtesy cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors. © 1999 Elsevier Science Ltd.

Keywords: human errors; team errors; performance shaping factors; event analysis; nuclear industry; aviation industry; shipping industry

1. Introduction

Most human work is performed by teams rather than individuals. This is particularly true of complex technologies such as nuclear power generation, commercial aviation, chemical process plants and the like. Teamwork has many advantages, perhaps the most important being the provision of mutual aid. One member can help another who is busy, who is about to mishandle an operation, or who has made, or is about to make, a bad decision. Team members can also divide the work among themselves to promote efficiency and economy of effort.

However, while teamwork can detect and recover errors, it can also create them. A famous example is the US foreign-policy decisions analysed by Janis [1], now widely known as "groupthink". Janis's analysis shows that the members' interest shifted towards maintaining good human relations rather than finding the best solution to a given problem, so that they arrived at wrong decisions. The greater the group cohesiveness, the more pronounced this tendency becomes. This example makes it clear that, when considering errors in group processes, the focus should be put not only on how people made errors, whether they noticed their errors and why they failed to correct them, but also on how human relations caused errors.

Persuaded by this and other case studies involving teamwork, this paper studies human errors as team errors and proposes both a definition and a taxonomy of team errors. We also consider the relationships between the varieties of team errors and performance shaping factors (PSFs).

* Corresponding author. Fax: +81-3-3430-5579. E-mail address: [email protected] (K. Sasou)

2. Team errors

2.1. Definition and taxonomy

The authors regard "team error" as one form of "human error" as defined by Reason [2]. The difference is that "team error" considers how a group of people made human errors when working in a team or group. We can therefore define a team error as a human error made in a group process.

Reason [2] also categorized human errors into three types: mistakes, lapses and slips. Mistakes and lapses arise in the planning and thinking processes, whereas slips emerge primarily from the execution process. On the face of it, mistakes and lapses are more likely to be associated with group processes (e.g., Janis's groupthink [1]). Slips are errors in the actions of a single individual and are likely to be divorced from the activities of the team as a whole. Therefore, the concept of team error considered in this paper will be restricted to mistakes and lapses.

Fig. 1. Team error process

To discuss the taxonomy of team errors, we will give another example. In the crash of Air Florida Flight 90, a Boeing 737-200, on January 13, 1982 [3], the captain and the first officer discussed the snow and ice accumulation on the wings but nevertheless started the take-off. During the take-off roll, the first officer repeatedly commented that something was not right, but the captain did not react. As a result, they continued the take-off, which led to the crash.

This example suggests that two points need to be considered. The first is the error-making process. In the case of Air Florida 90, both officers appear to have shared the inappropriate decision to start the take-off in spite of the accumulated ice and snow on the wings. This is an example of an error made by a group of people. A team's decision, however, is not always shared by all members: the opinion or idea of one member can become the team's decision even though other members hold different views. The second point is the error-recovery process. The first officer's repeated comments suggest that he thought they should abort the take-off, yet the captain did not react. This means the team had a chance to correct its inappropriate decision. It is therefore important to examine the nature of error recovery.

The authors accordingly worked out the taxonomy of team errors shown in Fig. 1, which derives from two earlier studies by the present authors [4,5]. The taxonomy incorporates both the different types of errors (four types) and the various kinds of recovery failures (three types). These are discussed below.

2.2. The error-making process

1. Individual errors
Individual errors are errors made by individuals; that is, a single person makes an error without the participation of any other team member. Individual errors may be further sub-divided into independent errors and dependent errors. Independent errors occur when all the information available to the perpetrator is essentially correct. In dependent errors, some part of this information is inappropriate, absent or incorrect, so that the person makes an error unsuited to the situation.

2. Shared errors
Shared errors are errors shared by some or all of the team members, regardless of whether or not they were in direct communication. Like individual errors, shared errors may be sub-divided into two categories: independent and dependent.

2.3. The error-recovery process

The error-recovery process may fail at any one of three stages: detection, indication and correction.

1. Failure to detect
The first step in recovering errors is to detect their occurrence. If the remainder of the team do not notice an error, they will have no chance to correct it, and actions based on that error will be executed.

2. Failure to indicate
Once an error is detected, its recovery depends on whether team members bring it to the attention of the remainder. This is the second barrier to team error-making. An error that is detected but not indicated will not necessarily be recovered, and actions based on it are likely to be executed.

3. Failure to correct
The last barrier is the actual correction of errors. Even if the remainder of the team notice and indicate the errors, the people who made them may not change their minds. If they do not correct the errors, the actions based on those errors will go unchecked.
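To make the two-axis structure concrete, the taxonomy can be sketched as a pair of enumerations whose cross product yields the twelve categories of Fig. 1. This encoding is ours rather than the paper's; all names are illustrative only.

```python
from enum import Enum

class ErrorType(Enum):
    """The error-making axis of the taxonomy."""
    INDEPENDENT_INDIVIDUAL = "independent individual error"
    DEPENDENT_INDIVIDUAL = "dependent individual error"
    INDEPENDENT_SHARED = "independent shared error"
    DEPENDENT_SHARED = "dependent shared error"

class RecoveryFailure(Enum):
    """The error-recovery axis: the first barrier at which recovery broke down."""
    DETECT = "failure to detect"
    INDICATE = "failure to indicate"
    CORRECT = "failure to correct"

# The cross product of the two axes gives the 12 team-error categories.
CATEGORIES = [(e, f) for e in ErrorType for f in RecoveryFailure]
assert len(CATEGORIES) == 12
```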

3. Application to events in three industries

3.1. Event analysis

The definition and taxonomy of team errors set out above were applied to events in the nuclear, aviation and shipping industries. The survey covered 21 events in the nuclear industry, 21 in the aviation industry and 25 in the shipping industry. The data sources were human factors investigation reports issued by INEL (Idaho National Engineering Laboratory), an EPRI (Electric Power Research Institute) report, an aircraft accident report issued by the Department of Transport in the UK, incident reports issued by the Department of Transport in Australia, books discussing air accidents, and the like. Among these events, 28 team errors were identified in 13 nuclear industry events, 8 in 7 aviation industry events and 9 in 7 shipping industry events.

In classifying these team errors, the authors faced two difficulties. The first was judging whether an error was an individual error or a shared error. Transcriptions of cockpit communication are very useful in this respect, but such transcriptions are usually available only in aviation accident reports. Some reports in the other domains described the group decision-making processes because significant errors had occurred there, but most did not provide a detailed account. The second difficulty was judging whether the remainder of the team had noticed the errors. Most accident reports focused on the human errors themselves and did not discuss whether the rest of the team intervened. As a consequence, our assignment to the taxonomic categories was based on the event reports and on discussion between the authors. Some examples are given below.

Example 1: a dependent shared error with failure to detect
At Oyster Creek Power Station on May 2, 1979 [6], a triple low level alarm went off which was inconsistent with the indicator in the control room. Based on their belief that two discharge valves were in service (in fact, all discharge valves had inadvertently been closed: this was a prior error), the operators thought that the alarm was a false one (a dependent shared error). The indicator connected to the triple low alarm was far away from the control room, and the crew failed to recognise their error for a period of some 30 min (a failure to detect).

Example 2: an independent shared error with failure to correct
The captain and the first officer of Air Florida Flight 90, a Boeing 737-200, on January 13, 1982, discussed the snow and ice accumulation on the wings and reached the shared decision to start the take-off (an independent shared error). During the take-off roll, the first officer repeatedly commented about the wings, but they did not abort the take-off (a failure to correct).

Example 3: an independent individual error with failure to correct
During discussion of an impending check flight, the training captain in command of a Fairchild aircraft on September 16, 1995 [7] informed the junior pilot that he intended to make a "V1 engine cut" on take-off (an independent individual error). The pilot under review responded that it was illegal to conduct V1 cuts at night. The pilot in command replied that such a manoeuvre was not illegal and that the company operations manual was being amended to reflect this. As a result, they reached the conclusion to conduct the V1 cut (a failure to correct). Just after take-off, the aircraft struck a tree approximately 347 m beyond, and 212 m to the left of, the upwind end of the runway.

3.2. Discussion

Table 1 shows the results of the classification of these team errors. Several biases can be seen: some categories have no examples or only one example each, individual errors occur more frequently than shared errors, and failures to detect are more frequent than failures to indicate and to correct. Possible reasons for these biases are as follows.

First, team errors categorized as shared errors with failures to indicate or correct may simply be rare occurrences. This does not mean that such errors are impossible: shared errors are possible, and so are failures to indicate or correct. It is therefore thought that shared errors with failures to indicate or correct are possible but relatively infrequent.

Second, as described above, the information sources for the analysis are accident reports, books, and the like. Some of them include transcriptions of communication among the people concerned.

Table 1
Team errors in three industrial domains

                                    Failures to
                                    detect    indicate    correct    Subtotal
Independent individual errors         14          4           3         21
Dependent individual errors            3          1           0          4
Independent shared errors              9          1           2         12
Dependent shared errors                6          0           0          6
Subtotal                              32          6           5   Total: 43

Some reports discuss the mechanisms by which human errors occurred in the accidents. Such reports are useful in clarifying team errors, but they are not enough. Judging whether the remaining members of a team detected errors or not requires either transcripts of actual communications or interviews with the people concerned. Generally, accident reports emphasize the people who made the errors and the surrounding circumstances (i.e., working environment, deficiencies in education or training, fatigue, arousal level, etc.), with little or no information regarding the behaviour of the remaining members. This makes the determination of team errors highly subjective. It is possible, therefore, that the scarcity of relevant data caused the biases and imbalances shown in Table 1. Despite these difficulties, the analyses provided no firm grounds for abandoning either the definition or the taxonomy of team errors outlined above.
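Once events have been classified, the Table 1 matrix reduces to a simple cross-tabulation. The sketch below is ours rather than the authors'; it shows one way to tally classified team errors, using any hashable pair of labels such as the enumeration members sketched earlier.

```python
from collections import Counter

def tabulate(team_errors):
    """Cross-tabulate classified team errors, as in Table 1.

    team_errors: iterable of (error_type, recovery_failure) pairs, one
    per team error identified in the event reports; any hashable labels
    work, e.g. plain strings or the Enum members sketched earlier.
    """
    team_errors = list(team_errors)
    cells = Counter(team_errors)                     # one count per category
    by_type = Counter(e for e, _ in team_errors)     # row subtotals
    by_failure = Counter(f for _, f in team_errors)  # column subtotals
    return cells, by_type, by_failure, len(team_errors)

# e.g., the Air Florida 90 take-off decision (Example 2):
cells, by_type, by_failure, total = tabulate(
    [("independent shared error", "failure to correct")]
)
assert total == 1
```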

4. Team errors and PSFs

4.1. PSFs and their estimation

1. Performance shaping factors (PSFs)
The next question is why team errors are made. An error is usually the result of influencing factors called performance shaping factors. In what follows, we use the concept of PSFs to elucidate the situations in which team errors occur. Generally, two kinds of PSFs are distinguished: external PSFs and internal PSFs [8]. These two kinds are probably sufficient for discussing why individuals make human errors. However, as described above, most human work is performed by teams rather than individuals, particularly in complex technologies. In particular, when the remainder of a team fail to indicate or correct individual or shared errors despite having noticed them, the human relations within the team must have exerted an influence. We therefore identified three classes of PSFs, external, internal and team PSFs, as follows.

External PSFs. External PSFs include, for example, darkness, high temperature, excessive humidity, high work requirements, and the like. These factors are shared by people working within the same environment.

Internal PSFs. Internal PSFs include high stress, excessive fatigue, deficiencies in knowledge, skills and experience, etc. It has been suggested that internal PSFs are the results of external PSFs. Although internal PSFs are not necessarily independent of external PSFs, the adverse impact of an external PSF depends in part upon the individual: one person may feel a certain PSF severely while another may not. It therefore seems reasonable to consider internal PSFs separately from external PSFs.

Team PSFs. This paper defines team PSFs as factors arising from a group of people working together on a common project or task. They include lack of communication, inappropriate task allocation, excessive authority gradient (authority gradient: the difference in power or rank between two or more people; an excessive difference sometimes becomes a hindrance to frank discussion), over-trusting (inappropriate trust in other people's ability), etc. It could be argued that team PSFs are a subset of internal PSFs, but the purposes of this study are better served by treating them as separate categories.

2. How to find PSFs
Few accident reports provide adequate information regarding team PSFs. Identifying external and internal PSFs is also not easy, except in cases where they were discussed explicitly in the accident accounts. These PSFs were therefore largely inferred from the descriptions of the accidents.

3. How to discuss the relations between PSFs and team errors
To identify why teams make team errors, it is probably best to discuss PSFs in relation to the categories defined earlier. As described above, the data have biases, so that some categories are largely unrepresented. Accordingly, we will focus on the relations between PSFs and individual errors, shared errors, failures to detect, and failures to indicate and correct combined.
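The percentage distributions reported in Tables 2–6 are shares of all PSF observations within each group of team errors. A minimal sketch of that computation follows; it is ours, and the labels in the example are hypothetical.

```python
from collections import Counter

def psf_distribution(observations):
    """Percentage distribution of PSF observations, in the style of
    Tables 2-6.

    observations: list of PSF labels recorded across the analysed team
    errors.  Returns each label's share of all observations, rounded to
    whole percent (so a column may not sum to exactly 100, as in Table 2).
    """
    counts = Counter(observations)
    total = sum(counts.values())
    return {psf: round(100 * n / total) for psf, n in counts.items()}

# A toy run with hypothetical observations:
print(psf_distribution(
    ["deficiency in communication"] * 3 + ["excessive belief"] * 2
))  # {'deficiency in communication': 60, 'excessive belief': 40}
```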

Table 2
External PSFs observed in the shared and individual errors

External PSFs                             Shared errors (%)    Individual errors (%)
Seriousness                                      21                   24
Deficiency in human machine interface            20                    7
High workload                                    16                   19
Deficiency in procedures                          9                    8
Deficiency in training                            9                    7
High level activity                               6                    4
Routine task                                      6                    0
Regulation                                        3                    0
Time pressure                                     3                    4
Insufficient visibility                           3                    2
Others                                            4                   24
Total                                           100                  100

4.2. Shared errors and PSFs

1. Shared errors and external PSFs
Table 2 shows the external PSFs associated with the shared errors, alongside the external PSFs found in the individual errors. No major differences were found between the external PSFs provoking the shared and the individual errors. The table does suggest, however, that deficiencies in the human machine interface exert a larger influence upon shared errors. The false alarm at Prairie Island Nuclear Power Station Unit 1 on October 2, 1979 [6] is an example in which a deficiency in the human machine interface had a significant effect on the occurrence of a team error: the repeated false alarms deprived the crew of the chance to take the alarm seriously. The human machine interface is the only medium through which operators understand system behaviour, so failures in it affect the behaviour of operators. If failures occur in the medium through which operators share information, their influence spreads to all members, as with the repeated false alarms at Prairie Island Nuclear Power Plant. The authors therefore think that deficiencies in the human machine interface have a large influence on shared error-making.

2. Shared errors and internal PSFs
Table 3 shows the internal PSFs associated with the shared errors, alongside the internal PSFs found in the individual errors. Low situational awareness, low task awareness and excessive adherence (to one's own ideas, opinions, decisions, actions, etc.)/over-reliance (on indicators, warnings, etc.) are observed more frequently in the shared errors than in the individual errors. What is the basis for this association? Working together with others can produce social loafing [9], meaning that certain team members may not make their best efforts to achieve the common goal. However, the people discussed here are usually highly motivated, so it is probably more accurate to speak of non-fulfilment of responsibilities rather than social loafing. That is, working with others appears to produce an atmosphere in which people can lose sight of their own responsibilities, so that they do not make the necessary observations or share fully in collective decisions or actions.

Table 3
Internal PSFs observed in the shared and individual errors

Internal PSFs                             Shared errors (%)    Individual errors (%)
Deficiency in knowledge/experience               22                   17
High arousal                                     21                   22
Low situational awareness                        20                    6
Low task awareness                               16                    8
Excessive adherence/over-reliance                13                    4
Inadequate attitude                               3                   18
Low confidence                                    3                    6
Others                                            2                   19
Total                                           100                  100

3. Shared errors and team PSFs
Table 4 shows the PSFs observed in the shared errors. Earlier, we defined shared errors as errors shared by some or all members, regardless of whether or not they were in direct communication. We therefore expected the influence of team PSFs on shared errors to be very small. However, as shown in Table 4, two team PSFs, deficiency in communication and excessive belief (in other people's ideas, opinions, decisions, actions, etc.), have percentages comparable to those of some external and internal PSFs. Considering the definition of shared errors, we do not believe that these two team PSFs affect the shared error-making itself; rather, they seem to shape the atmosphere in which people make shared errors.

4.3. Failures to detect and PSFs

Table 5 lists the PSFs observed around the remainder of teams who failed to detect errors. In most cases, the remainder of the team were in the same situation as the people who made the errors. The analysis found some external and internal PSFs that were also found in the shared and individual errors. On the other hand, the table shows a large increase in team PSFs: the team PSFs combined account for more than 50% of all the PSFs found in failures to detect. The most common team PSF is deficiency in communication. If people do not obtain enough information through communication, they have no chance to detect individual or shared errors. They probably knew that communication was important.

Table 4
Team PSFs in the shared errors

Team PSFs (%): Deficiency in communication 7; Excessive belief 6; Excessive professional courtesy 4; Excessive authority gradient 2; Friendship 2; Deficiency in resource/task management 2; Organizational factors 1. Subtotal 24.

External PSFs (%): Seriousness 9; Deficiency in human machine interface 7; High workload 6; Deficiency in procedures 4; Deficiency in training 4; High level activity 2; Routine task 2; Regulation 1; Time pressure 1; Insufficient visibility 1; Others 1. Subtotal 38.

Internal PSFs (%): High arousal 9; Deficiency in knowledge/experience 9; Low situational awareness 7; Low task awareness 6; Excessive adherence/over-reliance 4; Inadequate attitude 1; Low confidence 1; Others 1. Subtotal 38.

Total: 100%.

Then why did they not communicate enough to detect individual or shared errors? There are a number of possible reasons: their arousal levels may have been high because of the seriousness of the events they faced, they may have been too busy with operations or observations, or they may not have been aware of the importance of the tasks or situations. These are effects of external or internal PSFs. The following examples indicate the need to consider the influence of other, team PSFs.

On January 12, 1995, the master of the Australian flag tanker CONUS did not discuss any departure plan with the pilot, in spite of the recommendation in the International Chamber of Shipping's Bridge Procedures Guide, because the master had confidence in the pilot's ability and experience [10]. Here the master's hesitation seems to have stemmed from excessive professional courtesy (professional courtesy: a kind of hesitation to hold a frank discussion or to give one's own opinion; unlike authority gradient, it arises between people of equivalent rank. The term is taken from the AAIB report [11]). In another accident, on January 19, 1994, the master of the tanker OSCO STAR over-trusted the ability of the second mate, and the mate had an air of confidence in his job [12]. As a result, the master did not supervise the mate's work, and the second mate caused oil to be sprayed into the air. In this example, over-trust and an air of confidence seem to be the two crucial factors. Although some external and internal PSFs cause deficiencies in communication, excessive professional courtesy, over-trusting and an air of confidence also limit the ability to communicate properly.

Team PSFs that hinder the remainder of a team from detecting errors include not only deficiencies in communication, excessive professional courtesy, over-trusting and an air of confidence, but also excessive belief. In the event at Oyster Creek Nuclear Power Plant on May 2, 1979, the operators believed that two of four valves were closed although all four valves had been mistakenly closed. Based on this incorrect belief, they executed actions that were inadequate for the plant's actual state, though consistent with their belief. The excessive belief that two valves remained open caused both the failure to detect and the incorrect response to the plant.

Inadequate resources and deficient task management not only create errors; they also lead to detection failures. If an error is seen by a person who does not have enough task knowledge to appreciate that an error has been made, the error will not be detected. Excessive authority gradients were often coupled with task criticality and high arousal: if the decision maker is a senior member of the team and people are very tense, they may accept the senior's decision without evaluating its appropriateness.

Table 5
External, internal and team PSFs observed in team errors with failure to detect

External PSFs (%): Seriousness 8; High workload 5; Distance 4; Duty hours 4; Deficiency in training 1. Subtotal 22.

Internal PSFs (%): High arousal 11; Low task awareness 8; Deficiency in knowledge/experience 4. Subtotal 23.

Team PSFs (%): Deficiency in communication 14; Excessive belief 9; Deficiency in resource/task management 9; Excessive authority gradient 6; Excessive professional courtesy 5; Over-trusting 5; Air of confidence 4; Friendship 2; Organizational factors 1. Subtotal 55.

Total: 100%.

Table 6
External, internal and team PSFs observed in team errors with failures to indicate/correct

External PSFs (%): Seriousness 15; Distance 5; Deficiency in training 5; Time pressure 5; High workload 2; Deficiency in procedures 2. Subtotal 34.

Internal PSFs (%): High arousal 9; Low confidence 2. Subtotal 11.

Team PSFs (%): Excessive authority gradient 24; Excessive professional courtesy 9; Deficiency in communication 9; Deficiency in resource/task management 7; Excessive belief 2; Air of confidence 2; Antipathy 2. Subtotal 55.

Total: 100%.

We conclude that deficiencies in communication are the most crucial factor contributing to failures to detect, and that these often arise from excessive belief, excessive professional courtesy, over-trust, etc. Deficiencies in resource/task management and excessive authority gradients also severely impair effective error detection.

4.4. Failures to indicate/correct and PSFs

Table 6 lists the PSFs observed around the remainder of teams who detected errors but failed to indicate or correct them. The table shows no major differences in external PSFs, but important differences in internal PSFs: low task awareness and low situational awareness disappear in the failures to indicate/correct, while the ratio of high arousal increases. This suggests that arousal levels and low confidence make significant contributions to failures in the indication and correction of errors. As with detection failures, the combined team PSFs exceed half of all PSFs observed in the failures to indicate/correct.

Excessive authority gradient seems to be the most dominant factor. Problems caused by excessive authority gradient are discussed in many studies, such as those associated with Cockpit Resource Management [13]. However, the factor that makes people hesitate to indicate or correct is not always excessive authority gradient; another is excessive professional courtesy. For example, in the crash of Air Ontario Flight 1363 at Dryden on March 10, 1989 [14], two off-duty captains in the cabin noticed the build-up of ice on the wings but failed to indicate it to the cockpit crew prior to take-off. Other PSFs, such as the distance between the crew and the off-duty captains, were present in this situation. However, if the off-duty captains had overridden their excessive professional courtesy, they could have overcome the other obstacles and indicated the problem.

Deficiency in communication also seems important. Examples of this factor were found in the Dryden crash of Air Ontario, the crash of British Midland near Kegworth on January 8, 1989 [11] and the grounding of the Australian flag bulk carrier RIVER TORRENS on May 31, 1995 [15].

In these cases, people who were in a position to recover the errors noticed them but had no intention of reporting them; as a result, the aircraft crashed or the ship grounded. Why did they hesitate to indicate the errors they had noticed? Possible causes of the hesitation are excessive authority gradient and excessive professional courtesy: they may have felt an excessive authority gradient or excessive professional courtesy in their relations with the error makers, and so hesitated to communicate with them.

Deficiencies in resources or task management are also important. One example was observed in the British Midland crash. The pilot judged the No. 2 engine to have failed on the basis of his knowledge of the air conditioning system. In fact, it was the No. 1 engine that had the problem, and some cabin attendants saw that it was on fire. The pilot did not use the crew resource in the cabin as an additional information source to confirm his judgment, and shut down the No. 2 engine on the basis of his wrong judgment. As a result, the aircraft lost power and crashed. If the cockpit crew had asked whether the cabin crew had seen anything on the engines, they could have identified the problem correctly and avoided shutting down the wrong engine. Deficiency in resource/task management is certainly one of the factors that cause errors.

4.5. Summary on team errors and PSFs

Fig. 2 summarizes the relations between team errors and PSFs. Shared errors are influenced by deficiencies in the human–machine interface, low task awareness, low situational awareness and excessive adherence or over-reliance. Failures to detect are influenced by deficiencies in communication and resource/task management, excessive authority gradient and excessive belief. Failures to indicate/correct are influenced by excessive authority gradient, excessive professional courtesy and deficiency in resource/task management.

Given the incomplete nature of the source material, it was not always possible to identify all the relevant factors. However, we believe that this preliminary analysis has revealed some interesting patterns, both with regard to the nature of the errors that occur in teams and to their recovery. We have also established some relationships between error types and performance shaping factors. That is, working together creates many problems, such as deficiencies in communication and resource/task management, excessive authority gradient and excessive belief.

Fig. 2. Team errors and PSFs

However, we believe that many of these problems have their origins in deficiencies of responsibility. Understanding what one's own responsibility is and what needs to be done may overcome obstacles such as excessive authority gradient, excessive professional courtesy, and the like.

5. Conclusion

First, this paper discussed the definition and taxonomy of team errors. Team errors are human errors made by individuals or groups of people working in a team context. The taxonomy has two axes: how an error occurs (the error-making process) and how the error fails to be recovered (the error-recovery process). The error-making process yields four types of error: independent individual errors, dependent individual errors, independent shared errors and dependent shared errors. The error-recovery process comprises three barriers: failure to detect, failure to indicate and failure to correct. This matrix generates 12 categories of team error.

Second, this paper applied these notions to events that occurred in the nuclear power, aviation and shipping industries. As a result, the definition and taxonomy were found to have some value in categorizing team errors.

Third, we analysed the relations between team errors and PSFs. The following associations were noted:

a. Shared errors are influenced by deficiencies in the human–machine interface, low task awareness, low situational awareness and excessive adherence/over-reliance.
b. Failures to detect are influenced by deficiencies in communication and resource/task management, excessive authority gradient and excessive belief.
c. Failures to indicate/correct are influenced by excessive authority gradient, excessive professional courtesy and deficiency in resource/task management.

This paper concluded that vague responsibility and non-fulfilment of responsibility seem to be crucial to team errors. Improving personal skills remains important for error prevention, but today's industrial plants are too large to be controlled by individuals: teams control power plants, aircraft, ships and the like, and this produces new problems. In the aviation industry, the focus of training has accordingly shifted from technical skill training to Crew Resource Management, which aims to improve the performance of crews in decision making, communication, leadership, stress/fatigue management and teamwork. It has been estimated that around 80% of aviation accidents are caused in part by deficiencies in crew performance [13]. It is thought that there are specific causes of team errors that will not be revealed by an exclusive emphasis on the errors of individuals. This paper has sought to elucidate some of these factors.

References

[1] Janis IL. Victims of groupthink: a psychological study of foreign-policy decisions and fiascoes. Houghton Mifflin, 1972.
[2] Reason J. Human error. Cambridge University Press, 1990.
[3] MacPherson M. The black box: cockpit voice recorder accounts of in-flight accidents. Granada Publishing, 1984.
[4] Reason J, et al. Errors in a team context. MOHAWC Belgirate Workshop, 1991.
[5] Sasou K, et al. A definition and modeling of team errors. In: Proceedings of the International Conference on Probabilistic Safety Assessment and Management, Crete, Greece, June 24–28, 1996.
[6] Pew RW, et al. Evaluation of proposed control room improvements through analysis of critical operator decisions. Final Report, EPRI, 1981.
[7] BASI. Interim Factual Report 9503057: Fairchild Aircraft SA227-AC (Metro III), VH-NJE, Tamworth, NSW, 16 September 1995. Bureau of Air Safety Investigation, Ministry of Transportation, Australia, 1996.
[8] Swain AD. Handbook of human reliability analysis with emphasis on nuclear power plant applications. NUREG/CR-1278, 1983.
[9] Latané B, et al. Many hands make light the work: the causes and consequences of social loafing. Journal of Personality and Social Psychology 1979;37(6):822–832.
[10] MIIU. Incidents at sea 75: joint departmental investigation into the grounding of the Australian flag tanker CONUS off Townsville, Queensland, 12 January 1995. Department of Transport, Australia, 1995.
[11] AAIB. Aircraft Accident Report 4/90: report on the accident to Boeing 737-400 G-OBME near Kegworth, Leicestershire on 8 January 1989. Air Accidents Investigation Branch, Department of Transport, UK, 1990.
[12] MIIU. Incidents at sea 63: departmental investigation into structural damage sustained by the tanker OSCO STAR at the port of Kwinana, WA, on 19 January 1994. Department of Transport, Australia, 1994.
[13] Jensen RS. Pilot judgment and crew resource management. Avebury Aviation, 1995.
[14] Johnson S. Pathogens in the snow: the crash of Flight 1363. In: Maurino DE, et al., editors. Beyond aviation human factors: safety in high technology systems. Avebury Aviation, 1995.
[15] MIIU. Incidents at sea 80: joint departmental investigation into the grounding of the Australian flag bulk carrier RIVER TORRENS while entering Newcastle Harbour, New South Wales, on 31 May 1995. Department of Transport, Australia, 1995.
