
Accident Causation Models, Management and the Law

Patrick Hudson
Department of Safety Science, Delft University of Technology
[email protected]

Abstract

To apportion blame, and by extension liability, for an accident it is necessary to decide causality: who caused the accident and how it was caused. The same requirements apply to the preventative management of such potential accidents, except that blame is assigned post hoc, after the event, whereas preventative management is essentially proactive and obviates the need for blame. Much thinking is based on the notion that there is a single root cause of an incident, the most important cause and therefore the one pointing at liability as well as determining the main target for prevention. This is embedded in the idea that incident causation is linear and deterministic, that there are clear sequences of causes going back to a root cause.

This way of thinking has proved very successful and its preventative application may be regarded as reducing the number of (potential) accidents by 80%. Most of these are personal accidents; the development and use of the Swiss Cheese model, aimed also at process incidents, has led to a further reduction of possibly 80% of the remaining potential incidents, now covering some 96% in total. Such models are still deterministic, but non-linear in their causal effects. The remaining 4% of possible incidents, especially complex and major process accidents, unfortunately appears to be much more intractable. The proposal is that these incidents have a causal structure that is both non-linear and non-deterministic, being inherently probabilistic. This has consequences for the management and prevention of such incidents, because of their complexity, but also for the legal approach, which has to confront non-deterministic and non-linear causation. The legal viewpoint is made more complex because, in hindsight, such incidents still appear to be simple, linear and deterministic.

1. Introduction

Preventing accidents and major incidents can, initially, be done quite effectively by identifying the immediate causes and implementing direct remedial measures. Experience has shown, however, that this piecemeal reactive approach becomes increasingly difficult. A theoretical understanding of how accidents are caused (e.g. Tripod – A principled basis for safer operations (Reason et al, 1988)) allows analysis of the nature of accident causes and how they operate. Armed with better underlying models of how accidents happen, providing the theoretical underpinning of safety management systems, safety performance has been found to move off its asymptotic values and continue to improve towards a new, and much lower, asymptote.

This paper represents an attempt to develop further the understanding of how accidents happen, to see if this knowledge can be applied. Undoubtedly the main reason we want to understand how accidents happen is because we wish to prevent future ones (Wagenaar & Hudson, 1987). The best way to do this is to have a solid theoretical basis that can be shown both to describe what actually happens and to serve the goal of helping define effective preventative measures; the categorisation of causal and contributing factors and the understanding of the mechanisms should be explicitly related to opportunities to prevent, remedy and improve. Armed with such an understanding, the legal system should be able to perform its roles of retribution – assigning blame and punishment – as well as deterrence – ensuring that appropriate preventative measures are actually taken – and reform – ensuring that corporate governance does not give in to the temptation to slide back to old, dangerous, habits.

The message developed here implies that to really understand how accidents happen we need to accept that they are often complex processes and that we cannot hope to prevent accidents with models that are too simple for the phenomena involved. The persistent failure to achieve the target of zero accidents in high hazard industries can be seen in this light: efforts based on simpler models are ultimately doomed to failure, even if those models have proven successful in attacking the simpler types of accident. When managers say "It's not rocket science", they express their belief that accidents are pretty simple events, so preventing them is pretty simple too. I believe that their contention about the science of safety, that it is not rocket science, is indeed true, but only because it has to be a lot more complicated than that. Why managers fail to understand that reality is explained later in terms of attribution bias and the asymmetrical effects of hindsight bias.

2. Models of Accident Causation

2.1 Acts of God(s) – out of our control

As a first approximation to any study of a complex subject, the first thing we do is to reach for a metaphor, and then see how far it takes us before we have to replace it with a new one. These metaphors are turned into models that should allow for empirical tests and, for scientists at least, rejection of the model in favour of an alternative explanation. Here I briefly examine the history of thinking about how accidents happen, how they are caused, starting from random Acts of God(s), malicious or uncaring, to complex and shifting interdependencies between many actors. I then look at why many people, often those in positions to effect change for the better, managers, and those charged with determining liability, lawyers, remain fixated on an early and overly simplistic model.
Bernstein (1996) reviewed much early thinking about how people saw accidents as the acts of gods and other spirits, capricious or otherwise, and how they originally saw accident prevention as best done by performing an appeasing ritual or sacrifice rather than really doing something about it. The first breakthrough started when we began to conceptualise what happens, relinquishing the need to refer to luck, good or bad, in favour of understanding. Not everybody, however, makes this intellectual breakthrough, and we can ourselves still fall foul of our heritage if we are not careful and aware, especially given our tendency to ascribe special causal powers to people, as opposed to animals, technology or natural conditions, which cannot choose what should happen next. This means that in complex combinations it is those containing people that we select for special attention, often called blame.

2.2 The Chain of Events – Linear and Deterministic

An accident can be seen as the unfortunate end of a sequence of events and conditions. With the benefit of hindsight, the final event first becomes obvious and then seems inevitable, and each step before that can be traced. This simple metaphor, the Chain of Events, is well expressed in Benjamin Franklin's aphorism:

"For the want of a nail, the shoe was lost; for the want of a shoe the horse was lost; and for the want of a horse the rider was lost, being overtaken and slain by the enemy, all for the want of care about a horseshoe nail." (Poor Richard's Almanack, ca 1750)

The earliest models simplified causal effects to a single chain of events, rather than an ever-increasing tree of binary or more combinations that we see later. One thing was clear from the start: people cause accidents by doing the wrong thing, whether at the start of the chain or at its end. They either set the chain in motion, possibly exacerbate it along the way, or fail at the last moment to stop a potential accident. This simple model was best expressed in Heinrich's Domino theory, with the metaphor of falling dominoes, each one bringing down the next as it fell until the final domino fell as the accident.

"The occurrence of an injury invariably results from a completed sequence of factors, the last one of these being the accident itself. The accident in turn is invariably caused or permitted directly by the unsafe act of a person and/or a mechanical or physical hazard." (Heinrich, 1931)

He also wrote (Heinrich, 1931), introducing the domino model:

"The occurrence of a preventable injury is the natural culmination of a series of events or circumstances, which invariably occur in a fixed and logical order. One is dependent on another and one follows because of another, thus constituting a sequence that may be compared with a row of dominoes placed on end and in such alignment in relation to one another that the fall of the first domino precipitates the fall of the entire row."

Heinrich specified five levels of analysis, in which he concentrated on the faults of humans, as he claimed that his data showed that 88% of accidents were caused by unsafe acts, with only 10% by unsafe conditions and 2% by unavoidable circumstances. These levels were:

- Ancestry and social environment – where and how a person was raised and educated
- Fault of person – from the social environment or acquired by ancestry
- Unsafe act/mechanical or physical hazard – caused by careless persons, poorly designed or improperly maintained equipment
- Accident – caused by an unsafe act or an unsafe condition
- Injury – the result of an accident

Now Heinrich never presumed that one's ancestry would inevitably lead to injury, and he recognised that best practice was a major route to accident prevention. His Iceberg model accepted that not all, or even most, accidents inevitably resulted in the next step, even if the reasons why this should be were kept vague, or were too hard at this level of analysis to be treated as anything more than chance. Nevertheless his general model is one that reads in a fairly straight line. In many ways it appears as if poor design, improper maintenance and the presence of other careless persons are simply to be accepted; our job is to avoid the problems they create.

The Iceberg quantified the Domino model, with figures like 1 major injury to 29 minor ones, to 300 unsafe acts and thousands of faults and unsafe conditions (Heinrich, 1931; Bird, 1966). This introduced a level of attenuation, so that not every domino had to lead inexorably to the next one falling. Many people have criticised this model because they keep getting different ratios, but it remains appealing precisely because of the idea that fixing small problems will solve big ones, which means propping up dominoes, usually the unsafe act one, by simply telling people 'not to do it'. It all seemed so very obvious and solved the problem of prevention, often by just telling people to "Look out" and "Be safe".1

1 This simple way of thinking was enshrined in the legal framework of the US National Transportation Safety Board (NTSB), requiring it to report the probable cause of an accident as a single fact, albeit supported by contributing factors. The NTSB charter under the Independent Safety Board Act of 1974 requires the NTSB to determine a single "probable cause", and after that its investigations are terminated.

Heinrich's models were essentially linear – simple and in a straight line. They appeal to people who abhor what they see as unnecessary complexity in simple matters, such as why people have accidents. A moment's consideration, however, allows the posing of a few questions: what about the other conditions that came together? And: if there are so many unsafe acts and conditions, why do we not have more accidents?

The first question still allows for an answer that tracks backwards from A was caused by B, that was caused by C, that was caused by … etc., except that every time we have an event or condition we will have a bifurcation (split or branch) starting up two or more pathways that can, and should, be tracked down in search of a 'real' or underlying cause. In itself the notion of causality used remains totally deterministic (every time you get B, A will necessarily follow) and we have only substituted single linearity with a branching one. The straight-line piece of string may turn out, on closer examination, to be a number of individual threads woven together to reach the accident at the end of the string. The A's, B's and C's are now the combinations where the pathways converge en route to an accident. This is still fully determined and proceeds along an (admittedly more complicated) straight line to the final conclusion.
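Returning for a moment to the Iceberg's attenuation, and to the question of why so many unsafe acts produce so few injuries, a deliberately over-simplified simulation makes the point. Everything in the sketch below is invented for illustration – the stage names and, in particular, the per-stage propagation probabilities are not Heinrich's or Bird's figures – but it shows how a chain in which each domino only sometimes topples the next one naturally produces pyramid-like ratios, with most initiating faults never reaching the injury end of the chain.

```python
import random

# Minimal sketch (illustrative only): a domino chain in which each stage
# propagates to the next only with some probability. The probabilities are
# invented; the point is the shape of the result, not the exact 1:29:300 ratios.
STAGES = ["unsafe act or condition", "near miss", "minor injury", "major injury"]
P_NEXT = [0.12, 0.10, 0.03]   # chance that each stage topples the next domino

def run_chain(rng: random.Random) -> int:
    """Return the index of the last stage reached by one initiating fault."""
    reached = 0
    for i, p in enumerate(P_NEXT):
        if rng.random() >= p:      # attenuation: most chains fizzle out here
            break
        reached = i + 1
    return reached

def main() -> None:
    rng = random.Random(42)
    trials = 100_000
    counts = [0] * len(STAGES)
    for _ in range(trials):
        counts[run_chain(rng)] += 1
    # Everything that reached stage i also passed through the earlier stages,
    # so report cumulative counts to mimic the familiar pyramid presentation.
    for i, name in enumerate(STAGES):
        print(f"{name:<25} {sum(counts[i:]):>7}")

if __name__ == "__main__":
    main()
```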
The second question is more problematic. It implies that even if a necessary condition is met, it may not be sufficient; it may be the case that an event or condition, on its own, requires quite specific combination with other conditions or events to become a significant cause of what happens next, as opposed to being an epiphenomenal2 cause. If this is true then we cannot truly state that one or the other is the main cause, so no one line has priority. In such a case the Domino Theory has to be replaced with a great many dominoes, falling together in sequence. There is an alternative interpretation, more consistent with the Iceberg model, that combinations may (or may not) become causes with a degree of probability – i.e. we step away from having definitive combinations as causes (A & B definitely ⇒ C moves to A & B might ⇒ C). Both interpretations are difficult for deterministic models, as it is the combinations that have to be taken as causes, since it is when A and B are in combination that C happens. This weakens the causal power of any one event or condition and implies that we cannot say that, in combination with other factors, an unsafe act will necessarily cause an accident, only that it might and in certain combinations possibly will. While the first interpretation creates problems for a simple deterministic view of how accidents are caused, the second interpretation weakens the requirement for linearity by introducing variable probabilistic notions, which may or may not require specific combinations.

Probabilistic versions of these models are just expressions of the A & B might ⇒ C logic, in that we may have a set of possible outcomes, some of which are accidents, but all of which can still be identified and followed. The probabilities remain fixed3, as relative frequencies, with extra conditions just being taken up earlier in the causal chain and appearing as conditional probabilities, so that the underlying model remains essentially linear (if complicated) and deterministic (with a probabilistic flavouring). In short, a moment's consideration of the simple linear (straight line) and deterministic (if A happens then B will happen next) models, implied by the Domino Theory and propagated in the Iceberg model, shows that neither way of thinking – and possibly not even the two together – captures exactly how accidents happen. Simply adding probabilities using contingent conditions is equivalent to adding further lines back in history. But the question that arises is whether such additions are sufficient to capture the complexity of causation.

Figure 1. The original Tripod model. The defences grew out to become the Swiss Cheese model, but this can conflate the underlying causes and unsafe acts as similar slices of cheese.

2 An epiphenomenon is a secondary symptom that may occur simultaneously with a disease etc. but is not regarded as its cause or result (Concise Oxford Dictionary).

3 We can call these point probabilities, represented as a single number, in contrast to probability distributions, which can capture the conditionality of the probability; the point probability would then be a single value such as the mean or median of the distribution.
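To make the contrast explicit, the move from "A & B definitely ⇒ C" to "A & B might ⇒ C", together with footnote 3's point-versus-distribution distinction, can be written schematically as follows. This is only a notational sketch; the symbols F and π are introduced here purely for illustration and do not come from any of the models cited.

```latex
% Deterministic combination (the Domino reading): whenever A and B occur together, C follows.
(A \wedge B) \;\Rightarrow\; C

% Probabilistic combination: the same pair only makes C more or less likely.
P(C \mid A \wedge B) \;=\; p, \qquad 0 < p < 1

% Footnote 3's distinction: a point probability versus a distribution over the
% probability itself, conditioned on other, higher-order factors F.
p \;=\; p_0 \ \text{(a single number)} \qquad \text{versus} \qquad p \;\sim\; \pi(p \mid F)
```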

2.3 Swiss Cheese – Latent conditions and non-linear thinking

I propose that this simplistic approach is probably adequate to capture 80% (using the Pareto principle) of all the potential or possible accidents. Analyses of real accidents, in contrast, show that they are multi-causal, made more or less likely by large numbers of events and conditions. Furthermore the sequences are not 'accidental' if only certain combinations are effective causes of the subsequent accident. The 80% can be taken to represent an adequate approximation to a more complex reality. Such analyses led to the realisation that Heinrich's unsafe acts were themselves the result of behavioural precursors such as haste, using error-prone designs and lack of knowledge (the Hows of causation), behind which could be identified Latent Failures and Conditions (the Whys of causation), accidents waiting to happen (Reason et al, 1988; Reason, 1990; Wagenaar et al, 1990). Haste, error-prone designs and lack of knowledge can be brought back to poor planning, inappropriate design standards and inadequate training programs, any or all of which may wreak their havoc at any time. Behind these latent failures or conditions lie fallible decisions made by people far distant from the accident in time and space, such as managers and regulators (the How did we allow this? of causation). One of the issues that the model developed at this time had to face was that the intermediate causes that could be identified were all too often negative, that is to say nothing happened: there was no procedure, the expectation was wrong, the antidote failed, the extra check was not performed.

The original model, called Tripod, had a number of defensive barriers after the unsafe acts, once the hazards were introduced, and before the incident (see Figure 1). If there were holes in these barriers then it would be possible for the hazard to have an impact – the accident.

Figure 2. The Swiss Cheese model in its most recent graphic form, representing the near-miss on take-off of a Delta 767 and a KLM 747 at Schiphol airport. The model in this representation implies a straight line of causation from the original airport structure to a controller giving a clearance.

The barriers started to be drawn as slices of cheese with holes, but it was also still clear that the holes at the end of the sequence were being put there by yet other latent issues, some long-term, some short-term. In this original model, which gradually became known as the Swiss Cheese model, the causal mechanisms by which latent failures or conditions create unsafe acts could be quite different from the causal mechanisms operating once the hazard was lined up and the unsafe acts ready to be carried out. This model, from fallible decision to accident, was still deterministic but certainly no longer linear, except possibly for the final short trajectory from hazard to accident. The proposal is that this extension, remaining deterministic but removing the requirement for linearity of causes and conditions, can capture 80% of the remaining potential accidents, thus covering 96% – a considerable improvement, but still not 100%. Unfortunately the causal links between early and late causal influences were often mislaid in favour of a conveniently simple picture that implied that the holes, and therefore the causes, were independent; this picture was nevertheless probably adequate in more than 95% of cases.
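The 'independent holes' reading can be made explicit. If a hazard has to pass n barriers and barrier i fails with probability p_i, independence gives the familiar product of probabilities; if a shared latent condition L degrades all the barriers together, the holes line up more often than that product suggests. This is a schematic statement under an explicit assumption (each p_i non-decreasing in L), not a result taken from the Tripod or Swiss Cheese literature itself.

```latex
% Independent-holes approximation: the top event needs every barrier to fail.
P(\text{accident}) \;\approx\; P(\text{hazard}) \prod_{i=1}^{n} p_i

% With a shared latent condition L that raises every failure probability together
% (each p_i(L) non-decreasing in L), the coupled case is at least as likely as the
% independence product computed from the average hole sizes:
P(\text{accident}) \;\approx\; P(\text{hazard})\, \mathbb{E}_L\!\left[\prod_{i=1}^{n} p_i(L)\right]
\;\ge\; P(\text{hazard}) \prod_{i=1}^{n} \mathbb{E}_L\!\left[\,p_i(L)\,\right]
```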
2.4 Billiard balls, Newton and Einstein

By way of analogy we can imagine the earlier linear and deterministic models by thinking in terms of billiard balls on a perfectly hard and flat green baize surface. Seen from above we can predict what will happen and even reconstitute previous situations from end states. We can call this a Newtonian universe: one that is linear, deterministic and gives us the equations we need to fire rockets accurately. But what if there are subtle influences at work, guiding balls slightly off course? This might happen if the surface of our billiard table actually turned out to be a sheet of tightly stretched rubber, covered in green baize, to which we could attach weights at some points and prop up small hillocks at other points – an Einsteinian relativistic universe in which a flat universe is now warped as a result of a range of influences. Seen from above the surface would still look perfectly smooth and flat, until we observe balls swerving into dips and away from hillocks; a more accurate picture requires understanding the three-dimensional structure of the table. Looking from above under uniform lighting conditions, however, we may still interpret the baize surface as two-dimensional. In this metaphor the world looks perfectly uniform as long as it is static, but shows a different structure once we operate within it. This world is certainly non-linear: balls may now curve, accelerate and decelerate, depending on which particular part of the table they find themselves on and the speed with which they are travelling. It is also still completely deterministic because, once we know the three-dimensional structure, we can again compute where the balls will go and how they interact. Working backwards is, however, even more difficult than in the flat version, but if we move enough balls around we can begin to identify the hills and valleys that make up the 3-D structure (ignoring the temporal dynamics), just as we can identify a planet orbiting round a distant star. These hillocks and depressions in the tabletop can be equated, mixing metaphors, with the latent conditions, the holes in the cheese.

The Swiss Cheese model, at least in its original and more complex form, represented a considerable improvement on the simpler models. Unfortunately it has become progressively dumbed down as more attention is given to what had originally been the last component, the final barriers, and less to why the holes were appearing in the cheese in the first place. The model we often see today has been reduced to a linear, as well as a deterministic, model. This simplicity may well still apply to the last moments, but it washes out what it was that got us there in the first place, the underlying conditions. Such a model loses all the non-linear subtlety, as the holes in the cheese should also be seen as dynamic, opening and closing and even moving as conditions change.

A more sophisticated version now places the slices of cheese in an organisational context, which is where the latent conditions are created. The issue for safety specialists is that this level, rather than the immediate events surrounding an incident, is where effective interventions can and should be implemented. The problem, however, is that we find it increasingly difficult to specify in advance exactly which accidents we are preventing,4 even though we know that this is where we ought to intervene. Intervention at the more immediate level is not only direct, and easy to justify in an immediate context, like removing a domino, but also often too restricted, because it generally targets specific combinations, such as a localized failure to follow a particular procedure in the presence of a unique hazard. The possible number of interventions is huge, most of them targeting combinations that may never occur again.

Figure 3. A probabilistic model from the CATS project (Ale et al, 2009).

4 Simple deterministic and linear models make it relatively easy to predict accidents, but typically only in the personal rather than the process area; non-linearity complicates this significantly.

2.5 Beyond Swiss Cheese – non-linear and non-deterministic models

One of the problems with the simple Swiss Cheese model, and with the underlying version that reflects the inherent non-linearity from organisational causes to effects, is that even this level of description misses common effects of higher order causes on lower order barriers. That is to say, poor management of incompatible goals has unspecified but real effects on all sorts of barriers in terms of their effectiveness at any one time, depending on what other goals are also claiming priority at the same time. Operating in physically difficult conditions, under time and financial pressures, with an organisational culture that accepts a degree of non-compliance and a regulatory regime that may be prone to give in to commercial pressures, stresses a system in ways where it becomes increasingly difficult to predict what will happen next. The original model would therefore have to be expanded to allow for holes to be altered by common organisational factors operating on separate slices. This could mean that common holes in different slices (in terms of the metaphor) might suddenly make traversing to an incident much easier. So, for instance, a failure to have up-to-date procedures can appear and have interaction effects in a great many apparently unrelated parts of an accident tree; the tree no longer simply expands, now branches start to come together again. The influence from these remote factors can also no longer be treated in any way except probabilistically, i.e. they will no longer be deterministically related, but will make conditions and sequences of actions more or less likely. Furthermore the probabilities would have to be represented as distributions, varying as a function of a large number of other higher-order factors.5 Cultural factors have far-reaching common effects on many levels of the organisation as well as on the immediate defences. My students and I have collected evidence in commercial aviation that the nature of the outcome, whether it is a disaster, a near miss or a minor incident that may not even be noticed, can be statistically predicted by factors well off the line of direct causality (Hudson, v.d. Graaf & Bryden, 2003; Jonker, 2000; v.d. Merwe, 2004; Hudson, 1994). So being the first or last flight of the day, flying an aircraft just out of maintenance or having a half-hour delay on push-back can all be found to make an outcome worse, even though there is no obvious causal pathway to connect them to that outcome. Furthermore we find that it is not the specifics of the particular issues, but rather the absolute number of problem areas, regardless of which ones, that predicts the outcome (Jonker, 2000).
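A toy simulation can illustrate both of these claims: that barrier strength is better treated as a distribution shaped by a common organisational factor, and that the sheer count of active problem areas, rather than their identity, predicts how bad the outcome is. Everything in the sketch below – the number of barriers, the Beta parameters, the mapping from organisational pressure to those parameters and the severity thresholds – is invented purely for illustration; it is not the CATS model and not the aviation data cited above.

```python
import random

# Toy sketch (illustrative only): each of N_BARRIERS defences fails with a
# probability drawn from a Beta distribution whose shape is degraded by a
# common organisational factor, and outcome severity is scored simply by how
# many defences failed on a given operation.
N_BARRIERS = 8

def barrier_failure_probs(org_pressure: float, rng: random.Random) -> list[float]:
    """Draw one failure probability per barrier.

    org_pressure in [0, 1] stands in for culture, goal conflicts and regulatory
    slack; higher pressure shifts every barrier's distribution towards failure
    at once, which is what couples otherwise 'independent' holes in the cheese.
    """
    alpha = 1.0 + 4.0 * org_pressure      # invented mapping, for illustration
    beta = 6.0 - 4.0 * org_pressure
    return [rng.betavariate(alpha, beta) for _ in range(N_BARRIERS)]

def one_operation(org_pressure: float, rng: random.Random) -> str:
    """Label one operation by how many barriers failed, not by which ones."""
    failed = sum(rng.random() < p for p in barrier_failure_probs(org_pressure, rng))
    if failed >= 6:
        return "disaster"
    if failed >= 4:
        return "near miss"
    if failed >= 2:
        return "minor incident"
    return "nothing noticed"

def main() -> None:
    rng = random.Random(7)
    for pressure in (0.1, 0.5, 0.9):
        tallies: dict[str, int] = {}
        for _ in range(20_000):
            outcome = one_operation(pressure, rng)
            tallies[outcome] = tallies.get(outcome, 0) + 1
        print(f"organisational pressure {pressure:.1f}: {tallies}")

if __name__ == "__main__":
    main()
```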
At this level of analysis we may have to accept that causal effects are not only non-linear, but also non-deterministic. The only approach left is to accept that the relationships between causes are inherently probabilistic and are themselves influenced probabilistically by higher order factors, as distributions. This reflects the increasing realisation that the organisational culture is a pervasive influence, but one that is impossible to specify deterministically, operating rather as a multiplication factor on lower orders. In one analysis using the bowtie methodology (Hudson, 2010), cultural and regulatory factors can be seen to influence organisational escalation factors, which themselves impact on the immediate and more deterministic causes of accidents.5 In the CATS program (Ale et al, 2009), modelling commercial air safety, it became necessary to use such an approach, with inherently distributed probabilities (Figure 3). These show sensitivity to small variations in some of the starting conditions, what is called chaotic behaviour, and hence may explain why the few accidents we have with the 4th generation of aircraft are all unexpected and 'weird' – Wildly Erratic Incidents Resulting in Disaster.6

5 This is more complicated than simple frequency-based probabilities, such as p = 0.005, computed on a single event such as a collision. By treating an event as a threshold related to an outcome – taking the whole range of collisions and near-collisions and setting a threshold for a 'significant' collision – we also allow for the possibility of multiple causes being brought together. The probability distribution captures the variability and uncertainty of outcomes, and can be computed by examining the range of causes. Uncertainty becomes a parameter of the distribution.

2.6 From Einstein to Schrödinger

The analysis here suggests that, for a full appreciation of accident causality, simple linear and deterministic models are inadequate to capture organisational and cultural factors. Because these higher order factors form the level at which interventions also have the best chance of succeeding, simple models are inadequate for effective management of safety, especially as the accident rate decreases towards zero. I propose that this extension, into naturally distributed probabilistic causation, should enable us to capture 80% of what was left over, the remaining 4%, getting us to 99.2%. This is still not zero, but it is getting a lot closer!7 To do so we have had to leave behind, as simplifications, both the Newtonian and the Einsteinian metaphors; moving to inherently distributed probabilities suggests we move to Quantum Physics as represented by Schrödinger.

Taking the quantum metaphor a little further (possibly to its limits and beyond), we can propose that an actual event, but especially one that we will later label as an accident, may be regarded as the collapse of the totality of the probabilistic wave function. This can be seen as opening the box on Schrödinger's cat which, up to that point, had been in the superposed position of being both dead and alive at the same time. Once the wave function has collapsed, looking backwards, each probability has become either true (1) or false (0), so the causes are clear and deterministic, and the line backwards, after the event, becomes equal to what would, in foresight, have only been approximated by a linear and deterministic description. Such approximations serve us well for large numbers of accidents, the 80%, and reasonably for all 96%, but become increasingly inaccurate as approximations when one attempts to capture the notion of causation in the limit. This restriction on approximation has considerable consequences for the idea of causation as envisaged by managers and the law.

The notion of an approximation helps us understand why ideas like Target Zero are so difficult to achieve, especially when those in a position to influence affairs stick to the old metaphors and their associated models and continue to believe that just trying even harder will achieve the final goal.
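The coverage figures used throughout this argument – 80%, 96%, 99.2%, and the 99.84% mentioned in footnote 7 – all follow from one simplifying assumption, namely that each successive generation of models captures roughly 80% of whatever the previous generation still missed. The compounding is worth writing out once (purely as arithmetic on that assumption, not as an empirical claim):

```latex
% Coverage after n successive 'capture 80% of what is left' steps:
C_n \;=\; 1 - 0.2^{\,n}
% n = 1:  C_1 = 0.80    (linear, deterministic chain models)
% n = 2:  C_2 = 0.96    (Swiss Cheese: non-linear but still deterministic)
% n = 3:  C_3 = 0.992   (non-linear, non-deterministic, distributed probabilities)
% n = 4:  C_4 = 0.9984  (whatever metaphor comes next, per footnote 7)
```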
The success of early approaches can now be understood in terms of the adequacy of simplifications of the real issues at stake. Linear and deterministic approaches represent approximations to the real situation that will be effective when performance is poor, because they will capture enough possibilities even if they also miss relevant information from time to time. As performance improves, unfortunately, these approximations become less and less appropriate as ways of capturing the remaining factors.

6 This acronym is thanks to Tim Hudson. Weird accident causation is partly related to chaotic behaviour due to sensitivity to initial conditions, but it also arises from summation and interactions between otherwise very small probabilities, which is probably what Hollnagel's Resonance model is actually about (c.f. Hollnagel et al, 2006).

7 The next step – to 99.84% – may either involve a different metaphor, such as protein folding, or string theory if one looks for the next physics metaphor. Both require the computation of specific context-sensitive information.

3.0 Rocket Science thinking – Attribution and Hindsight

So, why do people like lawyers and managers persist in sticking to the old 20th Century models? Why should they feel that it is not so difficult to achieve a target such as zero accidents? Partly it is because they continue to believe in the simple linear and deterministic models, even though those models become increasingly inaccurate representations of how future accidents happen as the accident rate reduces. But there are two specific mechanisms that help reinforce their beliefs about how accidents are caused and, accordingly, what can be done to prevent them. One reason is the attribution of blame to those immediately involved in incidents. Attribution means explaining behaviour either in terms of dispositional factors – the personal characteristics of individuals – or of external environmental factors.

3.1 Attribution error

The Fundamental Attribution Error (Jones & Harris, 1967; Ross, 1977) is where people attribute the behaviour of others to dispositional factors, whereas they attribute their own behaviour to external factors. Attribution bias refers to this tendency to attribute failures in others, such as in accidents, to personal weaknesses and failings, while at the same time attributing one's own failures to problems in the environment. An individual involved as the driver in a car crash, for instance, when asked why the incident occurred, will describe the causes with reference to external factors, such as the traffic density, low visibility, other drivers and so on. An outside observer, in contrast, will tend to feel that the person is just a bad driver. This is a reliable mechanism in people that ensures that they shift blame from themselves, while at the same time blaming others for personal failure (Campbell & Sedikides, 1999).

When it comes to explaining the roles of others in the causal pathway to an accident, the fundamental attribution error means that those not personally involved, in particular managers and supervisors, attribute the causes of the accident to personal failings in someone, frequently the victim, while at the same time persuading themselves that they personally would never have done the same.8 At the same time the external factors are actually more likely to be those under their control, so accepting an 'external' set of causes is likely to reflect badly upon management.
Simple, linear and deterministic models are attractive because they appeal to a sense of predictability that is then reinforced by hindsight.

8 This can be explained in terms of the self-serving bias. When explaining the failure of an event, individuals tend to deny responsibility for the outcomes of their actions (Bradley, 1978) in order to protect and maintain a high task-related self-esteem (Larson, 1977; Baron et al., 2007).

3.2 Hindsight

The second reason why managers stay with simple models is the effect of hindsight on their beliefs about how accidents are caused. Fischhoff (1975, 1986) described the problem of hindsight bias very clearly:

"Hindsight bias is the tendency to exaggerate in hindsight what one knew in foresight. The feeling that one knew all along what was going to happen leads one to be unduly harsh on past decisions (if it was obvious what was going to happen, then failure to select the best option must mean incompetence) and to be unduly optimistic about future decisions (by encouraging the feeling that things are generally well understood, even if they are not working out so well)."

How this operates is that individuals can generate a small number of scenarios about what could happen given a description of the situation prior to an event. Armed with the benefit of hindsight they now know one specific scenario, the one leading to the actual outcome. If they already start by knowing the outcome, the number of scenarios they can develop is usually smaller, often because knowing one for certain suppresses the invention of 'unrealistic' alternatives. Knowing a scenario makes it seem both linear and fully determined, because we can easily trace the sequence from initial to final state. Knowing the outcomes means that all the accidents one has heard of will most easily fit such a simple model, so that proposing non-linear and non-deterministic models is seen as unnecessarily complicated; armed with 20/20 hindsight, all the accidents they know about can be made to fit the simplest model.

3.3 Hindsight and Attribution together make Rocket Science obvious

In 2001 I wrote that:

"There is a very real disparity between the expectation of disaster before an event and the understanding of that event after it has actually happened. The considerable differences in perception of events mean that those who operate with hindsight, but unknowingly open to the biases that hindsight can bring, may see the events as inevitable and those who failed to understand this, in advance, as worthy of blame. Those who are confronted in real life with a myriad of possibilities, before the event, may act reasonably, by their own lights, and totally fail to predict what eventually happened. This disparity has serious consequences in areas such as policy, political life and the framing of the Law." (Hudson, 2001)

The combination of hindsight bias and attribution error weighs heavily on managers' post-incident beliefs, let alone lawyers' beliefs, that the individual involved 1) knew what was happening, 2) could see it coming, and 3) was personally incapable of proceeding properly or even actively sought out the hazard.
Many notions of blame rely upon the belief that individuals can be held accountable for their conscious choices of action, and that they could easily have predicted what would happen. Failures to take the predictable into account are seen to reflect deep failings in such individuals, who thereby form the direct and primary causes of disasters and who, as such, deserve to be punished. The reality, of course, is almost always that the person did not see it coming at all, was not well supported and was put into a situation in which the accident may, suddenly, have become inevitable while all involved are surprised. Under such circumstances people are actually far less able to predict what really happened in disasters than they themselves would like to believe (Groenewegen, 1990). Linear and deterministically caused incidents may well meet the requirements for predictability, but the remaining incidents will not be caused that way, and it is therefore a mistake to believe that those involved should have seen it coming just because the judge or manager, armed with the benefit of hindsight and operating with the self-serving bias (Bradley, 1978; Larson, 1977), can persuade themselves that they would never have been so thoughtless or reckless.

If, however, accident causation is really both non-linear and non-deterministic, and the most effective places to intervene are within the organisation, even if one cannot easily predict exactly which accidents are being prevented, then these beliefs and attitudes have to be reviewed. All too often both the courts and managers demand simplification and, as I have discussed above, attribution bias and hindsight favour the simplistic models of how accidents happen over the more complex reality. While the accident rate is high this is probably good enough, as linear deterministic approximations will still catch enough to demonstrate success. As implemented remedial measures prove effective in reducing the accident rate, however, the approximations become increasingly inaccurate, leading to managerial frustration and the all too frequent strengthening of the 'old' measures.

4.0 Discussion

Rocket science, like the law, has been well served in the past by complicated but inherently simple Newtonian physics, but safety management requires more than this for a proper understanding of the totality of the space of possibilities. A post-Newtonian metaphor is needed. This could be based on Einsteinian relativistic thinking, but that is still deterministic, and we have seen that the Swiss Cheese model breaks down when we consider causal influences that operate at a distance and simultaneously over different tranches of cheese. So the new metaphor could be based on some other physical theory, such as Quantum physics. Unlike Newtonian and Einsteinian physics, Quantum physics is inherently probabilistic, something I have just argued we may have to accept in any full understanding of how accidents are caused. This metaphor takes us beyond determinism to a world where events and conditions are inherently probabilistic, and where events and states acting at a distance may influence outcomes.
This may be what we need to understand how the aviation accidents were made more likely by 'non-causal' factors like fatigue and whether a flight was the first or the last of the day. Such a complex type of model may become increasingly necessary as we achieve, and then consequently demand, ever-improving levels of safety performance. What we have to sacrifice are the simple notions of causality implied by linear and deterministic thinking, treating them as approximations that work in most, but not all, cases.

The dominant model for understanding how accidents happen has always been one in which a sequence of events cause one another, with, at the end, a hazard being let loose on a victim. This Chain of Events model has provided a strong set of conceptions, such as Heinrich's Domino theory and the Iceberg, but as safety performance has improved those models have to be regarded as increasingly inadequate, serving only as rough approximations to a more adequate theory. More sophisticated approaches saw accidents as the final coming together of events that stretch off into the past, represented by event trees with combinations of two or more events or conditions. Nevertheless the models even this thinking implies are still deterministic, if non-linear. The problem I have identified is that many of those in a position to intervene and remedy, managers of high hazard industries and even regulators, still stick to the simplistic models.

One reason why people think like this is the result of the fundamental attribution error and the self-serving bias, which lay the blame at the feet of individuals when the reality is that individuals are caught up in a more complex web that leads to an accident. The other reason is hindsight, where the inevitability of causes producing consequences seems more and more obvious, even necessary. We may understand this thinking, that event A causes event B, in terms of a Newtonian deterministic world, but reality is more complex than that. In terms of the Quantum metaphor we may regard events as being more like opening the box containing Schrödinger's cat. If the cat is dead, it was an accident; if the cat is still alive, it was nothing or a near miss. Prior to the opening both states are true, a superposition. With a dead cat we can look back at what had been, in advance, a veritable sea of possibilities and observe them after the event, all collapsed into actualities or having vanished as non-occurrences. This fixation of possibilities, represented beforehand by distributions of future probabilities, is what can lead us into the mistaken belief that the process that led to where we are now was itself simple, linear and determined.

Another area in which the mixture discussed here occurs, with biased interpretations of who and what caused accidents, is the legal system. Whereas managers can at least intervene directly and proactively, and be exposed to the normal state of affairs as well as to incidents, the legal profession really only ever sees what happens after the event. Possessing 20/20 hindsight, together with belonging to a profession that exists to apportion blame and responsibility, means that the legal profession is almost doomed to believing in a view of accident causation that is attractive but wrong.
There are probably a number of other beliefs in circulation, supported by simplistic notions of how accidents happen and of the role of individuals, that are influenced by the products of hindsight and attribution bias, taken together with Lerner's Just World hypothesis (Lerner, 1980; Lerner & Simmons, 1966). The Just World represents a set of beliefs that the world is just, fair and ordered, so that bad things happen to bad people; if someone has an accident, people are likely to believe that they probably deserved it – blaming the victim. Taken all together, the legal system tends to believe that an accident could have been seen coming, and should have been prevented by those involved in it (hindsight bias); that the causes of the accident and the failures to act are due to internal or dispositional characteristics of individuals rather than the situation they find themselves in (attribution and self-serving biases); and finally that those involved must have been worthy of blame in the first place just because something bad happened to them (just world bias). So not only managers but also practising lawyers tend to suffer from what we can now add to the list of biases: the Rocket Science bias.

The analysis presented here suggests that real accidents arise from complex and essentially almost unpredictable combinations. The quantum metaphor is an extension of the notion first presented as the Impossible Accident (Wagenaar & Groeneweg, 1987), which proposed that accidents happen because people cannot oversee what is influencing them and do not believe that what is about to occur is even possible at all. In many cases, nevertheless, simplifying assumptions are sufficient to catch the majority of direct causes, but they will never catch them all; we will always be left with impossible or weird accidents, typically those we can identify as low probability but often of high consequence. My analysis suggests that once we understand the more complex and subtle mechanisms, we can start to develop ways of designing preventative measures that will get us much closer to our target of zero accidents. These measures are starting to become clear, and they involve working on the culture, regulatory regimes and organisational practices at all levels in the company, from the board to the front-line worker, while not forgetting any of the basics as well (Hudson, 2007). What this requires of more simple-minded managers is that they learn to accept and implement preventative measures without first having an accident to prove they were necessary. The final message is that if we wish to achieve anything really close to a zero level of accidents in a high hazard operation, we cannot achieve it with the kind of simplifying assumptions and attributions too many managers, at all levels, still make.

References

Ale, B.J.M., Bellamy, J.J., Cooke, R.M., Duyvis, M., Kurowicka, D. & Lin, P.H. (2009) Causal Model for Air Transport Safety. Final Report, Ministry of Transport, Directorate General for Air Transport and Maritime Affairs, The Hague, Netherlands.
Bernstein, P.L. (1996) Against the Gods: The Remarkable Story of Risk. John Wiley & Sons, New York, NY.
Bird, F.E. (1966) Damage Control. Insurance Company of North America, Philadelphia, PA.
Bradley, G.W. (1978) Self-serving biases in the attribution process: A re-examination of the fact or fiction question. Journal of Personality and Social Psychology, 36, 56-71.
Campbell, W.K. & Sedikides, C. (1999) Self-threat magnifies the self-serving bias: A meta-analytic integration. Review of General Psychology, 3, 23-43.
Fischhoff, B. (1975) Hindsight ≠ foresight: The effect of outcome knowledge on judgement under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1, 288-299.
Fischhoff, B. (1986) Decision Making in Complex Systems. In E. Hollnagel, G. Mancini & D.D. Woods (Eds.) Intelligent Decision Support in Process Environments. NATO ASI Series Vol. 21. Springer-Verlag, Berlin.
Fischhoff, B. & Beyth, R. (1975) "I knew it would happen": Remembered probabilities of once-future things. Organizational Behaviour and Human Performance, 13, 1-16.
Fletcher, G.J.O. & Ward, C. (1988) Attribution theory and processes: A cross-cultural perspective. In M.H. Bond (Ed.) The cross-cultural challenge to social psychology. Sage, Newbury Park, CA.
Groenewegen, A.J.M. (1990) What happened? Diagnosing unfamiliar real-life situations. Ph.D. Thesis, Rijksuniversiteit Leiden, Leiden, The Netherlands.
Heinrich, H.W. (1931) Industrial Accident Prevention. McGraw-Hill, New York, NY.
Hollnagel, E., Woods, D.D. & Leveson, N. (2006) Resilience Engineering. Ashgate, Aldershot, UK.
Hudson, P.T.W. (1994) Helicopter Procedures and Human Error: Findings from a study carried out by P. Hudson. Report for Shell Aircraft Ltd, London; Westland Helicopters, Weston-super-Mare; & GEC-Marconi, London. 48 pp.
Hudson, P.T.W. (2001) They didn't see it coming: Hindsight and foresight on the road to disaster. In E.R. Muller & C.J.J.M. Stolker (Eds.) Ramp en Recht: Beschouwingen over rampen, verantwoordelijkheid en aansprakelijkheid [Disaster and Law: Reflections on disasters, responsibility and liability]. Boom Juridische Uitgevers, Den Haag. Pp. 91-102.
Hudson, P.T.W. (2007) Implementing a safety culture in a major multi-national. Safety Science, 45, 697-722.
Hudson, P.T.W. (2010) Integrating Organizational Culture into Incident Analyses: Extending the Bowtie Model. Proceedings of the 10th SPE International Conference on Health, Safety and Environment in Oil and Gas Exploration and Production, Brazil, April 2010. Society of Petroleum Engineers, Richardson, TX.
Hudson, P.T.W. & Hudson, T.G.L. (2010) Moving from Investigating to Analyzing Accidents: Supporting Organizational Learning. In Proceedings of the SPE International Conference on Health, Safety and Environment in Oil and Gas Exploration and Production, Brazil, April 2010. Society of Petroleum Engineers, Richardson, TX.
Jonker, H. (2000) Cockpit Decision Making: How the Rule of Three can help making Go-NoGo decisions. Master's Thesis, Department of Experimental Psychology, Leiden University, The Netherlands.
Larson, J.R. (1977) Evidence for a self-serving bias in the attribution of causality. Journal of Personality, 45, 430-441.
Lerner, M.J. (1980) The Belief in a Just World: A Fundamental Delusion. Plenum, New York, NY.
Lerner, M.J. & Simmons, C.H. (1966) Observer's reaction to the "innocent victim": Compassion or rejection? Journal of Personality and Social Psychology, 4, 203-210.
Merwe, K. v.d. (2004) The Rule of Three: The creation and evaluation of a tool. Master's Thesis, Department of Experimental Psychology, Leiden University, The Netherlands.
Nisbett, R.E. & Ross, L. (1980) Human inference: Strategies and shortcomings of social judgement. Prentice Hall, Englewood Cliffs, NJ.
Reason, J.T., Wagenaar, W.A. & Hudson, P.T.W. (1988) A New Approach to Safety: TRIPOD. Report for Shell International SIPM, The Hague. Department of Experimental Psychology, Leiden University, The Netherlands.
Reason, J.T. (1990) Human Error. Cambridge University Press, Cambridge.
Reason, J.T. (1997) Managing the Risks of Organisational Accidents. Ashgate, Aldershot, UK.
Ross, L. (1977) The intuitive psychologist and his shortcomings: Distortions in the attribution process. In L. Berkowitz (Ed.) Advances in Experimental Social Psychology (Vol. 10, pp. 173-220). Academic Press, New York, NY.
Triandis, H. (1996) The psychological measurement of cultural syndromes. American Psychologist, 51, 407-415.
Wagenaar, W.A. & Groeneweg, J. (1987) Accidents at sea: Multiple causes and impossible consequences. International Journal of Man-Machine Studies, 27, 587-598.
Wagenaar, W.A. & Hudson, P.T.W. (1987) The analysis of accidents with a view to prevention. Report for Shell International SIPM, The Hague. Department of Experimental Psychology, Leiden University, The Netherlands.
Wagenaar, W.A., Hudson, P.T.W. & Reason, J.T. (1990) Cognitive Failures and Accidents. Applied Cognitive Psychology, 4, 273-294.