An Evaluation of the Case Management Review Process in Northern Ireland
Anne Lazenbatt, John Devaney and Lisa Bunting January 2009
AN EVALUATION OF THE CASE MANAGEMENT REVIEW PROCESS IN NORTHERN IRELAND AND A SCOPING EXERCISE OF ADVERSE INCIDENT REPORTING AND ALTERNATIVE INVESTIGATIVE SYSTEMS

About the authors

Anne Lazenbatt, PhD, is a health psychologist and NSPCC Reader in Childhood Studies based in the Institute of Child Care Research in the School of Sociology, Social Policy and Social Work at Queen's University Belfast. She was previously Head of Research and Development in the School of Nursing and Midwifery, Queen's University. Her research interests are focused on 'vulnerability, violence and abuse', with particular emphasis on domestic violence and emotional and mental health, and on the development of 'theoretical and methodological models of evaluation of health and social care'. She is a fellow of the British Psychological Society. http://www.qub.ac.uk/schools/SchoolofSociologySocialPolicySocialWork/Staff/AcademicStaff/AnneLazenbatt/

John Devaney, PhD, is a lecturer in social work in the School of Sociology, Social Policy and Social Work at Queen's University Belfast. Prior to this he worked as a practitioner and manager in children's social services in the statutory sector in Northern Ireland for twenty years. His research interests include familial violence, the impact of adversity in childhood on later adult outcomes, and non-accidental child deaths. He is chair of the Northern Ireland Branch of the British Association for the Study and Prevention of Child Abuse and Neglect, and Associate Editor of the journal Child Abuse Review. http://www.qub.ac.uk/schools/SchoolofSociologySocialPolicySocialWork/Staff/AcademicStaff/JohnDevaney/

Lisa Bunting, PhD, is a Senior Researcher with the NSPCC. Her research interests lie within a broad range of child protection and maltreatment related issues including physical discipline, female sex offenders, attrition in the criminal justice system, information sharing systems and child deaths. Her most recent publication maps therapeutic services for child victims of sexual abuse across Northern Ireland (available at: http://www.nspcc.org.uk/Inform/policyandpublicaffairs/NorthernIreland/northernireland_wda48642.html)
Acknowledgements

The authors would like to thank a number of people for supporting and informing this evaluation. Firstly, we are indebted to the Department of Health, Social Services and Public Safety for funding this evaluation. Throughout the evaluation we have also been extremely grateful to the staff who completed questionnaires and agreed to be interviewed. We hope that we have represented their views accurately. Finally, we would like to acknowledge the contribution of Colin Reid, NSPCC, and Martin Quinn, HSC Board, who assisted with the development of this project. The views expressed, though, are those of the authors.
CONTENT

1. Background and Introduction ... 4
2. Evaluation of the Case Management Review System in Northern Ireland ... 9
3. Project Aims & Objectives ... 11
4. Key Themes Arising from the Literature on Case Reviews ... 12
5. Moving Forward – Evidence from Adverse Incident Reporting and Alternative Investigative Systems ... 14
6. Methodology ... 29
7. Data Analysis ... 33
8. Evaluation Findings ... 34
9. Discussion & Recommendations ... 42
10. References ... 53

Appendices:
- The Way Forward: An outline of the local Safeguarding Panel review process ... 58
- Policy Delphi Interview Schedule ... 63
- Policy Delphi Round 2 Questionnaire ... 64
1. BACKGROUND AND INTRODUCTION

The United Nations Convention on the Rights of the Child (UNCRC) is an international human rights treaty that grants all children and young people (aged 17 and under) a comprehensive set of rights and places a duty on countries to protect children from abuse and neglect (Article 3) and to uphold their right to life (Article 6). Yet countries individually and collectively have struggled to develop a system of early intervention that prevents any child from experiencing harm, or that responds robustly enough in all situations where abuse and neglect are suspected to prevent a child being harmed again. This has resulted in children dying in circumstances where it is believed that professionals could, and should, have acted differently (for example, Laming, 2003; Laming, 2009). The 'ideal', therefore, is a child protection system that can protect all children all of the time.

The child protection system exists to protect children from the risk of abuse or neglect posed by their parents or carers. It is a serious adverse event for the child protection system when a child dies as a result of abuse or neglect. It raises the possibility that the child protection system in some way failed that child, either by failing to identify that the child was at risk, or by failing to take appropriate action in response to an identified risk. Although the understanding of the nature of child abuse has improved, and child protection processes have evolved to respond to the neglect and abuse of children, there has been a failure to develop a system that ensures that 'at-risk' children do not slip through the protection net. Children have died because of a failure to recognise the adverse circumstances some children experience, or because individuals failed to refer a child care concern to the appropriate child welfare services (Creighton, 2007). In other circumstances child protective services have failed to keep children safe (Durfee & Tilton-Durfee, 1995; Gough, 1995; Munro, 2005).

Traditionally, one of the most public ways of learning has been through the inquiry into the death of a child from abuse or neglect. In the UK, as in many other countries, these inquiries have had a major influence on the way services have developed (Parton, 2004; Stanley and Manthorpe, 2004). In England and Wales serious case reviews [SCRs] are carried out when abuse and neglect are known or suspected factors when a child dies, or is seriously injured or harmed, and it is believed that lessons may be learnt about inter-agency working to assist in the future safeguarding of other children (DfES, 2006; HM Government, 2006; Brandon et al., 2008a&b; Rose and Barnes, 2008; Fish et al., 2008). The review process can establish what improvements are required to be made to the way in
which professionals and agencies work together to safeguard children, and identify how these will be acted upon. The purpose, scope and arrangements for SCRs are contained in Working Together to Safeguard Children (HM Government, 2006), which builds on the learning from various sources, including the circumstances of the death of Victoria Climbié, which highlighted the continued failings of services for children and illustrated how children can remain invisible in spite of being known to many separate agencies. To optimise the learning from the deaths and serious injury of these children there is a government commitment to SCRs being analysed periodically to build a more rigorous knowledge base and to provide better pointers to the prevention of injury or death where abuse or neglect is a factor. A significant change in the guidance is the shift of responsibility for monitoring this process from Area Child Protection Committees, with SCRs becoming a function of Local Safeguarding Children Boards [LSCBs]. Therefore the goal has been to develop a culture of critical reflection aligned with processes for system improvement and continuing professional development.

In 2006 responsibility for undertaking SCRs was vested in LSCBs, and since April 2007 Ofsted has introduced a more transparent and consistent process for evaluating serious case reviews. This assesses the extent to which the review fulfilled its purpose by reviewing the involvement of agencies, the rigour of analysis and the capacity for ensuring that the lessons identified are learned. The evaluation process aims to support the improvement of practice and safeguarding at a local and national level. However, Ofsted (2008) has made serious criticisms, reporting that in just under a third of cases SCRs have been judged to be inadequate because of a lack of rigour in carrying them out, and that there are serious delays in producing them in nearly all cases. SCRs should normally be completed within four months of the decision to carry one out but nearly all take much longer, and there is evidence that some of the delays are avoidable and that the agencies involved have not given them sufficient priority. All of these factors limit the impact of SCRs on sharing the lessons and good practice and "severely restrict" the potential to learn and to improve practice from these reviews (Ofsted, 2008).

For example, during the period 2006 to 2008 one SCR was completed by Haringey's safeguarding children board, relating to an infant named as 'Baby P'. Ofsted (2008) judged the quality of the SCR relating to 'Baby P' to be inadequate for the following reasons:

− The terms of reference were insufficiently comprehensive, lacked clarity, and were not finalised until 12 December 2007, four months after the SCR
began, and when the writing of the individual management reviews by the relevant agencies had already been completed. This resulted in some important aspects not being adequately considered, such as the capacity of front line services, the effectiveness of provision for other children in the family, and the reasons why agencies failed to discover the two men living in the household.
− There was insufficient independence of the SCR panel; the panel was chaired by the director of the children and young people's service, who also chaired the local safeguarding children board.
− The quality of the nine individual management reviews was extremely variable, with five judged to be inadequate.
− Agencies were working in isolation and without any effective co-ordination.
− There was poor gathering, recording and sharing of information. This meant that vital information which might have helped to form a complete picture of a child's safety and welfare was not available. There was too much reliance on quantitative data – which is not always accurate or complete – and not enough focus was placed on what makes a quality service on the ground. The Local Safeguarding Children Board failed to provide sufficient scrutiny and challenge.
− There was insufficient challenge by the Local Safeguarding Children Board to its members and also to front-line staff.
− Key actions required in order to improve safeguarding were not fully identified, and the SCR therefore missed opportunities to ensure that lessons were learned.

Moreover, in November 2008, following the death of 'Baby P' in Haringey, an Ofsted inspection was commissioned by the Secretary of State for Children, Schools and Families, and conducted using the arrangements for joint area reviews as required under section 20 of the Children Act 2004. It was a special joint area review, which examined the circumstances of the baby's death and the role of each of the services involved with the family. Recommendations from this review process now require:
− The more effective integration of individual service processes and systems across all agencies, so that all children and young people are safeguarded.
− The establishment of more systematic monitoring of the quality of practice.
− Closer collaboration in inter-agency working to improve outcomes for children and young people.
− The establishment of clear procedures and protocols for communication and collaboration between social care, health and police services to support the safeguarding of children.
− The appointment of an independent chairperson to the local safeguarding children board (LSCB).
− A more child-focused approach and better preparation, with greater urgency, so that lessons can be learned more quickly.
In Northern Ireland the Case Management Review [CMR] process, the NI equivalent of Serious Case Reviews, was introduced in 2003 with the publication of the DHSSPS Guidance and Regulations Co-operating to Safeguard Children. It was developed to consider the learning to be gained from reflecting upon child deaths or serious adverse incidents involving children where abuse or neglect is known, or suspected, to have been a contributory or causal factor. The framework for conducting a CMR is contained within chapter 10 of Co-operating to Safeguard Children (DHSSPS, 2003a), which broadly mirrors arrangements for SCRs in England and Wales as set out in equivalent guidance (HM Government, 2006). Where the CMR process differs from SCRs is that all reviews in Northern Ireland have been conducted by a panel of professionals chaired by someone independent of the agencies involved in the case being reviewed.

Whilst the child death review provisions of the Children Act 2004 did not extend to NI, this jurisdiction has witnessed similar policy developments. In 2006 the DHSSPS published a consultation document, 'A Regional Multi-Agency Procedure to be followed in cases of Sudden or Unexpected Child Deaths from Birth to 18 years'. This differs from arrangements in England and Wales in that it focuses on unexpected or sudden deaths of children under 18 rather than all child deaths.
Both CMRs and SCRs often attribute child deaths to the systemic failure of child protective agencies to respond to the issues known about the child and their circumstances. Where individual professional fault is identified, the response has been to address this within the context of weaknesses in the system. The goal, therefore, has been to develop a culture of critical reflection aligned with processes for system improvement and continuing professional development. However, Reder and Duncan (2004) highlight that recent reviews appear to draw the same conclusions, identifying the same problems in front-line practice, such as a lack of communication, effective training, resources and time, and suggest that the result of so many inquiries into fatal child abuse is to foster a blame culture in child protection work (Munro, 2005; Reder & Duncan, 2004; Fish et al., 2008). In this regard, Stanley and Manthorpe (2004) contend that it is difficult to see how the benefits from reviews and inquiries positively inform policy, practice and learning. This raises the question of whether the current methods of learning lessons are providing satisfactory explanations of the problems and, therefore, effective solutions.

In June 2008 the Department of Health, Social Services and Public Safety [DHSSPS] commissioned Queen's University Belfast and the NSPCC to undertake an 'Evaluation of the Case Management Review Process [CMR] in Northern Ireland [NI], and a Scoping Exercise of Adverse Incident Reporting and Alternative Investigative Systems'. The aim was to provide an evaluation of the current CMR process and to propose refinements based on a consideration of other approaches to reviewing significant adverse incidents.
2. EVALUATION OF THE CASE MANAGEMENT REVIEW PROCESS IN NORTHERN IRELAND

To date, in excess of 17 Case Management Reviews have been commissioned by the four Area Child Protection Committees in Northern Ireland, with 13 final reports forwarded to the Department. In light of the experience of commissioning and conducting these reviews, and the proposed establishment of a Regional Safeguarding Board for Northern Ireland that will replace the four Area Child Protection Committees, the Department wishes to review current processes. Anecdotally, there have been a number of issues raised about the operation of the current CMR system:
- The "one size fits all" approach to a range of very different situations.
- Variability as to the levels of participation and commitment across agencies.
- The interface between the Case Management Review process and the Coroner, together with the Police investigative processes.
- The difficulty in identifying appropriate independent Chairs for the reviews.
- The difficulty in identifying and securing appropriate panel members.
- The ability to meet the timescales laid out in 'Co-operating to Safeguard Children' for the completion of the review.
- A perception that not all appropriate cases are referred for consideration by agencies other than HSC Trusts.
- Repetition of recommendations from previous reviews.
- Concerns as to whether the focus on sharing the learning was being achieved.
- A lack of regional overview and dissemination of key messages in spite of the time and resources expended.
The evaluation includes a brief literature review providing an overview of key adverse incident systems and processes currently in operation, both nationally and internationally, and their respective advantages and disadvantages. It also includes a brief overview of theoretical models, such as root cause analysis and systems theory, and how these might be applied to new case management review processes.
3. PROJECT AIMS & OBJECTIVES

Aim

To provide an evaluation of the current case management review [CMR] process and to propose refinements based on a consideration of other approaches to reviewing significant adverse incidents.

Objectives
- To review the strengths and limitations of the current CMR process with key stakeholders nominated by the DHSSPS;
- To briefly review other approaches to adverse incident reporting and investigation, including alternative local review processes; and
- To make recommendations to improve the current system.
4. KEY THEMES ARISING FROM THE LITERATURE ON CASE REVIEWS

Although serious case reviews and case management reviews make an important contribution to understanding what happens in circumstances of significant harm, to date there has been relatively little evaluation of the operation, effectiveness or impact of serious case reviews/case management reviews. Whilst the UK Government and academics have sought to draw together the key themes arising from individual reports, less attention has been given to the actual processes involved.
Key Themes Arising from Case Reviews

In England there has been a commitment to providing a biennial overview of the key themes arising from SCRs (for example, Brandon et al., 2008a). Since the first modern inquiry into the death of Maria Colwell in 1973, there have been a series of reviews of the deaths or serious injury of children that have reached similar conclusions about the operation of the system for protecting children from abuse and neglect. In summary, the key themes are often related to:
- The knowledge base of some professionals working with children and families in relation to the identification and management of child abuse and neglect
- The volume and quality of information sharing and communication between different professionals
- The processes for the collection and analysis of information relating to families and the concerns about children
- The focus on parents and the near invisibility of children
- The failure to take appropriate decisions and action based on the information available to professionals

As a result of these recurring themes, considerable time and effort has been spent in improving uni- and multi-disciplinary education and training for professionals; in developing ever more elaborate policies and procedures to guide professional working; and in developing tools to assist in the assessment and planning regarding children in need of support and protection.
Key Themes Arising from Reviews of the Process of Case Review

There is a small but developing literature base relating to the operation of the SCR and CMR processes. This has arisen in part from the work above, where those who have been overviewing the individual reports have also commented on the processes involved (for example, Bunting and Reid, 2005; Rose and Barnes, 2008). Additionally, there has been work conducted on exploring alternatives to the current processes that aims to locate the learning from individual tragedies within broader organisational and cultural approaches to governance and quality improvement (for example, Bostock et al, 2005). In summary, the key issues include:
- The need to integrate the SCR/CMR process with other quality improvement processes in organisations
- The incorporation of 'near miss' reporting systems alongside the SCR/CMR process
- Focusing on latent (systemic) as well as active (practice) factors
- Reviewing and developing the learning culture within Local Safeguarding Children Boards
- Improvements in the commissioning and conduct of case reviews relating specifically to the standard of review reports and recommendations
- The expertise and quality of the Chairperson
- The involvement of family members
These issues are explored further in the next section.
5. MOVING FORWARD – EVIDENCE FROM ADVERSE INCIDENT REPORTING AND ALTERNATIVE INVESTIGATIVE SYSTEMS

Although these methods of learning are central to efforts to improve outcomes for children and families, the findings of SCRs and CMRs tend to be familiar and repetitive, raising questions about their value for improving practice. Similar circumstances in engineering, health and other high-risk industries have led to the development of the 'systems approach' – an approach that gets to the bottom of why accidents occur and so allows for more effective solutions. A 'systems' method allows for the identification of what 'works well' as well as where there are problems, and offers a framework not just for examining cases with tragic outcomes but for conceptualizing how services routinely operate.

A new 'systems' approach to learning

Risk management systems and programmes that promote learning from adverse events and 'near misses' have proved to be a powerful means of improving safety. This approach has been pioneered within aviation and the oil exploration business and, more recently, adopted by the healthcare sector within the UK. In July 2001, the government set up the National Patient Safety Agency (NPSA) to coordinate the efforts to learn from adverse incidents and 'near misses' occurring in the National Health Service (NHS) in England and Wales. The report which recommended the NPSA's establishment was highly critical of the 'culture of blame' prevalent in the NHS, which focuses attention only on the actions of the individuals involved in an incident. Instead, it recommended a 'systems approach' to identifying and redressing the causes of mistakes. The key aims of the NPSA are to:
- Identify trends and patterns in patient safety through a national incident reporting system and other sources of information
- Develop and provide tools to NHS staff at local level with the objective of learning from errors through detailed analysis
- Develop solutions for high-frequency risks at national level
There is a match between a 'systems model' and the requirements of SCRs in England and Wales, and CMRs in Northern Ireland (Bostock et al, 2005). In Keeping Children Safe (Department for Education and Skills et al., 2003), the Government's response to the findings of the Laming Report into the death of Victoria Climbié, a new system for examining unexpected child deaths in England was proposed, drawing on the expertise of the NPSA. It recommended that the current serious case review system for reviewing such incidents would benefit from more of a 'systems approach' orientation to examining why these incidents happen and what can be done to reduce the likelihood of further ones. The goal of a systems case review is not only to understand why a particular case developed in the way it did, for better or for worse, but to use one particular case as the means of building up an understanding about strengths and weaknesses of the system more broadly, and how it might be improved in future (Munro, 2008).

In the systems approach the term 'system' is used in a far broader sense and includes all possible variables that make up the workplace and influence the efforts of front-line workers in their engagement with families. Importantly, as well as the more tangible factors like procedures, tools and aids, working conditions, resources etc, a systems approach also includes issues such as team and organisational 'cultures' and the covert messages that are communicated and acted on. It treats these apparently softer factors as systems issues as well.

A systems approach can radically change the traditional perspective. Instead of the front line workers dominating the picture, the limits of their autonomy are recognised and they are placed in their wider cultural context. Investigations to understand why they lapse from the desired standards of practice consider the full range of factors operating on them: e.g. do they have the necessary knowledge and skills, are the right resources available to support them, does the organization set feasible and consistent goals? The fallible human is not then seen as the central problem, with solutions trying to find various ways of eliminating or reducing their role. Instead, the investigation starts by looking at what is needed to do the job well and then considering what aspects humans are good at, and where they need help. The investigation then works outwards to find out whether the organization is providing the context in which high-quality work can be undertaken. Solutions tend to take the form of redesigning the task so that it makes feasible demands on practitioners, taking a realistic view of human cognitive, psychological and emotional skills:

'Errors are consequences not just causes … they are shaped by local circumstances: by the task, the tools, and equipment and the workplace in
general. If we are to understand the significance of these contextual factors, we have to stand back …and consider the nature of the system as a whole.’ Reason and Hobbs (2003: 9) As noted by Munro (2005: 381) the basis of the paradigm shift from a traditional investigation to a systems perspective is to take human error as a starting point for inquiry, not as a satisfactory explanation in itself. A systems approach has a complex view of causality, treating human error and the role the individual front line worker has in the sequence of events as a starting point. When the traditional investigation identifies professional error, it is assumed that the professional ‘could have done differently’ and so can be held responsible and merits censure. Views of human error based on a "systems" approach rather than an individual blame approach see: -
Humans as imperfect and errors are to be expected
-
Errors as consequences rather than causes
-
Errors having their origin in the design of system in which humans work
-
Errors as reducible most effectively by changing the way in which humans work rather than by changing human behaviour itself.
Therefore, to develop an effective reporting system, an organisation needs to have certain key characteristics. It should:
- See errors as learning opportunities
- Motivate individuals to talk about their own experiences by encouraging the sharing of experiences
- Respond to problems that are identified
- Not unfairly penalise those who have made an error that was not deemed careless or reckless
- Have a reporting system which is seen to uncover the underlying causes of incidents

There are two important sources of data relevant to a systems investigation:
1. The written records of different agencies; and
2. Interviews with key staff as well as service users and carers.
The format of the interviews creates an initial organisation of the data from which the review team constructs a narrative account of the history of the case. Within the narratives are a number of key episodes that are then analysed in more detail. Building on work done in healthcare by Woods and Cook (2001), the deeper analysis of the data categorises them in terms of patterns of interactions. These patterns can either be constructive or create unsafe conditions in which poor practice is more likely. An initial typology of patterns significant for child protection includes the following six categories:
1. Human–tool operation
2. Human–management system operation
3. Communication and collaboration in multi-agency working in response to incidents/crises
4. Communication and collaboration in multi-agency working in assessment and longer-term work
5. Family–professional interactions
6. Human judgement/reasoning
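To make the use of this typology concrete, the sketch below shows one way a review team's coding of key episodes might be represented. This is purely illustrative and is not part of the evaluation's method; the category names follow the list above, while the episode record and its fields are hypothetical.

```python
from enum import Enum

class InteractionPattern(Enum):
    """Illustrative coding categories mirroring the six-part typology above."""
    HUMAN_TOOL = "Human-tool operation"
    HUMAN_MANAGEMENT_SYSTEM = "Human-management system operation"
    MULTI_AGENCY_INCIDENT_RESPONSE = "Multi-agency working in response to incidents/crises"
    MULTI_AGENCY_ASSESSMENT = "Multi-agency working in assessment and longer-term work"
    FAMILY_PROFESSIONAL = "Family-professional interactions"
    HUMAN_JUDGEMENT = "Human judgement/reasoning"

# Hypothetical key episode drawn from a case narrative, tagged with the
# pattern(s) it illustrates and whether the pattern was constructive or
# created unsafe conditions.
episode = {
    "summary": "Referral information not shared between health visitor and duty social worker",
    "patterns": [InteractionPattern.MULTI_AGENCY_ASSESSMENT],
    "constructive": False,
}

for pattern in episode["patterns"]:
    print(f"{episode['summary']} -> {pattern.value}")
```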
Incidents and Near Misses

Incident and 'near-miss' reporting provides a rich source of key information on risk and recovery, and serves a number of different purposes in the NHS (Bostock et al 2005). 'Near misses' are defined as those cases where something was prevented from going wrong, or had gone wrong but no serious harm had been caused. In child welfare, the term safeguarding incident is proposed to capture near misses as well as incidents causing serious harm. Analyses of near misses reveal weaknesses in the necessarily complex assessment, decision-making and review systems surrounding child welfare and show ways of correcting them. This suggests that children and families who use social services will, just like patients and rail and air passengers, benefit from 'the application of safety management' (Bostock et al 2005, p.xiii).
The strength of this approach is its separation of learning lessons from the SCR/CMR processes and its focus on a whole system approach, with the aim of bringing about systematic improvements across agencies. From a patient safety perspective, reporting serves two general functions:
1. Reports allow the identification of incidents to investigate further, usually through a form of root cause analysis (RCA), in order to pinpoint the causes of specific incidents; and
2. Reports, when aggregated, serve as an "epidemiological" tool to identify vulnerable areas and the contextual risk factors which create the conditions in which incidents occur.

Rather than restricting case reviews to incidents of child death or significant harm, many LSCBs now seem to be setting up systems for the review of these 'lower-level' incidents and/or 'near misses'. This mirrors developments in Northern Ireland concerning the improvement of the safety of patients and service users, led by the Department of Health, Social Services and Public Safety (DHSSPS), where patient safety incidents are differentiated according to the severity of harm caused: no harm, low, moderate, severe, death (see Bostock et al, 2005).

Learning from safeguarding incidents is dependent on five fundamental features of a learning organization, namely its:
- structure
- organizational culture
- information systems
- human resources practices
- leadership
This means that work to promote learning from safeguarding incidents can take place in different aspects of an organization, by a commitment from leaders to understand the underlying causes of incidents as well as by the growth of robust human resource systems to help develop practice.

Bostock et al (2005) undertook the first study attempting to collect data about 'near misses' in children's services as there was no evidence that 'near misses' actually featured in childcare social work, nor that social workers or service users would relate to the concept. The findings demonstrate that few practitioners or service users had
problems identifying 'near misses' and, given a safe environment in which they did not have to fear reprisals, were more than willing to identify and reflect on their own experiences of them. However, current incident reporting systems have limitations because the datasets often do not include contextual risk factors that affect human error and hence patient safety. Knowledge of the real context of incidents would better equip organizations to prevent them (Morrison, 2003). Contributory factors or contextual risk factors include items such as staff factors, team factors, task factors and environment factors, and should include information on fatigue, stress, availability of training and procedures, and other factors already identified in other industries as critical to safety. Gathering these data enables an organization to do the following (an illustrative sketch follows this list):
- identify the issues known to affect human error
- accurately identify where these problems occur most frequently
- target investigations and solution development
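As a rough illustration of this point, the sketch below shows how a set of incident and near-miss reports capturing contextual risk factors could be aggregated to see which factors recur and where reports cluster. The report records, team names and factor labels are invented for the example; nothing here reflects actual DHSSPS or NPSA data structures.

```python
from collections import Counter

# Hypothetical near-miss/incident reports; all field names and values are
# illustrative only.
reports = [
    {"team": "Duty team A", "severity": "no harm",
     "factors": ["staff shortage", "communication breakdown"]},
    {"team": "Family support", "severity": "low",
     "factors": ["high caseload", "communication breakdown"]},
    {"team": "Duty team A", "severity": "moderate",
     "factors": ["communication breakdown", "unclear procedure"]},
]

# Count contributory factors across all reports to see which conditions recur...
factor_counts = Counter(factor for report in reports for factor in report["factors"])
# ...and count reports per team to see where they cluster.
team_counts = Counter(report["team"] for report in reports)

print("Most frequent factors:", factor_counts.most_common(2))
print("Teams with most reports:", team_counts.most_common(1))
```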
However, there are common conflicts in near miss reporting systems, as suggested in the following examples:
1. Sacrificing accountability for information – negotiating moral hazards in choosing between the good of society and the needs of individuals;
2. Near miss data compared with accident data – near miss data are plentiful, minimise hindsight bias, are proactive and less costly, and carry no indemnity;
3. A change in focus from errors and adverse events to recovery processes – recovery equals resilience, with an emphasis on successful recovery, which offers a learning opportunity;
4. Trade-offs between large aggregate national databases and regional systems – national systems offer larger denominators and the capture of rare events; regional systems offer potentially more specific feedback and local effectiveness;
5. Finding the right mix of barriers and incentives – supporting the needs of all stakeholders; an ecological model;
6. Safety has up-front, direct costs while the payback is indirect – spending "hard" money to save larger sums and reduce quality waste;
7. Safety and respect for reporters as well as patients – a just culture that acknowledges the pervasiveness of hindsight bias and balances the accountability needs of society;
8. The need for continuous, timely feedback that reporters find relevant, and changing bureaucratic culture – critical to sustaining the effort of ongoing reporting.
Learning from success

Learning from effective safeguarding practice, rather than learning from mistakes, is another approach. However, learning from 'what works' well may sound simple and logical but can require a major shift in the prevailing mind-set, which has inevitably been focused on learning lessons from what has gone wrong. Hammond (1996) sums up the reasoning behind this approach:

'We are very good at talking about what doesn't work... We have very little practice in looking toward what works and finding ways to do more of that. It never occurs to us that we can fix an organization or even our society by doing more of what works well. We are obsessed with learning from our mistakes.' (Hammond, 1996: 9)

Evidence suggests that one method of applying this approach is Appreciative Inquiry (see Cooperrider et al 2001), which is beginning to gain interest in different parts of central and local government (Barnes 2007). Appreciative Inquiry is a radical way of learning from and building on existing good practice, undertaken in a positive environment of collaborative inquiry. Applied to safeguarding practice, it is a facilitated approach undertaken with managers and practitioners that involves identifying the essential elements of best practice and exploring ways of using this knowledge to improve safeguarding practice across local agencies. It achieves this by:
- exploring essential features of participants' experience of existing best practice
- collectively developing a shared vision of most desirable practice for the future
- working together to develop, design and create this practice, with changes occurring from the very first questions asked
The benefits of this approach are that:
- shared understandings and intentions are made transparent and reinforced
- staff work constructively in a safe and creative environment
- existing connections and relationships are strengthened and renewed
- plans for improving local safeguarding practice and outcomes are agreed and implemented
Root Cause Analysis

The NPSA uses root cause analysis (RCA) techniques to understand the underlying causes of incidents rather than identifying individual failure. RCA is a class of problem-solving methods aimed at identifying the root causes of problems or events, rather than merely addressing the obvious symptoms. Based on human error theory, in particular Reason's (1997, 2000) model of organisational accident causation, the RCA model has been adapted for use in health and social care settings. It not only takes into account the active failures of frontline staff to follow a prescribed course of action but also considers latent failures (well intentioned but, in hindsight, faulty decisions by senior management) and contributory factors (e.g. staff shortages, poor communication, busy work environment, emotional state of staff member, education and training etc). As such, this is a system-based approach which seeks to clarify not only the direct actions leading to the accident or incident but also the contribution made by the wider organisational context. Root cause analysis is defined as:

'...an analytic tool that can be used to perform a comprehensive system-based review of critical incidents. It includes the identification of the root and contributory factors, determination of risk reduction strategies, and development of action plans along with measurement strategies to evaluate the effectiveness of the plans.' (Hoffman et al., 2006: 7)

A good description of RCA is of 'an open and fair culture' which 'requires a much more thoughtful and supportive response to error and harm when they do occur' (Vincent, 2006: 158), by determining: what happened; why it happened; and what can be done to reduce the likelihood of a recurrence. Records should be made of
the influencing factors that have been identified as root causes or fundamental issues. These include:
− Individual Factors
− Team and Social Factors
− Communication Factors
− Task Factors
− Education and Training Factors
− Equipment and Resource Factors
− Working Condition Factors
− Organisational and Management Factors
− Patient / Client Factors

This way of learning involves the analysis of a child death considering not who made the mistakes but why. It takes human error as the starting point, not the conclusion, of the investigation. It is recommended that LSCBs should include 'safeguarding incidents' (near misses) alongside unexpected child deaths and serious injuries, and also that children's services should introduce reporting systems for such incidents, using root cause analysis to understand why these incidents happened.

What the RCA approach highlights is that holding a particular individual or individuals fully responsible and accountable is often highly questionable because, typically, incidents arise from a chain of events and the interaction of a number of factors, many of which are beyond the control of the individual concerned. Decisions about culpability require tools which have been developed to aid this process, such as Reason's 'culpability matrix' (Reason, 1997) and the UK NPSA's Incident Decision Tree (2004). Reason and colleagues (2001) highlight that health and social care professional staff are obliged to report such 'near miss' incidents, and to learn from latent failures within their organisations rather than focusing purely on active failures by staff members. They maintain that active failure is usually associated with the errors and rule violations of 'front-line' operators (in child welfare, this translates to child protection social workers and direct service staff) and has an immediate impact upon the system. The term 'active failures' might include:
- Action slips or failures, such as picking up the wrong telephone message
- Cognitive failures, such as memory lapses and mistakes through ignorance, or misreading a situation
- "Violations": deviations from safe operating practices, procedures, or standards. These are more often associated with motivational problems such as low morale, poor examples from senior staff, and inadequate management generally
Latent failure is most often generated by individuals more distant from the incident, at the upper levels of the system (policy makers and managers), and may lie dormant indefinitely. Examples of latent failure in child welfare might include pressure to complete child protection investigations within a certain number of days and chronic staff shortages. Latent failures provide the conditions in which unsafe acts occur; these work conditions include:
- Heavy workloads
- Inadequate knowledge or experience
- Inadequate supervision
- A stressful environment
- Rapid change within an organisation
- Incompatible goals (for example, conflict between finance and staffing)
- Inadequate systems of communication
However, active failures are neither necessary nor sufficient in and of themselves to cause an accident. Reason et al (2000) created the "Swiss cheese" model to describe how organizations are built with layers of defence against error (active failures), but with holes at each level representing weaknesses and gaps (latent failures). The holes are in constant flux, but occasionally line up perfectly, allowing an accident to occur (for example, a child is severely injured while an abuse investigation is underway).

Key Elements of Root Cause Analysis

RCA is simple and accessible and usually centres on a multidisciplinary group, using tools such as why-why or why-because questions to try to get at the underlying as well as the immediate causes of incidents; a minimal illustrative sketch of such a why-why chain follows the list below. It does so by:
- Using techniques to challenge traditional individual-blame approaches to error
- Understanding how humans interact with their environment
- Identifying potential problems related to processes and systems
- Using inter-disciplinary systems, involving experts from the frontline services
- Having an awareness of and sensitivity to potential conflicts of interest
- Continually digging deeper by asking why, why, why at each level of cause and effect
- Analysing the underlying cause and effect systems through a series of why questions
- Identifying changes that need to be made to systems
- Developing actions aimed at improving processes and systems.
The overall effectiveness of this system is determined initially by the dataset captured. Therefore the development of a core dataset which includes the following three elements is important:
− Contextual risk factors (including, for example, person factors, procedures, environment and equipment) which are used to quantify and prevent underlying causes of incidents
− Local area risk factors, which are already developed in some areas and incorporated in local reporting systems, to enable area-specific improvements in safety
− Legal and claims information, which provides managers with key data to deal with litigation and legal issues.

There are, however, methodological limitations to RCA, as RCAs are in essence uncontrolled case studies. As the occurrence of incidents is highly unpredictable, it is impossible to know if the root cause established by the analysis is the cause of the incident or accident (Morris et al., 2007). In addition, RCAs may be tainted by hindsight bias (see Caplan et al., 2001; Chapman, 2004). Other biases stem from how deeply the causes are probed, influenced by the prevailing concerns of the day. The fact that technological failures, which previously represented the focus of most accident analyses, have been supplanted by
staffing issues, management failures, and information systems problems may be an example of the latter bias. Finally, RCAs are time-consuming and labour intensive. Qualitative methods within RCA should be used to supplement quantitative methods, to generate new hypotheses, and to examine events not amenable to quantitative methods (for example, those that occur rarely). As such, RCA's credibility as a tool should be judged by the standards appropriate for qualitative methods, not quantitative ones (Gano, 2003). Yet the outcomes and costs associated with RCA are largely unreported.

The NPSA (2004) has developed an RCA methodology and training package, including a three-day training programme, an e-learning tool and full documentary support. Strong points of the system are its:
- simplicity and accessibility
- techniques to challenge traditional individual-blame approaches to error
- introduction of root-cause and contributory factor analysis
- use of five-why questioning and fishbone diagrams
A key advantage of the methodology when applied is that staff become involved, and begin to develop an insight into systems-based views of human error. This in itself is a driver for cultural change. Guidance on the Application of Root Cause Analysis Techniques for Adverse Incident and Complaint Investigation (DHSSPS, 2008)1 has also been developed as a best practice guide for the investigation of complaints and adverse incidents in the Southern Health and Social Care Trust. The framework for organising the contributory factors investigated and recorded in the NPSA's "Seven Steps to Patient Safety" document (and associated Root Cause Analysis Toolkit) is also useful [www.npsa.nhs.uk/health/resources/7steps].

In developing a framework of contributory factors for child welfare, Munro (2008) has drawn methodologically on the work of Charles Vincent and colleagues at the Clinical Safety Research Unit at Imperial College London, specifically the framework of contributory factors influencing clinical practice that they developed (Vincent et al, 2000; Taylor-Adams and Vincent, 2004) (see Table 1). This draws on, and extends, Reason's model of active and latent errors, classifying error-producing conditions and organisational factors in a single broad framework of factors affecting clinical practice (Vincent et al, 1998).

1 It should be noted that the operation of this guidance does not apply to circumstances where the case is subject to CMR under the requirements of Co-operating to Safeguard Children (DHSSPS, 2003).
Table 1: Framework of contributory factors influencing clinical practice (Vincent et al., 1998)

Factor types and their contributory influencing factors:
1. Patient – Condition (complexity and seriousness); Language and communication; Personality and social factors
2. Task and technology – Task design and clarity of structure; Availability and use of protocols; Availability and accuracy of test results; Decision-making aids
3. Individual (staff) – Knowledge and skills; Competence; Physical and mental health
4. Team – Verbal communication; Written communication; Supervision and seeking help; Team structure (congruence, consistency, leadership, etc)
5. Work environment – Staffing levels and skills mix; Workload and shift patterns; Design, availability and maintenance of equipment; Administrative and managerial support; Physical environment
6. Organisational and management – Financial resources and constraints; Organisational structure; Policy, standards and goals; Safety culture and priorities
7. Institutional context – Economic and regulatory context; NHS executive; Links with external organisations
Using a Systems Approach in Case Management Reviews/Serious Case Reviews

Importantly, the systems model is congruent with the aims of CMRs/SCRs, as laid out in paragraph 10.9 of Co-operating to Safeguard Children (DHSSPS, 2003a). This focuses not on an adversarial and forensic investigation but on learning about the way in which local professionals and organisations work together in order to identify lessons that can be acted on to improve inter-agency working and better safeguard and promote the welfare of children. Fish et al (2008) propose that the systems approach could form the basis for a nationwide framework that would facilitate reviewing cases in a consistent way so that wider lessons could be drawn from their similar findings, as it provides a structured and systematic process that, as in health, 'can help to ensure a comprehensive investigation and facilitate the production of formal reports when necessary' (Taylor-Adams and Vincent, 2004: 1). Moreover, it is premised on an explicit theoretical framework that explains the rationale for the data collection and analysis methods proposed. It would, therefore, also be of benefit to the commissioners of SCRs and CMRs by providing clarity about the nature of the work required, against which its quality can be judged.

The systems model provides clarity about the kind of data needed, and advances the degree of participation by going beyond the stipulation to provide 'feedback and debriefing for staff involved' on completion of each agency management review, in advance of the completion of the overview report (DHSSPS, 2003, paragraph 10.32). Instead, a systems approach allows participants themselves to play an active role in the development of the analysis, the prioritisation of key findings and the identification of solutions. The systems model also supports the formulation of recommendations by linking them to the initial typology of underlying patterns of systemic factors that contribute to either good or problematic practice, making explicit that individual and systemic issues are not mutually exclusive and highlighting the benefits of focusing on the interaction of factors. The identification of generic patterns of systemic factors and the analysis of other interacting contributory factors allow for reflection on pathways and obstacles to modifying them. This generates ideas for how the work context and inter-agency working can be strengthened in future. While still tentative and not comprehensive, this typology provides a useful basis for discussion about the kinds of findings to be highlighted. It has the additional merit of helping to remedy the current lack of fit between the findings and recommendations of SCRs and CMRs. Finally, the systems approach raises
some fundamental questions about traditional views on accountability, power and control (Fish et al., 2008). The individual case with a tragic outcome attracts public attention, and, quite reasonably, there is a demand to look into what happened. The public wants to know if anyone other than the perpetrator is to blame and whether lessons can be learned to prevent similar cases happening again. However, a focus on the individual case where a child dies has limited scope for teaching us what is working well or badly. The systems approach offers new ways of framing the problems and holds out the promise of more effective solutions (Munro, 2005).
6. METHODOLOGY
The purpose of the evaluation was to identify the views of key stakeholders regarding the main strengths and weaknesses of the current case management review [CMR] process, and to allow the formulation of key features of an improved process using the Delphi technique.

Delphi Technique

The Delphi technique is a group process used to survey and collect the opinions of experts on a particular subject (Hanafin, 2004). It is 'a method for structuring a group communication process' which facilitates a panel of experts 'to deal with a complex problem' by focusing on a set of predetermined tasks (Linstone, 1978). This 'structured communication' draws together multidisciplinary experts who 'pool their judgments to invent or discover a satisfactory course of action' (Berretta, 1996). Proponents suggest that using this method can improve creativity in decision making when accurate information is unavailable. While the format and content of this method vary greatly, the essence of all Delphi studies is a series of sequential questionnaires that are interspersed with summaries of respondents' comments. Through this iterative process, the Delphi method allows a group of geographically dispersed experts to deal systematically with a complex task where a single, correct answer is not necessarily available. It has been used as a tool for solving problems in health and social care settings (Powell 2003), and for monitoring 'change management' in health service provision (Green et al., 1999). In child protection research, it has been used to develop consensus on the early indicators of child abuse (Powell 2003). Once the Delphi has been completed, a small workable committee can utilize the results to formulate the required policy.

Policy Delphi

Arising out of the Delphi forecasting model, a Policy Delphi has been developed in areas of research where 'judgmental information' is needed as part of governmental decision-making processes. The Policy Delphi process allows for feedback and refinement of views between rounds. Furthermore, because respondents' time is managed individually, there is no need to coordinate busy professional diaries, and inter-professional and intra-agency discussion can thereby be more readily accommodated. Although Delphi has been characterized as a quick and inexpensive method (Jones et al 1992) that 'avoids regional bias' (Schneider and Dutton 2002), the time taken to construct questionnaires, digest
and feed back the information to respondents, and then analyze the data collected in the final Delphi round, should not be underestimated.

METHOD

Participants

The expert panel has been described as the 'lynchpin of the [Delphi] method', as its members construct the research findings and their collective opinions structure subsequent action. Experts are defined as a group of 'informed individuals' (McKenna 1994a&b), 'specialists' in their field of work, or 'individuals with knowledge of a particular area'. The expert panel is not typically selected through a random process but drawn from across professions and agencies. The panel is not expected to be a (statistically validated) representative sample; rather, its representativeness 'is assessed on the qualities of the expert panel rather than its numbers' (Powell 2003). The sample size depends on the scope of the research and the scale of resources attached to it. Throughout the Delphi process, experts' anonymity is assured, enabling respondents to express an opinion without risk of reproach or (professional) obligation to follow a particular line of reasoning. Anonymity reduces the impact of dominant individuals in a focus group process, flattening inter- and intra-institutional hierarchies.

The present CMR evaluation consisted of a sample of three kinds of experts [n=28] representing a good geographical spread within Northern Ireland:

1. Stakeholders (those directly involved in the CMR process)
- Chairs of the four Area Child Protection Committees
- A sample of Chairs and Panel Members of completed case management reviews
- A sample of Directors of Social Work
- Senior Managers from organisations represented on the ACPCs

2. Professionals (those with specialist knowledge of the subject)
- Senior officials from within DHSSPS with responsibility for case management reviews
- The Chief Coroner
- The Director of Legal Services for the Central Services Agency
3. Facilitators (mediators who organize and synthesize experts' opinions)
- The research team

Procedure

The Policy Delphi technique in the present evaluation was operationalized as a group process involving interaction between the researchers and a group of identified experts on the specified topic of the CMR process, through a series of interview and questionnaire phases. The Policy Delphi method has three characteristic phases or rounds:

Round 1: A panel of 'experts' is selected and the first round semi-structured interview questionnaire is completed.

Round 2: Through a second, sequential (iterative) questionnaire, a review of experts' opinions is undertaken, looking for areas of desirability and/or feasibility.

Round 3: The experts' responses are evaluated and the results are communicated to all stakeholders for consensus opinions.
Round 1: Questionnaire – semi-structured to generate qualitative data

The first round of the Policy Delphi was a qualitative semi-structured interview asking the twenty-seven 'key experts' to identify: their involvement in the CMR process; what they saw as its main strengths and limitations; key features of a new process; and issues surrounding outcomes, quality and effectiveness, as well as recommendations and improvements for the future [see Appendix 2, Interview-Q1]. The experts had been sent information about the CMR project and the Policy Delphi research method and were invited to contact the team if they had any questions about the methodology, the project and/or their role as key experts. This presentation of the research project gave prospective experts a thorough appreciation of the research aims and an outline of the methodology for those unfamiliar with the approach. Interview-Q1 was drafted and piloted among colleagues working in child protection, to allow for face validity checks so that awkward, unclear or misleading phraseology could be corrected. This phase was exploratory in design, aiming to tease out the key themes for use within the subsequent Delphi
questionnaires (Q2 and Q3). A content analysis approach was used to translate the data into a second round questionnaire (Morse and Field, 1995; Ryan and Bernard, 2001). All key respondents agreed to participate [n=27] apart from the Chief Coroner who was unable to be interviewed for the evaluation. Round 2: Questionnaire – prioritizes and rate issues identified A Policy Delphi deals largely with statements, arguments, comments, and discussion. To establish some means of evaluating ideas expressed by the expert group rating scales were established to assess the desirability and feasibility of various issues and to seek ‘quantification of earlier findings’ (Powell 2003). Through this round 2 questionnaire, the ‘experts’ were asked to rate various statements or assertions on a Likert scale from 1-7, indicating their levels of agreement or disagreement with the statements and rating them in terms of their level of desirability and feasibility – with 1 being the most desirable/feasible position and 7 representing ‘No Judgement’ i.e. that the participant did not have a judgement to make about the statement. The questionnaire did not contain a neutral position on the scales to allow the development of a debate around the potential benefits and weaknesses of a new CMR system. Q2 was subsequently e-mailed to all panel experts [n=27]. Paper copies of the questionnaire were sent to any key expert who preferred to complete the form in this format (see Appendix 3 for example). Round 3: Questionnaire – Evaluation and consensus agreement is calculated The third round questionnaire was essentially a modification of Q2, the key difference being the inclusion of both individual and statistical responses from Phase 2. Overall consensus is then estimated for all thematic areas identified. Ethical Considerations Participation in the study was by voluntary informed consent, obtained by the evaluator prior to all stages of data collection, allowing an opportunity for the key respondent to ask questions. Throughout the Policy Delphi process, experts’ anonymity was assured, enabling respondents to express an opinion without risk of reproach or (professional) obligation to follow a particular line of reasoning. Anonymity reduces the impact of dominant individuals in a group process flattening inter- and intra-institutional hierarchies.
7. DATA ANALYSIS
Qualitative data generated from the semi-structured interviews in round 1 were coded and thematically analysed using content analysis (Powell, 2003) in order to form items for the round 2 questionnaire. Quantitative data were treated as ordinal data and analysed using SPSS (Statistical Package for the Social Sciences) version 13. Descriptive statistics, incorporating measures of central tendency, inter-quartile ranges, medians, minimum and maximum ratings, and rankings, are reported where appropriate.
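To make the treatment of the ordinal ratings concrete, the short sketch below shows how such descriptive statistics could be produced for a single Delphi statement. It is written in Python purely for illustration (the analysis itself was carried out in SPSS), and the ratings and variable names are hypothetical.

    import statistics

    # Hypothetical 1-7 Likert ratings for one Delphi statement,
    # where 1 = most desirable/feasible and 7 = 'No Judgement'.
    ratings = [1, 2, 2, 3, 1, 4, 2, 3, 1, 2, 5, 2, 3, 1, 2, 2, 3, 1]

    # Treat the ratings as ordinal data: report the median and
    # inter-quartile range rather than relying on the mean alone.
    q1, _q2, q3 = statistics.quantiles(ratings, n=4, method="inclusive")
    summary = {
        "n": len(ratings),
        "median": statistics.median(ratings),
        "q1": q1,
        "q3": q3,
        "iqr": q3 - q1,
        "min": min(ratings),
        "max": max(ratings),
    }
    print(summary)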
8. EVALUATION FINDINGS
Clearly, the success of a Delphi study rests on the combined expertise of the participants who make up the expert panel. It was particularly gratifying to be able to recruit a diverse panel of participants with a broad range of knowledge and experience. A further strength was that participants were drawn from across Northern Ireland. In common with the majority of Delphi studies, three rounds of data collection were undertaken. Following the guidance of McKenna (1994a), the first-round questionnaire was open and unstructured and aimed to generate qualitative data. Participants were asked to indicate what features they would describe as the strengths and weaknesses of case management reviews, along with suggestions for improvement [see Table 2]. This generated a wealth of data, and a basic content analysis approach was used to translate it into the second-round questionnaire. One of the Delphi method's strengths is its democratic nature: each participant is equally able to make their contribution in private, unhindered by dominant or patriarchal influences.

From a sample pool of 27 participants in round 2, data were collected and analysed for 18 respondents, giving a response rate of 67%. Twenty participants returned questionnaires; however, data from two participants were discarded owing to inaccurate questionnaire completion. The research priorities were ranked overall using the mean rating for each item on the 1–7 scale. For each research priority, the frequencies of positive ratings of 1, 2 and 3 on the 1–7 scale were added together. This gave an overall consensus frequency, which was reported as a percentage. Table 3 shows the research priorities which gained full consensus from the key stakeholders, and Table 4 shows the statements which gained the least consensus after round 2 of the Delphi.
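As an illustration of the consensus calculation described above, the sketch below (again in Python, with entirely hypothetical statement labels and ratings) ranks statements by their mean rating and reports, for each statement, the percentage of respondents who rated it 1, 2 or 3 on the 1–7 scale.

    # Hypothetical round 2 ratings (1-7 scale) for three illustrative statements.
    ratings_by_statement = {
        "Statement A": [1, 1, 2, 2, 1, 3, 2, 1],
        "Statement B": [2, 1, 3, 2, 2, 1, 3, 2],
        "Statement C": [5, 2, 6, 3, 4, 1, 6, 5],
    }

    def consensus_percentage(ratings):
        """Percentage of respondents giving a positive rating of 1, 2 or 3."""
        positive = sum(1 for rating in ratings if rating <= 3)
        return 100.0 * positive / len(ratings)

    # Rank statements by mean rating (a lower mean indicates a more
    # desirable/feasible statement overall on this scale).
    ranked = sorted(ratings_by_statement.items(),
                    key=lambda item: sum(item[1]) / len(item[1]))

    for statement, ratings in ranked:
        mean_rating = sum(ratings) / len(ratings)
        print(f"{statement}: mean {mean_rating:.2f}, "
              f"consensus {consensus_percentage(ratings):.0f}%")

A statement rated 1, 2 or 3 by every respondent would give a consensus frequency of 100%, corresponding to the 'full consensus' items in Table 3; statements with low percentages correspond to the 'least consensus' items in Table 4.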
Table 2: Round 1 – Key Stakeholders' CMR Key Themes

1. Concern that the CMR process is still seen as essentially a Trust/social services review of their actions, with other agencies giving a hand
2. Have a small number of different levels of investigation that meet different needs – internal review by a senior manager; root cause analysis; external review by an independent expert; CMR; inquiry. These would need criteria set out and a system whereby the findings would be shared with Safeguarding Panels or the Safeguarding Board.
3. The need to retain a degree of independence – greater protection for agencies if there is an element of independence in the process(es)
4. Recruitment, selection, preparation and ongoing support for Chairs, Panel members and those completing IARs – this would drive up the consistency and quality of the process
5. Resources – agencies are struggling with the volume of CMRs and the associated work, even in reaching a decision about whether a review will take place. Maybe have fewer CMRs and more lower-level reviews.
6. Sample a small number of 'near miss/safeguarding incident' type cases each year to compare with findings from CMRs
7. Quality of reports and recommendations – variability in the standard of reports; far too many recommendations dealing with the same issues; better to have fewer, more focused, meaningful and measurable recommendations, and SMARTer objectives. Linked to preparation and support for Chairs
8. Link between the CMR process and other agency governance processes – ensure that recommendations are incorporated into standard processes where possible
9. Dissemination – lack of regional dissemination, and of dissemination to staff in general
10. SBNI seen as the body most suitable to oversee the process; need to remove the current ambiguity with DHSSPS
11. Training for Chairs and panel members – better support for staff who are subject to a CMR, and clarification of the roles of ACPC chairs, CMRs and DHSSPS, thus ensuring quality and effectiveness
12. Greater consistency around the ways that family members are involved – preparation and support for Chairs would assist
13. Ensure the outcomes of reviews improve practice – need a learning culture and a culture of self-assessment in agencies; regional trends/implications collated and identified; dissemination to managers/practitioners in a user-friendly format (the CEMACH '10 key points' was highlighted as a useful format)
Table 3: Round 2 – Full Consensus – Ranked 1 [100%]

- All panel members must undergo some initial training in preparation for their role
- Staff who will have responsibility for completing an individual agency review should receive training to assist them in their role
- A standardised format for the presentation of chronological data in individual agency reviews should be developed
- A standardised format for the production of individual agency reviews should be developed
- Staff should ordinarily be expected to make themselves available to be interviewed if required
- Regional trends and commonalities should be identified and disseminated regularly by the Safeguarding Board
Table 4: Round 2 – Least Consensus – Ranked