‘It doesn’t take a rocket scientist to know when someone is happy….’ An exploration of what information is of meaning to Educational Psychologists when evaluating their work.
C. L. Lowther
Doctorate in Child, Community and Educational Psychology
A thesis submitted for the degree of D. Ch. Ed. Psych
Tavistock and Portman NHS Foundation Trust/University of Essex
Date of Conferment: 18 May 2012
Contents

CONTENTS ........ 1
LIST OF FIGURES ........ 5
ACKNOWLEDGEMENTS ........ 6
ABSTRACT ........ 7
1. INTRODUCTION ........ 8
2. LITERATURE REVIEW ........ 12
2.1. INTRODUCTION ........ 12
2.2. LITERATURE SEARCH STRATEGY ........ 12
2.3. CONTEXT ........ 15
2.3.1. EVIDENCE-BASED PRACTICE ........ 15
2.3.2. POLITICS AND EPISTEMOLOGIES ........ 20
2.3.3. THE ‘PROFESSIONAL’ ROLE AND EDUCATIONAL PSYCHOLOGY ........ 22
2.4. EVALUATION ........ 26
2.4.1. WHAT IS EVALUATION? ........ 26
2.4.2. EVALUATION OF EDUCATIONAL PSYCHOLOGY PRACTICE – A REVIEW ........ 27
2.5. PSYCHOLOGICAL THEORY ........ 40
2.5.1. SYSTEMS THEORY ........ 40
2.5.2. POSITIONING THEORY ........ 42
2.5.3. PSYCHODYNAMIC THEORY ........ 43
2.6. SUMMARY AND CONCLUSIONS ........ 44
2.7. PURPOSE OF RESEARCH ........ 45
2.8. RESEARCH QUESTIONS ........ 46
3. METHODOLOGY ........ 47
3.1. INTRODUCTION ........ 47
3.2. ONTOLOGICAL AND EPISTEMOLOGICAL POSITION ........ 47
3.3. RESEARCH DESIGN ........ 48
3.4. ETHICS ........ 49
3.5. PHASE ONE: QUALITATIVE METHOD ........ 51
3.5.1. SAMPLE ........ 55
3.5.2. INTERVIEW SCHEDULE ........ 56
3.5.3. THE INTERVIEWS ........ 58
3.6. QUALITATIVE ANALYSIS ........ 59
3.6.1. PROCEDURE ........ 59
3.6.2. VALIDITY ........ 66
3.6.3. YARDLEY’S CHARACTERISTICS ........ 68
3.6.4. REFLEXIVITY ........ 70
3.7. PHASE TWO: QUANTITATIVE METHOD ........ 71
3.7.1. PILOT PHASE ........ 72
3.7.2. SAMPLE ........ 74
3.7.3. PROCEDURE ........ 75
3.7.4. DATA ANALYSIS ........ 77
3.7.5. RELIABILITY AND VALIDITY ........ 77
4. FINDINGS ........ 79
4.1. INTERPRETATIVE PHENOMENOLOGICAL ANALYSIS OF INTERVIEWS ........ 79
4.1.1. THEME 1: ROLE ........ 82
4.1.1.1. Personal Power ........ 86
4.1.1.2. Innovation ........ 90
4.1.1.3. Thought ........ 92
4.1.2. THEME 2: COMPLEXITY ........ 95
4.1.2.1. Systems ........ 95
4.1.2.2. Change ........ 99
4.1.2.3. Inclusion ........ 103
4.1.3. THEME 3: MEASURES AND OUTCOMES ........ 106
4.1.3.1. External Tools ........ 107
4.1.3.2. Internal Tools ........ 118
4.1.3.3. Perception ........ 125
4.1.4. CONNECTING CONCEPT: JUDGEMENT ........ 127
4.2. QUANTITATIVE RESULTS ........ 130
5. DISCUSSION ........ 135
5.1. ROLE ........ 135
5.2. RESEARCH QUESTIONS ........ 138
5.2.1. WHAT INFORMATION DO EDUCATIONAL PSYCHOLOGISTS CONSIDER TO BE RELEVANT, IMPORTANT, VALUABLE AND MEANINGFUL WHEN THEY EVALUATE THEIR WORK? ........ 138
5.2.2. HOW CAN THIS INFORM THE DEVELOPMENT AND EVALUATION OF TOOLS USED TO EVALUATE EDUCATIONAL PSYCHOLOGISTS’ PRACTICE? ........ 141
5.2.3. WHAT IS THE MEANING (THE RELEVANCE, IMPORTANCE AND VALUE) OF EVALUATION WITHIN THE LIVED EXPERIENCE OF AN EDUCATIONAL PSYCHOLOGIST? ........ 142
5.3. RESEARCH LIMITATIONS ........ 147
5.4. FUTURE RESEARCH ........ 149
5.5. NEXT STEPS ........ 151
5.6. RESEARCHER’S REFLECTIONS ........ 151
6. CONCLUSION ........ 153
7. REFERENCES ........ 155
8. APPENDICES ........ 171
8.1. APPENDIX 1: LITERATURE SEARCH INCLUSIONS/EXCLUSIONS ........ 172
8.2. APPENDIX 2: INFORMATION LETTER TO INTERVIEW PARTICIPANTS ........ 174
8.3. APPENDIX 3: CONSENT FORM FOR INTERVIEW PARTICIPANTS ........ 176
8.4. APPENDIX 4: INFORMATION FOR PILOT PHASE PARTICIPANTS ........ 177
8.5. APPENDIX 5: ETHICS APPROVAL ........ 178
8.6. APPENDIX 6: INVITATION TO INTERVIEW PARTICIPANTS ........ 179
8.7. APPENDIX 7: PILOT INTERVIEW SCHEDULE ........ 180
8.8. APPENDIX 8: THEMES AND EXTRACTS FOR IPA ........ 181
8.9. APPENDIX 9: EXCEL SPREADSHEET OF ALL RESPONSES TO ALL QUESTIONNAIRES ........ 215
8.10. APPENDIX 10: FEEDBACK FROM INTERVIEW PARTICIPANTS ........ 216
8.11. APPENDIX 11: PROPOSED PRACTICE EVALUATION FORM ........ 217
8.12. APPENDIX 12: CONTENTS OF CD-ROM ........ 218
List of Figures
Figure 1: Hierarchy of Evidence ________ 17
Figure 2: Interview Schedule ________ 57
Figure 3: Iris Transcript Initial Readings (example) ________ 62
Figure 4: Rose Transcript Initial Readings (example) ________ 63
Figure 5: Iris Transcript Later Reading (example) ________ 64
Figure 6: Rose Transcript Later Reading (example) ________ 65
Figure 7: Prevalence of Themes ________ 66
Figure 8: Clustering of Themes ________ 67
Figure 9: Final Questionnaire ________ 76
Figure 10: Overall Theme Schematic ________ 81
Figure 11: Role ________ 82
Figure 12: Complexity ________ 95
Figure 13: Measures and Outcomes ________ 106
Figure 14: Frequency of Questionnaire Responses - All Statements ________ 131
Figure 15: Frequencies of ‘Agree’ Responses Compared to ‘Disagree’ Responses (Combined) ________ 133
Acknowledgements
My thanks go to:
David and Euan, without whom this would not have been possible
Lesley for setting me off on the journey and Jeff for bringing me down the home straight
The ‘EPS’ for all the support and my participants for everything they shared
My proof readers
And the Usual Suspects for refreshments and entertainment
Abstract

Evaluation is a central feature of Educational Psychologists’ work because of a professional commitment to ‘accountable and ethical’ practice (Frederickson, 2002, p. 106) and the need to prove worth within the current financially restricted public sector. A number of studies have been published using different tools with which to undertake this evaluation; however, a consistent approach is yet to emerge. This may be due to a lack of consensus about what information constitutes legitimate evidence, both within Educational Psychology and within the broader field of evidence-based practice. Although many authors have philosophised about acceptable sources and types of knowledge, no research has asked what information Educational Psychologists find meaningful when they evaluate their work. Using a mixed methods approach, this research explores this question. Six Educational Psychologists working in a local authority Educational Psychology Service were interviewed about their experiences of their work and what information let them know that they had made an impact. Using Interpretative Phenomenological Analysis, a number of key themes emerged from the interviews. What the Educational Psychologists think about their role was prominent, as was the complexity of the work they undertake and the process of measuring the change they facilitate. A diverse range of information drawn upon as evidence of change was described, including standardised measures, qualitative feedback, target-based evaluations and thinking, feeling, knowing, seeing and reflection, or ‘professional opinion’. In addition to the interviews, questionnaires were given to all Educational Psychologists in the same service attending a team development day. The quantitative findings likewise suggest that a range of information is valued by Educational Psychologists when they evaluate their work. It is thus proposed that a tool which enables Educational Psychologists to collate different types of information from a number of sources may be useful when evaluating their practice.
1. Introduction

In May 2010, a new coalition government came into power with promises from the Prime Minister, David Cameron, to ‘take Britain in a historic new direction’ (BBC, 12 May 2010). This ‘new direction’ involves planned and already implemented changes to policy in many areas of public life, for example, health, equalities, education, the environment, foreign affairs, immigration, crime and banking (HM Government, 2010). As with any political shift, these new policies have been met with mixed responses and it is yet to be seen whether their successes and/or failures will be judged ‘historic’ in the future.

Certain of these changes signify important new directions for Educational Psychologists. Although the education of children with Special Educational Needs has been seen as a central feature of much of Educational Psychologists’ work (Fox, 2002), comments made by the Minister for Children, Sarah Teather, imply a broadening of the Educational Psychologist’s role. She has said, ‘I would like Educational Psychologists to play a greater role in offering therapeutic advice rather than just being used by local authorities as a gatekeeper to services’ (TES, 29 October 2010) and ‘Educational Psychologists have a valuable role working with children and families in schools, and as part of early intervention projects’ (DfE, 14 November 2011).

Statements such as these do not appear to have been made on a whim. Published Departmental papers corroborate Teather’s assertions. In the Support and aspiration: A new approach to special educational needs and disability Green Paper (DfE, 2011a) a ‘radically different system to support better life outcomes for young people’ (p. 4) is proposed and will be legislated for in the summer of 2012 (DfE, 2012b). The recommendations highlight the contribution made by Educational Psychologists and call for a review of their training. The Green Paper states:
We know that Educational Psychologists can make a significant contribution to supporting families and enabling children and young people to make progress with learning, behaviour and social relationships. … We want to encourage Educational Psychologists, as well as local authorities and schools that commission their services, to work in a more flexible manner that is responsive to the needs of the local community. (DfE, 2011a, p. 104)

Similar positive references saturate the Final Report (DfE, 2011b) for the above proposed review. Frequent mention is made of ‘the developing Educational Psychologist’s role’. A ‘wider’ offer than the statutory duty required of Educational Psychologists as part of the Special Educational Needs Code of Practice (DfES, 2001) is promoted and the document states:

Educational Psychologists have important roles in improving the opportunities of all children and young people, both in terms of local authority statutory responsibilities and more universal early intervention and preventative support … [They] are employed to provide professional advice on children and young people’s educational and emotional development and by understanding their needs and their educational contexts, they are well placed to identify and provide them with effective support to improve their life chances. (DfE, 2011b, p. 5)
These quotes suggest reasons for optimism from the Educational Psychology profession. However, these same papers contain notes of caution. Concerns are voiced in the Final Report that budget cuts will result in ‘targeted short term statutory intervention’ being prioritised over ‘long term universal solutions’ (ibid., p. 6). Similarly, the Green Paper specifies that ‘the current financial climate does not allow any government to be careless with resources. We, as well as local partners, must invest in a way that enables professionals to provide the best possible support for families and base this investment on evidence of what works’ (DfE, 2011a, p. 15). A very recent paper gives an indication of the return expected from this investment: the School Funding Reform document makes it clear that there are expectations ‘that children and young people get the right educational support that will enable them to aspire, achieve and fulfil their potential’ (DfE, 2012a, p. 36).

On 20th October 2010, the current government released their Spending Review (HM Treasury, 2010). This document outlines plans for government spending until the financial year 2014-15. It describes how ‘the Coalition Government will carry out Britain’s unavoidable deficit reduction plan’ and says:

This is an urgent priority to secure economic stability at a time of continuing uncertainty in the global economy and put Britain’s public services and welfare system on a sustainable long term footing. The Coalition Government inherited one of the most challenging fiscal positions in the world. Last year [2009], Britain’s deficit was the largest in its peacetime history …. The UK currently spends £43 billion on debt interest, which is more than it spends on schools in England. As international bodies such as the IMF and OECD have noted, reducing the deficit is a necessary precondition for sustained economic
growth. Failure to take action now would put the recovery at risk and place an unfair burden on future generations. (ibid., p. 5)

Educational Psychology is thus one of many fields within the public sector that are under pressure to evidence their worth. Although there are clear opportunities for Educational Psychologists to ‘pioneer innovative and more effective forms of support for children and young people’ (DfE, 2011a, p. 99), within this ‘financial climate’ any public sector provision must be seen to offer good value for money and a high quality service. Educational Psychologists are thus encouraged, more than ever, to ‘have clear aims and objectives and a method for evaluating performance’ (Kelly and Gray, 2000, p. 8) because ‘it is imperative that the skills, knowledge and experiences that an Educational Psychologist brings to a situation are known to add value to other work that has already been done’ (Farrell, Woods, Lewis, Rooney, Squires and O’Connor, 2006, p. 100).

This research explores the ways in which Educational Psychologists evaluate their performance and show that they add value to supporting children, young people, families and schools, and specifically asks: what information do Educational Psychologists find meaningful when they evaluate their work?
2. Literature Review

2.1. Introduction

In this chapter I review the published approaches that Educational Psychologists have taken to evaluate their work. My specific focus is on evaluations which ask how an individual Educational Psychologist’s work has been perceived or has affected outcomes. I also discuss the complementary subject areas of evidence-based practice, politics and epistemology, and the professional role of the Educational Psychologist. Although these latter subjects seem somewhat indirectly related to this research, I conclude that their consideration provides context to an exploration of what kinds of information give a meaningful insight into whether an Educational Psychologist’s involvement has made an impact or not. I also present brief descriptions of psychological theories which, through analysis of the interviews, were felt to be of relevance.
2.2. Literature Search Strategy

I undertook two systematic literature searches but also frequently revisited the literature, both in response to my reading around the topic and in response to the findings arising from my research. The content of the literature review in this thesis therefore represents an evolution of ideas, both informing and responding to the research and its methodology.

My literature searches started in August 2010. My initial aim was to explore the field, so I used relatively broad parameters. I found that broad searches produced an impossibly large number of hits to analyse effectively. By adding a single term to the search, however, I found
that these huge numbers reduced substantially, often to zero results found. I used the Google Scholar, Informaworld and Ingenta search engines. My first search looked at ‘evaluation’ and ‘educational psychology’, ‘evidence’ and ‘educational psychology’, and ‘outcome’ and ‘educational psychology’ anywhere in articles appearing after 2000. These searches produced many articles to look at and I judged the first 200 (assuming they were listed in ‘relevance’ order) by reading the title and, if that seemed relevant to my research, reading the abstract. I used tacit criteria of relevance at this point, looking at any articles which had used any evaluation techniques to look specifically at Educational Psychology practice. I also searched a variety of permutations for ‘target monitoring evaluation’ and ‘goal attainment scaling’. I also used references from articles I found particularly insightful to expand my search (which thus included books), as well as articles and book chapters I had already read as a result of my course programme and practice.

Just over a year later (October 2011), I revisited my literature search, taking a more systematic approach and drawing on some of the information obtained through my interviews. I started with Google Scholar, which produced 17 hits for ‘evaluation’ and ‘educational psychology’ in the title for articles published after 2000. Only one article met the inclusion criterion of being relevant to the evaluation of Educational Psychologists’ work. Using the same search parameters but searching ‘anywhere’, 34,200 articles were returned; limiting to those published since 2010, 16,100 were found. Using ‘evaluation’ and ‘education’ in the title field, 7,800 were found, 6,450 of which qualified as ‘social science’ articles. I felt that these numbers were too large.
I switched to EBSCO and looked at the PsycARTICLES, PsycINFO and Psychology and Behavioural Sciences Collection databases. For ‘evidence-based practice’ in the title and ‘educational psychology’ in the subject field I obtained 1 result. For ‘evidence-based practice’ anywhere and ‘educational psychology’ in the subject field I obtained 24 results. I also looked at ‘evaluation’ anywhere and ‘educational psychology’ in the subject field and obtained 597 results, using the search engine’s inclusion criteria that the articles appear in peer reviewed journals, published after 2000 and related to the psychology profession and its research. On this occasion, I limited my search to articles with full text online availability. Eighteen of the articles examined met all of these criteria as well as showing thematic congruence with the findings from my analysis of my interviews (excluding those already obtained through previous searches). Again I used reference lists to broaden my reading. I ran the same searches in March 2012 to ensure I included the most recently published articles possible. I recorded my searches in an Excel spreadsheet (see Appendix 1).

Gough (2007) provides a framework with which to appraise the quality and relevance of research to a particular literature search. His paper speaks specifically about judging whether the findings of a particular study meet quality standards for inclusion in a review. From my initial explorations of the literature, I was not able to find any research which related directly to my research questions, which is why I pursued this line of study. I did, however, find examples of methods of evaluating Educational Psychology practice, which are presented here. Other than these examples, I found many position papers which were not research based at all. It was therefore difficult to apply Gough’s framework to these papers. However, for both example and position papers, I used the principle of ‘saturation’, to which Gough refers, to inform my search strategy and the construction of my literature review. When I found myself reading papers which contained ‘no extra information’ (p. 219), I excluded these. I
complemented my saturation principle by comparing my literature review with reference lists in published papers. What follows is therefore a representation of the arguments in the literature and a review of the evaluation research that has been published to date.
2.3. Context

2.3.1. Evidence-Based Practice
The approach known as ‘evidence-based practice’ initially arose within the field of medicine and the care of individual patients (Frederickson, 2002). In their frequently cited article, Sackett, Rosenberg, Gray, Haynes and Richardson (1996) endeavoured to define evidence-based medicine by describing both what it is and what it is not. Their discussion raises the following points:
• Evidence-based practice is not a ‘slavish, cookbook’ approach. ‘External clinical evidence can inform, but can never replace, individual clinical expertise, and it is this expertise that decides whether the external evidence applies to the individual patient at all and, if so, how it should be integrated into a clinical decision’ (p. 72).
• Expertise involves ‘thoughtful identification and compassionate use of individual [client's] predicaments, rights, and preferences in making … decisions about their care’ (p. 71).
• Evidence is the most robust available research which best answers the question posed by the client’s ‘predicament’.
Sackett, et al. (1996) further categorically state that ‘evidence-based medicine is not restricted to randomised trials and meta-analyses’ (p. 72, emphasis added). In spite of this assertion, randomised trials and meta-analyses form the pinnacle of what is known as the ‘hierarchy of evidence’ (Parry, Roth and Fonagy, 2006). As a result, priority is often given to randomised trials and meta-analyses when research is reviewed to find out ‘what works’. The result has been described as ‘a confined access to clinical knowledge, [which] incorporates only questions and phenomena that can be controlled, measured, counted, and analysed by statistical methods’ (Malterud, 2001, p. 397). Thus, although Sackett, et al. (1996) are often referenced in the literature, their views have not successfully drawn a line under what constitutes ‘evidence-based practice’, either within their own field of medicine or beyond.

Fox (2002; 2011) ascribes certain benevolent motives behind the adoption by government of evidence-based practice outside the field of medicine. Lack of consistency had raised questions about quality and inequality (Fox, 2003) and evidence-based practice was seen as the means to levelling the field (Fox, 2002). Biesta (2007), in contrast, suggests that the drive for establishing evidence-based practice in education forgets ‘the crucial role of values’ (p. 4). He questions whether it can ever offer a ‘neutral framework’ for professional practice (p. 6), as what is seen as ‘educationally desirable’ may masquerade as ‘effective practice’ (p. 5). Clegg (2005) also raises concerns about the politics of evidence-based practice, asserting that it ‘serves an ideological function that is disguised through the rhetoric of independence and the idea that policy is disinterested and objectively informed’ (p. 419). She views this function as an ‘attack on professionals and professional knowing’ (p. 418), motivated by cost implications (p. 417) and in juxtaposition to Sackett, et al.’s (1996) insistence that evidence-based practice depends on careful implementation by an ‘expert’.
Criticisms of evidence-based practice are not all politically or ideologically loaded. Other objections tend to involve arguments about the types of research that are accepted as evidence. I refer once again to the ‘Hierarchy of Evidence’ (Parry, et al., 2006), which clearly categorises the most desirable or credible evidence and is reproduced in Frederickson (2002, p. 97) for consideration:

1. Several systematic reviews of randomised controlled trials (RCTs)
2. Systematic review of RCTs
3. RCTs
4. Quasi-experimental trials
5. Case control and cohort studies
6. Expert consensus opinion
7. Individual opinion

Figure 1: Hierarchy of Evidence: The suggested relative authority of various types of research.
Frederickson endorses this hierarchy in her widely referenced paper promoting the use of evidence-based practice within Educational Psychology. In discussing the use of Randomised Controlled Trials (RCTs), Frederickson does highlight their limitations, particularly in terms of methodological issues. She refers to the difference between efficacy and effectiveness, defined by Ingraham and Oka (2006): efficacy relates to findings of research in controlled settings and effectiveness to such findings seen in a natural context. Educational contexts can thus be said to be more suited to effectiveness research. This highlights one of the major concerns raised about the hierarchy: that the prioritising of RCTs and meta-analyses in education overlooks considerations of appropriateness where the subjects of the research are predominantly children within an uncontrolled school setting (Fox, 2003), where it is very difficult to manipulate variables with any precision (Stoiber and Waas, 2002). Slavin (2008) also discusses methodological issues relating to meta-analyses. He states that the number of studies relating to particular programmes tends to be very small and only positive outcomes tend to be published, creating the potential for bias. Frederickson
(2002) therefore shows some flexibility in recommending a pragmatic approach to accumulating evidence within Educational Psychology; in other words, ‘the research approach of choice will depend on the question asked’ (p. 99). She mentions the use of qualitative research in answering certain questions but goes on to state that ‘where the question concerns evidence for the efficacy of an intervention then RCTs, and especially the systematic review of several such trials, is accepted as the gold standard for judging whether a treatment does more good than harm’ (op cit., emphasis added). Although Frederickson acknowledges that it is more practical for Educational Psychologists to focus on effectiveness research through evaluating their own work (for which she recommends the use of Goal Attainment Scaling), her paper does seem to prioritise quantitative over qualitative information in research, and presents a somewhat uncritical view of the hierarchy, in which robust qualitative or mixed methods research does not appear.

Fox (2003) is more circumspect. He reviews literature which explores the question of what constitutes ‘quality research’ in education (p. 94) and suggests that an answer is yet to be reached. He also discusses the practicalities of evidence-based practice for Educational Psychologists (EPs), considering both the accessing of research and the appropriateness of the hierarchy for Educational Psychology. He suggests that ‘many EPs will be appalled by the concept of evidence-based practice’ as defined by the ‘gold standard’ of the hierarchy (p. 95) and characterises quantitative approaches as potentially ‘dehumanising’ (p. 96). He raises the issue of epistemology, which he asserts is ‘fundamental to evidence-based practice for EPs’ (op cit.). He contrasts positivism with constructionism¹ as the focal point of this concern, asking questions about Educational Psychologists’ beliefs about knowledge and how these views
inform professional practice. I wonder whether these epistemological musings prompted Fox to call his position paper ‘Opening Pandora’s Box’. Elsewhere, the arguments about epistemological bases for research have reached such intensity as to have been labelled ‘paradigm wars’ (see below). Alternatively, Fox’s (2003) title may be a prediction about resistance to evidence-based practice, which he, like Frederickson (2002), suggests is overcome through a ‘commitment to researching our own individual practice’ (Fox, 2003, p. 101).

Kazak, Hoagwood, Weisz, Hood, Kratochwill, Vargas and Banez (2010) propose a meta-systems approach to evidence-based practice. They describe the various contexts influencing a child’s development in particular and the many professional systems or agencies that may become involved. They suggest that interventions should take place within a child’s natural setting, be context appropriate and delivered ‘in partnership with families and local communities’ (p. 87). From their perspective, evidence-based practice is ‘a scientifically minded, culturally responsive approach, characterized by continual monitoring of interventions provided, the child and family’s response, and events and conditions that impact treatment’ (p. 86).

This broader view of what evidence-based practice entails may be flourishing. Collecting evidence through ‘monitoring’ practice may be developing into an art which Fox (2011) refers to as ‘practice-based evidence’ (p. 328). What this means and the form this endeavour will take depends on ‘our view of what is good quality research and ultimately, therefore, our view of knowledge’ (Fox, 2003, p. 100). However, this shift towards practitioner-led research can be seen as a celebration of practitioners’ voices (Matthews, personal communication), giving prominence to the potential quality of evaluation and claiming a space for it in a revised hierarchy of evidence.

¹ Constructionism/constructionist is seemingly used interchangeably with constructivism/constructivist by different authors. This thesis will use the term constructionist/constructionism due to its closer linguistic relation to the construction of meaning which the paradigm embraces.
2.3.2. Politics and Epistemologies
Ontology is concerned with the nature of the world and reality (Fox, Martin and Green, 2007) while epistemology examines the nature of knowledge and its production (Willig, 2001). Pajares (2003) highlights the high cost of not acknowledging ontological and epistemological positions, i.e. a researcher’s ‘worldview’ or ‘paradigm’. He suggests that without a clarification of these foundations, the meaning behind research findings and their resulting theories could be distorted. Fox’s (2003) commentary about ‘our view of knowledge’ relates to the adoption of a particular paradigm and its associated assumptions. There are a number of different paradigms that researchers and practitioners draw from and each involves particular assumptions about reality and knowledge. These paradigms can be considered to sit on a continuum with positivism and social constructionism at opposing ends, critical realism somewhere in the middle and pragmatism as a possible mediator. According to Fox (2003), the hierarchy of evidence discussed above ‘is based on a logical positivist view of reality’ (p. 43). This means that the hierarchy assumes the ‘world is observable and … knowledge [needs] to be value-free and not affected by the philosophical or cultural beliefs of the day’ (Fox, et al., 2007, p. 10). It is this assumption of ‘value-free’ research that so concerns Biesta (2007) and Clegg (2005) and which makes the debate about which research is most valued a political one (ibid.). Robson (2011), however, suggests that ‘the view of scientists as value-free, totally objective, machine-like automata … is discredited’ (p. 15) and discusses ‘post-positive’ views of research. He states, ‘Post-positivists believe that a reality does exist but consider that it can only be known imperfectly and probabilistically’ (p. 22). However, as per Popper (2002), a theory about that reality may be corroborated and ‘its degree of corroboration will increase
with the number of its corroborating instances’ (p. 268). Critical realism and pragmatism can be grouped within a post-positive paradigm. A critical realist position acknowledges that ‘the data the researcher gathers may not provide direct access to … reality’ (Willig, 2001, p. 13). There is the understanding that realities are contextual and are a function of an interplay between individuals and their environments (Pawson and Tilley, 1997, p. xii). Explanations about social phenomena thus require reference to that context, with critical realists exploring the why and how of causal relationships (ibid.; Matthews, 2003). Pragmatism is offered as ‘a practical and outcome-oriented method of inquiry’ (Johnson and Onwuegbuzie, 2004, p. 17) which cannot be reduced to a ‘distinctive philosophical thesis’ (Talisse and Aiken, 2011, pp. 3 – 4). Pragmatists share a ‘common aspiration’ which encompasses a wide range of perspectives but involves a commitment ‘to taking seriously the actual practice of human investigators’ (ibid., p. 4). Pragmatism therefore focuses on the use of the most appropriate methodologies pertinent to the research question (Cresswell, 2003). Johnson and Onwuegbuzie (2004) offer pragmatism as a resolution for the ‘paradigm wars’ (p. 17) which Robson classifies as a conflict between ‘quantitative and qualitative social researchers’ (p. 18). These ‘wars’ appear to have been raging since the 1980s. Gage (1989), in his amusing sketch about the position of social research in an imagined 2009, gives three possible outcomes of the paradigm wars. The first of these is that ‘objective’ scientific method died ‘of the wounds inflicted by the critics’. The second is that ‘peace had broken out’ and that researchers from all worldviews worked together co-operatively in a pragmatic utopia. Finally, a third option is presented, that ‘nothing … had really changed’ (p. 10). Although I am not sure which of option one or three Gage felt would be the worst case scenario, it is clear that he was hoping for option two.
Unfortunately, it seems that the utopia has not been realised, especially with regard to the hierarchy of evidence and the gold standard assigned to RCTs. A published argument between Oakley (2006) and Hammersley (2008) illustrates the debate. While embracing pragmatism, Oakley (2006 and in Oakley, Gough, Oliver, and Thomas, 2005) characterises resistance to the hierarchy as ‘conservative responses to real or imagined threats’ (p. 64). Oakley defends randomised controlled trials (RCTs) and systematic reviews which she points out are ‘practical, feasible, ethical and useful’ (op cit.). In response, Hammersley (2008) describes Oakley’s argument as ‘defective’ (p. 7) and strongly criticises RCTs and systematic reviews. Reading both papers in conjunction, however, reveals a common standpoint: both reiterate the importance of contextual factors and a consideration of the political implications of research. These political implications may be why the hierarchy is resisted; less because of what Oakley (2006) calls the threats ‘of “new” technology’ (p. 64) and more due to concerns about how this new technology may affect education when no questions are asked about what is valid to whom (Ingraham and Oka, 2006), who chooses what is valued and prioritised, and who holds the purse strings (Cherry, 1998).
2.3.3. The ‘Professional’ Role and Educational Psychology
Moore (2005), like Fox (2003), argues that Educational Psychologists’ practice is an expression of their epistemological and ontological positions and highlights the ethical need to question and explore these positions and what they mean for practice, instead of taking them for granted. He disputes the appropriateness of following ‘a procedure’ in Educational Psychology and refers to working ‘within the tangled complexities of the social world, where judgements often have to be made on the availability of only partial information and where the ability to deal with ambiguity and uncertainty … [is] paramount and perpetual’ (p. 111).
Social constructionism is a paradigm which takes into account complexity, ambiguity and uncertainty and involves exploring the way human beings construct subjective meanings to understand their experiences in diverse and complex ways, both at an individual level and through group consensus (Cresswell, 2003; Lincoln and Guba, 2003). These constructions are changing and various and there may be multiple meanings ascribed to different phenomena (Mertens, 2005, p. 14). For this reason, a researcher working within this perspective specifically explores the ‘complexity of views’ in an attempt to ‘make sense of … the meanings others have about the world’ (Cresswell, 2003, p. 8). Because of the ‘tangled complexities’ inherent in their work, it may be that Educational Psychologists currently lean more towards social constructionism as a paradigm informing their practice (Fox, 2003).

Like Moore, Baxter and Frederickson (2005) ask if Educational Psychologists can perform ‘more radical’ functions (p. 89) in response to the complexities of the lives with which they come in contact. They identify the limitations of statutory assessment as a way in which Educational Psychologists have avoided the need to justify their worth, reiterating the importance of evaluation. However, Annan (2005) wonders whether such a limited understanding of an Educational Psychologist’s role restricts ‘opportunities for development’ and curtails the ability of an Educational Psychology Service to meet the community’s needs (p. 270).

Biesta (2007) describes ‘technological’ practice in education, which he associates with a too strict adherence to evidence-based practice. It is this move towards technological practice that Clegg (2005) characterises as an attack against professional knowing. In an open, honest and reflective paper, Mercieca (2009) examines what this has meant for her practice in her role as an Educational Psychologist in Malta. She links technological practice to adopting an
‘unthinking’ position in order to avoid ‘uncertainty’ and feelings of inadequacy as a professional. In what a colleague described as a ‘therapeutic’ exploration of the role of the professional (Garwood, personal communication), Mercieca asserts that ‘the better professional’ is the one ‘who engages with the “not knowing” which arises … [from] the complexities in a situation’ (p. 171). She describes ‘not knowing’ not as a failure on the part of the professional but rather as ‘a better guarantee that a child will not be needlessly categorised in an unthinking attempt to put messy situations in order’ (p. 173). In thinking about a particularly complex case, Mercieca says that, ‘if a procedure was available, then all that was necessary was to follow it, and the intervention of a psychologist was not warranted’ (p. 175).

Nixon (2004) highlights the complexity inherent in professional intervention and the corresponding need for professional ‘thoughtfulness’. Leiper (1994) notes the positive angle to the statement ‘a professional qualification is a licence to make mistakes’ (p. 197). With professional thoughtfulness comes the practice of ‘shouldering the burden of responsibility for appraising one’s own work and reflecting upon it in order to learn the lessons that experience has to teach’ (op cit.).

Fox (2003) references the work of Schön (1983) in discussing the value of self-appraisal and reflective practice. Schön describes how difficult it can be for anybody to speak about actions which are intuitively and spontaneously enacted and links this to the practice of professionals. He calls this ‘knowing-in-action’, which involves judgements and choices for which it is almost impossible to give coherent reasons, even when drawing from an evidence base (p. 50). However, Schön goes on to state that ‘we sometimes think about what we are doing’ (p. 54), an activity he calls ‘reflecting-in-action’. He highlights the importance of reflection on
practice, both ‘in-action’ and at a later time. He says that reflective practice is a way to avoid working in a routinized and repetitive manner, which may result in the practitioner being ‘selectively inattentive to phenomena that do not fit the categories of his (sic) knowing-in-action’ (p. 61). He acknowledges that reflective practice is not considered ‘legitimate’ but calls for a study of reflection-in-action so that this legitimacy may be enhanced.

Schön (1983) perhaps caused a groundswell towards more acceptance of reflective practice. However, the emphasis on evidence-based practice and the hierarchy perhaps heralds a return to the world of professionals ‘locked into a view of themselves as technical experts… [who] have become too skilful at techniques of selective inattention, junk categories and situational control … for them, uncertainty is a threat; its admission is a sign of weakness’ (Schön, 1983, p. 69). It may be thus that the ‘professional’ is replaced by the ‘technical expert’ whose work, as per Quicke (2000, p. 257), is ‘like painting by numbers’. Ironically, such a professional is potentially made less effective by focusing on ‘what works’ and ignoring uncertainty.

The tacit knowing held and applied by proficient practitioners … represents a valuable form of clinical knowledge, which has been acquired through experience, and which should be investigated, shared, and contested. (Malterud, 2001, p. 397)

A potential means by which professional knowledge may be investigated, shared and contested is the practice of evaluation. As proposed by Frederickson (2002) and hinted at by Fox (2002, 2003, 2011), evaluation may be a way in which Educational Psychologists can contribute to the evidence base by which their practice may be informed.
2.4. Evaluation

In 2000, a report into the role, good practice and future directions of Educational Psychology from the then Department for Education and Employment urged Educational Psychology Services to ‘have clear aims and objectives and a method for evaluating performance’ (Kelly and Gray, 2000, p. 8). A later review of the functions and contribution of Educational Psychologists reiterates this emphasis on evaluation by stating that ‘it is imperative that the skills, knowledge and experiences that an Educational Psychologist brings to a situation are known to add value to other work that has already been done’ (Farrell, et al., 2006, p. 100, emphasis added). Fallon, Woods and Rooney (2010), although applying their discussion to the Every Child Matters outcomes of the previous government, continue in this vein, pronouncing an expectation that Educational Psychologists will ‘have to ensure … automatic mechanisms for evaluating the work in which they are engaged’ (p. 10, emphasis added). Even without these mandates, evaluation is what Frederickson (2002) calls ‘a key requirement of accountable and ethical professional practice’ (p. 106).
2.4.1. What is Evaluation?
An attempt to define the term evaluation is less clear-cut than one would expect. Mertens (2005) lists numerous, sometimes contradictory, definitions in support of her assertion that evaluation is a contested term (see pp. 45 – 48). Stufflebeam (2001), however, contributes: ‘Evaluation means a study designed and conducted to assist some audience to assess an object’s merit and worth’ (p. 11). This definition on its own fails to take into account the potentially political nature of such studies, which Fox, et al. (2007) call the ‘shadow side of evaluation’ (p. 73), and does not raise the question of who decides those levels of merit and
worth. However, Stufflebeam is very clear that evaluations motivated by political objectives are more aptly referred to as ‘pseudoevaluations’ (p. 13). Leiper (1994) also warns against the defences that may arise from the threat that evaluation may pose, especially if it is felt to be ‘an accusation of inadequacy’ (p. 201). He specifically recommends avoiding ‘off-the-peg’ solutions which ‘can be used to reduce the need to think’ (op cit.). Potential disparities between ‘scientific method’ and ‘reflection, learning and thought’ in evaluation appear to be a recurrent theme in the consideration of what evaluation involves. It may be that the more pertinent question regarding ‘evaluation’ is how it is done, rather than what it is.

Returning to Stufflebeam’s (2001) definition, establishing the worth (or, in more commonly used parlance, the impact) of an Educational Psychologist’s involvement through evaluation is to a certain degree taken for granted. However, how this worth is established has been a bone of contention in the literature and is a question which has not yet received a satisfactory answer.
2.4.2. Evaluation of Educational Psychology Practice – A Review
Over forty years ago, Kiresuk and Sherman (1968) bemoaned the lack of consistency in defining and measuring the impact of interventions in the world of mental health provision. A review of the literature, as well as conversations with colleagues working with different local authority Educational Psychology Services, suggests that evaluation of Educational Psychology practice is similarly lacking in consistency. Turner, Randall and Mohammed (2010) make suggestions as to why this might be:

There are plenty of obstacles to finding out about the effectiveness of the EP’s job; workload, being the ‘ghost in the machine’, the notion that
evaluation is driven by ‘bean counters’, who do not realise the sophistication and complexity inherent in educational psychology. There is also, perhaps, the unarticulated fear: what if EPs were to find out their impact was not so great after all? (p. 313)

Studies have nevertheless been undertaken to evaluate the work of Educational Psychologists (EPs), both to meet external expectations for evaluation and to honour professional commitments to improving practice. Dunsmuir, Brown, Iyadurai and Monsen (2009) describe the different sources of information that Educational Psychology Services have used in evaluating impact. They critically suggest that there has tended to be a focus on outputs (what Educational Psychologists did) instead of outcomes (what was achieved by that involvement; Sharp, Frederickson and Laws, 2000, p. 103) and an emphasis on how clients have perceived the service they have received.

In contrast, Anthun (2000) proposes that clients are best placed to define markers of quality against which to evaluate the service they received. As a result of his literature review around total quality management approaches, he involved parents as particularly valued ‘consumers’ of Educational Psychology practice. Anthun’s study was specifically aimed at exploring which aspects of this practice qualify as quality criteria, but also asked for satisfaction indicators from the participants involved. Although Anthun’s research was carried out in Norway, he outlines many common characteristics between Educational Psychology Services in his country and the United Kingdom. The sample consisted of all cases from twelve Educational Psychology Services within a delineated age band. A personal details questionnaire, two Likert-type indexes and a ranking
form were posted to parents in the sample. Surprisingly for what appears to be a fairly complex battery of questionnaires, his response rate was 50%, resulting in a large sample size of 374. Follow-up enquiries and returned empty documents suggest that those who did not respond either felt they had not had sufficient contact with the service to comment or were ‘immigrant and refugee parents’ (p. 147). This evidence of selective response may have biased Anthun’s findings, but it is praiseworthy that unreturned questionnaires were followed up in this way.

Anthun (2000) used inferential statistics to establish the degree to which parents felt the service they had received was ‘good’. He triangulated these results between two different surveys and reports significant findings across the board. In terms of satisfaction, the findings were positive. The data analysis appears robust and thorough, although effect size is not reported. An examination of time factors indicates that this may be an area for improvement, with parents feeling that referral uptake took too long.

To determine which aspects of service provision were felt to be most important to parents, respondents were asked to ‘score’ items in the satisfaction questionnaires. Not all did this, and a total of 267 participants contributed to this part of the research. By looking at both mean scores and aggregate scores, Anthun determined which items were most valued. This analysis feels less robust, and intuition appears to have been applied in considering items which, for instance, obtained low aggregate scores but high mean scores (due to fewer parents scoring those items; see the sketch after this paragraph). These findings suggest that parents rate being listened to, being kept informed of Educational Psychologists’ intentions, feeling secure during the process and having good solutions to their children’s problems as the most important aspects to consider when assessing the quality of an Educational Psychologist’s involvement.
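The mean/aggregate tension can be made concrete with a small sketch. The figures below are invented for illustration and are not Anthun’s data:

```python
# Hypothetical item scores: each parent could choose whether to score an item.
item_a = [3] * 200   # scored moderately, but by many parents
item_b = [5] * 40    # scored very highly, but by few parents

for name, scores in [("A", item_a), ("B", item_b)]:
    print(name, "mean:", sum(scores) / len(scores), "aggregate:", sum(scores))
# A mean: 3.0 aggregate: 600
# B mean: 5.0 aggregate: 200
```

Item B’s high mean rests on far fewer responses than item A’s high aggregate, so deciding which item parents ‘value most’ requires exactly the kind of judgement for which the analysis is criticised here.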
Although these findings cannot be thought of as conclusive, Anthun’s research does suggest that process is as important a referent for evaluation as outcome, favouring the inclusion of user perceptions in evaluation.

Other researchers have also focused on user perceptions when evaluating Educational Psychologists’ work. Cuckle and Bamford (2000) used a customer satisfaction approach through questionnaires and interviews with parents. Their results suggest an overall positive perception of the service parents had received. However, both their methodology and the reporting of their results have a number of flaws. In terms of method, there is a lack of clarity around the parameters used in designing the questionnaire. The Likert-type scale used for questionnaire responses is limited to only three points, the phrases given are not clear for respondents (being ‘very much’, ‘reasonable’ and ‘not really’) and it is questionable whether there is an equitable distance between each. Although the researchers translated the questionnaire into community languages, only one was returned in a language other than English, which is likely to have biased their results. Results are presented using percentages, although only 85 questionnaires were returned. Means were calculated based on numbers assigned to the Likert points to give an indication of satisfaction, which is inappropriate for non-interval data (Howell, 1997), especially when the phrases given were relatively imprecise (see the sketch following this paragraph). No methodology for the interviews apart from the sampling procedure is reported: it is not clear whether semi-structured or structured interviews were used, whether the questions were open or closed, or what type of analysis was applied. However, Cuckle and Bamford’s paper has been referenced by other researchers and can be seen as a useful starting point for the involvement of parents in evaluating Educational Psychologists’ practice. The manner of their sampling for both questionnaires and interviews, which was random, can also be seen as a strength.
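To illustrate the statistical point (with invented response counts, not Cuckle and Bamford’s data): frequencies and the median stay within what a three-point ordinal scale supports, whereas a mean silently assumes equal distances between ‘not really’, ‘reasonable’ and ‘very much’:

```python
from collections import Counter
from statistics import mean, median

# Hypothetical responses on a three-point scale such as Cuckle and
# Bamford's, coded 1 = "not really", 2 = "reasonable", 3 = "very much".
responses = [3, 3, 2, 1, 3, 2, 3, 1, 2, 3, 3]

# Appropriate for ordinal data: report frequencies and the median.
print(Counter(responses))   # Counter({3: 6, 2: 3, 1: 2})
print(median(responses))    # 3

# Misleading: a mean of ~2.36 assumes the 'distance' between points
# 1-2 and 2-3 is equal, which nothing about the scale guarantees.
print(round(mean(responses), 2))  # 2.36
```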
Squires, Farrell, Woods, Lewis, Rooney and O’Connor (2007) also used questionnaires to establish parental perceptions of ‘helpfulness’. Addressing nearly all of the flaws described for Cuckle and Bamford’s (2000) paper, Squires et al. (2007) give a clear account of their questions and the motivations behind each, and a five-point Likert scale is used. The quantitative data obtained is reported using numbers and frequencies, rather than means, and the information is clearly presented using graphs. Although interviews were not used to collect qualitative data, space was given on the questionnaires for comments. This information was analysed using thematic analysis and, when reporting the qualitative findings, verbatim quotes are given, which is seen as good practice for qualitative research (Cresswell, 2003). A weakness in the process used to randomly select their sample is disclosed in their discussion and there may have been further possibilities for bias in that ‘satisfied customers’ may have felt more inclined to participate. However, Squires et al. (2007) conclude that ‘the parents in this survey have valued highly the contribution of the EP and that they consider it to have improved the outcomes for their child’ (p. 357).

Boyle and MacKay (2007) used questionnaires to investigate head teachers’ perceptions of the overall value of the contribution made by their Educational Psychologist to pupil support. Their specific research question related more to the types of work done and the level of involvement of an Educational Psychologist within a particular school. Using regression analysis, these aspects of role were linked to comments given about ‘overall value’. Possibly due to the focus of this research, the method by which this perception of value was ascertained is not made clear. However, as for parental views, the results suggest that Educational Psychologists’ contributions were positively valued. Although perceptions by a service user may give useful indications of the impact facilitated by an Educational Psychologist, the methodology around eliciting these views needs careful consideration, as does
the reporting of findings and the use of analyses (statistical or qualitative). These methodological concerns may in part have contributed to a shift in focus for evaluation from user perception to outcome measurement.

Sharp, et al. (2000) highlight the need for a consideration of outcome in their paper outlining a performance review process implemented both to justify changes in service provision and to raise the profile of the service in question. The paper describes the development of a measurement instrument to collect ‘360° feedback’, ‘a systematic process for assisting professional development by enabling an individual to obtain the collective opinions of a range of people with whom they work and to set these alongside their own opinions’ (p. 104). Although the development of this instrument appears to have been very carefully and thoroughly executed, its implementation within the service has not been reported on in the published literature since. The impact that this paper seems to have had, however, is an emerging emphasis on outcome over output. The irony to what follows is that in Sharp, et al.’s paper, outcomes are fairly loosely defined, including satisfaction ratings (op cit.), and the ‘contentious’ nature of outcomes based measurement is considered (p. 103).

Proponents of outcomes based evaluation (e.g. Frederickson, 2002) claim that measuring outcomes is a ‘basic requirement of evidence-based practice’ (ibid., p. 106). The challenge of evaluating outcomes has become the goal of defining outcomes that are measurable and which provide evidence of impact (Dunsmuir, et al., 2009). Frederickson (2002) proposes the use of Goal Attainment Scaling (GAS) as an approach that could be very useful to Educational Psychologists. GAS has been said to describe measurable outcomes as part of its process (Malec, 1999), show that change has taken place (Smith, 1994) and allow for comparisons of effectiveness to be made across different interventions, individuals and goals (Kiresuk and Sherman, 1968; Schlosser, 2004).

First developed by Kiresuk and Sherman (1968) to evaluate community mental health programmes, GAS involves setting realistic goals and precise and objective descriptions of likely intervention outcomes. These descriptions are then assigned numerical values through a process called scaling. Scaling should involve a number of goals (Smith (1994) recommends at least three) and allows for a composite score to be calculated at the review stage. There are a number of reported benefits to using GAS. It is personally tailored to each individual, is cheap to do and is unobtrusive (Sladeczek, Elliott, Kratochwill, Robertson-Mjaanes and Stoiber, 2001). Although research into the reliability of GAS is scarce and has produced mixed results, Schlosser (2004) describes its reliability as ‘encouraging’ (p. 228). GAS promotes a collaborative approach, as goals are set between the parties involved (Frederickson, 2002) and this process provides a shared language with which to discuss improvement (Sladeczek, et al., 2001). GAS has specifically been used within a consultation framework (Hughes, Hasbrouk, Serdahl, Heidgerken and McHaney, 2001) and is reported to be conceptually congruous with this approach (Roach and Elliott, 2005).
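To make concrete what this composite involves, the following sketch gives Kiresuk and Sherman’s (1968) standardised T-score, in which each goal $i$ receives a five-point attainment rating $x_i \in \{-2, -1, 0, +1, +2\}$ (with $0$ representing the expected outcome), $w_i$ is an optional importance weight, and $\rho$ is the assumed correlation between goal scores, conventionally set at $0.3$:

$$
T = 50 + \frac{10 \sum_i w_i x_i}{\sqrt{(1 - \rho) \sum_i w_i^2 + \rho \left( \sum_i w_i \right)^2}}
$$

A T-score of 50 therefore indicates that goals were, on average, attained exactly as expected. Note that the formula treats the ordinal attainment ratings $x_i$ as interval quantities, which is precisely the mathematical concern raised below.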
However, there are also a number of limitations to GAS. It takes time to learn how to do it and to set the goals themselves (Sladeczek, et al., 2001). Although lauded as a strength by some, it is an idiosyncratic and subjective process and the quantitative nature of the GAS ‘may give a false sense of measurement precision’ (Malec, 1999, p. 259). Furthermore, significant mathematical concerns have been raised about the use of parametric statistics to analyse GAS scores. MacKay and Lundie (1998) suggest that the outcome scores should not be treated as interval data when they are in fact ordinal (and in an earlier paper MacKay, Somerville and Lundie (1996) suggested the scores may arguably be nominal²). They recommend that GAS scores are treated as non-parametric information and that statistical analyses focus on frequencies of scores obtained, rather than the scores themselves. Although Marson, Wei and Wasserman (2009) argue that it is not necessary to limit analysis of GAS scores to non-parametric techniques, I feel this mathematical issue is very rarely considered, has not been adequately addressed in the literature and has serious implications for the results obtained by studies using GAS.

² Nominal scales merely label items. Ordinal scales order items along a continuum, but the difference between items is not constant. Interval is a measurement scale in which differences between points are constant and continuous. Calculating a mean assumes that the data being used exhibits interval properties (Howell, 1997).

Purporting to overcome a number of the difficulties associated with the use of GAS, Dunsmuir, et al. (2009) developed and piloted an evaluation method called Target Monitoring and Evaluation (TME). Based on the principles of GAS, TME involves agreeing targets and defining criteria for evaluation before intervention. These targets need to be SMART (specific, measurable, achievable, realistic and time limited) and must reflect the planned intervention. The process then requires that the target is given a baseline on a rating of 1 – 10. A rating for expected outcome is also given on the same scale. At review, the actual outcome is given a score on this scale. The baseline, expected and actual scores are then used to establish the level of progress made.
positive perceptions of TME from school staff based on information gained through interviews, the methodology behind this qualitative research is not described and numbers involved in the interviews is not reported. Complementary results from focus groups held with the Educational Psychology Service staff involved in the study are reported more completely but again without clarity regarding methodology, particularly around the identification of themes. Dunsmuir, et al. acknowledge the need for further research to establish the validity and reliability of their tool. However, as the TME is based on the GAS, some of its limitations will be shared. Although TME data is explicitly referred to as interval, as with the GAS the data used is more appropriately considered as ordinal. Means analysis is used by Dunsmuir, et al. (2009) to compare differences between baseline scores, actual outcomes achieved and the expected outcome. This is inappropriate for non-interval data (Howell, 1997). However, non-parametric techniques as described by MacKay and Lundie (1998) are also used when reporting the results by describing the frequencies of scores showing ‘no progress’, ‘some progress’, ‘expected progress’ and ‘better than expected progress’ (Dunsmuir, et al. 2009; Monsen, et al. 2009). It may be more useful for scores obtained through the use of TME to be reported in this way as it ensures the statistics used are not erroneous. Besides mathematical assumptions, one further assumption underlies the use of evaluation techniques like GAS and TME. Turner, et al. (2010) criticise their ‘reductionist’ focus on measurable outcomes (p. 315) which arises from a presupposition of a positivist view of the world, namely that ‘it is possible to describe what is “out there” and to get it right’ (Willig, 2001, p. 3). This view could raise substantial issues within a profession which is leaning towards a more constructionist view (Fox, 2003). As Worrall-Davies and Cottrell (2009) note,
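A brief sketch may make this frequency-based reporting concrete. The banding rules below are an illustrative assumption (the published papers do not define the four bands in these exact terms), as are the example ratings:

    from collections import Counter

    def tme_band(baseline, expected, actual):
        # Banding rules assumed for illustration only
        if actual <= baseline:
            return 'no progress'
        if actual < expected:
            return 'some progress'
        if actual == expected:
            return 'expected progress'
        return 'better than expected progress'

    # Hypothetical (baseline, expected, actual) ratings on the 1 - 10 scale
    targets = [(3, 6, 6), (2, 5, 4), (4, 7, 4), (3, 6, 8), (5, 8, 8)]
    print(Counter(tme_band(b, e, a) for b, e, a in targets))
    # Reporting these frequencies, rather than means of the raw scores,
    # follows the non-parametric treatment recommended by MacKay and Lundie.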
Besides mathematical assumptions, one further assumption underlies the use of evaluation techniques like GAS and TME. Turner, et al. (2010) criticise their 'reductionist' focus on measurable outcomes (p. 315), which arises from a presupposition of a positivist view of the world, namely that 'it is possible to describe what is "out there" and to get it right' (Willig, 2001, p. 3). This view could raise substantial issues within a profession which is leaning towards a more constructionist view (Fox, 2003). As Worrall-Davies and Cottrell (2009) note, there is often a lack of agreement about what an outcome is, and interventions are likely to have complicated results, in parallel to 'the complexities of young people's lives' (p. 337). Turner, et al. (2010) also highlight the importance of experiences and perceptions of achievements while acknowledging there are times when 'far more is achieved than can be measured' (p. 315). Osborne and Alfano (2011) examined whether consultation sessions provided by Educational Psychologists were felt to be supportive by foster and adoptive carers. Although such practice may fall within the 'radical' realm of work undertaken by Educational Psychologists (Baxter and Frederickson, 2005), this study (Osborne and Alfano, 2011) is worth noting because of its use of mixed methods and because the researchers acknowledge their realist position. Following a voluntarily attended one-hour consultation session, the Educational Psychologist and the carer involved filled in questionnaires. The questionnaires included a number of open-ended questions and the responses to these were analysed using thematic analysis. The questionnaires for the carers also included a ratings section, the purpose of which is described as ascertaining 'the perceived impact of the sessions on the carer' (p. 399). Mean values were calculated for the ratings and t-tests were used to analyse before and after session changes. A low response rate for these forms (22 – 28%) was attributed to the fact that return from carers was dependent on whether or not the Educational Psychologist requested the form. The results are generally positive and usefully consider the views of both the receiver of the service and the professional delivering it. However, no attempt seems to have been made to collate these perspectives to explore any matches or mismatches between perceptions of how well the sessions went. The qualitative information is useful as it provides an opportunity for suggestions for improvement to be made, as well as exploring what was felt to be supportive. It also enabled the researchers to
record what was achieved even when this was difficult to measure, for example: ‘this was very helpful and it is so important to feel supported when life feels a little out of control’, ‘people listened…’ and ‘I feel much more positive about the future now’ (p. 406, carer comments). To ‘encapsulate the complexities of the EP role, and focus not only on what is measurable’ (p. 313), Turner, et al. (2010) also developed and trialled an evaluation tool which draws on multiple sources of evidence, including reflective practice. This was within the context of Scottish policy, which promotes ‘performance data, relevant documentation, stakeholder’s views and feedback and direct observation of practice’ as the ‘main sources of evidence, from which evaluations can be made’ (HMIE, 2011). Influenced also by Sharp, et al. (2000), who utilised ‘360° feedback’, Turner, et al. (2010) aimed to collate both outcome and user views on a case by case basis. Data collection forms were developed over a period of time through use, reflection, discussion and revision. They were designed as a means to record practitioner reflections about casework, actual outcomes achieved and reports about the impact made from a range of stakeholders. The forms were then piloted within a whole Educational Psychology Service over one year. This information was then analysed using thematic analysis to identify types of outcome and/or impact. The Educational Psychologists who used the tool were then asked for feedback via questionnaire. Turner, et al. (2010) differentiate between ‘outcomes’ and ‘impact’ in an attempt to collect information about what ‘actually’ happened (outcome) and ‘psychological effect’ (impact, p. 318). How these categories are different from each other is not made any clearer than this in the published work. Looking at the examples given of feedback obtained, this lack of clarity appears to have led to an overlap of some themes identified as outcomes and impact
(for example, involvement of additional agencies/multi-agency work). This limitation is acknowledged in the discussion and Turner, et al. state that 'recorded outcome' is a term that has been clarified through discussion within the service involved in the research. Unfortunately for other Educational Psychologists perhaps wishing to use this tool for their own practice, this clarification is not communicated. Furthermore, even in the discussion of this limitation, the authors use the terms 'outcome' and 'impact' interchangeably. Conceptually it seems useful to consider hard outcomes and 'softer' measures of impact as separate entities; however, for the sake of increased transparency, a more robust delineation between these two terms is needed. Another semantic difficulty is the synonymous use of the words 'evidence' and 'feedback'. This appears to result in further similarities between outcome and impact, which perhaps limits the potential for triangulation. The feedback examples presented also tend to be superficial and do not consistently indicate benefit, detriment or no change (i.e. direction). For 'outcomes', examples include 'assessment from another agency', 'more appropriate curriculum for pupil' and 'appropriate placement for pupil identified' (p. 320). From these three examples, questions can be asked along the lines of 'was the other agency's assessment felt to be useful?', 'how is the curriculum "more appropriate"?' and 'how was the placement deemed appropriate?' Again a limitation around the methodology involved in gathering 'evidence' is discussed by Turner, et al., but the questions that the team are reported to have developed to increase the tool's robustness are not given. I suggest that it may have been useful to associate 'outcomes' with 'evidence' and 'impact' with 'feedback' and that the clarifications alluded to should have been made explicit.
In spite of these limitations, Turner, et al. (2010) suggest that their form enabled Educational Psychologists to record 'examples of real world impact' (p. 322), to which they ascribe a degree of 'ecological validity' (p. 326). Feedback obtained from Educational Psychologists who had used the form was similarly positive; seven out of eight of those responding to a feedback questionnaire valued using it. However, the respondents to this questionnaire also highlighted further limitations: subjectivity, variations in the use of the form and potential bias because the cases on which the form was piloted were self-chosen. The clarification in terminology may address some of these concerns, and Turner, et al. admit that their 'method is still evolving' (p. 327). Although a flexible approach is promoted by the authors, which enables users of the tool to '[try] it out and [agree] on consistency' (p. 325), there is the potential that this may reduce the replicability of this study – a limitation to what Yardley (2000; 2008) might refer to as 'coherence'. A random selection of cases for which the form is completed may also increase the reliability of the tool. As Educational Psychologists chose cases which were to be reviewed within a management structure during the pilot, the strong potential for bias is self-evident. Another way in which Turner, et al.'s tool may be improved is by following Matthews' (2002) recommendation that both qualitative and quantitative information be used when evaluating the work of an Educational Psychology Service. Matthews' proviso in using quantitative data is that the 'significance and limitations' of measures have been addressed through research (p. 140) and that checks are made to ensure that measurements are appropriate to the context. Matthews points out that different stakeholders in an evaluation can have differing views, an eventuality provided for by Turner, et al. (2010) via their adoption of the 360° approach. However, the repercussions of fundamentally conflicting views are not addressed, nor is it explained how such information might be summated. On the other hand, such summation
may be undesirable, and the intention of triangulating different sources of information that underlies Turner, et al.'s casework evaluation form is to be applauded. With some development, this may provide a valuable tool with which to record and evaluate Educational Psychologists' practice.
2.5. Psychological Theory

Possible allusions to three Psychological theories emerged through the analysis of the interviews. These are systems theory, positioning theory and psychodynamic theory (in particular the ideas of Bion). Brief outlines of these theories in the context of this thesis are given below.
2.5.1. Systems Theory

Originating with von Bertalanffy's work in the life sciences, systems theory has grown as a highly influential philosophy for many disciplines including Psychology (von Bertalanffy, 1968). Von Bertalanffy suggests that a systems approach became 'necessary' (p. 4) because the practice of isolating variables in the search for 'one-way causality' was 'proved to be insufficient' (p. 45). He sees a systems approach as especially pertinent to living organisms, which he defines as 'open systems' (p. 39). Open systems are contrasted with closed systems, which are 'isolated from their environment' (op cit.). Therefore, considering anything living as a system is to understand it in terms of complexity and interaction with others across permeable boundaries that are constantly changing.

(When using the terms systems theory, systems thinking etc. I follow the convention used by Dowling (1994) and Rendall and Stuart (2005). Systemic theory and thinking is used by other authors (e.g. Campbell, Draper and Huffington, 1991) and is considered synonymous.)
In Psychology, systems thinking is a view of individuals within a social context, i.e. 'one component of the system is seen as affecting, and being affected by, … others' (Dowling, 1994, p. 3). It is also 'a way to make sense of the relatedness of everything around us. … the connectedness of people, things, and ideas: everything connected to everything else' (Campbell, 2000, p. 7) and has been linked to a social constructionist worldview (ibid.). This is the understanding of systems theory adopted for this thesis, although I acknowledge Fox's (2009) helpful discussion about 'crossed and intertwined' (p. 248) terminology and the development and application of this theory and practice within the field of Educational Psychology. Systems theory therefore usefully emphasises complexity and inter-relatedness and provides an alternative to a linear cause-and-effect model of human difficulties. There are two key concepts of systems theory which I find especially pertinent to the evaluation of Educational Psychologists' practice: circularity and punctuation. Circularity purposefully avoids considering a linear process of cause and effect. Instead, circularity situates causality within interactions rather than within individuals, which removes blame (Dowling, 1994; Rendall and Stuart, 2005). Linked to circularity, but important in its own right, is the concept of punctuation. 'Punctuation refers to the point at which a sequence of events is interrupted to give it a certain meaning' (Dowling, 1994, p. 5; Rendall and Stuart, 2005, p. 22). Interpreting 'what causes what' (op cit.) therefore depends on the chosen point from which a sequence of events is viewed. These two aspects of systems theory complicate the process of evaluation, emphasising the need for a cautious approach to the interpretation of any findings.
2.5.2. Positioning Theory
Drawing together systems theory, personal construct psychology and social constructionism, Campbell and Grønbæk (2006) describe the process of being 'positioned' within human systems. Human behaviour is explained by the following: 'we all make sense of the world – and decide how to act – by taking a position within a range of meanings about ourselves and our environments' (p. 13). For Campbell and Grønbæk, these meanings are embedded in discourse. 'Positions' are identified through consideration of 'semantic polarities'. A semantic polarity consists of a continuum of scaled points between two opposing terms which allows for an expression of the values of an individual within his or her group or system. 'The act of taking one position along a polarity among other positions is the act of creating meaning for ourselves and others' (p. 16). An example of a semantic polarity applicable to the profession of Educational Psychology may be 'responsive – preventative'. Adopting a position between these terms will determine the way in which an individual practitioner approaches her or his work and will evidence his or her values. Campbell and Grønbæk (2006) highlight that the choice of position is limited by the discourses available and by the ways in which individuals are positioned by others (p. 17). Positions are made known to others through language and behaviour. When the language used within a system is especially 'emotive and charged … people feel very unsafe', causing them to cling to their positions and the corresponding values (p. 45). In terms of evaluation, positioning theory offers ways in which to consider the meanings given to and obtained by this activity.
2.5.3. Psychodynamic Theory
Hanko (2002) sums this approach up in a word by referring to the 'intangibles' (p. 379). This is because much of psychodynamic psychology concerns unconscious processes. Some assumptions of this theory are:

- Emotions and behaviours are often unconscious reactions to a situation
- Emotions can be communications
- Defences are deployed (unconsciously) against painful feelings and can inhibit action
- Painful feelings may be 'contained' or held onto and processed by another. This process ordinarily takes place between an infant and his/her mother and is the beginning of thought (Bion, 1988)
- A lack of containment impacts on the 'imagination, vitality, capacity to think and to learn' (Williams, 1997, p. 31) and
- Defences are apparent within institutions as well as individuals

(drawn from Obholzer and Roberts, 1994; Salzberger-Wittenberg, Henry and Osborne, 1999; Williams, 1997; Youell, 2006)

I especially find the psychodynamic thinking of Bion useful to the consideration of this research. Indeed, Bion (1997) defines 'evaluation' (albeit in very broad terms) as the act of 'making a kind of interpretation of what evidence is brought into us by our senses' (p. 39). Elsewhere, this interpretation of experience, especially at an emotional level, is called 'alpha-function' (Bion, 1988). Alpha-function 'renders … emotional experience comprehensible and
meaningful' (Symington and Symington, 2001, p. 61) so it is 'suitable for use in thinking' (ibid., p. 63). Alpha-function enables learning from experience by making 'the sense impressions of the emotional experience available for conscious and dream thought' (Bion, 1988, p. 7). Admittedly, no specific link between alpha-function and evaluation is made; however, the ideas presented by Bion are particularly reminiscent of the processing of experience described by the interviewees and, more generally, of reflective practice.
2.6. Summary and Conclusions

In this chapter I have reviewed the literature in which the work of Educational Psychologists has been evaluated. I have presented a brief outline of three Psychological theories which emerged as pertinent through the analysis of the interview data and which are drawn upon during the discussion of my research findings. I have also taken an apparent diversion into the world of evidence-based practice, ontology and epistemology and the role of the professional. The reason for this meander is not one of distraction. Consideration of these themes is crucial to understanding both what evaluation is and, as I have alluded to, how it 'ought' to be done. In the literature represented here, there is the larger context of the usually rather heated debate about what information is considered of value, with heavily weighted terms like 'gold standard' being used with regularity. There is also the practical application of evaluation tools which rely on mostly unacknowledged assumptions that lean towards the more positivist end of the paradigm continuum. Only Anthun (2000) seems to have carried out research to explore what quality criteria are important to stakeholders, but he failed to ask what form such information should usefully take. The danger of limiting the consideration of what information is meaningful to ideological argument lies in creating a potential discrepancy between that argument and what is happening on the ground. However, the greater threat perhaps lies in developing 'off-the-peg solutions' (Lieper, 1994, p. 201) which take for granted that the information they collect is of meaning to the people wanting to establish whether an Educational Psychologist's involvement has been of worth (Stufflebeam, 2001). Granted, there are many motivations behind promoting both the hierarchy of evidence and specific evaluation techniques, which I believe are primarily fuelled by ontology and epistemology but which presage a more disquieting political design that could undermine professionalism and the value of uncertainty. However, it may be that my own worldview equally biases me against certain forms of information and the evaluation tools which seek to capture them. The gap between philosophical debate and practical usage exists because research has not asked what information is of meaning to Educational Psychologists when they want to find out whether they have had an impact. This research aims to begin to address that gap.
2.7. Purpose of Research

This research aims to explore what information Educational Psychologists feel is meaningful when evaluating the impact of their work. It aims to provide 'evidence' relating to the philosophical arguments in the literature around which epistemological positions are most appropriate. It explores the lived experience of Educational Psychologists as collectors of 'practice-based evidence' (Fox, 2011) and considers the meanings assigned to the activity of evaluation within that experience. The research then explores which sources of information are most valued by Educational Psychologists. This research finally endeavours to describe ways in which evaluation, and as a result evidence-based practice, may become increasingly relevant to Educational Psychologists in their role as reflective professionals.
2.8. Research Questions

1. What is the meaning (the relevance, importance and value) of evaluation within the lived experience of an Educational Psychologist?
2. What information do Educational Psychologists consider to be relevant, important, valuable and meaningful when they evaluate their work?
3. How can this inform the development and evaluation of tools used to evaluate Educational Psychologists' practice?
3. Methodology

3.1. Introduction

This research is informed by a mixed methods approach. Mixed methods research is said to utilise the strengths of quantitative and qualitative research while minimising the limitations of both (Johnson and Onwuegbuzie, 2004). Although potentially time-consuming and complicated to carry out, mixed methods research is useful for understanding the complexities of human experience (Elliott, 2004). It can also enable better communication of ideas between differing worldviews (Todd, Nerlich and McKeown, 2004), a consideration suited to this research in view of the philosophical discussions in the literature around ontology and epistemology and their effect on Educational Psychologists' work and its evaluation.
3.2. Ontological and Epistemological Position

One theoretical paradigm which takes into account subjectivity, ambiguity and complexity is known as social constructionism. Robson (2011) describes how this paradigm encompasses both 'extreme' and 'moderate' perspectives (p. 11). My personal worldview fits with a more 'moderate' description of social constructionism, which embraces the complexity which describes the human condition whilst overlapping with phenomenological and hermeneutic perspectives (ibid, p. 25). Social constructionism can be understood as an exploration of the way human beings construct subjective meanings to understand their experiences in diverse and complex ways, both at an individual level and through a group consensus (Cresswell, 2003; Lincoln and Guba, 2003). These constructions are changing and various and there may
be multiple meanings ascribed to different phenomena (Mertens, 2005, p. 14). For this reason, a researcher working within this perspective specifically explores the ‘complexity of views’ in an attempt to ‘make sense of … the meanings others have about the world’ (Cresswell, 2003, p. 8). My ontological position is therefore a social constructionist one. Robson (2011) discusses how research practice can be used as a point of reference for choosing appropriate methodologies and defines the pragmatic approach as ‘the concern for practical matters; being guided by practical experience rather than theory’ (p. 27). Pragmatism thus allows for a multiplicity in method (Cresswell, 2003, p. 11). With a commitment to producing a robust piece of research which addresses my research questions most effectively, my epistemological position is therefore informed by pragmatism.
3.3. Research Design

This research employed a sequential exploratory strategy. This allows for the elaboration or expansion of results obtained in the initial qualitative phase of the study through an analysis of quantitative data. A sequential exploratory strategy was felt to be most appropriate to the research questions because it is described as well suited to exploring phenomena (Cresswell, 2003, p. 211). Cresswell (2003) suggests that the strengths of this strategy are its simplicity and the added value of extending qualitative findings with quantitative results. The qualitative phase was the primary focus of the study. This involved interviews followed by analysis informed by Interpretative Phenomenological Analysis (IPA; Smith, 1996). It was followed by a quantitative component aimed at indicating which kinds of information are preferred by Educational Psychologists when evaluating their work. The qualitative findings
and results from the quantitative phase were then discussed as a means to inform future possibilities for evaluation in Educational Psychologists’ work. Although quantitative research tends to be associated with positivist positions, the acknowledgement that social constructs also involve a degree of consensus (implicit or explicit) within a specific community (Lincoln and Guba, 2003) allows for the limited application of quantitative data to describe that consensus.
3.4. Ethics

The framework for making my research ethical was taken from the principles outlined by the British Psychological Society (BPS, 2010) with regard to research involving human participants, namely: respect for the autonomy and dignity of persons, scientific value, social responsibility and maximising benefit and minimising harm. In order to comply with these principles, I developed and followed procedures to ensure informed 'consent, confidentiality and anonymity' to protect my participants' rights to 'privacy and self-determination' (ibid, p. 8). I have endeavoured to maintain a high degree of honesty and transparency (Fox and Rendall, 2002) in my contact with participants and in writing up my research. My methodology has been informed by published guidelines relating to my chosen approaches to ensure a 'sufficiently high quality and robustness' in the research (BPS, 2010, p. 9). Research questions were chosen to address an identified gap in the published literature thus aiming to contribute to the debate around evaluation and evidence-based practice for Educational Psychologists. My use of a social constructionist ontology ensures a commitment to acknowledging social relativism and an awareness of a personal impact on my research. I have therefore undertaken to be a reflexive researcher, to remain aware of my own positions within the
research (Fox and Rendall, 2002) and to sustain self-reflection and review throughout (Smith, Flowers and Larkin, 2009). Furthermore, I have also adopted the position suggested by Mertens (2005) which links the intention to effect change with ethics in research. This position informs my third research question, which aims to result in recommendations for future evaluative methods in Educational Psychology. I have also highlighted the potential valuable contributions of the role of Educational Psychologist. By doing this I have aimed to 'maximise' what I perceive to be the benefits of my research whilst avoiding any harm to my participants by maintaining the utmost levels of confidentiality and anonymity (BPS, 2010, p. 11).

Informed consent: Interview participants were given written information describing the purpose and aims of the research, how their contribution would be analysed and reported, how their names would be anonymised and how this data would be kept safe (see Appendix 2). Consent was obtained in writing (see Appendix 3). For the pilot phase of the questionnaire, responses were anonymous to the researcher. A statement of anonymity and consent by participating was included on the questionnaires (see Appendix 4). The final questionnaire was delivered to the team in person and a verbal introduction given, stating the anonymity of responses and consent by participating:

'Please consider each statement presented on the questionnaire in the context of a piece of work that seems to be going well. The questionnaires will be anonymous and by completing them please note that this is taken as consent to participate. Because the questionnaires are anonymous, they will not be able to be withdrawn after they have been given to me. You can choose whether or not to complete the questionnaire. Thank you.'
Written information was also given to participants as above. To ensure anonymity, I was not in the room while questionnaires were completed.

Withdrawal: Interview participants were given the option to withdraw their contribution. However, as per advice given by Smith, et al. (2009), a clear time limit to this right to withdraw was given. For the qualitative data, participants were given the opportunity to withdraw completely up until data analysis began. The date of this was made clear to each participant. The opportunity to withdraw any specific comments was left open until one month before the thesis was submitted for assessment. The analysis of the qualitative data using pseudonyms was given in draft form to each of the interview participants for comment. Because participants filling in the questionnaires were anonymous to the researcher, the right to withdraw was not offered. However, a statement to this effect was made when the questionnaires were handed out and the consent through participation explained.

Ethical approval was obtained from the Tavistock and Portman NHS Trust Research Ethics Committee on 15th February 2011, prior to the commencement of sampling and data collection (see Appendix 5).
3.5. Phase One: Qualitative Method

The qualitative phase of this research was informed by Interpretative Phenomenological Analysis (IPA). The term was first introduced by Smith (1996) in a paper which Shinebourne (2011) describes as 'seminal' (p. 17). Smith (1996) promoted IPA as a potential mediator between the then dominant methodologies of 'social cognition', drawing from a positivist,
quantitative epistemology and the ‘diametrically opposed’ (p. 261) discourse analysis, based firmly within a social constructionist paradigm linking meaning making exclusively to language and culture. Smith suggested that IPA could be used to explore internal cognitions (like attitudes, beliefs and intentions) while acknowledging that to do so requires both a consideration of the meanings ascribed to events by individuals and an interpretation by a researcher subject to his or her own constructions of meaning. Since Smith’s paper IPA has become steadily more popular with noticeable growth in various fields of study (Reid, Flowers and Larkin, 2005). This exponential rise in popularity is alluded to by Smith (2010) and outlined in Smith (2011). It is thus that there are now a number of texts describing the approach and its application to research. IPA has been defined in minutely varying ways. The consensus between these definitions rests with individuals’ ‘lived experiences’, the sense made of those experiences and an ‘exploration’ of these experiences ‘in detail’ by a researcher (Smith and Osborn, 2003, p. 53; Smith, 2004, p. 40; Smith, et al., 2009). IPA is ‘interpretative’ because, although wanting to get as close as possible to ‘the participant’s personal world’, ‘access is both dependent on, and complicated by, the researcher’s own conceptions which are required in order to make sense of that other personal world through a process of interpretative activity’ (Smith, 1996, p. 264). This complicated access gives rise to discussion about a ‘double hermeneutic’ (Smith and Osborn, 2003, p. 53) in which a two tiered interpretation takes place: the participants attempt to make sense of their experience and the researcher attempts to make sense of the participants’ attempts to make sense of their experience (op cit.). IPA’s use of hermeneutic approaches allows links to theory and research as well as the professional and personal experiences of the
researcher (Larkin, Watts and Clifton, 2006; Shinebourne, 2011). This interpretative aspect of IPA has been highlighted as an area in which weaker studies using the approach are found lacking (Brocki and Wearden, 2006) and without which IPA potentially becomes ‘simply descriptive’, a ‘common misconception’ held about the endeavour (Larkin, et al., 2006, p. 102). However, as Langdridge (2008) points out, description appropriately avoids explanation and any reference to causality. Interpretation is thus different to explanation and it is inevitable when endeavouring to understand others’ experiences as is the case for phenomenological psychology. As per its title, IPA is also ‘phenomenological’. Patton (2002) discusses various perspectives of the term ‘phenomenology’ and arrives at the conclusion that the common link between them is ‘a focus on exploring how human beings make sense of experience and transform experience into consciousness, both individually and as shared meaning’ (p. 104). This conclusion is very similar to that arrived at by Smith, et al. (2009) and Shinebourne (2011) in their discussions about the theoretical foundations of IPA. To my mind, this echoes definitions of social constructionism as referenced above which state a focus on constructions of meaning. However, Willig (2007) suggests that phenomenologists ‘naïvely’ present descriptions of experience as concrete representations from which meanings are subsequently drawn (p. 210). Indeed Patton (2002) suggests that a phenomenological study is one in which an assumption is made that there is a real ‘essence’ to be discovered as per positivist or realist paradigms (p. 107). Although alluding to IPA’s flexibility with regard to epistemological questions, Larkin, et al. (2006) also propose that IPA belongs within a ‘minimal hermeneutic realist’ paradigm which they define by stating, ‘What is real is not dependent on us, but the exact meaning and nature of reality is’ (p. 107, emphasis in original). I find that this ‘splitting of hairs’ detracts from IPA’s focus on meaning making which I feel accommodates the
perspectives of numerous paradigms. It is thus that I find Patton (2002) particularly useful as he elucidates that research from a phenomenological perspective emphasises methods dedicated to ‘people’s experience of the world’ (p. 107). Outside the specifics of its name, Reid, et al. (2005, p. 20) helpfully list the ‘key elements’ of IPA which are more thoroughly expanded upon by Smith (2004) and Smith, et al. (2009). In addition to being interpretative and phenomenological, IPA is also idiographic, inductive and interrogative. It is idiographic because of its focus on the individual (Smith, 2004) but also because of its ‘commitment to the particular’ resulting in detailed, in depth analysis of particular experiences spoken about by particular people from particular contexts (Smith, et al., 2009, p. 29). This makes IPA findings impossible to ‘replicate’ which is only problematic from the perspective of certain paradigms which Smith (2010) finds incompatible with most qualitative methodologies as well as ‘human science research’ in general (p. 189). Yardley (2000) is referenced as providing more appropriate characteristics against which to evaluate qualitative research including IPA. Yardley (2000; 2008) describes ways in which qualitative research can be considered ‘good’. She gives four core principles to follow in this endeavour, namely: sensitivity to context, commitment and rigour, coherence and transparency and impact and importance. Smith, et al. (2009, pp. 180 – 183) explain how IPA meets each of these criteria. IPA is inductive because its flexibility allows for the emergence of unanticipated themes during analysis (Smith, 2004, p. 43) which Reid, et al. (2005) describe as being ‘bottom up’ (p. 20). This especially suits the ‘exploratory’ aspect of my research. IPA is ‘interrogative’ because of its core psychological base in that ‘a key aim of IPA is to make a contribution to psychology through interrogating or illuminating existing research’ (Smith, 2004, p. 43). That it
is a 'psychological' methodology appears to be a source of pride to its originator and to those utilising it (see, for example, Smith, et al., 2009) and is another key reason for me to use it for my research. However, the primary reason that I was drawn to IPA was because my experiences of evaluation in a variety of roles have shaped how I view evaluation. It is thus that I chose a research approach based on examining people's experiences, because it has been through experience that I have found meaning in evaluating my impact in a professional capacity on others, as well as in discovering what types of evaluation methods I find meaningful. IPA is thus something of a personal choice. Admittedly, other methodologies may fit my ontological position more coherently, for example discourse analysis. However, discourse analysis, as noted by Smith (1996, p. 263), is inadequate for exploring what an individual 'thinks or believes' and, as Reid, et al. (2005, p. 21) suggest, discourse analysis is like behaviourism: limited to the observable (i.e. language).
3.5.1. Sample
That IPA is 'idiographic' (see above) means that any sample should be as homogeneous as possible (Smith, et al., 2009). With this in mind, I established criteria for the selection of my participants: they had been in service as qualified Educational Psychologists for more than five years, had worked for two or more local authorities and were not senior practitioners. All of the participants worked for the same local authority Educational Psychology Service. I wanted to draw from a wider experience, hence the stipulation of years in service and experience of different authorities. I also recognised that a managerial role could affect the perception of evaluation as an activity and therefore did not interview members of the senior management team. Members of the staff team fitting the description above were identified by the service's Principal Educational Psychologist and another member
of the senior management team. All were invited to participate in the study verbally and by email (see Appendix 6). All volunteers who responded to this invitation were given further information about the research, including confidentiality and anonymity arrangements, and were asked to sign a consent form. Interviews were arranged for a time and date convenient to both researcher and participant. These occurred in a private space within the workplace. It so happened that the participants were all women, adding to the homogeneity of the sample. Six participants were interviewed. All interviews were analysed and included in this research. Certain identifying features of difference amongst the participants outside of the selection criteria cannot be disclosed in the interests of anonymity. The participants referred to these differences briefly in their interviews and these may have impacted on how they experience their work. However, I felt that any impact on my interpretation and identification of themes was substantially outweighed by my commitment to protect my participants' anonymity. Names of flowers that are also commonly used as women's names were used as pseudonyms. These were 'randomly' assigned in that I listed all the names I could think of, shuffled the printed transcripts (at that point coded by computer-generated numbers) and gave a name to each according to the original list's sequence.
3.5.2. Interview Schedule
Based on my research questions and literature pertaining to evaluation, I listed possible prompts for my interviews. These initial statements were adjusted based on suggestions made by Smith, et al. (2009), namely that IPA interviews give participants an ‘opportunity to tell their stories’ (p. 56) and that the aim is to ‘learn more about [participants’] lifeworld’ (p. 58). Further adjustments were made in response to suggestions during supervision. I then piloted
my interview schedule with two members of the staff team who did not meet my selection criteria (see Appendix 7). I conducted these interviews as per the process planned for the final interviews, including obtaining consent and recording the interview. I did not transcribe these interviews and did not save them. I did, however, ask for feedback about the questions once the pilot interviews were complete. I annotated each question with suggestions from both pilot participants. Using these suggestions, and through further discussion during supervision, I finalised an interview schedule (see Figure 2). This was used as a guide rather than a script, and additional questions and comments arose in each interview in response to what was said by each participant. This corresponds to the practice recommended by Smith, et al. (2009), which adopts the position of 'a naïve and curious listener' (p. 64). My questions and comments were recorded along with the participants' responses in the transcripts and were included in the analysis where appropriate.
Some of these questions are very open ended. This is deliberate as I am very interested in hearing about your own individual experiences as an EP.
1. Please quickly tell me how you came to be an EP.
2. Describe yourself in role as an EP.
3. What do you feel has been your 'best' work?
4. Please tell me about a time/times when you feel you made an impact as an EP.
5. What information let you know that you had made an impact?
6. Tell me about a recent time when you did something differently than before.
7. Why did you change your practice?
8. What told you that the change had been useful/not?
9. How do you see your work in the future?
10. That’s the end of my questions. I am going to leave the recorder on so that you can add anything else you feel is relevant if you want to. Thank you for participating in my research.
Figure 2. Interview Schedule: Schedule used to inform questions for final interviews.
3.5.3. The Interviews
The interviews were conducted in private rooms in the building where the Educational Psychology Service is situated. I booked the rooms for this purpose and only one interruption took place, when colleagues from another service found themselves locked on the balcony adjacent to the room my participant and I were in. I did not initiate discussion about the interviews outside of these rooms (i.e. within the office), although most of the participants spoke to me about the interviews in this more public arena. The interviews lasted between half an hour and an hour and a quarter. Each interview was recorded and saved anonymously to my personal computer and backed up on an external hard drive which is password protected. I transcribed each interview myself. The transcripts were initially labelled using the computer-generated codes but were later assigned names as above. I was struck by how interesting it was to speak with my participants about their work. Their passion for the impact that they felt they had made was clear in their body language, the emotive words they used and their inflection. I found myself caught up in their enthusiasm and feel it was a privilege to have had the opportunity to share this with them. The conversation about work and evaluating it for impact spilled out of the interview rooms on almost every occasion. A couple of participants also fed back that they had found the interviews helpful.
3.6. Qualitative Analysis

3.6.1. Procedure
Brocki and Wearden (2006) level the criticism of being 'somewhat mysterious' at qualitative research (p. 100). They state, 'Guidelines are offered to the researcher who is then informed that they cannot do good qualitative research simply by following guidelines' (op cit.). Although Brocki and Wearden imply that IPA is far more accessible than other qualitative methodologies and that its guidelines are more straightforward, Smith, et al. (2009) deny that they provide a 'definitive account' of the analytic process in IPA and instead advocate a 'healthy flexibility' when analysing interviews (p. 79). In spite of carefully reading the chapter on analysis in Smith, et al.'s book (2009), I found the process of analysing my interview transcripts 'somewhat mysterious' in that much of it felt intuitive. I wrote in my research diary, 'Analysis a strange process. [The themes] falling into place or fitting into place?' However, in the interests of robustness, I followed Smith, et al.'s (2009) guidelines as closely as possible. This took the form of handwritten notes on my transcripts in different coloured pens, as well as long lists of emerging themes and observations and reflexive notes in a research diary. The physical procedure of the analysis took the following path, exemplifying the description 'dynamic' which has been applied to IPA (Smith and Osborn, 2003; Smith et al., 2009):
- I transcribed each interview myself. As an amateur transcriber, I did not use standard transcription coding (e.g. Jefferson). I printed these transcripts in landscape format with wide side margins and 1.5 spacing to allow space for notes.

- I read through each interview independently once, but very carefully, distinguishing between linguistic and descriptive elements in the text. During this first reading I jotted down all of my responses to what had been said. I also highlighted some preliminary concepts which stood out as potential themes (of which at this point there were many) and, in another colour, marked sections of text that I was 'very struck by'. I then listed these preliminary concepts and highlighted those which stood out more conspicuously.

- I then returned to the transcripts to undertake a more in-depth reading, adding to my original notes.

- When I embarked on a third reading I tried to transfer my thinking to a Word document. However, I found this process cumbersome, so after doing this for Jasmine I returned to a paper and ink approach. I printed new copies of the transcripts with more margin space and double spacing and listened to each interview again, checking the transcription for accuracy. I used colours to identify new commentary or commentary I felt still applied or continued to be important. I also highlighted sections of text which I thought might be appropriate for the findings section of this thesis. I also expanded commentary about language use and noted down some tentative links to theory (e.g. systems). At this point I decided not to undertake an in-depth analysis of responses to the 'starter question', choosing to focus my analysis strictly on my research questions.

- To provide a boundary to my burgeoning themes, I restricted myself to interpretation of those parts of the interview that specifically related to the research questions. However, one theme arose which was extraneous to the research questions but was clearly important to the participants. This theme was included in my findings.

- As clear themes emerged from the third reading I wrote them down for each participant and clustered them. To each cluster I gave a label, which in turn became the identified emergent themes. I placed these into a flow diagram which I felt best encapsulated the priority of each. This became my theme structure for each participant.

- I wrote down each of these emergent themes on separate pieces of paper and arranged them in individual theme structures. I then used a process whereby I clustered similar themes more closely for individuals and then clustered common themes across the sample, which I identified as my superordinate themes. Other themes were then arranged under each where appropriate and became subordinate themes (the descriptors 'emergent' and 'superordinate' for themes are used by Smith, et al., 2009). Certain themes were felt to connect other themes, rather than be superordinate or subordinate. These were also placed into a schematic which included arrows. This schematic was then reproduced in a Word document (included in Findings). All steps of this process were photographed (see Figure 8).

- I finally collated all the quotes applying to the superordinate and subordinate themes into a Word document along with commentary. I then coded this document in terms of narrative structure for my write-up (see Appendix 8).

- During writing, additional quotes were remembered and seen to be fitting. I therefore returned to the original transcripts on many occasions to augment the narrative. The schematic was also altered during the process as subordinate themes fell away or were grouped under a more appropriate title.
Figure 3. Iris Transcript Initial Readings: Annotated transcript showing commentary from initial readings (example).
Figure 4. Rose Transcript Initial Readings: Annotated transcript showing commentary from initial readings (example).
Figure 5. Iris Transcript Later Reading: Annotated transcript showing commentary from final readings (example).
Figure 6. Rose Transcript Later Reading: Annotated transcript showing commentary from final readings (example).
3.6.2. Validity
Smith, et al. (2009) recommend counting the recurrence of themes between participants as a way to 'enhance the validity of the findings of a large corpus' (p. 107). This is reiterated by Smith (2011) in a description of what makes good IPA research, stating that 'one should aim to give some measure of prevalence for a theme' and that for research with sample sizes of between 4 and 8 'extracts from half the participants should be provided' (p. 24). This indication of validity (or 'rigour' in the 2011 paper) was adhered to, and a count is presented in Figure 7. The prevalence of extracts for subordinate themes is considered to contribute to the overarching theme count. Although additional extracts were available for subordinate themes, repetitions were avoided due to constraints of word count. All extracts considered for theme identification can be found in Appendix 8.
Theme                    Daisy  Iris  Jasmine  Lily  Rose  Violet
Role                     Yes    Yes   Yes      Yes   Yes   Yes
Personal Power           No     Yes   Yes      Yes   Yes   No
Innovation               No     No    Yes      Yes   No    No
Thought                  Yes    No    No       No    Yes   No
Complexity               No     Yes   Yes      Yes   Yes   Yes
Systems                  No     Yes   No       No    Yes   Yes
Change                   No     No    No       Yes   Yes   No
Inclusion                No     Yes   Yes      No    Yes   No
Measures and Outcomes    Yes    Yes   Yes      Yes   Yes   Yes
External Tools           Yes    Yes   Yes      Yes   Yes   Yes
Internal Tools           Yes    Yes   Yes      Yes   Yes   Yes
Perception               No     No    No       Yes   Yes   No

Figure 7. Prevalence of Themes: Table showing the inclusion of extracts in the Interpretative Phenomenological Analysis, providing a measure of the prevalence for each theme.
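A minimal sketch of how such a prevalence count can be tallied, with the Yes/No judgements of Figure 7 coded as 1/0 per participant (order: Daisy, Iris, Jasmine, Lily, Rose, Violet); only two themes are shown, the remaining rows following the same pattern:

    # Theme inclusion per participant, coded from Figure 7
    table = {
        'Role':           [1, 1, 1, 1, 1, 1],
        'Personal Power': [0, 1, 1, 1, 1, 0],
    }
    for theme, flags in table.items():
        # Prevalence is simply the count of participants contributing extracts
        print(theme, ':', sum(flags), 'of', len(flags), 'participants')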
Figure 8. Clustering of Themes: Sequence of photographs showing how the themes identified for individual participants were clustered together.
3.6.3. Yardley's Characteristics
Yardley (2000) describes the expansion in the use of qualitative methodologies in research and applauds this move as one giving greater access to a range of possible knowledges that may come to be constructed as truth. She also discusses the range within those methodologies and suggests that a set of standards against which to measure the quality of qualitative research was missing at the time she wrote her paper. Although such standards had been established for quantitative research, these are not applicable or appropriate to the appraisal of qualitative work. Yardley (2000; 2008) thus recommends certain characteristics against which qualitative research may be judged. These characteristics are used by Smith, et al. (2009) to justify the ‘validity’ of IPA. I use these characteristics (Yardley, 2000; 2008) here to critique the qualitative phase of my study. Sensitivity to Context: This research has aimed to be informed by theory and the existing literature through presenting a thorough literature review and by referencing psychological theory where I felt it was relevant to the analysis. My sensitivity to participants’ needs is evidenced by my ethical position in this research and I involved the participants in checking my interpretations of their words and ideas (also known as ‘testimonial validity’). I acknowledge my position as researcher through reflexivity and appreciate the double hermeneutic (Smith and Osborn, 2003) in IPA. I have therefore endeavoured to present my interpretations as such as well as offering alternative possibilities. By using verbatim extracts from the interviews I enable readers of this research to access their own interpretations and offer a means by which they can check my analysis. Full transcripts are also available for further checking (see CD-ROM).
Commitment and Rigour: In the pursuit of rigour, this research has adhered to the principles of a specific approach to qualitative methodology, namely IPA. In this regard, guidelines relating to the collection and analysis of data were followed as closely as possible. This informed how I selected participants, namely my setting criteria for a homogeneous group which are delineated above. I have particularly focused on presenting as in depth an analysis as I could, based on my research questions and also on participants’ emphasis on certain aspects of their experience. I have aimed to present my own thoughts alongside participants’ statements as a reflexive researcher in order to be open about the level of my involvement with the data. To check for bias, I also submitted a transcript without annotations and my initial analysis to my supervisor who checked this and made suggestions during supervision. I have been very conscious of the need to be interpretative and have attempted to present my analysis in this way, as opposed to being ‘simply’ descriptive. Coherence and Transparency: My commitment to thinking about relative meaning making when exploring my research questions has informed the decisions I have made regarding research design and execution. I have aimed to be consistent to the principles of constructionism in the writing of my findings and the language I have used. In the interests of transparency I have been specific about what I have done in the process of my research and have given prominence to participants’ quotes. This is with the aim of presenting what participants said outside of my interpretation of what they may have meant by various statements. I have also included full transcripts, both with and without annotations, on the CD-ROM included. Impact and Importance: I have adopted the position that research is ethical if it makes an impact of some sort. I was inspired by the experiences my participants shared with me,
what I felt to be their passion for their work and the effect of this work that they bore witness to. I feel that I would be doing my participants a disservice not to disseminate my findings as widely as possible. I therefore have a dissemination plan that includes publication in a peer reviewed journal should such a submission be accepted. Evaluation has been deemed an important endeavour from many different perspectives. It is thus that I feel this research may usefully add to the debate about what information may be meaningful when evaluating the work of Educational Psychologists.
3.6.4. Reflexivity
Yardley (2008) defines reflexivity as the 'explicit consideration of specific ways in which it is likely that the study was influenced by the researcher' (p. 250). There are a number of ways in which I am likely to have influenced this research; two take priority. Firstly, I am training to be an Educational Psychologist and much of my discussion about role could relate to how I want that role to be shaped. I do not think that I am alone in wishing for more scope for the role, especially outside of existing statutory functions. However, it is possible that I was drawn to the statements which may reflect my own preferred way of working. Similarly, I am aware that my preference for more qualitative methods of evaluation could have skewed the importance I attached to reflections about evaluation in general. I have to admit that Lily's statement about not needing to be a rocket scientist to know someone is happy is a personal favourite. I also acknowledge a certain degree of surprise when participants spoke of quantitative methods of evaluation with approval. I therefore had to draw on all my skills as a curious listener. Although this took the form of the mental question 'really?', it helped maintain a healthy level of interest which I hope enabled me to value those views equally. However, on the whole I found myself thoroughly fascinated by what my
participants do and think about their work. I was wholeheartedly absorbed by what they told me and it is very likely that this emotional response also influenced the ways in which I interpreted our discussions. However, I also admit to feeling that this is more a positive implication than a negative concern, especially as emotional response figures quite prominently in what was said. I reiterate what a privilege it was to share the interview time with my participants and I was pleased that some fed back that they found it a positive experience too. Finally, I confess that I was not overly confident while I did my analysis and am appreciative of the reassuring and constructive comments made by my supervisor and my participants. It may thus be that my analysis appears overcautious and lacks the vigour I feel it deserves.
3.7. Phase Two: Quantitative Method

The quantitative phase of this research aimed to indicate the level of consensus relating to the findings from the Interpretative Phenomenological Analysis of the interview data. The information drawn from participants when they spoke about their work is categorised as External and Internal Tools in the qualitative findings section. However, in the literature, almost all evaluation studies focus on various forms of external tools. It was therefore necessary to balance my findings with the published literature to ensure that I did not bias the study towards internal tools. Goal Attainment Scaling (GAS) and Target Monitoring and Evaluation (TME) are prominent in the literature, and interview participants spoke about the value of setting targets and goals. I therefore clustered certain statements relating to setting goals and measuring
whether these were achieved as ‘Target Based Techniques’. Participants referred to quantitative measures of before and after scores, like the Strengths and Difficulties Questionnaire (SDQ), as well as curriculum-based scores. I therefore clustered statements pertaining to recording a change in scores as ‘Standardised Measures’. There was a lot of discussion in the interviews about being given verbal feedback, as well as written qualitative feedback. Statements relating to these kinds of evaluation were clustered under ‘Qualitative Feedback’. The final category, ‘Professional Opinion’, relates to the internal tools described by the participants, that which I summarised as ‘thinking, seeing, knowing, feeling and reflecting’. I specifically chose the name for this category because of the literature around evidence-based practice, which contrasts professional opinion with research evidence, and because of the validity given to professional opinion in discussion papers like Mercieca’s (2009). Statements used for both the pilot and final questionnaires were either adapted verbatim extracts from the interviews, or statements generated by drawing on examples of methods used in the literature or by summarising comments made in the interviews.
3.7.1. Pilot Phase
I initially chose to use a ranking method, which is endorsed by Viswanathan (2005) as an appropriate method when exploring ‘relative preference or prioritization’ (p. 247). Stone and Sidel (2004) also suggest that ranking is relatively easy to do, and Gravetter and Wallnau (2009) consider ranking suitable for ‘variables’ that are not regularly measured because they are difficult to define or to assign absolute values to. The literature about ranking methods seems dominated by marketing and product preference, particularly relating to food.
However, Dyer (1995) writes specifically about ranking-style questions in relation to psychological research. This text was used to inform the phrasing of the pilot questionnaires. Fifteen qualified Educational Psychologists undertaking a post-qualification Doctorate at the Tavistock Centre completed the pilot questionnaires.[13] They ranked six statements for each of the four categories of evaluation identified above, having been asked to indicate ‘whichever seems to best fit’ each category. The mode and median values for the ranks of each statement were calculated and the statements with the lowest mode and median were selected for use in the final questionnaire. Although it was tempting to limit the analysis of my pilot phase to a quantitative activity, a large amount of qualitative information was offered by the pilot phase participants in the form of written comments. This commentary strongly indicated the degree of difficulty experienced by the participants when responding to the questions posed. Based on feedback from the pilot phase that the ranking activity was found ‘difficult’ and that the instructions were not clear, I chose instead to use a Likert scale for the final questionnaire, which asks participants ‘the extent to which they agree or disagree with a particular statement’ (Sullivan, 2009, p. 293). Ogden and Lo (2010) describe Likert scales as ‘the dominant measurement tool’ in quantitative psychology (p. 1). However, they also discuss the limitations of this technique, reviewing literature which highlights psychometric and cultural concerns. As the target audience for my questionnaire consisted of Educational Psychologists, well versed in scales of this nature, the impact of these limitations was likely to be negligible.

[13] This questionnaire was originally intended to be piloted with another group. However, circumstances were such that this did not take place and an alternative group was identified.

Three items from the original six presented in the pilot questionnaire were chosen to represent each category of information. The three highest-ranked items were determined by calculating both the mode and the median values for each item; both statistics were calculated so that ‘tie’ ranks could be resolved. For ‘standardised measures’ and ‘target based techniques’ the half-way point between the mode and the mean was required to determine the third item to be included. For the other two categories, the mode and the median values indicated the same three items for inclusion in the final questionnaire. The resulting 12 items were drawn from a hat in order to randomise their presentation on the questionnaire. I altered some statements in the final questionnaire based on suggestions made, e.g. to make them less like examples and more like descriptions, and to make them more differentiated from each other, simpler and less personal (although statements fitting ‘professional opinion’ were less amenable to this change). Following further suggestions from my supervisor, some statements were amended slightly to improve the face validity of the questionnaire (e.g. ‘wonderful stories of change’ was amended to ‘stories of change’). The statements were all presented as positive statements for the same reason.
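Purely as an illustration, the minimal sketch below re-expresses this selection rule in Python. All statement names and rank values are invented, since the raw pilot data are not reproduced here; the sketch assumes a rank of 1 denotes the best fit and uses the mode/mean midpoint only as the tie-break described above.

```python
# A minimal sketch of the pilot item-selection rule, assuming rank 1 = best fit.
# All statement names and rank values below are invented for illustration.
from statistics import mean, median, mode

# Hypothetical ranks: six statements for one category, each ranked by five
# pilot participants (1 = best fit for the category, 6 = worst fit).
ranks = {
    "statement_a": [1, 2, 1, 1, 3],
    "statement_b": [2, 1, 3, 2, 2],
    "statement_c": [3, 3, 2, 4, 1],
    "statement_d": [4, 5, 4, 3, 4],
    "statement_e": [5, 4, 6, 5, 6],
    "statement_f": [6, 6, 5, 6, 5],
}

def fit_score(item_ranks):
    """Order items by mode, then median, then the mode/mean midpoint
    (the tie-break described in the text)."""
    m = mode(item_ranks)
    return (m, median(item_ranks), (m + mean(item_ranks)) / 2)

# Lower scores indicate a better fit; the top three would go forward
# into the final questionnaire.
selected = sorted(ranks, key=lambda name: fit_score(ranks[name]))[:3]
print(selected)  # e.g. ['statement_a', 'statement_b', 'statement_c']
```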
3.7.2. Sample
The final questionnaires were given to all Educational Psychologists attending a team day event. All Educational Psychologists practising within the local authority Educational Psychology Service in which this research took place were invited to this event. A total of 18 Educational Psychologists completed the questionnaire, out of 20 in the team. In contrast to the interview sample, the questionnaire participants included senior members of the team (including the Principal Educational Psychologist) and Educational Psychologists with differing levels of experience. The interview participants also took part in the questionnaire.
3.7.3. Procedure
The questionnaires were printed in landscape format on a single piece of A4 paper, as per Figure 9. They were handed out to the members of the Educational Psychology Service in which this research took place who attended a team day event. An additional verbal statement was made as follows: ‘Please consider each statement presented on the questionnaire in the context of a piece of work that seems to be going well. The questionnaires will be anonymous and by completing them please note that this is taken as consent to participate. Because the questionnaires are anonymous, they will not be able to be withdrawn after they have been given to me. You can choose whether or not to complete the questionnaire. Thank you.’ I was not present in the room while the questionnaires were filled in, and completed questionnaires were collected by a member of the team and returned to me in one pile. There were no identifying marks on any of the questionnaires to show who had filled them in. Three of the questionnaires had ‘missing data’: either no box was ticked against a number of statements or two boxes were ticked. I could not be sure if this had been done in error or if the participant was indicating an ‘I don’t know’ response. As there was a box for ‘Don’t Know’, I made the assumption that the missing data was in error. To avoid giving undue weight to the responses that were present on these questionnaires, I excluded them from my analysis. The final sample therefore consisted of 15 questionnaires.
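As a minimal illustration of this exclusion rule (with invented tick counts, since the completed forms are not reproduced here), a questionnaire is retained only if every one of the 12 statements has exactly one box ticked:

```python
# Hypothetical tick counts per statement (12 statements per questionnaire);
# the rule drops any questionnaire with a blank (0) or double-ticked (2)
# statement, mirroring the treatment of 'missing data' described above.
questionnaires = [
    [1] * 12,          # complete: one tick per statement -> retained
    [1] * 11 + [0],    # one statement left blank -> excluded
    [1] * 11 + [2],    # two boxes ticked for one statement -> excluded
]

complete = [q for q in questionnaires if all(ticks == 1 for ticks in q)]
print(len(complete))  # number of questionnaires retained for analysis
```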
Figure 9. Final Questionnaire: The questionnaire as it was presented to the final questionnaire sample.

‘For each of the following statements indicate whether you find this meaningful when you evaluate or think about the impact of your work. Please make a mark in the box to show whether you agree or disagree (and the degree to which you agree or disagree).’

Response options: Very Strongly Disagree / Strongly Disagree / Disagree / Don’t Know / Agree / Strongly Agree / Very Strongly Agree

Statements:
- A tool is used to measure changes in the perception of the problem.
- There is a record of parental comments, children’s comments or teachers’ comments.
- Change is monitored in an aspect of learning using standardised before and after measures.
- Progress is monitored against Individual Education Plans.
- I think that whatever we have agreed upon is having some effect.
- I can see evidence of recommended strategies still in place within the classroom.
- Scores on any before and after measure improve.
- Goals to be achieved are agreed and progress towards these is monitored.
- Targets are set making the work specific.
- There are stories about change.
- Somebody says, ‘That was helpful’ or ‘I got a lot out of that’.
- Things have visibly changed.
3.7.4. Data Analysis
The data was entered into an Excel spreadsheet with responses coded as follows: 3 = ‘very strongly agree’, 2 = ‘strongly agree’, 1 = ‘agree’, 0 = ‘don’t know’, -1 = ‘disagree’ and -2 = ‘strongly disagree’. No participants ticked the box ‘very strongly disagree’ for any of the statements. Participants were assigned number signifiers based on the order in which the data was entered. A frequency table of all responses was generated using Excel; this was done in order to represent each individual’s response. A table comparing all of the ‘agree’ responses with all of the ‘disagree’ responses was also produced to illustrate the consensus within the questionnaire participant sample.
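The analysis itself was carried out in Excel; purely as an illustration, the sketch below re-expresses the same coding and counting steps in Python, using invented responses for a single statement.

```python
# Re-expression (with invented data) of the coding scheme and frequency
# counts described above; the actual analysis was carried out in Excel.
from collections import Counter

CODES = {
    "very strongly agree": 3, "strongly agree": 2, "agree": 1,
    "don't know": 0, "disagree": -1, "strongly disagree": -2,
    # 'very strongly disagree' is omitted: no participant ticked it (see text).
}

# Hypothetical responses to one statement from the 15 retained questionnaires.
responses = [
    "agree", "strongly agree", "agree", "don't know", "disagree",
    "agree", "very strongly agree", "agree", "strongly agree", "agree",
    "agree", "disagree", "agree", "strongly agree", "agree",
]

coded = [CODES[r] for r in responses]
frequencies = Counter(coded)          # per-code frequency table
agree = sum(c > 0 for c in coded)     # all 'agree' responses combined
disagree = sum(c < 0 for c in coded)  # all 'disagree' responses combined
print(frequencies, agree, disagree)
```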
3.7.5. Reliability and Validity
Reliability is a measure of consistency in a questionnaire, while validity describes the degree to which the questionnaire ‘measures what it is intended to measure’ (Mertens, 2005, p. 352; Rosenthal and Rosnow, 1991, p. 60). Calculations of reliability aim to show stability of a test over time (test-retest reliability) or similarity in measurement between items (internal-consistency reliability) (Rosenthal and Rosnow, 1991). Neither of these calculations is applicable to the current questionnaire, in that it was designed to ascertain the degree of consensus on particular statements within a specific group of people at a specific time. Test-retest reliability may be checked at a later date but would not take into account the possibility of genuine opinion change. Internal consistency is also inappropriate, as this questionnaire does not aim to measure a single ‘attribute’. However, there are alternative ways of attributing reliability to this questionnaire. Cohen, Manion and Morrison (2007) suggest that the anonymity of a questionnaire adds to its reliability as this ‘encourages greater honesty’ (p. 158). They also recommend using a pilot phase to increase both reliability and validity, which
occurred in this instance. Sampling can also affect both reliability and validity. Although the sampling assumed whole-group participation, two members of the group were not present, and three further questionnaires were not included in the analysis due to missing data. This reduction in responses may have introduced error into my quantitative results, affecting the reliability of those findings. The pilot phase specifically addressed the content validity of the questionnaire by asking practising Educational Psychologists, who also happened to be undertaking their own research, to rate the items which best fit certain stated categories. Content validity is the degree to which ‘items represent the kinds of material… they are supposed to represent’ (Rosenthal and Rosnow, 1991, p. 60). Although the scale of the pilot, and indeed of the questionnaire itself, does not lend itself to statements of absolute certainty, the process of the pilot can be said to offer a reasonable degree of content validity to the questionnaire. Following Kvale (1994), perhaps it is more appropriate to explore the quality of the questionnaire, rather than its reliability and validity. The degree to which the questionnaire relates to sources of information used by Educational Psychologists to think about whether they have made an impact or not can be judged by examining the questionnaire itself, which is reproduced in its entirety (Figure 9). This relates more closely to the characteristics defined by Yardley which are used as a measure of the qualitative section of this research, specifically ‘coherence and transparency’ (2000; 2008). Applying such measures to quantitative methods can be seen to fit more with a social constructionist paradigm. Similarly, Guba and Lincoln (1989) promote dependability and credibility as parallel measures of quality.
4. Findings

4.1. Interpretative Phenomenological Analysis of Interviews

Interviews were conducted with six Educational Psychologists from one Educational Psychology Service, each of whom had been in service as a qualified Educational Psychologist for more than five years, had worked for two or more local authorities and was not a senior practitioner. Although my research questions specifically relate to evaluation, when I developed my interview schedule I felt that it would be useful to ask my participants to describe their role as Educational Psychologists. This concept seemed important to the participants I spoke with, who all referred to it while answering other questions during the interview, and a consideration of ‘role’ developed into one of the major themes of my analysis. Linked to the theme of role, the complexity experienced by the Educational Psychologists, both in terms of their work as well as how they did and could evaluate it, emerged as the second theme of my analysis. Acknowledged in the literature, this complexity is seen in the numerous systems around children and the ways in which these may affect them. Attention was given to the difficulty of effecting change at the different levels of these systems in order to make a positive impact for the children concerned. The complexity of understanding the ways in which change occurs, and what meaning may be drawn from the changes seen, was also explored. Finally, there are complex issues which affect the work of an Educational Psychologist, the one highlighted being that of ‘inclusion’. All of these things have many implications for evaluating the outcomes of this work and the evaluation tools used.
The tools drawn upon by the participants when thinking about whether they had made an impact through their work are referred to under the theme ‘measures and outcomes’. Participants identified a range of information that they draw on in order to measure the success of their work. These tools are distinguished according to whether they are ‘internal’ tools or ‘external’ tools. The participants reflect on the uses and limitations of each, and the question of whether the information produced is tangible or not is raised. The impact of perception is also considered as a complicating factor for both. Finally, a linking theme of ‘judgement’ is identified as a result of one participant’s reflections on her experiences of Ofsted. The impact of this judgement is presented as something to be thought of in the light of the activity of ‘evaluation’, which itself implies a certain degree of being judged or measured against a standard. The implications that feeling judged may have for creativity in particular are highlighted and linked back to the discussion of role.
Note: Ellipses indicate pauses in the narrative. Ellipses in brackets indicate words omitted in the interests of brevity and relevance. Words in brackets either explain omitted words or are more generic terms used to protect anonymity.
Figure 10. Overall Theme Schematic: Graphical representation of themes and their relationship to one another.
4.1.1. Theme 1: Role
‘Ah, I think there are a number of different roles actually. I think we kind of do everything…’ (Lily)
Figure 11. Role: The theme ‘Role’ and its accompanying subordinate themes (Personal Power, Innovation, Thought).
Participants seemed clear that they perform a variety of functions and tasks. To briefly summarise, participants highlighted staff training, group work with children, brief therapeutic work with families and children, consultations with school staff and Early Years professionals, joint home-school consultations, parenting courses and cognitive assessments of individual children followed by a report and associated recommendations for schools as aspects of ‘what they do’.
My interest in the role of Educational Psychologists relates less to the particulars of what the participants do, and more to what they think about what they do. Violet seems to prefer a multifaceted systemic approach. She says:

I think historically the role of Educational Psychologists and the medical model is going in and the child has a learning difficulty and you go and you do the assessments and you do, you assess and you make a recommendation and you fix the problem type of model as opposed to a much broader perspective. And er using er many other different tools besides the IQ, the old IQ test, the cognitive assessment, to just assess and obtain a score and make a decision on that. It’s far broader than that. Violet, ll. 22 - 25

Violet juxtaposes the historical role with the broader, current role she now feels she performs. Violet is quite dismissive of this historical role, which she ascribes to the ‘medical model’. Her reference to cognitive assessment as ‘the old IQ test’ and her summary of this approach as ‘just assess, obtain a score and make a decision’ suggest that she does not value this approach in the same way she values the broader approach to which she refers. She says:

I think it’s a real privilege to be able to work in the manner in which I am working at the moment with, with the broader … with the being able to go work with the broader view so I, I think that that is, that has been very good … and it’s just, it’s just really rewarding, it’s really really rewarding […] Violet, ll. 168 - 170
There is a strength of feeling behind Violet’s words demonstrating the degree to which her role is rewarding to her. However, Violet’s sense that this way of working is important does not only stem from her own satisfaction with her job. When she says, ‘that has been very good’, she may also be referring to the impact that she has seen through her work. Not only is her role a privilege, Violet also describes it as a luxury. Violet says that she is able to engage with the family and school and build relationships because she has the luxury of a number of sessions to spend with parents, teachers and family members. She contrasts this with a ‘just a touch and go’ approach, which I believe refers to the medical model she spoke of earlier. She shows how strongly she feels that this way of working is positive by saying:

I found that that has been um really really beneficial and it has really helped the work. The fact that it is not just a touch and go. Violet, l. 36

Daisy also considers her role to be a luxury, a word defined by the Oxford Dictionary as ‘an inessential, desirable item which is expensive or difficult to obtain’ or ‘a pleasure obtained only rarely’ (http://oxforddictionaries.com/definition/luxury, accessed 22 August 2011).

Mainly because of the economic doom and gloom I think that this is a bit of a luxury. My post is a luxury. […] But these are things that are extras so I think that I see purse strings tighten and so this role might actually go. Daisy, ll. 204 - 207

Daisy describes her work as ‘lovely’ and backs this assertion up by evidencing that her work has been valuable and useful with a reference to ‘very good feedback’. In both of these
instances, the work that is felt to be a luxury is also described as beneficial, helpful, making a difference, rewarding, valuable and useful. Daisy speaks of economic doom and gloom, which possibly captures her own anxieties about her work and the future, and perhaps also taps into the zeitgeist of a country experiencing governmentally imposed austerity measures and very slow economic growth. Daisy might also be expressing an anxiety that even valuable, useful work could be seen as ‘extra’ and may disappear. This raises a question about what defines essential work and what is considered a ‘luxury’. Daisy says that her post is different to what she describes as ‘the usual’ work of Educational Psychologists. This may be why Daisy considers what she does to be ‘extra’ and perhaps feels particularly vulnerable to cuts. However, other participants also distinguish between what they usually do and other work, with unusual work garnering the most praise. Iris talks about a group project run with children which she feels has been her ‘best’ work.

[It] has been an interesting piece of work because it’s allowed um me to take part in an intervention which has been over time, which is a pretty rare opportunity in terms of the work that we do. Iris, ll. 16 - 18

Iris seems to be expressing a view that involvement in delivering interventions is rare and unusual work. The novelty of this group project has perhaps made it seem more interesting than the day-to-day tasks that Iris ordinarily engages in; however, her use of the word ‘opportunity’ suggests a desire to do this work more often. Jasmine also speaks about her experiences of being actively involved in class-based work as ‘such a rare thing for an EP to have a chance to do’. Again the reference to rarity is notable, as is the idea of such work being an opportunity or a positive thing for Educational Psychologists to do. Jasmine also describes a
time when her Educational Psychology Service was asked to provide additional involvement to a school.

… so between us we kind of, we thought well, OK we can cover the ordinary work of an EP. But what we don’t want to do, it will be nice to do something extra but we don’t want to do more of the same [...] Jasmine, ll. 83 - 84

There is a distinction between ‘more of the same’ and doing something extra, which would be ‘nice’. This ‘more of the same’ is quite probably the sort of work that is referred to as ‘usual’ or ‘ordinary’. The implications are potentially worrying in that extraordinary work, identified as valuable and useful, may be under threat with a tightening of purse strings and a discarding of ‘luxuries’ and ‘extras’.
4.1.1.1. Personal Power

In my early analyses of the interview transcripts I noticed repeated references to autonomy, control, choice and powerlessness. Jasmine told me about a programme she delivered in an earlier post which had a section devoted to ‘Personal Power’. Within this programme, personal power was a boost to confidence and self-esteem. However, I was struck by this phrase and in my reflections I noted down that I felt it also related to ‘the power to do something out of the ordinary’. It is thus that I have grouped the elements of autonomy, control and choice in the Educational Psychologists’ role under this title. Iris highlights one source of a feeling of powerlessness in that much of the work of Educational Psychologists tends to be ‘indirect’ (e.g. Erchul and Sheridan, 2008; Roach, Kratochwill and Frank, 2009). She says:
There are often times with the job as a whole you feel that what needs to be put in place for those children you’re passing back into somebody else’s hands so sometimes you feel a bit powerless with that. Iris, ll. 20 - 22

Iris was reflecting on this in contrast to working directly with a group of children.

The results of that in terms of the follow up questionnaires and things seem to suggest that those programmes have made a difference to the children that have taken part. So that feels like a good piece of work to have taken part in because you feel like you’ve left children with some skills that they didn’t have before. Iris, ll. 18 - 20

When Iris speaks about her group project it sounds like she feels that the skills the children have gained are a result of her own contribution in delivering the intervention, as compared to passing the work over for someone else to do. Educational Psychologists are not always in a position to directly deliver what they feel is needed by the children they see; both Jasmine and Iris talked about such work as ‘rare’. This has significant implications for the evaluation of Educational Psychologists’ work. Moreover, passing on what needs to be done to another person feels very precarious.
Sometimes when you write your recommendations in a report, you’re reliant on the school to do it […] when you write recommendations in a report there
are those issues of ownership that the school doesn’t necessarily have because it is what you’re giving to them. Iris, ll. 140 - 144

Educational Psychologists may feel powerless because of the indirect nature of their role. Another possible source of feeling powerless is the volume of work an Educational Psychologist may have at any time. Jasmine spoke of not setting a precedent for ‘asking for more’ because this may ‘make life difficult for other EPs’. Rose is perhaps feeling this difficulty as she says:

There’s lots of ideas that I’d like to develop but unfortunately at the moment, and I fear it’s going to get worse, the cases coming in they’re just flooding in. So you kind of tend to get sucked in to running around doing case work but I think there are lots of other things that could […] more effectively […] address some of the problems before they happen. Which would be more ideal really, wouldn’t they? Rose, ll. 106 - 110

Rose says that the casework is ‘flooding in’, suggesting a deluge which cannot be stopped or avoided or contained. This flood is perhaps pushing out more effective, preventative work due to the quantity of work. It also ‘sucks you in’ to working in a reactive, unplanned way. Rose expresses her anxiety about what may result and her preference for preventative work.

Ideally we want to do is focus on the preventative work to stop so many problems happening but what will inevitably happen is that you only get time
for the crisis work which… so I don’t know. So hopefully we’ll find a way through that, so it doesn’t become like that. Rose, ll. 493 - 495

Daisy also intimates that she prefers preventative work. She supports a team of staff and describes their work as ‘remarkably similar to an EP’s’. The difference is that this team ‘don’t have all the statutory stuff. They have all the supportive and preventative stuff’. The contrast in this statement suggests that statutory work does not provide a supportive or preventative function. Jasmine also contrasts ‘the burden of statutory work’ with creative work, saying that

if... you have that pressure off then you’ve got that space to be er more creative. Jasmine, l. 307

Lily speaks more positively about the choices she is able to make in her role and the autonomy she enjoys, but makes reference to what she perceives as a threat from bureaucracy which may come to define what she should do, undermining the autonomy she holds so dear.
I do feel that I have quite a lot of autonomy which is quite nice to be able to decide how I feel best put to use in discussion with my own schools. […] I like being able to go into a school, discuss with them what they think they need, negotiate how we can meet that need and then you know work towards meeting that need. […] I would hate to have that autonomy taken away and
be told that I have to do certain things to jump through more hoops than I’m already jumping through. Lily, ll. 205 - 215

Lily seems to value her professional autonomy a great deal. Although she is resistant to externally imposed measures of effectiveness, she does focus on identifying and meeting need. Negotiating how she is ‘best put to use’ with her schools also shows a commitment to efficiency and quality of service. However, in spite of feeling autonomous in her role, Lily still feels that she has hoops to jump through, which potentially have a negative impact on her ability to work innovatively and effectively, and which she finds frustrating.
4.1.1.2. Innovation

Innovation was something that was specifically highlighted by Jasmine. I consider this concept because I think it captures an important element of the autonomy referred to by Lily – being able to use one’s individual professional capacity and creativity to facilitate change alongside others. Jasmine reflects on how difficult it can be to be innovative as an Educational Psychologist, often due to time constraints and workload limitations. Jasmine told me about a term she worked after returning from an extended period of illness.
That was almost the best term I’ve ever had as an EP. Because I had been ill I was given like a half workload because she was a half time and I covered her maternity leave before I took on more stuff. And if you could really let EPs work like that, then you could, the sky’s the limit (laughs). But what actually
happens is you’re bound down with an enormous amount of statutory work or… and they say why aren’t they more innovative? Jasmine, ll. 257 - 259 I am intrigued by Jasmine’s reference to ‘the sky’s the limit’ and what Educational Psychologists could achieve were they not ‘bound down’ by particularly statutory work as well as other restricting work that Jasmine does not specifically name. There is an acknowledgement of an infeasibility of halving Educational Psychologists’ workloads across the board in order to enable them to reach the sky. However, Jasmine talks about an accusation levelled at Educational Psychologists for not being more innovative by the same ‘they’ who mete out the work which is seen as so limiting. This is reminiscent of Lily’s feelings when she spoke of being told to jump through hoops. Jasmine does however paint a picture of potential which is rather alluring. Although Jasmine very strongly favours innovative working she was also very clear about what is required to be able to think creatively. I have already spoken about the impact of workload on innovation, but Jasmine also identifies another potential pitfall in trying to do it alone. Instead she advocates team work when being creative and emphasises that this can enhance the impact of the resulting work. One of the papers we were given to read um on my training course was the Myth of the Hero Innovator15 and there’s a picture at the beginning of this article of a knight in shining armour, with his spear and his shield and his white horse and rushing in to save the world. And um, that’s… I think that’s 15
Georgiades, N. J. and Phillimore, L. (1975). The myth of the Hero-Innovator and alternative strategies for organizational change. In C. Kiernan and F. Woodford, (Eds.) Behaviour Modification for the Severely Retarded. Amsterdam: Association Scientific.
92
probably the most important thing I have ever read because you tend to just fall flat on your face and not do anything for anyone. So it was not being the hero innovator but just going in and listening and putting in some ideas but then adapting to what they wanted […] then we had a whole team of people being creative whereas two of us would have run out of ideas in no time […] Jasmine, ll. 159 - 164
4.1.1.3. Thought

To be innovative an Educational Psychologist needs time to think, which is difficult to do when burdened by the workload. Rose spoke about being pushed into reactive, crisis ways of working by a flood of referrals and the negative impact this has had on her ability to focus on preventative work and develop some of her ideas. In contrast, Daisy spoke about her best work as a project which specifically required a lot of thought from the professionals involved and which sounds as though it was incredibly difficult for her and a colleague.

I think it was my best bit of work because it was so challenging, and quite aggressively so at times because we were kind of stirring things up and um and just as an example in a light hearted way we had steering group meetings […] and they were dreadful. And we would actually call them, we began, just to make ourselves feel better, we began to call them steering group beatings because they were just such hard work (laughing). And what we were finding was, it was a classic case of them wanting us to just get on, do something, get on with the job. And we were saying, hang on a second you know we’re not going to run in to, rush into something without properly
thinking about what these children should be getting and how they should be getting it. You know, you’ve asked us to do some thinking about this, we’re doing some thinking about it. […] they just wanted us to have the children lined up outside and process them really. Daisy, ll. 74 - 84

I think it is important to acknowledge that Daisy did not choose an easy option as her best work. Indeed, she says that she thinks it is her ‘best bit of work because it was so challenging’. I am struck by her description of the meetings she had to attend as ‘beatings’ and the almost intolerable level of discomfort that this must have entailed. The aggression hinted at arises from the pressure to ‘get on with the job’. Daisy, like Rose, contrasts taking a thoughtful approach against rushing in, running in. In spite of the project being set up to properly think about what the children in question needed, Daisy acknowledges that steering group members would have preferred that these children were ‘processed’ by the psychologists involved. This sounds remarkably like Violet’s description of the ‘historical role’ referenced at the start of this section. Daisy also uses the word ‘just’ to deprecate this approach to working with children, as Violet does. This lack of thinking and the drive to ‘do something’ is referred to as ‘a classic case’, implying that this is more the norm than the exception. Daisy sees that doing things without thinking first means that things are not done properly.

And it was the classic situation you know where you’re being, people don’t want to think. They just want to get on and do things. So no space for thinking and that was the main part of the problem. That everybody had been swept along and running around like headless chickens doing things and not really able to do them properly because the thought wasn’t behind it. And
also what we felt was that some of the cases were so awful that it was hard to think about them and so for [some staff] a lot of it was about giving them the space to just sort of be able to think about these awful things with somebody else. Daisy, ll. 100 - 104

Like Rose’s flood of casework, Daisy talks about being swept along by the work because of the urge to ‘get on and do things’. There are also ‘awful things’ that arise when working with children that are very difficult to think about. It can be easier not to think at all, especially when there is not the space or support to do so. What Daisy says has implications for Educational Psychologists who may be involved with similar situations. On the one hand, Educational Psychologists can provide the space and support to professionals who need to think about ‘awful things’. On the other hand, Educational Psychologists also need similar space and support, perhaps by being permitted to think before doing, so as to avoid the ‘beatings’ that Daisy and her colleague experienced. Iris commends peer supervision, either informal or formal, as a source of such space and support, saying, ‘just having the discipline of having those conversations challenges your thinking […] and actually how useful that is to talk to other people that’ve had tricky cases like that as well’.
4.1.2. Theme 2: Complexity
‘I think you just get lost in the quagmire of all the massive difficulties and complex family history and kind of dynamics that are going to be going on. There’s only so much that you can do about that.’ (Rose)
Figure 12. Complexity: The theme ‘Complexity’ and its accompanying subordinate themes (Systems, Change, Inclusion).
4.1.2.1. Systems

Violet talks about ‘being aware of other factors that could be impacting on the child, um, that you weren’t aware of before. It’s not just that they can’t spell, you know. It’s a bit broader than that.’ Rose’s experience has also taught her that it is not possible to ‘fix’ difficulties using a simple approach, because children are a product of their environment or, as a systemic perspective might put it, the ‘complex interaction between the individual, school, family and
the wider society’ (Daniels and Williams, 2000, p. 222). I think these are the factors referred to by Violet and as Rose states: The more work that I’ve done directly […] with children the more I’m kind of acutely aware that unless you involve the adults it’s not necessarily going to… you need to involve the adults in some way because the children are very much still a product. […] ’Cause otherwise the children can change all they want but if the parents aren’t changing with them then it’s not going to make much difference. Rose, ll. 40 - 45 Rose says that her work can only make a difference to the child if his or her parents also make changes to support that child. This is why Rose has focused her attention on involving parents in her work. She sees her role as creating a supportive structure around the child. It is only in this way that any difference will be evident suggesting that without this systemic approach there will be little in the way of outcomes to measure. Family context can also be linked to a negative impact on outcome. Iris speaks about a little boy in nursery about whom she facilitated a consultation session with school. So it kind of went pear shaped for a while because I guess he was anxious that daddy was going away or um he wasn’t going to come back. … So I guess they just saw behaviours that they’d had under control and it flared up and I guess that felt outside the school’s control to manage because it was actually factors outside of the school that was driving it actually. Iris, ll. 175 - 177
This is an explicit example of what could happen when all the components in the system around a child are not making the changes needed of them. Iris says twice that the difficulties ‘flared up’, which makes me think of a fire in terms of the volatility and the potential to blaze out of control. Iris says that things were ‘alright’ for this little boy ‘when the parents were doing what they were supposed to be doing’. However, Iris explains that getting the parents to do this was beyond the school’s control and perhaps beyond her control as well. Although Iris talks about ‘managing’ the child’s behaviour, it is possible that influences outside either her or the school’s control make the situation itself impossible to ‘manage’. Because things ‘went pear shaped for a while’ for this little boy, does that mean that Iris has not done her job well? Iris herself expressed that this case ‘probably didn’t feel like such a satisfactory piece of work’. Although Iris is self-critical here, she admits that the teacher was not negative about the consultation itself. There was dissatisfaction that there was not enough resourcing to support this child, but Iris was able to identify that the school better understood what had gone wrong and that the knowledge they needed to manage the situation was available to them. However, it is possible that the feeling of being under-resourced, and the possible anxiety created by the behaviours displayed by this little boy, meant that the school was not able to access this knowledge. This difficulty with managing a situation because of anxiety is discussed by Rose in relation to one of her cases, where another little boy in nursery was awaiting a diagnosis of Autism.

The whole thing was just a struggle and I think everyone was reaching desperation point. And all I really did was spend a lot of time with mum I think at first, just trying to lower anxiety levels there, but just facilitated quite a
number of quite regular meetings with mum and the school. And they’d had lots of support from loads of different, like hundreds of different agencies, so there were loads of recommendations already there. There was no real need for me to reinvent the wheel, but you know what it’s like sometimes. I think when people are overwhelmed with a problem, the piece of paper and the recommendations all just go skim off the top and they go, nothing works. So I think I just helped them to actually implement some of the recommendations that they’d already been given. Rose, ll. 127 - 133

In contrast to the ‘loads’ of recommendations from ‘like hundreds of different agencies’, Rose says that she was told that ‘nothing works’. She says that both mum and school staff felt so overwhelmed with the problem that they were unable to implement the recommendations or use the help they had been given. The perception that ‘nothing’ had worked perhaps stemmed from this incapacitating anxiety and meant that previous input was viewed as unhelpful. When supported to ‘actually implement’ these recommendations, the result was, as Rose reports, ‘a marked difference’. However, Rose is not sure what interventions have made that difference. She speaks of the many agencies involved with the child and family and how useful their strategies may have been once they were implemented properly. Rose shares her suspicion that what really helped people feel that there had been a positive outcome was that their anxiety had been lowered. She acknowledges that she cannot change that the little boy has Autism but suggests that she was able to ‘contain an anxious situation’, which helped people think more clearly and access the help they have been offered. In this way the ‘problem’ cannot be solved, but the systems
around it can be helped to be more effective. Rose’s approach also highlights her use of psychodynamic psychology which, as it concerns unconscious effects and interactions, introduces another level of complexity at the systemic and individual level.

So he has Autism, he’s always going to have difficulties, there’s always going to be challenging situations. That’s going to be life. They’re going to need kind of support throughout I’d imagine because there’s going to be new challenges that’ll flare up. But for that particular time I could close that with everyone going yeah actually everything’s great, this looks really good [….] So that felt quite successful but in a difficult to measure way actually. Rose, ll. 143 - 147
4.1.2.2. Change

Rose’s reflections highlight that although her involvement with the little boy she speaks of ‘felt quite successful’, she is not able to identify why, and what changes have been effected. Rose shared the story of another case where she experienced this difficulty. When she first met the family ‘everything was in crisis and mum couldn’t stop crying about how terrible things were’. This lady met with Rose once and ‘then mum suddenly was saying, well actually everything’s completely, he’s fine now. He seems quite happy going in to school.’ Rose felt that this also related to anxieties being ‘relieved’. However, she finds it interesting that there could be such a sharp turnaround in perception.

So she feels that everything is fine now. I’m not completely convinced. It’s interesting this trying to capture what is the mechanism of change. What do you actually do? […] if you then hold the anxiety it then gives them
perspective to see that maybe things aren’t so bad and then they can take that back and carry on. I don’t know. Rose, ll. 174 - 180

Rose again theorises that she has performed a containing role for mum, which has helped to give a different perspective on the difficulties being experienced. Although Rose is clearly thinking about what may have caused mum’s change in perception, she shows no certainty that her hypotheses are correct. She is also uncertain that what she has done has made the difference. Rose’s questions around ‘mechanisms of change’ show that evaluation is not only about establishing that a change has occurred or quantifying its magnitude; it is also about identifying what change has taken place and why. Rose sees planning the work and having a ‘clear rationale’ for it as the means by which she can identify what change took place. But change itself remains a complex phenomenon and, as Rose confesses, it is ‘quite a tricky thing to measure’. Lily suggests that this is because the work that helps, and the changes that are seen, are situation dependent and varied.

Um, sometimes it’s about looking at things differently, sometimes it’s about helping staff to work in a different way, sometimes it’s about helping children to behave in a different way, sometimes it’s about helping families to make changes to the way they see things or do things. So I think it’s just about trying to effect change within a variety of different systems. […] So it’s where that change takes place I think is very situation dependent. Lily, ll. 40 - 44
Helping is a key word in this quote, showing Lily’s commitment to this activity. However, ‘helping’ also shows the need for the active involvement of those being helped in order for change to take place. The definition of the word help (‘make it easier or possible for someone to do something by offering them one’s services or resources’; http://oxforddictionaries.com/definition/help, accessed 22 August 2011) shows that the primary actor within the helping relationship is the person helped. Add to this the ‘other factors’ that may have an impact on any situation at one time, and change becomes an extremely complex concept to consider. Lily expounds further on the elusiveness of measuring change and reiterates its conditional nature.

Because it’s a child, it’s a person, it’s difficult to measure, you know and it depends on what outcome you want. But whether the outcome you want is the same outcome that a parent wants or a school wants, so you know successful outcome is very difficult to just you know you’ve got to agree first what it is that you consider is successful. […] And you know in the end how do you measure whether it’s a successful outcome? If the parents are happy, if the child’s happy, if you’re happy? You know, how do you measure that? Everybody wants something different sometimes, so it’s not always an easy thing to just quantify. […] So every piece of work you do has a different goal, different outcome, different measure of success. Lily, ll. 128 - 149

When Lily says that change for children is difficult to measure because ‘it’s a person’, I think she is speaking of the intricacies of the human condition. On top of this lies the difficulty in agreeing on a successful outcome when there are many stakeholders in a ‘problem’. Lily
acknowledges that different people will have different perspectives on what constitutes a successful outcome and that this may not necessarily match the view of the Educational Psychologist. Lily emphasises happiness as a possible measure of success, although whose happiness takes precedence is another decision that would need to be made. However, she is unambiguous about the diversity of work, goals and desirable outcomes, and asserts that this necessitates different and flexible measures of success. Rose similarly speaks of what is involved in working with people.

I think life’s a bit more complicated and messy than specific targets but […] I think it’s really useful to have some idea of this is what I’m working on to feel like you’re actually making progress because otherwise I think you just get lost in the quagmire of all the massive difficulties and complex family history and kind of dynamics that are going to be going on. There’s only so much that you can do about that. Rose, ll. 299 - 305

Rose identifies planning and recognising the rationale for the work as one of the ways in which to measure and understand any changes seen. However, she acknowledges that this merely acts as a focus in order to boundary the difficulties seen into something manageable. This focus also perhaps performs the function of a microscope, magnifying the small changes that are made within a ‘quagmire of all the massive difficulties’. This does not minimise the value of small changes; it is rather an acknowledgment that within the ‘messy and complicated’ phenomenon of ‘life’ there is ‘only so much you can do about that’. I think Rose’s
use of the word ‘quagmire’ (‘a soft boggy area of land that gives way underfoot’; ‘an awkward, complex, or hazardous situation’; http://oxforddictionaries.com/definition/quagmire, accessed 23 August 2011) is expressive of both the mess and complication, and also a sense of danger that the direction of the work may easily give way underfoot or get bogged down or stuck. For me this really emphasises how complex the work can be and also how very difficult it is to measure change in this context.

4.1.2.3. Inclusion

Another facet of the theme Complexity is the concept of inclusion. Iris specifically speaks of this as an issue having a potential impact on both children and teachers, and also considers how this may have changed the work of Educational Psychologists. Although Iris speaks with particular eloquence about inclusion, other participants referred to it as well. Jasmine, for example, said ‘he never got into special ed or anything like that’ when thinking about the value of an intervention she had delivered, and Lily refers to possibly thinking ‘it’s in the child’s best interest to be in one place and [the parents or the school] might think something different’. Inclusion has driven much of Special Educational Needs policy, a direction that has been suggested to be based on ethical considerations rather than on evidence of effectiveness (Lindsay, 2007; Fox, 2002). Although showing a degree of tentativeness in her argument, Iris speaks about inclusion and the effects she sees the policy of inclusion as having had.

I think the cases that we’re, that come our way now are more complex […] Probably some of those other children that were um posing challenges in mainstream schools would have been in special schools as well before. So I think that is probably why … why we’re having to deal with those challenging
children because they’re probably struggling with mainstream and the mainstream teachers are struggling to hold them whereas perhaps they would have felt more … I don’t know, contained and the children might have been happier in a smaller group. Rightly or wrongly in terms of the label they had, whether that was right but maybe being in a smaller class felt better for them. Iris, ll. 193 - 202 Iris speaks of the happiness of the children, a consideration of which seems to have been lost in the push to keep them in mainstream schooling. Instead of feeling ‘better’ in a smaller class, the children are struggling, as are their teachers. Although Iris says that this is because of ‘challenges’, there is something about her thoughts on ‘holding’ and ‘containing’ that suggest that these challenges are emotional in nature. She says that ‘mainstream teachers are struggling to hold them’ which I think means that these teachers are finding it difficult to provide a containing function to the children in their care. Similarly, I think the teachers themselves do not feel contained, although Iris does not make it clear if she means the children or the teachers when she says that ‘they would have felt more contained’ had certain children been offered a place in special school. The policy of inclusion has also had the effect of making the work of the Educational Psychologist more challenging in that more and more complex cases are ‘coming our way’ both because schools are managing lower level difficulties themselves, but also because there are more children surviving infancy with complex needs due to medical advances. This has implications for the evaluation of the work of Educational Psychologists. On the one hand there are difficulties that, as Rose says, ‘there’s probably nothing with the best will in the world that we can do to make that better magically’. On the
other hand, the complexity of need adds another ‘factor’ to the ‘quagmire’ of effecting and measuring change. Finally, as Lily says, ‘everybody wants something different sometimes’. Where then does the measure of ‘success’ lie? Is it about the child’s happiness, keeping a child in mainstream school or securing placement in a special school? As Iris asked, ‘what does inclusion mean and why are we doing this?’
4.1.3. Theme 3: Measures and Outcomes

‘It doesn’t take a rocket scientist to know when someone is happy or not happy.’ (Lily)
Figure 13. Measures and Outcomes: The theme ‘Measures and Outcomes’ and its accompanying subordinate themes (External Tools, Internal Tools, Perception).
Participants identified a range of information that they draw on in order to measure the success of their work. Iris, Rose and Jasmine refer to standardised tools like the Strengths and Difficulties Questionnaire (SDQ, ©youthinmind). Daisy has looked at national statistics compiled through the use of curriculum-based assessment. Questionnaires are spoken about by all participants, although what this means differs between them; for example, Iris talks about a questionnaire that entails a scoring system (i.e. a quantitative measure) and Lily very specifically states that the questionnaires she used asked only for qualitative information. Rose has used Goal Attainment Scaling (GAS) as well as similar target monitoring techniques via the
Common Assessment Framework (CAF). I refer to these tools as ‘external’ in that they provide a concrete and permanent record of evaluation from a source outside of the Educational Psychologist. On the whole, participants spoke about ‘external tools’ less than the information I have labelled as ‘internal tools’. Such information comprises personally experienced indications of success like thinking, seeing, knowing, feeling and reflecting. Although all of these sources are admittedly obscure and transient, participants consistently referred to them when telling me whether or not they had made an impact in their work. Violet gives an indication as to why.

It’s really difficult to evaluate what you do, isn’t it? It really is. It’s quite a touchy feely kind of, kind of subject. It’s hard to actually pin it down […] some things are measurable and some things just are more intangible. Violet, ll. 183 - 190
4.1.3.1. External Tools

Rose perhaps spoke the most about external tools and considers some of the benefits and difficulties of using them in her work. She admits, ‘I work maybe a bit more in numbers’, implying that she prefers quantitative data. During her interview, Rose considers two particular quantitative tools that she has used, namely the SDQ and the GAS. Rose admits to ‘quite liking’ the GAS.

I quite like that idea of being able to be working towards something quite tight so that even if other problems pop up you can still keep an element of, look we have to not lose sight of the successes that you’ve had. […] With teachers I thought it worked quite well um and again for them it was quite
goal specific and they’re used to doing IEPs and stuff and we could map one out quite quickly but with a parent it was quite laborious […] I don’t think they saw the purpose of it and I just thought actually this is a bit unnecessary to go through, its meaningless if I just develop the 5 steps ’cause it needs to be done jointly but I don’t think they’re seeing the point in it so that still makes it meaningless. Rose, ll. 280 - 293 Rose is presenting her own personal view of the GAS and what she has found when using it in her work. She expresses a personal preference for the tool and is positive about its uses when working with teachers. Rose also highlights how useful the GAS can be in helping all parties keep sight of the successes achieved so far. However, Rose has found that the setting of numbers and specifying individual goals for the GAS may be a ‘meaningless’ exercise for parents who may find the process ‘laborious’. Although presented as a criticism of the GAS in particular, Rose’s reflection on ‘meaning’ is essential when thinking about evaluating work generally, and is of course the reason for this research. In spite of ‘quite liking’ the GAS, Rose is likely to use this measure in a different way with parents if she thinks that they find it ‘meaningless’. This suggests that tools used to evaluate Educational Psychologist’s work need to provide meaningful information to all stakeholders, including parents.
Rose is less complimentary about SDQs.
I’ve been quite cynical and critical of SDQs because I don’t I think that… they’re such a broad measure I don’t think they capture a lot to be totally honest.
Rose, ll. 234 - 235

I think Rose’s criticism of the SDQ is because she does not believe it to be sensitive enough to measure small shifts, and she seems to think of the SDQ as a measure of absolute change in instances where difficulties can be made better. Rose also expresses concern that the SDQ measures broad shifts, when she feels it is more important to identify a specific area where change is possible and work with that. Rose goes on to ask, ‘How do we measure the small bits of work that people actually are doing?’ Although speaking specifically about SDQs, I think that Rose’s question possibly applies to many quantitative measures. There are questions about the validity and reliability of these tools, in terms of whether they actually measure what they say they do and do so consistently, although this can be established through research. Daisy, when speaking of using a curriculum-based measure, says that ‘it’s not exactly the right tool to be using but it’s the best we’ve got’. Is the best measure available good enough if it is not ‘exactly right’? Previous discussion outlines the many complexities that are inherent in working with ‘people’ as well as in mapping out ‘mechanisms of change’. ‘Such a broad measure’ therefore does feel inadequate to the task of evaluating any work done with a child, family or school. However, Rose is ‘astonished’ by what happens when she uses the SDQ and says, ‘I had to eat my words a little bit’.

Even though [the SDQ responses are] just the parent’s perceptions of the difficulties aren’t they? And obviously mum’s perception of the difficulties was much higher at the time because it was a bit Time of Crisis when I became involved. And then even though you know to my mind within him the difficulties were the same, because he was still going to be Autistic, but actually also because they were being better managed and mum was less anxious about it, her perception; it was a quite a significant drop. So um so that did measure a change as a positive change as well which I was astonished by so had to stop being so cynical and slating generally of SDQs.
Rose, ll. 257 - 261

Rose’s change of heart about the SDQ is not because she feels her criticisms of it are no longer valid. On the contrary, I think Rose still has reservations about its sensitivity and whether it is measuring changes in the difficulties shown. Rather, Rose is very clear that the ‘positive change’ shown by the SDQ is very much about a change in ‘mum’s perception’. However, Rose admits that even broad measures which do not perhaps quite fit the brief at hand can give strong evidence of change. Iris identifies another possible limitation in using a standardised measure. A project that she has been involved in used the Spence Anxiety Scale as a measure of change pre- and post-intervention.
One of the things we had thought about when we did the post-questionnaire was that for some children their scores might go up and because you’re doing so much around emotional literacy and the language around anxiety and coping with it […] So we thought that some children […] that that score might actually go up and that wouldn’t necessarily be reflective of them feeling worse, it just might be that they could label their anxieties a bit more accurately.
Iris, ll. 64 - 66

Similar to Rose’s concerns about the SDQ not measuring the small changes that ‘actually’ occur, Iris identifies a risk that scores may indicate a worsening in anxiety when they are ‘really’ a result of a heightened awareness of anxiety instead. Iris seems worried about the scores showing a negative impact when this may not be the case. This makes me wonder about the implications of ‘negative change’, and how much strength such numbers hold. Iris said that these concerns were only realised for very few of the children, but colleagues involved in the project were sufficiently apprehensive to also put in place a teacher questionnaire as a back-up plan. Unfortunately this questionnaire also proved to have limitations.

Well some teachers were um didn’t necessarily fill in the questionnaire in the way that you asked them, or even had the notes on it that explained how to fill it in. So you know some children who perhaps were exhibiting acting out behaviours, teachers may use it as a forum for sounding out about their own feelings about the child rather than describing the child. And that became less valuable because you were finding out about the teacher rather than about the child which wasn’t the information that we asked for.
Iris, ll. 48 - 51

Although there is a risk that quantitative data may indicate a negative trend inaccurately, qualitative data can also be risky in that unexpected information may be obtained. In this instance, Iris implies that the questionnaire was filled in incorrectly for various reasons. It was planned as an indication of whether or not the teachers felt that the programme had made a difference. Iris says that for this purpose the questionnaire ‘wasn’t necessarily helpful’ but acknowledges that information was still obtained, albeit not what was ‘asked for’. Iris also seems wary of qualitative information because of how it is susceptible to judgements and feelings.

Some people would you know maybe quite appropriately say something about [the child’s] family circumstances. But other people might make a judgemental comment that was really about their feelings about the child’s family or about the child.
Iris, ll. 57 - 58

This raises questions about the reliability and validity of qualitative information similar to those existing for quantitative data. There seems to be the potential for a gap between the information sought and the information obtained, as well as a gap between what the information says and what can be said to be ‘really going on’. In contrast, participants showed much more confidence in another type of information coming from an external source, namely feedback. Although I defined external tools as concrete and permanent, feedback can be transient and possibly obscure unless it is recorded or written down. Nevertheless, every participant spoke about instances where they had received positive feedback in a variety of forms as a measure of positive impact. Violet berates herself a little bit for not using a formal process to obtain written feedback. However, she is steadfast in evidencing her successes through giving examples of verbal feedback.
Well, just thank you so much, you know thanks so much. I see a big difference, I’m managing much better, it’s no longer a problem any longer, I don’t think there’s any further work to be done because whatever was the presenting issue it’s no longer a problem. I’m chatting to the teacher and I’m sure that I have the skills to manage anything that comes up in the future now, um. I know what to do whereas before I felt… I have the information, I have the knowledge, I have the skills to be able to manage my child.
Violet, ll. 135 - 138

I needed to prompt Violet to give me these examples, which I find very rich; they are both full of praise for the work and they provide evidence of what has changed and why that feels positive. I think this need for a prompt shows Violet’s insecurity about what information provides adequate objective evidence of impact from her work. When she says that she has been ‘pretty poor’ in obtaining written feedback, this implies that this is the type of feedback she feels she ought to have. Daisy seems more confident about the role for verbal feedback. She tells me that she knows she has done a good job as a result of:

Having people feed back to me, I suppose, tell me. If somebody says that was really helpful or I really got a lot out of that. […] So being told, I guess. By the people who feel the difference.
Daisy, ll. 115 - 118

Daisy is describing the process of subjective experience being communicated by words. She stipulates that the feedback must come from the ‘people who feel the difference’. In this way, their feelings have been put into words and become more ‘tangible’. Words can be seen as more ‘legitimising’, a thought which was put to me by my supervisor. They are that much more ‘real’ than feelings. Unfortunately, Daisy admits that these words are not often expressed.

I did a consultation report following an observation and a discussion and [the teacher] said, ‘You’ve absolutely hit the nail on the head. That is exactly, that is the situation and it was a very helpful report.’ So I suppose again that sort of feedback, being told that was a very helpful report, because we don’t often get told that, told me that I must have done something right.
Daisy, ll. 175 - 177

As with the feedback reported by Violet, the statement is clearly positive. Daisy values the veracity of this praise even more because it is not often received. The juxtaposition of the two different meanings of ‘told’ is interesting. Daisy is ‘told’, or given a verbal communication, by this teacher. This communication in turn ‘told’ Daisy, or let her know or be certain, that ‘I must have done something right’18. Words are thus legitimising because they allow another to ‘know’. Indeed, Iris elevates her feedback to the status of a ‘fact’ that what she did was useful:

So the fact that [the parents] fed back that it was useful and the school fed back that it had been a useful process […].
Iris, ll. 125 - 126

18. Tell: ‘to say something to someone’; ‘to know, recognize or be certain’. http://dictionary.cambridge.org/dictionary/british/tell_1?q=tell (accessed 24 August 2011).
Feedback can become even more legitimising when received from a variety of sources. Lily tells of some training that she delivered to a school staff and the different ways in which she learned that this training had been valued.

I also know from our recent SENS [Special Educational Needs Service] SpLD um teacher advisor who’s gone into [a school] that one of the first things that came up in their conversation was the SENCO telling [her] that they had had the training and that this was a training that was still considered useful and appropriate and therefore they were still utilising the strategies. And that was then fed back to me via the SENS team, that the school had found the training very useful. And I mean one of the staff at the school, her mother works in the SEN Department and I had feedback from that SEN officer that my daughter attended your training and found it still to be very very useful. So it’s come back in a number of different ways besides from the formal questionnaires done on the day.
Lily, ll. 111 - 116

Lily contrasts this spontaneous feedback to the formal questionnaires she used on the day of the initial training, which I gather was a couple of years prior to the visit by the teacher advisor. Neither the daughter nor the SENCO had any vested interest in praising Lily’s training, whereas the formal questionnaires could have been artificially positive because they were being handed back to Lily directly. Instead, Lily finds out that her training is thought of as useful some time after it was delivered, from two different independent sources. The unsolicited nature of this feedback, as well as the ‘number of different ways’ it was received, must have been highly validating.
Verbal feedback need not directly describe the work and how it has been valued. It may also be received in the form of what Jasmine refers to as ‘stories’.

We had um a social worker for example who trained um and really had some wonderful stories, she trained a group of, she was doing this parenting group. This group of parents, all of whom had had children removed and one of the parents had a child removed during the course. And um and they were really really keen to come, so much so that um when she had to cancel one because of snow they were then kind of rang up and said, ‘Oh you are going to do another one aren’t you, it’s not going to stop is it? We can get there through these mountains of snow.’ So that was um what a tribute to her, but a tribute to the materials really as well. So, yeah. That was good stuff (laughs).
Jasmine, ll. 53 - 57

Jasmine uses very expressive words like ‘wonderful’ and ‘tribute’, showing the admiration she feels for both the social worker and the project. When Jasmine presents this story and says, ‘That was good stuff’, it is apparent that she associates the value and impact of the parenting group with the enthusiasm of its participants and their willingness to come even ‘through these mountains of snow’. What is not clear from the transcript is the enthusiasm shown by Jasmine herself during the telling of this story and her conviction that it was indeed good. I am not able to separate the transcript from my experience of hearing Jasmine tell me about this group, but her conviction was contagious and I believe that it was good too. Another source of verbal information relates to what people ask for. Although perhaps not strictly ‘feedback’, Daisy describes being asked for the support she provided as ‘a measure of usefulness’.
The fact that people were prepared to come to us with a difficult case and ask for support with it I suppose was a measure of the fact that we were valued. Because even though we were perhaps saying things that weren’t very palatable, people must have been seeing some kind of something in it for them to still want to have discussions. Does that make sense? I’m not being very clear but there’s no sort of…
Cath: They came back for more.
Daisy: They came back. Yeah, yeah that was it, you’ve summed it up. They came back for more, so whatever we were giving them must have been helpful otherwise they wouldn’t have done […] So that in itself is a measure isn’t it, of usefulness, they found it useful.
Daisy, ll. 134 - 145

Daisy is talking about the project that she found ‘bloody hard’. In spite of communicating some ‘unpalatable’ information, in an atmosphere that turns meetings into ‘beatings’, the support provided by Daisy and her colleague is specifically requested. For Daisy this is a measure of usefulness. However, it is difficult for Daisy to express what she means. In spite of feeling there ‘must’ be value in the work, she asks if what she says makes sense and admits that she is ‘not being very clear’. I perhaps jump the gun by reflecting that ‘they came back for more’, but Daisy wholeheartedly agrees with this summation. I assume return business is indicative of a desirable product or service in the world of marketing or finance. Daisy makes the same assumption about the motivations behind repeated requests for help, that it ‘must have been helpful’.
4.1.3.2. Internal Tools

Although participants had specific examples of information they had obtained as measures of ‘usefulness’, when talking about impact and change the interviews were permeated with references to things like thinking, feeling, seeing and knowing. Even Rose, who admitted to being a ‘numbers’ person, said things like, ‘So that was [a case] that I feel was quite successful’ (emphasis added), and used the word ‘feel’ in this context more than the other participants. Iris makes an almost identical statement about a case she speaks of, referring to it as ‘the one that I felt like it was very successful’. This internal sense of having made a difference is not made without basis. Iris backs her assertion up with reference to positive feedback she had received. For the case that Rose refers to, she was also able to cite improved SDQ scores, verbal feedback from mum and ‘a visible change’ in the demeanour of the little boy.

… you could see visibly things had changed. So from when I met them […] the only way they could get him into nursery would be to carry him screaming into nursery and sometimes they didn’t get him in at all and mum would just take him home again to every day he was going in fine and quite happy about going to nursery […] So there was a visible change there as in he was no longer screaming, he was now happy about going in and so you could see that.
Rose, ll. 244 - 251

Rose is circumspect in her thoughts about what the SDQ score indicated (see above) as well as what mum said to her, saying, ‘I don’t know whether this is a good thing or a bad thing.’ In contrast, she has no apologies for feeling that the work had been successful, and she shows no doubts about the ‘visible change’ in the little boy from ‘screaming’ to ‘happy’. Similarly, Daisy is very clear that if she is able to see a change she knows she has had ‘some effect’.

If I see a change in the presenting problem, if the teachers are managing better or the parents are managing better or the child is happier and somebody’s telling me that, then I know that whatever we have agreed upon is having some effect.
Daisy, ll. 167 - 168

Again there is information to substantiate this observed change; in this instance it is in the form of verbal feedback, i.e. someone has told Daisy that he or she is ‘managing better’ or that ‘the child is happier’. What is also common to both of these extracts is the reference to the child’s happiness. Happiness is also central to Lily’s determination of whether or not she has done her job well.

And for me client satisfaction really is key to knowing whether a job’s been well done or not. I don’t need to tick a box or be given a stat or you know. That’s what some bureaucrat needs in order to feel that they’re getting value for money but I, you know on the ground in the trenches it’s about a school feeling better equipped to cope with something or a family feeling better able to cope with something or a child succeeding more than they were the day before. That for me you know is, I don’t know that you necessarily have to quantify that. I think that sometimes you can see it, you know. It doesn’t take a rocket scientist to know when someone is happy or not happy.
Lily, ll. 209 - 214
In Lily’s mind, knowing that a client is happy is sufficient evidence that she has done her job well. Any other information, especially in the form of statistics (by which I think she also means quantitative data), is relegated to the function of meeting bureaucratic need. Lily does not need any of this to ‘know’ as she is ‘on the ground, in the trenches’. Lily’s battle metaphor establishes a clear distance between the ‘bureaucrat’ who needs ‘to tick a box’ and her experience of working with schools, families and children. Lily’s work is difficult, but she is a woman of action and her source of information about impact is real life, active engagement with the ‘client’. Lily also emphasises the ease with which someone in this situation, ‘on the ground’ as it were, is able ‘to know when someone is happy or not happy’. Lily is not the only participant who draws on an internal source of ‘knowing’ that there has been a change. Jasmine relates a personal experience in a school which she seems to be giving as evidence for her knowing that an intervention ‘was good’.

I had one experience where I was in a school that was doing this and I was just walking through the dining hall at lunch time and a little boy I knew… no a little girl was crying, she was sitting next to a little boy I knew who was you know, who could be quite difficult. So… and the language was fantastic because I was just able to say, ‘Oh dear someone’s given you a cold prickly.’ And she said, she quoted this little boy and that just enabled me to say, ‘Oh what do we have to do now? What can we do to give her a warm fuzzy?’ And you know it was language that we all knew and seemed much much better than telling him off in some way. Um, you know inviting him to put it right with vocabulary that he understood. Um. Yeah, it was good (laughs).
Jasmine, ll. 34 - 68
This interaction is presented as ‘evidence’ of the success of a school-wide project. Jasmine does not choose to quote figures of, for instance, decreased numbers of school exclusions. Instead she is certain that there is something positive about the project because, she says, ‘the language was fantastic’. She also categorically states that ‘it was good’. Jasmine makes a similar statement after sharing the ‘wonderful’ story about the parenting group getting to a session through ‘mountains of snow’. Jasmine laughs at both of these points. My experience of her laughter was that she was joyful about it being good, which perhaps also underlines how sure she is that this was so. Jasmine’s emotional response to the work being good is reflected in both Violet’s and Lily’s responses to thinking about their ‘best work’. Lily chooses to define ‘best’ in relation to job satisfaction.

For me I suppose if I’m looking in terms of job satisfaction, as being a measure of best. Um, so around enjoyment and around feeling like I’ve been valued, I would say that staff training has been my best […] that’s been the most rewarding for me because not only does it have a preventative element in terms of equipping staff to deal with problems before they arise, in certain circumstances, but it also it’s enjoyable because I enjoy it. And because I feel that it’s something that I’m good at and I get a lot of positive feedback for it and that I guess is measurable in terms of questionnaires that are filled out and that kind of thing. So that, I would say, is probably for me the best aspect of this work. Um. It’s not the only one though but that’s the thing I would rate as probably the most rewarding and I guess I’m measuring rewarding as being best.
Lily, ll. 48 - 57

Lily’s satisfaction in her job is influenced by a number of things. She speaks of personal factors like enjoying this work and feeling that she is valued and good at it. However, Lily does not find the work rewarding only because of personal factors. She also speaks about there being ‘a preventative element’ and that she is ‘equipping staff’, or empowering them, to better manage difficulties as they come up. Lily is also able to give evidence that she is good at staff training because she has questionnaires and positive feedback to prove it. I think, therefore, that job satisfaction in Lily’s case stems from how she feels and also from the fact that she has helped the staff and been told that she has done her job well. Violet also thinks about her work being rewarding. She finds reward specifically in feeling that she has had a positive impact on the families she works with.

I’m feeling at the moment the work is very rewarding because I feel that I am having an impact on the large majority of cases. So sometimes a small impact, sometimes a greater impact, […] I’m thinking that on most of the cases that I hope to think that there is some kind of positive impact. Um, that has happened, even if the work has been of quite a short duration.
Violet, ll. 43 - 47

Violet is less sure in her wording than Lily in that she ‘feels’ or ‘thinks’ that she has made a positive impact. She becomes less certain as she talks, shifting from ‘I feel that I am having an impact’ to ‘I hope to think that there is some kind of positive impact’. This highlights a distinction between perceiving an impact and hoping for one. Rose says she has found it useful to ask her clients to evaluate her work. With children she has used scales as simple as ‘thumbs up, thumbs middle, thumbs down’. She had assumed a boy had ‘hated’ a session but was pleasantly ‘surprised’ to be given a ‘thumbs up’. She says,

I think sometimes it is harder to read, and sometimes you need to ask people rather than making assumptions about what they think about things. And equally sometimes I think you can think that’s gone really well and you can ask someone and they can say, ‘Not that helpful really’ (laughs). It’s useful to check it out I think.
Rose, ll. 462 - 464

Jasmine also explicitly raises a concern about the use of internal tools. Jasmine stipulates that the ‘feel good factors’ are insufficient on their own. She describes using quantitative measures as ‘scary’ to do, possibly because they may undermine the good feelings.
The quantitative success was important for me, I was scared to do it but it was important […] if we’d had great feel good factors and at the end of the day there’d still been exclusions and they still didn’t have any statements or that half the school had been statemented which would have been dreadful and no-one had improved in their reading then that wouldn’t have been so good. But the qualitative things um, … were very good, seeing um, … teachers maybe who’d felt the full weight of all the negative things that had happened in the school blossom and be um … teaching in a more creative way, or just coming up with ideas. That was great.
Jasmine, ll. 183 - 194

Jasmine does not in any way dismiss the ‘qualitative things’. She returns to talking about them and says they were ‘very good’. The outcomes that Jasmine links to quantitative data are clearly measurable, for example numbers of exclusions and change in reading scores. In contrast, the ‘feel good factors’ are perhaps impossible to quantify but no less important, namely teachers ‘blossoming’, being more creative and feeling more resilient to ‘all the negative things that had happened’. For Jasmine, the ‘quantitative success’ is ‘important’ but the ‘qualitative things’ are ‘very good’ or ‘great’. This suggests to me that both sources of data are valued, but perhaps the quantitative data performs an auditing function for something intuitively known. Unfortunately there may be times when intuition is the only measure available, as Violet points out.

Um, I think it is helping them at the moment, it’s not that the problems have been solved or have gone away, no, they are still very much there but I think that the family does feel supported and I think there are measures in place that are helping the child […] Also in terms of the, of the parents um, … managing to go about their daily routines and managing to cope through and knowing that they are supported by a group of people not just necessarily one,… Tangible… mmmm (laughs)…. Um, […] Just making their appointments, having made, having contact people to help with their financial difficulties, they’ve made the contacts, the budget is under control, um… yeah, you know mum is managing to get up in the morning.
Violet, ll. 65 - 79

Violet acknowledges that the problems have not ‘been solved or have gone away’ but suggests that feeling supported has been sufficient. Violet struggles to find a way to describe the impact for this family. It is possible that there are no specific outcomes that are tangible. However, Violet knows that they are ‘managing to cope’ now that they are receiving support. With additional thought, Violet settles on a very definite change that has taken place for mum: that she ‘is managing to get up in the morning’ where she could not before. Whether this is a ‘tangible’ outcome or a measurable one is difficult to ascertain, but I imagine this change must have been significant for mum. Although measuring outcomes or impact through more formal processes like questionnaires, standardised assessments or written feedback serves the useful purpose of corroborating ‘feel good factors’, I wonder how adequately these processes are able to measure such significant changes as being able ‘to get up in the morning’. Perhaps these measures ‘tick a box’ for ‘some bureaucrat’, or maybe it is just ‘more of the way the world is going’ (Rose). However, I think it is essential not to lose sight of Violet’s assertion that ‘some things are measurable and some things just are more intangible’.
4.1.3.3. Perception

When Rose spoke about her experience of using the SDQ, she reflected that the scores obtained were most likely an indication of a change in perception of difficulty rather than a change in the difficulties themselves. Perception is likely to influence all measures of impact, whether as an external or internal process. As Rose states,

the reason that children have come to you is that people perceive that there’s a real difficulty at the moment and if they perceive that there’s less of a difficulty then there’s been some, some change somewhere but then how you measure the mechanism of that change is tricky isn’t it. I don’t know if I’ve got a simple answer to that (laughs).
Rose, ll. 354 - 357

Rose’s focus on perception highlights the relative nature of difficulties and their resolution. In the first place, children are brought to the attention of the Educational Psychologist because there is a perceived problem somewhere. The problem and the resulting change are perhaps products of perception or experience. It is therefore my contention that problem, change and felt impact are constructs rather than absolute, quantifiable entities. This opens up more questions than answers, which may be why Rose finds it ‘tricky’ to measure what she calls the ‘mechanism of change’ and why she hasn’t ‘got a simple answer to that’. One of these questions asks who has the problem. Another asks where the problem is located. Rose continues,

you’re often measuring the problem owner’s perception. It would be interesting to know the other people involved and the child themselves’ perception. I’d get the child’s perception of what the difficulties are probably initially […] but then I probably wouldn’t come back and check with them at the end. I might check with teachers and the parents, ‘Do we feel things are better?’ but not necessarily with the child so that’s probably something that’s missing that would be interesting to … it should be done really…
Rose, ll. 529 - 535

If the problem is a perception or construct, there may be alternative definitions of it and differing targets for improvement. As Lily pointed out, ‘Everybody wants something different sometimes, so it’s not always an easy thing to quantify.’ It may be very interesting to check how the child feels and really, as Rose says, ‘it should be done’.
4.1.4. Connecting Concept: Judgement

Jasmine shared her experiences of working with a school which had gone into ‘special measures’ after being inspected by Ofsted. Although only Jasmine explicitly spoke about the impact of such an inspection, something about her narrative stimulated a thought that there may be a link between this experience of being inspected, which felt to me akin to being judged, and the evaluation of Educational Psychologists’ work.

So it was gruelling, absolutely gruelling. And they felt that they were given loads of help to get out of special measures and as soon as they were out that help was removed. And they were, they were back in it. Um, so it was really that the confidence of the staff I felt that had been … taken away.
Jasmine, ll. 94 - 96

The impact of Ofsted judging the school to be in special measures twice is ‘gruelling, absolutely gruelling’ on the staff. Jasmine talks about how it ‘absolutely exhausted’ the head, who left her job at the end. This reminds me of Daisy’s experience, where she underwent ‘steering group beatings’ and left local authority work for a time. Jasmine also observes that the ‘confidence of the staff’ is ‘taken away’. Jasmine does not say that the staff lost their confidence; she rather denotes that the process of being inspected and found lacking took it from them. Jasmine herself comes under the eyes of the visiting inspectors when she is supporting the school to improve.

I was down to teach, work with the teacher but lead on this anger management game that we were going to play with the whole class. And they told me the inspector was going to be in for that and I thought teachers do this, I ought to be prepared to do it so I did it. When I told my boss about it he said, ‘Oh my god, you shouldn’t have done that. If you’d failed, the whole school would have failed and it would have been you who had made it happen!’ But I didn’t (laughs).
Jasmine, ll. 149 - 152

Jasmine’s boss highlights the possible outcomes of being ‘judged’. There is the possibility of failure, and the way in which the boss is characterised as speaking makes it sound devastating. Failure is also an absolute value and is a position from which it is difficult to recover. Failure perhaps also entails blame and the holding of someone to account. Jasmine presents this as an almost paralysing pressure.

That school […] had had people going in saying, ‘This is terrible, it’s got to improve, the results, got to deliver da daa da daa.’ And what threatened us um it closed down our creativity.
Jasmine, ll. 155 - 156

This brings the discussion back to the Educational Psychologists’ role and the value of innovation. If judgement ‘closes down’ the ‘creativity’ of the staff in the school, what impact does evaluation have? As Jasmine admitted, she ‘was scared to do’ quantitative assessment of an intervention she had put in place. Perhaps this is why. Evaluation may be a many-edged sword. On the one hand it is ‘what some bureaucrat needs in order to feel that they’re getting value for money’; on another it is evidence that Educational Psychologists are doing their job well and proof of what they ‘know’ already. However, any assessment involves the possibility of ‘failure’, which poses a threat to ‘creativity’ and confidence. This is a possibly detrimental effect of evaluation.
4.2. Quantitative Results

All Educational Psychologists practising within the local authority Educational Psychology Service in which this research took place were invited to a team day. A total of 18 Educational Psychologists, out of the 20 in the team, attended and completed the questionnaire. In contrast to the interview participants’ sample, participants filling in the questionnaire included senior members of the team (including the Principal Educational Psychologist) and Educational Psychologists with differing levels of experience. Interview participants also took part in the questionnaire. Three of the 18 returned questionnaires were ‘spoiled’. The remaining 15 were analysed using descriptive statistics. A table presenting the full set of data is given in Appendix 9. The frequencies of responses for each statement are presented in figure 14, excluding participants with missing data. As the statements were each designed to represent one of four categories of information used by Educational Psychologists when evaluating their work, as referred to during the interviews (namely qualitative feedback, professional opinion, target based techniques and standardised measures), the responses are presented in these categories.

Figure 14: Frequency of Questionnaire Responses - All Statements
The highest frequency response for all of the statements was either ‘agree’ or ‘strongly agree’. Those for which the highest frequency response was ‘strongly agree’ were:
Somebody says, ‘That was helpful’ or ‘I got a lot out of that’.
Things have visibly changed.
The other statements for which the highest frequency response was ‘agree’ were:
There are stories about change.
There is a record of parental comments, children’s comments or teachers’ comments.
I think that whatever we have agreed upon is having some effect.
I can see evidence of recommended strategies still in place within the classroom.
Goals to be achieved are agreed and progress towards these is monitored.
Progress is monitored against Individual Education Plans.
Targets are set making the work specific.
Change is monitored in an aspect of learning using standardised before and after measures.
Scores on any before and after measure improve.
A tool is used to measure changes in the perception of the problem.
Comparing the frequencies of all ‘agree’ responses with all ‘disagree’ responses gives a similar picture. The table in figure 15 shows that for all of the statements the vast majority of questionnaire respondents indicated that they ‘agreed’ to some extent that this was a source
of information that they valued when thinking about a piece of work that they felt was going well.
Qualitative Feedback
    There are stories about change.  (Agree: 12; Don’t Know: 1; Disagree: 2)
    There is a record of parental comments, children’s comments or teachers’ comments.  (Agree: 10; Don’t Know: 2; Disagree: 3)
    Somebody says, ‘That was helpful’ or ‘I got a lot out of that’.  (Agree: 15; Don’t Know: 0; Disagree: 0)

Professional Opinion
    I think that whatever we have agreed upon is having some effect.  (Agree: 6; Don’t Know: 4; Disagree: 5)
    I can see evidence of recommended strategies still in place within the classroom.  (Agree: 13; Don’t Know: 1; Disagree: 1)
    Things have visibly changed.  (Agree: 14; Don’t Know: 1; Disagree: 0)

Target Based Techniques
    Goals to be achieved are agreed and progress towards these is monitored.  (Agree: 13; Don’t Know: 0; Disagree: 2)
    Progress is monitored against Individual Education Plans.  (Agree: 11; Don’t Know: 2; Disagree: 2)
    Targets are set making the work specific.  (Agree: 10; Don’t Know: 4; Disagree: 1)

Standardised Measures
    Change is monitored in an aspect of learning using standardised before and after measures.  (Agree: 12; Don’t Know: 2; Disagree: 1)
    Scores on any before and after measure improve.  (Agree: 13; Don’t Know: 1; Disagree: 1)
    A tool is used to measure changes in the perception of the problem.  (Agree: 11; Don’t Know: 2; Disagree: 2)

‘Agree’ and ‘Disagree’ counts combine all degrees of agreement and disagreement (n = 15).

Figure 15. Frequencies of ‘Agree’ Responses Compared to ‘Disagree’ Responses (Combined)
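As an aside, the descriptive tabulation behind figures 14 and 15 amounts to simple frequency counting. The following minimal Python sketch, which uses hypothetical responses for a single statement rather than the study data, illustrates how such counts could be produced.

    # A minimal sketch, not part of the reported analysis, showing how
    # Likert-response frequencies can be tabulated. The responses below
    # are hypothetical illustrations for one statement (n = 15).
    from collections import Counter

    responses = [
        "strongly agree", "agree", "agree", "agree", "strongly agree",
        "agree", "don't know", "agree", "agree", "disagree",
        "agree", "agree", "strongly agree", "agree", "disagree",
    ]

    # Figure 14 style: frequency of each individual response option.
    print(Counter(responses))

    # Figure 15 style: combine all degrees of agreement and disagreement.
    def collapse(response):
        if "disagree" in response:
            return "disagree (all degrees)"
        if "agree" in response:
            return "agree (all degrees)"
        return "don't know"

    print(Counter(collapse(r) for r in responses))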
The frequency table for all of the responses (figure 14) as well as a simple comparison of all ‘agree’ and ‘disagree’ responses (figure 15) suggests that all of these statements as presented were valued to some extent by most of the participants when they thought about a piece of work they felt was going particularly well.
This supports a proposition that evaluation tools which collate information from a variety of sources may be the most meaningful record of impact in an Educational Psychologist’s work.
5. Discussion

In my analysis of the interview data, I identified an additional theme which adds context to this research, namely ‘Role’. I discuss this theme in relation to published literature and government policy. I then address each research question in turn, although not in order, applying Psychological theories which are outlined in the Literature Review section. I finally present the limitations of this research and opportunities for future research, and reflect on my position as the author of this thesis.
5.1. Role

The comment made by Lily regarding an Educational Psychologist’s role, that ‘we kind of do everything’, is reflected in other descriptions of what Educational Psychologists ‘do’. The British Psychological Society (2012) says that Educational Psychologists ‘carry out a wide range of tasks’ and describes how these tasks enhance learning through direct and indirect means. This multifaceted role is also referenced in the Department for Education’s (2011b) review of the training of Educational Psychologists. Attributed to the British Psychological Society, the following citation is given:

Educational Psychologists work with children and young people from birth to nineteen years, and their families, in a variety of settings including schools and homes, and sometimes as part of multi-agency teams. They have competencies in consultation, assessment, case formulation, and intervention related to children’s learning, developmental, behavioural, emotional and mental health needs. Intervention may take place at an organisational level, indirectly through parents and teachers, and/or directly with individuals, groups, and families. Educational Psychologists are also involved in evaluation of interventions, research and project work, management and leadership of teams, and offer training to other professional groups. (p. 9)

Similarly, after reviewing the literature exploring the Educational Psychologist’s (EP) role, Fallon, et al. (2010) conclude that ‘what EPs actually do appears to have been reasonably clearly articulated: EPs are fundamentally scientist-practitioners who utilise, for the benefit of children and young people, psychological skills, knowledge and understanding through the functions of consultation, assessment, intervention, research and training, at organisational, group or individual level across educational, community and care settings, with a variety of role partners’ (p. 4). In contrast to these very ‘broad’ descriptions of what Educational Psychologists do is a sense of a possible narrowing of the role in the experiences of the interview participants. This restriction of the Educational Psychologist’s role is also reflected in the literature. Ashton and Roberts (2006), in their research into which aspects of the work were valued by Educational Psychologists (EPs) and Special Educational Needs Coordinators (SENCOs), found that although the Educational Psychologists expressed that they valued a ‘consultative, interactionist and systemic perspective’, ‘the SENCO responses indicated that many of them still value more traditional EP roles such as advice giving and individual assessment’ (p. 118). Baxter and Frederickson (2005) also highlight this tendency to focus on a specific aspect of the Educational Psychologist’s role. They ask, ‘Have Educational Psychologists become so
associated with the management of special educational needs … that no one, including school staff, can envisage a different, more radical role for the profession?’ (p. 89). The reason for this, they conjecture, is that there is insufficient evidence for the effectiveness of more ‘radical’ activities. This comparison between ‘traditional’ roles and more ‘radical’ approaches raises questions about what impact Educational Psychologists can have. When asked about their ‘best work’, all of the interview participants gave examples of working ‘outside the box’ or delivering non-statutory services, such as group interventions, family support work, staff training and involvement in specialist projects. They offer many examples that evidence the benefits of practising in this way. Educational Psychologists’ ability to improve the trajectories of children and young people could therefore be limited if their role is reduced in focus. Mercieca’s (2009) paper emphasises a thoughtful, reflective approach as central to being a Psychologist. She writes about embracing uncertainty but also of the anxiety that uncertainty can entail. It is possible that this anxiety, as well as the anxiety attributed to the ‘economic doom and gloom’, is one of the driving forces limiting opportunities for more radical functions for Educational Psychologists. The contrasting terminology raised by the interview participants about their role, for example ‘broad’ vs ‘narrow’ and ‘usual’ vs ‘unusual’, describes certain ‘semantic polarities’ (Campbell and Grønbæk, 2006) attributable to the work of Educational Psychologists. It is possible that anxiety is contributing to Educational Psychologists’ positions becoming ‘stuck’. I am not sure who this might benefit, as there has also not been much rigorous evaluation of the more traditional aspects of Educational Psychologists’ work (Baxter and Frederickson, 2005). On the contrary, Pinney’s (2003) research highlights certain shortcomings of the statutory assessment and statementing process. She recommends increased delegated funding to schools to allow support for SEN to be a more embedded process and to provide earlier interventions. Although delegated funding is now in place in response to government directives (DfES, 2004), and will continue under current policy (DfE, 2012a), there was some insinuation that opportunities to deliver interventions and engage in preventative work were experienced as ‘unusual’ by the interview participants. This is in spite of the specific statement by Pinney (2003) that ‘we endorse local initiatives to shift the balance of Educational Psychologists’ work towards early intervention and preventative work’ (p. 121). It will be of great interest to see what the impact of new government policy will be. There have been strong indications of a desire to use Educational Psychologists in the way promoted by Pinney. The Children’s Minister, Sarah Teather, has repeatedly highlighted the interventive and therapeutic work Educational Psychologists do. How this will be achieved in the context of budget cuts is yet to be seen.
5.2. Research Questions

5.2.1. What information do Educational Psychologists consider to be relevant, important, valuable and meaningful when they evaluate their work?
Interview participants expressed a very strong commitment to quality and offering something beneficial through their work. However, questions about evaluating their practice were met with mixed feelings, including worry and resentment. There may be a number of reasons for this. Complexity could be one. It is a key theme raised by the interview participants and can be said to make evaluation a complicated business. Complexity as discussed by the interview participants exists at many levels:
the complexity of the human condition
the complexities of effecting change at different levels of the system
increasing levels of complexity of need
the complicated and conditional process of change
what aspects of change are perceived as helpful
the complexities inherent in measuring such change
Amongst all these complexities, it is worth considering whether being able to ‘fix the problem’ is a realistic expectation as a measure of merit or worth. The idea of Educational Psychologists possessing a ‘magic wand’ is neither new nor unique (see for instance Anthony, 1999, p. 233 and Mercieca, 2009, p. 171). It connotes the impossibility of ‘fixing things’ and, on the flip side, the need for a careful consideration and acceptance of what is possible. Anxiety perhaps resides in the threat that what is possible may not be a sufficient indication of impact. Target-led work is therefore of potential value as long as an appropriate target for change can be agreed, thereby defining criteria for evaluation before an intervention is implemented (as recommended by Dunsmuir, et al., 2009). A pragmatic approach could also be used when evaluating Educational Psychologists’ work, as referred to by Frederickson (2002, p. 99) in her seminal paper. Within a mutable world it is perhaps appropriate to employ evaluation techniques which are able to answer the question ‘has there been an impact?’ in different ways depending on each different context. Some of the resentment perhaps shown by interview participants towards certain types of evaluation practices should not be mistaken for complete resistance (as per Fox, 2003). On the contrary, it serves to highlight how important it is that types of information deemed to be appropriate are collected to meaningfully show that the work has been effective. Evaluation must therefore be relevant so that it does not feel like another hoop that must be jumped through. The flexibility to choose measurement tools ‘that fit’ the context, as well as the practitioner, is therefore advisable. This flexibility should not be viewed as a licence for an arbitrary collection of data, especially not as a means to present a distorted, overly positive picture of Educational Psychologists’ impact. It is therefore incumbent upon Educational Psychologists to adopt their role as ‘scientist-practitioners’ (Fallon, et al., 2010) when assessing the appropriateness of measurement tools for evaluation. The interview participants highlighted a range of types of information that they draw upon when they think about their work, especially when they feel it has gone well. Standardised tools like the Strengths and Difficulties Questionnaire were considered, as was the use of curriculum-based assessment. Questionnaires collecting both quantitative and qualitative information featured, as did target monitoring techniques like Goal Attainment Scaling (GAS). In addition, personally experienced indications of success like thinking, seeing, knowing, feeling and reflecting (which I refer to as ‘internal tools’) dominated the interviews. That differing forms of evidence, including verbal feedback, target and goal based monitoring, comparisons of scores and examples of ‘internal tools’, are similarly valued by most of the questionnaire participants is supportive of the contention that a variety of sources of information is meaningful to Educational Psychologists. From the interview participants’ discussion it can be seen that none of these types of information is complication free and that some thought goes into their use. Even quantitative measures have their shortfalls: are they sensitive enough, refined enough or appropriate to the context? Matthews (2002) recommends that any quantitative measure should be valid and reliable as well as appropriate to the question in hand. Even when these issues are addressed, there is a risk that either a misleading ‘result’ or none at all may occur. Qualitative information can also be risky as the information obtained may not correspond to the information sought. The acceptance of ‘fallibility’ within pragmatism and other post-positivist perspectives (Robson, 2011) allows some leeway in interpreting data, but there is still the need to ‘check out’ that any indications of success or otherwise are warranted. An approach similar to that advocated by Matthews (2002), which collates different data from a variety of sources, could provide the opportunity to triangulate information and compensate for any shortcomings in terms of the tools used, as well as enable more robust interpretations of the data. Collating information also allows for corroboration (Popper, 2002), especially of ‘internal tools’. Furthermore, as different practitioners are likely to prefer different kinds of information, the opportunity to utilise a range of data may help individual Educational Psychologists feel that they are collecting meaningful information.
5.2.2. How can this inform the development and evaluation of tools used to evaluate Educational Psychologists’ practice?
Tools which collect multiple types of data from different sources are presented by Turner, et al. (2010) and Osborne and Alfano (2011). A tool such as the one developed by Turner, et al. (2010) could also provide an additional source of information to consolidate the evidence by recording practitioners’ reflections, i.e. their thinking, knowing, feeling and seeing. This corresponds to the sense I got from the interview participants that they would think, see or know there was change because they had information verifying this from one or more different sources. ‘Internal tools’ are perhaps insufficient on their own, but having the space to record professional opinions acknowledges the value these seemed to have for the interview participants.
That different interview participants valued certain sources of evidence over others suggests that there needs to be some flexibility as to what evidence is obtained, to ensure that evaluation remains a meaningful activity to all involved. This need for flexibility is also implied by the questionnaire responses. Any evaluation tool should therefore have the flexibility to be meaningful for a range of users while collecting sufficient information to provide evidence that is robust and credible for all stakeholders. The Turner, et al. (2010) tool provides this flexibility but perhaps needs further development to meet criteria for robustness and credibility as suggested in the literature review. It is therefore suggested that a tool closely based on the Turner, et al. Casework Evaluation Form (p. 329) be refined and piloted as a means to collate a variety of data types from multiple sources when evaluating the work of Educational Psychologists. Ideas for amendments to this form are presented in Appendix 11, with permission from the original authors (Turner, 2012, personal communication).
5.2.3. What is the meaning (the relevance, importance and value) of evaluation within the lived experience of an Educational Psychologist?
Evidencing ‘feel good factors’ with other sources of information is a recurrent theme amongst the interview participants as they talked about the successes they had achieved through their work. There was repeated reference to things like thinking things have gone well, seeing a change, feeling there has been an impact and knowing things are better than they were. Such references are consistently followed up with examples of other kinds of evidence thereby legitimising the interview participants’ experience that they had done something worthwhile. Bion (1997), while reflecting on ‘uncertainty’, says, ‘A state of rhapsody and expression of rhapsodic excitement aren’t good enough: we do need some sort of discipline, rigour of thought … to recognise, a state of mind that is not adequate’ (pp. 50 – 51). The interview
participants seem similarly to feel that ‘feel good factors’ are not good enough. However, Bion’s statement also implies the need for a disciplined mind which is able to recognise that such feelings are insufficient, as well as to recognise when it is not able to make this judgement. I think this gives prominence to reflective practice and, to focus on Bion’s thinking, is reminiscent of the ‘alpha-function’ (Bion, 1988). This he describes as being responsible for transforming ‘beta-elements’, which are ‘incoherent’ experiences that ‘cannot be thought about’, into something ‘suitable for use in thinking’ (Symington and Symington, 2001, pp. 62 - 63). Bion (1988) specifically states that ‘to learn from experience alpha-function must operate on the awareness of the emotional experience’ (p. 8). Reflective practice could thus perhaps be said to perform an alpha-function: enabling practitioners to solidify their sense that things have gone well (or poorly), to ascertain the legitimacy of their senses and to learn from those experiences. Leiper (1994) particularly highlights that evaluation should be an opportunity for learning (p. 202). In addition, a linking together of preconceptions with a ‘realization’ contributes to the development and accumulation of meaning and knowledge (Symington and Symington, 2001); for example, ‘I think it is helping them at the moment’ because ‘you know mum is managing to get up in the morning’ (Violet). However, for this linking to be possible, ‘tolerance of doubt and tolerance of a sense of infinity [are] essential’ (Bion, 1988, p. 94). Accommodating uncertainty is therefore a requirement of alpha-function, a position which Mercieca (2009) asserts warrants a Psychologist (p. 175). Another way in which meaning is made is through adopting positions, as per Positioning Theory (Campbell and Grønbæk, 2006). A particular position Educational Psychologists might take, in terms of both their work and the evaluation of their practice, could lie on the continuum between the ‘semantic polarities’ ‘tangible’ and ‘intangible’. This appears specifically in the interviews considered in this research, but it is also implied in the literature,
especially with regard to ‘outcomes’. On the one hand, Dunsmuir, et al. (2009, p. 54), for example, characterise the ‘challenge’ of evaluation for Educational Psychologists as one of defining ‘outcomes that are measurable and demonstrate impact’; on the other hand, Turner, et al. (2010) call this focus ‘reductionist in nature’, stating that ‘looking at measurable pupil outcomes is an important part of investigating EP impact, but it is only a part’ (p. 315). Campbell and Grønbæk (2006) state that semantic polarities ‘generate meaning about what the organization stands for and how to act’ (p. 15). Inserting the word ‘profession’ (as in ‘the profession of Educational Psychology’) in the place of ‘organization’ illustrates what impact the adoption of a position along the semantic polarity ‘tangible – intangible’ could have. This would affect the ways in which Educational Psychologists behave and the choices they make in terms of their work and how they evaluate impact. These actions will be a strong indication of what is valued by that practitioner, service or the profession as a whole. Leaning towards more concrete indications of impact is evidently attractive within a context of ‘economic doom and gloom’ and a requirement for ‘evidence of what works’ to secure resources (DfE, 2011a, p. 15). However, the findings of this research suggest that the Educational Psychologists in a particular local authority Educational Psychology Service value both the ‘tangible’ and the less tangible indications of success. This gives clues as to the values of the research participants and of the organization to which they belong. These values might include a ‘tolerance of doubt and tolerance of a sense of infinity’ (Bion, 1988, p. 94). There is evidence in the literature of other Educational Psychologists sharing this position. However, ‘power is the ability to maintain a position and for that position to be influential in the way other positions are taken and maintained’ (Campbell and Grønbæk, 2006, p. 44). It may be that financial and political pressures compel the profession to adopt a more ‘tangible’ position.
This is unlikely to resolve the difficulties of evaluating work that is immersed in ‘the tangled complexities of the social world, where judgements often have to be made on the availability of only partial information and where the ability to deal with ambiguity and uncertainty … [is] paramount and perpetual’ (Moore, 2005, p. 111). Systems theory is presented as a means by which the complexities of the social world may be understood. It was explicitly raised by participants themselves and its ideas are implied in the qualitative findings. In the literature review, I highlighted two concepts of systems theory: circularity and punctuation. The ramifications of both for evaluation are clear. In terms of circularity, if a linear cause and effect understanding of problems ‘is not very helpful’ (Dowling, 1994, p. 4), it is fair to deduce that it is similarly unhelpful to adopt such an understanding of solutions. Research informed by methodologies like randomised controlled trials assumes linearity: it takes for granted that an intervention can cause an effect in the targeted problem and that complicating variables can be effectively controlled, either through randomisation or subsequent statistical manipulation. Evaluation can make analogous assumptions, namely that it is possible to attribute change to a particular cause (e.g. an intervention, a practitioner) and thereby judge the merit and worth of that cause (as per Stufflebeam, 2001). Adopting circularity as a perspective undermines such assumptions and, while it complicates the task of evaluation, it also avoids blame. Circularity emphasises the need for a cautious approach when interpreting the results of an evaluation, a guardedness evident in what the interview participants said. Findings can only be understood as provisional, requiring corroboration. Punctuation has similar implications: any indication of impact is limited to a representation of a specific moment in time. Punctuation therefore
has the potential to change the outcome of an evaluation dramatically. It is thus very important that evidence of change is considered in light of its context and the point of punctuation, a potential function for reflective practice. Neither circularity nor punctuation should be seen as undermining efforts to include evaluation as a practice which contributes to the evidence base. Instead, they imply that all research findings are evidence for a particular time within a specific context. There is no space for absolute claims, but through corroboration and careful consideration of findings, evaluation can offer sufficient evidence that an impact has been made. It is vital that care is taken during evaluation because of the meaning evaluation engenders. As referenced in the literature review, Stufflebeam (2001) defines evaluation as ‘a study designed and conducted to assist some audience to assess an object’s merit and worth’ (p. 11). When the evaluation is of an Educational Psychologist’s practice, there is the potential that the object becomes the Educational Psychologist him or herself. Campbell (2000) highlights how systems are ‘meaning-making entities’ (p. 8). He says, ‘Our sense of who we are depends upon what meaning others make of us and how they convey that meaning back to us’ (p. 16). Speaking from an unambiguously social constructionist perspective, Campbell links this process very closely to interaction and the role of language. However, much of his commentary resonates with evaluation as an activity. It also confirms the notion that participants were seeking to ‘legitimise’ what they felt they knew to be true by chronicling various sources of evidence, such as verbal feedback and stories. Campbell (2000) asserts that: each of us is motivated by the desire to take part in meaning-constructing relationships with others, and part of this process is being recognised for
what we feel we are “really experiencing” and then having this validated through the ability to influence people and events from our own point of view. (p. 20) The impact of this validation, according to Campbell, is to ‘maintain an evolving sense of identity’ (p. 17), by which he means ‘the sense of who one is, the self, the I’ (p. 16). Evaluation can therefore be viewed as a profoundly meaningful activity, far beyond establishing ‘what works’. It is thus not surprising that anxiety is a feature of evaluation, and that it may feel like ‘opening Pandora’s box’ (Fox, 2003). Care is therefore needed, not only to ensure that children, families and schools are enabled to enjoy ‘successful outcomes’, whatever they may be deemed to entail, but also in recognition of the impact that evaluation can have on the sense of identity, merit and worth of the professional involved.
5.3. Research Limitations
I have been careful to critique my research in my methodology section. I used published guidelines for my Interpretative Phenomenological Analysis specifically to achieve as high a standard for this part of the research as I could. That said, there is always room for improvement. Although a measure of prevalence for each theme is given and meets the criteria for rigour outlined by Smith (2011), more evidence from the interviews was available for presentation. The limitations of word count required me to choose certain extracts over others, which inevitably created opportunities for bias. However, in order to share what the participants told me as faithfully as possible, I leaned towards longer extracts,
rather than brief extracts illustrating similar or the same points. This hopefully improved the narrative quality of my IPA findings section, but perhaps reduced the strength of evidence for my themes. I used Yardley’s (2000; 2008) characteristics to judge the quality of my qualitative research, as recommended by Smith, et al. (2009), whose book I also followed closely to ensure my IPA was as rigorous as possible. How these characteristics apply is addressed in the methodology section. Beyond these characteristics, I reiterate how difficult it felt to represent my participants’ experiences. On the one hand, I reduced some lively discussion to transcript (‘laughs’ in brackets does not quite convey the experience of those laughs); on the other, I had to punctuate, choose and edit sections of what the participants said. Their verbatim quotes on their own therefore feel altered compared with how I remember the interviews, a sense confirmed by the feedback obtained from the participants, who said they were surprised by appearing ‘inarticulate’. On top of this, I then interpreted what they had said. As with any IPA research, the findings are the researcher’s interpretation of the data, and any number of other interpretations is possible. However, to be as true to my participants’ experiences as possible, I did ask them to comment on my findings. There were some difficulties obtaining this feedback, but that which I obtained was positive. I made amendments to my findings to reflect the comments that related to anonymity and accuracy. Examples of feedback are presented in Appendix 10, along with the changes I made in response to a specific correction which does not undermine anonymity. I feel that there are more limitations to report with regard to my quantitative methodology. The categories I identified to cluster statements for the questionnaire are, in retrospect, somewhat arbitrary. My aim in using these clusters was to avoid biasing my questionnaire towards ‘internal tools’, a bias I needed to guard against. Although the clusters feel appropriate to me, I understand that they may appear less logical to external readers.
Their derivation from the literature and the interview transcripts could have been more clearly articulated. In retrospect, the use of these clusters was perhaps unnecessary; however, for the sake of transparency I have retained their labels in the results. When I divided statements into clusters, the aim was to use ranking to determine a specific preferred category. Following feedback from my pilot sample, however, I moved to using a Likert scale. Because of my sample size, and because the responses to each statement were so similar, identifying a most preferred category was neither statistically possible nor desirable from a social constructionist perspective. I therefore found it more useful to analyse the quantitative data at the level of statements rather than categories. Analysing the data in this way also reduced the impact of the potentially unreliable cluster labels on the results. This mismatch between the pilot and the final questionnaire therefore presents limitations to the validity of the tool used. Were I to undertake this research again, I would pilot all of my original statements with a Likert scale and use factor analysis to identify which statements showed the most differentiation. However, I find that my quantitative results give an indication that multiple sources of information are valued, which corresponds to more recent trends in the evaluation of Educational Psychologists’ work. My results could have been more conclusive had my questionnaire been developed differently. Had there been a greater emphasis on quantitative data in this study, a larger sample, for example including participants from other Educational Psychology Services, could also have enabled greater opportunities for statistical analysis.
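To illustrate the kind of statement-level screening I have in mind, a minimal sketch follows. It is illustrative only and was not part of this study: it assumes hypothetical pilot responses stored in a file named pilot_responses.csv (one row per respondent, one column per statement, each scored 1 to 5 on a Likert scale), and the file name, column layout, retention threshold and use of Python’s pandas and scikit-learn libraries are all my assumptions.

# A minimal, illustrative sketch (hypothetical data): screening Likert
# statements for differentiation using item variance and factor loadings.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical pilot responses: one row per respondent, one column per
# statement, each value a 1-5 Likert rating. The file name is assumed.
responses = pd.read_csv("pilot_responses.csv")

# Statements with near-zero variance differentiate poorly between
# respondents and are weak candidates for the final questionnaire.
variances = responses.var().sort_values(ascending=False)
print("Item variances:\n", variances)

# Exploratory factor analysis: the loadings show how strongly each
# statement relates to the underlying factors.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(responses)
loadings = pd.DataFrame(
    fa.components_.T,
    index=responses.columns,
    columns=["factor_1", "factor_2"],
)
print("Factor loadings:\n", loadings)

# Retain statements that load clearly on at least one factor; the 0.4
# cut-off is a conventional rule of thumb, assumed here for illustration.
keep = loadings.abs().max(axis=1) > 0.4
print("Statements to retain:", list(loadings.index[keep]))

In practice, decisions about retaining statements would also weigh the qualitative meaning of each item, not the statistics alone.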
5.4. Future Research
It is useful to consider this research within its specific context, i.e. it is based within one particular Educational Psychology Service. Further research may explore the views of
Educational Psychologists who work in other services. It may prove interesting to contrast these findings with those obtained within an Educational Psychology Service which uses a specific evaluation tool as part of its policy, for example Target Monitoring and Evaluation (TME). My contention that ‘unusual’ work may have a greater impact presents another opportunity for research. Such research could potentially revolutionise the role of the Educational Psychologist and might provide evidence for those in the profession and in government who wish to involve Educational Psychologists in more preventative and therapeutic interventions. It should be noted that Educational Psychologists are not the only stakeholders involved in the evaluation of their practice. Originally, this research aimed to obtain the views of other stakeholders through questionnaires, but this was not feasible within the constraints of both real world research (Robson, 2011) and a professional doctorate. Future research is therefore encouraged to explore what information is meaningful to a range of stakeholders, including children, young people, parents/carers, schools, local authority commissioners and central government. Finally, it would be useful to pilot an improved evaluation form based on the research by Turner, et al. (2010). This could establish the viability of its use in a context different from that of the original research. If a pilot proved successful, such a tool could be the starting point for a more consistent approach to the evaluation of Educational Psychologists’ work, one that collects different types of information from multiple sources and may contribute to the evidence base for the effectiveness of the profession.
5.5. Next Steps
As this research is limited to a single Educational Psychology Service, I offer my recommendations circumspectly. I feel that my findings create opportunities to discuss the meaning of evaluation and for practitioners to consider their positions among a range of semantic polarities related to both the activity of evaluation and their role. It may be useful for Educational Psychology Services to think about which positions have been adopted within polarities like: ‘tangible – intangible’, ‘responsive – preventative’, ‘radical – traditional’ and ‘capturing information of meaning to me – of meaning to others’. This may give clarity about ways forward in terms of evaluation tools used in those services, the types of information recorded and how individual Educational Psychologists might be supported in their evaluations. It may also enable services and practitioners to ‘be influential in the way other positions are taken and maintained’ (Campbell and Grønbæk, 2006, p. 44), in other words empowering them to choose how they work and the values this communicates.
5.6. Researcher’s Reflections
I have a varied background working with children, young people and families: I was a teacher, a youth worker and a Children’s Centre manager before training to become an Educational Psychologist. Evaluation and the monitoring of outcomes and impact have increasingly become features of my professional life. As a result, this research is firmly rooted in my own experiences of evidencing the merit and worth of projects and interventions I have delivered, commissioned, planned or recommended. When I began my training as an Educational Psychologist, I was surprised by the emphasis on quantitative evaluation tools
that was at that time apparent within the field of Educational Psychology. In my experience, such data had been open to manipulation, and I felt that qualitative information was far more meaningful. It thus occurred to me to ask whether research had been undertaken to find out how others experienced evaluation and what kinds of information they found meaningful. My literature search showed a scarcity of such research, and I therefore saw an opportunity for my doctoral thesis. The Principal Educational Psychologist of the Educational Psychology Service with which I undertook my professional placement agreed that my research would be of use, and I am grateful that I was allowed to undertake this research project. I anticipated that my participants would value qualitative over quantitative methods of evaluation. However, this was not the case, and through my research I have learned the value of quantitative data when triangulated with qualitative data. Although still an adherent of social constructionism, I am now much more open to the benefits of a pragmatic approach and less sceptical of realist views. I hope that my research will provide substantiation for the use of multiple types and sources of information when Educational Psychologists’ work is evaluated, and for a flexible adoption of measurement tools that are appropriate to the context, intervention and individuals involved.
6. Conclusion
The evaluation of Educational Psychologists’ work is receiving a high level of attention. This is in response to real or perceived pressures from external sources, such as a government wielding an austerity plan, and from within a profession committed to evidence-based practice and evidencing impact. Although the literature contains examples of techniques that have been used to evaluate Educational Psychologists’ practice, and numerous position papers discussing the ontological and epistemological implications of using such techniques, no research has asked what information is of meaning to Educational Psychologists when they evaluate their work. This research aimed to explore this question. Through interviews, this research found that Educational Psychologists working within a local authority Educational Psychology Service use many different types of information when they consider whether their practice has been worthwhile, and that they greatly value having an impact on the children, young people, families and schools they work with. Using questionnaires, it also found that a number of different types of information are seen as meaningful to Educational Psychologists within the same service when they think about a piece of work they feel has been going well. This research therefore concludes that a tool for evaluating the work of Educational Psychologists should draw upon a variety of data types so that the information obtained is meaningful to a large number of practitioners and provides opportunities for corroboration. This is necessary because of the inherent complexity of the work at numerous levels and the sometimes intangible nature of the changes effected. Although it may be true that ‘it doesn’t take a rocket scientist to know when someone is happy’, it perhaps requires a Psychologist to unpick how and why this may be so.
Educational Psychologists, as scientist-practitioners, are well placed to choose measures appropriate to each case and context in order to collect information for evaluation. As professionals with the capacity to embrace uncertainty, or the ‘tolerance of doubt and tolerance of a sense of infinity’ (Bion, 1988, p. 94), Educational Psychologists may also use reflective practice as a means to process their sense that things have gone well or poorly. Allowing these senses, or ‘internal tools’, to be available to conscious thought creates the opportunity both to learn from experience and to draw upon the various forms of evidence collected to corroborate those ‘internal tools’. It also reasserts the value of ‘expertise’ within evidence-based practice, as described by Sackett, et al. (1996), as well as the importance of the practitioner voice. Getting evaluation right is not only important to ensure that Educational Psychologists are able to evidence the impact that they have and that their work is helpful. Evaluation is also potentially an activity which, on the one hand, measures the merit and worth of the practitioner involved and, on the other, can delineate the values of the profession through the positions adopted by Psychologists around the changes they effect. Evaluation should therefore be viewed as a profoundly meaningful activity, far beyond one which establishes ‘what works’. Rather, evaluation captures the meanings made by people in the light of an intervention. Considering the form evaluation takes also allows for an open discussion about Educational Psychology’s values and the nature of the Educational Psychologist’s role, giving opportunities for practitioners, services and the profession to reflect on the positions they inhabit and perhaps to allow for change.
7. References
Annan, M. (2005). Observations on a service review: Time to move on? Educational Psychology in Practice, 21 (4), 261 – 272. Anthony, W. (1999). From teacher to EP: The metamorphosis. Educational Psychology in Practice, 14 (4), 231 – 234. Anthun, R. (2000). Parents' views of quality in Educational Psychology Services. Educational Psychology in Practice, 16 (2), 141 – 157. Ashton, R. and Roberts, E. (2006). 'What is valuable and unique about the Educational Psychologist?' Educational Psychology in Practice, 22 (2), 111 – 123. Baxter, J. and Frederickson, N. (2005). Every Child Matters: can educational psychology contribute to radical reform? Educational Psychology in Practice, 21 (2), 87 – 102. BBC (12 May 2010). David Cameron and Nick Clegg pledge 'united' coalition. http://news.bbc.co.uk/1/hi/8676607.stm accessed 4 April 2012. Biesta, G. (2007). Why ‘‘what works’’ won’t work: evidence-based practice and the democratic deficit in educational research. Educational Theory, 57 (1), 1 – 22. Bion, W. (1988). Learning from Experience. London: Karnac. Bion, W. (1997). Taming Wild Thoughts. London: Karnac Books. Boyle, J. M. E. and MacKay, T. (2007). Evidence for the efficacy of systemic models of practice from a cross-sectional survey of schools' satisfaction with their Educational Psychologists. Educational Psychology in Practice, 23 (1), 19 – 31.
British Psychological Society (2012). What do Educational Psychologists do? http://www.bps.org.uk/careers-education-training/how-become-psychologist/typespsychologists/becoming-educational-psycholo accessed 16 March 2012. The Ethics Committee of the British Psychological Society (2010). Code of Ethics and Conduct: Guidance published by the Ethics Committee of the British Psychological Society. Leicester: The British Psychological Society. Brocki, J. M. and Wearden, A. J. (2006). A critical evaluation of the use of interpretative phenomenological analysis (IPA) in health psychology. Psychology and Health, 21 (1), 87 – 108. Campbell, D. (2000). The Socially Constructed Organization. London: Karnac Books. Campbell, D., Draper, R. and Huffington, C. (1991). Teaching Systemic Thinking. London: Karnac Books. Campbell, D. and Grønbæk, M. (2006). Taking Positions in the Organization. London: Karnac. Cherry, C. (1998). Evaluation of an Educational Psychology Service in the context of LEA inspection. Educational Psychology in Practice, 14 (2), 118 – 127. Clegg, S. (2005). Evidence-based practice in educational research: a critical realist critique of systematic review. British Journal of Sociology of Education, 26 (3), 415 – 428. Cohen, L., Manion, L. and Morrison, K. (2007). Research Methods in Education. Abingdon: Routledge. Cresswell, J. W. (2003). Research Design: Qualitative, Quantitative and Mixed Method Approaches: Second Edition. London: Sage.
Cuckle, P. and Bamford, J. (2000). Parents' evaluation of an Educational Psychology Service. Educational Psychology in Practice, 16 (3), 361 – 371. Daniels, A. and Williams, H. (2000). Reducing the need for exclusions and statements for behaviour. Educational Psychology in Practice, 15 (4), 220 – 227. Department for Children, Schools and Families (2009). Lamb Enquiry: Special Educational Needs and Parental Confidence. Nottingham: DCSF Publications. Department for Education (2011a). Support and Aspiration: A new approach to special educational needs and disability. A Consultation. Norwich: TSO. Department for Education (14 November 2011). Press release. http://www.education.gov.uk/inthenews/inthenews/a00200150/16-million-tosupport-training-of-educational-psychologists accessed 29 March 2012. Department for Education (2011b). Developing Sustainable Arrangements for the Initial Training of Educational Psychologists. Final Report. Crown Copyright. Department for Education (2012a). School Funding Reform: Next Steps towards a Fairer System. Crown Copyright. Department for Education (2012b). Support and Aspiration: A new approach to special educational needs and disability: Progress and Next Steps. Crown Copyright. http://media.education.gov.uk/assets/files/pdf/s/support%20and%20aspiration%20a%20new%20approach%20to%20special%20educational%20needs%20and%20disability%20%20%20progress%20and%20next%20steps.pdf accessed 16 May 2012.
Department for Education and Skills (2001). Special Educational Needs: Code of Practice. Department for Education and Skills. Department for Education and Skills (2004). Removing Barriers to Achievement: The Government’s Strategy for SEN. Nottingham: DfES Publications. Dowling, E. (1994). Theoretical framework: A joint systems approach to educational problems with children. In E. Dowling & E. Osborne (Eds.), The Family and the School: A Joint Systems Approach to Problems with Children (pp. 1 – 29). London: Routledge. Dunsmuir, S., Brown, E., Iyadurai, S. and Monsen, J. (2009). Evidence-based practice and evaluation: from insight to impact. Educational Psychology in Practice, 25 (1), 53 – 70. Dyer, C. (1995). Beginning Research in Psychology: A Practical Guide to Research Methods and Statistics. Oxford: Blackwell Publishing. Elliott, J. (2004). Multimethod approaches in educational research. International Journal of Disability, Development and Education, 51 (2), 135 – 149. Erchul, W. P. and Sheridan, S. M. (2008). Overview: the state of scientific research in school consultation. In W. P. Erchul & S. M. Sheridan (Eds.), Handbook of Research in School Consultation. New York: Lawrence Erlbaum Associates. Fallon, K., Woods, K. and Rooney, S. (2010). A discussion of the developing role of educational psychologists within Children’s Services. Educational Psychology in Practice, 26 (1), 1 – 23. Farrell, P., Woods, K., Lewis, S., Rooney, S., Squires, G. and O’Connor, M. (2006). A Review of the Functions and Contribution of Educational Psychologists in England and Wales in
light of “Every Child Matters: Change for Children” RR792. Nottingham: DfES Publications. Fox, M. (2002). The education of children with special educational needs: evidence or value driven? Educational and Child Psychology, 19 (3), 42 – 53. [Special Issue]: Educational Psychology and Evidence. Fox, M. (2003). Opening Pandora’s Box: evidence-based practice for educational psychologists. Educational Psychology in Practice, 19 (2), 91 – 102. Fox, M. (2009). Working with systems and thinking systemically – disentangling the crossed wires. Educational Psychology in Practice, 25 (3), 247 – 258. Fox, M. (2011). Practice-based evidence – overcoming insecure attachments. Educational Psychology in Practice, 27 (4), 325 – 335. Fox, M., Martin, P. and Green, G. (2007). Doing Practitioner Research. London: Sage. Fox, M. and Rendall, S. (2002). Ethical issues for educational psychologists engaged in research. Educational and Child Psychology, 19 (1), 61 – 69. Frederickson, N. (2002). Evidence-based practice and educational psychology. Educational and Child Psychology, 19 (3), 96 – 111. [Special Issue]: Educational Psychology and Evidence. Gage, N. L. (1989). The paradigm wars and their aftermath: A ‘historical’ sketch of research on teaching since 1989. Educational Researcher, 18 (9), 4 – 10. Gough, D. (2007). Weight of evidence: a framework for the appraisal of the quality and relevance of evidence. Research Papers in Education, 22 (2), 213 – 228.
Guba, E. G. and Lincoln, Y. S. (1989). Fourth Generation Evaluation. Newbury Park: Sage. Hammersley, M. (2008). Paradigm war revived? On the diagnosis of resistance to randomized controlled trials and systematic review in education. International Journal of Research & Method in Education, 31 (1), 3 – 10. Hanko, G. (2002). Making psychodynamic insights accessible to teachers as an integral part of their professional task: the potential of collaborative consultation approaches in school-based professional development. Psychodynamic Practice, 8 (3), 375-389. HM Government (2010). The Coalition: our programme for government. London: Crown Copyright. HMIE (2011). Quality Management in Local Authority Educational Psychology Services 1: Selfevaluation for Quality Improvement: Part 1 Self-evaluation in the local authority context. http://www.hmie.gov.uk/documents/publication/epsseqi-04.html accessed 2 November 2011. H. M. Treasury (2010). Spending Review 2010. CM7942. London: Crown Copyright. Howell, D. C. (1997). Statistical Methods for Psychology: Fourth Edition. Belmont: Duxbury Press. Hughes, J. N., Hasbrouk, J. E., Serdahl, E., Heidgerken, A. and McHaney, L. (2001). Responsive Systems Consultation: A preliminary evaluation of implementation and outcomes. Journal of Educational and Psychological Consultation, 12 (3), 179 – 202. Ingraham, C. L. and Oka, E. R. (2006) Multicultural Issues in Evidence-Based Interventions. Journal of Applied School Psychology, 22 (2), 127 — 149.
Johnson, R. B. and Onwuegbuzie, A. J. (2004). Mixed methods research: a research paradigm whose time has come. Educational Researcher, 33 (7), 14 – 26. Kazak, A. E., Hoagwood, K., Weisz, J. R., Hood, K., Kratochwill, T. R., Vargas, L. A. and Banez, G. A. (2010). A Meta-Systems approach to Evidence-Based Practice for children and adolescents. American Psychologist, 65 (2), 85 – 97. Kelly, D. and Gray, C. (2000). Educational Psychology Services (England): Current Role, Good Practice and Future Directions: The Research Report. Nottingham: DfEE Publications. Kiresuk, T. J. and Sherman, R. E. (1968). Goal Attainment Scaling: a general method for evaluating comprehensive community mental health programs. Community Mental Health Journal, 4 (6), 443 – 453. Kvale, S. (1994). Validation as communication and action: on the social construction of reality. Presented at the American Educational Research Association Conference in New Orleans, 4 – 8 April, 1994. http://eric.ed.gov/PDFS/ED371020.pdf accessed 16 February 2012. Langdridge, D. (2008). Phenomenology and Critical Social Psychology: Directions and debates in theory and research. Social and Personality Psychology Compass, 2 (3), 1126 – 1142. Larkin, M., Watts, S. and Clifton, E. (2006). Giving voice and making sense in interpretative phenomenological analysis. Qualitative Research in Psychology, 3, 102 – 120. Leiper, R. (1994) Evaluation: organizations learning from experience. In A. Obholzer and V. Z. Roberts (Eds.), The Unconscious at Work: Individual and Organizational Stress in the Human Services (pp. 197 – 205). London: Routledge.
Lincoln, Y. S. and Guba, E. G. (2003). Paradigmatic controversies, contradictions, and emerging confluences. In N. K. Denzin and Y. S. Lincoln (Eds.), The Landscape of Qualitative Research: Theories and Issues (Second Edition). Thousand Oaks: Sage. Lindsay, G. (2007). Educational Psychology and the effectiveness of inclusive education/mainstreaming. British Journal of Educational Psychology, 77, 1 – 24. MacKay, G. and Lundie, J. (1998). GAS released again: proposals for the development of goal attainment scaling. International Journal of Disability, Development and Education, 45 (2), 217 – 231. MacKay, G., Somerville, W. and Lundie, J. (1996). Reflections on goal attainment scaling (GAS): cautionary notes and proposals for development. Educational Research, 38 (2), 161 – 172. Malec, J. F. (1999). Goal Attainment Scaling in rehabilitation. Neuropsychological Rehabilitation, 9 (3/4), 253 – 275. Malterud, K. (2001). The art and science of clinical knowledge: evidence beyond measures and numbers. The Lancet, 358, 397 – 400. Marsh, R. (2005). Evidence-Based Practice for education? Educational Psychology, 25 (6), 701 – 704. Marson, S. M., Wei, G. and Wasserman, D. (2009). A reliability analysis of Goal Attainment Scaling (GAS) weights. American Journal of Evaluation, 30 (2), 203 – 216.
Matthews, J. (2002). An evaluation of educational psychologists’ interventions at stage 3 of the code of practice. Educational Psychology in Practice, 18 (2), 139 – 156. Matthews, J. (2003). A framework for the creation of practitioner-based evidence. Educational and Child Psychology, 20 (4), 60 – 67. Mercieca, D. (2009). Working with uncertainty: Reflections of an Educational Psychologist on working with children. Ethics and Social Welfare, 3 (2), 170 – 180. Mertens, D. M. (2005). Research and Evaluation in Education and Psychology: Integrating Diversity with Quantitative, Qualitative, and Mixed Methods. Thousand Oaks: Sage. Monsen, J. J., Brown, E., Akthar, Z. and Khan, S. Y. (2009). An evaluation of a pre-training assistant educational psychologist programme. Educational Psychology in Practice, 25 (4), 369 – 383. Moore, J. (2005). Recognising and questioning the epistemological basis of educational psychology practice. Educational Psychology in Practice, 21 (2), 103 – 116. Nixon, J. (2004). What is theory? Educar, 34, 27 – 37. Oakley, A. (2006). Resistances to ‘new’ technologies of evaluation: education research in the UK as a case study. Evidence and Policy, 2 (1), 63 – 87. Oakley, A., Gough, D., Oliver, S. and Thomas, J. (2005). The politics of evidence and methodology: lessons from the EPPI-Centre. Evidence and Policy, 1 (1), 5 – 31. Obholzer, A. and Roberts, V. Z. (1994). The Unconscious at Work: Individual and Organizational Stress in the Human Services. London: Routledge.
Ofsted (2010). The special educational needs and disability review: A statement is not enough. 090221. Manchester: Crown Copyright. Ogden, J. and Lo, J. (2011). How meaningful are data from Likert scales? An evaluation of how ratings are made and the role of the response shift in the socially disadvantaged. Journal of Health Psychology, 1 – 12. http://hpq.sagepub.com/content/early/2011/08/06/1359105311417192 accessed 12 December 2011. Osborne, C. and Alfano, J. (2011). An evaluation of consultation sessions for foster carers and adoptive parents. Educational Psychology in Practice, 27 (4), 395 – 413. Parry, G., Roth, A. and Fonagy, P. (2006). Psychotherapy research, health policy, and service provision. In A. Roth and P. Fonagy, What Works for Whom? Second Edition. A Critical Review of Psychotherapy Research. New York: Guilford Press. Pajares, F. (2003). In search of Psychology’s philosophical center. Educational Psychologist, 38 (3), 177 – 181. Patton, M. Q. (2002). Qualitative Research and Evaluation Methods. Thousand Oaks: Sage. Pawson, R. and Tilley, N. (1997). Realistic Evaluation. London: Sage. Pinney, A. (2003). In need of review? The Audit Commission’s report on statutory assessment and Statements of Special Educational Needs. British Journal of Special Education, 29 (3), 118 – 122. Popper, K. (2002). The Logic of Scientific Discovery. London: Routledge Classics.
Quicke, J. (2000). A phenomenology of Educational Psychological practice. Educational Psychology in Practice, 15 (4), 256 – 262. Reid, K., Flowers, P. and Larkin, M. (2005). Exploring lived experience. The Psychologist, 18 (1), 20 – 23. Rendall, S. and Stuart, M. (2005). Excluded from School: Systemic Practice for Mental Health and Education Professionals. Hove: Routledge. Roach, A. T. and Elliott, S. N. (2005). Goal Attainment Scaling: an efficient and effective approach to monitoring student progress. Teaching Exceptional Children, 37 (4), 8 – 17. Roach, A. T., Kratochwill, T. R. and Frank, J. L. (2009). School-based consultants as change facilitators: Adaptation of the Concerns-Based Adoption Model (CBAM) to support the implementation of research-based practices. Journal of Educational and Psychological Consultation, 19, 300 – 320. Robson, C. (2011). Real World Research: A Resource for Users of Social Research methods in Applied Settings (Third Edition). Chichester: John Wiley and Sons. Rosenthal, R. and Rosnow, R. L. (1991). Essentials of Behavioral Research: Methods and Data Analysis: Second Edition. New York: McGraw-Hill. Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., Haynes, R. B. and Richardson, W. S. (1996). Evidence based medicine: what it is and what it isn’t. British Journal of Medicine, 312, 71 – 72. Salzberger-Wittenberg, I., Williams, G. & Osborne, E. (1999). The Emotional Experience of Learning and Teaching. London: Routledge.
Schlosser, R. W. (2004). Goal attainment scaling as a clinical measurement technique in communication disorders: a critical review. Journal of Communication Disorders, 37, 217 – 239. Schön, D. A. (1983). The Reflective Practitioner: How Professionals Think in Action. London: Temple Smith. Sharp, S., Frederickson, N. and Laws, K. (2000). Changing the profile of an Educational Psychology Service. Educational and Child Psychology, 17 (1), 98 – 111. Shinebourne, P. (2011). The theoretical underpinnings of Interpretative Phenomenological Analysis (IPA). Existential Analysis, 22 (1), 16 – 31. Sladeczek, I. E., Elliott, S. N., Kratochwill, T. R., Robertson-Mjaanes, S. and Stoiber, K. C. (2001). Application of Goal Attainment Scaling to a conjoint behavioral consultation case. Journal of Educational and Psychological Consultation, 12 (1), 45 – 58. Slavin, R. (2008). Perspectives on Evidence-Based Research in education – what works? Issues in synthesizing educational program evaluations. Educational Researcher, 37 (1), 5 – 14. Smith, A. (1994). Introduction and overview. In T. J. Kiresuk, A. Smith and J. E. Cardillo (Eds.), Goal Attainment Scaling: Applications, Theory, and Measurement. Hillside: Lawrence Erlbaum Associates. Smith, J. A. (1996). Beyond the divide between cognition and discourse: Using interpretative phenomenological analysis in health psychology. Psychology and Health, 11 (2), 261 – 271.
Smith, J. A. (2004). Reflecting on the development of interpretative phenomenological analysis and its contribution to qualitative research in psychology. Qualitative Research in Psychology, 1, 39 – 54. Smith, J. A. (2010). Interpretative Phenomenological Analysis: A reply to Amedeo Giorgi. Existential Analysis, 21 (2), 186 – 192. Smith, J. A. (2011). Evaluating the contribution of interpretative phenomenological analysis. Health Psychology Review, 5 (1), 9 – 27. Smith, J. A., Flowers, P. and Larkin, M. (2009). Interpretative Phenomenological Analysis: Theory, Method and Research. London: Sage. Smith, J. A. and Osborn, M. (2003). Interpretative phenomenological analysis. In J. A. Smith (Ed.) Qualitative Psychology: a Practical Guide to Research Methods. London: Sage. Squires, G., Farrell, P., Woods, K., Lewis, S., Rooney, S. and O'Connor, M. (2007). Educational Psychologists' contribution to the Every Child Matters agenda: The parents' view. Educational Psychology in Practice, 23 (4), 343 – 361. Stoiber, K. C. and Waas, G. A. (2002). A contextual and methodological perspective on the evidence-based intervention movement within school psychology in the United States. Educational and Child Psychology, 19 (3), 7 – 21. [Special Issue]: Educational Psychology and Evidence. Stone, H. and Sidel, J. L. (2004). Sensory Evaluation Practices (Third Edition). San Diego: Elsevier Academic Press. Stufflebeam, D. L. (2001). Evaluation models. New Directions for Evaluation, 89, 7 – 98.
Sullivan, L. E. (2009). The SAGE Glossary of the Social and Behavioural Sciences. Thousand Oaks: Sage. Symington, J. and Symington, N. (2001). The Clinical Thinking of Wilfred Bion. London: Routledge. Talisse, R. B. and Aiken, S. F. (2011). The Pragmatism Reader. Woodstock: Princeton University Press. TES (29 October 2010). Psychologists: training freeze - service at risk. http://www.tes.co.uk/article.aspx?storycode=6061755 accessed 29 March 2012. Todd, Z., Nerlich, B. and McKeown, S. (2004). Introduction. In Z. Todd, B. Nerlich, S. McKeown and D. D. Clarke (Eds.), Mixing Methods in Psychology (pp. 1 – 16). Hove: Psychology Press. Turner, S., Randall, L. and Mohammed, A. (2010). Doing an effective job? Measuring the impact of casework. Educational Psychology in Practice, 26 (4), 313 – 329. von Bertalanffy, L. (1968). General System Theory: Foundations, Development, Applications (Revised Edition). New York: George Braziller. Viswanathan, M. (2005). Measurement Error and Research Design. Thousand Oaks: Sage. Williams, G. (1997). Internal Landscapes and Foreign Bodies. London: Karnac. Willig, C. (2001). Introducing Qualitative Research in Psychology: Adventures in Theory and Method. Buckingham: Open University Press.
Willig, C. (2007). Reflections on the use of a phenomenological method. Qualitative Research in Psychology, 4, 209 – 225.
Worrall-Davies, A. and Cottrell, D. (2009). Outcome research and interagency work with children: what does it tell us about what the CAMHS contribution should look like? Children and Society, 23, 336 – 346. Yardley, L. (2000). Dilemmas in qualitative health research. Psychology and Health, 15, 215 – 228. Yardley, L. (2008). Demonstrating validity in qualitative psychology. In J. Smith (Ed.) Qualitative Psychology: A Practical Guide to Research Methods (pp. 235 – 251). London: Sage. Youell, B. (2006). The Learning Relationship: Psychoanalytic Thinking in Education. London: Karnac. youthinmind. http://www.sdqinfo.com/ accessed 15 May 2012.
8. Appendices
8.1. Appendix 1: Literature Search Inclusions/Exclusions
Columns: Applicable to Educational Psychology Practice | About EP | Post 2000 | New Info
Search: 'evidence-based practice' and 'educational psychology'
Practice-based evidence – overcoming insecure attachments.Detail Only Available By: Fox, Mark. Educational Psychology in Practice, Dec2011, Vol. 27 Issue 4, p325-335, y
Social Stories : does the research evidence support the popularity?Detail Only Available By: Styles, Adam. Educational Psychology in Practice, Dec2011, Vol. 27 Issue 4, p415-436 y An Evaluation of the Treatment Integrity Planning Protocol and Two Schedules of Treatment Integrity Self-Report: Impact on Implementation and Report Accuracy.Detail Only Available By: Sanetti, Lisa M. Hagermoser; Kratochwill, Thomas R.. Journal of Educational & Psychological Consultation, OctDec2011, Vol. 21 Issue 4, p284-308, y
y
y
y
y
n
n/a
n
n
n/a
Is the glass half-full or half-empty? Perceptions of recently-qualified educational psychologists on the effectiveness and impact of their Master's level research.Full Text Available Landor, Miriam; Educational Psychology in Practice, Vol 27(1), Mar, 2011. y
y
n
n/a
School Counseling Outcome: A Meta-Analytic Examination of Interventions.Full Text Available By: Whiston, Susan C.; Wendi Lee Tai; Rahardja, Daryn; Eder, Kelly. Journal of Counseling & Development, Winter2011, Vol. 89 Issue 1, p37-55 y
y
n
n/a
Cognitive behaviour therapy in schools: The role of educational psychology in the dissemination of empirically supported interventions.Full Text Available Pugh, John; Educational Psychology in Practice, Vol 26(4), Dec, 2010. pp. 391-399 y
y
n
n/a
Sharing and Inspiring.Full Text Available By: Milsom, Amy. Professional School Counseling, Aug2009, Vol. 12 Issue 6, p2-3 y
n
n
n/a
Evidence-based practice and evaluation: From insight to impact.Full Text Available Dunsmuir, Sandra; Brown, Emma; Iyadurai, Suzi; Monsen, Jeremy; Educational Psychology in Practice, Vol 25(1), Mar, 2009. pp. 53-70 y
y
y
y
y
y
n
n
n
n/a
n
n
n/a
n
n
n/a
The Boulder model in academia: Struggling to integrate the science and practice of psychology.Detail Only Available Overholser, James C.; Journal of Contemporary Psychotherapy, Vol 37(4), Dec, 2007. pp. 205-211 y
n
n
n/a
Preparing psychologists for evidence-based school practice: Lessons learned and challenges ahead.Full Text Available Kratochwill, Thomas R.; American Psychologist, Vol 62(8), Nov, 2007. pp. 829-843. y
y
y
y
Educational psychology: The fall and rise of therapy.Detail Only Available Mackay, Tommy; Educational and Child Psychology, Vol 24(1), 2007. Special issue: Therapy. pp. 7-18. y
n
n
n/a
The educational psychologist as community psychologist: Holistic child psychology across home, school and community.Detail Only Available MacKay, Tommy; Educational and Child Psychology, Vol 23(1), 2006. pp. 7-15 y
n
n
n/a
Evidence-based practice for education?Detail Only Available Marsh, Reg; Educational Psychology, Vol 25(6), Dec, 2005. Special issue: Developments in educational psychology: How far have we come in 25 years?. pp. 701-704 y
y
y
y
Evidence-based practice in educational research: A critical realist critique of systematic review.Detail Only Available Clegg, Sue; British Journal of Sociology of Education, Vol 26(3), Jul, 2005. pp. 415-428. y
y
y
y
Situational Analysis: A Framework for Evidence-Based Practice.Detail Only Available Annan, Jean; School Psychology International, Vol 26(2), May, 2005. pp. 131-146. y
y
y
y
Current Status and Future Directions of School-Based Behavioral Interventions.Full Text Available By: Gresham, Frank M.. School Psychology Review, 2004, Vol. 33 Issue 3, p326-343 y
y
n
n/a
Learning Environments: The Case for Evidence-Based Practice and Issue-Driven Research.Full Text Available By: Mayer, Richard E.. Educational Psychology Review, Dec2003, Vol. 15 Issue 4, p359-366 y
y
n
n/a
A framework for the creation of practitioner-based evidence.Detail Only Available Matthews, Jeff; Educational and Child Psychology, Vol 20(4), 2003. pp. 60-67 y
y
y
y
Evidence-based practice and educational psychology.Detail Only Available Frederickson, Norah; Educational and Child Psychology, Vol 19(3), 2002. pp. 96-111 y
y
y
y
Developing evidence-based practices and research collaborations in school settings.Full Text Available Apel, Kenn; Language, Speech, and Hearing Services in Schools, Vol 32(3), Jul, 2001. pp. 149152. y
n
n
n/a
Determining Evidence-Based Practices in Special Education.Full Text Available By: COOK, BRYAN G.; TANKERSLEY, MELODY; LANDRUM, TIMOTHY J.. Exceptional Children, Spring2009, Vol. 75 Issue 3, p365383 y Using a Three-Step Decoding Strategy With Constant Time Delay to Teach Word Reading to Students With Mild and Moderate Mental Retardation.Detail Only Available By: Belfiore, Phillip J.; Fritts, Kevin M.; Herman, Brian C.. Focus on Autism & Other Developmental Disabilities, Jun2008, Vol. 23 Issue 2, p67-78 y Sweating the small stuff in educational psychology: How effect size and power reporting failed to change from 1969 to 1999, and what that means for the future of changing practices.Detail Only Available Osborne, Jason W.; Educational Psychology, Vol 28(2), Apr, 2008. pp. 151-160. y Report of the National Panel for Evidence-Based School Counseling: Outcome Research Coding Protocol and Evaluation of Student Success Skills and Second Step.Full Text Available By: Carey, John C.; Carey, Dimmitt; Hatch, Trish A.; Lapan, Richard T.; Whiston, Susan C.. Professional School Counseling, Feb2008, Vol. 11 Issue 3, p197-206, y
seminal!
Search: 'evaluation' and 'educational psychology' (excluding duplicates)
A discussion of the developing role of educational psychologists within Children's Services.Full Text Available By: Fallon, Kate; Woods, Kevin; Rooney, Steve. Educational Psychology in Practice, Mar2010, Vol. 26 Issue 1, p1-23 y
y
y
y
An evaluation of a pre-training assistant educational psychologist programme.Full Text Available Monsen, Jeremy J.; Brown, Emma; Akthar, Zobiah; Khan, Sehra Y.; Educational Psychology in Practice, Vol 25(4), Dec, 2009. pp. 369-383 y
y
y
y
An evaluation of consultation sessions for foster carers and adoptive parents.Detail Only Available By: Osborne, Cara; Alfano, Julia. Educational Psychology in Practice, Dec2011, Vol. 27 Issue 4, p39541 y
y
y
y
An Evaluation of the Use of the Behaviour Questionnaire.Full Text Available By: Martin, Harriet; Carroll, David. Educational Psychology in Practice, Sep2005, Vol. 21 Issue 3, p175-196 y
y
n
n/a
Bridging the gap between educational research and educational practice: The need for critical distance.Detail Only Available Biesta, Gert; Educational Research and Evaluation, Vol 13(3), Jun, 2007. Special issue: How educational researchers and practitioners meet. pp. 295-301 y
y
y
y
Cognitive behaviour therapy in schools: The role of educational psychology in the dissemination of empirically supported interventions.Full Text Available Pugh, John; Educational Psychology in Practice, Vol 26(4), Dec, 2010. pp. 391-399. y
y
n
n/a
y
n
n/a
y
n
n/a
Parents' evaluation of an educational psychology service.Full Text Available Cuckle, Pat; Bamford, Judi; Educational Psychology in Practice, Vol 16(3), Oct, 2000. pp. 361-371. y
y
y
y
Parents' Views of Quality in Educational Psychology Services.Full Text Available By: Anthun, Roald. Educational Psychology in Practice, Jun2000, Vol. 16 Issue 2, p141-157 y
y
y
y
The advance of research and evaluation skills by EPs: implications for training and professional development.Full Text Available By: Eodanable, Miranda; Lauchlan, Fraser. Educational Psychology in Practice, Jun2009, Vol. 25 Issue 2, p113-124 y
y
y
y
The appraisal of educational psychologists: A very private affair.Full Text Available Webster, David; Educational Psychology in Practice, Vol 17(2), Jun, 2001. pp. 109-120 y
y
n
n/a
Consultation groups: Participants' views.Full Text Available Bozic, Nick; Carter, Anna; Educational Psychology in Practice, Vol 18(3), Sep, 2002. pp. 189-201. y Multi-professional assessment and intervention of children with special educational needs in their early years: The contribution of educational psychology.Detail Only Available Robinson, Mary; Dunsmuir, Sandra; Educational and Child Psychology, Vol 27(4), 2010. Special issue: Early years education. pp. 10-21. y
8.2. Appendix 2: Information Letter to Interview Participants
Exploring what is of Value in Evaluation in Educational Psychology
You are invited to participate in a research study by taking part in face-to-face interviews. Before you decide whether you would like to take part, it is important that you have the opportunity to understand the research and what it will involve. Please take the time to read the following information. You are welcome to speak to me about any aspect you do not find clear. I am undertaking this research for the thesis component of my Doctorate in Child, Community and Educational Psychology. My research is an exploration into the factors which make evaluation of Educational Psychologists’ involvement meaningful, i.e. relevant, important and useful in measuring change. I am researching this area because there has been little research about this aspect of evaluation, although there is a lot of literature about the need for Educational Psychologists to evaluate their work. I would like to better understand what the best indicators of a meaningful impact from Educational Psychologists’ work are. My research questions are:
1. What is the meaning (the relevance, importance and value) of evaluation within the lived experience of an Educational Psychologist working as an evidence-based practitioner?
2. What aspects make evaluation of the impact of Educational Psychologists’ involvement relevant, important, valuable and meaningful for them?
3. Are these aspects of evaluation relevant, important, valuable and meaningful to parents, head teachers and special educational needs co-ordinators when evaluating the impact of Educational Psychologists’ involvement?
4. How can this inform the development and evaluation of tools used to evaluate Educational Psychologists’ practice?
I anticipate that qualitative measures of impact will be highlighted as valuable. There are two phases to this research. The initial phase will involve face-to-face interviews. Findings from these interviews will be used to inform a questionnaire for the second phase of the study. This request is for you to take part in the interview phase. The interviews will be recorded and transcribed. This information will then be analysed using a technique called Interpretative Phenomenological Analysis. If you decide to participate in these interviews, be assured that your identity will only be known to me. Everyone who is involved in the interviews will be assigned a code. Recordings and transcripts of the interviews will be anonymised and only identifiable according to these
codes. Any personal information will be kept on password-protected digital media and will be deleted on completion of the research. I would like each participant to be interviewed for an hour. The interviews will be arranged for mutually convenient times and will take place in a private room in Carmelita House. There are no anticipated emotional or physical risks to researcher or participants; however, should any questions elicit unforeseen distress in participants, the researcher is available to discuss any issues after the interviews have taken place. However, as researcher, I will be unable to provide extensive support and will help the participant think through who might best offer the support needed. If you have any concerns about any aspects of the way you have been approached or treated during the course of this study, or have any complaints, please contact the Principal Educational Psychologist, John Miller, or my research supervisor, Dr. Lesley Bennett, via email:
[email protected] or by phone: 020 8981 2005. Verbatim extracts from the interviews will be used in the analysis and write-up of the research as well as in the questionnaires. You will not be identified in any part of the study. All contributions will be anonymous. If there are any specific comments you would like to withdraw from the research, please inform me of this by 30 April 2011. Similarly, if you would like to withdraw from the research altogether after interviews have taken place, you are free to do so. Because of the schedule of the research, this would need to be before 30 April 2011. The research will be submitted as part of a doctoral thesis which will be available for public perusal. I also anticipate submitting this research for publication in a peer-reviewed journal. I hope this information sheet provides useful information to help you decide whether or not you would like to participate in this study. If you have any questions, please contact me on:
[email protected] 020 XXXX XXXX Thank you very much for reading this information sheet. I will contact you to find out if you would like to be interviewed. If you decide that you would like to be involved, I will ask you to sign a consent form.
Cath Lowther
Trainee Educational Psychologist
8.3. Appendix 3: Consent Form for Interview Participants
CONSENT FORM
Exploring what is of Value in Evaluation in Educational Psychology
Name of Researcher: Cath Lowther
Please initial box
1. I confirm that I have read and understand the information sheet for the above study and have had the opportunity to ask questions if needed.
2. I understand that my participation is voluntary, and that I am free to withdraw in the time specified, without giving any reason, and that any data related to my involvement will be destroyed.
3. I agree to take part in this study.
Name of participant: ___________________________________________________
Signature: _____________________________________
Date: ______________
Preferred contact number or email address: ________________________________
Researcher’s contact details: Cath Lowther, Trainee Educational Psychologist
[email protected] 020 XXXX XXXX
8.4. Appendix 4: Information for Pilot Phase Participants
I am undertaking Doctoral thesis research to explore Educational Psychologists’ experiences as a way to look at the different types of information they find meaningful when reflecting on the impact of their work. Through interviews and a review of published literature I have identified four types of information which Educational Psychologists might draw upon, namely:
Standardised measures (i.e. pre- and post-test measures using e.g. the Strengths and Difficulties Questionnaire (SDQ))
Target-based techniques (i.e. setting targets and monitoring whether these have been met e.g. Goal Attainment Scaling)
Qualitative feedback (i.e. verbal or written feedback about work, informal or formal)
Professional opinion (i.e. qualitative observation of improvement within a child, family or system from the EP’s point of view)
This questionnaire constitutes the pilot phase in the development of a further questionnaire to find out which of the above types of information Educational Psychologists most prefer. This pilot phase will involve reducing the number of statements so that those remaining best fit each category stated. I am asking Educational Psychologists to be involved in this phase to help improve the content validity of the final questionnaire. Please take the time to respond to the following four ranking exercises. Responses are completely anonymous. Results will only be used to determine which statements to include in the final questionnaire which will be given to Educational Psychologists in my sample. Completion and return of this pilot questionnaire indicates consent to participate. Your participation is very much appreciated. I will make sure I provide feedback about my research to your service so that you may see what my findings were. If you have any questions regarding this questionnaire or my research please do not hesitate to contact me via email:
[email protected] If you have any concerns regarding this questionnaire or my research please feel free to contact my supervisor Jeff Matthews via email:
[email protected] Thank you very much for your time.
8.5. Appendix 5: Ethics Approval
Quality Assurance & Enhancement
Directorate of Education & Training
Tavistock Centre
120 Belsize Lane
London NW3 5BA
Tel: 020 8938 2548
Fax: 020 7447 3837
www.tavi-port.org
1st February 2011
Catherine Lowther
3 Lemington Grove
Bracknell
Cheshire RG12 7JE
Dear Catherine
Re: Research Ethics Application
Title of research project: An Exploration of what is of Value in Evaluation in Educational Psychology: A Search for Meaning
I am pleased to inform you that, subject to formal ratification by the Trust Research Ethics Committee on Feb 15th 2011, your application has been approved. If you have any further questions or require any clarification do not hesitate to contact me. I am copying this communication to your supervisor. May I take this opportunity of wishing you every success with your research.
Yours sincerely
Louis Taussig
Secretary to the Trust Research Ethics Committee
8.6. Appendix 6: Invitation to Interview Participants
Dear All
It has been a while since I presented to you about the research I will be undertaking for the thesis component of my Doctorate in Child, Community and Educational Psychology. The research has now been approved by my university’s ethics board, which means I can now begin. There are two phases to this research. The initial phase will involve face-to-face interviews. Findings from these interviews will be used to inform a questionnaire for the second phase of the study. I am hoping to have the opportunity to interview some of you during the first phase. The interviews will be analysed using a technique called Interpretative Phenomenological Analysis. For methodological reasons, my sample for interviews needs to be as homogeneous as possible. Because of this, I will be inviting qualified EPs who have at least 5 years’ experience in more than one service to be interviewed. I hope those of you who fit this description will agree to take part. Anonymity of responses will be assured and data protection procedures are described in the consent form/information sheet that I will give to you. Those of you who do not fit the description for my interview sample have not been let off the hook! I will be asking everybody to complete the questionnaire once it is finalised (towards the end of the summer term). Additional information about anonymity and data protection etc. will be included with the questionnaires when they are given out. I would like to thank you in advance for participating in my study and would also like to thank you for the interest you have already shown in my research. If you would like to know a bit more, I have included a very brief summary of the research:
My research is an exploration into the elements which make evaluation of Educational Psychologists’ involvement meaningful, i.e. relevant, important and useful in measuring change. The reason I am researching this area is that there has been little research about this aspect of evaluation, although there is a lot of literature about the need for Educational Psychologists to evaluate their work. I would like to better understand what the best indicators of a meaningful impact from Educational Psychologists’ work are. My research questions are:
1. What is the meaning (the relevance, importance and value) of evaluation within the lived experience of an Educational Psychologist working as an evidence-based practitioner?
2. What aspects make evaluation of the impact of Educational Psychologists’ involvement relevant, important, valuable and meaningful for them?
3. Are these aspects of evaluation relevant, important, valuable and meaningful to parents, head teachers and special educational needs co-ordinators when evaluating the impact of Educational Psychologists’ involvement?
4. How can this inform the development and evaluation of tools used to evaluate Educational Psychologists’ practice?
I anticipate that qualitative measures of impact will be highlighted as valuable.
8.7. Appendix 7: Pilot Interview Schedule

Pilot Interview Schedule
1. Please tell me how you came to be an EP.
2. Describe yourself in role as an EP.
3. What has been your ‘best’ work?
4. Please tell me about a time/times when you feel you made an impact as an EP.
5. How did you know you had made an impact?
6. Tell me about a recent time when you did something differently than before.
7. Why did you change your practice?
8. What told you that the change had been useful/not?
9. How do you see your work in the future?
8.8. Appendix 8: Themes and Extracts for IPA

Themes/Extracts
(Table columns: Who | Lines | Extract | Notes/Comments)

THEME: ROLE

Violet
16 - 20
Violet
22 - 25
Violet
34 - 36
Violet
39 - 40
V
100 - 103
… in the role at present obviously having the professional knowledge in terms of knowing about the emotional and social and learning difficulties that children experience, so working in the role with that at present but I also see myself very much as working joint with the school and the teachers and the parents and us forming a team really to support the child. So it’s definitely not a me and the child type scenario it’s definitely all heads together and how do we work together to support …

And often not clearly a one sided learning issue. In almost every case there’s, it’s a multi-pronged approach that is needed. Um, in terms of I think historically the role of educational psychologists and the medical model is going in and the child has a learning difficulty and you go and you do the assessments and you do you assess and you make a recommendation and you fix the problem type of model as opposed to a much broader perspective.

And er using er many other different tools besides the IQ, the old IQ test, the cognitive assessment, to just assess and obtain a score and make a decision on that. It’s far broader than that.

having the luxury of a few sessions with the child and a few sessions with the parents and the teachers and going into the home and meeting other family members and actually having a much richer experience and I found that that has been um really really beneficial and it has really helped the work. The fact that it is not just a touch and go. So working with the whole child has been really rewarding.

It has because it has broadened my outlook even further, it’s made me look with um, with a much wider perspective on everything. So it has broadened perspective, it’s made me aware, um of what is there and um, more to be sure that the child is safe so that that is a, and having not just having a room in a school which was my previous role to actually … um,
Notes/Comments:
Also Complexity
Complexity of issue & need for ‘complex’ approach
Historical role (medical model) v broader role
Expectation of ‘you fix the problem’ relative to more complex needs requiring multi-pronged approach
Implications for evaluation
Luxury v touch and go
Really really beneficial
Working in this way is a luxury, not the usual
This is a luxury?
Compare narrowness of role i.e. ‘in a room’ opposite to ‘child’s life’
V
168 - 173
Rose
27 - 31
R
74 - 80
… you know be part of all, more of the child’s life than I was previously. I think it’s a real privilege to be able to work in the manner in which I am working at the moment with, with the broader … with the being able to go work with the broader view so I, I think that that is that has been very good… and it’s just, it’s just really rewarding, it’s really really rewarding to actually work, it’s hard work and every case is so very varied and likely to become more. […] So, um… possibly bringing a bit more stress in with that… but still working with the … the work that I’m doing is is good… And I’ve only done two IQ tests which is (laughs)… and they really really were needed so that’s been, that’s been good (laughs).

I was a little bit nervous going into it ‘cause I was a little bit nervous of losing the direct contact with. I enjoyed the direct contact with children and I guess a lot of Educational Psychology is more about working with adults and the systems around them. So I was kind of that was one thing I was unsure about was losing that direct contact but I think the more I’ve been in it the more important I actually now see the value and the importance of working with the adults around the children because actually that’s where you can really effect change that’s more sort of long lasting and a bit more wide reaching I guess

the difference in my role as a safe EP is you probably can have more regular contact so I might have a case load of about its often getting near twenty at the moment which actually feels quite a lot but because you’re which is nowhere near the kind of number of children that I might be seeing and juggling in my head when I was going round kind of schools as a school EP but then I am seeing those families probably once every other week but I prefer that way of working because then you can follow up so rather than making recommendations and then going back a term later and having people go well actually that didn’t really work and then problem solving then, you can catch it 2 weeks later and say well what about that isn’t working let’s have a look is there a way that you can still do it but do it a bit differently and so try and get it working it just helps you to
Working this way is a privilege
It is very rewarding and the work is good
Is good work a privilege/luxury? WHY?
Systemic work, not direct contact
The value of this, importance and impact
Regular contact = opportunity to evaluate and change quickly
Compare to lots of children being juggled in my head
Implications for evaluation/monitoring (shrinking workforce)
Role as this kind of EP relative to school EP (special case)
R
106 - 110
R
203 - 205
R
213 / 214 / 220
R
493 - 495
Daisy
39 - 45
D
54 - 57
get things moving

there’s lots of ideas that I’d like to develop but unfortunately at the moment and I fear it’s going to get worse the cases coming in they’re just flooding in so you kind of tend to get sucked in to running around doing case work but I think there are lots of other things that could be more effectively kind of early intervention such as groups and training and what have you which might then address some of the problems before they happen which would be more ideal really wouldn’t they but um that’s yet to be developed (laughs).

I think there are constructs that we have from our training that you don’t even I think you almost internalise them so you don’t, it’s easy to not notice until you start working with people who maybe don’t have that background, it’s easy to think that there’s nothing different that I am doing.

oh I can’t articulate it oh what am I trying to say? Does that make any sense whatsoever? (Laughs.)
ideally we want to do is focus on the preventative work to stop so many problems happening but what will inevitably happen is that you only get time for the crisis work which so I don’t know so hopefully we’ll find a way through that so it doesn’t become like that. But we’ll see.

It was a national strategy and local authorities were all given a sum of money to support [this] in the early years. I forget how much it was now. I don’t know, let me think. 40 grand or something, um, and this was kind of landed on me I have to say because the money came, the kind of national strategy … booklets came and nobody was doing anything with them so ‘Frank’ said to me, ‘Can you do something with this?’ and of course it was all at the very last minute and the money had to be spent by the end of March, the end of the financial year. So the best we could do at that stage was to divide the money between all the settings that had committed themselves

And then I have weekly meetings … and really
More effective work pushed out by workload and being reactive
Link to THINKING (Iris)
Deluge – can’t stop or avoid it
Crisis v prevention – impact on measure – scale at crisis level, more difficult?
Losing sight of the psychology
Internalised/automatic
Link to theory – both using that in practice but also the way in which memory works (cognitive psychology)
So very difficult to describe the psychology in role
Hard to articulate then how hard to measure?
Language used when trying to describe her contribution
Crisis work v prevention
Comparison between choosing own role and being landed with something
Links later with comments about thinking
Reactionary work
Statutory stuff put in contrast with preventative/supportive stuff
D
59
D
204 - 219
D
73 - 91
my role there is to support them in their thinking about individual children so they have these meetings, that’s sort of their peer supervision session if you like where they discuss cases.

Well they are, they are often involved with individual children and their role really is remarkably similar to an EP’s role except obviously they don’t have the statutory stuff, they have all the supportive, the preventative stuff.

Describes school work as ‘the usual’

Mainly because of the economic doom and gloom. I think that this is a bit of a luxury. My post is a luxury. Doing all this lovely training around the place, people being released to attend it. Getting very good feedback from that, doing lots of parenting. But these are things that are extras so I think that I see purse strings tighten and so this role might actually go. [ … ] I: It sounds like a really useful and valuable thing that you’re doing. It would be a shame to lose that. P: It would be a shame. It would be a shame to lose it yeah. And it’ll also be a shame because you know we’ve developed lots of things. You know particularly this training and er be a shame if that just went. Um, because that’s taken a lot of time a lot of you know time is money, so a lot of money to develop this. Um and it needs somebody to run it and so if there’s no one to run it. We we’ll have trained a lot of people by then I guess but it could go a lot longer and be valuable, useful. Um. So I don’t know. We’ll just have to wait and see.

I think it was my best bit of work because it was so challenging and quite aggressively so at times because we were kind of stirring things up and um and just as an example in a light hearted way we had steering group meetings [ … ] and they were dreadful. And we would actually call them, we began just to make ourselves feel better, we began to call them steering group beatings because they were just such hard work (laughing). And what we were finding was, it was a classic case of them wanting us to just get on, do something, get on with the job. And we were saying, hang on a second you know we’re not going to run in to, rush into something without properly thinking about what these
Work that is valued and given good feedback is seen as a luxury and might go
Why? A shame
Implications of cutting valuable/useful work
Link with time…
Best because of or in spite of being so challenging
Best does not mean easy
Work like beatings
Doing something v thinking about it first
Classic case = do something, not thinking
Thinking v reactivity
The impact on staff, work takes a toll
What would the people who were upset by the process have said about the outcomes/value?
What does success look like?
D
100 - 104
Iris
16 - 24
children should be getting and how they should be getting it. You know, you’ve asked us to do some thinking about this, we’re doing some thinking about it. Um but they wanted us in there assessing children, we’ve got a educational psychologist, we’ve got a clinical psychologist, get on with it. They didn’t really want to know about the research side of the project they just wanted us to have the children lined up outside and process them really. [ … ] The clinical left to do something else. I think she had felt really, you know it was very hard work. I left as well but that was more a personal decision. I wanted to be at home with my kids for a while… so I just actually left local authority work and um did some private work. Um. And partly, I mean that could have been because it was so bloody hard, you know, um, but at the end of it we had produced a … fairly hefty document outlining what the concerns were for these children, what the needs were, and how we felt you know recommendations in relation to meeting the needs. Um. Which we circulated to everyone. Um. And we kind of upset a few people in the process um it was kind of travelling the fine line between saying what needed to be said and maintaining good relationships with people. So that was quite hard in itself.

And it was the classic situation you know where you’re being, people don’t want to think. They just want to get on and do things. So no space for thinking and that was the main part of the problem. That everybody had been swept along and running around like headless chickens doing things and not really able to do them properly because the thought wasn’t behind it, And also what we felt was that some of the cases were so awful that it was hard to think about them and so for the social workers a lot of it was about giving them the space to just sort of be able to think about these awful things with somebody else.

I suppose that the [group project] has been an interesting piece of work because it’s allowed um me to take part in an intervention which has been over time which is pretty rare opportunity in terms of the work that we do and so the results of that in terms of the follow up questionnaires and things seems to
Classic situation as above
No space for thinking the main part of the problem
Headless chickens = reactive working
Things can’t be done properly unless they are thought about first and this needs space and time.
Being part of an intervention is rare
Direct v indirect role – implications re control over outcomes
Good work is:
o Interesting
I
29 - 37
I
97 - 102
I
135 - 144
suggest that those programmes have made a difference to the children that have taken part so that feels like a good piece of work to have taken part in because you feel like you’ve left children with some skills that they didn’t have before.

There are often times with the job as a whole you feel that what needs to be put in place for those children you’re passing back into somebody else’s hands so sometimes you feel a bit powerless with that but to feel I suppose that I for individual cases there is a general satisfaction that if you’ve painted a really good picture of a child and made people think a bit differently from how they have before and then you hear later that that child has made progress that feels good to just felt that you’ve clarified everybody else’s thinking a little bit. I guess I enjoy doing that.

one of the things that well for me when I started the job that I missed about teaching was the aspects of relationship so to see that you are building a relationship with children and you see them respond to the interventions and the teaching that you’re putting in place for them is quite powerful [ … ] I’ve been the one that’s built up the rapport with those kids. Yeah. It’s been fun.
I haven’t actually done a follow up for that one. I just passed it back to the school. I should probably do a follow up one and see how everybody is. And I guess that’s the down side of it from that point of view is that once the school feels equipped and I think probably for that particular student it was the parents um who contacted me directly and it was a way of managing it because the school already felt that they were meeting the student’s needs. So that student has never been a priority for my involvement so a follow up consultation in order to do that that would have to come from me. [ … ] I probably should follow up that everyone’s feeling happy about it. I haven’t heard from them since either so their immediate verbal feedback was positive and then the fact that there hasn’t been any
o Interventive
o Makes a difference
Impact measured by questionnaire but also felt
Difference in outcomes in skills and thoughts = how to measure?
Progress also outcome
Powerlessness – EPs not always in a position to do what is needed
Importance of change in thinking as an outcome
Active involvement when powerless – impact on thinking
Value of relationship – makes it possible to see the change
Relationship sometimes/often missing from EP role
Hands on work = powerful; into somebody else’s hands = powerless
Value of direct work/interventions – in control and having ownership of the impact
School having control over the monitoring/evaluation
Downside – who feels what – are these feelings appropriate or accurate?
Who has the ownership or responsibility of making sure everyone’s feeling happy?
Ownership – school/parents
Parents as monitors
Reliance on the school to do the work – who checks that things are done?
I
203 - 209
I
252 - 254
Lily
34 - 38
L
62 - 66
further concern to say oh we’re still really worried about him. [ … ]

sometimes when you write your recommendations in a report, you’re reliant on the school to do it but in the sense those parents in that case were acting as monitors that those things were being put in place which you know the school were committed to do but weren’t doing whereas when you write recommendations in a report there are those issues of ownership that the school doesn’t necessarily have because it is what you’re giving to them.

if statements go out the window then we’ll have to evaluate what schools want us for because there is a certain element in which they want us to do some of those assessments so they jump through that hoop. There’s no doubt about that… [ … ] I think there is something to be said about being a whole team, because the job is so vast in the age range and the types of needs that children have. People develop their interests in different ways don’t they? So the thought that you need to be an expert in everything would be awful (laughs)… overwhelming. But there’s always the job level work and not work. [ … ] It’s hard to give time to everything and feeling like you’re not quite doing everything as well as you’d like.

I think there’s a number of different roles, actually. I think we kind of do everything from facilitate change, to collaborate with um our other colleagues, um. … It’s a tricky question because I don’t know if I can put it into you know, concise words. Um, I definitely, it’s easier to say what I don’t see myself as. I don’t see myself as an expert. Um, I see myself as sort of someone who can add to the process. Um, and help identify I suppose with the process where the change needs to happen. I suppose yeah facilitator of change would be probably be the way I sort of try and describe myself.

I do quite a lot of staff training because I like it and because I find it to be quite a useful intervention. It’s a good use of time because you reach a whole staff. So rather than talking to an individual teacher about strategies for a particular child, you can talk to the whole staff about strategies for children of this particular type and then you know you’ve
Consultation = commitment?
Recommendations = no ownership?
What do schools want EPs for?
Assessments sometimes = jumping through hoops for a statement – impact on work and value of work
Job is potentially vast and overwhelming
Desire to do everything well and not quite doing it as well as you’d like – balance, time
Direct description
Autonomy
Doing the work that she likes and that she finds useful – because she sees it as empowering and whole staff (not just one child).
L
204 - 215
Jasmine
28-32
empowered them at least to do some basic class based strategies.

I would say that in the time that I’ve been working as an Ed Psych my role has changed so much already. Um, and I do feel that I have quite a lot of autonomy which is quite nice to be able to decide how I feel best put to use in discussion with my own schools. And I appreciate the fact that there isn’t a one size fits all, um, ten outcomes that I have to subscribe to in order to meet some bureaucratic need to make sure that I’m effective. You know and I would really hate to see that go away. I like being able to go into a school, discuss with them what they think they need, negotiate how we can meet that need and then you know work towards meeting that need. And for me client satisfaction really is key to knowing whether a job’s been well done or not. I don’t need to tick a box or be given a stat or you know. That’s what some bureaucrat needs in order to feel that they’re getting value for money but I you know on the ground in the trenches it’s about a school feeling better equipped to cope with something or a family feeling better able to cope with something or a child succeeding more than they were the day before. That for me you know is, I don’t know that you necessarily have to quantify that. I think that sometimes you can see it, you know. It doesn’t take a rocket scientist to know when someone is happy or not happy. So I would hate to have that autonomy taken away and be told that I have to do certain things to jump through more hoops than I’m already jumping through.

Um, and we planned interventions together and I suppose er one of the key things I saw as developing were around nurturing so we um, used standards funds originally to get two nurture groups off the ground. And then by the time I left those two were being funded centrally funded and we had got one other that was funded by children’s fund money I think. And then we managed two other schools decided that this was such a good idea they were going to do their own. They got, I think they more or less went off on their own, Managed to do it somehow from their budget. So that was lovely.
Autonomy
Bureaucracy v autonomy
Outcomes and prestipulated outcomes as a bureaucratic need, not a measure of effectiveness
Value of professional autonomy as a means for meeting identified needs
Imagery of warfare, action, real life (not bureaucracy)
Client satisfaction, not stats
Happy or not happy
Rocket Science quote!
Importance of satisfaction as an indicator
Funding = impact?
Implications and messages from being funded centrally, measure of value/being valued.
Managing to do these things, squeezing it in because it is seen as important.
Feeling pleased – it was lovely.
J
82-85
J
100-101
J
159-165
J
257-260
J
267-274
And so we were asked to see what we could do. And um, … so between us we kind of we thought well OK we can cover the ordinary work of an EP but what we don’t want to do, it will be nice to do something extra but we don’t want to do more of the same because partly that will set up a precedent for asking for more. Um, you know it’s going to make life difficult for other EPs.

Um, and there’s a whole section in the nurturing programme on your personal power so we were trying to boost children’s confidence and things like that. We’d talk about, personal power seemed a good one. But we looked at self-esteem and all those kinds of things.

I don’t know if you ever got to read it but I trained years ago but one of the papers we were given to read um on my training course was the Myth of the Hero Innovator and there’s a picture at the beginning of this article of a knight in shining armour, his spear and his shield and his white horse and rushing in to save the world. And um, that’s I remember that’s probably the most important thing I have ever read because you tend to just fall flat on your face and you’re not doing anything for anyone so it was not being the hero innovator but just going in and listening and putting in some ideas but then adapting to what they wanted and um being alongside them being able to say yeah you’ve got some really tricky kids here and this is really difficult um as I say then we had a whole team of people being creative whereas two of us would have run out of ideas in no time but um, so er. … yes so that um yeah, that really did feel like an impact. Yeah.

that was almost the best term I’ve ever had as an EP because I had been given like a half workload because she was a half time and I covered her maternity leave before I took on more stuff and if you could really let EPs work like that, then you could, the sky’s the limit (laughs). But what actually happens is you’re bound down with the rules about the statutory work or, and then say why aren’t they more innovative, well (laughs). Yeah, well, yeah. There you go.

I suppose if the head had said to me the problem is this little boy isn’t behaving very
Agency/power/autonomy?
EPs defining what they are able to do, doing something out of the ordinary – nice to do something extra…
Busyness of EPs.
Ordinary work = more of the same.
Personal power as confidence, but also thinking about it in terms of the power to choose to do something out of the ordinary…
HEROISM… this concept of the hero innovator is very important.
Values innovative practice – contrast with the value placed on personal power/agency.
Working together important in this instance and an appreciation of limitations, although these aren’t always adhered to (busyness, wanting to work out of the ordinary).
It feels like EPs get to make a big difference when their work is outside of the ordinary, being creative.
The best work ever – quality versus full workload.
Freedom to be innovative and the sky’s the limit versus being bound down by rules and statutory work.
Expectation for innovation from the same source as those rules and prescriptions.
Link with ‘judgements’… EPs can do so much more… effective practice versus limitations of (?) workload, what else?
J
304
J
307-309
J
195-199
well and he’s getting ostracised in the playground and the mum is getting ostracised and she comes in and complains and the other mothers complain so now I’m under a terrible amount of stress. And I had said, well OK, I can do you a behaviour programme, which is probably what I would have had to do and only that if I had a full workload. Because I didn’t then you could look a bit wider and say um, OK, I can look at that as a whole problem for the school, and do something about much more of it. And not just take the bit that was my expertise you know, that, the bit that I could do really in my sleep. Assessing and writing a behaviour programme. … Um, let’s look at how, because I suppose one of the things that I think as well is that a small change can make quite a big difference. And … um, deciding where to make that small change um … it um, it might not be um, the obvious place.

that burden of statutory work and reports

But because you have that pressure off then you’ve got that space to be er more creative and you’re picking up things from other professionals that, ways that they work, um, ideas, and thinking oh OK I could use that or adapt that or I still think we could be a whole lot more innovative than we are. Um, but that’s often to do with physical constraints.

so she said things to me like I had never ever had someone in my classroom working alongside me, trying to support me and offer ideas. … it’s such a rare thing for an EP to have a chance to do but um, it is interesting to think like that, that we give teachers lots of advice… um, and when it’s in the implementation that you hit the difficulties and we always say to them come back, I always say to them come back. I say it to parents as well. Because you know you’ll try it and think well she didn’t but you’ve got to come back and tell us that. But actually being there and seeing it and then at the end of each day she and I and the um, classroom assistant would sit down and troubleshoot something that might have happened that day.
THEME: Outcomes/Measures

V
29
Mmmm. … Best work as an EP. Is … the work that I’ve enjoyed the most and have most of a
Need for flexibility (link to evidence-based practice/professional opinion) and ability to choose the not obvious place.
… and below Pressure and impact on creativity (and impact on effective work).
Difference between being there and seeing it and giving advice and saying come back to me if you hit any difficulties. Working alongside school staff. Implementation is where the difficulties take place, when EP is not always active – such a rare thing to get the chance to do.
Job satisfaction as a measure and its link to making a difference
V
43 - 47
V
65 - 83
V
132 - 144
difference.

I’m feeling at the moment the work is very rewarding because I feel that I am having an impact on the large majority of cases. So sometimes a small impact, sometimes a greater impact, […] I’m thinking that on most of the cases that I hope to think that there is some kind of positive impact. Um, that has happened, even if the work has been of quite a short duration. Um, I think it is helping them at the moment, it’s not that the problems have been solved or have gone away, no, they are still very much there but I think that the family does feel supported and I think there are measures in place that are helping the child […] Also in terms of the, of the parents um, … managing to go about their daily routines and managing to cope through and knowing that they are supported by a group of people not just necessarily one,…

Tangible… mmmm (laughs)…. Um, […] but putting in place the clinical support needed for those difficulties so at the moment I suppose this case is in the process of … in progress, so yeah, it’s not at its final stage yet so maybe it wasn’t a good example […] Just making their appointments, having made, having contact people to help with their financial difficulties, they’ve made the contacts, the budget is under control, um… yeah, you know mom is managing to get up in the morning. I: And was that? P: that was a problem before, mmm, I: So that has been a change. P: mmmm. (agreement)

P: Um, I’ve been pretty poor in getting. We do have a written feedback form that um I haven’t been all that good about sending out and getting back but verbal feedback definitely. Um, and um, I: Can you give me an example? P: Well, just thank you so much, you know thanks so much I see a big difference, I’m managing much better, it’s no longer a problem any longer, I don’t think there’s any further work to be done because whatever was the presenting issue it’s no longer a problem. I’m chatting to the teacher and I’m sure that I have the skills to manage anything
Thinking/feeling/knowing as measure
Where is the line between thinking and hoping to think – a desire to make a positive impact (effect on perception)
Sense of reward from feeling an impact has been made
Feeling supported is sufficient
There is no fix
EP perspective of what is helpful
The problems have not been solved but the family feels supported and there are measures in place to help
Intangible outcomes
There are no tangible outcomes – therefore not seen as a good example
Process, what about that?
And then a clear indication of change identified – is this tangible? Quantifiable?
Positive feedback showing the skills and knowledge obtained and the shifts in perspective and confidence in managing the ‘problem’ as well as more tangible outcomes…
Are these tangible, as they are identifiable and describable?
Not felt so much that this is adequate, that this is really a measure, ‘pretty poor’.
V
183 - 190
R
102 – 105
R
118 - 140
that comes up in the future now, um. I know what to do whereas before I felt, I have the information, I have the knowledge, I have the skills to be able to manage my child.

Um, … as far as the child is concerned, just managing difficult transitions, you know, coming through. Managing to make new friends, um. And yes, coming to school instead of school refusal, becoming less you know coming to school, being happy in school, having friendships so you know that type of outcome. And teachers managing better. Teachers seeing improvements. Or having a perception that things have improved you know even the actual um behaviour might be the same because it’s not, you really can’t change it but managing better I think. So child, parent and teacher managing better, or saying they’re feeling that they’re managing better. Reduced bonks on the head from playground fights (laughs). Um, that type of thing. I’m not sure, in terms of the evaluation it’s really difficult to evaluate what you do, isn’t it? It really is. It’s quite a touchy feely kind of, kind of subject. It’s hard to actually pin it down unless you’re saying, well you can tie it down to less fights in the playground, better school attendance, that kind of thing. You can tie it down to numbers in that respect, … but, um, also just the parents saying I don’t need you anymore, and not from a bad, you know, a bad way (laughing). In a good way, to say I really don’t, you know, I think I’m OK now. So and the child saying I think I’m OK now. So, … I don’t know how that is evaluated really, is it? […] How you actually evaluate that, and the feedback. So some things you can do practically, … um, but some things are measurable and some things just are more intangible.

we’ve been doing a lot of thinking in our team […] about how do we evaluate our work and measure outcomes and things like that because I guess that’s more of the way the world is going

there’s a few families that I feel have been quite successful pieces of work but it’s often quite difficult to prove [ … ] So there’s one … oh. It’s terrible isn’t it that I can’t think of it. [ … ] There’s a few pieces that I’ve completed …
Touchy feely
Outcomes = being OK
People saying I think I’m OK, expressing that they are in a better place
How do you actually evaluate that? Good question – how do you?
Some things are intangible and perhaps these are the things that mean something, that show there has been a change.
Evaluation a requirement from outside
Perceptions of what is helpful or if it has been helpful – role of EP
Doesn’t know if comment is good or bad, but presents this as ‘best work’
… and the theme of what seems to have been helpful is more the well what the parents have perceived to be helpful is more having … um… what’s it somebody said. I don’t know whether this is a good thing or a bad thing but they felt like I had been a bit of a safety net. [ … ] the whole thing was just a struggle and I think everyone was reaching desperation point and all I really did was spend a lot of time with mum I think at first just trying to lower anxiety levels there but just facilitated quite a number of quite regular meetings with mum and the school and they’d had lots of support from loads of different, like hundreds of different agencies so there were loads of recommendations already there there was no real need for me to reinvent the wheel but you know what’s it like sometimes I think when people are overwhelmed with a problem the piece of paper and the recommendations all just go skim off the top and they go nothing works so I think I just helped them to actually implement some of the recommendations that they’d already been given [ … ] and yeah there was a marked difference [ … ] Um but a number of things have happened over that time so I think I helped contain an anxious situation where everyone felt like ooh maybe they’re not going to give an autism diagnosis, maybe we’re never going to get a statement, maybe this behaviour’s going to go on forever. And I think I probably contained that anxious situation until actually the assessments were completed and he was given a diagnosis of autism and a statutory assessment was then agreed when they reapplied and all of these things that they’d been feeling that they needed also then happened over that time so it’s difficult to know but I think but certainly whether it was lowering the anxiety levels or whether well I think it was a combination of both lowering the anxiety levels and implementing some of these strategies that
Lots, loads, hundreds – expression of complexity in number v ‘nothing’ works
Unable to implement recommendations when overwhelmed, role of containment
Difficult to know what has made the difference but able to identify a ‘marked’ difference – clearly visible/obvious
Containment as enabling people to see change or manage change, implement strategies etc
Or is it about feeling better?
Also COMPLEXITY
R
152 - 155
R
233 - 239
R
244 - 251
did really help him kind of made the situation kind of feels much so even though in some ways so he has autism, he’s always going to have difficulties, there’s always going to be challenging situations. that’s going to be life they’re going to need kind of support throughout I’d imagine because there’s going to be new challenges that’ll flare up but for that particular time I could close that with everyone going yeah actually everything’s great this looks really good even though … and then there’s a whole load of new support coming in that would be more … so that felt quite successful but in a difficult to measure way actually.

she said oh yeah, yeah, no so long as you’re going to ring I just feel I just like to have you there as I feel a bit like you’ve been my safety net but if things don’t work out there’s which is I don’t know whether it’s a good or a bad thing to be somebody’s safety net (laughs) but maybe that was part of the containing the kind of anxious situation at that time. So that was one that I feel was quite successful.

so the two cases that I feel have been quite successful I have done quite a lot of kind of standardised measures with them. with our team we do SDQs at the beginning and end anyway and I’ve been quite cynical and critical of SDQs because I don’t I think that they’re such a broad measure I don’t think they capture a lot to be totally honest they’ll say yeah there are some difficulties but I think in terms of when you’re working with children that’s capturing like a broad range of difficulties and actually if you’re doing a specific piece of work you’re probably not going to change everything, particularly with children who are maybe going to get a diagnoses of ASD there’s probably some of those that are probably always going to have social difficulties and there’s probably nothing with the best will in the world that we can do to make that better magically.

Where the mum sort of felt like it had been a safety net and things and you could see visibly things had changed so from when I met them he wasn’t going into every morning every um he was in afternoon nursery actually so every afternoon when he would arrive there, there would be a massive screaming and the only
Feeling a change
Verbal feedback + own feeling + relayed feeling by mum
Feelings in words
Standardised measures and how they compare with feelings
Can’t magically make things better
Seeing a visible change
R
254 - 256
R
257 - 261
R
273 - 293
way they could get him into nursery would be to carry him screaming into nursery and sometimes they didn’t get him in at all and mum would just take him home again to every day he was going in fine and quite happy about going to nursery [ … ] so there was a visible change there as in he was no longer screaming he was now happy about going in and so you could see that.

how do we measure the small bits of work that people actually are doing because SDQs measure there’s no change whereas they might have you know effected a massive change in terms of the scope that they were you know the brief that they had to come in and work on.

even though they are just the parent’s perceptions of the difficulties aren’t they and obviously mum’s perceptions of the difficulties was much higher at the time because it was a bit time of crisis when I became involved and then even though you know to my mind within him the difficulties were the same because he was still going to be autistic but actually also because they were being better managed and mum was less anxious about it her perception it was a quite a significant drop so um so that did measure a change as a positive change as well which I was astonished by so had to stop being so cynical and slating generally of SDQs.

I think maybe because my mind’s a bit maybe I used to be a maths teacher so maybe that’s what it is that I work maybe a bit more in numbers but just to try and quantify it I quite like, I know you were looking initially at the Goal Attainment Scaling but I was looking at how can the Goal Attainment Scaling be useful just in terms of that I quite liked that you described where you are at the moment and where you hoped to be and so therefore you could have a realistic goal […] I quite like that idea of being able to be working towards something quite tight so that even if other problems pop up you can still keep an element of, look we have to not lose sight of the successes that you’ve had so then actually well we started off working on this and actually made progress there yes now this is terrible but there is some progress and I quite like that idea [ … ] with teachers I thought it
Celebrating small/immeasurable changes
Improvement seen quantitatively but as an expression of perceived change, rather than ‘actual’ change
Perceived change = actual change??
Personal preference for numbers
Personal view of GAS and experience of working with it
Working towards something specific is useful – it helps to not lose sight of successes
Less useful with parents
Raises question about how meaningful the process is
The target setting is too cumbersome and laborious
When parents don’t see the point in something then it is meaningless
What information is meaningful?
R
354 - 357
R
386 - 387
R
459 - 464
R
474 - 475
worked quite well um and again for them it was quite goal specific and they’re used to doing IEPs and stuff and we could map one out quite quickly but with a parent it was quite laborious because every time we came on to different just trying to get 5 steps to from where you are now to where or you know maybe one worse than where you are now but trying to get those five steps I mean each time we then put on new descriptor on that it led off on a new story and then I think they were like why are you still going on about this (laughs) I don’t think they saw the purpose of it and I just thought actually this is a bit unnecessary to go through, its meaningless if I just develop the 5 steps coz it needs to be done jointly but I don’t think they’re seeing the point in it so that still makes it meaningless

the reason that children have come to you is that people perceive that there’s a real difficulty at the moment and if they perceive that there’s less of a difficulty then there’s been some, some change somewhere but then how you measure the mechanism of that change is tricky isn’t it. I don’t know if I’ve got a simple answer to that (laughs).

you learn there is some value in feeling like things have gone badly sometimes because then you do reflect on it more and then tweak things about how you do things next time

I think he just needed time to get to know me and then he kind of more visibly I think quite enjoyed and he did the homework that I had set him in between times and he obviously quite liked doing it which I think that showed me that it’s really important to ask those questions then cause otherwise had I not done that I would have gone OK let’s scrap this because I think it’s too much pressure on him and he’s hating it and it’s not fair. So I think that was, that surprised me (laughs) in a good way. Yeah, I think sometimes it is harder to read, and sometimes you need to ask people rather than making assumptions about what they think about things. And equally sometimes I think you can think that’s gone really well and you can ask someone and they can say not that helpful really (laughs). It’s useful to check it out I think.

I would hope that … my practice individually
Perception a reason for involvement in the first place. Perception of change = change somewhere
Reflection
Time needed for a relationship to develop
Need to avoid making assumptions, evaluation as checking it out, asking those questions
As back up/check for what is perceived
Reflection
R
528 - 535
D
115 - 127
D
134 - 145
would continue to change in a positive way. I think that I would hope that I would keep reflecting on things and learning from stuff and changing accordingly.

Exactly so measuring the perception of the problem really which doesn’t necessarily really, but I think that could get tighter I guess because also you’re often measuring the problem owner’s perception. It would be interesting to know the other people involved and the child themselves’ perception. I’d get the child’s perception of what the difficulties are probably initially when I do initial assessments and talk with them at length about how things are at school. I often would give them a rating scale then of like more of a smiley face kind of where are we? sort of happy, middle, sad. About how is school and how are things generally. So I’d get that information from them but then I probably wouldn’t come back and check with them at the end. I might check with teachers and the parents do we feel things are better but not necessarily with the child so that’s probably something that’s missing that would be interesting to … it should be done really…

Having people feed back to me, I suppose, tell me. If somebody says that was really helpful or I really got a lot out of that. Or in relation to a child things are so much better and perhaps identifying something that we had discussed or thought about as a way of moving forward um and attributing that to me having said you know perhaps we should try this um so being told, I guess. By the people who feel the difference. [ … ] Social workers feeling they were managing their cases, feeling that they had more of a hold on their cases that they were more in control of them and that they could think about a way forward. [ … ] and then we would work with the teachers who would feed back, you know we would offer them consultation sessions where they would bring cases that they were concerned about and didn’t know where to go and they’d come back and say well actually this is working well or whatever the case may be. So bringing back that sort of information as a result of a discussion with us.

the fact that people were prepared to come to us with a difficult case and ask for support
What is the child’s perception? The importance of finding that out…
Perception often a ‘problem owner’s’
Who has the problem? Where is the problem located?
Therefore who to ask/where to look for change?
Being told
People feeling a difference then communicating this in words
Subjective experience communicated through words
Impact seen in changes in feeling
Being able to think
Palatable – digestible, thought, taking things in
D
167 - 168
D
175 - 177
D
229 - 233
with it I suppose was a measure of the fact that we were valued because even though we were perhaps saying things that weren’t very palatable, people must have been seeing some kind of something in it for them to still want to have discussions. Does that make sense? I’m not being very clear but there’s no sort of… I: they came back for more. P: They came back. Yeah, yeah that was it, You’ve summed it up. They came back for more, so whatever we were giving them must have been helpful otherwise they wouldn’t have done [ … ] they wanted more of that so again it was about wanting more. They wanted more of that. They wanted as much of that as they could get because they felt very unsupported and in the structure of the place they didn’t have that kind of support system. Um. So we were having, yes, we were having to spread ourselves around um and it looked like people were wanting things from us. … So that in itself is a measure isn’t it, of usefulness, they found it useful.

if I see a change in the presenting problem. If the teachers are managing better or the parents are managing better or the child is happier and somebody’s telling me that, then I know that whatever we have agreed upon is having some effect.

I did a consultation report following an observation and a discussion and she said you’ve absolutely hit the nail on the head. That is exactly, that is the situation and it was a very helpful report. So I suppose again that sort of feedback, being told that was a very helpful report, because we don’t often get told that, told me that I must have done something right.
Seeing it
Actions as measures – asking for service or coming back for more
Requests for help even though things said were difficult
Expressions of felt change – actions and words
Wanting more is a measure of how helpful the service has been
Wanting – desire and lack
we’re using the early years behaviour checklist to measure before and after behaviours in children. Um and that’s showing a kind of 80%, consistently it seems to be 80% change in before and after scores and although I think it’s a bit of a dubious, it’s not exactly the right tool to be using but it’s the best we’ve got and not all of the children that those that have shown improvements have not come from a very high level in the first place. They’re not necessarily clinical but
Gave this example after the questions ended; such ‘formal’ evaluation was not considered earlier, suggesting that other measures are more meaningful to her/come to mind more easily (also spoke of qual/quan data)
Devalues the standardised tool – dubious/crude
Seeing change and being told about it
Feedback is not often given or received
When obtained must mean something
I
45 - 60
I
60 - 66
they’re all, they’ve got lower scores, 80% of them have got lower scores than they had at the beginning. So it’s kind of a crude measure.

depending that was very dependent on who was filling it in as to how valuable that information has been really but it was something that we could take note of in terms of feeding back [ … ] well some teachers were um didn’t necessarily fill in the questionnaire in the way that you asked them or even had the notes on it that explained how to fill it in so you know some children who perhaps were exhibiting acting out behaviours teachers may use it as a forum for sounding out about their own feelings about the child rather than describing the child and that became less valuable because you were finding out about the teacher rather than about the child which wasn’t the information that we asked for. We found out information about what they thought about the parents. (laughs) [ … ] some people would you know maybe quite appropriately say something about their family circumstances but other people might make a judgemental comment that was really about their feelings about the child’s family or about the child themselves. You know you could infer things from but wasn’t necessarily helpful in then doing a pre and post evaluation of whether you’d made a difference as far as the teacher was concerned.

Because the Spence Anxiety Scale one of the things we had thought about when we did the post questionnaire was that for some children their scores might go up and because you’re doing so much around emotional literacy and the language around anxiety and coping with it anyway that where children perhaps just in a state of anxiousness but weren’t really able to reflect on that before the programme started by the end of it they’d certainly know what it was about to be worried and all the body signs that they would have so we thought that some children they you know when it has questions like I get butterflies in my stomach and we’d been talking about that that that score might actually go up and that wouldn’t necessarily be reflective of them feeling worse, it just might be that they could label their anxieties a bit more accurately.
Even though figures give strong indication of positive shift
Tools can be used inappropriately, possibly due to a flaw in the tool
What do you do with information you haven’t asked for?
Risk – qualitative information can be risky – you don’t necessarily know what you might get back
Although the information obtained was not what was asked for, the process still served a function and information was obtained.
Qualitative measures are susceptible to judgements/feelings – making them unreliable?
Measurement tools are not without their limitations
Quantitative/standardised tools may also measure something a little different to what is advertised.
I
94 - 95
so let’s sit down and work out what needs to happen for you to feel happy that your child’s needs are being met and that one worked very well.
I
121 - 137
So the one that felt like it was very successful in the high school where the parents directly contacted me even within that consultation they did come back to the idea so when are you going to see him again so we did have to revisit a couple of those things more than once but at the end of it I think they felt that they had a whole range of things that were being put in place or things that they could explore um perhaps more realistic understanding of their child’s abilities from the information that we already had on the table that perhaps they hadn’t already taken on board so the fact that they fed back that it was useful and the school fed back that it had been a useful process and they had other things that they could go off and explore. I think it was a child where they um they felt that he was dyslexic and he wasn’t getting on and they had aspirations for him doing A levels and going on to university and actually aspects of his ability would have made some of the expectations that parents had probably a difficult route for him to follow. Um but with the SENCO there as well saying well there are these other options and you know you might want to go and look at this college that offers this which wouldn’t be exactly the same because I think they were talking about something like engineering or um being an accountant or something like that I can’t remember exactly what it was but something that would have quite a high um A level requirement to get into. And yet some of the things they were saying and the boy’s strengths so while that is towards his strengths but maybe that’s asking a bit too much of him but there are other things like this, that and the other that would be similar but maybe would give him more satisfaction. So the school was able to give some resources so they went away in a way with some stuff in their hands and there were some leads to follow up. I haven’t heard from them since either so their immediate verbal feedback was positive and then the fact that there
Happiness/satisfaction – working with parents to define needs and what would make them happy works very well
Feedback
Hearing v seeing and feeling
Felt successful because she was told it was useful
Describes shifts in thinking as a positive outcome, although tentatively
Satisfaction – for whom? Needs of the child v needs of the parents
Lack of further concern a positive indicator
I
152 - 158
I
220 - 226
L
48 - 57
hasn’t been any further concern to say oh we’re still really worried about him.

even in that scenario they knew what had gone wrong I suppose it was still addressing what they could do and there was probably enough information around for them to know what to do. Um … but actually I think for that child um they’re going to put him forward for a statutory assessment because of his level of ability so some of the dissatisfaction that is there is also because the resourcing that they’ve got doesn’t really meet his needs I think (coughs) which isn’t going to be resolved in and of itself in a consultation because they just need the resources to follow up recommendations really. But the teacher was definitely negative later on… um. Not about the consultation itself, but just about the child so obviously didn’t feel that there were action points in place to address those things.

We were talking about the fact that even though you are part of a team, it’s a bit like being a teacher and going behind your door and not really always being able to evaluate your own contribution to the job. I suppose in terms of, it’s not just you know am I better than you or as good, but actually knowing that you’re doing a good job and knowing whether there are other skills that other people have got that actually would be really useful to know but because you kind of get into your little rut of you know these are your schools and you build up a relationship and you build up a working relationship but inevitably that becomes relaxed and probably a bit sloppy sometimes that have other people there that can challenge the rigour of the way that you do things and reflecting on your own work and using other people’s expertise. I don’t think that we necessarily do enough of that in the team.

Well I suppose again that’s a tricky one because it depends how you measure best. Um. For me I suppose if I’m looking in terms of job satisfaction, as being a measure of best. Um, so around enjoyment and around feeling like I’ve been valued, I would say that staff training has been my best [ … ] Me training school staff in a variety of different areas. But I would say that’s probably, that’s been the most rewarding for me because not only does
Role of negative feedback – what does it express? Does negative feedback or poor outcomes reflect on the value of the work? The process may still have been helpful What are they dissatisfied about? The system or the EP? Do they not feel supported or do they just want more resources? Where does the negativity come from? Is it a measure of the work or a comment on the system? Impact of anxiety Peer evaluation/reflection Can’t always evaluate our own contributions – help needed from the team Can we ever actually know if we’re doing a good job? Recommendation for PEP – more peer supervision?
Measure – satisfaction (job satisfaction) Lots of qualification – it depends on the situation and how you define best How do you define best? Enjoyment and feeling valued give job satisfaction Rewarding = best because preventative, enjoyable and
(Lily, ll. 72-76; Lily, ll. 100-105; Lily, ll. 111-116)
it have a preventative element in terms of equipping staff to deal with problems before they arise, in certain circumstances, but it also it’s enjoyable because I enjoy it. And because I feel that it’s something that I’m good at and I get a lot of positive feedback for it and that I guess is measurable in terms of questionnaires that are filled out and that kind of thing. So that, I would say, is probably for me the best aspect of this work. Um. It’s not the only one though but that’s the thing I would rate as probably the most rewarding and I guess I’m measuring rewarding as being best.
I’m good at it – have received positive feedback Lots of personal references – personal view/definition of ‘best’
And for me that effected a lot of change because most of the staff who attended that and they’ve had a very low turnover of staff, actually you can see evidence of those strategies still in place within the classrooms now. So the knowledge hasn’t been lost and they haven’t sort of stopped trying those things and so that’s been really good for me to see but I also the feedback from them again has been that those are very useful strategies for them and therefore they keep using them. So the evidence for me is in the fact that they haven’t put it to one side and said that this is a waste of time. I was sitting on the knowledge so [ … ] there was lots of cross over because of the fact that I had the family history and I had the knowledge. [ … ] I had worked with them in the beginning so we just kind of maintained so it may have petered out eventually and I’m sure it probably will anyway because I’m not going to maintain a close eye on this forever because I don’t have the capacity but um I think it would have continued for a while in terms of me being kept informed and being kept in the loop. I also know from our recent SENS SpLD um teacher advisor who’s gone into (school) that one of the first things that came up in their conversation was the SENCO telling [her] that they had had the training and that this was a training that was still considered useful and appropriate and therefore they were still utilising the strategies and that was then fed back to me via the SENS team. That the school had found the training very useful. And I
Seeing strategies still in place Bearing witness Feedback and action in evidence Waste of time – avoiding that
Knowledge of child/family attracts information and communication from other agencies Value of living it and being involved, staying in the loop Active involvement in finding out progress over time
Word of mouth feedback through different channels Qualitative information Serendipity – how reliable is this sort of feedback? How often would it happen?
(Lily, ll. 119-121; Jasmine, ll. 42-44; Jasmine, ll. 49-52; Jasmine, ll. 77-80; Jasmine, ll. 280-284)
mean one of the staff at the school, her mother works in the SEN department and I had feedback from that SEN officer that my daughter attended your training and found it still to be very very useful. So it’s come back in a number of different ways besides from the formal questionnaires done on the day. I suppose in terms of formal questionnaires that were answered, um, but it was all qualitative. You know I’m not a , I don’t think you need stats in Psychology necessarily in order to prove a point um, so qualitative feedback, um, verbal feedback… Um, and that was really very well established by the time I left and I think that was those are the things that stand out for me. I thought it was really um … they were interventions that were embedded in schools, they weren’t add ons. They targeted, they hit all my buttons, early intervention, um, working with um attachment and emotional development of children. Um, and you know schools loved it and you know were very successful really. So yeah.
it started off, it was always er the training always took place in the school, the parent group always took place in the school but initially it was two people from our team but then as they went on they trained their own people and then they were independent with it. So they managed to er as I say teaching assistants or parents who had been on the course and could be seen to be er… or teachers, that um … sometimes the teaching assistants or parents who were really successful because people would open up with them, they’re just people like us. we managed to I think as a team so we were a team and people gave whatever they could even if it was only that they agreed to come to a meeting once a month and share ideas. Um, it just made the um made the impact greater. So although at the end of the day you might be on your own going into the school or whatever, going to the training you’d got that team behind you. Um, I always think if you go into a school and you say how’s Johnny? You know who
Qualitative information (compare to numbers person)
The influence of personal values – all my buttons. Expression of success without ‘evidence’ but KNOWN – said with passion and story inspiring to me. The evidence is the established practice embedded in schools. And the schools loved it (you just know). It stands out that there is continuity and that this practice is funded by the schools themselves. Self-sufficiency as success (really successful) – people just like us (practice embedded in the school – their teaching assistants and parents). Value in participants feeling able to open up, relaxed with people they can identify with.
Team and commitment = greater impact. What it means to be a team: sharing and commitment, being behind the individual.
Effect of this on evaluating EPs’ practice – if the school owns
(Jasmine, ll. 52-57; Jasmine, ll. 64-68; Jasmine, ll. 242-249)
everyone’s been terribly worried about and they’ve been telling you they’re going to exclude and they say oh he’s much better. But you know it was nothing to do with you (laughs). Not only are they saying that there’s no panic anymore and the kid’s OK, but they’re also saying we’ve um, … we’ve done something effective there. So they’re owning it and they’re not saying um, it’s only while you’re involved, they’re saying we’re independent with this now. We’ve learned whatever, your intervention has done or not done it has left them feeling confident to manage the situation. Success! (laughs). Yeah. one of the things, the other thing that was really nice about the cascade group was it was a multiagency group so we had um a social worker for example who trained um and really had some wonderful stories, she trained a group of, she was doing this parenting group, this group of parents, all of whom had had children removed and one of the parents had a child removed during the course. And um and they were really really keen to come, so much so that um when she had to cancel one because of snow they were then kind of rang up and said, oh you are going to do another one aren’t you, it’s not going to stop is it? We can get there through these mountains of snow. So that was um what a tribute to her but a tribute to the materials really as well. So, yeah. That was good stuff (laughs). I had one experience where I was in a school that was doing this and I was just walking through the dining hall at lunch time and a little boy I knew… no a little girl was crying, she was sitting next to a little boy I knew who was you know, who could be quite difficult. so and the language was fantastic because I was just able to say, oh dear someone’s given you a cold prickly. And she said, she quoted this little boy and that just enabled me to say oh what do we have to do now? What can we do to give her a warm fuzzy? and you know it was language that we all knew and seemed much much better than telling him off in some way. Um, you know inviting him to put it right with vocabulary that he understood. Um. Yeah, it was good (laughs). I suppose what we made, what I always felt
the success then how can the school see that the EP was successful, especially if this ownership is a measure of success to the EP… Success = no more panic, the school is more confident and owns the intervention/success.
What does this story/tribute say about the facilitator and the materials? A wonderful story – does this make me feel that it was good? Will others feel this says it was good? Commitment from the participants in spite of adversity = good stuff.
Another story, this time about her own experience/work. Use of words like ‘fantastic’ – shows how much she feels it was good – as does the laugh! Happiness and joy being expressed because it was good (this is what this laugh felt like). Shared language, evidence for embedded practice in the school (link to self-sufficiency).
Emphasis on feelings – feeling
(Jasmine, ll. 263-265; Jasmine, ll. 183-185; Jasmine, ll. 190-194)
that we made her feel … that she was heard, that her problems were taken seriously. Um, and that she had a right to er and her child, um, to be in that community. And I suppose the biggest um, one of the things that it did was it take away the stress that the school were feeling, small school. Which always means that people have got more jobs than if you’re in a bigger school because you have to have someone in charge of language, someone in charge of numeracy, head’s doing lots of things and the head got very stressed by this woman and how she’s going to handle it. So it took that huge stress off the staff. Um, and once you’ve shown someone a pattern, of how you could work with things, then … I’m a great believer that knowing what you’re going to do removes stress because you’ve got a pattern to deal with it next time. So, yeah. (laughs). And he did survive, you know, he did go to another school and he never got into special ed or anything like that or got statements or anything. As far as I know. But when you leave you don’t know do you? (laughs) it’s quite interesting to think of what you’ve learned and then think well how can I apply it? What, um… or is there something that I already know that would apply because I suppose one of those things that I always feel is, um, that whatever people bring to you as a problem, then you really get that in XXXX, is the problem the quantitative success was important for me, I was scared to do it but it was important that if we put all that reading intervention in and we hadn’t made a difference then we needed to know that. So that aspect of the quantitative was very good. But um, so those kind of quantitative, if we’d had great feel good factors and at the end of the day there’d still been exclusions and they still didn’t have any statements or that half the school had been statemented which would have been dreadful and no-one had improved in their reading then that wouldn’t have been so good but the qualitative things um, … were very good, seeing um, … teachers maybe who ‘d felt the full weight of all the negative things that’s had happened in the school blossom and be um … teaching in a
it worked well, and feelings being targeted for change, specifically ‘stress’. Impact from being heard and having your problems taken seriously. Evidence from child’s outcomes – or lack of problems: he survived. Knowing through the grapevine, until you leave (link to attachment???) Description of containment as ‘knowing what you’re going to do’ – does this link in with EP practice? A way to decrease EP stress too? (tentative) How long until impact seen? Survived up to time of leaving – at what point may a change/lack of change be attributable to EP intervention? How long the wait?
Reflective practice – learning from experience + professional knowledge.
Values quantitative measures as well – need to know that move has occurred. Reading is measurable.
Feel good factors versus numbers. Value in looking at both…
(Jasmine, ll. 205-210)
COMPLEXITY
(Violet, ll. 37-38; Violet, ll. 105-114)
more creative way, or just coming up with ideas. That was great. We didn’t do, we didn’t do a formal evaluation, where we did do a formal evaluation of the nurture groups and someone in our team who had been a researcher before he um … before he was an EP, undertook an evaluation. We looked at um, … we certainly looked at qualitative evaluation in terms of um, parental comments, children’s comments, interviews with the teachers. But we also looked at quantitative outcomes on, we used, we looked at things like attendance, um, children’s, how many children … um children’s progress on the special needs sort of areas. And we looked at their score on an SDQ, before and after. So um… so we did a mix there which was good. Um, … yeah. So I think that was some of the main ways we evaluated the … that school. Yeah. That you’re able to actually go in and really um build a relationship with parents, build a relationship with teachers and with the child and make a difference on many different levels, not just the single scholastic level It’s broadening the perspective again to other family members, to … to home conditions, all of that having impact on the child and how the facilities in the home, amount of stimulation the child has… um, all of that having impact. I: OK, so um, (coughs). How did you know that that’s been useful for your practice? P: … I think the relationship between um, myself and the clients, it’s enriched that relationship. Of being able, and it also has allowed parents and carers to be more relaxed in their own home environment and allowed to be more open so I think that it has helped the relationship definitely. And it’s also a allowed that wider perspective to also come into the schools, so it’s that communication between home and schools, so self as a link between home and school and to bring it together in a meeting so that there is more openness around the child and then more support around the child, obviously always with parental consent but um… So that, that has been helpful so actually the
Formal evaluation versus informal evaluation. Value of mixed method approach – qualitative and quantitative measures. Quantitative measures used = standardised (e.g. SDQ).
Relationships important
Talking about complexity within the role as well Complexities of the systems having an impact on a child and the need to work with those systems, rather than ‘just’ the child usefulness of wider perspective – relationship building and the impact of that openness and communication as helpful relationships parallel to communication
(Rose, ll. 157-159; Rose, ll. 171-192)
building of relationships and communication channels all round. And being aware of other factors that could be impacting on the child, um. That you weren’t aware of before. It’s not just that they can’t spell, you know, it’s a bit broader than that. there does seem to have been a marked change in all of them but it is difficult to work out what exactly is the mechanism of that change as well and I don’t know sometimes I’m not sure that it is always the actual therapy all I’d done really was add the attention and then mum suddenly was saying well actually everything’s completely he’s fine now he seems quite happy going in to school and there was something about the fact that she’d sought help I think and that there was some somebody was addressing the difficulties, that seemed to relieve the anxiety … yeah. it was an interesting mechanism and that so she feels that everything is fine now I’m not completely convinced it’s interesting this trying to capture what is the mechanism of change what do you actually do and I think sometimes you need to have a clear sometimes there are very clear strategies you know and there’s a clear you need to have a clear rationale for what you’re doing but sometimes also I think there’s something about the relationship and something about …. um … yeah feeling like you’re holding something at a time when they can’t cope, that then helps them and you can kind of give it back when they feel like OK actually this isn’t it then gives if you then hold the anxiety it then gives them perspective to see that maybe things aren’t so bad and then they can take that back and carry on. I don’t know. [ … ] in both those cases I gave previously I feel like actually I’ve used my educational psychology, my psychology really, there in terms of making an impact in terms of helping them think about it. In some cases where it does feel more like almost … in some cases this is the interesting thing about working in a multiagency team I’m not necessarily convinced that it needed to be me as an educational psychologist that it probably just needed to be someone to support and to be
Mechanisms for change – identifying those and appreciating that it is not just about measuring that there has been change Change – what is it, where does it come from? Happiness a criterion of success for mum Mum and EP’s views differ – EP not convinced that all is fine but mum feels this is so A plan is needed to help to see mechanisms for change Relationship, holding anxiety – this gives a clearer perspective of the problem Empowering? Impact seen as being able to help people think about the situation What is involved in using psychology? What does that mean? Does impact equate to the psychology or something else? Qualifying language… Impact on MEASURES – thinking and perspectives
(Rose, ll. 40-45; Rose, ll. 299-305; Rose, ll. 428-238)
on their side and to kind of empower them to attend meetings and to think about what was going on and feel confident to speak out. Um… so that maybe would I would say in cases like that I might make a positive impact but um that wouldn’t necessarily need to be that’s not necessarily through using psychology the more work that I’ve done directly therapeutic work that I’ve done directly with children the more I’m kind of acutely aware that unless you involve the adults it’s not necessarily going to, you need to involve the adults in some way because the children are very much still a product, they can’t control what’s going around them as adults so I think that’s why I kind of became interested in how can you involve adults in this and what work can you do with parents to get them to support the changes the children are making cause otherwise the children can change all they want but if the parents aren’t changing with them then it’s not going to make much difference I think life’s a bit more complicated and messy than specific targets but I think if you have got a few things to focus on then that helps weave your way through the quagmire of general er just because often a lot of the families that we work with there’s very complex things going on aren’t there and um sometimes you can’t make all that better sometimes you almost need to focus on just to feel like you’re getting somewhere I think it’s really useful to have some idea of this is what I’m working on to feel like you’re actually making progress because otherwise I think you just get lost in the quagmire of all the massive difficulties and complex family history and kind of dynamics that are going to be going on there’s only so much that you can do about that I think the reason that I wouldn’t have done them previously would be because I would think that well I’m going to wait all this time and they’ll say I don’t know and they’ll feel uncomfortable. But actually from trying it I’ve discovered that I’ve waited all this time and actually they come out with something that’s quite good and maybe they needed that time to get up their guts to actually say it so um it’s
Children are not ‘the’ problem – systemic view Children are a product of their environment Complexity of the system, working with the system, changing the system – complexity of outcome and change and measuring that…
So messy, complexity like a quagmire – great imagery… stuck in the quagmire Focus needed to be able to see success There is only so much that one person can do Still about feeling progress Although has stated she’s a numbers person she still refers to feelings a lot Life v targets
Trying things out, being surprised, having assumptions challenged Not making assumptions, being open to trying things Seeing what ‘actually’ happens Evaluation as lip service or something quite
(Rose, ll. 504-513; Daisy, ll. 196-198)
trying it isn’t it and then being surprised by the result. Yeah being surprised by the result and then the result has challenged my assumption of what I thought would be the outcome. It’s the same as doing, I think I felt like giving people, asking people to evaluate what the session had been like for them, whether verbally or sort of through a more formal sheet. I felt that would be a bit paying lip service to we should evaluate things at the end. And it would feel a bit trite because you know I think I had a perception in my head that it might be a bit meaningless they’d probably just tick something to please me and I’m just doing it to tick a box to say I’ve evaluated it and if there’s no meaning what’s the point of doing it? But then when I’ve done it and actually something quite useful and meaningful has come out of it then that’s challenged that assumption that it’s not useful so that’s made me think OK it’s quite a good thing to do I guess. Yeah. I think it is when doing it then the result challenges your assumption of the reason why you weren’t doing this previously. the more I’ve worked alongside health, say clinical psychologists in health. They do a lot more assessment and evaluating and they’re a lot more rigorous about it. There’s a lot more systems there to support them because actually I think that making assessment part of kind of what you do is almost how do you keep the records because I’ve done lots of assessments and made interpretations from that but then what about the monitoring and following up of assessments. [ … ] How do you then work out have you made change? [ … ] I think that monitoring and follow up and measuring what we have done to make a difference I’m not sure how good EPs really are at that and I think that’s probably because it is quite a tricky thing to measure especially in what we do because there are so many different factors it is very difficult to say this was down to us but there must be a way of measuring… So there’s all sorts of things going on with this child, all sorts of people involved in an effort to keep him in school. And I suppose another measure will be if he manages to stay in school (laughs). So far it’s an hour and a half a
meaningful? Ticking a box is meaningless – making evaluation meaningful, making sure there is a point in doing it. Feeling does not always match the evaluation (said later).
Being rigorous needs a system in place to support the monitoring like in health EPs aren’t quite there yet – we don’t have such a system but also how do we know that we have made the difference? The complexity of both indirect working and also the numerous factors affecting children /schools / families etc Many factors therefore tricky
What is the message if this is not achieved? Complexity and negative outcomes – where is the negativity? The effort hasn’t
day.
(Iris, ll. 110-118; Iris, ll. 171-177; Iris, ll. 193-202)
I think something happened in the family dynamic that having had the consultation everything kind of went pear shaped again so what you had addressed in the consultation didn’t actually then meet those presenting needs. Not that another assessment would have been any better from that point of view probably. [ … ] there was other stuff that needed addressing which included the parents. And that was alright when the parents were doing what they were supposed to be doing and you know working together with the school but when a spanner got thrown in the works it was like mmm so that didn’t feel such a satisfactory piece of work because it probably didn’t address problems because it was actually going quite well when I got involved. there was behaviour that needed managing and I think when things were better at home you know some of that behaviour you know stopped quite a lot and I think they managed to reduce it in school and then when um I think dad went off one time and dad had been erratic in the home and dad just went off for the weekend it just kind of all flared up again and he hadn’t realised the impact of him coming and going just even for a weekend without any explanation to the child what you know what that was having on him. Um. So it kind of went pear shaped for a while because I guess he was anxious that daddy was going away or um he wasn’t going to come back. … So I guess they just saw behaviours that they’d had under control and it flared up and I guess that felt outside the school’s control to manage because it was actually factors outside of the school that was driving it actually. I mean I think the cases that we’re, that come our way now are more complex … um … and whether that’s just because they’re handling, the schools are handling children that we might have seen before … um themselves. and perhaps I don’t know, perhaps some of those more complex kids used to get excluded more. I don’t know. And obviously the population of children there’s the medical
worked or is there just too many things going on to sort out? Systems (theoretical input) – systemic impact Shifting needs, context inconsistent Impact is subject to external influences Complexity of impact and problems Direct work and possibilities – how to effect change when other parties don’t do what they are supposed to. Here and now snapshot v changing circumstances and problems. Judgement made about this piece of work.
Managing behaviour – managing as a school especially thing outside of their control Impact of parents’ actions Home instability → anxiety → behavioural problems Complexities of the systems around a child and the impact on outcomes and monitoring these Beyond their control
Complexity of cases now seen, a possible explanation – medical advances and inclusion policies More complex cases are more challenging, less containing and less happiness Very tentative language –
(Iris, ll. 232-238; Lily, ll. 40-44)
advances in terms of children, prem babies and all that kind of stuff, being so much better. It means there’s children coming into the education system with more complex needs probably. Probably some of those other children that were um posing challenges in mainstream schools would have been in special schools as well before. So I think that is probably why … why we’re having to deal with those challenging children because they’re probably struggling with mainstream and the mainstream teachers are struggling to hold them whereas perhaps they would have felt more … I don’t know, contained and the children might have been happier in a smaller group rightly or wrongly in terms of the label they had whether that was right but maybe being in a smaller class felt better for them. I don’t know. … So some changes can be, are probably good, some are more challenging. we’ve done quite a lot of talking around the issues for example around Down’s syndrome, and um parental expectations and what does inclusion mean and why are we doing this and why are we responding to parental wishes when they’re in a state of denial, you know some of those big kind of meta issues that in a way on an individual case you think are going to be resolved but you know just the discipline of having those conversations challenges your thinking but and actually how useful that is to talk to other people that’ve had tricky cases like that as well and whether we should do a bit more case sharing and you know almost do it like a master class, this is a case I’ve got have you had something similar. So I think that sharing practice is probably in that kind of peer supervision kind of way would be really quite useful. different types of change depending on the situation. Um, sometimes it’s about looking at things differently, sometimes it’s about helping staff to work in a different way, sometimes it’s about helping children to behave in a different way, sometimes it’s about helping families to make changes to the way they see things or do things. So I think it’s just about trying to effect change within a variety of different systems. Um and I think that’s what we bring to the table. Is by coming into the process we effect change
conflicting with dominant ideology of inclusion? Thinking about happiness above inclusion Teachers are struggling and so are the children Happiness and containment – link for both children and teachers and the impact on each (emotional aspects of learning)
Inclusion Use of sharing practice and talking through issues with colleagues – they are big and meta… Role of peer supervision
Difference and variety It depends… Helping a key word What helps and what changes is situation dependent Systems
(Lily, ll. 128-150; Jasmine, ll. 324-331)
within systems. Um. Yeah. So it’s where that change takes place I think is very situation dependent. I think it’s more tricky getting that kind of feedback on individual cases because it’s a child, it’s a person, it’s difficult to measure, you know and it depends on what outcome you want but whether the outcome you want is the same outcome that a parent wants or a school wants so you know successful outcome is very difficult to just you know you’ve got to agree first what it is that you consider is successful. In our job sometimes what we think is in the best interests of the child is not necessarily what the school thinks or the parents thinks and you’re not always going to agree on everything You know you can try to find common ground absolutely but you know, … you might think the best, it’s in the child’s best interest to be in one place and they might think something different. And you know in the end how do you measure whether it’s a successful outcome? If the parents are happy, if the child’s happy, if you’re happy? You know, how do you measure that? Everybody wants something different sometimes, so it’s not always an easy thing to just quantify. [ … ] I think it depends, it’s different for every individual piece of work that we do because we do so many different things. [ … ] So every piece of work you do has a different goal, different outcome, different measure of success. I think it’s very difficult to evaluate the practice. I don’t think that means that we shouldn’t do it. I think that we do have to do it. But um I remember being at a conference um, recently in London, um, I might have been at UCL and whoever was introducing it said, um, that something that had been distilled from a lot of parents was that what parents say is what EPs do is help us make sense of our children. And if a parent said to me you’ve helped me make sense of my child I would be content. I think that’s … we can’t take away autism, we can’t make a child with special needs, we can’t make that not have an impact on their lives. But we can help parents understand and therefore be able to um, … interact with their child you know in a different way. And I hope we can do the same
Difficult to measure change for children because they are people (ie infinitely complex) Success is dependent on many factors Emphasis on happiness, question is whose? And how do you measure that?
Very difficult to evaluate practice. Importance of parents and teachers being able to make sense of a child or his/her difficulties and also understanding how the child makes sense of his/her experiences. This is a personal priority. Contentment with outcomes a measure of success – specific outcome of ‘making sense’. Limitations of the work – can’t take away the additional needs, but can help in a different way. That’s what I think.
with teachers. … So that’s what I think. We make sense, well that was what um, … I can’t remember whose phrase that was, was it Vygotsky? How does this child make sense of the world? And if in the process we can help someone see the, I mean give them insight, help them to make sense. That’s OK (laughs).
JUDGEMENT
(Jasmine, ll. 91-96)
(Jasmine, ll. 111-113)
(Jasmine, ll. 140, 149-152)
what had happened to the school was it had gone into special measures, they’d had a head come in who had worked extremely hard and got them out of special measures and then felt absolutely exhausted and left without a job to go to, I don’t know what happened to her afterwards. The deputy was acting head and then they’d just got this new head in and just before the new head came they were back into special measures. So, and there were there’d been some staff who had been through that, right through that. So it was gruelling, absolutely gruelling. And they felt that they were given loads of help to get out of special measures and as soon as they were out that help was removed. And they were, they were back in it. Um, so it was really that the confidence of the staff I felt that had been … taken away. it was certainly being prepared not to challenge what they were saying. I think if I’d have challenged and said, look, I don’t think it’s, why do these kids lack confidence, these kids lack confidence because you’re lacking confidence. Then I think you know we’re dipping on the high road to nothing, I would have been another one who um, who’d you know who was blaming them. we were really um throwing everything we could at these children’s reading And on one occasion I was, the school were in Ofsted, this was their Ofsted to see how they were doing and I was down to teach, work with the teacher but lead on this anger management game that we were going to play with the whole class. And they told me the inspector was going to be in for that and I thought teachers do this, I ought to be prepared to do it so I
Story of what happened to a school when judged to be in special measures. Impact on staff and the experience being absolutely gruelling and destructive (head left). I wonder if this links to evaluation generally – feeling that evaluation is much akin to judgement. What parallels with EP work and being evaluated? Strong identification with staff’s experiences and some hesitation while telling the story (difficult?) – empathy or shared experience (tentative). Confidence taken away.
The result of the judgement experience – lack of confidence, depletion of personal power (see innovation, ll. 100-1). Cascade from staff to children.
Impact of failure – an absolute value (fail or not). Blame and fault. Ownership of failure…
(Jasmine, ll. 155-156; Jasmine, ll. 183-185; Jasmine, ll. 186-188)
did it. When I told my boss about it he said oh my god you shouldn’t have done that. If you’d failed, the whole school would have failed and it would have been you who had made it happen. But I didn’t (laughs). that school had had again had had people going in saying, this is terrible, it’s got to improve, the results, got to deliver da da da da, and what threatened us um it closed down our creativity. the quantitative success was important for me, I was scared to do it but it was important that if we put all that reading intervention in and we hadn’t made a difference then we needed to know that. So that aspect of the quantitative was very good. But certainly there’s a tradition of being quite um, … there can be quite a lot difficulties between the inspectorate and EPs and EPs think well actually we know a lot, we probably know more than you do about the way in which children’s learning develops and we’ve got lots of things to say about the curriculum, we’re not just there to go into a corner and assess children who haven’t done very well.
Judgements close down creativity. Judgement is a threat. List of demands – got to… da da da da (so much). The results of quantitative measures are scary because they imply judgement and the possibility of failure.
Strength of knowledge and confidence in own expertise. Others’ understanding of the role of the EP. Difficulties between EPs and the inspectorate – inspectorate as who? Government? Those who judge? When we know more…
8.9. Appendix 9: Excel spread sheet of all of the responses to all of the questionnaires. 3 = ‘very strongly agree’, 2 = ‘strongly agree’, 1 = ‘agree’, 0 = ‘don’t know’, -1 = ‘disagree’ and -2 = ‘strongly disagree’. No participants ticked the box ‘very strongly disagree’ for any of the statements.
Each statement is followed by the eighteen responses, Participants 1 to 18 in order; ‘x’ indicates that no response was given.

1. Somebody says, ‘That was helpful’ or ‘I got a lot out of that’.
   2 2 2 2 2 2 2 3 3 3 3 3 2 1 2 2 1 2
2. Things have visibly changed.
   0 3 3 2 2 2 1 2 1 1 1 3 2 2 2 3 2 3
3. There are stories about change.
   -2 2 1 1 -2 2 1 1 2 2 1 1 1 2 0 3 0 2
4. Targets are set making the work specific.
   0 1 2 0 -2 2 1 1 0 2 1 1 0 2 1 2 3 2
5. A tool is used to measure changes in the perception of the problem.
   0 2 1 1 2 -1 -1 1 1 2 2 1 0 3 1 1 0 3
6. There is a record of parental comments, children’s comments or teachers’ comments.
   1 1 3 2 -2 2 -1 1 1 1 3 0 2 1 -1 0 2 2
7. Change is monitored in an aspect of learning using standardised before and after measures.
   2 2 1 1 3 -2 1 1 0 1 3 1 0 2 2 2 3 3
8. Progress is monitored against Individual Education Plans.
   1 2 2 -1 0 2 1 1 1 1 2 1 -1 2 1 1 2 x
9. I think that whatever we have agreed upon is having some effect.
   1 -1 3 1 -1 2 2 1 1 2 2 1 1 2 1 1 1 x
10. I can see evidence of strategies still in place within the classroom.
    1 0 1 -1 -2 2 0 0 0 1 1 1 -2 -1 -1 x -1 x
11. Goals to be achieved are agreed and progress towards these is monitored.
    2 2 1 1 2 -2 2 2 1 2 3 1 0 2 2 1 3 2
12. Scores on any recommended before and after measure improve.
    1 3 3 1 -1 2 2 1 2 1 1 2 1 0 1 2 x x
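The summary frequencies reported in Figures 14 and 15 can be recomputed from the raw responses above. The following is a minimal illustrative sketch, not part of the original analysis: it assumes the responses have been transcribed into a Python mapping exactly as tabulated (only two statements are entered here by way of example, and None stands for a cell marked ‘x’).

# Illustrative only: recomputing 'agree' v 'disagree' frequencies
# (cf. Figures 14 and 15) from the questionnaire responses above.
# Only two statements are transcribed here by way of example;
# the remaining ten rows would be entered in the same way.
# None stands for a cell marked 'x' (no response).

responses = {
    "Somebody says, 'That was helpful' or 'I got a lot out of that'.":
        [2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 2, 1, 2, 2, 1, 2],
    "I can see evidence of strategies still in place within the classroom.":
        [1, 0, 1, -1, -2, 2, 0, 0, 0, 1, 1, 1, -2, -1, -1, None, -1, None],
}

for statement, scores in responses.items():
    answered = [s for s in scores if s is not None]
    agree = sum(1 for s in answered if s >= 1)      # 'agree' or stronger
    disagree = sum(1 for s in answered if s <= -1)  # 'disagree' or stronger
    dont_know = sum(1 for s in answered if s == 0)
    print(statement)
    print(f"  agree: {agree}, disagree: {disagree}, don't know: {dont_know}, "
          f"no response: {len(scores) - len(answered)}")

Counting any positive rating as ‘agree’ and any negative rating as ‘disagree’ follows the combined comparison suggested by the title of Figure 15.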
8.10. Appendix 10: Feedback from Interview Participants

Email from ‘Rose’:
… it’s all really interesting. I like the different themes you have brought out, it’s all very relevant and reflects the complexity of EP work. It’s funny reading the transcripts of what I have said … Interesting to see how this fits in with what others have said and the interpretations to your themes. I would generally agree with all your interpretations of what I said. The only thing I might slightly disagree with in terms of your interpretations of my comments is that I would still use a simplified version of the GAS with parents - but just in the form of a target and a basic rating scale, without the laborious specification of the meaning of each number on each scale which I think parents didn't see the meaning of and disengaged because they couldn't see the point. I think there is still some value in the tool, it just needs to be adapted to be meaningful to parents. Other than that, I think it looks really good and must have taken so much work to go through it all. Good luck with putting it all together ….

Original commentary: However, Rose has found that setting the goals for the GAS may be a ‘meaningless’ exercise for parents who may find the process ‘laborious’. Although presented as a criticism of the GAS in particular, Rose’s reflection on ‘meaning’ is essential when thinking about evaluating work generally, and is of course the reason for this research. In spite of ‘quite liking’ the GAS, it is very unlikely that Rose will use this measure with parents because she thinks that they find it ‘meaningless’.

Adjusted commentary: However, Rose has found that the setting of numbers and specifying individual goals for the GAS may be a ‘meaningless’ exercise for parents who may find the process ‘laborious’. Although presented as a criticism of the GAS in particular, Rose’s reflection on ‘meaning’ is essential when thinking about evaluating work generally, and is of course the reason for this research. In spite of ‘quite liking’ the GAS, Rose is likely to use this measure in a different way with parents if she thinks that they find it ‘meaningless’.
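For reference, the GAS Rose mentions is Goal Attainment Scaling, in which each goal is conventionally rated on a five-point attainment scale and the ratings are combined into a single T-score; the ‘simplified version’ Rose describes keeps the target and a basic rating scale but drops the formal scaling. As a sketch only, the conventional summary score (Kiresuk and Sherman, 1968; whether any service discussed here used this aggregate score is not stated) is

$$ T = 50 + \frac{10\sum_i w_i x_i}{\sqrt{(1-\rho)\sum_i w_i^{2} + \rho\bigl(\sum_i w_i\bigr)^{2}}} $$

where x_i is the attainment rating for goal i (from -2, much less than expected, to +2, much more than expected), w_i is the weight given to that goal, and ρ is the assumed correlation between goal scores, conventionally taken as 0.3.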
Written comments from ‘Violet’: I found this so interesting to read Cath. To me the power of being able to make direct, creative intervention contributes a great deal to role satisfaction. Your themes are very relevant.
Verbal comments from ‘Iris’: Iris felt that the findings were ‘OK’ and primarily spoke about the discomfort of seeing her words in writing. She asked me to remove a possibly identifying statement to ensure the anonymity of a child.
8.11. Appendix 11: Proposed Practice Evaluation Form

Practice Evaluation Form (Proposed)

Educational Psychologist: _______________________________________________________________
Date work commenced: ___________________
Date form completed: ________________________
Description of work: ___________________________________________________________________

Planned goals from beginning of work:

Recorded scores and/or other evidence showing change towards planned goals (two or more):

Feedback about the additional impact of EP involvement:
Child/Young Person’s comments:
School’s comments:
Parents/Carers’ comments:
Other comments:
EP Reflections and Self-Evaluation (consider the meaning of your involvement):
Adapted from Turner et al.’s (2011) Casework Evaluation Form (p. 329), with permission.
8.12. Appendix 12: Contents of CD-ROM
Daisy Annotated Transcripts
Daisy Interview Transcript
Iris Annotated Transcripts
Iris Interview Transcript
Jasmine Annotated Transcript
Jasmine Interview Transcript
Lily Annotated Transcripts
Lily Interview Transcript
Rose Annotated Transcripts
Rose Interview Transcript
Violet Annotated Transcripts
Violet Interview Transcript