NIN_146.fm Page 141 Thursday, August 22, 2002 7:50 PM
Nursing Inquiry 2002; 9(3): 141–155

Special article

Methodological strategies for the identification and synthesis of ‘evidence’ to support decision-making in relation to complex healthcare systems and practices

Angus Forbes and Peter Griffiths
Primary and Intermediate Care Section, The Florence Nightingale School of Nursing and Midwifery, King’s College London, London, UK

Accepted for publication 12 April 2002
FORBES A and GRIFFITHS P. Nursing Inquiry 2002; 9: 141–155
Methodological strategies for the identification and synthesis of ‘evidence’ to support decision-making in relation to complex healthcare systems and practices

This paper addresses the limitations of current methods supporting ‘evidence-based health-care’ in relation to complex aspects of care, including those questions that are best supported by descriptive or non-empirical evidence. The paper identifies some new methods, which may be useful in aiding the synthesis of data in these areas. The methods detailed are broadly divided into those that facilitate the identification of evidence and those that enable the interpretation of the data retrieved. To illustrate some of the issues involved, reference is made to a multimethod review recently completed by the authors, which aimed to identify factors that promote continuity in the transition from child to adult health and social care. It is argued that as healthcare organisations are becoming increasingly preoccupied with the evidence base of practice, such methods may help ensure that aspects of care and approaches that are outside the dominant pharmaco-medical domain maintain a prominent position on the healthcare agenda while remaining open to external scrutiny. Healthcare professionals who use such approaches need to know their relative utility and benefits to inform clinical decisions, so as to ensure that best practice is observed.

Key words: evidence-based health-care, realistic evaluation, synthesis, systematic review, tracer conditions.
Health-care is an increasingly complex and diverse field of human activity, with new technologies and approaches continually being developed. Practitioners of all disciplines and healthcare organisations are under pressure to accommodate innovation while simultaneously ensuring that their current practices are effective, safe and efficient. Increasing consumer awareness of health issues and their
Correspondence: Angus Forbes, King’s College London, The Florence Nightingale School of Nursing & Midwifery, Research in Primary and Intermediate Care Section, James Clerk Maxwell Building, Waterloo Road, London SE1 8WA, UK. E-mail: © 2002 Blackwell Science Ltd
expectations of health-care augments this pressure. However, there is evidence of a widespread failure to respond to emerging evidence, demonstrated by: a failure to adopt new modes of care even though they are supported by sound evidence; and the continuation of practices that are demonstrably ineffectual or potentially harmful (Sackett et al. 1998). Individual practitioners and service organisations need robust systems to help promote the quality, effectiveness and efficiency of the care they provide (Griffiths 2002). Evidence-based health-care (EBHC) and its associated techniques aim to provide such a structure through a range of activities designed to ensure that practitioners utilise current best evidence in their practice (Muir Gray 1997). The
healthcare systems of many developed countries have an infrastructure to support the identification and dissemination of evidence and as a result EBHC has had an immense impact on clinical practice, particularly medicine (Greenhalgh 1997). EBHC is clearly the dominant paradigm in the United Kingdom (UK), where it has become a key determinant in the allocation (some might say rationing) of healthcare resources (Griffiths 2002). However, a number of limitations of the current methods used within EBHC are beginning to emerge. EBHC has often been accused of focusing exclusively on evidence from randomised controlled trials (RCTs) while neglecting other materials, such as those generated by qualitative research or indeed by the experience of practitioners themselves (Kippax and Van De Ven 1998; Pawson 2002). This impression is compounded by widely cited hierarchies of evidence that place systematic reviews of (good quality) RCTs squarely at the top, followed by (good quality) individual trials (National Health Service Centre for Reviews and Dissemination (NHS CRD) 2001a) with little or no attention being given to that which cannot be quantified. Even the more extensive hierarchies of evidence only recognise empirical questions and place expert opinion firmly at the bottom (see for example, the Centre for Evidence-Based Medicine ‘levels of evidence’, Phillips et al.), although this apparent preoccupation with RCTs is largely explained by the pre-eminence given to questions of effectiveness and accuracy, which are best addressed by such methods. The preoccupation with the synthesis of outcome-based experimental research within EBHC assumes that the best way of improving health-care is to provide definitive statements on what practitioners should do. 
Given the complexity of health-care this position would seem somewhat limited, as not all questions of relevance to health services and health practitioners have (or require) answers that are quantifiable, a fact that is increasingly being recognised by the EBHC community, as reflected in the recent broadening of EBHC topics (identified in Table 1) to include facets such as ‘experiences’ and ‘perceptions’. Thus, while it is clearly vital to answer a question about the diagnostic accuracy of a procedure using an empirical approach, alternative methods are required if the experiences (anxieties, fears, discomfort) of the person having the procedure are to be understood. Unfortunately, established EBHC techniques have difficulty interpreting or even recognising data from other areas of evaluative research where questions relate to values and experiences (Mays, Roberts and Popay 2001). Furthermore, even where the question is one of effectiveness, the methods are limited when the nature of the
Table 1 Centre for Evidence-Based Medicine topics and questions for health-care

Topics of healthcare questions: aetiology/disease causation; prognosis; prevalence; screening; diagnosis; prevention; therapy (benefits and harm); service organisation.

Potential questions: cost; value; perceptions; experience; effectiveness; accuracy; quality.
intervention is either multifaceted (e.g. organisational development) or involves multiple transactions (e.g. a health promotion programme) (Gomm and Davies 2000). There are also difficulties where the outcomes of interest are measures of subjective experience (e.g. quality of life), as the techniques for statistical synthesis are less well developed and measurement itself is more controversial. The dominance of statistical meta-analysis in determining what is permissible within EBHC has meant either that these areas are not reviewed or that the evidence within them is regarded as weak (Oakley et al. 1995). In fields of health-care such as nursing, where research activity and funding are more marginal, the primary studies required for current EBHC techniques, such as systematic review and meta-analysis, are often unavailable or of a poor quality. This problem applies to mainstream nursing interests such as wound care, pressure relief and bowel care, where there have been relatively high levels of research. Recent systematic reviews of these topics (Bradley et al. 1999; O’Meara et al. 2000; Cullum et al. 2001; Wiesel, Norton and Brazzelli 2001) have suggested that the evidence for current practice is weak. This hierarchy of evidence leaves a question mark over the worth of many of the practices examined and may mean that such areas receive less attention and fewer resources in the future (Steiner and Robinson 1998) even though the requirement is for more evidence on complex, hard to research, topics. Important areas of nursing practice are blighted by the stigma of poor evidence and subtly devalued, even though the finding is of ‘weak’ evidence, which, by definition, cannot be used to impugn practice. The failure of EBHC techniques to grapple with these issues has the potential to create a laissez-faire approach to practice, as care must go on despite the lack of an organised assessment of the evidence for what is done in many areas of healthcare practice.
Kippax and Van de Ven (1998) argued that only those interventions amenable to evaluation by the highest levels of
evidence (in the case of effectiveness, a RCT) attract funding for research and only publications adhering to the orthodox criteria of high quality pass through the peer review process and attain publication. Given the importance of psychosocially derived (hence complex) therapeutics in nursing practice and a paucity of research funding sources, this bias is particularly damaging, in effect a double whammy. With many practitioners being employed to deliver care in line with EBHC guidelines and protocols that reflect little of the wider ontology of nursing, EBHC may inadvertently become a mechanism to suppress broader health initiatives and the ability of nurses to implement them in practice. Alternatively, the limited guidance available for dealing with ‘imperfect’ evidence and the implicit rejection of non-empirical questions may in fact foster evidence-less practice by removing some topics and questions from the evidence-based discourse entirely. EBHC is not going to go away, however, and so simply ignoring or dismissing these developments is not an option, although many have tried to do so (French 2002). The principles of EBHC have much to offer all professional groups in resolving the common question: What is in the best interest of the people whom I seek to help? What is required is not therefore a rejection or demonisation of EBHC but the development of rigorous methods complementary to the principles of EBHC, which can deal with complexity and different forms of empirical expression and are acceptable to the practitioners and policy-makers who will use their output to inform their clinical decisions. The focus of this paper is on the retrieval and synthesis of evidence in complex areas of health-care and in areas supported by a broad range of research, including both qualitative and quantitative material. 
The first part of the paper explores a range of different philosophical perspectives in relation to data retrieval and synthesis within EBHC and the second part identifies specific methods that may help resolve some of the issues raised in this introduction. The paper will draw on the authors’ experience in designing and undertaking a review of evidence that aimed to identify good practice in the promotion of continuity in the transition from child to adult health and social care (Forbes et al. 2001), a suitably complex topic.
DIVERSITY IN THE SYNTHESIS OF RESEARCH EVIDENCE

It is important to remember that evidence takes many forms based on the type of evidence we seek and what we regard evidence to be. Thus, while on a superficial level the synthesis of research evidence seems straightforward (find and analyse all the ‘relevant’ materials), the process is as epistemologically complex as any other form of human inquiry involving a series of decisions. The process begins with the identification of the precise review question. For example, is the information required patient specific, population specific, intervention specific, programme specific or any combination of these attributes? Decisions about how the question can be answered follow (e.g. categorically, to a degree of probability, theoretically, contextually), which in turn dictate what sort of source materials are required to answer the question (e.g. experimental data, qualitative data, professional opinions, etc.) and the criteria selected to determine the quality of those materials (critical appraisal). Finally, a decision must be made as to how to reach a collective interpretation (synthesis) of review findings. Unfortunately, little attention is given to these issues. Indeed, all too often the framing of review questions appears to be a result of an a priori decision that the most beneficial synthesis would be statistical meta-analysis. While this decision may be unproblematic in the case of drug treatments where there is evidence from multiple RCTs, for many other interventions the outcome of ‘no evidence’ or ‘little evidence’ is virtually predetermined by this decision. It is vital therefore that we begin to develop a greater insight into the processes of determining, identifying and interpreting evidence. To open up the theoretical discourse on synthesis of research evidence we will examine three very different approaches to synthesis: the ‘positivist’ (systematic review), ‘constructivist’ (meta-ethnography) and ‘realist’ (realist synthesis) traditions. These approaches will be examined in relation to the questions of judgement involved in the synthesis process identified above:
• What type of questions can be answered by the approach?
• What sort of source materials are required to answer the question?
• What determines the quality of those materials?
• How can the materials be integrated?
• What do the integrated materials tell us?
The aim here is not simply to reinforce the well-rehearsed differences between these approaches but to examine what each offers and whether they can be used in union to enhance EBHC systems and, ultimately, healthcare decision-making.

Types of synthesis

Before examining the approaches to synthesis in more detail it is important to briefly clarify the different ways in which research materials can be synthesised. Mays, Roberts and Popay (2001) provided a useful taxonomy of research
synthesis, which differentiates narrative, tabular and statistical synthesis. In a ‘narrative synthesis’, evidence is usually presented under a series of subheadings, with a commentary relating the findings of the primary (original) studies. The commentary may indicate the direction of the reported findings, recommending or cautioning against a particular intervention or approach. A ‘tabular synthesis’ usually presents statistical summaries in tables, sometimes with accompanying graphs to show the main findings of the studies reviewed. ‘Statistical synthesis’ involves the aggregate analysis of the findings of the primary studies using statistical techniques known as meta-analysis, with the aim of improving the precision of empirical estimates. An analogous approach to qualitative findings might be an overall thematic analysis of study findings, although this might also be a guiding principle for a narrative review. We would extend Mays’ levels to include another more complex form of synthesis, ‘analytical or theoretical synthesis’. At this level a different focus is adopted that goes beyond simply reporting the aggregate findings of a given set of studies to an examination of what, in addition, can be learned from the collective application of these findings. The output from such an analysis may take the form of theoretical models, which seek to identify the relationships between the facets of a given set of healthcare interventions and the outcomes observed.
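The distinction between these levels can be made concrete with a small illustration. The sketch below lays out study-level results as a plain-text tabular synthesis; the study labels, sample sizes and effect estimates are entirely hypothetical and serve only to show the form such a summary takes.

```python
# A sketch of a plain-text 'tabular synthesis': study-level results are
# laid out side by side so the direction and precision of findings can
# be compared at a glance. All studies and numbers are hypothetical.

studies = [
    # (study label, sample size, effect estimate, 95% CI low, 95% CI high)
    ("Smith 1998", 120, 0.45, 0.21, 0.69),
    ("Jones 2000",  80, 0.10, -0.25, 0.45),
    ("Brown 2001", 200, 0.30, 0.12, 0.48),
]

def tabulate(rows):
    """Return the study results as a fixed-width plain-text table."""
    header = f"{'Study':<12}{'n':>5}{'Effect':>8}{'95% CI':>17}"
    lines = [header, "-" * len(header)]
    for label, n, est, lo, hi in rows:
        lines.append(f"{label:<12}{n:>5}{est:>8.2f}{f'({lo:.2f}, {hi:.2f})':>17}")
    return "\n".join(lines)

print(tabulate(studies))
```

A narrative synthesis would instead discuss these findings in prose; a statistical synthesis would pool them, as illustrated in the next section.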
Systematic review

‘Systematic review’ is currently the dominant approach for research synthesis within EBHC. These reviews drive many of the other mechanisms within EBHC, such as clinical guidelines. In systematic reviews, questions need to be very explicit, identifying a particular population, intervention or group of interventions for comparison, and types of outcomes. Systematic reviews can be used to answer many types of empirical question, although the majority of effort has gone into reviews of effectiveness, asking the questions ‘What works?’ or ‘What works best?’. Ideally, source materials should be homogeneous in target population, type of interventions and outcome measurement, although at an exploratory level it is possible to be less prescriptive about the type of intervention. To answer the types of question posed only empirical studies are appropriate. Systematic reviews must therefore be explicit about the types of methods to be considered and, where available, will generally select RCTs on the assumption that large, well-designed RCTs provide the strongest evidence. Systematic reviews must follow a very explicit search strategy or protocol, which is often subject to external review
(Chalmers and Altman 1995). There are, however, a number of acknowledged problems with data retrieval in systematic reviews that challenge their validity. These problems include bias in favour of the intervention or procedure under investigation induced by weak methods in the primary studies (Juni, Altman and Egger 2001) and selective publication (Dickersin, Min and Meinert 1992). Careful scrutiny and appropriate selection of primary studies can (theoretically) address bias induced by weak methods. A number of systems to aid this process have been produced (see Greenhalgh 1997; Mays, Roberts and Popay 2001). Selective publication of positive (and statistically significant) studies is not so readily dealt with; West and Jones (1997) illustrated how the effect of psychological rehabilitation after myocardial infarction was overestimated in a meta-analysis due to the trial authors not reporting outcomes that showed non-significant differences between groups. The theoretical remedy for this problem is to search for unpublished studies but the extent to which such an endeavour is ever truly realised is unknown. Integration of evidence can take two forms, either descriptive aggregation or predictive aggregation. In the former the key outputs from the identified studies (observed outcomes, confidence intervals and statistical significance) are plotted to reveal the overall trends, which generally takes the form of a tabular synthesis. In the latter case the findings of the primary studies are pooled and reanalysed (statistical meta-analysis) in order to determine the average of the observed outcome in all studies, effectively increasing the sample and giving increased statistical precision. Systematic reviews can thus produce very concise summaries of the overall effect of a given intervention.
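The predictive aggregation just described can be illustrated with a minimal sketch of a fixed-effect, inverse-variance meta-analysis, in which each study's effect estimate is weighted by the inverse of its variance so that larger, more precise studies contribute more to the pooled result. The trial data are hypothetical; a real meta-analysis would also assess heterogeneity and often fit random-effects models.

```python
import math

# Minimal fixed-effect inverse-variance meta-analysis: each study's
# effect estimate is weighted by 1/SE^2, so precise studies dominate,
# and the pooled standard error shrinks below that of any single study.
# The (effect, standard error) pairs below are hypothetical.

def fixed_effect_pool(estimates):
    """estimates: list of (effect, standard_error) pairs.
    Returns (pooled effect, pooled SE, (95% CI low, 95% CI high))."""
    weights = [1.0 / se ** 2 for _, se in estimates]
    pooled = sum(w * e for (e, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

trials = [(0.45, 0.12), (0.10, 0.18), (0.30, 0.09)]
effect, se, (lo, hi) = fixed_effect_pool(trials)
print(f"pooled effect {effect:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Note that the pooled standard error is smaller than that of any individual trial, which is precisely the gain in statistical precision the text describes.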
It is the concise nature of these reviews that makes them so attractive and useful within EBHC decision-making, especially when the output is turned into useful and very communicable statements like ‘beneficial’, ‘likely to be beneficial’, ‘uncertain benefit’ or ‘harmful’ (e.g. National Health Service Centre for Reviews and Dissemination 2001b). There are, of course, a number of limitations to this approach. Statistical pooling demands that the internal validity of the study is sufficiently high to allow precise estimation of uncertainty by means of a statistical test of significance or confidence interval. As weaker evidence is correctly excluded from this endeavour, the procedures of meta-analysis are unhelpful where the only available evidence is from methodologically weaker sources. The conclusion of the review is often simply that no evidence is available (Bradley et al. 1999; O’Meara et al. 2000; Cullum et al. 2001; Wiesel, Norton and Brazzelli 2001), even where there is a considerable volume of weaker evidence. The choice of method then
seems to determine not only the type of answer (which is proper) but also the likely answer itself and, indeed, the question that is asked in the first place (which is certainly not proper). Additionally, while such methods are very useful in informing categorical decision-making because they are outcome based, they say little about other factors such as the context or processes which helped determine that outcome, other than what the intervention was and the population on which it was initiated. In some circumstances knowledge of these factors is important, as it is the driving mechanisms or principles, rather than the outcome alone, that are key to making a decision. Such an understanding may be crucial where a greater degree of reflexivity is involved in the application of evidence. Consider, for example, the question: What is the primary care nurse’s role in the management of non-accidental injury in children? The evidence required is not just ‘what to do’ but more crucially ‘how to do it’, with the child and family in a particular setting. To answer this question a review of case histories of previous non-accidental injuries might be useful, in which the unit of analysis is not outcomes but the factors within the cases that seemed to make a difference, perhaps using a thematic mode of inquiry. Similarly, if a policy-maker needs to decide which model of intermediate care facility for older people to adopt, they may consider the context of care within their area as a key factor in judging the issue of effectiveness. The question changes from ‘What works?’ to ‘What will work here?’. To answer such a question, structure and process data are required; unfortunately, these data are rarely reported in the primary research (Griffiths 2002). Essentially, the techniques developed for EBHC have emphasised internal validity and precise statistical estimation of uncertainty. External validity has not been a priority and remains essentially subjective.
Furthermore, the preference for methods of high internal validity (such as the randomised controlled trial) that is predicated by a desire for statistical synthesis may inevitably be at the cost of external validity even for quite simple interventions (Griffiths 2002). Just as a desire for high internal validity was identified by Kippax and Van de Ven (1998) as a factor in determining the interventions to be evaluated, so there is a danger that the same desire can shape the questions posed in systematic reviews. At a philosophical level, it is also important to note that these reviews are little more than highly elaborate ‘descriptions’ of outcomes; they tell you little about how or why something happens (Pawson and Tilley 1997), reflecting their underpinning positivist epistemology, which is barely a limitation when the phenomenon of interest has fairly universal properties, as is often the case with bio-physiologically
related interventions, like drugs. However, there are distinct dangers in such extrapolations where interventions are more complex; for example, in fields such as health promotion, contextual factors often play an important part in determining the observed effect (see Kippax and Van de Ven 1998).
Narrative review and meta-ethnography

In recognition of the fact that healthcare practices may be based not only on evidence derived from experimental studies but also on a large body of qualitative research, the synthesis of this material is a growing area of interest within EBHC (Giacomini 2001). Mays, Roberts and Popay (2001) have identified a number of methods through which this synthesis can be achieved, principal among them narrative review and meta-ethnography. Overall these approaches follow a more inductive mode of inquiry and focus on process factors as much as outcomes in order to try and understand more about the nature of the investigation and the phenomenon being studied. In narrative review the analysis is largely descriptive, pooling together the findings of studies to present an overview of the collected material. Meta-ethnography goes a little further than this in that it seeks to collectively analyse the material, aggregating the themes within each piece of research to either reinforce existing thematic areas or to develop new ones (Noblit and Hare 1988). For example, to return to the case of non-accidental injury such a synthesis would systematically (following an explicit search strategy) identify all the reports or inquiries into child deaths and then analyse them to identify common themes that could be linked to important practice considerations. The potential for such syntheses within EBHC is unrealised at present but as attention is increasingly turned to more complex areas of practice there are many expressions of hope that they will play an increasing role in evidence-based decision-making (Popay et al. 1998), even in addressing questions of effectiveness (Dixon-Woods and Fitzpatrick 2001). As with qualitative methods more generally, such reviews are likely to explore or ‘map out’ an area of practice, attempting to answer questions such as ‘What factors contribute to … ?’ or ‘What are the experiences of … ?’.
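The aggregation step in such a synthesis can be caricatured in code. The sketch below simply collates which reports mention which themes; in a real meta-ethnography the ‘translation’ of themes across studies is an interpretive act rather than a counting exercise, so this is only a schematic stand-in, and the reports and themes are invented.

```python
from collections import Counter

# Schematic sketch of thematic aggregation across study reports. In a
# real meta-ethnography the 'translation' of themes between studies is
# interpretive; here simple counting stands in for that step. The
# reports and themes are invented for illustration.

study_themes = {
    "Report A": {"communication failure", "lack of follow-up"},
    "Report B": {"communication failure", "staff training"},
    "Report C": {"communication failure", "lack of follow-up"},
}

def aggregate_themes(themes_by_study):
    """Count how many reports mention each theme, most frequent first."""
    counts = Counter(t for themes in themes_by_study.values() for t in themes)
    return counts.most_common()

for theme, n in aggregate_themes(study_themes):
    print(f"{theme}: reported in {n} of {len(study_themes)} reports")
```

Themes mentioned by only one report would be candidates for new thematic areas; recurring themes reinforce existing ones, mirroring the aggregation described by Noblit and Hare.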
At present the identification of source material tends to be quite broad, with the primary focus being on the topic of interest and the population, although these can be somewhat blurred. This lack of distinction, combined with a lack of consistent terminology, makes many of the search strategies described for EBHC problematic, as maximally sensitive electronic searches (aimed at identifying all relevant studies) can yield vast numbers of references to scrutinise, a
problem shared with systematic reviews of interventions that are not consistently described or indexed. The type of materials utilised in meta-ethnography may stray beyond formal research to include the analysis of expert opinion or case materials. Such inclusivity is both a strength and a weakness. While this approach may foster a more lateral approach to evidence, if not carefully managed it can slide into chaos as (essentially) anything goes (Schreiber, Crooks and Stern 1997), reflecting the constructivist (and hence relativist) paradigm that underlies much qualitative research. To counter this problem it has been suggested that formal frameworks for assessing the quality of the source materials be employed (Popay et al. 1998) to ensure that a ‘true’ account of the participants’ construction of reality is given. In other words, that the level of rigour used to validate or authenticate the reported materials is adequate to support the theoretical constructions or observations they generated. Integration either adopts the narrative approach or is based on the collective analysis of the source material, as with meta-ethnography. The former may simply summarise key findings from the source studies (narrative synthesis), while the latter may seek to contribute new theoretical propositions (analytical or theoretical synthesis). However, it has been cautioned that there may be inherent ‘dangers’ in mixing qualitative materials derived from different epistemological traditions, such as ethnography and phenomenology, as there may be issues of incompatibility (Mays et al. 2001). These types of syntheses tend to identify general principles or core aspects of practice or client experience in a given field. These are potentially useful in providing practitioners with systems or templates to deal with different clinical situations. 
These reviews often allude to best or good practice but they cannot stipulate with any degree of confidence how good the practice is, acknowledging that this is not their intention. As with other qualitative research this is left to the judgement of the readers and their assessment of how convincing the review is. The most significant methodological problem with this approach is inconsistency in, and lack of description of, the methods used to undertake the synthesis. One of the key texts in this area (Schreiber et al. 1997) gives no guidance on how to identify relevant studies nor how to synthesise them (other than the implication that the process is essentially the same as dealing with primary data). Assessment of study rigour is discouraged, as is combining studies from different research paradigms. It is contended (by us) that without assessing the quality of the source materials there is a danger that the findings of the review may simply echo previous errors. Conversely, the
perspective of the researcher should matter less than that of participants if the same phenomena are being studied. If qualitative synthesis is to establish itself within EBHC systems, then a higher degree of rigour will be required in executing the reviews. At a philosophical level the principal problem is that while everything is seen as having value or meaning, which is useful in mapping social structures and interactions, its utility to evidence-based decision-making is less clear. What is required is perhaps a more structured approach, guided by clear questions of practical (clinical) application using validated evidence, otherwise such reviews become little more than extended literature reviews.
Realist synthesis

The case for realist synthesis within EBHC is largely made on the basis of its epistemological differences with the other approaches over the nature of causality. The issues involved in this debate are complex (see Pawson 2001). In simple terms, realists argue that the positivist approach of numerical meta-analysis is blind to everything but outcome and therefore fails to observe the generative mechanisms that give rise to such outcomes. It is descriptive rather than explanatory. On the other hand, the constructivist position is guilty of epistemological relativism and therefore unhelpful in decision-making (Pawson 2002). The axiomatic basis of realist explanation is that causal outcomes follow from mechanisms acting in context (Pawson and Tilley 1997). Thus, within realist research synthesis a key task is to examine the context in which the comparative programmes were undertaken and the resources available to each programme, so that active mechanisms contributing to the outcome of the programme can be observed. However, realist synthesis within EBHC is at present little more than a theoretical notion as (to the authors’ knowledge) no explicit attempt to undertake such a review has been made and the information required for such a review is rarely published in healthcare evaluations (Griffiths, in press). The systematic review discussed in the next section of the paper has, however, drawn on some realist principles. The particular utility of realist synthesis to EBHC may be in relation to the assessment of policy or whole programmes of care (Pawson 2001). The focus of the question is distinct from the other two approaches as it seeks to explain rather than describe. The question is not ‘Does this work?’ or ‘What are the factors?’ but ‘Why or how does this work in these circumstances?’.
As with the systematic review, the realist synthesis requires evaluative studies, although what is allowed is related to the potential for material to contribute an understanding of the context and mechanism (i.e. structure,
process and resource variables) of the review intervention, rather than to methodological dogma (i.e. RCT is best). Outcome-based research is still important as it is still necessary to review the context and mechanism alongside the outcome. It may be that the hypothetical realist review will provide a structure for the use of qualitative materials alongside empirical (systematic) reviews of effectiveness. If such a review were undertaken it would still be important to appraise the quality of the materials following similar systems as for systematic review. However, as the data required include context and mechanism variables, the appraisal must encompass a broader range of quality criteria and apply different criteria to accommodate these data. The integration of the materials for realist synthesis involves the generation of theoretical models, something that may involve both thematic analysis and statistical procedures. The thematic analysis would be to isolate the core facets of the intervention and evaluation, determining its antecedents, context and mode of effect. The statistical procedures could include meta-regression (Clarke 2000) or other statistical techniques to confirm the thematic analysis and the relationship between the identified facets and the direction of the outcome (e.g. Hanson 1992). Confirmatory analysis could then be undertaken by testing hypotheses, for example that peer education is more effective in sex education for young people when the peer educators work alongside an adult educator. Thus, both statistical and analytical or theoretical synthesis are employed. The output of these reviews would be in the form of theoretical models or propositions reporting the likely effectiveness of different techniques or approaches in different circumstances. The major limitation of this approach is that it is largely at odds with the dominant approach to evaluative research within health-care, which remains positivist.
So it is unlikely that the source materials will contain the data required for such a synthesis. One of the authors (PG) has had a report of an RCT rejected by a high-profile journal for the stated reason that the findings were context specific. Shortly afterwards the same journal published a near exact replication of the study without any analysis of the context-bound nature of the intervention. One solution might be to follow up each piece of research independently by contacting the original authors to collect these unreported data, which would be costly and time consuming. Additionally, there is no guarantee that the original authors would have collected the contextual and process-level data required. An answer to this problem in some areas of practice may lie in the use of different sources of evidence. For example, it may be possible to look at data on the prevalence and incidence of pressure sores in different Trusts, hospitals and even wards, alongside the types of systems used to prevent pressure sores in those areas. Although this process may appear to differ little from primary research, the key purpose is to use existing recorded information to answer questions, something analogous to the widely recognised practice of reviewers contacting authors for unpublished data.
DEALING WITH COMPLEXITY: THE SEARCH FOR A UNIFYING FRAMEWORK

Having examined each of the main approaches to the synthesis of research evidence, it is argued that each can contribute to EBHC. The focus of the remainder of the paper is on how these different approaches can be harnessed collectively to help resolve the problem of synthesis in relation to complex topic areas, where the explication of evidence is perhaps more demanding than, say, addressing a question of effectiveness regarding a particular drug. To provide a meaningful summary of evidence in such areas it is necessary to transcend competing epistemologies while maintaining rigour. To demonstrate how such an eclectic model can be used in practice, a recently completed review examining good practice in promoting continuity during the transition from child to adult health and social care will be used to structure the discussion. The discussion will consider how the review addressed the inherent complexity within the study area, together with its limitations.
Background and aim of the review

The number of young people in most affluent ‘western’ countries with chronic diseases and disabilities entering adulthood in need of supportive care is growing (British Paediatric Association 1990; While et al. 1996; Betts et al. 1996). There is evidence to suggest that services are failing to manage this transition effectively, with detrimental consequences (Association for Children with Life Threatening or Terminal Conditions 1997; House of Commons 1997a; House of Commons 1997b; Goodinge 1998; Fruin 1998; Morris 1999; Noes 1999). The factors explaining this malaise are multifaceted, with failures in continuity potentially occurring in a number of different dimensions: within and between services, professionals, young people and their families. The aim of the review was twofold: first, to identify practices that address continuity during the transition from child to adult care; and secondly, to assess the merits of those practices in determining good practice. Good practice was to be determined at two levels: the specific practices identified as ‘good practice’ in a range of (explicit) contexts; and overarching themes expressing the common components of good practice in promoting continuity in general terms.
The type of question being addressed by the review was less ‘What works?’ and more ‘Why or how does this work in these circumstances?’. This focus reflected the desire to identify general principles of what good practice constitutes rather than the effectiveness of specific practices, although the review was not blind to such detail.
Scope of the review

Clearly continuity is a complex topic; not only does it transcend multiple fields of practice, it also encompasses a range of approaches to the management of different types of continuity. Thus, the scope of the review had to be wide ranging, examining practices that aimed to promote continuity in a number of different care groups in both health and social care settings. To deal with this complexity a number of judgements were made to limit the scope of the review without compromising its ability to address the review questions. To some extent the limitations were pragmatic: there was a generous but limited source of funding for the review, whereas the topic was so complex, with so many interfaces in so many different settings, as to be potentially limitless. One of the core principles of the ‘systematic review’ is to be as comprehensive as is feasible, something that goes beyond electronic searching to include the pursuit of secondary references, experts and unpublished data. While such inclusivity is an important objective, particularly where the intervention and population are discrete and the research methods homogeneous, it is not feasible when the review is broad, the topic ill defined and the potential sources of information multitudinous. Clearly, if the review seeks to reduce the materials down to a definitive statement on a given facet of practice then inclusivity is important. In this case, however, such an approach would not work, as it would require a series of separate reviews addressing a vast range of practices across every dimension of continuity, with scant evidence in each. The scope of the review was therefore adjusted to ensure that ‘enough’ could be found out about these practices to identify what they contributed to continuity in the fields of practice in which they were deployed, leaning towards the realist approach.
To achieve such a broad analysis, targeted search strategies were developed that utilised classic systematic review methods together with more novel methods to identify literature in depth on a range of topics.
Search strategy

To identify studies of effectiveness (search A) we used maximally sensitive searches of facets of the topic (child/adult and transition) combined with each other and a methodological filter for evaluation studies to achieve specificity (Greenhalgh and Donald 2000a). However, as the results of this process are apt to identify only those practices that can be (or have been) evaluated using experimental or quasi-experimental approaches, additional searches were required to achieve depth in our sample of practices. We used five chronic tracer conditions (Kessner and Kalk 1973) affecting the population of interest (those undergoing the transition from child to adult services), identified as having outcomes that were likely to be dependent on the quality of care provided, in addition to specific medical treatments. The selected conditions were diabetes, learning disability, cystic fibrosis and muscular dystrophy. Search B comprised sensitive searches for the condition and the concept of transition, with no methodological filter. This approach allowed a broad selection of literature on clearly defined topics to be identified without biasing the identification of practice to that which had been evaluated. A final search (C) aimed to identify practices that were not revealed in the literature. There were two main elements to this search: first, contact with key informants or stakeholders; and secondly, a survey of services involved in the commissioning, management or provision of care to young people in the Greater London and Manchester areas. The survey was mailed to named health, social services or education staff (n = 244) working with children and young people with chronic illnesses and/or disabilities across the two areas. It was envisaged that while each of these searches would generate important findings, their collective output should ensure a more comprehensive, detailed and valid account of practices than would any single approach. Each search was designed to target different aspects of the overall field of study, and to different depths.
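The facet-combination logic of searches A and B can be sketched as boolean query strings of the kind accepted by bibliographic databases. The terms below are illustrative placeholders, not the review’s actual search strings.

```python
# Illustrative synonym lists for each facet (invented for this sketch).
population = ["adolescen*", "young adult*", "child*"]
transition = ["transition*", "transfer*", "continuity"]
methods_filter = ["randomi*", "trial", "evaluation stud*", "quasi-experiment*"]

def or_block(terms):
    """Join synonyms for one facet with OR (maximally sensitive)."""
    return "(" + " OR ".join(terms) + ")"

# Search A: both topic facets AND a methodological filter (specific).
search_a = " AND ".join([or_block(population), or_block(transition),
                         or_block(methods_filter)])

# Search B: a tracer condition AND the transition facet, with no
# methodological filter (sensitive).
def search_b(condition):
    return f'("{condition}") AND {or_block(transition)}'

print(search_a)
print(search_b("cystic fibrosis"))
```

The design trade-off mirrors the text: ORing synonyms within a facet maximises sensitivity, while ANDing in a methodological filter trades sensitivity for specificity.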
It was believed that drawing materials from these different sources would enhance the completeness of the inquiry and would be useful in exploring any convergence between the materials from the different sources: triangulation (although this latter possibility was not fully realised). The resulting materials were then appraised and synthesised. An overview of the design is presented schematically in Fig. 1. Thus, in theory, A + B + C should have identified a broad and fairly comprehensive range of material, identifying many practice examples, some linked to particular areas of practice, some with evaluations of outcome. In all, 5319 items were identified. The preliminary list of items identified by the search was then filtered by two researchers examining the titles and abstracts of papers, rejecting those that did not address continuity. The process of filtering down on the basis of the study’s title and abstract is rarely addressed in systematic reviews, but is important in an area where terminology is vague and preliminary decisions about relevance may be subjective. In this review inter-rater reliability was assessed on a sample of abstracts (n = 118), with 81% concordance between the reviewers; 368 items were identified for further appraisal. Search terms were difficult to define because of the recalcitrant nature of the concepts being examined (i.e. the inherent vagueness and inconsistencies in the use of terms such as continuity and transition). External validation of these terms would have been advantageous. Also, as the reviewed materials were in part directed by tracer conditions there may have been bias towards those types of materials. However, given the huge volume of material identified, sufficient data were secured to support the level of analysis undertaken.

[Figure 1 Overview of review design. Phase 1: three searches of databases and grey literature — search A, systematic review of effectiveness (continuity), using a search strategy plus method filters; search B, systematic review of tracer conditions, search strategy only; search C, review of practice and research networks. Phase 2: critical appraisal of review items (A, B, C). Phase 3: synthesis (A + B + C) to produce overall findings, identifying general and specific factors — the key components of good practice in continuity.]

Another way of dealing with areas that have large volumes of diffuse and difficult-to-search material is being explored by the authors in another review, examining clinical nurse specialists. Over 2000 papers have been identified as relevant following an examination of the abstracts from the initial search. The solution proposed to the problem of dealing with such a large collection of material relies, again, on the notion that one need not examine everything to know something worthwhile, and involves the random selection of papers (n = 200), together with the purposive selection of papers by key authors. It is intended to examine these materials until data saturation (in terms of concepts) is achieved. Other reviews we have advised upon have utilised the concept of data saturation to achieve a sample of the literature for a descriptive review (Bridges et al. 1999).
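The inter-rater check reported above (81% concordance on a sample of 118 abstracts) is simple percentage agreement. A minimal sketch of that calculation, extended with Cohen’s kappa to correct for chance agreement, might look as follows; the decision lists are invented for illustration.

```python
from collections import Counter

def percent_agreement(rater1, rater2):
    """Proportion of items on which the two raters gave the same decision."""
    agree = sum(a == b for a, b in zip(rater1, rater2))
    return agree / len(rater1)

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement for two raters over any label set."""
    n = len(rater1)
    po = percent_agreement(rater1, rater2)           # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)    # expected by chance
    return (po - pe) / (1 - pe)

# Invented include/exclude decisions for a handful of abstracts.
r1 = ["include", "exclude", "exclude", "include", "exclude", "include"]
r2 = ["include", "exclude", "include", "include", "exclude", "exclude"]
print(f"agreement: {percent_agreement(r1, r2):.0%}")
print(f"kappa: {cohens_kappa(r1, r2):.2f}")
```

Raw concordance, as reported in the review, is easy to interpret but flatters agreement when one decision dominates; kappa is the usual chance-corrected complement.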
Study selection

The scope of the review was limited to the consideration of practice rather than theory. It was deemed that, while there was a body of literature relating to theoretical models of continuity and to views of what continuity entails, unless those models or views identified specific practices they fell outside the scope of this review. For example, an author may suggest that empowering the young person is important; however, the concept of empowerment alone does not translate into good practice unless some indication is given as to how this should be achieved. The rationale for this distinction was not to negate the importance of such theory but to maintain a clear focus on ‘good practice’ rather than ‘good theory’, although in the best examples of reported reviews or evaluations of initiatives the two were found to coexist. Another dimension in which the scope of the review was limited was in the form in which practice was expressed. The review was restricted to reporting explicit practices that were argued to have a positive effect on improving continuity in the transition from child to adult care. Thus, implicit practices that may be fundamental to continuity in the transition were not examined, as such practices are often so habituated as to be obscured. Hence, there was a bias towards reporting what was explicit, novel and innovative, although the identification of practices that were not being evaluated makes this review less prone to that bias than many.

The outputs from the searches comprised journal papers, written reports and descriptions of practice from the survey; these were termed items. No a priori quality criteria were set but all the items (e.g. studies) were critically appraised, critiqued and graded according to the type of evidence that supported the propositions regarding ‘good’ practice made within them. To facilitate this appraisal a number of assessment schedules were developed. For each item the following data were identified: the target population (care group and age range); the practice and its setting, together with any subcomponents where the intervention was multifaceted; structure, process and outcome variables; economic data; and the role and involvement of users. Items were classified into one of four categories denoting the manner in which the practice was presented and supported. The categories were:
• Description, an item describing practices.
• Evaluation, an evaluation of practices.
• Survey/interview, a survey or interview with service users from which practices were suggested.
• Review, an item reviewing previous literature (theory and practices) suggesting specific practices.
Where an explicit methodology was employed for either the evaluation or description of the practices presented within the items, three further schedules, derived from established appraisal guidelines (Greenhalgh and Donald 2000b), were used to assess the quality of that method depending on the type of research used (see Fig. 2).

[Figure 2 Examples of appraisal schedules (A = quantitative, B = qualitative).
A. Appraisal questions (each scored 0–4):
1. Was the study prospective (stronger) or retrospective (weaker)? (Retrospective = 1, prospective = 3)
2. Were the outcome measures appropriate and clearly linked to the intervention?
3. What method was used for the study? (Grade methods 1–4: 1 = expert opinion, 4 = RCT)
4. Were the methods adequately described and appropriate, following EPOC* guidelines?
5. How strong was the impact of the intervention on the identified outcomes?
6. How accurate/precise was the measure of impact (P-values and CI)?
Summary score: weak 0–9; moderate 10–16; strong 17–23.
B. Appraisal questions (each scored 0–4):
1. Was there a clear statement of the aims of the research?
2. Was the sampling strategy clearly justified and linked to the target population?
3. Were the data collection methods adequately described?
4. Was the data analysis clearly linked to the themes/categories identified?
5. Were the themes and categories linked to the aims of the research and plausible?
6. How transferable were the study’s findings?
7. What was the strength of the implications of the study for practice?
Summary score: weak 0–11; moderate 12–20; strong 21–28.
*EPOC = Cochrane Effective Practice and Organisation of Care Group, http://www.abdn.ac.uk/hsru/epoc/index.hti]

Scoring systems within these schedules rated the method as ‘weak’, ‘moderate’ or ‘strong’. The scoring system was not externally validated and there are internal inconsistencies within the schedules, as the same weighting was applied to each element (e.g. whether the outcome measures were ‘appropriate’ vs. whether the methods were adequately ‘described’) despite the fact that some are obviously more crucial than others. This method must therefore be regarded as tentative. The sensitivity of this process was also compromised by focusing the grading at the methodological rather than the practice-component level. This focus meant that where an item described multiple practices, no allowance was made for the fact that some of these practices may have been peripheral to, or even outside of, the focus of the research. If the method was strong the practice component was reported as strong. A further reason for the dominance of the methodological rating score was that very few items reported any outcomes other than the direction of impact (i.e. positive or negative change in continuity of care), so it was not possible to grade items consistently beyond the quality of the method. This crude approach was taken to rapidly organise and grade large volumes of material. However, it may have been better to set inclusion criteria to ensure that each item contained the minimum number of elements necessary for the proposed synthesis, whether outcome or process data. However, had the review restricted itself to the ‘highest’ form of evidence of effectiveness, the yield from our searches would have been a single RCT and so the review would (at best) have reported evidence of one practice located in a single context. Wider inclusion criteria that allowed quasi-experiments still yielded very few comparative evaluations.

[Figure 3 An example of a thematic heading with coding data.]
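The summary scoring in the Figure 2 schedules reduces to summing the 0–4 item ratings and banding the total. A minimal sketch for the quantitative schedule, using the thresholds published in Figure 2 (weak 0–9, moderate 10–16, strong 17–23):

```python
def grade_quantitative(item_scores):
    """Sum six 0-4 ratings from schedule A (Fig. 2) and band the total
    using the published thresholds: weak 0-9, moderate 10-16, strong 17-23."""
    if len(item_scores) != 6 or not all(0 <= s <= 4 for s in item_scores):
        raise ValueError("schedule A expects six ratings of 0-4")
    total = sum(item_scores)
    if total <= 9:
        band = "weak"
    elif total <= 16:
        band = "moderate"
    else:
        band = "strong"
    return total, band

print(grade_quantitative([3, 2, 3, 2, 3, 4]))  # (17, 'strong')
```

Note that this mechanisation inherits the weakness the authors themselves flag: every item carries equal weight, so an item-level weighting scheme would be needed before the bands could be treated as more than tentative.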
Integration and synthesis

Prior to the synthesis phase of the review, the data were a disparate collection of practices, as detailed on the item assessment schedules. In this phase each of the practices identified on those schedules was described and coded. In many cases item schedules generated more than one practice code, as the interventions reported were often multifaceted. Thus an ongoing bank of codes was generated, linked to specific types of practices. The coding exercise was completed when all the practices had been coded. Following the coding phase, a thematic analysis of the allocated types of practices was undertaken. The key question applied in this analysis was: what are the key features of the practice in relation to continuity? In applying this question it was possible to group practices under headings describing core aspects that suggested the active mechanisms involved in promoting continuity. The coding and analysis schedule enabled a link between the identified themes and source items to be maintained. Thus, it is possible to report the number of items, the weight of evidence, the key findings and the main care groups alongside each thematic heading (see Fig. 3 for an example).
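The link maintained between thematic headings and source items can be pictured as a simple code bank. The sketch below is hypothetical — item identifiers, practice codes and grades are invented — but it shows how each theme can report its item count and weight of evidence, as in the Fig. 3 tabulation.

```python
from collections import defaultdict

# Each appraised item contributes one or more practice codes (the
# interventions were often multifaceted), each carrying the item's
# evidence grade from the appraisal schedules. All values invented.
items = [
    ("item-01", ["key-worker", "joint-clinic"], "strong"),
    ("item-02", ["transition-plan"], "moderate"),
    ("item-03", ["key-worker"], "weak"),
    ("item-04", ["joint-clinic", "transition-plan"], "moderate"),
]

# Build the code bank: theme -> list of (item, grade), so each thematic
# heading can report its item count and weight of evidence.
bank = defaultdict(list)
for item_id, codes, grade in items:
    for code in codes:
        bank[code].append((item_id, grade))

for theme, sources in sorted(bank.items()):
    grades = [g for _, g in sources]
    print(f"{theme}: {len(sources)} items, evidence = {', '.join(grades)}")
```

Keeping the item identifier alongside each code is what makes the synthesis auditable: every thematic claim can be traced back to its source items and their appraisal grades.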
[Figure 4 An example of a theoretical model, analytical synthesis: the transition process. This model starts from the premise that the young person will need some help in acquiring the skills and support systems necessary to use or experience adult care effectively. The model has an active focus on personal growth and development within the transition, what was termed developmental continuity. This is also likely to involve a redefining of the family’s role in care provision. In the schematic illustration of the model, the shaded area within the transitional care box was used to illustrate that the model extends to encompass personal development and the family’s role. This model was suggested to be particularly useful for vulnerable young people and those with physical disabilities or learning difficulties.]
From these thematic assignations a further, more tentative, level of analysis was attempted. When consideration was given to the underlying dynamics of the various practices identified by the review, some interesting patterns emerged. These patterns reflected the different foci of the initiatives described within the review items, particularly in the way in which young people and the transition itself were viewed. For example, in some initiatives the focus was on getting the young person from the child to the adult services as safely and efficiently as possible. In others a developmental model was adopted, seeing the transition within the context of personal growth. In these latter cases continuity was established by equipping young people with the resources necessary not only to weather the transition but to take on a new role in relation to their condition. In examining these patterns it was possible to identify a number of models for continuity promotion (see Fig. 4 for an example). The potential utility of these models in different practice contexts was also identified. Thus, a number of different levels of synthesis were employed within this review. Narrative synthesis was used to describe the items collectively. Tabular synthesis was used under each thematic heading to present the type of item and its rating (as in Fig. 3). The thematic analysis is an example of meta-synthesis, in that the items have been subjected to a secondary assignation; and analytical or theoretical synthesis has been attempted in the attribution of the underlying dynamics or mechanisms found within the practices to models of continuity promotion (as in Fig. 4).
Implications of evidence synthesis for EBHC

This review has suggested a number of strategies that may be useful in developing evidence synthesis in a complex, multifaceted area of health-care. The approach was eclectic and contained elements of all the major approaches to evidence synthesis. The use of explicit formal search strategies was drawn from the traditional methods of systematic review, as was the need to carefully appraise the identified items. Meta-ethnography was employed in the thematic analysis at the centre of the synthesis of the materials. Realist synthesis made, perhaps, the most important contribution, as it provided the guiding epistemology determining the mode of analysis and the included material. This perspective informed the search for principles (mechanisms) that may inform good practice in promoting continuity in the transition from child to adult care in different practice areas (context), and determined that materials other than those providing descriptive estimates of the (relative) effect of different approaches (i.e. RCTs and quasi-experimental evaluations) should be included. The review was certainly successful in identifying practice, producing an extensive list of manifest practices that aimed to promote continuity. However, what is not known with any quantifiable degree of certainty is whether those practices are effective. The danger with a simple constructivist approach to this material, expressed in this review’s use of thematic analysis, is that well-established and well-supported practices sat alongside more obscure and empirically naive practices, with little distinction between the two. Claims of effectiveness for some practices were based on little more than a report author’s opinion or, where evaluated, a positive experience of identified aspects of the process. However, in all cases the weight and nature of the evidence was explicitly reported.
Individual or collective constructions of reality as reported might be informative but they do not, for the realist, entirely define it. In addition to describing manifest practice and outcomes, the review has sought to extrapolate core facets of practice from a wide variety of sources across contexts. These facets are more than simple descriptions; they represent the mechanisms through which current practices (are hypothesised to) operate. This representation was the substance of the models described by the review. At this level questions of utility and explicit effectiveness are inapplicable, as such questions only have meaning when the mechanisms are applied in a given context. Thus, it could be argued that all these mechanisms have the potential to impact positively on the different dimensions of continuity. The mechanisms represent good practice if,
in the first instance, they are judged to be of use by those involved in establishing continuity for a particular group of young people and, in the second instance, if they lead to some improvement in a given set of continuity dimensions. This latter claim can only be confirmed following rigorous evaluation, which clearly highlights the need for more primary research in this area. In these terms the review has contributed evidence and has managed to be informative even where evidence of outcome is weak. Finally, it must be considered whether this synthesis of evidence can inform clinical practice, the ultimate goal of EBHC. The situation in this field of practice was that the issue of continuity was largely unaddressed and the evidence was not readily accessible, being widely distributed through generally low-quality reports of individual practices that utilised different terminology. While the empirical data to support specific practices in this area are weak, the overall assessment of the nature and pattern of activity indicates a way forward. In the EBHC tradition the review has highlighted this (clinical) topic as being potentially subject to evidence and provided some principles to help practitioners and policy-makers respond in a purposeful manner, guided by the current evidence base. True to the spirit of the origins of EBHC, the emphasis is on supporting judgement, although this is rarely how EBHC is perceived. Stalwarts of the current EBHC movement with a vested interest in maintaining the focus on systematic review may see these attempts as sullying the purity of the original ethos of EBHC. The alternative approach to this topic, following the classic ‘systematic review of effectiveness’ model, would simply have concluded that for nearly all aspects of continuity, in most settings and for most client groups, there were no practices known to improve clinical and care outcomes.
While the output of our review is far from perfect, we believe that its use of systematic, explicit and reproducible approaches to the identification, rating and synthesis of a broad range of relevant materials is far superior to either of the alternatives. On the one hand, we could have produced a clearly defined epistemological vacuum, represented by the failure to identify evidence convincing to the positivist. Alternatively, we could have filled the vacuum with constructions of reality (either our own, the original authors’ or that of the respondents), with no external verification or consideration of the extent to which this captured the ‘real’ world. We also believe that the methods described here could be useful in formalising the approach to the use of qualitative research alongside reviews of effectiveness, even though in this case such evidence of effectiveness was largely absent.
CONCLUSION

Health and social care deal with real-world issues of life and death, poverty and deprivation. There is a real world out there, and while individuals’ constructions of it may be vital to our understanding of their actions in it, they do not directly change it. Practice cannot therefore be based entirely on such constructions. Conversely, care practice deals with much that is real but not directly observable (suffering, grief, pain) and must address matters in subtle, difficult-to-study ways. If evidence-based practice divides into competing camps of positivism and relativism, the care that is delivered will either be severely limited (to that which can be directly observed) or defined as unamenable to external scrutiny. While few if any would truly subscribe to a position that defined practice as all ‘art’ or all ‘science’, reproducible and reliable methods for the integration of evidence in complex areas are required if EBHC is to make its fullest contribution to individual and social well-being. We hope that the techniques and issues described here make some contribution to the development of such an approach.
REFERENCES

Association for Children with Life Threatening or Terminal Conditions and Royal College of Paediatricians & Child Health. 1997. A guide to the development of children’s palliative care service. London: Royal College of Paediatricians & Child Health.
Betts P, M Buckley, P Davies, R Swift and E McEvilly. 1996. The care of young people with diabetes. Diabetic Medicine 13(9, Suppl. 4): S54–9.
Bradley M, N Cullum, E Nelson, T Sheldon and D Torgerson. 1999. Systematic reviews of wound care management: (2) Dressings and topical agents used in the healing of chronic wounds. Health Technology Assessment 3: 1–134.
Bridges J, S Spilsbury, J Meyer and R Crouch. 1999. Older people in A & E: Literature review and implications for British policy and practice. Reviews in Clinical Gerontology 9: 127–37.
British Paediatric Association. 1990. The organisation of services for children with diabetes in the United Kingdom. Report of the British Paediatric Association Working Party. Diabetic Medicine 7(5): 457–64.
Chalmers I and D Altman. 1995. Systematic reviews. London: British Medical Journal Books.
Clarke M. 2000. Cochrane reviewers’ handbook 4.1.4 (updated October 2001). Oxford: Update Software.
Cullum N, J Deeks, T Sheldon, F Song and A Fletcher. 2001. Beds, mattresses and cushions for pressure sore prevention and treatment (Cochrane review). Oxford: Update Software.
Dickersin K, Y Min and C Meinert. 1992. Factors influencing publication of research results: Follow-up of applications submitted to two institutional review boards. Journal of the American Medical Association 267: 374–8.
Dixon-Woods M and R Fitzpatrick. 2001. Qualitative research in systematic reviews. British Medical Journal 323: 765–6.
Forbes A, A While, R Ullman, S Lewis, L Mathes and P Griffiths. 2001. A multi-method review to identify components of practice which may promote continuity in the transition from child to adult care for young people with chronic illness or disability. London: King’s College.
French P. 2002. What is the evidence on evidence-based nursing? An epistemological concern. Journal of Advanced Nursing 37(3): 250–9.
Fruin D. 1998. A matter of chance for carers? London: Department of Health.
Giacomini M. 2001. The rocky road: Qualitative research as evidence. Evidence Based Medicine 6: 4–6.
Gomm R and C Davies. 2000. Using evidence in health and social care. London: Sage.
Goodinge S. 1998. Inspection of services to disabled children and their families. London: Department of Health.
Greenhalgh T. 1997. How to read a paper: The Medline database. British Medical Journal 315: 180–3.
Greenhalgh T and A Donald. 2000a. Unit 3: Approaching the literature. In Evidence based health care workbook, 2nd edn, eds T Greenhalgh and A Donald, 37–50. London: British Medical Journal Publishing Group.
Greenhalgh T and A Donald. 2000b. Evidence based health care workbook. London: British Medical Journal Publishing Group.
Griffiths P. 2002. Understanding effectiveness for service planning. In Public health in policy and practice: A sourcebook for health visitors and community nurses, ed. S Cowley, 144–63. London: Harcourt Health Sciences.
Griffiths P. 2002. Nursing-led in-patient units for intermediate care: A survey of multidisciplinary discharge planning practice. Journal of Clinical Nursing 11: 322–30.
Hanson C. 1992. Developing systemic models of the adaptation of youth with diabetes. In Stress and coping in child health, eds A La Greca, L Seigel, J Wallander and C Walker, 212–42. New York: Guilford Press.
House of Commons. 1997a. Hospital services for children and young people. Fifth report. Health Committee. Chairman: M Roe. London: Stationery Office.
House of Commons. 1997b. Health services for children and young people in the community: Home and school. Third report. Health Committee. Chairman: M Roe. London: Stationery Office.
Juni P, DG Altman and M Egger. 2001. Systematic reviews in health care: Assessing the quality of controlled clinical trials. British Medical Journal 323: 42–6.
Kessner DM and CE Kalk. 1973. A strategy for evaluating health services. In Contrasts in health status, ed. Panel on Health Services Research, 1–11. Washington: National Academy of Sciences.
Kippax S and P Van de Ven. 1998. An epidemic of orthodoxy? Design and methodology in the evaluation of the effectiveness of HIV health promotion. Critical Public Health 8: 371–86.
Mays N, E Roberts and J Popay. 2001. Synthesising research evidence. In Studying the organisation and delivery of health services: Research methods, eds N Fulop, P Allen, C Aileen and B Black, 188–219. London: Routledge.
Morris J. 1999. Hurtling into a void. Brighton: Pavilion Publishing for Joseph Rowntree Foundation.
Muir Gray J. 1997. Evidence-based healthcare. Edinburgh: Churchill Livingstone.
National Health Service Centre for Reviews and Dissemination. 2001a. Undertaking systematic reviews of research on effectiveness: CRD guidelines for those carrying out or commissioning reviews. York: CRD.
National Health Service Centre for Reviews and Dissemination. 2001b. Acupuncture. Effective Health Care 7: 1–12.
Noblit W and R Hare. 1988. Meta-ethnography: Synthesizing qualitative studies. London: Sage.
Noes J. 1999. Voices and choices. Norwich: Stationery Office.
O’Meara S, N Cullum, M Majid and T Sheldon. 2000. Systematic reviews of wound care management: (3) Antimicrobial agents for chronic wounds; (4) diabetic foot ulceration. Health Technology Assessment 4(21): 1–245.
Oakley A, D Fullerton, J Holland and S Arnold. 1995. Sexual health education interventions for young people: A methodological review. British Medical Journal 310: 158–62.
Pawson R. 2001. Evidence based policy. II. The promise of ‘realist
© 2002 Blackwell Science Ltd, Nursing Inquiry 9(3), 141–155
synthesis’. London: Economic Social Research Council, Centre for Evidence Based Policy and Practice. Pawson R and N Tilley. 1997. Realistic evaluation. London: Sage. Pawson R. 2002. Evidence-based policy. In search of a method (part 1) the promise of ‘realist synthesis’ (part 2). Evaluation 8(2): 157–81. Phillips B, C Ball, D Sackett, D Badenoch, S Straus, B Haynes and M Dawes. No date. Levels of evidence and grades of recommendations. London: Centre for Evidence Based Medicine. http://cebm.jr2.ox.ac.uk/docs/levels.html. Accessed December 2001. Popay J, A Rogers and JG Williams. 1998. Rationale and standards for the systematic review of qualitative literature in health services research. Qualitative Health Research 8: 341–51. Sackett D, W Richardson, W Reosenberg and R Haynes. 1998. Evidence-based-medicine. How to practice and teach EBM. London: Churchill Livingstone. Schreiber R, D Crooks and P Stern. 1997. Qualitative metaanalysis. In Completing a qualitative research project: Details and dialogue, ed. J. Morse, 311–27. Thousand Oaks, California: Sage. Steiner A and R Robinson. 1998. Managed care: US research evidence and its lessons for the NHS. Journal of Health Services and Research Policy 3(3): 173 – 84. West RR and DA Jones. 1997. Publication bias in statistical overview of trials: example of psychological rehabilitation following myocardial infarction (Abstract). In Proceedings of the 2nd International Conference, Scientific Basis of Health Services, and 5th Annual Cochrane Colloquium, Amsterdam. Oxford: The Cochrane Library. While A, C Citrone and J Cornish. 1996. A study of the needs and provisions for families caring for children with life-limiting incurable disorders. Report for Department of Health. London: Department of Nursing Studies, King’s College London. Wiesel P, C Norton and M Brazzelli. 2001. Management of faecal incontinence and constipation in adults with central neurological diseases. Cochrane Review. Oxford: The Cochrane Library.
155