Adm Policy Ment Health (2010) 37:81–88
DOI 10.1007/s10488-010-0277-0

ORIGINAL PAPER

Finding the Common Core: Evidence-Based Practices, Clinically Relevant Evidence, and Core Mechanisms of Change

Thomas L. Sexton · Susan Douglas Kelley

Published online: 6 February 2010
© Springer Science+Business Media, LLC 2010

T. L. Sexton (corresponding author), Center for Adolescent and Family Studies, Indiana University, 1901 East 10th Street, Bloomington, IN 47405, USA. e-mail: [email protected]
S. D. Kelley, Center for Evaluation and Program Improvement, Vanderbilt University, Nashville, TN, USA
Abstract
Improving the quality of children's mental health care can benefit from the adoption of evidence-based and evidence-informed treatments. However, the promise of moving science into practice is hampered by three core elements that need to be addressed in the current conversation among key stakeholders: (1) expanding our understanding of the clinical relevance of different types of evidence, (2) emphasizing the identification of core mechanisms of change, and (3) re-conceptualizing what evidence-based practice means. This paper focuses on these elements in an attempt to find a common core among stakeholders that may create opportunities for a more inclusive conversation to move the field of children's mental health care forward.

Keywords: Evidence based practice · Mechanisms of change · Practice improvement · Children's mental health research · Research and practice
Significant progress has been made within the last two decades in the field of children's mental health care (CMHC). Researchers and treatment model developers working to understand the ingredients of successful interventions now tout a wide range of prevention and treatment programs, many with an evidence base. Researchers know increasingly specific information regarding the etiology of children's mental health problems. Community-based treatment organizations have successfully implemented a vast network of "systems of care" communities that seek to unite and expand treatment teams to include the voice of the consumers (youth and families) served. Despite these efforts, the answers to some of the most basic questions about what works, for which child, in what context (Paul 1967) remain elusive. More discouragingly, despite their common goal, the work of researchers, practitioners, and proponents of systems of care often becomes mired in differences of philosophy, approach, and method. Nowhere is this more apparent than in issues related to evidence-based practices. Despite their promise to improve CMHC, significant debate and disagreement have kept evidence-based treatments (EBTs) and community-based systems of care from maximizing their potential in actual practice. For example, a growing bifurcation impedes our ability to promote useful clinical evidence. For many researchers, the meaning of evidence aligns with the traditional notions often associated with randomized clinical trials. Many others now reject that type of evidence as "sterile" and unrealistic, promoting instead what some call "practice-based evidence," or information that comes from actual clients in real clinical settings (Anker et al. 2009). Similarly, there is a growing call to find the core mechanisms that result in change so that treatment models can be more effectively implemented in community settings (Kazdin 2008). While some treatment models have clearly articulated mechanisms, others focus on broad principles without a clear understanding of how they work. Finally, even the definition and meaning of what constitutes an evidence-based practice is now a barrier. For some, EBTs represent a reliable and valid way to navigate the complex process of mental health treatment. For others, EBT is viewed as constraining, removing
clinical judgment and creativity from an inherently artful, human, and relational process. The result is that the current stakeholders in CMHC seem to be pursuing diverse and sometimes incompatible directions. Researchers and developers of evidence-based treatments typically follow the traditional path of developing efficacious treatments that they hope will then be shown to be effective in community settings following large-scale utilization. Proponents of community-based care systems (Stroul and Friedman 1994) hope that multiple "voices" and collaborative input will lead to improved client outcomes. Practitioners rely on their clinical judgment to address the "needs of the individual child" in hope of successful treatment outcomes. Consumers hope that their "voices" will direct practitioners and researchers toward relevant processes and outcomes. Meanwhile, policy makers hope for guidance to promote and pay for high-quality care. In each case, the problem with this isolated progression, not surprisingly, is that it does not account for the multitude of attitudinal, contextual, and resource considerations that are increasingly recognized not simply as barriers to the implementation and sustainability of evidence-based care but as important factors to incorporate into all stages of intervention development.
These issues are not specific to mental health care. The slow growth of, indeed resistance to, the promotion and sustainability of evidence-based approaches affects initiatives in education, health care, and other human service endeavors (Altman 2009; Clancy 2009). As Clancy (2009) states, "belief in the existence of 'transformative' interventions—from health information technology (IT) to disease management to many others—which will effect dramatic improvements, feeds the illusion that clarifying the targets of opportunity is equivalent to addressing them" (p. 3). Like Clancy (2009), we suggest that a number of core issues act as barriers that are not being addressed in the current conversation among researchers, community practitioners, and families involved in CMHC. The current and sometimes disparate approaches to model development, community practice, family involvement, and service delivery funding are not likely to result in advances that can meet the rising tide of need for effective, family-focused, and community-based CMHC. Rather, achieving transformation in CMHC will depend in part on finding a common core in the validity of different types of evidence and the research methods that produce them, a greater emphasis on the identification of core mechanisms of clinically relevant change, and a re-conceptualization of evidence-based practice (EBP) that highlights both science and practice. In this article we focus on these three issues with the intent of furthering the conversation among the key
stakeholders in CMHC and promoting a common core that will help advance treatment in community-based settings. We suggest that greater consideration of these core elements is necessary to move research and practice forward in a way that unites and builds bridges among families, practitioners, and researchers.
Expanding Our Understanding of Evidence: What Really Matters?

In the field of children's mental health, there is rarely proof of anything. One of the core issues and debates in CMHC centers on the appropriate type and source of the evidence that is, or should be, used as the basis for determining best practices. The dictionary definition of evidence is rather simple: "something that furnishes proof" (www.merriam-webster.com). Yet there is little consensus among researchers, practitioners, consumers, proponents of systems of care, and policy makers as to what constitutes the best type of evidence or the best method of identifying interventions that comprehensively represent the mental health needs of children and adolescents. Researchers tend to use the traditional scientific method that has come to be defined as Clinical Trial Research (CTR). CTR has become the "gold standard" within the researcher tradition; it starts with participants assigned to either a treatment or a control condition under laboratory conditions that control for confounding variables. This type of efficacy research is often followed by effectiveness research, in which studies are conducted in real-world clinical settings. In contrast, practitioners, consumers, and proponents of systems of care value evidence that starts with the individual stakeholder (youth, family, practitioner) and is generated by more naturalistic methods (e.g., practice-based evidence; Margison et al. 2000). For example, it has been suggested that effectiveness research may be preferred by the practice community over tightly controlled efficacy studies that are seen as less clinically relevant to real-world practice (Nelson and Steele 2008). Research is really just a process by which information is systematically gathered to answer the question at hand, whether through the ongoing assessment that occurs in typical treatment or the stringent scientific methods followed in a randomized controlled trial. While a comprehensive review of this important debate is beyond our scope, we suggest that two areas are particularly important in bringing a common core to CMHC: defining levels of evidence and operationalizing the notion of systematically gathering data. The lack of a common core on these fundamental elements is a central roadblock to the further adoption and development of effective practices in CMHC.
Defining Levels of Evidence

No single type of evidence best answers the complex questions of CMHC. This is not to say that all information is "evidence" and holds equal value in determining treatment choices. Rather, a diverse range of informative evidence can only improve our understanding and application of more effective methods of treatment. We suggest that there are at least three levels of evidence useful in determining treatment/intervention effectiveness: absolute, relative, and contextual (Sexton et al. 2004). Consider the various types of evidence that may help determine whether a treatment or intervention works. Outcome evidence that measures the success of the intervention or treatment compared to no treatment is a measure of absolute efficacy/effectiveness (Wampold 2003). While a blunt measure, it is useful in determining whether a treatment works even in the best of contexts. A more specific type of outcome evidence compares a treatment to a reasonable alternative (e.g., common factors, a different modality of treatment, or a different treatment). Termed relative efficacy/effectiveness, this type of evidence is useful in establishing that a particular treatment is the best choice for specific clients or types of problems. Even greater specificity of outcome evidence addresses whether a treatment is effective in varying community contexts. This critical third dimension, contextual effectiveness (Sexton et al. 2004), speaks to the context in which a treatment, intervention, or program works and is vital to determining the clinical utility of an intervention or treatment program in the varied and unique contexts of CMHC.
Each type of evidence is potentially useful. The utility of the evidence is not necessarily in its nature but instead depends on how it is used. One advantage of a levels approach is that evidence can be matched to the type of question it is intended to answer. For example, absolute efficacy/effectiveness may answer policy questions about what type of treatment to use, while practitioners will find little use in these broad and non-specific results. Similarly, relative and contextual efficacy/effectiveness answer questions for practitioners about what to "do" in a specific case but have little use for broad policy decisions. To answer the complex questions of CMHC, the best use of evidence is in matching the level of evidence to the type of question (Sexton et al. 2004).
Research evidence beyond the study of outcomes is also important. For example, the study of the dissemination and transportability of EBP shows much promise in providing direction to intervention developers on both system issues and intervention characteristics that are important to increasing the acceptability of EBP in typical care settings (Daleiden et al. 2006; Glisson et al. 2008; Greenhalgh et al.
2004; Woolston 2005). The use of process evaluation techniques and mixed-methods research designs is being promoted (Blow et al. 2007; Kazdin 2008), ideally not just as separate research strands but incorporated into efficacy and effectiveness research. Different research approaches bring different perspectives critical to understanding the complexity of youth and family functioning and therapeutic change. Furthermore, different research methods may be needed at different places in the developmental trajectory of a treatment intervention or program (Sexton et al. 2008).
Useful evidence also comes from various sources. Currently most evidence results from traditional research studies. Another source of potentially valuable evidence is the "practice as usual" settings that dominate CMHC (Kazdin 2008). A relatively new area of study, usual care (UC) has only recently evolved from a handy control group into an area worth understanding in and of itself (Weisz et al. 2006; Weisz and Gray 2008). For example, new measurement tools and strategies are being developed that tap into the unique domain of usual care (e.g., Miranda et al. in press; Kelley et al. in press).
Diversifying the Systematic Gathering of Evidence

There are many ways to gather useful information, which contributes to distinctions in how different forms of evidence are valued and subsequently used in research and practice settings. While the researcher may value "objective" measures, consumers, practitioners, policy makers, and proponents of systems of care all have perspectives that provide a critical picture of the real "outcome" of a treatment. For example, practitioners draw upon a wide variety of information sources in clinical decision-making. Clinical reasoning has been defined as incorporating research findings and credible theories, observations of the client, client preferences, and the practitioner's past experience and education (Nelson and Steele 2008; Shapiro et al. 2006). Practitioners in community settings are influenced by service delivery and setting variables, valuing interventions that are brief, flexible, and easy to learn and implement (Addis and Krasnow 2000; Nelson et al. 2006; Shapiro 2009). All of these inputs may influence a practitioner's decision making and treatment planning in complex and subtle ways. Most importantly, the information gathered in typical settings is rarely systematic, making it impossible to use as a basis for building evidence.
Regardless of its source, evidence that is consistent and reliable must be based on a systematic inquiry process in which the study abides by the principles of the method, the rigor of the study can be evaluated against an agreed-upon set of
standards, and the information that results is used only within the limits inherent in the methodological approach. Shapiro (2009) suggests that practitioners use multiple sources of information: a synthesis of outcome research and clinical reasoning based on a review of the strengths and weaknesses of each. Traditional research evidence provides outcomes relevant to groups, while clinical decision making is specific to individuals (Persons 2005). Unfortunately, researchers are often tied to CTR and traditional methods of scientific study, whereas practitioners are often averse to systematically gathering data as a normal part of practice (Bickman 2008; Kelley and Bickman 2009). Nowhere is this issue more apparent than in the current debate about evidence-based practice versus practice-based evidence. What is often lost is that useful, relevant, and valuable clinical evidence can be gathered outside traditional scientific studies to inform client-specific and agency-level decision-making (Chorpita and Viesselman 2005). Evidence on clinical process, outcomes, and decision-making can be collected systematically and easily on every case through computerized measurement feedback systems (MFSs; Bickman 2008; Harmon et al. 2007; Hawkins et al. 2004; Lambert et al. 2003). These data form the backbone of a potentially flexible, real-time information system that can inform individual treatment as well as contribute to building contextual efficacy evidence in a more efficient and timely manner. The use of MFS information with individual consumers has been termed "evidence farming" and provides an ideal methodology for weighing outcomes-based evidence against locally relevant information to promote scientifically valid and clinically useful knowledge (Cameron Hay et al. 2008). The aggregation of such data across clients, practitioners, and settings could revolutionize the way evidence is accumulated and used to inform intervention development at all levels.
The implication is that there is more to evidence than traditional research methods such as CTR. While necessary, CTR is not always the most useful, valid, and appropriate approach to the complex domain of children's mental health (Sexton et al. 2008). Instead, it is simply a level of evidence that "clarifies a target of opportunity" (Clancy 2009) but does not typically address the contextual issues that influence how care is delivered and received in the practice community. Broadening the accepted and understood levels of evidence, and increasing the diversity with which evidence is gathered, could help overcome the division among stakeholders in CMHC and move the field toward a more diverse arena in which various types of systematically gathered evidence are valued yet understood for their level and potential contribution to the complex formula of treatment.
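To make concrete what systematic, case-level gathering through an MFS can look like, the sketch below illustrates the core logic in highly simplified form: session-by-session scores are logged for every case, cases that are not improving are flagged for the practitioner, and the same records aggregate into contextual evidence across clients. This is a minimal illustration in Python, not any actual MFS product; the class names, the single symptom scale, and the no-improvement-from-baseline rule are hypothetical simplifications, and real systems such as those cited above rely on validated measures and empirically derived expected-change trajectories.

from dataclasses import dataclass, field
from statistics import mean
from typing import Dict, List

@dataclass
class SessionRecord:
    """One systematically gathered data point: a symptom score at one session."""
    client_id: str
    session: int
    symptom_score: float  # hypothetical scale; higher = more distress

@dataclass
class FeedbackSystem:
    """Stores per-client trajectories, flags off-track cases, aggregates evidence."""
    records: Dict[str, List[SessionRecord]] = field(default_factory=dict)

    def log(self, record: SessionRecord) -> None:
        """Record one session's measure for a client."""
        self.records.setdefault(record.client_id, []).append(record)

    def off_track(self, client_id: str, min_sessions: int = 3) -> bool:
        """Flag a client whose latest score shows no improvement over baseline."""
        history = self.records.get(client_id, [])
        if len(history) < min_sessions:
            return False  # too early to judge
        return history[-1].symptom_score >= history[0].symptom_score

    def aggregate_change(self) -> float:
        """Mean baseline-to-latest improvement across clients (contextual evidence)."""
        changes = [h[0].symptom_score - h[-1].symptom_score
                   for h in self.records.values() if len(h) >= 2]
        return mean(changes) if changes else 0.0

# Usage: log scores session by session; review flags before each visit.
mfs = FeedbackSystem()
for session, score in enumerate([22, 21, 23], start=1):
    mfs.log(SessionRecord("client-a", session, score))
print(mfs.off_track("client-a"))   # True: prompts review of the treatment plan
print(mfs.aggregate_change())      # -1.0: no aggregate improvement yet

The point of the sketch is the feedback loop itself: the same routinely collected records serve the individual case in real time and, once aggregated, become the kind of systematically gathered evidence described above.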
Identifying Mechanisms of Change: The Core of Effective Practice

As noted by Kazdin (2008, 2009), understanding and promoting effective treatment is best accomplished by understanding the "mechanisms" of action that cumulatively result in positive outcomes for youth and families. Mechanisms are what is "inside the black box" of a treatment. Knowing what the mechanisms are gives knowledge of mental health treatments more validity for consumers and their advocates, more utility for practitioners attempting to design treatments that fit families, and more specificity for model developers attempting to improve treatment approaches. Understanding mechanisms of action allows for better care, improved implementation, and better adoption by community agencies. Understanding change mechanisms also helps with the development of new treatments that may go beyond those currently considered evidence-based.
Understanding mechanisms of treatment requires a greater focus on the processes involved in treatment rather than focusing solely on the target of improved youth outcomes. We suggest that the discussion that best moves the field forward is one that focuses not on "which horse is fastest" or "which model is superior" but instead looks at the common core elements of what makes a good treatment successful. Focusing on mechanisms also forces a theoretical specificity, resulting in clearer articulations of what is most clinically useful in treatment approaches.
There are a number of examples. Functional Family Therapy (FFT) is an evidence-based family intervention program for adolescents with a series of outcome studies demonstrating successful outcomes with a range of youth, in diverse cultures and contexts (Alexander et al. 2000a, b; Sexton and Alexander 2006). What makes FFT clinically useful is less its overall package than the studies that have helped identify its core mechanisms of change (Sexton 2009). In the early stages of treatment, for example, the evidence suggests that FFT works because of three specific mechanisms of action: reduction of within-family negativity, reduction of within-family blame, and a refocusing of the "problem" from the individual to the family. The proposed outcome of these mechanisms is that when they occur, families develop a greater alliance, increased treatment motivation, and further engagement and involvement in the treatment process. This is similar to recent evidence suggesting that the active ingredient in CBT may lie less in the domain of cognitive change and more in exposure (Kazdin 2009).
Another example is the growing literature on "therapist effects" (e.g., Wampold 2001). It is common sense that the practitioners themselves are a key mechanism of change in psychotherapy, yet in typical CTR they are standardized,
with any resulting differences treated as noise (or fidelity drift). On the other hand, in the common factors literature (e.g., Simon 2006), practitioners are treated as the sole source of treatment effects. We take the middle ground, not as a neutral stance, but because we believe both treatment models and practitioners likely contribute to youth outcomes in unique ways as well as through complex interactions (Sexton 2007). There is a great need to study the practitioners themselves as agents of change in the therapeutic process. For example, how do practitioners decide when to adjust a treatment model and when to adhere to it, and how do such decisions influence treatment process and outcomes? In order to understand this "flexibility within fidelity" (Kendall et al. 2008), it would also be useful to learn more about the clinical reasoning process that practitioners engage in to tailor treatment to individual children and families. This can and should happen separately from EBT research as an attempt to better understand the concept of clinical competence. Practitioner competence has been suggested as a major common factor in and of itself, differentiated from adherence (e.g., the knowledge of how to intervene with specific outcomes envisioned) as a more complex and nuanced type of knowledge that is fluid and whose outcomes are context-dependent (Blow et al. 2007; Sharpless and Barber 2009; Staines 2008). However, in order to truly inform intervention development, we must better understand where adherence to core treatment components is important to good treatment outcomes and where flexibility, or adjustment of treatment models, contributes to outcomes. This is not a new idea, but we would argue that we now have the technology to build such a research design with an MFS in combination with an EBT, such as the FFT model described above, or a treatment strategy matching model (e.g., Chorpita et al. 2005; Chorpita and Daleiden 2009).
By focusing on mechanisms of action, the basic conversation changes and facilitates a cross-fertilization between and among treatments, ultimately forming better approaches. In addition, practitioners who must make a myriad of clinical decisions are better served by knowing exactly what "to do" than by the broad and general "stick to the model" mantra of monolithic treatment packages. A "mechanism" approach allows constant adjustment of approaches by practitioners and more fine-tuned study by researchers. It also provides a transparency to consumers and policy makers that helps get beyond labels ("an EBP") to the actual actions of the treatment and the specific ways it may help a family. As noted by Kazdin (2008, 2009), mechanisms of change open the treatment process to consumers, practitioners, and treatment organizations and allow more perspectives at the table.
Evidence-based Practices Reconsidered

A range of practices fall under the umbrella of "evidence-based practices," from common practices, to specific techniques, to systematic treatment programs. While promoted as a potential answer to better outcomes in CMHC, EBPs (we would argue, more particularly, EBTs) are not universally accepted. Some argue that these programs are nothing more than good common care "repackaged," or good case management and general care that does nothing more than bring a systematic lens to treatment (Hubble et al. 1999; Simon 2006). The resulting debates over common factors versus model-specific factors seem to lose one of the core elements: how treatment works, and what interacts to enhance or detract from the change process that occurs in treatment (Eisler 2006). Over time the core intention of EBPs has been lost in the recent debate about the relevance of EBTs to real-world settings, a debate that has divided the important stakeholders rather than united them (Kazdin 2008; Sexton and Gordon 2009). The common core is that the central principle of an EBP is the same one sought by all key stakeholders: that youth and families have the opportunity (and right) to receive the best available treatment for the specific problems they experience.
At their core, EBPs are neither magical answers to complex questions nor over-simplistic approaches to complex problems. EBPs are specific approaches (programs or interventions) that produce positive outcomes for clients based on a "body of scientific knowledge about service practices… [and] the impact of clinical treatments or services on the mental health problems of children and adolescents" (Hoagwood et al. 2001, p. 1179). This definition of EBP emphasizes the systematic nature of the knowledge gained through scientific inquiry. EBPs are intended to start with the consumer's need, involve multiple and ongoing points of assessment, and be open to constant adaptation and adjustment based on treatment progress (APA Presidential Task Force on Evidence-Based Practice 2006). Looking beyond the labels and debates, the intended goal of better outcomes is one shared by all.
In the current debate over evidence-based practices and treatments, differences are often reflected in the terms evidence-based practice versus common factors or practice-based evidence (Kelley et al. 2010). We do not see these as competing paradigms but rather as complementary, each with its strengths and weaknesses. In fact, in CMHC there will never be an "answer" to the question of what is "best." The complexities of clients, contexts, and clinical problems are too involved to ever produce simple and easy answers. Unfortunately, EBPs are often seen as providing this answer. One of the inherent struggles with EBP
will always be the need to accept the ambiguity of not having a "specific" answer (Sexton et al. 2008). The issues of whether all treatments fit all people and whether EBPs actually work in "real" settings (among many beyond the scope of this paper) represent important and valuable issues to be further investigated (see Kazdin 2008; Westen et al. 2004). In fact, some (Bickman 2008) would argue that there is really little specific evidence and that the field of EBP is based only on "blunt" outcomes that lack specificity at best. With the caveat that there is much more to learn and that there are important issues being raised about EBP, we suggest a more inclusive and common approach to defining the core principles of what an EBP is in order to move the field forward. These would include the following:

1. Evidence-based child and adolescent mental health treatments are both scientifically sound and clinically relevant. Scientifically sound means that they are based on evidence gathered in a systematic way with replicable outcomes. Clinically relevant means that the outcomes are those most important to the daily lives of the people they serve. Thus, they are based on both science and the accumulated clinical knowledge of experienced practitioners, which together can identify both the efficacy (reliability) and the utility (contextual efficacy) of clinical procedures in actual practice.

2. Establishing scientific soundness requires understanding that EBPs represent a range of clinical and administrative procedures that vary in specificity, ranging from common factors, to scientifically based principles and interventions, to comprehensive treatment packages designed for unique mental health problems. As noted by Kazdin (2008), there are evidence-based treatments (specific programs with a range of evidence), evidence-based practices (e.g., Chorpita and Daleiden 2009), and common factors (e.g., Karver et al. 2006) applicable to good practice in CMHC. Regardless of the level of intervention, evidence-based interventions and treatment programs share some core elements. Each is built upon: (a) targeting clinically meaningful problems as the focus of interest; (b) a coherent conceptual framework underlying the clinical interventions; (c) specific, core interventions described in detail with an articulation of the practitioner qualities necessary to follow them; and, to the degree possible, (d) process research that identifies how the change mechanisms work; and (e) outcome research that demonstrates that it works (Kazdin 1997; Sexton and Gordon 2009).

3. Clinical relevance is enhanced when evidence-based interventions and treatment programs result in outcomes that go beyond single measures and statistical indications of positive outcome. Clinically relevant outcomes are those that matter not to the researcher but to the daily life functioning of the client. They may include changes in individual and family functioning, reductions in clinical symptoms (e.g., depression, family conflict), and improved client well-being. On an administrative level, relevant outcomes may include resource use such as cost increases, supervision needs, and time available for needed staff training. Such a focus on clinically relevant outcomes could serve to increase the credibility of research efforts, thus improving their potential for use by practitioners and policy makers (Weiss et al. 2008).

4. Evidence-based practices and treatments are only effective when part of a client-centered alliance based upon the relationship between client and interventionist (Sexton et al. 2004). For example, simple "check list" approaches (e.g., some versions of cognitive behavioral therapy) are not the same as models that build in the relational and interaction factors between client and practitioner. As another example, some family-based therapies operate within, and are based upon, the relational factors between client and practitioner.

5. There is no single evidence-based treatment or practice that fits all consumers with all problems in all situations. In fact, there never will be such a practice. Unlike medicine's targeting of specific treatment protocols to specific diagnostic categories, in CMHC evidence-based practices/treatments will never address all the needs of clients in community settings. In fact, one of the most critical issues is how to match the needs and preferences of consumers to the specific procedures that may best fit the consumer and community context.
Conclusions: Building a Common Core

The questions of what an EBP is, what constitutes the best evidence, which methods produce the most systematic and reliable evidence, and how to identify core mechanisms of change are complex. Even with significant advancements in each, we suggest that continuing to pursue these domains as separate efforts will never move fast enough, and the incremental gains made in community settings and research laboratories will never reach those who need them most, until we overcome the core barriers and concomitant attitudes that divide the field. If we can find the common core by reconsidering what constitutes evidence-based practices, expanding our understanding of the evidence they are based upon, and increasing our focus on the
mechanisms through which they work, the field is likely to advance in broader ways that move beyond the differing values of researchers, practitioners, families, and funders.
In order to do this, we need to promote a sense of urgency for change to transform the field of CMHC. This will require revisiting the training of both practitioners and researchers to be more inclusive of multidisciplinary evidence and theory on system change (e.g., Greenhalgh et al. 2004; Grol et al. 2007) and on health care delivery infrastructure and management. Practitioners in training need to be exposed to multiple evidence-based interventions and models (Blow et al. 2007). In addition, the continued expansion of collaborative efforts to address the systemic issues preventing the quality improvement of mental health practice requires bringing researchers, practitioners, policy makers (including funding agencies and state regulatory organizations), proponents of systems of care, and consumers together to form communities of systematic inquiry that can achieve breakthroughs (Altman 2009).
It is a bit like the timeless story of the elephant and the blind men: depending on what part you are holding, you might experience an entirely different shape of the problem. A return to basics means recognizing that nobody has the correct "piece of the elephant" and that no single perspective owns the elephant. Instead, the complexity means that the multiple perspectives form more of a dialectic than different camps. Each is useful and inexorably linked to the others. To address the complexity we need to reconsider the basic elements and find a common ground that breaks down barriers and unites the diverse trajectories and perspectives. CMHC is ultimately a human endeavor that involves multiple and often competing value systems. We must move beyond searching for simply technical solutions to improving the quality of care by embracing the broader social, attitudinal, and infrastructure development that is necessary to create sustainable change (Altman 2009; Clancy 2009; Grol et al. 2007).
References

Addis, M. E., & Krasnow, A. D. (2000). A national survey of practicing psychologists' attitudes toward psychotherapy treatment manuals. Journal of Consulting and Clinical Psychology, 68(2), 331–339.
Alexander, J. F., Pugh, C., Parsons, B., & Sexton, T. L. (2000a). Functional family therapy. In D. Elliott (Series Ed.), Book three: Blueprints for violence prevention (2nd ed.). Golden, CO: Venture Publishing.
Alexander, J. F., Robbins, M. S., & Sexton, T. L. (2000b). Family-based interventions with older, at-risk youth: From promise to proof to practice. Journal of Primary Prevention, 42, 185–205.
Altman, D. G. (2009). Challenges in sustaining public health interventions. Health Education & Behavior, 36(1), 24.
Anker, M. G., Duncan, B. L., & Sparks, J. A. (2009). Using client feedback to improve couple therapy outcomes: A randomized clinical trial in a naturalistic setting. Journal of Consulting and Clinical Psychology, 77(4), 693–704.
APA Presidential Task Force on Evidence-Based Practice. (2006). Evidence-based practice in psychology. American Psychologist, 61(4), 271–285.
Bickman, L. (2008). A measurement feedback system (MFS) is necessary to improve mental health outcomes. Journal of the American Academy of Child & Adolescent Psychiatry, 47(10), 1114.
Blow, A. J., Sprenkle, D. H., & Davis, S. D. (2007). Is who delivers the treatment more important than the treatment itself? The role of the therapist in common factors. Journal of Marital and Family Therapy, 33(3), 298.
Cameron Hay, M., Weisner, T. S., Subramanian, S., Duan, N., Niedzinski, E. J., & Kravitz, R. L. (2008). Harnessing experience: Exploring the gap between evidence-based medicine and clinical practice. Journal of Evaluation in Clinical Practice, 14, 707–713.
Chorpita, B. F., & Daleiden, E. L. (2009). Mapping evidence-based treatments for children and adolescents: Application of the distillation and matching model to 615 treatments from 322 randomized trials. Journal of Consulting and Clinical Psychology, 77(3), 566–579.
Chorpita, B. F., Daleiden, E. L., & Weisz, J. R. (2005). Identifying and selecting the common elements of evidence based interventions: A distillation and matching model. Mental Health Services Research, 7(1), 5–20.
Chorpita, B. F., & Viesselman, J. O. (2005). Staying in the clinical ballpark while running the evidence bases. Journal of the American Academy of Child & Adolescent Psychiatry, 44(11), 1193–1197.
Clancy, C. (2009). Building the path to high-quality care. Health Services Research, 44(1), 1.
Daleiden, E. L., Chorpita, B. F., Donkervoet, C., Arensdorf, A. M., & Brogan, M. (2006). Getting better at getting them better: Health outcomes and evidence-based practice within a system of care. Journal of the American Academy of Child & Adolescent Psychiatry, 45(6), 749–756.
Eisler, I. (2006). The heart of the matter—a conversation across continents. Journal of Family Therapy, 28(4), 329.
Glisson, C., Landsverk, J., Schoenwald, S., Kelleher, K., Hoagwood, K. E., Mayberg, S., et al. (2008). Assessing the organizational social context (OSC) of mental health services: Implications for research and practice. Administration and Policy in Mental Health and Mental Health Services Research, 35(1), 98–113.
Greenhalgh, T., Robert, G., MacFarlane, F., Bate, P., & Kyriakidou, O. (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. The Milbank Quarterly, 82(4), 581–629.
Grol, R., Bosch, M. C., Hulscher, M., Eccles, M. P., & Wensing, M. (2007). Planning and studying improvement in patient care: The use of theoretical perspectives. The Milbank Quarterly, 85(1), 93–138.
Harmon, S. C., Lambert, M. J., Smart, D. M., Hawkins, E., Nielsen, S. L., Slade, K., et al. (2007). Enhancing outcome for potential treatment failures: Therapist-client feedback and clinical support tools. Psychotherapy Research, 17(4), 379–392.
Hawkins, E. J., Lambert, M. J., Vermeersch, D. A., Slade, K. L., & Tuttle, K. C. (2004). The therapeutic effects of providing patient progress information to therapists and patients. Psychotherapy Research, 14(3), 308–327.
Hoagwood, K., Burns, B. J., Kiser, L., Ringeisen, H., & Schoenwald, S. K. (2001). Evidence-based practice in child and adolescent mental health services. Psychiatric Services, 52(9), 1179–1189.
Hubble, M. A., Duncan, B. L., & Miller, S. D. (1999). Directing attention to what works. In The heart and soul of change: What works in therapy (pp. 407–447).
Karver, M. S., Handelsman, J. B., Fields, S., & Bickman, L. (2006). Meta-analysis of therapeutic relationship variables in youth and family therapy: The evidence for different relationship variables in the child and adolescent treatment outcome literature. Clinical Psychology Review, 26(1), 50–65.
Kazdin, A. E. (1997). A model for developing effective treatments: Progression and interplay of theory, research, and practice. Journal of Clinical Child Psychology, 26, 114–129.
Kazdin, A. E. (2008). Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. American Psychologist, 63(3), 146.
Kazdin, A. E. (2009). Understanding how and why psychotherapy leads to change. Psychotherapy Research, 19(4), 418–428.
Kelley, S. D., & Bickman, L. (2009). Beyond outcomes monitoring: Measurement feedback systems in child and adolescent clinical practice. Current Opinion in Psychiatry, 22(4), 363–368.
Kelley, S. D., Bickman, L., & Norwood, E. (2010). Evidence-based treatments and common factors in youth psychotherapy. In B. Duncan, S. Miller, & B. Wampold (Eds.), The heart and soul of change: Delivering what works in therapy (2nd ed., pp. 325–355). Washington, DC: American Psychological Association.
Kelley, S. D., Vides de Andrade, A. R., Sheffer, E., & Bickman, L. (in press). Exploring the black box: Measuring youth treatment process and progress in usual care. Administration and Policy in Mental Health and Mental Health Services Research (special issue: Psychotherapy Practice in Usual Care).
Kendall, P. C., Gosch, E., Furr, J. M., & Sood, E. (2008). Flexibility within fidelity. Journal of the American Academy of Child & Adolescent Psychiatry, 47(9), 987–993.
Lambert, M. J., Whipple, J. L., Hawkins, E. J., Vermeersch, D. A., Nielsen, S. L., & Smart, D. W. (2003). Is it time for clinicians to routinely track patient outcome? A meta-analysis. Clinical Psychology: Science and Practice, 10(3), 288–301.
Margison, F. R., Barkham, M., Evans, C., McGrath, G., Clark, J. M., Audin, K., et al. (2000). Measurement and psychotherapy: Evidence-based practice and practice-based evidence. British Journal of Psychiatry, 177, 123–130.
Miranda, J., Azocar, F., & Burnam, A. (in press). Assessment of evidence-based psychotherapy practices in usual care: Challenges, promising approaches, and future directions. Administration and Policy in Mental Health and Mental Health Services Research (special issue: Psychotherapy Practice in Usual Care).
Nelson, T. D., & Steele, R. G. (2008). Influences on practitioner treatment selection: Best research evidence and other considerations. The Journal of Behavioral Health Services and Research, 35(2), 170–178.
Nelson, T. D., Steele, R. G., & Mize, J. A. (2006). Practitioner attitudes toward evidence-based practice: Themes and challenges. Administration and Policy in Mental Health and Mental Health Services Research, 33(3), 398–409.
Paul, G. L. (1967). Outcome research in psychotherapy. Journal of Consulting Psychology, 31, 109–118.
Persons, J. B. (2005). Empiricism, mechanism, and the practice of cognitive-behavior therapy. Behavior Therapy, 36, 107–118.
Sexton, T. L. (2007). The therapist as a moderator and mediator in successful therapeutic change. Journal of Family Therapy, 29, 103–107.
Sexton, T. L. (2009). Functional family therapy: Traditional theory to evidence-based practice. In J. H. Bray & M. S. Stanton (Eds.), The Wiley-Blackwell handbook of family psychology (pp. 327–340). New York: John Wiley & Sons.
Sexton, T. L., & Alexander, J. F. (2006). Functional family therapy for externalizing disorders in adolescents. In J. Lebow (Ed.), Handbook of clinical family therapy (pp. 164–194). New Jersey: John Wiley.
Sexton, T. L., Alexander, J. F., & Mease, A. L. (2004). Levels of evidence for the models and mechanisms of therapeutic change in family and couple therapy. In Bergin and Garfield's handbook of psychotherapy and behavior change (pp. 590–646).
Sexton, T. L., & Gordon, K. (2009). Science, practice, and evidence-based treatment in the clinical practice of family psychology. In J. Bray & M. Stanton (Eds.), The Wiley-Blackwell handbook of family psychology (pp. 314–326). New York: John Wiley & Sons.
Sexton, T. L., Kinser, J. C., & Hanes, C. W. (2008). Beyond a single standard: Levels of evidence approach for evaluating marriage and family therapy research and practice. Journal of Family Therapy, 30(4), 386–398.
Shapiro, J. P. (2009). Integrating outcome research and clinical reasoning in psychotherapy planning. Professional Psychology: Research and Practice, 40(1), 46–53.
Shapiro, J. P., Friedberg, R. D., & Bardenstein, K. K. (2006). Child and adolescent therapy: Science and art. New York: Wiley.
Sharpless, B. A., & Barber, J. P. (2009). A conceptual and empirical review of the meaning, measurement, development, and teaching of intervention competence in clinical psychology. Clinical Psychology Review, 29(1), 47–56.
Simon, G. M. (2006). The heart of the matter: A proposal for placing the self of the therapist at the center of family therapy research and training. Family Process, 45, 331–344.
Staines, G. L. (2008). The relative efficacy of psychotherapy: Reassessing the methods-based paradigm. Review of General Psychology, 12(4), 330–343.
Stroul, B. A., & Friedman, R. M. (1994). A system of care for children and youth with severe emotional disturbances (Revised ed.). Washington, DC: Georgetown University Child Development Center, CASSP Technical Assistance Center.
Wampold, B. E. (2001). The great psychotherapy debate: Models, methods, and findings. Mahwah, NJ: Lawrence Erlbaum.
Wampold, B. E. (2003). Bashing positivism and reversing a medical model under the guise of evidence. The Counseling Psychologist, 31, 539–545.
Weiss, C. H., Murphy-Graham, E., Petrosino, A., & Gandhi, A. G. (2008). The fairy godmother—and her warts: Making the dream of evidence-based policy come true. American Journal of Evaluation, 29(1), 29–47.
Weisz, J. R., & Gray, J. S. (2008). Evidence-based psychotherapy for children and adolescents: Data from the present and a model for the future. Child and Adolescent Mental Health, 13(2), 54–65.
Weisz, J. R., Jensen-Doss, A., & Hawley, K. M. (2006). Evidence-based youth psychotherapies versus usual clinical care: A meta-analysis of direct comparisons. American Psychologist, 61, 671–689.
Westen, D., Novotny, C. M., & Thompson-Brenner, H. (2004). The empirical status of empirically supported psychotherapies: Assumptions, findings, and reporting in controlled clinical trials. Psychological Bulletin, 130, 631–663.
Woolston, J. L. (2005). Implementing evidence-based treatments in organizations. Journal of the American Academy of Child & Adolescent Psychiatry, 44(12), 1313.