In: Evaluation Practice, 17(3), 261-272.
Integrating Program and Evaluation Values: A Family Support Approach to Program Evaluation
Beth L. Green, Ph.D. Portland State University
Laurie Mulvey, A.C.S.W. Heather A. Fisher, B. A. Flora Woratschek, M.A. University of Pittsburgh
Running Head: A FAMILY SUPPORT APPROACH TO EVALUATION
Questions or comments regarding this manuscript should be directed to the first author at: 317 Cramer Hall, Department of Psychology, Portland State University, Portland, OR 97207-0751. Email: [email protected]
Acknowledgments
This work could not have been done without ongoing support and input from the Partnerships for Family Support administrator, Ms. Eartha Sewell, and her staff, as well as the program directors, staff, and parents of the PFFS family support centers. We would also like to thank Robert B. McCall and Esther Sales for their thoughtful comments on earlier versions of this manuscript, as well as the helpful comments of four anonymous reviewers. Thanks also to Cathy Kelly for her administrative help in preparing this document. This work was supported by the Allegheny County Department of Human Services, the Howard Heinz Endowments, and an Urban-Community Services Program Grant No. P252A20082 from the U. S. Department of Education.
Family Support Approach, p. 1
Abstract
In the past several years, models of social service provision that are based on principles of "family support" have been implemented nationwide. Development of evaluation methods that adequately reflect the multifaceted nature of these programs, however, has lagged behind program development. It is particularly difficult, especially within the context of traditional evaluation paradigms, to develop evaluations that do not in and of themselves violate the principles of family support, which advocate for services that are strengths-based, collaborative, family-centered, comprehensive, and flexible. To this end, we have been collaborating with local agency directors, family support staff, and parents to develop an evaluation approach that is based on the general principles of family support. The work that we describe draws techniques from participatory, empowerment, and utilization-focused evaluation approaches, but operationalizes evaluation activities based on the guiding program value system. By tailoring the evaluation to the service philosophy, we contend that the evaluation process can become more thoroughly integrated with the service program and thus provide a more useful and accurate picture of the strengths and weaknesses of the program.
Integrating Program and Evaluation Values: A Family Support Approach to Evaluation
Introduction
The last decade has been marked by a growing trend in human services that recognizes the importance of providing early, comprehensive, and individualized services to families with young children. As a part of this movement, models of social service provision based on principles of "family support" have begun to be widely implemented nationwide (Dunst, Trivette, Starnes, Hamby, & Gordon, 1993; Weiss & Jacobs, 1988). Such models include prevention-oriented programs that have evolved from the settlement house model, which focus on community-based efforts to strengthen families (Weissbourd & Kagan, 1989), as well as intervention-oriented programs that have their roots in service programs for developmentally disabled children (Dunst et al., 1993). Both prevention and intervention family support programs aim to "enable and empower people by enhancing and promoting individual and family capabilities that support and strengthen family functioning" (Dunst et al., 1993, p. 4). These programs are characterized by their partnership-based approach to working with families, their emphasis on building upon family strengths (rather than on eliminating deficits), and their provision of a comprehensive array of services tailored to the needs of individual families (Kagan & Shelley, 1987).
Evaluation and Family Support Programs
As Weiss and Jacobs (1988) and others have noted, program development and theory in family support have far outdistanced efforts to develop evaluations that can adequately reflect the multifaceted
nature of these programs. Family support programs reflect a change in the "way of doing business" in the human services system, from responding to single problems with a single standard set of treatments to responding to multiple risk factors with a comprehensive, prevention-based approach. Evaluations of these programs therefore face unique challenges, both conceptually and methodologically. Further, it is difficult, especially within the context of traditional evaluation paradigms, to develop evaluations that do not in and of themselves violate the principles of family support services. For example, experimental designs employing random assignment of participants continue to be held by many as the "gold standard" of evaluation research; however, such a design undermines the principle of universal access that is at the core of many family support programs (Kagan & Shelley, 1987). Indeed, the use of formal and informal supports deriving from increased community cohesion is one of the key ways in which family support programs differ from more traditional social service institutions. Thus, the use of random assignment within a community changes the very nature of the program (Dunst et al., 1993; Weiss, 1987). Despite this fact, evaluations of national family support demonstration projects, such as the Comprehensive Child Development Program, continue to rely on this technique (Smith & Lopez, 1994). Evaluators working in the field may have experienced other similar incongruencies between methods and service paradigms, such as between collecting data through traditional methods (e.g., structured interviews, cross-group comparisons) and the individualized, family-friendly approaches used by service providers. Knapp (1995) describes several major issues confronting evaluators of comprehensive,
collaborative programs (like family support), including: (1) engaging divergent participants' perspectives, (2) defining and measuring the "independent" and "dependent" variables, (3) the difficulty of attributing cause, and (4) the difficulty of capturing the complex interrelationship between providers and consumers of these types of services. Despite these issues, we believe that evaluation can provide useful information to inform family support programs. The challenge to the profession is to develop evaluation strategies that can creatively address these very real concerns. To this end, we have been working with local agency directors, family support staff, and parent groups to develop an evaluation approach that is appropriate for evaluating family support programs. Our experience with such evaluations suggests that one method for bridging the gap between program practice and evaluation is through the identification of a shared set of principles that can provide the basis for both service delivery and evaluation -- namely, the family support principles. We suggest that evaluations of family support programs, like the programs themselves, should be strengths-based, collaborative (specifically, involving family and community members), and comprehensive/flexible. Thus the evaluation, like the program, becomes part of the process of empowering families (Fetterman, 1996; Weiss & Greene, 1992). To demonstrate our approach, we outline the basic family support principles and some of the ways in which they can be used to shape evaluation design, methods, and ongoing utilization. We draw examples from our experiences in implementing a family support evaluation in several different communities in Allegheny County (Pittsburgh), Pennsylvania.
Principles of a Family Support Evaluation
(1) Strengths-based. Just as family support services are based on the assumption that all families have strengths that can be nurtured and developed, our evaluation approach assumes that family support programs have strengths that should be documented and used as the basis for program improvement. Given this assumption, we suggest that family support program evaluations should be formative in approach. Thus, evaluation is seen as a process rather than as a tool to deliver the "bottom line" about program success or failure. Although some may argue that all evaluation is inherently improvement-focused, we have found that taking an explicitly improvement-oriented approach requires different methods and emphases compared to more traditional outcomes-based evaluations.
Issues and sample solutions. There remains a significant challenge, however, to ensure that the evaluation uses a strengths-based approach. Although much has been made in recent years of using a strengths-based approach to service delivery, little has been written about what exactly such an approach entails, and evaluation is even less oriented to using program strengths rather than focusing on problems. At the most basic level, however, strengths-based evaluation should, like strengths-based service delivery, actively seek out areas in which a program is doing well and identify ways in which those strengths can be used to improve the program. Strengths-based evaluation, therefore, must give serious attention to methodologies that can be useful in defining and identifying program successes. It should be noted, however, that we are not advocating that evaluations focus solely on program strengths. The formative approach requires identification of both strengths and weaknesses to ensure continuing improvement. What distinguishes a strengths-based approach is the corresponding
attention to identifying strengths and the utilization of strengths in addressing identified weaknesses. One technique that we have used involves a periodic "peer review" in which family support advocates, participants, and staff visit existing centers (at the centers' request) and conduct a strengths-based assessment of the center activities. After two days of intensive site visit activities, peer review committees employ instruments that define key characteristics of family support programs and then prompt committee members to identify program strengths related to these characteristics. The committees also make suggestions about ways that these strengths might be generalized to other areas. To provide this feedback to the centers, half-day sessions are held with the center staff, directors, and participants to debrief and discuss the suggestions of the peer reviewers. For example, one program that was found to be exceptionally good at bringing families into the center for recreational events focused on how this ability could be used to bring families to the center for more substantive activities (starting with recreational activities tailored to a substantive theme, such as healthy cooking). Strengths evidenced by certain programs or program staff are sometimes parlayed into trainings for others. This showcases the strengths of the agency that conducts the training, as well as disseminating successful approaches to other programs. We have also used staff records of family contacts to identify areas in which service delivery was particularly strong. For example, an initial analysis of contact logs at one center indicated that a good deal of staff time was devoted to informal discussions to engage families in the program and to getting families involved in recreational activities. While recognizing that these activities were important for getting people engaged in the program, funders were concerned that the services were too "soft."
Therefore, several decisions were made in a collaborative oversight group to build on the staff's ability to involve families in recreational activities, but to focus on including in these activities services that were more directly linked to the program's goals: child development services, education and training services, and health services. Subsequent analysis of contact logs indicated that, indeed, substantially more work was being done in these substantive areas.
(2) Collaborative. Family support services are collaborative on a number of levels, including systems-level collaborations (pooling monies across funding streams), inter-agency collaborations (bringing multiple providers together to work on behalf of families), collaborations between families and staff (sharing of decision-making in helping families achieve their goals), and collaborations among families themselves (families working together to achieve mutually desired goals). Perhaps most importantly, family support programs "do not do things to but with parents" (Weiss, 1987, p. 145, emphases in original). Similarly, a family support program evaluation should do things with programs and families and not to programs and families. This requires that the evaluator, programs, and families strive to be active participants in the collaborative process of designing, implementing, and using the evaluation. In this way a family support evaluation exemplifies a participatory evaluation approach, a key component of which is the identification and engagement of stakeholders who can participate actively in the evaluation process (Cousins & Earl, 1992; Greene, 1987; Patton, 1986). The approach that we have used involves establishing a group of stakeholders to participate in an evaluation oversight committee that meets in an ongoing fashion to make decisions about the evaluation, analyze and interpret evaluation information, and suggest strategies for program improvement.
Patton (1986) suggests that the only stakeholders of interest are those who are the "primary intended users of the evaluation." However, because a family support approach requires partnership with program participants, who may not be direct "users" of the evaluation, we define stakeholders following an earlier, broader definition (Gold, 1983; Lincoln & Guba, 1985) as persons or groups with a vested interest in a program or its evaluation. Thus, we include persons who are directly affected by the program (e.g., program participants, community members, and program staff) and those who have the power to influence decisions about the program's implementation (e.g., administrators and funders). Further, we have come to recognize that the evaluator (objective scientist though s/he may be) is indeed a stakeholder in the evaluation. However, all involved stakeholders are "evaluators" in the sense that there is ongoing shared decision-making about the evaluation. The evaluator must give up his/her role as the evaluation "expert" (e.g., Patton, 1986) and adopt such multiple roles as facilitator, negotiator, educator, and, in the context of family support evaluations, advocate for adherence to family support principles (Fetterman, 1996). The family support approach focuses on partnerships not only with program supervisors, staff, and funders, but also with program participants and community members. Inclusion of these latter groups helps to address two other fundamental principles of family support: that services be (1) respectful of cultural diversity and (2) family- and community-driven. By bringing family and community members into the decision-making process of the evaluation, a voice is given to a population that is often not empowered to make its own decisions about the programs that serve it. As described previously, empowerment is a key mechanism underlying the family support philosophy, and evaluations
that provide participants an opportunity to have real decision-making power foster this empowerment (e.g., Fetterman, 1996).
Issues and sample solutions. Including program participants in the evaluation process, however, does not necessarily ensure either that their voices will be heard or that the evaluation will incorporate their perspectives (e.g., Mertens, Farley, Madison, & Singleton, 1994). There are clearly power differentials inherent in any collaborative group, and we have taken several steps to help balance these inequities. First, the contributions that parents and community members make to the evaluation should be explicitly recognized. For example, in our initial meetings with (sometimes skeptical) community members, we explained that we needed their input and wanted them to have a meaningful voice in developing their own program and the plan for its evaluation. As community members, they brought important knowledge about the history and culture of the neighborhood that would be critical to the program and its evaluation. One community, because of its long history of being the target for recruitment by university researchers, resisted the idea of university-based evaluators doing anything in this particular program. Our first step, then, was to try to break down these negative perceptions and convey that our primary goal was to support the program and the community and to work with them to provide data that they could use for program improvement and funding. Second, we have used different approaches to involve parents on a long-term basis. Program staff tell us that there are three keys to facilitating parent participation: transportation, child care, and food. Thus, we hold meetings in the neighborhoods where the parents live, ensure that there is someone to care for young children during the meeting, and have meetings over lunch or dinner (which we
provide). We also facilitate parent involvement by using existing governing bodies. For example, the family centers are governed by a Parent Council composed of participating parents. By requesting time on their regular agendas, we have used this governing board as a means for communicating evaluation oversight decisions, questions, and issues to parents. In one situation in which we were unsuccessful in bringing parent representation to the evaluation table, we instead brought any decisions made by the collaborative group to the Parent Council for final approval. Finally, we have taken steps to try to ensure that parents feel comfortable interacting in a mixed group setting. After too many meetings with silent parents sitting at the table, we knew that simply inviting them to participate was not enough. Pre-meeting sessions with parents were held to let them know what issues were going to be discussed, and special care was taken to solicit their input during meetings. We have also made parents the clear authority by giving them the power to "veto" any group decision about the program or its evaluation. Together, these steps have increased parent involvement in the collaborative group; however, the challenge of facilitating meaningful parental involvement in the evaluation process continues to require our most creative efforts. We also suggest that a representative from the funding agency take an active role in the collaborative group. Some people will find it difficult to imagine funders collaborating in such a fashion; not only is their time extremely limited, but they may feel it is inappropriate for them to be involved with "nitty gritty" program decisions. However, because funders have an important monetary stake in the program and/or its evaluation, we feel it is crucial to bring them into the loop. Further, we have found that funders who would not be amenable to recommendations made by evaluators or program staff
would listen to parents -- thus the collaborative group can provide a powerful vehicle for dialogue between these two groups. If it is not possible to directly involve funders, we suggest sending them written correspondence (notes, memos, meeting minutes) on a regular basis. Finally, it is important to note that in this approach stakeholders are collaborative partners in all aspects of the evaluation -- from its early planning and design stages through to utilization and dissemination. More typically, stakeholder evaluations involve extensive input during the planning and design stages of the evaluation, but do not focus on the entire evaluation process (Greene, 1987). Because of the dynamic nature of family support programs, however, we feel it is critical to have stakeholder involvement at each phase of the evaluation. Thus, it is also critical to have stakeholders who are committed to the idea of service delivery and evaluation as an ongoing process.
(3) Comprehensive and flexible. Another basic principle of family support is that programs should be responsive to the myriad and dynamic needs of families. Thus, "each program tends to provide a smorgasbord of services so that the needs of a variety of families can be met. The challenge is to match services to diverse families rather than, as in traditional agencies, to fit families into the available services. It is not unusual to see a program fielding formal and informal services or modifying its services frequently" (Kagan & Shelley, 1987, p. 11). Just as service programs offer a multitude of services to meet varying family needs, family support evaluations need to employ a comprehensive and varied set of methodologies (Jacobs, 1988). Evaluations that focus on just one aspect of the family center are likely to miss important aspects of the program; similarly, use of just one method is inadequate to produce the depth of information needed to understand the program.
Multiplism in methods is nothing new to evaluators; however, family centers
challenge evaluators to employ an even broader scope of methods and measures by virtue of their dynamic service approaches. Flexibility in the evaluation is also required in order to incorporate multiple stakeholder perspectives and the differing information needs of these groups. Clearly, an evaluation cannot simultaneously address the questions of all stakeholders; however, because this approach views evaluation as an ongoing process, new evaluation questions can be asked at different points in the program lifetime. An evaluation that is tied closely to the service delivery process will allow the focus of the evaluation to shift over time, as appropriate to the evolution of the program, and to address the varying questions of stakeholders.
Issues and sample solutions. The flexibility and responsiveness of family support programs create a challenge for evaluators in that outcomes must be assessed on at least two levels (program-wide and family-specific), and the evaluation must be able to capture the constantly changing and sometimes loosely defined service process. In Knapp's (1995) framework, this is the problem of identifying and measuring the independent and dependent variables. One method we have used to help ensure continuity between changes in the program process and what we are measuring in the evaluation is to provide a forum (regular and frequent meetings) for ongoing reflection by the oversight committee regarding program approaches and evaluation data collection. Program staff who participate in the oversight committees can inform the group regarding program process, and the evaluation can be adapted as needed to capture these changes. There must also be ongoing flexibility in terms of methods and approaches to collecting
evaluation information. For example, we have found that staff are often quite creative in eliciting information from families and able to tailor their approach to individual families. While varying their questions according to the situation may conflict with researchers' "rules" about structured interviewing, we have learned to respect the quality of information that results from these more personalized exchanges. In one example, staff suggested that rather than having families answer a barrage of questions on their initial visits, survey forms be filled out during an enrollment phase lasting 6-8 weeks. Further, to minimize the intrusiveness of the forms, staff suggested that families sit down with them and complete the forms in partnership, and that families initial all forms to give their "seal of approval" to the information contained in their files. This allowed information to be collected without families thinking that a "secret" case file was being kept without their knowledge or approval. Flexibility and comprehensiveness need to be built into the evaluation not only in terms of responding to stakeholder demands and changing programmatic goals and services, but also in measuring individual family outcomes. New and creative methods for tracking individual family progress toward goals need to be developed. One potential method we have developed adapts Goal Attainment Scaling procedures, which allow individuals to identify their own goals and objective "markers" of progress (Kiresuk, Smith, & Cardillo, 1994). Methods for aggregating individual family progress data into broader indices of program effects remain a central challenge to this approach. Finally, however, it should be noted that evaluations must balance flexibility with goal-direction. That is, an evaluation that is instantly responsive to the whims of changes in the political or social climate would be unlikely to produce useful information in the long run.
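For readers unfamiliar with Goal Attainment Scaling, the standard summary score on which such adaptations build can be sketched as follows. This is an illustrative sketch only, not part of the program described here: the goal attainment values, equal weighting, and the conventional inter-goal correlation (rho = 0.3) are our assumptions for the example.

```python
# Sketch of the Kiresuk-Sherman summary T-score for Goal Attainment
# Scaling. Each goal is scored on a -2..+2 scale, where 0 is the
# expected level of attainment for that family's own goal.
from math import sqrt

def gas_t_score(attainments, weights=None, rho=0.3):
    """Return a summary T-score (mean 50, SD 10) across goals.

    attainments: list of per-goal scores on the -2..+2 GAS scale.
    weights: optional per-goal importance weights (default: equal).
    rho: assumed correlation among goal scores (0.3 by convention).
    """
    if weights is None:
        weights = [1.0] * len(attainments)
    numerator = 10 * sum(w * x for w, x in zip(weights, attainments))
    denominator = sqrt((1 - rho) * sum(w * w for w in weights)
                       + rho * sum(weights) ** 2)
    return 50 + numerator / denominator

# A family meeting every goal exactly at the expected level scores 50.
print(gas_t_score([0, 0, 0]))  # 50.0
```

Because each family defines its own goals and markers, scores from very different goal sets land on a common metric, which is one way individual progress data might be aggregated across families.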
Thus we strongly advocate for a well-conceptualized evaluation plan that is based on a relatively stable set of long-term goals and expectations, and that serves as a guide for weighing the appropriateness of changes to the evaluation focus and methods (Knapp, 1995). This is congruent with family support approaches that work with families to establish long-term goals, but recognize that the means to these goals may change, and that the goals themselves may change as well. In sum, the family support approach to evaluation draws upon three core principles: (1) strengths orientation; (2) collaboration; and (3) comprehensiveness/flexibility. Key aspects of operationalizing these principles include taking an explicitly formative approach to the evaluation, developing techniques to define and identify program strengths for ongoing program improvement, supporting a collaborative group including parents and funders who actively participate in program development and evaluation, and using multiple, flexible, and individualized methods to define and track program progress. In previous sections we have alluded to several evaluation issues related to specific family support principles. Below we elaborate on some of the broader challenges we have faced in implementing family support evaluations.
Challenges in Conducting Family Support Evaluations
As noted by Greene (1987) and others (Knapp, 1995), collaborative, participatory evaluations pose special problems and require special conditions in order to be successfully implemented. We encountered the reality of this when attempting to replicate a successful collaborative evaluation strategy in another, very different location. At this second site, local politics had created "turf battles" between several providers, leading to direct competition for control of and credit for the family support center, and
an atmosphere that was not conducive to open discussion of either the program or its evaluation. Despite our best efforts, we were relatively unsuccessful at bringing stakeholders together to have meaningful dialogue about evaluation, and finally simply replicated the evaluation strategy that had developed from our collaborative work in other communities. However, the result has been a less well implemented, less powerful, and certainly less useful evaluation at this site. It is only after two years of working with this site, and several changes in program administration, structure, and staffing, that we are beginning to rebuild an evaluation system that meets our goals of being strengths-based, collaborative, and comprehensive. Based on this and other experiences, we describe below several factors that we believe are important to the successful implementation of a family support evaluation approach.
Shared commitment to program improvement. Knapp (1995) suggests that one factor that contributes to successful evaluations of comprehensive service programs is the presence of a well-articulated program model. We agree that this is important, but suggest that it is even more critical that stakeholders agree that evaluation can be a valuable tool for elucidating the program model. In the situation mentioned above, there were disagreements regarding the program model, including what services should be provided and how; thus, there was no shared conceptual program framework. In fact, there were divisive subgroups within program staff, administrators, and parents regarding how the program should be implemented. Simultaneously, there was an explicitly stated disbelief (by key program administrators) that evaluation could assist ongoing program improvement. This, we believe, was the key factor that prevented successful development of the evaluation partnership. In other situations -- in which people have disagreed about the conceptual model, but agreed to use evaluation
to ask questions about it -- results have been much more positive.
The collaborative atmosphere and power differentials. Assuming that a collaborative group can be assembled to work toward program improvement, efforts must still be made to ensure that the atmosphere within the group is one of openness, in which honest discussion of program strengths and weaknesses can take place. Greene (1987) highlighted the importance to the success of participatory evaluations of having stakeholders who are "democratic, decentralized, collegial, and participatory themselves." In the second, previously mentioned site, such characteristics were markedly absent in our dealings with the program administrators, who adhered to a hierarchical approach to decision making. Even if key stakeholders are willing to share openly, a collaborative atmosphere can be difficult to achieve, given the power differentials that naturally occur when the group consists of funders (who control money for programs), program administrators (who control hiring and firing of staff), staff (who control direct services for families), and family members (whose presence is required for a service program to exist). In our experience, funders, in particular, tend to tip the balance of power. It is not surprising that program administrators and staff are reticent about discussing sensitive program issues, given the power of funders to affect the very existence of these programs. For example, we have sat through many advisory board meetings in which the program staff and/or administrators assured funders that the program was great, everybody loved it, everyone was "happy, happy, happy" -- despite the fact that informal discussions with these same persons indicated that there were serious issues that could benefit from open discussion. Mechanisms can be established to ameliorate some of the power differentials in the
collaborative group, using strategies such as the "parent veto" described previously. Another technique is to put evaluation and program decisions to a confidential vote. In this way anyone who dissents (but is reluctant to speak up) can more freely express his/her opinion. Confidential expression, in writing, of program and evaluation issues is another way to ensure that important issues are raised. The evaluators can report on issues, submitted anonymously or confidentially, without disclosing whose issue it is. Ideally, the need for confidential and anonymous techniques will decrease as rapport and trust build within the committee. But it is naive to think that people will readily bring sensitive program issues to the table from the outset. Further, we would suggest that if the presence of certain groups makes open discussion impossible, restructuring or reorganizing the group should be considered. Again, it is important that the evaluation be flexible in order to respond to stakeholders' needs and to continue to strive toward the evaluation goal of continued program improvement.
The issue of control. Perhaps the most formidable barrier to successful partnerships in a collaborative evaluation (or, indeed, any collaborative effort) is the need for various stakeholder groups to share control, especially control over evaluation decisions and over programmatic issues. As Patton states, "what fundamentally distinguishes utilization-focused evaluation from other approaches is that the evaluator does not alone carry this burden for making choices about the nature, purpose, content and methods of evaluation. These decisions are shared by an identifiable and organized group of intended users" (Patton, 1986, p. 53). We extend this by putting the decision-making power for the service program as well as for the evaluation in the hands of this collaborative group.
We have found that the family support approach works best if committee members are given the power to make
decisions not just about the evaluation, but about the program itself. This supports the use of evaluation information for ongoing program development and improvement, in that the people who make decisions about the evaluation are also called upon to make decisions about the program. Because of this, it is critical that program supervisors, administrators, and other top decision makers be included in the collaborative group. In large part, the family support evaluation approach depends on these stakeholders’ ability to share decision-making power with the group.

Such a sharing of control can be both a benefit and a barrier to evaluations. In the second site described above, the issue of control was a key barrier to evaluation development. Several different service agencies were involved and had agreed on paper to collaborate in building the family support center. Because of the “turf battles” among these providers, however, the agencies were unwilling to share decision-making power with each other, let alone with parents or the evaluator.

Relinquishing control over evaluation decisions can also be a significant challenge for the evaluator. The collaborative evaluator must continually resist the temptation to make decisions for the group. This is especially difficult when the group directly asks the evaluator to make a decision based on his or her “expert” opinion. There are some areas in which the evaluator presumably has more expertise (e.g., methodology, statistics); for requests of this sort, we have tried to provide information to help the group make an informed decision by framing that information in terms of the costs and benefits of different choices. For example, we might say: “If you do include a control group, here are the benefits. However, there are also costs. The advantages of not including a control group might be…”. This approach is based on Patton (1986), who suggested that evaluators put together hypothetical
scenarios (positive and negative) to illustrate methodological issues in a relatively objective way. After explaining the options, the evaluator must agree to abide by the decision of the group, which can lead to compromises in methodological rigor. Such compromises are perhaps the most difficult pill for the applied researcher to swallow. In retrospect, however, we strongly believe that compromising some methodological rigor for continued community support and for congruence between program and evaluation values is well worth it.

Commitment of time and resources. Another element critical to family support evaluation is a significant time commitment by the collaborative group. As suggested by others advocating a participatory approach, and as we have shown through the examples above, this work is time consuming and requires numerous iterations as the evaluation and program evolve (Greene, 1987; Patton, 1994). Such a process also requires new and creative methods for establishing and financing evaluations that are longer term and more partnership oriented. Greene (1987) suggests that evaluators shift their focus from evaluations of specific programs to evaluation partnerships with entire organizations. We agree that this change in focus is important, and we have also found that although evaluation funding is often tied to specific programs, including representatives from the broader organizational structure in the oversight groups allows more generalized learning to take place. Given the impetus of the family support approach to services and evaluation, these representatives have begun to implement a more “organizational learning” style of management. As evaluators, we have been able to extend our relationships with these organizations by obtaining consultation agreements with programs
and organizations that are not tied to a concrete set of evaluation activities, and thus to achieve the flexibility needed to carry out this type of evaluation. Finally, as our relationships with service providers have evolved, we have been able to build evaluation funding into some of the grants written for specific programs, thereby increasing the core of evaluation funding within an organization sponsoring a variety of programs.

Role of the evaluator. One frequent criticism of the participatory approach is that the evaluator’s objective status is compromised by his or her relationship with the program. Clearly, our approach requires a close relationship with stakeholders. We would argue, however, that this is a strength rather than a weakness, especially within the context of a formative evaluation. Genuine stakeholder commitment to program improvement means not simply demonstrating the areas in which the program is doing well, but articulating how the program can improve. We would add that most evaluators are, at some level, committed to the success of the projects they work on -- no evaluator wants to see a program fail. Further, the misguided notion that evaluators should retain maximum distance from programs may, on the whole, have been detrimental to the usefulness of program evaluations for developing effective programs. Certainly it has contributed to service providers’ general distrust of evaluations and evaluators. Finally, the close relationship between the evaluator and the program does have implications for the replicability and dissemination of program models. That is, a program with an integrated evaluation component such as the one we describe is clearly a qualitatively different program from one that does not include evaluation. This limits replicability only in terms of programs
that do not have an evaluation, and we strongly advocate including evaluation as an integrated component of all programs. Although many programs have limited budgets, the family support evaluation approach does not necessarily require expensive external data collection or large-scale evaluation costs, and thus can be more widely accessible to smaller programs and agencies.

Conclusions

In this article, we have attempted to describe an approach to evaluating family support programs that is built on the guiding family support principles of strengths orientation, collaboration, and flexibility/comprehensiveness. This approach helps address several challenges particular to evaluations of family support programs by utilizing methods that are congruent with family support principles. Further, the family support approach facilitates evaluation utilization in that it (1) closes the gap between service program and program evaluation by basing each on a shared set of grounding principles; (2) uses a participatory, collaborative approach; and (3) focuses on ongoing program improvement. We believe that this approach represents a creative way of blending various evaluation methodologies to fit the unique challenges inherent in evaluating family support programs.

Although the approach we have outlined is in some ways specific to family support programs, there is an increasing array of human service programs that share similar guiding principles and could conceivably benefit from a family support approach to evaluation. Other types of programs and organizations may also be better served by evaluations that are congruent with their own guiding program principles. This approach challenges those who evaluate complex service programs to consider not only program services, but program values, as an important factor in the
design and implementation of evaluations.
References
Cousins, J. B., & Earl, L. M. (1992). The case for participatory evaluation. Educational Evaluation and Policy Analysis, 14(4), 397-418.

Dunst, C. J., Trivette, C. M., Starnes, A. L., Hamby, D. W., & Gordon, N. J. (1993). Building and evaluating family support initiatives: A national study of programs for persons with developmental disabilities. Baltimore, MD: Brookes.

Fetterman, D. M. (1996). Empowerment evaluation: An introduction to theory and practice. In D. M. Fetterman, S. J. Kaftarian, & A. Wandersman (Eds.), Empowerment evaluation: Knowledge and tools for self-assessment and accountability (pp. 3-48). Thousand Oaks, CA: Sage.

Gold, N. (1983). Stakeholders and program evaluation: Characterizations and reflections. In A. S. Bryk (Ed.), Stakeholder-based evaluation (pp. 63-72). New Directions for Program Evaluation, 17. San Francisco, CA: Jossey-Bass.

Greene, J. C. (1987). Stakeholder participation in evaluation design: Is it worth the effort? Evaluation and Program Planning, 10, 379-394.

Jacobs, F. (1988). The five-tiered approach to evaluation: Context and implementation. In H. Weiss & F. Jacobs (Eds.), Evaluating family programs. New York: Aldine de Gruyter.

Kagan, S. L., & Shelley, A. (1987). The promise and problems of family support programs. In S. L. Kagan, D. R. Powell, B. Weissbourd, & E. F. Zigler (Eds.), America’s family support programs. New Haven, CT: Yale University Press.

Kiresuk, T. J., Smith, A., & Cardillo, J. E. (1994). Goal attainment scaling: Applications, theory, and measurement. Hillsdale, NJ: Erlbaum.

Knapp, M. (1995). How shall we study comprehensive, collaborative services for children and families? Educational Researcher, 24(4), 5-16.

Mertens, D. M., Farley, J., Madison, A., & Singleton, P. (1994). Diverse voices in evaluation practice: Feminists, minorities, and persons with disabilities. Evaluation Practice, 15(2), 123-129.

Patton, M. Q. (1986). Utilization-focused evaluation. Newbury Park, CA: Sage.
Patton, M. Q. (1994). Developmental evaluation. Evaluation Practice, 15(3), 311-319.

Smith, A., & Lopez, M. (1994). Comprehensive Child Development Program: A national family support demonstration. Interim report to Congress, U.S. Department of Health and Human Services.

Weiss, C. H. (1983). The stakeholder approach to evaluation: Origins and promise. In A. S. Bryk (Ed.), Stakeholder-based evaluation (pp. 3-14). New Directions for Program Evaluation, 17. San Francisco, CA: Jossey-Bass.

Weiss, H. (1987). Family support and education in early childhood. In S. L. Kagan, D. R. Powell, B. Weissbourd, & E. F. Zigler (Eds.), America’s family support programs. New Haven, CT: Yale University Press.

Weiss, H., & Greene, J. G. (1992). An empowerment partnership for family support and education programs and evaluation. Family Science Review, 5(1 & 2), 131-148.

Weiss, H., & Jacobs, F. (1988). Evaluating family programs. New York: Aldine de Gruyter.

Weissbourd, B., & Kagan, S. L. (1989). Family support programs: Catalysts for change. American Journal of Orthopsychiatry, 59, 20-31.