Learning as We Go: Evaluation and Its Implications for Service-Learning [1]

Florence E. McCarthy
Introduction

In framing a keynote address about a technical subject such as evaluation and its importance to service-learning, there are at least two critical issues to consider. One is how to present the topic in a way that is interesting and not boring; the other is to be specific enough that the audience can draw useful implications and apply them to their own programs.

Evaluation, defined as the "systematic assessment of the operation of a program" (Weiss, 1998), is ideally included as part of the original design of any program or curriculum. That is, assessing the ongoing operations and the outcomes of a program as it develops allows those involved to be aware of problems as they arise, and to move quickly to ameliorate difficulties as they happen. In the case of service learning, however, evaluation is often an add-on. By that I mean that assessing how a program is working, or what effect it has had, is often considered only after the program has been created and is under way. [2] This does not mean that problems cannot be addressed or changes made, but evaluations are often called for only when decisions have to be made regarding budgets, workload, and faculty time. In these situations, evaluations tend to be stressful, interpreted as negative by those involved, and seen as 'after the fact' by administrators and decision-makers. Rather than a positive program device, evaluation can be seen as a negative encounter bearing only negative outcomes.

Obviously, this does not need to happen. The point about evaluation is that it should be a regular, ongoing process in any service learning program, generating data that is of both immediate and long-term use. One purpose of evaluation is to discover ways to improve the program in question; it should make a positive contribution to program implementation.

In the following discussion, I consider ways to approach the evaluation of service-learning curricula and/or programs in terms of four main issues:

1. What do we want to know?
2. Who wants to know?
3. Why do we want to know?
4. What data/information do we need to collect?
1. Keynote address given at the Tokyo Forum of the Study Group of Overseas Experiential Learning, International Christian University, Tokyo, Japan, June 11, 2005.
2. There are two main types of evaluation. One type, known as formative evaluation, considers how a program is operating with the idea of gaining insight into how to improve it. Summative evaluation, the other type, usually occurs at the end of a program and seeks to determine what the program accomplished and its effectiveness according to some pre-determined standard or set of objectives. Depending on when an evaluation is done, it most often combines elements of both types: the program is not necessarily finished, but it is necessary to assess what it has accomplished so far in order to determine future directions, alterations, or new structures and formats.
In initially framing an evaluation around these questions, it is possible to identify the information/data we want to collect, the audience for whom and from whom it will be collected, and the objectives or reasons for gathering data and doing the evaluation. Before I consider these questions, I would like to quickly review what the research on service learning programs indicates about the impact and effectiveness of service learning endeavors. Not surprisingly, the majority of research focuses on students and their response to involvement in service learning programs or curricula. Most often these studies originate in the United States, where service learning has been an important trend in educational practice since the 1980s. [3]

Research on the Effects of Service Learning

Amidst the abundant literature on service learning, the majority of articles report the effects of service learning on students. These are usually course-based evaluations indicating how service learning connected to subjects like communications (Artz 2001; O'Hara 2001), teacher education (Swick 2001; Yarbrough 1999), political science (Markus, Howard, and King 1993; Hepburn et al. 2000), nursing, accounting (Rama et al. 2000), or history (Crothers 2002) has improved the learning of students. Most of these studies depend on self-reports of students, either at the end of a course to which service learning has been added, or as part of pre- and post-tests given to students at different times in the semester. A variation of this type of study considers the effects of service learning on specific academic content such as social justice (Warren 1998), technical writing (Rehling 1997), or civic responsibility (Eyler and Giles 1996), or on specific outcomes such as academic performance (Fredericksen 2000). Far fewer studies investigate the effects of service learning on community agencies (Parker-Gwin 1996; Roschelle et al. 2000) or faculty (Hesser 1995).

Another type of research that also involves an evaluation of what students gain from service learning comprises studies that compare students participating in courses with a service learning component with students taking the same type of course without service learning. In these studies the intent is to compare outcomes in grades, personal or skills development, and student satisfaction with the course. It is found, for example, that students engaging in service learning as part of their course work tend to do better, get better grades, and are generally more
3. It is important to note that the focus here is on academic service learning, which should be distinguished from 'community service,' volunteerism, or service projects. Academic service learning implies a tie between students' substantial service in an agency and academic course content. Agency settings provide an arena for the application and testing of course content, and this connection between community setting and course content is emphasized through structured, guided reflection of students by teachers. Learning from others is encouraged through students' being supervised by agency staff, and through the interaction students have with agency participants. Subjects combining these elements of application of knowledge, reflection, and service experiences are credit-bearing. In addition, academic service learning implies reciprocal partnerships between community agencies and universities or colleges in conjunction with the service provided by students. That is, agencies are not just vehicles to promote university interests; universities ideally support the activities of the participating agencies in the publicity they publish, in the recognition they give to agency support, and so on. These factors distinguish academic service learning from community service, where the emphasis is often only on what the students gain from service, and from volunteering, where the emphasis is on providing students with an experience of being involved in the community but with no academic tie or reflective connection.
satisfied with their courses than students taking the same type of course without service learning (Berson and Younkin 1998).

A third type of research consists of studies that undertake large-scale investigations of students' involvement in service learning courses (Sax and Astin 1997; Eyler and Giles 1999), sometimes over a period of time (Sax and Astin 1997), and often involving comparisons of grades in different kinds of courses (Strage 2004) or participant responses to Likert-type structured questions. Community agency staff and/or administrators may be included (Gray et al. 2000). Findings from these studies suggest that students participating in service learning courses are more likely to have higher grades, report greater interest in their courses, and be more active in civic affairs than students who did not participate in service learning.

Finally, looking at studies about international service learning, the literature suggests that providing service to others abroad can lead to the mutual empowerment of all those participating in service (Crabtree 1998). However, fostering the reciprocal learning and sharing that is an intrinsic part of empowering experiences demands concentrated effort from teachers to encourage students to actively connect classroom knowledge to lives in the outside world (Sipe 2001). Such activity is necessary to encourage students to see others as more than simply "recipients or disadvantaged individuals who need help" (Sipe 2001). It is also the case that without structured discussions, readings, and reflective assignments that push students to consider their own assumptions and beliefs about service and helping others, it is all too possible for international service learning experiences to simply reinforce biased, prejudicial views among students, as well as an unfounded sense of their own superiority (Grusky 2000; McCarthy 2003).

Mitigating factors influencing the outcome of service learning often have to do with student characteristics, the intended learning outcomes set for service learning classes and activities, and the organization of the courses themselves (Rama et al. 2000). Courses that incorporate what are considered to be "best practices" are found to be the most effective in influencing student outcomes. Best practices include:

1. clear specification of intended student outcomes in relation to service learning activities;
2. preparation of students for service, and sufficient duration of service: involving students in service agencies for enough time that they can develop relationships with the people there and make a contribution to agency outcomes;
3. on-site supervision and training;
4. clear connections between course subject matter and service experiences;
5. ample time for extensive reflection, encouraging students to apply subject matter knowledge to agency settings, to question the reasons behind social issues, to analyze agency events or interactions, and to pose questions about their own learning and understanding of the nature of their involvement in their agency settings;
6. modes of assessment clearly stated at the beginning of the class; and
7. ideally, a mix of assessment formats by which students can demonstrate their learning.

We should now turn to the four questions raised earlier and consider how each contributes to and affects the evaluation of service learning curricula and programs. We will consider first: why do we want to do evaluations?

Why do we want to know?
Basically, programs are evaluated because we want detailed information about results. In the case of service learning programs, vast amounts of anecdotal information often exist about changes in students as observed by faculty, staff, and often the
students themselves. However, what is lacking is 'proof', or factual evidence that supports the inferential claims being made. In times of financial constraints, reduced budgets, and increased workloads, it is hardly possible to justify the inputs service learning requires without substantial indications that it pays off in concrete ways such as more satisfied students, increased retention rates, better class attendance, and improved grades. These items are as important to administrators, for example, as the claims of improved student self-confidence, better command of English, and more interest in learning are to faculty. All these claims can be made for service learning, but it is necessary to have the data that supports them.

Additionally, evaluations are done to assess how other stakeholders in the effort have been affected by or responded to their participation in service learning programs. [4] Service learning requires partnerships between community agencies and the administration, and the long-run institutionalization of the curriculum/program depends on the mutual satisfaction, support, and continued involvement of the agencies. Without systematic evaluation, it is all too possible to overlook problems as they develop, which, if not addressed, may jeopardize the whole effort. To summarize, evaluations are important for 'proof', for 'troubleshooting' problems, and for improving and sustaining program delivery and outcomes.

On a broader plane, however, it is important for the long-term success of a service learning curriculum that it be designed to fit with the mission of the university or college. Anchoring service learning to the institutional mission is a way of framing outcomes in the development of service learning activities. For example, if the mission is to promote "active learners," or "truth and service," or "unto a whole person," it is a wise and strategic move to shape learning objectives for service learning activities that realize these mission goals. How to do this becomes a major element in program design and implementation, but the point is that the development of service learning curricula should be connected to learning outcomes that fit within the umbrella of the institution's aims. Having established what these general outcomes are, it then becomes a process of working from the outcomes and asking, 'what do we need to do to realize these outcomes?' That is, how do we move from the ideal to the actual, and translate the ideal into concrete activities, class content, readings, and forms of assessment?

What do we want to know?

In addressing the issue of what we want to know, it is useful to use the triangles that illustrate the basic elements of service learning (McCarthy 2003); these also help us understand what it is we want to know. There are countless questions we could pursue in an evaluation, but having focus establishes that we have decided what is most important to explore. By including the major stakeholders in service learning (agencies, students, faculty, and administration) in our evaluation, we expand our focus beyond the students to our other partners. We want to hear from the stakeholders about their experience in the service learning program/curriculum; we want to know what they learned from the experience; and we want to learn of their assessments and reflections on what has occurred and what could be improved. The triangle below illustrates the interdependencies among the basic elements of service learning.
4. I tend to use the terms service learning program and service learning curriculum somewhat interchangeably, recognizing that academic service learning is both: it is a curriculum in that it links service to classroom content, and a program in that it involves institutional ties between the college or university and community, government, or not-for-profit agencies.
[Figure 1: The basic elements of service learning (Experience, Reflection, Knowledge) shown as an interdependent triangle.]
By identifying our interests in terms of the triangle, we can then frame questions or topics to discuss with each group of stakeholders. Questions to the agencies could be: Overall, what was your experience with the service learning students? What contributions did they make? What were the strengths of the arrangements? What problems arose? What could be done to resolve these issues or make the program more effective?

It is important to note as well that in doing evaluations, we need to be aware of the content of the service learning program/curriculum, as well as the processes by which the program developed. Both are important: the concern with content tells us how the individual components of service learning are structured, i.e. the courses, the preparation, the placements, the assessments, and so on. A focus on process allows us to explore how the program operates in areas such as student recruitment, faculty development, agency relations, and so forth. Here we are looking at program flows that have to do with operations, social relations, and how the program components are melded together into the program itself.

Who wants to know?

The main stakeholders comprise students, faculty, and community agencies. However, the institutional administration and staff are also an important part of the university context influencing service learning. In addition, there are agency clients or participants, as well as agency leaders, who influence the enactment of service learning activities. While it is most often not possible to include every group in the evaluation that is done, it is nonetheless important to be aware of their presence and influence. We represent the stakeholders in this way.
[Figure 2: Stakeholders in service learning, comprising Agencies (staff, leaders, clients), Students (pre/post tests, self-reports, comparisons), Faculty (teachers), administrators, and staff.]
Faculty

Broadening the scope of the evaluation allows us to identify points of support, as well as dissent and opposition to service learning, that may not be known unless we look for them. For example, implementing a service learning program often places new forms of strain and new demands on administrators and staff not directly involved with service learning. Staff in the study abroad office, for instance, may have to monitor the activities of service learning students who do not follow the same schedules or patterns as other students. Perhaps new accounting procedures are required that upset established routines in the accounting department; perhaps an already overworked staff person is given the task of ensuring that all the paperwork related to agency agreements, student insurance, and parents' permission is in order. All these added elements may cause disruption in the established order of the bureaucracy, and it would be foolhardy for any service-learning faculty member to ignore them. An evaluation would hopefully make these problems evident; only then can steps be taken to ease the tension, or ways be found to resolve the issues.

Those responsible for service learning should make a point of requesting and listening to the responses and suggestions that staff and other teaching faculty may have about the program. Here too, it is a mistake to ignore the knowledge of those most involved in the program. Often, however, it is not a lack of interest but a reluctance to invite "bad news" that discourages some program managers from talking more with their staff and teachers. Yet a critical factor in maintaining staff morale and teacher involvement in the program is their feeling of being included and their ability to influence program decisions.

Community Agencies

The interest in building reciprocal partnerships with community agencies is important at all stages of service learning program design, implementation, and evaluation. In the early design of service learning activities it is crucial to include the ideas and suggestions of agencies. In addition to being interested in having students undertake placements in their agencies, agency staff will have ideas about what kinds of activities the students could do, how they could fit in, and what kinds of preparation or orientation students would need or would be given when they first appear at the agency. Over the course of the students' presence in the agencies, agency staff provide supervision and guidance; students learn from them, and the staff learn things about and from the students as well.

Agency staff should provide their own assessment of the students, based on the activities the students did and from the perspective of the agency involved. This assessment should be included in the final assessment for the service learning grade the student earns. In addition, any program evaluation would be seriously remiss if the knowledge and reflections of agency people were not also solicited. In doing an evaluation, agency staff should be asked to reflect on their experiences with the students and faculty: what worked, what needs improvement, what they learned from the students, what agency participants thought about the students, the strengths and weaknesses of the students, problems that arose, and so on.
In this way, the information requested from the agency staff validates their participation, takes their contribution seriously, and acknowledges that aspects of the relationship with the university or college could be improved. The willingness of faculty to involve, listen to, and take seriously the input of agency personnel goes a long way toward developing strong ties between the university and the community. Such ties are essential to the long-run success of service learning.
Students

Students are the focus of most activities, assessment, and evaluation. From the beginning, the service learning curriculum is designed with them in mind: from learning objectives and desired outcomes, to readings, class lectures, discussions, preparation, and all forms of assessment. Agencies provide the service placements and mentor the students through their time with them; teachers provide the theoretical and technical inputs, orientation and preparation, and follow-up on problems. Teachers provide on-line mentoring or on-site supervision, and create the forms of assessment that best capture the experiences of students and their reflections. Forms of assessment include journals, written exercises, a variety of preparation exercises, class comment sheets, class discussions, and often course evaluations.

All this information gives us a sense of the immediate effect that participation in service learning has on students, but we often lack the supporting data that would establish differences from the experiences of other students not engaged in service learning. Because we usually do not do comparisons with other students, we cannot prove that the grades of service learning students improve after taking service learning courses. We can claim, but usually cannot document, the increases in self-confidence and self-esteem of students successfully fulfilling 30 days of continual service or its equivalent. We have the students' word for the fact that they feel different, or have a much broader picture of the world and its problems, but we lack comparative pre- and post-tests to verify what the students affirm.

What data/information should we collect?

In discussing kinds of data to collect, it is important to note that combinations of data, that is, combinations of both quantitative and qualitative data, provide the most accurate and useful information. Each kind of data provides different insights into program dynamics. Quantitative data give you aggregate information and can be collected on an ongoing basis. An example of quantitative data gathering would be tracking the grades of certain students over the course of their time at the university and comparing the grades of students taking service learning classes with the grades earned by students in similar majors who did not participate in service learning. Data like this give you trends, and allow you to answer questions about the effects of service learning on the grades of students. Other quantitative data could be collected about retention rates, class attendance, and the majors of students taking service learning classes.

However, quantitative data do not provide you with the personal comments, reasons, or reflections of service learning participants. For this kind of information you need to gather qualitative data. Qualitative data provide insight into and some understanding of questions such as 'Why did you go?', 'What did you learn?', and 'How will this experience help you in the future?' They help us understand people's thoughts and feelings about different aspects of the program. Qualitative data could be collected at the end of service learning classes, or during workshops when students return from their service. Brief face-to-face interviews could be done with agency staff and administrators to obtain their reactions to the program. Agency staff could be asked to submit short assessment forms for each student containing a few open-ended questions about the student's contribution to the agency.
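As a minimal sketch of the kind of quantitative comparison described above, the following hypothetical Python example compares end-of-semester grades between a service learning group and a comparison group, and pre/post self-confidence ratings for the same service learning students. All data, group sizes, and variable names are invented for illustration; real evaluations would draw on institutional records and validated instruments.

# Hypothetical sketch: comparing grades across groups and pre/post
# self-ratings within a group. All numbers are invented for illustration.
from statistics import mean
from scipy import stats

# Hypothetical end-of-semester GPAs for two groups of students.
service_learning_gpa = [3.4, 3.1, 3.7, 3.5, 3.2, 3.8, 3.6, 3.3]
comparison_gpa = [3.0, 3.2, 2.9, 3.4, 3.1, 3.0, 3.3, 2.8]

# Independent-samples t-test: do the two groups differ on average?
t_group, p_group = stats.ttest_ind(service_learning_gpa, comparison_gpa)
print(f"SL mean {mean(service_learning_gpa):.2f} vs "
      f"comparison mean {mean(comparison_gpa):.2f}, p = {p_group:.3f}")

# Hypothetical pre/post self-confidence ratings (1-5 scale) collected from
# the same service learning students at the start and end of the term.
pre_confidence = [2.8, 3.0, 3.2, 2.5, 3.1, 2.9, 3.4, 2.7]
post_confidence = [3.5, 3.4, 3.6, 3.1, 3.3, 3.0, 3.9, 3.2]

# Paired-samples t-test: did ratings change for the same students?
t_paired, p_paired = stats.ttest_rel(pre_confidence, post_confidence)
print(f"Pre mean {mean(pre_confidence):.2f} vs "
      f"post mean {mean(post_confidence):.2f}, p = {p_paired:.3f}")

A sketch like this simply illustrates the logic of comparison; in practice, sample sizes, matching of comparison groups, and the choice of statistical test would need careful attention.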
Of course, both qualitative and quantitative questions can be asked together. For example, student evaluation forms for their classes often use quantitative forced-choice questions combined with open-ended qualitative questions. In such cases, the quantitative data provide specific comparative information on degrees of like or dislike, satisfaction or dissatisfaction, with regard to specific elements of the class, while the qualitative questions give students the opportunity to write more extensively about their experiences or to expand on issues of importance to them.

A major source of combined data comes from the students and their various assignments. Journals, reports, critical incidents, and portfolios are all forms of qualitative data; attendance logs, attendance sheets, and background information are examples of quantitative data. The purpose in collecting these data is to provide information about different aspects of the program, all of which are important in evaluating how it is doing. The trend data allow you to see how service learning affects students in terms of the basic ingredients of a college education: grades, not dropping out, being satisfied with classes, and so on. Qualitative data answer questions about what it means to have a service learning program/curriculum on campus. How does it contribute to community-university relations? How does service learning promote a better image of the university to the public and prospective students? How does it provide research opportunities for faculty and engage their interest? How does service learning create more active learners and engaged students?

These are the basic questions we would like answered about all our classes and about higher education in general. How does it make a difference? We can only begin to answer this question as we commit ourselves to doing the difficult work of evaluation, so that we may provide answers for one program and curriculum that we all believe makes a difference. The promise of evaluation is to help us prove it.
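To illustrate how the combined forced-choice and open-ended responses discussed above might be tallied, the short hypothetical Python sketch below averages two invented Likert-type items and keeps the open-ended comments intact for later thematic coding. The records, field names, and comments are all assumptions made for illustration, not data from any actual program.

# Hypothetical sketch: summarizing an evaluation form that combines
# Likert-type items with an open-ended comment. All records are invented.
from statistics import mean

responses = [
    {"satisfaction": 5, "learned_from_agency": 4,
     "comment": "The placement helped me see the course readings differently."},
    {"satisfaction": 4, "learned_from_agency": 5,
     "comment": "I want more time for reflection sessions."},
    {"satisfaction": 3, "learned_from_agency": 4,
     "comment": "Scheduling with the agency was difficult."},
]

# Quantitative side: average each forced-choice item across respondents.
for item in ("satisfaction", "learned_from_agency"):
    scores = [r[item] for r in responses]
    print(f"{item}: mean {mean(scores):.2f} (n = {len(scores)})")

# Qualitative side: keep the open-ended comments whole so the evaluator
# can later code them into themes (e.g., reflection, logistics).
for r in responses:
    print("-", r["comment"])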
References

Artz, L. 2001. Critical ethnography for communication studies: Dialogue and social justice in service-learning. The Southern Communication Journal, 66(3), pp. 239-250.
Astin, A. W. and L. J. Sax. 1998. How undergraduates are affected by service participation. Journal of College Student Development, 39(3), pp. 251-263.
Berson, J. S. and W. F. Younkin. 1998. Doing well by doing good: A study of the effects of a service-learning experience on student success. Paper presented at the 23rd Annual Meeting of the Association for the Study of Higher Education, Miami, Florida, November 5-8.
Billig, S. and A. Furco. ERIC ED 470790.
Conway, P. F. 2001. Anticipatory reflection while learning to teach: From a temporally truncated to a temporally distributed model of reflection in teacher education. Teaching and Teacher Education, 17, pp. 89-106.
Crabtree, R. D. 1998. Mutual empowerment in cross-cultural participatory development and service learning: Lessons in communication and social justice from projects in El Salvador and Nicaragua. Journal of Applied Communication Research, 26(2), pp. 182-209.
Crothers, A. G. 2002. "Bringing history to life": Oral history, community research, and multiple levels of learning. The Journal of American History, 88(4), pp. 1446-1451.
Driscoll, et al. 1998.
Eyler, J. and D. E. Giles, Jr. 1999. Where's the learning in service-learning? San Francisco: Jossey-Bass.
Fredericksen, P. J. 2000. Does service learning make a difference in student performance? The Journal of Experiential Education, 23(2), pp. 64-74.
Furco, A. and S. Billig. 2003. ERIC ED 462631.
Gray, M. J., E. H. Ondaatje, R. D. Fricker, and S. A. Geschwind. 2000. Assessing service-learning: Results from a survey of "Learn and Serve America, Higher Education." Change, 32(2), pp. 30-39.
http://www.unity.edu/vcp/ServiceResources/SLToolkit/Evaluation.asp
Grusky, S. 2000. International service learning. The American Behavioral Scientist, 43(5), pp. 858-867.
James, P. 1996. Learning to reflect: A story of empowerment. Teaching and Teacher Education, 12(1), pp. 81-97.
Markus, G. B., J. P. F. Howard, and D. C. King. 1993. Integrating community service and classroom instruction enhances learning: Results from an experiment. Educational Evaluation and Policy Analysis, 15(4), pp. 410-419.
McCarthy, F. E. 2002. Education as the practice of freedom: Service learning, social dilemmas and "developing" countries. Paper presented at the International Partnership for Service Learning Conference, Prague, Czech Republic.
McCarthy, F. E. 2003. Service Learning Triangles: Key Elements, Partners and Relationships. ERIC Document ED 481081.
O'Hara, L. S. 2001. Service-learning: Students' transformative journey from communication student to civic-minded professional. The Southern Communication Journal, 66(3), pp. 251-266.
Payne, D. 2000. The need to ask evaluation questions. In Payne, D., Evaluating Service-Learning Activities and Programs. Lanham, MD: The Scarecrow Press, Inc., pp. 17-18.
Project Evaluation and Assessment.
Rama, D. V., S. P. Ravenscroft, S. K. Wolcott, and E. Zlotkowski. 2000. Service-learning outcomes: Guidelines for educators and researchers. Issues in Accounting Education, 15(4), pp. 657-692.
Rehling, L. 1997. A comment on "Technical Writing and Community Service." Journal of Business and Technical Communication, 11(4), pp. 506-508.
Rubrics for Web Lessons. Found on: http://edweb.sdsu.edu./webquest/rubrics/weblessons.htm
Sax, L. J. and A. W. Astin. 1997. The benefits of service: Evidence from undergraduates. The Educational Record, 78(3-4), Summer/Fall, pp. 25-32.
Seifer, S. D. and S. Holmes. 2002. Tools and Methods for Evaluating Service-Learning in Higher Education. Located at:
Shumer, R. and T. Berkas. 1992. Doing self-directed service-learning. ERIC ED 417997.
Sipe, R. B. 2001. Academic service learning: More than just "doing time." English Journal, 90(5), pp. 33-38.
Soukup, P. A. 1999. Assessing service-learning in a communication curriculum. Paper presented at the 85th Annual Meeting of the National Communication Association, Chicago, Illinois, November 4-7. ERIC ED 438569.
Stix, A. 1997. Creating rubrics through negotiable contracting and assessment. US Department of Education, ERIC TM027246.
Strage, A. 2004. Long-term academic benefits of service-learning? When and where do they manifest themselves? College Student Journal, 38(2), pp. 257-261.
Swick, K. J. 2001. Service-learning in teacher education: Building learning communities. The Clearing House, 73(5), pp. 261-264.
Vogelgesang, L. J. and A. W. Astin. 2000. Positive impact of service-learning on academic learning. www.servicelearning.org
Vogelgesang, L. J. and A. W. Astin. 2000. Comparing the effects of service-learning and community service. Michigan Journal of Community Service-Learning, 7, pp. 25-34.
Wade, R. C. and D. B. Yarbrough. 1996. Portfolios: A tool for reflection in teacher education? Teaching and Teacher Education: An International Journal of Research and Studies, 12, pp. 63-79.
Wade, R. C. and D. B. Yarbrough. 1997. Community service-learning in student teaching. Michigan Journal of Community Service-Learning, 4, pp. 42-57.
Ward, K. and L. Wolf-Wendel. 2000. Community-centered service learning. The American Behavioral Scientist, 43(5), pp. 767-780.
Yarbrough, D. B. 1999. Novice teachers' experience of community service-learning. Teaching and Teacher Education, 15, pp. 667-684.