PROFILE

Evaluation Model for Developing, Implementing, and Assessing Conservation Education Programs: Examples from Belize and Costa Rica

SUSAN K. JACOBSON
Program for Studies in Tropical Conservation
Department of Wildlife and Range Sciences
118 Newins-Ziegler Hall
University of Florida
Gainesville, Florida 32611-0304, USA
ABSTRACT / Evaluation of conservation education programs can: (1) provide accountability in demonstrating a program's worth, (2) offer an opportunity for receiving feedback and improving programs, (3) further our understanding of the process of program development, and (4) promote conservation education by substantiating claims about its benefits. The Planning-Process-Product systems evaluation model provides feedback needed for making decisions about the development, implementation, and outcome of a program. Planning evaluation was useful in assessing the needs, goals, opportunities, and constraints of a number of programs in Costa Rica and Belize, such as a forestry education project and a zoo outreach program. It provided a basis for making planning decisions incorporating specific objectives, such as the reforestation of a region or a change in knowledge and attitudes in program participants. Process evaluation provided a Costa Rican sustainable development program with feedback during its implementation and enabled it to modify and improve its newsletter for local farmers and its ecology classes for school children. Product evaluation assessed project accomplishments, such as the $700,000 raised by the Children's Rainforest group and the 20 miles of riparian land under conservation management as part of the Belize Community Baboon Sanctuary project. Outcomes are compared with the programs' original monetary or land-management objectives to determine the success of the programs and to provide feedback for improvement.

KEY WORDS: Conservation education; Program evaluation; Environmental education; Park interpretation; Resource management; Sustainable development

Conservation education programs can increase ecological awareness, foster more favorable attitudes toward the environment, and promote natural resource conservation (e.g., Dietz 1986, Fitter 1986, Jacobson 1987, Olson and others 1984, Sharpe 1982). Researchers concerned with biological conservation increasingly call for the further development of conservation-oriented educational infrastructure, particularly in developing countries (Brown 1988, Janzen 1987, Ramos 1988, Soule 1986). Yet prescriptive guidelines are needed, such as those provided by evaluation models, to guide conservation education programs from conception through completion. The limited funds available for conservation education demand the effective use of evaluation to ensure successful programs.

Evaluation consists of the collection, measurement, analysis, and interpretation of data relevant to a program's audience and environment. These data are needed to make decisions about the merits of a program and to facilitate the assessment of whether a program meets identified needs or achieves specific goals and objectives: Is the program effective?
Can the program be improved? Are the approaches and materials efficient and cost effective?

Evaluation provides accountability in demonstrating a program's worth--to funding sources, the community, and other groups. It offers an opportunity for receiving feedback and improving programs. Furthermore, evaluation helps in assessing secondary or unexpected program outcomes; it furthers understanding of the process of program development; and it promotes conservation education by substantiating claims about its benefits. Unfortunately, evaluation seldom is carried out formally in conservation education programs in the United States or abroad, due to a perceived lack of time, money, or expertise (e.g., Wood and Wood 1985). Yet some form of useful evaluation can be included in program development even under the most difficult situations (Nowak 1984, Passineau 1975). Ideally, evaluation should be conducted from beginning to end, providing feedback on all stages of the development, implementation, and outcome of a program.

A number of evaluation techniques have been developed over the past three decades. The approach of systems models (e.g., Stufflebeam and others 1971) provides prescriptive guidelines for program evaluation, including formative evaluation for planning and implementing programs, and summative evaluation
for assessing the products of programs. This approach, which I have modified into the Planning-Process-Product model, incorporates self-correcting processes to provide conservation education programs with the feedback needed to ensure their effectiveness.

Using the Planning-Process-Product model, three stages of evaluation are conducted. For each stage, needed information is identified and obtained. Figure 1 provides an overview of the model, the information to be examined at each stage, and the decision-making processes. These data are described in the discussion that follows using examples from: (1) the conservation education programs of three organizations operating on a regional level in and around the Monteverde Cloud Forest Reserve in Costa Rica, and (2) the conservation education programming at a national level in Belize. The data presented are only a small part of the information used in the systematic evaluation of individual programs or the cumulative evaluation of regional or national programming, but they serve to illustrate the model and to provide a sample of the innovative approaches to conservation education being introduced in Costa Rica and Belize. The Planning-Process-Product format has been useful for internal evaluation by staff involved in program development (Jacobson 1987, 1988a) and as a guide for program assessment by external evaluators (Jacobson 1988b). This article demonstrates how systematic evaluation can be used to guide and strengthen the development, implementation, and outcome of conservation education programs.
Planning Evaluation

During the planning process, the needs, goals, opportunities, and constraints of the program environment are evaluated. This provides a basis for defining objectives and choosing among alternative project designs. Community involvement is critical at this stage.

Defining Needs and Audiences on a Regional Level in Costa Rica
A number of conservation programs are associated with the Monteverde Cloud Forest Reserve in Puntarenas Province, Costa Rica. The reserve encompasses an area of 10,569 ha and was established in 1972 to protect the watershed for the Monteverde community and its unique wildlife. The reserve provides a sanctuary for over a hundred species of mammals and 400 species of birds, as well as the great diversity of plant life that grows in the neotropical cloud forest. In 1988, over 15,000 people visited the reserve; these visitors are just one of the variety of audiences currently exposed to a range of conservation information and education programs.
Figure 1. Diagram of the Planning-Process-Product evaluation model. PLANNING: identify needs, goals and objectives, audiences, resources and constraints, and alternative methods; make design and structuring decisions. PROCESS: implement program activities (operation, content, and approach; pre- and post-activity preparation; audience and staff participation; budget constancy and availability); make operational decisions. PRODUCT: analyze program effectiveness (achievement of objectives; secondary and long-term effects; modifications or expansions; information dissemination; adoption and future needs); make recycling decisions.
The reserve is surrounded by small, rural communities practicing dairy farming and small-scale agriculture. Three organizations have developed conservation education programs associated with the Monteverde Cloud Forest Reserve and surrounding community: the Tropical Science Center, which administers the reserve; the Monteverde Conservation League (MCL), which purchases conservation land and provides reforestation and education programs; and the Monteverde Institute, which provides courses, mainly for US college students, in tropical biology and forestry, and cultural programs for the community.

The basic needs or problems that the organizations are addressing in the region have been defined by the staff to include deforestation, dwindling water resources, unsustainable agricultural practices, lack of protection for biological diversity, lack of knowledge
of and concern for conservation and resource management, and lack of local cultural and educational opportunities. The range of audiences currently being targeted by the education programs includes tourists, both foreign and domestic; local primary and secondary school students; college students, both foreign and domestic; community members and organizations; youth groups; farmers; the public reached by mass media in Costa Rica; as well as members of foreign organizations, such as The Organization for Tropical Studies and The Children's Rainforest.

Defining Needs and Audiences on a National Level in Belize
Belize, the smallest country in Central America, is 22,963 km², including 450 offshore cays and 280 km of coast rimming the Caribbean Sea and the world's second largest barrier reef. Belize is fortunate in having many of its natural resources still intact (Nicolait and Associates 1984). Yet, Belize also faces an increasing variety of environmental problems. In my discussions with four dozen teachers, government officials, and environmental organizations' staff, a number of hazards representative of the environmental challenges presently facing Belize were identified. These include illegal hunting; indiscriminate conversion of forests; soil erosion and other problems associated with unsound farming practices, particularly due to increased immigration; destruction of coral reefs, over-fishing, and pollution; unplanned coastal development and the destruction of mangrove forests; deteriorating water quality affecting environmental and human health; solid-waste disposal problems; unsafe use of pesticides and herbicides; looting and destruction of archeological sites; and lack of enforcement of existing environmental legislation.

A number of education programs in Belize address natural resource conservation, targeting audiences formally in the school system, and informally in programs for rural communities, adults, and special groups such as farmers, fishermen, and tour guides. Government ministries and departments currently involved in conservation education in Belize include the Ministry of Education, the Belize Teachers College, the Curriculum Development Unit, the Central Farm Training Center, the Department of Fisheries, the Department of Archeology, and the Ministry of Health. Nongovernmental organizations active in conservation education include the Belize Audubon Society, which oversees the country's wildlife preserves, the Community Baboon Sanctuary, the Belize Zoo and Tropical Education Centre, the Belize Tourist Industry Association, and the Programme for Belize. Foreign aid
organizations such as the US Peace Corps and CARE also have contributed substantially to conservation education.

Defining Goals
Based on the determination of needs at a local, regional, or national level, goals are formulated. Usually of a general nature, goals for conservation education typically include developing an awareness of and concern about the total environment and its associated problems and developing the knowledge, attitudes, motivations, and skills to work toward solutions of current problems and the prevention of new ones (Stapp and Cox 1974). Conservation education goals espoused by individuals and organizations in both Costa Rica and Belize included aspirations such as increasing people's knowledge and appreciation of wildlife conservation and promoting reforestation and sustainable agricultural practices. These serve as the guiding lights for program development. Yet, while goals can be subjectively assessed, specific objectives that result in more quantitative or measurable outcomes are needed in order to judge the effectiveness of the programs.

Defining Objectives
By identifying specific objectives for target audiences, desired behavioral changes in program participants can be clearly articulated. These are crucial data for later assessing the effectiveness of the program. Programs that lack specific objectives can still be evaluated (e.g., Scriven 1972). Yet, without stated objectives, it usually is difficult to determine whether the program is successful, how it may be improved, or to justify its accomplishments to administrators or funding agencies. Specific behavioral objectives concerning the conservation of natural resources usually include changes in (1) knowledge, (2) attitudes, and (3) skills in the program participants.

Objectives for demonstrating changes in knowledge incorporate the facts and concepts that participants should understand after exposure to a program. For example, an objective for the Monteverde school program could specify that 75% of the students would be able to identify eight common animals and their prey after participation in the program. Or, given criteria for a sustainable agricultural practice, 75% of the students would be able to delineate the resources needed to plant a school garden.

Values and attitudes guide the actions and lives of individuals. They reflect people's feelings toward objects or ideas and dictate how much merit is given to something. It is difficult to measure attitudes directly.
Participants may say that they believe in wildlife conservation, but their actions may not reflect this. Using more than one approach to evaluate a specific attitude can lead to a more accurate assessment. Approaches may include questionnaires comparing attitude statements before and after a program, direct observation of the participants' activities, or examination of the effort exemplified in journals, pictures, or other indirect responses of the participants. An example of an attitude-related objective for the Monteverde environmental workshop could be: after participating in the workshop, 75% of the participants will conduct similar workshops in their own neighborhoods. Or, for the Belize Zoo Outreach program: after learning about wildlife management laws, the taking of protected wildlife by students and their families will decrease by 50%.

Objectives in the skills domain include action taken toward environmental conservation through political or legal activities, consumer choices, or land-management choices. Objectives for the Monteverde school program could include: 75% of the students will introduce and plant new crops in their home gardens after the school garden program; or, after participating in the MCL solid-waste disposal workshop, 75% of the participants will recycle glass from their homes. In the case of the Belize Community Baboon Sanctuary, a program in which local landowners are asked to preserve wildlife habitat on their farms, an objective in the skills domain is for all of the landowners in Bermudian Landing to manage their riparian forests to preserve howler monkey habitat.
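Because such objectives specify a measurable criterion, checking them against assessment data is straightforward arithmetic. The short sketch below illustrates the calculation for a threshold objective of the kind described above; the scores, the passing criterion, and the 75% cutoff are hypothetical values chosen for illustration, not data from the Monteverde program.

```python
# Minimal sketch: check whether a threshold-style knowledge objective was met.
# Hypothetical data: each value is the number of common animals (out of eight)
# a student identified correctly on a post-program quiz.

def objective_met(scores, passing_score=8, required_proportion=0.75):
    """Return the proportion of participants reaching the passing score and
    whether that proportion satisfies the stated objective."""
    if not scores:
        return 0.0, False
    proportion = sum(1 for s in scores if s >= passing_score) / len(scores)
    return proportion, proportion >= required_proportion

scores = [8, 7, 8, 8, 6, 8, 8, 8, 5, 8, 8, 7, 8, 8, 8, 8, 6, 8, 8, 8]
proportion, met = objective_met(scores)
print(f"{proportion:.0%} of students met the criterion; objective met: {met}")
```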
Identifying Resources and Constraints

During the planning process, it also is necessary to determine the ingredients of the program that make it work and to assess which resources currently are available and which are needed. Data should be collected on the personnel, funds, time, facilities, natural resources, materials, and equipment. For example, the MCL environmental education program has the resources of three personnel, an office, a $20,000 annual budget, audiovisual and printing equipment, a motorcycle, access to a cloud forest reserve, local farms, outside conservation materials, local biologists, and so on. This information enables staff to analyze alternative program designs, given their resources, and to select the program structure to be implemented to achieve the given objectives. After assessing this information, it often is necessary to reevaluate the goals and objectives to ensure that they are realistic.

During this stage, the community and targeted audience should also be involved. Input from community
leaders, school personnel, local industry, religious leaders, and other organizations' staff ideally should be solicited in the program planning and implementation. For example, the MCL works closely with the local dairy plant and farmers' cooperative to ensure maximum input from, and benefit to, the community. As a result, it receives additional funds or help from these groups to assist with its programs. At this stage, too, use or adaptation of outside materials can be considered, and new materials should be critiqued by experts in the field. Consultation with other groups could lead to cooperative ventures and prevent unnecessary duplication of efforts. For instance, in the Belize example, the zoo and the Audubon Society offer general conservation slide shows and posters for school groups. Coincidentally, both organizations have designed and printed similar posters depicting tapirs, the national animal. A more coordinated effort should reduce the overlap in material development, strengthen both programs, and help stretch scarce resources further.

Once the resources and constraints are identified, it is necessary to review alternative strategies for the program and to select the best approach. The types of programs selected by the Monteverde organizations ranged from classroom lectures, field trips, and garden construction for school groups, to workshops, theatre programs, and newsletters for community members. The content of the education programs also was diverse. Based on specific objectives, programs focused on topics ranging from tropical ecology and agroforestry to solid-waste management, local culture, and land-use planning. In reviewing alternative strategies for the school program, MCL staff assessed other existing programs and projects in Costa Rica and abroad. They were able to adapt and expand upon worksheets from an existing environmental conservation textbook published in Costa Rica for some of their classroom work, rather than invent activities from the beginning.
Process Evaluation

As a program is implemented, evaluation at this stage facilitates activities by guiding project operations.
Developing Pilot Programs

Once the program design is selected, a pilot program generally is developed, tried, evaluated, and improved several times before being implemented formally. The procedures and primary program operations are described and assessed at this stage. Process evaluation determines whether the curriculum
content--biological, social, economic, and/or political--was complete and whether preactivity preparation and postactivity follow-up occurred as planned. The roles of staff and facilitators, methods for disseminating information about the program, adequacy of funding, and audience participation in the implementation and modification are all assessed during the process evaluation. These data can be reported informally at weekly staff meetings, such as those that the MCL holds, or more formally with daily checklists of tasks and accomplishments. This information is useful for determining why objectives were or were not achieved, so that methods can be modified. It also provides information to others about implementation strategies that are critical to the success of the program.
Improving Program Efficiency

At the process stage, changes in the program should be made to make it run more efficiently. Qualitative feedback from participants about what works and what does not is especially useful throughout the pilot programs. For example, during the publication of a newsletter for farmers in Monteverde, the MCL education director decided that additional feedback was needed from the farmers to determine more appropriate content for the newsletter. Discussion groups were organized with the community to solicit additional information before the newsletter resumed. Feedback from staff involved in the program is also critical. Feedback is incorporated into the training program when new tour guide staff begin giving guided walks at the Belize Zoo. Their presentations are carefully critiqued by their fellow personnel to enhance their performance.
Product Evaluation

Product evaluation assesses the effectiveness and efficiency of the program. Are the objectives achieved? Are there shifts in participant knowledge, attitudes, or skills? Are there long-term changes in resource use or participant behavior, increased concern for environmental management, or conservation legislation? Whether the program is adopted by other organizations or in other regions also should be noted and assessed. Finally, is the program worth the time, money, and resources? Is it cost effective? Does it need modification? Should it be continued?
Assessing Program Outcomes Directly

Program outcomes may be immediate or long-term and expected or unexpected. There are a number of evaluation techniques for assessing the products of a
program. They vary in their objectivity, robustness, and resources required. The familiar written tests encountered in schools often are used to measure knowledge levels. They test the program participant's recall of facts and concepts learned. Furthermore, written questionnaires that measure interest levels and attitudes may be given to participants before and after a program. These need to be carefully designed and pilot tested. Participants are presented with statements that would reveal shifts in attitudes resulting from a program. Statements such as, "The reserve is too large," or "The reserve should be logged," and so on, can serve as a litmus test of attitudes toward the reserve when participants are asked whether they agree, have no opinion, or disagree with them (Jacobson 1987). Follow-up surveys at a later date are important to ascertain if shifts in attitudes are retained.

Other types of product evaluation can include performance tests where participants demonstrate a new skill they have mastered. For example, students can demonstrate the proper techniques for tilling the soil or mist-netting a bird. For the reforestation program in Monteverde, which involved a workshop, extension activity, and a tree nursery, the rates of seedling production and farmer participation were used as indices of achievement of objectives. The results: over 100,000 seedlings produced, 35 km of windbreaks planted, and more than 40 farmers participating, exceeding the program's objectives. For another MCL project, a community environmental education workshop, the goal was more efficient community solid-waste management. The specific objective was to reduce the amount of garbage for disposal. A glass-recycling project was set up in two communities. The outcome: hundreds of pounds of glass now are recycled, additionally providing extra income for the program. Secondary outcomes include activities to recycle other materials and the adoption of recycling activities by neighboring communities.

Other measures also can be used to provide further evidence of program outcomes. Direct interviews with the participants can be effective when participants' knowledge and attitudes are probed in a systematic manner. An example of this is the product assessment of the Belize Zoo Outreach Program, which visits primary schools. The zoo lecturer carefully questioned students at the end of each program to determine their recall of facts, as well as their impressions. Querying participants and, in the case of school programs, teachers, directly about their perceptions of the program can provide important information for improving it. Questions may include: "What were the best aspects/least effective aspects of the program?
What suggestions do you have to improve the program? What was the highlight of the program for you?" This type of written evaluation was asked of teachers as part of the Belize Zoo Outreach Program to improve the zoo classroom presentations.
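As a rough illustration of how pre- and post-program questionnaire responses of this kind might be summarized, the sketch below codes agree/no opinion/disagree answers numerically and reports the mean shift. The response data, the coding scheme, and the function name are hypothetical and are not drawn from the programs described here.

```python
# Minimal sketch: summarize shifts in coded attitude responses.
# Hypothetical coding: agree = 1, no opinion = 0, disagree = -1 for a
# favorably worded statement; unfavorable statements (e.g., "The reserve
# should be logged") would be reverse-coded before analysis.

def mean_score(responses):
    """Average coded response across participants for one statement."""
    return sum(responses) / len(responses)

pre_responses = [-1, 0, 0, 1, -1, 0, 1, 0, -1, 0]   # before the program
post_responses = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]     # after the program

shift = mean_score(post_responses) - mean_score(pre_responses)
print(f"Mean shift toward the favorable response: {shift:+.2f}")
```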
Assessing Program Outcomes Indirectly

Examining attendance patterns at programs or the production of exhibits, letters, conservation projects, and community activities can provide good supplemental data to augment more objective or quantifiable information. The numbers of people in the audience or participating in the environmental theatre in Monteverde, or visitors to the zoo in Belize, are indirect measures of program outcomes. Similarly, student responses to conservation essay contests in school programs in both countries help indicate the success of programs in disseminating information, although not in assessing effectiveness per se. In Belize, 14 schools participated in a national essay contest sponsored by the zoo. A higher response rate was achieved in Monteverde, where all 11 local schools participated in a horticulture contest. The results of the latter will establish a model garden on the school grounds.

Unobtrusive monitoring techniques also can provide insight into program effectiveness (e.g., Bennett 1974, Nowak 1984, Passineau 1975). Observing participants as they voluntarily engage in certain activities, such as enhancing wildlife habitat in their school yard, debating wildlife conservation laws, or writing to government agencies to obtain environmental information, provides feedback on the success of the program to program staff. Systematic observation of behavior, checklists, rating scales, simulation games, experimental choice situations, or role-playing may be used, especially in a school situation, to evaluate the effects of a program. The activities of participants involved in the development of environmental theatre in Monteverde serve as an example of this. Once the first play was completed, the performers volunteered to create and perform new environmental plays in other communities.
Identifying Secondary Outcomes

In addition to the specific knowledge, attitude, and skill objectives that are delineated for a program, consideration must be given to identifying unanticipated or secondary outcomes in these domains. For example, the majority of students in the school program may have learned appropriate application techniques for insecticides, whether or not they achieved the objective of planting eight new food crops in their home gardens. Or, visitors touring the
archeological vault in Belize may later report incidences of archeological looting, whether or not they remember the history of the Mayans. These types of outcomes also should be valued.
Measuring Program Success

Programs with specific objectives, such as the MCL extension program's aim to plant a certain area of windbreaks per year, or the Children's Rainforest education program's objective of raising funds to purchase a certain acreage of protected forest, can be easily evaluated by comparing the outcome achieved with the outcome originally projected. The Children's Rainforest program resulted in $700,000 being raised internationally, far exceeding its original objectives for land purchase in Costa Rica. Discrepancies between program objectives and outcomes provide information on the success of the program, and analysis of the results provides feedback for improving the program.

Specific objectives for programs aiming to change knowledge levels or shift attitudes also should provide a measuring stick for program success. Evaluating the products of these programs often requires the use of an experimental design with a control or comparison group. Extraneous factors that might influence the program results need to be controlled to ensure that the program outcomes are (1) valid--that the product evaluation really measures what it was designed to measure, and (2) reliable--that the same results would be generated if the measurement were repeated. Experimental and quasi-experimental designs provide accountability in determining program success (e.g., Bennett 1974, Langbein 1980, Marshdoyle and others 1982). Not only can it be reported that participants in a program, such as the MCL school program, were "very enthusiastic," but shifts in knowledge or attitudes can be documented and needed modifications can be identified.
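To make the idea of a quasi-experimental comparison concrete, the sketch below contrasts knowledge-test gains in a program group with those in a comparison group using a simple t test. The scores are invented for illustration; a real evaluation would also verify the comparability of the groups and the assumptions of the statistical test.

```python
# Minimal sketch: compare knowledge-test gains of a program group with a
# comparison (control) group, as in a simple quasi-experimental design.
# All scores are hypothetical.
from scipy import stats

program_pre  = [10, 12, 9, 11, 13, 10, 12, 11, 9, 10]
program_post = [15, 16, 13, 15, 17, 14, 16, 15, 12, 14]
control_pre  = [10, 11, 10, 12, 9, 11, 10, 12, 11, 10]
control_post = [11, 11, 10, 13, 10, 11, 11, 12, 11, 11]

program_gains = [post - pre for pre, post in zip(program_pre, program_post)]
control_gains = [post - pre for pre, post in zip(control_pre, control_post)]

# Did the program group gain more than the comparison group?
t_stat, p_value = stats.ttest_ind(program_gains, control_gains)
print(f"Mean gain (program): {sum(program_gains) / len(program_gains):.1f}")
print(f"Mean gain (control): {sum(control_gains) / len(control_gains):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```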
Improving the Program

Using several different measuring techniques simultaneously can help increase confidence in judging the program's worth. Data, both quantitative and qualitative, on the program results need to be systematically collected and summarized. Analysis of the data, using descriptive and, where possible, inferential statistics, should reveal the strengths and weaknesses of the program. If objectives are not achieved, careful analysis of the staff, program content, implementation procedures, etc., will identify which areas need modification. Improvement of the program is the ultimate result of evaluation. Continuous feedback to each of
the prior evaluation categories allows for the reevaluation of the goals, objectives, target audiences, methods and techniques, facilities, resources and constraints, and the implementation process itself: scheduling, staffing, and the like. Through the process of evaluation, it is within every organization's capability to improve its programs.
Conclusions

The organizations in Belize and Costa Rica that rely on outside funding for program support increasingly are finding that careful collection of data about the planning, implementation, and product of their programs is necessary in substantiating claims about their benefits. Reports of the evaluation results can be used to gain support for the program, provide accountability to administrators or funding sources, and provide information back to the participants, staff, and community members.

Systematic evaluation during all stages of program development helps provide program reliability and validity, characteristics lacking in the commonly ad hoc development of conservation education programs. This is particularly true in informal settings, such as programming for parks and rural development projects. The model is both valid and useful in this context (Richardson and Pugh 1981). The principles proposed in the model helped ensure internal validity (Stufflebeam and others 1971); the emphasis on planning and process data as well as product data helped provide a means to judge whether the information obtained was an accurate depiction of the phenomenon studied. Unlike much traditional evaluation that only presents product information, data were collected about the programs from inception to completion, making it possible to determine under which conditions the results of a particular program might be generalized. Given the diversity of cultures, politics, economics, and resources in Central America, this information is critical.

The major problem with the model's general nature was a consequent lack of specificity in prescribing methods for product evaluation. It left the development of appropriate test instruments and other techniques up to the evaluator. Experimental methods or statistical analysis needed for some product evaluation may be impossible in some informal settings because of limited research experience, money, or time. However, as noted earlier, other outcome evaluation techniques, such as observations and interviews, could be employed carefully and could provide useful information for making decisions (Nowak 1984).
"There is a notable scarcity of documented conservation education case studies, so objective and detailed accounts of program success and difficulties can significantly help others" (Wood and Wood 1985). As evaluation becomes an inherent part of the development and implementation, as well as the assessment, of" conservation prograins, a better understanding of elements leading to effective programs should emerge. The judgements that result should be made in a more coherent and rational way, subject to widespread participation and review. Once effective approaches are determined for specific situations, natural resource conservation through education should become more widespread and successful.
Acknowledgments

I am very grateful to the many people in the organizations in Belize and Costa Rica mentioned in this article for their valuable discussion and help. These include W. Aspinall, J. Crisp, W. Craig, C. Echeverria, R. Day, E. Gillett, V. Gonzalez, L. Miller, L. Nicolait, F. Puck, R. Manzanero, S. Matola, E. Raymond, and G. Vargas. I thank J. Hardesty, J. Robinson, C. Shepard, and J. Smallwood for reviewing various drafts of this article. Support for this work was provided in part by the UF Center for Latin American Studies, the Tinker Foundation, and the Programme for Belize. This is the Florida Agricultural Experiment Station Journal Series No. R-00372.
Literature Cited

Bennett, D. B. 1974. Evaluating environmental education programs. Pages 113-164 in J. A. Swan and W. B. Stapp (eds.), Environmental Education. Halsted Press, New York.

Brown, L. R. 1988. And today we're going to talk about biodiversity--that's right, biodiversity. Pages 446-449 in E. O. Wilson (ed.), Biodiversity. National Academy Press, Washington, DC.

Dietz, L. A. 1986. Community conservation education program for the golden lion tamarin. Pages 8-16 in Building Support for Conservation in Rural Areas, Conference Proceedings, 27-31 May 1986. QLF/Atlantic Center for the Environment Publication, Ipswich, Massachusetts.

Fitter, R. 1986. Wildlife for man: How and why we should conserve our species. William Collins Sons and Co., London. 223 pp.

Jacobson, S. K. 1987. Conservation education programs: evaluate and improve them. Environmental Conservation 14:201-206.

Jacobson, S. K. 1988a. Media effectiveness in a Malaysian park system. Journal of Environmental Education 19:9-15.

Jacobson, S. K. 1988b. Evaluation of environmental education needs in Belize. Report to the Programme for Belize, Belize City, Belize, August 1988. 26 pp (unpublished).
Janzen, D. H. 1987. Editorial. Conservation Biology 1(2):95-96.

Langbein, L. I. 1980. Discovering whether programs work: A guide to statistical methods for program evaluation. Scott, Foresman and Co., Glenville, Illinois. 184 pp.

Marshdoyle, E., M. L. Bowman, and G. Mullins. 1982. Evaluating programmatic use of a community resource: The zoo. Journal of Environmental Education 13(4):19-26.

Nicolait, R. and Associates (eds.). 1984. Belize, country environmental profile. U.S.A.I.D. Contract No. 505-0000-C00-3001-00; R. N. & A. Ltd., P.O. Box 785, Belize City, Belize.

Nowak, P. F. 1984. Direct evaluation: A management tool for program justification, evolution, and modification. Journal of Environmental Education 15(4):27-30.

Olson, E. C., M. L. Bowman, and R. E. Roth. 1984. Interpretation and nonformal environmental education in natural resources management. Journal of Environmental Education 15(4):6-10.

Passineau, J. P. 1975. Walking the "tightrope" of environmental education and evaluation. Pages 371-414 in N. McInnes and D. Albrecht (eds.), What makes education environmental. Environmental Educators and Data Couriers, Inc., New York.

Ramos, M. A. 1988. The conservation of biodiversity in Latin America: A perspective. Pages 428-436 in E. O. Wilson (ed.), Biodiversity. National Academy Press, Washington, DC.

Richardson, G. P., and A. L. Pugh. 1981. Introduction to system dynamics modeling with DYNAMO. M.I.T. Press, Boston, Massachusetts.

Scriven, M. 1972. Pros and cons about goal-free evaluation. The Journal of Educational Evaluation December:1-4.

Sharpe, G. D. 1982. Interpreting the environment. John Wiley & Sons, New York. 694 pp.

Soule, M. E. 1986. Conservation biology and the "real world." Pages 1-12 in M. E. Soule (ed.), Conservation biology: the science of scarcity and diversity. Sinauer Associates, Sunderland, Massachusetts.

Stapp, W. B., and D. A. Cox. 1974. Environmental education activities manual. Stapp and Cox, 32493 Shady Ridge Drive, Farmington Hills, Michigan. 764 pp.

Stufflebeam, D. L., W. J. Foley, W. J. Gephart, E. G. Guba, R. I. Hammond, H. O. Merriman, and M. M. Provus. 1971. Educational evaluation and decision making. F. E. Peacock, Chicago, Illinois. 368 pp.

Wood, D. W., and D. S. Wood. 1985. Conservation education: A planning guide. Peace Corps Manual M-23, Washington, DC. 115 pp.