USING MULTIPLE METHODS TO EVALUATE A FRESHMEN DESIGN COURSE

Cynthia J. Atman1, Robin S. Adams2 & Jennifer Turns3

Abstract - ENGR 100 is a hands-on, team-based freshmen design course in which students engage in design activities, learn about the field of engineering, and develop an understanding of what kinds of skills and knowledge are necessary to pursue a career in engineering. The learning that results from these kinds of authentic tasks is not easy to assess with traditional tests or a single assessment method. To evaluate the effectiveness of the teaching methods and course content, we created an evaluation plan that utilized triangulation of data through multiple methods. Triangulation is not just a tactic to establish the validity and credibility of findings; it is a robust mode of inquiry that provides opportunities to view complementary and contrasting data. In this paper we present an in-depth description of our evaluation plan. This includes a description of the planning and administration of the assessment tasks, the selection of assessment methods and development of instruments, and how the plan triangulated data across assessment methodologies.

INTRODUCTION

Integrating design experiences into engineering education is at the center of current efforts to transform the undergraduate engineering experience [1-3]. This trend has expanded the scope of design education from capstone to freshmen-year experiences [4], and the range of design activities from individual and linear approaches to team-based and systems-oriented approaches to solving design problems [5]. In the context of the new ABET EC 2000 criteria [6], design experiences may provide opportunities for students to gain skills and knowledge in such areas as design, teamwork, communication, and understanding global and societal issues relevant to engineering problems.

The learning that results from design courses can be difficult to operationalize or measure, and traditional assessment tools or existing instruments may not be appropriate. For these courses, the goal of assessment is to provide a means for clarifying what we should be teaching, identifying disciplinary knowledge and skills, and monitoring students' performance. In light of the new EC 2000 criteria, we can expand this goal of assessment to include making judgments about curricula and instructional practices with an aim towards improvement. By combining these trends of 1) integrating design experiences across the curriculum and 2) promoting assessment of student learning, we identify a need to develop better assessment methodologies for characterizing and measuring the learning that results from design experiences.

As part of a larger national research project [7], we developed and implemented a plan to evaluate the educational benefits of a freshmen engineering design course. To explore the level of congruence between course learning objectives and educational benefits for students, we adopted a multiple-method assessment approach. Triangulation through multiple methods is less a tactic to establish validity or credibility of findings than it is a mode of inquiry [8]. As a mode of inquiry, triangulation provides opportunities to view complementary and contrasting data [9]. This provides different vantage points from which to view evaluation questions and data, which in turn reduces uncertainty in interpretations and establishes contextual validity [9].

In this paper we present an in-depth description of an evaluation plan to assess student learning in a freshmen engineering design course. This includes a description of our process for creating and administering the evaluation plan, as well as the process for selecting assessment methodologies. We also provide examples from the assessment tasks, and a map illustrating how data were triangulated across the assessment methodologies. The goal of our evaluation plan is to summarize the effectiveness of the teaching methods and course content, and to identify areas for improvement in the course. We know that assessing student learning is difficult, and we hope that our experience may serve as an illustrative example.

1 Cynthia J. Atman, University of WA, Center for Engineering Learning and Teaching, Box 352180, Seattle, WA 98102, [email protected]
2 Robin S. Adams, University of WA, Center for Engineering Learning and Teaching, Box 352180, Seattle, WA 98102, [email protected]
3 Jennifer Turns, University of WA, Center for Engineering Learning and Teaching, Box 352180, Seattle, WA 98102, [email protected]


DESCRIPTION OF ENGR 100

ENGR 100 is a freshman design course that was originally developed by the ECSEL Coalition [10] and is offered as an elective at the University of Washington. Each quarter, four sections are offered, and each section enrolls approximately 30 students. The course has two very broad objectives: impart specific skills associated with design at an early point in the curriculum, and provide an early opportunity for students to envision their place in the profession [11]. Specific skills related to course activities include identifying and describing attributes of the design process, describing the role that math, science and engineering fundamentals play in design problem solving, working effectively in teams, and effectively presenting technical ideas. The course is organized to encourage a high level of faculty and student interaction, and is centered on a series of short design exercises and extensive design modules. Activities are structured to provide a realistic exposure to team problem solving and effective design strategies, as well as to promote the importance of communication and teamwork.

As part of a university-wide initiative to help students develop a greater awareness of international issues and compete in a global engineering market, a new section of ENGR 100 was piloted in autumn of 1999. In the ENGR 100 International Freshman Interest Group (FIG) section, students and faculty from the University of Washington and Tohoku University in Japan engaged in collaborative design and research team projects. For this section, the original ENGR 100 course goals were expanded to include the ability to contribute to the engineering profession, to work effectively in international teams, and to understand global and societal issues relevant to engineering solutions. Course activities were structured to provide an authentic exposure to team-based research and design activities, to increase awareness of international issues and professional practices, and to increase students' confidence in their ability to contribute to science and technology. Projects ranged from simulating the molecular dynamics of defects in crystalline materials, to designing and testing a MEMS-based microvalve, to designing an SMA-actuated device for repairing holes in the septal wall of the human heart [12].

CREATING, ADMINISTERING AND IMPLEMENTING AN EVALUATION PLAN

1. Developing and Refining Course Objectives: The process we followed for creating and administering our evaluation plan shares many features with models for continuous quality improvement [13-14]. First, the evaluation team worked with the instructional team to refine course objectives. By taking a collaborative approach, we were able to build trust and understanding, increase awareness of student learning and pedagogy issues, and promote opportunities to incorporate assessment strategies into future course offerings [15]. In other words, by developing assessment instruments we were able to build a strong bridge connecting teaching practices to student learning.

2. Generating Evaluation Questions: The course objectives provided the basis for generating evaluation questions and characterizing learning objectives. We also needed to know who would use the information and how, as well as identify particular instructional or learning concerns. For both versions of the course, there were three audiences: the instructors; administrators and other faculty who would play a role in the process of refining the course; and administrators and faculty involved in the ABET accreditation process. Therefore, it was necessary that information from the evaluation provide individual feedback to the instructors, inform the decision-making process for improving the course, and provide insight into characterizing evidence of student learning outcomes related to the EC 2000 criteria. Our evaluation goals for both versions of ENGR 100 were to assess (1) the effectiveness of teaching methods and course content, (2) the impact of the course on student learning and motivation to pursue engineering, and (3) the match between learning objectives and educational benefits for the students. Additional questions specific to the Tohoku section were to model characteristics of successful projects and international collaboration, to assess the ability of freshmen to engage in authentic research and design experiences, and to explore the ways in which freshmen learn in a group and in an international cooperative setting.

3. Operationalizing Learning Objectives: To translate course objectives into student learning outcomes we utilized the attribute frameworks in reference [7]. These frameworks characterize the ABET "a-k" learning outcomes at various levels of detail related to Bloom's Taxonomy in terms of knowledge, skills and attitudes [11]. Because our context is a freshmen design course, we focused on the comprehension and application levels of the framework and identified specific learning outcomes that best mapped to our course objectives. These outcomes were later transformed into aspects of our assessment tasks, such as survey items or open-ended questions; some examples are provided later in Table II. Utilizing these frameworks [7, 11] provides a means for characterizing course objectives and operationalizing learning objectives in a common language. This makes it easier to clearly identify areas for improvement, and promotes continuous feedback between assessing learning and monitoring instructional goals and activities.


4. Identifying Opportunities to Collect Data: Identifying opportunities to gather student data requires knowing when you can have access to students, what kinds of data you can gather, and how many students can be studied. Answers to these questions were used to develop timelines for administering assessment tasks and collecting data. They also limited the boundaries of the evaluation plan in terms of how many students were in the study and what kinds of quantitative and qualitative data were gathered. A related issue was creating mechanisms for protecting the privacy of students and for the ethical use of data. For our purposes, we followed the guidelines of our human subjects application process, which served as an external review of our plan. We utilized a voluntary consent process that generated a unique code to protect confidentiality (a sketch of one way such codes can be generated follows step 5 below) and limited access to the data until final grades were recorded.

5. Selecting Assessment Methods: Selecting appropriate assessment methodologies requires knowing the advantages and disadvantages of different methods, how to minimize the disadvantages, and what kinds of information different methods provide. One way of comparing across these issues is to create a methodology matrix (see Table I). Items in the matrix were drawn from a variety of sources [16-19], and for the sake of brevity only methods utilized in our evaluation plan are described. As illustrated in Table I, our evaluation plan utilized triangulation across multiple methods, and integrated quantitative and qualitative assessment methods: closed-ended surveys, open-ended questionnaires, a concept map task, observations and interviews with students in design teams (Tohoku section only), course performance, and archival data.

Our selection criteria for adopting or creating assessment instruments can be summarized in terms of five issues: credibility, validity, utility, appropriateness of the instrument given our constraints, and level of experience using the methods. Each of these issues is summarized below with respect to each assessment method. We adopted one existing instrument. This decision was based on the credibility, validity and utility of the instrument to describe changes in students' attitudes about engineering and their confidence in their ability to pursue engineering [20]. We also developed our own survey instrument, which adopted some items from an ECSEL instrument that has established both reliability and credibility in measuring students' perceptions of their abilities related to some of the EC 2000 outcomes (e.g., function in a team, communicate effectively, solve problems) [21]. There are also mid-program performance-based tasks available, such as those developed by TIDEE [22], but these were not selected given our scheduling constraints. Members of our evaluation team had experience with utilizing concept maps for formative and summative assessment, and for characterizing learning objectives [23]. In Spring '99 we piloted a concept map task asking students to represent their understanding of engineering, and we believe that the task is both relevant and useful for addressing our evaluation questions.
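Step 4 above mentions a voluntary consent process that generated a unique code for each consenting student. The paper does not describe how the codes were produced; the following is a minimal sketch of one common approach, assuming a keyed hash of the student identifier (the salt value, function name, and example ID are hypothetical, not taken from the study).

```python
import hashlib
import hmac

# Hypothetical project-specific secret, held only by the evaluation team;
# without it, a code cannot be matched back to a student ID.
PROJECT_SALT = b"engr100-eval-1999"  # placeholder value, not from the paper

def participant_code(student_id: str) -> str:
    """Return a short, stable, anonymized code for one consenting student."""
    digest = hmac.new(PROJECT_SALT, student_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:8]  # 8 hex characters is ample for a class-sized study

# The same student always maps to the same code, so pre- and post-course
# data can be linked without storing names or ID numbers in the data set.
print(participant_code("1234567"))
```

A keyed hash has the useful property that the mapping is repeatable for linking records but cannot be inverted by anyone who does not hold the project secret.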

TABLE I. METHODOLOGY MATRIX

Closed-ended Surveys
Advantages: Easy to administer, replicate, and measure changes over time; can perform statistical analyses; can provide clear implications for improvement.
Disadvantages: Requires considerable knowledge and skill to make a "good" survey; results are not direct evidence of learning; forced responses. Reduce by: cross-validating and piloting instruments.
Outcome: Changes in self-assessed abilities, knowledge, and attitudes over time; attitudes about issues; self-assessed confidence levels.

Open-ended Questionnaires
Advantages: Students describe or explain issues in their own words; can be direct evidence of learning.
Disadvantages: Data analyses are time consuming; results are difficult to generalize; scoring rubrics are difficult to develop. Reduce by: cross-validating and working with experts to develop scoring rubrics.
Outcome: In-depth explanations or definitions of issues; identification of naive conceptions of issues.

Concept Map
Advantages: Representations of the organization a student sees among concepts; learning or instructional aid; can perform statistical analyses; may be used as an alternative to an exam.
Disadvantages: Scoring rubrics are difficult to develop; data analyses are time consuming. Reduce by: cross-validating and working with experts to develop scoring rubrics.
Outcome: Characterization of expertise in a given subject area, as well as identification of naive conceptions; illustration of understanding of a topic.

Observation & Interviews
Advantages: High internal validity; themes emerge from the data; access to attitudes and interactions; opportunities to probe or follow up.
Disadvantages: Students may behave atypically; observer bias and rapport; extremely time intensive. Reduce by: cross-validating, piloting interview questions, creating audio or video records, and having participants review summaries.
Outcome: Information on how students experience a course; information on group interactions; information about individual attitudes and beliefs.

Coursework and Grades
Advantages: Direct measure of skills and application of knowledge; particularly relevant for assessing course goals and objectives.
Disadvantages: Ratings may be subjective; performance may be atypical; may not provide information on how performance relates to educational experience. Reduce by: cross-validating and working with experts to develop scoring rubrics.
Outcome: Direct measure of performance; can be directly related to course goals and objectives.

Archival Data
Advantages: Background information on population; track retention and graduation rates; often readily available and cost efficient.
Disadvantages: Does not often provide information on correlations; confidentiality issues; may encourage attempts to misuse data. Reduce by: cross-validating and utilizing guidelines for protecting privacy.
Outcome: Biographical and academic data; longitudinal data (e.g., retention, GPA); establishes representativeness of the sample.

Because of the exploratory nature of the Tohoku pilot evaluation questions, we adopted an ethnographically based approach to directly identify attributes of the course activities that may contribute to learning. Because this method allows themes to emerge naturally, it also encourages opportunities to identify new evaluation issues. We also took full advantage of our ability to access archival data and course grades. Archival data provide baseline descriptions of student characteristics, as well as opportunities to track retention. Although course grades can be a direct measure of student ability, they are limited in their ability to inform areas for course improvement. None of these methods, on their own, adequately addressed our concerns or provided the detail we needed to sufficiently respond to our evaluation questions. Therefore, we combined approaches to triangulate across assessment methods.

6. Creating and Administering Assessment Tasks: Descriptions of each assessment task are provided below.

Survey and Open-ended Questionnaire: The survey contained three sets of questions, and included both open- and closed-ended items. The pre-test survey was administered during the first week of the quarter, and the post-test survey was administered during the last week of the quarter. The closed-ended portion of the survey took approximately 25 minutes, and the open-ended portion was a take-home assignment to be returned at the next class meeting. Responses to closed-ended items were on a 5-point Likert scale. Example questions are provided in Table II and in reference [24].

The purpose of the first set of questions was to provide insight into students' perceptions of their knowledge and skills, and how these may have changed as a result of their enrollment in ENGR 100. Broad categories of survey items include self-assessed abilities to: solve design problems, solve engineering problems, utilize laboratory equipment and internet resources, communicate in oral and written formats, function in a team, and understand the field of engineering. The purpose of the second set of questions was to provide insight into how students' understanding of engineering and teamwork may have changed as a result of their enrollment. In addition, the post-test survey provided opportunities for students to reflect on their experience and to respond to questions about what they think they learned. The purpose of the third set of questions was to provide insight into how students perceived their confidence in their abilities to pursue an engineering career, their attitudes about engineering as a field, and how these may have changed as a result of their enrollment [20].

Concept Map: The primary purpose of the concept map task was to provide insight into students' perceptions of their engineering knowledge and skills as described in the new EC 2000 criteria. A secondary purpose was to promote the task as a learning aid to help students situate themselves as engineering practitioners. The task was administered during the middle of the quarter. Students were given approximately 30 minutes to create their map, and the administrator worked with students for an additional 30 minutes to synthesize what the maps represent. The format of the synthesis activity was to have students compare concepts from their maps to the EC 2000 criteria, as well as identify some engineering concepts that are not explicit in the criteria (e.g., what engineers make, kinds of engineering jobs); a sketch of one way such a comparison might be tallied appears below.

Observations and Interviews: Two student teams in the Tohoku section also participated in an observational study. Typically, teams included four students from each university, a faculty member from each university, and at least one senior peer or graduate assistant. The goal was to understand the ways in which engineering students learn and communicate in a group and in an international cooperative setting, and to help us understand the benefits of freshmen research projects.
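The concept map synthesis activity described above compares the concepts students draw against the EC 2000 criteria. The paper does not specify how maps were scored; the sketch below, with entirely hypothetical concept lists and keyword buckets, shows one simple way such a comparison could be tallied rather than the procedure actually used in the study.

```python
# A student's concept map reduced to its node labels (hypothetical example).
student_concepts = {"design", "teamwork", "prototyping", "ethics", "jobs in industry"}

# Hypothetical keyword buckets loosely derived from the EC 2000 "a-k" outcomes.
ec2000_buckets = {
    "c: design":        {"design", "prototyping", "specifications"},
    "d: teams":         {"teamwork", "collaboration"},
    "f: ethics":        {"ethics", "professional responsibility"},
    "g: communication": {"presentations", "reports", "communication"},
}

# Which outcome areas a map touches, and which concepts fall outside the criteria.
coverage = {label: student_concepts & words for label, words in ec2000_buckets.items()}
outside = student_concepts - set().union(*ec2000_buckets.values())

for label, hits in coverage.items():
    print(label, "->", sorted(hits) if hits else "not represented")
print("concepts outside the criteria:", sorted(outside))
```

A tally of this kind would support the synthesis discussion (which outcomes a student's picture of engineering already covers) as well as the secondary goal of identifying engineering concepts, such as kinds of engineering jobs, that the criteria do not make explicit.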

TABLE II. EXAMPLE QUESTIONS FROM PRE- AND POST-TEST SURVEYS

Closed-ended items:
• I can identify the knowledge and the sources of information needed to understand the problem.
• I can recognize the potential risks to the public of an engineering solution or design.
• I can provide specific and constructive feedback to other team members.
• When working in teams, I can modify my own style to accommodate the needs of others.
• When working in teams, I can be a technically contributing member.

Open-ended items:
• What kinds of information would you gather to solve an engineering design problem?
• What issues and concerns should you consider when solving an engineering design problem?
• Explain what you think teamwork means.

Closed-ended items (attitudes):
• Engineering is a precise science.
• As a result of taking this class, how confident are you that engineering is the right choice for you?
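Closed-ended items like those in Table II were administered on a 5-point Likert scale in matched pre- and post-test surveys so that changes in self-assessed abilities could be measured. The analysis itself is not described in this paper; the following is a minimal sketch of a paired post-minus-pre comparison per item, with invented participant codes, item names, and response values used purely for illustration.

```python
# Paired pre/post responses on a 5-point Likert scale (5 = strongly agree),
# keyed by anonymized participant code; all values here are invented.
pre  = {"a1b2": {"feedback_to_team": 3, "modify_own_style": 2},
        "c3d4": {"feedback_to_team": 4, "modify_own_style": 3}}
post = {"a1b2": {"feedback_to_team": 4, "modify_own_style": 4},
        "c3d4": {"feedback_to_team": 5, "modify_own_style": 3}}

def mean_change(item: str) -> float:
    """Average post-minus-pre shift for one item over students with both surveys."""
    paired = [post[s][item] - pre[s][item] for s in pre if s in post]
    return sum(paired) / len(paired)

for item in ["feedback_to_team", "modify_own_style"]:
    print(f"{item}: mean change = {mean_change(item):+.2f}")
```

In practice such per-item shifts would be one input among several; the triangulation described in this paper deliberately reads them alongside open-ended responses and observational data rather than in isolation.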

TABLE III. TRIANGULATION ARTICULATION MAP

Assessment methods (columns): Survey: Closed-ended Questions; Survey: Open-ended Questions; Attitude Survey; Concept Map; Observation & Interviews; Course Grades.

Evaluation Questions:
• Effectiveness of teaching methods and course content
• Impact of course on learning and motivation to pursue engineering
• Comparison of learning objectives to learning outcomes
• Characteristics of successful projects and international collaborations
• Ability of freshmen to contribute to the engineering profession
• Explore the ways in which freshmen learn in a group and in an international cooperative setting

Learning Benefits:
• Design
• Apply math, science and engineering knowledge to solve problems
• Conduct experiments and analyze data
• Function in a team
• Communicate effectively
• Understand engineering as a field and as a career
• Understand global and societal issues

Other Benefits:
• Motivation to pursue an engineering career
• Motivation to pursue international or research opportunities
• Attitudes about their ability to be creators and contributors to engineering knowledge
• Retention in engineering

Students were asked to voluntarily participate, and the data collected include: classroom and laboratory observations of interactions within student teams for the duration of the course, forwarded email transactions among team members, informal interviews with student team members, and exit interviews with both students and faculty. Interviews were held privately and took a maximum of one hour. Examples of questions include: how was the division of labor in the group established, how has this experience helped you understand engineering, and what was the nature of your interactions with your peers and instructors.

Archival Data and Course Grades: Additional data included background data from student records, course grades, and academic transcripts. Archival data were utilized to provide baseline characteristics of students, track retention, and analyze the representativeness of the students in this study as compared to the characteristics of the whole freshmen pre-engineering class. Course grades were used to compare achievement on course assignments with measures of student learning from the surveys, concept map tasks, and observations.

7. Mapping Methods to Learning Objectives: To provide a framework for triangulating how the different assessment methods contribute to our ability to respond to our evaluation questions, we created an articulation map (see Table III). The map identifies our evaluation questions, the educational benefits of ENGR 100, and whether or not an assessment task provides any data along these dimensions. As stated earlier, triangulating across multiple methods provides rich opportunities to characterize learning from different perspectives and at different levels of detail. In addition, our plan provides a rich opportunity to perform a comprehensive comparison of educational outcomes across two very different freshmen-year design experiences.
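The articulation map in Table III is essentially a matrix of evaluation dimensions against assessment methods. As a rough illustration of how such a map supports triangulation, a few lines of code can flag dimensions covered by fewer than two methods; the dimension and method names below are abbreviated, and the entries are examples rather than a transcription of Table III.

```python
# Which assessment methods provide data on which dimensions (illustrative subset only).
articulation_map = {
    "function in a team":       {"closed-ended survey", "open-ended survey", "observation"},
    "communicate effectively":  {"closed-ended survey", "observation", "course grades"},
    "understand global issues": {"open-ended survey"},
    "retention in engineering": {"archival data"},
}

# Triangulation requires at least two independent sources per dimension.
for dimension, methods in articulation_map.items():
    status = "triangulated" if len(methods) >= 2 else "single source; consider adding a method"
    print(f"{dimension}: {len(methods)} method(s), {status}")
```

Used this way, the map is not only a reporting device but a planning check: gaps show where an additional instrument would be needed before findings on that dimension can be cross-validated.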

CONCLUSION

In this paper we have presented the details of our plan for evaluating a freshmen engineering design course. This includes an explanation of our evaluation decisions and a description of our assessment methodologies. Finally, we have provided a methodology matrix for selecting assessment methods, and an articulation map for identifying areas for triangulating across assessment methods. By triangulating across methods we have created opportunities to cross-validate assessment instruments, gain insight into the ability of different methods to independently describe unique educational benefits, and provide rich and detailed information about what students gain from a particular learning experience.

Although we are still in the process of analyzing data, preliminary analyses support our evaluation decisions. Overall, our approach may be described as a means of connecting evaluation intimately to the assessment of student learning, with the goal of informing decision making in the classroom.

ACKNOWLEDGMENTS

This project was made possible with the help of many colleagues. We would like to thank all the ENGR 100 students who participated in the study. We would also like to acknowledge the contributions of Gretchen Kalonji, Denice Denton, Rie Nakamura, Remie Calalang, and Mike Safoutin in developing and executing the evaluation plan. In addition, this project was funded by three sources: an NSF grant (EEC-9872498), an internal University of Washington grant (Tools for Transformation), and the NSF-sponsored Engineering Coalition of Schools for Excellence in Education and Leadership (ECSEL).

REFERENCES

[1] American Society for Engineering Education, Engineering Education for a Changing World, Engineering Deans Council and Corporate Roundtable of ASEE, October 1994.
[2] National Research Council, Engineering Education: Designing an Adaptive System, National Academy Press, 1995.
[3] National Science Foundation, Restructuring Engineering Education: A Focus on Change, 1995.
[4] McNeill, B. W., Evans, D. L., Bowers, D. H., Bellamy, L. & Beakley, G. C., "Beginning Design Education with Freshmen", Engineering Education, July/August 1990, pp. 548-553.
[5] Sheppard, S. & Jenison, R., "Examples of Freshmen Design Education", Journal of Engineering Education, 13 (4), 1997, pp. 248-161.
[6] Accreditation Board for Engineering and Technology, Engineering Criteria 2000: Criteria for Accrediting Programs in Engineering in the United States (2nd edition), Engineering Accreditation Commission, Accreditation Board for Engineering and Technology, 1998.
[7] URL: http://www.engrng.pitt.edu/~ec2000/
[8] Miles, M. & Huberman, M., Qualitative Data Analysis: An Expanded Sourcebook (2nd edition), Beverly Hills, Sage Press, 1994.
[9] Lincoln, Y. & Guba, E. G., Naturalistic Inquiry, Newbury Park, Sage Publications, 1985.
[10] URL: http://echo.umd.edu/
[11] Safoutin, M., Atman, C., Adams, R., Shuman, T. R., Kramlich, J. & Fridley, J., "A Design Attribute Framework for Course Planning and Learning Assessment", IEEE Transactions on Education, May 2000.
[12] URL: http://www.courses.washington.edu/uwtohoku/
[13] McGourty, J., "Four Strategies to Integrate Assessment into the Engineering Education Environment", Journal of Engineering Education, 88 (4), 1999, pp. 391-395.


[14] Rogers, G. M. & Sando, J. K., Stepping Ahead: An Assessment Plan Development Guide, Terre Haute, Rose-Hulman Institute of Technology, 1996.
[15] Angelo, T. A. & Cross, K. P., Classroom Assessment Techniques: A Handbook for College Teachers (2nd edition), San Francisco, Jossey-Bass Publishers, 1993.
[16] Prus, J. & Johnson, R., "A Critical Review of Student Assessment Options", New Directions for Community Colleges, 88, 1994. (Published as the monograph Assessment & Testing, Myths & Realities, Editors Bers & Mittler.)
[17] Besterfield-Sacre, M. E. & Atman, C. J., "Survey Design Methodology: Measuring Freshmen Attitudes About Engineering", Proceedings of the Annual ASEE Conference, 1994.
[18] Converse, J. M. & Presser, S., Survey Questions: Handcrafting the Standardized Questionnaire, Newbury Park, Sage, 1986.
[19] Patton, M. Q., Qualitative Evaluation and Research Methods, Newbury Park, Sage Publications, 1990.
[20] Besterfield-Sacre, M. E., Atman, C. J. & Shuman, L. J., "Engineering Student Attitudes Assessment", Journal of Engineering Education, April 1998, pp. 133-141.
[21] Terenzini, P. T., Cabrera, A. F., Parent, J. M. & Bjorklund, S. A., "Preparing for ABET 2000: Assessment at the Classroom Level", Proceedings of the Annual ASEE Conference, 1998, Session 2630.
[22] URL: http://www.cea.wsu.edu/TIDEE/
[23] Turns, J., Atman, C. J. & Adams, R., "Concept Maps for Engineering Education: A Cognitively Motivated Tool Supporting Varied Assessment Functions", IEEE Transactions on Education, May 2000.
[24] URL: http://www.engr.washington.edu/~celtweb/

