USING TECHNOLOGY TO ENHANCE OUTCOME ASSESSMENT IN ENGINEERING EDUCATION 1

Jack McGourty 2, Larry Shuman, Mary Besterfield-Sacre, Ray Hoare and Harvey Wolfe 3, Barbara Olds and Ronald Miller 4

1 This paper was supported in part by National Science Foundation grants EEC-9872498, Engineering Education: Assessment Methodologies and Curricula Innovations, and EEC-9727413, Gateway Engineering Education Coalition, and by Engineering Information Foundation grant EiF 98-4.
2 Columbia University
3 University of Pittsburgh
4 Colorado School of Mines

Abstract: This paper describes on-going research at several major universities on the design, development, and application of outcome assessment methodologies enhanced by information technologies. Several applications are described, along with their advantages and disadvantages. Future research objectives are discussed.
Index Terms: Assessment, Evaluation, On-line Assessment, Technology-Mediated Assessment.

INTRODUCTION

In today's competitive environment, outcome assessment is now a primary focus of higher education. This is due in part to pressure from industry, academic accreditation entities, and government agencies to incorporate broader student learning outcomes and sound assessment techniques into education programs and courses. The most relevant example is the incorporation by the Accreditation Board for Engineering and Technology (ABET) of eleven student learning outcomes and assessment into its Engineering Criteria 2000 (EC 2000), which is now required for over 1,600 undergraduate US engineering programs at more than 300 institutions [1]. As a result, there has been increased interest in assessment methodologies and research within the engineering education community. As validated assessment methods begin to appear, there is a strong need to integrate them into adaptable and accessible system applications that must become an essential component of the engineering learning environment. Information technology makes such assessment applications feasible. However, while many efforts have attempted to use technology for knowledge transfer, increased communication, and administrative productivity [2], there is little more than anecdotal evidence that technology has substantially enhanced learning or educational assessment processes. Further, institutions face formidable challenges when implementing outcome-based assessment processes, not the least of which is finding the resources for collecting, tabulating, and disseminating information in a useful format and timely manner. We propose that information technology (IT) can mitigate these obstacles by enabling data to be efficiently collected, analyzed, and then fed back to relevant constituents, allowing more time to be spent using that information to enhance learning. Hence, the integration of emerging information technologies and outcome-based assessment methodologies offers a true opportunity to improve higher education.

OUTCOME-ASSESSMENT RESEARCH
Over the past three years, under NSF funding, assessment researchers from five universities have been working to identify, evaluate, and validate various methodologies for assessing engineering education and to provide educators with new tools where none previously existed [3]. We have been investigating a number of assessment methodologies, many of which have been developed and used successfully in areas other than engineering education. Currently, we are conducting a series of very promising "triangulation" experiments in which we are using multiple assessment methodologies to measure specific undergraduate outcomes on defined student cohorts at our five institutions [4]. Because we lack definitive outcome measures at present, triangulation is a necessary step in validating outcome-based assessment methods. By triangulating, we can build upon each method's strengths while minimizing its weaknesses. As part of these experiments, we are both adapting existing assessment methodologies and developing new ones. We expect to determine which methods are most effective for particular outcomes, as well as to better understand how students achieve stated outcome objectives during the course of their undergraduate education. Through these experiments, we will better understand the applicability of a number of promising methods: surveys, concept maps, behavioral analysis, competency measurements, measurements of intellectual development, authentic assessments, multi-source feedback, portfolios, and data warehousing [5, 6].

By working collaboratively, we have been able to proceed with methodological developments in a well-coordinated manner across our institutions. Specifically, we are developing and testing multi-source feedback systems at Columbia University and the University of Pittsburgh [7, 8]; reflective portfolios at the Colorado School of Mines and the Rose-Hulman Institute of Technology [9, 10];
and concept maps as assessment tools at the University of Pittsburgh and the University of Washington [11]. In addition, authentic assessment and intellectual development research is being conducted at the Colorado School of Mines [12]. At the University of Pittsburgh, research into modeling the educational system [13, 14], along with using closed-form surveys [15, 16, 17], data warehousing, and behavioral analysis, is underway. As a team, we have produced over 40 archival publications and conference proceedings, as well as conducted workshops at both the national and local levels.

ASSESSMENT TOOLKIT

As part of this research, we have begun to develop and apply a variety of assessment methods administered either via the web or by PC. We have already developed several high-quality assessment methodologies (discussed below). Our goal is to link these methodologies to form the core of what we call the Assessment Toolkit. A method for adding "tools" to the kit is part of ongoing research that will adapt currently non-IT-enabled assessment tools and create new tools for the Toolkit. Web-enabled applications can process and report information accurately and efficiently, allowing students, faculty, administrators, and other relevant constituents to receive timely feedback on the effectiveness of academic programs and the achievement of student learning outcomes. The following are examples of tools that have been developed to take advantage of emerging information technologies.
ON-LINE STUDENT SURVEY SYSTEM

The University of Pittsburgh's On-line Student Survey System (OS3) is an example of an assessment tool that we have recently developed; it is currently being used by seven universities connected to a single server. By utilizing Java, e-mail, and Oracle, we have demonstrated that we can provide an infrastructure for assessment using a series of integrated student attitude surveys (freshman pre- and post-, sophomore, junior, senior exit, and alumni) that support outcome measurement. These instruments are designed to measure: engineering-related attitudes; preparedness in knowledge and communication skills; attitudes about studying, working in groups, and personal abilities; confidence in engineering outcomes; pre-professional experiences (junior and senior); and information on graduate education and employment (senior). In addition, an alumni survey measures overall ratings of the program and school, competence in engineering outcomes, and alumni perceptions of curriculum, culture, in-class instruction, learning through experience, and university support services. These surveys are available via the web to engineering students under the guidance of a local administrator, who can then download the data for benchmarking and research purposes. Consequently, student outcomes data can now be shared with academic advisers at key points throughout the student's academic experience, allowing students and their advisers to make more effective curriculum planning decisions. We are currently adding a reporting mechanism to OS3 that performs a "strength and weakness" analysis for visualizing the results [18], as well as serving as an input source to a data warehouse.
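To illustrate the kind of "strength and weakness" pass such a reporting mechanism might perform, the following is a minimal sketch, not the actual OS3 code; the outcome names, the 1-5 rating scale, and the cutoff values are assumptions made for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Illustrative sketch (not the OS3 implementation) of a simple
 * "strength and weakness" pass over per-outcome survey means.
 */
public class StrengthWeaknessReport {

    public static void main(String[] args) {
        // Hypothetical mean confidence ratings per outcome (1 = low, 5 = high).
        Map<String, Double> outcomeMeans = new LinkedHashMap<>();
        outcomeMeans.put("Design of experiments", 4.2);
        outcomeMeans.put("Teamwork", 4.5);
        outcomeMeans.put("Professional/ethical responsibility", 3.1);
        outcomeMeans.put("Communication", 2.8);

        // Assumed cutoffs: >= 4.0 reported as a strength, <= 3.0 as a weakness.
        double strengthCutoff = 4.0, weaknessCutoff = 3.0;

        for (Map.Entry<String, Double> e : outcomeMeans.entrySet()) {
            String flag = e.getValue() >= strengthCutoff ? "STRENGTH"
                        : e.getValue() <= weaknessCutoff ? "WEAKNESS" : "NEUTRAL";
            System.out.printf("%-40s %.1f  %s%n", e.getKey(), e.getValue(), flag);
        }
    }
}
```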
WEB COURSE EVALUATION SYSTEM

Team members at Columbia University's Fu Foundation School of Engineering and Applied Science (SEAS) have developed the Web Course Evaluation System (WCES), a web-enabled application that allows faculty to customize surveys to the objectives of a specific course. Students complete these surveys at their convenience. Data are easily coordinated with the institution's existing information and administrative system via a file transfer with the registrar's office. The system provides users with secure information based on password and authorized user access, and reports are produced for students, faculty, and administration in a timely manner for curricular improvement. The current WCES has several important features: it measures core questions on course and faculty quality; it allows faculty to add course-specific questions; and it generates timely feedback reports to all constituents. First, WCES is designed to measure a core set of questions for all SEAS courses. These are questions that SEAS faculty and administration have agreed upon so that the results can be reviewed each year and on a longitudinal basis. Second, SEAS faculty can add up to five scaled questions and two open-ended questions based on specific course learning objectives and intended outcomes. Students then go to the WCES website and complete evaluations for each of their registered courses. Third, once the evaluation period is over, survey results are immediately e-mailed to each faculty member, with summary information sent to department chairs and the dean's office. The faculty report includes the student response rate, quantitative ratings of core and custom questions, and qualitative comments. The summary reports provide department chairs and deans with aggregate survey data by department and faculty member. In addition, the students' Oracle website provides all students with final ratings for the course (but not the comments). WCES is designed to provide all relevant constituents with feedback on the course in a very timely manner, a major benefit of the system.
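The "core questions plus a limited number of custom questions" rule lends itself to a simple data model. The sketch below is illustrative only and is not the actual WCES schema; the class and method names are assumptions, but it shows how the five-scaled/two-open-ended limit could be enforced when faculty add questions.

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal sketch of a WCES-style evaluation form (hypothetical names). */
public class CourseEvaluationForm {

    enum QuestionType { SCALED, OPEN_ENDED }

    record Question(String text, QuestionType type, boolean core) {}

    private final List<Question> questions = new ArrayList<>();

    /** Core questions are fixed for all courses. */
    public void addCoreQuestion(String text, QuestionType type) {
        questions.add(new Question(text, type, true));
    }

    /** Faculty-added questions are limited per course. */
    public void addCustomQuestion(String text, QuestionType type) {
        long scaled = questions.stream()
                .filter(q -> !q.core() && q.type() == QuestionType.SCALED).count();
        long open = questions.stream()
                .filter(q -> !q.core() && q.type() == QuestionType.OPEN_ENDED).count();
        if (type == QuestionType.SCALED && scaled >= 5)
            throw new IllegalStateException("At most five custom scaled questions");
        if (type == QuestionType.OPEN_ENDED && open >= 2)
            throw new IllegalStateException("At most two custom open-ended questions");
        questions.add(new Question(text, type, false));
    }

    public static void main(String[] args) {
        CourseEvaluationForm form = new CourseEvaluationForm();
        form.addCoreQuestion("Overall quality of the course", QuestionType.SCALED);
        form.addCustomQuestion("The design project supported the course objectives",
                QuestionType.SCALED);
        form.addCustomQuestion("What would you change about the labs?",
                QuestionType.OPEN_ENDED);
        form.questions.forEach(q ->
                System.out.println((q.core() ? "[core]   " : "[custom] ") + q.text()));
    }
}
```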
COGITO® SOFTWARE AND REFLECTIVE JUDGMENT INTERVIEWS

At the Colorado School of Mines (CSM), team members have developed the Cogito® software and reflective judgment (RJ) interviews to measure the intellectual development of students [19, 20]. Based on Perry's Model of Intellectual and Ethical Development [21] and King and Kitchener's Reflective Judgment model [22], Cogito® presents the student with a series of scenarios with response choices at various intellectual levels. A neural network is used to score the responses. Although Cogito® is still under development, computer responses and in-depth interviews have been collected from nearly 100 subjects. While college-level engineering and science programs expect students to develop intellectually while acquiring discipline-specific knowledge and skills, nearly all measures of student achievement focus on content knowledge, process ability (e.g., design), or communication skills. Students are assumed to be developing intellectually, especially in their ability to think critically, but meaningful data are rarely collected and reported to support such an assumption. Historically, the most recognized methods for quantifying the maturation of college students' intellectual abilities have been based on Perry's Model of Intellectual and Ethical Development and King and Kitchener's Reflective Judgment model. These models measure students' positions along a hierarchical construct of stages representing increasingly sophisticated ways of understanding and solving complex problems. This is most reliably measured using an audio-taped interactive interview conducted by a trained expert and evaluated by a second trained expert. This extremely time-consuming process costs $150 per subject, which precludes the method from routinely being used as a program assessment tool. However, CSM's Cogito® web-based system offers considerable promise for doing this type of assessment in a cost-effective, efficient manner. Working with team members at the University of Pittsburgh, researchers are developing a neural network/fuzzy logic approach for classifying a student's level of intellectual development based on responses to a series of web-based scenarios.
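By way of illustration only, the toy sketch below shows the general shape of such a classifier: a weighted combination of coded scenario responses mapped onto a developmental scale. The weights, the response coding, and the 1-7 range are invented for the example; the actual Cogito® neural network/fuzzy logic model is far more sophisticated and is not reproduced here.

```java
/**
 * Toy sketch (not Cogito) of mapping coded scenario responses to an
 * estimated developmental position on an assumed 1-7 scale.
 */
public class DevelopmentLevelSketch {

    // Assumed: each of four scenarios yields a response coded 0.0-1.0,
    // where higher codes reflect choices at higher intellectual positions.
    private static final double[] WEIGHTS = {0.30, 0.25, 0.25, 0.20};

    /** Weighted sum squashed to (0,1), then rescaled to a 1-7 position. */
    static double estimatePosition(double[] responses) {
        double z = 0.0;
        for (int i = 0; i < WEIGHTS.length; i++) {
            z += WEIGHTS[i] * responses[i];
        }
        double squashed = 1.0 / (1.0 + Math.exp(-4.0 * (z - 0.5))); // logistic unit
        return 1.0 + 6.0 * squashed; // map onto a Perry-like 1-7 range
    }

    public static void main(String[] args) {
        double[] lowerLevel = {0.2, 0.1, 0.3, 0.2};
        double[] higherLevel = {0.8, 0.9, 0.7, 0.8};
        System.out.printf("Estimated position (lower):  %.1f%n", estimatePosition(lowerLevel));
        System.out.printf("Estimated position (higher): %.1f%n", estimatePosition(higherLevel));
    }
}
```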
THE TEAM DEVELOPER

The Team Developer is a computerized, multi-source assessment and feedback survey designed to examine team behaviors that may affect team performance [23, 24]. It is a competency-based system that can be used to assess both basic team skills and behaviors and Engineering Criteria 2000 (EC 2000) learning outcomes. It does this by combining self and peer ratings to provide individual, team, and/or class feedback. Originally developed at Columbia University, it is now being tested at the University of Pittsburgh, as part of our triangulation experiments, for its effectiveness as both an assessment tool and a learning enhancement tool [25]. Once refined, this multi-source feedback system will be included in the Assessment Toolkit.
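As a rough illustration of how self and peer ratings can be combined into individual feedback, the sketch below reports a student's self rating beside the mean peer rating for each behavioral dimension so that self-peer gaps stand out. The dimensions and the 1-5 scale are assumptions, not the Team Developer's actual instrument.

```java
import java.util.List;
import java.util.Map;

/** Illustrative self/peer feedback roll-up (hypothetical dimensions and data). */
public class SelfPeerFeedbackSketch {

    public static void main(String[] args) {
        // Hypothetical ratings for one student on a 1-5 scale.
        Map<String, Double> self = Map.of(
                "Communicates ideas clearly", 4.0,
                "Shares workload equitably", 5.0);
        Map<String, List<Double>> peers = Map.of(
                "Communicates ideas clearly", List.of(3.0, 4.0, 3.0),
                "Shares workload equitably", List.of(3.0, 3.0, 4.0));

        for (String dimension : self.keySet()) {
            double peerMean = peers.get(dimension).stream()
                    .mapToDouble(Double::doubleValue).average().orElse(Double.NaN);
            double gap = self.get(dimension) - peerMean;
            System.out.printf("%-32s self=%.1f  peers=%.1f  gap=%+.1f%n",
                    dimension, self.get(dimension), peerMean, gap);
        }
    }
}
```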
DATA WAREHOUSING

The NSF Gateway Coalition has developed a data warehousing application, and a parallel system has been developed at the University of Pittsburgh. A data warehouse can reduce data redundancy, measure data-driven improvement (EC 2000 criteria), provide comprehensive data for performance monitoring, and support better tracking of undergraduate students. At the University of Pittsburgh, benefits include providing accurate retention information, facilitating the evaluation and prediction of student performance, and measuring the influence of environmental and personal factors. In addition, the data warehouse is an administrative aid for strategic decision-making and provides an integrated environment for the analysis of engineering education surveys.
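As one small example of the kind of question an integrated warehouse makes easy to answer, the sketch below computes first-year retention for an entering cohort. The record layout and the data are invented for illustration and do not reflect the Gateway or Pittsburgh warehouse schemas.

```java
import java.util.List;

/** Toy retention calculation over hypothetical student-tracking records. */
public class RetentionSketch {

    record StudentRecord(String id, int entryYear, boolean enrolledFollowingFall) {}

    public static void main(String[] args) {
        List<StudentRecord> cohort = List.of(
                new StudentRecord("S1", 2000, true),
                new StudentRecord("S2", 2000, true),
                new StudentRecord("S3", 2000, false),
                new StudentRecord("S4", 2000, true));

        long retained = cohort.stream()
                .filter(StudentRecord::enrolledFollowingFall).count();
        System.out.printf("First-year retention, %d cohort: %.0f%% (%d of %d)%n",
                2000, 100.0 * retained / cohort.size(), retained, cohort.size());
    }
}
```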
ADVANTAGES AND DISADVANTAGES OF TECHNOLOGY-MEDIATED ASSESSMENT

The debate over the quality of web-based versus traditional paper-and-pencil assessments continues. However, there are several recognized benefits to on-line assessment. One is that students have the opportunity to complete assessments, such as course evaluations, on their own time, without the time constraints of in-class surveys. The urgency involved in completing surveys during class may cause students to fill them out in a cursory manner, and since much of the surveying must occur at the end of the course, in-class data collection often cuts into valuable instructional time. In contrast, evaluations posted on the web may generate more detailed and thoughtful responses from the students. For example, Columbia's faculty has observed a significant increase in written comments in web course evaluation surveys. In addition, administrators are able to organize, code, and analyze comments in a very efficient way, a feature not available in previous paper survey processes.

Another benefit of a web assessment system is the timely manner in which feedback is provided to all constituents. The Hmieleski report found that at 90% of the institutions examined, faculty did not receive their results until two months after survey administration; in addition, the majority of the schools did not provide results to students at all [26]. Web systems allow for flexible dissemination of survey results. For example, faculty applying formative evaluations in the classroom can receive immediate feedback.

A third benefit is the flexibility that on-line assessment systems provide in terms of survey design and development. The Columbia course evaluation system is designed to allow administrators to add customized questions to measure specific program objectives. Faculty also can provide questions to support the measurement of intended learning outcomes based on specific course objectives.

Although most universities have the capacity to sustain a web-enhanced assessment system, few schools have actually implemented one. In addition to the development cost, return rates of 30 to 40% at best to date [27] appear to be the most pervasive problem among those who have converted to web-based evaluation, and these two issues have most strongly influenced schools' decisions to retain their current paper-and-pencil systems.
However, if completion of web-based surveys is strongly encouraged by the instructor, we may find that web-based evaluations have response rates comparable to the more traditional methods, and future research may show that this concern about on-line evaluation is not, in fact, warranted. Columbia has experienced response rates of 85% in recent web-based course evaluations. This success is due to a combination of technology-mediated communications, incentive packages, and internal marketing strategies. For example, the Columbia system allows us to monitor response rates during the survey administration period and target e-mails to both faculty and students where incremental urging is required. Similar capabilities are built into OS3.
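A minimal sketch of this monitoring step follows: given the course roster and the set of students who have already responded, it reports the current response rate and lists the addresses that would receive a targeted reminder. The roster data are placeholders, and the actual e-mail dispatch is omitted.

```java
import java.util.List;
import java.util.Set;

/** Illustrative response-rate monitor with placeholder roster data. */
public class ReminderSketch {

    public static void main(String[] args) {
        List<String> roster = List.of("aa100@example.edu", "bb200@example.edu",
                                      "cc300@example.edu", "dd400@example.edu");
        Set<String> responded = Set.of("aa100@example.edu", "cc300@example.edu");

        double rate = 100.0 * responded.size() / roster.size();
        System.out.printf("Current response rate: %.0f%%%n", rate);

        // In a real system this list would be handed to the e-mail service.
        roster.stream()
              .filter(address -> !responded.contains(address))
              .forEach(address -> System.out.println("Would remind: " + address));
    }
}
```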
FUTURE ACTIVITIES

We propose that, within the next few years, personal computers will routinely be equipped with a built-in camera, a microphone, and a host of collaboration software "tools." For example, distance learning through tutorials, e-mail, discussion groups, instant messaging, keyword detection capabilities, video web-conferencing, and electronic whiteboards will become commonplace throughout higher education. Consequently, it soon will be possible to build assessment tools based on these emerging Internet technologies and accompanying software tools. To take advantage of this soon-to-be-realized scenario, work can begin now on constructing the Assessment Toolkit.

Once developed, the Assessment Toolkit could be implemented on a national basis. One way of doing this is through a series of Assessment Servers, equipped with the tools and features described above, that would comprise a national, web-based system. These Assessment Servers would ensure high-quality service while providing a redundant infrastructure. Each Assessment Server could be equipped to handle multiple universities concurrently, using multiple tools. In addition to the Assessment Servers, an Assessment Center could be established to receive the data from each of the Assessment Servers via the web. This combined data set would be used to further validate and improve the instruments and methodologies in the Assessment Toolkit. This mix of distributed and centralized data collection would provide US (and international) universities with high-quality assessment tools that are updated through cross-institutional data collection and would allow both academic and non-academic units to share information as required.
REFERENCES

[1] Engineering Criteria 2000, Third Edition: Criteria for Accrediting Programs in Engineering in the United States, Accreditation Board for Engineering and Technology (ABET), 1997, http://www.abet.org/EAC/eac2000.html.
[2] Farmer, J., "Using Technology," in J. Gaff and J. Ratcliff (Eds.), Handbook of the Undergraduate Curriculum: A Comprehensive Guide to Purposes, Structures, Practices, and Change, Jossey-Bass, San Francisco, 1997.
[3] National Science Foundation grant EEC-9872498, Engineering Education: Assessment Methodologies and Curricula Innovations.
[4] Besterfield-Sacre, M.E., L.J. Shuman, H. Wolfe, and J. McGourty, "Triangulating Assessments: Multi-Source Feedback Systems and Closed Form Surveys," Proceedings, Frontiers in Education Conference, Kansas City, MO, October 2000.
[5] Prus, J., and R. Johnson, "A Critical Review of Student Assessment Options," New Directions for Community Colleges, No. 88, Winter 1994.
[6] A Baker's Dozen: Assessment Methods and Their Strengths and Weaknesses, a workshop developed by the authors and presented at the past two Frontiers in Education Conferences, at Best Assessment Practices III (Rose-Hulman Institute of Technology), and at an NSF Grantees Workshop, Washington, DC, October 2-3, 2000; variations have also been given at a number of engineering schools.
[7] McGourty, J., P. Dominick, M.E. Besterfield-Sacre, L.J. Shuman, and H. Wolfe, "Improving Student Learning Through the Use of Multi-Source Assessment and Feedback," Proceedings, Frontiers in Education Conference, Kansas City, MO, October 2000.
[8] McGourty, J., "Using Multisource Feedback in the Classroom: A Computer-Based Approach," IEEE Transactions on Education, Vol. 43, No. 2, May 2000, pp. 120-124.
[9] Rogers, G., and T. Chow, "Electronic Portfolios and the Assessment of Student Learning," Assessment Update, Vol. 12, No. 1, January-February 2000, pp. 4-6, 11.
[10] Olds, B.M., "Reflection as an Assessment Measure," Proceedings, ASEE Annual Conference, St. Louis, MO, June 2000.
[11] Turns, J., C.J. Atman, and R. Adams, "Concept Maps for Engineering Education: A Cognitively Motivated Tool Supporting Varied Assessment Functions," IEEE Transactions on Education, Vol. 43, No. 2, May 2000, pp. 164-173.
[12] Miller, R.L., B.M. Olds, and M.J. Pavelich, "Using Computer Software to Assess the Intellectual Development of Engineering Students," Proceedings, Frontiers in Education Conference, San Juan, Puerto Rico, November 1999.
[13] Besterfield-Sacre, M.E., C.J. Atman, and L.J. Shuman, "Characteristics of Freshman Engineering Students: Models for Determining Attrition in Engineering," Journal of Engineering Education, Vol. 86, No. 2, 1997, pp. 139-149.
[14] Scalise, A., M.E. Besterfield-Sacre, L.J. Shuman, and H. Wolfe, "First Term Probation: Models for Identifying High Risk Students," Proceedings, Frontiers in Education Conference, Kansas City, MO, October 2000.
[15] Moreno, M., M.E. Besterfield-Sacre, L.J. Shuman, H. Wolfe, and C.J. Atman, "Institutional Differences in Student Self-Assessed Confidence: How Gender and Ethnicity Affect EC-2000 Outcomes," Proceedings, Frontiers in Education Conference, Kansas City, MO, October 2000.
[16] Besterfield-Sacre, M.E., C.J. Atman, and L.J. Shuman, "Engineering Student Attitudes Assessment," Journal of Engineering Education, Vol. 87, No. 2, April 1998, pp. 133-141.
[17] McGourty, J., M.E. Besterfield-Sacre, L.J. Shuman, and H. Wolfe, "Improving Academic Programs by Capitalizing on Alumni's Perceptions and Experiences," Proceedings, Frontiers in Education Conference, San Juan, Puerto Rico, November 1999.
[18] Perez, G.L., L.J. Shuman, H. Wolfe, and M.E. Besterfield-Sacre, "Measuring Continuous Improvement in Engineering Education Programs: A Graphical Approach," Proceedings, ASEE Annual Conference, Albuquerque, NM, 2001.
[19] Olds, B.M., R.L. Miller, and M.J. Pavelich, "Measuring the Intellectual Development of Engineering Students Using Intelligent Assessment Software," Proceedings, International Conference on Engineering Education (ICEE) 2000, Taipei, Taiwan, August 2000.
[20] Miller, R.L., B.M. Olds, and M.J. Pavelich, "Measuring the Intellectual Development of Students Using Intelligent Assessment Software," Proceedings, Frontiers in Education Conference, Kansas City, MO, October 18-21, 2000.
[21] Perry, W.G., Jr., Forms of Intellectual and Ethical Development in the College Years, Holt, Rinehart and Winston, New York, 1970.
[22] King, P.M., and K.S. Kitchener, Developing Reflective Judgment, Jossey-Bass, San Francisco, 1994.
[23] McGourty, J., and K. De Meuse, The Team Developer: An Assessment and Skill Building Program, Wiley, New York, 2000.
[24] McGourty, J., C. Sebastian, and W. Swart, "Development of a Comprehensive Assessment Program in Engineering Education," Journal of Engineering Education, Vol. 87, No. 4, 1998, pp. 355-361.
[25] Besterfield-Sacre, M.E., L.J. Shuman, H. Wolfe, and J. McGourty, 2000, op. cit.
[26] Hmieleski, K., Barriers to Online Evaluation: Surveying the Nation's Top 200 Most Wired Colleges, Interactive and Distance Education Assessment Laboratory, Rensselaer Polytechnic Institute, Troy, NY, 2000.
[27] Dillman, D.A., Mail and Internet Surveys: The Tailored Design Method, Wiley, New York, 2000.