
Assessment of Technology-Assisted Learning in Higher Education: it requires new thinking by Universities & Colleges

Sheri D. Sheppard, Derek Reamon, Larry Friedlander, Charles Kerns, Larry Leifer, Michele Marincovich, George Toye
Members of the Stanford Learning Laboratory
Stanford University, Stanford, CA 94305

Abstract: The purpose of this paper is to outline some of the issues, challenges and questions facing universities and colleges as they consider the use of technology in the support of teaching and learning. The issues are difficult in that they require balanced consideration of questions such as:
- What types of technologies improve learning?
- Which faculty members will adopt and experiment with technologies? How will this impact the reward system?
- Which technologies can we afford? What are the hidden costs?
Most of these questions remain unanswered. We begin by giving examples of the types of learning technologies that universities are exploring and adopting. We proceed to enumerate some of the reasons for this adoption, and then discuss the various groups who should be asking probing questions about the effectiveness of these technologies. Some of the characteristics of an evaluation plan that would address these various questions are then proposed. We conclude with several emerging models for such an evaluation plan.

Examples of Adoption

Over the last ten years there has been growing interest in universities and colleges across the U.S. in the use of technology for supporting learning in higher education. For example:
• The Anderson Center for Innovation in Undergraduate Education at Rensselaer Polytechnic Institute is creating computer-based learning environments that enable students to get multiple views of science concepts, and is providing training on the use of the world-wide web (WWW) in instruction (http://ciue.rpi.edu/about.html).
• The University of Illinois is exploring the role of asynchronous learning for undergraduates to improve the cost effectiveness and efficiency of education delivery (http://w3.scale.uiuc.edu/scale/).
• Stanford University has created an extensive menu of graduate-level engineering courses available synchronously to enable students around the world to work on a Master's Degree (http://scpd.stanford.edu/news.html#scpd).
• The University of Oklahoma has created multimedia-based learning experiences to supplement traditional course experiences (e.g., strength of materials, dynamics), and also to create non-traditional experiences (e.g., a trip to Mars) (http://eml.ou.edu/).
• Both the University of Illinois and Stanford University offer subsets of campus-based courses "on-line" (UI-OnLine website: http://www.online.uillinois.edu/; Stanford-Online website: http://stanford-online.stanford.edu/).
• The NSF Synthesis Coalition maintains a web-based library of engineering curricular materials from which faculty across the country can "borrow" materials and to which they can submit their own (http://www.needs.org).

More and more university courses across the nation are supported with web-sites, newsgroups, and discussion forums. One need only look at a copy of the magazine Syllabus to get a sense of the explosion in the use of learning technologies. In this paper, we define “learning-technology” as any learning tool that uses computers or advanced communication systems.

What is motivating this interest?

There are a number of reasons that universities are considering and adopting learning technologies. Some of these relate to beliefs that learning technologies:
--are inherently "good";
--are needed to remain competitive as an institution;
--make the delivery of education more cost effective;
--open up possibilities of reaching new/different student groups;
--offer students more control over when and where they interact with "knowledge";
--offer more and new opportunities for student-student and student-faculty interaction;
--offer new opportunities for cross-university interaction by both students and faculty;
--offer students richer, more diverse learning resources and alternate points-of-view;
--offer opportunities for documenting, cataloguing and re-using curriculum materials and student work.
All of these factors, except the first three, are focused on improving student learning and/or the faculty work environment.

Who Needs Evaluation Results?

Evaluation of the contributions and limitations of technology in supporting and improving learning is sorely needed. The need exists for all participants. For example, university administrations need evaluation results in order to make informed financial and policy decisions. On the financial front, university administrations must decide how many resources to direct toward creating, maintaining and updating a technology infrastructure in their institution. On the policy front, the university needs to decide how to view faculty participation in creating technology-based course resources when it comes time for tenure and promotion, or to decide which groups of students it wants to better serve with technology tools. In addition, faculty need evaluation results in order to make more informed decisions about how they might use technology in their teaching. Students need evaluation results to be able to make more informed decisions about what types of learning environments work best with their learning styles.

Of course, no single evaluation study could address all of the questions posed by the various groups listed above; they differ in the level of detail, point-of-view, and data that are relevant. In spite of these differences, all of these groups are in need of constructive feedback that can only result from thoughtfully posed, implemented and assessed curricular experiments.

A Difficult Problem

Assessment aimed at evaluating the effectiveness of technology in promoting learning is difficult. It is problematic because participants and stake-holders range from students and faculty, to school administrations, to university technology support groups. In addition, it is hard because the types of questions that these groups ask about the relative value of technology in improving learning are quite diverse. For example, a school administrator may be asking, "What are the relative advantages of providing Internet connections to all dorm rooms vs. providing better staffing in the dorm computer clusters?" while an art professor may be asking, "Will students utilize the course slide collection more extensively over the Internet than when it was on reserve in the library, even though the visual quality is inferior? What are the copyright issues related to some of the images in the collection?" And an engineering professor may be asking, "Are there more effective means of teaching difficult concepts? Are there advantages to students being able to turn in their problem sets digitally? How will I provide them with feedback?"

While it is important to address and answer these individual questions, it is equally important for a university, collectively, to come to an understanding of the role of technology in supporting the work of students and faculty. This requires that assessment and evaluation be looked at in new and extended ways. It requires that:

--Assessment be undertaken at a more fine-grained level in learning activities, in order to discover what elements of an academic experience contributed most to learning effectiveness. This means, among other things, that faculty and students must be more reflective about the rationale for their actions. Both faculty and student "buy-in" to assessment is critical. It is not enough to simply ask students at the end of a term, "What is your overall rating of this course/instructor?" In many institutions, including Stanford, student surveys are generally the only measure used to evaluate course quality. The results of these surveys are not timely, often being returned to the instructor several months after the term is over, and are so abbreviated as to give little direction for course improvement/changes.
--Assessment be seen as a partnership activity. University administration, faculty and students all need to embark on the sort of assessment described above, believing that its primary objective is the improvement of learning, and that they are all stake-holders in discovering and validating what elements work (and similarly, what elements do not). Faculty and students alike need this partnership to support their taking thoughtful risks in their teaching and learning. If the assessment is at all perceived to be "judgmental", the partnership will break down.
--Assessment be seen as a long-term commitment by all of the partners, and a way of doing business. School administrations, faculty and students alike need to adopt the attitude that improvement is continuous (i.e., what I learn this year goes into changes for next year), and that some significant effects on student learning may be measured over years, not over a single quarter.
--Assessment results be synthesized, publicized and disseminated. If individual faculty continue to look only at results from their own curricular experiments (with and without technology), there is no opportunity to view the cumulative effect of an education on students. The university, collectively, needs to take responsibility for synthesizing individual results. In addition, these "meta-level" findings and results should be disseminated to faculty within and beyond the institution.

Emerging Assessment Models

There are a number of technology-related projects and programs in colleges and universities across the country that are developing evaluation/assessment models that fulfill all or some of the requirements outlined in the previous section. A few of these models will be discussed here. This discussion is not intended to be comprehensive, but rather illustrative.

We start by offering several examples of projects that are "undertaking assessment at a more fine-grained level in learning activities in order to discover what elements of an academic experience contributed most to its effectiveness." The papers of Reamon and Sheppard [1-2], and Regan and Sheppard [3], are focused on developing an understanding of the role of simulation software/courseware and physical models in affecting students' understanding of mechanical systems. In the former, the "mechanical system" was four-bar linkages, and in the latter, it was the drivetrain of a bicycle. Both projects utilized an assessment technique called "video interaction analysis," which involves videotaping student learning activities so that a diverse group of researchers can repeatedly revisit the activity and examine, from multiple perspectives, what students were doing [4]. From this, a detailed understanding of the roles that the simulation software/courseware played in student learning can emerge. Each project used small sample sizes (fewer than fifteen student volunteers).

The more recent work of D. Reamon (NSF grant DUE-9653114) is looking at the role of simulation software with larger samples of students. This work focuses more specifically on the role of "interactivity" in promoting conceptual understanding and information retention. Assessment instruments used in the two-week-long experiment include pre- and post-tests, surveys and video interaction analysis. The sample size was 105 junior and senior mechanical engineering students taking a required course for their major.
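Results from pre-/post-test instruments of this kind are typically summarized as score gains. As an illustration only (not drawn from the studies above), the following Python sketch computes the mean gain, a Hake-style normalized gain, and a paired t statistic from per-student scores; the file name, column names and 0-100 scoring scale are assumptions, not details of the cited experiments.

```python
# Illustrative sketch only: summarizing pre-/post-test scores from a classroom
# experiment. The CSV layout, column names, and 0-100 scale are assumed.
import csv
import math
import statistics

def load_scores(path):
    """Read (pre, post) score pairs, one student per row."""
    pairs = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            pairs.append((float(row["pre_score"]), float(row["post_score"])))
    return pairs

def summarize(pairs, max_score=100.0):
    pre = [p for p, _ in pairs]
    post = [q for _, q in pairs]
    gains = [q - p for p, q in pairs]

    # Average normalized gain (Hake-style): fraction of the available
    # improvement that students actually achieved.
    norm_gain = (statistics.mean(post) - statistics.mean(pre)) / (
        max_score - statistics.mean(pre)
    )

    # Paired t statistic on per-student gains (null hypothesis: mean gain = 0).
    n = len(gains)
    t_stat = statistics.mean(gains) / (statistics.stdev(gains) / math.sqrt(n))

    return {
        "n": n,
        "mean_pre": statistics.mean(pre),
        "mean_post": statistics.mean(post),
        "mean_gain": statistics.mean(gains),
        "normalized_gain": norm_gain,
        "paired_t": t_stat,
    }

if __name__ == "__main__":
    for key, value in summarize(load_scores("pre_post_scores.csv")).items():
        print(f"{key}: {value:.3f}" if isinstance(value, float) else f"{key}: {value}")
```

A summary of this sort is only a starting point; as noted above, the study pairs such scores with surveys and video interaction analysis to interpret why particular gains occur.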

A final example of a project that is exploring at a fine-grained level the interactions of students and technology comes not from the engineering domain, but the humanities domain. In the fall of 1997 the Stanford Learning Laboratory (SLL) piloted a course called "Introduction to Humanities: the Word and the World" (http://sll1.stanford.edu/wordworld/index.html). The course, as mandated by the faculty senate, was focused on introducing freshmen to the methods of inquiry that researchers and scholars in the humanities use; five significant texts were the objects of study. The quarter-long "Word and the World" course was supported by a web-based "backbone" as well as conventional lectures and discussion sections. The course's web environment provided content, supporting and enrichment materials on all five texts; posed and collected short-answer questions prior to each lecture; and provided both lecture and section forums.

In parallel with the creation of the web "backbone" during the summer of 1997, the Stanford Learning Laboratory funded the work of an "Assessment Team," consisting of 6 core individuals and 5 advisors. The Assessment Team, which became known as the A-Team, worked in partnership with the teaching and technology teams to define the teaching and learning issues, goals and questions that motivated the investment in web technologies for the course, and to develop an appropriate assessment plan (i.e., one that would address questions being posed by the various stake-holders--SLL, the teaching team, and the faculty senate). The plan utilized a variety of assessment tools (e.g., surveys, interviews, video taping, web statistics) to directly evaluate the utilization and effectiveness of particular elements of the web backbone and to address the questions posed by the stake-holders. At the same time, the tools afforded a broader window into technology utilization more generally by the 90 Stanford undergraduates in the course.

Sample findings from the A-Team's study of the Word and the World include:
1) A significant number of "silent" students (students who talk very little in discussion section) find the on-line forum discussion significant in their learning of the texts.
2) There is a high correlation between posting volume on the on-line forum and grades on required papers (one way such a correlation might be computed is sketched below).
3) Students, in general, put a finite amount of time into exploring enrichment materials on the web.
4) Students, in general, view the in-person meetings related to the course (lecture and discussion) as the "heart of the class." Other interaction opportunities, such as the on-line forum, are important, but of lesser importance relative to the in-person meetings.
The teaching and technology teams, along with the A-Team, will be using such findings to revise the Word and the World course for a second offering during the fall of 1998.
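As a purely illustrative companion to finding (2), the sketch below shows one way that posting volume drawn from forum logs might be related to paper grades using a Pearson correlation coefficient. The per-student records and identifiers are hypothetical; this does not reproduce the A-Team's actual data or analysis.

```python
# Illustrative sketch only: relating forum posting volume to paper grades.
# The records below are invented for demonstration purposes.
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

# Hypothetical per-student records: (number of forum posts, paper grade 0-100).
records = {
    "student_01": (14, 91),
    "student_02": (3, 78),
    "student_03": (8, 85),
    "student_04": (0, 72),
    "student_05": (11, 88),
}

posts = [p for p, _ in records.values()]
grades = [g for _, g in records.values()]
print(f"r(posting volume, paper grade) = {pearson(posts, grades):.2f}")
```

A correlation of this kind, of course, says nothing about causation; interviews and surveys like those used by the A-Team are needed to interpret why active posters also earn higher paper grades.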

It is harder to find examples of learning-technology projects that are explicitly promoting assessment as "a partnership activity between the university, faculty and students"; "a long-term commitment by all of the partners--a way of doing business"; and/or "an undertaking resulting in synthesis, publishing and disseminating of results." Fortunately, we can report that a few are emerging. For example, the Sloan Project at the University of Illinois (http://w3.scale.uiuc.edu/scale/) is collecting, analyzing and publishing assessment data from both faculty and students for many of the technology-assisted courses that it is creating.

A second example is the Anderson Center for Innovation in Undergraduate Education at Rensselaer Polytechnic Institute (RPI), which is actively researching how interactive learning has improved students' education or cut costs. Rensselaer has recently been awarded a grant from the Atlantic Philanthropic Service Company that will enable RPI to follow two groups of RPI students throughout the second half of their undergraduate education and into the early years of their careers or graduate studies. One group will have taken interactive studio courses and the other primarily traditional lecture-based courses. A detailed cost analysis of interactive vs. traditional education is also being carried out as part of this study by Rensselaer economists.

A third example is the Stanford Learning Laboratory (SLL), which was established in 1997 by Stanford University President Gerhard Casper and the Commission on Technology in Teaching and Learning (http://learninglab.stanford.edu/). Its mission is to enhance the personal learning experience of all Stanford students and to create a model for the judicious deployment of pedagogically informed technology for learning and knowledge management. In all projects, the Lab conducts a comprehensive benchmark assessment using qualitative and quantitative methods to evaluate project effectiveness, utility, impact and deployment barriers.

We offer a final example which does not fall within the traditional definition of a university--the NSF-sponsored Synthesis Coalition. Synthesis represents a partnership of eight universities exploring the use of technology to improve learning in engineering schools. Synthesis has set as its goal not only the creation and dissemination of engineering courseware through a distributed web-based database (NEEDS; http://www.needs.org), but also the assessment of that courseware, by developing and promoting quality standards and recognition for outstanding courseware development (e.g., the John Wiley Premier Award).

All of these projects face major challenges in gaining and sustaining faculty and student buy-in and interest; in balancing the need for fine-grained assessment data with the need for policy-level information; and in finding the appropriate voices for getting information and findings back to the various constituents (e.g., university administrators, faculty, students) and to the broader higher-education community.

Closing Remarks

Technology-based learning tools, whether they are web-based courses, electronic newsgroups or simulation-based courseware, all hold the potential to improve student learning. It is imperative, however, that we embark on the exploration and adoption of these tools in a thoughtful manner. We, as faculty and administrators, should be asking hard questions related to the relative merit of the tools. Many of these questions will be unanswerable until we undertake well-posed curricular experiments and pilot studies. A large component of any of these curricular experiments should be an assessment plan that addresses questions about technology and learning of concern to faculty, students and university administrations alike.

References

1. Reamon, D., Sheppard, S.D., "Analytic Problem Solving Methodology," Proceedings of the 1996 IEEE Frontiers in Education Conference, Salt Lake City, Utah, Nov. 6-9, 1996, vol. 1, pp. 484-488.
2. Reamon, D., Sheppard, S.D., "The Role of Simulation Software in an Ideal Learning Environment," Proceedings of the ASME Design Theory and Methodology Conference, Sept. 14-17, 1997.
3. Regan, M., Sheppard, S.D., "Interactive Multimedia Courseware and Hands-on Learning Experience: An Assessment Study," ASEE Journal of Engineering Education, vol. 85, no. 2, pp. 123-130, 1996.
4. Jordan, G., Henderson, A., "Interaction Analysis: Foundations and Practice," a report for Xerox Palo Alto Research Center (PARC) and the Institute for Research on Learning (IRL), Palo Alto, CA, 1992; accepted for publication in The Journal of the Learning Sciences.