Student Evaluation of Online Instruction: Development of a Comprehensive System

Kristine Y. Hogarty, [email protected]
Jeffrey D. Kromrey, [email protected]
Melinda R. Hess, [email protected]
Shauna Schullo, [email protected]
Ann E. Barron, [email protected]
Amy Hilbelink, [email protected]
Lou M. Carey, [email protected]
University of South Florida, USA

Abstract: The primary purpose of this study was to develop a valid, reliable, and feasible evaluation system that institutions of higher learning, as well as other educational institutions, could use or adapt to evaluate courses and programs delivered online. The foundations of this research were built upon evaluation and accreditation standards, learning theories, systems perspectives, and principles of instructional design. Psychometric analyses of data gathered from three semesters of testing in 20 online courses were used to refine the instrumentation. The results of this study suggest seven primary domains that should be addressed in student evaluation of online courses: online design and organization, instructional design and delivery, technological support, communications, interactions, student characteristics, and assessment. This research provides a concise reference to guide revision of previously constructed instruments and the development of new evaluation and assessment systems and tools.

Introduction

In higher education, student evaluation of instruction provides data that serve a variety of purposes, including the revision of courses and programs, improvement of instruction, institutional accreditation, and tenure decisions about faculty. When instruction is delivered online, student evaluation becomes notably more complex, as issues of technology and pedagogy intertwine (Cohen, 2003). Delivering effective instruction on the Internet is more complex because of concerns about the technology skills of students, the availability of universally accessible resources, and the clarity of expectations and requirements. Educators, administrators, and institutions need tools and methods to evaluate whether their courses and programs not only meet the requirements of accreditation, policymaking, and funding agencies but also meet the needs of their students and faculty.

Online courses are proliferating at postsecondary institutions at an ever-increasing pace, bringing new and unique challenges to course assessment and evaluation. The use of the Internet as a means to deliver courses at both the postsecondary and K-12 levels continues to expand, and this trend is highly likely to persist. As the National Center for Education Statistics reported: “In 2000–2001, 90 percent of public 2-year and 89 percent of public 4-year institutions offered distance education courses” (U.S. Department of Education, 2003, p. iii). This expansion of online course delivery underscores the need for an effective system of student evaluation of instruction.

The primary purpose of this study was to develop a valid, reliable, and feasible evaluation system that institutions of higher learning, as well as other educational institutions, could use or adapt to evaluate courses and programs delivered online. This research was conducted in support of a five-year technology project at a large, metropolitan research university and builds upon previously conducted research (see Hogarty, Kromrey, Barron, Hess, & Schullo, 2004).

Large-scale evaluation planning and implementation should be grounded in formal models of program evaluation. Consideration of the variety of standards, theories, and models for evaluation, within the context and nature of the specific program to be evaluated, often suggests an amalgam model or framework that will provide appropriate direction for evaluation planning. The foundations of this research were built on the work of Eaton (2002); Phipps, Wellman, and Merisotis (1998); and Moore and Kearsley (1996); and were guided by the standards of ISTE (2002) and NCATE (2000). Drawing upon evaluation models delineated by Baker (2004); Cohen (2003); Gunawardena, Lowe, and Carabajal (2000); and Bonk, Cummings, Hara, Fischler, and Lee (2000), our team of measurement and instructional technology specialists developed and field-tested a student evaluation system.

Method

Previous research that focused on both evaluation models and methods of evaluating online courses was thematically analyzed to identify the primary domains that underlie an effective evaluation system (see, for example, Baker, 2004; Cohen, 2003; Gunawardena, Lowe, & Carabajal, 2000). During our analysis of the various models and systems, a taxonomy was developed to guide our examination of their components. Important indicators that were central to our investigation included the presence of evaluation tools and methods, the discussion of learning theories and models, the delineation of a variety of interactions and roles (i.e., student, instructor, practitioner), attention to student satisfaction and engagement, and course implementation. Further, we were equally concerned with technology use and skill, technology design and support, student learning, specificity of context, and administration/management/institutional commitment and concern.

In addition to the issues described above, Bonk et al. (2000) contend that the degree of web integration is an important consideration in the development of an evaluation system. For example, at the lowest levels of integration, the use of technology in course delivery may be so sparse that traditional methods of evaluation are entirely satisfactory. At higher levels, however, the differentiation between pedagogy and technology becomes more difficult and the need for an improved student evaluation system becomes more compelling. Finally, considerations of the utility of an improved student course evaluation system suggest the need for a system that is applicable across a broad range of web-based courses.

The literature review was augmented with an analysis of current instruments used by instructors of online courses and of commercially available online evaluation software. The themes identified from the models and examples of student evaluation instruments provided the framework for the development of the evaluation system.

Paralleling the literature review and analysis of existing instruments, the research team developed and implemented a series of student surveys to collect information related to the delivery of courses that were components of online master's degree programs (for details on this project, see Hogarty, Kromrey, Barron, Hess, & Schullo, 2004). Three student surveys were initially developed for administration at distinct stages of course delivery to gain a holistic and in-depth evaluation of the online courses. An introductory survey was designed to gather background information, computer access and equipment constraints, and level of proficiency. A mid-semester survey was developed to solicit information regarding technological efficiency, communication, and instructional content and materials. The final survey contained items related to delivery of instruction and overall impressions.

Once an initial version of each instrument was constructed, our entire team reviewed each for purpose, consistency of focus, wording, and uniqueness. The instruments were then refined and sent to the instructors of the online courses for review of content, layout, comprehensiveness, and applicability. Their input guided further refinement and adjustments to the instruments. These instruments were subsequently administered to students in 20 newly created master's degree courses during three semesters of the five-year project. Psychometric analyses of these field data guided further refinement of the items used in the final student course evaluation instruments.
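To make the internal consistency piece of these analyses concrete, the following is a minimal sketch, in Python, of how Cronbach's alpha could be computed for a block of like-scored survey items. It is illustrative only: the function, column names, and simulated responses are our own assumptions rather than the project's actual analysis code.

```python
import numpy as np
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of items scored in the same direction.

    Rows are respondents, columns are items; respondents with any missing
    item response are dropped (listwise deletion) before variances are taken.
    """
    data = items.dropna()
    k = data.shape[1]
    item_variances = data.var(axis=0, ddof=1).sum()
    total_variance = data.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)


# Hypothetical example: five instructional-design items scored 1-4
# (Rarely/Not at all = 1 ... Almost Always = 4); "Not Applicable" responses
# would enter as NaN and be removed listwise. With purely random data the
# estimate is not meaningful; real item responses are correlated.
rng = np.random.default_rng(0)
demo = pd.DataFrame(rng.integers(1, 5, size=(50, 5)),
                    columns=[f"id_item_{i}" for i in range(1, 6)])
print(f"alpha = {cronbach_alpha(demo):.2f}")
```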

Results

The results of this study suggest seven primary domains that should be addressed in student evaluation of online courses: online design and organization, instructional design and delivery, student assessment, technological support, communications, interactions, and student characteristics (see Table 1). The first domain, online design and organization, is comprised of questions regarding the look and feel of the course (i.e., aesthetics) and the accessibility and usability of the interface. The second domain, instructional design and delivery, contains questions designed to measure the clarity of expectations (e.g., course objectives and assignments), organization, and the utility of resources. Items in this section query students about the logical organization of lessons; the utility and clarity of examples and non-examples used to elucidate instruction; opportunities for practice; and the difficulty level and clear articulation of course assignments. Further, students are asked about the utility of links to other sites or resources, quizzes and tests, online help, the online grade book, online presentations, and submission of assignments and homework. Examples of items from this domain are provided in Figure 1.

Domain                               Content Description
Online design and organization       Aesthetics (course look and feel); accessibility; usability
Instructional design and delivery    Clarity of objectives; organization of materials; utility of resources
Student assessment                   Clarity of assignments; integration of assessments with instruction; quality of formative feedback
Technological support                Hardware requirements; software requirements; technical support contacts
Communications                       Flexibility of communication vehicles
Interactions                         Quality and quantity of instructor and peer interactions
Student characteristics              Technological capabilities and proficiencies; reasons for taking online course; time commitments

Table 1: Domains of Student Course Evaluation
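For teams that want to adapt this framework in their own tooling, the sketch below shows one hypothetical way the domains and content descriptors of Table 1 could be encoded as a simple instrument specification. The class and field names are illustrative assumptions, not part of the published system.

```python
from dataclasses import dataclass, field


@dataclass
class EvaluationDomain:
    """One domain of the student course evaluation framework (Table 1)."""
    name: str
    content: list[str] = field(default_factory=list)


DOMAINS = [
    EvaluationDomain("Online design and organization",
                     ["Aesthetics (course look and feel)", "Accessibility", "Usability"]),
    EvaluationDomain("Instructional design and delivery",
                     ["Clarity of objectives", "Organization of materials",
                      "Utility of resources"]),
    EvaluationDomain("Student assessment",
                     ["Clarity of assignments", "Integration of assessments with instruction",
                      "Quality of formative feedback"]),
    EvaluationDomain("Technological support",
                     ["Hardware requirements", "Software requirements",
                      "Technical support contacts"]),
    EvaluationDomain("Communications", ["Flexibility of communication vehicles"]),
    EvaluationDomain("Interactions",
                     ["Quality and quantity of instructor and peer interactions"]),
    EvaluationDomain("Student characteristics",
                     ["Technological capabilities and proficiencies",
                      "Reasons for taking online course", "Time commitments"]),
]

# A survey-building tool could iterate over this specification to generate
# item blocks, one per domain.
for domain in DOMAINS:
    print(f"{domain.name}: {len(domain.content)} content descriptors")
```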

Sample items, each rated on the scale Rarely/Not at all, Sometimes, Frequently, Almost Always, or Not Applicable:

- The organization of the lessons was logical and easy to follow.
- There were sufficient examples and non-examples to clarify instruction.
- Instructional examples and non-examples were clear and easy to follow.
- There were sufficient opportunities to practice and apply important concepts.
- Course activities and assignments facilitated my understanding of course content.

Figure 1: Instructional Design and Delivery (Sample Items)

The third section, student assessment, is concerned with the clarity of assignments, the integration of assessments with instruction, and both the quality and the timeliness of formative feedback. In the fourth section, technological support, items were drafted to glean information regarding hardware and software requirements and the provision of contacts for technical support. With respect to communications, our aim was to query students regarding the flexibility and variety of options for communicating with their instructors and peers (see Figure 2). In contrast, the interaction domain is concerned with the quality and the quantity of both instructor and peer interactions (Figure 3). Lastly, a series of items designed to gather information about the students themselves posed questions regarding technological capabilities and proficiencies, reasons for taking online courses, and time commitments in the online environment. Questions within this domain ask students about their history of taking web-based or Internet courses, their current course load, their reasons for taking online courses, and their level of proficiency using various software applications such as web browsers, e-mail, chat, word processing, spreadsheets, software for creating web pages, presentation software, and audio/video programs.

An eighth domain (evaluation of the quality of course content) was suggested by the review of literature but was not included in the evaluation system. Although this domain is appropriate and necessary for curricular review by content experts, students enrolled in a post-secondary course are unlikely to possess the content expertise needed for their ratings in this domain to yield meaningful data.

Respondents indicate which communication vehicles they used to communicate with their Instructor(s) and with Other Students:

- Email
- Chat Room
- Electronic Bulletin Board
- Instant Messaging
- Listserv
- Fax
- Telephone
- Face-to-Face Meeting
- Other (please specify ________________)

Figure 2: Communications (Sample Items)

Sample items, each rated on the scale Rarely/Not at all, Sometimes, Frequently, Almost Always, or Not Applicable:

- The instructor responded to my questions in a timely manner.
- I received timely feedback on assignments.
- I received constructive feedback on assignments.
- Email/electronic discussions with peers were encouraged.

Figure 3: Interactions (Sample Items)

The administration of the series of three student surveys to participants in 20 newly created online courses served to further our efforts to develop an evaluation system that could be easily adapted to meet the needs of a host of individuals and groups involved in higher learning. Psychometric analyses of the data gathered from three semesters of administration in these courses (e.g., exploratory factor analysis and internal consistency evaluation) were used to refine the instrumentation and to inform the development of the student evaluation system. Further, feasibility data associated with these assessments suggested revisions to our data collection strategies. For example, both technical and logistical problems were encountered when attempting to capture and synthesize student perceptions across three different time points during the course of a semester, leading us to refine and refocus our instruments in order to be more efficient in our data collection. Additionally, we streamlined certain measurement strategies when we discovered that a series of questions originally represented on frequency scales could be adequately measured using binary responses. Psychometric analyses of the instruments used in the field test suggest clearly interpretable factors and satisfactory reliability estimates (with Cronbach's alphas ranging from .75 to .95).

Finally, the administration of three separate surveys (i.e., during the first week of each semester, at the midpoint, and at the end of each semester) was determined to be too cumbersome and time consuming for programs to administer and manage effectively. Therefore, instruments and processes were revamped for data collection in two phases (mid-semester and end of course). Currently, we are poised to field test the resultant instrumentation with the aim of further refining our evaluation system. The results from this next round of survey administration should serve to advance our understanding and appreciation of the factors that contribute to successful online course administration.
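Two of the refinements described above, recoding frequency-scale responses to binary indicators and extracting factors through exploratory factor analysis, can be illustrated with a short sketch. The column names, simulated responses, and the use of scikit-learn's FactorAnalysis with a varimax rotation are our assumptions for illustration; the sketch is not the project's analysis code.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)

# Hypothetical responses: 200 students, 14 frequency-scale items.
labels = ["Rarely/Not at all", "Sometimes", "Frequently",
          "Almost Always", "Not Applicable"]
items = [f"item_{i:02d}" for i in range(1, 15)]
responses = pd.DataFrame(
    rng.choice(labels, size=(200, len(items)), p=[0.15, 0.3, 0.3, 0.2, 0.05]),
    columns=items)

# Step 1: collapse the frequency scale to binary indicators, treating
# "Not Applicable" as missing, mirroring the streamlining described above.
binary_map = {"Rarely/Not at all": 0, "Sometimes": 0,
              "Frequently": 1, "Almost Always": 1, "Not Applicable": np.nan}
recoded = responses.replace(binary_map).dropna().astype(float)

# Step 2: exploratory factor analysis with a varimax rotation; seven factors
# are requested here simply to echo the seven domains reported above.
fa = FactorAnalysis(n_components=7, rotation="varimax")
fa.fit(recoded.to_numpy())
loadings = pd.DataFrame(fa.components_.T, index=recoded.columns,
                        columns=[f"factor_{k + 1}" for k in range(7)])
print(loadings.round(2))
```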

Significance of the Research

As distance learning becomes an increasingly prevalent means of delivering instruction throughout the educational system, including post-secondary and K-12 institutions, the importance of ensuring that these courses are effective and useful grows as well. As such, educators, administrators, and institutions need tools and methods to confirm that the courses and programs they offer not only meet the requirements of governing accreditation, policy-making, and funding agencies but also meet the needs of their students and instructors. This study provides a foundation for developing sound practices for student evaluation of instruction in an online environment, with regard both to the conceptual framework of such evaluation and to the types of instruments and processes that should be employed. This research provides not only concrete examples of instrumentation that can be directly used or adapted by individuals, institutions, and programs, but also a concise reference to guide revision of previously constructed instruments and the development of new evaluation and assessment systems and instruments.

Implications for Practice

Quality instruction in an online environment requires different perspectives and practices than the more traditional, face-to-face mode of course delivery. Concerns about the technology skills of students, the availability of universally accessible resources, and the clarity of expectations and requirements increase the complexity of delivering effective instruction. As such, educators need sound tools and processes that can be used to support online courses by informing instructors and administrators about the effectiveness of their courses from the students' perspectives. Instruments developed for classroom-based courses are neither valid nor appropriate for student evaluation of online courses. Further, instructors have neither the time nor the resources to develop and implement a sound evaluation system on an individual basis. The results of this research provide instructors with easily accessible resources for gathering information that will help them meet the needs of their students.

References

Baker, R. (2004). A framework for design and evaluation of Internet-based distance learning courses: Phase one - framework justification, design, and evaluation.

Barron, A. (1998). Designing Web-based training. British Journal of Educational Technology, 29, 355-370.

Bonk, C. J., Cummings, J. A., Hara, N., Fischler, R. B., & Lee, S. M. (2000). A ten-level web integration continuum for higher education. In B. Abbey (Ed.), Instructional and cognitive impacts of web-based education (pp. 56-77). Hershey, PA: Idea Group Publishing.

Cohen, V. L. (2003). A model for assessing distance-learning instruction. Journal of Computing in Higher Education, 14(2), 98-120.

Eaton, J. S. (2002). Maintaining the delicate balance: Distance learning, higher education accreditation, and the politics of self-regulation. Washington, DC: American Council on Education.

Gunawardena, C. N., Lowe, C., & Carabajal, K. (2000). Evaluating online learning: Models and methods. In Proceedings of the Society for Information Technology & Teacher Education International Conference (SITE 2000), Vol. 13. San Diego, CA.

Hogarty, K. Y., Kromrey, J. D., Barron, A. E., Hess, M. R., & Schullo, S. (2004, June). Development, delivery, and effectiveness: Evaluation of innovative online instruction at a research university. Paper presented at the annual meeting of the National Educational Computing Conference, New Orleans, LA.

International Society for Technology in Education. (2002). Educational technology standards and performance indicators for all teachers. Washington, DC: Author.

Moore, M., & Kearsley, G. (1996). Distance education: A systems view. Belmont, CA: Wadsworth.

National Council for Accreditation of Teacher Education. (2000). Professional standards for the accreditation of schools, colleges and departments of education. Washington, DC: Author.

Phipps, R. A., Wellman, J. V., & Merisotis, J. P. (1998). Assuring quality in distance learning. Washington, DC: Council for Higher Education Accreditation.

U.S. Department of Education, National Center for Education Statistics. (2003). Distance education at degree-granting postsecondary institutions: 2000–2001 (NCES 2003-017), by Tiffany Waits and Laurie Lewis (Project Officer: Bernard Greene). Washington, DC: Author.

Acknowledgements

This work was supported, in part, by the University of South Florida and the Fund for the Improvement of Postsecondary Education, under Grant No. P339Z000006. The opinions expressed are those of the authors and do not reflect the views of the United States Department of Education or the University of South Florida.
