Heath, N. M., Lawyer, S. R., & Rasmussen, E. B. (2007). Web-based versus paper-and-pencil course evaluations. Teaching of Psychology, 34, 259-261. DOI: 10.1080/00986280701700433. Online version: http://top.sagepub.com/content/34/4/259


Web-Based Versus Paper-and-Pencil Course Evaluations

Nicole M. Heath, Steven R. Lawyer, and Erin B. Rasmussen
Idaho State University

Our study compared the quantitative and qualitative outcomes associated with course evaluations collected over the Internet with those collected using a paper-and-pencil method. We randomly assigned students to 1 of the 2 different formats. There was no significant difference in quantitative student responses based on administration method, but students who completed evaluations over the Internet were more likely to give qualitative feedback compared to students who completed their evaluations in the classroom. Moreover, students in the Web-based condition provided longer qualitative comments than students in the paper-and-pencil group. We discuss the implications of these findings.

Researchers postulate that students, teachers, and administrators value course evaluations completed by students (see Richardson, 2005, for a review), which are a primary method of evaluating effective teaching for tenure and promotion (Rasmussen, 2005). Several empirical studies and reviews indicate that students usually respond to evaluative questions using Likert-type scales but also can provide individualized qualitative feedback (see Cates, 1993; Marsh, 1984, for examples). The goals of evaluations are to improve the course and to inform instructors, as well as the university, about teaching quality (Canelos, 1985).

Unfortunately, the process of collecting course evaluative information using paper-and-pencil measures during class has disadvantages. The process diverts time away from instruction to the administration and collection of surveys. Also, students who are not in class on the day set aside for course evaluation lose the opportunity to provide important feedback. Once evaluations are collected, someone must devote time to typing students' qualitative feedback to ensure confidentiality. Finally, at some universities, individual departments might pay for test scoring by an independent entity (e.g., a testing center or computer services).

One way to avoid the pitfalls associated with traditional course evaluation procedures is to collect evaluations over the Internet. Using WebCT or other Internet-based tools could help address the disadvantages of paper-and-pencil course evaluations by allowing students to complete evaluations outside of class, which reduces the class time devoted to course evaluations and gives students more time and flexibility to provide course feedback. A WebCT-based method of data collection could also reduce the costs associated with compiling relevant data, because compiling the quantitative data is automated and typed qualitative comments are easily transferred into an electronic file. However, it is not clear whether Web-based and paper-and-pencil assessment procedures provide equivalent course evaluation data.

In an attempt to address this concern, Cates (1993) used a within-subjects design and had students complete both paper-and-pencil and computerized course evaluations. The findings revealed that student responses on the two evaluation forms were highly correlated; however, the computerized evaluation forms used in that study were not Internet based, and the within-subjects design might have introduced practice effects.

We sought to experimentally examine the extent to which course evaluations completed via WebCT would yield different results than a traditional paper-and-pencil method. We were specifically interested in whether there were differences in quantitative and qualitative feedback and in the return rates between the two methods.

Method

Participants

Undergraduate students (N = 342) enrolled in three sections of an introductory psychology class at a public university in the northwest United States participated. One of two instructors (the second and third authors) independently taught each of the three course sections. All participants received a small amount of research extra credit (2 points; approximately 0.5% of the points used to calculate the total grade) toward their course grades as an incentive to complete evaluations.

Measures

The course evaluation form consisted of 25 items related to various aspects of the course, including the quality of the course (e.g., "I learned fundamental principles, generalizations, and theories in this course"), the instructor's efficacy and skill in teaching (e.g., "The instructor communicated his/her ideas clearly"), and the validity of examinations (e.g., "The test questions were appropriate for the course content"). Students answered each item using a 5-point Likert-type scale ranging from 1 (strongly disagree) to 5 (strongly agree). The end of the questionnaire contained a blank space where students could leave qualitative comments about the course that they believed would be helpful to the instructors. Students typically completed the questionnaire in 10 to 15 min.

We used two different versions of the evaluation form. On the paper-and-pencil version, participants responded to the quantitative items on a scantron sheet and wrote qualitative comments in the space provided on the questionnaire. The Web-based version was identical in content to the paper-and-pencil version. Participants assigned to the WebCT condition logged on via the university Web site and completed the measure by computer. They answered the Likert-type items by clicking the box that corresponded to their answer to each item and provided qualitative comments by typing responses into spaces at the end of the questionnaire.

Procedures

We randomly assigned all students in courses taught by both instructors to complete the WebCT (n = 180) or the paper-and-pencil (n = 162) version of the evaluation. The instructors informed the participants assigned to the paper-and-pencil version what day course evaluations would take place and that they would receive extra credit for their participation. These evaluations took place at the end of class one week before the end of the semester; participants not attending class on that day did not complete a questionnaire. Graduate teaching assistants distributed the evaluation forms, and the instructors left the room to avoid influencing student responses. The WebCT versions of the surveys became available for completion when the paper-and-pencil evaluations were distributed, and students assigned to this condition could complete the evaluations at any time during a 1-week interval. Students assigned to the paper-and-pencil version received credit for their participation by writing their names on a sign-up sheet and handing it in separately from their evaluation form; students assigned to the WebCT condition received credit when a list of names based on WebCT log-in information confirmed that they had completed the evaluation. Instructors informed students in both conditions that their evaluations would not be linked to their names, so all information would be confidential.
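As a purely illustrative aside (the authors do not describe the tools they used), the sketch below shows one way a class roster could be randomly split between the two formats and how the 25-item forms could be scored. The file names, column names, and seed are hypothetical and are not the authors' materials.

```python
# Illustrative sketch only: random assignment to evaluation formats and
# scoring of the 25-item form. File and column names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2007)

# One row per enrolled student (hypothetical roster file).
roster = pd.read_csv("roster.csv")
labels = ["webct"] * 180 + ["paper"] * (len(roster) - 180)
roster["format"] = rng.permutation(labels)

# One row per completed evaluation, with items item_1 ... item_25 scored 1-5,
# so total scores can range from 25 to 125 (hypothetical response file).
responses = pd.read_csv("responses.csv")
item_cols = [f"item_{i}" for i in range(1, 26)]
responses["total_score"] = responses[item_cols].sum(axis=1)
```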

Results

We excluded data from two participants who logged onto the WebCT course evaluation Web site but provided no quantitative data. We conducted logistic regression analyses, as this statistical technique allows multiple variables to predict a dichotomous outcome variable. These analyses indicated no differences in the proportion of students who completed their evaluations via WebCT (72.2%; n = 130) versus paper-and-pencil evaluations (81.5%; n = 132), Wald χ²(1, N = 342) = 1.75, OR = .51, p = .19. Students taught by one of the course instructors completed significantly more course evaluations than students taught by the other (89.5% vs. 63.7%), Wald χ²(1, N = 342) = 14.48, OR = .17, p < .01. We found no significant Instructor × Format interaction, Wald χ²(1, N = 342) = 0.49, OR = 1.53, p = .48.

After excluding students who did not complete evaluations, an ANOVA revealed that total quantitative evaluation scores on the WebCT (M = 104.26, SD = 14.76) versus paper-and-pencil (M = 107.39, SD = 16.09) evaluation forms were not significantly different, F(1, 259) = 2.76, p = .098. The total scores for one instructor (M = 106.08, SD = 14.86) did not significantly differ from the total evaluation scores for the other (M = 105.52, SD = 16.43), F(1, 259) = 0.006, p = .94, nor was there a significant Instructor × Format interaction for total scores, F(1, 259) = 0.23, p = .63.

However, students completing evaluations on WebCT were more likely to leave qualitative feedback (n = 104) than were students completing paper-and-pencil evaluations (n = 73), χ²(1, N = 262) = 18.23, p < .001. Post hoc power analyses revealed a small to medium (Cohen, 1988) effect size of w = .26. Additionally, among those students who left qualitative feedback, a comparison of word counts revealed that participants completing the Web-based version left comments that were more than 50% longer (M = 47.77, SD = 42.47) than did those completing the paper-and-pencil version (M = 30.73, SD = 26.83), t(175) = 3.03, p = .003.
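For readers who want to run the same kinds of analyses on their own evaluation data, a minimal sketch follows, assuming a pandas DataFrame with one row per student and hypothetical columns: completed (0/1), format (1 = WebCT, 0 = paper-and-pencil), instructor (0/1), left_comment (0/1), total_score, and word_count. It is not the authors' code or data; synthetic stand-in data are generated only so the example runs end to end.

```python
# Hedged sketch of the reported analyses, using hypothetical column names
# and synthetic stand-in data (NOT the study's dataset).
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 342
df = pd.DataFrame({
    "format": rng.integers(0, 2, n),        # 1 = WebCT, 0 = paper-and-pencil
    "instructor": rng.integers(0, 2, n),
    "completed": rng.integers(0, 2, n),
    "left_comment": rng.integers(0, 2, n),
    "total_score": rng.integers(25, 126, n),
    "word_count": rng.integers(0, 150, n),
})

# 1. Logistic regression: format, instructor, and their interaction predicting
#    completion (Wald tests; exponentiated coefficients give odds ratios).
logit = smf.logit("completed ~ format * instructor", data=df).fit()
print(logit.summary())
print(np.exp(logit.params))                 # odds ratios

# 2. Two-way ANOVA on total evaluation scores among students who completed one.
completers = df[df["completed"] == 1]
ols_fit = smf.ols("total_score ~ format * instructor", data=completers).fit()
print(sm.stats.anova_lm(ols_fit, typ=2))

# 3. Chi-square test on whether completers left qualitative feedback, by format,
#    with Cohen's w as the effect size: w = sqrt(chi2 / N).
table = pd.crosstab(completers["format"], completers["left_comment"])
chi2, p, dof, _ = stats.chi2_contingency(table, correction=False)
w = np.sqrt(chi2 / table.values.sum())

# 4. Independent-samples t test on comment length among students who commented.
commenters = completers[completers["left_comment"] == 1]
t, p_t = stats.ttest_ind(
    commenters.loc[commenters["format"] == 1, "word_count"],
    commenters.loc[commenters["format"] == 0, "word_count"],
)
print(chi2, w, t, p_t)
```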

Discussion

Our goal was to assess the equivalence of Internet-based versus paper-and-pencil methods in the administration of end-of-course evaluations. The findings lead to several conclusions. First, course feedback collected from students using Web-based or paper-and-pencil methods yielded similar quantitative ratings of instructor performance. This finding suggests that the Web-based manner of administration does not influence overall course ratings.

Second, our findings suggest that there are some potential benefits to using Web-based methods for course evaluations. Students completing their evaluations via WebCT were more likely to leave supplemental, qualitative comments about the course. Moreover, the qualitative comments from Web-based participants were significantly longer than those from paper-and-pencil participants. This information is particularly relevant to university instructors, many of whom value individualized feedback specific to their courses because it can help improve the quality of instruction (Richardson, 2005). Our findings indicate that administering course evaluations via the Internet is helpful, as it can provide instructors with more detailed feedback.

One aspect of our study that might limit generalization to other classrooms is that students received a small amount of extra credit for completing course evaluations. Although students could obtain extra credit in both conditions, it is possible that offering extra credit inflated the overall proportion of students completing the evaluations or differentially influenced the number of WebCT versus paper-and-pencil evaluations. Using the incentive might also have affected the external validity of our findings. Future research that can clarify such uncertainties is warranted.

In spite of these limitations, our study provides some optimism that administering course evaluations in a Web-based format yields similar, if not better, data regarding student responses to introductory psychology courses.

References

Canelos, J. (1985). Teaching and course evaluation procedures: A literature review of current research. Journal of Instructional Psychology, 12, 187-195.

Cates, W. M. (1993). A small-scale comparison of the equivalence of paper-and-pencil and computerized versions of student end-of-course evaluations. Computers in Human Behavior, 9, 401-409.

Cohen, J. (1988). Chi-square tests for goodness of fit and contingency tables. In J. Cohen (Ed.), Statistical power analysis for the behavioral sciences (2nd ed., pp. 215-271). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Marsh, H. W. (1984). Students' evaluations of university teaching: Dimensionality, reliability, validity, potential biases, and utility. Journal of Educational Psychology, 76, 707-754.

Rasmussen, E. B. (2005). Creating teaching portfolios. In W. Buskist & S. F. Davis (Eds.), The handbook of the teaching of psychology (pp. 301-306). Malden, MA: Blackwell.

Richardson, J. T. E. (2005). Instruments for obtaining student feedback: A review of the literature. Assessment and Evaluation in Higher Education, 30, 387-415.

Notes

1. We thank Maria M. Wong for her assistance with this article.
2. Send correspondence to Steven R. Lawyer, Idaho State University Department of Psychology, Campus Box 8112, Pocatello, ID 83209; e-mail: [email protected].
