JOURNAL OF DISTANCE EDUCATION REVUE DE L’ÉDUCATION À DISTANCE SPRING/PRINTEMPS 2004 VOL. 19, No 2, 77-92
Students' Perceptions of Online Assessment: A Case Study

M. Yasar Özden, Ismail Ertürk, and Refik Sanli

Abstract

For many reasons the use of computer-assisted assessment (CAA) is increasing. Although computer-based examinations are increasingly used, research is lacking on students' perceptions of online assessment in general and of particular components of online assessment systems. The aim of the study was to investigate students' perceptions of the use of CAA and to explore the potential for using student feedback in the validation of assessment. To determine students' perceptions of online assessment, an examination Web site was developed and implemented as part of the assessment of Masaüstü Yayincilik (Desktop Publishing), a course given by the Department of Computer Science at Kocaeli University, Turkey. The study was descriptive, using a paper-based survey and interviews for data collection. Participants were third-year students enrolled in the course. Descriptive analysis of the questionnaire and interview data showed that the most prominent features of the online assessment system were immediate feedback, randomized question order, item analysis of the questions, and obtaining scores immediately after the exam. Participants reported that the online assessment system was effective. Although there is much room for improvement in online assessment systems, such systems are readily accepted by computer-friendly youth.

Résumé

L'utilisation de l'évaluation assistée par ordinateur est en hausse pour plusieurs raisons. Bien que l'emploi d'examens par ordinateur soit en croissance, la recherche fait défaut sur la perception qu'ont les étudiants de ces derniers et des catégories de systèmes d'évaluation en ligne. Le but de l'étude était d'examiner comment les étudiants perçoivent l'évaluation assistée par ordinateur et d'évaluer comment les réactions des étudiants peuvent guider la recherche à ce chapitre. Afin de déterminer les perceptions des étudiants à propos des évaluations en ligne, un site Web d'examens a été conçu dans le cadre de l'évaluation du cours Masaüstü Yayincilik (Publication assistée par ordinateur), donné par le département des sciences informatiques à Kocaeli University en Turquie. L'étude, de nature descriptive, utilisait un questionnaire papier ainsi que des entrevues pour recueillir les données. Les participants étaient des étudiants de 3e année inscrits à ce cours. Une analyse descriptive des questionnaires et des données récoltées en entrevue a révélé que les caractéristiques dominantes du système d'évaluation en ligne étaient la rétroaction immédiate, la répartition aléatoire des questions, l'analyse de chaque
élément des questions et l’obtention immédiate du résultat de l’examen. Les participants ont noté l’efficacité du système d’évaluation en ligne. Même s’il y a amplement place à amélioration des systèmes d’évaluation en ligne, on remarque que ces derniers sont tout de même déjà acceptés par les jeunes de la génération informatique.
Background of the Study

In recent years developments in information and communication technologies (ICT) have led to growth in the range of Internet tools that can be used for learning and research. Some have gained wide-scale acceptance (e.g., e-mail, which has been adopted with ease); others have found niche applications or are less pervasive than one might at first have imagined (e.g., videoconferencing). One application that is becoming more common is computer-assisted assessment.

The term computer-assisted assessment (CAA) can cover any kind of computer use in the process of assessing the knowledge, skills, and abilities of individuals. It encompasses a range of activities, including the delivery, marking, and analysis of all or part of the student assessment process using stand-alone or networked computers and associated technologies. Earlier research has shown a range of motivations for implementing CAA in a course, and often a combination of factors results in CAA being used (Bull & McKenna, 2001). Some of the key reasons cited include:
• To increase the frequency of assessment, motivating students to learn and encouraging skills practice;
• To broaden the range of knowledge assessed;
• To increase feedback to students and lecturers;
• To extend the range of assessment methods;
• To increase objectivity and consistency;
• To reduce marking loads; and
• To aid administrative efficiency.

This article describes the findings of an external evaluation of a project that aimed to disseminate good practice, guidelines, and models of implementation and evaluation of one particular type of learning technology, namely CAA. In particular, the evaluation explored the effect of integrating CAA into learning and teaching and students' perceptions of CAA.
Purpose of the Study

Over the past decade there has been a large increase in the use of computer-based assessment (Stephens & Mascia, 1997). However, little has been published to date on students' views of computer-based assessment,
particularly assessment based on the more complex interactions offered by the TRIADs system (Mackenzie, 1997). Because some of the published work concerns the prevalence of computer anxiety among students, the use of computers for assessment has been open to question. This comes with a general recognition in higher education that assessment is no longer separate from, but rather affects, all stages of the learning process (Brown & Knight, 1995). Given the history of CAA, we were interested in observing the effect of the introduction of CAA on the learning process and in investigating students' perceptions further. The aim of the study was to gain an understanding of students' perceptions of the use of CAA and to investigate the potential for using student feedback in the validation of assessment.
Significance of the Study

The use of computer-based assessment is increasing for many reasons; examples include educational entrance exams, military training exams, and certification exams given by professional groups. Although the use of computer-based exams is increasing, there is not enough research on students' perceptions of online assessment in general or of particular components of online assessment systems. Such research would show which parts of online assessment systems are important and which parts should be developed or revised to achieve better results.
Design of the Study

The descriptive study used paper-based surveys and interviews for data collection. To obtain information about students' perceptions of online assessment, a Web site was developed and implemented. The course instructor was responsible for the instructional design, content creation, and all activities for the course, but one researcher designed and developed the online assessment Web site. The Web site was database driven and was developed using Active Server Pages (ASP), a Microsoft Access database, and Cascading Style Sheets (CSS). The online assessment Web site was designed as two modules with user and administrator interfaces; the user module delivered multiple-choice questions. The online assessment site was used for the module assessment part of Masaüstü Yayincilik (Desktop Publishing), a computer course on computer literacy, MS Office applications, and Web development tools given by the Department of Computer Education at Kocaeli University in spring 2003.
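The article does not include the system's source code. Purely as an illustration, the following Python/SQLite sketch shows one way a database-driven exam site with separate administrator and user modules can be organized; every table, field, and function name here is hypothetical, and the actual system was written in ASP against a Microsoft Access database rather than in Python.

import sqlite3

# Illustrative schema only; the published article does not describe the actual tables.
SCHEMA = """
CREATE TABLE IF NOT EXISTS questions (
    id     INTEGER PRIMARY KEY,
    unit   TEXT,                -- course unit the item belongs to
    stem   TEXT,                -- question text
    a TEXT, b TEXT, c TEXT, d TEXT,
    answer TEXT                 -- key: 'a', 'b', 'c', or 'd'
);
CREATE TABLE IF NOT EXISTS results (
    student_id  TEXT,
    question_id INTEGER,
    selected    TEXT,
    correct     INTEGER         -- 1 if the selected option matches the key
);
"""

def add_question(db, unit, stem, options, answer):
    # Administrator module: store one multiple-choice item in the question bank.
    db.execute(
        "INSERT INTO questions (unit, stem, a, b, c, d, answer) VALUES (?,?,?,?,?,?,?)",
        (unit, stem, *options, answer),
    )

def record_answer(db, student_id, question_id, selected):
    # User module: save a response and mark it at once (immediate feedback).
    (key,) = db.execute(
        "SELECT answer FROM questions WHERE id = ?", (question_id,)
    ).fetchone()
    db.execute(
        "INSERT INTO results VALUES (?,?,?,?)",
        (student_id, question_id, selected, int(selected == key)),
    )

db = sqlite3.connect(":memory:")
db.executescript(SCHEMA)
add_question(db, "MS Office", "Which program is a spreadsheet?",
             ("Word", "Excel", "PowerPoint", "Access"), "b")
record_answer(db, "student01", 1, "b")

The same two-table idea (a question bank filled through the administrator module and a results table filled through the user module) is what makes the immediate scoring, score tracking, and item analysis discussed later straightforward to provide.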
Table 1
Demographic Characteristics of Participants (N = 46)

              n      %
Sex
  Male       40     87
  Female      6     13
Age
  20          6     13
  21         30     65
  22         10     21
Participants

Participants were third-year students in the Department of Computer Education, Kocaeli University, enrolled in the course Masaüstü Yayincilik (Desktop Publishing). All students taking the course were informed about the research, and all were free to choose whether to complete the questionnaires and attend interviews; all showed a strong willingness to participate. Forty-six students were registered in the course, and their demographic characteristics are shown in Table 1.
Instrumentation

A paper-based questionnaire and in-depth interviews were used to investigate students' perceptions of the online assessment. Each tool is described below.

User evaluation questionnaire. This questionnaire was designed to obtain information on the students' computer familiarity and prior online assessment experience, and on their evaluation of specific components of the online assessment Web site such as the user interface, effects on the learning process, and system use. Two measurement and evaluation experts and one distance education expert from the Faculty of Education, METU contributed to preparing the questionnaire. The questions were of three types: nominal data responses; Likert five-point scale items ranging from strongly agree, agree, neutral, and disagree to strongly disagree; and open-ended responses.

In-depth interviews. In order to gain a better understanding of the responses and of suggestions for the online assessment system, especially in terms of its function design, implementation, and Web site production, we decided to supplement the study with follow-up interviews. The in-depth interviews probed the users' opinions and suggestions about unresolved answers and controversial issues that were not revealed by the questionnaire. A researcher-designed interview protocol was developed after the survey data were analyzed (see Appendix).
Reliability

Reliability refers to the consistency of responses. To assess the reliability of the questionnaire, a pilot study was undertaken with five students randomly chosen from the population, and Cronbach's alpha coefficients were calculated from the pilot results. The resulting coefficients were all at least 0.75. Data from the open-ended responses were used to improve the Web design. Cronbach's alpha coefficients were subsequently calculated for the actual study responses as well; all were at least 0.75, indicating that the reliability of the instrument was consistently high.
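The article reports only the resulting coefficients; as a reminder, for k items with item variances s_i^2 and total-score variance s_T^2, Cronbach's alpha = k/(k-1) x (1 - sum of s_i^2 / s_T^2). The short Python sketch below illustrates that calculation on made-up Likert responses; it is not the study's pilot data.

import numpy as np

def cronbach_alpha(scores):
    # scores: n_respondents x n_items matrix of Likert ratings.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Made-up example: 5 respondents x 4 items (not the pilot study's actual responses).
pilot = [[4, 5, 4, 4],
         [3, 4, 3, 4],
         [5, 5, 4, 5],
         [2, 3, 2, 3],
         [4, 4, 4, 4]]
print(round(cronbach_alpha(pilot), 2))  # coefficients of at least 0.75 were taken as acceptable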
Data Collection and Analysis

The evaluation survey was conducted one and a half months after the course began. The paper-based questionnaire was distributed to the class of 46 students taking the course. The responses were collated, and percentages and mean values were calculated. After the course was completed, in-depth individual interviews were conducted with five students selected with a random number sequence. During the interviews students' responses were written down; each interview took almost half an hour. The interview data were collated, and responses were linked to the numerical survey data and the open responses. In addition, usage data for the Web site were printed. The course instructor was not allowed to see the data obtained from the questionnaires and interviews before giving the final marks.
Findings

Question 1: What was the participants' competence with computers?

The purpose of this question was to investigate the students' competence with computer application programs such as Web browsers and e-mail programs, because familiarity with those programs is a prerequisite for using the Web-assisted assessment program (Table 2). Four percent of students indicated that their competence with the Web browser was poor. An introductory competence level was sufficient for effective use of the online assessment tool, and 96% of the students were at or above the introductory level. Before the final examination, students were given sample quizzes and trained on the important points of the online assessment tool; this minimized problems caused by browser usage.

Question 2: What were the prior experiences of participants with online assessment?

The purpose of this question was to identify the students' prior experience with online assessment (see Table 3).
Table 2
Percentage Distribution of Students' Computer Competence (N = 46)

Student competence with   Advanced %   Good %   Introductory %   Poor %   None %
1. Web browser                25          54          17            4        0
2. Chat                       25          34          29            8        4
3. Telnet                     21          37          17            0       25
4. E-mail                     25          45          13           13        4
5. FTP                         8          46          21            0       25
6. Mailing lists              33          38           8            4       17
As the results indicate, most of the students had no prior experience with online assessment. Only 42% had used the Web for instructional purposes, 20% had taken an online quiz on the Web, and 33% had previously taken some kind of online assessment. None of the students had taken an online course before the research.

Table 3
Students' Prior Experiences of Online Assessment (N = 46)

Prior experiences                                        Yes %   No %
I am taking course(s) online                               0     100
I have attended an online course before                    0     100
I have taken TOEFL or GRE before                           0     100
I have taken some kind of online assessment before        33      67
I have taken an online quiz on the Web                    20      80
I have used the Web for instructional purposes            42      58

Question 3: What were participants' perceptions of the user interface of the online assessment Web site?

Table 4 shows the evaluation items for the system's screens and interface. The mean agreement across these items is high, at just over 3.75, and almost all standard deviations are below 1.00, indicating that users held broadly similar views of the user interface. Based on the survey results, the appropriateness of the overall framework, the overall configuration of colors and background, the overall layout of screen and window design, and the overall interface operation method were all appraised highly. The appropriateness of the screen design and the ease of use of the interface operation were likewise scored highly and evenly. Whereas 59% of users saw the help page interface as clear and easy to operate, 33% disagreed.
Table 4
Frequencies, Percentages, and Means of Student Agreement with the Online Assessment System: "Screen and Interface Design"

Cell entries are % (n); ratings range from 5 (strongly agree) to 1 (strongly disagree).

User interface evaluation                                              5         4         3         2         1      Mean    SD
1. Overall framework and operation levels of the system
   are clear and smooth                                            23 (11)   35 (16)   38 (17)    4 (2)     0 (0)    3.77   0.86
2. Overall configuration of color and background is
   harmonious for the system                                       14 (6)    55 (26)   27 (12)    4 (2)     0 (0)    3.79   0.75
3. Overall screen layout and window design of the system
   is appropriate                                                  18 (8)    64 (30)   14 (6)     4 (2)     0 (0)    3.96   0.72
4. Overall interface operation method is easy and appropriate      17 (8)    35 (16)   35 (16)    9 (4)     4 (2)    3.52   1.03
5. Log-in interface is clear and easy to operate                   22 (10)   35 (16)   30 (14)    9 (4)     4 (2)    3.62   1.07
6. Log-in interface design is appropriate                          22 (10)   52 (24)   17 (8)     9 (4)     0 (0)    3.87   0.86
7. Register interface is clear and easy to operate                 23 (11)   30 (13)   43 (20)    4 (2)     0 (0)    3.72   0.87
8. Register interface design is appropriate                        18 (8)    48 (22)   30 (14)    4 (2)     0 (0)    3.80   0.79
9. Exam interface is clear and easy to operate                     27 (12)   39 (18)   30 (14)    4 (2)     0 (0)    3.89   0.86
10. Exam interface design is appropriate                           18 (8)    43 (20)   30 (14)    9 (4)     0 (0)    3.70   0.87
11. Past exam results interface is clear and easy to operate       17 (8)    58 (26)   17 (8)     8 (4)     0 (0)    3.84   0.81
12. Past exam results interface design is appropriate              26 (12)   29 (13)   33 (15)    8 (4)     4 (2)    3.65   1.09
13. Statistical evaluation interface is clear and easy to operate  16 (7)    50 (23)   25 (12)    9 (4)     0 (0)    3.73   0.87
14. Statistical evaluation interface design is appropriate         17 (8)    46 (21)   33 (15)    4 (2)     0 (0)    3.76   0.79
15. Exam result interface is clear and easy to operate             29 (13)   42 (19)   25 (12)    4 (2)     0 (0)    3.96   0.93
16. Exam result interface design is appropriate                    13 (6)    50 (23)   29 (13)    8 (4)     0 (0)    3.68   0.81
17. Help page interface is clear and easy to operate               21 (10)   38 (17)   11 (5)    13 (6)    17 (8)    3.33   1.38
18. Help page interface design is appropriate                      37 (17)   38 (17)   16 (8)     9 (4)     0 (0)    4.03   1.11
In terms of the standard deviations, this item showed the largest standard deviation (1.38) and also the smallest mean score (3.33). Users therefore did not agree on whether the help page was good or bad, but the trend is negative relative to the other items in the questionnaire. Although all users were asked to read the help page, usage data indicated that most did not read it but went directly to the exam pages. Thus, although some respondents said that the help page interface was not clear and easy to operate, it is likely that they had not read it. Help pages that are more effective and easier to use should therefore be provided to meet learners' needs, and they should encourage participants to read them while using the online assessment tool. In contrast, most users indicated that the help page interface design was appropriate; the mean for this item was 4.03, with a standard deviation of 1.11. This suggests that the design of the help page was perceived as good even though the page itself was little used. A majority of students (53-71%) rated the various interfaces as clear and easy to operate. Negative responses ranged from 4% to 13%, with the highest proportion concerning the log-in interface. Responses about the
clarity and operability of the register interface resulted in the largest percentage (43%) of neutral responses. Higher percentages of students (63-75%) rated the interfaces as appropriate. The exception was the past exam results interface, for which 12% gave negative ratings; this may be explained by students' infrequent use of this part of the online assessment tool. Although most mean values in the user interface evaluation are above 3.50, there is room for improvement.

Question 4: What are participants' perceptions about system use of the online assessment Web site?

The aspects of system use are shown in Table 5. The means ranged from 3.53 to 4.19, and the standard deviation of most items was less than 1, which suggests that users worked with the Web pages without significant problems. Problems with the help page resurfaced: 54% of users responded positively and 17% negatively to the statement "Help page made me use the Web site better." On average, 71% of users agreed that browsing the Web pages was easy, and about 75% agreed that directions could be followed without problems, that registering with the system and taking the exam were easy, that the system was easy to use and comfortable, and that changes could be made easily. The high scores on these items are likely a result of the initial training on system use and the sample quizzes taken before the final exam.

Question 5: What are participants' perceptions about the impact of the online assessment Web site on the learning process?

Questions were asked about three topics: assessment, cheating, and use (see Table 6). In terms of the fairness of the assessment process, 74% rated it positively, whereas 10 students (21%) were uncertain. However, when asked to respond to the statement "Cheating is difficult," the majority (54%) of students disagreed, whereas only 33% agreed. To prevent cheating, the system asked questions in random order, and the placement of the answer options also varied from user to user; in addition, all exams were taken in the labs under the supervision of proctors. Students may not have been aware of these procedures. At least 70% of students thought that the system feedback helped them reflect on their learning and that page-by-page questions made them feel better in the exam. Most students thought their own growth had improved through use of the system, and 67% hoped to see the system used in other courses.
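The per-user randomization mentioned above (random question order and varied option placement) is not shown in the article. The Python sketch below is only an illustration of how such shuffling could be done; the function name and the idea of seeding the generator on a student and exam identifier are assumptions, not the authors' implementation.

import random

def build_exam_form(question_bank, student_id, exam_id):
    # Return the questions and their options shuffled differently for each examinee.
    # Seeding on (student_id, exam_id) is an illustrative choice: every student gets a
    # different order, but reloading the same exam reproduces the same form.
    rng = random.Random(f"{student_id}:{exam_id}")
    questions = list(question_bank)
    rng.shuffle(questions)                       # randomize question order
    form = []
    for q in questions:
        options = list(q["options"])
        rng.shuffle(options)                     # randomize option placement
        form.append({"stem": q["stem"], "options": options, "answer": q["answer"]})
    return form

bank = [
    {"stem": "CSS is mainly used for...", "options": ["styling", "querying", "routing", "compiling"], "answer": "styling"},
    {"stem": "ASP pages are processed on the...", "options": ["server", "browser", "printer", "router"], "answer": "server"},
]
print(build_exam_form(bank, "student01", "final2003")[0]["stem"])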
Table 5
Frequencies, Percentages, and Means of Student Agreement with the Online Assessment System: "System Use"

Cell entries are % (n); ratings range from 5 (strongly agree) to 1 (strongly disagree).

System use                                              5         4         3         2        1      Mean    SD
1. I have browsed among Web pages easily             21 (10)   50 (22)   25 (12)    4 (2)    0 (0)    3.88   0.82
2. I have followed the directions without any
   problem                                           46 (21)   29 (13)   21 (10)    4 (2)    0 (0)    4.17   0.91
3. It is easy to register to the system              46 (21)   29 (13)   21 (10)    4 (2)    0 (0)    4.17   1.05
4. It is easy to take an exam                        47 (22)   29 (13)   20 (9)     4 (2)    0 (0)    4.19   0.92
5. Easier to correct work                            31 (14)   46 (21)   19 (9)     4 (2)    0 (0)    4.04   0.79
6. Ease of use and comfortable                       37 (17)   38 (17)   21 (10)    4 (2)    0 (0)    4.08   0.88
7. I often visit the past exam result page           28 (13)   42 (19)   17 (8)    13 (6)    0 (0)    3.85   0.92
8. Help page made me use the Web site better         16 (7)    38 (18)   29 (13)   17 (8)    0 (0)    3.53   1.97
9. Seeing the time left makes me progress better     38 (17)   33 (15)   21 (10)    4 (2)    4 (2)    3.97   1.07
Question 6: What are the participants' opinions about the online assessment Web site?

Students' general opinions about the online assessment tool were also investigated. As shown in Table 7, 58% of users agreed that the system provided immediate feedback, 79% agreed that online assessment was better than the paper-and-pencil format, and 92% agreed that online assessment was faster than the paper-and-pencil form. On average, about 80% of users agreed that online assessment was contemporary and more systematic. No users disagreed that this kind of online assessment was consistent with the teaching style, but 30% disagreed that they were less anxious.
Table 6
Frequencies, Percentages, and Means of Student Agreement about "Impacts on Learning Process"

Cell entries are % (n); ratings range from 5 (strongly agree) to 1 (strongly disagree).

Impacts on learning process                                      5         4         3         2         1      Mean    SD
1. Assessment is fair                                         41 (19)   33 (15)   21 (10)    5 (2)     0 (0)    4.10   0.97
2. Cheating is difficult                                      20 (9)    13 (6)    13 (6)    33 (15)   21 (10)   2.78   1.47
3. System feedback helps me to reflect on my merits
   in learning                                                18 (8)    64 (30)   18 (8)     0 (0)     0 (0)    4.00   0.61
4. Tracking past exam results makes me understand
   my progress                                                41 (19)   36 (17)   13 (6)     5 (2)     5 (2)    4.03   1.04
5. Statistical evaluation page gives detailed information
   on units where I am good or unsuccessful                   18 (8)    30 (14)   39 (18)   13 (6)     0 (0)    3.53   0.94
6. It helps me to better understand my growth and
   improvement in the course by using the system              13 (6)    50 (23)   33 (15)    4 (2)     0 (0)    3.72   0.75
7. It helps me to learn this course by using this system      21 (10)   42 (19)   29 (13)    8 (4)     0 (0)    3.76   0.89
8. I hope to use this system in other courses as well         38 (17)   29 (13)   25 (12)    4 (2)     4 (2)    3.93   1.03
9. Page-by-page questions make me feel better in the exam     37 (17)   33 (15)   13 (6)    17 (8)     0 (0)    3.90   1.07
Results of In-Depth Interviews with Users

After analyzing the survey results, we conducted in-depth interviews with five users chosen randomly from the students taking the course. Regarding the system function, some students viewed the exam style as inconvenient because the questions were selected randomly from a
Table 7
Percentages, Frequencies, and Means of Student Agreement with the Online Assessment System: "Students' Opinions"

Cell entries are % (n); ratings range from 5 (strongly agree) to 1 (strongly disagree).

Students' opinions                            5         4         3         2        1      Mean    SD
1. System provides immediate feedback      12 (6)    46 (21)   38 (17)    4 (2)    0 (0)    3.66   0.76
2. Less anxious                            15 (7)    38 (17)   17 (8)    13 (6)   17 (8)    3.21   1.35
3. Better than paper-and-pencil form       71 (32)    8 (4)    13 (6)     4 (2)    4 (2)    4.38   1.13
4. Consistent with the teaching style      25 (12)   42 (19)   33 (15)    0 (0)    0 (0)    3.92   0.77
5. Faster than paper-and-pencil            59 (27)   33 (15)    4 (2)     0 (0)    4 (2)    4.43   0.92
6. Contemporary                            62 (28)   17 (8)    17 (8)     0 (0)    4 (2)    4.33   1.04
7. More systematic                         37 (17)   38 (17)   21 (10)    0 (0)    4 (2)    4.04   0.99
8. Can be applied to other courses         24 (11)   50 (23)   13 (6)    13 (6)    0 (0)    3.85   0.94
question pool. They suggested that questions should appear grouped in ordered categories, with the questions within each category appearing in random order on the exam screen. Some students suggested adding a notebook area that would allow them to keep notes permanently and use them whenever they wished, which they felt would benefit their learning. Another problem discovered during the interviews was that although students could revisit all the exam pages, they could not see their selections on pages they had already completed. They suggested that when they returned to a completed exam page, they should be able to see their selections so that changes could be made easily. Prior answers were hidden to prevent cheating; based on the interview data, this decision should be reconsidered as a way of improving the effectiveness of the online assessment tool.
Discussion

The purpose of this study was to investigate students' perceptions of the use of online assessment. A Web site and exam system were used for summative assessment of computer education students in Masaüstü Yayincilik (Desktop Publishing), a course given in the spring term of the 2003-2004 academic year at Kocaeli University. Descriptive analysis of the questionnaire and interviews showed that the most prominent features of the system were immediate feedback, randomized question order, item analysis of the questions, and obtaining scores immediately after the exam. Overall, participants agreed on the effectiveness of the online assessment system. Most students agreed that obtaining immediate scores and feedback motivated them and contributed positively to their achievement on the exam. These features are the main advantages of computer-based over paper-based exams.

The greatest physical differences between computer and paper test administration are the interactivity and the physical size of the display area. The amount of information comfortably presented on a computer display is only about one third of that presented by a standard sheet of paper. For example, Haas and Hayes (1986) reported that when a text passage associated with a test item required more than one page, computer administration yielded lower scores than paper-and-pencil administration, apparently because of the difficulty of reading the extended text on-screen. On paper, a student can rapidly scan all the questions on a page and can easily flip backward or forward to other pages (a form of interactivity). In computer-based assessment, one test item is presented on each screen, and the student must act physically to move from one screen (item) to another (another form of interactivity). This difference probably leads to greater focus and closure with each computer-based item. Thus computer-based items (relative to paper) may increase transition time and memory load, with a tighter focus on and closure of each individual item (Clariana, 1997).

Some students also suggested that the units of the subject should appear in ascending order, but that the questions within the units should appear randomly. Item order (computer-administered test items are presented in a randomized order) and the order of multiple-choice response options (also randomized in computer-administered tests) can affect performance on an item (Beaton & Zwick, 1990). This probably relates to ordered versus randomized test item sequencing: when the instructional lesson content and the test items are in the same order, the ordered test will probably yield higher scores than a randomized version. In our investigation the computer-based tests were randomly generated, so an order effect may have been present.
The other valued features were simplicity of testing, comfort, speed, simplicity of editing and alterations, effective measurement of learning outcomes, and reduced anxiety (Karakaya, 2001).

Both high- and low-ability students should benefit from greater focus on an item, although because of the greater cognitive load required, only high-ability students would be able to tie successive items together and learn from the test in order to answer other test items. To examine this hypothesis, a test could be designed that intentionally provides items that, if remembered, would allow the student to answer other items correctly. If high-ability learners do learn during the test (relative to low-ability learners), a pattern of means similar to that observed in this investigation should occur. If display-size format is the primary factor, then the multiple-page group should outperform the one-item-per-page format.
Conclusion

Based on our review and study results, we anticipate that familiarity with computers and with the assessment tool are the most fundamental factors in the perception of online assessment, especially for unfamiliar content and/or for low-attaining examinees (a particular issue for students with reduced computer access, such as women and minorities). In general, higher-attaining students adapt most quickly to any new assessment approach (Watson, 2001) and quickly develop test-taking strategies that benefit from it. Thus in the current investigation, because the students were from the Department of Computer Education, the higher-attaining students probably adapted more quickly and so benefited more from computer-based assessment. Once all students are fully familiar with computers, familiarity should become less important.

Although students were trained before the exam in how to use the online assessment system, some still felt anxious during the exam. To prevent such problems, students must be comfortable with the online assessment system, and the context in which they take the exam should have a welcoming atmosphere.

Using online assessment requires close cooperation between academic and technical units. First, preparing questions for online settings requires extra effort, and questions should measure the intended level of knowledge. Instructors should be trained in how to conduct a course online and how to ask questions via the Internet. Administrative units should support such a teaching-learning environment and should prepare the required infrastructure for the system. Finally, this type of assessment system depends on technological devices: computers, network devices, and so forth. Computers must be powerful enough to run the Web pages, and the server should be stable.
Bugbee (1996) recommends that test developers show that computer-based and paper-based test versions are equivalent and/or provide scaling information that allows the two to be equated. Most instructors, and indeed most instructional designers, do not have the skill, time, or expertise to pilot their examinations extensively. Nevertheless, instructors must invest additional time and effort to design quality test items for use in online testing. With the continued growth of Web-based courses and the availability of inexpensive fingerprint identification devices and other automatic supervisory technologies, computer-based testing will probably increase substantially. The findings of this investigation indicate that computer-based tests, even with identical items, will not necessarily produce measures of student learning equivalent to paper-based tests. Instructors and institutions should therefore spend the time, money, and effort needed to create positive student perceptions of online assessment.
References

Beaton, A., & Zwick, R. (1990). The effect of changes in the National Assessment: Disentangling the NAEP 1985-86 reading anomaly. Princeton, NJ: Educational Testing Service.
Brown, S., & Knight, P. (1995). Assessment in higher education. London: Kogan Page.
Bugbee, A.C. (1996). The equivalence of paper-and-pencil and computer-based testing. Journal of Research on Computing in Education, 28, 282-299.
Bull, J., & McKenna, C. (2001). Blueprint for computer-assisted assessment.
Clariana, R.B. (1997). Considering learning style in computer-assisted learning. British Journal of Educational Technology, 28, 66-68.
Haas, C., & Hayes, R. (1986). What did I just say? Reading problems in writing with the machine. Research in the Teaching of English, 20, 22-35.
Karakaya, Z. (2001). Development and implementation of an on-line exam for a programming language course. Ankara: METU.
Watson, B. (2001). Key factors affecting conceptual gains from CAL. British Journal of Educational Technology, 32(5), 587-593.

M. Yasar Özden is a professor in the Department of Computer Education and Instructional Technologies. He has published several articles in national and international journals. He is actively working on the use of multimedia on the Web, audio/video streaming and broadcasting on the Internet, teacher education through the Web, and Internet-based distance education. He can be reached at [email protected].

Ismail Ertürk received his master of science and doctoral degrees from Sussex University, UK in 1996 and 2000 respectively. His research interests are in ATM, wireless data communications, CAN, IP and QoS, MPLS, VSATs, real-time multimedia applications, and distance education.

Refik Sanli received his master of science degree from the Middle East Technical University, Turkey in 2003. He is currently a doctoral student at Kocaeli University, Turkey. His research interests are in distance education, online assessment, cognitive tools in Web-based education, and information systems management.
Appendix
Student Interview Questions

Which component or area needs to be improved most?
Is the screen and interface design of this online assessment system appropriate and convenient to use?
Is the online assessment system easy to use?
Does the online assessment system have a positive effect on learning progress?
What difficulties did you face while using the online assessment system?
What did you like most while using the online assessment system?
Is there any other issue or area that was not mentioned in the questionnaire but needs to be improved?