An Automatic Marking System for Interactive Exercises on Blind Search Algorithms

Foteini Grivokostopoulou and Ioannis Hatzilygeroudis

University of Patras, School of Engineering, Department of Computer Engineering & Informatics, 26500 Patras, Hellas (Greece)
{grivokwst,ihatz}@ceid.upatras.gr

Abstract. In this paper, we present a web-based automatic marking system that aims to assist the tutor in assessing the performance of students in interactive exercises related to the breadth-first search (BFS) and depth-first search (DFS) algorithms. The system has been tested on a number of exercises for the BFS and DFS search algorithms, and its performance has been compared against that of an expert tutor. The experimental results are quite promising.

Keywords: Web-based e-learning system, automated marking, e-assessment, blind search algorithms.

1 Introduction

Student assessment via tests is an important and complex part of the learning process. Automatic assessment can assist the tutor in evaluating students' work and also enables more regular and prompt feedback [3][5][9]. In an artificial intelligence (AI) course, a fundamental topic is "search algorithms". It is considered necessary for students to get a strong understanding of the way search algorithms work and also of their implementation for solving various problems.

Usually in an AI course, for teaching a search algorithm and evaluating the students' comprehension, the tutor creates and assigns a set of exercises asking the students to provide their hand-made solutions. Afterwards, the tutor has to mark all students' answers, present the correct ones and discuss the common errors. This process is time-consuming for the tutor. So, an automatic marking system, which helps the tutor reduce the time spent on marking and use that time for more creative work, is desirable. Moreover, an automatic marking system allows every student to have his/her test evaluated immediately. In this paper, we present a system that has been developed to support automatic marking of student answers to interactive exercises concerning blind search (i.e. BFS and DFS) algorithms.

2 Related Work

A number of automatic assessment systems have been developed recently to aid in assessing student answers to exercises in various courses. The most common field where automatic assessment is widely used is the assessment of programming exercises [2]. For example, the BOSS system [7] is a web-based tool facilitating the online submission and processing of programming assignments. QuizPACK [4] is a good example of a system that assesses program evaluation skills. Other applications include exercises on algorithms. TRAKLA2 [8] is a system for automatically assessing visual algorithm simulation exercises that provides automatic feedback and grading. In [1], a visualization tool for helping students learn AI search algorithms is presented; however, that system does not support automatic marking of student answers or error feedback. Furthermore, in our previous work, systems that automatically mark exercises on logic have been developed. AutoMark-NLtoFOL [10] is a web-based system that automatically marks student answers to exercises on converting Natural Language (NL) into First Order Logic (FOL). Also, in [6] a system for automatically marking FOL to CF (clause form) conversion exercises is presented. In addition, these systems provide feedback on errors made by students through interactions with them.

3 Automatic Marking

An automatic marking mechanism has been developed that marks a student's answers to an interactive test. Each test consists of a number of BFS and DFS interactive exercises. The student's answer to an interactive exercise is stored as the sequence of the selected nodes (states). For example, the node sequence N1-N2-N3-N4-N5-N6 corresponds to the state transitions (S1-S2), (S2-S3), (S3-S4), (S4-S5), (S5-S6), where Si is the state corresponding to node Ni.

A student's answer is characterized in terms of completeness and accuracy as follows: Complete-Accurate (C-A), Complete-Inaccurate (C-I), Incomplete-Accurate (I-A), Incomplete-Inaccurate (I-I) and Superfluous (S-F). An answer is complete if all nodes and transitions of the correct answer appear in the student's answer; otherwise, it is incomplete. An answer is accurate when all nodes and transitions of the student's answer are correct; otherwise, it is inaccurate. Case S-F captures superfluous answers, i.e. cases where the answer contains more nodes and corresponding transitions than required.

We also consider that the existence of only one error in an answer may be due to inattention. We distinguish the following types of single-error answers: SE1 (a node is missing compared to the correct sequence), SE2 (there is an extra node-state compared to the correct sequence) and SE3 (two consecutive nodes-states have been switched with each other compared to the correct sequence).

Marking is based on the similarity between the student's answer and the correct answer on a 1-100 scale. We consider that a student's answer NSi includes si states (nodes) and ni transitions (actually si = ni + 1), whereas the correct answer NSic includes sic states (nodes) and nic transitions (actually sic = nic + 1). Both NSi and NSic are sequences of nodes (states). We use a simple similarity formula to calculate the similarity simi of NSi to NSic:

$$sim_i = \frac{1}{\max(s_i, s_i^c)} \sum_{j=1}^{\max(s_i, s_i^c)} m_j \qquad (1)$$

where

$$m_j = \begin{cases} 1, & \text{if the nodes at position } j \text{ of } NS_i \text{ and } NS_i^c \text{ are identical} \\ 0, & \text{otherwise} \end{cases} \qquad (2)$$

Then, the mark Mi for answer NSi is

$$M_i = sim_i \times 100 \qquad (3)$$
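To make formulas (1)-(3) concrete, here is a minimal Python sketch; it is ours, not part of the original system, and the function name is illustrative. It marks an answer by position-wise comparison, assuming the two sequences have the same length; the tracing rule for unequal lengths is described next.

```python
def similarity_mark(student, correct):
    """Apply formulas (1)-(3) to two equal-length node sequences:
    m_j = 1 where the j-th nodes agree (formula (2)), sim is the
    fraction of agreements (formula (1)), and the mark scales sim
    to 0-100 (formula (3))."""
    assert len(student) == len(correct)  # unequal lengths: see the tracing rule below
    m = [1 if s == c else 0 for s, c in zip(student, correct)]  # formula (2)
    sim = sum(m) / len(correct)                                 # formula (1)
    return 100 * sim                                            # formula (3)

# A fully correct 6-node answer gets 100; one wrong node out of six gives ~83.3.
print(similarity_mark(list("ABCDEF"), list("ABCDEF")))  # 100.0
print(similarity_mark(list("ABXDEF"), list("ABCDEF")))  # 83.33...
```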


If NSi and NSic do not have the same length, then we trace both sequences from both the start and the end node of each of them, and each time we meet identical nodes at the same position in the two sequences we put mj = 1; otherwise mj = 0. For any missing or extra nodes in NSi, we put mj = 0. Finally, given a test including a number of interactive exercises, the test score is calculated as the average mark of the answers:

$$Test\_Score = \frac{1}{N} \sum_{i=1}^{N} M_i$$

where N is the number of marked answers in the test.



Test_Score is a real number between 0 and 100 and gives the score that a student has achieved in a test on blind search algorithms. The above is the basic mechanism, followed for all answers whose correct solutions have fewer than seven transitions. The algorithm for marking an answer NSi, based on the above ideas, is as follows (a code sketch of this branching follows the example below):

1. If nic ≤ 6, Mi is calculated via formulas (1)-(3).
2. If nic > 6:
   2.1 If NSi is of type I-A, Mi = (ni / nic) × 100.
   2.2 If NSi is of type C-I:
       2.2.1 If NSi is of type SE3, Mi = ((nic − 3) × 100 + 3 × 40) / nic.
       2.2.2 Otherwise, Mi is calculated via formulas (1)-(3).
   2.3 If NSi is of type I-I:
       2.3.1 If NSi is of type SE1, Mi = ((nic − 2) × 100 + 2 × 40) / nic.
       2.3.2 Otherwise, Mi is calculated via formulas (1)-(3).
   2.4 If NSi is of type S-F:
       2.4.1 If NSi is of type SE2, Mi = 100 × 0.8.
       2.4.2 Otherwise, Mi is calculated via formulas (1)-(3).
   2.5 If NSi is of type C-A, Mi = 100.

For example, consider that the correct answer is A-C-B-D-E-G-L-K-N (where A, B, C, etc. represent nodes-states) and the answer of a student is A-C-B-D-E-G-L-N. The mechanism detects the student's answer as I-I and also as SE1. According to the marking algorithm, M1 = ((8 − 2) × 100 + 2 × 40) / 8 = 85.
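The following Python sketch is our reconstruction of the algorithm above, not the paper's implementation: the helper names are ours, and the completeness/accuracy tests are simplified to plain sequence comparisons. It reproduces the worked example, giving 85 for the SE1 answer.

```python
def trace_match(student, correct):
    """Compute the m_j values of formula (2) with the tracing rule:
    walk both sequences from the start and from the end, setting
    m_j = 1 where identical nodes meet; positions covered by missing
    or extra nodes keep m_j = 0."""
    k = max(len(student), len(correct))
    m = [0] * k
    i = 0  # forward pass
    while i < min(len(student), len(correct)) and student[i] == correct[i]:
        m[i] = 1
        i += 1
    j = 0  # backward pass, stopping before it overlaps the forward one
    while (j < min(len(student), len(correct)) - i
           and student[-1 - j] == correct[-1 - j]):
        m[k - 1 - j] = 1
        j += 1
    return m

def formula_mark(student, correct):
    """Formulas (1) and (3): fraction of matched positions, scaled to 0-100."""
    m = trace_match(student, correct)
    return 100 * sum(m) / max(len(student), len(correct))

def single_error(student, correct):
    """Detect the single-error types: SE1 (one node missing),
    SE2 (one extra node), SE3 (two consecutive nodes swapped)."""
    s, c = list(student), list(correct)
    if len(s) == len(c) - 1 and any(c[:i] + c[i + 1:] == s for i in range(len(c))):
        return "SE1"
    if len(s) == len(c) + 1 and any(s[:i] + s[i + 1:] == c for i in range(len(s))):
        return "SE2"
    if len(s) == len(c):
        for i in range(len(c) - 1):
            t = c[:i] + [c[i + 1], c[i]] + c[i + 2:]  # swap positions i, i+1
            if t == s:
                return "SE3"
    return None

def mark(student, correct):
    """The marking algorithm of Sect. 3, with the answer-type tests
    simplified to sequence comparisons."""
    n_c = len(correct) - 1                    # transitions in the correct answer
    if student == correct:                    # type C-A (step 2.5)
        return 100.0
    if n_c <= 6:                              # step 1: formulas (1)-(3) only
        return formula_mark(student, correct)
    if len(student) < len(correct) and correct[:len(student)] == student:
        return 100 * (len(student) - 1) / n_c  # step 2.1, I-A: correct prefix
    se = single_error(student, correct)
    if se == "SE3":                           # step 2.2.1: C-I with a swap
        return ((n_c - 3) * 100 + 3 * 40) / n_c
    if se == "SE1":                           # step 2.3.1: I-I, one node missing
        return ((n_c - 2) * 100 + 2 * 40) / n_c
    if se == "SE2":                           # step 2.4.1: S-F, one extra node
        return 100 * 0.8
    return formula_mark(student, correct)     # remaining C-I / I-I / S-F cases

def test_score(marks):
    """Average mark over a test's answers (0-100)."""
    return sum(marks) / len(marks)

# The paper's worked example: an SE1 answer to an 8-transition exercise.
correct = list("ACBDEGLKN")
print(mark(list("ACBDEGLN"), correct))  # 85.0
```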

4 Evaluation

We conducted an evaluation study of the automatic marking during the Artificial Intelligence course in our department. The participants were 10 undergraduate students enrolled in the course. An assignment on the DFS and BFS algorithms was given to them; more specifically, the students were asked to take a number of tests on the BFS and DFS algorithms and then submit their answers. All students' answers were sent to the automatic marking tool for marking. After that, they were also marked by the tutor. The tests marked by the tutor and the tool were 10, each one containing five exercises, thus giving a total of 50 marked answers. The results indicate a good agreement between the expert's and the system's marking.

Also, at the end of the test, students were asked to complete an online questionnaire about the system. The results show that most of the students gave positive responses. The students in general found their marks to be fair and the feedback provided by the system helpful in understanding their errors. Moreover, 70% of the students agreed that the system assisted them in learning the BFS and DFS algorithms.
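The paper does not name the agreement measure behind "good agreement"; as a purely illustrative sketch (our own, with a hypothetical function name and no study data), two common choices would be the mean absolute difference between the two sets of marks and Pearson's correlation coefficient:

```python
import math

def marking_agreement(system_marks, tutor_marks):
    """Illustrative agreement measures between system and tutor marks:
    mean absolute difference (0 means identical marking) and Pearson's r
    (1 means perfect linear agreement). Not the paper's measure."""
    n = len(system_marks)
    mad = sum(abs(s - t) for s, t in zip(system_marks, tutor_marks)) / n
    mean_s = sum(system_marks) / n
    mean_t = sum(tutor_marks) / n
    cov = sum((s - mean_s) * (t - mean_t)
              for s, t in zip(system_marks, tutor_marks))
    var_s = sum((s - mean_s) ** 2 for s in system_marks)
    var_t = sum((t - mean_t) ** 2 for t in tutor_marks)
    return mad, cov / math.sqrt(var_s * var_t)
```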

5 Conclusions and Future Work

In this paper, we presented a new mechanism for the automatic assessment of students' tests on BFS and DFS exercises in a consistent manner. The automatic marking mechanism marks the student's answers based on the similarity between the student's answer and the correct answer. Evaluation results show good agreement with the expert tutor. However, there are some points in which the system could be improved. First, the system does not take the difficulty levels of the exercises into account when calculating the test score. Also, a more sophisticated similarity measure could be used.

Acknowledgements. This work was supported by the Research Committee of the University of Patras, Greece, Program "KARATHEODORIS", project No. C901.

References

1. Naser, A.: Developing Visualization Tool for Teaching AI Searching Algorithms. ITJ 7(2), 350–355 (2008)
2. Ala-Mutka, K.: A Survey of Automated Assessment Approaches for Programming Assignments. Computer Science Education 15, 83–102 (2005)
3. Barker-Plummer, D., Dale, R., Cox, R., Etchemendy, J.: Automated Assessment in the Internet Classroom. In: Proc. AAAI Fall Symp. on Education Informatics, Arlington, VA (2008)
4. Brusilovsky, P., Sosnovsky, S.: Individualized Exercises for Self-Assessment of Programming Knowledge: An Evaluation of QuizPACK. ACM Journal on Educational Resources in Computing 5(3), 6 (2005)
5. Charman, D., Elmes, A.: Computer Based Assessment: A Guide to Good Practice, vol. 1. University of Plymouth (1998)
6. Grivokostopoulou, F., Perikos, I., Hatzilygeroudis, I.: An Automatic Marking System for FOL to CF Conversions. In: Proc. of the IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE), Hong Kong, pp. H1A-7–H1A-12 (2012)
7. Joy, M., Griffith, N., Boyatt, R.: The BOSS Online Submission and Assessment System. ACM Journal on Educational Resources in Computing 5(3), 2 (2005)
8. Malmi, L., Karavirta, V., Korhonen, A., Nikander, J., Seppala, O., Silvasti, P.: Visual Algorithm Simulation Exercise System with Automatic Assessment: TRAKLA2. Informatics in Education 3(2), 267–288 (2004)
9. Mehta, S.I., Schlecht, N.W.: Computerized Assessment Technique for Large Classes. Journal of Engineering Education 87, 167–172 (1998)
10. Perikos, I., Grivokostopoulou, F., Hatzilygeroudis, I.: Automatic Marking of NL to FOL Conversions. In: Proc. of the 15th IASTED International Conference on Computers and Advanced Technology in Education (CATE), Napoli, Italy, pp. 227–233 (2012)
