Automated and Flexible Comparison of Course Sequencing Algorithms in the LS-Lab Framework

Carla Limongelli¹, Filippo Sciarrone³, Marco Temperini², and Giulia Vaste¹

¹ DIA - Department of Computer Science and Automation, Roma Tre University, Via della Vasca Navale 79, 00146 Rome, Italy
{limongel,vaste}@dia.uniroma3.it
² DIS - Department of Computer and System Sciences, Sapienza University of Rome, Via Ariosto 25, 00185 Rome, Italy
[email protected]
³ Open Informatica srl, E-learning Division, Via dei Castelli Romani 12/A, 00040 Pomezia, Italy
[email protected]
Abstract. Curriculum sequencing is one of the most interesting challenges in learning environments, such as Intelligent Tutoring Systems and e-learning. The goal is to automatically produce personalized sequences of didactic materials or activities, on the basis of each individual student's model. In this paper we present an extension of the LS-Lab framework, supporting an automated and flexible comparison of the outputs produced by a variety of curriculum sequencing algorithms over the same student models. The main aim of LS-Lab is to provide researchers and teachers with a ready-to-use and possibly extensible environment, supporting reasonably low-cost experimentation with several sequencing algorithms. The system accepts as input a student model, a selection of the algorithms to be used, and a given set of learning materials; the algorithms are then applied, the resulting courses are shown to the user, and some metrics computed over the selected characteristics are presented for the user's appraisal.

Keywords: adaptive e-learning, learning object sequencing.
1 Introduction
Curriculum sequencing is one of the key components of classic Intelligent Tutoring Systems (ITSs) [4,6]. Each solution to this problem has its own strengths and weaknesses: different teachers could prefer different sequencing approaches, but there is no didactic framework to help them select the right sequencing algorithm. The question "what is the best sequencing algorithm to use in a particular learning environment?" is hard to answer because of the number of variables that could affect the choice. Here we propose a framework for comparing and testing different sequencing algorithms, to reason about
them in a self-contained and homogeneous environment. We extend the approach presented in [3], where the LS-Lab system was sketched. LS-Lab hosts different sequencing algorithms, coming from different adaptive educational environments. Through suitable software interfaces, e.g. parsers, these algorithms run in the same environment, taking as input the same educational material, the same student model, and the same goal for the student's course. The different generated courses are presented to the teacher, together with a set of measures that could suggest and support her evaluation.
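To make the role of these software interfaces concrete, the following is a minimal sketch of a common adapter through which each algorithm could receive the shared inputs. All names and signatures here are illustrative assumptions: the paper does not specify LS-Lab's internal API.

```python
# A hypothetical common adapter for plugging heterogeneous sequencing
# algorithms into one environment; names and signatures are assumptions,
# not LS-Lab's actual API.
from abc import ABC, abstractmethod


class SequencingAlgorithm(ABC):
    """Each wrapped algorithm (e.g. LS-Plan, KBS-Hyperbook, IWT) parses
    the shared inputs into its native format, runs, and returns a
    learning object sequence (LOS) as an ordered list of LO ids."""

    @abstractmethod
    def sequence(self,
                 learning_domain: dict,   # LOs tagged with prerequisites
                                          # and acquired knowledge
                 student_model: set,      # the student's starting knowledge
                 target_knowledge: set) -> list[str]:
        ...
```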
2 The LS-Lab System
The design of LS-Lab and its functional schema have been presented in [3]; here we give a synthetic description of the system. Once an algorithm has been added to the system, the GUI allows the user to perform experiments. An experiment consists of selecting i) one or more of the available algorithms, ii) a learning domain, i.e. a set of learning materials tagged with prerequisites and acquired knowledge, iii) a target knowledge, iv) a student model, and v) the metrics to be used, and then activating the selected algorithms, so as to produce comparable learning sequences for the (modeled) student. The algorithms run on the same input, suitably adapted for each of them. The algorithms presently integrated into LS-Lab are those used in the LS-Plan system [2], in the KBS-Hyperbook system [1], and in the IWT system [5].

Two basic attitudes could be considered for the teacher's assessment of a Learning Object Sequence (LOS). In a subjective comparison attitude, the teacher is left to judge the suitability of the sequence. We concentrate instead on a more objective comparison attitude: we compute the following three metrics, which measure certain characteristics and qualities of a LOS, and offer the results to support the teacher's evaluation.

Overall Effort metric $M_E$: One possible way to measure a LOS is to compute the cognitive effort implied by the LOs in the sequence. We define the effort as a value associated with a LO, which might represent the time expected to study the LO or the complexity of its contents. The metric $M_E$ compares LOSes based on the overall effort required by their respective sets of LOs.

Overall Acquired Knowledge metric $M_{AK}$: This metric compares LOSes by measuring how redundantly a LOS actually covers the gap between the student's starting knowledge of the topics to be learnt and the target knowledge of the course; it is the set of pieces of knowledge acquired by studying the LOs of the LOS. Of course, a "more direct" course is not necessarily "simpler" in terms of $M_E$.

Overall p-effort metric $M_{p\text{-}eff}$: This metric measures the "cognitive distance" between a LO of the sequence and its prerequisites, i.e. "how recently" the prerequisites for studying a given LO have been acquired. The more recently the prerequisites have been acquired, the lower $M_{p\text{-}eff}$.
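The following sketch shows one plausible reading of these three metrics. Since the paper defines them informally rather than as formulas, the LO fields and the averaging in $M_{p\text{-}eff}$ are assumptions made for illustration.

```python
# A minimal sketch of the three LOS metrics, under these assumptions
# (the paper gives informal definitions only): effort is an integer per
# LO; M_AK is the set of knowledge items covered by the sequence; and
# M_p-eff averages, over every (LO, prerequisite) pair, how many
# positions back in the sequence the prerequisite was acquired.
from dataclasses import dataclass


@dataclass(frozen=True)
class LO:
    lo_id: str
    effort: int                        # expected study time / content complexity
    acquired: frozenset = frozenset()  # knowledge items provided by the LO
    prereqs: frozenset = frozenset()   # knowledge items required beforehand


def m_effort(los: list[LO]) -> int:
    """M_E: sum of the efforts of the LOs in the sequence."""
    return sum(lo.effort for lo in los)


def m_acquired_knowledge(los: list[LO]) -> frozenset:
    """M_AK: all knowledge items acquired by studying the LOs of the LOS."""
    return frozenset().union(*(lo.acquired for lo in los))


def m_p_effort(los: list[LO]) -> float:
    """M_p-eff: average 'cognitive distance' of prerequisites.

    Prerequisites already in the student's starting knowledge are not
    provided by any earlier LO and are simply skipped here."""
    distances = []
    for pos, lo in enumerate(los):
        for item in lo.prereqs:
            # distance to the most recent earlier LO providing the item
            for back, earlier in enumerate(reversed(los[:pos]), start=1):
                if item in earlier.acquired:
                    distances.append(back)
                    break
    return sum(distances) / len(distances) if distances else 0.0
```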
3 A First Experiment
We used the system to compare the LOSes produced by the three available algorithms for a given student model in the Recursion domain; the resulting sequences, with their metric values, are shown below.

KBS ($M_E = 13$, $M_{p\text{-}eff} = 2.25$): id1: Unit description → id2: Recursive programs → id3: Rec.Funct. intro → id5: Rec.Funct. StrgReverse → id6: Rec.Funct. examples → id9: Rec. r/t stack examples → id10: Recursion exercises

LS-Plan ($M_E = 16$, $M_{p\text{-}eff} = 3.00$): id1: Unit description → id2: Recursive programs → id4: Rec.Funct. intro → id5: Rec.Funct. StrgReverse → id6: Rec.Funct. examples → id9: Rec. r/t stack examples → id14: Recursive list → id10: Recursion exercises

IWT ($M_E = 13$, $M_{p\text{-}eff} = 1.75$): id1: Unit description → id2: Recursive programs → id9: Rec. r/t stack examples → id3: Rec.Funct. intro → id5: Rec.Funct. StrgReverse → id6: Rec.Funct. examples → id10: Recursion exercises
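To make the $M_E$ values concrete, the following snippet recomputes them from the sequences above. The per-LO effort values are purely hypothetical, chosen only so that the sums reproduce the reported totals; the paper does not list the actual efforts of the Recursion-domain LOs.

```python
# Hypothetical per-LO efforts (NOT from the paper), chosen only to be
# consistent with the reported totals M_E = 13, 16, 13.
effort = {"id1": 1, "id2": 2, "id3": 2, "id4": 2, "id5": 2,
          "id6": 2, "id9": 2, "id10": 2, "id14": 3}

kbs     = ["id1", "id2", "id3", "id5", "id6", "id9", "id10"]
ls_plan = ["id1", "id2", "id4", "id5", "id6", "id9", "id14", "id10"]
iwt     = ["id1", "id2", "id9", "id3", "id5", "id6", "id10"]

for name, los in [("KBS", kbs), ("LS-Plan", ls_plan), ("IWT", iwt)]:
    print(f"{name}: M_E = {sum(effort[lo] for lo in los)}")
# KBS: M_E = 13, LS-Plan: M_E = 16, IWT: M_E = 13
```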
Note that LS-Plan includes one additional LO (id14) and, consequently, a higher overall effort; apart from this, the three sequences present the same LOs, proposed in different orders, and all the learning paths are logically consistent with the prerequisite relations of the learning domain. LOs id3 and id4 are alternatives, i.e. they have the same prerequisites and acquired knowledge but address different learning styles; IWT and LS-Plan use different methodologies for selecting among alternative LOs and consequently choose id3 and id4, respectively. On these bases the teacher has some elements for judging and comparing the behavior of the algorithms.
References

1. Henze, N., Nejdl, W.: Adaptation in open corpus hypermedia. International Journal of Artificial Intelligence in Education 12(4), 325–350 (2001)
2. Limongelli, C., Sciarrone, F., Temperini, M., Vaste, G.: Adaptive learning with the LS-Plan system: a field evaluation. IEEE Transactions on Learning Technologies 2(3), 203–215 (2009)
3. Limongelli, C., Sciarrone, F., Vaste, G.: LS-Lab: a framework for comparing curriculum sequencing algorithms. In: Proc. of the 9th Int. Conf. on Intelligent Systems Design and Applications, ISDA 2009 (2009)
4. McArthur, D., Stasz, C., Hotta, J., Peter, O., Burdorf, C.: Skill-oriented task sequencing in an intelligent tutor for basic algebra. Instructional Science 17(4), 281–307 (1988)
5. Sangineto, E., Capuano, N., Gaeta, M., Micarelli, A.: Adaptive course generation through learning styles representation. Universal Access in the Information Society 7(1), 1–23 (2008)
6. Stern, M.K., Woolf, B.P.: Curriculum sequencing in a web-based tutor. In: Goettl, B.P., Halff, H.M., Redfield, C.L., Shute, V.J. (eds.) ITS 1998. LNCS, vol. 1452, pp. 584–593. Springer, Heidelberg (1998)