Empowering learning objects: an experiment with the Ganesha platform

Patrick Duval, Agathe Merceron, Michel Scholl, Laurent Wargon
Computer Science Department, Engineering School Léonard de Vinci
F-92916 Paris La Défense Cedex, France

Abstract

We present an architecture model that empowers learning objects with the ability to call services and to record extensive data about students' work. This model of 'empowered' learning objects can be used with any e-Learning platform. It provides better support for students' activity, since calling services makes it possible to give immediate feedback on (syntax-based) open-ended exercises. It also provides better support for teachers and tutors, since recording extensive data makes detailed reporting and data mining of students' work possible.

Introduction

One challenge of e-Learning is to offer enough support to students and to teachers/tutors. Support is needed to reactivate the initial motivation of students, since direct human contact is missing; when not enough support is provided, students may drop out. As quoted from (Frizell & Hubscher 2002), "In distance education courses, an attrition rate of 50% is common". Support is needed for teachers to follow students better and to be aware of their activity, again because direct human contact is missing.

There are various ways to cope with this problem, all aiming at enhancing the active participation of students and at providing teachers with tools to be aware of this activity. One approach is the development of collaborative tools, to recreate a community; there is vivid research in this area, see for example the special interest group "Computer Support for Collaborative Learning" of the Kaleidoscope project (Kaleidoscope 2004). Following (Duval et al. 2004), another approach, the one we propose here, is the use of 'empowered' learning objects.

A learning object can be defined as a configurable software component, designed with a specific pedagogical objective, and able to manage some kind of learning activity on an e-Learning platform. Self-evaluation exercises are one class of learning objects. Without loss of generality, we restrict our attention in this paper to this class: learning objects managing exercises with immediate feedback for students.

Exercises with immediate feedback proposed in most e-Learning platforms today have a simple structure: they are either multiple-choice, drag-and-drop, or simple fill-in exercises. For more complex exercises, however, such as software programming exercises where substantial parts of a program are entered by the student, the checking of the answers cannot be handled by such simplistic assessment methods. It is necessary to empower learning objects with the ability to call specific code (a plug-in) to assess the students' answers. These plug-ins may in turn call external services, such as a Java compiler, an SQL server, or any ad hoc web service.

The second line along which learning objects can be empowered concerns teachers/tutors. In current platforms, after answers have been processed, a score table is updated. However, this score table contains only marks, or statistics on successes, failures and so on. It does not contain the students' answers themselves, including their mistakes, and thus the pedagogical information that can be conveyed to teachers is very limited. It is necessary to return more pedagogical information to teachers about the work done by students, and to store in a database not only whether students have solved exercises successfully or not, but also (parts of) the answers they have submitted, along with the feedback messages that plug-ins may have produced. The learning objects that we introduce in the following have this capability.

The contribution of this paper is twofold: (i) we propose an architecture for 'empowered' learning objects, with the capability of calling various plug-ins and external services to check learners' answers automatically; (ii) we propose a data schema for student tracking that allows for detailed reporting to tutors and can be used for further data mining. This architecture has been validated by two prototypes, tested with students of the ESILV engineering school. Both prototypes (a course on introductory programming in Java, and another on the database query language SQL) currently use the Ganesha platform (Ganesha 2004). The exercises are not merely multiple-choice or drag-and-drop exercises: the Java exercises require learners to write real Java code, and the SQL exercises require students to enter real SQL queries. In each case, there is a variety of correct answers for each exercise. Checking whether an answer is correct, and evaluating its quality, requires specific testing that cannot be handled by simple processing rules encapsulated with the exercise item, as suggested by standard proposals like (IMS 2004). Processing responses to a Java exercise requires a configurable plug-in capable of calling the services of a Java compiler and virtual machine; similarly, processing an SQL exercise needs a plug-in capable of calling an external SQL service.

The idea of using compilers and services in e-Learning is not new. Among several examples, (Razek et al. 2003) proposes an agent to search the web for documents related to an ongoing discussion between distant learners. (Foubister et al. 1996) presents automatic assessment of elementary Standard ML programs using Ceilidh. (Truong et al. 2003) describes a web-based environment for learning to program. (Beierle et al. 2003) is a work on automatic analysis of programming assignments. A number of papers report on intelligent SQL tutoring systems (Mitrovic 2003, Radosav et al. 2004, Russell & Cumming 2004). (Suraweera & Mitrovic 2004) describes an intelligent tutoring system for Entity-Relationship modeling. What makes our approach distinctive is the integration of the whole assessment functionality in learning objects with immediate feedback, configurable with plug-ins. There also exist a number of systems that store comprehensive data about students in order to help teachers in their follow-up, see for example (Jean et al. 1999) and (Merceron & Yacef 2004). The originality of our work is to integrate this functionality in a learning platform via learning objects. This approach allows pedagogical data from various disciplines to be accessed, queried and mined by teachers, providing them with a more accurate picture of the students they follow.

The paper is structured as follows. In the next section, we present the enhanced functionalities of learning objects in more detail. The following section describes the architecture model and provides more details on the specification of an exercise for learning Java programming. We continue with the support provided to learners and tutors. Concluding remarks and perspectives are given in the last section.

Enhancing functionality

Fig. 1: Usual functionality. [Use-case diagram: learning objects are self-evaluation exercises containing simple processing rules, or exercises corrected by humans. Learners solve exercises, consult the score table, and receive feedback on exercises with a delay; human scorers correct exercises; tutors consult the score table.]

Fig. 2: Empowered learning objects. [Use-case diagram: exercises are corrected with the help of plug-ins, and answers, mistakes, and times are stored in a database. Learners solve exercises and get immediate feedback, and can consult their history; tutors consult the score table and can query and mine students' answers.]

Managing learning resources, scores and marks is a central functionality offered by e-Learning platforms. Usually this functionality is specified as the use case shown in Figure 1. Learning objects include course material and exercises configured by teachers. Quite often, the exercises include both self-evaluation exercises and other, more complex exercises. Self-evaluation exercises are simple enough to be scored automatically using response-processing rules defined in the item itself: learners receive immediate feedback. More complex exercises are scored by humans. This is usually the case for programming exercises, where there are a number of ways to write correct code; here, students receive feedback only after a delay. Once exercises have been marked, the table of scores is updated manually. In contrast, in the use case shown in Figure 2, complex exercises are managed by enhanced learning objects that use plug-ins to process students' responses: students get immediate feedback, even for complex exercises such as Java or SQL programming. As responses are processed, not only a score table is updated but also a database containing students' answers, processing times, and the messages produced by the plug-ins. This database can be used by teachers to produce reports, or can be mined further to retrieve pedagogically relevant information.

Architecture model

In our model, the layer responsible for managing the student's navigation through the learning materials is called the e-Learning Guided Tour (EGT). This layer provides guidance and constraints to students in their training activity, playing the role of a top-level Controller in Model-View-Controller terminology. Each time a student requests a training exercise or submits a response, the EGT delegates further processing to a learning object, passing it the exercise identification, the student identification, and the submitted data (Figure 3). Once activated by the EGT, a learning object is responsible for (i) providing the learner with the specified exercise, (ii) managing the training process according to the exercise description, providing feedback and scores to the learner, (iii) logging significant training events for subsequent mining by tutors and authors, and (iv) returning an execution status to the EGT, indicating success or failure. It is up to the EGT to decide which exercises or course materials may be relevant to present next to the student.
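In code terms, this contract between the EGT and a learning object amounts to a single entry point. The following is a minimal illustration in Java; all identifiers are our assumptions, not code from the Ganesha prototypes:

// Hypothetical sketch of the EGT / learning-object hand-off described above.
public interface LearningObject {

    enum Status { SUCCESS, FAILURE }   // execution status reported back to the EGT

    /**
     * Present the exercise identified by exerciseId or, when submittedData
     * is non-null, evaluate the student's response, log the attempt, and
     * report success or failure back to the e-Learning Guided Tour.
     */
    Status handle(String exerciseId, String studentId, String submittedData);
}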

Figure 3: Learning Object Model. [Architecture diagram: the students' interface communicates with the e-Learning Guided Tour, which passes the exercise-id and student-id to an enhanced learning object and receives an execution status in return. Through a database interface, the learning object reads the course and exercise descriptions and writes evaluation logs; through a graphical user interface, it presents the exercise and collects the student's response; and it delegates response evaluation to an evaluation plug-in, which may call external services. Mining tools and authoring tools, behind the tutors' and authors' interfaces, access the evaluation logs, evaluation reports, and course and exercise descriptions.]

The exercise-id passed to the learning object allows it to retrieve the exercise description and state from the database. The exercise description (see the outline in Figure 4) includes the text of the question to be answered, the types of entry areas for the student's answer, the name of an evaluation plug-in capable of evaluating the student's response, and the relevant parameters for this evaluation: specifications of expected answers, test vectors for code validation, and score mappings. The learning object logs students' responses and evaluation results in the database. Sophisticated evaluation plug-ins typically invoke external services during their evaluation process (e.g. a Java compilation or execution service for Java training, a database server for SQL training). Evaluation plug-ins conform to simple generic interfaces, enabling incremental add-ons to the platform without knowledge of the core code.

XML specification of an exercise

Part of an XML exercise description is given in Figure 4, in a purposely simplified syntax. The main feature that distinguishes this exercise specification from standard assessment items like those proposed in (IMS 2004) relates to response processing. Our specification does not contain a "right answer" description, but provides descriptions of the test vectors used to check whether the learner's code is correct. Further, the name of an evaluation plug-in is given. Consider the Java exercise given in Figure 4. The exercise statement says that the body of a Java method needs to be completed, and provides an example of the expected result. The "eval" element asks for an evaluation plug-in of type "SimpleJava", and provides two test vectors for its evaluation of the student's code. Actually, nothing prevents a student from giving as a solution the "algorithm" of Figure 5b. This solution produces the right output for the example given, but without implementing a correct algorithm. Therefore a second test vector is necessary for the assessment.

Java exercise evaluation plug-in

The Java exercise plug-in is responsible (i) for compiling the student's code, by invoking an external Java compilation service (see Figure 3), (ii) for building and executing a test suite for this code, based on the test specifications provided in the exercise description (Figure 4), and (iii) for returning an adequate evaluation report and score to the student. This approach applies as well to any discipline where the student has to learn the syntax and expressive power of a language for problem solving, and for which a compiler or interpreter can be invoked in a secured environment. Indeed, we applied the same model to SQL learning (Wargon 2005).

Write a Java method returning the smallest of its two arguments, i.e. returning 3 when called with (5, 3).

int smallest(int x, int y) {
    // fill-in your code here...
}

Test vectors: smallest(5, 3) expects 3; smallest(-1, 2) expects -1.

Figure 4: Exercise description
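The XML markup of Figure 4 is not preserved here, so the following sketch is only indicative: the element and attribute names are our assumptions, chosen to convey the structure the text describes (statement, entry area, plug-in name, test vectors).

<!-- Hypothetical reconstruction; tag names are assumptions, not the actual schema. -->
<exercise id="smallest">
  <statement>Write a Java method returning the smallest of its two
    arguments, i.e. returning 3 when called with (5, 3).</statement>
  <entry type="code">int smallest(int x, int y) { /* fill-in your code here... */ }</entry>
  <eval type="SimpleJava">
    <test call="smallest(5, 3)" expected="3"/>
    <test call="smallest(-1, 2)" expected="-1"/>
  </eval>
</exercise>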

int smallest(int x, int y) {
    if(x < y) x
}

a: compilation errors

int smallest(int x, int y) {
    return 3;
}

b: wrong algorithm

Figure 5: Error types
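The paper does not list the plug-in's code. As a minimal sketch of the compile-test-score cycle just described, the following fragment uses the standard javax.tools compiler API in place of the external compilation service; the class name, method signature, and control flow are our assumptions, while the status strings echo those used later in Table 1.

import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

// Hypothetical sketch of a "SimpleJava"-style evaluation plug-in.
public class SimpleJavaPlugin {

    /** Compile the student's method, run the test vectors from the exercise
        description, and return a status string suitable for logging. */
    public String evaluate(String studentCode, int[][] tests) throws Exception {
        // (i) Wrap the submitted method in a class and compile it.
        Path dir = Files.createTempDirectory("eval");
        Path src = dir.resolve("Student.java");
        Files.writeString(src, "public class Student { " + studentCode + " }");
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        if (javac.run(null, null, null, src.toString()) != 0)
            return "syntax error";                        // Figure 5a

        // (ii) Load the compiled class and execute each test vector
        //      {arg1, arg2, expected}, e.g. {5, 3, 3} and {-1, 2, -1}.
        try (URLClassLoader loader =
                 new URLClassLoader(new URL[] { dir.toUri().toURL() })) {
            Class<?> cls = loader.loadClass("Student");
            Object student = cls.getDeclaredConstructor().newInstance();
            Method m = cls.getDeclaredMethod("smallest", int.class, int.class);
            m.setAccessible(true);
            for (int[] t : tests)
                if (!m.invoke(student, t[0], t[1]).equals(t[2]))
                    return "wrong code";                  // Figure 5b
        }
        return "successfully done";                       // (iii) all tests pass
    }
}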

Supporting students' activity

The call to a plug-in and further services is transparent to the learner. In the case of the Java course, for example, students do not need to learn anything about the Java compiler or virtual machine: mastering the learning platform is enough. Nevertheless, learners receive immediate feedback on their exercises.

Fig. 6: Screenshot while attempting a Java exercise.

Figure 6 shows a screenshot taken while a student attempts an exercise on arrays. The aim of this exercise (its text is not visible in the screenshot) is twofold: to practice array declaration and initialization, and to reinforce parameter passing between functions. In her answer, this student does not practice these skills; instead, she simply prints the result expected by the tester class. The feedback reports a correct compilation but wrong code, with the message "the result is not the expected one". Note also that the exercise status is displayed to students: in the present case, exercise 1 has been successfully completed, and the other exercises have not been attempted yet. Storing complete answers and mistakes in the database makes it possible for learners to look at their entire history. For example, a student can look back at the mistakes made while completing a particular exercise.

Supporting tutors' activity

Comprehensive data is stored to allow tutors to track students' activity while they progress through the course material. Several indicators for both tutors and students are calculated from this data.

Attribute        Description
attempt-id       Attempt identification.
student-id       Student identification.
exercise-id      Exercise identification.
date-begin       Date and time the student begins reading the exercise.
date-end         Date and time the student leaves the exercise.
student-answer   The student's current answer to the exercise.
service-answer   The service's message after processing the student's current answer.
status           Status of the exercise, such as 'syntax error' or 'successfully done'.

Table 1: Schema for recording students' work.
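Rendered as SQL, this schema might read as follows; the prototypes' actual DDL is not shown in the paper, so the table name and column types below are illustrative assumptions.

-- Hypothetical DDL for Table 1; table name and column types are assumed.
CREATE TABLE attempt (
    attempt_id     INTEGER PRIMARY KEY,  -- one row per attempt
    student_id     INTEGER NOT NULL,
    exercise_id    INTEGER NOT NULL,
    date_begin     TIMESTAMP,            -- student starts reading the exercise
    date_end       TIMESTAMP,            -- student leaves the exercise
    student_answer TEXT,                 -- the full submitted answer
    service_answer TEXT,                 -- message produced by the plug-in/service
    status         VARCHAR(32)           -- e.g. 'syntax error', 'successfully done'
);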

Database Schema

Table 1 gives details of what is stored in the database of Figure 3. A student may attempt a single exercise many times, for example until s/he completes it successfully. Each attempt is recorded; therefore the attribute attempt-id uniquely identifies a particular attempt by the student with id student-id on the exercise identified by exercise-id. date-begin is the date and time at which this student began working on the exercise, while date-end is the date and time at which the attempt finishes. The answer provided by the student is given by the attribute student-answer, while the message reported by the service is given by the attribute service-answer. Last, status gives the result of the attempt: for Java exercises, either the student has only read the exercise without attempting it, or some syntax error in the answer has been reported by the plug-in, or the execution of the code was incorrect, or the answer was correct, and so on.

Reporting

This data is used to provide several views of progress to tutors: an overall class view, a student view, and an exercise view. These views can be displayed in the form of tables or graphs.

Fig. 7: Screenshot of the 'while loop' chapter view for student Bob Smith.

The class view shows, for each student, how many exercises were looked at, how many were attempted, how many were successfully completed, and the overall time spent on them. The exercise view has two levels, a global level and a chapter level, and gives figures averaged per student. The global level displays, for each chapter, how many exercises were browsed, how many mistakes were made, how many exercises were successfully completed, and how much time was spent. The chapter level is similar, except that it focuses on the exercises of a particular chapter. Tutors can look for outliers: chapters or exercises where students make many mistakes on average or spend a lot of time. This gives hints about concepts or exercises that are particularly difficult. Like the exercise view, the student view has two levels, a global level and a chapter level. The global level shows, for a particular student, all chapters, the number of exercises in each chapter, the number of trials per chapter, the number of successful exercises per chapter, and the time spent on each chapter. The chapter level is similar, except that it focuses on the exercises of a particular chapter.

Figure 7 shows, in graph form, the view of the chapter 'while loop' for student Bob Smith. This chapter has 7 exercises. For each exercise, the first column shows how many times it has been read, the second column how many mistakes have been made, and the third whether it has been completed successfully. The two thicker columns in the background show, on the left, the total time spent and, on the right, the total time spent until success. Bob read the first exercise three times and made two mistakes on it before getting it right, and it took him less than two minutes. Judging by the number of mistakes, it was not easy for Bob to get exercise 7 correct. If the tutor wishes, she can consult the history to see all answers; in the present case, the tutor could see all the answers and mistakes Bob made before getting exercise 7 correct. Note that time is indicative only: it is not easy to say whether a student is working or daydreaming while in front of the computer.
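The paper does not show how these views are computed. Against the hypothetical table sketched above, the class view could, for instance, be obtained with one aggregate query per student; the 'successfully done' literal follows Table 1, while the other names and the date arithmetic are assumptions and dialect-dependent.

-- Hypothetical class-view query: exercises looked at, attempted,
-- and completed, plus total time, per student.
SELECT student_id,
       COUNT(DISTINCT exercise_id) AS exercises_looked_at,
       COUNT(DISTINCT CASE WHEN student_answer IS NOT NULL
                           THEN exercise_id END) AS exercises_attempted,
       COUNT(DISTINCT CASE WHEN status = 'successfully done'
                           THEN exercise_id END) AS exercises_completed,
       SUM(date_end - date_begin) AS total_time
FROM attempt
GROUP BY student_id;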

Conclusion and perspectives

In this paper, we have shown how to empower learning objects. By calling services, learning objects configurable with plug-ins can offer immediate feedback for open-ended exercises with a formal syntax. We also advocate a full recording of students' answers and evaluation messages; this allows for detailed reporting to tutors and thus a better follow-up of their students' progress. We currently have two prototype courses running in our engineering school following this model: a course on SQL programming, and a course on introductory programming in Java. The course on SQL is an extra resource for students taking a face-to-face course on databases. The course on Java was designed as a distance course for students joining our engineering school in year 3 (most of the time, such students need to catch up in computer science). While attractive for students who already have some experience in programming, it still appears difficult for true beginners and is being refined. It is also used as an extra resource for students following a face-to-face introductory course on Java.

Presently, there is no link between students' work and e-Learning Guided Tours. An interesting piece of future work would be to personalize EGTs, taking students' performance into account. Following (Minaei-Bidgoli et al. 2003), (Merceron & Yacef 2004), and (Merceron et al. 2004), another line of future work is to mine the database. For example, it would be interesting to cluster students according to their results and to set up the tutorial groups for face-to-face teaching on programming accordingly. It would also be interesting to mine for associations between exercises, looking for rules of the form: if students make a 'correct compilation, wrong code' mistake on exercises i and j, then they also make the same mistake on exercise k. Such rules can give teachers hints for improving the progression among exercises. Porting the model to other styles of platforms, like (Moodle 2005) and (Cocoon 2005), is work in progress. Finally, finding a way to connect the model to e-Learning standards like (IMS 2004) would help the sharing of development results and experiences.

References

Beierle, C., Kulas, M., & Widera, M. (2003). Automatic analysis of programming assignments. Proceedings of DeLFI 2003, Köllen Verlag, Bonn.

Cocoon (2005). An XML-centric web-development framework. http://cocoon.apache.org/

Duval, P., Merceron, A., Rinderknecht, C., & Scholl, M. (2004). LeVinQam: A question answering mining platform. Proceedings of the 5th International Conference on Information Technology Based Higher Education and Training (ITHET04), Turkey. IEEE Press. 250-255.

Foubister, S., Michaelson, G., & Tomes, N. (1996). Automatic assessment of elementary Standard ML programs using Ceilidh. Journal of Computer Assisted Learning.

Frizell, S., & Hubscher, R. (2002). Supporting the application of design patterns in web-course design. Proceedings of the World Conference on Educational Multimedia, Hypermedia and Telecommunications, ED-MEDIA.

Ganesha (2004). An open-source e-learning platform. http://www.anemalab.org/

IMS (2004). IMS Global Learning Consortium. Question and Test Interoperability: Item Implementation Guide. www.imsglobal.org/question/qti_item_v2p0pd/implementation.html. October 31.

Jean, S., Delozanne, E., Jacoboni, P., & Grugeon, B. (1999). A diagnosis based on a qualitative model of competence in elementary algebra. Proceedings of the 7th International Conference on Artificial Intelligence in Education, Le Mans, France. IOS Press.

Kaleidoscope (2004). Computer Support for Collaborative Learning. www-kaleidoscope.imag.fr/pub/cscl/index.html. October 31.

Merceron, A., Oliveira, C., Scholl, M., & Ullrich, C. (2004). Mining for content re-use and exchange - solutions and problems. Poster Proceedings of the 3rd International Semantic Web Conference, ISWC2004, Hiroshima, Japan. 39-40.

Merceron, A., & Yacef, K. (2004). Mining student data captured from a web-based tutoring tool: Initial exploration and results. Journal of Interactive Learning Research, Special Issue on Computational Intelligence in Web-Based Education, 15(4), 319-346.

Minaei-Bidgoli, B., Kashy, D.A., Kortemeyer, G., & Punch, W. (2003). Predicting student performance: An application of data mining methods with an educational web-based system. Proceedings of the 33rd Frontiers in Education Conference, Boulder, Colorado, USA. IEEE Press.

Mitrovic, A. (2003). An intelligent SQL tutor on the Web. International Journal of Artificial Intelligence in Education, 13(2-4), 173-197.

Moodle (2005). An open-source e-learning platform. http://download.moodle.org/

Radosav, D., Kazi, Z., & Kazi, L. (2004). SQL e-learning system. http://www.timsoft.ro/ejournal/articleradosav.htm

Razek, M.A., Frasson, C., & Kaltenbach, M. (2003). A context-based information agent for supporting intelligent distance learning environments. Proceedings of the 12th International World Wide Web Conference, Budapest, Hungary. 195-202.

Russell, G., & Cumming, A. (2004). Improving the student learning experience for SQL using automatic marking. http://www.soc.napier.ac.uk/publication/op/getpublication/publicationid/7436710

Suraweera, P., & Mitrovic, A. (2004). An intelligent tutoring system for entity relationship modelling. International Journal of Artificial Intelligence in Education, 14(3-4), 375-417.

Truong, N., Bancroft, P., & Roe, P. (2003). A web based environment for learning to program. ACSC2003, Adelaide, Australia.

Wargon, L. (2005). Objets pédagogiques actifs pour l'apprentissage à distance de l'algorithmique et de la programmation. Mémoire CNAM, Paris, April 2005.
