In Anthony Jameson, Cécile Paris, and Carlo Tasso (Eds.), User Modeling: Proceedings of the Sixth International Conference, UM97. Vienna, New York: Springer Wien New York. © CISM, 1997. Available on−line from http://um.org.
User Modeling and Adaptive Navigation Support in WWW-Based Tutoring Systems Gerhard Weber and Marcus Specht★ Department of Psychology, University of Trier, Germany
Abstract. Most learning systems and electronic textbooks accessible via the WWW so far lack the capabilities of individualized help and adapted learning support that are the emergent features of on-site intelligent tutoring systems. This paper discusses the problems of developing interactive and adaptive learning systems on the WWW. We introduce ELM-ART II, an intelligent interactive textbook to support learning programming in LISP. ELM-ART II demonstrates how interactivity and adaptivity can be implemented in WWW-based tutoring systems. The knowledge-based component of the system uses a combination of an overlay model and an episodic user model. It also supports adaptive navigation as well as individualized diagnosis of and help on problem solving tasks. Adaptive navigation support is achieved by annotating links. Additionally, the system selects the next best step in the curriculum on demand. Results of an empirical study show different effects of these techniques on different types of users during the first lessons of the programming course.
1 Introduction Originally, the WWW was used to retrieve information from all over the world. Very soon, however, it became clear that the WWW would be able to allow for extended interactivity. With the increased utilization of the interactive features of the WWW, many learning systems emerged that introduce users to various domains. The number of learning courses is exploding, and one can see many interesting features that emerge with the improved capabilities of new WWW browsers. Up to now, however, most of these systems have been in an experimental stage. They provide only limited support to users who are not familiar with the new domain. And there are only a few systems that adapt to a particular user as on-site tutoring systems do. In this paper, we first discuss why student modeling is necessary in an individualized WWW-based tutoring system and what the goals of student modeling are. Then we introduce ELM-ART II, an adaptive, knowledge-based tutoring system on the WWW that supports learning programming in LISP. We show how the goals of individual student modeling are accomplished in this system and, finally, we report on the first results of an empirical study of different types of adaptive navigation support.
★ This work is supported by a grant from the “Stiftung Rheinland-Pfalz für Innovation” to the first author.
2 Goals of Student Modeling in WWW-Based Tutoring Systems The two main features of intelligent tutoring systems (ITS) are curriculum sequencing and interactive problem solving support. These features differentiate intelligent learning systems from traditional computer-assisted instruction in that they incorporate intelligent techniques that skilled human teachers use in teaching classes or in coaching individual learners. Most intelligent learning systems are used in the classroom and, therefore, do not necessarily need to include all these intelligent features. Many systems concentrate on diagnosing solutions to exercises only (e.g., Johnson, 1986; Soloway et al., 1983; Vanneste, 1994) or support all stages of problem solving during work on exercises and problem solving tasks (e.g., Anderson et al., 1995; Weber and Möllenberg, 1995). WWW-based learning systems, however, can be used outside the classroom. In such a distance learning situation, no teacher is directly available who can help during learning and who can adapt the number and nature of new concepts presented to the learner’s current knowledge state. Therefore, the learning system has to play the role of the teacher as far as possible. The system has to build up an individual user model for every user to be able to adapt the curriculum to the user, to help him or her navigate through the course, and to support working on exercises and problem solving individually. 2.1 Curriculum Sequencing and Adaptive Guidance Curriculum sequencing describes the order in which new knowledge units and skills to be learned and corresponding teaching operations (e.g., presenting examples and demonstrations, asking questions, providing exercises and tests, solving problems) are presented to a particular learner. In textbooks, the traditional learning medium, the curriculum is predefined by the author of the textbook. The same holds for most texts delivered via the WWW.
The curriculum is predefined by the author of the text or by the developer of the system. That is, authors provide an optimal learning path for an assumed average learner. This is a well-established strategy in the writing of textbooks. In the case of electronic textbooks, however, the situation is totally different. Electronic textbooks are usually presented in the form of a hypertext that allows for random surfing through the text space. In order not to get lost in this hyperspace, some guidance by the system may be helpful. WWW browsers only annotate visited links but cannot give any hint as to which pages would be suitable to visit next. Another situation arises when the learner is not a complete beginner in the new domain to be learned but already possesses some (possibly incomplete and incorrect) knowledge of the topics to be learned. In this case, it is a waste of time for the learner to read all of the pages of the canonical curriculum and to work on corresponding problems and tests that he or she is already familiar with. In both situations, information contained in an individual user model can be used by the learning system to adapt the presentation of pages to the particular user. A simple type of user model such as an overlay model (Carr and Goldstein, 1977) may be well suited to represent all the necessary knowledge for individualized curriculum sequencing and adaptive guidance in the hypertext. In its simplest form, the user model contains information on whether an item of the knowledge base is learned, not completely learned, or of unknown status. Examples of systems using such knowledge are BIP (Barr et al., 1976), ITEM-IP (Brusilovsky, 1992), and HyperTutor (Pérez et al., 1995a). A more elaborate user model can differentiate between more detailed knowledge states. It is important to distinguish between pages describing new knowledge that were only visited and
pages where learners successfully performed some tests or problem solving tasks. Additionally, depending on results from tests and exercises, the system can infer whether some prerequisite knowledge must already be known to the learner even though he or she has not worked on these knowledge items before. In WWW-based learning systems, maintaining an individual user model and observing and diagnosing the learner’s knowledge state is much more complicated. Only a few systems use at least rudimentary types of individualized curriculum sequencing and adaptive hypertext guidance (e.g., Brusilovsky et al., 1996b; Kay and Kummerfeld, 1994; Lin et al., 1996; Schwarz et al., 1996). More advanced techniques of knowledge-based navigation support are described in KBNS (Eklund, in press) and in HyperTutor (Pérez et al., 1995b). In this paper, we describe an alternative approach that is used in ELM-ART II. 2.2 Individual Help and Problem Solving Support Knowledge-based learning systems support learners while they are working on exercises and during problem solving. Two main techniques are used. On the one hand, many systems provide intelligent diagnosis of complete solutions to exercises and problem solving tasks. In the domain of learning programming, several well-known systems offer this type of problem solving support, e.g., MENO II (Soloway et al., 1983), PROUST (Johnson, 1986), CAMUS II (Vanneste et al., 1993), and ELM-PE (Weber and Möllenberg, 1995). On the other hand, systems based on the model tracing approach (Anderson et al., 1995) provide continuous interactive problem solving support while the learner works on exercises. The most advanced current systems are the model tracing tutors based on the ACT theory (Anderson, 1993) and the programming tutors ELM-PE and ELM-ART based on the ELM model (Weber, 1996).
In the ACT-based model tracing tutors, the system observes the learner while he or she is solving a problem and gives advice when the current solution path will result in an error. This type of tutoring is well suited to on-site tutoring systems. Up to now, however, such direct observation of single problem solving steps cannot be performed on-line in WWW-based tutoring systems, because the delay caused by the communication with the server would be too long. In the future, this problem may be solved by creating intelligent on-site agents based on Java applets. The ELM-based systems follow an episodic learner modeling approach. Episodic learner modeling is well suited for diagnosing complete and incomplete solutions to problems and for giving individualized help. Moreover, the examples that best fit the current learning situation can be chosen on the basis of the individual episodic learner model (Burow and Weber, 1996). The diagnosis in ELM-PE does not follow the problem solving process directly but is performed only on demand. Therefore, this approach meets the needs of the client-server communication employed in WWW-based learning systems.
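The overlay bookkeeping described in Section 2.1 can be sketched in a few lines of code. The following Python fragment is purely illustrative: the class, state names, and example concepts are our own inventions and do not reflect the actual (LISP-based) implementation of any of the systems discussed.

```python
from enum import Enum

class State(Enum):
    UNKNOWN = 0   # no evidence about this concept yet
    VISITED = 1   # page was seen, but no test was passed
    KNOWN = 2     # demonstrated by a solved test or problem
    INFERRED = 3  # assumed known because a dependent unit was solved

class OverlayModel:
    """Minimal overlay user model: one knowledge state per concept."""
    def __init__(self, concepts):
        self.state = {c: State.UNKNOWN for c in concepts}

    def visit(self, concept):
        # Visiting never downgrades a concept that is already known or inferred.
        if self.state[concept] is State.UNKNOWN:
            self.state[concept] = State.VISITED

    def solve(self, concept):
        self.state[concept] = State.KNOWN

model = OverlayModel(["atoms", "lists", "car-cdr"])
model.visit("atoms")   # page read: status becomes VISITED
model.solve("lists")   # test passed: status becomes KNOWN
```

The distinction between VISITED and KNOWN is exactly the "more elaborate" refinement discussed above: merely reading a page is weaker evidence than passing a test on it.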
3 ELM-ART II 3.1 The History of Developing ELM-ART II The WWW-based introductory LISP course ELM-ART (ELM Adaptive Remote Tutor) is based on ELM-PE (Weber and Möllenberg, 1995), an on-site intelligent learning environment that supports example-based programming, intelligent analysis of problem solutions, and advanced testing and debugging facilities. The intelligent features of ELM-PE are based on the ELM model
(Weber, 1996). For several years, ELM-PE was used in introductory LISP courses at the University of Trier. The course materials were presented to students in regular classes (complemented by printed materials) as well as to single students working on their own with the printed materials only. Students used ELM-PE to practice the new knowledge by working on exercises. In this way, they were able to acquire the necessary programming skills. ELM-PE was limited by the platform-dependent implementation of its user interface and by the large size of the application. Both limitations hindered a wider distribution and usage of the system. So, we decided to build a WWW-based version of ELM-PE that can be used both on intranets and on the Internet. The first step was to translate the texts of the printed materials into WWW-readable form (HTML files), dividing them into small subsections and text pages that are associated with concepts to be learned. These concepts were related to each other by describing each concept’s prerequisites and outcomes, building up a conceptual network. All interactions of the learner with ELM-ART were recorded in an individual learner model. For each page visited, the corresponding unit of the conceptual network was marked accordingly. When text pages were presented in the WWW browser, links shown on section and subsection pages and in the overview were annotated following a simple traffic lights metaphor, based on information from the individual learner model (Schwarz et al., 1996). A red ball in front of a link indicated that the corresponding section or text page was not ready to be learned because necessary prerequisites were not met. A green ball indicated that this page or section was ready and recommended to be learned, and a yellow ball indicated that this link was ready to be visited but not especially recommended by the system.
ELM-ART enabled direct interactivity by providing live examples and intelligent diagnoses of problem solutions. All examples of function calls could be evaluated. When the learner clicked on such a live-example link, the evaluation of the function call was shown in an evaluator window similar to a listener in ordinary LISP environments. Users could type solutions to a programming problem into an editable window and then send them to the server. Evaluation and diagnosis of problem solutions were performed in the same way as in ELM-PE. Therefore, the same feedback messages that had proven useful in ELM-PE could be sent back to the learner. The approach of converting printed textbooks to electronic textbooks used in ELM-ART has been developed further in INTERBOOK (Brusilovsky et al., 1996b), an authoring tool for creating electronic textbooks with adaptive annotation of links. However, from our first experiences with ELM-ART we learned that printed textbooks cannot be transformed into the hypertext pages of electronic textbooks in a one-to-one manner. Textbooks are usually written in sequential order, so single pages cannot be read easily when they are accessed from an arbitrary page within the course. Additionally, the simple adaptive annotation technique used in ELM-ART had to be improved. Users should get more information on the state of the different concepts that they had already visited and learned or still had to learn. And, perhaps most importantly, inferring the knowledge state of a particular user from only visiting (and possibly reading) a new page is not appropriate (as correctly pointed out by Eklund, in press). These objections and shortcomings were the motivation for building ELM-ART II, the new version of ELM-ART that we describe in the following sections.
3.2 The Electronic Textbook and Adaptive Guidance Knowledge representation. ELM-ART II represents knowledge about the units to be learned within the electronic textbook in terms of a conceptual network (Brusilovsky et al., 1996a). Units are organized hierarchically into lessons, sections, subsections, and terminal pages. Terminal pages can introduce new concepts or offer problems to be solved. Each unit is an object containing slots for the text to be presented with the corresponding page and for information that can be used to relate units and concepts to each other. Static slots store information on prerequisite concepts, related concepts, and outcomes of the unit (the concepts that the system assumes to be known once the user has worked on that unit successfully). Units for terminal pages have a tests slot that may contain the description of a group of test items the learner has to perform. When test items have been solved successfully, the system can infer that the user possesses knowledge about the concepts explained in this unit. Problem pages have a slot for storing a description of a programming problem. Dynamic slots are stored with the individual learner model that is built up for each user. This user model is updated automatically during each interaction with ELM-ART. For each page visited during the course, the corresponding unit is marked as visited in the user model. Moreover, when the test items in a testgroup or a programming problem are solved correctly, the outcome concepts of this unit are marked as known and an inference process is started. In the inference process, all concepts that are prerequisites to this unit (and recursively all prerequisites to these units) are marked as inferred. Information from the dynamic slots in the user model is used to annotate links individually and to guide the learner optimally through the course. Testgroups. Testgroups are collections of test items that are associated with page units.
Single test items may belong to different testgroups. In ELM-ART II, four types of test items are supported: yes-no test items, forced-choice test items, multiple-choice test items, and free-form test items. In yes-no test items, users simply answer a yes-no question by clicking the “yes” or the “no” button. In forced-choice test items, users answer a question by selecting one of the alternative answers, and in multiple-choice test items, users answer a question by selecting all correct answers provided by the system. In free-form test items, users can freely type an answer to the question into a form. Each testgroup has parameters that determine how many single test items are presented to the learner. The group-length parameter determines how many test items are presented on a single page. The min-problems-solved parameter defines the minimal number of test items that have to be solved correctly within the testgroup. The max-errors parameter determines the maximum number of errors allowed in the test items presented on a single page. These parameters can be set for each testgroup. Test items from a testgroup are presented as long as not enough test items have been answered correctly. A fixed number of test items are presented simultaneously on one page. The system gives feedback on the number of errors in the test items presented on the last page and presents all erroneous test items with both the user’s answers and the correct answers. Additionally, an explanation is given as to why the answer provided by the system is the correct one. These explanations are stored separately with each test item. Correctly solved test items from the current testgroup can be accessed via an icon on that page. They are displayed in a new window showing the correct answers as well as the reason why each answer is correct.
Users are called on to solve more test items as long as not enough test items have been solved correctly and not too many incorrect answers have been submitted with the last test items. In the individual user model, all test items that are solved correctly for a particular testgroup are stored in a dynamic slot. When enough test items are solved correctly without too many errors, the outcome concepts of the corresponding unit are marked as solved and the inference process is started. In the current version of ELM-ART II, the values of the min-problems-solved parameter vary between 4 and 10, depending on the difficulty of the tests, and, in most cases, the max-errors parameter is set to 1. That is, after solving at least min-problems-solved test items correctly, one error is allowed in the next group of test items shown on a page. In the LISP course, tests play a twofold role. On the one hand, tests are used to check whether the user possesses the correct declarative knowledge. This is especially useful at the beginning of the course, when a lot of new concepts (data types and function definitions) are introduced. On the other hand, tests can be used in evaluation tasks to check whether users are able to evaluate LISP expressions correctly. The skills used in evaluation are the inverse of the skills used in generating function calls and function definitions. Evaluation skills are needed to decide whether programs work correctly and to find errors in programming code. Program creation skills are practiced in special tasks. They are supported by the episodic learner model approach described in Section 3.3.
Visual adaptive annotation of links. ELM-ART II uses an extension of the traffic lights metaphor to annotate links visually (see Figure 1). At the top of each terminal page (below the navigation button line), all links belonging to the same subsection are shown, annotated according to their current state. Green, red, yellow, and orange balls are used to annotate the links (additionally, the texts of the links are outlined in different styles to aid color-blind users). A green ball means that this page is ready and suggested to be visited, and the concepts taught on this page are ready to be learned. That is, all prerequisites of this concept have been learned already or are inferred to be known. A red ball means that this page is not ready to be visited. In this case, at least one of the prerequisite concepts is not known to the learner (that is, the system cannot infer from successfully solved tests and programming problems that the user will possess the required knowledge). However, the user is allowed to visit this page, and if he or she solves the corresponding test or programming problem correctly, the system infers backwards that all the necessary prerequisites are known. This is a very strong assumption in diagnosing the user’s knowledge state and will be replaced by fuzzy or probabilistic models in the future. A yellow ball has different meanings depending on the type of page the link points to. In the case of a terminal page with a test or a problem page, the yellow ball means that the test or the problem has been solved correctly. In the case of any other terminal page, the yellow ball indicates that this page has already been visited. In the case of a lesson, section, or subsection link, the yellow ball means that all subordinate pages have been learned or visited. An orange ball has different meanings, too.
In the case of a terminal page, an orange ball means that the system infers from other successfully learned pages that the content of this page will be known to the learner (as described above). In the case of a lesson, section, or subsection link, an orange ball means that this page has already been visited but not all subordinate pages have been visited or worked on successfully.
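The annotation rules above can be condensed into a small decision procedure. The sketch below simplifies heavily (terminal pages only, a toy stand-in for the user model), and all class and function names are our own assumptions, not ELM-ART II's actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class Page:
    name: str
    prerequisites: list = field(default_factory=list)

class ToyModel:
    """Toy stand-in for the user-model queries; the real system would
    read the dynamic slots of the conceptual network instead."""
    def __init__(self, visited=(), solved=(), inferred=()):
        self._visited = set(visited)
        self._solved = set(solved)
        self._inferred = set(inferred)
    def visited(self, page):  return page.name in self._visited
    def solved(self, page):   return page.name in self._solved
    def inferred(self, page): return page.name in self._inferred
    def known(self, concept): return concept in self._solved | self._inferred

def annotate(page, model):
    """Traffic-light color for a terminal page, following the rules above."""
    if model.solved(page) or model.visited(page):
        return "yellow"   # test solved, or page already visited
    if model.inferred(page):
        return "orange"   # content inferred to be known already
    if all(model.known(p) for p in page.prerequisites):
        return "green"    # ready and suggested to be learned
    return "red"          # at least one prerequisite is missing

m = ToyModel(solved=["lists"], inferred=["atoms"])
color = annotate(Page("car-cdr", ["lists", "atoms"]), m)  # both prerequisites known
```

The ordering of the checks matters: a page the learner has already mastered stays yellow even if, say, a prerequisite were later marked unknown.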
Figure 1. Example of a text page with free-form test items and adaptive annotation of links.
In browsers supporting JavaScript, the meaning of a link’s state is explained in the status line at the bottom of the window when the cursor is located over the link (Figure 1). Individual Curriculum Sequencing. While adaptive annotation of links is a powerful technique for aiding users in navigating through the pages of the course, some users may still be unsure about the best next step to continue with the course. This may happen when the learner
moves around in the hyperspace and loses orientation, or when the learner wants to follow an optimal path through the curriculum in order to learn as fast and as completely as possible. To meet these needs, a NEXT button in the navigation bar of the text pages allows the user to ask the system for the best next step, depending on the current knowledge state of the particular user. The algorithm for selecting the best next step works as follows: Starting from the current page, the system searches forward for the next page annotated as suggested to be visited. This may be the same page as before if it is a terminal page with a testgroup or a problem not yet solved correctly. When no further page with all prerequisites fulfilled can be found, all pages from the beginning of the course are checked for whether they have not yet been visited or solved, and the first such page is suggested. The learner has completed the course successfully when no best next page can be found at all. 3.3 On-Line Help and Episodic User Modeling ELM-ART II supports example-based programming. That is, it encourages students to re-use the code of previously analyzed examples when solving a new problem. The hypermedia form of the course and, especially, similarity links between examples help the learner to find relevant examples from his or her previous experience. As an important feature, ELM-ART II can predict the student’s way of solving a particular problem and find the most relevant example from the individual learning history. This kind of problem solving support is very important for students who have difficulty finding relevant examples. In answer to a help request, ELM-ART II selects the most helpful examples, sorts them according to their relevance, and presents them to the student as an ordered list of hypertext links.
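The best-next-step search behind the NEXT button, described in Section 3.2 above, amounts to a forward scan with a wrap-around fallback. A hedged Python sketch (the function name, page list, and predicates are our own invention):

```python
def next_best_page(pages, current_index, is_suggested, is_done):
    """Sketch of the NEXT-button search: scan forward from the current page
    (which may itself qualify, e.g. an unsolved testgroup) for a page that
    is suggested to be visited; failing that, fall back to the first page
    in the course not yet visited or solved. None means the course is done."""
    for page in pages[current_index:]:
        if is_suggested(page):
            return page
    for page in pages:
        if not is_done(page):
            return page
    return None

pages = ["intro", "atoms", "lists", "car-cdr"]
done = {"intro", "atoms"}
suggested = {"lists"}
step = next_best_page(pages, 2, lambda p: p in suggested, lambda p: p in done)
```

Because the search is stateless with respect to navigation history, the learner can wander freely through the hyperspace and still get a sensible recommendation at any point.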
The most relevant example is always presented first, but, if the student is not happy with this example for some reason, he or she can try the second and subsequent suggested examples. The implementation of this feature was adopted directly from the most recent version of ELM-PE (Burow and Weber, 1996). If the student fails to complete the solution to the problem, or cannot find an error that was reported when evaluating the code in the evaluator window, he or she can ask the system to diagnose the code of the solution in its current state. The system gives feedback by providing a sequence of help messages with increasingly detailed explanations of an error or suboptimal solution. The sequence starts with a very vague hint at what is wrong and ends with a code-level suggestion of how to correct the error or complete the solution. In many cases, the student understands from the very first messages where the error is or what the next step can be and does not need any further explanations. The solution can be corrected or completed, checked again, and so forth. The student can use this kind of help as many times as required to solve the problem correctly. In this context, the option to provide the code-level suggestion is a very important feature of ELM-ART II as a distance learning system. It ensures that all students will ultimately solve the problem without the assistance of a human teacher. Both the individual presentation of example programs and the diagnosis of program code are based on the episodic learner model (ELM, Weber, 1996). ELM is a type of user or learner model that stores knowledge about the user (learner) in terms of a collection of episodes. In the sense of case-based learning, such episodes can be viewed as cases (Kolodner, 1993). To construct the learner model, the code produced by a learner is analyzed in terms of the domain knowledge on the one hand and a task description on the other hand.
This cognitive diagnosis results in a derivation tree of concepts and rules the learner might have used to solve the problem. These concepts
and rules are instantiations of units from the knowledge base. The episodic learner model is made up of these instantiations. In ELM, only examples from the course materials are pre-analyzed, and the resulting explanation structures are stored in the individual case-based learner model. Elements from the explanation structures are stored with respect to their corresponding concepts from the domain knowledge base, so cases are distributed in terms of instances of concepts. These individual cases (or parts of them) can be used for two different purposes. On the one hand, episodic instances can be used during further analyses as shortcuts if the actual code and plan match corresponding patterns in episodic instances. On the other hand, cases can be used by the analogical component to bring up similar examples and problems for reminding purposes.
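The reminding side of the episodic model can be illustrated by a toy retrieval step: episodes are indexed by the concepts they instantiate, and a help request ranks the learner's own examples by conceptual overlap with the current task. The overlap count below is merely a stand-in for ELM's actual case-based similarity measure, and all names are hypothetical.

```python
# Episodes (previously analyzed solutions) indexed by the domain concepts
# they instantiate -- a toy stand-in for the distributed case storage.
episodes = {
    "sum-list":    {"recursion", "car", "cdr", "numbers"},
    "find-member": {"recursion", "car", "cdr", "equal"},
    "square":      {"defun", "arithmetic"},
}

def remind(task_concepts, episodes):
    """Rank the learner's own examples by conceptual overlap with the
    current task; the most relevant example comes first."""
    scored = sorted(((len(task_concepts & used), name)
                     for name, used in episodes.items()), reverse=True)
    return [name for score, name in scored if score > 0]

suggestions = remind({"recursion", "cdr", "null"}, episodes)
```

Because only the learner's own previously analyzed episodes are indexed, the ranked list reflects the individual learning history, not a global example library.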
4 An Experimental Study on Annotation and Curriculum Sequencing With the introduction of the new version ELM-ART II, we started an accompanying empirical study to look at the effects of combining the new adaptive annotation technique used in ELM-ART II with the guidance offered by the NEXT button. Thus, two treatments with two levels each were investigated simultaneously. The first treatment contrasts the adaptive annotation of links as described above with simply annotating links as visited (yellow ball) and not visited (orange ball). This second type of annotation, used in the control group, is comparable to the usual annotation performed by WWW browsers, which mark links that have already been visited and are cached. The second treatment contrasts a version providing a NEXT (best next step) button with a version without this navigation button. Each user starting to work with ELM-ART II is assigned randomly to one of the four treatment conditions. In an introductory questionnaire, users are asked whether they are familiar with programming languages and whether they already know LISP. In this experimental study, only data from subjects who do not know LISP are used. Subjects come from an introductory LISP course held at the Psychology Department at the University of Trier. Additionally, users from all over the world can log in to the LISP course. A first hypothesis postulates that both the visual adaptive annotation of links and individual curriculum sequencing with the NEXT button will motivate users to proceed with learning. Many people from outside our university visit ELM-ART II to see how such an introductory interactive programming course works via the WWW. So, a good measure of the motivating effect of link annotation and individual curriculum sequencing is the number of pages with tests and problems that users solved correctly before they stopped working with ELM-ART II.
This first investigation considers 33 subjects without any experience in LISP who worked with ELM-ART II and visited more than the first five pages (that is, they took more than a first glance at the course) but did not finish the third lesson. Fourteen of these 33 subjects had no previous experience in any programming language at all, whereas 19 subjects were familiar with at least one other programming language. Results are shown in Table 1. Table 1A shows a significant effect of individual curriculum sequencing via the NEXT button on subjects who had no previous experience in any programming language. Subjects who could use such a button worked on about 10 pages more than subjects without it (MS = 401.5, F(1,10) = 5.71, p < .05). There was no effect of link annotation on how long complete beginners tried to learn with ELM-ART II. Unlike the programming beginners, subjects who were
Table 1. Mean number of pages with tests and problems that users (who did not finish the third lesson) solved correctly until they stopped working with ELM-ART II. A) Users with no previous programming language. B) Users with at least one previous programming language.
A) Users with no previous programming language

                    Adaptive Annotation
                    With             Without          Total
  NEXT button       21.0 (N = 4)     25.0 (N = 3)     22.7 (N = 7)
  No NEXT button    13.8 (N = 5)      9.5 (N = 2)     12.6 (N = 7)
  Total             17.0 (N = 9)     18.8 (N = 5)     17.7 (N = 14)

B) Users with at least one previous programming language

                    Adaptive Annotation
                    With             Without          Total
  NEXT button       23.5 (N = 6)     14.0 (N = 3)     20.3 (N = 9)
  No NEXT button    22.4 (N = 5)     12.6 (N = 5)     17.5 (N = 10)
  Total             23.0 (N = 11)    13.1 (N = 8)     18.8 (N = 19)
familiar with at least one other programming language (Table 1B) tended to visit more pages and solve more exercises and problems when working with adaptive link annotation, though the effect is not quite statistically significant in our small sample (23.0 vs. 13.1 pages, MS = 413.9, F(1,15) = 2.96, p = 0.11). These results can easily be interpreted when one looks more closely at the navigation behavior of the complete beginners. All but one of the beginners had no experience in using a WWW browser. That is, these subjects profited from being guided directly by the system when using the NEXT button. Without such a button, they had to navigate through the course materials on their own. Learning to navigate through a hypertext, in addition to learning the programming language, may have been too difficult. So individual adaptive guidance by the system is especially helpful for complete beginners. Most subjects who were familiar with at least one other programming language were also familiar with Web browsers. They were more pleased with the link annotation and stayed longer with the learning system when links were annotated adaptively. A second hypothesis postulates that the number of navigation steps is reduced both by adaptive navigation support and by individual curriculum sequencing with the NEXT button. Both techniques should have an additive effect on the navigation process. Fourteen subjects finished the first lesson of the introductory LISP course in ELM-ART II. The numbers of navigation steps with and without link annotation and with and without the NEXT button are shown in Table 2. Differences in the average numbers of navigation steps are not significant with this small number of subjects, and the data seem to support the hypothesis only partially. Subjects who were individually guided by the NEXT button needed fewer steps to finish the first lesson than subjects without such an option (71.9 vs. 98.6 steps, respectively).
The adaptive link annotation does not seem to have any systematic effect on the number of navigation steps. The very small effects observed in the first lesson fade away during the following lessons. That is, only in the beginning does individual guidance by the system help learners to follow an optimal path through the curriculum. Later on,
Table 2. Mean number of navigation steps needed in the first lesson.

                     Adaptive Annotation
                  With            Without         Total
NEXT button       66.6 (N = 7)    81.3 (N = 4)    71.9 (N = 11)
No NEXT button    103.4 (N = 9)   87.8 (N = 4)    98.6 (N = 13)
Total             87.3 (N = 16)   84.5 (N = 8)    86.4 (N = 24)
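The row and column margins of Table 2 are simply N-weighted means of the four cell means. A short Python sketch reproduces them (cell values are taken from the table; the dictionary keys are our own shorthand labels):

```python
# Cell means and group sizes from Table 2 (labels are our own shorthand).
cells = {
    ("next", "with"):    (66.6, 7),    # NEXT button, adaptive annotation
    ("next", "without"): (81.3, 4),
    ("none", "with"):    (103.4, 9),   # no NEXT button
    ("none", "without"): (87.8, 4),
}

def marginal(rows, cols):
    """N-weighted mean (rounded to one decimal) and total N over the cells."""
    sel = [(m, n) for (r, c), (m, n) in cells.items() if r in rows and c in cols]
    total = sum(n for _, n in sel)
    return round(sum(m * n for m, n in sel) / total, 1), total

print(marginal({"next"}, {"with", "without"}))   # → (71.9, 11), as in the table
```

The same call with other row/column selections reproduces the remaining margins, including the grand mean of 86.4 steps over all 24 subjects.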
all subjects understand the simple hierarchical architecture of the programming course, and most of them follow the best learning path without any guidance.

These results do not mean that adaptive link annotation and adaptive curriculum sequencing are less important than expected. As the data above show, these techniques are especially useful in the starting phase, when users, and especially beginners, are often frustrated. The techniques will presumably also be helpful to advanced users who already possess some of the to-be-learned knowledge. In this case, a system that is able to adapt to the particular user can navigate him or her past all the pages that the system infers to be known already. However, this remains to be shown in a further study.
5 Conclusion The system ELM-ART II described in this paper is an example of how an ITS can be implemented on the WWW. It integrates the features of electronic textbooks, learning environments, and intelligent tutoring systems. User modeling techniques such as simple overlay models or more elaborate episodic learner models are well suited to providing adaptive guidance as well as individualized help and problem-solving support in WWW-based learning systems. Perhaps the WWW can help ITS move from laboratories (where most of these "intelligent" systems remain, owing to their enormous requirements in computing power and capacity) to classrooms and to permanent availability in distance learning.

ELM-ART II is implemented with the programmable WWW server CL-HTTP (URL: http://www.ai.mit.edu/projects/iiip/doc/cl-http/home-page.html). ELM-ART II can be accessed at the following URL: http://www.psychologie.uni-trier.de:8000/elmart.
References

Anderson, J. R. (1993). Rules of the Mind. Hillsdale, NJ: Lawrence Erlbaum Associates.
Anderson, J. R., Corbett, A. T., Koedinger, K. R., and Pelletier, R. (1995). Cognitive tutors: Lessons learned. The Journal of the Learning Sciences 4:167–207.
Barr, A., Beard, M., and Atkinson, R. C. (1976). The computer as a tutorial laboratory: The Stanford BIP project. International Journal of Man-Machine Studies 8:567–596.
Brusilovsky, P. (1992). Intelligent tutor, environment, and manual for introductory programming. Educational and Training Technology International 29:26–34.
Brusilovsky, P., Schwarz, E., and Weber, G. (1996a). ELM-ART: An intelligent tutoring system on World Wide Web. In Frasson, C., Gauthier, G., and Lesgold, A., eds., Proceedings of the Third International Conference on Intelligent Tutoring Systems, ITS-96. Berlin: Springer. 261–269.
Brusilovsky, P., Schwarz, E., and Weber, G. (1996b). A tool for developing adaptive electronic textbooks on WWW. Proceedings of WebNet'96, World Conference on the Web Society. Charlottesville, VA: AACE. 64–69.
Burow, R., and Weber, G. (1996). Example explanation in learning environments. In Frasson, C., Gauthier, G., and Lesgold, A., eds., Intelligent Tutoring Systems—Proceedings of the Third International Conference, ITS '96. Berlin: Springer. 457–465.
Carr, B., and Goldstein, I. (1977). Overlays: A theory of modelling for computer aided instruction (AI Memo 406). Cambridge, MA: Massachusetts Institute of Technology, AI Laboratory.
Eklund, J. (in press). Knowledge-based navigation support in hypermedia courseware using WEST. Australian Educational Computing 11.
Johnson, W. L. (1986). Intention-Based Diagnosis of Novice Programming Errors. London: Pitman.
Kay, J., and Kummerfeld, R. J. (1994). An individualised course for the C programming language. Proceedings of the Second International WWW Conference "Mosaic and the Web".
Kolodner, J. L. (1993). Case-Based Reasoning. San Mateo, CA: Morgan Kaufmann.
Lin, F., Danielson, R., and Herrgott, S. (1996). Adaptive interaction through WWW. In Carlson, P., and Makedon, F., eds., Proceedings of ED-TELECOM 96—World Conference on Educational Telecommunications. Charlottesville, VA: AACE. 173–178.
Pérez, T., Gutiérrez, J., and Lopistéguy, P. (1995a). An adaptive hypermedia system. In Greer, J., ed., Proceedings of AI-ED'95, 7th World Conference on Artificial Intelligence in Education. Washington, DC: AACE. 351–358.
Pérez, T., Lopistéguy, P., Gutiérrez, J., and Usandizaga, I. (1995b). HyperTutor: From hypermedia to intelligent adaptive hypermedia. In Maurer, H., ed., Proceedings of ED-MEDIA'95, World Conference on Educational Multimedia and Hypermedia. Graz, Austria: AACE. 529–534.
Schwarz, E., Brusilovsky, P., and Weber, G. (1996). World-wide intelligent textbooks. In Carlson, P., and Makedon, F., eds., Proceedings of ED-TELECOM 96—World Conference on Educational Telecommunications. Charlottesville, VA: AACE. 302–307.
Soloway, E., Rubin, E., Woolf, B., Johnson, W. L., and Bonar, J. (1983). MENO II: An AI-based programming tutor. Journal of Computer-Based Instruction 10:20–34.
Vanneste, K., Bertels, K., and De Decker, B. (1993). The use of semantic augmentation within a student program analyser. Proceedings of the Seventh International PEG Conference. 250–260.
Vanneste, P. (1994). The Use of Reverse Engineering in Novice Program Analysis. Ph.D. Dissertation, Katholieke Universiteit Leuven, Belgium.
Weber, G. (1996). Episodic learner modeling. Cognitive Science 20:195–236.
Weber, G., and Möllenberg, A. (1995). ELM programming environment: A tutoring system for LISP beginners. In Wender, K. F., Schmalhofer, F., and Böcker, H.-D., eds., Cognition and Computer Programming. Norwood, NJ: Ablex Publishing Corporation. 373–408.