Tseng, S.-S., Su, J.-M., Hwang, G.-J., Hwang, G.-H., Tsai, C.-C., & Tsai, C.-J. (2008). An Object-Oriented Course Framework for Developing Adaptive Learning Systems. Educational Technology & Society, 11 (2), 171-191.
An Object-Oriented Course Framework for Developing Adaptive Learning Systems Shian-Shyong Tseng and Jun-Ming Su Department of Computer Science, National Chiao Tung University, Taiwan //
[email protected] //
[email protected]
Gwo-Jen Hwang Department of Information and Learning Technology, National University of Tainan, Taiwan //
[email protected] // Fax: +886-6-3017001
Gwo-Haur Hwang Information Management Department, Ling Tung University, Taiwan //
[email protected]
Chin-Chung Tsai Graduate School of Technological and Vocational Education, National Taiwan University of Science and Technology, Taiwan //
[email protected]
Chang-Jiun Tsai Coretech Corporation, Taiwan //
[email protected]
ABSTRACT
The popularity of web-based learning systems has encouraged researchers to pay attention to several new issues. One of the most important issues is the development of new techniques to provide personalized teaching materials. Although several frameworks or methods have been proposed, it remains a challenging issue to design an easy-to-realize framework for developing adaptive learning systems that benefit student learning performance. In this paper, we propose a modular framework that can segment and transform teaching materials into modular learning objects based on the SCORM standard such that subject contents can be composed dynamically according to the profile and portfolio of individual students. An adaptive learning system has been developed based on this innovative approach. Based on the experimental results of a college computer course, we conclude that the proposed framework can be used to develop adaptive learning systems that benefit the students’ learning achievements.
Keywords Adaptive learning system, Personalized learning course, SCORM, Learning object
Introduction
Owing to the popularity of computer and information technologies, web-based educational systems are becoming more and more popular all over the world. Earlier computer-assisted educational systems merely treated computers and networks as a medium for presenting teaching materials or conducting assessments (Weiss & Kingsburg, 1984). In the past few decades, researchers have attempted to employ artificial intelligence technologies to develop more functional web-based learning systems on computer networks (Alessi & Trollip, 1991). However, most traditional course frameworks arrange learning objects and corresponding tests in sequential and monotonous ways, and hence only limited individualization strategies can be applied to the tutoring process (Karampiperis & Sampson, 2005). Without the assistance of an adaptive learning mechanism, students always need to learn all of the learning materials in each section, regardless of whether the teaching materials are suitable for them or not (Graf, 2006). An adaptive learning system is usually a web-based application program that provides a personalized learning environment for each learner by adapting both the presentation of and the navigation through the learning content (Retalis & Papasalouros, 2005). Such a learning environment can dynamically reorganize learning resources to meet specific learning objectives based on an individual learner’s profile or learning portfolio (Brusilovsky, 2001). More specifically, it offers the potential to uniquely address the specific learning goals, prior knowledge, and context of a learner, so as to improve that learner’s satisfaction with the course and motivation to complete it (Dagger, Wade, & Conlan, 2005).

ISSN 1436-4522 (online) and 1176-3647 (print). © International Forum of Educational Technology & Society (IFETS). The authors and the forum jointly retain the copyright of the articles.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear the full citation on the first page. Copyrights for components of this work owned by others than IFETS must be honoured. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from the editors at
[email protected].
Although the idea of adaptive learning seems to be promising, it remains an interesting and challenging issue to develop a system that can improve the learning performance of students in practical applications. One important problem is the complexity of such systems; that is, the frameworks of such systems are too complex to be realized for practical applications. To cope with this problem, we present a modular framework in this paper. Under the modular paradigm for segmenting and transforming all learning materials into learning objects, dynamic and flexible course contents can be provided to the students in accordance with their learning aptitudes and status. In addition, the Sharable Content Object Reference Model (SCORM) (SCORM, 2004), the most popular international standard for teaching materials, is used to represent the modular learning objects and the composed learning materials, which makes the framework more adaptable, flexible, maintainable, exchangeable, and extensible. To evaluate the performance of the modular framework, an adaptive learning system has been developed based on the framework, and an experiment with a college computer course has been conducted. The experimental results show that this innovative approach is helpful to the students in improving their learning achievements.
Relevant work
In recent years, researchers have attempted to develop computer-assisted learning systems that are more intelligent and individualized. For example, Gonzalez and Ingraham (1994) developed a tutoring system that is capable of automatically determining exercise progression and remediation during a training session, relative to the students’ past performance; Harp, Samad, and Villano (1995) employed the technique of neural networks to model the behavior of students in the context of an intelligent tutoring system, using self-organizing feature maps to capture the possible states of student knowledge from an existing item bank; and Rowe and Galvin (1998) employed planning methods, consistency enforcement, objects, and structured menu tools to construct intelligent simulation-based tutors for procedural skills. Owing to the popularity of the Internet, several adaptive educational hypermedia systems have been developed. For example, Ozdemir and Alpaslan (2000) presented an intelligent agent to guide students through course material on the Internet. The agent helps students to study and learn the concepts of the course by giving navigational support according to their knowledge level. Meanwhile, some testing and evaluation mechanisms to detect well-learned or poorly learned instructional objectives for individual students have been proposed (Hwang, 2003; Hwang, Lin, & Lin, 2006). Based on the evaluation results, proper remedial learning materials can be offered to students who did not learn well, so that they can study the content again. Unfortunately, previous studies on the development of adaptive learning systems have not only demonstrated the benefits of adaptive learning, but have also revealed the difficulty of applying it. One of the most important problems is the inflexibility of such systems; that is, they are designed for special purposes with non-modular learning material structures.
Therefore, Brusilovsky and Maybury (2002) indicated that designing such adaptive hypermedia systems is still an open research issue. Papasalouros, Retalis, & Papaspyrou (2004) attempted to cope with this problem by employing UML as the modeling language. Furthermore, Dagger et al. (2005) presented an adaptive course construction methodology that provides a more flexible development concept, including adaptivity definition, subject matter concept modelling, adaptivity technique selection and alternative instructional design template customization. However, it remains a challenging issue to design an easy-to-realize framework for developing adaptive learning systems that benefit the students’ learning achievements in practical applications. In the following sections, an adaptive-learning framework, as well as the Modular Adaptive Learning System (MALS), which was developed based on the framework, is presented to show how teaching materials can be segmented and transformed into learning objects; moreover, a method for composing those learning objects based on an individual’s learning profile and learning portfolio is proposed.
Modular Adaptive Learning System (MALS)
The teaching materials of a course are structured into a course framework. Figure 1 shows the differences between a traditional course framework and a modular course framework. In a traditional course framework, the teaching materials are arranged in a sequential and monotonous way; that is, different learners are forced to read the same subject content for each section (or subject unit) even though they might have quite different knowledge levels. In contrast, in a modular course framework, each section (subject unit) is constructed in an individualized and intelligent way; that is, for different learners, the same section might consist of different learning objects, based on each learner’s background and learning status.
Figure 1. Comparison between traditional and modular course frameworks

The architecture of MALS is presented in Figure 2. The Learning Object Repository (LOR) stores the shared SCORM-compliant learning objects, which can be used by teachers to construct the teaching materials. That is, the teaching materials are composed of learning objects retrieved from the LOR (Tseng & Hwang, 2004).
Figure 2. Architecture of the modular adaptive learning system (MALS)

The Authoring Interface assists teachers in constructing the teaching materials, either by retrieving desirable learning objects from the LOR or by editing new learning objects. The learning objects, which are represented by means of
the SCORM standard (SCORM, 2004), are then read by the Teaching Materials Importing Engine and stored in the Teaching Materials Database. The Evaluation Center evaluates the learning achievements of the students. Based on the test results, the learning achievements of the students are stored in the Learning Record Database. Note that different tests can be offered in accordance with different learning objects; moreover, students at a lower learning level but with good learning achievement still have the opportunity to learn the objects at a higher learning level. The Course Construction Engine, which includes the Individualized Course Construction Algorithm and the Course Framework Revision Algorithm, dynamically constructs the course framework according to the students’ learning records stored in the Learning Record Database, the attribute values of the learning objects stored in the Teaching Materials Database, and the students’ profiles stored in the Student Profile Database. The Course Visualization Engine presents the SCORM-compliant course framework in the students’ browsers.

Structure of teaching materials
In SCORM (SCORM, 2004), a content packaging scheme is proposed to package the learning objects into standard teaching materials, as shown in Figure 3. The content packaging scheme defines a package of teaching materials consisting of four parts: 1) Metadata, describing the characteristics or attributes of the learning content; 2) Organizations, describing the structure of the teaching material; 3) Resources, denoting the physical file linked by each learning object within the teaching material; and 4) (Sub)Manifest, describing a teaching material that consists of itself and other teaching materials.
As shown in Figure 3, the structure of the whole set of teaching materials, which consists of many organizations containing an arbitrary number of tags called items, is defined to denote the corresponding chapter, section, or subsection within the physical teaching material. Each item, as a learning activity, can also be tagged with activity metadata, which can be easily retrieved from a content repository. The metadata also provide descriptive information about the activity. Based upon the concept of learning objects and the SCORM content packaging scheme, the teaching materials can be constructed dynamically by organizing the learning objects according to the learning strategies, students’ learning aptitudes, and the evaluation results. Thus, individualized teaching materials can be offered to each student, and the teaching materials can be reused, shared, and recombined (Su et al., 2005).
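As an illustration, a minimal imsmanifest.xml along these lines might look as follows. This is a sketch only; the identifiers, titles, and file names are hypothetical and not taken from MALS:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<manifest identifier="MANIFEST-CH1"
          xmlns="http://www.imsglobal.org/xsd/imscp_v1p1"
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_v1p3">
  <metadata>
    <schema>ADL SCORM</schema>
    <schemaversion>2004 3rd Edition</schemaversion>
  </metadata>
  <organizations default="ORG-1">
    <organization identifier="ORG-1">
      <title>Chapter 1</title>
      <!-- each <item> denotes a chapter/section/subsection; activity
           metadata may be attached at the item level -->
      <item identifier="ITEM-1-1" identifierref="RES-LO1">
        <title>Section 1.1</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <!-- the physical file linked by the learning object -->
    <resource identifier="RES-LO1" type="webcontent"
              adlcp:scormType="sco" href="lo1.html">
      <file href="lo1.html"/>
    </resource>
  </resources>
  <!-- an optional (sub)manifest could nest further teaching materials here -->
</manifest>
```

The four parts of the packaging scheme map directly onto the metadata, organizations, resources, and (sub)manifest elements shown above.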
Figure 3. SCORM content packaging scope and corresponding structure of teaching materials
Object-oriented course framework
In a traditional course framework, the arrangement of the teaching materials in a section is sequential and monotonous. Thus, without appropriate segmentation and labeling of the teaching materials, it is difficult for an individualized tutoring system to offer appropriate materials to students in accordance with their individual aptitudes. However, the relationship between segmented materials and material attributes is very similar to that between a class and its class members under the modular paradigm. Therefore, our proposal is to segment teaching materials into learning objects based on the SCORM standard; that is, the original teaching materials are divided into several segments, called learning objects, according to the instructional objectives defined by the educational experts, enabling tutoring systems to construct personalized teaching materials for individual students. As shown in Figure 4, the teaching materials stored in the Teaching Materials Database are composed of several related learning objects, each consisting of the learning content and the corresponding tests. These tests are offered in order to evaluate the student’s learning achievement for the learning activity.
Figure 4. Structure of the teaching material

In this paper, each learning object has four attached elements (as shown in Table 1): Background Knowledge (BK), Major Concept (MC), Learning Level (LL), and Difficulty Level (DL), which are proposed to meet adaptive learning needs. In order to determine the learning level of an individual student, the test results of each student are analyzed based on the approach proposed by Chu et al. (2006).

Table 1. Definitions of the instructional objective class

Class Member          Definition                                                          Initialized in
Background Knowledge  The prerequisite instructional objectives required before learning  Base class
Major Concept         The knowledge acquired after learning                               Base class
Learning Level        The appropriate level of the instructional objective for students,  Base class
                      where a lower learning level is more basic
Difficulty Level      The difficulty of a specific learning object                        Sub-class
As shown in Figure 5, the learning objects are constructed in two tiers: 1) the first tier includes the instructional objective base classes, which describe the individual instructional objectives and the corresponding difficulty levels for achieving the objectives defined by the teacher or education expert, and 2) the second tier is the teaching material sub-class. Based upon the inheritance property of object-orientation, the attributes of the teaching material sub-class can be easily inherited from the corresponding base class. Before transforming a teaching material into a learning object, teachers must select an appropriate instructional objective base class and determine the instructional objective and the difficulty level. Then, the system derives a teaching material sub-class and generates a concrete learning object according to the class members of the base class and sub-class. That is, the relationship between the teaching material sub-classes and the learning objects is one to one in this framework, and the attributes in the base classes are inherited by the corresponding learning objects. This transformation process is repeated until all of the original teaching materials have been transformed into learning objects.
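The two-tier structure can be sketched in Python; the class and field names below are ours, introduced only to illustrate how the sub-class inherits BK, MC, and LL from the base class while adding the material-specific DL:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class InstructionalObjectiveBase:
    """First tier: instructional objective defined by the educational expert."""
    background_knowledge: List[str]  # prerequisite instructional objectives (BK)
    major_concept: str               # knowledge acquired after learning (MC)
    learning_level: int              # lower level = more basic (LL)

@dataclass
class TeachingMaterialSub(InstructionalObjectiveBase):
    """Second tier: inherits BK, MC, LL and adds the material-specific DL."""
    difficulty_level: int = 1        # initialized from the particular content (DL)
    content_uri: str = ""            # physical resource of the learning object

# Deriving a concrete learning object from a base class (one-to-one mapping):
base = InstructionalObjectiveBase(background_knowledge=["A", "H"],
                                  major_concept="F", learning_level=2)
lo5 = TeachingMaterialSub(**vars(base), difficulty_level=1, content_uri="lo5.html")
```

The inheritance mirrors the paper’s design choice: expert-defined objective attributes live in the base class, while only the difficulty level is set per teaching material.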
Figure 5. Object-oriented course framework

In the first tier, for a given section of teaching material, an instructional objective base class with the values of its class members can be constructed or initialized according to the instructional objective defined by the educational experts. In the second tier, for a teaching material sub-class, the class members, such as major concept, background knowledge, and learning level, are completely inherited from the base class through the inheritance property of the modular paradigm, while the difficulty level is initialized according to the content of the particular teaching material. The instructional objective base class in the first tier and the subject material sub-class in the second tier are composed of several related learning objects (LOs) from the Learning Object Library in MALS.

Individualized Course Construction Algorithm (ICCAlg)
The ICCAlg ensures that the Course Construction Engine can construct different course frameworks for different students in accordance with their aptitudes.

Input: Learning objects and an empty course framework
Output: Individualized course framework for a given student
Step 1: Determine the learning level of the student according to the student’s learning profile stored in the Learning Record Database.
Step 2: For each object LOi, if all of its background knowledge (the prerequisite learning objects) has been fulfilled (no prerequisite learning object is needed, or all of the prerequisite learning objects have been appended to the student’s learning list) and the learning level of LOi is less than or equal to the learning level of the student, LOi is selected as an element of the candidate learning object set S.
LO    Background Knowledge   Major Concept   Learning Level   Difficulty Level
LO1   none                   A               1                1
LO2   none                   A               1                2
LO3   none                   H               1                1
LO4   A                      B               1                1
LO5   A, H                   F               2                1
LO6   A, B                   C               1                1
LO7   A, B                   D               3                1
LO8   A, B, D                E               3                1
Figure 6. Illustrative example of eight learning objects

Step 3: Append all learning objects with the smallest utilization degree in S to the student’s course framework, in increasing order of difficulty.
Step 4: If all objects whose learning levels are not larger than the student’s learning level have already been appended to the course framework, then stop; otherwise, go back to Step 2.

In ICCAlg, the utilization degree of a given learning object is defined as the number of learning objects taking this learning object’s major concept as one component of their background knowledge. For example, consider the eight learning objects whose attribute values are initialized as shown in Figure 6. Since five of the learning objects (LO4, LO5, LO6, LO7, and LO8) take A as part of their background knowledge, the utilization degree of LO1 and LO2 is 5.

Example 1
For the learning objects shown in Figure 6, the ICCAlg can be used to construct three course frameworks for three different learning levels. Before the learner starts to learn through MALS, the number of achieved instructional objectives of the section is initialized to zero. Consequently, the first step is to determine all learning objects without any background knowledge. Three learning objects, LO1, LO2, and LO3, are selected as candidate learning objects in Step 2, since all of them have the same learning level of 1. In Step 3, the learning object LO3, having the smallest utilization degree (1), is chosen and appended to the course framework. The background knowledge of the students then includes the instructional objective H of LO3, and the next step is to go back to Step 2 to find the remaining objects. Two more learning objects, LO1 and LO2, are subsequently found, and the three course frameworks are shown in Figure 7.
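Steps 1–4 of ICCAlg, applied to the eight learning objects of Figure 6, can be sketched as follows. The data encoding and helper names are ours; the selection logic follows the algorithm as stated (candidates with fulfilled background knowledge and admissible learning level, smallest utilization degree first, ties broken by increasing difficulty):

```python
# Learning objects from Figure 6: (background knowledge, major concept,
# learning level, difficulty level).  An empty BK set means "no prerequisite".
LOS = {
    "LO1": (set(),           "A", 1, 1),
    "LO2": (set(),           "A", 1, 2),
    "LO3": (set(),           "H", 1, 1),
    "LO4": ({"A"},           "B", 1, 1),
    "LO5": ({"A", "H"},      "F", 2, 1),
    "LO6": ({"A", "B"},      "C", 1, 1),
    "LO7": ({"A", "B"},      "D", 3, 1),
    "LO8": ({"A", "B", "D"}, "E", 3, 1),
}

def utilization(lo):
    """Number of learning objects taking this LO's major concept as background."""
    mc = LOS[lo][1]
    return sum(1 for bk, _, _, _ in LOS.values() if mc in bk)

def icc_alg(student_level):
    """Individualized Course Construction Algorithm (Steps 1-4), sketched."""
    framework, achieved = [], set()     # achieved instructional objectives
    eligible = [lo for lo, (_, _, ll, _) in LOS.items() if ll <= student_level]
    while len(framework) < len(eligible):
        # Step 2: candidate set S - objects whose background knowledge is fulfilled
        s = [lo for lo in eligible
             if lo not in framework and LOS[lo][0] <= achieved]
        # Step 3: append all candidates with the smallest utilization degree,
        # in increasing order of difficulty
        smallest = min(utilization(lo) for lo in s)
        for lo in sorted((x for x in s if utilization(x) == smallest),
                         key=lambda x: LOS[x][3]):
            framework.append(lo)
            achieved.add(LOS[lo][1])
    return framework
```

Running `icc_alg` for learning levels 1, 2, and 3 reproduces the three frameworks of the example: LO3 first (utilization degree 1), then LO1 before LO2 (same degree, lower difficulty), and so on.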
Three course frameworks with three learning levels:
Level 1: LO3, LO1, LO2
Level 2: LO3, LO1, LO2
Level 3: LO3, LO1, LO2
Figure 7. An individualized course construction example (I)

Next, the learning objects LO4 and LO5 are chosen. For the course framework whose learning level equals 1, only LO4 can be appended, since LO5 has learning level 2. For the course frameworks with a learning level larger than 1, LO5 is chosen and appended first, since it has the smallest utilization degree, and F is added to the background knowledge. Going back to Step 2 again, LO4 is then appended to the course framework, as shown in Figure 8.
Three course frameworks with three learning levels:
Level 1: LO3, LO1, LO2, LO4, LO6
Level 2: LO3, LO1, LO2, LO5, LO4, LO6
Level 3: LO3, LO1, LO2, LO5, LO4, LO6
Figure 8. An individualized course construction example (II)

Furthermore, LO6 is chosen according to its utilization degree. Repeating Steps 2 to 4, the objects LO7 and LO8, with learning level 3, are finally chosen, as shown in Figure 9.
Three course frameworks with three learning levels:
Level 1: LO3, LO1, LO2, LO4, LO6
Level 2: LO3, LO1, LO2, LO5, LO4, LO6
Level 3: LO3, LO1, LO2, LO5, LO4, LO6, LO7, LO8
Figure 9. An individualized course construction example (III)

Course Framework Revision Algorithm (CFRAlg)
To give students with a lower learning level the opportunity to learn the learning objects at a higher learning level, we propose the CFRAlg, which revises a student’s course framework according to several indexes, including efficiency of learning, achieved rate of tests, etc.; these indexes are obtained from the evaluation mechanism. Therefore, even for two students who start with the same learning level, two individualized course frameworks will be offered via the revision process. As stipulated by the above-mentioned process, tests related to the characteristics of the various learning objects are usually offered to evaluate students’ learning achievements for each learning object (Hwang, 2003; Chu, Hwang, Tseng, & Hwang, 2006). If the evaluation results are not positive, CFRAlg reconstructs the course framework.

Input: Original course framework
Output: Revised course framework
Step 1: If the evaluation results are positive, then the learning objects with the nearest higher learning level are appended to the original course framework.
Step 2: If the evaluation results are negative, then the objects with the highest learning level are removed from the original course framework.
Step 3: If the evaluation results show that some instructional objectives have not been well learned (for example, the achievement rate of specific tests following specific learning objects is lower than the threshold defined by the educational expert), then the corresponding learning objects with a lower learning level are appended to the original course framework.
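The three revision steps can be sketched as follows, using the learning levels and major concepts of the eight objects from Figure 6. The function and parameter names are ours; `poorly_learned` stands in for the objectives whose test achievement rate fell below the expert-defined threshold:

```python
# Learning levels and major concepts of the eight objects of Figure 6.
LEVELS = {"LO1": 1, "LO2": 1, "LO3": 1, "LO4": 1,
          "LO5": 2, "LO6": 1, "LO7": 3, "LO8": 3}
CONCEPTS = {"LO1": "A", "LO2": "A", "LO3": "H", "LO4": "B",
            "LO5": "F", "LO6": "C", "LO7": "D", "LO8": "E"}

def cfr_alg(framework, result, poorly_learned=()):
    """Course Framework Revision Algorithm (Steps 1-3), sketched.

    result: 'positive' or 'negative' overall evaluation outcome.
    poorly_learned: major concepts whose tests fell below the threshold.
    """
    revised = list(framework)
    current_max = max(LEVELS[lo] for lo in revised)
    if result == "positive":
        # Step 1: append the objects with the nearest higher learning level.
        next_level = min((lvl for lvl in LEVELS.values() if lvl > current_max),
                         default=None)
        if next_level is not None:
            revised += [lo for lo, lvl in LEVELS.items() if lvl == next_level]
    elif result == "negative":
        # Step 2: remove the objects with the highest learning level.
        revised = [lo for lo in revised if LEVELS[lo] < current_max]
    # Step 3: re-append lower-level objects for poorly learned objectives.
    for concept in poorly_learned:
        revised += [lo for lo, mc in CONCEPTS.items() if mc == concept]
    return revised
```

For the level-2 framework (LO3, LO1, LO2, LO5, LO4, LO6), a positive result appends LO7 and LO8, a negative result removes LO5, and a poorly learned objective A re-appends LO1 and LO2, matching the three revision examples.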
Original course framework: LO3, LO1, LO2, LO5, LO4, LO6
If the evaluation result is positive, the system revises the original course framework by appending the more difficult learning objects LO7 and LO8.
Revised course framework: LO3, LO1, LO2, LO5, LO4, LO6, LO7, LO8
Figure 10. Example of Course Framework Revision for Positive Feedback
Example 2
For the eight learning objects shown in Figure 6, LO1, LO2, LO3, LO4, LO5, and LO6 will be selected for students with a learning level of 2. After evaluating the students’ learning achievements, if the evaluation output is positive, the students still have the opportunity to learn the learning objects with a higher learning level. As shown in Figure 10, learning objects LO7 and LO8, with a learning level of 3, will be appended to the original course framework. If the evaluation results are negative, as shown in Figure 11, the learning object LO5, with a learning level of 2, will be removed from the original learning object set.
Original course framework: LO3, LO1, LO2, LO5, LO4, LO6
If the evaluation result is negative, the system revises the original course framework by removing the more difficult learning object LO5.
Revised course framework: LO3, LO1, LO2, LO4, LO6
Figure 11. Example of Course Framework Revision for Negative Feedback

Furthermore, by applying the evaluation mechanism, MALS can determine which instructional objectives a student has not learned well. If a student has low learning achievement for some learning objects, those learning objects are re-appended to the original course framework for the student to learn again. Moreover, appending more difficult learning objects to the course framework is not a serious encumbrance for the system under the modular paradigm. For example, assuming that instructional objective A is poorly learned, the corresponding learning objects LO1 and LO2 are appended to the original course framework again (as shown in Figure 12).
Figure 12. Example of Course Framework Revision by Appending Poorly-Learned Learning Objects

Evaluation mechanism
In the MALS Evaluation Center, the evaluation mechanism applies fuzzy reasoning technology to evaluate the students’ learning achievements with regard to the learning objects. Accordingly, the Course Construction Engine can reconstruct the course framework for students’ relearning. Fuzzy set theory was proposed in the mid-sixties by Zadeh (1965) and was later extended to fuzzy logic (Zadeh, 1973), a superset of conventional (Boolean) logic developed to handle the concept of partial truth. Fuzzy sets (or vague sets) generalize the notion of crisp sets; that is, an element can belong to a set with a membership degree between 0 and 1. The source of fuzziness in “If-Then” rules stems from the use of linguistic variables (Zadeh, 1971). Concept understanding degree, for example, may be viewed both as a numerical value ranging over the interval [0, 100%] and as a linguistic variable that can take on values such as high, not very high, and so on. Each of these linguistic values may be interpreted as a label of a fuzzy subset of the universe of discourse X = [0, 100%], whose base variable, x, is the generic numerical value of the concept understanding degree. Each linguistic term is defined by a membership function, which takes the crisp input values and transforms them into degrees of membership (Ngai & Wat, 2003). A membership function (MF) is a curve that defines how each point in the input space is mapped to a membership value, or degree of membership, between 0 and 1. The function itself can be an arbitrary curve whose shape suits the particular problem in terms of simplicity, convenience, speed, and efficiency (Kalogirou, 2003). In practical applications, four standardized MFs are used: Z-type, Λ-type (lambda), Π-type (pi), and S-type, as shown in Figure 13.
Figure 13. Four standardized membership functions

Fuzzy rules describe the quantitative relationship between variables in linguistic terms. These IF-THEN rule statements are used to formulate the conditional statements that comprise fuzzy logic (Kalogirou, 2003). In the following, we present the fuzzy approach for identifying online learning performance in detail.

Definition of the Linguistic Variables
In this section we define the related linguistic variables and membership functions used to measure the status of learners, including an individual’s relative learning achievement, concentration, and patience. As the system eventually provides a five-scale remedial learning suggestion, these linguistic variables are designed to have five linguistic terms, as shown in Table 2.

Table 2. Linguistic variables and terms used in the study

         Linguistic Variable   Definition                                                                             Linguistic Terms
Input    LAsc(Si, LOj)         Learner Si’s individual learning achievement in terms of a learning object LOj         Low, Medium, High
Input    ALAc(LOj)             The learning group’s average learning achievement in terms of a learning object LOj    Low, Medium, High
Output   RLAsc(Si, LOj)        Si’s relative learning achievement in terms of a certain learning object LOj,          Grade 1 – Grade 5
                               compared with the learning group
The linguistic terms of the output linguistic variable RLAsc(Si, LOj) range from Grade 1 to Grade 5, in which Grade 1 represents the lowest achievement degree and Grade 5 represents the highest achievement degree.

Definition of the Membership Functions
In this study, we assume that the input and output fuzzy numbers are in triangular forms and that these forms approximate human thought processes. Three membership functions are used: Z-type, Lambda-type (triangular), and S-type. Learning achievement is defined as LAsc(Si, LOj)/ALAc(LOj). Generally, most examinations use a cut-off point of 60 percent to judge whether a learner has passed or failed. This rule of thumb is also adopted in our study: we use 60 points as the minimum criterion for determining whether a learner or learning group understands enough of the concept. Hence, the membership function of learner Si’s learning achievement degree in terms of the learning object LOj is defined as follows:
μ(x) = Z(x; 0.4, 0.6),         if the linguistic term is Low
μ(x) = Tri(x; 0.4, 0.6, 0.8),  if the linguistic term is Medium
μ(x) = S(x; 0.6, 0.8),         if the linguistic term is High

The graphical representation of the membership functions is shown in Figure 14.
Figure 14. Membership functions to model learning achievements
Figure 15. Membership functions for learning achievements

The fuzzy reasoning model plays an important role in our system for diagnosing learning performance, as it provides indispensable supporting information for learners. Thus, the establishment of the fuzzy rules is critical to the correctness of the inference results. Nine rules are defined in our study, as shown below:
R1: IF LAsc(Si, LOj) = Low AND ALAc(LOj) = Low THEN RLAsc(Si, LOj) = Grade 3
R2: IF LAsc(Si, LOj) = Low AND ALAc(LOj) = Medium THEN RLAsc(Si, LOj) = Grade 2
R3: IF LAsc(Si, LOj) = Low AND ALAc(LOj) = High THEN RLAsc(Si, LOj) = Grade 1
R4: IF LAsc(Si, LOj) = Medium AND ALAc(LOj) = Low THEN RLAsc(Si, LOj) = Grade 4
R5: IF LAsc(Si, LOj) = Medium AND ALAc(LOj) = Medium THEN RLAsc(Si, LOj) = Grade 3
R6: IF LAsc(Si, LOj) = Medium AND ALAc(LOj) = High THEN RLAsc(Si, LOj) = Grade 2
R7: IF LAsc(Si, LOj) = High AND ALAc(LOj) = Low THEN RLAsc(Si, LOj) = Grade 5
R8: IF LAsc(Si, LOj) = High AND ALAc(LOj) = Medium THEN RLAsc(Si, LOj) = Grade 4
R9: IF LAsc(Si, LOj) = High AND ALAc(LOj) = High THEN RLAsc(Si, LOj) = Grade 3

The result of fuzzy reasoning is a group of linguistic terms that must be quantified to generate an explicit output value; this quantification process is called “defuzzification.” Figure 15 shows the membership functions for learning achievements.
~ (l )
Assume that l-th fuzzy rule is represented as R
~ ~ : If x1 is A1l and ...and xm is Anl then ~ y l . Let μ il be the
membership function of Grade i. The membership function for Grade 1 is:
\mu_1^l = \begin{cases} 0 & \text{if } y^l > 2/6 \\ \dfrac{2/6 - y^l}{2/6 - 1/6} & \text{if } 1/6 < y^l \le 2/6 \\ 1 & \text{if } y^l \le 1/6 \end{cases}

The membership function for Grade 2 is:
\mu_2^l = \begin{cases} 0 & \text{if } y^l \le 1/6 \\ \dfrac{y^l - 1/6}{2/6 - 1/6} & \text{if } 1/6 < y^l < 2/6 \\ 1 & \text{if } y^l = 2/6 \\ \dfrac{3/6 - y^l}{3/6 - 2/6} & \text{if } 2/6 < y^l < 3/6 \\ 0 & \text{if } y^l \ge 3/6 \end{cases}
The membership function for Grade 3 is:
\mu_3^l = \begin{cases} 0 & \text{if } y^l \le 2/6 \\ \dfrac{y^l - 2/6}{3/6 - 2/6} & \text{if } 2/6 < y^l < 3/6 \\ 1 & \text{if } y^l = 3/6 \\ \dfrac{4/6 - y^l}{4/6 - 3/6} & \text{if } 3/6 < y^l < 4/6 \\ 0 & \text{if } y^l \ge 4/6 \end{cases}
The membership function for Grade 4 is:
\mu_4^l = \begin{cases} 0 & \text{if } y^l \le 3/6 \\ \dfrac{y^l - 3/6}{4/6 - 3/6} & \text{if } 3/6 < y^l < 4/6 \\ 1 & \text{if } y^l = 4/6 \\ \dfrac{5/6 - y^l}{5/6 - 4/6} & \text{if } 4/6 < y^l < 5/6 \\ 0 & \text{if } y^l \ge 5/6 \end{cases}
The membership function for Grade 5 is:

\mu_5^l = \begin{cases} 0 & \text{if } y^l \le 4/6 \\ \dfrac{y^l - 4/6}{5/6 - 4/6} & \text{if } 4/6 < y^l < 5/6 \\ 1 & \text{if } y^l \ge 5/6 \end{cases}

The defuzzification output can be derived by applying the following formula:
f(x_1, x_2, \ldots, x_n) = \dfrac{\sum_{l=1}^{n} y^l \cdot W^l}{\sum_{l=1}^{n} W^l}

where W^l = \prod_{i=1}^{n} \mu_i^l(x_i) and y^l is the center of the fuzzy set \tilde{y}^l.
For the membership functions in Figure 15, y^l is 1/12, 2/6, 3/6, 4/6, and 11/12 for Grades 1 through 5, respectively. If the defuzzification output for learning achievement is greater than or equal to the pre-defined threshold, the system assumes that the student has learned the major concept of the learning object well; otherwise, the student is considered to have failed to learn the concept, and the corresponding learning object is recorded on the non-passed list (NP). Accordingly, CFRAlg can reconstruct the course framework for the student by appending the learning objects recorded in NP.
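To make the reasoning and defuzzification steps concrete, the following Python sketch (our own illustration, not the MALS implementation; all function and variable names are ours) fires the nine rules over the linguistic terms of Figure 14 and computes the weighted-average output using the grade centers of Figure 15:

```python
def z_shape(x, a, b):
    """Z(x; a, b): 1 below a, linear descent to 0 at b."""
    if x <= a: return 1.0
    if x >= b: return 0.0
    return (b - x) / (b - a)

def s_shape(x, a, b):
    """S(x; a, b): 0 below a, linear ascent to 1 at b."""
    return 1.0 - z_shape(x, a, b)

def tri(x, a, b, c):
    """Tri(x; a, b, c): triangle rising from a, peaking at b, falling to c."""
    if x <= a or x >= c: return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Degree of compatibility with each linguistic term (Figure 14).
TERMS = {
    "Low":    lambda x: z_shape(x, 0.4, 0.6),
    "Medium": lambda x: tri(x, 0.4, 0.6, 0.8),
    "High":   lambda x: s_shape(x, 0.6, 0.8),
}

# The nine rules R1..R9: (LAsc term, ALAc term) -> consequent grade.
RULES = {
    ("Low", "Low"): 3,    ("Low", "Medium"): 2,    ("Low", "High"): 1,
    ("Medium", "Low"): 4, ("Medium", "Medium"): 3, ("Medium", "High"): 2,
    ("High", "Low"): 5,   ("High", "Medium"): 4,   ("High", "High"): 3,
}

# Centers y^l of the five grade fuzzy sets (Figure 15).
GRADE_CENTER = {1: 1/12, 2: 2/6, 3: 3/6, 4: 4/6, 5: 11/12}

def reasoned_achievement(lasc, alac):
    """Fire all rules and defuzzify: f = sum(y^l * W^l) / sum(W^l),
    where W^l is the product of the antecedent membership degrees."""
    num = den = 0.0
    for (t1, t2), grade in RULES.items():
        w = TERMS[t1](lasc) * TERMS[t2](alac)  # W^l
        num += GRADE_CENTER[grade] * w         # y^l * W^l
        den += w
    return num / den if den else 0.0
```

For instance, an input that is fully High on LAsc and fully Low on ALAc fires only R7, so the output is the center of Grade 5, i.e., 11/12.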
Figure 16. The XML format of learning objects in MALS based on SCORM
Data Representation of Teaching Materials in MALS

In this paper, we use the Sharable Content Object Reference Model (SCORM), the most popular teaching material standard, created in XML, to represent both an individual learning object (LO) and the personalized teaching materials comprising several related LOs.
Representation of learning objects based on SCORM

As mentioned in Section 3 of this paper, in order to support the personalized construction of teaching materials based on learners' capabilities and learning results, four required elements of an LO have been defined and described in Table 1. In addition, for the reusability and interoperability of learning contents among different learning systems, we apply the SCORM standard to represent an LO in MALS and extend its definition to support adaptive learning needs. As shown in Figure 16, the metadata used to describe an LO, and its sub-element, , has been extended by adding three elements, , , and , to denote the definitions in Table 1, where the element , which belongs to the original SCORM definition, is used to denote the Difficulty Level (DL). The metadata of an LO, together with its associated physical files (for example, content and quizzes), are then described in the element within the part of SCORM, so that the LO can be described, managed, and reused in MALS. Furthermore, in order to construct personalized teaching materials from the defined LOs, we define a representation scheme that packages several related LOs into a course based on the SCORM standard, as shown in Figure 17. Here, the element describes the structure of a course, and its sub-elements carry the information of the LOs associated with their physical LO definition files in the part. Based on this content-packaging scheme, personalized teaching materials can be dynamically constructed according to our proposed algorithms, ICCAlg and CFRAlg.
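As a hedged sketch of the metadata extension (the element names majorConcept, backgroundKnowledge, and learningLevel below are placeholders of our own invention; only difficulty is SCORM's original element, and the real manifests carry full SCORM namespaces omitted here), the extended metadata of one LO could be emitted as follows:

```python
import xml.etree.ElementTree as ET

def lo_metadata(major_concept, background_knowledge, learning_level, difficulty):
    """Build a minimal extended-metadata fragment for one learning object."""
    lom = ET.Element("lom")
    edu = ET.SubElement(lom, "educational")
    ET.SubElement(edu, "majorConcept").text = major_concept              # MC (placeholder name)
    ET.SubElement(edu, "backgroundKnowledge").text = background_knowledge  # BK (placeholder name)
    ET.SubElement(edu, "learningLevel").text = str(learning_level)       # LL (placeholder name)
    ET.SubElement(edu, "difficulty").text = str(difficulty)              # DL, SCORM's own element
    return ET.tostring(lom, encoding="unicode")
```

For example, `lo_metadata("HTML form", "HTML structure", 2, "medium")` yields an XML fragment whose attribute values can later be retrieved for course construction.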
Figure 17. The XML format of teaching materials consisting of related LOs in MALS
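A minimal sketch of the packaging scheme of Figure 17 (our own illustration; the identifiers, titles, and file names are hypothetical, and real SCORM manifests carry additional namespaces and metadata):

```python
import xml.etree.ElementTree as ET

def package_course(course_title, learning_objects):
    """Package related LOs into one course: the organization describes the
    course structure, while resources point at the physical LO files.
    learning_objects: list of (identifier, title, href) tuples."""
    manifest = ET.Element("manifest")
    org = ET.SubElement(ET.SubElement(manifest, "organizations"), "organization")
    ET.SubElement(org, "title").text = course_title
    resources = ET.SubElement(manifest, "resources")
    for ident, title, href in learning_objects:
        item = ET.SubElement(org, "item", identifierref=ident)  # structure entry
        ET.SubElement(item, "title").text = title
        ET.SubElement(resources, "resource", identifier=ident, href=href)
    return ET.tostring(manifest, encoding="unicode")
```

A personalized course is then just a different list of (identifier, title, href) tuples chosen per student, so the same function serves both the initial construction and later reconstruction.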
Implementation of MALS

In this section, we describe the implementation of MALS, including the authoring interface, the teaching materials importing engine, the course construction engine, and the course visualization engine.
Authoring interface

As mentioned above, the learning objects contain not only the teaching content but also the required elements of learning objects. As shown in Figure 18, to help teachers author teaching contents and combine them with these elements, the authoring interface provides menu-driven selection of the attributes for the uploaded teaching contents.
[Figure 18 shows the authoring form fields: Concept to be learned (Major Concept), Background Knowledge (BK), Learning Level, Difficulty Level, Number of Tests, Upload Teaching Materials, and Upload the Quiz.]
Figure 18. Authoring interface of MALS

Thus, from the elements shown in Figure 18, SCORM-compliant teaching materials are created automatically, as shown in Figure 16.

Teaching materials importing engine

The teaching materials importing engine is used to validate the SCORM-compliant teaching materials and to transform them into the teaching materials database. If the validation result is positive, the corresponding attribute values of the learning materials are retrieved and stored in the teaching materials database.

Course construction engine

The course construction engine organizes the learning objects into personalized course frameworks based on each student's aptitude and learning status as identified by ICCAlg and CFRAlg. Since the generated course framework is based on the SCORM standard, it contains not only the learning content but also the content structure with related metadata information. For example, Figure 17 shows the course framework consisting of the learning objects LO1, LO2, and LO3.
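The importing step can be sketched as follows (our own illustration; the element names for the required attributes are placeholders, and the actual MALS engine validates full SCORM compliance rather than just the presence of these elements):

```python
import xml.etree.ElementTree as ET

# Placeholder element names for the four required LO attributes.
REQUIRED = ("majorConcept", "backgroundKnowledge", "learningLevel", "difficulty")

def import_lo(metadata_xml):
    """Validate that all required attribute elements are present, then
    flatten them into a record for the teaching materials database."""
    root = ET.fromstring(metadata_xml)
    record = {tag: root.findtext(".//" + tag) for tag in REQUIRED}
    missing = [tag for tag, value in record.items() if value is None]
    if missing:
        raise ValueError("rejected by validation, missing: " + ", ".join(missing))
    return record
```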
[Figure 19 shows the level-1 page, whose content introduces the basic way to present web content with the appropriate tag, e.g., noting the available image file formats (JPEG and GIF), together with an illustrative example.]

Figure 19. Learning level 1
[Figure 20 shows the level-2 page, whose content introduces an advanced way to present web content: using a tag that displays the content in a separate area.]

Figure 20. Learning level 2

Course visualization engine

The course visualization engine, which provides several different XSL-format layout templates, is used to present the course framework organized by the course construction engine. The desired layout template chosen by the students
can be used to translate the SCORM-compliant course framework into readable learning materials. For the course framework shown in Figure 17, Figures 19 and 20 show the layouts for learning level 1 and learning level 2, respectively. In Figure 19, the teacher prepared basic HTML statements for the students with a lower learning level, while in Figure 20, more advanced statements, FRAME descriptions, were given to the students with a higher learning level. It is important to note that, based upon object-orientation, different course presentations can be easily provided by changing only the XSL-format layout templates.
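The template-selection idea can be sketched as follows (a simplified stand-in of our own: plain format strings replace the actual XSL stylesheets, and the markup emitted per level is illustrative):

```python
import xml.etree.ElementTree as ET

# Stand-ins for the XSL layout templates, keyed by learning level.
TEMPLATES = {
    1: "<p>{title}</p>",             # basic HTML layout for level 1
    2: '<frame name="{title}"/>',    # frame-based layout for level 2
}

def render_course(manifest_xml, learning_level):
    """Render each item of the course framework with the layout template
    selected for the student's learning level."""
    root = ET.fromstring(manifest_xml)
    template = TEMPLATES[learning_level]
    return "\n".join(
        template.format(title=item.findtext("title", ""))
        for item in root.iter("item")
    )
```

Because the course framework itself never changes, switching a student from one presentation to another is purely a matter of selecting a different template.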
Analysis and evaluation of MALS

Under the modular paradigm, MALS has the following properties:

• Individuality. Based upon ICCAlg, students need to learn only the appropriate learning material. In addition, the system revises the course framework according to the learning achievements of individual students. That is, the system can offer each student a personalized course framework with a matching learning level and individual target learning achievements.

• Flexibility. The modular course framework can be flexibly organized by combining the appropriate learning objects. If the domain or subject of the learning materials changes, only the corresponding attributes need to be changed to meet the new condition. Therefore, MALS can be easily applied to various science or engineering courses.

• Maintainability. Since the learning materials are divided into modular learning objects stored in the teaching materials database, the learning materials can be maintained easily.

• Exchangeability. By applying the SCORM standard as the learning material representation, the attribute values of the learning objects can be easily retrieved and transferred to other platforms, for example, UNIX, or to other types of storage media, such as a database or plain text. Thus, data exchange between different platforms and different storage media can be achieved with a minimum of effort.

• Extensibility. Because the SCORM standard is described in the extensible XML format, learning objects can be dynamically integrated into the course framework, and new learning objects can be easily added to the teaching materials database. In addition, the metadata elements of SCORM can be extended according to real pedagogical needs, thus enhancing exchangeability in adaptive learning environments for future applications.
In order to evaluate the performance of MALS, we developed four chapters of a computer course, web-based programming, based on this novel approach. The course aims to train students to design web-based systems that retrieve data from databases. The chapters contain 38 learning objects in total, including ActiveX object, ADO object, ADO.Recordset object, AdRotator, Application_OnEnd, Application object, ASP built-in object, ASP object model, ASP structure, ASPError object, Check box, Column Hidden, cookies concept, DataBase Concept, Form validation, FSO object model, HTML form, HTML structure, List box, Multi-column textarea, Radio Button, Request object, Request.cookies, Request.Form, Request.QueryString, Response object, Response.Buffer, Response.Clear, Server object, Session object, SQL language, VBScript built-in function, VBScript condition control, VBScript Data Types, VBScript loop control, VBScript Operators, and VBScript structure.

An experiment was conducted from September to December 2003. Sixty-four students from the Information Management Department of a Taiwanese college participated in the experiment and were separated into two groups, A (experimental group) and B (control group), each consisting of 32 students. The students in group A (V1) received adaptive contents arranged by applying the novel approach, while those in group B (V2) received a regular online course with sequential contents. All 64 students took two tests within the space of three months (a pre-test and a post-test). The statistical results obtained by applying SPSS to the two tests are presented below.

Pre-test

The pre-test aimed to ensure that both groups of students had the equivalent computer knowledge required for taking the course. The pre-test consisted of 30 multiple-choice test items and 10 open-ended test items. Table 3 presents the t-test results of the pre-test.
Notably, the mean and standard deviation of the pre-test were 68.56 and 20.80 for the experimental group, and 69.72 and 20.72 for the control group. The p-value indicates that the two groups do not differ significantly at the 0.05 level. It is thus evident that the two groups of students had statistically equivalent abilities for learning the programming course.

Table 3. t-test of the pre-test results
                     N    Mean   SD     t      p
Experimental group   32   68.56  20.80  -0.22  0.82
Control group        32   69.72  20.72
Post-test

The post-test was intended to compare the learning achievements of the two groups of students after taking the programming course. The post-test consisted of two parts: (1) an examination with 20 multiple-choice test items and 10 open-ended test items to evaluate the students' web-based programming knowledge; (2) an online test in which the students were asked to develop a web-based system that retrieved data from a database. Each part was weighted 50%. Table 4 lists the t-test values for the post-test results. Notably, the mean and standard deviation of the post-test were 79.31 and 10.44 for the experimental group, and 71.53 and 10.49 for the control group. From the t-test, we can conclude that the experimental group achieved significantly better performance than the control group after applying the subject approach (t = 2.97, p < .05).

Table 4. t-test of the post-test results (December 25, 2003)
                     N    Mean   SD     t     p
Experimental group   32   79.31  10.44  2.97  0.004
Control group        32   71.53  10.49
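As a quick arithmetic check (ours, not part of the paper), the reported t values follow from the group statistics via the two-sample t statistic t = (m1 − m2) / sqrt(s1²/n1 + s2²/n2), which for equal group sizes coincides with the pooled-variance form:

```python
from math import sqrt

def t_stat(m1, s1, n1, m2, s2, n2):
    """Two-sample t statistic from means, SDs, and group sizes."""
    return (m1 - m2) / sqrt(s1**2 / n1 + s2**2 / n2)

pre = t_stat(68.56, 20.80, 32, 69.72, 20.72, 32)   # Table 3: approx. -0.22
post = t_stat(79.31, 10.44, 32, 71.53, 10.49, 32)  # Table 4: approx.  2.97
```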
Furthermore, from 2004 to 2005, two experiments were conducted to evaluate the performance of MALS. In the first experiment, 60 undergraduate students were asked to use MALS for the expert systems course and then to answer a questionnaire concerning the difficulty level of the subject materials presented to them; 91% of the students indicated that the adaptive subject materials were "suitable," and 9% replied that they were "a little bit difficult, but acceptable." In the second experiment, 104 students were asked to evaluate the adaptive system with the web programming course and to answer a questionnaire concerning the overall performance of the system. Table 5 shows the questionnaire items and the corresponding statistical results of the students' feedback in the second experiment. The statistical results indicate that most of the students agreed that the system presented suitable content for them (Q2), which implies that the adaptive presentation of the subject content (i.e., the personalization feature provided by the modular learning object approach) was appreciated. In terms of the user interface (Q3) and the system functions (the tutoring and assessment functions in Q4 and Q5), the students' degree of satisfaction was clearly lower than it was for the adaptivity function (Q2). Moreover, over 90% of the students agreed that the system was helpful or very helpful in improving their learning performance (Q6). Therefore, it is possible that the students attributed the helpfulness of the system (Q6) to the adaptivity function (Q2). To acquire more in-depth feedback, six students were interviewed by the researchers after the experiment: three from the high achievement group and three from the low achievement group. These students gave some deep and interesting feedback and suggestions, as follows:

1. Regarding the difficulty of the subject materials, the interviewees thought that the learning objects were well structured and presented. However, they also indicated that the user interface of the system could be improved. For example, four students shared the same feeling: "Compared with the online systems I have used before, the subject content presented to me is more suitable for me and easier to follow. However, the user interface can be improved."

2. Students thought that the adaptivity of the subject materials was surprising and interesting: "It is interesting to see different content when I attempted to read the same unit for the second time." "I am eager to see what will be presented if I pass some units." "It is surprising to see additional information show up when I read the same unit for the second time."

3. With respect to the helpfulness of the system, most of the students indicated that the innovative approach is helpful to them in several ways. For example, two students in the low achievement group shared the same feeling: "I feel more willing to read the subject content, since it seems to be easier to follow."

4. Also, three students in the high achievement group gave the following comments: "It is good to see some advanced issues in the subject content." "Learning becomes more efficient since I do not need to waste time and can bypass the obvious knowledge or skills." "The best feature of this system is that it provides moderate subject content for each one…" "It is definitely helpful to me in learning. Moderate subject content will make everyone learn more effectively."
Based on these interview results, it is concluded that the adaptive system is innovative, helpful, and well-developed enough to foster students’ learning.
Table 5. Questionnaire items and the feedback from the students
Q1. What is your general impression of this system?
    (25%) very satisfied; (71%) satisfied; (4%) more or less satisfied; (0%) inclining toward unacceptable; (0%) totally unacceptable
Q2. Does this system present suitable subject content for you?
    (88%) moderate in difficulty; (12%) a little bit difficult but acceptable; (0%) too difficult; (0%) too easy; (0%) not suitable
Q3. Are you satisfied with the user interface of the system?
    (14%) very satisfied; (34%) satisfied; (52%) more or less satisfied; (0%) inclining toward unacceptable; (0%) totally unacceptable
Q4. Are you satisfied with the tutoring functions of the system?
    (27%) very satisfied; (47%) satisfied; (27%) more or less satisfied; (0%) inclining toward unacceptable; (0%) totally unacceptable
Q5. Are you satisfied with the assessment functions of the system?
    (21%) very satisfied; (63%) satisfied; (15%) more or less satisfied; (0%) inclining toward unacceptable; (0%) totally unacceptable
Q6. Do you think that the system is helpful to you in improving your learning performance?
    (40%) very helpful; (54%) helpful; (6%) more or less helpful; (0%) inclining toward unhelpful; (0%) totally unhelpful
Conclusion

In this paper, we propose, under a modular paradigm, an adaptive learning system that can construct and manage learning materials for teachers and can offer a more appropriate learning environment for students. In MALS, two algorithms, ICCAlg and CFRAlg, are applied to construct and revise the course framework according to the students' aptitude. The prototype of MALS shows that teachers can author the learning materials through the authoring interface, and that the course framework can be constructed and re-constructed adaptively in accordance with an individual student's aptitude. In addition, the SCORM standard and XSL are used as the data representation of the teaching materials. Therefore, MALS has individuality, flexibility, maintainability, exchangeability, and extensibility. Although experimental results have demonstrated the benefits of applying MALS, there are some limitations:

• Additional time is needed to prepare teaching materials. Based upon the object-orientation, teachers must make the additional effort of segmenting the original teaching materials into several learning objects and attaching the attributes to the learning objects. Compared to the traditional course framework, the preparation time for teaching materials is obviously longer.

• Additional time is needed to analyze teaching materials. It might be time-consuming for teachers to analyze the teaching materials in order to define the background knowledge, learning level, and difficulty level of the learning objects.

• More evaluations of different courses are needed to assess the effectiveness of the innovative approach. The attributes of learning objects play an important role in MALS; according to these attributes, different course frameworks can be constructed for different students in accordance with their aptitude. So far we have applied this innovative approach only to some computer science courses. It would be interesting to see its effectiveness for various other kinds of courses, such as history, chemistry, and language courses.
Fortunately, some semi-automatic tools have been provided in MALS to assist the teachers in analyzing teaching materials and defining learning objects. We are now planning to apply MALS to several computer courses, including database systems, expert systems and computer networks.
Acknowledgement

This study is supported in part by the National Science Council of the Republic of China under contract numbers NSC 95-2520-S-009-007-MY3 and NSC 95-2520-S-024-003-MY3.
References

Alessi, S. M., & Trollip, S. R. (1991). Computer-based instruction: Methods and development (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Brusilovsky, P. (2001). Adaptive hypermedia. User Modeling and User-Adapted Interaction, 11(1/2), 87–110.
Brusilovsky, P., & Maybury, M. T. (2002). From adaptive hypermedia to the adaptive web. Communications of the ACM, 45(5), 30–33.
Chu, H. C., Hwang, G. J., Tseng, J. C. R., & Hwang, G. H. (2006). A computerized approach to diagnosing student learning problems in health education. Asian Journal of Health and Information Sciences, 1(1), 43–60.
Dagger, D., Wade, V., & Conlan, O. (2005). Personalisation for all: Making adaptive course composition easy. Educational Technology & Society, 8(3), 9–25.
Gonzalez, A. V., & Ingraham, L. R. (1994). Automated exercise progression in simulation-based training. IEEE Transactions on Systems, Man and Cybernetics, 24(6), 863–874.
Graf, S. (2006). Book review: Adaptable and adaptive hypermedia systems (Sherry Y. Chen and George D. Magoulas). Educational Technology & Society, 9(1), 361–364.
Harp, S. A., Samad, T., & Villano, M. (1995). Modeling student knowledge with self-organizing feature maps. IEEE Transactions on Systems, Man and Cybernetics, 25(5), 727–737.
Hwang, G. J. (2003). A concept map model for developing intelligent tutoring systems. Computers & Education, 40(3), 217–235.
Hwang, G. J., Lin, B. M. T., & Lin, T. L. (2006). An effective approach for test-sheet composition from large-scale item banks. Computers & Education, 46(2), 122–139.
Kalogirou, S. A. (2003). Artificial intelligence for the modeling and control of combustion processes: A review. Progress in Energy and Combustion Science, 29, 515–566.
Karampiperis, P., & Sampson, D. (2005). Adaptive learning resources sequencing in educational hypermedia systems. Educational Technology & Society, 8(4), 128–147.
Ngai, E. W. T., & Wat, F. K. T. (2003). Design and development of a fuzzy expert system for hotel selection. Omega, 31, 275–286.
Ozdemir, B., & Alpaslan, F. N. (2000). An intelligent tutoring system for student guidance in Web-based courses. Proceedings of the 4th International Conference on Knowledge-Based Intelligent Engineering Systems and Allied Technologies, Vol. 2, 835–839.
Papasalouros, A., Retalis, S., & Papaspyrou, N. (2004). Semantic description of educational adaptive hypermedia based on a conceptual model. Educational Technology & Society, 7(4), 129–142.
Retalis, S., & Papasalouros, A. (2005). Designing and generating educational adaptive hypermedia applications. Educational Technology & Society, 8(3), 26–35.
Rowe, N. C., & Galvin, T. P. (1998). An authoring system for intelligent procedural-skill tutors. IEEE Intelligent Systems, 13(3), 61–69.
Sharable Content Object Reference Model (SCORM) 2004, Advanced Distributed Learning. Retrieved from http://www.adlnet.org/
Su, J. M., Tseng, S. S., Wang, C. Y., Lei, Y. C., Sung, Y. C., & Tsai, W. N. (2005). A content management scheme in a SCORM compliant learning object repository. Journal of Information Science and Engineering, 21, 1053–1075.
Tseng, J. C. R., & Hwang, G. J. (2004). A novel approach of learning object extraction and management to support multiple standards. IEEE Learning Technology Newsletter, 6(2), 28–30.
Weiss, D. J., & Kingsbury, G. G. (1984). Application of computerized adaptive testing to educational problems. Journal of Educational Measurement, 21, 361–375.
Zadeh, L. A. (1965). Fuzzy sets. Information and Control, 8, 338–353.
Zadeh, L. A. (1971). Quantitative fuzzy semantics. Information Sciences, 3, 159–176.
Zadeh, L. A. (1973). Outline of a new approach to the analysis of complex systems and decision processes. IEEE Transactions on Systems, Man, and Cybernetics, 3, 28–44.