Karampiperis P. and Sampson D. (2005). Automatic Learning Object Selection and Sequencing in Web-Based Intelligent Learning Systems using Learning Technology Specifications. In Zongmin Ma (Ed.), Web-Based Intelligent e-Learning Systems: Technologies and Applications, Idea Group Inc, January 2005.

Automatic Learning Object Selection and Sequencing in Web-Based Intelligent Learning Systems using Learning Technology Specifications
Abstract

Automatic courseware authoring is recognized as among the most interesting research questions in intelligent web-based education. Automatic courseware authoring is the process of automatic learning object selection and sequencing. In most intelligent learning systems that incorporate course sequencing techniques, learning object selection and sequencing is based on a set of teaching rules according to the cognitive style or learning preferences of the learners. In spite of the fact that most of these rules are generic (i.e. domain independent), there are no well-defined and commonly accepted rules on how the learning objects should be selected and how they should be sequenced to make “instructional sense”. Moreover, in order to design adaptive learning systems a huge set of rules is required, since the dependencies between the educational characteristics of learning objects and learners are rather complex. In this chapter, we address the learning object selection and sequencing problem in intelligent learning systems by proposing a methodology that, instead of forcing an instructional designer to manually define the set of selection and sequencing rules, produces a decision model that mimics the way the designer decides, based on the observation of the designer’s reactions over a small-scale learning object selection case.
INTRODUCTION

The high rate of evolution of e-learning platforms implies that, on the one hand, increasingly complex and dynamic web-based learning infrastructures need to be managed more efficiently and, on the other hand, new types of learning services and mechanisms need to be developed and provided. To meet current needs, such services should satisfy a diverse range of requirements, for example, personalization and adaptation (Dolog, Henze, Nejdl, Sintek, 2004).
Learning object selection is the first step towards adaptive navigation and adaptive course sequencing. Adaptive navigation seeks to present the learning objects associated with an on-line course in an optimized order, where the optimization criteria take into consideration the learner’s background and performance on related learning objects (Brusilovsky, 1999), whereas adaptive course sequencing is defined as the process that selects learning objects from a digital repository and sequences them in a way that is appropriate for the targeted learning community or individuals (Knolmayer, 2003).

Learning object selection and sequencing is recognized as among the most interesting research questions in intelligent web-based education (McCalla, 2000; Dolog, Nejdl, 2003; Devedžić, 2003). Although many types of intelligent learning systems are available, we can identify five key components which are common in most systems, namely, the student model, the expert model, the pedagogical module, the domain knowledge module, and the communication model. Figure 1 provides a view of the interactions between these modules.
Figure 1. Main Components of Intelligent Learning Systems
In most intelligent learning systems that incorporate course sequencing techniques, the pedagogical module is responsible for setting the principles of content selection and instructional planning. The selection of content (in our case, learning objects) is based on a set of teaching rules according to the cognitive style or learning preferences of the learners (Brusilovsky, Vassileva, 2003; Stash, De Bra, 2004). In spite of the fact that most of these rules are generic (i.e. domain independent), there are no well-defined and commonly accepted rules on how the learning objects should be selected and how they should be sequenced to make “instructional sense” (Knolmayer, 2003; Mohan, Greer, McCalla, 2003). Moreover, in order to design adaptive learning systems a huge set of rules is required, since the dependencies between the educational characteristics of learning objects and learners are rather complex. In this chapter, we address the learning object selection and sequencing problem in intelligent learning systems by proposing a methodology that, instead of forcing an instructional designer to manually define the set of selection rules, produces a decision model that mimics the way the designer decides, based on the observation of the designer’s reactions over a small-scale learning object selection problem.

In the next section we discuss the learning object selection process and the instructional planning process as part of automatic course sequencing. The third section presents a methodology for capturing the expert’s decision model on learning object selection. The fourth section presents a methodology for automating instructional planning and constitutes the main contribution of this chapter. Finally, we present experimental results of the proposed methodology by comparing the learning objects selected and sequenced by the proposed method with those selected and sequenced by experts.
AUTOMATIC COURSE SEQUENCING
In automatic course sequencing, the main idea is to generate a course suited to the needs of the learners. As described in the literature, two main approaches for automatic course sequencing have been identified (Brusilovsky, Vassileva, 2003): Adaptive Courseware Generation and Dynamic Courseware Generation.

In Adaptive Courseware Generation the goal is to generate an individualized course taking into account specific learning goals, as well as the initial level of the student’s knowledge. The entire course is adaptively generated before being presented to the learner, instead of being generated incrementally, as in a traditional sequencing context. In Dynamic Courseware Generation, on the other hand, the system observes the student’s progress during his interaction with the course and dynamically adapts the course according to the specific student’s needs and requirements. If the student’s performance does not meet the expectations, the course is dynamically re-planned. The benefit of this approach is that it applies as much adaptivity to an individual student as possible.

Both of the above mentioned techniques employ a pre-filtering mechanism to generate a pool of learning objects that match the general content requirements. The main goal of filtering is the reduction of the search space. Learning object repositories often contain hundreds or thousands of learning objects, so the selection process may require significant computational time and effort. In most intelligent learning systems, learning object filtering is based either on the knowledge domain they cover or on the media type characteristics they contain (Kinshuk, Oppermann, Patel & Kashihara, 1999). In the IEEE LOM metadata model, there exist a number of elements covering requirements such as the subject, the language and the media type of the targeted learning object. Alternatively, filtering can be based on the integration of the IEEE LOM metadata model elements and ontologies (Kay, Holden, 2002; Urban, Barriocanal, 2003), but those approaches assume that both the domain model and the learning objects themselves use the same ontology (Mohan, Greer, McCalla, 2003) and limit the filtering only to knowledge domain filtering. The result of the filtering process falls into a virtual pool of learning objects that will act as an input space for the content selector. This pool can be generated from both distributed and local learning object repositories, provided that the appropriate access controls have been granted.
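As an illustration, the pre-filtering stage can be sketched as a sequence of hard checks over learning object metadata. The following Python sketch assumes a simplified dictionary representation of the metadata; field names such as `subject` and `media_type` are illustrative stand-ins, not actual IEEE LOM element bindings:

```python
def filter_pool(repository, requirements):
    """Return the virtual pool of learning objects matching the hard,
    straightforward-exclusion requirements (subject, language, media type)."""
    pool = []
    for lo in repository:
        if lo["subject"] != requirements["subject"]:
            continue  # knowledge-domain filtering
        if lo["language"] not in requirements["languages"]:
            continue  # language filtering
        if lo["media_type"] not in requirements["media_types"]:
            continue  # media-type filtering
        pool.append(lo)
    return pool

# Hypothetical repository and general content requirements.
repository = [
    {"id": "lo1", "subject": "algebra", "language": "en", "media_type": "text"},
    {"id": "lo2", "subject": "algebra", "language": "de", "media_type": "video"},
    {"id": "lo3", "subject": "geometry", "language": "en", "media_type": "text"},
]
requirements = {"subject": "algebra", "languages": {"en"},
                "media_types": {"text", "video"}}
pool = filter_pool(repository, requirements)
print([lo["id"] for lo in pool])  # only lo1 survives all three checks
```

The surviving pool is what the content selector described below receives as its input space.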
Figure 2. Generalized Framework of Automatic Course Sequencing (local or distributed learning object repositories are filtered against the general requirements into a virtual pool of learning objects, which feeds the content selector and the course instructional planner, driven by the pedagogical module, the student model with its learner characteristics, and the domain knowledge with its domain ontologies)
After the creation of the initial pool of learning objects, the content selection process is applied based on learner characteristics, such as accessibility and competency characteristics or even historical information about related learning activities, included in the Student Model module. Figure 2 presents a generalized framework of the above mentioned course sequencing techniques that utilize filtering, content selection and instructional planning processes. In the next sections we present a methodology for automatic learning object selection and sequencing.
LEARNING OBJECT SELECTION
Typically, the design of adaptive learning systems requires a huge set of rules, since the dependencies between the educational characteristics of learning objects and learners are rather complex. This complexity introduces several problems in the definition of the required rules (Wu, De Bra, 2001; Calvi & Cristea, 2002), namely:
- Inconsistency, when two or more rules are conflicting;
- Confluence, when two or more rules are equivalent;
- Insufficiency, when one or more required rules have not been defined.

The proposed methodology is based on an intelligent mechanism that tries to mimic an instructional designer’s decision model on the selection of learning objects. To do so, we have designed a framework that attempts to construct a suitability function that maps learning object characteristics over learner characteristics and vice versa. The main advantage of this method is that it requires less effort by the instructional designer, since instead of identifying a huge set of rules, only the designer’s selection from a small set of learning objects over a reference set of learners is needed. The machine learning technique will then try to discover the dependence between learning object and learner characteristics that produces the same selection of learning objects per learner as the instructional designer did.
Figure 3. Selection Model Extraction Framework (Step 1: modeling of learning object and learner characteristics, and expression of the expert’s learning object assignments on a reference set of learners; Step 2: selection model extraction, with a consistency check between the expert’s and the resulting model; Step 3: extrapolation on the whole set of learner instances, with an assignment model consistency check that may judge the assignment model non-sufficient)
The proposed methodology does not depend on the characteristics used for learning object and learner modeling, and thus can be used for the extraction of even complex pedagogy-related dependencies. It is obvious that, since characteristics/requirements like the domain are used for filtering, the dependencies produced are quite generic, depending only on the educational characteristics of the content and the cognitive characteristics of the learner.
Figure 3 presents a graphical representation of the Selection Model Extraction Framework, consisting of three main steps:

- Step 1: Modeling and Selection of Criteria

The selection methodology is generic, independent of the learning object and the learner characteristics used for the selection. In our experiment, we used learning object characteristics derived from the IEEE Learning Object Metadata (LOM) standard and learner characteristics derived from the IMS Learner Information Package (LIP) specification.
Table 1. LO Selector Input Space (Learning Object characteristics)

Selection Criteria: General
- LOM/General/Structure: Underlying organizational structure of a Learning Object.
- LOM/General/Aggregation Level: The functional granularity (level of aggregation) of a Learning Object.

Selection Criteria: Educational
- LOM/Educational/Interactivity Type: Predominant mode of learning supported by a Learning Object.
- LOM/Educational/Interactivity Level: The degree to which a learner can influence the aspect or behavior of a Learning Object.
- LOM/Educational/Semantic Density: The degree of conciseness of a Learning Object, estimated in terms of its size, span or duration.
- LOM/Educational/Typical Age Range: Age of the typical intended user. This element refers to developmental age and not chronological age.
- LOM/Educational/Difficulty: How hard it is to work with or through a Learning Object for the typical intended target audience.
- LOM/Educational/Intended End User Role: Principal user(s) for which a Learning Object was designed, most dominant first.
- LOM/Educational/Context: The principal environment within which the learning and use of a LO is intended to take place.
- LOM/Educational/Typical Learning Time: Typical time it takes to work with or through a LO for the typical intended target audience.
- LOM/Educational/Learning Resource Type: Specific kind of Learning Object. The most dominant kind shall be first.
There exist many criteria affecting the decision of learning object selection. Those criteria that lead to a straightforward exclusion of learning objects, such as the subject, the language and the media type, are used for filtering. The rest of the criteria, such as the educational characteristics of learning objects, are used for selection model extraction, since the dependencies of those criteria can model the pedagogy applied by the instructional designer when selecting learning objects. Those criteria, due to the complexity of the interdependencies between them, are the ones that cannot be directly mapped to rules by the instructional designer; thus an automatic extraction method, like the proposed one, is needed. In Tables 1 and 2 we have identified the LOM and LIP characteristics, respectively, that can be used as an input space (set of selection criteria) to the learning object selector.

Table 2. LO Selector Input Space (Learner characteristics)

Selection Criteria: Accessibility
- LIP/Accessibility/Preference/typename: The type of cognitive preference.
- LIP/Accessibility/Preference/prefcode: The coding assigned to the preference.
- LIP/Accessibility/Eligibility/typename: The type of eligibility being defined.
- LIP/Accessibility/Disability/typename: The type of disability being defined.

Selection Criteria: Qualifications, Certifications, Licenses (QCL)
- LIP/QCL/Level: The level/grade of the QCL.
  Usage condition: LIP/QCL/Typename, LIP/QCL/Title and LIP/QCL/Organization should refer to a qualification related to the objectives of the learning goal; LIP/QCL/date > Threshold.

Selection Criteria: Activity
- LIP/Activity/Evaluation/noofattempts: The number of attempts made on the evaluation.
- LIP/Activity/Evaluation/result/interpretscope: Information that describes the scoring data.
- LIP/Activity/Evaluation/result/score: The scoring data itself.
  Usage condition: LIP/Activity/Typename, LIP/Activity/status, LIP/Activity/units and LIP/Activity/Evaluation/Typename should refer to a qualification related to the objectives of the learning goal; LIP/Activity/date > Threshold; LIP/Activity/Evaluation/date > Threshold.
- Step 2: Selection Model Extraction

After identifying the set of characteristics/criteria (Step 1) that will be used as the input space of the LO Selector, we try to extract, for each learning object characteristic, the expert’s suitability evaluation model over a reference set of LIP-based characterized learners. The input to this phase is the IEEE LOM characteristics of a reference set of learning objects, the IMS LIP characteristics of a reference set of learners, and the suitability preference of an expert for each of the learning objects over the whole reference set of learners.

The model extraction methodology has the following formulation. Let us consider a set of learning objects, called $A$, which is valued by a set of criteria $g = (g_1, g_2, \ldots, g_n)$. The assessment model of the suitability of each learning object for a specific learner leads to the aggregation of all criteria into a unique criterion that we call a suitability function: $S(g) = S(g_1, g_2, \ldots, g_n)$. We define the suitability function as an additive function of the form $S(g) = \sum_{i=1}^{n} s_i(g_i)$ with the following additional notation:
- $s_i(g_i)$: marginal suitability of the $i$-th selection criterion valued $g_i$;
- $S(g)$: global suitability of a learning object.

The marginal suitability evaluation for the criterion $g_i$ is calculated using the formula $s_i(x) = a_i + b_i x \exp(-c_i x^2)$, where $x$ is the corresponding value of the $g_i$ learning object selection criterion. This formula produces, according to the parameters $a$, $b$ and $c$, as well as the value space of each criterion, the two main criteria forms we have identified:
- Monotonic form: when the marginal suitability of a criterion is a monotonic function;
- Non-monotonic form: when the marginal suitability of a criterion is a non-monotonic function.

The calculation of the optimal values of the parameters $a$, $b$ and $c$ for each selection criterion is the subject of the Knowledge Model Extraction step. Let us call $P$ the strict preference relation and $I$ the indifference relation. If $S_{O_1}$ is the global suitability of a learning object $O_1$ and $S_{O_2}$ is the global suitability of a learning object $O_2$, then the following properties generally hold for the suitability function $S$:
$$S_{O_1} > S_{O_2} \Leftrightarrow O_1 \, P \, O_2, \qquad S_{O_1} = S_{O_2} \Leftrightarrow O_1 \, I \, O_2$$
and the relation $R = P \cup I$ is a weak order relation.
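A minimal Python sketch of the additive suitability model follows; the parameter values are invented for illustration, since the real parameters are the output of the optimization step described next:

```python
import math

def marginal_suitability(x, a, b, c):
    """s_i(x) = a + b * x * exp(-c * x^2): monotonic or non-monotonic
    depending on the values of b and c over the criterion's value space."""
    return a + b * x * math.exp(-c * x * x)

def global_suitability(criteria_values, params):
    """Additive model S(g) = sum_i s_i(g_i); params is a list of (a, b, c)
    triples, one per selection criterion."""
    return sum(marginal_suitability(x, a, b, c)
               for x, (a, b, c) in zip(criteria_values, params))

# Two learning objects scored on two criteria with illustrative parameters.
params = [(0.1, 1.0, 0.2), (0.0, 0.8, 0.5)]
s1 = global_suitability([0.9, 0.4], params)  # learning object O1
s2 = global_suitability([0.3, 0.7], params)  # learning object O2
print(s1 > s2)  # O1 P O2 iff S_O1 > S_O2
```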
The expert’s requested information then consists of the weak order $R$ defined on $A$ for several learner instances. Using the provided weak order relation $R$, and based on the form definition of each learning object characteristic, we can define the suitability differences $\Delta = (\Delta_1, \Delta_2, \ldots, \Delta_{m-1})$, where $m$ is the number of learning objects in the reference set $A$ and $\Delta_k = S_{O_k} - S_{O_{k+1}} \geq 0$, depending on the suitability relation of the $(k)$-th and $(k+1)$-th preferred learning object for a specific learner of the reference set. We can introduce an error function $e$ for each suitability difference: $\Delta_k = S_{O_k} - S_{O_{k+1}} + e_k \geq 0$. Using constrained optimization techniques, we can then solve the non-linear problem:
$$\text{Minimize} \sum_{j=1}^{m-1} (e_j)^2$$
subject to the constraints
$$\Delta_j > 0 \ \text{if} \ O_j \, P \, O_{j+1}, \qquad \Delta_j = 0 \ \text{if} \ O_j \, I \, O_{j+1}$$
for each one of the learners of the reference set.
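To make the formulation concrete, the sketch below replaces a proper constrained optimizer with a coarse grid search over the $(a, b, c)$ parameters of each criterion, minimizing the squared errors $e_k$ that absorb violated preference constraints. The grid, the ranking and the criterion values are invented for illustration:

```python
import itertools
import math

def marginal(x, a, b, c):
    return a + b * x * math.exp(-c * x * x)

def suitability(lo, params):
    return sum(marginal(x, *p) for x, p in zip(lo, params))

def rank_error(orderings, params):
    """Sum of squared errors e_k needed to satisfy Delta_k >= 0.
    `orderings` holds, per reference learner, the expert's list of learning
    objects (criterion-value vectors) from most to least preferred."""
    err = 0.0
    for ranked in orderings:
        for better, worse in zip(ranked, ranked[1:]):
            delta = suitability(better, params) - suitability(worse, params)
            if delta < 0:  # preference violated: e_k must absorb it
                err += delta * delta
    return err

def fit(orderings, n_criteria, grid=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Coarse grid search standing in for the constrained optimizer."""
    candidates = [(a, b, c) for a in grid for b in grid for c in (0.0, 0.5, 1.0)]
    best, best_err = None, float("inf")
    for params in itertools.product(candidates, repeat=n_criteria):
        e = rank_error(orderings, params)
        if e < best_err:
            best, best_err = params, e
    return best, best_err

# One reference learner who ranked three learning objects on two criteria.
orderings = [[(0.9, 0.8), (0.5, 0.5), (0.1, 0.2)]]
params, err = fit(orderings, 2)
print(err)  # a zero-error fit exists for this consistent ranking
```

A production implementation would use a proper non-linear constrained solver instead of exhaustive search, but the objective and constraints are the same.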
This optimization problem leads to the calculation of the optimal values of the parameters $a$, $b$ and $c$ for each learning object selection criterion over the reference set of learners.

- Step 3: Extrapolation
The purpose of this phase is to generalize the resulting marginal suitability model from the reference set of learners to all learners, by calculating the corresponding marginal suitability values for every combination of learner characteristics. This calculation is based on the interpolation of the marginal suitability values between the two closest instances of the reference set of learners.

Suppose that we have calculated the marginal suitabilities $s_i^{L_1}$ and $s_i^{L_2}$ of a criterion $c_i$ matching the characteristics of learners $L_1$ and $L_2$ respectively. We can then calculate the corresponding marginal suitability value for another learner $L$ using interpolation, if the characteristics of learner $L$ are mapped inside the polyhedron that the characteristics of learners $L_1$ and $L_2$ define, using the formula:
$$s_i(c_i^{L}) = s_i(c_i^{L_1}) + \frac{c_i^{L} - c_i^{L_1}}{c_i^{L_2} - c_i^{L_1}} \left[ s_i(c_i^{L_2}) - s_i(c_i^{L_1}) \right], \quad \text{if} \ s_i(c_i^{L_2}) > s_i(c_i^{L_1})$$

Let $C_i = [c_{i*}, c_i^{*}]$, $i = 1, 2, \ldots, n$ be the intervals in which the values of each criterion, for both learning objects and learners, are found; then we call the global suitability surface the space $C = \times_{i=1}^{n} C_i$. The calculation of the global suitability over the above mentioned space is the addition of the marginal suitability surfaces for each of the learning object characteristics over the whole combination set of learner characteristics.
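For a single criterion, the interpolation formula above reduces to standard linear interpolation between the two closest reference learners; the values below are invented for illustration:

```python
def interpolate_marginal(c_L, c_L1, c_L2, s_L1, s_L2):
    """Marginal suitability for a new learner L whose criterion value c_L
    lies between the values c_L1 and c_L2 of reference learners L1 and L2."""
    t = (c_L - c_L1) / (c_L2 - c_L1)
    return s_L1 + t * (s_L2 - s_L1)

# Reference learners with criterion values 0.2 and 0.8; a new learner at 0.5
# receives the midpoint of the two marginal suitabilities (0.4 and 0.9).
print(interpolate_marginal(0.5, 0.2, 0.8, 0.4, 0.9))  # midpoint, 0.65
```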
LEARNING OBJECT SEQUENCE SELECTION

The instructional plan of an intelligent educational system can be considered as two interconnected networks or “spaces”:
− a network of concepts (knowledge space), and
− a network of educational material (hyperspace or media space).

Accordingly, the instructional planning process involves three key steps (Brusilovsky, 2003):

− Structuring the knowledge. The heart of the knowledge-based approach to developing intelligent learning management systems is a structured domain model that is composed of a set of small domain knowledge elements (DKE). Each DKE represents an elementary fragment of knowledge for the given domain. Depending on the domain, the application area, and the choice of the designer, concepts can represent bigger or smaller pieces of domain knowledge. The use of ontologies can significantly simplify the task of knowledge structuring by providing a standard-based way for knowledge representation. Ontologies are specifications of the conceptualization and corresponding vocabulary used to describe a domain (Gruber, 1993). Ontologies typically consist of definitions of concepts relevant for the domain, their relations, and axioms about these concepts and relationships. For the instructional planning process we have identified four classes of concept relationships, namely:
- “Consists of”: this class relates a concept with its sub-concepts;
- “Similar to”: this class relates two concepts with the same semantic meaning;
- “Opposite of”: this class relates a concept with another concept semantically opposite from the original one;
- “Related with”: this class relates concepts that have a relation different from the above mentioned.
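These four relationship classes can be represented as typed edges in a small graph structure; the concept names below are invented purely for illustration:

```python
# Hypothetical fragment of a knowledge space: concept pairs mapped to one of
# the four relationship classes identified above.
knowledge_space = {
    ("equations", "linear_equations"): "consists_of",
    ("equations", "quadratic_equations"): "consists_of",
    ("linear_equations", "first_degree_equations"): "similar_to",
    ("increasing_functions", "decreasing_functions"): "opposite_of",
    ("equations", "functions"): "related_with",
}

def sub_concepts(space, concept):
    """Children of a concept via the 'Consists of' relationship class."""
    return sorted(dst for (src, dst), kind in space.items()
                  if src == concept and kind == "consists_of")

print(sub_concepts(knowledge_space, "equations"))
# ['linear_equations', 'quadratic_equations']
```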
− Structuring the media space. Structuring of the media space is based on the use of learning object metadata. More precisely, in the IEEE LOM metadata model, the ‘Relation’ Category defines the relationship between a specific learning object and other learning objects, if any. The kind of relation is described by the sub-element ‘Kind’, which holds 12 predefined values based on the corresponding element of the Dublin Core Element Set. In our case we use only four of the predefined relation values, namely:
- “is part of” / “has part”
- “references” / “is referenced by”
- “is based on” / “is basis for”
- “requires” / “is required by”
Figure 4. Knowledge Space and Media Space Connection (concepts in the domain ontology are linked by the relationships “consists of”, “similar to”, “opposite of” and “related with”; learning objects are linked by the relations “is part of”/“has part”, “references”/“is referenced by”, “is based on”/“is basis for” and “requires”/“is required by”)

− Connecting the knowledge space and the media space. The connection of the knowledge space with the educational material can be based on the use of the ‘Classification’ Category, defined by the IEEE LOM Standard as an element category that describes where a specific learning object falls within a particular classification
system. The integration of the IEEE LOM ‘Classification’ Category with ontologies provides a simple way of identifying the domains covered by a learning object. Since it is assumed that both the domain model and the learning objects themselves use the same ontology, the connection process is then relatively straightforward. Figure 4 presents an example of the connection of the two spaces.

The result of the merging of the knowledge space and the media space is a directed acyclic graph (DAG) of learning objects inheriting relations from both spaces. In order to extract from the resulting graph of learning objects the “optimum” sequence of learning objects, we need to add learner preference information to the DAG. This information has the form of a weight on each connection of the DAG. As a weighting function we use the inverse of the suitability function calculated in the content selection phase. This means that the bigger a weight is, the less suitable the target learning object is. After weighting the DAG with the use of the suitability function, we need to find the optimum (shortest) path by the use of a shortest path algorithm.

The result of applying the shortest path algorithm is the optimum sequence of learning objects that covers the desired concepts (thus reaching the learning goal) according to the cognitive characteristics and preferences of the learner.
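Since the weights (inverse suitabilities) are positive, any standard shortest-path algorithm applies; the sketch below uses Dijkstra's algorithm over a small hypothetical DAG whose suitability values are invented for illustration:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over the weighted learning-object DAG. Edge weights are the
    inverse of the suitability of the target learning object, so the
    shortest path is the most suitable learning object sequence."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(queue, (nd, nxt))
    # Reconstruct the optimum sequence by walking predecessors backwards.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Illustrative DAG: each edge weight is 1/S for a hypothetical suitability S.
graph = {
    "start": [("lo1", 1 / 0.9), ("lo2", 1 / 0.4)],
    "lo1": [("lo3", 1 / 0.7)],
    "lo2": [("lo3", 1 / 0.8)],
    "lo3": [("goal", 1 / 0.6)],
}
print(shortest_path(graph, "start", "goal"))  # ['start', 'lo1', 'lo3', 'goal']
```

The route through the highly suitable lo1 wins even though the lo2 edge into lo3 is cheaper, because path cost accumulates over the whole sequence.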
EXPERIMENTAL RESULTS AND DISCUSSION

In order to evaluate the total efficiency of the proposed methodology, both in calculating the suitability on the reference (training) set of learning objects and in estimating the suitability of learning objects external to the reference set, we have designed an evaluation criterion, defined by:
$$\text{Success}\,(\%) = 100 \cdot \frac{\text{Correct Learning Objects Selected}}{n}$$
where $n$ is the number of the desired learning objects from the virtual pool that will act as input to the instructional planner. We assume that the number of desired learning objects is less than the total number of learning objects in the input space (learning objects pool) and that both the learning object metadata and the learner information metadata have a normal distribution over the value space of each criterion.

Additionally, we have classified the learning objects, for both sets of learning objects, into two classes according to their aggregation level, since granularity is a parameter affecting the capability of an instructional designer to select learning content for a specific learner. The classification is based on the value space of the “General/Aggregation_Level” element of the IEEE LOM standard. Table 3 presents a description of the two classes used.
Table 3. Learning Object Aggregation Levels according to the IEEE LOM standard

IEEE LOM Element: General/Aggregation_Level
- Value 1: The smallest level of aggregation, e.g. raw media data or fragments.
- Value 2: A collection of level 1 learning objects, e.g. a lesson chapter or a full lesson.

We present experimental results of the proposed methodology by comparing the resulting selected learning objects with those selected by experts. We have evaluated the success both on the reference set of learning objects (Training Success) and on the suitability estimation of learning objects external to the reference set (Estimation Success). Figures 5 and 6 present average experimental results for learning objects with aggregation level 1 and 2 respectively.
Figure 5. Average Experimental Results for Learning Objects with Aggregation Level 1

n:                          10     20     50     100    200    500
Training Set (% Success):   100    100    96.7   95.4   92.1   90.6
Estimation Set (% Success): 100    99.2   95.3   93.1   90.6   88.4
If we consider that, for one learner instance, the different combinations of learning objects, calculated as the multiplication of the value instances of the characteristics in the IEEE LOM information model, lead to more than one million learning object categories, it is evident that it is almost unrealistic to assume that an instructional designer can manually define the full set of selection rules which correspond to the dependencies extracted by the proposed method, while at the same time avoiding the inconsistency, confluence and insufficiency of the produced selection rules.

The proposed methodology is capable of effectively extracting the dependencies between learning object and learner characteristics that affect the decision of an instructional designer on the learning object selection problem. Further analysis of the results presented in Figures 5 and 6 shows that when the desired number of learning objects (n) is relatively small (less than 100), the learning objects selected by the extracted decision model are nearly identical to those the instructional designer would select.
Figure 6. Average Experimental Results for Learning Objects with Aggregation Level 2

n:                          10     20     50     100    200    500
Training Set (% Success):   100    100    98.3   97.1   95.6   93.4
Estimation Set (% Success): 100    100    96.5   94.8   92.3   90.8
On the other hand, when the desired number of learning objects is relatively large (about 500), the success of the selection is affected, but remains at an acceptable level (about 90%). Another parameter affecting the selection success proves to be the granularity of the learning objects. Granularity mainly affects the capability of an instructional designer to express selection preferences over learning objects. Learning objects with a small aggregation level are more likely to produce “gray” decision areas, where the instructional designer cannot decide which learning object best matches the cognitive style or learning preferences of a learner. In our experiments, learning objects with aggregation level 2, which can be small or even bigger collections of learning objects with aggregation level 1, appear less likely to produce indifference relations, enabling secure decisions even for a bigger desired number of learning objects (n=200).
CONCLUSIONS

In this chapter we address the learning object selection problem in intelligent learning systems by proposing a methodology that, instead of forcing an instructional designer to manually define the set of selection rules, produces a decision model that mimics the way the designer decides, based on the observation of the designer’s reactions over a small-scale learning object selection problem.

Since one of the primary design goals of learning objects is reusability in a variety of diverse learning contexts, learning objects are generally designed in a highly decontextualized manner (South, Monson, 2000). At the same time, it is nearly impossible to define learning characteristics, like difficulty or semantic density, which affect both the selection and the sequencing of learning objects.

Figure 7. The Granularity/Aggregation Spectrum (South, Monson, 2000)

The proposed content selection methodology can provide the framework for designing highly adaptive learning systems, provided that learning objects are as small as needed (above the learning threshold) in order for a content author to be able to identify the pedagogical features they contain (Gibbons, Nelson, Richards, 2000). Figure 7 presents the optimum granularity of learning objects as the space in between the learning and the context thresholds.
Our future research agenda includes learning object decomposition from existing courses, allowing reuse of the disaggregated learning objects in different educational contexts. The intelligent selection of the disaggregation level, and the automatic structuring of the atoms (raw media) inside the disaggregated components in order to preserve the educational characteristics they were initially designed for, is a key issue in the research agenda for learning objects (Duval, 2003).
REFERENCES
Brusilovsky, P. (1999). Adaptive and intelligent technologies for web-based education. Künstliche Intelligenz, vol. 4, pp. 19-25.
V
Brusilovsky P. and Vassileva J. (2003). Course sequencing techniques for large-scale Web-based education. International Journal of Continuing Engineering Education and
FT
Life-long Learning, vol.13 (1/2), pp. 75-94.
A
Brusilovsky, P. (2003). Developing adaptive educational hypermedia systems: From
R
design models to authoring tools. Authoring Tools for Advanced Technology Learning
D
Environment, Dordrecht: Kluwer Academic Publishers. Calvi L. & Cristea A. (2002). Towards Generic Adaptive Systems: Analysis of a Case Study. In Proc. of the 2nd International Conference on Adaptive Hypermedia and Adaptive Web Based Systems, Malaga, Spain. Devedžić V.B. (2003). Key Issues in Next-Generation Web-Based Education. IEEE Transactions on Systems, Man and Cybernetics Part C, vol. 33 (3) pp. 339-349.
Dolog P., Nejdl W. (2003). Challenges and Benefits of the Semantic Web for User Modelling. Workshop on Adaptive Hypermedia and Adaptive Web-Based Systems, In Proc. of the 12th International World Wide Web Conference, Budapest, Hungary. Dolog P., Henze N., Nejdl W., Sintek M. (2004). Personalization in Distributed eLearning Environments. In Proc. of the 13th International World Wide Web Conference, New York, USA.
Duval, E. and Hodgins, W. (2003). A LOM Research Agenda. In Proc. of the 12th International World Wide Web Conference, Budapest, Hungary.

Gibbons, A. S., Nelson, J. and Richards, R. (2000). The nature and origin of instructional objects. In D. A. Wiley (Ed.), The Instructional Use of Learning Objects: Online Version. Available: http://reusability.org/read/chapters/gibbons.doc

Gruber, T. (1993). A Translation Approach to Portable Ontology Specifications. Knowledge Acquisition, vol. 5, pp. 199-220.

IEEE Learning Object Metadata (IEEE LOM) standard. Available online at: http://ltsc.ieee.org/wg12/

Kay, J. and Holden, S. (2002). Automatic extraction of ontologies from teaching document metadata. ICCE Workshop on Concepts and Ontologies in Web-based Educational Systems. In Proc. of the International Conference on Computers in Education, Auckland, New Zealand.

Kinshuk, Oppermann, R., Patel, A. & Kashihara, A. (1999). Multiple Representation Approach in Multimedia based Intelligent Educational Systems. Artificial Intelligence in Education, Amsterdam: IOS Press, pp. 259-266.
Knolmayer G.F. (2003). Decision Support Models for composing and navigating through e-learning objects. In Proc. of the 36th IEEE Annual Hawaii International Conference on System Sciences, Hawaii, USA. McCalla G. (2000). The fragmentation of culture, learning, teaching and technology: implications for the artificial intelligence in education research agenda in 2010. International Journal of Artificial Intelligence in Education, vol. 11, pp. 177-196.
Mohan, P., Greer, J. and McCalla, G. (2003). Instructional Planning with Learning Objects. Workshop on Knowledge Representation and Automated Reasoning for E-Learning Systems. In Proc. of the 18th International Joint Conference on Artificial Intelligence, Acapulco, Mexico.

South, J. B. & Monson, D. W. (2000). A university-wide system for creating, capturing, and delivering learning objects. In D. A. Wiley (Ed.), The Instructional Use of Learning Objects: Online Version. Available: http://reusability.org/read/chapters/south.doc

Stash, N. and De Bra, P. (2004). Incorporating cognitive styles in AHA! First International Workshop on Adaptable and Adaptive Hypermedia. In Proc. of the IASTED Conference on Web Based Education, Innsbruck, Austria.

Sicilia, M. A. and García, E. (2003). On the integration of IEEE-LOM Metadata Instances and Ontologies. IEEE Learning Technology Newsletter, 5(1), IEEE LTTF, pp. 27-30.

Wu, H. and De Bra, P. (2001). Sufficient Conditions for Well-behaved Adaptive Hypermedia Systems. In Proc. of the First Asia-Pacific Conference on Web Intelligence: Research and Development, Maebashi City, Japan.