A CONTINUOUS UNSUPERVISED ADAPTATION METHOD FOR SPEAKER VERIFICATION
Alexandre Preti¹², Jean-François Bonastre¹, François Capman²
¹LIA, 339 chemin des Meinajaries, 84911 Avignon Cedex 9, France
²Thales, MMP Laboratory, 160 Bd Valmy 92700 Colombes, France
{alexandre.preti,jean-francois.bonastre}@univ-avignon.fr, [email protected]
Abstract—This paper deals with unsupervised model adaptation for speaker verification. We propose a new method for updating speaker models using all the test information coming into the system. It is a continuous adaptation method that relies on the probability that a test trial belongs to the target speaker. Our adaptation scheme is evaluated in the framework of NIST SRE 2005. For the NIST unsupervised adaptation mode, this approach reaches relative improvements of 15% in DCF and 35% in EER.
I. INTRODUCTION
Gaussian Mixture Model (GMM) based systems are widely used in speaker recognition applications due to their robust performance [1]. Despite new normalization techniques such as Bayesian factor analysis [2], GMM-based speaker verification systems show their limits when limited enrollment data are available. A way around this problem is to increase the amount of data by using test information [3,4,5]. Indeed, during normal use of the verification system, some test trials are true target trials and could be added to the enrollment data. This is known as unsupervised adaptation. Moreover, adding information coming from the use of the system can improve its robustness to speaker and channel variabilities, as the speaker's voice and the recording conditions may change over time. Interest in unsupervised adaptation has grown in the community, and many speaker verification systems using this technique have appeared in the past years (NIST unsupervised adaptation task [6]). Results of supervised adaptation show that a large improvement can be reached, but state-of-the-art methods in unsupervised adaptation are far from such improvement [7, 8, 9]. The main drawback of unsupervised adaptation remains the difficulty of setting a decision threshold for selecting the trials. To solve this problem, we propose a new continuous adaptation method that uses all the test information to update the target model. The influence of a given test segment on the new target model is weighted according to its probability of belonging to the target. Section 2 describes the unsupervised adaptation principle.
[email protected]
Section 3 introduces the database, tools and protocols used to set up the experiments. Experimental results are presented in section 4. Finally, conclusions are given in section 5.
II. UNSUPERVISED ADAPTATION
A. State of the art methods
The main idea of unsupervised speaker model adaptation is to detect the true target trials during system use and to adapt the target models using this material. Of course, in order not to deteriorate the target speaker models, as few impostor trials as possible should be selected. Unsupervised adaptation is mainly done as follows (a minimal sketch of this scheme is given at the end of this section):
• the Log Likelihood Ratio (LLR) of the trial given the target model is computed,
• a normalization technique is applied (Tnorm [10]),
• if the test score is above a threshold, the test is kept for adapting the corresponding target model.
The adapted target model is usually obtained by MAP adaptation of the UBM with all the selected material, or by MAP adaptation of the previously trained target model [7]. Some works also proposed varying the regulation factor used for MAP adaptation according to the available quantity of data [8], reaching an improvement. While unsupervised adaptation is quite attractive, promising large improvements, the comparison with supervised adaptation still reveals a large difference in performance [8, 9]. This gap can be explained by the fact that the test segment selection thresholds are very conservative, in order to avoid selecting an impostor trial which could damage the target model (and consequently the speaker verification system performance). In this case, the number of selected trials is small and adaptation does not occur very often. Moreover, the high threshold leads to selecting data already well recognized by the target model (high LLR); consequently, the added data bring little new and interesting information in terms of speaker or channel characteristics. The useful data we expect to use for adaptation are the ambiguous data, for which the acceptance decision is not straightforward, lying in the overlap area between the target and impostor score distributions (see fig. 2).
Finally, the optimum adaptation threshold is very hard to determine: as the amount of data increases when creating the new target models, the best decision threshold changes. A way of bypassing this problem is to always select the trials using the initial target model (learned on a single-session recording), but this is obviously not optimal [9].
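To make the selection scheme concrete, here is a minimal Python sketch of the threshold-based trial selection step. The scores and the threshold value are illustrative only and are not taken from any of the cited systems.

```python
import numpy as np

def select_trials(tnorm_scores, threshold):
    """Classical unsupervised adaptation (section II.A): keep only the
    trials whose normalized score exceeds a conservative threshold.
    A real system would then MAP-adapt the target model [7, 12] on the
    enrollment data plus the selected test segments."""
    scores = np.asarray(tnorm_scores, dtype=float)
    return np.flatnonzero(scores > threshold)

# Toy usage: a conservative threshold keeps only high-LLR trials, i.e.
# data the current model already recognizes well, which is precisely the
# limitation discussed above.
print(select_trials([-0.8, 0.3, 1.7, 2.4, 0.9], threshold=1.5))  # [2 3]
```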
B. Proposed approach
The work described in this paper intends to integrate, in the adaptation set, trials for which the acceptance decision is ambiguous. It also intends to avoid the problem of setting a hard decision threshold for adaptation. The proposed method consists in adapting the target model with all the data gathered during the real life of the system. A weight is assigned to each test and is used in the computation of the Maximum Likelihood estimate (EM algorithm) when updating the new target model. This weight is the a posteriori probability of the trial belonging to the target. To compute these probabilities we use a two-class Bayesian classifier (target/impostor), World MAP (WMAP) [11]. We obtain the a posteriori probability of a target trial given the a priori target and impostor score distributions of the verification system (LLR). It also takes into account the prior probability of both classes. WMAP is defined as follows:
WMAP(s) = (P_tar * p(s | tar)) / (P_tar * p(s | tar) + P_imp * p(s | imp))     (1)

where:
• P_tar is the prior probability of a target trial,
• P_imp is the prior probability of an impostor trial,
• p(s | tar) is the likelihood of the score s (LLR) given the target score distribution,
• p(s | imp) is the likelihood of the score s (LLR) given the impostor score distribution.
Figure 1: WMAP function for scores between -2 and 2
We can notice that WMAP outputs a fixed probability (equal to the prior probability of a target trial) when the observed score lies outside the learned target and impostor score distributions, i.e., when the score is very low or very high (see fig. 1). The score distributions (see fig. 2) are modelled using a 12-component GMM learned on a development data set (the NIST SRE 2004 database).
Figure 2: NIST SRE 2004 target and impostor score distributions
The target model is adapted from the UBM using the initial training segments of the speaker (weight = 1) and the test segments with their corresponding weights. A traditional Bayesian adaptation algorithm (MAP) [12] is used with a regulation factor of 14 (only the mean parameters are adapted).
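As an illustration, the following Python sketch shows how the WMAP weight of equation (1) can be computed from 12-component score GMMs, and how the weights can enter the MAP mean update. This is a minimal sketch, not the LIA_SpkDet implementation: the development score arrays, the segment feature arrays and the GaussianMixture-style `ubm` object are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

P_TAR, P_IMP = 0.1, 0.9  # class priors (values used in section III.A)

def train_score_model(dev_llr_scores, n_components=12):
    """12-component GMM over 1-D development LLR scores (section II.B)."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(np.asarray(dev_llr_scores, dtype=float).reshape(-1, 1))
    return gmm

def wmap_weight(llr, tar_gmm, imp_gmm):
    """Posterior probability that a trial with score `llr` is a target
    trial, i.e. equation (1)."""
    p_tar = np.exp(tar_gmm.score_samples([[llr]])[0])  # p(s | tar)
    p_imp = np.exp(imp_gmm.score_samples([[llr]])[0])  # p(s | imp)
    return P_TAR * p_tar / (P_TAR * p_tar + P_IMP * p_imp)

def weighted_map_means(ubm, segments, seg_weights, tau=14.0):
    """MAP adaptation of the UBM means only (regulation factor tau = 14),
    with each segment's frames weighted in the sufficient statistics by
    its WMAP weight; enrollment segments get weight 1. `ubm` is assumed
    to behave like a fitted sklearn GaussianMixture, and each segment is
    an (n_frames, dim) feature array."""
    n = np.zeros(ubm.n_components)            # weighted soft counts
    sx = np.zeros_like(ubm.means_)            # weighted first-order stats
    for feats, w in zip(segments, seg_weights):
        gamma = ubm.predict_proba(feats) * w  # scaled frame posteriors
        n += gamma.sum(axis=0)
        sx += gamma.T @ feats
    xbar = sx / np.maximum(n, 1e-10)[:, None]
    alpha = (n / (n + tau))[:, None]          # data/prior balance per component
    return alpha * xbar + (1.0 - alpha) * ubm.means_
```

In the full scheme described above, the enrollment segments enter `segments` with weight 1 and every test segment enters with its WMAP weight, so no trial is ever discarded outright.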
III. TOOLS AND PROTOCOLS
A. Database
All the experiments presented in section 4 are performed on the NIST SRE 2005 database, all trials (det 1), 1conv-4w 1conv-4w condition, restricted to male speakers only. This condition comprises 274 speakers. Train and test utterances contain 2.5 minutes of speech on average (extracted from telephone conversations). The whole speaker detection experiment consists of 13624 tests, including 1231 target trials and 12393 impostor trials. The priors used for WMAP are 0.1 for target and 0.9 for impostor trials. From 1 to 170 tests are computed per speaker, with an average of 51 tests.
B. Baseline speaker recognition system
The LIA_SpkDet system [13] developed at the LIA lab is used as the baseline in this paper. Built on the ALIZE platform [14, 15], it was evaluated during the NIST SRE'04, SRE'05 and SRE'06 campaigns, where it obtained good performance for a cepstral GMM-UBM system. Both the LIA_SpkDet system and the ALIZE platform are distributed under an open source licence.
The LIA_SpkDet system is based on the classical UBM-GMM approach with Tnorm likelihood score normalization. The background model training set is composed of recordings from a part of the Fisher corpus; only speakers having a unique conversation have been used, resulting in 1464 selected speakers (male only). For Tnorm, a cohort of 160 male target speakers from the NIST SRE 2004 database has been used. For the front-end processing, the signal is characterized by 50 coefficients: 19 linear frequency cepstral coefficients (LFCC) issued from a filter-bank analysis, their first derivatives, 11 of their second derivatives and the delta energy. The parameterization is done with SPRO [16]. An energy-based frame removal is applied (the energy is modelled by a 3-component GMM [17]), followed by mean and variance normalization of the coefficients. A gender-dependent feature mapping [18] process is applied to all the data for channel robustness; three channel conditions are used (landline, cellular, cordless), and all channel-dependent models are derived from the UBM using MAP adaptation (mean and variance). The UBM and target models contain 2048 Gaussian components. LLR scores are computed using the top ten components. Performance is evaluated through classical DET curves [19] and the decision cost function (DCF).
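To illustrate the scoring step, here is a hedged sketch of the frame-level LLR computation restricted to the top-C UBM components per frame (C = 10 above). It assumes diagonal-covariance models where the target model shares the UBM weights and covariances (consistent with means-only MAP adaptation); the parameter arrays are placeholders, not the actual 2048-component models.

```python
import numpy as np
from scipy.special import logsumexp

def log_gauss_diag(X, means, covs):
    """Per-component diagonal-Gaussian log densities, shape (n_frames, n_comp)."""
    d = X.shape[1]
    cst = -0.5 * (d * np.log(2.0 * np.pi) + np.log(covs).sum(axis=1))
    diff = X[:, None, :] - means[None, :, :]
    return cst - 0.5 * (diff ** 2 / covs[None, :, :]).sum(axis=2)

def top_c_llr(X, ubm_w, ubm_m, ubm_c, spk_m, C=10):
    """Frame-average LLR using only the top-C UBM components per frame,
    evaluated in both the UBM and the adapted speaker model."""
    lg_ubm = np.log(ubm_w) + log_gauss_diag(X, ubm_m, ubm_c)
    top = np.argsort(lg_ubm, axis=1)[:, -C:]      # top-C components per frame
    lg_spk = np.log(ubm_w) + log_gauss_diag(X, spk_m, ubm_c)
    rows = np.arange(X.shape[0])[:, None]
    ll_ubm = logsumexp(lg_ubm[rows, top], axis=1)
    ll_spk = logsumexp(lg_spk[rows, top], axis=1)
    return float(np.mean(ll_spk - ll_ubm))
```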
C. Protocol description
The two protocols described here were already presented, in more detail, in [9].
1) Batch protocol:
This experimental protocol adapts a target model with all the trials involving it: the new target model is computed using all the test information, then the LLR scores for each test are computed using the adapted speaker model.
2) Nist protocol:
The Nist unsupervised adaptation mode allows target models to be updated using previously seen trial segments (including the current segment) before taking the decision on the current trial segment; the order of the trials in the test protocol must be followed [6]. For each test, an LLR is computed using the test data and the corresponding adapted target model.
D. Score normalization
It is well known that the amount of training data plays an important role in the Tnorm approach [20]: performance is better when the amount of data used to train the target speaker models is similar to that used to train the Tnorm impostor models. With unsupervised adaptation, the amount of data used to estimate the target model varies test by test (or by set of tests for the Batch protocol). In order to give a first evaluation of the impact of this issue on system performance, we computed an "adapted Tnorm" cohort. The corresponding Tnorm models are estimated on the NIST SRE 2004 database using the batch unsupervised mode.
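For reference, Tnorm itself reduces to a simple standardization of the trial score against an impostor cohort; a minimal sketch, where the cohort scores are assumed to be precomputed:

```python
import numpy as np

def tnorm(raw_llr, cohort_llrs):
    """Tnorm [10]: normalize the trial score by the mean and standard
    deviation of the scores the same test segment obtains against a
    cohort of impostor models (the basic cohort of section III.B or the
    adapted cohort described above)."""
    cohort = np.asarray(cohort_llrs, dtype=float)
    return (raw_llr - cohort.mean()) / cohort.std()
```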
IV. RESULTS
Experimental results obtained for the unsupervised adaptation method are presented below.
A. Batch unsupervised protocol
Figure 3: DET curves for the Batch unsupervised protocol (adapted TNORM cohort) and baseline (basic TNORM cohort)
The results for the system using the Batch protocol are presented in figure 3. Performance of the baseline system is also shown for comparison. This method reaches a 10% DCF and 37% EER relative improvement.
If we look at the difference in terms of right or wrong decisions, presented in Table 1, we notice that the Batch protocol accepts nearly 5% more (correct) target trials than the non-adaptive system (975 true target trials accepted, compared to 952 for the baseline). It also accepts fewer impostor trials.
B. Nist unsupervised protocol
The results for the system using the Nist protocol are presented in figure 4. Performance of the baseline system is also shown for comparison. Our method brings a 15% DCF and 35% EER relative improvement. Regarding Table 1, the number of true acceptances increases by nearly 12%, from 952 for the baseline to 1040 for the Nist protocol.
However, some degradation of the target models occurs during adaptation, and some additional errors appear (63 false rejections and 91 false acceptances added).
Figure 4: DET curves for the Nist unsupervised protocol and baseline (basic TNORM cohort)
                           Nist protocol   Batch protocol
Target trials accepted          1040             975
Impostor trials accepted         130              89
False rejects added               63              39
True rejects added                58              43
True acceptances added           151              62
False acceptances added           91              35
Table 1: Information on unsupervised adaptation behavior (total trials: 13624, target trials: 1231, impostor trials: 12393). Baseline reference: 952 target trials accepted, 97 impostor trials accepted. For all systems, the optimum a posteriori decision threshold is used.
C. Basic Tnorm vs adapted Tnorm
This section gives an overview of the effects of the two Tnorm cohorts on the two unsupervised systems. Table 2 presents the results obtained with each cohort. We notice that the adapted Tnorm performs well in terms of DCF for both adaptation protocols. Nevertheless, using the adapted Tnorm with the Nist protocol leads to a significant loss in terms of EER (36% relative loss). We also notice that the basic Tnorm is not well suited to the Batch protocol in terms of DCF (see III.D); however, it performs well in terms of EER.

                                DCF    EER (%)
Batch protocol, adapted Tnorm   2.75    5.04
Batch protocol, basic Tnorm     2.98    4.47
Nist protocol, adapted Tnorm    2.53    8.14
Nist protocol, basic Tnorm      2.59    5.20

Table 2: EER and DCF for the Batch and Nist unsupervised adaptation protocols using the two different Tnorm cohorts.
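For completeness, the EER and minimum-DCF operating points reported in Tables 2 and 3 can be obtained from raw score lists by sweeping the decision threshold. The sketch below assumes the standard NIST SRE cost parameters (Cmiss = 10, CFA = 1, Ptar = 0.01), which the paper does not restate.

```python
import numpy as np

def eer_and_min_dcf(tar, imp, c_miss=10.0, c_fa=1.0, p_tar=0.01):
    """EER and minimum decision cost from target and impostor score lists,
    computed by sweeping the decision threshold over all observed scores."""
    tar, imp = np.sort(np.asarray(tar)), np.sort(np.asarray(imp))
    thresholds = np.unique(np.concatenate([tar, imp]))
    # Miss rate: fraction of target scores below the threshold.
    p_miss = np.searchsorted(tar, thresholds, side="left") / len(tar)
    # False-alarm rate: fraction of impostor scores at or above it.
    p_fa = 1.0 - np.searchsorted(imp, thresholds, side="right") / len(imp)
    eer_idx = np.argmin(np.abs(p_miss - p_fa))
    eer = (p_miss[eer_idx] + p_fa[eer_idx]) / 2.0
    dcf = c_miss * p_miss * p_tar + c_fa * p_fa * (1.0 - p_tar)
    return eer, dcf.min()
```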
D. Fusing system scores
This section presents a fusion experiment between our baseline system (defined in section III.B) and the system with unsupervised adaptation, for both the Batch and Nist protocols. The combination is performed by computing an arithmetic mean of the Tnorm scores of the two systems. The best fusion reaches gains of 24% in DCF and 39.5% in EER for the baseline fused with the Nist protocol (fusion weights 0.5, 0.5; see fig. 5), compared to the baseline alone.
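The fusion rule used here is simply a weighted arithmetic mean of the two systems' Tnorm scores; a one-line sketch:

```python
def fuse(score_baseline, score_adapted, w=0.5):
    """Arithmetic weighted mean of the Tnorm scores of two systems;
    w = 0.5 corresponds to the best fusion reported above."""
    return w * score_baseline + (1.0 - w) * score_adapted
```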
Experimental results obtained with the various adaptation protocols are summarized in Table 3.

                        DCF    EER (%)
Baseline                3.05    8.05
Batch protocol          2.75    5.04
Nist protocol           2.59    5.20
Fusion Nist-Baseline    2.32    4.87
Fusion Batch-Baseline   2.83    5.77

Table 3: EER and DCF of (1) the baseline system, (2) the Batch unsupervised system, (3) the Nist unsupervised system, (4) the fusion Nist-baseline and (5) the fusion Batch-baseline (Tnorm scores, see III.D).
Figure 5: DET curves for the fusion system and the baseline (Nist protocol, basic TNorm cohort)
V. DISCUSSION
In this paper, a new method for unsupervised target model adaptation in the framework of a speaker detection system was presented. The proposed algorithm uses a
continuous adaptation process instead of the classical threshold-based technique. It was evaluated on the NIST SRE 2005 database and showed its potential, significantly reducing DCF and EER (up to a 15% DCF and 35% EER relative improvement for the Nist protocol, and 10% DCF and 37% EER for the Batch protocol). We proposed two different adaptation processes that reach nearly the same improvement but differ in terms of error handling. Two Tnorm cohorts were also presented; we showed that the gains in DCF or EER rely on using the appropriate cohort for Tnorm score normalization. The results of previous work on unsupervised adaptation using a classical threshold-based method [9], together with the improvement shown in this paper, highlight the need for data with variability when updating the speaker model. They also show that thresholding scores to select target trials is not well suited to selecting the data that usefully enrich the speaker models.
Future work will focus on better a priori score distribution estimation (using Tnormed scores, for example) and on adapting the speaker detection decision threshold when the unsupervised adaptation mode is used.
ACKNOWLEDGMENT
This work was supported by the French Ministère de la recherche et de l'industrie under CIFRE grant number 858/2005, in association with the Thales Communications company.

REFERENCES
[1] F. Bimbot, J.-F. Bonastre, C. Fredouille, G. Gravier, I. Magrin-Chagnolleau, S. Meignier, T. Merlin, J. Ortega-Garcia, D. Petrovska, D. A. Reynolds, "A tutorial on text-independent speaker verification," EURASIP Journal on Applied Signal Processing, vol. 2004, no. 4, pp. 430-451, 2004.
[2] P. Kenny, G. Boulianne, P. Ouellet, P. Dumouchel, "Improvements in factor analysis based speaker verification," in ICASSP, Toulouse, France, 2006.
[3] C. Barras, S. Meignier, J.-L. Gauvain, "Unsupervised online adaptation for speaker verification over the telephone," in Odyssey, Toledo, Spain, 2004.
[4] L. P. Heck, N. Mirghafori, "On-line unsupervised adaptation in speaker verification," in Proc. International Conference on Spoken Language Processing, Beijing, China, 2000.
[5] C. Fredouille, J. Mariéthoz, C. Jaboulet, J. Hennebert, C. Mokbel, F. Bimbot, "Behavior of a Bayesian adaptation method for incremental enrollment in speaker verification," in ICASSP, Istanbul, Turkey, 2000.
[6] NIST Speaker Recognition Evaluation campaigns web site, http://www.nist.gov/speech/tests/spk/index.htm
[7] D. A. van Leeuwen, "Speaker adaptation in the NIST Speaker Recognition Evaluation 2004," in Interspeech, Lisbon, Portugal, 2005.
[8] E. G. Hansen, R. E. Slyh, T. R. Anderson, "Supervised and unsupervised speaker adaptation in the NIST 2005 Speaker Recognition Evaluation," in Odyssey, San Juan, Puerto Rico, 2006.
[9] A. Preti, J.-F. Bonastre, "Unsupervised model adaptation for speaker verification," in ICSLP, Pittsburgh, USA, 2006.
[10] R. Auckenthaler, M. Carey, H. Lloyd-Thomas, "Score normalization for text-independent speaker verification systems," Digital Signal Processing, vol. 10, no. 1-3, 2000.
[11] C. Fredouille, J.-F. Bonastre, T. Merlin, "Bayesian approach based decision in speaker verification," in Odyssey, Crete, Greece, 2001.
[12] J.-L. Gauvain, C.-H. Lee, "Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains," IEEE Trans. on Speech and Audio Processing, vol. 2, no. 2, pp. 291-298, Apr. 1994.
[13] LIA_SpkDet system web site, http://www.lia.univ-avignon.fr/heberges/ALIZE/LIA_RAL
[14] ALIZE project web site, http://www.lia.univ-avignon.fr/heberges/ALIZE/
[15] J.-F. Bonastre, F. Wils, S. Meignier, "ALIZE, a free toolkit for speaker recognition," in ICASSP, Philadelphia, USA, 2005.
[16] G. Gravier, "SPRO: a free speech signal processing toolkit," http://www.irisa.fr/metiss/guig/spro/
[17] J.-F. Bonastre, N. Scheffer, C. Fredouille, D. Matrouf, "NIST'04 speaker recognition evaluation campaign: new LIA speaker detection platform based on the ALIZE toolkit," NIST SRE'04 Workshop, Toledo, Spain, June 2004.
[18] D. A. Reynolds, "Channel robust speaker verification via feature mapping," in ICASSP, Hong Kong, 2003, pp. 53-56.
[19] A. Martin, G. Doddington, T. Kamm, M. Ordowski, M. Przybocki, "The DET curve in assessment of detection task performance," in Eurospeech, 1997.
[20] D. E. Sturim, D. A. Reynolds, "Speaker adaptive cohort selection for Tnorm in text-independent speaker verification," in ICASSP, 2005.