Standards and Tools for Context-Aware Ubiquitous Learning

Fan-Ray Kuo 1, Gwo-Jen Hwang 2, Yen-Jung Chen 2 and Shu-Ling Wang 3
1 Center for Teacher Education, National University of Tainan
2 Department of Information and Learning Technology, National University of Tainan, 33, Sec. 2, Shulin St., Tainan City 70005, Taiwan
3 Institute of Technological and Vocational Education, National Taiwan University of Science and Technology, 43, Sec. 4, Keelung Road, Taipei, Taiwan
{revonkuo|gjhwang}@mail.nutn.edu.tw, [email protected], [email protected]

Abstract

In recent years, advances in wireless communication and sensor technologies have raised a new research issue in education: how to develop learning environments in which students can learn anywhere and at any time in the real world. Constructing such a sound and adaptive e-learning environment requires taking factors such as personal contexts, learning activities, and environmental contexts into consideration. Moreover, to manage and deploy learning resources effectively and to better understand each learner's context, this study proposes an extension of the SCORM/IMS standards that covers the reuse and sharing of learning resources as well as the learner's physiological and psychological conditions. Finally, a set of tools is presented to demonstrate the use of the new standards.
1. Scenario of Context-Aware Ubiquitous Learning

Figure 1 depicts the scenario of a context-aware u-learning environment, in which various communication and context-sensing devices are used to provide personalized services. For instance, when a student enters the lab or stands in front of an instrument, context-sensing devices detect the situation and transfer the information to the server. Based on the server's decision, relevant information, such as the operating procedure for each device, the need-to-know rules for working in the lab, and emergency handling procedures, is displayed to the student in a timely fashion according to the detected context (a minimal sketch of this flow follows Figure 1).
Figure 1. Concept of context-aware u-learning
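The paper does not publish any server code, so the following Python fragment is only a reading aid for the Figure 1 scenario; the situation labels, rule table, and function name are all assumed for illustration, not taken from the authors' implementation.

```python
# Hypothetical sketch of the Figure 1 flow; the situation labels, rule
# table, and function names are assumptions, not the authors' system.

# Server-side mapping from a detected situation to the material to display.
CONTENT_RULES = {
    "enter_lab": "need-to-know rules for working in the lab",
    "near_instrument": "operating procedure for the detected instrument",
    "emergency": "emergency handling procedures",
}

def on_sensor_event(student_id: str, situation: str) -> str:
    """Decide, on the server, what to push to the student's device."""
    content = CONTENT_RULES.get(situation, "general course information")
    return f"[to {student_id}] {content}"

# A student stands in front of an instrument; the sensed context is
# transferred to the server, which returns the relevant material.
print(on_sensor_event("s0001", "near_instrument"))
```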
2. Standards for Context-Aware U-learning

In order to model and record each learner's behaviors and contexts in the u-learning environment, the following personal and environmental parameters need to be considered [1] (an illustrative encoding is sketched in code after the list):
(1) Basic Personal Contexts
In an ideal u-learning environment, computing, communication, and sensing equipment will be embedded into articles of daily use. Researchers have also indicated that "time" and "location" may be the most important parameters for describing a learner's context.
(2) Advanced Personal Contexts
Recent studies [2,3,4] have shown the possibility of detecting several advanced personal contexts, such as human emotions. Sensing devices with affective-awareness capabilities can not only capture human facial expressions but also distinguish the corresponding emotional states [5,6]; therefore, we define the
standard for "facial_expression". The human voice can be another context for describing the learner's status; it may be affected by personal emotion, health condition, or surrounding noise. Another advanced technology for personal context detection is the wearable computer, which can derive information from human actions as well as psychological and physiological conditions [7]. Interactions between a person and the environment produce physiological changes, directly or indirectly, including changes in body temperature, pulse, blood pressure, and heartbeat; these can be automatically detected by wearable sensors or context-aware clothing for further analysis and interpretation of the learner's behaviors.
(3) Environmental Contexts
In a u-learning environment, environmental factors deserve particular attention when the learning system provides students with adaptive support according to the contextual changes around the learners. Examples include the equipment used (e.g., GPS, RFID, PDAs, and wireless communication devices) and the environmental conditions sensed in the u-learning environment (e.g., temperature, humidity, and cleanliness level).
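As a minimal sketch of how the three categories of context above might be recorded, the Python fragment below builds and serializes a hypothetical XML context record. The element names (e.g., facial_expression, body_temperature) follow the parameters discussed in this section, but the exact schema of the proposed SCORM/IMS extension is not reproduced here, so all tag names and values should be read as illustrative assumptions.

```python
# Illustrative only: the element names below are hypothetical placeholders
# for the context parameters discussed in Section 2, not the actual
# schema of the proposed SCORM/IMS extension.
import xml.etree.ElementTree as ET

record = ET.Element("u_learning_context", learner_id="s0001")

# (1) Basic personal contexts: "time" and "location"
basic = ET.SubElement(record, "basic_personal_context")
ET.SubElement(basic, "time").text = "2007-03-12T10:15:00"
ET.SubElement(basic, "location").text = "chemistry_lab/bench_3"

# (2) Advanced personal contexts: emotion, voice, physiological signals
advanced = ET.SubElement(record, "advanced_personal_context")
ET.SubElement(advanced, "facial_expression").text = "neutral"
ET.SubElement(advanced, "voice_status").text = "calm"
ET.SubElement(advanced, "body_temperature", unit="celsius").text = "36.7"
ET.SubElement(advanced, "pulse", unit="bpm").text = "72"
ET.SubElement(advanced, "blood_pressure", unit="mmHg").text = "118/76"

# (3) Environmental contexts sensed around the learner
env = ET.SubElement(record, "environmental_context")
ET.SubElement(env, "temperature", unit="celsius").text = "24.5"
ET.SubElement(env, "humidity", unit="percent").text = "55"

print(ET.tostring(record, encoding="unicode"))
```

Real records would carry the tag names fixed by the extended standard; the point of the sketch is only the three-level grouping of personal, advanced personal, and environmental parameters.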
3. System Implementation and Illustrative Example

A set of tools has been developed for browsing and transforming documents that follow the u-learning standard. Figure 2 shows a tool that analyzes such XML files to obtain detailed information about the u-learning contexts; a sketch of this analysis step follows the figure.
Figure 2. Tool for analyzing XML files with u-learning contexts
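The tools themselves are not distributed with the paper, so the fragment below is an assumption-laden sketch of the analysis step shown in Figure 2: it parses a context record of the hypothetical form used in Section 2 and flattens it into parameter/value pairs.

```python
# Hypothetical sketch of the XML-analysis step in Figure 2; it assumes the
# illustrative record structure from Section 2, not the real tool's code.
import xml.etree.ElementTree as ET

def analyze_context(xml_text: str) -> dict:
    """Flatten a u-learning context record into {category/parameter: (value, unit)}."""
    root = ET.fromstring(xml_text)
    contexts = {}
    for category in root:            # e.g., basic_personal_context
        for param in category:       # e.g., time, location, pulse
            key = f"{category.tag}/{param.tag}"
            contexts[key] = (param.text, param.attrib.get("unit"))
    return contexts

# Example usage with the record serialized in the Section 2 sketch:
#   for key, (value, unit) in analyze_context(xml_text).items():
#       print(key, value, unit or "")
```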
4. Conclusions

A new learning style has gradually taken shape with the advance of sensor and wireless network technologies in the u-learning environment. If we keep our attention on this aim and strive to develop a feasible and effective learning model, an ideal u-learning environment can be realized. We are currently applying the proposed standard to the development of a u-learning environment for a natural science course in a school, and we plan to apply the standards to other fields in the future. We expect that more u-learning parameters will be identified as these new applications are conducted.
Acknowledgement

This study is supported by the National Science Council of the Republic of China under contract number NSC 95-2524-S-024-002.
5. References

[1] G. J. Hwang, "Criteria and Strategies of Ubiquitous Learning," IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing, Taichung, Taiwan, vol. 2, June 5-7, 2006, pp. 72-77.
[2] L. B. Almeida, B. C. Silva, and A. L. Bazzan, "Towards a Physiological Model of Emotions: First Steps," in Architectures for Modeling Emotions: Cross-Disciplinary Foundations, AAAI Spring Symposium, vol. 1, 2004.
[3] J. Gratch, S. Marsella, and W. Mao, "Towards a Validated Model of 'Emotional Intelligence'," Twenty-First National Conference on Artificial Intelligence (AAAI-06), Boston, Massachusetts, vol. 21, no. 2, July 16-20, 2006, pp. 1613-1616.
[4] C. L. Lisetti and F. Nasoz, "Using Non-invasive Wearable Computers to Recognize Human Emotions from Physiological Signals," EURASIP Journal on Applied Signal Processing, Special Issue on Multimedia Human-Computer Interface, vol. 2004, no. 11, 2004, pp. 1672-1687.
[5] O. Kwon, K. Yoo, and E. Suh, "ubiES: An Intelligent Expert System for Proactive Services Deploying Ubiquitous Computing Technologies," 38th Hawaii International Conference on System Sciences, January 3-6, 2005.
[6] F. Yang, M. Paindavoine, H. Abdi, and A. Monopoli, "Development of a Fast Panoramic Face Mosaicking and Recognition System," Optical Engineering, vol. 44, August 2005.
[7] J. A. Healey and R. W. Picard, "Detecting Stress During Real-World Driving Tasks Using Physiological Sensors," IEEE Transactions on Intelligent Transportation Systems, vol. 6, no. 2, June 2005, pp. 156-166.