Enhancing Learning Experiences through Context-Aware Collaborative Services: Software Architecture and Prototype System

Nikolaos Dimakis, Lazaros Polymenakos and John Soldatos
Athens Information Technology, 19.5 Km Markopoulou Ave., Peania, GR 19002
+30 2106682759
{ndim, lcp, jsol}@ait.edu.gr

Abstract

Context-aware computing systems hold the promise of enhancing the level and quality of educational activities. However, developing such systems to provide novel educational services is still extremely challenging and requires new architectures and middleware paradigms. In this paper we present a structured, agent-based middleware architecture able to support sophisticated context-awareness in lectures, presentations and meetings. The architecture caters for the generation of sophisticated context by employing a large set of context-acquisition components, while enabling service logic based on non-trivial situation state models. Based on this architectural framework, we have developed a wide array of innovative context-aware services that can play a significant role in education. We present the most prominent of these, named 'Memory Jog', which provides pertinent information and human-centric assistance to participants in lectures, presentations and meetings.
1. Introduction

Ubiquitous Computing [13] aims at dispersing computing into the environment, providing non-obtrusive services to the end user in a transparent manner, regardless of time and user location. Ubiquitous services are context-aware: they extract and process information derived from observing humans as well as their surrounding environment. Thus, they execute service logic based both on explicit user input and on implicitly derived information, using specialized components that interpret the indoor or outdoor setting. Context can be extracted by several methods: some use RFID tags to track objects and infer context (location and speed), others instrument humans with wearable systems, and a third category uses sensors non-intrusively for natural interaction in an indoor or outdoor setting.
Fourth IEEE International Workshop on Wireless, Mobile and Ubiquitous Technology in Education (ICHIT'06) 0-7695-2723-X/06 $20.00 © 2006
Sophisticated ubiquitous computing applications adopt hybrid approaches, combining the aforementioned methods to provide high-level services. In addition to the sensing infrastructure, actuating components such as projectors and speakers are required to interact with the users in the smart space and present information to them. Context-awareness is a desirable feature of education-oriented applications, as it extends the functionality of the overall service by distinguishing key situations during a lecture (presentation, question, project reporting, etc.) and applying specific service logic based on the context (video annotation, reminders, lecture access, etc.). Current research on context-aware pervasive systems for education has produced many prototype systems, such as [10], [11], [21], [22], which use context-awareness and multi-agent software architectures. Multi-agent approaches are commonly adopted in implementing such services because they enable the distribution of individual components and enhance the performance of the ubiquitous service, while facilitating the design of more complicated applications by combining many desirable features [15]. Many of the existing approaches are built on specialized software architectures which are, however, site- and service-specific (e.g., tied to a video annotation service). This limitation prevents the overall application from being easily extended and adapted to new conditions and interaction scenarios. Moreover, components from various vendors should be easy to integrate, so the architecture has to address interoperability issues. In addition to technical requirements, ubiquitous learning applications have six main characteristics, as defined in [25]: permanency, accessibility, immediacy, interactivity, situating of instructional activities, and adaptability.
Moreover, beyond these application characteristics, a ubiquitous learning service has to take pedagogical aspects into consideration as well, which not only enhance the performance of the student body but also improve the overall educational experience [26].
In this paper we introduce a structured architecture which operates in a multi-agent fashion, using the JADE framework for multi-agent systems [3]. Service logic is applied by fusing the outputs of a variety of context-acquisition components that process sensor data. All component societies are interconnected through high-performance, specialized middleware components, which allow the interoperability of components following the proposed architecture. Moreover, each member of these societies is able to self-configure to the environment conditions using adaptive signal processing techniques, and to recover from errors using self-healing features embedded in the core agent architecture. In addition to the architectural framework, we elaborate on a prototype ubiquitous application, called 'Memory Jog', which aims at assisting users in the scope of lectures. The motivation for designing services following this architecture stems from the 'Computers in the Human Interaction Loop' (CHIL) project [8], one of the largest European research initiatives in the areas of smart spaces and multimodal interfaces. CHIL involves more than fifteen research laboratories (in both the EU and the USA), which are actively investigating perceptual components for sophisticated context-awareness in smart spaces, as well as prototype services. The structure of this paper is the following: in Section 2, we briefly present a realization of a 'smart learning space' and, based on this infrastructure, the overall architecture, which follows a 3-tier approach. Section 3 focuses on situation modeling and the approach used to realize the context-awareness of the service. Section 4 presents the Memory Jog application, which generates sophisticated context and applies service logic to support complex events such as lectures and meetings.
The paper ends with Section 5, which presents some evaluation results for the Memory Jog service, and Section 6, which discusses the conclusions of this work.
2. Overall Architecture

In this section we present the architectural framework for the Memory Jog service. We start by introducing smart learning spaces and then describe the software architectural tiers, giving some details of their implementation. Subsequently we present the directory service, namely the Knowledge Base Server [1], which uses web ontologies to facilitate information querying with high semantics. Several additional middleware components that allow the composition of advanced, context-aware learning services are also discussed. Smart learning spaces are a realization of a large set of hardware and software components, facilitating
ubiquitous services for learning and information access. By nature, these infrastructures are highly distributed and heterogeneous, and are interconnected through specialized middleware components. The present research was conducted in the Smart Room of Athens Information Technology, shown in Figure 1. The various hardware and software components of this infrastructure are discussed below, along with the software architectural tiers that support them.
Figure 1 - The Smart Room of Athens Information Technology

The software architecture was designed and implemented in a 3-tier fashion: the sensors tier, the context-acquisition tier and the services tier. In addition to these tiers, a fourth integrated component, the Knowledge Base Server, acts as a directory service for the ubiquitous service.
2.1. Sensors Tier
The sensor infrastructure is the foundation of ubiquitous context-aware systems. It comprises numerous visual and acoustic sensors, deployed either individually or in clusters such as inverted-T microphone formations or linear microphone arrays. The sensors tier generates large amounts of data which, through appropriate middleware interfaces, are directed to the next tier of processing, the context-acquisition tier of the architecture.
2.2. Context-Acquisition Tier
The context-acquisition tier applies sophisticated signal processing algorithms to extract elementary context from one or multiple sensors. This tier comprises a set of 2D-visual, 3D-visual, acoustic and audio-visual perceptual components, as well as output perceptual components such as multi-modal Speech Synthesis and Targeted Audio. Examples of perceptual components extracting elementary context by observing human activity have been incorporated in the smart learning space: the Body Tracker [20], extracting people's locations; the Audio Source Localization [18], extracting speakers' locations; the Face Identification [16], [17], [19], identifying the participants; the Speaker Identification, determining the identity of the speaker; and the Speech Activity Detection, determining whether the received sound signals are speech or not. Following context acquisition, the context is forwarded, using a specialized middleware library called CHiLiX, to the highest tier of the architecture, the service tier, which applies the service logic.
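The paper does not specify the CHiLiX message schema. As a rough illustration of an elementary-context event forwarded from this tier to the service tier (all element and attribute names below are invented for the sketch), such a message might carry the component name, a timestamp, and the component-specific output, serialized as XML in the spirit of the XML-over-TCP interface:

```python
import time
import xml.etree.ElementTree as ET

def context_event(component: str, payload: dict) -> str:
    """Serialize an elementary-context event as an XML message.

    Hypothetical schema: the actual CHiLiX message format is not
    given in the paper; this only illustrates the idea of shipping
    typed component output upwards as XML.
    """
    root = ET.Element("contextEvent",
                      component=component,
                      timestamp=f"{time.time():.3f}")
    for key, value in payload.items():
        # One child element per output field of the perceptual component.
        ET.SubElement(root, "field", name=key).text = str(value)
    return ET.tostring(root, encoding="unicode")

# Example: a Body Tracker reporting four detected targets.
print(context_event("BodyTracker", {"targets": 4}))
```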
2.3. Service Tier
The service tier of the system is built following a multi-agent approach, using the JADE (Java Agent DEvelopment) framework [3]. JADE is a software framework for developing agent-based applications in compliance with the FIPA (Foundation for Intelligent Physical Agents) [4] specifications for interoperable intelligent multi-agent systems. Its goal is to simplify development while ensuring standards compliance through a comprehensive set of system services and agents. The services are implemented as agent services of a JADE society of agents distributed among several hosts. These agents have specialized functionalities, including participant personalization, event coordination, actuating control, and perceptual component and sensor control. The agents communicate using the existing JADE communication architecture, forming a fully distributed service. The JADE architecture, as implemented, distinguishes between three types of agents with specialized functionalities:
• Core Agents: Core agents are independent of the service and the smart room installation. They provide the communication mechanism for the distributed entities of the system. Moreover, core agents undertake the control of the sensing infrastructure, while also allowing service providers to 'plug' service logic into the framework. A key component of this agent group is the 'Agent Manager' agent, which facilitates agent registration, look-up and deregistration, while providing the necessary front-end between the services and the underlying JADE framework. Specialized core agents have also been developed, such as the 'Autonomic Agent', which provides autonomic behavior for the service tier.
• Basic Services Agents: These agents incorporate the service logic of basic services, which are tightly coupled with the installed infrastructure of each smart room. Basic services include the ability to track composite situations, as well as the control of sensors and actuators. Tracking of composite situations is performed by the 'Situation Modeling Agent' (SMA), based on the context modeling approach discussed in the following paragraphs. Control of sensors and actuators is performed by the 'Smart Room Agent'. Furthermore, a 'Knowledge Base Agent' allows the agents of the framework to dynamically access information on the state of the components of the ubiquitous computing environment (e.g., sensors, actuators, perceptual components), through a Knowledge Base Server implemented as an ontology management system.
• Ubiquitous Service Agents: Ubiquitous service agents implement the non-obtrusive service logic of the various context-aware services. Each ubiquitous computing service is therefore implemented as a Ubiquitous Service Agent and plugged into the framework accordingly. In the scope of the CHIL project, several ubiquitous agents corresponding to various ubiquitous computing services have been implemented and integrated into the framework. The agent inheritance tree implemented in this tier is shown in Figure 2.
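The real system implements this tier with JADE agents in Java; as a language-neutral sketch of the 'Agent Manager' role described above (registration, look-up and deregistration of service agents), one could picture something like the following, where all class and method names are illustrative rather than the actual JADE API:

```python
class AgentManager:
    """Minimal sketch of the 'Agent Manager' role: a registry through
    which service agents are registered, looked up and deregistered.
    Hypothetical interface; the real system relies on JADE's own
    directory facilities."""

    def __init__(self):
        self._agents = {}

    def register(self, name: str, agent: object) -> None:
        if name in self._agents:
            raise ValueError(f"agent '{name}' already registered")
        self._agents[name] = agent

    def lookup(self, name: str):
        # Returns None when no agent is registered under that name.
        return self._agents.get(name)

    def deregister(self, name: str) -> None:
        self._agents.pop(name, None)

manager = AgentManager()
manager.register("SituationModelingAgent", object())
assert manager.lookup("SituationModelingAgent") is not None
manager.deregister("SituationModelingAgent")
assert manager.lookup("SituationModelingAgent") is None
```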
2.4. Knowledge Base and Ontologies
In addition to these tiers, we have developed a directory mechanism leveraging a Knowledge Base [6], [7] which uses appropriate web ontologies. In particular, the ontologies include all the concepts associated with the hardware, middleware and software, as well as with the physical objects (people, artifacts) of the smart space. Our web ontologies are described in the Web Ontology Language (OWL) [2] and accessed through a variety of distributed access techniques. Using this ontology-based directory service mechanism, we can answer queries more intelligently, a feature often required in the scope of context-aware, human-centric, ubiquitous computing services. This intelligence lies in the ability to infer information from existing sets of metadata according to the current context and the user intention, as interpreted by a reasoning procedure. Reasoning is important to ensure the quality of the ontology and to fully exploit its rich structure. External rule-based reasoning systems can access the ontological data of our OWL-based directory. As an example, a service using a person tracking component can acquire a reference to the ceiling-mounted camera simply by describing it with OWL statements to the ontology, which will select it among the large number of other sensors in the smart space.
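The camera-selection example can be pictured as attribute matching over sensor metadata. The sketch below is a deliberately simplified stand-in (plain dictionary matching, with invented sensor names and attributes) for the OWL-based directory: the real Knowledge Base Server matches OWL statements and can additionally infer properties through reasoning, which this sketch cannot:

```python
def find_sensors(directory, **required):
    """Return the names of sensors whose metadata matches every
    required attribute. A stand-in for the ontology query described
    in the text; no reasoning or inference is performed here."""
    return [name for name, meta in directory.items()
            if all(meta.get(k) == v for k, v in required.items())]

# Hypothetical sensor registry for a smart room.
sensors = {
    "cam-ceiling-1": {"type": "camera", "mount": "ceiling"},
    "cam-corner-2":  {"type": "camera", "mount": "wall"},
    "mic-array-1":   {"type": "microphone", "geometry": "linear"},
}

# 'Describe' the wanted sensor by its properties, as the service
# would describe the ceiling-mounted camera in OWL statements.
print(find_sensors(sensors, type="camera", mount="ceiling"))  # ['cam-ceiling-1']
```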
Figure 2 - The agent-tier inheritance tree

2.5. Middleware Interfaces

The above tiers are coupled together by additional middleware components which facilitate the communication of raw sensor data, context and ontologies [9]. These middleware components are:
• The SmartFlow interface [5], effectively connecting the sensors tier with the context-acquisition tier by providing high distribution capabilities and separating the capturing clients from the processing components.
• The CHiLiX interface, which acts as a software bridge between the context-acquisition tier and the service tier. It provides high abstraction levels and, by using XML-over-TCP formatted messages, is platform independent.
The overall architecture hierarchy is shown in Figure 3, including all tiers and middleware interfaces. The novelty of our architecture lies in the fact that the whole set of component societies is designed to adapt to the environment using adaptive signal processing techniques. Moreover, the core agents of the top-level agent tier include functionalities for recovering from errors, while maintaining their state in persistent storage such as databases, from which it is automatically retrieved after a successful recovery. The middleware interfaces provide a plug-and-play capability for all component tiers, a feature which highly favors the scaling and enhancement of such complex applications, while ensuring the interoperability of components provided by different technology providers. Finally, the context-acquisition process does not rely on intrusive techniques such as wearable components, but on visual and acoustic sensors.

3. Situation Models for Tracking Lectures

Figure 4 depicts a situation model for tracking activities within a lecture. The transitions between states are indicated by arrows, and are triggered either by explicit context from the underlying perceptual component tier, or by implicit context generated by the Situation Modeling Agent. For example, state S8 ('Question') can be reached from state S5 ('Presentation Starts/Goes on') if the Audio Source Localization determines that the speaker is located in the table area while a presentation is taking place in the room. The first part is explicitly derived by the perceptual component, while the second is inferred by the Situation Modeling Agent.
Figure 3 - The architecture stack, showing the middleware connections and tiers

Situation modeling is driven by events occurring in the smart learning space. These triggers are either explicitly derived by a single component of the context-acquisition tier, or implicitly generated by fusing the outputs of several such components. For example, a Body Tracker can infer that a person has entered the room when an additional target is detected. An implicit trigger is that the lecture has started, which requires fusing the Body Tracker, to determine that the number of participants is sufficient, with a Face Identification component, to determine that the lecturer is present. Situation modeling requires a plethora of perceptual components to correctly determine state transitions. These components (i.e., TablePeopleCount, WhiteBoardPeopleCount, Speech Activity Detection, Acoustic Localization, Body Tracker, FaceID and others) are fused to produce the composite context based on which the traversal of the network occurs. The situation model is accompanied by a detailed truth table which shows the combinations of perceptual component outputs required to enable the transitions to the next state. As a result, the service is elevated to a higher level of perception, enabling it to be context-aware [14], which is the backbone of human-centric services.
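The 'lecture has started' implicit trigger can be sketched as a fusion of two component outputs. The function below is only illustrative: the participant threshold and the component interfaces are assumptions, not taken from the paper's truth table:

```python
def lecture_started(body_tracker_count: int,
                    identified_faces: set,
                    lecturer_id: str,
                    min_participants: int = 3) -> bool:
    """Fuse two perceptual-component outputs into the implicit trigger
    'lecture has started': enough people detected by the Body Tracker,
    and the lecturer recognized by Face Identification. Threshold and
    signatures are hypothetical."""
    return (body_tracker_count >= min_participants
            and lecturer_id in identified_faces)

assert not lecture_started(1, {"lecturer"}, "lecturer")    # too few people
assert not lecture_started(5, {"alice", "bob"}, "lecturer")  # lecturer absent
assert lecture_started(5, {"alice", "lecturer"}, "lecturer")
```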
Figure 4 - A Situation Model for tracking activities within a lecture

The model consists of four core states corresponding to the commencement of the lecture, the start and stop of the presentation, the question state and the ending of the lecture. Moreover, the model tracks situations regarding people entering and leaving, and whether they are new attendees or not. This separation enables a new feature, called 'What Happened While I Was Away?', which is described in a later paragraph.
Table 1 shows the association between the contextual states and appropriate perceptual components.
3.1. Implementing the Situation Model
From an implementation perspective, situation models are described in an XML format specifying the state connectivity, and loaded by the Situation Modeling Agent (SMA) during the instantiation of the service. The SMA accesses the underlying perceptual components through their APIs, using the CHiLiX library, and initializes its internal data structures. The SMA keeps track of the current state and, at every event, checks whether a valid transition is feasible. When a transition applies, the SMA advances to the next state implied by the situation model. This advancement requires the internal data structures of the SMA to be re-initialized or extended to hold the information for the new state.

Table 1 - Contextual states and Perceptual Components association

Situation Transition | Combination of Perceptual Component Outputs
NIL → S1             | BodyTracker = N+1 (N is the previous number of people in the room)
S1 → S2              | FaceID = i (i does not exist as a current ID in the system)
S2 → S4              | BodyTracker = 3 AND TableWatcher = 3
S4 → S5, S6 → S5     | TablePeopleCount = 2 AND WhiteBoardPeopleCount = 1 AND SpeakerLocation in the presentation area
S5 → S6              | TablePeopleCount = 2 AND WhiteBoardPeopleCount = 1 AND SpeakerLocation in the table area
S8 → S9              | BodyTracker = 0
S4, S5, S6, S7 → S8  | BodyTracker = N-1 (N is the current number of people in the room)
S5, S6, S7 → S1      | BodyTracker = N+1 (N is the current number of people in the room)
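The SMA's core loop can be sketched as a small state machine loaded from XML. The element names, state labels and trigger names below are invented for illustration, since the paper does not publish the actual schema or the full transition set:

```python
import xml.etree.ElementTree as ET

# Fragment in the spirit of the XML situation models the SMA loads
# (hypothetical schema and triggers).
MODEL_XML = """
<situationModel initial="S5">
  <transition from="S5" to="S6" trigger="speaker_in_table_area"/>
  <transition from="S6" to="S5" trigger="speaker_in_presentation_area"/>
</situationModel>
"""

class SituationModel:
    """Sketch of the Situation Modeling Agent's core loop: keep the
    current state, and advance only when an incoming event matches a
    transition defined from that state."""

    def __init__(self, xml_text: str):
        root = ET.fromstring(xml_text)
        self.state = root.get("initial")
        self.transitions = {
            (t.get("from"), t.get("trigger")): t.get("to")
            for t in root.iter("transition")
        }

    def on_event(self, trigger: str) -> str:
        # Events that do not match a valid outgoing transition are ignored.
        self.state = self.transitions.get((self.state, trigger), self.state)
        return self.state

model = SituationModel(MODEL_XML)
assert model.on_event("speaker_in_table_area") == "S6"
assert model.on_event("some_unrelated_event") == "S6"      # no valid transition
assert model.on_event("speaker_in_presentation_area") == "S5"
```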
4. Context Support of Educational Activities

The Memory Jog service provides non-obtrusive services to users in smart spaces during events such as meetings, lectures or presentations, by supporting student-instructor and student-student interaction [23] while observing their functional roles [24]. It combines the outputs of all sensors and perceptual components to provide services in the smart room. These services are: lecture tracking; intelligent video recording, selecting the best view of the speaker [12]; private messaging (text and audio); person and speaker tracking; intelligent display of current and background information; and a summary of events during a person's absence. All these features are presented through a flexible, well-designed graphical user interface, which adapts to the user's terminal equipment. Moreover, these services are transparent: they do not intrude upon or interrupt the lecture, nor create distractions. The intelligence of the Memory Jog is based on the situation model. The flexibility of this situation model is a key factor for its usage, since it can be extended to track additional, more complicated situations occurring in the smart space. It is worth noting that the current design can easily accommodate changing situations. For example, the original situation model that we experimented with was designed for tracking events in meetings and presentations; the overall effort to enable it to monitor lectures was minimal and focused on the XML schema modeling the lecture scenario. The Memory Jog service employs all available sensors and a collection of perceptual components, which forward the context they generate to the Situation Modeling Agent. Moreover, specialized sensor proxies have been developed which provide the means to control the sensors of the smart room using their native APIs, by bridging CHiLiX and sensor driver requests. In addition to this input (context and streams), the Memory Jog service makes use of additional components controlling actuating devices, in order to interact with the user. These devices are projectors, speakers and a Targeted Audio device, which delivers highly targeted beams of sound to specific locations in the room.
This device, in combination with a Text-To-Speech component and the Body Tracker perceptual component, is used to deliver personalized messages to specific individuals within the room. Examples of such messages include notifying the lecturer of the attention level in the class (based on a pose tracking component), or that the lecture should finish within a specific time. Moreover, an intelligent display component has been developed which presents information to specific individuals or groups of individuals. The intelligent display service selects the optimal display device according to the location of the target person(s) within the smart room, by examining the options in relation to the current context. The context of interest includes the location and orientation of participants, which are tracked by appropriate perceptual components such as the Body Tracker and the 'Faceness' component, the latter providing a 'faceness' metric for the target participants. Based on this information, a display selection algorithm has been implemented and incorporated within an appropriate
(JADE) agent. The algorithm attempts to provide a satisfactory view to as many participants as possible. The Memory Jog service is highly device-independent. Each personal agent is coupled with a device agent which, depending on the user's equipment, can target a PDA, smartphone, laptop, etc. The basic device agent, which interfaces between the personal agent and the user interface, provides the core JADE functionalities such as service discovery and message transmission. The device-specific agent filters all incoming information so that it is viewable on the end-user's terminal equipment. The main device agent is the NotebookDevice Agent, used for laptops and desktop computers. In the scope of the overall application we have extended this device-agent set to handle a PDA device, enabling users to connect to the Memory Jog service remotely. The service is presented on the PDA by the PDAAgent, in a similar fashion to the basic laptop front-end, while respecting the screen and power limitations of the device. The user interface was designed to be as interactive as possible: through it, the user can access the whole range of services that the Memory Jog combines, such as private messaging and person tracking. Screenshots of the two graphical user interfaces are shown in Figures 5 and 6.
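The display selection idea can be sketched as scoring each candidate display by the participants' 'faceness' toward it and picking the highest total. The data structures and the additive scoring rule are assumptions for illustration; the paper does not specify the actual algorithm:

```python
def select_display(displays, participants):
    """Pick the display giving a satisfactory view to as many
    participants as possible, scored here (illustratively) as the sum
    of each participant's faceness metric toward the candidate."""
    def score(display):
        return sum(p["faceness"].get(display, 0.0) for p in participants)
    return max(displays, key=score)

# Hypothetical faceness readings toward two candidate displays.
participants = [
    {"id": "alice", "faceness": {"projector": 0.9, "wall-lcd": 0.2}},
    {"id": "bob",   "faceness": {"projector": 0.7, "wall-lcd": 0.8}},
    {"id": "carol", "faceness": {"projector": 0.1, "wall-lcd": 0.9}},
]
print(select_display(["projector", "wall-lcd"], participants))  # wall-lcd
```

A richer scoring function could also weight participants by distance from the display, using the Body Tracker locations mentioned above.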
Figure 5 - The Laptop Graphical User Interface
Figure 6 - The PDA User Interface

In addition to the user-initiated features, a further feature was implemented, providing a basic
summary of the events that occurred during a person's absence from the smart room. This case is tracked by combining the 'Person Left' and 'Person Returned' states of the situation model. Whenever a person leaves the smart room, his/her identity is stored, and the Personal Agent of this user monitors the state alterations and stores appropriate entries in the database for later retrieval. As soon as a new person enters the room, his/her identity is compared to the list of people who left and, if a person has actually returned, the Memory Jog retrieves the list of events which occurred during this person's absence. This feature is conveniently called 'What happened while I was away?' (WHWIWA) and is automatically shown to the user as soon as he/she returns to the smart room. Overall, the Memory Jog provides features for education which are summarized in the next table.

Table 2 - Matching requirements in education with features of Memory Jog

Requirement                                             | Memory Jog Feature
Report on progress (of some work/project)               | Update Knowledge Base using high-semantics queries
Keep track of lecture progress                          | Agenda tracking
Find out about progress                                 | Dynamic search from the Knowledge Base
Present new information                                 | Presentation assistance
Arrange the next lecture                                | Automatic arrangement using Knowledge Base
Refresh memory as to what happened at previous lectures | High-semantics search using Knowledge Base, WHWIWA
Record minutes                                          | Intelligent Recording
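The WHWIWA logic can be sketched as follows: when a person leaves, remember the position in the event log; when they return, report everything recorded in between. This is a simplified, in-memory stand-in for the Personal Agent's database-backed implementation, with invented names throughout:

```python
class AbsenceTracker:
    """Sketch of 'What happened while I was away?': pairs the
    'Person Left' and 'Person Returned' situations with an event log.
    The real system stores entries in a database via the Personal Agent."""

    def __init__(self):
        self.events = []        # log of state alterations in the room
        self.away_since = {}    # person id -> log index at departure

    def record(self, event: str) -> None:
        self.events.append(event)

    def person_left(self, person: str) -> None:
        # Remember where the log stood when this person left.
        self.away_since[person] = len(self.events)

    def person_returned(self, person: str) -> list:
        # Report the events logged during the absence; empty if the
        # person was never marked as away.
        start = self.away_since.pop(person, None)
        return [] if start is None else self.events[start:]

tracker = AbsenceTracker()
tracker.record("lecture started")
tracker.person_left("alice")
tracker.record("presentation started")
tracker.record("question asked")
print(tracker.person_returned("alice"))  # ['presentation started', 'question asked']
```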
5. User Evaluations

To evaluate the Memory Jog service, we conducted a series of focus-group evaluations to determine how useful, intrusive and efficient the service is. During these evaluations, we presented the service to 25 people of diverse backgrounds and occupations, including students, faculty and engineers, and later interviewed them about the features described above. The main outcome was that both faculty and students were very interested in seeing this service used for conferences and lectures, and less so for meetings, where the scenario is more static. Furthermore, the individual features were very appealing to the users, especially the Targeted Audio device and the intelligent video recording with turn and content annotation. In the following table we summarize
the results of these evaluations. The approval rating reflects the users' overall assessment of how useful the corresponding feature is. Finally, one very important comment, made by groups working in management positions, was that this service introduces a novel learning procedure and is not just a supporting environment.

Table 3 - Memory Jog evaluation summary

Feature                                  | Approval Percent
Intelligent Recording                    | 100.0%
Agenda tracking                          | 56.0%
What happened while I was away?          | 72.0%
Participants Biography                   | 68.0%
Search past events (lectures, meetings)  | 84.0%
6. Conclusions

In this paper we presented an innovative architecture which combines a large set of sensors and actuators with a specialized collection of context-acquisition components, in order to provide transparent services to end-users in smart spaces. The novelty of the system lies in the flexibility of the architecture, whose interfaces enable 'plug and play' behavior, as well as in the fact that the components themselves adapt to the smart space using adaptive techniques and recovery features. As future work, we aim at introducing additional features such as automatic lecture transcription using speech recognition tools, and speech translation, which could be useful for foreign students.
7. Acknowledgements

This work is part of the FP6 CHIL project (FP6-506909), partially funded by the European Commission under the Information Society Technologies (IST) programme. The authors acknowledge valuable help and contributions from all partners of the project, especially those participating in WP2, which defines the software architecture of the project.
8. References

[1] Alexander Paar, Juergen Reuter and Jaron Schaeffer, "A Pluggable Architectural Model and a Formally Specified Programming Language Independent API for an Ontological Knowledge Base Server", in Proc. of the Australasian Ontology Workshop (AOW 2005), Sydney, Australia, December 2005.
[2] W3C Recommendation: OWL Web Ontology Language Overview, available at: http://www.w3.org/TR/owl-features/, 2004.
[3] Java Agent DEvelopment Framework (JADE), http://jade.tilab.com/
[4] FIPA - The Foundation for Intelligent Physical Agents, http://www.fipa.org
[5] Vince Stanford, "Pervasive Computing and Smart Work Spaces: Integration, Interoperability and Interfaces", Pervasive Computing 2000, New IT Industry Conference, January 25-26, 2000, NIST, USA.
[6] Ippokratis Pandis, John Soldatos, Alexander Paar, Jürgen Reuter, Michael Carras, Lazaros Polymenakos, "An Ontology-based Framework for Dynamic Resource Management in Ubiquitous Computing Environments", in Proc. of the 2nd International Conference on Embedded Software and Systems, Northwestern Polytechnical University, Xi'an, P.R. China, December 16-18, 2005.
[7] Alexander Paar, Jürgen Reuter, John Soldatos, Kostas Stamatis, Lazaros Polymenakos, "A Formally Specified Ontology Management API as a Registry for Ubiquitous Computing Systems", in Proc. of the 3rd IFIP Conference on Artificial Intelligence Applications and Innovations (AIAI 2006), June 2006.
[8] The CHIL project, http://chil.server.de
[9] J. Soldatos, I. Pandis, K. Stamatis, L. Polymenakos, J. Crowley, "A Middleware Infrastructure for Autonomous Context-Aware Computing Services", accepted for publication in Computer Communications, special issue on Emerging Middleware for Next Generation Networks, to appear 2006.
[10] W.K. Xie, et al., "Smart Platform: A Software Infrastructure for Smart Space (SISS)", in Proc. of the 4th Int'l Conf. on Multimodal Interfaces (ICMI 2002), IEEE CS Press, 2002, pp. 429-434.
[11] Yuanchun Shi, et al., "The Smart Classroom: Merging Technologies for Seamless Tele-Education", IEEE Pervasive Computing, Vol. 2, No. 2, pp. 47-55.
[12] Siamak Azodolmolky, Nikolaos Dimakis, Vassileios Mylonakis, George Souretis, John Soldatos, Aristodemos Pnevmatikakis and Lazaros Polymenakos, “Middleware for Indoor Ambient Intelligence: The PolyOmaton System”, in the Proc. of the 2nd Next Generation Networking Middleware Workshop (NGNM ’05) in the scope of Networking 2005 conference, Waterloo, Canada, May 2005. [13] Weiser M., “The Computer for the 21st Century” Scientific American, vol. 265, no. 3, 1991, pp. 66–75. [14] Anind K. Dey, “Understanding and Using Context”, Personal and Ubiquitous Computing Journal, Volume 5 (1), 2001, pp. 4-7
[15] Gerald Tesauro, et al., "A Multi-Agent Systems Approach to Autonomic Computing", in Proc. of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS'04), Volume 1, 2004, pp. 464-471.
[16] Aristodemos Pnevmatikakis and Lazaros Polymenakos, "An Automatic Face Detector and Recognition System for Video Streams", in Proc. of the 2nd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms, Edinburgh, July 2005.
[17] Andreas Stergiou, Aristodemos Pnevmatikakis and Lazaros Polymenakos, "Audio/Visual Person Identification", in Proc. of the 2nd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms, Edinburgh, July 2005.
[18] F. Talantzis, A. G. Constantinides, and L. Polymenakos, "Estimation of Direction of Arrival Using Information Theory", IEEE Signal Processing Letters, vol. 12, no. 8, pp. 561-564, Aug. 2005.
[19] A. Pnevmatikakis and L. C. Polymenakos, "Comparison of Eigenface-Based Feature Vectors under Different Impairments", in ICPR (1) 2004, pp. 296-299.
[20] Aristodemos Pnevmatikakis and Lazaros Polymenakos, "Kalman Tracking with Target Feedback on Adaptive Background Learning", in Proc. of the Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms (MLMI '05), 2005.
[21] Zixue Cheng, et al., "A Proposal on a Learner's Context-aware Personalized Education Support Method based on Principles of Behavior Science", 20th International Conference on Advanced Information Networking and Applications (AINA'06), Volume 2, 2006, pp. 341-345.
[22] S.J.H. Yang, et al., "Context Aware Service Oriented Architecture for Web based Education", in Proc. of Web Based Education 2005, Switzerland, February 2005.
[23] Fabio Pianesi, Massimo Zancanaro, Vera Falcon and Elena Not, "Towards supporting group dynamics", in Proc. of the Artificial Intelligence Applications and Innovations (AIAI) 2006 Conference, pp. 302-311, Athens, June 2006.
[24] K.D. Benne, P. Sheats, "Functional Roles of Group Members", Journal of Social Issues 4, pp. 41-49, 1948.
[25] Chen, Y.S., Kao, T.C., Sheu, J.P., and Chiang, C.Y., "A Mobile Scaffolding-Aid-Based Bird-Watching Learning System", in Proc. of the IEEE International Workshop on Wireless and Mobile Technologies in Education (WMTE'02).
[26] Honebein, P. C., "Seven goals for the design of constructivist learning environments", in B. Wilson (Ed.), Constructivist Learning Environments (pp. 11-24), Englewood Cliffs, NJ: Educational Technology, 1996.