Pervasive and Mobile Computing 5 (2009) 558–573
Fast track article
Ambient intelligence platform using multi-agent system and mobile ubiquitous hardware

Kevin I.-K. Wang ∗, Waleed H. Abdulla, Zoran Salcic

Department of Electrical and Computer Engineering, University of Auckland, Private Bag 92109, Auckland, New Zealand
Article info

Article history: Received 27 September 2008; Received in revised form 5 June 2009; Accepted 5 June 2009; Available online 10 June 2009

Keywords: Ambient intelligence; Multi-agent system; Ubiquitous environment
Abstract

In this paper, a novel ambient intelligence (AmI) platform is proposed to facilitate fast integration of different control algorithms, device networks and user interfaces. The platform defines the overall hardware/software architecture and communication standards. It consists of four layers, namely the ubiquitous environment, middleware, multi-agent system and application layers. The multi-agent system is implemented using the Java Agent DEvelopment (JADE) framework and allows users to incorporate multiple control algorithms as agents for managing different tasks. The Universal Plug and Play (UPnP) device discovery protocol is used as a middleware, which isolates the multi-agent system from the physical ubiquitous environment while providing a standard communication channel between the two. An XML content language has been designed to provide standard communication between various user interfaces and the multi-agent system. A mobile ubiquitous setup box is designed to allow fast construction of ubiquitous environments in any physical space. The real time performance analysis shows the potential of the proposed AmI platform to be used in real-life AmI applications. A case study has also been carried out to demonstrate the possibility of integrating multiple control algorithms in the multi-agent system and achieving a significant improvement in the overall offline learning performance.

© 2009 Elsevier B.V. All rights reserved.
1. Introduction

With the advancement of embedded and communication technologies, growing numbers of computational units are being attached to our surroundings and even to ourselves. Owing to the increasing amount of computational resources and the scarcity of human control and supervision, a new paradigm of computer utilisation, known as ubiquitous computing, was proposed by Mark Weiser in 1991 [1]. Ubiquitous computing has been considered the third era of computing, after mainframe and personal computing. In the ubiquitous computing scenario, computational units and information processing activities are embedded in all surrounding and everyday devices, functioning invisibly. Human users are able to access information from more than one specific device, and most likely from different physical locations, rather than being engaged with a single computer. In this scenario, human users may not necessarily be aware of the existence of the embedded devices and the computations occurring behind the scenes. Following the concept of ubiquitous computing, the notion of Ambient Intelligence (AmI) was introduced in 1998 by Philips [2]; it is essentially a combination of ubiquitous computing, context-aware computing and natural interactive interfaces [3].
∗ Corresponding address: Department of Electrical and Computer Engineering, University of Auckland, Private Bag 92109, Unit 2, 79-81 Wellesley Street West, 1010 Auckland, New Zealand.
E-mail addresses: [email protected], [email protected] (K.I.-K. Wang).
1574-1192/$ – see front matter © 2009 Elsevier B.V. All rights reserved. doi:10.1016/j.pmcj.2009.06.003
The vision of AmI is to provide intelligent, personalised and interconnected systems and services in the surrounding environment to support the activities and interactions of human users [4]. Nowadays, the vision of AmI can be realised through a thoroughly planned integration of various advanced technologies, including embedded technologies, sensor networks, human–computer interfaces, middleware and software control systems. In order to facilitate smooth integration and flexible future adaptation of different technologies, an AmI platform, which defines the overall software/hardware architecture and communication, is necessary. The main focus of this research is to propose a well-defined, yet flexible, platform which supports the distributed and dynamic nature of AmI applications. The platform should allow easy integration of different adaptive control algorithms and provide interoperability among multi-modal user interfaces and heterogeneous device networks.

Several AmI platforms have been proposed and implemented in research projects for different AmI applications. The Massachusetts Institute of Technology (MIT) started AmI research with the Intelligent Room project [5] in the late 1990s. The intelligent room, Hal, emulating an office environment, was constructed using various vision and speech technologies. As part of the Intelligent Room project, a Java-based multi-agent programming framework, called Metaglue, was developed to address the distributed and dynamic nature of AmI applications [6]. The software agents, which operate on the Metaglue runtime system, act as a middleware layer between the hardware devices and the high-level user interface applications. Metaglue has been used as the fundamental programming and operating platform for later AmI projects in the Agent-based Intelligent Reactive Environments (AIRE) research group [7]. Metaglue provides good support for integrating various hardware devices and user interfaces into AmI applications. However, context management and adaptive learning components were not supported at the time.

The University of Florida proposed a smart space middleware which integrates the underlying embedded hardware devices and the upper level user applications [8]. Different embedded devices are exposed as services through the middleware, which is designed based on the Open Services Gateway initiative (OSGi) service architecture, and the services can be utilised by high-level user applications. Efforts have been put into achieving self-describing sensor networks [9] and plug-and-play (PnP) devices [10]. This particular platform provides good integration between embedded hardware devices and high-level user applications. Customised learning and context management components are also supported. Nevertheless, a proprietary hardware interface is required to interconnect all the embedded devices with the middleware. Further, the OSGi architecture is centralised, which implies low scalability.

At the University of Texas at Arlington (UTA), a multi-agent architecture has been proposed for controlling smart homes in the MavHome (Managing An intelligent Versatile Home) project [11]. This project focuses on finding and modelling sequential routines of user activities using prediction algorithms. Based on the discovered routines, control policies can be formulated to assist user activities in a proactive manner. In this particular platform, three middleware tools are used.
Common Object Request Broker Architecture (CORBA) is used to provide point-to-point communication between software components [12]. ZeroConf is used to replace the CORBA naming system and to provide an easy-to-use service discovery functionality [12]. Boot Strap adds password security to the RPC (Remote Procedure Call) mechanism used by the software components [13]. Due to the complex architecture, this platform does not support easy integration of different types of hardware devices. Further, there is no support for integrating multi-modal user interfaces.

The University of Essex takes a different approach by using an embedded agent to access and control embedded sensors and actuators within its AmI application, the intelligent Dormitory (iDorm) [14]. A novel fuzzy inference control technique, known as the Adaptive Online Fuzzy Inference System (AOFIS), has been proposed and successfully implemented to learn personalised user behaviours and to provide proactive assistance to user activities [15]. A well-defined device discovery protocol, Universal Plug and Play (UPnP), has been employed as a middleware to provide integration between the embedded agents and the upper level fuzzy controller. This particular AmI platform supports a number of user interfaces designed by the same research group [16]. However, it does not provide a standard procedure for incorporating other user interfaces designed by different research groups.

In this research, a novel AmI platform is proposed to allow easy adaptation and integration of different adaptive control algorithms, heterogeneous device networks and multi-modal user interfaces. The proposed AmI platform has a hierarchical architecture which consists of four layers, namely the physical ubiquitous environment, middleware, multi-agent system (MAS) and application layers. The physical ubiquitous environment refers to a single physical device network, or a combination of different networks, which interconnects all the devices in the surrounding environment. In order to speed up the construction of physical ubiquitous environments for AmI research, a mobile ubiquitous setup box has been proposed and implemented in this research. The mobile setup box can monitor up to four different sensors and can control up to three electrical/motorised devices. The middleware layer, implemented using UPnP, hides the complexity of heterogeneous device networks and provides standard communication between the physical devices and the MAS. The MAS is implemented using the Java Agent DEvelopment framework (JADE). The MAS appears to be one of the best approaches for supporting the distributed and dynamic nature of AmI applications. Different adaptive control algorithms which target different tasks can be implemented as agents and coexist in the MAS. New control algorithms can also be integrated into the MAS at runtime. The application layer is mainly concerned with user interface applications. Different user interfaces may communicate using different protocols. In order to support multi-modal user interfaces, multiple interface agents can be implemented in the MAS. Further, an XML content language which consists of seven XML messages has been developed to standardise the contents of communication between different user interfaces and the MAS. The XML content language provides a standard procedure for incorporating various user interfaces by encoding the contents of communication in the same manner. This
Fig. 1. The proposed AmI platform architecture.
allows user interfaces designed and developed by different research groups to be easily integrated into the proposed AmI platform.

The rest of this paper is organised as follows. Section 2 explains the proposed AmI platform architecture and the functionality of each layer. Section 3 introduces the multi-agent development framework, JADE, and the actual implementations of the MAS and middleware. The implementation of the physical ubiquitous environment using the mobile ubiquitous setup box is demonstrated in Section 4. Section 5 presents three different user interfaces implemented in this research and an XML-based content language that allows fast integration of multi-modal user interfaces with the proposed AmI platform. Section 6 presents a real time performance analysis conducted using the three user interfaces introduced in Section 5. Section 7 discusses the results of a case study of integrating two different adaptive control algorithms in the proposed platform.

2. Ambient intelligence platform

An AmI platform generally refers to the hardware/software architecture and communication standards that provide a common base for the development, operation and interaction of all the components of AmI applications [17]. The main purpose of an AmI platform is to assist the development and integration process of various software and hardware components within AmI applications. The proposed AmI platform architecture can be discussed in terms of four different layers, as shown in Fig. 1. The bottom layer is the physical ubiquitous environment, which consists of all the physical embedded devices and
communication infrastructure such as routers. The proposed platform architecture does not limit the use of any particular device network, as long as the required routing devices and communication media have been properly installed.

The IP-based UPnP device discovery protocol is selected to implement the middleware layer, which abstracts the devices that exist in the underlying physical device networks. The UPnP protocol consists of two components, namely the UPnP software devices and the UPnP control point. Each physical device should have a corresponding UPnP software device. Each software device calls its corresponding software communication module to communicate with the actual physical device. Any type of physical device can be supported by the proposed platform, provided that the proper UPnP software devices and communication modules are implemented. The UPnP software devices essentially map all the physical devices onto the IP network and hence hide the complexity of heterogeneous device networks. The UPnP control point keeps records of all the available UPnP software devices and provides a unique portal through which the upper layer MAS controls the available devices.

The multi-agent control system layer comprises control agents, learning agents and interface agents. Control agents mainly refer to agents that perform control and management tasks. For example, the UPnP control point is implemented as an agent and processes control commands issued by all the other agents. Other agents, such as a database agent and a profile and context management agent, can also be implemented if necessary. The multi-agent architecture gives developers the flexibility to include any control component by implementing new agents at development time or at runtime. The learning agents refer to various control algorithms implemented as agents for adaptive learning tasks. Learning agents retrieve user and context information from the control agents and perform learning based on the retrieved data. Based on the results of learning, proactive control can be performed by issuing commands to the control agents. The interface agents handle the communications of various user interface applications. There can be multiple interface agents in the MAS. Each interface agent can handle multiple user interfaces that use the same communication protocol. Similar to the learning agents, interface agents can also pass user commands received from interface applications to the control agents.

The application layer at the top refers to all possible user interface applications. Taking into consideration that different user interfaces may be implemented in different programming languages and with different communication protocols, a message-oriented middleware implemented using XML is included in the proposed platform. An XML schema and a set of XML messages are designed and implemented to standardise the contents of communication between user interfaces and interface agents. Despite the different platforms, programming languages and communication protocols used by different user interfaces, the contents of communication can always be decoded in the same manner. The actual implementations of the MAS, middleware, physical ubiquitous environment and user interfaces are introduced in Sections 3–5.

3. MAS and UPnP

In this research, the MAS is implemented using the JADE multi-agent development framework. In this section, the basics of JADE and the actual implementations of the control, learning and interface agents are introduced.
The implementation of the UPnP middleware layer is also explained to demonstrate the communication between the MAS and the actual physical devices.

3.1. JADE runtime environment

JADE is a Java-based, FIPA (Foundation for Intelligent Physical Agents) compliant agent development framework. The JADE toolkit consists of three parts: a library of classes assisting agent development, a runtime environment with FIPA-specified agent management services, and a set of graphical tools for monitoring and debugging purposes. The JADE runtime environment runs on top of the Java Virtual Machine (JVM). Each host computing device that executes an instance of the JADE runtime environment is referred to as a ‘‘container’’. An agent system is a collection of containers, as shown in Fig. 2 [18]. Within each agent system there must be one bootstrap point, which is referred to as the main container. Following the FIPA agent management specification, each agent system has exactly one Agent Management System (AMS) agent and one Directory Facilitator (DF) agent [19]. The AMS keeps a record of all the agents running within an agent system and the DF keeps a record of all the agent services available. The AMS agent is loaded at the beginning of the runtime environment execution. The DF agent is loaded slightly after the AMS agent so that it can register with the AMS agent. The AMS agent has the authority to control the life cycles of all the agents within its own agent system. This means that users and developers are able to initiate and terminate agents within the agent system at runtime by using the AMS agent services.

3.2. Agent implementations

This section discusses the implementations of the interface, learning and control agents according to the proposed platform architecture. It is worth mentioning that the current agent implementations only demonstrate one possible scenario of realising the proposed AmI platform architecture. A different set of interface, learning and control agents can be implemented as long as the platform architecture and communication standards are followed. In the current multi-agent system, an IP interface agent, a fuzzy inference agent, a decision tree agent and a UPnP control point agent have been implemented to achieve a complete working prototype.
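As a concrete illustration of the JADE runtime environment described in Section 3.1, the following minimal sketch starts a main container (which hosts the AMS and DF agents) and launches one of the platform agents into it. The agent nickname and class name are placeholders rather than the exact identifiers used in this implementation.

```java
import jade.core.Profile;
import jade.core.ProfileImpl;
import jade.core.Runtime;
import jade.wrapper.AgentController;
import jade.wrapper.ContainerController;

public class PlatformBoot {
    public static void main(String[] args) throws Exception {
        Runtime rt = Runtime.instance();
        Profile profile = new ProfileImpl();
        profile.setParameter(Profile.MAIN_HOST, "localhost");
        // The main container hosts the FIPA-mandated AMS and DF agents.
        ContainerController main = rt.createMainContainer(profile);
        // Launch one of the platform agents; the class name is hypothetical.
        AgentController cp = main.createNewAgent(
                "upnp-control-point", "ami.agents.ControlPointAgent", null);
        cp.start();
    }
}
```

Further containers started on other hosts can join the same main container, so that agents can be distributed across several computing devices at runtime.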
Fig. 2. The JADE runtime environment.
Fig. 3. IP interface agent implementation.
3.2.1. IP interface agent

In the current implementation, an IP interface agent has been developed to handle the socket communication protocol. Other interface agents handling different communication protocols can easily be developed and loaded into the MAS at runtime. As shown in Fig. 3, the IP interface agent is designed and implemented to handle multiple socket connections by spawning a new server module whenever a new user interface application tries to connect to it. The IP interface agent maintains synchronisation among all the connected interface applications by broadcasting status updates to all of them. In order to execute user control commands, the IP interface agent searches for the UPnP control point agent service through the DF agent and establishes a communication link with the UPnP control point agent. The control commands received from the interface applications are forwarded to the UPnP control point agent for execution. At this stage, three types of user interfaces, namely a remote Graphical User Interface (GUI), a PDA interface and a 3D virtual reality interface, have been developed using socket communication. The contents of communication between the interface agent and the interface applications, such as control commands and status updates, are encoded using the proposed XML content language. More details on the development of the user interfaces and the XML content language are provided in Section 5.

3.2.2. Learning agents

Two learning agents, namely the fuzzy inference and decision tree agents, have been implemented in the current MAS, as shown in Fig. 4. In the current implementation, the database agent has been replaced by simple text files. Each file is named according to a specially designed pattern in order to provide the necessary personal profile information and to achieve personalised learning among multiple users. Based on this simple technique, personalised learning and adaptation can be achieved without extra database and profile management agents, which allows fast prototyping. Developers can plug different types of adaptive control algorithms into the MAS as long as their control policies can be interpreted by the UPnP control point agent.
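To sketch how an interface or learning agent could locate the UPnP control point service through the DF and forward an XML-encoded command to it, a minimal JADE agent is shown below. The service type, agent class name and message content are illustrative assumptions, not the exact identifiers used in this implementation.

```java
import jade.core.Agent;
import jade.domain.DFService;
import jade.domain.FIPAException;
import jade.domain.FIPAAgentManagement.DFAgentDescription;
import jade.domain.FIPAAgentManagement.ServiceDescription;
import jade.lang.acl.ACLMessage;

public class IPInterfaceAgentSketch extends Agent {
    protected void setup() {
        // Build a DF template describing the control service we are looking for.
        DFAgentDescription template = new DFAgentDescription();
        ServiceDescription sd = new ServiceDescription();
        sd.setType("upnp-control");          // assumed service type
        template.addServices(sd);
        try {
            DFAgentDescription[] result = DFService.search(this, template);
            if (result.length > 0) {
                // Forward a (hypothetical) XML control command via ACL.
                ACLMessage request = new ACLMessage(ACLMessage.REQUEST);
                request.addReceiver(result[0].getName());
                request.setContent("<deviceControl device=\"light1\" value=\"80\"/>");
                send(request);
            }
        } catch (FIPAException e) {
            e.printStackTrace();
        }
    }
}
```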
Fig. 4. Learning agents implementation.
Fig. 5. UPnP control point agent and middleware implementation.
3.2.3. UPnP control point agent

The UPnP control point agent consists of two parts, namely the control agent service module and the UPnP control point module, as shown in Fig. 5. The control agent service module is responsible for communicating with all the other agents in the MAS using the ACL (Agent Communication Language) communication routines provided by JADE. During the initialisation stage, the UPnP control point agent registers its service with the DF agent. All the other agents that wish to control UPnP software devices need to search for the control agent service through the DF agent. The control agent service converts upper layer commands into the appropriate form and forwards them to the UPnP control point module for execution. Similarly, device status updates are also processed by the control agent service and sent back to the corresponding agents using ACL communication. The UPnP control point module is responsible for sending control commands to, and receiving status updates from, the UPnP software devices over the IP network.

3.3. Middleware implementation

As mentioned in Section 2, the UPnP device discovery protocol is used as a middleware between the physical devices and the MAS. As a middleware, UPnP provides a layer of abstraction which hides the complexity of the heterogeneous physical device networks underneath. The CyberLink UPnP software development kit, provided by CyberGarage [20], is used in this research. The CyberLink UPnP development kit comes in four different languages, namely C, C++, Java and Perl, for implementing the UPnP communication infrastructure. The CyberLink Java toolkit is used in order to integrate the UPnP control point as part of the Java-based MAS. Nevertheless, UPnP software devices implemented in different programming languages can still be used in the middleware layer, provided that the UPnP protocol standards are followed.
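To make the structure of the control agent service module described in Section 3.2.3 more concrete, the following sketch shows a JADE agent that registers a control service with the DF and converts incoming ACL requests into calls on a UPnP control point module. The service type and the UpnpControlPointModule wrapper are hypothetical placeholders; in the actual implementation the control point module is built on the CyberLink toolkit.

```java
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.domain.DFService;
import jade.domain.FIPAException;
import jade.domain.FIPAAgentManagement.DFAgentDescription;
import jade.domain.FIPAAgentManagement.ServiceDescription;
import jade.lang.acl.ACLMessage;

public class ControlPointAgentSketch extends Agent {
    private final UpnpControlPointModule controlPoint = new UpnpControlPointModule();

    protected void setup() {
        // Register the control agent service with the DF.
        DFAgentDescription dfd = new DFAgentDescription();
        dfd.setName(getAID());
        ServiceDescription sd = new ServiceDescription();
        sd.setType("upnp-control");              // assumed service type
        sd.setName("control-agent-service");
        dfd.addServices(sd);
        try {
            DFService.register(this, dfd);
        } catch (FIPAException e) {
            e.printStackTrace();
        }

        // Convert incoming ACL requests into UPnP action invocations.
        addBehaviour(new CyclicBehaviour(this) {
            public void action() {
                ACLMessage msg = myAgent.receive();
                if (msg != null) {
                    controlPoint.invoke(msg.getContent());
                } else {
                    block();   // wait for the next message
                }
            }
        });
    }
}

// Placeholder for the UPnP control point module (e.g. built on CyberLink).
class UpnpControlPointModule {
    void invoke(String xmlCommand) { /* issue the UPnP action over IP */ }
}
```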
UPnP software devices are exact software replicates of the physical devices. Control commands issued to the software devices are forwarded to, and performed on, the physical devices. The attribute statuses of the software devices are synchronised with the actual physical devices at all times. The upper layer MAS sees the UPnP software devices as real devices on one integrated IP network and hence is not affected by the complexity of the heterogeneous device networks underneath. There are two different ways to initiate UPnP software devices in the middleware layer. During the initialisation stage of the UPnP control point agent, a set of UPnP software devices is initiated for all the static devices, such as curtains and windows, within an AmI application. UPnP software devices can also be initiated by mobile physical devices in order to support dynamic device networks. However, the required communication module must also be provided to establish communication between the UPnP software devices and the physical devices, as shown in Fig. 5.

Apart from the UPnP software devices and the required communication modules, there is one more component, known as the status update module, in the middleware layer. In an ideal case, each UPnP software device communicates via its corresponding communication module for sending control commands and receiving status updates. In reality, however, multiple physical devices on a single device network may be linked to a host computer through a common USB (Universal Serial Bus) or RS-232 serial interface. This causes multiple UPnP software devices to compete for the same physical communication port when listening for their own event notifications, and results in resource sharing conflicts. To overcome this problem, a separate status update module is developed to listen to all the incoming events and to dispatch each event to the right software device, as shown in Fig. 5. The software devices can still temporarily call the communication module to send control messages, releasing the resource after sending.

4. Physical ubiquitous environment

One of the key components of AmI applications is the physical ubiquitous environment. However, the construction process of a specialised ubiquitous environment is very time consuming. Therefore, in this research, we have proposed and developed a novel mobile ubiquitous setup box which can be easily integrated with the proposed AmI platform architecture. The mobile ubiquitous setup box is designed specifically for embedded devices and sensors that are interconnected over the RS-485-based SmartHouse network [21]. It allows fast construction of physical ubiquitous environments in any physical space, rather than in a hardwired specialised laboratory. This also implies that multiple experimental testbeds can be constructed and used at the same time.

4.1. Digitisation

The main requirement of a physical ubiquitous environment is to provide communication abilities for all the embedded and distributed devices. In order to interconnect various analog devices, a process of digitisation is required. In this research, a digital smart switch developed by Kristil Technology [21] is used to interconnect and to control all the analog sensors and devices. The digital smart switch consists of two microcontrollers, an HC11 and a PIC. The PIC controller provides communication abilities over various sensor and control networks, including the proprietary SmartHouse network [21], the 1-wire network [22] and the triac comms control network.
The RS-485-based SmartHouse network provides communication between multiple smart switches and computers and hence allows digitised control. The 1-wire network is incorporated to handle 1-wire compliant temperature sensors or proxy readers. The triac comms protocol encodes low level control commands for the motor or triac control modules, which output analog signals to the corresponding analog devices. In contrast to the PIC, the HC11 is the core of the smart switch and handles all the control actions and information processing. The HC11 has an internal Light Dependent Resistor (LDR) for measuring the local light intensity and supports two extra analog sensors through its analog to digital converter. Each digital smart switch has three push-button interfaces which correspond to the manual controls of up to three motorised or electrical devices. Between the digital smart switch and the actual analog devices there is either a motor control module or a triac module, which converts digital commands into analog signals. The motor control module receives low level control commands encoded using the triac comms protocol and outputs the corresponding current directions to toggle the motor movement. The triac module accepts the same control commands and outputs the appropriate current level for the connected electrical appliances, such as lights, fans and heaters. The smart switch can also communicate with a computer through a USB or RS-232 gateway interface board. The gateway interface board turns its host computer into an ordinary smart switch node on the SmartHouse network, which sends and receives messages using the SmartHouse network protocol. A communication module implementing the SmartHouse protocol has been incorporated in the middleware layer to establish communication between the UPnP software devices and the corresponding digital smart switches.

4.2. Mobile ubiquitous setup box

The conceptual diagram of the mobile ubiquitous setup box is shown in Fig. 6. Each setup box consists of two components: a digital smart switch and a 3-to-3 triac or motor control module. The three push-button outputs of the smart switch are connected to the 3-to-3 triac or motor module to manually control the connected devices. A 3-to-3 triac module supports an overall power rating of up to 1000 W and 15 A, which is sufficient for most electrical appliances.
Fig. 6. Conceptual diagram of the mobile ubiquitous setup box.
Fig. 7. A possible scenario of constructing physical ubiquitous environment using two mobile setup boxes.
Multiple setup boxes can also be connected to each other through the SmartHouse network. By interconnecting multiple setup boxes, different numbers of devices and different sized environments can be supported. The mobile setup box can also be connected to a computer through a USB or RS-232 gateway interface board, as mentioned in Section 4.1.

4.3. Physical ubiquitous environment

Based on the mobile setup box, a physical ubiquitous environment can easily be assembled by connecting devices and sensors to a set of setup boxes. Fig. 7 shows one possible scenario of constructing the physical ubiquitous environment used in this research by using two mobile setup boxes. Referring to Fig. 7, four dimmable lighting devices, one fan and one heater are controlled by the two setup boxes numbered 2 and 3. Both setup boxes have an internal LDR and an attached 1-wire temperature sensor to detect the local light intensity and temperature. Two additional analog pressure sensors have been connected to setup box 2 to detect the user's current location. The setup boxes and the host computer (numbered 1 in Fig. 7) are able to communicate with each other using the SmartHouse network. As mentioned previously, the SmartHouse network protocol has been implemented as one of the communication modules in the middleware layer to enable communication between the UPnP software devices and the physical devices. The upper layer MAS and interface applications are able to perform control actions through the middleware layer. With this approach, ubiquitous environments can be quickly assembled and disassembled in different types of physical spaces, allowing different types of experiments to be performed. Multiple ubiquitous environments can also be assembled at different physical locations to allow more experiments to be carried out at the same time. Further, Fig. 7 also shows the possibility of extending the physical ubiquitous environment to accommodate heterogeneous device networks such as Wi-Fi and Bluetooth. Devices on other physical networks, such as Bluetooth, can be seamlessly integrated as part of the physical infrastructure as long as the physical communication media exist. Following the proposed AmI platform
Fig. 8. XML messages for (a) registering interface application, (b) terminating interface application.
Fig. 9. Active device query message.
architecture, physical devices can be integrated in a consistent way by implementing the corresponding UPnP software devices and the required communication modules.

5. User interfaces

One of the visions of AmI applications is to allow users to control devices and access information from different physical locations. In order to achieve this vision, multi-modal user interfaces have been considered a key requirement for many AmI applications. In this section, the XML content language, which standardises the contents of communication between user interfaces and interface agents, is introduced. Further, three user interfaces developed in this research, namely a remote GUI application, a PDA interface and a 3D virtual reality (VR) interface, are presented.

5.1. XML content language

In order to facilitate fast and smooth integration between user interfaces and the underlying MAS, an XML schema and a set of seven XML messages are incorporated in the proposed AmI platform. All the XML messages are implemented following the proposed XML schema, which provides an easy and standardised approach for designing new XML messages. The XML messages standardise the contents of communication between user interfaces and interface agents. Based on the standard content language, messages coming from different user interfaces can always be decoded in the same manner.

The pair of XML messages shown in Fig. 8 is used by interfaces to register and deregister with the interface agent. The registration message provides the essential information for the interface agent to establish communication with the interface application. The registration message shown in Fig. 8(a) is designed specifically for the socket communication interfaces implemented in this research. Different registration messages, containing different registration information, may be required for other communication protocols such as Bluetooth. As will be seen in the next section, some user interfaces have no prior knowledge of the available devices within a particular ubiquitous environment. Such user interfaces can be used in different ubiquitous environments that contain different sets of devices. However, no control action can be performed by the user interface before a list of active devices is retrieved. After the registration process, these user interfaces need to send out a query message requesting a list of the currently active devices. The query message is shown in Fig. 9. Once the interface agent receives the message, it passes it on to the UPnP control point agent to process the request. In response to the query request, the UPnP control point agent replies with two types of messages. The first, called the device list acknowledgement message, contains a list of the active devices recorded by the UPnP control point agent, as shown in Fig. 10(a). Once this message is received by the user interfaces, control actions can be performed on those active devices. The second, called the device status acknowledgement message, contains the device status for a particular device, as shown in Fig. 10(b). There may be multiple device status acknowledgement messages, one for each active device, sent back to the user interfaces. Different from the
Fig. 10. (a) Device list acknowledgement message, (b) device status acknowledgement message.
Fig. 11. (a) Device control message, (b) control action acknowledgement message.
device list acknowledgement message, the status acknowledgement message is also used in other situations, such as event notification. Event notification mainly happens when a device status is changed by another user interface or by an automatic control policy. In order to keep all the connected interfaces synchronised, event notification messages are broadcast to all the connected user interfaces.

Based on the list of active devices, user interfaces can perform control actions on the devices. The device control message is shown in Fig. 11(a). A control command received by the interface agent is forwarded to the UPnP control point agent. The control point agent issues the control command to the corresponding UPnP software device. In return, a control action acknowledgement message, as shown in Fig. 11(b), is sent back. The error value in this acknowledgement message depends on the reply from the UPnP software device, which indicates the result of performing the control action. Since one control action could trigger changes in multiple device statuses within a particular environment (for example, turning on a light would also alter the ambient light sensor reading), a set of device status acknowledgement messages may also be sent back to the upper layer user interfaces.

5.2. Remote GUI and PDA interface

The Java-based GUI application is designed to examine the feasibility and functionality of the proposed AmI platform and XML content language. The communication between the GUI application and the interface agent is carried out through socket communication. The interface agent has a fixed IP address and port number on which it listens for incoming registration messages. When a GUI application starts at a remote location, it attempts to establish a communication link with the interface agent by sending a registration message containing its own IP address and port number. The remote GUI application has no prior knowledge of the available active devices. Therefore, a query message needs to be sent after the registration process. Referring to Fig. 12(a), the active devices can be selected from the drop-down menu at the top.

By simplifying the remote GUI application, another user interface has been developed on a PocketPC machine and acts as a mobile user interface. Similar to the remote GUI application, this interface also connects to the interface agent through socket communication, over a Wi-Fi link. This particular interface is developed using standard J2SE, rather than J2ME as for most mobile device interface applications. A JVM called Mysaifu [23] is used to support the operation of this interface. Mysaifu is a free JVM which supports J2SE on the Windows Mobile series of operating systems. The actual implementation of the PDA interface is shown in Fig. 12(b). This particular user interface is used to test the practicality of the XML content language on a mobile device.
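A minimal sketch of a socket-based interface application following the message exchange described above is given below: it registers with the interface agent, queries the active devices, issues a control command and reads the replies. The host, ports and XML tag/attribute names are illustrative assumptions only; the actual messages follow the schema shown in Figs. 8–11.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class InterfaceClientSketch {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("192.168.1.10", 5000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            // 1. Register this interface, telling the agent where to reach it.
            out.println("<register ip=\"192.168.1.20\" port=\"6000\"/>");
            // 2. Request the list of currently active devices.
            out.println("<deviceQuery/>");
            System.out.println("Device list: " + in.readLine());
            // 3. Issue a control command to one of the listed devices.
            out.println("<deviceControl device=\"light1\" value=\"80\"/>");
            // 4. Read the control action acknowledgement (any broadcast device
            //    status acknowledgements would follow on the same stream).
            System.out.println("Acknowledgement: " + in.readLine());
        }
    }
}
```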
Fig. 12. (a) Remote GUI application, (b) PDA interface.
5.3. 3D virtual reality interface

In this research, a realistic, highly extendable virtual reality interface that supports interactions from multiple remote clients has been developed using game technologies. The main focus of this application is to provide a 3D virtual reality (VR) interface which allows teleoperation of the devices installed in a particular physical ubiquitous environment. As with the other two user interfaces, the communication between the 3D VR interface and the interface agent is carried out through socket communication.

5.3.1. VR interface implementation

The 3D model of the VR interface is created at real world scale using 3dsMax and mapped with photo textures taken from real objects. The open source Ogre3D graphics engine [24] is selected to perform real time rendering of the VR interface, due to its large community support and the various add-on projects that provide easy integration of the physics and networking libraries necessary for this implementation. The open source, C++ based openTNL network library [25] is used for implementing the IP communication. Although the 3D VR interface is implemented in C++, the XML content language can still be applied to the communication between the 3D VR interface and the interface agent. Physics simulation is provided by the Newton Game Dynamics Engine [26], which performs real time simulation of the physics of the virtual environment. The Newton Game Dynamics Engine enables developers to create rigid bodies of objects in a virtual world with gravitational force and to model the objects with appropriate masses, sizes, materials and shapes. When velocity, force and torque are applied to an object, the dynamic behaviour of the object is automatically simulated. Physics simulation is mainly used in the VR interface for collision detection and to simulate the behaviour of objects when force is exerted on them.

5.3.2. VR interface control

At this stage, controls have been implemented in the VR interface for fixed devices such as lights, windows and curtains. These devices are flagged as ‘Controllable Objects’, whose statuses can be manually altered by the end users. There are two ways to interact with the controllable objects in the VR interface. Referring to Fig. 13(a), the device status can be altered by clicking on the specific object, such as the curtain. When a specific switch is clicked, multiple objects linked to that switch can be controlled, as shown in Fig. 13(b). In both cases, a dialogue window appears on the screen that allows end users to configure the status of a specific object, or of a group of objects connected to the switch, through graphical widgets such as push buttons and horizontal slide bars. After the end user performs an action that alters the state of a controllable object, the corresponding control command is sent to the IP interface agent to perform the actual action on the physical device.

6. Real time performance analysis

In order to examine the feasibility and practicality of the proposed AmI platform, the control response time is measured to evaluate the real time performance. The control response time is the period of time between sending out a control command and displaying the corresponding acknowledgement on the user interface. The process of sending a control action and getting an acknowledgement back needs to go through each layer of the communication stack of the proposed AmI platform and hence provides a good indication of the real time performance.
The three user interfaces introduced in Sections 5.2 and 5.3
Fig. 13. Controlling an object via (a) the assigned switch, (b) the object itself.
Table 1
Results of the control response time for each user interface over different communication media.

Columns (user interfaces): Remote GUI interface; Mobile PDA interface; 3D VR interface.
Rows: Communication; Programming language; Communication medium; Platforms; Control response time (t).
Recoverable entries: Socket; J2SE; Wired (Ethernet); Wireless (WiFi); Laptop; Laptop; t …
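The control response time reported in Table 1 can be measured at the interface side as the round trip between sending a control command and receiving its acknowledgement. The sketch below illustrates such a probe for a socket-based interface; the host, port and message tags are placeholders consistent with the earlier examples rather than the exact values used in the experiments.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class ResponseTimeProbe {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("192.168.1.10", 5000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            long start = System.nanoTime();
            // Send a control command and block until the acknowledgement arrives.
            out.println("<deviceControl device=\"light1\" value=\"80\"/>");
            in.readLine();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Control response time: " + elapsedMs + " ms");
        }
    }
}
```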