wear-UCAM: A Toolkit for Mobile User Interactions in Smart Environments*

Dongpyo Hong, Youngjung Suh, Ahyoung Choi, Umar Rashid, Woontack Woo

GIST U-VR Lab., Gwangju 500-712, Korea
{dhong, ysuh, achoi, urashid, wwoo}@gist.ac.kr
Abstract. In this paper, we propose wear-UCAM, a toolkit that serves as a smart user interface supporting mobile user interactions in ubiquitous computing environments by utilizing the user's context. With the rapid development of ubiquitous computing and its related technologies, interest in context-aware applications for mobile/wearable computing has grown in both academia and industry. In such smart environments, moreover, it is crucial for a user to manage personal information (health, preferences, activities, etc.) in order to receive personalized services. However, there are only a few frameworks or toolkits that support smart user interfaces on mobile/wearable computers and reflect the user's context in context-aware applications. In the proposed wear-UCAM, therefore, we focus on a framework for smart user interfaces in context-aware applications, taking into account how to acquire contextual information relevant to a user from sensors, how to integrate and manage it, and how to control the user's context in smart environments. For an experimental evaluation of the proposed toolkit, we employ physiological sensors and a PDA with wireless LAN and Bluetooth as the smart user interface, and experiment with it in ubiHome, a smart home test-bed.
1 Introduction

In recent years, with the development of technologies enabling ubiquitous computing and their rapid deployment, there has been increasing interest in context-aware applications for mobile/wearable computing [1-4]. In ubiquitous computing, various sensors generally must be installed in the environment in order to serve users [2]. Consequently, it is not easy for a user to manage the personal information gathered through environmental monitoring [5]. In mobile/wearable computing, in contrast, personal information can be retrieved from various wearable/mobile sensors, analyzed, and used in services under the explicit control of the user rather than of the environment [5,6]. Accordingly, a user interface for mobile/wearable computers should allow users to control their own personal information as well as manage their preferences so as to obtain quality of service. This is why mobile/wearable computing has become a focus of attention in the era of ubiquitous/pervasive computing. In addition, mobile/wearable computing complements inherent limitations of ubiquitous computing, such as the cost of infrastructure construction and privacy concerns. Nonetheless, there have been only a few research activities on context-aware application development toolkits for mobile/wearable computing compared to those for ubiquitous computing [7-12]. Furthermore, it is burdensome for developers to adapt toolkits designed for ubiquitous computing to mobile/wearable computing, because such toolkits include many sensor mechanisms for detecting where we are, what we want, and so on. Therefore, a toolkit is needed that supports the acquisition of contextual information from wearable sensors, the management of the user's profile for feedback, and the control of the user's personal information for privacy in ubiquitous computing environments.

This paper is intended to provide developers with a practical solution for implementing smart user interfaces for context-aware applications. In this regard, we propose wear-UCAM, a toolkit for the rapid development of smart user interfaces for personalized services in ubiquitous computing environments. The proposed wear-UCAM provides developers with methods for acquiring personal information from wearable sensors, for manipulating and analyzing the acquired information, and for processing flows of contextual information in applications for personalized services. The proposed wear-UCAM has the following three characteristics: 1) acquisition and analysis of the user's physiological signals, 2) user profile management based on the extracted and analyzed information, and 3) privacy from other users or environmental sensing equipment, achieved by providing the user with a control mechanism for personal information disclosure. As a toolkit, wear-UCAM provides procedural abstraction when processing the user's context from sensor to service.

* This work was supported by the Seondo project of MIC, Korea.
Moreover, wear-UCAM splits an application into wearSensor and wearService in order to keep sensors and services independent [13,14]. This separation of sensors and services is also a key feature of ubiquitous computing environments [11,15]. Along with the separation, wear-UCAM includes a networking interface to support dynamic connections between sensors and services, and it uses a unified context as a common data format among them. Likewise, wear-UCAM contains many components that support extracting and analyzing a user's context from various sensors, which we explain in Section 3. Therefore, the proposed wear-UCAM can serve as a rapid application development toolkit for developers who want to implement a smart user interface for receiving personalized services from context-aware applications in ubiquitous computing environments.

In this paper, we outline the fundamental architecture and design of wear-UCAM. In addition, we discuss empirical implementation issues and experimental results. To show the effectiveness of wear-UCAM, we utilize a physiological sensor and a wireless LAN and Bluetooth enabled PDA as a user interface in a smart home environment. This paper is organized as follows. In Section 2, we review relevant context-aware applications and frameworks. In Section 3, we briefly sketch the overall architecture and design issues of wear-UCAM and its components. In Sections 4, 5 and 6, we explain the implemented components for context integration, context management, and information disclosure control, respectively. The experimental setup and results, with a detailed analysis of wear-UCAM, are described in Section 7. Finally, we discuss the proposed wear-UCAM as a toolkit for mobile user interactions, together with future work, in Section 8.
2 Related Work

In this section, we review previous context-aware application frameworks. As explained above, feedback from the user is an important factor in user interactions in ubiquitous computing environments. There are several good survey papers that summarize context-aware applications [2-4]. However, we find that most of this work is tied to specific context-aware applications that exploit particular contextual information such as the user's identity (name), location, and time. In this review, we focus only on context-aware application models. Additionally, we check whether each previous context-aware system exploits physiological signals in context acquisition, provides privacy protection, and supports user feedback.

Schilit (1994) describes the context-aware life cycle as context discovery (sensing or context acquisition), context selection (interpretation), and context use. This is a general architecture for a context-aware application and probably the earliest attempt; however, it is mainly focused on location and an active badge infrastructure [7]. The Stick-e Notes framework was developed at the University of Kent at Canterbury as an electronic equivalent of Post-It notes. Brown et al. (1997) propose a general application framework based on stick-e notes to enable easy creation of context-aware applications [8]. Their architecture comprises three component types: a triggering component (matching the user's present context with the context of loaded stick-e notes), an execution component (which can be any existing program), and a set of sensor components (periodically feeding information to the triggering module and hiding sensor-specific issues). In addition, a context is represented in SGML to enable easy exchange, publishing, and extensibility. Pascoe et al. (1999) continue this line of work based on Brown's vision, proposing the Context Information Service (CIS).
This is a software architecture which supports context-aware applications through the maintenance of an object-oriented view of the world comprised of artifacts, states, sensors, synthesizers, monitors and a catalog [9]. The design of the system is focused on providing a set of tools to assist in the collection of data such as location and time. Such information is also tagged with locations on a map for later analysis. Situated Computing Service was developed at the Hewlett Packard Laboratories in 1997 as a framework for context-aware applications [10]. The idea is to have a Situated Computing server that has a set of services to interpret sensor data into useful events. The services are responsible for retrieving information from the sensors and abstracting that information for applications. The model is on a conceptual level, and a single prototype has been constructed as the proof of concept using active tag detectors as a single type of sensor. Context Toolkit (2000), developed at GATECH, aims to separate the context acquisition process from the delivery and use of context. For this reason, it uses three types of abstractions: widgets, servers, and interpreters [11]. Context widgets are software components that provide applications with access to context sensed from
their operating environment. With this aid, applications are freed from the context acquisition process. Context servers are used to collect the entire context about a particular entity, such as a person. The context server subscribes to every widget of interest and acts as a proxy for the application, collecting information about that particular entity. Context interpreters implement the interpretation of context information: they translate between different formats or merge different pieces of context information to provide new representations. More recently, Dey (2005) introduced context mediation to resolve ambiguity arising from a recognizer's uncertainty about how to interpret the user's inputs [12].

As reviewed above, most existing context-aware application models exploit only a few contextual cues such as location, identity, and time. As Chen and Kotz (2000) also indicate, other contextual information and context history are generally believed to be useful but are rarely used; it is not clear whether other contexts are simply difficult to obtain or actually not useful [2]. Furthermore, many context-aware applications have not been deployed or made commercially available yet because of the complexity of implementing them [3]. In this regard, we believe another application framework is needed that can exploit further contextual information (physiological signals) in addition to location, and keep the user's profile as a history for later use. Moreover, a control mechanism for personal information should be built into the framework. Above all, a context-aware application framework should provide developers with an easy interface for implementing applications, while hiding the complicated components for context-awareness (context acquisition, processing, and management) from them.
3 wear-UCAM

Fig. 1. Conceptual wear-UCAM Architecture and Software Package Structure. (The figure shows wearSensor — signal acquisition, feature extraction, and preliminary context generation — connected through the Communicator networking interface to wearService — Context Integrator producing the integrated context, ContextManager (User Profile Manager) producing the final context, Interpreter, and ServiceProvider — along with the initial package ucam, the common packages context and comm, and the core packages wearucam, wearsensor, wearservice, and examples.)
As shown in Figure 1, wear-UCAM forces application developers to split an application into wearSensor and wearService and to connect each wearSensor and wearService through the Communicator (a networking interface). Through this separation of sensor and service within the application, we can utilize not only wearable sensors on the user but also other sensors in the environment, which may have more computational power. This mechanism is especially important for resource-restricted computing platforms such as wearable and mobile computers. At the same time, personal context, being highly relevant to personal privacy, is protected from other wearable computers and from environmental monitoring by the explicit controls of the user.

3.1 Software Architecture for a Toolkit

In order to explain the architecture of wear-UCAM, we need to introduce UCAM briefly. For context-aware applications in future computing environments, application developers should be able to manipulate the user's context easily; however, they need not know the low-level details of context-aware mechanisms to implement context-aware applications. In this regard, UCAM is a framework for seamless user interactions that provides developers with a way to implement context-aware applications. As an instance of UCAM, wear-UCAM is a toolkit that supports developers who want to rapidly prototype applications in wearable or mobile computing environments.

As illustrated in Figure 1, wear-UCAM includes the following common packages: the context package for the unified context model, and the comm package for the networking interface. Thanks to these two packages, sensors and services are able to communicate with each other using a common context data format. The key packages of wear-UCAM are wearsensor, wearservice, and examples, as shown in Figure 1. The wearucam package includes various classes supporting sensors and services applicable in mobile/wearable computing environments. For instance, wearSensor extracts signals from sensors and constructs a preliminary context by analyzing the retrieved signals. Developers are therefore able to implement their applications for mobile or wearable computing without caring about the details of the underlying context-aware mechanisms. Furthermore, we separate different types of sensors in order to support various wearable sensors on a user.
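To make the unified context concrete, the following minimal sketch models a 5W1H context as a key-value container. The class and method names are hypothetical illustrations, not the actual API of the context package:

```java
import java.util.EnumMap;
import java.util.Map;

// Minimal sketch of a unified 5W1H context exchanged between sensors and
// services. Class and method names are illustrative, not the wear-UCAM API.
public class Context5W1H {
    public enum Element { WHO, WHEN, WHERE, WHAT, HOW, WHY }

    private final Map<Element, String> elements = new EnumMap<>(Element.class);

    public void put(Element e, String value) { elements.put(e, value); }

    // A sensor usually fills only some elements; missing ones stay blank.
    public String get(Element e) { return elements.getOrDefault(e, ""); }

    public boolean isFilled(Element e) { return !get(e).isEmpty(); }
}
```

A preliminary context from a physiological sensor would fill only 'who', 'what', and 'how'; the remaining elements are completed later by the ContextIntegrator.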
In short, the architecture of wear-UCAM enables wearable computing application developers to exploit context-aware mechanisms, and it supports communication among heterogeneous applications through the context and networking interfaces in heterogeneous computing environments.

3.2 Components of wear-UCAM

As shown in Figure 1, wear-UCAM requires a sequence of context-processing steps in order to exploit the user context in applications, in which context information is acquired from various sensors and utilized in various services. Table 1 shows the core components for processing the user's context with respect to sensors and services. The class wearBioSignalSensor, for example, is a specific instance of the Sensor class and uses components in the wearsensor package of wear-UCAM; it acquires physiological signals, extracts features from the signals, and transforms the features into values. The transformed values are used to construct a preliminary context, which is sent to other sensors or services through the networking interface. In the comm package,
Communicator provides dynamic connections among sensors and services to deliver the formatted user context. Communicator includes context filtering, dynamic multicast group creation, and basic networking functionality for delivering contexts. The Service class then provides an appropriate service to the user through the processing and manipulation of contexts from sensors and other services. ContextIntegrator in wearService integrates the user's physiological signals from physiological sensors, or contexts from other services, and generates an integrated context by applying an integration algorithm (Section 4). ContextManager then produces the final context by comparing the context delivered from Interpreter (the user's conditional context) with the integrated context. UserProfileManager is an instance of the ContextManager class that updates and manages the user's profile and provides personal information disclosure control [16]. Although we utilize the unified context model in this paper, the contextual elements are modified to fit wear-UCAM [17]; for example, contextual information obtained from physiological signals is added.

Table 1. Functionalities of components in wear-UCAM

wear-UCAM    Component          Functions
wearSensor   Sensor*            Basic functions of wearSensor (signal processing, feature extraction, preliminary context)
Networking   Communicator       Dynamic connection among sensors and services
wearService  Service*           Basic functions of wearService (service classification, registering ServiceProvider)
             ContextIntegrator  Generates the integrated context by analyzing preliminary contexts
             ContextManager     Generates the final context by manipulating the user's preference and the integrated context
             Interpreter        Provision of the user's conditional context
             ServiceProvider*   Basic interfaces for developers of context-aware applications

* Abstract Class
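The division of labor in Table 1 can be sketched as follows. Every method signature here, and the delivery logic, are illustrative assumptions rather than the real wear-UCAM interfaces:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the sensor/service split: a Sensor produces a preliminary
// context, the Communicator delivers it, and a Service consumes it.
// All signatures are illustrative, not the actual wear-UCAM classes.
public class ToolkitSketch {

    public abstract static class Sensor {
        // Acquire raw signals, extract features, and build a preliminary context.
        public abstract String generatePreliminaryContext();
    }

    public abstract static class Service {
        public abstract void onContext(String context);
    }

    // Dynamic connection among sensors and services.
    public static class Communicator {
        private final List<Service> subscribers = new ArrayList<>();

        public void subscribe(Service s) { subscribers.add(s); }

        // Deliver one sensor's preliminary context to every subscribed service.
        public void deliver(Sensor sensor) {
            String ctx = sensor.generatePreliminaryContext();
            for (Service s : subscribers) s.onContext(ctx);
        }
    }
}
```

Because the Communicator mediates every delivery, a service never needs to know which concrete sensor (wearable or environmental) produced a context — which is the independence property the text describes.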
4 Context Integration of Physiological Signals

In the proposed wear-UCAM, the integration of contextual information is the first step in processing context for context-aware services. The role of ContextIntegrator in wear-UCAM is to fill out each contextual element (5W1H) using the preliminary contexts obtained from various sensors. That is, it creates the integrated context from preliminary contexts that have blanks in their contextual elements, since a single sensor cannot fill out all of them. In this section, we focus on the 'how' and 'why' contextual elements in integrating the preliminary contexts from physiological sensors, which deliver the body conditions of a user such as heart rate, temperature, and skin humidity. The 'why' contextual element conveys higher-level context about the user's internal state (intention, attention, emotion, etc.) by inferring, interpreting, and analyzing the other contextual elements (who, when, how, and what). The overall procedure of ContextIntegrator is illustrated in Figure 2.
Fig. 2. The structure of context integrator
The ContextIntegrator in wear-UCAM has two kinds of fusion methodologies: temporal fusion and spatial fusion. These fusion functions are applied to each 5W1H field. Temporal fusion reduces the burden of real-time data processing from various sensors by producing one integrated context with a voting algorithm that selects the key-value reported most often. For the voting algorithm, we use representative criteria to integrate the preliminary contexts under multiple hierarchies of context. For spatial fusion, the ContextIntegrator module supports personalized analysis of physiological signals through adaptive thresholds. In general, the acquisition of physiological signals is fragile with respect to environmental and sensing conditions. In the 'why' fusion module, therefore, we use spatial fusion for user-adaptive analysis of physiological signals, covering states such as tension, stress, and emotion. In this paper, as an example, we analyze the stress level by integrating the GSR (Galvanic Skin Response), PPG (PhotoPlethysmoGraph), and temperature signals obtained from a wrist-worn physiological sensor. Measurements of GSR reflect the amount of humidity on the skin surface and are used to analyze states of tension with a user-adaptive threshold [18]. In addition, the PPG and temperature signals reflect underlying psychological states according to changes in the state of the autonomic nervous system [19].
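To illustrate the temporal fusion step, the following sketch implements majority voting over the values reported for one contextual element during a time window. This is a simplified stand-in for the actual voting algorithm, not the wear-UCAM implementation:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of temporal fusion by majority voting: among the values reported
// for one 5W1H element during a time window, pick the most frequent one.
public class TemporalFusion {
    public static String vote(List<String> reportedValues) {
        // Count how often each value was reported.
        Map<String, Integer> counts = new HashMap<>();
        for (String v : reportedValues) {
            counts.merge(v, 1, Integer::sum);
        }
        // Select the value with the maximum count.
        String best = null;
        int bestCount = -1;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            if (e.getValue() > bestCount) {
                best = e.getKey();
                bestCount = e.getValue();
            }
        }
        return best;
    }
}
```

For example, three location readings "TV", "TV", "Sofa" collected within one window would fuse to the single 'where' value "TV".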
5 Context Management for the User Profile

In general, context management is a key factor in utilizing the processed contextual information in context-aware applications. The UserProfileManager (UPM) generates the final context by comparing the integrated context with the conditional context. First, UPM receives the integrated contexts from ContextIntegrator. Then, it compares the integrated context with the conditional context, which consists of the user conditional context and the service conditional context. Finally, it generates the final context and sends it to ServiceProvider. The role of UPM is to learn the user's preferences for a given service so that a personalized service can be provided. Specifically, it generates the user conditional contexts and disseminates them into the given service environment while automatically updating them. Figure 3 shows the flows of contextual information in UPM, where IC is the integrated context, FC the final context, SS the service status, and UCC the user conditional context. In addition, FC' and UCC' are the results of processing the given FC and UCC.

Fig. 3. User Profile Manager in wearService. (The GUI, wearSP, and wearCI feed UPM (wearCM), whose modules for UCC construction, learning/inference, and matching & selection exchange FC' and UCC' entries with a context DB and with the Interpreter, wearInt, before FC generation.)
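The comparison performed by UPM can be sketched as checking whether the integrated context satisfies every condition listed in a UCC. The real matching in wear-UCAM is richer, so treat this as an illustrative simplification with hypothetical string-keyed contexts:

```java
import java.util.Map;

// Sketch of UPM matching: a final context (FC) is generated only when the
// integrated context (IC) satisfies the user conditional context (UCC).
// Here both are simple maps from element name to value; real matching
// in wear-UCAM involves learning and richer comparison.
public class UpmMatching {
    // ucc maps element name -> required value; ic maps element name -> observed value.
    public static boolean matches(Map<String, String> ic, Map<String, String> ucc) {
        for (Map.Entry<String, String> cond : ucc.entrySet()) {
            if (!cond.getValue().equals(ic.get(cond.getKey()))) return false;
        }
        return true;
    }
}
```

For example, a UCC requiring 'where' = "TV" would match an IC locating the user at the TV, triggering the corresponding service.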
Specifically, user profiles are described in the 5W1H of the user conditional context (UCC), with 'who' as the focal element. The 3W1H elements ('where', 'when', 'how', 'why') of a UCC represent where a user is, when he/she enjoys a specific service, which gesture he/she makes, and what his/her stress level is. The 'what' element denotes what kind of service he/she likes to enjoy.

Fig. 4. User Profile Handling Module. (A CM thread subscribes to the ContextDatabase for eventing (context adding) and fetches integrated contexts from the IC database; a learning engine trains a neural network and maintains an inference net; prediction contexts are compared and matched against the UCC database (primary and secondary DBs) and the FC database, with GetContext/PutContext exchanges among ContextIntegrator, Interpreter, and ServiceProvider.)
The final context (FC) contains the results of the service execution a user desires. Thus, we can acquire the service-specific user preferences, which change dynamically, by collecting the final contexts from services. After learning and matching contexts, we can update the user conditional context (UCC), which represents a user's indirect intention to enjoy a specific service, so that users can be provided with personalized services. Figure 4 illustrates the process of constructing a UCC and disseminating it into the environment through the network. The procedure of UCC construction and dissemination is as follows. First, we construct a primary database within the UCC database from static user-related information provided through a GUI. Then, FCs are aggregated at a periodic interval and stored in the FC database. At regular intervals, the UPM analyzes and learns the dynamic user-related information from the FCs. Finally, through the context matching process, the UPM generates a new UCC, updates a secondary database in the UCC database with it, and sends it to other services.

Fig. 5. Context Inferring and Matching. (A training set of N FC samples (who, when, where, how, why) from the FC database is used to train the neural network; the resulting inference net predicts missing 5W1H elements of prediction contexts; matching and selection then combine prediction contexts with UCCs, integrated contexts from biological sensors, and the user's explicit UCC commands given through the GUI and Interpreter; a decision is made based on a ranking of the preferred services/contents, followed by UCC dissemination.)
Although users who are new to smart environments can provide their personal information and service-specific preferences through a GUI offered by the Interpreter in wear-UCAM, it is necessary to update a user's service-specific preferences automatically when the user's command is not explicitly provided. And what if users want to enjoy a service for which they have not initially set their preferences through the GUI? For these situations, it is also necessary to learn a user's behavior patterns to infer service-specific preferences, so that the results of learning can be reflected in the dynamic update of the user profile. Figure 5 shows the procedures of network training, context inferring, and context matching. In updating a user profile, the learning engine predicts the appropriate service for a given input, where the input represents the current situation of the user's surroundings. For learning user preferences, a Multi-Layer Perceptron (MLP) neural network is used [20]. We use a set of contexts as input because contexts express the user's behavior patterns. If the number of elements in each context is EN and the number of contexts is CN, then the total number of input neurons equals EN × CN. Through experiments, we select the number of neurons in the hidden layer; the number of neurons in the output layer is the number of service categories to be provided to the users. Since supervised learning is employed, input and output pairs of training patterns are fed to the neural network.
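The EN × CN input sizing can be sketched as flattening CN contexts of EN numeric elements each into one input vector. This is illustrative code; the actual encoding used in wear-UCAM differs in detail:

```java
// Sketch of sizing the MLP input layer: with CN contexts of EN numeric
// elements each, the flattened input vector has EN * CN entries.
// The encoding of contextual values as doubles is an assumption.
public class MlpInput {
    public static int inputNeurons(int elementsPerContext, int contextCount) {
        return elementsPerContext * contextCount;
    }

    // Flatten CN contexts (each a numeric array of length EN) into one input vector.
    public static double[] flatten(double[][] contexts) {
        int en = contexts[0].length;
        double[] input = new double[en * contexts.length];
        for (int i = 0; i < contexts.length; i++) {
            System.arraycopy(contexts[i], 0, input, i * en, en);
        }
        return input;
    }
}
```

With EN = 4 (the 3W1H elements 'when', 'where', 'how', 'why') and CN contexts in the window, the input layer therefore has 4 × CN neurons.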
6 Personal Information Disclosure Management

Disclosure of personal information is inevitable for personalizing services and applications in context-aware computing environments. Context-aware application developers need to provide the user with flexible ways to control when, to whom, and at what level of detail he/she discloses his/her personal information. Most research activities in this realm have focused primarily on protecting the personal information of users in context-aware systems deployed in institutes and organizations [21-24]. However, privacy concerns that arise in office environments do not necessarily apply to home environments. As pointed out by Hindus (1999), "homes are fundamentally different from workplaces", "customers are not knowledge workers", and "families are not organizations" [25]. Family members living in the same home are more closely knit than colleagues in a workplace, and privacy from other family members is not as big an issue as it is for an enterprise. However, while family members in a smart home may be quite frank in sharing personal information with each other, they may desire to keep certain elements of their personal information hidden from context-aware services in the home. This signifies the need for mechanisms that give family members the flexibility to adjust the granularity of context information disclosed to service providers. In this regard, we illustrate this situation with the model scenario of a context-aware movie service [26]. We draw inspiration for this service from the context-aware Movie Player system and a physiological signal sensor. The movie service provides the user with appropriate movies based upon the level of detail of the context information he/she discloses to the service.
The user can choose and assign priorities to his/her movie preferences in the 'what' element of the 5W1H in the user conditional context (UCC), via a Graphical User Interface (GUI) offered by Interpreter. For example, if the user does not want to disclose some of his/her preferences to the service, he/she can set the value to "0" for those elements, as shown in Table 2.

Table 2. User's preferences for the context-aware movie service

Preference  SF  Horror  Drama  Comedy  Animation
Priority    10  0       7      5       3
The 'who' element of the 5W1H can accommodate the user's age, and the 'how' element can keep track of his/her physiological condition. There is a tradeoff between the user's privacy and the utility of the service: the more personal information a user discloses, the more customizable and beneficial the service becomes. The more specific the user is in revealing his/her personal context information to the context-aware service, the more relevant the movies he/she is likely to receive from the service with respect to his/her preferences ('what'), age ('who'), and physiological condition ('how'). If the user has chosen to disclose his/her physiological condition, the service takes into account whether the user is in tension or not; if so, certain genres of movies (Horror, Adventure, Action, etc.) are excluded from the search results. Similarly, the user can leave the age attribute void, and consequently the movies will be filtered without taking the age factor into account. In this way, we provide the user with the option to manage the disclosure of his/her personal information and to use the service in relation to the disclosed information.
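The disclosure-aware filtering can be sketched as follows, using the priorities of Table 2. Treating priority 0 as "undisclosed" and dropping Horror for a tense user are simplifications of the described behavior, and the method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the context-aware movie service's filtering: genres with
// priority 0 are treated as undisclosed preferences, and if the user
// discloses a tense physiological state, exciting genres such as
// Horror are dropped. A simplification of the behavior in the text.
public class MovieFilter {
    public static List<String> recommend(Map<String, Integer> preferences, boolean tense) {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, Integer> p : preferences.entrySet()) {
            if (p.getValue() == 0) continue;                    // undisclosed preference
            if (tense && p.getKey().equals("Horror")) continue; // excluded when tense
            result.add(p.getKey());
        }
        // Rank the remaining genres by descending priority.
        result.sort((a, b) -> preferences.get(b) - preferences.get(a));
        return result;
    }
}
```

With the priorities of Table 2 and a calm user, the ranking would be SF, Drama, Comedy, Animation, with Horror never recommended because its priority was set to 0.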
7 Experimental Results

Figure 6 illustrates the interaction between wear-UCAM and services in the ubiHome test-bed, where we use a PDA-based wear-UCAM simulator. We use two services, MRWindow and ubiTV, which provide multimedia services based on the user's preferences.
Fig. 6. Experimental setup and interactions between wear-UCAM and services. (wearSensor acquires location and physiological signals through an IR receiver, BioSensor, and ubiTrack; the UserProfileManager in wearService combines them with the initial user preferences for services and the information disclosure settings, and triggers the environmental services, ubiTV and MRWindow, through user conditional contexts #1-#3.)
As shown in Figure 6, a user disseminates his or her preferences (the initial user preference, i.e., the UCC) in order to use services in ubiHome. However, the user can control the dissemination of his or her contextual information, i.e., whether or not to send it to services. Based on these preferences, a corresponding service is triggered. While the user is using the service, location and physiological signals are acquired from sensors. In this experiment, we exploit the processed physiological signals, which indicate whether the user is in tension or not, and a location sensor called ubiTrack [27], which tracks the user's location in ubiHome. Any change in these signals or in the service is contextualized and reflected in the user's preferences. Through this procedure, the user can interact more naturally with services in smart environments. Table 3 shows the specifications of the wear-UCAM simulator as well as the sensors and software employed.

Table 3. Hardware and Software for wear-UCAM

Hardware       Function & Role                  Specs
iPAQ h5550     wear-UCAM simulator              PocketPC 2003, RAM 128MB, ROM 48MB
BioSensor      Extracts physiological signals   Bluetooth
ubiTrack       Tracks user's location           Serial communication

Software       Function & Role                  Specs
Java           Desktop development toolkit      J2SE (Java 1.5)
Personal Java  PDA development toolkit          Java 1.1.8 compatible
wear-UCAM      Toolkit for wearable computing   v1.0a2 (118KB .jar)
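The dissemination control described above can be sketched as a gate that removes the elements a user has marked private before the context leaves the device. The method names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch of dissemination control: before a context is sent to services,
// elements the user marked as private are removed. The original context
// kept on the device is left untouched. Names are illustrative.
public class DisclosureGate {
    public static Map<String, String> filterOutgoing(Map<String, String> context,
                                                     Set<String> privateElements) {
        Map<String, String> disclosed = new HashMap<>(context);
        disclosed.keySet().removeAll(privateElements);
        return disclosed;
    }
}
```

For example, a user may disclose 'where' to the ubiTV service while withholding the 'how' element that carries physiological measurements.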
In order to acquire contextual information relevant to a user, we employ two kinds of sensors; a wrist type physiological sensor and an IR (infrared)-based location sensor. In this experiment, we implement BioSensor and ubiTrack sensor module from Sensor class because this sensor class in wear-UCAM provides the necessary procedures for generating preliminary context. Table 4 shows generated preliminary context. Table 4. The generated preliminary context from sensors Component
Contextual Information Note Name=Dongpyo, Priority=4 SensorID=1, SensorType=BioSensor GSR.mean=14.37 GSR.var=3.0 BioSensor PPG.HR_mean=63.0 How PPG.HR_var=3.4 PPG.HR_LF_power=70.0, Preliminary PPG.HR_HF_power=62.0 sensor context SKT=36.5 Who Name=Dongpyo, Priority=4 IndoorLocation.x=140 IndoorLocation.y=160 ubiTrack Where SymbolicLocation=TV Direction=0.0 What SensorID=2, SensorType=LocationSensor *GSR(Galvanic Skin Response), PPG(photoplethysmogram), SKT(skin temperature) Who What
The ContextIntegrator integrates the set of preliminary contexts generated by the various sensors through temporal and spatial fusion. In this experiment on context integration, we find that GSR can be used to analyze the state of tension. Although the preliminary context from BioSensor carries three kinds of signals, we only process the GSR signal in the ContextIntegrator, because the PPG signal is not robust to noise and motion, and the SKT signal is almost invariant. As a result, the integrated contextual information is used to update the user's preference while the user is using a service, as shown in Table 5.

Table 5. The integrated context from the ContextIntegrator

Component                  Contextual Information                      Note
Service             Who    Name=Dongpyo, Priority=4                    Integrated
ContextIntegrator   When   130527 (hh:mm:ss)                           context
                    Where  IndoorLocation.x=140, IndoorLocation.y=160,
                           SymbolicLocation=TV, Direction=0.0
                    What   -
                    How    GSR.mean=14.37, GSR.var=3.0
                    Why    Tension=0
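The GSR-based tension decision that produces the "Why" field above could be sketched as a threshold test against a per-user baseline. The baseline values below are invented for illustration; they are not the thresholds used in the actual experiment.

```java
// Hypothetical sketch of the GSR-based tension decision made in the
// ContextIntegrator; the baseline values are assumptions for illustration.
class TensionEstimator {
    // Tension=1 when mean GSR and its variance both exceed the user's
    // baseline, Tension=0 otherwise (as in the "Why" field of Table 5).
    static int tension(double gsrMean, double gsrVar, double baselineMean, double baselineVar) {
        return (gsrMean > baselineMean && gsrVar > baselineVar) ? 1 : 0;
    }
}
```

With the values of Table 5 (GSR.mean=14.37, GSR.var=3.0) and a baseline above both readings, this rule yields Tension=0, consistent with the table.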
With the integrated context and the initial user preference (UCC), the UPM finds a matched service in the given service environment. Since the UPM produces UCCs that reflect the preferences that suit the user best, it plays a key role in providing personalized services in the environment. After a corresponding service is executed, the UPM keeps updating the user's preferences based on the integrated context from wearable sensors and the feedback from the service. For the experiment on learning the user's preferences, we exploit a Multi-Layer Perceptron (MLP) neural network.

[Fig. 10. The first data set of each layer in the MLP. The input layer assigns one neuron each to the WHO (e.g., Dongpyo), WHEN (morning, day, evening), WHERE (TV, sofa, MRWindow, audio), HOW (GSR), and WHY (tension level: serene, composing, nervous, anxious) contexts, encoded in a few bits each (3, 2, 2, 2, and 4 bits); the hidden layer consists of N layers with M neurons per layer (N x 4 neurons); the output layer maps to the WHAT context, i.e., a service and its parameters (movie genres: SF, horror, drama, comedy, animation).]
The procedure for training the network is as follows. First, the input data are obtained in the form of 3W1H from the FC database. After obtaining the context (3W1H), we apply the neural network (NN) to infer the user's intention from the input contexts. Each input vector is converted into a numerical value; these numerical values feed the hidden layer for processing, where the weights and biases of the network are adjusted. The trained result is compared with the given target data: if they are identical, the training phase is done; if not, the network is adjusted again until the trained result is consistent with the target data. After proper training, the MLP associates the input vectors with a specific service/service-parameter category, i.e., it categorizes the user's contexts into several service categories. In this framework, a set of the user's contexts (when, where, how, and why) is mapped directly to a set of symbolic representations of services as the input and output vectors of the MLP. Figure 10 shows one mapping set of each layer in the MLP. In the input layer, as shown in Figure 10, only one neuron is assigned to each element of the 3W1H context, i.e., 'when', 'where', 'how', and 'why'; likewise, in the output layer, only one neuron is assigned to the 'what' context. Note that, alternatively, the proposed UPM can make the analysis procedure even simpler: without extracting the user's status and desire directly from physical sensors, the context information {when, where, how, why} obtained from the context-aware application model can be connected directly to the input layer of the NN. Each context is denoted by numerical values which stand for where the user is, when he is observed, which gesture he takes, and what his tension level is; these contexts can thus be trained to map to the proper output categories. The final context is generated as shown in Table 6.
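The inference step of the trained network can be sketched as a forward pass that maps a numerically encoded {when, where, how, why} vector to a service category. The architecture and the randomly initialized weights below are illustrative assumptions; the actual network in the experiment is trained against target data as described above.

```java
import java.util.Random;

// Minimal MLP forward-pass sketch: a numerically encoded {when, where,
// how, why} context vector is mapped to a service category ("what").
// Architecture and random weights are assumptions for illustration.
class ContextMlp {
    private final double[][] w1; // input -> hidden weights
    private final double[][] w2; // hidden -> output weights

    ContextMlp(int inputs, int hidden, int outputs, long seed) {
        Random r = new Random(seed);
        w1 = new double[hidden][inputs];
        w2 = new double[outputs][hidden];
        for (double[] row : w1)
            for (int i = 0; i < row.length; i++) row[i] = r.nextGaussian();
        for (double[] row : w2)
            for (int i = 0; i < row.length; i++) row[i] = r.nextGaussian();
    }

    private static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // Returns the index of the winning service category (argmax of the output layer).
    int classify(double[] context) {
        double[] hidden = new double[w1.length];
        for (int j = 0; j < hidden.length; j++) {
            double sum = 0.0;
            for (int i = 0; i < context.length; i++) sum += w1[j][i] * context[i];
            hidden[j] = sigmoid(sum);
        }
        int best = 0;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int k = 0; k < w2.length; k++) {
            double sum = 0.0;
            for (int j = 0; j < hidden.length; j++) sum += w2[k][j] * hidden[j];
            if (sum > bestScore) { bestScore = sum; best = k; }
        }
        return best;
    }
}
```

In training, the weights would be adjusted (e.g., by backpropagation) until the argmax agrees with the target service category for each stored 3W1H sample.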
When the final context is disseminated to the service, an alerting message is displayed which asks the user whether he or she wants to use the service or not. In this experiment, the user discloses his or her preference for SF movies to the ubiTV service because he or she sets the value to "10". Thus, the user does not have to take any action to receive the service.

Table 6. The final context from the ContextManager

Component                Contextual Information                      Note
Service           Who    Name=Dongpyo, Priority=4                    Final
ContextManager    When   130640 (hh:mm:ss)                           context
                  Where  IndoorLocation.x=140, IndoorLocation.y=160,
                         SymbolicLocation=TV, Direction=0.0
                  What   ServiceName=ubiTV, Parameter={"SF", 10},
                         Function=Play
                  How    GSR.mean=14.37, GSR.var=3.0
                  Why    Tension=0
To show the effectiveness of the proposed wear-UCAM as a toolkit, we simulated it with two sensors and two services. However, we focused on wear-UCAM itself rather than on the whole scenario, which includes environmental services. The processing time for constructing a final context in wear-UCAM, from sensor to service, is

FC(ΔT) = PC(ΔT1) + Network(ΔT2) + IC(ΔT3) + UCC(ΔT4)

where the final context processing time FC consists of the preliminary context (PC), network delivery (Network), integrated context (IC), and user's conditional context (UCC) generation times. It is difficult, however, to measure each component's context generation time directly, because wear-UCAM is designed for distributed computing environments and each component is implemented as a Thread. Thus, we exploit Java monitoring tools such as jps, jstat, and visualgc [28], and follow the guideline of [29] to measure the performance of wear-UCAM.

The compilation time to Java byte code is 125.158 ms, accumulated over the execution, whereas the compilation time reported by the jstat monitoring tool indicates the time of each compilation step. The total class loading time is 21.650 ms. Compared with visualgc, jstat offers more powerful options for monitoring a Java application, but it is a text-based tool.

With the help of jstat, we also observe several properties of the sensor and the service. First of all, the compile time of the service depends on the number of compilation tasks performed: for example, when 267 compilation tasks have been performed, the compile time is 0.18 ms; however, the number of compilation tasks and the compile time converge after a period of time. Regarding class loading, the sensor requires 51 loaded classes, taking 0.01 ms, while the service requires 95 loaded classes, taking 0.02 ms; the loaded bytes are 45.9 KB and 91.7 KB, respectively. Regarding memory usage, we take the current permanent space capacity as a clue; it required 8192 KB.

Throughout the experiment, the generation of the final context is highly dependent on the user's conditional context in wear-UCAM. This is because the final context is generated if and only if the user's conditional context and the integrated context, which is produced from the sensors, are matched.
Therefore, the matching algorithm in the ContextManager must be well designed, and the user's conditional context in the Interpreter must be provided flexibly, if we want to increase the matching rate of the final context. Furthermore, the ContextIntegrator should deal with unknown fields of the preliminary context within a proper time window.

Lastly, we would like to point out several issues encountered while implementing and developing a smart user interface with the proposed wear-UCAM. wear-UCAM is packaged as a .jar file so that it can be easily distributed and kept organized in a compressed form. As shown in Table 3, the package size of the current version of wear-UCAM is 118 KB, which is compact enough to be deployed on a restricted computing platform such as a mobile or wearable computer. In the implementation of wear-UCAM, we adopt Java because it can be easily deployed on various platforms and it is a well-defined language for realizing object-oriented concepts. However, some difficulties arise when applying it to wear-UCAM. For example, we currently use Personal Java, which is no longer supported and has restricted features compared with the currently released J2ME version for mobile applications. Another problem is that most hardware interfaces and real-time signal processing are implemented in C or C++. Thus, we have to take into account the hardware interface with Java, or another version of wear-UCAM.
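The if-and-only-if matching rule between the user's conditional context and the integrated context could be sketched as a field-wise comparison. The flat key-value representation below is an assumption for illustration, not the actual ContextManager implementation.

```java
import java.util.Map;

// Sketch of the matching rule: a final context is produced only when every
// field required by the user's conditional context (UCC) appears with the
// same value in the integrated context. Representation is an assumption.
class ContextMatcher {
    static boolean matches(Map<String, String> ucc, Map<String, String> integrated) {
        for (Map.Entry<String, String> e : ucc.entrySet()) {
            if (!e.getValue().equals(integrated.get(e.getKey()))) return false;
        }
        return true;
    }
}
```

A more flexible Interpreter, as argued above, would relax the exact-equality test (e.g., ranges for numeric fields or a time window for "when") to raise the matching rate.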
8 Discussion and Future Work

In this paper, we proposed wear-UCAM to support rapid context-aware application development for a smart user interface. Through wear-UCAM, a user can apply contextual information obtained from a physiological sensor and a location sensor to his or her preference. In addition, the user can update his or her preferences for certain services with the contextual information and feedback from other services. Finally, the user can control whether or not to disclose his or her personal context information. The proposed wear-UCAM can provide appropriate services to users based on their preferences, as it supports retrieving personal information from sensors, processing it, and analyzing it. Furthermore, wear-UCAM abstracts the processing of context from sensors and services, and supports the independence of sensors and services in applications by decoupling them through the unified context. However, wear-UCAM has not yet been fully evaluated in an actual user study, owing to unstable physiological signals caused by the user's movements. Moreover, the interpretation and modeling of contextual information should be examined more closely. Additionally, experiments and analyses of wear-UCAM from a software engineering perspective with various applications are required, as well as enhancements to the robustness of physiological signal sensing.
References

1. M. Weiser, "Some computer science issues in ubiquitous computing," Communications of the ACM, vol. 36, no. 7, pp. 75-84, July 1993.
2. G. Chen and D. Kotz, "A survey of context-aware mobile computing research," Technical Report TR2000-381, November 2000.
3. K. Mitchell, "A survey of context-awareness," Internal Technical Report, Lancaster University, Computing Department, March 2002.
4. M. Korkea-aho, "Context-aware application survey," available at http://users.tkk.fi/~mkorkeaa/doc/context-aware.html
5. B. J. Rhodes, N. Minar, and J. Weaver, "Wearable computing meets ubiquitous computing: Reaping the best of both worlds," Proc. 3rd International Symposium on Wearable Computers (ISWC '99), San Francisco, CA, October 18-19, 1999, pp. 141-149.
6. J. Falk and S. Bjork, "Privacy and information integrity in wearable computing and ubiquitous computing," Conference on Human Factors in Computing Systems (CHI '00), pp. 177-178, 2000.
7. B. N. Schilit, N. L. Adams, and R. Want, "Context-aware computing applications," IEEE Workshop on Mobile Computing Systems and Applications, Santa Cruz, CA, 1994.
8. P. J. Brown, J. D. Bovey, and X. Chen, "Context-aware applications: From the laboratory to the marketplace," IEEE Personal Communications, vol. 4, no. 5, pp. 58-64, 1997.
9. J. Pascoe, "Adding generic contextual capabilities to wearable computers," 2nd International Symposium on Wearable Computers (ISWC 1998), pp. 92-99, 1998.
10. R. Hull, P. Neaves, and J. Bedford-Roberts, "Towards situated computing," 1st International Symposium on Wearable Computers, Cambridge, Massachusetts, October 13-14, 1997, pp. 146-153.
11. A. K. Dey and G. D. Abowd, "The Context Toolkit: Aiding the development of context-aware applications," Workshop on Software Engineering for Wearable and Pervasive Computing, Limerick, Ireland, June 6, 2000.
12. A. K. Dey and J. Mankoff, "Designing mediation for context-aware applications," ACM Transactions on Computer-Human Interaction (TOCHI), vol. 12, no. 1, pp. 53-80, March 2005.
13. E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, October 1994.
14. M. Fowler, UML Distilled: Applying the Standard Object Modeling Language, Addison-Wesley, 2004.
15. S. Jang and W. Woo, "ubi-UCAM: A unified context-aware application model," Context03 (Lecture Notes in Artificial Intelligence), vol. 2680, pp. 178-189, 2003.
16. Y. Suh and W. Woo, "Context-based user profile management for personalized services," ubiComp workshop (ubiPCMM), pp. 64-73, 2005.
17. S. Jang and W. Woo, "Unified context representing user-centric context: Who, where, when, what, how and why," ubiComp workshop (ubiPCMM), pp. 26-34, 2005.
18. B. Elie and P. Guiheneuc, "Sympathetic skin response: Normal results in different experimental conditions," Electroencephalogr Clin Neurophysiol, vol. 76, pp. 258-267, 1990.
19. A. Barreto and J. Zhai, "Physiological instrumentation for real-time monitoring of affective state of computer users," WSEAS Transactions on Circuits and Systems, vol. 3, pp. 496-501, 2003.
20. R. P. Lippmann, "An introduction to computing with neural nets," IEEE Acoustics, Speech and Signal Processing Magazine, vol. 4, no. 2, pp. 4-22, April 1987.
21. S. Lederer, J. Mankoff, A. K. Dey, and C. Beckmann, "Managing personal information disclosure in ubiquitous computing environments," Technical Report IRB-TR-03-015, Intel Research Berkeley, 2003.
22. A. Ali Eldin and R. Wagenaar, "Towards users driven privacy control," IEEE International Conference on Systems, Man and Cybernetics, vol. 5, pp. 4673-4679, 2004.
23. J. I. Hong and J. A. Landay, "An architecture for privacy-sensitive ubiquitous computing," 2nd International Conference on Mobile Systems, Applications and Services, pp. 177-189, 2004.
24. R. Wishart, K. Henricksen, and J. Indulska, "Context obfuscation for privacy via ontological descriptions," 1st International Workshop on Location- and Context-Awareness (LNCS), vol. 1678, Springer, pp. 267-288, 2005.
25. D. Hindus, "The importance of homes in technology research," 2nd Int'l Workshop on Cooperative Buildings (CoBuild 99), LNCS, vol. 1670, Springer-Verlag, Berlin, pp. 199-207, 1999.
26. Y. Oh and W. Woo, "A unified application service model for ubiHome by exploiting intelligent context-awareness," Ubiquitous Computing Systems (LNCS), vol. 3598, pp. 192-202, 2005.
27. W. Jung and W. Woo, "Orientation tracking exploiting ubiTrack," ubiComp05, pp. 47-50, 2005.
28. http://java.sun.com/performance/jvmstat/
29. G. Banavar et al., "Challenges: An application model for pervasive computing," Proc. MOBICOM 2000: The 6th Annual International Conference on Mobile Computing and Networking, August 6-11, 2000, Boston, MA, USA, pp. 266-274.