AwareOffice: Integrating Modular Context-Aware Applications

Tobias Zimmer, Michael Beigl
Telecooperation Office (TecO), University of Karlsruhe
Vincenz-Priessnitz-Strasse 3, 76131 Karlsruhe, Germany
{zimmer, michael}@teco.edu
Abstract

Developing valuable context-aware applications is at the centre of research in Ubicomp. In this paper we describe our experience over the last years in designing and developing such applications and in integrating them to build homogeneous Ubicomp environments. We introduce the concept of modular integration of context-aware applications by presenting the AwareOffice – TecO’s test bed for all kinds of context-aware applications from the office domain. We will show how we can easily build new applications by simply exploiting the modularity of our system. To this end we introduce some base applications of the AwareOffice in some detail and show how to build more complex applications by modularly integrating them.
1. Introduction

How to develop context-aware applications is an important topic in Ubiquitous Computing research. Building a context-aware application requires two steps: firstly, designing and implementing the internal context-awareness functionality of the application itself, and secondly, designing and implementing the external combination of applications to provide an overall functionality for application settings. Many context-aware systems address these issues by using a central ”server” or ”service” that runs the application logic. A well known example of such an approach is implemented in the Context Toolkit [2]. This system supports the implementation of internal context-aware functionality, but it does not support the ad-hoc combination of context-aware functionality to build application settings. In the settings we are interested in, context-aware applications work independently of a central server, service or infrastructure. Most of these applications are implemented on tiny computers that are embedded into physical objects. Complex applications are composed dynamically by an ad-hoc combination of such applications.
A very early prototype of this approach is the MediaCup [1], a coffee cup with embedded tiny computing, sensing and networking electronics. Here, applications running on the coffee cups detect cup usage contexts like ”drinking”, ”standing”, ”full” etc. These findings are distributed in the local environment via network communication. Context information is picked up by other applications running on other smart physical objects in the environment and actions are triggered inside these objects – for example a coffee machine brews new coffee when all cups are empty. Up to now, applications running on such resource-poor devices have been designed and implemented from scratch with minimal guidance for the developer. In this paper we go beyond this simplistic approach and present a more sophisticated support framework for developers of Ubicomp applications. Our approach supports the construction of application settings that are specifically dedicated towards the detection and processing of context information. It helps developers in two respects: a library of internal functions largely simplifies the development of context recognition and processing, and a guideline on how to develop applications allows flexibility and further extensibility of Ubicomp application settings built upon existing applications, as well as the transparent integration of new applications into application settings.
1.1. Modular Context-Aware Applications
In our model, applications forming a Ubiquitous Computing environment consist of modular functional building blocks (figure 1), but can also be regarded as functional building blocks themselves (figure 2). A Ubiquitous Computing environment consists of one or several application building blocks that cooperatively work together to provide functionality to the users as an output. In our general model (figure 1), the input to an application is provided via two interfaces: an external communication interface – the network – and physical interfaces – sensors. Likewise, output is provided externally through a network communication interface and physical actuator interfaces.
Figure 1. An application as a functional block

Figure 2. Combining of functional blocks

Context information is generated inside applications through context recognition modules and is then distributed to other applications via network communication. Input to the context-recognising functional blocks may come from sensors – e.g. directly attached to the device – and/or through network communication (figure 2). This way applications can be regarded as consumers and producers of context information: by producing context information, applications appear as functional blocks to other applications in the same way as context recognition modules represent functional blocks inside applications. This approach of modular context processing and recognition brings several advantages. First, it allows the separation of concerns [3] to cope with the complexity of systems. Following this approach allows a seamless extension of Ubicomp environments with new applications and in general simplifies the integration of new functionality. It also supports both bottom-up and top-down software development of Ubicomp applications. Bottom-up development of complex context-aware applications is supported by allowing a step-by-step approach: first build and test simple, stand-alone applications, then continue by putting these application blocks together to form more complex applications. Top-down development is supported by guiding the developer on how to split up functionality into application blocks. Modular context processing allows us to simply (re-)use applications in multiple (composite) application settings. An application can be designed to dynamically use different subsets of the context information available from any application in the environment and to provide its output to various other applications. The outputs of all possible applications in an environment form a catalogue of potentially available context information. This output is provided to other devices using a common language, ConCom. ConCom is an extensible language that guarantees the understanding of context information between various applications [4]. The language also allows the qualitative validation of the transferred context information through a Quality of Context (QoC) field [5]. By this means an application can estimate the value of a piece of context information received via network communication in a similar way as if it had generated the context from sensor data itself.
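The concrete ConCom syntax and protocol are specified in [4] and not reproduced here. Purely as an illustration of the idea – the field names and the threshold check below are our own, not part of ConCom – a consumer-side view of such a context tuple might look like this:

```python
from dataclasses import dataclass

@dataclass
class ContextTuple:
    artefact: str   # producing application, e.g. "AwarePen"
    context: str    # symbolic context from the catalogue, e.g. "writing"
    qoc: float      # Quality of Context, here scaled 0.0 (guess) .. 1.0 (certain)

def accept(msg: ContextTuple, min_qoc: float = 0.5) -> bool:
    # A consuming application weighs received context by its QoC, much as it
    # would weigh confidence in context recognised from its own sensors.
    return msg.qoc >= min_qoc
```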
1.2. AwareOffice

The AwareOffice environment is installed in TecO’s meeting room; it is our test bed for context-aware applications from the area of Computer Supported Cooperative Work (CSCW) and the office domain in general. Digitally augmented artefacts in the AwareOffice today include a whiteboard, a large conference table assembled from 4 separate tables, 16 chairs, 6 windows, one door (a second door is not used), one data projector, a flip chart and a Smartboard. All these artefacts are equipped with a pPart or a µPart Particle Computer sensor node. Additionally, we have other augmented artefacts that are not exclusively associated with the AwareOffice but can provide context information that is processed in the AwareOffice environment as well, such as the MediaCups. In the next section we will introduce some of the applications that run on a selection of these augmented artefacts in the AwareOffice to illustrate the concept of functional building blocks and modular integration.
2. Applications – Modules

In this section we will introduce selected applications from the AwareOffice. The applications described here illustrate the concept of modular functional units and form the basis for the integration process we show in section 3.
2.1. AwarePen

The AwarePen is a whiteboard pen that is able to detect its mode of use. Three implementations of the AwarePen are available: the first is based on filter chains for context recognition, the second uses a neural network for classification and the third implements a fuzzy logic expert system. All three implementations can differentiate between the states ”lying still”, ”writing” and ”playing around”. The different algorithms were implemented to get a feeling for how they perform on the same task, running on the same hardware platform.
Figure 3. AwarePen hardware: Particle Computer with Ssimp attached to whiteboard pen

The hardware of the AwarePen is composed of a standard whiteboard marker and an attached Particle Computer with a Ssimp full sensor board featuring a 3D acceleration sensor (see figure 3). The sampling rate of the acceleration sensor is one sample per 13 ms (≈ 76.9 Hz).

2.1.1 Filter Chains
The first implementation of the AwarePen is based on a combination of different filter algorithms. Figure 4 shows the architecture of this filter structure. The filter chain is formed from three blocks: general pre-processing filters, specialized filters that work in parallel, and an aggregating module. Each of the filters in the parallel branches of the filter chain is optimized to detect one aspect of the movement pattern of the AwarePen. In the first step the sensor values pass the DeltaFilter, which eliminates offset errors of the sensors. The DeltaFilter is used to improve the presentability of the values in a GUI; it is not necessary for the context recognition process itself, as the algorithms used do not operate on absolute values. In the second filter stage sensor noise is eliminated by removing sensor values that show an alteration of less than 60 m/s² from the stream of sensor values. After this NoiseFilter the sensor values are forwarded to the different branches of the filter chain in parallel. The first branch is the MinMaxFilter. For each of the three axes this filter searches for local minima and maxima in the stream of sensor values. For further noise reduction, and to accent the extremal values of the frequency, the data is smoothed with a Gaussian function with a radius of 6 sensor values and a standard deviation of 2 values. By examining the distance between two extremal values the presence of a writing pattern can be detected. To provide a regular output in the presence of constant or near-constant acceleration, an upper bound of 50 values (≈ 0.65 s) on the distance of two extrema is defined. Typically, writing yields a distance of 5 to 15 sensor values between extremal values. Pointing gestures typically result in larger values, while smaller values indicate faster movements that result from playing with the pen or from residual noise. In parallel to the smoothing GaussFilter, the DelayFilter delays data input to the other filters to make sure all branches of the filter system work on the same sensor values.
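The paper gives the MinMaxFilter’s parameters but not its code; as a rough host-side sketch of the extrema-spacing idea (NumPy; the thresholds come from the text above, all names and the return labels are ours):

```python
import numpy as np

def gaussian_kernel(radius: int = 6, sigma: float = 2.0) -> np.ndarray:
    """Gaussian smoothing kernel with radius 6 and standard deviation 2."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def classify_extrema_spacing(samples: np.ndarray) -> str:
    """One-axis sketch: smooth, find local extrema, judge by their spacing."""
    smoothed = np.convolve(samples, gaussian_kernel(), mode="same")
    d = np.diff(smoothed)
    # indices where the slope changes sign, i.e. local minima/maxima
    extrema = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1
    if len(extrema) < 2:
        return "no movement"
    spacing = np.diff(extrema).mean()
    if spacing > 50:          # upper bound: constant/near-constant acceleration
        return "still/constant"
    if 5 <= spacing <= 15:    # typical writing frequency
        return "writing"
    return "pointing" if spacing > 15 else "playing/noise"
```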
Figure 4. Architecture of the filter chain used by the AwarePen

The DChMCrFilter (Direction-Change-to-Mean-Crossing-Ratio-Filter) – a technique mainly known from audio processing – is able to detect special gestures and playing patterns. In the current implementation the ratio is computed for a window of 40 sensor values. Whenever this ratio is greater than 4 for one axis, or greater than 3 for two axes, writing is considered to be very unlikely. The MaxAbsDiffFilter internally consists of three parts: one stage that provides absolute values of the sensor data, and a combination of a maximum absolute difference filter (after which the component is named) and an absolute difference filter that is used to fine-tune the output of the MaxAbsDiffFilter component. This component is used to detect writing with the pen. The StillDetectFilter recognizes whether the pen is lying still. This is done by comparing successive sensor values: whenever 25 or more sensor values in the data stream are of the same value, the filter presumes the pen is not in use. The SkipFilter is used to generate an update trigger for a GUI. Like the DeltaFilter it makes no contribution to the recognition process, but ensures the output of the filter chain is updated after every 10 received sensor values. The next stage in the filter chain computes a probability value for each of the three possible contexts the AwarePen can recognize, based on the output of the corresponding filter component, and forwards these probabilities to the PenProbHub. The filters in this stage are named after their input filters, with the extension ”2Prob” indicating that the output is probability values. Finally, the PenProbHub aggregates the results from the different branches of the filter chain and forms the final output context.
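None of these filters is given in code in the paper; as a sketch of the two simplest ideas (plain Python, our own decomposition):

```python
def direction_changes(window):
    """Number of sign changes in the first difference (direction changes)."""
    diffs = [b - a for a, b in zip(window, window[1:])]
    return sum(1 for a, b in zip(diffs, diffs[1:]) if a * b < 0)

def mean_crossings(window):
    """Number of times the signal crosses its own mean."""
    m = sum(window) / len(window)
    centred = [v - m for v in window]
    return sum(1 for a, b in zip(centred, centred[1:]) if a * b < 0)

def dchmcr_ratio(window):
    """Direction-change-to-mean-crossing ratio over one 40-value window."""
    mc = mean_crossings(window)
    return direction_changes(window) / mc if mc else float("inf")

def is_still(stream, n=25):
    """StillDetectFilter idea: n or more identical successive values."""
    run = 1
    for a, b in zip(stream, stream[1:]):
        run = run + 1 if a == b else 1
        if run >= n:
            return True
    return False
```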
State   x̄             σx                                 σy                                 σz
Still   –              (0, 40)                            (0, 40)                            (0, 40)
Play    –              (100, 1000) but mostly (200, 800)  (100, 1200) but mostly (200, 700)  (200, 1000) but mostly (200, 800)
Write   (−400, 500)    (50, 800) but mostly (100, 400)    (100, 600) but mostly (100, 400)   (100, 800) but mostly (100, 400)

Table 1. Mapping of input values to the states of the AwarePen

2.1.2 Neural Networks
The second implementation of the AwarePen is based on a Learning Vector Quantization Neural Network (LVQNN). Data is presented to the network as a 6-element input vector v_NN = (x̄, ȳ, z̄, σx, σy, σz) containing the mean and standard deviation for each of the three axes. These values are computed from 24 successive samples covering approximately 0.3 s. As a consequence, the response time of the neural network to context changes is much smaller than that of the filter chain. The LVQNN is built from one layer of 40 hidden competitive neurons and an output layer of 3 linear neurons representing the result classes ”writing”, ”playing” and ”still”. In addition to the LVQNN implementation we experimented with other NNs, namely a Feed-Forward Neural Network (FFNN) with one layer of 7 hidden neurons and an output layer with one neuron, and a FFNN with one layer of 32 hidden neurons and an output layer of one neuron.
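For illustration, the 6-element input vector can be computed from one 24-sample window as follows (NumPy sketch; the on-device fixed-point code will differ):

```python
import numpy as np

def feature_vector(window) -> np.ndarray:
    """Build the 6-element LVQ input from 24 successive 3-axis samples
    (~0.3 s at one sample per 13 ms): per-axis mean and standard deviation."""
    w = np.asarray(window)          # shape (24, 3): columns are x, y, z
    assert w.shape == (24, 3)
    # -> (x̄, ȳ, z̄, σx, σy, σz)
    return np.concatenate([w.mean(axis=0), w.std(axis=0)])
```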
2.1.3 Fuzzy Logic

The Fuzzy Inference System (FIS) based implementation of the AwarePen is mainly a proof of concept and a demonstration of our Particle Computer FIS Toolbox. The characteristics of the input data to the FIS make it very hard to specify efficient rules while at the same time keeping the system small and fast enough to run on the Particle Computers. Our reference pen was implemented using 3 rules on the features of a 4-element input vector v_FIS = (x̄, σx, σy, σz). From the test data we formed table 1, mapping the ranges of the input values in v_FIS to the states of the pen we want to recognize. This table was used to derive the following three inference rules:

Rule 1: If σx is ”still” and σy is ”still” and σz is ”still” then AwarePen is ”still”.
Rule 2: If σx is ”play” and σy is ”play” and σz is ”play” then AwarePen is ”play”.
Rule 3: If x̄ is ”write” and σx is ”write” and σy is ”write” and σz is ”write” then AwarePen is ”write”.

With these rules we created a Sugeno system with outputs 0, 127 and 255 representing the three states of the pen. By replacing the hand-crafted rules with learned rules the recognition rates could be increased, but the improved FIS contains 16 rules, making it significantly slower.
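For illustration only, a Sugeno-style evaluation of the three rules might look as follows; the crisp range checks stand in for the real toolbox’s fuzzy membership functions and are our simplification, with the ranges taken from table 1:

```python
def in_range(v, lo, hi):
    """Crisp stand-in for a fuzzy membership function: 1 inside the table 1
    range, 0 outside. A real FIS would use trapezoidal memberships."""
    return 1.0 if lo <= v <= hi else 0.0

def aware_pen_fis(x_mean, sx, sy, sz):
    # firing strength of each rule = AND (min) over its antecedents
    w_still = min(in_range(sx, 0, 40), in_range(sy, 0, 40), in_range(sz, 0, 40))
    w_play  = min(in_range(sx, 100, 1000), in_range(sy, 100, 1200),
                  in_range(sz, 200, 1000))
    w_write = min(in_range(x_mean, -400, 500), in_range(sx, 50, 800),
                  in_range(sy, 100, 600), in_range(sz, 100, 800))
    weights, outputs = (w_still, w_play, w_write), (0, 127, 255)
    total = sum(weights)
    # Sugeno-style result: weighted average of the constant rule outputs
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else None
```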
2.1.4 Results
The performance of the AwarePen in its different implementations is the subject of an ongoing study. Preliminary results for all three algorithmic approaches show context recognition rates well above 90%. The underlying hardware platform – a pPart Particle Computer – provides an 8-bit microcontroller (PIC 18F6720) with 5 MIPS at 20 MHz. The internal memory of the microcontroller divides into 128 kbyte of program flash memory, 4 kbyte of RAM and 1 kbyte of EEPROM. Our experience shows that 128 kbyte of program memory is sufficient for most applications. The filter chain implementation of the AwarePen needs 51% (66.3 kbyte) of the program memory; the usage of the FIS and NN implementations varies between 44% and 46% (57.6 to 60.4 kbyte). All the implementations are optimised to make use of the whole available RAM. It turns out that all the algorithms would profit from larger RAM. Higher computing power would enable faster processing, potentially leading to slightly better results due to the higher sensor sampling rates that could be achieved. More detailed results will be available soon, when we finish our study.
2.2. Other Applications
Some less complex applications that are currently running in the AwareOffice are the AwareSponge, the AwareCam and our Chairs, Windows and Doors. These applications, which have been implemented and added to the environment during the last few years, will now be introduced more briefly.
2.2.1 AwareSponge
The AwareSponge is a whiteboard sponge that is equipped with a pPart Particle Computer and a special sensor board featuring an additional force sensor that measures the pressure between the sponge and the surface it is used on. By detecting movement and correlating this with the readings from the force sensor the AwareSponge is able to determine whether it is used for wiping the whiteboard. The context recognition is based on a simple filter chain reusing some of the modules that have been implemented for the AwarePen.
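As a toy sketch of this correlation (function, threshold and interface are ours, not from the paper):

```python
def is_wiping(moving: bool, surface_pressure: float,
              pressure_threshold: float = 0.5) -> bool:
    # Hypothetical rule: the sponge only wipes when it is moving while being
    # pressed against the board; the threshold value is purely illustrative.
    return moving and surface_pressure > pressure_threshold
```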
2.2.2 AwareCam
AwareCam is a context-aware application operating a Canon Ixus digital camera. Depending on the current context, the camera can take pictures of a whiteboard. In the AwareOffice the AwareCam observes the context information provided by the AwarePens and the AwareSponge. It is the first application we introduce that follows the concept of modular integration of context information, and thus of the functionality of external applications. The AwareCam tries to archive a new picture of the whiteboard whenever it determines that some amount of new writing has occurred. In the process it also has to make sure it can store a picture before the sponge is used to wipe the whiteboard clean again. To achieve that, the camera takes a picture when it has received the writing context of an AwarePen for some time. This picture is analysed by applying different filters. The first filter run detects writing on the whiteboard. It is used to make a differential analysis of the newly taken picture in comparison to the last archived one, to determine the amount of new writing. If a certain configurable threshold is reached, the new picture is forwarded to the obstacle filter. In this second step another filter is applied to detect obstacles in front of the whiteboard. If an obstacle is detected, the AwareCam can send an alerting packet to the AwarePen that was used last. The pen can issue a beep sound, making the writer aware that he is shielding the whiteboard. After a few seconds the AwareCam starts another attempt to take a picture. The AwareCam is a modular application. It works quite well when only one AwarePen is present; for the camera, the context provided by one pen is the minimum set of context data required to operate. The functionality of the AwareCam is improved when an AwareSponge is in operation, but this is not mandatory. The AwareCam can also handle the input of multiple pens at a time.
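The paper describes this behaviour but not the implementation; one decision cycle might be sketched as follows, with `camera` and the pen objects as hypothetical placeholders and the threshold value purely illustrative:

```python
import time

def awarecam_cycle(camera, pens, new_writing_threshold=0.1):
    """One decision cycle of the AwareCam control loop (sketch)."""
    writing = [p for p in pens if p.current_context() == "writing"]
    if not writing:
        return                      # nobody is writing: nothing to archive
    picture = camera.take_picture()
    # differential analysis of the new picture against the last archived one
    if camera.new_writing_ratio(picture) < new_writing_threshold:
        return                      # not enough new writing yet
    if camera.obstacle_in_front(picture):
        writing[-1].beep()          # the last used pen alerts the writer
        time.sleep(5)               # the camera retries a few seconds later
        return
    camera.archive(picture)
```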
2.2.3 Chairs, Windows, Doors
Some smaller applications in the AwareOffice are our augmented chairs, windows and doors. All chairs, windows and doors in the AwareOffice are equipped with uPart sensor nodes (see figure 5). These highly integrated nodes (20×17×7 mm including battery) feature light, temperature and movement sensors.
Figure 5. uPart sensor node
Chairs are programmed to detect movement and derive whether they are in use or not from that data. Windows and doors recognize whether they are open or closed.
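A sketch of how small such a rule can be (window and count are illustrative, not from the paper):

```python
def chair_in_use(recent_motion: list[bool], min_events: int = 3) -> bool:
    # A chair reports "in use" when its movement sensor fired often enough
    # within the recent observation window; the count is purely illustrative.
    return sum(recent_motion) >= min_events
```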
3. Proof of Concept

Of the applications we introduced in section 2, only the AwareCam makes active use of the context information provided in the AwareOffice. In this section we will describe the design and functionality of an application that was developed strictly following the concept of modular context-aware applications as introduced in section 1.1.
3.1. The AwareDoorPlate
The AwareDoorPlate is an interactive doorplate for the AwareOffice. Its hardware is a Siemens SIMpad SL4, a handheld webpad with an 8.5” touch-sensitive TFT display and a wireless LAN connection, mounted next to the door (figure 6).
Figure 6. AwareDoorPlate

The first step in the design process of the software was to define the minimal functionality the AwareDoorPlate should provide. The minimal requirement for the doorplate is to be able to display the status of the AwareOffice depending on the context information that is available. At the least, the doorplate should be able to determine whether the AwareOffice is in use or not and dynamically adapt its display from ”AwareOffice” to ”Meeting in progress...”.
3.2. Implementation
The next design step is to have a close look at the catalogue of potentially available contexts in the AwareOffice (see table 2). From this we choose the contexts that we need to derive the information necessary to enable the minimal functionality defined before. We can assume that a room where meetings take place at least has a door that can be closed to remain undisturbed. Also, a meeting room will most probably be equipped with chairs for the attendees to sit on. So we take these two initial modules and add them to the input of the AwareDoorPlate as depicted in figure 7. These two types of context-aware artefacts do not provide very much information, but it is enough to get a coarse idea of the usage of the AwareOffice. Starting with a very simple rule-based reasoning engine¹, we determine that a meeting is in progress when the door is closed and some of the chairs (more than one) are in use. So the minimal set of context information needed to implement the frame of the AwareDoorPlate is provided by the chairs and the door.
Artefact   Contexts
AwarePen   writing, playing, still
AwSponge   wiping WB, moving, still
AwareCam   WB blocked, new picture archived, standby
Chair      in use, free
Window     open, closed
Door       open, closed
MediaCup   drinking, playing, coffee temp

Table 2. Catalogue of available contexts
Figure 7. AwareDoorPlate Architecture

In the next step we revisit the catalogue in table 2 and extend the functionality and reliability of the AwareDoorPlate by integrating more optional modules providing additional context information. E.g. we can add a rule that says it becomes more likely that a meeting is being held when the AwarePens and the AwareSponge are used regularly. In that way we can step by step add rules making use of all potentially available contexts in an environment that can contribute to the functionality of the application. Applying these rules influences the probability with which the doorplate deduces that a meeting is in progress. To make the AwareDoorPlate a robust modular context-aware application, the optional rules are composed with the basic rule operating on the minimal set of context information by a logical OR. This ensures that the doorplate can dynamically adapt to the changing availability of context information in the environment. The resulting architecture of the AwareDoorPlate application is shown in figure 7. Additionally, the functionality of the doorplate appliance can be extended dynamically if more context information becomes available. E.g. the doorplate in the AwareOffice includes a schedule for the meeting room that can be accessed via the touch screen or remotely via a web interface. This allows meetings to be pre-scheduled, specifying a list of attendees. Alternatively, the AwareDoorPlate could determine the attendees by observing personalized MediaCups in the AwareOffice. With information on the attendees, the doorplate organizes the distribution of pictures provided by the AwareCam via email.

¹ We will not discuss the implementation of the reasoning engine of the doorplate in detail in this paper as this would lead too far, but argue on the basis of an intuitively working rule set.
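As a sketch of this composition (context names follow table 2; the rule bodies and the dictionary interface are our simplification, not the doorplate’s actual engine):

```python
def meeting_in_progress(ctx: dict) -> bool:
    """Base rule on the minimal context set, OR-composed with optional rules."""
    # basic rule: door closed and more than one chair in use
    base = ctx.get("Door") == "closed" and ctx.get("chairs in use", 0) > 1

    optional = []
    if "AwarePen" in ctx:       # pens used regularly make a meeting more likely
        optional.append(ctx["AwarePen"] in ("writing", "playing"))
    if "AwSponge" in ctx:
        optional.append(ctx["AwSponge"] == "wiping WB")

    # Logical OR keeps the doorplate working on the minimal set alone and
    # lets it benefit from whatever optional context happens to be available.
    return base or any(optional)

# e.g. the display rule:
# label = "Meeting in progress..." if meeting_in_progress(ctx) else "AwareOffice"
```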
4. Conclusion

In this paper we presented the concept of modular integration of context-aware applications to form homogeneous ubiquitous computing environments. We introduced the basic applications we implemented in the AwareOffice and showed how these functional blocks can be used to build more complex appliances like the AwareDoorPlate. Our experience shows that we benefit largely from exploiting modularity in applications: development times are reduced while the diversity of available contexts constantly increases, which in turn makes it even faster and easier to implement new context-aware applications.
References
[1] M. Beigl, H.-W. Gellersen, and A. Schmidt. MediaCups: Experience with design and use of computer-augmented everyday objects. Computer Networks, Special Issue on Pervasive Computing, 35(4):401–409, 2001.
[2] A. K. Dey, G. D. Abowd, and D. Salber. A conceptual framework and a toolkit for supporting the rapid prototyping of context-aware applications. Human-Computer Interaction, 16(2-4):97–166, 2001.
[3] E. W. Dijkstra. Selected Writings on Computing: A Personal Perspective. Springer-Verlag, 1982.
[4] A. Krohn, M. Beigl, C. Decker, and T. Zimmer. ConCom – A Language and Protocol for Communication of Context. Technical Report ISSN 1432-7864 2004/19, University of Karlsruhe, 2004.
[5] T. Zimmer. Towards a Better Understanding of Context Attributes. In Proceedings of PerCom 2004, pages 23–28, Orlando, USA, Mar. 2004.