Context Detection on Mobile Devices - 4th CoSDEO workshop

Context Detection on Mobile Devices

Uta Christoph, Janno von Stülpnagel
RWTH Aachen, Informatik 4, Aachen, Germany
Email: {uta.christoph, janno.stuelpnagel}@rwth-aachen.de

Karl-Heinz Krempels, Christoph Terwelp
RWTH Aachen, Informatik 5, Aachen, Germany
Email: {krempels, terwelp}@cs.rwth-aachen.de

Abstract

Mobile devices have obtained a significant role in our lives, providing a large variety of useful functionalities and features. It is desirable to have an automated adaptation of the behavior of a mobile device to a change of user context, to spare the user additional effort or unwanted behavior in individual situations. To enable such automatic adaptation, the mobile user's context needs to be determined by the mobile device itself. A prototype implementation for the detection of movement patterns based on the built-in sensors of the Android G1 smartphone is presented, which processes sensor information in a neural network to match it to certain contexts of a mobile user.

1 Introduction

The omnipresence of mobile devices requires the ability to adapt a device's behavior to the current situation. Simple implementations of this feature are already in place in most common mobile devices: the user can limit the use of the device's interaction features, for example by muting the ring tone, deactivating the network interface radio, etc. To simplify the configuration, the settings are often grouped into configuration profiles, so the user only has to select a defined profile and gets a suitable setup for a situation. In real environments the context changes continuously, and therefore it is desirable to support automatic detection of the user's current context. This would increase usage comfort due to the improved adaptability of the device. As an additional feature, the current context of the user in combination with his current position can be used to improve location based services by providing this information to the service. In this paper we discuss existing and new approaches to determine a mobile user's context. We analyze their prospects, possible drawbacks, and technical requirements. The paper is organized as follows: in Section 2 we illustrate a simple example scenario from the domain of discourse to motivate our research. In Section 3 we give an overview of existing work in this area. Section 4 discusses the configuration space of a mobile device which should be adapted to a changing context. In Section 5 we focus on existing approaches and introduce some new ideas to determine a mobile user's context, and in Section 6 we briefly describe the architecture of our integrated approach for context detection. Section 7 comprises our prototype implementation and the results for context detection based on movement patterns. Finally, in Section 8 we summarize our ideas and describe our vision for future development.

2 Scenario

Imagine a researcher who is also involved in teaching at his university on a "normal" workday. He drives to work in his car, gets a coffee in the common room, prepares the last slides and comments for today's lecture in his office, and is called to an important meeting with the professor at the last minute. Afterwards he gives his lecture, goes to lunch, meets with one of his thesis students, and finally finds time to write up his research work for an urgent Call for Papers. After work he drives home, goes cycling with his son, and takes his wife to dinner and a movie after the babysitter has arrived. At this point we leave our fellow researcher and state that his context concerning the behavior of his mobile phone has changed at least 12 times during the day. The typical researcher is very busy thinking about his research work, so it is very inconvenient for him to always keep the behavior settings of his mobile device up to date with his current surroundings. Instead it would be preferable that the mobile device adapts itself automatically according to the situation. For example: changing the input mode from pen to finger touch and the output mode to speech synthesis and symbols while driving; switching the device to silent mode and enabling visual signals (a flashing light) for incoming calls and messages during the meeting, but deactivating the vibration alarm because the device is on the table and not in the pocket; switching to a louder ring tone and vibrating alert in the cafeteria or when cycling; and muting it in the restaurant and cinema, but keeping the vibration alarm enabled for emergency calls from the babysitter.

3 Related Work

3.1 Context Detection based on Sensors

In ubiquitous computing, context is often derived from one specialized sensor, either attached to a person or located in several different places across so-called smart rooms. For example, [Karantonis et al., 2006] use a single accelerometer to detect the movement of the user, and [Kern et al., 2003] use multiple identical accelerometers to detect activity context information. In most cases where different sensor types are used, each type of sensor detects a certain kind of context for a certain kind of purpose. Only a few research projects use more than one sensor type to recognize a single context, e.g. [Berchtold and Beigl, 2009], where an accelerometer and a microphone are used to recognize "knocking on the table in appreciation". [Gellersen et al., 2002] investigated multi-sensor context-awareness in a series of projects by augmenting prototype devices with several comparatively simple sensors. The individual sensor data are fused into an overall picture of the situation according to rule-based calculations. In more recent research approaches, like [Ranganathan et al., 2004] and [Brdiczka et al., 2007], pattern recognition is used instead of programmed rules to derive contexts from sensor information.

3.2 Definition of Context

To this day there is still no standardized description and definition of context. [Dey, 2001], for example, gives an operational definition of context, whereas [Henricksen and Indulska, 2005] define conceptual models for context-aware applications. A survey of context modeling approaches is given in [Strang and Linnhoff-Popien, 2004], but researchers often use their own definitions. We choose the following definition of [Abowd et al., 1999], as it takes into account the interaction between the user and an application: "Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves."

3.3 Context-aware Mobile Applications

There are many context-aware mobile applications available on the smartphone market. Most of them offer services based on location-awareness.

One interesting application which, besides positioning information, also considers other context aspects is Locale (http://www.twofortyfouram.com/). Locale is an application for the Android operating system (http://www.android.com/) which can manage the settings of a mobile device according to predefined conditions. The user can choose conditions such as location, contact information, battery status, time, and calendar information to adjust display and sound settings or to enable and disable certain networks. Device configuration rules depend on conditions related to context-influencing parameters; thus context detection is a side effect of the description of these conditions. However, the original intention is adaptive device configuration and not context detection.

4 Device Configuration Space

The device configuration space consists of all possible configurations of a given device. To manage this configuration space effectively, profiles are used, which are preset configurations combined with a suitable name. Some of the settings from a device configuration space are listed here to give the reader an impression of the configuration possibilities of a mobile device. For example, as output modes the user can choose among icon-based or text-based representation of content, or even voice synthesis. Furthermore, for textual and graphical output the text size and screen orientation (portrait or landscape) can be configured, and for voice synthesis the volume and perhaps a few additional characteristic parameters can be adjusted, e.g. the speaking speed, the pitch and timbre of the voice, or the gender of the speaker. As input modes the user can choose, for example, among a microphone with voice recognition, a touch screen (finger or pen mode), or the traditional keypad. There are also many additional settings, e.g. microphone and touch screen sensitivity. As general behavior settings the user can configure the ring volume, ring tone, screen brightness, earphone volume, and so on.
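For illustration, a named profile from this configuration space could be represented by a simple record like the following sketch; the field names and example values are assumptions for the example, not a concrete platform API.

    // Illustrative sketch of a named configuration profile as described above.
    // Field names and the example values are assumptions, not a platform API.
    public class ConfigurationProfile {

        public enum InputMode { VOICE_RECOGNITION, TOUCH_FINGER, TOUCH_PEN, KEYPAD }
        public enum OutputMode { ICONS, TEXT, VOICE_SYNTHESIS }

        public final String name;
        public final OutputMode outputMode;
        public final InputMode inputMode;
        public final int ringVolume;        // 0-100
        public final int screenBrightness;  // 0-100
        public final boolean vibrationAlarm;

        public ConfigurationProfile(String name, OutputMode outputMode, InputMode inputMode,
                                    int ringVolume, int screenBrightness, boolean vibrationAlarm) {
            this.name = name;
            this.outputMode = outputMode;
            this.inputMode = inputMode;
            this.ringVolume = ringVolume;
            this.screenBrightness = screenBrightness;
            this.vibrationAlarm = vibrationAlarm;
        }

        /** Example profile for the meeting situation from Section 2: silent, no vibration. */
        public static ConfigurationProfile meeting() {
            return new ConfigurationProfile("meeting", OutputMode.TEXT, InputMode.TOUCH_FINGER,
                    0, 40, false);
        }
    }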

5 Context Detection Approaches

Today the predominant way to adjust a mobile device to a changing context is that the user configures a set of different profiles for his device in advance and later selects the suitable profile for his current context. If the context changes very often it is inconvenient to make such adjustments manually. Thus, it is desirable to automatically detect the user’s context and adapt the configuration of the device accordingly. In the following we discuss several approaches to determine the context of a user.


Figure 1: Event Chain Description

5.1 Planning

A very simple approach to context detection is to deduce it from the digital calendar, which contains all the events and trips the user has planned. Each event may consist of several activities. Since classical calendar functionality is provided by many mobile devices, especially smartphones, the scheduled events and trips are already available for further processing. Therefore, the current time is used to determine the corresponding event from the user's digital calendar. The current context of the user can then be deduced from the description of the event, e.g. meeting, flight, etc. However, if the event is composed of several activities, e.g. all the activities of a conference journey, the context cannot be deduced in this simple way. The reason for this is that the context will probably change for each activity, so additional information is necessary to determine all the activities of an event and to derive the right context for each of them. To do this in an automated way seems to be a complex job that will not be available as a service soon. So a more detailed description of the whole event with all its activities must be included in the calendar. This seems to be the approach used by many travel information systems. E.g., the travel information system provided by the German Railways (Deutsche Bahn, http://www.bahn.de/railnavigator/) provides a detailed description of a planned trip including all the activities, e.g. walking, bus, train station, train, bus, etc. In Fig. 1 the schedule of a journey is shown as a screenshot from a mobile device. The event chain for this journey can be derived immediately from this schedule. A description of this schedule is also provided in an XML-based representation or can be attached to the event notes in the user's calendar. So the deduction of the user context for each activity (Fig. 2) is easily handled. In order to extend this approach to all kinds of context, a standardized or ontology-based description of events or activities is essential.

1) Footway -> Dep 13:07 Wadern - Dagstuhl, Bahnhofstr. 2-11 Arr 13:09 Wadern Bahnhof
2) Bus R2 -> Dep 13:09 Wadern Bahnhof Arr 13:11 Wadern Busbahnhof
3) Bus R1 -> Dep 13:21 Wadern Busbahnhof Arr 14:18 Merzig Bahnhof
4) Footway -> Dep 14:18 Merzig Bahnhof Arr 14:20 Merzig(Saar)
5) RE 23036 -> Dep 14:27 Merzig(Saar), Pl. 1 Arr 15:04 Trier Hbf, Pl. 13
6) IC 337 -> Dep 15:09 Trier Hbf, Pl. 12 Arr 17:42 Koeln Hbf, Pl. 3
7) THA 9460 -> Dep 18:13 Koeln Hbf, Pl. 8 Arr 18:53 Aachen Hbf, Pl. 7

Figure 2: Event Chain Description as Calendar Attachment
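As a simple illustration of this deduction step, the sketch below maps the transport mode of each leg of such an event chain to a coarse user context; the class, leg, and context names are chosen for illustration and are not part of the Deutsche Bahn data format.

    import java.util.List;
    import java.util.Map;

    // Illustrative only: maps the transport mode of an itinerary leg
    // (as in Fig. 2) to a coarse user context.
    public class EventChainContext {

        enum Context { WALKING, ON_A_BUS, ON_A_TRAIN, UNKNOWN }

        record Leg(String mode, String departure, String arrival) {}

        private static final Map<String, Context> MODE_TO_CONTEXT = Map.of(
                "Footway", Context.WALKING,
                "Bus", Context.ON_A_BUS,
                "RE", Context.ON_A_TRAIN,    // regional train
                "IC", Context.ON_A_TRAIN,    // intercity train
                "THA", Context.ON_A_TRAIN);  // Thalys

        static Context contextFor(Leg leg) {
            // The first token of the mode field ("Bus R2", "RE 23036", ...) selects the context.
            String key = leg.mode().split(" ")[0];
            return MODE_TO_CONTEXT.getOrDefault(key, Context.UNKNOWN);
        }

        public static void main(String[] args) {
            List<Leg> chain = List.of(
                    new Leg("Footway", "13:07 Wadern - Dagstuhl", "13:09 Wadern Bahnhof"),
                    new Leg("Bus R2", "13:09 Wadern Bahnhof", "13:11 Wadern Busbahnhof"),
                    new Leg("RE 23036", "14:27 Merzig(Saar)", "15:04 Trier Hbf"));
            chain.forEach(leg -> System.out.println(leg.mode() + " -> " + contextFor(leg)));
        }
    }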

5.2 User Behavior Model

This approach is based on the assumption that user behavior is to some degree consistent: for example, the user goes to bed at almost the same time every day and has lunch at a specific time. From this we derive that user behavior can be defined as the correlation of time, space, activity, and context. Consequently, the user behavior can be deduced from his past, by observing former behavior tuples and matching them to the current situation [Sama et al., 2008] [da Silva et al., 2006]. A special case of user modeling is the prioritization of incoming calls depending on the location of the mobile device and on the time. For example, if the user is in a meeting he does not want to be disturbed by a business call, but he wants to be alerted if his wife calls with an emergency. To enable such functionality the mobile device can log the caller's number, the time, and the location of each call. The priority of an incoming call can become higher if the user had many calls from the incoming number at the same time and place, or if the user has previously answered calls from this number at the same time and place.
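A minimal sketch of such a prioritization, assuming a simple log of (number, time, cell) tuples, could look as follows; the record fields and scoring weights are illustrative assumptions.

    import java.util.List;

    // Illustrative sketch of the call prioritization idea in Section 5.2:
    // an incoming call gets a higher priority if the user has often received
    // (and answered) calls from the same number at a similar time and place.
    public class CallPrioritizer {

        record CallRecord(String number, int hourOfDay, String cellId, boolean answered) {}

        static int priority(String number, int hourOfDay, String cellId, List<CallRecord> log) {
            int score = 0;
            for (CallRecord r : log) {
                if (!r.number().equals(number)) continue;
                boolean sameTime = Math.abs(r.hourOfDay() - hourOfDay) <= 1;
                boolean samePlace = r.cellId().equals(cellId);
                if (sameTime && samePlace) score += 2;                  // recurring call in this situation
                if (r.answered() && sameTime && samePlace) score += 3;  // previously accepted here
            }
            return score;
        }
    }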

5.3 Radio Signals

Mobile devices are able to interact with several communication networks, like GSM (Global System for Mobile Communications), UMTS/3G (Universal Mobile Telecommunications System), and WLAN (Wireless Local Area Network), as well as positioning networks, like GPS (Global Positioning System), RFID (Radio Frequency Identification), and NFC (Near Field Communication). Every wireless interaction is based on radio communication mechanisms, i.e. a suitable communication interface of a mobile device sends or receives messages based on radio signals. Furthermore, every radio signal received by a mobile device contains information about the position of the sender. Deriving the precise position of a mobile device with the help of the received signals is the subject of many research projects. There are many approaches available for existing communication networks, like GSM [Kunczier and Anegg, 2004] [Borenovic et al., 2005], UMTS/3G [Kos et al., 2006], and WLAN [Wallbaum and Diepolder, 2005] [Jan and Lee, 2003] [Wallbaum and Spaniol, 2006] [Krempels and Krebs, 2008], providing location estimates with acceptable accuracy. Many currently available mobile devices can also be used as personal navigation assistants: a special interface receives signals from GPS satellites and processes them to determine the position of the device. GPS signals are only available outdoors, so for indoor applications the position of a mobile device must be derived with the help of other approaches, e.g. using GSM, UMTS, or WLAN networks. Other localization approaches for special environments and applications are based on RFID and NFC. To obtain the position of a detected tag, a database or an information system is queried to map the unique identifier of the RFID or NFC label to its position [Michahelles and Samulowitz, 2002]. At the moment only a small number of RFID devices are installed at fixed locations, but it is possible that in the near future all road and street signs as well as information signs inside airports, train stations, and subway stations will carry an additional RFID label to provide radio-based access to information about the signs. This enables users to use this information for further processing, like user context detection, personal navigation assistance, etc. The context of a mobile user can be derived from his position. For that purpose the geographical position must be processed by a GIS (Geographical Information System) to obtain additional information about the user context, e.g. the street name, building name, or name of a point of interest. From the type of building, e.g. a train station, the user context can often be derived directly. If the localization accuracy is high, the user context can also be determined in a more detailed manner, e.g. whether the user is in a restaurant or in the business lounge inside the train station. This additional information can be used to detect time shifts or schedule deviations in conjunction with plan-based context detection, as discussed in Section 5.1.
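As a minimal illustration of this last step, the sketch below maps a place category returned by a GIS lookup to a coarse context; the Gis interface and the category and context names are hypothetical.

    import java.util.Map;

    // Sketch of the position-to-context step in Section 5.3: a GIS lookup
    // (here a stub interface) returns a place category for a coordinate,
    // which is then mapped to a coarse context.
    public class PositionContext {

        interface Gis {
            // e.g. "train_station", "restaurant", "office_building", ...
            String placeCategory(double latitude, double longitude);
        }

        private static final Map<String, String> CATEGORY_TO_CONTEXT = Map.of(
                "train_station", "travelling",
                "restaurant", "dining",
                "office_building", "working",
                "cinema", "leisure");

        static String contextAt(Gis gis, double lat, double lon) {
            String category = gis.placeCategory(lat, lon);
            return CATEGORY_TO_CONTEXT.getOrDefault(category, "unknown");
        }
    }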

5.4 Sensors

The majority of today's mobile devices have a number of different sensors at their disposal. Most of them are only built in to serve one or two specific applications, but they can also help to determine the user's context. Many state-of-the-art mobile devices have at least a camera, a microphone, accelerometers, and an integrated compass. To classify the information from the sensors the mobile device can use pattern recognition, but in order to save energy a sensor is only accessed for a short period of time when required. One thing the built-in sensors can provide is information about the position of the device in relation to the user: if the user is already watching the display, the device can announce an incoming call with a pop-up; inside the pocket there is no need for a flashing LED (light-emitting diode); and on the table there is no need for the vibrating alert. The sensors can also be used to get information about the user's environment and thus about the user's context.
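Such duty-cycled sensor access could be realized on Android roughly as in the following sketch; the two-second window and the choice of the accelerometer are assumptions for illustration.

    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Handler;
    import android.os.Looper;

    // Sketch of duty-cycled sensor access: the accelerometer is registered only
    // for a short window and then released again to save energy.
    public class ShortSensorSample implements SensorEventListener {

        private static final long WINDOW_MS = 2000; // sample for two seconds

        private final SensorManager sensorManager;

        public ShortSensorSample(Context context) {
            sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        }

        public void sampleOnce() {
            Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
            sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_NORMAL);
            // Stop listening after the window has elapsed.
            new Handler(Looper.getMainLooper())
                    .postDelayed(() -> sensorManager.unregisterListener(this), WINDOW_MS);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            // event.values[0..2]: acceleration along x, y, z; hand over to feature extraction.
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }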

5.4.1 Accelerometer

The accelerometer not only measures the acceleration of the device but also how the device is tilted. It is possible to match the movement pattern of the device to certain situations, such as the device being in the user's hands, lying on a desk, or held to the user's ear. While the device is in the user's pocket it is also possible to determine, for example, whether the user is walking, running, or sitting down [Karantonis et al., 2006].
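A very simple movement feature of this kind is sketched below: the variance of the acceleration magnitude over a short window separates resting from walking and running; the thresholds are illustrative assumptions, not values from the paper.

    // Sketch of a simple movement feature computed from an accelerometer window:
    // the variance of the acceleration magnitude separates "resting" from "moving".
    public class MovementFeature {

        static double magnitudeVariance(float[][] window) { // window[i] = {x, y, z}
            double[] mag = new double[window.length];
            double mean = 0;
            for (int i = 0; i < window.length; i++) {
                float[] s = window[i];
                mag[i] = Math.sqrt(s[0] * s[0] + s[1] * s[1] + s[2] * s[2]);
                mean += mag[i];
            }
            mean /= window.length;
            double var = 0;
            for (double m : mag) var += (m - mean) * (m - mean);
            return var / window.length;
        }

        static String roughActivity(float[][] window) {
            double var = magnitudeVariance(window);
            if (var < 0.5) return "resting";   // lying on a desk, sitting still
            if (var < 6.0) return "walking";
            return "running";
        }
    }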

5.4.2 Camera

The camera's use for detecting user context is limited, because most of the time when automated context detection is needed the mobile device is in the user's pocket, but sometimes it can be used for image recognition [Luley et al., 2005]. For example, it can help to determine whether the mobile device is in a pocket or not. It also helps to recognize the lighting conditions of the surroundings, even if this information is only used to adapt the brightness of the display. It is also possible to use 2D barcodes and the camera to get information about a location that is equipped with them. The public transportation system in the city of Berlin (Verkehrsverbund Berlin, http://www.vbb.de/), for example, uses them at bus stops: if people scan the barcode they get a hyperlink to a timetable with real-time bus schedules.

5.4.3 Microphone

The microphone offers many possibilities to detect the user context. It is easy to measure the volume of the background noise and adapt the volume of the audio alarm or the speakers accordingly. It is also possible to identify places based on their characteristic background noise. For example, [Ma et al., 2003] were able to recognize, from background noise only, whether they were located in an office, at a football match, or at the beach; they were able to distinguish between ten locations with an overall accuracy of 91.5%. It should also be possible to use voice recognition to detect how many people are in the surroundings and who they are.
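The background-noise measurement could be realized on Android roughly as follows; the sample rate and buffer handling are assumptions for illustration (recording requires the RECORD_AUDIO permission).

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    // Sketch of the background-noise measurement mentioned above: a short burst
    // of audio is recorded and its RMS level is computed, which can then be used
    // to scale the ringer volume.
    public class NoiseLevel {

        public static double measureRms() {
            int sampleRate = 8000;
            int bufferSize = AudioRecord.getMinBufferSize(sampleRate,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                    sampleRate, AudioFormat.CHANNEL_IN_MONO,
                    AudioFormat.ENCODING_PCM_16BIT, bufferSize);
            short[] buffer = new short[bufferSize];
            recorder.startRecording();
            int read = recorder.read(buffer, 0, buffer.length);
            recorder.stop();
            recorder.release();

            double sum = 0;
            for (int i = 0; i < read; i++) {
                sum += (double) buffer[i] * buffer[i];
            }
            return Math.sqrt(sum / Math.max(read, 1)); // RMS of the 16-bit samples
        }
    }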

5.4.4 Compass

Some mobile devices have a built-in electronic compass, because GPS can only give a position but no direction information. The electronic compass gives information about the movement of the user and can be used to complement the data from the accelerometer and GPS.

5.5 Near Field Community Interaction

We see a promising area for further development in the sharing of context information with other users. Since applications for instant messaging and social networking have become available for many mobile devices, a user can share all information with respect to his position and context with the people connected to him in his social network or through his instant messaging contact list. We can assume that the user is willing to share this information with these people because there is already an existing trustful relationship. However, the access to the user's private information can be adjusted by the user in more detail for each person of his community individually, so the level of trust need not be the same for all community members [Mantyjarvi et al., 2002].

To derive the user's actual mobile context from the context of the connected people in his social network or from his instant messaging contact list, we assume that he has a similar or identical context to that of his colleagues and friends. To determine a user's near field community and to request the actual mobile context from it, additional services are needed. This is very similar to the process of spreading a rumor via messages among mobile devices. We suggest that the mobile context of the user could also be derived with the help of a majority vote over the actual contexts of his near field community. A straightforward way to implement a near field community service is to send an advertisement to the user's device whenever one of his colleagues or friends is in the same cell of the mobile network, in the range of the same WLAN access point, or even in the same train, building, plane, etc. After receiving such an advertisement, the mobile device can accept the advertised context as its current context or combine it with other relevant knowledge to determine a similar context.

6 Integrated Context Detection Architecture

As mentioned in Section 3, existing approaches only consider a certain aspect or technology to discover a user's context. Thus, we proposed in [Christoph et al., 2010] that a combination of several approaches is desirable to offer an overall adaptation to a mobile user's context. We also proposed that the detection of a mobile user's context should be provided by the mobile device as a service, so that all applications can have access to the context information and can adapt their behavior accordingly. In Fig. 3 we show the architecture of a Context Detection Service which comprises all aspects of context detection mentioned above and computes a suitable context that is provided as a context description to other applications for further processing.

Figure 3: Context Detection Service (CDS) Architecture

The integration approach of the Context Detection Service is based on a three-layer architecture. The data or signal source layer consists of the available sensors, the radio network interfaces, the built-in clock, and even the connection interface to the user's community. The source layer provides information in the form of data or signals, which comprise the information layer. This information needs to be transformed into defined knowledge so that it can be processed by the Context Detection Service. The transformation of information into knowledge is done by additional Information Processing Services. This matching of sensor information to knowledge can be derived from a rule-based analysis, from pattern matching against suitable patterns defined in the Pattern Repository, or from a combination of both strategies. The Context Repository contains context definitions consisting of a context name and a list of required context description tags provided by the information processing services. Every time all the required description tags of a defined context arrive within a well-defined time interval at the Context Detection Service, the corresponding context is matched and provided as output for the Configuration Service or for context-sensitive applications via the Context Detection Service Application Programming Interface (CDS API). The Configuration Service selects a suitable device configuration description from the Configuration Repository for a detected context and adapts the settings of the mobile device accordingly.
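The tag-matching rule just described could be implemented along the lines of the following sketch; the context definition, tag names, and time window are illustrative assumptions rather than the prototype's actual code.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    // Illustrative sketch of the Context Repository matching rule: a context is
    // detected when all of its required description tags have arrived within a
    // given time window.
    public class ContextMatcher {

        record ContextDefinition(String name, Set<String> requiredTags) {}

        private static final long WINDOW_MS = 30_000;

        private final ContextDefinition definition;
        private final Map<String, Long> lastSeen = new HashMap<>(); // tag -> timestamp

        public ContextMatcher(ContextDefinition definition) {
            this.definition = definition;
        }

        /** Called by an Information Processing Service whenever it produces a tag. */
        public boolean onTag(String tag, long timestampMs) {
            if (!definition.requiredTags().contains(tag)) return false;
            lastSeen.put(tag, timestampMs);
            // The context matches if every required tag was seen within the window.
            return definition.requiredTags().stream()
                    .allMatch(t -> lastSeen.containsKey(t)
                            && timestampMs - lastSeen.get(t) <= WINDOW_MS);
        }

        public static void main(String[] args) {
            ContextMatcher matcher = new ContextMatcher(new ContextDefinition(
                    "on a train", Set.of("movement:train", "location:railway", "noise:train")));
            long now = System.currentTimeMillis();
            matcher.onTag("movement:train", now);
            matcher.onTag("location:railway", now + 1_000);
            System.out.println(matcher.onTag("noise:train", now + 2_000)); // prints true
        }
    }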

7 Prototype Implementation and Evaluation

A first evaluation of the presented architecture for a Context Detection Service is based on an implemented prototype. As development platform we used a Google G1 / HTC Dream mobile phone running Android version 1.6. The phone has the following built-in sensors: GPS, UMTS, and GSM radios for communication and positioning, a microphone for noise detection and voice recording, an accelerometer for movement detection, a compass for orientation, and a camera for brightness detection and image recording. As a first step we investigated context detection based on movement patterns characterized by the sensors pointed out in Fig. 4.

Figure 4: Sensors used for the Prototype Implementation of the CDS

The information collected from the sensors was saved in a database on the mobile device and then exported to an external host for pattern recognition with the help of a neural network, a multilayer perceptron sketched in Fig. 5. The multilayer perceptron contains 741 nodes (large set) and 97800 connections (very large set) and was realized with the help of the open source Java neural network library Neuroph (http://neuroph.sourceforge.net/). The network was trained on a set of predefined patterns ("at home", "at university", "cycling", "on a bus", "on a train", and "walking") and was then used on the mobile device for the recognition of these patterns based on current information from the considered sensors.

Figure 5: Structure of the Multilayer Perceptron
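For illustration, the sketch below shows how such a perceptron could be built, trained, and queried with Neuroph; the class names follow a recent Neuroph release and may differ from the version used in the prototype, and the feature layout and layer sizes are assumptions made for the example.

    import org.neuroph.core.data.DataSet;
    import org.neuroph.core.data.DataSetRow;
    import org.neuroph.nnet.MultiLayerPerceptron;
    import org.neuroph.util.TransferFunctionType;

    // Minimal Neuroph sketch of the training and recognition step described above.
    // Layer sizes and feature layout are illustrative; the paper's perceptron has
    // 741 nodes and 97800 connections.
    public class MovementPatternNet {

        private static final int FEATURES = 64;  // assumed size of one sensor window
        private static final int CONTEXTS = 6;   // at home, at university, cycling, bus, train, walking

        public static MultiLayerPerceptron train(double[][] windows, double[][] labels) {
            DataSet trainingSet = new DataSet(FEATURES, CONTEXTS);
            for (int i = 0; i < windows.length; i++) {
                trainingSet.addRow(new DataSetRow(windows[i], labels[i]));
            }
            MultiLayerPerceptron mlp =
                    new MultiLayerPerceptron(TransferFunctionType.SIGMOID, FEATURES, 32, CONTEXTS);
            mlp.learn(trainingSet); // backpropagation with Neuroph's default settings
            return mlp;
        }

        /** Returns the index of the context with the highest output activation. */
        public static int classify(MultiLayerPerceptron mlp, double[] window) {
            mlp.setInput(window);
            mlp.calculate();
            double[] out = mlp.getOutput();
            int best = 0;
            for (int i = 1; i < out.length; i++) {
                if (out[i] > out[best]) best = i;
            }
            return best;
        }
    }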

As an example, Fig. 6 shows the measurement results for the "walking" pattern. This context was recognized correctly in 85% of the situations and was misclassified as cycling in less than 10% of them. The results are negatively influenced by the microphone acting as an input sensor, since the perceptron cannot identify a steady-state pattern due to noise and distortion effects. Thus, an additional component for the identification of noise patterns is needed to obtain suitable input for the perceptron.

Figure 6: Measurement results for the walking pattern (detected contexts for 50 measurements; possible contexts: walking, cycling, at home, at university, on a train, on a bus)

Figure 7: Measurement results ignoring one sensor (detection rate in %; configurations: all sensors, without microphone, without accelerometer, without camera, without GPS)

8 Summary & Outlook

In this paper we discussed different approaches to automatic context detection and our integrated service architecture, which can combine information from the different approaches. We think this is necessary to gain a precise view of the user's context, which is the main prerequisite for developing context-aware mobile applications. As a foundation for an integrated service, the context information from the data or signal source layer is transformed into knowledge which can be interpreted by the Context Detection Service and then be provided to mobile applications running on the device or used as the basis for the device configuration. The implemented prototype shows that the multilayered design approach of the CDS architecture is very promising for the development of context-aware applications and the context-aware adaptation of mobile devices. To determine the context of a mobile user in an automated way it is necessary to process knowledge from several sources and services. Thus, there is a need for common ontologies for the description of the context concept, of the knowledge description tags characterizing a defined context, and even of the events provided by the signal layer. The next implementation steps for the proposed context detection architecture are the definition of a suitable context ontology and the interaction design of the discussed components of the Context Detection Service.

References

[Abowd et al., 1999] Abowd, G. D., Dey, A. K., Brown, P. J., Davies, N., Smith, M., and Steggles, P. (1999). Towards a better understanding of context and context-awareness. In HUC '99: Proceedings of the 1st International Symposium on Handheld and Ubiquitous Computing, pages 304–307, London, UK. Springer-Verlag.

[Berchtold and Beigl, 2009] Berchtold, M. and Beigl, M. (2009). Increased robustness in context detection and reasoning using uncertainty measures - concept and application. In Proceedings of the European Conference on Ambient Intelligence (AmI'09), Salzburg, Austria. Springer.

[Borenovic et al., 2005] Borenovic, M., Simic, M., Neskovic, A., and Petrovic, M. (2005). Enhanced Cell-ID + TA GSM positioning technique. In EUROCON 2005 - The International Conference on Computer as a Tool, volume 2, pages 1176–1179.

[Brdiczka et al., 2007] Brdiczka, O., Crowley, J. L., and Reignier, P. (2007). Learning situation models for providing context-aware services. In HCI (6), pages 23–32.

[Christoph et al., 2010] Christoph, U., Krempels, K.-H., von Stülpnagel, J., and Terwelp, C. (2010). Automatic context detection of a mobile user. In WINSYS.

[da Silva et al., 2006] da Silva, B. C., Basso, E. W., Perotto, F. S., Bazzan, A. L. C., and Engel, P. M. (2006). Improving reinforcement learning with context detection. In AAMAS '06: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 810–812, New York, NY, USA. ACM.

[Dey, 2001] Dey, A. K. (2001). Understanding and using context. Personal and Ubiquitous Computing, 5(1):4–7.

[Gellersen et al., 2002] Gellersen, H. W., Schmidt, A., and Beigl, M. (2002). Multi-sensor context-awareness in mobile devices and smart artifacts. Mobile Networks and Applications, 7(5):341–351.

[Henricksen and Indulska, 2005] Henricksen, K. and Indulska, J. (2005). Developing context-aware pervasive computing applications: Models and approach. Pervasive and Mobile Computing, 2.

[Jan and Lee, 2003] Jan, R.-H. and Lee, Y. R. (2003). An indoor geolocation system for wireless LANs. In Proceedings of the 2003 International Conference on Parallel Processing Workshops, pages 29–34.

[Karantonis et al., 2006] Karantonis, D. M., Narayanan, M. R., Mathie, M., Lovell, N. H., and Celler, B. G. (2006). Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring. IEEE Transactions on Information Technology in Biomedicine, 10(1):156–167.

[Kern et al., 2003] Kern, N., Schiele, B., and Schmidt, A. (2003). Multi-sensor activity context detection for wearable computing. In Proc. EUSAI, LNCS, pages 220–232.

[Kos et al., 2006] Kos, T., Grgic, M., and Sisul, G. (2006). Mobile user positioning in GSM/UMTS cellular networks. In 48th International Symposium ELMAR-2006 focused on Multimedia Signal Processing and Communications, pages 185–188.

[Krempels and Krebs, 2008] Krempels, K.-H. and Krebs, M. (2008). Directory-less WLAN indoor positioning. In Proceedings of the IEEE International Symposium on Consumer Electronics 2008, Vilamoura, Portugal.

[Kunczier and Anegg, 2004] Kunczier, H. and Anegg, H. (2004). Enhanced cell ID based terminal location for urban area location based applications. In First IEEE Consumer Communications and Networking Conference (CCNC 2004), pages 595–599.

[Luley et al., 2005] Luley, P. M., Paletta, L., and Almer, A. (2005). Visual object detection from mobile phone imagery for context awareness. In MobileHCI '05: Proceedings of the 7th International Conference on Human Computer Interaction with Mobile Devices & Services, pages 385–386, New York, NY, USA. ACM.

[Ma et al., 2003] Ma, L., Smith, D., and Milner, B. (2003). Environmental noise classification for context-aware applications. In Marík, V., Retschitzegger, W., and Stepánková, O., editors, Database and Expert Systems Applications, 14th International Conference, DEXA 2003, Prague, Czech Republic, September 1-5, 2003, Proceedings, volume 2736 of Lecture Notes in Computer Science, pages 360–370. Springer.

[Mantyjarvi et al., 2002] Mantyjarvi, J., Huuskonen, P., and Himberg, J. (2002). Collaborative context determination to support mobile terminal applications. IEEE Wireless Communications, 9(5):39–45.

[Michahelles and Samulowitz, 2002] Michahelles, F. and Samulowitz, M. (2002). Smart CAPs for Smart Its - context detection for mobile users. Personal and Ubiquitous Computing, 6(4):269–275.

[Ranganathan et al., 2004] Ranganathan, A., Al-Muhtadi, J., and Campbell, R. H. (2004). Reasoning about uncertain contexts in pervasive computing environments. IEEE Pervasive Computing, 3(2):62–70.

[Sama et al., 2008] Sama, M., Rosenblum, D. S., Wang, Z., and Elbaum, S. (2008). Model-based fault detection in context-aware adaptive applications. In SIGSOFT '08/FSE-16: Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of Software Engineering, pages 261–271, New York, NY, USA. ACM.

[Strang and Linnhoff-Popien, 2004] Strang, T. and Linnhoff-Popien, C. (2004). A context modeling survey. In Workshop on Advanced Context Modelling, Reasoning and Management at UbiComp 2004 - The Sixth International Conference on Ubiquitous Computing, Nottingham, England.

[Wallbaum and Diepolder, 2005] Wallbaum, M. and Diepolder, S. (2005). Benchmarking wireless LAN location systems. In The Second IEEE International Workshop on Mobile Commerce and Services (WMCS '05), pages 42–51.

[Wallbaum and Spaniol, 2006] Wallbaum, M. and Spaniol, O. (2006). Indoor positioning using wireless local area networks. In IEEE John Vincent Atanasoff 2006 International Symposium on Modern Computing (JVA '06), pages 17–26.
