2014 2nd IEEE International Conference on Mobile Cloud Computing, Services, and Engineering

MyActivity: Cloud-Hosted Continuous Activity Recognition using Ontology-Based Stream Reasoning

Amin Bakhshandeh Abkenar and Seng W. Loke
Department of Computer Science and Computer Engineering, La Trobe University, Melbourne, Australia
{A.Abkenar, s.loke}@latrobe.edu.au



Abstract—Activity recognition and context-aware systems for mobile users have attracted much attention recently, with numerous applications. For such systems to be scalable, cloud computing is a useful paradigm, but it needs to be appropriately integrated. This paper presents an innovative approach to mobile activity recognition that exploits a hybrid model combining machine learning techniques with stream reasoning over an ontological representation of activities. As a proof-of-concept, we implemented a cloud-hosted framework called MyActivity that uses Continuous SPARQL (C-SPARQL) to run queries that recognize human activities from accelerometer and GPS data from mobile devices.


Keywords-continuous activity recognition; stream reasoning; context-aware mobile computing; mobile cloud application

I. INTRODUCTION

In recent years, stream reasoning has emerged, with techniques developed for reasoning over streaming data using ontologies [12,13,16] (see, e.g., http://streamreasoning.org). This paper investigates ontology-based stream reasoning for continuous human activity recognition using a mobile cloud approach. We also propose and implement a cloud-hosted framework as part of our investigations, and evaluate that framework. In our prototype, we define an ontology which describes activities and their inter-relationships, to make activity inference feasible. Reasoning is performed over streams of sensor data obtained from the accelerometer and GPS of a mobile device. Each stream element carries a time-stamp, and, using continuous querying with inference rules built into the query, the activity recognition process can be performed. We named our framework MyActivity; it differs from the research reviewed in Section 4 and from current human activity recognition systems. The novelty of this work is ontological stream reasoning in the context of complex activity recognition via a cloud-hosted inference engine. We can summarize MyActivity's contributions as follows:
• using OWL ontology definitions in stream reasoning for complex human activity recognition research;
• demonstrating a hybrid model which takes advantage of both knowledge bases and machine learning;
• proposing an ontology-based model to represent and recognize complex activities; and
• providing a framework that integrates: a C-SPARQL engine as stream reasoner; mobile apps on the Android platform which collect and process streaming sensor data; and activity recognition via the cloud, using (as an example, though not necessarily) the popular Amazon cloud, i.e., AWS (Amazon Web Services) and SQS (Simple Queue Service, http://aws.amazon.com/sqs/).

The rest of the paper is organized as follows. Section 2 proposes a generic architecture for our framework and describes each of its components; ontology-based reasoning is also discussed in that section. Section 3 presents the implementation scenario and phases, discusses our low-level detailed design, and provides a number of examples to evaluate the functionality of our framework, including performance for different types of queries. Section 4 reviews related work. Section 5 concludes with future work.


II. CONCEPTUAL DESIGN OF MYACTIVITY

Current activity recognition systems face challenges, which we address in our work, summarized as follows:
• Real-time support: an activity recognition system should support real-time reasoning. Typically, reasoning needs to be performed on collected data in an online way, as the data arrives; stream reasoning is useful for this.
• RDF semantic data and SPARQL query support: the current activity recognition systems reviewed in Section 4 largely do not use SPARQL on streams of data, yet what is essential in activity recognition is running queries that reason over incoming, continuous streams of data.
• Capability of expansion: the system should be extensible to a wider range of sensor types and sensor data, and should support different types of activity queries. In our design, we accommodate this via the ontology approach: the ontology can be extended by adding more activity models, and new C-SPARQL queries over the added activities can then be registered with the system. The ontology is thus a key element of our system.










A. Scenarios
To illustrate our proposed approach, we first describe simple scenarios which can be supported by our proposed system.

Atomic Activity Recognition and Aggregate Queries. According to our activity ontology, there are two types of activities: (1) atomic activities, which we do not break down into other activities; walking, jogging and running are examples of atomic activities in our work (though the relationship between complex and "simpler" activities can be application-specific), and (2) complex activities, which are composite, i.e., composed of other activities; for example, making dinner comprises preparing the table, cooking, and serving drinks. One type of query performs atomic activity recognition. As an example, we can write a query which returns, in real time, the total number of minutes the user has been running within a day, and subsequently calculates the number of calories burnt daily. Reasoning according to the ontology is another scenario implemented in our proposed system. For example, if running and cycling have been defined as subclasses of "doingExercise", the system can reason and return "running" and "cycling" as results of a "user doingExercise?" query.

Complex Activity Reasoning. We discuss complex activity recognition in detail in the following sections; it comprises a two-stage process. Our proposed system first detects atomic activities from raw sensor data using machine learning techniques. Then, our ontology is used in a stream reasoner to recognize complex activities by aggregating atomic activities. For example, the activity sequence "walking, OnBus, walking" (three atomic activities) is assumed to constitute a complex activity called "CommutingOnTheBus". To support this, our ontology needs to be defined in such a way that our reasoner can handle these types of queries, as we show later.

Consider another scenario. Assume John and Maria are playing table tennis at the same time and in the same location, and both are carrying smartphones or some body-worn sensors. Suppose our proposed system detects, individually, that both John and Maria are playing table tennis. By comparing their locations and detecting the parallel movements, it is highly probable (though not without uncertainty) that Maria and John are playing table tennis together, since they are near each other. Attending the same meeting is another scenario with collocated persons. In these scenarios, we run queries over multiple sources of data streams.
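To make the aggregate-query scenario concrete, the following is a minimal C-SPARQL sketch; the prefix, stream URI, and property names (act:performs, act:Running) are illustrative assumptions rather than the exact identifiers used in our implementation. Assuming each stream element reports one minute of classified activity, the count approximates minutes spent running, from which burnt calories can be estimated:

  REGISTER QUERY DailyRunningMinutes AS
  PREFIX act: <http://example.org/myactivity#>
  # Count Running samples per user over the last 24 hours, refreshed every 15 minutes.
  SELECT ?user (COUNT(?act) AS ?minutesRunning)
  FROM STREAM <http://example.org/streams/atomicActivities> [RANGE 24h STEP 15m]
  WHERE {
    ?user act:performs ?act .
    ?act a act:Running .
  }
  GROUP BY ?user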


B. Approach and Methodology
As we noted earlier, exploiting both machine learning and knowledge-based techniques can be an effective way of performing human activity recognition and of reasoning about activity knowledge. As mentioned, our approach uses a combination of (i) machine-learning methods for atomic activity recognition and (ii) ontology-based reasoning for recognizing more complex activities. Our approach to continuous activity recognition is layered, as follows. Using machine learning techniques, streaming sensor data on the mobile is continually processed (via a running classifier) into a stream of recognized basic activities (basic with respect to an ontology). These basic activities are then forwarded from the mobile to a cloud-hosted server, which receives the streams of basic activities and processes them (applying ontology-based C-SPARQL queries, effectively performing a spatiotemporal aggregation of basic activities) in order to recognize more complex activities. The cloud-hosted server may receive streams of basic activities from multiple mobiles. The C-SPARQL queries have to be registered with the cloud server beforehand and can be changed depending on the range of complex activities to be recognized. The C-SPARQL queries for complex activities use the ontology of complex activities that we define and the stream(s) of basic activities sent from the mobile device(s). The output of the reasoning is, hence, a stream of complex activities recognized over time.
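As a sketch of this second layer, the following C-SPARQL-style query (again with assumed identifiers, and with timestamps modelled as explicit act:atTime data values rather than via engine-specific timestamp functions) recognizes the "walking, OnBus, walking" pattern over a sliding window and emits the corresponding complex activity:

  REGISTER STREAM CommutingDetector AS
  PREFIX act: <http://example.org/myactivity#>
  # Emit a CommutingOnTheBus event whenever walking, bus, walking occur in order.
  CONSTRUCT { ?user act:performs act:CommutingOnTheBus . }
  FROM STREAM <http://example.org/streams/atomicActivities> [RANGE 60m STEP 1m]
  WHERE {
    ?user act:performs ?w1 . ?w1 a act:Walking ; act:atTime ?t1 .
    ?user act:performs ?b  . ?b  a act:OnBus   ; act:atTime ?t2 .
    ?user act:performs ?w2 . ?w2 a act:Walking ; act:atTime ?t3 .
    FILTER (?t1 < ?t2 && ?t2 < ?t3)
  }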


C. Architecture
Working with sensor data, especially from mobile devices, is inherently challenging. In our proposed system, we assume the smartphone is carried in the user's left trouser pocket, since accelerometer values are extremely sensitive to the device's position and movement. To make classification more reliable, feature extraction is performed on the raw data. RDF streams are generated from the stream of classified raw data and sent to a C-SPARQL stream reasoner, which is hosted on the Cloud in our design. The stream reasoner executes different queries simultaneously and performs reasoning over the data received from the smartphone, using concepts from a pre-defined ontology. Finally, the results (i.e., activities recognized, if any) are sent back to the client's device (or to any other device interested in the results, e.g., for monitoring). Recognized activities are also stored on the Cloud, and can be combined with activities recognized from other mobiles to infer collaborative activities. Figure 1 illustrates the high-level abstract architecture of our approach. The client side comprises the following components:
• Sensor/GPS Listener: a listener that acquires accelerometer data and GPS data changes, and saves the sensor data in a buffer.




• Feature Extractor: as detailed later, the Feature Extractor computes features used for classification of sensor data on the smartphone.
• Classifier: the output of the Feature Extractor is the classifier input. Based on the training data set, this component classifies incoming sensor data continuously; ideally, the classifier recognizes atomic activities.
• Message Builder: the results of the classifier are converted into a message format that is readable on the server side, i.e., we do not transfer raw accelerometer data but the results of the classifier, which are effectively streams of atomic activities (to save bandwidth).
• Messaging Service: this component is responsible for transferring messages to the Cloud.
• Event Handler and AR Notifier: these two modules show the recognized activities on the mobile screen (such activities can also be forwarded to any other Android application for its own use). The Event Handler listens to the Messaging Service; if an activity is detected, as sent from the server side, it enables the Notifier to display the query results on the GUI.

The server side is hosted on the Cloud and comprises the following components:
• Atomic Activity Receiver: messages are received from the client via the Messaging Service; this module parses received messages and creates an RDF stream (i.e., a stream of quadruples of the form (⟨subject, predicate, object⟩, timestamp)) which is then sent to the Stream Reasoner Engine (below).
• Stream Reasoner Engine: the key component of our proposed system. It contains the Stream Gate, the Activity Component Detector, and Register Query. This component receives the RDF stream from the Atomic Activity Receiver and executes the queries using an ontology hosted on the Cloud.
  ◦ Stream Gate: acts as a gateway for RDF data streams. All quadruples enter through this module; activity recognition results are also published to the Messaging Service by the Result Publisher, which is part of the Stream Gate.
  ◦ Register Query: registers different queries in the Stream Reasoner. Six different queries have been defined in our testing, which will be explained later.
  ◦ Activity Component Detector: as previously mentioned, it uses the ontology and predefined knowledge to reason with the atomic activities, inferring that they possibly constitute a complex activity. This module is mainly used for complex activity recognition.
• Activity Ontology: contains knowledge about activities and their properties and relationships.
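For illustration, a brief excerpt of such an RDF stream might look as follows, where each element pairs a triple with a (Unix-epoch) timestamp; the user identifier, namespace prefix, and property names are assumptions for this sketch rather than our exact wire format:

  (⟨act:user42, act:performs, act:Walking⟩, 1393752900)
  (⟨act:user42, act:performs, act:OnBus⟩,   1393753260)
  (⟨act:user42, act:performs, act:Walking⟩, 1393754580)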



D. Sensor Data Classification
In order to collect training data, we developed a mobile application which takes user input via the mobile screen to label the incoming accelerometer data; such labels are then used in supervised learning. In deployment, we envision that the system will initially need a preliminary training phase to be operable. (We collected data for six atomic activities: OnBus, Walking, Running, Stationary, OnTrain, and Cycling. The decision tree technique was used as our learning method.) In the Feature Extractor, in order to make classification reliable, typical statistical values are computed, including correlations among the readings of the accelerometer's X & Y, X & Z, and Y & Z axes. As can be seen in Figure 2, sensor data are first collected continuously by the accelerometer and GPS as streams of data on a smartphone or other portable device. Our approach is to do the classification (using accelerometer data only) in two phases: the first phase detects inactive vs. active movements, and the second phase classifies activities into further classes such as running, walking, and cycling. Using accelerometer data alone, it is not possible to reliably separate the inactive classes further (e.g., into "stationary, not on-vehicle" vs. "stationary, on-vehicle"). Therefore, we also use GPS data, with a SPARQL query on such data that checks whether the subject is moving. Moreover, classification rests on some assumptions; for example, in our testing, we assumed the device must always be in a certain pocket during data collection and reasoning. Also, some similar activities such as onTrain, onBus and onCar can be hard to differentiate in many cases, though this level of abstraction seems acceptable for our purposes.
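The movement check on GPS data can be sketched as follows, in C-SPARQL-style syntax, assuming SPARQL 1.1 aggregates are available in the engine and using an assumed GPS vocabulary; the 0.0005-degree threshold (roughly 50 m of latitude) is an illustrative value, not a tuned parameter from our system:

  REGISTER QUERY IsMoving AS
  PREFIX gps: <http://example.org/myactivity/gps#>
  # Report users whose GPS fixes span more than a small bounding box in the last 2 minutes.
  SELECT ?user
  FROM STREAM <http://example.org/streams/gps> [RANGE 2m STEP 30s]
  WHERE {
    ?user gps:hasFix ?fix .
    ?fix gps:latitude ?lat ; gps:longitude ?lon .
  }
  GROUP BY ?user
  HAVING ((MAX(?lat) - MIN(?lat)) > 0.0005 || (MAX(?lon) - MIN(?lon)) > 0.0005)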

Figure 1. Abstract Conceptual Architecture


Note that particular notions such as "stationary" (which includes, e.g., lying down, sitting down, or standing still) are not, strictly speaking, activities; but immobile positions such as "sitting" and "standing" have often been used in the activity recognition literature in conjunction with "walking" and "running" activities, and so we include the general class of such immobile positions, called "stationary", as a subclass of "Activity". Also, basic or atomic activities which are components of complex activities are modelled with the concept "ComplexActivityComponent". Other activity ontologies can be incorporated, but we developed our own, tailored to our purposes.

Complex Activity Ontology. We developed an ontology used to infer what we view as complex activities. The approach is to define a number of properties that describe the specification of an activity, such as the sequence of atomic activities it comprises and the times over which the activities are typically carried out; such knowledge is encoded once and can be reused. 'componentOf', 'precondition', 'isBegining' and 'isTerminal' are properties which can potentially be reused across complex activities. Apart from these generic properties, other properties can be designed for specific complex activities. As an example, in the "play table tennis" complex activity, jumping and offensive/defensive strokes might occur simultaneously, and particular properties need to be defined to express this situation in the ontology.

We define a complex activity as a composition of a set of activities, identified by a set of 'componentOf' relationships: A is a complex activity if there are one or more activities Ai such that, for each Ai, we have Ai 'componentOf' A (of course, we assume that there are no cycles in the definitions, e.g., no Ai is A itself). The definition can also be recursive, in that an Ai may itself be a complex activity. This definition of a complex activity is relatively generic and does not necessarily imply any order on the component activities. Temporal properties can be used for more complex activity recognition. In many cases, the occurrence of one component before or after another makes no difference to recognizing a complex activity (i.e., its components form an unordered set of atomic activities), but some complex activities are indeed a sequence of activities, i.e., a temporal constraint is imposed on the components of the complex activity. For example, a constraint could be that, given components A1, ..., An of a complex activity A, we have Ai < A(i+1) for all such i, where "<" denotes temporal ordering (i.e., Ai occurs before A(i+1)).
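To illustrate how such definitions might be written down (in a simplified form with an assumed namespace; the real model is an OWL ontology richer than this sketch), the "CommutingOnTheBus" example could be encoded along these lines in Turtle:

  @prefix act:  <http://example.org/myactivity#> .
  @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

  act:Walking           rdfs:subClassOf act:Activity , act:ComplexActivityComponent .
  act:OnBus             rdfs:subClassOf act:Activity , act:ComplexActivityComponent .
  act:CommutingOnTheBus rdfs:subClassOf act:ComplexActivity .

  # Walking and OnBus are components of CommutingOnTheBus;
  # Walking both begins and ends the sequence ('isBegining' spelled as in our property names).
  act:Walking act:componentOf act:CommutingOnTheBus ;
              act:isBegining  act:CommutingOnTheBus ;
              act:isTerminal  act:CommutingOnTheBus .
  act:OnBus   act:componentOf act:CommutingOnTheBus .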
