Proceedings of the 3rd Annual IEEE Conference on Automation Science and Engineering Scottsdale, AZ, USA, Sept 22-25, 2007

MoRP-B04.3

Creating Robust Activity Maps Using Wireless Sensor Network in a Smart Home

Ching-Hu Lu, Student Member, Yu-Chen Ho and Li-Chen Fu, Fellow, IEEE

Abstract—This paper presents a practical application called an “activity map,” which serves as a guide to ambient-intelligence-related contextual information gathered from both humans and their surrounding environments. The activity map utilizes results inferred by a location-aware activity recognition approach to statistically show a resident’s possibly interleaved activities along with the corresponding location-related contexts. With observations from a variety of multi-modal and non-intrusive wireless sensors, the approach utilizes a Bayesian network fusion engine whose inputs are a set of the most informative features extracted from the sensors, and applies joint inference to improve the accuracy and consistency of the activity and location estimates. Additionally, each feature is weighted by a corresponding reliability factor that controls its contribution in case of possible device failure, making the system more tolerant of the inevitable disturbances commonly encountered in a cluttered home environment. This mechanism copes with some inherent sensor limitations and noise interference, thus improving overall robustness and performance. All experiments were conducted in an instrumented living lab, and the results demonstrate the effectiveness of the system.

I. INTRODUCTION

Wireless sensor network techniques have shown a promising future for the smart home, aiming to fulfill the vision of ambient intelligence, which demands that the resulting systems or technologies be responsive, sensitive, interconnected, contextualized, transparent and, of course, intelligent [1]. However, residents in such an intelligent system may have no clue about how much information they can access, and the building blocks of the system may not be cost-effective enough to manufacture, deploy and maintain, which keeps them from wide public acceptance. Even though camera-related technologies can sometimes provide satisfactory tracking and activity recognition capability, they become undesirable due to potential intrusiveness, occlusion or light fluctuations [2]. Likewise, an audio-based tracking system may suffer from problems caused by sound reflections. To be more practical, the sensors in a smart home must be anonymous enough [3]

Manuscript received April 28, 2007. This work is sponsored by the National Science Council, Taiwan, under project NSC93-2752-E002-007-PAE and in part by the Intel Digital Health Care project. Ching-Hu Lu and Yu-Chen Ho are with National Taiwan University, Taipei 106, Taiwan (e-mail: jhluh@sgi.csie.ntu.edu.tw, [email protected]). Li-Chen Fu is with both the Department of Computer Science & Information Engineering and the Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan, R.O.C. (phone: +886-2-23622209; fax: +886-2-23657887; e-mail: [email protected]).

1-4244-1154-8/07/$25.00 ©2007 IEEE.

such that they become applicable to recognizing resident-related contextual information without compromising residents’ privacy [4]. That is why we primarily show high-level activity information on an activity map by utilizing reliable and inexpensive sensors, which can neither directly identify residents nor capture privacy-sensitive information such as image or audio data. Moreover, we also demonstrate that those sensors are still capable of providing helpful contextual information, and residents do not have to carry or wear them all the time. Recent advances in wireless sensing devices make possible the high-level recognition of a resident’s interleaved activities, and we believe that these advances will enable a wealth of applications ranging from bridge structure monitoring to context-aware health care. The Smart Home Research Group at National Taiwan University (NTU) is part of the Attentive Home, an interdisciplinary project whose members comprise specialists from diverse domains in order to provide deep insights into both real human needs and the most advanced technologies. Our work currently focuses on statistically displaying accurate information about a person’s current activity and whereabouts on the activity map, based on features extracted from wireless sensors or information appliances (IAs) instrumented throughout the indoor environment. These two estimates (locations and activities) further cooperate with each other to confirm the results and facilitate location-aware service provisioning in the Attentive Home.
Our approach is feasible and the contribution is threefold: 1) We have prototyped and enhanced some objects, IAs, or simple sensors by integrating them with wireless sensor nodes (NTU Taroko [5], a Tmote Sky [6] compatible sensor node at lower cost) and deployed them throughout a living lab at NTU called OpenLab, rather than on residents, since it can be quite unwieldy for a resident to wear detection devices along with heavy battery packs; furthermore, the annoying task of battery replacement will eventually deter residents from using the system. 2) Instead of merely focusing on utilizing complicated upper-level probabilistic models, we also concentrate on designing smart sensors which can generate as many invariant features as possible to evaluate a posterior over the estimated states (both activities and locations) given the observations so far. One benefit of using wireless sensors rather than wired ones is the possibility of further improvements in the future, given upcoming advances in this



promising domain. 3) In order to recognize interleaved activities, in the current phase this work utilizes multiple naive Bayesian classifiers and enhances them by incorporating ranked discriminative features and reliability factors, which serve as confidence measures for the outcomes of error-prone wireless sensors. Although we did not collect long-term, large datasets to train the classifiers in this preliminary work, these models still enable us to gain an early understanding of the characteristics of interleaved activities.

II. RELATED WORK

Human tracking and activity recognition are among the fundamental issues in ubiquitous computing, and much effort has been made to solve these crucial problems via a variety of sensors with uncertainties, including RFID (radio frequency identification) tags, cameras, multi-modal sensors, wireless sensors, and accelerometers. Our survey shows that much attention has been focused on utilizing computer vision and wearable sensors to recognize human behavior. Some of the work has focused on building complicated models, such as [7] and [8]. Not much attention has been paid to wireless sensor networks deployed in living labs, except for works such as [9] and [10]. From their results, simple sensors have shown solid potential for solving activity recognition problems in home settings. Similar to [9] and [10], we also set up a living lab; however, we focus more on the sensors themselves, generating more invariant features and calculating real-time reliability factors to detect possible malfunctions, which are commonly encountered in battery-powered wireless devices. Unlike [3], we attempt to provide tracking and activity recognition at a more fine-grained level without asking residents to carry sensors [11]. This means we provide tracking outcomes at finer than room-level granularity, and activity recognition results richer than motion types (namely, moving and not-moving), which is accomplished quite well in [3].
In addition, our proposed activity map shows useful information indicating both a resident’s activities and the locations of those activities within a specific time interval. To do this, our

Fig. 1. The circuit diagram of the mobile wireless current sensor with an RFID reader.

approach reasonably assumes that indoor ground-truth (GT) maps or layouts of the rooms can be obtained with ease.

III. THE PROPOSED APPROACH

A. Wireless Sensor Selection and Deployment for Location-aware Activity Recognition

A smart sensor, in our definition, should be programmable to cooperate with others and should have the ability to process raw data to generate reliable features, with at least an on-board self-diagnostic function (e.g., sending wireless heartbeat packets to a super node) to increase overall efficiency and reduce maintenance costs. Currently, we have selectively integrated some off-the-shelf sensors [12] with Tarokos to generate informative features without revealing private information; these commonly used analog sensors can detect current, voltage, pressure, vibration, motion, acceleration, contact (via reed/mercury switches), etc., along with passive RFID readers to detect RFID tags. Taking the mobile current sensor (Fig. 1) as an example, it consists of a power socket and an outlet and can be connected in series with a regular appliance to measure its power usage. In addition, this device can be powered directly from a power line (or from batteries). With the integration of a passive RFID reader, an appliance with a passive RFID tag attached can connect to this device to readily recognize any appliance-related activities that commonly occur in our activities of daily living (ADLs). Furthermore, this device can enhance a regular appliance, allowing it to become a smart

Fig. 2. Overview of the NTU Attentive Home OpenLab (The four cameras are only used to collect ground-truth for labeling training data.)




 

Fig. 3. The prototype of a smart floor block: (a) the layout with four pressure sensors; (b) the circuit diagram.

object transparently, as the work in [13] has shown. According to domain knowledge, sensors are deployed so that they can be triggered whenever activities of interest are performed. In addition, we attached unique RFID tags to cellular phones so that the system can capture identities as residents enter and leave the house. Fig. 2 shows an overview of our instrumented OpenLab at NTU, where a variety of wireless sensors are deployed to detect various common activities.

B. Wireless Smart Floor Block Implementation for Indoor Human Detection to Assist Activity Recognition

In prototyping a wireless smart floor block, we aimed at designing a cost-effective, accurate sensory device that is not only easy to deploy but also readily replaced after any malfunction; moreover, this smart floor block can lay the groundwork for later integration with other smart objects to provide more powerful or attentive services. We have successfully integrated an off-the-shelf pressure sensor with a Taroko wireless node to create a prototype of a wireless smart floor block. The Taroko wireless node is a full-function node (with routing ability); it can connect up to eight analog sensors to its analog/digital converters (ADCs) and then wirelessly transfer the pre-processed information to a remote super node (also a Taroko) plugged into an OSGi-based [14] home/room server, thus making it feasible to deploy more wireless devices without the worry of having to hide large bundles of wires. Fig. 3 shows the basic wireless smart floor block used in our experiment, composed of four pressure sensors embedded in four mats with dimensions of approximately 60x60 cm. The output voltage of a pressure sensor is roughly proportional to the input force exerted on the sensor, and it is converted to digital readings by the ADCs bundled in the microcontroller on the Taroko.
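As a rough illustration of the block's feature extraction, the sketch below converts the four ADC readings of one floor block into a binary occupancy feature. The ADC resolution, reference voltage, and threshold are assumptions for illustration, not the actual firmware values:

```python
# Sketch (assumed constants, not the authors' firmware): convert the four
# 10-bit ADC readings of a 60x60 cm floor block into a binary occupancy feature.
ADC_MAX = 1023          # 10-bit ADC on the sensor node (assumption)
V_REF = 3.0             # ADC reference voltage in volts (assumption)
STEP_THRESHOLD = 0.8    # output volts indicating a footstep (assumption)

def block_occupied(adc_readings):
    """Return True if any of the block's four pressure sensors sees a footstep.

    Output voltage is roughly proportional to the applied force, so a simple
    per-sensor threshold yields a binary feature for the upper-level engine.
    """
    volts = [r * V_REF / ADC_MAX for r in adc_readings]
    return any(v > STEP_THRESHOLD for v in volts)

# Example: one corner of the block is pressed.
print(block_occupied([40, 35, 700, 38]))   # -> True
print(block_occupied([40, 35, 50, 38]))    # -> False
```

In practice the threshold would be calibrated per mat, since sensor response varies with placement and load distribution.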
The advantage of implementing wireless smart floor blocks is twofold: 1) This prototype is easy to deploy and can be powered by either batteries or a USB power cord. 2) Its accuracy and cost are determined by the size of each mat and the overall cost budget for the smart floor. We can even deploy some dummy blocks, which have no sensors, under furniture or appliances; furthermore, we can place the smart floor blocks only in some areas of interest, as in [15], to save more expense.

C. Invariant Feature Creation and Selection

Good features extracted from sensors are more invariant to changes and can provide high-level context information, thus reducing the overall computational burden. However, invariant and highly discriminative features are not that common. In addition to passively collecting features, this work has managed to generate as many features as possible that are invariant to changes and insensitive to interference or uncertainties from the wireless smart sensors. The importance of these invariant features lies in their potential to facilitate recognizing more complicated activities in a multiple-resident environment in the future. Fig. 4 illustrates various types of binary features which can be collected from a smart sensor. It is very challenging to generate an ideal feature (Fig. 4(b)) which exactly matches the original ground truth in Fig. 4(a), let alone a predictive one (Fig. 4(c)) with time-invariant characteristics. Like an energy function, more realistic features look like Fig. 4(d), with delayed detections of an activity. The mobile current sensor depicted in the previous subsection can generate very reliable time-invariant features for detecting appliance usage, and its outcome is similar to Fig. 4(b). With these time-invariant features, we can easily model the time durations, which may vary drastically for different activities. All samples in the training set consist of raw data and can be processed to obtain all possible features f = {f_1, ..., f_N}. Instead of using all possible features for an activity A_k from A = {A_1, ..., A_k, ..., A_K}, we are more interested in finding a ranked feature set f_{R,k} = {f_{1,k}, ..., f_{i,k}, ..., f_{R_k,k}, ..., f_{N,k}} based on the features' usefulness, so as to minimize recognition errors while mitigating the computational load on the system. Features ranked after f_{R_k,k} provide no further improvement in accuracy during the training phase.
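The idea of ranking candidate features by how well they track an activity's ground-truth signal can be sketched as follows. This is a simplified illustration with invented toy signals, not the paper's actual training pipeline; the score is the maximum lagged cross-correlation with the ground truth, normalized by the best achievable score:

```python
# Sketch (toy data, simplified scoring): rank candidate binary features for one
# activity by their maximum cross-correlation with the ground-truth signal.
def cross_corr_score(ground_truth, feature):
    """Max over lags of sum_t g[t + lag] * f[t], normalized by the ideal score."""
    n = len(ground_truth)
    best = 0
    for lag in range(n):
        s = sum(ground_truth[t + lag] * feature[t] for t in range(n - lag))
        best = max(best, s)
    return best / max(sum(ground_truth), 1)

def rank_features(ground_truth, features):
    """Return feature names sorted from most to least useful."""
    scores = {name: cross_corr_score(ground_truth, f) for name, f in features.items()}
    return sorted(scores, key=scores.get, reverse=True)

gt       = [0, 0, 1, 1, 1, 1, 0, 0]                  # activity "on" interval
features = {"current":   [0, 0, 1, 1, 1, 0, 0, 0],   # fires during the activity
            "vibration": [0, 1, 0, 0, 0, 0, 1, 0]}   # mostly noise
print(rank_features(gt, features))                   # -> ['current', 'vibration']
```

A cut-off on the sorted scores then plays the role of the ranking cut-off, discarding features whose extraction cost outweighs their contribution.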
The usefulness U_{i,k} of feature f_{i,k} for A_k is defined by (1):

$$U_{i,k} = (w_C \cdot C_{i,k}) \cdot (w_D \cdot D_{i,k}) \quad (1)$$

$$C_{i,k} = \frac{1}{T_k} \max\left( \sum_{t=1}^{T-l_k} g_k[t+l_k] \cdot f_{i,k}[t] \right) \quad (2)$$

$$D_{i,k} = \alpha_k \cdot N\!\left(\mu_{i,k}^{D_{es}}, \sigma_{i,k,D_{es}}^{2}\right) + (1-\alpha_k) \cdot N\!\left(\mu_{i,k}^{D_{ss}}, \sigma_{i,k,D_{ss}}^{2}\right) \quad (3)$$

where there are two confidence parameters to be evaluated


Fig. 4. An activity ground-truth sample and its possible corresponding features


for each feature f_{i,k}, with their corresponding weights (w_C and w_D). C_{i,k} in (2) is the cross-correlation (or the convolution) between the ground-truth function g_k (whose duration is l_k) and a normalized feature f_{i,k}. Cross-correlation originally refers to the measurement of similarity between two signals in the signal-processing domain, comparing an unknown signal to a known one. T_k is the maximum cross-correlation value from the training data and is used to normalize the factor. D_{i,k} in (3) evaluates how small the early activity-start detection time T_es and the early activity-end detection time T_ee are (as in Fig. 4(d)); each is approximated with a Gaussian distribution N(μ, σ²) with corresponding mean μ and variance σ². Theoretically, the mean and the variance will be very close to zero for an invariant feature. Using the definitions stated above, the cut-off point R_k for the ranked feature set can be determined by a predefined error function with a preset threshold based on the usefulness of a feature. To prevent one single feature from dominating the others, we choose at least two features for each activity, in case the first-ranked feature fails to be received owing to unexpected power failure or noise interference. The core idea behind determining R_k is that it reduces computational complexity by not extracting less useful features.

D. Activity Model and Bayesian Network Activity Inference

References [16] and [10] have demonstrated that naive Bayesian networks work quite well in some domains, owing to the low variance of the classifiers, despite their strict independence assumptions between attributes and classes. This work, in the current phase, has implemented multiple naive Bayesian classifiers, each of which represents an activity to be recognized.
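A minimal sketch of this one-classifier-per-activity arrangement follows. The likelihood values and feature names are invented for illustration (not trained on the paper's dataset); the key point is that each activity is scored independently, so two interleaved activities can both be reported:

```python
# Sketch (toy likelihoods, hypothetical feature names): one naive Bayesian
# classifier per activity, evaluated independently so interleaved activities
# can both score high at the same time.
import math

# p(feature fires | activity) for each activity's ranked features (assumed values).
LIKELIHOODS = {
    "using_pc":    {"pc_current": 0.9, "chair_pressure": 0.8},
    "using_phone": {"handset_switch": 0.85, "phone_vibration": 0.7},
}

def log_posterior(activity, observed):
    """Sum of log-likelihoods over the activity's ranked features (equal priors)."""
    total = 0.0
    for feat, p in LIKELIHOODS[activity].items():
        total += math.log(p if observed.get(feat) else 1 - p)
    return total

def detect(observed, threshold=math.log(0.25)):
    """Return every activity whose score clears the threshold (not mutually exclusive)."""
    return [a for a in LIKELIHOODS if log_posterior(a, observed) > threshold]

obs = {"pc_current": 1, "chair_pressure": 1, "handset_switch": 1, "phone_vibration": 1}
print(detect(obs))   # -> ['using_pc', 'using_phone']  (interleaved activities)
```

Because no normalization is enforced across activities, the classifiers do not compete for probability mass, which is what permits simultaneous detections.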
We also enhanced the classifiers by incorporating both ranked features and reliability factors to detect potential interleaved activities, since these classifiers have the advantage of not enforcing mutual exclusivity. For now, we can formulate the parameter learning as a maximum likelihood (ML) problem to obtain parameters θ*_{k,ML} for an activity A_k given the available labeled training data set Z, as shown below:

$$\theta_{k,ML}^{*} = \arg\max_{\theta_k} P(Z \mid \theta_k) \quad (4)$$

Fig. 5 shows the multiple Bayesian inference models used for performing inference and data fusion on the activity estimates, combining the features given the observations from a variety of multi-modal sensors as well as their corresponding reliability estimates. For activity A_k, the nodes include f_{kj} (the j-th ranked feature), l_k (the location feature), and their corresponding reliability estimates (R_{kj} and R_l for the ranked and location features, respectively) given the current observations O. Each arrow represents the relationship between two nodes and can be described by a conditional probability. Based on Fig. 5, the marginal probability p(A_k) given the observed evidence Z can be factorized into (5).

Fig. 5. A generalized enhanced naive Bayesian inference model for performing data fusion to infer interleaved activities.

$$\log p(A_k \mid Z) \propto \left( \log p(l_k \mid A_k) + \sum_{i=1}^{R_k} \log p(f_{ki} \mid A_k) \right) + \left( \log p(R_l \mid l_k) + \sum_{i=1}^{R_k} \log p(R_{ki} \mid f_{ki}) \right) + \left( \log p(O_l \mid l_k, R_l) + \sum_{i=1}^{R_k} \log p(O_{ki} \mid f_{ki}, R_{ki}) \right) \quad (5)$$

where p(f_{ki} | A_k) and p(l_k | A_k) represent the likelihoods that A_k may occur given the feature f_{ki} and l_k, respectively. For now, the prior probabilities p(A_k) for all activities are assumed equal. R_{ki} and R_l are reliability estimates and can be used to control the contribution of a single feature via the likelihoods p(R_{ki} | f_{ki}) and p(R_l | l_k). On the other hand, p(O_l | l_k, R_l) and p(O_{ki} | f_{ki}, R_{ki}) are closely related to sensor

models; both capture whether a particular set of sensors is triggered during a specific time interval, given the corresponding reliability factors. Obviously, these models with ranked features are more concise than ones using all possible features, thus greatly reducing the computational burden. Since the activities that residents perform have certain natural patterns or spatio-temporal smoothness, the recent history can be useful in predicting the current activity. On the other hand, a sensor's output should be considered erroneous if the sensor has shown a tendency to malfunction; the output of a failed sensor usually persists for a while and constantly contributes to the data fusion mechanism, which may have no clue as to the cause and simply fuses the unreliable feature along with the good ones, causing significant degradation in overall accuracy. Based on these considerations, the reliability R_{k,j}, used to improve the performance and smoothness of our recognition system, is defined as:

$$R_{k,j} = \omega \cdot \sum_{t-W_A}^{t} A_k[t] + (1-\omega) \cdot \frac{1}{f_{sample}} \cdot \sum_{t-W_p}^{t} P_k[t] \quad (6)$$

where A_k[t] is the total number of detections of the activity A_k during the interval from time t-W_A to time t, W_A is the window width of the detection interval, and ω is a weighting factor.
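As a rough illustration (hypothetical parameter values, not the actual system code), the sketch below shows how a per-feature reliability estimate such as R_{k,j} in (6) can gate that feature's log-likelihood contribution during fusion, so that a sensor that stops reporting packets is automatically excluded:

```python
# Sketch (assumed constants): reliability-gated fusion of per-feature
# log-likelihoods; a failed sensor's feature is dropped from the sum.
import math

LAMBDA = 8.0      # sigmoid steepness (assumption)
TAU_R  = 0.3      # reliability below which a feature is dropped (assumption)

def reliability_weight(r):
    """Sigmoid of the reliability estimate; zero weight for a failed sensor."""
    if r < TAU_R:          # sensor deemed failed: ignore its feature entirely
        return 0.0
    return 1.0 / (1.0 + math.exp(-LAMBDA * (r - 0.5)))

def fused_score(feature_logliks, reliabilities):
    """Reliability-weighted sum of per-feature log-likelihoods."""
    return sum(reliability_weight(r) * ll
               for ll, r in zip(feature_logliks, reliabilities))

logliks  = [math.log(0.9), math.log(0.8), math.log(0.85)]
healthy  = fused_score(logliks, [0.9, 0.9, 0.9])    # all sensors reporting
one_dead = fused_score(logliks, [0.9, 0.9, 0.05])   # third sensor's battery removed
print(healthy < one_dead)  # -> True: the dead sensor's term is simply dropped
```

In a full system the remaining terms would be renormalized; the point here is only that a low reliability estimate silently removes a feature from the fusion, rather than letting a stuck output keep voting.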

The weight ω is set to 0.4 based on pilot tests. Since pilot tests also showed that the number of correctly received packets is closely related to the stability of a remote sensor, P_k[t] is the total number of packets (with the sampling frequency f_sample as a normalizing constant) correctly received for the activity A_k within the interval W_p looking back from time t. We can now define p(R_ki | f_ki) using a sigmoid function:

$$p(R_{ki} \mid f_{ki}) = \begin{cases} \tau_{fail}, & \text{if } R_{kj} \le \tau_R \text{ and } 0 < \tau_{fail} \le 1 \\ \dfrac{e^{\lambda R}}{1 + e^{\lambda R}}, & \text{otherwise} \end{cases} \quad (7)$$

where λ is a constant for the sigmoid function; the contribution of a feature f_ki is completely ignored in (5) when τ_fail = 1 and R_ki is smaller than the predefined threshold τ_R. As for p(l_k | A_k), we define it as:

$$p(l_k \mid A_k) = \begin{cases} 1, & \text{if } l_k \subset L_{az,k} \\ l_\tau, & \text{otherwise}, \; 0 < l_\tau < 1 \end{cases} \quad (8)$$

where l_k is the feature representing the current location and L_az,k contains the activity zones (each consisting of several smart floor blocks) for A_k, with l_τ as a lower bound on the probability. The location feature incorporates high-level contextual information or domain knowledge about the objects to which the sensors are attached (e.g., TV, microwave, or light) and their locations (e.g., living room, kitchen, or study room). Lastly, the possible ongoing activities are derived by a threshold that controls how many activities can be detected from (5) at the same time.

IV. EXPERIMENT

In order to validate our approach, we collected training data for activities of interest that commonly occur in a regular home, as listed in TABLE I; some of these activities are even interleaved. The dataset was collected in the living lab shown in Fig. 2 from three volunteers (two of whom are not researchers) across multiple days. The volunteers were asked to read brief instructions as rough guidance and then perform a series of activities in an arbitrary order (some activities were performed simultaneously). On average, we collected data of varying lengths per activity, and we used four cameras deployed at the four corners of the OpenLab to collect ground truth for labeling the training data afterwards. To make the most efficient use of the limited training data, we chose the leave-one-out methodology to overcome the time constraints of collecting hand-labeled, segmented data for the different activities.

TABLE I
ACTIVITY RECOGNITION EXPERIMENTAL RESULTS

Testing activity                         Sensors¹           Average accuracy²   Average accuracy³   Top 3 useful sensors¹
Using PC                                 A,C,L,M,P,U,V,B    89%                 91%                 P,M,L
Watching TV                              A,C,L,M,P,U,V,B    75%                 98%                 C,M,U
Studying                                 A,C,L,I,M,P,B      71%                 96%                 P,C,L
Using phone                              A,L,M,P,R,U,V,B    82%                 82%                 U,R,A
Listening to music                       C,L,P,B            99%                 99%                 C,P,L
Using microwave                          C,L,R,U,V,B        98%                 99%                 C,R,A
Using refrigerator                       C,L,R,U,V,B        95%                 97%                 C,R,A
Using other appliances with RFID tags⁴   C,I,V,B            87%                 89%                 C,I,L
Sitting                                  L,M,B              99%                 99%                 P,L,M

¹ A: accelerometer, C: current, I: RFID, L: load sensor, M: motion, P: pressure mat, R: reed switch, U: mercury switch, V: vibration, B: bundled sensors including node ID, signal strength, temperature, humidity, battery voltage, etc. ² Without using location context. ³ Using location context. ⁴ Including shredder, toaster, water boiler, lamp, etc.

The average accuracies in TABLE I refer to the percentage of time that an activity was correctly detected. The table also compares the results with and without the help of location context, presented here to verify the usefulness of location-awareness. The results with location information outperform the others, and this information is especially advantageous in the testing scenarios of “watching TV” and “studying.” The reason is that the volunteers were asked to take some items from the refrigerator while they were performing those activities. The table also lists the ranking of the sensors that generate the most useful features. The majority of the useful features stem from current sensors, pressure mats, load sensors, contact sensors, and accelerometers. In particular, the current sensors along with node IDs are very helpful in distinguishing activities that involve home appliances, such as “watching TV,” “listening to music,” “using microwave,” “using refrigerator,” and “using other appliances with RFID.”

The accuracy for “using phone” is lower than the others because the volunteers sometimes preferred to use the speaker-phone rather than the handset, which, we thought, is crucial for the smart sensors to detect this activity. This unexpected event became even more frequent when the testers tried to dial out. That means our current settings for

Fig. 6. (a) Continuous recognition result of the interleaved activities “using PC” and “using phone.” (b) Sensor malfunction test: comparison of the results with and without the help of the sensor reliability factor.



Fig. 7. An activity map (locations and activities) for a user in the Attentive Home

“using phone” should be improved by exploring other, more reliable features in our future work. Recognition accuracy is also lower for “using other appliances with RFID,” partly because our current passive RFID reader sometimes failed to detect the tag attached to the plug of an appliance; additionally, the volunteers did not have the patience to retry after a detection failure. Fig. 6(a) illustrates the result of a continuous testing trace and demonstrates the recognition results for two interleaved activities, “using PC” and “using phone.” The majority of the data traces were correctly classified, except for some sparse misclassifications owing to the less informative features of “using phone,” as just discussed. In order to simulate sensor malfunction, the batteries were removed from the Taroko that generated the most informative feature during the experiment. Fig. 6(b) shows the Bayesian-network-fused results with and without the help of reliability factors. In the case of a device failure (at t1 for the battery-outage simulation, or at t2 for a more realistic simulation using batteries whose power decreases over a period of time) and without any assistance from the reliability factors, the final fused result became less accurate. However, the simulated results show that the reliability factors could automatically keep the engine from fusing the failed sensor, thus making the overall system less sensitive to device failure and improving overall robustness. The final results confirm that using a Bayesian network as the fusion engine improves overall recognition performance compared with a purely naive one (without assistance from either reliability factors or location information). Fig. 7 illustrates the first version of our activity map, showing activity information inferred from the training/testing data of a tester.
This map borrows a concept from the recently emerged “tag clouds,” which are often used to visualize the popularity of topics on a website. The bigger the font size on the map, the more frequently the activity was performed in that place. For example, this map shows that the tester performed “watching TV” on the sofa in the living room more than other activities. It also suggests that this volunteer tends to study in the living room and listen to music at the same time. This map has the potential to enhance healthcare quality while preserving privacy, by providing more complete context information to caregivers without invasiveness. More and more contextual information will be added to the map on a real-time basis in the near future.
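The tag-cloud scaling can be sketched as follows. The font-size range and the linear mapping are assumptions for illustration, not the actual renderer used for Fig. 7:

```python
# Sketch (assumed scaling, hypothetical counts): map per-location activity
# frequencies to font sizes for a tag-cloud style activity map.
MIN_PT, MAX_PT = 10, 36   # font-size range in points (assumption)

def font_sizes(activity_counts):
    """Linearly scale activity frequencies into the font-size range."""
    lo, hi = min(activity_counts.values()), max(activity_counts.values())
    span = max(hi - lo, 1)
    return {a: round(MIN_PT + (c - lo) * (MAX_PT - MIN_PT) / span)
            for a, c in activity_counts.items()}

living_room = {"watching TV": 42, "studying": 18, "listening to music": 18, "sitting": 6}
print(font_sizes(living_room))
# "watching TV" gets the largest font, "sitting" the smallest
```

A logarithmic scale is a common alternative when a few activities dominate the counts, to keep the rarer activities legible.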

V. CONCLUSION

To show a robust activity map, this paper presented a location-aware activity recognition approach utilizing a Bayesian-network-based fusion engine whose inputs are ranked features, assisted by reliability factors computed from a variety of wireless sensors, to improve overall robustness and performance. This work built on multi-modal sensors to collect the most informative features in order to meet the challenge of recognizing the multi-faceted nature of human activities. Our initial work, verified by experiments, has yielded high recognition rates, suggesting that this is a feasible approach that may lead to practical ambient intelligence applications. The proposed approach has the potential to achieve the goal of implementing a responsive, sensitive, interconnected, contextualized, transparent, and intelligent system that can move from our living lab into the real world.

REFERENCES

[1] ISTAG, "Experience and Application Research: Involving Users in the Development of Ambient Intelligence," ISTAG Working Group Final Report, v1, 2004.
[2] N. Strobel, S. Spors, and R. Rabenstein, "Joint audio-video object localization and tracking," IEEE Signal Processing Magazine, vol. 18, pp. 22-31, 2001.
[3] D. H. Wilson and C. Atkeson, "Simultaneous Tracking and Activity Recognition (STAR) Using Many Anonymous, Binary Sensors," in Lecture Notes in Computer Science, 2005, p. 62.
[4] P. Robinson, H. Vogt, and W. Wagealla, Privacy, Security and Trust Within the Context of Pervasive Computing. Springer, 2005.
[5] C. You, Y.-C. Chen, J.-R. Chiang, P. Huang, H. Chu, and S.-Y. Lau, "Sensor-Enhanced Mobility Prediction for Energy-Efficient Localization," in Proc. Third Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON 2006), Sept. 2006.
[6] Tmote Sky, http://www.moteiv.com/products.php.
[7] H. H. Bui, D. Q. Phung, and S. Venkatesh, "Hierarchical hidden Markov models with general state hierarchy," in Proc. Nineteenth National Conference on Artificial Intelligence, pp. 324-329.
[8] H. H. Bui, S. Venkatesh, and G. A. W. West, "Policy Recognition in the Abstract Hidden Markov Model," Journal of Artificial Intelligence Research, vol. 17, pp. 451-499, 2002.
[9] E. M. Tapia, N. Marmasse, S. S. Intille, and K. Larson, "MITes: Wireless portable sensors for studying behavior," in Extended Abstracts of Ubicomp 2004: Ubiquitous Computing, 2004.
[10] E. M. Tapia, S. S. Intille, and K. Larson, "Activity Recognition in the Home Using Simple and Ubiquitous Sensors," 2004.
[11] J. Lester, T. Choudhury, N. Kern, G. Borriello, and B. Hannaford, "A hybrid discriminative-generative approach for modeling human activities," in Proc. International Joint Conference on Artificial Intelligence (IJCAI 2005).
[12] Phidgets, http://www.phidgets.com/.
[13] H. Ishii, "Bottles: A Transparent Interface as a Tribute to Mark Weiser," IEICE Transactions on Information and Systems, vol. 87, pp. 1299-1311.
[14] T. Honkanen, "OSGi - Open Service Gateway initiative."
[15] K. Koile, K. Tollmar, D. Demirdjian, H. Shrobe, and T. Darrell, "Activity Zones for Context-Aware Computing," in Proc. International Conference on Ubiquitous Computing (UbiComp), 2003.
[16] J. H. Friedman, "On Bias, Variance, 0/1-Loss, and the Curse-of-Dimensionality," Data Mining and Knowledge Discovery, vol. 1, pp. 55-77, 1997.

