International Journal of Software Engineering and Its Applications Vol. 2, No. 4, October, 2008

Activity Situation Model and Application Prototype for Lifelog Image Analysis

Bernady O. Apduhan
Kyushu Sangyo University, Fukuoka 813-8503, Japan
[email protected]

Katsuhiro Takata, Jianhua Ma, Runhe Huang
Hosei University, Tokyo 184-8584, Japan
[email protected], [email protected], [email protected]

Abstract

A lifelog is a set of continuously captured data records of our daily activities. Lifelog data usually consist of text, pictures, video, audio, gyro, acceleration, position, annotations, etc., and are kept in large databases as records of an individual's life experiences, which can be retrieved when necessary and used as references to improve quality of life. The lifelog in this study includes several types of media data/information acquired from wearable multi-sensors which capture video images, the individual's body motions, biological information, location information, and so on. We propose an integrated technique to process the lifelog, which is composed of both captured video (called lifelog images) and other sensed data. Our proposed technique, called the Activity Situation Model, is based on two models, i.e., the space-oriented model and the action-oriented model. Using these two modeling techniques, we can analyze the lifelog images to find representative images in video scenes using both pictorial visual features and the individual's context information, and represent the individual's life experiences in semantic and structured ways for future experience data retrieval and exploitation. The resulting structured lifelog images were evaluated against a previous approach; our proposed integrated technique exhibited better results.

1. Introduction

Vannevar Bush, the wartime director of the U.S. Office of Scientific Research and Development, imagined a machine that could keep written memos and their related materials on microfilm, e.g., personal hand-written notes/memos, technical papers, and related experimental results. This machine was envisioned to enhance human memory, and Bush named it the "MEMory Extender" (MEMEX) [1]. Currently, some projects, e.g., MyLifeBits [2] and LifeLog [3], are trying to realize the MEMEX idea by continuously capturing and recording all the acts/experiences of an individual's daily life using various sensors. The captured data consist of video, audio, body movements, gyro, position, annotations, etc. An individual's life activities and experiences captured by various media are generally called a lifelog [4]. A lifelog is thus a data set composed of two or more media forms that record the same individual's daily activities. From the lifelog data, we can know what has happened to an individual, as well as where and when it happened. By analyzing the lifelog data, we can also find which events/states are interesting or important, summarize these useful records in structured and semantic ways for efficient retrieval and presentation of past life experiences, and use these experiences to further improve the quality of life at present and/or in the future. Lifelog information processing, which includes lifelog data acquisition, collection, modeling, analysis, management and utilization, is an emerging research area covering


various IT technologies and has great potential applications in the way we live, learn, work, etc. However, there are many challenging issues to be solved in developing practical lifelog systems which can effectively analyze, manage and use lifelog data. Two of the major problems are the huge size and the complexity of lifelog data. The former is due to the continuous accumulation of captured data over time, and the latter is due to the many events in an individual's daily life, which may happen regularly or irregularly, often or occasionally, independently or simultaneously, etc. To find/recognize the sequence of events in an individual's daily life, many researchers have focused on the lifelog video data and tried to analyze the captured video (called lifelog images) from a wearable camera based on pictorial visual features [5, 6, 7]. However, lifelog images are closely related to the individual's interests, states, and other factors. Research on lifelog image analysis using a purely vision-based approach has made only scant progress because it did not consider the individual's contextual and situational information. In this study, aside from the lifelog images captured by the wearable camera, the lifelog data also includes additional types of media data acquired from other wearable sensors which capture the individual's body motions, biological information, and location information. Here, we propose an integrated technique to process the lifelog using the correlations between the different kinds of data captured from multiple sensors, instead of dealing with them separately. The proposed technique is based on two models, i.e., the space-oriented model and the action-oriented model. This idea is grounded in the situated action concept proposed by Suchman [8]. Using the two models in the proposed technique, we can analyze the lifelog with both pictorial visual features and the individual's context information such as motion, location, action and biological data, then extract the meaningful events/records and represent the individual's life experiences in semantic and structured ways. The structured lifelog images were constructed and then evaluated against a previous approach; our proposed technique exhibited better results.

The remainder of this paper is organized as follows. The next section describes the related work, main research issues and our basic ideas for effective lifelog image analysis. Section 3 discusses lifelog acquisition using wearable sensors and describes how an individual's life is captured by each sensor. Our base models, i.e., the space-oriented model and the action-oriented model, are discussed in Section 4. The lifelog image analysis and processing are explained in Section 5. In Section 6, we describe a prototype application and discuss the conducted experiments and related issues. Finally, Section 7 provides a summary of our research and cites some future work.

2. Related work

The acquisition of a lifelog is realized in three major phases, i.e., monitoring, storing, and recording. The monitoring of an individual's daily life is realized by wearable sensors or sensor networks. A sensor network [9] usually consists of small-size wired or wireless nodes, such as power-saving ZigBee sensors [10]. A sensor network embedded in an indoor environment, such as a house, can continuously monitor an individual's activities in that environment. In many outdoor environments, however, sensors can hardly be placed everywhere, so some sensors are carried/worn by the user to continuously monitor the user's activities and capture individual information [11, 12]. In the storing phase, the captured daily life data is kept in large data storage. There is growing interest in archiving an individual's activities, such as location, ambient


surroundings, body movements, biological data, captured images, speech, audio, etc. Although data storage is becoming cheaper and denser day by day, the amount of captured data, especially video, grows very fast over time. The concept of ubiquitous computing, first initiated by Mark Weiser [13], has inspired the development of ubiquitous networks that enable pervasive recording with monitoring devices and their connection to storage. However, data obtained directly from sensors contain much redundancy and should be processed further to extract important data/events in structured and semantic ways.

The prospects of life recording/storing methods and environments have motivated several experimental research projects aimed at realizing Bush's vision of MEMEX. Two representative life-recording projects are the MyLifeBits project by Microsoft [2] and the LifeLog project by DARPA [3]. These projects collect an individual's daily life events/factors/conditions, and the captured data is stored in a database. Other similar projects are EyeTap [7], Stuff I've Seen [14], and SemanticLIFE [15]. The information acquired in these projects can be used in various applications, such as memory recording, automatic Web blogging, or assisting human conversation. MyLifeBits focuses on recording an individual's activities using computers and on how to manage the recorded data. Our research mainly targets outdoor daily activities and adopts an approach similar to LifeLog in capturing and recording data, i.e., by wearable sensors; LifeLog, though, is mainly intended to record soldiers' activities.

One efficient way to record daily life is to recognize typical life patterns. An individual's daily life has some basic patterns, e.g., one week consists of five similar weekdays and a two-day weekend. Workers/students have regular daily life styles (patterns). However, many different events can occur in each day. Some researchers [5-7, 11, 12] tried to find the patterns by analyzing video, audio, biological or other sensed data, but they processed the data from different sensors independently. A lifelog has the property that one single activity can be represented by various media, so it is difficult to extract a similar pattern automatically across these media. Moreover, each medium has limited content presentation capability. Therefore, previous studies which used a single medium often produce a semantic gap with respect to the original meaning of the lifelog. In our study, we try to find and use the correlations between the different kinds of data captured from multiple sensors, and process the data integrally based on the individual's spatial and action contexts, instead of dealing with the captured data separately.

3. Lifelog acquisition

The lifelog in this study is acquired from wearable multi-sensors which capture the individual's body motions, biological information, location information, and eyeshot images. The wearable sensors used in this research consist of one acceleration sensor, one heart rate monitor, one location positioning sensor, and one wearable camera.

The acceleration sensor senses the three-axis motion of one part of the body. In previous research, this type of sensor has often been used to obtain an individual's gestures/posture. A three-dimensional motion sensor can be constructed from three acceleration sensors. This sensor is attached to the person's foot to sense his/her walking/running pace every second.

The heart rate monitor senses the individual's heartbeat. The number of heartbeats per minute is called the heart rate (HR). HR is usually used to measure the level of training exercise. However, the raw HR alone is not highly reliable, since it is influenced by both neural activity and the heart's pumping activity. Accordingly, this paper adopts the vital changes in the heart's activity. Vital changes can be represented by three components of HR, i.e., the high frequency value


(HF, 0.15–0.40 Hz), the low frequency value (LF, 0.04–0.15 Hz), and the LF/HF ratio, as measures of neural activity. These three components are calculated from the R-R intervals (RRi), i.e., the intervals between successive heartbeats. The frequency components of HR are calculated by fast Fourier transform analysis of the RRi. This sensor is attached to the person's chest to sense the individual's biological information every second.

The location positioning sensor obtains the individual's position and the surrounding landmarks using GPS (Global Positioning System) satellites. GPS is a radio navigation system that is available worldwide and can calculate one's position on earth. Many previous studies used this sensor, although its drawback is that GPS cannot function everywhere, especially in indoor environments. This positioning sensor is attached to the person's body to sense the individual's position once every one hundred milliseconds.

The wearable camera captures the sights of the person's surroundings. Many previous studies also used this sensor and placed the camera near the eyes. However, a person frequently moves his/her head, which makes it a less than ideal place for the camera. In this paper, the camera is placed near the person's chest or shoulder to sense the individual's ambient surroundings every second.

To capture an individual's daily life, an experimenter was asked to wear these multi-sensors, some of which are shown in Fig. 1.

Figure 1. Wearable multi sensors

The details of these devices are as follows:
- Wearable Camera: An image sensor of 300,000 pixels with VGA image capture, a small-size camera made by ELECOM Corp., Japan, model Y1C30MNSV.
- Wearable Accelerometer: A sensor to capture walking or running speed with x-y-z axis motions, a foot-pod type, made by POLAR Electro, France, model S810/S625.
- Wearable Heart Rate Sensor and Console: To monitor heartbeat counts and intervals, a band-type sensor and a watch-type console, made by POLAR Electro, France, model S810/S625.
- Wearable GPS Signal Receiver: To monitor the signals from satellites, a wireless standalone device connected via the Bluetooth protocol, made by EMTAC Corporation, model BT-GPS.
- Wearable Computer: Stores the captured images from the wearable camera, a laptop computer with large data storage space, model Dynabook CX1/214LE.
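As a concrete illustration of the heart-rate analysis described above (LF, HF and LF/HF derived from R-R intervals via fast Fourier transform), the following Python sketch shows one possible computation. It is not the authors' implementation; the 4 Hz resampling rate and the plain FFT periodogram are assumptions chosen for illustration.

```python
import numpy as np

def hrv_frequency_components(rr_ms, fs=4.0):
    """Estimate LF, HF power and the LF/HF ratio from R-R intervals.
    rr_ms: R-R intervals in milliseconds; fs: resampling rate in Hz (assumed)."""
    rr = np.asarray(rr_ms, dtype=float) / 1000.0          # intervals in seconds
    beat_times = np.cumsum(rr)                            # time of each beat
    t_even = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    rr_even = np.interp(t_even, beat_times, rr)           # evenly resampled RR series
    rr_even -= rr_even.mean()                             # remove the DC component

    power = np.abs(np.fft.rfft(rr_even)) ** 2             # FFT periodogram
    freqs = np.fft.rfftfreq(len(rr_even), d=1.0 / fs)

    lf = power[(freqs >= 0.04) & (freqs < 0.15)].sum()    # low-frequency band
    hf = power[(freqs >= 0.15) & (freqs <= 0.40)].sum()   # high-frequency band
    return lf, hf, (lf / hf if hf > 0 else float("inf"))
```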


4. Space-oriented and action-oriented models

As mentioned earlier, one major problem with a lifelog is the complexity of the captured data, since the lifelog is affected by various events, factors, and conditions. To identify the factors behind this problem, we focus on how an individual's daily life is affected by social factors. Every individual has affiliations, since every individual plays a role in society in various forms; for example, employees work at a company, students attend school, an individual and his/her friend go shopping together, and so on. Each individual's action is selected from his/her own contexts (situations), which are shaped by the appropriate affiliation at that time. This is explained in the writings of Lucy A. Suchman as what she called situated action. The concept of situated action stands in strong contrast to the pre-planned view of action: every action is embedded in a unique context and can only be understood in that context. Context is any information that can be used to characterize the situation of an entity [16], and can be simply understood as the 5Ws, i.e., who, where, when, what and why [17]. Our basic idea is that the situation of an individual's outdoor activity can be separated into his/her own action state and the surrounding spatial state, as shown in Fig. 2.

[Figure 2: the individual's contexts are mapped by the action-oriented model to an action state, and the surrounding contexts are mapped by the space-oriented model to a spatial state; together they characterize the individual activity situation.]

Figure 2. Modeling concept of state and situation

Contexts can be divided into levels, i.e., from low to high, to differentiate levels of meaning and abstraction. The lowest-level contexts are raw data directly acquired from sensors. The space-oriented model is used for mapping the sensed surrounding contextual data to a spatial state, S(ti) = [p(ti), L(ti), La(ti)], where p(ti) is the individual's geographic position at time ti, L(ti) is a nearby site such as a shop or a gasoline station near p(ti), and La(ti) is a set of site-related attributes, e.g., a degree of danger as used in our kids' safety care application [18]. The change of spatial states between times ti and ti-1 can be expressed as a kind of differential, ⊿S(ti) = Diff[S(ti), S(ti-1)], and all the spatial states occurring from t1 to tm form a set described as follows:

Sm = [S(t1), S(t2), …, S(tm)] ~ [⊿S(t1), ⊿S(t2), …, ⊿S(tm)], where ⊿S(t1) = S(t1)

Here '~' is an equivalence notation which means that any S(ti) can be recovered from [⊿S(t1), ⊿S(t2), …, ⊿S(tm)]. Similarly, the action-oriented model is used for mapping the sensed individual contextual data to an action state, A(ti) = [M(ti), B(ti), C(ti)], where M(ti) is the individual's motion data, B(ti) is the individual's biological information, and C(ti) is a set of related attributes, e.g., running states in our sports exercise application [19] and file revision lines in team work [20]. The change of action states between times ti and ti-1 can be expressed as ⊿A(ti) = Diff[A(ti), A(ti-1)], and all the action states occurring from t1 to tn form a set described as follows:


An = [A(t1), A(t2), …, A(tn)] ~ [⊿A(t1), ⊿A(t2), …, ⊿A(tn)], where ⊿A(t1) = A(t1)

A situation related to an activity at time t is characterized by [S(t), A(t)], and the complete set of situations and their transition process can be expressed by [Sm, An]. When processing or exploiting lifelog data, the situation information related to the action state and the spatial state can be used separately or together.
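To make the two models concrete, the sketch below shows one possible representation of the state tuples S(ti) and A(ti) and the diff-based sequences defined above. The field types and the dictionary-based diff are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, asdict

@dataclass
class SpatialState:            # S(t) = [p(t), L(t), La(t)]
    position: tuple            # p(t): geographic position, e.g. (lat, lon)
    landmark: str              # L(t): a nearby site such as "shop"
    attributes: dict           # La(t): site-related attributes, e.g. {"danger": 0.2}

@dataclass
class ActionState:             # A(t) = [M(t), B(t), C(t)]
    motion: tuple              # M(t): acceleration / pace data
    biological: dict           # B(t): e.g. {"LF/HF": 1.8}
    attributes: dict           # C(t): activity-related attributes, e.g. {"state": "running"}

def diff(curr, prev):
    """Delta(t_i) = Diff[X(t_i), X(t_{i-1})]: keep only the changed fields;
    for t_1 the whole state is kept (Delta(t_1) = X(t_1))."""
    if prev is None:
        return asdict(curr)
    c, p = asdict(curr), asdict(prev)
    return {k: v for k, v in c.items() if v != p[k]}

def reconstruct(deltas):
    """Recover [X(t_1), ..., X(t_m)] from [Delta(t_1), ..., Delta(t_m)],
    i.e. the '~' equivalence stated above."""
    states, current = [], {}
    for d in deltas:
        current = {**current, **d}
        states.append(dict(current))
    return states
```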

5. Lifelog image analysis and processing

Due to the large size of and high redundancy in the captured video, one of our research objectives is to extract representative lifelog images and organize them in a structured way based on the analysis of lifelog image features as well as their relations with location and action information. Many previous studies have tried shot segmentation and scene abstraction, but have not yet obtained effective results. This is because, as mentioned earlier, lifelog images captured from a wearable camera have problematic properties. Our analysis of the lifelog video starts from an image tree structure divided into four layers: cut, scene, shot and frame, as shown in Fig. 3.


Figure 3. Layered structure of lifelog images

The cut c is the recording from start to end of an individual's (whole) daily activity with a wearable camera. Its length depends on the activities, and is currently limited to two hours due to the power supply limitation of the equipment. The scene q usually consists of a number of interrelated shots that happened at the same place. Its length depends on the time the individual stays in a place, usually from a few minutes to one hour. The shot s is a continuous part of the video divided by segment points. Its length ranges from a few minutes to some tens of minutes. The frame fr is one picture taken at a specific time, and is the minimum element of lifelog images. The interval between two frames is usually from 1/30 to 1 second. Temporal relations between the lifelog images are preserved in the above layered structure. We further segment and re-organize the images according to the semantic relationships between scenes, considering lifelog contexts such as locations and types of individual activities, as shown in Fig. 4.
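The four-layer structure can be expressed as simple nested data types, as in the following sketch; the class and field names are hypothetical and only mirror the cut/scene/shot/frame definitions above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Frame:                        # fr: one picture taken at a specific time
    timestamp: float
    image_path: str

@dataclass
class Shot:                         # s: continuous video between two segment points
    frames: List[Frame] = field(default_factory=list)
    representative: Optional[Frame] = None   # selected later (see Section 5)

@dataclass
class Scene:                        # q: interrelated shots that happened at the same place
    location: str = ""
    shots: List[Shot] = field(default_factory=list)

@dataclass
class Cut:                          # c: one whole recording session with the wearable camera
    scenes: List[Scene] = field(default_factory=list)
```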



Figure 4. Context-based structure analysis flow

First, the lifelog images are segmented into shots by detecting correlation values of the images' visual features while considering the individual's movements and activities. Next, the representative frames in the shots are chosen. Then the shots are synthesized into scenes according to frame similarities, calculated as distances between lifelog images and the representative frames. This scheme uses both real daily life context information and relevant lifelog image features, instead of only using computer vision techniques. To effectively segment the shots, we use both image variation features and the corresponding actions/locations where the images were taken. The similarity correlation Cp between lifelog images is defined as follows:

Cp = (MF − MB) / (DL + DA)

where MF is the number of forward variations, MB is the number of backward variations, DL is the difference from the location changing point, and DA is the difference from the action changing point, as shown in Fig. 5.


Figure 5. Location action-based shot segmentation
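As an illustration of this segmentation criterion, the sketch below evaluates Cp per frame and flags frames whose correlation deviates strongly as candidate shot boundaries. How MF, MB, DL and DA are extracted from the video and context streams, and the threshold value, are assumptions not specified in the paper.

```python
def correlation_cp(mf, mb, dl, da, eps=1e-6):
    """Cp = (MF - MB) / (DL + DA); eps only guards against division by zero."""
    return (mf - mb) / (dl + da + eps)

def detect_shot_boundaries(mf_seq, mb_seq, dl_seq, da_seq, threshold=5.0):
    """Flag frame indices whose |Cp| exceeds an (assumed) threshold as
    shot segmentation points."""
    boundaries = []
    for i, (mf, mb, dl, da) in enumerate(zip(mf_seq, mb_seq, dl_seq, da_seq)):
        if abs(correlation_cp(mf, mb, dl, da)) > threshold:
            boundaries.append(i)
    return boundaries
```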

From the equation, it can be seen that when the difference between MF and MB becomes larger, or when DL and DA are small, the absolute value of Cp becomes large. Such a large deviation from the reference indicates a shot segmentation point. After shots are detected, a representative frame for each shot is chosen from the images in that shot. It is not easy to select a proper representative frame, since the shooting direction of the camera worn by the user may change often due to the user's movement, and the objects inside the shot images may move frequently in an outdoor environment, such as a subway station. An improper representative frame can result in loss of useful information about the shot. Our approach to determining the appropriate representative frame consists of two steps.


The first step is to calculate an average image with luminance and chrominance components Yij, Crij and Cbij for each pixelij as follows:

Yij = (1/N) Σn=1..N Yijn
Crij = (1/N) Σn=1..N Crijn
Cbij = (1/N) Σn=1..N Cbijn

where N is the number of lifelog images in the shot, and Yijn, Crijn and Cbijn are the luminance and chrominance components of pixelij in the nth lifelog image. The second step is to choose the frame having the nearest distance Dpix to the average image, and make this the representative frame:

Dpix = (1/(HW)) Σi=0..H Σj=0..W | ai,j − bi,j |

where H and W are the numbers of pixel rows and columns in an image, | aij − bij | is the pixel difference computed as the Euclidean distance in YCrCb space, and Dpix is the normalized sum of these differences between a pair of images. Using the above approach, a set of representative frames corresponding to the shots can be selected. The similarity between any two representative frames can be measured using the color histogram distance Dhist, as follows:

Dhist = 1 − Σi=0..Nbin min(Ii, Ji) / Σi=0..Nbin Ii

where Dhist shows how different the colors in the two representative frames are. Similar representative frames can then be found according to their distance values, and the corresponding shot images can be connected to form the structured lifelog images based on the contents and semantics of scenes, as shown in Fig. 4.
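A minimal sketch of the two-step representative-frame selection and the histogram comparison is given below. It assumes each frame is an H×W×3 NumPy array in YCrCb space, and the histogram bin count is an illustrative choice rather than a value given in the paper.

```python
import numpy as np

def representative_frame(shot_images):
    """Pick the frame nearest (under D_pix) to the per-pixel average image."""
    stack = np.stack([img.astype(float) for img in shot_images])    # N x H x W x 3
    average = stack.mean(axis=0)                                    # average Y/Cr/Cb image
    # D_pix: mean over pixels of the Euclidean YCrCb distance to the average
    d_pix = [np.linalg.norm(img - average, axis=2).mean() for img in stack]
    return int(np.argmin(d_pix))                                    # index of representative frame

def histogram_distance(frame_i, frame_j, bins=32):
    """D_hist = 1 - sum(min(I_k, J_k)) / sum(I_k), using a simple global
    color histogram over all channels (an assumed simplification)."""
    hist_i, _ = np.histogram(frame_i, bins=bins, range=(0, 255))
    hist_j, _ = np.histogram(frame_j, bins=bins, range=(0, 255))
    return 1.0 - np.minimum(hist_i, hist_j).sum() / hist_i.sum()
```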

6. Application prototype: outdoor running workout assistance

6.1 System configuration

This prototype application aims to provide a safe and effective wearable system to assist people doing outdoor running workouts. The system and the wearable sensors used are shown in Fig. 6.


[Figure 6 (overview): the wearable sensors feed a State Analyzer and Correlation Calculator on a PDA, which also hosts a Location Detector, State Recognition, Map Management, Object Generator, Course Manager, Updater, Transmitter and GUI; a Course Generator runs on a home server. Data flows: 1. correlation value, 2. input data, 3. object, 4. course data, 5. request, 6. advice, 7. load map, 8. update.]

Figure 6. A running assistance system and sensors

Here, one essential research issue is to correctly recognize a runner's state (warming-up, main workout, cool-down, or over-training) during a run by analyzing the contextual data obtained from the sensors and GPS positioning device carried by the runner. Figure 7 shows one test result in which the changes in state were correctly detected by finding abrupt changes in the correlation values using the space-oriented model, as explained earlier.

[Figure 7 plots the correlation value against time (seconds) over a 400-second run, with the detected warm-up, main workout and cool-down phases marked.]

Figure 7. The state correlation and recognition
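One simple way to detect such abrupt changes is to threshold the jump between successive correlation values, as in the hypothetical sketch below; the threshold and the one-sample-per-second assumption are illustrative, not the deployed system's logic.

```python
def detect_state_changes(correlations, jump_threshold=15.0):
    """correlations: one correlation value per second (as plotted in Fig. 7).
    Returns the indices where the value jumps by more than the assumed
    threshold, interpreted as state changes (e.g. warm-up -> main workout)."""
    changes = []
    for t in range(1, len(correlations)):
        if abs(correlations[t] - correlations[t - 1]) > jump_threshold:
            changes.append(t)
    return changes
```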

6.2 Structured lifelog image generation

We analyze the captured video from a camera worn by a user and generate a series of structured images using both visual features and the user's locations and actions. We applied the approach to video taken in different periods, and the performance results are shown in Table 1.

Table 1. The numbers of segmented shots

                07/08/15   07/09/02   07/09/03   07/09/04
  Total Frames      3041        965        789        957
  Proposed           118         39         31         35
  Color-based        154         44         58         49
  Action-based       146         45         43         44


The results show that more shots are segmented when only using either the action information or image color features. Using our proposed approach, the number of shots can be greatly reduced because similar shots can be found and combined. Therefore, the number of lifelog images to be stored becomes smaller. Further, the shots related to an individual’s activity type and place can be merged to generate a structured image series corresponding to a scene, as shown in Fig. 8.

Figure 8. The Space-action based image series

7. Conclusion and future work

In this study, we presented the analysis of lifelog images using both pictorial vision features and the individual's contextual situation information, based on two models, i.e., the action-oriented model and the space-oriented model. We call this integrated model the activity situation model. The resulting structured lifelog images were better than those of previous research that uses only vision-based technology. However, the proposed method cannot solve some problems in lifelog image segmentation. When a large object moves toward the camera for a long time, neither the previous technique nor the proposed technique can reduce its effect. We consider this a deficiency in our lifelog segmentation method; it could, however, be mitigated by incorporating sound processing/recognition. At the current stage, this study has focused mainly on the models and analysis of lifelog data, and has only applied the techniques to lifelog image scene summarization and a few other application cases [18, 19, 20]. In the future, it will therefore be necessary to integrate our approach with other research to develop a whole lifelog system covering lifelog data acquisition, analysis, processing, mining, representation, presentation, management, and services.

8. References

[1] Bush, V.: As We May Think, The Atlantic Monthly, Vol. 176, No. 1, 1945, pp. 101-108.
[2] Gemmell, J., Bell, G., Lueder, R., Drucker, S., Wong, C.: MyLifeBits - Fulfilling the Memex Vision, Proc. of the ACM International Conference on Multimedia, Juan les Pins, 2002.
[3] DARPA LifeLog Initiative, http://www.darpa.mil/ipto/Programs/lifelog/
[4] ACM Workshop on Continuous Archival and Retrieval of Personal Experiences, http://sigmm.utdallas.edu/Members/jgemmell/CARPE/view?searchterm=CARPE
[5] Aizawa, K., Ishijima, K., Shiina, M.: Summarizing Wearable Video, Proc. of the IEEE International Conference on Image Processing, Thessaloniki, 2001, pp. 398-401.


[6] Healey, J., Picard, R. W.: StartleCam: A Cybernetic Wearable Camera, Proc. of the IEEE International Symposium on Wearable Computers, Pittsburgh, 1998, pp. 42-49.
[7] Garabet, A., Mann, S., Fung, J.: Watching Them Watching Us, Proc. of the ACM Conference on Computer Human Interaction, Minneapolis, 2002, pp. 634-635.
[8] Suchman, L. A.: Plans and Situated Actions: The Problem of Human-Machine Communication, 1987.
[9] Estrin, D., Govindan, R., Heidemann, J., Kumar, S.: Next Century Challenges: Scalable Coordination in Sensor Networks, Proc. of the 6th ACM International Conference on Mobile Computing and Networking, Seattle, 1999, pp. 263-270.
[10] ZigBee Alliance, http://www.zigbee.org/
[11] Clarkson, B. P.: Life Patterns - Structure from Wearable Sensors, PhD thesis, MIT, 2002.
[12] Ueoka, R., Hirota, K., Hirose, M.: Wearable Computer for Experience Recording, Proc. of the International Conference on Artificial Reality and Telexistence, Tokyo, 2001, pp. 155-160.
[13] Weiser, M.: The Computer for the Twenty-First Century, Scientific American, September 1991, pp. 94-104.
[14] Dumais, S. T., Cutrell, E., Cadiz, E., Jancke, G., Sarin, R., Robbins, D. C.: Stuff I've Seen - A System for Personal Information Retrieval and Re-use, Proc. of the ACM-SIGIR Conference on RDIR, Toronto, 2003, pp. 72-79.
[15] Ahmed, M., et al.: SemanticLIFE - A Framework for Managing Information of a Human Lifetime, Proc. of the International Conference on IIWAS, Jakarta, 2004, pp. 687-696.
[16] Dey, A. K.: Understanding and Using Context, Personal and Ubiquitous Computing, 5(1):4-7, Springer Verlag, 2001.
[17] Ma, J., et al.: Towards a Smart World and Ubiquitous Intelligence: A Walkthrough from Smart Things to Smart Hyperspaces and UbicKids, International Journal of Pervasive Computing and Communications, 1(1), March 2005.
[18] Takata, K., Shina, Y., Ma, J., Apduhan, B. O.: A Context Based Architecture for Ubiquitous Kids Safety Care Using Space-oriented Model, Proc. of the IEEE Int'l Conf. on Parallel and Distributed Systems, Japan, 2005, pp. 384-390.
[19] Takata, K., Tanaka, M., Ma, J., Huang, R., Apduhan, B. O., Shiratori, N.: A Wearable System for Outdoor Running Workout State Recognition and Course Provision, Lecture Notes in Computer Science on Autonomic and Trusted Computing, Vol. LNCS 4610, Springer, 2007, pp. 385-394.
[20] Takata, K., Ma, J.: A Decentralized P2P Revision Management System Using a Proactive Mechanism, International Journal of High Performance Computing and Networking, Vol. 2, Issue 4/5, 2005.

Authors

Bernady O. Apduhan received his BS in Electronics Engineering from MSU-Iligan Institute of Technology, Philippines, and later served as a faculty member in the Dept. of Electronics Engineering of the same institution. He studied MS Electrical Engineering at the University of the Philippines, Diliman Campus, and received his MS and PhD degrees in Computer Science from Kyushu Institute of Technology, Japan. He was a research associate at the Dept. of Artificial Intelligence, Faculty of Computer Science and Systems Engineering, Kyushu Institute of Technology, and is currently an Associate Professor in the Dept. of Intelligent Informatics, Faculty of Information Science, Kyushu Sangyo University, Japan. His current research interests include cluster/grid computing, ubiquitous computing and intelligence, and mobile/wireless computing and applications. He has served as an editorial member of the Transactions of IPSJ-DPS, the International Journal of Business Data Communication and Networking, and the Journal of Ubiquitous Computing and Intelligence, and as a reviewer for other respected journals. He has also served as executive steering committee member, PC chair/co-chair/member, and reviewer in various local and international conferences. He is a member of IPSJ-SIGDPS, IPSJ-SIGHPC, and a professional member of ACM and IEEE-CS.

Katsuhiro Takata received his BS in Computer Science from Meiji University, Japan, and his MS in Computer Science from Hosei University, Japan, in March 2004, and is currently a PhD candidate. His research interest is in the area of ubiquitous computing and intelligence.


Jianhua Ma received his B.S. and M.S. degrees from the National University of Defense Technology (NUDT), China, in 1982 and 1985, respectively, and his Ph.D. degree from Xidian University in 1990. He joined Hosei University in 2000 and is a professor in the Faculty of Computer and Information Sciences. Prior to joining Hosei University, he had 15 years of teaching and/or research experience at NUDT, Xidian University, and the University of Aizu, Japan. His research from 1983 to 2003 covered coding techniques for wireless communications, data/video transmission security, speech recognition and synthesis, multimedia QoS, 1-to-m HC hyper-interface, graphics rendering ASICs, e-learning and virtual university, CSCW, multi-agents, Internet audio and video, mobile web services, P2P networks, etc. Since 2003 he has been devoted to what he calls Smart Worlds (SW), pervaded with smart/intelligent u-things and characterized by Ubiquitous Intelligence (UI) or Pervasive Intelligence (PI). He has published over 140 refereed papers and has served over 40 international conferences/workshops as a chair. He is the Co-EIC of JMM, JUCI and JoATC, and Associate EIC of JPCC.

Runhe Huang is a Professor in the Faculty of Computer and Information Sciences at Hosei University, Japan. She received her B.Sc. in Electronics Technology from the National University of Defense Technology, China, in 1982, and her PhD in Computer Science and Mathematics from the University of the West of England, UK, in 1993. She worked at the National University of Defense Technology as a lecturer from 1982 to 1988. In 1988, she received a Sino-Britain Friendship Scholarship for her Ph.D. study in the U.K. After receiving her PhD, she worked at the University of Aizu, Japan, for 7 years and has been working at Hosei University, Japan, since 2000. Dr. Huang has been working in the field of Computer Science and Engineering for more than 25 years. Her research fields include Computer Supported Collaborative Work, Artificial Intelligence Applications, Multi-Agent Systems, Multimedia and Distributed Processing, Genetic Algorithms, Mobile Computing, Ubiquitous Computing, and Grid Computing. Dr. Huang is a member of IEEE and ACM. She has served as an editorial board member of the Journal of Ubiquitous Computing and Intelligence (JUCI) and the Journal of Autonomic and Trusted Computing (JoATC), as guest editor of the special issue on Innovative Networked Information Systems and Services in the Journal of Interconnection Networks (JOIN) and of the special issue on Ubiquitous Computing and Mobile Networking in the International Journal of Wireless and Mobile Computing (IJWMC), and as PC co-chair/PC member for various international conferences.
