A Life Log System that Recognizes the Objects in a Pocket

Kota Shimozuru
Grad. School of Engineering, Kobe University
1-1, Rokkodaicho, Nada, Kobe, Hyogo 657-8501, Japan
[email protected]

Tsutomu Terada
Grad. School of Engineering, Kobe University and JST PRESTO
1-1, Rokkodaicho, Nada, Kobe, Hyogo 657-8501, Japan
[email protected]

Masahiko Tsukamoto
Grad. School of Engineering, Kobe University
1-1, Rokkodaicho, Nada, Kobe, Hyogo 657-8501, Japan
[email protected]

ABSTRACT

A novel approach has been developed for recognizing objects in pockets and for recording the events related to those objects. Information on putting an object into or taking it out of a pocket is closely related to user contexts. For example, when a house key is taken out of a pocket, the owner of the key is likely just getting home. We implemented an objects-in-pocket recognition device, which has a pair of infrared sensor arrays arranged in a matrix, and life log software that records a time stamp for each event. We evaluated whether or not the system could identify one of five objects (a smartphone, ticket, hand, key, and lip balm) using template matching. When one registered object (the smartphone, ticket, or key) was put in the pocket, our system recognized the object correctly 91% of the time on average. We also evaluated our system in one action scenario. With our system's time stamps, the user could easily remember what he took on that day and when he used the items.

Author Keywords

Wearable Computing, Life Log, Context Awareness, Pockets

ACM Classification Keywords

H.4 Information Systems Applications: Miscellaneous

INTRODUCTION

Wearable computing has become practical owing to the downsizing of computers and memory storage. As a result, life logs, which enable us to recall our past and to look back on records of our life, have attracted a great deal of attention. However, isolating and retrieving particular events from numerous recordings is still difficult. Therefore, a method for appropriately indexing the recorded data is required for efficient retrieval.

There are many indexing methods. For example, Aizawa et al. applied not only a wearable camera and microphone but also various kinds of sensors, such as a brain-wave analyzer, GPS receiver, acceleration sensor, and gyro sensor, to extract user contexts [1]. Watanabe et al. proposed an ultrasound-based system that recognizes gestures on the basis of the volume of the received sound and the Doppler effect; they evaluated the approach in one scenario on nine gestures and activities with 10 users, and when there was no environmental sound generated by other people, the recognition rate was 87% on average [2]. Fukumoto et al. proposed a method of smile/laughter recognition using photo interrupters for indexing interesting/enjoyable events on recorded video [3]. Thus, recorded data are indexed by attaching text information related to the user's context, surroundings, and emotions.

In addition, scenes in which objects are used are also treated as indexing cues [4–7]. In our research, we focused on scenes in which an object is put into or taken out of a pocket because these scenes have a close relationship with the user's intentions. For example, when people take their house key out of their pocket, they are likely just getting home, and when they put a ticket into their pocket, they are likely about to board a train. Therefore, the objects in pockets are worth recording in a life logging system. However, no research has been conducted on using pockets for indexing life log data. Therefore, we developed a system based on the objects in pockets. The system is divided into radiating and receiving parts: the former consists of infrared LEDs arranged in a matrix, and the latter consists of photo transistors, also arranged in a matrix. The system uses template matching to identify objects by the size and shape of the area of interrupted infrared light.

The remainder of this paper is organized as follows. In Section 2, we discuss related work. In Section 3, we describe the design of the device and the method of recognizing objects in the pocket, and in Section 4, we explain our implementation of a prototype system. The experimental evaluation is described in Section 5. Finally, we conclude and discuss future work in Section 6.

RELATED WORK

Logging and indexing systems

The history of the life log dates back to 1945, when Bush described the MEMEX, a hypothetical proto-hypertext system [8]. He envisioned the MEMEX as a device in which individuals would compress and store all of their books, records, and communications. Pursuing this vision, Bell et al. captured many things, namely articles, books, cards, CDs, papers, photos, voice recordings, etc., and stored them digitally [9]. They presented practical interpretations of specific cases as well as basic concepts in the field of life logs. Furthermore, within the last ten years, terabyte hard drives and cloud storage have become widespread, so storing a huge amount of data such as a life log has become feasible for the average computer user. Although life logging enables us to record everything that we are watching and listening to, retrieving particular events from the numerous recordings is difficult. Therefore, efficient retrieval requires indexing the recorded data.

Many indexing methods using various kinds of sensors are available. For instance, Watanabe et al. suggested a method of indexing recorded data with context information by using ultrasound [2]. Kobayashi et al. implemented a life log system that indexes recorded data in the contexts of having a meal, going to the toilet, and smoking by using olfactory sensors [10]. Fukumoto et al. proposed a method of smile/laughter recognition using photo interrupters for indexing the interesting/enjoyable events on a recorded video [3]. Thus, recorded data are indexed by combining them with text information on user contexts, surroundings, and emotions. In addition, the use of objects is also worth recording in a life log system because we use our belongings when we need them.

Figure 1. Most general types of pockets: (a) horizontal type; (b) vertical type

Life log systems based on objects

Several approaches have been developed for life log systems based on objects. Ueoka et al. proposed a wearable interface system named "I'm Here!," which helps users find what they are looking for [4]. The system indexes the scenes where the user released registered objects such as a smartphone by recording them continuously with a camera worn around the waist. Wahl et al. attached radio frequency identification (RFID) tags to places where phones are located in daily life, such as pockets and backpacks [11]. In a first evaluation across several full-day recordings at nine locations, their approach achieved an accuracy of 80%, and only 5.3% of all tags were missed. Makino et al. proposed a life log system based on the contact sound caused by touching objects [5]. Their prototype system recognized 12 different actions with 94.4% accuracy. Kawamura et al. proposed an object-triggered human memory augmentation system named "Ubiquitous Memories," which enables users to directly associate their experience-related data with physical objects by touching an object with an attached RFID tag [6]. Minamikawa et al. presented a mobile-based life log system called "Life Pod," which collects users' activities with a mobile phone in the real world and displays them in a blog-like style [7]. With sensors installed in a mobile phone and information attached to objects, users can easily record their daily activities anytime, anywhere. Adding an RFID system to Life Pod also improves the capture of information in real space.

Although these methods can detect objects well, they are not suitable for everyday use. They are not comfortable because the first method requires the user to wear a camera for searching for registered objects and the second method requires him/her to wear sensors on the hands. The others, which need RFID tags attached to every object that we want to recognize, are not realistic.

Recognition system using infrared light

In this research, we use infrared sensors to recognize objects in pockets. Many methods use infrared sensors because they detect the temperature of and distance to objects without any contact [12–14]. Kakehi et al. enabled interaction with soft objects such as a cushion by using infrared sensors [12]. Moeller et al. enabled precise free-air interaction by using infrared sensors attached to a frame [13]. Thus, infrared sensors can detect objects without any contact. Mizoguchi et al. used infrared sensors arranged in a matrix to recognize the objects on a robot hand [14]. This method solves the difficulty a camera has in detecting objects within a range of several centimeters. In our research, it is physically impossible to place a camera in a pocket, and objects in a pocket must be recognized within a range of several centimeters. Therefore, we used infrared sensors arranged in a matrix for recognizing objects in pockets.

SYSTEM DESIGN

The goal of this research was to recognize the objects in pockets for indexing the recorded data of a life log and for providing context-aware services.

Requirements for our system

As described in Section 2, attaching RFID tags to everything is not realistic because expendable objects such as tickets can also be put in pockets. Therefore, we require the device itself to recognize the objects in pockets. Figure 1 shows the general structures of pockets. All pockets can be classified roughly into two types: one has a horizontal slit, as shown in Figure 1(a), and the other has a vertical slit, as shown in Figure 1(b). Both types have the same bag structure, as shown in Figure 2. Pockets are sewn into the trousers around the hem of the opening of the pocket. They are made of soft fabric so that objects can easily be placed into or taken out of the pocket, even if the trousers are made of a hard fabric such as denim. In our research, the recognition system must not prevent the user from putting objects into or taking them out of pockets. In addition, the sensor needs to be robust in dark and narrow environments.

Figure 2. Bag structure of pockets

System structure

We used infrared sensors to satisfy these requirements. Figure 3 shows the structure of the system. It consists of the input part, which obtains the sensor data, and the processing part, which detects objects in the pocket. The input part in turn consists of the radiating part, which emits infrared light from infrared LEDs arranged in a matrix, and the receiving part, which detects the strength of the radiated light with photo transistors arranged in a matrix. As shown in Figure 3, the two sections of the input part face each other across the pocket. The data from the input part are sent through a microcontroller to a PC over a wired connection and then processed.

Figure 3. Structure of our system

As mentioned previously, methods using infrared sensors are valid for detecting objects within a range of several centimeters in environments where cameras cannot be used. Thus, we apply a similar method using infrared sensors arranged in a matrix. Figure 4 shows the circuit used to read the sensor data. In Figure 4, each "Ph. Tr." arranged in the matrix represents a photo transistor, which is connected to the power supply through the demultiplexer depicted in the upper part of the figure. The microcontroller depicted on the left reads the sensor values on lines "a" to "g". The microcontroller then sends a control signal to the demultiplexer, which switches the supply voltage to the next row of sensors (rows 1 to 8). The detection area of the photo transistors is larger than the pocket area. The radiating part is always lit while the system is running.

Figure 4. Circuit to read sensor data

Thus, when there is something in the pocket, our system can detect it and its condition from the strength pattern of the infrared light it interrupts. The system does not disturb the normal use of the pocket because no intrusive device is present in it, and the system does not have to consider ambient light because of the position of the device. The recognition software on the PC records the information on objects with a time stamp after recognizing the object.
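To make the data flow above concrete, the following is a minimal sketch, written in Python rather than the Processing environment used for the actual recognition software, of how the PC side might turn one raw 7×8 frame of photo-transistor readings into the binary pattern that the recognition step consumes. The frame layout, the threshold value, and the function name are illustrative assumptions, not details taken from the paper.

```python
# Sketch: binarize one 7x8 frame of photo-transistor readings.
# ROWS and COLS follow the 7x8 sensor matrix described above; the
# threshold value is a placeholder, not a figure from the paper.

ROWS, COLS = 7, 8
LIGHT_THRESHOLD = 512          # hypothetical ADC level meaning "light received"

def binarize_frame(raw_frame):
    """Convert raw readings (7 rows of 8 ints) into a 0/1 occlusion pattern.

    1 means the infrared light was blocked by an object, 0 means it reached
    the photo transistor, matching the binary template format used later.
    """
    return [
        [0 if value >= LIGHT_THRESHOLD else 1 for value in row]
        for row in raw_frame
    ]

if __name__ == "__main__":
    # A fabricated example frame: high values = light received, low = blocked.
    example = [[900] * COLS for _ in range(ROWS)]
    example[5][2] = example[5][3] = 100   # pretend an object blocks two cells
    print(binarize_frame(example))
```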

Recognition method

Our system discriminates objects by using the template matching method [15]. Template matching finds a pattern that matches a particular pattern by comparing the input data (hereinafter referred to as searched data) with prepared data (hereinafter referred to as template data). Note that both the template and searched data are output from the receiving part and converted into two-dimensional binary data arranged according to the matrix of photo transistors in the receiving part. In our research, the system has multiple templates for one registered object.

Figure 5. Searched and template data for template matching: (a) searched data I(i, j); (b) template data T_k(i, j)

Figure 5 shows examples of searched and template data. We describe the width and height of the template as M and N, respectively. The system calculates the sum of absolute differences (SAD), which is the evaluation value showing the difference between the searched and template data. The total number of sets of template data is defined as S. When the searched data are compared with the k-th (1 ≤ k ≤ S) template data, the value at coordinates (i, j) of the template data is represented by T_k(i, j), and that of the searched data is represented by I(i, j). Thereby, R_SAD(k), which is the SAD between the searched data and the k-th template data, is defined by the following equation:

R_{SAD}(k) = \sum_{j=0}^{N-1} \sum_{i=0}^{M-1} | I(i, j) - T_k(i, j) |    (1)

The system then determines the minimum SAD value d_min with the following formula:

d_{min} = \min_{1 \le k \le S} R_{SAD}(k)    (2)

If d_min is less than or equal to the threshold, the system treats the pocket-held object that gives d_min as a known object. In contrast, if d_min is more than the threshold, the system treats it as an unknown object. In this research, the threshold was set to 6.
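To illustrate the matching step just described, the following is a minimal Python sketch of SAD-based template matching over the 7×8 binary patterns, using the threshold of 6 mentioned above. The data structures and function names are our own assumptions for the example; the authors' recognition software was written in Processing.

```python
# Sketch of the SAD-based matching described by Equations (1) and (2).

THRESHOLD = 6   # maximum SAD for a match to count as a known object (from the paper)

def sad(searched, template):
    """Sum of absolute differences between two equally sized binary matrices."""
    return sum(
        abs(s - t)
        for s_row, t_row in zip(searched, template)
        for s, t in zip(s_row, t_row)
    )

def classify(searched, templates_by_object):
    """Return the best-matching object label, or 'unknown object'.

    templates_by_object maps a label (e.g. 'key') to a list of binary
    templates, since the system keeps multiple templates per registered object.
    """
    best_label, d_min = None, None
    for label, templates in templates_by_object.items():
        for template in templates:
            d = sad(searched, template)
            if d_min is None or d < d_min:
                best_label, d_min = label, d
    if d_min is not None and d_min <= THRESHOLD:
        return best_label
    return "unknown object"
```

Because the patterns are binary, the SAD here is simply the number of cells on which the searched data and a template disagree.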

IMPLEMENTATION

We implemented a prototype device to recognize objects in pockets on the basis of the system design described in Section 3. Figure 6 shows the prototype device, and Figure 7 shows photographs of the prototype device being worn. It consists of a PC, a microcontroller, the radiating part, and the receiving part. The specifications of each device are as follows:

• PC: ThinkPad X220i (CPU: Core i3, 2.10 GHz; memory: 4 GB)
• Microcontroller: Arduino Nano
• Battery for the infrared LEDs of the radiating part: a mobile Energizer battery whose output voltage was converted from 19 V to 12 V

The pocket was 100-percent cotton. We developed the recognition software on the PC using Processing on Windows 7.

Figure 6. Prototype of device

Figure 7. Photographs of the prototype device being worn: (a) side view; (b) back view
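The paper only states that the data from the input part reach the PC through a wire-connected microcontroller; as one hedged illustration of that link, the Python sketch below shows how the PC side might read frames if the microcontroller streamed one comma-separated line of 56 values per scan. The port name, baud rate, and line format are assumptions made for this example and are not specified in the paper.

```python
# Sketch: read one 7x8 sensor frame per line from the microcontroller.
# Assumes, hypothetically, that the firmware prints 56 comma-separated
# readings per scan; port name, baud rate, and format are not from the paper.

import serial  # pyserial

ROWS, COLS = 7, 8

def read_frame(port):
    """Block until one complete frame arrives; return it as 7 rows of 8 ints."""
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        try:
            values = [int(v) for v in line.split(",")]
        except ValueError:
            continue                       # skip partial or garbled lines
        if len(values) == ROWS * COLS:
            return [values[r * COLS:(r + 1) * COLS] for r in range(ROWS)]

if __name__ == "__main__":
    with serial.Serial("COM3", 115200, timeout=1) as port:   # placeholder port
        print(read_frame(port))
```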

Infrared sensors

The prototype device consists of the radiating and receiving parts. As shown in Figure 6, these parts face each other across the pocket.

Detailed snapshots of the input part are shown in Figure 8. The specifications of the infrared LED and photo transistor are as follows:

• Infrared LED: 5-mm round infrared LED (radiant intensity of 55 mW/sr at a forward current of 50 mA)
• Photo transistor: TPR-105F

Figure 8. Infrared sensors: (a) radiating part; (b) receiving part

As shown in Figure 8(a), the radiating part consists of infrared LEDs arranged in a 7×8 matrix. They are laid out in a grid at intervals of 10 mm. The size of the substrate is 90×70 mm, of which the infrared LEDs occupy approximately 58×50 mm. After passing through the pocket, the light of one infrared LED is strong enough to activate the nearest photo transistor and a few around it. The photo transistors can detect the infrared light even if the pocket is opened by approximately 300 mm at most.

As shown in Figure 8(b), the light receiving part consists of photo transistors arranged in a 7×8 matrix and a demultiplexer. They are laid out on the same grid as that of the radiating part. The demultiplexer quickly switches the power supply route of the photo transistor arrays, and the sensor data are received at 1.7 kHz.

Templates for recognition

Table 1 shows the number of template data samples and the size of each object. Figure 9 shows screenshots of the implemented application, which presents the received data. The blue squares represent lit photo transistors, and the white squares represent non-lit photo transistors.

Table 1. Number of template data samples and size of each object
Object     | Number of template data samples | Size [cm²]
Smartphone | 1                               | 14 × 7
Ticket     | 153                             | 5.5 × 3
Key        | 112                             | 6.2 × 2

Figure 9. Examples of screens of the application: (a) smartphone; (b) ticket; (c) key; (d) unknown object

The template data, which are obtained as binary patterns of 0s and 1s when each object is in the pocket, are saved in CSV format. Additionally, they are obtained with consideration of the rotation of objects and mainly at the bottom of the pocket. Loading them in advance, the system detects the objects in the pocket by template matching, which is performed at approximately 1 kHz. Note that, as shown in Figure 9(a), there is only one template for the smartphone because it always covers all the photo transistors.
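As a rough illustration of this storage format, the sketch below loads such 0/1 patterns from CSV files into the per-object template lists that the matching step uses. The directory layout and file naming are assumptions made for this example; the paper does not describe how the CSV files are organized.

```python
# Sketch: load binary 7x8 templates saved as CSV, one file per template.
# The layout "templates/<object>/<n>.csv" is a made-up convention.

import csv
from pathlib import Path

def load_templates(root="templates"):
    """Return {object_label: [template, ...]}, each template a list of 0/1 rows."""
    templates = {}
    for obj_dir in Path(root).iterdir():
        if not obj_dir.is_dir():
            continue
        patterns = []
        for csv_file in sorted(obj_dir.glob("*.csv")):
            with open(csv_file, newline="") as f:
                patterns.append([[int(cell) for cell in row] for row in csv.reader(f)])
        templates[obj_dir.name] = patterns
    return templates
```

A per-object dictionary of this shape matches the classify sketch given earlier.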

EVALUATION

Evaluation of object recognition

We put an object in the pocket 100 times and then evaluated the results of the system's classification. In this experiment, we used three registered objects (a smartphone, ticket, and key) and two unregistered objects (a lip balm and a hand).

Table 2 shows the results of putting a registered object into the pocket. The accuracy shown in Table 2 was calculated on the basis of true positives. As this table shows, registered objects were mostly recognized correctly; the recognition rate was 91% on average. While the smartphone and ticket were mostly identified correctly, the key had lower accuracy. This is because the template data for the key were not sufficient around the top of the pocket, where the key was sometimes placed. However, when the key was put at the bottom of the pocket, our system identified it with a high degree of accuracy. Table 3 shows the results of putting an unregistered object into the pocket. The recognition rate shown in Table 3 was calculated on the basis of true negatives. Our system classified an unregistered object such as the lip balm correctly 96% of the time, because its size and shape differed from those of the registered objects.

In addition, Table 4 shows the results of putting the user's hand in the pocket. The hand was also correctly identified as an unknown object. This is attributed to the fact that the hand did not cover the lower positions of the grid of infrared sensors, whereas the registered objects mostly covered the lower positions. When the hand covered all the infrared sensors or took the same shape as a registered object, the system mistakenly identified it as a smartphone or ticket.

Additionally, we evaluated the accuracy of simultaneously putting two registered objects in the pocket, as shown in Table 5. When we put in two registered objects (a ticket + key), our system identified them as a ticket or key with 71% accuracy. As these results show, when multiple registered objects were added, the system mostly classified them as the larger one or as an unknown object. It follows that multiple object identification is one of the challenges for our system.

Table 2. Accuracy with registered objects [%]
Object     | Smartphone | Ticket | Key | Unknown object | Accuracy [%]
Smartphone | 100        | 0      | 0   | 0              | 100
Ticket     | 0          | 91     | 3   | 6              | 91
Key        | 0          | 2      | 82  | 17             | 82

Table 3. Accuracy with unregistered objects [%]
Object   | Smartphone | Ticket | Key | Unknown object | Accuracy [%]
Lip balm | 0          | 2      | 2   | 96             | 96

Table 4. Accuracy with user's hand put in the pocket [%]
Object | Smartphone | Ticket | Key | Unknown object | Accuracy [%]
Hand   | 1          | 1      | 0   | 98             | 98

Table 5. Accuracy with multiple registered objects [%]
Object       | Smartphone | Ticket | Key | Unknown object | Accuracy [%]
Key + Ticket | 0          | 67     | 4   | 29             | 71

Action scenario

To determine the practicality of our method in daily life, we evaluated actions for the scenario shown in Table 6. We assumed the situations of staying at home, going out, riding on trains, and returning home. Simultaneously, we recorded video with a wearable camera during the experiment. Figure 10 shows a photograph of the test user, the first author of this paper, who wore both the prototype device and the wearable camera. We picked up some scenes from the stored video by checking the time stamps obtained from our system. The time stamps were stored every second. It took approximately 40 minutes to record the video. In this experiment, we used three registered objects (a smartphone, ticket, and key).

Table 6. Action scenarios
Context:
Staying at home
Going out
Putting a smartphone in the pocket
Using a smartphone
Putting a smartphone in the pocket
Using a smartphone
Riding on a train
Putting a smartphone in the pocket
Using a smartphone
Getting off the train
Putting a smartphone in the pocket
Riding on a train
Using a smartphone
Getting off the train
Getting home

Figure 10. User wearing our device and wearable camera

Figure 11. Scenario results

Figure 11 shows the recognition results. As this figure shows, the context was usually recognized correctly. However, when we put the objects into or took them out of the pocket, our system mistakenly recognized them as unknown objects. We assumed that this was the transition time caused by putting them into or taking them out of the pocket. This transition time was approximately two seconds. To remove the transition time, we should redesign the algorithm to determine the object only when the system recognizes a particular object for more than two seconds.
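One way to realize this two-second rule is a small debouncing filter on top of the per-frame classification: a label is written to the life log only after it has persisted for the holding time. The sketch below is our own Python illustration under that assumption; the class name, holding-time handling, and time source are not taken from the paper.

```python
# Sketch: suppress the roughly 2-second transition time by logging an object
# only after the per-frame classification has been stable for HOLD_SECONDS.

import time

HOLD_SECONDS = 2.0   # holding time suggested in the text above

class DebouncedLogger:
    def __init__(self, hold=HOLD_SECONDS):
        self.hold = hold
        self.candidate = None    # label currently being observed
        self.since = None        # when the candidate first appeared
        self.logged = None       # last label committed to the life log

    def update(self, label, now=None):
        """Feed one per-frame label; return a (timestamp, label) log entry or None."""
        now = time.time() if now is None else now
        if label != self.candidate:
            self.candidate, self.since = label, now
            return None
        if self.logged != label and now - self.since >= self.hold:
            self.logged = label
            return (now, label)   # append this pair to the life log
        return None
```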

Table 7. Accuracy in action scenarios [%]
Object     | Smartphone | Ticket | Key | Unknown object | Accuracy [%]
Smartphone | 5          | 0      | 0   | 0              | 100
Ticket     | 1          | 4      | 1   | 1              | 57
Key        | 0          | 1      | 1   | 2              | 25

Table 7 shows the accuracy on the basis of true positives. As this table shows, while our system could identify larger objects correctly, smaller objects were sometimes mistakenly recognized. This is because larger objects interfered with the infrared light more often than smaller objects. These results again indicate that multiple object recognition is one of the challenges of our system. Furthermore, even when an object did not block the infrared light sufficiently, our system sometimes did not work correctly. The reason, as described in Experiment 1, is that the key was sometimes not put at the bottom of the pocket, and the templates for keys were not sufficient around the top. To recognize objects correctly more often, we should increase the number of templates for shallow positions in the pocket or increase the density of the grid of infrared sensors.

Figure 12 shows scenes picked from the stored video by checking the obtained time stamps. Our system picked up important scenes in which the user interacted with objects in a typical day. It reminds us of these scenes/objects when the user went out (Figure 12(a)) and got home (Figure 12(f)). The smartphone was taken out when the user wanted to listen to music (Figure 12(b)), to write important memos (Figure 12(c)), and to pass the time while waiting for the traffic light to turn green (Figure 12(e)). With our system's time stamps, the user could easily remember what he took on that day and when he used the items. It follows that the information on objects in our pockets is worth logging. Moreover, because our system can detect and track objects, it can notify us when objects are dropped from pockets.

Figure 12. Scenes obtained from our system: (a) going out; (b) using a smartphone to listen to songs; (c) using a smartphone to write a memo; (d) riding on a train; (e) waiting; (f) getting home

CONCLUSION

In this research, we designed and implemented a pocket-based life log system focusing on putting objects into or taking them out of a pocket and on indexing the important events in a recorded video. When one registered object (a smartphone, ticket, or key) was put in the pocket, our system recognized the object correctly 91% of the time on average. When multiple registered objects (e.g., a key + ticket) were put into the pocket, the average accuracy was 71%. We also evaluated the recognition in an action scenario and confirmed the effectiveness of our method. For future work, we plan to design additional types of data loggers and to improve the recognition accuracy. As mentioned in Section 3, the data logger implemented in this study requires a wired connection with a PC, so we intend to implement a stand-alone system to solve this problem. Moreover, our current system has to use specially made trouser pockets because of the size of the recognition device; therefore, we should create a smaller recognition device that can be used with ordinary pockets, as described in Section 3. As for the recognition, our system identified three registered objects (a smartphone, key, and ticket). However, our system misrecognized objects while we were putting them into or taking them out of the pocket in Experiment 2; this usually took less than two seconds. Therefore, our system should be modified to determine the object only when the system recognizes a particular object for more than two seconds. Moreover, the current system cannot reliably identify multiple objects that are in the pocket simultaneously or objects that are the same size. To solve this problem, we need to increase the density of the photo transistors or use color sensors to identify objects based on color differences.

REFERENCES

1. K. Aizawa, D. Tancharoen, S. Kawasaki and T. Yamasaki: Efficient Retrieval of Life Log Based on Context and Content, Proc. of the 1st ACM Workshop on Continuous Archival and Retrieval of Personal Experiences (CARPE'04), pp. 22–31 (Oct. 2004).
2. H. Watanabe, T. Terada, and M. Tsukamoto: A Method for Embedding Context to Sound-based Life Log, Journal of Information Processing, Vol. 22, No. 4, pp. 651–659 (Oct. 2014).
3. K. Fukumoto, T. Terada, and M. Tsukamoto: A Smile/Laughter Recognition Mechanism for Smile-based Life Logging, Proc. of the 4th Augmented Human International Conference (AH'13), pp. 213–220 (Mar. 2013).
4. T. Ueoka, T. Kawamura, Y. Kono and M. Kidode: I'm Here!: A Wearable Object Remembrance Support System, Proc. of the 5th International Conference on Human-Computer Interaction with Mobile Devices and Services (Mobile HCI'03), pp. 422–427 (Sep. 2003).
5. Y. Makino, M. Murao, and T. Maeno: Life Log System Based on Tactile Sound, Haptics: Generating and Perceiving Tangible Sensations, Vol. 6191, pp. 292–297 (July 2010).
6. T. Kawamura, T. Fukuhara, H. Takeda, Y. Kono and M. Kidode: Ubiquitous Memories: A Memory Externalization System Using Physical Objects, Personal and Ubiquitous Computing, Vol. 11, No. 4, pp. 287–298 (Apr. 2007).
7. A. Minamikawa, N. Kotsuka, M. Honjo, D. Morikawa, S. Nishiyama and M. Ohashi: RFID Supplement for Mobile-Based Life Log System, Proc. of the 6th International Symposium on Applications and the Internet Workshops (SAINTW'06), p. 50 (Jan. 2007).
8. V. Bush: As We May Think, The Atlantic Monthly, Vol. 176, No. 1, pp. 101–108 (July 1945).
9. J. Gemmell, G. Bell and R. Lueder: MyLifeBits: A Personal Database for Everything, Communications of the ACM, Vol. 49, No. 1, pp. 88–95 (Jan. 2006).
10. T. Terada, Y. Kobayashi and M. Tsukamoto: A Context Aware System Based on Scent, Proc. of the 15th International Symposium on Wearable Computers (ISWC'11), pp. 47–50 (June 2011).
11. F. Wahl and O. Amft: Using RFID Tags As Reference for Phone Location and Orientation in Daily Life, Proc. of the 4th Augmented Human International Conference (AH'13), pp. 194–197 (Mar. 2013).
12. G. Kakehi, Y. Sugiura, A. Withana, C. Lee, N. Nagaya, D. Sakamoto, M. Sugimoto, M. Inami, and T. Igarashi: FuwaFuwa: Detecting Shape Deformation on Soft Objects Using Directional Photoreflectivity Measurement, Proc. of the 38th International Conference and Exhibition on Computer Graphics and Interactive Techniques (SIGGRAPH'11), No. 5, p. 1 (Aug. 2011).
13. J. Moeller and A. Kerne: ZeroTouch: An Optical Multi-Touch and Free-Air Interaction Architecture, Proc. of the 30th ACM Conference on Human Factors in Computing Systems (CHI'12), pp. 2165–2174 (May 2012).
14. Y. Mizoguchi, K. Tadakuma, H. Hasegawa, A. Ming, M. Ishikawa, and M. Shimojo: Development of Intelligent Robot Hand Using Proximity, Contact and Slip Sensing, Proc. of the Society of Instrument and Control Engineers (SICE'10), Vol. 46, No. 10, pp. 632–640 (Oct. 2010).
15. A. Rosenfeld: Picture Processing by Computer, ACM Computing Surveys, Vol. 1, No. 3, pp. 147–176 (Sept. 1969).