Towards Using Embedded Magnetic Field Sensor for Around Mobile Device 3D Interaction

Hamed Ketabdar
Quality and Usability Lab, TU Berlin / Deutsche Telekom Laboratories, Ernst-Reuter-Platz 7, 10587 Berlin
[email protected]

Mehran Roshandel
Deutsche Telekom Laboratories, Ernst-Reuter-Platz 7, 10587 Berlin
[email protected]

Kamer Ali Yüksel
TU Berlin, Ernst-Reuter-Platz 7, 10587 Berlin
[email protected]

ABSTRACT
We present a new technique based on the embedded compass (magnetic field) sensor for efficient use of the 3D space around a mobile device for interaction with the device. Around Device Interaction (ADI) enables extending the interaction space of small mobile and tangible devices beyond their physical boundary. Our proposed method uses the compass (magnetic field) sensor integrated in new mobile devices (e.g. iPhone 3GS, G1/2 Android). A properly shaped permanent magnet (e.g. a rod, pen or ring) is used for interaction: the user makes coarse gestures in the 3D space around the device using the magnet. Movement of the magnet affects the magnetic field sensed by the compass sensor integrated in the device. The temporal pattern of the gesture is then used as a basis for sending different interaction commands to the mobile device. The proposed method does not impose changes in the hardware and physical specifications of the mobile device, and, unlike optical methods, is not limited by occlusion problems. It therefore allows efficient use of the 3D space around the device, including the back of the device. Zooming, turning pages, accepting/rejecting calls, clicking items, controlling a music player, and mobile game interaction are some example use cases. An initial evaluation of our algorithm, using a prototype application developed for the iPhone, shows convincing gesture classification results.

Categories and Subject Descriptors
I.5.4 [Pattern Recognition]: Applications – Signal processing.

General Terms
Algorithms

Keywords
Embedded Compass (Magnetic) Sensor, Mobile Devices, Around Device 3D Interaction, Properly Shaped Magnet, Movement-Based Gestures.

1. INTRODUCTION: AROUND DEVICE INTERACTION

Copyright is held by the author/owner(s). MobileHCI 2010, September 7-10, 2010, Lisboa, Portugal. ACM 978-1-60558-835-3.

Around Device Interaction (ADI) is being increasingly investigated as an efficient interaction technique for mobile and tangible devices. ADI provides the possibility of extending the interaction space of small mobile devices beyond their physical boundary, allowing effective use of the 3D space around the device for interaction. This can be especially useful for small tangible/wearable mobile or controller devices such as mobile phones, wrist watches and headsets, in which it is extremely difficult to operate small buttons and touch screens. The space beyond the device, however, can be easily used, no matter how small the device is. ADI can also be very useful for interaction when the device screen is not in the user's line of sight. ADI techniques can be combined with other interaction methods, such as keyboard or touch, to provide more advanced interaction possibilities. The ADI concept allows coarse movement-based gestures made in the 3D space around the device to be used for sending different interaction commands, such as turning pages (in an e-book or calendar), controlling a portable music player (changing the sound volume or music track), zooming, rotation, etc. On a mobile phone, it can also be used for dealing with incoming calls, for instance accepting, rejecting or diverting a call. ADI techniques are based on different sensory inputs such as a camera [1], infrared distance sensors [2, 3, 4, 5], a touch screen at the back of the device [6], a proximity sensor [7], electric field sensing [8, 9], etc.

Some new mobile devices (phones), such as the Apple iPhone 3GS and Google Android phones, are equipped with a compass (magnetic field) sensor. In this work, we propose ADI based on interaction with the compass (magnetic) sensor integrated in these mobile devices, using a properly shaped magnetic material. The user takes the magnet, which can be in the shape of a rod, pen or ring, in hand, and draws coarse gestures in the 3D space around the device (Fig. 1). These gestures can be interpreted as different interaction commands by the device.
This new interaction method can overcome some shortcomings of state-of-the-art techniques, such as the occlusion problems of optical methods. In addition, as the sensor is already integrated in mobile phones (devices), it does not impose changes in the hardware and physical specifications of mobile devices, and only uses a properly shaped magnet as an external accessory. Since the magnetic field can pass through many different materials, the interaction is possible even if the device is, for instance, in a bag or pocket. Such an approach opens up a new and effective way to interact with mobile devices (phones) and mobile games, as well as to authenticate users based on magnetic gestures.

The paper is organized as follows: Section 2 describes the idea behind our approach for ADI in more detail, and compares it with some state-of-the-art approaches. Section 3 explains feature extraction and gesture classification. Section 4 presents initial experiments and results. An implementation of the proposed approach on the Apple iPhone is introduced in Section 5, and Section 6 provides conclusions and directions for future work.

2. OUR APPROACH: INTERACTION WITH MOBILE DEVICES BASED ON MAGNETIC FIELD SENSOR
In this work, we demonstrate our initial investigations towards using the compass (magnetic field) sensor integrated in mobile devices (e.g. iPhone 3GS, G1 Android) for ADI. In our approach, we use a regular magnetic material in a proper shape to be taken in hand (e.g. a rod, pen or ring) to influence the compass (magnetic) sensor with different movement-based gestures, and hence interact with the device. In the Introduction, we briefly mentioned a few methods for ADI. Compared to camera-based techniques, getting useful information from a magnetic sensor is algorithmically much simpler than implementing computer vision techniques. Our method does not impose major changes in the hardware specifications of mobile devices or require installing optical sensors (e.g. on the front, back or edges of the device). It is only based on a magnetic sensor which is already embedded in some new mobile devices (phones). Optical sensor installation occupies physical space, which can be critical in small devices. In our method, for mobile devices such as the iPhone and G1/2 Android, it is only necessary to have a properly shaped magnet as an extra accessory. Our approach also does not suffer from illumination variation and occlusion problems. Optical interaction techniques can be limited when the sensor is occluded by an object, including the body of the user. In our proposed method, the interaction is based on a magnetic field which can pass through many different materials. Considering that the back of a mobile device is usually covered by the hand, optical ADI techniques (e.g. camera and infrared based) can face difficulties using the space at the back of the device. However, since the interaction in our method is based on a magnetic field (which can pass through the hand), the space at the back of the device can be efficiently used for interaction (Fig. 2). Additionally, the user can interact with the mobile device even if the device is not in the line of sight, or is covered (e.g. a mobile device in a pocket or bag). For instance, the user may decide to accept or reject a call, or change a music track, without taking the phone out of his pocket/bag (Fig. 3). This interaction can be used in different applications such as turning pages, zooming, reacting to a call alert, and music players. We have built a demonstrator called "MagiTact" based on this concept, which is presented in Section 5. In addition to regular interaction options, our technique can be used for efficient interaction with games, as well as for a new user authentication technique based on signature-shaped gestures.

Figure 1. Interaction with a mobile phone using the space around the phone based on change in magnetic field.

3. GESTURE RECOGNITION BASED ON MAGNETIC FIELD
The gestures are created by moving the magnet (a rod or ring) by hand in the space around the device along different 3D trajectories. The gestures studied in this work are mainly based on movement of the magnet at different positions around the device, in different directions, with different periodicities. Fig. 4 shows the gestures used in this study. The rod-shaped magnet is installed in a pen. We have used an iPhone 3GS as the mobile device for our studies.

The embedded compass (magnetic) sensor provides a measure of magnetic field strength along the x, y, and z directions. The values change over a range of -128 to 128. The first step in processing gestures is detecting their beginning and end. This is achieved by comparing the Euclidean norm of the magnetic field strength against a pre-defined threshold.
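The thresholding step just described can be sketched as follows. This is a minimal sketch in Python; the function name, the list-of-(x, y, z)-tuples input format, and the threshold value are our assumptions, since the paper does not report the actual threshold or sampling details:

```python
import math

def segment_gestures(samples, threshold=140.0):
    """Detect gesture boundaries by thresholding the field-strength norm.

    `samples` is a list of (x, y, z) magnetometer readings; `threshold`
    is a hypothetical value, not taken from the paper.  Returns a list of
    (start, end) index pairs for contiguous above-threshold runs.
    """
    segments = []
    start = None
    for i, (x, y, z) in enumerate(samples):
        norm = math.sqrt(x * x + y * y + z * z)  # Euclidean norm of the field
        if norm > threshold:
            if start is None:
                start = i                       # gesture begins here
        elif start is not None:
            segments.append((start, i))         # gesture ended at previous sample
            start = None
    if start is not None:                       # gesture still active at end of stream
        segments.append((start, len(samples)))
    return segments
```

In practice, the idle readings reflect the Earth's magnetic field, so a real implementation would likely threshold the deviation from a calibrated baseline rather than the raw norm.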

3.1 Feature Extraction
The next step in gesture processing is feature extraction. Feature extraction preserves information which can discriminate between gestures and removes redundant information. All features are extracted over the samples in an interval marked by the beginning and end of the gesture. This interval is divided into two equal-length windows, and a feature vector is extracted for each window. The two feature vectors are then concatenated to form a new feature vector to be used for gesture classification. Dividing the gesture interval into multiple windows allows capturing the temporal pattern of the gesture in a more detailed way. The features we have used are mainly based on the average or variance of magnetic field strength in different directions, as well as the piecewise correlation between field strength in different directions. The features used in this study are listed in the following:
• Average field strength along the x, y, and z directions (3 features)
• Variance of field strength along the x, y, and z directions (3 features)
• Average of the Euclidean norm of field strength along x, y, and z (1 feature)
• Variance of the Euclidean norm of field strength along x, y, and z (1 feature)
• Piecewise correlation between field strength along x and y, x and z, and y and z (3 features)

These features form an 11-element feature vector for each window. The two window feature vectors are then concatenated to form a 22-element feature vector for each gesture.
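As an illustration, the 11-per-window, 22-per-gesture feature vector described above can be computed roughly as follows. This is a sketch in Python; the function names, and the interpretation of "piecewise correlation" as plain Pearson correlation (with 0.0 for a constant axis), are our assumptions rather than details taken from the paper:

```python
import math
import statistics

def _corr(a, b):
    # Pearson correlation; returns 0.0 when an axis is constant
    # (our convention; the paper does not specify this edge case).
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb) if va > 0 and vb > 0 else 0.0

def window_features(window):
    """The 11 features of Section 3.1 for one window of (x, y, z) samples."""
    xs, ys, zs = (list(axis) for axis in zip(*window))
    norms = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    return [
        statistics.mean(xs), statistics.mean(ys), statistics.mean(zs),                 # 3 averages
        statistics.pvariance(xs), statistics.pvariance(ys), statistics.pvariance(zs),  # 3 variances
        statistics.mean(norms),       # average of the Euclidean norm
        statistics.pvariance(norms),  # variance of the Euclidean norm
        _corr(xs, ys), _corr(xs, zs), _corr(ys, zs),  # 3 pairwise correlations
    ]

def gesture_features(samples):
    """Split a gesture into two equal windows and concatenate their features."""
    mid = len(samples) // 2
    return window_features(samples[:mid]) + window_features(samples[mid:])
```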

3.2 Gesture Classification
The extracted feature vector is used as input to machine learning algorithms for gesture classification. We have studied two different classifiers: a Multi-Layer Perceptron (MLP) [10] and a Binary Decision Tree [11]. A Multi-Layer Perceptron is an Artificial Neural Network which can realize an arbitrary set of decision regions in the input feature space. The feature vectors are used to train the MLP. During testing, a feature vector is presented at the MLP input. The MLP estimates the posterior probability of the different gesture classes at its outputs (each MLP output is associated with one gesture class). The gesture class with the highest posterior probability is selected as the recognition output. A Binary Decision Tree is a logical model, represented as a binary (two-way split) tree, that shows how the value of a target variable (the gesture class) can be predicted using the values of a set of predictor variables (the features).

Figure 2. Back of device interaction based on magnetic field.

Figure 3. Interaction with a mobile device (for instance dealing with incoming calls) is possible even if the device is in a bag or pocket.

4. EXPERIMENTS AND RESULTS
We have set up gesture recognition experiments in order to have an initial evaluation of our method for interaction. We have used 8 gestures for the experiments. Figure 4 shows the gestures used for the experiment. The gestures are selected in a way that they have different variability in movement pattern and usage of the space around the device. We invited 6 test users for the experiments. Each user was asked to repeat each gesture at least 15 times using a rod-shaped magnet. We recorded the signals captured by the magnetic sensor using an application developed for the Apple iPhone 3GS.

Features are extracted from the magnetic signals as described in Section 3.1 and used for classification. We have used a 10-fold cross-validation scheme for managing training and test data. Table 1 shows classification results using the Multi-Layer Perceptron (MLP) and the Binary Decision Tree. As can be seen in the table, the MLP performs better than the Binary Decision Tree, reaching a good accuracy of 91.4% for gesture recognition. Table 2 shows the confusion matrix for the MLP-based results. It shows that the highest level of confusion is between gestures 3 and 6, as well as 1 and 7. Gesture 3 can be similar to gesture 6 (circle) if the right-left trajectory in this gesture is different from the first left-right trajectory. Gesture 7 can be interpreted as a quick repetition (twice) of gesture 1 (double click vs. click). The MLP-based recognition algorithm has also been implemented on the Apple iPhone 3GS, and is able to perform gesture classification in real time. Our studies have shown that reasonably high classification results can be obtained even with a smaller feature set (e.g. only cross-correlation based features) and a very simple classifier (e.g. an MLP with 3, 3, and 8 input, hidden and output nodes, respectively). This can be very important considering the practical aspects of implementing such a system on mobile devices.

Table 1. Gesture classification results for different classifiers

  Multi-Layer Perceptron   Binary Decision Tree
          91.4%                   83.7%

Table 2. Confusion matrix for gesture recognition using the MLP classifier. Rows are the actual gestures entered and columns are the classification results. The numbers in each row are divided by the total number of entered gestures for that class.

  Actual    Classified gesture index
  gesture    1     2     3     4     5     6     7     8
     1      0.89  0.02  0.01  0     0     0     0.08  0
     2      0.01  0.93  0.03  0.01  0.01  0.01  0     0
     3      0.01  0.01  0.86  0.01  0.02  0.08  0.01  0
     4      0     0     0     0.90  0.06  0.04  0     0
     5      0     0.02  0.01  0.03  0.92  0.02  0     0
     6      0     0.03  0.01  0.04  0     0.91  0     0.01
     7      0.02  0     0.03  0     0     0     0.95  0
     8      0     0     0.01  0.01  0     0.02  0     0.96
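To make the train/classify flow of Sections 3.2 and 4 concrete, the following sketch uses a deliberately simpler stand-in model, a nearest-centroid classifier, rather than the MLP or decision tree the paper actually used; it only illustrates mapping per-gesture feature vectors to gesture labels:

```python
import math

class NearestCentroid:
    """Minimal stand-in classifier: assigns a feature vector to the class
    whose mean (centroid) feature vector is closest in Euclidean distance.
    The paper itself used an MLP and a binary decision tree; this simpler
    model only illustrates the train/classify flow."""

    def fit(self, vectors, labels):
        sums, counts = {}, {}
        for v, y in zip(vectors, labels):
            if y not in sums:
                sums[y] = [0.0] * len(v)
                counts[y] = 0
            sums[y] = [a + b for a, b in zip(sums[y], v)]
            counts[y] += 1
        # per-class mean feature vector
        self.centroids = {y: [a / counts[y] for a in sums[y]] for y in sums}
        return self

    def predict(self, v):
        return min(self.centroids,
                   key=lambda y: math.dist(v, self.centroids[y]))
```

In a real reproduction one would instead train an MLP on the recorded 22-element feature vectors and evaluate it with 10-fold cross-validation, as in the paper.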

Figure 4. The different gestures (1-8) used in this study. Gestures 7 and 8 can be interpreted as quick repetitions (twice) of gestures 1 and 3, respectively (double click vs. click).

5. IMPLEMENTATION: "MagiTact"
We have implemented a demo application called "MagiTact" based on the presented method for the Apple iPhone 3GS. The interaction is used to turn pages left-right or up-down in a photo view and a document view application, as well as to zoom a map in and out. Zooming can also be performed using the space at the back of the device, so that the screen does not get occluded. The application also demonstrates rejecting/accepting a call using our interaction method. This functionality is available even when the phone is in a bag or pocket, and facilitates dealing with unexpected calls in inappropriate situations (e.g. in the office or a meeting). In addition, the application demonstrates interacting with a music player to change the sound volume or music track.

6. CONCLUSIONS AND EXTENSION OF THE WORK
In this paper, we studied the use of the magnetic sensor embedded in new smart phones (e.g. the Apple iPhone 3GS and Google Android phones) for interacting with the device by 3D movement-based gestures. We showed that such an interaction is possible using a properly shaped magnet. Experiments and results show high gesture classification accuracy with a relatively simple processing system. The proposed method provides a simple yet effective interaction technique allowing efficient use of the 3D space around the device (including the back of the device), and it is not limited by the occlusion problems of optical methods. Additionally, it does not impose any change in the hardware and physical specifications of new mobile devices, which are already equipped with a compass (magnetic) sensor.

Apart from gesture commands for interacting with the device's user interface, such a technique has good potential for improving mobile games. Gaming applications are being increasingly developed for mobile devices, and more user-friendly interaction techniques are essential for their enhancement. Our method enables an efficient way of using the 3D space around the device for game interactions, and it can also be combined with regular game interaction techniques.

In addition to gaming, the proposed method can be used as a new technique for user authentication. The user can draw a 3D signature with a magnet in the space around the device for authentication. This is what we call a "3D Magnetic Signature". The 3D magnetic signature provides a wider choice for authentication, as it can be drawn flexibly in the 3D space around the device and can consequently be very difficult to replicate. Additionally, unlike a regular signature, no hardcopy of the magnetic signature can be produced, resulting in higher security.

7. ACKNOWLEDGMENTS

The authors would like to thank Sven Kratz and Michael Rohs for helpful discussions. This work was supported by strategic research project “Hierarchical Multimodal Interfaces”.

8. REFERENCES
[1] Starner, T., Auxier, J., Ashbrook, D., and Gandy, M. The gesture pendant: A self-illuminating, wearable, infrared computer vision system for home automation control and medical monitoring. In Proc. of the International Symposium on Wearable Computers, 2000, pp. 87-94.
[2] Kratz, S. and Rohs, M. HoverFlow: Expanding the design space of around-device interaction. In Proc. of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '09), Sep. 2009, Bonn, Germany.
[3] Hinckley, K., Pierce, J., Sinclair, M., and Horvitz, E. Sensing techniques for mobile interaction. In Proc. of UIST '00. ACM, pp. 91-100.
[4] Butler, A., Izadi, S., and Hodges, S. SideSight: Multi-"touch" interaction around small devices. In Proc. of UIST '08. ACM, pp. 201-204.
[5] Howard, B. and Howard, M.G. Ubiquitous Computing Enabled by Optical Reflectance Controller. Whitepaper, Lightglove, Inc. http://lightglove.com/WhitePaper.htm (visited on 25.06.2009).
[6] Baudisch, P. and Chu, G. Back-of-device interaction allows creating very small touch devices. In Proc. of CHI '09.
[7] Metzger, C., Anderson, M., and Starner, T. FreeDigiter: A contact-free device for gesture control. In Proc. of the Eighth International Symposium on Wearable Computers (ISWC '04). IEEE Computer Society, pp. 18-21.
[8] Theremin, L.S. The design of a musical instrument based on cathode relays. Reprinted in Leonardo Music J., No. 6, 1996, pp. 49-50.
[9] Smith, J., White, T., Dodge, C., Paradiso, J., Gershenfeld, N., and Allport, D. Electric field sensing for graphical interfaces. IEEE Comput. Graph. Appl. 18, 3 (May 1998), pp. 54-60.
[10] Minsky, M.L. and Papert, S.A. Perceptrons. MIT Press, Cambridge, MA, 1969.
[11] Akers, S.B. Binary decision diagrams. IEEE Transactions on Computers, C-27(6), pp. 509-516, June 1978.

Contribution statement (30 words): A new technique for efficient use of the 3D space around a mobile device for gesture-based interaction with the device, based on the embedded compass (magnetic) sensor and a properly shaped magnet.
