Advanced Navigation Techniques for Spatial Information Using Whole Body Motion

Johannes Schöning, Florian Daiber, Antonio Krüger

Institute for Geoinformatics, University of Münster
Weseler Str. 253, 48151 Münster, Germany

[email protected], [email protected], [email protected]

ABSTRACT
For roughly 6,000 years humans have used maps to navigate through space and to solve other spatial tasks. For nearly all of that time, maps were drawn or printed on a medium of fixed size, such as paper, stone, or papyrus (see Figure 1). The two most common interaction methods were therefore (i) holding the map in both hands or (ii) placing the map on a tabletop and performing specific tasks on it (see Figure 1). Nowadays maps can be displayed on a wide range of electronic devices, from small-screen mobile devices to large, highly interactive multi-touch screens. In this paper we highlight the possibilities of navigating through maps using physical (whole-body) gestures. We describe two different systems that allow navigating through spatial information with the whole body. In the first, the map is displayed on a mobile device and the user interacts by moving the device over a paper map. In the second, the user navigates on a large-scale multi-touch wall using multi-touch hand gestures in combination with foot input. We concentrate on the second application and show how typical basic navigation tasks within a Geographic Information System (GIS) can be performed. Recent developments in interactive surfaces that enable the construction of low-cost multi-touch displays, together with relatively cheap sensor technology for detecting foot gestures, allow a deep exploration of these input modalities for GIS users with medium or low expertise. Combining multi-touch hand and foot interaction has several advantages and also prompts a rethinking of the roles of the dominant and non-dominant hand. In pure multi-touch hand interaction systems, the non-dominant hand often sets the reference frame that determines the navigation mode, while the dominant hand carries out the precise task. Since one touch is then used only to define a mode, the advantages of multi-touch are not fully exploited. Foot gestures, in contrast, can provide continuous input for spatial navigation tasks (such as panning or tilting), which is more difficult to achieve with the hands in a natural way.

1. INTRODUCTION
Maps are turning from static information products printed on paper into dynamic representations of space displayed on various kinds of interactive electronic devices (such as mobile devices or large-scale interactive displays). Paper maps are still superior to their digital counterparts in some respects: they provide high resolution with zero power consumption. Because of these advantages of paper maps and the drawbacks of visualizing maps on mobile devices (small display, low resolution), the combination of mobile devices with external paper maps is an active research area. With the Wikeye [3] application, the user can sweep the mobile device over a paper map and augment it with additional (personal or dynamic) information (see Figure 2). We showed in [7] that this physical movement of the user's arm over the map significantly helped users to perform the task; small physical movements of the arm holding the mobile device can thus significantly improve the interaction. Following this interaction paradigm, we illustrate in the following how multi-touch interaction in combination with foot input can be used to perform spatial tasks. The remainder of this paper is structured as follows: Section 2 describes related work in the field of multi-touch interaction, Section 3 shows how users can perform spatial tasks using their whole body, and the final section provides some concluding remarks.
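At its core, the magic-lens interaction is a mapping from the device's tracked position over the paper map to geographic coordinates, which are then used to retrieve overlay content. The following is a minimal sketch of that idea, not the actual Wikeye implementation; the affine georeference, the lens radius, and the article data are illustrative assumptions.

# Minimal magic-lens sketch (not the Wikeye implementation). Assumes
# the device position over the paper map is already tracked in map
# pixels, and that the map is north-up with a simple affine georeference.

def pixel_to_geo(px, py, origin_lon, origin_lat, deg_per_px_x, deg_per_px_y):
    """Convert a tracked pixel position on the paper map to lon/lat."""
    lon = origin_lon + px * deg_per_px_x
    lat = origin_lat - py * deg_per_px_y  # pixel y grows downwards
    return lon, lat

def lens_content(px, py, georef, articles):
    """Return the georeferenced articles that fall inside the lens
    footprint centred on the device position (hypothetical data)."""
    lon, lat = pixel_to_geo(px, py, *georef)
    radius = 0.01  # lens footprint in degrees, an arbitrary choice
    return [a for a in articles
            if abs(a["lon"] - lon) < radius and abs(a["lat"] - lat) < radius]

# Example: the lens held over pixel (260, 370) of a map of Münster.
georef = (7.5, 52.0, 1e-4, 1e-4)  # made-up origin and scale
articles = [{"title": "Münster Cathedral", "lon": 7.526, "lat": 51.963}]
print(lens_content(260, 370, georef, articles))

The same lookup runs on every tracked position update, so the overlay changes continuously as the device sweeps across the paper map.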

2. RELATED WORK
Multi-touch has great potential for exploring complex content in an easy and natural manner. In general, tangible user interfaces (TUIs) give physical form to digital information and computation. People have developed sophisticated skills for sensing and manipulating their physical environments [9], yet most of these skills are not employed in interaction with the digital world today. Multi-touch interaction with computationally enhanced surfaces has received considerable attention in the last few years.

Fig. 1: Interaction methods with paper maps: (i) Caesar inspecting a map from above, (ii) Napoleon analyzing the battle of Waterloo, (iii) mountaineers using a paper map, (iv) pupils trying to map objects. The interaction schema has stayed nearly constant for roughly 6,000 years. Pictures from different sources (© i: http://static.flickr.com/33/65793995_de849967d0.jpg, ii: http://www.greenchameleon.com, iii: www.bergsteigen.at, iv: http://www.britarch.ac.uk).

Figure 2: A user navigating spatial information by using a mobile device as a magic lens over a paper map.

The rediscovery of the Frustrated Total Internal Reflection (FTIR) principle [2], which allows such surfaces to be built at low cost, has quickly pushed forward the development of new multi-touch applications. Some designers of these applications use the geospatial domain to demonstrate the viability of their approaches. This domain provides a rich and interesting testbed for multi-touch applications because the command and control of geographic space (at different scales), as well as the selection, modification, and annotation of geospatial data, are complicated tasks with a high potential to benefit from novel interaction paradigms [10]. These are central tasks in a GIS (as in any interactive system) [4, 11]. One particular problem of conventional Geographic Information Systems is the low expressiveness and significant ambiguity of single pointing gestures (as provided by the traditional WIMP paradigm [5]) on geospatial data. GIS often maintain data organized in different overlapping layers, each containing the information of one spatial feature class (e.g. a streets layer, a regional-boundaries layer, a national-boundaries layer, etc.). A single pointing gesture is therefore often not enough to precisely identify the intended object of a selection. If one also considers the multitude of operational commands (such as topological operations, annotations, or geometric modifications) supported by a GIS, it is clear that such systems can only be operated by expert users, even if temporal information on the pointing gesture is available [1].
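The ambiguity can be made concrete with a toy example: a single point query against overlapping layers typically returns one candidate feature per layer, so the system cannot tell which feature the user meant. The layer contents and the bounding-box hit test below are hypothetical simplifications, not part of any particular GIS.

# Toy illustration of single-point ambiguity on layered geospatial data.

from dataclasses import dataclass

@dataclass
class Feature:
    layer: str
    name: str
    bbox: tuple  # (min_x, min_y, max_x, max_y) in map units

LAYERS = [
    Feature("streets", "Weseler Str.", (10, 10, 90, 14)),
    Feature("regional boundaries", "Münster district", (0, 0, 100, 100)),
    Feature("national boundaries", "Germany", (0, 0, 100, 100)),
]

def point_select(x, y):
    """Return every feature whose geometry contains the point --
    typically more than one, hence the ambiguity."""
    return [f for f in LAYERS
            if f.bbox[0] <= x <= f.bbox[2] and f.bbox[1] <= y <= f.bbox[3]]

# A single click at (50, 12) hits a feature in all three layers; without
# further dialogue the system cannot resolve the intended selection.
for f in point_select(50, 12):
    print(f.layer, "->", f.name)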

3. NAVIGATING THROUGH SPATIAL INFORMATION USING THE WHOLE BODY
Multi-touch gestures provide much more information than a single pointing gesture: they allow the user to explore multiple regions of contact and their temporal change with respect to each other, increasing the expressiveness of the interaction [8]. Combining hand and foot gestures has several advantages. Hand gestures are good for precise input of punctual and regional information [6]. It is, however, more difficult to input continuous data with one or several hands over a longer time. For example, panning a map on a multi-touch wall is usually performed with a "wiping" gesture. This causes problems when panning over larger distances, since the hand moves across the surface and, on reaching its physical border, has to be lifted, repositioned, and moved again. In contrast, foot interaction can be provided continuously by simply shifting the body weight onto the respective foot. Since the feet are used to navigate in real space, such a foot gesture has the additional advantage of being more intuitive, in the sense that it borrows from a striking metaphor.

Figure 3: A user navigating through spatial information with the whole body.

Motivated by these observations, we developed a method in which users navigate through spatial data with their hands and with their feet, shifting their weight over their feet on a Wii Balance Board (http://e3nin.nintendo.com/wii_fit.html), as shown in Figure 3. Tilting is performed with the feet alone, while two-handed gestures are used for the tasks they suit better, such as zooming or region selection. A video (http://www.youtube.com/watch?v=riFw0kKoinU) shows the application in use.
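How a weight shift can drive continuous navigation is sketched below. This is a minimal illustration under stated assumptions, not the prototype's actual implementation: it assumes a driver that reports the Balance Board's four corner load-cell values, and the dead-zone and speed constants are invented for the example.

def centre_of_pressure(tl, tr, bl, br):
    """Normalised centre of pressure in [-1, 1] per axis, computed from
    the four corner sensor loads in kg (hypothetical driver output)."""
    total = tl + tr + bl + br
    if total <= 0:
        return 0.0, 0.0  # nobody is standing on the board
    cx = ((tr + br) - (tl + bl)) / total  # lean right   -> positive x
    cy = ((tl + tr) - (bl + br)) / total  # lean forward -> positive y
    return cx, cy

DEAD_ZONE = 0.1    # ignore small, unintentional sway (assumed value)
MAX_SPEED = 500.0  # map units per second at full lean (assumed value)

def pan_velocity(cx, cy):
    """Map the centre of pressure to a continuous pan velocity: the user
    keeps panning simply by keeping their weight shifted, with no
    re-clutching as in a hand "wiping" gesture."""
    def axis(v):
        if abs(v) < DEAD_ZONE:
            return 0.0
        sign = 1.0 if v > 0 else -1.0
        return MAX_SPEED * (v - sign * DEAD_ZONE) / (1.0 - DEAD_ZONE)
    return axis(cx), axis(cy)

# Example: leaning slightly to the right while standing almost centred
# front-to-back yields a rightward pan and no vertical movement.
cx, cy = centre_of_pressure(tl=18.0, tr=24.0, bl=16.0, br=22.0)
print(pan_velocity(cx, cy))

The same centre-of-pressure signal could drive tilting instead of panning; the essential point is that leaning supplies a rate-controlled, continuous input channel that leaves both hands free for precise touch gestures.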

4. CONCLUSION
We have presented a first concept and implementation of combined multi-touch hand and foot interaction, exploiting the advantages of both modalities to overcome interaction problems with spatial data. Hand gestures are well suited for precise input; foot interaction is well suited for continuous input over a longer time period, for example when panning for a long time. More generally, foot interaction provides a horizontal interaction plane orthogonal to the vertical multi-touch surface. Whole-body interaction for solving spatial tasks still needs further exploration, but we believe it has huge potential for interacting with spatial data, and even with more abstract visualizations that use a 3D space to organize data.

5. REFERENCES

[1] Florence, J., Hornsby, K., and Egenhofer, M. (1996). The GIS wallboard: Interactions with spatial information on large-scale displays. International Symposium on Spatial Data Handling, 7:449–463.

[2] Han, J. Y. (2005). Low-cost multi-touch sensing through frustrated total internal reflection. UIST '05: Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA, pages 115–118.

[3] Hecht, B., Rohs, M., Schöning, J., and Krüger, A. (2007). Wikeye – Using Magic Lenses to Explore Georeferenced Wikipedia Content. Pervasive 2007: Workshop on Pervasive Mobile Interaction Devices (PERMID).

[4] MacEachren, A., and Brewer, I. (2004). Developing a conceptual framework for visually-enabled geocollaboration. International Journal of Geographical Information Science, 18(1):1–34.

[5] Myers, B. A. (1988). A taxonomy of window manager user interfaces. IEEE Computer Graphics and Applications, 8(5):65–84.

[6] Pakkanen, T., and Raisamo, R. (2004). Appropriateness of foot interaction for non-accurate spatial tasks. Conference on Human Factors in Computing Systems, pages 1123–1126.

[7] Rohs, M., Schöning, J., Raubal, M., Essl, G., and Krüger, A. (2007). Map Navigation with Mobile Devices: Virtual versus Physical Movement with and without Visual Context. Proceedings of the 9th International Conference on Multimodal Interfaces (ICMI), pages 146–153.

[8] Schöning, J., Hecht, B., Raubal, M., Krüger, A., Marsh, M., and Rohs, M. (2008). Improving Interaction with Virtual Globes through Spatial Thinking: Helping Users Ask "Why?". IUI 2008: Proceedings of the International Conference on Intelligent User Interfaces, New York, ACM.

[9] Ullmer, B., and Ishii, H. (2000). Emerging frameworks for tangible user interfaces. IBM Systems Journal, 39(3):915–931.

[10] UNIGIS (1998). Guidelines for Best Practice in User Interface for GIS. ESPRIT/ESSI project no. 21580.

[11] Wasinger, R., Stahl, C., and Krüger, A. (2003). M3I in a Pedestrian Navigation & Exploration System. Human-Computer Interaction with Mobile Devices and Services: 5th International Symposium, Mobile HCI 2003, Udine, Italy, September 8–11.