2014 Specialist Meeting—Spatial Search


Augmenting Intuitive Navigation at Local Scale

RUTH CONROY DALTON
Department of Architecture and the Built Environment
Northumbria University
Email: [email protected]

with

JAKUB KRUKAR
Department of Architecture and the Built Environment
Northumbria University
Email: [email protected]

For a long time, internet search has been performed almost exclusively at a distance, from one’s own desktop computer. When a search contains geographical information, it has been justifiable to provide a map of the entire environment. Such a map typically allows the user to integrate the represented information into his or her geographical survey knowledge. Recently, however, an increasing number of searches are performed from mobile devices, out in the world, and concern places located in close proximity, within the spatial context of the searcher. And yet, in mobile-based navigation, the only substantial advance over desktop-based (or traditional paper-based) representations is the presence of a “You-Are-Here” indicator. The traditional approach of providing a map of the entire relevant area, in order to aid its integration into the searcher’s existing survey knowledge, thus presents an unmerited cognitive processing challenge. In addition, the sole action of looking at a map (not to mention processing it) seems superfluous.

We argue that human-computer interaction for spatial search should (a) be based on the local, egocentric cues intuitively preferred in human spatial behavior, rather than on global, allocentric representations of the entire environment, and (b) conform to the ideas of HINTeractions [1] and calm computing [2], in which the output communicated by the system, and hence its intrusiveness, is minimized [3].

The local context of a large proportion of spatial searches results in a typical use-case scenario in which the searched-for location must be reached on foot (“nearest pub,” “open shop,” “bus stop”). This task can be performed without referring to high-level representations of the environment. Knowing the user’s position in the environment and his or her orientation, it is possible to make use of the visual cues typically relied upon in everyday navigation. These include, but are not limited to, street width and the line-of-sight lengths of alternative spatial choices. Instead of suggesting that the user “turn right after 150 meters,” pre-computing the map (i.e., using the geoinformatics approach) might allow the system to identify “that smaller street on the right” or “the main road in front.” This natural-language approach, even if it relies on fuzzy definitions, provides the opportunity for much more natural interaction [4–5].
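As a rough illustration of this idea, the Python sketch below phrases a route choice in egocentric, relative terms rather than in metres. It assumes a pre-computed map model that supplies, for each candidate street at a decision point, its relative bearing, width, and line-of-sight length; the data structure, function names, and thresholds are illustrative only and do not describe an existing system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    """A street the user could take from the current decision point."""
    bearing_deg: float      # clockwise angle relative to the user's heading
    width_m: float          # street width from the pre-computed map model
    line_of_sight_m: float  # how far the user can see down this street

def side_of(bearing_deg: float) -> str:
    """Map a relative bearing to an egocentric side ('in front', 'on the right', ...)."""
    b = bearing_deg % 360
    if b < 30 or b > 330:
        return "in front"
    return "on the right" if b < 180 else "on the left"

def describe_choice(target: Candidate, others: List[Candidate]) -> str:
    """Phrase the chosen street relative to the alternatives, not in metres.

    Fuzzy, egocentric wording ('the main road', 'that smaller street') replaces
    metric instructions such as 'turn right after 150 meters'.
    """
    widest = max(c.width_m for c in others) if others else 0.0
    if target.width_m > 1.5 * widest:          # clearly dominant choice
        return f"the main road {side_of(target.bearing_deg)}"
    if others and target.width_m < min(c.width_m for c in others):
        return f"that smaller street {side_of(target.bearing_deg)}"
    return f"the street {side_of(target.bearing_deg)}"

# Example: a wide road ahead versus a narrow lane to the right.
ahead = Candidate(bearing_deg=5, width_m=14, line_of_sight_m=220)
lane = Candidate(bearing_deg=80, width_m=4, line_of_sight_m=60)
print(describe_choice(lane, [ahead]))   # "that smaller street on the right"
print(describe_choice(ahead, [lane]))   # "the main road in front"
```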

While this natural-language method carries the risk of generating mistaken descriptions, it does not affect the precision of GPS tracking: even if the user takes a wrong turn, the system retains the capability to recalculate the route and repeatedly correct any errors. Following the ideas of calm computing, this fact does not need to be indicated to the user, since no additional input is required to perform the task correctly. Not only should the system refrain from alarming the user when it is not necessary; it should also avoid making explicit suggestions as long as it expects the user to perform well without cues. Numerous studies in Space Syntax and Spatial Cognition have demonstrated “default” patterns of spatial behavior in decision-making situations (such as the preference for paths providing a longer line of sight). And yet, this knowledge base remains underused by navigational systems. Local spatial searches for destinations that can be reached on foot exemplify a well-suited context for its application. When the system can reliably predict the default spatial decision most likely to be made by the user, and this prediction matches an optimal or close-to-optimal solution, there is no need to provide explicit, complex output such as a map. Instead, the system can indicate that “everything is ok,” and the user can continue intuitively, according to his or her own “default” strategy.
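A minimal sketch of this cue-suppression logic is given below. It assumes, purely for illustration, that the default choice at a junction can be predicted as the exit offering the longest line of sight and that the planned route is already known; the data layout and the heuristic are our own simplifications rather than a published algorithm.

```python
# Each exit from a junction: egocentric side, line-of-sight length, and whether
# it lies on the planned route (all values assumed to come from the
# pre-computed map model and GPS/compass tracking).
exits = [
    {"side": "in front", "line_of_sight_m": 220, "on_route": False},
    {"side": "on the right", "line_of_sight_m": 60, "on_route": True},
]

def predicted_default(exits):
    """Assumed heuristic from Space Syntax / spatial-cognition findings:
    walkers tend to continue along the option with the longest line of sight."""
    return max(exits, key=lambda e: e["line_of_sight_m"])

def cue(exits):
    """Return an explicit hint only when the intuitive choice would be wrong;
    otherwise stay quiet apart from an 'everything is ok' signal."""
    planned = next(e for e in exits if e["on_route"])
    if predicted_default(exits) is planned:
        return None                      # calm computing: no intrusion needed
    return f"take the street {planned['side']}"

print(cue(exits) or "everything is ok")  # -> "take the street on the right"
```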

Certainly, even during a short, on-foot navigation there are critical decision points at which making the correct choice matters and no local cues suggest it. New wearable technologies make it possible to limit the system’s output at such points to the bare necessity, decreasing the cognitive effort required to process it. We consider three examples of distinct wearable technologies (augmented reality glass, smart watch, and audio-enabled earrings [6]) to demonstrate how the action of looking at a map can be replaced by unobtrusive visual, audio, or haptic hints. In each case, pre-processing of the required route takes place in the system, as in traditional geoinformatic methods, but it is additionally enriched by the perceptual information available to the user from his or her current position.

In the augmented reality glass scenario, the user performs a search (“take me to the nearest ATM”) from the middle of a market square. The glass responds by highlighting the left side of the user’s visual field (or slightly darkening the rest of it). The user starts walking in this direction and comes to a junction where the market square ends and divides into two roads. Using Space Syntax analysis of the map of the area together with visual image recognition, the glass recognizes that the left leg of the junction is perceived as the “main road,” being much wider, better connected, and more densely populated with small businesses than the right leg. It therefore does not suggest any choice to the user, but indicates (with a small green mark) that “everything is fine.” The user, seeing the lack of explicit alerts, takes the preferred route and continues until the spatial configuration of the environment prompts the system to display another hint.

A similar scenario can be realized with a smart watch, which can provide haptic feedback about direction but is limited in the number of choices it can suggest. Small, precise vibrations can thus serve as “left/right/straight on” hints, while a more intense vibration alerts the user to a major mistake and suggests consulting the display for details.

Audio-enabled earrings can use natural-language commands, non-lingual audio signals, or a distortion of the music already being listened to. A user turning in the wrong direction might hear a sudden decrease in volume, which returns to normal as soon as the head orientation changes to correct the decision error.
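The sketch below illustrates one possible encoding of such minimal feedback, assuming only that the device knows the wearer’s heading and the bearing of the next route leg. The thresholds, the discrete vibration vocabulary, and the volume curve are invented for the example and are not drawn from any of the devices cited.

```python
def heading_error(user_heading_deg: float, target_bearing_deg: float) -> float:
    """Signed deviation in degrees, wrapped to (-180, 180]; negative = turn left."""
    return (target_bearing_deg - user_heading_deg + 180) % 360 - 180

def watch_pulse(err_deg: float) -> str:
    """Discrete haptic hints: the watch can only suggest a handful of choices."""
    if abs(err_deg) < 20:
        return "none"       # on course: stay calm, no vibration
    if abs(err_deg) > 120:
        return "strong"     # major mistake: suggest consulting the display
    return "right" if err_deg > 0 else "left"

def music_volume(err_deg: float, base: float = 1.0) -> float:
    """Continuous audio hint: volume drops as the wearer turns the wrong way
    and recovers immediately once the head orientation is corrected."""
    return base * max(0.2, 1.0 - abs(err_deg) / 180.0)

# The wearer drifts 90 degrees off the route, then corrects.
for heading in (0, 90, 10):
    err = heading_error(heading, target_bearing_deg=0)
    print(f"heading {heading:3d} deg  watch: {watch_pulse(err):6s}  volume: {music_volume(err):.2f}")
```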
References

[1] G. Garcia-Perate, P. Agarwal, and D. Wilson, “HINTeractions: Facilitating Informal Knowledge Exchange in Physical and Social Space,” in Proceedings of the 3rd International Conference on Tangible and Embedded Interaction, 2009, pp. 119–122.
[2] M. Weiser and J. S. Brown, “The coming age of calm technology,” in Beyond Calculation, Springer, 1997, pp. 75–85.
[3] W. Ju and L. Leifer, “The design of implicit interactions: Making interactive systems less obnoxious,” Des. Issues, vol. 24, no. 3, pp. 72–84, 2008.
[4] R. Dale, S. Geldof, and J.-P. Prost, “Using Natural Language Generation in Automatic Route Description,” J. Res. Pract. Inf. Technol., vol. 37, no. 1, p. 89, 2005.

[5] H. Cuayáhuitl, N. Dethlefs, K.-F. Richter, T. Tenbrink, and J. Bateman, “A dialogue system for indoor wayfinding using text-based natural language,” Int. J. Comput. Linguist. Appl. (ISSN 0976-0962), 2010.
[6] P. Drescher, D. Tan, W. Hutson, D. Berol, K. Merkher, R. Granovetter, A. Kraemer, K. Collins, and J. Rippie, “Group Report: Form Factors and Connectivity for Wearable Audio Devices,” 2012.
