Navigating within the urban environment using Location and Orientation-based Services

Fotis Liarokapis, Jonathan Raper and Vesna Brujic-Okretic
giCentre, Department of Information Science, City University, London EC1V 0HB
{fotisl, raper, vesna}@soi.city.ac.uk

Abstract

Up to now, most attempts to develop pedestrian navigation tools for the urban environment have used GPS technologies to display position on two-dimensional digital maps (as in the classic 'satnav' systems on the market). Although GPS is the key technology for location-based services (LBS), it cannot currently meet all the requirements for navigation in urban environments. Specifically, GPS technologies suffer from multipath signal degradation and cannot provide orientation information at low or zero speed, which is an essential component of navigation. It has also been demonstrated in research that maps are not always the most effective interfaces to pedestrian navigation applications on mobile devices. This paper explores solutions to the orientation and interface challenges in pedestrian navigation on mobile devices. Orientation information is necessary to help the user self-localise in an unknown environment and can be provided by a calibrated digital compass integrated with GPS positioning. Further orientation assistance can be provided by computer-vision techniques that detect features included in the navigation route. These can be either user-predefined fiducials or a careful selection of features belonging to the real world (e.g. parts of buildings). With the combination of position and orientation it is possible to design augmented reality interfaces, which offer a richer cognitive experience and which deliver orientation information continuously and without the limitations of maps. Augmented reality is a collection of technologies with the aim of enhancing the real environment with digital information. The paper is illustrated with applications and case studies from the LOCUS project, which forms part of the UK Pinpoint Faraday Initiative (now succeeded by the Location and Timing Knowledge Transfer Network).

European Navigation Conference 2006, 7-10 May, 2006.

1. Introduction

Pedestrian wayfinding in urban environments is undoubtedly one of the most challenging issues in navigation today. The challenge is an order of magnitude greater than the problem solved in the contemporary 'satnav' systems on the market for in-car navigation, as the constraints are more severe, including the size/power/weight limits, the need to navigate off the street network, the diversity of landmarks required, the wide ability range of the users and the problem of orientation. A range of research prototypes has been suggested for urban wayfinding (Burrell et al, 2002; Griswold et al, 2004), but up to now none of them provides a low-cost, lightweight and efficient solution that works everywhere and targets a wide range of users.

Augmented reality is a collection of technologies, ranging from vision to multimedia, that can be used to enhance the real environment with digital information. In the past few years a number of experimental augmented reality navigation and wayfinding prototypes have been proposed (Feiner et al, 1997; Thomas et al, 1998; Gleue and Dähne, 2001; Reitmayr and Schmalstieg, 2004), illustrating promising results for further research. Virtual reality technologies are used to create virtual environments controlled by the user, for example models of the built environment that offer alternative perspectives on a user's current position. The LOCUS research project (LOCUS 2006) aims to enhance current location-based services for urban navigation by extending the current map-based approach to offer augmented reality and virtual reality interfaces that enrich the contextual information available to users. In this work, we present the results from the development of rich and user-oriented prototype interfaces for urban navigation and wayfinding, fully integrated with positioning and orientation sensors.

2. Requirements for AR/VR wayfinding

There are several important differences between the experimental augmented reality (AR) or virtual reality (VR) systems and existing navigational systems. The key ones are the richness of the contextual spatial/temporal information available and the affordances of the interface for the navigation task. AR/VR interfaces for navigation are hypothesised to be richer by virtue of their augmented virtual nature and to have greater affordances through the control given to the user over the visualisation employed. Hence, an AR/VR environment can look like the reality in which the user is situated, and the feedback from the registration of the model with reality can improve the user's self-localisation and orientation.

There are many experimental virtual reality prototypes that have demonstrated the use of position and orientation information for urban navigation and wayfinding (Kulju and Kaasinen, 2002; Laakso et al, 2003; Burigat and Chittaro, 2005). These systems make use of GPS and orientation-based technologies, such as digital compasses, gyroscopes and accelerometers, to estimate the position and orientation and provide a virtual representation of the real environment. In theory, if VR navigation and wayfinding systems are presented in a user-friendly way they can overcome some of the limitations of current navigational systems, because they can make effective use of position and orientation information to display helpful assistance to the user. However, initial studies have shown that even if the real environment has been modelled accurately and realistically in a VR model, the duplication of the environment can sometimes confuse users rather than assist them (Liarokapis et al, 2006).

In contrast to virtual environments, one of the virtues of AR in navigation and wayfinding is that the characteristics of augmented environments do not differ greatly from the characteristics of the real environment (cf. a live digital camera preview). The only difference is that the information required for the navigation task is seamlessly 'inserted' into the visualisation of the environment. Another advantage of AR is that in many applications simple forms of visual information suffice for effective navigational operations. In the simplest scenario, textual information can be employed to provide navigational information or even descriptive information about a particular urban area and its building structures. Additionally, other types of visual information (e.g. a 3D arrow) and auditory information (spatial sound) can be geo-referenced and blended with the environment (see Figure 5). Thus, a significant advantage of using AR is that other types of multimedia (or digital) information, such as spatialised sound, meaningful textual information and spatial images, can be superimposed on the visualisation so that the user can effectively interact with it (Liarokapis et al, 2005; Liarokapis et al, 2006).

The pragmatic way to achieve the maximum navigation and wayfinding benefit from this range of technologies is to integrate them to produce the richest representation and most communicative interface. This is the goal of mixed reality (MR) technologies, which can take advantage of both hardware tracking technologies (e.g. GPS, digital compass) and software technologies (AR, VR, mobile interfaces). However, the key challenge is implementing sensor integration, registration and representation on a mobile device, as this is still a hard system integration problem.

3. The LOCUS Approach to AR/VR navigation interfaces

An overview of the architecture of the LOCUS mobile AR/VR interface includes the following components (a minimal sketch of how they fit together is given after the list):

• Tracking sub-system
• Camera sub-system
• Graphics sub-system
• Interaction sub-system
• Database sub-system
• Interface sub-system
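As a concrete illustration of how these components could fit together, the sketch below outlines one possible decomposition in C++. It is only a hypothetical outline — the type and function names (PoseSample, TrackingSubsystem, and so on) are ours, not LOCUS code — intended to show how a fused position/orientation sample might flow from the tracking sub-system through the database and graphics sub-systems once per frame.

```cpp
// Hypothetical sketch of a LOCUS-style sub-system decomposition.
// Names and interfaces are illustrative only; they are not the project's actual code.
#include <string>
#include <vector>

// One fused sample of position (from GPS) and orientation (from the digital compass).
struct PoseSample {
    double latitude;   // degrees, WGS84
    double longitude;  // degrees, WGS84
    double heading;    // degrees clockwise from north, compass-derived
    double hdop;       // horizontal dilution of precision, used for quality filtering
};

class TrackingSubsystem {        // GPS + compass (+ optional vision) fusion
public:
    virtual bool currentPose(PoseSample& out) = 0;   // false if no valid fix yet
    virtual ~TrackingSubsystem() {}
};

class CameraSubsystem {          // decomposes the video stream into frames
public:
    virtual const unsigned char* grabFrame(int& width, int& height) = 0;
    virtual ~CameraSubsystem() {}
};

class DatabaseSubsystem {        // serves geo-referenced spatial data (local or networked)
public:
    virtual std::vector<std::string> featuresNear(double lat, double lon, double radiusM) = 0;
    virtual ~DatabaseSubsystem() {}
};

class GraphicsSubsystem {        // merges directional overlays with each video frame
public:
    virtual void renderOverlay(const PoseSample& pose,
                               const std::vector<std::string>& features) = 0;
    virtual ~GraphicsSubsystem() {}
};

// The interface sub-system ties the pieces together once per frame.
void updateOnce(TrackingSubsystem& tracking, CameraSubsystem& camera,
                DatabaseSubsystem& db, GraphicsSubsystem& graphics) {
    PoseSample pose;
    if (!tracking.currentPose(pose)) return;    // wait for a usable fix
    int w = 0, h = 0;
    camera.grabFrame(w, h);                     // frame used for AR compositing
    graphics.renderOverlay(pose, db.featuresNear(pose.latitude, pose.longitude, 100.0));
}
```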

The aim of the tracking sub-system is to detect the position and orientation of the user during navigation based on various technologies, including GPS, digital compass and computer-vision techniques. The first two are used as the base tracking solution, while the latter is used to provide additional navigation information when the environment can be 'recognised' by the image processing capabilities of the system. The camera sub-system is used to process the video stream and decompose it into frames for further processing. This can be used sometimes for the determination of the user's exact position and orientation (see section 4) and sometimes for the overlaying of visual content onto the mobile display or onto external visualisation devices (see section 3.1). The graphics sub-system merges visual content, such as directional indications, with each frame. The interaction sub-system is used to control the visualisation on the device in an intuitive way; we are investigating both stylus-based interaction (see Figure 2) and sensor-driven interaction (e.g. digital compass). The GIS database (database sub-system) is needed to serve the necessary geo-referenced spatial data into the navigation visualisation interface. It serves the client either locally from the device or over an optimised network, with minimum latency for real-time performance. The LOCUS project has constructed a database for parts of central London by integrating Ordnance Survey mapping with building information from the GeoInformation Group Cities Revealed datasets. Finally, the interface sub-system integrates everything into a user-centred interface and is delivered on Windows Mobile devices through a native interface or a browser.

To achieve the goals of the LOCUS project, a number of technologies have been prototyped, as shown in Table 1.

Application               Tracking   Camera   Graphics   Interaction   Database   Interface
Data Form                 *          N/A      N/A        N/A           *          N/A
Pure Augmented Reality    *          ***      *          **            *          **
Virtual Reality           *          N/A      **         *             *          ***
Mixed Reality             *          ***      *          *             *          **

Table 1. Technology overview (* Operational; ** Minor challenge; *** Major challenge)

Currently, the cells with one star are operational and the rest are under development. Note that every row corresponding to a particular application type has a significant challenge within it. In the following sub-sections, the most significant developments are briefly explained.

3.1. GPS and Digital Compass

Inspired by the ARCHEOGUIDE (Gleue and Dähne 2001) idea of using an electronic compass and GPS in a wearable system, the key challenge in the LOCUS project is to integrate all the tracking technologies with the application on the mobile device. Given the complexity of interfacing serial and USB communications with mobile devices and the impracticality of cables, Bluetooth communication has proved the most effective approach. A Bluetooth GPS receiver based on the SiRFstar III chipset (the Holux GPSlim 236) has been used because of its small size, long battery life and massive correlation power, making it especially suitable for urban environments where signal strength may be limited. A tool has been written to read GPS data as NMEA strings from the selected COM port, and these are passed to the tracking component. Dilution of Precision (DOP) values are used to filter positions by quality criteria.

On the other hand, the integration of a digital compass is not as straightforward, since the hardware is not packaged for consumer use. Specifically, most manufacturers provide only OEM hardware, and software integration needs to be implemented for the specific device. The solution adopted is to connect the digital compass hardware (a Honeywell HMR3300) to a cordless Bluetooth serial adaptor. A tool has been written to read the data strings from the selected COM port, and these are passed to the tracking component. Calibration of the compass for magnetic interference, and work to quantify its latency, is under way. We are currently using the HTC Universal (aka JASJAR) Windows Mobile 5 device, with its 520 MHz processor and VGA screen, as the prototyping mobile device.
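To make the NMEA handling and DOP filtering concrete, the sketch below shows one way a `$GPGGA` sentence could be parsed and gated by HDOP before being passed to the tracking component. It assumes that complete sentences are read from the Bluetooth COM port elsewhere; the function names and the quality threshold are our assumptions, not values taken from the LOCUS tool.

```cpp
// Minimal sketch: parse a $GPGGA sentence and filter by HDOP.
// Assumes whole NMEA sentences are read from the Bluetooth COM port elsewhere.
#include <cstdlib>
#include <sstream>
#include <string>
#include <vector>

struct GpsFix {
    double latitude;    // decimal degrees, positive north
    double longitude;   // decimal degrees, positive east
    double hdop;        // horizontal dilution of precision
    bool   valid;
};

// Convert NMEA ddmm.mmmm / dddmm.mmmm format to decimal degrees.
static double nmeaToDegrees(const std::string& value, const std::string& hemisphere) {
    if (value.empty()) return 0.0;
    double raw = std::atof(value.c_str());
    double degrees = static_cast<int>(raw / 100);   // dd or ddd part
    double minutes = raw - degrees * 100.0;         // mm.mmmm part
    double result = degrees + minutes / 60.0;
    if (hemisphere == "S" || hemisphere == "W") result = -result;
    return result;
}

GpsFix parseGGA(const std::string& sentence) {
    GpsFix fix = {0.0, 0.0, 99.9, false};
    if (sentence.compare(0, 6, "$GPGGA") != 0) return fix;

    std::vector<std::string> fields;
    std::stringstream ss(sentence);
    std::string field;
    while (std::getline(ss, field, ',')) fields.push_back(field);
    if (fields.size() < 9) return fix;              // malformed sentence

    int quality  = std::atoi(fields[6].c_str());    // 0 = no fix
    fix.latitude  = nmeaToDegrees(fields[2], fields[3]);
    fix.longitude = nmeaToDegrees(fields[4], fields[5]);
    fix.hdop      = std::atof(fields[8].c_str());
    fix.valid     = (quality > 0);
    return fix;
}

// Quality gate before a position is handed to the tracking component.
// The 6.0 threshold is an illustrative value, not one stated in the paper.
bool acceptFix(const GpsFix& fix) {
    return fix.valid && fix.hdop > 0.0 && fix.hdop < 6.0;
}
```

The comma-separated ASCII strings produced by the compass can be split in the same way before the heading is forwarded to the tracking component.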

3.2. Client-Server Approach

The Camineo platform (Camineo 2006), developed in the WebPark project (WebPark 2006), in which City University was a partner, is a client-server architecture designed for mobile devices that can provide location-based services to users through Java applications delivered via a browser interface (Raper 2005). The Camineo platform offers tools for integrating positional information, via Bluetooth GPS, to provide navigational information about the surrounding environment as well as 'mobile search' options (Figure 1).

Figure 1. Camineo platform: (a) Home page; (b) Map navigation; (c) Top 5 searches. Map data © Crown Copyright/database right 2006. An Ordnance Survey/EDINA supplied service.

Figure 1(a) shows the (customisable) home page of Camineo, which provides five different options to the user. Figure 1(b) illustrates how navigation is performed using GPS and SVG maps; the red dots show the path of the user on the map. Figure 1(c) shows the 'Top 5' searches for information important to the user, such as buildings, landmarks, monuments and statues. In LOCUS, we are using the Camineo platform to receive position and orientation information through sockets, in order to take advantage of the client-server functionality. Further work is being carried out on the integration of AR and VR applications with the stream of position and orientation information.
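The paper does not specify the wire format used between the sensor tools and the Camineo client, so the fragment below only sketches the general idea: a fused position/orientation sample is serialised to a one-line text message and pushed over a local TCP socket, from which the client-server platform can read it. The host, port and message layout are all assumptions.

```cpp
// Sketch: push a position/orientation sample to the client over a TCP socket.
// Host, port and the one-line text format are illustrative assumptions.
// Link against the Winsock library on Windows Mobile / Win32.
#include <winsock2.h>
#include <cstdio>
#include <cstring>

bool sendPose(double lat, double lon, double headingDeg) {
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return false;

    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (s == INVALID_SOCKET) { WSACleanup(); return false; }

    sockaddr_in addr;
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5555);                      // assumed port
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");    // client running on the same device

    bool ok = false;
    if (connect(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0) {
        char line[128];
        // e.g. "POSE,51.527600,-0.102500,134.0\n" — latitude, longitude, compass heading
        std::sprintf(line, "POSE,%.6f,%.6f,%.1f\n", lat, lon, headingDeg);
        ok = (send(s, line, static_cast<int>(std::strlen(line)), 0) != SOCKET_ERROR);
    }
    closesocket(s);
    WSACleanup();
    return ok;
}
```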

3.3. Mobile Graphics API

Currently we are using two different technologies for presenting our 3D maps on the mobile device: VRML and Managed Direct3D Mobile (MD3DM). VRML content is rendered via Pocket Cortona, which can operate either as an ActiveX plug-in in Pocket Internet Explorer or in stand-alone mode. The Pocket Internet Explorer plug-in allows us to integrate it efficiently with the Camineo platform, because Camineo also uses a web browser as the main tool for retrieving and visualising navigational information. Two example screenshots of a user interacting with a VRML-based VR world on a mobile device are shown in Figure 2.

Figure 2. VR navigation of City University’s campus

Figure 2 shows how a user can use the stylus to navigate inside a realistic virtual representation of City University's campus. It is worth mentioning that the environment has been modelled as accurately as possible using both manual and photogrammetric techniques. In terms of performance, the frame rate achieved is in the range of 3 to 5 frames per second (FPS) on the HTC Universal device, while on a Dell Axim X51v PDA (with a 624 MHz processor and a dedicated 16 MB graphics accelerator) it ranges between 12 and 15 FPS.

In addition to this, we are developing our own customised mobile 3D graphics engine based on MD3DM, which is operational on Pocket PCs, Smartphones and other devices running Windows CE with the .NET Compact Framework. The graphics engine operates as a separate module that handles the output from the GPS/compass automatically and provides sufficient functionality to generate mobile VR applications. Compared to the VRML solution, the advantage of MD3DM is that it takes full advantage of graphics hardware support and enables the development of high-performance three-dimensional rendering. In addition, it provides support for loading binary 3D files and is easy to integrate with other DirectX components (e.g. DirectShow).
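The paper does not spell out how the sensor output drives the VR viewpoint, so the sketch below only illustrates one standard approach: project latitude/longitude into a local metric frame with an equirectangular approximation and derive the camera yaw from the compass heading. The origin coordinates, axis convention and function names are assumptions, not the engine's actual code.

```cpp
// Sketch: map a GPS/compass sample to a local camera pose for the VR scene.
// Uses an equirectangular approximation around a fixed local origin; adequate
// for a campus-sized area and only meant to illustrate the idea.
#include <cmath>

static const double kEarthRadiusM = 6378137.0;               // WGS84 equatorial radius
static const double kDegToRad     = 3.14159265358979 / 180.0;

// Assumed local origin (roughly central London); a real system would use
// the origin of its own 3D model instead.
static const double kOriginLat = 51.5276;
static const double kOriginLon = -0.1025;

struct CameraPose {
    double x, z;     // metres east / south of the origin (y is up)
    double yawRad;   // rotation about the vertical axis, from the compass
};

CameraPose poseFromSensors(double latDeg, double lonDeg, double headingDeg) {
    CameraPose pose;
    double originLatRad = kOriginLat * kDegToRad;
    // East-west distance shrinks with cos(latitude) in this approximation.
    pose.x = (lonDeg - kOriginLon) * kDegToRad * kEarthRadiusM * std::cos(originLatRad);
    // Negative so that moving north decreases z (a common right-handed convention).
    pose.z = -(latDeg - kOriginLat) * kDegToRad * kEarthRadiusM;
    // Compass heading is clockwise from north; convert to a yaw angle in radians.
    pose.yawRad = headingDeg * kDegToRad;
    return pose;
}
```

The resulting pose would then be applied to whichever renderer is in use, for example as a Cortona viewpoint or as the view matrix in the MD3DM engine.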

4. Urban Navigation using Computer Vision

To explore the potential of computer-vision augmented reality (without using position and orientation from hardware devices), we have designed a tangible mixed reality interface called MRGIS (Liarokapis et al, 2005; Liarokapis et al, 2006) built on top of ARToolKit's tracking libraries (Kato et al, 2000), the VRML and OpenGL graphics APIs, the DirectShow video API, and the Microsoft Foundation Classes (MFC). The aim of the system is to calculate the position and orientation of the user and provide them with a rich navigational interface. In this way, we can check or supplement the use of position and orientation devices such as GPS and the digital compass. To test the effectiveness of the interface in navigation and wayfinding we have experimentally applied two potential wayfinding scenarios. In both scenarios, the overall goal was to provide wayfinding and navigation information for users navigating around urban areas.
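Since MRGIS builds on ARToolKit, a plausible core of the per-frame fiducial processing looks like the fragment below. It follows the ARToolKit 2.x C API cited above, but the pattern file name, detection threshold and marker width are placeholders, and the camera parameters are assumed to have been loaded and passed to arInitCparam() during start-up; treat it as a sketch rather than the project's actual code.

```cpp
// Sketch of per-frame fiducial detection and pose recovery with ARToolKit 2.x.
// Camera parameters are assumed to have been initialised elsewhere
// (arParamLoad/arInitCparam); the pattern file, threshold and width are placeholders.
#include <AR/ar.h>

static int    g_pattId        = -1;
static double g_pattWidth     = 80.0;                // marker width in mm (assumed)
static double g_pattCentre[2] = {0.0, 0.0};

bool initPattern() {
    char pattFile[] = "Data/patt.roadsign";          // hypothetical trained pattern
    g_pattId = arLoadPatt(pattFile);
    return g_pattId >= 0;
}

// Returns true and fills 'trans' (the 3x4 camera-from-marker transform) when the
// fiducial is visible in the supplied video frame.
bool detectFiducial(ARUint8* frame, double trans[3][4]) {
    ARMarkerInfo* markers   = 0;
    int           count     = 0;
    const int     threshold = 100;                   // binarisation threshold (assumed)

    if (arDetectMarker(frame, threshold, &markers, &count) < 0) return false;

    int best = -1;
    for (int i = 0; i < count; ++i) {
        // Keep the detection of our pattern with the highest confidence value.
        if (markers[i].id == g_pattId &&
            (best < 0 || markers[i].cf > markers[best].cf)) {
            best = i;
        }
    }
    if (best < 0) return false;

    // Recover the transform used to place the 3D arrow / audio cue in the scene.
    arGetTransMat(&markers[best], g_pattCentre, g_pattWidth, trans);
    return true;
}
```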

4.1. Detecting Road Signs

The aim of the first scenario is to use road signs as fiducials to compute the user's pose (position and orientation). Road signs are usually found at decision points and, because we use them in real life as a means of navigation, they can be considered a semi-natural aid to wayfinding. An example of a road sign used in this research for providing navigational information through AR is illustrated in Figure 3.

Figure 3. Sign used for AR wayfinding: (a) before detection; (b) after detection

As soon as the user reaches a decision point and the camera detects the relevant marker, the system automatically provides the user with audio-visual AR overlays. Figure 3(b) shows the view of a user in front of a road sign close to the end of the road (a decision point). In this example, a 3D arrow and auditory information are superimposed, showing and narrating the route towards a pre-determined destination. It also feels natural to point the camera at road signs, and in some cases more than one road sign is required to accurately calculate the position of the user (orientation can still be calculated from a single road sign). This happens in larger streets where the same or very similar road signs have been used in more than one place. Figure 4 illustrates a complex scenario where the same road signs have been used on the corners of many streets.

Figure 4. Complicated road sign navigation scenario using four road markers

In such a case, we employ a three-stage semi-automatic technique which mimics the way we navigate. In the first stage, the user detects the first road sign, which is marked with an 'A' in Figure 4. As soon as a rough approximation of the user's position along road 'A' has been established, the range is narrowed further in the second stage. This is performed by detecting a second road sign, marked as 'B' in Figure 4; the user is therefore located at the junction between roads 'A' and 'B'. Similarly, it is easy to calculate whether the user is located on road 'D' (top part of Figure 4) or on road 'C' (bottom part of Figure 4), further limiting the area of interest. At this stage we have designed an experimental way of interacting with the system by asking the user to provide some feedback to the system. This may be in the form of visual (e.g. textual feedback, 3D arrows, 2D maps) or auditory information. The main advantage of using road signs for navigating the environment is that they already exist, so there is no need to prepare the environment prior to navigation. On the other hand, they are often not in good condition, making them difficult to detect. Therefore, alternative techniques have been investigated, as shown in the following section.
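The staged reasoning above reduces to a small lookup once the IDs of the detected signs are known: one sign gives the road (and orientation), a second narrows the user down to one junction. The sketch below illustrates that logic with invented sign IDs and road labels; it is not the system's actual data or code.

```cpp
// Sketch of the staged road-sign disambiguation: a single sign gives the road
// (and orientation), a second sign narrows the user down to one junction.
// Sign IDs and road labels are invented for illustration.
#include <set>
#include <string>

struct LocationEstimate {
    std::string road;       // set after the first sign ("somewhere along A")
    std::string junction;   // set after the second sign ("junction of A and B")
};

// Road associated with each fiducial/sign ID (stage-one knowledge).
static std::string roadForSign(int signId) {
    switch (signId) {
        case 1: return "A";
        case 2: return "B";
        case 3: return "C";
        case 4: return "D";
        default: return "";
    }
}

LocationEstimate estimateLocation(const std::set<int>& detectedSigns) {
    LocationEstimate est;
    std::set<std::string> roads;
    for (std::set<int>::const_iterator it = detectedSigns.begin();
         it != detectedSigns.end(); ++it) {
        std::string road = roadForSign(*it);
        if (!road.empty()) roads.insert(road);
    }
    if (roads.empty()) return est;            // nothing recognised yet

    est.road = *roads.begin();                // stage one: along this road
    if (roads.size() >= 2) {                  // stage two: two roads seen
        std::set<std::string>::const_iterator it = roads.begin();
        std::string first = *it;
        ++it;
        est.junction = "junction of " + first + " and " + *it;
    }
    return est;
}
```

A third stage, the user's own feedback, would then select between the remaining candidates (roads 'C' or 'D' in Figure 4).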

4.2. Detecting Natural Features

The most complex objective is to detect parts of the real environment, such as buildings or parts of them, that are distinctive in shape (Liarokapis et al, 2006). These may include building structures or parts of buildings such as door entrances, or even kiosks. As the navigator reaches a decision point, the environment must be scanned through the video camera in order to detect the corresponding points of interest. When a point of interest is detected, the system computes the user's position and orientation in real time and virtual navigational information is superimposed, indicating the direction of interest, as shown in Figure 5.

Figure 5. Detecting natural features

Figure 5 shows the view of the user when approaching a decision point, in this example a door entrance. A blue 3D arrow is mixed with the real world, showing the direction the user has to follow in order to reach the intended destination (town centre, entrance of a building, etc.). An optional feature of the system is that it allows the user to change the appearance of the augmented information if necessary. For example, if the user is located further away from the decision point, the size and colour of the arrow can be changed interactively without affecting the rest of the process. However, the tracking accuracy of our current approach needs further improvement (Liarokapis 2006), and so we are investigating alternative methods, such as corner detection, for improving the effectiveness of the system (Harris and Stephens 1988).
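For reference, the corner measure of Harris and Stephens (1988) cited above reduces to a few lines once the image gradients are available. The sketch below computes the response R = det(M) - k * trace(M)^2 for a single pixel from pre-smoothed gradient products; it is the textbook formulation, not the MRGIS implementation, and the value of k is simply the commonly used constant.

```cpp
// Harris and Stephens (1988) corner response for a single pixel.
// Ixx, Iyy, Ixy are the Gaussian-smoothed products of the image gradients
// (Ix*Ix, Iy*Iy, Ix*Iy) accumulated over a window around the pixel.
// R >> 0 indicates a corner, R << 0 an edge, |R| small a flat region.
double harrisResponse(double Ixx, double Iyy, double Ixy, double k = 0.04) {
    double det   = Ixx * Iyy - Ixy * Ixy;   // determinant of the structure matrix M
    double trace = Ixx + Iyy;               // trace of M
    return det - k * trace * trace;         // R = det(M) - k * trace(M)^2
}
```

Candidate natural features would then be taken as local maxima of R above a threshold, giving interest points that are less sensitive to the poor sign conditions mentioned in the previous section.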

5. Conclusions and Future Work

In this paper we have shown how AR/VR interfaces can take advantage of different tracking technologies to provide efficient support for pedestrian wayfinding and navigation in urban environments. We are currently completing the development work on the integration of the various components of the project and, in parallel, we are building a spatial database which will hold accurate 3D representations of parts of central London. We plan to use these data to perform 3D tracking in real time without the need to pre-calibrate the environment. Finally, all the tracking solutions will be integrated into a hybrid mobile interface, and we then plan to perform extensive user studies with focused user groups.

Acknowledgements

The work presented in this paper was mostly conducted within the LOCUS project, funded by EPSRC through the Location and Timing KTN partnership. We would also like to thank our partner on the project, the GeoInformation Group, Cambridge, for making its entire database of City of London buildings available to the project.

References

Burigat, S., and Chittaro, L., (2005). Location-aware visualization of VRML models in GPS-based mobile guides. In Proc. of the 10th International Conference on 3D Web Technology, ACM Press, 57-64.
Burrell, J., Guy, G., Kubo, K., and Farina, N., (2002). Context-aware computing: A test case. In Proc. of UbiComp, Springer, 1-15.
Camineo, (2006). Available at: http://www.camineo.com/, Accessed: 08/04/2006.
Feiner, S., MacIntyre, B., Höllerer, T., and Webster, T., (1997). A touring machine: Prototyping 3D mobile augmented reality systems for exploring the urban environment. In Proc. of the 1st IEEE Int'l Symposium on Wearable Computers, October 13-14, 74-81.
Gleue, T., and Dähne, P., (2001). Design and Implementation of a Mobile Device for Outdoor Augmented Reality in the ARCHEOGUIDE Project. In Proc. of the 2001 Conference on Virtual Reality, Archeology, and Cultural Heritage (VAST), Greece, 161-168.
Griswold, W.G., et al., (2004). ActiveCampus: Experiments in Community-Oriented Ubiquitous Computing. IEEE Computer, IEEE Computer Society, October, Vol. 37, No. 10, 73-81.
Harris, C., and Stephens, M., (1988). A combined corner and edge detector. In Proc. of the 4th Alvey Vision Conference, 147-151.
Kato, H., Billinghurst, M., and Poupyrev, I., (2000). ARToolKit User Manual, Version 2.33, Human Interface Lab, University of Washington, November.
Kulju, M., and Kaasinen, E., (2002). Route guidance using a 3D city model on a mobile device. In Mobile HCI02 Workshop: Mobile Tourism Support Systems, Mobile HCI 2002 Symposium, Pisa, Italy, 17 September.
Laakso, K., Gjesdal, O., and Sulebak, J.R., (2003). Tourist information and navigation support by using 3D maps displayed on mobile devices. In Workshop on Mobile Guides, Mobile HCI 2003 Symposium, Udine, Italy.
Liarokapis, F., et al., (2005). Mobile Augmented Reality Techniques for GeoVisualisation. In Proc. of the 9th Int'l Conference on Information Visualisation, IEEE Computer Society, 6-8 July, London, UK, 745-751.
Liarokapis, F., et al., (2006). Mixed Reality for Exploring Urban Environments. In Proc. of the 1st Int'l Conference on Computer Graphics Theory and Applications, INSTICC Press, 25-28 February, Setubal, Portugal, 208-215.
LOCUS, (2006). Available at: http://www.locus.org.uk/, Accessed: 08/04/2006.
Raper, J.F., (2005). Design constraints on operational LBS. In Location-Based Services and TeleCartography, Vienna.
Reitmayr, G., and Schmalstieg, D., (2004). Collaborative Augmented Reality for Outdoor Navigation and Information Browsing. In Proc. of the Symposium on Location Based Services and TeleCartography, Vienna, Austria, January, 31-41.
Thomas, B.H., Demczuk, V., et al., (1998). A Wearable Computer System with Augmented Reality to Support Terrestrial Navigation. In Proc. of the 2nd Int'l Symposium on Wearable Computers, October, IEEE and ACM, 168-171.
WebPark, (2006). Available at: http://www.webparkservices.info/, Accessed: 08/04/2006.
