Comprehensible and Interactive Visualizations of GIS Data in Augmented Reality

Stefanie Zollmann1, Gerhard Schall1, Sebastian Junghanns2, and Gerhard Reitmayr1

1 Graz University of Technology, 2 GRINTEC GmbH

Abstract. Most civil engineering tasks require accessing, surveying and modifying geospatial data in the field and referencing this virtual, geospatial information to the real-world situation. Augmented Reality (AR) can be a useful tool to create, edit and update geospatial data representing real-world artifacts by interacting with the 3D graphical representation of the geospatial data augmented in the user's view. One of the main challenges of interactive AR visualization of data from professional geographic information systems (GIS) is establishing a close link between comprehensible AR visualization and the geographic database that allows interactive modifications. In this paper, we address this challenge by introducing a flexible data management between GIS databases and AR visualizations that maintains data consistency between both data levels and consequently enables an interactive data roundtrip. The integration of our approach into a mobile AR platform enabled us to perform first evaluations with expert end-users from utility companies.

1 Introduction

Geographic information systems (GIS) support civil engineering companies in managing existing or future utility infrastructures. Locating existing assets during construction work (e.g. gas or water pipes), surveying, and visualizing the planned construction in the context of existing structures are some of the tasks which benefit from GIS. Efficient utility location tools and computer-assisted management practices can largely reduce costs and are therefore worth improving continuously. Some companies already employ mobile GIS systems for on-site inspection (e.g. ARCGIS for Android1). However, current visualization techniques implemented in these tools do not show the relation of GIS data to the real-world context and still involve the tedious task of referencing assets correctly to the real world. Using Augmented Reality (AR) as an interface to extend mobile GIS systems has the potential to provide significant advances for the field of civil engineering by supporting the visual integration of the real world and existing assets. AR is an emerging user interface technology superimposing registered 3D graphics over the user's view of the real world in real-time [1]. Visualizing both real and virtual geospatial information at the same time, in reference to each other, has great potential to avoid errors

1 http://www.arcgis.com


Fig. 1. Interactive planning and surveying with mobile AR. Left: Users with setup. Middle left: Creating and surveying a new cable with the setup. Middle right: Manipulating the cable. Right: Comprehensible visualization showing an excavation along the cable.

and to decrease workload. The goal of this project is to enable information access, interactive planning and surveying with AR technology. AR can simplify such tasks by presenting an integrated view of the geospatial models in 3D and providing immediate feedback to the user as well as intuitive ways for data capturing, data correction and surveying. Merging the view of the real world with the presentation of geospatial objects raises major visualization challenges, such as information clutter, depth perception issues or misinterpretation of information. While GIS use a well-standardized symbology for representing geographical features, it is not adapted to 3D AR visualization. To address these issues, we propose new comprehensible AR visualization techniques for GIS data following three goals:

– The visualized information should be easily interpretable.
– The spatial arrangement of real and virtual structures should be understandable.
– Interactive modifications should be consistent in the AR visualization and the GIS database.

Our work contributes to the field of AR, specifically in the domain of visualization of GIS data and digital surveying applications. The core contributions are (1) a novel transcoding layer that ensures data consistency between the GIS database and the displayed content during interactive modifications; (2) novel visualization techniques that help to interpret the GIS data as well as support the spatial understanding of the augmented scene. In addition, we present (3) interactive manipulation techniques that are applied to the displayed GIS data and automatically update the GIS database through our transcoding layer. Finally, we describe (4) the integration of all the aforementioned features into an outdoor AR system that was evaluated with expert users.

2 Related Work

Since the introduction of stand-alone geographic information systems in the late 1970s, there have been many advancements in research and development on the visualization of geographic data. Mobile GIS already extends GIS from the office to the field by combining mobile devices, wireless communication and positioning systems. While mobile GIS enables on-site capturing, storing, manipulating, analyzing and displaying of geographical data, the user still has to build the reference between geospatial data and the real world,

Lecture Notes in Computer Science


as well as interpret the data himself. Several research groups have worked on bridging this gap by using AR for visualizing geographic data on-site. The group of Roberts was among the first to propose an AR overlay of underground assets over a live video image on a mobile device [2]. Later on, Schall et al. built an AR platform for experimenting with the visualization of underground infrastructure in the Vidente project [3]. The potential of AR for the Architecture, Engineering, and Construction (AEC) industry was also identified by Shin et al. [4]. For the construction industry, several research groups showed potential applications. For instance, Hakkarainen et al. describe an AR system for the visualization of Building Information Models (BIM) [5] and Golparvar-Fard et al. proposed an approach for visualizing construction progress [6]. In parallel, mobile AR has gained more interest as a research field. While the Touring Machine, one of the first mobile AR systems combining position and orientation tracking with differential GPS and a magnetometer, was quite bulky [7], research groups have since been working on making AR systems more compact and mobile, such as the Tinmith system by Piekarski et al. [8] or the Vesp'R system by Veas and Kruijff [9]. Further research moved AR applications towards low-end devices such as cell phones [10]. Besides registration techniques and hardware design, an important aspect of presenting complex data in AR is the use of visualization methods that support the comprehension of the presented information. Kalkofen et al. addressed these issues by implementing comprehensible visualization techniques based on focus and context techniques, filtering and stylization [11]. To enable the comprehensible visualization of GIS data, Mendez et al. introduced a transcoding pipeline that allows the mapping of different stylizations to GIS data [12]. Since these methods work only unidirectionally, they can only provide passive viewing functions.
The main goal of our work is a bi-directional data management allowing interactive manipulation and comprehensible visualization of GIS data at the same time. For this purpose, we maintain data consistency between the GIS database and the visualized geometric representations. This approach opens new prospects for outdoor AR-GIS by enabling interactive surveying in the field with visual references to the real world.

3 Approach

To enable interactive modifications of geospatial data in the AR view while keeping consistency with the GIS database, we introduce two different data levels in our architecture: the GIS-database level and the comprehensible 3D geometry level. The GIS-database level consists of a set of features, each describing one real-world object with a 2D geometry and a set of attributes. Attributes are stored as key-value pairs and describe various properties of the feature, such as type, owner or status. The comprehensible 3D geometry level consists of a set of 3D geometric representations of real-world objects, such as extruded circles, rectangles, polygons and arbitrary 3D models, visualizing pipes, excavations, walls or lamps respectively. To support consistency between both data levels, we add a new data layer that serves as a transmission layer between them. We call this additional layer the transcoding layer, since it supports the bi-directional conversion of data between the comprehensible 3D geometry level and the GIS-database level.

Fig. 2. (a) GIS data model: all data is represented by lines and points, which makes interpretation difficult. (b) Advanced geometries: GIS data is transcoded to a comprehensible representation showing cylindrical objects representing trees and pipes. Color coding enables the user to interpret semantics.

Each feature of the GIS database is stored as a scene graph object with a set of attributes. Interactive modifications in our system are conducted at this level and automatically propagated to the two other levels. Applying manipulations at the transcoding layer allows us to manipulate feature data directly and avoids manipulations being applied only to specific 3D geometries. For instance, the exclusive manipulation of an excavation representation of a pipe makes no sense without modifying the line feature representing the pipe. Furthermore, the transcoding layer still has access to the semantic information of a feature, which is important since interaction methods can depend on the type of object. We introduce a bi-directional transcoding pipeline that creates the transcoding layer and the comprehensible 3D geometry level automatically from the geospatial data and updates the database with manipulations applied in the AR view. The pipeline works as follows: (1) GIS data is converted into the transcoding layer and into specific comprehensible 3D geometries (Section 4). (2) Interaction techniques such as selection, manipulation and navigation allow the user to manipulate the various features (Section 5). The data connections between the three data layers guarantee data coherency while interacting with the data. To avoid administration overhead, modifications are recorded through tracing and only changed features are written back to the GIS database.
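The data flow between the three levels can be sketched in a few lines of Python. All names here are illustrative placeholders, not the actual COIN3D-based implementation: a transcoding-layer feature carries geometry and attributes, dependent 3D representations are notified on every manipulation, and a change-tracing flag ensures that only modified features are written back to the database.

```python
# Sketch of the three data levels: GIS database <- transcoding layer -> 3D geometry.
# Class and method names are hypothetical; the real system uses a COIN3D scene graph.

class Feature:
    """Transcoding-layer object: geometry plus key-value attributes."""
    def __init__(self, fid, vertices, attributes):
        self.fid = fid
        self.vertices = list(vertices)      # geometry as a list of (x, y, z) tuples
        self.attributes = dict(attributes)  # e.g. {"type": "pipe", "owner": "..."}
        self._dirty = False                 # change tracing flag
        self._views = []                    # dependent comprehensible 3D geometries

    def move(self, dx, dy, dz):
        """Manipulations are applied at the transcoding layer and propagated."""
        self.vertices = [(x + dx, y + dy, z + dz) for (x, y, z) in self.vertices]
        self._dirty = True
        for view in self._views:
            view.update(self)               # keep all 3D representations consistent

def write_back(features, database):
    # Only features touched by an interaction are written back to the GIS database.
    for f in features:
        if f._dirty:
            database.update(f.fid, f.vertices, f.attributes)
            f._dirty = False
```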

4 From Geospatial Data to Comprehensible AR Visualization

Real-time comprehensible AR visualization and manipulation of geospatial objects require a different data model than traditional geospatial data models. Abstract line and point features need to be processed to create 3D geometry representing the actual shape rather than the surveyed line of points (compare Figure 2 (a) and (b)). Additional geometries have to be created automatically based on the features' attributes to improve the comprehension of the presented information in the AR visualization. For instance, virtual excavations should help to understand the spatial arrangement of subsurface objects. All of these geometries need to be interactive and changeable, so that interactive manipulation allows updating the features. To support these operations, we developed a bi-directional transcoding pipeline that realizes the conversion from GIS features to

Fig. 3. Overview of the bi-directional transcoding pipeline. Data from the geospatial database is converted to a simple GML exchange format. The GML file is imported into the application and transcoded into the transcoding layer representation. Filtering and transcoding operations map the transcoding layer data to comprehensible 3D geometry. Data connections between the transcoding layer scene graph and the comprehensible geometry scene graph keep the visualization up-to-date. User interaction is applied directly to the transcoding layer.

comprehensible 3D data and back (Figure 3). A transcoding operation using a commercial tool extracts and translates geospatial data into a simpler common format for the mobile AR client, listing a set of features and their geometries (Section 4.1). The AR system further filters the features for specific properties and applies transformations to generate 3D data from them (Section 4.2). The 3D data structures are functionally derived from the geospatial data and stay up-to-date when it changes. Interactions operate directly on the feature data, synchronizing the 3D visualization and the features.

4.1 From Geospatial Data to the Transcoding Layer

For extracting the features from the geo-database, we use FME2, an integrated collection of tools for spatial data transformation and translation. FME is a GIS utility that helps users convert data between various formats as well as process data geometry and attributes. The user interactively selects objects of interest in the back-end GIS, which are then exported to a GML3-based file format. A GML file represents a collection of features, where each feature describes one real-world object. Geometric data that is only available in 2D is converted to a 3D representation by using a digital elevation model (DEM) and the known laying depths of the subsurface objects. This step has to be done offline before starting the AR system, since it requires external software and interactive selection of the export area. All following steps of the transcoding pipeline can be performed during runtime, but not in real time. Finally, the GML file is converted into a scene graph format representing the data in the transcoding layer. For each feature, we create a scene graph object representing the semantic attributes and geometric properties of the feature. We support the main

2 The Feature Manipulation Engine: http://www.safe.com



Fig. 4. Different visualizations of an electricity line feature. (a) A yellow rectangular extrusion as graphical representation. (b) A red cylindrical extrusion. (c) Showing an excavation along the electricity line. (d) Showing virtual shadows cast on the ground plane.

standard features of GML, such as GMLLineString, GMLLinearRing, GMLPoint and GMLPolygon, in the conversion step. In our current implementation, we use COIN3D3 to implement the scene graph because it is easily extendable, but the approach can be adapted to other scene graph libraries.
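The import step (lifting 2D GML geometry to 3D with a DEM and a known laying depth) could look roughly like the following sketch. The grid-based DEM lookup is a simplified, hypothetical stand-in for the real elevation model:

```python
# Sketch of the GML import step: a 2D line feature is lifted to 3D using a
# digital elevation model (DEM) and a known laying depth of the subsurface
# object. The dict-based DEM and its nearest-cell lookup are placeholders.

def elevation(dem, x, y):
    # Hypothetical DEM query: nearest-cell lookup in a dict-based grid,
    # defaulting to 0.0 where no elevation sample exists.
    return dem.get((round(x), round(y)), 0.0)

def line_string_to_3d(coords_2d, dem, laying_depth):
    """Convert GML LineString coordinates (x, y) to 3D vertices (x, y, z).

    The z value is the terrain height minus the laying depth, so subsurface
    objects end up below the surface described by the DEM.
    """
    return [(x, y, elevation(dem, x, y) - laying_depth) for (x, y) in coords_2d]
```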

4.2 From Transcoding Layer Data to Comprehensible Geometries

The second step is the creation of comprehensible 3D geometries from the data of the transcoding layer. The final visualization of the geospatial data strongly depends on the application, the application domain and the preferences of the user (e.g. color, geometry symbology or geometry complexity). For instance, a pipe can be represented in several ways, such as a normal pipe using an extruded circle (Figure 4(b)) or as an extruded rectangle showing an excavation around the pipe (Figure 4(c)). We call the conversion from the transcoding layer data representation to comprehensible geometries geometry transcoding. The different types of transcoding operations are called transcoders, and each transcoder can be configured offline or during runtime to create different geometries from the same geospatial data. Each comprehensible 3D geometry is independent of other 3D representations of the corresponding feature but connected to the feature data in the transcoding layer (Figure 3). The implementation of different visualization styles for different feature types is supported by a filtering-transcoding concept. The filtering step searches for a specific object type in the attributes stored in the transcoding layer, and the transcoding step transforms the data into specific geometric objects, which can later be displayed by the rendering system. The separation of the two steps allows for a very flexible system that can support many applications.

Filter Operations The filtering step searches for specific tags in the semantic attributes of features in the transcoding layer and extracts the corresponding features. For instance, features can be filtered by a unique id, a class name, a class alias, or a type. The matching is implemented through regular expressions tested against the string values of the attributes. The features extracted by filtering can then be processed by an assigned transcoder. Filter rules and transcoding operations can be configured by the application

3 http://www.coin3d.org


designer, using a script or during runtime. The mapping of filters to transcoding operations has to be implemented in the application and allows not only configuring the visualization methods for specific data, but also filtering the presented information.

Transcoding Operation Each transcoding operation depends on the type of transcoding and the transcoding parameters. The transcoding type assigns the underlying geometric operation for deriving the 3D geometry, for instance converting a line feature into a circular extrusion representing a pipe. The transcoding parameters configure visualization specifications such as color, width, radius, height, textures and 3D models of the objects. Multiple transcoders can be used to create different visualizations of the same data. The user selects the appropriate representation during runtime, and the geometry created by filtering and transcoding is finally rendered by the AR application. The transcoding is usually performed at start-up time of the application and takes several seconds for thousands of features.

Comprehensible Visualization Techniques The filtering-transcoding concept allows us to create various geometric objects from the same semantic data. This is important since the comprehensible visualization of underground infrastructure on-site poses several challenges:

– Semantic interpretation: The geometry and appearance of visualized objects should fit the requirements of users and application areas; this is mostly achieved by using adequate colors and shapes or meaningful geometric models.
– Depth perception: A comprehensible arrangement of virtual and real objects in the augmentation is important to improve the comprehension of the visualized information and can be achieved by providing additional depth cues or avoiding clutter. For AR visualization of underground infrastructure, depth perception is particularly challenging.
Using simple overlays for visualizing underground objects via an AR X-ray view can cause perceptual issues, such as the impression that underground objects float above the ground. To avoid these problems, it is essential to decide either which parts of the physical scene should be kept and which should be replaced by virtual information [13], or which kinds of additional virtual depth cues can be provided [14]. A comprehensible visualization should provide users with the essential perceptual cues to understand the depth relationship between hidden information and the physical scene. The flexible data management allows us to address both challenges by using the filtering-transcoding pipeline to create easily interpretable geometric objects and additional depth cues. While we support semantic interpretation by using different color codings (e.g. red for electricity cables) and adequate geometric representations (e.g. cylinders for trees), depth perception issues are addressed by creating cutaways (Figure 4(c)), reference shadows projecting the pipe outlines onto the surface, or connection lines visualizing the connection between the object and the ground (Figure 4(d)). The advantage of separating the GIS data level and the comprehensible geometry level with the transcoding layer is that we can create as many different visualization objects as an application needs. There is no additional effort for updating the various geometric objects, since modifications are automatically applied due


to the connection with the transcoding layer. The data connections between the comprehensible 3D data level and the transcoding layer are implemented as field connections in COIN3D to ensure data consistency.
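The filtering-transcoding concept can be illustrated with a small Python sketch. The regex-based filter mirrors the attribute matching described above, while the transcoder and its parameters (shape, radius, color) are hypothetical examples, not the system's actual configuration format:

```python
import re

# Sketch of the filtering-transcoding concept: a filter matches features by
# testing a regular expression against the string value of an attribute; the
# assigned transcoder maps matched features to renderable geometry parameters.
# All names and the geometry dict layout are illustrative.

class Filter:
    def __init__(self, key, pattern):
        self.key = key
        self.pattern = re.compile(pattern)

    def select(self, features):
        # Extract features whose attribute value matches the regular expression.
        return [f for f in features
                if self.pattern.search(str(f["attributes"].get(self.key, "")))]

def pipe_transcoder(feature, radius=0.1, color=(1.0, 0.0, 0.0)):
    # Circular extrusion along the line feature, e.g. a red electricity cable.
    return {"shape": "extruded_circle", "path": feature["vertices"],
            "radius": radius, "color": color}

def run_pipeline(features, rules):
    # rules: list of (filter, transcoder) pairs configured by the application.
    geometry = []
    for flt, transcoder in rules:
        geometry += [transcoder(f) for f in flt.select(features)]
    return geometry
```

Because filters and transcoders are independent, the same feature set can be mapped to pipes, excavations or shadow geometry simply by registering additional rules.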

5 Interaction Techniques

The bi-directional transcoding pipeline allows an interactive data roundtrip of GIS data in an AR application and therefore the interactive manipulation of the geospatial data itself. The direct AR visualization allows aligning and modifying geospatial data with immediate visual feedback. In-field tasks that can benefit from this direct interaction include planning, inspection and surveying of new structures. Following the taxonomy of Bowman et al. [15], we divide the interaction techniques into selection, manipulation and navigation tasks.

5.1 Selection

The selection of GIS features is the starting point for information access, manipulation and surveying. While selecting a feature consisting of one feature point is unambiguous, for features consisting of multiple feature points the user may want to select all corresponding feature points or only a subset. For instance, for surveying it may be useful to select only a single vertex to manipulate, while for planning tasks it may be useful to select and manipulate all corresponding vertices at once, e.g. to move a complete pipe. Therefore, we provide two selection options: object-based selection for selecting a complete object and feature-point-based selection for selecting single feature points. For object-based selection, we compute the object that contains the intersection point and add all its feature points to the selection. For feature-point-based selection, we compute the feature point of the intersected object that is closest to the intersection point. After selecting feature points, information about these features can be accessed, or the features can be manipulated.
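A minimal sketch of the two selection modes, assuming the intersection point has already been obtained from a ray cast through the pointer position (the feature representation as a dict is illustrative):

```python
# Sketch of the two selection options. The intersection point would come from
# a ray cast through the mouse/pen position; here it is passed in directly.

def object_based_selection(feature):
    # Select all feature points of the intersected object (e.g. a whole pipe).
    return list(range(len(feature["vertices"])))

def point_based_selection(feature, hit):
    # Select only the single feature point closest to the intersection point.
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(range(len(feature["vertices"])),
               key=lambda i: dist2(feature["vertices"][i], hit))
```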

5.2 Manipulation

The manipulation of data is important since as-built objects have to be surveyed, already documented objects may have to be updated, and planned data should be adjusted after checks against the real-world situation or after being surveyed.


Fig. 5. Manipulations: (a) Constrained transformation of lamps. (b) Constrained transformation of cables. (c) Single-Point Manipulation. (d) Surveying of a physical cable.

Lecture Notes in Computer Science

9

Transformation After selecting features, transformations can be applied by manipulating a reference geometry with the mouse pointer. A number of different manipulators in COIN3D support the easy integration of different manipulation methods. The resulting manipulation matrix is applied directly to the transcoding layer at feature-vertex level; that is, all selected feature vertices are transformed by the transformation matrix of the current manipulation operation. Due to the direct data connection between the transcoding layer and the comprehensible geometries, all corresponding visualizations are updated by the same transformation.

Surveying AR surveying allows features to be surveyed directly on-site. To provide accurate measurements, we use a laser measurement device. Points measured by the device are mapped into the global coordinate system and used as input for surveying feature points. Since the measurement only provides distances in a fixed direction, the device has to be calibrated in relation to the camera. This allows computing a 3D point for every distance measurement. The measured 3D points are then visualized and used to survey point features and line features by measuring single or multiple points, respectively (Figure 5(d)). Additionally, we integrated a surveying method based on pen input. In this case, the surveyed 3D point is calculated by reprojecting screen coordinates onto a 3D digital terrain model describing the surface of the surrounding environment.
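Assuming the calibration yields the laser ray as an origin and unit direction in camera coordinates, mapping a distance measurement to a global 3D point could be sketched as follows. The pose representation (rotation matrix R and translation t) is an assumption for illustration; in the real system the pose comes from the RTK GPS and orientation sensor:

```python
# Sketch of laser-based surveying: a distance measurement is mapped back onto
# the calibrated laser ray (given in camera coordinates) and then transformed
# into the global coordinate system using the current camera pose (R, t).
# The (R, t) representation is an illustrative assumption.

def survey_point(distance, ray_origin, ray_dir, camera_pose):
    """Compute a global 3D point from a single laser distance measurement."""
    # Point on the calibrated laser ray in camera coordinates.
    p_cam = tuple(o + distance * d for o, d in zip(ray_origin, ray_dir))
    # Transform into the global coordinate system: p_world = R * p_cam + t.
    R, t = camera_pose
    return tuple(sum(R[i][j] * p_cam[j] for j in range(3)) + t[i]
                 for i in range(3))
```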

5.3 Navigation

Fig. 6. Augmented multi-views.

Navigation in AR differs from navigation in virtual environments. Usually, users move physically in the real world (1:1 motion mapping), updating their point of view with their movement. However, in our application the surveying of new features may be physically demanding if a feature is exceptionally large. For instance, pipe features may be distributed over large areas, which makes it challenging to measure them directly by laser surveying from one location, since start and end points may not be visible from the same camera view. To overcome this problem, we provide a method called multi-views [16] that allows capturing camera views and their poses and switching between these stored views. The user can access these multi-views to view and edit objects from remote locations. Furthermore, they can be stored as KML3 files and viewed in external virtual globe viewers for documentation purposes.
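In essence, the multi-view mechanism reduces to capturing and restoring camera frames with their poses. The following minimal sketch is illustrative only and omits the actual rendering and KML export:

```python
# Minimal sketch of the multi-view idea: camera frames and poses captured at
# surveying positions can be restored later to view and edit remote objects.
# The class and its storage layout are hypothetical.

class MultiViews:
    def __init__(self):
        self.views = []

    def capture(self, image, pose):
        # Store the current camera frame together with its pose.
        self.views.append({"image": image, "pose": pose})
        return len(self.views) - 1          # index used to switch back later

    def switch_to(self, index):
        # Return the stored pose so the renderer can show that view again.
        return self.views[index]["pose"]
```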

6 Results

We integrated our approach into a mobile AR platform designed for high-accuracy outdoor usage to test the interactive data roundtrip and to analyze the surveying accuracy. Additionally, we gained first feedback on the setup in workshops with expert users from civil engineering companies.

3 Keyhole Markup Language


Mobile AR Platform The AR prototype hardware setup consists of a tablet PC (Motion J3400) with a 1.6 GHz Pentium CPU and a sunlight-viewable screen suitable for real-world outdoor conditions. We equipped the tablet PC with several sensors: a camera, a 3DoF orientation sensor, and a Novatel OEMV-2 L1/L2 Real-Time Kinematic (RTK) receiver for achieving positional accuracy within the centimeter range. The laser distance measurement device for surveying was mounted on the back of the camera and calibrated with respect to the camera. To calibrate the laser device, we captured several images showing the laser dot and a standard checkerboard calibration target and determined the positions of the dots in relation to the camera. A set of determined positions with known distance measurements allows computing the ray of the laser in relation to the camera. Finally, a 3D point can be computed from each distance measurement by mapping the distance back onto a point on the ray.

Field Tests To test the system with real-world data in field tests, we used data from conventional GIS provided by two civil engineering companies. We performed first field trials of the surveying system with 16 expert participants (12 m / 4 f) from a civil engineering company. Users were asked to survey an existing pipe with the system and afterwards to complete a short questionnaire. On a 7-point Likert scale, users rated the suitability of the AR system for "as-built" surveying above average (avg. 5.13, stdev 1.14) and considered it roughly equivalent to traditional surveying techniques (avg. 4.43, stdev 1.03). The simplicity of surveying new objects was rated above average (avg. 5.44, stdev 0.96). And while the outdoor suitability of the current setup was rated low due to its prototypical character (avg. 3.28, stdev 1.20), the general usefulness of the AR application was rated high (avg. 5.94, stdev 1.19).
To assess the accuracy of the surveying application, we performed experiments measuring a known reference point. The reference point was surveyed with the AR setup from more than 20 different positions and directions. The results (Easting: avg. 0.02, stdev 0.19; Northing: avg. 0.18, stdev 0.14; Height: avg. -0.12, stdev 0.16) show that the accuracy is better than 30 centimeters (the minimum accuracy required by our end users from the civil engineering sector). The observed inaccuracies are caused by orientation sensor and laser calibration errors.
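The per-axis error statistics reported above can be reproduced from repeated survey measurements of a reference point as sketched below; the sample values in the test are made up for illustration, not the actual experiment data:

```python
import statistics

# Sketch of the accuracy evaluation: a known reference point is surveyed
# repeatedly, and per-axis error mean and standard deviation are computed.

def accuracy(measured, reference):
    """Return {axis: (mean error, stdev)} for easting, northing and height."""
    errors = {axis: [m[i] - reference[i] for m in measured]
              for i, axis in enumerate(("easting", "northing", "height"))}
    return {axis: (statistics.mean(e), statistics.stdev(e))
            for axis, e in errors.items()}
```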

7 Conclusion and Outlook

In this paper, we demonstrated how existing workflows such as on-site planning, data capture and surveying of geospatial data can be improved through a new approach enabling interactive, on-site AR visualization. Surveying tasks benefit from the immediate visualization of preview geometries and from the correction and surveying of geospatial objects, since known and captured features are shown in context. First field trials with expert users from the utility industry showed promising results. To achieve this level of functionality, several technical advances were necessary. For the visualization of geospatial data, we implemented a data roundtrip that allows a comprehensible AR visualization while remaining flexible for modifications. Furthermore, the integration of interaction tools for creating, editing and surveying features shows how planning and


surveying structures can be simplified. Currently, we are exploring the commercial development of the prototype in a set of pilot projects with industrial partners. The aim is to adapt the prototype to industrial needs in order to realize a novel on-site field GIS that provides a simpler, yet more appealing way to support specific industrial workflows.

Acknowledgements This work was supported by the Austrian Research Promotion Agency (FFG) FIT-IT projects SMARTVidente (820922) and Construct (830035).

References

1. Azuma, R.: A survey of augmented reality. Presence: Teleoperators and Virtual Environments 6 (1997) 355–385
2. Roberts, G.W., Evans, A., Dodson, A., Denby, B., Cooper, S., Hollands, R.: The use of augmented reality, GPS, and INS for subsurface data visualization. In: FIG XXII International Congress. (2002) 1–12
3. Schall, G.: Handheld augmented reality in civil engineering. In: Proc. ROSUS'09. (2009) 19–25
4. Shin, D., Dunston, P.: Identification of application areas for augmented reality in industrial construction based on technology suitability. Automation in Construction 17 (2008) 882–894
5. Hakkarainen, M., Woodward, C., Rainio, K.: Software architecture for mobile mixed reality and 4D BIM interaction. In: Proc. 25th CIB W78 Conference. (2009) 1–8
6. Golparvar-Fard, M., Pena-Mora, F., Savarese, S.: D4AR - a 4-dimensional augmented reality model for automating construction progress data collection, processing and communication. Journal of Information Technology 14 (2009) 129–153
7. Feiner, S., MacIntyre, B., Hollerer, T., Webster, A.: A touring machine: prototyping 3D mobile augmented reality systems for exploring the urban environment. In: ISWC'97. (1997) 74–81
8. Piekarski, W., Thomas, B.H.: Tinmith-Metro: new outdoor techniques for creating city models with an augmented reality wearable computer. In: ISWC'01. (2001) 31–38
9. Veas, E., Kruijff, E.: Vesp'R: design and evaluation of a handheld AR device. In: Proc. ISMAR 2008, IEEE Computer Society (2008) 43–52
10. Wagner, D., Schmalstieg, D.: First steps towards handheld augmented reality. In: Proc. ISWC'03. (2003) 127–135
11. Kalkofen, D., Mendez, E., Schmalstieg, D.: Comprehensible visualization for augmented reality. IEEE Trans. Vis. Comput. Graphics 15 (2009) 193–204
12. Mendez, E., Schall, G., Havemann, S., Fellner, D., Schmalstieg, D., Junghanns, S.: Generating semantic 3D models of underground infrastructure. IEEE Comput. Graph. Appl. 28 (2008) 48–57
13. Zollmann, S., Kalkofen, D., Mendez, E., Reitmayr, G.: Image-based ghostings for single layer occlusions in augmented reality. In: Proc. ISMAR 2010. (2010) 19–26
14. Feiner, S., Seligmann, D.: Cutaways and ghosting: satisfying visibility constraints in dynamic 3D illustrations. The Visual Computer 8 (1992) 292–302
15. Bowman, D.A., Kruijff, E., LaViola Jr., J.J., Poupyrev, I.: 3D User Interfaces: Theory and Practice. 1st edn. Addison-Wesley (2004)
16. Veas, E., Grasset, R., Kruijff, E., Schmalstieg, D.: Extended overview techniques for outdoor augmented reality. IEEE Trans. Vis. Comput. Graphics 18 (2012) 565–572
