
A Comparative Study of 3D Web Integration Models for the Sensor Web

Sangchul Ahn*
University of Science and Technology


Byounghyun Yoo†
Korea Institute of Science and Technology


Heedong Ko‡
Korea Institute of Science and Technology



Figure 1. A comparison of 3D Web integration models for the Sensor Web, with examples: (a) a global view of an X3DOM integration example, (b) a local view of the X3DOM example, (c) a global view of a webized integration example, and (d) a local view of the webized example.

Abstract

To facilitate dynamic exploration of the Sensor Web, and to allow users to seamlessly focus on a particular sensor system from a set of registered sensor networks deployed across the globe, interactive 3D graphics on the Web is necessary, enabling sensor data exploration with a good level of detail for the multi-scaled Sensor Web. We conducted a comparative study of recent approaches to integrating Sensor Web information with the latest 3D Web technology and the geospatial Web. We implemented prototype systems in three different 3D Web integration models: a common X3D model, the X3DOM integration model, and a webized AR content model. This paper presents examples of our prototype implementations using the various approaches and discusses the lessons learned.

Keywords: 3D Web, augmented reality, geospatial Web, hypermedia, HTML5, Sensor Web

Introduction

With the ongoing development of inexpensive miniature and smart sensors, abundantly fast and ubiquitous computing devices, wireless and mobile communication networks, and autonomous and intelligent software agents, the Sensor Web is rapidly emerging as a powerful technological framework for geospatial data collection, fusion and distribution [Liang 2009]. Increasing numbers of distributed environmental sensing and monitoring sensor networks are being deployed and are generating datasets continuously. Furthermore, the increasing demand for environmental sensing and modeling at multiple scales makes the discovery, exploration and sharing of sensor resources indispensable.

In this paper, we present feasible approaches for integrating Sensor Web information with the latest 3D Web technology, in an effort to facilitate sensor data discovery and sharing by exploring a Web-based 3D virtual globe on which the metadata of the Sensor Web are queried and visualized through interaction with the information seeker, as shown in the examples in Figure 1. To provide more intuitive exposure of the sensor data, we implement a dynamic 3D scene of sensor information on the globe with interactive navigation, using various 3D Web integration models. We compare our prototype implementations across the different approaches and discuss the findings.


* e-mail: [email protected]
† Corresponding author. e-mail: [email protected]
‡ e-mail: [email protected]

Related Work

Similar to the W3C Web standards enabling the World Wide Web (WWW), the Open Geospatial Consortium's Sensor Web Enablement (SWE) standards enable researchers and developers to make sensing resources discoverable, accessible, and reusable via the Web. SWE is composed of candidate specifications that include Observations and Measurements (O&M), the Sensor Model Language (SensorML), and the Sensor Observation Service (SOS). Readers can refer to a recent publication [Bröring et al. 2011] for detailed information about examples and applications of SWE. Previous work has shown the potential of 3D visualization using a virtual globe as a data exploration tool [Stensgaard et al. 2009; Tomaszewski 2011]. The virtual globe has been used for exploring geo-temporal differences in datasets [Hoeber et al. 2011; Wood et al. 2007] and for data publication on the Sensor Web [Liang et al. 2010].

The correlated region information stored in the online database is used for formulating casual queries based on a jurisdictional region name, such as a city or country name. For example, it is not possible to determine from a station's metadata alone whether the station lies in South-Eastern Asia, a sub-region of Asia. Formulating a geospatial query that refers to the corresponding information from a world-borders dataset enables such a casual query based on a region name.
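The region-name query described above can be sketched as follows. This is a minimal sketch, assuming a hypothetical schema (the `stations` and `world_borders` table and column names are illustrative; the paper does not give its actual schema), with the containment test expressed through the standard PostGIS `ST_Contains` predicate:

```python
def region_query(region_name):
    """Build a parameterized PostGIS query that finds sensor stations whose
    point location falls inside the named region's border geometry.
    Table/column names are hypothetical; only ST_Contains is standard PostGIS."""
    sql = (
        "SELECT s.station_id, s.name "
        "FROM stations s "
        "JOIN world_borders r ON ST_Contains(r.geom, s.location) "
        "WHERE r.region_name = %s"
    )
    return sql, (region_name,)

# The query string and parameters would be handed to a database driver;
# here we only formulate them.
sql, params = region_query("South-Eastern Asia")
```

Resolving the region name against the world-borders geometry in the join is what allows a casual query such as "South-Eastern Asia" even though no station's metadata mentions that sub-region.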

3D thematic mapping using virtual globes and geobrowsers [Sandvik 2008] offers a framework for the 3D geovisualization of statistical datasets, and it inspires our 3D geo-visualization of metadata on the Sensor Web. Creating 3D interactive content on the HTML5 platform is complex. Yang and Zhang [2010] compared the principles of models used to create 3D content that can run in a Web browser without special plug-ins. Behr et al. [2009] introduced X3DOM, a model that allows the direct integration of X3D nodes into HTML5 DOM content. X3DOM eases the integration of X3D in modern Web applications by directly mapping and synchronizing live DOM elements to an X3D scene model. This type of integration model is evolving notably and has been used as a framework for the online visualization of 3D city models [Mao and Ban 2011].


4.1 Common X3D and SAI

We implemented a Web-based system to support the visual exploration of a large collection of metadata from the distributed Sensor Web. The prototype supports online browsing of approximately 34,000 sensor systems from which datasets are accessible. The architecture of the visual metadata exploration system is divided into two parts, the client side and the server side, as shown in Figure 2. On the Web server, data are processed and filtered to generate a dynamic X3D scene. We use PHP as the overall programming framework and PostgreSQL as the back-end database for storing the metadata processed from the distributed Sensor Web. PostGIS is used to extend the geospatial capability of the database.
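The server-side step of turning filtered metadata into a dynamic X3D scene can be illustrated with a short sketch. Our prototype does this in PHP; the Python version below is only an assumed equivalent, and the marker geometry (a box under a GeoLocation node) is illustrative rather than the prototype's actual template:

```python
from xml.sax.saxutils import escape

def sensor_scene(stations):
    """Emit a minimal X3D scene with one geo-referenced marker per station.
    GeoLocation/Shape/Box follow X3D element names; the choice of a unit box
    as the marker is illustrative, not the prototype's actual template."""
    nodes = []
    for name, lat, lon in stations:
        nodes.append(
            f'<GeoLocation geoCoords="{lat} {lon} 0">'
            f'<Shape><Box size="1 1 1"/></Shape>'
            f'</GeoLocation><!-- {escape(name)} -->'
        )
    return "<X3D><Scene>" + "".join(nodes) + "</Scene></X3D>"

# A request for one station produces a small scene document
# that the client-side X3D player can load directly.
scene = sensor_scene([("Station A", 37.5, 127.0)])
```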

Ahn et al. [2013] proposed a content structure for building mobile AR applications in HTML5 that cleanly separates mobile AR content from its application logic so that it scales like the Web. They extended points of interest (POIs) to objects and places identified by Uniform Resource Identifiers (URIs). They represent the objects of interest of an application as DOM elements and control their behavior and user interactions through DOM events.


3D Web Integration Models

Common Geospatial Web Technologies

3.1 X3D Earth

X3D Earth is an open-standard-based technology for publishing Earth globes. It includes tools that enable users to build their own globes using their own data. We built our X3D Earth globe instance based on the specific requirements of exploring metadata from particular sensor networks [Yoo and Brutzman 2009]. To use the X3D Earth globe to provide background geographical information for the metadata of sensor resources, a self-referring PHP script was created to generate X3D terrain-tile sets on the fly. When the server receives an initial query pertaining to the location, it constructs an X3D scene that includes a top-level geo-referenced elevation model (a GeoElevationGrid node) with height data and appropriately draped texture imagery (an ImageTexture node), along with child URL links to the subsequent lower quadtree of similarly constructed scenes. The PHP script is then used repeatedly to query, generate and link terrain-tile files dynamically for each subsequent inline quadtree child. The terrain height data, bathymetry altitude data, cartography, satellite imagery and/or aerial photography of interest are retrieved as needed to produce tiled structures for multi-resolution terrain. The X3D Earth globe works with several geospatial Web services, including OpenAerialMap and OpenStreetMap.
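The quadtree subdivision that the self-referring tile script performs at each inline level can be sketched as follows. This is only the geometric step, assuming tiles are addressed by latitude/longitude bounding boxes; the actual PHP parameterization is not shown in the paper:

```python
def child_tiles(lat_min, lat_max, lon_min, lon_max):
    """Split a terrain tile's bounding box into its four quadtree children.
    Each child becomes the extent of a subsequently generated, similarly
    constructed X3D scene linked by a child URL."""
    lat_mid = (lat_min + lat_max) / 2
    lon_mid = (lon_min + lon_max) / 2
    return [
        (lat_min, lat_mid, lon_min, lon_mid),  # south-west child
        (lat_min, lat_mid, lon_mid, lon_max),  # south-east child
        (lat_mid, lat_max, lon_min, lon_mid),  # north-west child
        (lat_mid, lat_max, lon_mid, lon_max),  # north-east child
    ]

# One recursion step over a quadrant of the globe:
children = child_tiles(0, 90, 0, 90)
```

Because each child scene links to its own four children in turn, the browser loads progressively finer terrain only along the user's path of exploration.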

Figure 2. Web-based architecture of the implemented system.

3.2 Geospatial Query

A geospatial query is necessary to generate 3D geospatial Sensor Web visualizations dynamically during progressive exploration and focus narrowing. We collect jurisdictional information, such as city and country names, when we extract metadata from sensor networks. To create a semantic relationship between the jurisdictional name of a region and the geographical geometry used by the geospatial query, we use the composition of macro-geographical (continental) regions, geographical sub-regions, and selected economic and other groupings [United Nations Statistics Division 2011] provided by the United Nations Statistics Division. The correlated region names, codes, and geometries are stored in our online database.



Figure 3. 3D Web integration of the Sensor Web based on the common X3D and SAI integration model: (a) a global view and (b) a local view.

Presentations and interactions are realized in the Web browser, as shown in Figure 3. We support the latest builds of Mozilla Firefox, Google Chrome, and Microsoft Internet Explorer. We employ BS Contact Geo [Web3D Consortium 2010] as a standards-conformant X3D player to render X3D scenes and handle interactive events during the exploration of the metadata. We chose BS Contact Geo because it conforms to the specification of the X3D Geospatial Component and integrates with multiple Web browsers, as mentioned above. While the basic structure of the interface is transmitted as HTML, CSS, and JavaScript files, the actual processed and filtered data for the interactive exploration of metadata are retrieved by the browser as X3D XML-encoded objects generated in response to the current geospatial query.

4.3 Webized AR Content Model

Webizing mobile AR content [Ahn, Ko and Feiner 2013] refers to the process of combining virtual and real-world objects through three components: URIs for dealing with physical objects, a CSS Place Media module, and DOM event extensions for situated interaction.

4.3.1 AR Content Structure in HTML

In this model, a point of interest (POI) refers to a physical object. It is viewed as a physical resource, an extension of a Web resource. It is assumed that any POI has a URI and is accessible through HTTP. However, identifying and tracking physical objects requires feature samples suited to the recognition method. To address this, the webized AR content model uses Representational State Transfer (REST) to provide various types of feature samples as representations of a POI's URI via HTTP content negotiation. The browser can choose one of the available tracking methods and request the feature sample data via the URI of the target POI. As a result, the AR content does not need to include all feature samples for tracking; it simply refers to the URIs of the target POIs.

HTML elements describing POIs and their augmented models must be rendered in a 3D environment and not on a page. For this reason, the model uses a place-based augmentation model to situate virtual objects in the physical world on a human scale. The AR content is situated at a place that provides its own coordinate reference system (CRS) and POI map. To manage the 3D augmentation model, the CSS media type is extended to place media. The place media module selects the HTML element to augment onto the target POI and specifies its 3D position and orientation. A CSS rule has two main parts: a selector and one or more declarations consisting of property-value pairs. The selector determines the elements to which the CSS declarations apply. The -ar-target property specifies the target POI or place; with this property, the selected HTML element is transformed depending on the target POI. In addition, -ar-transform applies a 3D transform operation to the elements using three methods: translate, rotate and scale. This property stems from the CSS 3D transform but differs in that its origin is determined dynamically by -ar-target.
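A place media rule of this kind might look as follows. The -ar-target and -ar-transform property names come from the model described above, but the selector, the POI URI, and the concrete transform values are hypothetical, shown only to illustrate the rule structure:

```css
/* Bind the #weather-chart element to a physical target identified by its
   POI URI (hypothetical URI); the element is then rendered relative to
   that POI rather than on the page. */
#weather-chart {
  -ar-target: url("http://example.org/poi/globe/japan");
  /* Position, orient and scale the element in the place's coordinate
     system; analogous to CSS 3D Transforms, except that the origin is
     determined dynamically by -ar-target. */
  -ar-transform: translate(0, 0, 5cm) rotate(0, 90deg, 0) scale(0.5);
}
```

Because the binding lives in the style sheet rather than in the markup, the same HTML element can be re-targeted to a different POI without touching the content.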

4.2 X3DOM Integration

Because the W3C standards of HTML5 and CSS3, as well as Web browser implementations, are moving forward quickly, X3D has also been moving toward a new framework and runtime that supports the integration of HTML5 and declarative 3D content. The new framework tries to fulfill the current HTML5 specifications for declarative 3D content and allows the inclusion of X3D elements as part of any HTML5 DOM tree. We therefore employ X3DOM as an HTML5/X3D integration model that renders X3D scenes and handles the interactive events that arise during the exploration of the metadata. The architecture of the visual metadata exploration system using the HTML5/X3D integration model is depicted in Figure 4. Presentations and interactions are realized in the Web browser. We support the latest builds of WebGL-enabled browsers, which are available for most platforms. The data for the interactive exploration of the metadata are retrieved by the Web browser as XHTML embedded in HTML documents. In contrast to the aforementioned Web-based architecture employing a separate X3D player, this integration model eliminates the most significant drawback of the legacy Web-based architecture: the need for third-party plug-ins to enable an interactive representation of 3D geospatial information.
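The direct DOM integration that X3DOM provides can be illustrated with a minimal page. This is a generic sketch rather than our prototype's markup, and it assumes the runtime script and stylesheet published at x3dom.org:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- X3DOM runtime and default stylesheet, as published at x3dom.org -->
  <script src="https://www.x3dom.org/download/x3dom.js"></script>
  <link rel="stylesheet" href="https://www.x3dom.org/download/x3dom.css"/>
</head>
<body>
  <!-- X3D nodes declared directly as HTML5 DOM elements; the runtime
       renders them via WebGL, so no browser plug-in is needed. -->
  <x3d width="600px" height="400px">
    <scene>
      <shape>
        <appearance>
          <material diffuseColor="0.2 0.6 1.0"></material>
        </appearance>
        <sphere radius="1"></sphere>
      </shape>
    </scene>
  </x3d>
</body>
</html>
```

Because the scene nodes are live DOM elements, scripts can add or restyle them with ordinary DOM and CSS operations, which is what makes dynamically generated sensor visualizations straightforward in this model.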

Figure 4. Architecture of the implemented system using HTML5/X3D integration.

The examples shown in Figure 1(a) and (b) use the HTML5/X3D integration model. We implemented a prototype exploration system to verify the advantages of eliminating third-party X3D players and to demonstrate the feasibility of HTML DOM integration. Google Chrome on Mac OS X was used in these examples (other browsers such as Firefox and Safari should also work) for a comparison with the prototype using BS Contact Geo on Microsoft Windows. The current implementation of X3DOM lacks support for several nodes of the geospatial component, such as the GeoLocation, GeoTransform and GeoViewpoint nodes. Thus, we implemented visualization templates using IndexedFaceSet nodes composed of GeoCoordinate nodes instead of geo-referencing standard X3D objects.

Figure 5. The architecture of the webized mobile AR application platform.

To validate the proposed content structure, we developed a prototype mobile AR Web browser for iOS and Android. Here, we explain the components that distinguish it from typical page-based Web browsers.


We enable mashups of the X3D-based Digital Earth globe with sensor resources using the latest 3D Web integration models. We expect that harmonization of the HTML/X3D integration model and the webized AR content structure conforming to the Web architecture will facilitate the use of Sensor Web resources.

The prototype system consists of four major components: a target manager, a positioning subsystem, a place manager, and a situated renderer. Figure 5 shows the system architecture of the webized mobile AR application platform and how these core components support webized AR content. The components provide context information that is closely tied to the performance of the situated interactions. The target manager selects the target POIs from style sheets and obtains the feature samples through REST access to their URIs. The types of feature samples are extensible to the supported tracking methods; currently, RFID and image-recognition features are supported. The positioning subsystem determines the position and orientation of the user and of movable POIs. Its major components are mappers and trackers. Mappers correlate the different location representations from diverse local positioning systems into the local coordinate reference system (LCRS) of the situated place. A tracker provides pose estimations of target objects relative to the place's origin. The place manager serves information about the place environment and the local POI map; this information is retrieved from an external Web service by resolving the place ID. The situated renderer manages the unified scene graph that combines physical objects, virtual elements, and the user viewpoint in the situated place. To support device-specific requirements, Adobe PhoneGap is used as a hybrid mobile application platform; it provides JavaScript APIs for accessing device capabilities and a plug-in architecture for additional native functions.

Acknowledgement

This research was supported in part by the Ministry of Culture, Sports and Tourism (MCST) and the Korea Creative Content Agency (KOCCA) in the Culture Technology (CT) Research & Development Program 2013, and by the Korea Institute of Science and Technology (KIST) Institutional Program (Project No. 2E24100).

References

AHN, S., KO, H. and FEINER, S. 2013. Webizing Mobile AR Contents. In Proceedings of IEEE Virtual Reality, Orlando, Florida, United States, 16-23 March 2013.

BEHR, J., ESCHLER, P., JUNG, Y. and ZÖLLNER, M. 2009. X3DOM: a DOM-based HTML5/X3D integration model. In Proceedings of the International Conference on 3D Web Technology, Darmstadt, Germany, 2009, ACM, 127-135.

BRÖRING, A., ECHTERHOFF, J., JIRKA, S., SIMONIS, I., EVERDING, T., STASCH, C., LIANG, S. and LEMMENS, R. 2011. New Generation Sensor Web Enablement. Sensors 11, 2652-2699.

HOEBER, O., WILSON, G., HARDING, S., ENGUEHARD, R. and DEVILLERS, R. 2011. Exploring geo-temporal differences using GTdiff. In Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), 1-4 March 2011, 139-146.

LIANG, S., 2009. What is Sensor Web? Available from: http://sensorweb.geomatics.ucalgary.ca/gsw/what-is-sensorweb.

LIANG, S., CHANG, D., BADGER, J., REZEL, R., CHEN, S., HUANG, C.Y. and LI, R.Y. 2010. GeoCENS: Geospatial Cyberinfrastructure for Environmental Sensing. In Proceedings of the International Conference on Geographic Information Science, Zurich, Switzerland, 2010.

MAO, B. and BAN, Y. 2011. Online Visualization of 3D City Model Using CityGML and X3DOM. Cartographica: The International Journal for Geographic Information and Geovisualization 46, 109-114.

SANDVIK, B., 2008. Using KML for Thematic Mapping. thematicmapping.org [online]. Available from: http://thematicmapping.org/downloads/Using_KML_for_Thematic_Mapping.pdf.

STENSGAARD, A.-S., SAARNAK, C.F.L., UTZINGER, J., VOUNATSOU, P., SIMOONGA, C., MUSHINGE, G., RAHBEK, C., MØHLENBERG, F. and KRISTENSEN, T.K. 2009. Virtual globes and geospatial health: the potential of new tools in the management and control of vector-borne diseases. Geospatial Health 3, 127-141.

TOMASZEWSKI, B. 2011. Situation awareness and virtual globes: Applications for disaster management. Computers & Geosciences 37, 86-92.

UNITED NATIONS STATISTICS DIVISION, 2011. Composition of macro geographical (continental) regions, geographical sub-regions, and selected economic and other groupings [online]. http://unstats.un.org/unsd/methods/m49/m49regin.htm.

WEB3D CONSORTIUM, 2010. Player support for X3D components [online]. http://www.web3d.org/x3d/wiki/index.php/Player_support_for_X3D_components.

4.3.2 Use Case

We implemented a prototype system and applied it to the same examples explained for the aforementioned integration models. When the user selects sensor data (e.g., temperature) in our prototype AR Web browser, the system displays the average temperature values recorded by sensors at weather stations within each country's borders (Figure 1(c)). A geospatial query that computes the average temperature by country from the registered weather stations is implicitly formulated within our system, and statistical, dynamic 3D bar charts are generated and registered as objects on the physical globe. The 3D bar charts are dynamically added as AR DOM elements in the webized AR application platform (Figure 5). The physical globe is a place in our augmentation model; it uses CSS place media, with each country functioning as a target POI. The 3D bar charts are then added as virtual elements onto the physical globe. The prototype AR browser augments the physical globe in the real world by overlaying the dynamic 3D bar charts onto it. The color and the height scale of the bar charts in Figure 1(c) vary in proportion to the temperature. The 3D bar charts show actual temperature values scaled to fit the physical globe, and the user can easily assess differences in the average temperatures between stations. When the user is interested in a specific region, e.g., Japan, as shown in Figure 1(d), the user can simply move the mobile device to gaze at Japan on the physical globe. This interface not only augments cyber-physical information in a unified space but also provides a more intuitive experience for users navigating the physical sensor network.
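The implicit per-country aggregation and the bar-height scaling described above can be sketched as follows. The data model and the scaling constants are illustrative assumptions; in the actual system the aggregation runs as a geospatial query on the server:

```python
from collections import defaultdict

def country_averages(readings):
    """Average temperature readings per country.
    `readings` is an iterable of (country_code, temperature) pairs;
    this flat structure is an illustrative stand-in for the server-side
    geospatial query over registered weather stations."""
    sums = defaultdict(lambda: [0.0, 0])
    for country, temp in readings:
        sums[country][0] += temp
        sums[country][1] += 1
    return {c: total / n for c, (total, n) in sums.items()}

def bar_height(avg, max_height=30.0, t_min=-30.0, t_max=50.0):
    """Scale an average temperature linearly into a bar height that fits
    the physical globe (range constants are assumptions)."""
    return max_height * (avg - t_min) / (t_max - t_min)

# Two Japanese readings average to 15 degrees; the bar chart for that
# country would then be drawn with the corresponding scaled height.
avgs = country_averages([("JP", 10.0), ("JP", 20.0), ("KR", 5.0)])
```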



Conclusion

The main contribution of this work is an experimental comparison of different approaches to a 3D geospatial Web integration model. We compare a common X3D integration model to an HTML5/X3D integration model that does not require any third-party plug-ins, while also assessing a webized AR content model that conforms to the current Web content architecture.


WOOD, J., DYKES, J., SLINGSBY, A. and CLARKE, K. 2007. Interactive Visual Exploration of a Large Spatio-temporal Dataset: Reflections on a Geovisualization Mashup. IEEE Transactions on Visualization and Computer Graphics 13, 1176-1183.

YANG, J. and ZHANG, J. 2010. Towards HTML 5 and interactive 3D graphics. In Proceedings of the International Conference on Educational and Information Technology (ICEIT), 17-19 Sept. 2010, V1-522 to V1-527.

YOO, B. and BRUTZMAN, D. 2009. X3D Earth terrain-tile production chain for georeferenced simulation. In Proceedings of the International Conference on 3D Web Technology, Darmstadt, Germany, 2009, ACM, 159-166.


