Presented at the Joint AURISA and Institution of Surveyors Conference Adelaide, South Australia 25 – 30 November 2002
Composition of Augmented Reality and GIS To Visualize Environmental Changes
Payam Ghadirian (1) and Ian D. Bishop (2)
(1) Department of Geomatics, The University of Melbourne, VIC 3010, AUSTRALIA. Phone: +61 3 8344 6881 Fax: +61 3 9347 2916 Email: [email protected]
(2) Centre for GIS and Modelling, The University of Melbourne, VIC 3010, AUSTRALIA. Phone: +61 3 8344 7500 Fax: +61 3 9347 2916 Email: [email protected]
ABSTRACT Nowadays, researchers, managers and even the general public are much more concerned about environmental changes. Changes in the landscape around us, and their effects on our lives, are among the most important indicators of cause for environmental concern. Observation and monitoring of change, and attempts to create more realistic simulations of the evolving environment, have become key research fields. Nevertheless, current visualisation systems are still unable to create a sense of place and realism sufficiently similar to that of the real world. A very beautiful animation is not necessarily a sufficient surrogate for real world experience. Therefore, in order to use simulation to monitor people’s responses to and behaviour in real world situations, we need better tools. The development of an interactive visualisation system that can use all the amenities of current systems (geo-referenced GIS, environmental process modelling, and 3D modelling and rendering) in combination with a 3D immersion system that uses real world elements (video sequences) would overcome several of the problems associated with current systems. This paper introduces the preliminary stages of a project which combines GIS based environmental process modelling with the use of augmented reality (AR) technology to present environmental change in an immersive environment. The paper first reviews current visualisation systems, then introduces the concepts and existing development of our proposed system, and describes the proposed weed spread case study and some experiences in multi-channel video capture. KEYWORDS: GIS, Environmental Visualization, Video Sequences, Augmented Reality
INTRODUCTION In the past, different types of Visualisation and Virtual Reality have been utilised for many purposes. The training of pilots, the entertainment industry, architecture and urban planning and in recent years, environmental
visualisation, are among the various applications of these technologies. Augmented reality is another visualisation technique, defined as a combination of the real scene viewed and a virtual scene generated by the computer, in which the virtual objects are superimposed on the real scene (Backman, 2000). Vallino (1998) believes that augmented reality (AR) could be considered the ultimate immersive system, and researchers have proposed AR as a solution in many domains. Medicine, the entertainment industry, the military, engineering design, robotics and tele-robotics are some application areas of this technology. He also indicated that augmented reality does not simply mean the superimposition of a graphic object over a real world scene. It requires detailed knowledge of the relationship between the frames of reference for the real world, the camera viewing it and the user. Environmental assessment and landscape visualisation are among other applications of this technology (e.g. Rokita, 1998; Nakamae, et al., 1999, 2001; Qin, et al., 1999). On the other hand, Geographical Information Systems (GIS) are increasingly being used for inventory, analysis, understanding, modelling and management of the natural environment (Bouma and Bregt, 1989; Burrough, 1986; Goodchild, et al., 1993; Maidment, 1993, 1995; as cited by Burrough, 1997). Also, computer graphics techniques have been used widely in 3D GIS to visualise data and to enhance the interaction of the user with the data (Verbree, et al., 1999). Furthermore, the use of GPS and its linkage to 3D GIS models has provided further capabilities in this regard. Video Mapping is one such feature, which can populate a GIS database with actual snapshots and streaming video/audio linked to their map locations (Berry and Burgess, 2000). In this method, GPS signals, which can be recorded directly on the videotape, provide positioning information for linking video sequences to the GIS.
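The Video Mapping linkage described above amounts to matching each video frame's timestamp against a recorded GPS track. A minimal Python sketch of that interpolation follows; the function name and track format are hypothetical illustrations, not Berry and Burgess's implementation:

```python
from bisect import bisect_left

def frame_position(track, t):
    """Linearly interpolate the camera position at video time t (seconds)
    from a GPS track of (time, lat, lon) tuples sorted by time."""
    times = [p[0] for p in track]
    i = bisect_left(times, t)
    if i == 0:
        return track[0][1:]          # before the first fix: clamp
    if i == len(track):
        return track[-1][1:]         # after the last fix: clamp
    (t0, lat0, lon0), (t1, lat1, lon1) = track[i - 1], track[i]
    f = (t - t0) / (t1 - t0)
    return (lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0))

# GPS fixes typically arrive once per second; PAL video runs at 25 fps,
# so frame n of a sequence is queried at t = n / 25.0.
track = [(0.0, -36.000, 147.000), (1.0, -36.0001, 147.0002)]
print(frame_position(track, 0.5))    # midpoint between the two fixes
```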
Our main objective in this research is the development of a new immersion situation, using three similar video cameras to record simultaneous videos and then project them seamlessly on three large screens. Using AR, montage and visual special effects, environmental changes derived from modelling within a geo-referenced GIS will be represented realistically in the immersive augmented environment. Observation of people’s responses to and behaviour in the new system, in comparison with current visualisation systems, will be the future direction of this research. In this paper, after a brief review of current visualisation systems, the difficulties associated with them will be discussed. Then, the main concepts of our proposed system and a case study will be introduced.
CURRENT VISUALISATION SYSTEMS Over the last few years, several studies regarding landscape visualisation, monitoring of environmental changes and the role of GIS have been carried out. The validity and degree of realism of created models, and their importance for people’s behaviour in the virtual world, have also been discussed. Augmented reality and the composition of computer generated graphics with video sequences are among other important issues covered below.
Landscape Visualisation/GIS and Visualisation Bishop and Karadaglis (1996) pointed out that the increasing availability of very high performance graphics engines offers new opportunities for combining interactive environmental modelling with interactive visualisation within immersive decision support environments. They described the development of such a system, combining GIS based modelling of environmental impact with high performance visual simulation in a multichannel graphics environment. Integration between GIS and visualisation has been the subject of several research projects in the last few years. Verbree, et al. (1999) proposed a multi-view approach based on different types of visualisation to support 3D GIS interaction within virtual reality environments. This system is operational in a range of visualisation modes, from PC monitor to Virtual Workbench (looking down on the model as if it is a 3D scale model) to the multiprojector CAVE (Cruz-Neira et al, 1992), and allows interaction with GIS data in a range of virtual views: plan view, model view and world view (Figure 1). Maren and Germs (2001) have carried out further research in this area. They presented a multi-view approach to support GIS interaction within virtual reality environments using existing GIS and VR technology. They also developed a 3D GIS/VR system (K2VI), using ESRI’s Spatial Database Engine (SDE), that supports visualisation, manipulation and editing of standard GIS data from within a VR environment.
Figure 1: Different view modes (plan view, model view, world view). Verbree, et al. (1999)
Validity of visualisation/People and the virtual world Some researchers have discussed the importance of validity and visualisation effectiveness in different methods of visualisation. For instance, Sheppard discussed the risks of a growing but unstructured use of landscape visualisations as a popular decision-making and public communications tool in planning (Sheppard, 2001). A framework was proposed for the guidance and support of visualisation practitioners, with the aim of improving the chances of ethical practice and scientific validity in the use of these systems. Bishop (1992, 1994) has long argued the importance of realism in any simulation to be used in environmental decision support. Lange (2001) also discussed the validation of simulations of virtual landscapes, in terms of their degree of realism. He also asked an important question as to ‘whether, how and to which degree, the real visually perceived landscape, represented through photographs, can be validly represented by means of virtual landscapes.’ Besides the degree of realism, immersion is another important concept in visualisation systems. Bishop et al. (2001b) pointed out that using an immersion situation, instead of just a simple monitor, makes conditions more similar to what we see in the real world and therefore increases the validity of the system. Danahy (2001) argued that in an immersion situation, using three screens to present two images on either side of the primary focus in the scene, people gain a much better sense of the spatial structure of a given location. He said that people are able to obtain a relatively accurate sense of scale and distance in panoramic projection and can therefore judge distance when calibrated binocular viewing is not available.
He added that the visual media commonly used to structure scientific analysis, professional design, decision-making and artistic interpretations of visual landscapes are quite weak at portraying the dynamic and peripheral dimensions of human vision. He also suggested that landscape visualisation needs to develop instruments for research that more fully capture the fundamental components of human vision before we can properly advance landscape research or practice. Finally, Danahy proposed the use of a peripheral vision and dynamic viewing system in deliberations about visual landscapes. Bishop and Dave (2001) pointed out that in a multi-projector immersive environment a high level of perceived reality can be gained, but that this would be significantly enhanced by the inclusion of natural interactivity with the environment. This enhances a user’s sense of place in the environment and so increases the validity of assessments and responses. People’s behaviour and their interaction with a virtual world have been the subject of several studies. Bishop et al. (2001a) described the process of model building for a section of the Dee valley in north-east Scotland. This included the development of software to support interactive exploration, and an experiment designed to answer some questions about validity and also local landscape preferences. The conclusion which may be drawn from these studies is that realism, immersion and interactivity are all significant contributors to a valid virtual environment experience. In the discussion below we add the concept of inclusion of valid and verified environmental models to show the effects of changing landscapes and of human management decisions. The use of augmented reality techniques as an alternative approach to achieving both realism and interactivity is seen as a key element of a successful outcome.
Augmented Reality Augmented reality and its various applications have been described by several researchers. Vallino (1998) first defined virtual and augmented reality and their differences. He also discussed applications and types of augmented reality systems and the difficulties encountered in registering computer generated graphics with video sequences. Nakamae et al. (1999, 2001) argued that panned/zoomed video sequence images composited with computer generated images give viewers a much more lifelike scene and better visualisation than still montaged images. They also introduced rendering techniques for visual environmental landscape assessment using computer graphics and/or video sequence images. These techniques are classified into three categories: (1) computer generated images and montages for visualising landscapes with photo-realistic rendering techniques, (2) panoramic images and panoramic montages employing image-based rendering techniques made from video sequence images, and (3) panned/zoomed video sequences composited with computer generated images. Other research in this area was carried out by Rokita (1998). This study described new techniques for compositing computer generated images and real world video sequences obtained using a camcorder. Possible applications of these techniques include environmental assessment and pre-evaluation of the visual impact of large-scale constructions. Problems and difficulties As discussed above, virtual environment systems have developed considerably in recent years. With better and more powerful computers, the quality of these systems has improved rapidly. Texture mapping and other advanced techniques have created more realistic visualisations (Liggett and Jepson, 1995). Sheppard believes that, parallel to the development of computer systems, the number of people who are using these technologies, and the number of people demanding their use, seem to be increasing (Sheppard, 2001).
Although the development and popularity of these systems have increased, it seems that there is still a huge gap between the real environment and the best virtual worlds that are commonly used. The main reason for this gap is the limitation of computers to recreate or visualise complexities of the real world – especially in the natural environment. So, in the best case, the result may be a very beautiful and highly realistic animation that permits no user interaction or a degraded simulation with some level of user control. Therefore, these visualisation systems are not ideal for public communication or decision making. Some researchers have been concerned about this problem. For instance, Daniel and Meitner believe that: “Environmental visualisations may be completely accurate with respect to their portrayal of relevant and accurately projected bio-physical conditions, but still produce perceptions, interpretations, and value judgements that are not consistent with those that would be produced by actual encounters with the environments represented.” (Daniel and Meitner, 2000 as cited by Sheppard, 2001).
PROPOSED VISUALISATION SYSTEM As mentioned above, some researchers have been concerned about the validity, deficiencies and lack of a sense of place in current visualisation systems, and Danahy discussed the establishment of a new generation of visualisation technology (Danahy, 2001). Our initial objective in this research has been the development of a new immersion situation, using three similar video cameras to record simultaneous videos and then project them on three large screens in our CAEV facility (http://www.arbld.unimelb.edu.au/~bdave/caev/) (Figure 2). In comparison with other display techniques, including a single screen (a large TV), a single projector (cinema) and a Head Mounted Display (HMD) (Figure 2), the multiple-projector CAEV has been chosen. Our naturally wide field of view (200 degrees lateral and 125 degrees vertical) makes a single screen or projector inappropriate. On the other hand, HMDs have been unpopular because of the weight of the devices, limitations in field of view and motion sickness effects on the user.
Figure 2: CAEV and HMD The CAEV consists of three 2.4m (w) x 1.8m (h) screens, each with a dedicated high-resolution data projector, giving a total projection width of 7.2m and a 135 degree field of view from the preferred viewing position. Images are fed to the projectors by (currently) three Silicon Graphics computers. The fast processors allow for real-time movement through reasonably detailed and realistic three-dimensional models of environments. In the next stage of development the SGIs will be replaced with PCs running Linux and high-performance graphics cards (currently GeForce-3). We also plan to install a single PC with high speed SCSI drives and a multi-channel MPEG-2 card, which will allow simultaneous projection of 3 video sequences to the 3 projectors. The concepts of the proposed system are shown in Figure 3. An environmental impact model (in our case a weed spread model – see below) will be entered in a geo-referenced 3D GIS of the selected area. By running this model, alternative futures under environmental change will be defined. Furthermore, video sequences of these regions (using three video cameras), as well as GPS data for instant positioning of the cameras, will be acquired and linked to the geo-referenced models. In the last part, using augmented reality, special effect techniques and the GPS data, the video sequences and computer (GIS) generated images will be mixed to produce 3 linked video sequences, augmented by the modelling projections. These will be projected in the CAEV immersion situation.
Figure 3. Components of the proposed system
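The CAEV dimensions quoted above imply a preferred viewing distance, which can be checked with simple trigonometry. This sketch assumes the viewer sits equidistant from three flat screens, each subtending an equal 45-degree share of the 135-degree field of view:

```python
import math

screen_w = 2.4                       # width of each of the 3 CAEV screens (m)
total_fov = 135.0                    # stated field of view (degrees)
per_screen = total_fov / 3           # 45 degrees subtended by each screen
half_angle = math.radians(per_screen / 2)

# Half the screen width subtends half the per-screen angle at the viewer,
# so the perpendicular viewing distance is:
d = (screen_w / 2) / math.tan(half_angle)
print(f"preferred viewing distance: {d:.2f} m")   # about 2.90 m
```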
Weed-Spread Model Visualisation of the spread of weeds through the landscape depends upon an effective spread model. Available model types are reviewed by Higgins and Richardson (1996). Many of the models are not spatially explicit and hence not suitable to support visualisation. Higgins and Richardson argue that in many circumstances individual-based cellular automata (CA) models are the best spatially explicit option. Through their simulation approach “they are extremely flexible, …allowing the incorporation of spatial heterogeneity and stochasticity” (p262). The approach adopted here is similar to a CA model but less restrictive, in the sense that influences over longer distances are allowed. The model, which is written in Visual Basic for Applications (VBA) within ArcGIS v 8.1 (www.esri.com), is based on the widely recognised stages of seed-based plant spread: seed dispersal, germination and establishment, growth to reproductive maturity and further seed dispersal. At each stage the model parameters can be adjusted (in order to mimic different species) and stochastic influences introduced (Figure 4). The model also allows for the application of weed control measures at different times within the simulation period and with different success rates. The output is a series of maps showing grid cells (as small as 1 m²) that have been infested and the age, and hence likely size, of the infestation (Figure 4). This information can be fed back to the visual simulation driver program and the appropriate elements (e.g. simulated blackberry bushes) added to the virtual landscape before combination with video footage of the same region to generate the augmented reality view.
Figure 4. The ArcGIS screen showing the specification of spread parameters and the output map from 10 spread cycles from a single blackberry bush (red cells are older and green cells newer).
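The spread stages described above (dispersal, establishment, ageing to maturity, further dispersal) can be sketched as a small stochastic grid model. The actual model is VBA within ArcGIS; this Python version is only an illustration of the scheme, and all parameter names and values are invented for the example. Note that, as in the paper's model, seeds may travel beyond the immediate CA neighbourhood:

```python
import random

def spread(grid_size, start, cycles, p_establish=0.3,
           maturity=2, seeds_per_plant=5, max_dist=3, seed=42):
    """Stochastic, CA-like weed spread sketch. Each cell holds a plant
    age (0 = empty). Mature plants scatter seeds up to max_dist cells
    away, allowing longer-range dispersal than a strict CA neighbourhood."""
    random.seed(seed)
    age = [[0] * grid_size for _ in range(grid_size)]
    age[start[0]][start[1]] = maturity            # the initial bush
    for _ in range(cycles):
        new = []
        for r in range(grid_size):
            for c in range(grid_size):
                if age[r][c] >= maturity:          # reproductively mature
                    for _ in range(seeds_per_plant):
                        dr = random.randint(-max_dist, max_dist)
                        dc = random.randint(-max_dist, max_dist)
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < grid_size and 0 <= cc < grid_size
                                and age[rr][cc] == 0
                                and random.random() < p_establish):
                            new.append((rr, cc))   # germination succeeded
        for r in range(grid_size):
            for c in range(grid_size):
                if age[r][c]:
                    age[r][c] += 1                 # existing plants get older
        for rr, cc in new:
            age[rr][cc] = 1                        # new seedlings
    return age

ages = spread(grid_size=21, start=(10, 10), cycles=10)
infested = sum(1 for row in ages for a in row if a)
```

The cell age doubles as the map attribute: older (redder, in Figure 4) cells carry larger simulated bushes when fed to the rendering stage.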
Multiple camera capture and projection A feasibility study based on the proposed system is ongoing. Some of the difficulties in using three simultaneous video cameras and seamless reprojection have been identified. In this study, a bracket for mounting video cameras on the handlebars of a bicycle was designed and built (Figure 5). Videos using one and three similar video cameras mounted on this bracket have been captured and, using video editing software (Adobe After Effects 5.5), some preliminary wide-screen sequences have been produced (Figure 6).
Figure 5: Three camera mounting bracket
Figure 6: (a) Frames from the 3 separate cameras (b) Composited view
Initially, some experiments with one and three cameras in a cycling situation were carried out, but because of the great amount of vibration and bicycle tilt the resulting videos were not usable. Subsequently, the bicycle was carefully wheeled rather than ridden. The experiments were repeated using different features of the digital video cameras (Canon MV1), including Image Stabiliser, Progressive Scan and 16:9 ratio. Videos at the Wide, Medium and Tele settings of the camera lens were also produced. Some of the findings of the feasibility study are as follows:
• Using the camera lens at its maximum wide angle may produce curved edges and create difficulties in the edge matching process (Figure 7).
• Using a large amount of overlap between adjacent cameras may disturb the geometry; in this case, combination of the videos would be difficult if not impossible.
• For continuous brightness and coherent contrast in the combined video, manual exposure produces much better results than automatic exposure.
• In addition to using manual exposure, a cloudy sky provides the best weather conditions for this experiment.
• Using the 16:9 ratio on a single camera may produce the best edge matching, but for projection on large screens the limited number of image pixels means that pixel sizes would be too large and the final results undesirable. Three cameras are necessary to achieve the desired resolution using conventional PAL or NTSC formats; high-resolution HDTV equipment may make a single-camera approach viable.
Figure 7: Curved edges in a wide-angle lens
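The resolution point in the last finding can be checked with quick arithmetic, assuming the 720-pixel active width of a PAL frame is stretched across the full 7.2 m CAEV projection width:

```python
pal_width_px = 720          # active-line width of a PAL frame (pixels)
screen_total = 7.2          # combined width of the three CAEV screens (m)

# Projected pixel size on screen, in millimetres per pixel.
one_cam_mm = screen_total / pal_width_px * 1000
three_cam_mm = screen_total / (3 * pal_width_px) * 1000
print(f"single camera : {one_cam_mm:.1f} mm/pixel")   # 10.0 mm
print(f"three cameras : {three_cam_mm:.1f} mm/pixel") # 3.3 mm
```

A 10 mm screen pixel is plainly visible from the preferred viewing position, which is why three cameras (or HDTV capture) are needed.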
Augmented Reality The weed simulations will be modelled and rendered using 3D Studio Max. The models may be full 3D entities, or 2D texture-mapped billboards or intersecting planes may be used. These will be accurately positioned in the landscape and appropriately occluded by superposition on the local digital terrain model (1:25000 scale, 10 m contours). The terrain will be rendered in a solid colour (e.g. blue) and video of the real environment merged into the sequence using a chroma-key process. In the case of a moving camera (pan, zoom or physical movement), the corresponding frame sequence will be generated for the DTM and modelled weed locations, and the merging of video and graphics will be done on a frame-by-frame basis.
Pilot Study Using the results of the feasibility study, a pilot project will be executed, in conjunction with other studies in the Cudgewa Valley in rural Victoria (Australia), to examine the validity and efficiency of the proposed system (Stock and Bishop, 2002). We will also compare the results of the proposed system with other visualisation techniques.
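The chroma-key merging described under Augmented Reality above can be illustrated with a minimal per-pixel sketch. In practice the compositing is done in video editing software or hardware; the frame representation and names here are purely illustrative:

```python
KEY = (0, 0, 255)   # the solid blue used when rendering the terrain

def chroma_key(rendered, video):
    """Per-pixel merge: wherever the rendered frame shows the key colour
    (bare terrain), show the real video instead; elsewhere keep the
    rendered weed model. Frames are equal-sized lists of rows of RGB tuples."""
    return [
        [v if r == KEY else r for r, v in zip(rrow, vrow)]
        for rrow, vrow in zip(rendered, video)
    ]

# 2x2 toy frames: the top-left pixel is a rendered weed (green);
# the rest is keyed terrain that lets the video through.
rendered = [[(0, 120, 0), KEY], [KEY, KEY]]
video = [[(90, 90, 90)] * 2, [(80, 80, 80)] * 2]
merged = chroma_key(rendered, video)
```

For a moving camera the same merge simply runs once per frame, with the rendered sequence regenerated from the DTM for each camera pose.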
CONCLUSION We have just begun to explore the capabilities and applications of the proposed system, but it seems that the composition of augmented reality, GPS and GIS can provide a versatile means of representing environmental changes in an immersion situation. Preliminary experiments with static and mobile cameras are ongoing, and further research to delineate the limitations and capabilities of the system is necessary. A greater sense of place, and consequently improved public communication, means improved awareness among the general public of the environmental changes around us, and supports the development of public participation in the decision-making process.
REFERENCES Backman, A., 2000, "Augmented Reality", http://www.cs.umu.se/kurser/TDBD12/HT00/lectures/ar.pdf , 2001. Berry, J. K. and K. Burgess, 2000, "Practical Applications of Video Mapping in Natural Resources", http://corp.pacificmeridian.com/basis2/misc/gis01_vms.htm , 2001. Bishop, I. D. (1992). Data integration for visualisation: the role of realism in public debate. Proceedings AURISA '92. Gold Coast, Queensland: 74-80. Bishop, I. D. (1994). The role of visual realism in communicating and understanding spatial change and process. in H. M. Hearnshaw & D. J. Unwin (eds), Visualization in Geographic Information Systems. Chichester, John Wiley. Bishop, I. D., 2001. “Predicting movement choices in virtual environments.” Landscape and Urban Planning. 56(3-4), 97-106. Bishop, I. D. and B. Dave, 2001, "Beyond the moving camera: Systems development for interactive immersive exploration of urban environment", Proceedings of Computers in Urban Planning and Urban Management, Honolulu, The USA.
Bishop, I. D. and C. Karadaglis, 1996, "Combining GIS based environmental modelling and visualisation: Another window on the modelling process", Proceedings of Third International Conference/Workshop on Integrating GIS and Environmental Modeling, Santa Fe, New Mexico, The USA. Bishop, I. D., J. R. Wherrett, D. R. Miller, 2001a. “Assessment of path choices on a country walk using a virtual environment.” Landscape and Urban Planning. 52(4), 225-237. Bishop, I. D., W. S. Ye, C. Karadaglis, 2001b. “Experiential approaches to perception response in virtual worlds.” Landscape and Urban Planning. 54(1-4), 115-123. Bouma, J. and A. K. Bregt, 1989. "Land Qualities in Space and Time". Wageningen, PODOC. Burrough, P. A., 1986. "Principles of Geographical Information Systems for Land Resource Assessment". Oxford, Oxford University Press. Burrough, P. A., 1997, "Environmental modelling with Geographical Information Systems", Innovations in GIS 4, Taylor & Francis. 146-153. Cruz-Neira, C., D. J. Sandin, T. A. DeFanti & J. C. Hart (1992). The CAVE: Audio Visual Experience Automatic Virtual Environment. Communications of the ACM. 35, 64-72. Danahy, J. W., 2001. “Technology for dynamic viewing and peripheral vision in landscape visualisation.” Landscape and Urban Planning. 54(1-4), 127-138. Daniel, T. C. and M. J. Meitner, 2000. “Representational validity of landscape visualisations: the effect of graphical realism on perceived scenic beauty of forest vistas.” Journal of Environmental Psychology. In press. Goodchild, M. F., B. O. Parks, L. T. Steyaert, 1993. "Environmental Modelling with GIS". New York, Oxford University Press. Higgins, S. I. and D. M. Richardson, 1996, “A review of models of alien plant spread.” Ecological Modelling, 87: 249-265. Lange, E., 2001. “The limits of realism: perceptions of virtual landscapes.” Landscape and Urban Planning. 54(1-4), 163-182. Liggett, R. S. and W. H. Jepson, 1995.
“An integrated environment for urban simulation.” Environment and Planning B: Planning and Design. 22, 291-302. Maidment, D. R., 1993. "Application of Geographic Information Systems in Hydrology and Water Resources Management, Hydro GIS 1993". IAHS Publication No.211, 181-192. Maidment, D. R., 1995. "GIS/Hydrologic models of non-point source pollutants in the vadose zone". Proc. Bouyoucos Conf. May 1995, Riverside, CA, The USA. Maren, G. and R. Germs, 2001, "A Virtual Reality Interface for the Spatial Data", http://www.k2vi.com/k2vi/esri/p551.htm , 2001. Nakamae, E., X. Qin, G. Jiao, P. Rokita, K. Tadamura, 1999. “Computer-generated still images composited with panned/zoomed landscape video sequences.” The Visual Computer. 15(9), 429-442. Nakamae, E., X. Qin, K. Tadamura, 2001. “Rendering of Landscapes for Environmental Assessment.” Landscape and Urban Planning. 54(1-4), 19-32. Qin, X., E. Nakamae, K. Tadamura, 1999, "Creating a precise panorama from panned video sequence images", Proceedings of IEEE Visualisation 1999, San Francisco, California, The USA. Rokita, P., 1998. “Compositing computer graphics and real world video sequences.” Computer Networks and ISDN Systems, 30 (20-21), 2047-2057. Sheppard, S. R. J., 2001. “Guidance for crystal ball gazers: developing a code of ethics for landscape
visualisation.” Landscape and Urban Planning. 54, 183-199. Stock, C. and Bishop, I. D. (2002) Immersive, interactive exploration of changing landscapes, International Environmental Modelling and Software Society Conference, Lugano, June 24-27. Vallino, J. R., 1998, "Interactive Augmented Reality", PhD Thesis, Department of Computer Science, University of Rochester, New York, The USA. Verbree, E., G. V. Maren, R. Germs, F. Jansen, M. J. Kraak, 1999. “Interaction in virtual world views - linking 3D GIS with VR.” International Journal of Geographical Information Science. 13(4), 385-396.