On Recording Virtual Environments

Position Statement

John C. Hart
School of EECS
Washington State University
Pullman, WA 99164-2752
hart@eecs.wsu.edu

"If it ain't recordable, it ain't science." -- Thomas A. DeFanti, SIGGRAPH '88 Panel on Hardware Strategies for Scientific Visualization, calling for visualization hardware manufacturers to produce "legit NTSC video" output.

Abstract

Documentation is fundamental to science. For virtual reality to be an effective scientific visualization tool, it must be recordable. This position statement outlines the benefits of image-based recording of virtual environments and introduces the VR-VCR, a device for immersive playback of recorded virtual environments.

1 Introduction

Science requires documentation. Visualization now serves as a major component in the scientific method, and as such must be documented like every other part of a scientific investigation. The need for virtual reality to become an effective visualization tool prompts the development of recordable virtual environments. Section 2 presents arguments for the image-based recording of virtual environments, whereas Section 3 introduces the VR-VCR, a device for recording virtual environments for immersive playback.

These discussions use several general-purpose terms, defined specifically as follows. The viewer is a participant in a virtual environment, a user of an immersive display, and is assumed to exist initially in some canonical state. The position of a viewer in an arbitrary state is specified by a translation from the center of the canonical state to the center of the current state. The orientation of a viewer is specified as a rotation from the canonical state to a state congruent to the current state. Furthermore, a recorder is a viewer documenting the virtual environment, whereas an observer is a viewer immersed in the recorder's environment.

The following arguments contrast object-based versus image-based methods. Although these terms originated in the analysis of visible-surface algorithms [Sutherland et al., 1974], the following discussions exemplify their continued utility in the analysis of virtual reality.
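The viewer-state terminology above can be made concrete with a small sketch. All names here (ViewerState, trail) are hypothetical, and the orientation is shown as a quaternion only as one possible rotation encoding:

```python
from dataclasses import dataclass

@dataclass
class ViewerState:
    """A viewer's state relative to the canonical state."""
    position: tuple     # translation (x, y, z) from the canonical center
    orientation: tuple  # rotation from the canonical state, e.g. a quaternion (w, x, y, z)

# A recorder logs one state per frame; during playback an observer is
# pulled along the position trail while keeping free control of orientation.
trail = [
    ViewerState((0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)),   # canonical state
    ViewerState((0.0, 0.0, -1.5), (1.0, 0.0, 0.0, 0.0)),  # translated forward
]
```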

2 Issues in Recording Virtual Environments

In classical visualization situations, reproducing results for verification or peer communication meant re-executing the visualization system and reproducing the interaction sequences necessary to obtain the desired state, either through physically rehearsed interaction or electronically recorded controls. This form of object-based visualization recording permits complete user interaction during playback. Moreover, object-based playback brings the entire visualization system to its prerecorded state, facilitating continued investigation. On the downside, object-based visualization recording is not archivable: considering the volatility of operating systems, computer hardware and graphics libraries, a particular visualization program is unlikely to be so easily executed after a few years.

Video has long been a means for archiving interactive scientific visualization. Although this form of image-based recording lacks interactive playback, its simplicity and robustness made it the primary medium of documentation for interactive scientific visualization. Until recently, standard visualization displays did not produce NTSC video; recording required pointing an external camera at the monitor, causing a loss of contrast and timing problems. In response to this shortcoming, visualization workstations now output NTSC video signals suitable for recording on a variety of video tape formats, resulting in clearer, archivable documentation of interactive visualization sessions.

The current system for documenting virtual reality interactions also uses standard NTSC video. In this case, the visual information seen through one eye of a head-mounted display is recorded and played back on a standard video monitor. (The user is often superimposed during playback to demonstrate interactivity.) This form of image-based virtual reality recording documents only a small section of the panoramic environment. Completeness in documentation demands recording the entire visible panorama of the virtual environment. Object-based recording is capable of documenting the entire virtual environment; however, the following arguments suggest that image-based recording is the more appropriate method.

Constant Communication Rate: Object-based communication requirements are proportional to the resolution of the model, so the complexity of transmitted virtual environments is bound by network bandwidth under object-based communication. Image-based communication requirements remain constant as model resolution increases. Although current network bandwidth limitations prevent the transmission of acceptable-resolution uncompressed images at animation rates, image-based techniques will eventually permit the transmission of virtual environments of unlimited object-complexity.

Consistency: Object-based communication of a virtual environment requires an agreement on object representation. Globally, these agreements form standards; one very promising standard for virtual environment modeling is the MR toolkit [Shaw et al., 1992; Shaw, 1993]. The drawback of standards is their limitations. It is doubtful that a single standard could encompass all geometric models, including B-reps, solids and volumes, as well as behavioral models; diverse scientific applications will always require models unsupported by any current modeling standard. Image-based recording imposes a standard that does not impact the operation of scientific visualization systems. For example, NTSC video is a standard for communicating interactive visualization, but it has no impact on the way the visualization system operates. Defining a standard for image-based recording of the entire panorama of a virtual environment does not constrain its internal object representation.

High-Quality Rendering: Real-time speed constraints limit the rendering quality of virtual environments. In standard visualization systems, large amounts of data are interactively visualized at coarse resolution and re-rendered for higher-resolution playback. In virtual reality, re-rendering the observer's field of vision allows high-resolution non-immersive playback, resembling a standard visualization session. Image-based recording of virtual environments should instead record the entire visible panorama; re-rendering would then result in a high-resolution immersive view of the virtual environment.

Progressive Refinement: Progressive refinement occurs in interactive visualization during input pauses, when the computer has time to automatically re-render the current view at increasingly higher resolutions. Typical immersive displays, such as the head-mounted display, require the user to remain at a fixed position and orientation while waiting for the current scene's re-rendering. Under a complete image-based recording paradigm, high-resolution panoramic images can be rendered whenever position is fixed; since the panorama accommodates changes in orientation, the user may look around an increasingly higher-resolution virtual environment while refinement proceeds.
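The refinement loop just described might be sketched as follows. All names (render, base_res, max_res, input_idle) are hypothetical, with render standing in for a six-face panorama renderer; the key point is that only a position change interrupts refinement, since orientation changes merely re-sample the panorama already rendered:

```python
def progressive_refine(render, base_res=256, max_res=4096, input_idle=lambda: True):
    """Re-render the fixed-position panorama at doubling resolutions.

    Orientation changes never interrupt this loop; only a position change
    (input_idle() returning False) stops refinement.
    """
    panorama = render(base_res)              # a coarse pass is always produced
    res = base_res * 2
    while res <= max_res and input_idle():   # refine while position stays fixed
        panorama = render(res)               # e.g. six cube faces at res x res
        res *= 2
    return panorama
```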

3 The VR-VCR

The benefits of image-based recording of virtual environments are clear. The proposed implementation of such a system relies on the virtual environment map (Section 3.1), and poses new research problems for stereo (Section 3.2), utilizing present-day networks (Section 3.3) and standard graphics hardware (Section 3.4).

3.1 The Virtual Environment Map

The virtual environment map enables panoramic image-based recording of virtual environments based on standard environment-mapping principles. An environment map is a cube surrounding an object's centroid. Elements in the environment visible from the centroid project onto the faces of the environment map, a so-called "world projection" [Green, 1986]. Environment maps simulate inter-object reflections by bouncing the view vector off the object surface into the environment map. Viewer-centered environment maps are popular for simulating the sky: from the centroid, the world appears exactly as it otherwise would. Hence, an observer's visible environment can be completely recorded by a world projection onto a viewer-centered environment map. In the context of virtual reality, this is called a virtual environment map.

The CAVE interface [Cruz et al., 1992; Cruz et al., 1993] is a physical manifestation of the virtual environment map. In fact, the virtual environment map offers head-mounted display users many of the benefits offered by the CAVE. Virtual environment mapping implementation and capabilities vary across immersive display families. In particular, for head-mounted immersive displays (including the BOOM [McDowall et al., 1990]), the entire visible environment surrounding the recorder's viewpoint world-projects into a virtual environment map, as shown in Figure 1 (left). During playback, the virtual environment map provides the observer with a panoramic view of the recorder's environment, while being pulled along the recorder's position trail.
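The world projection above can be sketched as the standard cube-map face lookup: given a direction from the map's center, the dominant axis selects a face, and the remaining components give the coordinates on that face. The face names and (u, v) convention in this Python sketch are illustrative, not taken from the original system:

```python
def world_project(d):
    """Map a direction d = (x, y, z) from the map's center onto one face
    of the surrounding cube, returning (face, u, v) with u, v in [-1, 1]."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    m = max(ax, ay, az)            # the dominant axis picks the face
    if m == ax:
        face, u, v = ('+x' if x > 0 else '-x'), y / m, z / m
    elif m == ay:
        face, u, v = ('+y' if y > 0 else '-y'), x / m, z / m
    else:
        face, u, v = ('+z' if z > 0 else '-z'), x / m, y / m
    return face, u, v
```

A recorder would apply this projection to every visible element; playback simply textures the six faces around the observer.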


Figure 1: Head-mounted display (left) and monitor (right) interfaces.

This semi-interactive form of viewing is comparable to an amusement park ride: the acceleration of the position path dictates nausea, ranging from a guided tour to a roller-coaster.

Hypothesis: Free orientation under constrained position is sufficiently interactive to reduce nausea and facilitate scientific exploration.

The recorder in a virtual environment is making a movie for the observer. Home video recordings, for example, suffer when the recorder does not hold the camera still; the resulting small changes in orientation make the recording difficult to watch. Electronic stabilizers on modern home video cameras adjust the image, eliminating the effects of perturbations due to vibration. The virtual environment map similarly eliminates nauseating effects by giving the observer control over orientation.

The monitor-based virtual reality interface [Venolia & Williams, 1990; Deering, 1992] derives view orientation from viewer position. The virtual environment map, in the configuration illustrated in Figure 1 (right), accommodates changes in view direction. This configuration allows the viewer movement about the monitor, constrained to a fixed distance from the monitor.

3.2 Stereo

Stereo is an important ingredient for suspending disbelief, though it pales in comparison to the more powerful depth cues such as occlusion, shading and perspective.

Hypothesis: A small number of simultaneous disparate virtual environment maps sufficiently simulate stereo to differentiate objects and to create an immersive suspension of disbelief.

We expect image processing methods can sufficiently simulate stereo given as few as four virtual environment maps.

3.3 Communication

Current network bandwidth supports object-based communication, as demonstrated by the 14.4 Kb/s interactive "handball" virtual environment link described in [Shaw, 1993].

Hypothesis: A gigabit-rate network provides sufficient bandwidth for real-time image-based transmission of virtual environments.

A cubic virtual environment map with 5400 × 5400 resolution per face accommodates 20/20 viewer acuity. Transmitting such a virtual environment map at 30 Hz with 24-bit RGB color requires a transmission rate of 126 Gb/s, which exceeds current network abilities by two orders of magnitude. However, current immersive display technology supports resolutions of only about 1000 pixels, and one perceives animation at display rates as low as 10 Hz. These reductions require a rate of only 1.5 Gb/s to transmit virtual environment maps. Hence, the emerging gigabit testbeds should be capable of image-based transmission of virtual environments (of arbitrarily high object-complexity).
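These bandwidth figures follow from straightforward arithmetic over the six cube faces, sketched here for verification:

```python
# Full-acuity case: six faces of 5400 x 5400 pixels, 24-bit color, 30 Hz.
bits_full = 6 * 5400**2 * 24 * 30
print(bits_full / 1e9)   # ~126 Gb/s: two orders of magnitude beyond current networks

# Reduced case: six faces of 1000 x 1000 pixels, 24-bit color, 10 Hz.
bits_low = 6 * 1000**2 * 24 * 10
print(bits_low / 1e9)    # ~1.44 Gb/s, i.e. roughly the 1.5 Gb/s quoted above
```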

3.4 Implementation

We plan to construct a prototype system using graphics workstations with real-time texture-mapping capabilities. For example, the Silicon Graphics RealityEngine is purported to be capable of filling 320 million texture-mapped pixels per second which, ignoring clipping, scan-converts the virtual environment map at a linear resolution of over 2000 pixels at 10 Hz.
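The claimed fill rate translates to the quoted linear resolution by simple arithmetic. This sketch assumes the worst case implied by "ignoring clipping", namely that all six faces are redrawn each frame:

```python
import math

fill_rate = 320e6   # texture-mapped pixels per second (claimed RealityEngine rate)
frame_rate = 10     # Hz, the lowest rate at which animation is still perceived
faces = 6           # faces of the cubic virtual environment map

# Largest per-face linear resolution the fill rate sustains.
linear_res = math.sqrt(fill_rate / (frame_rate * faces))
print(int(linear_res))   # over 2000 pixels per face edge
```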

Hypothesis: The VR-VCR can be inexpensively constructed from existing image processing hardware.

Rasterization of a virtual environment map requires the real-time perspective distortion of between three and five textured quadrilaterals. We envision the VR-VCR as a self-contained head-mounted video-tape-based playback system: a global interface for viewing prerecorded virtual environments.
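To see why only a handful of textured quadrilaterals need distorting per frame, this sampling sketch (with an illustrative cube-face convention, not one from the original system) counts how many cube faces a symmetric square frustum touches: one face for a narrow view, up to five for a wide one, bracketing the three-to-five figure above.

```python
import math

def face_of(d):
    """Cube face hit by direction d = (x, y, z); the dominant axis decides."""
    x, y, z = d
    m = max(abs(x), abs(y), abs(z))
    if m == abs(x): return '+x' if x > 0 else '-x'
    if m == abs(y): return '+y' if y > 0 else '-y'
    return '+z' if z > 0 else '-z'

def faces_in_view(fov_deg, n=50):
    """Estimate the set of cube faces touched by a square frustum
    looking down -z, by sampling an n x n grid of view directions."""
    t = math.tan(math.radians(fov_deg) / 2)
    faces = set()
    for i in range(n):
        for j in range(n):
            u = (2 * i / (n - 1) - 1) * t
            v = (2 * j / (n - 1) - 1) * t
            faces.add(face_of((u, v, -1.0)))
    return faces
```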

4 Conclusion

Image-based techniques provide methods for the complete and archivable recording of the panorama of a virtual environment. We have outlined the benefits of image-based techniques over object-based techniques, and described a method for image-based recording of virtual environments using the virtual environment map. Other applications include telepresence: physical environments may be digitized via six opposing cameras for simultaneous or prerecorded playback at remote sites.

This research direction was formulated through conversations with many experts on a variety of virtual environment and visualization topics. This position statement was inspired by a conversation with the NSF CISE IRIS Program Director regarding the Interactive Systems Program.

References

[Cruz et al., 1992] Cruz-Neira, C., Sandin, D. J., DeFanti, T. A., Kenyon, R. V., and Hart, J. C. The CAVE: Audio visual experience automatic virtual environment. Communications of the ACM, 35(6):64–72, June 1992.

[Cruz et al., 1993] Cruz-Neira, C., Sandin, D. J., and DeFanti, T. A. Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Computer Graphics, 27:135–142, Aug. 1993.

[Deering, 1992] Deering, M. High resolution virtual reality. Computer Graphics, 26(2):195–202, July 1992.

[Green, 1986] Greene, N. Environment mapping and other applications of world projections. IEEE Computer Graphics and Applications, 6(11):21–29, November 1986.

[McDowall et al., 1990] McDowall, I. E., Bolas, M., Pieper, S., Fisher, S. S., and Humphries, J. Implementation and integration of a counterbalanced CRT-based stereoscopic display for interactive viewpoint control in virtual environment applications. Proc. SPIE, 1256, 1990.

[Shaw et al., 1992] Shaw, C., Liang, J., Green, M., and Sun, Y. The decoupled simulation model for virtual reality systems. Proc. CHI '92, May 1992.

[Shaw, 1993] Shaw, C. The MR toolkit peers package and experiment. Proc. Western Computer Graphics Symposium, Mar. 1993.

[Sutherland et al., 1974] Sutherland, I. E., Sproull, R., and Schumacker, R. A characterization of ten hidden-surface algorithms. Computing Surveys, 6(1):1–55, 1974.

[Venolia & Williams, 1990] Venolia, D. and Williams, L. Virtual integral holography. Tech. Rep. 90-10, Apple Computer Inc., Feb. 1990.