VR++ and its Application for Interactive and Dynamic Visualization of Data in Virtual Reality

Henrik R. Nagel
Computer Vision and Media Technology Lab.
Aalborg University, Denmark
Abstract

A significant problem in scientific visualization is how to create modular, interactive, and dynamic virtual reality (VR) applications that efficiently visualize large datasets. One of the obstacles often encountered is the high degree of complexity and specialized knowledge involved in creating parallel visualization applications. Designing a framework especially for this purpose seems like a promising solution. This paper presents a parallel visualization framework that is scalable, encapsulates parallel programming details for developers, and supports the creation of interactive and dynamic VR applications.

CR Categories: I.3.2 [Computer Graphics]: Graphics Systems—Distributed/network graphics; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Virtual Reality

Keywords: parallel, distributed, large data, visual data mining
1 Introduction
Most virtual reality (VR) applications in existence today are incompatible with each other. Many small projects succeed in implementing e.g. a useful tool, but the tool is then rarely used by anyone else, since its underlying system architecture is incompatible with that of other systems. To create truly reusable code, it is necessary to design modular code that uses the common, general system architecture of a framework. For VR applications it would furthermore be desirable if this framework made optimal use of the processing power of the computers used to drive VR installations, that is, supercomputers with multiple processors or VR clusters of networked Linux PCs. For interactive and dynamic VR applications this is not only desirable but necessary, since high update rates of continuously changing visualizations can only be achieved if data processing takes place in the background simultaneously with visualization. A common approach when visualizing large amounts of data in VR is the creation and playback of video sequences. This approach is used e.g. by the TIDE project (Tele-Immersive Data Explorer) [Sawant et al. 2000], which makes use of networked supercomputers to create video streams of sometimes very high quality.
This solution does not allow any kind of fast real-time interaction, such as navigation, but it does allow a very large number of visual objects to move independently of each other. The 3D Visual Data Mining (3DVDM) system [Nagel et al. 2001] makes use of pre-calculated snapshots that are all stored in memory simultaneously and shown one at a time. The snapshots are stored as multiple scene-graphs that can be viewed in real-time from any position and angle. This solution thus allows fast real-time navigation, but no other kind of interaction, nor does it allow a large number of visual objects to move independently of each other. Another possibility is VIRPI [Germans et al. 2001], a high-level toolkit for interactive scientific visualization in VR. VIRPI, however, focuses on the construction of measuring tools by non-expert scientists considering the use of VR, and is thus not a general framework. The Visualization Toolkit (VTK) [Schroeder et al. 1996] is often used for visualizing data on ordinary computers. However, VTK was not designed for exploiting parallel computing resources, VR, or real-time interaction. Several attempts have been made to remedy this situation. One research project has investigated how to add parallel processing capabilities to VTK [Ahrens et al. 2000], while another has investigated how to add VR capabilities to VTK [Rajlich 1998]. However, both approaches extend VTK with capabilities it was not intended to have from the beginning. This limits the possibilities for using VTK for fast, interactive, and dynamic visualization of data in VR. The approach taken in this research project is to design a new, general open-source framework called VR++. The facilities provided will be used to research new Virtual Reality Data Mining (VRDM) methods, such as temporal visualizations for exploring large and complex data sets in VR. The goals for this framework are:

• Reusability - Having a flexible and well-defined mechanism for reusing software is a very important aspect of any software framework. The framework should therefore make it possible to use any number of complementary or overlapping external software APIs in any application, and provide a clear, yet flexible separation between multiple research projects.

• Scalability - Applications developed with this framework should be able to use increasing computing resources when these become available or needed. The framework should support task parallelism (i.e. when independent tasks are executed in parallel), data parallelism (i.e. when a dataset is partitioned and subsequently processed in parallel), and pipeline parallelism (i.e. when the processing of data is divided into multiple steps that execute in parallel on different data elements). These possibilities should be available on shared-memory supercomputers as well as on distributed-memory multi-computers, so that multiple computers with different available resources can be used in a VR++ application.
• Abstraction of complexity - It is difficult to write correct and efficient parallel programs. The framework should therefore encapsulate parallel computing details.

• Flexible communication - Routing of data and signals from one process to another is very important in multi-processing applications, which both process data and handle events occurring in real-time. The communication part of a framework is the interface between the different parts of applications. The framework should therefore provide efficient, well-defined, and flexible communication methods that can handle both ordinary data and events, ensuring compatibility between different modules and applications.

• Dynamic visualization - Exploring large and complex datasets is often difficult with static visualizations, since the number of possible views of the data is usually too large to capture all interesting information in just one or a few static visualizations. The framework should therefore support dynamic visualizations, in which the content can be changed continuously.

• Interactivity - Users of VR applications usually experience themselves as being part of the virtual environment and therefore wish to be able to control it. The framework should therefore make interactivity easily accessible for developers of VR applications.

This article presents the design of the VR++ framework, as well as some examples of its use.
2 Motivation
In the 3DVDM system [Nagel et al. 2001], which is the predecessor of the VR++ framework, attempts were made to find VR alternatives to well-established visual data mining methods for finding e.g. partial relationships in large and complex data sets. Among these attempts was the creation of "extended 3D Scatter Plots". The basic idea underlying this method is that if one could find a perceptually acceptable way of visualizing scatter plots with more than 3 statistical variables represented, one would increase the possibilities of detecting patterns, clusters, partial relationships, etc. in the data set being explored. A well-known method for mapping statistical variables to visual object properties in a 3D Scatter Plot is to map 3 statistical variables to the spatial properties of visual objects, and a 4th statistical variable to the color property. To develop this method even further, "morphing" shapes were investigated, that is, shapes that transition smoothly from one shape to another. This allowed the mapping of a continuous 5th statistical variable, in contrast to the usual method of using e.g. tetrahedrons, cubes, and spheres, which only allows the mapping of a discrete statistical variable. Further experiments were made with including the orientation and size of visual objects to visually represent statistical variables. This gave 3 or 4 additional possibilities. However, common to the above experiments was that the extended 3D Scatter Plot was always static. With the 3DVDM system it was thus not possible to experiment with the combination of the ideas described above and e.g. a VR version of "The Grand Tour" [Asimov 1985; Buja and Asimov 1985; Wegman 1991]. It was also not possible to experiment with observer-relative data extraction, automatic flight paths, temporal visualizations with macro dynamics, or temporal visualizations with micro dynamics. It was, therefore, necessary to design the VR++ framework presented in this article.
Of the goals mentioned in the introduction, the two most important are "Scalability" and "Flexible communication", since these make it possible to study "Dynamic visualization" and "Interactivity". "Scalability" and "Flexible communication" in particular were not supported by the 3DVDM system.
3 Methods
The framework consists of a set of C++ classes that are inherited from in order to create applications. The following sections describe the framework in detail, and how the goals mentioned in the introduction have been met.
3.1 Reusability
It is often necessary to base research projects on multiple software systems, and to have the possibility of replacing some of these systems, or adding new ones, to solve specific problems. Researchers might for instance know a solution to a problem in the statistical tool "R". In such a situation it should be simple for researchers to add real-time support for "R" to their application. For this purpose, VR++ divides projects into multiple software packages, each of which serves a specific purpose, such as providing real-time support for "R". Applications are created by combining functionality provided by one or more software packages. This mechanism is also used to make it possible for multiple research partners to each have their own project, while still being able to integrate their projects when needed. Each partner is responsible for one or more software packages that each make functionality available to the entire research group. For each application developed by the research group, only selected software packages have to be used to provide the required functionality of the application. It is thus possible for e.g. a computer vision project to provide a VR++ software package for vision-based interaction. The application that the computer vision project would develop for testing its methods would then make use of two software packages: its own, and a VR software package, which would provide both VR support and a simple 3D menu. This would free the computer vision researchers from also having to investigate how to develop VR applications with menus. The basic building blocks inside software packages are modules. They are used to create tools for the VR++ framework and have, much like C functions, input and output parameters. However, in contrast to C functions, modules encapsulate all the details necessary to provide the required functionality and can be handled automatically by the VR++ framework.
3.2 Scalability
To allow applications to use multiple computers working together in real-time, VR++ is based on the distributed parallel programming system Message-Passing Interface (MPI). This means that all VR++ applications consist of multiple processes that communicate by sending messages to each other. This can, in principle, be done in two different ways: by having a single executable file that is distributed to multiple computers and executed simultaneously, or by having multiple executable files distributed on multiple computers and working together. In VR++ the latter solution has been chosen, since this makes it possible to separate conflicting external software APIs into separate executable files, and still have them working together in an application through the framework.

VR++ makes use of the master/slave parallel processing paradigm. In this paradigm an application consists of multiple processes, of which one is the master and the rest are slaves. The slaves "know" how to do the work, but not what has to be done, while the master "knows" what has to be done, but not how to do it. The slaves are provided by the software packages, since in VR++ it is in these that functionality is defined. Slaves provide access to the modules that are defined in the software packages, and are contained in executable files that are reused from one application to another. Masters are the part of an application that must be changed from one application to another. Their main functionality is to instruct the slaves on which modules they should use, and on how the slaves should communicate with each other. After that, it is up to the developer of an application whether the master should participate actively in the execution of the application, or just wait for the slaves to finish. Furthermore, while masters only contain a single thread of execution, slaves can be either single-threaded or multi-threaded. This allows the use of shared-memory algorithms in cases where this is advantageous.

Data parallelism is obtained by making multiple instances of the same slave, each executing an instance of the same module, and by letting these modules know of each other's existence. This allows the design of algorithms in which the modules divide the work between them, so that the processing time is reduced by a factor (almost) equal to the number of modules working in parallel. The output produced by these modules working in parallel can either be combined into a single result, such as a sum, before the result is sent to subsequent modules, or be sent independently to e.g. another group of modules working in parallel. Task parallelism and pipeline parallelism are always used in VR++, since processes can work independently of each other, or in a pipeline, such as when reading data from a disk, computing results, and rendering graphics. Data can be sent one event/record at a time, or in chunks of multiple events/records simultaneously. Whether to choose one method or the other depends on the situation and on efficiency considerations.
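As an illustration of how a master sets up pipeline parallelism, the sketch below wires three hypothetical slaves into a read-compute-render pipeline. The setModule and connect calls follow the pattern of the guided-tour example in Section 4.3; the slave names, module names, and data slot names are invented for this example.

int readerModule, computeModule, renderModule;
...
// Each setModule call instructs a slave to run a particular module
// (slave and module names are hypothetical).
master.setModule("DataSlave", "DiskReaderModule", readerModule);
master.setModule("ComputeSlave", "StatisticsModule", computeModule);
master.setModule("vjOpenGLSlave", "ScatterPlotModule", renderModule);
...
// The three processes now form a pipeline that reads, computes,
// and renders in parallel on different data elements.
master.connect(SYNCHRONOUS, readerModule, "records",
               computeModule, "records");
master.connect(SYNCHRONOUS, computeModule, "vertices",
               renderModule, "vertices");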
3.3 Abstraction of complexity
Parallel applications can be highly complex to develop. This is especially true for VR++ applications, which use the "multiple instruction stream, multiple data stream" (MIMD) paradigm, since in this paradigm multiple processes run independently of each other while sending messages to each other that must be synchronized. VR++ therefore makes use of an object-oriented class hierarchy to hide this complexity from developers of VR++ applications.

Figure 1: The Process Inheritance Graph. Both the master and slave classes inherit from a common process class. Since slaves can be both single-threaded and multi-threaded, specialized classes are defined for these two cases, which inherit from the slave class.

In VR++ a large part of the complexity lies in the initial configuration of the system of processes and modules that an application consists of. This is done automatically by the master and the slaves. Sending and receiving data is also handled automatically by the framework.
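The hierarchy in Figure 1 can be summarized in a minimal C++ sketch (class bodies and members are omitted; only the inheritance structure is taken from the figure):

class Process { /* common configuration and communication facilities */ };

class Master : public Process { /* knows what has to be done */ };
class Slave  : public Process { /* knows how to do the work  */ };

class SingleThreadSlave : public Slave { };
class MultiThreadSlave  : public Slave { };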
3.4 Communication
Just like the 3DVDM system, VR++ uses the term "data slot" to describe the class that encapsulates information and methods for handling input and output parameters. However, regarding the internal description of the actual data, the 3DVDM system focuses on object-oriented programming while VR++ focuses on efficiency. The 3DVDM system has a basic "data" class from which all data classes must inherit. As a side-effect, this caused the creation of several specialized data classes that each internally contained complicated data structures, which would be very difficult to send over a network from one process to another. In contrast, VR++ has no basic "data" class. Instead, only data structures that can efficiently be sent over network connections can be created, and in practice this mainly means arrays. When declaring an input or output data slot in a module, it is therefore also necessary to associate this data slot with one of the module's internal variables. This way, the actual variables used in algorithms are used for receiving and sending data. There is no "translation" needed, as in the 3DVDM system.

Data slots can accept ordinary variables, fixed-size arrays, and variable-size arrays. In the last case, it is the sending module that determines the length of the array. However, the two data slots between which data is sent must specify the same record size. A variable array of positions would e.g. have a record size of 3, since 3 floats are required to specify a position. The following example adds an input data slot "properties" with the record size 12. When processing the data, "numberOfProperties" will contain the number of records that were received:

float *properties;
int numberOfProperties;
...
addInput("properties", &properties, numberOfProperties, 12);

Notice that for "properties", a pointer to a pointer to an array of floats is passed to the addInput method. This allows the framework to automatically allocate and deallocate arrays as the number of properties sent from the sending module changes, and to make "properties" point to the last created array.

By default, a module must have new data in all of its input data slots before processing of the data begins. This can, however, be changed with "tags". A tag is just an integer number identifying the input data slot. Input data slots with the same tag constitute a group, which must all have received data before processing of data begins. By using multiple tags when specifying the input data slots, multiple groups are created, and processing of data begins when all data slots in one of the groups have received data. The following example adds two input data slots with the tags 1 and 2:

int intSlotId, floatSlotId, intSlot;
float floatSlot;
...
intSlotId = addInput("intSlot", intSlot, 1);
floatSlotId = addInput("floatSlot", floatSlot, 2);

Notice that the return values of the addInput method are stored. This is because they are needed when processing the data, to ask which of the two input data slots actually received data.

Modules can specify that a collective operation is to be performed on the value stored in an output data slot before the value is sent to other modules. The collective operations include sum, product, etc. This is complemented by the possibility, in the master, of specifying that multiple or all available slaves of a given kind should execute a certain module as a group. The collective operation is then performed by all the members of the group before the resulting value is sent.
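As a sketch of how the two tagged data slots above might be used, the processData method below reacts to whichever tag group received data. The hasNewData query and the two handler functions are hypothetical names; the paper only states that the stored slot identifiers are used to ask which input data slot received data.

ModuleState MixedInputModule::processData()
{
  // Only one of the two tag groups has received data when processing
  // starts; the stored slot identifiers are used to ask which one.
  if (hasNewData(intSlotId))          // hypothetical query method
    handleInteger(intSlot);
  else if (hasNewData(floatSlotId))   // hypothetical query method
    handleFloat(floatSlot);
  return SEND;
}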
The following example specifies an output data slot called "pi", for which the SUM operation must be performed on the pi output data slots of all instances of the given module before the data is sent:

double pi;
...
addOutput("pi", pi, SUM);

Multiple communication channels can be added to each data slot. While a data slot can only be used for either receiving or sending data, it can be connected with channels to multiple other data slots. However, data can only be received from one channel at a time. A "round-robin" scheme is used to ensure that all channels are checked in turn. There are four different communication modes available for transferring data on communication channels:

• Synchronous - The standard form of communication. All data samples sent from a sending module arrive at the receiving module, one by one, in the order they were sent. The sending module waits until the data has been received.

• Asynchronous - For transmission of states. The receiving module must first ask for data before the sending module sends any data. If the sending module has not received a query when it is ready to send, it continues immediately with producing the next data sample to be sent, without saving the previous data. This is useful when the receiving module is expected to be much slower than the sending module, and one is only interested in the latest state of the sending module, not the previous states it was in.

• Incremental - For transmission of events, triggering an action when an event has happened. The sending module only sends data when it has changed compared with the last transmission. The receiving module usually sleeps until it receives data and thus does not take up any processing resources.

• Immediate - Used when it is preferable for a receiving module to reuse the last received data without any delay, instead of waiting for new data to arrive. As in incremental mode, the sending module only sends data when it has changed compared with the last transmission. The first time data is to be received, the receiving module waits for the data as in incremental mode. Thereafter, the receiving module returns immediately, leaving the input variable untouched, if no new data has been received.

The following example shows how an application developer would connect the "pi" output data slot in the module with the identifier stored in "slaveModule" to the "pi" input data slot in the master's module with the identifier "masterModule":

int masterModule, slaveModule;
...
master.connect(SYNCHRONOUS, slaveModule, "pi",
               masterModule, "pi");
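As a hypothetical illustration of the asynchronous mode, consider a head-tracker module that produces positions much faster than a visualization module consumes them, and where only the latest state matters. The module identifiers, slot names, and the ASYNCHRONOUS constant below are assumed by analogy with the SYNCHRONOUS example above.

int trackerModule, visModule;
...
// Only the most recent head position is of interest: the visualization
// module asks for data when it is ready, and stale positions are simply
// overwritten on the tracker side.
master.connect(ASYNCHRONOUS, trackerModule, "headPosition",
               visModule, "headPosition");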
3.5 Dynamic visualization
Two visualization add-on packages are currently implemented for the VR++ framework: one for OpenGL visualization, and another for VR visualization of OpenGL graphics using VRJuggler [Just et al. 1998]. It is expected that an add-on package for OpenSG visualizations will soon follow, as well as a VRJuggler version of the same. The VR-enhanced version of the simple visualization add-on package reuses the modules defined in these two packages, but changes the rendering engine to conform with VRJuggler, and adds specific VRJuggler modules for handling e.g. navigation using a Wanda.

To obtain real-time update rates equal to the frame rate, it is necessary to extensively optimize the visualization pipeline. The visualization pipeline starts with the source module that produces the new visualization. As explained earlier, this module can be parallelized, but this is not always necessary. The next step is the transfer of data from the process running the source module to the process running a visualization module. To maximize the performance of this step, a dedicated thread in the multi-threaded visualization slave handles I/O. Double buffering is used for variable-sized arrays to allow modules to receive new data while they are processing the last received data. When the module is again ready to receive data, the data is already waiting, so only two pointers have to be switched, and processing of the data can commence again.

The next step in the pipeline is the creation of a visualization. This step is handled by a visualization module with a dedicated thread. To speed up the calculation of e.g. OpenGL vertex arrays, it is an advantage to use data parallelism at this step. The coordinate system can e.g. be divided into 8 octants (2x2x2), each of which is handled by a dedicated visualization module and a corresponding thread. With a supercomputer available, this allows a better utilization of its processors. Depending on the relative speeds of the various steps in the visualization pipeline, data parallelism can also be used in earlier steps of the pipeline.

To achieve real-time updates of visualizations, it is essential to use double buffering in the visualization module. OpenGL already uses double buffering to allow the visualization of the content of one buffer while another is being constructed. The standard approach to obtain dynamic visualization is to make a function call in the rendering loop which modifies the graphics. This has, however, the disadvantage of stopping the rendering loop while the calculations are being performed, and is clearly unacceptable in VRDM, where as many visual objects as possible must be visualized to allow visual explorers to detect e.g. partial relationships. With double buffering, while the rendering engine is rendering the next frame using e.g. one OpenGL vertex array, the visualization module is calculating the next in another buffer. When finished, a signal is sent to the rendering engine that causes it to switch the buffers.
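The double-buffering scheme described above can be summarized in a small, generic C++ sketch (this is not VR++ code; the names are invented for illustration): the rendering thread reads from a front buffer while the visualization thread fills a back buffer, and finishing a frame only requires swapping two pointers under a short lock.

#include <mutex>
#include <utility>
#include <vector>

// The renderer reads from 'front' while the visualization thread
// writes into 'back'; swapping exchanges only the two pointers.
struct DoubleBuffer {
    std::vector<float> buffers[2];
    std::vector<float>* front = &buffers[0];
    std::vector<float>* back  = &buffers[1];
    std::mutex swapLock;

    // Called by the visualization thread when a new frame of geometry
    // is complete; the renderer picks up the new 'front' on its next frame.
    void swap() {
        std::lock_guard<std::mutex> guard(swapLock);
        std::swap(front, back);
    }
};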
3.6 Interaction
In VR++, the flexible routing of data and signals is the key to achieving real-time interaction. Any information residing on any output data slot on any module can, with just a single line of programming, be sent to any input data slot on any module on any slave running on any computer. By attaching multiple channels to an output data slot, the information residing on that data slot can be sent to multiple input data slots on multiple modules simultaneously. Combining this with e.g. the incremental communication mode, in which only changes are sent, and the receiving module is inactive (and thus requires no processing resources) until it receives data, a button click can easily be made to activate the inactive module. This module can then e.g. receive the most recent user position, perform a calculation, and initiate a visualization based on this information. Input from input devices, e.g. for navigation, can be handled either by having an independent input slave, again using e.g. VRJuggler, and connecting the output of an input module to the camera module, or by using a navigational module, which runs in a thread on the multi-threaded visualization slave.
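A hypothetical wiring for the button-click scenario above (the module identifiers and slot names are invented; the INCREMENTAL constant is assumed by analogy with the SYNCHRONOUS constant used earlier):

int wandModule, analysisModule;
...
// The analysis module sleeps until the wand's button state changes;
// a click wakes it up, and it can then react to the most recent input.
master.connect(INCREMENTAL, wandModule, "buttonState",
               analysisModule, "buttonState");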
3.7 Collaborative VR
Although collaborative VR is not one of the primary goals of VR++, and there certainly is no ambition of competing with such well-established software frameworks as CAVERNSoft [Leigh et al. 1997], the design of VR++ makes it easy to create simple collaborative VR applications. Since VR++ uses the MIMD scheme for organizing applications, and thus divides applications into multiple executable files, one can easily have multiple visualization slaves running simultaneously in the same application. Avatar modules display an avatar for each of the other users. The incremental communication mode would be used to send the head position and the position of the input device to the avatar modules. Any VR application created with VR++ can therefore easily be transformed into a collaborative VR application, although with limited functionality.
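A sketch of how two visualization slaves could exchange head positions for avatars (slave names, module names, and slot names are invented for the example; the incremental mode follows the description above):

int user1Tracker, user2Avatar;
...
// User 1's head position drives the avatar shown in user 2's view;
// the same wiring is repeated in the opposite direction for user 1.
master.setModule("vjOpenGLSlave1", "TrackerModule", user1Tracker);
master.setModule("vjOpenGLSlave2", "AvatarModule", user2Avatar);
...
master.connect(INCREMENTAL, user1Tracker, "headPosition",
               user2Avatar, "headPosition");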
4 Results

4.1 Processing Data in Parallel

The following shows a complete VR++ module for the parallel calculation of the value of pi.

Program 4.1 A module for parallel calculation of Pi.

class PiModule : public Module
{
public:
  PiModule();
  ModuleState processData();
protected:
  int iterations; // Input
  double pi;      // Output
};

PiModule::PiModule()
{
  addInput("iterations", iterations);
  addOutput("pi", pi, SUM);
}

ModuleState PiModule::processData()
{
  double h, sum, x;
  h = 1.0 / (double) iterations;
  sum = 0.0;
  for (int i = myRank; i < iterations; i += groupSize) {
    x = h * ((double) i + 0.5);
    sum += 4.0 / (1.0 + x * x);
  }
  pi = h * sum;
  return SEND;
}

The sum operation is performed on the data in the output data slot "pi" before the value is sent to receiving modules. In the "for" loop, the loop variable "i" is initialized to "myRank" and is incremented by "groupSize". This makes several modules working in parallel automatically divide the work between them.

4.2 A Dynamic 3D Scatter Plot

A dynamic 3D Scatter Plot is essentially the same as the extended 3D Scatter Plot used in the 3DVDM system, with the added feature that the entire content of the 3D Scatter Plot can be changed as often as computing resources allow. The dynamic 3D Scatter Plot thus allows a completely new visualization to be created for each frame in the rendering loop. A source module is needed to produce the new visualizations. The visualizations can, as in the 3DVDM system, be produced e.g. every 5 seconds and be completely different from one another. They can also be produced 60 times every second and be almost identical except for a few changes. In this case a smooth animation is obtained.

4.3 A Guided Tour

A camera module is defined that has "eye point", "center point", and "up vector" as input data slots. It is therefore possible to create processes that, based on statistical analyses, guide the visual explorers through the data. A simple example can be seen below:

int cameraModule, circleModule;
...
master.setModule("vjOpenGLSlave", "CameraModule", cameraModule);
master.setModule("InteractionSlave", "CircleModule", circleModule);
...
master.connect(SYNCHRONOUS, circleModule, "currentPosition",
               cameraModule, "eyePoint");

In this example, a circle module continuously produces the coordinates of a circle, with a given number of seconds per orbit. The coordinates of the circle are used as input to the "eye point" of the camera module. This causes the camera to move along the circumference of a circle, while constantly looking at the default center position, which is at the center of the coordinate system.
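As a sketch of what the circle module's data-producing method might look like (the member variables, the secondsPerOrbit parameter, and the elapsedSeconds helper are assumptions made for the example; only the overall module pattern follows Program 4.1):

ModuleState CircleModule::processData()
{
  // currentPosition is bound to a 3-float output data slot (record size 3);
  // radius and secondsPerOrbit could be input data slots or constants.
  double angle = 2.0 * M_PI * elapsedSeconds() / secondsPerOrbit;
  currentPosition[0] = radius * cos(angle);
  currentPosition[1] = 0.0;
  currentPosition[2] = radius * sin(angle);
  return SEND;  // send the new eye point to the camera module
}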
5 Discussion
The introduction of this paper listed a set of goals that should be achieved with this framework:

• Reusability
• Scalability
• Abstraction of complexity
• Flexible communication
• Dynamic visualization
• Interactivity

Of these, the two most important goals were "Scalability" and "Flexible communication". The framework allows developers of VR applications to use a combination of data parallelism and pipeline parallelism, and to have multiple modules connected using multiple communication methods, each designed for a specific purpose. If, at some point, the communication methods presented in this paper prove insufficient, more can easily be introduced. "Reusability" and "Abstraction of complexity" were achieved through the use of object-oriented methods in the design of the framework's system architecture, and through the use of "software packages" and "modules". The framework presented in this paper provides the basis for studying interactive and dynamic visualization methods for visualizing large and complex data sets in VR.
6 Conclusion and Future Work
This project set out to create a framework that would make it possible for researchers from multiple disciplines to collaborate on software development, each contributing methods from their area of expertise. The framework is intended to be general, so that it can be used in multiple contexts. This paper presented the basic design of the framework, with emphasis on interactive and dynamic visualizations of data in VR. A flexible and maintainable system architecture was presented, and some suggestions for how it can be used for visual data mining in VR were given. The next step in the development of the framework is to experiment with its application in the field of VRDM. Methods that rely on dynamic visualization and interaction will be developed, and the perceptual psychological aspects of these methods will possibly also be evaluated. After that, the work will focus on case studies.
Acknowledgments We gratefully acknowledge the support to the 3DVDM project from the Danish Research Councils, grant no. 9900103.
References

AHRENS, J., LAW, C., SCHROEDER, W., MARTIN, K., AND PAPKA, M. 2000. A parallel approach for efficiently visualizing extremely large, time-varying datasets. Tech. Rep. LAUR-001620, Los Alamos National Laboratory, Los Alamos.

ASIMOV, D. 1985. The grand tour: A tool for viewing multidimensional data. SIAM Journal on Scientific and Statistical Computing 6, 128–143. (Original paper, 2-D grand tour.)

BUJA, A., AND ASIMOV, D. 1985. Grand tour methods: An outline. In Computing Science and Statistics: Proceedings of the Seventeenth Symposium on the Interface, 63–67. (2-D grand tour.)

GERMANS, D., SPOELDER, H. J., RENAMBOT, L., AND BAL, H. E. 2001. VIRPI: A high-level toolkit for interactive scientific visualization in virtual reality. In Proceedings of the Immersive Projection Technology/Eurographics Virtual Environments Workshop (IPT/EGVE).

JUST, C., BIERBAUM, A., BAKER, A., AND CRUZ-NEIRA, C. 1998. VR Juggler: A framework for virtual reality development. Presented at the 2nd Immersive Projection Technology Workshop (IPT98), May.

LEIGH, J., JOHNSON, A., AND DEFANTI, T. 1997. CAVERN: A distributed architecture for supporting scalable persistence and interoperability in collaborative virtual environments. Virtual Reality: Research, Development and Applications 2, 217–237.

NAGEL, H. R., GRANUM, E., AND MUSAEUS, P. 2001. Methods for visual mining of data in virtual reality. In Proceedings of the International Workshop on Visual Data Mining, in conjunction with ECML/PKDD 2001, 2nd European Conference on Machine Learning and 5th European Conference on Principles and Practice of Knowledge Discovery in Databases.

RAJLICH, P. J. 1998. An object oriented approach to developing visualization tools portable across desktop and virtual environments. Master's thesis, The Graduate College of the University of Illinois at Urbana-Champaign, Urbana, Illinois.

SAWANT, N., SCHARVER, C., LEIGH, J., JOHNSON, A., REINHART, G., CREEL, E., BATCHU, S., BAILEY, S., AND GROSSMAN, R. 2000. The Tele-Immersive Data Explorer: A distributed architecture for collaborative interactive visualization of large data-sets. In Proceedings of the 4th International Immersive Projection Technology Workshop, Ames, Iowa, June.

SCHROEDER, W., MARTIN, K., AND LORENSEN, W. 1996. The Visualization Toolkit: An Object-Oriented Approach to 3D Graphics, 2nd ed. Prentice Hall.

WEGMAN, E. J. 1991. The grand tour in k-dimensions. In Computing Science and Statistics: Proceedings of the 22nd Symposium on the Interface, 127–136. (General k-dimensional grand tour.)