The QUICK Framework for Task-Specific Asset Prioritization in Distributed Virtual Environments

Michael V. Capps
Naval Postgraduate School
Monterey, California, USA
[email protected]

Abstract

In virtual environment systems, the ultimate goal is delivery of the highest-fidelity user experience possible. This paper describes a general-form framework for optimizing fidelity on a per-task basis. The virtual world database is priority-ordered; the ordering is dynamically recomputed based on display platform, virtual world state, and (possibly shifting) task objectives. Optimization is performed with the QUICK model, proposed herein, which integrates ratings of representational Quality, scene node Importance, and machine resource Cost. The approach has been implemented in a prototype display and distributed cache management system, which can be incorporated into most existing networked virtual reality applications.

1. Introduction

The requirements of networked virtual environments strain the capacity of current technology in a number of directions: graphical rendering capability, processing speed, local storage, network capacity, and network latency. In this paper we introduce a framework for enabling optimal consumption of these resources, in a cross-platform manner which can be incorporated into existing systems. While this framework has general applicability to computer rendering systems, this paper focuses on those aspects which facilitate interactivity and fidelity in distributed virtual environments. A detailed discussion of this broader applicability can be found in [3].

Most collaborative virtual environment applications expect that each client contains a locally-stored version of the initial shared virtual world. Additions and modifications are then communicated incrementally as required by entity interactions. Even many single-participant applications, such as a VRML browser, can require a complete copy of the virtual world before allowing user interactivity. Unfortunately,

complete local replication of the virtual world database can be needless, or impossible. In those cases where replication is possible, there is usually a significant delay between user request and useful interaction with the world.

1.1. Contributions of this Work

To address these concerns, we have developed the QUICK framework for asset prioritization of objects in a virtual world. Through annotation of these entities, and their (possibly multiple) representations, optimal ordering of transfer and display requests is possible. We have designed and prototyped a proof-of-concept architecture in order to demonstrate the effectiveness of such a solution. In this context, we describe herein the following contributions to the art:

- the characterization of the prioritized transfer problem domain as a subset of the general distributed display problem
- QUICK, a novel mechanism for transfer prioritization based on contribution to fidelity
- a method for optimal solution of the distributed display problem
- techniques for input generation for that solution
- a generalized architecture for application and incorporation of the QUICK framework
- a task-dependent fidelity definition

This last notion, of task-dependent fidelity, merits additional discussion. Traditionally, fidelity and visual realism are considered synonymous when analyzing performance of virtual world display. That is, a high-fidelity environment is one with high visual resolution and a refresh rate too rapid to cause eyestrain. Fidelity in this instance is essentially a direct linear

combination of scan rate and resolution; increasing either increases fidelity as well. We believe visual accuracy is only a passing first-order approximation for fidelity. In Section 3, we reference supporting work that demonstrates the relationship between visual accuracy and fidelity is not always a direct one. Herein we define fidelity as a function of task performance, and show that the notion of user task can be incorporated into the QUICK optimization framework.

1.2. Organization

The remainder of this paper discusses each of the contributions outlined above. Sections 2 and 3 motivate our efforts in asset prioritization and task-dependent fidelity, and report on related work. Section 4 introduces the QUICK framework and optimization process. Here we explore methods for generating the QUICK input values for a particular virtual environment. The architectural design of a proof-of-concept system is discussed at length in Section 5; details of a limited prototype are included. In the final section, we conclude with a summary of the goals of the QUICK project, and a look to its future possibilities.

2. Asset Prioritization

In virtual environment systems, asset prioritization is the process of classifying subsets of the world to facilitate system operations. For instance, display accuracy can be significantly improved in overload situations, given an ordering of scene graph nodes for rendering. To reap the benefits of such a framework, then, a system must support both the discrete subdivision of the virtual world and annotation of those subdivisions with priority information. This section discusses the motivations for asset prioritization, and previous applications and research projects that have addressed this issue.

2.1. Motivation

The following extended example helps to demonstrate the utility of a general-purpose asset prioritization framework. The release of id Software's entertainment game Quake was a quantum leap in the availability of distributed virtual reality on the desktop. In 1997, in fact, their product was hesitatingly labeled the state of the art in the entire field of networked virtual environments, including research systems [2]. In the multi-player version, each participant connects to a single centralized server. Motion and action updates are communicated via the server to other players. The server stores the current state of the virtual environment, in

order to provide support for latecomers. The original game comes with a limited number of maze and building maps to play; new environments can be found on the web, or dynamically downloaded when first joining a session in that environment. However, this latter method exposes a major weakness of the network architecture. Most Quake players connect to the server by modem; the application of a number of advanced techniques in awareness management and clientside simulation make possible play with such limited bandwidth. A client connecting to an unfamiliar environment automatically requests the environment description, which is usually about one megabyte in size. This process nominally takes five minutes on a 28.8kbps modem, but usually requires closer to fifteen minutes due to the server’s double duties. Game play does not begin until the entire model has been acquired; interestingly, most servers run a game for ten to fifteen minutes before cycling to a new map. Therefore it is quite possible for a participant to be stuck in a cycle where each environment file is moot before its download is complete. Quake environments are purposely divided into rooms with limited connectivity, so as to allow precomputation of visibility between spaces. This reduces the computation required for the physics and rendering engines, as in the Berkeley Walkthrough system [7]. This division is exactly the sort of subdivision required for asset prioritization: rooms can and should be downloaded in order of importance. Yet Quake allows absolutely no interaction during the download process—fidelity is zero.
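The bandwidth arithmetic in this example is easy to verify. The following sketch (in Java, the language of the prototype described later) uses the figures quoted above and assumes a decimal megabyte; it confirms that the nominal five-minute estimate is consistent with a 28.8 kbps link:

```java
public class DownloadTime {
    // Estimate transfer time for a world file over a modem link.
    static double seconds(double megabytes, double kilobitsPerSecond) {
        double bits = megabytes * 8_000_000;       // 1 MB assumed = 8,000,000 bits (decimal)
        return bits / (kilobitsPerSecond * 1000);  // kbps -> bits per second
    }

    public static void main(String[] args) {
        // A one-megabyte Quake map on a 28.8 kbps modem:
        double t = seconds(1.0, 28.8);
        System.out.printf("%.0f seconds (~%.1f minutes)%n", t, t / 60);
    }
}
```

At roughly 278 seconds, this matches the "nominally five minutes" figure; the observed fifteen minutes reflects server load rather than raw bandwidth.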

2.2. Distributed Virtual Environments

A number of popular distributed virtual environments use either severely limited asset prioritization or none at all:

DIVE. The DIVE system from the Swedish Institute of Computer Science [4] is a landmark tool for virtual collaboration and interaction. DIVE uses no loading priority in state transfer. There is support for world segmentation, with scene graph subdivision; additionally, the application can perform session management over these segments. We are unaware of any examples of these facilities being used in combination for asset prioritization.

MASSIVE. In its many incarnations, the MASSIVE system from the University of Nottingham [11] has been a test-bed for a number of advanced virtual environment technologies. The most recent version, MASSIVE-3, has no prioritization of state transfer. Again, the world description is segmented, and it does offer internal feedback facilities that would make such prioritization simple to support.

VRML. Though VRML is not itself a virtual environment system, we consider here the class of VRML-based worlds and browsers as a whole. Most VRML browsers operate similarly to the Quake client: a VRML world must be downloaded in its entirety before interaction is allowed. VRML worlds often consist of multiple VRML files, linked via World Wide Web URLs; most browsers resolve these links and fetch all included files before passing control to the user. VRML files already contain excellent inherent model subdivision: each file represents a standard tree-based scene graph, and files can contain internal switch nodes that divide the files further.

All of the above examples have in common the required facilities for asset prioritization. We find this to be an encouraging sign: there exists a large space of applications for our proposed prioritization framework. In particular, the display and delivery of Internet-based 3D graphics with VRML is a ripe context for such optimization. In fact, asset prioritization and component reuse have been presented as a major requirement for the next-generation VRML specification, X3D [1].

2.3. Previous Work

Though the deployment of graphics-capable platforms and virtual world browsers has reached the tens of millions, there has been little previous effort in efficient environment replication. Most techniques use derivatives of spatial awareness management, similar to that used in inter-entity communications [10, 15]. The excellent work of Schmalstieg and Gervautz [17], which uses spatial awareness to regulate object requests, is an example of this technique. The NPSNET system [16] uses a limited version of this technique for terrain paging.

A number of systems have addressed management of Level of Detail in graphical worlds. These typically operate in only limited domains, such as terrain display [14, 12]. Funkhouser's Berkeley Walkthrough system [6] is notable in that it performs disk cache management for large worlds, and maximizes geometric levels of detail within a desired frame rate. The system is restricted to 2.5-dimensional architectural walkthroughs, and does not utilize the network.

Rich Gossweiler's excellent thesis [9] included a framework that used lessons from perceptual accuracy to make decisions of rendering complexity. Level-of-detail decisions were made based on psychophysical metrics, such as the difference in acuity between foveal versus peripheral perception. We later demonstrate that this dynamic, display-dependent computation can be logically incorporated into the QUICK framework.

Likely the most significant research in asset prioritization is the approach of Chim et al. [5]. Their research uses a combination of prefetching, multiresolution geometry, and

local client caching. It includes limited notions of quality (a factor of geometric resolution) and importance (based on distance and visual focus), which are a subset of techniques described in [13]. The optimization target of this system is improved visual perception; in the next section, we explore a more general-purpose notion of fidelity.

3. Task-Specific Fidelity

The primary goal of any virtual environment application is representational fidelity. The exact definition of fidelity, however, is subjective and difficult to quantify; rarely are formal methods used. Usually, the desired result is straightforward: maximized graphical acuity within an interactive frame rate threshold. System designers often choose a target point on the rate vs. accuracy axis, and use that decision to drive the assignment of system resources. A framework which includes asset prioritization, such as QUICK, makes this class of computations possible in a general manner.

The existence of a much more complex trade-off space for fidelity motivates QUICK's enhanced functionality. Depending on the user's task, certain objects in the virtual world can be vital, or useless. For example, an overloaded system might omit certain normally-visible objects in order to keep above a frame rate threshold. A user virtually walking through a shopping mall might want to omit furniture, and retain display of the building's structure. Furnishing detail is paramount, however, to a user browsing a furniture store.

A less intuitive case for a shifting definition of fidelity arises when multiple representations of a single scene graph node are available. Usually, the highest-precision representation of that node's contents, limited by the display platform's capabilities, would be displayed. This is exactly the procedure used in the advanced graphics libraries discussed previously, which manage multiresolution models; high detail is chosen unless the detail would be wasted or the machine is overloaded. But in some cases, there is an unintuitive need for less-precise models. For instance, research at the Naval Postgraduate School [8] has shown that additional detail can have a negative impact on some training tasks; mental correlation between virtual representation and real object can be confounded by inaccurate precision.
This implies that fidelity may come from symbolic representation rather than realistic presentation. It is also important to note that fidelity cannot always be defined in visual terms; accuracy in portrayal of object behaviors, audio communications, and other events must also be considered. Little previous research exists in this area; again, some leverage is possible from communications awareness management techniques. Authoring a virtual world for a specific task is tractable, but there are as yet no general-purpose solutions. This problem in fact transforms neatly to that of ontological description: the goal is a complete knowledge representation of each virtual object. Given that information (which items bounce, are red, are delicious), a computational process can generate asset prioritizations from a task-based fidelity definition. These ontological solutions are necessarily of limited domain, but recent work in virtual environment knowledge representation is encouraging.

4. The QUICK Framework

This section defines the QUICK framework for performing optimal task-fidelity computation and asset prioritization.

4.1. Environment Display for Fidelity

In order to present a general-form optimization for display selection, it is necessary to characterize a generic form of the model display problem: "The selection of a visual representation for scene nodes in a virtual world, such that the combined display of those selections provides the highest-fidelity user experience on a given display platform." Though the terms of this statement are familiar, their usage bears definition:

- scene node: a denotable unit in a virtual world, usually a single artifact, group of artifacts, or virtual space.
- visual representation: a computer-parsable graphical description, such as polygons, triangles, images, etc. A single scene node may have multiple representations, for example, graphical Levels of Detail (LODs). Every scene graph contains a minimum of two choices for representing each scene node: the single representation given, or none at all.
- combined display: visual presentation of each scene node's chosen representation.
- highest-fidelity user experience: discussed previously, in Section 3.
- display platform: the combination of software, computer, and graphics hardware.

Mathematically, we illustrate this optimization problem as follows. Let S_W be the set of all selection states for drawing the nodes in a virtual world W. That is, for each selection s ∈ S_W, all nodes n ∈ W have associated with them a choice of representation r. That representation can be null, meaning node n is omitted, or can be one of the available representations R_n. The notation for the choice r for any given node n ∈ W, then, is s(n). The display cost of any particular selection is a function of the complexity of the representation choice and the display platform d: c(s(n), d). The total cost C for a given selection state is shown in equation (1). The fidelity function is nearly identical, as shown in equation (2).

    C(s, W, d) = Σ_{n ∈ W} c(s(n), d)    (1)

    F(s, W, d) = Σ_{n ∈ W} f(n, s(n), d)    (2)

    ∀ s ∈ S_W : [F(s', W, d) ≥ F(s, W, d)] ∧ [C(s', W, d) ≤ T_d]    (3)

The optimization function (3) is to choose a selection set s' ∈ S_W such that fidelity is maximized, and cost does not pass a given threshold T_d of the display platform. We omit the details of the optimization here, but this is performed with standard linear programming techniques.
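Equation (3) describes a multiple-choice selection problem. The paper solves it with linear programming; purely as an illustration, the brute-force sketch below enumerates every selection state s ∈ S_W for a toy two-node world. The Rep record and all of its values are invented for the example:

```java
import java.util.*;

public class QuickOptimize {
    // One candidate representation for a node: fidelity f(n, s(n), d) and cost c(s(n), d).
    record Rep(double fidelity, double cost) {}

    // Exhaustively choose one representation per node (index 0 = null/omit) so that
    // total fidelity is maximized while total cost stays within the threshold T_d.
    static int[] best(List<List<Rep>> nodes, double threshold) {
        int[] best = null;
        double bestF = -1;
        int[] pick = new int[nodes.size()];
        while (true) {
            double f = 0, c = 0;
            for (int i = 0; i < pick.length; i++) {
                Rep r = nodes.get(i).get(pick[i]);
                f += r.fidelity();
                c += r.cost();
            }
            if (c <= threshold && f > bestF) { bestF = f; best = pick.clone(); }
            // Advance a mixed-radix counter over all selection states s in S_W.
            int i = 0;
            while (i < pick.length && ++pick[i] == nodes.get(i).size()) pick[i++] = 0;
            if (i == pick.length) return best;
        }
    }

    public static void main(String[] args) {
        // Two nodes, each with a null representation plus low- and high-detail choices.
        List<List<Rep>> w = List.of(
            List.of(new Rep(0, 0), new Rep(3, 2), new Rep(5, 6)),
            List.of(new Rep(0, 0), new Rep(4, 3), new Rep(6, 8)));
        System.out.println(Arrays.toString(best(w, 9))); // budget T_d = 9
    }
}
```

Exhaustive search is exponential in the number of nodes; it is shown only to make the selection-state formulation concrete, which is why the paper turns to linear programming instead.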

4.2. Defining QUICK

We postulate that it is possible to optimize display for a virtual world, given the following information:

- Quality rating of each representation
- Importance rating of each associated scene node
- Cost rating for rendering each representation

This general framework is referred to as the QUICK model, where the name derives from Quality, Importance, and Cost. To make representation assignments for each scene node as discussed above, one could compute the fidelity function of a representation choice to be its quality multiplied by its importance. That is,

    f(n, s(n), d) = q(n, s(n), d) · i(n)    (4)

where the quality function q is a factor of the node, representation choice, and display; and the importance i is only a function of the node’s place in the virtual world’s scene graph. This heuristic leads to all scene nodes having the highest quality representation in the case where infinite computational resources are available, and the greatest possible quality chosen in the most important scene nodes when resources are limited. Boundary cases are logical as well: for example, there is no contribution to scene fidelity by any node with the null representation or a node with 0 importance, regardless of the chosen representation.

A greedy algorithm would give an acceptable approximation of optimal selection. But while the computation of an optimal selection set s can be straightforward, determining the inputs for the display function is non-trivial. Generation of each of the three q , i, and c functional inputs is discussed in turn below, with special attention to the simple display problem stated above.
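Such a greedy heuristic can be sketched as follows, ordering candidate representations by fidelity gained per unit cost under equation (4). The node names and ratings are invented, and each node is simplified to a single non-null representation:

```java
import java.util.*;

public class GreedySelect {
    record Choice(String node, double quality, double importance, double cost) {
        double fidelity() { return quality * importance; } // f = q * i, per equation (4)
    }

    // Greedy approximation: take representations in order of fidelity gained per
    // unit cost, skipping any that would exceed the platform threshold T_d.
    static List<String> select(List<Choice> choices, double threshold) {
        List<Choice> sorted = new ArrayList<>(choices);
        sorted.sort(Comparator.comparingDouble((Choice ch) -> ch.fidelity() / ch.cost()).reversed());
        List<String> picked = new ArrayList<>();
        double spent = 0;
        for (Choice ch : sorted) {
            if (spent + ch.cost() <= threshold) {
                spent += ch.cost();
                picked.add(ch.node());
            }
        }
        return picked;
    }

    public static void main(String[] args) {
        List<Choice> reps = List.of(
            new Choice("painting", 0.9, 1.0, 4),   // high importance: gallery art
            new Choice("bench",    0.9, 0.1, 4),   // low importance: accouterment
            new Choice("floor",    0.5, 0.3, 2));
        System.out.println(select(reps, 6)); // [painting, floor]
    }
}
```

Note how the bench, despite its high Quality rating, is dropped first: its low Importance gives it the worst fidelity-per-cost ratio, exactly the behavior the heuristic is meant to produce.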

4.3. Determining QUICK Factors

While the application of the QUICK framework can be straightforward, there exist myriad options for generating the inputs to the optimization model. The following discussion gives only a brief overview of the information encapsulated in these inputs. Readers interested in the extension of the QUICK system, such as for the display of complex heterogeneous models, are encouraged to seek the full reference [3]. In nearly all cases, the simplest and most accurate method for determining these input factors is by task-specific author annotation. A number of automated solutions exist or can be extrapolated, but we do not foresee that any will replace human consideration.

Quality. The quality of a representation is a subjective notion that can vary significantly between users, applications, and display platforms. We record with each representation all pertinent information about its rendered result: geometric precision, geometric accuracy, color accuracy, etc. These values are combined at run-time with task- and platform-specific factors to compute the possible Quality contribution of each representation. Static factors, such as display hardware resolution, are combined at initialization time; an example task is equating the color quality of a 16-bit and 24-bit image on an 8-bit display. User task can be static or dynamic, depending on the application. Dynamic factors are significantly more expensive as they must be tested repeatedly, and recomputed after any change. Gauging the relative quality of multiple geometric level-of-detail representations is straightforward, and simple to record in this system. Quantifying the difference between functional accuracy and visual accuracy is an open research area, but QUICK provides an open framework for experimentation.

Importance. The trivial case for importance is a scene where all nodes are equally important, and their Importance rating is the same.
It is sometimes possible to reduce the complexity of a scene without significantly reducing the viewing fidelity by dropping detail only from unimportant areas. For example, in a virtual painting gallery the paintings should have a very high relative importance, while floor tiles, benches, and the like should be low. Likely a user

viewing this world would ignore such accouterments anyway, and would certainly prefer that, in a resource-limited situation, the paintings' nodes be the last to be degraded.

Cost. In a model where each representation is a list of indexed face set polygons, an appropriate cost approximation is the number of polygon vertices. If the display platform is polygon-limited, optimization to the threshold is straightforward. A number of graphics systems have explored complex cost evaluations that include multiple related resources such as rendering hardware, texture memory, and central processing. We leave characterization and consumption of these resources to the graphics hardware community, and note that QUICK can easily incorporate any such approach.
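For the polygon-vertex heuristic just described, the Cost computation reduces to a simple count. A minimal sketch over an indexed face set (the face arrays are invented for the example):

```java
public class PolygonCost {
    // Approximate rendering cost of an indexed face set as its total number of
    // vertex references, the heuristic suggested for polygon-limited platforms.
    static int cost(int[][] faces) {
        int vertices = 0;
        for (int[] face : faces) vertices += face.length;
        return vertices;
    }

    public static void main(String[] args) {
        // One quadrilateral and one triangle sharing an edge: 4 + 3 references.
        int[][] quadAndTriangle = { {0, 1, 2, 3}, {0, 3, 4} };
        System.out.println(cost(quadAndTriangle)); // 7
    }
}
```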

4.4. Environment Fetching for Fidelity

Supporting transfer ordering with the QUICK framework requires only minor modification to the model. At each stage after initialization, the optimization process has access to the characteristics of all nodes in memory and some nodes which have not been requested. (Section 5 explains the process by which annotations and nodes are requested and cached.) The display optimization is performed as if unrequested nodes were available; their transfer costs are kept below the network capability threshold, and their storage costs are included in primary storage. Once a working selection set is generated, the missing nodes are requested. The optimization is then repeated with only the currently available nodes; with memoization techniques, the second computation is greatly accelerated.

To support transfer ordering and optimization, Cost information must also include memory footprint and bandwidth consumption. This same information is required for objects in secondary storage; disk and network transfer paths are functionally equivalent. In conjunction with a specification of machine capability threshold, these values are used to optimize consumption of the network and disk resources. Memory footprint values are vital to local cache management, as well as to computing the cost of a cache fetch action.
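The first pass of this two-pass process can be illustrated with a simplified fetch planner: rank the not-yet-requested nodes and queue as many as the network budget allows. This is a sketch under assumed inputs, not the paper's optimizer; the node names, ratings, and budget are all invented:

```java
import java.util.*;

public class TransferPlanner {
    record Node(String name, boolean loaded, double fidelity, double transferCost) {}

    // Pass 1: plan as if unrequested nodes were available, bounded by the network
    // threshold; the chosen-but-missing nodes become the fetch queue.
    static List<String> fetchQueue(List<Node> nodes, double networkBudget) {
        List<Node> missing = new ArrayList<>();
        for (Node n : nodes) if (!n.loaded()) missing.add(n);
        // Request the highest-fidelity missing nodes first, within the budget.
        missing.sort(Comparator.comparingDouble(Node::fidelity).reversed());
        List<String> queue = new ArrayList<>();
        double spent = 0;
        for (Node n : missing) {
            if (spent + n.transferCost() <= networkBudget) {
                spent += n.transferCost();
                queue.add(n.name());
            }
        }
        return queue;
    }

    public static void main(String[] args) {
        List<Node> world = List.of(
            new Node("room-a", true,  0.9, 0),    // already in the local cache
            new Node("room-b", false, 0.8, 300),
            new Node("room-c", false, 0.4, 300),
            new Node("room-d", false, 0.7, 500));
        System.out.println(fetchQueue(world, 700)); // [room-b, room-c]
    }
}
```

In the example, room-d is skipped despite its higher fidelity rating because its transfer cost would overrun the budget; the second display pass would then run over loaded nodes only, as the text describes.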

5. System Architecture

We have implemented a software architecture to support the annotation and optimization processes, in concert with a traditional virtual environment client/server pair. The framework consists of three distinct functional components: the annotated scene graph; client-side management; and server-side management.

5.1. QUICK Annotations

In order to support the QUICK optimization process, the virtual world description must be annotated with the Quality, Importance, and Cost information described previously. Determining these values is an inherently complex process; for instance, it is impossible to know a user's priorities at any given instant. However, the world author can perform a subjective static approximation which often yields a reasonable estimate of user interest. Inclusion of the display platform and any task-based factors adds significant complication, as does dynamism in world state. We propose that the accuracy of the optimization increases directly with refinement of the input annotations. Again, the reader is referred to the complete project description [3] for further details. For this discussion, the set of node types in a typical scene graph is divided into four generic categories:

- Control: a scene-graph control node. Examples: translation, rotation, author information, rendering hints, material properties, color.
- Geometry: a leaf node that contains display information; need not be true polygonal geometry. Examples: cone, billboard texture, depth image, triangle mesh, indexed face set.
- Switch: a node with one or more subtrees, each of which contains a representation for the same virtual object. Examples: switch, level of detail.
- Group: a node that contains one or more subtrees, each of which represents a different virtual object. Examples: group, transform group, root.
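These categories might be captured in code as a simple classifier. The mapping below from a few familiar VRML/Java3D node names is a hypothetical illustration, not part of the described system:

```java
public class NodeKinds {
    // The four generic scene-graph node categories used by the annotation scheme.
    enum Kind { CONTROL, GEOMETRY, SWITCH, GROUP }

    // Hypothetical classifier for a handful of familiar VRML/Java3D node types.
    static Kind classify(String nodeType) {
        return switch (nodeType) {
            case "Transform", "Material", "Color"      -> Kind.CONTROL;
            case "IndexedFaceSet", "Cone", "Billboard" -> Kind.GEOMETRY;
            case "Switch", "LOD"                       -> Kind.SWITCH;
            case "Group", "TransformGroup"             -> Kind.GROUP;
            default -> throw new IllegalArgumentException("unknown node type: " + nodeType);
        };
    }

    public static void main(String[] args) {
        System.out.println(classify("LOD")); // SWITCH: each LOD child represents the same object
    }
}
```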

(For simplicity, we ignore the fact that certain Control nodes (e.g., Color) can modify the Quality and Cost values of Geometry nodes. Instead, the annotated values of a Geometry node are assumed to include any such scene graph state.) Given these definitions, the optimization problem can be restated as multiple instances of the following questions:

- Display. Given a Geometry node, should it be displayed?
- Switching. Given a Switch node, which particular subtree (if any) should be displayed?
- Child request. Given a Group or Switch node, which subtrees (if any) should be loaded into memory, and in what order?

We address each of these issues in turn. Display. Each Geometry node in the scene graph is associated with the QUICK annotation information (hereafter

referred to as QInfo). The Display problem is exactly the linear optimization problem discussed in Section 4; some subset of the active Geometry nodes are displayed, based on the capability of the display platform. Switching. This problem has two possible subproblems: the Switch node's children are either all Geometry leaf nodes, or the children are arbitrary subtrees. The first case is a slight complication of the Display issue above; the optimization simply has more than two options for drawing the same node. The second case is essentially intractable. Choosing a child subtree of a Switch node is equivalent to performing a global cost optimization: complete QInfo about each node in each subtree is required, including the state of all nested Switch nodes. To simplify this process, we arbitrarily constrain any scene graph to have no more than one Switch node on a path between the root and any leaf. Multiresolution models traditionally do not contain multiresolution sub-models; resolution selections are usually internally complete. Therefore, in practice this constraint is not overly restrictive. With this constraint, it is possible to generate summary QInfo for the entire subtree. During the annotation process, all Geometry nodes in the subtree can simply be combined and treated as a single representation. Depending on the annotation process, this often gives better results than weighted averages of the QInfo information of individual nodes. This summary information is calculated for each subtree and stored in the Switch node itself. Child request. To perform asset prioritization for virtual world transfer, the system must create a total ordering for the subtrees of each Group or Switch node. Again, this process cannot be performed in an optimal manner without QInfo for each node in each subtree. In this instance, the decision must be made in advance of downloading the subtree descriptions—since we may decide not to download the subtree at all.
Downloading a skeleton of the scene graph, including QInfo annotations, is not possible for some instances of the problem; for a large database, the skeletal subtree can itself be too great for local replication. The most logical approach is to record summary QInfo information at each level of the scene graph hierarchy. Unfortunately, this approach is difficult to support in practice; there is no straightforward method to summarize the QInfo annotations. For instance, given three nodes with very different Quality annotations, there is no way to give a summary that is accurate and smaller than a complete listing. This summary information is only used to determine whether to load the next level of the scene graph. Therefore, we have found an appropriate summary for a Group node to be the maximum Importance of its children. Importance alone is the proper metric for determining whether a subtree is interesting; also, certain facets of Importance (such as bounding box information, for visibility and spatial awareness) can be combined conveniently.

Figure 1. Functional components in the QUICK client; each box represents ≥ 1 thread of execution.
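The maximum-Importance summary can be sketched as a recursive bottom-up computation; the tiny gallery scene below is invented for the example:

```java
import java.util.*;

public class SummaryImportance {
    interface Node {}
    record Leaf(double importance) implements Node {}
    record Group(List<Node> children) implements Node {}

    // A Group's summary QInfo is the maximum Importance of its children, computed
    // recursively so each hierarchy level can be judged before its subtrees are
    // downloaded.
    static double summary(Node n) {
        if (n instanceof Leaf leaf) return leaf.importance();
        double max = 0;
        for (Node child : ((Group) n).children()) max = Math.max(max, summary(child));
        return max;
    }

    public static void main(String[] args) {
        Node gallery = new Group(List.of(
            new Leaf(0.2),                                    // bench
            new Group(List.of(new Leaf(0.9), new Leaf(0.8))) // paintings
        ));
        System.out.println(summary(gallery)); // 0.9
    }
}
```

Because the gallery Group summarizes to 0.9, a client deciding whether to descend into it sees the importance of its best child (the paintings), not an average diluted by the bench.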

5.2. QUICK Client Architecture

The QUICK system assumes the existence of a threadsafe scene graph and an associated graphical browser- or walkthrough-class application. As shown in Figure 1, a WorldManager is attached to the scene graph to control display. The WorldManager continuously chooses the displayed child of each Switch node, and whether to draw each Geometry node. It also is responsible for loading new nodes, and freeing memory, via the CacheManager's API. The CacheManager has an insert-only interface to a threadsafe buffer of nodes; the WorldManager occasionally inspects the buffer and merges any newly arrived nodes into the application scene graph. The node insertion process is controlled to allow exploitation of computational coherence in the optimization calculation. The CacheManager in turn consists of a number of components to help it access storage and network resources. Those components, shown in Figure 2, operate as follows:

- CacheManager: The CacheManager component provides the disk and network interface to the WorldManager. It contains a LoadManager and a buffer of nodes to be returned to the WorldManager.
- LoadManager: This component offers a spare interface to the CacheManager for nodes: Load(), Unload(), and Delete(). It contains a DiskManager and NetworkManager to handle platform-specific details involved.
- DiskManager: The DiskManager controls transfer of nodes to and from local secondary storage. A simple API includes the following: Load(), Save(), Delete(), and CheckOnDisk().
- NetworkManager: The NetworkManager implements a single Fetch() method, for downloading a node given its network location.

NetworkManager, DiskManager, and LoadManager enforce the Singleton pattern; that is, only one instance of each can exist in any process space.

Figure 2. Cache management components.
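The insert-only buffer between the CacheManager and the WorldManager might look like the following sketch, here reduced to strings standing in for scene nodes; the class and method names are illustrative, not taken from the prototype:

```java
import java.util.*;
import java.util.concurrent.*;

public class NodeBuffer {
    // The CacheManager inserts newly arrived nodes; the WorldManager periodically
    // drains the buffer and merges the nodes into the application scene graph.
    private final Queue<String> arrived = new ConcurrentLinkedQueue<>();

    // CacheManager side: insert-only, safe to call from loader threads.
    public void insert(String node) { arrived.add(node); }

    // WorldManager side: drain everything that has arrived since the last merge.
    public List<String> drain() {
        List<String> batch = new ArrayList<>();
        for (String n; (n = arrived.poll()) != null; ) batch.add(n);
        return batch;
    }

    public static void main(String[] args) {
        NodeBuffer buf = new NodeBuffer();
        buf.insert("room-b");
        buf.insert("painting-3");
        System.out.println(buf.drain()); // [room-b, painting-3]
    }
}
```

Draining in batches, rather than merging nodes one at a time, matches the text's point about exploiting computational coherence in the optimization calculation.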

5.3. Prototype Implementation

We have developed a prototype of this system in the Java language. Java was selected primarily to ease the programming burden, though its portability and simple threading facilities were also significant assets. Java is also a natural choice for networked applications, as it was designed with web-based data and code transport in mind. The single most important factor, however, was scene graph library availability. This project comes at a difficult time in the graphics industry, where there is no consensus scene graph, much less a portable or free scene graph. These last two factors being requirements, we selected the Java3D graphics library [18]. Java3D supports all standard scene graph operations, is free for development and release, and its graph structure is threadsafe. There is also an open-source VRML97 loader available, giving access to a significant model base. Using VRML as the primary file format also provides practical testing for our proposed application area of next-generation VRML.

To ensure compatibility with standard VRML browsers, and avoid modification of the VRML97 loader, no source files were modified. Annotations and file linkages were instead placed in an auxiliary file. This information is read by

the CacheManager and included in the scene graph as indicated. Upon initialization, the client requests both the root VRML (*.wrl) and associated annotation (*.qik) files. The user is immediately given interactive access to the virtual world, while remaining scene nodes are selected and requested in a separate thread of execution.
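The start-up sequence just described can be sketched as follows. The file-naming convention matches the paper (*.wrl root plus *.qik annotations); the fetch stub, the loadedFiles() accessor, and the list of pending nodes are hypothetical details added for illustration.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of client initialization: block only for the root world and its
// annotation file, then stream remaining scene nodes in a background thread
// while the user already has interactive access.
class QuickClient {
    // Thread-safe list, since the loader thread and callers both touch it.
    private final List<String> loaded = new CopyOnWriteArrayList<>();

    void fetch(String file) { loaded.add(file); }     // stub for a network fetch

    List<String> loadedFiles() { return loaded; }

    // Returns the background loader thread so callers may observe completion.
    Thread start(String worldName, List<String> remainingNodes) {
        fetch(worldName + ".wrl");   // root virtual world
        fetch(worldName + ".qik");   // QUICK annotations and file linkages
        // User interaction begins here; prioritized loading continues behind it.
        Thread loader = new Thread(() -> {
            for (String node : remainingNodes) fetch(node);
        });
        loader.start();
        return loader;
    }
}
```

In the real system the background requests would be ordered by the QUICK priority computation rather than iterated in list order as this stub does.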

6. Conclusions and Future Work

This paper introduces the QUICK framework, a novel mechanism for display and transfer prioritization based on contribution to user experience. By determining the Quality, Importance, and Cost of each virtual object, the resource thresholds of the display platform, and the parameters of the application task, we can theoretically maximize fidelity. We have developed a prototype system that manages a virtual world database to explore this approach. This system allows integration of third-party components, thereby leveraging expertise from parallel developments in motion prediction, cost determination, and resource characterization.

A primary lesson learned from the implementation was the difficulty of the annotation process; generating this information without support in authoring tools is lengthy. Effective determination of the QUICK factors remains an open problem. It is hoped that the planned follow-on work to generate empirical performance data will both validate the QUICK approach and present a convincing case for annotation support in modeling applications.

Acknowledgments

The author would like to acknowledge his dissertation committee and colleagues at the Naval Postgraduate School. Thanks also go to the review committee, whose insightful comments and criticisms greatly enhanced this work. We gratefully acknowledge the sponsorship of the NSF through a Graduate Research Fellowship, and of Advanced Network and Services via the National Tele-Immersion Initiative.

References

[1] M. Aktihanoglu, D. Brutzman, R. Lee, and T. Parisi. X3D (VRML-NG) workshop. Held at ACM VRML '99, February 1999.
[2] M. Capps and D. Stotts. Research issues in developing networked virtual realities. In Proceedings of the Sixth Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises, pages 205–211, Cambridge, MA, June 1997.
[3] M. V. Capps. The QUICK Model for Virtual Environment System Optimization. PhD thesis, Naval Postgraduate School, Monterey, CA, 2000.
[4] C. Carlsson and O. Hagsand. DIVE: A multi-user virtual reality system. In Proceedings of the IEEE Virtual Reality Annual International Symposium, pages 394–401, September 1993.
[5] J. Chim, M. Green, R. Lau, H. V. Leong, and A. Si. On caching and prefetching of virtual objects in distributed virtual environments. In Proceedings of ACM Multimedia '98, pages 171–180, Bristol, UK, 1998. ACM Press.
[6] T. Funkhouser. Database and Display Algorithms for Interactive Visualization of Architectural Models. PhD thesis, Computer Science Division (EECS), University of California, Berkeley, 1993.
[7] T. A. Funkhouser, C. H. Sequin, and S. J. Teller. Management of large amounts of data in interactive building walkthroughs. In D. Zeltzer, editor, Computer Graphics (1992 Symposium on Interactive 3D Graphics), volume 25, pages 11–20, March 1992.
[8] S. Goerger. Spatial knowledge acquisition and transfer from virtual to natural environments for dismounted land navigation. Master's thesis, Naval Postgraduate School, Monterey, CA, 1998.
[9] R. Gossweiler. Perception-Based Time Critical Rendering. PhD thesis, University of Virginia, January 1996.
[10] C. Greenhalgh. Analysing awareness management in distributed virtual environments. In M. Capps, editor, Workshop: Systems Aspects of Sharing a Virtual Reality, at CVE '98, Manchester, UK, June 1998.
[11] C. Greenhalgh and S. Benford. MASSIVE: A collaborative virtual environment for teleconferencing. ACM Transactions on Computer-Human Interaction, 2(3), September 1995.
[12] L. Hitchner and M. McGreevy. Methods for user-based reduction of model complexity for virtual planetary exploration. In SPIE Vol. 1913, pages 622–636, 1993.
[13] R. W. H. Lau, D. To, and M. Green. An adaptive multi-resolution modeling technique based on viewing and animation parameters. In Proceedings of the IEEE Virtual Reality Annual International Symposium, pages 20–27, March 1997.
[14] P. Lindstrom, D. Koller, L. F. Hodges, W. Ribarsky, N. Faust, and G. Turner. Level-of-detail management for real-time rendering of phototextured terrain. Technical Report TR 95-06, Graphics, Visualization, and Usability Center, Georgia Tech, 1995. URL www.cc.gatech.edu/gvu/reports/TechReports95.html.
[15] M. Macedonia and M. Zyda. A taxonomy for networked virtual environments. IEEE Multimedia, 4(1):48–56, January 1997.
[16] M. Macedonia, M. Zyda, D. Pratt, P. Barham, and S. Zeswitz. NPSNET: A network software architecture for large-scale virtual environments. Presence, 3(4):265–287, 1994.
[17] D. Schmalstieg and M. Gervautz. Demand-driven geometry transmission for distributed virtual environments. In Proceedings of EUROGRAPHICS '96, pages 421–433, Poitiers, France, August 1996.
[18] H. Sowizral, K. Rushforth, and M. Deering. The Java 3D API Specification. Java Series. Addison-Wesley, December 1997.