To appear in the Proceedings of the GI Workshop on Modeling - Virtual Worlds - Distributed Graphics, Bad Honnef/Bonn, Germany, November 27-28, 1995
VRML: From the Web to Interactive Multi-User Virtual Reality

Wolfgang Broll
GMD — German National Research Center for Information Technology
Institute for Applied Information Technology
D-53754 Sankt Augustin, Germany
email: [email protected]
www: http://hera.gmd.de/~broll
1 Abstract
VRML (Virtual Reality Modeling Language) has already established itself as a standard for the exchange of 3D descriptions on the Internet. However, it is a completely static description, with no support for virtual worlds with several users and applications, or with a high number of dynamic objects. In this paper we examine how VRML can be extended to fit the requirements of an interactive, distributed, multi-user virtual environment. The paper considers three major fields where VRML needs to be extended: (1) the network components to support multiple users and to achieve consistent worlds, (2) an event model, including a naming scheme and support for arbitrary input and output devices, and (3) an object-oriented interaction model, which allows the modeling of interactions and behaviors and can be extended to support complex applications.

CR Descriptors: C.2.4 [Computer Communication Networks]: Distributed Systems —Distributed applications; I.3.1 [Computer Graphics]: Graphics Systems —Distributed/network graphics; I.3.6 [Computer Graphics]: Methodology and Techniques —Device independence; Interaction techniques; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism —Virtual reality
2 Introduction
Only a few months after the first Internet browsers were released, VRML established itself as the standard 3D format for the distribution of virtual worlds on the Internet. In its initial version it is still closely related to existing products: on the one hand to the WWW (World Wide Web), since it uses the same protocol (HTTP) to transmit data, and VRML browsers are usually invoked from HTML pages on the Web; on the other hand to Open Inventor [Wer94], since the initial draft specification was almost a subset of it. Nevertheless, the development of VRML quickly became a dynamic process, and VRML is growing more and more independent of its parents. Currently, VRML is still a static scene description language, which includes neither interaction nor object behavior. The aim of this paper is to show possible extensions to add multi-user support as well as interactions and rich behavior.

Multi-User Support

In the first part of this paper we examine how VRML could be extended to support cooperative, multi-user worlds on the Internet. We wish to examine how cooperation can be supported without radically altering VRML; for an emerging standard, radical changes are unacceptable. Currently each VRML client communicates in isolation with an HTTPD server. Ideally we should be able to provide a smooth transition from the existing, isolated-client model to a communicating-clients model. The problem of the distribution of artifacts in virtual worlds can be tackled at two levels. Firstly, there are the problems of multi-user access to virtual worlds and of how changes to shared environments might be managed. Secondly, there is the question of how the visualization of objects in a shared space might be kept consistent. Some existing work has looked at the problems of distributed virtual environments [BHML92, CH93, MZP95].

Interactions and Behavior

The second and third parts of the paper show the extensions necessary for VRML to support interactions and object (artifact) behavior. Interactions between users and artifacts, or between several users, as well as object behavior, are currently not specified in VRML. Even existing VR systems support them only in a limited way [CH93, CRK92]. However, there seems to be a strong need for the ability to specify those mechanisms using a high-level description within virtual worlds. Existing solutions for VR systems are usually based on complex scripting languages to provide the required flexibility [SSP95, WGS95]. Today each application has to re-implement certain interactions and behaviors, since existing implementations cannot be reused for further applications.
Currently no generally accepted interfaces and I/O devices for 3D applications exist. For that reason, interactions very often depend on the available input or output devices. Most virtual environments provide their own functionality to support those devices [BSZ93, SGLS93]. Even generic interactions, such as navigating or selecting objects, are not independent of the available input devices. An interaction model for virtual environments should provide mechanisms to make interactions as independent of input devices as possible.
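The device independence called for here can be illustrated with a small sketch (our illustration, not from the paper; all names are hypothetical): raw events from whatever device is present are mapped to abstract interaction events, so that the interaction code never sees device specifics.

```python
# Hypothetical sketch: mapping device-specific input events to
# device-independent interaction events. Names are illustrative only.

# One table per device type translates raw events into abstract interactions.
DEVICE_BINDINGS = {
    "mouse":     {"button1_press": "SELECT", "drag": "MOVE"},
    "dataglove": {"pinch": "SELECT", "fist_move": "MOVE"},
}

def translate(device, raw_event):
    """Return the abstract interaction for a raw device event, if any."""
    return DEVICE_BINDINGS.get(device, {}).get(raw_event)

# Both devices produce the same abstract event, so the same
# interaction code works regardless of the input hardware.
print(translate("mouse", "button1_press"))   # SELECT
print(translate("dataglove", "pinch"))       # SELECT
```

A real system would of course register such bindings dynamically per device rather than in a static table.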
3 Multi-User Extensions

In this section we present our approach to supporting several users in VRML, based on slight extensions to the existing HTTP protocol [BFN95] and servers.

Client-Server Protocol

A naive approach would just extend the existing HTTPD server. VRML browsers (clients) would receive the virtual world file from the server and return their own user embodiment. They would continue to send local changes of the world (especially updates of their own position) to the server. The server would distribute the user embodiments, and all succeeding updates, among all clients.

A scalable, more realistic approach moves most tasks from the server to decentralized components. Central servers as used in the naive approach can very quickly become the bottleneck of a distributed system. Especially when adding further enhancements, such as interactions within shared worlds, the central server approach is no longer suitable, since it does not scale. A more reasonable approach uses the multicast mechanism, which has already proven suitable for large-scale interactive multi-user virtual environments such as NPSNET [MZP95, ZPF94] and DIVE [CH93]. Even so, this approach does not require really new or additional servers; it can be realized by extending existing HTTPD servers. This distinguishes it from other approaches, such as VSCP (Virtual Society Server Client Protocol) used by the Virtual Society project [HMRL95], which realizes multi-user extensions for VRML through additional servers.

In our approach the browser contacts the server to receive the VRML page. Afterwards the client introduces itself as a user to the server, and the server responds by sending the user embodiment files of the current users. Along with this, the client receives a multicast address and port number. The multicast address will usually refer to the server, while the port number will refer to the individual world. However, in large virtual worlds, or when separating groups of users by replicated worlds, different mappings might be used. The browser then sends its current user description, including the location, to the multicast address.

All participants, as well as the server, get the new user information from the multicast address. The browser now listens on the multicast address for updates of other user representations, for new users, and for quit messages of leaving users. A time-out mechanism may be provided by the server: if the server does not receive any message from a client for a certain time, it sends an appropriate quit message to the multicast address.

Figure 1: Multi-user support using multicast groups

This approach reduces server communication dramatically, since the update messages are sent by the clients themselves and are thus reduced to the necessary updates only. Additionally, sending the messages directly via the multicast address reduces the average update latency significantly, since the messages need not be redistributed by the server (see figure 1).

User Objects

Individual user embodiments for each local user should be located at the client site. Each user may have several representations for different virtual worlds. The server should provide some default user embodiments. All these embodiments could be stored in separate files using the current VRML specification. Nevertheless it would be preferable if the server could also add user representations to pages requested by clients not capable of multi-user support. Currently, the user description cannot be included within the virtual world description, since the naming mechanism used in VRML is not general enough to identify different users as well as different types of user representations. As a general mechanism to support user representations in VRML, we would prefer to represent a user embodiment by a new node. This node should contain the user's name, his or her identification (e.g. user@host or the email address of the user), and an entry to specify the type of representation. This type could either specify the type of environment for the specific user representation (e.g., OFFICE, SPACE, GAME, etc.) or the
type of object the user is representing (HUMAN, CAR, SPACESHIP, BEETLE). Objects the user is carrying, and which have not become (identifiable) parts of his representation, i.e. are not visible and have no location, might also be included in a separate component of a user node (see figure 2). It would also be preferable to include not only the geometry of the user object but also user (embodiment) specific attribute settings. Examples are camera settings and navigation specifications (currently part of the browser configuration). This seems very useful in combination with the specification of different user representation types, since the requirements for viewing and navigating change with this type: it does not make sense to walk while your current representation is a spaceship.

Figure 2: User nodes to specify embodiments

Consistency Control

There are several examples of multi-user applications (not only in the VR context, but also in the CSCW context) which show that access management can be based on social rather than technical mechanisms [Bus95]. Imagine a group of users standing in front of a virtual whiteboard (see picture 1).

Picture 1: Multi-user virtual environment

Only one user writes at the whiteboard at any time, while the other users wait until he or she has finished. Thus no access control for the whiteboard is necessary, since consistency among the local views of the world is achieved by a social protocol. This approach might be extended to a large virtual world (or even a MUD1). Such a world will be completely anarchic, but it might still work. However, there are other situations where social protocols will not be sufficient to guarantee the required level of consistency. We will demonstrate this by another example: imagine a user who wants to drive a virtual car. While one user enters the car and drives in one direction, another user within another local copy of the same world might drive the car in another direction. When the position changes are distributed among all sites, they might arrive at the individual clients in arbitrary order. Thus, different clients will have different views of the same world (since multicasting does not preserve the message order). Even the car of one of the driving users may suddenly be relocated. Even if changes within the local worlds are distributed frequently, this will distort the user's view and make the results of interactions unpredictable. Guaranteeing consistency on a certain level always comes along with a certain loss of flexibility and spontaneity, since access has to be restricted. Consistency is particularly important for user representations (avatars, to use the more popular term), but also for many basic interactions.

Active locking without acknowledgement is one example of a simple mechanism to achieve consistency in the system described above. It requires at least three times as many messages as no access management at all: object modifications, or a block of them, have to be preceded by a locking message on the specific objects, and after the distribution of the modifications a release message has to be distributed. Locks are acknowledged neither by the individual clients nor by the server, but the server will manage multiple locks on the same object. Different kinds of locks might be used to support various reliability requirements. Locks should not be understood as guaranteeing absolute access to an object; this kind of lock has failed in most cooperative environments. We see locks rather as a guarantee of achievable consistency, based on a prediction of the client's future activities, similar to the lock mechanism proposed by Dourish [Dou95]. This mechanism provides a kind of soft lock, which may be broken at the price of a certain loss of consistency. However, this scheme may lead to rejections of locking requests. In this case an appropriate message is sent to the client to reset its lock; additionally the client might send a request to the server to restore the object. The server will also distribute the current lock status to all new clients joining the world. Again, the server will be responsible for releasing a lock after a certain time-out and for restoring consistency.

1. Multi-user dungeon
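The soft-lock scheme described above can be outlined in a short sketch (our illustration, not the paper's implementation; class and method names are made up): locks are granted optimistically, may be rejected while held, and expire after a time-out enforced by the server.

```python
# Illustrative sketch of server-side soft locks: a lock predicts a
# client's future activity rather than guaranteeing exclusive access,
# and is released automatically after a time-out.
import time

class SoftLockManager:
    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.locks = {}            # object id -> (client, expiry time)

    def request(self, obj, client, now=None):
        """Grant the lock if the object is free or the old lock expired."""
        now = time.time() if now is None else now
        holder = self.locks.get(obj)
        if holder is None or holder[1] <= now or holder[0] == client:
            self.locks[obj] = (client, now + self.timeout)
            return True
        return False               # rejected: client must reset its lock

    def release(self, obj, client):
        """Release the lock, but only if this client still holds it."""
        if self.locks.get(obj, (None,))[0] == client:
            del self.locks[obj]

mgr = SoftLockManager(timeout=5.0)
assert mgr.request("car", "alice", now=0.0)      # granted
assert not mgr.request("car", "bob", now=1.0)    # rejected, lock held
assert mgr.request("car", "bob", now=6.0)        # old lock timed out
```

A distributed version would additionally broadcast lock and release messages to the multicast group, as described in the text.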
On the one hand, many virtual worlds will not need any kind of access control, since consistency is not a major problem as long as users follow social protocols. This allows the server to stay almost passive after the initialization phase, and thus reduces server and network load. On the other hand, at least for some objects of the world (including avatars) or for certain kinds of interactions (grabbing, moving), consistency is important. Thus it should be possible to specify consistency either on an object (subgraph) level or on an interaction level. This will require extensions to the VRML specification, or will have to be included in the interaction and behavior specification respectively. We will show how this applies to our interaction model in the last part of this paper. As long as we distinguish between servers and clients, i.e. in a more or less centralized system, decentralized access mechanisms [Bro95a] do not seem advantageous, since all virtual world contents are originally located at the server. However, this may change if all users have a basic common library of objects on their local systems. For the central server architecture a rather simple approach should be used: we think access in multi-user VRML worlds should not be restricted by default, but it should be possible to manage access through a soft-lock mechanism if required.

4 Event Handling

Currently VRML does not specify an event model, since it describes static scenes. There is only one defined interaction in VRML (specification 1.0 [BPP95]): the activation of the WWWAnchor node. Usually it is activated by clicking on shapes which are sub-nodes of the anchor; then a new URL, which does not necessarily have to be a VRML world, is loaded. Any additional interactions, such as navigation, are completely browser specific. For that reason, they can neither be specified nor influenced within VRML. However, some browsers use Info nodes, which were added to VRML as a safe place for comments, to set up resources.

To realize interactions and behavior in VRML, a suitable event mechanism has to be established. This raises the questions: "Who will send events?" and "Who will receive and process events?". Senders are at least the input devices, timers or real-time clocks, and external events from other virtual worlds or external applications. The question of the recipients of the events is much harder to solve, since VRML does not provide mechanisms to define entities or artifacts. Let us give a short example to demonstrate the problem. If the object color of a Shape node is to be changed by an event, this event cannot be processed by the Shape node, since it contains no information about the object except the shape type. The color information is stored in a separate Material node. This node class could easily be extended to process the event, but it may be located on a different branch of the scene graph (see figure 3). Thus it is usually impossible to determine the correct Material node without traversing the whole scene graph.

Figure 3: Propagation of properties in the scene graph

In addition, the appropriate node may meanwhile have been eliminated by the browser, since browsers do not have to preserve the original scene graph (VRML specification 1.1). Other mechanisms, such as the VRML node naming scheme, do not seem suitable either. On the one hand, names can be used several times within one scene graph; on the other hand, all nodes would need a (useful and unique) name. Such names could not be created automatically by browsers, since they have to be identical for all copies of the virtual world. Assigning a name to each node by the author of the world does not seem applicable either, since even very simple geometric objects, such as a cube, may consist of six or more VRML nodes. Additionally, naming becomes a problem for nodes which influence several shapes, since each node can have only one name. Our approach is therefore based on a new node type, the Artifact node.

Artifacts

Although the new Artifact node is based on existing group nodes, it represents a complete entity. Since browsers will be forced to keep the structure of those entities, they can be used to handle events. It would also be possible to add such functionality to all kinds of group nodes, but authors of VRML worlds frequently use those nodes to group objects which should be recognized as an entity. In addition, the User node described in the first section will also be able to handle events. Besides the ability to handle events, an Artifact node applies some restrictions to its child nodes:
•there may be only one immediate child node representing a particular property (except transformations)
•all property nodes influencing the current transformation are combined into a single node
•all property nodes are traversed before any shape nodes
•child nodes which are groups (including artifacts) are traversed after the shape nodes

Figure 4 gives an example of an Artifact node. To achieve the same representation (visualization) of the scene using an old-style Separator1 node, the child nodes would have to be rearranged.

1. In the new VRML 1.1 specification proposal, the scene traversal of Separator nodes will be very similar to that of the Artifact nodes described here (see also [VAG95]).

Figure 4: Scene traversal of Artifact nodes (left: scene graph using an Artifact node; right: equivalent scene graph using a Separator node)

The behavior of an Artifact node may seem similar to that of Inventor shape node kits [Wer94]. However, Inventor shape kits contain a number of nodes by default and add nodes in a certain order, whereas Artifact nodes reduce the number of child nodes and do not add additional nodes until they receive a corresponding event. Artifact nodes can now be used to handle simple events, e.g. rotating, translating, or applying a new material. If an appropriate child node does not exist, it will be added by the Artifact node automatically. However, realizing interactions and complex behavior requires additional sub-nodes, which will be introduced in the next section.

Naming Scheme

As already mentioned above, VRML currently supports a very limited naming scheme. The DEF keyword allows a label to be attached to each node. This label can be used with the USE keyword to create instances (identical copies) of the named node. The disadvantages of the current scheme are:
•names are not unique within a scene graph
•hierarchical references are not supported
•qualified names including type and class or subclass definitions are not supported

Unique identifiers and hierarchical references can easily be achieved by modifying the existing DEF/USE scheme (see example 1).

    DEF aTable Separator {
      Translation { translation 0.0 0.8 0.0 }
      DEF top Cube { height 0.05 depth 0.8 width 1.5 }
      ...
    }
    DEF top Cylinder { ... }          # allowed, identifiers on different levels
    DEF aTable Cube { ... }           # illegal, identifier already used
    USE aTable { top { depth 0.4 } }  # add a second table with modified top
    USE aTable.top                    # add just another cube

Example 1: Required modifications of the DEF/USE scheme

Nevertheless these extensions do not support typing or sub-classing. We use the new keyword TYPE to indicate a type definition, which allows a node to be identified by additional names. Further, we use the keyword CLASS in addition to the DEF keyword. In contrast to the DEF keyword, the CLASS keyword starts an object definition without creating an instance. Other proposals achieve this by introducing a new Prototype node class [MB95]. Additionally, it has to be possible to create labels for instances created by USE (this is not possible in the current VRML specification). These mechanisms might be added as shown in example 2.

    CLASS table Artifact { ... }      # definition only, no instance created
    DEF myTable USE table             # creates an instance of the table

A more intuitive syntax might look like this:

    Artifact myTable : table

The TYPE keyword is used to classify classes or nodes:

    TYPE furniture, wooden CLASS table Artifact { ... }
                                      # define types to identify groups of nodes

Example 2: Typing and sub-classing mechanisms

Additionally, a naming scheme for event distribution has to be more flexible than the one used for instancing, because events can have several recipients. Thus it seems necessary to allow wild cards within the recipients' specification and to support node
class names in addition to names and types. A scheme similar to that used to specify X resources can be used. For example,

    *.desk.top.QvArtifact

would specify all Artifact nodes which are children of a "top" Artifact node of a "desk" Artifact node. While class names start with the letters "Qv" (these are the class names used in the public VRML parsing library), names (defined by DEF or CLASS) and types (defined by TYPE) share a common name space. However, other naming conventions could provide separate name spaces for all three kinds of identifiers. While names assigned with the DEF keyword have to be unique (they specify exactly one node), identifiers created by CLASS or TYPE may address several nodes in the scene graph.

In the following three subsections we present the three major parts of our event model: event entities, managers, and input/output objects.

Event Entities

Our simple object-oriented event model uses event entities to represent events. Event entities are objects which include information on the sender, the recipient, and optionally some kind of object-specific data. Local recipients of events can be specified as shown in the previous section. Besides this, it is useful to allow objects to become recipients by expressing an interest in events of a certain type or events sent by a certain object. Each event entity class also provides the methods to set and read the information and object-specific data transmitted. All communication between the objects of the virtual world is performed by event entities. This model has a large advantage over models using static connections: the dynamic distribution mechanism allows events to be sent to artifacts which did not yet exist when the event sender was established. Thus interactions and behavior can be applied dynamically to artifacts joining and leaving a virtual world.

Managers

The second main part of our event model are managers. Managers send and receive events and provide a certain kind of service. Input managers translate external events, such as moving the mouse and pressing keys, into the appropriate event entities. Output managers generate output according to the received event entities. A visualizer is a typical example of an output manager; it usually receives events to change the camera viewpoint (see also figure 5). Very often input and output managers are realized by a single program (e.g. for visualization and input events). Managers are also used to receive and send data on a network, as described in the first part. Several types of network connections may be supported by adding additional managers. Other services realized by managers include collision detection and speech or gesture recognition.

Input/Output Objects

Input/output objects translate event entities: to virtual world artifacts they look like managers, to managers they look like artifacts. In our original work [Bro95b] they were part of the interaction model. However, in VRML they cannot be part of the scene graph, since they are not distributed. Since they use exactly the same syntax as the interaction and behavior objects described in the next section, we propose to place input/output objects in a separate local file, similar to the local files used to store the user embodiments.

Figure 5: Event distribution between managers, input/output objects and artifacts

Input/output objects are used to realize virtual managers, i.e. they provide input and output facilities to virtual world artifacts which are not supported by the local managers (see figure 5). For example, an input/output object can be used to realize a 6D input device although the system provides only mouse input: it receives mouse events and translates the movements and button positions into appropriate 6D events. Although input/output objects are not necessary to realize interactions and behavior, they provide the key mechanism for keeping managers simple and general, and for keeping interactions independent of input devices, network connections, etc.

The event model already provides a powerful mechanism for basic interactions. A new node type, the Artifact node, is used to handle simple but very common events within the scene graph. Thus this model would already allow the scene graph to be modified. However, the interactions would have to be defined within the browsers (managers) or within the input/output objects, which were actually introduced to provide more basic input facilities. So we need a more flexible mechanism to define complex interactions as well as object behavior. It should be possible to transmit such a mechanism over a network, since interactions and behavior are often scene (virtual world) dependent.
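A minimal sketch of the event-entity idea, with X-resource-style recipient patterns as discussed above, might look as follows (our illustration; the class and function names are not from the paper):

```python
# Illustrative sketch: event entities carry sender, recipient pattern
# and optional data; delivery matches node paths against an
# X-resource-like pattern, so one event can reach several recipients.
import fnmatch

class EventEntity:
    def __init__(self, sender, recipient, data=None):
        self.sender = sender          # e.g. "inputManager"
        self.recipient = recipient    # pattern, e.g. "*.desk.top.QvArtifact"
        self.data = data

def matches(pattern, path):
    """Match a dotted node path against a pattern; '*' spans levels."""
    return fnmatch.fnmatch(path, pattern)

def deliver(event, nodes):
    """Return all node paths the event entity is addressed to."""
    return [n for n in nodes if matches(event.recipient, n)]

nodes = ["room.desk.top.QvArtifact", "room.shelf.QvArtifact"]
ev = EventEntity("inputManager", "*.desk.top.QvArtifact", data="highlight")
print(deliver(ev, nodes))   # ['room.desk.top.QvArtifact']
```

Because recipients are resolved at delivery time, an artifact added to the world after the sender was created is still reached, which is the dynamic-distribution property the model relies on.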
In the following section we introduce our interaction model, which provides support for interactions as well as behavior on top of the presented event model.

5 Interaction and Behavior Support

Our approach supports interactions and behavior by adding additional event-handling mechanisms to artifacts. As shown in the previous section, the basic event model allows the scene graph to handle only simple (node-related) events. We now introduce facilities which allow the support of arbitrary events. Our model is based on two new objects, represented by new node classes, which handle complex interactions and behavior:
•interaction objects
•behavior objects
The common aspect of these two object types, as well as of the input/output objects, is the translation of events (event entities) into new events. However, sometimes objects are used to implement a complex behavior rather than merely translating events, especially when realizing behavior objects. The resulting events are then distributed among the recipients.

The advantage of the dynamic communication scheme provided by the event model is the possibility of reusing objects in different contexts. The objects of our approach can easily be applied to new artifacts, since they are independent of static links as used in most existing approaches (e.g. Inventor engines [Wer94]). The objects consist of several sub-objects, which do not necessarily have to refer to nodes. Sub-objects are used to filter, create, process and distribute event entities. Different types of sub-objects can be used to assemble new objects (see figure 6). Widely used configurations might be hard-coded into new nodes after evaluation, to improve performance and to simplify their use. This approach focuses on the exchange of relevant information between the different objects used to realize interaction and behavior. This distinguishes our approach from other proposals for behavior extensions to VRML [MB95, NM95]. One of the most popular approaches uses a mechanism based on Sensor and Trigger nodes to handle simple behavior: sensors detect a certain condition, while triggers apply a simple action such as setting a node field. To realize more sophisticated behavior, these approaches usually resort to external, complex scripting languages. Since scripting languages provide a powerful mechanism to realize behavior as well as whole applications, we will return to this later in this section.

Figure 6: Assembling new interaction and behavior nodes from a tool box

Adding Interactions and Behavior to an Artifact

Interaction and behavior can easily be applied to artifacts by adding the appropriate nodes to them (see figure 7). All events received by the artifact are sent to existing interaction and behavior sub-nodes first (see figure 8). This allows the user to override the default handling of simple events. Only events which are not handled by the interaction and behavior nodes will be processed by the Artifact node directly.

Figure 7: Adding behavior and interaction nodes to an artifact

Interaction and behavior nodes attached to an Artifact or User node can influence not only the local nodes but also any other artifact in the scene graph. Thus interaction nodes can also be used to realize navigation, by sending appropriate events to the visualizer, or to load a new world, similar to a WWWAnchor node.
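The dispatch order described in this section (attached interaction and behavior nodes see an event first, and only unhandled events fall through to the artifact's built-in handling) can be sketched as follows; the names are ours, not the paper's:

```python
# Illustrative sketch of event dispatch in an artifact: attached
# interaction/behavior nodes get the event first; only unhandled
# events fall through to the artifact's default handling.
class HighlightInteraction:
    def handle(self, event):
        if event == "highlight":
            return "color event entity sent to Material node"
        return None                    # not handled, fall through

class Artifact:
    def __init__(self, sub_nodes):
        self.sub_nodes = sub_nodes     # interaction/behavior nodes

    def receive(self, event):
        for node in self.sub_nodes:
            result = node.handle(event)
            if result is not None:     # sub-node overrides the default
                return result
        return f"default handling of '{event}'"

a = Artifact([HighlightInteraction()])
print(a.receive("highlight"))   # handled by the interaction node
print(a.receive("translate"))   # falls through to the artifact
```

This fall-through design is what lets an author override the handling of simple events without having to re-implement them.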
-7-
VRML: From the Web to Interactive Multi-User Virtual Reality
While some interactions and behaviors will require only a small number of specifications and parameters, more complex behavior, as required for intelligent agents or even whole applications, might be better defined within an external file. For that reason it seems most appropriate to allow one or several sub-objects of behavior and interaction nodes to be read in from external files. Additionally, the sub-classing mechanism can be used to apply an interaction or behavior node, once defined, to several artifacts (see example 3).

    CLASS spin Behavior {
        ...                                       # sub-objects (components)
        semantic /usr/local/vrml/Behavior/spin.b  # specified within file
    }

    Artifact {
        USE spin {
            ...   # setting up parameters (fields) here
        }
        ...       # other artifact parts
    }

or using the second proposed syntax:

    Artifact {
        Behavior : spin
        ...
    }

Example 3: Using sub-classing and external files to keep behavior and interaction nodes simple

Interaction and Behavior Nodes

In this sub-section we will look in more detail at the components and sub-objects of interaction and behavior nodes.

Interaction Nodes influence the interactions of an artifact: interactions with the artifact, such as moving and highlighting, as well as interactions by the artifact (or user), such as selecting and grabbing. Additionally, interaction objects can realize more complex mechanisms such as awareness (e.g. by highlighting objects when grabbed), access control, or even shared interfaces. Sub-objects (components) of Interaction nodes can be of four types: activate, deactivate, trigger, and semantic. A node needs at least the trigger and the semantic component in order to work. The trigger component defines the events triggering the interaction; this may be a single event or even a boolean combination of several events. Different trigger components provide more or less rich interfaces to specify those events. The semantic component defines the resulting actions, which usually depend on the triggering event. Several highly configurable semantic components are available to realize most interactions. Activate and deactivate components activate and deactivate the Interaction node according to received events. In addition, a share component can be used to realize a shared interaction interface. The share component blocks, synchronizes, or combines events of several users (located on different sites) to support access control (consistency) as well as multi-user interactions [Bro95a].

Figure 8: Translating high level events into low level events (a highlight event entity received by an Artifact is translated by an Interaction:highlight node into a color event entity delivered to a Material node)

Input/output objects are constructed similarly to Interaction nodes, but since they are part of the local environment, they cannot contain share components.

Usually several Interaction nodes (and input/output objects), or even a network of them, will be used to perform a single interaction between a user (or input device) and a virtual world artifact. This allows abstract interactions on a higher level (semantic interactions) rather than interactions on the functional level provided by the artifacts and the event model. One example is a "Select" interaction, which might be invoked differently depending on the input device and the preferences of the user.

Behavior Nodes represent the dynamic behavior of artifacts. Behavior nodes are very similar to Interaction nodes, but they are usually independent of input events. They are used to model object behavior such as gravity, velocity, and animation. They may have the same components as Interaction nodes, but usually the trigger components will be replaced by an engine component. Engine components provide an internal timer and create internal events at specified intervals or at a certain time; these events are used to trigger the semantic component. Behavior nodes may not contain share components, but they may have sync components as sub-objects. Sync components provide a mechanism to synchronize behavior in a distributed environment. This is achieved by synchronizing the engine components using NTP [Mil92]. Re-synchronization may be applied at regular time intervals or on the basis of threshold values, which can be used to realize a kind of general dead-reckoning mechanism [ZPF94].
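The component structure described above, a trigger (or engine) component deciding when a node fires and a semantic component defining the resulting actions, might be sketched as follows. This is a hypothetical Python model; all class names and the event representation are invented for illustration and are not part of the VRML proposal.

```python
# Hypothetical sketch of the component structure described above:
# an Interaction node pairs a trigger component with a semantic
# component; a Behavior node replaces the trigger with an engine
# component that creates internal events at specified intervals.

class TriggerComponent:
    """Defines the events triggering the interaction."""
    def __init__(self, event_types):
        self.event_types = set(event_types)

    def fires_on(self, event):
        return event["type"] in self.event_types

class SemanticComponent:
    """Defines the resulting actions for a triggering event."""
    def __init__(self, action):
        self.action = action

    def run(self, event):
        return self.action(event)

class InteractionNode:
    """Needs at least a trigger and a semantic component to work."""
    def __init__(self, trigger, semantic):
        self.trigger, self.semantic = trigger, semantic
        self.active = True  # toggled by activate/deactivate components

    def handle(self, event):
        if self.active and self.trigger.fires_on(event):
            return self.semantic.run(event)  # emitted low-level events
        return []

class EngineComponent:
    """Internal timer creating internal events at a fixed interval."""
    def __init__(self, interval):
        self.interval, self.next_fire = interval, interval

    def tick(self, now):
        fired = []
        while now >= self.next_fire:
            fired.append({"type": "engine", "time": self.next_fire})
            self.next_fire += self.interval
        return fired

class BehaviorNode:
    """Like an Interaction node, but driven by an engine component."""
    def __init__(self, engine, semantic):
        self.engine, self.semantic = engine, semantic

    def tick(self, now):
        events = []
        for internal in self.engine.tick(now):
            events.extend(self.semantic.run(internal))
        return events

# An "Interaction:highlight" translating a high-level highlight event
# into a low-level color event for a Material node (cf. figure 8).
highlight = InteractionNode(
    TriggerComponent(["highlight"]),
    SemanticComponent(lambda e: [{"type": "color", "target": "Material",
                                  "value": (1.0, 0.0, 0.0)}]))

# A spin behavior emitting a rotate event every 0.5 time units.
spin = BehaviorNode(
    EngineComponent(0.5),
    SemanticComponent(lambda e: [{"type": "rotate", "angle": 10.0}]))

color_events = highlight.handle({"type": "highlight"})
rotate_events = spin.tick(1.0)  # two engine ticks: at 0.5 and 1.0
```

Note how the same semantic component type serves both node kinds; only the source of the triggering events differs, which mirrors the trigger-versus-engine distinction in the text.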
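The threshold-based re-synchronization (dead-reckoning) idea mentioned above can be illustrated with a small simulation: remote sites extrapolate an object's state locally, and a fresh state is only published when the true state drifts from the extrapolation by more than a threshold. The 1-D motion model and all names are invented for illustration; they are not part of the proposal.

```python
# Hypothetical sketch of threshold-based re-synchronization
# (dead reckoning): updates are sent only when the extrapolated
# position drifts too far from the true position.

def extrapolate(state, now):
    """Remote sites predict position from the last published state."""
    pos, vel, t = state
    return pos + vel * (now - t)

def simulate(true_positions, dt, threshold):
    """Return the times at which state updates had to be sent."""
    updates = []
    state = (true_positions[0], 0.0, 0.0)  # (pos, velocity, timestamp)
    for i, true_pos in enumerate(true_positions):
        now = i * dt
        if abs(true_pos - extrapolate(state, now)) > threshold:
            # Drift exceeded the threshold: publish a fresh state,
            # estimating velocity from the last published position.
            prev_pos, _, prev_t = state
            vel = (true_pos - prev_pos) / (now - prev_t)
            state = (true_pos, vel, now)
            updates.append(now)
    return updates
```

For motion at constant velocity a single update suffices, after which the extrapolation tracks the object exactly; accelerating motion forces periodic re-synchronization, which is the bandwidth trade-off dead reckoning exploits.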
Very complex behavior can be realized by using more powerful semantic components. The most flexible semantic component would contain the code of a scripting language, such as Java [JAV95]. Embedded within the interaction model as well as the event model, these scripting languages would benefit from the dynamic connections and the artifact independence. The Behavior object only defines an interface to the virtual world; all calculations are performed by the scripting language. Since such scripts can be very extensive, it should be possible to load them from separate files. However, such nodes may also realize whole applications; they are then Application nodes rather than Behavior nodes.
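A semantic component backed by an external script might look as follows, with Python standing in for the embedded scripting language discussed above. The loader, the `run` entry point, and the world interface are all invented for illustration.

```python
# Hypothetical sketch: a semantic component whose actions are defined
# by an externally loaded script. The node only defines the interface
# to the virtual world; all calculations happen in the script.

SCRIPT = """
def run(event, world):
    # The script receives the triggering event and a world interface,
    # and returns the events it wants to emit: here, one spin step.
    world['angle'] = (world.get('angle', 0) + 10) % 360
    return [{'type': 'rotate', 'angle': world['angle']}]
"""

class ScriptedSemantic:
    def __init__(self, source):
        namespace = {}
        exec(source, namespace)        # load the external script
        self.run_script = namespace["run"]

    def run(self, event, world):
        return self.run_script(event, world)

world_state = {}
semantic = ScriptedSemantic(SCRIPT)
events = semantic.run({"type": "tick"}, world_state)
```

Because the script is plain text, it could be fetched from a separate file or URL exactly like the external semantic component in example 3, keeping the node definition in the world file short.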
6 Conclusions and Future Work

As VRML is an emerging standard for 3D virtual worlds, we have shown some of the extensions necessary to support an interactive, distributed virtual environment with multiple users. Our proposals add only small but powerful extensions to the current specification and protocol, without altering the whole standard. We showed how the HTTP-based client/server network architecture can be extended to support multiple users. Further, we introduced an event model for VRML and proposed more flexible naming, typing, and entity mechanisms. The event model was used as the basis of our interaction model, which provides the basic elements of any real virtual environment: interactions and behavior. Our future work will include a library of basic interaction and behavior objects and components. Additional sub-objects will be necessary to provide mechanisms for multi-user interactions as well as for synchronized behavior. This will also require appropriate extensions to the network protocol as well as to clients and servers.

7 References

[BPP95] Bell G., Parisi A., and Pesce M.: "The Virtual Reality Modeling Language, Version 1.0 Specification". [www] http://vrml.wired.com/vrml.tech/
[BFN95] Berners-Lee T., Fielding R.T., and Nielsen H.F.: "Hypertext Transfer Protocol HTTP 1.0". HTTP Working Group. [www] http://www.w3.org/hypertext/WWW/Protocols/Overview.html
[BHML92] Blau B., Hughes C.E., Moshell J.M., and Lisle C.: Networked virtual environments. In Computer Graphics (1992 Symposium on Interactive 3D Graphics), Zeltzer, D. (Ed.), pp. 157-160, 25, 2 (1992).
[BSZ93] Böhm K., Sokolowicz M., and Zedler J.: GIVEN++: A Toolkit for Advanced 3D User Interface Construction. In Virtual Reality Vienna '93 Proceedings, Vienna, Austria, December 1993.
[Bro95a] Broll W.: Interacting in Distributed Collaborative Virtual Environments. In Proceedings of the IEEE Virtual Reality Annual International Symposium, VRAIS '95, pp. 148-155, IEEE Computer Society Press, Los Alamitos, CA (March 1995).
[Bro95b] Broll W.: "Interaction and Behavior Support for Multi-User Virtual Environments". In Proceedings of the ACM SIVE 95, First Workshop on Simulation and Interaction in Virtual Environments (Iowa City, Iowa, July 13-15, 1995), pp. 256-263.
[Bus95] Busbach U.: Activity Coordination in Decentralized Working Environments. In Dix, A. (Ed.): Remote Cooperation - CSCW Issues for Mobile and Tele-Workers, Springer, forthcoming.
[CH93] Carlsson C. and Hagsand O.: DIVE - A Platform for Multi-User Virtual Environments. Computers & Graphics, pp. 663-669, Vol. 17, No. 6 (1993).
[CRK92] Codella C.F., Jalili R., Koved L., Lewis J.B., Ling D.T., Lipscomb J.S., Rabenhorst D.A., Wang C.P., Norton A., Sweeney P., and Turk G.: Interactive simulation in a multi-person virtual world. In Human Factors in Computing Systems, CHI '92 Conference Proceedings, pp. 329-334, ACM (May 1992).
[Dou95] Dourish P.: "Consistency Guarantees: Exploiting Application Semantics for Consistency Management in a Collaboration Toolkit". Rank Xerox Technical Report EPC-1995-106, Cambridge, 1995.
[HMRL95] Honda Y., Matsuda K., Rekimoto J., and Lea R.: Virtual Society: Extending the WWW to Support a Multi-User Interactive Shared 3D Environment. To appear in Proceedings of the VRML '95 Symposium (San Diego, CA, Dec. 13-15, 1995), ACM, 1995.
[JAV95] Java Language Documentation. Sunsoft (1995). [www] http://java.sun.com/documentation.html
[MZP95] Macedonia M.R., Zyda M.J., Pratt D.R., et al.: Exploiting Reality with Multicast Groups: A Network Architecture for Large-Scale Virtual Environments. In Proceedings of the Virtual Reality Annual International Symposium, VRAIS '95, pp. 2-10, IEEE Computer Society Press, Los Alamitos, CA (March 1995).
[MB95] Meyer B. and Brookshire Connor D.: Adding Behavior to VRML. Brown Computer Graphics Group. [www] http://www.cs.brown.edu/research/graphics/research/papers/vrmlbehavior.html
[Mil92] Mills D.L.: "Network Time Protocol (Version 3) Specification, Implementation and Analysis". DARPA Network Working Group Report RFC 1305, University of Delaware (March 1992).
[NM95] Nadeau D.R. and Moreland J.L.: The Virtual Reality Behavior System (VRBS): A Behavior Language Protocol for VRML. To appear in Proceedings of the VRML '95 Symposium (San Diego, CA, Dec. 13-15, 1995), ACM, 1995.
[SGLS93] Shaw C., Green M., Liang J., and Sun Y.: Decoupled Simulation in Virtual Reality with The MR Toolkit. ACM Transactions on Information Systems, pp. 287-317, Vol. 11, No. 3 (July 1993).
[SSP95] Singh G., Serra L., Png W., Wong A., and Ng H.: BrickNet: Sharing Object Behaviors on the Net. In Proceedings of the Virtual Reality Annual International Symposium, VRAIS '95, pp. 19-27, IEEE Computer Society Press, Los Alamitos, CA (March 1995).
[VAG95] VRML Architecture Group: Meeting on VRML 1.0/1.1/2.0 Issues (August 20-23, 1995). [www] http://earth.path.net/mitra/papers/vag950823.html
[WGS95] Wang Q., Green M., and Shaw C.: EM - An Environment Manager for Building Networked Virtual Environments. In Proceedings of the Virtual Reality Annual International Symposium, VRAIS '95, pp. 11-18, IEEE Computer Society Press, Los Alamitos, CA (March 1995).
[Wer94] Wernecke J.: The Inventor Mentor. Addison-Wesley, Reading, Massachusetts, USA (1994).
[ZPF94] Zyda M.J., Pratt D.R., Falby J.S., Lombardo C., and Kelleher K.M.: The Software Required for the Computer Generation of Virtual Environments. Presence 2, 2 (1994).