To appear in Proceedings of the ACM SIVE'95, First Workshop on Simulation and Interaction in Virtual Environments, University of Iowa, July 13-15, 1995
Interaction and Behavior Support for Multi-User Virtual Environments

Wolfgang Broll
GMD - German National Research Center for Information Technology
Institute for Applied Information Technology
D-53754 Sankt Augustin, Germany
email: [email protected]
Abstract
This paper examines the problem of modeling user interaction and object behavior in multi-user virtual environments. It presents a new object-oriented interaction model to support the various aspects of interaction and behavior. The model considers topics such as new 3D I/O devices and multi-modal input and output, as well as CSCW issues such as awareness, user representation, and the sharing of objects and of the interactions with these objects. The model includes support for rich and autonomous object behavior. Previous work, such as the spatial model, had a significant influence on the development of our model. One key issue of our work is the support of rapid prototyping of complex virtual worlds and virtual reality applications.
Keywords: Virtual reality, human-computer interaction, multi-user virtual environments, computer-supported cooperative work.
1 Introduction
Interactions between users, agents, and artifacts, as well as object behavior, are supported only in a limited way by existing VR systems [6,17,22]. Complex interactions and rich behavior usually have to be reimplemented by each application. Even simple everyday interactions, such as walking or moving objects, are not supported in an adequate way. They require a large effort in development and programming, and the resulting implementations often cannot be reused in further applications.

Approaches similar to those used in 2D environments [8,12] usually fail in virtual environments because of the immersive user interface. This user interface suggests intuitive, natural interactions. Since the user is part of the virtual world, rather than interacting with it from outside, he or she expects to interact with virtual world objects in the same way, or at least in a similar way, as in the real world. Nevertheless, virtual world objects may also be completely artificial or metaphorical. In both cases they will not act naturally, but they still have to be able to realize complicated interactions. Moreover, even when no user interaction is performed, many objects have to realize complex behavior. Although there are some approaches to overcoming these problems in local single-user systems [3], there are no solutions for them in multi-user virtual environments [7], especially when these are distributed [2,14,22]. Some of the problems are not yet well understood, e.g. how to handle multi-user interactions [10], i.e. interactions between several users and one artifact. In multi-user environments the social aspects of interactions also have to be considered [5].

Currently no generally accepted interfaces and I/O devices for 3D applications exist. For that reason each virtual reality system provides its own functionality to support these devices. In most cases even generic interactions, such as navigation or the selection of objects, are not independent of the input devices provided by the individual workstation.
The intent of our interaction model is to provide a simple but powerful mechanism for realizing complex interaction and behavior facilities, and to enable virtual environments to deal with several users without limiting their interaction possibilities. The aim of this paper is to show that interaction and behavior within virtual environments can be realized with minimal effort, and that the resulting implementations can easily be reused in future applications.
2 Event Handling
The solution we present is based on a simple event mechanism. This event model is used for the development of our current prototype; the interaction model could, however, also be implemented on top of a different event model.

The object-oriented event model uses event entities to represent events. Event entities are objects which include information on the sender, the recipients, and optionally some kind of object-specific (entity-specific) data to be transmitted. As in the spatial model [1], the possible recipients (users, agents, artifacts, etc.) of an event can be restricted to a certain scope. Beside this, recipients may be identified by their class, type, or name, or even by the evaluation of component properties. Objects can also become recipients by expressing an interest in events of a certain type or in events sent by a certain object. Each event entity class also implements the methods to set, read, and process the information and the object-specific data. The communication between all objects of the virtual environment is performed by event entities. Each object has individual abilities, or restrictions, regarding the classes of event entities it can receive, process, and send. This mechanism is also used to transmit objects in a distributed environment: object information is encapsulated in event entities, which are linearized and transmitted over a network. Thus event entities can be used to transmit not only events and messages, but also whole virtual world objects (artifacts). At the destination the event entities are rebuilt and the information can be extracted. Linearization and transmission are performed by managers, which are the second important part of the event model.
Figure 1: Examples of object communication and transmission. Objects within the virtual world exchange events, data, and messages via event entities; input/output managers connect the virtual environment to virtual environment files and local devices, while distribution managers connect it to other sites via the network connection.
Managers are also used to realize input and output (see figure 1). Input managers translate external events, such as pushing a mouse button, into the appropriate event entities. Output managers generate output according to the received event entities; one example is a visualizer, which performs rendering depending on the current camera position. Input and output managers are also used to read world descriptions from files, or to write them back to support persistence. Thus support for different input devices (mouse, glove, speech), different input file formats (Inventor, OBJ, DXF, etc.), and different kinds of network communication (TCP, UDP, multicast) can easily be achieved by adding the appropriate manager to the system. Other important tasks realized by managers are central (real-time) timers and collision detection.
3 Interaction and Behavior Support
It is possible to implement applications on the basis provided by an event model like the one described above. Unfortunately, in most existing
virtual environments there is no additional support for realizing user interactions and object behavior.

Our interaction model allows us to keep input and output managers general and to separate most of the interaction and behavior realization from application development. For that reason the different interaction mechanisms provided by our model are located between the managers and the virtual world objects. We distinguish three types of objects in our model:
• input/output objects
• interaction objects
• behavior objects

Although interaction objects, I/O objects, and behavior objects sometimes realize very similar functions, there are some important differences between these object types. The common aspect of these objects is the translation of event entities into new events. However, sometimes objects are used to implement a complex behavior rather than to translate event entities, especially when specifying behavior objects. The advantage of using a dynamic communication scheme, as provided by the event model, to exchange data between the different object types, as well as to transmit data to virtual world objects, is the possibility to reuse objects in different contexts. In contrast to existing approaches (e.g. Open Inventor engines [21]), the objects of our model can easily be applied to new artifacts, since they can be realized without any static links. The objects consist of several sub-objects which filter the received events, process them, and distribute the results. Several highly configurable sample implementations of the individual subclasses exist and can be combined to create new objects (see figure 2). This kind of functionality may be realized by an interactive composer as part of a virtual reality authoring tool. Widely used components can be hardcoded after evaluation to improve performance. Our approach focuses on the exchange of relevant information between the different objects used to realize interaction and behavior.

Figure 2: Assembling new objects from a toolbox. Components such as activate, deactivate, trigger, and semantic are taken from component toolboxes and combined into a new object type; instancing this type yields an object which can be applied to an artifact.
This differs from approaches based on complex scripting languages, which provide a powerful mechanism to realize complex behavior [13,17] but do not offer the ability to create a dynamic network of interactions and behavior. Nevertheless, scripting can be included in our model (see "Complex Interactions").
Input / Output Objects
Input/output objects (I/O objects) realize virtual managers: to other virtual world objects an I/O object looks and acts like a manager. Input/output objects provide a simple mechanism to create new (virtual) devices by combining existing input managers. Since they depend on the existing local managers, rather than on an individual artifact or user, they are part of the virtual environment. This implies that they are not replicated in distributed systems. They are used to provide the virtual world objects with input and output managers which are not supported locally. For example, an I/O object can be used to model a 6D input device, although the system provides only 2D mouse input. Thus I/O objects provide a high-level interface between the virtual environment and the virtual world objects. Using a set of I/O objects, a virtual environment can be designed indepen-
dently of the existing real I/O devices. The mechanism is flexible and powerful enough to allow each user or group of users to realize input and output in a way that fits their personal needs as well as the requirements of certain application areas. An example definition for such an input/output object is shown below (see example 1).

InputOutput IOAutoConnect INSTANCE MouseTo6D {
  # activation and deactivation handled by connection
  Trigger simple {
    # triggers on each specified event/input
    input [ WinMouseEvent winMouse ]
  }
  Semantic calculate {
    input  [ WinMouseEvent mouse = Trigger.winMouse ]
    output [ 6DEvent location ]
    calculate location.Pos.x = ( mouse.Pos.x / mouse.winWidth - 0.5 ) * 10.0 * mouse.btn1
    calculate location.Pos.y = ...
    ...
  }
}

Example 1: Defining an input/output object to realize 6D input by a mouse

This object realizes navigation in 3D depending on the position and the buttons of a mouse pointer within a viewer window.
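To illustrate the device independence provided by I/O objects, a further sketch is given below. It is not part of the prototype described here: the KeyEvent type and its fields are assumptions made purely for illustration, standing in for events delivered by a keyboard input manager.

InputOutput IOAutoConnect INSTANCE KeysTo6D {
  # sketch only: KeyEvent and its fields are assumed, not taken from the prototype
  Trigger simple {
    input [ KeyEvent keys ]
  }
  Semantic calculate {
    input  [ KeyEvent keys = Trigger.keys ]
    output [ 6DEvent location ]
    # map cursor keys to forward/backward and left/right motion
    calculate location.Pos.x = keys.cursorUp - keys.cursorDown
    calculate location.Pos.y = keys.cursorLeft - keys.cursorRight
  }
}

Since both objects emit 6DEvents, interaction objects consuming 6D input would work unchanged with either device.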
Interaction Objects
Interaction objects are less general than I/O objects. They are used to process event entities and to manipulate virtual world objects (artifacts). Instances of interaction objects are usually connected to an artifact. They influence the interactions of the artifact: interactions with the artifact as well as interactions by the artifact. For that reason we distinguish two types of interaction objects: direct and indirect interaction objects.

Picture 1: Using interaction objects to support awareness (highlighting the cup)

Direct interaction objects
Direct interaction objects, such as the navigation object shown below, influence the way other objects can interact with the artifact to which the object is attached. Beside object manipulation (moving, grabbing, changing color, etc.), these interaction objects can provide more complex mechanisms such as:
• awareness [11], e.g. by highlighting the object (see picture 1),
• access control [16], by restricting interactions to specified users, or
• interfaces to share the objects and the interactions on these objects among several users [19].

Indirect interaction objects
Indirect interaction objects are used to influence the interaction with other artifacts; they are usually attached to user artifacts (user representations/avatars). They allow user-dependent interactions and can be used to model different types of users (roles) [9]. Indirect interaction objects are also used to model 3D widgets [8] or tools. Sometimes those objects attached to artifacts are used to manipulate the user object. Examples which can be realized by this kind of interaction object are:
• hyperlinks, as used in VRML [20],
• portals to travel between several worlds,
• buttons, sliders, etc.

A simple example for an interaction object template (soft class) used to implement user navigation is shown below (see example 2).
Instances of the template can easily be attached to artifacts to support the specified interaction.

Interaction IASimple TEMPLATE NaviSimple {
  Activate auto {
    # activating event passed to trigger
    input [ 6DEvent location = ANYMATCH ]
  }
  Deactivate simple
  Trigger simple {
    # trigger on 6DEvents
    input [ 6DEvent location = Activate.location ]
  }
  Semantic calculate {
    # supports basic arithmetic calculations
    input  [ 6DEvent newLoc = Activate.location,                # input event
             3DVec oldPos = LOCALOBJECT.Transform.translation,
             3DVec oldDir = LOCALOBJECT.Transform.rotation ]
    output [ 3DVecEvent newPos, 3DVecEvent newDir ]
    # could be replaced by internal variables, since output is sent directly
    calculate newDir.z = oldDir.z + newLoc.Dir.z
    send newDir => LOCALOBJECT.Transform.rotation
    calculate newPos.x = oldPos.x + ( newLoc.Pos.x * cos( newDir.z ) )
    calculate newPos.y = oldPos.y + ( newLoc.Pos.y * sin( newDir.z ) )
    send newPos => LOCALOBJECT.Transform.translation
    # since output is defined, other objects can also receive these events
  }
}

Example 2: Defining an interaction template for simple user navigation
Usually a network of several interaction objects is used to perform a single interaction between a user artifact and another virtual world object. This allows abstract interactions on a higher level (semantic interactions) rather than interactions on the functional level provided by the virtual world objects. One example is a Select interaction, which might be defined differently depending on the input device and the preferences of the user. However, using this abstract interaction to trigger more complex interactions allows us to model them independently of the user and the input device.
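A possible sketch of such an abstract Select interaction is given below. It is not taken from the prototype: the ButtonEvent and SelectEvent types as well as the PICKEDOBJECT reference are assumptions used only to illustrate how a device-specific event could be translated into a device-independent, semantic one.

Interaction IASimple TEMPLATE SelectByButton {
  Activate auto {
    input [ ButtonEvent press = ANYMATCH ]   # assumed device-specific event
  }
  Deactivate simple
  Trigger simple {
    input [ ButtonEvent press = Activate.press ]
  }
  Semantic simple {
    output [ SelectEvent select ]            # abstract, device-independent event
    send select => PICKEDOBJECT              # hypothetical reference to the artifact under the pointer
  }
}

More complex interactions attached to an artifact would then react to the SelectEvent rather than to any particular device.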
Interaction objects in multi-user environments
When an artifact is distributed, its interaction objects are distributed as well. In multi-user virtual environments several users may interact with one artifact at the same time. Thus interaction objects have to provide suitable access mechanisms to either forbid or support concurrent interactions. Existing systems usually use simple locking mechanisms [6] or do not supervise access control at all [14]. This is a very important task, not only for reliable systems, but also to keep the views of different users consistent: a user wants to be sure whether he or she has control over an object or not. Two approaches exist for handling interaction with distributed objects: either the interaction objects are copied (replicated) but consistency between the replicated objects is not supported, or all copies are kept consistent. Even if the copies of the interaction objects are not kept consistent, the replicated artifacts are! One important question is which kind of interaction object distribution should be used for a certain interaction. Single-event interactions can be performed by several users without any interference; in distributed systems, access control on the artifact is necessary to ensure consistent worlds. Concurrent multi-event interactions, however, need further support: either the interaction objects are locked, or mechanisms for concurrent interaction resolution have to be implemented [4,10]. We have included support for multiple users grabbing an artifact at the same time. The different positions of the users are combined and can be used by other interaction objects. This allows us, for example, to move a complex artifact through a labyrinth, to deform a sticky artifact, or even to model some kind of group decision.

Behavior Objects
Behavior objects represent the dynamic behavior of an artifact. In certain ways they are very similar to interaction objects. Like interaction objects, behavior objects are part of an artifact, but they are independent of (or at least not directly dependent on) input and output events. The processing performed by behavior objects depends on elapsed time or on events such as collision detection. Examples of object behavior which can be modeled by this kind of object are gravity, velocity, and animation (windmill, waterfall, etc.). Animation provided by a behavior object can be used to realize a virtual clock (see picture 2), using three behavior objects for animation: one for each hand (see example 3)
and one for the pendulum. Since the behavior object used as an example does not use a static connection to the artifact, it may also be applied to different artifacts, even to those which cannot perform rotations. Mechanisms can be applied to behavior objects (as well as to I/O objects and interaction objects) to prevent execution if the output is not processed by another object, artifact, or manager.

Behavior BAlways TEMPLATE HourHand {
  # activation / deactivation included in base type BAlways
  Trigger timeLoop {
    startValue 0          # 0 deg.
    endValue   6.23955    # 357.5 deg.
    stepWidth  0.043633   # 2.5 deg.
    stepLength 300        # 5 min
    repeat     TRUE
  }
  Semantic simple {
    output [ FloatEvent hour = Trigger.value ]    # event entity
    send hour => LOCALOBJECT.rotation.x
  }
}

Example 3: Simple behavior object template to animate the hour hand of the clock

Picture 2: Behavior objects can be used to animate the hands and the pendulum of a clock

Distributed Behavior
In distributed systems independent dynamic behavior can lead to inconsistent states. Techniques to deal with this [18], such as dead reckoning [22], are often limited to handling object velocity. In our model the behavior objects can be forced to resynchronize under certain conditions. However, for some kinds of behavior this may not be sufficient if the affected data fields are not replicated at the same time. For that reason behavior objects also provide a mechanism to perform object behavior on a single site (on a per-object basis) and to distribute the output to all other sites. This is recommended for nondeterministic behavior. Both methods might use NTP [15] to realize distributed behavior.
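The velocity behavior mentioned above (the kind of behavior dead-reckoning techniques cover) could be sketched in the same configuration syntax as follows; the exact timer semantics and the velocity component of the artifact are assumptions for illustration only and are not taken from the prototype.

Behavior BAlways TEMPLATE ConstantVelocity {
  Trigger timeLoop {
    stepLength 1          # fire once per second; remaining timer parameters as in example 3
    repeat     TRUE
  }
  Semantic calculate {
    input  [ 3DVec oldPos = LOCALOBJECT.Transform.translation,
             3DVec vel    = LOCALOBJECT.velocity ]   # assumed velocity component (see the properties discussed under "Extensions")
    output [ 3DVecEvent newPos ]
    calculate newPos.x = oldPos.x + vel.x
    calculate newPos.y = oldPos.y + vel.y
    calculate newPos.z = oldPos.z + vel.z
    send newPos => LOCALOBJECT.Transform.translation
  }
}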
Complex Interactions
Artifacts, and especially user artifacts, will sometimes have a large number of interaction and behavior objects attached to them. To make artifact descriptions more readable, and to hide the internal event distribution between the individual objects, our model provides a grouping mechanism. When several interaction and behavior objects are grouped together, a new object type is created. Instances of this object type can be attached to artifacts (see example 4).

InteractionGroup IAGroupEmpty TEMPLATE IAGmyUser {
  Interaction NaviSimple    # see example above
  Interaction Camera        # defined elsewhere
  Interaction Select
  Interaction Fly
}

Artifact ArtifactSimple {
  # users and artifacts are not distinguished
  Shape BodyShape           # defined elsewhere
  InteractionGroup IAGmyUser
}

Example 4: Grouping mechanism and simple user definition

However, the grouping mechanism does not provide any additional interaction features at this time. Grouping of objects is also allowed for input/output objects, but cannot be used to combine them with interaction or behavior objects. The complexity of the interaction or behavior which can be performed by a single object is highly dependent on the particular semantic component. In
most of the examples given in this paper, the semantic component calculate is used, since it provides a universal and very powerful mechanism to calculate the output of the object. Nevertheless, we can think of even more flexible semantic components. The most flexible semantic component would contain code of a scripting language, such as Java [13]. This would allow flexible as well as very complex semantics. Although (in contrast to the basic idea of our approach) these components would require a certain amount of real programming rather than configuration, embedded within our model such scripting languages would also benefit from the dynamic connections and the object independence. They allow the integration of whole virtual reality applications within such objects. For that reason these should be called application objects. These objects may be attached to single artifacts, e.g. to realize an intelligent agent, or to larger parts of the virtual world (or even the whole virtual world) to realize a virtual reality application.
Extensions
Using the existing objects of the interaction model, more powerful new objects can be created by combining them with virtual world artifacts or their subcomponents. These combinations do not offer any additional functionality, but they allow interactions to be described on a more abstract level and they support complex interaction and behavior. The objects we have found useful so far are:
• manipulators
• virtual devices
• communicators
• properties

Manipulators are a combination of input/output objects and artifacts or the graphical representation of an artifact. One example of a manipulator object is a handlebox as used in Open Inventor [21]. Another, more complex example is an RGB cube, which allows the translation of a 3D position into an RGB value and can easily be used to change an object's color interactively. Manipulators are similar to input/output objects, since they are not replicated and are only temporarily visible to the user. Manipulators are not a component of the virtual world.

Virtual devices are input or output devices which exist within the virtual world only. They consist of an artifact and at least one interaction object. Examples of virtual devices are a keyboard or a light switch within a virtual world, which may be used as in the real world. Since virtual devices are based on artifacts, they are distributed and visible to all users.

Communicators consist of an artifact and an input/output object. They are used to provide or support teleconferencing or teleoperating facilities. Examples of communicators are virtual microphones and virtual speakers. The artifact part of a communicator is replicated, while the input/output object exists in the local system only.

Properties are used to realize rich behavior. They consist of behavior objects and artifact subcomponents. One example of a property is velocity. Velocity could be specified entirely within a single behavior object. However, several dynamically changing forces may influence the velocity of an object. For that reason it seems more appropriate to represent the current velocity in a separate component of the artifact. This allows several behavior objects to use and manipulate the same data.
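As a sketch of how such a property might be used (again with an assumed velocity component and abbreviated timer semantics, not taken from the prototype), a gravity behavior object could manipulate the shared velocity component, while a separate behavior object, such as the velocity sketch given earlier, applies it to the artifact's translation:

Behavior BAlways TEMPLATE Gravity {
  Trigger timeLoop {
    stepLength 1          # fire once per second; remaining timer parameters omitted
    repeat     TRUE
  }
  Semantic calculate {
    input  [ 3DVec vel = LOCALOBJECT.velocity ]   # assumed shared property component
    output [ 3DVecEvent newVel ]
    calculate newVel.x = vel.x
    calculate newVel.y = vel.y
    calculate newVel.z = vel.z - 9.81             # constant downward acceleration
    send newVel => LOCALOBJECT.velocity
  }
}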
Interaction and behavior support for groups of objects
Our interaction model can be further improved by adding new group objects. These group objects are neither used to combine objects of the interaction model, as described above, nor are they the kind of grouping mechanism usually used to assemble artifacts or their graphic components. Rather, these group objects are used to apply interaction and behavior objects to several artifacts. The group mechanism allows interaction and behavior objects to be added to and removed from artifacts dynamically when they join or leave a group. Additionally, the group object itself may
have interaction and behavior objects. Group objects can be distinguished by the methods that artifacts use to join or leave them. Virtual worlds are one type of group object in our model. As soon as an object enters a virtual world, the appropriate interaction and behavior objects are added to it. This allows, for instance, the representation of gravity within one world, since all artifacts receive a gravity behavior object as soon as they enter the world. Spaces are another type of group object. The group is spatialized, which makes it possible to model things such as fish tanks or force fields, where object behavior differs from the rest of the world. The group mechanism may also be applied to an arbitrary group of artifacts, or to artifacts with certain attributes. Objects are then members of a group rather than a part of it. Thus an object can participate in several groups at the same time.
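The paper defines no concrete notation for group objects; purely as an illustration, a world that equips every entering artifact with the gravity behavior sketched above might be written along the following hypothetical lines:

# hypothetical notation - group objects are not covered by the example syntax above
Group World TEMPLATE EarthWorld {
  # attached to every artifact entering this world
  Behavior Gravity
  Behavior ConstantVelocity
}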
4 Conclusions and Future Work
Interaction models have a high influence on the development of applications for virtual environments, and especially for multi-user virtual environments. In existing VR systems interaction and behavior are not adequately supported. By paying attention to the requirements of VR applications we have introduced a new object-oriented interaction model. The model considers requirements from the CSCW perspective; it supports user-dependent and artifact-dependent interactions as well as multi-modal input and output. Multi-user support, such as sharing and concurrent editing of artifacts, is integrated into the approach, as is rich object behavior. In our future work we will extend our prototype implementation with new interaction and behavior classes and components. The extensions already mentioned in this paper will be added to simplify the use of the interaction and behavior objects and to support rich interfaces. We will also try to adapt our model for multi-user interaction and behavior to VRML, the Virtual Reality Modeling Language.

References
[1] Benford, S.D., and Fahlén, L.E. A Spatial Model of Interaction in Large Virtual Environments. In Proceedings of the Third European Conference on CSCW (ECSCW'93), Milano, Italy, 1993, Kluwer.
[2] Blau, B., Hughes, C.E., Moshell, J.M., and Lisle, C. Networked virtual environments. In Computer Graphics (1992 Symposium on Interactive 3D Graphics), Zeltzer, D. (Ed), 25, 2 (1992), pp. 157-160.
[3] Böhm, K., Sokolowicz, M., and Zedler, J. GIVEN++: A Toolkit for Advanced 3D User Interface Construction. In Virtual Reality Vienna '93 Proceedings, Vienna, Austria, December 1993.
[4] Broll, W. Interacting in Distributed Collaborative Virtual Environments. In Proceedings of the IEEE VRAIS'95 Conference, pp. 148-155, IEEE Computer Society Press, Los Alamitos, CA (March 1995).
[5] Busbach, U. Activity Coordination in Decentralized Working Environments. In Dix, A. (ed.): Remote Cooperation - CSCW Issues for Mobile and Tele-Workers, Springer, forthcoming.
[6] Carlsson, C., and Hagsand, O. DIVE - A Platform for Multi-User Virtual Environments. Computers & Graphics, Vol. 17, No. 6 (1993), pp. 663-669.
[7] Codella, C.F., Jalili, R., Koved, L., Lewis, J.B., Ling, D.T., Lipscomb, J.S., Rabenhorst, D.A., Wang, C.P., Norton, A., Sweeney, P., and Turk, G. Interactive simulation in a multi-person virtual world. In Human Factors in Computing Systems, CHI'92 Conference Proceedings, pp. 329-334, ACM (May 1992).
[8] Conner, D.B., Snibbe, S.S., Herndon, K.P., Robbins, D.C., Zeleznik, R.C., and van Dam, A. Three-dimensional widgets. In Computer Graphics (1992 Symposium on Interactive 3D Graphics), Zeltzer, D. (Ed), 25, 2 (1992), pp. 183-188.
[9] Danielsen, T. AAM - The AMIGO Activity Model. In Computer-Based Group Communication, Pankoke-Babatz, U. (ed.), pp. 86-89, Ellis Horwood Ltd, 1989.
[10] Ellis, C.A., Gibbs, S.J., and Rein, G.L. Groupware - Some Issues and Experiences. Communications of the ACM, Vol. 34, No. 1, pp. 38-58, January 1991.
[11] Fuchs, L., Pankoke-Babatz, U., and Prinz, W. Supporting Cooperative Awareness with Local Event Mechanisms. To appear in Proceedings of the European Conference on Computer Supported Cooperative Work 1995 (ECSCW'95), September 1995, Stockholm, Sweden.
[12] Hübner, W., and de Lancastre, M. Towards an Object-Oriented Interaction Model for Graphics User Interfaces. Computer Graphics Forum, 8, 3 (1989), pp. 207-218.
[13] Java Language Documentation. Available through: http://java.sun.com/documentation.html.
[14] Macedonia, M.R., Zyda, M.J., Pratt, D.R., et al. Exploiting Reality with Multicast Groups: A Network Architecture for Large-Scale Virtual Environments. In Proceedings of the Virtual Reality Annual International Symposium (VRAIS '95), pp. 2-10, IEEE Computer Society Press, Los Alamitos, CA (1995).
[15] Mills, D.L. Network Time Protocol (Version 3) specification, implementation and analysis. DARPA Network Working Group Report RFC-1305, University of Delaware, March 1992, 113 pp.
[16] Moffett, J.D., and Sloman, M.S. The source of authority for commercial access control. IEEE Computer, 21(2), pp. 59-69, Feb. 1988.
[17] Shaw, C., Green, M., Liang, J., and Sun, Y. Decoupled Simulation in Virtual Reality with the MR Toolkit. ACM Transactions on Information Systems, Vol. 11, No. 3 (July 1993), pp. 287-317.
[18] Singh, G., Serra, L., Png, W., Wong, A., and Ng, H. BrickNet: Sharing Object Behaviors on the Net. In Proceedings of the Virtual Reality Annual International Symposium (VRAIS '95), pp. 19-27, IEEE Computer Society Press, Los Alamitos, CA (1995).
[19] Trevor, J., Rodden, T., and Mariani, J. The use of adapters to support cooperative sharing. In Proceedings of the ACM CSCW'94 Conference on Computer Supported Cooperative Work, pp. 219-230, ACM, 1994.
[20] VRML - Virtual Reality Modeling Language. Specification 1.0. URL: http://www.hyperreal.com/~mpesce/vrml/vrml.tech/vrml10-3.html
[21] Wernecke, J. The Inventor Mentor. Addison-Wesley, Reading, Massachusetts, USA (1994).
[22] Zyda, M.J., Pratt, D.R., Falby, J.S., Lombardo, C., and Kelleher, K.M. The Software Required for the Computer Generation of Virtual Environments. Presence, 2, 2 (1994).