An Object Oriented Description of Interaction Techniques in Virtual Reality Environments

Pablo Figueroa, Mark Green, Benjamin Watson

Computer Graphics Lab, Department of Computing Science,
University of Alberta, Edmonton, Alberta, Canada, T6G 2H1
1 780 492 5198
[email protected], [email protected], [email protected]

Abstract

The success of a virtual reality (VR) application relies heavily on its interaction techniques (ITs). A developer has to decide, from an increasing number of options, which ITs are most suitable for a particular application domain. Despite the importance of ITs, most VR development environments offer little support to model, reuse, and compose them. In this paper we use known concepts such as virtual devices, common VR tasks, scene graphs, and object orientation to model device-independent interaction techniques in a new way. ITs become first-class objects throughout the entire development cycle, identifiable in the code and reusable in new VR applications. We developed an implementation of this model using a scripting language connected to a VR development environment, allowing runtime changes and supporting methodologies such as participatory design, rapid prototyping, and testbed-based evaluation.

Keywords

interaction techniques, 3D user interface, virtual reality, virtual environments, scripting programming language, framework, development environment, authoring system, interactive 3D graphics, user interface design, human computer interaction

1 Introduction

Virtual reality has been successfully used in applications where users are immersed in an environment with navigation and visualization capabilities. We believe the next frontier in this technology will be improving the level of interactivity between users and virtual objects, providing better work environments than traditional 2D desktops for manipulating

certain kinds of information. As we will describe, research in interaction techniques (ITs) suggests that the best IT for a particular case depends on the domain and context, and usually such a technique is created as a novel combination of already known ITs for simple tasks.

In recent years, VR development environments have been developed for several purposes. Most of them define callbacks as a basic but standard mechanism to connect user events with application functionality. However, this mechanism is usually inherited from a platform-dependent environment designed for desktop applications, where the set of user input devices is not as rich or extensible as in a VR application. Moreover, under this mechanism an IT is implemented as pieces of code spread over several callbacks, which makes ITs harder to identify, modify, and reuse in other applications. A more abstract mechanism is necessary to express ITs.

In this paper we model interaction techniques as a new identifiable entity in a virtual reality development environment. Our main objectives are to facilitate the implementation of new interaction techniques as variations or combinations of existing ones, and to create a standard library of interaction techniques extracted from the experience of the whole 3D UI community. We use object-oriented concepts because they have many advantages from the software engineering point of view, they can be seamlessly implemented in most current VR development environments, and they are widely known.

We implemented a VR development environment using MRObjects [12] as its core, together with a console that accepts commands in the scripting language Python [24]. Python statements can be used to add new ITs to the environment, run ITs, reconfigure an IT's devices or affected objects, or send messages to any object in the scene. This command-based interface is very useful during development for changing candidate ITs quickly, tuning an IT's characteristics, and performing controlled experiments. The ability to change ITs at runtime allows us to use design methodologies such as participatory design and rapid prototyping. The IT classes form a library that can be reused and

extended throughout several VR applications. The library and the console can be ported to other VR development environments, providing a standard way to manipulate ITs in VR applications.

This paper is organized as follows: Section 2 presents related work; Section 3 describes the context required to define interaction techniques; Section 4 defines the interaction technique model; Section 5 shows examples of implemented interaction techniques; Section 6 describes the current implementation on top of MRObjects; and Section 7 presents conclusions and future work.

2 Related Work

Current VR development environments offer basic tools to develop interaction techniques. Performer [19] offers no support for VR devices or ITs, so developers have to start from scratch. Alice [1], the CAVE library [2, pp. 20-22], SVE [29], MRToolkit [26], and DIVE [9] offer a set of signals or events that can be captured by user-defined callbacks, though it is not always clear how to add new VR devices or signals. Bamboo [30] offers physical modeling of devices. Avocado [2, pp. 17-19] and Lightning [3] model abstract input and output devices and use a dataflow architecture to propagate user events to objects in the scene. WorldToolKit [31] offers an abstract model for any device (WTsensor), with user-defined callbacks to update the application state every time a user event is received. MRObjects [12] models input devices of two basic types: triggers and six-degree-of-freedom (6DOF) values. Each device offers a callback registration service that can be used to add application functionality. It also provides a basic IT model that can be extended, and a specific way to select objects. Java3D [15] models input devices other than the mouse and keyboard as collections of Sensors that receive 6DOF positions or button events. It also has objects called Behaviors, whose purpose is to connect user input events with application code. Behaviors might be used to model interaction techniques, but their main intention is object- and application-specific behavior. Most of these environments require recompiling after changes, but Alice, Avocado, and Lightning provide scripting languages for making changes at runtime (Python, Scheme, and Tcl, respectively).

Several interaction techniques are emerging, each with advantages under certain conditions. For example, techniques that use spotlights [17], gain functions for movement [21], the image plane [20], and the aperture of a cone created by a tracker [11] compete as mechanisms for selection at a distance, while two trackers [27], two-pointer input [32], and proprioception [18] compete to support object manipulation. Several authors have created evaluation techniques for these new ITs, in particular for tasks such as traveling [6] and grabbing and object manipulation [5], [16], [22], [23]. Some of these studies have produced testbeds

where the user can change ITs to evaluate user performance. These are the first environments that allow changes of ITs at runtime, though it is not clear how new interaction techniques are added or how a developer can reuse the already tested ITs in new applications. Other important results from these studies are classifications of ITs, particularly for traveling [6] and for selection and manipulation [4], in which the common characteristics of a task are identified, known variations are classified, and new possibilities are discovered.

Finally, few attempts have been made to support ITs in VR applications. The X Window System defines event translations [25, p. 188] as a way to change the mapping between events and application functionality, supporting changes to the interface of a desktop-based application. In [28], Steed and Slater define a visual programming language to modify dataflows from sources to receptors. Our model defines IT objects as explicit representations of common dataflows, independent of the actual objects, which allows us to reuse ITs in other applications. The behavior editor presented by Halliday and Green in [14] uses a menu-based interface to change the behavior of an application. In our work we extend this initial idea by changing the menu-based interface to a command-based interface with a complete programming language, and by differentiating between application-specific behavior and common ITs. In [7], Broll describes a language to model application functionality in terms of interaction and device objects. This model has several similarities with the model proposed here, though we use a more standard language, we separate standard from application-dependent ITs (enabling reuse), and we have a more flexible device model.

3 VR Development Environment Framework

In this section we present the foundations of the IT framework, in terms of virtual devices and objects in the scene. Figure 1 shows the messages between physical devices (through standard callbacks), virtual devices (VD and SD), ITs, and objects in the scene (Obj). Different application interfaces result in different connections between devices, ITs, and objects. These changes are possible through the standard interface provided by virtual devices and ITs, and because an IT can send any kind of message to any object. The definition of an IT is compact, identifiable, and loosely coupled, and it can be reused in other applications. Note that most of the VR development environments described in Section 2 do not model ITs and devices. Because our framework represents input devices and ITs as objects, and because the interface of these objects (the set of messages an object can receive) can be standardized, we gain great flexibility and portability in development. Figure 2 shows the most important elements of the framework, described in the following two subsections.

Virtual Devices

Figure 1: Message passing between objects (physical devices Dev1 ... DevN feed virtual devices VD and simulated devices SD through callbacks; these feed the ITs, which send messages to the objects Obj in the scene)

A virtual device (VD in Figure 1) transforms the user's actions into application events. Using definitions from [10] (buttons and locators) and from [13] (triggers and values), we model any physical device as a collection of two basic types of logical device: triggers and values. A trigger indicates changes between two states (pressed and not pressed). A value describes a position in a six-dimensional space, usually comprising a 3D position and a 3D orientation. Any physical device can be described as a set of virtual devices. For example, a three-button mouse can be modeled as three triggers, one for each button, and a value for the mouse's position (where one of the coordinates is always 0 and the orientation is fixed). An interaction technique will be defined in terms of a collection of required virtual devices, in order to be independent of the actual physical device in use. A simulated virtual device (SD) is a virtual device constructed in terms of other VDs. It can create new events from the events of one or several VDs, which is useful for simulating devices that are not available. For example, it is possible to combine six triggers to create a value device with a fixed orientation (SimPos in Figure 2), where each pair of triggers modifies one of the three position dimensions; or to create a simulated trigger by dividing the domain of a value in two. The framework provides at runtime the list of available devices (the object called devices in Figure 2), which is useful for creating new simulated devices or reconfiguring ITs.
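To make this concrete, the following is a minimal Python sketch of the virtual device model, in the spirit of the framework's implementation language. All class and method names here (register, emit, the SimPos constructor) are our own illustration, not the actual library interface.

    class VirtualDevice:
        """Base class: interested parties register callbacks
        and receive the device's events."""
        def __init__(self):
            self.listeners = []

        def register(self, callback):
            self.listeners.append(callback)

        def emit(self, event):
            for cb in self.listeners:
                cb(event)

    class Trigger(VirtualDevice):
        """Two-state device (pressed / not pressed)."""
        def press(self):
            self.emit("pressed")

        def release(self):
            self.emit("not pressed")

    class Value(VirtualDevice):
        """6DOF device: 3D position plus 3D orientation."""
        def __init__(self):
            VirtualDevice.__init__(self)
            self.position = (0.0, 0.0, 0.0)
            self.orientation = (0.0, 0.0, 0.0)

    class SimPos(Value):
        """Simulated value built from six triggers: each pair of
        triggers moves the position along one axis; orientation
        stays fixed. (For brevity it moves on any trigger event;
        a fuller version would react only to 'pressed'.)"""
        def __init__(self, triggers, step=0.1):  # six Trigger objects
            Value.__init__(self)
            for axis in range(3):
                inc, dec = triggers[2 * axis], triggers[2 * axis + 1]
                inc.register(lambda e, a=axis, s=step: self._move(a, s))
                dec.register(lambda e, a=axis, s=-step: self._move(a, s))

        def _move(self, axis, delta):
            p = list(self.position)
            p[axis] += delta
            self.position = tuple(p)
            self.emit(self.position)  # propagate the new 6DOF value

An IT written against Trigger and Value then works unchanged whether its events come from a real tracker or from a simulated device such as SimPos.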

Objects in the Scene

Figure 2: VR Development Environment Framework (class hierarchy: VirtualDevice and AbsDevice, with Trigger and Value and the runtime list devices; SimulatedInputDevice, with SimPos and GoGoValue; SceneObject, with ShapeObject and the runtime list objects)

An object is an entity inside a virtual reality environment that is of interest to the user. It is defined in terms of the objects it contains (an object can be composed of other objects, constituting a hierarchy) and the behavior it offers. An IT uses these objects and their behavior to show reactions to the user's events. For example, an IT for selection can use an object as a pointer to the selected objects, and it can show the bounding box of an object once it is selected. The framework defines a class with all the functionality available to any object, and provides at runtime the list of available objects.
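A sketch of such an object, under the same illustrative naming as the device sketch above, might look as follows; the framework's actual class (ShapeObject in Figure 2) surely offers a richer interface.

    class SceneObject:
        """Illustrative scene object: a node in the containment
        hierarchy plus the behavior an IT can request of it.
        Method names are ours, not the framework's."""
        def __init__(self, name):
            self.name = name
            self.children = []   # an object may contain other objects

        def add(self, child):
            self.children.append(child)

        # Reactions an IT may request as feedback:
        def highlight(self):
            pass  # e.g. change color or draw the bounding box

        def unhighlight(self):
            pass

        def moveTo(self, position):
            pass  # update the object's transform in the scene graph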

4 Interaction Technique Representation

An interaction technique is defined as a dialogue between the user and the VR environment, which translates the user's actions, received from the virtual devices, into reactions of the objects in the scene. Each interaction technique has its own unique and compact representation, decoupled from the objects in the scene, the devices, and other elements of the VR application; therefore we can reuse or modify ITs more easily than with standard callback-based implementations. Figure 3 shows a diagram relating the IT model to the VR Development Environment Framework presented in Section 3.

Figure 3: Interaction Technique Model (the IT class hierarchy includes AbsSelection, SelectOneByPointing, SelectOneByTouching, Move, ...; AbsSelection holds selectedObjs, highlightedObjs, and affectedObjs and offers setHighlighted(newSet), select(), on(), and off(); ITs are related to VirtualDevice, SimulatedInputDevice, and SceneObject)

During development, a designer has a hierarchy of classes that represents the available interaction techniques. Abstract classes in the hierarchy capture semantics common to several ITs (such as AbsSelection in Figure 3), while concrete classes represent either generic simple tasks or domain-specific ITs (such as SelectOneByPointing). A domain-specific IT is created as a combination of simple tasks, devices, and objects, manipulating their states in novel ways. An interaction technique consists of:

- A set of affected objects. An object is affected by an IT if its reaction is required in the dialog. For example, a selection can require the objects to select from and a pointer to select them with.
- Input devices from which the IT will receive events. These devices can be VDs or SDs, and they can be changed to test other input configurations.
- Information gathered by the IT. In a selection, for example, this information is the set of selected objects.
- State of the dialogue. This is used for control purposes when an interaction technique is used as part of another. For example, a technique for moving an object can use an IT for selection at the beginning of the operation, deactivate it while the object is being moved, and activate it again when the operation is completed.
- Expected object behavior. This describes what type of reaction is expected from the objects in each step of the interaction. Again, we can change an object's reaction to a particular event in order to redefine the expected behavior.

The IT's methods can be classified into the following groups:

- Manipulation of affected objects: methods to add or remove related objects.

Figure 4: Two selection interaction techniques (class hierarchy for SelectOneByPointing and SelectOneByTouching)

- Information manipulation: different messages to access and change the state of the information gathered by the IT.
- Change of state: methods to change an IT from one state to another. A simple example is a pair of methods to activate (on) or deactivate (off) an IT.
- Change of the expected behavior: methods to change the expected reaction of the related objects throughout the IT lifecycle.

The framework defines at runtime a list of ITs that represents the available tasks in the environment. We can send messages to any IT in this list in order to modify its behavior, which makes this development environment well suited to development schemes such as participatory design and rapid prototyping for virtual reality applications. We can also add or delete objects in the list, adding or deleting functionality from the application. The sketch below illustrates how this state and these method groups might be combined in a base class.
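The following Python sketch gathers the five elements and four method groups above into one illustrative base class; the names and signatures are ours, not the paper's actual library.

    class InteractionTechnique:
        """Illustrative IT base class (names are ours)."""
        def __init__(self, devices=()):
            self.affectedObjs = []        # objects the dialog needs
            self.devices = list(devices)  # attached VDs or SDs
            self.info = {}                # gathered info (e.g. a selection)
            self.active = False           # state of the dialogue
            self.behavior = {}            # expected object reaction per step
            for d in self.devices:
                d.register(self.handleEvent)

        # Manipulation of affected objects
        def addObject(self, obj):
            self.affectedObjs.append(obj)

        def removeObject(self, obj):
            self.affectedObjs.remove(obj)

        # Change of state
        def on(self):
            self.active = True

        def off(self):
            self.active = False

        # Change of the expected behavior
        def changeObjBehavior(self, step, reaction):
            self.behavior[step] = reaction

        def handleEvent(self, event):
            if not self.active:
                return
            # Subclasses translate device events into object reactions here.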

5 Examples

We present two examples of ITs modeled using this framework: first, two simple interaction techniques for selection; second, the Go-Go selection technique [21]. In the first example, the objective is to model two different ways to select an object: by touching or by pointing. Figure 4 shows the class hierarchy. AbsSelection models the characteristics common to all selection interaction techniques. Any selection has a set of objects to select from, a set of highlighted objects (the candidates to be selected), and the current set of selected objects. Any selection IT can add (or remove) highlighted objects, can select the set of highlighted objects at a particular moment, or can unselect them. SelectOneByTouching allows us to select one object by touching it with a pointer. It requires an object to act as

the pointer, and two devices: a value to get the pointer position and a trigger to indicate the select/unselect operation. There are two additional methods to get events from the devices. The definition of SelectOneByPointing is similar to selection by touching, but with different behavior to detect how an object is highlighted when the pointer moves (a class between AbsSelection and these two subclasses captures further commonalities, but it is not important for this example). SelectOneByPointing and SelectOneByTouching have more methods to change the attached devices, to change the default behavior of objects when they are highlighted or selected, or to redefine the meaning of on and off; these are not shown in the figure for readability. The Go-Go technique is a selection technique that applies a gain function to the movement of the user's hand: the farther the hand is from the head, the greater the gain. This technique can be modeled in a very straightforward way from our first example: Go-Go is an instance of SelectOneByTouching where the hand device is a special simulated Go-Go Value that receives events from the user's head and hand movements and produces hand-movement events with gain.
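As a sketch of how such a simulated device might be written, the following GoGoValue builds on the illustrative Value class from Section 3 and applies the non-linear mapping described in [21]: one-to-one within a threshold distance D of the head, quadratically amplified beyond it. The constants and constructor signature are our assumptions.

    import math

    class GoGoValue(Value):
        """Simulated device implementing the Go-Go gain (mapping
        from [21]); constants and names are illustrative only."""
        def __init__(self, headValue, handValue, D=0.3, k=10.0):
            Value.__init__(self)
            self.head = (0.0, 0.0, 0.0)
            self.D, self.k = D, k
            headValue.register(self.onHead)
            handValue.register(self.onHand)

        def onHead(self, pos):
            self.head = pos

        def onHand(self, pos):
            # Real distance from head to hand.
            r = math.sqrt(sum((p - h) ** 2
                              for p, h in zip(pos, self.head)))
            if r < self.D:
                gain = 1.0                 # one-to-one within arm's reach
            else:
                gain = (r + self.k * (r - self.D) ** 2) / r  # amplified
            self.position = tuple(h + gain * (p - h)
                                  for p, h in zip(pos, self.head))
            self.emit(self.position)       # behaves like any other Value

    # Go-Go then reuses the touching IT unchanged; only the device
    # differs, e.g. (hypothetical constructor):
    #   SelectOneByTouching(pointer, GoGoValue(headDev, handDev), trigger)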

6 Implementation in MRObjects

The main objective of this implementation was to show an environment where ITs are fully configurable at runtime. We combined two environments to achieve this goal: a console that understands orders in the scripting language Python, and MRObjects, a VR development environment. Using the console we can define which ITs we want to use, create ITs, or send messages to them to change their behavior. MRObjects provides the basic functionality of a VR application: display of objects, reception of events from the user, and the basic behavior of the objects in the scene. We develop an application in two steps: first, we compile the basic functionality developed in MRObjects; second, we configure ITs using Python statements at runtime. The user interface to ITs is then command based, but it can be extended as required, because it is possible to define VR interaction techniques that manipulate interaction techniques. From MRObjects we export all devices and objects in the scene, as a list of devices and a list of objects. It is possible to send messages to devices or objects from the console. In this implementation, the objects are obtained from a standard VRML file with special names for objects in the hierarchy; the special names are required because we want to identify the objects' composition from the file. We chose Python [24] as the scripting language because of its implementation of object-oriented concepts and because it is available on several platforms. The library of classes that represents ITs is written completely in Python, so it is totally independent of MRObjects. ITs can be created and configured at runtime from the console, or they can be imported from a file.
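As an illustration of this workflow, a console session might look as follows; the names devices and objects come from the framework, but the SelectOneByTouching constructor arguments are hypothetical, since the paper does not spell out exact signatures.

    >>> print(devices)            # virtual devices exported from MRObjects
    >>> print(objects)            # scene objects loaded from the VRML file
    >>> objects[0].highlight()    # send a message directly to an object
    >>> it = SelectOneByTouching(pointer=objects[0],
    ...                          hand=devices[0], button=devices[1])
    >>> it.on()                   # activate the IT at runtime
    >>> it.off()                  # and deactivate it again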

Figure 5: SelectOneByTouching, with bounding box

We have implemented the required IT and device classes to represent most of the selection techniques in [4]. For example, Figures 5 and 6 show the change of selection IT and feedback at runtime: from SelectOneByTouching, which draws the bounding box of the selected object, to SelectOneByPointing, which shows a change of color in the object. The sentences given by the user in this example are:

    from start import *
    # Create the ITs and a simulated device from the keys; the capital
    # letters W ... M name the triggers associated with those keys.
    its = all(objects, pointer, W, X, A, D, F, V, T, Y, M)
    its[0].on()                 # start selection by touching
    its[0].off()                # ... later, turn it off
    # Configure selection by pointing to highlight with a color change:
    its[1].changeObjBehavior(ShapeObject.hL, ShapeObject.uhL,
                             ShapeObject.doNothing, ShapeObject.doNothing)
    its[1].on()                 # and turn it on

First we import the functions and definitions into this environment. The function all creates the ITs and a simulated device from the keys (the capital letters W ... M are the names of the triggers associated with those keys). We start the first IT by sending it the message on. After interacting for a while, we turn off selection by touching, configure the second IT (selection by pointing) to highlight objects with a different color, and finally turn this IT on.

Figure 6: SelectOneByPointing, with color change

7 Conclusions and Future Work

To allow reuse and easy modification of interaction techniques in virtual reality environments, we have modeled ITs in terms of object-oriented concepts, virtual devices, and objects in the scene. An IT is then defined as an object in a VR application that interacts with objects in the VR world or with other ITs. An IT can be modified, allowing several variations of the same interaction technique. We implemented a VR development environment where the basic functionality is compiled and the ITs are configured at runtime. This scheme enforces architectural patterns such as MVC or PAC [8] in a VR application. The VR development environment framework presented in Section 3 can be implemented over several VR toolkits and tools, in order to port the IT library to different environments. Future work will concentrate on the definition of the IT library from the experiences of the 3D UI community, the inclusion of 3D widgets and output devices in the model, the implementation of the IT framework over different VR development environments, and the addition of capabilities for user performance evaluation.

Acknowledgments

Thanks to my wife and Paul for their patience and many hours of proofreading, and to my supervisor for his generous funding.

References

[1] Alice: Easy Interactive 3D Graphics. Home page at http://www.alice.org
[2] Bierbaum, A.; Just, C. Software Tools for Application Development. In SIGGRAPH'98, Course 14. 1998. pp. 3-1 to 3-45.
[3] Blach, R.; Landauer, J.; Rösch, A.; Simon, A. A Highly Flexible Virtual Reality System. Future Generation Computer Systems, Special Issue on Virtual Environments. Elsevier, Amsterdam, 1998.
[4] Bowman, D.; Hodges, L. Formalizing the Design, Evaluation, and Application of Interaction Techniques for Immersive Virtual Environments. To appear in The Journal of Visual Languages and Computing, 1999.
[5] Bowman, D.; Hodges, L. An Evaluation of Techniques for Grabbing and Manipulating Remote Objects in Immersive Virtual Environments. In Symposium on Interactive 3D Graphics, 1997. pp. 35-38.
[6] Bowman, D.; Koller, D.; Hodges, L. Travel in Immersive Virtual Environments: An Evaluation of Viewpoint Motion Control Techniques. In Proceedings of the Virtual Reality Annual International Symposium, 1997. pp. 45-52.
[7] Broll, W. Interaction and Behavior Support for Multi-User Virtual Environments. In First Workshop on Simulation and Interaction in Virtual Environments, ACM SIVE'95. 1995.
[8] Buschmann, F.; et al. Pattern-Oriented Software Architecture: A System of Patterns. John Wiley & Sons Ltd. 1996.
[9] The DIVE Home Page, at http://www.sics.se/dive/dive.html
[10] Foley, J.; Wallace, V. The Art of Natural Man-Machine Conversation. In Proceedings of the IEEE, Vol. 62, #4. 1974.
[11] Forsberg, A.; Herndon, K.; Zeleznik, R. Aperture Based Selection for Immersive Virtual Environments. In UIST'96. 1996. pp. 95-96.
[12] Green, M. Introduction to MRObjects. Available at http://www.cs.ualberta.ca/graphics/documents.html. 1999.
[13] Green, M. MRObjects Map Files and Input Devices. Available at http://www.cs.ualberta.ca/graphics/MRObjects/documents/map.html
[14] Halliday, S.; Green, M. A Geometric Modeling and Animation System for Virtual Reality. In Proceedings of VRST'94. 1994.
[15] Java 3D API Specification, at http://java.sun.com/products/java-media/3D/forDevelopers/j3dguide/j3dTOC.doc.html
[16] Lampton, D.; Knerr, B.; Goldberg, S.; Bliss, J.; Moshell, J.; Blau, B. The Virtual Environment Performance Assessment Battery (VEPAB): Development and Evaluation. In Presence: Teleoperators and Virtual Environments, Vol. 3, #2. 1994. pp. 145-157.
[17] Liang, J.; Green, M. Geometric Modeling Using Six Degrees of Freedom Input Devices. In the Third International Conference on CAD and Computer Graphics. 1993. pp. 217-222.
[18] Mine, M.; Brooks, F. Moving Objects in Space: Exploiting Proprioception in Virtual Environment Interaction. In SIGGRAPH'97. 1997. pp. 19-26.
[19] SGI - IRIS Performer Home Page, at http://www.sgi.com/software/Performer
[20] Pierce, J.; et al. Image Plane Interaction Techniques in 3D Immersive Environments. In Symposium on Interactive 3D Graphics, 1997. pp. 39-43.
[21] Poupyrev, I.; Billinghurst, M.; Weghorst, S.; Ichikawa, T. The Go-Go Interaction Technique: Non-Linear Mapping for Direct Manipulation in VR. In Proceedings of UIST'96, pp. 79-80. 1996.
[22] Poupyrev, I.; Weghorst, S.; Billinghurst, M.; Ichikawa, T. A Framework and Testbed for Studying Manipulation Techniques for Immersive VR. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology. 1997. pp. 21-28.
[23] Poupyrev, I.; Weghorst, S.; Billinghurst, M.; Ichikawa, T. Egocentric Object Manipulation in Virtual Environments: Empirical Evaluation of Interaction Techniques. In Eurographics'98. 1998.
[24] Python Language Website, at http://www.python.org
[25] Quercia, V.; O'Reilly, T. X Window System User's Guide. O'Reilly & Associates, Inc. 1990, 3rd Edition.
[26] Shaw, C.; Green, M.; Liang, J.; Sun, Y. Decoupled Simulation in Virtual Reality with the MR Toolkit. In ACM Transactions on Information Systems, Vol. 13, #3. 1993. pp. 287-317.
[27] Shaw, C.; Green, M. Two-Handed Polygonal Surface Design. In Proceedings of UIST'94. 1994. pp. 205-212.
[28] Steed, A.; Slater, M. A Dataflow Representation for Defining Behaviours within Virtual Environments. In Proceedings of VRAIS'96. pp. 163-167.
[29] SVE Library Home Page, at http://www.cc.gatech.edu/gvu/virtual/SVE/
[30] Watsen, K.; Zyda, M. Bamboo - A Portable System for Dynamically Extensible, Real-time, Networked, Virtual Environments. In IEEE VRAIS'98. 1998.
[31] WorldToolKit, at http://www.sense8.com
[32] Zeleznik, R.; Forsberg, A. Two Pointer Input for 3D Interaction. In Symposium on Interactive 3D Graphics, 1997. pp. 115-120.
