A Combined Immersive and Desktop Authoring Tool for Virtual Environments

Roland Holm, Erwin Stauder, Roland Wagner
FAW, Johannes Kepler University of Linz
[rholm|estauder|rwagner]@faw.uni-linz.ac.at

Markus Priglinger, Jens Volkert
GUP, Johannes Kepler University of Linz
[mprigl|jvolkert]@gup.uni-linz.ac.at

Abstract

While frameworks and application programming interfaces for virtual reality are commonplace today, designing scenarios for virtual environments still remains a tedious and time-consuming task. We present a new authoring tool which combines scene assembly and visual programming in a desktop application with instant testing, tuning and planning in an immersive virtual environment. Two authors can work together – one with the desktop authoring application, and the other in the immersive VR simulation – to build a complete scenario.

1. Introduction

Designing virtual environments and assembling scenarios are challenging tasks. 2D WIMP (windows, icons, menus, pointer) interfaces can build on users' previous experience with other applications, through common interaction elements (widgets) and the mental model of the application that forms through repeated use; for 3D interfaces these sources of guidance are often unavailable due to the lack of standards. There is little knowledge about how virtual environments are designed, what issues need to be addressed, and little guidance about how the design should be carried out [1].

Our main goal is the development of a generic set of tools that give non-programmers the means to create virtual environments as efficiently as possible. Another objective of our research is to investigate how combined immersive and desktop authoring can accelerate and improve the assembly of virtual environments. Traditionally, virtual environments are assembled using only the desktop metaphor, where the 3D effect is limited. We provide a tool which should enable any user to build a virtual environment like our virtual refinery in Figure 1 on the desktop and test it simultaneously in an immersive simulation. In this way the user can assemble and tune the virtual environment very quickly.

1.1. Related Work

Coninx et al. [4] describe a framework for an immersive modeling system with a hybrid 2D/3D user interface, where the designer is immersed in the design space. The Lightning virtual reality system [5, 6] is a rapid prototyping tool for VR applications, particularly in the field of architecture and presentation. The Virtual Assembly Design Environment (VADE) [7] allows engineers to evaluate, analyze, and plan the assembly of mechanical systems; it focuses on an immersive virtual environment tightly coupled with commercial Computer Aided Design (CAD) systems. VR Juggler [8] is a development environment for virtual reality systems. Commercially available products such as EON Studio or Sense8's WorldToolKit and WorldUp support many tasks of the scenario design process, but are still too complicated for novice users. Furthermore, these applications do not support a combination of designing immersively and on the desktop. An approach for arranging objects in the 3D world can be found in [9]: Smith et al. describe a constraint-based 3D scene construction system that exploits human intuitions to restrict object placements and interactions. In the area of safety training there are several other systems [10, 11, 12].

Figure 1. A hazard in a virtual refinery

2. SAVE

The Safety Virtual Environment (SAVE) is a Virtual Reality system for safety training. It has been developed for the OMV AG Schwechat in order to educate and train workers in safety-critical tasks, hazard prevention, recovery and emergency handling. A more detailed description of SAVE can be found in [2] and [3].

SAVE comprises four major parts or modules. The Visual Simulation is the core of the system: here the simulation is computed, all user input is processed and the images are generated for the head mounted display (HMD). The Instructor Desk lets a human instructor supervise the simulation and control the training session. The Motion Platform enhances the immersive experience by providing motion patterns and automatic slope adjustments according to the virtual ground; the user stands on this platform and can feel vibrations near virtual engines, shaking ground or shocks from explosions. The Desktop Authoring Tool is used to construct new scenarios or manipulate existing ones. It supports all tasks necessary to build a scenario, including the virtual environment with a 3D editor, dependencies and actions in the event network through visual programming, the graphical user interface (GUI) of the instructor with a GUI builder, as well as all motion patterns, sounds and other special components.

Figure 2. Topology of SAVE (the visual simulation, instructor desk, SAVEbase server, sound server, motion platform, tracker, HMD and joystick, connected by simulation data, control events, tracker signals, user input, stereo images, 3D sound and a CAN bus)
SAVE uses structures derived from VRML97 [13] and enhances them by providing new kinds of nodes to interface special hardware (motion platform, 3D joystick, network connections etc.) and dynamically change the scene (manipulating the scene graph and the event network at runtime). Simulation and authoring tool are based on exchangeable components and plugins that provide means for adding new functionality. Figure 2 depicts the topology of the parts of SAVE needed to accomplish a training session. The authoring tool is only needed during development and testing of a scenario. Instructor and motion platform are optional parts. We will describe the simulation and the desktop authoring tool in more detail in the following sections, before we take a look at their collaborative utilization in the process of building a scenario.

3. Simulation

The entire simulation comprises the actual simulation and the visual simulation running on the simulation machine (SGI Onyx2 Reality), the sound server, the optional instructor application and the optional motion platform steering server. Each server is designed to run independently on a different computer, consuming and generating a minimum of network traffic in order to support good scalability. Earlier versions of the simulation have been ported to the CAVE, and we plan to use a standard PC running Linux as the simulation machine. In order to get maximum performance out of the Onyx2, services like sound rendering run on a PC. The instructor application is written in Java; all other modules of the simulation are written in C++.

The virtual environment is represented as a scenegraph structure based upon IRIS Performer [14]. SAVE defines virtual environments in extended VRML97 code; a parser within the simulation transforms this scene description into SAVE's scenegraph structure. The scene description comprises the entire information, including sound and motion platform related data.

In many cases objects within the virtual environment must be able to exchange information. A sender needs a facility to route information to a receiver, and objects may be both receiver and sender; therefore information exchange facilities, known as slots in VRML97, have to be established. The simulation supports communication networks of arbitrary complexity, and information interchange is processed in parallel. We provide a powerful tool (the dependency editor) to define communication channels. Since this editor is a visual programming tool, the scenario author need not write a single line of code.

A tracking system provides the position and orientation of the trainee's head and hand so that the graphics can be rendered spatially correct. One tracker sensor is mounted on the HMD to get the head's position and orientation, while the second sensor is integrated in a two-button joystick used for navigating and interacting in the virtual environment. Collision detection between the trainee and the objects in the virtual environment contributes to the impression of being part of the simulation.

The acoustic part of the simulation (sound rendering) is provided by a PC using a standard sound card. The sound server is programmed in C++ and exposes the functionality of Microsoft's DirectSound. Since sound rendering is computed by a PC, only changes of the position and orientation of the trainee and the sound sources have to be transmitted across the network.

The optional motion platform constitutes the haptic part of the simulation; a discussion of haptic feedback issues can be found in [15] and [16]. The motion platform consists of a dynamic part and a static hull. The dynamic part comprises a metal disk on which the user stands, three servo motors, one shaker motor intended for vibrations and a robust frame construction. The servo motors are arranged in an equilateral triangle and the shaker is mounted at the center of the disk. This arrangement allows raising and pitching the disk; rotation is not possible. The static part of the motion platform comprises a solid construction surrounding the user for his/her own safety. It also integrates the tracker emitter and the power electronics. The motion platform steering server controls the motion platform via a CAN (Controller Area Network) [17] fieldbus system, which establishes network communication with the power electronics that drive the servo motors and the shaker.

We designed the motion platform to meet the needs of safety training. SAVEace provides an integrated motion pattern editor: every motion pattern is visualized on screen and may be tested on the platform instantly. The motion editor only allows motion patterns which the motion platform can perform; invalid input will be corrected or discarded. The platform server can also be configured to drive the platform autonomously: every time the trainee changes his/her position the platform reacts to that kind of movement. Climbing a ladder or using an elevator automatically raises and pitches the platform in a suitable manner; no motion pattern needs to be defined explicitly.
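To illustrate the rule that invalid input is corrected or discarded, the following is a minimal Java sketch of how a motion pattern keyframe could be clamped to the platform's mechanical limits. The Keyframe class, the field names and the limit values are purely illustrative assumptions; they do not reflect the actual SAVE or SAVEace implementation.

```java
// Hypothetical sketch: clamp a motion pattern keyframe to what the platform can perform.
// Limits and field names are illustrative; they do not reflect the real SAVE code.
public class MotionPatternValidator {

    // Assumed mechanical limits of the three-servo platform (illustrative values).
    private static final double MAX_PITCH_DEG = 12.0;   // tilting only, no rotation around the vertical axis
    private static final double MAX_HEAVE_MM  = 80.0;   // vertical travel of the disk
    private static final double MAX_SHAKE     = 1.0;    // normalized shaker amplitude

    /** One keyframe of a motion pattern: pitch, roll, heave and shaker amplitude. */
    public static class Keyframe {
        double pitchDeg, rollDeg, heaveMm, shake;
        Keyframe(double pitchDeg, double rollDeg, double heaveMm, double shake) {
            this.pitchDeg = pitchDeg; this.rollDeg = rollDeg;
            this.heaveMm = heaveMm;   this.shake = shake;
        }
    }

    /** Correct a keyframe so the platform can actually perform it. */
    public static Keyframe corrected(Keyframe k) {
        return new Keyframe(
            clamp(k.pitchDeg, -MAX_PITCH_DEG, MAX_PITCH_DEG),
            clamp(k.rollDeg,  -MAX_PITCH_DEG, MAX_PITCH_DEG),
            clamp(k.heaveMm,  0.0,            MAX_HEAVE_MM),
            clamp(k.shake,    0.0,            MAX_SHAKE));
    }

    private static double clamp(double v, double lo, double hi) {
        return Math.max(lo, Math.min(hi, v));
    }

    public static void main(String[] args) {
        Keyframe k = corrected(new Keyframe(30.0, -5.0, 120.0, 2.5));
        System.out.printf("pitch=%.1f roll=%.1f heave=%.1f shake=%.1f%n",
                k.pitchDeg, k.rollDeg, k.heaveMm, k.shake);
    }
}
```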

4. Desktop Authoring Application

Our desktop authoring application SAVEace (SAVE Assembly and Construction Environment) enables authors to build a complete scenario for SAVE without scripting or programming. Scenarios are assembled from a collection of prebuilt components. Graphical objects and objects with a spatial dependency (such as sounds, motion patterns for the motion platform, invisible event trigger volumes etc.) are assembled in the 3D editor. Every such object can be accessed in the dependency editor, where an event network can be defined by connecting the components together with logical elements (e.g. logical AND, logical OR, timer components, delay components, etc.). Finally, there is a GUI editor which allows building a graphical user interface for the instructor.

The user can switch between the editors seamlessly. For example, he/she can add a petrol tank to the scenario in the 3D editor, switch to the GUI editor and build GUI elements for it, resume working in the 3D editor to add an explosion sound and an explosion motion pattern, then switch to the dependency editor and connect the tank with the two new objects such that the tank explodes if its valves are opened and closed in the wrong order. The application is entirely written in Java and Java3D.
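To make the petrol tank example concrete, the sketch below shows one way such an event network could look in code. It is a hypothetical Java illustration of the wiring an author clicks together in the dependency editor; the Component class, the slot names and the route()/emit() methods are assumptions, not SAVEace's actual API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch of the event network built in the dependency editor.
// Slot and component names are illustrative, not the real SAVEace interface.
public class TankScenarioWiring {

    /** A component with named boolean input slots and routable output slots. */
    static class Component {
        final String name;
        final Map<String, Consumer<Boolean>> inSlots = new HashMap<>();
        final Map<String, List<Consumer<Boolean>>> routes = new HashMap<>();
        Component(String name) { this.name = name; }

        void addInSlot(String slot, Consumer<Boolean> handler) { inSlots.put(slot, handler); }

        /** Route an output slot of this component to an input slot of another one. */
        void route(String outSlot, Component target, String inSlot) {
            routes.computeIfAbsent(outSlot, s -> new ArrayList<>())
                  .add(target.inSlots.get(inSlot));
        }

        /** Fire an event on one of this component's output slots. */
        void emit(String outSlot, boolean value) {
            routes.getOrDefault(outSlot, new ArrayList<>()).forEach(in -> in.accept(value));
        }
    }

    public static void main(String[] args) {
        Component tank      = new Component("petrol tank");
        Component boomSound = new Component("explosion sound");
        Component boomShake = new Component("explosion motion pattern");

        boomSound.addInSlot("play",  v -> { if (v) System.out.println("BOOM (sound)"); });
        boomShake.addInSlot("start", v -> { if (v) System.out.println("BOOM (platform shake)"); });

        // A wrong valve sequence detected by the tank is routed to sound and motion pattern.
        tank.route("wrong_valve_sequence", boomSound, "play");
        tank.route("wrong_valve_sequence", boomShake, "start");

        tank.emit("wrong_valve_sequence", true);   // trainee operated the valves incorrectly
    }
}
```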

4.1. Architecture of SAVEace

Figure 3. Architecture of the authoring tool (tool plugins; the 3D editor, dependency editor and instructor GUI editor operating on the scenario data; the component repository and external data feeding the editors; and the simulation proxy, scene optimizer and scenario exporter connecting to the simulation and the instructor control panel)

Figure 3 depicts the architecture of SAVEace. The central element is the scenario data, which is manipulated by the three main editors: the 3D editor, the dependency editor using visual programming, and the instructor GUI editor. Prebuilt components are fetched by the editors from the component repository. New components can be added to the repository, or a new type of component can be created by defining a set of components in the scenario as a compound object and editing its interface slots. Arbitrary geometries can be imported into the scenario as long as they can either be loaded directly by SAVEace or converted into a loadable format. At the moment SAVEace and the simulation are tailored to load VRML97 and Inventor files. The editors use a set of tools or plugins which are identified and integrated at startup time. Tools can specify the area to which they apply: 3D editing, dependency editing, GUI editing, editing of a specific component, or a combination of the above.

During authoring the simulation proxy can act as a bridge between SAVEace and the simulation. It filters and forwards changes to components in the scenario data of SAVEace to the simulation, and receives simulation data such as the tracking data of the simulation's user and his/her actions in the simulation. The simulation proxy and its counterpart on the simulation side (the editor proxy) update their respective scenes in order to keep the desktop authoring tool and the simulation synchronized.

The final export of the scenario, after testing and tuning are finished, is accomplished by optimizing the scene graph structure (e.g. replacing multiple instances of the same node or texture with references to a single instance, reorganizing the spatial structure of the scene graph) and then sending it to the simulation (usually via ftp). The instructor's user interface is exported by serializing the GUI elements for later use in the instructor runtime application.
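The optimizer step that replaces multiple instances of the same node or texture with references to a single instance can be sketched roughly as follows. This is a simplified, hypothetical Java illustration: the Node structure and the content key are assumptions and do not correspond to SAVE's real Performer-based scene graph.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the export-time optimization that replaces duplicate
// nodes (e.g. identical textures or geometry) with references to one shared instance.
public class SceneDeduplicator {

    static class Node {
        final String contentKey;                 // e.g. a hash of geometry or texture data
        final List<Node> children = new ArrayList<>();
        Node(String contentKey) { this.contentKey = contentKey; }
    }

    private final Map<String, Node> shared = new HashMap<>();

    /** Rewrite the subtree so that nodes with equal content share a single instance. */
    Node deduplicate(Node node) {
        Node canonical = shared.get(node.contentKey);
        if (canonical == null) {
            canonical = node;
            shared.put(node.contentKey, canonical);
            // Only process children the first time this content is seen.
            for (int i = 0; i < canonical.children.size(); i++) {
                canonical.children.set(i, deduplicate(canonical.children.get(i)));
            }
        }
        return canonical;
    }

    public static void main(String[] args) {
        Node root = new Node("group");
        root.children.add(new Node("pump-geometry"));
        root.children.add(new Node("pump-geometry"));   // identical model used twice
        root = new SceneDeduplicator().deduplicate(root);
        System.out.println("same instance: " + (root.children.get(0) == root.children.get(1)));
    }
}
```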

4.2. Compound Components

An important feature of SAVEace is the ability to construct new components from existing ones and reuse them in other scenarios. Any set of components, including visual components (edited primarily in the 3D editor) and logical components (edited in the dependency editor), can be combined into a compound component. Such a compound also comprises the event network, a representation of the compound for the instructor (through a set of automatically generated or customized GUI elements) and the components' initial state and parameters as defined by the author. Compounds may be nested in other compounds. A compound hides its contained components and their slot interfaces, usually exposing only a subset of their slots to the outside. Contained components cannot have direct connections in the event network to components outside of the compound. Events and state changes can only be delivered through the compound's proxy slots, which act as surrogates for the slots of internal components.

Figure 4. A compound component exposing the proxy slots set_state_proxy, out_changed_proxy, clickable_proxy and play_changed_proxy

Every compound can be permanently stored in the repository for reuse in the current or any future scenario. A compound can be edited and parameterized, and it can be exploded to expose its internal components again. The compound mechanism is similar to VRML97's prototype concept, yet it also comprises GUI elements and editor-specific data (e.g. snapping data, icons) and offers the ability to edit internals for a specific use. We actually export the simulation parts of a compound to the simulation using VRML97's PROTO node. Compounds may be as complex as desired. One could build a distillation column for a virtual refinery scenario with a myriad of components and GUI elements for the instructor. In order to reuse it, a compound may be generated from these components with only the necessary slots exposed (e.g. it makes no sense to expose all valves).
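A minimal Java sketch of the proxy slot idea is given below. The Compound and Component classes and the slot names are illustrative assumptions; they only demonstrate how events could be forwarded through proxy slots to hidden internal components.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch of a compound component: internal components are hidden and
// only explicitly exposed proxy slots forward events to them. Names are illustrative.
public class CompoundSketch {

    /** A plain component with named input slots. */
    static class Component {
        final Map<String, Consumer<Object>> inSlots = new HashMap<>();
        void addInSlot(String name, Consumer<Object> handler) { inSlots.put(name, handler); }
    }

    /** A compound hiding its parts; proxy slots act as surrogates for internal slots. */
    static class Compound {
        private final Map<String, Component> internals = new HashMap<>();         // hidden
        private final Map<String, Consumer<Object>> proxySlots = new HashMap<>(); // exposed

        void addInternal(String name, Component c) { internals.put(name, c); }

        /** Expose a single internal slot to the outside under a proxy name. */
        void exposeInSlot(String proxyName, String internalComponent, String internalSlot) {
            Component target = internals.get(internalComponent);
            proxySlots.put(proxyName, target.inSlots.get(internalSlot));
        }

        /** The only way events can reach the contained components. */
        void deliver(String proxyName, Object value) {
            Consumer<Object> slot = proxySlots.get(proxyName);
            if (slot != null) slot.accept(value);
        }
    }

    public static void main(String[] args) {
        Component valve = new Component();
        valve.addInSlot("set_state", v -> System.out.println("valve state -> " + v));

        Compound column = new Compound();                 // e.g. a distillation column
        column.addInternal("mainValve", valve);
        column.exposeInSlot("set_state_proxy", "mainValve", "set_state");

        column.deliver("set_state_proxy", Boolean.TRUE);  // routed through the proxy slot
    }
}
```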

5. The 3D Editor

Figure 5. The 3D editor

The 3D editor is specialized in rapidly placing and transforming objects in the scenario. It cannot be used to model new objects: today's modeling tools have reached such a high level of sophistication that we would not be able to compete with them. In any case, we consider modeling and building scenarios to be separate tasks. Scenarios rely heavily on reusing objects and comprise behavior and dependencies bound to these objects. Therefore it is rarely possible to use standard modeling applications, because the objects are not mere graphical models but components with functionality. Take a refinery (or any other industrial complex) as an example: it is built from a great number of objects (pipes, pumps, gauges, valves, metal structures, tanks, etc.) which are of only a few different types. Although a scenario might demand four pumps, we can use a single pump model four times and connect the pumps with pipes made up of only three or four different pipe segment types – just variations of length and diameter.

Having built scenarios for SAVE without the help of a dedicated editor like SAVEace, we can only conclude that most of our time was spent rearranging objects in the scenario, replacing objects, reconnecting objects with pipes, checking the correctness of the event network and managing the various objects altogether. SAVEace should ease this task enormously. SAVEace and all its editors were designed for the regular computer user – not for professionals in 3D graphics or programmers. Those who want to conduct training sessions with SAVE should be able to build scenarios according to their own ideas.

5.1. Arrangement Tools

After dragging an object from the component gallery list and dropping it above the 3D editor, the author can move (translate) the object through the scenario with the help of several tools. An intersection-move tool places the object at the point of intersection with the existing object under the mouse pointer. This tool has turned out to be a very powerful method of rapidly arranging objects in the 3D world.

A second feature is the automatic snapping of objects. Every object has a snap source and a snap target, which can be of different geometric types. Snapping can only occur between compatible objects. Every object has a snap class which is part of a snap class hierarchy tree. Classes are user-defined and can be assigned and reassigned at any time without restrictions. The hierarchy tree resembles a multiple inheritance tree of a C++ class hierarchy, where only classes of the same branch are compatible and the distance within that branch defines the degree of compatibility. Moving an object in the vicinity of other objects invokes the 3D editor's snap manager to search for the most appealing snap partner for this object. Appeal is a function of distance and compatibility.

Consider inserting a vase in a scene with a table, a cupboard and a bed (see Figure 6). The table might have a plane as a target on top of itself, and the vase has a point/up-vector combination as a source at its bottom. The table is of snap class "table", which is a descendant of "storage surface", which is a descendant of "horizontal plane". The cupboard is of class "board", which is a descendant of "horizontal plane", and the bed is of type "bed", which is a root class. The vase demands a "storage surface" as target. Bringing the vase into the vicinity of the three objects invokes the snap manager to indicate a snap connection with the table, because it has the highest appeal – even though it might be farther away than the cupboard. Releasing the mouse button would invoke the snap operation, and the snap manager would try to match the snap source (the point/up-vector combination) with the snap target so that the point lies exactly in the plane and the up-vector matches the plane's normal vector.

Figure 6. Snapping

Snap targets and sources can be visualized and edited in the 3D editor with the same tools used to manipulate the objects themselves.
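The following hypothetical Java sketch illustrates how a snap manager could pick the "most appealing" partner in the vase example. The appeal formula, the weights and the encoding of the snap class hierarchy are assumptions chosen for illustration; the real SAVEace heuristics may differ.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of the snap manager's choice of the most appealing partner.
// The appeal formula and the class hierarchy encoding are illustrative assumptions.
public class SnapManagerSketch {

    /** A candidate snap target: its distance and its snap class branch (class, then ancestors). */
    static class Candidate {
        final String name; final double distance; final List<String> branch;
        Candidate(String name, double distance, String... branch) {
            this.name = name; this.distance = distance; this.branch = List.of(branch);
        }
    }

    /** Compatibility means the required class appears somewhere in the target's branch. */
    static boolean compatible(Candidate c, String requiredClass) {
        return c.branch.contains(requiredClass);
    }

    /** Appeal grows with compatibility and shrinks with distance (illustrative weights). */
    static double appeal(Candidate c, String requiredClass) {
        if (!compatible(c, requiredClass)) return Double.NEGATIVE_INFINITY;
        int branchDistance = c.branch.indexOf(requiredClass);   // 0 = exact class match
        return 1.0 / (1.0 + c.distance) - 0.1 * branchDistance;
    }

    public static void main(String[] args) {
        String required = "storage surface";                    // the vase's demanded target class
        List<Candidate> nearby = List.of(
            new Candidate("table",    1.5, "table", "storage surface", "horizontal plane"),
            new Candidate("cupboard", 0.8, "board", "horizontal plane"),
            new Candidate("bed",      1.0, "bed"));

        Candidate best = nearby.stream()
            .max(Comparator.comparingDouble(c -> appeal(c, required)))
            .orElseThrow();
        System.out.println("snap to: " + best.name);            // table wins despite being farther away
    }
}
```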

5.2. General and Component-Specific Tools

Besides these tools, the editor offers the usual transformation capabilities known from OpenInventor [18] (a bounding box with manipulation knobs) or editors like CosmoWorlds [19]. Translations can be restricted to one axis or a plane, rotations are performed around any of the three main axes, and scaling can also be restricted to one axis. Some component-specific tools use the 3D editor as well: creating a keyframe animation, for example, requires manipulating the involved objects inside the scenario. For the time-consuming task of laying out pipes, SAVEace offers a piping tool which lets the user attach new pipes seamlessly at the end of existing ones with one mouse click.
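As a rough illustration of the one-click pipe attachment, the sketch below computes where a new segment would start and end when appended to the open end of an existing pipe. The vector handling and method names are simplifications and not the actual SAVEace piping tool.

```java
// Hypothetical sketch of the piping tool's one-click placement: a new segment is
// positioned so that it starts exactly where the selected pipe ends and continues
// in the same direction. Simplified vector math; not the actual SAVEace code.
public class PipeAttachSketch {

    static double[] add(double[] a, double[] b) {
        return new double[]{a[0] + b[0], a[1] + b[1], a[2] + b[2]};
    }

    static double[] scale(double[] v, double s) {
        return new double[]{v[0] * s, v[1] * s, v[2] * s};
    }

    /** Compute the axis (start and end point) of a new segment attached to an existing one. */
    static double[][] attach(double[] pipeEnd, double[] pipeDirection, double newLength) {
        double[] start = pipeEnd;                               // snap to the open end
        double[] end = add(start, scale(pipeDirection, newLength));
        return new double[][]{start, end};
    }

    public static void main(String[] args) {
        double[][] segment = attach(new double[]{4, 1, 0}, new double[]{1, 0, 0}, 2.0);
        System.out.printf("new pipe from (%.0f,%.0f,%.0f) to (%.0f,%.0f,%.0f)%n",
            segment[0][0], segment[0][1], segment[0][2],
            segment[1][0], segment[1][1], segment[1][2]);
    }
}
```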

Editing happens in a What-You-See-Is-What-You-Get (WYSIWYG) fashion in a perspective view. Additional parallel editable views can be opened, but the view manipulation capabilities are sufficient to work with just a single view. In order to keep interactive frame rates, the editor can use lower-detail versions of the models used in the simulation, and existing objects can be hidden. Objects which comprise functionality (components) instantly appear in the dependency editor and the instructor GUI editor when inserted into the scenario.

6. Visual Programming

Components in SAVE have input slots, output slots, and parameters which describe the object in more detail. By routing events from the output slots of one component to the input slots of another, customized functionality and dependencies can be realized – e.g. when a switch is turned on, a lamp lights up. The dependency editor is a visual tool that allows the user to create and edit this event network.
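A minimal Java sketch of such slot-based routing is shown below, using the switch-and-lamp example. The OutSlot/InSlot classes and the route() helper are hypothetical; they merely illustrate the idea that routes connect output slots to type-compatible input slots.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of typed slots and event routing (switch -> lamp). The class and
// method names are illustrative; they are not the actual SAVE component interface.
public class SlotRoutingSketch {

    static class OutSlot<T> {
        final Class<T> type;
        final List<Consumer<T>> targets = new ArrayList<>();
        OutSlot(Class<T> type) { this.type = type; }
        void emit(T value) { targets.forEach(t -> t.accept(value)); }
    }

    static class InSlot<T> {
        final Class<T> type;
        final Consumer<T> handler;
        InSlot(Class<T> type, Consumer<T> handler) { this.type = type; this.handler = handler; }
    }

    /** Routes may only go from an out-slot to an in-slot of the same type. */
    static <T> void route(OutSlot<T> from, InSlot<T> to) {
        // The generic signature already enforces this at compile time; the runtime check
        // mirrors the validation the editor performs when the user draws a route.
        if (!from.type.equals(to.type)) {
            throw new IllegalArgumentException("slot types do not match");
        }
        from.targets.add(to.handler);
    }

    public static void main(String[] args) {
        OutSlot<Boolean> switchOut = new OutSlot<>(Boolean.class);   // output slot of a switch
        InSlot<Boolean> lampOn = new InSlot<>(Boolean.class,
            on -> System.out.println(on ? "lamp lights up" : "lamp goes dark"));

        route(switchOut, lampOn);    // the dependency the author draws in the editor
        switchOut.emit(true);        // the switch is turned on
    }
}
```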

A tool like this has to offer the ability to create and edit compound components. Components contained in a compound are hidden from the user, but he/she can modify the interface by removing or adding proxy slots of the compound (see 4.2). The editor allows adding or removing components to/from a compound. Furthermore, the author has the option to change the internal event network of the compound.

Figure 7. The dependency editor

To support the construction of logical dependencies the dependency editor provides a set of logical components (e.g. logical AND, logical OR, timer components, delay components, etc.). Thus the user can visually build every dynamic scene he/she would have had to program in the past. The visual routing task implies two constraints: out-slots of a component can only be routed to in-slots, and slots of different types cannot be connected.

7. Instructor GUI Editor

SAVE enables an instructor to interfere with the simulation, which makes each training session unique. Modern flight simulators allow the instructor to cause fog or fire on demand; SAVE offers similar functionality by generating an instructor application. The instructor GUI editor enables the scenario author to generate a tailor-made instructor application. Objects with states which were added to the scenario are offered by the GUI editor for further use. Consider a scenario comprising some gauges: the scenario author has the opportunity to use all or only a subset of the gauges in the GUI editor. When an object is selected to be part of the instructor application, it appears in a suitable representation within the GUI editor. If the suggested representation does not seem suitable, the scenario author has a broad range of options to change the appearance of an object. He/She will only pick objects that the instructor needs to change or observe. After arranging the representations for the objects of interest, the instructor GUI can be generated.

Figure 8. A finished instructor GUI

During a training session the instructor may decide to turn on sounds, to cause the motion platform to move or to start a special effect like fog or fire. Consider some valves that have been added to the scenario. The instructor may have opened some of the valves via the instructor application; the trainee sees the result in the virtual environment. Whatever the trainee's reaction might be, the instructor observes the result in the instructor application. He/She can also trace the trainee's position within the virtual environment. The instructor application allows the instructor to write a protocol of the session and to store session-relevant data.
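The way the GUI editor could suggest a default representation for each selected, stateful object can be pictured as a simple mapping from state type to widget kind. The following Java sketch is a hypothetical illustration; the state types and widget names are assumptions, not the actual SAVEace GUI editor.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of how the GUI editor could suggest a default instructor widget
// for each stateful scenario object. Widget kinds and state types are illustrative.
public class InstructorGuiGenerator {

    enum StateType { BOOLEAN, RANGE, POSITION }

    static class ScenarioObject {
        final String name; final StateType state;
        ScenarioObject(String name, StateType state) { this.name = name; this.state = state; }
    }

    /** Suggest a suitable default representation; the author may still customize it. */
    static String suggestWidget(ScenarioObject o) {
        switch (o.state) {
            case BOOLEAN:  return "toggle button";   // e.g. a valve: open/closed
            case RANGE:    return "slider";          // e.g. a gauge value
            case POSITION: return "map marker";      // e.g. the trainee's position
            default:       return "label";
        }
    }

    public static void main(String[] args) {
        List<ScenarioObject> selected = Arrays.asList(
            new ScenarioObject("inlet valve",    StateType.BOOLEAN),
            new ScenarioObject("pressure gauge", StateType.RANGE));
        for (ScenarioObject o : selected) {
            System.out.println(o.name + " -> " + suggestWidget(o));
        }
    }
}
```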

8. Immersive Editing

Being able to build a scenario in a WYSIWYG fashion might not be sufficient for creating the desired experience in the scenario. After all, WYSIWYG is not What-You-See-Is-What-You-Will-Experience. There is a lot more to a virtual environment than the visual impression: e.g. interaction, force feedback (motion), sound. Furthermore, a scene looks different when viewed inside an HMD. For these reasons – and in order to support testing and tuning a scenario – we have added the capability to instantly experience the "scenario-in-construction" in the simulator. Every component added to the scenario in the editor causes a generic simulation in the simulator to add this component to itself in response. These two components are coupled in such a way that every change on one side results in the corresponding change on the other side. The user on the simulation side can experience the scenario as if it had already been finished. He/She can interact with the components. Because all dependencies and behaviors defined in the dependency editor are forwarded to the simulation as well, the event network can instantly be tested inside the simulation.

8.1. A Greek God Metaphor

To build a bridge between the two metaphors of interaction ("immersive VR" and "desktop point and click") and to moderate the collaboration between the two users, we have created a Greek god metaphor. The simulation-side user who is immersed in the virtual environment is the hero, whereas the editor-side user is the Olympian god. An avatar represents the hero in the 3D editor. All of the hero's movements and actions are synchronously performed by the avatar. The god can pick up the avatar and drop the hero at any position in the scenario. The hero experiences the god in the form of a gigantic hand which moves according to the mouse pointer of the editor. Every time the god clicks, the hand reaches down to grab or touch the object under the mouse pointer. Even floor control is determined by this metaphor: the god has all the power, so his/her actions always overrule the actions of the hero.

Figure 9. Views of the same scene by the desktop user (top) and the immersive user

8.2. Collaborative Scenario Design

Of course the hero will tell the god whether the world he/she built works according to plan (e.g. whether dependencies and behaviors work correctly), so that the god can adjust parameters of components, rearrange the objects or add props to the scene in order to meet the hero's requirements. Another duty of the hero is testing the ergonomics of the scenario: Is everything within reach? Are there any objects blocking the user or complicating navigation in the virtual world? Is the geometric complexity too high at one point, so that frame rates drop below an acceptable level? Is it even possible to accomplish the planned training task in reasonable time? Does the scene look realistic? What can be done to improve the aesthetics of the scene? Along with this general testing, the hero will try out individual components: new motion patterns, sounds (too loud, too faint?), animations etc. But the hero is not restricted to being a spectator in the virtual world; he/she can actively modify components, i.e. translate and rotate them. And because the simulation-side user is a hero, he/she has infinite strength and can even lift houses!

The system can be used by a single user, too. In this case he/she can alternately work with the immersive editor and the desktop editor. Every change performed with one editor will appear in the other as well. The desktop application can be used for tasks which are easier and faster in an ordinary WIMP application (like file operations and managing the objects in the repository) and to define dependencies and build the instructor GUI.
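As a rough illustration of the editor/simulation coupling described above, the following Java sketch mirrors a position change made on one side to the other while avoiding an endless echo. Class and method names are assumptions; the real system propagates such changes over the network via the simulation and editor proxies.

```java
// Hypothetical sketch of the two-way coupling between a component in the desktop editor
// and its twin in the running simulation: every change on one side is mirrored on the
// other. Field and class names are illustrative, not the actual proxy protocol.
public class CoupledComponentSketch {

    static class ComponentState {
        private double x, y, z;                 // position of the object in the scenario
        private ComponentState twin;            // counterpart on the other side

        void coupleWith(ComponentState other) { this.twin = other; other.twin = this; }

        void moveTo(double x, double y, double z) {
            if (this.x == x && this.y == y && this.z == z) return;  // break the echo loop
            this.x = x; this.y = y; this.z = z;
            if (twin != null) twin.moveTo(x, y, z);                 // mirror the change
        }

        @Override public String toString() { return "(" + x + ", " + y + ", " + z + ")"; }
    }

    public static void main(String[] args) {
        ComponentState editorSide = new ComponentState();      // manipulated by the "god"
        ComponentState simulationSide = new ComponentState();  // experienced by the "hero"
        editorSide.coupleWith(simulationSide);

        editorSide.moveTo(4.0, 0.0, 2.5);          // the desktop author drags the object
        simulationSide.moveTo(4.0, 1.2, 2.5);      // the hero lifts it in the virtual world

        System.out.println("editor sees " + editorSide + ", simulation sees " + simulationSide);
    }
}
```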


9. Limitations and Future Work


Scenarios authored entirely on the desktop may be larger than collaboratively authored scenarios, because they can be optimized before use in the simulator, thereby reducing resource consumption. Collaboratively designing a scenario is just an option: there will not always be a reliable second author available, and an entire building session at once might be too stressful for the immersive worker wearing the HMD.

The possibilities of SAVE and SAVEace have not yet been exhausted. The simulation-side user should be equipped with the same capabilities as the desktop user; dependencies could then be defined in the virtual environment, too. The hero could hold a palette of components in the left hand like a painter and create new components by dragging them from this palette into the virtual world.

Acknowledgements

The authors would like to thank Anton Dunzendorfer, the project manager of SAVE, and the project team of OMV AG Schwechat for their support: Wolfgang Ginzel, Franz Grion, Alois Mochar and Günther Schwarz.


References

[1] K. Kaur, "Designing Virtual Environments for Usability". In S. Howard, J. Hammond, and G. Lindgaard, editors, Human-Computer Interaction: INTERACT'97, Chapman and Hall, 1997, pp. 636-639.
[2] M. Haller, R. Holm, J. Volkert, and R. Wagner, "A VR based safety training in a petroleum refinery", 20th Annual Conference of the European Association for Computer Graphics (Eurographics '99), Milano (Italy), September 1999.
[3] M. Haller, R. Holm, M. Priglinger, J. Volkert, and R. Wagner, "Components for a virtual environment", Workshop on Guiding Users through Interactive Experiences: Usability Centred Design and Evaluation of Virtual 3D Environments, Paderborn (Germany), April 2000.
[4] K. Coninx, F. Van Reeth, and E. Flerackers, "A Hybrid 2D/3D User Interface for Immersive Object Modeling", Computer Graphics International, Hasselt and Diepenbeek, Belgium, June 1997, pp. 47-54.
[5] J. Landauer, R. Blach, M. Bues, A. Roesch, and A. Simon, "Toward Next Generation Virtual Reality Systems", Proc. IEEE International Conference on Multimedia Computing and Systems, Ottawa, 1997.
[6] R. Blach, J. Landauer, A. Roesch, and A. Simon, "A Flexible Prototyping Tool for 3D Realtime User-Interaction", Virtual Environments: Conference and 4th Eurographics Workshop / IEEE YUFORIC Germany '98, June 1998, pp. 195-203.
[7] S. Jayaram et al., "VADE: A Virtual Assembly Design Environment", IEEE Computer Graphics and Applications, Vol. 19, No. 6, November 1999, pp. 44-50.
[8] A. Bierbaum, C. Just, P. Hartling, K. Meinert, A. Baker, and C. Cruz-Neira, "VR Juggler: A Virtual Platform for Virtual Reality Application Development", IEEE VR 2001, Yokohama, March 2001.
[9] G. Smith and W. Stuerzlinger, "Integration of Constraints into a VR Environment", VRIC 2001 Proceedings, Laval (France), June 2001, pp. 103-110, ISBN 295157300-6.
[10] D. Tate, L. Sibert, and T. King, "Virtual Environments for Shipboard Firefighting Training", Proc. IEEE Virtual Reality Annual International Symposium, March 1997, pp. 61-68.
[11] R. B. Loftin and P. J. Kenney, "Training the Hubble Space Telescope Flight Team", IEEE Computer Graphics and Applications, Vol. 15, No. 5, September 1995, pp. 31-37.
[12] R. Bukowski and C. Sequin, "Interactive simulation of fire in virtual building environments", Proc. ACM SIGGRAPH, Los Angeles, 1997, pp. 28-35.
[13] The VRML Consortium Inc., "The Virtual Reality Modeling Language Specification", ISO/IEC 14772-1:1997, http://www.web3d.org/Specifications/VRML97/.
[14] J. Rohlf and J. Helman, "IRIS Performer: A High Performance Multiprocessing Toolkit for Real-Time 3D Graphics", Proc. ACM SIGGRAPH, Orlando, 1994.
[15] G. Burdea, "Haptics Issues in Virtual Environment", Proc. IEEE Computer Graphics International, Geneva (Switzerland), June 2000.
[16] W. Mark, S. Randolph, M. Finch, J. V. Werth, and R. Taylor, "Adding Force Feedback to Graphics Systems: Issues and Solutions", Proc. ACM SIGGRAPH, New Orleans, 1996, pp. 447-452.
[17] CAN in Automation International Users and Manufacturers Group, http://www.can-cia.de/.
[18] Silicon Graphics Inc., "OpenInventor C++ Reference Manual", Addison-Wesley, 1994.
[19] Silicon Graphics Inc., "Cosmo Worlds", Document Number 007-3312-001, 1996.
