Proceedings of OZCHI 2005, Canberra, Australia. November 23 - 25, 2005. Copyright the author(s) and CHISIG
REACH THE VIRTUAL ENVIRONMENT - 3D TANGIBLE INTERACTION WITH SCIENTIFIC DATA
Wen Qi 1, Jean-Bernard Martens 1, Robert van Liere 2 and Arjan Kok 2
1 Department of Industrial Design, Eindhoven University of Technology, Eindhoven, the Netherlands, {w.qi, J.B.O.S.Martens}@tue.nl
2 Department of Computer Science, Eindhoven University of Technology, Eindhoven, the Netherlands, {r.v.liere, a.j.f.kok}@tue.nl
ABSTRACT
In this paper we present an augmented virtuality system with a set of tangible devices for the interactive visualization of scientific data (volumetric scalar field data and molecular data). We describe the design concepts and application scenarios that underlie the development of the interfaces and the system. The prototype allows users to interact with scientific data by manipulating tangible devices, such as a graspable cube, a pen and a plane frame. These devices provide passive haptic cues that help the user to maintain position awareness and relative spatial relations during three-dimensional (3D) interaction. Active stereoscopic shutter glasses can be used to provide a 3D display whenever necessary. We describe the differences with traditional fish-tank and fully immersive virtual reality (VR) systems. At the end of the paper, we discuss the user experience and the research questions related to VR systems with tangible interfaces for visualization applications. We also describe the user studies that we plan for obtaining a more in-depth understanding of some of the usability issues involved.
KEYWORDS: tangible interface, virtual reality, scientific visualisation.
1. INTRODUCTION
Many applications in computer graphics and human-computer interaction aim to provide users with access to virtual information. VR tries to accomplish this by positioning body parts of the user, such as the head and/or one or two hands, within the virtual environment (Brooks, 1999). This VR approach does however have some drawbacks. First, intrusive devices such as head-mounted displays and data gloves tend to hamper effective hand-eye coordination. Second, the consequence of absent, or at best primitive, haptic feedback is that simple interaction tasks require much more visual attention than when manipulating real objects. The resulting interaction is usually slower and more cumbersome than a similar interaction with real objects. Alternative approaches that provide some feeling of presence within a virtual environment are therefore of interest. More specifically, tangible interaction promotes the idea of using physical interaction devices for manipulating objects in an interactive environment. We might call this kind of system an augmented virtual reality (AVR) or augmented virtuality (AV) system. According to Milgram et al. (Milgram, 1994), AV lies on the more virtual segment of the reality-virtuality continuum; they define AV as enhancing virtual worlds with real-world components or technical aspects. The goal of the work that we present in this paper is to create an augmented virtuality system with a set of interfaces for tangible interaction with scientific data (such as volumetric scalar data and molecular data). In order to accomplish this, we have combined state-of-the-art technologies from VR with aspects of tangible interfaces. More specifically, this AV environment offers users physical interaction devices that closely
match corresponding objects in the virtual scene. The kernel of our system is a visualization engine that performs interactive (real-time) rendering of volumetric or molecular data. All the other components that are needed for the tangible interaction are integrated with this kernel. Through this combination, rich 3D interaction experiences with scientific data can be achieved. After a discussion of related work on VR for visualization and on tangible interfaces, we describe the design concept of our interactive system, as well as the current state of implementation of the prototype. Based on the available user experiences with this first prototype, we draw conclusions about technical and psychological aspects of the system. Planned activities, such as more formal usability evaluations, are described at the end of this paper.
2. RELATED WORK
The work that we present in this paper relies on state-of-the-art insights and technology from both visualization and human-computer interaction (especially VR). Immersive or semi-immersive VR systems provide the user with the experience of being present in an interactive environment that consists purely of virtual objects (Ware, 1993; Brooks, 1999). This intense experience of 3D spatial awareness and natural interaction can be achieved by integrating VR hardware (3D displays and 3D spatial input devices) with a responsive computer-generated 3D environment. At the same time, the potential of visualization for exploring and analysing large scientific data sets is by now well recognized (van Dam, 2000). Several algorithms have been proposed for visualizing different kinds of scientific data, such as field data (scalar fields, for instance) and molecular data (Hansen, 2004; Limaye, 2001). 3D artificial (VR) worlds that provide opportunities for navigation, interaction and quantification can now be generated from scientific data, and much development work has been done on the VR technology needed to create such interactive visualization environments (Bryson, 1996). Fuhrmann et al. (Fuhrmann, 2002) have for instance described the integration of volume rendering for 3D scalar data within VR. They combined a fast and flexible software implementation of direct volume rendering with the intuitive interaction and navigation techniques of a virtual environment. Their virtual environment combines 3D interaction methods with six degrees of freedom (DOFs), such as positioning objects by dragging them with a pen, with more traditional 2D interaction techniques. Wossner et al. (Wossner, 2002) have presented a collaborative volume-rendering application that can be used in distributed virtual environments. Their application allows users to collaboratively view volumetric scalar data and manipulate relevant aspects, such as the transfer function. These systems aim at putting the user into a completely immersive virtual environment. Ai et al. (Ai, 1998) have presented a virtual environment for interactive molecular dynamics simulation at the Fraunhofer Institute for Computer Graphics. Different kinds of VR devices are used in the environment for immersive display and interaction with the molecular system. A parallel computer simulates the physical and chemical properties of the molecular system dynamically, and a high-speed network exchanges data between the simulation program and the modelling program. Molecular dynamics simulation in this virtual environment provides scientists with a powerful tool to study the world of molecules immersively. Sharma (Sharma, 2000) has used visual hand-gesture analysis and speech recognition to develop a speech/gesture interface for controlling a 3D display. His interface enhances an existing application, Visual Molecular Dynamics (VMD), a visual computing environment for structural biology (Humphrey, 1996). Free-hand gestures, together with a set of speech commands, manipulate the 3D graphical display. All of this work has concentrated on developing virtual environments for visualizing scientific data, without taking many human factors into account during the design of the systems. Tangible interaction is a recent research topic that investigates new ways of coupling physical form with digital information.
The primary goal is to enrich the interaction with virtual information by relying on the well-developed skills that end users have for manipulating real objects (Ishii, 1997). Tangible interfaces that are specifically aimed at interacting with scientific data have also been proposed (Schkolne, 2004). The Passive Interface Props (PassProps) (Hinckley, 1996) was one of the first tangible interfaces to support continuous interaction in 3D space. The PassProps was developed to allow surgeons to explore a patient's anatomy data by interactively generating cross-sections through the 3D data. The PassProps consists of a head prop, a cutting-plane prop for creating intersections, and a pen-like prop for planning
trajectories. Visual feedback of the user's actions is provided on a computer display in front of the user. The head prop is used to manipulate the orientation of the patient's anatomy. The rendering of the volumetric data on the screen follows the rotation of the head prop. The rendering is always positioned in the centre of the screen, i.e., it does not follow the translations of the head prop. The rendering scale (i.e., the zoom factor) is determined by the observer-to-object rendering distance, and is controlled by moving the head prop closer to or further away from the body. The user is provided with a cutting-plane prop that can be used to specify the translation (location) and orientation of an intersection plane through the 3D data. The user holds the cutting-plane prop relative to the head prop to specify the location and orientation of the slice. The generated intersection image is presented on the display, next to the 3D model. The Cubic Mouse (CMouse) (Frohlich, 2000) was developed to support the exploration of 3D geological data (seismic data) and car crash analysis data. The CMouse allows users to specify three orthogonal cutting planes and to perform so-called "chair cuts" through the data. The prop is a cube-shaped case with three perpendicular rods passing approximately through the centres of two parallel faces of the case. It is usually held in the non-dominant hand. The rods are used to control three orthogonal slices through the 3D data: by pushing or pulling a rod, usually with the dominant hand, the corresponding intersection plane moves back and forth. The movement of a slice is hence constrained to the direction orthogonal to the slice. A (wired) Flock of Birds tracker is embedded in the cube-shaped case. The 3D data set and the orthogonal slices are visualized on a large stereo display in front of the user. De Guzman et al. (Guzman, 2003) also presented two tangible devices for navigating a slice through the human body. Interface A consisted of a 30-inch 2D model of a human body, together with a U-shaped fork at the end of an adjustable arm that could be rotated 180 degrees along the device's baseboard. Interface B consisted of a transparent 3D model of the human body and a free-moving hand-held fork. The fork in each case represented the intersection plane (window), and its position and orientation were used to generate an intersection image on a separate display.
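The geometry behind such two-prop interfaces is worth making explicit: because the slice is specified relative to the data prop, moving both props rigidly together leaves the intersection through the data unchanged. Below is a minimal C++ sketch of this computation; the matrix helpers are our own stand-ins for whatever math library a tracker vendor provides, not code from any of the cited systems.

#include <cstring>

// 4x4 column-major homogeneous transform, as used by OpenGL.
struct Mat4 { float m[16]; };

// Product a * b of two column-major 4x4 matrices.
Mat4 multiply(const Mat4& a, const Mat4& b)
{
    Mat4 r;
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row) {
            float s = 0.f;
            for (int k = 0; k < 4; ++k)
                s += a.m[k * 4 + row] * b.m[c * 4 + k];
            r.m[c * 4 + row] = s;
        }
    return r;
}

// Inverse of a rigid transform (rotation + translation only):
// the inverse rotation is the transpose, and t' = -R^T t.
Mat4 invertRigid(const Mat4& a)
{
    Mat4 r;
    std::memset(r.m, 0, sizeof r.m);
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r.m[j * 4 + i] = a.m[i * 4 + j];   // transpose the rotation part
    for (int i = 0; i < 3; ++i)                // t' = -R^T t
        r.m[12 + i] = -(r.m[i] * a.m[12] + r.m[4 + i] * a.m[13] + r.m[8 + i] * a.m[14]);
    r.m[15] = 1.f;
    return r;
}

// Both prop poses arrive from the tracker in world coordinates; the slice
// is the plane prop's pose expressed in head-prop (data) coordinates.
Mat4 sliceInDataCoords(const Mat4& headProp, const Mat4& planeProp)
{
    return multiply(invertRigid(headProp), planeProp);
}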
3. PROTOTYPE CONCEPT AND DESCRIPTION
From the above examples, we may conclude that a VR environment with tangible interfaces constitutes a promising approach to interacting with scientific data. However, with the exception of Hinckley's work, very few studies available today have tried to investigate the value of tangible interaction for different applications and to experimentally validate its added value for visualizing scientific data. The obvious bottleneck to performing such studies is the need for a low-cost system in which the visualization of different scientific data in a virtual environment and tangible interaction can be combined in a flexible way. In order to address this requirement, we started the development of a prototype system with the following goals in mind:
• The prototype should be a platform with sufficient functionality and flexibility so that usability studies can be created and performed in an easy way.
• The architecture of the prototype should be modular, so that alternative interaction methods for a specific application can be easily included or excluded.
• The prototype should support real-time interaction with volumetric or molecular data, in order to allow simulation of real application contexts and task requirements.
• Unlike expensive immersive VR systems, the system should be low cost, i.e., based on common and inexpensive hardware components, in order to promote broad acceptance for a diversity of applications.
In the implementation of our prototype, different modules are integrated into a single computing platform. The system is divided into four parts:
• The tangible and graphical user interfaces allow a diversity of input modalities for the user, in both 2D and 3D.
• The control unit transforms the interactions performed through either the tangible or the graphical user interfaces into the parameter values needed for interactive visualization. Examples of relevant parameters are: the 3D position and orientation of the data set and of interaction elements such as an intersection plane, camera and light positions, etc.
• The visualization kernel is the core module that implements the volume rendering and molecular modelling algorithms.
• The display is used to output the rendering results and the graphical user interface components. It can operate in either monoscopic or stereoscopic mode.
These different modules are implemented as separate programs that exchange information over network sockets. With the same real-time performance, the different modules can be executed either on the same machine or on different computers, depending on the size of the data set involved. For a more in-depth discussion of the pros and cons of centralized versus distributed architectures for visualization, see for instance the paper by Schmalstieg et al. (Schmalstieg, 1998).
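As an illustration of this decoupling, the following C++ sketch shows how a control unit might push one parameter update to the visualization kernel over a TCP socket. The fixed-size message layout, the port number and all names are our own assumptions for illustration; the prototype's actual wire protocol is not documented here.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Illustrative wire format only (assumed): a fixed-size packet keeps
// parsing on the receiving side trivial. Both ends are assumed to share
// byte order and float layout.
struct ParamUpdate {
    float position[3];     // 3D position of the data set
    float orientation[4];  // orientation as a quaternion
    float clipPlane[4];    // intersection plane: normal (xyz) + offset
};

// Send one update to the visualization kernel, assumed to listen on
// TCP port 5000 of the given host.
bool sendUpdate(const char* host, const ParamUpdate& u)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return false;

    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);
    inet_pton(AF_INET, host, &addr.sin_addr);

    bool ok = connect(fd, (sockaddr*)&addr, sizeof addr) == 0 &&
              write(fd, &u, sizeof u) == (ssize_t)sizeof u;
    close(fd);
    return ok;
}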
3.1. Hardware
The computing engine is a Dell graphics workstation (Pentium IV, 2.4 GHz, 512 MB RAM) with the following specific interface components (see Figure 1):
• A FireGL 4 graphics card coupled to an infrared emitter from StereoGraphics.
• Two analog Leutron Vision LV-8500 progressive-scan CCD cameras (720x576 pixels, 50 Hz frame rate), equipped with COSMICAR/PENTAX lenses with a focal length of 12 mm and infrared-transparent filters (that block visible light); these cameras are connected to two synchronized Leutron Vision PictPort H4D frame grabbers.
• A 14-inch CRT monitor with a vertical refresh rate of up to 120 Hz, so that stereoscopic images can be viewed with the help of active liquid crystal shutter glasses (CrystalEyes 3).
• An LCD projector and a tablet with pen input allow for additional 2D interactions on a horizontal workspace.
A wooden chassis has been constructed to integrate the different components and create a workspace for the users. The two infrared cameras are mounted on the upper layer of the wooden chassis. A silver mirror mounted on a wooden slab hangs in front of the chassis at an angle of 45° in order to reflect the image of the user's hands with the tangible devices towards the cameras. The use of the wooden cabinet with the cameras makes the system set-up stable and allows for easy transportation. In the current prototype there are no provisions for tracking the user's head (which might be useful for providing motion parallax feedback in the displayed image as well).
Figure 1: A diagram of the system set-up.
3.2. Tangible Interaction Devices
The currently available tangible interaction devices are wooden cubes, a metal frame and a metal pen. All devices are painted black in order to reduce unwanted reflections. Every interaction device is characterized by one or more unique dot patterns. The patterns consist of small dots that are created with infrared-reflecting tape.
Figure 2: Snapshots of the three tangible interaction devices. Left: the tangible cube; middle: the plane frame; right: the tangible metal pen.
• Six different planar patterns appear on the six faces of a wooden cube. Each pattern is a unique combination of five white dots (Figure 2 left). The complete pattern facing the cameras must remain uncovered in order for the cube to be tracked.
• A square-shaped metal frame is used as a planar interface. The five dots on three of its sides again form a unique planar pattern. The user can most easily grasp the frame by the side that has no dots on it (Figure 2 middle).
• The pen has a unique linear pattern, constituted of four dots near its top. The user can easily grasp the pen close to the point at the bottom (Figure 2 right).
The plane frame and the pen are made of lightweight metal (aluminium). All tangible devices are tracked by means of the infrared stereo cameras. A vision-based tracking algorithm (Liere and Mulder, 2003; Liere and Mulder, 2004; Liere and Rhijn, 2003) provides six DOFs for the cube and the frame, and five DOFs for the pen (the rotation of the pen around its axis cannot be tracked). The tracking accuracy depends on the camera characteristics and the camera calibration, and is currently around 3 mm in the space above the table (up to a height of about 50 cm).
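Once the tracker has delivered a pose, rendering the virtual counterpart of a device amounts to applying that pose to the virtual object. A minimal C++/OpenGL sketch, assuming the tracker reports the pose as a column-major 4x4 matrix in the same coordinate frame as the camera calibration (the function names are our own):

#include <GL/gl.h>

// Draw the virtual counterpart of a tangible device at its tracked pose.
void drawTrackedObject(const float pose[16], void (*drawGeometry)())
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glMultMatrixf(pose);   // world -> device transform from the tracker
    drawGeometry();        // e.g. the bounding box around the volume data
    glPopMatrix();
}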
4. USER SCENARIOS OF TANGIBLE INTERACTION
The tangible design is especially useful for creating a natural and intuitive interface for 3D spatial interaction tasks. Two applications (3D scalar field data visualization and molecular visualization) have been developed. These two applications give us a better understanding of how tangible interfaces function for a specific task in a VR environment.
Figure 3: The tangible cube and the volumetric data within the virtual bounding box.
4.1. Interaction with Volumetric Data
The first example concerns tangible interaction with 3D scalar field data. These kinds of data often come from imaging devices, such as Computerized Tomography (CT) or Magnetic Resonance (MR) scanners. For interacting with volumetric scalar data, a direct volume visualization module provides all the core functionality using hardware-accelerated 3D texture mapping (Wossner, 2002). This is a widely accepted algorithm, supported through an OpenGL API extension developed by Silicon Graphics Inc. (Meissner, 1999). Hardware-accelerated 3D texture mapping is available on most major graphics cards (ATI or NVIDIA); in our case an ATI card is used.
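For readers unfamiliar with the technique, the following C++/OpenGL sketch outlines the basic idea of 3D-texture-based volume rendering: upload the scalar field once as a 3D texture, then draw a stack of textured slices blended back to front. This is a generic illustration of the algorithm, not the prototype's actual implementation; production renderers use view-aligned rather than axis-aligned slices and apply a transfer function for classification. glTexImage3D requires OpenGL 1.2 or the EXT_texture3D extension.

#include <GL/gl.h>

// Upload a w x h x d scalar volume as a 3D luminance texture.
GLuint uploadVolume(const unsigned char* voxels, int w, int h, int d)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE, w, h, d, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, voxels);
    return tex;
}

// Render the volume as numSlices axis-aligned quads, drawn in increasing z,
// i.e. back to front for a viewer on the +z axis looking down -z.
void renderVolume(GLuint tex, int numSlices)
{
    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, tex);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    for (int i = 0; i < numSlices; ++i) {
        float z = i / float(numSlices - 1);   // texture coordinate in [0,1]
        float v = 2.f * z - 1.f;              // vertex coordinate in [-1,1]
        glBegin(GL_QUADS);
        glTexCoord3f(0, 0, z); glVertex3f(-1, -1, v);
        glTexCoord3f(1, 0, z); glVertex3f( 1, -1, v);
        glTexCoord3f(1, 1, z); glVertex3f( 1,  1, v);
        glTexCoord3f(0, 1, z); glVertex3f(-1,  1, v);
        glEnd();
    }
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_3D);
}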
Figure 4: 3D interaction scenario with volumetric data and the tangible interface. Left: the tangible clipping plane with the cube; right: the 3D intersection result.
The user can hold the tangible cube to rotate and translate the volumetric data, and see the physical action reflected in the rendered image on the screen. A cube-shaped virtual bounding box is created around the data set in order to clarify the one-to-one correspondence with the physical cube (Figure 3). In case easy switching between a few alternative data sets is required, this can be accomplished by assigning different physical cubes to different data sets. A clipping operation can be accomplished in a very intuitive way by using a two-handed operation, in which the non-dominant hand holds the cube representing the data set, and the dominant hand holds the frame representing the clipping plane (Figure 4 left). This two-handed interaction is very similar to the natural way in which people manipulate physical objects. The clipping plane cuts through the 3D scalar data, making the interior structure visible (Figure 4 right). The cube can also be placed on a small pedestal in case the user prefers to perform clipping operations with a single hand. This also allows the frame to cut through the complete cube without being obstructed by the user's hand.
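The mapping from the tracked frame to the rendered cut is conceptually simple: the frame's position and normal define a plane equation that can be handed to the renderer. A minimal sketch using OpenGL's fixed-function clipping, assuming a column-major pose matrix whose z axis is the frame's normal (the helper name is our own):

#include <GL/gl.h>

// Derive an OpenGL clipping plane from the tracked frame pose.
void applyClipPlane(const float framePose[16])
{
    // Plane normal: the third column of the rotation part.
    double n[3] = { framePose[8], framePose[9], framePose[10] };
    // A point on the plane: the frame's position (fourth column).
    double p[3] = { framePose[12], framePose[13], framePose[14] };

    // Plane equation n.x + d = 0 with d = -n.p; OpenGL keeps the half
    // space where the expression evaluates >= 0. The equation is
    // interpreted in the current modelview coordinate frame.
    double eq[4] = { n[0], n[1], n[2],
                     -(n[0] * p[0] + n[1] * p[1] + n[2] * p[2]) };

    glClipPlane(GL_CLIP_PLANE0, eq);
    glEnable(GL_CLIP_PLANE0);
}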
4.2. Molecular Visualization
Recent scientific and technical advances in the fields of experimental and structural biology, particularly in genomics, have produced large amounts of biological data, which pose new conceptual challenges. The visualization and visualization-driven analysis of these experimentally derived data have become a key component of the scientific process: manipulating the data interactively and animating their dynamic properties in real time (Henn, 1996). Popular applications for molecular visualization include, but are not limited to, Rasmol (Sayle, 1995) and the Swiss-PDB Viewer (Guex, 1997).
Figure 5: 3D interaction scenario with molecular data and the tangible interface. Left: two-handed interaction with the tangible cube and pen; right: the foot pedal for function switching.
In molecular biology, the shape and internal structure of a molecule provide the basis for its function. Thus, visualizing a molecule's 3D structure is crucial for understanding its function, its molecular interactions and supramolecular assemblies (for example, 3D structure is the key to designing a specific drug that binds with high affinity and specificity to a therapeutic target). However, these large amounts of data still cannot be explored and manipulated efficiently with traditional interfaces alone. Instead, new specialized interfaces are needed that offer flexibility and rapid interaction while performing a relatively complex interaction task.

The second application scenario of our interfaces concerns interacting with molecular data. The molecular visualization module is based on the VMD package developed by the University of Illinois at Urbana-Champaign (UIUC). Together with a cube and a pen (Figure 5 left), a foot pedal has been added to let the user switch between interaction modes with his/her feet. The Infinity foot pedal is connected to the workstation through a USB port. Its ergonomic design includes a wide, central sloping play pedal, a fast-forward key on the left side and a rewind key on the right side (see Figure 5 right). It allows users to rest their foot comfortably while interacting with the tangible interfaces with both hands, and creates a smooth interaction without requiring the user to put down the tangible interface and switch to another modality (such as a mouse). Several interaction tasks can currently be performed with these tangible interfaces. When a molecule has been loaded into the system, a small virtual cube is displayed on the screen together with the molecule structure (Figure 6). By default, the virtual cube is decoupled from the molecule model. When the user presses the central pedal with his/her foot, the virtual cube is coupled to the molecule model; the user can then manipulate the molecule by manipulating the wooden cube. When the user presses function key F3, the subsequent interaction measures the angle among three atoms. In this mode, the user holds the pen and the cube and selects three atoms consecutively. The cube and the central pedal are coordinated by the user's hand and foot to set the molecule model in the desired position. Every time the pen selects a target atom, the label of that atom is displayed; after the three selections, the angle is calculated. A user can also move an individual atom with the pen. The "move" mode is activated when the user first presses function key F4. The user then manipulates the cube and pen together so that the tip of the virtual pen intersects and selects the target atom. Once the user has picked the right atom, he/she presses the right rewind key and keeps it down; the selected atom then follows the movement of the tip of the pen. After moving it to the desired position, the user releases the right pedal and the atom stays at its new position (Figure 6).
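The angle measurement itself reduces to elementary vector geometry: the angle at the middle atom between the two bond vectors. A minimal C++ sketch, with a hypothetical Vec3 type standing in for whatever atom-coordinate structure the visualization module uses:

#include <cmath>

// Minimal stand-in for an atom position; not a VMD data structure.
struct Vec3 { double x, y, z; };

// Angle (in degrees) at atom b between the vectors b->a and b->c.
double angleDeg(const Vec3& a, const Vec3& b, const Vec3& c)
{
    const double kPi = 3.14159265358979323846;
    Vec3 u = { a.x - b.x, a.y - b.y, a.z - b.z };
    Vec3 v = { c.x - b.x, c.y - b.y, c.z - b.z };
    double dot = u.x * v.x + u.y * v.y + u.z * v.z;
    double lu  = std::sqrt(u.x * u.x + u.y * u.y + u.z * u.z);
    double lv  = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    double cosA = dot / (lu * lv);
    if (cosA >  1.0) cosA =  1.0;   // clamp against rounding errors
    if (cosA < -1.0) cosA = -1.0;
    return std::acos(cosA) * 180.0 / kPi;
}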
Figure 6: The virtual pen interacts with the atoms of a molecule.
5. DISCUSSION
The described prototype has been demonstrated within the university and at the Second European Symposium on Ambient Intelligence, alongside a fish-tank VR system (Martens, 2004). All users who tried out the tangible interfaces gave very positive feedback. There are some important differences between our system and existing systems that we would like to emphasize:
• Our prototype uses vision-based tracking to create tangible interaction devices with six DOFs. Because the interaction devices are wireless, users experience fewer problems than with existing interfaces such as the Passive Props and the Cubic Mouse, which rely on existing (and often expensive) electromagnetic tracking technologies. Most current tracking systems require wires when the interaction devices are active. Moreover, because of electromagnetic interference, a combination of 2D and 3D interaction devices would not be possible with a number of existing 3D tracking technologies, which is not the case with our interfaces. Another advantage of our interaction devices is that they are low cost. Only wood, metal, paint and infrared-reflecting tape are required to create an interaction element, so that a large and varied collection of such physical interaction elements can easily be created.
• In an immersive VR environment, users often cannot directly observe the interaction devices that they are using, so the look of the devices cannot provide any additional functional (haptic) cues. Some VR systems make use of an active haptic device (such as the Phantom) to provide users with force feedback. This haptic feedback is however limited to a single point, which is at best only a weak approximation of the rich haptic feedback that we receive from actual physical objects that we sense with our hands, and single-point feedback is not appropriate for all kinds of tasks. Our prototype adds passive haptic feedback and provides position awareness by letting the user directly observe the appearance and relative positions of the interaction elements. These have been demonstrated to be very important issues for natural 3D interaction (Lindeman, 1999). There is no quantitative evidence that it is important for the shapes of the physical 3D interaction devices to be consistent with the shapes of the virtual objects in a non-immersive or traditional 3D desktop environment. For example, the experiment by Hinckley (Hinckley, 1996) implies that the physical shape of a 3D interface device has no significant effect on user performance in a 3D rotation task on a common desktop system. However, it has been reported that spatial knowledge training becomes significantly more effective when training in a virtual environment augmented with passive haptics (Insko, 2001), which suggests that passive haptic feedback is important for 3D tasks (such as creating a clipping plane) and interaction in an immersive/semi-immersive environment. Hinckley himself also reports a problem with the head prop that he used to represent the data: the fact that the nose on the head prop did not necessarily coincide with the nose in the rendered data set led to confusion and reduced performance in some subjects. This may actually be an argument for using more abstract interaction devices in traditional desktop systems, but devices with a specific shape (for instance the head prop) in immersive VR systems.
There is still no compelling evidence that fully immersive VR is significantly better than desktop VR for visualization applications (van Dam, 2000). The tangible interfaces presented here provide users with a different type of mental, cognitive map of the 3D data than is achievable with traditional 3D devices. They help us to explore the utility of rich interaction in 3D environments (immersive or semi-immersive) with passive haptics. Obviously, much more experimental evidence will be required with the described system in order to settle the issue (Whittaker, 2000). We formulate some questions about these tangible interfaces that we feel should certainly be addressed in more detail:
• Will unique insights into scientific data (such as molecular structure) be gained through specific tangible interfaces in VR visualization applications? Most previous research has concentrated on simply combining VR technology and visualization, without optimizing the interface according to the application context and tasks. Although fully immersive VR has many recognized advantages, being fully immersed also implies the absence of physical references (Lindeman, 1999). Therefore it is important to realize that the feeling of immersion that VR provides for visualization applications comes at a price, namely an increased isolation from the people and objects around the user. A tangible interface can decrease this isolation to a certain extent by means of passive haptics.
• In what ways do these tangible interfaces help researchers to observe scientific data (such as macromolecules) in ways that cannot be achieved with current, conventional interfaces in VR visualization? Several experiments have suggested that users make similar mistakes in spatial judgements in large-sized virtual environments as in real environments, while such
mistakes occur less frequently in small-sized 3D desktop environments (van Dam, 2000). Therefore it will be valuable to know whether the tangible interface can correct such mistakes despite the system differences. A possible explanation is that the realism presented by a tangible interface may help the user to perform his/her task.
• Do these tangible interfaces have the same effects (positive or negative) on different 3D interaction tasks from different applications? For instance, will these tangible interfaces advance research on understanding both the structure of anatomical data and the function of bio-molecules? In our case, the same tangible concept has been applied to two different visualization applications and tasks. Designers of 3D user interfaces will benefit greatly from this kind of finding (Bowman, 2004).
6. CONCLUSIONS
We have presented an AV system prototype with tangible interaction devices for manipulating and interacting with scientific data (more specifically, volumetric and molecular data). Because AV has the potential to combine the advantages of two worlds, VR and "real" (physical) reality, we believe that it constitutes an ideal approach for natural communication with a synthetic (or virtual) world. Another motivation for developing the platform was to enable hands-on experience with such an approach. The prototype development is now at a stage where more formal usability experiments can be created and conducted. User studies on different data with the same tangible interface could provide an in-depth understanding of the usability issues of such interfaces. The first usability study that we plan will compare alternative methods for clipping-plane specification (Qi, 2005). Additional experiments that are more in line with the discussion in the current paper are also planned; for example, studying the efficiency and effectiveness of atom selection and measurement tasks within a molecule is one of the experiments planned for the near future.
7. REFERENCES
Z. M. Ai and T. Fröhlich. (1998). Molecular dynamics simulation in virtual environments. Computer Graphics Forum, 17(3): 267–273.
D. Bowman, E. Kruijff, J. J. LaViola and I. Poupyrev. (2004). 3D User Interfaces: Theory and Practice. Addison-Wesley, Virginia.
F. Brooks, Jr. (1999). What's real about virtual reality? IEEE Computer Graphics and Applications, 19: 16–27.
S. Bryson. (1996). Virtual reality in scientific visualization. Comm. ACM, 39: 62–71.
A. van Dam, A. Forsberg, D. Laidlaw, J. J. LaViola and R. Simpson. (2000). Immersive VR for scientific visualization: A progress report. IEEE Computer Graphics and Applications, 20: 26–52.
E. De Guzman, F. Wai-ling Ho-Ching, T. Matthews, T. Rattenbury, M. Back and S. Harrison. (2003). Eewww!: Tangible instruments for navigating into the human body. In Extended Abstracts of CHI 2003, 806–807.
B. Frohlich and J. Plate. (2000). The Cubic Mouse: A new device for three-dimensional input. In Proceedings of CHI 2000, 526–531.
A. Fuhrmann, B. Ozer and H. Hauser. (2002). VR2: Interactive volume rendering using PC-based virtual reality. Technical Report TR-VRVis-2002-014, VRVis Research Center, Vienna.
N. Guex and M. Peitsch. (1997). SWISS-MODEL and the Swiss-PDB Viewer: an environment for comparative modeling. Electrophoresis, 18: 2714–2723.
C. Henn, M. Teschner, A. Engel and U. Aebi. (1996). Real-time isocontouring and texture mapping meet new challenges in interactive molecular graphics applications. Journal of Structural Biology, 116: 92–96.
C. D. Hansen and C. R. Johnson. (2004). The Visualization Handbook. Elsevier Butterworth-Heinemann.
K. Hinckley. (1996). Haptic issues for virtual manipulation. PhD thesis, University of Virginia, Virginia.
H. Ishii and B. Ullmer. (1997). Tangible bits: Towards seamless interfaces between people, bits and atoms. In Proceedings of CHI 97, 234–241.
W. Humphrey, A. Dalke and K. Schulten. (1996). VMD - Visual Molecular Dynamics. Journal of Molecular Graphics, 14: 33–38.
B. E. Insko. (2001). Passive Haptics Significantly Enhances Virtual Environments. PhD thesis, University of North Carolina at Chapel Hill.
C. Johnson, Y. Livnat, L. Zhukov, D. Hart and G. Kindlmann. (2001). Computational field visualization. In Mathematics Unlimited - 2001 and Beyond, 2: 605–630.
A. C. Limaye and S. R. Gadre. (2001). UNIVIS-2000: An indigenously developed comprehensive visualization package. Current Science, 80(10): 1296–1301.
R. Lindeman, J. Sibert and J. K. Hahn. (1999). Towards usable VR: An empirical study of user interfaces for immersive virtual environments. In Proceedings of CHI 99, 64–71.
R. van Liere and J. Mulder. (2003). Optical tracking using projective invariant marker pattern properties. In Proceedings of the IEEE Virtual Reality Conference 2003, 191–198.
R. van Liere and J. Mulder. (2004). Emerging frameworks for tangible user interfaces. In Workshop Beyond Glove and Wand Based Interaction, Proceedings of the IEEE Virtual Reality Conference 2004, 126–136.
R. van Liere and A. van Rhijn. (2003). Search space reduction in optical tracking. In J. Deisinger and A. Kunz, editors, Proceedings of the Immersive Projection Technology and Virtual Environments Workshop 2003 (IPT/EGVE 2003), 207–214.
P. Milgram, H. Takemura, A. Utsumi and F. Kishino. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information Systems, E77-D(12): 1321–1329.
J. B. Martens, W. Qi, D. Aliakseyeu, R. van Liere and A. Kok. (2004). Experiencing 3D interactions in virtual reality and augmented reality. In Adjunct Proceedings of the Second European Symposium on Ambient Intelligence, 25–28.
M. Meissner, U. Hoffmann and W. Strasser. (1999). Enabling classification and shading for 3D texture mapping based volume rendering. In Proceedings of the 10th IEEE Visualization 1999 Conference, 124–125.
J. Mulder and R. van Liere. (2002). The Personal Space Station: Bringing interaction within reach. In Fourth Virtual Reality International Conference, 73–81.
W. Qi and J. B. Martens. (2005). Tangible user interface for clipping plane: A case study. In Proceedings of the Seventh International Conference on Multimodal Interfaces 2005, Italy. (In press)
S. Schkolne, H. Ishii and P. Schröder. (2004). Immersive design of DNA molecules with a tangible interface. In IEEE Visualization 2004, 227–234.
R. Sayle and E. Milner-White. (1995). RasMol: biomolecular graphics for all. Trends in Biochemical Sciences, 20: 374–376.
D. Schmalstieg, A. Fuhrmann, Z. Szalavari and M. Gervautz. (1998). Studierstube - an environment for collaboration in augmented reality. Virtual Reality - Systems, Development and Applications, 3: 37–49.
R. Sharma, M. Zeller, V. Pavlovic et al. (2000). Speech/gesture interface to a visual-computing environment. IEEE Computer Graphics and Applications, 20: 29–37.
C. Ware, K. Arthur and K. Booth. (1993). Fish tank virtual reality. In Proceedings of CHI 93, 37–42.
U. Wossner, J. Schulze, S. Walz and U. Lang. (2002). Evaluation of a collaborative volume rendering application in a distributed virtual environment. In Proceedings of the Eighth EuroGraphics Workshop on Virtual Environments, 113–122.
S. Whittaker, L. Terveen and B. Nardi. (2000). Let's stop pushing the envelope and start addressing it: A reference task agenda for HCI. Human-Computer Interaction, 15(2/3): 75–106.
8. ACKNOWLEDGEMENTS
This project is part of the IOP-MMI program (Innovative Research Program on Man-Machine Interaction), funded by SenterNovem under the Dutch Ministry of Economic Affairs.