Immersive Molecular Visualization and Interactive Modeling with Commodity Hardware

John E. Stone1, Axel Kohlmeyer2, Kirby L. Vandivort1, and Klaus Schulten3

1 Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign
2 Center for Molecular Modeling, Temple University
3 Department of Physics, University of Illinois at Urbana-Champaign
Abstract. Continuing advances in development of multi-core CPUs, GPUs, and low-cost six-degree-of-freedom virtual reality input devices have created an unprecedented opportunity for broader use of interactive molecular modeling and immersive visualization of large molecular complexes. We describe the design and implementation of VMD, a popular molecular visualization and modeling tool that supports both desktop and immersive virtual reality environments, and includes support for a variety of multi-modal user interaction mechanisms. A number of unique challenges arise in supporting immersive visualization and advanced input devices within software that is used by a broad community of scientists who often have little background in the use or administration of these technologies. We share our experiences in supporting VMD on existing and upcoming low-cost virtual reality hardware platforms, and we give our perspective on how these technologies can be improved and employed to enable next-generation interactive molecular simulation tools for broader use by the molecular modeling community.
1 Introduction
Over the past decade, advances in microprocessor architecture have led to tremendous performance and capability increases for multi-core CPUs and graphics processing units (GPUs), enabling high performance immersive molecular visualization and interactive molecular modeling on commodity hardware. Stereoscopic visualization has been used in molecular visualization for decades, but it was previously a prohibitively costly technology. A growing number of commodity GPUs, televisions, and projectors now support stereoscopic display, enabling molecular scientists to afford stereoscopic display hardware for individual use for the first time. Inexpensive but powerful embedded microprocessors have also enabled a new generation of low-cost six-degree-of-freedom (6DOF) and haptic input devices suitable for molecular visualization in both desktop and immersive visualization environments. These immersive VR displays, 6DOF input devices, and powerful rendering and computing capabilities previously available only to well-funded laboratories are creating a new opportunity to greatly expand the use of immersive visualization and interactive simulation in molecular modeling.
Although the molecular modeling community has long had an interest in the use of immersive visualization and advanced input devices [1–6], these technologies have not seen much adoption outside of institutions with significant local VR infrastructure and expertise. One of the challenges that must be overcome in making VR technologies available to application scientists is to make them easy to configure and intuitive to use. Most users have minimal experience with installing and configuring visualization clusters, stereoscopic projection systems, or complex input devices; this presents a barrier to adoption of such advanced technologies, irrespective of cost. This paper describes the software infrastructure supporting commodity virtual reality hardware and software within VMD [3], a full-featured software package for molecular visualization, analysis, and interactive molecular dynamics, used by tens of thousands of researchers worldwide. We describe our experiences developing and supporting VMD in both desktop and immersive virtual reality environments, and the unique challenges involved in supporting sophisticated immersive display and input technologies in software that is used by a broad community of life scientists who have little or no background in the technical disciplines related to installing and managing complex VR systems.
2 Software Architecture
Among similar molecular visualization tools, VMD is unusual in that it was originally designed (under the initial name “VRChem”) for use primarily in immersive virtual environments such as the CAVE [7], ImmersaDesk [8], and non-immersive stereoscopic projection environments, but subsequently became a widely used desktop application supporting commodity personal computers. Today, VMD continues to support both desktop and immersive virtual environments, and has added expanded support for a wide variety of advanced input devices for use in both scenarios. Current releases of VMD support multiple VR toolkits including CAVElib [7], FreeVR [9], and VRPN [10]. The VMD user community has, at various times, also modified VMD to support VR Juggler [11], and a number of custom in-house VR systems and multi-modal input devices of various kinds [1, 5, 12]. Below we describe the design constraints and the hardware and software abstractions that VMD uses to support these diverse toolkits and their respective APIs, while maintaining ease of use for molecular scientists.
2.1 Design Goals and Constraints
One of the major problems that broadly deployed scientific applications must solve is to abstract the differences in hardware and software environments over a potentially wide range of usage scenarios. The bulk of daily molecular modeling work is done on commodity laptop and desktop computers that have neither immersive display capability nor advanced input devices, so this usage scenario must be well supported and must not be compromised in the effort to add support for immersive displays and interfaces.
Lecture Notes in Computer Science
The typical end users of molecular modeling applications today expect applications to be provided as binary or “shrink-wrapped” applications, just as most business and productivity applications are typically distributed. Similarly, as the separation between a tool or feature’s intended purpose and its technical realization has grown, some level of software automation is typically expected, particularly for common, repetitive, and well understood tasks. Such automation enables the user to focus on the science of their project rather than the technologies they are employing along the way. Our experience confirms that users are often averse to, or even unable to, compiling complex applications from source, and that they prefer not to go through complex installation and configuration processes; they are therefore willing to sacrifice some degree of sophistication, flexibility, and performance in exchange for convenience and ease of use. This observation places some constraints on the design of molecular modeling applications that are intended for broad use, and it gives guidance for designers of VR toolkits that support such broadly used scientific applications. The design of VMD supports the use of low-cost 6DOF VR input devices and stereoscopic display within an otherwise typical desktop application, as well as immersive virtual environments, enabling molecular scientists to incrementally evaluate and adopt VR technologies according to their budget and needs.
2.2 Display Management and Rendering
Due to the diversity of display environments that VMD must support, the display management and rendering infrastructure within the program is designed to be flexible and extensible. Since the low-level rendering APIs and the windowing-system or VR APIs that manage the display differ significantly, VMD uses a hierarchy of inherited classes to abstract these differences. In molecular modeling, traditional scene graphs often incur significant performance penalties either during display redraws or during trajectory animation updates, due to the fine-grained nature of the independent atomic motions and transformations that occur. VMD employs a fully custom scene graph structure that is designed specifically for the needs of molecular visualization, and in particular for visualizing the dynamics of multi-million-atom molecular complexes. VMD’s custom scene graph is also very memory efficient, which is particularly important when rendering structures with up to 100 million atoms. A consequence of the use of a custom scene graph is that VMD must also implement its own code for rendering the scene graph. VMD implements a DisplayDevice base class to abstract the details of windowing systems, VR toolkits, rendering APIs, and hardware platforms. This base class underlies subclasses both for interactive displays and for batch-mode photorealistic ray tracing, scene export, and external rendering of various types. In order to support multiple interactive graphics APIs, the DisplayDevice class is subclassed according to the rendering API used, as illustrated in Fig. 1. Although OpenGL is the only fully supported rendering API in current versions of VMD, this structure enabled earlier versions of VMD to support the IRIS GL and DirectX rendering APIs as well. As we begin to transition from present-day OpenGL 1.1
[Fig. 1 shows a block diagram: molecular structure data and global VMD state feed the graphical representations (DrawMolecule, non-molecular geometry) and scene graph; the display subsystem (DisplayDevice, OpenGLRenderer, windowed OpenGL, CAVE, FreeVR) renders the scene; and the user interface subsystem (mouse and windows, Tcl/Python scripting, interactive MD, VR “tools”) receives 6DOF position, button, and force-feedback data from devices such as the Spaceball, haptic devices, the CAVE wand, and VRPN smartphones.]

Fig. 1. VMD software subsystems responsible for managing input devices, user interactions, and interactive rendering on desktop and immersive displays.
and 2.x rendering functionality towards OpenGL 4.x, the rendering API subclass capability will again prove to be beneficial.

The abstraction of windowing system and VR toolkit APIs is handled by further subclassing one of the renderers. Since the windowing system and VR APIs are largely independent of the underlying renderer code, compile-time macros determine which renderer class is used as the parent class for each windowing system, GUI toolkit, and VR API subclass. This design allowed VMD to support both IRIS GL and OpenGL within the same X-Windows and CAVE DisplayDevice subclass implementations. The rendering and display management classes require close handshaking during initial display surface creation, when the VR toolkit or windowing system requests features that must be specified during display initialization, such as a stereoscopic-capable visual context.

Beyond the abstraction of basic display and window management operations, the DisplayDevice class hierarchy also manages the details involved with parallel and multi-pipe rendering. The subclasses for each VR toolkit, such as CAVElib and FreeVR, provide the necessary data sharing or communication infrastructure, along with the mutual exclusion or synchronization primitives required to share the molecular scene graph among the renderer processes or threads associated with the VR display system. Unlike many VR applications, VMD contains a significant amount of internal program state aside from the scene graph, such that it is impractical to attempt to completely replicate and synchronize the full program state among rendering slaves in cluster-based VR systems. This is due in part to the fact that VMD includes internal scripting engines and a multiplicity of other interfaces that can modify internal state, and it would be prohibitive to attempt to maintain coherency of all such interfaces among rendering slaves. For this reason, the multi-pipe VR implementations of VMD have largely been limited to one of two strategies: a tightly-coupled implementation that stores the scene graph and associated data structures in a shared-memory arena accessible to both the master application process and the rendering slaves, or a very loosely-coupled implementation based on a set of completely independent VMD processes running on a cluster of rendering nodes communicating over a network. The tightly-coupled shared-memory approach is used by the CAVElib and FreeVR display class implementations in VMD, and allows full program flexibility and functionality. The loosely-coupled approach is typically used for ad-hoc display walls where only a limited range of interactions is required and no on-the-fly scripting is used.
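The subclass-per-API structure described above can be sketched in miniature. The following is illustrative Python (VMD itself is C++, and these class and method names are invented for the example, not VMD's actual API); it shows a windowing-system subclass refining a renderer subclass of a common display base class, with surface features fixed at creation time:

```python
from abc import ABC, abstractmethod

class DisplayDevice(ABC):
    """Base class abstracting windowing systems, VR toolkits, and rendering
    APIs (a toy stand-in for VMD's C++ DisplayDevice hierarchy)."""
    @abstractmethod
    def create_surface(self, stereo: bool) -> str: ...
    @abstractmethod
    def render(self, scene: list) -> int: ...

class OpenGLRenderer(DisplayDevice):
    """Renderer-level subclass: knows how to draw the scene graph."""
    def render(self, scene):
        # Walk the (toy) scene graph and report how many items were drawn.
        return len(scene)
    def create_surface(self, stereo):
        return "offscreen"

class WindowedOpenGLDisplay(OpenGLRenderer):
    """Windowing-system subclass: handles the surface-creation handshake,
    e.g. requesting a stereo-capable visual context at window creation."""
    def create_surface(self, stereo):
        return "GL window (stereo)" if stereo else "GL window"

display = WindowedOpenGLDisplay()
print(display.create_surface(stereo=True))   # -> GL window (stereo)
print(display.render(["atom sphere", "bond cylinder"]))  # -> 2
```

A CAVElib or FreeVR display would slot in as another subclass of the renderer, overriding only the surface-creation and synchronization details.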
2.3 Multi-modal Input and Haptic Feedback
VMD abstracts VR input devices through a set of classes that allow 6DOF motion control input, buttons, and haptic feedback. As with the display and rendering classes, the base classes are subclassed for each device or VR toolkit API that is added. Many devices report events through the windowing system, some through VR toolkit API calls, and others through direct communication using low-level operating system I/O calls. Since a given VMD session may use devices provided by disparate interfaces and coordinate systems, VMD provides a simple mechanism for applying an arbitrary transformation matrix to the position and orientation information associated with 6DOF motion control devices, enabling users to customize the mapping from the physical workspace of the device to the virtual workspace. This enables devices to be used seamlessly in the environment, whether they are provided by CAVElib, FreeVR, VRPN, the host OS windowing system, or other sources. VMD supports low-cost commodity input devices such as joysticks and the Spaceball and SpaceNavigator 6DOF devices via input subclasses that handle windowing system events or direct device I/O. Devices supporting windowing system events can be used immediately for basic 6DOF motion control with no further configuration, making them an ideal choice for molecular scientists who have neither the time nor the inclination to use the more sophisticated VR “tool” controls described below. Most VR-oriented 6DOF input and haptic feedback devices are supported through VR toolkits or libraries such as VRPN. VRPN [10] is an ideal counterpart to VMD for several reasons. VMD can be statically linked against a VRPN client library, which has few or no additional dependencies. In order to use VRPN-supported input devices with VMD, a VRPN server daemon runs on the host with the attached devices, and VMD connects to it over the network.
This structure enables new devices to be supported without recompiling VMD binaries; it separates low-level input device management from the main VMD event loop (this work is done inside the VRPN server daemon); and it provides a simple mechanism for access to input devices that are not managed by the host windowing system. The use of VRPN in VMD also makes it easy to use devices that only have drivers for a single operating system with a VMD client instance running on any operating system [13].
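The workspace mapping mentioned above — applying a user-supplied transformation matrix to 6DOF device positions — can be illustrated with a small sketch. This is plain Python with invented function names, not VMD's actual interface:

```python
def make_workspace_transform(scale, offset):
    """Build a 4x4 homogeneous matrix (row-major nested lists) that scales
    and translates 6DOF device positions from the physical workspace into
    the virtual scene. Illustrative only; VMD accepts an arbitrary matrix."""
    ox, oy, oz = offset
    return [[scale, 0.0, 0.0, ox],
            [0.0, scale, 0.0, oy],
            [0.0, 0.0, scale, oz],
            [0.0, 0.0, 0.0, 1.0]]

def map_position(T, pos):
    """Apply the workspace transform to a device position (x, y, z)."""
    x, y, z = pos
    p = (x, y, z, 1.0)  # homogeneous coordinates
    return tuple(sum(T[r][c] * p[c] for c in range(4)) for r in range(3))

# A tracker 0.5 m in front of the user maps to 1 scene unit:
T = make_workspace_transform(scale=2.0, offset=(0.0, 0.0, 0.0))
print(map_position(T, (0.0, 0.0, 0.5)))  # -> (0.0, 0.0, 1.0)
```

Because orientation can be folded into the same matrix, a single user-edited transform suffices to reconcile devices that report in different coordinate systems.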
In order to use VR input devices within VMD, a mapping is created between one or more physical devices and an instance of a VMD “tool”. Tools are user interface objects that perform actions on molecules based on 6DOF motion control and button state inputs. This design is similar to the tool abstraction provided in recent VR toolkits such as Vrui [14], except that the tools in VMD are domain-specific, and they support multiple VR toolkits. Tools also create force feedback for haptic input devices, and they maintain a visual representation of the tool orientation within the virtual environment. VMD provides several standard tools for different visualization and interactive modeling tasks. The VMD “tug” and “pinch” tools apply forces to atoms or higher-level molecular structure components within interactive simulations. A “grab” tool allows 3DOF and 6DOF input devices to arbitrarily translate and rotate the molecular scene. A “rotate” tool allows 3DOF input devices to rotate the scene. A “spring” tool enables precise visual placement of spring constraints used in molecular dynamics simulations. A “print” tool logs input device position and orientation data to a text console to assist users with VR input device testing and calibration. Additional tools are easily implemented by subclassing a top-level tool base class.
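The tool abstraction can be sketched as follows. This is a simplified, illustrative Python rendering of the idea (VMD's tools are C++, and these names are invented): a base class defines the pose-plus-buttons interface, and each concrete tool interprets that input in its own way.

```python
class Tool:
    """Base class for VMD-style 'tools': map a 6DOF pose and button state
    to an action on the molecular scene (simplified, illustrative)."""
    def update(self, position, orientation, buttons):
        raise NotImplementedError

class GrabTool(Tool):
    """Translate the whole scene by the device's motion while button 0 is
    held, like the 'grab' tool described in the text."""
    def __init__(self):
        self.last = None
        self.scene_offset = [0.0, 0.0, 0.0]
    def update(self, position, orientation, buttons):
        if buttons[0] and self.last is not None:
            self.scene_offset = [o + (p - q) for o, p, q
                                 in zip(self.scene_offset, position, self.last)]
        self.last = position
        return self.scene_offset

tool = GrabTool()
tool.update((0.0, 0.0, 0.0), None, [True])        # first sample: reference pose
print(tool.update((1.0, 0.0, 0.0), None, [True])) # -> [1.0, 0.0, 0.0]
```

A “tug” or “spring” tool would override `update` to emit forces instead of scene transforms, which is why a single base class can also serve the haptic devices.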
2.4 User Interfaces
A potential limitation to the application of immersive VR for molecular modeling is the need for alphanumeric input and simultaneous display of multiple properties of the molecular structure, often in other visualization modalities such as 2-D plots of internal coordinates, timeline views of simulation trajectories, and tabular displays of alphanumeric information. Although such interfaces can be embedded within immersive environments, and some have been implemented in modified versions of VMD [12], users often find them inefficient relative to traditional 2-D interfaces and keyboard input. For this reason, VMD maintains the ability to display 2-D desktop graphical interfaces concurrently with an immersive virtual environment, by redirecting them to another windowing system console. The 2-D user interfaces are often most useful during exploration of large and unfamiliar molecular complexes, when a user may wish to make many detailed queries about the model without being limited to 3-D interactions, visual representations, or an immersive environment. Several groups have demonstrated the utility of incorporating 2-D GUI toolkits into VR applications as a partial solution to the need for auxiliary interfaces while working within immersive environments [15–17]. Since VMD incorporates Tcl and Python scripting interfaces, it is also possible to control the application or to display results through web browsers, graphical interfaces, gesture interfaces, and voice interfaces hosted on auxiliary desktop computers, tablet computers, or smartphones using a variety of network protocols. VMD’s scripting interfaces also enable creation of user-customized interfaces specific to the project a user is working on. All of the input device modalities supported in VMD (VRPN, CAVE, etc.) can be made to trigger scripting language event callbacks, so they can be used in arbitrary user-defined ways.
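The event-callback mechanism described above amounts to routing device events to user-registered script handlers. A minimal dispatch sketch (illustrative Python; these names are invented and do not reflect VMD's actual scripting API) might look like:

```python
class EventDispatcher:
    """Minimal sketch of routing input-device events to user-registered
    scripting callbacks, in the spirit of VMD's Tcl/Python event hooks."""
    def __init__(self):
        self.handlers = {}
    def register(self, event_type, callback):
        # Multiple user scripts may listen to the same event type.
        self.handlers.setdefault(event_type, []).append(callback)
    def emit(self, event_type, **data):
        for cb in self.handlers.get(event_type, []):
            cb(**data)

log = []
d = EventDispatcher()
d.register("tracker", lambda pos: log.append(("moved", pos)))
d.emit("tracker", pos=(0.1, 0.2, 0.3))
print(log)  # -> [('moved', (0.1, 0.2, 0.3))]
```

In this scheme a user-defined handler is free to drive any custom interface — a 2-D plot, a web page, or a project-specific control panel.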
3 Interactive Molecular Dynamics
A compelling use of the combination of immersive display and interaction features within VMD is interactive molecular dynamics (IMD) simulation [13]. Steered and interactive molecular dynamics simulations can be used to study the binding properties of biomolecules and their response to mechanical forces, or to perform nanomechanical experiments such as surface grafting or manipulation of simulated nano-devices and nanoscale objects like carbon nanotubes or fullerenes. The development of these ideas began with steered molecular dynamics (SMD) techniques that enabled runtime simulation visualization with limited support for interaction with the simulation [1, 2], and has subsequently evolved toward fully interactive molecular dynamics simulation [13, 18]. The fully interactive nature of IMD and similar techniques holds promise for both research [19, 20] and teaching [6], but, until recently, the computational requirements for IMD hindered its applicability, limiting its use to relatively small molecular systems, or requiring the use of HPC clusters or supercomputers for IMD simulations of larger structures. Even under those restrictions, IMD has proved to be a valuable tool for education and outreach, giving non-scientists a captivating view – and, in combination with haptics, also a feel – of the world of computer simulations. Recent advances in the use of GPU computing to accelerate molecular dynamics simulations have brought the performance of GPU-accelerated desktop workstations up to the level of small or mid-sized HPC clusters [21–23], largely eliminating the need for end-users to have expertise in using and managing HPC clusters, and making it possible to perform IMD simulations of moderately sized molecular structures on a single GPU-accelerated desktop workstation. The advantage of GPU acceleration is more pronounced in nano-mechanical and nano-chemical modeling because the many-body models used (e.g.
Tersoff, Stillinger-Weber, AIREBO) have a much higher algorithmic complexity than the potentials used in the life sciences and thus benefit more from the GPU hardware architecture. Even for non-GPU-accelerated applications, the overall performance of a multi-socket, multi-core desktop workstation can be as high as that of a typical moderately sized HPC cluster of less than ten years ago.
3.1 IMD Software Design
VMD supports live display and interaction with running molecular dynamics (MD) simulations through two main software interfaces: a network channel to a molecular dynamics simulation engine, and an input device for motion control, ideally with 6DOF input and haptic feedback. Atomic coordinates, applied and resulting simulation forces, and global simulation properties such as energies, simulation volume, pressure and other quantities are continuously exchanged between VMD and the attached molecular dynamics simulation engine through a TCP/IP socket. This enables VMD to be coupled to a simulation running on the same workstation, or alternatively to a simulation running on a remote HPC cluster or supercomputer, enabling simulations of much larger molecular complexes while maintaining interactivity.
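The coordinate/force exchange over the socket can be illustrated with a small message-packing sketch. The wire format below is hypothetical — it is NOT the actual IMD protocol — but it shows the general shape of encoding per-atom steering forces for transmission in a portable byte order:

```python
import struct

# Hypothetical wire format (not the real IMD protocol): a header carrying a
# message type and atom count, followed by per-atom records of index + force.
HEADER = struct.Struct("!ii")   # network byte order: message type, atom count
COORDS, FORCES = 1, 2

def pack_forces(indices_forces):
    """Encode per-atom steering forces for transmission over the TCP socket."""
    payload = HEADER.pack(FORCES, len(indices_forces))
    for idx, (fx, fy, fz) in indices_forces:
        payload += struct.pack("!ifff", idx, fx, fy, fz)
    return payload

def unpack_forces(data):
    """Decode a force message back into (atom index, force vector) pairs."""
    mtype, n = HEADER.unpack_from(data)
    assert mtype == FORCES
    item = struct.Struct("!ifff")
    out = []
    for i in range(n):
        idx, fx, fy, fz = item.unpack_from(data, HEADER.size + i * item.size)
        out.append((idx, (fx, fy, fz)))
    return out

msg = pack_forces([(42, (0.5, 0.0, -1.0))])
print(unpack_forces(msg))  # -> [(42, (0.5, 0.0, -1.0))]
```

A fixed, explicitly byte-ordered format like this is what allows VMD on a desktop to talk to a simulation engine running on a remote cluster with a different architecture.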
A special “tug” tool (see the description of VR input mechanisms above) allows groups of atoms to be linked to a haptic device. While the user activates the haptic device, its effector follows the linked object, and any force exerted on the effector is translated by VMD into a force on the atoms and communicated to the ongoing MD simulation. A user can feel any objects that the linked atoms bump into, sense how much (relative) force is necessary to move the linked atoms to a specific position, and gain an impression of the linked atoms’ inertia.
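The force translation at the heart of such a tool can be sketched as a spring-like coupling between the haptic effector and the linked atoms. This is an illustrative stand-in (invented function, not VMD code): the force sent to the simulation pulls the atoms' center toward the effector, and its reaction is what the user feels.

```python
def coupling_force(effector_pos, atom_positions, k):
    """Spring-like coupling for a 'tug'-style tool (illustrative): return the
    force pulling the linked atoms' center of geometry toward the haptic
    effector. k is the user-adjustable force-scaling factor."""
    n = len(atom_positions)
    # Center of the linked atom group (unweighted, for simplicity).
    center = [sum(p[i] for p in atom_positions) / n for i in range(3)]
    return tuple(k * (e - c) for e, c in zip(effector_pos, center))

atoms = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]            # center at (1, 0, 0)
print(coupling_force((2.0, 0.0, 0.0), atoms, k=0.5))  # -> (0.5, 0.0, 0.0)
```

The scaling factor `k` is exactly the knob discussed in the next section: it trades manipulability against physical realism and integration stability.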
3.2 Physical Realism and Technical Challenges
Using IMD in combination with haptic devices creates new and unique challenges for molecular visualization applications. The length scales of all-atom classical molecular dynamics simulations are in the range of nanometers; at the same time, those simulations describe processes that happen on pico- to nanosecond time scales. In the visualization process this has to be converted to something that can be managed by human perception, i.e. the objects are shown many orders of magnitude larger, and processes have to be slowed down correspondingly. Typically a molecular dynamics simulation is calculated “off-line” and often “off-site”, i.e. without directly visualizing simulation progress, and on a different machine from where the visualization is performed. During the simulation, configuration snapshots are regularly stored, transferred to the visualization site, and then read into the visualization software to be viewed and analyzed. Typical problem sizes often require a substantial amount of computational power, as provided by clusters or supercomputing centers. Consequently, no interactive manipulation is possible, although a number of biasing methods like steered molecular dynamics or metadynamics exist that can “guide” a system from one state to another via predefined recipes. The alignment of time and length scales for non-interactive simulations is primarily a question of how efficiently the molecular visualization software can render the data. Adjustments can be made either by slowing down animations or by reducing the level of detail. In difficult cases even the visualization can be done “off-line” by producing movies via batch-mode rendering. For interactive viewing of an MD simulation this flexibility is no longer available. The MD simulation must keep pace with the interactive visualization so that the molecular processes of interest can be viewed at interactive speeds.
While less of a problem for small systems, this is a serious problem for large systems, as a powerful (parallel) compute resource is needed. The remote computation resource also needs to be joined with the visualization resource via a high-throughput, low-latency link to enable smooth, stutter-free interaction. Additional complications arise in using a haptic device to interactively manipulate the system in the ongoing simulation. The perceived inertia of the manipulated object depends on the MD simulation rate: the faster the MD simulation runs, the lighter an object “feels” and the more easily it can be manipulated. In principle, this can be adjusted by a scaling factor when computing the force to be sent to the simulation, based on the position of the effector of the haptic
device relative to the atoms it is linked to, but the larger this scaling factor, the less realistic the simulation. If the scaling factor is too large, the MD integration algorithm can become unstable. This becomes even more difficult if one is interested in manipulating a large and slowly moving object immersed in many small and fast ones (e.g. a protein solvated in water, or the cantilever of an atomic force microscope). The visualization time scale would have to follow the slow object, but the simulation must be run at a resolution and timescale appropriate for the small objects. Thus, the demands on MD simulation performance are extremely high, yet the frame rate of the visualization software and the limits of human perception constrain how quickly frames can be processed and communicated, potentially resulting in a jumpy representation of such small objects. This applies as well to the force feedback, where an overly soft coupling of the haptic device to the atoms will result in an indirect feel, while a strong coupling will make an object linked to the effector feel jittery, or lead to unwanted and unphysical resonances in the force-feedback coupling. Filtering of high-frequency molecular motions within the molecular dynamics simulation engine is likely the best method for addressing such timescale-related haptic interaction issues.
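The instability caused by an overly large scaling factor can be demonstrated with a toy model. The sketch below (illustrative only; real MD integrators and haptic couplings are far more elaborate) integrates a harmonic coupling with symplectic Euler, for which stability requires sqrt(k/m)*dt < 2; past that threshold the oscillation blows up instead of staying bounded:

```python
def max_excursion(k, m=1.0, dt=0.1, steps=200, x0=1.0):
    """Integrate a harmonic coupling x'' = -(k/m) x with symplectic Euler,
    a toy stand-in for a haptic spring inside an MD timestep loop.
    Returns the largest |x| seen; a stable coupling stays bounded,
    a too-stiff one diverges."""
    x, v = x0, 0.0
    peak = abs(x)
    for _ in range(steps):
        v += -(k / m) * x * dt   # force scaled by the coupling constant k
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# sqrt(100)*0.1 = 1 < 2: stable.  sqrt(900)*0.1 = 3 > 2: unstable.
print(max_excursion(k=100.0) < 10.0)   # -> True
print(max_excursion(k=900.0) > 1e6)    # -> True
```

The same trade-off appears in the haptic loop: a stiffer coupling gives a more direct feel, but only up to the stability limit set by the simulation timestep.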
4 Future Direction
The increasing computational capabilities of commodity hardware and the availability of affordable and commonly available advanced input devices will allow more realistic interactive modeling and more intuitive and flexible interaction with the visualization. Although VMD supports many 6DOF input modalities, the technical difficulty involved in configuring and maintaining such input devices remains a hurdle for the general user community. For mainstream adoption by non-VR-experts, such devices need to move from being a niche item to something that the majority of users have and can regularly use. In addition, a slight reduction in “immersiveness” and a significant reduction in the (perceived) complexity of VR components, in combination with the use of commodity hardware, would lead to affordable interactive molecular simulation “appliances”. Such appliances could be preconfigured with VR scenarios that would only require that the user provide the molecular model and would automate configuration, launching, and connecting the different hardware and software components. The perceived value of a given component of a VR or immersive display system directly impacts the amount of use of that component for a given session. We have observed that users often utilize a subset of the available VR components (usually choosing those that are least disruptive to their workflow). They might use stereo visualization, but not an enhanced input device, or vice versa. Ultimately, input devices and their interactions with the VR environment must correspond to well understood metaphors in order to gain acceptance among application scientists. Devices that operate in a familiar way are more apt to be used than unique or special purpose VR devices. This makes a strong case for adoption of new input technologies such as multitouch user interfaces that
have become common in recent smartphones (e.g. the iPhone, Android, etc.) and tablets (e.g. iPad) and are slowly becoming a standard feature of mainstream desktop operating systems. As multitouch input devices become more broadly deployed and their associated programming APIs become more standardized, many new opportunities for use will arise in the domain of molecular modeling, particularly for workbenches, walls, and other display modalities of particular interest for small-group collaborative visualization.
While the standardization of input conventions, gestures, and APIs for multitouch input is an area of ongoing effort, all modern smartphones include accelerometers and cameras, and new phones such as the iPhone 4 include gyroscope instrumentation. Together, these provide the necessary data for various kinds of 6DOF motion control, text input, auxiliary 2-D graphical interfaces, and voice input, all in a familiar package that users already own [15–17, 24]. This will encourage everyday use of 6DOF motion control, and in an ideal case, will enable a convenient means for collaborative visualization and modeling among small groups of researchers, wirelessly, and with none of the typical burdensome installation required by traditional 6DOF VR input devices.
We have developed a prototype user interface for VMD that allows a smartphone to be used as a wireless touchpad or 6DOF wand, using the touch-sensitive surface of the phone display and 6DOF data obtained from on-board accelerometer and magnetometer instrumentation, respectively. We envision smartphones being particularly useful for wireless control during interactive presentations and in multi-user collaboration scenarios. In our prototype implementation, the smartphone communicates with VMD via datagram (UDP) packets sent over a local IEEE 802.11 wireless network, and a VMD input device subclass listens for incoming motion control messages, potentially from multiple phones. Our initial experiments have shown that current smartphones hold great promise both as 6DOF input devices and for various other input and program control modalities. The responsiveness of our prototype implementation has already demonstrated that smartphone 6DOF processing overhead and local WiFi network latencies are low enough to be tolerable for typical molecular modeling tasks. Much work remains to improve the quality of 6DOF orientation data, particularly on smartphones that lack gyroscopes. We found that the use of magnetometer data in the orientation calculation could lead to erratic results in some workspaces due to proximity to metal furniture and the like. We expect that smartphones incorporating on-board gyroscopes will provide higher quality 6DOF orientation data and will be less susceptible to errors arising from the physical workspace. If successful, these developments will pave the way for smartphones to be used as ubiquitous multi-modal input devices and auxiliary displays in both single-user and collaborative settings, all without the installation, wiring, and other hassles that have limited the use of traditional VR input mechanisms among molecular scientists.
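The datagram-based motion-control channel can be sketched end-to-end. The packet layout below is hypothetical (not the prototype's actual wire format): six network-order float32s carrying position and orientation, sent over UDP just as a phone on the local WiFi would, demonstrated here over the loopback interface:

```python
import socket
import struct

# Hypothetical packet layout: position (x, y, z) and orientation
# (roll, pitch, yaw) as six float32s in network byte order.
POSE = struct.Struct("!6f")

def send_pose(sock, addr, pos, orient):
    """Phone side: emit one pose packet toward the VMD listener."""
    sock.sendto(POSE.pack(*pos, *orient), addr)

def recv_pose(sock):
    """VMD side: receive and decode one pose packet."""
    data, _ = sock.recvfrom(POSE.size)
    vals = POSE.unpack(data)
    return vals[:3], vals[3:]

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))        # OS-assigned free port, loopback only
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_pose(tx, rx.getsockname(), (0.5, 0.0, -1.0), (0.0, 90.0, 0.0))
pos, orient = recv_pose(rx)
print(pos, orient)  # -> (0.5, 0.0, -1.0) (0.0, 90.0, 0.0)
tx.close()
rx.close()
```

Because UDP is connectionless, one listener can accept packets from several phones at once, matching the multi-user collaboration scenario described above.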
Acknowledgments

This work was supported by the National Institutes of Health, under grant P41RR05969, and the National Science Foundation, under grant no. 0946358. The authors wish to thank Justin Gullingsrud, Paul Grayson, Marc Baaden, and Martijn Kragtwijk for their code contributions and feedback related to the VR and haptics interfaces in VMD over the years. We would also like to thank Russell Taylor for the development and ongoing maintenance of VRPN, and for many useful discussions related to haptic interfaces. A.K. thanks Tom Anderson of Novint Inc. for the donation of two Falcon devices for implementing VRPN support, and Greg and Gary Scantlen for stimulating discussions and more.
References

1. Nelson, M., Humphrey, W., Gursoy, A., Dalke, A., Kalé, L., Skeel, R., Schulten, K., Kufrin, R.: MDScope – A visual computing environment for structural biology. In Atluri, S., Yagawa, G., Cruse, T., eds.: Computational Mechanics 95. Volume 1. (1995) 476–481
2. Leech, J., Prins, J., Hermans, J.: SMD: Visual steering of molecular dynamics for protein design. IEEE Comp. Sci. Eng. 3 (1996) 38–45
3. Humphrey, W., Dalke, A., Schulten, K.: VMD – Visual Molecular Dynamics. J. Mol. Graphics 14 (1996) 33–38
4. Ihlenfeldt, W.D.: Virtual reality in chemistry. J. Mol. Mod. 3 (1997) 386–402
5. Sharma, R., Zeller, M., Pavlovic, V.I., Huang, T.S., Lo, Z., Chu, S., Zhao, Y., Phillips, J.C., Schulten, K.: Speech/gesture interface to a visual-computing environment. IEEE Comp. Graph. App. 20 (2000) 29–37
6. Sankaranarayanan, G., Weghorst, S., Sanner, M., Gillet, A., Olson, A.: Role of haptics in teaching structural molecular biology. In: International Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems. (2003) 363
7. Cruz-Neira, C., Sandin, D.J., DeFanti, T.A.: Surround-screen projection-based virtual reality: The design and implementation of the CAVE. In: Proceedings of SIGGRAPH '93, Anaheim, CA, Association for Computing Machinery (1993) 135–142
8. Czernuszenko, M., Pape, D., Sandin, D., DeFanti, T., Dawe, G.L., Brown, M.D.: The ImmersaDesk and Infinity Wall projection-based virtual reality displays. SIGGRAPH Comput. Graph. 31 (1997) 46–49
9. Pape, D., Anstey, J., Sherman, B.: Commodity-based projection VR. In: SIGGRAPH '04: ACM SIGGRAPH 2004 Course Notes, New York, NY, USA, ACM (2004) 19
10. Taylor II, R.M., Hudson, T.C., Seeger, A., Weber, H., Juliano, J., Helser, A.T.: VRPN: a device-independent, network-transparent VR peripheral system. In: VRST '01: Proceedings of the ACM symposium on Virtual reality software and technology, New York, NY, USA, ACM (2001) 55–61
11. Bierbaum, A., Just, C., Hartling, P., Meinert, K., Baker, A., Cruz-Neira, C.: VR Juggler: a virtual platform for virtual reality application development. In: Proceedings of IEEE Virtual Reality 2001. (2001) 89–96
12. Martens, J.B., Qi, W., Aliakseyeu, D., Kok, A.J.F., van Liere, R.: Experiencing 3D interactions in virtual reality and augmented reality. In: EUSAI '04: Proceedings of the 2nd European Union symposium on Ambient intelligence, New York, NY, USA, ACM (2004) 25–28
13. Stone, J., Gullingsrud, J., Grayson, P., Schulten, K.: A system for interactive molecular dynamics simulation. In Hughes, J.F., Séquin, C.H., eds.: 2001 ACM Symposium on Interactive 3D Graphics, New York, ACM SIGGRAPH (2001) 191–194
14. Kreylos, O.: Environment-independent VR development. In: ISVC '08: Proceedings of the 4th International Symposium on Advances in Visual Computing, Berlin, Heidelberg, Springer-Verlag (2008) 901–912
15. Angus, I.G., Sowizral, H.A.: Embedding the 2D interaction metaphor in a real 3D virtual environment. Volume 2409, SPIE (1995) 282–293
16. Watsen, K., Darken, R.P., Capps, M.V.: A handheld computer as an interaction device to a virtual environment. In: Proceedings of the Third Immersive Projection Technology Workshop. (1999)
17. Hartling, P.L., Bierbaum, A.D., Cruz-Neira, C.: Tweek: Merging 2D and 3D interaction in immersive environments. In: Proceedings of the 6th World Multiconference on Systemics, Cybernetics, and Informatics. Volume VI., Orlando, FL, USA (2002) 1–5
18. Férey, N., Delalande, O., Grasseau, G., Baaden, M.: A VR framework for interacting with molecular simulations. In: VRST '08: Proceedings of the 2008 ACM symposium on Virtual reality software and technology, New York, NY, USA, ACM (2008) 91–94
19. Grayson, P., Tajkhorshid, E., Schulten, K.: Mechanisms of selectivity in channels and enzymes studied with interactive molecular dynamics. Biophys. J. 85 (2003) 36–48
20. Hamdi, M., Ferreira, A., Sharma, G., Mavroidis, C.: Prototyping bio-nanorobots using molecular dynamics simulation and virtual reality. Microelectronics Journal 39 (2008) 190–201
21. Phillips, J.C., Stone, J.E., Schulten, K.: Adapting a message-driven parallel application to GPU-accelerated clusters. In: SC '08: Proceedings of the 2008 ACM/IEEE Conference on Supercomputing, Piscataway, NJ, USA, IEEE Press (2008)
22. Anderson, J.A., Lorenz, C.D., Travesset, A.: General purpose molecular dynamics simulations fully implemented on graphics processing units. J. Comp. Phys. 227 (2008) 5342–5359
23. Stone, J.E., Hardy, D.J., Ufimtsev, I.S., Schulten, K.: GPU-accelerated molecular modeling coming of age. J. Mol. Graph. Model. 29 (2010) 116–125
24. Hachet, M., Kitamura, Y.: 3D interaction with and from handheld computers. In: Proceedings of IEEE VR 2005 Workshop: New Directions in 3D User Interfaces. IEEE (2005)