Warping to Enhance 3D User Interfaces

Bruce H. Thomas
School of Computer and Information Science
University of South Australia
The Levels, SA, Australia 5095
[email protected]
Abstract

Cartoon animation techniques have previously been used to enhance the illusion of direct manipulation in 2D graphical user interfaces. In particular, animation may be used to convey a feeling of substance to the objects being manipulated by the user. This paper presents an expansion of this concept to 3D graphical object manipulation. A standard set of 3D direct manipulation operations has been extended to include animated visual feedback that adds substance, provides operation cues for the user, and visualises constraints. Visual feedback effects using 3D warping can substitute for haptic feedback, such as the squashing of an object when it is pressed against a wall or the stretching of an object to show frictional forces. Finally, a pinning effect is explored for multiple users manipulating a common object in a collaborative environment.
1. Introduction

This paper presents an extension to joint research conducted with Paul Calder in the area of animating direct manipulation interfaces [8]. The original work concentrated on animation effects for visual feedback in 2D user interfaces, whereas this paper extends these animation effects to 3D interfaces. In common with recent work [1], I have set out to explore how techniques borrowed from cartoons and computer animation can enhance the experience of interacting with a computer. However, I am not concerned here with portraying animated data, as would be the case when operating animation editing tools (such as an algorithm animation editor) or when using animation to supplement the presentation of otherwise static information (such as an animated help system). Rather, I wish to apply animation to the interface itself, to enhance or augment the effectiveness of human interaction with applications that present a graphical interface.

My 2D animated interface work to date has focused on the following: 1) describing a set of animation effects for direct manipulation interfaces; 2) showing these animation effects to be effective and enjoyable for users; and 3) implementing a set of software tools to support these animation effects. In the 2D warping investigation, two human factors experiments were performed to determine the effectiveness and usability of a 2D drawing editor that had been modified to include animated direct manipulation [7]. The investigation showed that computer-literate users like the idea of cartoon-style animation in direct manipulation drawing editor interfaces. In particular, they felt that animation improved the visual feedback for constrained operations such as movement under the influence of “gravity”, and they rated the animation enjoyable and intuitive to use for a range of editing tasks. Moreover, when given a choice, such users chose to set the magnitude of the animation effects to be clearly evident.
This positive use of warping in the 2D domain has been extended to a set of 3D warping effects. The distortion of 3D graphical objects presented in this paper aims to enhance a user’s impression of substance when interacting with objects in an application. In addition, the animation effects can provide visual feedback on the type of operation being applied to an object, and they may augment visual cues for constraints. For virtual reality applications, the animation effects may serve as a useful substitute for haptic feedback and provide additional visual cues for collaborative operations. Fundamental to the animation effects presented in this paper is the distortion of graphical objects through warping. The paper first presents the warping effects for the above enhancements to 3D interfaces; these descriptions include discussions of applications currently under development. The implementation issues of warping in the 3D domain are then discussed. Finally, some concluding remarks are presented.
2. Substance

Simple feedback techniques based on drawing object outlines tend to give the impression that a direct manipulation operation is happening to a surrogate object rather than the real thing. When moving objects in the Macintosh Finder, and in many other applications that use a direct manipulation interface, a simple outline of the object follows the pointer around the screen. For some movement tasks this technique conveys the necessary feedback information; it does not, however, attempt to convince users that they are moving real objects. Drawing the full-featured object during the operation gives the object solidity, but can still fail to convey a sense of substance; somehow it seems too easy to manipulate the object. More ambitious animated feedback can make the interaction more convincing.

To be convincing, solid-seeming objects must do more than look solid; they must also feel solid. The approach I have taken is to consider how a cartoon animator would depict such behaviour. One technique widely used by cartoon animators is to mimic physical effects [3], such as inertia and friction, to reinforce the illusion of substance. For example, to suggest movement caused by someone dragging an object, an animator could show the object distorted in the direction of the pull. To suggest an attempt to move a fixed object, the animator could show the object leaning in the direction of the pull while its base stays fixed in place. With techniques like these, an application can create the illusion of a greater sense of substance for its objects while still allowing users to feel that they are in control. The overall effect is that the user has a stronger sense of progress being made [8].

Figure 1 shows a 3D direct manipulation move applied to a square. The figure shows the move operation with animation, where the corner of the square stays attached to the mouse point while the bulk of the object lags slightly behind.
This animation gives the effect of manipulating a heavy “rubbery” object that distorts as it is pushed and pulled. Although the effect does not faithfully mimic physical reality, this simple algorithm gives the impression that the shape is made of elastic material, with weights attached to the vertices, causing them to lag behind the movement. The grasped point is held by a cursor, and the remaining vertices of the graphical object lag behind when the object is moved. The grasped point need not be a vertex; it may be any point on the surface of the object. Graphical objects can be either pulled (as in Figure 1) or pushed (as in Figure 2).

Figure 1. Animated 3D move operation

Figure 2. Pushed In Corner
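As a concrete illustration of this lag effect, the following is a minimal sketch (the function names and distance-based falloff are my own assumptions, not the paper's implementation): each vertex receives a fraction of the grasp displacement that decreases with its distance from the grasped point, so the grasped point tracks the cursor while the bulk of the object lags behind.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static double dist(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Move warp: the grasped point follows the cursor exactly, while
// vertices farther from it receive a smaller share of the
// displacement, so the bulk of the object lags behind.
// `softness` controls how quickly the lag falls off with distance.
void moveWarp(std::vector<Vec3>& verts, const Vec3& grasp,
              const Vec3& delta, double softness) {
    for (auto& v : verts) {
        double w = 1.0 / (1.0 + dist(v, grasp) / softness); // 1 at the grasp point
        v.x += w * delta.x;
        v.y += w * delta.y;
        v.z += w * delta.z;
    }
}
```

Animating the effect then amounts to redrawing the warped shape as the drag displacement grows, and relaxing the lagging vertices to their final positions once the drag ends.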
3. Cues For Manipulation Operations

Window-based direct manipulation applications generally use a 2D input device, such as a mouse. For 2D applications this form of input is quite natural, since mouse movements are mapped directly to cursor movements on the display surface, providing an intuitive one-to-one mapping of a user’s actions to the visual feedback on the display. This one-to-one mapping breaks down for 3D applications because a mismatch arises between the user’s actions and what the user sees on the display. For example, the user may move the mouse to the right to have a graphical object recede away from the current viewpoint. These movements of the mouse are orthogonal to the movement of the object on the screen; in fact, they are the same movements needed to move the graphical object to the right. The direction of movement of a graphical object is determined by the current mode of the editor. This problem is compounded by the similarity between the visual feedback for scaling down an object and for translating it away: the object reduces in size in both cases.

Applications that require 3D manipulation (such as translation, rotation, and scaling) with a 2D device use special widgets to allow the user to manipulate an object in 3-space. For example, the warping effects for translation, rotation, scaling, and pinning have been built on top of the Inventor [5] toolkit from Silicon Graphics Incorporated (SGI),
which provides such widget classes. The application presented here experimented with the SoHandleBoxManip and SoTransformBoxManip classes. These widgets allow users to translate, rotate, or scale an object, depending on which portion of the widget is grasped. The above-mentioned interaction mismatch can easily occur if a user misses the correct portion of the widget and thus performs the wrong operation: the user sees what appears to be the correct outcome, and only after viewing the object from a different position can they determine whether it is correct. Having different warping visual feedback for the different transformations gives the user the cues necessary to determine whether the correct operation is being applied.
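The ambiguity between scaling an object down and translating it away can be seen directly in a pinhole projection model. The toy calculation below (my own illustration, not part of the described system) shows that halving an object's size produces exactly the same on-screen extent as doubling its depth:

```cpp
#include <cassert>
#include <cmath>

// Pinhole projection: an object of half-width w at depth z projects
// to a screen half-width of f * w / z for focal length f.
double projectedHalfWidth(double f, double w, double z) {
    return f * w / z;
}
```

Because both edits leave the silhouette the same size on screen, the user cannot tell from a single viewpoint which operation occurred, which is why a visually distinct warp signature per operation is useful.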
3.1. Scaling and Rotating

The animation effects described above for translation are also effective when used in conjunction with other common direct manipulation operations, in particular scaling and rotating. The basic warping principles are the same, but the visual effects look quite different. For example, Figure 3 illustrates a 3D operation to make an object smaller by scaling. As in the translation example, during the animated scale operation the part of the object that is grasped is controlled by the user, while the bulk of the object lags behind. The overall visual effect is of a star-shaped object shooting out from the centre of the object. In the reverse 3D animated scaling operation, which makes an object larger, the vertices are pulled towards the centre of the object. The new 3D rotate animation effect for a clockwise rotation is shown in Figure 4. Once again the animation effect has the object lag behind so as to give the illusion that the object has inertia; the vertices are curled back around the object. The warping effects for the three standard direct manipulation operations (translation, scaling, and rotation) are all visually unique. They still provide the user with a sense of object substance, while providing easily recognisable operation cues.
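The rotational lag can be sketched in the same spirit as the move warp (again with assumed names and falloff, not the paper's code): each vertex rotates through an angle that shrinks with its distance from the grasped point, so distant vertices curl back behind the rotation.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static double dist(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Rotation warp about the z axis through `centre`: the grasped point
// turns through the full angle, while vertices far from it turn
// through a smaller one and so appear to curl back behind the motion.
void rotateWarp(std::vector<Vec3>& verts, const Vec3& centre,
                const Vec3& grasp, double angle, double softness) {
    for (auto& v : verts) {
        double a = angle / (1.0 + dist(v, grasp) / softness); // lagged angle
        double dx = v.x - centre.x, dy = v.y - centre.y;
        v.x = centre.x + dx * std::cos(a) - dy * std::sin(a);
        v.y = centre.y + dx * std::sin(a) + dy * std::cos(a);
    }
}
```

A scale warp follows the same pattern, interpolating each vertex between its original and fully scaled position by the same distance-based weight.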
4. Constraint Visualisation

The concept of anticipation is fundamental to the animation world, as it is for static drawings. Anticipation can be used to enhance operations that represent forms of geometric constraints outside the user’s control. For example, if an object is constrained to one fixed location, animation can provide feedback to indicate that the object cannot be moved or altered by the user. When a user tries to move the pinned object, the application attempts to keep it stationary. Animation can also be used to suggest the resulting action when the user releases the pinned object. This section presents the animation effects to support visualisation of the constraints inherent in the pinning and the snap-and-drag operations.
Figure 3. Animated 3D Scale Down Effect
Figure 4. Animating a 3D Rotate Effect

Figure 5. 3D Pin Effect
4.1. Pinning

The pinning animation supplies simple visual constraining effects that can convey extra information for direct manipulation operations. Consider an attempt to move an object that is fixed in place, i.e. pinned. One response to this attempt might simply be to prevent the object from following the mouse. However, this lack of visible feedback might be misinterpreted as the result of a failure to “grasp” the object correctly; a user might make several attempts at the operation before realising the true cause of the lack of response. Another strategy might be to allow the object to follow the mouse, but then to snap it back to its original place when released. This approach avoids the problem of lack of feedback, but can lead to surprises when a carefully placed object suddenly jumps back to a previous position. Figure 5 shows a single frame of a 3D pinning animation effect that avoids both problems. As the user attempts to drag the pinned object, the grasped point stays attached to the mouse but the bulk of the object stays fixed. The effect is as if the user is pulling on a corner of an object that is anchored in place. The feedback provides extra information: it makes it clear that the user is attempting to move the object, but that the attempt is not succeeding. When the grasped point is released, the object springs back to its original shape.
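One way to realise this spring-back behaviour, sketched here with assumed names and an exponential falloff of my own choosing, is to warp a copy of the rest shape; keeping the rest shape untouched makes the spring-back on release trivial.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static double dist(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Pin warp: the grasped point follows the cursor, but the displacement
// dies away exponentially with distance from it, so the bulk of the
// object stays anchored in place.  The rest shape is not modified, so
// releasing the grasp simply redraws `rest` (the spring-back).
std::vector<Vec3> pinWarp(const std::vector<Vec3>& rest, const Vec3& grasp,
                          const Vec3& delta, double stiffness) {
    std::vector<Vec3> warped = rest;
    for (auto& v : warped) {
        double w = std::exp(-stiffness * dist(v, grasp)); // 1 at grasp, ~0 far away
        v.x += w * delta.x;
        v.y += w * delta.y;
        v.z += w * delta.z;
    }
    return warped;
}
```

The `stiffness` parameter plays the role of how firmly the object is pinned: a high value keeps all but the grasped corner rigidly in place.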
4.2. Snap and Drag

Snapping is a technique commonly used as an aid for accurate positioning in direct manipulation systems. Key features on the object being manipulated are snapped to nearby “hot spots” such as other objects or regularly spaced grids. Graphical feedback in systems that use snapping is difficult to implement without animation because of the tension between the position constraint implied by the snapping action and the need to accurately track user input. One possibility is to prevent the object from moving at all until it is dragged sufficiently far from a grid point, then to have it suddenly snap to the next point. However, with this scheme users may once again be uncertain whether the object has been grasped. Another possibility is to allow the object to move freely, but to make it snap to the nearest grid point when let go. This scheme, though, may lead to surprises when an object unexpectedly jumps on release. The pinning animation effect can be used to avoid both of these pitfalls.
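A small sketch of the snapped-drag state (hypothetical helper names; the residual computation is my own illustration): the object's reference point sits on the grid, while the grasped point stays warped onto the cursor, so the stretch between them makes the snap constraint visible without any surprise jump on release.

```cpp
#include <cassert>
#include <cmath>

// Snap the object's reference coordinate to the nearest grid point.
double snapToGrid(double x, double spacing) {
    return std::round(x / spacing) * spacing;
}

// The visible stretch between the snapped object and the cursor:
// this residual is what a pin-style warp would display to the user.
double snapResidual(double cursor, double spacing) {
    return cursor - snapToGrid(cursor, spacing);
}
```

When the residual shrinks to zero the warp relaxes and the object sits exactly on the grid, with no sudden movement at release time.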
5. Haptic Feedback

Interaction with virtual reality (VR) systems can be characterised by the following features: immersion, rich interaction, and presence [10]. Users find interacting with 3D graphical objects difficult in current VR environments, for example in the precise placement of objects or the visualisation of object collisions, so additional feedback for interacting with 3D objects is essential. One solution is to add haptic feedback, as in the nanoManipulator system [6]. This system provides a virtual-environment interface with haptic feedback for scanned-probe microscopes: a force feedback pointing device allows chemists and physicists to experience the feel of atomic surfaces and molecules. A problem is that general-purpose haptic feedback devices that do not restrict the user’s mobility are not yet available or practical [4]. Mine et al. [4] have been investigating replacing haptic feedback with body-relative interaction techniques based on the framework of proprioception, a person’s sense of the position and orientation of the various parts of their body. They investigated three methods of using these body-relative interaction techniques: direct manipulation, physical mnemonics, and gestural actions.

I have developed two 3D warping effects for use in the VR domain as a substitute for haptic feedback: firstly, the squashing of an object when pressed against a wall, and secondly, the stretching of an object to show frictional forces. In poorly designed 3D virtual worlds, objects pass through
one another during a collision. Not only does this fail to reflect what the user expects, but the user may also be unaware of the collision. The squashing effect in Figure 6 shows a manipulated object prevented from passing through a solid wall.¹ This provides visual cues that the object and the wall have substance, giving the user a feeling that forces are being applied. The pyramid crumples as it is pressed against the wall, and during this process the portion closest to the base of the pyramid stretches out. This stretching maintains the illusion of conservation of mass. As in the case of collisions, objects brushing up against one another require the depiction of forces to provide realistic visual feedback of the interaction. When an object brushes up against a wall, as in Figure 7, the friction effect is displayed by the tip of the object remaining attached to the wall while the object is still moving. This gives the illusion that there is a frictional force between the object and the wall. The amount the pyramid is stretched is greater than would occur in a normal physical effect; animators use the counter-intuitive principle of making actions larger than life, or exaggerated [3]. There is also an anticipation that it will snap back to its original shape. Anticipation gives the viewer cues as to what is about to happen in the animation; it is shown as internal tension, or potential energy.

The squash and friction effects are being investigated as visual feedback for the previously mentioned nanoManipulator system. When taking direct control of the microscope tip, there is a connection between the displayed force model, the graphics display, and the actual tip motion that is not well modelled in the current system. The squashing and friction effects may be used to show this relationship explicitly, making the force and visual models correspond with each other. It is possible that the deformation will give enough information about the forces being applied to serve as a stand-in for system configurations that do not have a force-feedback device.

Figure 6. Squash Effect

Figure 7. Friction Effect

¹ The sequence of the diagrams in Figures 6, 7, and 8 is as follows: 1. the upper left-hand corner, 2. the upper right-hand corner, 3. the lower left-hand corner, and 4. the lower right-hand corner.

6. Collaborative Virtual Environments
Collaborative virtual environments (CVEs) are multi-user distributed systems in which several users share a 3D virtual space. The interactions among users may cause conflicts, and it may be difficult to convey to the users involved exactly what the conflicts are. For example, in a 3D environment where users can grasp and move objects, if two users grasp the same object simultaneously the system must decide what should happen. Should one user be given the object and the other denied? Should both be denied and the object frozen until one lets go? If the object is frozen in place, how will the users know that the system has not simply failed? This kind of information arises from subtle interactions between users, and so does not have to be communicated in a single-user system; consequently, techniques for conveying these subtle interface cues are largely absent from the literature. The approach presented here for conveying user interaction is to visibly animate the 3D objects in different ways as the users affect them. The animation method currently being studied is real-time shape warping. A prototype system called MUVEE (multi-user virtual environment editor) has been built [9]. MUVEE is a multi-user distributed system that allows several different people to create and modify 3D objects in a shared virtual space; it embodies the warping effects for direct manipulation operations in an experimental system. The objects are warped as users grasp, move, scale, and rotate them. Collaborative systems are required to provide visual cues when multiple users are simultaneously manipulating a particular data object. The tug-of-war effect shown in Figure 8 depicts a form of visual feedback when two users attempt to move an object simultaneously. The object is first moved by one user. Once the object has been grasped by a second user, the object stops moving.
While the two users attempt to move the object, the grasped corners of the object are stretched out, demonstrating that no single user has full control of the object.
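Warping effects like this tug of war can be realised by interpolating a set of bound displacement vectors, one per grasp point. The inverse-distance sketch below is my own assumption about a weighting scheme consistent with this behaviour, not the paper's Warp class:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// A bound vector: a displacement `v` applied at the point `tail`.
struct BoundVector { Vec3 tail; Vec3 v; };

static double dist(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Displace a point by inverse-distance interpolation of the bound
// vectors: a point at a tail gets exactly that vector, a point
// equidistant from two tails is affected equally by both, and a point
// far from all tails tends towards the mean of the vectors.
Vec3 warpPoint(const Vec3& p, const std::vector<BoundVector>& warp) {
    Vec3 out{0, 0, 0};
    double wsum = 0.0;
    for (const auto& b : warp) {
        double d = dist(p, b.tail);
        if (d < 1e-12) return b.v;  // coincident with a tail
        double w = 1.0 / d;
        out.x += w * b.v.x;
        out.y += w * b.v.y;
        out.z += w * b.v.z;
        wsum += w;
    }
    out.x /= wsum; out.y /= wsum; out.z /= wsum;
    return out;
}
```

With one bound vector per user's grasp point pulling in opposite directions, vertices near each grasp stretch towards that user, while points midway between the grasps barely move: neither user gains control of the object.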
Figure 8. Tug of War Effect

7. Implementation

The animation effects presented in this paper were developed in two different applications on SGI graphics workstations. The three effects of squashing, friction, and tug of war were embedded in an immersive VR application. The second application is the MUVEE system, described earlier. The library of object classes used in both applications to represent and draw the graphical objects, with and without warping, is based on a package written by Kenneth Hoff [2]. The data structures from this package come in three models: 1) TriModel, a basic triangle model composed of an array of “basic” triangles; 2) SharedVertTriModel, a triangle model composed of triangles with shared vertices; and 3) SharedVertNormTriModel, a triangle model composed of triangles with shared vertices and vertex normals. Two additions have been made to Hoff’s package to support the warping of all three models. The first is a triangulation function to subdivide each triangle in a TriModel into four smaller triangles covering the same area in 3-space. The second is a set of warping functions for each of the three models.

The warp transformation has been embodied in a new Warp C++ class, defined to represent a set of warp vectors; it is an extension of the 2D version defined in [8]. The remainder of this section provides an overview of the warp transformation. The warping transformation is characterised by a set of bound vectors² that describe the transformations applied to key points in the coordinate space. Transformations for points that do not coincide with vectors are calculated by interpolating between the vectors. To add a particular animation effect to an interaction, an application need only calculate a set of vectors that characterise an appropriate warp. Apart from its simplicity and relative efficiency (the algorithm has complexity O(n) for n warp vectors), the implementation of the warp leads to several properties that simplify the selection of a suitable set of vectors for a desired warp effect: 1) points coincident with one of the warp vectors are displaced by the amount of that vector; 2) points equidistant from two vectors are affected equally by those vectors; and 3) in the limit, the combined effect of distant vectors approaches the mean of all vectors.

² A bound vector is defined as a pair of values: a vector and the location of the tail of that vector.

8. Conclusion

This paper reported on providing cartoon animation for visual cues during the operation of direct manipulation style interfaces. The application domains for the 3D graphical objects were window-based and VR applications. In the window-based application domain, a standard set of 3D direct manipulation operations was extended to include animated visual feedback. Three effects using 3D warping were presented for use in the VR domain.
References

[1] B.-W. Chang and D. Ungar. Animation: From cartoons to the user interface. In Proceedings of the ACM SIGGRAPH Symposium on User Interface Software and Technology, pages 45–55, 1993.

[2] K. Hoff. Useful triangle model for use with OpenGL. Dept. of Computer Science, The University of North Carolina, Chapel Hill, 1997.

[3] J. Lasseter. Principles of traditional animation applied to 3D computer animation. In SIGGRAPH ’87, pages 35–44, Anaheim, CA, July 1987. ACM.

[4] M. Mine, F. P. Brooks Jr., and C. Sequin. Moving objects in space: Exploiting proprioception in virtual-environment interaction. In Proceedings of SIGGRAPH 97, to appear, Los Angeles, CA, 1997.

[5] P. S. Strauss. IRIS Inventor, a 3D graphics toolkit. In A. Paepcke, editor, OOPSLA ’93, pages 192–200, Washington, D.C., Oct. 1993.

[6] R. M. Taylor, W. Robinett, V. L. Chi, F. P. Brooks Jr., W. V. Wright, R. S. Williams, and E. J. Snyder. The nanoManipulator: A virtual-reality interface for a scanning tunneling microscope. In Proceedings of SIGGRAPH ’93, pages 127–134, Anaheim, CA, Aug. 1993.

[7] B. Thomas, P. Calder, and V. Demczuk. Experiments with animating direct manipulation in a drawing editor. In ACSC’98, The 21st Australasian Computer Science Conference, to appear, Perth, Australia, Feb. 1998.

[8] B. H. Thomas and P. R. Calder. Animating direct manipulation interfaces. In Proceedings of the ACM Symposium on User Interface Software and Technology, pages 3–12, Pittsburgh, Nov. 1995.

[9] B. H. Thomas and D. Stotts. Warping distributed system configurations. In 4th International Conference on Configurable Distributed Systems, to appear, Annapolis, Maryland, USA, May 1998.

[10] M. Wloka. Interacting with virtual reality. In Virtual Environments and Product Development Processes, 1995.