UI Concepts of VR Systems for Product Development

Andre Stork, Pedro Santos, Stefan Wundrak
Fraunhofer Institute for Computer Graphics, A2 - Industrial Applications
Fraunhoferstr. 5, D-64283 Darmstadt, Germany
{Andre.Stork, Pedro.Santos, Stefan.Wundrak}@igd.fhg.de

Abstract

VR systems are well accepted in the product development process today. However, compared to desktop systems, this technology has not reached broad penetration and a high number of installations, due to the high investment that VR technology, especially its hardware, requires. On the user interface side, many VR systems adopted ideas from traditional WIMP interfaces (windows, icons, menus, pointers) without maintaining a smooth transition for users from desktop environments to immersive environments and vice versa. In addition, a large number of multimodal interaction techniques have been developed to make VR interaction more intuitive. This paper discusses various user interface concepts of VR systems and their use along the product development process. We give examples of solutions and projects developed in collaboration with different companies, such as BMW, Italdesign, FIAT and Airbus.

1 Introduction

The product development process consists of a number of stages, where the earlier ones deal more with virtual models and the later ones more with physical ones. When the idea of Virtual Reality was conceived, it seemed as if it would only be a matter of time until physical models were completely replaced by virtual ones. Today, the limitations of Virtual Reality are clearly visible and the need for physical mock-ups is unquestionable. Undoubtedly, though, the number of physical mock-ups could be reduced considerably thanks to virtual models. In addition to Virtual Reality, the technology of augmenting physical models has been developed. Meanwhile Augmented Reality (AR) is used more and more in the different stages of the product development process. Beginning at the later stages, AR has conquered more and more applications in the earlier stages, e.g. in the styling phase of a car, where physical models of the former generation are augmented with design alternatives for the new generation.

(Fig. 1 - VR/AR and the product development process: styling, design, analysis, production planning, production, use, maintenance, recycling)

Another development can be clearly traced: VR's breakthrough only came with immersive projection technology. Only after moving away from highly obtrusive HMD and data-glove technology did VR make its way out of the labs and into industrial applications. AR is still suffering from immature hardware: no convincing optical see-through stereo HMDs exist that could drive AR to its full potential. With the varying hardware environments, different user interface (UI) concepts have been developed. It became obvious that six degrees of freedom are not always beneficial and that task-specific interaction techniques are required to best perform dedicated actions in VR, AR and at the desktop using VR devices. In this paper we give an overview of our VR and AR activities along the product development chain (Fig. 1) and the UI concepts that we are developing. In industry-related projects the need to provide a seamless integration between 2D and 3D became more and more apparent. Thus, we focus many of our developments on bridging the gap between 2D and 3D interaction. The UI concepts described in the remainder of the paper are:

• Stroke-based input techniques for curves and surfaces (car styling phase)
• Gesture recognition
• Speech input
• Finger tracking for virtual tape drawing
• Pie menus vs. a PIP (Personal Interaction Panel)
• Hybrid Objects (for a desktop VR system in the analysis domain)

Note that other papers accepted for HCI 2005 describe some of the presented techniques in more detail.

2 Stroke-based input

In product design, designers still prefer to work with pen and paper. This is because, on the one hand, these are the tools they were trained with and, on the other hand, they are the most intuitive ones. To persuade a designer to use a computer system instead, the hardware and software interface should be similarly intuitive and make use of these trained skills. Unfortunately, Computer Aided Styling (CAS) software is in general not simple for designers to use; it requires long adaptation training and an understanding of the mathematics behind the geometry in order to modify curves and surfaces. Because strokes are usually represented as Bézier splines, the user has to manipulate control points to change a curve. When using a pen as an input device it is much more appropriate to simply redraw parts of a curve to refine it. The technique that allows this is called oversketching (Fig. 2). This interaction is very close to the sketching behaviour of designers on paper.

(Fig. 2 - Oversketching)

Oversketching in an immersive environment was presented by Bruno et al. [1]. Our oversketching implementation is integrated in SketchAR, our immersive modelling and design system, which is targeted at the early stages of automotive design. Details about the set-up can be found in Fiorentino et al. [2] and Fleisch et al. [3]. The 3D input data for curve creation comes from an optical tracking system [4] using the Cyberstilo [5], a 3D input device handled like a pen. When the user moves the Cyberstilo, the tracker delivers continuous data on the pen position. For computational geometry the software relies on the CAD kernel ACIS [6]. Although our software works with mathematical geometry representations, our 3D oversketching approach operates on discrete curve points. Converting from splines to discrete curves and vice versa is done by ACIS; the mathematical background can be found in [7]. An extension of the idea of oversketching adds accuracy by constraining the changes to two coordinates. To perform a constrained oversketch, the user best chooses a 2D orthographic view of the profile he intends to change. For example, in Figure 3 the oversketch is done on the xy-plane. The modification of the curve is now constrained in such a way that the original profile in the xz-plane is not changed.

(Fig. 3 - Constrained Oversketching)
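To illustrate the idea, the following minimal Python sketch splices an oversketched stroke into a discrete polyline and, for the constrained case, keeps one coordinate of the original span untouched so that its profile in the corresponding plane is preserved. This is only an illustration of the principle, not the SketchAR/ACIS implementation; the nearest-point matching and the resampling strategy are simplifying assumptions.

import numpy as np

def oversketch(curve, stroke, keep_axis=None):
    """Splice an oversketched stroke into a discrete curve (both N x 3 arrays).

    The curve segment between the points closest to the stroke's endpoints is
    replaced by the stroke. If keep_axis is given (0=x, 1=y, 2=z), the stroke
    only changes the other two coordinates: the kept coordinate is carried
    over from the original span, so e.g. an oversketch in the xy-plane with
    keep_axis=2 leaves the curve's depth profile untouched.
    """
    curve = np.asarray(curve, dtype=float)
    stroke = np.asarray(stroke, dtype=float)

    # Indices of the curve points nearest to the stroke's start and end.
    i0 = int(np.argmin(np.linalg.norm(curve - stroke[0], axis=1)))
    i1 = int(np.argmin(np.linalg.norm(curve - stroke[-1], axis=1)))
    if i0 > i1:                          # stroke drawn against curve direction
        i0, i1 = i1, i0
        stroke = stroke[::-1]

    replacement = stroke.copy()
    if keep_axis is not None:
        # Carry the constrained coordinate over from the original span,
        # resampled to the stroke's point count.
        span = curve[i0:i1 + 1]
        s = np.linspace(0.0, 1.0, len(span))
        t = np.linspace(0.0, 1.0, len(replacement))
        replacement[:, keep_axis] = np.interp(t, s, span[:, keep_axis])

    return np.vstack([curve[:i0], replacement, curve[i1 + 1:]])

In the actual system the resulting discrete curve is converted back to a spline representation by ACIS, as described above.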

3 Gesture recognition

A primary goal of virtual environments is to support natural, efficient, powerful and flexible interaction. If the interaction technology is overly obtrusive, difficult to use or constraining, the user's experience is severely degraded. If the interaction itself draws attention to the technology rather than to the task at hand, it is distracting and inappropriate. The traditional two-dimensional WIMP interface is not well suited for virtual environments. Instead, such environments offer many different sensing modalities and technologies to integrate into the user experience. Devices which sense body position, hand position, direction of gaze and other aspects are used to convey the user's intention to the system. The multimodal input techniques used in virtual environments appear to be promising and more efficient than the traditional ones. To support natural communication more fully, one of the goals is to interpret human motion. While tracking the user's position is helpful for direct control of rendering and virtual movements, users normally express more complicated intentions by speech or gestures. Gesture recognition provides a way to interpret such intentions.

(Fig. 4 - Gesture recognition in SketchAR: 'delete' and 'select' pen gestures)

One way to capture, interpret and associate gestures with operations is pen-based gesture recognition. Sutherland's early Sketchpad system of 1963 [8], for example, used light-pen gestures. There are examples for document editing [9,10], for air traffic control [11], and for design tasks such as editing splines [12]. Oviatt [13] has demonstrated significant benefits of using speech and pen gestures together in certain tasks (see the next section for an application of such a technique). Zeleznik et al. [14] and Landay and Myers [15] developed interfaces that recognize gestures from pen-based sketching. Although pen-based gesture recognizers such as CALI [16] work well in proximity to a flat screen or surface, they can also be adapted to virtual environments, in this case to SketchAR (Fig. 4), and provide satisfactory results.
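As a simple illustration of pen-based gesture recognition, the following Python sketch classifies a 2D pen stroke by resampling and normalizing it and comparing it against stored templates, here hypothetical 'select' and 'delete' gestures. This nearest-template approach is only a stand-in for recognizers such as CALI [16], which works with fuzzy logic over geometric features.

import numpy as np

def normalize(stroke, n=32):
    """Resample a 2D pen stroke to n points, centre it and scale it to unit size."""
    stroke = np.asarray(stroke, dtype=float)
    seg = np.linalg.norm(np.diff(stroke, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])          # arc-length parameter
    t = np.linspace(0.0, s[-1], n)
    pts = np.column_stack([np.interp(t, s, stroke[:, i]) for i in range(2)])
    pts -= pts.mean(axis=0)
    scale = np.abs(pts).max()
    return pts / scale if scale > 0 else pts

def classify(stroke, templates):
    """Return the name of the template with the smallest mean point-wise distance."""
    q = normalize(stroke)
    scores = {name: np.linalg.norm(q - normalize(t), axis=1).mean()
              for name, t in templates.items()}
    return min(scores, key=scores.get)

# Hypothetical templates: a circle for 'select', a zig-zag for 'delete'.
theta = np.linspace(0.0, 2.0 * np.pi, 16)
templates = {
    "select": np.column_stack([np.cos(theta), np.sin(theta)]),
    "delete": np.array([[0, 0], [1, 1], [2, 0], [3, 1], [4, 0]], dtype=float),
}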

4 Speech Recognition

According to Jakob Nielsen, one of the most notable human-computer interaction specialists [17], the use of voice makes sense in special cases: when users with disabilities cannot use other input devices, when users are in eyes-busy or hands-busy situations, or when they simply do not have access to other input devices. In general the second case is the most prominent one in virtual and augmented reality environments. Speech and pen gestures are used for virtual tape drawing in SketchAR, our immersive design and modelling system, when the user needs one hand to fix the tape to the model and the other to control the tangent (Fig. 5). A number of factors influence the performance of speech recognition, such as noise, the correct placement of microphones, and whether just commands or whole spoken sentences are to be recognized. If just commands are to be issued, the best results have been achieved by choosing pre-defined two-word combinations, so that they are not confused with regular spoken text. In virtual tape drawing, speech commands like "create line" and "stop line" are used to initiate and stop line drawing, and other two-word combinations are used to choose the colour or thickness of lines.

(Fig. 5 - Virtual Tape Drawing with speech input)
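A minimal sketch of the command dispatch behind such two-word speech input: recognized utterances are matched against a small fixed grammar, and anything that is not an exact two-word command is ignored, so ordinary speech does not trigger actions. The command set beyond "create line" and "stop line" and the handler names are hypothetical.

# Two-word speech command dispatcher for virtual tape drawing (illustrative only).
COMMANDS = {
    ("create", "line"): lambda app: app.start_tape(),      # hypothetical handlers
    ("stop", "line"):   lambda app: app.finish_tape(),
    ("set", "red"):     lambda app: app.set_colour("red"),
    ("set", "thick"):   lambda app: app.set_thickness(3.0),
}

def on_speech_result(app, hypothesis: str):
    """Dispatch a recognised utterance; non-matching text is silently ignored."""
    words = tuple(hypothesis.lower().split())
    action = COMMANDS.get(words)
    if action is not None:
        action(app)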

5 Finger Tracking for Virtual Tape Drawing

To provide a more natural and less intrusive means of interaction for virtual tape drawing than traditional tracked artefacts, SketchAR uses finger tracking to identify the positions of the left and right forefingers. The A.R.T. [18] optical tracking system used with SketchAR is supported by an appropriate module. The tracking system detects tracked artefacts composed of different geometries of retro-reflective markers and reports them as so-called stations to this module, which in turn reports them to SketchAR. Among the stations reported may be interaction pens, model artefacts or physical work planes. In addition, the tracking system also reports the remaining single markers to OpenTracker if configured to do so. Therefore an additional OpenTracker module was created, which is responsible for receiving the extra marker positions, computing which markers represent the left and right hand, and transparently passing this information on to SketchAR as two additional stations. In practice the user wears a ring with two markers on each forefinger; one forefinger fastens the tape, the other controls the tangent (Fig. 6).

(Fig.6 - Clouds of remaining single markers)
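The following Python sketch illustrates how such a module might turn the remaining single markers into two hand stations: markers whose mutual distance matches the known spacing of the two ring markers are paired, and the two resulting pairs are reported as left and right forefinger. The spacing, tolerance and left/right heuristic are illustrative assumptions, not the actual OpenTracker module.

import numpy as np
from itertools import combinations

RING_MARKER_DISTANCE = 0.03   # assumed spacing of the two ring markers [m]

def hand_stations(markers):
    """Group leftover single markers into the two forefinger rings.

    Returns fingertip position estimates and ring orientations for the left
    and right hand, or None if the hands are not (fully) visible.
    """
    markers = np.asarray(markers, dtype=float)
    pairs = []
    for i, j in combinations(range(len(markers)), 2):
        d = np.linalg.norm(markers[i] - markers[j])
        if abs(d - RING_MARKER_DISTANCE) < 0.01:          # naive pairing
            pairs.append((i, j))
    if len(pairs) < 2:
        return None
    stations = []
    for i, j in pairs[:2]:
        position = (markers[i] + markers[j]) / 2.0        # fingertip estimate
        axis = markers[j] - markers[i]                    # ring orientation
        stations.append((position, axis / np.linalg.norm(axis)))
    # left hand first (smaller x in the tracking coordinate frame, by assumption)
    stations.sort(key=lambda s: s[0][0])
    return {"left": stations[0], "right": stations[1]}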

6 Pie menus (marking menus) vs. PIP (Personal Interaction Panel) menus

SketchAR uses the Studierstube [19] interaction paradigm, in which the user operates a virtual pen and a PIP-sheet (PIP = Personal Interaction Panel). However, the PIP-sheet has a serious disadvantage. In general, the user holds the tracked artefact corresponding to a package model of a car in one hand and the drawing pen in the other. To choose any kind of operation, the user has to put down the package model, thus changing the view on the model, pick up the PIP-sheet and choose an operation. While choosing the operation, his view is obstructed by the opaque virtual PIP menu. He selects an operation, lays down the PIP-sheet and picks up the model again to continue. This procedure distracts the user from his original work. Any improvement to this situation should allow continuous work with less distraction, intrusion and obstruction.

(Fig. 7 - PIP interaction)

An alternative way of conceiving menus has been proposed by Kurtenbach et al. [20]. They propose pie menus and marking menus, an extension of pie menus in which a line is drawn while selecting from a pie menu and its submenus. Using pie menus has several advantages. The user's view is not obstructed by a wide range of operations, most of which he is not going to choose. The hierarchical structure of pie menus shows only a few menu options at a time, but thanks to its tree structure it does not need many levels to cover all necessary functions. The pie menu paradigm is ideally suited to placing the most frequently used commands further up and less frequently used commands further down the hierarchy. Traversal and selection can be accomplished without a further tracked artefact, simply by pressing a button on the interaction pen when commencing traversal and releasing it when selecting an operation (Fig. 8).
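The selection mechanics can be sketched in a few lines: while the pen button is held, the direction of pen motion between press and release is mapped to one of eight slices, and a small dead zone means "no selection". The slice count, orientation and thresholds below are illustrative assumptions, not SketchAR's actual values.

import math

def pie_slice(press_pos, release_pos, n_slices=8):
    """Map the pen motion between button press and release to a pie slice.

    Slice 0 is centred on 'up'; slices are counted clockwise. Returns None
    if the pen barely moved (dead zone, i.e. no selection).
    """
    dx = release_pos[0] - press_pos[0]
    dy = release_pos[1] - press_pos[1]
    if math.hypot(dx, dy) < 0.01:
        return None
    half = math.pi / n_slices
    # atan2(dx, dy): 0 rad points up, angles grow clockwise; shift by half a
    # slice so that slice 0 is centred on the upward direction.
    angle = (math.atan2(dx, dy) + half) % (2.0 * math.pi)
    return int(angle // (2.0 * half))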

(Fig. 8 - PIE interaction)

Within the SmartSketches [21] project, SketchAR usability tests were carried out at the Fiat Styling Center and at Italdesign in Torino, Italy, comparing the performance of PIP interaction with PIE interaction. Eight candidates took part in the user test; 37.5% were CAD product engineers and 62.5% automotive designers. The test candidates as a whole were technically sophisticated and had worked for their companies for more than three years. The test comprised three design and modelling tasks using the traditional PIP menu and subsequently the Pie menu. In those tasks, designers had to create curves and surfaces and manipulate them, choosing the appropriate functions from the PIE/PIP menu. The overall performance analysis over all tasks, PIP vs. PIE, was as follows:

       Average time [s]   Std Deviation [s]   Std Error [s]   95% UCL [s]   95% LCL [s]
PIP    44.32              16.7947412          9.69644838      63.3217055    25.3116278
PIE    41.10              8.32481231          4.80633263      50.520412     31.679588

At first glance, the overall result does not give Pie menus a large advantage over PIP menus, which is not what had been expected. However, it was observed during the test that people sometimes had difficulty picking and moving control-point spheres because they were relatively small. This might have degraded the results of the second task, which required control-point manipulation of a surface. Looking at the overall results, however, one can see that using Pie menus was in general faster than using PIP menus. A very interesting fact is that the standard error for the Pie menu is half of that for the PIP menu. From this we deduce that, because Pie menus present a much simpler interface to the user (8 slices), the user does not get as confused as with the PIP menu, where he sees many more options at a time. In addition, the Pie menu does not obstruct the user's view as much as the PIP menu, so the user is not abruptly pulled out of his previous context but remains in it, with a comparatively small portion of his view occupied by the Pie menu. Another reason for the narrow 95% confidence interval, which indicates increased reliability in terms of time consumption with the Pie menu, is that no additional artefact has to be used: the user does not need to put something down or pick something up to be able to select operations from a menu.

In conclusion, Pie menus seem to be faster and easier to use than PIP menus. They are not as error-prone as PIP menus, because their hierarchical concept is simpler and shows the user only what he can choose at a given tree depth. Pie menus are more reliable, the user makes fewer mistakes, and he is much less distracted from his task, because he does not need to drop or pick up another artefact to select operations, which could make him lose focus.
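The confidence bounds in the table follow the usual normal approximation: the reported standard errors are consistent with dividing the standard deviation by sqrt(3) (the three tasks), and the 95% bounds with mean plus or minus 1.96 times the standard error. A small Python sketch of that computation (the raw per-task timings themselves are not reproduced here):

import math

def summarize(times, z=1.96):
    """Mean, sample standard deviation, standard error and 95% confidence
    bounds (normal approximation), matching the columns of the table above."""
    n = len(times)
    mean = sum(times) / n
    std = math.sqrt(sum((t - mean) ** 2 for t in times) / (n - 1))
    se = std / math.sqrt(n)
    return {"mean": mean, "std": std, "se": se,
            "ucl95": mean + z * se, "lcl95": mean - z * se}

# Check against the reported PIE values: std 8.32 s over n = 3 tasks gives
# se = 8.32 / sqrt(3) = 4.81 s and bounds 41.10 +/- 1.96 * 4.81 = 50.52 / 31.68 s.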

7 Hybrid Desktop & Hybrid Objects

Even though VR systems are nowadays established in many areas, the number of installations in industry is still low compared to desktop systems; this is due on the one hand to the high investment costs and space demands of typical VR hardware and on the other hand to users' lack of acceptance of entirely new user interfaces. Even in environments where VR is widely accepted, users spend most of their time working at classic desktops with WIMP (windows, icons, menus, pointers) interfaces. To change this, one has to close the gap between multimodal interaction in VEs and the user interfaces present on users' desktops today. Thus, one has to establish a seamless integration of 2D and VR user interfaces. Many 3D applications have in common that an object (i.e. a model or scene) is of central interest to the user and is the target of manipulating, analyzing or modelling interactions. Usually some operations are done more precisely on a 2D orthogonal projection of the model, while other, more perceptual operations are better done in a stereo view of the object. For these reasons we created the Hybrid Desktop, an autostereoscopic display paired with a second LCD touch screen in an L-shape configuration (Fig. 9). As 3D input we use a 6 DOF pen that can also be used for 2D input on the touch screen, and a space mouse for easy navigation in the scene. This is similar to desktop VR systems described in previous publications, for instance in [22]. The Hybrid Desktop combines the advantages of 2D and 3D interaction and closes the gap between expensive VR hardware and the user's desktop.

(Fig. 9 – The Hybrid Desktop)

In order to fully exploit the advantages of the Hybrid Desktop and to close the gap between 2D and 3D set-ups, we created a concept for developing user interfaces for applications that run seamlessly on pure 2D desktops as well as in VR environments. To this end, we introduce so-called Hybrid Objects that allow interaction in 3D environments as well as on 2D desktops with a similar look and feel. The basic idea is to create 3D widgets designed in such a way that, with little or no modification, interaction with a 2D projection of their geometry is possible. When a Hybrid Object is viewed as a 2D projection, it is contained in a so-called Hybrid Frame. A Hybrid Frame renders a single Hybrid Object into a WIMP dialog window from an axis-aligned camera view. Depending on the type of object, an orthographic or a perspective camera projection is used. While working on a Hybrid Desktop, Hybrid Objects can be moved between the 3D and 2D displays by simple drag-and-drop operations to switch to the currently preferred operation mode.
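A minimal sketch of the rendering side of a Hybrid Frame: the Hybrid Object's 3D geometry is projected with an axis-aligned camera, either orthographically (simply dropping the view-axis coordinate) or with a simple pinhole division. Function and parameter names are illustrative assumptions; in the actual system the dialog additionally forwards mouse input back to the widget, as described for the PIP example below.

import numpy as np

def hybrid_frame_projection(points, view_axis=2, perspective=False, eye_dist=2.0):
    """Project the N x 3 geometry of a Hybrid Object onto its 2D Hybrid Frame.

    view_axis selects the axis-aligned camera direction (0=x, 1=y, 2=z).
    With perspective=False an orthographic projection simply drops that
    coordinate; otherwise a simple pinhole division by depth is applied.
    """
    points = np.asarray(points, dtype=float)
    keep = [i for i in range(3) if i != view_axis]
    pts2d = points[:, keep]
    if perspective:
        depth = eye_dist + points[:, view_axis]
        pts2d = pts2d / depth[:, None]
    return pts2d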

(Fig. 10 – Example of a Hybrid Object in 2D and 3D)

Figure 10 shows the PIP input device in a VE as known from [23]. On a 2D desktop the PIP appears as a regular WIMP dialog that handles mouse input. Figure 11 shows an interactive column [24] in a VE and a context menu realized as a pie menu (which itself is also a Hybrid Object). On a 2D desktop the same Hybrid Object appears in a 2D projection and is handled using the mouse. If one preferred linear context menus that are closer to the regular WIMP look and feel, the Hybrid Object could alter its appearance accordingly. The combination of the Hybrid Object concept with the Hybrid Desktop found quick user acceptance in various industrial projects, even with users who had never used VR applications before.

(Fig. 11 – Example of a Hybrid Object in 2D and 3D)

8 Conclusion

This paper presented an overview of our VR and AR activities along the product development chain and the UI concepts that we are developing in industry-related projects, where the need to provide a seamless integration between 2D and 3D becomes more and more apparent. Thus, in many of our developments we focus on bridging the gap between 2D and 3D interaction.

References

[1] F. Bruno, M. L. Luchi, M. Muzzupappa, S. Rizzuti, "The Over-sketching Technique for Free-hand Shape Modelling in Virtual Reality", Proceedings of Virtual Concept 2003, Biarritz, France, November 5-7, 2003.

[2] M. Fiorentino, R. De Amicis, A. Stork, G. Monno, "Spacedesign: A Mixed Reality Workspace for Aesthetic Industrial Design", Proceedings of ISMAR 2002, IEEE and ACM International Symposium on Mixed and Augmented Reality, Darmstadt, Germany, Sept. 30 - Oct. 1, 2002.

[3] T. Fleisch, G. Brunetti, P. Santos, A. Stork, "Stroke-Input Methods for Immersive Styling Environments", to be published at SMI 2004, Genoa, Italy, 2004.

[4] A.R.T. Advanced Realtime Tracking GmbH, http://www.ar-tracking.de, 82211 Herrsching, Germany.

[5] H. Graf, M. Koch, A. Stork, O. Barski, "Cyberstilo – Towards an Ergonomic and Aesthetic Wireless 3D-Pen", Proceedings of the IEEE VR 2004 Workshop "Beyond Wand and Glove Based Interaction", pp. 51-54, 2004.

[6] 3D ACIS Modeler, Spatial Corp., http://www.spatial.com, Westminster, Colorado 80021, U.S.A.

[7] L. Piegl, W. Tiller, "The NURBS Book", Springer, ISBN 3-540-55069-0, 1995.

[8] T. Johnson, "Sketchpad III: Three Dimensional Graphical Communication with a Digital Computer", AFIPS Spring Joint Computer Conference, Vol. 23, pp. 347-353, 1963.

[9] G. Kurtenbach, W. Buxton, "GEdit: A Testbed for Editing by Continuous Gesture", SIGCHI Bulletin, 23(2), pp. 22-26, 1991.

[10] D. Rubine, "The Automatic Recognition of Gestures", Ph.D. Dissertation, Carnegie Mellon University, 1991.

[11] C. P. Mertz, P. Lecoanet, "GRIGRI: Gesture Recognition on Interactive Graphical Radar Image", in P. Harling and A. Edwards (eds.), Progress in Gestural Interaction: Proceedings of Gesture Workshop 1996, Springer-Verlag, 1997.

[12] T. Baudel, "A Mark-Based Interaction Paradigm for Free-Hand Drawing", Proceedings of the ACM Symposium on User Interface Software and Technology (UIST), 1994.

[13] S. L. Oviatt, "Multimodal Interfaces for Dynamic Interactive Maps", Proceedings of CHI 1996, Human Factors in Computing Systems, ACM Press, New York, pp. 95-102, 1996.

[14] R. C. Zeleznik, K. P. Herndon, J. F. Hughes, "SKETCH: An Interface for Sketching 3D Scenes", Computer Graphics (Proceedings of SIGGRAPH 1996), August 1996.

[15] J. A. Landay, B. A. Myers, "Interactive Sketching for the Early Stages of User Interface Design", Proceedings of CHI 1995, pp. 43-50, 1995.

[16] M. J. Fonseca, J. A. Jorge, "Using Fuzzy Logic to Recognize Geometric Shapes Interactively", Proceedings of the 9th International Conference on Fuzzy Systems (FUZZ-IEEE 2000), San Antonio, USA, May 2000.

[17] J. Nielsen, "Voice Interfaces: Assessing the Potential", Jakob Nielsen's Alertbox, http://www.useit.com/alertbox/20030127.html, January 2003.

[18] A.R.T. Advanced Realtime Tracking GmbH, Optical Tracking Systems, http://www.ar-tracking.de.

[19] D. Schmalstieg, A. Fuhrmann, Z. Szalavari, M. Gervautz, "Studierstube – An Environment for Collaboration in Augmented Reality", extended abstract in Proceedings of Collaborative Virtual Environments 1996, Nottingham, UK, Sep. 19-20, 1996; full paper in Virtual Reality – Systems, Development and Applications, Vol. 3, No. 1, pp. 37-49, 1998.

[20] G. Kurtenbach, A. Sellen, W. Buxton, "An Empirical Evaluation of Some Articulatory and Cognitive Aspects of Marking Menus", Human Computer Interaction, 8(1), pp. 1-23, 1993.

[21] EU project SmartSketches – A User Centered Approach to Introducing Computer-based Tools in the Initial Stages of Product Design, IST-2000-28169, http://smartsketches.inesc-id.pt.

[22] S. Tano, T. Kodera, et al., "Godzilla: Seamless 2D and 3D Sketch Environment for Reflective and Creative Design Work", Human-Computer Interaction – INTERACT '03, IOS Press, IFIP, pp. 311-318, 2003.

[23] D. Schmalstieg, Encarnação, A. Fuhrmann, Z. Szalavari, M. Gervautz, "Studierstube – An Environment for Collaboration in Augmented Reality", Proceedings of Collaborative Virtual Environments 1996, Nottingham, UK.

[24] C. Knoepfle, "Intuitive und Immersive Interaktion für Virtuelle Umgebungen am Beispiel von VR-Design Review" (Intuitive and immersive interaction for virtual environments, exemplified by VR design review), Dissertation, TU Darmstadt, 2004.