On Next Generation User Interfaces for Computer-Aided Design (CAD) Systems

Gerard Jounghyun Kim

Dept. of Computer Science and Engineering
Pohang University of Science and Technology (POSTECH)
San 31, Hyoja-dong, Pohang, 790-784, Korea
[email protected]

Abstract

This paper presents a review of different types of user interfaces used in current state-of-the-art commercial and research-prototype CAD systems. Two different perspectives are taken: one from the interface point of view and the other from the interaction point of view. This paper is not intended to be an exhaustive survey on the subject, but rather an assessment of a current trend. Then, I discuss and comment on basic requirements of CAD user interfaces and make a projection on future directions in CAD user interfaces, particularly for mechanical CAD systems.

Keywords: Computer-Aided Design, User Interface

1 Introduction: Why "CAD" User Interface?

During the past 15 years, great progress has been made in the world of computer-aided design (CAD) systems. Particularly in the domain of three-dimensional geometric CAD systems, compared to the early 80's when the first lines of wire-frame based modelers were introduced, today's advanced systems feature solid modeling, feature-based design, and visualization of complex surfaces, just to name a few. Naturally, when one says "computer-aided design (or CAD)" systems, one often thinks of mechanical design systems or geometric modelers, for instance, those along the lines of AutoCAD (Autodesk, Inc.) and Pro/Engineer (Parametric Technologies Corp.). These programs represent only a subclass of the many families of CAD systems.

Designing things with the aid of computers is much more common than one may think. The author sampled a few lists of the top ten software titles sold in the United States last year [2] [3] [1] on the World Wide Web and was able to come up with several popular products that can be categorized as CAD systems: word processors like Word (Microsoft Corp.) and WordPerfect (Corel Corp.), presentation authoring tools such as PowerPoint (Microsoft), graphic design systems such as PrintShop (Broderbund Software, Inc.), CorelDraw (Corel), and PhotoShop (Adobe Systems, Inc.), and even games and children's programs like SimCity (Maxis) and KidPix (Broderbund). In addition to traditional applications in mechanical design, CAD systems are used in almost all domains, for example, software design, Internet homepage design, simulation design, circuit design, business process reengineering, and others.

Regardless of their application domains, however, it is safe to say that research and development efforts for CAD systems have so far been focused on improving their functionality, and consequently the user interface aspect has rather been neglected. Most CAD systems employ the so-called "WIMP" (i.e. Windows, Icons, Menus, and Pointers) based interfaces, a rather thoughtless but straightforward method of mapping user intentions to functionalities. Judging from the ubiquity and importance of CAD systems, their user interfaces deserve a more in-depth look. Moreover, there are now new requirements such as system usability, integrated design, configuration management, redesign, and collaborative design, and technological developments like distributed processing, three dimensional and haptic interfaces, and immersive environments. These new requirements and technologies both necessitate and enable the formulation of new concepts and approaches to more effective user interaction.

The problem of designing good user interfaces for CAD systems may be viewed from two perspectives. One is the interface technology view, often taken by the Human Computer Interaction (HCI) research community. Their interests are, in general, to develop new computer interfaces and to establish principles and guidelines for effectively combining them, in hopes of creating the most user-friendly, psychologically sound, and effective way of conveying user intentions to a computer (issues may include, for instance, how to organize menus and buttons on a screen or how to mix multimedia information for the best presentation effect). The other is the design interaction view, taken by the CAD research community, whose main interest is to find the basic set of generic tools and services that maximize the efficiency of the design process. To illustrate the subtle difference between these two perspectives, take the domain of software design as an example. The software engineering research community might prescribe an object-oriented design paradigm; the UI community could then suggest a graphical user interface with menus and icons as the most natural UI for object-oriented modelling, while a generic design tool for presenting multiple views of a design (see Section 3) might be configured to support separate but coupled modeling of object definitions and object behaviors. Obviously, the two views are intimately related and need to be combined. It is clear that CAD systems must not simply be a collection of "drawing" primitives. If not intelligent, they must at least facilitate the design process. However, while designers expect so-called "intelligent" design systems to automate trivial design tasks for them, they also hate to be limited by the tools they use, and this is why care must be taken in designing the interface.

In this paper, we first briefly review various aspects of CAD user interfaces from the aforementioned two perspectives and comment on various previous related approaches and implementations. This paper is not intended to be an exhaustive survey on the subject, but rather an assessment of the current status and trend. Then, I discuss and comment on requirements for CAD user interfaces and make a projection on future directions of CAD user interfaces, particularly for mechanical CAD systems.

2 Interface Technologies: Data Input and Presentation

It is quite interesting that I. Sutherland's Sketchpad system, developed back in the early 60's [17], not only marked the beginning of CAD systems but also originated the concept of the graphical user interface (e.g. rubber banding, zooming in and out). In this section, we look at various input and output technologies employed in different CAD systems.

2.1 WIMP

Perhaps the most popular and common CAD interface is the so-called WIMP based interface. WIMP stands for "Windows, Icons, Menus, and Pointers", the four representative two dimensional software and hardware means of data input and presentation. Certain geometric modelers like ACIS (Spatial Technology, Inc.) and PADL-2 [11] solely used the keyboard, although these systems were designed more as the kernel of a larger system, with a separate interface layer to sit on top of them. Contrary to the early tone of this paper, there is nothing wrong with WIMP based interfaces themselves. What is problematic is their thoughtless deployment without considering the context in which they are used. Most CAD systems that employ WIMP based interfaces are very much function-oriented. Menus and icons are used to group and index basic design functions. Invoking a design function often occurs through following a hierarchy of menus/icons, making a selection, and answering a series of queries with mouse clicks or keyboard input. This hierarchical model is similar to the GOMS (Goals, Operators, Methods and Selection rules) model [5], which stems from the classical AI problem solving paradigm of subgoal decomposition. An interaction scheme based on such a model, with no other supporting features, is hardly ideal for either routine or innovative design tasks. In a routine design task, designers often have clear goals and readily know which options and rules to select from the choices presented to them, and therefore would like to avoid repeatedly invoking series of menus and icons in a cascaded fashion (and rather specify the desired action at once). A mechanism to easily compose and combine different design operators is needed, and some systems do offer this capability through scripting and application programming interfaces. For innovative design tasks, designers often wander around the design space trying out different design operators, a process characterized as a "trial-and-error" or "generate-and-test" process. During such a process, designers continually look up and request various types of design information (e.g. history, dimensions, simulation results, alternative views, etc.) seemingly unrelated to the currently chosen task (and thus difficult for the computer to infer automatically) [16]. Thus, a single monolithic structure of menus, icons, and other hooks with corresponding design primitives is perhaps not fully supportive of rapid and efficient design space exploration.

As an added note on WIMP based interfaces, to make the best use of an icon, its pictorial appearance must be easily matched to its functional meaning [8]. For large CAD systems with a large number of primitive design operators and many levels of hierarchical selection structure, it is difficult to pictorially represent functionally bundled primitives at intermediate or higher levels of this hierarchy. This is often a cause of wasteful mouse clicks (e.g. frequent undo's and redo's), and therefore icons should be used to represent functionalities near the bottom of the selection hierarchy.
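As a minimal sketch of the scripting route mentioned above, the fragment below shows how a routine design task could be specified at once as a composite script rather than through cascaded menu traversal. The DesignSession class and the operator names are hypothetical illustrations, not the API of any particular CAD system.

```python
# Hedged sketch: composing design operators through a scripting interface.
# All names (DesignSession, the operator names) are illustrative assumptions.

class DesignSession:
    def __init__(self):
        self.log = []                     # history of invoked operators

    def op(self, name, **params):
        """Invoke a primitive design operator by name with its parameters."""
        self.log.append((name, params))
        print(f"{name}({params})")
        return self                       # returning self lets operators be chained

# A routine design task expressed as a single composite script:
if __name__ == "__main__":
    s = DesignSession()
    (s.op("sketch_rectangle", plane="XY", width=40.0, height=25.0)
      .op("extrude", depth=10.0)
      .op("fillet", edges="all_vertical", radius=2.0)
      .op("pattern_linear", feature="hole", count=4, spacing=8.0))
```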

2.2 Three Dimensional Interface

One of the characteristics of WIMP based interfaces is that they are two dimensional. Certain design domains, particularly mechanical design, often require three dimensional interactions, and thus three dimensional interfaces. We must bear in mind, however, that three dimensional interactions are not required for all tasks that occur in three dimensional design. For example, sketching a cross section on a plane is easier with 2D based interfaces than through 3D interactions. Task selection and parameter input are also perhaps best handled with 2D based interfaces. For 3D object selection/manipulation and navigation, many CAD systems simply employ a scheme of constrained motion (e.g. rotation about the x axis) with 2D devices. A slightly more advanced form of 3D interaction may be provided through 3D widgets. 3D widgets may be controlled by a 2D device, and many such widgets and special purpose interfacing devices have been devised [19] [20] [21]. Interacting in 3D with 2D devices (or devices with fewer than 6 degrees of freedom) can be quite unnatural, and thus this approach is mostly suitable for inexpensive or simple applications.
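To make the constrained-motion scheme concrete, here is a hedged sketch in which the horizontal component of a 2D mouse drag is mapped to a rotation about a single fixed axis. The sensitivity constant and the quaternion helper are illustrative choices, not a prescribed implementation.

```python
import math

def axis_angle_to_quaternion(axis, angle_rad):
    """Quaternion (w, x, y, z) for a rotation of angle_rad about a unit axis."""
    half = angle_rad / 2.0
    s = math.sin(half)
    return (math.cos(half), axis[0] * s, axis[1] * s, axis[2] * s)

def constrained_rotation_from_drag(dx_pixels, axis=(1.0, 0.0, 0.0),
                                   degrees_per_pixel=0.5):
    """Map the horizontal part of a 2D mouse drag to a rotation about one
    fixed axis (e.g. 'rotate about x'), as in constrained 3D manipulation
    with a 2D pointing device."""
    angle = math.radians(dx_pixels * degrees_per_pixel)
    return axis_angle_to_quaternion(axis, angle)

# Example: a 40-pixel drag to the right rotates the part 20 degrees about x.
print(constrained_rotation_from_drag(40))
```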

The next logical step, therefore, is to use three dimensional input devices (such as motion trackers, gloves, and 3D mouses) in conjunction with the 3D widgets. Two types of interactions are thinkable: gestures and motion capture. It has been reported that interacting with gestures alone is not very e ective [8], but, when used with other modes of communication like voice, can be otherwise (see next section for further discussion of this). Motion capture has to do with tracking three dimensional movements (vs. static gestures) for conveying user intentions to the system. Gaylean et al. developed an interactive volumetric modeling (or sculpting) tool controlled by a 3D input device (a motion tracker) [9]. Dani et al. are developing a similar tool for free form surface design [7]. This type of interaction is very intuitive, but su ers from inaccuracy. For large scale designs, an immersive VR environment can provide an added realism and natural feelings to 3D interaction. VR has been applied to design mainly in the form of three dimensional design reviews (e.g. architectural walk-throughs) and simulations, however, not very much for modelling. Virtual Design Studio developed by Rosen et al. implements a design studio metaphor, analogous to a desktop metaphor, to o er more realism to carrying out everyday design tasks [18]. A commercial system, called the SmartScene (MultiGen, Inc.), making use of a head mounted display and two gloves, allows users to immerse themselves in a 3D visual workspace and perform con guration design (e.g. selection of parts and assembling them). Application of comprehensive virtual reality systems to design systems is still in its infancy due to problems with usablity and reliability, despite the increasing a ordability of virtual reality devices.
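The motion-capture style of sculpting can be illustrated with a minimal sketch (not the tool of [9]): a stream of tracked 3D tool positions carves material out of a voxel grid. The grid size, tool radius, and the way positions arrive are all assumptions made for the example.

```python
import numpy as np

def carve(voxels, tool_path, tool_radius, voxel_size=1.0):
    """Remove material from a boolean voxel grid along a tracked tool path.

    voxels      : 3D boolean array, True where material is present
    tool_path   : iterable of (x, y, z) positions from a 3D tracker
    tool_radius : radius of the spherical cutting tool (same units as voxel_size)
    """
    nx, ny, nz = voxels.shape
    # Precompute voxel-center coordinates once.
    xs, ys, zs = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    for px, py, pz in tool_path:
        dist2 = ((xs * voxel_size - px) ** 2 +
                 (ys * voxel_size - py) ** 2 +
                 (zs * voxel_size - pz) ** 2)
        voxels[dist2 <= tool_radius ** 2] = False   # carve out a sphere
    return voxels

# Example: carve a short straight stroke through a 32^3 block.
block = np.ones((32, 32, 32), dtype=bool)
path = [(16.0, 16.0, z) for z in np.linspace(4.0, 28.0, 25)]
block = carve(block, path, tool_radius=3.0)
print("remaining material:", int(block.sum()), "voxels")
```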

2.3 Voice and Multimodal Interfaces

One way to increase the usability of three dimensional interaction is through the use of multimodal interfaces. In particular, many efforts have been made to combine gestures and voice. According to the study by Hauptmann [12], users indeed prefer to communicate, and communicate much more effectively, using both gestures and voice instead of just either one. The pioneering example of using speech and gesture is the "Put That There" system, in which users interacted with objects using voice and hand gestures (e.g. pointing) [4]. One of the benefits of the system is the easy interpretation of deictic expressions (e.g. pronouns like that and there) with the aid of gestures. This allows the dialogues to be more concise for the user and less complicated for the computer. Voice interfaces (e.g. keyword-based speech recognition) will become more important because 3D design interaction will commit the human hands to manipulation and navigation tasks, and the only viable mode of communication left for other design tasks would be the voice. While a speech interface is generally preferred by users and results in faster performance, it is also reported that voice commands often interfere with short-term memory tasks (like design tasks), because speech and linguistic memory compete for the same cognitive resources [13].
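A hedged sketch in the spirit of "Put That There" [4]: deictic words in a keyword-level voice command are resolved against whatever the pointing gesture selected at the moment each word was spoken. The recognizer and tracker are stubbed out, and all names are hypothetical.

```python
# Sketch of multimodal fusion: deictic words ("that", "there") are replaced
# by the object or location the pointing ray hit when the word was uttered.

def resolve_command(tokens, pointed_targets):
    """tokens          : recognized words, e.g. ["put", "that", "there"]
       pointed_targets : what the pointing ray hit when each token was spoken
                         (an object id, a 3D location, or None)"""
    resolved = []
    for token, target in zip(tokens, pointed_targets):
        if token in ("that", "this", "there") and target is not None:
            resolved.append(target)       # substitute the deictic reference
        else:
            resolved.append(token)
    return resolved

# "Put that there" while pointing first at bracket_7, then at a location:
print(resolve_command(["put", "that", "there"],
                      [None, "bracket_7", (120.0, 45.0, 0.0)]))
# -> ['put', 'bracket_7', (120.0, 45.0, 0.0)]
```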

3 Tools for Design Interaction

In the previous section, we reviewed the "how" (i.e. the technology) side of the CAD interface problem. In this section, we look at the "what" side, that is, the basic "tools" or "services" (for lack of a better term) needed to facilitate design processes in CAD systems. While different domains and different users require different types of interaction, there are several important generic "tools" that cut across these differences and are often overlooked by current CAD systems.

3.1 Multiple Views

Designs need to be looked at from different viewpoints. A viewpoint may be defined by several different criteria. One important criterion is the abstraction level. In general, design proceeds in a step-wise refinement manner, starting from a user specification down to detailed design levels. The design knowledge associated with the abstraction levels, and the relationships among them, play an important role in generating "good" designs. Most mechanical CAD systems, for example, only allow viewing of a design at a single level of abstraction, i.e. the detailed design level. Another good example of a useful criterion is the underlying representation. A design employs many different representations for expressing function, constraints, rules, geometry, etc. As mentioned before, designers continually look up and request various types of design information, and move between different design contexts all the more as the nature of the design becomes more innovative and creative. Level of detail (LOD) is another popular criterion for design views. This facility enables designers to focus on a particular feature best illustrated with the corresponding view. Different applications may emphasize different design features and views, and should organize their interfaces around them.
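One way to realize separate but coupled views is sketched below under assumed names: every view observes a single shared design model, so an edit is immediately reflected in, say, both a symbolic and a geometric view. This is an illustrative sketch, not the mechanism of any particular CAD system.

```python
# Minimal sketch of coupled design views over one shared design model.
# Class and view names are illustrative assumptions.

class DesignModel:
    def __init__(self):
        self.features = {}          # feature name -> parameters
        self._views = []

    def attach(self, view):
        self._views.append(view)

    def set_feature(self, name, **params):
        self.features[name] = params
        for view in self._views:
            view.update(name, params)   # keep all views consistent

class GeometricView:
    def update(self, name, params):
        print(f"[geometry] regenerate solid for {name}: {params}")

class SymbolicView:
    def update(self, name, params):
        print(f"[symbolic] feature graph node {name} <- {params}")

model = DesignModel()
model.attach(GeometricView())
model.attach(SymbolicView())
model.set_feature("hole_1", diameter=6.0, depth=12.0)
```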

3.2 Design Space Exploration

While each individual design view provides a look at a particular aspect of the design, a global picture is useful for tracking the overall design space exploration and for visualizing the mappings and relationships among the different aspects. While it is true that individual threads of design tasks flow in a step-wise refinement manner, collectively the overall design space is explored in a depth-first manner, designers working on one part of a design to a certain level of detail and then moving to another [14]. For large scale design, it is difficult for designers to keep track of the design process flow and remember all the alternatives considered during the process, which can take as long as several weeks to months. One conventional method of handling design alternatives is to use version control, whereby alternative or evolving designs are stored as separate versions. Version control of design files alone cannot sufficiently support design space exploration, because each version simply represents a design snapshot in time and there are no explicit or semantic links among the versions aside from the date and time they were created [15].
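A minimal sketch of a design-space record that goes beyond flat version snapshots: each node keeps its parent, the operator that produced it, and a status flag, so alternatives, failed branches, and the current design thread are explicit. The structure and field names are assumptions made for illustration.

```python
# Hedged sketch of an explicit design-space graph (tree of alternatives),
# as opposed to unrelated version snapshots. Field names are illustrative.

import itertools

class DesignSpace:
    def __init__(self):
        self._ids = itertools.count()
        self.nodes = {}                       # id -> node record
        self.root = self._add(None, "initial specification")

    def _add(self, parent, operator):
        nid = next(self._ids)
        self.nodes[nid] = {"parent": parent, "operator": operator,
                           "status": "open", "children": []}
        if parent is not None:
            self.nodes[parent]["children"].append(nid)
        return nid

    def branch(self, parent, operator):
        """Record a new alternative explored from an existing design state."""
        return self._add(parent, operator)

    def mark(self, nid, status):              # e.g. "failed" or "accepted"
        self.nodes[nid]["status"] = status

    def thread(self, nid):
        """Path from the root to a node: one design thread/alternative."""
        path = []
        while nid is not None:
            path.append(self.nodes[nid]["operator"])
            nid = self.nodes[nid]["parent"]
        return list(reversed(path))

space = DesignSpace()
a = space.branch(space.root, "layout with two bearings")
b = space.branch(space.root, "layout with one bearing")
space.mark(b, "failed")                       # dead end, kept for reference
print(space.thread(a))
```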

3.3 Design Rationale

However, just providing a view of the design process is not sufficient. Many design tasks are adaptations and revisions of old designs, in other words, redesigns [6]. In a redesign, designers are often forced to make conjectures and guesses about the rationale behind the previous design process, which may lead to redundant consideration of failed designs. While the design space exploration view can mark and show failed/successful design points and paths to alert designers, it will not be able to handle the above problem under the changing design conditions brought about by redesign requirements. What is needed is a construct for recording justifications for design decisions, i.e. design rationale. Much research has been devoted to design rationale capture systems, and there are mainly two types of approaches: the record-and-replay approach and the generative approach [10]. Record and replay is an approach in which design rationale is captured during design, and it is often considered intrusive (e.g. designers do not bother to record their thoughts, or, when forced to do so, find the system cumbersome to use). In generative approaches, design rationale is inferred automatically from pre-constructed behavioral models of target designs, and the effort put into the knowledge engineering process, a design task in itself, can be quite significant.
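One possible way to keep capture low in intrusion, sketched below purely as an assumption: decisions are logged automatically while designing, and justifications are attached later in a batch pass over selected decision points. All names and the log format are hypothetical.

```python
# Hedged sketch of low-intrusion rationale capture: decisions are logged
# automatically, and justifications are attached later in a batch pass.

decision_log = [
    {"id": 0, "action": "chose two-bearing layout", "rationale": None},
    {"id": 1, "action": "rejected one-bearing layout", "rationale": None},
]

def annotate_in_batch(log, notes):
    """notes: {decision id -> free-text justification}, supplied by the
    designer at a convenient time rather than at each decision."""
    for entry in log:
        if entry["id"] in notes:
            entry["rationale"] = notes[entry["id"]]
    return log

annotated = annotate_in_batch(
    decision_log, {1: "shaft deflection exceeded 0.1 mm in simulation"})
for e in annotated:
    print(e)
```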

4 Other Interfaces and Tools

Here, I only list and comment briefly (for lack of space) on other promising technologies and tools for CAD UIs.

Haptic Interface (Force Feedback): One of the most exciting three dimensional interfaces is the haptic device, which can provide force feedback, another important element in implementing physically natural interaction with the synthetic world. Although design tasks may be mostly mental, the sense of touch can be helpful in three dimensional coordination, navigation, and object selection.

Multimedia Presentation: In Section 2, I mostly discussed methods and technologies for inputting data into a CAD system. A CAD system is not only about displaying nice-looking geometry, but also about symbolic and non-geometrical design information. As mentioned before, many design tasks are adaptations and revisions of old designs, and a multimedia presentation of an old design (e.g. animated snapshots of the evolving design annotated with hypertext links to important design rationale) can lead designers to the portions of the old design that conflict with the current target design, and perhaps even inspire them toward more innovative solutions.

Collaborative Environment: Large design projects invariably require collaboration among many participants, including designers, engineers, managers, etc. From an interface point of view, communication facilities (e.g. WWW, file transfers, e-mail) have become an essential ingredient and need to be integrated into CAD systems so that designers can exchange data with other parties as easily as possible.

Adaptive and Flexible Interfaces: A CAD system is used by people with many different backgrounds, areas of expertise, and personalities. User interfaces should ideally be configurable to these variations. Scripts and application programming interfaces can be used for this purpose to a limited degree. An ultimately flexible user interface would also adapt to different design domains.

5 Conclusion: Next Generation CAD UI?

In this paper, I have presented some thoughts on the current status of CAD user interfaces and on what I believe to be the requirements for the next generations of CAD systems. To summarize, on the interface technology side, in the short term I expect that interfaces utilizing simple three dimensional devices (e.g. trackers, gloves, 3D mice) will start to appear for special purpose CAD systems, and that multimedia presentation will become more common. Multimodal interfaces (e.g. voice and gesture) will probably follow next, and full VR systems further in the future. I strongly maintain the position that the time is ripe for the end of the monolithic, function-oriented interface and the beginning of the process-oriented interface. The concepts introduced in this paper (multiple design views, design space exploration, design rationale, etc.) are all closely related to one another, and together they form a basis for building "intelligent" yet non-intrusive design systems that can free designers from cumbersome interactions with the system. They will open up new possibilities for design interaction. For example, it may be possible to build a non-intrusive design rationale capture system by taking advantage of the design process tool, that is, with the ability to select important decision points and annotate explanations of design actions in a batch mode. Multimedia presentations and authoring can be tied directly to the Internet for exporting concept designs and accessing design catalogues worldwide. Adaptive interfaces can perhaps learn design processes (e.g. top-down or bottom-up) and configure frequently used design views, menus, and icons for a particular domain.

Figure 1 (from [14]) illustrates an example of such a process-based user interface for CAD: a graph-like structure represents the design space, with links to different design abstraction levels (each row), where a path to a node represents a particular design thread or alternative; alongside it, two other views of the design are shown, a symbolic, object-oriented representation and a geometric model.

References

[1] Compulink Compushop - top ten software. www.compulink.com/compushop/topten, 1996.
[2] Learning Services software top ten. learnserv.com:80/topten.html, 1996.
[3] Top ten bestselling Windows business software. www.winmag.com:80/library/1996/, 1996.
[4] R. Bolt. Put that there: Voice and gesture in the graphics interface. ACM Computer Graphics, 4:262-270, 1980.
[5] S. Card, T. Moran, and A. Newell. The Psychology of Human Computer Interaction. Erlbaum, 1983.
[6] R. Coyne, M. Rosenman, A. Radford, M. Balachandran, and J. Gero. Knowledge-based Design Systems. Addison-Wesley, Reading, MA, 1990.


[7] T. Dani and R. Gadh. A conceptual virtual design system. Proc. of ASME Computers in Engineering Conf., pages 956-966, 1995.
[8] R. Eberts. User Interface Design. Prentice Hall, 1994.
[9] T. Galyean and J. Hughes. Sculpting: An interactive volumetric modeling technique. ACM Computer Graphics, 25:267-274, 1991.
[10] T. Gruber and D. Russel. Generative design rationale: Beyond the record and replay paradigm. Technical Report KSL-92-59, Stanford University, 1992.
[11] E. Hartquist and H. Marisa. UM-10/2.2 PADL-2 user manual. Technical report, Cornell Programmable Automation, Cornell University, 1988.
[12] A. Hauptmann. Speech and gesture for graphic image manipulation. Proc. of CHI, pages 241-245, 1989.
[13] L. Karl, M. Pettey, and B. Shneiderman. Speech versus mouse commands for word processing. Int. Journal of Man-Machine Studies, pages 667-687, 1993.

[14] G. Kim and G. Bekey. Design-for-assembly by reverse engineering. Artificial Intelligence in Design '94, pages 717-734, 1994.
[15] G. J. Kim and S. Szykman. Combining interactive exploration and optimization for assembly design. Proc. ASME Design Automation Conf., 1996.
[16] T. Kuffner and D. Ullman. The information requests of mechanical design engineers. Proceedings of the Design Theory and Methodology Conference, 1990.
[17] Sun Microsystems. Ahead of the pack ... http://192.9.100/960710/feature3/ivan.html, 1996.
[18] D. Rosen, B. Bras, and F. Mistree. Virtual prototyping for product demanufacture and service using a virtual design studio approach. Proc. of ASME Computers in Engineering Conf., pages 951-958, 1995.
[19] K. Shoemake. Arcball: A user interface for specifying 3D orientation using a mouse. Unpublished work, 1991.
[20] D. Venolia. Facile 3D direct manipulation. Proc. of INTERCHI, pages 31-36, 1993.
[21] M. Wloka and E. Greenfield. The Virtual Tricorder: A uniform interface for virtual reality. Proc. of UIST, pages 39-42, 1995.