Christopher Ligetti Graduate Research Assistant, Department of Industrial & Manufacturing Engineering, The Pennsylvania State University, University Park, PA 16802

Timothy W. Simpson* Assistant Professor, Departments of Mechanical & Nuclear Engineering and Industrial & Manufacturing Engineering, Member ASME, The Pennsylvania State University, University Park, PA 16802

Mary Frecker Assistant Professor, Department of Mechanical & Nuclear Engineering, Member ASME, The Pennsylvania State University, University Park, PA 16802

Russell R. Barton Professor, Management Science and Information Systems, The Pennsylvania State University, University Park, PA 16802

Gary Stump Graduate Research Assistant, Department of Mechanical & Nuclear Engineering, The Pennsylvania State University, University Park, PA 16802

*Corresponding Author, 329 Leonhard Building, Penn State University, University Park, PA 16802. Email: [email protected]. Phone/fax: (814) 863-7136/4745.

Contributed by the Engineering Simulation and Visualization Committee for publication in the JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING. Manuscript received Sept. 2002; Revised Apr. 2003. Associate Editor: D. Rosen.

Assessing the Impact of Graphical Design Interfaces on Design Efficiency and Effectiveness

Despite the apparent advantages of and recent advances in the use of visualization in engineering design and optimization, we have found little evidence in the engineering literature that assesses the impact of fast, graphical design interfaces on the efficiency and effectiveness of engineering design decisions or the design optimization process. In this paper we discuss two examples—the design of an I-beam and the design of a desk lamp—for which we have developed graphical and text-based design interfaces to assess the impact of having fast graphical feedback on design efficiency and effectiveness. Design efficiency is measured by recording the completion time for each design task, and design effectiveness is measured by calculating the error between each submitted design and the known optimal design. The impact of graphical feedback is examined by comparing user performance on the graphical and text-based design interfaces, while the importance of rapid feedback is investigated by comparing user performance when response delays are introduced within each design interface. Experimental results indicate that users of graphical design interfaces perform better (i.e., have lower error and faster completion time) on average than those using text-based design interfaces, but these differences are not statistically significant. Likewise, we found that a response delay of 0.5 seconds increases error and task completion time, on average, but these increases are not always statistically significant. Trials using longer delays of 1.5 seconds did yield significant increases in task completion time. We also found that the perceived difficulty of the design task and of using the graphical interface controls were inversely correlated with design effectiveness—designers who rated the task more difficult to solve or the graphical interface more difficult to use actually performed better than those who rated them easy. Finally, a significant "playing" effect was observed in our experiments: those who played video games more frequently or rated the slider bars and zoom controls easy to use took more time to complete the design tasks. [DOI: 10.1115/1.1583757]

Keywords: Visualization, Human-computer Interaction, Multiobjective Optimization

Introduction

The use of visualization to support engineering design and decision-making is gaining considerable attention as the requisite computer tools and technologies become more readily available. Companies such as Chrysler [1], Raytheon [2], and Boeing [3,4] are working to harness the power of visualization to enhance product design and development. Multidimensional visualization techniques that employ desktop PCs are also becoming prevalent for use in multiobjective optimization [5–10]. Virtual reality has become a powerful tool for networking multiple users in distributed design environments [11]; a recent review of virtual reality applications in engineering design can be found in Ref. [12]. Despite the apparent advantages and recent advances of visualization in design and optimization, many consider the state-of-the-art in visualization to still be in its infancy [5]. Jones [13] argues that appropriate representations (i.e., visualization strategies) are needed to better understand the models, algorithms, data, and design candidates obtained during the design process. A recent study conducted by the National Research Council (NRC) found that the "Most effective collaboration today still takes place among a few partners in similar disciplines (e.g., engineers) across a narrow slice of the supply chain using a standardized software package
for the interface (e.g., the use of CATIA by Boeing for the 777 [14]). Anyone not comfortable using the standard software package (e.g., small suppliers) may have difficulty adjusting to the collaboration technology" [15]. This NRC study also highlighted three requirements for an effective design interface: it must be integrative, visual, and fast, i.e., enable real-time response to user input. Speed of response is critical for certain cognitive tasks; however, we have found little evidence in the engineering design literature that assesses the impact of fast graphical design environments on the efficiency and effectiveness of engineering design or decision-making. Evans et al. [16] compared the effectiveness of using traditional interfaces and virtual reality interfaces for 3-D spherical mechanism design, but they did not investigate the impact of response delay on user performance. Most research on the effect of response delay in design software on user productivity has focused on simple placement, searching, and editing tasks [17–20] or on the loss of information held in short-term memory [21]. There have been some investigations addressing the impact of software performance (speed) on the quality of design. Goodman and Spence [22] examined the effect of system response time on the time to complete an artificial task that was created to mimic design activity. The task was the graphical adjustment of five parameters to change the shape of a function (presented graphically) so that it passed between forbidden regions in the x, f(x) plane. They found an increase in task completion time of approximately 50% for response delays of 1.5 seconds between a parameter adjustment and the resulting shape change for the function.


Fig. 1 Overall Experimentation Strategy

For more complex tasks, Foley and Wallace [23] found that system response delays of up to ten seconds did not have a significant impact on the design process. Hirschi and Frey [24] recently investigated the interaction between task complexity and completion time for simple graphical user interfaces developed for problems with 2×2 to 5×5 input and output parameters, but response delays were not considered in their study. Unfortunately, many design analysis tasks may not be instantaneous, even when calculated using state-of-the-art software on state-of-the-art computers. For instance, Ford Motor Company reports that one crash simulation on a full passenger car takes 36–160 hours [25]; therefore, we hypothesize that a metamodel-driven design interface provides a strategy for meeting the challenge of creating an integrative, visual, and fast graphical design environment. By metamodels we mean simple mathematical approximations to the input/output functions calculated by the designer's analyses and simulation models [26–28]. Because the approximations are simple, they are fast, virtually instantaneous, enabling performance analyses to be computed in real-time when design (input) variables are changed within a graphical design environment; however, because they are simple approximations, there is a tradeoff between accuracy and speed. Consequently, our primary research objective is to determine the efficacy of metamodel-driven visualization for graphical design and optimization as shown in Fig. 1. Figure 1 shows how the experiments presented in this paper fit into our overall research strategy. At the highest level, our investigations can be divided into two categories: (1) assessing the benefit of having a rapid response to user requests for performance as a function of design parameters, and (2) assessing the

cost of lost accuracy due to the use of approximations or metamodels. The benefit of rapid response depends on the nature of the design task and the richness of the design interface (e.g., text-based versus graphical). The first of these has led us to investigate the development of alternative design interfaces and exercises to study the impact of time delays in the system response on the efficiency and effectiveness of the design process. To study the impact of response delays on design efficiency and effectiveness, we have developed five design exercises—design of an I-beam, desk lamp, pressure vessel, automobile suspension, and aircraft wing—and two sets of graphical and text-based design interfaces, one for the I-beam and one for the desk lamp. As highlighted in Fig. 1, in this paper we focus on the design exercises involving the I-beam and desk lamp, which are described in the next section. The experimental protocol used to test these interfaces is outlined in Section 3, and Section 4 contains a discussion of the hypotheses we tested. Closing remarks are given in Section 5.
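To make the metamodel idea concrete, the sketch below (a minimal Python illustration, not the authors' Visual Basic 6.0 implementation) fits a quadratic response surface to samples of a slow analysis and then evaluates the surrogate essentially instantaneously, which is what permits real-time graphical feedback as sliders move. The slow_analysis function, the sample count, and the quadratic basis are all illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for a slow, detailed analysis (e.g., a crash or FEA run).
def slow_analysis(h, w):
    return h * w + 0.1 * h**2 - 0.05 * w**2  # placeholder response

# Sample the design space once, offline.
rng = np.random.default_rng(0)
H = rng.uniform(0.2, 10.0, 50)
W = rng.uniform(0.1, 10.0, 50)
Y = np.array([slow_analysis(h, w) for h, w in zip(H, W)])

# Fit a quadratic polynomial response surface (one common metamodel form).
X = np.column_stack([np.ones_like(H), H, W, H * W, H**2, W**2])
coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)

def metamodel(h, w):
    """Near-instant surrogate suitable for slider-driven graphical interfaces."""
    x = np.array([1.0, h, w, h * w, h**2, w**2])
    return float(x @ coeffs)

print(metamodel(5.0, 3.0))
```

In an interface of the kind described below, the metamodel would be re-evaluated on every slider movement, trading some accuracy for a response that feels instantaneous.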

2

Descriptions of Design Exercises and Software

Two graphical design interfaces (GDIs) have been developed to assess their impact on design efficiency and effectiveness. Corresponding text-based design interfaces (TDIs) have also been developed to serve as controls in each experiment. Two engineering design problems have been selected as motivating examples in these interfaces. The first is an I-beam design example adapted from Haftka and Gurdal [29] wherein the user varies the cross-section of an I-beam, subject to a bending moment, to minimize the cross-sectional area and the bending stress. The second example involves the design of a desk lamp to maximize the mean


Fig. 2 Graphical Design Interface for I-Beam Example

illuminance and minimize the standard deviation of the illuminance on a predetermined surface area (e.g., a paper or a book) on a desk using the analyses in Ref. [30]. For our graphical-based interfaces, the user manipulates the values of design variables using the mouse to control slider bars on the screen. The resulting system performance is displayed graphically in a plot of the performance (response) space as the slider bars are adjusted, and the user can investigate solutions by clicking the mouse on any point that has been plotted in the performance space. In the text-based design interfaces, the user enters the values of design variables using the keyboard, and the resulting performance is displayed numerically in text boxes, which are stored in a drop-down menu that the user can browse using the mouse. In both examples, the impact of delay in the display of the system performance is studied. For the I-beam example, the analysis is virtually instantaneous, making it easy to study the effect of response delay by imposing an artificial delay. For large complex systems, however, detailed analyses often dictate large response delays, and rapid response can only be achieved by using metamodels as discussed in the previous section. We employ the metamodels developed in Ref. [31] for the desk lamp example to investigate their benefit as fast surrogates for detailed analyses that have large response delays. Using both interfaces, users are asked to perform two design exercises:

1. Free-form design exercise: find the "best" design to simultaneously minimize the two competing objectives. There is no response delay present during this portion of the design exercise.

2. Weighted-sum design exercises: find the "best" design to minimize a weighted-sum of the two competing objectives. Each user solves three weighted-sum problems where the value of the weighting factor, α, is varied, along with the response delay.

The experimental set-up is described in detail in Section 3. The two interfaces for the I-beam example are discussed in the next section; Section 2.2 contains an overview of the two desk lamp design interfaces.

2.1 I-Beam Design Example

I-Beam Graphical Design Interface. In the I-beam GDI, users can adjust the height, h, and width, w, of the I-beam cross-section to vary the cross-sectional area, A, and the imposed bending stress, σ. As users change h and w using the slider bars, the

corresponding A and σ values appear in the performance (response) space (xy-plot of A vs. σ) as shown in Fig. 2. Normalized measures of A and σ are used to help alleviate scaling inconsistencies on the plot. The GDI for the I-beam example was developed using Visual Basic 6.0. The graphical design interface allows the user to change the scale of the plot of the performance space by selecting any of the three Plot Scale buttons that appear in the upper right portion of Fig. 2. The Color buttons allow the user to change the color of new points as they are plotted in the performance space. The performance space plot can be cleared of all points using the Clear button. Users can check the quality of a specific design by using the mouse to click on a point that has been plotted within the performance space. The numerical values of the normalized measures of A and σ then appear in the right-hand portion of the screen in the output display, and the slider bars return to the corresponding h and w values for that design. For the free-form design exercise, users seek to resolve the tradeoff between the two competing objectives by finding the best combination of h and w that simultaneously minimize the normalized measures of A and σ. For the weighted-sum design exercises, users seek to find the combination of h and w that minimizes the weighted-sum objective function, F:

\min F = \alpha \frac{A - A_{\min}}{A_{\max} - A_{\min}} + (1 - \alpha) \frac{\sigma - \sigma_{\min}}{\sigma_{\max} - \sigma_{\min}}    (1)

\text{s.t.} \quad 0.2 \le h \le 10.0~\text{in}, \quad 0.1 \le w \le 10.0~\text{in}

where α is a scalar weighting factor (0 ≤ α ≤ 1); A and σ are the area and stress in the I-beam; A_max and A_min are the maximum and minimum possible areas, respectively; and σ_max and σ_min are the maximum and minimum possible stresses, respectively, based on the slider bar limits. Contour lines of constant F are automatically displayed in the GDI during each weighted-sum design exercise (see Fig. 2), where the slope of the contour line depends on the value of α. In addition, the numerical value of F is displayed when a design point is selected. In many bicriteria optimization problems, a weighted-sum approach may not yield good solutions [32–34]; however, the weighted-sum approach is suitable in this example because of the convex nature of the Pareto frontier.
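As a concrete illustration of Eq. (1), the short sketch below evaluates the normalized weighted-sum objective for a candidate design. The numerical values are illustrative assumptions only; the actual A(h, w) and σ(h, w) analyses come from the I-beam model of Haftka and Gurdal [29] and are not reproduced here.

```python
def weighted_sum_F(A, sigma, alpha,
                   A_min, A_max, sigma_min, sigma_max):
    """Normalized weighted-sum objective of Eq. (1); smaller is better."""
    A_hat = (A - A_min) / (A_max - A_min)
    s_hat = (sigma - sigma_min) / (sigma_max - sigma_min)
    return alpha * A_hat + (1.0 - alpha) * s_hat

# Illustrative values only -- the bounds would come from the slider bar limits
# and A, sigma from the I-beam analysis for the current h and w.
print(weighted_sum_F(A=12.0, sigma=800.0, alpha=0.5,
                     A_min=2.0, A_max=40.0, sigma_min=100.0, sigma_max=5000.0))
```

Because both terms are normalized to [0, 1], the contours of constant F plotted in the GDI are straight lines whose slope is set by the ratio of α to (1 − α).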

Fig. 3 Text-based Design Interface for I-Beam Example

I-Beam Text-Based Design Interface. Figure 3 shows the layout and functionality of the text-based design interface for the I-beam. Text boxes in the lower right portion of the interface allow the user to enter values of the design variables using the keyboard; changing the design variables causes the values of performance variables to be displayed in the right portion of the interface. Previously analyzed designs are stored and can be accessed by using the drop-down menu as shown in Fig. 3. As with

the graphical-based interface, the user is asked to complete free-form design and weighted-sum design exercises using this text-based design interface.

2.2 Desk Lamp Design Example

Desk Lamp Graphical Design Interface. The desk lamp design example has three design variables: reflector length, L_1, rod

Fig. 4 Graphical Design Interface for Desk Lamp Example


Fig. 5 Text-based Design Interface for Desk Lamp Example

length, L_2, and reflector angle, θ. The GDI for this example is shown in Fig. 4 and was also developed using Visual Basic 6.0. As the user changes the values of L_1, L_2, and θ using slider bars, the mean, μ, and the corresponding standard deviation of the illuminance, σ, are displayed in the performance space (xy-plot of μ vs. σ). Since both μ and σ are competing design objectives, the user seeks to find the best combination of L_1, L_2, and θ that maximizes the mean illuminance, μ, and minimizes its respective standard deviation, σ. The overall layout and functionality of the desk lamp graphical design interface is very similar to the I-beam GDI as discussed in Section 2.1. In the desk lamp GDI, the user can check the quality of a specific design by using the mouse to click on a point plotted within the performance space. The numerical values for μ and σ then appear to the right of the performance space in the output display shown on the right-hand side of Fig. 4. For the free-form design exercise, the user seeks to resolve the tradeoff between the two competing objectives by finding the best combination of L_1, L_2, and θ that simultaneously maximizes μ and minimizes σ. In the weighted-sum exercises, the user seeks to resolve tradeoffs between μ and σ using a weighted-sum objective function, F:

\min F = -\alpha \frac{\mu - \mu_{\min}}{\mu_{\max} - \mu_{\min}} + (1 - \alpha) \frac{\sigma - \sigma_{\min}}{\sigma_{\max} - \sigma_{\min}}    (2)

\text{s.t.} \quad 50 \le L_1 \le 100~\text{mm}, \quad 300 \le L_2 \le 500~\text{mm}, \quad 0° \le \theta \le 45°

Here F is a weighted-sum of normalized measures of μ and σ, where μ_max and μ_min are the maximum and minimum possible values for mean illuminance, respectively, and σ_max and σ_min are the maximum and minimum possible standard deviations for illuminance, respectively. The scalar weighting factor, α, ranges from 0 to 1, and the optimal design maximizes the normalized μ by using −α for the first term in Eq. (2).

Desk Lamp Text-based Design Interface. As with the I-beam experiment, a text-based design interface was developed for the desk lamp design problem. Figure 5 displays the layout and

functionality of the desk lamp text-based interface. The controls and operation of the desk lamp text-based design interface are identical to those of the I-beam text-based design interface except that it has three design variables.
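For completeness, a similarly hedged sketch of Eq. (2) shows how the minus sign on the first term turns "maximize mean illuminance" into a minimization. Again, the numbers are invented for illustration; the real μ and σ values come from the metamodels of Ref. [31].

```python
def desk_lamp_F(mu, sigma, alpha,
                mu_min, mu_max, sigma_min, sigma_max):
    """Normalized weighted sum of Eq. (2); the negated first term rewards
    larger mean illuminance while the second term penalizes its spread."""
    mu_hat = (mu - mu_min) / (mu_max - mu_min)
    s_hat = (sigma - sigma_min) / (sigma_max - sigma_min)
    return -alpha * mu_hat + (1.0 - alpha) * s_hat

# Illustrative numbers only; bounds would come from the slider bar limits.
print(desk_lamp_F(mu=420.0, sigma=60.0, alpha=0.5,
                  mu_min=100.0, mu_max=800.0, sigma_min=10.0, sigma_max=200.0))
```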

3

Experimental Setup

Careful experimental design was required to properly assess the impact of response delay on user performance. The series of experiments that was developed to test the impact of delay on design efficiency and effectiveness is described in this section. In the following four sections, we identify the factors that were varied in the experiments, the performance measures, the experimental design, and the procedures that were used to conduct the experiments. The results are analyzed and discussed in Section 4.

3.1 Experimental Factors. Experiments were completed using the two examples and four interfaces described in Section 2. Three levels of response delay were tested in the experiments: 0 s, 0.1 s, and 0.5 s, where user performance was expected to decline as response delay increased. The delays of 0.1 s and 0.5 s were smaller than the 1.5 s delay used by Goodman and Spence [22] in a less realistic design exercise; however, they were large enough to provide a noticeable effect in the behavior of the interface. For the weighted-sum exercises, three levels of the weighting factor, α, were also tested in the experiments: α = 0.1, 0.5, and 0.9.

3.2 Performance Measures. Our performance measures were related to the design effectiveness and efficiency of the users in completing the design exercises. Design effectiveness was measured by calculating the percent error between each submitted design and the known optimal design. For our analyses, we found that the logarithm of the percent error tended to produce more normally distributed variation; hence, the natural logarithm of percent error, ln(error), was used for the analyses reported in Section 4. Design efficiency was measured by recording the completion time for each design task. The natural logarithm of task completion time, ln(completion time), was also used to produce more normally distributed variation. Users were also asked to complete survey questionnaires before (pre-test), during (mid-test), and after (post-test) the experiment. Responses to these questions were used to test for significant correlation with user performance.
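A minimal sketch of the two performance measures follows, under the assumption that percent error is computed on the objective value relative to the known optimum (the exact error formula is not spelled out above, so that reading is an assumption):

```python
import math

def ln_error(F_submitted, F_optimal):
    """ln(percent error) between a submitted design and the known optimum.
    Assumes the error is measured on the objective value; illustrative only."""
    percent_error = abs(F_submitted - F_optimal) / abs(F_optimal) * 100.0
    return math.log(percent_error)

def ln_completion_time(seconds):
    """ln(task completion time), the design-efficiency measure."""
    return math.log(seconds)

# Hypothetical numbers: submitted F = 0.42 vs. optimal F = 0.40, solved in 180 s.
print(ln_error(0.42, 0.40), ln_completion_time(180.0))
```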

Table 1 Single Replicate Assignment of Experiments

Test Subject | Run 1: α, Delay (s) | Run 2: α, Delay (s) | Run 3: α, Delay (s)
1 | 0.1, 0.0 | 0.5, 0.1 | 0.9, 0.5
2 | 0.5, 0.1 | 0.9, 0.0 | 0.1, 0.5
3 | 0.9, 0.5 | 0.1, 0.0 | 0.5, 0.1
4 | 0.9, 0.0 | 0.1, 0.1 | 0.5, 0.5
5 | 0.1, 0.1 | 0.5, 0.5 | 0.9, 0.0
6 | 0.9, 0.1 | 0.1, 0.5 | 0.5, 0.0
7 | 0.1, 0.5 | 0.5, 0.0 | 0.9, 0.1
8 | 0.5, 0.0 | 0.9, 0.5 | 0.1, 0.1
9 | 0.5, 0.5 | 0.9, 0.1 | 0.1, 0.0

3.3 Experimental Design. Each subject was assigned three weighted-sum experiments for the same design exercise; therefore, run number was considered another factor when designing the experiment. A Graeco-Latin square design was used to test the three factors—response delay, α, and run number—for a given GDI or TDI. The Graeco-Latin square design was constructed such that for the three experiment runs, each level of time delay occurs only once with the three levels of α. With three levels for each factor, nine test subjects were required for one replicate of the design.

3.4 Experimental Protocol and Location of Experiments. Test subjects were provided with two resources to guide them through the design experiment. First, the subjects were given a packet of written instructions that included a brief outline of the experiment instructions and a human subject consent form. Also included in the packet was a one-page overview of the specific problem exercise, including problem background and some software instruction. Finally, the packet included three questionnaires that were completed throughout the course of the experiment. The test subjects were also supplied with a CD that included a training and demonstration video in addition to the software. The packet instructed the subjects exactly how to run the training video and design software. The packet of instructions also assigned the three weighted-sum experiments that the subjects were to perform and the order in which they were to perform them. Test subjects would select their assigned experiments using the setup pull-down menu available in

Table 2 Summary of Test Subjects

Data Source | Number of Subjects
PSU | 64
UBuffalo | 37
UIC | 12
U of M | 6
UPenn | 14
Total | 133

the upper left-hand corner of each interface. Table 1 summarizes the experiment assignments for the nine test subjects who complete one replicate of the experimental design.

Experiments for the I-beam and desk lamp exercises were conducted by 133 graduate students at several universities from December 2000 through April 2002. The universities participating in the study included the Pennsylvania State University (PSU), the University at Buffalo (UBuffalo), the University of Illinois at Chicago (UIC), the University of Michigan (U of M), and the University of Pennsylvania (UPenn). These sites were identified based on our personal contacts at each university and student availability in graduate-level design optimization courses. Table 2 summarizes the number of test subjects that participated at each university. The majority of the test subjects completed their design exercise in less than the 30 minutes allocated to them; however, those experiments exceeding 30 minutes were also accepted. The level of education of the participating graduate students ranged from first-year M.S. students to fifth-year Ph.D. students, and 80% of the test subjects were male, with ages ranging from 22 to 35 years. The test subject population was composed of 65% Mechanical Engineering students and 28% Industrial Engineering students; the remaining 8% were students from Mechanics, Mechanical Engineering Design, Architecture, Engineering Optimization, Math, and Aerospace Engineering.

Table 3 summarizes the evolution of the design experiments conducted at each of these five universities, which resulted from slight modifications to the experimental protocol as results were obtained and analyzed. Specifically, increased response delay time and increased instruction were tested in the I-beam GDI in March and April 2002 in hopes of reducing variability in the experimental results as discussed in the next section. A response delay of up to 1.5 s was tested at UIC, along with increased instruction, as part of a graduate-level design optimization course; increased instruction was simultaneously tested at PSU as well. The increased instruction included a modified overview and explanation of the design exercise. The new overview began with a more detailed explanation of the problem background and design objective derivation. The overview was broken into two sections, one explaining the free-form design experiment, and the other explaining the three weighted-sum experiments. The explanation of the weighted-sum experiments included a better description of how to minimize the weighted-sum objective using the objective function contours. The new overview also encouraged test subjects to generate as many designs as possible to help identify the best design.

For the experiments conducted at PSU, the increased instruction eliminated the use of the training video provided on the CD. Instead, an experienced supervisor advised the test subjects one-on-one throughout the experiment. This supervisor would actively guide the test subjects as they performed the demonstration experiment. The supervisor was provided with a detailed outline to discuss all software features and also clarify parts of the design exercises that are commonly misunderstood by the users, such as the objective function contours. The supervisor was present to

Table 3 Evolution of Experiments and Changes in Protocol

Date | Location | Environment | Supervision | Interface | n | Instruction | Delays
Dec 2000 | PSU | Private | Supervised | I-Beam GUI | 19 | Original | Original
July-Aug 2001 | PSU | Private | Supervised | I-Beam Text | 18 | Original | Original
July-Aug 2001 | PSU | Private | Supervised | Desklamp Text | 18 | Original | Original
July 2001 | U of M | Private/Lab | Unsupervised | Desklamp GUI | 6 | Original | Original
Sept 2001 | UPenn | Private/Lab | Unsupervised | Desklamp GUI | 14 | Original | Original
Oct 2001 | UBuffalo | Lab | Supervised | I-Beam GUI | 8 | Original | Original
Oct 2001 | UBuffalo | Lab | Supervised | I-Beam Text | 9 | Original | Original
Oct 2001 | UBuffalo | Lab | Supervised | Desklamp GUI | 14 | Original | Original
Oct 2001 | UBuffalo | Lab | Supervised | Desklamp Text | 6 | Original | Original
March 2002 | PSU | Lab | Supervised | I-Beam GUI | 9 | Increased | Original
April 2002 | UIC | Private/Lab | Unsupervised | I-Beam GUI | 12 | Increased | Increased


Table 4 Summary of Results of Hypothesis Tests

Hypothesis | Sample Size | P-Value: I-Beam | P-Value: Desk Lamp

Primary Hypotheses
1. Delay Affects Error in GDI | 27 IB/34 DL | 0.403 | 0.022a
2. Delay Affects Completion Time in GDI | 27 IB/24 DL | 0.115 | 0.304
3. Error Unaffected by Delay in TDI | 27 IB/34 DL | 0.084 | 0.581
4. Completion Time Unaffected by Delay in TDI | 27 IB/24 DL | 0.058 | 0.519
5. GDI Users Faster than TDI Users | 82 GUI/51 Text | 0.000a | 0.000a
6. Lower Error with GDI than TDI | 82 GUI/51 Text | 0.562 | 0.034a
7. Error Variance is Equal for GDI and TDI | 82 GUI/51 Text | 0.000a (combined)
8. Completion Time Variance is Equal for GDI and TDI | 82 GUI/51 Text | 0.000a (combined)
9. Delay Affects Design Submission | 27 IB/34 DL | 0.021a | —

Secondary Hypotheses

Ordering of Weights during Multiobjective Optimization
10a. Error Unaffected by Weight (GDI only) | 27 IB/34 DL | 0.466 | 0.036a
10b. Error Unaffected by Weight (TDI only) | 27 IB/24 DL | 0.148 | 0.000a
11a. Completion Time Unaffected by Weight (GDI only) | 27 IB/34 DL | 0.088 | 0.810
11b. Completion Time Unaffected by Weight (TDI only) | 27 IB/34 DL | 0.239 | 0.125

Learning Effect
12a. Run Number Affects Error (GDI only) | 27 IB/34 DL | 0.483 | 0.000a
12b. Run Number Affects Error (TDI only) | 27 IB/24 DL | 0.233 | 0.855
13a. Run Number Affects Completion Time (GDI only) | 27 IB/34 DL | 0.000a | 0.000a
13b. Run Number Affects Completion Time (TDI only) | 27 IB/24 DL | 0.000a | 0.000a

Pre-Test Questionnaire Responses
14. Rating of Computer Knowledge Affects Error | 27 IB/34 DL | — | 0.101 (TDI)
15. Rating of Computer Usage Affects Error | 27 IB/34 DL | — | 0.125 (TDI)
16. Rating of Computer Understanding Affects Error | 27 IB/34 DL | — | 0.004a (GDI)
17. Rating of Video Game Playing Affects Error | 27 IB/34 DL | 0.055a (TDI) | —
18. Rating of Optimization Knowledge Affects Error | 27 IB/34 DL | 0.022a (GDIs combined)
19. Rating of GUI Development Affects Error | 27 IB/34 DL | 0.087 (GDI) | 0.093 (GDI)

Post-Test Questionnaire Responses
20. Rating of Software Ease Affects Error | 27 IB/34 DL | 0.046a (TDI) | 0.001a
21. Rating of Contours Affects Error | 27 IB/34 DL | 0.224 (GDIs combined)
22. Rating of Ease of Sliders/Zoom Affects Error | 27 IB/34 DL | 0.083 (GDIs combined)
23. Rating of User Frustration Affects Error | 27 IB/34 DL | 0.154 (TDI) | 0.146
24. Rating of Ease of Selecting/Submitting Affects Error | 27 IB/34 DL | 0.002a | —
25. Rating of Freeform Satisfaction Affects Error | 27 IB/34 DL | — | 0.021a
26. Rating of User Confidence Affects Error | 27 IB/34 DL | — | 0.040a
27. Rating of Problem Understanding Affects Error | 27 IB/34 DL | 0.990 (GDIs combined)

Impact of Increased Delay and Instruction
28. Increased Delay (1.5 s) Affects Error | 12 IB (GDI only) | 0.574 | NA
29. Increased Delay (1.5 s) Affects Completion Time | 12 IB (GDI only) | 0.056 | NA
30. Variance Reduced with Increased Instruction | 21 IB | 0.137 | NA

a indicates results statistically significant at p-value ≤ 0.05.
— indicates results that were not statistically significant and were removed during step-wise regression analysis.



guide the test subjects throughout the entire design exercise and to answer any clarification questions during the experiment.
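To illustrate the assignment scheme of Section 3.3, the sketch below builds a 3×3 Graeco-Latin square by pairing two orthogonal Latin squares over the α and delay levels from Section 3.1. It is a generic construction for illustration only, not necessarily the square behind Table 1, which lists the full nine-subject replicate actually used.

```python
from itertools import product

# Factor levels from Section 3.1.
alphas = [0.1, 0.5, 0.9]
delays = [0.0, 0.1, 0.5]  # seconds

# Pair two orthogonal 3x3 Latin squares: every alpha and every delay appears
# once per row (subject within a block of three) and once per column (run),
# and every (alpha, delay) pair occurs exactly once across the square.
assignments = {}
for subject, run in product(range(3), range(3)):
    a = alphas[(subject + run) % 3]       # first Latin square
    d = delays[(subject + 2 * run) % 3]   # orthogonal second Latin square
    assignments[(subject + 1, run + 1)] = (a, d)

for (subject, run), (a, d) in sorted(assignments.items()):
    print(f"subject {subject}, run {run}: alpha={a}, delay={d} s")
```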

4

Hypotheses Results and Discussion

Several hypotheses were established prior to any experimentation and data analysis to reflect the research objectives outlined in Fig. 1. The primary hypotheses were those concerning the impact of response delay and graphical feedback on user performance; however, the impact of the ordering of the runs, as well as analysis of questionnaire responses, led to the establishment of several secondary hypotheses. Statistical testing of each hypothesis was conducted, and the results are summarized in Table 4 and discussed in this section. A point-by-point discussion of each hypothesis test can be found in Ref. [35], and we refer the reader to Refs. [36,37] for more detailed information on the design and analysis of experiments and hypothesis testing using t-tests and F-tests for comparing differences in means and variances, respectively.

4.1 Primary Hypotheses (Hypotheses 1–9): Results and Discussion. The impact of response delay and graphical feedback on user performance is assessed with our primary hypotheses. As seen from Table 4, there was no statistically significant increase in ln(error) as response delay varied from no delay to 0.1 s to 0.5 s for the I-beam GDI experiments. Plots of ln(error) and ln(completion time) versus response delay for the I-beam GDI are shown in Fig. 6. In both figures, we can see positive increases in the average ln(error) and ln(completion time), but these increases are not statistically significant for the I-beam GDI. A significant

increase (p = 0.022) in ln(error) was observed in the desk lamp GDI as response delay increased, but increased response delay did not significantly affect ln(completion time) when using the desk lamp GDI. A correlation analysis of ln(error) and ln(completion time) for both GDIs for the increased delay did reveal a significant negative correlation between the two performance measures, such that ln(error) increased as ln(completion time) decreased, as one might expect. For the text-based design interfaces, we hypothesized that response delay would have little effect on error or task completion time since there was no real-time graphical feedback; however, there was statistical evidence that response delay does have some effect on ln(error) and ln(completion time) for the I-beam TDI but not for the desk lamp TDI. Interestingly, error did not necessarily increase as delay increased. In fact, error was lowest at the highest level of response delay for the I-beam TDI, but completion time was the highest at this level of delay. To test whether graphical design interface users were faster than text-based design interface users, we first examined the I-beam and desk lamp data separately. Using the I-beam experimental data, the mean ln(completion time) was compared between the GDI and TDI. There is statistically significant evidence that the mean ln(completion time) between the two interfaces differed, where users of the I-beam GDI took significantly less time than users of the TDI (p = 0.000). For the desk lamp, there is also a statistically significant difference in mean ln(completion time) between the desk lamp GDI and TDI, where users of the GDI take less time (p = 0.000). In terms of effectiveness, the average

Fig. 6 Impact of Response Delay on Error and Completion Time for I-Beam GDI

ln(error) was lower for the I-beam GDI users when compared to the I-beam TDI users, but this difference was not statistically significant due to high variance in the error. The same analysis was conducted for the users of the desk lamp GDI and TDI. For this exercise, TDI users performed better than users of the GDI (p = 0.034), which may have been caused by an unusual trajectory in the objective function in the desk lamp GDI. Figures 7a and 7b illustrate the typical search results when moving the slider bars one-at-a-time for the desk lamp and I-beam GDIs, respectively. It is clear that a smooth search path is not always achievable within the desk lamp GDI, making it difficult for subjects to use intuition to help identify the best desk lamp design. When examining both design exercises, we found that the variances in ln(error) and ln(completion time) for the GDIs and TDIs are not equal. There was a statistically significant difference

between the variance in ln(error) of the two groups (p = 0.000), where the GDI users had significantly lower variance in ln(error) than the TDI users. There was also a statistically significant difference between the variance in ln(completion time) for the two types of interfaces (p = 0.000), where the TDI users had significantly lower variance in ln(completion time) than the GDI users. These results are supported by the box plots shown in Fig. 8, which compare user performance on the two sets of interfaces. We see little difference in average ln(error) between the GDI and TDI users in Fig. 8a, but we notice a much larger variation in the error for the TDI users. In Fig. 8b, we see a larger difference in ln(completion time), with GDI users being faster on average and a much tighter variance among the TDI users compared to the GDI users, indicating that TDI users spent consistently more time with the interface than GDI users, as we expected. Finally, we wanted to determine if response delay had any

Fig. 7 Typical Search Results When Moving Input Slider Bars One-at-a-Time

Fig. 8 Comparison of Performance on Graphical and Text-Based Design Interfaces


impact on selecting and submitting a design. Our hypothesis was that users who rate design selection and submission more difficult due to the response delay on their post-test questionnaire will perform worse. Users of the I-beam GDI who found the selection task more difficult due to the delays perform with significantly lower error (p = 0.021). This result suggests that the response delay was noticeable to the test subjects and made selecting and submitting the "best" design more difficult, but the magnitude of the delays was not large enough to affect performance as noted previously.

4.2 Secondary Hypotheses (Hypotheses 10–30): Results and Discussion. The remaining hypotheses listed in Table 4 are secondary hypotheses that helped us pinpoint areas of concern and future improvement, such as increased training to account for the learning effect that we found. To facilitate discussion, the hypotheses have been grouped into five categories: ordering of weights during multiobjective optimization (Hypotheses 10 & 11), learning effect (Hypotheses 12 & 13), pre-test questionnaire responses (Hypotheses 14–19), post-test questionnaire responses (Hypotheses 20–27), and impact of increased delay and instruction (Hypotheses 28–30). As stated previously, a detailed discussion of each individual hypothesis can be found in Ref. [35].

Ordering of weights during multiobjective optimization. Even though we controlled the run order in our experimental design (see Table 1), we hypothesized that the ordering of the weighting factor, α, would not affect ln(error) or ln(completion time), i.e., user performance with either interface should not depend on which weighted-sum objective the user was trying to minimize. This hypothesis was true for the I-beam GDI and TDI (i.e., the weighting had no effect), but this was not the case for the desk lamp. The weighting factor had a significant effect on ln(error) for both desk lamp interfaces (p = 0.036 for GDI and p = 0.000 for TDI), and the highest error occurred when α = 0.1. The weighting factor did have a suggestive, but not significant, influence on ln(completion time) for the I-beam GDI (p = 0.088), but it did not have a significant impact on the remaining three exercises, namely, the desk lamp GDI and TDI, and the I-beam TDI. On average, users spent less time when α was 0.1, which also helps explain why error was highest for experiments where α = 0.1.

Learning effect. Despite the demonstration and software training implemented in the experimental protocol, each of the four interfaces had a significant learning effect in terms of ln(completion time), where completion time is highest during the first run. Also, the four data sets show that there is very little difference in completion time between the second and third run, indicating that the learning effect occurs immediately after the first run. The anticipated learning effect on ln(error), where error decreases as run number increases, was true for the I-beam GDI and TDI; however, the results were not statistically significant. Run number was significant in explaining ln(error) for the desk lamp GDI, but the error was much lower for the first run and about the same for the second and third runs. This suggests a fatiguing effect as opposed to a learning effect.
The addition of motivational feedback would likely help in eliminating this degradation of performance error throughout the three weighted-sum experiments—a test subject who knowingly did not achieve an optimal design should be more inclined to develop better strategies to achieve optimality in the next experiment. No statistically significant effect of run number on ln(error) existed for the desk lamp TDI data.

Pre-Test Questionnaire Responses. For the pre-test questionnaires, we asked users to rate their level of computer knowledge, weekly usage, and overall understanding; the amount of time spent playing video games; their knowledge of multiobjective optimization; and their experience with GUI development on a 5-point Likert scale (1 lowest, 5 highest). Stepwise regression analysis was used to determine which factors, if any, had a significant effect on ln(error), and the most significant results are reported in Table 4. While many of these factors did not have a significant impact on user performance as we had anticipated, we

did find that users who rated themselves with a greater understanding of multiobjective optimization had significantly lower ln(error), especially when using the GDIs (p = 0.022 for GDIs combined). Computer knowledge and usage did not significantly affect ln(error), but people with a greater understanding of computers actually performed significantly worse with the desk lamp interfaces (p = 0.004 for GDI and p = 0.040 for TDI); however, this effect was not found to be significant for the I-beam interfaces. This result suggests that users with greater understanding of computers may need more training on the problem mechanics and strategies to reach the optimal design. Previous experience in GUI development also negatively impacted user performance, although not significantly; users who had previous experience in GUI development had higher ln(error) than those with no experience. Those with GUI development skills may have concentrated less on the design task and more on how the interface was designed or how it could be improved. Finally, frequent video game playing caused an increase in ln(error), which was almost statistically significant (p = 0.055). This result suggests that users interested in video game playing may not concentrate on the task at hand but rather on "playing" with certain software features, such as the input slider bars. We found that video game playing also had a significant effect on completion time: those that played video games more frequently spent longer using the TDIs (p = 0.055); the same suggestive effect was found with the GDIs, but it was not as statistically significant. None of the other factors had a significant effect on ln(completion time).

Post-Test Questionnaire Responses. After completing the design tasks, users were asked to rate the ease of using the software, the slider bars and zoom, selecting and submitting a design, and the objective function contours. They were also asked to rate their frustration with the software, their satisfaction with their freeform design, their confidence in their final design, and their overall understanding of the problem. Stepwise regression analysis was used to determine which factors had a significant effect on ln(error) based on users' ratings of each item on a 5-point Likert scale (1 lowest, 5 highest). For the desk lamp GDI, we found that users who rated the interfaces easy to use had significantly (p = 0.001) higher ln(error). Much like users interested in video gaming, a "playing" effect may have been present, taking away concentration from the design task, and thus increasing performance error. This result is still significant when the data sets for both GDIs are combined. Likewise, users that found the slider bars and zoom control easy to use performed with higher ln(error), although this result was only suggestive and not statistically significant (p = 0.083 for GDIs combined). These users also took significantly longer to complete the task, which further supports the "playing" effect. Ease of use of the objective function contours did not have a significant effect on ln(error) or ln(completion time). Evidence suggests that users of the desk lamp GDI that found the software frustrating to use performed with lower error; however, this result was not statistically significant (p = 0.146). The task of selecting one optimal design among hundreds of possible designs on the objective plot can certainly be frustrating.
Those users who were more determined to submit the optimal design and perform with lower error likely experienced this tedium of searching through a large number of possible designs, but users of the I-beam GDI who rated selecting and submitting designs easy performed with significantly (p = 0.002) lower ln(error). Meanwhile, users of the desk lamp GDI who were more satisfied with their freeform design after completing the weighted-sum experiments performed with significantly higher ln(error) in the weighted-sum experiments (p = 0.021). This result remains significant only at the 90% confidence level when the data sets of both GDIs are combined. Users of the desk lamp GDI that were more confident with their final designs also performed with significantly (p = 0.040) lower ln(error); no significant relationship was found with the I-beam GDI or the GDIs combined. It appears that users

confident with their final design submissions had a good understanding of the desk lamp design task and the strategies required to reach the optimum; however, user ratings of their understanding of the problem did not have a significant effect on user performance.

Increased Delay and Instruction. Since response delays of up to 0.5 s did not have a statistically significant effect on the task completion time for either the I-beam or the desk lamp GDI, we increased the response delay to 1.5 s for the I-beam GDI tests that were conducted at UIC in April 2002. The increased delay did not have a statistically significant effect on ln(error) due to the small sample size (n = 12), but ln(completion time) was definitely impacted by the longer response delay, where experiments with the longer response delay took longer than those with no delay (p = 0.056). The mean completion times for experiments with 0.5 s delay and 1.5 s delay were 180.3 s and 229.1 s, respectively, which were much higher than the mean completion time with no response delay (125.2 s). The increased set of instructions did yield lower variance for both ln(error) and ln(completion time) for the I-beam GDI, but the difference was not statistically significant. Interestingly, we found that when the I-beam GDI experiments were conducted with the increased response delay, the weighting factor did have a significant (p = 0.039) effect on ln(error). The highest average error occurred when α = 0.1, which is when the slope of the objective function contours is closest to 0. This effect may have been caused by the use of slider bars to vary design (input) variable values over a discretized range. Even though the slider bar discretization is uniform, the resulting Pareto frontier in the performance space is not uniformly spaced, as seen in Fig. 2 and Fig. 7b, which is not an uncommon phenomenon [33]. When α = 0.9, the gaps in the Pareto frontier are larger than the gaps when α = 0.1, making it easier to identify the optimal design; it is much more difficult to select the optimal design when α = 0.1 since the points are tightly spaced for small values of σ.
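The mean and variance comparisons discussed throughout this section rely on standard two-sample t-tests and F-tests (see Refs. [36,37]). The sketch below shows the general form of such a comparison on synthetic data; the sample sizes echo Table 4, but the values themselves are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical ln(completion time) samples for GDI and TDI users.
ln_time_gdi = rng.normal(loc=4.8, scale=0.6, size=82)
ln_time_tdi = rng.normal(loc=5.3, scale=0.3, size=51)

# Two-sample t-test for a difference in means (Hypothesis 5 style),
# using Welch's form, which does not assume equal variances.
t_stat, p_mean = stats.ttest_ind(ln_time_gdi, ln_time_tdi, equal_var=False)

# F-test for equality of variances (Hypothesis 8 style): ratio of sample
# variances compared against an F distribution, two-sided.
f_stat = np.var(ln_time_gdi, ddof=1) / np.var(ln_time_tdi, ddof=1)
dfn, dfd = len(ln_time_gdi) - 1, len(ln_time_tdi) - 1
p_var = 2 * min(stats.f.sf(f_stat, dfn, dfd), stats.f.cdf(f_stat, dfn, dfd))

print(f"t-test p = {p_mean:.3f}, F-test p = {p_var:.3f}")
```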

5

Conclusions and Future Work

The study has provided many insights based on simple design exercises that use fast graphical and text-based design interfaces. Experimental results indicate that users of graphical design interfaces perform better (lower error and faster completion time) on average than those using text-based design interfaces, but this difference is not statistically significant. In terms of rapid feedback, we found that a response delay of 0.5 s does increase error and task completion time, on average, but this increase is not always statistically significant. Trials using longer delays (1.5 s), however, did yield significant increases in task completion time. When evaluating the pre-test and post-test questionnaire responses, it was reassuring to find that those that were more familiar with multiobjective optimization performed better than those less familiar with it. We also found that the perceived difficulty of the design task and the graphical interface controls were inversely correlated with design effectiveness—designers who rated the task more difficult to solve or the graphical interface more difficult to use actually performed better than those who rated them easy. A significant "playing" effect was observed among our test subjects: those who played video games more frequently or rated the slider bars and zoom controls easy to use took more time to complete the design tasks. We also found significantly less variance in ln(error) among GDI users, but TDI users had significantly less variance in ln(completion time)—TDI users took consistently more time to complete the design tasks than GDI users. In fact, some subjects performed surprisingly well with the TDI (see Fig. 8), indicating a possible preference for text-based interfaces. Gardner [38] argues that there are differences among students and teachers that impact the efficiency of different ways of presenting information, and one distinguishing characteristic is right-brain versus left-brain dominance in individuals [39,40].

Individuals with left-brain dominance are likely to prefer a text-based interface, while those with right-brain dominance may prefer a graphical interface. In future studies, we plan to test for this preference by evaluating each subject's dominance with the Hemispheric Dominance Inventory Test, <http://brain.web-us.com/brain/braindominance.htm>. Additional future modifications include increased training to reduce variability among subjects and minimize the learning effect that we observed. The training video initially included on the CD distributed with the software demonstrates the usage of all the GDI and TDI features; however, the video is quick-moving, allowing little opportunity for subjects to practice the features in conjunction with the demonstration. Those users that did not complete or fully understand the demonstration exercise after viewing the video may not have fully understood how to use the software and were subsequently still "training" during the initial design tasks. We are also considering simplifying the problem by making it a single-objective, constrained optimization problem rather than a multiobjective optimization problem to minimize potential confusion about the contours and impact of the weighting factor. Allowing users to manipulate more than one variable at a time might also improve performance. Slider bars restrict users to one-at-a-time variation, which can diminish their ability to explore all possible combinations in the design space. We are currently investigating the use of a "field box" that allows users to manipulate two variables at a time with the mouse; similar studies are also being performed within the virtual reality community (e.g., Pinch™ gloves [16], haptic devices [41]). We are also investigating ways to motivate our subjects better to combat the "fatiguing" effect that we observed in some experiments, especially with the TDIs. The lack of feedback on user performance during the three weighted-sum experiments may have discouraged some users during the experiment since they did not know how well (or how poorly) they were doing. Users who receive feedback on how close their designs are to the optimal design may be more likely to modify and improve their strategies in future experiments in order to achieve better designs. Finally, we are also in the process of developing graphical and text-based design interfaces for industrial applications. We have been working with engineers at (1) The Boeing Company to develop a graphical design interface for a wing design problem involving six design (input) variables, one objective, and three constraints, and (2) Renault to develop a graphical design interface for optimizing a vehicle sub-frame. The sub-frame problem employs a detailed finite element analysis wherein the user can manipulate 12 design (input) variables to minimize the sub-frame's weight and deflection and to maximize its first natural frequency. Pilot studies are currently being conducted for both interfaces.

Acknowledgments We thank Dr. Kemper Lewis at the University at Buffalo, Dr. Wei Chen at the University of Illinois at Chicago, Dr. Panos Papalambros at the University of Michigan, and Dr. G. K. Ananthasuresh at the University of Pennsylvania for their help in gathering data for our study. We also thank our anonymous reviewers for their helpful feedback and suggestions to improve the paper. This work was supported by the National Science Foundation under Grant No. DMI-0084918.

References

[1] Jost, K., 1998, "Chrysler's 'Clean Sheet' V6," Automotive Engineering International, 106(1), pp. 79–81.
[2] Mecham, M., 1997, "Raytheon Integrates Product Development," Aviat. Week Space Technol., October 6, p. 50.
[3] Mecham, M., 1997, "Aerospace Chases the Software Boom," Aviat. Week Space Technol., October 6, pp. 46–48.
[4] Sabbagh, K., 1996, Twenty-First Century Jet: The Making and Marketing of the Boeing 777, Scribner, New York, NY.
[5] Messac, A., and Chen, X., 2000, "Visualizing the Optimization Process in Real-Time Using Physical Programming," Eng. Optimiz., 32(6), pp. 721–747.
[6] Eddy, J., and Lewis, K., 2002, "Multidimensional Design Visualization in Multiobjective Optimization," 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Atlanta, GA, AIAA, AIAA-2002-5621.
[7] Stump, G. M., Yukish, M., Simpson, T. W., and Bennett, L., 2002, "Multidimensional Visualization and Its Application to a Design by Shopping Paradigm," 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Atlanta, GA, AIAA, AIAA-2002-5622.
[8] Winer, E. H., and Bloebaum, C. L., 2002, "Development of Visual Design Steering as an Aid in Large-Scale Multidisciplinary Design Optimization. Part I: Method Development," Structural and Multidisciplinary Optimization, 23(6), pp. 412–424.
[9] Winer, E. H., and Bloebaum, C. L., 2002, "Development of Visual Design Steering as an Aid in Large-Scale Multidisciplinary Design Optimization. Part II: Method Validation," Structural and Multidisciplinary Optimization, 23(6), pp. 425–435.
[10] Anderson, R. K., and Dror, M., 2001, "An Interactive Graphic Presentation for Multiobjective Linear Programming," Applied Mathematics and Computation, 123(2), pp. 229–248.
[11] Kihonge, J. N., Vance, J. M., and Larochelle, P. M., 2002, "Spatial Mechanism Design in Virtual Reality with Networking," ASME J. Mech. Des., 124(3), pp. 435–440.
[12] Jayaram, S., Vance, J. M., Gadh, R., Jayaram, U., and Srinivasan, H., 2001, "Assessment of VR Technology and its Applications to Engineering Problems," ASME J. Comput. Inf. Sci. Eng., 1(1), pp. 72–83.
[13] Jones, C. V., 1994, "Visualization and Optimization," ORSA J. Comput., 6(3), pp. 221–257.
[14] CAD/CAM Update, 1997, "IBM and Daussault Awarded Boeing CATIA Contract," CAD/CAM Update, 9(1), pp. 1–8.
[15] National Research Council, 1998, "Visionary Manufacturing Challenges for 2020," Committee on Visionary Manufacturing Challenges, National Research Council, National Academy Press, Washington, D.C.
[16] Evans, P. T., Vance, J. M., and Dark, V. J., 1999, "Assessing the Effectiveness of Traditional and Virtual Reality Interfaces in Spherical Mechanism Design," ASME J. Mech. Des., 121(4), pp. 507–514.
[17] Card, S. K., Moran, T. P., and Newell, A., 1983, The Psychology of Human-Computer Interaction, Lawrence Erlbaum, Hillsdale, NJ.
[18] Sturman, D. J., Zeltzer, D., and Pieper, S., 1989, "Hands-on Interaction With Virtual Environments," Proceedings of the 1989 ACM SIGGRAPH Symposium on User Interface Software and Technology, pp. 19–24.
[19] Ware, C., and Balakrishnan, R., 1994, "Reaching for Objects in VR Displays: Lag and Frame Rate," ACM Transactions on Computer-Human Interaction, 1, pp. 331–356.
[20] Watson, B., Walker, N., Hodges, L. F., and Worden, A., 1997, "Managing Level of Detail through Peripheral Degradation: Effects on Search Performance in Head-Mounted Display," ACM Transactions on Computer-Human Interaction, 4, pp. 323–346.
[21] Waern, Y., 1989, Cognitive Aspects of Computer Supported Tasks, John Wiley & Sons, New York.
[22] Goodman, T., and Spence, R., 1978, "The Effect of System Response Time on Interactive Computer-Aided Design," Comput. Graphics, 12, pp. 100–104.
[23] Foley, J. D., and Wallace, J. D., 1974, "The Art of Natural Graphic Man-Machine Conversation," Proceedings of the IEEE, 4, pp. 462–471.
[24] Hirschi, N. W., and Frey, D. D., 2002, "Cognition and Complexity: An Experiment on the Effect of Coupling in Parameter Design," Res. Eng. Des., 13(3), pp. 123–131.
[25] Gu, L., 2001, "A Comparison of Polynomial Based Regression Models in Vehicle Safety Analysis," ASME Design Engineering Technical Conferences - Design Automation Conference, A. Diaz, ed., Pittsburgh, PA, ASME Paper No. DETC2001/DAC-21063.
[26] Kleijnen, J. P. C., 1975, "A Comment on Blanning's Metamodel for Sensitivity Analysis: The Regression Metamodel in Simulation," Interfaces, 5(1), pp. 21–23.
[27] Barton, R. R., 1998, "Simulation Metamodels," Proceedings of the 1998 Winter Simulation Conference, D. J. Medeiros, E. F. Watson, J. S. Carson and M. S. Manivannan, eds., Washington, DC, IEEE, pp. 167–174.
[28] Simpson, T. W., Peplinski, J., Koch, P. N., and Allen, J. K., 2001, "Metamodels for Computer-Based Engineering Design: Survey and Recommendations," Eng. Comput., 17(2), pp. 129–150.
[29] Haftka, R., and Gurdal, Z., 1992, Elements of Structural Optimization, 3rd Revised and Expanded Edition, Kluwer Academic Publishers, Boston, MA.
[30] Barton, R. R., Limayem, F., Meckesheimer, M., and Yannou, B., 1999, "Using Metamodels for Modeling the Propagation of Design Uncertainties," 5th International Conference on Concurrent Engineering, N. Wogum, K.-D. Thoben and K. S. Pawar, eds., The Hague, The Netherlands, Centre for Concurrent Enterprising, pp. 521–528.
[31] Meckesheimer, M., Barton, R. R., Simpson, T. W., Limayem, F., and Yannou, B., 2001, "Metamodeling of Combined Discrete/Continuous Responses," AIAA J., 39(10), pp. 1955–1959.
[32] Athan, T. W., and Papalambros, P. Y., 1996, "A Note on Weighted Criteria Methods for Compromise Solutions in Multi-Objective Optimization," Eng. Optimiz., 27(2), pp. 155–176.
[33] Das, I., and Dennis, J. E., 1997, "A Closer Look at Drawbacks of Minimizing Weighted Sums of Objectives for Pareto Set Generation in Multicriteria Optimization Problems," Struct. Optim., 14(1), pp. 63–69.
[34] Messac, A., Sundararaj, J. G., Tappeta, R. V., and Renaud, J. E., 2000, "Ability of Objective Functions to Generate Points on NonConvex Pareto Frontiers," AIAA J., 38(6), pp. 1084–1091.
[35] Simpson, T. W., Frecker, M., and Barton, R. R., 2003, "Impact of Metamodel-Driven Visualization on Design," Proceedings of the 2003 NSF Design, Service and Manufacturing Grantees and Research Conference, R. G. Reddy, ed., Birmingham, AL, The University of Alabama, Tuscaloosa, AL, pp. 478–494.
[36] Mason, R. L., Gunst, R. F., and Hess, J. L., 2003, Statistical Design and Analysis of Experiments, with Applications to Engineering and Science, 2nd Edition, Wiley-Interscience, New York.
[37] Hicks, C. R., and Turner, K. V., 1999, Fundamental Concepts in the Design of Experiments, 5th Edition, Oxford University Press, New York.
[38] Gardner, H., 1983, Frames of Mind: The Theory of Multiple Intelligences, Basic Books, New York, NY.
[39] Edwards, B., 1989, Drawing on the Right Side of the Brain, Torcher/Putnam, New York.
[40] Harb, J. N., Terry, R. E., Hurt, P. K., and Williamson, K. J., 1995, Teaching Through the Cycle, 2nd Edition, Brigham Young University Press, Provo, UT.
[41] Volkov, S., and Vance, J. M., 2001, "Effectiveness of Haptic Sensation for the Evaluation of Virtual Prototypes," ASME J. Comput. Inf. Sci. Eng., 1(2), pp. 123–128.