Engineering with Computers (2007) 23:93–107 DOI 10.1007/s00366-006-0045-7
ORIGINAL ARTICLE
Graphical and text-based design interfaces for parameter design of an I-beam, desk lamp, aircraft wing, and job shop manufacturing system

Timothy W. Simpson · Mary Frecker · Russell R. Barton · Ling Rothrock
Received: 1 September 2005 / Accepted: 2 June 2006 / Published online: 11 October 2006
© Springer-Verlag London Limited 2006
Abstract In this paper we describe four design optimization problems and corresponding design interfaces that have been developed to help assess the impact of fast, graphical interfaces for design space visualization and optimization. The design problems involve the design of an I-beam, desk lamp, aircraft wing, and job shop manufacturing system. The problems vary in size from 2 to 6 inputs and 2 to 7 outputs, where the outputs are formulated as either a multiobjective optimization problem or a constrained, single objective optimization problem. Graphical and text-based design interfaces have been developed for the I-beam and desk lamp problems, and two sets of graphical design interfaces have been developed for the aircraft wing and job shop design problems that vary in the number of input variables and analytical complexity, respectively. Response
delays ranging from 0.0 to 1.5 s have been imposed in the interfaces to mimic computationally expensive analyses typical of complex engineering design problems, allowing us to study the impact of delay on user performance. In addition to describing each problem, we discuss the experimental methods that we use, including the experimental factors, performance measures, and protocol. The focus of this paper is to publicize and share our design interfaces, as well as our insights, with other researchers who are developing tools to support design space visualization and exploration.

Keywords Visualization · Design optimization · Metamodels · Simulation · Graphical user interface
T. W. Simpson (✉)
Departments of Mechanical and Industrial Engineering and Engineering Design, The Pennsylvania State University, 329 Leonhard Building, University Park, PA 16802, USA
e-mail: [email protected]

M. Frecker
Department of Mechanical Engineering, The Pennsylvania State University, University Park, PA 16802, USA

R. R. Barton
Supply Chain and Information Systems, Smeal College of Business, The Pennsylvania State University, University Park, PA 16802, USA

L. Rothrock
Harold & Inge Marcus Department of Industrial & Manufacturing Engineering, The Pennsylvania State University, University Park, PA 16802, USA

1 Introduction
In 1984, Lembersky and Chi developed software and a graphical user interface that incorporated artifact representations of logs to enable timber buckers to position cuts on a log and determine the use for each section (e.g., plank, plywood veneer, pulp). The software provided immediate feedback on the resulting profit per section and overall profit for the log. At the same time, the software computed an optimal design via Dynamic Programming (DP) for log segmenting and product allocation and presented the alternative graphically, adjacent to the cutter's design, in real time. Invariably the DP allocation produced higher profit, but an interesting result of their study was that the timber buckers using the software improved their own cutting abilities. After 1 week of practice on the log simulator/design interface, the timber buckers had developed new strategies for cutting and product allocation based on
viewing the competing (and superior) DP solutions, improving the profitability of their own ad hoc cutting/allocation performance [1].

In the two decades since, advances in computing power and software sophistication have fostered increased interest in visualization and interactive design tools. Today, we find visualization and interactive graphical user interfaces receiving considerable attention in facilitating decision-making and optimization in engineering design [2–13]. A rationale for this continued interest is the lack of consensus on the best computational method for design decisions that involve multiple attributes, uncertain outcomes, and often multiple decision makers [14, 15]. Zionts cites ten myths of multiple criteria decision-making, including (#2) the myth of a single decision maker (it is often a group), (#4) the myth of an optimal solution, (#5) the myth of limiting consideration to nondominated (Pareto-optimal) solutions, and (#6) the myth of the existence of a utility or value function. Competing approaches include weighted objective functions and mathematical programming [16–18], construction of utility functions [19–23], quality function deployment and modifications [24, 25], game theory [26–28], fuzzy set methods [29, 30], and other proxy functions [10, 31–33].

A study by the National Research Council highlighted three requirements for an effective design interface: it must be (1) integrative, (2) visual, and (3) fast, i.e., enable real-time response to user input [34]. Ullman [35] corroborates this, stating that "In order to be useful to the short-term memory, any extension (in the external environment) must share the characteristics of being very fast and having high information content." Despite the apparent advantages and recent advances of visualization techniques for engineering design, we have found limited evidence in the engineering literature that assesses the impact of a fast graphical design interface on the efficiency and effectiveness of engineering design or decision-making. Most research on the effect of response delay on user productivity with design interfaces has focused on simple placement, searching, and editing tasks [36–39] or on the loss of information held in short-term memory [40]. Goodman and Spence [41] examined the effect of response time on the time to complete an artificial task created to mimic design activity; they found an increase in task completion time of approximately 50% for response delays of 1.5 s in the software. For more complex tasks, Foley and Wallace [42] found that response delays of up to 10 s did not have a significant impact. Unfortunately, many design analysis tasks may not be instantaneous, even when calculated using state-of-the-art software on state-of-the-art computers. For
instance, Boeing frequently uses simulation codes that can take 15–18 h to analyze some design applications [43], while researchers at Ford report that a crash simulation of a full passenger car takes 36–160 h to compute [44]. Therefore, we assert that a metamodel-driven design interface provides a strategy for meeting the challenge of creating an integrative, visual, and fast graphical design environment. By metamodels we mean simple mathematical approximations to the input/output functions calculated by the designer's analyses and simulation models [45–47]. Metamodels have been used in a variety of engineering design and optimization applications, and recent reviews can be found in [47–50]. Because the approximations are simple, they are fast (virtually instantaneous), enabling performance analyses to be computed in real time when design (input) variables are changed within a graphical design interface; however, because they are simple approximations, there is a tradeoff between accuracy and speed. Hence, the overarching objective guiding our research is to determine the efficacy of metamodel-driven visualization for graphical design and optimization, as shown in Fig. 1. At the highest level in Fig. 1, our investigations have been divided into two categories: (1) assessing the benefit of having a rapid response to user requests for performance as a function of design parameters, and (2) assessing the cost of lost accuracy due to the use of approximations or metamodels. By working with the metamodels themselves, we can impose artificial delays in the software to simulate computationally expensive analyses. The benefit of rapid response depends on the nature of the design task, the "richness" of the design interface (e.g., text-based versus graphical), and the training received by the user.

In the next section, we describe the four design problems and corresponding interfaces that have been developed as part of our research. The experimental factors, measures, design, and protocol are discussed in Sect. 3, and a brief overview of our findings is given in Sect. 4.
2 Overview of design problems and interfaces

A summary of the design problems and interfaces presented in this paper is given in Table 1. Each problem is formulated as a numerical optimization problem, the standard form of which is given in Eq. 1. The function f is called the objective or cost function, and x is the vector of design variables. There can be a number of inequality constraints, g_j(x), and equality constraints, h_k(x), which may or may not be explicit functions of x. The goal is to find the best set of variables x
Fig. 1 Overall experimentation strategy
Table 1 Overview of design problems and interfaces

| Design problem [source] | # Inputs | # Objectives | # Constraints | Type of interface | Response delay (s) | Results |
|---|---|---|---|---|---|---|
| I-beam, multiobjective [51] | 2 | 2 | 0 | Graphical and text-based | 0.0, 0.25, 0.5 | [52, 53] |
| I-beam, single objective [51] | 2 | 1 | 1 | Graphical and text-based | 0.0, 1.5 | [54] |
| Desk lamp [55] | 3 | 2 | 0 | Graphical and text-based | 0.0, 0.25, 0.5 | [53] |
| Aircraft wing, 6 variables [Boeing] | 6 | 1 | 3 | Graphical | 0.0, 0.25, 0.5 | [43] |
| Aircraft wing, 2/4/6 variables [Boeing] | 2, 4, 6 | 1 | 3 | Graphical | 0.0, 0.25, 0.5 | [56] |
| Job shop [57] | 6 | 1 | 6 | Graphical | 0.0, 1.5 | [58] |
that minimize f (or alternatively maximize −f) while satisfying the constraints:

  Minimize:   f(x)
  Subject to: g_j(x) ≤ 0
              h_k(x) = 0        (1)
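As a generic illustration of this standard form, a design alternative can be screened for feasibility as in the following sketch; the constraint functions shown are placeholders, not those of any problem in this paper.

```python
from typing import Callable, Sequence

def is_feasible(x: Sequence[float],
                ineq: Sequence[Callable], eq: Sequence[Callable],
                tol: float = 1e-9) -> bool:
    """Check g_j(x) <= 0 for all j and h_k(x) = 0 for all k."""
    return (all(g(x) <= tol for g in ineq)
            and all(abs(h(x)) <= tol for h in eq))

# Placeholder constraints for illustration only
g = [lambda x: x[0] + x[1] - 10.0]   # g_1(x) <= 0
h = [lambda x: x[0] - 2.0 * x[1]]    # h_1(x) = 0
print(is_feasible([4.0, 2.0], g, h))  # -> True
```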
The problems described in Table 1 vary in size from 2 to 6 input (design) variables and 2 to 7 outputs, where the outputs are formulated as either a multiobjective optimization problem or a constrained single objective optimization problem. The source from which each example has been derived is noted in the table along with the paper(s) wherein we discuss results involving each interface. The interfaces are either graphical or text-based, and response delays within each interface
vary from 0.0 to 1.5 s as indicated in the table. Each interface was developed using Visual Basic 6.0 and then compiled into an executable. The executables for each interface are available at: .

The rationale for selecting these four problems is to guard against a common misconception in human subject testing, namely, that results generalize readily across tasks. The normative course of scientific investigation is to narrow the scope of a real-world task into laboratory tasks that are generalizable. For example, one might expect, without empirical investigation, a functional relationship to exist between the number of inputs and outputs and a subject's performance on a task; however, researchers have warned against such an assumption because findings from a simple laboratory task cannot
readily be transferred to tasks situated in dynamic and complex environments (e.g., design of a desk lamp or a wing, or job shop control) [59, 60]. By examining a broad range of problems varying in size, scope, and application, we can generalize our results to a greater extent.

Before describing each design problem and its corresponding interfaces, we note that the basic functionality of our graphical design interfaces (GDIs) is as follows; a brief code sketch of this update cycle is given after the list.

1. After pushing the Start button, the user manipulates the values of the design variables by moving the slider bars that are located in the lower right hand corner of the GDI, where the values of the slider bars define a design alternative.
2. As the slider bars change,
   (a) the picture of the geometry changes to reflect the values of the new design variables,
   (b) the objective and constraints are re-evaluated for the new design variable values,
   (c) the new values of the objective(s) and constraint(s) are displayed numerically in the table in the upper right hand corner of the GDI, and
   (d) the new values of the objective(s) and constraint(s) are plotted graphically in the 2-D output display window in the middle of the GDI.
3. The user continues to manipulate the slider bars until s/he determines that a good design is obtained.
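The following is the code sketch of the update cycle in steps 1–3 referenced above. It is written in Python for illustration only (the actual interfaces were implemented in Visual Basic 6.0), and the analysis functions and numeric values are placeholders, not the models used in our studies.

```python
import time

RESPONSE_DELAY_S = 0.5  # experimental factor: 0.0, 0.25, 0.5, or 1.5 s

def evaluate_design(h: float, w: float) -> tuple[float, float]:
    """Placeholder analysis standing in for the closed-form expressions
    or metamodels evaluated by the real interfaces."""
    area = 2.0 * w + h
    stress = 100.0 / (w * h)
    return area, stress

history = []  # every evaluated design, so a user can click back to any point

def on_slider_change(h: float, w: float) -> None:
    """Triggered whenever a slider bar moves (steps (a)-(d))."""
    time.sleep(RESPONSE_DELAY_S)          # imposed delay mimicking a slow analysis
    area, stress = evaluate_design(h, w)  # (b) re-evaluate objective/constraints
    history.append(((h, w), (area, stress)))
    # (a) redraw geometry, (c) update numeric table, (d) plot the new point;
    # here these GUI actions are reduced to a print statement.
    print(f"h={h:.2f}, w={w:.2f} -> A={area:.2f}, sigma={stress:.2f}")

on_slider_change(5.0, 2.0)
```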
At any point during this process, the user can use the mouse to select a point that is plotted in the 2-D output display window. Whenever a point is selected, the slider bars revert back to the corresponding settings of the design variables that yielded this design, and the geometry and numerical display of the output(s) are updated. This way, the user can always return to a promising design with the click of a mouse, using the selected point as the new starting point when manipulating the slider bars to search the design space. If the output display window gets too crowded, the user can click the Clear button to clear the screen or the Zoom button to zoom in (or out) around the point that is selected or most recently plotted. Finally, we note that the interfaces do not work in reverse, i.e., the user cannot point the mouse to a good position in the output display and have it ‘‘back-solve’’ to find the corresponding values of the design variables. The text-based design interfaces (TDIs) use the same analyses as the GDIs, but two different methods are used in the TDIs for changing the design variables: slider bars and text boxes. Also, the user must click on
the Calculate button in order to evaluate a design alternative. Because these interfaces are only text-based, the picture of the design geometry does not change as the design variables are changed, and there is no graphical display that plots the output responses. Instead, input-output response values are stored numerically in a drop-down list from which users can select the best design. The drop-down list can be cleared of all but the last point selected or analyzed at any time, but there is no zoom in a TDI as it is not necessary. Descriptions of each design problem and corresponding interfaces follow.

2.1 I-beam

The I-beam design problem is adapted from [51]: the user varies the cross-sectional dimensions of an I-beam, subject to a bending moment, to minimize the cross-sectional area while satisfying a constraint on the bending stress. Users can adjust the height, h, and width, w, of the I-beam cross-section, which varies the cross-sectional area, A, and the imposed bending stress, σ. Thus, the user is asked to solve the following constrained, single-objective optimization problem:

  Minimize:   A
  Subject to: σ ≤ σ_max
              0.2 in ≤ h ≤ 10.0 in
              0.1 in ≤ w ≤ 10.0 in        (2)

The analytical expressions for A and σ are taken directly from [51] and coded within the GDI that was developed for the I-beam; no metamodels are used in this problem.
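For readers who want a concrete stand-in for these expressions, the sketch below computes A and σ from standard beam theory for a symmetric I-section; the fixed flange and web thicknesses and the bending moment are illustrative assumptions, not the values used in [51].

```python
def ibeam_area_and_stress(h, w, tf=0.5, tw=0.3, M=1000.0):
    """Cross-sectional area and peak bending stress of a symmetric I-section.

    h, w: section height and flange width (the two design variables).
    tf, tw, M: assumed flange thickness, web thickness, and bending moment;
    hypothetical values chosen for illustration only.
    """
    area = 2.0 * w * tf + (h - 2.0 * tf) * tw
    # Second moment of area: bounding rectangle minus the two side voids.
    inertia = (w * h**3 - (w - tw) * (h - 2.0 * tf)**3) / 12.0
    stress = M * (h / 2.0) / inertia  # sigma = M*c/I at the outer fiber
    return area, stress

A, sigma = ibeam_area_and_stress(h=5.0, w=2.0)
```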
The GDI for the I-beam design problem is shown in Fig. 2. The user manipulates the two slider bars to vary the height and width of the I-beam. As these values change, the resulting values for A and σ are plotted in the graphical display window, and the picture of the I-beam geometry changes accordingly. A numerical display of A and σ is also provided for the user. In addition to the GDI shown in Fig. 2, a text-based design interface (TDI) was also created to serve as a control in our experiments. The TDI for the I-beam design problem is shown in Fig. 3 and is a modified version of the one from our earlier study [53], which used text boxes and the keyboard to enter values for the input variables. To ensure consistency with the allowable input values between the three I-beam design interfaces,
Fig. 2 GDI for I-beam design problem
Fig. 3 TDI for I-beam design problem
the design variable input method for the TDI is through slider bars rather than the keyboard. Thus, users can vary w and h of the I-beam by moving the slider bars with the mouse in the center of the GUI (see Fig. 3). Since slider bars restrict design variable manipulation to one-at-a-time variation, a "field box" GDI was also created to allow users to change both input variables simultaneously. The field box is simply a box with w and h on the horizontal and vertical axes, respectively, and a cursor inside the box, as shown in Fig. 4. Users can move the cursor anywhere within the boundaries of the box, which correspond to the design variable bounds. An advantage of a simultaneous input
device such as the field box is that it allows users to perform two-factor-at-a-time variation, which can facilitate design space exploration. The field box input method also helps reduce the gaps found in the graphical window in the slider bar GDI, which can be seen by comparing the A versus σ plots in Figs. 2 and 4. All other functionality is identical between the field box GDI and the slider bar GDI.

A multiobjective formulation for the I-beam design problem has also been developed and tested [53]. The objectives in this formulation are to simultaneously minimize normalized measures of A and σ using a weighted-sum formulation:
Fig. 4 I-beam GDI with "field box" for user input
  Minimize:   F = α (A − A_min)/(A_max − A_min) + (1 − α) (σ − σ_min)/(σ_max − σ_min)
  Subject to: 0.2 ≤ h ≤ 10.0
              0.1 ≤ w ≤ 10.0        (3)

where α is a scalar weighting factor (0 ≤ α ≤ 1) that we set at 0.1, 0.5, or 0.9; A and σ are the area and stress in the I-beam; A_max and A_min are the maximum and minimum possible areas, respectively; and σ_max and σ_min are the maximum and minimum possible stresses, respectively, based on the slider bar limits.
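A minimal numerical sketch of this weighted-sum scalarization, with hypothetical range limits standing in for the actual slider-bar bounds, is:

```python
def weighted_sum(A, sigma, alpha, A_min, A_max, s_min, s_max):
    """Normalized weighted-sum objective F of Eq. 3 (alpha in [0, 1])."""
    return (alpha * (A - A_min) / (A_max - A_min)
            + (1.0 - alpha) * (sigma - s_min) / (s_max - s_min))

# alpha = 0.1, 0.5, and 0.9 were the levels used in our experiments;
# the range limits below are illustrative, not the actual slider bounds.
F = weighted_sum(A=12.0, sigma=80.0, alpha=0.5,
                 A_min=1.0, A_max=50.0, s_min=5.0, s_max=500.0)
```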
For this formulation, the I-beam GDI is modified to show contour lines of constant F to facilitate the search for the best solution (see Fig. 5a); note that the feasible region is no longer highlighted since there are no constraints in this formulation. In addition, the numerical value of F is displayed when a design point is selected. The corresponding TDI for this formulation is shown in Fig. 5b. Note that text boxes are used in this TDI for design variable input instead of the slider bars. This TDI was actually developed prior to the slider bar version shown in Fig. 3; the slider bars were added when the multiobjective optimization problem was simplified to the constrained, single-objective optimization problem of Eq. 2 in an effort to reduce the between-subject variability [53].

2.2 Desk lamp

The desk lamp design problem is derived from [55] and uses the radial basis function metamodels for analysis that are developed in [61]. The objective is to maximize
normalized measures of the mean illuminance and to minimize the standard deviation of the illuminance on a predetermined surface area (e.g., a paper or a book) on a desk by changing three design variables: rod length, L2, reflector length, L1, and reflector angle, θ (see Fig. 6). A weighted-sum formulation is used for the optimization:

  Minimize:   F = α (μ − μ_min)/(μ_max − μ_min) + (1 − α) (σ − σ_min)/(σ_max − σ_min)
  Subject to: 50 ≤ L1 ≤ 100
              300 ≤ L2 ≤ 500
              0 ≤ θ ≤ 45        (4)

where F is a weighted sum of normalized measures of μ and σ; μ_max and μ_min are the maximum and minimum possible values for mean illuminance, respectively; and σ_max and σ_min are the maximum and minimum possible standard deviations for illuminance, respectively. The scalar weighting factor, α, ranges from 0 to 1 (we use α = 0.1, 0.5, 0.9), and the optimal design maximizes the normalized μ by using −α for the first term in Eq. 4.

The GDI for the desk lamp problem is shown in Fig. 6. We note that the axis for the mean illuminance has been reversed so that the best designs reside in the lower left hand corner of the 2-D output display to match the location of optimal solutions in the I-beam GDI. A text-based design interface (TDI) was also developed for the desk lamp design problem (see Fig. 7). The functionality is nearly identical to that of the I-beam TDI except that the user enters the values for each design variable into textboxes instead of changing them with slider bars. Also, the responses are updated only after the user pushes the Calculate button, as noted earlier.
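The desk lamp analyses are radial basis function metamodels [61]. A generic Gaussian-RBF interpolator of this kind can be sketched as follows; the training samples and width parameter are hypothetical, not the data from [61].

```python
import numpy as np

def fit_rbf(X, y, width=1.0):
    """Fit a Gaussian radial basis function interpolator to samples (X, y)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.exp(-(d / width) ** 2)
    weights = np.linalg.solve(Phi, y)  # one basis function per sample point
    return lambda x: np.exp(-(np.linalg.norm(X - x, axis=1) / width) ** 2) @ weights

# Hypothetical samples mapping (L1, L2, theta) to mean illuminance
X = np.array([[50., 300., 0.], [100., 500., 45.], [75., 400., 20.]])
y = np.array([0.2, 0.9, 0.6])
predict_mu = fit_rbf(X, y, width=200.0)
print(predict_mu(np.array([80., 450., 30.])))
```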
Fig. 5 GDI and TDI for multiobjective I-beam design problem. a I-beam GDI with slider bar input. b I-beam TDI with textbox input
2.3 Aircraft wing

The wing design problem involves sizing and shaping the plan view layout of an aircraft wing to minimize its cost subject to constraints on range, buffet altitude, and takeoff field length. The aircraft wing design problem was developed in conjunction with researchers at The Boeing Company and is presented in detail in [43]. The initial problem involved six design variables that could be manipulated to design the wing; however, 2- and 4-variable versions of the problem have also been created. The 2-variable problem uses x1 and x2, the 4-variable problem uses x1 through x4, and the 6-variable problem uses all six:

1. Semi-span, x1
2. Aspect ratio, x2
3. Taper ratio, x3
4. Sparbox root chord, x4
5. Sweep angle, x5
6. Fan diameter, x6

Bounds: 0 < xi < 1
The definition of each variable with respect to the wing’s geometry is given in [43]. The objective and constraints for the wing design problem are summarized in Eq. 5.
Fig. 6 GDI for desk lamp design problem
Fig. 7 TDI for desk lamp design problem
  Minimize:   Cost
  Subject to: Range > 0.589
              Buffet altitude > 0.603
              Takeoff field length < 0.377        (5)

The relationships between the design variables and the objective and constraints are obtained using second-order response surface models, which are given in [43]. To maintain the proprietary nature of the data, the cost, constraints, and design variables have all been normalized to vary between [0, 1] based on the minimum and maximum values observed in the
sample data used to construct the response surface models used for analysis within the GDI. Consequently, the constraint limits on range, buffet altitude, and takeoff field length are given as normalized values in Eq. 5, and the bounds on each design variable are normalized to [0, 1]. The GDI for the 6-variable wing design problem is shown in Fig. 8. Simplified GDIs for the 2- and 4-variable problems are identical except that they have fewer slider bars, and each user only uses one of these GDIs for the experiment. As with the other GDIs, the user manipulates the slider bars to change the design variable values, and the GDI updates as follows.
Fig. 8 GDI for wing design problem
1. The picture of the wing geometry changes to reflect the values of the new design variables.
2. The objective and constraints are re-evaluated for the new design variable values.
3. The new values of the objective and constraints are displayed numerically in the table in the upper right hand corner of the GDI (green if all constraints are satisfied, red otherwise).
4. The new values of cost and range are plotted graphically in the 2-D output display window (green if all constraints are satisfied, red otherwise) in the middle of the GDI.
The red and green color scheme was first introduced in this interface since, unlike the I-beam and desk lamp problems, the problem has more than two output responses of interest. In general, user feedback was positive on the use of the red and green color scheme [43].
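For readers unfamiliar with response surface modeling, the sketch below fits a full second-order polynomial model by least squares, using synthetic data in place of the proprietary Boeing samples used to build the actual wing metamodels.

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_features(X):
    """Second-order model terms: intercept, linear terms, and all
    products x_i * x_j (including squares)."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(k), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 6))   # 6 design variables, normalized to [0, 1]
y = rng.uniform(0.0, 1.0, size=50)        # synthetic normalized "cost" response
beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
cost_hat = quadratic_features(X[:1]) @ beta  # near-instant metamodel prediction
```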
2.4 Job shop manufacturing system

The job shop design problem is adapted from [57]. The job shop manufacturing system consists of five workstations, where the machines at a workstation are identical and perform the same function(s). A first-in, first-out (FIFO) queue exists at each workstation, where the first part to enter the queue is the next one to be processed. There are three different product types that are manufactured in this job shop, and jobs are moved from one station to another by a fork truck. The user can vary the number of machines (from 2 to 6) at each workstation as well as vary the number of fork trucks (from 1 to 3) transporting the parts. A simulation model of the job shop system was created using Arena 3.0, and the routing times, probabilities, and mean service times for each job are given in [58], along with the distances between workstations and operating costs for each workstation. Seven output responses are considered in the job shop design problem: system operating cost, average time a part is in the system, and average utilization at each of the five workstations. The problem statement for the job shop design problem is:

  Minimize:   System operating cost
  Subject to: Average time in the system ≤ 0.425, 0.100, 0.340 (one limit per product type)
              Average utilization at workstation i ≥ 0.35, ∀ i = 1, ..., 5
              2 ≤ Number of machines at workstation i ≤ 6, ∀ i = 1, ..., 5
              1 ≤ Number of fork trucks ≤ 3        (6)
The values for the seven performance measures are all normalized to [0, 1] to alleviate scaling inconsistencies between them. The values for each performance measure are obtained through polynomial regression models that were developed from the simulation model using design of experiments and
Fig. 9 GDI for job shop design problem
least squares regression. First-order, stepwise, and second-order polynomial regression models are used to approximate the system responses, allowing us to investigate the impact of coupling within the approximation model. All three sets of models can be found in [58], along with details on how we sampled the simulation model and fit each set of regression models. Three GDIs were created for the job shop design problem, where each GDI used a different set of the regression models; the controls, layout, and capabilities of the three GDIs are otherwise identical. A screen shot of the GDI for the job shop design problem is shown in Fig. 9. Similar to the aircraft wing design problem, the constraints on the average workstation utilizations are represented using a green (all of the utilization constraints are satisfied) and red (one or more constraints are violated) color scheme. This color scheme is also applied to the Job Shop Layout figure on the right of the GDI: workstations that do not satisfy the utilization constraint are shown in red, and in green otherwise. Finally, the constraint on average time in the system is highlighted in the objective plot by shading the feasible region; savvy users will quickly narrow their search to job shop designs that are located within this feasible region.
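To illustrate the levels of coupling compared in this study, the sketch below builds a first-order (main-effects-only) model and a full second-order model with interaction terms from synthetic job shop data; a stepwise model would retain only the statistically significant columns of the latter. The data here are synthetic, not the Arena simulation output used in [58].

```python
import numpy as np

rng = np.random.default_rng(1)
machines = rng.integers(2, 7, size=(60, 5)).astype(float)  # 2-6 per workstation
trucks = rng.integers(1, 4, size=(60, 1)).astype(float)    # 1-3 fork trucks
X = np.hstack([machines, trucks])                          # 6 inputs
y = rng.random(60)                                         # synthetic normalized cost

first_order = np.column_stack([np.ones(60), X])            # uncoupled main effects
interactions = np.column_stack([X[:, i] * X[:, j]          # coupled cross terms
                                for i in range(6) for j in range(i + 1, 6)])
second_order = np.column_stack([first_order, X**2, interactions])

b1, *_ = np.linalg.lstsq(first_order, y, rcond=None)
b2, *_ = np.linalg.lstsq(second_order, y, rcond=None)
# A stepwise procedure would prune second_order down to significant terms,
# giving the intermediate level of coupling tested in [58].
```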
3 Experimental methods

In this section, we overview the experimental factors, performance measures, and experimental design typically used in our experiments, and we give a sample experimental protocol for researchers to follow should they desire to conduct additional experiments using our design interfaces.

3.1 Experimental factors

• Response delay: response delay is the one experimental factor common to all of our design interfaces; the amounts and levels for response delay are as listed in Table 1.

In addition to response delay, the following factors have also been studied:

• I-beam, single-objective case: type of interface (3 levels: TDI, GDI w/slider bars, or GDI w/field box)
• I-beam, multiobjective case: type of interface (2 levels: TDI or GDI w/slider bars) and α value (3 levels: 0.1, 0.5, or 0.9)
• Desk lamp: type of interface (2 levels: TDI or GDI)
• Aircraft wing: size of the problem (3 levels: 2, 4, or 6 variables)
• Job shop: level of coupling within the polynomial regression model (3 levels: first-order, second-order, or stepwise model)

Additional levels for many of these factors could easily be added to any design interface by changing the Visual Basic 6.0 code and recompiling it.

3.2 Performance measures
User performance is measured by percent error and task completion time, which we use as surrogates for
design effectiveness and design efficiency, respectively. Data transformations are commonly used when model assumptions such as residual normality are violated; two of the more common transformations used to satisfy model assumptions are the square root transform and the logarithmic transform [62], and we have employed both within our studies.

Various aspects of the design search process can also be evaluated, as each design interface records the number of designs evaluated along with the number of times each feature (i.e., the Clear and Zoom buttons) was used. Users are also asked to complete pre- and post-test questionnaires to gather demographic information and to evaluate various aspects of the design interface, the design process, and the design problem itself. Responses to these questions were used to test for significant correlation with user performance. Finally, we have also administered the NASA Task Load Index (NASA-TLX) [63] after each trial of the experiment to study the perceived workload of the user during the experiment. The NASA-TLX is a widely used subjective workload measure that provides users with a direct method of expressing their opinions, is easy to use, has high face validity, and has been shown to be sensitive to a variety of task demands [64]. We have found that users' perceived workload, in addition to their performance, is adversely affected by delay and by the type of GDI [54, 65].
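For reference, the two performance measures and the variance-stabilizing transforms can be computed as in the following sketch; the function and variable names are ours for illustration.

```python
import math

def percent_error(f_user: float, f_optimal: float) -> float:
    """Design effectiveness surrogate: distance of the user's best design
    from the known optimum, as a percentage."""
    return 100.0 * abs(f_user - f_optimal) / abs(f_optimal)

def transform(t_completion: float, kind: str = "log") -> float:
    """Square root or logarithmic transform, applied when residual
    normality is violated [62]."""
    return math.log(t_completion) if kind == "log" else math.sqrt(t_completion)

err = percent_error(f_user=0.42, f_optimal=0.38)
t = transform(185.0, kind="sqrt")
```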
3.3 Experimental design

We typically employ a between-subjects n × m factorial design where there are n levels for response delay and m levels for the other factor(s) being tested. We have also used a Graeco-Latin square design to test three factors (response delay, α, and run number) for the I-beam and desk lamp GDIs and TDIs [53]. Pilot studies are strongly recommended to assess the sensitivity of the experiment prior to gathering final data. In most cases, we have found that we need ~9 subjects per run condition, which equates to approximately 60 subjects when using a 2 × 3 factorial design. We have also used pilot studies to determine the amount of "training" necessary to ensure a sufficient level of proficiency with the design interface (typically 6–10 trials [54]). In our early experiments, there was a significant learning effect, indicating that users were insufficiently trained during the demonstration trials and were still learning how to use the software during the actual trials of the experiment [52, 53].
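A sketch of how such a between-subjects layout can be enumerated and subjects randomly assigned is given below; the levels shown are those of the single-objective I-beam study, and the assignment logic is a hypothetical illustration.

```python
from itertools import product
import random

delays = [0.0, 1.5]                                  # n = 2 levels of response delay
interfaces = ["TDI", "GDI-slider", "GDI-fieldbox"]   # m = 3 interface types
conditions = list(product(delays, interfaces))       # 2 x 3 = 6 cells

SUBJECTS_PER_CELL = 9                                # ~9 subjects per run condition
subjects = list(range(len(conditions) * SUBJECTS_PER_CELL))
random.seed(42)
random.shuffle(subjects)                             # randomize before assignment
assignment = {s: conditions[i // SUBJECTS_PER_CELL]
              for i, s in enumerate(subjects)}
```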
3.4 Experimental protocol

Each experiment starts by giving subjects an overview of the problem and a brief introduction to what they will do and how long it will take. After the subjects have read the overview and had their questions answered, they are asked to sign an informed consent form. The subjects are then given the pre-test questionnaire to complete. Once this is done, they can begin using the software by entering a tracking number and selecting the experimental trial number in the upper left hand corner of the design interface. After pushing the Start button, pop-up windows guide the user through the interface and its controls, demonstrating its capabilities. Once comfortable with the interface, the user completes a series of trials during which data are gathered. After each trial, the NASA-TLX is administered via computer, and after the final trial, subjects complete a post-test questionnaire for the experiment. A graduate student can be quickly trained to supervise the experiment, administer the questionnaires and NASA-TLX, and answer questions. To compensate the subjects for their time (experiments can take up to an hour to complete depending on the number of trials), we pay them $10 per half hour. When used as a supplement to in-class instruction, extra credit has been used effectively to recruit subjects to participate in the experiment outside of class [52].

4 Summary of results and future avenues of research
To date, we have run more than 330 subjects through our experiments. The results from each experiment are summarized in the papers noted in Table 1; a brief summary follows.

In our initial study involving the multiobjective formulation of the I-beam [52], the 0.5 s response delay significantly increased error but did not affect task completion time, and we noticed that users considered fewer design alternatives as response delay increased. Some users needed more time to become familiar with the GDI, as evidenced by the significant learning effect that we found, which indicated that users were not yet proficient with the interface. As a continuation of this study, we tested 133 students using the multiobjective I-beam and desk lamp GDIs and TDIs [53]. We found that GDI users performed better (i.e., had lower error and faster completion times) on average than those using TDIs, but these differences were not always statistically significant. We also found that a response delay of 0.5 s increased error and task completion time, on average,
but these increases were not always statistically significant either, due to high variability in user performance. Our results indicated that the perceived difficulty of the design task and of using the graphical interface controls were inversely correlated with design effectiveness: designers who rated the task more difficult to solve or the graphical interface more difficult to use actually performed better than those who rated them easy.

In our follow-up study [54], we used the single-objective I-beam design problem, and we lengthened the response delay to 1.5 s and increased the number of training trials to eight in an effort to minimize variability between subjects. We also studied the impact of the "richness" of the I-beam design interfaces, i.e., the TDI (Fig. 3) versus the GDI with slider bars (see Fig. 2) and the GDI with the "field box" input mechanism (see Fig. 4). After testing 60 subjects, we found that the response delay of 1.5 s significantly increased error and completion time and that users performed better as the "richness" of the design interface increased. We also found that the perceived workload of the users, as measured by the NASA Task Load Index, increased as delay increased and as the "richness" of the design interface decreased. Rothrock et al. [65] investigate these latter findings in more detail.

In an effort to study more complex problems with larger dimensions, the manufacturing job shop and the aircraft wing examples were developed. In the job shop example [58], experimental results from 54 subjects revealed that user performance deteriorates significantly when a response delay of 1.5 s is introduced: percent error and task completion time increased, on average, by 9.4% and 81 s, respectively, when the delay was present. The use of first-order, stepwise, and second-order polynomial regression models was also studied, and we found that user performance improved when stepwise polynomial regression models were used instead of first-order or second-order models. The stepwise models yielded 12% lower error and 91 s faster completion times, on average, than the first-order models; error was 13.5% lower and completion time was 62 s faster, on average, than when second-order models were used. These findings corroborated those of Hirschi and Frey [6], who found that user performance deteriorates as the level of coupling increases, but we were able to quantify the extent to which this occurs by testing different levels of coupling in the metamodels for the job shop design problem. As noted in [58], future studies should investigate the broader implications of this finding by examining problems with larger and smaller numbers of input
variables and output responses. It is well known that humans can effectively remember seven (plus or minus two) pieces of distinct information in their short-term memory [66], and the impact of coupling may become negligible in smaller problems while being exacerbated in larger ones.

Recent experiments with the aircraft wing design problem have attempted to ascertain the effect of problem size on user performance. The initial study at Boeing involved only the 6-variable formulation, and few significant results were obtained with the 6-variable aircraft wing design example due to the small sample sizes [43]. Delay did have a significant impact on the number of points plotted, but not on the percent error or completion time, whereas the constraint violation display (i.e., the number of constraints that were plotted in red/green) did affect the search strategy employed by the user in the GDI. We found that the fewer the constraint violations displayed, the more freedom users felt they had to explore the design space for good designs. As a follow-on study, we created the 2- and 4-variable versions of the aircraft wing design problem and performed tests using 66 engineering students to study the effect of problem size on user performance [56]. We found that user performance dropped off sharply as the number of variables increased from 2 to 4 to 6 and that response delay only had an impact on the smaller problems. We also found a significant interaction between response delay and problem size such that response delay had less of an effect as the size of the problem increased: the 6-variable problem was so difficult to solve that response delay did not impact user performance with this GDI, whereas its impact was statistically significant in the 2-variable problem.

While we have successfully demonstrated the usefulness of metamodel-driven graphical design interfaces in engineering design, we feel that we have just "scratched the surface" of a very large, complex, and challenging problem, namely, how to develop effective user interfaces to support engineering design and decision-making. Our investigations into the specific capabilities of design interfaces have been limited to testing text-based versus graphical displays and different user input methods (e.g., slider bars versus a 2-D "field box"). The TDIs and GDIs can be analyzed in terms of user performance with respect to two general graphical design principles, which provide insight into the kinds of interface characteristics that trip up novice users. The first principle, called the Proximity Compatibility Principle [67], specifies that displays relevant to a common task or mental operation should be rendered close together in perceptual space. The second principle, called the
Control-Display Compatibility Principle [68], stipulates that the spatial arrangement and manipulation of controls should be easily distinguishable. A detailed investigation of TDIs and GDIs in terms of adherence to these display principles and the impact on user performance is described in [65].

The populations on which each task was tested varied from student novices (for all four tasks) to experts (for the wing design problem). For student novices, we found that training sessions generally mitigated the effects of user experience in terms of user errors. For example, we found that the level of previous computer usage, the frequency of playing video games, and familiarity with single- and multiobjective optimization did not have a significant effect on users' error during the trials [54]. In terms of response time, however, the results were mixed and warrant further investigation. For the experts, we found that user groups tended to adopt different strategies toward solving the wing design problem, and these strategies did have an impact on user performance [43]. We also found that specific types of users (e.g., statisticians vs. mathematicians) desired different features in their GDIs (e.g., interaction plots vs. Lagrange multipliers), indicating the need for flexibility within any GDI so that it can be tailored to the application as well as to different users [43]. Hence, there is a need for a better understanding of the interaction between the nature of the design task, the type of user, and the design features of the GUI.

The advantage of metamodel-based design analyses extends beyond the instantaneous (approximate) calculation of responses, even beyond calculations of statistical characterizations such as variance. Fast function evaluations should permit the marriage of instantaneous evaluation with optimization-assisted and other computational design strategies, such as the Dynamic Programming coupling in the interface employed by Lembersky and Chi [1] discussed at the start of this paper. For that interface, the Dynamic Programming solutions were pre-computed for a fixed set of log datasets, but increasing computational capability makes real-time optimization of metamodel functions increasingly practical. An important future endeavor will be to search for effective combinations of the fast-response visual environment with strategic control and use of optimization-based design suggestions.

Finally, for all of this work, we used metamodels as surrogates for the original analyses. This introduces an additional element of uncertainty in that the metamodel is a "model of the model", and the tradeoff between speed of response and lost accuracy needs to be examined. Recent related research seeks to address the added uncertainty in the metamodels themselves
[69]. We have also been working predominantly in a deterministic setting by either ignoring any uncertainty in the system or by creating metamodels of the mean and variance of relevant system responses. Uncertainty visualization is becoming an important area for future research.

Acknowledgments This research was supported by the National Science Foundation under Grant No. DMI-0084918. We are indebted to the graduate students who worked on this project (Gary Stump, Martin Meckesheimer, Chris Ligetti, Britt Holewinski, and Param Iyer) as well as the undergraduate students, Kim Barron and Chris Ligetti, who were supported on REU supplements to our grant.
References

1. Lembersky MR, Chi UH (1984) Decision simulators speed implementation and improve operations. Interfaces 14:1–15
2. Burgess S, Pasini D, Alemzadeh K (2004) Improved visualization of the design space using nested performance charts. Des Stud 25(1):51–62
3. Dahl DW, Chattopadhyay A, Gorn GJ (2001) The importance of visualisation in concept design. Des Stud 22(1):5–26
4. Eddy J, Lewis K (2002) Visualization of multi-dimensional design and optimization data using cloud visualization. In: ASME design engineering technical conferences - design automation conference, Montreal, Quebec, Canada, ASME, Paper No. DETC02/DAC-02006
5. Evans PT, Vance JM, Dark VJ (1999) Assessing the effectiveness of traditional and virtual reality interfaces in spherical mechanism design. ASME J Mech Des 121(4):507–514
6. Hirschi NW, Frey DD (2002) Cognition and complexity: an experiment on the effect of coupling in parameter design. Res Eng Des 13(3):123–131
7. Jayaram S, Vance JM, Gadh R, Jayaram U, Srinivasan H (2001) Assessment of VR technology and its applications to engineering problems. ASME J Comput Inf Sci Eng 1(1):72–83
8. Kelsick J, Vance JM, Buhr L, Moller C (2004) Discrete event simulation implemented in a virtual environment. ASME J Mech Des 125(3):428–433
9. Kodiyalam S, Yang RJ, Gu L (2004) High performance computing and surrogate modeling for rapid visualization with multidisciplinary optimization. AIAA J 42(11):2347–2354
10. Messac A, Chen X (2000) Visualizing the optimization process in real-time using physical programming. Eng Optim 32(6):721–747
11. Stump G, Yukish M, Simpson TW (2004) The advanced trade space visualizer: an engineering decision-making tool. In: 10th AIAA/ISSMO multidisciplinary analysis and optimization conference, Albany, NY, AIAA, AIAA-2004-4568
12. Maxfield J, Juster NP, Dew PM, Taylor S, Fitchie M, Ion WJ, Zhao J, Thompson M (2000) Predicting product cosmetic quality using virtual environments. In: ASME design engineering technical conferences - computers and information in engineering, Baltimore, MD, ASME, Paper No. DETC2000/CIE-14591
13. Winer EH, Bloebaum CL (2002) Development of visual design steering as an aid in large-scale multidisciplinary design optimization. Part I: method development. Struct Multidiscip Optim 23(6):412–424
14. Zionts S (1992) The state of multiple criteria decision-making: past, present, and future. In: Goicoechea A, Duckstein L, Zionts S (eds) Multiple criteria decision making. Springer, Berlin Heidelberg New York, pp 33–43
15. Zionts S (1993) Multiple criteria decision making: the challenge that lies ahead. In: Tzeng GH, Wang HF, Wen UP, Yu PL (eds) Multiple criteria decision making. Springer, Berlin Heidelberg New York, pp 17–26
16. Athan TW, Papalambros PY (1996) A note on weighted criteria methods for compromise solutions in multi-objective optimization. Eng Optim 27(2):155–176
17. Charnes A, Cooper WW (1977) Goal programming and multiple objective optimization - part I. Eur J Oper Res 1(1):39–54
18. Wilson B, Cappelleri DJ, Frecker MI, Simpson TW (2001) Efficient Pareto frontier exploration using surrogate approximations. Optim Eng 2(1):31–50
19. Hazelrigg GA (1996) The implications of Arrow's impossibility theorem on approaches to optimal engineering design. ASME J Mech Des 118(2):161–164
20. Hazelrigg GA (1996) Information-based design. Prentice Hall, Upper Saddle River
21. Steuer RE, Choo EU (1983) An interactive weighted Tchebycheff procedure for multiple objective programming. Math Program 26:326–344
22. Thurston DL, Carnahan JV, Liu T (1994) Optimization of design utility. J Mech Des 116(3):801–808
23. Yang JB, Sen P (1994) Multiple objective design optimization by estimating local utility functions. In: Advances in design automation, ASME DE-vol 69-2, pp 135–145
24. Hauser JR, Clausing D (1988) The house of quality. Harvard Bus Rev 66(3):63–73
25. Locascio A, Thurston DL (1998) Transforming the house of quality to a multiobjective optimization formulation. Struct Optim 16(2–3):136–146
26. Lewis K, Mistree F (1998) Collaborative, sequential, and isolated decisions in design. ASME J Mech Des 120(4):643–652
27. Lewis K, Mistree F (2001) Modeling subsystem interactions: a game theoretic approach. J Des Manuf Autom 1(1):17–36
28. Rao SS, Vankayya VB, Khot NS (1988) Game theory approach for the integrated design of structures and controls. AIAA J 26(4):463–469
29. Otto KN, Antonsson EK (1991) Trade-off strategies in engineering design. Res Eng Des 3(2):87–103
30. Wood KL, Antonsson EK, Beck JL (1990) Representing imprecision in engineering design: comparing fuzzy and probability calculus. Res Eng Des 1(3/4):187–203
31. Messac A (1996) Physical programming: effective optimization for computational design. AIAA J 34(1):149–158
32. Messac A (2000) From dubious construction of objective functions to the application of physical programming. AIAA J 38(1):155–163
33. Saaty T (1988) The analytic hierarchy process, revised and extended edn. McGraw-Hill, New York
34. National Research Council (1998) Visionary manufacturing challenges for 2020. Committee on Visionary Manufacturing Challenges, National Academy Press, Washington, DC
35. Ullman DG (2003) The mechanical design process, 3rd edn. McGraw-Hill, New York
36. Card SK, Moran TP, Newell A (1983) The psychology of human-computer interaction. Lawrence Erlbaum, Hillsdale
37. Sturman DJ, Zeltzer D, Pieper S (1989) Hands-on interaction with virtual environments. In: Proceedings of the 1989 ACM SIGGRAPH symposium on user interface software and technology, pp 19–24
38. Ware C, Balakrishnan R (1994) Reaching for objects in VR displays: lag and frame rate. ACM Trans Comput Hum Interact 1:331–356
39. Watson B, Walker N, Hodges LF, Worden A (1997) Managing level of detail through peripheral degradation: effects on search performance in head-mounted display. ACM Trans Comput Hum Interact 4:323–346
40. Waern Y (1989) Cognitive aspects of computer supported tasks. Wiley, New York
41. Goodman T, Spence R (1978) The effect of system response time on interactive computer-aided design. Comput Graph 12:100–104
42. Foley JD, Wallace JD (1974) The art of natural graphic man-machine conversation. Proc IEEE 4:462–471
43. Simpson TW, Meckesheimer M (2004) Evaluation of a graphical design interface for design space visualization. In: 45th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics & materials conference, Palm Springs, CA, AIAA, AIAA-2004-1683
44. Gu L (2001) A comparison of polynomial based regression models in vehicle safety analysis. In: ASME design engineering technical conferences - design automation conference, Pittsburgh, PA, ASME, Paper No. DETC2001/DAC-21063
45. Kleijnen JPC (1975) A comment on Blanning's metamodel for sensitivity analysis: the regression metamodel in simulation. Interfaces 5(1):21–23
46. Barton RR (1998) Simulation metamodels. In: Proceedings of the 1998 winter simulation conference (WSC'98), Washington, DC, IEEE, pp 167–174
47. Simpson TW, Peplinski J, Koch PN, Allen JK (2001) Metamodels for computer-based engineering design: survey and recommendations. Eng Comput 17(2):129–150
48. Sobieszczanski-Sobieski J, Haftka RT (1997) Multidisciplinary aerospace design optimization: survey of recent developments. Struct Optim 14(1):1–23
49. Haftka R, Scott EP, Cruz JR (1998) Optimization and experiments: a survey. Appl Mech Rev 51(7):435–448
50. Simpson TW, Booker AJ, Ghosh D, Giunta AA, Koch PN, Yang RJ (2004) Approximation methods in multidisciplinary analysis and optimization: a panel discussion. Struct Multidiscip Optim 27(5):302–313
51. Haftka R, Gürdal Z (1992) Elements of structural optimization, 3rd revised and expanded edn. Kluwer Academic Publishers, Boston
52. Frecker M, Simpson TW, Goldberg JH, Barton RR, Holewinski B, Stump G (2001) Integrating design research into the classroom: experiments in two graduate courses. In: 2001 annual ASEE conference, Albuquerque, NM, ASEE
53. Ligetti C, Simpson TW, Frecker M, Barton RR, Stump G (2003) Assessing the impact of graphical design interfaces on design efficiency and effectiveness. ASME J Comput Inf Sci Eng 3(2):144–154
54. Barron K, Simpson TW, Rothrock L, Frecker M, Barton RR, Ligetti C (2004) Graphical user interfaces for engineering design: impact of response delay and training on user performance. In: ASME design engineering technical conferences - design theory & methodology conference, Salt Lake City, UT, ASME, Paper No. DETC2004/DTM-57085
55. Barton RR, Limayem F, Meckesheimer M, Yannou B (1999) Using metamodels for modeling the propagation of design uncertainties. In: 5th international conference on concurrent engineering (ICE'99), The Hague, The Netherlands, Centre for Concurrent Enterprising, pp 521–528
56. Simpson TW, Iyer P, Barron K, Rothrock L, Frecker M, Barton RR, Meckesheimer M (2005) Metamodel-driven interfaces for engineering design: impact of delay and problem size on user performance. In: 46th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics & materials conference and 1st AIAA multidisciplinary design optimization specialist conference, Austin, TX, AIAA, AIAA-2005-2060
57. Law AM, Kelton WD (2000) Simulation modeling and analysis, 3rd edn. McGraw-Hill, Boston, MA
58. Ligetti C, Simpson TW (2005) Metamodel-driven design optimization using integrative graphical design interfaces: results from a job shop manufacturing simulation experiment. ASME J Comput Inf Sci Eng 5(1):8–17
59. Hammond KR (1986) Generalization in operational contexts: what does it mean? Can it be done? IEEE Trans Syst Man Cybern 16(3):428–433
60. Hammond KR, Hamm RM, Grassia J, Pearson T (1987) Direct comparison of the efficacy of intuitive and analytical cognition in expert judgment. IEEE Trans Syst Man Cybern 17(5):753–770
61. Meckesheimer M, Barton RR, Simpson TW, Limayem F, Yannou B (2001) Metamodeling of combined discrete/continuous responses. AIAA J 39(10):1955–1959
62. Neter J, Kutner MH, Nachtsheim CJ, Wasserman W (1996) Applied linear statistical models, 4th edn. WCB/McGraw-Hill, Boston, MA
63. Hart SG, Staveland LE (1988) Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. In: Hancock PA, Meshkati N (eds) Human mental workload. North-Holland, Amsterdam, pp 139–183
64. Wierwille WW, Eggemeier FT (1993) Recommendations for mental workload measurement in a test and evaluation environment. Hum Factors 35(2):262–282
65. Rothrock L, Barron K, Simpson TW, Frecker M, Barton RR, Ligetti C (2006) Applying the proximity compatibility and the control-display compatibility principles to engineering design interfaces. Hum Factors Ergon Manuf 16(1):61–81
66. Miller GA (1956) The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol Rev 63:81–97
67. Wickens CD, Carswell CM (1995) The proximity compatibility principle: its psychological foundation and relevance to display design. Hum Factors 37(3):473–494
68. Wickens CD (1992) Engineering psychology and human performance, 2nd edn. Harper Collins, New York
69. Martin JD, Simpson TW (2004) A Monte Carlo simulation of the kriging model. In: 10th AIAA/ISSMO multidisciplinary analysis and optimization conference, Albany, NY, AIAA, AIAA-2004-4483