RESPONSE SURFACE BASED, CONCURRENT SUBSPACE OPTIMIZATION FOR MULTIDISCIPLINARY SYSTEM DESIGN

R. S. Sellar*, S. M. Batill†, J. E. Renaud‡
Department of Aerospace and Mechanical Engineering
University of Notre Dame, Notre Dame, Indiana

* Graduate Research Assistant, Member AIAA
† Professor, Associate Fellow AIAA
‡ Assistant Professor, Member AIAA

Copyright © 1996 by Stephen M. Batill. Published by the American Institute of Aeronautics and Astronautics, Inc. with permission.

Abstract

The analysis of engineering systems must often be conducted using complex, non-hierarchic, coupled, discipline-specific methods. When the cost of performing these individual analyses is high, it is impractical to apply many current optimization methods to this type of system to achieve improved designs. Consequently, methods are being developed which attempt to reduce the cost of designing or optimizing non-hierarchic systems. This paper details the application of an extension of the Concurrent Subspace Optimization (CSSO) approach through the use of neural network based response surface mappings. The response surface mappings are used to allow the discipline designer to account for discipline coupling and the impact of design decisions on the system at the discipline level, as well as for system level design coordination. The ability of this method to identify globally optimal designs is discussed using two example system design problems. Comparisons between this algorithm and full system optimization are made with regard to the computational expense associated with obtaining optimal system designs.

I. Introduction

Aerospace vehicles have evolved into complex, highly integrated and interdependent systems. Effective design of these systems requires the coordinated efforts of a large number of discipline experts striving to accomplish a common goal. Though their decisions are driven by the design requirements for the complete system, many other practical considerations often lead to a situation where the efforts of discipline experts with design responsibility are not well coordinated, nor are there well-established, rational schemes to provide that coordination. Consequently, the design of such complex, coupled systems is often compartmentalized along discipline lines. The problem created by this compartmentalization is that the design of many systems is accomplished as the result of the design of subsystems which consider neither the effects of design decisions on the performance of other subsystems nor their effects on the system as a whole. The goals of Multidisciplinary Design Optimization (MDO) include accounting for the couplings between disciplines and providing a framework in which improved system designs can be obtained.

Much of the current work in MDO is directed toward the effective inclusion of advanced simulation and analysis tools into the design process, at the earliest time possible, in such a manner that the designer can efficiently exploit the information provided by these tools. As the tools become more complex, the ability of the design engineer to effectively process the information they provide can be limited. It appears that the successful integration of MDO methodologies into the actual system design process will therefore require a more complete understanding of the design problem formulation, the role of the tools, and the design decision processes. The current paper is an attempt to add some insight to this discussion.

In its simplest sense, the Multidisciplinary Design Optimization (MDO) problem can be cast as a traditional optimization problem of the form:

    Minimize:    f = f(x, y)
    Subject to:  h_i(x, y) = 0.0
                 g_j(x, y) ≥ 0.0                                  (1)


where the system merit function, f, is an arbitrary function of the bounded system design variables (x) and the system states (y), which is minimized subject to the equality constraints, h_i, and the inequality constraints, g_j. For systems utilizing only a small number of design variables (on the order of 10²) and composed of analyses which can be performed quickly (on the order of minutes), solving the MDO problem as stated in Equation 1 can often be accomplished using a variety of accepted numerical optimization strategies. However, engineering systems typically contain many more design variables (on the order of 10³) and involve the use of many varied analyses which each contribute information about the system and its performance. The determination of the system characteristics, referred to as a system analysis, often necessitates iterating between complex numerical procedures which are expensive both computationally and in terms of person-hours. Consequently, simply performing a single system analysis to obtain a consistent set of states for a given design vector can be a difficult and lengthy process, and it may not necessarily lead to a design which satisfies all of the constraints imposed on the system. Optimizing such a system using conventional optimization strategies is beyond current and immediately foreseeable computational capabilities. If each discipline embarks upon an independent venture to improve the design, then each is limited in its ability to evaluate the system level merit, f, and cannot determine the influence of its design decisions on many of the system constraints. Thus improving the design, or even determining a feasible design, can be difficult for problems of practical scope. Reference [1] surveys a number of the recent developments in MDO methods and highlights many of the other current concerns in developing practical MDO methodologies.

It appears that one way in which the goals of MDO can be realized is through the use of the Concurrent Subspace Optimization (CSSO) algorithm proposed by Sobieski [2]. This work provides a framework in which designers at the discipline level have the ability to improve the system design through the use of the tools and analyses with which they possess expertise. Information on the influence of their decisions on the complete system is provided to them using the Global Sensitivity Equations (GSE), which supply partial derivatives of the system states; from these, the impact on the constraints and on the system level merit function can be determined.
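For reference, for a two-discipline coupled system of the form y1 = Y1(x, y2) and y2 = Y2(x, y1) (the form used in the demonstration problem later in this paper), the Global Sensitivity Equations can be written in their standard form as a single linear system,

    [ I                  -∂Y1/∂y2 ] { dy1/dx }   { ∂Y1/∂x }
    [ -∂Y2/∂y1            I       ] { dy2/dx } = { ∂Y2/∂x },

so that one linear solve at a converged system analysis yields the total derivatives of the coupled states with respect to the design variables.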

A limitation of the original formulation of the CSSO algorithm is that the information based on the GSE is approximate, and the designs generated at the subspace, or discipline, level must be coordinated during each iteration of the algorithm. Initially this coordination assumed disjoint discipline level design vectors which were concatenated to obtain an "optimal" baseline design. More recent formulations have cast the system level coordination as an approximate optimization problem in which the accuracy of the approximation decreases with distance from the current design. To deal with this difficulty, move limits are typically used in the optimization routine. Additionally, new system approximations (both the GSE and the coordination level approximations) are required for each design cycle. In general, the new approximations do not utilize information from the previous iterations, which means that useful information is ignored after the design has moved to a new region of the design space. This information could be of great value, particularly if one then wishes to perform "what if" studies and consider alternative design requirements.

The algorithm used to obtain designs of coupled systems in the current paper is based on the modified CSSO algorithm proposed by Renaud and Gabriele [3]. The current algorithm differs from their approach in that the linear approximations formed using GSE-generated sensitivities are replaced with response surface approximations in the subspace optimizations. The coordination procedure utilizes these same approximations, augmented with the data generated in the subspace optimizations. The response surface mappings implemented in this research make use of all of the data in the design data base. The goal of this paper is to outline this extension to the CSSO algorithm and to demonstrate a variety of issues related to its implementation using two simple demonstration problems. The next section provides a brief overview of certain features of this framework, and the demonstration problems address specific implementation issues.

II. The Framework

The framework used in this research is shown schematically in Figure 1. This schematic illustrates the four main processes which comprise the framework. The process proceeds in an iterative fashion, counterclockwise, beginning with the system analysis, SA, in the upper left-hand corner.

Figure 1: CSSO-NN Framework. (Schematic of the four main processes: the system analysis, SA, with contributing analyses CA1, CA2, and CA3; the response surface approximations; the subspace optimizations, SSO; and the system level coordination with its convergence check. Process flow and information flow paths connect the components.)

The system analysis itself most likely involves an iterative solution of a complex, coupled set of contributing analyses, and it may represent the most costly single step in the process. The contributing analyses, CAs, are the discipline-specific analyses which are used to evaluate states and constraints. The purpose of a system analysis is to provide a consistent set of states for a given design. The other components of the framework are the response surface approximations (neural networks), the subspace optimizations (SSO), and the system level coordination. The role of each, and their relationships to one another, are briefly described below.

The CSSO-NN algorithm begins with the selection of a baseline set of designs. The reason for starting with multiple designs is that a system approximation is constructed before the subspace optimizations occur. These approximations to the various "non-local" states are used to help the discipline designer understand the influence of local design decisions on the system level constraints and the system merit function; they play the same role as the linear approximations formed using GSE-generated sensitivities in earlier CSSO formulations. A coupled system analysis is performed on each of these designs in order to provide data with which to construct the system approximations. It is important to note that the designs considered at this point need not be feasible; they only need be consistent (i.e., the design variables and the associated states satisfy all of the coupled system analyses).

The next step is the parameterization of the response surface approximations used to provide coordination in the process. There exist many methods by which a system response surface approximation can be constructed; in this study, however, the response surface approximations are limited to feedforward, sigmoid-activation neural networks. These networks were selected for the system approximation because they have shown the ability to represent systems which contain both discrete and continuous design variables [4, 5, 6]. Additionally, neural networks do not conform to any prespecified functional form, so a priori knowledge of the design space is not necessary to represent its shape.

Once the system is approximated, the subspace designers are given the freedom to optimize the system design based on the most appropriate analyses and their experience. This requires the subspace designers to solve the system optimization problem based upon accurate information about a particular region of the design space and approximate information about the rest of the space. The neural networks provide the non-local state information during each subspace optimization; the approach taken in this study is to train a series of neural networks to represent the states associated with each contributing analysis. The use of artificial neural networks as state approximators in the CSSO algorithm is also useful in retaining design space information from one iteration to the next, because these system models require no assumptions concerning the functional form of the response surface and can be trained to represent the entire design space. This is accomplished by starting with an approximation and adding training data in new regions at each iteration. This recursive learning procedure [7] promotes the exploration of larger regions of the design space than local approximations do, since the move limits in the optimization routines can be relaxed.

Once the subspace designers have developed improved designs (recalling that the corresponding states will be estimates), a system analysis is performed on each design and the data base is augmented with each result, as shown in the lower right corner of Figure 1. The last step in the process is the system level coordination, which is accomplished using only the design space approximation in the form of the neural networks; a sketch of such a network approximation is given below.
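As an illustration of the kind of response surface used here, the sketch below implements a small feedforward, sigmoid-activation network trained by gradient descent; the hidden layer size, learning rate, epoch count, and training data are illustrative assumptions rather than settings taken from this work. "Recursive learning" then amounts to appending new (design, state) pairs to the training set and retraining.

    import numpy as np

    class SigmoidSurface:
        """Minimal feedforward, sigmoid-activation response surface (one hidden layer)."""

        def __init__(self, n_inputs, n_hidden=4, seed=0):
            rng = np.random.default_rng(seed)
            self.w1 = rng.normal(scale=0.5, size=(n_inputs, n_hidden))
            self.b1 = np.zeros(n_hidden)
            self.w2 = rng.normal(scale=0.5, size=(n_hidden, 1))
            self.b2 = np.zeros(1)

        @staticmethod
        def _sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def predict(self, X):
            h = self._sigmoid(X @ self.w1 + self.b1)
            return (h @ self.w2 + self.b2).ravel()

        def train(self, X, y, epochs=5000, lr=0.05):
            """Batch gradient descent on the mean-squared prediction error."""
            y = np.asarray(y, dtype=float).reshape(-1, 1)
            for _ in range(epochs):
                h = self._sigmoid(X @ self.w1 + self.b1)   # hidden-layer activations
                out = h @ self.w2 + self.b2                 # linear output layer
                err = out - y
                # Backpropagate the error through the two layers.
                grad_w2 = h.T @ err / len(X)
                grad_b2 = err.mean(axis=0)
                d_hidden = (err @ self.w2.T) * h * (1.0 - h)
                grad_w1 = X.T @ d_hidden / len(X)
                grad_b1 = d_hidden.mean(axis=0)
                self.w2 -= lr * grad_w2
                self.b2 -= lr * grad_b2
                self.w1 -= lr * grad_w1
                self.b1 -= lr * grad_b1

    # Recursive learning: retrain whenever new (design, state) pairs are added.
    designs = np.array([[0.0, 5.0, 5.0], [2.0, 5.0, 5.0], [0.0, 7.0, 5.0], [0.0, 5.0, 7.0]])
    y2_values = np.array([7.9, 8.6, 8.3, 8.5])   # hypothetical y2 results of system analyses
    surface = SigmoidSurface(n_inputs=3)
    surface.train(designs, y2_values)
    print(surface.predict(np.array([[1.0, 4.0, 3.0]])))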



This coordination step could produce either a single "optimal" design or a number of potential design candidates. The cost of performing this step is quite low, since all of the design space is mapped and the evaluation of the approximate system states for a given set of design variables is a very quick process. The coordination procedure then provides the next current design (or designs), a system analysis is performed on each, and a decision is made whether to continue the process. Early in the process, when the design space mapping is very crude, one would not expect the results of the coordination procedure and the system analysis to necessarily agree, which means these designs may not be feasible. As the process continues and the response surface representation improves, the ability to obtain accurate coordination results also improves. The type of optimization method used in either the subspace optimization step or the system level coordination depends upon the type of design variables and the nature of the system; gradient-based search techniques or genetic algorithms could be invoked, depending upon whether continuous or discrete design variables are included.

The remainder of this paper presents the application of the CSSO-NN framework to two simple demonstration problems. In the first problem the design space is small, to allow for limited graphical presentation of results. The second problem involves an aircraft concept sizing study and allows for the presentation of a variety of practical implementation issues related to this approach.
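Before turning to the demonstration problems, the overall flow of Figure 1 can be summarized by the schematic loop below. Every function passed in is a hypothetical placeholder for the corresponding process in the framework (system analysis, response surface training, subspace optimization, coordination, and convergence check), not a routine defined in this work.

    def csso_nn(baseline_designs, system_analysis, train_surfaces,
                subspace_optimizers, coordinate, converged):
        """Schematic CSSO-NN iteration: SA -> response surfaces -> SSOs -> coordination."""
        # Analyze the baseline designs so the data base holds consistent states.
        database = [(x, system_analysis(x)) for x in baseline_designs]
        current = baseline_designs[0]
        while not converged(database):
            surfaces = train_surfaces(database)              # neural network state mappings
            # Each subspace improves the design using local tools plus the surfaces.
            candidates = [sso(current, surfaces) for sso in subspace_optimizers]
            for x in candidates:
                database.append((x, system_analysis(x)))     # verify each with a full SA
            surfaces = train_surfaces(database)              # update with the new designs
            current = coordinate(surfaces)                   # system level coordination
            database.append((current, system_analysis(current)))
        return current, database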

Figure 2: Information Flow for Analysis of the Coupled Problem. (The design vector x enters the contributing analyses CA1 and CA2, which exchange the states y1 and y2 and return the state vector y.)

III. Initial Demonstration Problem

Problem Implementation

The first problem is a simple example which operates on three design variables, x, and produces two states, y, as given in Equation 2:

    [CA1]   y1 = x1² + x2 + x3 − 0.2 y2
    [CA2]   y2 = √y1 + x1 + x3                                (2)

Thus the state evaluations, or contributing analyses, are simple closed-form analytic expressions in the design variables and the non-local states, which exchange information as shown in Figure 2. The solution of this set of coupled, nonlinear algebraic equations is determined using an iterative process; the process used to determine y1 and y2 for a fixed set of design variables, x, is the system analysis for this problem. Sequentially performing each contributing analysis, CA1 and CA2, represents a single iteration cycle in the system analysis.

For this set of functions the system analysis required on the order of five iteration cycles to converge using a zero-order updating scheme, i.e., the current values of the states in the iteration process are the most recent estimates. In general, the initial state estimates influence the convergence properties of a system. For the system shown here, the initial state estimates influenced the number of iterations required to reach convergence in only a small portion of the design space (about 0.1%); in the remaining cases the convergence was insensitive to the initial state estimates. The system analysis presented in Equation 2 can, in general, be a complex and expensive task when the tools needed to perform the contributing analyses are themselves complex and expensive. Improving the design of the system is achieved by changing elements of the design vector x, and the optimization problem for the system (shown in Figure 3) can require this potentially expensive, iterative system analysis for each perturbation of the design. This is a fundamental concern in multidisciplinary optimization. For small systems it is possible to perform the full system optimization, referred to as an "all-at-once" approach, as shown in Figure 3.
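As a concrete illustration of this zero-order updating scheme, a minimal sketch of the system analysis for Equation 2 might look like the following; the initial state estimate, tolerance, and iteration limit are illustrative assumptions, not values from this study.

    import math

    def ca1(x, y2):
        # Contributing analysis 1: y1 = x1^2 + x2 + x3 - 0.2*y2
        return x[0] ** 2 + x[1] + x[2] - 0.2 * y2

    def ca2(x, y1):
        # Contributing analysis 2: y2 = sqrt(y1) + x1 + x3
        return math.sqrt(y1) + x[0] + x[2]

    def system_analysis(x, y2_guess=0.0, tol=1e-8, max_cycles=50):
        """Zero-order (successive substitution) iteration between CA1 and CA2."""
        y2 = y2_guess
        for cycle in range(1, max_cycles + 1):
            y1 = ca1(x, y2)          # one pass through CA1 ...
            y2_new = ca2(x, y1)      # ... followed by CA2 is one iteration cycle
            if abs(y2_new - y2) < tol:
                return y1, y2_new, cycle
            y2 = y2_new
        raise RuntimeError("system analysis did not converge")

    # Consistent states at the baseline design used later in this section, x = (0, 5, 5)
    y1, y2, cycles = system_analysis([0.0, 5.0, 5.0])
    print(y1, y2, cycles)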

Full System Optimization Solution

In order to find the global optimum design and establish a benchmark for the assessment of the neural network based implementation of the modified CSSO algorithm, the all-at-once optimization problem posed in Figure 3 was solved with the design variable bounds:

    −10.0 ≤ x1 ≤ 10.0,   0.0 ≤ x2 ≤ 10.0,   0.0 ≤ x3 ≤ 10.0.

This optimization was performed using a generalized reduced gradient (GRG) approach [8].

Figure 3: Full System Optimization for the Coupled Analytic Problem. (The optimizer drives the design vector x through the coupled analysis, which returns the states y, taking the process from x_start to x_end.)

    Minimize:    f = f(x, y) = x2² + x3 + y1 + e^(−y2)
    Subject to:  g1 = y1 / y1,allowable − 1 ≥ 0
                 g2 = 1 − y2 / y2,allowable ≥ 0
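As a small-scale illustration of this all-at-once approach, the sketch below wraps the coupled system analysis of Equation 2 inside a gradient-based optimizer (scipy's SLSQP is used here as a stand-in for the GRG code of [8]); note that every objective or constraint evaluation triggers a full system analysis. The allowable values y1,allowable and y2,allowable are not recoverable from the text, so the numbers below are placeholders only.

    import math
    from scipy.optimize import minimize

    Y1_ALLOWABLE = 8.0    # placeholder assumption; the value used in the study is not given here
    Y2_ALLOWABLE = 24.0   # placeholder assumption; the value used in the study is not given here

    def system_analysis(x, y2=0.0, tol=1e-10, max_cycles=200):
        """Coupled analysis of Equation 2 by successive substitution."""
        for _ in range(max_cycles):
            y1 = x[0] ** 2 + x[1] + x[2] - 0.2 * y2
            y2_new = math.sqrt(max(y1, 0.0)) + x[0] + x[2]   # guard for intermediate iterates
            if abs(y2_new - y2) < tol:
                return y1, y2_new
            y2 = y2_new
        raise RuntimeError("system analysis did not converge")

    def objective(x):
        y1, y2 = system_analysis(x)                 # a full SA for every design perturbation
        return x[1] ** 2 + x[2] + y1 + math.exp(-y2)

    def g1(x):
        y1, _ = system_analysis(x)
        return y1 / Y1_ALLOWABLE - 1.0              # feasible when g1 >= 0

    def g2(x):
        _, y2 = system_analysis(x)
        return 1.0 - y2 / Y2_ALLOWABLE              # feasible when g2 >= 0

    result = minimize(objective, x0=[1.0, 5.0, 5.0], method="SLSQP",
                      bounds=[(-10.0, 10.0), (0.0, 10.0), (0.0, 10.0)],
                      constraints=[{"type": "ineq", "fun": g1},
                                   {"type": "ineq", "fun": g2}])
    print(result.x, result.fun)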

The process was begun by starting at all combinations of x1 = (−10, −1, 1, 10), x2 = (0, 5, 10), and x3 = (0, 5, 10), a total of 36 initial conditions. The design space for this coupled problem is shown in the region around the globally optimal design in Figure 4. The design space exhibits second-order univariate behavior in x1 and x2 and linear univariate behavior in x3. Because of the location of the constraint surfaces in this problem, there are two local minima as well as the global minimum. Figure 4a shows the global optimum at (3.03, 0.0, 0.0) and a local optimum at (−2.83, 0.0, 0.0). The ability of the gradient-based, all-at-once system optimization to locate the global optimum in the design space is illustrated in Figure 5. This figure shows that the process converged to three distinct regions of the space. The global optimum was identified in 56% (20/36) of the cases; the remaining 44% of the starting points yielded designs which were not the global optimum. The distance between designs is defined as

    d = √( ((x1 − x1,opt)/(x1,max − x1,min))² + ... + ((xn − xn,opt)/(xn,max − xn,min))² )

where xn,opt is the point from which distance is being measured and xn,max − xn,min is the range of the nth design variable. The percent design space measure is simply 100 times d.
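For clarity, this normalized distance measure can be computed as in the short sketch below; the design points shown in the example call are arbitrary.

    import math

    def design_space_distance(x, x_opt, x_min, x_max):
        """Normalized distance between two designs, as a fraction of the design space."""
        return math.sqrt(sum(
            ((xi - oi) / (hi - lo)) ** 2
            for xi, oi, lo, hi in zip(x, x_opt, x_min, x_max)
        ))

    # Example: distance of the baseline (0, 5, 5) from the global optimum (3.03, 0, 0)
    d = design_space_distance([0.0, 5.0, 5.0], [3.03, 0.0, 0.0],
                              [-10.0, 0.0, 0.0], [10.0, 10.0, 10.0])
    print(100.0 * d, "% of the design space")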

Figure 4: Design Space for the Coupled Problem Around the Global Optimum. (Panels (a), (b), and (c) show the variation of the merit function f and the constraints g1 and g2 in the x1, x2, and x3 directions, respectively, with the feasible region, the global optimum, and the local optimum indicated.)

Another useful benchmark, and one of the primary reasons the all-at-once optimization of large, coupled systems is infeasible, is the number of contributing and system analyses required to obtain an optimal design. The average number of contributing analyses performed by each discipline over the 36 trials was 617, which occurred during 154 system analyses. Although the number of contributing analyses which must be performed during a coupled system analysis varies with the system solution scheme, the initial guesses for the system states, the design variables, the manner in which gradients are determined, and so on, the number of analyses needed for even this simple problem becomes unacceptably large as the complexity of the tools used to perform those analyses increases. Since approximately four CAs are performed for every SA (617/154), reducing the number of system analyses reduces the computational time by roughly a factor of four for this problem, assuming the time to perform the optimization itself is small with respect to the time required to obtain a result from a discipline. Additional time penalties are incurred in a practical design environment, since the communication of results between disciplines is not instantaneous.

Figure 5: Full System Optimization Design Determination. (Histogram, based on the 36 all-at-once optimizations, of the number of designs versus distance from the global optimum in percent of design space; 56% of the runs reached the global optimum, with 33% and 11% converging to two other regions.)

MDO Formulation Results

Formulation of this problem in an MDO framework requires a decision about what information to use in the construction of the response surfaces. Since the problem contains both feedforward and feedback between the disciplines, a set of response surface mappings which depends on the union of the design vectors of the coupled disciplines as inputs was chosen. Since each of the subspace optimizers relies on approximations to non-local states, a reasonable initial design space approximation is desirable before any optimization is attempted. In this problem, the initial data base consisted of four designs: a baseline design and three designs which were single-variable perturbations of the baseline. For the initial implementation of this problem the baseline design was x = (0, 5, 5), with perturbations of 20% of the range of each design variable. The intent in choosing these designs was that the baseline design is in the middle of the design space, and approximate partial derivative information could be provided for the construction of the response surface mappings. Although the increased number of designs in the data base may appear to be a detriment, since each design requires a system analysis in order to obtain its state information, this same procedure is employed in other approximation techniques and in traditional optimization when forming local gradients. The difference between CSSO-NN and these other approaches is the point in the algorithm at which the additional system analyses are performed.
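As a sketch of how this initial data base might be assembled, the snippet below forms the baseline design and its three single-variable perturbations of 20% of each variable's range; the choice of a positive perturbation direction is an assumption, since the sign is not stated.

    baseline = [0.0, 5.0, 5.0]
    ranges = [20.0, 10.0, 10.0]   # variable ranges implied by the bounds -10..10, 0..10, 0..10

    initial_designs = [baseline] + [
        [b + (0.2 * r if i == j else 0.0) for j, (b, r) in enumerate(zip(baseline, ranges))]
        for i in range(len(baseline))
    ]
    # Each of these four designs is then passed through the coupled system analysis,
    # and the resulting (design, states) pairs seed the response surface data base.
    print(initial_designs)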


With an initial data base established, response surface mappings are developed. Each subspace optimizer then solves a system optimization problem in which a discipline level analysis is performed using the baseline design vector and approximations to the non-local states. For the simple problem under consideration it is assumed that the determination of the first constraint, g1(y1), is the responsibility of the first contributing analysis, CA1, and that the calculation of g2(y2) is the responsibility of the designers in CA2. A designer in CA1 has the ability to control only that portion of the design vector about which the designer has expertise; design variables may be shared between disciplines when designers in multiple disciplines have expertise in dealing with the same variables. Within the first contributing analysis the designer has a tool which allows for the computation of y1. Since y1 in this problem depends on x and y2, the designer in the first contributing analysis needs information about y2 at various values of x. This information is provided using a response surface mapping of y2 as a function of the complete system design vector. An optimization problem is formulated using the system level merit function and constraints, and it is solved in this first subspace using the partitioned system design vector. The resulting subspace optimization problem for the first contributing analysis is given in Equation 3:

    Minimize:    f = f(x2, x3′, y1′, ỹ2) = x2² + x3′ + y1′ + e^(−ỹ2)
    Subject to:  g1 = y1′ / y1,allowable − 1 ≥ 0
                 g2 = 1 − ỹ2 / y2,allowable ≥ 0                   (3)
                 −10 ≤ x1 ≤ 10,   0 ≤ x2 ≤ 10
    with         y1′ = x1² + x2 + x3′ − 0.2 ỹ2   and   ỹ2 = NN(x1, x2, x3′).

Figure 6: Distance Between Subspace-Predicted and Optimal Designs. (Semi-log plot of the distance from the optimum versus iteration, 0 through 8, for the case with 4 initial data points and a 3-2-1 neural network.)

System design variable x3′ is fixed for this subspace, ỹ2 is a response surface approximation based on the full system design vector, and y1′ and f′ result from analyses performed using the most appropriate tools within this discipline together with this approximate information related to the other disciplines. A similar subspace optimization problem is formulated for the second contributing analysis. These subspace optimizations begin from a baseline design, either as initially specified or as determined by the system level coordination problem. After a subspace optimization is completed, a full system analysis is performed on the resulting design. In early algorithm iterations the predicted and calculated states, merit function, and constraints can be significantly different. This is a consequence of the decision not to impose move limits on the design variables and of the fact that the early forms of the response surface approximations are developed from only very sparse design space information. The ability of the first subspace optimization to identify the globally optimal design is depicted in Figure 6. The designs generated at the discipline level are within 0.1% of the global optimum by the third iteration. This rapid convergence is rooted in the benign nature of the design space and is strongly dependent on the ability of the other subspace designers and the system optimization process to determine the optimal values for the design variables which are not under the control of the subspace being considered. Upon completion of the subspace optimizations, each optimal design, as determined at the discipline level, is analyzed and added to the data base. The response surface approximations are subsequently updated and used in the optimization of the full system. The system level optimization problem is cast in the same form as the full system optimization problem of Figure 3.
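A minimal sketch of this first-subspace optimization (Equation 3) is shown below. The function y2_surrogate stands in for the trained neural network mapping ỹ2 = NN(x1, x2, x3′); its coefficients, the fixed value of x3′, and the allowable limits are all placeholder assumptions for illustration.

    import math
    from scipy.optimize import minimize

    def y2_surrogate(x1, x2, x3_fixed):
        # Hypothetical trained mapping for y~2; a real implementation would evaluate
        # the neural network trained on the current design data base.
        return 0.3 * x1 + 0.1 * x2 + 0.5 * x3_fixed + 2.0

    X3_FIXED = 5.0                            # non-local design variable, held fixed
    Y1_ALLOWABLE, Y2_ALLOWABLE = 8.0, 24.0    # assumed allowable values

    def ca1(x1, x2, y2_approx):
        """Local (exact) contributing analysis for y1' in subspace 1."""
        return x1 ** 2 + x2 + X3_FIXED - 0.2 * y2_approx

    def subspace1_objective(v):
        x1, x2 = v
        y2a = y2_surrogate(x1, x2, X3_FIXED)
        y1 = ca1(x1, x2, y2a)
        return x2 ** 2 + X3_FIXED + y1 + math.exp(-y2a)

    def subspace1_g1(v):
        x1, x2 = v
        y2a = y2_surrogate(x1, x2, X3_FIXED)
        return ca1(x1, x2, y2a) / Y1_ALLOWABLE - 1.0               # g1 >= 0

    def subspace1_g2(v):
        x1, x2 = v
        return 1.0 - y2_surrogate(x1, x2, X3_FIXED) / Y2_ALLOWABLE  # g2 >= 0

    result = minimize(subspace1_objective, x0=[0.0, 5.0], method="SLSQP",
                      bounds=[(-10.0, 10.0), (0.0, 10.0)],
                      constraints=[{"type": "ineq", "fun": subspace1_g1},
                                   {"type": "ineq", "fun": subspace1_g2}])
    print(result.x)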

The difference between the full system optimization and the system level optimization in the CSSO environment is that the state information in the CSSO environment is determined via neural network approximation. For this example these approximations are based on the full system design vector. In the current formulation of the CSSO-NN algorithm, the role of the artificial neural networks is to aid in reducing the number of system analyses which must be performed in order to find the global optimum. This is accomplished by constructing artificial neural network response surface mappings about a baseline point. Each subspace, working with these approximations and starting at this baseline point, arrives at a new design. These new designs, when added to the data base, give gradient-like information about the baseline. The tendency is for the CSSO-NN algorithm to identify progressively better designs. This point is illustrated by examining the convergence history of the objective at the system level, shown in Figure 7. Starting with the baseline design (0.0, 5.0, 5.0), with an objective value of 28.4, in two iterations the baseline design moved to (3.07, 0.0, 0.0), with an objective of 8.22. This design is very close to the global optimum of (3.03, 0.0, 0.0), f = 8.01. The interesting thing to note is that the objective function value predicted by the neural network remains nearly constant throughout the process, while the actual value of the merit function decreases with each iteration. To gain insight into the cause of this, it is useful to examine the neural network representation of the design space in the neighborhood of the global optimum (Figure 8).

The initial network representation of the merit function (iteration 0) shows a sigmoidal shape which matches the actual objective function at two distinct points only by coincidence. (The nonparametric space for this simple problem, represented by the solid line, is the result of repeated system analyses, representative of an exhaustive search of the design space.) The sigmoidal nature of the approximation stems from the fact that there is no information in the region of the global optimum, and the initial network weights were such that the network activation function was recovered with variation in x1 about the global optimum. After a training data point is added to the data base in the neighborhood of the global optimum in the first iteration, the approximation becomes locally accurate about this point. Slightly improved networks result from further iterations of the CSSO-NN algorithm, but the approximation still maintains only local validity, as demonstrated by the approximation at convergence (iteration 6). This is a result of the fact that both the system level coordination optimization problem and the discipline level optimization problems add designs to the data base only in the vicinity of the global optimum, and there is little definition of the design space away from this point. This highlights the distinction between using this approach to identify an "optimal" design and using it to map the design space over some larger domain of interest. Both approaches have merit and, in order to achieve their different goals, different procedures for adding designs to the data base need to be pursued.

Although the ability to determine the global solution is of interest, of equal or greater importance is the cost and time associated with determining the solution to the system design problem. The cost can be measured in terms of the number of system and contributing analyses required to find an improved, and ideally optimal, solution. The results in Table 1 provide a comparison between the full system optimization and five CSSO-NN trials from different starting points.

Figure 7: Comparison of Neural Network Prediction and Actual Merit Function Convergence. (Actual and neural-network-predicted merit function versus iteration, starting from the baseline design; the actual value decreases with each iteration while the network prediction remains nearly constant.)

Figure 8: Design Space Near Global Optimum. (Merit function versus x1 in the neighborhood of the global optimum: the actual function compared with the network representations at iteration 0 and iteration 6.)

Table 1: Number of Analyses to Determine Global Optimum

    Method        Starting Point        CA1    CA2    SA
    All-At-Once   (36 different)        617    617    154
    CSSO-NN(4)    (0.0, 5.0, 5.0)       379    290     16
    CSSO-NN(7)    (0.0, 5.0, 5.0)       317    274     28
    CSSO-NN(27)   (0.0, 5.0, 5.0)       420    136     39
    CSSO-NN(4a)   (-1.25, 0, 8.43)      713    279     22
    CSSO-NN(4b)   (-1.25, 0, 8.43)     2986   2321     61

Examining the case currently under discussion (starting point (0.0, 5.0, 5.0), four initial data points), a 39% reduction in the number of times CA1 is performed, and a 53% reduction for CA2, is observed with respect to the full system optimization. More important, however, is the drastic reduction, 86%, in the number of full system analyses performed. This sharp reduction in the number of times accurate state information must be exchanged between disciplines for evolving designs highlights the potential benefit of MDO methodologies. The algorithm was also applied to the same problem using initial data bases consisting of 7 and 27 initial data points. In both cases the globally optimal design was found with fewer contributing and system analyses than for the all-at-once optimization (see Table 1). This can be attributed to a better global representation of the design space as a result of the increased number of training data points.

An example of this is given in Figure 9 for the case in which the initial data base contained 27 designs. The figure shows the nonparametric solution and the approximation to the constraints g1 and g2 after the second iteration, with variations in design variable x1 about the global optimum. The data base after this iteration consists of 33 designs, four added as a result of the subspace optimizations and two from the system level coordination. The approximations are accurate over a large portion of the design space due to the large initial data base. Since the design space under consideration is at worst quadratic in nature, the neural networks can, with the few additional design points, arrive at an initial approximation of the design space which fairly well represents the state variations across the space.

In order to investigate the robustness of the current algorithm, one of the designs identified as a local optimum in the all-at-once optimization study, (−1.25, 0.00, 8.43), was selected as the initial baseline design, and two trials were performed, each using three single-variable perturbations to acquire gradient information for the initial data base, as before. In the first case started from the local optimum, the design rapidly progressed to the global optimum (see Figure 10). This result is consistent with all previous trials and was accomplished by performing 713 and 279 of the first and second contributing analyses, respectively; this case required 22 system analyses (Table 1). The second trial starting from the initial baseline design of (−1.25, 0.00, 8.43) was not successful in finding the global optimum solution, although an improved, feasible design was located.

Figure 9: Approximation to Constraints With Expanded Initial Data Base. (Nonparametric and approximate values of the constraints g1 and g2 versus x1 about the global optimum.)

Figure 10: Comparison of Neural Network Prediction and Actual Merit Function Convergence (case 4a). (Actual and neural-network-predicted merit function versus iteration.)

In Figure 11 it can be seen that the algorithm initially identified two infeasible designs before moving toward an improved design which was non-optimal in the nonparametric design space.

The reason the algorithm converged to this improved, suboptimal design can be seen in Figure 12, which illustrates the neural network approximation in the proximity of the final design in comparison with the actual merit function. Variations between the nonparametric and approximate constraint surfaces force the design to its final converged value, which is suboptimal in the nonparametric space.

Figure 11: Neural Network Prediction and Actual Merit Function Convergence Starting Near Local Optimum (case 4b). (Actual and predicted merit function versus iteration; designs marked "x" are infeasible.)

Figure 11: Neural Network Prediction and Actual Merit Function Convergence Starting Near Local Optimum (case 4b) the algorithm converged to this improved suboptimal design can be seen in Figure 12. This gure illustrates the neural network approximation in the proximity of the nal design in comparison with the actual merit function. Variations between the nonparametric and approximate constraint surfaces force the design to its nal converged value which is sub-optimal in the nonparametric space. The reason that the approximate representation 9

Figure 12: Design Space Around Converged Solution. ((a) The nonparametric and approximate (neural network) merit function versus x3 near the converged design point; (b) the variation of the constraints g1 and g2 versus x3, with the design point indicated.)

The purpose of this demonstration problem was to provide an initial implementation for testing the CSSO-NN multidisciplinary design methodology. These initial results are encouraging in that the application of this method resulted in solutions to the design problem which were globally optimal for almost all of the starting conditions examined; the exception occurred for a case which was begun from a local optimum, and an improved design was found in that instance. Of particular concern in multidisciplinary design optimization is the number of times that the system as a whole is analyzed during the design process. The current algorithm yielded a decrease in the number of system analyses as compared with a full system optimization procedure. This is especially significant considering the percentage of cases and conditions under which the CSSO-NN algorithm was able to identify the global optimum.

Figure 13: Distribution of Final System Data Base. (Histogram of the number of designs versus distance from the final design, in percent of design space.)

IV. Aircraft Concept Sizing Problem


Problem Implementation

The second problem addressed using the neural network based CSSO algorithm was a preliminary aircraft concept sizing problem. This problem was originally posed as an analysis problem in which two constraints were imposed as part of the process of generating a consistent system. The goal of the problem was to determine the total weight of the aircraft subject to requirements on range and stall velocity. Eight fixed parameters and five design variables composed the input information for the analysis, which computed six states. The order of the state calculations is one in which an aircraft preliminary designer might typically approach this problem (see Figure 14); it has not been optimized for computational efficiency or to minimize feedback within the system analysis.

Figure 14: Information Flow for Aircraft Concept Sizing Analysis. (Design variables: aspect ratio, fuselage length, fuselage diameter, cruise altitude, cruise velocity. Fixed parameters: number of passengers, propeller efficiency, specific fuel consumption, payload weight, ultimate load factor, number of engines, weight per engine, CL_max. Computed states: S_wing, S_wetted, L/D, W_empty, W_total, W_fuel.)

Figure 15: Information Flow for MDO Formulation of Aircraft Concept Sizing Problem. (Contributing analyses and their states: Aerodynamics: wetted area, aspect ratio; Weights: empty weight, total weight; Performance: range, stall velocity.)

In order to formulate this problem so that it can be solved in the current CSSO framework, the constraints become part of the system optimization problem formulation rather than part of the system analysis. Changing the role of the constraints in this way allows wing area and fuel weight to become design variables rather than system states, which increases the number of design variables in the MDO formulation of the problem to seven. The problem is also reorganized in that the calculation of the state information is grouped into three contributing analyses: aerodynamics, weights, and performance. By grouping the calculations in this way, the feedbacks between contributing analyses are eliminated, and feedback in the system analysis is confined to the reconciliation between total weight and empty weight. The arrangement of the contributing analyses in the MDO formulation of this problem is shown in Figure 15. Optimization of this problem takes the form:

    Minimize:    W_total = W_empty + W_fuel + W_p/l
    Subject to:  Range / Range_required − 1 ≥ 0.0
                 1 − V_stall / V_stall,required ≥ 0.0             (4)

where the objective, W_total, is a function of a state, W_empty, a design variable, W_fuel, and the payload weight, W_p/l, a fixed parameter.

In this problem formulation the contributing analyses can be classified in three ways: analyses which provide information to other analyses (Aerodynamics), analyses which have a direct impact on the merit function and provide information to other analyses (Weights), and analyses which require information from other analyses and have a direct impact on the constraints (Performance). The classification of the contributing analyses has a strong bearing on the way in which the response surface approximations for the various states are developed. In the case of the aerodynamics contributing analysis, the only information used to determine L/D comes from a subset of the system design variables; for this reason the response surface mapping for L/D is a function of wing area, aspect ratio, fuselage length, and fuselage diameter, the subspace design vector. Determination of the aircraft empty weight requires the full system design vector and the aircraft total weight. Since the total weight determination depends on the same set of design variables as the aircraft empty weight, the empty weight response surface mapping relies only on the design variables. Another interpretation of this is that, since the response surface approximation is constructed using consistent design information, inclusion of the total weight in the empty weight response surface mapping is redundant, because the design variables upon which the total weight depends are already inputs to the empty weight approximation.

Figure 16: Response Surface Mapping for Range and Vstall. (The range and stall velocity approximations take wing area, fuel weight, lift-to-drag ratio, and empty weight as their four inputs.)

This also applies to systems in which feedback exists between contributing analyses. In such systems, the variation of a set of states whose determination requires iteration can be described by the union of the independent sets of design variables necessary to compute the desired states. In terms of the current terminology, a given state depends only on the subspace design vector and non-local state information; as a result, a particular state can be described by the union of its own design vector with the subspace design vectors of all the states involved in the iteration. The third classification applies to a contributing analysis which receives information from other contributing analyses and has a direct impact on the constraints of the system, as is the case for the Performance contributing analysis. The same approach described above for states which require iteration for solution could be applied to the case in which only feedforward occurs. A concern with such an approach is the number of design variables required as inputs to the approximation and the corresponding amount of data needed to obtain a valid approximation. In order to reduce the number of independent variables used to describe a state which is based on non-local information, the inputs to the response surface mapping for such a state become the design vector for that state plus the necessary non-local states. For the Performance CA the approximation is as shown in Figure 16. This approximation requires only four inputs, as opposed to the seven that would be required if the alternate formulation were used. Additionally, this approximation is constructed using consistent data from system analyses. The disadvantage of this formulation (and the reason it is not used for CAs involved in iteration) is that the approximation of range and stall velocity is based on an approximation to the lift-to-drag ratio and/or the empty weight (depending on which part of the algorithm is being performed). The trade-off between the size of the response surface mapping and the loss of accuracy due to multilevel approximation of states has not yet been investigated. For the purposes of this paper it suffices to recognize this trade-off and present the results with it in mind.


Aircraft Sizing Results

The most critical test of algorithm performance is whether the optimum design in the space is identified. The optimal design obtained by performing a series of all-at-once optimizations for the aircraft concept sizing problem occurs for

    x = (AR, S_wing, l_f, D_f, cruise altitude, V_cruise, W_fuel) = (5.00, 177.9, 20.0, 4.0, 0.0019, 200.0, 145.1),

with a total weight (the merit function) of 1761.8 pounds.


In order to obtain this design, the full system optimization was again started from the center of each face of the design space hypercube and from a point at the center of the design space, a total of 15 starting points. In six of the fifteen cases (40%) the GRG optimization algorithm located the globally optimal design, with the remaining nine cases identifying a local optimum design in the neighborhood of x = (7.00, 182.0, 20.0, 4.0, 0.0019, 200.0, 127.0), with a total weight of about 1802.0 pounds. For the cases in which the globally optimal design was identified, the solution required an average of 646 system analyses, and the same number of contributing analyses for each CA (since the system contains no feedback between CAs). The remaining cases took an average of 138 system analyses, with the overall average being 341 SAs for convergence to an optimum. When initiated from the middle of the design space, the all-at-once optimization procedure located the local optimal design, requiring 134 system analyses.

To provide a meaningful basis for comparison, the CSSO-NN algorithm was run five times, each time beginning from the middle of the design space with an initial data base consisting of this point and univariate perturbations of the design vector. In all cases the final design identified by this procedure was nominally the globally optimal design. The cost of performing this optimization was on the order of 1000 analyses of each discipline and 35 system analyses to convergence. Although the number of individual contributing analyses is higher for the CSSO-NN problem formulation than for the all-at-once optimization, the number of system analyses performed is significantly smaller.

The reason for the large number of contributing analyses in the CSSO-NN formulation is that a gradient-based optimization is performed at the subspace level at each iteration. The significance of this number in a practical design environment is not very great, since the level of sophistication of the tools used to perform the analyses can be varied in order to obtain a "ballpark" point at which to perform optimization and analysis with the best tools available. The speed with which the CSSO-NN algorithm was able to locate the optimum design is shown in Figure 17a for the most quickly (5 algorithm iterations) and most slowly (10 algorithm iterations) converging cases.

For the case that converged in 5 iterations, the final design was approximately 9% from the global optimum. This 9% can be almost entirely attributed to error in determining the cruise altitude design variable, a result of the aircraft total weight being rather insensitive to this design variable; the difference in the total aircraft weight due to the variation in cruise altitude was only 4 pounds. This is apparent from Figure 17b, in which both cases yielded nominally the same aircraft total weight. Even in the ten-iteration case, the system design was near the global optimum within 5 iterations (28 system analyses). This implies that, for this problem, near-optimal designs can be located in a reasonable number of design iterations, which is of major importance when dealing with practical design problems in which only a few design iterations are possible due to market constraints. As was demonstrated in the simple, analytic demonstration problem discussed previously, the neural network implementation of the CSSO algorithm presented here was robust in its ability to identify not only improved designs, but designs in the neighborhood of the global optimum. Even more important, perhaps, is the fact that these designs were located using a fraction of the system analyses required with traditional optimization procedures, and that they were identified in a very small number of design iterations.



Figure 17: Convergence History for Aircraft Concept Sizing. ((a) Design vector history: distance from the global optimum, in percent of design space, versus iteration for the quickest-converging (5 iteration) and slowest-converging (10 iteration) cases. (b) Total weight history: aircraft total weight, in pounds, versus iteration for the same two cases, approaching the global optimum value.)

V. Conclusions


In the design of engineering systems, the most difficult tasks are exchanging information between disciplines such that every designer is working on the same problem, obtaining consistent information about the entire system, and formulating the design problem such that designers can use the tools with which they possess a level of proficiency in order to move the system toward a "best" design. The CSSO-NN algorithm addresses the first difficulty by providing designers at the discipline level with approximations to all of the non-local information. The discipline level designer is able to provide accurate information about a part of the system and to optimize the system while taking into account the effect of discipline level design decisions on the performance of the entire system. The cost of providing this functionality lies in the need to construct response surface approximations to represent each discipline. Perhaps the greatest challenge in applying this formulation to practical systems is defining the metric by which the system is to be evaluated and dealing with constraints which change during the design process.



The success of the CSSO-NN algorithm in distributing non-local information was demonstrated using simple "closed-form" contributing analyses, and the ability of a designer to identify optimal system designs was illustrated. Although the number of analyses required at the discipline level to obtain an optimal system is high for the current implementation, realistic design environments can provide a more efficient means of achieving subspace optimality, due to previous design experience and to the variety in the complexity (and cost) of the tools available. Because of the time and cost associated with simply analyzing a system without performing any design, it is highly desirable to minimize the number of pure analyses performed during the design process. A reduction, relative to full system optimization, in the number of system analyses required to obtain the globally optimal design was demonstrated in the implementation of the CSSO-NN algorithm on two example problems. However, the number of system analyses necessary for optimality is still quite high. Since recent advances in automatic differentiation have made the computation of local gradient information an inexpensive proposition in cases where this information is not otherwise available, further reductions in the number of system analyses can be expected through the incorporation of this technology and of neural network training algorithms which take local gradient information into account.


Acknowledgements

This work was supported in part by the National Aeronautics and Space Administration, Langley Research Center, Grant NAG-1-1561, Dr. J. Sobieski, Project Monitor. The authors would like to thank Mr. Marc Stelmack for his assistance with the computer simulations.

References

[1] R. T. Haftka and J. Sobieski. Multidisciplinary Design Optimization: Survey of Recent Developments. 34th AIAA Aerospace Sciences Meeting and Exhibit, AIAA 96-0711, Reno, Nevada, January 1996.

[2] J. Sobieszczanski-Sobieski. Optimization by Decomposition: A Step from Hierarchic to Non-Hierarchic Systems. NASA Conference Publication 3031, Part 1, Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, Hampton, Virginia, September 1988.

[3] J. E. Renaud and G. A. Gabriele. Approximation in Nonhierarchic System Optimization. AIAA Journal, 32(1), January 1994.

[4] R. Swift and S. Batill. Application of Neural Nets to Preliminary Structural Design. AIAA/ASME/AHS/ASC 32nd Structures, Structural Dynamics and Materials Conference, 1991.

[5] R. Swift and S. Batill. Simulated Annealing Utilizing Neural Networks for Discrete Design Variable Optimization Problems in Structural Design. AIAA/ASME/AHS/ASC 33rd Structures, Structural Dynamics and Materials Conference, AIAA 92-2311, February 1992.

[6] R. S. Sellar, J. E. Renaud, and S. M. Batill. Optimization of Mixed Discrete/Continuous Design Variable Systems Using Neural Networks. 5th AIAA/NASA/USAF/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Panama City, Florida, September 1994.

[7] S. Batill and R. Swift. Structural Design Space Definition Using Neural Networks and a Reduced Knowledge Base. AIAA/AHS/ASEE Aerospace Design Conference, AIAA 93-1034, Irvine, California, February 1993.

[8] G. V. Reklaitis, A. Ravindran, and K. M. Ragsdell. Engineering Optimization: Methods and Applications. John Wiley and Sons, New York, New York, 1983.