MULTI-OBJECTIVE OPTIMIZATION TECHNIQUES

Asim Karim ([email protected])

INTRODUCTION

Natural systems have evolved over time to achieve a high degree of suitability to task. In the design of man-made systems a similar goal is sought: to achieve the best possible solution to a decision-making or design problem given the constraints of practicality. Such design problems are often modeled as single-objective (SO) optimization problems consisting of a single-valued measure of goodness and a set of constraints. This model is often a simplification, as most real-world problems have multiple conflicting objectives. For example, in the design of reinforced concrete beams the goal is to obtain the least-cost and the least-weight design. The minimum-weight design will not necessarily give the minimum-cost design because of the different cost-to-weight ratios of the materials used.

Multi-objective (MO) optimization provides a framework for solving decision-making problems involving multiple objectives. Multiple criteria decision-making (MCDM) problems are, in general, divided into two categories. The first category is multi-attribute decision analysis, which studies the outcome of conflicting goals under risk and uncertainty; the outcomes are usually few in number and known in advance. Analyses of this type are often applied in economics, marketing, public policy, and business decision making. Multi-objective optimization is the deterministic branch of MCDM, in which the outcomes are not known in advance. Analysis of this type is often used in engineering design. In general, the MO optimization problem is defined as

min f(x) = {f_1(x), f_2(x), ..., f_N(x)}                        (1)

subject to g_j(x) ≤ 0,   j = 1, ..., J                          (2)

and h_k(x) = 0,   k = 1, ..., K                                 (3)

where f is the vector of objective functions, x is the vector of decision variables, g_j is the jth inequality constraint, h_k is the kth equality constraint, and N, J, and K are the total numbers of objective functions, inequality constraints, and equality constraints, respectively. The only difference from a SO problem is the use of a vector of objective functions instead of a single one. For this reason, MO optimization is also sometimes called vector optimization.
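The formulation in Eqs. 1-3 carries directly into code. A minimal sketch of a problem container, with an illustrative two-objective instance that is not from the text:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MOProblem:
    """The MO problem of Eqs. 1-3: N objectives, J inequality and K equality constraints."""
    objectives: List[Callable]    # f_1 .. f_N, each minimized
    inequality: List[Callable]    # g_j(x) <= 0
    equality: List[Callable]      # h_k(x) == 0

    def feasible(self, x, tol=1e-9):
        return (all(g(x) <= tol for g in self.inequality)
                and all(abs(h(x)) <= tol for h in self.equality))

    def outcome(self, x):
        """The outcome vector f(x), a point in R^N."""
        return [f(x) for f in self.objectives]

# Illustrative instance: two conflicting objectives, one bound constraint.
prob = MOProblem(
    objectives=[lambda x: x[0] ** 2 + x[1] ** 2, lambda x: (x[0] - 1) ** 2],
    inequality=[lambda x: -x[0]],     # g(x) = -x0 <= 0, i.e. x0 >= 0
    equality=[],
)
print(prob.outcome([0.5, 0.0]), prob.feasible([0.5, 0.0]))  # → [0.25, 0.25] True
```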

OPTIMALITY CONCEPT

In a single-objective optimization problem the Kuhn-Tucker conditions provide the necessary conditions for optimality, and they can be used to characterize any feasible solution of a SO problem. The MO optimization problem is more complicated. In general, optimizing the objectives one at a time produces a different solution for each objective, and no single feasible solution attains all of these individual optima simultaneously. The solution of a MO optimization problem is a vector in R^N instead of a single-valued number. Therefore, the MO problem can be thought of as a vector minimization problem written as

min {z_1(x), z_2(x), ..., z_N(x)}                               (4)

subject to f_i(x) = z_i(x),   i = 1, ..., N                     (5)

g_j(x) ≤ 0,   j = 1, ..., J                                     (6)

h_k(x) = 0,   k = 1, ..., K                                     (7)

Determining the best outcome vector is a set theory problem in which the ordering and preference of one vector over another must be considered. Optimal MO solutions are defined in terms of these sets by the Pareto optimality condition. A set of solutions is said to be Pareto optimal if, in moving from one point to another in the set, any improvement in one of the objective functions from its current value causes at least one of the other objective functions to deteriorate from its current value. The Pareto optimal set is usually an infinite set, so the decision-maker has to choose the desired solution from the set.
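The definition above translates into a simple pairwise dominance test for a finite list of candidate outcome vectors. A minimal sketch, with illustrative sample points:

```python
def dominates(a, b):
    """True if outcome vector a dominates b: a is no worse in every
    objective and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(points):
    """Keep only the nondominated (Pareto optimal) outcome vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Illustrative two-objective outcomes, e.g. (cost, weight):
points = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (4.0, 4.0)]
print(pareto_set(points))  # → [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
```

Here (3, 4) is dominated by (2, 3), and (4, 4) by (4, 1); the three survivors are mutually nondominated.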

SOLUTION TECHNIQUES

In this section several techniques used for the solution of MO optimization problems are briefly described. All of these techniques require input from the decision-maker beyond that needed for the formulation of the MO problem (Eqs. 1-3). This input usually consists of ranking, weighting, or attainability information for the objectives so that the MO problem can be converted to a SO problem. More generally, the decision-maker implicitly assumes a value (or utility) function V: R^N → R that maps the N-dimensional outcome vector to a single-valued number.

Weighting Objectives Method

In this method the MO problem is converted to a SO problem by using a weighted sum of the original multiple objectives. The equivalent optimization problem is then given by

min Σ_{i=1}^{N} w_i f_i(x)                                      (8)


subject to g_j(x) ≤ 0,   j = 1, ..., J                          (9)

and h_k(x) = 0,   k = 1, ..., K                                 (10)

where the w_i's are the weighting coefficients satisfying the following conditions:

0 < w_i ≤ 1  and  Σ_{i=1}^{N} w_i = 1                           (11)

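The conversion in Eqs. 8-11 can be sketched on a toy problem. The two conflicting one-variable objectives f_1 = x^2 and f_2 = (x − 2)^2 are illustrative, not from the text; each weighted minimum is a Pareto optimal point:

```python
def f1(x): return x * x           # first objective, minimized at x = 0
def f2(x): return (x - 2) ** 2    # conflicting objective, minimized at x = 2

def weighted_min(w1, w2, lo=0.0, hi=2.0, steps=2000):
    """Minimize w1*f1 + w2*f2 (Eq. 8) by dense grid search over [lo, hi]."""
    xs = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    return min(xs, key=lambda x: w1 * f1(x) + w2 * f2(x))

# Varying the weight vector traces out Pareto optimal solutions:
for w1 in (0.25, 0.5, 0.75):
    x = weighted_min(w1, 1.0 - w1)
    print(f"w1={w1}: x={x:.3f}, f1={f1(x):.3f}, f2={f2(x):.3f}")
```

For this convex problem the weighted minimum is x = 2(1 − w1), so sweeping w1 over (0, 1] recovers the whole Pareto set, consistent with the convexity remark above.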
The weighting coefficients are chosen a priori. If the problem is convex, a complete set of Pareto solutions can be obtained by varying the weighting coefficients. The final solution is then chosen by the decision-maker.

Hierarchical Optimization Method

This method requires that the objectives be ordered in decreasing importance. Let f_1 be the most important and f_N the least important objective. The method then consists of the following procedure:

Step 1. Optimize the SO problem consisting of the most important objective function f_1(x) and the original constraints, ignoring all other objectives. Let x^(1) be the optimal solution obtained, where the superscript denotes the iteration number. Repeat Step 2 for i = 2, ..., N.

Step 2. Find the optimum solution x^(i) of the ith objective function f_i(x) subject to the following additional constraint:

f_(i−1)(x) ≤ (1 ± ε_(i−1)/100) f_(i−1)(x^(i−1))                 (12)

where ε_(i−1) is the percentage variation allowed in the previously obtained optimal value f_(i−1)(x^(i−1)), with the sign chosen so that the constraint relaxes rather than tightens that optimum. This constraint enforces the importance of the previously optimized objective. The percentage can also be set to zero, in which case the procedure is known as the lexicographic method. The algorithm will produce a Pareto optimal solution, and by changing the values of ε_i a set of Pareto optimal solutions can be obtained.

Trade-Off Method

In this method a trade-off among the multiple objectives is specified by the decision-maker. This method is also known as the ε-constraint or reduced feasible space method because the technique involves a search in a progressively reduced criterion space. The original problem is converted to a new problem in which one objective is minimized subject to the original constraints plus N − 1 constraints that limit the values of the remaining objectives. Mathematically, we have the following problem:

min f_r(x)                                                      (13)

subject to f_i(x) ≤ ε_i,   i = 1, ..., N; i ≠ r                 (14)

plus Eqs. 2 and 3, where ε_i is the limiting value of f_i desired by the decision-maker. By varying the values of ε_i a complete set of Pareto optimal solutions can be obtained.

Global Criterion Method

In this method the decision-maker uses an approximate solution f* (or the ideal solution if it is known) to formulate a single-objective criterion. The optimum decision variables are determined by solving the following SO optimization problem:

min Σ_{i=1}^{N} ((f_i* − f_i(x)) / f_i*)^P                      (15)

subject to Eqs. 2 and 3. The value of P is set by the decision-maker; common values are 1 and 2.
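A sketch of Eq. 15 with P = 2. The objectives, the ideal vector f*, and the search interval are all illustrative assumptions, chosen so that neither ideal value is zero (the relative deviation in Eq. 15 divides by f_i*):

```python
def f1(x): return x * x + 1.0            # minimum value 1 at x = 0
def f2(x): return (x - 2.0) ** 2 + 2.0   # minimum value 2 at x = 2
f_star = (1.0, 2.0)                      # ideal vector: each objective's own optimum

def global_criterion(x, P=2):
    """Sum of relative deviations from the ideal solution (Eq. 15)."""
    return sum(((fs - f(x)) / fs) ** P for fs, f in zip(f_star, (f1, f2)))

# Grid search over the illustrative interval [0, 2]:
xs = [k / 1000 * 2.0 for k in range(1001)]
x_best = min(xs, key=global_criterion)
print(f"x = {x_best:.3f}, f1 = {f1(x_best):.3f}, f2 = {f2(x_best):.3f}")
```

The result is a single compromise solution between the two individual optima; its position depends on the chosen f*, which is exactly the subjectivity discussed later in the text.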


Method of Distance Functions and Min-Max Method

These methods are similar to the global criterion method in that a known ideal or approximate criterion vector is used to construct a single objective function. In these methods the objective function is given by

min [ Σ_{i=1}^{N} ((f_i* − f_i(x)) / f_i*)^P ]^(1/P)            (16)

When P = 2 this objective represents the distance between the ideal and the final solution. When P = ∞ the method becomes the min-max method, with the objective

min max_{i ∈ I} (f_i* − f_i(x)) / f_i*                          (17)

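The min-max objective of Eq. 17 can be sketched the same way. The deviations are written here as (f_i(x) − f_i*)/f_i* so that they are nonnegative for a minimization ideal point; the objectives and ideal vector are again illustrative assumptions:

```python
def f1(x): return x * x + 1.0            # ideal value 1 at x = 0
def f2(x): return (x - 2.0) ** 2 + 2.0   # ideal value 2 at x = 2
f_star = (1.0, 2.0)

def minmax_objective(x):
    """Largest relative deviation from the ideal vector (Eq. 17)."""
    return max((f(x) - fs) / fs for fs, f in zip(f_star, (f1, f2)))

# Grid search over the illustrative interval [0, 2]:
xs = [k / 1000 * 2.0 for k in range(1001)]
x_best = min(xs, key=minmax_objective)
print(f"x = {x_best:.3f}")
```

At the min-max optimum the two relative deviations are (approximately) equal: no objective is allowed to fall further behind its ideal than the other.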
Using these methods a Pareto optimal solution is obtained.

Goal Programming

Goal programming (GP) is a common technique for the solution of MO optimization problems. In this approach the objectives are treated as goals with desired target or threshold values. These goals are usually not strict; each is allowed to vary within a close range of its desired value. This is accomplished by using deviational variables. The goals are assigned priorities or weights to signify their importance relative to the others. The goal criterion can be one of the following: greater than or equal to, less than or equal to, equal to, or range. Consider a MO optimization problem with only two objective functions f_1 and f_2. Assume the first goal is that f_1 be less than or equal to z_1, and the second goal is that f_2 be equal to z_2. Then the GP problem can be written as

min (w_1^+ d_1^+ + w_2^+ d_2^+ + w_2^− d_2^−)                   (18)


subject to f_1(x) − d_1^+ ≤ z_1                                 (19)

f_2(x) − d_2^+ + d_2^− = z_2                                    (20)

plus Eqs. 2 and 3, where the w's are the penalty weights and each deviational variable d specifies the undesirable deviation in the achievement of its goal. This formulation is known as the Archimedean GP problem, and it can be solved by any SO optimization algorithm.

The lexicographic GP approach uses priorities instead of weights to order the goals. A goal at a higher priority level is assumed to be infinitely more important than a goal at the next lower level. For the problem described in the previous paragraph, the lexicographic GP problem can be written as

lex min {d_1^+, (d_2^+ + d_2^−)}                                (21)

subject to f_1(x) − d_1^+ ≤ z_1                                 (22)

f_2(x) − d_2^+ + d_2^− = z_2                                    (23)

plus Eqs. 2 and 3, where "lex min" denotes the lexicographical minimum of the set of deviational variables. To solve this problem a sequence of SO problems must be solved: each involves the minimization of one deviational variable subject to all the constraints, with the deviational variables from the previous stages fixed at their optimal values.
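The Archimedean GP problem of Eqs. 18-20 can be sketched as follows. For a fixed x the optimal deviational variables are simply the positive parts of the goal violations, so the formulation reduces to a penalized single-objective search. The objectives, targets z_1 and z_2, and weights below are hypothetical:

```python
def f1(x): return x * x                 # goal: f1(x) <= z1
def f2(x): return (x - 2.0) ** 2        # goal: f2(x) == z2
z1, z2 = 1.0, 0.5                       # hypothetical target values
w1p, w2p, w2m = 1.0, 1.0, 1.0           # penalty weights on d1+, d2+, d2-

def gp_objective(x):
    """Archimedean GP objective (Eq. 18) with deviations from Eqs. 19-20."""
    d1p = max(0.0, f1(x) - z1)          # overshoot of the <= goal
    d2p = max(0.0, f2(x) - z2)          # overshoot of the == goal
    d2m = max(0.0, z2 - f2(x))          # undershoot of the == goal
    return w1p * d1p + w2p * d2p + w2m * d2m

# Grid search over the illustrative interval [0, 2]:
xs = [k / 1000 * 2.0 for k in range(1001)]
x_best = min(xs, key=gp_objective)
print(x_best, gp_objective(x_best))  # → 1.0 0.5
```

With these numbers the minimum sits at x = 1.0: meeting the f_2 goal exactly would force a larger violation of the f_1 goal, so the weighted deviations balance at the boundary of the first goal.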

PRACTICAL IMPLEMENTATION CONSIDERATIONS

The algorithms presented in the previous section show that these approaches require additional input from the decision-maker, which is often subjective. Therefore, MO optimization is as much an art as it is a science. The concept of optimality in MO problems is derived from consumer economics, where each consumer gets the "best deal" possible; any deviation from this would result in some consumers getting a better deal at the expense of others. A Pareto optimal solution is therefore a nondominated outcome vector. This optimality reasoning presumes a value function that can characterize one outcome with respect to another. In practice the value function is rarely known, and the approaches presented in the preceding section are used instead.

The Pareto optimal set is usually infinite, so the choice of a final solution rests with the decision-maker. The best approach is to use the algorithms presented earlier in an interactive manner. This not only provides the decision-maker with insight into the problem but also allows a better solution to be chosen quickly. The goal of the algorithms is to provide a structured, ordered technique for moving from a current solution to a better one.

A limitation of the weighting objectives and trade-off methods is that their objective functions have no physical meaning. At best, they can be thought of as figures of merit for classifying different outcomes. This lack of interpretability often makes it very difficult to find good weight and ε_i vectors. Further, the weight vectors depend on the scaling of the objectives, which can be a problem when an objective is rescaled between iterations. The trade-off method is quite arbitrary; however, it has been commonly used as an interactive decision-making tool because it is easy to implement, and the decision-maker can adjust the constraints in each iteration to try to achieve better solutions.


The global criterion, distance function, and min-max methods are based on knowledge of a near-optimal criterion vector. Obtaining such a vector is difficult when the problem is large and complicated, and the choice of an invalid vector will make the problem infeasible.

The most practical approach is goal programming. The technique has a sound conceptual foundation that allows the decision-maker to set up the problem accurately, and the deviational variables handle flexible constraints effectively. However, GP formulations are not very efficient: at least one deviational variable is associated with each goal, so when the number of goals is large the number of variables in the problem becomes very large.

ILLUSTRATIVE EXAMPLE

Consider the design of a simply supported steel beam of length L subjected to a single concentrated load P at midspan. Suppose the beam is to have a uniform rectangular cross-section (width b, depth d). The goal of the designer is to minimize both the mass and the deflection of the beam. In what follows a step-by-step procedure is given that the designer may follow in order to solve the problem. All the preceding solution techniques are considered, together with comments on how they may be used to solve this particular problem.

Step 1. The first step is to create a precise mathematical model of the problem. The design variables are b and d. The MO problem is then given by

min m = ρbdL                                                    (24)

min δ = PL^3 / (48EI)                                           (25)

subject to M_max ≤ M_a;  b ≥ 0;  d ≥ 0                          (26)

where E is the modulus of elasticity, ρ is the density, I is the moment of inertia, M_max is the maximum bending moment, and M_a is the allowable bending moment. The quantities I, M_max, and M_a can all be expressed in terms of the design variables b and d.

Step 2. The designer must decide on a value function for the problem, or choose some other technique for ranking the solutions (criterion vectors) of the problem. For example, the designer should be able to determine whether a solution with a mass of 200 and a deflection of 0.5 (i.e., a criterion vector {200, 0.5}) is better than a solution {250, 0.4}. This is a fundamental concept in MO optimization and is the basis of the Pareto optimality conditions. Note that if the designer knew the exact mathematical form of the value function, it could be used directly as the objective function to obtain the desired solution. However, the value function is usually not known. The designer must therefore devise a way of distinguishing good solutions from bad ones, either from experience or by encoding previous design results in an expert system or neural network to automate the process.

Step 3. In this step a particular algorithm is selected and used for the solution. The procedure for each algorithm is described in the following paragraphs.

The weighting objectives method requires the designer to choose a weight vector. For example, the weight vector {0.6, 0.4} indicates that the mass minimization goal is 50% more important than the deflection minimization goal. This decision is subjective; however, any feasible weight vector will produce a Pareto optimal solution, and the designer must then choose which solution is best. Another limitation of this approach is that the weighted-sum objective function has no physical meaning, so it is difficult for the designer to reason about the solution while using the method interactively.

The hierarchical method requires the designer to rank the objective functions from most important to least important. If the mass of the beam is chosen as the most important criterion, then only this objective is used to obtain an intermediate solution. In the next step the deflection of the beam is taken as the objective and minimized with an additional constraint that depends on how good the mass calculated in the previous step was. The designer must experiment with different values of ε_i and choose the best solution.

The trade-off method also requires the selection of the most important objective. However, the remaining objectives are added as constraints and a single SO optimization problem is solved. If the mass is taken as the most important objective, the deflection is added as one or more constraints that limit the range of values it can attain. Although this method is also arbitrary in the ranking of the objectives and the choice of the ε_i values, it is better than the weighting objectives method when used interactively, because the designer has explicit control over, and a better understanding of, the values of each objective.

The global criterion, distance function, and min-max methods all require the ideal or an approximate criterion vector to be known. The designer can choose an appropriate criterion vector and check whether it is feasible, which is a trial-and-error process. The objective function of these methods has a geometric meaning that may be useful in problems with similar representations. For the present problem, however, the minimization of the distance between two vectors whose coordinates are mass and deflection has no physical meaning. Therefore, this method is not appropriate for interactive use.

The goal programming approach allows the designer to specify the range of values for each objective, which are conceptually thought of as goals. The objective of the GP problem is then the minimization of the weighted sum of the deviations from these goals. This approach provides the designer with the greatest control over the process. As such, it is the most appropriate approach to use in an interactive manner.

Step 4. The designer must choose a final solution (criterion vector) and the corresponding design variables to complete the design. This may require several iterations of the procedures presented in Step 3.
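As an illustration of Step 3, the weighting objectives method applied to the beam model of Eqs. 24-26 can be sketched as follows. All numeric values (load, span, material properties, the b and d search ranges, and the normalization constants m0 and d0) are hypothetical, chosen only to make the sketch runnable:

```python
# Hypothetical data (SI units): steel beam, midspan point load.
P, L = 50e3, 4.0            # load (N), span (m)
E, rho = 200e9, 7850.0      # modulus (Pa), density (kg/m^3)
sigma_a = 150e6             # allowable bending stress (Pa)
M_max = P * L / 4           # maximum bending moment at midspan

def mass(b, d):             # Eq. 24
    return rho * b * d * L

def deflection(b, d):       # Eq. 25, with I = b*d^3/12 for a rectangle
    I = b * d ** 3 / 12
    return P * L ** 3 / (48 * E * I)

def feasible(b, d):         # Eq. 26, taking M_a = sigma_a * (b*d^2/6)
    return b > 0 and d > 0 and M_max <= sigma_a * b * d ** 2 / 6

def weighted_design(w_mass, w_defl, m0=1000.0, d0=0.01):
    """Grid search for the weighted-sum optimum (Eq. 8); m0 and d0
    normalize the objectives so the weights act on comparable magnitudes."""
    best, best_val = None, float("inf")
    for i in range(1, 101):           # b in (0, 0.5] m
        for j in range(1, 101):       # d in (0, 1.0] m
            b, d = 0.005 * i, 0.01 * j
            if not feasible(b, d):
                continue
            val = w_mass * mass(b, d) / m0 + w_defl * deflection(b, d) / d0
            if val < best_val:
                best, best_val = (b, d), val
    return best

b, d = weighted_design(0.6, 0.4)
print(f"b = {b:.3f} m, d = {d:.3f} m, mass = {mass(b, d):.0f} kg, "
      f"deflection = {deflection(b, d) * 1000:.2f} mm")
```

Because both objectives improve with a deeper, narrower section, the search may end at the boundary of the assumed ranges; in an interactive session the designer would refine the ranges, weights, and normalization and repeat, as described in Step 4.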

CONCLUDING REMARKS

Multi-objective optimization provides a structured and ordered approach to solving real-world complex decision-making problems such as those encountered in engineering design. Multi-objective optimization approaches all employ additional decision-maker input that determines the final outcome; as such, they are not automatic. Multi-objective optimization is most effective when used interactively to move from an available feasible solution to better ones.

