STOR601: Research Topic I

Multi-Objective Optimization

Ciara Pike-Burke

1 Introduction

Optimization is a widely used technique in Operational Research that has been employed in a range of applications. The aim is to maximize or minimize a function (e.g. maximizing profit or minimizing environmental impact) subject to a set of constraints. However, in many situations, decision makers find themselves wanting to optimize several different objective functions at the same time. This leads to Multi-Objective Optimization (MOO). It is easy to see that if the multiple objectives do not coincide, this problem becomes considerably more difficult. Many methods have been suggested for MOO; this report will look at some of them.

1.1 Background

In Multi-Objective Optimization, it is often unclear what constitutes an optimal solution. A solution may be optimal for one objective function, but suboptimal for another. Let yi = fi(x) for i = 1, . . . , p denote the p objective functions to be optimized over the feasible set X. Throughout this report, the optimization problem will be assumed to be of minimization type.

A feasible solution x̂ ∈ X is efficient (or, in some literature, Pareto optimal) if there is no other x ∈ X satisfying both fk(x) ≤ fk(x̂) for all k = 1, . . . , p and fi(x) < fi(x̂) for some i ∈ {1, . . . , p}. The solution x̂ is weakly efficient if there is no x ∈ X satisfying fk(x) < fk(x̂) for all k = 1, . . . , p. The image of a (weakly) efficient solution, ŷ = f(x̂), is called a (weakly) non-dominated point. If X is a polytope, then x ∈ X is an extreme point if, for 0 < α < 1 and x1, x2 ∈ X, x = αx1 + (1 − α)x2 implies that x1 = x2 = x. Another way of comparing solutions is to use lexicographic ordering: y1 <lex y2 if y1k < y2k, where k is the first index at which the two vectors differ.

One simple approach to MOO is to minimize a single objective fi subject to the additional constraints fj(x) ≤ εj for all j ≠ i, where εj > 0 represents the ‘worst’ value fj is allowed to take. This method is known as the ε-Constraint Method and is very simple to implement. It has been shown that if the solution to the ε-constraint method is unique then it is efficient (Marler and Arora, 2004). One issue with this approach is that it is necessary to preselect which objective to minimize and the εj values. This is problematic as for many values of ε there will be no feasible solution. Some results for Example 1.2 are shown in Figure 3.
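
As an illustration, the two ε-constraint subproblems reported in Figure 3 can be reproduced with an off-the-shelf LP solver. The sketch below uses scipy.optimize.linprog; the objective and constraint data are inferred from the simplex tableaux in Section 3.1 (f1 = 3x1 + x2 and f2 = −x1 + 4x2 subject to x2 ≤ 6, x1 + 3x2 ≥ 3, 2x1 − x2 ≤ 6, 2x1 + x2 ≤ 10, x ≥ 0) and should be read as an assumption about Example 1.2 rather than its definitive statement.

import numpy as np
from scipy.optimize import linprog

# Bi-objective data assumed to match Example 1.2 (inferred from the
# simplex tableaux in Section 3.1).
c1 = np.array([3.0, 1.0])       # f1 = 3x1 + x2
c2 = np.array([-1.0, 4.0])      # f2 = -x1 + 4x2
A_ub = np.array([[0.0, 1.0],    # x2 <= 6
                 [-1.0, -3.0],  # x1 + 3x2 >= 3 (written as <=)
                 [2.0, -1.0],   # 2x1 - x2 <= 6
                 [2.0, 1.0]])   # 2x1 + x2 <= 10
b_ub = np.array([6.0, -3.0, 6.0, 10.0])

def epsilon_constraint(minimized, bounded, eps):
    """Minimize one objective subject to the other being at most eps."""
    A = np.vstack([A_ub, bounded])            # add f_j(x) <= eps as a row
    b = np.append(b_ub, eps)
    res = linprog(minimized, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
    return res.x, c1 @ res.x, c2 @ res.x

# Panel (a) of Figure 3: minimize f1 with f2 <= 2, giving y* = (23/7, 2).
print(epsilon_constraint(c1, c2, 2.0))
# Panel (b) of Figure 3: minimize f2 with f1 <= 4, giving y* = (4, 11/8).
print(epsilon_constraint(c2, c1, 4.0))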

2.3 The Goal Programming Method

Goal Programming is a method commonly used in mathematical programming when it is not possible to exactly meet some constraints. Charnes and Cooper (1977) present a way of using goal programming in the Multi-Objective setting. Their method is to solve the following LP:

    min_{x, δ+, δ−}   Σ_{i=1}^{p} (δi+ + δi−)
    s.t.              fi(x) + δi+ − δi− = gi     ∀ i = 1, . . . , p
                      Ax ≤ b
                      δi+, δi− ≥ 0               ∀ i = 1, . . . , p
                      x ≥ 0.

This linear program minimizes the deviations of the objective functions from some prespecified goals, gi. One fairly intuitive option is to use the utopia point as the goal for each objective and try to minimize the deviations from this perfect optimum (even if it is not feasible for the problem). In this case, the goal programming method is equivalent to compromise programming (Romero et al., 1998). However, the solution obtained by the goal programming method will not necessarily be an efficient solution (Marler and Arora, 2004). The goal programming method was implemented for Example 1.2 and the result is given in Figure 2.
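
A minimal sketch of this goal programming LP for the same assumed problem data as above, with the goal g taken to be the utopia point (1, −3); the variable ordering, the use of scipy.optimize.linprog, and the goal value itself are choices of this sketch rather than part of Charnes and Cooper's formulation.

import numpy as np
from scipy.optimize import linprog

# Decision vector: (x1, x2, d1+, d1-, d2+, d2-). Problem data as in the
# epsilon-constraint sketch above (assumed form of Example 1.2).
c1 = np.array([3.0, 1.0])
c2 = np.array([-1.0, 4.0])
A = np.array([[0.0, 1.0], [-1.0, -3.0], [2.0, -1.0], [2.0, 1.0]])
b = np.array([6.0, -3.0, 6.0, 10.0])
g = np.array([1.0, -3.0])                     # assumed utopia point as goal

# Objective: minimize the total deviation d1+ + d1- + d2+ + d2-.
cost = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 1.0])

# Goal constraints: f_i(x) + d_i+ - d_i- = g_i.
A_eq = np.array([[ 3.0, 1.0, 1.0, -1.0, 0.0,  0.0],
                 [-1.0, 4.0, 0.0,  0.0, 1.0, -1.0]])
# The original constraints Ax <= b act only on x1 and x2.
A_ub = np.hstack([A, np.zeros((4, 4))])

res = linprog(cost, A_ub=A_ub, b_ub=b, A_eq=A_eq, b_eq=g)
x = res.x[:2]
print(x, c1 @ x, c2 @ x)   # expected: x = (0, 1), y = (1, 4), as in Figure 2(a)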


(a) Optimal solution at y∗ = (1, 4).

(b) Optimal solution at y∗ = (9, −3).

(c) Optimal solution at y∗ = (6.2143, −0.5625).

Figure 2: Three solutions to Example 1.2. (a) was found to be optimal for the weighted sum method with weight w > 0.46¹ and for the goal programming method with goal g = (1, −3) = yI. (b) was found to be optimal for the weighted sum method with weight w ≤ 0.46 and for the bi-objective simplex method. (c) was found to be optimal for the game theoretic approach.

(a) Minimize y1 subject to the additional constraint that y2 ≤ ε2, where ε2 = 2.

(b) Minimize y2 subject to the additional constraint that y1 ≤ ε1, where ε1 = 4.

Figure 3: The ε-constraint method for Example 1.2. The optimal solution in (a) is y∗ = (23/7, 2) and in (b) it is y∗ = (4, 11/8).

3 The Simplex Method

It is possible to extend the simplex method commonly used in single objective optimization to the bi-objective case. In order to do so, the mathematical program must be linear and in standard form (slack/surplus variables may have to be added). The bi-objective simplex method is outlined in Algorithm 1. Note that in the algorithm N represents the set of non-basic columns of A, and e is the vector of ones (so that eᵀz is the sum of the artificial variables). If the initial basis corresponds to an efficient solution then the bi-objective simplex method pivots between efficient solutions. Furthermore, if the LP in phase 2 has an optimal basic solution then this will correspond to an initial efficient solution to the bi-objective problem. Hence, as long as an optimal basic solution is found in phase 2, the method should find efficient solutions (Ehrgott, 2006). An issue with this method is that, due to the definition of the optimality criterion, the order of the objective functions will influence the final solution.

3.1 Example

We return to Example 1.2 and apply the bi-objective simplex algorithm.

Phase I: The initial tableau for the auxiliary LP (with artificial variable z for the ≥ constraint) is:

        x1   x2   s1   s2   s3   s4    z | RHS
 s1      0    1    1    0    0    0    0 |   6
 z       1    3    0   -1    0    0    1 |   3
 s3      2   -1    0    0    1    0    0 |   6
 s4      2    1    0    0    0    1    0 |  10
 eᵀz     1    3    0   -1    0    0    0 |   3

Pivoting x2 into the basis in place of z gives:

        x1   x2   s1    s2   s3   s4     z | RHS
 s1   -1/3    0    1   1/3    0    0  -1/3 |   5
 x2    1/3    1    0  -1/3    0    0   1/3 |   1
 s3    7/3    0    0  -1/3    1    0   1/3 |   7
 s4    5/3    0    0   1/3    0    1  -1/3 |   9
 eᵀz     0    0    0    0     0    0    -1 |   0

Since eᵀz∗ = 0, the problem is feasible and B = {s1, x2, s3, s4} is a feasible basis.

Phase II: c(λ)ᵀx = λ(3x1 + x2) + (1 − λ)(−x1 + 4x2) = (4λ − 1)x1 + (4 − 3λ)x2. An optimal basis for λ = 1 is given by {x2, s1, s3, s4} with optimal solution (x1, x2, s1, s2, s3, s4) = (0, 1, 5, 0, 7, 9) and c∗(λ) = 1.

¹ In the bi-objective case, we can just use one weight, 0 ≤ w ≤ 1, and define the objective wf1(x) + (1 − w)f2(x).



Algorithm 1: Bi-Objective Simplex
Input: A bi-objective LP of the form min{Cx | Ax = b, x ≥ 0}, where C has rows c1 and c2.
Phase I: Solve the auxiliary LP min{eᵀz | Ax + z = b, x ≥ 0, z ≥ 0} to get optimal solution z∗.
    If eᵀz∗ > 0 then stop: there are no feasible solutions.
    Otherwise, define B to be the optimal basis and go to Phase II.
Phase II: Define c(λ) := λc1 + (1 − λ)c2. Solve the LP min{c(λ)ᵀx | Ax = b, x ≥ 0} for λ = 1 using the initial basis B.
Phase III: Let c̄1 and c̄2 denote the reduced cost vectors of the two objectives with respect to the current basis B.
    while I = {i ∈ N | c̄2i < 0, c̄1i ≥ 0} ≠ ∅ do
        λ = max_{i∈I} −c̄2i / (c̄1i − c̄2i), and let s ∈ I be an index attaining this maximum (the entering variable);
        choose r ∈ argmin{ b̄j / Āsj | j ∈ B, Āsj > 0 } (the leaving variable);
        perform a simplex pivot with xs entering and xr leaving the basis.
Return: A sequence of λ values and the corresponding optimal basic feasible solutions.
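
A minimal sketch of the Phase III selection rule as read from Algorithm 1. The reduced-cost values below are taken from the first Phase III tableau of the worked example in Section 3.1 (shown after this sketch); the index bookkeeping and function name are illustrative assumptions.

import numpy as np

def next_parametric_pivot(c1_red, c2_red, nonbasic):
    """Among non-basic indices with negative second reduced cost and
    non-negative first reduced cost, return the critical weight lambda
    and the index attaining it (the entering variable)."""
    I = [i for i in nonbasic if c2_red[i] < 0 and c1_red[i] >= 0]
    if not I:
        return None, None   # current basis stays efficient: stop
    ratios = {i: -c2_red[i] / (c1_red[i] - c2_red[i]) for i in I}
    s = max(ratios, key=ratios.get)
    return ratios[s], s

# Reduced costs at the first Phase III tableau below
# (indices 0..5 correspond to x1, x2, s1, s2, s3, s4).
c1_red = np.array([8/3, 0, 0, 1/3, 0, 0])
c2_red = np.array([-7/3, 0, 0, 4/3, 0, 0])
print(next_parametric_pivot(c1_red, c2_red, nonbasic=[0, 3]))
# prints (0.4666..., 0): lambda = 7/15 and x1 enters the basis.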

Phase III: adding the reduced cost rows c1 and c2 of the two objectives to the Phase II tableau (basis {x2, s1, s3, s4}) gives:

        x1   x2   s1    s2   s3   s4 | RHS
 c1    8/3    0    0   1/3    0    0 |  -1
 c2   -7/3    0    0   4/3    0    0 |  -4
 x2    1/3    1    0  -1/3    0    0 |   1
 s1   -1/3    0    1   1/3    0    0 |   5
 s3    7/3    0    0  -1/3    1    0 |   7
 s4    5/3    0    0   1/3    0    1 |   9

Here I = {1}, λ = 7/15, s = 1 and r = 3, so x1 enters the basis and s3 leaves. Pivoting gives:

        x1   x2   s1    s2    s3   s4 | RHS
 c1      0    0    0   5/7  -8/7    0 |  -9
 c2      0    0    0     1     1    0 |   3
 x2      0    1    0  -2/7  -1/7    0 |   0
 s1      0    0    1   2/7   1/7    0 |   6
 x1      1    0    0  -1/7   3/7    0 |   3
 s4      0    0    0   4/7  -5/7    1 |   4

Now I = ∅, so the algorithm terminates.

The optimal solution from this tableau is given by the negative of the final entries in the c1 and c2 rows, so y1∗ = 9, y2∗ = −3. This solution is shown in Figure 2.
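
The same two efficient extreme points, and the critical weight λ = 7/15 ≈ 0.47 separating them (compare the w ≈ 0.46 threshold quoted in the caption of Figure 2), can be cross-checked by solving the scalarized weighted-sum LP over a grid of weights. The sketch below uses scipy.optimize.linprog with the constraint data inferred from the tableaux above; since Example 1.2 is not restated here, that data should be treated as an assumption.

import numpy as np
from scipy.optimize import linprog

# Problem data inferred from the tableaux above (assumed to be Example 1.2).
c1 = np.array([3.0, 1.0])
c2 = np.array([-1.0, 4.0])
A_ub = np.array([[0.0, 1.0], [-1.0, -3.0], [2.0, -1.0], [2.0, 1.0]])
b_ub = np.array([6.0, -3.0, 6.0, 10.0])

# Solve the scalarized LP min (lam*c1 + (1-lam)*c2)^T x on a grid of weights.
solutions = {}
for lam in np.linspace(0.0, 1.0, 101):
    res = linprog(lam * c1 + (1 - lam) * c2, A_ub=A_ub, b_ub=b_ub)
    y = (round(c1 @ res.x, 4), round(c2 @ res.x, 4))
    solutions.setdefault(y, []).append(round(lam, 2))

for y, lams in solutions.items():
    print(y, "optimal for lambda in", (min(lams), max(lams)))
# Expected output: (9.0, -3.0) for weights below roughly 7/15 and (1.0, 4.0)
# above it, matching the two efficient extreme points found by the simplex method.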

3.2 Multi-Objective Simplex

Ehrgott (2006) also presents a simplex algorithm for the case where p > 2. Even for just one objective, the simplex algorithm may require an exponential number of pivots, and so the same is true of the bi- and multi-objective simplex algorithms. Furthermore, as dimensionality increases, so does the number of efficient extreme points (which must be considered in the algorithm), thus making the problem more computationally difficult.

4 The Game Theoretic Approach

An interesting approach to Multi-Objective Optimization is to think of it as a multi-player co-operative game where each objective function to be minimized is a player in the game. A game is said to be co-operative if the players are able to reach an agreement on strategies. In Multi-Objective Optimization, the players are the objective functions, which are ultimately controlled by the decision maker and so can be expected to reach an ‘agreement’, meaning the game is co-operative. Building on the fundamental text on co-operative games (Nash, 1953), a game theoretic method for Multi-Objective Optimization was proposed by Rao (1987); it is outlined in Algorithm 2.



Algorithm 2: Multi-Objective Optimization using Game Theory
Input: A multi-objective LP of the form min{f1(x), . . . , fp(x) | Ax ≤ b, x ≥ 0}.
Step 1: Normalize the objective functions fi(x) to Fi(x) = mi fi(x) such that F1(x̂) = · · · = Fp(x̂) = M at some feasible point x̂.
Step 2: For i = 1, . . . , p, solve min{Fi(x) | Ax ≤ b, x ≥ 0} to get solution x∗i.
Step 3: For i = 1, . . . , p, set Fwi = max_{1≤j≤p} Fi(x∗j).
Step 4: Set S = Π_{i=1}^{p} [Fwi − Fi(x)] and solve max{S | Ax ≤ b, x ≥ 0}.
Return: An efficient solution.

In Step 1, the objective functions must be normalized as we are multiplying them, so differences in scale could have an effect on the solution. Rao suggests doing this by finding a feasible solution, x̂, and using the equality m1 f1(x̂) = · · · = mp fp(x̂) = M to calculate the mi for some constant M. This way of normalizing the fi depends on the solution x̂, so better methods that do not depend on x̂ could be investigated. In Step 3, we are calculating the worst value that Fi takes over the individual optima, and in Step 4 we then try to find a solution, x, for which each Fi(x) is furthest from its worst value. The game theoretic method was implemented for Example 1.2 and the result is shown in Figure 2. It is interesting to observe that the solution in this case lies between the other two solutions (representing the optima of f1 and f2), suggesting a compromise has been made. The efficiency of the solution obtained by Algorithm 2 is stated in Ghotbi (2013). This method involves optimizing a non-linear function, which is generally more difficult than the linear case; however, if S is concave (so that maximizing it is a convex problem) the problem becomes significantly easier.
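
A sketch of Algorithm 2 for the same assumed problem data as in the earlier sketches. The normalization point x̂ = (2, 2), the constant M = 1, the extra constraints keeping each factor Fwi − Fi(x) non-negative (in the spirit of Nash bargaining), and the use of a local solver (SLSQP) are all assumptions of this sketch, so the compromise point it returns need not coincide with the one reported in Figure 2(c).

import numpy as np
from scipy.optimize import linprog, minimize

# Problem data assumed to be Example 1.2 (inferred from Section 3.1).
c = np.array([[3.0, 1.0],     # f1 = 3x1 + x2
              [-1.0, 4.0]])   # f2 = -x1 + 4x2
A_ub = np.array([[0.0, 1.0], [-1.0, -3.0], [2.0, -1.0], [2.0, 1.0]])
b_ub = np.array([6.0, -3.0, 6.0, 10.0])

# Step 1: normalize so that F1(x_hat) = F2(x_hat) = M at a chosen feasible
# point (x_hat = (2, 2) and M = 1 are assumptions, not values from the report).
x_hat = np.array([2.0, 2.0])
F = c / (c @ x_hat)[:, None]        # F_i = m_i * f_i with m_i = M / f_i(x_hat)

# Step 2: minimize each normalized objective separately.
x_star = [linprog(F[i], A_ub=A_ub, b_ub=b_ub).x for i in range(2)]

# Step 3: worst value of each F_i over the individual optima.
F_w = np.array([max(F[i] @ x_star[j] for j in range(2)) for i in range(2)])

# Step 4: maximize S = prod_i (F_wi - F_i(x)). Each factor is also kept
# non-negative so the product stays meaningful for a local solver.
cons = ([{"type": "ineq", "fun": lambda x, a=a, bi=bi: bi - a @ x}
         for a, bi in zip(A_ub, b_ub)] +
        [{"type": "ineq", "fun": lambda x, i=i: F_w[i] - F[i] @ x} for i in range(2)])
res = minimize(lambda x: -np.prod(F_w - F @ x), x0=np.array([1.0, 1.0]),
               bounds=[(0, None), (0, None)], constraints=cons, method="SLSQP")
print(res.x, c @ res.x)   # a compromise point; its location depends on the normalization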

5 The Two Phase Method for Two Objectives

A subclass of Multi-Objective Optimization problems are Multi-Objective Combinatorial Optimization problems (MOCO). Formally, a MOCO problem can be stated as min{Cx | Ax = b, x ∈ {0, 1}^n} and interpreted as MOO with the additional requirements that all variables are binary and the constraints are linear. A two phase method for solving this type of problem was presented in Ehrgott and Gandibleux (2014). In phase 1, the objective is to find a complete set of extreme efficient solutions. The best way to do this is to find two lexicographically optimal solutions² and then calculate a weight vector, λ, normal to the line connecting them. This weight vector helps define the weighted sum LP min{λᵀCx | Ax = b, x ∈ {0, 1}^n}, the solution to which is used to split the problem into two sub-problems. In each sub-problem, the same technique is applied, and this is repeated until no further non-dominated extreme points are found. Once all the non-dominated extreme points are found, phase 2 aims to find any other efficient solutions. It has been shown that this search can be reduced to the triangles created by connecting adjacent non-dominated extreme points found in phase 1 (see Figure 4(c)). In fact, by using a ranking algorithm to order the feasible solutions in the triangle according to λᵀCx, the search can be stopped when a solution is found that has a worse value of λᵀCx than all the corners of the triangle. By construction, the two phase method finds a set of efficient solutions to the problem. Work has been done on extending the two phase method to situations with more than two objective functions; however, even with just three objective functions, the problem becomes considerably more difficult (Ehrgott and Gandibleux, 2014).

² In a bi-objective problem, the first lexicographically optimal solution is found using the definition in section 1.1. The second can then be found by switching the order of the objective functions.
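
A small sketch of the phase 1 dichotomic search on a toy bi-objective, knapsack-style instance; the data below is illustrative and not taken from the report. The weighted-sum subproblems are solved by brute-force enumeration, and the two lexicographically optimal end points are approximated with a small secondary weight, both of which are simplifications of this sketch.

import itertools
import numpy as np

# Toy instance: minimize Cx over binary x subject to a single constraint a.x <= b.
rng = np.random.default_rng(0)
n = 10
C = rng.integers(-5, 10, size=(2, n)).astype(float)
a = rng.integers(1, 5, size=n).astype(float)
b = 12.0
feasible = [np.array(x, dtype=float)
            for x in itertools.product([0, 1], repeat=n)
            if a @ np.array(x) <= b]

def solve_weighted(lam):
    """Return the objective vector y = Cx of a minimizer of lam . (Cx)."""
    best = min(feasible, key=lambda x: lam @ (C @ x))
    return C @ best

def dichotomic(y_left, y_right, found):
    """Phase 1: recursively search between two non-dominated extreme points."""
    lam = np.array([y_left[1] - y_right[1], y_right[0] - y_left[0]])  # normal to the connecting line
    y_new = solve_weighted(lam)
    if lam @ y_new < lam @ y_left - 1e-9:       # strictly below the connecting line
        found.append(tuple(y_new))
        dichotomic(y_left, y_new, found)
        dichotomic(y_new, y_right, found)

# Approximate lexicographic optima give the two end points of the search.
y1 = solve_weighted(np.array([1.0, 1e-6]))
y2 = solve_weighted(np.array([1e-6, 1.0]))
extreme = [tuple(y1), tuple(y2)]
dichotomic(y1, y2, extreme)
print(sorted(set(extreme)))   # extreme non-dominated points found in phase 1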



(a) Phase 1: a weight vector normal to the line connecting the lexicographically optimal solutions.

(b) Phase 1: no further nondominated extreme points can be found.

(c) Phase 2: the triangles to which the search for efficient solutions is restricted.

Figure 4: The two phase method for bi-objective problems.

6 Conclusion

This report has looked into several methods for solving multi-objective optimization problems. However, there exist many more approaches, details of which can be found in Marler and Arora (2004), as well as many combinations of existing methods. Aside from the two phase method, which is only suitable for MOCO problems, all of the methods described have been applied to Example 1.2. Interestingly, only three different solutions to this problem were obtained, all of which are efficient. Most methods produced solutions that were lexicographically optimal for one of the objective functions; only the game theoretic approach produced a compromise solution, but this came at the cost of solving a non-linear program. Therefore, it would be useful to produce methods for generating compromise solutions that are more computationally efficient. In multi-objective optimization, different methods are often used to generate a set of efficient solutions from which the decision maker can choose. Hence, methods that are able to produce the entire set of efficient solutions (such as the two-phase method for MOCO) are preferable, and more of these methods should be investigated. Each of the methods discussed has advantages and disadvantages, and many of them can be adapted for specific problems. However, there is still no general ‘best’ method that can be used to solve Multi-Objective Optimization problems.

References

Charnes, A. and Cooper, W. W. (1977). Goal programming and multiple objective optimizations: Part 1. European Journal of Operational Research, 1(1):39–54.

Ehrgott, M. (2006). Multicriteria optimization. Springer Science & Business Media.

Ehrgott, M. and Gandibleux, X. (2014). Multi-objective combinatorial optimisation: Concepts, exact algorithms and metaheuristics. In Al-Mezel, S. A. R., Al-Solamy, F. R. M., and Ansari, Q. H., editors, Fixed Point Theory, Variational Analysis, and Optimization, pages 307–341. CRC Press.

Ghotbi, E. (2013). Bi- and Multi Level Game Theoretic Approaches in Mechanical Design. PhD thesis, University of Wisconsin-Milwaukee.

Marler, R. T. and Arora, J. S. (2004). Survey of multi-objective optimization methods for engineering. Structural and Multidisciplinary Optimization, 26(6):369–395.

Marler, R. T. and Arora, J. S. (2010). The weighted sum method for multi-objective optimization: new insights. Structural and Multidisciplinary Optimization, 41(6):853–862.

Nash, J. (1953). Two-person cooperative games. Econometrica: Journal of the Econometric Society, pages 128–140.

Rao, S. (1987). Game theory approach for multiobjective structural optimization. Computers & Structures, 25(1):119–127.

Romero, C., Tamiz, M., and Jones, D. (1998). Goal programming, compromise programming and reference point method formulations: linkages and utility interpretations. Journal of the Operational Research Society, 49(9):986–991.
