Tabu Searching for Robust Solutions

Kenneth Sörensen
University of Antwerp (UA), Faculty of Applied Economic Sciences UFSIA–RUCA
Middelheimlaan 1, B-2020 Antwerp, Belgium
Email: [email protected]
Abstract

In this paper, we investigate how a basic tabu search technique can be adapted so that it finds solutions that (1) have a good solution quality and (2) are more robust than other solutions. We show that there is a need for robust solutions in many practical problems and discuss different types of robustness. Some simple applications are also investigated.
1 Introduction
Tabu search is a metaheuristic that guides a local heuristic search procedure to efficiently explore the search space of a problem. It uses both short-term and long-term memory structures to prevent the search from getting stuck in local optima. For a detailed description of this technique, we refer to [9, 10, 11]. The best-known solutions to many problems have been found by cleverly using a tabu search algorithm. Research on tabu search and metaheuristics in general, however, has neglected the need for robust solutions that exists in many real-life problems. If a solution is very sensitive to small perturbations in the input data, it might not be a good solution at all. Therefore, we develop a technique that modifies a tabu search heuristic so that it searches for solutions that not only are of good quality but are also robust. In this paper, we examine some very simple examples that show the effectiveness of the procedure, leaving extensive tests on real-life examples for future work. Since the principles of the procedure described in this paper are independent of the tabu search technique used—and to some extent even of the local search procedure used—we limit ourselves to very basic tabu searches.
2 Robustness

2.1 Definitions
In many real-life combinatorial optimisation problems, robustness is just as important an issue as optimality. However, this aspect of optimisation has long been neglected by researchers. Robustness refers to the insensitivity of a solution with respect to the input data. We distinguish two types of robustness: solution robustness and quality robustness.

A solution is called quality robust if its solution quality remains high when changes in the input data occur. This type of robustness is important in problems in which a frequent re-optimisation of the problem is infeasible, for example in warehouse location problems. A typical warehouse location problem searches for the ideal placement of warehouses in the plane or on a graph, given a number of customers with their respective demands. Once the location of the warehouses is determined and the warehouses have been built, this cannot be undone without extremely high costs. Therefore, it is imperative that the solution initially found remains as close to optimal as possible under changing circumstances. In general, quality robustness is needed in situations where inflexible decisions have to be based on volatile input data.

In other problems a new solution has to be found when changes in input data occur and, as a result, the original solution does not satisfy the constraints of the new problem. To this end the
problem is re-optimised. In these cases we might require the new solution to be as close to the original one as possible. A solution is called solution robust if the new solution changes only slightly with changes in input data. In vehicle routing, for example, most changes in input data (e.g. a new customer has to be serviced, a certain route is closed, etc.) will require new routes to be calculated. In practice, however, there is a preference for the routes to stay largely the same. Several reasons can be found for this: drivers may get confused if their routes change every other day, errors may be made by the planners, etc. For these and other reasons, vehicle routing becomes more of a tactical decision. It is therefore desirable that a vehicle routing algorithm generates solutions that do not differ too much when changes in input data occur. Contrary to quality robustness, solution robustness is not only a property of the solution, but also of the solution algorithm.

The main difference between quality robustness and solution robustness is that in the former case, it is the quality of the solution that is not allowed to change. In the latter case, it is the solution itself that is not allowed to change. The robust tabu search procedure discussed in this paper is intended to produce solutions that are quality robust.
2.2 Robustness versus optimality
Robustness and optimality are often conflicting objectives in that an increase in one of them will often incur a decrease in the other. In most cases, however, the relationship between the two objectives is rather unclear. Finding a good tradeoff between robustness and optimality is a matter of judgement on the part of the decision maker. Mathematical procedures can help this person to take a responsible decision, but they cannot make the decisions in his or her place.

Models of real-world systems are always subject to incomplete, noisy, and erroneous data. This is a problem for operations researchers who attempt to exploit these models and apply their mathematical solutions in practice, yet most techniques for solving such models assume deterministic data. In general, models are formulated by performing worst-case or mean-value analysis. The latter has been shown to have very large error bounds, whereas the former is known to produce very conservative and sometimes very expensive solutions.

The aspect of robustness has been studied in the past, and some techniques exist to evaluate and improve the robustness of the solution to a problem with regard to changes in input data. Some of these techniques (sensitivity analysis [6, 7], robust optimisation [15], stochastic linear programming [5, 2, 19, 13], surface response techniques [4, 16, 3, 8]) come from the field of operations research. Others (Taguchi methods [12, 14]) find their roots in other sciences (mainly engineering). A recent and closely related way to improve the robustness of solutions found by genetic algorithms is GAs/RS3 (genetic algorithms with a robust solution searching scheme) [18]. This technique finds robust solutions by adding Gaussian noise to the phenotypic parameters (the decoded genotypes). Doing this reduces the height of narrow peaks in the evaluation function and therefore allows the GA to find "wider" peaks, which correspond to more robust solutions.

Taillard's robust taboo search [17] and Battiti and Tecchiolli's reactive tabu search [1] are techniques that dynamically update the length of the tabu list. These procedures can be called "robust" because they consistently find good solutions with fewer parameters having to be set. The robust tabu search method described in the next section is called "robust" because it finds robust solutions.
3 Robust tabu search
Robust tabu search modifies the function evaluation of a tabu search procedure. It effectively replaces the solution evaluation function f(x) with a new one, f_r(x). This new evaluation function can take several different forms, but always follows two basic principles.

• Principle 1: The new evaluation function adds some noise to the current solution before evaluating it. So, the robust tabu search procedure evaluates x* = x + δ instead of x. We will call x* a derived solution. Derived solutions may or may not belong to the original solution space. The noise added depends on the expected noise in the input data and should reflect the expected changes in the input data.

• Principle 2: The new evaluation function does not evaluate a single point, but evaluates several derived solutions and combines these into a single function value. This new function will be called the robust evaluation function f_r(x).
Figure 1: A discrete function
A general form of a robust evaluation function is

f_r(x) = \frac{1}{n} \sum_{i=1}^{n} w_i f(x + \delta_i)    (1)
In equation 1, n is the number of derived solutions that are evaluated. The function value f (x + δi ) of the i-th derived solution x + δi is weighed by a factor wi . Many forms of this function are possible, including some very complex ones. These can be divided in deterministic and stochastic ones, depending on the nature of the noise that is added. We’ll see examples of both cases later. One of the main advantages of robust tabu search is that it only modifies the evaluation function. It does not alter any tabu-search-specific features of the procedure. This makes it extremely easy to implement. To some extent, the design and use of a robust evaluation function is also independent of the local search technique used. The value of n is an indicator of the importance that is attached to robustness, as opposed to solution quality. If n is larger, more derived solutions will be evaluated, resulting in a more robust solution.
4 Applications
In this section, two applications of the robust tabu search technique will be investigated. The applications are of the “educational” type, meaning that they are too simple to be useful in practical cases. Their only aim is to efficiently show the principles of robust tabu search without worrying about implementational details. We hope to convey the message that the basic principles are valid regardless of the specific tabu search procedure used.
4.1 Optimisation of a function of a binary vector
In this section, a simple procedure for maximising or minimising a function f of a single variable x is developed. We represent x as a binary vector of length n and approximate the optimum of the function by optimising f(x) for x ∈ {0, 1}^n. The tabu search procedure moves from a solution x to a solution x′ by a 1-flip move. At each iteration, the bit that is flipped is the one that yields the x′ with the highest function value. The tabu list contains the indices of the bits that were most recently flipped. Although much too simple to be of practical use, this procedure can be used to show the effectiveness of robust tabu search.

As an example, the function in figure 1 is maximised. A solution to this problem can be found by representing the variable x as a binary vector of 4 bits. The proposed tabu search procedure (with a tabu tenure of 2) will quickly find the maximum: x = 5 and f(x) = 5. This solution is optimal but not robust: any deviation of x from the value of 5 will result in an immediate and drastic decline of f(x). To prevent this from happening, the
evaluation function is changed to

f_r(x) = \frac{1}{3} \sum_{i=-1}^{+1} f(x + i)    (2)
The new evaluation function calculates the average of the function value of x and its two neighbouring points x + 1 and x − 1. The result will be a much more robust solution: x = 13. For this solution, the value of f_r(x) is 3, whereas for x = 5, f_r(x) = 5/3.

Depending on the function to optimise, the robust evaluation function f_r(x) will have a different form. We wish to maximise the following function on the interval x ∈ [0, 1]:

f(x) = \begin{cases} e^{-2\left(\frac{x-0.1}{0.8}\right)} \, |\sin(5\pi x)| & 0.4 < x \le 0.6 \\ e^{-2\left(\frac{x-0.1}{0.8}\right)} \, \sin^6(5\pi x) & \text{otherwise} \end{cases}    (3)
Figure 2: A continuous function

This function is depicted in figure 2. The function has five unequal peaks, four of which are very narrow. The global maximum is at x = 0.1. On the other hand, the function has a much more stable maximum at x = 0.486. The proposed tabu search procedure with the robust evaluation function

f_r(x) = \frac{1}{21} \sum_{i=-10}^{10} f\left(x + \frac{i}{100}\right)    (4)
has no trouble finding the broad peak. We can define a stochastic robust evaluation function for this problem as

f_r(x) = \frac{1}{n} \sum_{i=1}^{n} f(x + \delta_i),    (5)
where δ_i is normally distributed noise. Following [18], the standard deviation σ of the normal distribution from which δ_i is drawn can be calculated, given the maximum width that a peak may have to still be considered a sharp peak.
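The following sketch (in Python) illustrates how the basic 1-flip tabu search described above can be combined with a robust evaluation function such as the one of equation 2. The function names, parameter defaults, and integer encoding of the bit vector are ours; the procedure itself is deliberately kept as basic as in the text.

from collections import deque

def tabu_search_binary(evaluate, n_bits=4, tenure=2, iterations=50):
    """Basic 1-flip tabu search over x in {0, 1}^n_bits (encoded as an
    integer), maximising evaluate(x)."""
    x = 0
    best_x, best_val = x, evaluate(x)
    tabu = deque(maxlen=tenure)          # indices of recently flipped bits
    for _ in range(iterations):
        candidates = [(evaluate(x ^ (1 << bit)), x ^ (1 << bit), bit)
                      for bit in range(n_bits) if bit not in tabu]
        if not candidates:
            break
        val, x, bit = max(candidates)    # best admissible 1-flip neighbour
        tabu.append(bit)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

def make_robust(f):
    """Robust evaluation of equation 2: average f over x - 1, x and x + 1."""
    return lambda x: sum(f(x + i) for i in (-1, 0, 1)) / 3.0

# Usage, for some function f defined on the integers 0..15:
#   tabu_search_binary(f)               tends to favour the sharp optimum
#   tabu_search_binary(make_robust(f))  tends to favour the broader peak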
4.2 Knapsack problem
The 0-1 (or binary) knapsack problem can be described as follows. Given are a set of items S = {1, ..., n}, where item i has size s_i and value v_i, and a knapsack with capacity C. The problem is to find the subset S′ ⊆ S which maximises \sum_{i \in S′} v_i subject to the constraint \sum_{i \in S′} s_i \le C. The knapsack problem arises, for example, in resource allocation with financial constraints, such as the unicriterion selection of projects. The knapsack problem
assumes that the size and value of each item are known in advance and with absolute certainty. In practice, however, this is not always true. For example, project gains will always be estimates and budgets may change over time. This implies that the decision taken should be a robust one, meaning that it should remain optimal even when changes occur to the initial conditions of the problem.

A simple tabu search procedure for the knapsack problem is the following. Order all items by decreasing value of the contribution v_i − s_i. Starting from the item with the highest contribution, add as many items as possible to the knapsack until no more items can be added. From there, proceed as follows:

1. Remove the item with the lowest contribution from the knapsack. Put this item on the tabu list. Repeat this step until an item can be added.

2. Add the item with the highest contribution that is not on the tabu list or in the knapsack and can be added without exceeding the knapsack capacity. Put this item on the tabu list. Repeat this step until no more items can be added.

This procedure can be enhanced by allowing for strategic oscillation. This means that we allow the solution to be infeasible for a limited amount of time, i.e. in step 2, items can be added until the knapsack capacity is exceeded by a certain amount or a certain number of items. This surplus in the knapsack then has to be removed in the next iteration of step 1. Many other enhancements are possible, such as frequency-based memory, reactive tabu search schemes, and so on.

To illustrate our robust tabu search procedure, we assume that we have to select a number of projects that each have a certain gain (v_i) and a certain cost (c_i). The gain of each project is assumed to be known in advance. The cost of a project is assumed to be uniformly distributed between a lower bound \underline{c}_i and an upper bound \bar{c}_i, i.e. c_i \sim U(\underline{c}_i, \bar{c}_i). If the total cost exceeds the budget, none of the projects can be executed and the total gain is zero. A robust evaluation function for this problem is
f_r(S') = \begin{cases} \sum_{i \in S'} v_i & \text{if } \forall j \in [1, n]: \sum_{i \in S'} c_{ij} \le C \\ 0 & \text{otherwise} \end{cases}    (6)
In equation 6, the index j (j ∈ [1, n]) represents an evaluation and c_{ij} is the j-th drawing of c_i from its distribution. Note that the robust evaluation function is nonzero only if the total cost stays within the budget in all n evaluations. This strict requirement is imposed to make sure that the solution chosen by the robust evaluation function has a very small probability of exceeding the budget constraint. This probability will be smaller with larger n.

The tabu search procedure has to be slightly modified. Since the contribution is not known a priori in this case (the cost of a project is stochastic), we order the items by average contribution, that is by v_i − (\underline{c}_i + \bar{c}_i)/2. To determine whether the knapsack capacity has been reached, we use the average cost (\underline{c}_i + \bar{c}_i)/2. In the first step, items are added, starting with the one with the highest average contribution, until no more items can be added. The procedure for adding and removing items from the knapsack remains the same, except that the average contribution is now used as the criterion to decide which items to add or remove. The main difference is that the quality of a solution is now evaluated with the robust evaluation function (6).

This procedure has a tendency to select solutions that consist of items whose cost has a small variance. Clearly, these solutions are more robust in that the probability of their exceeding the budget is smaller than that of solutions consisting of items whose cost has a large variance. It also has a tendency not to fill up the knapsack completely, to allow some room for variations in the cost. This tendency increases with rising n. In an example problem with 20 items with \underline{c}_i \sim U(5, 15), \bar{c}_i \sim U(\underline{c}_i, \underline{c}_i + 5), v_i \sim U((\underline{c}_i + \bar{c}_i)/2, (\underline{c}_i + \bar{c}_i)/2 + 5), and C = 100, the best solution found uses 100% of the knapsack capacity when n = 1. This percentage drops to 99% when n = 5, 98% when n = 10, and 95% when n = 20. The lower the percentage of the knapsack used, the higher the robustness of the solution will be. However, solution quality dropped from 148 (n = 1) to 139 (n = 20).
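As an illustration, a minimal sketch (in Python) of the robust evaluation of equation 6 is given below. The function name and the example instance are ours; the instance merely follows the distributions quoted above and is not the one used for the reported figures.

import random

def robust_knapsack_value(selected, values, cost_low, cost_high, capacity, n=10):
    """Robust evaluation of equation 6: the value of the selected projects
    counts only if the budget is respected in every one of the n cost
    scenarios, each cost being drawn uniformly between its bounds."""
    for _ in range(n):
        total_cost = sum(random.uniform(cost_low[i], cost_high[i]) for i in selected)
        if total_cost > capacity:
            return 0.0
    return sum(values[i] for i in selected)

# Hypothetical instance in the spirit of the example above (20 items, C = 100):
random.seed(0)
cost_low  = [random.uniform(5, 15) for _ in range(20)]
cost_high = [lo + random.uniform(0, 5) for lo in cost_low]
values    = [(lo + hi) / 2 + random.uniform(0, 5) for lo, hi in zip(cost_low, cost_high)]
print(robust_knapsack_value(range(8), values, cost_low, cost_high, capacity=100, n=10))

Within the tabu search of this section, this function simply replaces the deterministic evaluation of the knapsack value; a larger n makes the check stricter and tends to leave more slack capacity, as reported above.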
5 Conclusions and future work
By evaluating several derived solutions and combining these evaluations into a robust evaluation function, a tabu search procedure can be altered so that it searches for solutions that are relatively
insensitive to perturbations in the input data. The design of robust evaluation functions and the application of the proposed technique to realistic problems remains a topic for future research.
References

[1] R. Battiti and G. Tecchiolli. The reactive tabu search. ORSA Journal on Computing, 6(2):126–140, 1994.
[2] E.M.L. Beale. On minimizing a convex function subject to linear inequalities. Journal of the Royal Statistical Society, 17:173–184, 1955.
[3] G.E.P. Box and N.R. Draper. Empirical Model-Building and Response Surfaces. John Wiley & Sons, New York, 1987.
[4] G.E.P. Box and K.B. Wilson. On the experimental attainment of optimum conditions. Journal of the Royal Statistical Society, 13:1–45, 1951.
[5] G.B. Dantzig. Linear programming under uncertainty. Management Science, 1:197–206, 1955.
[6] J. Dupačová. Stochastic programming with incomplete information: A survey of results on postoptimization and sensitivity analysis. Optimization, 18:507–532, 1987.
[7] J. Dupačová. Stability and sensitivity analysis for stochastic programming. Annals of Operations Research, 27:115–142, 1990.
[8] A. Giovannitti-Jensen and R.H. Myers. Graphical assessment of the prediction capability of response surface designs. Technometrics, 31(2):159–171, 1989.
[9] F. Glover. Tabu search – part I. ORSA Journal on Computing, 1:190–206, 1989.
[10] F. Glover. Tabu search – part II. ORSA Journal on Computing, 2:4–32, 1990.
[11] F. Glover and M. Laguna. Tabu Search. Kluwer, Boston, 1999.
[12] R.N. Kackar. Off-line quality control, parameter design and the Taguchi method. Journal of Quality Technology, 17(Oct):176–209, 1985.
[13] P. Kall and S.W. Wallace. Stochastic Programming. John Wiley & Sons, New York, 1994.
[14] J. Krottmaier. Optimizing Engineering Designs. McGraw-Hill, London, 1993.
[15] J.M. Mulvey, R.J. Vanderbei, and S.A. Zenios. Robust optimization of large-scale systems. Operations Research, 43(2):264–281, 1995.
[16] R.H. Myers, A.I. Khuri, and W.H. Carter. Response surface methodology: 1966–1988. Technometrics, 31(2):137–157, 1989.
[17] E. Taillard. Robust taboo search for the quadratic assignment problem. Parallel Computing, 17:443–455, 1990.
[18] S. Tsutsui and A. Ghosh. Genetic algorithms with a robust solution searching scheme. IEEE Transactions on Evolutionary Computation, 1(3):201–208, 1997.
[19] R.J.-B. Wets. Stochastic problems with fixed resources: the equivalent deterministic problem. SIAM Review, 16:309–339, 1974.