Genetic Programming Applied to Mixed Integer Programming

Konstantinos Kostikas and Charalambos Fragakis

Department of Computational Mathematics and Computer Programming, School of Mathematics, Physics and Computational Sciences, Faculty of Technology, Aristotle University of Thessaloniki, 54006 Thessaloniki, Greece
{kostikas, fragakis}@gen.auth.gr

Abstract. We present the application of Genetic Programming (GP) in Branch and Bound (B&B) based Mixed Integer Linear Programming (MIP). The hybrid architecture introduced employs GP as a node selection expression generator: a GP run, embedded into the B&B process, exploits the characteristics of the particular MIP problem being solved, evolving a problem-specific node selection method. The evolved method replaces the default one for the rest of the B&B. The hybrid approach outperforms depth-first and breadth-first search, and compares well with the advanced Best Projection method.

1 Introduction

A mixed integer programming (MIP) problem is an optimization problem where some or all of the variables are restricted to take only integer values. General integer and mixed integer programs are NP-hard [1]; even today's state-of-the-art commercial IP solvers have difficulties tackling MIP formulations representing simple engineering or business optimization problems containing more than a few hundred integer variables [2]. The classical and most widely used approach for solving MIP problems is Linear Programming (LP) based Branch and Bound (B&B), which employs LP relaxations of the MIP problem for exploring the solution space [1]. The implementation of a Branch and Bound algorithm can be viewed as a tree search, where the problem at the root node of the tree is the original MIP; new nodes are formed by branching on an existing node for which the optimal solution of the relaxation is fractional. In order for Branch and Bound to be successful in solving interesting problems, it needs to 'guide' the search effectively: that is, it needs to make correct choices regarding branching (which variable to branch on) and node selection (which node to visit next). In this paper we examine the application of Genetic Programming (GP) [3] to node selection. Several methods for node selection have been proposed since the introduction of B&B for MIP [4]. Most of them offer considerable gains in the execution

times of the B&B algorithm. However, as Linderoth and Savelsbergh demonstrate in their study of search strategies for MIP [5], no method outperforms all others across a variety of MIP problems. In particular, Linderoth and Savelsbergh point out that of 13 different node selection methods, 11 ranked anywhere from first to last depending on the particular MIP problem solved. The above indicates that node selection is a problem domain where (i) no general solutions exist, (ii) an approximate solution is acceptable (or is the only result that is ever likely to be obtained), (iii) small improvements in performance are routinely measured and highly prized, and (iv) the interrelationships among the relevant variables are poorly understood. According to J. Koza, the founder of Genetic Programming, in his foreword to [6], these are characteristics providing a good indication for applying Genetic Programming to a problem area. More than that, the 'essence' of a node selection method is an expression, and expressions are naturally represented in Genetic Programming terms. Consequently, we decided to tackle the problem of node selection in the Branch and Bound method for MIP by applying Genetic Programming to it. Our intention was not to develop node selection methods in a static manner, but to evolve them dynamically at runtime and to apply them during the rest of the B&B search. This study introduces our idea, presents a prototype realization of it, and discusses the results obtained. The rest of the paper proceeds as follows: Section 2 briefly outlines related work. Section 3 is an introduction to the Branch and Bound algorithm for MIP. Section 4 illustrates our modified version of Branch and Bound and how GP is incorporated into it. Section 5 discusses the experimental parameters of our study, Sect. 6 presents the obtained results, and Sect. 7 summarizes our conclusions.

2 Related Work

The application of metaheuristics to combinatorial optimization problems is a rapidly growing field of research; a recent, general 'overview and conceptual comparison' is [7]. More specifically, metaheuristic algorithms are increasingly being applied to MIP problems. Most of the time they are used to discover 'good' solutions quickly, concluding the search if optimality is not required, or to aid exact approaches, like Branch and Bound, in reaching optimality. Mitchell and Lee [1] cite several references regarding the application of metaheuristics to MIP. Abramson and Randall [8][9] utilize Simulated Annealing, Tabu Search and other metaheuristics for building a 'general purpose combinatorial optimization problem solver'; [9] is also a good reference for related work. A recent example of a Genetic Algorithm based method for finding a first integer solution to MIP problems is contained in [10]. However, to our knowledge, there exists no previous attempt to utilize metaheuristics or evolutionary computation techniques for evolving node selection methods. We are also not aware of previous uses of Genetic Programming in MIP.

3 Background

3.1 Mixed Integer Programming

A mixed integer program (MIP) is an optimization problem stated mathematically as follows:

$$ \text{maximize} \quad z_{MIP} = \sum_{j \in I} c_j x_j + \sum_{j \in C} c_j x_j \qquad (1) $$

$$ \text{subject to} \quad \sum_{j \in I} a_{ij} x_j + \sum_{j \in C} a_{ij} x_j \le b_i, \quad i = 1, \ldots, m \qquad (2) $$

$$ l_j \le x_j \le u_j \quad j \in N, \qquad x_j \in \mathbb{Z} \quad j \in I, \qquad x_j \in \mathbb{R} \quad j \in C, $$

where I is the set of integer variables, C is the set of continuous variables, and N = I ∪C. The lower and upper bounds lj and uj may take on the values of plus or minus infinity. Thus, a MIP is a linear program (LP) plus some integrality restriction on some or all of the variables [5].

3.2 The Branch and Bound Algorithm

The classical approach to solving integer programs is Branch and Bound [4]. The B&B method is based on the idea of iteratively partitioning the solution space (branching) to form subproblems of the original (mixed) integer program. The process involves keeping a list of linear programming problems obtained by relaxing some or all of the integer requirements on the variables $x_j$, $j \in I$. To precisely define the algorithm, some definitions are needed. We use the term node or subproblem to denote the problem associated with a certain portion of the feasible region of the MIP. Define $z_L$ to be a lower bound on the value of $z_{MIP}$. For a node $N^i$, let $z^i_U$ be an upper bound on the value that $z_{MIP}$ can have in $N^i$. The list $L$ of problems that must still be solved is called the active set. Denote the optimal solution by $x^*$. The algorithm in Table 1, adopted from [5], is a Linear Programming based Branch and Bound algorithm for solving MIP. Branch and Bound is more of a framework than a specific algorithm: among other things, every implementation has to provide specific 'rules' regarding how problems are selected for evaluation in step 2 of the algorithm. Although LP based Branch and Bound, being an exact algorithm, is guaranteed to finish whatever choices are made in step 2, 'good' choices are of paramount importance to how fast 'acceptable', or even 'optimal', solutions are reached.

Table 1. Linear Programming based Branch and Bound algorithm

0. Initialize. $L = \{MIP\}$. $z_L = -\infty$. $x^* = \emptyset$.
1. Terminate? Is $L = \emptyset$? If so, the solution $x^*$ is optimal.
2. Select. Choose and delete a problem $N^i$ from $L$.
3. Evaluate. Solve the LP relaxation of $N^i$. If the problem is infeasible, go to step 1; else let $z^i_{LP}$ be its objective function value and $x^i$ be its solution.
4. Prune. If $z^i_{LP} \le z_L$, go to step 1. If $x^i$ is fractional, go to step 5; else let $z_L = z^i_{LP}$, $x^* = x^i$, and delete from $L$ all problems with $z^j_U \le z_L$. Go to step 1.
5. Divide. Divide the feasible region of $N^i$ into a number of smaller feasible regions $N^{i1}, N^{i2}, \ldots, N^{ik}$ such that $\cup_{j=1}^{k} N^{ij} = N^i$. For each $j = 1, 2, \ldots, k$, let $z^{ij}_U = z^i_{LP}$ and add the problem $N^{ij}$ to $L$. Go to step 1.
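The steps of Table 1 can be sketched in code. The fragment below is a minimal illustration of the framework, not the GLPK-based implementation used in this paper: it solves a tiny 0/1 knapsack problem, using the closed-form greedy solution of the fractional knapsack as the LP relaxation, and all function and variable names are our own.

```python
# Minimal LP-based Branch and Bound sketch for a 0/1 knapsack
# (maximize sum(v*x) s.t. sum(w*x) <= cap, x binary). The greedy
# fractional-knapsack solution stands in for the simplex calls
# a real MIP solver would make.

def lp_relaxation(values, weights, cap, fixed):
    """Greedy fractional solution with some variables fixed to 0/1.
    Returns (upper bound, solution) or None if infeasible."""
    n = len(values)
    x = [0.0] * n
    remaining = cap
    for j, v in fixed.items():
        x[j] = float(v)
        remaining -= weights[j] * v
    if remaining < 0:
        return None                        # infeasible subproblem
    free = sorted((j for j in range(n) if j not in fixed),
                  key=lambda j: values[j] / weights[j], reverse=True)
    z = sum(values[j] * x[j] for j in range(n))
    for j in free:
        take = min(1.0, remaining / weights[j])
        x[j] = take
        z += values[j] * take
        remaining -= weights[j] * take
        if remaining <= 0:
            break
    return z, x

def branch_and_bound(values, weights, cap):
    z_lower, best = float('-inf'), None    # step 0: initialize
    active = [{}]                          # active set L: list of fixings
    while active:                          # step 1: terminate when L is empty
        fixed = active.pop()               # step 2: select (LIFO, i.e. DFS)
        relax = lp_relaxation(values, weights, cap, fixed)  # step 3: evaluate
        if relax is None:
            continue
        z_up, x = relax
        if z_up <= z_lower:                # step 4: prune by bound
            continue
        frac = [j for j, xj in enumerate(x) if 0 < xj < 1]
        if not frac:                       # integral: update incumbent
            z_lower, best = z_up, x
            continue
        j = frac[0]                        # step 5: divide on a fractional var
        active.append({**fixed, j: 0})
        active.append({**fixed, j: 1})
    return z_lower, best

z, x = branch_and_bound([10, 13, 7], [4, 6, 3], 9)
```

Popping the last element of the active set gives the depth-first behaviour discussed in Sect. 3.3; replacing `active.pop()` with a best-bound or estimate-based choice is exactly the node selection decision this paper targets.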

3.3 Node Selection

The goal of a node selection method is twofold: (i) to find good integer feasible solutions, or (ii) to prove that no solution better than the current lower bound $z_L$ exists. Integer solutions tend to be found deep in the search tree; as a result, a depth-first type search (DFS) tends to do well at (i). On the other hand, a breadth-first type of search (BFS), like best-bound (which chooses the subproblem with the largest value of $z^i_U$), performs better at (ii) [1]. Estimate-based node selection methods attempt to estimate the value of the best feasible integer solution obtainable from a given node $N^i$ of the B&B tree, and then choose the node with the highest (or, in case of minimization, lowest) such estimate $E^i$. There exist many formulae for computing the estimate, and most of them are based around the sum of integer infeasibilities

$$ s^i \equiv \sum_{j \in I} \min\left( x^i_j - \lfloor x^i_j \rfloor,\; 1 - (x^i_j - \lfloor x^i_j \rfloor) \right) $$

of the relaxation solution $z^i_U$ at node $N^i$ of the tree. A popular estimation method, introduced by Mitra [11], is Best Projection:

$$ E^i = z^i_U + \left( \frac{z_L - z^0_U}{s^0} \right) s^i , \qquad (3) $$

where $z^i_U$ is the relaxation solution at node $N^i$, $z_L$ is the best MIP solution obtained in the search so far, $z^0_U$ is the relaxation solution of the original MIP problem (root node), $s^0$ is the sum of integer infeasibilities at the root node and $s^i$ is the sum of integer infeasibilities at node $N^i$.
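As an illustration, the sum of integer infeasibilities and the Best Projection estimate (3) are straightforward to compute from a relaxation solution. The sketch below uses our own function names and is not tied to any particular solver.

```python
import math

def integer_infeasibility(x_relaxed, integer_vars):
    """Sum of integer infeasibilities of a relaxation solution: each
    integer variable contributes its distance to the nearest integer."""
    total = 0.0
    for j in integer_vars:
        frac = x_relaxed[j] - math.floor(x_relaxed[j])
        total += min(frac, 1.0 - frac)
    return total

def best_projection(z_u_i, s_i, z_l, z_u_0, s_0):
    """Best Projection estimate E^i = z_U^i + ((z_L - z_U^0) / s^0) * s^i."""
    return z_u_i + (z_l - z_u_0) / s_0 * s_i

# Example: a relaxation solution with two integer variables at 0.5 and 0.25
# has integer infeasibility 0.5 + 0.25 = 0.75.
s = integer_infeasibility({0: 0.5, 1: 0.25}, [0, 1])
```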

Backtracking methods are based on the idea of combining the advantages of (i) and (ii): they try to go as deep as possible or needed, and then backtrack using, for example, best bound or best projection as the node selection method. Most commercial MIP solvers utilize such backtracking strategies in their B&B implementations in order to speed up the search [12][13]. Mitchell and Lee [1], and particularly Linderoth and Savelsbergh [5], contain thorough presentations and discussions of node selection methods.

4 Evolving Problem Specific Node Selection Strategies

Node selection methods need to be robust: they need to perform well on a wide variety of MIP problems [5]. If the 'generality' requirement is dropped, much better 'customized' node selection rules could perhaps be devised, which would perform optimally for the specific MIP problem at hand. Manually building special-purpose selection rules for each data set obviously makes no practical or economical sense; however, GP offers the means for doing exactly that: by embedding a Genetic Programming run in the B&B algorithm, customized node selection methods can be evolved, which subsequently replace the initial node selection method. In other words, we are using Genetic Programming to evolve an expression similar to (3) for use on our MIP problem. Our approach consists of three distinct stages:

B&B Stage1 begins with the start of the search and lasts until a criterion is met, e.g. until a specified number of MIP solutions have been found, or until a specified number of nodes has been visited. Stage1 is ordinary B&B search[1], employing a standard node selection method.

GP Stage is where the GP search takes place: initially the training set is constructed (Sect. 5.3), and after that the GP run is performed. At the end of the run, the best evolved expression, that is, the fittest individual, is selected and replaces the node selection method used in Stage1.

B&B Stage2 resumes the execution of B&B, which was paused in the previous stage, and continues the search by utilizing the GP-evolved expression for node selection.

Further GP Stages. In our prototype implementation, only one GP Stage is used during the life-cycle of the B&B search; in other words, the loop starting with B&B Stage1 and finishing with B&B Stage2 is executed once. This need not necessarily be the case: a design where the node selection method is constantly refined as a result of multiple GP Stages also seems attractive.
More than that, since both B&B and GP are highly parallelisable algorithms, the hybrid architecture is also amenable to parallelisation.
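The three-stage life-cycle described above can be summarized in Python-flavoured pseudocode. The stage criteria follow Table 3, but the class and function names are our own illustration of the architecture, not the authors' GLPK code, and the fragment is not runnable as-is.

```python
def hybrid_branch_and_bound(problem):
    """Sketch: plain B&B, then a GP run that evolves a node selection
    expression, then B&B driven by the evolved expression."""
    search = BranchAndBound(problem, node_selection=best_bound)

    # B&B Stage1: ordinary search, recording per-node data for training.
    while not search.finished:
        search.step(record_training_data=True)
        if search.nodes_visited >= 1000 or search.mip_solutions_found >= 10:
            break

    # GP Stage: build the training set from Stage1 and evolve an expression.
    if not search.finished:
        training_set = build_training_set(search.recorded_dives)
        best = run_gp(training_set, generations=50, pop_size=1200)
        search.node_selection = best.as_selection_function()

    # B&B Stage2: resume the search with the evolved method.
    while not search.finished:
        search.step()
    return search.incumbent
```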

4.1 GP Building Blocks for Node Selection

The building blocks we used for node selection method construction, that is, our terminal set, are listed in Table 2. It contains the primitives required for forming node selection expressions. All terminals are readily available from the runtime data structures of the MIP solver we used (GLPK, Sect. 5.1). The terminal set is quite simple from an LP based B&B perspective; more complicated primitives, e.g. pseudocosts, found in methods like best estimate [5], would probably enhance the capabilities of the GP search. Such primitives were not used, however, because they were not readily available in our solver. Besides, at this early point of the research we were primarily interested in testing the soundness of our approach rather than fine-tuning the performance of our prototype implementation, and employing extravagant terminals would be of no use to the former[2].

[1] Actually, some extra data required in the GP Stage needs to be maintained for each node.

Table 2. GP Terminals for node selection method construction. Constant Terminals involve general characteristics of the MIP problem and information regarding the solution of the LP relaxation of the problem. Dynamic Terminals concern the specific B&B node being evaluated.

Property        Description
Constant Terminals
intvar count    Number of integer variables of the problem (including binary variables)
binvar count    Number of binary variables of the problem
totvar count    Total number of variables of the problem (integer and continuous)
ii sumN0        Sum of integer infeasibilities at the root node of the Branch and Bound tree (i.e. of the linear relaxation solution of the MIP problem)
ii countN0      Count of integer infeasibilities at the root node of the Branch and Bound tree (i.e. of the linear relaxation solution of the MIP problem)
lp solN0        Value of the solution of the linear programming relaxation of the MIP problem
bonly           TRUE if the problem contains only binary integer variables, FALSE otherwise
Dynamic Terminals
ii sumNi        Sum of integer infeasibilities at the parent problem of node Ni
ii countNi      Count of integer infeasibilities at the parent problem of node Ni
lp solNi        Value of the linear programming relaxation solution of the parent problem of node Ni
tree depth Ni   Depth of the tree at node Ni
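To make the terminal set concrete, an evolved node selection expression can be viewed as a small function over the Table 2 terminals. The following sketch is our own illustration, independent of lilgp; the expression itself is a made-up example of the kind of individual GP might evolve, not one taken from the paper.

```python
# Evaluate a toy evolved node-selection expression over Table 2 terminals.

def protected_div(a, b):
    """PDIV: protected division, a GP staple that avoids division by zero."""
    return a / b if b != 0 else 1.0

def score_node(node, problem):
    """Toy individual: negated infeasibility relative to the root
    (ii_sumNi PDIV ii_sumN0) minus a small depth penalty. Higher is better."""
    rel_infeas = protected_div(node["ii_sumNi"], problem["ii_sumN0"])
    return -rel_infeas - 0.01 * node["tree_depth_Ni"]

problem = {"ii_sumN0": 6.0}                  # root-node infeasibility
nodes = [
    {"id": "A", "ii_sumNi": 1.5, "tree_depth_Ni": 8},
    {"id": "B", "ii_sumNi": 4.5, "tree_depth_Ni": 2},
]
# The Select step of B&B would pick the node with the highest score:
chosen = max(nodes, key=lambda n: score_node(n, problem))
```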

5 Experimental Parameters

5.1 Infrastructure

The MIP solver we based our experimentation on is GLPK (GNU Linear Programming Kit) [14]. GLPK doesn't provide the plethora of options of commercial software like CPLEX [12], but it contains a solid implementation of the simplex method and, most importantly, comes with full source code. GLPK adopts a backtracking method for node selection: it goes depth-first as much as possible, and then backtracks by selecting a node using DFS[3], BFS[4], or Best Projection (see Sect. 3.3). The necessary hooks were placed into GLPK in order to collect data during B&B Stage1, to perform the GP run, and to replace the default backtracking method with the evolved one. For Genetic Programming we used strongly typed lilgp [15][16], which was integrated with the GLPK infrastructure.

[2] Even expression (3) cannot be formed using the available terminals, because $z_L$ is not in the terminal set.

5.2 GP Run Parameters

The GP parameters we used in our tests are listed in Table 3. The GP Stage evolves 1200 individuals for 50 generations. All individuals constituting the first generation are randomly created, and each subsequent generation is formed from individuals resulting from crossover operations (with 80% probability), mutation (with 10% probability), and reproduction (with 10% probability). Tournament selection with size seven is used. The maximum depth of each individual is restricted to 10. Finally, the standard arithmetic, boolean, logical and comparison primitives listed in Table 3 are used.

Table 3. Control parameters. The last row contains the termination criteria for B&B Stage1, that is, how many B&B nodes need to be visited before the GP Stage begins. This mainly affects the construction of the training set in the GP Stage, see Sect. 5.3

Objective:                  evolve an LP based Branch and Bound node selection criterion specialized for the MIP problem instance
Function set:               ADD SUB MUL PDIV AND OR NOT IFTRUE IFGTE IFLTE IFEQ
Terminal set:               B&B and LP related runtime data, see Table 2
Fitness cases:              dynamically created based on data obtained in B&B Stage1
Fitness function:           standardized fitness, based on mean error over the fitness cases
Population size:            1200
Initial population:         initialization method: grow, initial depth: 5-7
Crossover probability:      80 percent
Mutation probability:       10 percent
Selection:                  tournament, size 7
Termination:                generation 50
Maximum depth of tree:      10
Parameters for B&B Stage1:  MinNodes 100, MaxNodes 1000, MaxMIPsolfound 10, node selection method best bound

[3] DFS selects a node using LIFO: this results in B&B spending most of its time evaluating nodes found at the bottom of the tree; such a strategy is imbalanced.
[4] BFS selects a node using FIFO: this resembles best bound, mentioned in Sect. 3.3.

5.3 Training Sets

Evolving a problem-specific node selection method using Genetic Programming means that a problem-specific training set is required as well. The training set has to make use of data generated during B&B Stage1, in order to allow GP to evolve a node selection method which will successfully 'guide' the B&B search in Stage2. Since our work is novel, there was little help from the MIP bibliography regarding what such a training set could be; consequently, we experimented with two different schemes of our own. The first scheme is based on the observation that B&B with backtracking capabilities (see Sect. 3.3), which is the case in GLPK and most commercial MIP solvers (for example CPLEX [12], FortMP [13]), can be seen as a sequence of 'dives' from some node in the B&B tree to some node further 'down'. Each dive comprises from one up to a maximum of tree-depth nodes. By rating the effectiveness of each dive in finding a MIP solution, the dives taking place in B&B Stage1 can be used for the construction of the training set: such a rating $R_i$ for each dive $d_i$ is obtained through (4), corresponding to training set T1:

$$ R_i = \left( \frac{s_{i_{start}} - s_{i_{finish}}}{d_{i_{finish}} - d_{i_{start}}} \right) V_{bonus} \qquad (4) $$

where $s_{i_{start}}$ is the sum of integer infeasibilities at the node where the dive starts, $s_{i_{finish}}$ at the node where it ends, $d_{i_{finish}}$ and $d_{i_{start}}$ are the depths of the B&B tree at the finishing and starting node respectively, and $V_{bonus}$ is a bonus factor applied if the dive results in a fully integer feasible MIP solution. Our rationale was that fast reduction of integer infeasibility denotes a promising dive; in addition, dives resulting in fully integer feasible solutions deserve to be rewarded with the $V_{bonus}$ factor[5]. We also applied formula (5) for constructing the training set T2. T2 differs from T1 in that it does not reward dives resulting in fully integer feasible MIP solutions:

$$ R_i = \frac{s_{i_{start}} - s_{i_{finish}}}{d_{i_{finish}} - d_{i_{start}}} \qquad (5) $$

An effect of utilizing the above scheme for constructing the training set is that the number of fitness cases is problem dependent. The second scheme is much simpler: the training set (T3) is made up of the MIP solutions found in B&B Stage1.
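The dive ratings (4) and (5) reduce to a few lines of code. In the sketch below the $V_{bonus}$ values follow footnote 5; the function and parameter names are our own.

```python
def dive_rating(s_start, s_finish, d_start, d_finish,
                found_mip_solution=False, beats_lower_bound=False):
    """Rating R_i of a B&B 'dive' for training-set construction.
    Without a bonus this is formula (5); with found_mip_solution=True
    the V_bonus factor of formula (4) is applied (3.0 for integer
    feasible solutions worse than the lower bound, 5.0 for better ones)."""
    rating = (s_start - s_finish) / (d_finish - d_start)
    if found_mip_solution:           # training set T1 rewards successful dives
        rating *= 5.0 if beats_lower_bound else 3.0
    return rating

# A dive from depth 2 to depth 6, reducing integer infeasibility from
# 4.8 to 0.0 and ending in a MIP solution better than the lower bound:
r = dive_rating(4.8, 0.0, 2, 6, found_mip_solution=True, beats_lower_bound=True)
```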

6 Results

[5] For all the experiments we used $V_{bonus} = 3.0$ for integer feasible solutions worse than the lower bound and $V_{bonus} = 5.0$ for solutions better than the lower bound.

For our experimentation we used the problems of the standard MIPLIB3 library [17] listed in Table 4. MIPLIB3 is probably the most widely used IP and MIP

Table 4. MIPLIB3 benchmarks used. The set contains MIP and pure 0/1 integer problems

Problem   Rows  Cols  Int vars  0/1 vars  Cont vars  Description
bell3a    123   133   71        39        62         MIP
bell5     91    104   58        30        46         MIP
blend2    274   353   264       231       89         MIP
khb05250  101   1350  24        24        1326       MIP
l152lav   97    1989  1989      1989      0          0/1 IP
lseu      28    89    89        89        0          0/1 IP
misc07    212   260   259       259       1          MIP
mod008    6     319   319       319       0          0/1 IP
qnet1     503   1541  1417      1288      124        MIP
qnet1o    456   1541  1417      1288      124        MIP
rentacar  6803  9557  55        55        9502       MIP
stein45   331   45    45        45        0          0/1 IP

benchmarking suite in the literature [2][5][10]. The criterion for choosing these 12 problems, out of the 59 contained in the library, was the capability of 'vanilla' GLPK to find the optimal solution for each problem (and prove its optimality) between the minimum and maximum time limits, which were set to 10 seconds and 20 minutes of CPU time respectively. The rest of the problems were excluded because they were too 'easy', or because they required excessive computational resources. All tests were performed on the same hardware/software system (Pentium 4 at 2.4GHz with 768Mb RAM running Linux 2.4). Our goal was to compare the performance of 'standard' GLPK with our GP-enhanced system. In particular, we wanted to find out whether the hybrid system, utilizing each of the T1, T2 and T3 training methods, would be able to compete with GLPK using the simple DFS and BFS node selection methods, or the advanced Best Projection method. Total execution times are presented in Table 5; Table 6 shows the times for reaching the optimal solution. Time and ranking averages are included in the tables as well. It can be seen from Tables 5 and 6 that no selection method dominates the others. This is typical for node selection methods and in accordance with [5]. Two winners stand out, however: Best Projection and GP1. Best Projection 'solves' each problem in 53.10 seconds on average, making it the fastest method, while GP1 is second best with 94.30 seconds. However, as the average ranking of each method demonstrates, GP1 is a more 'solid' performer (average rank 2.82) than Best Projection (3.00). As far as the time it takes to reach optimality (but not prove it) is concerned, Best Projection is clearly superior to the other methods; GP1 comes second. This means, however, that once optimality is reached, GP1 is on average faster at proving that the solution is optimal than Best Projection is.
Concerning how the employed training methods perform with respect to the design criteria mentioned in Sect. 5.3, it appears that our rationale 'made sense':

that is, GP1 takes advantage of the 'bonus factor' applied to the test cases where a MIP solution is found, and is able to compete head to head with the Best Projection method. GP2, having no such 'boost', performs averagely. GP3, employing a simplistic design for the training method, shows mediocre performance. Finally, it deserves mention that if the execution times depicted in Tables 5 and 6 were normalized by deducting the runtime overhead introduced by Genetic Programming in cases GP1-GP3[6], the obtained results would be even more in favor of the GP-enhanced solver. For the purposes of our study, such normalization was not deemed necessary.

Table 5. Execution times in seconds for solving the MIP and IP problems. Bestp, BFS and DFS are standard node selection methods. GP1, GP2 and GP3 are methods dynamically evolved by GP using training methods T1, T2 and T3 accordingly. 'NS' means no optimal solution was found in the available time limit; 'A' means that the solver aborted in B&B Stage1 because no MIP solution, required for building the training set, was found. Results are the average of 5 runs.

Problem      Bestp   BFS     DFS     GP1     GP2     GP3
bell3a       71.69   39.04   21.79   41.79   127.51  23.59
bell5        22.73   577.76  754.81  494.15  512.39  A
blend2       14.94   57.71   NS      26.71   53.99   30.07
khb05250     10.00   13.80   11.94   18.00   16.94   31.17
l152lav      30.04   46.20   33.41   29.81   31.99   84.46
lseu         50.97   29.36   15.60   93.57   93.76   16.89
misc07       82.19   73.88   70.19   69.65   69.68   76.35
mod008       41.34   30.06   87.26   26.21   26.65   26.30
qnet1        72.21   134.00  NS      100.42  245.61  745.85
qnet1o       15.16   71.83   NS      110.28  NS      59.66
rentacar     19.62   246.11  100.05  22.63   23.28   37.85
stein45      206.32  147.62  101.64  98.34   94.06   305.11
Time avg     53.10   122.28  399.72  94.30   207.99  219.78
Ranking avg  3.00    4.27    4.27    2.82    4.00    4.45

7 Conclusions and Further Research

Table 6. Times in seconds to obtain the best integer solution for the different node selection methods

Problem      Bestp  BFS     DFS     GP1     GP2     GP3
bell3a       0.02   0.02    0.02    0.02    0.04    0.02
bell5        16.37  564.76  751.50  474.50  486.21  A
blend2       12.79  54.00   924.93  24.50   49.49   27.90
khb05250     0.68   11.77   5.18    16.57   15.94   31.04
l152lav      8.46   41.10   8.46    8.23    9.35    82.32
lseu         0.90   5.47    8.17    50.31   50.54   8.57
misc07       7.05   0.52    3.45    1.99    1.62    3.61
mod008       0.76   8.07    65.78   3.03    3.09    3.04
qnet1        29.85  130.85  600.06  52.19   245.11  743.85
qnet1o       10.81  68.37   16.37   110.27  496.03  50.32
rentacar     18.48  190.58  82.76   22.44   23.09   37.66
stein45      5.89   0.77    12.41   1.26    1.12    1.51
Time avg     9.34   89.69   206.59  63.77   115.13  182.49
Ranking avg  2.00   3.64    4.36    3.00    4.00    4.64

[6] The runtime overhead introduced is about one second on average for the GP Stage, plus minor overheads resulting from data collection operations in B&B Stage1 and from the continuous evaluation of the usually complex node selection expression evolved in B&B Stage2.

We used GP as a component in a hybrid MIP Branch and Bound framework, where GP is utilized for generating the node selection method. Although no special GP structures or techniques (e.g. ADFs) were employed, and despite the application of an untested approach for creating problem-specific training sets, GP was able to extract information from the solution space traversed, coming up with effective node selection expressions. We believe that the experimental results obtained by our prototype implementation show that the hybrid B&B-GP architecture we introduce has significant potential. Our future research efforts will be directed at two fronts: the first will be to incorporate multiple GP Stages in our design, and to experiment with more elaborate GP structures and techniques; the second will be to increase our understanding of how the training method adopted in the GP Stage affects the search in B&B Stage2 for different classes of MIP problems.

Acknowledgments. We would like to thank George Zioutas for the motivating discussions and his aid with the MIP bibliography during the early stages of this research. We would also like to thank Panagiotis Kanellis and Leonidas Pitsoulis for their suggestions and comments.

References

1. Mitchell, J., Lee, E.K.: Branch-and-bound methods for integer programming. In: Floudas, C.A., Pardalos, P.M. (eds.): Encyclopedia of Optimization. Kluwer Academic Publishers (2001)
2. Bixby, R., Fenelon, M., Gu, Z., Rothberg, E., Wunderling, R.: MIP: Theory and practice – closing the gap. In: System Modelling and Optimization: Methods, Theory, and Applications. Volume 174 of IFIP International Federation for Information Processing. Kluwer Academic Publishers, Boston (2000) 19–49
3. Koza, J.R.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge, MA, USA (1992)
4. Land, A.H., Doig, A.G.: An automatic method for solving discrete programming problems. Econometrica 28 (1960) 497–520
5. Linderoth, J., Savelsbergh, M.: A computational study of search strategies for mixed integer programming. INFORMS Journal on Computing 11 (1999) 173–187
6. Banzhaf, W., Nordin, P., Keller, R.E., Francone, F.D.: Genetic Programming – An Introduction: On the Automatic Evolution of Computer Programs and its Applications. Morgan Kaufmann, dpunkt.verlag (1998)
7. Blum, C., Roli, A.: Metaheuristics in combinatorial optimization: Overview and conceptual comparison. ACM Computing Surveys 35 (2003) 268–308
8. Abramson, D., Randall, M.: A simulated annealing code for general integer linear programs. Annals of Operations Research 86 (1999) 3–24
9. Randall, M., Abramson, D.: A general metaheuristic based solver for combinatorial optimisation problems. Computational Optimization and Applications 20 (2001)
10. Nieminen, K., Ruuth, S.: Genetic algorithm for finding a good first integer solution for MILP. Technical report, Department of Computing, Imperial College, London (2003)
11. Mitra, G.: Investigation of some branch and bound strategies for the solution of mixed integer linear programs. Mathematical Programming 4 (1973) 155–173
12. CPLEX: www.ilog.com/products/cplex
13. Numerical Algorithms Group Limited and Brunel University: FortMP Manual. 3rd edn. (1999)
14. GLPK: www.gnu.org/software/glpk
15. Zongker, D., Punch, B.: lilgp 1.01 user's manual. Technical report, Michigan State University, USA (1996)
16. Luke, S.: Strongly typed lilgp, www.cs.umd.edu/users/seanl/gp/patched-gp/
17. Bixby, R.E., Ceria, S., McZeal, C.M., Savelsbergh, M.W.P.: An updated mixed integer programming library: MIPLIB 3.0. Optima 58 (1998) 12–15