A Partheno-Genetic Algorithm for Dynamic 0-1 Multidimensional Knapsack Problem

Ali Nadi Ünal∗
Aeronautics and Space Technologies Institute, Industrial Engineering Department, Yeşilyurt/İstanbul
[email protected]
Gülgün Kayakutlu
İstanbul Technical University, Industrial Engineering Department, Maçka/İstanbul
[email protected]

Abstract

The Multidimensional Knapsack Problem (MKP) is a well-known, NP-hard combinatorial optimization problem. Several metaheuristics and exact algorithms have been proposed to solve the stationary MKP. This study aims to solve this difficult problem under dynamic conditions, testing a new evolutionary algorithm. In the present study, the Partheno-Genetic Algorithm (PGA) is tested with parameters evolving in time. The originality of the study is based on comparing the performances in static and dynamic conditions. First, the effectiveness of the PGA is tested on both the stationary and the dynamic MKP. Then, the improvements obtained with different random restarting schemes are observed. The PGA's achievements are shown through statistical and graphical analyses.
Keywords— Combinatorial Optimization, Dynamic Environments, Multidimensional Knapsack Problem, Partheno-Genetic Algorithm
Introduction
Hard optimization problems are roughly defined as “problems that cannot be solved to optimality, or to any guaranteed bound, by any exact (deterministic) method within a ‘reasonable’ time limit” [10]. The Multidimensional Knapsack Problem (MKP) is a well-known, hard combinatorial optimization problem; it is accepted as NP-hard [3, 7, 14, 41]. Combinatorial optimization

∗ Corresponding author.
problems involve finding optimal solutions from a discrete set of feasible solutions [30]. The Evolutionary Algorithm (EA), a sub-group of metaheuristics, is one of the most preferred methods for solving hard optimization problems [9, 30]. EA is a general term for a family of nature-inspired optimization algorithms such as Genetic Algorithms (GAs), Evolution Strategies, Evolutionary Programming and Genetic Programming. GAs have been widely used in different forms to solve stationary combinatorial optimization problems [4, 7, 13, 14, 31]. However, many real-life problems are actually dynamic; for this reason a solution approach is expected to be adaptive to environmental changes [6]. In a changing environment, the problem-specific fitness evaluation function and the constraints of the problem, such as design variables and environmental conditions, may change over time [45]. Evolutionary optimization in dynamic environments has become one of the most active research areas in the field of evolutionary computation [20, 28], and investigating the performance of GAs in dynamic environments has attracted growing interest from the GA community [45, 46]. Compared with the static versions, there are far fewer reported publications about dynamic multiple knapsack problems [6]. This is one of the main motivations of the present study. Secondly, to the best of our knowledge, this is the first study that investigates the PGA for the multiple knapsack problem in dynamic environments. The paper is organized as follows: in Section 1 we give a brief explanation of the Multidimensional Knapsack Problem and the related work. The standard genetic algorithm and the PGA are described in Section 2. The experimental setup for static and dynamic environments is explained in Section 3. We present the results of the experiments in Section 4. Finally, some concluding remarks and future recommendations are given in Section 5.
1 Problem Definition and Related Work
In this section we outline the Multidimensional Knapsack Problem in a dynamic environment and give the related work on GAs in dynamic environments.
1.1 Definition of Multidimensional Knapsack Problem
Before introducing the dynamic version of the problem, it may be helpful to review the static one. The MKP is a widely studied combinatorial optimization problem. Both theoreticians and practitioners are interested in the problem; theoreticians think that the simply structured MKPs can be used as subproblems to solve more complicated ones, and practitioners think that these problems can model many industrial applications such as cutting stock, cargo loading and capital budgeting [23]. There are some other names given to this problem in the literature, such as the multi-constraint knapsack problem, the multi-knapsack problem, the multiple knapsack problem, the m-dimensional knapsack problem, and some authors also include the term zero-one in their name for the problem [14]. Basically, the MKP can be formulated as follows [29]:

    maximize    f = Σ_{j=1}^{n} p_j x_j                              (1)

    subject to  Σ_{j=1}^{n} r_{i,j} x_j ≤ b_i,   i = 1, …, m,        (2)

    with        x_j ∈ {0, 1},   j = 1, …, n,                         (3)

                p_j > 0,   r_{i,j} ≥ 0,   b_i > 0.                   (4)

In this formulation, the MKP consists of m knapsacks of capacities b_1, b_2, …, b_m and n objects, each of which has a profit p_j. If the jth object is included in the solution, then the decision variable x_j equals 1; otherwise it equals 0. Unlike the simple version of the knapsack problem, in which the weight (resource consumption) of an object is a single fixed value, the weight of the jth object takes m values [23]. In fact, any 0/1 integer problem with non-negative coefficients can be formulated as an MKP [24]. Several effective algorithms (i.e., exact algorithms, and heuristic or metaheuristic algorithms) have been developed to tackle this stationary type of problem [11, 17, 18, 22, 40]. For a good review of these algorithms, we refer the reader to [39].

1.2 Dynamic MKP
According to Simoes [34] “When the environment changes over time, resulting in modifications of the fitness function from one cycle to another,
we say that we are in the presence of a dynamic environment”. In other words, in a dynamic optimization problem the objective function, the problem instance, or the constraints may change over time, and thus the optimum of that problem might change as well [20]. The literature includes benchmark generators that translate well-known static problems into dynamic versions (refer to [6]). In [34], on the 0/1 dynamic knapsack problem, three types of changes in the capacity of the knapsack were used: periodic changes between two values, periodic changes between three values, and non-periodic changes between 3 different capacities. In each of the periodic experiments they started with a total capacity C1, and after half a cycle the constraint was switched to C2. When using three values, after a complete cycle the capacity was changed to a third value C3. Cycle lengths were 30, 100, 200 and 300 generations. The term “cycle length” has the same meaning as “simulation time units” [6]. A smaller number of simulation time units yields more frequent changes, and vice versa. In [6], a series of 1000 iterations was adopted as the frequency of changes. A benchmark problem that includes 100 items and 10 dimensions was used as the basis environment. After a change occurs, as in [12], the parameters are updated as stated below [6]:

    p_j = p_j · (1 + N(0, σ_p))
    r_{i,j} = r_{i,j} · (1 + N(0, σ_r))                              (5)
    b_i = b_i · (1 + N(0, σ_b))
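The update in (5) can be sketched in a few lines of Python; the function name and the toy profit values below are our illustrative choices, not part of the cited studies.

```python
import random

def perturb(values, sigma):
    """Apply the update v <- v * (1 + N(0, sigma)) from (5) to each parameter."""
    return [v * (1.0 + random.gauss(0.0, sigma)) for v in values]

random.seed(42)
profits = [10.0, 20.0, 30.0]
new_profits = perturb(profits, sigma=0.05)  # sigma_p = 0.05 as in [6]
```

As in [6], no lower or upper bounds are imposed on the perturbed values here; clamping the result to a fixed interval, as in [12], would be a one-line change.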
In (5), r_{i,j} denotes the weights of the objects and b_i denotes the capacities of the knapsacks; all standard deviations are assumed to be equal and set to 0.05. While Branke et al. in [12] restricted the dynamic changes to a fixed interval, neither lower nor upper bounds for those dynamic parameters were employed in [6]. Additionally, in [6], the best error to optimum was used as the performance measure. According to this measure, the average values of the best solution achieved for each environment over 30 runs are considered. The optimum of each environment was obtained using the GAMS CPLEX solver.

Mori and Kita in [27] surveyed the methods for applying the adaptation of GAs to dynamic environments. They reported the characteristics of dynamic environments, and also surveyed the major kinds of GAs designed to cope
with the dynamic environment. They categorized the GAs into two types: (i) the search-based approach and (ii) the memory-based approach.
Yang proposed a hybrid memory and random immigrants scheme for genetic algorithms in dynamic environments in [45]. The results of an experimental study, based on systematically constructed dynamic environments, showed that the memory-based immigrants scheme efficiently improved the performance of genetic algorithms in dynamic environments.
Branke, Orbayı and Uyar compared and analyzed three different representations (i.e., weight coding, binary representation and permutation representation) on the basis of a dynamic multidimensional knapsack problem in [12]. They concluded that the representation can have a tremendous effect on an EA's performance in dynamic environments. They stated that the binary representation with a penalty performed extremely poorly, because of slow improvement and feasibility considerations.
Uyar and Uyar in [38] briefly outlined the environmental change methods, discussed their shortcomings and proposed a new method that can generate changes for a given severity level more reliably. They then presented the experimental setup and results for the new method and compared it with the existing methods.
Yuan and Yang tested a hybrid genetic algorithm on different functional dimensions, update frequencies, and displacement strengths in different types of dynamics in [46]. They also compared it with some other existing evolutionary algorithms for dynamic environments. Based on the results, the hybrid genetic algorithm illustrated a better capability to track the dynamic optimum. Ünal compared two different approaches using GAs to adapt to changing environments for multiple knapsack problems in [37]. These approaches were named the random immigrants based approach and the memory based approach, respectively. It was concluded that the memory based approach was more effective in adapting to the changing environments. Baykasoğlu and Ozsoydan proposed an improved firefly algorithm (FA) to solve the dynamic multidimensional knapsack problem in [6]. In order to evaluate the performance of the proposed algorithm, the same problem was also
modelled and solved by a genetic algorithm, differential evolution and the original FA. Based on the computational results and convergence capabilities, they concluded that the improved FA is a very powerful algorithm for solving multidimensional knapsack problems in both static and dynamic environments.
2 Methodology

GAs sometimes fail to adapt to a dynamic environment [27]. GAs' tendency to converge prematurely in stationary problems and their lack of diversity in tracking optima are deficiencies that need to be addressed for dynamic environments [1]. Maintenance of diversity is an essential requirement for applying a GA to dynamic environments [27].

There are several techniques that have been used to enhance the performance of the standard genetic algorithm in dynamic environments to maintain diversity [1, 20]. A new genetic algorithm named the partheno-genetic algorithm (PGA) overcomes the premature convergence problem and keeps the diversity of the population [33]. For this reason we tested the PGA with different reinitializing schemes. Based on this explanation, we give a brief review of the standard genetic algorithm, the partheno-genetic algorithm and techniques for maintaining diversity.

2.1 Standard Genetic Algorithm (SGA)

In the 1970s, John Holland and his students set the theoretical foundations of genetic algorithms [2]. GAs are nature-inspired optimization techniques based on the evolutionary process of biological organisms. Through generations, the most fitted individuals survive and the less fitted ones are eliminated. This process is based on the principles of natural selection and “survival of the fittest” [2, 14]. In a standard GA, each individual represents a possible solution of the problem to be solved. A randomly generated initial population, composed of individuals, evolves toward better solutions through generations, using genetic operators (crossover and mutation) and selection procedures. An objective function is used to evaluate the fitness of each individual. The basic steps of a simple genetic algorithm are illustrated in Algorithm 1.
Algorithm 1 A pseudo code for simple GA.
1: Generate initial population;
2: Evaluate fitness values of individuals in population;
3: repeat
4:     Select parents from the population for mating;
5:     Recombine parents to produce offspring;
6:     Mutate the resulting offspring;
7:     Evaluate fitness of the offspring;
8:     Replace some or all the population by the offspring;
9: until a termination criterion is satisfied // e.g., a predefined iteration number
10: Results and visualization.

1 0 0 0 1 1 0

Figure 1: A simple example for binary representation, n = 7.
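The direct encoding of Figure 1 is straightforward to decode and check; the helper names, toy weights and capacities below are illustrative assumptions, not values from the benchmark instances.

```python
def selected_items(chromosome):
    """Return the 1-based indices of items whose gene equals 1."""
    return [j + 1 for j, gene in enumerate(chromosome) if gene == 1]

def is_feasible(chromosome, weights, capacities):
    """Check that no knapsack capacity is exceeded (constraints (2))."""
    return all(sum(r * x for r, x in zip(row, chromosome)) <= b
               for row, b in zip(weights, capacities))

# The chromosome of Figure 1: items 1, 5 and 6 are included.
chrom = [1, 0, 0, 0, 1, 1, 0]
items = selected_items(chrom)          # [1, 5, 6]

# A hypothetical instance with m = 2 knapsacks and n = 7 items.
weights = [[2, 3, 1, 4, 2, 1, 3],
           [1, 2, 2, 1, 3, 2, 2]]
capacities = [6, 6]
feasible = is_feasible(chrom, weights, capacities)  # True for this toy data
```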
There is a huge amount of GA literature, including books, theses and articles. For this reason, only the necessary explanation (of the techniques used in this paper) is given about genetic algorithms. We refer the reader to [16] and [35] for detailed information. The first step of designing a genetic algorithm is creating an initial population that consists of individuals (chromosomes) [37]. This initial population can be generated randomly. Sometimes, a hybrid application of metaheuristics and Branch and Bound (in particular, memetic algorithms) can be used to obtain a good starting solution [36]. To generate an initial population, individuals must be represented in a proper way. Direct or indirect representation schemes can be used. “A representation is called direct if it can be interpreted directly as a solution to the problem” [12]. In this representation, an individual consists of zeros and ones. If an item is included in the solution, the value of the corresponding position in the chromosome is 1. A simple example of binary representation for an individual is shown in Figure 1. In this individual, the 1st, the 5th and the 6th items are included in the solution. If no knapsack's capacity is exceeded, this is also a feasible solution. The main drawback of the binary (direct) representation is the difficulty of maintaining feasibility [12]. To maintain feasibility, a number of procedures
are encouraged to be implemented [14]: (i) to use a representation that ensures all solutions are feasible, (ii) to separate the evaluation of fitness and infeasibility, (iii) to use a heuristic operator to transform any infeasible solution into a feasible one, (iv) to apply a penalty function. Unlike the direct representation, it is not necessary to design complicated repair mechanisms or to use penalty functions to ensure feasibility in indirect representations [12]. The priority-based encoding technique [6], permutation representation, and real-valued representation with weight coding [12] can be used as indirect representations. Figure 2 represents a simple permutation representation. In this representation, the 3rd item must be included in the solution first, the 7th item second, and so forth. Inclusions of successive items must be handled by considering the knapsack capacities. In this paper we used the permutation representation.

3 7 1 4 2 6 5

Figure 2: A simple example of permutation representation, n = 7.
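The permutation decoding described above can be sketched as follows; the paper does not spell out its exact decoder, so this is one plausible greedy variant (whether to skip an item that no longer fits or to stop decoding entirely is a design choice).

```python
def decode(permutation, weights, capacities):
    """Greedily include items (1-based) in the order given by the permutation,
    skipping any item whose inclusion would violate a knapsack capacity."""
    remaining = list(capacities)
    solution = [0] * len(permutation)
    for item in permutation:
        j = item - 1
        if all(weights[i][j] <= remaining[i] for i in range(len(capacities))):
            for i in range(len(capacities)):
                remaining[i] -= weights[i][j]
            solution[j] = 1
    return solution

# One knapsack, three items of weight 2 each, capacity 3:
# only the first item of the permutation fits.
print(decode([2, 3, 1], [[2, 2, 2]], [3]))  # → [0, 1, 0]
```

Because every permutation decodes to a capacity-respecting binary vector, all individuals remain feasible without repair mechanisms or penalties.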
“For the selection phase of genetic algorithms, several methods are used, such as roulette wheel, stochastic universal sampling, sigma scaling, rank selection, tournament selection and steady-state selection” [37]. For a detailed explanation of these methods, the reader should refer to [25]. Additionally, a mathematical analysis of tournament selection, truncation selection, linear and exponential ranking selection, and proportional selection is carried out in depth in [8]. We used tournament selection in this study. In tournament selection, a predefined number of individuals are chosen randomly from the population and the best of them is copied from this group into the intermediate population [8]. The pseudo code for tournament selection is shown in Algorithm 2.
After implementing the selection, the next phase is recombining the population. During the recombination process, a new individual solution is created from the information contained within two or more parent solutions. Several recombination operators can be implemented for permutation representations. Eiben and Smith in [15] describe these operators in detail. We use
Algorithm 2 A pseudo-code for tournament selection.
1: currentpopulation ← P
2: selectedmatrix ← zeros
3: tournamentsize ← k
4: set t := 0;
5: while t < populationsize do
6:     t ← t + 1
7:     Select k individuals from P and evaluate
8:     row t of selectedmatrix ← the best of k individuals
9: end while
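Algorithm 2 can be condensed into a small helper; unlike the pseudo-code, the sketch below returns a single winner per call rather than filling a selection matrix, which is an implementation choice of ours.

```python
import random

def tournament_select(population, fitness, k):
    """Draw k distinct individuals uniformly at random and return the fittest."""
    contenders = random.sample(range(len(population)), k)
    best = max(contenders, key=lambda idx: fitness[idx])
    return population[best]

random.seed(0)
pop = ["a", "b", "c", "d"]
fit = [1, 9, 3, 5]
winner = tournament_select(pop, fit, k=2)  # some member of pop, biased toward "b"
```

Calling the function populationsize times, as in the while-loop of Algorithm 2, fills the intermediate population.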
cycle crossover in this paper. In this operator, new offspring are created by dividing the parents into cycles. The procedure for constructing cycles is as follows [15]:

1. Start with the first unused position and allele of Parent 1,
2. Look at the allele in the same position in Parent 2,
3. Go to the position with the same allele in Parent 1,
4. Add this allele to the cycle,
5. Repeat steps 2 through 4 until you arrive at the first allele of Parent 1.

An example illustration of cycle crossover is shown in Figure 3.
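The cycle construction above can be implemented directly; the nine-element parents in the example are our own illustration.

```python
def cycle_crossover(p1, p2):
    """Build two offspring by copying alternate cycles from each parent."""
    n = len(p1)
    child1, child2 = [None] * n, [None] * n
    pos_in_p1 = {allele: i for i, allele in enumerate(p1)}
    cycle = 0
    for start in range(n):
        if child1[start] is not None:
            continue                           # position already covered by a cycle
        i = start
        while child1[i] is None:
            if cycle % 2 == 0:                 # even cycles: keep parents' alleles
                child1[i], child2[i] = p1[i], p2[i]
            else:                              # odd cycles: exchange them
                child1[i], child2[i] = p2[i], p1[i]
            i = pos_in_p1[p2[i]]               # follow the cycle (steps 2-3)
        cycle += 1
    return child1, child2

p1 = [1, 2, 3, 4, 5, 6, 7, 8, 9]
p2 = [9, 3, 7, 8, 2, 6, 5, 1, 4]
c1, c2 = cycle_crossover(p1, p2)
```

Because each position of an offspring is filled from exactly one parent per cycle, both children are guaranteed to be valid permutations.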
Following the recombination operator, the next step is mutating the population. Swap mutation, insert mutation, scramble mutation or inversion mutation can be implemented as a mutation operator for permutation representations [15]. The mutation operator is often interpreted as a “background operator” supporting the crossover [23], and it is implemented with small probabilities compared to crossover [14]. In this paper, we use insert mutation. It is a simple and effective operator. It works by randomly selecting two positions in the chromosome and moving one next to the other. An example of insert mutation is shown in Figure 4. In this example, the 5th gene is inserted next to the 2nd one, and the others are shifted right.
Figure 3: Cycle Crossover (a) Step 1: identification of cycles, (b) Step 2: construction of offspring [15].
Figure 4: Insert mutation [15].
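The move in Figure 4 (the 5th gene inserted next to the 2nd) can be reproduced with a short helper; the function name is ours, and the sketch assumes the moved gene sits after the target position.

```python
def insert_mutation(chromosome, src, dst):
    """Move the gene at index src to the slot right after index dst,
    shifting the genes in between (assumes src > dst)."""
    genes = list(chromosome)
    gene = genes.pop(src)          # remove the gene to move
    genes.insert(dst + 1, gene)    # re-insert it next to position dst
    return genes

# Figure 4's example: the 5th gene (index 4) moves next to the 2nd (index 1).
print(insert_mutation([1, 2, 3, 4, 5, 6, 7], src=4, dst=1))  # → [1, 2, 5, 3, 4, 6, 7]
```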
After the mutation step, a fraction of the best individuals of the current population can be passed directly to the next generation, if desired. This is called elitism. Using elitism may help maintain the diversity.
2.2 Partheno-Genetic Algorithm
Parthenogenesis is a special mode of reproduction that occurs without the male contribution [21, 33]. In this process, an egg cell from the female parent produces new individuals without being penetrated by a sperm cell [21]. In a standard genetic algorithm, traditional crossover operators are used in order to produce offspring. This standard process needs two or more parents to produce children. The PGA is a variant of the GA [42–44], and it simulates the partheno-genetic process of primary species in nature [21]. Compared with the SGA, the PGA acts on a single chromosome [19]. Crossover and mutation operators are replaced by partheno-genetic operators, named “swap”, “reverse”, and “insert”. Several advantages of the PGA are stated in the literature: (i) complicated crossover operators (such as partially mapped
Figure 5: Partheno-genetic operators, (a) swap, (b) reverse, (c) insert.
crossover, ordered crossover and cycle crossover) are avoided in the optimization process [5, 21, 33], (ii) the PGA restrains the premature convergence phenomenon [5], (iii) the population diversity is guaranteed [33]. As stated earlier, maintaining diversity is a critical issue in adapting to dynamic environments. The procedure of the PGA is summarized in Algorithm 3.

Algorithm 3 A pseudo code for the PGA.
1: Generate initial population;
2: Evaluate fitness values of individuals in population;
3: repeat
4:     apply selection;
5:     apply the swap operator;
6:     apply the reverse operator;
7:     apply the insert operator;
8: until a termination criterion is satisfied // e.g., a predefined iteration number
9: Results and visualization.
Partheno-genetic operators are comparatively easy to implement. In the swap operator, a pair of positions on a chromosome is selected randomly.
Then the contents of these two points are swapped with a predefined probability to produce a new chromosome. This operator is especially useful for permutation representations. A simple illustration is shown in Figure 5a. In the reverse operator, a sub-string is selected from the chromosome randomly, the positions of the genes in this sub-string are reversed to generate a new chromosome with a given probability. An example for this procedure is shown in Figure 5b. In the insert operator, a substring of a chromosome is selected randomly. Then the last value in this substring is transferred to the first position of the substring, and all the other positions in the substring are moved backwards to generate a new chromosome with a predefined probability. An example illustration is shown in Figure 5c.
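Each of the three operators reduces to a small slice manipulation; in the sketch below, the positions i and j are assumed to have been chosen at random elsewhere, and the probability test that gates each operator is omitted for brevity.

```python
def swap_op(chrom, i, j):
    """Swap the genes at positions i and j (Figure 5a)."""
    c = list(chrom)
    c[i], c[j] = c[j], c[i]
    return c

def reverse_op(chrom, i, j):
    """Reverse the substring between positions i and j, inclusive (Figure 5b)."""
    c = list(chrom)
    c[i:j + 1] = reversed(c[i:j + 1])
    return c

def insert_op(chrom, i, j):
    """Move the last gene of the substring [i, j] to its front, shifting the
    other genes of the substring backwards (Figure 5c)."""
    c = list(chrom)
    c[i:j + 1] = [c[j]] + c[i:j]
    return c

print(swap_op([1, 2, 3, 4, 5], 0, 4))     # → [5, 2, 3, 4, 1]
print(reverse_op([1, 2, 3, 4, 5], 1, 3))  # → [1, 4, 3, 2, 5]
print(insert_op([1, 2, 3, 4, 5], 1, 3))   # → [1, 4, 2, 3, 5]
```

All three operators rearrange genes within a single chromosome, so they preserve the permutation property and never require repair.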
2.3 Techniques for Adapting to Dynamic Environments

There are several approaches to enhance the performance of the SGA in dynamic environments (for a detailed survey see [28]). The most straightforward approach to increase diversity is to restart the algorithm completely by reinitializing the population after each environmental change [1]. However, sometimes the information on hand, gained from the current population, can be useful. This situation is more likely if the environmental change is not considerably severe. In this case, a partial random restarting approach can be implemented. In other words, a fraction of the new population is seeded with old solutions, rather than being completely reinitialized.

3 Experimental Setup
Shuai et al. in [33] stated that the PGA overcomes the problem of the premature convergence of GAs, and in addition it was guaranteed to keep the diversity of the population in the conditions of the lower diversifying of initial population. This leads us to test the PGA in dynamic environments by solving the MKP. Firstly, we solved the stationary MKP to measure the performance of the PGA. Secondly, we conducted some comparative computational experiments in dynamic environments employing both the SGA and the PGA. Finally, we tested different reinitialization schemes in dynamic environments using the PGA.
Table 1: Parameters for the SGA and the PGA.

    Attribute                              SGA     PGA
    # of Generations                       3000    3000
    Population size (randomly generated)   600     600
    Crossover (cycle) rate                 0.7     -
    Mutation (insert) rate                 0.01    -
    Elitism rate                           0.01    0.01
    Insert rate                            -       0.3
    Reverse rate                           -       0.3
    Swap rate                              -       0.3

3.1 Setup on Static Environment

In order to test the performance of the PGA in a stationary environment, we compared the SGA and the PGA on 12 different benchmark problems available on the OR-LIBRARY web-site∗. These problems were also previously used by Boussier et al. [11] and Hanafi & Wilbaut [18]. The four problem sets include: 10 constraints-250 variables, 5 constraints-500 variables, 10 constraints-500 variables and, finally, 30 constraints-250 variables. For each set we solved 3 representative problems with different difficulty levels. The parameters for the SGA and the PGA were set as shown in Table 1. For each algorithm and each problem, 5 independent runs with 3000 iterations were executed in the Python programming environment, on a personal computer with an Intel i5 1.6 GHz processor and 8 GB RAM. Pseudo-codes of the problem-specific SGA and PGA are shown in Algorithm 4 and Algorithm 5, respectively.

3.2 Setup on Dynamic Environment
In a dynamic environment, some factors, such as the frequency or periodicity of changes (cyclic/periodical/recurrent or not) and the components that change (objective functions, constraints, etc.), are important issues [28]. In real-life problems, changes in the parameters of any problem have a stochastic nature. On the other hand, environmental changes typically may not alter the problem completely and may affect only some part of the problem at a time [1].
∗ http://people.brunel.ac.uk/~mastjjb/jeb/info.html
Algorithm 4 The pseudo-code for the SGA used in the stationary problem.
1: begin
2: maxiter := 3000
3: t := 0
4: generate random initial population (size = 600)
5: while t < maxiter do
6:     t ← t + 1
7:     apply cycle crossover (Pc = 0.7)
8:     apply insert mutation (Pm = 0.01)
9:     decode the permutation and sort individuals w.r.t. fitness values
10:    apply tournament selection (tournamentsize = populationsize ∗ 0.1)
11:    apply elitism (elitismrate = 0.01)
12: end while

Algorithm 5 The pseudo-code for the PGA used in the static problem.
1: begin
2: maxiter := 3000
3: t := 0
4: generate random initial population (size = 600)
5: while t < maxiter do
6:     t ← t + 1
7:     apply swap (Ps = 0.3)
8:     apply reverse (Pr = 0.3)
9:     apply insert (Pi = 0.3)
10:    decode the permutation and sort individuals w.r.t. fitness values
11:    apply tournament selection (tournamentsize = populationsize ∗ 0.1)
12:    apply elitism (elitismrate = 0.01)
13: end while
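The generational loop of Algorithm 5 can be condensed into a self-contained toy sketch. The permutation objective, population size, iteration budget and tournament size below are illustrative stand-ins, not the paper's tuned settings or benchmark instances.

```python
import random

def pga(fitness, n, pop_size=30, iters=200, p_op=0.3, elite=1, seed=1):
    """Minimal PGA loop: swap/reverse/insert on permutations of range(n),
    tournament selection of size 3, and a small elite carried over."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(iters):
        nxt = []
        for chrom in pop:
            c = list(chrom)
            if rng.random() < p_op:                    # swap operator
                i, j = rng.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            if rng.random() < p_op:                    # reverse operator
                i, j = sorted(rng.sample(range(n), 2))
                c[i:j + 1] = reversed(c[i:j + 1])
            if rng.random() < p_op:                    # insert operator
                i, j = sorted(rng.sample(range(n), 2))
                c[i:j + 1] = [c[j]] + c[i:j]
            nxt.append(c)
        merged = pop + nxt
        merged.sort(key=fitness, reverse=True)
        # elitism plus tournament selection over parents and offspring
        pop = merged[:elite] + [max(rng.sample(merged, 3), key=fitness)
                                for _ in range(pop_size - elite)]
    return max(pop, key=fitness)

# Toy objective: reward placing value v at index v; the identity permutation
# is optimal with fitness n.
best = pga(lambda p: sum(1 for i, v in enumerate(p) if i == v), n=8)
```

In the paper's setting, the fitness callable would decode the permutation against an MKP instance, as sketched earlier for the permutation representation.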
Based on this philosophy, we modelled the changing environment as a Poisson process with parameter λ, by means of the frequency of changes. To generate more frequent changes we set λ = 0.01 and, on the contrary, for rarer changes we set λ = 0.005. In addition to this, we modelled the type of changes as a Markov chain. Again, we used 5 benchmark problems from the OR-LIBRARY. These problems are Weish26 to Weish30, which were previously used by Shih [32]. Among these problems, only the knapsack capacities differ from each other, which means that the problem landscape does not change completely. These problems constitute the states of an irreducible Markov chain. At each change point, the current problem changes into another problem (or into itself again) with an equal transition probability of 0.2. Different transition probabilities could be used, yet for simplicity we preferred equal transition probabilities. The best error to the optimum was used as the performance measure. At each iteration, the difference between the optimum value of the corresponding environment and the best-of-generation fitness value was recorded. These difference values were then averaged over the number of generations. Firstly, for each algorithm, 15 replicates, each with different environmental conditions, were executed and the performances of the SGA and the PGA were compared. Secondly, one representative environment for each change frequency (the numbers of changes are 8 and 15 for λ = 0.005 and 0.01, respectively) was selected, and we then tested the PGA's behaviour with respect to different reinitialization schemes (i.e., 5 different reinitialization rates) on the chosen environments through 30 runs. The number of generations was set to 1000 for each run, with a population size of 150. All experiments were conducted using Python. The pseudo-code for the PGA tested in the dynamic environment in this study is shown in Algorithm 6.
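The change process described above can be sketched as follows; the function name is ours, and the uniform next-problem draw is equivalent to the equal transition probabilities of 0.2 over the 5 Weish problems (self-transitions included).

```python
import random

def change_schedule(lam, horizon, n_problems, seed=None):
    """Draw change points from a Poisson process with rate lam (exponential
    interarrival times) over `horizon` iterations, and pick the next problem
    uniformly at random at each change point."""
    rng = random.Random(seed)
    schedule, t = [], 0.0
    while True:
        t += rng.expovariate(lam)       # mean gap between changes = 1 / lam
        if t >= horizon:
            break
        schedule.append((int(t), rng.randrange(n_problems)))
    return schedule

# Frequent changes (lambda = 0.01) over 1000 iterations, 5 Weish problems.
schedule = change_schedule(lam=0.01, horizon=1000, n_problems=5, seed=7)
```

With λ = 0.01 the expected number of changes over 1000 iterations is 10, which is consistent with the 8 and 15 changes observed in the representative environments.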
As mentioned earlier in Section 2, rapid convergence is a problem for GAs in dynamic environments. Once converged, GAs cannot adapt well to the changing environment [45]. For dynamic environments, an effective algorithm is expected to adapt quickly to the new environment. For this reason, diversity must be maintained through generations. We used the most straightforward approach to increase diversity: to restart the algorithm by reinitializing a predefined fraction of the population after each environmental change. For example, if the reinitialization rate is 0.5, then when an environmental change occurs, fifty per cent of the current population is replaced with the same proportion of the initial population. This approach can be con-
Algorithm 6 The pseudo-code for the PGA used in the dynamic problem.
1: begin
2: maxiter := 1000
3: t := 0
4: define regenrate // regeneration rate
5: generate random initial population (size = 150)
6: substitute ← first (regenrate ∗ popsize) elements of initial population
7: define change points using Poisson process
8: while t < maxiter do
9:     t ← t + 1
10:    apply swap (Ps = 0.3)
11:    apply reverse (Pr = 0.3)
12:    apply insert (Pi = 0.3)
13:    decode the permutation and sort individuals w.r.t. fitness values
14:    apply tournament selection (tournamentsize = populationsize ∗ 0.1)
15:    apply elitism (elitismrate = 0.01)
16:    record errors from optimum value of current environment
17:    if changepoint = t then
18:        change problem type randomly
19:        insert substitute into current population starting from the first row
20:    end if
21: end while
sidered as a partial random restart. We tested the dynamic behaviour of the PGA for 5 different reinitialization schemes (treatments) (i.e., 0, 0.25, 0.5, 0.75, 1) at 2 different change frequencies (0.01 and 0.005). We used analysis of variance (ANOVA) in order to compare the treatments.
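Step 19 of Algorithm 6 (seeding the current population from the stored initial individuals) can be sketched as a one-line slice; the function name and the tiny permutations below are our illustrative choices.

```python
def partial_restart(population, substitute, rate):
    """Replace the first fraction `rate` of the current population with the
    individuals kept aside from the initial population.
    rate = 0.0 keeps the whole population; rate = 1.0 is a full restart."""
    n = int(rate * len(population))
    return substitute[:n] + population[n:]

current = [[1, 2, 3], [2, 1, 3], [3, 1, 2], [3, 2, 1]]
stored = [[1, 3, 2], [2, 3, 1], [1, 2, 3], [3, 2, 1]]
print(partial_restart(current, stored, rate=0.5))
# → [[1, 3, 2], [2, 3, 1], [3, 1, 2], [3, 2, 1]]
```

The two extreme rates recover the two classical strategies: 1.0 is a complete restart and 0.0 leaves the converged population untouched.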
4 Results and Discussion
Results on the stationary environment are shown in Table 2. In this table, the 2nd and the 3rd columns include the best fitness values together with the CPU times previously achieved by Boussier et al. [11]. Fitness values with asterisks are the optimum values. The best fitness values reached within five runs are listed for the SGA and the PGA in the 5th and the 7th columns, respectively. The time values in the 6th and the 8th columns were produced using Python's time.clock() function for each run. On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number; the resolution is typically better than one microsecond.

Table 2: Experimental results on stationary environment.

                Boussier et al. [11]              SGA                  PGA
    Inst.       Best     CPU time   Run   Best    Time (s)   Best    Time (s)
    5.500.0     120148∗  40 (s)     1     98753   1665.73    115575  1646.58
                                    2     99308   1841.8     117365  1654.36
                                    3     100164  1758.27    116656  1650.28
                                    4     100896  1793.95    116732  1656.36
                                    5     101678  1809.36    116442  1681.24
    5.500.10    218428∗  35 (s)     1     196502  2926.27    215078  2918.14
                                    2     194942  2951.47    215973  2867.3
                                    3     197644  3811.89    215251  2890.02
                                    4     195212  3080.28    215350  2889.77
                                    5     193554  2964.97    216079  2904.08
    5.500.20    295828∗  0 (s)      1     279814  4099.54    293235  3953.4
                                    2     280098  4196.86    292399  4003.18
                                    3     279204  6247.99    292901  3970.20
                                    4     281950  4164.70    293738  4028.37
                                    5     278849  4134.03    292355  3996.76
    10.250.0    59187∗   0 (s)      1     51227   1298.34    57229   2092.4
                                    2     50693   1297.29    57477   1326.41
                                    3     50936   1298.82    57903   1316.14
                                    4     51288   1264.93    57362   1278.17
                                    5     50543   1250.03    57943   1294.12
    10.250.10   110913∗  336 (s)    1     102386  2359.02    108826  2304.58
                                    2     98830   2402.03    108234  2338.95
                                    3     100887  2342.19    106416  2320.25
                                    4     99473   2411.26    108828  2318.28
                                    5     101401  2318.52    108977  2366.28
    10.250.20   151809∗  211 (s)    1     144448  3365.25    150728  3429.24
                                    2     144806  3329.11    150383  5235.56
                                    3     142987  3394.98    150108  3452.85
                                    4     142187  3349.07    149921  3475.20
                                    5     146314  3597.74    150608  3483.27
    10.500.0    117821∗  24.5 (h)   1     101539  2569.87    114563  2506.83
                                    2     99541   2622.25    114488  2542.94
                                    3     99270   2675.15    114842  2520.3
                                    4     98423   2636.77    113174  2602.56
                                    5     99187   2633.77    114187  2554.77
    10.500.7    118344∗  161.7 (h)  1     98142   2606.73    114404  2463.17
                                    2     98960   2590.43    113383  2518.62
                                    3     102531  2681.26    115282  2531.55
                                    4     98071   2724.78    114496  2481.85
                                    5     100279  3690.61    114847  2466.53
    10.500.20   304387∗  6.6 (h)    1     287077  6726.03    301682  6631.94
                                    2     285036  6913.78    301994  6693.94
                                    3     287679  6936.82    301370  7413.24
                                    4     289481  6831.02    302344  6770.35
                                    5     288520  6965.56    300422  6670.93
    30.250.0    56842    39.7 (h)   1     49340   3138.46    55229   3140.97
                                    2     48046   3190.02    55374   3042.05
                                    3     47230   3074.92    54363   3067.34
                                    4     50185   3135.61    54612   3051.15
                                    5     46128   3111.10    54557   3130.98
    30.250.13   106876   86.9 (h)   1     97895   6100.31    104536  6076.02
                                    2     98431   6095.72    104621  6048.82
                                    3     95285   5963.57    105642  5933.81
                                    4     95621   6059.43    105326  5989.37
                                    5     96128   6005.22    105587  6007.77
    30.250.28   149570   16.8 (h)   1     143942  9068.53    147653  8994.21
                                    2     140364  9051.11    148333  9057.92
                                    3     144159  9079.86    148549  9016.86
                                    4     142260  9123.31    148289  9753.12
                                    5     143038  9176.41    148415  9078.3
According to the results in Table 2, the PGA is more effective than the SGA, with the given parameters, in terms of both the solution times and the best fitness values achieved. On the contrary, the PGA is not competitive with the results reported by Boussier et al. [11], especially for the relatively easy problems (i.e., 5.500 and 10.250). For the harder problems (i.e., 10.500 and 30.250), however, the PGA achieves acceptable fitness values within reasonable solution times. Moreover, the PGA (and also the SGA) never runs out of memory. Metaheuristics are, in fact, strongly dependent on the parameters used; for example, better fitness values could be reached by increasing the number of iterations, so a trade-off between the solution times and the fitness values achieved must be taken into consideration. The convergence characteristics of both the SGA and the PGA in the stationary environment (for the 5.500.20 problem instance) are shown in Figure 6. As can be seen from Figure 6, the PGA rapidly reaches a near-optimal value in the initial generations; it has a very strong convergence capability.

Figure 6: Behaviours of the PGA and the SGA in the stationary environment for the 5.500.20 instance.
Figure 7: Points of change and the problem types for the 4th column of Table 3.
Using three operators (i.e., swap, reverse, and insert), the PGA does not get stuck in local optima and converges rapidly to a near-optimal value, whereas the standard GA does get stuck in local optima. For dynamic environments, as mentioned in Subsection 3.2, we first ran 15 replicates of the SGA and the PGA for each of the two λ values (i.e., rare changes and frequent changes). The results are compared in Table 3 for λ = 0.01, and in Table 4 for λ = 0.005.
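The three parthenogenetic operators can be sketched as follows. This is an illustrative Python sketch, not the implementation used in the experiments: the function names and the random choice of positions are our assumptions, and each operator acts on a copy of a single parent (no crossover), which is what distinguishes the PGA from the standard GA.

```python
import random

def swap_op(chrom):
    """Swap the genes at two randomly chosen positions."""
    c = chrom[:]
    i, j = random.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c

def reverse_op(chrom):
    """Reverse the gene segment between two randomly chosen cut points."""
    c = chrom[:]
    i, j = sorted(random.sample(range(len(c)), 2))
    c[i:j + 1] = reversed(c[i:j + 1])
    return c

def insert_op(chrom):
    """Remove the gene at one random position and insert it at another."""
    c = chrom[:]
    i, j = random.sample(range(len(c)), 2)
    gene = c.pop(i)
    c.insert(j, gene)
    return c
```

All three operators only rearrange the genes of one parent, so the multiset of gene values is preserved; this is what lets the PGA maintain diversity without a crossover operator.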
For the 4th column of Table 3, there are 14 environmental changes over 1000 iterations. The positions of the changes and the corresponding problem types between these positions are depicted in Figure 7. In this figure, "0" represents the Weish26 problem type and "4" represents Weish30. Until the first point of change the problem type is Weish26; after the first change (i.e., at the 47th iteration) it turns into Weish30, and the process continues in a similar fashion. As a representative example, we compare the behaviours of the first entries of the PGA and the SGA (i.e., 538.87 and 3505.69, respectively) graphically in Figure 8.
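A change schedule of the kind shown in Figure 7 can be generated by treating each iteration as a Bernoulli trial with probability λ, so that roughly λ · 1000 changes are expected over 1000 iterations. The sketch below is only illustrative: the function name, the seed, and the uniform choice of the next problem type are our assumptions about details not specified here.

```python
import random

WEISH_TYPES = list(range(5))  # 0 = Weish26, ..., 4 = Weish30

def change_schedule(n_iters=1000, lam=0.01, seed=42):
    """Return (change_points, problem_type_per_iteration).

    Each iteration triggers an environmental change with probability lam,
    so about lam * n_iters changes are expected (e.g. ~10 for lam = 0.01).
    """
    rng = random.Random(seed)
    current = 0                      # start with Weish26
    points, types = [], []
    for it in range(1, n_iters + 1):
        if rng.random() < lam:
            points.append(it)
            # switch to a different, randomly chosen problem type
            current = rng.choice([t for t in WEISH_TYPES if t != current])
        types.append(current)
    return points, types
```

Because the change count is random, individual runs can see more or fewer changes than λ · 1000 (as with the 14 changes observed above for λ = 0.01).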
As shown in Figure 8, the PGA yields lower error values. Moreover, the PGA has a lower mean error value (µ) than the corresponding SGA counterpart under all environmental conditions, as shown in Table 3 and Table 4. Additionally, the mean error values in Table 4 are, on average, lower than those in Table 3. This is an expected result, since the algorithm has more time to decrease the error value when the environment changes less frequently.
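The treatments in Tables 3 and 4 differ in the random regeneration rate R, i.e., the fraction of the population replaced with random individuals when an environmental change occurs. A minimal sketch of such a scheme, under our own assumptions (in particular, which individuals survive — here simply the leading part of the population list — is not specified by the tables):

```python
import random

def reinitialize(population, rate, rng=random):
    """Replace a fraction `rate` of the population with random bit-strings.

    rate = 1.0 is a complete random restart; rate = 0.0 keeps the whole
    population and relies only on the knowledge gathered so far.
    """
    n, length = len(population), len(population[0])
    n_new = int(rate * n)
    # keep the remainder of the current population unchanged
    survivors = population[:n - n_new]
    newcomers = [[rng.randint(0, 1) for _ in range(length)]
                 for _ in range(n_new)]
    return survivors + newcomers
```

With rate = 0 the algorithm adapts purely through its variation operators, while rate = 1 discards all accumulated information at every change.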
[Table 3: Comparison of the PGA with the SGA on dynamic environments (λ = 0.01). For each of 15 environments (with between 6 and 15 changes in 1000 iterations), the table reports, for the SGA and the PGA at random regeneration rates R ∈ {1, 0.75, 0.5, 0.25, 0}, the mean error value µ over 1000 iterations and the execution time t (s).]
[Table 4: Comparison of the PGA with the SGA on dynamic environments (λ = 0.005). For each of 15 environments (with between 1 and 9 changes in 1000 iterations), the table reports, for the SGA and the PGA at random regeneration rates R ∈ {1, 0.75, 0.5, 0.25, 0}, the mean error value µ over 1000 iterations and the execution time t (s).]
Figure 8: Comparison of the PGA and the SGA for a selected dynamic environment.
Regarding the comparison of different reinitialization schemes, it is not appropriate to compare the efficiency of the treatments using only the data in Table 3 and Table 4, since the environments are not the same. To compare the treatments, we selected one representative environmental condition from each of these tables, namely the 12th environment (with 15 environmental changes) from Table 3, and the 6th (with 8 environmental changes) from Table 4, and executed the algorithm 30 times for each treatment. The results are shown in Table 5 and Table 6; each entry represents the error value averaged over 1000 iterations. To detect whether the treatment schemes differ from each other, an analysis of variance (ANOVA) was performed on the data given in Table 5 and Table 6. The ANOVA results are given in Table 7 and Table 8, respectively. According to the first ANOVA, summarized in Table 7 (λ = 0.01, 15 changes), since the P-value = 0.000, it can be concluded that there is a significant difference between at least one pair of treatment types in terms of their effect on the average solution. A multiple comparison method (i.e., Fisher's Least Significant Difference method, LSD) was then applied to isolate the specific differences [26]. For an α level of 0.05, the LSD value for these data is equal to 26.04. The differences and significance of all pairwise comparisons are shown in Table 9. According to this comparison, there is a significant difference between the first treatment (i.e., 100% reinitialization) and all the others: when the algorithm uses the knowledge on hand, it adapts to the new environment more easily. Interestingly, while using information from the current population has a positive effect on adapting to the dynamism, no specific information-usage rate stands out. It is clear, however, that complete random restarting is not an effective way of adapting to the new environment.

A similar procedure was applied to the data shown in Table 6. For these environments, the number of environmental changes and the average errors are lower than for λ = 0.01. The analysis of variance is summarized in Table 8. For an α level of 0.05, the LSD value for these data is equal to 14.62. The differences and significance of all pairwise comparisons are shown in Table 10. Similar to the previous case, it can be concluded that there is a significant difference between at least one pair of treatments. The LSD procedure indicates that the
Table 5: Comparison of treatments (λ = 0.01). Number of changes is equal to 15. Each row lists, for replicates 1–30, the error value averaged over 1000 iterations, followed by the treatment mean.

Treatment 1.0:  921.57 908.12 916.64 868.27 1041.53 941.06 843.92 889.08 870.91 842.49 842.81 918.27 844.20 946.54 1017.61 976.55 911.97 844.88 916.82 909.72 845.87 824.58 999.71 1009.21 865.64 829.10 945.44 895.13 915.82 964.53 (mean 908.93)
Treatment 0.75: 500.58 517.69 472.27 584.71 564.61 480.18 611.40 563.26 489.07 530.95 519.50 567.57 493.77 421.53 677.17 632.87 494.36 496.40 531.70 560.99 595.95 495.85 447.22 520.09 536.83 458.23 541.06 489.91 561.52 525.56 (mean 529.43)
Treatment 0.5:  504.64 547.32 411.90 390.88 631.56 581.09 436.97 483.18 515.57 494.66 550.37 433.24 444.12 501.82 552.90 407.01 545.99 526.24 506.63 595.15 547.38 592.05 575.80 526.15 502.91 456.92 475.32 498.34 516.64 542.19 (mean 509.83)
Treatment 0.25: 553.63 456.22 476.93 470.83 537.99 422.70 441.51 463.73 568.65 471.92 454.08 511.01 461.47 510.72 509.52 461.19 494.59 482.03 532.91 489.33 476.10 502.88 503.73 484.49 499.63 561.06 507.05 538.27 446.92 452.04 (mean 491.44)
Treatment 0.0:  469.27 517.77 498.96 431.14 459.99 527.60 433.14 464.72 458.06 433.47 559.21 487.79 447.42 420.11 517.67 472.19 474.68 472.96 464.36 442.18 436.15 526.68 525.81 487.53 500.85 453.54 468.82 489.93 552.10 473.63 (mean 478.92)
Table 6: Comparison of treatments (λ = 0.005). Number of changes is equal to 8. Each row lists, for replicates 1–30, the error value averaged over 1000 iterations, followed by the treatment mean.

Treatment 1.0:  305.60 293.64 295.42 321.33 275.99 286.15 294.95 306.09 316.23 291.06 324.53 293.57 352.17 329.08 260.70 285.31 282.47 279.38 325.16 302.56 251.17 283.35 264.97 272.11 350.30 321.76 327.20 286.21 380.03 304.08 (mean 302.08)
Treatment 0.75: 305.30 328.93 227.24 228.92 265.26 263.25 312.93 252.64 258.41 257.50 245.19 287.97 263.60 327.87 249.91 315.83 301.91 315.57 292.26 289.12 307.61 263.61 319.63 271.06 266.14 298.52 234.81 281.46 311.37 232.14 (mean 279.20)
Treatment 0.5:  313.62 314.62 257.14 286.49 294.04 286.92 289.59 278.48 276.62 285.84 305.30 264.25 268.23 240.61 292.03 262.76 277.30 250.62 273.19 241.43 315.86 262.75 261.82 293.06 298.97 265.73 290.28 245.93 246.53 331.61 (mean 279.05)
Treatment 0.25: 332.22 253.60 242.42 291.74 279.86 272.45 290.29 304.24 263.06 310.82 298.09 247.37 231.89 242.94 273.87 264.00 279.01 247.68 254.74 283.02 283.44 264.32 270.58 332.89 274.29 284.00 232.42 159.59 256.08 236.61 (mean 268.58)
Treatment 0.0:  243.85 254.53 229.09 232.09 271.37 255.33 213.39 255.48 232.04 324.03 258.23 287.10 262.91 295.49 244.92 256.51 237.54 244.45 296.92 262.18 235.32 258.17 234.24 250.74 231.90 263.83 252.53 297.98 256.33 276.61 (mean 257.17)
Table 7: ANOVA table, # of changes 15 (λ = 0.01).

Source of Variation   Degrees of Freedom   Sum of Squares   Mean Square   F        P-value
Treatments            4                    4010066.17       1002516.54    386.22   0.000
Error                 145                  376379.91        2595.72
Total                 149                  4386446.08

Table 8: ANOVA table, # of changes 8 (λ = 0.005).

Source of Variation   Degrees of Freedom   Sum of Squares   Mean Square   F        P-value
Treatments            4                    33063.95         8265.99       10.12    0.000
Error                 145                  118433.19        816.78
Total                 149                  151497.13
Table 9: Pairwise comparisons of treatments (λ = 0.01). Differences larger than 26.04 are significant at the α = 0.05 level and are indicated with an "∗".

Treatments   1.00   0.75      0.50      0.25      0.00
1.00         0.00   379.51∗   399.10∗   417.50∗   430.01∗
0.75                0.00      19.60     37.99∗    50.50∗
0.50                          0.00      18.40     30.91∗
0.25                                    0.00      12.51
0.00                                              0.00
Table 10: Pairwise comparisons of treatments (λ = 0.005). Differences larger than 14.61 are significant at the α = 0.05 level and are indicated with an "∗".

Treatments   1.00   0.75     0.50     0.25     0.00
1.00         0.00   22.89∗   23.03∗   33.50∗   44.91∗
0.75                0.00     0.15     10.62    22.03∗
0.50                         0.00     10.47    21.88∗
0.25                                  0.00     11.41
0.00                                           0.00
first treatment type results in means different from those of all the other four types. In parallel with the previous ANOVA, it can be said that using the information on hand increases the efficiency of the algorithm in adapting to the new environment. There is no single most effective information-usage rate; however, a complete random restart is not recommended.
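The one-way ANOVA and Fisher's LSD computations behind Tables 7–10 can be reproduced with a short routine. This is a sketch under our own assumptions: the function name is ours, and the default critical value t_crit ≈ 1.976 is the two-sided t quantile for α = 0.05 and 145 error degrees of freedom, as in this study (it would need to be changed for other designs).

```python
import math

def one_way_anova_lsd(groups, t_crit=1.976):
    """One-way ANOVA F statistic plus Fisher's LSD for equal group sizes.

    groups: list of k treatment groups, each with n replicate values.
    Returns (F statistic, mean square error, LSD threshold).
    """
    k = len(groups)
    n = len(groups[0])                      # replicates per treatment
    grand = sum(x for g in groups for x in g) / (k * n)
    means = [sum(g) / n for g in groups]
    ss_treat = n * sum((m - grand) ** 2 for m in means)
    ss_error = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    ms_treat = ss_treat / (k - 1)
    mse = ss_error / (k * n - k)
    f_stat = ms_treat / mse
    # treatment means differing by more than lsd are significantly different
    lsd = t_crit * math.sqrt(2.0 * mse / n)
    return f_stat, mse, lsd
```

As a sanity check, plugging in MSE = 2595.72 and n = 30 from Table 7 gives an LSD of about 26, consistent with the 26.04 threshold used in Table 9.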
5 Conclusion
Dynamism in the business world encourages the analysis of change. The Multidimensional Knapsack Problem (MKP), an NP-hard combinatorial optimization problem, arises in several areas such as cutting stock, cargo loading, and capital budgeting. Hence, the analysis of the MKP in a dynamic environment is one of the motivating necessities of today's agile business practices. GAs, one of the methods used to analyze the MKP, have been criticized in dynamic environments. That is why this study analyzes the MKP in a dynamic environment using the Partheno-Genetic Algorithm (PGA), an enhanced version of the genetic algorithm. First, the performance of the PGA was benchmarked against the classical GA on different MKP instances. It was clearly shown that the PGA outperforms the SGA both by converging fast and by finding near-optimal solutions, while the GA easily gets stuck in local optima, which explains its weakness in dynamic environments. Second, we compared the SGA and the PGA in dynamic environments; again, the PGA outperformed the SGA in terms of mean error values. Later, dynamic environments were constructed by using different parameter values of a Poisson process to represent rare changes and frequent changes, and the PGA was run with five different reinitialization schemes, testing the best performances in the different environments. ANOVA showed at least one significant difference among the treatments, and Fisher's LSD identified where the significant differences occurred. It is concluded that it is easier to adapt to a new environment when old information is used, and that a complete random restart is not effective at all. This study will be extended to compare intelligent methods in dynamic environments, and the achievements will be enriched when the algorithms developed in this study are applied to real-life problems.
References

[1] Abdunnaser Younes, Shawki Areibi, Paul Calamai, and Otman Basir. Adapting genetic algorithms for combinatorial optimization problems in dynamic environments. In Witold Kosinski, editor, Advances in Evolutionary Algorithms, chapter 11, pages 207–230. I-Tech Education and Publishing, Vienna, 2008.

[2] Zaheed Ahmed and Irfan Younas. A dynamic programming based GA for 0-1 modified knapsack problem. International Journal of Computer Applications, 16(7):1–6, February 2011.

[3] M. Haluk Akin. New heuristics for the 0-1 multi-dimensional knapsack problems. PhD thesis, University of Central Florida, Ann Arbor, 2009.

[4] Maria João Alves and Marla Almeida. MOTGA: A multiobjective Tchebycheff based genetic algorithm for the multidimensional knapsack problem. Computers & Operations Research, 34(11):3458–3470, November 2007.

[5] Jian-Cong Bai, Hui-You Chang, and Yang Yi. A partheno-genetic algorithm for multidimensional knapsack problem. In Proceedings of the 2005 International Conference on Machine Learning and Cybernetics, volume 5, pages 2962–2965, 2005.

[6] Adil Baykasoğlu and Fehmi Burcin Ozsoydan. An improved firefly algorithm for solving dynamic multidimensional knapsack problems. Expert Systems with Applications, 41(8):3712–3725, 2014.

[7] Murat Ersen Berberler, Asli Guler, and Urfat G. Nurıyev. A genetic algorithm to solve the multidimensional knapsack problem. Mathematical and Computational Applications, 18(3):486–494, 2013.

[8] Tobias Blickle and Lothar Thiele. A comparison of selection schemes used in genetic algorithms. Technical report, Computer Engineering and Communication Networks Lab (TIK), Swiss Federal Institute of Technology (ETH), 1995.

[9] Christian Blum and Andrea Roli. Metaheuristics in combinatorial optimization. ACM Computing Surveys, 35(3):268–308, September 2003.

[10] Ilhem Boussaïd, Julien Lepagnot, and Patrick Siarry. A survey on optimization metaheuristics. Information Sciences, 237:82–117, July 2013.

[11] Sylvain Boussier, Michel Vasquez, Yannick Vimont, Saïd Hanafi, and Philippe Michelon. A multi-level search strategy for the 0–1 multidimensional knapsack problem. Discrete Applied Mathematics, 158(2):97–109, January 2010.

[12] Jürgen Branke, Merve Orbayı, and Şima Uyar. The role of representations in dynamic knapsack problems. In Franz Rothlauf, Jürgen Branke, Stefano Cagnoni, Ernesto Costa, Carlos Cotta, Rolf Drechsler, Evelyne Lutton, Penousal Machado, Jason H. Moore, Juan Romero, George D. Smith, Giovanni Squillero, and Hideyuki Takagi, editors, Applications of Evolutionary Computing, volume 3907 of Lecture Notes in Computer Science, pages 764–775. Springer Berlin Heidelberg, 2006.

[13] Pei-Chann Chang and Shih-Hsin Chen. The development of a sub-population genetic algorithm II (SPGA II) for multi-objective combinatorial problems. Applied Soft Computing, 9(1):173–181, January 2009.

[14] P. C. Chu and J. E. Beasley. A genetic algorithm for the multidimensional knapsack problem. Journal of Heuristics, 4(1):63–86, June 1998.

[15] Agoston E. Eiben and J. E. Smith. Introduction to Evolutionary Computing. Springer-Verlag, 2003.

[16] Mitsuo Gen and Runwei Cheng. Genetic Algorithms and Engineering Optimization. John Wiley and Sons Inc., New York, USA, 2000.

[17] Said Hanafi and Christophe Wilbaut. Scatter search for the 0–1 multidimensional knapsack problem. Journal of Mathematical Modelling and Algorithms, 7(2):143–159, February 2008.

[18] Saïd Hanafi and Christophe Wilbaut. Improved convergent heuristics for the 0-1 multidimensional knapsack problem. Annals of Operations Research, 183:125–142, May 2011.

[19] Dayong Hu and Zhenqiang Yao. Stacker-reclaimer scheduling for raw material yard operation. In Third International Workshop on Advanced Computational Intelligence (IWACI), pages 432–436, 2010.

[20] Y. Jin and J. Branke. Evolutionary optimization in uncertain environments—a survey. IEEE Transactions on Evolutionary Computation, 9(3):303–317, June 2005.

[21] Fei Kang, Jun-Jie Li, and Qing Xu. Virus coevolution partheno-genetic algorithms for optimal sensor placement. Advanced Engineering Informatics, 22(3):362–370, July 2008.

[22] Hans Kellerer, Ulrich Pferschy, and David Pisinger. Multidimensional knapsack problems. In Knapsack Problems, pages 235–283. Springer Berlin Heidelberg, 2004.

[23] Sami Khuri, Thomas Bäck, and Jörg Heitkötter. The zero/one multiple knapsack problem and genetic algorithms. In Proceedings of the 1994 ACM Symposium on Applied Computing, SAC '94, pages 188–193, New York, NY, USA, 1994. ACM.

[24] Joost Langeveld and Andries P. Engelbrecht. Set-based particle swarm optimization applied to the multidimensional knapsack problem. Swarm Intelligence, 6(4):297–342, November 2012.

[25] Melanie Mitchell. An Introduction to Genetic Algorithms. MIT Press, Cambridge, MA, USA, 1998.

[26] Douglas C. Montgomery and George C. Runger. Applied Statistics and Probability for Engineers. John Wiley and Sons, 2003.

[27] Naoki Mori and Hajime Kita. Genetic algorithms for adaptation to dynamic environments—a survey. In 26th Annual Conference of the IEEE Industrial Electronics Society (IECON 2000), pages 2947–2952, 2000.

[28] Trung Thanh Nguyen, Shengxiang Yang, and Juergen Branke. Evolutionary dynamic optimization: A survey of the state of the art. Swarm and Evolutionary Computation, 6:1–24, 2012.

[29] G. R. Raidl. An improved genetic algorithm for the multiconstrained 0-1 knapsack problem. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pages 207–211, May 1998.

[30] Celso C. Ribeiro, Simone L. Martins, and Isabel Rosseti. Metaheuristics for optimization problems in computer communications. Computer Communications, 30(4):656–669, February 2007.

[31] Masatoshi Sakawa and Kosuke Kato. Genetic algorithms with double strings for 0–1 programming problems. European Journal of Operational Research, 144(3):581–597, February 2003.

[32] Wei Shih. A branch and bound method for the multiconstraint zero-one knapsack problem. The Journal of the Operational Research Society, 30(4):37–47, 1979.

[33] Ren Shuai, Wang Jing, and Xuejun Zhang. Research on chaos partheno-genetic algorithm for TSP. In 2010 International Conference on Computer Application and System Modeling (ICCASM), volume 1, pages V1-290–V1-293, 2010.

[34] Anabela Simões and Ernesto Costa. Using genetic algorithms to deal with dynamic environments: A comparative study of several approaches based on promoting diversity. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '02), New York, NY, USA, 2002. Morgan Kaufmann Publishers.

[35] S. N. Sivanandam and S. N. Deepa. Introduction to Genetic Algorithms. Springer-Verlag, Berlin, 2008.

[36] Marcel Turkensteen, Diptesh Ghosh, Boris Goldengorin, and Gerard Sierksma. Iterative patching and the asymmetric traveling salesman problem. Discrete Optimization, 3(1):63–77, 2006.

[37] Ali Nadi Ünal. A genetic algorithm for the multiple knapsack problem in dynamic environment. In Proceedings of the World Congress on Engineering and Computer Science 2013, page 1162, San Francisco, USA, 2013. IAENG.

[38] Şima Uyar and H. Turgut Uyar. A critical look at dynamic multidimensional knapsack problem generation. In Mario Giacobini, Anthony Brabazon, Stefano Cagnoni, Gianni A. Caro, Anikó Ekárt, Anna Isabel Esparcia-Alcázar, Muddassar Farooq, Andreas Fink, and Penousal Machado, editors, Applications of Evolutionary Computing, volume 5484 of Lecture Notes in Computer Science, pages 762–767. Springer Berlin Heidelberg, 2009.

[39] M. Jalali Varnamkhasti. Overview of the algorithms for solving the multidimensional knapsack problems. Advanced Studies in Biology, 4(1):37–47, 2012.

[40] Michel Vasquez and Yannick Vimont. Improved results on the 0–1 multidimensional knapsack problem. European Journal of Operational Research, 165:70–81, August 2005.

[41] Vincent Boyer, Didier El Baz, and Moussa Elkihel. Solution of multidimensional knapsack problems via cooperation of dynamic programming and branch and bound. European Journal of Industrial Engineering, 4(4):434–449, 2010.

[42] Jingli Wu. A parthenogenetic algorithm for haplotyping a single individual based on WMLF model. In Eighth International Conference on Natural Computation (ICNC), pages 622–626, May 2012.

[43] Jingli Wu and Hua Wang. A parthenogenetic algorithm for the founder sequence reconstruction problem. Journal of Computers, 8(11):2934–2941, November 2013.

[44] Jingli Wu, Jianxin Wang, and Jian'er Chen. A parthenogenetic algorithm for single individual SNP haplotyping. Engineering Applications of Artificial Intelligence, 22(3):401–406, 2009.

[45] Shengxiang Yang. Memory-based immigrants for genetic algorithms in dynamic environments. In Proceedings of the 2005 Conference on Genetic and Evolutionary Computation (GECCO '05), page 1115, New York, NY, USA, 2005. ACM Press.

[46] Quan Yuan and Zhixin Yang. On the performance of a hybrid genetic algorithm in dynamic environments. Applied Mathematics and Computation, 219(24):11408–11413, August 2013.