Memetic Comp. DOI 10.1007/s12293-008-0004-5
REGULAR RESEARCH PAPER
Memetic algorithms for solving job-shop scheduling problems S. M. Kamrul Hasan · Ruhul Sarker · Daryl Essam · David Cornforth
Received: 29 June 2008 / Accepted: 15 October 2008 © Springer-Verlag 2008
Abstract The job-shop scheduling problem is well known for its complexity as an NP-hard problem. We have considered JSSPs with an objective of minimizing makespan while satisfying a number of hard constraints. In this paper, we develop a memetic algorithm (MA) for solving JSSPs. Three priority rules were designed, namely partial reordering, gap reduction and restricted swapping, and used as local search techniques in our MA. We have solved 40 benchmark problems and compared the results obtained with a number of established algorithms in the literature. The experimental results show that the MA, as compared to the GA, not only improves the quality of solutions but also reduces the overall computational time.

Keywords Job-shop scheduling · Genetic algorithm · Memetic algorithm · Local search · Priority rules

S. M. K. Hasan (B) · R. Sarker · D. Essam · D. Cornforth
School of Information Technology and Electrical Engineering, University of New South Wales at the Australian Defence Force Academy, Northcott Drive, Canberra, ACT 2600, Australia
e-mail: [email protected] (S. M. K. Hasan); [email protected] (R. Sarker); [email protected] (D. Essam); [email protected] (D. Cornforth)

1 Introduction

The job-shop scheduling problem (JSSP) is a well-known practical planning problem in the manufacturing sector. A classical JSSP is a combination of N jobs and M machines. Each job consists of a set of operations that has to be
processed on a set of known machines, where each operation has a known processing time. A schedule is a complete set of operations required by a job, to be performed on different machines in a given order. In addition, the process may need to satisfy other constraints, such as: (1) no more than one operation of any job can be executed simultaneously and (2) no machine can process more than one operation at the same time. The objectives usually considered in JSSPs are the minimization of makespan, the minimization of tardiness, and the maximization of throughput. The total time between the starting of the first operation and the ending of the last operation is termed the 'makespan'. In this research, the objective considered is makespan minimization, as this is more practical than other objectives used in solving JSSPs [2,4,5,17,25–28]. The JSSP is widely acknowledged as one of the most difficult NP-complete problems [7,8,16] and is also well known for its practical applications in many manufacturing industries. Over the last few decades, a good number of algorithms have been developed to solve JSSPs. However, no single algorithm is suitable for solving all kinds of JSSPs with a reasonably good solution within a reasonable computational effort. Thus, there is scope to analyze the difficulties of JSSPs as well as to design improved algorithms that may be able to solve them effectively. Over the last few decades, a good amount of research has been reported aiming to solve JSSPs using genetic algorithms (GAs) and hybrid GAs [3,5,6,17,18,26–28]. The earliest application of GAs for solving JSSPs was reported in the mid-1980s by Lawrence [14]. In recent times, it has become common practice to improve the performance of a GA by incorporating different search and heuristic techniques, and this approach is readily applied to solving the JSSP; examples include the hybrid methods proposed by Shigenobu et al. [24], Park et al. [21], Croce et al. [5] and Ombuki and Ventresca [18].
In this research, we first develop a traditional genetic algorithm (GA) for solving JSSPs. In this GA, each individual represents a particular schedule and the individuals are represented by binary chromosomes. After reproduction, any infeasible individual is repaired to be feasible. The phenotype representation of the problem is a matrix of m × n integers, where each row represents the sequence of jobs on a given machine. We apply both genotype and phenotype representations to analyze the schedules. The binary genotype is effective for the simple crossover and mutation techniques. After analyzing the traditional GA solutions, we realized that the solutions could be further improved by applying simple rules or local search. The performance of the GA was first improved by incorporating a simple priority rule; that algorithm was tested by solving 14 small test problems [11]. This work was later extended by introducing rules for jobs on bottleneck machines, and for gaps left within intermediate solutions [12]. These rules were tested with a set of test problems and compared with a few other existing algorithms [10]. The current paper differs from those publications as follows: we have modified the rules introduced in those papers, tested a few other rules, provided justification for the use of local search, experimented with different combinations of rules, and performed statistical and parametric analysis. We formally introduce three priority rules here for improving the performance of the traditional GA, namely: partial reordering (PR), gap reduction (GR) and restricted swapping (RS). These rules are basically local search techniques which are applied in addition to the genetic operators. The details of these priority rules are discussed in a later section. A GA with local search, which is known as a memetic algorithm (MA), can use one or more local search techniques.
For convenience of explanation, in this paper we designate them as MA(PR), MA(GR) and MA(GR-RS). To test the performance of our proposed algorithms, we have solved 40 benchmark problems originally presented in Lawrence [15]. The proposed MAs improved upon the performance of traditional GAs in solving JSSPs. Among the MAs, MA(GR-RS) was the best-performing algorithm: it obtained optimal solutions for 27 out of 40 test problems. The overall performance of MA(GR-RS) is better than many key JSSP algorithms appearing in the literature. The algorithms proposed by Ombuki and Ventresca [18] and Croce et al. [5] are basically memetic algorithms: Ombuki and Ventresca [18] applied only swapping of jobs with a GA, and Croce et al. [5] used traditional priority rules with a GA, to improve the performance of their algorithms. Our algorithm has the ability to explore the gaps and bottlenecks, and remove them using the local search schemes, by: (1) reordering the machines by giving priority to the bottleneck machines, (2) identifying the gaps in the schedule and filling them with jobs that reduce the makespan and (3) swapping adjacent jobs where there is any benefit.
The paper is organized as follows: after the introduction, the problem definition is presented in Sect. 2. Section 3 discusses the chromosome representation for JSSPs and how infeasibility is handled. Section 4 introduces new local search techniques for improving the performance of the traditional GA. Section 5 presents the proposed algorithms and implementation aspects. Section 6 provides the experimental results and the parameter analysis. Finally, the conclusions and future research directions are presented.
2 Problem definition

The standard job-shop scheduling problem makes the following assumptions:

• Each job consists of a finite number of operations.
• The processing time for each operation using a particular machine is defined.
• There is a pre-defined sequence of operations that has to be maintained to complete each job.
• Delivery times of the products are undefined.
• There is no setup or tardiness cost.
• A machine can process only one job at a time.
• Each job is performed on each machine only once.
• No machine can deal with more than one type of task.
• The system cannot be interrupted until each operation of each job is finished.
• No machine can halt a job and start another job before finishing the previous one.
• Each and every machine has full efficiency.

The objective of the problem is the minimization of the total time taken to complete each and every operation, while satisfying the machining constraints and the required operational sequence of each job. In this research, we develop three different algorithms for solving JSSPs. These algorithms are briefly discussed in the next three sections.
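To make the objective concrete, the makespan of a given feasible schedule can be computed by a simple simulation that respects both the machine-capacity and operation-order constraints above. The sketch below is ours, not from the paper; the two-job instance at the end is hypothetical.

```python
def makespan(routes, times, schedule):
    """Compute the makespan of a feasible schedule.

    routes[j]   -- machine order for job j, e.g. ["m1", "m2"]
    times[j][k] -- processing time of job j's k-th operation
    schedule[m] -- job order on machine m
    """
    machine_free = {m: 0 for m in schedule}  # time each machine becomes idle
    job_free = {j: 0 for j in routes}        # time each job's last op finished
    next_op = {j: 0 for j in routes}         # index of each job's next operation
    pos = {m: 0 for m in schedule}           # next slot in each machine's sequence
    remaining = sum(len(r) for r in routes.values())
    while remaining:
        progressed = False
        for m, seq in schedule.items():
            if pos[m] == len(seq):
                continue
            j = seq[pos[m]]
            if routes[j][next_op[j]] != m:   # job j is not ready for this machine yet
                continue
            start = max(machine_free[m], job_free[j])
            machine_free[m] = job_free[j] = start + times[j][next_op[j]]
            next_op[j] += 1
            pos[m] += 1
            remaining -= 1
            progressed = True
        if not progressed:
            raise ValueError("infeasible schedule (deadlock)")
    return max(machine_free.values())

# Hypothetical 2-job, 2-machine instance
routes = {"j1": ["m1", "m2"], "j2": ["m2", "m1"]}
times = {"j1": [3, 2], "j2": [2, 3]}
schedule = {"m1": ["j1", "j2"], "m2": ["j2", "j1"]}
ms = makespan(routes, times, schedule)  # j2's second operation ends at time 6
```

The simulation repeatedly schedules any operation whose machine slot and job predecessor are both available, which is exactly the pair of constraints stated above.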
3 Job-shop scheduling with memetic algorithm

As indicated earlier, we consider the minimization of makespan as the objective of JSSPs. According to the problem definition, the sequence of machine use by each job is given (this is also the sequence of operations, as any one machine is capable of performing only one type of operation). In this case, if we know either the starting or finishing time of each operation, we can calculate the makespan for each job and hence generate the whole schedule. In JSSPs, the main problem is to find the sequence of jobs to be operated on each machine that minimizes the overall makespan. The chromosome representation is an important issue in solving JSSPs
using GAs and MAs. We discuss the representation aspect below.

3.1 Chromosome representation

In solving JSSPs using either GAs or MAs, the chromosome of each individual usually comprises the schedule. Chromosomes can be represented by binary, integer or real numbers. Some popular representations for solving JSSPs are: operation-based, job-based, preference-list-based, priority-rule-based, and job pair-relationship-based representations [23]. We select the job pair-relationship-based representation for the genotype, as in [17,26], due to the flexibility of applying genetic operators to it. In this representation, a chromosome is symbolized by a binary string, where each bit stands for the order of a job pair (u, v) for a particular machine m. For a chromosome C_p:

C_(p,m,u,v) = 1 if job j_u precedes job j_v on machine m, and 0 otherwise   (1)

This means that for the individual p, the job u must precede the job v on machine m. The job having the maximum number of 1s is the highest-priority job for that machine. The length of each chromosome is

l = m × (n − 1) × n / 2   (2)
where n stands for the number of jobs, m for the number of machines, and the length l is the number of pairs formed by a job with any other job. This binary string acts as the genotype of individuals. It is possible to construct a phenotype, which is the job sequence for each machine. This construction is described in Table 1. The binary representation is helpful if the conventional crossover and mutation techniques are used. We use the binary representation for the flexibility of applying simple reproduction operators. We also use the constructed phenotype as the chromosome on which to apply local search methods.

Table 1 Derivation of the phenotype from the binary genotype and pre-defined sequences

1.A (binary chromosome)

Bit:       1   0   1   0   0   1   1   0   0
Machine:   m1  m3  m2  m1  m3  m2  m2  m1  m3
Job pair:  j1–j2        j1–j3        j2–j3

1.B.1 – m1         1.B.2 – m2         1.B.3 – m3
     j1 j2 j3  S        j1 j2 j3  S        j1 j2 j3  S
j1   *  1  0   1   j1   *  1  1   2   j1   *  0  0   0
j2   0  *  0   0   j2   0  *  1   1   j2   1  *  0   1
j3   1  1  *   2   j3   0  0  *   0   j3   1  1  *   2

1.C (pre-defined operational sequences)   1.D (phenotype/schedule)
      j1  j2  j3                                m1  m2  m3
jo1   m1  m2  m1                          mt1   j3  j1  j3
jo2   m3  m1  m3                          mt2   j1  j2  j2
jo3   m2  m3  m2                          mt3   j2  j3  j1
In this algorithm, we perform simple two-point crossover and bit-flip mutation. The crossover points are selected randomly. After performing crossover and mutation, we map the phenotype directly from the binary string, i.e. the chromosome. We apply two repairing techniques, local and global harmonization, to turn infeasible solutions into feasible ones.

3.2 Local harmonization

This technique is used during construction of the phenotype (i.e. the sequence of operations for each machine) from the binary genotype. From a chromosome of length l, m tables are formed. Table 1 shows the way to construct the phenotype from the genotype by applying local harmonization. Table 1.A represents the binary chromosome (for a 3-job, 3-machine problem), where each bit represents the preference of one job with respect to another job on the corresponding machine. The third row shows the job pairs in a given order. The second row indicates the order of the machines for the first job of the pair shown in the third row. The first bit is 1, which means that job j1 will appear before job j2 on machine m1. Tables 1.B.1, 1.B.2 and 1.B.3 represent the job pair-based relationship on machines m1, m2 and m3, respectively, as defined in Eq. (1). In Table 1.B.1, the '1' in cell j1–j2 indicates that job j1 will appear before job j2 on machine m1. Similarly, the '0' in cell j1–j3 indicates that job j1 will not appear before job j3 on machine m1. In the same tables, column S represents the priority of each job, which is the row sum of all the 1s for the job presented in each row. A higher number represents a higher priority, because that job precedes many other jobs. So for machine m1, job j3 has the highest priority. If more than one job has equal priority on a given machine, a repairing technique modifies the order of these jobs to introduce different priorities. Consider a situation where the 1-valued cells for a given machine are j1–j2, j2–j3 and j3–j1. This would give S = 1 for all jobs on that machine. By swapping the content of cells j1–j3 and j3–j1, it would provide S = 2, 1 and 0 for jobs j1, j2 and j3, respectively. Table 1.C shows the pre-defined operational sequence of each job; in this table, jo1, jo2 and jo3 represent the first, second and third operation of a given job. According to the priorities found from 1.B, Table 1.D is generated, which is the phenotype or schedule. For example, the sequence on m1 is j3 j1 j2, because in 1.B.1, j3 is the highest-priority and j2 is the lowest-priority job. In Table 1.D, the rows (mt1, mt2 and mt3) represent the first, second and third task on a given machine. For example, the first task on machine m1 is to process the first task of job j3. Further details on this technique can also be found in [17,26,28].
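The priority computation described above can be sketched as follows. The bit values correspond to Eq. (1) for a single machine; the function and variable names are ours, and tie repair is omitted.

```python
def machine_priorities(jobs, precede):
    """Order jobs on one machine by priority S.

    precede[(u, v)] -- 1 if job u precedes job v on this machine (Eq. 1);
    one bit is stored per unordered pair, as in the chromosome.
    """
    s = {j: 0 for j in jobs}
    for (u, v), bit in precede.items():
        if bit == 1:
            s[u] += 1  # u precedes v
        else:
            s[v] += 1  # v precedes u
    # Higher S means higher priority; equal-S ties are repaired separately.
    return sorted(jobs, key=lambda j: -s[j]), s

# Bits for machine m1 in Table 1: j1-j2 = 1, j1-j3 = 0, j2-j3 = 0
order, s = machine_priorities(
    ["j1", "j2", "j3"],
    {("j1", "j2"): 1, ("j1", "j3"): 0, ("j2", "j3"): 0},
)
# order reproduces the m1 sequence j3, j1, j2 of Table 1.D
```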
3.3 Global harmonization
For an m × n job-shop scheduling problem, there are (n!)^m possible solutions, of which only a small percentage are feasible. The solutions mapped from the chromosome are not guaranteed to be feasible. Global harmonization is a repairing technique for changing infeasible solutions into feasible solutions. Suppose job j3 must have its first, second and third operations processed on machines m3, m2 and m1, respectively, and job j1 must have its first, second and third operations processed on machines m1, m3 and m2, respectively. Further assume that an individual solution (or chromosome) indicates that j3 is scheduled on machine m1 first, as its first operation, followed by job j1. Such a schedule is infeasible, as it violates the defined sequence of operations for job j3. In this case, swapping the places of jobs j1 and j3 on machine m1 would allow job j1 to have its first operation on m1 as required, and it may provide an opportunity for job j3 to visit m3 and m2 before visiting m1, as per its order. Usually, the process identifies the violations sequentially and performs swaps one by one until the entire schedule is feasible. Some swaps performed earlier in the process may need to be swapped back to their original positions to make the entire schedule feasible. This technique is useful not only for binary representations, but also for job-based or operation-based representations. Further details on the use of global harmonization with GAs (or MAs) for solving JSSPs can be found in [17,26,28]. In our proposed algorithm, we perform multiple repairs to reduce the deadlock frequency. As soon as a deadlock occurs, the algorithm identifies at most one operation from each job that can be scheduled immediately.
Starting from the first operation, the algorithm identifies the corresponding machine of the operation and swaps the tasks on that machine so that at least the selected task avoids deadlock the next time. For n jobs, the risk of getting into deadlock is thus removed for at least n operations. After performing global harmonization, we obtain a population of feasible solutions. We then calculate the makespan of all the feasible individuals and rank them based on their fitness values. We then apply genetic operators to generate the next population. This process continues until the stopping criteria are satisfied.
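The following is an illustrative repair loop in the spirit of global harmonization, not the authors' exact procedure: schedule construction proceeds until it deadlocks, and on deadlock the blocked job is swapped with the first job on the same machine that is schedulable immediately. The instance mirrors the j1/j3 example above (j3's route is m3, m2, m1 and j1's is m1, m3, m2).

```python
def harmonize(schedule, routes):
    """Repair machine sequences whenever schedule construction deadlocks.

    schedule[m] (a list) is mutated in place; returns the number of swaps.
    """
    next_op = {j: 0 for j in routes}
    pos = {m: 0 for m in schedule}
    total = sum(len(r) for r in routes.values())
    done, swaps = 0, 0
    while done < total:
        progressed = False
        for m, seq in schedule.items():
            if pos[m] == len(seq):
                continue
            j = seq[pos[m]]
            if routes[j][next_op[j]] == m:   # j's next operation is on m: schedule it
                next_op[j] += 1
                pos[m] += 1
                done += 1
                progressed = True
        if not progressed:                   # deadlock: repair with one swap
            for m, seq in schedule.items():
                for i in range(pos[m] + 1, len(seq)):
                    if routes[seq[i]][next_op[seq[i]]] == m:
                        seq[pos[m]], seq[i] = seq[i], seq[pos[m]]
                        swaps += 1
                        break
                else:
                    continue
                break
            else:
                raise ValueError("cannot repair schedule")
    return swaps

routes = {"j1": ["m1", "m3", "m2"], "j3": ["m3", "m2", "m1"]}
schedule = {"m1": ["j3", "j1"], "m2": ["j3", "j1"], "m3": ["j1", "j3"]}
n_swaps = harmonize(schedule, routes)
# a single swap of j3 and j1 on m1 makes the whole schedule constructible
```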
4 Priority rules and local search

As reported in the literature, different priority rules and local searches are used in conjunction with GAs to improve JSSP solutions. Dorndorf and Pesch [6] proposed twelve different priority rules for achieving better solutions for JSSPs; however, they suggested choosing only one of these rules while evaluating the chromosome. In this section, we introduce three new priority rules. We propose to use these rules as local search techniques for selected individuals after the fitness evaluation. The action of a local search is accepted if and only if it improves the solution. As the improvement is passed to the chromosome, and can thus be transferred to the offspring, this follows Lamarckian-type learning [19]. The local searches are briefly discussed below.

4.1 Partial reordering (PR)

In the first rule, we identify the machine (mk) which is the deciding factor for the makespan in phenotype p, and the last job (jk) that is to be processed by machine mk. The machine mk can be termed the bottleneck machine in the chromosome under consideration. Then we find the machine (say m) required by the first operation of job jk. The reordering rule then suggests that the first operation of job jk must be the first task on machine m, if it is not already scheduled as such. If we move job jk from its current lth position to the first position, we may need to push some other jobs currently scheduled on machine m to the right. In addition, it may provide an opportunity to shift some jobs to the left on other machines. The overall process helps to reduce the makespan for some chromosomes. Algorithms I and II in the Appendix describe this reordering process. The following explains the reordering process with a simple example.

Fig. 1 Gantt chart of the solution (a) before applying the partial reordering and (b) after applying partial reordering and reevaluation

In Fig. 1a, the makespan is the completion time
of job j3 on machine m1. That means machine m1 is the bottleneck machine. Here, job j3 requires machine m3 for its first operation. If we move j3 from its current position to the first operation of machine m3, it is necessary to shift job j2 to the right to keep a feasible schedule on machine m3. These changes create an opportunity to move job j1 on m3, j3 on m2 and j3 on m1 to the left without violating the operational sequences. As can be seen in Fig. 1b, the resulting chromosome improves its makespan; the change of makespan is indicated by the dotted line. Algorithm II in the Appendix also shows how the partial reordering can be done.

4.2 Gap reduction (GR)

After each generation, the generated phenotype usually leaves some gaps between the jobs. Sometimes these gaps are necessary to satisfy the precedence constraints. However, in some cases a gap can be removed or reduced by placing into it a job from the right side of the gap. For a given machine, this is like swapping between a gap on the left and a job on the right of a schedule. In addition, a gap may be removed or reduced by simply moving a job into an adjacent gap to its left. This process helps to develop a compact schedule from the left, continuing up to the last job for each machine.
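The feasibility test at the heart of gap reduction can be sketched as follows: on one machine's timeline, find the earliest gap in which a candidate operation fits, given the earliest start allowed by its job's precedence constraints. This is our simplified reading; the full rule (Algorithm III) also re-schedules the affected operations.

```python
def find_gap(ops, ready, dur):
    """Return a feasible start time inside a gap on one machine, or None.

    ops   -- [(job, start, end), ...] sorted by start time (machine timeline)
    ready -- earliest time the candidate operation may start (job precedence)
    dur   -- processing time of the candidate operation
    Space after the last operation is ignored: appending at the end
    cannot reduce the makespan.
    """
    prev_end = 0
    for _, start, end in ops:
        earliest = max(prev_end, ready)      # earliest start inside this gap
        if earliest + dur <= start:          # fits before the next operation
            return earliest
        prev_end = end
    return None

# j2 occupies [4, 6); a 2-unit operation ready at time 1 fits into the gap, at 1
slot = find_gap([("j2", 4, 6)], ready=1, dur=2)
no_slot = find_gap([("j2", 1, 3)], ready=0, dur=2)  # the gap [0, 1) is too small
```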
Of course, the rule must ensure no conflict or infeasibility before accepting a move. It identifies the gaps in each machine, and the candidate jobs which can be placed in those gaps without violating the constraints and without increasing the makespan. The same process is carried out for any possible shift of jobs to the left of the schedule. The gap reduction rule, with swapping between a gap and a job, is explained using a simple example. A simple instance of a schedule is shown in Fig. 2a. In the phenotype p, j1 follows j2 on machine m2; however, job j1 can be placed before j2, as shown in Fig. 2b, due to the presence of an unused gap before j2. A swap between this gap and job j1 allows the processing of j1 on m2 earlier than the time shown in Fig. 2a. This swapping of j1 on m2 creates an opportunity to move this job to the left on machine m3 (see Fig. 2c). Finally, j3 on m2 can also be moved to the left, which ultimately reduces the makespan as shown in Fig. 2d. Algorithm III in the Appendix gives the step-by-step GR algorithm.

Fig. 2 Two steps of a partial Gantt chart while building the schedule from the phenotype for a 3 × 3 job-shop scheduling problem. The x axis represents the execution time and the y axis represents the machines

4.3 Restricted swapping (RS)

For a given machine, the restricted swapping rule allows swapping between adjacent jobs if and only if the resulting schedule is feasible. This process is carried out only for the job which takes the longest time to complete. Suppose job j takes the longest time to complete in the phenotype p. The algorithm starts from the last operation of j in p and checks with the immediate predecessor operation whether these two are swappable or not. The necessary conditions for swapping are: neither operation can start before the finishing time of the immediate predecessor operation of its corresponding job, and both operations have to finish before the immediate successor operations of their corresponding jobs start. Interestingly, the algorithm does not destroy the feasibility of the solution. It may change the makespan if either operation is the last operation on the corresponding machine, but it also gives an alternative solution which may improve the fitness of the solution in successive generations, when the phenotype is rescheduled. The details of this algorithm are given in Algorithm IV in the Appendix. This process also allows swapping between two randomly selected individuals, and is done for a few individuals only. As the complexity of this algorithm is simply of order n, it does not affect the overall computational complexity much.
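The two swapping conditions can be written as a feasibility check. This is our sketch: each operation is a (job, start, duration) triple, `prev_end` gives the finish time of each job's preceding operation and `next_start` the start time of its succeeding operation, all other times being held fixed.

```python
def can_swap(a, b, prev_end, next_start):
    """Check whether adjacent operations a and b (a currently first) on the
    same machine may be swapped under the restricted swapping conditions."""
    (ja, sa, da), (jb, sb, db) = a, b
    slot = min(sa, sb)                        # the machine slot now begins with b
    new_sb = max(slot, prev_end.get(jb, 0))   # b may not start before its job predecessor ends
    new_sa = max(new_sb + db, prev_end.get(ja, 0))
    # both operations must finish before their job successors start
    return (new_sb + db <= next_start.get(jb, float("inf"))
            and new_sa + da <= next_start.get(ja, float("inf")))

# j1 runs in [0, 3), j2 in [3, 5); their successors start at 6 (j1) and 9 (j2)
ok = can_swap(("j1", 0, 3), ("j2", 3, 2), {}, {"j1": 6, "j2": 9})
# with j1's successor starting at 4, the swap would delay j1 too far
bad = can_swap(("j1", 0, 3), ("j2", 3, 2), {}, {"j1": 4, "j2": 9})
```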
5 Implementation

First, we implemented a simple GA for solving JSSPs. Each individual is represented by a binary chromosome. We use the job-pair relationship-based representation, as in Nakano and Yamada [17,26] and Paredis et al. [20], and simple two-point crossover and bit-flip mutation as the reproduction operators. We carried out a set of experiments with different crossover and mutation rates to analyze the performance of the algorithm. After the successful implementation of the GA, we implemented three versions of MAs by introducing the priority rules discussed in the last section as local search techniques:

• MA(PR): partial reordering rule with GA
• MA(GR): gap reduction rule with GA and
• MA(GR-RS): gap reduction and restricted swapping rules with GA

In both the GA and the MAs, we apply elitism in each generation to preserve the best solution found so far, and also to inherit the elite individuals more than the rest [9,13]. In performing the crossover operation, we use a tournament selection that chooses one individual from the elite class of individuals (i.e. the top 15%) and two individuals from the rest. A tournament is played between the last two, and crossover is performed between the winner and the elite individual. We rank the individuals on the basis of their fitness values. A high selection pressure on the better individuals may contribute to premature convergence. In particular, we consider the situation where 50% or more of the elite class consists of the same solution; in this case, their offspring will be quite similar after some generations. When this occurs, a higher mutation rate is used to help diversify the population. We carried out experiments varying the crossover and mutation rates for all the test problems; from the experimental results, the best crossover and mutation rates were identified as 0.45 and 0.35, respectively. Detailed results and analysis are discussed in the next section. We set the population size to 2500 and the number of generations to 1000. Note that JSSPs usually require a large population size; for example, [22] used a population size of 5000 even for 10 × 10 problems. In our approach, GR is applied to every individual, whereas PR and RS are applied to only 5% of randomly selected individuals in every generation. In implementing all local searches, we use the following conditions:

• A change in the individual, due to local search, will be accepted if it improves the fitness value.
• If the fitness value does not improve, the change can still be accepted if it is better than a certain threshold value.

To test the performance of our proposed algorithms, we have solved the 40 benchmark problems designed by Lawrence [15] and have compared our results with several existing algorithms. The problems range from 10 × 5 to 30 × 10, where n × m represents n jobs and m machines.

6 Result and analysis

The results for the benchmark problems were obtained by executing the algorithms on a personal computer. Each problem was run 30 times, and the results and parameters are tabulated in Tables 2, 3 and 4.
Table 2 compares the performance of the four algorithms we implemented [GA, MA(PR), MA(GR) and MA(GR-RS)] in terms of the percentage average relative deviation (ARD) from the best result published in the literature, the standard deviation of the percentage relative deviation (SDRD), and the average number of fitness evaluations required. From Table 2, it is clear that the performance of the MAs is better than that of the GA, and MA(GR) is better than both MA(PR) and GA. The addition of RS to MA(GR), which is known as MA(GR-RS), has clearly enhanced the performance of the algorithm. Out of the 40 test problems, both MA(GR) and MA(GR-RS) obtained exact optimal solutions for 23 problems. In addition, MA(GR-RS) obtained optimal solutions for another 4 problems and substantially improved solutions for 10 other problems. In general, these two algorithms converged quickly, which can be seen from the average number of fitness evaluations.

Table 2 Comparing our four algorithms for 40 test problems

Algorithm    Optimal found   ARD (%)   SDRD (%)   Average # of   Average # of           Average computational
                                                  generations    fitness eval. (10^3)   time (s)
GA           15              3.591     4.165      270.93         664.90                 201.60
MA(PR)       16              3.503     4.192      272.79         660.86                 213.42
MA(GR)       23              1.360     2.250      136.54         356.41                 105.87
MA(GR-RS)    27              0.968     1.656      146.63         388.58                 119.81

As shown in Table 2, the addition of the local search techniques to the GA (for the last two MAs) not only improves the quality of solutions significantly but also helps the algorithms converge within a lower number of generations and a lower total number of fitness evaluations. However, as the local search techniques require additional computation, the computational time per generation for all three MAs is higher than for the GA: for example, the average computational times taken per generation by GA, MA(PR), MA(GR) and MA(GR-RS) are 0.744, 0.782, 0.775 and 0.817 s, respectively. Interestingly, the overall average computational time per test problem solved is much lower for MA(GR) and MA(GR-RS) than for the other two algorithms. As per Table 2, over all 40 test problems, MA(GR-RS) improved the average of the best solutions over GA by 2.623%, while reducing the computational time by 40.57% on average per problem.

To analyze the individual contribution of the local search methods, we experimented on a sample of five problems (la21–la25) with the same set of parameters in the same computing environment. For these problems, the individual percentage improvements of PR, GR and RS over GA after 100, 250 and 1000 generations are reported in Table 3.

Table 3 Individual contribution of the priority rules after 100, 250 and 1000 generations

Local search    % improvement from GA after generation    No. of problems
algorithm       100     250     1000
PR              1.33    1.12    0.38                      40 (la01–la40)
RS              0.89    1.05    0.18
GR              8.92    7.73    6.31

Table 4 Percentage relative improvement of the five problems (la21–la25) when changing the rate of applying the rules

Rule    5%      10%     15%     20%     25%
PR      5.26    4.94    0.17    2.80    1.37
RS      4.20    4.95    2.22    3.02    3.03

The % improvement for a local search (LS) after n generations is

[(Fitness of LS)_n − (Fitness of GA)_n] / (Fitness of GA)_n × 100
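As extracted, the sign convention of this formula is ambiguous. Reading fitness as makespan (lower is better), a positive improvement corresponds to a relative reduction, which the helper below makes explicit; the function name and the sample makespans are ours.

```python
def pct_improvement(fitness_ls, fitness_ga):
    """Percentage improvement of a local-search variant over the plain GA
    after the same number of generations, with makespan as fitness
    (lower is better): positive means the local search reduced it."""
    return (fitness_ga - fitness_ls) / fitness_ga * 100.0

# Hypothetical makespans: GA at 1000, a local-search variant at 910.8
imp = pct_improvement(910.8, 1000.0)  # 8.92% improvement
```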
Although all three priority rules have a positive effect, GR's contribution is significantly higher than that of the other two rules and is consistent over many generations. Interestingly, in the case of GR, the average rate of improvement gradually decreases as the generation number increases (8.92% after 100 generations and 6.31% after 1000 generations). The reason for this is that GR starts with a set of high-quality initial solutions. Although PR and RS have no significant effect on the solutions after the first evaluation, GR provided an 18.17% improvement compared to GA, measured as the average improvement of the best makespan after the first evaluation, without applying any other genetic operators. To observe the contribution more closely, we recorded the improvement due to each individual local search in every one of the first 100 generations. We observed that GR consistently outperformed the other two local search algorithms. PR is effective only for the bottleneck jobs, whereas GR was applied to all individuals. The process of GR eventually makes most of the changes performed by PR over some (or many) generations. We identified a number of individuals where PR could make a positive contribution and applied GR to those individuals to compare their relative contributions. For the five problems we considered, over 1000 generations, GR made a 9.13% higher improvement than PR. It must be noted that GR is able to make all the changes that PR does, so PR cannot make an extra contribution over GR. As a result, the inclusion of PR with GR does not help to improve the performance of the algorithm, which is why we do not present other possible variants of MAs, such as MA(PR-RS), MA(GR-PR) and MA(GR-RS-PR).
We have also tested a number of other priority rules, such as: (1) the job with the longest total machining time required under an unconstrained situation gets the top priority, (2) identifying two or more bottleneck machines and applying the rules simultaneously, and (3) allowing multiple swaps in one go. As these rules did not provide significant benefits, we do not report them here. Both PR and RS were applied to only 5% of the individuals; the role of RS is mainly to increase diversity. A higher rate of PR and RS does not provide significant benefit either in terms of quality of solution or computational time. We experimented with varying the rate of PR and
RS individually, for five selected problems, from 5 to 25%, and tabulated the percentage relative improvement over GA in Table 4. From Table 4, it is clear that increasing the rate of applying PR and RS does not improve the quality of the solutions; moreover, it takes extra time to converge. Table 5 presents the best, average, standard deviation, median and worst fitness, together with the best-known fitness, for the 40 test problems, for our four algorithms [GA, MA(PR), MA(GR) and MA(GR-RS)]. As shown in Table 5, MA(GR-RS) outperformed our other three algorithms. We have compared the performance of our best algorithm, MA(GR-RS), with other published algorithms based on the average relative deviation (ARD) and the standard deviation of the relative deviations (SDRD), as presented in Table 6. As different authors used different numbers of problems, we have compared the results based on the number of test problems solved by the other authors. For example, as Ombuki and Ventresca [18] solved 25 (problems la16–la40) of the 40 test problems considered in this research, we calculated ARD and SDRD for these 25 problems to make a fairer comparison. As shown in Table 6, for 7 selected problems, MA(GR-RS) outperforms Croce et al. [5]. For 24 selected problems, MA(GR-RS) also outperforms the SB II algorithm of Adams et al. [2]. For 25 test problems (la16–la40), our proposed MA(GR-RS) is very competitive with SBGA (60) but much better than Ombuki and Ventresca [18]. When considering all 40 test problems, our MA(GR-RS) clearly outperformed all the algorithms compared in Table 6. Our algorithm, MA(GR-RS), works well for most benchmark problems tested. If the distribution of the jobs to all machines is uniform (in terms of the number of jobs requiring a machine) with respect to the time axis, the problem can be managed using GA alone.
However, if the concentration of jobs is high on one machine (or on a few machines) relative to the others within any time window, GA usually fails to achieve good-quality solutions. In those cases, our PR rule improves the solution by reordering the tasks to give priority to the bottleneck machines. Our combined GR and RS rules improve the solution by utilizing any useful gaps left in the schedules and by swapping adjacent jobs on a given machine.

6.1 Parameter analysis

In GAs and MAs, different reproduction parameters are used. We performed experiments with different combinations of parameters to identify an appropriate set and to study their effect on solutions. A higher selection pressure on better individuals, combined with a higher rate of crossover, reduces diversity, and hence the solutions converge prematurely. In JSSPs, as a large portion of the population converges to particular solutions, the probability of improving a solution falls because the rate of selecting the same solution increases. It is therefore important to find appropriate rates for crossover and mutation. Three sets of experiments were carried out as follows:

• Experiment 1 Crossover was varied from 0.60 to 0.95 in increments of 0.05, and mutation from 0.35 to 0.00 in decrements of 0.05, so that the crossover rate plus the mutation rate always equals 0.95. The detailed combinations are shown as 8 sets in Table 7.
• Experiment 2 Crossover was varied while mutation was kept fixed at the value from the best set of experiment 1.
• Experiment 3 Mutation was varied while crossover was kept fixed at the best value from experiment 2.

The detailed results for the different combinations are shown graphically in this section. Figure 3 shows how the quality of solutions varies with changing crossover and mutation rates. Our algorithm provided the best solution for parameter set 2, in which the crossover rate is 0.65 and the mutation rate is 0.30. In the second set of experiments, we varied the crossover rate from 0.70 to 0.35 with a step size of 0.05 while fixing the mutation rate at 0.30. Figure 4 presents the outcome of these experiments, which shows that a crossover rate of 0.45 is the best with a mutation rate of 0.30. In the third set of experiments, we fixed the crossover rate at 0.45 and varied the mutation rate from 0.50 to 0.00, showing the effect of mutation when the crossover rate was fixed at the best value found in the second set of experiments. Figure 5 shows that the algorithm performed better as the mutation rate was increased, and it performed well even with a mutation rate of 0.35, which is almost equal to the best performance observed in the first set of experiments.
It can be concluded from the above experiments that a higher mutation rate and a lower crossover rate perform better for this set of problems and this algorithm. It should be noted that the average relative deviation in Figs. 3, 4 and 5 is around 1.30%, whereas it is 0.97% in Table 6. This is because the results presented in Tables 5 and 6 are based on individual parameter tuning, whereas the results in Figs. 3, 4 and 5 are based on the same parameter set for all problems.
7 Conclusion

Although the JSSP is a very old and popular problem, no algorithm can yet assure the optimal solution for all test prob-
Table 5 Comparison of the % relative deviations from the best result found in literature with that of other authors

Problem and size^a  Algorithm   Best  Average  SD    Median  Worst
la01 10 × 5         Known-Best  666
                    GA          667   676.07   6.31  678.00  688
                    MA(PR)      667   675.13   5.64  678.00  688
                    MA(GR)      666   667.53   2.86  667.00  678
                    MA(GR-RS)   666   667.60   2.90  667.00  678
la02 10 × 5         Known-Best  655
                    GA          655   658.43   5.06  655.00  666
                    MA(PR)      655   659.67   5.37  660.50  666
                    MA(GR)      655   655.20   0.55  655.00  658
                    MA(GR-RS)   655   656.27   3.39  655.00  666
la03 10 × 5         Known-Best  597
                    GA          617   624.70   9.41  619.50  642
                    MA(PR)      617   625.67   5.81  628.00  640
                    MA(GR)      597   615.27   5.92  617.00  619
                    MA(GR-RS)   597   613.93   7.63  617.00  619
la04 10 × 5         Known-Best  590
                    GA          606   607.27   2.30  606.00  613
                    MA(PR)      606   606.67   2.09  606.00  611
                    MA(GR)      590   594.33   1.53  595.00  595
                    MA(GR-RS)   590   593.33   2.44  595.00  595
la05 10 × 5         Known-Best  593
                    GA          593   593.00   0.00  593.00  593
                    MA(PR)      593   593.00   0.00  593.00  593
                    MA(GR)      593   593.00   0.00  593.00  593
                    MA(GR-RS)   593   593.00   0.00  593.00  593
la06 15 × 5         Known-Best  926
                    GA          926   926.00   0.00  926.00  926
                    MA(PR)      926   926.00   0.00  926.00  926
                    MA(GR)      926   926.00   0.00  926.00  926
                    MA(GR-RS)   926   926.00   0.00  926.00  926
la07 15 × 5         Known-Best  890
                    GA          890   896.50   4.27  897.00  906
                    MA(PR)      890   894.67   3.01  897.00  897
                    MA(GR)      890   890.00   0.00  890.00  890
                    MA(GR-RS)   890   890.00   0.00  890.00  890
la08 15 × 5         Known-Best  863
                    GA          863   863.00   0.00  863.00  863
                    MA(PR)      863   863.00   0.00  863.00  863
                    MA(GR)      863   863.00   0.00  863.00  863
                    MA(GR-RS)   863   863.00   0.00  863.00  863
la09 15 × 5         Known-Best  951
                    GA          951   951.00   0.00  951.00  951
                    MA(PR)      951   951.00   0.00  951.00  951
                    MA(GR)      951   951.00   0.00  951.00  951
                    MA(GR-RS)   951   951.00   0.00  951.00  951
la10 15 × 5         Known-Best  958
                    GA          958   958.00   0.00  958.00  958
                    MA(PR)      958   958.00   0.00  958.00  958
                    MA(GR)      958   958.00   0.00  958.00  958
                    MA(GR-RS)   958   958.00   0.00  958.00  958
Table 5 continued

Problem and size^a  Algorithm   Best  Average  SD     Median   Worst
la11 20 × 5         Known-Best  1222
                    GA          1222  1222.00  0.00   1222.00  1222
                    MA(PR)      1222  1222.00  0.00   1222.00  1222
                    MA(GR)      1222  1222.00  0.00   1222.00  1222
                    MA(GR-RS)   1222  1222.00  0.00   1222.00  1222
la12 20 × 5         Known-Best  1039
                    GA          1039  1039.00  0.00   1039.00  1039
                    MA(PR)      1039  1039.00  0.00   1039.00  1039
                    MA(GR)      1039  1039.00  0.00   1039.00  1039
                    MA(GR-RS)   1039  1039.00  0.00   1039.00  1039
la13 20 × 5         Known-Best  1150
                    GA          1150  1150.00  0.00   1150.00  1150
                    MA(PR)      1150  1150.00  0.00   1150.00  1150
                    MA(GR)      1150  1150.00  0.00   1150.00  1150
                    MA(GR-RS)   1150  1150.00  0.00   1150.00  1150
la14 20 × 5         Known-Best  1292
                    GA          1292  1292.00  0.00   1292.00  1292
                    MA(PR)      1292  1292.00  0.00   1292.00  1292
                    MA(GR)      1292  1292.00  0.00   1292.00  1292
                    MA(GR-RS)   1292  1292.00  0.00   1292.00  1292
la15 20 × 5         Known-Best  1207
                    GA          1207  1241.40  17.64  1243.00  1265
                    MA(PR)      1207  1245.80  14.35  1243.50  1275
                    MA(GR)      1207  1207.13  0.37   1207.00  1209
                    MA(GR-RS)   1207  1207.13  0.52   1207.00  1209
la16 10 × 10        Known-Best  945
                    GA          994   1002.83  10.05  994.00   1028
                    MA(PR)      994   1001.53  10.25  994.50   1028
                    MA(GR)      946   962.60   15.81  962.00   982
                    MA(GR-RS)   945   968.27   15.46  979.00   982
la17 10 × 10        Known-Best  784
                    GA          794   822.50   10.18  820.00   839
                    MA(PR)      785   814.67   12.88  820.00   839
                    MA(GR)      784   788.00   3.97   785.00   793
                    MA(GR-RS)   784   788.93   4.18   792.00   793
la18 10 × 10        Known-Best  848
                    GA          861   895.33   14.56  901.00   912
                    MA(PR)      861   896.67   13.89  901.00   911
                    MA(GR)      861   861.00   0.00   861.00   861
                    MA(GR-RS)   848   859.27   4.57   861.00   861
la19 10 × 10        Known-Best  842
                    GA          890   915.73   9.04   914.00   929
                    MA(PR)      896   916.27   6.83   914.00   929
                    MA(GR)      850   855.13   5.74   850.00   869
                    MA(GR-RS)   842   855.47   7.76   855.00   869
la20 10 × 10        Known-Best  902
                    GA          967   979.67   12.62  977.00   1016
                    MA(PR)      967   978.73   10.59  980.50   1016
                    MA(GR)      907   910.00   2.49   912.00   912
                    MA(GR-RS)   907   910.00   2.54   912.00   912
Table 5 continued

Problem and size^a  Algorithm   Best  Average  SD     Median   Worst
la21 15 × 10        Known-Best  1046
                    GA          1098  1145.07  20.07  1145.00  1185
                    MA(PR)      1090  1136.00  21.20  1138.00  1187
                    MA(GR)      1089  1103.67  12.22  1102.50  1128
                    MA(GR-RS)   1079  1097.60  12.48  1096.00  1124
la22 15 × 10        Known-Best  927
                    GA          986   1027.50  17.59  1031.00  1055
                    MA(PR)      985   1023.80  20.68  1028.00  1076
                    MA(GR)      963   982.00   8.74   980.50   1003
                    MA(GR-RS)   960   981.00   13.58  979.00   1000
la23 15 × 10        Known-Best  1032
                    GA          1043  1077.13  17.60  1080.00  1108
                    MA(PR)      1043  1076.80  18.05  1072.00  1099
                    MA(GR)      1032  1032.07  0.18   1032.00  1033
                    MA(GR-RS)   1032  1032.00  0.00   1032.00  1032
la24 15 × 10        Known-Best  935
                    GA          984   1023.70  24.94  1020.50  1080
                    MA(PR)      986   1021.93  26.09  1027.00  1083
                    MA(GR)      979   1000.53  9.64   997.50   1011
                    MA(GR-RS)   959   996.40   16.21  1003.00  1011
la25 15 × 10        Known-Best  977
                    GA          1077  1116.97  19.06  1114.00  1154
                    MA(PR)      1077  1114.07  15.11  1114.00  1153
                    MA(GR)      991   1016.53  9.67   1016.00  1036
                    MA(GR-RS)   991   1016.67  19.64  1013.00  1076
la26 20 × 10        Known-Best  1218
                    GA          1295  1324.13  18.52  1322.50  1369
                    MA(PR)      1303  1331.53  13.32  1331.50  1366
                    MA(GR)      1220  1239.40  13.71  1234.00  1267
                    MA(GR-RS)   1218  1234.27  16.12  1228.00  1266
la27 20 × 10        Known-Best  1235
                    GA          1328  1356.87  17.70  1355.00  1389
                    MA(PR)      1328  1356.40  18.83  1350.00  1392
                    MA(GR)      1298  1309.27  9.79   1310.00  1333
                    MA(GR-RS)   1286  1306.33  13.17  1301.00  1333
la28 20 × 10        Known-Best  1235
                    GA          1328  1356.87  17.70  1355.00  1389
                    MA(PR)      1328  1356.40  18.83  1350.00  1392
                    MA(GR)      1298  1309.27  9.79   1310.00  1333
                    MA(GR-RS)   1286  1306.33  13.17  1301.00  1333
la29 20 × 10        Known-Best  1157
                    GA          1265  1294.07  20.01  1291.00  1334
                    MA(PR)      1267  1301.40  22.53  1292.00  1336
                    MA(GR)      1237  1242.93  6.76   1241.50  1262
                    MA(GR-RS)   1221  1240.47  6.98   1240.50  1249
la30 20 × 10        Known-Best  1355
                    GA          1377  1465.43  35.82  1476.00  1555
                    MA(PR)      1363  1448.53  35.90  1476.00  1564
                    MA(GR)      1355  1357.33  6.53   1355.00  1380
                    MA(GR-RS)   1355  1362.33  8.23   1359.50  1380
Table 5 continued

Problem and size^a  Algorithm   Best  Average  SD     Median   Worst
la31 30 × 10        Known-Best  1784
                    GA          1784  1784.00  0.00   1784.00  1784
                    MA(PR)      1784  1784.00  0.00   1784.00  1784
                    MA(GR)      1784  1784.00  0.00   1784.00  1784
                    MA(GR-RS)   1784  1784.00  0.00   1784.00  1784
la32 30 × 10        Known-Best  1850
                    GA          1850  1869.60  19.79  1872.00  1916
                    MA(PR)      1850  1870.40  17.52  1866.00  1933
                    MA(GR)      1850  1850.00  0.00   1850.00  1850
                    MA(GR-RS)   1850  1850.00  0.00   1850.00  1850
la33 30 × 10        Known-Best  1719
                    GA          1719  1729.00  11.22  1725.00  1765
                    MA(PR)      1719  1726.60  8.21   1723.00  1745
                    MA(GR)      1719  1719.00  0.00   1719.00  1719
                    MA(GR-RS)   1719  1719.00  0.00   1719.00  1719
la34 30 × 10        Known-Best  1721
                    GA          1748  1770.90  13.31  1770.50  1814
                    MA(PR)      1740  1784.33  23.14  1775.00  1814
                    MA(GR)      1721  1721.00  0.00   1721.00  1721
                    MA(GR-RS)   1721  1721.00  0.00   1721.00  1721
la35 30 × 10        Known-Best  1888
                    GA          1898  1947.20  30.94  1942.50  2018
                    MA(PR)      1888  1938.87  32.28  1934.50  2007
                    MA(GR)      1888  1888.00  1.83   1888.00  1898
                    MA(GR-RS)   1888  1888.00  0.00   1888.00  1888
la36 15 × 15        Known-Best  1268
                    GA          1388  1432.60  29.14  1432.00  1497
                    MA(PR)      1389  1422.47  28.38  1427.50  1477
                    MA(GR)      1317  1327.20  9.80   1324.00  1352
                    MA(GR-RS)   1307  1328.67  11.59  1327.00  1346
la37 15 × 15        Known-Best  1397
                    GA          1561  1606.23  25.27  1605.50  1656
                    MA(PR)      1544  1611.20  29.77  1605.50  1675
                    MA(GR)      1466  1476.47  6.12   1479.00  1487
                    MA(GR-RS)   1442  1473.60  11.51  1479.00  1487
la38 15 × 15        Known-Best  1196
                    GA          1367  1412.53  17.07  1415.00  1439
                    MA(PR)      1367  1410.33  17.05  1420.00  1432
                    MA(GR)      1304  1317.53  5.83   1317.00  1329
                    MA(GR-RS)   1266  1309.13  14.52  1314.00  1329
la39 15 × 15        Known-Best  1233
                    GA          1338  1368.47  18.29  1367.00  1413
                    MA(PR)      1342  1375.73  18.91  1372.00  1423
                    MA(GR)      1263  1287.27  14.35  1294.50  1323
                    MA(GR-RS)   1252  1282.60  16.62  1277.00  1301
la40 15 × 15        Known-Best  1222
                    GA          1357  1366.30  13.57  1360.00  1407
                    MA(PR)      1357  1368.13  14.17  1364.50  1406
                    MA(GR)      1252  1278.93  12.41  1280.00  1294
                    MA(GR-RS)   1252  1279.60  17.84  1289.00  1303

The bold values are the optimal solutions found by our proposed algorithms
^a In problem size representation n × m, there are n jobs and m machines
Table 6 Comparing the algorithms based on average relative deviations (ARD) and standard deviation of relative deviations (SDRD)

No. of problems  Test problems                           Author                     Algorithm  ARD (%)  SDRD (%)
40               la01–la40                               Proposed                   MA(GR-RS)  0.9680   1.6559
                                                         Aarts et al. [1]           GLS1       2.0540   2.5279
                                                         Aarts et al. [1]           GLS2       1.7518   2.1974
                                                         Dorndorf and Pesch [6]     PGA        4.0028   4.0947
                                                         Dorndorf and Pesch [6]     SBGA (40)  1.2455   1.7216
                                                         Binato et al. [4]          –          1.8683   2.7817
                                                         Adams et al. [2]           SB I       3.6740   3.9804
25               la16–la40                               Proposed                   MA(GR-RS)  1.5488   1.8759
                                                         Ombuki and Ventresca [18]  –          5.6676   4.3804
                                                         Dorndorf and Pesch [6]     SBGA (60)  1.5560   1.5756
24               la02–la04, la08, la16–la30, la36–la40   Proposed                   MA(GR-RS)  1.6810   1.8562
                                                         Adams et al. [2]           SB II      2.4283   1.8472
8                la01, la06, la11, la16,                 Proposed                   MA(GR-RS)  0.7788   1.4423
                 la21, la26, la31, la36                  Croce et al. [5]           –          1.5588   1.9643
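The deviation metrics used in this comparison can be reproduced from raw makespans. The sketch below is our own illustration (the function names are ours, not from the paper): the percentage deviation of a makespan from the best-known value is averaged over the problem set to give ARD, and its standard deviation gives SDRD. Whether the original study used the population or sample standard deviation is an assumption here; we use the population form.

```python
from statistics import mean, pstdev

def relative_deviation(makespan, best_known):
    """Percentage deviation of a makespan from the best-known value."""
    return 100.0 * (makespan - best_known) / best_known

def ard_sdrd(results):
    """ARD and SDRD over (makespan, best_known) pairs for a problem set."""
    devs = [relative_deviation(m, b) for m, b in results]
    return mean(devs), pstdev(devs)

# e.g. three problems where an algorithm found 667, 655 and 617
# against best-known values 666, 655 and 597
ard, sdrd = ard_sdrd([(667, 666), (655, 655), (617, 597)])
```

A comparison such as Table 6 then reduces to computing this pair over the common subset of problems solved by both algorithms.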
Table 7 Combination of different reproduction parameters

Set        1     2     3     4     5     6     7     8
Crossover  0.60  0.65  0.70  0.75  0.80  0.85  0.90  0.95
Mutation   0.35  0.30  0.25  0.20  0.15  0.10  0.05  0.00
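The eight sets in Table 7 follow the complementary pattern of experiment 1 (the crossover and mutation rates always sum to 0.95), so they can be generated programmatically; this one-liner is merely illustrative:

```python
# Experiment 1's eight (crossover, mutation) pairs: rates always sum to 0.95
sets = [(round(0.60 + 0.05 * i, 2), round(0.35 - 0.05 * i, 2)) for i in range(8)]
```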
Fig. 3 Average relative deviation (ARD) with respect to different parameter sets tabulated in Table 7
Fig. 4 Average relative deviation (ARD) based on fixed mutation and variable crossover rate
Fig. 5 Average relative deviation (ARD) based on fixed crossover and variable mutation rate
lems, particularly the larger problems appearing in the literature. However, GAs and MAs are gaining popularity due to their effectiveness in solving optimization problems within a reasonable time. In this paper, we have presented genetic and memetic algorithm based approaches for solving job-shop scheduling problems. After developing a traditional GA with different kinds of operators, we designed and implemented three local searches and three versions of memetic algorithms. All three memetic algorithms provided better results than the GA for JSSPs. We solved 40 benchmark problems and compared the results with well-known algorithms appearing in the literature. Our
memetic algorithm MA(GR-RS) clearly outperforms all the algorithms considered in this paper. We have also provided a sensitivity analysis of the parameters, experimenting with different parameter settings and algorithms to analyze their contributions. Although our algorithm performs well, we feel that it requires further work to ensure consistent performance over a wide range of practical JSSPs. We intend to extend our research by introducing constraints such as machine breakdown, dynamic job arrival, machine addition and removal, and due-date restrictions. We would also like to test the performance of our algorithm on large-scale problems. Nevertheless, the new memetic algorithm is a significant contribution to research on solving JSSPs.

Appendix
Algorithm III Algorithm for the gap reduction (GR)
Let p be the phenotype of an individual i, and let M and N be the total number of machines and jobs, respectively. S and C are the sets of starting and finishing times of all the operations that have already been scheduled. T(j, m) is the execution time of the current operation of job j on machine m. Qp(m, k) is the kth operation of machine m for phenotype p. mFront(m) represents the front operation of machine m, and jFront(j) is the machine where the schedulable operation of job j will be processed. jBusy(j) and mBusy(m) are the busy times of job j and machine m, respectively. max(m, n) returns the maximum of m and n.
Algorithm I Algorithm to find the bottleneck job
Assume Qp(m, k) is the kth job on machine m for the phenotype p, and C(m, n) is the finishing time of the nth operation of machine m, where m varies from 1 to M and n varies from 1 to N. getBottleneckJob is a function that returns the job that takes the maximum time in the schedule.

getBottleneckJob(void)
1. Set m := 1 and max := -1
2. Repeat while m ≤ M
   A. If max …
3. Repeat while k > 1
   A. Swap between Cp(m, k) and Cp(m, k-1)
   B. Set k := k-1
   [End of Step 3 Loop]
[End of Algorithm]
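Under the reading that the bottleneck job is the one consuming the maximum total time in the schedule, the core of Algorithm I can be sketched in Python as follows. This is our interpretation of the partially garbled listing above; the data layout (a per-job list of machine processing times) and all names are our own:

```python
def get_bottleneck_job(processing_time):
    """Return the index of the job with the largest total processing time.

    processing_time[j][m] is the time job j needs on machine m.
    A plausible reading of Algorithm I, not the authors' exact code.
    """
    best_job, best_total = -1, -1
    for j, times in enumerate(processing_time):
        total = sum(times)
        if total > best_total:           # keep the running maximum
            best_job, best_total = j, total
    return best_job
```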
1. Set m := 1 and mFront(1:M) := 0
2. Repeat until all operations are scheduled
   A. Set Loc := mFront(m) and jID := Qp(m, Loc)
   B. If jFront(jID) = m then
      i.  Set flag := 1 and k := 1
      ii. Repeat until k ≥ Loc
          a. Set X := max(C(m, k-1), jBusy(jID))
          b. Set G := S(m, k) - X
          c. If G ≥ T(jID, m)
             (1) Set Loc := k
             (2) Go to Step F
             [End of Step c If]
          d. Set k := k+1
          [End of Step ii Loop]
      Else
          Set flag := flag+1
      [End of Step B If]
   C. Set j1 := 1
   D. Repeat while j1 ≤ J
      i.   Set mID := jFront(j1)
      ii.  Find the location h of j1 in machine mID
      iii. Put j1 in the front position and do a 1-bit right shift from location mFront(mID) to h
      iv.  Set j1 := j1+1
      [End of Step D Loop]
   E. Go to Step A
   F. Place jID at the position Loc
   G. Set S(m, Loc) := X
   H. Set C(m, Loc) := S(m, Loc) + T(jID, m)
   I. Set m := (m+1) mod M
   [End of Step 2 Loop]
[End of Algorithm]
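The heart of the GR rule, locating the first idle gap on a machine large enough to hold the current operation (steps ii.a to ii.c above), can be sketched as follows. This is a simplified illustration with our own names and data layout, not the authors' exact procedure:

```python
def find_gap(intervals, duration, ready_time):
    """Earliest feasible start for an operation of `duration` on a machine.

    intervals:  sorted (start, finish) pairs already scheduled on the machine.
    ready_time: earliest the job is available, i.e. the finishing time of its
                preceding operation on another machine (jBusy in the paper).
    Uses an idle gap when one is big enough, as in the GR rule; otherwise the
    operation is appended at the end of the machine's schedule.
    """
    prev_end = 0
    for start, finish in intervals:
        earliest = max(prev_end, ready_time)   # X = max(C(m,k-1), jBusy)
        if start - earliest >= duration:       # gap G can hold the operation
            return earliest
        prev_end = finish
    return max(prev_end, ready_time)           # no gap fits: append at the end
```

For example, with operations occupying (0, 3) and (10, 12), a 4-unit operation whose job is ready at time 2 fits in the gap and starts at time 3.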
Algorithm IV Algorithm for the restricted swapping (RS)
Assume Qp(m, k) is the kth job on machine m, and Op(j, m) is the order of job j on machine m, particularly for the phenotype p. nonConflict(m, i, j) is a function that returns true if the ending time of the immediately preceding operation of job j does not overlap with the modified starting time of the same job on machine m, and the starting time of the immediately following operation of job j does not conflict with the ending time of the same job on machine m.
1. Set j′ := getBottleneckJob and k := N-1
2. Repeat while k ≥ 1
   A. Set m := S(j′, k)
   B. If Op(j′, m) > 1 then
      i.  Set j″ := Qp(m, (Op(j′, m)-1))
      ii. If nonConflict(m, j′, j″) = true
          a. Swap j′ with j″ in phenotype p
          b. Go to Step C
          [End of Step ii If]
      [End of Step B If]
   C. Set k := k-1
   [End of Step 2 Loop]
[End of Algorithm]
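The RS rule's swap-with-feasibility-check (steps B.i to B.ii.a above) can be sketched as follows. The representation (a machine sequence of (job, start) pairs plus each job's precedence window) and all names are our own simplification of the non-conflict test, not the paper's exact routine:

```python
def try_restricted_swap(machine_seq, k, job_ready, job_due, duration):
    """Attempt to swap the operations at positions k-1 and k on one machine.

    machine_seq: list of (job, start) tuples in sequence order.
    job_ready[j]: end of job j's preceding operation (on another machine).
    job_due[j]:   start of job j's following operation.
    duration[j]:  processing time of job j's operation on this machine.
    The swap is kept only if neither job violates its own precedence
    window -- the nonConflict test of the RS rule.  Returns True on success.
    """
    a, start_a = machine_seq[k - 1]
    b, _ = machine_seq[k]
    # after swapping, b starts where a did, and a follows b immediately
    new_start_b = start_a
    new_start_a = new_start_b + duration[b]
    ok_b = job_ready[b] <= new_start_b and new_start_b + duration[b] <= job_due[b]
    ok_a = job_ready[a] <= new_start_a and new_start_a + duration[a] <= job_due[a]
    if ok_a and ok_b:
        machine_seq[k - 1], machine_seq[k] = (b, new_start_b), (a, new_start_a)
        return True
    return False
```

A swap that would start a job before its preceding operation ends, or run past the start of its following operation, is simply rejected, which is what keeps the swapping "restricted".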
References
1. Aarts EHL, Van Laarhoven PJM, Lenstra JK, Ulder NLJ (1994) A computational study of local search algorithms for job shop scheduling. ORSA J Comput 6:118–125
2. Adams J, Balas E, Zawack D (1988) The shifting bottleneck procedure for job shop scheduling. Manage Sci 34:391–401
3. Biegel JE, Davern JJ (1990) Genetic algorithms and job shop scheduling. Comput Ind Eng 19:81–91
4. Binato S, Hery WJ, Loewenstern DM, Resende MGC (2001) A GRASP for job shop scheduling. In: Essays and surveys on metaheuristics. Kluwer Academic Publishers, Boston, MA, pp 58–79
5. Croce FD, Tadei R, Volta G (1995) A genetic algorithm for the job shop problem. Comput Oper Res 22:15–24
6. Dorndorf U, Pesch E (1995) Evolution based learning in a job shop scheduling environment. Comput Oper Res 22:25–40
7. Garey MR, Johnson DS (1979) Computers and intractability: a guide to the theory of NP-completeness. W. H. Freeman, San Francisco
8. Garey MR, Johnson DS, Sethi R (1976) The complexity of flowshop and jobshop scheduling. Math Oper Res 1:117–129
9. Goldberg DE (1989) Genetic algorithms in search, optimization, and machine learning. Addison-Wesley, Reading, MA
10. Hasan SMK, Sarker R, Cornforth D (2008) GA with priority rules for solving job-shop scheduling problems. In: IEEE world congress on computational intelligence, Hong Kong, pp 1913–1920
11. Hasan SMK, Sarker R, Cornforth D (2007) Hybrid genetic algorithm for solving job-shop scheduling problem. In: 6th IEEE/ACIS international conference on computer and information science, Melbourne, pp 519–524
12. Hasan SMK, Sarker R, Cornforth D (2007) Modified genetic algorithm for job-shop scheduling: a gap-utilization technique. In: IEEE congress on evolutionary computation, Singapore, pp 3804–3811
13. Ishibuchi H, Murata T (1998) A multi-objective genetic local search algorithm and its application to flowshop scheduling. IEEE Trans Syst Man Cybern Part C 28:392–403
14. Lawrence D (1985) Job shop scheduling with genetic algorithms. In: First international conference on genetic algorithms, Mahwah, New Jersey, pp 136–140
15. Lawrence S (1984) Resource constrained project scheduling: an experimental investigation of heuristic scheduling techniques. Graduate School of Industrial Administration, Carnegie-Mellon University, Pittsburgh, Pennsylvania
16. Lenstra JK, Rinnooy Kan AHG (1979) Computational complexity of discrete optimization problems. Ann Discrete Math 4:121–140
17. Nakano R, Yamada T (1991) Conventional genetic algorithm for job shop problems. In: Fourth international conference on genetic algorithms, Morgan Kaufmann, San Mateo, CA, pp 474–479
18. Ombuki BM, Ventresca M (2004) Local search genetic algorithms for the job shop scheduling problem. Appl Intell 21:99–109
19. Ong YS, Keane AJ (2004) Meta-Lamarckian learning in memetic algorithms. IEEE Trans Evol Comput 8:99–110
20. Paredis J (1997) Exploiting constraints as background knowledge for evolutionary algorithms. In: Handbook of evolutionary computation. Institute of Physics Publishing and Oxford University Press, Bristol and New York, G1.2:1–6
21. Park BJ, Choi HR, Kim HS (2003) A hybrid genetic algorithm for the job shop scheduling problems. Comput Ind Eng 45:597–613
22. Pezzella F, Morganti G, Ciaschetti G (2008) A genetic algorithm for the flexible job-shop scheduling problem. Comput Oper Res 35:3202–3212
23. Ponnambalam SG, Aravindan P, Rao PS (2001) Comparative evaluation of genetic algorithms for job-shop scheduling. Prod Plan Control 12:560–674
24. Shigenob K, Isao O, Masayuki Y (1995) An efficient genetic algorithm for job shop scheduling problems. In: 6th international conference on genetic algorithms, Pittsburgh, PA, pp 506–511
25. Wang W, Brunn P (2000) An effective genetic algorithm for job shop scheduling. Proc Inst Mech Eng Part B: J Eng Manuf 214:293–300
26. Yamada T (2003) Studies on metaheuristics for jobshop and flowshop scheduling problems. Doctor of Informatics thesis, Department of Applied Mathematics and Physics, Kyoto University, Kyoto, Japan
27. Yamada T, Nakano R (1997) Genetic algorithms for job-shop scheduling problems. In: Modern heuristic for decision support, UNICOM seminar, London, pp 67–81
28. Yamada T, Nakano R (1997) Job-shop scheduling. In: Genetic algorithms in engineering systems. Cambridge University Press, New York, pp 134–160