Evolutionary Algorithms for Solving Job-Shop Scheduling Problems in the Presence of Process Interruptions
S. M. Kamrul Hasan Bachelor of Science in Computer Science & Engineering Rajshahi University of Engineering & Technology, Bangladesh
A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the School of Information Technology & Electrical Engineering University of New South Wales Australian Defence Force Academy c Copyright 2009 by S. M. Kamrul Hasan
Abstract

In this thesis, the Job-Shop Scheduling Problem (JSSP) is the problem of interest. The classical JSSP is well known to be NP-hard. Although small problems are solvable by deterministic methods with current computational capabilities, larger problems remain out of reach. The complexity of the JSSP increases further when process interruptions, such as machine breakdowns and/or machine unavailability, are introduced. Over the last few decades, several stochastic algorithms have been proposed for solving JSSPs; however, none of them is suitable for all kinds of problems. Genetic and memetic algorithms have proved their effectiveness in this regard because of their diverse search behavior.

In this thesis, we have developed one genetic algorithm (GA) and three different memetic algorithms (MAs) for solving JSSPs. Three priority rules are designed, namely partial reordering, gap reduction, and restricted swapping, and these are used as local search techniques in designing our MAs. We have solved 40 well-known benchmark problems and compared the results obtained with those of established algorithms available in the literature. Our algorithms clearly outperform those established algorithms. To justify the superiority of the MAs over the GA, we have performed statistical significance testing (Student's t-test). The experimental results show that the MAs, compared to the GA, not only significantly improve the quality of solutions, but also reduce the overall computational effort. We have extended this work by proposing an improved local search technique, shifted gap reduction (SGR), which improves the performance of the MAs on the relatively difficult test problems. We have also modified the new algorithm to accommodate JSSPs with machine unavailability, and have developed a new reactive scheduling technique to re-optimize the schedule after machine breakdowns. We consider two scenarios of machine unavailability: first, where the unavailability information is available in advance (predictive), and second, where the information is known only after a real breakdown (reactive). We show that the revised schedule is usually able to recover if the interruptions occur during the early stages of the schedule. We also confirm that a single continuous breakdown has more impact than multiple short breakdowns, even if the total durations of the breakdowns are the same.

Finally, for convenience of implementation, we have developed a decision support system (DSS). In the DSS, we have built a graphical user interface (GUI) for user-friendly data input, model choice, and output generation. This DSS tool will help users solve JSSPs without needing to understand the complexity of the problem and solution approaches, and will contribute to reducing computational and operational costs.
Dedicated to my wife Nafisa Tarannum and my parents
Keywords

Decision support system
Job-shop scheduling problem
Genetic algorithm
Machine breakdown
Machine unavailability
Memetic algorithm
Priority rules
Acknowledgment

All praises are due to Allah, the almighty God, the most beneficent and the most merciful. I gratefully thank Him for the enormous blessings that enabled me to accomplish this work.

No words of gratitude are enough to thank my principal supervisor, A/Prof. Ruhul A. Sarker, School of Engineering and Information Technology, UNSW@ADFA, who provided endless support, guidance, and motivation to fulfill this work. His honest, thorough, and thoughtful approach to research has trained me to complete this large volume of work. I am also grateful to my co-supervisors, Dr. Daryl Essam, School of Engineering and Information Technology, UNSW@ADFA, and Dr. David Cornforth, Division of Energy Technology, CSIRO, for their enormous encouragement and support in the weekly meetings as well as throughout my entire PhD candidature. I give special thanks to them for their close attention to the writing of my publications and this thesis.

I would like to thank the School of Engineering and Information Technology for providing me with a full studentship and for sponsoring my conferences. I also thank the University of New South Wales at the Australian Defence Force Academy for awarding me the University College Postgraduate Research Scholarship (UCPRS), one of the most prestigious scholarships in Australia, to complete this research. I thank the Australian Centre for Advanced Computing and Communications (ac3), located in the Australian Technology Park, for providing a massive high-performance computing facility, which helped me to execute the large number of experiments.

I appreciate the administrative support from the school. I would like to thank the help desk staff of the school, Eri Rigg, Steve Fisher, and Michael Lanza, for providing technical assistance as well as arranging the computer labs to run experiments on local computers. I express gratitude to my fellows in the school, especially Mr. Abu S. S. M. Barkat Ullah, Mr. Ehab Zaky Elfeky, and Mr. Ziauddin Ursani, for their constructive suggestions and discussions regarding this research.

Many thanks to my parents, brothers, and grandmother for their continuous moral support during the stressful days of my research. Finally, I thank my wife, Nafisa Tarannum, for her love and patience during my PhD candidature. It would not have been possible to complete this work without her encouragement during the frustrating times along the pathway of my research.
Originality Statement I hereby declare that this submission is my own work and that, to the best of my knowledge and belief, it contains no material previously published or written by another person, nor material which to a substantial extent has been accepted for the award of any other degree or diploma at UNSW or any other educational institution, except where due acknowledgment is made in the thesis. Any contribution made to the research by colleagues, with whom I have worked at UNSW or elsewhere, during my candidature, is fully acknowledged. I also declare that the intellectual content of this thesis is the product of my own work, except to the extent that assistance from others in the project’s design and conception or in style, presentation and linguistic expression is acknowledged.
Signed................................... Date......................................
Copyright Statement

I hereby grant the University of New South Wales or its agents the right to archive and to make available my thesis or dissertation in whole or part in the University libraries in all forms of media, now or hereafter known, subject to the provisions of the Copyright Act 1968. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation. I also authorize University Microfilms to use the 350-word abstract of my thesis in Dissertation Abstracts International (this is applicable to doctoral theses only). I have either used no substantial portions of copyright material in my thesis, or I have obtained permission to use copyright material; where permission has not been granted, I have applied/will apply for a partial restriction of the digital copy of my thesis or dissertation.
Signed................................... Date......................................
Authenticity Statement

I certify that the Library deposit digital copy is a direct equivalent of the final officially approved version of my thesis. No emendation of content has occurred, and if there are any minor variations in formatting, they are the result of the conversion to digital format.
Signed................................... Date......................................
List of Publications Journal 1. Hasan, S. M. K., Sarker, R., Essam, D., and Cornforth, D. (2009). Memetic Algorithms for Solving Job-Shop Scheduling Problems, Memetic Computing, 1(1), Springer-Verlag, pp. 69-83, [DOI: 10.1007/s12293-008-0004-5]. 2. Hasan, S. M. K., Sarker, R., Essam, D., and Kacem, I. (11/2009). A DSS for Job-Shop Scheduling under Process Interruptions, Flexible Services and Manufacturing Journal, Springer-Verlag (Under revision). 3. Hasan, S. M. K., Sarker, R., and Essam, D. (05/2009). Genetic Algorithm for Job-Shop Scheduling with Machine Unavailability and Breakdowns, International Journal of Production Research, Taylor & Francis (Under revision).
Book Chapter 1. Hasan, S. M. K., Sarker, R., Essam, D., and Cornforth, D. (2009). A Genetic Algorithm with Priority Rules for Solving Job-Shop Scheduling Problems, in The Natural Intelligence for Scheduling, Planning and Packing Problems, Studies in Computational Intelligence (SCI) Series, Springer-Verlag (In Press).
Conference 1. Hasan, S. M. K., Sarker, R., and Essam, D. (05/2009). Genetic Algorithm for Job-Shop Scheduling with Machine Unavailability and Breakdowns, 20th National Conference of Australian Society for Operations Research, 27-30 September, Gold Coast, Australia (Abstract Accepted). 2. Hasan, S. M. K., Sarker, R., and Essam, D. (2008). A Decision Support System for Solving Job-Shop Scheduling Problems Using Genetic Algorithms, 12th Asia Pacific Symposium on Intelligent and Evolutionary Systems, 7-8 December, Melbourne, Australia, pp. 71-78. 3. Hasan, S. M. K., Sarker, R., and Cornforth, D. (2008). GA with Priority Rules for Solving Job-Shop Scheduling Problems, IEEE World Congress on Computational Intelligence, 1-6 June, Hong Kong, pp. 1913-1920, [DOI: 10.1109/CEC.2008.4631050]. 4. Hasan, S. M. K., Sarker, R., and Cornforth, D. (2007). Modified Genetic Algorithm for Job-Shop Scheduling: A Gap-Utilization Technique, IEEE Congress on Evolutionary Computation, 25-28 September, Singapore, pp. 3804-3811, [DOI: 10.1109/CEC.2007.4424966]. 5. Hasan, S. M. K., Sarker, R., and Cornforth, D. (2007). Hybrid Genetic Algorithm for Solving Job-Shop Scheduling Problem, 6th IEEE International Conference on Computer and Information Science, 11-13 July, Melbourne, Australia, pp. 519-524, [DOI: 10.1109/ICIS.2007.107]
Contents

Abstract i
Keywords iv
Acknowledgments v
Originality Statement vii
Copyright Statement viii
Authenticity Statement ix
List of Publications x
Table of Contents xii
List of Figures xviii
List of Tables xx
List of Algorithms xxiii
List of Acronyms xxiv
List of Notations xxvi

1 Introduction 1
  1.1 Background 1
  1.2 Motivation and Scope of Research 2
  1.3 Objectives of this Thesis 4
  1.4 Key Contributions 6
  1.5 Organization of the Thesis 8

2 Literature Review 10
  2.1 Introduction 10
  2.2 Scheduling Problems 11
    2.2.1 Job-Shop Scheduling 12
    2.2.2 Variants of JSSPs 13
      2.2.2.1 Additional Constraints in JSSPs 13
      2.2.2.2 JSSPs under Process Interruptions 14
      2.2.2.3 Flexibility in JSSPs 14
    2.2.3 Objectives for Solving JSSPs 15
      2.2.3.1 Makespan 15
      2.2.3.2 Throughput Time 16
      2.2.3.3 Earliness 16
      2.2.3.4 Tardiness 17
      2.2.3.5 Due-Date Cost 17
  2.3 Solving JSSPs 18
    2.3.1 Classical Optimization Approaches 18
    2.3.2 Meta-Heuristic Methods 20
      2.3.2.1 Tabu Search 21
      2.3.2.2 Shifting Bottleneck 22
      2.3.2.3 GRASP 23
      2.3.2.4 Simulated Annealing 24
      2.3.2.5 Genetic Algorithm 24
    2.3.3 Priority Rules 25
  2.4 Genetic Algorithm 26
    2.4.1 Fitness Function 27
    2.4.2 Representation of a Solution 28
      2.4.2.1 Operation-Based Representation 28
      2.4.2.2 Preference List-Based Representation 29
      2.4.2.3 Priority Rule-Based Representation 29
      2.4.2.4 Job-Based Representation 29
      2.4.2.5 Machine-Based Representation 30
      2.4.2.6 Job Pair Relation-Based Representation 30
      2.4.2.7 Disjunctive Graph-Based Representation 30
    2.4.3 Selection 31
      2.4.3.1 Ranking Selection 31
      2.4.3.2 Roulette-Wheel Selection 31
      2.4.3.3 Tournament Selection 31
    2.4.4 Crossover 32
      2.4.4.1 Ordered Crossover (OX) 32
      2.4.4.2 Partially-Mapped Crossover (PMX) 33
      2.4.4.3 Cycle Crossover (CX) 33
      2.4.4.4 Position-Based Crossover (PBX) 33
    2.4.5 Mutation 34
      2.4.5.1 Bit-flip Mutation 34
      2.4.5.2 Reciprocal Exchange Mutation 34
      2.4.5.3 Insertion/Deletion Mutation 35
      2.4.5.4 Shift Mutation 35
  2.5 GAs for Solving JSSPs 35

3 Solving Job-Shop Scheduling by Genetic Algorithms 38
  3.1 Introduction 38
  3.2 Genetic Algorithm 39
    3.2.1 Chromosome Representation 40
    3.2.2 Mapping a Solution 43
      3.2.2.1 Local Harmonization 44
      3.2.2.2 Global Harmonization 47
    3.2.3 Fitness Calculation 48
    3.2.4 Reproduction Phases 48
      3.2.4.1 Selection 48
      3.2.4.2 Crossover 50
      3.2.4.3 Mutation 50
  3.3 Experimental Study 51
  3.4 Chapter Conclusions 64

4 Priority Rules for Solving JSSPs 65
  4.1 Introduction 65
  4.2 Partial Reordering (PR) 67
  4.3 Gap Reduction (GR) 69
  4.4 Restricted Swapping (RS) 72
  4.5 Implementation 74
  4.6 Results and Analysis 75
    4.6.1 Group-wise Comparison 87
    4.6.2 Statistical Analysis 88
    4.6.3 Contribution of the Priority Rules 90
    4.6.4 Parameter Analysis 94
  4.7 Chapter Conclusion 99

5 GA for JSSPs with Interruptions 101
  5.1 Introduction 101
  5.2 Literature Review 104
  5.3 Solution Approach 106
    5.3.1 Shifted Gap Reduction (SGR) 106
    5.3.2 Reactive Scheduling 109
  5.4 Experimental Study 110
  5.5 Result and Discussion 112
    5.5.1 JSSPs under Ideal Condition 112
    5.5.2 JSSPs with Machine Breakdowns 115
    5.5.3 JSSPs with Machine Unavailability 120
  5.6 Chapter Conclusion 121

6 Decision Support System for Solving JSSPs 122
  6.1 Introduction 122
  6.2 A Brief Literature Review 123
  6.3 A Decision Support System 124
    6.3.1 Common DSS Attributes 125
    6.3.2 DSS Framework 126
  6.4 Implementation of DSS 128
    6.4.1 Data Base Management Subsystem (DBMS) 128
    6.4.2 Model Base Management Subsystem (MBMS) 131
    6.4.3 Dialog Management Subsystem (DGMS) 131
    6.4.4 User Interface (GUI) 132
      6.4.4.1 Home 132
      6.4.4.2 Input 132
      6.4.4.3 Output 134
  6.5 Chapter Conclusion 136

7 Conclusion and Future Works 138
  7.1 Conclusion 138
    7.1.1 Genetic Algorithm for JSSPs 138
    7.1.2 Priority Rules for JSSPs 139
    7.1.3 JSSPs with Interruptions 140
    7.1.4 Decision Support System 142
  7.2 Future Research Directions 142

A Gantt Charts 145

B Sample Best Solutions 158

Bibliography 187
List of Figures

3.1 Two-point Exchange Crossover 50
3.2 Boxplot representation of the makespans of the problems la01-la40 based on 30 independent runs 59
3.3 Convergence curve of the best and average makespan of one problem from each problem group (la01, la06, la11, la16, la21, la26, la31, la36) 63
4.1 Gantt chart of the solution: (a) before applying the partial reordering, (b) after applying partial reordering and reevaluation 68
4.2 Two steps of a partial Gantt chart while building the schedule from the phenotype for a 3 × 3 job-shop scheduling problem. The X axis represents the execution time and the Y axis represents the machines 71
4.3 Gantt chart of the solution: (a) before applying the restricted swapping, (b) after applying restricted swapping and reevaluation 73
4.4 Fitness curves of the problems la21–la25 for the first 100 generations using our proposed algorithms 93
4.5 Average relative deviation with respect to different parameter sets tabulated in Table 4.8 96
4.6 Average relative deviation based on fixed mutation and variable crossover rate 97
4.7 Average relative deviation based on fixed crossover and variable mutation rate 98
5.1 Average relative deviation of RSH and SGR from the best results found by GA-SGR 117
5.2 Comparison of average relative deviation vs. the average downtime of each breakdown event 118
6.1 Basic framework of a standard decision support system 126
6.2 Internal structure of the implemented decision support system 129
6.3 Graphical user interface of the simple input panel options of the decision support system 133
6.4 Graphical user interface of the advanced input panel of the decision support system 133
6.5 Graphical user interface of the output panel of the decision support system 135
6.6 Machine breakdown warning and option for reactive scheduling 135
6.7 Rescheduled solution after a machine breakdown 136
A.1 la36 with 1 machine breakdown 148
A.2 la37 with 2 machine breakdowns 151
A.3 la38 with 3 machine breakdowns 154
A.4 la39 with 4 machine breakdowns 157
List of Tables

2.1 Example of a 2 × 5 FJSSP 15
2.2 Problem definitions 19
3.1 Representation of a binary chromosome for a sample 3 × 3 JSSP 43
3.2 Construction of the phenotype from the binary genotype and predefined sequences 46
3.3 Experimental parameters 52
3.4 Experimental results of the traditional GA based on 40 benchmark problems 52
3.5 Group-wise results comparison 55
4.1 Comparing our four algorithms for 40 test problems 76
4.2 Experimental results of four of our algorithms including the best solutions found from the literature 77
4.3 Comparing the algorithms based on ARD and SDRD with other algorithms 86
4.4 Group-wise comparison between GA and MA(GR-RS) 87
4.5 Statistical Significance Test (Student's t-Test) Result of GA, MA(GR), and MA(GR-RS) Compared to the GA 89
4.6 Individual contribution of the priority rules after 100, 250 and 1000 generations 91
4.7 Percentage relative improvement of the five problems (la21–la25) 94
4.8 Combination of different reproduction parameters 95
5.1 Best results with varying tolerance 113
5.2 Comparison of average relative percentage deviations from the best available result in the literature 114
5.3 Average number of gaps utilized versus the tolerance set 116
5.4 Comparison of average makespan of 100 breakdown scenarios for the delayed, right shifted, gap reduced reactive scheduling 119
5.5 %ARD of the best makespans considering the machine breakdown and unavailability compared to that of GA-SGR 120
6.1 A sample input file for DSS 130
B.1 Solution of the problem la01 (666) 159
B.2 Solution of the problem la02 (655) 159
B.3 Solution of the problem la03 (597) 159
B.4 Solution of the problem la04 (590) 160
B.5 Solution of the problem la05 (593) 160
B.6 Solutions of the problems la06 (926), la07 (890), and la08 (863) 161
B.7 Solutions of the problems la09 (951) and la10 (958) 162
B.8 Solution of the problem la11 (1222) 163
B.9 Solution of the problem la12 (1039) 163
B.10 Solution of the problem la13 (1150) 164
B.11 Solution of the problem la14 (1292) 164
B.12 Solution of the problem la15 (1207) 165
B.13 Solution of the problem la16 (945) 166
B.14 Solution of the problem la17 (784) 166
B.15 Solution of the problem la18 (848) 167
B.16 Solution of the problem la19 (842) 167
B.17 Solution of the problem la20 (907) 168
B.18 Solutions of the problems la21 (1079) and la22 (960) 169
B.19 Solutions of the problems la23 (1032) and la24 (959) 170
B.20 Solution of the problem la25 (991) 171
B.21 Solution of the problem la26 (1218) 172
B.22 Solution of the problem la27 (1286) 173
B.23 Solution of the problem la28 (1236) 174
B.24 Solution of the problem la29 (1221) 175
B.25 Solution of the problem la30 (1355) 176
B.26 Solution of the problem la31 (1784) 177
B.27 Solution of the problem la32 (1850) 178
B.28 Solution of the problem la33 (1719) 179
B.29 Solution of the problem la34 (1721) 180
B.30 Solution of the problem la35 (1888) 181
B.31 Solution of the problem la36 (1292) 182
B.32 Solution of the problem la37 (1434) 183
B.33 Solution of the problem la38 (1249) 184
B.34 Solution of the problem la39 (1251) 185
B.35 Solution of the problem la40 (1251) 186
List of Algorithms

3.1 Traditional Genetic Algorithms (TGA) 41
3.2 Local Harmonization 45
3.3 Global Harmonization 49
4.1 Algorithm to find out the bottleneck job 67
4.2 Algorithm for the partial reordering technique (PR) 69
4.3 Algorithm for the gap-reduction technique (GR) 70
4.4 Algorithm for the restricted swapping technique (RS) 72
5.1 Shifted Gap Reduction (SGR) Algorithm 107
5.2 Algorithm to identify a gap 108
5.3 Algorithm to recover from a machine breakdown 110
List of Acronyms

ARD: Average relative deviation
%ARD: Average relative deviation in percent scale
B&B: Branch-and-bound
CX: Cycle crossover
DSS: Decision support system
DBMS: Data-base management system
DGMS: Dialog-base management system
EA: Evolutionary algorithm
FCFS: First Come First Serve
FJSSP: Flexible job-shop scheduling problem
FSP: Flow-shop scheduling problem
GA: Genetic algorithm
GH: Global harmonization
GR: Gap reduction
GRASP: Greedy randomized adaptive search procedure
GSA: Genetic search approach
GUI: Graphical user interface
HGA: Hybrid genetic algorithm
JSSP: Job-shop scheduling problem
LS: Local search
LH: Local harmonization
LNRP: Longest Number of Remaining Operations
LOX: Linear operational crossover
LPT: Longest Processing Time
LRPT: Longest Remaining Processing Time
MA: Memetic algorithm
MB: Machine breakdown
MBMS: Model-base management system
MU: Machine unavailability
NP: Non-deterministic polynomial
OX: Ordered crossover
PBX: Position-based crossover
PMX: Partially mapped crossover
POP: Proximate optimality principle
PR: Partial reordering
RCL: Restricted candidate list
RD: Relative deviation
RS: Restricted swapping
RSH: Right shifting
SA: Simulated annealing
SB: Shifted bottleneck
SDRD: Standard deviation of relative deviations
%SDRD: Standard deviation of relative deviation in percent scale
SGR: Shifted gap reduction
SNRP: Smallest Number of Remaining Operations
SPT: Shortest Processing Time
SRPT: Shortest Remaining Processing Time
TGA: Traditional genetic algorithm
TL: Tabu list
TS: Tabu search
TSP: Traveling salesman problem
UI: User interface
VRP: Vehicle routing problem
List of Notations

N: Total number of jobs
M: Total number of machines
(m, n): Operation of job n in machine m
p: Index of an individual in the population
Qp(m, k): k-th operation in machine m of individual p
Qp(m, n): Index of (m, n) in individual p
O(n, k): k-th machine required by job n
O(n, m): Index of (m, n) in the problem description
T(n, m): Predefined execution time of (m, n)
S(m, n): Starting time of (m, n)
S(m, k): Starting time of the k-th operation in machine m of a solution
Fn: Finishing time of job n
F(m, n): Finishing time of (m, n) in a solution
F(m, k): Finishing time of the k-th operation in machine m of a solution
Cp(m, u, v): A binary bit representing the relationship between jobs u and v in machine m of individual p
P(t): The set of evaluated individuals at generation t
L: Length of a chromosome
Rx: Probability of crossover
Rm: Probability of mutation
Xp1 & Xp2: The two crossover points
D: Number of bits to mutate
ψ: Tolerance level for shifted gap reduction
Λ(m̂, t′, r′): Breakdown instance for machine m̂, at time t′, for the period of r′
Fmax: The objective function to minimize
JFn: Next available time of job n
MFm: Next available time of machine m in the solution
JBn: Available time of job n
MBm: Available time of machine m in the solution
G: Throughput time
E: Earliness
K: Tardiness
D: Due-date cost
Chapter 1

Introduction

This chapter contains a brief introduction to the research problem considered in this thesis, the motivation for carrying out this research, the specific objectives of this thesis, and the key contributions made. The chapter ends with the organization of the thesis.
1.1 Background
Scheduling is a challenging research topic in the Operations Research and Computer Science domain. This thesis deals with scheduling problems in the manufacturing area known as the job-shop scheduling problems (JSSPs). The JSSPs are wellknown combinatorial optimization problems, which consist of a finite number of jobs and machines. Each job consists of a set of operations that has to be processed, on a set of known machines, and where each operation has a known processing time. A schedule is a complete set of operations, required by a job, to be performed on different machines, in a given order. In addition, the process may need to satisfy other constraints such as (i) no more than one operation of any job can be executed simultaneously and (ii) no machine can process more than one operation at the same time. The objectives usually considered in JSSPs are the minimization of makespan, the minimization of tardiness, and the maximization of throughput. The total time between the starting of the first operation and the ending of the last operation, is 1
termed the makespan. In JSSPs, the size of the solution space grows exponentially with the number of machines, which makes it quite expensive to find the best makespan for larger problems. By a larger problem, we mean a higher number of jobs and/or a higher number of machines. Most JSSPs that have appeared in the literature assume ideal conditions. However, in practice, process interruptions such as machine breakdown and machine unavailability are very common, which makes JSSPs more complex to solve. There exist a number of conventional optimization methods for solving JSSPs, such as the integer programming method. However, the conventional methods are unable to solve larger problems due to the limitations of current computational power. Considering the complexity of solving JSSPs, with or without interruptions, and the limitations of existing methodologies, it seems that evolutionary computation based approaches would do better, as they have proven successful in solving many other combinatorial optimization problems. Genetic Algorithms (GAs) are widely known Evolutionary Algorithms that are used in practice. GAs and other EAs are population based search techniques that explore the solution space in a discrete manner. In these algorithms, the solutions evolve by applying reproduction operators. When GAs/EAs are hybridized with local search, they are known as Memetic Algorithms (MAs). Priority rules are widely used in the scheduling domain, either for problem solving on their own or for refining the solutions of another technique. In solving JSSPs, priority rules can be used as local search in conjunction with a genetic algorithm, which is essentially a memetic algorithm. Such approaches have proved efficient for solving job-shop scheduling problems.
1.2
Motivation and Scope of Research
In this section, we briefly discuss the scope of research in JSSPs in this thesis and our motivation for carrying out this research. In solving the job-shop scheduling problems, the determination of the best
makespan by reducing the machine idle time is a challenging task. The JSSPs are widely acknowledged as among the most difficult NP-complete problems, and are also well-known for their practical applications in many manufacturing industries. Over the last few decades, a good number of algorithms have been developed to solve JSSPs. However, no single algorithm is suitable for solving all kinds of JSSPs with both reasonably good solutions and reasonable computational effort. Thus, there is scope to analyze the difficulties of JSSPs as well as to design improved algorithms that may be able to solve them effectively. The search space of a 20-job, 20-machine problem is as big as a set of 5.278 × 10^367 solutions. Due to the limitations of conventional optimization techniques, it is very hard to solve such a problem optimally with the current computational capabilities. In fact, no algorithm has solved such a problem optimally to date (Zhang et al. 2008a). This motivates us to study the suitability of meta-heuristic methods, which may reduce the search space. Over the last few decades, a substantial amount of work aimed at solving JSSPs using genetic algorithms has been reported in the literature (Lawrence 1985; Biegel and Davern 1990; Nakano and Yamada 1991). The majority of the works propose to generate a set of random solutions and to reproduce them by applying reproduction operators. As the GA is a random evolutionary search process, premature convergence to a non-optimal solution is a major drawback. It is a common practice to attempt to hybridize GAs with different heuristic techniques to overcome this drawback (Shigenobu et al. 1995; Park et al. 2003; Della-Croce et al. 1995; Ombuki and Ventresca 2004). The commonly used heuristic techniques either swap two tasks, or reduce duplications to maintain the diversity of the solutions.
None of these or the other techniques attempted to identify the individual machine idle times (gaps) in the solutions, which have a strong negative effect on the quality of solutions. So there is scope to reduce these gaps in an intelligent way, which may result in an improvement in terms of both solution quality and computation. In almost all research, the classical JSSPs were considered, assuming there
will be no interruption during the process. However, in practice, process interruptions due to machine breakdown, unavailability of machines due to scheduled or unscheduled maintenance, etc., exist on the shop floor (Fahmy et al. 2008). The inclusion of process interruptions in the traditional JSSPs makes the problems more practical, but also more complex and more challenging. The simplest way to re-optimize the solutions after a breakdown is to apply right-shifting to the affected tasks (Leon et al. 1994). These schedules are generally poor, as right-shifting leaves many gaps in the schedule. So there is scope to reduce the gaps or improve the solution in reactive scheduling. Cost cutting is a standard practice in manufacturing planning. A small improvement in a manufacturing cycle, through better scheduling, would save productive machine hours, manpower and overhead costs. In addition, it would help to reduce bottlenecks and congestion to deliver jobs on time. As a result, customer satisfaction will be higher. A small saving from a schedule, when combined across all industries, may mean a saving of millions of dollars on a yearly basis. The research topic of this thesis is independent of country or regional boundaries, as it can be applied to any manufacturing industry in the world.
1.3
Objectives of this Thesis
The main objective of this research is to develop an evolutionary computation based approach for effectively solving practical job-shop scheduling problems. In the course of this research, we have divided the research objective into three sub-objectives, which are listed below along with the steps taken to achieve them.
Objective 1:
Developing hybrid algorithms to solve job-shop scheduling problem
effectively. To achieve this objective, we have completed the following steps:
• Study the conventional optimization and other algorithms for solving JSSPs;
• Study the evolutionary algorithms for solving combinatorial optimization problems, especially JSSPs;
• Develop a simple genetic algorithm to solve JSSPs;
• Study the different priority rules used in conventional heuristics for JSSPs, and develop new rules suitable for hybridizing with GA;
• Design two to three hybrid GAs for solving traditional JSSPs;
• Implement GA and Hybrid GAs, and carry out experimental study; and
• Analyze the performances of hybrid GAs and compare with that of simple GA.
Objective 2:
Revising the hybrid genetic algorithm for solving JSSPs with process
interruptions. To achieve this objective, we have completed the following steps:
• Study the relevant literature for JSSPs with process interruptions;
• Analyze and revise the key priority rule for further improvement of hybrid GA;
• Implement the revised algorithm and carry out experimental study for JSSPs under ideal conditions;
• Apply the revised algorithm to JSSPs with machine unavailability, and carry out experimental study;
• Develop a mechanism for reactive scheduling and apply it to JSSPs with machine breakdown, and carry out experimental study; and
• Analyze the results of the revised algorithm for JSSPs under process interruptions.
Objective 3:
Developing a decision support system for supporting the user in
solving JSSPs. To achieve this objective, we have completed the following steps:
• Study and analyze the literature related to decision support systems;
• Design a decision support system for solving JSSPs; and
• Implement the system by solving a number of JSSPs with or without process interruptions.
1.4
Key Contributions
This section summarizes the unique contributions made in this research. The thesis includes contributions of both scientific and practical significance. The scientific contributions made in this research lie in the area of genetic and memetic algorithms for solving classical job-shop problems and JSSPs in the presence of process interruptions.
• Three priority rules that work in conjunction with the genetic algorithm in the form of local search. This thesis introduces three memetic algorithms by combining three new priority rules with GA. These rules work as heuristic searches to accelerate the performance of GAs. Although memetic algorithms are known to be computationally expensive (compared to GAs), our research shows that the proposed memetic algorithms, as compared to the genetic algorithms, not only improve the quality of solutions, but also reduce the computation required. From the viewpoint of problem solving, for the test problems considered in this thesis, the proposed algorithm outperforms all key GA based approaches that have appeared in the literature.
• An improved shifting technique to reduce the gaps in the solutions. In this thesis, we have introduced an improved technique to reduce the existing gaps in a solution by filling them with tasks from their right
of the schedule, if they do not violate any precedence constraints. In this process, if a gap is not large enough to accommodate a selected task, the gap may be increased up to a certain tolerance limit by pushing all the tasks to its right. The research shows that the quality of solutions improves as the tolerance limit increases, up to a certain point. This is a new contribution to the scientific knowledge in the field.
• An algorithm for handling the process interruptions in JSSPs. The memetic algorithm is modified to deal with JSSPs with machine unavailability. A new mechanism is also developed to re-optimize the schedule after machine breakdowns. To analyze the effect of machine breakdowns and unavailability, we have experimented with up to four breakdowns with nearly equal cumulative breakdown duration. The experimental results show that multiple shorter scattered breakdowns have a lower impact on the revised schedule than a single continuous breakdown of the same duration. They also confirm that the effect of an interruption is lower when the information about the unavailability of a machine is known in advance. To the best of our knowledge, this type of analysis and these findings are new in the literature. The advancement of knowledge made in this thesis can be used as a benchmark for further improvement of scheduling algorithms. The contributions also include the development of a practical tool to support users in solving JSSPs.
• A decision support system to solve JSSPs with genetic and memetic algorithms. A decision support system is designed and implemented in this research to support users/decision makers in solving JSSPs, without needing an understanding of the complexity of the problems and their solution approaches. The system contains a dialog based environment to allow the user/decision maker to select an appropriate model, input parameters, and scenarios.
Although this is not a scientific contribution, it will attract practitioners to utilize the tool for their production scheduling.
• Better decision making tool. The decision making tool developed will not only produce better solutions, but will also support timely and informed decision making. It will help to improve the productivity of the machine shop and reduce the production cost through minimization of makespan. If required, the decision tool is also capable of producing sub-optimal solutions within a limited time frame.
1.5
Organization of the Thesis
The thesis is divided into seven chapters as follows:
• Chapter 1: Introduction
• Chapter 2: Literature Review
• Chapter 3: Solving Job-Shop Scheduling by Genetic Algorithms
• Chapter 4: Priority Rules for Solving JSSPs
• Chapter 5: GA for JSSPs with Interruptions
• Chapter 6: Decision Support System for Solving JSSPs
• Chapter 7: Conclusion and Future Works
Chapter 1 starts with a background of the research topic. It also discusses the motivation behind this research, the objectives of the thesis, and the contributions made to scientific knowledge. The last section of the chapter presents the organization of the thesis. Chapter 2 discusses the background literature related to the topics used in this thesis. The first part of the chapter includes brief descriptions of various scheduling problems, including the JSSPs, the definition of standard JSSPs along with a mathematical model, the objectives commonly considered in JSSPs, and the variants of JSSPs. The next part focuses on the relevant literature on solving JSSPs, which includes the traditional solution methodologies, the meta-heuristic techniques, and
the priority rules. The last part of the chapter provides an overview of the genetic algorithms with different search operators, and a brief review of solving JSSPs using GAs. Chapter 3 presents a standard genetic algorithm for solving the JSSPs. The details of chromosome representation, solution mapping and reproduction process are discussed. The chapter ends with the experimental results of test problems, and analysis and discussions of results. In Chapter 4, the genetic algorithm proposed in Chapter 3 is revised. The priority rules are discussed and three new priority rules are proposed. Three memetic algorithms are developed by combining the new priority rules with GA. The implementation aspects of the algorithms, and their experimental results along with necessary statistical analysis are discussed. In Chapter 5, the key priority rule is revised to enhance the performance of the developed memetic algorithms. Process interruptions, such as machine breakdown and unavailability, are discussed and introduced to the classical JSSPs. The memetic algorithm is further modified to accommodate JSSPs in the presence of machine unavailabilities. A new reactive scheduling technique is also developed to re-optimize the schedule after machine breakdowns. Details of the experiments with necessary analysis are presented in this chapter. In Chapter 6, a decision support system is designed that contains a set of models for solving the classical and interrupted JSSPs using genetic and memetic algorithms. The components and sub-components of a standard DSS, and the developed DSS, are discussed in this chapter. The final chapter, Chapter 7, gives a summary of the research carried out in this thesis and the key findings of this research. Finally, potential future research directions are indicated.
Chapter 2 Literature Review

2.1
Introduction
In this thesis, we have developed a genetic algorithm and proposed a number of priority rules to improve the performance of the GA. We have solved both standard job-shop scheduling problems (JSSPs) and JSSPs under process interruptions. In this chapter, we describe JSSPs with their objectives and constraints, and the variants of standard JSSPs. We also briefly discuss how JSSPs are solved using different deterministic or heuristic methods, along with the genetic algorithm (GA). The chapter is organized as follows. In Section 2.2, various scheduling problems including the JSSPs are discussed. The definition of standard JSSPs along with a mathematical model is provided. The objectives commonly considered in JSSPs, and the variants of JSSPs, are also discussed. Section 2.3 contains the solution methodologies for JSSPs, which include both deterministic and heuristic methods. A brief overview of the meta-heuristic techniques and the priority rules that are considered for solving JSSPs is given in this section. Section 2.4 introduces genetic algorithms and briefly describes different search operators used in JSSPs. Section 2.5 provides a brief review of solving JSSPs using genetic algorithm based methodologies.
2.2
Scheduling Problems
Sequencing and scheduling are very old and well-known optimization problems. These problems exist whenever a decision is needed to arrange a number of tasks to be performed by a number of resources. In terms of optimization problem categories, they are classified as combinatorial optimization problems. Some specific well-known and challenging combinatorial optimization problems in this domain are: job-shop scheduling (JSSP), flow-shop scheduling (FSP), traveling salesman (TSP), vehicle routing (VRP), knapsack, timetabling, aircraft landing, and others. Job-shop scheduling was previously known as industrial scheduling. The JSSP is one of the combinatorial optimization problems that has been addressed significantly in the literature. It is widely acknowledged as one of the most difficult NP-hard problems (Allahverdi et al. 2008). Even a benchmark JSSP of size 10 × 10, proposed by Muth and Thompson (1963), remained unsolved before 1985, and no 20 × 20 benchmark has been solved optimally to date (Zhang et al. 2008a). Flow-shop, which is a restricted version of JSSP, can be reduced to the traveling salesman problem (TSP), where TSP is itself NP-hard (Nakano and Yamada 1991). The most simplified and restricted version of the problem is the single-machine scheduling problem, which is quite old, but which can still be a challenging optimization problem (Sidney 1977). Single-machine scheduling problems consist of a set of jobs where every job is assigned a target interval (ai, bi), where ai is the target start time and bi is the target end time. There is a penalty for earliness and tardiness. The objective of the problem is to schedule the jobs on a single machine so as to reduce the earliness and/or tardiness. For a classical single-machine problem with N jobs, N! different solutions exist; this set is called the solution space. Thus, the complexity is O(N!).
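The O(N!) complexity can be made concrete with a brute-force sketch (the four-job data below is invented for illustration, not taken from the thesis): enumerate every permutation of jobs on one machine and keep the one with minimum total tardiness.

```python
from itertools import permutations

def total_tardiness(order, proc_time, due):
    """Sum of tardiness over jobs processed back-to-back in 'order'."""
    t, tardiness = 0, 0
    for j in order:
        t += proc_time[j]
        tardiness += max(0, t - due[j])
    return tardiness

# Hypothetical 4-job instance: processing times and due dates.
proc = {0: 3, 1: 2, 2: 4, 3: 1}
due = {0: 4, 1: 3, 2: 9, 3: 10}

best = min(permutations(proc), key=lambda o: total_tardiness(o, proc, due))
print(best, total_tardiness(best, proc, due))  # (1, 0, 2, 3) 1
```

Even for this tiny instance 4! = 24 orders must be evaluated; at 20 jobs the count exceeds 10^18, which is exactly why exhaustive search is abandoned for heuristics.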
The next (intermediate) version is the flow-shop scheduling problem, which can be viewed as a collection of single-machine scheduling problems. An FSP contains M machines and N jobs, where each job requires each of the machines, and in the same order. The processing
time of each job on each machine is known. The objective is to find a permutation of the jobs that minimizes the overall schedule time (which is called the makespan). Work on solving FSPs optimally using conventional optimization methods started more than four decades ago (Ignall and Schrage 1965; McMahon and Burton 1967). FSP is reported to be strongly NP-hard in terms of its complexity (Lin and Cheng 2001). JSSP is a generalization of FSP in which the order of the machines for every job may differ. It is identical to FSP only in terms of the size of the problem. In a JSSP, N jobs need to be processed by M machines, where the order of the machines for each job is given in the problem definition. A JSSP reduces to an FSP if the order of the machines is the same for all jobs. As the size of the solution space is (N!)^M, JSSPs have exponential complexity.
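The distinction between FSP and JSSP can be captured in a small representation sketch (the 3 × 3 instance is invented): each job lists its own machine order, and a flow shop is the special case where all orders coincide.

```python
# Hypothetical 3-job, 3-machine instance: for each job, an ordered list
# of (machine, processing_time) pairs. The machine order differs per
# job, which is what distinguishes a JSSP from a flow shop.
jobs = {
    "j1": [("m1", 3), ("m2", 2), ("m3", 2)],
    "j2": [("m2", 4), ("m1", 4), ("m3", 1)],
    "j3": [("m3", 2), ("m2", 3), ("m1", 3)],
}

# A flow shop would use the same machine order for every job.
flow_shop = all(
    [m for m, _ in ops] == [m for m, _ in jobs["j1"]] for ops in jobs.values()
)
print(flow_shop)  # False: the machine orders differ, so this is a JSSP
```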
2.2.1
Job-Shop Scheduling
A classical job-shop scheduling problem contains N jobs and M machines, where each job is a combination of M operations. Also, every operation is assigned to a particular machine, and each machine is capable of processing only one kind of operation. In addition, there are some other basic criteria that have to be considered for the classical JSSPs, as stated below.
• The number of operations for each job is finite.
• The processing time for each operation on a particular machine is defined.
• There is a pre-defined sequence of operations that has to be maintained to complete each job.
• Delivery times of the products are undefined.
• There is no setup cost and no tardiness cost.
• A machine can process only one job at a time.
• Each job visits each machine only once.
• No machine can deal with more than one type of task.
• The system cannot be interrupted until each operation of each job is finished.
• No machine can halt a job and start another job before finishing the previous one.
• Each and every machine has full efficiency.
2.2.2
Variants of JSSPs
The classical JSSPs consist of the basic constraints, which include the precedence of operations, machine capacity, preemption, etc. In practice, the problems come with many other constraints and process interruptions. The additional constraints could be due dates, maximum setup costs and others. The process interruptions could be due to machine unavailability and breakdown, changes of due dates, dynamic arrival of new jobs, and others. Advanced industrial problem-solving methodologies are developed considering all these issues. JSSPs are called flexible when the machines are capable of processing multiple types of operations. Based on this information, JSSPs can be classified into the three categories discussed below.
2.2.2.1
Additional Constraints in JSSPs
In conjunction with the basic constraints, JSSPs may contain many other constraints. For example, Nuijten and Aarts (1996) considered multiple capacities of machines, where the machines are assigned a fixed capacity limit. Each operation comes with a size (which can be treated as a weight). A machine can perform multiple operations if their cumulative size is less than the capacity of the corresponding machine. Xi et al. (2009) and Ren et al. (2009) considered a similar problem with multiple resource constraints. These methods are effective when different resources are available with a defined maximum load. Balas et al. (2008) solved JSSPs in the presence of sequence dependent setup time and release
date constraints using a shifting bottleneck procedure. They reported that the presence of such constraints certainly changes the behavior of the problems.
2.2.2.2
JSSPs under Process Interruptions
Any interruption changes the state of the generated schedules. The rescheduling process to recover from an interruption is commonly known as reactive scheduling. In fact, such an interruption is not a periodic event. However, the possibility of the event may be predicted using relevant historical data. Suwa and Sandoh (2007) used different statistical distributions, such as uniform, exponential, and Poisson, to generate interruption scenarios due to machine breakdown. Fahmy et al. (2008) experimented on JSSPs with a few interruptions, such as machine breakdown, process time variation, urgency of existing jobs, and order cancellation. Machine breakdown is a scenario where a particular machine becomes inoperative at some instant and for a particular period of time. Liu et al. (2005), Subramaniam et al. (2005), Suwa and Sandoh (2007), and many others reported solving JSSPs under process interruptions. Process time is the duration required to execute any particular operation of a job. It may vary as an effect of other difficulties, such as strength degradation of machines and requirements for additional resources. Sometimes the priority of a certain job might increase due to a sudden change in requirements. In such cases, the tasks need to be rescheduled to optimize the delivery time of the job with the revised priority. Similar action is required when any job is canceled in the middle of its operations.
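Interruption scenarios of the kind used by Suwa and Sandoh (2007) can be sampled from standard distributions. A hedged sketch using Python's random module (the parameters and the one-breakdown-per-machine scheme are illustrative assumptions, not taken from the thesis):

```python
import random

def breakdown_scenarios(machines, horizon, mean_duration, seed=0):
    """Sample one breakdown per machine: uniformly distributed start
    time over the planning horizon, exponentially distributed repair
    duration. Returns (machine, start_time, duration) triples."""
    rng = random.Random(seed)  # seeded for reproducible scenarios
    events = []
    for m in machines:
        start = rng.uniform(0, horizon)
        duration = rng.expovariate(1.0 / mean_duration)
        events.append((m, start, duration))
    return events

for m, t, r in breakdown_scenarios(["m1", "m2"], horizon=100, mean_duration=5):
    print(f"machine {m} down at t={t:.1f} for {r:.1f}")
```

A reactive scheduler would consume such triples, right-shift or re-optimize the affected tasks, and compare the resulting makespans across scenarios.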
2.2.2.3
Flexibility in JSSPs
Practical JSSPs may include flexibility in their machines, where the machines are capable of processing more than one kind of operation. In flexible JSSPs (FJSSPs), the number of operations needed to complete a job may vary. Zhang et al. (2008b) proposed a variable neighborhood GA to solve FJSSPs, where they considered a variable number of operations in each job. They also defined a set of alternative machines for each of the operations. As considered by Pezzella et al. (2008), the operation time of a task may vary when processed using different machines. However, they considered a fixed number of operations for all jobs. In such flexible JSSPs, different heuristic rules are applied to select the appropriate machine for a given task. An example of a flexible JSSP is shown in Table 2.1. The table shows the alternative machines for each operation of each job with the required operation time. A '-' indicates that the corresponding machine is incapable of processing that operation.

Table 2.1: Example of a 2 × 5 FJSSP

Job  Operation  m1  m2  m3  m4  m5
j1   o11         8   4   -   2   5
j1   o12         4   -   -   7   3
j2   o21         -   5   2   8   -
j2   o22         3   8   2   7   1
j2   o23         9   7   -   3   -
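Table 2.1 maps naturally onto a small data structure, with None marking an incapable machine; a simple "fastest capable machine" rule can then be written in a few lines (this encoding and the greedy rule are illustrative, not the thesis's own method):

```python
# Alternative-machine table from Table 2.1: times indexed m1..m5,
# None where the machine cannot process the operation.
fjssp = {
    ("j1", "o11"): [8, 4, None, 2, 5],
    ("j1", "o12"): [4, None, None, 7, 3],
    ("j2", "o21"): [None, 5, 2, 8, None],
    ("j2", "o22"): [3, 8, 2, 7, 1],
    ("j2", "o23"): [9, 7, None, 3, None],
}

def fastest_machine(op):
    """Greedy rule: pick the capable machine with the shortest time."""
    times = fjssp[op]
    m = min((i for i, t in enumerate(times) if t is not None),
            key=lambda i: times[i])
    return f"m{m + 1}", times[m]

print(fastest_machine(("j1", "o11")))  # ('m4', 2)
```

Greedy machine selection ignores machine workload, which is why practical FJSSP heuristics combine it with load-balancing rules.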
2.2.3
Objectives for Solving JSSPs
In practice, there are several objectives used for standard JSSPs. The most widely used objective in solving JSSPs is to minimize the makespan. As earliness and tardiness are important issues in the single-machine scheduling problem (Sidney 1977), many researchers optimize earliness and tardiness in JSSPs (Feng and Lau 2008; Hoksung et al. 2008; Li et al. 2008). Throughput and due dates are also considered as objectives in some cases. These objectives are briefly discussed below.
2.2.3.1
Makespan
The most common objective in solving JSSPs is to minimize the makespan (Adams et al. 1988; Binato et al. 2001; Della-Croce et al. 1995; Wang and Brunn 2000; Yamada 2003). Since the early stages of industrial scheduling problems, makespan has been the most popular objective for solving JSSPs. In a standard schedule, every task
is represented by its required machine, starting time, and finishing time. The makespan can be calculated from this information for a given schedule. Makespan is the maximum of the set F containing the finishing times of all completed jobs. The objective function can be represented as

Minimize Fmax    (2.1)

subject to

Fmax ≥ Fn for all n, where Fmax ∈ F    (2.2)
Makespan minimization may seem similar to minimizing machine delay. However, in some cases, forcing certain tasks to be delayed may improve the makespan. Such processes are termed delay or parameterized active scheduling (Goncalves et al. 2005).
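Computing the makespan of a given schedule is straightforward: it is the maximum finishing time over all tasks. A minimal sketch (the tuple-based schedule format is assumed for illustration):

```python
def makespan(schedule):
    """Makespan = max finishing time over all tasks.
    'schedule' is a list of (job, machine, start, finish) tuples
    (an illustrative format, not the thesis's own representation)."""
    return max(finish for _, _, _, finish in schedule)

# Tiny made-up schedule: two jobs on two machines.
sched = [("j1", "m1", 0, 3), ("j2", "m1", 3, 5), ("j1", "m2", 3, 7)]
print(makespan(sched))  # 7
```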
2.2.3.2
Throughput Time
Throughput time is the cumulative sum of the completion times of the jobs. Makespan minimization concentrates only on the finishing time of the latest job, whereas throughput minimization takes every job into account. Throughput is a challenging issue in flow-shop as well as job-shop scheduling. It is calculated by

G = ∑_{n=1}^{N} Fn    (2.3)

where Fn is the completion time of job n.
2.2.3.3
Earliness
Earliness is the measurement, in time, of how early a job is completed in comparison to its proposed completion time. This objective may appear in conjunction with tardiness. In some cases, a penalty applies to the early start of a job (Feng and Lau 2008). To solve these problems optimally, the cumulative penalty needs to be minimized.
Earliness is calculated by

E = ∑_{n=1}^{N} (F̄n − Fn) × αn, where αn = 1 when F̄n > Fn and αn = 0 otherwise    (2.4)

where F̄n and Fn are respectively the proposed and actual completion times of job n.
2.2.3.4
Tardiness
Tardiness is also known as lateness. It is the difference between the defined delivery time and the actual release time. Tardiness can be measured by the unit of time delay or penalty value. When this objective is considered, every job comes with some customer demands, capacity constraints, availability of resources, etc. (Mattfeld and Bierwirth 2004). Tardiness can be represented by
T = ∑_{n=1}^{N} (Fn − F̄n) × αn, where αn = 1 when Fn > F̄n and αn = 0 otherwise    (2.5)

where F̄n and Fn are respectively the proposed and actual completion times of job n.
2.2.3.5
Due-Date Cost
Due-dates are the release dates of the jobs. In practice, when a job cannot be delivered within the due date, a negotiation applies. The value of that negotiation depends on the length of the delay after the due-date. The due-date cost can be represented by
D = ∑_{n=1}^{N} (In − Īn) × wn × αn, where αn = 1 when In > Īn and αn = 0 otherwise    (2.6)

where In and Īn are respectively the proposed due time and the due-date of job n, and wn is the unit cost of delay after the due-date.
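Equations (2.3)–(2.6) translate directly into code. The sketch below assumes completion and due times are given as parallel lists (an illustrative data format; the three-job numbers are invented):

```python
def throughput_time(F):
    """Eq. (2.3): sum of completion times Fn over all jobs."""
    return sum(F)

def earliness(F_bar, F):
    """Eq. (2.4): total earliness where the actual completion Fn
    precedes the proposed completion F_bar_n."""
    return sum(fb - f for fb, f in zip(F_bar, F) if fb > f)

def tardiness(F_bar, F):
    """Eq. (2.5): total lateness where Fn exceeds F_bar_n."""
    return sum(f - fb for fb, f in zip(F_bar, F) if f > fb)

def due_date_cost(I_bar, I, w):
    """Eq. (2.6): per-job unit cost w_n times the delay past the
    due-date I_bar_n."""
    return sum((i - ib) * wn for ib, i, wn in zip(I_bar, I, w) if i > ib)

# Hypothetical 3-job data: proposed vs actual completion times.
F_bar, F = [10, 12, 15], [8, 14, 15]
print(throughput_time(F), earliness(F_bar, F), tardiness(F_bar, F))  # 37 2 2
print(due_date_cost([10, 12, 15], [8, 14, 15], [1, 2, 1]))  # 4
```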
2.3
Solving JSSPs
Classical JSSPs with N jobs and M machines have (N!)^M possible solutions, although not all of these are feasible. It is possible to explore the entire solution space for smaller problems. However, as the problem has exponential complexity, the solution space expands exponentially with an increase in either the number of machines or the number of jobs. For example, a 5 × 5 JSSP has 24,883,200,000 possible solutions, while the number is 139,314,069,504,000,000 for a 6 × 6 problem, which is 5,598,720 times bigger. These problem sizes are considered very small, whereas large scale problems may have 100 or more jobs with 50 or more machines. Even with today's computing power, it is impossible to explore the entire solution space for some of these problems. However, many attempts have successfully been made to reduce the search space in finding optimal or near optimal solutions within a reasonable period of time.
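The counts quoted above follow from the (N!)^M formula:

```python
import math

def jssp_solution_space(n_jobs, n_machines):
    """(N!)^M: the number of candidate solutions of a classical JSSP."""
    return math.factorial(n_jobs) ** n_machines

small = jssp_solution_space(5, 5)   # (5!)^5
larger = jssp_solution_space(6, 6)  # (6!)^6
print(small)            # 24883200000
print(larger // small)  # 5598720
```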
2.3.1
Classical Optimization Approaches
Classical methods are the exact methods which guarantee optimality, but still have exponential complexity. The integer programming approach is a well-known approach for mathematically formulating scheduling problems, including JSSPs. The formulations are then solved mathematically using conventional optimization approaches, such as Branch and Bound, Cutting Plane, and Branch and Cut.
Mathematical Model

The main considerations of a job-shop problem are the machine capacity, preemption, the flow of the operations, and others. It is considered that a machine can only perform a particular type of operation. Thus, a machine is able to execute just a single operation of a job. The operations are non-preemptive; an operation cannot be paused and resumed once it has been started. The execution time for each of the operations is defined. The setup cost is negligible. Any operation can only be started if the preceding operations of the same job are complete. Based
on these considerations and the problem described in Section 2.2.1, and using the notations tabulated in Table 2.2, the integer programming formulation of JSSPs can be represented as follows.

Table 2.2: Problem definitions

Notations   Definitions
(m, n)      Task representing the operation of job n in machine m
N           Total number of jobs
M           Total number of machines
m̃(r)        Machine required by the r-th operation of a job
T(n, m)     Processing time of (m, n)
S(m, n)     Start time of (m, n) in a solution
F(m, n)     Finishing time of (m, n) in a solution
g_mnk       Gap between (m, k) and (m, n) in machine m
P_mn        Set of tasks immediately following (m, n)
δ_mn        Variable that equals 1 if (m, n) is active in machine m
r_mn        The breakdown event following the task (m, n)
t_mn        Recovery time for the breakdown r_mn
x_mn        Breakdown flag; equals 1 if r_mn exists and 0 otherwise
Equation (2.9) ensures that no operation overlaps with any other operation on the same machine. Equation (2.10) ensures that no two operations of the same job overlap, as well as maintaining the precedence constraint. Equation (2.12) confirms the activation of at most a single operation in a machine. Equation (2.11) matches the execution time of each operation in the solution with that of the problem definition. And Equation (2.13) ensures that no operation starts before the starting time of the schedule.

Minimize Fmax    (2.7)

subject to

Fmax ≥ F(m, n), (m = 1, 2, ..., M; n = 1, 2, ..., N)    (2.8)
F(m, n) + g_mnk ≤ S(m, k), (m = 1, 2, ..., M; n = 1, 2, ..., N − 1; k ∈ P_mn)    (2.9)
F(m̃(r), n) ≤ S(m̃(r + 1), n), (r = 1, 2, ..., M − 1; n = 1, 2, ..., N)    (2.10)
F(m, n) − S(m, n) = T(n, m), (m = 1, 2, ..., M; n = 1, 2, ..., N)    (2.11)
∑_{n=1}^{N} δ_mn ≤ 1, (m = 1, 2, ..., M)    (2.12)
S(m, n) ≥ 0, ∀m, ∀n    (2.13)
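The constraints above can be checked mechanically for a candidate schedule. A sketch assuming a simple dictionary-based format (S and F map (machine, job) pairs to start and finish times; this representation is illustrative, not the thesis's):

```python
def satisfies_constraints(jobs, S, F):
    """Verify a schedule against constraints (2.9)-(2.13).
    jobs[n] is the ordered list of (machine, processing_time) of job n."""
    for n, ops in jobs.items():
        for r, (m, t) in enumerate(ops):
            if F[(m, n)] - S[(m, n)] != t:      # Eq. (2.11): duration
                return False
            if S[(m, n)] < 0:                   # Eq. (2.13): no early start
                return False
            if r > 0:
                prev_m = ops[r - 1][0]
                if F[(prev_m, n)] > S[(m, n)]:  # Eq. (2.10): precedence
                    return False
    machines = {m for ops in jobs.values() for m, _ in ops}
    for m in machines:
        spans = sorted((S[(m, n)], F[(m, n)])
                       for n, ops in jobs.items()
                       if any(mm == m for mm, _ in ops))
        # Eqs. (2.9)/(2.12): no two operations overlap on one machine.
        if any(s2 < e1 for (_, e1), (s2, _) in zip(spans, spans[1:])):
            return False
    return True

# Tiny 2-job, 2-machine example (made up).
jobs = {1: [("m1", 2), ("m2", 3)], 2: [("m2", 2), ("m1", 1)]}
S = {("m1", 1): 0, ("m2", 1): 2, ("m2", 2): 0, ("m1", 2): 2}
F = {("m1", 1): 2, ("m2", 1): 5, ("m2", 2): 2, ("m1", 2): 3}
print(satisfies_constraints(jobs, S, F))  # True
```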
Due to the exponential complexity of these algorithms, heuristics based on conventional optimization concepts are widely used for solving scheduling problems. In the early stages of solving JSSPs, Akers Jr and Friedman (1955) introduced an approach in which they discarded the solutions that were technically infeasible, in order to reduce the number of programs required to evaluate the solutions. Based on this idea, Giffler and Thompson (1960) proposed a two-fold method to generate the set of active schedules, i.e., schedules in which no operation can be shifted towards the left (started earlier) without delaying another operation. Later, a branch-and-bound algorithm was proposed to solve JSSPs (Ashour and Hiremath 1973). This algorithm explores a subset of the solutions and converges towards the exact optimal solution by branching from the near-optimal solutions. As these algorithms still suffer from exponential complexity, they are not suitable for larger problems.
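A simplified sketch in the spirit of Giffler and Thompson's method: at each step, find the schedulable operation with the earliest possible completion time, form the conflict set on its machine, and schedule one operation from that set. Breaking ties by earliest start, as done here, is a simplification of the original method, which permits any choice from the conflict set:

```python
def giffler_thompson(jobs):
    """Build one schedule via a simplified Giffler-Thompson dispatch.
    jobs[j] = ordered list of (machine, proc_time) for job j."""
    next_op = {j: 0 for j in jobs}    # index of next operation per job
    job_ready = {j: 0 for j in jobs}  # time each job becomes available
    mach_ready = {}                   # time each machine becomes free
    schedule = []
    while any(next_op[j] < len(jobs[j]) for j in jobs):
        cands = []
        for j in jobs:
            if next_op[j] < len(jobs[j]):
                m, p = jobs[j][next_op[j]]
                s = max(job_ready[j], mach_ready.get(m, 0))
                cands.append((s + p, s, m, j, p))
        c_star, _, m_star, _, _ = min(cands)  # earliest completion
        # Conflict set: candidates on m_star that can start before c_star.
        conflict = [c for c in cands if c[2] == m_star and c[1] < c_star]
        _, s, m, j, p = min(conflict, key=lambda c: c[1])
        schedule.append((j, m, s, s + p))
        job_ready[j] = mach_ready[m] = s + p
        next_op[j] += 1
    return schedule

# Tiny 2-job, 2-machine instance (made up).
inst = {"j1": [("m1", 3), ("m2", 3)], "j2": [("m2", 2), ("m1", 2)]}
sched = giffler_thompson(inst)
print(max(end for *_, end in sched))  # makespan 6
```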
2.3.2 Meta-Heuristic Methods
To overcome the limitations of the traditional methods, heuristic and meta-heuristic methods have been more recently proposed. Meta-heuristic methods are a class of approximation methods designed to handle hard combinatorial optimization problems where the classical algorithms are not effective. These methods use iterative techniques that explore the solution space, starting from an initial position, using different operators. Although meta-heuristic methods do not guarantee optimality, they provide good quality solutions in a reasonable time period. A meta-heuristic is defined as follows:

Definition 1: A meta-heuristic is an iterative generation process which guides a subordinate heuristic by intelligently combining different concepts for exploring and exploiting the search space, using learning strategies to structure information in order to efficiently find near-optimal solutions.

The meta-heuristic methods widely accepted for solving JSSPs are: Tabu Search (TS) (Glover 1989), Shifting Bottleneck (SB) (Adams et al. 1988), Greedy Randomized Adaptive Search Procedure (GRASP) (Feo and Resende 1989), Simulated Annealing (SA) (Kirkpatrick et al. 1983), and Genetic Algorithms (GA) (Holland 1975). These methods are briefly discussed below.
2.3.2.1 Tabu Search
Tabu search is a neighborhood search method in which the search proceeds by generating neighboring solutions from some initial solutions. A tabu-list (TL) is maintained to keep track of the solutions that have already been explored. The list is updated after certain iterations or after satisfying some specific criteria. Details about the basic structure of TS are available in (Glover 1989, 1990). Attempts to solve JSSPs by using TS were made by Dell'Amico and Trubian (1993). They proposed a bi-directional method for generating the initial solutions. The operations are chosen based on priority rules to generate the schedules. In the iterative process, a number of neighborhood solutions are generated from the initial solutions by changing some local properties of those solutions. The new solutions are evaluated and the one with the best fitness is accepted for the next generation. That solution is marked as forbidden for the next few generations and is listed in the TL to prohibit further exploration of the same solutions. Barnes and Chambers (1995) used a similar approach but proposed using 14 different priority dispatching rules to generate the initial solutions. This technique gives the extra advantage of exploring a wider variety of solutions. It also increases the chance of generating solutions on the hill where the optimal solution resides. Taillard (1994) implemented TS by using the neighborhood structure
proposed by Van Laarhoven et al. (1992). The author concluded with the observation that the algorithm is more efficient for rectangular problems. Nowicki and Smutnicki (1996) also used Taillard's approach and improved the neighborhood selection strategy. It gives faster convergence to the optimum (e.g., 30 seconds for the FT10 instance). Work is still ongoing to improve the neighborhood structure. For example, Zhang et al. (2007) modified the neighborhood structure defined by Van Laarhoven et al. (1992), in which a move is defined by inserting an operation at either the beginning or the end of a block, or by moving either the first or the last operation of a critical block to an internal position within another block. The authors proposed integrating SA with TS, where SA was used to find the best elite solution, which acts as the initial solution. As TS is attracted to the region around the elite solution, the neighborhood search is improved.
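The basic TS loop (neighborhood moves, tabu list, aspiration) can be sketched as follows. This is a generic illustration over an operation-based sequence encoding with swap moves and position-pair tabu attributes; the decoder, move set, tenure value, and aspiration criterion are assumptions made for the sketch, not any of the cited authors' implementations.

```python
import itertools
import random

def decode(seq, jobs):
    """Makespan of an operation-based sequence (job IDs, one per operation)."""
    nxt = [0] * len(jobs)
    job_t = [0] * len(jobs)
    mach_t = {}
    for j in seq:
        m, d = jobs[j][nxt[j]]
        nxt[j] += 1
        start = max(job_t[j], mach_t.get(m, 0))
        job_t[j] = mach_t[m] = start + d
    return max(job_t)

def tabu_search(jobs, iters=50, tenure=5, seed=1):
    rng = random.Random(seed)
    cur = [j for j, route in enumerate(jobs) for _ in route]
    rng.shuffle(cur)                       # random feasible starting sequence
    best, best_cost = cur[:], decode(cur, jobs)
    tabu = {}                              # (i, k) -> iteration until which forbidden
    for it in range(iters):
        move, move_cost, move_seq = None, float("inf"), None
        for i, k in itertools.combinations(range(len(cur)), 2):
            if cur[i] == cur[k]:
                continue                   # swapping identical job IDs changes nothing
            cand = cur[:]
            cand[i], cand[k] = cand[k], cand[i]
            cost = decode(cand, jobs)
            # aspiration: a tabu move is admissible only if it yields a new best
            if tabu.get((i, k), -1) > it and cost >= best_cost:
                continue
            if cost < move_cost:
                move, move_cost, move_seq = (i, k), cost, cand
        if move is None:
            break
        cur = move_seq
        tabu[move] = it + tenure           # forbid this swap for `tenure` iterations
        if move_cost < best_cost:
            best, best_cost = cur[:], move_cost
    return best, best_cost
```

Note that the best admissible move is taken even when it is worse than the current solution, which is how TS escapes local optima.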
2.3.2.2 Shifting Bottleneck
In classical JSSPs, the machine that takes the longest time to complete its set of jobs (and hence determines the makespan) is usually considered the bottleneck machine. In this sense, the machine having the maximum time (set-up, operation and idle) involved in the process is termed the bottleneck machine. However, the definition may vary with the objectives of the scheduling problems. This technique identifies the bottleneck machine and performs the necessary swapping or reordering to improve the quality of the solution. It starts by arranging the machines according to a specific order, then identifies the first bottleneck machine and schedules it optimally. Finally, it selects the next machine in the order and updates the starting times of the jobs that have already been scheduled. The main purpose of the technique is to identify the best order of the machines. The most frequently used strategy is to rank the machines according to their criticalness, which can be identified by the longest processing time. Adams et al. (1988) proposed to solve M-machine JSSPs as M different single-machine scheduling problems with local re-optimization. They sequence the machines one at a time, consecutively. For each machine not yet sequenced, they
solve to optimality a one-machine scheduling problem which relaxes the original problem, and use the outcome both to rank the machines and to sequence the machine with highest rank. Every time a new machine has been sequenced, they proposed to re-optimize the sequence of each previously sequenced machine that is susceptible to improvement by again solving a one-machine problem. But a monotonic decrease of the makespan is not guaranteed by the local re-optimization technique. To overcome this problem, Dauzere-Peres and Lasserre (1993) proposed to schedule the machines one-by-one, which never gives a worse solution than the approach proposed by Carlier and Pinson (1989). Wenqi and Aihua (2004) proposed to identify the bottleneck machine by counting the critical paths from the disjunctive graph. They improved the SB by applying a backtracking strategy for selecting the bottleneck machines.
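The machine-ranking step can be illustrated with the simple workload-based criticalness measure mentioned above. This is only a sketch of the ranking idea: the full shifting bottleneck heuristic solves a one-machine relaxation to optimality for each unsequenced machine, which is not reproduced here.

```python
def bottleneck_order(jobs):
    """Rank machines by total processing time (a common 'criticalness' proxy),
    most heavily loaded first. `jobs` is a list of (machine, duration) routes."""
    load = {}
    for route in jobs:
        for m, d in route:
            load[m] = load.get(m, 0) + d
    return sorted(load, key=load.get, reverse=True)

# Toy instance: machine 0 carries 3 + 4 = 7 time units, machine 1 carries 4.
jobs = [[(0, 3), (1, 2)], [(1, 2), (0, 4)]]
print(bottleneck_order(jobs))  # [0, 1]
```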
2.3.2.3 GRASP
Each iteration of the Greedy Randomized Adaptive Search Procedure (GRASP) consists of two phases: construction and local search. In the construction phase, a feasible solution is built one element at a time; the neighborhood of that solution is then explored by local search. In every step of the construction, the next element to be added is determined by ordering all of the candidate elements with respect to a greedy function that measures the benefit of selecting each element. This list is called the restricted candidate list (RCL). The algorithm repeatedly chooses one of the best candidates from the list at random, but not necessarily the single best. This allows different solutions to be obtained in each GRASP iteration. However, the solutions generated in the GRASP iterations do not guarantee optimality. Local search (LS) algorithms help to improve the solutions in such cases. Binato et al. (2001) proposed GRASP for solving JSSPs. The authors proposed to select the operation which gives the minimum increment of schedule time at that instant. This technique may not work in all cases, as it reduces the schedule time for one machine but may delay some operations on other machines. They proposed a local search that identifies the longest path in the disjunctive graph and then swaps operations on the critical paths to improve the makespan. GRASP
has the drawback that it does not take any information from the previous iterations. To address this, the authors proposed an intensification technique which keeps track of the elite solutions (those with the best fitness values) and includes a new solution in the record if it is better than the worst in the elite list. They also applied the proximate optimality principle (POP) to avoid errors early in the construction process, as these may lead to errors in the following operations.
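The greedy randomized construction with an RCL can be sketched as follows. This is a generic illustration; the candidate costs, the alpha threshold rule, and the dictionary interface are assumptions for the sketch and do not reproduce the JSSP-specific construction of Binato et al. (2001).

```python
import random

def grasp_construct(costs, alpha=0.3, rng=random):
    """Greedy randomized construction: repeatedly build an RCL of the best
    remaining candidates and pick one of them at random.
    `costs` maps candidate -> greedy cost (lower is better); alpha in [0, 1]
    widens the RCL (0 = pure greedy, 1 = pure random)."""
    remaining = dict(costs)
    solution = []
    while remaining:
        lo, hi = min(remaining.values()), max(remaining.values())
        threshold = lo + alpha * (hi - lo)
        rcl = [c for c, v in remaining.items() if v <= threshold]
        pick = rng.choice(rcl)     # randomization over the best candidates
        solution.append(pick)
        del remaining[pick]
    return solution
```

With alpha = 0 the construction is purely greedy; repeated calls with alpha > 0 yield the diverse starting solutions that each GRASP iteration passes to local search.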
2.3.2.4 Simulated Annealing
Simulated annealing (SA) is an enhanced local search algorithm which accepts not only better but also worse neighborhood solutions with a certain probability. The difference between a traditional local search and SA is that the local search absolutely rejects the worse solutions. The acceptance probability of the worse solutions is determined by the temperature of the SA, which decreases as the algorithm progresses. The performance of the algorithm depends on its initial temperature, evolution strategy, stopping criteria, etc. SA is effective in cases where the population has low diversity. The local search algorithm starts with an initial solution and then searches through the solution space by iteratively generating a new solution near the current solution (Aarts et al. 2007). A neighborhood function, N(s), is used to determine a set of neighboring solutions, where the set is a subset of the entire set of solutions. Van Laarhoven et al. (1992) proposed an approximation algorithm based on SA to minimize the makespan of JSSPs. The algorithm builds on a generalization of the well-known iterative improvement approach. It reduces the chance of being stuck in local minima by accepting cost-increasing transitions with non-zero probabilities. He et al. (1996) integrated simulated annealing and an exchange heuristic algorithm to minimize the total job tardiness. Lin et al. (1995) developed a local search technique with an arbitrary objective function and constraints. The authors then compared that local search technique with the approach of SA with threshold acceptance, considering maximum and mean completion time as the objectives. The adaptive
neighborhood search and the adaptive temperature/threshold values were proposed to guide the future search accordingly.
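The temperature-controlled acceptance rule described above can be sketched as a generic SA loop. The parameter names (t0, cooling) and the geometric cooling schedule are assumptions for the illustration; real implementations tune the schedule and stopping criteria carefully.

```python
import math
import random

def simulated_annealing(cost, neighbor, start, t0=10.0, cooling=0.95,
                        iters=500, seed=0):
    """Generic SA: always accept improvements; accept a worse neighbor with
    probability exp(-delta / T); temperature decays geometrically."""
    rng = random.Random(seed)
    cur, cur_cost = start, cost(start)
    best, best_cost = cur, cur_cost
    t = t0
    for _ in range(iters):
        cand = neighbor(cur, rng)
        cand_cost = cost(cand)
        delta = cand_cost - cur_cost
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
        t *= cooling  # high T early: many worse moves accepted; low T late: near-greedy
    return best, best_cost

# Toy usage: minimize (x - 3)^2 over the integers with a +/-1 step neighborhood.
best, bc = simulated_annealing(lambda x: (x - 3) ** 2,
                               lambda x, r: x + r.choice([-1, 1]), 10)
```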
2.3.2.5 Genetic Algorithm
Genetic algorithms are a widely accepted class of meta-heuristic algorithms used for solving optimization problems. The meta-heuristic methods discussed above are based on local re-optimization along with neighborhood search. GAs differ from those in terms of exploration strategy, as GAs are based on evolutionary theory with random exploration of the solution space. The weakness of the other meta-heuristics appears when the solution space is discrete. In those cases, if the initial solutions lie outside the region of the solution space where the optimal solution resides, the optimum is out of reach when using only neighborhood search techniques. GAs generate a population of initial solutions and evolve it by exchanging genetic material between two or more solutions. This helps to explore the solution space much more than the other heuristics. These characteristics have motivated many researchers to solve optimization problems by using GAs. As GAs are considered the prime methods for solving JSSPs in this research, the details of GAs are described in the sections below.
2.3.3 Priority Rules
In the sequencing and scheduling literature, priority rules are also known as dispatching rules, scheduling rules, and heuristic rules. The idea behind these rules is to apply them appropriately in the construction phase of the schedules. The rules are frequently used to select an appropriate operation to be scheduled. These rules can be classified into groups based on factors such as processing time, due date, set-up time, and arrival time. In this research, we focus on processing time, which is quite useful in choosing the appropriate operation to place in the schedule. The rules are described in detail by Panwalkar and Iskander (1977) and Dorndorf and Pesch (1995).
• Shortest Processing Time (SPT): The operation is chosen from the job having the least total processing time.

• Longest Processing Time (LPT): The operation is chosen from the job having the longest total processing time.

• Shortest Remaining Processing Time (SRPT): The operation is chosen from the job which has the minimum cumulative execution time of its unscheduled operations.

• Longest Remaining Processing Time (LRPT): The operation is chosen from the job having the maximum cumulative execution time of its unscheduled operations.

• First Come First Serve (FCFS): The first operation in the queue of schedulable operations is chosen.

• Smallest Number of Remaining Operations (SNRP): The operation is chosen from the job having the least number of unscheduled operations remaining.

• Longest Number of Remaining Operations (LNRP): The operation is chosen from the job having the highest number of unscheduled operations.

• Random: Operations are chosen uniformly at random.

Further information on the different scheduling rules can be found in the survey paper by Panwalkar and Iskander (1977).
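Two of the remaining-work rules above can be sketched as follows. This is an illustrative dispatcher only: the instance format (routes of (machine, duration) pairs) and the `next_op` progress vector are assumptions for the example, and only SRPT/LRPT are implemented.

```python
def select_by_rule(jobs, next_op, rule):
    """Among jobs with unscheduled operations, pick one by a
    processing-time-based priority rule (SRPT or LRPT from the list above)."""
    ready = [j for j in range(len(jobs)) if next_op[j] < len(jobs[j])]
    rem = lambda j: sum(d for _, d in jobs[j][next_op[j]:])  # remaining work of job j
    if rule == "SRPT":
        return min(ready, key=rem)
    if rule == "LRPT":
        return max(ready, key=rem)
    raise ValueError(rule)

# Job 0 has 5 units of work remaining, job 1 has 6, so LRPT prefers job 1.
jobs = [[(0, 3), (1, 2)], [(1, 2), (0, 4)]]
print(select_by_rule(jobs, [0, 0], "LRPT"))  # 1
```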
2.4 Genetic Algorithm
Genetic algorithms (GAs) are a class of evolutionary algorithms (EAs) and are among the widely accepted meta-heuristic methods. The concept of evolutionary algorithms was borrowed from Darwin's natural selection theory (Darwin
1859), which says that the evolution process does not operate directly on an organism, but on its chromosomes. The evolutionary process takes place during reproduction using mutation and recombination strategies (Alba and Cotta 2006). EAs are commonly classified into four groups: genetic algorithms (Holland 1975), evolutionary strategies (Rechenberg 1973), evolutionary programming (Fogel et al. 1966), and genetic programming (Koza 1992). All these techniques have a similar structure, with a population of solutions and the traditional reproduction operators. The details of the components are discussed later in the section. The genetic algorithm was first introduced by Holland (1975). Since then, the algorithm has been used successfully in several areas of optimization (Tang et al. 1996). The conventional optimization techniques are able to solve simple and small-scale problems. As problem size increases, the conventional methodologies move beyond the abilities of the current computational power. In such cases, evolutionary algorithms such as GAs help to solve those problems. Though these algorithms do not guarantee optimality, they help to find a good solution within a reasonable period of time. Traditional GAs have several components, such as the initial population, evaluation, selection, crossover, and mutation. GAs start from a set of initial solutions, termed the population, and reproduce new offspring by means of several kinds of reproduction operators. Each solution is represented by a chromosome. The smallest part of the chromosome is called a gene. The solutions evolve by exchanging similar genes between two chromosomes or by changing the property of a particular gene. The quality of an offspring depends on its genes. In genetic algorithms, the chromosome represents the features of the object. For example, if we consider a product like a box, a chromosome can represent its height, width, length, capacity, or any other feature, according to the needs of the optimization function. Traditional GAs contain different components that help to evolve the solutions.
2.4.1 Fitness Function
In GAs, the fitness function determines the quality of an individual in the population. Every optimization problem has one or more objective measures. For each individual, the fitness measures how close the quality of the individual is to the target objective value. In a sense, it represents the goodness or badness of an individual. Fitness functions can be formulated using mathematical notation. The chromosomes are mapped to the fitness function and the corresponding fitness values are calculated. Mathematically, the fitness function and the objective function are the same. The most commonly used objective/fitness function for JSSPs is the minimization of the makespan. It is considered a more practical measure than the other objectives used in solving JSSPs (Adams et al. 1988; Della-Croce et al. 1995; Yamada and Nakano 1997a,b; Wang and Brunn 2000; Binato et al. 2001). Other than the makespan, a few more objectives, like maximum tardiness, flow time, lateness, etc., have also been used (Fang et al. 1996).
2.4.2 Representation of a Solution
Representation is the way a chromosome is encoded so that it can represent an entire solution. One basic feature of GAs is that they work on a coding space and a solution space (Cheng et al. 1996). The evolutionary process works in the coding space, while the evaluation works in the solution space. The transformation between these two spaces depends on the representation of the solutions. Representations are based on the problem definition, the pattern of the solution space, the reproduction operators, etc. Different kinds of representation have been used over the last few decades to solve JSSPs (Cheng and Gen 2001). This section discusses some of the common representations. Some of these are also discussed in (Ponnambalam et al. 2001).
2.4.2.1 Operation-Based Representation
In the operation-based representation, each chromosome stands for a sequence of operations, where every integer in a gene represents a job ID (Ponnambalam et al. 2001). The first occurrence of a job ID in a chromosome stands for the first operation of that job on its defined machine. Suppose, for a 3×3 JSSP, a chromosome is represented as 211323321. The first number '2' means the first operation of job j2 (O21); similarly, the second number '1' indicates the first operation of job j1 (O11), the third number '1' indicates the second operation (as it is the second appearance of '1') of job j1 (O12), and so on. The operations can then be read from the chromosome as O21 O11 O12 ..., where Onk represents the k-th operation of job n. As the job processing times are known, it is possible to construct the complete schedule from the chromosome information.
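The decoding step can be sketched directly on the example above; the function name and the (job, operation-index) tuple format are choices made for this illustration.

```python
def decode_operation_based(chrom):
    """Map each gene (a job ID) to an operation: the k-th occurrence of
    job j in the chromosome denotes operation O_{j,k} (1-based)."""
    seen = {}
    ops = []
    for j in chrom:
        seen[j] = seen.get(j, 0) + 1
        ops.append((j, seen[j]))   # (job ID, operation index)
    return ops

# The 3x3 example from the text: chromosome 2 1 1 3 2 3 3 2 1
print(decode_operation_based([2, 1, 1, 3, 2, 3, 3, 2, 1]))
# starts with (2, 1), (1, 1), (1, 2), i.e. O21 O11 O12 ...
```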
2.4.2.2 Preference List-Based Representation
In the preference list-based representation, the chromosome represents the preferences of each job. There are M sub-chromosomes in each chromosome, where each sub-chromosome represents the preference of the jobs on one machine. For example, if the chromosome looks like 312123321, it means that the first preferential jobs on machines m1, m2 and m3 are jobs j3, j1 and j2 respectively.
2.4.2.3 Priority Rule-Based Representation
A chromosome can be represented by a combination of a set of priority-dispatching rules. The genetic algorithm then applies the reproduction operators to choose the best set of rules that gives the best throughput. For example, a chromosome like 312123321 represents using the priority dispatching rules in the order of (rule 3) (rule 1) (rule 2) and so on. Dorndorf and Pesch (1995) used twelve different priority rules, such as the SOT-rule (shortest operation time), the LRPT-rule (longest remaining processing time), and others for solving JSSPs while using this representation. This is a specialized representation possible only when the priority
dispatching rules are used.
2.4.2.4 Job-Based Representation
In the job-based representation, both the machine sequences and the job sequences are necessary to represent a solution. Here, the first job in the sequence is scheduled first, and the sequence of machines for each job is determined from the machine index. In this representation, the chromosome is formed by two strings of the same length: one is the job sequence and the other is the machine index. Each number in the job sequence is a job ID, while the number at the corresponding position of the machine index is the ID of the machine on which that occurrence of the job is processed. For example, if the job sequence is 322112313 and the machine index is 213122133, the number 2 (job j2) occurs three times in the job sequence, and the corresponding values in the machine index are 1, 3 and 2. That means that the machine sequence of job j2 is m1 → m3 → m2.
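Extracting a job's machine routing from the two parallel strings can be sketched as follows, using the example from the text; the function name is chosen for this illustration.

```python
def machine_sequence(job_seq, mach_idx, job):
    """Return the machine routing of one job: the machine-index entries at
    the positions where `job` appears in the job sequence."""
    return [m for j, m in zip(job_seq, mach_idx) if j == job]

job_seq  = [3, 2, 2, 1, 1, 2, 3, 1, 3]
mach_idx = [2, 1, 3, 1, 2, 2, 1, 3, 3]
print(machine_sequence(job_seq, mach_idx, 2))  # [1, 3, 2], i.e. m1 -> m3 -> m2
```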
2.4.2.5 Machine-Based Representation
In this representation, chromosomes are composed of a sequence of machines. Schedules are constructed using the shifting bottleneck heuristic based on the chromosomes. While decoding the schedules, the immediate bottleneck machine is chosen from the chromosome. Whenever a machine is chosen as the bottleneck, the previously chosen machines are locally reoptimized (Adams et al. 1988). Suppose, for a three-machine problem, a chromosome may look like 213 . This sequence is used as the machine sequence for the shifting bottleneck heuristic. A machine-based GA using this representation is proposed in (Dorndorf and Pesch 1995).
2.4.2.6 Job Pair Relation-Based Representation
In this representation, a chromosome is symbolized by a binary string, where each bit stands for the order of a job pair (u, v) for a particular machine m. The length of the chromosome is equal to the number of pairs formed by N jobs. For example,
a bit for a chromosome C of the p-th individual, Cp (m, u, v) equals 1 if the job ju precedes the job jv in machine m. More precisely, if the relevant bit for job ju and job jv for machine m is 1, ju must be processed before jv in machine m. The job having the maximum number of 1s is the highest priority job for that machine. This binary string acts as the genotype of the individuals. It is thus possible to construct a phenotype which is the job sequence for each machine.
2.4.2.7 Disjunctive Graph-Based Representation
A disjunctive graph G = (k, α, β) is represented by a combination of vertices and edges (Cheng et al. 1996). The graph is based on the set of tasks k, which are the nodes of the graph; a set of conjunctive edges α between the consecutive task nodes of each job; and a set of disjunctive edges β between task nodes that are to be processed on the same machine. A chromosome is formed by the order of the disjunctive edges. When the sequence for a machine is determined, the corresponding disjunctive edges β are fixed into directed edges, like the conjunctive edges α.
2.4.3 Selection
To generate good offspring, a good parent selection mechanism is necessary. In every generation of a GA, a selection process is used to select the individuals to which the reproduction operators are applied. For example, two individuals are selected for crossover, while a single individual is enough for mutation. Some commonly used selection methods are rank-based selection (Baker 1985), roulette-wheel selection (Holland 1975) and tournament selection (Goldberg 1989). These methods are briefly discussed below.
2.4.3.1 Ranking Selection
To apply this selection, the individuals are sorted according to their fitness values. Every individual is assigned a weight by using a non-increasing assignment function γ(x). The weight function is a factor of the fitness or the specified class of
the fitness. The assignment function should work in such a way that the summation of all values equals 1. In this way, individuals having higher fitness have a better chance of being selected.
2.4.3.2 Roulette-Wheel Selection
In this selection technique, the selection probability of an individual is calculated as the ratio of the fitness of that individual to the cumulative fitness of the entire population. A roulette wheel is then subdivided into k sectors, where k is the number of individuals in the population. An individual having a better fitness value has a wider area on the roulette wheel. A random number is generated between 0 and 1 to select a portion of the wheel, and the associated individual is then identified.
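The spin described above can be sketched as follows. The injectable `rng` parameter is a choice for this illustration; note that for makespan minimization the raw makespan must first be transformed into a maximization-style fitness (e.g., its reciprocal), which is not shown here.

```python
import random

def roulette_select(fitness, rng=random.random):
    """Return an index with probability proportional to fitness (maximization)."""
    total = sum(fitness)
    r = rng() * total            # the spin: a point on the wheel in [0, total)
    acc = 0.0
    for i, f in enumerate(fitness):
        acc += f                 # each individual owns a sector of width f
        if r < acc:
            return i
    return len(fitness) - 1      # guard against floating-point round-off
```

For example, with fitness values [1, 2, 3] the third individual owns half the wheel and is selected whenever the spin lands in the upper half.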
2.4.3.3 Tournament Selection
A number of randomly selected individuals are chosen, and a tournament is played among them based on the selection criteria. The winner of each tournament is selected for the next round, and the final winner(s) of the tournament is selected for reproduction. A tournament can be played between two parents (when s = 2) or more. The higher the value of s, the larger the number of individuals that contribute to the tournament, and hence the higher the chance of selecting a very good individual. As a result, there is a possibility of losing diversity. This key factor influences whether or not a solution gets stuck in a local minimum (the best solution of a subset of solutions in which the global optimum does not reside).
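A minimization-style tournament (e.g., over makespans) can be sketched in a few lines; the single-round formulation and the injectable `rng` are simplifications chosen for this illustration.

```python
import random

def tournament_select(population, fitness, s=2, rng=random):
    """Pick s random contenders and return the index of the one with the
    best (lowest) fitness, e.g. for makespan minimization."""
    contenders = rng.sample(range(len(population)), s)
    return min(contenders, key=lambda i: fitness[i])
```

Raising s sharpens the selection pressure: with s equal to the population size the best individual always wins, at the cost of diversity.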
2.4.4 Crossover
The most common form of reproduction operator is crossover. The operator is used to exchange genes between two individuals to generate two new offspring. For crossover, it is important to choose a crossover point, which determines the genes to be exchanged. For a single crossover point, the genes of chromosome c1 to the right of the crossover point are exchanged with the genes of chromosome c2 to the right of the crossover point, while each chromosome keeps its own genes to the left of the point. For example, if c1 = 10|011101 and c2 = 00|011011, where the crossover point is after the second bit, the two offspring will be c1 = 10|011011 and c2 = 00|011101. Similarly, for two crossover points, the genes between the crossover points are exchanged. Crossover can be applied to representations other than binary chromosomes as well. For combinatorial optimization problems, exchange crossovers are convenient and useful for binary representations. Uniform crossover is also used, where the process is independent of crossover points. There are many different kinds of crossover operators reported in the literature. Cheng et al. (1999) reported some useful crossovers for solving JSSPs in their survey on hybrid genetic strategies. A comparison among these crossover operators is available in (Starkweather et al. 1991). Some of these crossover methods are briefly discussed below.
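One-point crossover reduces to swapping the chromosome tails, reproducing the binary example from the text:

```python
def one_point_crossover(c1, c2, point):
    """Swap the tails of two chromosomes after the crossover point."""
    return c1[:point] + c2[point:], c2[:point] + c1[point:]

# The example from the text, crossover after the second bit:
o1, o2 = one_point_crossover("10011101", "00011011", 2)
print(o1, o2)  # 10011011 00011101
```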
2.4.4.1 Ordered Crossover (OX)
This crossover operator was introduced by Davis (1985). The goal of this crossover is to maintain the relative order of the genes. It is a two-point crossover where the offspring inherits the genes between the crossover points from the first parent, in the same order and positions in which they appear in that parent. The remaining genes are inherited from the second parent in the order in which they appear there, but starting from the first position following the second crossover point, skipping all elements already present in the offspring. OX may not be useful when it is necessary to generate a different order in the offspring. In the case of JSSPs, it may need a stronger mutation operator to generate different combinations and to maintain the diversity of the solutions.
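The OX procedure described above can be sketched as follows, assuming permutation chromosomes with unique genes (as in traveling-salesman-style encodings); operation-based JSSP chromosomes with repeated job IDs would need a modified variant.

```python
def ordered_crossover(p1, p2, a, b):
    """Davis-style OX: the child keeps p1[a:b] in place; the remaining genes
    come from p2 in their p2-order, starting just after the second cut point."""
    n = len(p1)
    child = [None] * n
    child[a:b] = p1[a:b]
    kept = set(p1[a:b])
    # genes of p2, read circularly from the position after the second cut,
    # skipping anything already inherited from p1
    donors = [p2[(b + i) % n] for i in range(n) if p2[(b + i) % n] not in kept]
    # empty child positions, also filled circularly from after the second cut
    holes = [(b + i) % n for i in range(n) if child[(b + i) % n] is None]
    for pos, g in zip(holes, donors):
        child[pos] = g
    return child

p1 = [1, 2, 3, 4, 5, 6, 7, 8]
p2 = [2, 4, 6, 8, 7, 5, 3, 1]
print(ordered_crossover(p1, p2, 2, 5))  # [8, 7, 3, 4, 5, 1, 2, 6]
```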
2.4.4.2 Partially-Mapped Crossover (PMX)
PMX was proposed by Goldberg and Lingle (1985) and is a similar kind of operator
as OX, in that it preserves the order of the genes. Two crossover points are selected randomly. The genes between the points in offspring o1 are inherited from the alternate parent p2, while the remaining genes are inherited from the first parent p1 without modification. If any gene is thereby repeated in the offspring, it is replaced by the corresponding missing gene, following the mapping between the exchanged segments, so that order, adjacency and position are maintained as far as possible.
2.4.4.3 Cycle Crossover (CX)
CX was originally developed by Oliver et al. (1987). It preserves the absolute position of elements in the parent sequence. Initially, a cycle starting point is chosen at random. The gene g0 at the starting point is inherited from parent p1 into the offspring. The position h0 at which g0 appears in p2 is then identified, and the gene at position h0 of p1 is inherited into position h0 of the offspring. This process continues until the cycle returns to the starting point; the positions that are not part of the cycle are filled from p2.
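The cycle-tracing step can be sketched as follows, again assuming permutation chromosomes with unique genes; the fixed default starting point is a simplification for the illustration.

```python
def cycle_crossover(p1, p2, start=0):
    """CX: trace the cycle beginning at `start`; the child takes the cycle
    positions from p1 and every remaining position from p2."""
    in_cycle = set()
    i = start
    while i not in in_cycle:
        in_cycle.add(i)
        i = p1.index(p2[i])   # follow p2's value at i back to its position in p1
    return [p1[i] if i in in_cycle else p2[i] for i in range(len(p1))]

p1 = [1, 2, 3, 4, 5, 6, 7, 8]
p2 = [2, 4, 6, 8, 7, 5, 3, 1]
print(cycle_crossover(p1, p2))  # [1, 2, 6, 4, 7, 5, 3, 8]
```

Every gene in the child occupies a position it held in one of the two parents, which is exactly the absolute-position property described above.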
2.4.4.4 Position-Based Crossover (PBX)
Syswerda (1991) introduced PBX, where the intention is to preserve the positions of the genes in the offspring. Several positions in the parents are selected randomly and the genes at those positions are inherited into the same positions of the offspring. The remaining genes are inherited in the same order, but from the alternate parent. The same process is applied for the alternate offspring. Thus OX, PMX and PBX all try to maintain the same order as the parents. In the case of JSSPs, when the ordered operations are crossed over without prior knowledge, there is a big chance of rolling the solution back to its original order after the necessary repair. However, it does help in reducing the iterations required in the repairing phase.
2.4.5 Mutation
Mutation is independent of the genes of other individuals, as it is applied to a single individual. It is a process of changing the genetic properties of some selected individuals by changing the properties of individual genes. The operator may not improve the solution, but it generates a new solution and increases diversity. Some of the mutation approaches are briefly discussed below.
2.4.5.1 Bit-flip Mutation
This mutation is mainly used in binary representations. Some genes are selected randomly from a selected individual and their values are flipped to generate a new chromosome.
2.4.5.2 Reciprocal Exchange Mutation
This mutation is also useful for chromosomes represented using integer or real values. A pair of genes is selected and exchanged. The number of pairs to be selected depends on a selection factor.
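The exchange can be sketched in a few lines; the `pairs` parameter stands in for the selection factor mentioned above and is an assumption for this illustration.

```python
import random

def reciprocal_exchange(chrom, pairs=1, rng=random):
    """Swap `pairs` randomly chosen pairs of genes in a copy of the chromosome."""
    child = list(chrom)
    for _ in range(pairs):
        i, k = rng.sample(range(len(child)), 2)  # two distinct positions
        child[i], child[k] = child[k], child[i]
    return child
```

Because only positions change, the multiset of genes is preserved, so a feasible operation-based JSSP sequence stays feasible.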
2.4.5.3 Insertion/Deletion Mutation
Some genes are inserted or deleted from the chromosomes. These kinds of mutation are possible only if a variable length encoding is used. The number of inserted or deleted genes is restricted by a limit.
2.4.5.4 Shift Mutation
Selected genes are shifted either left or right from their current positions. In the case of combinatorial optimization, solutions have to be verified for feasibility before applying this mutation.
2.5 GAs for Solving JSSPs
The first attempt to solve JSSPs by GAs was made by Davis (1985). A simple GA with the basic one-point crossover was applied to a small 3 × 3 test problem. Though it was a simple GA, it proved the applicability of GAs to combinatorial optimization problems like JSSPs. Falkenauer and Bouffouix (1991) implemented a GA where they modified the ordered crossover proposed by Goldberg (1989) and introduced the linear operational crossover (LOX), which is similar to OX but without the cycle. The concept behind this is that OX concentrates on the relative positions of the genes rather than the absolute positions, and it is thus effective when the chromosome is circular, e.g. when representing traveling-salesman problems. But the representations of JSSPs are linear and the absolute positions of genes are also important; LOX covers both of these requirements. Based on their encoding and crossover scheme, Della-Croce et al. (1995) implemented a traditional GA with a proposed look-ahead simulation method for the encoding. They observed that for problems where the optimal schedule is far away from the non-delay schedules, it is difficult to achieve the optimal solution. The look-ahead simulation assigns a priority to every operation. While considering an operation to be scheduled, the priority of the corresponding operation is checked. When multiple schedulable operations exist, they are ordered according to priority. They claimed that the technique widens the search space, but decision making in choosing the right priority rules is itself difficult, as found by Dorndorf and Pesch (1995), who claim that random selection of the rules may not guide the selections toward the global optimum. In the algorithm implemented by Gen et al. (1994), the job-based representation is used. They reported that special crossovers like PMX, OX, etc. are not suitable for this type of representation.
They used the simple two-point exchange crossover, although extensive repair is necessary. They also applied elitism which ensures the presence of the best solution in every generation. Recently, Watanabe et al. (2005) implemented a genetic search approach (GSA) as proposed by Someya and Yamamura (1999) and applied a modified crossover operator. To maintain
diversity and to escape local minima, they proposed allowing a child worse than the worst parent to survive. This was done during their crossover search phase, where three parents were selected and three different pairs were formed. A finite number of children were generated by applying the simple two-point crossover, and both the best and the worst of the children were selected. Selecting the worst offspring increases the availability of poor-quality solutions, but may prevent convergence to local optima. Nakano and Yamada have been working on GAs for solving JSSPs over the last two decades. Their proposed job pair-relationship based representation (Nakano and Yamada 1991) is widely used to represent solutions, even in combinatorial optimization problems. They proposed mapping the binary chromosome (genotype) into a sequence of operations (phenotype). In a sense, the sequence of jobs itself can be treated as a job-based representation. A strength of this representation is that every bit reflects the relationship between two jobs in a particular machine, and two consecutive bits do not represent a relationship in the same machine. When a simple one-point or two-point crossover is applied, genes related to different machines and different job pairs are inherited from the parents. Though it requires repairing to make the solutions feasible, it helps to explore more of the solution space. Many works (Goncalves et al. 2005; Ombuki and Ventresca 2004; Shi and Pan 2005) have integrated local search procedures with GAs. Goncalves et al. (2005) proposed three different approaches. The first approach deals with non-delay schedules, while the second generates active schedules. The second approach faces difficulty when the optimal schedule is far away from the best active schedule; in such a case, it needs to introduce a delay which may affect some particular machines but may reduce the makespan.
In their final approach, they introduced a parameterized active scheduling technique where the maximum allowable delay is controlled. Rather than implementing a simple GA, they applied the two-exchange local search with the neighborhood structure defined by Nowicki and Smutnicki (1996). Ombuki and Ventresca (2004) also proposed a similar kind of local search technique, but using only a single exchange. These approaches are complicated
because it is difficult to identify the combined effect of multiple swaps. In some critical problems, single swapping may initially reduce the fitness of a solution, but may improve the fitness of that solution in the long run by allowing further swapping in a different pair of operations. Ombuki and Ventresca (2004) showed a look-ahead simulation technique that helps to analyze a few further steps. A local re-optimization technique can be applied, where the swapping happens only inside a partial schedule. Moreover, approaches like shifting-bottleneck, which operates on the bottleneck machines, may increase the scope for further improvements.
Chapter 3
Solving Job-Shop Scheduling by Genetic Algorithms

In this chapter, we define the traditional job-shop scheduling problems considered in this thesis. A genetic algorithm based solution methodology for solving job-shop scheduling problems and its components are described. The algorithm is implemented, and its performance is judged by analyzing the results of 40 benchmark problems.
3.1
Introduction
In this thesis, we intend to develop solution approaches for different scenarios of the job-shop scheduling problem (JSSP). In this chapter, we implement a Genetic Algorithm (GA) for solving JSSPs and analyze its performance. For convenience of discussion and analysis, we define our JSSPs in this section. A classical job-shop scheduling problem contains N jobs and M machines, where each job has to be processed through M operations. Every operation is assigned to a particular machine, and each machine is capable of processing only one kind of operation. In addition, several other basic conditions have to be considered for the classical JSSPs, as stated below.
• The number of operations for each job is finite.
• The processing time for each operation in a particular machine is defined.
• There is a pre-defined sequence of operations that has to be maintained to complete each job.
• Delivery times of the products are undefined.
• No setup cost and tardiness cost.
• A machine can process only one job at a time.
• Each job visits each machine only once.
• No machine can deal with more than one type of task.
• The system cannot be interrupted until each operation of each job is finished.
• No machine can halt a job and start another job before finishing the previous one.
• Each and every machine has full efficiency.

In this thesis, the objective considered for solving JSSPs is to minimize the makespan. Further discussion on JSSPs and their mathematical models can be found in Chapter 2. The rest of the chapter is organized as follows. In Section 3.2, we describe the genetic algorithm with its chromosome representation, solution mapping and reproduction process. In Section 3.3, we present experimental results and analysis of the algorithm’s performance. Finally, we draw our conclusions in Section 3.4.
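The conditions above fix the structure of a classical JSSP instance. As an illustrative sketch (not from the thesis), such an instance can be encoded by giving, for each job, its fixed sequence of (machine, processing time) pairs; the processing times below are invented for a 3-job, 3-machine example:

```python
# A minimal encoding of a 3 x 3 JSSP instance; variable names and the
# processing times are illustrative assumptions, not taken from the thesis.
# Each job is its fixed sequence of (machine, processing_time) pairs, so the
# operation order of a job is simply the order of the pairs.
jobs = [
    [(0, 3), (2, 2), (1, 2)],   # j1: m1 -> m3 -> m2
    [(1, 2), (0, 4), (2, 1)],   # j2: m2 -> m1 -> m3
    [(0, 2), (2, 3), (1, 3)],   # j3: m1 -> m3 -> m2
]

N = len(jobs)        # number of jobs
M = len(jobs[0])     # number of machines (each job visits each machine once)
```

Since each job visits each machine exactly once, every job has exactly M operations, matching the conditions listed above.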
3.2
Genetic Algorithm
In this chapter, we have implemented a traditional GA, for solving JSSPs, with the job pair-relation based representation, tournament selection, two-point crossover,
and bit-flip mutation. In this section, the algorithm (see Algorithm 3.1) and its components are discussed.
3.2.1
Chromosome Representation
In solving JSSPs using GAs, the chromosome of each individual usually comprises the schedule. Chromosomes can be represented by binary, integer or real numbers. Some popular representations for solving JSSPs are: operation based, job based, preference-list based, priority-rule based, and job pair-relationship based representations (Ponnambalam et al. 2001). A list of representations commonly used for JSSPs can also be found in the survey of Cheng et al. (1996); they are briefly discussed in Section 2.4.2. The general structure of a chromosome includes the information needed to represent the solution of an optimization problem. This may be the sequence of variables representing the outcome of the problem, together with the relationships among those variables. Solutions of JSSPs depend on the sequence of the operations, which is generally represented either in binary or decimal form. For the classical JSSPs, every machine is capable of performing only one kind of operation. In that case, a chromosome represented as a sequence of operations contains the same number of copies of each particular gene (one per operation of the corresponding job). Organizing the genes in a different order generates a different solution. There is a close relationship between the representation and the reproduction operators, such as crossover and mutation. The representation may prohibit some particular kinds of genetic operators. For example, crossover operators like PMX or OX, discussed in Section 2.4.4, can only be applied if the chromosome is represented as a sequence of operations. On the other hand, simple one- or two-point crossover can easily be applied to most of the representations. The ease of using simple reproduction operators increases the acceptability of binary representations for solving combinatorial optimization problems. For this reason, we selected the job pair-relationship based representation for the genotype, as in (Nakano and Yamada 1991; Paredis 1992; Yamada 2003; Yamada and Nakano 1997a).
Algorithm 3.1 Traditional Genetic Algorithm (TGA)
Let Rx and Ry be the selection probabilities for two-point crossover and bit-flip mutation respectively. P(t) is the set of current individuals at generation t and P′(t) is the evaluated set of individuals repaired using local and global harmonization at generation t. p is the index of the current individual.
1. Initialize P(0) with a set of random individuals having chromosome length L.
2. Repeat until the stopping criteria are met
   (a) Set t := t + 1
   (b) Evaluate P′(t) from P(t − 1) by the following steps:
       i. Map p by applying the local and global harmonization if necessary.
       ii. Apply the standard evaluation technique to evaluate p.
       iii. Calculate the objective function |p|.
       iv. Go to step 2(b)i until every individual of P′(t) is evaluated.
       v. Rank the individuals based on their fitness.
       vi. Apply elitism to preserve the best solution.
   (c) Go to step 3 if the stopping criteria are met.
   (d) Modify P′(t) using the following steps:
       i. Select the current individual p from P′(t) and select a random number R between 0 and 1.
       ii. If R ≤ Rx then
           A. Select two individuals p1 and p2 using tournament selection.
           B. Apply two-point crossover between p1 and p2. Generate two offspring p′1 and p′2.
       iii. Else if R > Rx and R ≤ (Rx + Ry) then randomly select one individual p1 from P′(t) and apply bit-flip mutation. Replace p with p1.
       iv. Else continue. [End of step 2(d)ii If]
   (e) Assign P(t) := P′(t).
   [End of step 2 Loop]
3. Store the best solution.
[End of Algorithm]
In this representation, a chromosome is symbolized by a binary string, where each bit stands for the order of a job pair (u, v) for a particular machine m. For a chromosome Cp
Cp(m, u, v) = 1, if job ju precedes job jv in machine m;
              0, otherwise.                                  (3.1)
This means that in the chromosome representing the individual p, if the relevant bit for job ju and job jv for machine m is 1, ju must be processed before jv in machine m. The job having the maximum number of 1s is the highest-priority job for that machine. In this representation, the length of each chromosome is:

L = M × (N − 1) × N/2                                        (3.2)
where N stands for the number of jobs and M for the number of machines. Thus, L is the total number of job pairs over all machines. To explain the job pair-relation based representation, we provide a simple example using Table 3.1. The subtables, Table 3.1(a) and Table 3.1(b), represent a sample 3 × 3 (3 jobs and 3 machines) job-shop scheduling problem (the order of operations, or machines used, for each job) and one possible solution, respectively. The next subtable, Table 3.1(c), shows the binary chromosome which represents the solution shown in Table 3.1(b). In this representation, each bit represents the preference of one job with respect to another job in the corresponding machine. In the chromosome, the first three bits represent the relationship between the jobs j1 and j2 in the machines m1, m3 and m2. The order of these machines is not m1 → m2 → m3, because it follows the given precedence of the first job j1 in the job pair, which can be found from Table 3.1. For better understanding, consider the machine m2 in the solution. In m2, j1 appears before any other job. This enforces the 3rd and 6th bits of the chromosome to be 1, as these bits represent the relationships j1 − j2 and j1 − j3 respectively
Table 3.1: Representation of a binary chromosome for a sample 3 × 3 JSSP

(a) Given order of operations        (b) A possible solution
j1: m1 → m3 → m2                     m1: j3 → j1 → j2
j2: m2 → m1 → m3                     m2: j1 → j2 → j3
j3: m1 → m3 → m2                     m3: j3 → j2 → j1

(c) A chromosome
bit:       1   0   1  |  0   0   1  |  1   0   0
machine:   m1  m3  m2 |  m1  m3  m2 |  m2  m1  m3
job pair:    j1 − j2  |    j1 − j3  |    j2 − j3
in machine m2. On the other hand, j2 comes in between j3 and j1 in the machine m3. As the 9th bit and the 2nd bit represent the relationships j2 − j3 and j1 − j2 in m3 respectively, both bits are 0. This binary string acts as the genotype of the individual solutions. It is possible to construct a phenotype, which is the job sequence for each machine. This construction is described in Table 3.2. This binary representation is helpful if conventional crossover and mutation techniques are used. We use this representation for the flexibility of applying simple reproduction operators.
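The worked example above can be reproduced in code. The following sketch (with illustrative names) decodes a job pair-relationship chromosome into per-machine priority scores; as in the example, it assumes the M bits of each job pair follow the first job's predefined machine sequence:

```python
from itertools import combinations

# Hedged sketch of decoding the job pair-relationship representation.
# chrom: flat list of L = M*N*(N-1)/2 bits; machine_seq[j]: the predefined
# machine order of job j (0-indexed machines).
def priority_scores(chrom, machine_seq):
    N, M = len(machine_seq), len(machine_seq[0])
    score = [[0] * N for _ in range(M)]   # score[m][j]: number of 1s of job j on machine m
    bit = 0
    for u, v in combinations(range(N), 2):
        for m in machine_seq[u]:          # bit order follows u's machine sequence
            if chrom[bit]:
                score[m][u] += 1          # u precedes v on machine m
            else:
                score[m][v] += 1          # v precedes u on machine m
            bit += 1
    return score

# Chromosome of Table 3.1(c); m1 = 0, m2 = 1, m3 = 2.
seq = [[0, 2, 1], [1, 0, 2], [0, 2, 1]]   # machine orders of j1, j2, j3
chrom = [1, 0, 1, 0, 0, 1, 1, 0, 0]
S = priority_scores(chrom, seq)
# On m1 the scores are [1, 0, 2]: j3 first, then j1, then j2,
# matching the solution in Table 3.1(b).
```

Sorting the jobs of each machine by descending score reproduces the phenotype of Table 3.1(b).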
3.2.2
Mapping a Solution
In the case of JSSPs, it is impossible to calculate the makespan (i.e. the fitness) of an infeasible solution. Because JSSPs are highly constrained, it is very common that some individuals in the initial and subsequent populations will be either infeasible or unsuitable for generating useful solutions. These individuals require repairing when mapping from genotype to phenotype solutions. In such cases, we use the repairing methods known as local and global harmonization (Nakano and Yamada 1991; Yamada 2003; Yamada and Nakano 1997a). These methods are
briefly discussed in the following section.
3.2.2.1
Local Harmonization
The repairing process starts with mapping the binary chromosome (genotype) to a sequence of operations (phenotype). The first step of the repairing technique is to apply local harmonization. In local harmonization, any tie in the priority index is broken by reassigning the scores; if two jobs still have the same score, one of them is chosen randomly. M tables are formed from a chromosome of length L as described in Equation (3.2). Each table is of size N × N, contains only binary values, and reflects the relationship between the jobs of every job pair. The job having the maximum number of 1s is the most preferred job, having the highest priority score. The jobs are then rearranged according to their priorities. We explain the local harmonization process with an example. Table 3.2(a) represents the binary chromosome (for a 3-job, 3-machine problem) where each bit represents the preference of one job with respect to another job in the corresponding machine. The third row shows the job pairs in a given order. The second row indicates the order of the machines for the first job of the pair shown in the third row. The first bit is 1, which means that job j1 will appear before job j2 in machine m1. Tables 3.2(b).1, 3.2(b).2 and 3.2(b).3 represent the job pair based relationships in machines m1, m2 and m3 respectively, as defined in Equation (3.1). In Table 3.2(b).1, the ‘1’ in cell j1 − j2 indicates that job j1 will appear before job j2 in machine m1. Similarly, the ‘0’ in cell j1 − j3 indicates that job j1 will not appear before job j3 in machine m1. In the same tables, column S represents the priority of each job, which is the row sum of all the 1s for the job presented in that row. A higher number represents a higher priority, because that job precedes more of the other jobs. So for machine m1, job j3 has the highest priority.
If more than one job has equal priority in a given machine, the local harmonization technique modifies the order of these jobs to introduce different priorities. Consider a situation where the
Algorithm 3.2 Local Harmonization
Assume C(m, n, x) is the bit in the chromosome which represents the relationship between jobs n and x in machine m. The same information is represented by the bit pmatm(n, x) in the priority matrix. max is a function which identifies the maximum value from a set of values. n′ is the job having the highest priority score.
1. Set m := 1
2. Repeat until m > M
   (a) Set n := 1
   (b) Repeat until n > N
       i. Set pmatm(n, n) := ∗ and x := n + 1
       ii. Repeat until x > N
           • Set pmatm(n, x) := C(m, n, x)
           • Set pmatm(x, n) := |C(m, n, x) − 1|
           • Set x := x + 1
           [End of step 2(b)ii Loop]
       iii. Set pmatm(n, N + 1) := Σ pmatm(n, i) for i = 1 to N, i ≠ n
       iv. Set n := n + 1
       [End of step 2b Loop]
   (c) Set n := 1
   (d) Repeat until n > N
       i. Determine n′ where n′ ∈ [1, 2, ..., N]
       ii. If pmatm(n′, x) = 0 and pmatm(x, N + 1) ≥ 0 for all x
           • Set pmatm(n′, x) := 1 and pmatm(x, n′) := 0
           • Set C(m, n′, x) := |C(m, n′, x) − 1|
           [End of step 2(d)ii If]
       iii. Set pmatm(n′, N + 1) := −1
       iv. Set n := n + 1
       [End of step 2d Loop]
   (e) Set m := m + 1
   [End of step 2 Loop]
[End of Algorithm]
Table 3.2: Construction of the phenotype from the binary genotype and predefined sequences

(a) A chromosome
bit:       1   0   1  |  0   0   1  |  1   0   0
machine:   m1  m3  m2 |  m1  m3  m2 |  m2  m1  m3
job pair:    j1 − j2  |    j1 − j3  |    j2 − j3

(b) Priority matrices (S = row sum of 1s)

(b).1 – m1          (b).2 – m2          (b).3 – m3
    j1 j2 j3  S         j1 j2 j3  S         j1 j2 j3  S
j1   *  1  0  1     j1   *  1  1  2     j1   *  0  0  0
j2   0  *  0  0     j2   0  *  1  1     j2   1  *  0  1
j3   1  1  *  2     j3   0  0  *  0     j3   1  1  *  2

(c) Given order of operations      (d) A possible solution
j1: m1 → m3 → m2                   m1: j3 → j1 → j2
j2: m2 → m1 → m3                   m2: j1 → j2 → j3
j3: m1 → m3 → m2                   m3: j3 → j2 → j1
order of jobs for a given machine is j1 − j2, j2 − j3, and j3 − j1. This gives S = 1 for all jobs in that machine. By swapping the contents of cells j1 − j3 and j3 − j1, we obtain S = 2, 1 and 0 for jobs j1, j2 and j3 respectively. Table 3.2(c) shows the predefined operational sequence of each job. According to the priorities found from Table 3.2(b), Table 3.2(d) is generated, which is the phenotype or schedule. For example, the sequence of m1 is j3 → j1 → j2, because in Table 3.2(b).1, j3 is the highest-priority and j2 the lowest-priority job. Table 3.2(d) represents a solution of this problem in the form of a sequence of operations. For example, considering Table 3.2(d), the first operation in machine m1 is to process the first operation of job j3.
The local harmonization process is also described in Algorithm 3.2.
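The tie-breaking step can be sketched as follows; the function name and the random ordering among tied jobs are illustrative assumptions, not the thesis's exact procedure:

```python
import random

# Hedged sketch of the tie-breaking step of local harmonization: when
# several jobs share the same priority score on a machine, fix their
# relative order (randomly among the tied jobs) so that every job ends up
# with a distinct priority, as in the cyclic j1-j2-j3 example above.
def break_ties(score, rng=random):
    jobs = list(range(len(score)))
    rng.shuffle(jobs)                          # random order among tied jobs
    jobs.sort(key=lambda j: -score[j])         # stable sort: ties keep shuffled order
    # assign distinct priorities N-1 .. 0, highest score first
    return {j: len(jobs) - 1 - rank for rank, j in enumerate(jobs)}

# Cyclic case from the text: all three jobs have S = 1.
order = break_ties([1, 1, 1])
assert sorted(order.values()) == [0, 1, 2]     # now strictly ordered
```

The stable sort guarantees that jobs with strictly higher scores always keep a higher priority; only tied jobs are ordered randomly.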
3.2.2.2
Global Harmonization
For an N × M job-shop scheduling problem, there are (N!)^M possible solutions, of which only a small percentage are feasible. The solutions mapped from the chromosome are not guaranteed to be feasible. Global harmonization is a repairing technique for changing infeasible solutions into feasible ones. Suppose job j3 specifies its first, second and third operations to be processed on machines m3, m2 and m1 respectively, and job j1 specifies its first, second and third operations on machines m1, m3 and m2 respectively. Further assume that an individual solution (or chromosome) indicates that j3 is scheduled on machine m1 first, as its first operation, followed by job j1. Such a schedule is infeasible, as it violates the defined sequence of operations for job j3. In this case, swapping job j1 with job j3 on machine m1 would allow job j1 to have its first operation on m1 as required, and it may provide an opportunity for job j3 to visit m3 and m2 before visiting m1, as per its order. Usually, the process identifies the violations sequentially and performs the swaps one by one until the entire schedule is feasible. There is thus a possibility that some swaps performed earlier in the process have to be swapped back to their original positions to make the entire schedule feasible. This technique is useful not only for binary representations, but also for job-based or operation-based representations. Further details on the use of global harmonization with GAs (or MAs) for solving JSSPs can be found in (Nakano and Yamada 1991; Yamada and Nakano 1997a; Yamada 2003). In our proposed algorithm, we consider multiple repairs to reduce the deadlock frequency. As soon as a deadlock occurs, the algorithm identifies at most one operation from each job that can be scheduled immediately.
Starting from the first operation, the algorithm identifies the corresponding machine of the operation and swaps the tasks in that machine so that at least the selected task no longer causes a deadlock the next time. For N jobs, the risk of getting into deadlock is removed for at least N operations, and the complexity is O(MN^2).
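A minimal sketch of this repair idea follows. It is not the thesis's exact algorithm: it simulates schedule construction and, whenever no operation can be scheduled (a deadlock), moves one schedulable job forward in the queue of the machine it needs next:

```python
# Hedged sketch of global harmonization; all names are illustrative.
def repair(queues, seq):
    """queues[m]: job order on machine m (the phenotype);
    seq[j]: predefined machine order of job j.
    Returns a feasible (machine, job) operation sequence."""
    M, N = len(queues), len(seq)
    queues = [list(q) for q in queues]
    nxt = [0] * N                    # next operation index of each job
    front = [0] * M                  # next unscheduled position on each machine
    out = []
    while len(out) < N * M:
        progressed = False
        for m in range(M):
            if front[m] < N:
                j = queues[m][front[m]]
                if seq[j][nxt[j]] == m:       # j's next operation is on m
                    out.append((m, j))
                    front[m] += 1
                    nxt[j] += 1
                    progressed = True
        if not progressed:                    # deadlock: repair by one swap
            for j in range(N):
                if nxt[j] < M:
                    m = seq[j][nxt[j]]        # machine j needs next
                    k = queues[m].index(j, front[m])
                    queues[m][front[m]], queues[m][k] = queues[m][k], queues[m][front[m]]
                    break
    return out

# Infeasible phenotype in the spirit of the example above: job 0 needs m1
# first, but m1's queue puts job 1 ahead of it while job 1 needs m3 first.
ops = repair([[1, 0], [1, 0], [0, 1]], [[0, 2, 1], [2, 1, 0]])
```

After each repair swap, the moved job is schedulable at the front of its required machine, so the simulation always makes progress and terminates.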
After performing global harmonization, we obtain a population of feasible solutions. We then calculate the makespan of all the feasible individuals and rank them based on their fitness values. We then apply genetic operators to generate the next population. We continue this process until it satisfies the stopping criteria. The process is described in Algorithm 3.3.
3.2.3
Fitness Calculation
In our JSSPs, the fitness of an individual is its makespan. In the fitness calculation, we check the overlapping of tasks and constraint satisfaction, and record the completion time of each job. It is worth noting that there is no fitness value for infeasible solutions in JSSPs, unlike in general optimization problems. The completion time of the job that finishes last is reported as the makespan of the schedule. When applying the traditional GA to JSSPs, the fitness evaluation is performed along with the global harmonization.
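The makespan evaluation of a feasible operation sequence can be sketched as below (illustrative names; in the thesis this is performed together with global harmonization):

```python
# Hedged sketch of the fitness (makespan) calculation: walk through a
# feasible operation sequence, respecting both machine availability and
# job precedence, and return the largest completion time.
def makespan(op_seq, ptime):
    """op_seq: feasible list of (machine, job) in scheduling order;
    ptime[(machine, job)]: processing time of that operation."""
    job_ready, mach_ready = {}, {}
    for m, j in op_seq:
        # an operation starts once both its job and its machine are free
        start = max(job_ready.get(j, 0), mach_ready.get(m, 0))
        finish = start + ptime[(m, j)]
        job_ready[j] = mach_ready[m] = finish
    return max(mach_ready.values())
```

Because the sequence is feasible, a single left-to-right pass suffices: every operation's predecessors (on its job and on its machine) have already been placed when it is reached.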
3.2.4
Reproduction Phases
One of the vital parts of a genetic algorithm is reproducing solutions by changing their genetic properties. In the reproduction process, the idea is to produce new individuals and to exchange productive genetic properties among the individuals. Here we discuss the two-point exchange crossover and bit-flip mutation, which we use as parts of the reproduction phase.
3.2.4.1
Selection
We apply tournament selection to select the parents for crossover. In our method, we classify the population into two classes based on the fitness of the individuals. We set a tournament selection boundary, where any individual within the boundary is treated as a member of the elite class. We select one individual (parent) from the elite class and two from the rest, play a tournament between these two, and select the winner. We then apply the crossover between the elite individual and the winner.
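The selection scheme just described can be sketched as follows; the names are illustrative, and the 15% elite boundary is the figure quoted later in Chapter 4:

```python
import random

# Hedged sketch of the selection scheme: one parent is drawn from the elite
# class (top fraction of the ranked population), two are drawn from the
# rest, and the better of those two is crossed with the elite parent.
def select_parents(pop, fitness, elite_frac=0.15, rng=random):
    ranked = sorted(pop, key=fitness)              # smaller makespan is better
    cut = max(1, int(len(ranked) * elite_frac))
    elite = rng.choice(ranked[:cut])               # parent 1: from the elite class
    a, b = rng.sample(ranked[cut:], 2)             # two candidates from the rest
    winner = a if fitness(a) < fitness(b) else b   # tournament of size two
    return elite, winner
```

Crossover would then be applied between `elite` and `winner`, as the text describes.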
Algorithm 3.3 Global Harmonization
Assume O(n, k) is the k-th machine required by job n. Qp(m, n) is the index of job n in machine m of the individual p. JFn is the next machine required by job n and MFm is the immediate job that has to be performed by machine m. Um is the number of unscheduled operations in machine m. hP and lP are two temporary sets containing the scheduled operations and their corresponding indices in the schedule. The sets are initialized with 0 values.
1. Set U(1 : M) := 0 and lP(1 : M, 1 : N) := 1
2. Repeat for n = 1 to N
   (a) Set m := Op(n, JFn) and pos := Qp(m, n)
   (b) Set x := 0
   (c) Repeat for x …
3. Repeat while k > …
   (a) Swap between Qp(m, k) and Qp(m, k − 1)
   (b) Set k := k − 1
   [End of step 3 Loop]
[End of Algorithm]
4.3
Gap Reduction (GR)
After each generation, the generated phenotype usually leaves some gaps between the jobs. Sometimes, these gaps are necessary to satisfy the precedence constraints. However, in some cases, a gap can be removed or reduced by placing in it a job from the right side of the gap. For a given machine, this is like swapping between a gap on the left and a job on the right of a schedule. In addition, a gap may be removed or reduced by simply moving a job into its adjacent gap on the left. This process helps to develop a compact schedule from the left, continuing up to the last job on each machine. Of course, it must be ensured that there is no conflict or infeasibility before accepting the move. The rule is to identify the gaps in each machine, and the candidate jobs which can be placed in those gaps without violating the constraints or increasing the makespan. The same process is carried out for any possible shift of jobs to the left of the schedule. The gap reduction rule, with swapping between a gap and a job, is explained using a simple example. A simple instance of a schedule is shown in Figure 4.2(a). In the phenotype p, j1 follows j2 in machine m2; however, job j1 can be placed before j2,
Chapter 4. Priority Rules for Solving JSSPs
Algorithm 4.3 Algorithm for the gap-reduction technique (GR)
function gapred(p, m, n)
Let M and N be the total number of machines and jobs respectively. S and F are the sets of starting and finishing times, respectively, of the operations that have already been scheduled. T(n, m) is the execution time of the current operation of job n in machine m. Qp(m, k) is the k-th operation in machine m of phenotype p. MFm represents the front operation of machine m, and JFn is the machine where the schedulable operation of job n will be processed. JBn and MBm are the busy times of job n and machine m respectively. max(u, v) returns whichever of u and v is the maximum.
1. Set m := 1 and mfront(1 : M) := 0
2. Repeat until all operations are scheduled
   (a) Set Loc := MFm and jID := Qp(m, Loc)
   (b) If JFjID = m then
       i. Set flag := 1 and k := 1
       ii. Repeat while k ≤ Loc
           A. Set X := max(F(m, k − 1), JBjID)
           B. Set G := S(m, k) − X
           C. If G ≥ T(jID, m)
              • Set Loc := k
              • Go to step 2f
              [End of step 2(b)iiC If]
           D. Set k := k + 1
           [End of step 2(b)ii Loop]
       Else set flag := flag + 1
       [End of step 2b If]
   (c) Set n := 1
   (d) Repeat while n ≤ N
       i. Set mID := JFn and h := Qp(mID, n)
       ii. Put n in the front position and do a 1-bit right shift from location MFmID to h.
       iii. Set n := n + 1
       [End of step 2d Loop]
   (e) Go to step 2a
   (f) Place jID at the position Loc
   (g) Set S(m, Loc) := X
   (h) Set F(m, Loc) := S(m, Loc) + T(jID, m)
   (i) Set m := (m + 1) mod M
   [End of step 2 Loop]
[End of Algorithm]
Figure 4.2: Two steps of a partial Gantt chart while building the schedule from the phenotype for a 3 × 3 job-shop scheduling problem. The X axis represents the execution time and the Y axis represents the machines.

as shown in Figure 4.2(b), due to the presence of an unused gap before j2. A swap between this gap and job j1 would allow the processing of j1 on m2 earlier than the time shown in Figure 4.2(a). This swapping of j1 on m2 creates an opportunity to move this job to the left on machine m3 (see Figure 4.2(c)). Finally, j3 on m2 can also be moved to the left, which ultimately reduces the makespan, as shown in Figure 4.2(d). Algorithm 4.3 gives a step-by-step description of the GR algorithm.
In this swapping technique, at most N swaps are required for each operation. Thus, the complexity of GR is O(MN^2).
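The core test of GR, whether a job fits into an earlier idle gap on its machine without starting before the job itself is ready, can be sketched as follows (an illustrative single-machine view only; the full rule also re-checks downstream feasibility):

```python
# Hedged sketch of one gap-reduction probe on a single machine: scan the
# scheduled operations left to right and find the first idle gap that can
# hold the candidate operation without violating its job-readiness time.
def find_gap(schedule, ready, duration):
    """schedule: sorted list of (start, finish) already on the machine;
    ready: earliest start allowed by the job's precedence constraints;
    duration: processing time. Returns a feasible start time or None."""
    prev_finish = 0
    for start, finish in schedule:
        earliest = max(prev_finish, ready)     # gap opens when both are free
        if earliest + duration <= start:       # operation fits in this gap
            return earliest
        prev_finish = finish
    return None

# A gap between t=2 and t=5 can host a 2-unit operation that is ready at t=1.
assert find_gap([(0, 2), (5, 8)], ready=1, duration=2) == 2
```

When `find_gap` returns a start time, the job is moved left into that gap, exactly the swap between a gap and a job described in the text.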
4.4
Restricted Swapping (RS)
For a given machine, the restricted swapping rule allows swapping between adjacent jobs if and only if the resulting schedule is feasible. This process is carried out only for the job which takes the longest time to complete.

Algorithm 4.4 Algorithm for the restricted swapping technique (RS)
function restrictedswap(p)
Let O(n, k) be the k-th machine required by job n in the predefined machine sequence, and O′(n, m) the index of machine m in job n. Qp(m, k) is the k-th operation in machine m of individual p. nonConflict(m, u, v) is a function that returns true if the ending time of the immediate predecessor operation of u does not overlap with the modified starting time of the same job in machine m, and the starting time of the immediate following operation of job v does not conflict with the ending time of the same job in machine m.
1. Set n := getBottleneckJob(p) and k := N − 1
2. Repeat while k ≥ 1
   (a) Set m := O(n, k)
   (b) If O′(n, m) ≠ 1 then
       i. Set n′ := Qp(m, (O′(n, m) − 1))
       ii. If nonConflict(m, n, n′) = true
           • Swap n with n′ in phenotype p
           • Go to step 2c
           [End of step 2(b)ii If]
       [End of step 2b If]
   (c) Set k := k − 1
   [End of step 2 Loop]
[End of Algorithm]

Suppose that job j takes the longest time to complete in the phenotype p. The algorithm starts from the last operation of j in p and checks with the
Figure 4.3: Gantt chart of the solution: (a) before applying the restricted swapping, (b) after applying restricted swapping and re-evaluation.

immediate predecessor operation whether these two are swappable or not. The necessary conditions for swapping are: neither operation can start before the finishing time of the immediate predecessor operation of its corresponding job, and both operations have to finish before the start of the immediate successor operations of their corresponding jobs. Interestingly, the algorithm does not destroy the feasibility of the solution. It may change the makespan if either operation is the last operation of the corresponding machine, but it also gives an alternative solution which may improve the fitness in successive generations, when the phenotype is rescheduled. The details of this algorithm are described in Algorithm 4.4. This swapping can be seen in Figures 4.3(a) and (b). The makespan in Figure 4.3(b) is improved due to the swap between jobs j3 and j2 in machine m3. This swapping is not permissible by GR, because there is not enough gap in front of j2 for j3. However, one such swap may create the scope for another
such swapping in the next bottleneck machine. This is done for a few individuals only. As the complexity of this algorithm is simply of order N , it does not affect the overall computational complexity much.
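The two feasibility conditions for swapping adjacent operations can be sketched as below. This is a simplified, illustrative check, not the thesis's nonConflict routine; each operation is reduced to its duration and the finish/start times of its job-neighbours:

```python
# Hedged sketch of the restricted-swapping feasibility test for two
# operations adjacent on one machine, where u currently precedes v.
def swappable(slot, u, v):
    """slot: time the machine becomes free before this pair of operations.
    Each operation is a dict with duration 'p', job-predecessor finish
    'pred_end' and job-successor start 'succ_start'."""
    v_start = max(slot, v["pred_end"])                 # v would now run first
    u_start = max(v_start + v["p"], u["pred_end"])     # u follows v
    # both must still finish before their job-successor operations start
    return (v_start + v["p"] <= v["succ_start"]
            and u_start + u["p"] <= u["succ_start"])
```

If the check passes, the swap preserves feasibility, matching the text's observation that RS never destroys a feasible solution.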
4.5
Implementation
First, we implemented the simple GA for solving JSSPs which was discussed in Chapter 3. In it, each individual is represented by a binary chromosome. We use the job pair-relationship based representation as in (Nakano and Yamada 1991; Paredis 1997; Yamada 2003). We use simple two-point crossover and bit-flip mutation as reproduction operators. We carried out a set of experiments with different crossover and mutation rates to analyze the performance of the algorithm. We then implemented three versions of MAs by introducing the priority rules as local search techniques, as discussed earlier, as follows:
• MA(PR): Partial re-ordering rule with GA
• MA(GR): Gap reduction rule with GA and
• MA(GR-RS): Gap reduction and restricted swapping rules with GA
In these algorithms, we apply elitism in each generation to preserve the best solution found so far, and also to let the elite individuals reproduce more than the rest (Goldberg 1989; Ishibuchi and Murata 1998). In performing the crossover operation, we use a tournament selection that chooses one individual from the elite class of the individuals (i.e. the top 15%) and two individuals from the rest. This selection then plays a tournament between the last two and performs crossover between the winner and the elite individual. We rank the individuals on the basis of the fitness value. A high selection pressure on the better individuals may contribute to premature convergence. In particular, we consider the situation where 50% or more of the elite class are the same solution; in this case, their offspring will be quite similar after some generations. To counter this, when it occurs, a higher mutation rate is used to help diversify the population. We carried out experiments
by varying the crossover and mutation rates for all the test problems. From the experimental results, the best crossover and mutation rates were identified as 0.45 and 0.35 respectively. The detailed results and analysis are discussed in the next section. We use the same population size and the same number of generations as used for the traditional GA in Chapter 3. In this approach, GR is applied to every individual. On the other hand, we apply PR and RS to only 5% of randomly selected individuals in every generation. In implementing all local searches, we use the following conditions.
• A change in the individual, due to local search, will be accepted if it improves the fitness value.
• If the fitness value does not improve, the change can still be accepted if it is better than a certain threshold value.
To test the performance of our proposed algorithms, we have solved the 40 benchmark problems considered in Chapter 3 and have compared our results with several existing algorithms.
4.6 Results and Analysis
The results for the benchmark problems were obtained by executing the algorithms on a personal computer. Each problem was run 30 times, and the results and parameters are tabulated in Tables 4.1–4.3. Table 4.1 compares the performance of the four algorithms we implemented [GA, MA(PR), MA(GR), and MA(GR-RS)] in terms of the percentage average relative deviation (%ARD) from the best result published in the literature, the standard deviation of the percentage relative deviation (%SDRD), and the average number of fitness evaluations required. From Table 4.1, it is clear that the performance of the MAs is better than that of the GA, and that MA(GR) is better than both MA(PR) and GA. The addition of RS to MA(GR), giving MA(GR-RS), clearly enhanced the performance of the algorithm. Out of the 40 test problems, MA(GR) obtained exact optimal solutions for 23 problems and
Table 4.1: Comparing our four algorithms for 40 test problems

Algorithm    Optimal   ARD     SDRD    Avg.     Avg. Fit.     Avg. Comp.
             Found     (%)     (%)     Gen.     Eval. (10³)   Time (sec)
GA           15        3.591   4.165   270.93   664.90        201.60
MA(PR)       16        3.503   4.192   272.79   660.86        213.42
MA(GR)       23        1.360   2.250   136.54   356.41        105.87
MA(GR-RS)    27        0.968   1.656   146.63   388.58        119.81
MA(GR-RS) obtained optimal results for 27 problems (23 of them the same as MA(GR)). In addition, MA(GR-RS) obtained substantially improved solutions for 10 other problems. In general, these two algorithms converged quickly, as can be seen from the average number of fitness evaluations. As shown in Table 4.1, the addition of the local search techniques to GA (for the last two MAs) not only improves the quality of solutions significantly, but also helps in converging to the solutions with a lower number of generations and a lower total number of fitness evaluations. However, as the local search techniques require additional computation, the computational time per generation for all three MAs is higher than for GA. For example, the average computational times taken per generation by GA, MA(PR), MA(GR) and MA(GR-RS) are 0.744, 0.782, 0.775 and 0.817 seconds respectively. Interestingly, the overall average computational time per test problem solved is much lower for MA(GR) and MA(GR-RS) than for the other two algorithms. As shown in Table 4.1, over all 40 test problems, the algorithm MA(GR-RS) improved the average of the best solutions over GA by 2.623%, while reducing the computational time by 40.57% on average per problem. Table 4.2 presents the best, average, standard deviation, median and worst fitness, together with the best known fitness, for the 40 test problems, for our four algorithms [GA, MA(PR), MA(GR), and MA(GR-RS)]. As shown in Table 4.2, MA(GR-RS) outperformed our other three algorithms. We have compared the performance of our best algorithm, MA(GR-RS), with other
published algorithms based on the average relative deviation and the standard deviation of the relative deviations, as presented in Table 4.3. As different authors used different numbers of problems, we have compared the results based on the number of test problems solved by each. For example, as Ombuki and Ventresca (2004) solved 25 (problems la16–la40) of the 40 test problems considered in this research, we calculated ARD and SDRD over these 25 problems to make a fairer comparison.

Table 4.2: Experimental results of our four algorithms, including the best solutions found in the literature

Problem        Algorithm    Best   Average    St.Dev   Median   Worst
la01 10 × 5    Known-Best    666
               GA            667    676.067    6.313    678.0     688
               MA(PR)        667    676.133    5.637    678.0     688
               MA(GR)        666    667.567    2.861    667.0     678
               MA(GR-RS)     666    667.600    2.848    667.0     678
la02 10 × 5    Known-Best    655
               GA            655    658.433    5.063    655.0     666
               MA(PR)        655    660.500    5.368    660.5     666
               MA(GR)        655    655.100    0.548    655.0     658
               MA(GR-RS)     655    656.267    3.331    655.0     666
la03 10 × 5    Known-Best    597
               GA            617    624.700    9.407    619.5     642
               MA(PR)        617    625.567    5.811    628.0     640
               MA(GR)        597    614.867    5.917    617.0     619
               MA(GR-RS)     597    613.933    7.497    617.0     619
la04 10 × 5    Known-Best    590
               GA            606    607.267    2.303    606.0     613
               MA(PR)        606    607.333    2.090    606.0     611
               MA(GR)        590    594.500    1.526    595.0     595
               MA(GR-RS)     590    593.333    2.397    595.0     595
la05 10 × 5    Known-Best    593
               GA            593    593.000    0.000    593.0     593
               MA(PR)        593    593.000    0.000    593.0     593
               MA(GR)        593    593.000    0.000    593.0     593
               MA(GR-RS)     593    593.000    0.000    593.0     593
la06 15 × 5    Known-Best    926
               GA            926    926.000    0.000    926.0     926
               MA(PR)        926    926.000    0.000    926.0     926
               MA(GR)        926    926.000    0.000    926.0     926
               MA(GR-RS)     926    926.000    0.000    926.0     926
la07 15 × 5    Known-Best    890
               GA            890    896.500    4.273    897.0     906
               MA(PR)        890    895.367    3.011    897.0     897
               MA(GR)        890    890.000    0.000    890.0     890
               MA(GR-RS)     890    890.000    0.000    890.0     890
la08 15 × 5    Known-Best    863
               GA            863    863.000    0.000    863.0     863
               MA(PR)        863    863.000    0.000    863.0     863
               MA(GR)        863    863.000    0.000    863.0     863
               MA(GR-RS)     863    863.000    0.000    863.0     863
la09 15 × 5    Known-Best    951
               GA            951    951.000    0.000    951.0     951
               MA(PR)        951    951.000    0.000    951.0     951
               MA(GR)        951    951.000    0.000    951.0     951
               MA(GR-RS)     951    951.000    0.000    951.0     951
la10 15 × 5    Known-Best    958
               GA            958    958.000    0.000    958.0     958
               MA(PR)        958    958.000    0.000    958.0     958
               MA(GR)        958    958.000    0.000    958.0     958
               MA(GR-RS)     958    958.000    0.000    958.0     958
la11 20 × 5    Known-Best   1222
               GA           1222   1222.000    0.000   1222.0    1222
               MA(PR)       1222   1222.000    0.000   1222.0    1222
               MA(GR)       1222   1222.000    0.000   1222.0    1222
               MA(GR-RS)    1222   1222.000    0.000   1222.0    1222
la12 20 × 5    Known-Best   1039
               GA           1039   1039.000    0.000   1039.0    1039
               MA(PR)       1039   1039.000    0.000   1039.0    1039
               MA(GR)       1039   1039.000    0.000   1039.0    1039
               MA(GR-RS)    1039   1039.000    0.000   1039.0    1039
la13 20 × 5    Known-Best   1150
               GA           1150   1150.000    0.000   1150.0    1150
               MA(PR)       1150   1150.000    0.000   1150.0    1150
               MA(GR)       1150   1150.000    0.000   1150.0    1150
               MA(GR-RS)    1150   1150.000    0.000   1150.0    1150
la14 20 × 5    Known-Best   1292
               GA           1292   1292.000    0.000   1292.0    1292
               MA(PR)       1292   1292.000    0.000   1292.0    1292
               MA(GR)       1292   1292.000    0.000   1292.0    1292
               MA(GR-RS)    1292   1292.000    0.000   1292.0    1292
la15 20 × 5    Known-Best   1207
               GA           1207   1241.400   17.641   1243.0    1265
               MA(PR)       1207   1249.067   14.355   1243.5    1275
               MA(GR)       1207   1207.067    0.365   1207.0    1209
               MA(GR-RS)    1207   1207.133    0.507   1207.0    1209
la16 10 × 10   Known-Best    945
               GA            994   1002.833   10.048    994.0    1028
               MA(PR)        994   1003.100   10.246    994.5    1028
               MA(GR)        946    964.433   15.809    962.0     982
               MA(GR-RS)     945    968.267   15.193    979.0     982
la17 10 × 10   Known-Best    784
               GA            794    822.500   10.180    820.0     839
               MA(PR)        785    819.600   12.878    820.0     839
               MA(GR)        784    787.800    3.969    785.0     793
               MA(GR-RS)     784    788.933    4.110    792.0     793
la18 10 × 10   Known-Best    848
               GA            861    895.333   14.563    901.0     912
               MA(PR)        861    896.333   13.887    901.0     911
               MA(GR)        861    861.000    0.000    861.0     861
               MA(GR-RS)     848    859.267    4.495    861.0     861
la19 10 × 10   Known-Best    842
               GA            890    915.733    9.044    914.0     929
               MA(PR)        896    916.333    6.830    914.0     929
               MA(GR)        850    854.167    5.736    850.0     869
               MA(GR-RS)     842    855.467    7.628    855.0     869
la20 10 × 10   Known-Best    902
               GA            967    979.667   12.623    977.0    1016
               MA(PR)        967    980.567   10.585    980.5    1016
               MA(GR)        907    910.000    2.491    912.0     912
               MA(GR-RS)     907    910.000    2.491    912.0     912
la21 15 × 10   Known-Best   1046
               GA           1098   1145.067   20.067   1145.0    1185
               MA(PR)       1090   1139.633   21.196   1138.0    1187
               MA(GR)       1089   1105.800   12.220   1102.5    1128
               MA(GR-RS)    1079   1097.600   12.260   1096.0    1124
la22 15 × 10   Known-Best    927
               GA            986   1027.500   17.591   1031.0    1055
               MA(PR)        985   1022.567   20.681   1028.0    1076
               MA(GR)        963    980.667    8.739    980.5    1003
               MA(GR-RS)     960    981.000   13.339    981.0    1000
la23 15 × 10   Known-Best   1032
               GA           1043   1077.133   17.596   1080.0    1108
               MA(PR)       1043   1072.100   18.048   1072.0    1099
               MA(GR)       1032   1032.033    0.183   1032.0    1033
               MA(GR-RS)    1032   1032.000    0.000   1032.0    1032
la24 15 × 10   Known-Best    935
               GA            984   1023.700   24.940   1020.5    1080
               MA(PR)        986   1022.967   26.087   1027.0    1083
               MA(GR)        979    999.733    9.638    997.5    1011
               MA(GR-RS)     959    996.400   15.930   1000.0    1011
la25 15 × 10   Known-Best    977
               GA           1077   1116.967   19.063   1114.0    1154
               MA(PR)       1077   1114.600   15.106   1114.0    1153
               MA(GR)        991   1017.133    9.669   1016.0    1036
               MA(GR-RS)     991   1016.667   19.297   1013.0    1076
la26 20 × 10   Known-Best   1218
               GA           1295   1324.133   18.524   1322.5    1369
               MA(PR)       1303   1328.967   13.325   1331.5    1366
               MA(GR)       1220   1237.467   13.711   1234.0    1267
               MA(GR-RS)    1218   1234.267   15.837   1228.0    1266
la27 20 × 10   Known-Best   1235
               GA           1328   1356.867   17.702   1355.0    1389
               MA(PR)       1328   1356.333   18.826   1350.0    1392
               MA(GR)       1298   1311.967    9.793   1310.0    1333
               MA(GR-RS)    1286   1306.333   12.944   1309.0    1333
la28 20 × 10   Known-Best   1216
               GA           1297   1316.933    9.563   1316.0    1338
               MA(PR)       1292   1318.233   11.485   1316.0    1338
               MA(GR)       1243   1265.033    8.054   1266.5    1283
               MA(GR-RS)    1236   1257.333   11.354   1261.0    1269
la29 20 × 10   Known-Best   1157
               GA           1265   1294.067   20.005   1291.0    1334
               MA(PR)       1267   1297.500   22.526   1292.0    1336
               MA(GR)       1237   1244.867    6.756   1241.5    1262
               MA(GR-RS)    1221   1240.467    6.857   1241.0    1249
la30 20 × 10   Known-Best   1355
               GA           1377   1465.433   35.823   1476.0    1555
               MA(PR)       1363   1460.400   35.901   1476.0    1564
               MA(GR)       1355   1358.900    6.525   1355.0    1380
               MA(GR-RS)    1355   1362.333    8.083   1360.0    1380
la31 30 × 10   Known-Best   1784
               GA           1784   1784.000    0.000   1784.0    1784
               MA(PR)       1784   1784.100    0.548   1784.0    1787
               MA(GR)       1784   1784.000    0.000   1784.0    1784
               MA(GR-RS)    1784   1784.000    0.000   1784.0    1784
la32 30 × 10   Known-Best   1850
               GA           1850   1869.600   19.793   1872.0    1916
               MA(PR)       1850   1866.633   17.515   1866.0    1933
               MA(GR)       1850   1850.000    0.000   1850.0    1850
               MA(GR-RS)    1850   1850.000    0.000   1850.0    1850
la33 30 × 10   Known-Best   1719
               GA           1719   1729.000   11.225   1725.0    1765
               MA(PR)       1719   1726.700    8.209   1723.0    1745
               MA(GR)       1719   1719.000    0.000   1719.0    1719
               MA(GR-RS)    1719   1719.000    0.000   1719.0    1719
la34 30 × 10   Known-Best   1721
               GA           1748   1770.900   13.314   1770.5    1814
               MA(PR)       1740   1783.367   23.137   1775.0    1814
               MA(GR)       1721   1721.000    0.000   1721.0    1721
               MA(GR-RS)    1721   1721.000    0.000   1721.0    1721
la35 30 × 10   Known-Best   1888
               GA           1898   1947.200   30.940   1942.5    2018
               MA(PR)       1888   1936.200   32.282   1934.5    2007
               MA(GR)       1888   1888.333    1.826   1888.0    1898
               MA(GR-RS)    1888   1888.000    0.000   1888.0    1888
la36 15 × 15   Known-Best   1268
               GA           1388   1432.600   29.145   1432.0    1497
               MA(PR)       1389   1431.933   28.384   1427.5    1477
               MA(GR)       1317   1326.867    9.801   1324.0    1352
               MA(GR-RS)    1307   1328.667   11.385   1327.0    1346
la37 15 × 15   Known-Best   1397
               GA           1561   1606.233   25.268   1605.5    1656
               MA(PR)       1544   1606.567   29.768   1605.5    1675
               MA(GR)       1466   1476.933    6.125   1479.0    1487
               MA(GR-RS)    1442   1473.600   11.312   1479.0    1487
la38 15 × 15   Known-Best   1196
               GA           1367   1412.533   17.067   1415.0    1439
               MA(PR)       1367   1412.167   17.048   1420.0    1432
               MA(GR)       1304   1316.733    5.831   1317.0    1329
               MA(GR-RS)    1299   1312.533    8.212   1316.0    1329
la39 15 × 15   Known-Best   1233
               GA           1338   1368.467   18.294   1367.0    1413
               MA(PR)       1342   1370.933   18.910   1372.0    1423
               MA(GR)       1263   1287.767   14.352   1294.5    1323
               MA(GR-RS)    1252   1282.600   16.328   1278.0    1301
la40 15 × 15   Known-Best   1222
               GA           1357   1366.300   13.565   1360.0    1407
               MA(PR)       1357   1370.800   14.170   1364.5    1406
               MA(GR)       1252   1278.500   12.415   1280.0    1294
               MA(GR-RS)    1252   1279.600   17.533   1289.0    1303
As shown in Table 4.3, for 7 selected problems, MA(GR-RS) outperforms Della-Croce et al. (1995). For 24 selected problems, MA(GR-RS) also outperforms the Adams et al. (1988) (SB II) algorithm. For 25 test problems (la16–la40), our proposed MA(GR-RS) is very competitive with SBGA (60), and is much better than Ombuki and Ventresca (2004). When considering all 40 test problems, our MA(GR-RS) clearly outperformed all the algorithms compared in Table 4.3. Our algorithm, MA(GR-RS), works well for most benchmark problems tested. In general, if the distribution of the jobs to the machines is uniform (in terms of the number of jobs requiring a machine) with respect to the time axis, the problem can be managed using GA alone. However, if the concentration of jobs is high on one machine (or on a few machines) relative to the others, within any time window, GA usually fails to achieve good quality solutions. For example, in problem “la37”, four jobs require machine 11 to complete their first operations, but no job requires machine 1, even for its second operation. In such cases, our PR rule improves the solution by re-ordering the tasks to give priority to the bottleneck machines. Our combined GR and RS rules improve the solution by utilizing any
Table 4.3: Comparing the algorithms based on ARD and SDRD with other algorithms

No. of Test   Problems               Author                        Algorithm    ARD (%)   SDRD (%)
Probs.
40            la01–la40              Proposed                      MA(GR-RS)    0.9680    1.6559
                                     Aarts et al. (1994)           GLS1         2.0540    2.5279
                                     Aarts et al. (1994)           GLS2         1.7518    2.1974
                                     Dorndorf and Pesch (1995)     PGA          4.0028    4.0947
                                     Dorndorf and Pesch (1995)     SBGA (40)    1.2455    1.7216
                                     Binato et al. (2001)                       1.8683    2.7817
                                     Adams et al. (1988)                        3.6740    3.9804
25            la16–la40              Proposed                      MA(GR-RS)    1.5488    1.8759
                                     Ombuki and Ventresca (2004)                5.6676    4.3804
                                     Dorndorf and Pesch (1995)     SBGA (60)    1.5560    1.5756
24            la02–la04, la08,       Proposed                      MA(GR-RS)    1.6810    1.8562
              la16–la30, la36–la40   Adams et al. (1988)           SB II        2.4283    1.8472
9             la01, la06, la11,      Proposed                      MA(GR-RS)    0.7788    1.4423
              la16, la21, la26,      Della-Croce et al. (1995)                  1.5588    1.9643
              la31, la36
useful gaps left in the schedules and by swapping the adjacent jobs on a given machine.
4.6.1 Group-wise Comparison
As in Chapter 3, we have reported the problem group-wise performance in Table 4.4, in terms of the %ARD and the average number of evaluations required for each of the problem groups. The first two columns of Table 4.4 represent the problem groups and their sizes respectively. The next two columns show the group-wise average of ARD for GA and MA(GR-RS) on a percentage scale. The last two columns contain the average number of evaluations required for these two algorithms. From Table 4.4, it is clear that MA(GR-RS) not only produced better results for each and every group of problems, but also reduced the average number of evaluations required to solve the problems. The pattern of the algorithms' performance observed in Chapter 3, as related to the problem size and the number of machines and jobs, is still valid for MA(GR-RS).

Table 4.4: Group-wise comparison between GA and MA(GR-RS)

Prob. Group   Size      Avg. of %ARD           Avg. of Eval.
                        GA       MA(GR-RS)     GA       MA(GR-RS)
la01–la05     10 × 5     1.242    0.000        0.364    0.106
la06–la10     15 × 5     0.000    0.000        0.080    0.009
la11–la15     20 × 5     0.000    0.000        0.087    0.029
la16–la20     10 × 10    4.180    0.111        0.542    0.320
la21–la25     15 × 10    5.576    2.143        0.948    0.718
la26–la30     20 × 10    6.294    2.261        1.160    0.958
la31–la35     30 × 10    0.420    0.000        1.008    0.220
la36–la40     15 × 15   11.013    3.229        1.130    0.748
4.6.2 Statistical Analysis
To get a clear view of the performance of our three MAs relative to the traditional GA, we have performed a statistical significance test for each of these algorithms against the traditional GA. We have used the Student's t-test (Student 1908), where the t-values are calculated from the average and standard deviation of 30 independent runs for each problem; the fitness values are assumed to be normally distributed. The results of the test are tabulated in Table 4.5. We have derived nine levels of significance, to judge the performance of MA(PR), MA(GR), and MA(GR-RS) over the GA, using the critical t-values 1.311 (for the 80% confidence level), 2.045 (for the 95% confidence level), and 2.756 (for the 99% confidence level). We defined the significance level S as follows.

S = \begin{cases}
++++ & t_i \geq 2.756 \\
+++  & 2.045 \leq t_i < 2.756 \\
++   & 1.311 \leq t_i < 2.045 \\
+    & 0 < t_i < 1.311 \\
=    & t_i = 0 \\
-    & -1.311 < t_i < 0 \\
--   & -2.045 < t_i \leq -1.311 \\
---  & -2.756 < t_i \leq -2.045 \\
---- & t_i \leq -2.756
\end{cases}
(4.1)
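The mapping from a t-value to the significance symbols of Equation (4.1) is mechanical. The following sketch (our illustration, not the thesis code) computes a simple unpooled two-sample t statistic from the 30-run summary statistics and then applies the nine levels; the thesis's exact t-statistic variant may differ slightly.

```python
import math

# Sketch of the t-test machinery behind Table 4.5: an unpooled two-sample
# t statistic from per-problem run statistics, mapped to Equation (4.1).

def t_value(mean_ga, sd_ga, mean_ma, sd_ma, n=30):
    """Positive t means the MA achieved a lower (better) mean makespan than GA."""
    se = math.sqrt(sd_ga ** 2 / n + sd_ma ** 2 / n)
    return (mean_ga - mean_ma) / se if se > 0 else 0.0

def significance(t):
    """Map a t-value to the nine significance symbols of Equation (4.1)."""
    for threshold, symbol in ((2.756, "++++"), (2.045, "+++"), (1.311, "++")):
        if t >= threshold:
            return symbol
    if t > 0:
        return "+"
    if t == 0:
        return "="
    for threshold, symbol in ((-2.756, "----"), (-2.045, "---"), (-1.311, "--")):
        if t <= threshold:
            return symbol
    return "-"
```

With the la01 figures from Table 4.2 (GA 676.067 ± 6.313 versus MA(GR-RS) 667.600 ± 2.848 over 30 runs), `t_value` gives roughly 6.7, close to the 6.68 reported in Table 4.5, and `significance` maps it to `++++`.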
It is clear from Table 4.5 that MA(GR) and MA(GR-RS) are far superior to the traditional GA, as these two algorithms made extremely significant improvements over GA in 30 and 29 problems respectively. Moreover, both algorithms perform better than, or equal to, GA on the rest of the problems. Although the algorithm MA(PR) is not extremely significantly better than GA for any problem, it is slightly better than, significantly better than, or equal to GA for most of the test problems. The
Table 4.5: Statistical significance test (Student's t-test) results of MA(PR), MA(GR), and MA(GR-RS) compared to the GA

                  t-value                        Significance
Prob.   MA(PR)   MA(GR)   MA(GR-RS)    MA(PR)   MA(GR)   MA(GR-RS)
la01    -0.04     6.72      6.68        -        ++++     ++++
la02    -1.53     3.59      1.95        --       ++++     ++
la03    -0.43     4.85      4.87        -        ++++     ++++
la04    -0.12    25.31     22.74        -        ++++     ++++
la05     0.00     0.00      0.00        =        =        =
la06     0.00     0.00      0.00        =        =        =
la07     1.19     8.33      8.33        +        ++++     ++++
la08     0.00     0.00      0.00        =        =        =
la09     0.00     0.00      0.00        =        =        =
la10     0.00     0.00      0.00        =        =        =
la11     0.00     0.00      0.00        =        =        =
la12     0.00     0.00      0.00        =        =        =
la13     0.00     0.00      0.00        =        =        =
la14     0.00     0.00      0.00        =        =        =
la15    -1.85    10.66     10.63        --       ++++     ++++
la16    -0.10    11.23     10.27        -        ++++     ++++
la17     0.97    17.39     16.70        +        ++++     ++++
la18    -0.27    12.91     12.94        -        ++++     ++++
la19    -0.29    31.49     27.70        -        ++++     ++++
la20    -0.30    29.66     29.64        -        ++++     ++++
la21     1.02     9.15     11.00        +        ++++     ++++
la22     1.00    13.06     11.46        +        ++++     ++++
la23     1.09    14.04     14.05        +        ++++     ++++
la24     0.11     4.91      5.03        +        ++++     ++++
la25     0.53    25.58     20.07        +        ++++     ++++
la26    -1.16    20.60     20.05        -        ++++     ++++
la27     0.11    12.16     12.54        +        ++++     ++++
la28    -0.48    22.74     21.76        -        ++++     ++++
la29    -0.62    12.76     13.86        -        ++++     ++++
la30     0.54    16.02     15.36        +        ++++     ++++
la31    -1.00     0.00      0.00        -        =        =
la32     0.61     5.42      5.42        +        ++++     ++++
la33     0.91     4.88      4.88        +        ++++     ++++
la34    -2.56    20.53     20.53        ---      ++++     ++++
la35     1.35    10.40     10.48        ++       ++++     ++++
la36     0.09    18.83     18.15        +        ++++     ++++
la37    -0.05    27.24     26.16        -        ++++     ++++
la38     0.08    29.09     25.28        +        ++++     ++++
la39    -0.51    19.01     19.03        -        ++++     ++++
la40    -1.26    26.15     21.19        -        ++++     ++++
Better: ++++ extremely significant; +++ highly significant; ++ significant; + slightly significant; = equal.
Worse: ---- extremely significant; --- highly significant; -- significant; - slightly significant.
above analysis supports the conclusion that the priority rules significantly improve the performance of the traditional GA.
4.6.3 Contribution of the Priority Rules
To analyze the individual contributions of the local search methods, we experimented on a sample of five problems (la21–la25) with the same set of parameters in the same computing environment. For these problems, the individual percentage improvements of PR, GR and RS over GA after 100, 250 and 1000 generations are reported in Table 4.6. The percentage improvement for a local search (LS) after k generations is calculated as follows.

\text{% Improvement} = \frac{(\text{Fitness of LS})_k - (\text{Fitness of GA})_k}{(\text{Fitness of GA})_k} \times 100\%
(4.2)
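In code, Equation (4.2) amounts to a one-line relative difference. In this sketch (ours, not the thesis implementation) we orient the numerator as GA minus LS, so that a local-search variant with a lower (better) makespan yields a positive percentage, matching the sign of the improvements reported in Table 4.6:

```python
# Equation (4.2) as a relative-difference computation. Since makespan is
# minimized, (GA - LS) is positive when the local-search variant is better.

def percent_improvement(fitness_ls, fitness_ga):
    """Percentage improvement of a local-search variant over GA after k generations."""
    return (fitness_ga - fitness_ls) / fitness_ga * 100.0
```

For example, an LS makespan of 91.08 against a GA makespan of 100.0 corresponds to the 8.92% figure quoted below for GR after 100 generations.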
Although all three priority rules have a positive effect, GR's contribution is significantly higher than that of the other two rules, and is consistent over many generations. Interestingly, in the case of GR, the average rate of improvement gradually decreases as the generation number increases (8.92% after 100 generations and 6.31% after 1000 generations). The reason for this is that GR starts with a set of high quality initial solutions. Although PR and RS have no significant effect on the solutions after the first evaluation, GR provided an 18.17% improvement compared to GA, measured by the average improvement of the best makespan after the first evaluation, without applying any other genetic operators. To observe the contribution more closely, we recorded the improvement due to each individual local search in every generation for the first 100 generations. This is depicted in Figure 4.4. It can be observed that GR consistently outperformed
Table 4.6: Individual contribution of the priority rules after 100, 250 and 1000 generations

No. of Probs.   Local Search    % improvement from GA
                Algorithm       100       250       1000
40              PR-GA           1.3293    1.1244    0.3838
(la01–la40)     RS-GA           0.8943    1.0511    0.1824
                GR-GA           8.9206    7.7280    6.3105
the other two local search algorithms. This is because PR is effective only for the bottleneck jobs, whereas GR is applied to all individuals. Over some (or many) generations, the GR process eventually makes most of the changes performed by PR. We identified a number of individuals where PR could make a positive contribution, and then applied GR to those individuals to compare their relative contributions.
Figure 4.4: Fitness curves of the problems la21–la25 for the first 100 generations using our proposed algorithms
For the five problems we considered over 1000 generations, we observed that GR made a 9.13% higher improvement than PR. It must be noted here that GR is able to make all the changes which PR does; that is, PR cannot make an extra contribution over GR. As a result, the inclusion of PR with GR does not help
to improve the performance of the algorithm. That is why we do not present other possible variants of MAs, such as MA(PR-RS), MA(GR-PR) and MA(GR-RS-PR). We have also tested a number of other priority rules, such as (i) giving the top priority to the job with the longest total machining time required under an unconstrained situation, (ii) identifying two or more bottleneck machines and applying the rules simultaneously, and (iii) allowing multiple swappings in one go. As these rules did not provide significant benefits, we do not report them here. Both PR and RS were applied to only 5% of the individuals; the role of RS is mainly to increase diversity. A higher rate of PR and RS does not provide significant benefits either, in terms of quality of solution or computational time. We experimented with varying the rate of PR and RS individually, for five selected problems, from 5% to 25%, and tabulated the percentage relative improvement over GA in Table 4.7.

Table 4.7: Percentage relative improvement for the five problems (la21–la25)

Changing    5%        10%       15%       20%       25%
PR          5.2613    4.9379    0.1683    2.8030    1.3665
RS          4.2180    4.9491    2.2183    3.0232    3.0311
From Table 4.7, it is clear that increasing the rate of applying PR and RS does not improve the quality of the solutions. Moreover, it takes extra time to converge.
4.6.4 Parameter Analysis
In GAs and MAs, different reproduction parameters are used. We performed experiments with different combinations of parameters to identify an appropriate set of parameters and their effect on the solutions. A higher selection pressure on better individuals, combined with a higher rate of crossover, reduces diversity, and consequently the solutions converge prematurely. In JSSPs, as a big portion
Table 4.8: Combinations of different reproduction parameters

            Set 1   Set 2   Set 3   Set 4   Set 5   Set 6   Set 7   Set 8
Crossover   0.60    0.65    0.70    0.75    0.80    0.85    0.90    0.95
Mutation    0.35    0.30    0.25    0.20    0.15    0.10    0.05    0.00
of the population converges to particular solutions, the probability of solution improvement reduces, because the rate of selecting the same solution increases. It is therefore important to find appropriate rates for crossover and mutation. Three sets of experiments were carried out as follows:
• Experiment 1: Crossover was varied from 0.60 to 0.95 in increments of 0.05, and mutation from 0.35 down to 0.00 in steps of 0.05, keeping the sum of the crossover and mutation rates equal to 0.95. The detailed combinations are shown as 8 sets in Table 4.8.
• Experiment 2: Crossover was varied while keeping mutation fixed; the mutation value was taken from the best set of Experiment 1.
• Experiment 3: Mutation was varied while keeping crossover fixed; the crossover value was taken from the best value of Experiment 2.
The detailed results for the different combinations are shown graphically in this section. Figure 4.5 represents how the quality of solutions varies with the changing crossover and mutation rates. Our algorithm provided the best solutions for parameter set 2, in which the crossover rate is 0.65 and the mutation rate is 0.30. In the second set of experiments, we varied the crossover rate from 0.70 down to 0.35 with a step size of 0.05 while fixing the mutation rate at 0.30. Figure 4.6 presents the outcome of the second set of experiments, which shows that a crossover rate of 0.45 is the best with a mutation rate of 0.30.
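The eight parameter sets of Experiment 1 follow a simple mechanical rule, since the two rates always sum to 0.95. A sketch (the function name is ours):

```python
# Generate the eight (crossover, mutation) pairs of Experiment 1 (Table 4.8):
# crossover rises from 0.60 to 0.95 in steps of 0.05 while mutation falls
# correspondingly, so that crossover + mutation = 0.95 for every set.

def experiment1_sets():
    sets = []
    for i in range(8):
        crossover = round(0.60 + 0.05 * i, 2)
        mutation = round(0.95 - crossover, 2)
        sets.append((crossover, mutation))
    return sets
```

`experiment1_sets()[1]` is the pair (0.65, 0.30), the combination identified above as parameter set 2, the best performer in the first set of experiments.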
Figure 4.5: Average relative deviation with respect to the different parameter sets tabulated in Table 4.8
Figure 4.6: Average relative deviation based on a fixed mutation rate and a variable crossover rate
Figure 4.7: Average relative deviation based on a fixed crossover rate and a variable mutation rate
In the third set of experiments, we fixed the crossover rate at 0.45 and varied the mutation rate from 0.50 to 0.00. This set of experiments showed the effect of mutation when the crossover rate was fixed at the best value found in the second set of experiments. Figure 4.7 shows that the algorithm performed better as the mutation rate was increased, and it performed well even with a mutation rate of 0.35, which is close to the best-performing combination in the first set of experiments. It can be concluded from these experiments that a higher mutation rate together with a lower crossover rate performs better for this set of problems and this algorithm. It is noted that the average relative deviation in Figures 4.5–4.7 is around 1.30%, whereas it is 0.97% in Table 4.3. This is because the results presented in Tables 4.2 and 4.3 are based on individual parameter tuning, while the results provided in Figures 4.5–4.7 are based on the same parameters for all problems.
4.7 Chapter Conclusion
As JSSPs are complex combinatorial optimization problems, it is practically impossible to reach optimality for larger problems using conventional methods. In this chapter, we have designed and implemented three local search techniques and three versions of memetic algorithms. We have solved 40 benchmark problems and have compared the results with our GA and with well-known algorithms appearing in the literature. All three memetic algorithms provided results superior to the GA. Our memetic algorithm MA(GR-RS) clearly outperforms all the algorithms considered in this chapter. We have also provided a sensitivity analysis of the parameters, and have experimented with different parameters and different algorithms to analyze their contributions. Although our algorithm performs very well for small to medium range problems, it requires further improvement to ensure consistent performance for a wide range of practical JSSPs.
The GR technique, proposed in this chapter, improves the performance of our algorithm by utilizing the gaps left in the schedule. The technique uses a gap if it is big enough to fit a task that improves the fitness while not violating the constraints. However, if a gap is slightly smaller than the task duration, the fitness may still be improved by utilizing that gap; this will be considered in the next chapter. In this chapter, we have considered job-shop scheduling problems under ideal conditions. In practice, however, many interruptions to schedule implementation are experienced, due to machine breakdown, machine unavailability and other reasons. Our algorithm can be modified to accommodate such interruptions on the shop floor, and this will also be considered in the next chapter.
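The feasibility test at the heart of the GR technique can be sketched as follows. This is our simplified illustration, not the thesis implementation: a task may be moved into a gap on its machine only if the gap is long enough and the move respects the precedence constraint with the task's predecessor operation in the same job.

```python
# Simplified gap-reduction test: return the earliest feasible start time for
# a task of the given duration inside the gap [gap_start, gap_end), or None
# if the task cannot be placed there without violating a constraint.

def fits_in_gap(gap_start, gap_end, duration, predecessor_end):
    start = max(gap_start, predecessor_end)  # cannot start before the job's previous operation ends
    if start + duration <= gap_end:          # the whole operation must fit before the gap closes
        return start
    return None
```

For instance, a 5-unit task whose job predecessor finishes at time 12 can be placed at time 12 inside a gap spanning [10, 20), but a 9-unit task cannot, because it would overrun the end of the gap.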
Chapter 5

GA for JSSPs with Interruptions

In Chapter 4, we proposed a gap reduction rule which reduces the gaps between the tasks in a schedule, in order to improve the quality of a solution. In this chapter, we have extended the gap reduction rule so that it also applies shifting to fill up the gaps, even with a certain level of infeasibility. We have shown that the revised algorithm with the proposed rule outperforms the other algorithms discussed in the previous chapter. We have also revised the JSSP definition to incorporate interruption scenarios, and have further revised our algorithm for solving JSSPs with machine breakdown and machine unavailability scenarios. Extensive experiments have been carried out to see the effect of machine breakdowns and unavailability on the schedules.
5.1 Introduction
In the previous chapters, we have developed a simple genetic algorithm and introduced three priority rules to develop memetic algorithms that can solve classical job-shop scheduling problems. In the MA(GR) and MA(GR-RS) algorithms discussed in Chapter 4, we integrated our gap-reduction rule with the GA, placing a task into a gap from the right side of that gap, and requiring that there be no conflict or infeasibility before accepting the move. Later, we noticed that some gaps exist which cannot feasibly be filled, but which were able to improve
the solutions if reduced. This motivated us to introduce a shifted gap-reduction (SGR) rule that tolerates a certain level of infeasibility, and so we have developed a GA with SGR (GA-SGR) algorithm. The new SGR heuristic rule enhances the performance of GAs in solving JSSPs. In the rule, for a candidate solution, we identify any gap left between any two consecutive tasks (/jobs) on a machine. A job from the right of the gap can be placed in it if the gap is big enough for the operation without violating any precedence constraints. The job can also be placed in the gap even if the gap is not big enough, provided the shortfall is within a certain tolerance limit of the operation time and the move improves the overall fitness value; in this case, of course, it must push (/shift) other jobs. We have shown that the inclusion of such rules not only improves the performance of GAs, but also reduces the overall computational requirements. In almost all research, schedules are produced for ideal conditions, assuming there will be no interruptions of any type. In practice, however, process interruptions (or rescheduling events), due to machine breakdown, unavailability of machines because of scheduled or unscheduled maintenance, defective materials, quality control issues, the arrival of a high priority job, or a change of due date for certain jobs, are experienced on the shop floor. The inclusion of such interruptions in traditional JSSPs makes the resulting problem more practical, but also more complex and challenging. Machine unavailability is a common event on the production floor, due to both preventive and failure maintenance of machinery and its supporting equipment. Preventive maintenance data, if known in advance, can easily be incorporated in generating the job-shop schedule. In the case of machine failure, however, the tasks scheduled on the broken machine cannot be resumed until the machine is appropriately repaired.
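The SGR acceptance test described above can be sketched as follows. This is our illustration; the default tolerance value is a placeholder, not a figure from the thesis.

```python
# Shifted gap-reduction (SGR) acceptance sketch: a task slightly longer than
# the gap may still be placed, shifting later tasks to the right, provided
# the shortfall is within a tolerance fraction of the operation time and the
# overall makespan improves.

def sgr_accepts(gap_length, duration, makespan_before, makespan_after, tolerance=0.2):
    if duration <= gap_length:
        return True                                  # plain gap reduction: the task fits outright
    shortfall = duration - gap_length                # infeasibility introduced by the shift
    return (shortfall <= tolerance * duration and    # tolerated only up to the limit
            makespan_after < makespan_before)        # and only if the schedule improves
```

So a task of duration 11 may be shifted into a gap of length 10 when the move reduces the makespan, while a task of duration 14 is rejected for the same gap because the shortfall exceeds the tolerance.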
During and after the machine repair, the continued implementation of the schedule generated earlier would delay the completion of some jobs due to active precedence constraints. Such delays may create a contractual problem with customers. In order to minimize the delay, it is very important to re-optimize the remaining tasks at the time of machine breakdown. If the actual repair time is longer than the estimated one, the schedule must be revised again
once the repair is finished. In the algorithms proposed in Chapters 3–4, we have considered only the classical JSSPs. In this chapter, we have introduced machine unavailability and breakdown. We have modified the GA-SGR algorithm to implement JSSPs under conditions of machine unavailability. In the cases of machine breakdowns, we have applied the scenario to the best solution found from GA-SGR. We have re-optimized the tasks that follow the breakdowns by applying the SGR rule. We have performed detailed experimentation on machine breakdown scenarios by varying the number of breakdowns up to four. For multiple non-overlapping breakdowns, we have sequentially re-optimized the solution after each of the breakdown events. We have shown that our proposed technique outperforms the traditional right-shifting in all cases. We have also come to the conclusion that the algorithm performs better in the cases of scattered but multiple small breakdowns, in comparison to a single breakdown where the cumulative duration of the breakdowns is nearly equal. We have experimented with the same scenarios while considering machine unavailabilities that are known in advance. We have found that the effect of interruptions is less if they are known in advance. We have solved five test problems (la36–la40) from the “la” series, as they are considered the most difficult in this series. We have used the new hybrid algorithm and have experimented by varying the tolerance limit for placing the jobs in the gaps. We have observed that the solution quality gradually improves with the increase of the tolerance limit. For the condition of interruptions, we have used different distributions to generate a number of non-overlapping machine unavailability and breakdown situations. We have observed that the revised solution is able to recover from most of the breakdowns if they happen either during the early stage or the very last stage of the schedules. This chapter is organized as follows.
After this introduction, Section 5.2
presents a brief literature review related to job-shop scheduling with interruptions. We provide a description of the proposed shifted gap-reduction and the reactive scheduling approach in Section 5.3. Section 5.4 contains the details of the experimental setup. Section 5.5 contains the experimental outcomes of our research and the necessary analysis to justify the applicability and performance of our algorithm. This chapter concludes with a brief summary of the achievements from this research in Section 5.6.
5.2 Literature Review
In the last two chapters, we have considered JSSPs without any process interruptions. In practice, the operations may be interrupted for many reasons, such as machine breakdown, arrival of a new job, process time variation, change of job priority, and order cancellation (Fahmy et al. 2008; Subramaniam et al. 2005). Machine breakdown is considered one of the most frequent and challenging issues in production engineering. The unavailability of machines due to planned preventive maintenance can be incorporated as a constraint when solving for an optimal schedule. In the case of sudden breakdowns, the easiest solution is to apply some dispatching rules which help to select a task immediately after the breakdown occurs (Blackstone et al. 1982). In a multi-period planning environment, Baker and Peterson (1979) proposed to implement the scheduling for a certain time period and to then update the model for the next periods on a rolling horizon basis. The selection of the scheduling interval in this case is not an easy task. Recently, reactive scheduling has been introduced to deal with unexpected interruptions, where reschedules of the affected tasks are carried out in an efficient manner. Liu et al. (2005) developed a tabu search-based meta-heuristic algorithm to solve job-shop scheduling problems reactively under machine breakdown. They proposed to divide the scheduling process into two non-overlapping parts: predictive before the breakdown occurs, and reactive when the machine recovers after the breakdown. According to their hypothesis, if the problems are relatively easy to solve, it is still possible to find the optimal solution even if a breakdown occurs. As they used a disjunctive graph-based representation, they identified constructing the appropriate neighborhood structure as the most crucial challenge of their work. This issue can be resolved by using different kinds of representations, like job-pair
relation-based, operation-based, and others. These representations also simplify the algorithms by removing the process of identifying the critical path in the disjunctive graph. Fahmy et al. (2008) have implemented a reactive scheduling technique in the presence of a sudden breakdown. They suggested inserting dummy tasks to remove the affected tasks from the schedule and then rescheduling. The duration of each dummy task is equal to the recovery time of the broken machine. However, the incorporation of such logic in an evolutionary algorithm would drastically increase the computational complexity. Abumaizar and Svestka (1997) proposed to repair the reactive schedules using a new right-shifting process, which shifts each and every operation to its right after a machine breakdown. Wu et al. (1993) developed a genetic algorithm with a pairwise-swapping heuristic and proposed using the right-shifting technique to re-optimize. They experimentally showed that the stability of the schedule can be improved with only a little sacrifice in efficiency in the presence of uncertainties. The drawback of this technique is the uniform shifting of every operation, which increases the machine gaps. Although the authors suggested using the LPT rule (Dorndorf and Pesch 1995) while evaluating the solutions, this approach ignores the possible gaps between the tasks. The above discussions concentrate mainly on run-time breakdowns, where the information about the interruption is not known in advance. McKay et al. (1989) described machine breakdowns as three different scenarios: complete unknowns, when the breakdown information is not known in advance; suspicions about the future, when the possibility of some future breakdowns is known; and known uncertainties, when the necessary information is available at the beginning of the scheduling process. A single-machine scheduling approach based on these three uncertainties can be found in (O’Donovan et al. 1999).
Another approach was presented by Mehta and Uzsoy (1998), who developed a predictive scheduling technique for job-shop problems with machine breakdown that minimizes the maximum lateness. In this research, we have considered the first and third types of interruptions. As will be discussed in the next few sections, we have noticed that the impact
of a breakdown is quite low if the machine unavailability information is known in advance. On the other hand, reactive scheduling is quite complex when the breakdown is sudden. In such a case, the quality of the reactive schedules depends on the position and duration of the breakdown. Abumaizar and Svestka (1997) pointed out that interruptions can be handled in two modes: resume and repeat. The resume mode can be applied when the operations are preemptive, and repeat can be applied otherwise. In our case, we have considered that the operations are strictly non-preemptive; thus, the resume mode is not applicable. The authors in (Abumaizar and Svestka 1997) reported that rescheduling only the affected operations results in much less deviation and computation than performing a total rescheduling.
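The right-shifting repair discussed in this review can be sketched in a few lines. The sketch below is a simplified illustration only: it assumes a schedule represented as a flat list of (machine, job, start, duration) tuples (a representation chosen here for brevity, not taken from the thesis), and it omits the propagation of delays to successor operations on other machines, which a full right-shift repair must also perform.

```python
def right_shift(schedule, machine, t_break, repair):
    """Naive right-shift repair: every task on `machine` that has not
    finished before the breakdown is delayed by the repair duration.
    `schedule` is a list of (machine, job, start, duration) tuples."""
    repaired = []
    for (m, job, start, dur) in schedule:
        if m == machine and start + dur > t_break:
            # task affected by the breakdown: push it right by the repair time
            start += repair
        repaired.append((m, job, start, dur))
    return repaired
```

Because every affected task is pushed uniformly, idle gaps created on the other machines are never reused, which is exactly the drawback that motivates the shifted gap-reduction approach of this chapter.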
5.3 Solution Approach
In this section, we will first discuss the shifted gap-reduction technique for solving JSSPs under ideal conditions. We have used a job-pair relationship-based representation, two-point exchange crossover, bit-flip mutation, tournament selection, local and global harmonization, elitism, and other operators as discussed in Chapter 3. We will then consider both types of interruption: machine breakdown and machine unavailability. We will give a detailed overview of the reactive scheduling at the end of this section.
5.3.1 Shifted Gap Reduction (SGR)
It is not uncommon in practice to leave some gaps (i.e., machine idle time) between two consecutive tasks processed on a machine. In some cases, these are absolutely necessary to satisfy the precedence constraints. In other cases, they are due to the generation of sub-optimal solutions for implementation. In the latter case, it is possible to improve the solution by filling the gaps with suitable tasks. In our shifted gap-reduction technique, a gap will be filled with a task if the gap is large enough to accommodate it without creating infeasibility. In addition, a gap can also be filled if it is smaller than the task duration by no more than a certain tolerance limit, by shifting the tasks to the right of the gap. Here, let us assume that g(time) is a
Algorithm 5.1 Shifted Gap Reduction (SGR) Algorithm

function sgapred(p, m, n)

Let (m, n) denote the operation of job n on machine m. QSmn is the status of (m, n) in the QUEUE; 0 means (m, n) is not in the QUEUE. S(m, n) and F(m, n) are the starting and completion times of (m, n), respectively. JBn and MBm represent the instants of availability of job n and machine m. m+ and m− denote the next and previous machines required by the current operation, while n+ and n− denote the next and previous jobs on the current machine. push(m, n) and pop(m, n) are two functions used to save and restore (m, n) in the QUEUE. The function gapred(p, m, n) places (m, n) in the found gap if it is within the tolerance level.

1. Set QS := 0 and p̄ := −1
2. Call function gapred(p, m, n)
3. Repeat
   (a) If F(m, n) > S(m+, n), push(m+, n)
   (b) Else set JBn := max(JBn, F(m, n))
   (c) If F(m, n) > S(m, n+), push(m, n+)
   (d) Else set MBm := max(MBm, F(m, n))
   (e) If the QUEUE is not empty, pop(m, n)
   (f) Else return p̄
   (g) Set the displacement Δt := max(F(m, n−), F(m−, n)) − S(m, n)
   (h) Set S(m, n) := S(m, n) + Δt and F(m, n) := F(m, n) + Δt
   (i) Set p̄ := max(JBn, F(m, n))
   (j) Set QSmn := QSmn − 1
[End of Algorithm]
Algorithm 5.2 Algorithm to identify a gap

function idgap(p, m, n)

Assume Qp(m, n) is the index of job n on machine m of individual p. gmn denotes the gap before the operation (m, n). JBn and MBm represent the instants of availability of job n and machine m.

1. Set h := Qp(m, n), MBm := 0 and k := 1
2. Repeat until k > h
   (a) Set u := |MBm − JBn|
   (b) Set v := gmn − T(m, n) − u
   (c) If v ≥ 0, call function gapred(p, m, n)
   (d) Else if |v| ≤ u × ψ, call function sgapred(p, m, n)
   (e) Set MBm := MBm + T(m, n) + gmn
   (f) Set k := k + 1
[End of Algorithm]

gap where we wish to fit a task with operation time T(m, n). By (m, n) we mean the operation of job n that will be processed on machine m. We further assume that ψ is the tolerance limit, which varies between 0 and 1. The necessary condition for applying SGR can be defined as:

g(time) ≥ (1 − ψ) T(m, n)    (5.1)
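The acceptance condition of Equation 5.1 can be expressed as a small predicate. The sketch below is illustrative only; the function and variable names are our own and do not appear in the algorithm listings.

```python
def fits_with_tolerance(gap_length, op_time, psi):
    """SGR acceptance test (Equation 5.1): a task of duration op_time may
    be placed in a gap of length gap_length if the gap covers at least a
    (1 - psi) fraction of the operation time. The shortfall, if any, is
    absorbed by shifting the tasks to the right of the gap."""
    return gap_length >= (1.0 - psi) * op_time
```

For example, with ψ = 0.2 a 10-unit task is accepted into a gap of 8 units or more, whereas with ψ = 0 (plain gap reduction) the gap must cover the full operation time.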
For a better understanding of how SGR works, we present the steps of SGR in Algorithm 5.1, and the steps for identifying the gaps in Algorithm 5.2. These two algorithms show the way of fitting an operation (m, n) into a gap. In the case of machine unavailability, where the information is known well in advance due to planned preventive maintenance, the unavailable machines can easily be blocked, for their entire maintenance durations, from any assignments when generating the schedule. However, for a sudden breakdown, it is required
to re-optimize the affected tasks for the remaining operations from the point of breakdown. The situation becomes complex when there are multiple breakdowns.
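The gap-identification step on which Algorithm 5.2 relies can be made concrete with a short sketch. This is an illustration under an assumed data layout (per-machine lists of (start, duration) pairs), not the thesis implementation.

```python
def find_gaps(tasks):
    """Return the idle gaps on one machine as (gap_start, gap_length)
    pairs. `tasks` is a list of (start, duration) tuples; they are sorted
    by start time before scanning."""
    gaps, t = [], 0
    for start, dur in sorted(tasks):
        if start > t:                 # machine idle between t and start
            gaps.append((t, start - t))
        t = max(t, start + dur)
    return gaps
```

Each reported gap is a candidate slot into which SGR may move a task from the right, subject to the precedence constraints and the tolerance test of Equation 5.1.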
5.3.2 Reactive Scheduling
The most obvious issue of a breakdown is that the broken machine cannot be used until it is either repaired or replaced. In JSSPs, these tasks are related. If any task is incomplete due to a broken machine, the task must wait for a certain period of time. In reactive scheduling, the right-shifting strategy is normally applied to the affected tasks to generate a revised schedule. We instead apply the SGR, which identifies possible gaps before pushing the tasks towards the right. The reactive scheduling is explained in Algorithm 5.3. Here, we make the following assumptions and definitions.

• We classify any task finished before the breakdown occurrence as completed. The completed tasks do not need to be considered in the reactive scheduling or rescheduling.

• Any task that needs to be relocated due to the interruption is classified as affected. The set of affected tasks is generated based on the precedence relationships of the tasks. At any instance, if the shifting of a task (because of a breakdown) does not affect its successor task, then the successor task is not considered as affected.

• Reactive schedules consist of the affected tasks, which begin from a revised starting time on each machine. The starting times are calculated from the finishing times of the completed tasks.

In this section, we represent a breakdown scenario by Λ(m̂, t′, r′), which denotes that machine m̂ breaks down at time t′ and needs r′ units of time to be recovered. A breakdown instance is generated randomly. We use a uniform distribution to select m̂, a Poisson distribution for t′, and an exponential distribution for r′, which generates breakdown scenarios similar to those that commonly
Algorithm 5.3 Algorithm to recover from a machine breakdown

function reactivesch(p)

Assume t′ and r′ are the instant and duration of a breakdown event Λ(m̂, t′, r′), where m̂ is the broken machine. at is the set and number of affected tasks on each machine, which forms the sub-solution p̃. rt is the set of revised starting times of the first tasks on each machine.

1. Set m := 1
2. Repeat until m = M
   (a) Set n := 1 and rtn := 0
   (b) Repeat until n = N or rtn ≥ t′
       i. Set rtn := cmn
       ii. Set n := n + 1
   (c) Build p̃m with the affected tasks starting from (m, n)
   (d) Set atm := (N − n)
   (e) Set m := m + 1
3. Call function sgapred(p̃, m, n) for all (m, n) in the revised solution p̃
[End of Algorithm]

occur with practical breakdowns. For multiple breakdowns, we divide the timeline into several segments and generate the breakdown instances in those segments.
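The scenario generation described above can be sketched as follows. The function name and parameters are illustrative, and one simplification is made explicit in the comments: since Poisson arrival counts correspond to exponentially distributed inter-arrival times, the event time within each segment is drawn here with `random.expovariate`; repair durations are not clipped to the segment length in this sketch.

```python
import random

def breakdown_scenarios(num_machines, horizon, k, mean_offset, mean_repair):
    """Generate k breakdown events (machine, time, repair), one per
    segment of the planning horizon, so that the event start times do
    not overlap: a uniform choice of machine, an exponential offset into
    the segment (the inter-arrival view of a Poisson process), and an
    exponential repair duration. All parameter names are illustrative."""
    seg = horizon / k
    events = []
    for i in range(k):
        machine = random.randrange(num_machines)            # uniform choice
        offset = min(random.expovariate(1.0 / mean_offset), seg - 1)
        t = i * seg + offset                                # event time in segment i
        repair = random.expovariate(1.0 / mean_repair)      # exponential duration
        events.append((machine, t, repair))
    return events
```

For a single breakdown, k = 1 reduces this to one event drawn over the whole horizon, matching the one-breakdown experiments described later in the chapter.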
5.4 Experimental Study
As indicated earlier, we use a job-pair relation-based representation, tournament selection, two-point crossover, and bit-flip mutation. We rank the individuals on the basis of their fitness so as to apply different selection pressure on different portions of the population. In the tournament selection, we define the top 15% of the population as the elite class. We select one individual from the elite class and two from the rest. The two individuals selected from the rest then play a tournament. Crossover is then performed between the winner and the elite individual. In
every generation, we apply elitism to pass the best individual to the next generation. JSSPs usually require a high population size; for example, Pezzella et al. (2008) used a population size of 5000 merely to solve 10 × 10 JSSPs. We set the population size to 2500 and the number of generations to 1000. A higher number of generations is of little use if the diversity is not controlled in a proper way; in such cases, the solutions converge to a local minimum after a few generations. From our initial set of experiments, we have identified the best set of parameters, which we use in our other experiments. We consider machine breakdown and machine unavailability as interruptions in the classical JSSPs. First, we ran experiments without considering any machine unavailability or breakdowns, so as to judge the quality of our proposed SGR algorithm. We then performed two different sets of experiments, considering different interruption scenarios as follows.

• Breakdown Scenario (GA-SGR(MB)): We solve the JSSP using GA-SGR under ideal conditions. We generate a number of interruption scenarios Λ, using the different statistical distributions discussed in Section 5.3.2. We introduce these breakdowns to the best solution, and apply the SGR algorithm alone to re-optimize the affected sub-solution.

• Unavailability Scenario (GA-SGR(MU)): Assuming that Λ is known in advance, we implement GA-SGR and evaluate every individual considering the machine unavailability. Here we simply block the unavailable machines until they are available.

In all cases, we run the experiments 30 times independently with different random seeds. In both cases, we use the same environment to observe how GA-SGR(MB) differs from GA-SGR(MU) under similar circumstances. We consider up to four breakdowns and use 100 different breakdown scenarios. For multiple breakdowns, we divide the solutions into an equal number of segments.
We generate the Λ(∗, ∗, ∗) in different segments to measure the impact of the instances of a breakdown.
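The ranked tournament selection described at the start of this section can be sketched as follows. This is a simplified illustration of the scheme in the text, under the assumption that individuals are (fitness, genome) pairs with lower makespan being better; the function and parameter names are our own.

```python
import random

def select_parents(population, elite_frac=0.15):
    """Select a crossover pair: one parent from the elite class (top 15%
    of the population by fitness) and one winner of a tournament between
    two individuals drawn from the rest, as described in the text."""
    ranked = sorted(population, key=lambda ind: ind[0])
    cut = max(1, int(len(ranked) * elite_frac))
    elite = random.choice(ranked[:cut])            # one from the elite class
    a, b = random.sample(ranked[cut:], 2)          # two from the rest
    winner = a if a[0] <= b[0] else b              # tournament between the two
    return elite, winner
```

Pairing an elite parent with a tournament winner applies stronger selection pressure to the top of the ranking while still letting the rest of the population contribute genetic material.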
In this research, we consider five benchmark problems from the “la” series proposed by Lawrence (1985). We choose la36–la40, each of size 15×15, as they are relatively large and of higher complexity compared to the other “la” instances, as experienced by many other algorithms.
5.5 Results and Discussion
We first discuss the results for JSSPs under ideal conditions, and then the results under the interruption scenarios. In these experiments, we use the parameters discussed in the earlier section and the reproduction parameters found from the experiments described in Chapter 4.
5.5.1 JSSPs under Ideal Conditions
We report the detailed results of the experimentation with GA-SGR under ideal conditions (i.e., without any interruption) in Tables 5.1–5.3. Table 5.1(a) shows the best makespan obtained by GA-SGR from 30 independent runs. We varied the tolerance level ψ from 0.00 to 0.25 with an increment of 0.05. Tolerance 0.00 is identical to MA(GR), considered in our earlier work, as it does not allow any shifting. Tables 5.1(b) and (c) present the average fitness and the number of evaluations performed (in thousands), respectively. All the tables start with a column of the problem instances. The following six columns in Table 5.1(a) show the best makespans produced by our experiments for tolerances ψ of 0.00, 0.05, 0.10, 0.15, 0.20, and 0.25, respectively. The last column represents the best makespan of each row or test problem. In Tables 5.1(b)–(c), the last six columns represent the average makespan and the average number of evaluations required for the runs discussed in Table 5.1(a). From Tables 5.1(a)–(c), interestingly, one can see that the average solution quality improves with the increase of ψ. We observe from Table 5.1(a) that the algorithm gives the minimum average of makespans when the tolerance is set to
Table 5.1: Best results with varying tolerance

(a) Best Makespan
                         Tolerance (ψ)
Prob.   0.00    0.05    0.10    0.15    0.20    0.25    Best Found
la36    1307    1302    1305    1292    1297    1298    1292
la37    1442    1443    1434    1442    1436    1442    1434
la38    1266    1258    1258    1252    1251    1249    1249
la39    1252    1258    1253    1253    1251    1256    1251
la40    1252    1252    1252    1252    1252    1251    1251
Avg.    1303.8  1302.6  1300.4  1298.2  1297.4  1299.2  1295.4

(b) Average Makespan
                         Tolerance (ψ)
Prob.   0.00     0.05     0.10     0.15     0.20     0.25
la36    1352.21  1347.15  1340.66  1339.31  1335.76  1334.97
la37    1509.13  1499.60  1496.23  1489.86  1489.33  1487.22
la38    1348.52  1346.82  1340.19  1333.21  1329.46  1327.86
la39    1327.01  1321.98  1318.16  1316.97  1311.90  1308.40
la40    1301.52  1299.94  1297.00  1293.50  1293.67  1292.60
Avg.    1367.68  1363.10  1358.45  1354.57  1352.02  1350.21

(c) Average Evaluation (×10³)
                         Tolerance (ψ)
Prob.   0.00    0.05    0.10    0.15    0.20    0.25
la36    210.00  238.25  132.75  180.00  110.81  150.75
la37    271.31  124.44  112.31  205.19  136.69  297.19
la38    229.88  194.13  307.44  173.31  219.94  225.75
la39    158.75  279.81  175.44  150.00  171.13  125.06
la40    145.69  168.13  229.44  109.75  263.25  135.19
Avg.    203.13  200.95  191.48  163.65  180.36  186.79
Table 5.2: Comparison of average relative percentage deviations from the best available result in the literature
(Columns: SGR = proposed; H09 = Hasan et al. 2009; S-I, S-II = Aarts et al. 1994; O04 = Ombuki et al. 2004; PGA, SB(40), SB(60) = Dorndorf and Pesch 1995; B01 = Binato et al. 2001; S92 = Storer et al. 1992; SB-I, SB-II = Adams et al. 1988)

Prob.  BKS   SGR    H09    S-I     S-II   O04     PGA    SB(40) SB(60) B01    S92    SB-I   SB-II
la36   1268  1.893  3.076  4.416   3.391  7.098   8.281  3.864  3.864  5.205  2.918  6.546  2.918
la37   1397  2.649  3.221  8.590   4.295  8.590   7.230  6.228  3.508  4.295  4.366  6.299  1.861
la38   1196  4.431  5.853  13.880  5.936  13.880  8.361  4.599  3.763  5.936  3.595  7.023  4.933
la39   1233  1.460  1.541  12.814  4.623  12.814  9.570  3.974  3.569  4.623  2.028  7.137  3.244
la40   1222  2.373  2.455  8.265   3.028  8.265   8.101  4.255  2.455  3.028  2.946  8.511  3.846
Avg.   —     2.561  3.229  9.593   4.255  10.129  8.309  4.584  3.432  4.617  3.171  7.103  3.360
0.2. The computation required is also affected by the increase of the tolerance beyond a certain level, as reflected in the results tabulated in Table 5.1(c). We see that the number of fitness evaluations gradually decreases up to ψ = 0.15, and then increases again. The results in Table 5.1(b) indicate that the idea of shifted gap-reduction not only helps to find a better solution, but also helps to obtain a higher-quality population. We obtain an improved average solution in all cases of GA-SGR, compared to the gap reduction without shifting. To justify the performance of our proposed GA-SGR, we compare the results with a few other GA-based methods and other heuristics published in the literature. We compare the relative deviations measured from the best results found in the literature, as tabulated in Table 5.2. The relative deviation of each problem is calculated using Equation 5.2:

%RD = (f − F)/F × 100%    (5.2)

where f is the obtained makespan and F is the reference fitness, which is the best known fitness found for the problems in Table 5.2. The second column of Table 5.2 contains the best known solution. From the third column onward, the percentage relative deviations obtained by the other algorithms reported in the literature are presented. The average percentage relative deviation (%ARD) from Table 5.2 is 2.561% for our algorithm, which means that our algorithm clearly dominates the other algorithms compared in this chapter for the benchmark problems considered. To show the relationship between the tolerance level and the number of gaps utilised for each test problem while implementing SGR, we report in Table 5.3 the average number of gaps used per evaluation, with the tolerance level varied from 0.00 to 0.25. It is clear that the average number of gaps increases with the increase of the tolerance level. Although higher levels of tolerance increase the computation required, they also improve the solution quality. The last row of Table 5.3 gives the average number of gaps for each tolerance level.
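Equation 5.2 is straightforward to compute. The sketch below uses illustrative numbers (the example makespans are hypothetical, not values taken from Table 5.2):

```python
def relative_deviation(makespan, best_known):
    """Percentage relative deviation (%RD) of a makespan from the
    best-known solution, as in Equation 5.2."""
    return (makespan - best_known) / best_known * 100.0

def average_relative_deviation(pairs):
    """%ARD over a list of (makespan, best_known) pairs, i.e. the mean
    of the per-problem relative deviations."""
    return sum(relative_deviation(m, b) for m, b in pairs) / len(pairs)
```

For example, a makespan of 1305 against a best-known value of 1268 deviates by about 2.92%, and matching the best-known value gives a deviation of exactly zero.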
Table 5.3: Average number of gaps utilized versus the tolerance set

                     Tolerance (ψ)
Prob.   0.00   0.05   0.10   0.15   0.20   0.25
la36    7.33   7.82   8.36   8.86   9.56   10.23
la37    6.88   7.39   7.87   8.31   9.07   10.41
la38    7.75   8.27   8.83   9.53   10.37  12.03
la39    6.39   6.86   7.27   7.76   8.48   9.04
la40    7.16   7.67   8.25   8.78   9.74   10.75
Avg.    7.09   7.60   8.12   8.65   9.44   10.49

5.5.2 JSSPs with Machine Breakdowns
First, we ran the experiments while varying the number of breakdowns from 1 to 4 for each test problem. For one breakdown, we choose a machine, a breakdown start time and a repair duration using the distributions discussed earlier. For the two-breakdown case, we choose two different machines, two non-overlapping breakdown start times (one for each machine) and their durations. Similarly, three and four non-overlapping breakdowns are generated for three and four machines, respectively. The downtime of the single breakdown is assumed to be longer than each machine's downtime in the two-breakdown case; the downtime of each machine in the two-breakdown case is, in turn, longer than the average downtime of each machine in the three-breakdown case, and similarly between the three- and four-breakdown cases. The relative amounts of the breakdowns can be found in Figure 5.2. We take 100 different scenarios for each of the experiments. In the case of breakdowns, we pick the best solution from GA-SGR, introduce breakdowns and produce the reactive schedule. In the reactive scheduling step, we replace the traditional right shifting with our proposed SGR to re-optimize the affected sub-solution. To measure the contribution of the proposed approach, we compare the outcome with that of right shifting. We present the
results in Table 5.4. In order to judge the algorithm's performance in terms of individual breakdown duration and frequency, we assume that the total breakdown duration is nearly equal in all of the breakdown cases. We apply the right shift (RSH) and SGR separately to generate the solution after a breakdown. In the top row of Table 5.4, the best makespan found using our algorithm under ideal conditions is provided with the problem number. The first column of each problem contains the average breakdown duration. For multiple breakdowns, we introduce the first breakdown and re-optimize the solution; then we introduce the second breakdown, and so on. To show the effect of multiple breakdowns, the revised makespans after each and every breakdown are also provided for those cases. The columns labeled RSH show the average makespan of 100 independent breakdown scenarios after applying the traditional right-shifting; the columns next to RSH show the same measurement with the SGR. Our proposed reactive scheduling approach clearly dominates the traditional right-shifting in all cases. For the two-machine breakdowns, SGR gives a 6.530% better result than RSH, which is the maximum among the four breakdown cases considered in this chapter. The improvements for the other three cases are 6.099%, 5.349%, and 3.449% for the one-, three-, and four-machine breakdowns, respectively. Some of the solutions under ideal conditions, after applying RSH, and after applying SGR, are presented graphically in Appendix A. From our experiments, we have observed that the location of breakdowns within the planning horizon influences the makespan of the re-optimized schedule. If a breakdown occurs during the early stage of a solution, the affected tasks can be rescheduled with only a small increase in makespan. We plot the average downtime against the solution quality in Figure 5.2. The figure contains four different groups of scattered values, one for each number of breakdowns.
The X-axis shows the %ARD, while the Y-axis represents the downtime. For ease of comparison, the total length of downtime for any schedule is considered to be approximately equal. This means that the length of downtime in the one-breakdown case is approximately equal to the sum of all of the breakdowns
Figure 5.1: Average relative deviation of RSH and SGR from the best results found by GA-SGR
Figure 5.2: Comparison of average relative deviation vs. the average downtime of each breakdown event
Table 5.4: Comparison of the average makespan of 100 breakdown scenarios for the right-shifted (RSH) and gap-reduced (SGR) reactive scheduling

la36 (1292)
  1 breakdown   1st break   dur 479.6   RSH 1666.9   SGR 1587.9
  2 breakdowns  1st break   dur 238.9   RSH 1475.2   SGR 1431.2
                2nd break   dur 237.6   RSH 1618.7   SGR 1538.3
                total dur.      476.5
  3 breakdowns  1st break   dur 159.5   RSH 1399.0   SGR 1376.9
                2nd break   dur 158.7   RSH 1481.8   SGR 1445.3
                3rd break   dur 158.4   RSH 1571.4   SGR 1510.6
                total dur.      476.5
  4 breakdowns  1st break   dur 103.5   RSH 1369.9   SGR 1358.6
                2nd break   dur 103.4   RSH 1415.5   SGR 1404.6
                3rd break   dur 104.3   RSH 1472.7   SGR 1444.7
                4th break   dur 104.7   RSH 1535.2   SGR 1493.5
                total dur.      415.8

la37 (1434)
  1 breakdown   1st break   dur 412.5   RSH 1881.5   SGR 1778.7
  2 breakdowns  1st break   dur 209.4   RSH 1635.2   SGR 1575.4
                2nd break   dur 210.1   RSH 1807.7   SGR 1703.7
                total dur.      419.5
  3 breakdowns  1st break   dur 138.9   RSH 1552.7   SGR 1529.6
                2nd break   dur 137.1   RSH 1662.8   SGR 1600.6
                3rd break   dur 139.7   RSH 1764.1   SGR 1673.3
                total dur.      415.7
  4 breakdowns  1st break   dur 104.3   RSH 1510.4   SGR 1501.3
                2nd break   dur 103.4   RSH 1571.3   SGR 1550.7
                3rd break   dur 104.2   RSH 1635.5   SGR 1598.0
                4th break   dur 104.4   RSH 1680.0   SGR 1632.7
                total dur.      416.3

la38 (1249)
  1 breakdown   1st break   dur 420.5   RSH 1637.6   SGR 1562.7
  2 breakdowns  1st break   dur 209.0   RSH 1437.7   SGR 1404.6
                2nd break   dur 207.5   RSH 1597.8   SGR 1499.0
                total dur.      416.6
  3 breakdowns  1st break   dur 137.6   RSH 1359.6   SGR 1343.5
                2nd break   dur 139.1   RSH 1448.7   SGR 1412.0
                3rd break   dur 140.2   RSH 1541.1   SGR 1475.7
                total dur.      416.9
  4 breakdowns  1st break   dur 104.1   RSH 1333.5   SGR 1322.0
                2nd break   dur 104.1   RSH 1380.4   SGR 1363.9
                3rd break   dur 103.9   RSH 1452.7   SGR 1411.9
                4th break   dur 104.9   RSH 1500.3   SGR 1446.1
                total dur.      417.0

la39 (1251)
  1 breakdown   1st break   dur 420.1   RSH 1646.2   SGR 1582.4
  2 breakdowns  1st break   dur 208.1   RSH 1425.9   SGR 1391.7
                2nd break   dur 209.3   RSH 1575.9   SGR 1512.5
                total dur.      417.4
  3 breakdowns  1st break   dur 138.6   RSH 1359.6   SGR 1343.5
                2nd break   dur 139.9   RSH 1448.7   SGR 1412.0
                3rd break   dur 140.6   RSH 1541.1   SGR 1475.7
                total dur.      419.1
  4 breakdowns  1st break   dur 104.5   RSH 1330.6   SGR 1324.2
                2nd break   dur 102.8   RSH 1387.7   SGR 1373.0
                3rd break   dur 103.4   RSH 1436.3   SGR 1418.7
                4th break   dur 103.2   RSH 1499.1   SGR 1464.7
                total dur.      413.9

la40 (1251)
  1 breakdown   1st break   dur 430.1   RSH 1644.8   SGR 1568.4
  2 breakdowns  1st break   dur 217.0   RSH 1441.3   SGR 1391.6
                2nd break   dur 214.9   RSH 1595.6   SGR 1518.1
                total dur.      431.9
  3 breakdowns  1st break   dur 144.6   RSH 1359.6   SGR 1343.5
                2nd break   dur 142.2   RSH 1448.7   SGR 1412.0
                3rd break   dur 142.7   RSH 1541.1   SGR 1475.7
                total dur.      429.6
  4 breakdowns  1st break   dur 103.6   RSH 1338.6   SGR 1319.0
                2nd break   dur 104.9   RSH 1399.1   SGR 1368.0
                3rd break   dur 104.3   RSH 1449.1   SGR 1412.2
                4th break   dur 104.0   RSH 1507.8   SGR 1462.5
                total dur.      416.7
Table 5.5: %ARD of the best makespans considering machine breakdown (MB) and machine unavailability (MU) compared to that of GA-SGR

        1-break        2-break        3-break        4-break
Prob.   MB     MU      MB     MU      MB     MU      MB     MU
la36    22.90  14.12   19.07  11.22   16.92  9.21    15.60  8.54
la37    24.03  15.48   18.80  9.44    16.69  8.80    13.86  7.25
la38    25.12  16.72   20.01  11.27   18.15  10.73   15.78  8.59
la39    26.49  19.23   20.90  12.54   18.79  11.33   17.08  11.05
la40    25.37  17.58   21.35  11.82   18.88  10.40   16.90  9.30
Avg.    24.78  16.63   20.03  11.26   17.88  10.09   15.84  8.95
in the four breakdown case. This is why the average downtime per breakdown is the longest in the case of the single breakdown, which can be seen in Figure 5.2. Interestingly, the solution quality has a strong relationship with the frequency of breakdowns when the duration of the total breakdown is fixed. As observed, multiple smaller breakdowns are better than one long breakdown for minimizing the loss due to breakdowns.
5.5.3 JSSPs with Machine Unavailability
In the case of machine unavailability, we impose unavailability as a constraint when generating schedules using GA-SGR. While evaluating the solutions, the unavailable periods of the machines are considered as forbidden time slots that cannot be used to schedule any task. As the information is known in advance, there is no need to consider the affected tasks. We present the %ARD of the best makespans found from GA-SGR(MU), with one to four machine unavailabilities, in Table 5.5. For a better understanding of the advantage of knowing the interruption information in advance, the corresponding results from GA-SGR(MB) are also listed in Table 5.5.
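Treating a known unavailability window as a forbidden time slot amounts to pushing a task past any window it would overlap. A minimal sketch follows; the data layout and function name are assumed for illustration, and the operations are taken to be non-preemptive, as in this chapter.

```python
def earliest_start(ready, duration, forbidden):
    """Earliest start time >= ready for a non-preemptive task of the
    given duration on a machine with known unavailability windows.
    `forbidden` is a list of (from_t, to_t) intervals, sorted by start
    time and non-overlapping; the task must not overlap any of them."""
    t = ready
    for lo, hi in forbidden:
        if t + duration <= lo:      # task finishes before this window opens
            break
        if t < hi:                  # overlap: push the task past the window
            t = hi
    return t
```

For example, a 5-unit task ready at time 0 on a machine that is down over (3, 10) cannot start before time 10, while a 3-unit task still fits before the window and starts at 0.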
We use the same scenarios for both GA-SGR(MB) and GA-SGR(MU) to make an exact comparison. From GA-SGR(MU), we have obtained an average of 8.95% deviation in the scenario with four breakdowns, which is 6.90% better than GA-SGR(MB) under the same scenarios. Overall, GA-SGR(MU) shows a 7.90% average improvement compared to GA-SGR(MB). This confirms that when the machine unavailability information is known in advance, the effect on the schedule is less than in the machine breakdown scenario.
5.6 Chapter Conclusion
Practical industrial problems involve various interruptions that make traditional JSSPs more complex. In this chapter, we have developed a genetic algorithm based approach for solving standard job-shop problems, combining a genetic algorithm with a new shifted gap-reduction heuristic. We show that the algorithm outperforms existing meta-heuristic algorithms when solving several difficult benchmark problems. We then extended the algorithm to consider machine breakdown and machine unavailability constraints. We observe that machine unavailability has less impact on the schedule than breakdowns do, since the information is known in advance. Another observation is that the loss in makespan is easier to minimize under multiple scattered (smaller) breakdowns than under a single continuous (longer) breakdown, even when the cumulative breakdown durations are nearly the same. We propose using shifted gap reduction, instead of right shifting, to minimize the effect of such interruptions. The algorithm can be extended to other interruptions, such as dynamic job arrival, changes of due dates, and machine inclusion or deletion.
Chapter 6

Decision Support System for Solving JSSPs

For ease of utilization of the algorithms, in this chapter we develop a decision support system (DSS) for solving JSSPs. The details of the algorithms used in the DSS can be found in Chapters 3–5. The different components of the DSS and their design aspects are briefly discussed, and the input parameters used in the different subsystems of the DSS are explained.
6.1 Introduction
As part of the implementation of a practical job-shop scheduling system, we have developed a DSS. The aim is to introduce interactivity and flexibility into the process of choosing the computational parameters, as well as into analyzing the outputs. As the users and planners of a job-shop scheduling scenario may not be experts in optimization, computer programming, or genetic algorithms, a decision support system helps them utilize the algorithms without needing to understand (or go through) the complex methodology involved. As models, we use our proposed algorithms described in the previous chapters, with a similar input/output structure. We represent the outputs both visually (as a Gantt chart) and numerically. We have designed
several dialogs to assist decision makers. The rest of this chapter is organized as follows. We provide a brief literature review on DSS in Section 6.2. We describe a standard DSS and its components in Section 6.3. We give a detailed description of our implemented decision support system, along with the input/output data standards, in Section 6.4. We conclude the chapter with a summary of contributions and future research directions in Section 6.5.
6.2 A Brief Literature Review
Numerous works have appeared on the development of DSSs for different versions of scheduling problems, especially industrial scheduling problems and JSSPs. In the early 1980s, Viviers (1983) developed a DSS to solve JSSPs. Besides an interactive user interface, it focused closely on management issues related to decision making, such as accepting or rejecting orders, subcontracting, increasing capacity or workload, breaking down jobs, assigning jobs to artisans, assigning due dates, and reassigning jobs if necessary. In the model based management system, the objectives were to minimize work-in-progress as well as to reduce lead time; improving customer satisfaction by attaining due dates was also considered. Later, Speranza and Woerlee (1991) worked to link the fields of DSS and Operations Research (OR), and discussed how DSS can take advantage of OR methodology. They focused on scheduling problems as one of the most challenging applications where both DSS and OR can be applied successfully. The authors raised several issues to justify the effectiveness of DSS over simplified models: dynamic decision making, which may change depending on the particular situation; the inability to capture the entire knowledge of a decision maker in the models; the presence of unusual circumstances such as political issues; the knowledge of users about the problems; and the preparation of complex data. These issues are common in real-life problems, but hard to include in models. In their models, they provided flexibility for users, including reassigning due dates and operation priorities, resetting the total number of
products to be produced, and moving an operation to a particular machine. They reported that their model is capable of adjusting overlapping conditions and prohibits invalid machine selection. More recently, McKay and Buzacott (2000) and McKay and Wiers (2003) developed computerized systems for solving scheduling problems, more precisely JSSPs and timetabling. The first work may not be treated as a DSS due to the absence of some necessary DSS components, such as database management; the authors implemented an interactive computer based interface for solving JSSPs. In their later work, they emphasized decision making functionalities, such as start-of-day routines and special periods (like Fridays, the last day of the month, etc.) for timetabling, and categorized their system as a DSS. Petrovic et al. (2007) developed a decision support tool for solving JSSPs that used a fuzzy-genetic model. Their main emphasis was also on model based management; they considered a multi-objective GA as the problem-solving model. Our main focus is likewise on the problem solving methodology and its relationship with the interactivity of the DSS. At the current stage, we consider benchmark problems, rather than real-life industrial problems, to better judge the system performance. Silva et al. (2006) developed a DSS for production planning in the mould industry. They combined a system model, a data model, and MAPP (Mould: Assistant Production Planner) to form the system. Data coming from the client are processed and stored in a DBMS, which is used by the application server; a web based client module is used to interface with the system. Kumar and Rajotia (2006) integrated a process plan generator, a DBMS, and a scheduler with their DSS. The job-scheduling operations are performed by the scheduler, while the process plan generator organizes different tools for generating an appropriate machine setup.
More generally, DSSs have been applied to numerous applications, including scheduling, as may be found in the surveys of Eom and Lee (1990) and Eom et al. (1998).
6.3 A Decision Support System
A DSS is a computer-based interactive system that supports decision makers by utilizing data and models. It solves problems with various degrees of structure and focuses on the effectiveness, rather than the efficiency, of the decision process (Eom and Lee 1990); this does not, however, preclude a DSS from employing efficient decision processes. More importantly, the computer programs need to be interactive, with options to change all the parameters, and need to carry out and present all the detailed scheduling information. For problems like JSSPs, a Gantt chart is preferable for representing the solutions graphically (Viviers 1983).
6.3.1 Common DSS Attributes
The major focus of a decision support system lies on letting end users use a complex model effectively in the simplest way. A DSS presents a friendly environment in which the user/manager can effectively apply their decision making abilities. Some attributes which make a DSS meaningful are listed below (Marakas 2002):
• Employed in semi-structured or unstructured decision contexts
• Intended to support decision makers rather than replace them
• Supports all phases of the decision-making process
• Focuses on the effectiveness of the decision-making process rather than its efficiency
• Is under the control of the DSS user
• Uses underlying data and models
• Facilitates learning on the part of the decision maker
• Is interactive and user-friendly
• Is generally developed using an evolutionary, iterative process
• Provides support for all levels of management, from top executives to line managers
• Can provide support for multiple independent or interdependent decisions
• Provides support for individual, group, and team-based decision-making contexts
6.3.2 DSS Framework
A DSS is not a simple system with a fixed set of identifiable characteristics, nor is it a single autonomous system; numerous factors need to be considered when implementing one. Rather, a number of components together establish an effective decision making system. There is no fixed number of components for a DSS; it varies depending on the degree of direct influence of those components (Marakas 2002; Turban 1993, 2007). The most commonly described parts of a standard DSS framework are represented pictorially in Figure 6.1 and are described in this section.
Data Management System The data management component of a DSS is responsible for retrieving, storing, and organizing the relevant data. In addition, it provides security support, data integrity procedures, and related services, and offers data query and repository services to the end user. The three major sources belonging to the data management system are:
Internal Sources: Data stored in several places inside an organization, mostly containing information about employees, salaries, products, and so on. Information about internal resources can also be stored in these databases.
External Data: Data belonging to commercial databases, such as a government central database, a research center data bank, etc., are treated as external data sources. This data can also be available on different forms of magnetic/optical media.

Figure 6.1: Basic framework of a standard decision support system
Personal Data: Data which is personally stored by the DSS user/manager. This can be the estimated or targeted range of sales in a production industry.
Model Management System The model management system is the methodological subsystem responsible for analyzing, manipulating, and solving the problem based on the decision parameters. This subsystem uses the data collected by the data management system and applies different models to the input data to solve the problems. For example, in a cost minimization system for an industry, this subsystem gathers information about the products through the data management
systems to optimize the overall production cost. It uses the model of the production planning problem, fits the data, and solves it optimally to achieve the best result.
Knowledge Base System Decision making problem solvers are fruitless without reasoning. Reasoning is the process by which a single decision is derived from a combination of multiple decisions; even when a decision is unknown, it can be derived from existing knowledge. The knowledge base is where such knowledge is stored. Knowledge comprises rules, heuristics, variable boundaries, constraints, etc. This subsystem is designed to apply propositional logic to the stored knowledge to derive a decision.
User Interface The user interface (UI) is the surface through which data passes back and forth between the user and the computer. The interface is operated through the input devices and displays meaningful outputs. The UI is the only subsystem that gives the user a clear picture of the computational aspects of the entire system. It is mainly a hardware/software layer that enables the dialogs and pre-processes the information.
6.4 Implementation of DSS
We have developed a standard decision support system to evaluate management decisions and to execute the appropriate algorithm for solving simple JSSPs. We consider three main subsystems in our implementation: data management, model management, and dialog management (Turban 2007). These three components interact with each other by sharing resources, exchanging information and messages, passing feedback, etc. Moreover, the whole management system interacts with the user interface (UI) to process the input and present the output. The flow diagram is presented in Figure 6.2.
Figure 6.2: Internal structure of the implemented decision support system
6.4.1 Data Base Management Subsystem (DBMS)
The DBMS is mainly the input processing subsystem. Its major tasks are to reshape and simplify the incoming data. It handles
• the problem description, i.e. the sequence of operations and corresponding execution times;
• the reproduction parameters, i.e. crossover and mutation probabilities, along with other selection parameters;
• the stopping criteria, i.e. the maximum allowable number of generations, a particular delivery time, or a specific makespan at which to stop the iterative process;
• information about machine unavailabilities, i.e. the instance, duration, and the corresponding machine.
The purposes of the incoming data are elaborated in Section 6.4.4.2. The DBMS shares the processed inputs with the model management subsystem so that a selected model can execute successfully, and it handles further requests for input data by any other unit. It is connected to the UI through the DGMS for acquiring inputs from the interactive user interface: users load the necessary data through the UI, and it is finally captured by the DBMS.
Input Parameters Standard The *.inf files, which are used to set the algorithm parameters, are configured with the values of different variables and flags. Each line contains the name and value of a parameter using the pattern Variable Name: #Value. A sample file is given in Table 6.1. We set a flag for each of our proposed rules; in the case of MA(GR-RS), both the GR and RS flags are set to 1. There are three flags for the three different stopping criteria: generation, makespan, and time. In the case of makespan, if it is not achieved, the program stops after the given number of generations. The file also
Table 6.1: A sample input file for DSS
Population: 50
Generation: 100
Crossover: 0.65
Mutation: 0.15
GR: 1
PR: 0
RS: 1
SGR: 0
TGen: 1
TMS: 0
TTime: 0
StCriteria: 100
MBreak: 0
MUnavail: 1 3 45 56
contains the size of the GA population and the number of generations, along with the crossover and mutation probabilities. A value is set for the stopping criteria to assign the terminating condition. In the case of machine breakdown/unavailability, the first value contains the number of events; each event consists of the interrupted machine î, the instance t, and the recovery time r, as described in Chapter 5.
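The one-parameter-per-line format described above can be read with a few lines of Python. This is a sketch under the assumption that each line holds one `Name: value [value ...]` pair, with multi-value fields such as MUnavail split on whitespace; it is not the DSS source code.

```python
def parse_inf(text):
    """Parse a DSS parameter file: one 'Name: value [value ...]' per line.
    Single values become int/float; multi-value lines become lists of ints."""
    params = {}
    for line in text.strip().splitlines():
        name, _, rest = line.partition(":")
        values = rest.replace("#", "").split()
        if len(values) == 1:
            v = values[0]
            params[name.strip()] = float(v) if "." in v else int(v)
        else:
            params[name.strip()] = [int(v) for v in values]
    return params

sample = """Population: 50
Crossover: 0.65
MUnavail: 1 3 45 56"""
cfg = parse_inf(sample)
print(cfg["Crossover"])   # -> 0.65
print(cfg["MUnavail"])    # -> [1, 3, 45, 56]
```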
6.4.2 Model Base Management Subsystem (MBMS)
The MBMS deals with the different kinds of algorithms, which can be treated as models. We have developed four different algorithms, which act as four different models. The MBMS controls the operations of those models and facilitates them by ensuring appropriate support from the other units. It also keeps the processed input data from the DBMS; the models use this data to execute and to generate the expected outputs. The MBMS needs the support of the UI for choosing a particular model, so the user/manager has the ability to select any model; in this sense, the unit is fully controllable from the UI.
6.4.3 Dialog Management Subsystem (DGMS)
The dialog component of a DSS is the hardware and software that provides the service of connectivity, between the user interface and other management systems. It also accommodates the user with a variety of input devices and stores input/output data. Sometimes the UI is also treated as a part of the DGMS (Turban 1993). According to Figure 6.2, the UI is virtually connected to all three management subsystems, as it is directly linked and controlled by the DGMS. It handles the accessibility of the dialogs in the user interface, and maintains the input and output data flows between the UI and the other subsystems.
6.4.4 User Interface (GUI)
The UI is the main graphical component controlling the interaction between the DSS and the end user or manager. It communicates directly with the DGMS and exchanges input and output data. As we implemented three different rules and developed three different algorithms using them, we provide the option to select any of these algorithms, as well as the traditional GA. For convenience, there is an option to load the algorithm parameters from an initialization file. Figure 6.2 represents the flow diagram of the complete system. The decision support system takes the input data in three different classes, through the DGMS, and then passes it to the DBMS. The DGMS finally processes the output and gives feedback to the user/manager. The output is generated in the form of values, as well as a Gantt chart.
6.4.4.1 Home
This option is for getting the simplified input parameters directly from the configuration file. All of the necessary algorithm parameters can be stored in files using a specific input format for the purpose of quickly selecting parameters. This tab contains two input buttons. The CONFIG button loads *.ini files to initialize the algorithm parameters, while the Inputs button loads the input files in *.txt format for operational sequences etc. The parameters can be reset to the default values using the RESET button.

Figure 6.3: Graphical user interface of the simple input panel of the decision support system
6.4.4.2 Input
This option allows users to choose the input parameters, including the algorithm and stopping criteria, interactively from the interface itself. More precisely, it gives a clear graphical view of each and every component of the algorithms. The manual input allows a user to insert all of the parameters of the algorithms by hand in cases where smaller amounts of data are used; this is obviously helpful for small-scale problems. Selecting manual input enables the edit box to be used to specify the total number of machines and jobs. Alternatively, the input data can be loaded from a file, which is much more convenient for large problems. As we initially implemented a traditional GA, and then later applied the priority rules, we kept the option for all four of the algorithms, including TGA. The two main parameters are the size of the population and the total number of generations; the number of generations is the default stopping criterion. The stopping criterion is essential for better management, as it allows a manager to specify when to stop the iterative process: after a certain number of generations, after a particular period of time, or after achieving a specific makespan. In the case of makespan, if it is not achieved within the maximum number of generations, the program stops; this avoids searching indefinitely for an infeasible makespan. There is also an option to set the reproduction parameters, i.e. the crossover and mutation probabilities, while elitism keeps the best solution of every generation unchanged in the next generation, ensuring the continuation of the best solution in every following generation.

Figure 6.4: Graphical user interface of the advanced input panel of the decision support system
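The interaction of the three stopping criteria (generation count, target makespan, wall-clock time) can be sketched as below. The function and parameter names are illustrative, not the DSS internals; the point is that the generation limit always bounds the run, so an unreachable target makespan cannot loop forever.

```python
def should_stop(gen, best_makespan, elapsed,
                max_gen, target_makespan=None, max_time=None):
    """Stop when the generation limit is hit, the target makespan is
    achieved, or the time budget is exhausted. The generation limit is
    always enforced, so an infeasible target makespan cannot run forever."""
    if gen >= max_gen:
        return True
    if target_makespan is not None and best_makespan <= target_makespan:
        return True
    if max_time is not None and elapsed >= max_time:
        return True
    return False

print(should_stop(100, 900, 5.0, max_gen=100))              # -> True (generation limit)
print(should_stop(40, 850, 5.0, 100, target_makespan=860))  # -> True (makespan reached)
```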
6.4.4.3 Output
The simplified output screen shows the necessary information to measure the quality of the solutions, as well as a graphical view of the best solution in the form of a Gantt chart. The computational results contain: the best makespan found, which is the solution with the minimum completion time; the average makespan of all the solutions; and the worst fitness, in the scale of unit time. Moreover, it also includes: the standard deviation of all the solutions, the total execution time taken to reach the current solution, the total number of fitness evaluations, and the cumulative idle time between consecutive operations. In the Gantt chart, each color represents the operations of a particular job, as specified in the legend. The chart contains M rows, where M is the total number of machines. Each row contains N different operations, represented by N colors, where N is the total number of jobs. Empty spaces between a pair of operations indicate idle time for that particular machine. The Gantt chart is the most effective way to visualize schedules in the time domain.

Figure 6.5: Graphical user interface of the output panel of the decision support system

Figure 6.6: Machine breakdown warning and option for reactive scheduling
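The Gantt-chart layout just described, one row per machine with per-job labels and blanks for idle time, can be approximated in plain text. The schedule format here, a list of (machine, job, start, end) tuples, is an assumption for illustration, not the DSS data structure.

```python
def text_gantt(schedule, num_machines, horizon):
    """Render a schedule as one text row per machine: each time unit shows
    the job number being processed, or '.' when the machine is idle."""
    rows = [["."] * horizon for _ in range(num_machines)]
    for machine, job, start, end in schedule:
        for t in range(start, end):
            rows[machine][t] = str(job)
    return ["m%d |%s" % (m, "".join(row)) for m, row in enumerate(rows)]

# Two machines, two jobs: job 1 then job 2 on m0, job 2 then job 1 on m1.
ops = [(0, 1, 0, 3), (0, 2, 5, 8), (1, 2, 0, 5), (1, 1, 5, 7)]
for line in text_gantt(ops, num_machines=2, horizon=8):
    print(line)
# m0 |111..222
# m1 |2222211.
```

The gap on m0 between t=3 and t=5 is exactly the per-machine idle time that the gap-reduction rules of Chapter 4 try to fill.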
Figure 6.7: Rescheduled solution after a machine breakdown

If machine breakdown information is provided to the system, the system gives a warning when a breakdown occurs. This message contains the necessary information (i.e. breakdown time, elapsed time, broken machine, etc.) to let the decision maker know about the interruption event, and offers an option to reschedule the remaining tasks from the current state. The message is displayed in Figure 6.6. Figure 6.7 shows the rescheduled solution after a breakdown of machine m4, in the form of a Gantt chart. The black task in the schedule represents the period when machine m4 is broken.
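The traditional right-shifting repair, against which the reactive rescheduling of Chapter 5 is compared, can be sketched as follows. Assumptions for illustration: tasks on the broken machine are (start, end) pairs in time order, and an interrupted task restarts from scratch after recovery (non-preemptive); this is a generic sketch, not the thesis implementation.

```python
def right_shift(tasks, breakdown_start, repair_time):
    """Right-shift repair on one machine: tasks finished before the
    breakdown are untouched; every task running at, or starting after,
    the breakdown is pushed to start no earlier than the recovery time,
    preserving the original task order."""
    recover = breakdown_start + repair_time
    repaired = []
    for start, end in tasks:
        if end <= breakdown_start:          # finished before the failure
            repaired.append((start, end))
        else:                               # affected: delay past recovery
            if repaired:                    # ...and past the previous task
                recover = max(recover, repaired[-1][1])
            new_start = max(start, recover)
            repaired.append((new_start, new_start + (end - start)))
    return repaired

# Breakdown at t=5 lasting 3 units: the running task restarts at t=8.
print(right_shift([(0, 4), (4, 9), (10, 12)], breakdown_start=5, repair_time=3))
# -> [(0, 4), (8, 13), (13, 15)]
```

Right-shifting never reuses the gaps it creates, which is exactly the inefficiency the shifted gap-reduction rule targets.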
6.5 Chapter Conclusion
This chapter describes a decision support system containing a set of models for solving job-shop scheduling problems using genetic and memetic algorithms. The system combines the models with the data management subsystem, which is controlled by a dialog-based user interface. This DSS gives users/managers full flexibility in using the simple genetic algorithm (discussed in Chapter 3), the memetic algorithms (discussed in Chapter 4), and the GA with interruptions (discussed in Chapter 5). It is quite useful for decision makers as it gives full control of the computational aspects. Moreover, it helps by reducing the cost of computation and/or execution when an appropriate decision is applied. The system can be extended with options to solve flexible job-shop scheduling problems, where jobs arrive or leave while the schedule is executing.
Chapter 7

Conclusion and Future Works

In this chapter, we briefly describe the research carried out in this thesis, discuss the findings and conclusions, and indicate some possible future research directions.
7.1 Conclusion
In this research, we have considered classical job-shop scheduling problems and their variations. First, we implemented a genetic algorithm for job-shop scheduling. Its performance was then improved by combining it with three priority rules, and further enhanced by modifying the key priority rule. These algorithms were later extended to solve job-shop scheduling problems with process interruptions due to machine unavailability and breakdown. The details of these developments, experimental results, and findings are briefly discussed below.
7.1.1 Genetic Algorithm for JSSPs
In Chapter 3, we have considered a set of classical job-shop scheduling problems and implemented a genetic algorithm for solving them. In the genetic algorithm, we have used a job-pair relationship based representation, two-point exchange crossover, and bit-flip mutation, with tournament selection to select the solutions for reproduction. The proposed GA reproduces the solutions until a set stopping criterion is satisfied, and elitism keeps the best solution of each generation unmodified. We have solved 40 benchmark problems from the well-known “la” series. For the test problems considered, the number of jobs varies from 10 to 30, while the number of machines varies from 5 to 15. The la-series problems form 8 groups, each consisting of 5 problems of similar size. We have obtained the optimal solutions for a total of 15 test problems. The algorithm can easily find quality solutions for smaller problems (i.e., those with fewer jobs and fewer machines). We have observed that the complexity of the problems depends on the job-to-machine ratio and the number of machines. When the job-to-machine ratio is high (e.g. 3 or higher), it is easy either to reach optimality or to obtain a reasonable quality solution quickly. As the size of the solution space grows with the number of machines, significantly increasing the number of machines enlarges the solution space. We faced the greatest difficulty when solving the problems with 15 jobs and 15 machines. Although the genetic algorithm is able to solve those problems within a reasonable period of time, the quality of the solutions is not acceptable for some problems, specifically the larger ones. The average deviation of makespan for the largest group of problems, compared to their known best, is as high as 11 percent; for individual problems, it is as high as 14.3%. This finding encouraged us to improve the algorithm and to design new algorithms for solving JSSPs.
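The tournament selection mentioned above can be sketched generically as follows; the population and makespan values are illustrative, and the function is a textbook sketch rather than the thesis implementation.

```python
import random

def tournament_select(population, fitness, k=2, rng=random):
    """Pick `k` random distinct individuals and return the one with the
    smallest fitness (here fitness is makespan, so lower is better)."""
    contenders = rng.sample(range(len(population)), k)
    best = min(contenders, key=lambda i: fitness[i])
    return population[best]

pop = ["s0", "s1", "s2", "s3"]        # candidate schedules (placeholders)
mks = [900, 850, 910, 870]            # their makespans
winner = tournament_select(pop, mks, k=2)
```

With k equal to the population size, the tournament degenerates to picking the global best; smaller k keeps selection pressure low and preserves diversity, which complements elitism carrying the best solution forward unchanged.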
7.1.2 Priority Rules for JSSPs
In Chapter 4, we have introduced three new priority rules and developed a hybrid genetic algorithm using them. These rules are known as partial reordering (PR), gap reduction (GR), and restricted swapping (RS). For any chromosome or solution, we have observed that the arrangement of tasks on the bottleneck machine, and the gaps between consecutive task pairs, control the
quality of solutions. Based on this information, the proposed priority rules are designed to improve the performance of a genetic algorithm based approach for solving JSSPs. We have solved the same benchmark problems considered in Chapter 3 using a similar representation, operators, and parameters. Our first algorithm with partial reordering (known as MA(PR)) achieved 16 optimal solutions out of 40 test problems and obtained improved solutions for the other problems compared to GA. The second proposed algorithm, with gap reduction (MA(GR)), achieved optimal solutions for 23 benchmark problems. When restricted swapping was added to the second algorithm (identified as MA(GR-RS)), it performed best among the three memetic algorithms, achieving 27 optimal solutions with a lower average number of fitness evaluations. Considering all 40 test problems, the average deviation per problem is only 0.97 percent, which is the best compared to six well-known algorithms published in the literature. We have also shown a detailed analysis of the reproduction parameters in Chapter 4. We varied the crossover and mutation rates within a certain range and examined the results, uncovering the fact that the algorithms work best for relatively high mutation and low crossover rates; the best crossover and mutation rates were 0.45 and 0.35 respectively. We have performed statistical significance testing for our algorithms at confidence levels of 95% and 99%. The test results show that our algorithms are significantly better than GA, and that MA(GR-RS) is the best among the three algorithms we developed. Finally, our experimental results confirmed that MA, as compared to GA, not only improves the quality of solutions, but also reduces the overall computational time.
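The significance testing referred to above is the standard two-sample Student's t-test. The pooled-variance t statistic can be computed as below; the sample values are illustrative, not experimental data from the thesis.

```python
import math

def t_statistic(a, b):
    """Two-sample Student's t statistic (equal-variance, pooled), as used
    when comparing makespan samples from two algorithms.  The statistic is
    then compared against the t distribution with len(a)+len(b)-2 degrees
    of freedom at the chosen confidence level (e.g. 95% or 99%)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Toy samples: algorithm A's makespans clearly below algorithm B's.
print(round(t_statistic([10, 12, 11], [14, 15, 16]), 2))  # -> -4.9
```

A large negative t (here well past typical critical values) indicates the first sample's mean is significantly lower, i.e. the first algorithm produces significantly shorter makespans.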
7.1.3 JSSPs with Interruptions
In Chapter 5, we have modified the gap-reduction rule, proposed earlier, into the shifted gap-reduction (SGR) rule to further improve the performance of our developed algorithm. In addition, we have considered machine breakdown and unavailability as process interruptions
in job-shop scheduling. We have introduced a shifting-based reactive scheduling technique to re-schedule the tasks affected by machine breakdowns, and revised the evaluation technique based on the proposed shifted gap-reduction so that it can better handle known machine unavailability. In this chapter, we have considered five test problems from the Lawrence series (la36–la40), which are relatively bigger and harder; each has 15 machines and 15 jobs. First, we solved the problems under ideal conditions using the SGR rule and found that the new algorithm is able to improve the quality of solutions as well as reduce the computation. As the SGR is controlled by a tolerance ψ, we performed experiments varying the tolerance within a certain range and arrived at a best value of 0.2. For JSSPs with machine breakdowns, we designed a reactive scheduling algorithm in conjunction with the SGR. We considered up to four non-overlapping breakdowns where the cumulative durations of the breakdowns were nearly equal, and experimented with 100 random breakdown scenarios generated by three different statistical distributions. The experiments were run for ψ = 0.2 and the results were compared with the traditional right-shifting technique, obtaining significantly improved results in all cases. We observed that the loss in makespan is easier to minimize under scattered multiple (smaller) breakdowns than under a single continuous (longer) breakdown, when the cumulative breakdown durations are the same. We applied the same breakdown scenarios while considering machine unavailabilities; here, the algorithm evaluates each individual taking the machine unavailabilities into account.
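The role of the tolerance ψ can be illustrated with a simplified sketch. Under one plausible reading (the precise SGR placement rule is defined in Chapter 5, and this helper is hypothetical), a task may be placed into a gap that is slightly too short, overrunning it by at most a ψ fraction of the task's duration, with the overrun absorbed by shifting the successor right.

```python
def find_gap(gaps, duration, psi=0.2):
    """Return the index of the first gap (start, end) that can host a task
    of `duration`, allowing it to overrun the gap by at most psi*duration
    (the overrun would shift the following task right).  Returns None if
    no gap qualifies.  With psi=0 this reduces to plain gap reduction."""
    for i, (start, end) in enumerate(gaps):
        if (end - start) + psi * duration >= duration:
            return i
    return None

gaps = [(0, 3), (10, 14)]
print(find_gap(gaps, duration=5))   # gap of 4 + 0.2*5 = 5 hosts it -> 1
print(find_gap(gaps, duration=8))   # no gap large enough -> None
```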
We have noticed that the effect of machine unavailability has less impact on the schedules, compared to the re-optimized schedules after machine breakdown, as the information is known in advance.
7.1.4 Decision Support System
In Chapter 6, we have developed a decision support system so that users can solve JSSPs without knowledge of, or insight into, the complex solution algorithms. The system combines the algorithms (which can be considered as models) with the data management subsystem, which is controlled by a dialog based user interface. The system facilitates the use of the genetic algorithm, the memetic algorithms, and the GAs with interruptions in a flexible manner, and assists users/managers in taking appropriate decisions depending on the situation. In the dialog based subsystem, we have provided the flexibility of choosing the appropriate algorithm, the reproduction parameters, the stopping criteria, information regarding machine unavailability and breakdowns, etc. The system can load the necessary parameters and input problems from a file, while manual input is also available. The dialog, model, and database subsystems are interlinked so that information can be exchanged among them to complete the scheduling process. The DSS is also designed to tackle interruptions: the system allows reactive scheduling in cases of machine breakdowns. We have designed a graphical output platform where the resultant schedule is displayed in the form of a Gantt chart; solutions are also stored in numerical form for analysis and reuse.
7.2
Future Research Directions
The contributions in this thesis open many avenues for improving the methodologies for solving job-shop scheduling problems. The algorithms considered in Chapters 3–5 can be tested on additional test problems. The rules proposed in Chapter 4 can be combined with other existing priority rules to improve their performance, and other interruptions can also be considered in the algorithm proposed in Chapter 5. In addition, some other aspects can be introduced in conjunction with our proposed techniques, as described below:
• In our research, we have considered non-delay scheduling, where a task is scheduled as soon as a time-slot becomes available. Some researchers have been carrying out research on delay scheduling (Goncalves et al. 2005), in which delays are applied to tasks that finish relatively earlier than the makespan. Our algorithms proposed in Chapter 4 can be revised to incorporate the concept of delay scheduling.
• We have developed the gap-reduction and shifted gap-reduction rules to reduce the machine delays (gaps) between consecutive task pairs in the solution, selecting tasks on a first-come first-processed basis. Other priority dispatching rules could be applied to select an appropriate task when more than one task is available to fill a gap.
• We have experimented with only the five larger and harder test problems for machine breakdown and unavailability. The other 35 test problems of the “la” series can also be tested under process interruptions.
• We have considered only machine breakdown and unavailability as process interruptions. In practice, other interruptions may occur, such as dynamic job arrivals, changes of job priorities, changes of due dates, etc. Our proposed reactive scheduling described in Chapter 5 can be modified to consider those interruption scenarios.
• We have designed our algorithms for single-objective job-shop scheduling problems where makespan is the only objective. In cases of independent job orders, delivery times/due dates become an issue, and a penalty may apply for early/late delivery. The algorithms can be revised to solve multi-objective problems by minimizing the makespan while also optimizing the earliness/lateness costs.
• In our research, we have assumed that each machine is capable of processing only one type of operation. The developed algorithm can be extended to consider machines capable of processing more than one type of task, although these tasks would still be processed in a non-parallel manner.
• In our research, we have considered negligible setup time and cost. Our algorithm can be extended to consider the setup time/cost of changing the job or task on a machine.
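As an illustration of the gap-filling step underlying the gap-reduction rules discussed above, the following is a minimal sketch (with hypothetical data) that places a task in the earliest idle gap on a machine that both fits the task's duration and respects the time its job becomes ready:

```python
# Sketch of the gap-filling idea behind gap reduction: scan a machine's
# busy intervals and return the earliest feasible start for a new task.
# Interval data are hypothetical.

def earliest_gap_start(busy, duration, ready):
    """busy: sorted list of (start, end) intervals on one machine."""
    t = ready
    for start, end in busy:
        if t + duration <= start:   # task fits in the gap before this interval
            return t
        t = max(t, end)             # otherwise skip past this busy interval
    return t                        # no gap fits: append at the end

busy = [(0, 30), (50, 60), (100, 130)]
print(earliest_gap_start(busy, duration=10, ready=25))   # -> 30
print(earliest_gap_start(busy, duration=25, ready=0))    # -> 60
```

In the first call the task slots into the idle gap [30, 50); in the second no early gap is wide enough until [60, 100). A different dispatching rule would change only which candidate task is offered to this routine, not the routine itself.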
Appendix A Gantt Charts
This appendix contains visual representations of four schedules. The schedules are presented in the form of Gantt charts, where each color represents a particular job labeled with its job number. Each row in a Gantt chart represents a machine: the bottom row stands for machine m1 and the top row for machine mM. The blocks colored black represent the breakdown periods. Each machine row ends with a positive number representing the finishing time of that machine; the maximum of these finishing times is the makespan. Each figure consists of three sub-figures: the first shows the solution under ideal conditions (no breakdown), the second the solution after traditional right-shifting, and the third the solution reactively scheduled using SGR.
• Figure A.1 shows the problem la36 with one machine breakdown.
• Figure A.2 shows the problem la37 with two machine breakdowns.
• Figure A.3 shows the problem la38 with three machine breakdowns.
• Figure A.4 shows the problem la39 with four machine breakdowns.
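The makespan read off each Gantt chart is simply the largest per-machine finishing time; a minimal sketch with hypothetical task data:

```python
# Makespan = max over machines of (start + duration) of the last task.
# The schedule below is hypothetical, not taken from the charts.

def makespan(machines):
    """machines: {name: list of (start, duration) tasks}."""
    return max(start + dur
               for tasks in machines.values()
               for start, dur in tasks)

machines = {"m1": [(0, 55), (83, 20)],
            "m2": [(0, 54), (75, 30)]}
print(makespan(machines))   # -> 105
```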
Figure A.1: la36 with 1 machine breakdown
(a) la36 – Standard solution
(b) la36 – Solution after Right-shifting
(c) la36 – Solution after SGR

Figure A.2: la37 with 2 machine breakdowns
(a) la37 – Standard solution
(b) la37 – Solution after Right-shifting
(c) la37 – Solution after SGR

Figure A.3: la38 with 3 machine breakdowns
(a) la38 – Standard solution
(b) la38 – Solution after Right-shifting
(c) la38 – Solution after SGR

Figure A.4: la39 with 4 machine breakdowns
(a) la39 – Standard solution
(b) la39 – Solution after Right-shifting
(c) la39 – Solution after SGR
Appendix B Sample Best Solutions
This appendix contains the numerical representation of the best solutions found using the algorithms proposed in this thesis. Solutions for the problems la01–la35 were obtained by MA(GR-RS) and the rest by GA-SGR. Each table consists of M rows and 2 × N columns. Each row represents the operations on one machine, labeled by the first column; the following columns alternately give the job number and the starting time of the corresponding operation. The fitness value of each solution is given in the table caption; a fitness shown in boldface indicates that the solution attained the optimal value. The contents of this appendix are arranged as follows:
• Tables B.1–B.5 show the solutions of la01 to la05.
• Tables B.6–B.7 show the solutions of la06 to la10.
• Tables B.8–B.12 show the solutions of la11 to la15.
• Tables B.13–B.17 show the solutions of la16 to la20.
• Tables B.18–B.20 show the solutions of la21 to la25.
• Tables B.21–B.25 show the solutions of la26 to la30.
• Tables B.26–B.30 show the solutions of la31 to la35.
• Tables B.31–B.35 show the solutions of la36 to la40.
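Rows in the tables below can be parsed mechanically; a minimal sketch, assuming a row string in the same layout as the tables (machine label followed by alternating job numbers and starting times):

```python
# Parse one machine row "mX  j s  j s  ..." into (job, start) pairs.
# The input string here is hypothetical but follows the table layout.

def parse_row(line):
    name, *nums = line.split()
    nums = list(map(int, nums))
    pairs = list(zip(nums[0::2], nums[1::2]))   # (job, start) pairs
    # starting times along one machine must be non-decreasing
    assert all(a[1] <= b[1] for a, b in zip(pairs, pairs[1:]))
    return name, pairs

name, ops = parse_row("m1  4 0  1 83  0 104  7 157")
print(name, ops)
```

The non-decreasing check is a cheap sanity test: operations listed left to right on one machine cannot start earlier than their predecessors.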
Table B.1: Solution of the problem la01 (666)
     j1 s1  j2 s2  j3 s3   j4 s4   j5 s5   j6 s6   j7 s7   j8 s8   j9 s9   j10 s10
m1   4 0    1 83   0 104   7 157   3 217   5 272   6 375   8 468   2 523   9 536
m2   5 0    0 54   3 75    8 152   6 201   4 288   7 307   2 348   1 390   9 461
m3   7 0    5 54   4 159   1 249   6 288   9 375   3 426   2 492   8 523   0 624
m4   6 0    2 69   8 108   4 125   1 159   9 211   5 364   7 426   3 492   0 569
m5   9 0    6 77   5 154   1 233   2 249   3 347   8 426   0 451   4 546   7 583
Table B.2: Solution of the problem la02 (655)
     j1 s1  j2 s2   j3 s3   j4 s4   j5 s5   j6 s6   j7 s7   j8 s8   j9 s9   j10 s10
m1   0 0    4 27    1 84    8 108   2 183   7 241   5 335   6 433   3 513   9 558
m2   2 0    5 72    6 139   1 151   7 169   3 232   9 308   0 345   4 376   8 422
m3   1 52   2 84    3 107   4 193   9 210   8 228   7 335   0 481   6 513   5 565
m4   0 20   4 107   6 155   1 174   2 255   9 354   7 433   8 483   5 538   3 565
m5   4 0    1 27    8 52    6 66    2 107   9 135   3 308   0 405   5 481   7 529
Table B.3: Solution of the problem la03 (597)
     j1 s1  j2 s2  j3 s3   j4 s4   j5 s5   j6 s6   j7 s7   j8 s8   j9 s9   j10 s10
m1   3 37   6 91   5 182   8 261   1 268   4 295   0 376   7 458   2 490   9 542
m2   0 0    1 23   7 61    9 145   3 228   5 290   4 379   8 440   6 479   2 545
m3   1 0    2 21   6 59    3 91    0 165   8 322   5 386   7 490   4 545   9 575
m4   6 0    2 59   8 268   3 322   1 379   4 440   5 508   0 542   7 580   9 588
m5   3 0    7 37   9 61    5 101   8 182   4 238   2 295   1 311   6 352   0 458
Table B.4: Solution of the problem la04 (590)
     j1 s1  j2 s2  j3 s3   j4 s4   j5 s5   j6 s6   j7 s7   j8 s8   j9 s9   j10 s10
m1   0 0    2 33   8 108   5 170   4 246   7 261   3 323   9 330   6 418   1 470
m2   1 0    2 19   4 33    7 78    8 170   9 294   6 309   5 397   3 450   0 581
m3   8 0    9 61   5 115   3 135   6 230   0 304   1 398   2 419   4 439   7 482
m4   1 19   5 30   2 108   4 121   7 166   9 235   8 294   3 384   0 398   6 490
m5   1 30   8 96   9 115   2 121   4 137   3 230   5 296   7 384   0 490   6 581
Table B.5: Solution of the problem la05 (593)
     j1 s1  j2 s2  j3 s3   j4 s4   j5 s5   j6 s6   j7 s7   j8 s8   j9 s9   j10 s10
m1   3 0    5 59   6 104   0 157   1 244   9 292   2 382   8 479   4 527   7 555
m2   0 0    2 72   5 118   6 228   3 265   8 311   7 369   4 424   9 448   1 471
m3   8 0    9 49   4 114   7 187   2 274   5 295   1 378   0 417   6 483   3 495
m4   5 0    1 28   3 63    2 118   9 138   6 157   8 228   4 311   7 336   0 483
m5   1 0    4 5    7 28    5 104   3 109   0 244   6 339   9 382   2 479   8 534
Table B.6: Solutions of the problems la06 (926), la07 (890), and la08 (863)
[table data not recoverable]
Table B.7: Solutions of the problems la09 (951) and la10 (958)
[table data not recoverable]
Table B.8: Solution of the problem la11 (1222)
     j1 s1    j2 s2    j3 s3     j4 s4     j5 s5     j6 s6     j7 s7     j8 s8     j9 s9     j10 s10
m1   1 0      6 21     10 114    2 209     9 221     4 317     14 400    15 487    17 507    18 540
m2   15 0     12 54    17 145    2 234     0 276     13 297    16 349    10 377    8 385     9 434
m3   0 0      3 34     13 100    12 145    19 204    2 276     5 360     11 404    17 540    9 548
m4   1 21     3 100    7 177     10 209    16 377    4 441     14 487    15 561    17 575    19 641
m5   16 0     7 33     19 116    18 197    5 281     8 360     14 385    11 394    4 404     6 441
     j11 s11  j12 s12  j13 s13   j14 s14   j15 s15   j16 s16   j17 s17   j18 s18   j19 s19   j20 s20
m1   11 609   0 670    5 723     8 815     12 859    3 905     16 960    13 1038   19 1066   7 1162
m2   19 509   1 587    7 658     11 699    4 708     6 790     18 877    3 960     5 1037    14 1091
m3   18 609   6 703    15 790    14 861    7 900     10 945    1 980     8 1006    16 1104   4 1141
m4   0 723    11 778   5 815     2 877     9 917     6 996     12 1065   8 1104    18 1121   13 1148
m5   15 518   3 561    13 640    12 683    2 742     9 840     10 917    1 945     17 961    0 1003
Table B.9: Solution of the problem la12 (1039)
     j1 s1    j2 s2    j3 s3     j4 s4     j5 s5     j6 s6     j7 s7     j8 s8     j9 s9     j10 s10
m1   9 0      8 19     10 26     6 89      0 180     12 262    13 288    15 363    17 443    7 465
m2   6 0      11 66    17 108    0 141     12 164    3 244     14 306    5 321     4 410     16 471
m3   13 0     18 39    16 103    19 165    8 183     10 247    15 311    7 354     5 410     3 499
m4   7 0      4 8      1 76      15 126    17 152    11 170    6 231     19 264    9 279     3 306
m5   2 0      13 39    9 61      7 101     1 126     18 167    19 231    6 264     11 284    3 363
     j11 s11  j12 s12  j13 s13   j14 s14   j15 s15   j16 s16   j17 s17   j18 s18   j19 s19   j20 s20
m1   16 507   18 570   1 659     5 677     2 756     3 808     4 862     14 943    11 955    19 1029
m2   2 507    1 559    8 588     7 627     15 711    13 772    18 816    19 912    9 950     10 1033
m3   4 573    2 603    6 641     17 661    1 677     0 698     12 743    11 830    9 928     14 955
m4   14 363   2 442    5 499     10 510    13 601    12 625    8 700     0 754     16 792    18 912
m5   0 400    15 484   17 506    14 511    8 627     12 700    10 706    5 756     16 888    4 943
Table B.10: Solution of the problem la13 (1150)
     j1 s1    j2 s2    j3 s3     j4 s4     j5 s5     j6 s6     j7 s7     j8 s8     j9 s9     j10 s10
m1   8 0      10 48    19 110    9 170     0 260     16 347    1 411     3 459     17 518    13 603
m2   16 0     1 11     5 65      2 143     19 189    12 274    0 347     18 419    8 484     6 524
m3   12 0     3 53     13 90     14 147    4 188     11 303    19 349    9 392     1 459     18 498
m4   0 0      2 60     18 80     6 173     11 244    17 303    5 398     10 426    4 511     7 536
m5   15 0     17 52    7 90      13 147    14 194    9 260     11 349    0 419     19 514    1 632
     j11 s11  j12 s12  j13 s13   j14 s14   j15 s15   j16 s16   j17 s17   j18 s18   j19 s19   j20 s20
m1   2 617    4 714    5 742     18 787    15 846    11 909    14 934    7 1018    6 1056    12 1109
m2   4 561    10 585   7 651     11 706    14 770    3 848     13 882    17 949    9 1046    15 1069
m3   15 597   5 623    6 706     0 718     8 784     10 833    7 917     2 1004    16 1025   17 1046
m4   15 569   1 597    16 632    3 642     14 661    12 745    9 825     8 842     19 925    13 957
m5   16 642   4 742    6 765     5 794     12 825    18 913    3 959     8 1005    2 1025    10 1080
Table B.11: Solution of the problem la14 (1292)
     j1 s1    j2 s2    j3 s3     j4 s4     j5 s5     j6 s6     j7 s7     j8 s8     j9 s9     j10 s10
m1   17 0     3 54     5 69      6 79      13 134    16 256    12 331    15 471    9 507     11 531
m2   12 0     1 63     18 152    16 178    8 256     15 346    14 430    11 438    19 531    7 595
m3   6 0      14 17    4 53      2 101     7 182     10 247    17 276    8 346     9 440     13 507
m4   7 0      11 47    9 124     19 139    0 193     12 198    2 225     3 310     13 367    15 430
m5   15 0     13 78    17 114    10 154    1 247     5 343     18 425    0 507     4 565     14 636
     j11 s11  j12 s12  j13 s13   j14 s14   j15 s15   j16 s16   j17 s17   j18 s18   j19 s19   j20 s20
m1   18 589   1 641    10 738    19 790    0 844     14 853    2 907     4 984     8 1024    7 1088
m2   2 667    17 754   6 836     0 927     3 985     10 1006   5 1063    13 1085   4 1172    9 1221
m3   0 565    11 609   5 679     1 759     18 843    12 868    19 950    15 982    16 1058   3 1139
m4   5 471    4 636    14 706    18 782    16 862    17 875    1 904     6 1037    10 1063   8 1131
m5   3 651    8 724    16 774    12 862    2 868     7 907     9 942     6 962     11 1037   19 1044
Table B.12: Solution of the problem la15 (1207)
     j1 s1    j2 s2    j3 s3     j4 s4     j5 s5     j6 s6     j7 s7     j8 s8     j9 s9     j10 s10
m1   0 0      7 6      13 86     10 156    5 201     11 290    18 344    12 365    8 393     4 484
m2   2 0      14 46    0 141     9 222     16 262    10 269    15 298    7 379     19 456    18 538
m3   1 0      0 40     3 80      4 101     11 186    2 222     7 292     16 349    10 361    13 369
m4   17 0     1 45     6 77      13 156    0 242     2 292     11 347    10 446    8 504     7 554
m5   6 0      8 59     2 115     3 180     0 279     5 298     16 327    10 369    7 456     14 583
     j11 s11  j12 s12  j13 s13   j14 s14   j15 s15   j16 s16   j17 s17   j18 s18   j19 s19   j20 s20
m1   14 524   9 583    1 671     3 726     17 790    15 883    16 935    19 1023   2 1122    6 1199
m2   4 622    6 666    11 746    12 755    5 853     8 936     1 953     13 962    3 1061    17 1076
m3   12 396   15 469   19 561    8 594     18 665    6 746     14 776    9 817     17 883    5 976
m4   14 651   18 736   4 804     3 828     12 853    5 945     9 976     16 1056   15 1116   19 1155
m5   15 639   19 671   9 742     11 801    1 811     18 892    4 918     17 955    12 1004   13 1091
Table B.13: Solution of the problem la16 (945)
      j1 s1   j2 s2   j3 s3   j4 s4   j5 s5   j6 s6   j7 s7   j8 s8   j9 s9   j10 s10
m1    7 45    6 189   1 316   4 381   9 425   0 502   5 555   3 654   2 747   8 863
m2    7 0     0 45    3 66    2 161   6 235   1 394   4 598   5 685   8 692   9 732
m3    5 0     2 35    1 99    6 130   9 189   4 283   3 381   0 468   8 502   7 539
m4    2 0     5 35    6 111   7 132   3 173   9 381   0 579   8 638   4 820   1 898
m5    1 0     8 55    7 173   2 193   9 285   5 381   6 530   0 558   4 647   3 824
m6    5 111   1 139   4 425   6 471   7 530   8 605   0 638   3 747   9 824   2 908
m7    0 66    7 193   6 419   5 471   4 480   2 555   3 613   9 654   8 732   1 821
m8    0 218   1 328   3 468   2 506   4 555   9 598   8 664   5 692   6 787   7 814
m9    9 0     2 99    0 166   6 369   7 491   3 506   8 539   5 650   1 685   4 837
m10   9 69    0 150   5 166   1 237   6 326   2 369   7 448   3 530   4 743   8 821
Table B.14: Solution of the problem la17 (784)
      j1 s1   j2 s2   j3 s3   j4 s4   j5 s5   j6 s6   j7 s7   j8 s8   j9 s9   j10 s10
m1    3 0     4 91    7 176   5 196   9 259   8 398   6 461   2 504   0 610   1 692
m2    7 0     6 15    9 95    1 191   2 243   5 560   0 587   8 610   3 671   4 737
m3    2 0     8 30    5 92    7 156   0 176   4 221   3 285   1 317   6 487   9 618
m4    5 0     3 99    2 132   6 200   8 275   0 371   1 409   9 463   7 558   4 637
m5    0 0     2 30    4 109   6 275   8 371   3 393   5 425   1 562   7 696   9 776
m6    1 57    5 156   3 196   4 285   6 350   9 356   0 459   8 646   7 688   2 696
m7    4 135   7 218   2 315   6 404   1 463   8 525   0 558   3 587   5 671   9 713
m8    0 18    7 39    6 95    3 134   1 243   5 317   2 415   8 558   9 585   4 691
m9    1 0     3 91    4 128   6 135   7 188   2 304   0 409   5 499   8 568   9 603
m10   4 0     0 40    8 393   1 525   9 562   2 585   7 666   6 688   5 713   3 737
Table B.15: Solution of the problem la18 (848)
      j1 s1   j2 s2   j3 s3   j4 s4   j5 s5   j6 s6   j7 s7   j8 s8   j9 s9   j10 s10
m1    8 11    0 75    2 162   6 190   5 235   4 297   1 345   7 442   3 616   9 731
m2    8 0     6 11    2 105   5 129   3 195   7 314   9 392   4 457   0 497   1 692
m3    4 83    9 132   6 231   5 297   3 549   2 561   1 634   0 664   7 737   8 831
m4    4 0     6 84    1 164   7 184   9 268   0 361   8 421   3 431   2 502   5 683
m5    9 0     2 60    0 162   5 210   3 235   8 273   4 358   6 462   1 516   7 778
m6    7 0     1 264   2 408   4 448   8 455   0 569   5 664   9 685   6 731   3 819
m7    0 0     4 132   1 230   7 268   2 330   5 408   9 472   3 561   6 616   8 734
m8    8 75    4 182   2 247   6 331   0 421   5 472   7 526   9 557   1 655   3 692
m9    9 60    2 129   6 388   0 462   1 497   3 516   7 552   5 580   8 639   4 734
m10   3 0     4 155   1 184   6 284   5 381   2 527   8 532   9 600   7 685   0 737
Table B.16: Solution of the problem la19 (842)
      j1 s1   j2 s2   j3 s3   j4 s4   j5 s5   j6 s6   j7 s7   j8 s8   j9 s9   j10 s10
m1    7 0     0 258   9 267   1 319   4 396   2 492   3 557   8 612   6 689   5 789
m2    3 0     1 91    4 178   6 268   2 443   8 501   9 579   0 653   5 711   7 768
m3    0 0     3 91    8 108   9 319   5 331   6 368   4 460   1 562   2 643   7 691
m4    0 44    8 189   7 222   4 298   2 373   3 443   6 450   1 477   9 562   5 615
m5    1 0     9 88    0 161   2 326   6 336   8 362   3 416   7 427   5 457   4 587
m6    8 0     7 88    0 103   5 161   3 254   6 362   9 432   1 643   4 709   2 771
m7    4 0     6 71    9 158   8 222   2 304   7 326   3 450   5 522   0 711   1 800
m8    1 15    5 46    3 116   9 222   0 267   8 416   7 457   6 533   4 599   2 691
m9    7 103   1 178   5 254   3 331   9 378   8 432   0 461   2 538   4 572   6 589
m10   9 0     7 144   2 222   8 461   3 522   0 557   6 653   4 689   1 709   5 782
Table B.17: Solution of the problem la20 (907)
      j1 s1   j2 s2   j3 s3   j4 s4   j5 s5   j6 s6   j7 s7   j8 s8   j9 s9   j10 s10
m1    9 0     4 21    2 130   8 170   3 333   0 424   5 438   1 526   7 731   6 790
m2    0 9     7 90    6 210   5 308   4 337   8 354   1 361   2 407   3 579   9 703
m3    5 0     2 8     9 93    1 154   8 258   0 270   3 341   4 371   7 790   6 831
m4    8 0     5 60    6 118   9 210   7 278   2 363   0 387   4 442   3 541   1 603
m5    6 0     3 70    0 150   4 205   1 293   7 363   8 417   5 510   9 564   2 663
m6    2 93    5 130   9 278   6 308   8 395   1 417   3 482   0 541   4 692   7 831
m7    0 0     5 9     4 112   3 152   9 304   6 395   7 494   2 575   8 670   1 697
m8    1 0     7 185   3 277   6 494   5 564   8 601   9 670   4 748   2 783   0 867
m9    8 60    0 310   5 342   9 457   1 501   6 526   4 612   7 692   2 752   3 783
m10   1 224   9 386   5 483   4 493   8 552   7 601   6 633   3 729   0 770   2 867
Table B.18: Solutions of the problems la21 (1079) and la22 (960)
[table data not recoverable]
Table B.19: Solutions of the problems la23 (1032) and la24 (959)
[table data not recoverable]
Table B.20: Solution of the problem la25 (991)
[table data not recoverable]
Table B.21: Solution of the problem la26 (1218)
      j1 s1    j2 s2    j3 s3     j4 s4     j5 s5     j6 s6     j7 s7     j8 s8     j9 s9     j10 s10
m1    13 0     14 80    6 155     17 201    8 263     1 341     3 353     19 446    5 530     15 625
m2    3 0      17 87    8 131     16 159    5 190     19 197    0 282     11 333    12 413    2 459
m3    18 0     4 58     12 156    17 173    15 190    0 204     10 238    2 267     8 341     9 417
m4    15 0     18 96    9 151     1 203     13 242    10 261    12 365    16 413    17 445    3 474
m5    1 0      9 55     10 151    2 175     12 267    6 365     5 393     18 454    14 517    7 578
m6    2 0      6 37     7 96      1 105     4 203     3 228     11 305    9 333     13 417    15 445
m7    0 78     9 178    4 256     2 331     17 385    6 400     1 473     12 550    7 617     14 671
m8    0 52     17 131   10 175    16 207    13 261    7 359     11 430    2 516     1 559     3 630
m9    0 0      11 52    13 142    8 192     7 258     17 272    16 338    1 396     12 473    9 511
m10   14 0     6 96     7 139     0 182     10 207    1 242     4 331     8 408     17 474    18 517
      j11 s11  j12 s12  j13 s13   j14 s14   j15 s15   j16 s16   j17 s17   j18 s18   j19 s19   j20 s20
m1    2 672    18 755   7 812     0 899     12 952    4 994     16 1038   10 1071   9 1091    11 1160
m2    4 478    9 580    1 696     6 805     7 899     10 944    13 975    14 987    15 1046   18 1117
m3    11 516   3 543    5 630     1 665     13 696    6 746     14 805    19 917    7 1007    16 1071
m4    4 543    8 560    19 586    6 647     11 663    7 762     2 803     14 869    0 952     5 1007
m5    16 598   17 642   0 739     13 760    15 854    3 929     11 1015   8 1063    4 1096    19 1192
m6    17 512   19 530   0 586     16 681    8 739     10 773    14 855    5 887     12 1063   18 1156
m7    3 708    11 762   18 838    16 859    10 893    13 911    8 974     19 1063   15 1130   5 1190
m8    12 668   15 730   19 795    9 865     18 910    14 967    6 985     4 1038    5 1083    8 1178
m9    14 580   3 668    10 692    4 773     5 852     19 887    6 917     15 967    18 1046   2 1078
m10   19 647   5 665    9 675     3 756     2 839     11 918    12 1015   13 1063   16 1143   15 1194
Table B.22: Solution of the problem la27 (1286)
      j1 s1    j2 s2    j3 s3     j4 s4     j5 s5     j6 s6     j7 s7     j8 s8     j9 s9     j10 s10
m1    17 0     3 52     8 129     1 193     2 290     6 318     14 366    10 384    11 450    9 532
m2    7 0      13 78    8 118     14 129    2 162     16 186    3 227     11 264    6 362     9 435
m3    10 0     2 61     3 134     4 146     13 195    9 248     18 343    5 419     1 503     8 524
m4    0 0      5 60     9 145     10 238    14 279    12 366    2 407     18 432    7 518     1 602
m5    2 0      4 45     11 135    0 148     5 196     19 221    6 229     16 243    7 276     12 339
m6    6 0      18 88    17 170    7 212     10 279    15 328    1 419     4 474     13 481    0 568
m7    14 0     16 33    1 115     6 149     12 216    19 292    8 334     7 431     4 481     5 504
m8    1 0      15 37    11 148    4 222     13 287    9 384     10 450    19 499    6 563     5 651
m9    2 186    17 214   3 264     11 362    15 419    9 500     10 532    13 602    14 698    7 711
m10   12 0     19 86    3 163     4 195     17 238    8 431     5 568     10 602    1 701     7 747
      j11 s11  j12 s12  j13 s13   j14 s14   j15 s15   j16 s16   j17 s17   j18 s18   j19 s19   j20 s20
m1    7 602    0 686    12 773    19 810    4 880     5 928     16 1006   15 1064   13 1099   18 1210
m2    12 500   5 585    10 701    0 791     1 863     15 947    19 1103   18 1189   4 1211    17 1251
m3    15 541   14 595   6 650     16 750    12 844    7 861     19 902    11 1036   0 1105    17 1202
m4    17 622   3 669    6 740     16 828    8 904     15 914    13 947    11 963    19 1036   4 1103
m5    10 353   8 541    1 626     15 685    18 754    13 838    14 922    17 999    9 1090    3 1150
m6    2 663    16 686   8 750     9 823     3 869     19 1004   5 1032    12 1051   11 1130   14 1178
m7    18 568   9 661    3 746     15 801    17 880    13 968    10 1034   0 1051    2 1105    11 1276
m8    2 697    7 799    18 838    16 904    12 910    0 964     14 1003   3 1063    17 1150   8 1202
m9    1 747    5 766    12 825    0 872     16 910    19 959    18 1004   8 1099    4 1194    6 1211
m10   2 799    6 820    0 867     9 872     16 959    15 1008   13 1047   18 1099   14 1136   11 1178
Table B.23: Solution of the problem la28 (1236)
      j1 s1    j2 s2    j3 s3     j4 s4     j5 s5     j6 s6     j7 s7     j8 s8     j9 s9     j10 s10
m1    16 0     14 75    3 139     10 147    4 219     5 310     0 355     6 386     13 422    7 513
m2    7 0      8 95     0 102     17 183    14 192    2 263     1 323     9 369     11 453    6 538
m3    1 0      19 70    4 138     9 209     11 270    3 337     18 367    16 430    12 521    7 582
m4    1 70     14 125   7 137     16 222    2 239     17 263    6 294     12 386    0 479     3 516
m5    3 0      18 80    2 115     0 204     1 259     13 323    10 422    7 452     8 504     12 597
m6    3 80     19 139   12 199    7 279     9 335     0 361     18 430    10 455    13 553    8 624
m7    4 0      6 40     11 139    14 262    17 336    5 385     3 394     18 471    9 506     8 597
m8    2 0      5 84     9 120     12 153    1 199     0 259     4 310     8 317     15 386    16 521
m9    0 0      10 32    12 75     14 147    6 187     9 273     11 337    1 410     4 435     18 515
m10   14 0     17 69    5 141     19 163    4 317     0 380     1 461     13 526    11 538    10 564
      j11 s11  j12 s12  j13 s13   j14 s14   j15 s15   j16 s16   j17 s17   j18 s18   j19 s19   j20 s20
m1    1 572    19 688   8 693     9 781     11 802    12 825    15 880    18 979    17 1026   2 1098
m2    3 636    10 716   12 742    16 792    4 882     18 899    19 979    5 1067    15 1096   13 1165
m3    2 623    14 708   15 755    17 764    0 855     5 895     13 923    10 1060   6 1113    8 1186
m4    15 554   4 581    9 631     19 725    18 775    13 827    11 923    8 981     10 1041   5 1096
m5    15 604   5 663    11 717    6 756     14 826    9 922     4 1021    19 1109   16 1181   17 1196
m6    16 646   15 683   4 755     14 811    1 818     11 883    6 896     5 983     2 1060    17 1098
m7    7 624    19 705   15 764    2 810     0 895     1 904     10 919    12 993    13 1050   16 1110
m8    18 571   13 621   3 716     11 772    10 781    19 789    17 936    14 1026   7 1110    6 1202
m9    16 571   19 636   2 708     7 739     5 778     17 874    8 936     13 995    3 1006    15 1165
m10   12 639   16 683   2 781     3 810     9 851     7 922     15 954    6 1017    8 1113    18 1162
Table B.24: Solution of the problem la29 (1221)
      j1 s1    j2 s2    j3 s3     j4 s4     j5 s5     j6 s6     j7 s7     j8 s8     j9 s9     j10 s10
m1    11 0     1 17     2 60      0 189     4 346     6 353     10 457    19 587    17 592    15 624
m2    6 0      11 35    9 149     3 291     7 319     8 390     17 509    10 549    2 644     15 730
m3    3 0      7 42     0 107     8 145     10 189    15 266    17 333    2 364     1 378     9 463
m4    2 0      3 42     13 116    17 215    9 285     14 305    11 396    0 425     15 437    1 463
m5    9 0      18 47    4 94      3 131     5 190     12 215    6 254     10 348    7 358     2 388
m6    6 7      11 28    13 35     19 68     18 127    8 191     0 265     5 362     4 435     10 440
m7    8 0      5 45     12 88     7 175     6 234     3 250     4 291     13 346    17 378    9 491
m8    4 0      2 41     9 48      5 88      0 145     7 234     12 319    15 333    8 420     17 549
m9    0 0      5 14     4 23      6 65      9 118     12 254    8 297     3 390     17 433    7 509
m10   2 48     16 53    18 191    3 299     10 372    8 510     9 594     1 641     0 730     17 742
      j11 s11  j12 s12  j13 s13   j14 s14   j15 s15   j16 s16   j17 s17   j18 s18   j19 s19   j20 s20
m1    7 712    5 773    12 859    18 878    16 910    3 961     8 1057    9 1117    13 1149   14 1157
m2    0 774    14 803   18 841    12 878    13 917    1 1000    4 1080    16 1128   5 1152    19 1208
m3    19 491   14 587   13 656    12 748    11 763    4 780     5 852     6 924     18 1013   16 1152
m4    16 545   18 617   5 666     12 763    8 795     6 811     4 889     7 964     10 1050   19 1113
m5    13 396   0 487    1 583     8 641     16 664    15 712    17 747    11 824    14 950    19 1038
m6    14 457   1 545    2 583     17 644    9 732     16 743    12 795    3 1057    15 1076   7 1156
m7    16 617   0 664    10 730    14 779    1 822     2 909     11 957    18 1027   19 1139   15 1156
m8    19 592   14 683   16 704    18 725    10 783    13 857    1 914     6 988     3 1076    11 1173
m9    13 553   10 635   14 656    19 683    1 730     2 822     15 888    18 933    16 1013   11 1065
m10   13 748   4 850    14 888    5 950     15 1010   12 1074   7 1090    11 1153   19 1166   6 1189
Table B.25: Solution of the problem la30 (1355)

     j1 s1   j2 s2   j3 s3   j4 s4   j5 s5   j6 s6   j7 s7   j8 s8   j9 s9   j10 s10
m1   9 0    11 59   13 133  16 184  12 202  3 263   8 357   5 416   19 509  0 591
m2   5 0    14 86   3 178   15 208  0 236   11 269  8 416   17 488  4 591   1 635
m3   12 0   7 85    3 144   5 159   18 236  17 328  16 424  2 514   9 571   10 660
m4   8 0    2 47    0 131   19 147  7 245   13 263  6 287   1 365   4 456   10 516
m5   3 0    1 39    8 120   10 199  4 318   7 329   15 370  0 421   18 431  12 510
m6   6 0    3 86    10 92   17 155  14 222  15 264  4 302   19 325  13 403  12 411
m7   0 0    4 32    6 86    10 176  8 199   5 275   3 357   13 411  18 490  16 572
m8   19 245 0 325   7 395   3 415   13 441  15 453  4 523   2 571   5 612   9 704
m9   1 0    5 86    3 159   0 269   14 281  18 349  13 453  4 528   10 665  16 678
m10  4 0    12 85   7 263   5 329   6 377   0 465   1 566   18 635  10 705  19 784

     j11 s11 j12 s12 j13 s13 j14 s14 j15 s15 j16 s16 j17 s17 j18 s18 j19 s19 j20 s20
m1   17 673 2 770   6 858   1 890   4 976   7 1021  10 1075 18 1157 14 1190 15 1283
m2   18 705 10 784  12 808  19 906  16 975  2 1058  9 1063  13 1125 6 1206  7 1286
m3   11 665 14 726  19 757  15 764  6 797   8 854   4 862   13 877  0 990   1 1010
m4   3 553  5 571   9 749   18 780  14 846  17 909  11 1003 15 1020 12 1139 16 1177
m5   16 547 17 572  9 660   2 685   11 802  13 894  14 911  5 976   19 1047 6 1098
m6   1 510  18 566  2 612   7 685   5 768   9 813   16 867  0 902   8 990   11 1103
m7   12 664 1 758   14 817  9 867   2 975   11 1058 17 1103 15 1138 7 1192  19 1286
m8   11 749 10 808  18 834  16 902  6 931   17 1003 8 1089  14 1146 1 1173  12 1241
m9   7 768  15 833  2 911   11 949  9 959   6 988   12 1074 19 1139 8 1218  17 1302
m10  8 873  11 903  13 923  2 958   15 975  9 1020  14 1027 16 1097 17 1177 3 1259
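Each row of these tables lists, for one machine, the operations it processes in order, as (job index, start time) pairs. A minimal sketch of how such a row can be parsed and sanity-checked (not from the thesis; the function names and the text-row format assumption are my own):

```python
# Sketch: parse a machine row such as "m1 9 0 11 59 13 133" into
# (job, start) pairs and check that starts are nondecreasing, i.e.
# that the operations really are listed in processing order.

def parse_machine_row(row: str):
    """Return (machine label, [(job, start), ...]) for one table row."""
    tokens = row.split()
    machine = tokens[0]                        # e.g. 'm1'
    nums = list(map(int, tokens[1:]))          # alternating job, start values
    pairs = list(zip(nums[0::2], nums[1::2]))  # pair them up
    return machine, pairs

def is_sorted_by_start(pairs):
    """True if start times never decrease along the machine."""
    starts = [s for _, s in pairs]
    return all(a <= b for a, b in zip(starts, starts[1:]))

if __name__ == "__main__":
    m, pairs = parse_machine_row("m1 9 0 11 59 13 133 16 184 12 202")
    print(m, pairs, is_sorted_by_start(pairs))
```

Note that the makespan quoted in each caption cannot be recomputed from start times alone; the operation durations from the benchmark instance are also needed.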
Table B.26: Solution of the problem la31 (1784)
(machine-wise schedule for machines m1–m10 and jobs j1–j30, listed as job-index/start-time pairs; table body omitted)
Table B.27: Solution of the problem la32 (1850)
(machine-wise schedule for machines m1–m10 and jobs j1–j30, listed as job-index/start-time pairs; table body omitted)
Table B.28: Solution of the problem la33 (1719)
(machine-wise schedule for machines m1–m10 and jobs j1–j30, listed as job-index/start-time pairs; table body omitted)
Table B.29: Solution of the problem la34 (1721)
(machine-wise schedule for machines m1–m10 and jobs j1–j30, listed as job-index/start-time pairs; table body omitted)
Table B.30: Solution of the problem la35 (1888)
(machine-wise schedule for machines m1–m10 and jobs j1–j30, listed as job-index/start-time pairs; table body omitted)
Table B.31: Solution of the problem la36 (1292)
(machine-wise schedule for machines m1–m15 and jobs j1–j15, listed as job-index/start-time pairs; table body omitted)
Table B.32: Solution of the problem la37 (1434)
(machine-wise schedule for machines m1–m15 and jobs j1–j15, listed as job-index/start-time pairs; table body omitted)
Table B.33: Solution of the problem la38 (1249)
(machine-wise schedule for machines m1–m15 and jobs j1–j15, listed as job-index/start-time pairs; table body omitted)
Table B.34: Solution of the problem la39 (1251)
(machine-wise schedule for machines m1–m15 and jobs j1–j15, listed as job-index/start-time pairs; table body omitted)
Table B.35: Solution of the problem la40 (1251)
(machine-wise schedule for machines m1–m15 and jobs j1–j15, listed as job-index/start-time pairs; table body omitted)
Bibliography

Aarts, E. H. L., Van Laarhoven, P. J. M., Lenstra, J. K., and Ulder, N. L. J. (1994). A computational study of local search algorithms for job shop scheduling. ORSA Journal on Computing, 6(2), 118–125.

Aarts, E. H. L., Korst, J., and Michiels, W. (2007). Simulated annealing. In T. F. Gonzalez, editor, Handbook of Approximation Algorithms and Metaheuristics, Computer and Information Science. Chapman & Hall/CRC, Boca Raton.

Abumaizar, R. J. and Svestka, J. A. (1997). Rescheduling job shops under random disruptions. International Journal of Production Research, 35, 2065–2082.

Adams, J., Balas, E., and Zawack, D. (1988). The shifting bottleneck procedure for job shop scheduling. Management Science, 34(3), 391–401.

Akers Jr, S. B. and Friedman, J. (1955). A non-numerical approach to production scheduling problems. Journal of the Operations Research Society of America, 3(4), 429–442.

Alba, E. and Cotta, C. (2006). Evolutionary algorithms. In E. Alba, B. Dorronsoro, M. Giacobini, and M. Tomassini, editors, Handbook of Bioinspired Algorithms and Applications, pages 3–19. Taylor & Francis.

Allahverdi, A., Ng, C. T., Cheng, T. C. E., and Kovalyov, M. Y. (2008). A survey of scheduling problems with setup times or costs. European Journal of Operational Research, 187(3), 985–1032.

Ashour, S. and Hiremath, S. R. (1973). A branch-and-bound approach to the job-shop scheduling problem. International Journal of Production Research, 11(1), 47–58.

Baker, J. E. (1985). Adaptive selection methods for genetic algorithms. In Proceedings of the 1st International Conference on Genetic Algorithms, pages 101–111, Hillsdale, NJ. L. Erlbaum Associates Inc.

Baker, K. R. and Peterson, D. W. (1979). An analytic framework for evaluating rolling schedules. Management Science, 25(4), 341–351.

Balas, E., Simonetti, N., and Vazacopoulos, A. (2008). Job shop scheduling with setup times, deadlines and precedence constraints. Journal of Scheduling, 11(4), 253–262.

Barnes, J. W. and Chambers, J. B. (1995). Solving the job shop scheduling problem with tabu search. IIE Transactions, 27(2), 257–263.

Biegel, J. E. and Davern, J. J. (1990). Genetic algorithms and job shop scheduling. Computers & Industrial Engineering, 19(1-4), 81–91.

Binato, S., Hery, W. J., Loewenstern, D. M., and Resende, M. G. C. (2001). A GRASP for job shop scheduling. In C. Ribeiro and P. Hansen, editors, Essays and Surveys on Metaheuristics, pages 58–79. Kluwer Academic Publishers, Boston, MA.

Blackstone, J. H., Phillips, D. T., and Hogg, G. L. (1982). A state-of-the-art survey of dispatching rules for manufacturing job shop operations. International Journal of Production Research, 20(1), 27–45.

Carlier, J. and Pinson, E. (1989). An algorithm for solving the job-shop problem. Management Science, 35(2), 164–176.

Cheng, R. and Gen, M. (2001). Production planning and scheduling using genetic algorithms. In J. Wang and A. Kusiak, editors, Computational Intelligence in Manufacturing Handbook. CRC Press, Boca Raton, FL; London.
Cheng, R., Gen, M., and Tsujimura, Y. (1996). A tutorial survey of job-shop scheduling problems using genetic algorithms – I: representation. Computers & Industrial Engineering, 30(4), 983–997.

Cheng, R., Gen, M., and Tsujimura, Y. (1999). A tutorial survey of job-shop scheduling problems using genetic algorithms, part II: hybrid genetic search strategies. Computers & Industrial Engineering, 36(2), 343–364.

Darwin, C. (1859). On the Origin of Species by Means of Natural Selection, or, The Preservation of Favoured Races in the Struggle for Life. John Murray, London.

Dauzere-Peres, S. and Lasserre, J. B. (1993). A modified shifting bottleneck procedure for job-shop scheduling. International Journal of Production Research, 31(4), 923–932.

Davis, L. (1985). Applying adaptive algorithms to epistatic domains. In International Joint Conference on Artificial Intelligence, pages 162–164, Los Angeles, CA. IEEE Computer Society.

Della-Croce, F., Tadei, R., and Volta, G. (1995). A genetic algorithm for the job shop problem. Computers & Operations Research, 22(1), 15–24.

Dell'Amico, M. and Trubian, M. (1993). Applying tabu search to the job-shop scheduling problem. Annals of Operations Research, 41(3), 231–252.

Dorndorf, U. and Pesch, E. (1995). Evolution based learning in a job shop scheduling environment. Computers & Operations Research, 22(1), 25–40.

Eom, H. B. and Lee, S. M. (1990). A survey of decision support system applications (1971–April 1988). Interfaces, 20(3), 65–79.

Eom, S. B., Lee, S. M., Kim, E. B., and Somarajan, C. (1998). A survey of decision support system applications (1988–1994).

Fahmy, S. A., Balakrishnan, S., and ElMekkawy, T. Y. (2008). A generic deadlock-free reactive scheduling approach. International Journal of Production Research, 46(1), 1–20.
Falkenauer, E. and Bouffouix, S. (1991). A genetic algorithm for job shop. In Proceedings of the 1991 IEEE International Conference on Robotics and Automation, volume 1, pages 824–829.

Fang, H., Corne, D., and Ross, P. (1996). A genetic algorithm for job-shop problems with various schedule quality criteria. In Evolutionary Computing, pages 39–49. Springer-Verlag, London, UK.

Feng, G. and Lau, H. (2008). Efficient algorithms for machine scheduling problems with earliness and tardiness penalties. Annals of Operations Research, 159(1), 83–95.

Feo, T. A. and Resende, M. G. C. (1989). A probabilistic heuristic for a computationally difficult set covering problem. Operations Research Letters, 8(2), 67–71.

Fogel, L. J., Owens, A. J., and Walsh, M. J. (1966). Artificial Intelligence through Simulated Evolution. Wiley, New York.

Gen, M., Tsujimura, Y., and Kubota, E. (1994). Solving job-shop scheduling problems by genetic algorithm. In IEEE International Conference on Systems, Man, and Cybernetics, volume 2, pages 1577–1582.

Giffler, B. and Thompson, G. L. (1960). Algorithms for solving production-scheduling problems. Operations Research, 8(4), 487–503.

Glover, F. (1989). Tabu search – part I. ORSA Journal on Computing, 1(3), 190–206.

Glover, F. (1990). Tabu search – part II. ORSA Journal on Computing, 2(1), 4–32.

Goldberg, D. and Lingle, R. (1985). Alleles, loci and the traveling salesman problem. In J. J. Grefenstette, editor, First International Conference on Genetic Algorithms, pages 154–159, Hillsdale, NJ. Lawrence Erlbaum Associates.

Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Pub. Co., Reading, MA.
Goncalves, J. F., de Magalhaes Mendes, J. J., and Resende, M. G. C. (2005). A hybrid genetic algorithm for the job shop scheduling problem. European Journal of Operational Research, 167(1), 77–95.

Hasan, S. M. K., Sarker, R., Essam, D., and Cornforth, D. (2009). Memetic algorithms for solving job-shop scheduling problems. Memetic Computing, 1, 69–83.

He, Z., Yang, T., and Tiger, A. (1996). An exchange heuristic imbedded with simulated annealing for due-dates job-shop scheduling. European Journal of Operational Research, 91(1), 99–117.

Hoksung, Y., Yunpeng, P., and Leyuan, S. (2008). New solution approaches to the general single-machine earliness-tardiness problem. IEEE Transactions on Automation Science and Engineering, 5(2), 349–360.

Holland, J. H. (1975). Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. University of Michigan Press, Ann Arbor.

Ignall, E. and Schrage, L. (1965). Application of the branch and bound technique to some flow-shop scheduling problems. Operations Research, 13(3), 400–412.

Ishibuchi, H. and Murata, T. (1998). A multi-objective genetic local search algorithm and its application to flowshop scheduling. IEEE Transactions on Systems, Man, and Cybernetics, Part C, 28(3), 392–403.

Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598), 671–680.

Koza, J. R. (1992). Genetic Programming: On the Programming of Computers by Means of Natural Selection. Complex Adaptive Systems. MIT Press, Cambridge, MA.

Kumar, M. and Rajotia, S. (2006). Integration of process planning and scheduling in a job shop environment. The International Journal of Advanced Manufacturing Technology, 28(1), 109–116.
Lawrence, D. (1985). Job shop scheduling with genetic algorithms. In First International Conference on Genetic Algorithms, pages 136–140, Mahwah, NJ. Lawrence Erlbaum Associates, Inc.

Leon, V. J., Wu, S. D., and Storer, R. H. (1994). Robustness measures and robust scheduling for job shops. IIE Transactions, 26(5), 32–43.

Li, C. L., Mosheiov, G., and Yovel, U. (2008). An efficient algorithm for minimizing earliness, tardiness, and due-date costs for equal-sized jobs. Computers & Operations Research, 35(11), 3612–3619.

Lin, B. M. T. and Cheng, T. C. E. (2001). Batch scheduling in the no-wait two-machine flowshop to minimize the makespan. Computers & Operations Research, 28(7), 613–624.

Lin, C., Haley, K. B., and Sparks, C. (1995). A comparative study of both standard and adaptive versions of threshold accepting and simulated annealing algorithms in three scheduling problems. European Journal of Operational Research, 83(2), 330–346.

Liu, S., Ong, H., and Ng, K. (2005). Metaheuristics for minimizing the makespan of the dynamic shop scheduling problem. Advances in Engineering Software, 36(3), 199–205.

Marakas, G. M. (2002). Decision Support Systems in the 21st Century. Prentice Hall, Upper Saddle River, NJ, 2nd edition.

Mattfeld, D. C. and Bierwirth, C. (2004). An efficient genetic algorithm for job shop scheduling with tardiness objectives. European Journal of Operational Research, 155(3), 616–630.

McKay, K. N. and Buzacott, J. A. (2000). The application of computerized production control systems in job shop environments. Computers in Industry, 42(2-3), 79–97.
McKay, K. N. and Wiers, V. C. S. (2003). Integrated decision support for planning, scheduling, and dispatching tasks in a focused factory. Computers in Industry, 50(1), 5–14.

McKay, K. N., Buzacott, J. A., and Safayeni, F. R. (1989). The scheduler's knowledge of uncertainty: the missing link. In Knowledge Based Production Management Systems, Amsterdam, The Netherlands. Elsevier.

McMahon, G. B. and Burton, P. G. (1967). Flow-shop scheduling with the branch-and-bound method. Operations Research, 15(3), 473–481.

Mehta, S. and Uzsoy, R. (1998). Predictable scheduling of a job shop subject to breakdowns. IEEE Transactions on Robotics and Automation, 14(3), 365–378.

Muth, J. F. and Thompson, G. L. (1963). Industrial Scheduling. Prentice-Hall International Series in Management. Prentice-Hall, Englewood Cliffs, NJ.

Nakano, R. and Yamada, T. (1991). Conventional genetic algorithm for job shop problems. In R. K. Belew and L. B. Booker, editors, Fourth International Conference on Genetic Algorithms, pages 474–479, San Mateo, CA. Morgan Kaufmann.

Nowicki, E. and Smutnicki, C. (1996). A fast taboo search algorithm for the job shop problem. Management Science, 42(6), 797–813.

Nuijten, W. P. M. and Aarts, E. H. L. (1996). A computational study of constraint satisfaction for multiple capacitated job shop scheduling. European Journal of Operational Research, 90(2), 269–284.

O'Donovan, R., Uzsoy, R., and McKay, K. N. (1999). Predictable scheduling of a single machine with breakdowns and sensitive jobs. International Journal of Production Research, 37, 4217–4233.

Oliver, I. M., Smith, D. J., and Holland, J. R. C. (1987). A study of permutation crossover operators on the traveling salesman problem. In Proceedings of the Second International Conference on Genetic Algorithms and Their Application, pages 224–230, Hillsdale, NJ. L. Erlbaum Associates Inc.
Ombuki, B. M. and Ventresca, M. (2004). Local search genetic algorithms for the job shop scheduling problem. Applied Intelligence, 21(1), 99–109.

Panwalkar, S. S. and Iskander, W. (1977). A survey of scheduling rules. Operations Research, 25(1), 45–61.

Paredis, J. (1992). Handbook of evolutionary computation. In Parallel Problem Solving from Nature 2. Institute of Physics Publishing and Oxford University Press, Brussels, Belgium.

Paredis, J. (1997). Exploiting constraints as background knowledge for evolutionary algorithms. In T. Back, D. Fogel, and Z. Michalewicz, editors, Handbook of Evolutionary Computation, pages G1.2:1–6. Institute of Physics Publishing and Oxford University Press, Bristol, NY.

Park, B. J., Choi, H. R., and Kim, H. S. (2003). A hybrid genetic algorithm for the job shop scheduling problems. Computers & Industrial Engineering, 45(4), 597–613.

Petrovic, D., Duenas, A., and Petrovic, S. (2007). Decision support tool for multi-objective job shop scheduling problems with linguistically quantified decision functions. Decision Support Systems, 43(4), 1527–1538.

Pezzella, F., Morganti, G., and Ciaschetti, G. (2008). A genetic algorithm for the flexible job-shop scheduling problem. Computers & Operations Research, 35(10), 3202–3212.

Ponnambalam, S. G., Aravindan, P., and Rao, P. S. (2001). Comparative evaluation of genetic algorithms for job-shop scheduling. Production Planning & Control, 12(6), 560–674.

Rechenberg, I. (1973). Evolution Strategy: Optimization of Technical Systems according to the Principles of Biological Evolution. Frommann-Holzboog, Stuttgart.

Ren, H., Jiang, L., Xi, X., and Li, M. (2009). Heuristic optimization for dual-resource constrained job shop scheduling. In International Asia Conference on Informatics in Control, Automation and Robotics (CAR '09), pages 485–488.
Shi, L. and Pan, Y. (2005). An efficient search method for job-shop scheduling problems. IEEE Transactions on Automation Science and Engineering, 2(1), 73–77.

Shigenobu, K., Isao, O., and Masayuki, Y. (1995). An efficient genetic algorithm for job shop scheduling problems. In L. J. Eshelman, editor, 6th International Conference on Genetic Algorithms, pages 506–511, Pittsburgh, PA. Morgan Kaufmann Publishers Inc.

Sidney, J. B. (1977). Optimal single-machine scheduling with earliness and tardiness penalties. Operations Research, 25(1), 62–69.

Silva, C., Roque, L., and Almeida, A. (2006). MAPP – a web-based decision support system for the mould industry. Decision Support Systems, 42(2), 999–1014.

Someya, H. and Yamamura, M. (1999). A genetic algorithm without parameters tuning and its application on the floorplan design problem.

Speranza, M. G. and Woerlee, A. P. (1991). A decision support system for operational production scheduling. European Journal of Operational Research, 55(3), 329–343.

Starkweather, T., McDaniel, S., Whitley, C., and Mathias, K. (1991). A comparison of genetic sequencing operators. In Proceedings of the Fourth International Conference on Genetic Algorithms, pages 69–76.

Storer, R. H., Wu, S. D., and Vaccari, R. (1992). New search spaces for sequencing problems with application to job shop scheduling. Management Science, 38(10), 1495–1509.

Student (1908). The probable error of a mean. Biometrika, 6(1), 1–25.

Subramaniam, V., Raheja, A. S., and Reddy, K. R. B. (2005). Reactive repair tool for job shop schedules. International Journal of Production Research, 43, 1–23.
Suwa, H. and Sandoh, H. (2007). Capability of cumulative delay based reactive scheduling for job shops with machine breakdowns. Computers & Industrial Engineering, 53(1), 63–78.

Syswerda, G. (1991). Schedule optimization using genetic algorithms. In L. Davis, editor, Handbook of Genetic Algorithms, pages 332–349. Van Nostrand Reinhold, New York.

Taillard, E. D. (1994). Parallel taboo search techniques for the job shop scheduling problem. ORSA Journal on Computing, 6(2), 108.

Tang, K. S., Man, K. F., Kwong, S., and He, Q. (1996). Genetic algorithms and their applications. IEEE Signal Processing Magazine, 13(6), 22–37.

Turban, E. (1993). Decision support systems: An overview. In C. Stewart, editor, Decision Support and Expert Systems: Management Support Systems, pages 83–129. Macmillan, New York, 3rd edition.

Turban, E. (2007). Decision Support and Business Intelligence Systems. Pearson Prentice Hall, Upper Saddle River, NJ, 8th edition.

Van Laarhoven, P. J. M., Aarts, E. H. L., and Lenstra, J. K. (1992). Job shop scheduling by simulated annealing. Operations Research, 40(1), 113–125.

Viviers, F. (1983). A decision support system for job shop scheduling. European Journal of Operational Research, 14(1), 95–103.

Wang, W. and Brunn, P. (2000). An effective genetic algorithm for job shop scheduling. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 214(4), 293–300.

Watanabe, M., Ida, K., and Gen, M. (2005). A genetic algorithm with modified crossover operator and search area adaptation for the job-shop scheduling problem. Computers & Industrial Engineering, 48(4), 743–752.
Wenqi, H. and Aihua, Y. (2004). An improved shifting bottleneck procedure for the job shop scheduling problem. Computers & Operations Research, 31(12), 2093–2110.

Wu, S. D., Storer, R. H., and Pei-Chann, C. (1993). One-machine rescheduling heuristics with efficiency and stability as criteria. Computers & Operations Research, 20(1), 1–14.

Xi, X., Jiang, L., and Zhang, Q. (2009). Optimization for multi-resources-constrained job shop scheduling based on three-level heuristic algorithm. In International Asia Conference on Informatics in Control, Automation and Robotics (CAR '09), pages 296–300.

Yamada, T. (2003). Studies on Metaheuristics for Jobshop and Flowshop Scheduling Problems. Ph.D. thesis, Kyoto University.

Yamada, T. and Nakano, R. (1997a). Genetic algorithms for job-shop scheduling problems. In Modern Heuristic for Decision Support, pages 67–81, UNICOM seminar, London.

Yamada, T. and Nakano, R. (1997b). Job-shop scheduling. In A. Zalzala and P. Fleming, editors, Genetic Algorithms in Engineering Systems, volume 55, pages 134–160. The Institution of Electrical Engineers.

Zhang, C. Y., Li, P. G., Guan, Z. L., and Rao, Y. Q. (2007). A tabu search algorithm with a new neighborhood structure for the job shop scheduling problem. Computers & Operations Research, 34(11), 3229–3242.

Zhang, C. Y., Li, P., Rao, Y., and Guan, Z. (2008a). A very fast TS/SA algorithm for the job shop scheduling problem. Computers & Operations Research, 35(1), 282–294.

Zhang, G., Gao, L., Li, X., and Li, P. (2008b). Variable neighborhood genetic algorithm for the flexible job shop scheduling problems. In Intelligent Robotics and Applications, pages 503–512. Springer-Verlag.
Index

Adaptive neighborhood search, 24
Branch-and-bound, 20
Capacity constraint, see Job-shop model
Chromosome
    length, 42, 44
    representation, 28–30, 40
Classical
    job-shop problem, 40
    optimization, 18
Complexity
    exponential, 12
    NP-hard, 11
    of algorithm, 11
Crossover, 32–34
    Cycle, 33
    Linear operational, 35
    Ordered, 32
    Partially-mapped, 33
    point, 32
    Position-based, 33
    Two-point exchange, 50
Deadlock, 47
Decision support system, 124–128
    attributes, 125
    data management, 126, 128
    dialog management, 131
    framework, 126
    knowledge base, 127
    model management, 127, 131
    user interface, 128, 132–136
Distribution
    Exponential, 109
    Poisson, 109
    Uniform, 109
Elitism, 36
Fitness
    evaluation, 48
    function, 27
Flexible job-shop, 14
Flow-shop scheduling, 11
Genetic algorithm, 39
Genotype, 44
Global harmonization, 47–48
Input parameters, 130
Interruptions, 14
Job-shop
    model, 18
    problem, 38
    scheduling, 12
Local
    harmonization, 44–46
    re-optimization, 22, 25
    search, see Meta-heuristics
Look-ahead simulation, 35
Machine
    breakdown, 103, 111, 115–118
    unavailability, 102, 105, 111, 120
Makespan, see Objective
Memetic algorithm, 66
Meta-heuristics, 20–25
    Genetic algorithm, 24, 26
    Genetic search, 36
    GRASP, 23
    Shifting bottleneck, 22
    Simulated annealing, 24
    Tabu search, 21
Mutation, 34–35
    Bit-flip, 34, 50
    Deletion, 35
    Insertion, 35
    Reciprocal exchange, 34
    Shift, 35
Neighborhood
    function, 24
    search, 21, 22, 24, 25
    selection, 21
    solution, 21, 24
    structure, 21, 22
Objective, 15, 19
    Due-date cost, 17
    Earliness, 16
    Makespan, 15
    Tardiness, 17
    Throughput time, 16
Phenotype, 44
    construction, 46
Priority rules, 25–26
    First come first serve, 26
    Longest number of remaining operations, 26
    Longest processing time, 25
    Longest remaining processing time, 26
    Shortest processing time, 25
    Shortest remaining processing time, 26
    Smallest number of remaining operations, 26
Process interruption, 102
Reactive scheduling, 104, 109
Representation, 28–30, 40
    chromosome, 40
    Disjunctive graph-based, 30
    Job pair relation-based, 30, 42–43
    Job-based, 29
    Machine-based, 30
    Operation-based, 28
    Preference list-based, 29
    Priority rule-based, 29
Reproduction, 48
Restricted candidate list, 23
Right shift, 105
Rule
    Gap reduction, 65, 69–72
    Partial reordering, 65, 67–68
    Restricted swapping, 65, 72–74
    Shifted gap-reduction, 106–108
Scheduling
    single-machine, 105
Selection, 31–32
    probability, 31
    Ranking, 31
    Roulette-wheel, 31
    Tournament, 31
Solution space, 11, 18
Student's t-test, 88
Traditional genetic algorithm, 41
Two-fold method, 20