J Comb Optim (2007) 13:79–102 DOI 10.1007/s10878-006-9015-7
A multi-objective particle swarm for a flow shop scheduling problem A. R. Rahimi-Vahed · S. M. Mirghorbani
Published online: 7 October 2006 C Springer Science + Business Media, LLC 2006
Abstract Flow shop problems, as a typical manufacturing challenge, have gained wide attention in academic fields. In this paper, we consider a bi-criteria permutation flow shop scheduling problem in which the weighted mean completion time and the weighted mean tardiness are to be minimized simultaneously. Since the flow shop scheduling problem has been proved to be NP-hard in the strong sense, an effective multi-objective particle swarm (MOPS), exploiting a new concept of the ideal point and a new approach to specify the superior particle's position vector in the swarm, is designed and used for finding the locally Pareto-optimal frontier of the problem. To demonstrate the efficiency of the proposed algorithm, various test problems are solved, and the reliability of the proposed algorithm, based on several comparison metrics, is compared with that of a distinguished multi-objective genetic algorithm, SPEA-II. The computational results show that the proposed MOPS performs better than the genetic algorithm, especially for large-sized problems.
Keywords Bi-criteria flow shop scheduling problem · Permutation flow shop · Multi-objective particle swarm · Multi-objective genetic algorithm
1 Introduction
In the context of manufacturing, scheduling is fundamentally related to the problem of finding a successive assignment of limited resources to a number of jobs that is optimal in terms of certain performance measures. On many occasions in manufacturing environments, a set of operations needs to be performed serially in several stages before a job is completed. Such systems are referred to as flow shop environments. In a flow shop system, a set of n different jobs needs to be processed on a sequential set of m machines. That is, each job consists of m operations, where each operation must be performed on a different machine
A. R. Rahimi-Vahed () · S. M. Mirghorbani Department of Industrial Engineering, Faculty of Engineering, University of Tehran, Tehran, Iran e-mail:
[email protected]
for an amount of processing time. Each machine can handle only one job at a time, and the operation of a machine on a job usually cannot be preempted. In flow shop scheduling, the processing routes are the same for all the jobs (Solimanpur et al., 2004). In the permutation flow shop, passing is not allowed; thus all jobs visit the machines in the same order. In the general flow shop, passing is allowed; therefore, the job sequence on each machine may be different (Pinedo, 1995). The early group of flow shop researchers was quite small, and these people were concentrated in a few US academic and research institutions. However, today's flow shop research community is global, with members from every continent and every geographical region (Gupta and Stafford, 2006). Flow shop scheduling problems have become among the most renowned problems in the area of scheduling, and numerous papers have investigated this issue (Murata et al., 1996). Gupta and Stafford (2006) reviewed the evolution of flow shop scheduling problems and possible approaches for their solution over the last fifty years. They introduced the current flow shop problems and the approaches used to solve them optimally or approximately. The majority of papers on the flow shop scheduling problem have concentrated on single-criterion problems. For example, Pan et al. (2002) considered the two-machine flow shop scheduling problem, minimizing total tardiness. Fink and Voß (2003) examined the application of diverse metaheuristic methods to the continuous flow shop scheduling problem, minimizing the total completion time (flow time). They considered the trade-off between computational time and solution quality, as well as the knowledge and effort needed to implement and calibrate the algorithms. Bulfin and M'Hallah (2003) proposed an exact algorithm to solve the two-machine flow shop scheduling problem with the objective of minimizing the weighted number of tardy jobs.
Blazewicz et al. (2005a) analyzed different solution procedures for the two-machine flow shop scheduling problem with a common due date and a weighted late work criterion. Choi et al. (2005) investigated a proportionate flow shop scheduling problem, in which only one machine is different and job processing times are inversely proportional to machine speeds, minimizing the maximum completion time. Grabowski and Pempera (2005) addressed the no-wait flow shop problem with the makespan criterion and developed and compared different local search algorithms for solving this problem. Blazewicz et al. (2005b) dealt with the two-machine non-preemptive flow shop scheduling problem with a total weighted late work criterion and a common due date. Whang et al. (2006) dealt with a two-machine flow shop scheduling problem with deteriorating jobs, in which they minimized total completion time. Nowicki and Smutnicki (2006) addressed the flow shop scheduling problem with the makespan criterion and, for solving it, proposed a new algorithm which uses some elements of scatter search, the path relinking technique and some properties of neighborhoods. Lian et al. (2006) suggested a similar particle swarm optimization algorithm for solving the permutation flow shop scheduling problem with respect to the minimization of makespan. Many researchers have also tackled the problem of scheduling in job shop environments. Akpan (1996) introduced a network-based tool to analyze job shop sequencing problems. Rahmati (1998) proposed a hybrid GA for non-classical job shop scheduling problems. Chen and Chen (1996) provided a survey of GAs for job shop scheduling. Blazewicz et al. (1996) presented an overview of solution techniques for solving job shop problems. They concluded that local search methods are the most powerful tools to schedule job shops.
While these studies treated a single objective, consideration of multiple criteria is more realistic in practice (Murata et al., 1996; Toktas et al., 2004). The multi-objective flow shop scheduling problem has been addressed by some papers on scheduling. Marett and Wright (1996) compared the performance of simulated annealing and tabu search by
using them for solving a large and complex multi-objective flow shop problem. Sayin and Karabati (1999) dealt with the scheduling problem in a two-machine flow shop environment, minimizing makespan and the sum of completion times simultaneously. For solving this problem, they developed a branch-and-bound procedure that iteratively solves restricted single-objective scheduling problems until the set of efficient solutions is completely enumerated. Danneberg et al. (1999) addressed the permutation flow shop scheduling problem with setup times, where the jobs are partitioned into groups or families. Jobs of the same group can be processed together in a batch, but the maximum number of jobs in a batch is limited. The setup time depends on the group of the jobs. They proposed the makespan as well as the weighted sum of the completion times of the jobs as objective functions. For solving such a problem, they proposed and compared various constructive and iterative algorithms. Toktas et al. (2004) considered two-machine flow shop scheduling, minimizing makespan and maximum earliness simultaneously. They developed a branch-and-bound procedure that generates all efficient solutions with respect to the two criteria and also proposed a heuristic procedure that generates approximate efficient solutions. Ponnambalam et al. (2004) proposed a TSPGA multi-objective algorithm for flow shop scheduling, where a weighted sum of multiple objectives (i.e. minimizing makespan, mean flow time and machine idle time) was used. The weights are randomly generated for each generation to enable a multi-directional search. The proposed algorithm was evaluated by applying it to benchmark problems available in the OR-Library. Ravindran et al. (2005) proposed three heuristic algorithms for solving the flow shop scheduling problem with respect to makespan and total flow time. Loukil et al.
(2005) proposed a multi-objective simulated annealing algorithm to tackle multi-objective production scheduling problems (one machine, parallel machines and permutation flow shops). They considered seven possible objective functions (the mean weighted completion time, the mean weighted tardiness, the mean weighted earliness, the maximum completion time (makespan), the maximum tardiness, the maximum earliness, and the number of tardy jobs). They claimed that the proposed multi-objective simulated annealing algorithm was able to handle any subset of these seven objective functions. In this paper, we deal with a multi-objective permutation flow shop scheduling problem in which the weighted mean completion time and the weighted mean tardiness are to be optimized simultaneously. To tackle this problem, an effective multi-objective particle swarm (MOPS), exploiting a new concept of the ideal point and a new approach to specify the superior particle's position vector in the swarm, is designed for searching for the locally Pareto-optimal frontier. The remainder of this paper is organized as follows: basic definitions of multi-objective optimization are presented in Section 2. Section 3 gives the problem definition. In Section 4, the background of PSO and previous work are summarized. Section 5 deals with the proposed multi-objective particle swarm. The experimental results are provided in Section 6. Finally, Section 7 provides conclusions and an evaluation of the work.
2 Multi-objective optimization
A mono-objective optimization algorithm terminates upon finding an optimal solution. For a multi-objective problem, however, a single solution satisfying the non-dominance criterion would be an ideal case; most often the optimization process leads to more than one solution. In order to better understand multi-objective optimization concepts, some basic definitions are recapitulated below:
Without loss of generality, let us consider a general multi-objective minimization problem with p decision variables and q objectives (q > 1):

    Minimize y = f(x) = (f_1(x), f_2(x), ..., f_q(x))

where x ∈ R^p and y ∈ R^q.

Non-dominated solutions: a solution a is said to dominate a solution b if and only if:
(1) f_i(a) ≤ f_i(b) for all i ∈ {1, 2, ..., q}
(2) f_i(a) < f_i(b) for at least one i ∈ {1, 2, ..., q}
Solutions that are not dominated by any other solution are called non-dominated solutions.

Local optimality in the Pareto sense: a solution a is locally optimal in the Pareto sense if there exists a real ε > 0 such that there is no solution b which dominates a with b ∈ R^p ∩ B(a, ε), where B(a, ε) denotes a ball of center a and radius ε.

Global optimality in the Pareto sense: a solution a is globally optimal in the Pareto sense if there does not exist any vector b such that b dominates a. The main difference between this definition and the definition of local optimality is that no restriction is placed on the set R^p anymore.

Pareto-optimality: a feasible solution is called Pareto-optimal when it is not dominated by any other solution in the feasible space. The Pareto-optimal set, also called the efficient set, is the collection of all Pareto-optimal solutions, and their corresponding images in the objective space are called the Pareto-optimal frontier.

Much research has been devoted to the subject of multi-objective problems, and the methods developed to solve multi-objective optimization can generally be classified into 5 different types:
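As a minimal illustration of these definitions (ours, not from the paper), the dominance test and the extraction of the non-dominated solutions of a finite set can be sketched in Python as:

```python
# Pareto dominance for a minimization problem, following the two conditions
# above: a dominates b iff a is no worse in every objective and strictly
# better in at least one.
def dominates(a, b):
    """a, b: tuples of objective values (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Return the solutions not dominated by any other solution in the set."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

points = [(2, 9), (3, 4), (5, 3), (6, 6), (8, 2)]
print(non_dominated(points))  # (6, 6) is dominated, e.g. by (3, 4)
```

Here each solution is represented only by its objective vector; in the flow shop setting these would be the two objective values of a schedule.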
– Scalar methods,
– Interactive methods,
– Fuzzy methods,
– Methods which use metaheuristics,
– Decision aid methods.

For a detailed description of the above methods, readers are referred to Collette and Siarry (2003). Among the methods mentioned above, metaheuristics seem practically suitable for solving multi-objective optimization problems. Different approaches appear in the literature, some of them being the Vector Evaluated Genetic Algorithm (Schaffer, 1985), Multi-Objective Genetic Algorithm (MOGA) (Fonseca and Fleming, 1993), Niched Pareto Genetic Algorithm (NPGA) (Horn et al., 1994), Non-dominated Sorting Genetic Algorithm (NSGA & NSGA-II) (Deb, 1999; Deb et al., 2002), Pareto Stratum-Niche Cubicle Genetic Algorithm (PS-NC GA) (Hyun et al., 1998), Multiple Objective Genetic Local Search (MOGLS) (Jaszkiewicz, 1999), Strength Pareto Evolutionary Algorithm (SPEA & SPEA2) (Zitzler et al., 2001a,b), Micro-Genetic Algorithm (Coello Coello and Toscano Pulido, 2001), Pareto Archive Evolution Strategy (PAES) (Knowles and Corne, 1999), Multi-Objective Tabu Search (MOTS) (Pilegaard, 1997), and Multi-Objective Scatter Search (MOSS) (Beausoleil, 2006).
3 Problem definition
In this paper, a permutation flow shop problem is considered. The permutation flow shop represents a particular case of the flow shop scheduling problems; its goal is to achieve a schedule for a number of jobs on several machines with respect to predetermined objective functions and related constraints. Consider a hypothetical permutation flow shop scheduling problem in which n jobs are to be processed on m machines, where the machines are continuously available from time zero onwards. Each job consists of m operations, and the jth operation (j = 1, ..., m) of each job must be processed on machine j. At any time, every job can be processed on at most one machine, and every machine can process at most one job. A job can start on machine j only if it has been completed on machine j − 1 and machine j is free. In addition, preemption is not permitted; i.e., once an operation is started, it must be completed without interruption. For the permutation flow shop, the operating sequences of the jobs are the same on every machine. That is to say, if one job is at the ith position on machine 1, then this job will be at the ith position on all the machines. Given the known uninterrupted processing time of job i on machine j, p_ij, the due date of job i, d_i, and the precedence constraints, the objective is to seek a schedule that minimizes the weighted mean completion time and the weighted mean tardiness of the manufacturing system.

3.1 Weighted mean completion time
The first objective considered is the minimization of the weighted mean completion time. This objective can be calculated by the following expression:

    (1/W) Σ_{i=1}^{n} w_i · C_i                    (1)
where C_i is the completion time of job i, n is the number of jobs and w_i is an importance factor related to job i; for instance, it may be equal to a holding cost per unit time. These importance factors are not required to be less than 1. W is the sum of the jobs' weights; that is, W = Σ_{i=1}^{n} w_i.
Let C(π_k, j) denote the completion time of the kth job (k = 1, 2, ..., n) on machine j in a permutation π = {π_1, π_2, ..., π_n}. The completion time of the kth job in this permutation, which equals C(π_k, m), can be calculated by the following recursion:

    C(π_1, 1) = p_{π_1,1}
    C(π_k, 1) = C(π_{k−1}, 1) + p_{π_k,1},                        k = 2, ..., n
    C(π_1, j) = C(π_1, j − 1) + p_{π_1,j},                        j = 2, ..., m
    C(π_k, j) = max{C(π_{k−1}, j), C(π_k, j − 1)} + p_{π_k,j},    k = 2, ..., n; j = 2, ..., m
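The recursion above translates directly into code. The following sketch (our illustration, not the authors' implementation) computes all completion times C(π_k, j) for a given permutation:

```python
# Completion times for a permutation flow shop, following the recursion above.
# p[i][j] is the processing time of job i on machine j; perm is the job
# permutation using 0-based job indices.
def completion_times(p, perm):
    n, m = len(perm), len(p[0])
    C = [[0.0] * m for _ in range(n)]
    for k, job in enumerate(perm):
        for j in range(m):
            ready_job = C[k][j - 1] if j > 0 else 0.0      # job done on previous machine
            ready_machine = C[k - 1][j] if k > 0 else 0.0  # machine freed by previous job
            C[k][j] = max(ready_job, ready_machine) + p[job][j]
    return C

p = [[3, 2], [1, 4], [2, 1]]            # 3 jobs, 2 machines
C = completion_times(p, [0, 1, 2])
print([row[-1] for row in C])           # per-job completion times C(pi_k, m): [5, 9, 10]
```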
3.2 Weighted mean tardiness
Another objective considered is the minimization of the weighted mean tardiness. This objective is due-date based and measures how well due dates are being met; that is to say, it takes into account the due dates that are violated. To calculate the value of this objective, the following expression is used:

    (1/W) Σ_{i=1}^{n} w_i · T_i                    (2)
where n is the number of jobs, T_i is the tardiness of job i, equal to max{0, C_i − d_i}, and w_i and W are as explained in Section 3.1. It can easily be noticed that the objectives considered are inherently conflicting. To illustrate the point, note that optimizing the first objective as a single-objective problem is performed regardless of the jobs' due dates; hence, the resulting sequences may involve large due-date violations, imposing large penalties on the system. On the other hand, while optimizing the second objective, the goal is to schedule jobs as close as possible to their due dates. However, the sequence obtained is very likely to cause large penalties to the system, due to the fact that this sequence is formed without regard to the jobs' completion times.
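Given the completion times, both objectives (1) and (2) are straightforward to evaluate. The following sketch illustrates this for a hypothetical three-job instance (the numbers are ours, not from the paper):

```python
# Weighted mean completion time (1) and weighted mean tardiness (2) for a
# given schedule. C, d and w hold per-job completion times, due dates and
# weights, following the definitions in Sections 3.1 and 3.2.
def weighted_mean_objectives(C, d, w):
    W = sum(w)
    wmct = sum(wi * ci for wi, ci in zip(w, C)) / W
    tard = [max(0, ci - di) for ci, di in zip(C, d)]   # T_i = max{0, C_i - d_i}
    wmt = sum(wi * ti for wi, ti in zip(w, tard)) / W
    return wmct, wmt

print(weighted_mean_objectives(C=[5, 9, 10], d=[6, 8, 9], w=[1, 2, 1]))  # (8.25, 0.75)
```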
4 Introduction to particle swarm
4.1 Traditional particle swarm
Particle swarm optimization was introduced by Kennedy and Eberhart (1995) after studying the social behavior of birds. According to what scientists have found, in order to search for food, each member in a flock of birds determines its velocity based on its personal experience as well as information gained through interaction with other members of the flock. This idea was the main principle for PSO. Each bird, called a particle, flies through the solution space of the optimization problem searching for the optimum solution, and thus its position represents a potential solution for the problem. In particle swarm terminology, the available solutions in each iteration are called the "swarm", which is equivalent to the "population" in genetic algorithms. The following are the mathematical concepts of PSO. Suppose a D-dimensional searching space and a swarm of N particles searching for the globally optimal solution within the searching space. Three D-dimensional vectors are assigned to each particle: position, denoted by X_i = (x_i1, x_i2, ..., x_iD); velocity, denoted by V_i = (v_i1, v_i2, ..., v_iD); and best personal position, denoted by P_i = (p_i1, p_i2, ..., p_iD). In a continuous searching space, each dimension of the position vector corresponds to the value of a decision variable of the problem. In other words, the position of each particle is a potential solution for the problem at hand, and the fitness of this particle (solution) can be calculated by putting these values into a predetermined objective function. When the fitness is more desirable in terms of the objective function, the particle's position is better. The velocity vector represents the distance a particle will traverse in each dimension in each iteration of the algorithm. The best personal position vector is the best position visited by a particle.
Particles also need to be aware of the best global position visited by the whole swarm, which is denoted by P_g = (p_g1, p_g2, ..., p_gD). Then, to update velocity and position vectors of the
particles for the next iteration, the following equations are used to compute the new elements of the velocity and position vectors:

    v_id(k + 1) = χ [ω(k) v_id(k) + c_1 r_1 (p_id(k) − x_id(k)) + c_2 r_2 (p_gd(k) − x_id(k))]    (3)

    x_id(k + 1) = x_id(k) + v_id(k + 1)

where ω(k) is the inertia weight, which represents the particle's preference to continue moving in the same direction it was going on the previous iteration, as introduced by Shi and Eberhart (1998); χ is the constriction coefficient, which serves as a balancing factor for the local and global search, as introduced by Clerc (1999); c_1 and c_2 are the cognitive and social factors respectively, often set equal to 2; k represents the iteration number; r_1 and r_2 are random numbers in [0, 1]; i (i = 1, 2, ..., N) is the index representing the particles in the swarm; and d (d = 1, 2, ..., n) is the index for the dimensions of the searching space.

4.2 Previous work in multi-objective PSO
The first attempt to modify PSO to handle multi-objective optimization problems was by Moore and Chapman (1999). They modified the p-vector of the particles so that each particle keeps track of all non-dominated solutions (using Pareto preference) experienced by itself. In another study, Coello Coello and Lechuga (2002) proposed the use of an external repository to keep non-dominated solutions. If the repository is full and a newly discovered non-dominated solution lies in a less crowded area of the Pareto front, it replaces a non-dominated solution that lies in a more crowded area. Hu and Eberhart (2002) introduced the use of a dynamic neighborhood in PSO to cope with multi-objective optimization problems. After each update of the swarm, particles update their neighborhoods. The new neighborhoods for the particles are determined in terms of the proximity calculated using one of the objective functions. Then, the particles in each neighborhood try to optimize the other objective function to move towards the Pareto front. Hu et al. (2003) extended this work by using an external repository to store the current set of Pareto optimal solutions.
Parsopoulos and Vrahatis (2002) proposed a weighted aggregation method with three different variants to calculate the weights. Another study by Parsopoulos et al. (2004) suggested using multiple swarms where the number of swarms is equal to the number of objective functions. Each swarm searches one objective function and the best solution found by a swarm is fed to another swarm to direct the search of the particles of that swarm. Fieldsend and Singh (2002) proposed a new data structure to cope with the shortcomings of using a constant size repository. The “dominated tree” structure represents non-dominated solutions compactly. They also studied a stochastic component to increase the effectiveness of the PSO algorithm in multi-objective optimization (Fieldsend and Singh, 2002). A recent study on multi-objective PSO is by Coello Coello et al. (2004). An external repository and a mutation operator are employed. The external repository is composed of two parts: the archive controller and the grid. The archive controller is where non-dominated solutions are kept and newly found solutions are checked against. The grid is used to ensure a uniform distribution of non-dominated solutions along the Pareto front. A mutation operator encourages full exploration of the solution space.
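For concreteness, the canonical single-particle update of Eq. (3) from Section 4.1 can be sketched as follows. The constriction value χ ≈ 0.729 is a common choice from Clerc's analysis, and clamping positions to [0, X_max] is our assumption; neither detail is prescribed by the text above:

```python
import random

# One velocity/position update per Eq. (3) for a single particle in a
# continuous searching space; a minimal sketch, not the paper's MOPS.
def update_particle(x, v, p_best, g_best, chi=0.729, w=1.0, c1=2.0, c2=2.0,
                    v_max=4.0, x_max=4.0):
    new_x, new_v = [], []
    for xd, vd, pd, gd in zip(x, v, p_best, g_best):
        r1, r2 = random.random(), random.random()
        vd = chi * (w * vd + c1 * r1 * (pd - xd) + c2 * r2 * (gd - xd))
        vd = max(-v_max, min(v_max, vd))     # clamp velocity to [-v_max, v_max]
        xd = max(0.0, min(x_max, xd + vd))   # keep position inside [0, X_max]
        new_x.append(xd)
        new_v.append(vd)
    return new_x, new_v
```

In the multi-objective setting of this paper, the choice of g_best (the superior particle) is precisely what Section 5 refines.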
5 The proposed multi-objective particle swarm (MOPS)
5.1 General scheme of MOPS
The pseudo-code of MOPS is as follows:

    Initialize search parameters
    Determine the approximate dynamic ideal point
    Generate N initial particles with elite tabu search
    Generate initial velocities using (7)
    Initialize the adaptive Pareto archive set so that it is empty
    While a given maximal number of iterations is not achieved
        Perform non-dominated sorting
        Update the adaptive Pareto archive set
        Construct the L set as a subset of both diverse and high-quality solutions from the swarm
        Improve the solution quality of each particle from the L set using parallel local search
        Determine the best global position vector
        Determine the best personal vector for each particle in both the swarm and the L set
        For i = 1 to number of particles
            For d = 1 to number of jobs (number of dimensions)
                Update velocity vector's elements, v_id, using (3)
                Update position vector's elements, x_id, using (3)
            Next d
        Next i
        Construct a selection pool from both the new solutions obtained by PSO and the improved solutions by local search
        Select those particles with the shortest distance to the dynamic ideal point for the next iteration
        Update the value of the dynamic ideal point
    End While

5.2 Solution representation
One of the most important decisions in designing a metaheuristic lies in deciding how to represent solutions and relate them in an efficient way to the searching space. The representation should be easy to decode, to reduce the cost of the algorithm. Two different solution representations are used simultaneously in this research: the well-known job-to-position representation and a continuous representation. Each particle concurrently has a job-to-position and a continuous representation, each of which is used in different steps of our algorithm. In the next sections, we discuss how they are used.

5.2.1 Job-to-position representation
One of the most widely used representations for scheduling problems is the job-to-position representation. In this kind of representation, a single-row array of size equal to the number of jobs to be scheduled is formed.
The value of the first element of the array
Location in a sequence:   1  2  3  4  5  6  7
Job to be scheduled:      1  2  4  3  5  6  7

Fig. 1 Job-to-position representation for a flow shop scheduling problem
shows which job is scheduled first, the second value shows which job is scheduled second, and so on. Suppose that in a hypothetical problem the sequence of seven jobs must be determined. Figure 1 shows how this representation is used.

5.2.2 Modified continuous representation
Tasgetiren et al. (2004) introduced a new way of representing scheduling problems using continuous values. In this paper, a modified version of this representation is provided. In a problem with n jobs, a continuous representation is an array made of n real values. These values indicate a particle's position in each dimension of the n-dimensional space the particles move in. However, an upper bound for these values, called X_max, puts a limit on the maximum distance each particle is permitted to travel in each dimension. In this paper, X_max is set to 4. Since the position of a particle does not represent a solution throughout the algorithm, a transformation, as will be described in the next section, is needed to obtain a permutation from a continuous representation and vice versa. Consider the sample job-to-position representation illustrated in Fig. 1. To construct the continuous version of this representation, we first need to generate 7 (as many as the number of jobs to be produced) random numbers in [0, X_max] = [0, 4]. These numbers are then sorted: the smallest of them is assigned to the position that contains the first job, that is, job number 1; the next smallest is assigned to the position that contains the second job, that is, job number 2; and so on. Suppose the numbers shown in Table 1 are the random numbers obtained. To build the continuous representation, we have to assign 0.46 to job number 1, 1.54 to job number 2, 1.77 to job number 3 and so on. Thus, Fig. 2 shows the associated representation. The reverse approach is performed when a job-to-position representation needs to be obtained from the continuous representation.
The first job in a job-to-position representation will be scheduled in the place of the smallest value of the continuous representation, the second job in the place of the next smallest value of the continuous representation, and so on.

Table 1 A sample set of random numbers

No.1   No.2   No.3   No.4   No.5   No.6   No.7
0.46   2.96   1.77   2.49   1.54   3.61   2.88
Location in a sequence:      1     2     3     4     5     6     7
Continuous representation:   0.46  1.54  2.49  1.77  2.88  2.96  3.61

Fig. 2 The continuous representation of Fig. 1
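The transformation between the two representations is essentially a random-key encoding. The following sketch (our illustration, not the authors' code) reproduces the Fig. 1/Fig. 2 example in the decoding direction:

```python
import random

# Conversion between the job-to-position and continuous representations
# described above, with X_max = 4 as in the paper.
def to_continuous(perm, x_max=4.0):
    """Assign the kth-smallest random value to the position holding job k."""
    keys = sorted(random.uniform(0, x_max) for _ in perm)
    pos = [0.0] * len(perm)
    for rank, job in enumerate(sorted(perm)):
        pos[perm.index(job)] = keys[rank]
    return pos

def to_permutation(pos, jobs=None):
    """Smallest value gets the first job, next smallest the second, and so on."""
    jobs = jobs or list(range(1, len(pos) + 1))
    order = sorted(range(len(pos)), key=lambda i: pos[i])
    perm = [0] * len(pos)
    for job, idx in zip(jobs, order):
        perm[idx] = job
    return perm

print(to_permutation([0.46, 1.54, 2.49, 1.77, 2.88, 2.96, 3.61]))  # [1, 2, 4, 3, 5, 6, 7]
```

Decoding the Fig. 2 array recovers exactly the Fig. 1 sequence, and encoding that sequence with fresh random keys decodes back to it.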
5.3 Initialization
Most evolutionary algorithms use a random procedure to generate an initial set of solutions. But, since the output results are strongly dependent on the initial set, we propose a new Elite Tabu Search (ETS) mechanism to construct this set of solutions. The main purpose of applying this metaheuristic is to build a set of potentially diverse and high-quality solutions in the domination-relation sense. Before describing the elements of the proposed tabu search, the following definition must be provided:

Ideal point: the ideal point is a virtual point whose coordinates are obtained by separately optimizing each objective function. Finding the ideal point therefore requires separately optimizing each of the objective functions of the problem. On the other hand, since the problem in question is non-linear, even optimizing it with only one objective at a time is a demanding task. To overcome this obstacle, the problem at hand is first linearized so that each of the objective functions can be solved to optimality with available optimization software such as LINGO 8. Another problem in the process of finding the ideal point, even after linearization, is the NP-hardness of the large-sized linearized problems, due to their large feasible space and our inability to find the global optimum (or even a strong local optimum) in a reasonable time. The following approach is adopted to solve this problem: when finding the exact ideal point is not easy, an approximation of it, called the Dynamic Ideal Point (DIP), is used instead. The approximation requires interrupting the optimization software (LINGO 8) ξ seconds after the first feasible solution is found and reporting the best solution found up to that time as the respective coordinate of the ideal point. The value of ξ is determined after running various test problems.
To improve this approximation and to prevent it from reducing the quality of our algorithm, the DIP must be updated at the end of each iteration of the proposed particle swarm. The procedure for updating the position of the ideal point will be explained in Section 5.4.

5.3.1 ETS implementation
The desired size of the swarm, denoted by N, remains constant during the optimization process. To construct N diverse and good solutions, the proposed Elite Tabu Search (ETS) is run α × N times, where α is an integer greater than or equal to 1. The tabu search starts from a predetermined point called the starting point, which can be set to the sequence related to either of the two coordinate values obtained for the ideal point or initial DIP. In our research, the string of objective function 1 is considered as the starting point. Then the current solution is saved in a virtual list and replaced by a desired solution in its neighborhood that meets the acceptance criterion. This process is continued until the prespecified termination criterion is met. The detailed description of the implementation of the proposed tabu search is as follows:

5.3.1.1 Move description. The proposed move procedure, which is used to generate a neighborhood subset μ, is based on an implementation of what is known in the GA literature as the inversion operator. Inversion is a unary operator that first chooses two random cut points in a solution; the elements between the cut points are then reversed. An example of the inversion operator is presented below:

Before inversion: 2 1 3 | 4 5 6 7 | 8 9
After inversion:  2 1 3 | 7 6 5 4 | 8 9
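The inversion move is simple to implement. The following sketch (our illustration) reproduces the example above when the cut points are chosen as 3 and 7:

```python
import random

# The inversion move used to generate neighbours in the proposed ETS:
# reverse the segment between two cut points, chosen at random if not given.
def invert(seq, i=None, j=None):
    if i is None or j is None:
        i, j = sorted(random.sample(range(len(seq) + 1), 2))
    return seq[:i] + seq[i:j][::-1] + seq[j:]

print(invert([2, 1, 3, 4, 5, 6, 7, 8, 9], 3, 7))  # [2, 1, 3, 7, 6, 5, 4, 8, 9]
```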
5.3.1.2 Tabu list. The move mechanism uses the intelligent tabu search strategy, whose principle is to avoid returning to recently visited solutions by using an adaptive memory called the tabu list. The proposed tabu list is attributive and consists of a list of pairs of integers (i, j), where i, j ∈ {1, ..., n}. It is forbidden to exchange job i with job j if the pair (i, j) exists in the tabu list. The size of the tabu list, denoted by ψ, is a predetermined and sufficiently large value. In order to diversify the search, a long-term memory is deployed and the tabu tenure (T_max) is considered infinite. A frequency-based memory mechanism, as introduced by Ben-Daya and Al-Fawzan (1998), as well as a recency-based memory are also utilized here.

5.3.1.3 Search direction. In order to simultaneously maintain suitable intensification and diversification, we introduce a new function based on the goal attainment method. This function can be written as follows:

    ζ = Σ_{i=1}^{k} w_i | f_i − F_i |                    (4)
where f_i is the ith objective function value of the solution, F_i is the ith coordinate of the ideal point, and w_i, the weight of the ith objective function, is a random number drawn from the uniform distribution U(0, 1). The motivation for using this metric is that a solution is efficient for a given set of weights w if it minimizes ζ. The main difference between the proposed function and the existing ones is that it allows working with a set of solutions that is not necessarily convex. This advantage makes the proposed ETS applicable to every optimization problem, whatever the shape of its search space. Another advantage is achieved by generating w_i randomly: the proposed ETS can search the solution space in various directions, so high diversification is maintained. To explain the acceptance criteria for a new solution, the variable η is defined as follows:

    η = ζ_B − ζ_A                    (5)
where A is the current solution and B is generated from A by the most recent move. The acceptance criteria are then defined as follows:

(1) If η ≤ 0 and the move is not in the tabu list, solution A is replaced by B.
(2) If η ≤ 0 but the move is in the tabu list, the aspiration strategy is used and solution A is replaced by B.
(3) If η > 0 and the move is not in the tabu list, solution A is replaced by B provided that B is not dominated by A.
(4) If η > 0 and the move is in the tabu list, solution A does not change.

5.3.1.4 Stopping criteria. The proposed tabu search is run α × N times. After running the ETS, we have α × N solutions, selected from the whole set of solutions visited by the ETS so as to be as near to the Pareto front as possible. To construct the set of N initial solutions, the N solutions with the shortest distance to the dynamic ideal point are selected.

5.3.1.5 Generating initial velocities. In order to use the job sequences obtained by the ETS in PSO, we need to assign initial velocities to each of these initial sequences (initial
position of the particles). Since the representation used in the main phase of our algorithm is the modified continuous representation, continuous values are assigned to the velocities. The velocity of a particle is a single-row array whose size equals the number of jobs to be sequenced. These values are restricted to the range

v_id(k) ∈ [v_min, v_max] = [−4, 4]        (6)
Initial velocities are generated by the following formula:

v_id(k) = β [v_min + (v_max − v_min) r_1]        (7)
where β = 1 and r_1 is a random number in [0, 1]. The range of acceptable velocity values, [v_min, v_max], is chosen so that |v_id(k)| ≤ X_max. Since the traveling space of the particles is limited to X_max, no velocity value greater than X_max is needed; furthermore, if a step larger than v_max were required to escape a local optimum, the particle would not be trapped.

5.4 DIP updating procedure

To update the dynamic ideal point, the following strategy is developed: at the end of each iteration of the proposed particle swarm, if the minimum value of an objective function over all particles of the swarm is smaller than the corresponding coordinate of the dynamic ideal point, that coordinate is replaced by this value. With this procedure, the dynamic ideal point progressively improves during the optimization process.

5.5 Adaptive Pareto archive set

In many studies, a Pareto archive set is maintained to explicitly store a limited number of non-dominated solutions. This approach prevents losing portions of the current non-dominated front during the optimization process. The archive is iteratively updated to approach the correct Pareto-optimal front. When a new non-dominated solution is found, it enters the archive if the archive is not full; otherwise it is ignored. When a new solution enters the archive, any archived solution dominated by it is removed. When the maximum archive size is exceeded, removing a non-dominated solution may destroy the characteristics of the trade-off front. There are many efficient methods for updating the archive when its size is exceeded, the most widely adopted being clustering methods and k-nearest-neighbor methods. Most of these algorithms, however, do not preclude temporary deterioration and need not converge to the Pareto set.
In this study, we propose an adaptive Pareto archive set updating procedure that attempts to prevent losing new non-dominated solutions found after the Pareto archive has reached its maximum size. The archive size, denoted Arch_size, is a prespecified value determined at the beginning of the algorithm. When a new non-dominated solution is found, one of the two following cases occurs when updating the Pareto archive set:

(1) The number of solutions in the archive set is less than Arch_size; the new solution then joins the archive set.
(2) The number of solutions in the archive set is equal to (or greater than) Arch_size; the new solution is then added only if its distance to the nearest non-dominated solution in the archive is greater than or equal to the radius of the "duplication area" of that nearest solution. The duplication area of a non-dominated solution in the Pareto archive is defined as a ball centered at the solution with radius λ. This area serves as a dissimilarity measure for finding diverse non-dominated solutions. The distance between the new non-dominated solution and the nearest archived non-dominated solution is measured as a Euclidean distance. In other words, if the new non-dominated solution does not lie in the duplication area of its nearest archived neighbor, it is considered dissimilar and is added to the Pareto archive set. The main advantage of this procedure is that it saves dissimilar non-dominated solutions without losing any existing non-dominated solutions in the archive. The Pareto archive set is updated at the end of each phase of the proposed particle swarm.

5.6 PSO iteration

At the beginning of each iteration, the number of available particles equals the swarm size. One iteration of our proposed algorithm comprises the main PSO plus a local search. To save time, only a select group of solutions, named L_set, is chosen for local search. This group is chosen so as to enhance the quality of the proposed algorithm by maintaining both diversification and good quality. Note that the local search described below is applied in parallel with the main PSO.

5.6.1 L_set

L_set is a subset of diverse, high-quality solutions that constitutes an approximation to the Pareto-optimal set. It consists of two subsets, L_set1 of maximum size b1 and L_set2 of size b2; that is, |L_set| = b ≤ b1 + b2.
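Before detailing the construction of L_set, the two-case archive-update rule of Section 5.5 can be sketched as follows (a minimal sketch assuming objective vectors as tuples; the removal of dominated archive members described earlier is omitted here, and `lam` stands for the duplication radius λ):

```python
import math

def update_archive(archive, new_obj, max_size, lam):
    """Adaptive Pareto-archive update: always add while the archive has room;
    once full, add new_obj only if it lies outside the duplication area
    (a ball of radius lam) of its nearest archive member."""
    if len(archive) < max_size:          # case (1): archive not yet full
        archive.append(new_obj)
        return True
    nearest = min(math.dist(new_obj, a) for a in archive)
    if nearest >= lam:                   # case (2): dissimilar enough, keep it
        archive.append(new_obj)
        return True
    return False                         # inside a duplication area: ignored

arch = [(0.0, 0.0), (10.0, 10.0)]
print(update_archive(arch, (0.5, 0.5), max_size=2, lam=1.0))  # → False (too similar)
print(update_archive(arch, (5.0, 5.0), max_size=2, lam=1.0))  # → True  (dissimilar, added)
```

Distances are taken in objective space, matching the Euclidean measure used in the text.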
The construction of L_set1 involves selecting at most b1 solutions from the Pareto archive set: the (at most) b1 non-repeated non-dominated solutions with the highest crowding distance are selected and added to L_set. The crowding distance, a density measure, is a distance value equal to the absolute difference in the function values of two adjacent solutions; it helps diversify the search over the solution space. For a more detailed description of the crowded-comparison approach, readers may refer to Deb et al. (2002). Afterwards, the dominated and non-dominated solutions that do not belong to L_set are identified, and the minimum Euclidean distance of each of these solutions to the solutions in L_set is computed. To build L_set2, the b2 solutions with the largest of these minimum distances are selected and added to L_set. Each member of L_set is subjected to an improvement method that aims to enhance the quality of these solutions. The improvement method we implemented is based on a parallel local search strategy consisting of two well-known operators, the exchange procedure and the insert procedure, which operate separately on each solution. These two operators are as follows:

XP local search. If a solution is added to L_set, it is iteratively improved using the pairwise exchange procedure (XP). In XP, the positions of every pair of jobs are exchanged.
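The XP neighbourhood just described can be sketched as a generator over all n(n − 1)/2 position pairs (a sketch; the name `xp_neighbours` is ours, not the paper's):

```python
from itertools import combinations

def xp_neighbours(seq):
    """Pairwise exchange (XP): yield every sequence obtained by swapping
    the jobs in one pair of positions."""
    for i, j in combinations(range(len(seq)), 2):
        s = list(seq)
        s[i], s[j] = s[j], s[i]
        yield s

print(list(xp_neighbours([1, 2, 3])))  # → [[2, 1, 3], [3, 2, 1], [1, 3, 2]]
```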
IP local search. If a solution exists in L_set, the insert procedure (IP) is applied to it. In IP, each job is removed from its current position and inserted into another position.

As mentioned before, each solution is put through all of the above local searches simultaneously and is separately improved in an intelligent manner. Each local search improves the solution with the following mechanism:

For iter = 1 to Local_iteration   (Local_iteration is the number of iterations)
    Apply the desired local search to the current solution a
    Name the new solution b
    If the new solution b dominates solution a, move b to the selection pool
End for

Note that whenever the local search finds an improved solution, this solution has no velocity values; therefore, velocity values for it are generated anew using Eq. (7) with β = 0.5. Besides, since the local search uses the job-to-position representation, the solution's continuous representation must be reconstructed, by the strategy described in Section 5.2.2, in order to prepare it for PSO. Moreover, the personal best position visited by this solution is set to its current continuous position.

5.6.2 Updating positions and velocities

Equations (6) and (7) are used to update the velocity and position of the solutions that were available at the start of the iteration. The gbest model of Kennedy and Eberhart (1995) is deployed; that is, no local neighborhood is defined. Further, to overcome another difficulty in multi-objective PSO, the determination of the global best vector (Pg), this vector is set equal to the position vector of the non-dominated solution in the Pareto archive set that has the highest crowding distance. After updating, all particles of the swarm are moved to the selection pool; that is, the selection pool consists of both the new solutions obtained by PSO and the solutions improved by local search. In other words, the selection pool is an array of high-quality solutions that are candidates for the next iteration. Put differently, even if a better solution is found by local search, its presence in the next iteration depends on the quality of the other solutions in the selection pool, so the solutions found by local search may or may not be fed into the swarm. Since the size of the pool will in all likelihood exceed the swarm size, the particles with the lowest distance to the DIP are transferred to the next iteration.
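The crowding distance, used here both to select L_set1 members and the global best Pg, can be sketched after Deb et al. (2002); boundary solutions receive an infinite distance so they are always preferred (a sketch over objective-vector tuples):

```python
def crowding_distances(points):
    """Crowding distance per point: for each objective, sort the points and add
    the normalized gap between each point's two neighbours in that ordering."""
    n, m = len(points), len(points[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: points[i][k])
        span = points[order[-1]][k] - points[order[0]][k] or 1.0
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundary points
        for a, b, c in zip(order, order[1:], order[2:]):
            dist[b] += (points[c][k] - points[a][k]) / span
    return dist

front = [(1.0, 9.0), (3.0, 6.0), (5.0, 4.0), (9.0, 1.0)]
d = crowding_distances(front)
print(d)  # boundary solutions are infinite; interior values reflect neighbour gaps
```

The solution with the highest crowding distance lies in the least crowded region of the front, which is exactly the property exploited when choosing Pg.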
6 Experimental results

The performance of the proposed multi-objective particle swarm is compared with a well-known multi-objective genetic algorithm, SPEA-II. Both algorithms were coded in Visual Basic 6 and executed on a Pentium 4, 2.8 GHz, running Windows XP with 256 MB of RAM. We first present a brief discussion of the implementation of SPEA-II.
6.1 Strength Pareto evolutionary algorithm II (SPEA-II)

Zitzler et al. (2001) proposed a Pareto-based method, the strength Pareto evolutionary algorithm II (SPEA-II), an enhanced version of SPEA. In SPEA-II, each individual in both the main population and the elitist non-dominated archive is assigned a strength value that incorporates both dominance and density information. On the basis of the strength value, the final rank value is determined by the summation of the strengths of the points that dominate the current individual. Meanwhile, a density estimation method is applied to obtain the density value of each individual. The final fitness is the sum of the rank and density values. Additionally, a truncation method is used to maintain a constant number of individuals in the Pareto archive.

6.2 Algorithms' assumptions

The experiments are implemented in two parts: one for the small-sized problems and the other for the large-sized ones. For both experiments, we make the following assumptions:
General assumptions: (1) The processing times p_ij are integers generated from a uniform distribution U(1, 40). (2) The due dates d_i, generated by the method introduced in Loukil et al. (2005), are uniformly distributed in the interval [P(1 − T − R/2), P(1 − T + R/2)], where P = (n + m − 1) p̄ with p̄ the mean processing time; the values of T and R are set to 0.2 and 0.6, respectively. (3) The jobs' importance factors w_i are uniformly generated in the interval (1, 20). (4) Each experiment is repeated 15 times.

Multi-objective particle swarm's assumptions: (1) The value of α is set to 3. (2) Local_iteration is fixed to 5. (3) The values of c1 and c2 are fixed to 2. (4) The value of χ is set to 0.73. (5) The initial and final values of the inertia weight are set to 0.9 and 0.4, respectively.

SPEA-II's assumptions: (1) The initial population is randomly generated. (2) Binary tournament selection is used. (3) The selection rate is set to 0.8. (4) Order crossover (OX) and inversion (IN) are used as the crossover and mutation operators. (5) The rates of OX crossover and inversion are set to 0.8 and 0.4, respectively.

6.3 Small-sized problems

6.3.1 Test problems

The first experiment is carried out on a set of small-sized problems comprising 16 test problems of different sizes, depicted in Table 2. The proposed multi-objective particle swarm is applied to the generated problems, and its performance is compared, based on several comparison metrics, with the above-mentioned multi-objective genetic algorithm. The comparison metrics we implemented are explained in the next section.

6.3.2 Comparison metrics

To validate the reliability of the proposed MOPS, five comparison metrics are taken into account:
Table 2 Problem sets for the small-sized problems

Problem   Job (n)   Machine (m)   No. of Pareto solutions
1         6         5             3
2         6         10            4
3         6         15            4
4         6         20            3
5         7         5             4
6         7         10            7
7         7         15            6
8         7         20            7
9         8         5             7
10        8         10            7
11        8         15            10
12        8         20            4
13        9         5             9
14        9         10            4
15        9         15            7
16        9         20            5
The number of Pareto solutions (N.P.S). This metric gives the number of Pareto-optimal solutions that each algorithm can find. The number of Pareto solutions found by each algorithm is compared with the total set of Pareto-optimal solutions obtained by the total enumeration algorithm.

Error ratio (ER). This metric measures the nonconvergence of the algorithms towards the Pareto-optimal frontier. It is defined as

E = (1/N) Σ_{i=1}^{N} e_i

where N is the number of found Pareto-optimal solutions, and e_i = 0 if solution i belongs to the Pareto-optimal frontier, e_i = 1 otherwise.
The closer this metric is to 1, the less the solution set has converged toward the Pareto-optimal frontier.

Generational distance (GD). This metric measures the distance between the Pareto-optimal frontier and the solution set. It is defined as

G = (1/N) Σ_{i=1}^{N} d_i
where d_i is the Euclidean distance between solution i and the closest solution belonging to the Pareto-optimal frontier obtained by total enumeration.
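The error ratio and generational distance can be sketched as follows (a sketch over objective-vector tuples; `true_front` stands for the enumerated Pareto-optimal set in objective space):

```python
import math

def error_ratio(found, true_front):
    """ER: fraction of the found solutions that are NOT Pareto-optimal."""
    return sum(f not in true_front for f in found) / len(found)

def generational_distance(found, true_front):
    """GD: mean Euclidean distance from each found solution to its
    nearest point on the true Pareto-optimal frontier."""
    return sum(min(math.dist(f, t) for t in true_front) for f in found) / len(found)

true_front = [(0.0, 4.0), (2.0, 2.0), (4.0, 0.0)]
found = [(0.0, 4.0), (3.0, 3.0)]
print(error_ratio(found, true_front))           # → 0.5 (one of two found points is optimal)
print(generational_distance(found, true_front))
```

Both metrics are 0 exactly when every found solution lies on the true frontier.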
Spacing metric (SM). The spacing metric measures the uniformity of the spread of the points of the solution set. It is defined as

S = [ (1/(N − 1)) Σ_{i=1}^{N} (d̄ − d_i)² ]^{1/2}

where d̄ is the mean value of all d_i.

Diversification metric (DM). This metric measures the spread of the solution set. It is defined as

D = Σ_{i=1}^{N} max ‖x_i − y_i‖
where ‖x_i − y_i‖ is the Euclidean distance between the non-dominated solution x_i and the non-dominated solution y_i.

6.3.3 Parameter setting

To tune the algorithms, extensive experiments were conducted with different sets of parameters. In the end, the following set was found to be effective in terms of solution quality and diversity level:

Multi-objective particle swarm's tuned parameters. (1) The number of solutions at each iteration, N, is set to 50. (2) The algorithm is terminated after 50 iterations. (3) Since each objective function is linearized and the Lingo software can obtain the optimum values of the coordinates of the ideal point immediately, the value of ξ is set to 0. (4) The neighborhood subset size, μ, and the tabu list size, ψ, are set to 3 and 20, respectively, in the ETS. (5) The maximum Pareto archive size, Arch_size, is fixed to 35. (6) The radius of the duplication area, λ, is set to 1. (7) The maximum size of L_set1, b1, is set to 15, and the size of L_set2, b2, is set to 10.

SPEA-II's tuned parameters. (1) The population size is set to 50. (2) The algorithm is terminated after 50 iterations.

6.3.4 Comparative results

In this section, the proposed MOPS is applied to the test problems and its performance is compared with that of SPEA-II. Table 3 reports the average values of the above-mentioned comparison metrics. As shown in Table 3, the proposed MOPS is superior to SPEA-II in each test problem. In other words:

(1) MOPS achieved a greater number of Pareto-optimal solutions than SPEA-II.
(2) The proposed MOPS has lower error ratios in most test problems, which indicates less nonconvergence towards the Pareto-optimal frontier.
Table 3 Computational results for the small-sized problems

         N.P.S            ER             GD             SM             DM
Problem  MOPS  SPEA-II    MOPS  SPEA-II  MOPS  SPEA-II  MOPS  SPEA-II  MOPS  SPEA-II
1        3     3          0     0        0     0        2.74  3.12     4.65  0.72
2        4     4          0     0        0     0        4.22  6.42     6.51  1.19
3        4     4          0     0        0     0        5.12  7.08     7.34  1.25
4        3     3          0     0        0     0        5.37  5.89     5.91  1.11
5        4     3.6        0     0.08     0     1.81     1.28  2.53     5.44  1.25
6        5.77  5.6        0.14  0.08     0.32  3.46     0.16  1.04     6.56  1.83
7        5.79  4.07       0.05  0.26     0.12  16.52    0.09  0.87     8.45  2.03
8        6.73  5.4        0.14  0.14     0.07  12.67    0.17  0.4      7.14  2.69
9        5.55  4.47       0.23  0.23     0.25  6.77     0.18  0.67     6.31  1.5
10       5.73  3.33       0.31  0.43     0.54  20.23    1.64  2.34     6.47  2.81
11       7.9   7.27       0.1   0.11     0.19  7.14     0.29  0.64     5.64  3.57
12       3.7   2.27       0.02  0.36     0.04  25.6     2.35  5.89     8.07  3.47
13       7.57  2          0.43  0.8      0.61  22.71    1.18  1.56     7.93  3.18
14       2.81  1.54       0.13  0.15     0.06  6.42     4.89  7.23     7.43  3.43
15       4.87  2          0.20  0.59     0.52  38.42    3.75  4.25     8.54  3.04
16       3.67  2.33       0.15  0.34     0.39  27.55    6.84  8.66     9.96  3.23
(3) The proposed particle swarm obtains Pareto solutions that are considerably closer to the true Pareto-optimal frontier than those of the benchmark algorithm.
(4) MOPS provides non-dominated solutions with lower average spacing-metric values, which shows that the non-dominated solutions obtained by MOPS are more uniformly distributed than those of the other algorithm.
(5) The average values of the diversification metric for MOPS are considerably higher than for the other algorithm; in other words, MOPS finds non-dominated solutions that are more scattered.

Table 4 reports the average computational times consumed by the algorithms.

Table 4 The average values of computational times (sec.) for the small-sized problems
Problem  MOPS  SPEA-II
1        12    1
2        12    2
3        17    2
4        26    3
5        9     1
6        14    1
7        16    2
8        18    3
9        9     2
10       13    2
11       37    2
12       54    3
13       12    1
14       44    2
15       37    3
16       68    4
Table 5 Problem sets for the large-sized problems

Problem  Job (n)  Machine (m)
1        50       5
2        50       10
3        50       15
4        50       20
5        100      5
6        100      10
7        100      15
8        100      20
9        200      5
10       200      10
11       200      15
12       200      20
13       300      5
14       300      10
15       300      15
16       300      20
17       500      5
18       500      10
19       500      15
20       500      20
Table 6 Computational results for the large-sized problems

         N.P.S            QM             SM             DM
Problem  MOPS   SPEA-II   MOPS  SPEA-II  MOPS  SPEA-II  MOPS   SPEA-II
1        17.54  14.27     63.7  36.7     5.06  6.90     17.97  12.22
2        18.76  15.56     70.4  29.6     5.18  5.80     21.37  17.56
3        17.45  14.89     74.5  25.5     5.16  6.14     18.75  14.46
4        19.16  17.34     69.9  30.1     6.22  6.35     25.37  18.88
5        18.75  16.87     63.2  36.8     5.15  5.55     21.52  18.46
6        18.47  16.54     61.8  38.2     5.77  6.81     20.45  20.47
7        20.53  18.67     63.8  36.2     6.23  6.65     31.29  18.72
8        19.82  16.37     77.2  22.8     5.12  5.55     23.76  19.64
9        21.39  17.06     61.8  38.2     6.49  6.85     34.30  26.67
10       22.44  18.32     70.9  29.1     6.44  6.85     30.17  25.46
11       25.56  21.67     66.1  33.9     7.29  8.32     25.44  21.12
12       23.92  20.56     82.1  17.9     6.25  6.45     30.70  28.55
13       24.71  22.71     76.7  23.3     4.39  7.67     17.04  12.34
14       27.36  24.44     65.4  32.6     5.45  6.46     33.67  32.40
15       25.44  19.93     77.4  22.6     5.65  6.31     25.15  22.46
16       19.92  14.76     67.3  32.7     6.26  6.37     33.65  29.81
17       25.37  23.89     63.4  36.6     6.30  6.65     20.52  17.45
18       31.18  26.66     73.5  26.5     6.35  6.74     31.07  27.57
19       35.75  29.58     73.6  26.4     7.47  8.32     29.87  26.09
20       30.63  27.45     66.4  33.6     4.16  5.24     26.23  24.91
As illustrated in Table 4, the proposed MOPS consumes more computational time than SPEA-II. Since, owing to the structures of the proposed elite tabu search and the local searches, MOPS can intelligently search more regions of the search space, this higher computational time is reasonable.
6.4 Large-sized problems

6.4.1 Test problems

Another set of experiments is implemented for the large-sized problems. For this purpose, 20 test problems of different sizes are generated, as illustrated in Table 5.

6.4.2 Comparison metrics

Because of the large size of these test problems, it is impossible to find the Pareto-optimal solutions by total enumeration. Therefore, the comparison metrics used for the small-sized problems must be changed. The following comparison metrics are used instead: (1) the number of non-dominated solutions (N.P.S) that each algorithm can find; (2) the quality metric (QM), measured by pooling the non-dominated solutions found by the two algorithms, A and B, and reporting the shares of the pooled non-dominated solutions discovered by algorithm A and by algorithm B; (3) the spacing metric (SM); and (4) the diversification metric (DM). The definitions of the third and fourth metrics are as explained in the small-sized-problems section.

6.4.3 Parameter setting

To tune the algorithms for this category of problems, different experiments with different sets of parameters were carried out. In the end, the following set was found to be effective in terms of the above-mentioned metrics:
Table 7 The average values of computational times (sec.) for the large-sized problems

Problem  MOPS    SPEA-II
1        162.26  83.4
2        167.22  84.61
3        170.64  85.33
4        172.24  84.72
5        226.12  112.95
6        224.34  113.12
7        242.39  120.84
8        242.03  121.04
9        304.17  121.72
10       294.61  118.65
11       342.28  137.14
12       286.45  144.78
13       384.47  129.32
14       384.28  155.23
15       472.35  158.51
16       485.83  162.16
17       380.34  181.23
18       483.17  185.39
19       454.03  188.16
20       485.16  192.24
Multi-objective particle swarm's tuned parameters. (1) The number of solutions at each iteration, N, is increased to 200. (2) The algorithm is terminated after 500 iterations. (3) The value of ξ is set to 300 minutes. (4) The neighborhood subset size, μ, and the tabu list size, ψ, are fixed to 3 and 40, respectively, in the ETS. (5) The maximum Pareto archive size, Arch_size, is set to 100. (6) The radius of the duplication area, λ, is fixed to 10. (7) The maximum size of L_set1, b1, is set to 50, and the size of L_set2, b2, is set to 25.

SPEA-II's tuned parameters. (1) The population size is increased to 200. (2) The algorithm is terminated after 500 iterations.

6.4.4 Comparative results

Table 6 reports the average values of the four above-mentioned metrics. As illustrated in Table 6, the proposed MOPS shows better performance on all problem sets; in other words, MOPS provides a higher number of diverse, locally non-dominated solutions that are closer to the true Pareto-optimal frontier. Table 7 reports the average computational times of the algorithms. As illustrated in Table 7, computational time increases with the number of jobs to be processed. On average, MOPS consumes about 2.5 times the computational time that SPEA-II spends.
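The quality metric (QM) reported in Table 6 can be sketched as follows (a minimal sketch assuming minimization; the function names are ours, and identical points found by both algorithms would be counted for both):

```python
def dominates(a, b):
    """Minimization: a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def quality_metric(front_a, front_b):
    """QM: pool the two fronts, keep the jointly non-dominated points, and
    report the share of that combined front contributed by each algorithm."""
    pool = front_a + front_b
    nd = [p for p in pool if not any(dominates(q, p) for q in pool)]
    share_a = sum(p in front_a for p in nd) / len(nd)
    share_b = sum(p in front_b for p in nd) / len(nd)
    return share_a, share_b

a = [(1.0, 5.0), (3.0, 3.0)]  # both of a's points survive the pooled front
b = [(2.0, 4.0), (5.0, 5.0)]  # (5, 5) is dominated by (3, 3) and drops out
print(quality_metric(a, b))   # a contributes 2 of the 3 jointly non-dominated points
```

Multiplying the shares by 100 yields percentage values of the kind listed in Table 6.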
7 Conclusions

This paper presented a new multi-objective particle swarm (MOPS) for solving a permutation flow shop scheduling problem with the objective of minimizing the weighted mean completion time and the weighted mean tardiness. Tabu search was used to generate diverse initial solutions. A concept analogous to the ideal point in multi-objective optimization, the dynamic ideal point (DIP), was introduced and used both in the initialization phase and in the main algorithm: in the initialization phase, the DIP was approximated using linear programming whenever finding the exact ideal point was difficult, and this approximation was then improved with the better values found for each objective function throughout the main algorithm. A new method was applied to specify the superior particle's position vector (Pg) in the swarm, based on the solutions' crowding distance rather than on the dominance concept. To validate the proposed multi-objective particle swarm, we designed various test problems and evaluated the performance and reliability of the proposed MOPS in comparison with a conventional multi-objective genetic algorithm, SPEA-II. Several comparison metrics (the number of Pareto solutions found by the algorithm, error ratio, generational distance, spacing metric, and diversification metric) were applied to assess the efficiency of the proposed MOPS. The experimental results indicated that the proposed MOPS outperformed SPEA-II: in all test problems, MOPS was able to improve the quality of the obtained solutions, especially for the large-sized problems, and in five of the small-sized problems MOPS detected all (100%) of the non-dominated solutions of the search space obtained by enumeration.
Acknowledgments The authors would like to sincerely thank the two anonymous referees for their valuable comments on an early version of this paper.
References

Akpan EOP (1996) Job shop sequencing problems via network scheduling technique. Int J Oper Prod Manage 16(3):76–86
Beausoleil RP (2006) "MOSS" multi-objective scatter search applied to non-linear multiple criteria optimization. Eur J Oper Res 169:426–449
Ben-Daya M, Al-Fawzan M (1998) A tabu search approach for the flow shop scheduling problem. Eur J Oper Res 109:88–95
Blazewicz J, Domschke W, Pesch E (1996) The job shop scheduling problem: conventional and new solution techniques. Eur J Oper Res 93:1–33
Blazewicz J, Pesch E, Sterna M, Werner F (2005a) A comparison of solution procedures for two-machine flow shop scheduling with late work criterion. Comput Ind Eng 49:611–624
Blazewicz J, Pesch E, Sterna M, Werner F (2005b) The two-machine flow shop problem with weighted late work criterion and common due date. Eur J Oper Res 165:408–415
Bulfin RL, M'Hallah R (2003) Minimizing the weighted number of tardy jobs on a two-machine flow shop. Comput Oper Res 30:1887–1900
Chen L-H, Chen Y-H (1996) A design procedure for a robust job shop manufacturing system under a constraint using computer simulation experiments. Comput Ind Eng 30(1):1–12
Choi B-C, Yoon S-H, Chung S-J (2005) Minimizing maximum completion time in a proportionate flow shop with one machine of different speed. Eur J Oper Res, Article in Press
Clerc M (1999) The swarm and the queen: towards a deterministic and adaptive particle swarm optimization. In: Proc ICEC, Washington, DC, pp 1951–1957
Coello Coello CA, Lechuga MS (2002) MOPSO: a proposal for multiple objective particle swarm optimization. In: Proceedings of the 2002 congress on evolutionary computation, vol 2, pp 1051–1056
Coello Coello CA, Pulido GT, Lechuga MS (2004) Handling multiple objectives with particle swarm optimization. IEEE Trans Evolut Comput 8(3):256–279
Coello Coello CA, Toscano Pulido G (2001) A micro-genetic algorithm for multi-objective optimization. In: Zitzler E, Deb K, Thiele L, Coello Coello CA, Corne D (eds) First international conference on evolutionary multi-criterion optimization. Lecture Notes in Computer Science, No 1993, Springer-Verlag, pp 126–140
Collette Y, Siarry P (2003) Multi-objective optimization: principles and case studies
Danneberg D, Tautenhahn T, Werner F (1999) A comparison of heuristic algorithms for flow shop scheduling problems with setup times and limited batch size. Math Comput Model 29:101–126
Deb K (1999) Multi-objective genetic algorithms: problem difficulties and construction of test problems. Evolut Comput J 7(3):205–230
Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Trans Evolut Comput 6(2):182–197
Eberhart RC, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, Nagoya, Japan, pp 39–43
Fieldsend JE, Singh S (2002) A multi-objective algorithm based upon particle swarm optimization, an efficient data structure and turbulence. In: Proceedings of the 2002 UK workshop on computational intelligence, pp 37–44
Fink A, Voß S (2003) Solving the continuous flow shop scheduling problem by metaheuristics. Eur J Oper Res 151:400–414
Fonseca CM, Fleming PJ (1993) Genetic algorithms for multi-objective optimization: formulation, discussion and generalization. In: Forrest S (ed) Proceedings of the fifth international conference on genetic algorithms, San Mateo, California. Morgan Kaufmann Publishers, pp 416–423
Grabowski J, Pempera J (2005) Some local search algorithms for no-wait flow shop problem with makespan criterion. Comput Oper Res 32:2197–2212
Gupta JND, Stafford Jr EF (2006) Flow shop scheduling research after five decades. Eur J Oper Res 169:699–711
Horn J, Nafpliotis N, Goldberg DE (1994) A niched Pareto genetic algorithm for multi-objective optimization. In: Proc of 1st IEEE-ICEC conference, pp 82–87
Hu X, Eberhart R (2002) Multi-objective optimization using dynamic neighborhood particle swarm optimization. In: Proceedings of the 2002 congress on evolutionary computation, vol 2, pp 1677–1681
Hu X, Eberhart R, Shi Y (2003) Particle swarm with extended memory for multi-objective optimization. In: Proceedings of the 2003 IEEE swarm intelligence symposium, Indianapolis, pp 193–197
Hu X, Shi Y, Eberhart RC (2004) Recent advances in particle swarm. In: Proceedings of the IEEE congress on evolutionary computation, Portland, Oregon, vol 2, pp 90–97
Hyun CJ, Kim Y, Kim YK (1998) A genetic algorithm for multiple objective sequencing problems in mixed model assembly lines. Comput Oper Res 25(7–8):675–690
Jaszkiewicz A (1999) Genetic local search for multiple objective combinatorial optimization. Technical Report RA-014/98, Institute of Computing Science, Poznan University of Technology
Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of the 1995 IEEE international conference on neural networks, vol 4, pp 1942–1948
Knowles JD, Corne DW (1999) The Pareto archived evolution strategy: a new baseline algorithm for multiobjective optimization. In: Congress on evolutionary computation, Washington, DC, IEEE Service Center, pp 98–105
Lian Z, Gu X, Jiao B (2006) A similar particle swarm optimization algorithm for permutation flow shop scheduling to minimize makespan. Appl Math Comput 175:773–785
Loukil T, Teghem J, Tuyttens D (2005) Solving multi-objective production scheduling problems using metaheuristics. Eur J Oper Res 161:42–61
Luh G-C, Chueh C-H, Liu W-W (2003) MOIA: multi-objective immune algorithm. Eng Optim 35:143–164
Marett R, Wright M (1996) A comparison of neighborhood search techniques for multi-objective combinatorial problems. Comput Oper Res 23:465–483
Moore J, Chapman R (1999) Application of particle swarm to multi-objective optimization. Department of Computer Science and Software Engineering, Auburn University
Murata T, Ishibuchi H, Tanaka H (1996) Multi-objective genetic algorithm and its applications to flow shop scheduling. Comput Ind Eng 30:957–968
Nowicki E, Smutnicki C (2006) Some aspects of scatter search in the flow shop problem. Eur J Oper Res 169:654–666
Pan JC-H, Chen J-S, Chao C-M (2002) Minimizing tardiness in a two-machine flow shop. Comput Oper Res 29:869–885
Parsopoulos KE, Vrahatis MN (2002) Particle swarm optimization method in multi-objective problems. In: Proceedings of the 2002 ACM symposium on applied computing, pp 603–607
Parsopoulos KE, Tasoulis DK, Vrahatis MN (2004) Multi-objective optimization using parallel vector evaluated particle swarm optimization. In: Proceedings of the IASTED international conference on artificial intelligence and applications, vol 2, pp 823–828
Pilegaard HM (1997) Tabu search in multi-objective optimization: MOTS. In: Proceedings of the 13th international conference on multiple criteria decision making (MCDM 97), Cape Town, South Africa
Pinedo M (1995) Scheduling: theory, algorithms and systems. Prentice-Hall, Englewood Cliffs, New Jersey
Ponnambalam SG, Jagannathan H, Kataria M, Gadicherla A (2004) A TSP-GA multi-objective algorithm for flow shop scheduling. Int J Adv Manuf Technol 23:909–915
Rahmati A (1998) Representation of hybrid genetic algorithm for solving non-classic job shop scheduling problems. MS Eng Thesis, Faculty of Engineering, University of Tehran (in Persian)
Ravindran D, Noorul Haq A, Selvakuar SJ, Sivaraman R (2005) Flow shop scheduling with multiple objective of minimizing makespan and total flow time. Int J Adv Manuf Technol 25:1007–1012
Sayin S, Karabati S (1999) A bicriteria approach to the two-machine flow shop scheduling problem. Eur J Oper Res 113:435–449
Schaffer JD (1985) Multiple objective optimization with vector evaluated genetic algorithms. In: Schaffer JD (ed) Genetic algorithms and their applications: proceedings of the first international conference on genetic algorithms. Lawrence Erlbaum, Hillsdale, New Jersey, pp 93–100
Shi Y, Eberhart RC (1998) A modified particle swarm optimizer. In: Proceedings of the IEEE congress on evolutionary computation, Piscataway, NJ, pp 69–73
Shi Y, Eberhart R (1998) Parameter selection in particle swarm optimization. In: Evolutionary programming VII: Proc EP98, Springer-Verlag, New York, pp 591–600
Solimanpur M, Vrat P, Shankar R (2004) A neuro-tabu search heuristic for flow shop scheduling problem. Comput Oper Res 31:2151–2164
Tasgetiren MF, Sevkli M, Liang YC, Gencyilmaz G (2004a) Particle swarm optimization algorithm for single machine total weighted tardiness problem. In: Proceedings of the IEEE congress on evolutionary computation, Portland, Oregon, vol 2, pp 1412–1419
Tasgetiren MF, Liang YC, Sevkli M, Gencyilmaz G (2004b) Particle swarm optimization algorithm for makespan and maximum lateness minimization in permutation flow shop sequencing problem. In: Proceedings of the fourth international symposium on intelligent manufacturing systems, Sakarya, Turkey, pp 431–441
Toktas B, Azizoglu M, Koksalan SK (2004) Two-machine flow shop scheduling with two criteria: maximum earliness and makespan. Eur J Oper Res 157:286–295
Wang J-B, Ng CTD, Cheng TCE, Liu L-L (2006) Minimizing total completion time in a two-machine flow shop with deteriorating jobs. Appl Math Comput, Article in Press
Zitzler E, Laumanns M, Thiele L (2001a) SPEA2: improving the strength Pareto evolutionary algorithm. In: Giannakoglou K, Tsahalis D, Periaux J, Papailou P, Fogarty T (eds) EUROGEN 2001, Evolutionary methods for design, optimization and control with applications to industrial problems, Athens, Greece, pp 95–100
Zitzler E, Laumanns M, Thiele L (2001b) SPEA2: improving the strength Pareto evolutionary algorithm. Computer Engineering and Networks Laboratory (TIK) Report 103, Sept 2001