Information Sciences 181 (2011) 668–685
An effective hybrid discrete differential evolution algorithm for the flow shop scheduling with intermediate buffers

Quan-Ke Pan a, Ling Wang b, Liang Gao c,*, W.D. Li d

a College of Computer Science, Liaocheng University, Liaocheng 252059, PR China
b Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Automation, Tsinghua University, Beijing 100084, China
c State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, PR China
d Faculty of Engineering and Computing, Coventry University, Coventry, UK
Article history: Received 24 May 2009 Received in revised form 29 September 2010 Accepted 7 October 2010
Keywords: Flow shop with intermediate buffers; Makespan; Discrete differential evolution; Hybrid algorithm; Local search
Abstract. In this paper, an effective hybrid discrete differential evolution (HDDE) algorithm is proposed to minimize the maximum completion time (makespan) for a flow shop scheduling problem with intermediate buffers located between two consecutive machines. Different from traditional differential evolution algorithms, the proposed HDDE algorithm adopts job permutations to represent individuals and applies job-permutation-based mutation and crossover operations to generate new candidate solutions. Moreover, a one-to-one selection scheme with probabilistic jumping is used to determine whether the candidates will become members of the target population in the next generation. In addition, an efficient local search algorithm based on both insert and swap neighborhood structures is presented and embedded in the HDDE algorithm to enhance the algorithm's local searching ability. Computational simulations and comparisons based on the well-known benchmark instances are provided. They show that the proposed HDDE algorithm is not only capable of generating better results than the existing hybrid genetic algorithm and hybrid particle swarm optimization algorithm, but also outperforms two recently proposed discrete differential evolution (DDE) algorithms. In particular, the HDDE algorithm is able to achieve excellent results for large-scale problems with up to 500 jobs and 20 machines. © 2010 Elsevier Inc. All rights reserved.
1. Introduction

Among all types of scheduling problems, a flow shop scheduling problem with intermediate buffers has important applications in different industries including the petrochemical processing industries and cell manufacturing, where a buffer is either non-existent or of limited size due to limited room and storage facilities (e.g., storage tanks, intermediate inventory) [1–3,19]. In a flow shop with intermediate buffers, each of n jobs consists of m operations with a predetermined processing order through machines, and there exists a certain number of intermediate buffers between two consecutive machines. After finishing processing on a machine, a job is either directly processed on the next machine or it has to be stored in the buffer. If the buffer is full and the next machine is occupied, the job has to wait on its current machine, preventing this machine from processing other jobs, until a buffer unit or the next machine becomes available. Given that the release time of all jobs is zero and setup time on each machine is included in the processing time, the widely used objective for the flow shop scheduling problem with intermediate buffers is to minimize the maximum completion time, i.e., makespan. Regarding the computational complexity of the flow shop problem with intermediate buffers, Papadimitriou and Kanellakis [18]

* Corresponding author. Address: State Key Laboratory of Digital Manufacturing Equipment and Technology, School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, PR China. Tel.: +86 27 87559419; fax: +86 27 87543074. E-mail address:
[email protected] (L. Gao). doi:10.1016/j.ins.2010.10.009
proved that it is NP-hard even for only two machines. Therefore, it is of significance both in theory and in engineering applications to develop effective, efficient and novel solutions for such a problem. However, compared with the large body of literature on the classic flow shop scheduling problem, research on flow shop scheduling with intermediate buffers has not been as extensive. Hall and Sriskandarajah provided a comprehensive survey of scheduling problems with blocking and no-wait in process [6]. Duclos and Spencer [3] discussed the impact of a constrained buffer in a flow shop. Thornton and Hunsucker [32] developed a constructive heuristic for hybrid flow shop scheduling without intermediate storage. Leisten [7] presented some priority-based heuristics for both the permutation and general flow shop problems with intermediate buffers, and the numerical experiments indicated that the NEH method [11] was the best heuristic for the three-machine examples. A few meta-heuristics were further proposed to solve the flow shop scheduling problem with intermediate buffers. Norman [12] applied the tabu search (TS) algorithm to flow shop scheduling problems with both sequence-dependent setup times and finite buffers. Smutnicki [26] proposed another TS approach for the two-machine flow shop scheduling problem with intermediate buffers, whereas Nowicki [13] generalized the approach to the problem with an arbitrary number of machines. Later, Brucker et al. [1] further generalized Nowicki's idea to the cases where different job sequences on machines are allowed. Recently, Wang et al. [34] developed a hybrid genetic algorithm (HGA) for the general flow shop scheduling problem with limited buffers to minimize the makespan. Case and benchmarking studies in that paper showed that the proposed HGA algorithm performed much better than the TS algorithm of Nowicki [13]. More recently, Liu et al. [8] proposed a hybrid particle swarm optimization (HPSO) algorithm, with extensive experiments showing that the HPSO outperformed the HGA algorithm [34] in most cases. For a multi-objective case, a hybrid differential evolution (DE) algorithm was developed by Qian et al. [22]. The DE algorithm was first introduced by Storn and Price [27] for complex continuous optimization problems. Due to its simplicity, easy implementation, fast convergence, and robustness, the DE algorithm has gained increasing attention and has been successfully applied to a wide range of problems, such as digital PID controller design, digital filter design, earthquake hypocenter location, etc. [14,21,25,28]. However, due to its continuous nature, the traditional DE algorithm cannot be directly applied to scheduling problems with discrete characteristics [9,17,33]. Therefore, several discrete versions, called discrete differential evolution (DDE) algorithms, were presented recently. For example, Pan et al. [17] proposed a DDE algorithm, denoted by P_DDE, for solving the permutation flow shop scheduling problem, whereas Wang et al. [33] developed another DDE algorithm (W_DDE for short) for solving the flow shop scheduling problem with blocking constraints. Unlike the traditional DE algorithm, the above two DDE variants represented solutions as discrete job permutations, and presented mutation and crossover operators based on permutations. The mutant scale factor and crossover probability were limited in the range of [0, 1].
The DDE algorithms started from a population of initial solutions generated by some heuristics, and generated candidates by using the permutation-based mutation and crossover operators with probability parameters. Like the traditional DE algorithm, a one-to-one selection operator was utilized to decide the individuals for the next generation. Computational simulations and comparisons demonstrated that both algorithms were effective and efficient for the problems considered. Following the successful application of the DDE algorithm, in this paper we propose an effective hybrid DDE (HDDE) algorithm for solving flow shop scheduling problems with intermediate buffers to minimize makespan. Different from the above two DDE algorithms, the proposed HDDE algorithm applies newly designed mutation and crossover operators to generate new candidate solutions. A one-to-one selection operator with probabilistic jumping is utilized to decide the target population for the next generation. Furthermore, an efficient local search algorithm based on both insertion and swap neighborhood structures is embedded in the algorithm to enhance exploitation. Simulation results and comparisons demonstrate the effectiveness of the proposed HDDE algorithm for the flow shop scheduling problem with intermediate buffers under the makespan criterion. The remaining contents of this paper are organized as follows. In Section 2, the flow shop scheduling problem with intermediate buffers is stated and formulated. In Section 3, the traditional DE is introduced. The HDDE algorithm is proposed and its implementation is explained in detail in Section 4. The computational results and comparisons are provided in Section 5. Finally, we end the paper with some conclusions in Section 6.

2. Problem statement

The flow shop scheduling problem with intermediate buffers can be described as follows. There are n jobs from the set J = {1, 2, . . . , n} and m machines from the set M = {1, 2, . . . , m}. Each job j ∈ J will be sequentially processed on machines 1, 2, . . . , m. Operation o_{j,k} corresponds to the processing of job j ∈ J on machine k (k = 1, 2, . . . , m) during an uninterrupted processing period p_{j,k}, where p_{j,k} is non-negative. A common simplification measure is to avoid job passing in the sequence, that is, the sequence in which the jobs are to be processed is the same on each machine. At any time, each machine can process at most one job and each job can be processed on at most one machine. Between each successive pair of machines k and k + 1, there exists a buffer of size B_k ≥ 0 (i.e., at most B_k jobs can be held simultaneously), k ∈ M\{m}, and jobs obey the first-in-first-out rule in the buffer. Each job must go through buffer B_k on its route from machine k to machine k + 1. If B_k is completely occupied, the job has to remain on the current machine k until a free place is available in the buffer. Given that the release times of all jobs are zero and the setup times on each machine are included in the processing times, the aim is to find a sequence for processing all jobs on all machines so that the maximum completion time (i.e., makespan) is minimized. Let a job permutation π = {π(1), π(2), . . . , π(n)} represent the sequence of jobs to be processed, and d_{π(i),k} be the departure time of operation o_{π(i),k} from machine k; then d_{π(i),k} can be calculated as follows:
d_{π(1),1} = p_{π(1),1}    (1)

d_{π(1),k} = d_{π(1),k−1} + p_{π(1),k},  k = 2, 3, . . . , m    (2)

d_{π(i),1} = d_{π(i−1),1} + p_{π(i),1},  i = 2, 3, . . . , B_1 + 1    (3)

d_{π(i),k} = max(d_{π(i−1),k}, d_{π(i),k−1}) + p_{π(i),k},  i = 2, 3, . . . , B_k + 1;  k = 2, 3, . . . , m − 1    (4)

d_{π(i),1} = max(d_{π(i−1),1} + p_{π(i),1}, d_{π(i−B_1−1),2}),  i > B_1 + 1    (5)

d_{π(i),k} = max(max(d_{π(i−1),k}, d_{π(i),k−1}) + p_{π(i),k}, d_{π(i−B_k−1),k+1}),  i > B_k + 1;  k = 2, 3, . . . , m − 1    (6)

d_{π(i),m} = max(d_{π(i−1),m}, d_{π(i),m−1}) + p_{π(i),m},  i = 2, 3, . . . , n    (7)
In the above recursion, the departure time of the first job on each machine is calculated first, then the second job, and so on until the last job. Eqs. (1) and (2) define the departure time of job π(1) through machine 1 to machine m, making sure that at any time, each machine can process at most one job and each job can be processed on at most one machine. Eqs. (3) and (4) specify the departure time of job π(i) on machine 1 (i = 2, 3, . . . , B_1 + 1) or on machine k = 2, 3, . . . , m − 1 (i = 2, 3, . . . , B_k + 1), which ensure that no job passing is allowed. Eqs. (5) and (6) control the calculation for job π(i) on machine 1 (i > B_1 + 1) or on machine k = 2, 3, . . . , m − 1 (i > B_k + 1). In this case, the buffer capacities should be considered. In Eq. (6), max(d_{π(i−1),k}, d_{π(i),k−1}) + p_{π(i),k} represents the completion time of operation o_{π(i),k}, and d_{π(i−B_k−1),k+1} accounts for the limited buffer capacities. Eq. (7) gives the departure time of job π(i), i = 2, 3, . . . , n, on the last machine. It can easily be concluded from the above formulation that the buffer constraint has no effect on the makespan and the above problem reduces to the classical permutation flow shop scheduling problem if B_k ≥ n − 1. Finally, the makespan of the job permutation π = {π(1), π(2), . . . , π(n)} is given by
C_max(π) = d_{π(n),m}    (8)
The following example illustrates the calculation of the makespan in detail with a permutation π = {1, 2, 3, 4}. Suppose there are four jobs and three machines. The buffer sizes are B_1 = 1 and B_2 = 1, and the processing times p_{j,k} are given by
p_{j,k} =
| 1 3 1 |
| 1 2 2 |
| 1 1 2 |
| 1 1 1 |
Then the departure times d_{π(i),k} are calculated as follows (see Fig. 1):

d_{π(1),1} = p_{π(1),1} = 1
d_{π(1),2} = d_{π(1),1} + p_{π(1),2} = 4
d_{π(1),3} = d_{π(1),2} + p_{π(1),3} = 5
d_{π(2),1} = d_{π(1),1} + p_{π(2),1} = 2
d_{π(2),2} = max(d_{π(1),2}, d_{π(2),1}) + p_{π(2),2} = 6
d_{π(2),3} = max(d_{π(1),3}, d_{π(2),2}) + p_{π(2),3} = 8
d_{π(3),1} = max(d_{π(2),1} + p_{π(3),1}, d_{π(1),2}) = 4
d_{π(3),2} = max(max(d_{π(2),2}, d_{π(3),1}) + p_{π(3),2}, d_{π(1),3}) = 7
d_{π(3),3} = max(d_{π(2),3}, d_{π(3),2}) + p_{π(3),3} = 10
d_{π(4),1} = max(d_{π(3),1} + p_{π(4),1}, d_{π(2),2}) = 6
d_{π(4),2} = max(max(d_{π(3),2}, d_{π(4),1}) + p_{π(4),2}, d_{π(2),3}) = 8
d_{π(4),3} = max(d_{π(3),3}, d_{π(4),2}) + p_{π(4),3} = 11
Fig. 1. An example for calculating makespan.
Thus, the makespan is:
C_max = d_{π(4),3} = 11
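To make the recursion concrete, the following is a minimal Python sketch (not the authors' implementation; the function and variable names are illustrative) of Eqs. (1)–(8) for an arbitrary permutation and buffer configuration; applied to the worked example above, it returns the same makespan of 11.

def makespan_with_buffers(perm, p, buf):
    # perm: 0-based job indices in processing order; p[j][k]: processing time of job j on machine k;
    # buf[k]: size of the buffer between machines k and k+1 (the paper's B_{k+1} in 1-based notation)
    n, m = len(perm), len(p[0])
    d = [[0.0] * m for _ in range(n)]      # d[i][k]: departure time of the i-th job in perm from machine k
    for i, job in enumerate(perm):
        for k in range(m):
            prev_machine = d[i][k - 1] if k > 0 else 0.0   # job must first leave machine k-1
            prev_job = d[i - 1][k] if i > 0 else 0.0       # predecessor must first leave machine k
            finish = max(prev_machine, prev_job) + p[job][k]
            if k < m - 1 and i > buf[k]:
                # buffer between k and k+1 is full until the (i - B_k - 1)-th job leaves machine k+1
                finish = max(finish, d[i - buf[k] - 1][k + 1])
            d[i][k] = finish
    return d[n - 1][m - 1]

# The worked example from the text: 4 jobs, 3 machines, B1 = B2 = 1.
p = [[1, 3, 1],
     [1, 2, 2],
     [1, 1, 2],
     [1, 1, 1]]
print(makespan_with_buffers([0, 1, 2, 3], p, [1, 1]))   # expected output: 11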
3. Introduction to the DE and DDE algorithms

3.1. The DE algorithm

Differential evolution (DE) is one of the latest evolutionary optimization methods, proposed by Storn and Price [27]. Like other evolutionary-type algorithms, DE is a population-based and stochastic global optimizer, where each individual, X_i = (x_i(1), x_i(2), . . . , x_i(n)), i = 1, 2, . . . , NP, is represented as an n-dimensional real vector. Starting from a population of NP randomly initialized target individuals P = {X_1, X_2, . . . , X_NP} in the search ranges, the DE algorithm employs mutation and crossover operators to generate new candidates. Then a one-to-one selection scheme is applied to determine whether the offspring or the parent survives in the next generation. The above process is repeated until a predefined termination criterion is reached. In the DE algorithm, a mutant individual, V_i = (v_i(1), v_i(2), . . . , v_i(n)), i = 1, 2, . . . , NP, is generated via a mutation operator. There are many mutation strategies in the literature [28], where the commonly used strategy 'DE/best/1' can be described as follows:
V_i = X_best + F · (X_{r1} − X_{r2})    (9)
where X_best is the best individual in the current target population, X_{r1} and X_{r2} are two target individuals randomly chosen such that r1 ≠ r2 ≠ i ∈ {1, . . . , NP}, and F > 0 is a mutation scale factor affecting the differential variation between two individuals. Following the mutation operator, a crossover operator is applied to yield a trial vector U_i = (u_i(1), u_i(2), . . . , u_i(n)) by considering the mutant individual and its corresponding target as follows:
u_i(j) = { v_i(j), if r_j ≤ CR or j = n_j;  x_i(j), otherwise },  j = 1, 2, . . . , n    (10)
where n_j refers to an index randomly chosen from the set {1, 2, . . . , n} to ensure that at least one dimension of the trial individual U_i differs from its counterpart X_i in the current generation, CR is a crossover constant in the range [0, 1], and r_j ∈ [0, 1] is a uniform random number. The selection scheme is based on the survival of the fittest between the trial individual U_i and its target counterpart X_i. For a minimization problem, it can be formulated as follows:
X_i = { U_i, if f(U_i) ≤ f(X_i);  X_i, otherwise }    (11)
where f(U_i) and f(X_i) are the objective values of U_i and X_i, respectively.

3.2. The P_DDE algorithm

The traditional DE algorithm cannot be directly used to generate discrete values since the representation is real-valued. Therefore, several discrete variants of the DE algorithm (DDE for short) were presented for scheduling problems [9,17,33]. These DDE algorithms designed the solution representation, mutation and crossover operations according to the problem considered. The P_DDE algorithm [17] adopted a discrete job-permutation-based representation and defined the mutation and crossover operators as follows. A mutant individual is obtained by perturbing the best solution in the target population; to obtain the mutant individual, the following equation can be used:
V_i = { F_k(X_best), if r < P_m;  X_best, otherwise }    (12)
where X_best is the best solution in the target population; P_m ∈ [0, 1] is the perturbation probability; F_k is the perturbation operator with perturbation strength k; and r is a uniform random number in [0, 1]. Following the perturbation phase, the trial individual is obtained as follows:
U_i = { CX(V_i, X_i), if r < P_C;  V_i, otherwise }    (13)
where CX is the crossover operator and P_C is the crossover probability. The basic idea of the P_DDE algorithm is to preserve the best solution in the target population during the search procedure. Unlike its continuous counterpart, P_DDE perturbs the best solution in the target population
by stochastically modifying it. Then a crossover operator is used to obtain the trial individual, which is compared to its counterpart in the target population.

3.3. The W_DDE algorithm

The W_DDE algorithm [33] is another discrete variant of the DE algorithm, presented for the blocking flow shop scheduling problem with the makespan criterion. In the W_DDE algorithm, the target individuals are represented as job permutations, and the mutation and crossover operators are defined as follows. A mutant individual is generated by adding the weighted difference between two target individuals randomly selected from a previous population to form a new individual:
V_i = X_a ⊕ F ⊗ (X_b ⊖ X_c)    (14)
where a, b and c are three random integers in the range [1, PS] such that a, b, c and i are pairwise different, i = 1, 2, . . . , PS, and F ∈ [0, 1] is a mutant scale factor. The above formula consists of two components. The first component is the weighted difference between two target individuals X_b and X_c. That is,
D_i = F ⊗ (X_b ⊖ X_c)  ⇔  d_{i,j} = { x_{b,j} − x_{c,j}, if rand() < F;  0, otherwise }    (15)
where Di = (di,0, di,1, . . . , di,n) is a temporary vector. The second component is to produce a mutant individual Vi by adding Di to another target individual Xa, that is,
V_i = X_a ⊕ D_i  ⇔  v_{i,j} = x_{a,j} ⊕ d_{i,j} = mod(x_{a,j} + d_{i,j} + n, n)    (16)
where "mod" denotes the modulus operator, whose result is the remainder when the first operand is divided by the second. This operator ensures that each dimension of V_i can represent a job. Note that a mutant individual cannot always represent a complete sequence, since some jobs may repeat many times whereas other jobs may be lost. However, the W_DDE algorithm generates a feasible trial individual by the following crossover operator.

Step 1: From j = 1 to n, remove job v_{i,j} ∈ V_i if rand() ≥ CR or the job already appears in V_i.
Step 2: Let U_i = X_i, and eliminate the jobs included in V_i from U_i.
Step 3: The first job from V_i is taken and inserted in the best position of U_i by evaluating all the possible slots of U_i. This step is repeated until V_i is empty, and then a complete schedule is obtained.

4. Hybrid discrete DE algorithm

Due to the effectiveness of the DE algorithm and its variants, this section presents a hybrid DDE (HDDE) algorithm for the flow shop scheduling with intermediate buffers. We detail the solution representation, mutation, crossover, selection, initialization, and local search as follows.

4.1. Solution representation

The job permutation based representation is easy to decode to a schedule, and it has been widely used in the literature for a variety of permutation flow shop scheduling problems [4,5,16]. In our HDDE algorithm, such a representation has also been adopted. In other words, each target individual is represented as a job permutation π = {π(1), π(2), . . . , π(n)}, and f(π) is the makespan or fitness value of the permutation π.

4.2. Job permutation based DE operators

According to the 'DE/best/1' mutation strategy commonly used in the traditional DE algorithm, two target individuals are randomly chosen from the current population, and then their weighted difference is added to the best individual to generate a mutant individual. With the idea of this operator, a job permutation based mutation operator is presented as follows:
V_i = g(X_best ⊕ F ⊗ (X_{r1} ⊖ X_{r2}))    (17)

where X_best is the best target individual; X_{r1} and X_{r2} are two target individuals randomly chosen such that r1 ≠ r2 ≠ i ∈ {1, . . . , NP}; F ∈ [0, 1] is a mutant scale factor; the operators ⊕, ⊗ and ⊖ are the same as those in Eq. (14); and g( ) is a function that generates the mutant individual V_i by removing redundant jobs in U_i = X_best ⊕ F ⊗ (X_{r1} ⊖ X_{r2}) from last to first, ensuring that each job in U_i appears at most once:

V_i = g(U_i)    (18)
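As an illustration, the following Python sketch (illustrative names only, not the authors' code) shows one possible realization of the mutation in Eqs. (15)–(18) with 0-based job indices; "from last to first" is interpreted here as keeping the last occurrence of each repeated job, and the resulting (possibly partial) permutation is completed afterwards by the crossover operator described below.

import random

def hdde_mutation(x_best, x_r1, x_r2, F):
    n = len(x_best)
    # Eq. (15): difference vector, each position kept with probability F
    diff = [x_r1[j] - x_r2[j] if random.random() < F else 0 for j in range(n)]
    # Eq. (16): position-wise modular addition keeps every entry a valid job index
    u = [(x_best[j] + diff[j] + n) % n for j in range(n)]
    # g( ): remove redundant (repeated) jobs, scanning from last to first
    seen, v = set(), []
    for job in reversed(u):
        if job not in seen:
            seen.add(job)
            v.append(job)
    v.reverse()
    return v   # may be a partial permutation; the crossover completes it

# Hypothetical usage with n = 6 jobs indexed 0..5:
random.seed(1)
print(hdde_mutation([3, 0, 5, 1, 4, 2], [2, 4, 0, 5, 1, 3], [5, 1, 3, 2, 0, 4], F=0.2))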
The crossover operator combines the mutant individual V_i and a target individual X_i to generate a trial individual U_i. The crossover operator is the same as that in the W_DDE algorithm, with the exception that Step 1 is modified as follows.
Step 1: From j = 1 to n, remove job v_{i,j} ∈ V_i if rand() < CR.

Different from the W_DDE algorithm, the HDDE algorithm improves the DE operators by keeping the best target individual X_best, which always carries better information than the other individuals in the population, so it is expected to generate individuals with better fitness.

4.3. Selection scheme

To avoid being trapped in local minima, a new one-to-one selection scheme with a probabilistic jumping mechanism is presented to decide whether or not the trial individual U_i should become a member of the target population in the next generation. That is,
X'_i = { U_i, if rand() < min{1, exp[(f(X_i) − f(U_i))/t]};  X_i, otherwise }    (19)
where t is the temperature factor, and exp () is the exponential function. Following Osman and Potts [15], the constant temperature was used here as follows:
t = h · (Σ_{j=1}^{n} Σ_{k=1}^{m} p_{j,k}) / (10 · n · m)    (20)
where h is a constant. By adopting this new selection scheme, each target individual is given a chance to be replaced by a certain inferior solution during the search process. Thus, the target population can be diversified and the presented algorithm has an opportunity to escape from local minima.

4.4. Population initialization

The NEH heuristic [11] is one of the well-known heuristics in the literature, and it has been proved to perform well for the considered problem. To guarantee an initial population with a certain level of quality, the NEH heuristic is used to produce one target individual. The NEH heuristic is described as follows [11].

Step 1: Sort the jobs according to the descending sums of their processing times. Let the obtained sequence be π = {π(1), π(2), . . . , π(n)}.
Step 2: The first two jobs of π are taken and the two possible partial sequences of these two jobs are evaluated. Then, the better partial sequence is chosen as the current sequence.
Step 3: Take job π(j), j = 3, 4, . . . , n, and find the best partial sequence by placing it in all possible slots of the partial sequence of jobs that have already been scheduled. Then the best partial sequence is kept for the next iteration. (If several partial sequences have the same minimum makespan value, one of them is randomly selected as the best partial sequence.) Repeat this step until all jobs are sequenced and a final sequence of n jobs is constructed.

One of the important features behind the effectiveness of the NEH heuristic is the basic premise that a job with a larger total processing time should be given higher priority than a job with a smaller total processing time. However, for flow shop scheduling with intermediate buffers, a job with a larger total processing time may block its successive jobs and yield a larger blocking time than a job with a shorter total processing time when the buffer size is equal to zero or very small. The blocking time may increase the makespan value: the larger the blocking time is, the greater the makespan value may be. Therefore, a reasonable premise is that the job with a smaller total processing time should be given higher priority for the problem with very small buffers. Based on this, the second target individual is generated by using the NEH heuristic with the following modified Step 1.

Step 1: Sort the jobs according to the non-decreasing sums of their processing times. Let the obtained sequence be π = {π(1), π(2), . . . , π(n)}.

To obtain an initial target population with higher diversity, each remaining initial individual is produced by a two-phase procedure: Phase I generates an initial job sequence randomly; Phase II obtains the final job permutation by following Steps 2 and 3 of the NEH heuristic, using the generated initial sequence as the seed.

4.5. Local search algorithm

In order to improve the exploitation ability, a local search algorithm has been embedded into the presented algorithm. For permutation based neighborhood structures, INSERT and SWAP operators are commonly used to produce neighboring solutions in the literature [4,5]. Thus, a local search algorithm based on an INSERT+SWAP variant of the VNS method [10] is presented in this paper. Let π = {π(1), π(2), . . . , π(n)} be the job permutation to undergo the local search, and l_max be the maximum number of iterations of the local search algorithm. The local search algorithm is described as follows.
Step 1: Set l = 1, Cnt = 1 and φ = π.
Step 2: Perform the INSERT or SWAP operator on φ according to whether Cnt equals 1 or 2. Denote the obtained permutation φ'.
Step 3: If φ' is better than or equal to π, let π = φ' and Cnt = 1; otherwise let Cnt = Cnt + 1.
Step 4: If Cnt ≤ 2, let φ = π and go to Step 2.
Step 5: Set l = l + 1. If l ≥ l_max, stop the procedure and output π; otherwise set Cnt = 1 and go to Step 2.
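A minimal Python sketch (not the authors' code) of the above steps is given below; the evaluate argument is assumed to be a makespan function such as the makespan_with_buffers sketch shown after the worked example in Section 2, and each INSERT or SWAP move acts on two randomly chosen positions.

import random

def insert_move(perm):
    # remove a randomly chosen job and re-insert it at another random position
    pi = perm[:]
    i, j = random.sample(range(len(pi)), 2)
    pi.insert(j, pi.pop(i))
    return pi

def swap_move(perm):
    # exchange the jobs at two randomly chosen positions
    pi = perm[:]
    i, j = random.sample(range(len(pi)), 2)
    pi[i], pi[j] = pi[j], pi[i]
    return pi

def local_search(perm, evaluate, l_max):
    best, best_val = perm[:], evaluate(perm)    # Step 1
    l = 1
    while True:
        cnt = 1
        while cnt <= 2:                         # Steps 2-4: INSERT while cnt = 1, SWAP while cnt = 2
            move = insert_move if cnt == 1 else swap_move
            cand = move(best)
            cand_val = evaluate(cand)
            if cand_val <= best_val:            # Step 3: a successful operator is rewarded with another run
                best, best_val, cnt = cand, cand_val, 1
            else:
                cnt += 1
        l += 1                                  # Step 5
        if l >= l_max:
            return best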
From the above procedure, it can be seen that a local search operator is rewarded with another run if it obtains a better neighboring solution; otherwise, the other operator is used in the next round. With this scheme, intensification (exploitation) can be well enhanced by using the two operators in a reasonably hybrid way. To provide a good compromise between solution quality and search efficiency, the above local search is only applied to the trial individual generated from X_best.

4.6. Procedure of the HDDE algorithm

Based on the above design, the procedure of the proposed HDDE algorithm for solving the flow shop scheduling with intermediate buffers is summarized as follows.

Step 1: Initialization. Generate a population of NP target individuals P = {X_1, X_2, . . . , X_NP} by using the initialization presented in Section 4.4, and then evaluate each initial target individual.
Step 2: Generate NP trial individuals U_i, for i = 1 to NP, by Steps 2.1 through 2.3.
Step 2.1: Mutation. Generate a mutant vector V_i = {v_i(1), v_i(2), . . . , v_i(n)} for X_i, i = 1, 2, . . . , NP, by using the job permutation based mutation operator.
Step 2.2: Crossover. Generate a trial vector U_i = {u_i(1), u_i(2), . . . , u_i(n)} for X_i, i = 1, 2, . . . , NP, by applying the job permutation based crossover operator on V_i and X_i.
Step 2.3: Local search. If U_i is generated from X_best and the related mutant individual, apply the local search to U_i.
Step 3: Selection. Evaluate each U_i, i = 1, 2, . . . , NP, and determine the members of the target population for the next generation by using the modified selection scheme.
Step 4: If the termination criterion is not satisfied, go to Step 2; otherwise, stop the procedure and output the best solution found so far.

Unlike the traditional DE algorithm, the proposed HDDE algorithm adopts a job permutation based encoding scheme, employs job permutation based DE mutation and crossover operators, applies a modified selection scheme, and takes advantage of an initialization based on the NEH heuristic and its variants. Furthermore, the HDDE algorithm not only applies the DE-based evolutionary searching mechanism to effectively perform exploration for promising solutions within the entire region, but also applies a well developed local search algorithm to perform exploitation for good solutions. Since both exploration and exploitation are stressed and balanced, it is expected to achieve good performance for flow shop scheduling problems with intermediate buffers. It can be observed that the HDDE algorithm adopts crossover and mutation operators quite different from those of the existing P_DDE algorithm. Although there are some similarities between the W_DDE and HDDE algorithms, the HDDE algorithm further enhances the mutation operator by taking advantage of the best target individual X_best, which always carries better information than the other individuals in the population, and avoids being trapped in a local optimum by a selection scheme with probabilistic jumping. In addition, the presented initialization method and local search render the HDDE algorithm applicable to the problem considered. In the next section, we will investigate the performance of the HDDE algorithm based on computational simulations and comparisons.

5. Computational results and comparisons

5.1. Experimental setup

We first conducted a preliminary experiment to determine the parameters for the presented HDDE algorithm.
In the preliminary experiment, the following ranges of parameter values from the related literature [16,17,21,33] were tested: NP ∈ {5, 10}, F ∈ {0.2, 0.4}, CR ∈ {5/n, 10/n}, l_max ∈ {n²/5, n²} and h ∈ {0.4, 0.8}. These ranges were also selected because taking small values for NP, F and CR gives the local search more chances and takes advantage of the intensification it provides. As a matter of fact, the hybridization of an evolutionary algorithm with a good local search can enhance its performance significantly by taking advantage of the exploration and intensification features of the evolutionary algorithm and the local search method, respectively. Based on the experimental results, the proposed HDDE algorithm achieved the best performance under the following settings: NP = 5, F = 0.2, CR = 5/n, l_max = n²/5 and h = 0.4. To evaluate the performance of the proposed HDDE algorithm for solving the permutation flow shop scheduling problem with intermediate buffers, the HDDE algorithm is compared with the two recently developed DDE algorithms, i.e., P_DDE [17] and W_DDE [33], and two effective algorithms presented for the problem considered, i.e., the HPSO [8] and
HGA [34] algorithms. In the experimental tests, computational simulations were carried out on the commonly known 120 instances of different sizes generated for the classic permutation flow shop problem by Taillard [29]. Obviously, an increase of the buffer size implies that the buffer constraint has less influence on the makespan value [34], and it was also reported in [34] that the results obtained for buffers of size as large as four are very close to those produced for buffers of infinite capacity. Thus, in this paper, we only compared the algorithms with buffer size B equal to 1, 2, 3, and 4, respectively. All of the compared algorithms were coded in Visual C++ and the experiments were executed on a Pentium IV 3.0 GHz PC with 512 MB memory. For the algorithms from the literature, the parameters were set the same as those in the literature. Since all the algorithms were coded in the same programming language, used the same library functions and data structures, and were executed on the same computer (we used the same maximum CPU time T = 30 × n × m milliseconds as the
Table 1a. The PRE ± SD values of the algorithms (B = 1).

Instances | HDDE        | HDDEnl      | W_DDE       | P_DDE       | HGA         | HPSO        | CPU time (s)
20 × 5    | 0.07 ± 0.12 | 0.16 ± 0.18 | 0.07 ± 0.10 | 0.11 ± 0.13 | 0.77 ± 0.27 | 1.05 ± 0.47 | 3.00
20 × 10   | 0.07 ± 0.08 | 0.08 ± 0.08 | 0.03 ± 0.06 | 0.06 ± 0.10 | 1.19 ± 0.59 | 1.84 ± 0.62 | 6.00
20 × 20   | 0.08 ± 0.10 | 0.09 ± 0.11 | 0.04 ± 0.06 | 0.11 ± 0.13 | 1.07 ± 0.32 | 1.52 ± 0.44 | 12.00
50 × 5    | 0.19 ± 0.12 | 0.34 ± 0.12 | 0.49 ± 0.10 | 0.26 ± 0.12 | 0.95 ± 0.28 | 0.92 ± 0.46 | 7.50
50 × 10   | 0.80 ± 0.35 | 1.11 ± 0.32 | 1.40 ± 0.27 | 0.82 ± 0.42 | 3.18 ± 0.48 | 2.83 ± 0.70 | 15.00
50 × 20   | 0.90 ± 0.36 | 1.13 ± 0.32 | 1.37 ± 0.24 | 0.75 ± 0.37 | 3.57 ± 0.46 | 2.93 ± 0.69 | 30.00
100 × 5   | 0.43 ± 0.23 | 0.93 ± 0.17 | 1.51 ± 0.18 | 0.92 ± 0.24 | 2.32 ± 0.32 | 1.15 ± 0.40 | 15.00
100 × 10  | 0.70 ± 0.34 | 1.24 ± 0.26 | 1.82 ± 0.25 | 0.84 ± 0.30 | 3.23 ± 0.31 | 1.83 ± 0.50 | 30.00
100 × 20  | 0.92 ± 0.35 | 1.43 ± 0.30 | 2.01 ± 0.27 | 0.90 ± 0.31 | 3.74 ± 0.27 | 2.20 ± 0.48 | 60.00
200 × 10  | 0.45 ± 0.28 | 1.62 ± 0.19 | 2.06 ± 0.25 | 1.22 ± 0.25 | 3.46 ± 0.19 | 1.11 ± 0.35 | 60.00
200 × 20  | 0.51 ± 0.33 | 1.41 ± 0.21 | 1.87 ± 0.30 | 0.85 ± 0.21 | 2.71 ± 0.17 | 1.05 ± 0.32 | 120.00
500 × 20  | 0.28 ± 0.16 | 1.31 ± 0.13 | 1.39 ± 0.39 | 0.98 ± 0.08 | 1.96 ± 0.04 | 0.98 ± 0.21 | 300.00
Average   | 0.45 ± 0.24 | 0.90 ± 0.20 | 1.17 ± 0.21 | 0.65 ± 0.22 | 2.35 ± 0.31 | 1.62 ± 0.47 | 54.88
Table 1b. The PRE ± SD values of the algorithms (B = 2).

Instances | HDDE        | HDDEnl      | W_DDE       | P_DDE       | HGA         | HPSO        | CPU time (s)
20 × 5    | 0.04 ± 0.02 | 0.04 ± 0.03 | 0.04 ± 0.01 | 0.03 ± 0.02 | 0.56 ± 0.30 | 0.46 ± 0.42 | 3.00
20 × 10   | 0.05 ± 0.06 | 0.04 ± 0.06 | 0.02 ± 0.04 | 0.03 ± 0.05 | 1.13 ± 0.64 | 1.45 ± 0.58 | 6.00
20 × 20   | 0.08 ± 0.07 | 0.08 ± 0.07 | 0.04 ± 0.05 | 0.08 ± 0.08 | 0.91 ± 0.37 | 1.48 ± 0.56 | 12.00
50 × 5    | 0.03 ± 0.03 | 0.05 ± 0.04 | 0.03 ± 0.02 | 0.03 ± 0.04 | 0.34 ± 0.12 | 0.19 ± 0.18 | 7.50
50 × 10   | 0.54 ± 0.22 | 0.86 ± 0.27 | 0.96 ± 0.18 | 0.48 ± 0.29 | 2.65 ± 0.54 | 2.02 ± 1.07 | 15.00
50 × 20   | 0.89 ± 0.34 | 1.16 ± 0.32 | 1.36 ± 0.23 | 0.67 ± 0.34 | 3.34 ± 0.42 | 2.78 ± 0.71 | 30.00
100 × 5   | 0.09 ± 0.06 | 0.14 ± 0.06 | 0.15 ± 0.05 | 0.13 ± 0.08 | 0.38 ± 0.07 | 0.23 ± 0.16 | 15.00
100 × 10  | 0.45 ± 0.19 | 0.67 ± 0.16 | 0.87 ± 0.17 | 0.48 ± 0.19 | 1.57 ± 0.23 | 0.89 ± 0.35 | 30.00
100 × 20  | 0.69 ± 0.31 | 1.28 ± 0.24 | 1.79 ± 0.25 | 0.67 ± 0.28 | 3.13 ± 0.27 | 1.65 ± 0.53 | 60.00
200 × 10  | 0.25 ± 0.16 | 0.80 ± 0.18 | 1.00 ± 0.19 | 0.54 ± 0.15 | 1.70 ± 0.12 | 0.66 ± 0.22 | 60.00
200 × 20  | 0.51 ± 0.29 | 1.67 ± 0.18 | 2.07 ± 0.31 | 0.72 ± 0.22 | 2.66 ± 0.15 | 1.03 ± 0.37 | 120.00
500 × 20  | 0.25 ± 0.17 | 1.74 ± 0.13 | 1.66 ± 0.44 | 1.00 ± 0.10 | 2.01 ± 0.10 | 0.82 ± 0.19 | 300.00
Average   | 0.32 ± 0.16 | 0.71 ± 0.14 | 0.83 ± 0.16 | 0.41 ± 0.15 | 1.70 ± 0.28 | 1.14 ± 0.44 | 54.88
Table 1c. The PRE ± SD values of the algorithms (B = 3).

Instances | HDDE        | HDDEnl      | W_DDE       | P_DDE       | HGA         | HPSO        | CPU time (s)
20 × 5    | 0.03 ± 0.02 | 0.03 ± 0.02 | 0.04 ± 0.01 | 0.03 ± 0.02 | 0.48 ± 0.30 | 0.46 ± 0.44 | 3.00
20 × 10   | 0.07 ± 0.07 | 0.08 ± 0.07 | 0.04 ± 0.05 | 0.04 ± 0.06 | 1.03 ± 0.58 | 1.45 ± 0.57 | 6.00
20 × 20   | 0.08 ± 0.07 | 0.08 ± 0.07 | 0.04 ± 0.04 | 0.08 ± 0.08 | 0.92 ± 0.39 | 1.44 ± 0.54 | 12.00
50 × 5    | 0.01 ± 0.01 | 0.02 ± 0.03 | 0.01 ± 0.01 | 0.01 ± 0.01 | 0.15 ± 0.09 | 0.11 ± 0.08 | 7.50
50 × 10   | 0.40 ± 0.18 | 0.64 ± 0.20 | 0.78 ± 0.17 | 0.37 ± 0.24 | 2.32 ± 0.47 | 1.66 ± 0.91 | 15.00
50 × 20   | 0.87 ± 0.32 | 1.14 ± 0.29 | 1.33 ± 0.22 | 0.61 ± 0.32 | 3.20 ± 0.45 | 2.65 ± 0.71 | 30.00
100 × 5   | 0.04 ± 0.03 | 0.07 ± 0.05 | 0.06 ± 0.04 | 0.07 ± 0.05 | 0.24 ± 0.06 | 0.16 ± 0.10 | 15.00
100 × 10  | 0.24 ± 0.15 | 0.53 ± 0.16 | 0.63 ± 0.14 | 0.38 ± 0.21 | 1.22 ± 0.22 | 0.67 ± 0.29 | 30.00
100 × 20  | 0.66 ± 0.28 | 1.29 ± 0.24 | 1.71 ± 0.25 | 0.73 ± 0.24 | 3.17 ± 0.29 | 1.50 ± 0.58 | 60.00
200 × 10  | 0.20 ± 0.12 | 0.46 ± 0.13 | 0.58 ± 0.12 | 0.38 ± 0.15 | 0.93 ± 0.11 | 0.39 ± 0.17 | 60.00
200 × 20  | 0.38 ± 0.20 | 1.46 ± 0.17 | 1.87 ± 0.26 | 0.70 ± 0.18 | 2.29 ± 0.15 | 0.74 ± 0.33 | 120.00
500 × 20  | 0.27 ± 0.16 | 1.92 ± 0.10 | 1.91 ± 0.40 | 1.10 ± 0.10 | 2.08 ± 0.09 | 0.75 ± 0.19 | 300.00
Average   | 0.27 ± 0.13 | 0.64 ± 0.13 | 0.75 ± 0.14 | 0.38 ± 0.14 | 1.50 ± 0.27 | 1.00 ± 0.41 | 54.88
Table 1d. The PRE ± SD values of the algorithms (B = 4).

Instances | HDDE        | HDDEnl      | W_DDE       | P_DDE       | HGA         | HPSO        | CPU time (s)
20 × 5    | 0.03 ± 0.02 | 0.04 ± 0.03 | 0.03 ± 0.02 | 0.02 ± 0.02 | 0.51 ± 0.33 | 0.41 ± 0.38 | 3.00
20 × 10   | 0.06 ± 0.07 | 0.08 ± 0.06 | 0.04 ± 0.05 | 0.05 ± 0.06 | 1.03 ± 0.59 | 1.45 ± 0.60 | 6.00
20 × 20   | 0.08 ± 0.07 | 0.08 ± 0.07 | 0.04 ± 0.04 | 0.08 ± 0.08 | 0.92 ± 0.39 | 1.44 ± 0.54 | 12.00
50 × 5    | 0.01 ± 0.01 | 0.02 ± 0.02 | 0.01 ± 0.01 | 0.01 ± 0.01 | 0.16 ± 0.09 | 0.12 ± 0.08 | 7.50
50 × 10   | 0.42 ± 0.17 | 0.69 ± 0.23 | 0.80 ± 0.18 | 0.38 ± 0.21 | 2.18 ± 0.52 | 1.55 ± 0.86 | 15.00
50 × 20   | 0.86 ± 0.32 | 1.14 ± 0.30 | 1.37 ± 0.23 | 0.68 ± 0.33 | 3.19 ± 0.47 | 2.53 ± 0.76 | 30.00
100 × 5   | 0.02 ± 0.02 | 0.03 ± 0.03 | 0.03 ± 0.03 | 0.04 ± 0.03 | 0.16 ± 0.05 | 0.12 ± 0.08 | 15.00
100 × 10  | 0.27 ± 0.15 | 0.51 ± 0.16 | 0.62 ± 0.14 | 0.37 ± 0.19 | 1.24 ± 0.25 | 0.64 ± 0.33 | 30.00
100 × 20  | 0.75 ± 0.27 | 1.38 ± 0.23 | 1.84 ± 0.21 | 0.84 ± 0.25 | 3.20 ± 0.28 | 1.59 ± 0.62 | 60.00
200 × 10  | 0.16 ± 0.07 | 0.37 ± 0.11 | 0.52 ± 0.10 | 0.28 ± 0.16 | 0.74 ± 0.17 | 0.32 ± 0.17 | 60.00
200 × 20  | 0.39 ± 0.22 | 1.54 ± 0.17 | 1.98 ± 0.27 | 0.78 ± 0.21 | 2.27 ± 0.19 | 0.82 ± 0.24 | 120.00
500 × 20  | 0.18 ± 0.11 | 1.40 ± 0.10 | 1.47 ± 0.38 | 0.74 ± 0.08 | 1.47 ± 0.09 | 0.39 ± 0.17 | 300.00
Average   | 0.27 ± 0.13 | 0.61 ± 0.13 | 0.73 ± 0.14 | 0.36 ± 0.14 | 1.42 ± 0.28 | 0.95 ± 0.40 | 54.88
Table 2. The Wilcoxon two-sided rank sum test results of the compared algorithms (p / h).

B | (HDDE, HDDEnl)    | (HDDE, W_DDE)     | (HDDE, P_DDE)     | (HDDE, HGA)       | (HDDE, HPSO)
1 | 5.775521e−203 / 1 | 1.812295e−271 / 1 | 2.260265e−065 / 1 | 0.000000e+000 / 1 | 0.000000e+000 / 1
2 | 1.416523e−102 / 1 | 4.054473e−117 / 1 | 6.149285e−017 / 1 | 0.000000e+000 / 1 | 0.000000e+000 / 1
3 | 3.034319e−100 / 1 | 1.832264e−106 / 1 | 9.369268e−020 / 1 | 0.000000e+000 / 1 | 0.000000e+000 / 1
4 | 3.987542e−089 / 1 | 2.304243e−095 / 1 | 2.868328e−016 / 1 | 0.000000e+000 / 1 | 9.129378e−305 / 1
Table 3a. The success rates of the algorithms (B = 1).

Instances | HDDE  | HDDEnl | W_DDE  | P_DDE | HGA   | HPSO
20 × 5    | 97.67 | 90.67  | 97.00  | 95.00 | 46.00 | 33.00
20 × 10   | 95.33 | 92.67  | 98.33  | 95.00 | 22.00 | 3.33
20 × 20   | 98.67 | 97.00  | 100.00 | 97.33 | 8.67  | 1.33
50 × 5    | 94.33 | 78.67  | 55.00  | 88.33 | 26.00 | 30.00
50 × 10   | 25.67 | 7.67   | 0.33   | 29.67 | 0.00  | 0.33
50 × 20   | 15.67 | 5.00   | 0.33   | 28.67 | 0.00  | 0.00
100 × 5   | 59.67 | 4.33   | 0.33   | 9.00  | 0.00  | 3.33
100 × 10  | 36.00 | 3.67   | 0.00   | 21.00 | 0.00  | 1.67
100 × 20  | 13.67 | 1.00   | 0.00   | 16.67 | 0.00  | 0.00
200 × 10  | 60.00 | 0.00   | 0.00   | 1.00  | 0.00  | 6.00
200 × 20  | 53.00 | 0.00   | 0.00   | 14.00 | 0.00  | 4.00
500 × 20  | 91.00 | 0.00   | 0.00   | 0.00  | 0.00  | 2.00
Average   | 61.72 | 31.72  | 29.28  | 41.31 | 8.56  | 7.08
Table 3b. The success rates of the algorithms (B = 2).

Instances | HDDE   | HDDEnl | W_DDE  | P_DDE  | HGA   | HPSO
20 × 5    | 99.33  | 99.00  | 100.00 | 100.00 | 61.33 | 65.00
20 × 10   | 99.67  | 99.67  | 100.00 | 99.67  | 22.67 | 9.00
20 × 20   | 100.00 | 100.00 | 100.00 | 99.67  | 14.00 | 7.33
50 × 5    | 100.00 | 100.00 | 100.00 | 100.00 | 79.67 | 93.67
50 × 10   | 49.67  | 24.67  | 19.33  | 58.00  | 0.00  | 10.67
50 × 20   | 13.00  | 3.67   | 0.33   | 35.33  | 0.00  | 0.00
100 × 5   | 99.33  | 97.33  | 98.33  | 96.67  | 59.00 | 84.00
100 × 10  | 64.33  | 40.00  | 24.67  | 59.67  | 5.67  | 28.33
100 × 20  | 29.00  | 0.67   | 0.00   | 29.00  | 0.00  | 2.00
200 × 10  | 92.00  | 32.00  | 10.00  | 48.00  | 0.00  | 42.00
200 × 20  | 46.00  | 0.00   | 0.00   | 19.00  | 0.00  | 7.00
500 × 20  | 90.00  | 0.00   | 0.00   | 0.00   | 0.00  | 5.00
Average   | 73.53  | 49.75  | 46.06  | 62.08  | 20.19 | 29.50
Table 3c. The success rates of the algorithms (B = 3).

Instances | HDDE   | HDDEnl | W_DDE  | P_DDE  | HGA   | HPSO
20 × 5    | 100.00 | 100.00 | 100.00 | 100.00 | 67.00 | 64.33
20 × 10   | 99.00  | 99.67  | 99.33  | 98.67  | 23.67 | 7.33
20 × 20   | 100.00 | 100.00 | 100.00 | 99.67  | 14.33 | 8.00
50 × 5    | 100.00 | 100.00 | 100.00 | 100.00 | 92.67 | 95.67
50 × 10   | 68.67  | 43.33  | 30.67  | 71.33  | 2.00  | 17.67
50 × 20   | 16.67  | 4.67   | 0.00   | 44.33  | 0.00  | 0.00
100 × 5   | 100.00 | 100.00 | 100.00 | 100.00 | 89.67 | 95.33
100 × 10  | 80.33  | 43.67  | 35.33  | 65.00  | 11.33 | 36.67
100 × 20  | 27.00  | 0.00   | 0.00   | 19.33  | 0.00  | 4.67
200 × 10  | 98.00  | 57.00  | 37.00  | 79.00  | 12.00 | 77.00
200 × 20  | 70.00  | 0.00   | 0.00   | 25.00  | 0.00  | 27.00
500 × 20  | 90.00  | 0.00   | 0.00   | 0.00   | 0.00  | 13.00
Average   | 79.14  | 54.03  | 50.19  | 66.86  | 26.06 | 37.22
Table 3d. The success rates of the algorithms (B = 4).

Instances | HDDE   | HDDEnl | W_DDE  | P_DDE  | HGA    | HPSO
20 × 5    | 100.00 | 99.33  | 100.00 | 100.00 | 64.67  | 67.67
20 × 10   | 99.67  | 99.67  | 99.33  | 99.33  | 24.33  | 7.67
20 × 20   | 100.00 | 100.00 | 100.00 | 99.67  | 14.33  | 8.00
50 × 5    | 100.00 | 100.00 | 100.00 | 100.00 | 94.67  | 98.00
50 × 10   | 63.00  | 38.00  | 27.33  | 67.33  | 3.00   | 18.00
50 × 20   | 18.00  | 4.00   | 0.33   | 35.67  | 0.00   | 0.00
100 × 5   | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
100 × 10  | 82.33  | 48.67  | 38.33  | 70.00  | 4.00   | 42.00
100 × 20  | 19.00  | 1.00   | 0.00   | 12.33  | 0.00   | 4.67
200 × 10  | 96.00  | 76.00  | 57.00  | 82.00  | 28.00  | 75.00
200 × 20  | 64.00  | 0.00   | 0.00   | 15.00  | 0.00   | 12.00
500 × 20  | 95.00  | 0.00   | 1.00   | 23.00  | 0.00   | 73.00
Average   | 78.08  | 55.56  | 51.94  | 67.03  | 27.75  | 42.17
Fig. 2a. The convergence curves of instance Ta45 (B = 1).
Fig. 2b. The convergence curves of instance Ta45 (B = 2).
Fig. 2c. The convergence curves of instance Ta45 (B = 3).
termination criterion for all the algorithms), it is fair to say that the comparison is a reasonable one. This termination criterion has been increasingly used in the recent literature on scheduling problems [20,23,24,30,31].
Fig. 2d. The convergence curves of instance Ta45 (B = 4).
Fig. 3a. The convergence curves of instance Ta85 (B = 1).
For each instance, every algorithm was run for 30 independent replications, and in each replication the percentage relative error (PRE) was calculated as follows:
Fig. 3b. The convergence curves of instance Ta85 (B = 2).
Fig. 3c. The convergence curves of instance Ta85 (B = 3).
Fig. 3d. The convergence curves of instance Ta85 (B = 4).
Fig. 4a. The convergence curves of instance Ta105 (B = 1).
Fig. 4b. The convergence curves of instance Ta105 (B = 2).
Fig. 4c. The convergence curves of instance Ta105 (B = 3).
PRE(A) = (C_A − C_b) / C_b × 100    (21)
where C_b is the best makespan value found by any of the compared algorithms, and C_A is the makespan value obtained by a certain algorithm A. Obviously, the smaller the PRE(A) value is, the better the result produced by algorithm A. Moreover, the average percentage relative error (APRE) and standard deviation (SD) over 30 replications for each problem size are also calculated as statistics for the solution quality.

5.2. Comparison of the algorithms

There are several existing meta-heuristics in the literature for solving the flow shop scheduling with intermediate buffers to minimize makespan. Typically, Nowicki [13] presented a TS approach using a non-trivial generalization of the block elimination properties known for the classic flow shop problem, and provided excellent results for problems with up to 200 jobs and 20 machines. Wang et al. [34] proposed an effective hybrid genetic algorithm (HGA) by combining multiple genetic operators based on an evolutionary mechanism with a local search based on the block property. Liu et al. [8] developed a hybrid particle swarm optimization (HPSO) algorithm, where both a local search based on the block property and a local search based on simulated annealing with self-adaptive multiple neighborhood structures were applied to stress the exploitation of the HPSO algorithm. According to the computational experiments in [8,34], both algorithms performed better than the TS approach, and especially HPSO outperformed HGA for most of the instances. Recently, several discrete variants of the DE algorithm have been developed for flow shop scheduling problems. Pan et al. [17] proposed a DDE algorithm, denoted as P_DDE, for the classic permutation flow shop scheduling problem with both makespan and total flow time criteria, whereas Wang et al. [33] developed another DDE algorithm (W_DDE for short) to minimize makespan for the permutation flow shop scheduling problem with the blocking constraint. Computational simulations and comparisons demonstrated that both algorithms were effective and efficient for the problems considered. In this section, we adapt the above two DDE algorithms to minimize makespan for the permutation flow shop scheduling problem with intermediate buffers by modifying the objective functions. We compared the HGA, HPSO, P_DDE and W_DDE algorithms with the proposed HDDE algorithm. Table 1 summarizes the average computational results over each instance size for these algorithms. The best results are shown in bold. To further demonstrate the effectiveness of combining DDE-based exploration and local search based exploitation, the computational results generated by the HDDE algorithm without local search (HDDEnl for short) are also included in Table 1. It can be found from Table 1 that, under the same computational requirements, the overall mean PRE values yielded by the HDDE algorithm are equal to 0.45%, 0.32%, 0.27% and 0.27% when the buffer size is equal to 1, 2, 3, and 4, respectively, which
Fig. 4d. The convergence curves of instance Ta105 (B = 4).
are significantly lower than those (1.17%, 0.83%, 0.75% and 0.73%) generated by the W_DDE algorithm, those (0.65%, 0.41%, 0.38% and 0.36%) by the P_DDE algorithm, those (2.35%, 1.70%, 1.50% and 1.42%) by the HGA and those (1.62%, 1.14%, 1.00% and 0.95%) by the HPSO algorithm. In particular, the HDDE algorithm produced much better APRE values than both the HGA and HPSO algorithms for all the problem sizes and all buffer sizes as well. For the problems with large sizes, including 200 × 10, 200 × 20 and 500 × 20, the HDDE algorithm surpassed all the other compared algorithms. Out of the 12 instance sizes, the HDDE algorithm achieved the lowest APRE values on 8 (B = 1), 6 (B = 2), 8 (B = 3) and 7 (B = 4) of them, whereas the W_DDE algorithm obtained the lowest APRE values on 3 (B = 1), 3 (B = 2), 3 (B = 3) and 3 (B = 4), and the P_DDE algorithm on 2 (B = 1), 4 (B = 2), 5 (B = 3) and 4 (B = 4). None of the lowest APRE values was yielded by the HGA and HPSO algorithms. In addition, the HDDE algorithm generated much lower overall mean SD values for each buffer size, which leads to the conclusion that the HDDE algorithm is robust with respect to initialization. For a further justification, two-sided Wilcoxon rank sum tests (equivalent to the Mann–Whitney U test) with a significance level of 0.05 were conducted for the HDDE algorithm against the HDDEnl, W_DDE, P_DDE, HGA and HPSO algorithms, respectively. The returned p and h values are reported in Table 2, where p is the probability of observing the given result by chance if the null hypothesis (that the results of the two compared algorithms are equal) is true. Small values of p cast doubt on the validity of the null hypothesis. An h = 1 indicates that the results obtained by the two compared algorithms are significantly different, while h = 0 implies that the difference between the two algorithms is not significant at the 5% significance level. Obviously, the results in Table 2 confirm the statistical differences in favor of the HDDE algorithm over the other compared algorithms. The HDDEnl algorithm performed better than the W_DDE, HGA and HPSO algorithms, indicating the effectiveness of the DE operators proposed in this paper. Interestingly, the HDDEnl algorithm produced a better APRE value than the HDDE algorithm on the instances of size 20 × 10 when the buffer size is B = 2. This might be due to the fact that the solution landscapes of these instances favor exploitation over a large scope, whereas the HDDE algorithm consumed a portion of its CPU time on the local search, so correspondingly the time available for such exploitation was less than that of the HDDEnl algorithm. However, on average, the HDDEnl algorithm was outperformed by the HDDE algorithm, which demonstrates the effectiveness of incorporating a local search into the HDDE algorithm. In other words, the success of the HDDE algorithm stems from the hybridization of the local search and the permutation based DE search. That is, the superiority in terms of search ability and efficiency of the HDDE algorithm is attributed to the combination of, as well as the balance between, exploration and exploitation. To analyze the convergence of the compared algorithms, the success rate is reported in Table 3. The success of an algorithm means that the algorithm produces a PRE value no worse than 0.5% within the given maximum CPU time. The success rate was calculated as the number of successful runs divided by the total number of runs and then multiplied by 100.
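For illustration, the following small Python sketch (hypothetical names and data, not the authors' code) shows how the PRE of Eq. (21) and the success rate reported in Table 3 can be computed from a set of replications.

def pre(c_a, c_b):
    # percentage relative error of a makespan c_a against the best value c_b found by any algorithm
    return (c_a - c_b) / c_b * 100.0

def success_rate(makespans, c_b, threshold=0.5):
    # share of replications (in %) whose PRE is no worse than the 0.5% threshold
    hits = sum(1 for c in makespans if pre(c, c_b) <= threshold)
    return 100.0 * hits / len(makespans)

# e.g. 30 replications of one instance with a best-known makespan of 3100 (hypothetical data)
print(success_rate([3105, 3110, 3152, 3100, 3121] * 6, c_b=3100))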
It can be observed from Table 3 that the overall success rate of the HDDE algorithm is much higher than those of the other compared algorithms for all buffer sizes. Several typical convergence curves are illustrated in Figs. 2, 3 and 4 in terms of the best makespan value of each algorithm, showing that on average the HDDE algorithm reaches lower levels than those of the W_DDE, P_DDE, HGA and HPSO algorithms. For the other instances, the convergence characteristics of the compared algorithms were similar. All in all, it can be concluded that the proposed HDDE algorithm outperformed the W_DDE, P_DDE, HGA and HPSO algorithms under the same time requirements for the flow shop scheduling with intermediate buffers to minimize makespan.

6. Conclusions

This paper presented a hybrid discrete differential evolution (HDDE) algorithm to deal with the minimization of makespan for the flow shop scheduling problem with intermediate buffers, which has a significant application value in modern manufacturing environments. With the discrete job permutation representation, the HDDE algorithm adopted permutation based mutation and crossover operators as well as a modified one-to-one selection scheme with probabilistic jumping to perform evolutionary search within the mechanism of differential evolution. Moreover, a well developed local search algorithm, which uses two neighborhood structures in a reasonably hybrid way, was incorporated in the algorithm to enhance the intensification search and to balance exploration and exploitation. Computational simulations and comparisons demonstrated the superiority of the proposed HDDE algorithm in terms of solution quality and robustness. Future work is to develop adaptive HDDE algorithms with a learning mechanism and to apply the algorithms to other kinds of combinatorial optimization problems.

Acknowledgements

This research is partially supported by the National Science Foundation of China (Grants Nos. 60874075, 70871065, 60774082, 60973086, 68034004) and the Program for New Century Excellent Talents in University (NCET-10-0505).

References

[1] P. Brucker, S. Heitmann, J. Hurink, Flow-shops with intermediate buffers, OR Spectrum 25 (2003) 549–574.
[2] F.C. Chang, H.C. Huang, A refactoring method for cache-efficient swarm intelligence algorithms, Information Sciences, doi:10.1016/j.ins.2010.02.025.
[3] L.K. Duclos, M.S. Spencer, The impact of a constraint buffer in a flow shop, International Journal of Production Economics 42 (1995) 175–185.
[4] J. Grabowski, J. Pempera, Some local search algorithms for no-wait flow-shop problem with makespan criterion, Computers and Operations Research 32 (2005) 2197–2212.
[5] J. Grabowski, J. Pempera, The permutation flow shop problem with blocking. A tabu search approach, OMEGA 35 (2007) 302–311.
[6] N.G. Hall, C. Sriskandarajah, A survey of machine scheduling problems with blocking and no-wait in process, Operations Research 44 (1996) 510–525.
[7] R. Leisten, Flow shop sequencing problems with limited buffer storage, International Journal of Production Research 28 (1990) 2085–2100.
[8] B. Liu, L. Wang, Y.H. Jin, An effective hybrid PSO-based algorithm for flow shop scheduling with limited buffers, Computers and Operations Research 35 (2008) 2791–2806.
[9] F. Liu, Y. Qi, Z. Xia, H. Hao, Discrete differential evolution algorithm for the job shop scheduling problem, in: GEC '09, Shanghai, China, 2009, pp. 879–882.
[10] N. Mladenovic, P. Hansen, Variable neighborhood search, Computers and Operations Research 24 (1997) 1097–1100.
[11] M. Nawaz, E.E.J. Enscore, I. Ham, A heuristic algorithm for the m-machine, n-job flow shop sequencing problem, OMEGA 11 (1983) 91–95.
[12] B.A. Norman, Scheduling flowshops with finite buffers and sequence-dependent setup times, Computers and Industrial Engineering 36 (1996) 163–177.
[13] E. Nowicki, The permutation flow shop with buffers: a tabu search approach, European Journal of Operational Research 116 (1999) 205–219.
[14] G.C. Onwubolu, D. Davendra, Scheduling flow shops using differential evolution algorithm, European Journal of Operational Research 171 (2006) 674–692.
[15] I. Osman, C. Potts, Simulated annealing for permutation flow shop scheduling, OMEGA 17 (1989) 551–557.
[16] Q.K. Pan, P.N. Suganthan, M.F. Tasgetiren, T.J. Chua, A novel artificial bee colony algorithm for a lot-streaming flow shop scheduling problem, Information Sciences (2010), doi:10.1016/j.ins.2009.12.025.
[17] Q.K. Pan, M.F. Tasgetiren, Y.C. Liang, A discrete differential evolution algorithm for the permutation flowshop scheduling problem, Computers and Industrial Engineering 55 (2008) 795–816.
[18] C.H. Papadimitriou, P.C. Kanellakis, Flow shop scheduling with limited temporary storage, Journal of the Association for Computing Machinery 27 (1980) 533–549.
[19] M. Pinedo, Scheduling: Theory, Algorithms, and Systems, Prentice-Hall, New Jersey, 2002.
[20] B. Naderi, R. Ruiz, M. Zandieh, Algorithms for a realistic variant of flowshop scheduling, Computers and Operations Research 37 (2010) 236–246.
[21] B. Qian, L. Wang, R. Hu, et al., A hybrid differential evolution for permutation flow-shop scheduling, International Journal of Advanced Manufacturing Technology 38 (2008) 757–777.
[22] B. Qian, L. Wang, D.X. Huang, W.L. Wang, X. Wang, An effective hybrid DE-based algorithm for multi-objective flow shop scheduling with limited buffers, Computers and Operations Research 36 (2009) 209–233.
[23] R. Ruiz, C. Maroto, J. Alcaraz, Two new genetic algorithms for the flowshop scheduling problem, OMEGA – The International Journal of Management Science 34 (2006) 461–476.
[24] R. Ruiz, T. Stutzle, A simple and effective iterated greedy algorithm for the permutation flowshop scheduling problem, European Journal of Operational Research 177 (2007) 2033–2049.
[25] B. Ruzek, M. Kvasnicka, Differential evolution in the earthquake hypocenter location, Pure and Applied Geophysics 158 (2001) 667–693.
[26] C. Smutnicki, A two-machine permutation flow shop scheduling problem with buffers, OR Spectrum 20 (1998) 229–235.
[27] R. Storn, K. Price, Differential evolution – a simple and efficient adaptive scheme for global optimization over continuous spaces, Journal of Global Optimization 11 (1997) 341–359.
[28] R. Storn, Designing digital filters with differential evolution, in: D. Corne, M. Dorigo, F. Glover (Eds.), New Ideas in Optimization, McGraw-Hill, London, UK, 1999.
[29] E. Taillard, Benchmarks for basic scheduling problems, European Journal of Operational Research 64 (1993) 278–285.
[30] E. Vallada, R. Ruiz, Genetic algorithms with path relinking for the minimum tardiness permutation flowshop problem, OMEGA 38 (2010) 57–67.
[31] E. Vallada, R. Ruiz, G. Minella, Minimising total tardiness in the m-machine flowshop problem: a review and evaluation of heuristics and metaheuristics, Computers and Operations Research 35 (2008) 1350–1373.
[32] H.W. Thornton, J.L. Hunsucker, A new heuristic for minimal makespan in flow shops with multiple processors and no intermediate storage, European Journal of Operational Research 152 (2004) 96–114.
[33] L. Wang, Q.K. Pan, P.N. Suganthan, et al., A novel hybrid discrete differential evolution algorithm for blocking flow shop scheduling problems, Computers and Operations Research 37 (2010) 509–520.
[34] L. Wang, L. Zhang, D.Z. Zheng, An effective hybrid genetic algorithm for flow shop scheduling with limited buffers, Computers and Operations Research 33 (2006) 2960–2971.