Knowledge-Based Systems 116 (2017) 114–129
A multi-objective artificial bee colony algorithm for parallel batch-processing machine scheduling in fabric dyeing processes

Rui Zhang a,∗, Pei-Chann Chang b, Shiji Song c, Cheng Wu c

a School of Economics and Management, Xiamen University of Technology, Xiamen 361024, PR China
b Department of Information Management, Yuan Ze University, Taoyuan 32003, Taiwan
c Department of Automation, Tsinghua University, Beijing 100084, PR China
Article history: Received 22 April 2016; Revised 16 October 2016; Accepted 31 October 2016; Available online 1 November 2016

Keywords: Production scheduling; Fabric dyeing; Meta-heuristics; Multi-objective optimization
Abstract

Fabric dyeing is a critical production process in the clothing industry. Characterized by high energy consumption and water pollutant emission, dyeing processes need careful scheduling in order to reduce the relevant direct and indirect costs. In this paper, we describe the dyeing process scheduling problem as a bi-objective parallel batch-processing machine scheduling model, in which the first objective function reflects the tardiness cost and the second objective function concerns the utilization rate of dyeing vats. To obtain satisfactory schedules within reasonable time, we propose an efficient multi-objective artificial bee colony (MO-ABC) algorithm to solve the scheduling problem. The proposed algorithm features a specialized encoding scheme, a problem-specific initialization technique and several unique functions to deal with multi-objective optimization. After preliminary tuning of parameters, we use a set of 90 instances with up to 300 jobs to test the MO-ABC algorithm. Extensive experiments show that the MO-ABC outperforms a generic multi-objective scheduling algorithm in terms of both solution quality and computational time robustness. © 2016 Elsevier B.V. All rights reserved.
1. Introduction

A clothing factory typically consists of three sequential departments, i.e., the weaving workshop, the dyeing workshop and the sewing workshop. Among the three production stages, dyeing is often the most critical process (bottleneck) that dominates the entire production progress because it is time-consuming and technologically demanding. Therefore, scheduling of the dyeing processes is of great significance to maintaining high product quality and a high on-time delivery rate for a clothing firm. In addition, since dyeing processes inevitably generate high emissions of water and air pollutants, scheduling plays an important role in controlling the cost of pollution by means of increasing the utilization rate of dyeing equipment. Dyeing process scheduling needs to consider the following factors. Each job is characterized by three attributes (i.e., color, weight and due date). Jobs with the same color belong to a family and have an identical processing time. Several jobs from the same family can be processed as a batch in the same vat (dyeing machine) as long as the total weight does not exceed the capacity of the vat. However, jobs from different families can never be processed
∗ Corresponding author. E-mail address: [email protected] (R. Zhang).
http://dx.doi.org/10.1016/j.knosys.2016.10.026
0950-7051/© 2016 Elsevier B.V. All rights reserved.
together in the same vat (such families are called incompatible). The available dyeing vats in the workshop have different capacities and can be utilized simultaneously.

1.1. The batch-processing machine scheduling problem

The scheduling model that best fits the characteristics of dyeing processes is known in the literature as the batch-processing machine scheduling problem. A batch-processing machine can process several jobs simultaneously as a batch (subject to limited capacity). Therefore, batch-processing machine scheduling models inherently integrate the batching problem and the sequencing problem, and thus are more difficult to solve than ordinary scheduling problems that require only sequencing decisions. It has been shown in [1] that makespan minimization on a single batch-processing machine with non-identical job sizes is equivalent to a bin-packing problem, which is NP-hard in the strong sense. In addition, given the fact that scheduling of ordinary parallel machines for makespan minimization is also NP-hard (even for two machines) and the fact that makespan minimization can be reduced to (and thus can be regarded as a special case of) total tardiness minimization [2], we may conclude that parallel batch-processing machine scheduling with the total tardiness criterion, as discussed in this paper, is a highly complex and challenging problem.
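To make the bin-packing connection concrete, the following small sketch (ours, not taken from [1]) packs same-family jobs into capacity-limited batches with a first-fit rule; the job weights and the 100 kg capacity are illustrative assumptions, not data from the paper.

```python
def first_fit_batches(weights, capacity):
    """Pack job weights (one color family) into batches, first-fit:
    each batch plays the role of a 'bin' of the given capacity."""
    batches = []                      # each batch is a list of job weights
    for w in weights:
        for b in batches:
            if sum(b) + w <= capacity:
                b.append(w)           # job fits into an open batch
                break
        else:
            batches.append([w])       # open a new batch (a new "bin")
    return batches

# Five jobs of one family, weights in kg; vat capacity 100 kg (illustrative).
print(first_fit_batches([70, 30, 40, 90, 50], capacity=100))
# -> [[70, 30], [40, 50], [90]]: three batches, i.e., three "bins" used
```

Minimizing the number of batches (and hence the makespan on one machine) is exactly minimizing the number of bins, which is why the problem inherits the strong NP-hardness of bin packing.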
From the perspective of real-world applications, there has been increasing interest in parallel batch-processing machine scheduling problems because of their considerable application potential. Such scheduling problems are often encountered in contemporary manufacturing industries, such as chemical and mineral processing, pharmaceutical and mechanical production, semiconductor manufacturing, etc. [3]. To give a glimpse of the literature, some representative works are introduced below as examples of research in this line. An ant colony optimization meta-heuristic is proposed in [4] to schedule a single batch-processing machine with arbitrary job sizes and incompatible job families for minimizing total weighted completion time. A fuzzy goal programming approach is presented in [5] for integrated loading and scheduling of a batch-processing machine considering both short-term and long-term objectives. A combined scheduling algorithm based on simulation and integer programming is proposed in [6] to address the problem of minimizing total weighted tardiness on a re-entrant batch-processing machine with incompatible job families in semiconductor wafer fabrication processes. A genetic algorithm is designed in [7] to schedule a set of identical batch-processing machines in parallel with the aim of minimizing makespan. A genetic algorithm, an ant colony optimization approach and a large neighborhood search approach are described in [8] for solving a scheduling problem for parallel identical batch-processing machines with incompatible job families to minimize the total weighted tardiness. A linear-programming algorithm, an integer-programming algorithm and a heuristic algorithm are proposed in [9] to schedule non-homogeneous parallel batch-processing machines with non-identical job sizes and incompatible job families in semiconductor manufacturing systems.
An online version of the scheduling problem for unbounded parallel batch-processing machines and equal-length jobs is studied in [10] with the objective of minimizing makespan. Besides genetic algorithms, other meta-heuristics such as discrete particle swarm optimization [11], discrete differential evolution [12], and the max–min ant system [13] have also been applied to parallel batch-processing machine scheduling problems. All of the above works deal with single-objective optimization models for batch-processing machine scheduling. In fact, only a few publications are concerned with multi-objective versions of the scheduling problem. In [14], the authors propose two hybrid multi-objective genetic algorithms based on different representation schemes to address a single batch-processing machine scheduling problem with the bi-criteria of makespan and maximum tardiness. In [15], the authors design an ant colony optimization algorithm to solve a parallel batch-processing machine scheduling problem with incompatible job families and dynamic job arrivals while taking into account two objective functions, namely, total weighted tardiness and makespan. In [16], the authors present a multi-objective imperialist competitive algorithm for scheduling identical parallel batch-processing machines with arbitrary job sizes and unequal release times in order to minimize the makespan and the total weighted earliness and tardiness of jobs (JIT). In [17], the authors develop an efficient meta-heuristic algorithm based on tabu search with multi-level diversification to address a batch scheduling problem on a set of unrelated parallel machines with the objective of minimizing a linear combination of total weighted completion time and total weighted tardiness. As can be seen from the above, batch-processing machine scheduling problems can be divided into several subclasses according to the specifics of machines and jobs.
In terms of machines, we can differentiate between the single machine setting and the parallel machines setting. When discussing parallel machines, we can further differentiate between identical parallel machines and non-identical/heterogeneous parallel machines. In terms of jobs, we can distinguish between the case of identical/equal job sizes and the case of arbitrary/unequal job sizes, and we can also distinguish between the case of compatible job families and the case of incompatible job families. In each pair of classifications stated above, the latter type represents a higher level of scheduling complexity than the former. It is thus easily seen that the dyeing process scheduling problem belongs to the most complex category, i.e., parallel batch-processing machine scheduling with heterogeneous parallel machines, arbitrary job sizes and incompatible job families.

1.2. The artificial bee colony optimization algorithm

To solve large-scale scheduling problems with NP-hard complexity, most researchers nowadays resort to meta-heuristic algorithms because of their computational efficiency and controllable precision. Among various types of meta-heuristics, the artificial bee colony (ABC) optimization algorithm is relatively new, but it has been applied to many engineering optimization problems with noticeable success. The ABC algorithm, originally proposed by Karaboga in 2005 for optimizing multi-variable and multi-modal continuous functions [18], simulates the cooperative foraging behavior of a swarm of honey bees [19]. Later research has revealed some good properties of the ABC [20–22]. In particular, the ABC involves far fewer control parameters than many other population-based meta-heuristics, which makes it easier to implement and more reliable for engineering purposes. Therefore, the ABC has become a popular algorithm in the optimization community for solving problems such as production scheduling [23,24], vehicle routing [25], transit route design [26], land-use allocation [27], portfolio selection [28], and dynamic optimization problems [29]. In real-world applications, optimization problems often involve more than one objective, and these objectives have to be optimized simultaneously.
Extensive research efforts have been made to address such problems. For example, an approximate algorithm has been proposed in [30] for solving fuzzy multi-objective linear programming problems with fuzzy parameters in both objective functions and constraints, and a decision support system (DSS) has been developed based on the algorithm. Focusing on meta-heuristics, a number of efforts have been made to adapt the ABC algorithm for multi-objective optimization problems. In [31], a multi-objective artificial bee colony (MOABC) algorithm is proposed, which features a grid-based approach to adaptively assess the Pareto front maintained in an external archive. The external archive is used to control the flying behaviors of the bees and the structure of the bee colony. The vector evaluated artificial bee colony (VEABC) algorithm proposed in [32] organizes multiple bee colonies based on the number of objectives to be optimized. Each colony separately evaluates one single objective, and the colonies exchange information so as to obtain the optimal solution set. In [33], the author proposes three multi-objective ABC algorithms based on synchronous/asynchronous models and Pareto non-dominated sorting, called A-MOABC/PD, A-MOABC/NS and S-MOABC/NS, respectively. These algorithms have been evaluated on 10 unconstrained test functions and compared with three algorithms in terms of different performance metrics. An elitism-based multi-objective artificial bee colony (eMOABC) algorithm has recently been proposed [34], which uses a fixed-size archive maintained on the basis of crowding distance to store the non-dominated solutions found during the search process. In the algorithm, an improved elitism strategy is utilized for the purpose of avoiding premature convergence. It is worth pointing out that another multi-objective ABC algorithm based on similar ideas (elite-guided) is presented in [35].
In the so-called EMOABC algorithm, a novel elite-guided solution generation strategy is proposed to accelerate the convergence speed and improve the solution quality. As an example of engineering applications, a new hybrid multi-objective evolutionary algorithm based on the integration of artificial bee colony and differential evolution, called MOABCDE [36], is proposed to solve time–cost–quality tradeoff problems in the planning and management of construction projects. The algorithm combines crossover operators from differential evolution with the ABC framework to balance the exploration and exploitation efforts of the optimization process. For interesting applications of multi-objective ABC algorithms, readers may refer to publications such as [37–41]. In this paper, we propose a multi-objective ABC algorithm specifically for solving the production scheduling problem occurring in fabric dyeing processes. We have chosen ABC as the main framework of our optimization algorithm mainly because of its stable and reliable performance on large-scale scheduling problems. Previous studies like [23] have shown that ABC is robust with respect to parameter settings and converges fast if properly modified by considering problem-specific features. To meet the challenge of this complex scheduling problem, we introduce a series of techniques to build and enhance the multi-objective ABC, including a problem-dependent initialization method which utilizes internal properties of the scheduling problem, an efficient encoding/decoding scheme which can quickly transform solutions between two states (i.e., continuous representations and discrete schedules), and a number of enhanced strategies for handling the evolutionary optimization process (e.g., comparison of solutions, sorting of solutions, selection of solutions, and preservation of elite solutions). These techniques, detailed in Section 3, ensure that the proposed multi-objective ABC performs consistently and reliably on large-scale instances of the parallel batch-processing machine scheduling problem under study.
The remainder of this paper is organized as follows. Section 2 gives a formal description of the parallel batch-processing machine scheduling problem. Section 3 introduces the MO-ABC algorithm which is designed specifically for the problem under investigation. Section 4 presents the full set of computational experiments devoted to fine-tuning and testing the proposed algorithm. Section 5 concludes the paper and discusses some future research ideas.

2. The scheduling problem

2.1. Problem statement

We have n jobs J = {1, 2, ..., n} which are waiting to be processed by m parallel batch-processing machines M = {1, 2, ..., m}. Each job i ∈ J has a processing time of pi, a weight of wi and a due date of di. Meanwhile, each machine k ∈ M has a volume (i.e., weight capacity) of vk. The set of machines capable of handling job i is denoted as Mi, which is determined by two factors, i.e., the volume constraint (a job of 150 kg fabric cannot be handled by a 100 kg dyeing machine) and the technological constraint (certain jobs can only be processed by high-temperature & high-pressure dyeing machines). Once a job is started on a machine, interruption of the current job or insertion of additional jobs into the machine is not allowed. The jobs can be divided into l families (1, 2, ..., l) according to the required dyeing color, and the family index of job i is denoted as ϕ(i). The processing time of a job is exclusively determined by the family it belongs to (and is not related to its own weight), so the jobs within the same family have the same processing time. In other words, if ϕ(i1) = ϕ(i2) = j, then pi1 = pi2 = pj (pj represents the processing time of each job in family j). The jobs within the same family can be processed simultaneously as a batch on a machine provided that the total weight of the batched jobs does not exceed the volume of the machine. The processing time of a batch
is equal to the processing time of any job in the batch (i.e., pj if the jobs in the batch belong to family j). On the contrary, jobs from different families cannot form a batch, which is why the job families are called "incompatible". When a machine is prepared for switching from one job family to another, a pre-specified amount of setup time is incurred (since the dyeing equipment needs cleaning before handling a different color). The length of the setup is determined by the difference between the two adjacent families j1 and j2 (e.g., a thorough cleaning should be performed if a light color immediately follows a dark color) and is denoted as s_{j1 j2}. The definition also implies that s_{j1 j2} = 0 if j1 = j2.

Let π = (π1, π2, ..., πm) represent a feasible solution to the scheduling problem, where πk = (Bk1, Bk2, ..., Bknk) indicates the sequence of batches processed by machine k (Bkh is the hth batch and nk is the total number of batches on machine k). For any job i in batch Bkh, the processing time (resp. completion time) of job i is equal to the processing time (resp. completion time) of batch Bkh. Therefore, the completion time of any job i in Bkh can be calculated as

Ci = p_{jk1} + Σ_{h'=2}^{h} ( s_{jk(h'-1) jkh'} + p_{jkh'} ),

where jkh indicates the family index of the jobs in Bkh. Then, the tardiness of job i is defined as Ti = (Ci − di)+, where (x)+ = max{x, 0}. Meanwhile, the finishing time of machine k (Fk) is defined as the completion time of the last batch on the machine. The working time of a machine (which is equal to its finishing time if each machine starts at time 0) is a measure of operational cost. Hence, if minimizing the total tardiness of jobs is an exterior objective aiming at the satisfaction of customers, minimizing the total finishing time of machines is an interior objective aiming at the energy efficiency of the factory (note that the dyeing industry is characterized by high energy consumption, and a reduction in operation time directly helps to cut the related cost). In addition, minimizing the total finishing time of machines potentially guarantees a high utilization rate for each machine, which is also a key concern in real-world production scheduling (e.g., allocating 90 kg of jobs to a 100 kg dyeing machine is practically considered more efficient than allocating them to a 500 kg dyeing machine, even though the tardiness may be the same).

2.2. The mathematical model

We formulate the studied scheduling problem as a mixed-integer linear programming (MILP) model. The two sets of 0–1 decision variables are defined as follows:
xikh = 1 if job i is processed in Bkh, and 0 otherwise.    (1)

yjkh = 1 if the jobs in Bkh belong to family j, and 0 otherwise.    (2)
where Bkh stands for the hth batch processed on machine k. Then, the MILP formulation is detailed below.
Minimize TT = Σ_{i=1}^{n} Ti    (3)

Minimize TF = Σ_{k=1}^{m} Fk    (4)

subject to

Σ_{i=1}^{n} (wi · xikh) ≤ vk,  k = 1, ..., m, h = 1, ..., n    (5)

Σ_{k=1}^{m} Σ_{h=1}^{n} xikh = 1,  i = 1, ..., n    (6)
Σ_{j=1}^{l} yjkh ≤ 1,  k = 1, ..., m, h = 1, ..., n    (7)

xikh ≤ yϕ(i)kh,  i = 1, ..., n, k = 1, ..., m, h = 1, ..., n    (8)

Σ_{i=1}^{n} xikh ≥ Σ_{j=1}^{l} yjkh,  k = 1, ..., m, h = 1, ..., n    (9)

Σ_{j=1}^{l} yjkh ≥ Σ_{j=1}^{l} yjk(h+1),  k = 1, ..., m, h = 1, ..., n − 1    (10)

Ci ≥ pϕ(i) · xik1,  i = 1, ..., n, k = 1, ..., m    (11)

Ci2 − Ci1 ≥ sϕ(i1)ϕ(i2) + pϕ(i2) + M(xi1kh + xi2k(h+1) − 2),  i1, i2 = 1, ..., n, k = 1, ..., m, h = 1, ..., n − 1    (12)

Ti ≥ Ci − di,  i = 1, ..., n    (13)

Ti ≥ 0,  i = 1, ..., n    (14)

Fk ≥ Ci + M(Σ_{h=1}^{n} xikh − 1),  i = 1, ..., n, k = 1, ..., m    (15)

xikh = 0,  i = 1, ..., n, k ∈ M\Mi, h = 1, ..., n    (16)

xikh, yjkh ∈ {0, 1},  i, h = 1, ..., n, k = 1, ..., m, j = 1, ..., l

Eqs. (3) and (4) define the two objective functions to be minimized, i.e., the total tardiness of jobs (TT) and the total finishing time of machines (TF). Constraint (5) requires that the jobs processed simultaneously in a batch fit the volume of the corresponding machine. Constraint (6) ensures that each job is assigned to exactly one batch. Constraint (7) guarantees that the jobs in each batch (if any) must belong to a single family. Σ_{j=1}^{l} yjkh = 0 corresponds to the case that Bkh is an empty batch (because the number of batches on each machine is undetermined, we have to assume nk = n and enumerate h from 1 to n for each machine, and thus Bkh contains no job when h gets sufficiently large). Constraint (8) means that job i cannot be assigned to Bkh unless the batch has been associated with the same family as job i. Constraint (9) suggests that, if Σ_{j=1}^{l} yjkh = 1 (i.e., Bkh is not empty), at least one job should have been assigned to Bkh. Constraint (10) requires that all the empty batches be placed after all the nonempty batches (placing an empty batch between two nonempty batches is infeasible). Eqs. (11) and (12) define the completion time (Ci) of each job i. No setup is needed before the first batch on a machine. M denotes a very large positive number. Eqs. (13) and (14) define the tardiness (Ti) of each job i. Note that the two inequalities, together with the objective of minimizing Σ_{i=1}^{n} Ti, keep Ti at the correct value (equivalent to the expression Ti = (Ci − di)+). Constraint (15) defines the finishing time (Fk) of each machine k, where M is introduced as above. Constraint (16) reflects the technological constraint: a job should not be assigned to a machine which is incapable of handling it. M\Mi is the set of incapable machines for job i.

2.3. An example problem

Suppose there are two dyeing machines with a volume of 50 kg and 100 kg, respectively. Eight jobs are grouped into three families according to their desired color.

Table 1
The job data for the toy instance.

Job no.   Weight/kg   Due date/h   Family
1         70          5            1
2         30          6            1
3         40          8            2
4         90          9            2
5         50          9            2
6         20          11           3
7         60          12           3
8         30          13           3

The processing time for each job family is: 5 h for family 1, 8 h for family 2 and 10 h for family 3. The setup time matrix [s_{j1 j2}] is
0 4 1
3 0 2
5 3 0
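Constraints (5), (7) and (8) can be illustrated directly on this instance. The snippet below is our own sketch (the variable and function names are illustrative, not from the paper); it checks whether a candidate set of jobs forms a feasible batch on a given machine.

```python
# Instance data of Section 2.3.
weight = {1: 70, 2: 30, 3: 40, 4: 90, 5: 50, 6: 20, 7: 60, 8: 30}  # wi in kg
family = {1: 1, 2: 1, 3: 2, 4: 2, 5: 2, 6: 3, 7: 3, 8: 3}          # phi(i)
vol = {1: 50, 2: 100}                                               # vk in kg

def feasible_batch(batch, k):
    """A nonempty batch is feasible on machine k iff all of its jobs share one
    family (cf. constraints (7)-(8)) and its total weight fits the vat (5)."""
    one_family = len({family[i] for i in batch}) == 1
    fits = sum(weight[i] for i in batch) <= vol[k]
    return one_family and fits

print(feasible_batch([1, 2], 2))  # True: family 1, 70 + 30 = 100 <= 100
print(feasible_batch([1, 2], 1))  # False: 100 kg exceeds the 50 kg vat
print(feasible_batch([2, 3], 2))  # False: families 1 and 2 are incompatible
```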
Each job may be processed by either machine (Mi = M, ∀i). The detailed information on each job is listed in Table 1.

We use IBM ILOG CPLEX with Concert Technology to model and solve the MILP of the above instance. Since CPLEX cannot handle multi-objective optimization directly, we apply the ε-constraint method. Noticing that both objective functions take integer values in this example, we focus on minimizing TT subject to the additional constraint that TF ≤ ε_TF. CPLEX is used to solve the extended model under each fixed value of ε_TF, so that all the Pareto optimal solutions can be found as ε_TF gradually decreases from a sufficiently large value (a loose upper bound for TF can be acquired from any schedule that minimizes TT without a constraint on TF). We have eventually obtained four Pareto optimal solutions for this example problem, and they are displayed in the form of Gantt charts in Fig. 1: solution (a) with TT = 49, TF = 55; solution (b) with TT = 57, TF = 51; solution (c) with TT = 58, TF = 47; solution (d) with TT = 67, TF = 45. It is interesting to note that the average machine utilization rate (defined as the average value of "actual weight of each batch / volume of the corresponding machine") increases from 83.3% in case (a) to 88.0% in case (d). This reveals that TF is a useful indicator of the machine utilization level.

3. The proposed MO-ABC algorithm

3.1. The basic ABC algorithm

In this subsection, we briefly introduce the principles of the standard artificial bee colony algorithm, which form the basis of our MO-ABC algorithm. In the artificial bee colony (ABC) optimization algorithm, artificial bees are divided into three categories: employed bees, onlooker bees and scout bees. The bees currently exploiting a food source are classified as employed bees.
The onlooker bees wait in the hive and observe the feedback from the employed bees, and will finally choose a food source to exploit (become employed) according to the quality of the detected food sources. Naturally, good food sources will attract more bees than bad ones. Scout bees search for new food sources in a random manner in the vicinity of the hive. Once a scout or onlooker bee finds a food source, it becomes employed. When a food source has been fully exploited, all the employed bees working on it will abandon that position, and these bees will become scouts again. Using the terminology of
meta-heuristic optimization, we may conclude that scout bees perform the exploration (or diversification) task, while employed bees perform the exploitation (or intensification) task. In the ABC algorithm, a food source corresponds to a possible solution to the optimization problem, and the quality (nectar amount) of a food source corresponds to the objective value of that solution. The initial colony consists of an equal number of employed bees and onlooker bees, i.e., half are employed and the other half are onlookers. The number of employed bees (SN) is equal to the number of food sources, since it is assumed that only one employed bee is associated with each food source. Obviously, the number of onlooker bees is therefore also equal to the number of food sources currently under consideration. Iterations of the ABC algorithm begin from a set of randomly-generated food sources, and the complete procedure is outlined below.

Step 1: (Initialization phase) Generate a desired number of food sources.
Step 2: (Employed bee phase) Each employed bee is dispatched to work on a specific food source.
Step 3: (Onlooker bee phase) Each onlooker bee chooses a food source according to the information disclosed by the employed bees, and then starts working on it (in employed status).
Step 4: (Scout bee phase) The employed bees who have failed to attain improvement become scouts, and they will search for new food sources randomly.
Step 5: (Termination) Check whether the termination condition is satisfied. If not, go back to Step 2 and iterate.

Fig. 1. Gantt charts of the obtained schedules in the example.

To provide more accurate information, the implementation of each step is detailed as follows.

(1) The initialization phase. Each of the SN initial solutions (i.e., food sources) is represented by a randomly-generated D-dimensional real vector. Let xi = (x_{i,1}, x_{i,2}, ..., x_{i,D}) be the ith solution, which is generated by

x_{i,d} = x_d^min + r × (x_d^max − x_d^min),  d ∈ {1, ..., D},    (17)

where r is a random real number following the uniform distribution U[0, 1], and x_d^min (resp. x_d^max) is the lower limit (resp. upper limit) for the value in dimension d.

(2) The employed bee phase. Each employed bee works on a solution (i.e., food source) by imposing stochastic modifications on the original solution to obtain a new solution. This can be regarded as a way of implementing the neighborhood search function. The new solution vi is produced from the original solution xi by applying a differential expression:

v_{i,d} = x_{i,d} + r × (x_{i,d} − x_{k,d}),    (18)

where d is picked randomly from {1, ..., D}, k is picked randomly from {1, ..., SN}\{i}, and r is a random real number following the uniform distribution U[−1, 1]. After vi has been obtained, it will be compared with xi. If the quality of vi is better than that of xi (or equivalently, the new food source contains a higher amount of nectar), the bee will refocus its attention on vi and forget the previous solution. On the contrary, if the new solution is even worse, the bee will simply stay with the original solution.

(3) The onlooker bee phase. When all the employed bees have completed their work, they share the information about the detected food sources with the onlookers. Then, each onlooker bee will select a food source in a probabilistic manner. The probability Pr_i with which an onlooker bee chooses food source xi is defined as

Pr_i = f_i / Σ_{i=1}^{SN} f_i,    (19)
where fi denotes the fitness value of solution xi (or equivalently, the nectar amount of the corresponding food source). After the onlooker bee has selected solution xi, it will also conduct a neighborhood search around xi by applying Eq. (18). As in the previous case, the new solution will replace the original xi if an improvement occurs.

(4) The scout bee phase. If a solution cannot be improved after a predetermined number (L) of iterations, the corresponding food source is assumed to be abandoned, and the associated employed bee becomes a scout. The scout will then start looking for a new food source randomly by using Eq. (17).

3.2. Encoding and decoding

The encoding scheme used in MO-ABC is based on the random key representation. Each solution is expressed by a vector of n real numbers x = (x1, x2, ..., xn), where xi ∈ ∪_{k∈Mi} (k − 1, k]. The integer part of xi indicates the machine assignment for job i, while the decimal part of xi determines the preferential order of the job in the batching process. In the decoding process, this vector is transformed into a feasible schedule via two steps. The first step obtains m job sequences according to the smallest position value (SPV) rule, and the second step yields the complete schedule by a simple heuristic. To illustrate the first step, Table 2 gives a solution for the example problem introduced in Section 2.3. The decoded sequence information is shown in the last row, where (k, z) indicates that job i is assigned to machine k and ranks at the zth position in the corresponding sequence. In fact, the machine that should process job i is simply k = ⌈xi⌉. Hence, job 1 (x1 = 1.80) is assigned to machine 2 and job 3 (x3 = 0.21) is assigned to machine 1 in this solution. The relative order of the jobs assigned to the same machine is
resolved by sorting the decimal parts in non-decreasing order. For instance, the jobs 1, 2, 4 and 7 should all be processed by machine 2. Furthermore, since x2 < x4 < x7 < x1, the preferential order on machine 2 is (2, 4, 7, 1). Finally, the job sequences Lk (k = 1, ..., m) decoded from this solution can be expressed as

L1: 6, 3, 8, 5
L2: 2, 4, 7, 1

Table 2
Illustration of the first-step decoding process using SPV.

i        1       2       3       4       5       6       7       8
xi       1.80    1.19    0.21    1.32    0.95    0.05    1.54    0.82
(k, z)   (2, 4)  (2, 1)  (1, 2)  (2, 2)  (1, 4)  (1, 1)  (2, 3)  (1, 3)

Then, the second step constructs a feasible schedule by batching the jobs on each machine. The procedure is formally described as follows.

Step 1: Let k = 1.
Step 2: If Lk is empty, go to Step 6. If Lk contains only one job, schedule the job on machine k, empty Lk and go to Step 6.
Step 3: Record the color of job Lk1 (the first job in Lk) as ϕ(Lk1) and its weight as w_{Lk1}. Let W = w_{Lk1}, B = {Lk1} and z = 2.
Step 4: If ϕ(Lkz) = ϕ(Lk1) and W + w_{Lkz} ≤ vk, let W ← W + w_{Lkz} and B ← B ∪ {Lkz}.
Step 5: Let z ← z + 1. If W < vk and z does not exceed the current length of Lk, go back to Step 4. Otherwise, schedule batch B on machine k, delete the scheduled jobs from Lk, and go to Step 2.
Step 6: Let k ← k + 1. If k ≤ m, go to Step 2. Otherwise, exit the procedure.

According to the above procedure, the final schedule decoded from the solution is: B11 = {6, 8}, B12 = {3}, B13 = {5}, B21 = {1, 2}, B22 = {4}, B23 = {7}. The corresponding Gantt chart is shown in Fig. 2.

Fig. 2. Gantt chart of the schedule decoded from the given solution.

3.3. A heuristic for generating initial solutions

To produce a set of initial solutions, we have designed the following heuristic algorithm to replace the completely random initialization method in the basic ABC. The heuristic consists of an iteration of three major steps, i.e., job selection, machine selection, and batch formation.

Step 1: Sort all jobs 1, ..., n according to the earliest due date (EDD) rule. Jobs that share the same due date are sorted by their weights (with larger values prioritized).
Step 2: Select the first unscheduled job from the job list, which is denoted as job i.
Step 3: Determine the set of machines (Mi) that are capable of handling job i.
Step 4: Select a machine from Mi using the criterion introduced below, and denote the selected machine as machine k.
Step 5: Construct a batch for machine k subject to its volume limit vk, by assigning the jobs of family ϕ(i) to the batch in the previously specified order and as many as possible (the jobs which belong to family ϕ(i) but cannot be handled by machine k are excluded). Arrange the production of this batch on machine k. Update the time at which machine k finishes the production and becomes available.
Step 6: If not all jobs have been scheduled, go back to Step 2.

The criterion used to select machine k in Step 4 of the above procedure is detailed as follows. We introduce three heuristic rules for making such a selection, and randomly determine the rule to be adopted each time (switching among the rules to try breaking a tie if necessary).
(Rule 1) For each machine in Mi, evaluate the potential tardiness incurred if job i is scheduled on that machine, and choose the machine that leads to the minimum tardiness.
(Rule 2) For each machine in Mi, evaluate the potential setup time needed if job i is processed next on that machine, and choose the machine that requires the smallest setup time before processing job i.
(Rule 3) For each machine in Mi, evaluate the potential utilization rate (considering only the batch that contains job i) in the case that job i is scheduled on that machine (cf. Step 5), and choose the machine that results in the highest utilization rate.

To further diversify the initial solutions, we impose a moderate level of random perturbation on the EDD-sorted job list generated in Step 1. We apply the simplest form of modification, i.e., a swap of two randomly selected jobs.

Once a solution is determined, it must be transformed into the encoded form before it can be used as an initial solution by the proposed MO-ABC algorithm. To do this, we scan the jobs on each machine in the order they are processed. For machine k ∈ {1, ..., m}, we first find the number of batches processed on this machine (denoted by nk), and then identify nk evenly spaced points within the interval (k − 1, k), i.e., k − 1 + 1/(nk + 1), k − 1 + 2/(nk + 1), ..., k − 1 + nk/(nk + 1). Finally, each job in the hth batch on machine k is associated with the real number k − 1 + h/(nk + 1) to encode the solution.

3.4. Multi-objective optimization functions

As described above, the standard ABC algorithm can only solve single-objective optimization problems. To adapt it to our problem, we introduce several specialized strategies so that the resulting MO-ABC algorithm can handle multi-objective optimization.

3.4.1.
Comparison of solutions

In the context of multi-objective optimization, Pareto dominance is used to differentiate the quality of solutions. For the problem discussed in this paper, solution x1, with objective values (TT1, TF1), is said to dominate solution x2, with objective values (TT2, TF2), if either (1) TT1 < TT2 and TF1 ≤ TF2 or (2) TF1 < TF2 and TT1 ≤ TT2 holds (the dominance relation is often denoted as x1 ≺ x2). To reflect the Pareto dominance relations, we modify the solution comparison mechanism of ABC (used in the employed bee phase and the onlooker bee phase) as follows.

• If the new solution vi dominates the original solution xi, the bee replaces xi with vi in its memory.
• If the new solution vi is dominated by the original solution xi, the bee forgets the new solution and keeps only the original one.
• If no Pareto dominance relation exists between the new solution vi and the original solution xi, the bee keeps both solutions in its memory.
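The dominance test and the three memory-update cases above can be sketched as follows. Function names are mine; objective vectors are (TT, TF) tuples to be minimized.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_memory(memory, x_old, v_new):
    """Apply the three comparison cases to a bee's memory of objective vectors."""
    if dominates(v_new, x_old):
        memory.remove(x_old)
        memory.append(v_new)   # case 1: v_i dominates x_i, replace
    elif dominates(x_old, v_new):
        pass                   # case 2: v_i is dominated, forget it
    else:
        memory.append(v_new)   # case 3: mutually non-dominated, keep both
    return memory

print(update_memory([(10, 0.5)], (10, 0.5), (8, 0.4)))  # → [(8, 0.4)]
```

Note that case 3 is what allows the number of memorized solutions to grow during an iteration, as exploited later in Step 3 of the algorithm overview.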
In addition, the scout bee phase is modified accordingly: a food source (i.e., solution) is completely abandoned only if all of the neighborhood solutions vi produced in the previous L iterations are dominated by the original solution xi.

3.4.2. Sorting of solutions

Unlike the single-objective case (in which solutions are sorted simply according to their objective values), sorting solutions in the multi-objective context is considerably more complex. When deciding the preferential order of solutions, two factors must be considered simultaneously: the Pareto dominance relations and the distribution of solutions in the objective space. The former criterion has higher priority than the latter. Therefore, if solution x1 dominates solution x2, x1 must be placed before x2 in the sorted sequence of solutions. If x1 and x2 do not dominate each other, we evaluate the distribution statistics of the two solutions (in the objective space), and the solution located in the less crowded region is placed before the other. The underlying motivation is that diversity should always be encouraged in an evolutionary algorithm, so solutions that are crowded around by many other solutions are discouraged.

To sort the solutions, we rely on a two-layered procedure, in which the solutions are first divided into an ordered series of Pareto ranks and then the relative order of solutions within each Pareto rank is determined. In the first step, the set of solutions S is divided into K subsets S1, S2, ..., SK satisfying: (1) if 1 ≤ i < j ≤ K, then for any solution x′ ∈ Sj there exists a solution x ∈ Si such that x dominates x′; (2) within the same subset Si, no solution dominates another. After the ranking procedure is finished, it is clear that S1 consists of the best solutions in the current set, which are also known as Pareto non-dominated solutions.
The ranking procedure is given in [42], so the implementation details are omitted here. In the second step, the solutions within each Pareto rank are sorted, with the crowding level in the objective space as the sorting criterion. The aim is to ensure that solutions located in low-density regions are given higher priority than solutions that are crowded together, so that diversity is promoted in the evolutionary process. To evaluate the density index for each solution in a solution set S, it is necessary to define the distance between any two solutions x1, x2 ∈ S, which is calculated as

D(x1, x2) = √( Σ_{i=1}^{Z} (δi(x1, x2))² ),    (20)
where δi(x1, x2) = (fi(x1) − fi(x2)) / (f̄i(S) − f̲i(S)) and Z is the number of objective functions (we have Z = 2 in this work). f̄i(S) = max{fi(x) | x ∈ S} and f̲i(S) = min{fi(x) | x ∈ S} denote the maximum and minimum of the ith objective value in the solution set S, respectively. This normalization technique is used to ensure that the objective values in different dimensions are addable.

To calculate the density index αi for solution xi, we first sort the solutions in S\{xi} according to their distances to xi. Let Di^(1) denote the distance between xi and the solution that is closest to xi, i.e., Di^(1) = min_j { D(xi, xj) | xj ∈ S\{xi} }. Likewise, let Di^(k) denote the distance between xi and the solution that is kth closest to xi (for k = 2, ..., T). Then, the density index for xi is calculated as

αi = 1 / D̃i,    (21)

where D̃i = (1/T) · Σ_{k=1}^{T} Di^(k). T is the range parameter: if |S| > T, only the T solutions that are closest to xi are involved in the calculation of the density index for xi. According to the motivation explained above, the solutions with smaller αi will be given higher priority by the sorting procedure.

3.4.3. Selection of solutions

In the proposed MO-ABC, the solution selection procedure is used in two major steps. Firstly, each onlooker bee needs to pick a food source to exploit after receiving the nectar information shared by the employed bees working in the previous stage. Secondly, an adequate number of solutions must be selected for the next iteration after the employed bee phase and the onlooker bee phase have both completed. Unlike in the single-objective case, where selection depends simply on solution fitness, here the results of the sorting procedure serve as the basis for selection. To this end, we design the following selection procedure, in which the probabilities of selecting each candidate solution are assigned in a linear pattern. Assuming that all the solutions in the considered set S have been sorted using the procedure presented above, the probability of selecting the solution that ranks at the ith position is defined as

Pr[i] = 2(q + 1 − i) / (q(q + 1)),    (22)

where q is the current size of the solution set S. For example, if q = 5, the probabilities of selecting the solutions (in sorted order) are Pr[1] = 5/15, Pr[2] = 4/15, Pr[3] = 3/15, Pr[4] = 2/15 and Pr[5] = 1/15, respectively. If more than one solution needs to be selected, we remove each selected solution from S in an iterative fashion; in this case, we let q ← q − 1 and re-evaluate all Pr[i]. This process is repeated until the required number of solutions has been selected.

3.4.4. Preservation of elite solutions

One critical component of evolutionary algorithms is the elitism strategy, which concerns the preservation of the high-quality solutions found so far during the evolutionary process. A suitable form of elitism strategy is generally required to guarantee theoretical convergence of evolutionary algorithms. The proposed MO-ABC algorithm maintains an elitist archive E during its evolutionary process. In each iteration, the solutions in E are merged with the current set of solutions (i.e., food sources) before the employed bee phase starts (redundant solutions are removed to keep the size of the solution set equal to SN). We also set a limit on the maximum number of solutions in E as emax = [SN × β%], i.e., a percentage of the number of food sources in consideration. At the end of each iteration, the elitist archive is updated with the (possibly better) solutions found during the employed bee phase and the onlooker bee phase of that iteration. The procedure for updating E with a solution set A is described below.

Step 1: For each solution xi ∈ A: if xi is dominated by a solution in E, simply skip xi; otherwise, delete from E all solutions that are dominated by xi and let E ← E ∪ {xi}.
Step 2: If |E| > emax, evaluate the density index αi for all the solutions in E, and delete the (|E| − emax) solutions with the highest density values.

3.5. Overview of the MO-ABC algorithm

Based on the details presented in the previous subsections, we can now give an overall description of the proposed MO-ABC algorithm. A simplified flowchart is provided in Fig. 3 as a complement to the following listed steps.

Step 1: Generate SN initial solutions using the procedure presented in Section 3.3. Let U0 be the set of initial solutions and let E (the elitist archive) be an empty set. Initialize the iteration index I = 0.
Table 3
Bi-objective test functions.

Function  Definition                                                              Constraints
ZDT1      f1(x) = x1                                                              n = 30
          f2(x) = g(x) · [1 − √(f1(x)/g(x))]                                      0 ≤ xi ≤ 1, i = 1, 2, ..., 30
          g(x) = 1 + (9/(n − 1)) · Σ_{i=2}^{n} xi
ZDT2      f1(x) = x1                                                              n = 30
          f2(x) = g(x) · [1 − (f1(x)/g(x))²]                                      0 ≤ xi ≤ 1, i = 1, 2, ..., 30
          g(x) = 1 + (9/(n − 1)) · Σ_{i=2}^{n} xi
ZDT3      f1(x) = x1                                                              n = 30
          f2(x) = g(x) · [1 − √(f1(x)/g(x)) − (f1(x)/g(x)) · sin(10π f1(x))]      0 ≤ xi ≤ 1, i = 1, 2, ..., 30
          g(x) = 1 + (9/(n − 1)) · Σ_{i=2}^{n} xi
ZDT4      f1(x) = x1                                                              n = 10, 0 ≤ x1 ≤ 1
          f2(x) = g(x) · [1 − √(f1(x)/g(x))]                                      −5 ≤ xi ≤ 5, i = 2, 3, ..., 10
          g(x) = 1 + 10 · (n − 1) + Σ_{i=2}^{n} (xi² − 10 cos(4π xi))
ZDT6      f1(x) = 1 − exp(−4x1) · sin⁶(6π x1)                                     n = 10
          f2(x) = g(x) · [1 − (f1(x)/g(x))²]                                      0 ≤ xi ≤ 1, i = 1, 2, ..., 10
          g(x) = 1 + 9 · (Σ_{i=2}^{n} xi / (n − 1))^{1/4}
Fig. 3. Work flow of the MO-ABC algorithm.

Step 2: Let FI = UI ∪ E.
Step 3: Implement the employed bee phase, in which each employed bee concentrates on a solution in FI (in a one-to-one fashion) and produces a neighbor solution by Eq. (18). After all the employed bees have completed their job, the set of solutions that survive (i.e., are still memorized by the bees) is denoted as FI(1). Notably, the size of FI(1) can be larger than that of FI because, as described in Section 3.4.1, the neighbor solution and the original solution may both be kept.
Step 4: Sort the solutions in FI(1) using the procedure presented in Section 3.4.2.
Step 5: Implement the onlooker bee phase, in which each onlooker bee selects a solution from FI(1) according to the probability assigned by Eq. (22) and then produces a neighbor solution by Eq. (18). After all the onlooker bees have completed their job, the set of solutions that survive (still memorized by the bees) is denoted as FI(2) (duplicate solutions have been removed, if any).
Step 6: Implement the scout bee phase, in which the solutions that have always led to inferior (Pareto-dominated) solutions in the past L iterations are abandoned, and the associated bees become scouts. Each scout bee produces a completely random solution using Eq. (17). The solution set with these newly produced solutions added is denoted as FI(3).
Step 7: Sort the solutions in FI(3) using the procedure presented in Section 3.4.2.
Step 8: Update the elitist archive E with the non-dominated solutions in FI(3) (i.e., the solutions in the first Pareto rank) using the procedure presented in Section 3.4.4. Delete from FI(3) those solutions which have just been added to E, if any.
Step 9: Select the first (SN − |E|) solutions from FI(3) in the order that is consistent with the sorting result of Step 7, and add these solutions to UI+1.
Step 10: Let I ← I + 1. If I < Imax, return to Step 2. Otherwise, terminate the MO-ABC algorithm and output E.

Note that, when applying Eqs. (17) and (18), there exist an upper limit and a lower limit on the feasible values that xi,d or vi,d can take, due to the adopted encoding scheme. If a newly generated value vi,d falls outside the range (0, m], it is corrected by adding or subtracting the smallest possible integer that makes it feasible.

The time complexity of each step is analyzed and briefly summarized as follows (recall that SN denotes the population size and Imax denotes the number of iterations under the ABC framework): Step 1 has a complexity of O(SN·m·n²), Step 2 has a complexity of O(SN), Step 3 has a complexity of O(SN·n(log n + m)), Step 4 has a complexity of O(SN²), Step 5 has a complexity of O(SN²·n(log n + m)), Step 6 has a complexity of O(SN·n²(log n + m)), Steps 7, 8 and 9 have a complexity of O(SN²), and Step 10 has a complexity of O(1). Since Steps 2–10 are executed in an iterative manner, the overall complexity of the MO-ABC algorithm can be described as O(SN·n²(log n + m)·Imax), given that SN is practically bounded by n. In the above results, the factor O(n(log n + m)) reflects the complexity of decoding and evaluating a solution.

4. Computational experiments and results

4.1. Preliminary test

To verify the effectiveness of our MO-ABC, we benchmark it on a set of standard test functions, including ZDT1, ZDT2, ZDT3, ZDT4 and ZDT6 [43] (shown in Table 3), and compare its performance with that of the well-known NSGA-II [42] and the recently proposed EMOABC (elite-guided multi-objective ABC) [35]. The parameters of these algorithms are all set according to the suggestions of [35]. We adopt the GD (Generational Distance) metric [44] to evaluate solution quality, since it is the most commonly used metric for continuous solution spaces. GD describes how far the obtained Pareto front is from the true Pareto-optimal front:
GD = (1/N) · √( Σ_{i=1}^{N} di² ),
where N is the number of obtained solutions, and di represents the Euclidean distance between solution xi and its nearest neighbor solution in the true Pareto front. Therefore, a smaller value of GD indicates higher solution quality of the corresponding algorithm. Each algorithm has been executed 10 independent times, and the averaged computational results, together with the time
Table 4
Computational results on selected benchmark functions.

            GD                                       Computational time (s)
Function    MO-ABC      EMOABC      NSGA-II      MO-ABC    EMOABC    NSGA-II
ZDT1        1.55E-03    1.60E-03    1.80E-03      6.5003    5.7318     8.8836
ZDT2        2.09E-03    8.80E-02    2.64E-03      6.8917    5.3307    11.4737
ZDT3        3.90E-04    8.81E-04    6.32E-04      6.4338    5.8921     9.3890
ZDT4        0.5813      1.4964      0.5042        7.9169    8.2897    15.2137
ZDT6        1.26E-04    1.68E-04    2.69E-02      6.7635    5.3175     9.5596
consumed by each algorithm (in seconds), are listed in Table 4. As the table shows, the proposed MO-ABC outperforms the EMOABC on all the tested functions and outperforms the NSGA-II on four of the five tested functions. Meanwhile, the proposed MO-ABC requires much less computational time than the NSGA-II while consuming slightly more time than the EMOABC. Summarizing these data, we may conclude that the proposed MO-ABC is competitive with state-of-the-art multi-objective optimization algorithms. The performance of MO-ABC on the studied scheduling problem, which is certainly the more relevant concern for this study, is investigated and discussed in the subsequent subsections.
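For reference, the GD metric used in this comparison can be computed as follows. This is a sketch with hypothetical names; fronts are lists of objective vectors, and the true front is assumed to be given as a finite reference set.

```python
import math

def generational_distance(front, true_front):
    """GD = (1/N) * sqrt(sum of squared distances from each obtained
    point to its nearest neighbor in the true Pareto front)."""
    sq = [min(math.dist(p, q) for q in true_front) ** 2 for p in front]
    return math.sqrt(sum(sq)) / len(front)

print(generational_distance([(3.0, 4.0)], [(0.0, 0.0)]))  # → 5.0
```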
4.2. Experimental settings

Test instances for the parallel batch-processing machine scheduling problem are generated according to the following specifications. The number of jobs and the number of job families (colors) take one of six combinations, i.e., (n, l) ∈ {(100, 6), (100, 9), (200, 9), (200, 12), (300, 12), (300, 15)}. The processing time pj required for each family j is generated from the uniform distribution U[10, 30]. The setup time sj1j2 (j2 > j1) required when processing family j2 immediately after family j1 is generated from the uniform distribution U[2, 5], and the setup time sj2j1 (j2 > j1) required when family j1 is processed after family j2 is assumed to equal sj1j2 − 1. For each job i, the family index ϕ(i) is randomly drawn from the set {1, 2, ..., l}, the weight wi is drawn from the uniform distribution U[5, 60], and the due date is set as ξ · 5n/m with ξ following the uniform distribution U[0.5, 1.5]. The number of machines m is tested at three levels, i.e., m ∈ {5, 7, 9}. The volume of each machine is determined by vk = 30 + 10k (k = 1, ..., m).

Since the investigated scheduling problem is new in the literature, we compare the proposed MO-ABC with the universal multi-objective optimizer NSGA-II [42], which is commonly used as a benchmark when evaluating new algorithms. The NSGA-II uses the same encoding scheme as MO-ABC and produces its initial population randomly. Both MO-ABC and NSGA-II have been implemented with Visual C++ 2015 and tested on a workstation with an Intel Core i7-4790 3.60 GHz CPU, 16 GB RAM and the Windows 10 operating system. To ensure a fair comparison, the allowed execution time is confined to the same level for both algorithms, i.e., (3 × n) seconds are allowed for solving an instance with n jobs. Based on extensive computational tests, the parameters of the MO-ABC algorithm are set as follows.
• Parameters related to the basic ABC algorithm: SN = 100 (the number of solutions in each iteration) and L = 20 (the maximum number of iterations before a solution is abandoned).
• Parameters for the multi-objective optimization functions: T = 6 (the range parameter involved in the definition of solution density) and β = 30 (the percentage of solutions maintained in the elitist archive).
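The instance-generation scheme of this subsection can be sketched as follows. The function name and data layout are mine; the distributions follow the specification above.

```python
import random

def generate_instance(n, l, m, seed=0):
    """Random test instance per Section 4.2 (a sketch; names are mine)."""
    rng = random.Random(seed)
    # family processing times: p_j ~ U[10, 30]
    p = {j: rng.uniform(10, 30) for j in range(1, l + 1)}
    # setup times: s_{j1,j2} ~ U[2, 5] for j2 > j1, reverse setup one less
    s = {}
    for j1 in range(1, l + 1):
        for j2 in range(j1 + 1, l + 1):
            s[j1, j2] = rng.uniform(2, 5)
            s[j2, j1] = s[j1, j2] - 1
    # jobs: (index, family, weight ~ U[5, 60], due date = xi * 5n/m)
    jobs = [(i, rng.randint(1, l), rng.uniform(5, 60),
             rng.uniform(0.5, 1.5) * 5 * n / m) for i in range(1, n + 1)]
    # vat volumes: v_k = 30 + 10k
    v = {k: 30 + 10 * k for k in range(1, m + 1)}
    return p, s, jobs, v
```

For example, `generate_instance(100, 6, 5)` produces one instance of the smallest category used in the experiments.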
To maximize performance, the parameters of the NSGA-II are set as follows: the population size is Ps = 60, the crossover probability is pc = 0.8, and the mutation probability is pm = 0.4. Note that the maximum number of iterations (or generations) for MO-ABC and NSGA-II is determined by the limit on computational time; that is, optimization continues as long as the computational time has not been exhausted.

4.3. The performance indicators

Unlike in single-objective optimization, where only the optimality of the obtained solutions matters, three factors must be considered simultaneously when comparing the performance of multi-objective optimization algorithms.

• The number of Pareto non-dominated solutions obtained by the algorithm. Normally, the more solutions, the better the algorithm is considered to be.
• The Pareto optimality of the solutions obtained by the algorithm. Clearly, the closer the solutions are to the true Pareto front, the higher the algorithm is rated.
• The evenness of distribution of the solutions obtained by the algorithm. The more evenly the solutions are distributed, the better the trade-offs between the different objectives are reflected.

To capture these three factors, we adopt three performance indicators in this paper, referred to below as indicators 1, 2 and 3. Given two multi-objective optimization algorithms (a1 and a2), the set of non-dominated solutions output by algorithm a1 (resp. a2) is denoted as S1 (resp. S2). To quantitatively measure the performance of algorithm ai (i ∈ {1, 2}), the three performance indicators are respectively defined as follows.

(1) Indicator 1: |Si|, the number of non-dominated solutions found by algorithm ai. This indicator is also called the ONVG metric [45].
(2) Indicator 2: |Si*|/|Si|, where S1* (resp. S2*) is the subset of solutions in S1 (resp. S2) which are not dominated by any solution in S2 (resp. S1). The values of indicator 2 for a1 and a2 often sum to more than 1, because a part of the solutions in each set can be mutually non-dominated. This indicator is similar to the C-metric described in [46].
(3) Indicator 3: √( (1/|Si|) · Σ_{x∈Si} (Dx − D̄)² ), where D̄ = (1/|Si|) · Σ_{x∈Si} Dx and Dx is the Euclidean distance in the objective space between solution x and its nearest neighbor in Si. Since indicator 3 characterizes the standard deviation of these distances, the smaller its value, the more evenly the solutions in Si are distributed. This indicator is also known as Tan's Spacing metric [47].

4.4. Parameter tuning

We first investigate how the parameters affect the optimization performance of MO-ABC. A remarkable advantage of the ABC algorithm is that it has fewer parameters than many other metaheuristic algorithms. This enables us to examine the impact of all
the parameters in an integrated manner. We use the design of experiments (DOE) approach to study the influence of the four parameters of MO-ABC (i.e., SN, L, T and β), and each parameter is considered at three different levels (SN ∈ {60, 100, 150}, L ∈ {10, 20, 40}, T ∈ {3, 6, 9}, β ∈ {10, 30, 50}). The Taguchi design method [48] with orthogonal array L9(3⁴) is adopted, which means only 9 scenarios have to be tested explicitly. We use an instance in the category n = 100, l = 6, m = 7 to experiment with the parameters. Computational results are collected with respect to each of the three performance indicators. When calculating indicator 2, we rely on NSGA-II as a reference algorithm (the solution set produced by NSGA-II is used as the reference set). The MO-ABC algorithm is executed 10 independent times under each scenario (parameter combination) specified by the Taguchi method, and the averaged values of the three indicators are used as inputs for the DOE. The final results in terms of signal-to-noise (S/N) ratios are shown in Fig. 4 (made with the Minitab software). The S/N ratio is calculated for each factor-level combination using the following formula: when the goal is to maximize the response (larger is better), S/N is defined as −10 · log10( (Σ_{i=1}^{n} 1/Yi²)/n ) (for indicators 1 and 2 in our experiment); when the goal is to minimize the response (smaller is better), S/N is defined as −10 · log10( (Σ_{i=1}^{n} Yi²)/n ) (for indicator 3 in our experiment), where Yi denotes each response for the given factor-level combination and n is the number of responses in that combination. For example, in the experiment for indicator 3, the three responses associated with SN = 100 are 1.430, 1.440 and 1.422, so the S/N ratio calculated with the above formula is −3.111. According to the results, each parameter affects the three performance indicators to different extents.
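The smaller-is-better S/N computation in the worked example above can be reproduced as follows (a minimal sketch; the function name is mine):

```python
import math

def sn_smaller_is_better(responses):
    """Taguchi S/N ratio, smaller-is-better: -10 * log10( (sum Y_i^2) / n )."""
    n = len(responses)
    return -10 * math.log10(sum(y * y for y in responses) / n)

print(round(sn_smaller_is_better([1.430, 1.440, 1.422]), 3))  # → -3.111
```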
For example, SN and L tend to have a larger impact on indicator 1, T seems to affect indicator 3 to a larger extent, while β turns out to be significantly relevant for all three indicators. The following remarks can then be made.
Fig. 4. Influence of the parameters on solution quality.

(1) By observing the impact of SN on indicators 1 and 2, it is apparent that MO-ABC obtains more Pareto solutions as SN increases; however, the quality of the obtained solutions deteriorates considerably if SN becomes too large. This is because the overall computational time is limited: increasing SN too much reduces the number of iterations performed by the algorithm, making it impossible to exploit each solution to a sufficient degree. Therefore, the recommended value for SN is 100.
(2) By observing the impact of L on indicators 1 and 2, we notice that a small value of L helps to increase the number of Pareto solutions but is not beneficial for the optimality of the obtained solutions. If L takes a small value, the ABC exhibits a low tolerance for each food source and frequently moves to new (and random) food sources. This certainly helps to promote the diversity of solutions; however, as these new solutions are generated randomly, their quality requires more subsequent iterations to improve. As a trade-off, the recommended value for L is 20.
(3) By observing the impact of T on indicator 3, we can conclude that either too small or too large a value of T destroys the desired even distribution of solutions. Recall that T is a parameter that controls the calculation of the density (crowdedness) index of a solution. If T is too small, distortion may occur because the sorting procedure will give low priority to small clusters of solutions that are relatively far from other solutions. This may lead to the exclusion of critical solutions and thereby deteriorate the evenness of the solution distribution. On the other hand, too large a value of T reduces the sensitivity of the density index (due to the law of large numbers), making it much less effective in differentiating the solutions. It is therefore recommended to set T = 6.
(4) By observing the impact of β on each performance indicator, we see that increasing β from 10 to 30 results in more Pareto solutions and improved solution optimality, but increasing β further to 50 causes considerable deterioration in indicators 1 and 3 at the same time (i.e., both the number of solutions and the evenness of the solution distribution worsen). Recall that β decides the number of elite solutions, and these solutions are combined with the current set of solutions in each iteration. Hence,
Table 5
Comparison of MO-ABC and NSGA-II on the instances with n = 100.

                        Indicator 1          Indicator 2          Indicator 3
(n, l, m)     No.    MO-ABC   NSGA-II     MO-ABC   NSGA-II     MO-ABC   NSGA-II
(100, 6, 5)    1      8.81     7.78        0.88     0.19        1.24     1.60
               2      7.60     6.40        0.80     0.34        1.35     1.37
               3      8.38     5.98        0.86     0.29        1.48     1.68
               4      7.63     6.39        0.81     0.33        1.54     1.86
               5      8.08     7.43        0.74     0.33        1.44     1.70
(100, 6, 7)    6      7.64     6.67        0.80     0.33        1.41     1.54
               7      7.97     5.80        0.84     0.28        1.51     1.86
               8      9.15     7.74        0.84     0.28        1.28     1.53
               9      7.19     6.60        0.77     0.37        1.39     1.39
              10      8.13     6.77        0.84     0.30        1.24     1.65
(100, 6, 9)   11      7.47     6.01        0.76     0.32        1.45     1.71
              12      7.17     5.58        0.78     0.36        1.30     1.56
              13      7.58     6.39        0.74     0.36        1.31     1.35
              14      9.02     6.54        0.84     0.23        1.30     1.29
              15      7.38     5.65        0.74     0.38        1.12     1.55
(100, 9, 5)   16      9.11     7.81        0.75     0.36        1.50     1.52
              17      9.65     8.44        0.85     0.26        1.43     1.89
              18      7.80     6.20        0.76     0.35        1.16     1.30
              19      7.93     6.23        0.79     0.36        1.47     1.54
              20      9.34     6.67        0.85     0.25        1.45     1.49
(100, 9, 7)   21      7.98     6.51        0.75     0.36        1.27     1.30
              22      9.46     8.01        0.83     0.30        1.50     1.96
              23      7.66     7.12        0.79     0.35        1.33     1.76
              24      9.60     7.08        0.83     0.26        1.26     1.52
              25      8.60     6.46        0.73     0.35        1.17     1.30
(100, 9, 9)   26     10.05     7.54        0.83     0.28        1.23     1.22
              27      8.86     7.63        0.78     0.36        1.17     1.48
              28      9.44     8.04        0.86     0.27        1.49     1.74
              29      8.58     7.93        0.88     0.21        1.53     1.80
              30      8.75     8.13        0.80     0.28        1.19     1.38
Average              8.40     6.92        0.80     0.31        1.35     1.56
a moderate number of elite solutions can help to accelerate the search progress, but a high proportion of elite solutions restrains the diversity of solutions and limits the algorithm's ability to find more (and more evenly distributed) Pareto solutions. The recommended setting is thus β = 30.

4.5. Algorithm comparison

Using the recommended parameter settings, we now compare the performance of MO-ABC with that of NSGA-II. For each problem size (specified by the triplet (n, l, m)), we have generated 5 different instances. Since there are 6 options for (n, l) and 3 options for m, a total of 6 × 3 × 5 = 90 instances have been generated for the computational experiment. For each instance, both MO-ABC and NSGA-II are executed 10 times independently so that indicator 2 can be evaluated in a one-to-one style; the reported values of the three indicators are the averages over the 10 runs. Tables 5–7 show the computational results in terms of the three performance indicators. Based on these results, the following remarks can be made.

(1) According to indicator 1, MO-ABC consistently finds more Pareto solutions than NSGA-II, and the difference in the number of obtained solutions expands as the problem size grows. The superiority of MO-ABC in this respect is partly attributable to the scout bee mechanism, which promotes the diversity of solutions when parameter L is properly adjusted. In addition, it is apparent that both algorithms find more Pareto solutions for larger-sized instances (in the average sense). With an increased number of jobs and machines, there are more possibilities for job batching and thus more flexibility in making compromises between the two objective functions, which results in more Pareto solutions for the problem.
(2) According to indicator 2, the solutions obtained by MO-ABC are superior to those obtained by NSGA-II in terms of Pareto optimality. For the set of instances with 200 jobs, for example, on average 87% of the MO-ABC solutions are not dominated by any solution from NSGA-II, whereas 78% of the NSGA-II solutions are dominated by at least one solution from MO-ABC. The superiority of MO-ABC can be attributed to its specialized optimization mechanisms, especially the problem-specific initialization technique, which produces high-quality solutions with sufficient diversity, and the elitism strategy, which preserves the best solutions while maintaining diversity when parameter β is properly adjusted. The NSGA-II, however, does not limit the number of elite solutions and may therefore suffer from premature convergence.
(3) According to indicator 3, the solutions obtained by MO-ABC are distributed more evenly than those obtained by NSGA-II on most of the instances. This verifies that the sorting procedure of MO-ABC, which relies on an accurately defined density measure, is more effective than that of NSGA-II. With a properly adjusted value of parameter T, the sorting procedure helps to exclude part of the solutions located in crowded areas so that the remaining solutions are spread more evenly. As the problem size increases, the distribution evenness of the MO-ABC solutions improves considerably (from 1.35 to 1.20 in the average sense), whereas there is little improvement for NSGA-II. This reveals that the sorting mechanism of MO-ABC performs better when the number of candidate solutions is larger.
Table 6
Comparison of MO-ABC and NSGA-II on the instances with n = 200.

(n, l, m)      No.   Indicator 1         Indicator 2         Indicator 3
                     MO-ABC   NSGA-II    MO-ABC   NSGA-II    MO-ABC   NSGA-II
(200, 9, 5)    31    10.71    7.33       0.86     0.20       1.34     1.75
               32    10.54    6.59       0.87     0.23       1.34     1.71
               33    9.72     7.16       0.86     0.20       1.16     1.47
               34    9.33     6.33       0.91     0.18       1.13     1.28
               35    10.66    6.94       0.92     0.17       1.10     1.53
(200, 9, 7)    36    9.57     5.86       0.89     0.18       1.39     1.82
               37    9.40     7.43       0.85     0.22       1.04     1.05
               38    10.44    7.71       0.91     0.18       1.42     1.91
               39    11.13    8.42       0.89     0.17       1.31     1.86
               40    11.27    6.86       0.85     0.21       1.14     1.13
(200, 9, 9)    41    11.50    9.14       0.88     0.18       1.19     1.35
               42    11.32    8.83       0.91     0.15       1.33     1.61
               43    9.01     6.32       0.85     0.23       1.12     1.56
               44    9.90     7.93       0.85     0.28       1.34     1.64
               45    9.85     7.22       0.78     0.34       1.18     1.63
(200, 12, 5)   46    10.60    7.41       0.94     0.14       1.38     1.65
               47    11.28    8.52       0.83     0.25       1.37     1.82
               48    10.29    8.13       0.90     0.20       1.15     1.49
               49    10.82    8.41       0.81     0.25       1.13     1.35
               50    9.38     7.33       0.78     0.27       1.13     1.26
(200, 12, 7)   51    10.99    8.07       0.94     0.19       1.15     1.46
               52    9.91     6.55       0.81     0.30       1.09     1.21
               53    11.71    9.16       0.85     0.21       1.35     1.63
               54    11.95    7.92       0.91     0.20       1.16     1.35
               55    10.28    8.38       0.82     0.27       1.32     1.49
(200, 12, 9)   56    11.74    9.64       0.90     0.19       1.42     1.80
               57    12.02    9.41       0.86     0.27       1.15     1.44
               58    10.57    7.38       0.87     0.23       1.24     1.26
               59    9.60     5.77       0.89     0.21       1.05     1.18
               60    10.24    7.79       0.79     0.29       1.31     1.31
Average              10.52    7.66       0.87     0.22       1.23     1.50
Table 7
Comparison of MO-ABC and NSGA-II on the instances with n = 300.

(n, l, m)      No.   Indicator 1         Indicator 2         Indicator 3
                     MO-ABC   NSGA-II    MO-ABC   NSGA-II    MO-ABC   NSGA-II
(300, 12, 5)   61    12.25    9.26       0.91     0.22       1.33     1.94
               62    10.99    6.05       0.88     0.25       1.30     1.82
               63    12.20    8.11       0.94     0.22       1.12     1.34
               64    10.94    7.77       0.89     0.25       1.12     1.13
               65    10.70    7.47       0.85     0.28       1.22     1.58
(300, 12, 7)   66    10.92    6.13       0.91     0.25       1.11     1.53
               67    11.69    8.72       0.86     0.27       1.08     1.45
               68    11.66    6.92       0.85     0.27       1.16     1.57
               69    13.28    9.43       0.86     0.29       1.19     1.67
               70    11.63    7.03       0.95     0.17       1.23     1.68
(300, 12, 9)   71    11.71    8.48       0.92     0.23       1.19     1.47
               72    13.49    9.73       0.95     0.16       1.27     1.83
               73    13.58    8.84       0.88     0.27       1.02     1.19
               74    10.93    7.05       0.94     0.20       1.34     1.39
               75    13.04    9.30       0.86     0.26       1.30     1.43
(300, 15, 5)   76    12.39    9.52       0.84     0.30       1.15     1.28
               77    10.89    7.19       0.95     0.19       1.09     1.38
               78    12.45    9.18       0.89     0.24       1.16     1.44
               79    10.71    8.17       0.96     0.18       1.29     1.62
               80    11.07    6.81       0.85     0.26       1.28     1.69
(300, 15, 7)   81    11.64    6.72       0.86     0.30       1.21     1.32
               82    11.54    8.54       0.85     0.29       1.22     1.57
               83    12.59    6.85       0.93     0.21       1.14     1.54
               84    11.13    8.55       0.86     0.25       1.11     1.49
               85    12.56    9.06       0.90     0.25       1.13     1.46
(300, 15, 9)   86    10.80    6.74       0.86     0.28       1.32     1.32
               87    12.02    6.87       0.86     0.27       1.07     1.33
               88    11.77    8.69       0.85     0.29       1.14     1.57
               89    13.62    8.78       0.90     0.25       1.34     1.60
               90    11.59    7.73       0.89     0.22       1.22     1.37
Average              11.86    7.99       0.89     0.25       1.20     1.50
Fig. 5. Comparison with MOICA under the three performance indicators.
4.6. More comparative results

In addition to the comparison with NSGA-II, we have also implemented an adapted version of MOICA (the multi-objective imperialist competitive algorithm), which was originally designed for a bi-objective parallel batch-processing machine scheduling problem [16] (the target problem in that work is similar to but less complicated than ours, because it involves only identical parallel machines and compatible job families). The parameters of MOICA are set according to the original publication, and all other experimental conditions are kept identical to those in the previous comparison with NSGA-II. The average results over 10 runs of each algorithm are shown in Fig. 5, where the 90 instances are marked on the horizontal axis and "Diff" denotes the difference between the values produced by the two algorithms.

According to the figures, the advantage of the proposed MO-ABC over MOICA is quite remarkable. A closer look at indicators 1 and 3 shows that MOICA is able to find more Pareto solutions than NSGA-II, but the solutions obtained by MOICA exhibit a less even distribution than those output by NSGA-II.

To visualize the solutions found by each algorithm, we select an instance of the largest size (No. 86) and plot the obtained solutions in the objective space (Fig. 6). As can be observed intuitively from the figure, the Pareto front achieved by MO-ABC has the highest quality in terms of both diversity and distribution.

Fig. 6. The Pareto front obtained by each algorithm (horizontal axis: TT; vertical axis: TF; series: MO-ABC, NSGA-II, MOICA).

4.7. Robustness test

Finally, we examine the robustness of the MO-ABC algorithm with respect to the available computational time. This is an important perspective for assessing a scheduling algorithm because, in practical circumstances, scheduling modules must respond to a fast-changing production environment in a timely manner, which means the available computational time is usually much more limited than in a theoretical research setting. To test computational time robustness, we vary the computational time limit imposed on both algorithms and examine their performance. We use an instance from the category n = 300, l = 12, m = 7 for this experiment. Previously, the maximum time allowed for solving this instance was specified as 3 × n = 900 s; now we let the time limit vary from 300 s to 1200 s in order to observe the solution quality under different time budgets. The average values of the three indicators over 10 independent runs of each algorithm are reported in Fig. 7, with "Diff" representing the difference between the two algorithms. Based on the comparative results, we can make the following remarks.

(1) Focusing on indicator 1, we notice that, for both algorithms, the number of obtained solutions tends to increase as the computational time budget becomes more abundant. Comparatively, the slope of the MO-ABC curve is steeper than that of NSGA-II, resulting in a widening gap between the two curves ("Diff" exhibits a slightly growing trend). MO-ABC finds more solutions when given more computational time because the scout bee mechanism, which aims at promoting diversity, needs sufficient iterations to produce its desired effect. The results thus show that MO-ABC utilizes the computational time more efficiently.
(2) Focusing on indicator 2, we can see that MO-ABC maintains an edge over NSGA-II under every computational time limit. The indicator-2 value of MO-ABC is relatively stable and insensitive to variations in the time limit, whereas that of NSGA-II tends to increase slowly as the time limit loosens. It can be concluded that MO-ABC is more robust with respect to computational time budgets, and its advantage over NSGA-II is especially evident when the computational time is very limited ("Diff" shows a decreasing trend).

(3) Focusing on indicator 3, we can observe a downward trend for both algorithms (without a clear trend for "Diff"). This can be explained as follows: as the allowed computational time increases, the algorithm obtains a larger number of candidate solutions, and it is then easier to select a subset of solutions with improved evenness of distribution.
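The third indicator quantifies distribution evenness with the paper's own density-based measure (following [47]); as a hedged illustration of what such an evenness measure computes, the sketch below uses Schott's spacing metric instead, where a lower value indicates more uniform nearest-neighbour gaps along the front. The function name and the sample front are our assumptions, not the authors' definition.

```python
import math

def spacing(front):
    """Schott's spacing metric: standard deviation of nearest-neighbour
    Manhattan distances on a front (lower = more evenly distributed)."""
    d = []
    for i, a in enumerate(front):
        d.append(min(sum(abs(x - y) for x, y in zip(a, b))
                     for j, b in enumerate(front) if j != i))
    mean = sum(d) / len(d)
    return math.sqrt(sum((x - mean) ** 2 for x in d) / (len(d) - 1))

# A perfectly even bi-objective front has identical gaps, hence score 0:
even_front = [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]
print(spacing(even_front))  # → 0.0
```

Under a metric of this kind, more candidate solutions make it easier to retain an evenly spaced subset, which is consistent with the downward trend of indicator 3 as the time limit grows.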
Fig. 7. Algorithm performance under different computational time limits.
In summary, when the computational time budget changes, the responses of the two algorithms are comparable in terms of indicators 1 and 3, but remarkably different in terms of indicator 2. Considering that indicator 2 is the core indicator of solution quality (it captures the dominance relations between the solution sets output by the two algorithms), we may conclude that MO-ABC is more robust than NSGA-II under varying computational time limits.

5. Conclusion and future research

In this paper, we investigate a special variant of the parallel batch-processing machine scheduling problem which originates
from the dyeing industry. We formulate the problem as a bi-objective mixed integer linear programming model, with one objective function related to the service level (total tardiness) and the other related to production efficiency (finishing time, or utilization of machines). To solve the scheduling problem at a practical scale, we develop a multi-objective artificial bee colony (MO-ABC) algorithm specialized with a problem-dependent initialization and multi-objective handling techniques. Extensive computational experiments have been conducted to validate the proposed solution method, including a DOE-based parameter tuning process, a systematic comparison with NSGA-II using three performance measures, and a robustness test considering different computational time limits. The results show that, with proper parameter settings, the proposed MO-ABC outperforms NSGA-II in terms of both solution quality and computational time robustness.

Future research will focus on problem-specific local search techniques. The solution quality is likely to improve further if enhanced local search algorithms are designed and incorporated into the population-based optimization framework. From a functional perspective, it is possible to devise two local search modules aimed at improving the tardiness performance and the machine workload performance, respectively. From an implementation perspective, a two-layered search mechanism might be feasible, in which the top layer deals with the scheduling of batches, disregarding job details, while the bottom layer deals with the adjustment of jobs within each batch.

Acknowledgment

This research is supported by the Natural Science Foundation of China under Grant Nos. 61473141, U1660202 and 61273233. The authors are grateful for the constructive feedback provided by the reviewers.

References

[1] R. Uzsoy, Scheduling a single batch processing machine with non-identical job sizes, Int. J. Prod. Res. 32 (7) (1994) 1615–1635.
[2] M.L. Pinedo, Scheduling: Theory, Algorithms and Systems, 3rd ed., Springer, New York, 2008.
[3] M. Mathirajan, A. Sivakumar, Minimizing total weighted tardiness on heterogeneous batch processing machines with incompatible job families, Int. J. Adv. Manuf. Technol. 28 (9–10) (2006) 1038–1047.
[4] A.H. Kashan, B. Karimi, Scheduling a single batch-processing machine with arbitrary job sizes and incompatible job families: an ant colony framework, J. Oper. Res. Soc. 59 (9) (2008) 1269–1280.
[5] D. Petrovic, O. Aköz, A fuzzy goal programming approach to integrated loading and scheduling of a batch processing machine, J. Oper. Res. Soc. 59 (9) (2008) 1211–1219.
[6] W. Jia, Z. Jiang, Y. Li, Combined scheduling algorithm for re-entrant batch-processing machines in semiconductor wafer manufacturing, Int. J. Prod. Res. 53 (6) (2015) 1866–1879.
[7] P. Damodaran, N.S. Hirani, M.C. Velez-Gallego, Scheduling identical parallel batch processing machines to minimise makespan using genetic algorithms, Eur. J. Indus. Eng. 3 (2) (2009) 187–206.
[8] S. Lausch, L. Mönch, Metaheuristic approaches for scheduling jobs on parallel batch processing machines, in: Heuristics, Metaheuristics and Approximate Methods in Planning and Scheduling, Springer, 2016, pp. 187–207.
[9] P. Jula, R.C. Leachman, Coordinated multistage scheduling of parallel batch-processing machines under multiresource constraints, Oper. Res. 58 (4) (2010) 933–947.
[10] H. Liu, J. Yuan, W. Li, Online scheduling of equal length jobs on unbounded parallel batch processing machines with limited restart, J. Comb. Optim. 31 (4) (2016) 1609–1622.
[11] L. Jiang, J. Pei, X. Liu, P.M. Pardalos, Y. Yang, X. Qian, Uniform parallel batch machines scheduling considering transportation using a hybrid DPSO-GA algorithm, Int. J. Adv. Manuf. Technol. (2016), doi:10.1007/s00170-016-9156-5 (in press).
[12] S. Zhou, M. Liu, H. Chen, X.
Li, An effective discrete differential evolution algorithm for scheduling uniform parallel batch processing machines with non-identical capacities and arbitrary job sizes, Int. J. Prod. Econ. 179 (2016) 1–11.
[13] Z.-H. Jia, C. Wang, J.Y.-T. Leung, An ACO algorithm for makespan minimization in parallel batch machines with non-identical job sizes and incompatible job families, Appl. Soft Comput. 38 (2016) 395–404.
[14] A.H. Kashan, B. Karimi, F. Jolai, An effective hybrid multi-objective genetic algorithm for bi-criteria scheduling on a single batch processing machine with non-identical job sizes, Eng. Appl. Artif. Intell. 23 (6) (2010) 911–922.
[15] L. Li, F. Qiao, Q. Wu, ACO-based multi-objective scheduling of parallel batch processing machines with advanced process control constraints, Int. J. Adv. Manuf. Technol. 44 (9–10) (2009) 985–994.
[16] M. Abedi, H. Seidgar, H. Fazlollahtabar, R. Bijani, Bi-objective optimisation for scheduling the identical parallel batch-processing machines with arbitrary job sizes, unequal job release times and capacity limits, Int. J. Prod. Res. 53 (6) (2015) 1680–1711.
[17] O. Shahvari, R. Logendran, An enhanced tabu search algorithm to minimize a bi-criteria objective in batching and scheduling problems on unrelated-parallel machines with desired lower bounds on batch sizes, Comput. Oper. Res. 77 (2017) 154–176.
[18] D. Karaboga, An idea based on honey bee swarm for numerical optimization, Technical Report TR06, Computer Engineering Department, Erciyes University, Turkey, 2005.
[19] D. Karaboga, B. Akay, A survey: algorithms simulating bee swarm intelligence, Artif. Intell. Rev. 31 (1) (2009) 61–85.
[20] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Appl. Soft Comput. 8 (1) (2008) 687–697.
[21] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Appl. Math. Comput. 214 (1) (2009) 108–132.
[22] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, J. Global Optim. 39 (3) (2007) 459–471.
[23] Q.-K. Pan, L. Wang, J.-Q. Li, J.-H. Duan, A novel discrete artificial bee colony algorithm for the hybrid flowshop scheduling problem with makespan minimisation, Omega 45 (2014) 42–56.
[24] I. Ribas, R. Companys, X.
Tort-Martorell, An efficient discrete artificial bee colony algorithm for the blocking flow shop problem with total flowtime minimization, Expert Syst. Appl. 42 (15) (2015) 6155–6167.
[25] S. Zhang, C. Lee, K. Choy, W. Ho, W. Ip, Design and development of a hybrid artificial bee colony algorithm for the environmental vehicle routing problem, Transp. Res. Part D 31 (2014) 85–99.
[26] W. Szeto, Y. Jiang, Transit route and frequency design: bi-level modeling and hybrid artificial bee colony algorithm approach, Transp. Res. Part B 67 (2014) 235–263.
[27] L. Yang, X. Sun, L. Peng, J. Shao, T. Chi, An improved artificial bee colony algorithm for optimal land-use allocation, Int. J. Geogr. Inf. Sci. 29 (8) (2015) 1470–1489.
[28] W. Chen, An artificial bee colony algorithm for uncertain portfolio selection, Sci. World J. 2014 (2014), doi:10.1155/2014/578182.
[29] S.K. Nseef, S. Abdullah, A. Turky, G. Kendall, An adaptive multi-population artificial bee colony algorithm for dynamic optimisation problems, Knowl. Based Syst. 104 (2016) 14–23.
[30] F. Wu, J. Lu, G. Zhang, A new approximate algorithm for solving multiple objective linear programming problems with fuzzy parameters, Appl. Math. Comput. 174 (1) (2006) 524–544.
[31] R. Akbari, R. Hedayatzadeh, K. Ziarati, B. Hassanizadeh, A multi-objective artificial bee colony algorithm, Swarm Evol. Comput. 2 (2012) 39–52.
[32] S.N. Omkar, J. Senthilnath, R. Khandelwal, G.N. Naik, S. Gopalakrishnan, Artificial bee colony (ABC) for multi-objective design optimization of composite structures, Appl. Soft Comput. 11 (1) (2011) 489–499.
[33] B. Akay, Synchronous and asynchronous Pareto-based multi-objective artificial bee colony algorithms, J. Global Optim. 57 (2) (2013) 415–445.
[34] Y. Xiang, Y. Zhou, H. Liu, An elitism based multi-objective artificial bee colony algorithm, Eur. J. Oper. Res. 245 (1) (2015) 168–193.
[35] Y. Huo, Y. Zhuang, J. Gu, S. Ni, Elite-guided multi-objective artificial bee colony algorithm, Appl. Soft Comput.
32 (2015) 199–210.
[36] D.-H. Tran, M.-Y. Cheng, M.-T. Cao, Hybrid multiple objective artificial bee colony with differential evolution for the time–cost–quality tradeoff problem, Knowl. Based Syst. 74 (2015) 176–186.
[37] M. Yahya, M.P. Saka, Construction site layout planning using multi-objective artificial bee colony algorithm with Levy flights, Autom. Constr. 38 (2014) 14–29.
[38] J. Zhou, X. Liao, S. Ouyang, R. Zhang, Y. Zhang, Multi-objective artificial bee colony algorithm for short-term scheduling of hydrothermal system, Int. J. Electr. Power Energy Syst. 55 (2014) 542–553.
[39] A.K. Dwivedi, S. Ghosh, N.D. Londhe, Low power FIR filter design using modified multi-objective artificial bee colony algorithm, Eng. Appl. Artif. Intell. 55 (2016) 58–69.
[40] U. Saif, Z. Guan, W. Liu, C. Zhang, B. Wang, Pareto based artificial bee colony algorithm for multi objective single model assembly line balancing with uncertain task times, Comput. Ind. Eng. 76 (2014) 1–15.
[41] L. Ma, K. Hu, Y. Zhu, H. Chen, Cooperative artificial bee colony algorithm for multi-objective RFID network planning, J. Netw. Comput. Appl. 42 (2014) 143–162.
[42] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput. 6 (2) (2002) 182–197.
[43] E. Zitzler, K. Deb, L. Thiele, Comparison of multiobjective evolutionary algorithms: empirical results, Evol. Comput. 8 (2) (2000) 173–195.
[44] D.A. Van Veldhuizen, G.B. Lamont, Multiobjective evolutionary algorithm test suites, in: Proceedings of the 1999 ACM Symposium on Applied Computing, 1999, pp. 351–357.
[45] D.A. Van Veldhuizen, G.B. Lamont, On measuring multiobjective evolutionary algorithm performance, in: Proceedings of the IEEE Congress on Evolutionary Computation, La Jolla, CA, vol. 1, 2000, pp. 204–211.
[46] E. Zitzler, L. Thiele, Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach, IEEE Trans. Evol. Comput. 3 (4) (1999) 257–271.
[47] K.C. Tan, C.K. Goh, Y.J. Yang, T.H. Lee, Evolving better population distribution and exploration in evolutionary multi-objective optimization, Eur. J. Oper. Res. 171 (2) (2006) 463–495.
[48] W.Y. Fowlkes, C.M. Creveling, J. Derimiggio, Engineering Methods for Robust Product Design: Using Taguchi Methods in Technology and Product Development, Addison-Wesley, Reading, MA, 1995.