Expert Systems with Applications 37 (2010) 4822–4829


A differential evolution algorithm for the manufacturing cell formation problem using group based operators

Azadeh Noktehdan, Behrooz Karimi *, Ali Husseinzadeh Kashan

Department of Industrial Engineering, Amirkabir University of Technology, P.O. Box 15875-4413, Tehran, Iran

Keywords: Cell formation problem; Grouping genetic algorithm; Differential evolution algorithm

Abstract

Cellular manufacturing (CM) is an important application of group technology (GT), a manufacturing philosophy in which parts are grouped into part families and machines are allocated into machine cells to take advantage of the similarities among parts in manufacturing. The target is to minimize inter-cellular movements. Inspired by the rationale behind the so-called grouping genetic algorithm (GGA), this paper proposes a grouping version of the differential evolution algorithm (GDE) and its hybridized version with a local search algorithm (HGDE) to solve benchmarked instances of the cell formation problem posed as a grouping problem. To evaluate the effectiveness of our approach, we borrow a set of 40 problem instances from the literature and compare the performance of GGA and GDE. We also compare the performance of both algorithms when they are combined with a local search algorithm. Our computations reveal that the proposed algorithm performs well on all test problems, exceeding or matching the best solution quality of the results presented in the previous literature.

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

A cellular manufacturing system (CMS) is a production approach aimed at increasing production efficiency and system flexibility by utilizing the process similarities of the parts. It involves grouping similar parts into part families and the corresponding machines into machine cells. This results in the organization of production systems into relatively self-contained and self-regulated groups of machines, such that each group of machines undertakes the efficient production of a family of parts. The significant benefits of cellular manufacturing are reduced setup times, reduced work-in-process inventory, reduced throughput time, reduced material handling costs, improved product quality, simplified scheduling, etc. (Wemmerlov & Hyer, 1987). The cell formation (CF) problem is the first step in the design of cellular manufacturing systems. The main objective of CF is to construct machine cells and part families, and then dispatch part families to machine cells so as to optimize the chosen performance measures, such as inter-cell and intra-cell transportation cost, grouping efficiency and exceptional elements (Lei & Wu, 2005). The machine–part cell formation (MPCF) problem is NP-hard (Ballakur & Steudel, 1998); thus extensive research has been devoted to CF problems, with many methods having been proposed for identifying machine cells and part families


in an approximate manner. Many of them are developed on the basis of heuristic clustering techniques to obtain approximate solutions, but some of these solutions may be far from optimal. Up to now, the only algorithm that has been heavily modified to consider the structure of grouping problems, such as the cell formation problem, is based on the genetic algorithm and is called the grouping genetic algorithm (GGA). In this paper we apply the notion of grouping representation and grouping operators in the body of the differential evolution algorithm to obtain a grouping version of differential evolution (GDE). The remainder of the paper is organized as follows. In Section 2 we describe the problem definition. Section 3 provides a review of previous solution approaches developed for the cell formation problem. An overview of the differential evolution algorithm is given in Section 4. The proposed grouping differential evolution algorithm is introduced in Section 5. Section 6 deals with the computational results, and Section 7 concludes the paper.

2. Cell formation problem

Given a 0–1 machine–part incidence matrix, the cell formation task involves rearranging the rows and columns of the matrix to create part families and machine cells. In this research we attempt to determine a rearrangement such that the inter-cellular movement is minimized and the utilization of the machines within a cell is maximized. The two matrices shown in Fig. 1 are used to illustrate the concept. Fig. 1a is an initial matrix in which no blocks can be observed directly.


(a) Initial matrix (rows = parts, columns = machines):

      M1 M2 M3 M4 M5
  P1   1  0  1  0  1
  P2   0  1  0  1  0
  P3   0  1  0  1  0
  P4   1  0  1  0  1
  P5   0  1  0  1  0

(b) Matrix after rearrangement:

      M2 M4 M1 M3 M5
  P2   1  1  0  0  0
  P3   1  1  0  0  0
  P5   1  1  0  0  0
  P1   0  0  1  1  1
  P4   0  0  1  1  1

Fig. 1. Rearrangement of rows and columns of the matrix to create cells: (a) initial matrix and (b) matrix after rearrangement.

After rearrangement of the rows and columns, two blocks can be obtained along the diagonal of the solution matrix in Fig. 1b. Several measures of the goodness of machine–part groups in cellular manufacturing have been developed in the literature (Sarker, 2001). Two frequently used measures are the grouping efficiency (Chandrasekharan & Rajagopalan, 1986a) and the grouping efficacy (Kumar & Chandrasekharan, 1990). Grouping efficiency, g, is defined as follows:

g = q g1 + (1 − q) g2    (1)

where g1 is the ratio of the number of 1s in the diagonal blocks to the total number of 0s and 1s in the diagonal blocks, g2 is the ratio of the number of 0s in the off-diagonal blocks to the total number of elements in the off-diagonal blocks, and q is a weight factor (Chandrasekharan & Rajagopalan, 1989). The 1s outside the diagonal blocks are called 'exceptional elements', while the 0s inside the diagonal blocks are called 'voids'. Although grouping efficiency has been used widely, it has been argued that it has low discriminating capability in some cases, depending on the size of the problem. To overcome this weakness, Kumar and Chandrasekharan (1990) proposed another measure, the grouping efficacy τ, which can be defined as:



τ = (e − e_o) / (e + e_v)    (2)

where e is the total number of 1s in the matrix, e_o is the total number of exceptional elements, and e_v is the total number of voids. Grouping efficacy ranges from 0 to 1, with 1 corresponding to a perfect grouping. As grouping efficacy has been widely accepted in recent studies on the CF problem, we will use it to measure the quality of a given solution.

3. Literature review

A large number of studies related to GT/CM have been performed both in academia and in industry. Reisman, Kumar, Motwani, and Cheng (1997) gave a statistical review of 235 articles dealing with GT and CM over the years 1965–1995. Comprehensive summaries and taxonomies of studies devoted to part–machine grouping problems were presented in Wemmerlov and Hyer (1989) and Kusiak and Cho (1987). Methods for part family/machine cell formation can be classified as design-oriented or production-oriented. Design-oriented approaches group parts into families based on similar design features; an overview of design-oriented approaches based on classification and coding was presented in Askin and Vakharia (1991). Production-oriented techniques aggregate parts requiring similar processing. These approaches can be further classified into cluster analysis, graph partitioning, mathematical programming, artificial intelligence (AI), and heuristic-based approaches (Joines, Culbrethe, & King, 1996). Mathematical programming is widely used for modeling CMS problems. The objective of a mathematical programming model is often to maximize the total sum of similarities of parts

in each cell, or to minimize the inter-cell material handling cost. Purcheck (1975) applied linear programming techniques to a group technology problem. Kusiak and Cho (1987) proposed the generalized p-median model considering the presence of alternative routings. Shtub (1989) used the same approach and reformulated the problem as a generalized assignment problem. Wei and Gaither (1990) developed a 0–1 programming cell formation model to minimize bottleneck cost, maximize average cell utilization, and minimize intra-cell and inter-cell load imbalances. Rajamani, Singh, and Aneja (1992) proposed three integer programming models to consider budget and machine capacity constraints as well as alternative process plans. Askin and Chiu (1990) proposed a cost-based mathematical formulation and a heuristic graph partitioning procedure for cell formation. Shafer and Rogers (1991) applied a goal programming method to solving CMS problems for different system reconfiguration situations: setting up a new system and purchasing all new equipment, reorganizing the system using only existing equipment, and reorganizing the system using existing and some new equipment. Shafer and Kern (1992) presented a mathematical programming model to address the issues related to exceptional elements. Owing to their excellent performance in solving combinatorial optimization problems, meta-heuristic algorithms such as genetic algorithms, simulated annealing, neural networks and tabu search make up a class of search methods that has been adopted to efficiently solve the CF problem and its variants, with good results obtained. Sun, Lin, and Batta (1995) presented a short-term tabu search-based algorithm for solving the CF problem with the objective of minimizing the inter-cellular part flows, while Lei and Wu (2005) maximized the part flow within cells using a long-term tabu search-based algorithm. Aljaber, Baek, and Chen (1997) proposed a tabu search approach to deal with this problem by modeling it as a shortest spanning path problem with respect to both parts and machines; the resulting spanning paths for parts and machines are then decomposed into subgraphs representing machine groups and part families, respectively. Cheng, Gupa, and Lee (1998) formulated the CF problem as a traveling salesman problem (TSP) and proposed a solution methodology based on a genetic algorithm, while Dimopoulos and Mort (2001) presented a hierarchical clustering approach based on genetic programming. Onwubolu and Mutingi (2001) developed a genetic algorithm which accounts for inter-cellular movements and the cell-load variation. Goncalves and Resende (2004) presented a hybrid algorithm combining a local search and a genetic algorithm, with very promising results reported. Wu, Chang, and Chung (2007) developed a simulated annealing algorithm that uses the similarity coefficient for designing cells. Boulif and Atif (2006) proposed a branch-and-bound-enhanced genetic algorithm for the manufacturing cell formation problem, which combines a genetic algorithm with the branch-and-bound approach. Brown and James (2007) presented a hybrid algorithm combining a local search and a grouping genetic algorithm. Recently, many researchers have applied the grouping genetic algorithm to cell formation problems; Vin, Delit, and Delchamber (2005) and Brown and James (2007) are among them.


4. An introduction to DE

The differential evolution (DE) algorithm, introduced by Storn and Price (1995), is a parallel direct search method and one of the more recent evolutionary techniques for optimizing continuous nonlinear functions. Several variants of DE currently exist; the particular variant used in this research is the DE/rand/1/bin scheme, which is briefly described here. Generally, the function to be optimized, F, is of the form F(X): R^D → R. The optimization target is to minimize the value of the objective function F(X) by optimizing the values of its parameters X = (x_1, x_2, ..., x_D), X ∈ R^D, where X denotes a vector composed of D objective function parameters. Usually, the parameters of the objective function are also subject to lower and upper boundary constraints.

4.1. Initialization

As with all evolutionary optimization algorithms, DE works with a population of solutions. The population P at generation T contains NP solution vectors, called individuals, and each vector represents a potential solution to the problem. In order to establish a starting point for optimum seeking, the population must be initialized. Often there is no more knowledge available about the location of the global optimum than the boundaries of the problem variables. In this case, a natural way to initialize the population P(0) (the initial population) is to seed it with random values within the given boundary constraints.

4.2. Mutation

The population recombination scheme of DE differs from that of other evolutionary algorithms. From the first generation onward, the population of the subsequent generation P(T+1) is obtained on the basis of the current population P(T). A mutated parameter vector V_i^{T+1} = (v_{1i}^{T+1}, v_{2i}^{T+1}, ..., v_{Di}^{T+1}) of candidate vectors for the subsequent generation is generated as follows:

v_{ji}^{T+1} = x_{j,r3}^{T} + F (x_{j,r1}^{T} − x_{j,r2}^{T}),   for all i = 1, ..., NP and j = 1, ..., D    (3)

where r1, r2, r3 ∈ {1, ..., NP} and F > 0. The three randomly chosen indexes r1, r2 and r3 refer to three randomly chosen vectors of the population. They are mutually different from each other and also different from the running index i.

4.3. Crossover

In order to increase the diversity of the perturbed parameter vectors, crossover is introduced. To this end, the trial vector

U_i^{T+1} = (u_{1i}^{T+1}, u_{2i}^{T+1}, ..., u_{Di}^{T+1})    (4)

is formed, where

u_{ji}^{T+1} = v_{ji}^{T+1} if rand(j) ≤ CR or j = randint(i), and u_{ji}^{T+1} = x_{ji}^{T} otherwise,   for all i = 1, ..., NP and j = 1, ..., D    (5)

Here rand(j) is the jth evaluation of a uniform random number generator with outcome in [0, 1], CR is the crossover constant in [0, 1], which has to be determined by the user, and randint(i) is a randomly chosen index in {1, ..., D} which ensures that U_i^{T+1} gets at least one parameter from V_i^{T+1} (Storn & Price, 1995).

F and CR are DE control parameters. Both values, as well as the third parameter NP (the population size), remain constant during the search process. F is a real-valued factor that controls the amplification of differential variations, and CR is a real-valued crossover factor. Generally, both F and CR affect the convergence rate and robustness of the search process. Their optimal values depend both on the characteristics of the objective function and on the population size NP. Usually, suitable values for F, CR and NP can be found by experimentation after a few tests using different values. Practical advice on how to select the control parameters NP, F and CR can be found in Storn and Price (1995).

4.4. Selection

The selection scheme of DE also differs from that of other evolutionary algorithms. On the basis of the current population P(T) and the temporary (trial) population, the population of the next generation P(T+1) is created as follows:

X_i^{T+1} = U_i^{T+1} if F(U_i^{T+1}) ≤ F(X_i^{T}), and X_i^{T+1} = X_i^{T} otherwise    (6)

Thus, each individual of the temporary or trial population is compared with its counterpart in the current population, and the one with the lower value of the cost function F(X) propagates to the population of the next generation. As a result, all individuals of the next generation are as good as or better than their counterparts in the current generation (Storn & Price, 1995).
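As a concrete illustration of Eqs. (3)-(6), the following MATLAB sketch runs one instance of the DE/rand/1/bin scheme on a simple continuous test function. It is only a minimal sketch, not the authors' implementation; the sphere objective and the values chosen for NP, F, CR and the bounds are arbitrary assumptions.

% Minimal DE/rand/1/bin sketch for a continuous minimization problem
% (illustrative only; objective, NP, F and CR are arbitrary choices).
f  = @(x) sum(x.^2);          % objective F(X): R^D -> R (sphere function)
D  = 5;  NP = 20;  F = 0.8;  CR = 0.9;  maxGen = 100;
lb = -5; ub = 5;              % lower/upper bounds of the variables

X  = lb + (ub - lb) * rand(NP, D);          % initial population P(0)
fX = arrayfun(@(i) f(X(i,:)), (1:NP)');     % objective values

for T = 1:maxGen
    for i = 1:NP
        % choose r1, r2, r3 mutually different and different from i
        r = randperm(NP, 4);  r(r == i) = [];  r = r(1:3);
        % mutation, Eq. (3): V = X_r3 + F*(X_r1 - X_r2)
        V = X(r(3),:) + F * (X(r(1),:) - X(r(2),:));
        % binomial crossover, Eqs. (4)-(5)
        jrand = randi(D);
        mask  = rand(1, D) <= CR;  mask(jrand) = true;
        U = X(i,:);  U(mask) = V(mask);
        % selection, Eq. (6): keep the better of U and X_i
        fU = f(U);
        if fU <= fX(i), X(i,:) = U; fX(i) = fU; end
    end
end
bestVal = min(fX);
fprintf('best value %.4f\n', bestVal);

The grouping version introduced in Section 5 keeps this mutate/crossover/select skeleton but replaces the real-valued arithmetic with the group-based operators described there.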

5. The proposed grouping based differential evolution algorithm (GDE)

This section describes how the DE meta-heuristic is modified to solve the cell formation problem, which is discrete in nature. The original DE algorithm can only optimize problems in which the elements of the solution are continuous real numbers. Therefore, several approaches have been used to deal with discrete optimization by DE; most of them round off the variables to the nearest available value before evaluating each trial vector. In the following subsections we introduce a new DE algorithm adapted to the structure of the MPCF problem. The new algorithm, called GDE, owns a mutation equation analogous to the classical DE mutation, which enables us to maintain all the major characteristics of DE.

5.1. The encoding

The algorithm developed in this study (GDE) uses the encoding strategy used in GGA (Falkenauer, 1992), namely the grouping encoding. The grouping encoding scheme uses a variable length chromosome (solution) that includes the items to be grouped along with an additional section denoting the actual groups present in the solution. Consider for example the individual ABCB, which encodes the solution where the first object is in group A, the second in B, the third in C, and the fourth in B. The grouping encoding related to this individual could be ABCB|BAC. Note that the order in which the groups are listed does not matter (here the order is BAC) (Falkenauer, 1992). This representation is crucial to the design of the GDE, as the modified operators for crossover and mutation are designed to manipulate the group portion of the individuals. The encoding scheme used for the machine–part cell formation (MPCF) problem is a natural adaptation of this strategy. The chromosome representation consists of three sections: one representing the parts, one representing the machines, and an additional group section that may be of variable length (Brown & James, 2007). The individuals used for the MPCF problem can be represented as shown in (7), where p_i denotes the group to which part i is assigned, for parts 1, ..., P; m_j denotes the group to which machine j is assigned, for machines 1, ..., M; and g_k denotes the group numbers for groups 1, ..., K:


p_1 p_2 p_3 p_4 ... p_P | m_1 m_2 m_3 m_4 ... m_M | g_1 g_2 g_3 g_4 ... g_K    (7)

Considering the example given in Fig. 1b, the related solution can be encoded as follows:

2 1 1 2 1 | 2 1 2 1 2 | 1 2. The solution consists of two cells, with cell 1 containing parts {2, 3, 5} and machines {2, 4}, and cell 2 containing parts {1, 4} and machines {1, 3, 5}. Note that the part and machine portions of the individual are fixed in length based on the size of the problem; for this example, there are five parts and five machines. The group portion of the individual can vary in length depending on the number of cells into which the machines and parts are grouped.
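To make the encoding and the efficacy measure of Eq. (2) concrete, the following MATLAB sketch decodes the individual above against the incidence matrix of Fig. 1 and computes its grouping efficacy. The variable names are ours and the code is an illustrative sketch, not the authors' implementation.

% Decode a grouping-encoded individual and compute grouping efficacy, Eq. (2).
A = [1 0 1 0 1;    % machine-part incidence matrix of Fig. 1a
     0 1 0 1 0;    % rows = parts P1..P5, columns = machines M1..M5
     0 1 0 1 0;
     1 0 1 0 1;
     0 1 0 1 0];

partCell    = [2 1 1 2 1];   % part section of the individual in the text
machineCell = [2 1 2 1 2];   % machine section
groups      = unique([partCell machineCell]);

e = sum(A(:));               % total number of 1s
inDiag = false(size(A));     % entries falling inside a diagonal block
for g = groups
    inDiag(partCell == g, machineCell == g) = true;
end
e_o = sum(A(~inDiag));                   % exceptional elements: 1s outside blocks
e_v = sum(inDiag(:)) - sum(A(inDiag));   % voids: 0s inside the blocks

efficacy = (e - e_o) / (e + e_v);
fprintf('grouping efficacy = %.4f\n', efficacy);   % 1.0 for this example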

5.2. Initial solutions

Since the CF problem considers the grouping of parts and machines, an intuitive solution approach is to decompose the entire problem into two subproblems dealing with part assignment and machine assignment, respectively. When the part assignment is determined first, followed by a proper assignment of machines, the generation of an initial solution is completed (Wu et al., 2007).

5.2.1. Parts assignment

Minkowski's metric (Heragu, 1997) is used to evaluate the dissimilarity between parts, as a measure to assign parts to cells. The dissimilarity measure d_ij is defined as:

d_ij = Σ_{k=1}^{M} |a_ik − a_jk|    (8)

where a_ik = 1 if part i requires processing on machine k, and 0 otherwise, and M is the total number of machines. After calculating the dissimilarity value for each pair of parts i and j, we generate the initial part assignment using the following greedy rule: the lower the dissimilarity measure of a pair of parts, the higher the priority with which they should be placed in the same cell. This process is repeated until all parts have been assigned to cells.

5.2.2. Machine assignment

We used the same strategy as above for assigning machines to groups, with the dissimilarity measure defined as:

d_ij = Σ_{k=1}^{P} |a_ki − a_kj|    (9)

where a_ki = 1 if machine i has an operation on part k, and 0 otherwise. In this measure, P is the total number of parts. After calculating the dissimilarity value for each pair of machines, we generate the initial machine assignment. The potential of any initial solution of the MPCF problem depends on the number of initial cells (L); starting with a large number of initial cells would be undesirable. Thus we use a truncated geometric distribution to randomly generate the number of initial groups (Husseinzadeh Kashan, Karimi, & Jolai, 2006). Using a geometric distribution to simulate the number of initial groups ensures that the probability of starting with a large number of groups is small, whereas the probability of starting with a small number of groups is high. The following relation gives the random number of initial groups:

L = ⌊ ln(1 − (1 − (1 − p)^l) R) / ln(1 − p) ⌋ + 1,   L ∈ {1, 2, 3, ..., l}    (10)

where L is the random number of initial groups in an initial solution, distributed according to a truncated geometric distribution, R is a uniformly distributed variable belonging to [0, 1], p is the probability of success, and l is the maximum number of cells. To generate an initial individual, first the number of cells L is produced by (10), and parts and machines are then assigned accordingly.
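The sketch below shows one possible reading of this construction in MATLAB: the number of cells is drawn from the truncated geometric distribution of Eq. (10), and parts are then grouped greedily by increasing dissimilarity, Eq. (8). The value of p, the bound on the number of cells, and the exact tie-breaking and cell-filling rules are assumptions on our part.

% Sketch of the initial-solution construction of Section 5.2 (our reading of it).
A = [1 0 1 0 1; 0 1 0 1 0; 0 1 0 1 0; 1 0 1 0 1; 0 1 0 1 0];  % Fig. 1a
[P, M] = size(A);
p = 0.5;  lmax = min(P, M);      % success probability and max cells (assumed)

% Eq. (10): truncated geometric sample L in {1,...,lmax}
R = rand;
L = floor(log(1 - (1 - (1 - p)^lmax) * R) / log(1 - p)) + 1;

% Eq. (8): pairwise part dissimilarities d(i,j) = sum_k |a_ik - a_jk|
d = zeros(P);
for i = 1:P
    for j = 1:P
        d(i, j) = sum(abs(A(i,:) - A(j,:)));
    end
end

% Greedy rule: consider part pairs in order of increasing dissimilarity and
% place them in the same cell whenever possible (one possible interpretation).
cellOfPart = zeros(1, P);  nextCell = 1;
[iIdx, jIdx] = find(tril(true(P), -1));     % all unordered part pairs
[~, order]   = sort(d(tril(true(P), -1)));  % sorted by dissimilarity
for t = order'
    i = iIdx(t);  j = jIdx(t);
    if cellOfPart(i) == 0 && cellOfPart(j) == 0 && nextCell <= L
        cellOfPart([i j]) = nextCell;  nextCell = nextCell + 1;
    elseif cellOfPart(i) == 0 && cellOfPart(j) > 0
        cellOfPart(i) = cellOfPart(j);
    elseif cellOfPart(j) == 0 && cellOfPart(i) > 0
        cellOfPart(j) = cellOfPart(i);
    end
end
cellOfPart(cellOfPart == 0) = 1;            % any leftover part goes to cell 1
disp(cellOfPart);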

5.3. The GDE mutation

The mutated parameter vector in the GDE algorithm is generated via Eq. (11) as follows:

V_i^{T+1} = X_{r3}^{T} ⊕ (X_{r1}^{T} ⊖ X_{r2}^{T})    (11)

The definitions of the operators used in the body of (11) are given next.

The subtract operator (⊖). The difference between two individuals represented in the form of (7) can be presented by an array of elements in which each element indicates whether the content of the corresponding element in the first individual differs from that of the second individual. If it does, that element takes its value from the first individual. Fig. 2 illustrates the manner in which the ⊖ operator performs. More precisely, the number of elements that do not have the same value in both A and B is equal to the Hamming distance between A and B. It is worth mentioning that when X_{r1}^{T} is exactly equal to X_{r2}^{T}, we omit the term (X_{r1}^{T} ⊖ X_{r2}^{T}), since it is a null array. To reassign orphaned parts or machines (the 0s in the array resulting from the ⊖ operator), a repair heuristic given by Brown and Sumichrast (2001) is utilized. It is important to note that, since we are developing an algorithm capable of being applied to grouping problems, and since all operators employed in an algorithm modified to suit the structure of grouping problems (i.e., a grouping algorithm) should work with groups (cells, in our case) rather than items, the ⊖ operator must essentially be a grouping operator (examples of grouping operators are the crossover and mutation operators in GGA). If we consider a group (cell) as a set of different parts and different machines, then the result of the ⊖ operator is equal to the result of applying the set subtraction operator on the two sets. Given that the set subtraction operator is applied on two sets (groups), ⊖ is a grouping operator. For example, in Fig. 2 group 1 of individual A contains parts {3, 4, 5} and machine {1}, while group 1 of individual B contains parts {3, 6} and machine {1}. Applying the set subtraction operator, we obtain a group 1 containing parts {4, 5} and no machine, which is the same result obtained in the created individual (C).

A = X_{r1}^{T} = 2 3 1 1 1 2 3 2 | 1 2 3 3 | 1 2 3
B = X_{r2}^{T} = 3 2 1 2 4 1 3 4 | 1 2 3 4 | 1 2 3 4
C = X_{r1}^{T} ⊖ X_{r2}^{T} = 2 3 0 1 1 2 0 2 | 0 0 0 3 | 1 2 3

Fig. 2. Illustration of the manner in which the ⊖ operator performs.
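A minimal MATLAB sketch of the element-wise part of the ⊖ operator, reproducing the part and machine sections of Fig. 2; the group section and the subsequent repair of orphaned items are omitted, and the snippet is our illustration rather than the authors' code.

% Grouping subtract operator of Fig. 2, applied to the part and machine
% sections of the two example individuals (illustrative sketch only).
Xr1 = [2 3 1 1 1 2 3 2,  1 2 3 3];    % parts p1..p8, then machines m1..m4
Xr2 = [3 2 1 2 4 1 3 4,  1 2 3 4];

C = Xr1;                 % keep X_r1's value where the two parents differ ...
C(Xr1 == Xr2) = 0;       % ... and mark agreeing positions as orphaned (0)

disp(C)                  % 2 3 0 1 1 2 0 2   0 0 0 3, as in Fig. 2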


The add operator (⊕). This operator is a crossover operator of the type typically used in genetic algorithms. Crossover is viewed as the operator whose mission is to interchange structural information developed during the search. The crossover operator utilized in the current algorithm is consistent with the implementation described by Falkenauer (1998). It works like a two-point crossover (Seifiddini, 1989), but the cross-points separate the group portion of the individual into segments, not the part or machine portions. Specifically, the crossover takes the following steps (Falkenauer, 1998):

(1) Select two cross-points from the group portion of one of the individuals.
(2) Inject the cross-section from this contributing individual into the other one.
(3) Modify the assignment portion(s) of the individual to reflect the group assignments of the contributing individual.
(4) Apply a local problem-dependent heuristic in order to adapt the resulting groups.

This crossover method does not simply switch two portions of the individuals like a traditional two-point crossover, but instead inserts groups from one individual into the other. As step (4) of the crossover indicates, modification of the resulting groups via some repair heuristic may be necessary to create feasible individuals; details of the problem-dependent heuristic employed by GDE are given in Section 5.4. An illustrative example of the crossover operator, similar to that presented by Brown and James (2007), is as follows. Consider the following two parental solutions for a problem with seven parts and 11 machines:

P1 = 3 1 1 1 2 3 2 | 1 2 3 3 2 3 1 2 3 1 1 | 1 2 3
P2 = 2 4 2 3 1 2 4 | 1 1 2 2 1 2 3 4 2 3 2 | 1 2 3 4    (12)

The crossover begins by selecting two cross-points from the group portion of the second parent P2; for example, CP1 = 0 and CP2 = 3:

P2 = 2 4 2 3 1 2 4 | 1 1 2 2 1 2 3 4 2 3 2 | [1 2 3] 4    (13)

In this example, groups 1, 2 and 3 of the second parent are inserted into the first parent, which produces the offspring C1 after the corresponding machine/part modifications are made to reflect the machine/part assignments of the inserted groups of the contributing parent:

C1 = 2 1 2 3 1 2 2 | 1 1 2 2 1 2 3 2 2 3 2 | 1 2 3 1 2 3    (14)

As can be seen, the original group 3 of P1 no longer contains either parts or machines, so it can be eliminated, resulting in:

C1 = 2 1 2 3 1 2 2 | 1 1 2 2 1 2 3 2 2 3 2 | 1 2 1 2 3    (15)

The original group 1 of P1 now contains one part (part 2) but no machines, and is thus infeasible. We employ a repair heuristic to place part 2 into a group that contains at least one machine.
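The group-injection step that produces the offspring of Eq. (14) can be sketched in MATLAB as follows. The snippet only reproduces steps (2)-(3) of the crossover for this particular example; elimination of empty groups, repair and renumbering are handled afterwards, as described in the text, and the simple overwrite works here because the injected group labels are reused verbatim.

% Sketch of the group-injection step of the (+) operator, reproducing the
% offspring of Eq. (14) from P1 and P2 (repair and renumbering come later).
P1parts = [3 1 1 1 2 3 2];  P1mach = [1 2 3 3 2 3 1 2 3 1 1];
P2parts = [2 4 2 3 1 2 4];  P2mach = [1 1 2 2 1 2 3 4 2 3 2];
injected = [1 2 3];                       % cross-section chosen from P2

C1parts = P1parts;  C1mach = P1mach;
for g = injected
    % every part/machine that P2 assigns to an injected group adopts
    % that assignment in the offspring
    C1parts(P2parts == g) = g;
    C1mach(P2mach == g)   = g;
end
disp([C1parts, C1mach])   % 2 1 2 3 1 2 2 | 1 1 2 2 1 2 3 2 2 3 2, as in Eq. (14)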

5.4. Repair heuristic

To fix the infeasibilities that arise when applying the GDE operators, and to reassign orphaned parts or machines, the repair heuristic given by Brown and Sumichrast (2001) is adopted. Recalling the previous example:

C1 = 2 1 2 3 1 2 2 | 1 1 2 2 1 2 3 2 2 3 2 | 1 2 1 2 3    (16)

Group 1 contains part 2 but no machines; therefore, part 2 must be reassigned to a new group. Considering the MP incidence matrix given in Fig. 3, we locate all the machines part 2 needs, namely machines {1, 2, 6}.

        P1 P2 P3 P4 P5 P6 P7
  M1     1  1  0  0  0  0  0
  M2     0  1  1  0  0  0  0
  M3     1  0  0  0  1  1  0
  M4     0  0  0  1  0  1  0
  M5     0  0  0  1  0  0  1
  M6     0  1  1  0  0  0  0
  M7     1  0  0  0  1  0  0
  M8     0  0  0  0  0  0  1
  M9     0  0  1  0  0  0  0
  M10    0  0  0  1  0  0  1
  M11    1  0  0  0  0  1  0

Fig. 3. Machine–part incidence matrix.

The machine section of C1 is then examined to locate the groups to which machines 1, 2 and 6 are assigned. In this example, machine 6 is in group 2, while machines 1 and 2 are in group 1 (the inserted group 1). As this group contains more of the required machines for part 2, part 2 is reassigned to it and the now-empty original group 1 is eliminated, resulting in the following solution:

C1 = 2 1 2 3 1 2 2 | 1 1 2 2 1 2 3 2 2 3 2 | 2 1 2 3    (17)

By renumbering the groups, the resulting new child appears as:

C1 = 3 2 3 4 2 3 1 | 2 2 3 3 2 3 4 1 3 4 3 | 1 2 3 4    (18)

This repair operator is applied to all orphaned machines or parts.
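A MATLAB sketch of the reassignment rule used above: the orphaned part is moved to the group that already contains the largest number of the machines it requires. The data are those of the worked example; this is our paraphrase of the Brown and Sumichrast (2001) heuristic, not their code.

% Reassign an orphaned part to the group holding most of its required machines.
% A is the 11x7 machine-part incidence matrix of Fig. 3 (rows = machines).
A = [1 1 0 0 0 0 0; 0 1 1 0 0 0 0; 1 0 0 0 1 1 0; 0 0 0 1 0 1 0;
     0 0 0 1 0 0 1; 0 1 1 0 0 0 0; 1 0 0 0 1 0 0; 0 0 0 0 0 0 1;
     0 0 1 0 0 0 0; 0 0 0 1 0 0 1; 1 0 0 0 0 1 0];

machineCell = [1 1 2 2 1 2 3 2 2 3 2];   % machine section of C1 before repair
orphanPart  = 2;                         % part 2 sits in a group with no machines

needed = find(A(:, orphanPart))';        % machines required by the part: 1 2 6
groups = unique(machineCell);
count  = arrayfun(@(g) sum(ismember(needed, find(machineCell == g))), groups);
[~, k] = max(count);
fprintf('part %d is reassigned to group %d\n', orphanPart, groups(k));  % group 1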

5.5. Local search

The local search employed in the current algorithm is given by Goncalves and Resende (2004). It uses the partial efficacy given in Eq. (19). The partial efficacy, μ_c, is the efficacy value obtained by assigning either a part to a machine group or a machine to a part family/group. In Eq. (19), e is the total number of 1s in the MP matrix, e_{o,c} is the number of exceptions for that part/machine in relation to the machine group/part family being considered, and e_{v,c} is the number of voids for that machine/part, also in relation to the machine group/part family being considered (Brown & James, 2007):

μ_c = (e − e_{o,c}) / (e + e_{v,c})    (19)

Taking the machine grouping of the incoming solution, the partial efficacy is first calculated for each part using Eq. (19), and each part is then reassigned to the group with the largest partial efficacy value. If the modified solution is better than the original solution, the original solution is replaced and the process is restarted, this time taking the part families rather than the machine groupings: after calculating the partial efficacy for each machine, the machine is reassigned to the part family with the largest partial efficacy value. The algorithm iterates by returning to a further reassignment of the parts in the third iteration, a further reassignment of the machines in the fourth iteration, and so on, until the quality of the new solution no longer exceeds the quality of the last solution. To illustrate, consider the offspring C1 created above. Fig. 4 gives the resulting partial efficacy values for the local search step in which the parts are considered for reassignment.

  Part   Required machines   Group 1 {8}   Group 2 {1,2,5}   Group 3 {3,4,6,9,11}   Group 4 {7,10}
  1      1,3,7,11            77.27         78.26             79.17                  81.82
  2      1,2,6               81.82         90.91             76.00                  78.26
  3      2,6,9               81.82         82.61             83.33                  78.26
  4      4,5,10              81.82         82.61             76.00                  86.36
  5      3,7                 86.36         79.17             80.00                  90.91
  6      3,4,11              81.82         75.00             91.30                  78.26
  7      5,8,10              90.48         82.61             69.23                  86.36

Fig. 4. Partial efficacy matrix.

The partial efficacy matrix given in Fig. 4 results in the following repaired individual:

C1 = 4 2 3 4 4 3 1 | 2 2 3 3 2 3 4 1 3 4 3 | 1 2 3 4    (20)
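The parts-reassignment pass of this local search can be sketched in MATLAB as follows. Using the machine grouping of Eq. (18) and the incidence matrix of Fig. 3, it reproduces the partial efficacies of Fig. 4 and the part section of Eq. (20); it is an illustrative sketch of a single pass, not the authors' implementation.

% One parts-reassignment pass of the local search, Eq. (19): every part is
% moved to the machine group with the largest partial efficacy.
A = [1 1 0 0 0 0 0; 0 1 1 0 0 0 0; 1 0 0 0 1 1 0; 0 0 0 1 0 1 0;
     0 0 0 1 0 0 1; 0 1 1 0 0 0 0; 1 0 0 0 1 0 0; 0 0 0 0 0 0 1;
     0 0 1 0 0 0 0; 0 0 0 1 0 0 1; 1 0 0 0 0 1 0];   % Fig. 3 (rows = machines)

machineCell = [2 2 3 3 2 3 4 1 3 4 3];   % machine section of C1 in Eq. (18)
e = sum(A(:));                           % total number of 1s in the MP matrix
nParts = size(A, 2);  groups = unique(machineCell);
newPartCell = zeros(1, nParts);

for i = 1:nParts
    needed = find(A(:, i));              % machines required by part i
    mu = zeros(size(groups));
    for k = 1:numel(groups)
        inGroup = find(machineCell == groups(k));
        e_o = sum(~ismember(needed, inGroup));   % exceptions for this assignment
        e_v = sum(~ismember(inGroup, needed));   % voids for this assignment
        mu(k) = (e - e_o) / (e + e_v);           % partial efficacy, Eq. (19)
    end
    [~, best] = max(mu);
    newPartCell(i) = groups(best);
end
disp(newPartCell)    % 4 2 3 4 4 3 1, the part section of Eq. (20)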

The local search is applied to 10% of the generated individuals. Based on preliminary computations, we chose the following parameter settings: the population size is 100, the number of generations is 50, and CR is equal to 1.

6. Computational results

In this section, 40 test problems taken from the literature are used to evaluate the computational characteristics of the proposed heuristic with and without local search. Results are compared with the GGA algorithm of Brown and James (2007). The proposed algorithm was coded in Matlab 7.4 and run on a laptop computer with a 2.5 GHz CPU and 2 GB of main memory. For the first 31 problems we state the GGA results as reported in Brown and James (2007); for the remaining nine problems we report GGA results based on our own implementation of that algorithm. Computational results on all 40 problems are given in Table 1. The worst, average, and best solutions found in 10 runs are presented for each algorithm, together with the average computational time in seconds. It can be verified that GDE outperforms GGA in terms of solution quality on 23 problems; for the remaining 17 problems, GDE matches the solution quality obtained by GGA. Furthermore, the hybridized version of GDE, called HGDE, is shown to outperform HGGA in terms of solution quality on problems 14, 18, 28, 32, 33 and 35. For the remaining problems, HGDE matches the solution quality obtained by HGGA.

Table 1
Computational results on 40 benchmarked problems. For each instance the table reports the problem source and size and, for GGA, HGGA, GDE and HGDE, the minimum, average and maximum grouping efficacy found in 10 runs together with the average computational time in seconds; a superscript 'a' marks the cases in which GDE (HGDE) outperforms GGA (HGGA). The 40 instances range from 5 × 7 to 37 × 53 and are taken from King and Nakornchai (1982), Waghodekar and Sahu (1984), Seifiddini (1989), Kusiak and Cho (1987, 1992), Boctor (1991), Seifiddini and Wolf (1986), Chandrasekharan and Rajagopalan (1986a, 1986b, 1989), Mosier and Taube (1985a, 1985b), Chan and Milner (1982), Askin and Subramanian (1987), Stanfel (1985), McCormick et al. (1972), Srinivasan et al. (1990), King (1980), Carrie (1973), Kumar et al. (1986), Boe and Cheng (1991), and Kumar and Vannelli (1983). (Table entries omitted.)

7. Conclusions

In this paper, we presented a new solution approach for the machine–part cell formation problem based on the grouping representation used in the grouping genetic algorithm. We proposed an equation analogous to the mutation equation of the classical differential evolution algorithm (DE), yielding a grouping version of DE (GDE). Results obtained from extensive computational experiments justify the competitive performance of GDE compared with a recently proposed algorithm, the grouping genetic algorithm (GGA). We found that the GDE algorithm outperforms GGA in terms of solution quality on 23 out of 40 benchmarked problem instances; for the remaining problems both algorithms provide the same results. We also compared the hybridized versions of GDE and GGA (HGDE and HGGA, respectively), where the same pattern was observed, although there were only six problems on which HGDE outperformed HGGA. For future research, the extension of our approach to other types of grouping problems, e.g. the bin packing problem and the graph coloring problem, could be investigated.

References

Aljaber, N., Baek, W., & Chen, C. (1997). A tabu search approach to the cell formation problem. Computers and Industrial Engineering, 32, 169–185.
Askin, R. G., & Chiu, K. S. (1990). A graph partitioning procedure for machine assignment and cell formation in group technology. International Journal of Production Research, 28, 1555–1572.
Askin, R. G., & Subramanian, S. P. (1987). A cost-based heuristic for group technology configuration. International Journal of Production Research, 25, 101–113.
Askin, R. G., & Vakharia, A. J. (1991). Group technology – Cell formation and operation. In D. I. Cleland & B. Bidanda (Eds.), The automated factory handbook: Technology and management (pp. 317–336). New York: TAB Books.
Ballakur, A., & Steudel, H. J. (1998). A within-cell utilization based heuristic for designing cellular manufacturing systems. International Journal of Production Research, 25, 639–655.
Boctor, F. F. (1991). A linear formulation of the machine–part cell formation problem. International Journal of Production Research, 29, 343–356.

Boe, W. J., & Cheng, C. H. (1991). A close neighbor algorithm for designing cellular manufacturing systems. International Journal of Production Research, 29(10), 2097–2116.
Boulif, M., & Atif, K. (2006). A new branch-and-bound-enhanced genetic algorithm for the manufacturing cell formation problem. Computers and Operations Research, 33, 2219–2245.
Brown, E., & James, T. (2007). A hybrid grouping genetic algorithm for the cell formation problem. Computers and Operations Research, 34, 2059–2079.
Brown, E., & Sumichrast, R. (2001). CF-GGA: A grouping genetic algorithm for the cell formation problem. International Journal of Production Research, 36, 3651–3669.
Carrie, A. S. (1973). Numerical taxonomy applied to group technology and plant layout. International Journal of Production Research, 399–416.
Chandrasekharan, M. P., & Rajagopalan, R. (1986a). MODROC: An extension of rank order clustering for group technology. International Journal of Production Research, 24, 1221–1233.
Chandrasekharan, M. P., & Rajagopalan, R. (1986b). An ideal seed non-hierarchical clustering algorithm for cellular manufacturing. International Journal of Production Research, 24, 451–463.
Chandrasekharan, M. P., & Rajagopalan, R. (1989). GROUPABILITY: An analysis of the properties of binary data matrices for group technology. International Journal of Production Research, 27, 1035–1052.
Chan, H. M., & Milner, D. A. (1982). Direct clustering algorithm for group formation in cellular manufacturing. Journal of Manufacturing Systems, 1, 65–74.
Cheng, C., Gupa, Y., & Lee, W. A. (1998). TSP-based heuristic for forming machine groups and part families. International Journal of Production Research, 36, 1325–1337.
Dimopoulos, C., & Mort, N. (2001). A hierarchical clustering methodology based on genetic programming for the solution of simple cell-formation problems. International Journal of Production Research, 39, 1–19.
Falkenauer, E. (1992). The grouping genetic algorithms – widening the scope of the GAs. Belgian Journal of Operations Research, Statistics and Computer Science, 33, 79–102.
Falkenauer, E. (1998). Genetic algorithm for grouping problems. New York: Wiley.
Goncalves, J., & Resende, M. (2004). An evolutionary algorithm for manufacturing cell formation. Computers and Industrial Engineering, 47, 247–273.
Heragu, S. S. (1997). Facility design. Boston, MA: PWS Publishing Company.
Husseinzadeh Kashan, A., Karimi, B., & Jolai, F. (2006). Effective hybrid genetic algorithm for minimizing makespan on a single-batch-processing machine with non-identical job sizes. International Journal of Production Research, 44, 2337–2360.
Joines, J., Culbrethe, C., & King, R. (1996). Manufacturing cell design: An integer programming model employing genetic algorithms. IIE Transactions, 28, 69–85.
King, J. R. (1980). Machine-component grouping in production flow analysis: An approach using a rank order clustering algorithm. International Journal of Production Research, 213–232.
King, J. R., & Nakornchai, V. (1982). Machine-component group formation in group technology: Review and extension. International Journal of Production Research, 20(2), 117–131.
Kumar, C., & Chandrasekharan, M. P. (1990). Grouping efficacy: A quantitative criterion for goodness of block diagonal forms of binary matrices in group technology. International Journal of Production Research, 28, 233–243.
Kumar, K. R., Kusiak, A., & Vannelli, A. (1986). Grouping of parts and components in flexible manufacturing systems. European Journal of Operational Research, 24, 387–397.
Kumar, K. R., & Vannelli, A. (1983). Strategic subcontracting for efficient disaggregated manufacturing. International Journal of Production Research, 25, 1715–1728.
Kusiak, A., & Cho, W. S. (1987). Efficient solving of the group technology problem. Journal of Manufacturing Systems, 117–124.
Kusiak, A., & Cho, M. (1992). Similarity coefficient algorithm for solving the group technology problem. International Journal of Production Research, 30(11), 2633–2646.
Lei, D., & Wu, Z. (2005). Tabu search approach based on a similarity coefficient for cell formation in generalized group technology. International Journal of Production Research, 19, 4035–4047.
McCormick, W. T., Schweitzer, P. J., & White, T. W. (1972). Problem decomposition and data reorganization by a clustering technique. Operations Research, 20, 993–1009.
Mosier, C. T., & Taube, L. (1985a). The facets of group technology and their impact on implementation. OMEGA, 13, 381–391.
Mosier, C. T., & Taube, L. (1985b). Weighted similarity measure heuristics for the group technology machine clustering problem. OMEGA, 13, 577–583.
Onwubolu, G. C., & Mutingi, M. (2001). A genetic algorithm approach to cellular manufacturing systems. Computers and Industrial Engineering, 39, 125–144.
Purcheck, G. F. K. (1975). A linear programming method for the combinatorial grouping of an incomplete power set. Journal of Cybernetics, 51–76.
Rajamani, D., Singh, N., & Aneja, Y. P. (1992). Selection of parts and machines for cellularization: A mathematical programming approach. European Journal of Operational Research, 62, 47–54.
Reisman, A., Kumar, A., Motwani, J., & Cheng, C. H. (1997). Cellular manufacturing: A statistical review of the literature (1965–1995). Operations Research, 45, 508–520.
Sarker, B. (2001). Measures of grouping efficacy in cellular manufacturing systems. European Journal of Operational Research, 130, 588–611.
Seifiddini, H. (1989). A note on the similarity coefficient method and the problem of improper machine assignment in group technology applications. International Journal of Production Research, 27, 1161–1165.
Seifiddini, H., & Wolf, P. M. (1986). Application of the similarity coefficient method in group technology. IIE Transactions.
Shafer, S. M., & Kern, G. (1992). A mathematical programming approach for dealing with exceptional elements in cellular manufacturing. International Journal of Production Research, 30, 1029–1036.
Shafer, S. M., & Rogers, D. F. (1991). A goal programming approach to cell formation problems. Journal of Operations Management, 10, 28–43.
Shtub, A. (1989). Modeling group technology cell formation as a generalized assignment problem. International Journal of Production Research, 27, 775–782.
Srinivasan, G., Narendran, T., & Mahadevan, B. (1990). An assignment model for the part-families problem in group technology. International Journal of Production Research, 28, 145–152.
Stanfel, L. E. (1985). Machine clustering for economic production. Engineering Costs and Production Economics, 9, 73–81.
Storn, R., & Price, K. (1995). Differential evolution – A simple and efficient adaptive scheme for global optimization over continuous spaces. Technical Report TR-95-012.
Sun, D., Lin, L., & Batta, R. (1995). Cell formation using tabu search. Computers and Industrial Engineering, 28, 485–494.
Vin, E., Delit, P., & Delchamber, A. (2005). A multiple-objective grouping genetic algorithm for the cell formation problem with alternative routings. Journal of Intelligent Manufacturing, 16, 189–205.
Waghodekar, P. H., & Sahu, S. (1984). Machine-component cell formation in group technology: MACE. International Journal of Production Research, 22, 937–948.
Wei, J. C., & Gaither, N. (1990). A capacity constrained multi-objective cell formation method. Journal of Manufacturing Systems, 9, 222–232.
Wemmerlov, U., & Hyer, N. (1987). Research issues in cellular manufacturing. International Journal of Production Research, 25, 413–431.
Wemmerlov, U., & Hyer, N. L. (1989). Cellular manufacturing in the US industry: A survey of users. International Journal of Production Research, 27(9), 1511–1530.
Wu, T., Chang, C., & Chung, S. (2007). A simulated annealing algorithm for manufacturing cell formation problems. Expert Systems with Applications.
