Computers & Industrial Engineering 45 (2003) 635–651 www.elsevier.com/locate/dsw

A greedy random adaptive search procedure for the weighted maximal planar graph problem

Ibrahim H. Osman a,*, Baydaa Al-Ayoubi b, Musbah Barake a

a School of Business, Center for Advanced Mathematical Sciences, American University of Beirut, P.O. Box 11-0236, Bliss Street, Beirut, Lebanon
b Department of Applied Mathematics, Faculty of Science I, Lebanese University, Haddath, Beirut, Lebanon

Accepted 11 September 2003

Abstract

The weighted maximal planar graph (WMPG) problem seeks to find a subgraph of a given weighted complete graph such that the subgraph is planar (it can be embedded in the plane without any arcs intersecting) and maximal (no additional arc can be added without destroying its planarity), while having the maximal sum of arc weights. In this paper, the main objective is to develop, implement and empirically analyse a new greedy random adaptive search procedure (GRASP) to solve the WMPG problem. A dynamic strategy to update the restricted candidate list is proposed. An efficient data structure is developed for the Green and Al-Hakim (GH) construction heuristic; it reduces the GH complexity from O(n^3) to O(n^2). The GH heuristic with the data structure is then integrated with an advanced-moves neighbourhood to develop an efficient GRASP implementation. Further, we investigate the behaviour of the GRASP parameters in relation to the problem's characteristics. Finally, the developed algorithms are compared with the best-known procedures in the literature on a set of 100 test instances of sizes varying from 20 to 100 nodes. © 2003 Elsevier Ltd. All rights reserved.

Keywords: Combinatorial optimisation problem; Data structures; Facility layout; Graph theory; Heuristics; Weighted maximal planar graph; Greedy random adaptive search procedure; Meta-heuristics

1. Introduction

Given a complete undirected weighted graph G = (N, A, W), where N is the set of nodes, A is the set of arcs, and W is the set of positive weights associated with the arcs. Each arc k ∈ A is represented by

☆ This manuscript was processed by area editor Tom M. Cavalier.
* Corresponding author.
E-mail addresses: [email protected] (I.H. Osman), [email protected] (B. Al-Ayoubi).

0360-8352/$ - see front matter © 2003 Elsevier Ltd. All rights reserved. doi:10.1016/j.cie.2003.09.005


k = (n_i, n_j), where n_i and n_j are its two end nodes, and has a positive weight w_k representing a benefit (or desirability value) for the two end nodes to be adjacent. A graph is planar if it can be drawn in the plane without any arcs intersecting. A planar graph is maximal if no additional arc can be added to it without destroying its planarity. The objective of the weighted maximal planar graph (WMPG) problem is to find a maximal planar subgraph G_p = (N, A_p, W_p) with the maximal (highest) sum of arc weights, W(G_p) = Σ_{k ∈ A_p} w_k, where A_p ⊂ A.

The WMPG problem is of practical importance in modern manufacturing systems. It has many applications in arranging rooms within a building floor, placing machines on a factory floor, locating controls on an instrument panel, and placing components on a circuit board. Other applications include the location of electrical circuits in VLSI design (Hassan & Hogg, 1987), graph planarization (Resende & Ribeiro, 1997), and automatic graph drawing (Junger & Mutzel, 1993, 1996). For more applications, we refer to Domschke and Krispin (1997). For other related papers, we refer to Carson (1993), Junger, Leipert, and Mutzel (1998), Shimozono (1997), and Watson and Giffin (1996).

The WMPG approach to the design of skeletons for a family of facility layout and location problems involves three systematic phases. First, an adjacency graph is constructed in which the nodes represent facilities, the arcs define the relationships between the facilities, and the weights on the arcs express the material flow (or adjacency benefit) between facilities. Second, the WMPG problem on the adjacency graph is solved to obtain its optimal solution. Finally, the designer draws a block layout, which is the dual of the WMPG optimal solution, so as to satisfy space and shape requirements, facility arrangement, and adjacencies.

Fig. 1 shows a WMPG solution G_p = (N, A_p, W_p), where the set A_p is represented by solid lines and its corresponding block layout by dashed lines. Each node of G_p is associated with one facility (the node E is a special case defining the external area), and each arc of A_p is translated into a common wall (a dashed boundary) between two adjacent facilities in the final block layout. Moreover, all the faces of G_p are triangular, i.e. each is bounded by three arcs and determined by three adjacent nodes. For more details on the problem of drawing block layouts, we refer to Al-Hakim (1992), Giffin et al. (1986), Hassan and Hogg (1991), Rinsma, Giffin, and Robinson (1990), and Watson and Giffin (1997).

The most difficult phase of the systematic three-phase approach is finding the optimal solution of the WMPG problem. It is an important and challenging combinatorial optimization problem since it is

Fig. 1. A WMPG solution (solid lines) and its block layout (dashed lines).


NP-complete (Giffin, 1984); hence no polynomial-time optimal solution procedure is likely to exist unless P = NP. Only small-sized instances can be solved to optimality. In the literature, the largest solved instances have sizes of up to 17 facilities (Hasan & Osman, 1995), or 10 facilities on a complete graph and 20 facilities in the case of a 20%-density graph (Junger & Mutzel, 1993, 1996). As a result, research attention has turned to the design of approximate methods (heuristics) to solve large-sized instances of this problem.

Meta-heuristics are recent techniques that have had widespread success in providing high-quality, near-optimal solutions to many real-life complex optimisation problems in diverse areas. We refer to Osman (1995) for an introduction, to Osman and Laporte (1996) for a bibliography, and to the books by Glover and Kochenberger (2003), Osman (2003), Osman and Kelly (1996), and Voss, Martello, Osman, and Roucairol (1999). The success of meta-heuristics motivated us to design and implement a greedy random adaptive search procedure (GRASP) for the WMPG problem. Each GRASP iteration consists of two phases: a construction phase and a local search phase (Feo & Resende, 1995). In the construction phase, a feasible solution is iteratively constructed by randomly choosing an element from a restricted candidate list (RCL). The RCL contains high-quality elements, generated according to a greedy adaptive function. The size of the RCL normally depends on a parameter α of fixed value, which must be carefully chosen to construct good initial solutions.

In this paper, we investigate the behaviour of the GRASP parameters. First, a new expression to determine the initial value of α is introduced. Second, a new dynamic strategy is implemented to update the size of the candidate list after each iteration. Third, a new data structure is designed for the GH heuristic of Green and Al-Hakim (1985).
The GH heuristic with the implemented data structure (GH&D) is followed by a local search (LS) ascent phase using the advanced types of moves in Pesch, Glover, Bartsch, Salewski, and Osman (1999). The GH&D and LS procedures are combined into a single procedure (GH&D + LS), which is then integrated into our GRASP implementation.

The remaining parts of the paper are organized as follows. Section 2 presents a brief literature review of the WMPG and its graph-theoretic properties. The GH construction heuristic and the new data structure are discussed in Section 3. Our GRASP implementation is presented in Section 4. The efficiency and effectiveness of the proposed approaches are demonstrated by comparing them with the best-known algorithms in the literature in Section 5; the computational results are reported on a set of 100 test instances with sizes varying from 20 to 100 nodes. Finally, we conclude with some remarks in Section 6.

2. Brief review of the WMPG and its properties

Let G = (N, A, W) be a complete graph, where N = {1, …, n} is the set of n nodes, A = {1, …, a} is the set of undirected arcs, and W = {w_k : w_k ≥ 0, ∀k ∈ A} is the set of arc weights. Let G_p = (N, A_p, W_p) be its corresponding weighted maximal planar subgraph solution, defined by the set of arcs A_p = {a_1, …, a_p} and the set of triangular faces F = {f_1, …, f_t}. Then Euler's formula (n − p + t = 2) holds, and the following implied properties are valid for any n ≥ 3:

(i) For any planar graph, p ≤ 3n − 6, with p = 3n − 6 when it is maximal.
(ii) If G_p is a maximal planar (connected) graph, then the number of triangular faces is t = 2n − 4.
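The two properties can be checked numerically. The following is a minimal sketch of that check (the function name `wmpg_counts` is our own, not from the paper):

```python
# A minimal sketch checking the arc and face counts that Euler's formula
# (n - p + t = 2) implies for a maximal planar graph on n >= 3 nodes.

def wmpg_counts(n):
    """Return (p, t): arcs and triangular faces of a maximal planar graph."""
    p = 3 * n - 6              # property (i): p = 3n - 6 when maximal
    t = 2 * n - 4              # property (ii): t = 2n - 4
    assert n - p + t == 2      # Euler's formula holds
    return p, t

for n in (20, 100):
    p, t = wmpg_counts(n)
    print(n, p, t, n * (n - 1) // 2)   # compare p with total arcs n(n-1)/2
```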


From the above properties, the combinatorial nature of the WMPG can be seen in the need to find a set A_p of 3n − 6 arcs out of the total number of arcs a = n(n − 1)/2, and a set F of 2n − 4 triangular faces out of the total number of possible triangles n(n − 1)(n − 2)/6, that satisfy the planarity and maximality requirements. For more details on graph-theory terminology, we refer to Nishizeki and Chiba (1988).

In the literature, there are a few exact procedures for the WMPG, and they can only solve small-sized instances. Foulds and Robinson (1976) propose a tree-search branch-and-bound (B&B) algorithm based on complete enumeration with planarity testing. Hasan and Osman (1995) reported results from a tree-search B&B using a Lagrangean relaxation bound. Recently, Osman and Abdullah (2003) designed B&B tree-search exact algorithms based on linear programming relaxation bounds without planarity testing for maximal planar graph problems, whereas Junger and Mutzel (1993, 1996) designed an exact branch-and-cut algorithm with planarity testing.

There is, however, a large number of approximate algorithms for the WMPG problem. These can be grouped into classical heuristics and recent meta-heuristics. The classical heuristics include construction heuristics with planarity testing (Foulds, Gibbons, & Giffin, 1985; Osman & Hasan, 1997), construction heuristics without planarity testing (Boswell, 1992; Eades, Foulds, & Giffin, 1982; Foulds & Robinson, 1976; Green & Al-Hakim, 1985; Leung, 1992; Wascher & Merker, 1997), and local search improvement heuristics (Al-Hakim, 1991; Cimikowski & Mooney, 1997; Foulds et al., 1985; Hasan & Osman, 1995; Kim & Kim, 1995; Pesch et al., 1999). However, little has been published on meta-heuristics, except the hybrid simulated annealing and tabu search approach in Hasan and Osman (1995) and the linear-programming-based meta-heuristics in Osman, Hasan, and Abdullah (2002).
For recent reviews on solution methods, we refer to Barake (1997), Domschke and Krispin (1997), Hassan and Hogg (1987), and Wascher and Merker (1997).

3. Data structure for the GH construction heuristic

In this section, we present the GH construction heuristic of Green and Al-Hakim (1985) and describe our new data structure for its efficient implementation.

3.1. The GH construction heuristic

Green and Al-Hakim (1985) proposed a good construction heuristic that obtains a WMPG solution by a series of triangulations without the need for planarity testing. It is a modification of the well-known deltahedron heuristic of Foulds and Robinson (1976). The GH heuristic begins by ordering the nodes in decreasing order of the sum of the weights of the arcs incident to each node. It then chooses the first four nodes from the ordered list (OL) to construct a planar subgraph: it connects each pair of the four nodes to obtain an initial deltahedron, a complete subgraph on four nodes with four triangular faces. The remaining nodes are then evaluated for insertion into the current set of triangular faces. The un-inserted node with the highest insertion value is selected and inserted into its corresponding face. The process is repeated until all nodes are inserted, at which point a complete WMPG solution is obtained with 3n − 6 arcs and 2n − 4 triangles (faces).


The steps of the GH heuristic are as follows:

Step I: Initialization:
(1) Let UN be the set of un-inserted nodes, n̄ its cardinality, F the set of constructed faces, and f*_i the best insertion face of node n_i with its maximum benefit value b*_i. Set UN = N, n̄ = n, and F = ∅.
(2) For each node n_i ∈ UN, compute W(n_i) = Σ_{k ∈ A_i} w_k, where A_i is the set of arcs incident to n_i.
(3) Order the nodes in decreasing order of W(n_i) in an ordered list (OL).
(4) Select the four nodes {n_1, …, n_4} from the top of OL to construct the initial deltahedron of four faces f_1, …, f_4.
(5) Set UN = UN \ {n_1, …, n_4}, n̄ = n − 4, and F = F ∪ {f_1, …, f_4}.

Step II: Greedy evaluation and selection:
(6) For each un-inserted node n_i ∈ UN, compute the benefit b^j_i of inserting it into every face f_j ∈ F; let b*_i = max_{f_j ∈ F} (b^j_i).
(7) Find k = arg max_{n_i ∈ UN} (b*_i), the index of the node with the highest b*_i value.

Step III: Insert and update:
(8) Insert node n_k into its best face f*_k.
(9) Let f^1_k, f^2_k, f^3_k be the three new faces generated after inserting node n_k.
(10) Set UN = UN \ {n_k}, n̄ = n̄ − 1, and F = (F \ {f*_k}) ∪ {f^1_k, f^2_k, f^3_k}.

Step IV: Termination:
(11) If UN = ∅ (i.e. n̄ = 0), stop; otherwise go to Step II.

3.2. GH data structure

We first illustrate the working of the GH heuristic on the example in Fig. 2, to motivate the proposed data structure before describing it.

Fig. 2. Illustration for the GH data structure.

Let N = {1, …, 6} be the set of nodes in G and {1, …, 4} the set of four nodes used to construct the initial deltahedron composed of the four faces F = {f_123, f_124, f_134, f_234}. Step II (6) of the GH heuristic would compute the best insertion values {b*_5, b*_6} for the remaining set of un-inserted nodes UN = {5, 6}. Assume that b*_5 is greater than b*_6 and that both nodes have the same best insertion face, f_124. Then Step II (7) would select node 5 to be inserted into f_124. Consequently, f_124 would be removed from F and replaced by three new faces, giving the updated set of faces F = {f_123, f_134, f_234, f_125, f_145, f_245}. As a result, the best insertion face f_124 for node 6 no longer exists, so its insertion values would have to be re-computed over all new and old faces to identify its new best face. However, if for node 6 the first- and second-best insertion values {b^{1*}_6, b^{2*}_6} are recorded with their corresponding faces {f^{1*}_6, f^{2*}_6}, say {f_124, f_134}, then the re-computation is needed only in the three newly generated faces {f_125, f_145, f_245}: the new insertion values are compared with the best stored value (i.e. the second-best value, attained in face f_134) to identify the next node for insertion. Hence, the motivation is to store some previously computed information in order to avoid unnecessary re-computation for un-inserted nodes in the unchanged faces.

The data structure procedure steps are:

Step I: Initialization:
Let UN be the set of remaining un-inserted nodes and F the current set of faces.
Let b^{1*}_i and b^{2*}_i be the first- and second-best insertion values of each n_i ∈ UN, with corresponding faces f^{1*}_i and f^{2*}_i.
Let f^{1*}_k be the face removed after inserting node n_k, and f^1_k, f^2_k, f^3_k the three new faces created by the insertion.

Step II: Update insertion values due to the removed face:
For each n_i ∈ UN do:
  If f^{1*}_i = f^{1*}_k, then set b^{1*}_i = b^{2*}_i (promote the second best).
  Else if f^{2*}_i = f^{1*}_k, then find a new b^{2*}_i by re-evaluating the insertion of n_i over the faces in F.
Endfor.

Step III: Update the best insertion values with the new faces:
For each un-inserted node n_i ∈ UN:
  Compute the benefits b^{1k}_i, b^{2k}_i, b^{3k}_i of inserting node n_i into f^1_k, f^2_k, f^3_k.
  Do j = 1 to 3:
    If b^{1*}_i is less than b^{jk}_i, then set b^{1*}_i = b^{jk}_i and update the corresponding faces.
    Else if b^{2*}_i is less than b^{jk}_i, then set b^{2*}_i = b^{jk}_i and update the corresponding face.
  Enddo.
Endfor.

Step IV: Select a node for insertion:
Find k = arg max_{n_i ∈ UN} (b^{1*}_i), the index of the node with the highest insertion value, and go to the insertion step, Step III (8), of Section 3.1.
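The two-best caching scheme can be made concrete with a compact sketch. The code below is our own illustrative Python (not the authors' FORTRAN implementation); it stores faces as 3-node frozensets rather than in matrices, and all names are ours:

```python
import itertools

def insertion_benefit(w, node, face):
    # Benefit of inserting `node` into a triangular face: the sum of the
    # weights of the three new arcs the insertion creates.
    return sum(w[node][v] for v in face)

def ghd_construct(n, w):
    """Sketch of the GH heuristic with the two-best cache (GH&D).
    `w` is an n x n symmetric weight matrix with zero diagonal."""
    order = sorted(range(n), key=lambda i: sum(w[i]), reverse=True)
    UN = set(order[4:])                      # un-inserted nodes
    faces = {frozenset(c) for c in itertools.combinations(order[:4], 3)}
    best = {}                                # best[i] = [(b1*, f1*), (b2*, f2*)]

    def recompute(i):                        # full re-evaluation over all faces
        ranked = sorted(((insertion_benefit(w, i, f), f) for f in faces),
                        key=lambda t: t[0], reverse=True)
        best[i] = [ranked[0], ranked[1]]

    for i in UN:
        recompute(i)
    while UN:
        k = max(UN, key=lambda i: best[i][0][0])    # node with highest b1*
        fstar = best[k][0][1]
        UN.remove(k)
        new = {frozenset(set(p) | {k}) for p in itertools.combinations(fstar, 2)}
        faces.remove(fstar)
        faces |= new
        for i in UN:
            if best[i][1][1] == fstar:       # second-best face removed:
                recompute(i)                 # the one full re-evaluation case
                continue
            if best[i][0][1] == fstar:       # best face removed: promote b2*
                best[i][0] = best[i][1]
            for f in new:                    # otherwise compare only 3 new faces
                v = insertion_benefit(w, i, f)
                if v > best[i][0][0]:
                    best[i] = [(v, f), best[i][0]]
                elif v > best[i][1][0]:
                    best[i][1] = (v, f)
    return faces
```

Each iteration touches only the three new faces per remaining node, falling back to a full re-evaluation only in the rare case where a node's second-best face is the one removed, which mirrors Step II above.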
It should be noted that we could store the three best insertion values, so that the b^{2*}_i value of a node n_i could be found at Step II without re-evaluating its insertion over F when the special case of removing its second-best face occurs. However, such an approach would require more data storage and management than the current one in order to handle the few occurrences of this special case. Furthermore, the above data structure requires no more than two matrices, F and B. Matrix F has dimensions (2n − 4) × 3 and holds, for each triangular face, its three associated nodes, whereas B is a matrix of dimensions (n − 4) × 4 that stores, for each un-inserted node, its first- and second-best insertion values and two pointers to their corresponding faces. Matrix F may be used to store the faces regardless of the data structure, so matrix B is the only extra storage needed, a negligible amount. We denote the GH heuristic with the proposed data structure by GH&D.

Theorem. The computational complexity of the GH&D heuristic is O(n^2) time.

First, the algorithm of Fischetti and Martello (1988), denoted the FM algorithm, finds the smallest of n elements in O(n) time. In our implementation, instead of ordering the nodes of OL in decreasing order with a quick-sort of O(n log n) time complexity, the FM algorithm can be used in O(n) time to select the top four nodes for the initial deltahedron. The FM algorithm can also be used after each insertion to select the next best node for insertion. This makes our implementation faster, but it does not change the asymptotic complexity. The remaining (n − 4) nodes are inserted iteratively, one at a time, over (n − 4) iterations. Each iteration of the GH heuristic requires the insertion-evaluation of each of the remaining (n − 4) nodes in each of the (2n − 4) faces, so the computation per iteration is O(n^2) time. Since there are (n − 4) iterations, each of O(n^2) time, the total computation of the GH heuristic without the data structure is O(n^3) time. The total number of insertion-evaluations performed is Σ_{k=1}^{n−4} (n − 3 − k)(2 + 2k). For instance, for k = 1 the two factors of the summand are (n − 4) and 4, corresponding to the insertion-evaluation of the remaining (n − 4) nodes in the four initial faces at the first iteration.
However, each iteration of the GH&D heuristic requires the insertion-evaluation of each of the at most (n − 4) remaining nodes in only three faces, hence O(n) time rather than O(n^2). Therefore, the total computation of the GH&D heuristic is (n − 4) iterations of O(n) time each, plus the time for managing the data structure and selecting the best node, which can be done with the FM algorithm in O(n) time. Hence, the total time complexity of GH&D is O(n^2) + O(n) ≈ O(n^2).
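The evaluation counts behind the two complexities can be tabulated directly from the summation in the text; the following is our own illustrative check (function names are ours):

```python
# At iteration k, the plain GH heuristic evaluates (n - 3 - k) remaining
# nodes in (2 + 2k) faces, while GH&D evaluates them in only the 3 newly
# created faces; summing over k = 1 .. n-4 gives cubic vs quadratic totals.

def gh_evals(n):
    return sum((n - 3 - k) * (2 + 2 * k) for k in range(1, n - 3))

def ghd_evals(n):
    return sum((n - 3 - k) * 3 for k in range(1, n - 3))

for n in (20, 100):
    print(n, gh_evals(n), ghd_evals(n))
```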

4. GRASP implementation

The greedy random adaptive search procedure (GRASP) was introduced by Feo and Resende (1989) to solve combinatorial problems. It is a meta-heuristic superimposed on a good construction heuristic to enable the generation of different initial solutions instead of the usual single constructed solution. In general, GRASP is an iterative procedure. Each iteration consists of two phases: a construction phase and a local search improvement phase. The construction phase is itself an iterative greedy and adaptive process. The method is adaptive because the greedy function takes into account previous decisions made during the construction. In each stage of the construction, an RCL consisting of high-quality elements according to the greedy function is formed; the next element to enter the constructed solution is then randomly selected from the current RCL of size α. The iterative process continues until a feasible solution has been constructed. In the second phase, a local search improvement procedure is applied to the constructed initial solution, and the construction/improvement cycle is repeated for a maximum number of GRASP iterations, MaxIter. The best solution found during the MaxIter iterations is the final GRASP solution. Successful applications of GRASP can be found in the review of Feo and Resende (1995) and the recent bibliographies (Festa & Resende, 2001; Osman & Laporte, 1996).

A successful GRASP implementation thus depends on a good design of the following three components: an efficient construction heuristic, the management of the RCL, and a local search


improvement procedure. These components are described next for our GRASP implementation. The following steps give the generic GRASP procedure for the WMPG problem.

GRASP-WMPG (input: G, α, MaxIter; output: G_p):
Begin
  Set S_best = ∅ {the best solution}.
  For iter = 1 to MaxIter do:
    Construct a GRASP solution G_p using GH&D(α, G_p).
    Apply the LS ascent procedure on G_p.
    Update S_best if necessary.
  End do.
  Report G_p = S_best as the final GRASP solution.
End.

4.1. An efficient construction heuristic

Green and Al-Hakim (1985) suggested a modification to improve the Foulds and Robinson deltahedron construction heuristic, demonstrating its effectiveness through an example but without computational testing. Recently, Barake (1997) and Pesch et al. (1999) showed the effectiveness of the GH heuristic in extensive computational experiments. GH&D has the same effectiveness but better efficiency than the GH heuristic; it is therefore used as the first component of a good GRASP implementation.

4.2. The management of the RCL

At each iteration of the GH&D heuristic, the restricted candidate list (RCL) is a subset of the remaining un-inserted nodes n_i ∈ UN. For each n_i ∈ UN, its best insertion value b^{1*}_i is computed and its corresponding face f^{1*}_i is identified. The b^{1*}_i best insertion values of all n_i ∈ UN are sorted in decreasing order in an ordered candidate list (OCL) of size n̄. The RCL is the subset of the OCL determined by a parameter α = p·n̄, where p is a percentage value ranging between 1/n̄ and 1: the RCL contains the elements of the OCL from the first element up to and including the α-th element. Note that when p = 1/n̄, i.e. α = 1, the RCL contains only the first node of the OCL, which is purely the greedy choice of the GH heuristic, whereas when p = 1, i.e. α = n̄, the RCL contains the entire current set of un-inserted nodes; in this case, the selection of a node for insertion is simply a random process leading to a random construction of an initial solution. In GRASP, the node n_k ∈ UN to be inserted at iteration k of the GH&D heuristic is randomly selected from the RCL. Note that the size of the RCL is thus dynamically reduced, varying with the size of the current OCL. For a general discussion of the effect of α on solution quality and diversity, we refer to Feo and Resende (1995). Our strategy for updating and choosing the values of α is discussed further in Section 5.

4.3. The local search ascent procedure

A local search (LS) ascent procedure for the WMPG starts from an initial solution G_p, generated at random or by the GH&D heuristic. A neighbouring solution G'_p is generated by a well-defined generation mechanism, which defines the type of moves applied to G_p to obtain G'_p. If G'_p improves the objective function, i.e. W(G'_p) is greater than W(G_p), then G'_p is accepted to replace the current solution G_p in a first-improvement strategy; otherwise another neighbouring solution is generated. The process continues until no further improvement can be found. The success of an LS procedure depends on the power of the neighbourhood generation mechanism. In our implementation, four types of advanced move generation mechanisms developed by Pesch et al. (1999) are used. These include a partner move, which replaces one or two adjacent arcs by new ones, and degree-three, degree-four and degree-five relocation moves, which remove a node of degree three, four or five from its current place and relocate (re-insert) it in another place if deemed beneficial. Figs. 3–6 depict the four types of advanced moves, where removed arcs are marked with dashed lines and new arcs with solid lines. Further details on data structures for these types of moves are found in Pesch et al. (1999).

Fig. 3. Relocation of node 6 of degree 3 from face f_134 into face f_145.
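The generic GRASP loop and the RCL selection rule above can be sketched in a few lines of Python. This is our own illustrative skeleton, not the paper's FORTRAN code; all function names and parameters are hypothetical:

```python
import math
import random

def rcl_choose(rng, candidates, value, p):
    """Pick a random element from the restricted candidate list: the top
    alpha = ceil(p * |candidates|) candidates by greedy value."""
    order = sorted(candidates, key=value, reverse=True)
    alpha = max(1, math.ceil(p * len(order)))
    return rng.choice(order[:alpha])

def grasp(construct, local_search, objective, max_iter, p, seed=0):
    """Generic GRASP loop: randomized construction followed by a local
    search ascent, keeping the best solution over max_iter iterations.
    All function arguments are caller-supplied."""
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(max_iter):
        sol = local_search(construct(rng, p))   # construction + ascent phases
        val = objective(sol)
        if val > best_val:
            best, best_val = sol, val
    return best, best_val
```

With p = 1/|candidates| the RCL rule reduces to the pure greedy choice, and with p = 1 it reduces to uniform random selection, matching the two extremes discussed in Section 4.2.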

Fig. 4. A partner move: arcs a_14 and a_16 are replaced by a_56 and a_53.


Fig. 5. Relocation of node 3 of degree 4.

The local search (LS) procedure is as follows:

Step 1: Start from a given initial solution G_p.
Step 2: Apply an LS ascent procedure with a first-improvement selection strategy to improve G_p to local optimality using the partner-move mechanism.
Step 3: Apply an LS ascent procedure with a first-improvement selection strategy to improve G_p to local optimality using the degree-three move mechanism.
Step 4: Apply an LS ascent procedure with a first-improvement selection strategy to improve G_p to local optimality using the degree-four move mechanism.

Fig. 6. Relocation of node 1 of degree 5.


Step 5: Apply an LS ascent procedure with a first-improvement selection strategy to improve G_p to local optimality using the degree-five move mechanism.
Step 6: Repeat Steps 2–5 until no further improvement can be obtained.

5. Computational experience

A set of 100 test instances was generated to evaluate the performance of the proposed algorithms. For each size n = 20, 40, 60, 80 and 100, complete graphs were generated with arc weights drawn according to Foulds et al. (1985): the arc weights were taken from a normal distribution with mean 100 and standard deviation σ = 5, 10, 20 and 30. For each pair of values of n and σ, five instances (each using a different random generator seed) were generated. It should be mentioned that the large-sized instances are more than twice the size of the largest instances in the literature, which have n = 40. The algorithms are evaluated in terms of solution quality and computational time. For each instance, the well-known classical upper bound Z_CUB is computed by adding up the weights of its heaviest (3n − 6) arcs. The relative percentage deviation (RPD) of a heuristic solution value Z_H from Z_CUB is computed as follows:

RPD = 100 × (Z_CUB − Z_H) / Z_CUB.

For each pair of values of n and σ, the average relative percentage deviation (ARPD) over the five instances is also reported. All the algorithms were coded in FORTRAN 77 using the FTN77/386 compiler, except the heuristic algorithm of Leung (1992), which was provided by its author and is written in C. Computation times are reported in CPU seconds on an IBM-compatible PC (Toshiba Satellite 100CS, Pentium 75 MHz, 8 MB RAM).

5.1. Effect of the data structure

The average CPU times in seconds for the GH and GH&D heuristics are presented in Fig. 7 for each problem size n, each averaged over 20 instances. The data structure clearly reduces the computation time of the GH heuristic by a large factor with no loss of solution quality. For example, the average CPU times for the class of size n = 100 are 0.019 and 0.209 s for the GH&D and GH heuristics, respectively, i.e. a reduction by a factor of about 10.
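The evaluation measures used in this section are straightforward to compute; the following is a minimal sketch with our own helper names:

```python
# Z_CUB: the classical upper bound, the sum of the (3n - 6) heaviest arc
# weights; RPD: the relative percentage deviation of a heuristic value.

def classical_upper_bound(n, arc_weights):
    return sum(sorted(arc_weights, reverse=True)[:3 * n - 6])

def rpd(z_cub, z_h):
    return 100.0 * (z_cub - z_h) / z_cub

weights = [9, 8, 7, 6, 5, 4]          # complete graph on n = 4 nodes
z_cub = classical_upper_bound(4, weights)
print(z_cub, rpd(z_cub, 36))
```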

Fig. 7. Average CPU in seconds for the GH and GH&D Heuristics.


5.2. Effect of GRASP parameters on the solution quality and computing time The performance of most meta-heuristics depends on parameters setting. GRASP requires the setting values of only two parameters: the percentage value of the parameter ( p) to determine the number ða ¼ pnÞ of nodes in RCL and the number of GRASP iterations, MaxIter, to be performed. First, the effect of the initial value of a ¼ pn in terms of n (the number of remaining un-inserted nodes at a given construction step) is discussed. GRASP was run with four different values of p, i.e. p ¼ 0.1, 0.2, 0.3, and 0.4. For each instance in a given pair ðn; sÞ and a given p, GRASP was executed for a total of MaxIter ¼ 1000 iterations. The average of their corresponding RPD values over the five instances is reported in the appropriate entry in Table 1. The last two columns report the best ARPD values and their corresponding p (column) values. The best values are highlighted in bold in columns 2 – 4. The last line ‘average’ of the table displays the overall average of all entries in each column. From the table, it can be seen that the overall average varies in values from best at 4.364 ðp ¼ 0:2Þ to worst at 4.386 ðp ¼ 0:1Þ: These differences are not significant and show that the overall average criterion does not demonstrate the effect of p values on the quality of solutions. However, the distribution of the best (bold) values in the table shows a clear relationship between p and ðn; sÞ values. Fig. 8 explains Table 1 ARPD of GRASP solutions for different values of p ðn; sÞ

p

Best

0.1

0.2

0.3

0.4

ARPD

p

(20,5) (20,10) (20,20) (20,30)

1.077 1.963 3.239 4.087

0.875 1.837 3.033 3.588

0.815 1.723 2.973 3.296

0.797 1.653 2.945 3.282

0.797 1.653 2.945 3.282

0.4 0.4 0.4 0.4

(40,5) (40,10) (40,20) (40,30)

1.557 2.652 4.691 6.375

1.509 2.626 4.571 6.303

1.501 2.635 4.709 6.231

1.504 2.638 4.649 6.247

1.501 2.626 4.571 6.231

0.3 0.2 0.2 0.3

(60,5) (60,10) (60,20) (60,30)

1.771 3.358 5.602 7.577

1.780 3.345 5.717 7.811

1.765 3.439 5.748 7.800

1.805 3.403 5.818 7.817

1.765 3.345 5.717 7.577

0.3 0.2 0.2 0.1

(80,5) (80,10) (80,20) (80,30)

2.003 3.754 6.431 8.707

2.008 3.815 6.505 8.865

2.033 3.867 6.617 8.926

2.042 3.881 6.616 8.958

2.003 3.754 6.431 8.707

0.1 0.1 0.1 0.1

(100,5) (100,10) (100,20) (100,30)

2.258 4.110 7.211 9.305

2.261 4.188 7.262 9.384

2.307 4.203 7.289 9.526

2.320 4.286 7.31 9.542

2.258 4.110 7.211 9.305

0.1 0.1 0.1 0.1

Average

4.386

4.364

4.370

4.375

4.289

0.210

I.H. Osman et al. / Computers & Industrial Engineering 45 (2003) 635–651

647

Fig. 8. Relationship between p values and problem sizes.

such a relationship for each value of s. The plot for s ¼ 10 is removed as it coincides with that of s ¼ 20: It can be seen that there is a negative relationship between p and n. Moreover, s seems to be a slight effect on p when n ¼ 40 and 60. To further investigate this relationship, a stepwise regression analysis on the values of p, n and s was performed. The following expression p ¼ 0:435 2 0:0037n was derived with a multiple coefficient correlation ðMultiple-R ¼ 0:899Þ and significant estimated coefficients with p-values ¼ 0: However, adding a term ð20:0012sÞ to the regression model was not significant with p-value ¼ 0:348; and only raised the Multiple-R value to 0.905. Hence it can be dropped from the regression expression without any significant loss. A fixed constant value of p, consequently a ða ¼ pnÞ; has an effect on the overall average CPU time, which increases as a function of p in Table 2. If a is small, the size of the candidate list becomes small. This would lead to the generation of low diversity of good solutions and to the relatively small computation time during the local search phase. On the other hand, high values of a would lead to the generation of high diversity of poor quality solutions. Consequently, this would also lead to a large computation time during the local search phase. From the above analysis, it can be seen how important is the derivation of an expression linking p to the characteristic of the problem as opposed to using a fixed constant value in the literature. Table 2 Average CPU seconds of GRASP for different values of p Size ðnÞ

Size (n)    p = 0.1    0.2        0.3        0.4
20           20.02      19.62      19.79      20.14
40          100.17     101.99     104.16     106.62
60          249.84     277.65     265.42     272.84
80          492.17     509.31     525.19     540.19
100         830.05     860.55     888.31     913.74
Average     338.45     353.824    360.574    370.706
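The regression expression above can be turned directly into a rule for sizing the restricted candidate list. A minimal sketch follows; the function names are illustrative, not taken from the paper's code:

```python
def rcl_fraction(n):
    # Regression-derived fraction of candidates to keep: p = 0.435 - 0.0037 n
    # (Multiple-R = 0.899 in the paper's stepwise regression)
    return 0.435 - 0.0037 * n

def rcl_size(n):
    # Initial candidate-list size a = p * n, kept at least 1 so the
    # restricted candidate list is never empty
    return max(1, round(rcl_fraction(n) * n))
```

For example, n = 20 gives p of about 0.361 and a list of about seven candidates, while n = 100 gives p of about 0.065; the paper's dynamic update strategy then adjusts this size as construction proceeds.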



Table 3
Comparison of GRASP with other heuristics in terms of ARPD
(GRASP-x denotes GRASP with MaxIter = x iterations.)

Size (n,s)   GRASP-100  GRASP-500  GRASP-1000  OHA LPI-PR  GH&D+LS  GH&D    FR      L
(20,5)       0.915      0.853      0.853       1.046       1.248    1.265   1.472   1.161
(20,10)      1.909      1.843      1.826       1.945       2.109    2.701   2.919   2.563
(20,20)      3.085      3.076      3.076       3.380       3.608    4.891   5.051   4.077
(20,30)      3.756      3.669      3.541       4.443       5.000    6.550   6.830   5.559
(40,5)       1.584      1.548      1.548       1.643       1.865    2.258   2.257   1.697
(40,10)      2.747      2.627      2.594       2.775       3.199    3.559   3.878   3.111
(40,20)      4.931      4.739      4.670       5.580       5.778    6.533   7.321   5.746
(40,30)      6.704      6.513      6.202       7.408       7.820    8.877   9.153   7.218
(60,5)       1.838      1.796      1.754       2.009       1.994    2.267   2.555   1.946
(60,10)      3.527      3.431      3.340       3.417       3.842    4.144   4.553   3.812
(60,20)      5.887      5.696      5.685       6.191       6.433    7.213   8.104   6.253
(60,30)      8.020      7.924      7.791       8.614       8.988    9.893   10.217  8.767
(80,5)       2.058      2.033      1.986       2.203       2.219    2.423   2.698   2.201
(80,10)      3.958      3.847      3.813       4.142       4.244    4.685   4.939   4.124
(80,20)      6.808      6.571      6.474       7.283       7.159    7.849   8.786   6.754
(80,30)      9.031      8.879      8.850       9.850       9.643    10.587  11.728  9.544
(100,5)      2.324      2.300      2.278       2.486       2.504    2.727   2.963   2.322
(100,10)     4.264      4.208      4.169       4.543       4.529    4.969   5.388   4.298
(100,20)     7.440      7.255      7.255       7.822       7.917    8.565   9.380   7.571
(100,30)     9.634      9.454      9.338       9.822       9.784    11.181  12.137  9.784
Average      4.522      4.413      4.352       4.830       4.994    5.566   6.116   4.925
Rank         3          2          1           4           6        7       8       5
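For reference, the ARPD figures in Table 3 follow the usual definition for a maximisation problem: the relative percentage deviation of a heuristic's objective value from the best-known value, averaged over the instances. A minimal sketch of that computation follows; the exact reference values used by the paper are those defined in its experimental setup:

```python
def rpd(best_value, heuristic_value):
    # Relative percentage deviation from the best-known value
    # (maximisation: heuristic_value <= best_value)
    return 100.0 * (best_value - heuristic_value) / best_value

def arpd(pairs):
    # Average RPD over (best_value, heuristic_value) pairs, one per instance
    return sum(rpd(b, h) for b, h in pairs) / len(pairs)
```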

Last, the effect of MaxIter on the quality of solution is presented in columns 2-4 of Table 3. Three different values of MaxIter (100, 500 and 1000 GRASP iterations) are used with p = 0.2. It can be seen that the quality of solution improves as the value of MaxIter increases. For a few instances, e.g. (20,5), (20,20), (40,5) and (100,20), no further improvement occurred when MaxIter was increased from 500 to 1000; this may be because the optimal solutions had already been found. The corresponding CPU times for the different numbers of GRASP iterations are provided in Table 4, averaged over the 20 instances for each n.

5.3. Comparison of GRASP with other heuristics

The GRASP meta-heuristic is compared to five heuristics from the literature, in terms of solution quality in Table 3 and in terms of computational requirements in Table 4. Both tables have the same column legends. These are, from left to right: the instance size; GRASP with 100 iterations; GRASP with 500 iterations; GRASP with 1000 iterations; OHA LPI-PR, the LP-based meta-heuristic of Osman et al.; GH&D+LS, the GH&D heuristic followed by the LS ascent procedure; GH&D, the Green and Al-Hakim heuristic with the data structure; FR, the Foulds and Robinson heuristic; and L, the Leung construction heuristic.



Table 4
Comparison of GRASP with other heuristics in terms of CPU seconds

Size (n)   GRASP-100  GRASP-500  GRASP-1000  OHA LPI-PR(a)  GH&D+LS  GH&D   FR     L
20           1.95        9.69      19.40        1.75        0.002    0.002  0        0.07
40           9.94       50.08     100.21       14.22        0.096    0.010  0        2.94
60          25.34      126.34     252.96       33.60        0.239    0.032  0.005   27.25
80          50.37      251.66     501.15      100.17        0.445    0.098  0.007  173.05
100         84.85      424.85     846.51      187.72        0.752    0.208  0.012  526.07
Average     34.49      172.52     344.04       67.595       0.306    0.070  0.005  145.88

(a) CPU time on a Toshiba Laptop Pentium 75 MHz with 8 MB RAM; all other times on a Toshiba Laptop Pentium II, 400 MHz with 128 MB RAM.

The line 'Average' is the overall average over all instances. The last line shows the ranking of the compared algorithms in terms of solution quality, from best (1) to worst (8). The computational results indicate a number of points. First, the Leung heuristic (L) is the best of the construction heuristics, outperforming FR and GH&D, but it requires significantly more CPU time. Second, the strength of GH&D+LS, the combination of the GH&D construction heuristic with the local search procedure, is demonstrated by its improved results over those of GH&D, their closeness to those of L, and its use of only a small fraction of the latter's CPU time. Third, comparing GRASP-1000 to the best LP-based meta-heuristic (LPI-PR), it is clear that GRASP-1000 is better in terms of solution quality, while both meta-heuristics use almost the same amount of CPU time once the difference in machines is taken into account. Moreover, comparing GRASP-100 to the best constructive heuristic, that of Leung, GRASP-100 produces better results than L while requiring roughly four times fewer CPU seconds. Last, the Kruskal-Wallis rank sum test for differences between the medians of the compared algorithms was performed (Levine, Stephan, Krehbiel, & Berenson, 2001). The null hypothesis was rejected with p-value = 0.00, i.e. the medians are significantly different. Consequently, the results obtained by the compared algorithms differ significantly, which establishes GRASP as the best-performing algorithm for the problem.
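The H statistic underlying the Kruskal-Wallis test can be computed directly from pooled ranks. The sketch below is a minimal stdlib-only implementation (no tie correction, so it assumes all observations are distinct), applied for illustration to the s = 30 ARPD values of three algorithms from Table 3; the paper's actual test pools all eight algorithms over 20 instances, which is why its p-value is far smaller than this small subset would suggest.

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic over a list of sample groups.

    Assumes no tied observations (no tie correction is applied).
    """
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # rank 1 = smallest value
    n = len(pooled)
    # H = 12 / (N (N + 1)) * sum_i (R_i^2 / n_i) - 3 (N + 1)
    h = 12.0 / (n * (n + 1)) * sum(
        sum(rank[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)
    return h

# ARPD values for the s = 30 instances (n = 20, 40, 60, 80, 100), from Table 3
grasp_1000 = [3.541, 6.202, 7.791, 8.850, 9.338]
ghd = [6.550, 8.877, 9.893, 10.587, 11.181]
fr = [6.830, 9.153, 10.217, 11.728, 12.137]

h = kruskal_wallis_h([grasp_1000, ghd, fr])
# h would be compared against the chi-square critical value with
# k - 1 = 2 degrees of freedom at the 5% level (5.991).
```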

6. Conclusion

In this paper, we have described an efficient and effective GRASP implementation for the WMPG problem. An expression to determine the initial size of the candidate list is derived, and a strategy to update the list dynamically is implemented. A new data structure for the GH constructive heuristic is also developed, reducing its original complexity from O(n^3) to O(n^2). Extensive computational results indicate that the GRASP implementation is consistently better than the best constructive heuristic and the best LP-based meta-heuristic in the literature. Overall, taking into account both the quality of solution and the computational requirements, GRASP seems to be the state-of-the-art algorithm for the problem. If a good solution is required quickly, then GH&D+LS is an appealing option. Moreover, the general lesson from our experiments is that combining an efficient constructive heuristic with a good neighbourhood mechanism within a GRASP meta-heuristic can generate high-quality solutions in small CPU times. Finally, it should be noted that the GRASP results could be improved further, either by adding memory components to the search, as in the GRAMPS approach of Ahmadi and Osman (2003), or by applying the path relinking approach to the GRASP final solutions, as in Laguna and Marti (1999).
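That combination, a greedy randomised construction feeding a local search inside a multi-start loop, can be sketched in a few lines. The toy below maximises the total weight of k selected items rather than solving the WMPG, and all names are illustrative; it only shows the GRASP control flow (RCL-restricted construction, local search, best-solution tracking over MaxIter iterations), not the paper's implementation:

```python
import random

def greedy_randomized_construction(weights, k, rcl_size, rng):
    """Build a k-item solution; at each step pick randomly among the
    rcl_size best remaining candidates (the restricted candidate list)."""
    remaining = sorted(range(len(weights)), key=lambda i: -weights[i])
    solution = []
    while len(solution) < k:
        rcl = remaining[:rcl_size]
        choice = rng.choice(rcl)
        remaining.remove(choice)
        solution.append(choice)
    return solution

def local_search(solution, weights):
    """First-improvement swap: replace a chosen item with a heavier one."""
    improved = True
    while improved:
        improved = False
        outside = [i for i in range(len(weights)) if i not in solution]
        for idx, s in enumerate(solution):
            for o in outside:
                if weights[o] > weights[s]:
                    solution[idx] = o
                    improved = True
                    break
            if improved:
                break
    return solution

def grasp(weights, k, max_iter=50, p=0.2, seed=1):
    """Multi-start GRASP loop keeping the best solution found."""
    rng = random.Random(seed)
    rcl_size = max(1, round(p * len(weights)))
    best, best_val = None, float("-inf")
    for _ in range(max_iter):
        sol = greedy_randomized_construction(weights, k, rcl_size, rng)
        sol = local_search(sol, weights)
        val = sum(weights[i] for i in sol)
        if val > best_val:
            best, best_val = list(sol), val
    return best, best_val
```

In this toy the local search always reaches the k heaviest items, so the value of MaxIter only matters for harder objectives such as the WMPG, where different randomised constructions land in different local optima.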

Acknowledgements

This research was supported in part by the American University Research Board; this support is gratefully acknowledged. The authors would like to thank Professor Jenny Leung for sending us her C code, which was used in our comparative analysis, and the referees for their comments, which improved the clarity of the paper.

References

Ahmadi, S., & Osman, I. H. (2003). Greedy random adaptive memory programming search for the capacitated clustering problem. To appear in the special issue on logistics of the European Journal of Operational Research, edited by E. Pesch and S. Martello.
Al-Hakim, L. A. (1991). Two graph theoretical procedures for an improved solution to the facilities layout problem. International Journal of Production Research, 29, 1701–1718.
Al-Hakim, L. A. (1992). A modified procedure for converting a dual graph to a block layout. International Journal of Production Research, 30, 2467–2476.
Barake, M. A. (1997). Approximate algorithms for the weighted maximal planar graph problem. MSc Thesis, Institute of Mathematics and Statistics, University of Kent, Canterbury, UK.
Boswell, S. (1992). TESSA—A new greedy heuristic for facilities layout planning. International Journal of Production Research, 30, 1957–1968.
Carson, D. I. (1993). On O(P2) algorithms for planarization. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 12, 1300–1302.
Cimikowski, R., & Mooney, E. (1997). Proximity-based adjacency determination for facility layout. Computers and Industrial Engineering, 32, 341–349.
Domschke, W., & Krispin, G. (1997). Location and layout planning: A survey. OR-Spektrum, 19, 181–194.
Eades, P., Foulds, L. R., & Giffin, J. W. (1982). An efficient heuristic for identifying a maximum weight planar subgraph. Lecture Notes in Mathematics (Vol. 952, pp. 239–251). Berlin: Springer-Verlag.
Feo, T. A., & Resende, M. G. C. (1989). A probabilistic heuristic for a computationally difficult set covering problem. Operations Research Letters, 8, 67–71.
Feo, T. A., & Resende, M. G. C. (1995). Greedy randomized adaptive search procedures. Journal of Global Optimization, 6, 109–133.
Festa, P., & Resende, M. G. C. (2001). GRASP: An annotated bibliography. In C. C. Ribeiro, & P. Hansen (Eds.), Essays and surveys in metaheuristics (pp. 325–367). Dordrecht: Kluwer Academic Publishers.
Fischetti, M., & Martello, S. (1988). A hybrid algorithm for finding the kth smallest of n elements in O(n) time. In G. Gallo, F. Maffioli, S. Pallottino, B. Simeone, & P. Toth (Eds.), FORTRAN codes for network optimization. Annals of Operations Research, 13.
Foulds, L. R., Gibbons, P., & Giffin, J. (1985). Facilities layout adjacency determination: An experimental comparison of three graph theoretic heuristics. Operations Research, 33, 1091–1106.
Foulds, L. R., & Robinson, D. F. (1976). A strategy for solving the plant layout problem. Operational Research Quarterly, 27, 845–855.
Giffin, J. W. (1984). Graph theory techniques for facilities layout. PhD Thesis, University of Canterbury, New Zealand.
Giffin, J. W., Foulds, L. R., & Cameron, D. C. (1986). Drawing a block plan from a REL-chart with graph theory and a microcomputer. Computers and Industrial Engineering, 10, 109–115.
Glover, F., & Kochenberger, G. (2003). Handbook of metaheuristics. Boston: Kluwer Academic Publishers.
Green, R., & Al-Hakim, L. A. (1985). A heuristic for facilities layout planning. Omega, 13, 469–474.
Hasan, M., & Hogg, G. (1987). A review of graph theory application to the facility layout problem. Omega, 15, 291–300.
Hasan, M., & Osman, I. H. (1995). Local search algorithms for the maximal planar layout problem. International Transactions in Operational Research, 2, 89–106.
Hassan, M. M. D., & Hogg, G. L. (1991). On constructing a block layout by graph theory. International Journal of Production Research, 32, 231–234.
Junger, M., Leipert, S., & Mutzel, P. (1998). A note on computing a maximal planar subgraph using PQ-trees. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 17, 609–612.
Junger, M., & Mutzel, P. (1993). Solving the maximum weighted planar subgraph problem by branch-and-cut. In G. Rinaldi, & L. Wolsey (Eds.), Proceedings of the Third Conference on Integer Programming and Combinatorial Optimization (IPCO) (pp. 479–492).
Junger, M., & Mutzel, P. (1996). Maximum weighted planar subgraphs and nice embeddings: Practical layout tools. Algorithmica, 16, 33–59.
Kim, J.-Y., & Kim, Y.-D. (1995). Graph theoretic heuristics for unequal-sized facility layout problems. Omega, 23, 391–401.
Laguna, M., & Marti, R. (1999). GRASP and path relinking for 2-layer straight line crossing minimization. INFORMS Journal on Computing, 11, 44–52.
Leung, J. (1992). A new graph-theoretic heuristic for facility layout. Management Science, 38, 595–605.
Levine, D. M., Stephan, C., Krehbiel, T. C., & Berenson, M. L. (2001). Statistics for managers using Microsoft Excel. New Jersey: Prentice-Hall.
Nishizeki, T., & Chiba, N. (1988). Planar graphs: Theory and algorithms. Annals of Discrete Mathematics (Vol. 32). Amsterdam: North-Holland.
Osman, I. H. (1995). An introduction to metaheuristics. In M. Lawrence, & C. Wilsdon (Eds.), Operational research tutorial papers (pp. 92–122). Hampshire: Stockton Press, publication of the Operational Research Society, UK.
Osman, I. H. (2003). Focused issue on applied meta-heuristics. Computers and Industrial Engineering, 44, 205–207.
Osman, I. H., & Abdullah, A. (2003). Exact algorithms for the maximal planar graph problems. Working paper, Olayan School of Business, American University of Beirut, Lebanon.
Osman, I. H., & Hasan, M. (1997). A layered matching-planarity testing heuristic for the machine layout planning. Egyptian Computer Science Journal, 19, 1–17.
Osman, I. H., Hasan, M., & Abdullah, A. (2002). Linear programming based meta-heuristics for the weighted maximal planar graph. Journal of the Operational Research Society, 53, 1142–1149.
Osman, I. H., & Kelly, J. P. (1996). Meta-heuristics: An overview. In I. H. Osman, & J. P. Kelly (Eds.), Meta-heuristics: Theory and applications (pp. 1–21). Boston: Kluwer Academic Publishers.
Osman, I. H., & Laporte, G. (1996). Meta-heuristics: A bibliography. Annals of Operations Research, 63, 513–628.
Pesch, E., Glover, F., Bartsch, T., Salewski, F., & Osman, I. H. (1999). Efficient facility layout planning in a maximally planar graph model. International Journal of Production Research, 37, 263–283.
Resende, M. G. C., & Ribeiro, C. C. (1997). A GRASP for graph planarization. Networks, 29, 173–189.
Rinsma, F., Giffin, J. W., & Robinson, D. F. (1990). Orthogonal floorplans from maximal planar graphs. Environment and Planning B: Planning and Design, 17, 57–71.
Shimozono, S. (1997). Finding optimal subgraphs by local search. Theoretical Computer Science, 17, 265–271.
Voss, S., Martello, S., Osman, I. H., & Roucairol, C. (1999). Advances and trends in local search paradigms for optimization. Boston: Kluwer Academic Publishers.
Wascher, G., & Merker, J. (1997). A comparative evaluation of heuristics for the adjacency problem in facility layout planning. International Journal of Production Research, 35, 447–466.
Watson, K. H., & Giffin, J. W. (1996). On the worst case performance of TESSA. International Journal of Production Research, 34, 2963–2966.
Watson, K. H., & Giffin, J. W. (1997). The vertex splitting algorithm for facilities layout. International Journal of Production Research, 35, 2477–2492.
