A Parallel GRASP for the Steiner Problem in Graphs

Simone L. Martins, Celso C. Ribeiro, and Mauricio C. Souza

Department of Computer Science, Catholic University of Rio de Janeiro, Rio de Janeiro, RJ 22453-900, Brazil. E-mail: {simone, celso, [email protected]}

Abstract. A greedy randomized adaptive search procedure (GRASP) is a metaheuristic for combinatorial optimization. Given an undirected graph with weights associated with its edges, the Steiner tree problem consists in finding a minimum weight subgraph spanning a given subset of (terminal) nodes of the original graph. In this paper, we describe a parallel GRASP for the Steiner problem in graphs. We review the basic concepts of GRASP: construction and local search algorithms. The implementation of a sequential GRASP for the Steiner problem in graphs is described in detail. Feasible solutions are characterized by their non-terminal nodes. A randomized version of Kruskal's algorithm for the minimum spanning tree problem is used in the construction phase. Local search is based on insertions and eliminations of nodes to/from the current solution. Parallelization is done through the distribution of the GRASP iterations among the processors on a demand-driven basis, in order to improve load balancing. The parallel procedure was implemented using the Message Passing Interface library on an IBM SP2 machine. Computational experiments on benchmark problems are reported.

1 Introduction

Let G = (V, E) be a connected undirected graph, where V is the set of nodes and E denotes the set of edges. Given a non-negative weight function w : E → R+ associated with its edges and a subset X ⊆ V of terminal nodes, the Steiner problem SPG(V, E, w, X) consists in finding a minimum weighted connected subgraph of G spanning all terminal nodes in X. The solution of SPG(V, E, w, X) is a Steiner minimal tree (SMT). The non-terminal nodes that end up in the SMT are called Steiner nodes. The Steiner problem in graphs is a classic NP-complete combinatorial optimization problem (Karp [13]); see e.g. Hwang, Richards and Winter [10], Maculan [16], and Winter [27] for surveys on formulations, special cases, exact algorithms, and heuristics. Applications can be found in many areas, such as telecommunication network design, VLSI design, and computational biology, among others. Due to its NP-hardness, several heuristics have been developed

for its approximate solution, see e.g. Duin and Voss [6], Hwang, Richards and Winter [10], and Voss [25]. Among the most efficient approaches, we find implementations of metaheuristics such as genetic algorithms (Esbensen [7], and Kapsalis, Rayward-Smith and Smith [12]), tabu search (Gendreau, Larochelle and Sansò [9], Ribeiro and Souza [24], and Xu, Chiu and Glover [28]), and simulated annealing (Dowsland [4]). A greedy randomized adaptive search procedure (GRASP) is a metaheuristic for combinatorial optimization. A GRASP [8] is an iterative process, in which each iteration consists of two phases: construction and local search. The construction phase builds a feasible solution, whose neighborhood is then explored by local search. The best solution found over all iterations is returned as the result. In this paper we present a parallel GRASP for the Steiner problem in graphs. In the next section, we review the basic components of this approach and present a sequential GRASP for the Steiner problem in graphs. Feasible solutions are characterized by their non-terminal nodes. A randomized version of Kruskal's algorithm for the minimum spanning tree problem is used in the construction phase. Local search is based on insertions and eliminations of Steiner nodes to/from the current solution. In Sec. 3 we describe the parallelization of this GRASP, performed through the distribution of its iterations among the processors on a demand-driven basis, in order to improve load balancing. The parallel procedure was implemented using the Message Passing Interface library on an IBM SP2 machine, and computational experiments on benchmark problems are reported. Concluding remarks are made in Sec. 4.

2 Sequential GRASP

Approximate solutions for the Steiner minimal tree problem can be obtained by either spanning-tree or path-based approaches. In this section, we apply the concepts of GRASP to the approximate solution of the Steiner problem in graphs, using a spanning-tree based approach. We summarize below the basic concepts of GRASP, as presented in Resende and Ribeiro [23] and Prais and Ribeiro [22]. GRASP can be seen as a metaheuristic which captures good features of pure greedy algorithms (e.g. fast local search convergence and good quality solutions) and of random construction procedures (e.g. diversification). Each iteration consists of the construction phase, the local search phase and, if necessary, the incumbent solution update. In the construction phase, a feasible solution is built one element at a time. At each construction iteration, the next element to be added is determined by ordering all elements in a candidate list with respect to a greedy function that estimates the benefit of selecting each element. The probabilistic component of a GRASP comes from randomly choosing one of the best candidates in the list, but usually not the top one.

The solutions generated by a GRASP construction are not guaranteed to be locally optimal. Hence, it is almost always beneficial to apply local search to attempt to improve each constructed solution. A local search algorithm works in an iterative fashion, successively replacing the current solution by a better one from its neighborhood, and terminating when no better solution exists in the neighborhood. The success of a local search algorithm depends on the suitable choice of a neighborhood structure, on efficient neighborhood search techniques, and on the starting solution. The GRASP construction phase plays an important role with respect to this last point, since it produces good starting solutions for local search. The customization of these generic principles into an approximate algorithm for the Steiner problem in graphs is described in the following.
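The two-phase scheme described above can be sketched as a small generic loop. This is an illustrative outline only, not the paper's implementation; the `construct`, `local_search` and `cost` callbacks are hypothetical placeholders for problem-specific procedures.

```python
import random

def grasp(construct, local_search, cost, max_iterations, seed=0):
    """Generic GRASP loop (illustrative sketch).

    construct(rng)  -> a feasible solution built greedily with randomization
    local_search(s) -> a locally optimal solution reachable from s
    cost(s)         -> objective value to be minimized
    """
    rng = random.Random(seed)
    best, best_value = None, float("inf")
    for _ in range(max_iterations):
        # Each iteration: construction phase, then local search phase.
        s = local_search(construct(rng))
        # Incumbent update, if necessary.
        if cost(s) < best_value:
            best, best_value = s, cost(s)
    return best, best_value
```

The callbacks are supplied by the specific application; Sections 2.2 and 2.3 instantiate them for the Steiner problem in graphs.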

2.1 Solution Characterization

Let G = (V, E) be an undirected graph with node set V and edge set E. A graph H = (V(H), E(H)) is said to be a subgraph of G if V(H) ⊆ V and E(H) is any subset of the edges in E having both extremities in V(H). This graph H is said to span a subset U ⊆ V of the nodes in G if U ⊆ V(H). For any subset of nodes W ⊆ V, the edge subset E(W) = {(i,j) ∈ E | i ∈ W, j ∈ W} defines the subgraph G(W) = (W, E(W)) induced in G by the nodes in W. We denote by T_(V,E,w)(X) the Steiner minimal tree solving the Steiner problem SPG(V, E, w, X) formulated in Sec. 1. Given the graph G = (V, E) with non-negative weights w associated with its edges, the minimum spanning tree problem MSTP(V, E, w) consists in finding a minimum weighted subtree of G spanning all nodes in V. This problem can be seen as a particular case of the Steiner problem SPG(V, E, w, X) in which X = V. Accordingly, we denote by T_(V,E,w)(V) the minimum spanning tree solving MSTP(V, E, w). We can associate a feasible solution of the Steiner problem SPG(V, E, w, X) with each subset S ⊆ V \ X of Steiner nodes, given by a minimum spanning tree solving problem MSTP(S ∪ X, E(S ∪ X), w). Let S* be the set of Steiner nodes in the optimal solution of SPG(V, E, w, X). The optimal solution T_(V,E,w)(X) is a minimum spanning tree of the graph induced in G by the node set S* ∪ X, i.e., the solution to the minimum spanning tree problem MSTP(S* ∪ X, E(S* ∪ X), w). In the following, solutions of the Steiner problem SPG(V, E, w, X) will be characterized by the associated set of Steiner nodes and one of the corresponding minimum spanning trees. Accordingly, the search for the Steiner minimal tree T_(V,E,w)(X) will be reduced to the search for the optimal set S* of Steiner nodes.
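Since a solution is fully determined by its set S of Steiner nodes, its cost can be evaluated as the weight of a minimum spanning tree of the induced subgraph G(S ∪ X). The following sketch (function names are ours, for illustration) does this with a plain Kruskal run over the induced edges:

```python
def mst_weight(nodes, edges):
    """Weight of a minimum spanning tree of the subgraph induced by `nodes`.

    edges: iterable of (w, u, v) triples.  Returns None if the induced
    subgraph is disconnected.  Plain Kruskal with union-find (path halving).
    """
    parent = {u: u for u in nodes}
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    total, used = 0, 0
    # Only edges with both extremities inside the induced node set qualify.
    for w, u, v in sorted(e for e in edges if e[1] in parent and e[2] in parent):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
            used += 1
    return total if used == len(nodes) - 1 else None

def steiner_cost(S, X, edges):
    """Cost of the feasible solution characterized by Steiner node set S."""
    return mst_weight(set(S) | set(X), edges)
```

With this evaluation in hand, the search space reduces to subsets of non-terminal nodes, exactly as stated above.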

2.2 Construction Phase

The construction phase of our GRASP is based on the distance network heuristic, suggested by Choukmane [3], Iwainsky, Canuto, Taraszow and Villa [11], Kou, Markowsky and Berman [14], and Plesník [21], with time complexity O(|X||V|²). Later, Mehlhorn [18] proposed a modification of the original version, leading to a procedure using simple data structures and presenting an improved time complexity. For every terminal node i ∈ X, let N(i) be the subset of non-terminal nodes of V that are closer to i than to any other terminal node. The first step of this procedure consists in the computation of a graph G' = (X, E'), where E' = {(i,j), i,j ∈ X | ∃(k,l) ∈ E, k ∈ N(i), l ∈ N(j)}. Then, we associate a weight w'_ij = min{d(i,k) + w_kl + d(l,j) | (k,l) ∈ E, k ∈ N(i), l ∈ N(j)} with each edge (i,j) ∈ E', where d(a,b) denotes the length of the shortest path from a to b in the original graph G = (V, E) in terms of the weights w. Next, Kruskal's algorithm [15] is used to solve the minimum spanning tree problem MSTP(X, E', w'). Finally, the edges in the minimum spanning tree T_(X,E',w')(X) so obtained are replaced by the edges in the corresponding shortest paths in the original graph G. The neighborhoods N(i), for all i = 1, ..., |X|, can be computed in time O(|E| + |V| log |V|), which is the same complexity as that of the minimum spanning tree computation. Then, the overall complexity of the distance network heuristic with Mehlhorn's improvements is only O(|E| + |V| log |V|), which is much better than the original bound.

As discussed in the previous section, the construction phase of GRASP relies on randomization to build different solutions at different iterations. Graph G' = (X, E') is created once and for all and does not change throughout the computations. In order to add randomization to Mehlhorn's version of the distance network heuristic, we make the following modification in Kruskal's algorithm. Instead of selecting the feasible edge with the smallest weight, we build a restricted candidate list (RCL) with all edges (i,j) ∈ E' such that w'_ij ≤ w'_min + α(w'_max − w'_min), where 0 ≤ α ≤ 1 and w'_min and w'_max denote, respectively, the smallest and the largest weights among all edges still unselected to form the minimum spanning tree. Then, an edge is selected at random from the restricted candidate list. The operations associated with the construction phase of our GRASP are implemented in lines 2 and 4 of the pseudo-code of algorithm GRASP_for_SPG outlined in Fig. 1.
2.3 Local Search

Since the solution produced by the construction phase is not necessarily a local optimum, local search can be applied as an attempt to improve it. The first step towards the implementation of a local search procedure consists in identifying an appropriate neighborhood definition. Let S be the set of Steiner nodes in the current Steiner tree. We noticed in Sec. 2.1 that each subset S of Steiner nodes can be associated with a feasible solution of the Steiner problem SPG(V, E, w, X), given by a minimum spanning tree T_(S∪X, E(S∪X), w)(S ∪ X) solving problem MSTP(S ∪ X, E(S ∪ X), w). Moreover, let W(T) denote the weight of a tree T. The neighbors of a solution characterized by its set S of Steiner nodes are defined by all sets of Steiner nodes which can be obtained either by adding to S a new non-terminal node, or by eliminating from S one of its Steiner nodes. Given the current tree T_(S∪X, E(S∪X), w)(S ∪ X) and a non-terminal node s ∈ (V \ X) \ S, the computation of the neighbor T_(S∪{s}∪X, E(S∪{s}∪X), w)(S ∪ {s} ∪ X) obtained by the insertion of s into the current set S of Steiner nodes can be done in O(|V|) average time, using the algorithm proposed by Minoux [19]. For each non-terminal node t ∈ S, the neighbor T_(S\{t}∪X, E(S\{t}∪X), w)(S \ {t} ∪ X) obtained by the elimination of t from the current set S of Steiner nodes is computed by Kruskal's algorithm as the solution of the minimum spanning tree problem MSTP(S \ {t} ∪ X, E(S \ {t} ∪ X), w). In order to speed up the local search, since the computational time associated with the evaluation of all insertion moves is likely to be much smaller than that of the elimination moves, only the insertion moves are evaluated in a first pass. The evaluation of elimination moves is performed only if there are no improving insertion moves.
2.4 Algorithm Description

The pseudo-code with the complete description of procedure GRASP_for_SPG for the Steiner problem in graphs is given in Fig. 1. The procedure takes as input the original graph G = (V, E), the set X of terminal nodes, the edge weights w, the restricted candidate list parameter α (0 ≤ α ≤ 1), a seed for the pseudo-random number generator, and the maximum number of GRASP iterations to be performed. The value of the best solution found is initialized in line 1. The preprocessing computations associated with Mehlhorn's version of the distance network heuristic are performed in line 2, as described in Sec. 2.2. The procedure is repeated max_iterations times. In each iteration, a greedy randomized solution T is constructed in line 4 using the randomized version of Kruskal's algorithm. Let S be the set of Steiner nodes in T. Next, the local search attempts to produce a better solution. In line 5 we initialize the best set of Steiner nodes as those in the current solution, and the weight of the best neighbor as that of the current solution. The loop from line 6 to 11 searches for the best insertion move. In line 7 we compute the minimum spanning tree T_{+s} associated with problem MSTP(S ∪ {s} ∪ X, E(S ∪ {s} ∪ X), w) defined by the insertion of node s into the current set of Steiner nodes. Let W(T_{+s}) be its weight. In line 8 we check whether the new solution T_{+s} improves the weight of the current best neighbor. The best set of Steiner nodes and the weight of the best neighbor are updated in line 9. Once all insertion moves have been evaluated, we check in line 12 whether an improving neighbor has been found. If this is the case, the set of Steiner nodes, the current Steiner tree and its weight are updated in line 13, and the local search resumes from this new current solution.
procedure GRASP_for_SPG(V, E, w, X, α, seed, max_iterations)
 1    best_value ← ∞;
 2    Compute graph G' = (X, E') and weights w'_ij, ∀(i,j) ∈ E';
 3    for k = 1, ..., max_iterations do
          /* Construction phase */
 4        Apply a randomized version of Kruskal's algorithm to obtain a
          spanning tree T of G' = (X, E') with S as its set of Steiner nodes;
          /* Insertion moves */
 5        best_set ← S; best_weight ← W(T);
 6        for all s ∈ (V \ X) \ S do
 7            Compute the minimum spanning tree T_{+s};
 8            if W(T_{+s}) < best_weight then do
 9                best_set ← S ∪ {s}; best_weight ← W(T_{+s});
10            end then;
11        end for;
12        if best_weight < W(T) then do
13            S ← S ∪ {s}; T ← T_{+s}; W(T) ← W(T_{+s}); go to line 5;
14        end then;
          /* Elimination moves */
15        best_set ← S; best_weight ← W(T);
16        for all t ∈ S do
17            if G_{−t} = ((S \ {t}) ∪ X, E((S \ {t}) ∪ X)) is connected then do
18                Compute the minimum spanning tree T_{−t};
19                if W(T_{−t}) < best_weight then do
20                    best_set ← S \ {t}; best_weight ← W(T_{−t});
21                end then;
22            end then;
23        end for;
24        if best_weight < W(T) then do
25            S ← S \ {t}; T ← T_{−t}; W(T) ← W(T_{−t}); go to line 5;
26        end then;
          /* Best solution update */
27        if W(T) < best_value then do
28            S* ← S; T* ← T; best_value ← W(T);
29        end then;
30    end for;
31    return S*, T*;
end GRASP_for_SPG;

Fig. 1. Pseudo-code of the sequential GRASP procedure for the Steiner problem in graphs

If no improving insertion move has been found, then the elimination moves are evaluated. In line 15 we reinitialize the best set of Steiner nodes as those in the current solution, and the weight of the best neighbor as that of the current solution. We check in line 17 whether the graph G_{−t} = ((S \ {t}) ∪ X, E((S \ {t}) ∪ X)) obtained by the elimination of node t is connected or not. If it is connected, we compute in line 18 the minimum spanning tree T_{−t} associated with problem MSTP((S \ {t}) ∪ X, E((S \ {t}) ∪ X), w) defined by the elimination of node t from the current set of Steiner nodes. Again, let W(T_{−t}) be its weight. In line 19 we check whether the new solution T_{−t} improves the weight of the current best neighbor. Once again, the best set of Steiner nodes and the weight of the best neighbor are updated in line 20. Once all elimination moves have been evaluated, we check in line 24 whether an improving neighbor has been found. If this is the case, the set of Steiner nodes, the current Steiner tree and its weight are updated in line 25, and the local search resumes from this new current solution. If the solution found at the end of the local search is better than the best solution found so far, we update in line 28 the best set of Steiner nodes, the best Steiner tree and its weight. The best set S* of Steiner nodes and the best Steiner tree T* are returned in line 31.
2.5 Acceleration

In order to accelerate the local search phase, we implemented a faster evaluation scheme for insertion and elimination moves. The basic idea consists in keeping candidate lists with promising moves of each type, which are periodically updated. In the first GRASP iteration, we build a list containing the k_best improving insertion moves, kept in nondecreasing order of the associated move values. At each following iteration, let S be the set of Steiner nodes in the current solution. Instead of reevaluating all insertion moves, we just take the node s corresponding to the first move in this candidate list and reevaluate the weight of the minimum spanning tree associated with the insertion of this node into the current set of Steiner nodes. If the move turns out to be an improving one, the current solution is updated, the move is eliminated from the candidate list, and the local search resumes from the new set S ∪ {s} of Steiner nodes. Otherwise, if the move is not an improving one, it is eliminated from the candidate list and the next candidate move is evaluated. Once the candidate list becomes empty, a new full iteration is performed: all insertion moves are evaluated and the candidate list is rebuilt. A similar procedure is implemented for elimination moves.

We present in Table 1 some computational results illustrating the efficiency of the above acceleration scheme on an IBM RISC/6000 370 processor with 256 Mbytes of RAM. For each of the 20 series C test problems from the OR-Library [2], we present the weight W(T*) of the best solution found and the computation time in seconds (sec's) obtained by (i) a straightforward version of algorithm GRASP_for_SPG with a fixed value α = 0.5, and (ii) the same algorithm using the above acceleration scheme for insertion and elimination moves with k_best = 40. These results show that this technique significantly reduced the computation times, attaining a reduction of up to 81% in the case of problem C.05, with very small losses in terms of solution quality.
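The candidate-list mechanism can be sketched with two small helpers. Names are ours, and the `evaluate` callback (returning the change in solution weight caused by a move, negative for improving moves) is a hypothetical stand-in for the MST reevaluation.

```python
import heapq

def build_candidate_list(S, candidates, evaluate, k_best):
    """Build the list of the k_best improving insertion moves for the
    current Steiner node set S, kept in nondecreasing order of move value."""
    moves = [(evaluate(S, s), s) for s in candidates]
    improving = [m for m in moves if m[0] < 0]
    return heapq.nsmallest(k_best, improving)

def next_improving_move(cand_list, S, evaluate):
    """Pop moves off the head of the candidate list, reevaluating each
    against the current solution (stored values may be stale); return the
    first move that is still improving, or None when the list runs out."""
    while cand_list:
        _, s = cand_list.pop(0)
        delta = evaluate(S, s)
        if delta < 0:
            return s, delta
    return None
```

When `next_improving_move` returns `None`, the caller performs a full evaluation pass and rebuilds the list, as described above.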

Table 1. Effect of the acceleration scheme on OR-Library problems of series C

                GRASP_for_SPG        Acceleration scheme
Problem        W(T*)      sec's        W(T*)      sec's
C.01              85      149.8           85       76.3
C.02             144      154.1          144       88.4
C.03             754     3143.4          766      703.0
C.04            1080     5009.3         1089      947.1
C.05            1579     8058.6         1579     1569.7
C.06              55      108.1           55       80.4
C.07             103      200.3          102       91.7
C.08             509     3290.9          523      664.0
C.09             709     5412.6          737     1112.0
C.10            1094    11378.8         1100     7160.7
C.11              32      110.0           32       82.8
C.12              46      248.1           46      100.5
C.13             260     3912.0          280     1179.4
C.14             325     5611.5          336     2575.1
C.15             559    28825.7          561    20479.3
C.16              12       93.6           12       84.2
C.17              18      146.4           18       86.6
C.18             124     3052.5          134     1256.9
C.19             167     5126.0          186     2123.2
C.20             267    30056.6          268    28063.7

3 Parallelization and Computational Results

The most straightforward GRASP parallelization scheme consists in the distribution of the iterations among the available processors, using a master-slave scheme. Each slave processor performs a fixed number of GRASP iterations, equal to the total number of GRASP iterations divided by the number of processors. Once all processors have finished their computations, the best solution among those found by each processor is collected by the master processor. To reduce communication times, each slave processor should have its own copy of all problem data. Each slave processor is involved in exactly one communication

step at the end of its computation, when it communicates to the master the best solution it has found and its weight. However, since some slave processors may be slower than others (they can even be different, in the case of a heterogeneous system) and since the time spent on the local search may vary considerably from one iteration to another, this kind of strategy very often leads to critical load imbalance and small efficiency values. It has recently been shown by Alvim [1] that a very simple strategy, consisting in the distribution of the iterations among the slave processors on a demand-driven basis, can significantly improve load balancing and leads to smaller computation (elapsed) times. In fact, this strategy allowed reductions in the order of 20% in the case of the parallelization of a GRASP for a scheduling problem arising in the context of traffic assignment in TDMA systems [1, 22]. We use the same strategy in the parallelization of the sequential algorithm GRASP_for_SPG, whose pseudo-code was presented in Fig. 1. The max_iterations GRASP iterations to be performed are divided into fixed batches of batch_size iterations each. In the beginning, each slave processor receives a batch of iterations to perform. Whenever a slave processor finishes its current batch, it requests a new one from the master. If the total number of GRASP iterations has already been performed, then this slave finishes its computations. Otherwise, it performs a new batch of iterations. The computational experiments have been performed on a set of 60 benchmark problems from series C, D and E of the OR-Library [2]. The graphs have been previously reduced using the "smaller special distance test" proposed by Duin and Volgenant [5]. The above parallelization scheme was implemented using the Message Passing Interface (MPI) library [20, 26] on an IBM SP2 machine with 16 RISC/6000 370 processors with 256 Mbytes of RAM each.
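The demand-driven batch distribution can be sketched as follows. For illustration only, a thread pool stands in for the paper's MPI master-slave processes, and a toy objective stands in for a full GRASP iteration; all names are ours. The key property is the same: a worker that finishes its batch immediately pulls the next unassigned one, so faster workers do more batches.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_batch(seeds):
    """One batch of GRASP iterations.  A toy random objective stands in
    for a full construction + local search pass.  Returns (value, seed)."""
    best = (float("inf"), None)
    for seed in seeds:
        value = random.Random(seed).uniform(0, 100)  # placeholder iteration
        best = min(best, (value, seed))
    return best

def demand_driven_grasp(max_iterations, batch_size, workers):
    """Split max_iterations into fixed batches of batch_size iterations
    and let a pool of workers pull them on demand: whichever worker is
    free receives the next batch, improving load balance when iteration
    times differ.  Threads here stand in for MPI slave processes."""
    batches = [range(i, min(i + batch_size, max_iterations))
               for i in range(0, max_iterations, batch_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(run_batch, batches)
        return min(results)   # master collects the overall best solution
```

Because each iteration is identified by its own seed, the result is independent of how the batches are assigned to workers, only the elapsed time changes.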
In the case of series C and D, the parallel GRASP was run with the following parameter settings: α = 0.1, max_iterations = 400, k_best equal to the number of terminal nodes, and batch_size = 25. The computational results obtained using five processors (with four of them acting as slaves) are reported in Table 2. For each test problem, we present the number of nodes, terminal nodes and edges of the graph, the weight (Best value) of the optimal Steiner tree, the weight (Par-GRASP) of the best solution found by the parallel GRASP algorithm, an indication of whether this solution is optimal or not, and the local index of the iteration in which the best solution was found (# iter). Since the problems in series E are larger, fewer iterations have been performed: we used max_iterations = 100 with batch_size = 2. The computational results obtained using eleven processors (with ten of them acting as slaves) are reported in Table 3. These results illustrate the effectiveness of the proposed GRASP procedure for the Steiner problem in graphs. The parallel GRASP found the optimal solution for 18 out of 20 problems of series C. For the other two problems, the best

Table 2. Run statistics on OR-Library problems of series C and D

Problem   |V|   |X|    |E|   Best value   Par-GRASP   optimal?   # iter
C.01      500     5    625           85          85        yes        1
C.02      500    10    625          144         144        yes        1
C.03      500    83    625          754         754        yes        7
C.04      500   125    625         1079        1079        yes        1
C.05      500   250    625         1579        1579        yes        1
C.06      500     5   1000           55          55        yes        1
C.07      500    10   1000          102         102        yes        1
C.08      500    83   1000          509         509        yes        3
C.09      500   125   1000          707         707        yes        1
C.10      500   250   1000         1093        1093        yes        1
C.11      500     5   2500           32          32        yes        1
C.12      500    10   2500           46          46        yes        1
C.13      500    83   2500          258         258        yes       20
C.14      500   125   2500          323         323        yes        1
C.15      500   250   2500          556         556        yes        1
C.16      500     5  12500           11          11        yes        1
C.17      500    10  12500           18          18        yes        1
C.18      500    83  12500          113         114         no       25
C.19      500   125  12500          146         147         no        1
C.20      500   250  12500          267         267        yes        1
D.01     1000     5   1250          106         106        yes        1
D.02     1000    10   1250          220         220        yes        1
D.03     1000   167   1250         1565        1565        yes        3
D.04     1000   250   1250         1935        1935        yes        1
D.05     1000   500   1250         3250        3250        yes        1
D.06     1000     5   2000           67          68         no        1
D.07     1000    10   2000          103         103        yes        1
D.08     1000   167   2000         1072        1072        yes       24
D.09     1000   250   2000         1448        1449         no        1
D.10     1000   500   2000         2110        2110        yes        3
D.11     1000     5   5000           29          29        yes        1
D.12     1000    10   5000           42          42        yes        1
D.13     1000   167   5000          500         500        yes        8
D.14     1000   250   5000          667         667        yes       17
D.15     1000   500   5000         1116        1116        yes        6
D.16     1000     5  25000           13          13        yes        1
D.17     1000    10  25000           23          23        yes        1
D.18     1000   167  25000          223         227         no        6
D.19     1000   250  25000          310         314         no        7
D.20     1000   500  25000          537         537        yes        1

Table 3. Run statistics on OR-Library problems of series E

Problem   |V|    |X|    |E|   Best value   Par-GRASP   optimal?   # iter
E.01     2500      5   3125          111         111        yes        1
E.02     2500     10   3125          214         214        yes        1
E.03     2500    417   3125         4013        4013        yes        1
E.04     2500    625   3125         5101        5102         no        1
E.05     2500   1250   3125         8128        8128        yes        1
E.06     2500      5   5000           73          73        yes        1
E.07     2500     10   5000          145         145        yes        1
E.08     2500    417   5000         2640        2647         no        1
E.09     2500    625   5000         3604        3606         no        1
E.10     2500   1250   5000         5600        5600        yes        2
E.11     2500      5  12500           34          34        yes        1
E.12     2500     10  12500           67          67        yes        1
E.13     2500    417  12500         1280        1286         no        2
E.14     2500    625  12500         1732        1736         no        2
E.15     2500   1250  12500         2784        2787         no        2
E.16     2500      5  62500           15          15        yes        1
E.17     2500     10  62500           25          25        yes        1
E.18     2500    417  62500          564         575         no        2
E.19     2500    625  62500          758         764         no        2
E.20     2500   1250  62500         1342        1342        yes        1

solutions found weigh only one unit more than the optimal values. Concerning series D, the optimal solution was found for 16 out of 20 problems. Among the four problems not solved to optimality, for two of them the best solutions found weigh only one unit more than the corresponding optimal values; the solutions found for the remaining two problems are approximately within 2% of optimality. For series E, the optimal solution was found for 12 out of 20 problems. Seven of the remaining problems are less than 1% from optimality and the last one is approximately within 2% of optimality. We also notice that, in most cases, the best solution was found in very few iterations: for 42 out of 60 test problems the best solution was found by at least one of the processors in its first iteration.

4 Concluding Remarks

In this paper we described a parallel greedy randomized adaptive search procedure for finding approximate solutions to the Steiner problem in graphs. Feasible solutions are characterized by their non-terminal nodes. A randomized version of Kruskal's algorithm for the minimum spanning tree problem is used in the construction phase. Local search is based on insertions and eliminations of Steiner nodes to/from the current solution. Parallelization is performed through the distribution of the iterations among the processors on a demand-driven basis, in order to improve load balancing. The parallel procedure was implemented using the Message Passing Interface library on an IBM SP2 machine. Computational experiments on a set of benchmark problems from series C, D and E of the OR-Library illustrate the effectiveness of the proposed parallel GRASP procedure, which found the optimal solution for 46 out of the 60 test problems. The best solution found is only one unit from optimality for five of the remaining problems. However, the observed execution times per iteration are quite high and should be improved in order to allow for the execution of more GRASP iterations. Since local search seems to be the bottleneck of the current implementation in terms of computation times, one promising strategy is the combination of the randomized Kruskal-like construction phase with a local search procedure based on key-paths, as recently proposed in [17]. Another hybrid strategy currently under investigation is the application of a variable neighborhood search procedure starting from the best solution found by each processor.

Acknowledgements. The authors thank the Laboratório Nacional de Computação Científica (Rio de Janeiro, Brazil) for making available their computational facilities and the IBM SP system on which the computational experiments were performed.

References

1. A.C. Alvim, "Evaluation of GRASP parallelization strategies" (in Portuguese), M.Sc. dissertation, Department of Computer Science, Catholic University of Rio de Janeiro, 1998.
2. J.E. Beasley, "OR-Library: Distributing test problems by electronic mail", Journal of the Operational Research Society 41 (1990), 1069-1072.
3. E.-A. Choukmane, "Une heuristique pour le problème de l'arbre de Steiner", RAIRO Recherche Opérationnelle 12 (1978), 207-212.
4. K.A. Dowsland, "Hill-climbing simulated annealing and the Steiner problem in graphs", Engineering Optimization 17 (1991), 91-107.
5. C.W. Duin and A. Volgenant, "Reduction tests for the Steiner problem in graphs", Networks 19 (1989), 549-567.
6. C.W. Duin and S. Voss, "Efficient path and vertex exchange in Steiner tree algorithms", Networks 29 (1997), 89-105.
7. H. Esbensen, "Computing near-optimal solutions to the Steiner problem in a graph using a genetic algorithm", Networks 26 (1995), 173-185.
8. T.A. Feo and M.G. Resende, "Greedy randomized adaptive search procedures", Journal of Global Optimization 6 (1995), 109-133.
9. M. Gendreau, J.-F. Larochelle and B. Sansò, "A tabu search heuristic for the Steiner tree problem in graphs", Rapport de recherche G-96-03, GERAD, Montréal, 1996.
10. F.K. Hwang, D.S. Richards and P. Winter, The Steiner Tree Problem, North-Holland, Amsterdam, 1992.
11. A. Iwainsky, E. Canuto, O. Taraszow and A. Villa, "Network decomposition for the optimization of connection structures", Networks 16 (1986), 205-235.
12. A. Kapsalis, V.J. Rayward-Smith and G.D. Smith, "Solving the graphical Steiner tree problem using genetic algorithms", Journal of the Operational Research Society 44 (1993), 397-406.
13. R.M. Karp, "Reducibility among combinatorial problems", in Complexity of Computer Computations (R.E. Miller and J.W. Thatcher, eds.), 85-103, Plenum Press, New York, 1972.
14. L.T. Kou, G. Markowsky and L. Berman, "A fast algorithm for Steiner trees", Acta Informatica 15 (1981), 141-145.
15. J.B. Kruskal, "On the shortest spanning subtree of a graph and the traveling salesman problem", Proceedings of the American Mathematical Society 7 (1956), 48-50.
16. N. Maculan, "The Steiner problem in graphs", in Surveys in Combinatorial Optimization (S. Martello, G. Laporte, M. Minoux, and C.C. Ribeiro, eds.), Annals of Discrete Mathematics 31 (1987), 185-212.
17. S.L. Martins, P. Pardalos, M.G. Resende, and C.C. Ribeiro, "GRASP procedures for the Steiner problem in graphs", presented at the DIMACS Workshop on Randomization Methods in Algorithm Design, research report in preparation, 1997.
18. K. Mehlhorn, "A faster approximation algorithm for the Steiner problem in graphs", Information Processing Letters 27 (1988), 125-128.
19. M. Minoux, "Efficient greedy heuristics for Steiner tree problems using reoptimization and supermodularity", INFOR 28 (1990), 221-233.
20. Message Passing Interface Forum, "MPI: A message-passing interface standard (version 1.1)", Technical report, University of Tennessee, Knoxville, 1995.
21. J. Plesník, "A bound for the Steiner problem in graphs", Mathematica Slovaca 31 (1981), 155-163.
22. M. Prais and C.C. Ribeiro, "Reactive GRASP: An application to a matrix decomposition problem in TDMA traffic assignment", research report, Department of Computer Science, Catholic University of Rio de Janeiro, 1998.
23. M.G. Resende and C.C. Ribeiro, "A GRASP for graph planarization", Networks 29 (1997), 173-189.
24. C.C. Ribeiro and M.C. Souza, "An improved tabu search for the Steiner problem in graphs", working paper, Department of Computer Science, Catholic University of Rio de Janeiro, 1997.
25. S. Voss, "Steiner's problem in graphs: Heuristic methods", Discrete Applied Mathematics 40 (1992), 45-72.
26. D.W. Walker, "The design of a standard message passing interface for distributed memory concurrent computers", Parallel Computing 20 (1994), 657-673.
27. P. Winter, "Steiner problem in networks: A survey", Networks 17 (1987), 129-167.
28. J. Xu, S.Y. Chiu and F. Glover, "Tabu search heuristics for designing a Steiner tree based digital line network", working paper, University of Colorado at Boulder, 1995.
