Tree-decomposition based heuristics for the two-dimensional bin packing problem with conflicts

Ali Khanafer, François Clautiaux, El-Ghazali Talbi

Université des Sciences et Technologies de Lille, LIFL CNRS UMR 8022, INRIA Lille - Nord Europe Bâtiment INRIA, Parc de la Haute Borne, 59655 Villeneuve d'Ascq, France

Abstract

This paper deals with the two-dimensional bin packing problem with conflicts (BPC2D). Given a set of rectangular items, an unlimited number of rectangular bins and a conflict graph, the goal is to find a conflict-free packing of the items minimizing the number of bins used. In this paper, we propose heuristics for this problem based on the concept of tree-decomposition. These heuristics proceed by decomposing a BPC2D instance into subproblems to be solved independently. The application of this decomposition method is not straightforward, since merging the partial solutions is hard. Several strategies are proposed in this paper to make an effective use of the decomposition. Computational experiments show the practical effectiveness of our approach.

Key words: bin packing with conflicts, cutting and packing, tree-decomposition, heuristics, tabu search.

1. Introduction

Many real-world problems can be formulated as two-dimensional bin packing problems with conflicts, such as job scheduling, scheduling of communication systems, school timetable construction, container loading, load balancing,

Email addresses: [email protected] (Ali Khanafer), [email protected] (François Clautiaux), [email protected] (El-Ghazali Talbi)

Preprint submitted to Computers & Operations Research

July 3, 2009

database replicas storage, cutting objects out of a strip of material, etc. Formally, the two-dimensional variant of this problem (BPC2D) is defined as follows. We are given a set I = {1, 2, . . . , n} of n rectangular items, each having width wi and height hi, and an unlimited number of finite identical rectangular bins of width W and height H. We are also given a conflict graph G = (I, E), where (i, j) ∈ E if items i and j are in conflict. Problem BPC2D generalizes the one-dimensional bin packing problem with conflicts (BPC1D), introduced by Jansen and Öhring [14], which is obtained when hi = H for all i ∈ I. The aim of BPC2D is to minimize the number of bins used to pack all items of I, provided that the contents of any bin consist of compatible items fitting within the bin capacity without overlapping. Problem BPC is clearly a generalization of the following well-known NP-hard problems [9]: the unconstrained bin packing problem (BP) when E = ∅, and the vertex coloring problem (VCP) when all the items are small enough to be packed in one bin.

Theoretical studies of BPC through approximate resolution schemes were provided in several papers [8, 13, 14]. These studies considered the problem for specific graph classes and focused on improving the approximation ratio from a theoretical point of view. Gendreau et al. [10] studied the BPC1D on general graphs and proposed six heuristics and two lower bounds. The first heuristic is a simple adaptation of the first-fit decreasing heuristic proposed by Coffman et al. [7] for the BP1D. The next three heuristics make good use of a graph coloring procedure. The last two heuristics are based on the computation of cliques over the graph of conflicts or the graph of compatibility. More recently, the literature on the BPC1D was extended by Muritiba et al. [20], who proposed four heuristics, several lower bounds and an exact method. For the first three heuristics, they adapt classical algorithms dedicated to the BP1D [16] by considering a special function to surrogate the weights of the items. The last heuristic is an improvement of a clique-based heuristic proposed in [10]. Epstein et al. [8] studied the BPC2D on specific graph classes, and designed approximation algorithms. To our knowledge, BPC2D was never studied for

general graphs from a computational point of view. Of course, as done for BPC1D, classical BP2D heuristics may simply be adapted and applied. The bottom-left (BL) heuristic and its variants [1, 5] are the most famous algorithms for BP2D. In a previous paper [17], we introduced new lower bounds for BPC2D, based on the conflict generalization of so-called data-dependent dual-feasible functions [4]. Some of these functions were based on the concept of tree-decomposition [22]. Tree-decomposition was previously used successfully for solving coloring problems (see e.g. [18]). Consequently, it seems a natural idea to extend the methodology to packing problems, which generalize the graph coloring problem. However, applying tree-decomposition to BPC is far from easy and leads to several issues that have to be addressed in order to obtain effective algorithms.

This paper proposes a new resolution approach to BPC, based on the tree-decomposition of the compatibility graph. First, a tree-decomposition and a good separation of its clusters have to be computed. Then, we prove that finding the cluster to which each item should be assigned is an NP-complete problem, and we propose several fast algorithms for it. Finally, we propose a tabu search method which improves the results when a larger computing time is allowed. We report computational results, which confirm the quality of our approach: the results are clearly improved for classical data sets from the literature. For very large instances, we also show that a modified version of our approach leads to a dramatic reduction of the computing time when the density of the graph is large.

The remainder of this paper is organized as follows. In section 2, we recall the main notions of graph theory exploited in tree-decomposition methods. In section 3, we investigate the concept of tree-decomposition to solve BPC2D using greedy construction heuristics. Section 4 introduces the tabu search and describes some details of its implementation. Computational experiments are reported in section 5. Finally, some concluding remarks are provided in section 6.

2. Graph Decomposition

Before introducing our methodology for solving BPC, it is necessary to recall some graph theory definitions and the notion of tree-decomposition introduced by Robertson and Seymour [22], and to describe classical algorithms that obtain such a decomposition with a tractable complexity.

2.1. Terminology

An undirected graph G is a pair (V, E) composed of a set V of vertices and a set E of edges, E ⊆ V × V. A graph G is connected if for all vertices w, v ∈ V such that w ≠ v, there exists a path (a sequence of vertices such that from each of its vertices there is an edge to the next vertex in the sequence) from w to v. A subgraph of G = (V, E), induced by W ⊆ V, is a graph G(W) = (W, EW) such that EW = E ∩ (W × W). A tree is a graph in which any two vertices are connected by exactly one path, and a forest is a disjoint union of trees.

2.2. Tree-Decomposition

A tree-decomposition is a mapping of a graph into a set of clusters linked in a tree.

Definition 1. ([22]) A tree-decomposition of G = (V, E) is a pair (C, T) where T = (N, A) is a tree with node set N and arc set A, and C = {ci : i ∈ N} is a family of subsets of V such that:
1. ∪i∈N ci = V,
2. ∀(v, w) ∈ E, ∃ci ∈ C containing both vertices v and w,
3. ∀i, j, k ∈ N, if j is on the path from i to k in T, then ci ∩ ck ⊆ cj.

Figure 1 shows a graph with eight vertices, and a tree-decomposition of it onto a tree with six nodes. Each graph edge connects two vertices that are listed together at some tree node, and each graph vertex is listed at the nodes of a contiguous subtree of the tree.

Definition 2. The intersection of two or more clusters of a tree-decomposition is called a separator.

Figure 1: A graph and its tree-decomposition

The width w(C, T) of a tree-decomposition is equal to max_{i∈N}(|ci| − 1). The treewidth tw(G) of a graph G is defined as min w(C, T), where the minimum is taken over all tree-decompositions (C, T) of G. Whereas for some graph families, such as trees and series-parallel graphs, one can compute the treewidth in linear time, computing the treewidth of an arbitrary graph is an NP-complete problem [18].

2.3. Computing a tree-decomposition

In the following, we present a tree-decomposition algorithm that relies on the concept of graph triangulation. First, some definitions and properties must be recalled.

Definition 3. A graph is chordal (or triangulated) if every cycle of length > 3 has a chord, i.e. an edge joining two non-consecutive vertices of the cycle.

Property 1. A tree-decomposition of a chordal graph consists of the set of its maximal cliques.

Property 2. A chordal graph of n vertices has at most n maximal cliques (a clique is maximal iff it is not included in another clique).

Given a conflict graph G, it is not possible to use these properties immediately, because G is not necessarily chordal. Several approaches were proposed to provide a graph triangulation (see [15]), i.e. to find a suitable set of edges to add to the graph so as to obtain a triangulated graph. In this paper, we use the heuristic called maximum cardinality search (MCS) [23], which is one of the most famous heuristics for computing graph triangulations. It can be implemented in O(|V| + |E|) time [23]. When G is chordal, it computes a perfect elimination order δ, a permutation [v1, v2, . . . , vn] of the elements of V where each vi is a simplicial vertex of the subgraph induced by {vi, . . . , vn}, as follows: number the vertices from n down to 1; as the next vertex to number, select the vertex adjacent to the largest number of previously numbered vertices.
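The MCS numbering scheme just described can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; the adjacency-dict representation of the graph is an assumption.

```python
def mcs_order(vertices, adj):
    """Maximum cardinality search. adj maps each vertex to a set of neighbours.

    Numbers vertices from n down to 1, always picking the vertex adjacent to
    the largest number of already-numbered vertices; on a chordal graph the
    resulting order is a perfect elimination order delta."""
    weight = {v: 0 for v in vertices}
    numbered = set()
    order = []                      # filled in numbering order (n, n-1, ..., 1)
    for _ in range(len(vertices)):
        # pick the unnumbered vertex with the most numbered neighbours
        v = max((u for u in vertices if u not in numbered),
                key=lambda u: weight[u])
        order.append(v)
        numbered.add(v)
        for w in adj[v]:
            if w not in numbered:
                weight[w] += 1
    order.reverse()                 # delta: vertex numbered 1 comes first
    return order
```

On a chordal graph, each vertex's later neighbours in the returned order form a clique, which is exactly the simplicial property required by δ.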

Definition 4. A graph is chordal if and only if it has a perfect elimination order δ.

Definition 5. Every maximal clique of a chordal graph G = (V, E) is of the form {v} ∪ X(v) for some vertex v ∈ V, where X(v) = {w ∈ adj(v) : δ(v) < δ(w)} and adj(v) is the set of vertices adjacent to v.

Property 3. Finding all maximal cliques in a chordal graph G = (V, E) takes O(|V| + |E|) time.

Computing a tree-decomposition involves several steps. The first step triangulates the graph by adding some new edges to it. The second step identifies the maximal cliques in the chordal graph by a simple traversal of the perfect elimination order δ: for each vertex v of δ, a maximal clique {v} ∪ X(v) is computed (see definition 5). According to property 3, the time complexity of these steps is O(|V| + |E′|), where E′ is the set of edges of the chordal graph. Finally, note that the problem of finding a tree-decomposition of minimal treewidth is NP-hard.

In the next section, we show how this concept can be applied to the compatibility graph G̅ of a BPC instance, and how it can help heuristics find faster and more accurate results for several instances.
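Definition 5 and property 3 suggest a direct way to extract the clusters of the decomposition from a perfect elimination order. The following sketch (with an assumed adjacency-dict input, and a simple quadratic maximality filter instead of the linear-time procedure of property 3) keeps one candidate clique {v} ∪ X(v) per vertex and discards the non-maximal ones:

```python
def clusters_from_peo(order, adj):
    """Clusters of a tree-decomposition of a chordal graph, from a perfect
    elimination order `order` (delta) and adjacency dict `adj`."""
    pos = {v: i for i, v in enumerate(order)}   # pos[v] plays the role of delta(v)
    # one candidate clique {v} U X(v) per vertex (Definition 5)
    candidates = [{v} | {w for w in adj[v] if pos[w] > pos[v]} for v in order]
    # keep only maximal candidates (Property 2: at most n of them)
    maximal = [c for c in candidates if not any(c < d for d in candidates)]
    unique = []                                 # drop duplicate cliques
    for c in maximal:
        if c not in unique:
            unique.append(c)
    return unique
```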

3. Application to the bin packing problem with conflicts

In this section, we present our framework for using tree-decomposition to help the resolution of BPC. In our approach, once a tree-decomposition is obtained, i.e. the set of clusters has been identified, each item has to be assigned to a specific cluster, to prevent items belonging to several clusters from being packed more than once. We call such an assignment a cluster-separation. We show that finding the best assignment is NP-complete, and propose a first family of heuristics to find fast solutions for this problem.

3.1. The general approach

Given a conflict graph G = (I, E), let us denote by G̅ = (I, I × I \ E) the corresponding compatibility graph. In the first step, we compute a tree-decomposition of G̅, leading to a set of subproblems to be solved iteratively. Then, the partial solutions obtained are merged into a unique solution to the whole problem. Figure 2 shows a tree-decomposition (C, T) of a graph computed using MCS, where C = {c1 = {0, 1}, c2 = {1, 2, 5, 6}, c3 = {2, 3, 6}, c4 = {3, 6, 7}, c5 = {3, 4}}. Referring to the same figure, we can notice that a vertex may belong to several clusters (e.g. vertex 6 belongs to c2, c3 and c4). If the corresponding item is treated as many times as the vertex appears in the decomposition, the solution obtained will be of weak quality.

Figure 2: A tree-decomposition of a compatibility graph G̅

Algorithm 1 gives a step-by-step description of the new approach. At line 1, the compatibility graph is tree-decomposed. Since a vertex may belong to several clusters, a cluster-separation is needed; it is applied at line 2. The separated clusters are then solved as subproblems by means of a resolution method at lines 4-6. The resolution method we used is described in section 3.4. Finally, at line 7, a repairing heuristic discussed in section 3.5 is applied to the final solution in order to improve it. Note that although only heuristics are used in this paper, our framework allows exact methods to be used in each step of the algorithm (computing the decomposition, clustering, packing, repairing). This would lead to better results, but also to much more time-consuming methods (packing items optimally in two dimensions is more difficult than in the one-dimensional case, even when all items can be packed into one bin; see [6, 21] for example).

Algorithm 1: Tree-decomposition based resolution of a BPC instance.
input : Set I of n items and conflict graph G = (I, E).
output: A packing of the n items in a set B of bins.
1 (C, T) ← TreeDecomposition(G̅);
2 µ(C, T) ← ClusterSeparation(C, T);
3 B ← ∅;
4 foreach ci ∈ µ(C, T) do
5     B ← B ∪ ResolutionMethod(ci);
6 end
7 B ← RepairingHeuristic(B);
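The control flow of algorithm 1 can be summarized in a few lines. All component names below (tree_decomposition, cluster_separation, solve_cluster, repair) are placeholders for the procedures of sections 2.3, 3.2-3.3, 3.4 and 3.5; this is only a sketch of how the pieces are wired together:

```python
def solve_bpc(items, conflict_graph,
              tree_decomposition, cluster_separation, solve_cluster, repair):
    """Skeleton of Algorithm 1: decompose, separate, solve, repair."""
    clusters, tree = tree_decomposition(conflict_graph)
    separated = cluster_separation(clusters, tree)   # each item in one cluster
    bins = []
    for cluster in separated:                        # solve the subproblems
        bins.extend(solve_cluster(cluster))
    return repair(bins)                              # try to empty some bins
```

Any component (e.g. the packing routine) can be swapped for an exact method without changing this skeleton, which is the modularity claimed above.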

3.2. The cluster-separation problem

An important issue in the new approach is to find a suitable partition of the items into the clusters. We call such a partition a cluster-separation.

Definition 6. Given a tree-decomposition (C, T) of graph G = (V, E) where T = (N, A), a cluster-separation µ of (C, T) is a partition S of I such that ∀si ∈ S, ∃cj ∈ C s.t. si ⊆ cj.

The choice of the cluster-separation is the most crucial part of the algorithm. We prove below that there exists a cluster-separation that corresponds to an optimal solution.

Proposition 1. For any BPC instance D with a compatibility graph G̅ and its tree-decomposition (C, T), there exists a cluster-separation µ of (C, T) such that solving each cluster ci ∈ µ using an exact resolution method leads to an optimal solution to D.

Proof. Let s∗ be an optimal solution for the whole problem. This solution consists of a set of bins, each containing a set of compatible items packed together, which form a stable set in G, i.e. a clique in G̅. As claimed in property 1, a clique in a graph necessarily belongs to a cluster of its tree-decomposition. Thus the clusters solved to get s∗ can be obtained by a cluster-separation of the tree-decomposition. □

We call the problem of finding the best cluster-separation the best-cluster-separation problem. We now show that this problem is NP-complete for an arbitrary graph.

Definition 7. Let D be a BPC instance with compatibility graph G̅, let (C, T) be a tree-decomposition of G̅, and let k be an integer value. The best-cluster-separation problem consists in finding a cluster-separation µ of (C, T) leading to a solution of value k, if it exists.

Proposition 2. The best-cluster-separation problem is NP-complete.

Proof. We prove this result by reduction from the classical partition problem. Consider the following BPC instance:

• the height of a bin is H and its width is W,
• all items have the same height as the bin: hi = H (∀i ∈ I),
• ∃I1, I2, I3 ⊆ I such that I1 ∪ I2 and I2 ∪ I3 are cliques, and there is no compatibility edge connecting any item of I1 to any item of I3,
• Σ_{i∈I1} wi = Σ_{i∈I3} wi = w∗ and Σ_{i∈I2} wi = 2 × W − 2 × w∗.

A valid tree-decomposition of this instance is composed of two clusters: I1 ∪ I2 and I2 ∪ I3. The question "Is there a cluster-separation leading to a solution with 2 bins?" is equivalent to deciding whether there exists a partition of the item sizes wi into two subsets of equal total size. This problem is NP-complete [9]. Consequently, finding the best cluster-separation is NP-complete. □

3.3. Fast heuristics for computing a cluster-separation

Since the best-cluster-separation problem is NP-complete, a reasonable way of tackling it for dense graphs is to use heuristics. This phase is the most crucial of our approach: a bad choice of cluster-separation would lead to solutions of weak quality. These heuristics belong to the family of greedy algorithms based on an initial sorting of the clusters. Consider a tree-decomposition (C, T) and let N = |C|. A cluster-separation can be computed as follows:

• number the clusters according to a criterion, i.e. an ordering N : C → {1, . . . , N};
• for each value i in ascending order:
  – assign all remaining items of the current cluster Ci to a new set Si,
  – remove all items of Si from all following clusters Cj such that j > i.

Several criteria have been proposed to explore the cluster tree associated with a tree-decomposition (see e.g. [15]). Two types of criterion were introduced in [15]:

local and global. A local (resp. global) criterion evaluates the relevance of a candidate cluster without (resp. by) taking into account the interactions with other clusters. They also proposed two criteria: the cluster size (local), and the cluster neighborhood size (global), defined as the number of clusters connected to the candidate cluster. In this paper, we introduce a new global criterion, the demand D(i) of an item i, defined as the number of clusters that contain i. This criterion can be generalized and applied to clusters as follows: the demand of a cluster ck is the sum of the demands of its items, D(ck) = Σ_{i∈ck} D(i). A cluster with a large demand shares many items with other clusters, so this criterion helps identify such clusters, to be considered as "central" among the other clusters. We also introduce another local criterion, called rand, which consists of randomly sorting the clusters. The choice of these simple heuristics is justified by the fact that they do not entail a large computing time, since using any complicated heuristic for computing a cluster-separation would increase the computing time of algorithm 1. Also note that any criterion enables exploring the cluster tree in two ways:

ascending and descending.

3.4. Construction heuristic

As a resolution method for solving the subproblems induced by the clusters, we propose an adaptation of the state-of-the-art heuristic called bottom-left (BL), developed by Coffman et al. [7], for the BP2D. This adaptation (BLC in the following) consists of a simple conflict generalization of this heuristic. The heuristic BLC is a first-fit based heuristic (an item is packed in the first place that can contain it). The class of heuristics to which this method belongs corresponds to the packing procedures which preserve bottom-left stability. We say that a rectangle is packed in a bottom-left stable position if it cannot be moved downwards or leftwards. At any stage of the heuristic, the bin contains a set of empty spaces (areas), which can be viewed as rectangles with horizontal and vertical edges. BLC consists in placing the current item into the lowest, and then leftmost, possible location of the first bin that can contain it. The process is iterated for each item in turn, following a predefined order on the items (e.g. increasing height). In practice, experience has shown that the bottom-left strategy tends to perform fairly well. The best implementation was proposed by Chazelle [5]. The adaptation of this heuristic to the case of BPC is straightforward: a test has to be added to forbid packing two conflicting items in the same bin.
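To illustrate the first-fit structure and the added conflict test, here is a deliberately simplified, shelf-based stand-in for BLC: it does not reproduce the true bottom-left geometry of Chazelle's implementation (each bin is filled shelf by shelf instead), and all names are hypothetical.

```python
def shelf_first_fit_conflicts(items, W, H, conflicts):
    """items: list of (id, w, h); conflicts: set of frozenset vertex pairs.
    Returns a list of bins, each a dict with packed item ids and shelves."""
    bins = []   # each bin: {'items': set, 'shelves': [(y, x_used, shelf_h)]}
    for ident, w, h in sorted(items, key=lambda t: -t[2]):  # decreasing height
        placed = False
        for b in bins:
            # the conflict test added to the plain first-fit scheme
            if any(frozenset((ident, j)) in conflicts for j in b['items']):
                continue
            for s in range(len(b['shelves'])):              # existing shelves
                y, used, sh = b['shelves'][s]
                if h <= sh and used + w <= W:
                    b['shelves'][s] = (y, used + w, sh)
                    b['items'].add(ident)
                    placed = True
                    break
            if not placed:
                top = sum(sh for _, _, sh in b['shelves'])
                if top + h <= H:                            # open a new shelf
                    b['shelves'].append((top, w, h))
                    b['items'].add(ident)
                    placed = True
            if placed:
                break
        if not placed:                                      # open a new bin
            bins.append({'items': {ident}, 'shelves': [(0, w, h)]})
    return bins
```

Replacing the shelf logic with a bottom-left placement over the bin's empty rectangular areas would recover the behavior of BLC proper.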

3.5. Repairing heuristic

Once all subproblems are solved, the partial solutions are merged into a unique solution for the initial problem. Then, a repairing heuristic is applied to improve this final solution. It is based on the progressive reduction of the number of bins used by a previously constructed solution: the idea is to empty some bins and redistribute their contents to the other bins. Let B be the set of bins obtained by solving the different subproblems of a BPC instance. A set B̄ of candidate bins is extracted from B. We only consider candidates containing only items that belong to a separator (and thus that can be packed with items of other clusters). Each bin b ∈ B̄ is successively eliminated and its items are redistributed to the other bins. If this is possible, an improved solution with one bin less has been obtained, and the procedure is repeated.
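The repairing idea can be sketched as follows. Here `fits` stands in for the feasibility test of the underlying packing routine (capacity plus conflicts), and the whole function is an assumed simplification of the heuristic described above, with bins represented as plain lists of items:

```python
def repair(bins, candidates, fits):
    """Repeatedly try to empty one candidate bin by moving its items into the
    other bins; fits(other_bin, item, pending_moves) is the feasibility test."""
    improved = True
    while improved:
        improved = False
        for b in list(bins):
            if b not in candidates or b not in bins:
                continue
            moves = []
            for item in b:
                target = next((o for o in bins
                               if o is not b and fits(o, item, moves)), None)
                if target is None:      # this bin cannot be emptied
                    break
                moves.append((item, target))
            else:                       # every item found a new home
                for item, target in moves:
                    target.append(item)
                bins.remove(b)          # one bin fewer; repeat the procedure
                improved = True
    return bins
```

A usage example with a one-dimensional capacity test: `fits = lambda o, it, moves: sum(o) + sum(i for i, t in moves if t is o) + it <= cap`.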

3.6. Complexity of the algorithm

For graphs whose treewidth is bounded by a small constant, the complexity of the algorithm is reduced compared to the complexity of an equivalent construction heuristic.


The complexity of computing a cluster-separation according to an ordering criterion depends on the number of items n, the number of clusters N in the tree-decomposition, and the width w of the tree-decomposition. If a local sorting criterion is used, the complexity of this phase is in O(N log(w)), which is in O(n log(n)) for an arbitrary graph. For the global sorting strategies, the complexity is O(n²), since an initial phase with this complexity is needed to compute the size of the neighborhood of a cluster, or its demand. The complexity of the heuristic of Chazelle [5] is in O(n²). If the graph is such that there is an algorithm to find a tree-decomposition whose width is bounded by a constant, each subproblem in our framework can be solved in O(1) time (since the number of items in each subproblem would be a constant).

The complexity of the repairing heuristic is O(|I_B̄| × n²), where I_B̄ is the set of items packed in the candidate bins B̄. This complexity is entailed by the maximum number of possible placements at any step of a packing. A possibility for reducing it is to consider a constant number of items in I_B̄, and a number of open bins such that the number of packed items is also bounded by a small constant. In this case, the complexity of this phase would also be O(1). Unfortunately, experiments showed that this restriction also dramatically weakens the effectiveness of the repairing phase. However, for huge instances, it leads to a practical algorithm, although the repairing phase may have a large cost on such instances.

To summarize these results, if the algorithms above are used (MCS to compute the tree-decomposition, the cluster-separation computed by a greedy algorithm based on a local criterion, BLC for the packing phase, and our repairing heuristic), the complexity of our approach is O(m + n log(n) + n × w³). If the width of the decomposition obtained by MCS is bounded by a constant, the complexity becomes O(m + n log(n)). In addition, if the graph is initially chordal, the complexity is O(n log(n)) (sorting the clusters), whereas the original constructive algorithms are at least in O(n³). For huge instances, one can even avoid the sorting algorithm, which leads to a linear complexity (although the quality of the solution obtained is then expected to be weak).

Let us denote by BLC-TD the overall algorithm consisting in applying algorithm 1 with BLC as the resolution method at line 5, and then repairing the final solution as described above.

4. A tabu search for computing a cluster-separation

Local search algorithms are widely acknowledged as powerful tools for providing high-quality solutions to a wide variety of combinatorial problems. In the previous section, we stated that the cluster-separation problem is the core of our resolution approach. In order to improve upon the results obtained with a greedy algorithm, we designed a tabu search for this problem. In this section, we describe this tabu search algorithm, denoted TS-TD in the following. Tabu search [11] has already been used to solve packing problems (see [12] for example), using a so-called oscillation strategy. We use this concept of oscillation by iteratively switching between construction and destruction phases.

4.1. Solution encoding

The solution is represented by a vector v of size n. Each element vi of this vector represents the value assigned to item i, where vi ∈ Di and Di is the set of possible clusters that can accommodate i. For example, according to figure 2, the domain of item 3 is D3 = {3, 4, 5}. The initialization phase generates an initial solution vector by assigning each item i whose domain size |Di| is equal to 1. Figure 3 shows a solution vector and a possible initialization of it.

Figure 3: A vector representing a solution and its initialization according to figure 2.

In figure 3, we can notice that only items i0, i4, i5 and i7 were initialized, since each of them can be assigned to only one cluster (see figure 2). The remaining items are set to −1, to be assigned to clusters later on.
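A minimal sketch of this initialization, assuming the domains are given as lists of admissible clusters per item:

```python
def initial_solution(domains):
    """domains[i]: list of clusters that can accommodate item i.
    Items with a single-cluster domain are fixed; all others start at -1."""
    return [d[0] if len(d) == 1 else -1 for d in domains]
```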

4.2. Solution space

A solution space S in a local search method is the set of possible solutions. A possible solution may be feasible or infeasible. When dealing with a problem where the set of feasible solutions is so small that it is difficult to define an appropriate neighborhood, it is better to include some infeasible solutions in the solution space; in this case, we have to be able to guarantee the feasibility of the final solution. The solution space of TS-TD is defined as the set of complete and incomplete solutions. A solution is said to be complete (resp. incomplete) when its variables are (resp. are not) all assigned. In addition, any complete solution in S must be feasible. Figure 4 shows the solution space for the graph of figure 2.

Figure 4: The solution space for the graph of figure 2. For each item i, except i0, i4, i5 and i7, whose values are fixed along the search process, we create a list Di containing the set of clusters that can accommodate i.

As can be seen in figure 4, the size of S depends on the sizes of the domains of the items. It can be calculated as follows:

|S| = ∏_{i∈I : vi=−1} (|Di| + 1)

4.3. Move

A move is the assignment of a variable i to a value in its domain Di, or to the value −1. This corresponds to assigning item i to a cluster (vi ∈ Di), a "forward move", or removing item i from a cluster (vi = −1), a "backward move". The existence of two types of moves is justified by the fact that our TS involves two phases, construction and destruction. The construction phase guides the TS towards a complete solution, while the destruction phase tries to uninstantiate some variables. Alternating these two phases plays the role of a diversification process, enabling the TS to explore new regions of the search space.

4.4. Objective function

A crucial factor in any local search procedure is the choice of the objective function f. In a bin packing context, choosing the real objective function, i.e. minimizing the number of bins, is rather pointless, since many different solutions may have the same number of bins. However, this information may be interesting when combined with other information. To meet this preference, we include in f some information about the number of items not yet assigned, and the gap value, defined as follows for a given cluster c:

GAPc = UBc × (W × H) − Σ_{i∈c} wi hi

where UBc is the number of bins needed to pack the items of cluster c, computed using any BPC2D heuristic (BLC in our case). Thus the fitness function is

f(s) = α1 |I^{−1}| + α2 Σ_{c∈C} UBc + α3 Σ_{c∈C} GAPc

where I^{−1} = {i ∈ I : vi = −1}, and α1, α2 and α3 are three real coefficients. The first term of f distinguishes packings by preferring those with a smaller number of non-assigned items. The third term tells the difference between packings that use the same number of bins (second term) with the same number of non-assigned items, preferring the smaller total GAP over the clusters.
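The fitness computation can be sketched as follows. Here `ub` is a placeholder for any BPC2D upper-bound heuristic (BLC in the paper), and the dict-of-clusters representation is an assumption; the gap of a cluster is computed over the items currently assigned to it:

```python
def fitness(v, clusters, sizes, W, H, ub, a1=1.0, a2=1.0, a3=1.0):
    """v: assignment vector (-1 = unassigned); clusters: cluster -> item list;
    sizes[i] = (w_i, h_i); ub(items) = bins needed for those items."""
    unassigned = sum(1 for x in v if x == -1)          # |I^{-1}|
    total_ub = total_gap = 0.0
    for c, members in clusters.items():
        packed = [i for i in members if v[i] == c]     # items assigned to c
        b = ub(packed)                                 # UB_c
        total_ub += b
        total_gap += b * (W * H) - sum(sizes[i][0] * sizes[i][1]
                                       for i in packed)
    return a1 * unassigned + a2 * total_ub + a3 * total_gap
```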


4.5. Neighborhood exploration

A search method determines which of the better solutions in a neighborhood a local search algorithm should move to. The two most frequently used search methods are best improvement (BI) and first improvement (FI). BI evaluates all the solutions in the neighborhood and moves to the best one. On the other hand, FI moves to the first solution that is found to be better than the current solution. In this paper, BI is used, applying the moves described in subsection 4.3. Local search methods for combinatorial optimization proceed from an initial solution by performing a sequence of local changes (moves), each improving the value of the objective function, until a local optimum is found. In our case, there is no need to recalculate the objective function from scratch when evaluating a move; only the change in the objective function that would result is needed. The value of a move thus depends on whether it is a forward or a backward move. The value of a transfer of item i from cluster c1 to cluster c2 is calculated as

Δf^i_{c1,c2} = −1 + UB_{c2∪i} + GAP_{c2∪i}                           if vi = −1,
Δf^i_{c1,c2} = UB_{c2∪i} + GAP_{c2∪i} − UB_{c1\i} − GAP_{c1\i}       otherwise.

Analogously, the value of removing item i from the current cluster c containing it is

Δf^i_c = 1 − UB_{c\i} − GAP_{c\i}.
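The incremental evaluation of a forward move can be sketched as follows, with `ub` and `gap` as placeholders for the per-cluster bound and gap computations; the list-based cluster representation is an assumption:

```python
def delta_forward(i, c1_items, c2_items, assigned, ub, gap):
    """Change in f for moving item i into cluster c2: from the unassigned
    state if assigned is False, otherwise from its current cluster c1."""
    new_c2 = c2_items + [i]
    delta = ub(new_c2) + gap(new_c2)
    if not assigned:                 # v_i was -1: one fewer unassigned item
        return -1 + delta
    old_c1 = [j for j in c1_items if j != i]
    return delta - ub(old_c1) - gap(old_c1)
```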

4.6. Diversification and Intensification Strategies

In our TS, we use a diversification strategy that fosters a wider and more complete exploration of the solution space. This strategy could quite possibly improve the performance of our approach quality-wise, or at least make it more robust. One of the main motivations behind it is that, in our context, stopping the search and restarting it in a completely new region of the search space might be too radical, and could overlook many very good solutions located relatively close to the ones presently being explored. Our diversification strategy consists of alternating the construction and destruction phases for as long as the allotted resources permit (time limit, maximal number of iterations, etc.). Figure 5 shows the behavior of our TS through the search process. We start a construction phase with an incomplete solution s. We keep running our local search until all items of s are assigned. A destruction phase is then applied to s by making some backward moves, in order to enable the next construction phase to explore new regions of the search space, and the procedure is repeated while the stopping criteria do not hold. This strategy is controlled by two parameters, the amplitude a and the frequency f. The amplitude is the maximal number of backward moves to be made during a destruction phase. The frequency is the minimal number of complete solutions reached during the search process.

Figure 5: A search trajectory in the search space. The frequency f is the number of times a complete solution is obtained. The amplitude a represents the number of items allowed to be set to −1 in a destruction phase.

Intensification in promising areas is generally performed by restarting the search from the best configuration found so far. In our case, the intensification consists in applying the repairing heuristic described above to each complete solution found during the search process. Using this heuristic, our TS tries to improve the best known solution by grafting components of good solutions.

4.7. Tabu list

A tabu list (TL) is needed to prevent cycling, which notably occurs when we attempt to instantiate the last non-assigned items. In our implementation, we use a TL consisting of a set of moves classified tabu during some iterations (tabu tenure). This tabu tenure is dynamically defined by the size of the TL, which is initially equal to the total number of possible moves plus the number of non-assigned items in the initial solution. Once the TL is full, it is resized so that its oldest half is erased. For example, the size of the TL for the example of figure 2 is equal to 14.
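A minimal sketch of such a dynamically sized tabu list, assuming a move is an (item, bin) pair; the class and method names are illustrative, not taken from our implementation.

```cpp
#include <cassert>
#include <deque>

struct Move { int item; int bin; };

class TabuList {
    std::deque<Move> moves;   // front = oldest tabu move
    std::size_t capacity;
public:
    // Capacity = number of possible moves + number of items left
    // unassigned in the initial solution.
    TabuList(std::size_t possibleMoves, std::size_t unassigned)
        : capacity(possibleMoves + unassigned) {}

    void add(const Move& m) {
        if (moves.size() >= capacity)            // list full:
            moves.erase(moves.begin(),           // erase the oldest half
                        moves.begin() + moves.size() / 2);
        moves.push_back(m);
    }

    bool isTabu(const Move& m) const {
        for (const Move& t : moves)
            if (t.item == m.item && t.bin == m.bin) return true;
        return false;
    }

    std::size_t size() const { return moves.size(); }
};
```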

5. Computational experiments

In this section, we report computational experiments on the approach proposed in this paper. The algorithms were implemented in C++ and run on an Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz. The tabu search is provided by Paradiseo [3], a white-box object-oriented framework dedicated to the flexible design of metaheuristics. In our implementation, the tabu search was given a time limit of 60 CPU seconds. The BPC-2D instances used were constructed from those of Berkey and Wang [2] and Martello and Vigo [19] dedicated to BP-2D. The benchmark contains 10 different classes of instances. Each class considers 5 values of n: 20, 40, 60, 80 and 100. For each value of n, there are ten instances, for a total of 500 instances. For each of these instances, we generated four conflict graphs of density d = {30%, 50%, 70%, 90%}, yielding a total of 2000 instances. The conflict graphs were randomly generated with the p̂-generator. Note that no results are reported for classes 2, 4 and 6, since all the methods tested led to the same results on them.
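As a stand-in for the p̂-generator (whose exact procedure is not reproduced here), the following sketch draws each potential conflict edge independently with probability d, which yields a graph of expected density d. The function name is illustrative.

```cpp
#include <cassert>
#include <random>
#include <utility>
#include <vector>

// Illustrative conflict-graph generation at density d: a plain
// Erdos-Renyi G(n, d) draw. The benchmark's actual p-hat generator
// is more involved; this only shows the interface.
std::vector<std::pair<int,int>> randomConflictGraph(int n, double d,
                                                    unsigned seed) {
    std::mt19937 gen(seed);
    std::bernoulli_distribution edge(d);
    std::vector<std::pair<int,int>> conflicts;
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j)
            if (edge(gen)) conflicts.push_back({i, j});  // items i, j clash
    return conflicts;
}
```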


5.1. Impact of the tree-decomposition

In table 1 we compare the results obtained by the following heuristics: BLC (Bottom Left Conflict), BLC-TD (Bottom Left Conflict based on Tree-Decomposition) and TS-TD (Tabu Search based on Tree-Decomposition).

Table 1: Numerical results for the instances of [2, 19] (classes 1, 3, 5, 7, 8, 9 and 10). Each line shows the average results over 50 instances. The line labeled Avg shows the average upper bound and time values over the 1800 instances. The line labeled Ttl shows the total number of optimal solutions reached by the different algorithms.

                    BLC                  BLC-TD                  TS-TD
Class  d(%)   %gap   sec.  #opt    %gap   sec.  #opt    %gap    sec.  #opt
I       30   12,36   0,02   10     8,09   0,07   15     7,27   39,75   18
I       50    3,2    0,03   13     1,26   0,08   32     0,55   20,41   38
I       70    1,22   0,03   29     0,53   0,05   38     0,33   11,55   42
I       90    0,27   0,03   44     0,24   0,02   45     0,14    3,37   48
III     30    0,57   0,04   35     0,46   0,12   40     0,46   19,15   41
III     50    0,62   0,04   21     0,27   0,09   37     0,23   14,29   43
III     70    0,65   0,05   39     0,37   0,06   44     0,13    6,03   47
III     90    0,41   0,04   39     0,23   0,03   42     0,16    6,82   44
V       30   13,24   0,04    6    10,42   0,14   12     8,25   43,01   15
V       50    4,75   0,05    9     2,29   0,12   24     1,71   28,49   29
V       70    1,32   0,05   31     0,41   0,1    42     0,41   11,26   42
V       90    0,11   0,04   37     0,08   0,02   38     0,03    1,1    40
VII     30   20,08   0,04    1    16,58   0,15    2    13,16   50,92    7
VII     50    3      0,05   16     2,21   0,12   21     1,51   33,16   27
VII     70    0,44   0,05   40     0,21   0,1    45     0,05    4,13   48
VII     90    0,09   0,04   46     0,23   0,03   48     0,02    1,13   49
VIII    30   19,39   0,05    3    16,84   0,16    3    13,96   46,51    8
VIII    50    3,74   0,05   16     1,77   0,12   29     1,18   20,09   37
VIII    70    0,39   0,05   41     0,28   0,06   43     0,03    3,85   49
VIII    90    0,31   0,04   41     0,25   0,03   43     0,13    4,55   47
IX      30    0,19   0,05   46     0      0,09   50     0       5,22   50
IX      50    0,84   0,05   33     0,12   0,08   46     0,09    6,81   47
IX      70    0,25   0,04   43     0,16   0,05   48     0,04    1,37   49
IX      90    0,04   0,04   48     0,2    0,02   45     0       0      50
X       30   16,95   0,04   29    16,02   0,11   34    14,27   21,54   36
X       50    0,88   0,04   39     0,25   0,09   46     0,25    7,59   46
X       70    1,41   0,05   26     0,56   0,08   39     0,22   13,1    45
X       90    0,11   0,04   47     0,06   0,02   48     0,03    3,32   49
Avg           2,67   0,03          2,01   0,06          1,62   10,71
Ttl                         828                 999                   1091

Due to the large size of the test-bed, we give average results for each class, i.e. each line reports an average value over 50 instances (ten instances for each value of n). Column %gap gives the average percentage gap, computed as 100(U − L)/U, where U is the value obtained by the corresponding heuristic and L is the value of a lower bound proposed by the authors in [17].

Column #opt gives the total number of instances solved to optimality, and sec. is the average CPU time in seconds required to run each of the algorithms. Concerning the different sorting strategies for BLC-TD, no strategy was better than the others on average. Consequently, the results reported are obtained by running all strategies and keeping the best result; the computing time reported is the sum of the computing times of all strategies.

The results show that applying the tree-decomposition with a simple heuristic clearly improves the results compared to BLC. BLC-TD solves 171 instances more than BLC. The average percentage deviation from the best known lower bound is also reduced by 0.66 (from 2.67 to 2.01). Moreover, the computing time does not increase much on average. For dense graphs, the computing time even decreases: this is due to the fact that the width of the tree-decomposition is far smaller for sparse compatibility graphs. When TS-TD is considered, the improvement is much more important. It decreases the %gap by 1.05% and increases the number of solved instances by 263 compared to BLC. This improvement is, however, achieved with a larger computing time, which is not surprising since TS-TD makes use of more advanced techniques while BLC is a simple greedy algorithm. For dense graphs, the difference in terms of computing time is smaller, for the same reasons as above.
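For reference, the %gap column of table 1 follows directly from the formula above (a minimal sketch; the function name is illustrative):

```cpp
#include <cassert>
#include <cmath>

// Percentage gap between a heuristic's bin count U and a lower bound L,
// as defined for table 1: 100 * (U - L) / U.
double percentGap(int upperBound, int lowerBound) {
    return 100.0 * (upperBound - lowerBound) / upperBound;
}
```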

5.2. Applying the algorithm to huge instances

When huge instances are considered, the computing time entailed by a resolution method may be very large. In table 2, we report results obtained by our approach for such graphs in order to evaluate the gain in terms of computing time. However, this time reduction is not assured without loss in terms of solution quality since, for huge instances, our repairing heuristic cannot be used anymore due to its complexity. In table 2, we arbitrarily considered two classes of instances (1 and 4), two values of n (1000 and 2000) and four values for the density d (40, 70, 80 and 90). For BLC, we report the upper bound (U) obtained and the computing time

(sec.). For BLC-TD, the column %degradation shows the percentage of loss in terms of number of bins compared to the value obtained by BLC, sec. is the total computing time, TD is the time needed to compute a tree-decomposition and Resolving the clusters is the time needed to resolve the different clusters. Note that BLC-TD is applied without repairing the final solution.

As seen in table 2, the %degradation varies depending on the density of the conflict graph. For small densities, our approach does not seem to be interesting, which can be justified by the fact that the corresponding compatibility graphs are dense: MCS may then entail a large computing time and may generate a decomposition with a treewidth close to n. However, for dense conflict graphs, BLC-TD seems to work in an efficient way, especially for densities 80% and 90%. For the instance of class 1 where n = 1000 and d = 90%, a solution quality loss of 1.85% leads to a computing time gain of 98.71%. The instances of class 4 show that our approach may dramatically

Table 2: Numerical results for 8 instances of class 1 and 8 instances of class 4 [2, 19]. For BLC-TD, the column %degradation shows the percentage of loss in terms of number of bins compared to the value obtained by BLC, sec. is the total computing time, TD is the time needed to compute a tree-decomposition and Resolving the clusters is the time needed to resolve the different clusters.

                          BLC          BLC-TD without repairing the final solution
Class    n   d(%)     U     sec.   %degradation     sec.        TD                Resolving the clusters
I     1000    40    441    30,7       41,04         95,22      82,86 (87,02%)      12,36 (12,98%)
I     1000    70    724    72,91       9,8          18,3       17,07 (93,28%)       1,23 (6,72%)
I     1000    80    803    74,05       5,6           7,82       7,22 (92,33%)       0,6 (7,67%)
I     1000    90    915    71          1,85          0,92       0,78 (84,78%)       0,14 (15,22%)
I     2000    40    884   217,4       42,30       1629       1099 (67,46%)        530 (32,54%)
I     2000    70   1388   540,28      12,53        325,8      303,69 (93,21%)     22,11 (6,79%)
I     2000    80   1645   597,72       6,6          90,51      85,93 (94,94%)      4,58 (5,06%)
I     2000    90   1812   558,75       3,31         16,56      15,69 (94,75%)      0,87 (5,25%)
IV    1000    40    403    67,58       1,48        278,43     109,13 (39,19%)    169,3 (60,81%)
IV    1000    70    700   100,56       0,0          37,54      27,64 (73,63%)      9,9 (26,37%)
IV    1000    80    811   100,84       0,0           9,54       7,97 (83,54%)      1,57 (16,46%)
IV    1000    90    900    87,68       0,0           1,46       1,24 (84,93%)      0,22 (15,07%)
IV    2000    40    813   492,75       1,72       7663,12    1524,53 (19,89%)   6138,59 (80,11%)
IV    2000    70   1394   771,05       0,0         684,16     400,54 (58,54%)    283,62 (41,46%)
IV    2000    80   1624   773,5        0,0         150,53     124,89 (82,97%)     25,64 (17,03%)
IV    2000    90   1791   703,51       0,0          26,56      23,34 (87,88%)      3,22 (12,12%)

improve the computing time without losing in terms of solution quality. For example, for the instance of class 4 where n = 2000 and d = 90%, the same solution as BLC was obtained by BLC-TD in a computing time reduced by 96.23%. Finally, note that if the graphs considered were already triangulated, only the column "Resolving the clusters" would have to be taken into account, leading to a large reduction of the computing time even for small densities.
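The percentages quoted above can be recomputed from the entries of table 2. The helper names below, and the exact formula assumed for %degradation, are illustrative conventions consistent with the reported values:

```cpp
#include <cassert>
#include <cmath>

// Computing-time gain of BLC-TD over BLC, in percent.
double timeGainPercent(double secBLC, double secBLCTD) {
    return 100.0 * (1.0 - secBLCTD / secBLC);
}

// Assumed %degradation: relative increase in bins of BLC-TD over BLC.
double degradationPercent(int binsBLC, int binsBLCTD) {
    return 100.0 * (binsBLCTD - binsBLC) / static_cast<double>(binsBLC);
}
```

For class 1 with n = 1000 and d = 90%, the table's times (71 s vs. 0,92 s) give a gain of about 98.7%; for class 4 with n = 2000 and d = 90% (703,51 s vs. 26,56 s), about 96.2%.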

6. Concluding Remarks

In this paper, we have studied a new approach based on tree-decomposition to improve the efficiency of resolution methods for the BPC. This new approach is generic and may be adapted according to the context and the needs. In this paper, the approach was exploited using approximate algorithms; nevertheless, exact methods can also be used if the computing time allowed is larger or if the instances are of small size. Solving BPC instances with graphs of small treewidth using the new approach has the advantage of dramatically improving the computing time. However, this improvement may be accompanied by a loss in terms of solution quality. Future work could explore the possibility of designing more sophisticated heuristics that combine the tree-decomposition procedure and the cluster-separation one. Another issue may be to design faster yet effective repairing heuristics. Hybridizing this approach by using exact methods to solve some subproblems is also a promising lead.

Acknowledgments

We thank the Région Nord-Pas de Calais for supporting this project. The computational experiments were executed at the French National Institute for Research in Computer Science and Control (INRIA Lille - Nord Europe).


References

[1] B. S. Baker, E. G. Coffman, and R. L. Rivest. Orthogonal packing in two dimensions. SIAM Journal on Computing, 9:846–855, 1980.

[2] J. O. Berkey and P. Y. Wang. Two-dimensional finite bin-packing algorithms. Journal of the Operational Research Society, 38:423–429, 1987.

[3] J. C. Boisson, S. Mesmoudi, L. Jourdan, and E. G. Talbi. ParadisEO-MO : une plateforme pour le développement de métaheuristiques à base de solution unique. In Roadef 2009, dixième congrès de la Société Française de Recherche Opérationnelle et d'Aide à la Décision, pages 208–209, 2009.

[4] J. Carlier, F. Clautiaux, and A. Moukrim. New reduction procedures and lower bounds for the two-dimensional bin-packing problem with fixed orientation. Computers and Operations Research, 34(8):2223–2250, 2007.

[5] B. Chazelle. The bottom-left bin-packing heuristic: an efficient implementation. IEEE Transactions on Computers, C-32:697–707, 1983.

[6] F. Clautiaux, A. Jouglet, J. Carlier, and A. Moukrim. A new constraint programming approach for the orthogonal packing problem. Computers and Operations Research, 35(3):944–959, 2008.

[7] E. G. Coffman, M. R. Garey, and D. S. Johnson. Approximation algorithms for bin packing - an updated survey. In G. Ausiello, M. Lucertini, and P. Serafini, editors, Algorithm design for computer system design, pages 49–106. Springer-Verlag, New York, 1984.

[8] L. Epstein, A. Levin, and R. van Stee. Two-dimensional packing with conflicts. Acta Informatica, 45(3):155–175, 2008.

[9] M. R. Garey and D. S. Johnson. Computers and intractability, a guide to the theory of NP-completeness. Freeman, New York, 1979.

[10] M. Gendreau, G. Laporte, and F. Semet. Heuristics and lower bounds for the bin packing problem with conflicts. Computers and Operations Research, 31:347–358, 2004.

[11] F. Glover and M. Laguna. Tabu search. Kluwer Academic Publishers, 1998.

[12] S. Hanafi and A. Fréville. An efficient tabu search approach for the 0-1 multidimensional knapsack problem. European Journal of Operational Research, 106:659–675, 1998.

[13] K. Jansen. An approximation scheme for bin packing with conflicts. Journal of Combinatorial Optimization, 3:363–377, 1999.

[14] K. Jansen and S. Ohring. Approximation algorithms for time constrained scheduling. Information and Computation, 132(2):85–108, 1997.

[15] P. Jégou, S. N. Ndiaye, and C. Terrioux. Computing and exploiting tree-decompositions for solving constraint networks. In Proceedings of the Eleventh International Conference on Principles and Practice of Constraint Programming (CP-2005), pages 777–781, 2005.

[16] D. S. Johnson, A. Demers, J. D. Ullman, M. R. Garey, and R. L. Graham. Worst case performance bounds for simple one-dimensional packing algorithms. SIAM Journal on Computing, 3:299–325, 1974.

[17] A. Khanafer, F. Clautiaux, and E. G. Talbi. New lower bounds for bin packing problems with conflicts. Technical report, Université des Sciences et Technologies de Lille, 2008.

[18] C. Lucet, F. Mendes, and A. Moukrim. An exact method for graph coloring. Computers and Operations Research, 33(8):2189–2207, 2006.

[19] S. Martello and D. Vigo. Exact solution of the two-dimensional finite bin packing problem. Management Science, 44:388–399, 1998.

[20] A. E. Fernandes Muritiba, M. Iori, E. Malaguti, and P. Toth. Algorithms for the bin packing problem with conflicts. Technical report, DEIS - University of Bologna, 2008.

[21] D. Pisinger and M. M. Sigurd. Using decomposition techniques and constraint programming for solving the two-dimensional bin packing problem. INFORMS Journal on Computing, 19:36–51, 2007.

[22] N. Robertson and P. Seymour. Graph minors. II. Algorithmic aspects of tree-width. Journal of Algorithms, 7:309–322, 1986.

[23] R. E. Tarjan and M. Yannakakis. Simple linear-time algorithms to test chordality of graphs, test acyclicity of hypergraphs, and selectively reduce acyclic hypergraphs. SIAM Journal on Computing, 13:566–579, 1984.
