Exploiting multidirectionality in coarse-grained arc consistency algorithms

Christophe Lecoutre, Frédéric Boussemart, and Fred Hemery
CRIL (Centre de Recherche en Informatique de Lens) CNRS FRE 2499
rue de l'université, SP 16, 62307 Lens cedex, France
{lecoutre,boussemart,hemery}@cril.univ-artois.fr

Abstract. Arc consistency plays a central role in solving Constraint Satisfaction Problems. This is the reason why many algorithms have been proposed to establish it. Recently, the same algorithm, called AC2001 and AC3.1, has been presented independently by its authors. This algorithm, which can be considered as a refinement of the basic algorithm AC3, has the advantage of being simple and competitive. However, it does not take constraint bidirectionality into account as AC7 does. In this paper, we address this issue and, in particular, introduce two new algorithms called AC3.2 and AC3.3 which benefit from the good properties of both AC3 and AC7. Indeed, AC3.2 and AC3.3 are as easy to implement as AC3 and take advantage of bidirectionality as AC7 does. More precisely, AC3.2 is a general algorithm which partially exploits bidirectionality whereas AC3.3 is a binary algorithm which fully exploits bidirectionality. It turns out that, when Maintaining Arc Consistency during search, MAC3.2, due to a memorization effect, is more efficient than MAC3.3 both in terms of constraint checks and cpu time. Compared to MAC2001/3.1, our experimental results show that MAC3.2 saves about 50% of constraint checks and, on average, 15% of cpu time.

1 Introduction

Arc consistency plays a central role in solving Constraint Satisfaction Problems. Indeed, the MAC algorithm [10], i.e., the algorithm which maintains arc consistency during the search for a solution, is still considered the most efficient generic approach to cope with large and hard problem instances [3]. Many algorithms have been proposed to establish arc consistency. On the one hand, coarse-grained algorithms such as AC3 [8], AC2000 [5], AC2001 [5], AC3.1 [17] and AC3d [13] have been developed, the principle of which is to apply successive revisions of arcs, i.e., of pairs (C, X) composed of a constraint C and of a variable X belonging to the set of variables of C. These algorithms are easy to implement and efficient in practice. On the other hand, fine-grained algorithms such as AC4 [9], AC6 [1] and AC7 [2] have been proposed, the principle of which is to apply successive revisions of "values", i.e., of triplets (C, X, a) composed of an arc (C, X) and of a value a belonging to the domain of X. These algorithms are more difficult to implement since it is necessary to manage heavy data structures. And even if AC6 and AC7 are quite competitive with respect to coarse-grained algorithms in the context of a preprocessing stage, this is less obvious in the context of search since maintaining these data structures can be penalizing.

Arc consistency algorithms can also be characterized by a number of desirable properties [2]. In particular, it is interesting to exploit constraint bidirectionality (called multidirectionality when constraints are not binary) in order to avoid useless constraint checks. Bidirectionality means that if a value b of the domain of a variable Xj supports (is compatible with) a value a of the domain of a variable Xi with respect to a binary constraint C defined on Xi and Xj, then a of Xi also supports b of Xj. Hence, if a constraint check C(a, b) is performed when looking for a support of a, there is no need to perform the same constraint check when looking for a support of b, provided that the constraint check has been recorded as a success or a failure (positive and negative bidirectionality exploitation). Among all the algorithms cited above, AC7 is the only one which fully takes bidirectionality into account. And, as far as we are aware, AC3d is the only coarse-grained algorithm that partially exploits bidirectionality (by using a so-called double-support domain heuristic).

In this paper, we address the issue of exploiting constraint bidirectionality in coarse-grained algorithms. First, we introduce two new algorithms, called AC3.2 and AC3.3, which can be seen as improvements of AC2001/3.1. AC3.2 is a general algorithm, i.e., suitable for both binary and non-binary problems, which partially exploits positive bidirectionality, whereas AC3.3 is a binary algorithm, i.e., only adapted to binary problems, which fully exploits positive bidirectionality. In both cases, integrating positive bidirectionality exploitation only requires a slight additional data structure.
Next, we show that AC2001/3.1, AC3.2 and AC3.3 can all benefit from negative bidirectionality by concentrating the search for a support on so-called candidates [4]. As a result, AC3.2 and AC3.3 benefit from the good properties of both AC3 and AC7. Indeed, AC3.2 and AC3.3 are as easy to implement as AC3 and take advantage of bidirectionality as AC7 does (although AC3.3 is the only coarse-grained algorithm which fully takes bidirectionality into account). Our experiments show that, when arc consistency is used as a preprocessing step, AC3.3 seems to be the most efficient algorithm. Compared to AC2001/3.1, AC3.3 saves about 25% of constraint checks and, on average, 15% of cpu time. However, it turns out that, when Maintaining Arc Consistency during search, MAC3.2, due to a memorization effect, is more efficient than MAC3.3 both in terms of constraint checks and cpu time. Compared to MAC2001/3.1, our experimental results show that MAC3.2 saves about 50% of constraint checks and, on average, 15% of cpu time.

2 Preliminaries

In this section, we briefly introduce some notations and definitions used hereafter.

Definition 1. A constraint network is a pair (X, C) where:
– X = {X1, ..., Xn} is a finite set of n variables such that each variable Xi has an associated domain dom(Xi) denoting the set of values allowed for Xi,
– C = {C1, ..., Cm} is a finite set of m constraints such that each constraint Cj has an associated relation rel(Cj) denoting the set of tuples allowed for the variables vars(Cj) involved in the constraint Cj.

Without loss of generality, it is possible to assume that any set of variables vars(C) associated with a constraint C is ordered. Then, we can get the position pos(X, C) of a variable X in vars(C) and the ith variable var(i, C) in vars(C). We shall say that a constraint C involves (or binds) a variable X if and only if X belongs to vars(C). The arity of a constraint C is the number of variables involved in C, i.e., the number of variables in vars(C). A Constraint Satisfaction Problem (CSP) is the task of finding one (or more) solution for a constraint network. A solution is an assignment of values to all the variables such that all the constraints are satisfied. A solution guarantees the existence of a support in all constraints.

Definition 2. Let C be a k-ary constraint. A k-tuple t is a list of k values indexed from 1 to k = length(t) and denoted here t[1], ..., t[k]. A k-tuple t is:
– valid wrt C iff ∀i ∈ 1..k, t[i] ∈ dom(var(i, C)),
– allowed by C iff t ∈ rel(C),
– a (current) support in C iff it is valid and allowed.

A tuple t will be said to be a support of (X, a) in C when t is a support in C such that t[pos(X, C)] = a. Determining if a tuple is valid is called a validity check and determining if a tuple is allowed is called a constraint check. It is also important to note that, assuming a total order on domains, tuples can be ordered using a lexicographic order ≼. To solve a CSP, a depth-first search algorithm with backtracking can be applied, where at each step of the search, a variable assignment is performed followed by a filtering process called constraint propagation. Usually, constraint propagation removes some values which cannot occur in any solution. Modifying the domains of a given problem in order to make it arc consistent involves using constraint checks and also, for some algorithms, validity checks. Constraint checks are required to find (new) supports whereas validity checks are used to determine if (old) supports are still valid.
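To make Definition 2 concrete, here is a small illustrative sketch of validity checks, constraint checks and supports. The function and variable names are ours, not the paper's; a constraint is simply modeled by its ordered list of variables and its set of allowed tuples:

```python
# Illustrative sketch of Definition 2 (all names are ours, not the paper's).
# A constraint binds an ordered list of variables `vars_c` and keeps its
# allowed tuples in the set `rel_c`; `dom` maps variables to current domains.

def is_valid(t, vars_c, dom):
    """Validity check: every t[i] is in the current domain of var(i, C)."""
    return all(t[i] in dom[x] for i, x in enumerate(vars_c))

def is_allowed(t, rel_c):
    """Constraint check: t belongs to rel(C)."""
    return t in rel_c

def is_support(t, vars_c, dom, rel_c):
    """A (current) support is a tuple that is both valid and allowed."""
    return is_valid(t, vars_c, dom) and is_allowed(t, rel_c)

# Example: C binds (X, Y) with the relation X < Y.
dom = {"X": {1, 2, 3}, "Y": {2, 3}}
rel_c = {(a, b) for a in range(1, 4) for b in range(1, 4) if a < b}
print(is_support((1, 2), ["X", "Y"], dom, rel_c))  # True: valid and allowed
print(is_support((1, 1), ["X", "Y"], dom, rel_c))  # False: not allowed
```

The tuple (1, 2) is a support of (X, 1); it would stop being one (while remaining allowed) if 2 were later removed from dom(Y), which is exactly what a validity check detects.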

Definition 3. Let P be a CSP and (X, a) be a pair composed of a variable X of P and of a value a ∈ dom(X). (X, a) is said to be consistent wrt a constraint C of P iff either X ∉ vars(C) or there exists a support of (X, a) in C. (X, a) is said to be consistent wrt P iff (X, a) is consistent wrt all constraints of P. P is said to be arc consistent iff all pairs (X, a) are consistent wrt P.

3 Properties of arc consistency algorithms

In order to avoid useless constraint checks, arc consistency algorithms can exploit different properties. In this section, we present an adaptation of the desirable properties defined in [2].

Algorithm 1 AC3.X
1: Q ← {(C, X) | C ∈ C ∧ X ∈ vars(C)}
2: init3.X()
3: while Q ≠ ∅ do
4:   pick (C, X) in Q
5:   if revise3.X(C, X) then
6:     if dom(X) = ∅ then return FAILURE
7:     else Q ← Q ∪ {(C', X') | X ∈ vars(C') ∧ X' ∈ vars(C') ∧ X' ≠ X ∧ C' ≠ C}
8:   end if
9: end while
10: return SUCCESS

In any arc consistency algorithm, a constraint check C(t) is always performed with respect to a triplet (C, X, a) where C is a k-ary constraint, X a variable in vars(C), a a value in dom(X) and t a k-tuple. The following properties should ideally be verified by any arc consistency algorithm, given a triplet (C, X, a) and a tuple t.

– positive unidirectionality: C(t) is not checked if there exists a support t' of (X, a) in C already successfully checked wrt (C, X, a).
– negative unidirectionality: C(t) is not checked if it has already been unsuccessfully checked wrt (C, X, a).
– positive multidirectionality: C(t) is not checked if there exists a support t' of (X, a) in C already successfully checked wrt a triplet (C, Y, b) with Y ≠ X.
– negative multidirectionality: C(t) is not checked if C(t) has already been unsuccessfully checked wrt a triplet (C, Y, b) with Y ≠ X.

Roughly speaking, the above properties correspond to properties 1, 3a, 2 and 3b of [2].
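The effect of recording checks can be pictured with a tiny sketch (our own, hypothetical example, not the paper's code): a binary check C(a, b) is memoized once, so the symmetric question asked later, when seeking a support for b, is answered from the record instead of a new check:

```python
# Sketch of bidirectionality exploitation (illustrative, ours): the outcome
# of each binary constraint check is recorded once and reused, whether it
# was a success (positive) or a failure (negative).

checks_performed = 0
cache = {}  # (a, b) -> bool, filled lazily

def check(a, b):
    """Constraint check for a toy constraint 'a + b is even', memoized."""
    global checks_performed
    if (a, b) not in cache:
        checks_performed += 1       # a real check is paid only once
        cache[(a, b)] = (a + b) % 2 == 0
    return cache[(a, b)]

# Looking for a support of a = 2 performs the check C(2, 4)...
assert check(2, 4)
# ...so the same check asked while seeking a support of b = 4 is free.
assert check(2, 4)
print(checks_performed)  # 1
```

This is the intuition only; the algorithms below avoid storing all checks and instead keep just enough state (last supports, extern supports, counters) to obtain the same savings cheaply.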

4 AC3.X algorithms

In this section, we present different algorithms which are based on AC3, and are consequently coarse-grained algorithms, denoted AC3.X. First, we introduce the main procedure of all these algorithms and recall the AC2001/3.1 algorithm. Next, we propose two original algorithms, called AC3.2 and AC3.3, which can be seen as improvements of AC2001/3.1. Note that the description of all the algorithms below (except AC3.3) is given in the general case of non-binary problems.

4.1 Main procedure of AC3.X algorithms

The structure of the AC3.X algorithms is identical to the one of the AC3 algorithm [8]. All these algorithms use a propagation set, denoted Q here, in order to hold all the arcs that need to be revised; the objective of the revision of an arc (C, X) being to remove the values of dom(X) that have become inconsistent with respect to C. Although we present, for the sake of simplicity, an arc-oriented propagation scheme, our implementation integrates a variable-oriented one since it turns out to be more efficient when using so-called revision ordering heuristics [6].

Algorithm 2 init3.1()
∀C ∈ C, ∀X ∈ vars(C), ∀a ∈ dom(X) : last[C, X, a] ← nil

Algorithm 3 revise3.1(in C, X) : boolean
1: nbElements ← |dom(X)|
2: for each a ∈ dom(X) do
3:   if last[C, X, a] is valid then continue
4:   seekNextSupport(C, X, a, last[C, X, a])
5:   if last[C, X, a] = nil then remove a from dom(X)
6: end for
7: return nbElements ≠ |dom(X)|

Here is a quick description of the main procedure described by Algorithm 1. Initially, all arcs (C, X) are put in the set Q in order to be revised, and a call to init3.X allows the initialization of the AC3.X-specific data structures. Then, arcs are revised in turn, and when a revision is effective (at least one value has been removed), the set Q has to be updated.
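The propagation loop of Algorithm 1 can be sketched in plain Python as follows, restricted to binary constraints for brevity. All names are ours, not the paper's; the revise function is passed in, mirroring the revise3.X family:

```python
from collections import deque

# Sketch of the AC3.X propagation loop (cf. Algorithm 1), binary case only.
# `constraints` maps a constraint name to its ordered pair of variables, and
# `revise(c, x)` must return True when it removed a value from dom[x].

def ac3x(dom, constraints, revise):
    q = deque((c, x) for c, scope in constraints.items() for x in scope)
    while q:
        c, x = q.popleft()
        if revise(c, x):
            if not dom[x]:
                return False  # dom(X) wiped out: FAILURE
            # re-enqueue each arc (c2, y) with x in vars(c2), y != x, c2 != c
            q.extend((c2, y)
                     for c2, scope in constraints.items()
                     if x in scope and c2 != c
                     for y in scope if y != x)
    return True  # SUCCESS: a propagation fixpoint has been reached

# Usage with a naive AC3-style revise for X < Y and Y < Z:
dom = {"X": {1, 2, 3}, "Y": {1, 2, 3}, "Z": {1, 2, 3}}
cons = {"c1": ("X", "Y"), "c2": ("Y", "Z")}

def revise(c, x):
    y = next(v for v in cons[c] if v != x)
    def supported(a):  # does a admit a compatible value in dom(y)?
        if x == cons[c][0]:
            return any(a < b for b in dom[y])
        return any(b < a for b in dom[y])
    removed = {a for a in dom[x] if not supported(a)}
    dom[x] -= removed
    return bool(removed)

ac3x(dom, cons, revise)
print(dom)  # {'X': {1}, 'Y': {2}, 'Z': {3}}
```

Swapping the `revise` function while keeping this loop unchanged is exactly what distinguishes the AC3.X variants discussed next.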

4.2 AC2001/3.1

The same algorithm, seen as an extension of AC3, has been proposed independently by its authors and called AC2001 by [5] and AC3.1 by [17]. There is a simple but important difference between AC3 and AC2001/3.1. Indeed, when a support of a value has to be found, AC3 starts the search from scratch whereas AC2001/3.1 starts the search from a resumption point which corresponds to the last support found for this value. More precisely, AC2001/3.1 verifies positive and negative unidirectionality. This less naive approach requires the introduction of a data structure, denoted last. This data structure is an array used to store the last support of any triplet (C, X, a) composed of an arc (C, X) and of a value a belonging to dom(X). Initially, the structure last must be initialized to nil (see Algorithm 2). The revision (see Algorithm 3) involves testing for any value the validity of the last support (nil is not valid) and potentially looking for a new support. Note that seekNextSupport (see Algorithm 4) modifies its parameter t with either the smallest support of (X, a) in C strictly greater than it, or with nil (remember that a constraint check is denoted by C(t)). It calls the function seekNextTuple which modifies its parameter t with either the smallest valid tuple t' in C such that t ≼ t' and t'[pos(X, C)] = a, or with nil.

Algorithm 4 seekNextSupport(in C, X, a, in/out t)
1: while t ≠ nil do
2:   seekNextTuple(C, X, a, t)
3:   if C(t) then break
4: end while
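A minimal Python rendering of this last-based revision (cf. Algorithms 2-4), restricted to a single binary arc, may help; all names are ours and the code is only a sketch of the idea, not the paper's implementation:

```python
# Sketch of AC2001/3.1's revision of one binary arc (C, X): last[a] stores
# the last support found for (X, a) in dom(Y), and the search for a new
# support resumes strictly after it instead of restarting from scratch.
# (Illustrative code, ours; Algorithms 2-4 of the paper are the reference.)

def revise2001(x_dom, y_dom, pred, last):
    removed = False
    for a in sorted(x_dom):
        b = last.get(a)
        if b is not None and b in y_dom:
            continue  # the last support is still valid: no check needed
        # resumption point: only values strictly greater than b are tried
        support = next((v for v in sorted(y_dom)
                        if (b is None or v > b) and pred(a, v)), None)
        if support is None:
            x_dom.discard(a)
            removed = True
        else:
            last[a] = support
    return removed

# Usage: constraint X < Y after dom(Y) has shrunk to {2}.
x_dom, y_dom = {1, 2, 3}, {2}
last = {}
revise2001(x_dom, y_dom, lambda a, b: a < b, last)
print(x_dom, last)  # {1} {1: 2}
```

The resumption point is what yields the optimal worst-case time complexity: over all revisions, the supports of a given value are scanned at most once.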

Algorithm 5 init3.2()
∀C ∈ C, ∀X ∈ vars(C), ∀a ∈ dom(X) : last[C, X, a] ← nil ; lastE[C, X, a] ← nil

Algorithm 6 revise3.2(in C, X) : boolean
1: nbElements ← |dom(X)|
2: for each a ∈ dom(X) do
3:   if lastE[C, X, a] is valid then continue
4:   if last[C, X, a] is valid then continue
5:   seekNextSupport(C, X, a, last[C, X, a])
6:   if last[C, X, a] = nil then remove a from dom(X)
7:   else
8:     for each Y ∈ vars(C) | Y ≠ X do
9:       b ← last[C, X, a][pos(Y, C)]
10:      lastE[C, Y, b] ← last[C, X, a]
11:    end for
12: end for
13: return nbElements ≠ |dom(X)|

AC2001/3.1 has a space complexity of O(md) and an optimal worst-case time complexity of O(md²) [5, 17] (even if we consider non-binary constraints, provided that we arbitrarily bound constraint arity).

4.3 AC3.2

To improve the behaviour of the AC2001/3.1 algorithm while keeping the simplicity of the algorithm, it is possible to partially benefit from positive multidirectionality. In particular, when a support is found, it can be used not only for the value for which it was looked for but also for all values occurring in the support. To avoid dealing with heavy data structures, one simply records for any value the last extern support, i.e., a support that corresponds to the last support of another value. For instance, let us consider a binary constraint C such that vars(C) = {Xi, Xj}. If a support (a, b) of (Xi, a) is found in C (when looking for a support of (Xi, a)), then it is also recorded as being the last extern support of (Xj, b) in C. If, later, a support (c, b) of (Xi, c) is found in C, then the last extern support of (Xj, b) in C becomes (c, b). This new algorithm requires the introduction of an additional data structure, denoted lastE. This data structure is an array used to store the last extern support of any triplet (C, X, a). Initially, the structure lastE must be initialized to nil (see Algorithm 5). The revision (see Algorithm 6) involves testing for any value the validity of the last extern support (line 3) and, if it fails, the validity of the last support (line 4). If neither is valid, then a search for a new support is started, and if it succeeds, some extern supports are updated (lines 8 to 11). AC3.2 keeps the time and space complexities of AC2001/3.1. Indeed, in the worst case, the algorithm simply performs one extra test (line 3) and a bounded number of extra
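The extern-support mechanism can be sketched for one binary arc as follows. This is our own illustrative code, not the paper's Algorithms 5-6; in particular, since the constraint is binary, a support of (X, a) is identified here by its Y-side value alone:

```python
# Sketch of AC3.2's extern supports for a binary constraint C on (X, Y):
# when a support b is found for (X, a), a is also recorded as the last
# extern support of (Y, b), so a later revision of (C, Y) may skip b's
# search entirely. (Illustrative names, ours.)

def revise32(x_dom, y_dom, pred, last, last_e_x, last_e_y):
    removed = False
    for a in sorted(x_dom):
        t = last_e_x.get(a)
        if t is not None and t in y_dom:
            continue  # cf. line 3: the extern support is still valid
        t = last.get(a)
        if t is not None and t in y_dom:
            continue  # cf. line 4: the own last support is still valid
        support = next((b for b in sorted(y_dom) if pred(a, b)), None)
        if support is None:
            x_dom.discard(a)
            removed = True
        else:
            last[a] = support
            last_e_y[support] = a  # cf. lines 8-11: seed Y's extern support
    return removed

# Usage: X = Y over {1, 2}; revising (C, X) seeds extern supports for Y.
x_dom, y_dom = {1, 2}, {1, 2}
last, last_e_y = {}, {}
revise32(x_dom, y_dom, lambda a, b: a == b, last, {}, last_e_y)
print(last, last_e_y)  # {1: 1, 2: 2} {1: 1, 2: 2}
```

A subsequent revision of (C, Y) would find both extern supports valid and perform no constraint check at all, which is the whole point of the structure.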

Algorithm 7 init3.3()
∀C ∈ C, ∀X ∈ vars(C), ∀a ∈ dom(X) : last[C, X, a] ← nil ; cpt[C, X, a] ← 0

Algorithm 8 revise3.3(in Ci,j, Xi) : boolean
1: nbElements ← |dom(Xi)|
2: for each a ∈ dom(Xi) do
3:   if cpt[Ci,j, Xi, a] > 0 then continue
4:   if last[Ci,j, Xi, a] is valid then continue
5:   if last[Ci,j, Xi, a] ≠ nil then
6:     cpt[Ci,j, Xj, last[Ci,j, Xi, a]] −−
7:   seekNextSupport(Ci,j, Xi, a, last[Ci,j, Xi, a])
8:   if last[Ci,j, Xi, a] = nil then
9:     remove a from dom(Xi)
10:    for each Ci,k ∈ C | k ≠ j do
11:      if last[Ci,k, Xi, a] ≠ nil then
12:        cpt[Ci,k, Xk, last[Ci,k, Xi, a]] −−
13:  else cpt[Ci,j, Xj, last[Ci,j, Xi, a]] ++
14: end for
15: return nbElements ≠ |dom(Xi)|

assignments (lines 8 to 11) for each value revision. And we only need an additional array to store the last extern supports.

4.4 AC3.3

AC3.2 only integrates a partial exploitation of positive multidirectionality. For binary problems, it is possible to conceive a simple algorithm which fully exploits positive bidirectionality. This algorithm, which is called AC3.3, simply records for any value the number of its extern supports. An array, denoted cpt, is then introduced in order to store the number of extern supports of any triplet (C, X, a). After initializing these counters to 0 (see Algorithm 7), we have to carefully update them (see Algorithm 8) when a support is lost (line 6), a support is found (line 13) or a value is removed (lines 10 to 12). For the sake of simplicity, last[Ci,j, Xi, a] will be considered as equivalent to last[Ci,j, Xi, a][pos(Xj, Ci,j)]. For instance, if (a, b) is the last support of (Xi, a) in Ci,j, then last[Ci,j, Xi, a] will designate the value b instead of the pair (a, b). The correctness of AC3.3 is given by the following proposition (the proof of which is omitted here). Like AC3.2, AC3.3 keeps the time and space complexities of AC2001/3.1.
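The counting idea can be pictured with a small sketch (ours, not the paper's code): on the Y side of a binary constraint, each value accumulates one unit per value of X whose last support points to it, so a positive counter means a free "support exists" answer:

```python
# Sketch of AC3.3's counters for a binary constraint C on (X, Y):
# cpt[b] counts how many values of X currently have b as their last support.
# While cpt is positive for a value, its revision needs no check at all.
# (Illustrative names, ours; Algorithms 7-8 of the paper are the reference.)

def seed_counters(x_dom, y_dom, pred, last, cpt):
    """Find a first support for every a in dom(X), counted on the Y side."""
    for a in sorted(x_dom):
        b = next((v for v in sorted(y_dom) if pred(a, v)), None)
        if b is not None:
            last[a] = b
            cpt[b] = cpt.get(b, 0) + 1   # (Y, b) gains one extern support

x_dom, y_dom = {1, 2, 3}, {1, 2}
last, cpt = {}, {}
seed_counters(x_dom, y_dom, lambda a, b: a >= b, last, cpt)
print(cpt)  # {1: 3}: every a >= 1, so value 1 of Y collects all supports
```

Keeping the counters exact under support loss and value removal is precisely what lines 6, 10-12 and 13 of Algorithm 8 do, and what Proposition 1 below asserts.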

Proposition 1. The following invariant of the main loop of Algorithm 8 holds: ∀Ci,j ∈ C, ∀Xi ∈ vars(Ci,j), ∀a ∈ dom(Xi), cpt[Ci,j, Xi, a] gives exactly the number of extern supports of (Xi, a) in Ci,j.

Algorithm 9 seekCandidate(in C, X, a, in/out t, in frontier) : int
1: for k from frontier to length(t) do
2:   if k = pos(X, C) then continue
3:   if last[C, var(k, C), t[k]] = nil then continue
4:   t' ← last[C, var(k, C), t[k]]
5:   s ← 1
6:   while s ≤ length(t) ∧ t[s] = t'[s] do s++
7:   if s = length(t) + 1 then return SUPPORT
8:   if t[s] > t'[s] then continue
9:   if s < k then k' ← seekNextTuple(C, X, a, t, k)
10:  else
11:    k' ← copy(C, t, t', s, pos(X, C))
12:    if k' = length(t) + 1 then return SUPPORT
13:    if k' = pos(X, C) ∧ t[k'] > t'[k'] then
14:      reinitTupleAfter(C, X, a, t, k')
15:    else k' ← seekNextTuple(C, X, a, t, k')
16:  end if
17:  if k' = −1 then return NOTHING
18:  else if k' − 1 < k then k ← k' − 1
19: end for
20: return CANDIDATE

5 Negative multidirectionality exploitation

In the previous section, we have focused our attention on positive multidirectionality. In this one, we show that AC2001/3.1, AC3.2 and AC3.3 can all benefit from negative multidirectionality. New algorithms, denoted AC3.1*, AC3.2* and AC3.3*, are then obtained by replacing the call to the "standard" seekNextSupport function by a call to the seekNextSupport function described below. The principle is to concentrate the search for a support on so-called candidates [4]. A candidate is a tuple which has never been checked. Note that the presentation is quite technical as it is given for non-binary constraints.

First, let us consider a function seekCandidate such that a call of the form seekCandidate(C, X, a, t, frontier) computes the smallest candidate t' valid wrt C such that t ≼ t' and t'[pos(X, C)] = a. Note that frontier is only given for optimization, as it indicates that the first frontier values in t have been verified to be a possible prefix for a candidate. This function updates t with t' and returns one value among NOTHING, SUPPORT and CANDIDATE, which respectively indicate that there are no more candidates, that the updated argument t is a support, or that it is simply a candidate. The difference between this function, described by Algorithm 9, and that of [4] is due to the fact that it can be called by an algorithm, such as AC2001/3.1 or AC3.2, which does not (fully) exploit positive multidirectionality. This is the reason why supports can be found when looking for candidates. Three auxiliary procedures are called by seekCandidate:

– seekNextTuple(C, X, a, t, k) computes the smallest valid tuple t' in C such that t ≼ t', t'[pos(X, C)] = a and ∃k' ≤ k | t[k'] ≠ t'[k']. This function, similar to the one described in [4], updates t with t' and returns the smallest k' | t[k'] ≠ t'[k'], or −1 if no such tuple exists.
– reinitTupleAfter(C, X, a, t, k) computes the smallest valid tuple t' in C such that t ≼ t', t'[pos(X, C)] = a and ∀i ∈ 1..k, t'[i] = t[i]. This procedure updates t with t'.
– copy(C, t, t', start, pivot) (see Algorithm 10) copies elements of t' into t from index start until either an incompatibility is found at position pivot (line 3), an invalid value is found (line 5), or the end of the tuple is reached (line 7).

Algorithm 10 copy(in C, in/out t, in t', start, pivot) : int
1: for i from start to length(t) do
2:   if t[i] = t'[i] then continue
3:   if i = pivot then return i
4:   t[i] ← t'[i]
5:   if t[i] ∉ dom(var(i, C)) then return i
6: end for
7: return length(t) + 1

Due to the lack of space, we focus our analysis of this algorithm on its original parts and refer the reader, for a complementary description, to [4]. First, remark that t' = last[C, var(k, C), t[k]] can be equal to t and then be a candidate supporting (X, a) (line 7), since the tuple t is valid¹. t' can also represent a support even if it is distinct from t, provided that (X, a) is supported by t' and all values of t' are still valid (line 12). When (X, a) is not supported by t' or when there exists an invalid value in t', the copy is stopped at index k' and we have either to reinitialize the values after k' or to seek the next tuple with a different prefix of size k'. Now that seekCandidate has been described, we can present the function seekNextSupport that has to be called to exploit negative multidirectionality (and also a limited form of positive multidirectionality if it is called by AC2001/3.1 or AC3.2). This function (see Algorithm 11) is similar to the one in [4] but takes into account candidates detected as supports by seekCandidate.
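The lexicographic search space that seekNextTuple walks through can be pictured with a deliberately coarse sketch (ours; the paper's version additionally skips whole prefixes and never re-enumerates from the start):

```python
from itertools import product

# Simplified picture of the search behind seekNextSupport: enumerate the
# valid tuples of C in lexicographic order with position pos_x frozen to a,
# and return the first allowed one strictly greater than a resumption point.
# (Unoptimized illustration, ours.)

def valid_tuples(domains, pos_x, a):
    doms = [sorted(d) if i != pos_x else [a] for i, d in enumerate(domains)]
    yield from product(*doms)  # product of sorted lists is lex-ordered

def seek_next_support(domains, pos_x, a, allowed, after=None):
    """Smallest support t with t[pos_x] = a and t strictly greater than `after`."""
    for t in valid_tuples(domains, pos_x, a):
        if after is not None and t <= after:
            continue
        if t in allowed:  # the constraint check C(t)
            return t
    return None

# Usage: ternary constraint on (X, Y, Z), looking for supports of (Y, 2).
domains = [{1, 2}, {1, 2}, {1, 2}]
allowed = {(1, 2, 1), (2, 2, 2)}
first = seek_next_support(domains, 1, 2, allowed)
print(first)                                             # (1, 2, 1)
print(seek_next_support(domains, 1, 2, allowed, first))  # (2, 2, 2)
```

The candidate machinery of Algorithms 9-11 refines this enumeration: tuples that some last pointer proves already checked are skipped without paying the constraint check.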

6 Experiments

To prove the practical interest of the algorithms introduced in this paper, we have implemented them in Java [7] and performed some experiments (run on a PC Pentium IV 2.4GHz, 512MB, under Linux) with respect to random, academic and real-world problems. Performances have been measured in terms of the number of constraint checks (#ccks), the number of validity checks (#vcks) and the cpu time in seconds (cpu). All coarse-grained algorithms have been implemented using a variable-oriented propagation scheme and the revision ordering heuristic dom_v which orders the variables in the

¹ Initially, t is valid, and after each turn of the main loop, either t is not modified or t is updated by a call to seekNextTuple or reinitTupleAfter, which both only yield valid tuples.

Algorithm 11 seekNextSupport(in C, X, a, in/out t) : boolean
1: if t = nil then reinitTupleAfter(C, X, a, t, 0)
2: else seekNextTuple(C, X, a, t, length(t))
3: result ← seekCandidate(C, X, a, t, 1)
4: if result = NOTHING then return false
5: if result = SUPPORT then return true
6: while true do
7:   if C(t) then return true
8:   k ← seekNextTuple(C, X, a, t, length(t))
9:   if k = −1 then return false
10:  result ← seekCandidate(C, X, a, t, k)
11:  if result = NOTHING then return false
12:  if result = SUPPORT then return true
13: end while

propagation set by increasing current size of their domains (more details can be found in [6]). On the other hand, our implementation of AC7 integrates the two "standard" revision ordering heuristics lifo and fifo.

6.1 Stand-alone arc consistency

First, we have considered stand-alone arc consistency, i.e., the task of making a constraint satisfaction problem arc consistent. The first series of experiments that we have run corresponds to some random problems. In this paper, a class of random CSP instances will be characterized by a 5-tuple (n, d, m, k, t) where n is the number of variables, d the uniform domain size, m the number of k-ary constraints and t is either the number of not allowed tuples or the probability that a given tuple is not allowed.

          AC3        AC3.1      AC3.1*   AC3.2    AC3.2*   AC3.3    AC3.3*   AC7
P1 #ccks  99,968     99,968     97,967   94,012   93,984   94,012   93,984   93,994
   cpu    0.064      0.069      0.072    0.072    0.078    0.067    0.072    0.092
P2 #ccks  148,029    74,539     61,641   63,540   56,645   62,935   56,437   272,443
   cpu    0.087      0.048      0.046    0.044    0.043    0.045    0.045    0.266
P3 #ccks  2,351,578  587,505    504,941  478,135  446,188  470,177  442,867  506,340
   cpu    1.375      0.384      0.375    0.342    0.347    0.319    0.327    0.457
P4 #ccks  4,202,630  1,033,014  934,082  857,789  831,040  844,334  824,805  794,853
   cpu    2.490      0.701      0.704    0.636    0.661    0.627    0.652    0.614

Table 1. Stand-alone arc consistency on random instances

We present the results, given in Table 1, for the random binary instances studied in [2, 5, 17]. More precisely, 4 classes, denoted here P1, P2, P3 and P4, have been experimented with. P1 = (150, 50, 500, 2, 1250) and P2 = (150, 50, 500, 2, 2350) correspond to classes of under-constrained and over-constrained instances, whereas P3 = (150, 50, 500, 2, 2296) and P4 = (50, 50, 1225, 2, 2188) correspond to classes of instances at the phase transition of arc consistency for sparse problems and for dense problems, respectively. For each class, 50 instances have been generated and the means of cpu time, constraint checks and validity checks have been computed. While all algorithms have close performances with respect to P1 and P2, one can notice that AC3.2, and especially AC3.3, clearly outperform AC3.1 with respect to P3 and P4. On the other hand, the algorithms that exploit negative bidirectionality slightly reduce the number of constraint checks but, due to the overhead, not the cpu time.

Next, we have tested real-world instances, taken from the FullRLFAP archive², which contains instances of radio link frequency assignment problems. For stand-alone arc consistency, we present the results, in Table 2, for two instances, respectively denoted SCEN#08 and SCEN#11, studied in [2, 13, 17].

               AC3      AC3.1    AC3.1*   AC3.2    AC3.2*   AC3.3    AC3.3*   AC7
SCEN#08 #ccks  46,294   42,223   35,079   39,795   34,223   39,713   34,215   866,382
        cpu    0.021    0.023    0.027    0.024    0.028    0.029    0.032    0.516
SCEN#11 #ccks  971,893  971,893  841,225  671,664  638,932  671,664  638,932  638,448
        cpu    0.226    0.247    0.272    0.225    0.243    0.206    0.232    0.376

Table 2. Stand-alone arc consistency on RLFAP instances

1 +1

) ( =

=

= )

1

1 1

( )

( =

AC3 AC3.1 AC3.1* AC3.2 AC3.2* AC3.3 AC3.3* AC7 #ccks 137:330M 5:970M 3:980M 3:980M 3:960M 3:980M 3:960M 2:029M d = 200 cpu 22:799 1:259 1:417 0:951 1:055 1:009 1:089 0:802 n = 100 #ccks 459:011M 13:456M 8:970M 8:971M 8:926M 8:971M 8:926M 4:559M d = 300 cpu 76:406 2:705 3:010 2:010 2:237 2:136 2:295 1:583 Table 3. Stand alone arc consistency on Domino instances n

= 100

33%

of constraint On Domino instances, AC3.2 and AC3.3 allow saving about checks and about of cpu time compared to AC3.1. AC3.2 is the fastest coarse-

20%

2

We thank the Centre d’Electronique de l’Armement (France).

MAC3 MAC3.1 MAC3.1* MAC3.2 MAC3.2* MAC3.3 MAC3.3* MAC7 #ccks 678; 547 441; 011 379; 966 212; 832 203; 995 361; 994 354; 709 522; 898 A #vcks 0 255; 119 255; 119 517; 056 517; 056 192; 606 192; 606 0 cpu 0:422 0:411 0:406 0:318 0:327 0:460 0:487 0:923 #ccks 413; 987 279; 601 189; 628 145; 852 127; 128 426; 788 B #vcks 0 112; 551 112; 551 222; 671 222; 671 0 cpu 0:250 0:225 0:209 0:167 0:171 0:660 Table 4. Maintaining arc consistency on random instances

MACs 3.1 MACs 3.1* MACs 3.2 MACs 3.2* MACs 3.3 MACs 3.3* #ccks 374; 190 318; 605 203; 989 193; 355 311; 094 303; 765 A #vcks 356; 732 356; 732 540; 728 540; 728 263; 548 263; 548 cpu 0:476 0:481 0:365 0:373 0:539 0:553 #ccks 218; 929 142; 985 134; 792 110; 828 B #vcks 167; 684 167; 684 241; 715 241; 715 cpu 0:237 0:225 0:183 0:185 Table 5. Maintaining arc consistency on random instances

grained algorithm with respect to this problem since AC3.2 seems to fully benefit from positive bidirectionality, and since its overhead is smaller than the one of AC3.3. 6.2 Maintaining arc consistency during search As it appears that one of the most efficient complete search algorithms is the algorithm which Maintains Arc Consistency during the search of a solution [10, 3], we have implemented all MAC versions of previous algorithms and experimented them. All our MAC algorithms integrate the dom/futdeg or DD [11] variable ordering heuristic. Two classes of MAC algorithms have been developed for AC3.1, AC3.2 and AC3.3. Algorithms of the former class do not require any additional memory storage to manage the data structures, and hence, have a O md space-complexity. As a result, some constraint checks are sacrificed since it is necessary to reinitialize the data structures last and cpt when backtracking. Algorithms of the latter class requires an additional memory storage to maintain the data structures, and hence, have a O md2 space-complexity. We shall denote algorithms of this class MACs . On the other hand, we have observed that it is worthwhile to leave unchanged the specific data structure lastE of AC3.2 while backtracking, having the benefit of a socalled memorization effect. It means that a (extern) support found at a given depth of the search has the opportunity to be still valid at a weaker depth of the search (after backtracking). The performances of all algorithms have been compared with respect to three distinct problems. First, two classes of random instances from model RD of [16] denoted A and B have been experimented. A= ; ; ; ; = and B= ; ; ; ; = correspond to classes of instances at the phase transition of search for dense = ) with low tightness ( = ) and sparse binary problems (

( )

(

155 256)

150 300 = 50%

)

(25 15 150 2 90 256) (50 5 80 3 90 256 = 35%

80 19600 0 40%

155 256 60%

Table 4 gives the results obtained when maintaining arc consistency with respect to A and B. MAC3.2 clearly outperforms MAC3, MAC3.1 and MAC3.3. The number of constraint checks performed by MAC3.2 is half that performed by MAC3.1. The good behaviour of MAC3.2 results from its memorization effect. Indeed, many constraint checks have been replaced by validity checks: one can observe that the number of validity checks of MAC3.2 is twice that of MAC3.3. As validity checks are cheap, being performed in constant time unlike constraint checks, MAC3.2 has a great advantage. Table 5 gives the results for the MAC* algorithms (those maintaining their data structures when backtracking). Although some constraint checks are saved, the overhead of maintaining the data structures during the search is penalizing in terms of cpu time.

Next, we have experimented with a combinatorial mathematics problem, called Golomb ruler, from the CSPLib benchmark library (http://4c.ucc.ie/˜tw/csplib/). The satisfaction problem specification is the following: given two values l and m, does there exist a ruler of length l with m marks, i.e., a set of m integers 0 ≤ a1 < … < am ≤ l such that the m(m−1)/2 differences aj − ai, 1 ≤ i < j ≤ m, are all distinct? We have modeled this problem as a CSP by using ternary and binary constraints as described in [12]. The instance (l, m) = (34, 8) corresponds to the maximum number of marks on a ruler of length 34. Again, we observe the same phenomenon: MAC3.2 requires half as many constraint checks and twice as many validity checks as MAC3.1. There is then a speed-up of about 25%.
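The distinctness condition above is easy to state programmatically. The following helper is illustrative only, not part of the paper's CSP model; the example ruler is a known optimal 8-mark Golomb ruler of length 34 from the literature.

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """True iff all pairwise differences a_j - a_i (i < j) are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))
```

For instance, [0, 1, 4, 9, 15, 22, 32, 34] is a valid ruler with 8 marks and length 34, matching the instance (l, m) = (34, 8) above, whereas no ruler of length 34 carries 9 marks.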

                     MAC3      MAC3.1    MAC3.1*   MAC3.2    MAC3.2*   MAC7
l = 34, m = 8 #ccks  10.677M   4.433M    3.334M    2.388M    2.129M    5.337M
              #vcks  0         1.297M    1.297M    2.390M    2.390M    0
              cpu    2.632     2.169     2.250     1.777     1.863     8.243
l = 34, m = 9 #ccks  388.871M  208.674M  164.672M  87.691M   81.207M   303.586M
              #vcks  0         73.080M   73.080M   148.434M  148.434M  0
              cpu    95.508    108.452   111.867   81.150    85.202    485.369

Table 6. Maintaining arc consistency on Golomb ruler instances

Finally, the behaviour of the different MAC algorithms has been studied with respect to two real instances of the RLFAP archive. Whereas all algorithms are close in terms of cpu time on GRAPH#14, MAC3.2 is the fastest on the most difficult instance SCEN#11 (17,001 visited nodes). It is important to note the relatively bad behaviour of MAC7. We believe that MAC7 could save more constraint checks by integrating some advanced revision ordering heuristics, and could certainly save cpu time with a further optimized implementation.

               MAC3    MAC3.1   MAC3.1*  MAC3.2   MAC3.2*  MAC3.3   MAC3.3*  MAC7
SCEN#11  #ccks 137M    49.278M  44.193M  23.050M  21.622M  46.255M  43.935M  68M
         #vcks 0       53.675M  53.675M  89.842M  89.842M  47.504M  47.504M  0
         cpu   96.761  93.613   95.232   89.751   91.104   131.361  133.556  617
GRAPH#14 #ccks 2.944M  1.584M   1.333M   1.189M   1.115M   1.187M   1.115M   1.129M
         #vcks 0       0.712M   0.712M   1.034M   1.034M   0.638M   0.638M   0
         cpu   1.958   1.876    1.976    1.864    1.921    1.913    1.960    2.295

Table 7. Maintaining arc consistency on RLFAP instances

7 Conclusion

In this paper, we have introduced two new coarse-grained arc consistency algorithms. These algorithms, called AC3.2 and AC3.3, are extensions of AC2001/3.1. The positive form of multidirectionality is partially exploited by AC3.2 and fully exploited by AC3.3. As far as we are aware, AC3d was the only coarse-grained algorithm exploiting bidirectionality. However, unlike AC3.3 (and AC3d), AC3.2 has the advantage of being adapted to non-binary constraints. Next, we have shown that the negative form of multidirectionality can be taken into account by AC3.1, AC3.2 and AC3.3, resulting in new variants of these algorithms. As a result, AC3.3 is proved to fully exploit bidirectionality as AC7, a fine-grained algorithm, does. The main differences between AC3.2/AC3.3 and AC7 are the following:
– AC3.2/AC3.3 are far easier to implement than AC7 (exploiting negative multidirectionality is immediate in the binary case),
– AC7 does not perform useless validity checks (at the price of heavy data structures),
– AC3.2/AC3.3 perform revisions of arcs whereas AC7 performs revisions of “values”.
With respect to the last item, we believe that it must be more difficult to enhance AC7 than AC3.2/3.3 by integrating a revision ordering heuristic [15, 6] since the number of elements in the propagation set can be very large for AC7.

Some observations can be supported by our experimentations. When arc consistency is used as a preprocessing, AC3.3 seems to be the most efficient algorithm. Compared to AC2001/3.1, AC3.3 saves about 25% of constraint checks and, on average, 15% of cpu time. When arc consistency is maintained during the search, MAC3.2, due to a memorization effect, is more efficient than MAC3.3 both in terms of constraint checks and cpu time. Compared to MAC2001/3.1, our experimental results show that MAC3.2 saves about 50% of constraint checks and, on average, 15% of cpu time.

Finally, one could wonder whether MAC3.2, which seems to be the most efficient arc consistency algorithm in terms of constraint checks, is also really the fastest algorithm. In [14], MAC3d is shown to be about 1.5 times faster than MAC2001 on difficult random problems. However, the version of MAC2001 used by [14] can be improved since it is simply equipped with the lexicographic revision ordering heuristic. In a related work [6], we have shown that using a variable-oriented variant of MAC2001 with a (not optimized) revision ordering heuristic based on the current domain size allows saving about 25% of cpu time. It is then very difficult to know which algorithm among MAC3d and MAC3.2 is the fastest without a direct comparison. Such a comparison is one perspective of this work.
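To make the exploitation of positive bidirectionality concrete, here is a minimal sketch; all names are illustrative assumptions, not the paper's data structures. A single successful constraint check on a pair (a, b) proves at once that b supports (C, X, a) and that a supports (C, Y, b), so both entries of a support table can be refreshed at the price of one check.

```python
def find_support(check, x, a, y, dom_y, last):
    """Seek a support for value a of x on a binary constraint given by `check`.

    Positive bidirectionality: every successful constraint check on (a, b)
    is recorded as a support in both directions."""
    for b in sorted(dom_y):
        if check(a, b):            # one constraint check...
            last[(x, a)] = b       # ...gives a support for (x, a)
            last[(y, b)] = a       # ...and, for free, one for (y, b)
            return b
    return None
```

Each stored support can later be tested by a constant-time validity check instead of new constraint checks, which is the saving measured throughout the experiments above.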

Acknowledgements

This paper has been supported by the CNRS, the “programme TACT de la Région Nord/Pas-de-Calais” and by the “IUT de Lens”.

References

1. C. Bessiere. Arc consistency and arc consistency again. Artificial Intelligence, 65:179–190, 1994.
2. C. Bessiere, E.C. Freuder, and J.C. Regin. Using constraint metaknowledge to reduce arc consistency computation. Artificial Intelligence, 107:125–148, 1999.
3. C. Bessiere and J.C. Regin. MAC and combined heuristics: two reasons to forsake FC (and CBJ?) on hard problems. In Proceedings of CP'96, pages 61–75, 1996.
4. C. Bessiere and J.C. Regin. Arc consistency for general constraint networks: preliminary results. In Proceedings of IJCAI'97, 1997.
5. C. Bessiere and J.C. Regin. Refining the basic constraint propagation algorithm. In Proceedings of IJCAI'01, pages 309–315, 2001.
6. C. Lecoutre, F. Boussemart, and F. Hemery. Revision ordering heuristics for the constraint satisfaction problem. In submission, 2003.
7. C. Lecoutre, F. Boussemart, F. Hemery, and S. Merchez. Abscon 2.0, a constraint programming platform. http://www.cril.univ-artois.fr/˜lecoutre, September 2003.
8. A.K. Mackworth. Consistency in networks of relations. Artificial Intelligence, 8(1):99–118, 1977.
9. R. Mohr and T.C. Henderson. Arc and path consistency revisited. Artificial Intelligence, 28:225–233, 1986.
10. D. Sabin and E.C. Freuder. Contradicting conventional wisdom in constraint satisfaction. In Proceedings of PPCP'94, Seattle WA, 1994.
11. B.M. Smith and S.A. Grant. Trying harder to fail first. In Proceedings of ECAI'98, pages 249–253, Brighton, UK, 1998.
12. B.M. Smith, K. Stergiou, and T. Walsh. Modelling the Golomb ruler problem. Technical Report 1999.12, University of Leeds, 1999.
13. M.R.C. van Dongen. AC3d: an efficient arc consistency algorithm with a low space complexity. In Proceedings of CP'02, pages 755–760, 2002.
14. M.R.C. van Dongen. Lightweight arc-consistency algorithms. Technical Report TR-01-2003, University College Cork, 2003.
15. R.J. Wallace and E.C. Freuder. Ordering heuristics for arc consistency algorithms. In Proceedings of NCCAI'92, pages 163–169, 1992.
16. K. Xu and W. Li. Many hard examples in exact phase transition. Submitted, 2002.
17. Y. Zhang and R.H.C. Yap. Making AC3 an optimal algorithm. In Proceedings of IJCAI'01, pages 316–321, Seattle WA, 2001.
