CSP techniques using partial orders on domain values

Amit Bellicha, Christian Capelle, Michel Habib, Marie-Catherine Vilarem, Tibor Kokeny

LIRMM - UMR 9928, Université Montpellier II / CNRS, 161 rue Ada, 34392 Montpellier cedex 5, France
email: {bellicha, capelle, habib, [email protected]}
Abstract
In [Fre91], Freuder defined a notion of interchangeability of values in a CSP, whose purpose is to reduce the size of the problem without losing any solution. He also defined related, weaker notions, such as substitutability. This paper explores this last notion in several ways: first, the partial orders induced on domain values by the substitutability relation are emphasized and used to reduce the size of the problem without losing its satisfiability (though some solutions may be lost); then, weaker forms of this notion are presented and used to improve classical resolution methods such as backtracking and forward-checking.
Keywords
Constraint Satisfaction Problems, Partial Orders.
1 Introduction
Constraint Satisfaction Problems (CSPs) provide an interesting way of modeling many Artificial Intelligence and optimisation problems. Unfortunately, the resolution of such problems is generally NP-complete. So, in order to facilitate the resolution task, many techniques have been studied. We can distinguish two main approaches: first, reduce the problem size before the resolution, keeping the same solution set, by preprocessing techniques (filterings); second, reduce the size of the search space during the resolution, which is possible through heuristics and weaker filtering techniques.

In this paper, we use partial orders on the domain values of a CSP either to reduce the size of the problem or to search efficiently for a solution. These orders are defined according to the fact that, in the domain of a variable, a value a can dominate another value b, in the sense that each support of b is a support of a. This relation of domination, called neighbourhood substitutability, has been defined by Freuder in [Fre91]. We use it in two manners:

- to propose a global filtering of the problem by keeping only the values that dominate the other ones. This strong property may be very useful in some cases, such as problems involving many monotone constraints [DV91], [HDT92];
- to execute local filterings during the resolution, according to weaker conditions of domination which do not allow a global filtering.

In section 2, after some basic definitions, we show how the neighbourhood substitutability relation induces partial orders on the domain values. In section 3, we study how this relation of domination can be used to reduce the size of a CSP, and we propose an algorithm similar to AC-6 [BC94], the most efficient arc-consistency algorithm known. In section 4, we exploit weaker forms of the orders to improve classical backtracking techniques and lookahead techniques like forward checking. Then, in section 5, we show how this notion of substitutability of values in a domain can be generalised to tuples of the cartesian product of domains. In the conclusion, we indicate other ways to use these properties.
2 Background, definitions
A binary Constraint Satisfaction Problem (CSP) Π involves a set of n variables X = {i, j, ...}, each with an associated domain D_i (where max{|D_i| : 1 ≤ i ≤ n} = d), and a set of m binary constraints between variables. A binary constraint R_ij between the variables i and j is a subset of the cartesian product D_i × D_j which specifies which values are compatible with each other. If (a, b) ∈ R_ij, then it is said that a supports b or b supports a. If there is no constraint defined between two variables, then all their domain values are compatible with each other. The problem is to find a solution, that is an assignment of values to variables which satisfies all the constraints. A CSP is satisfiable if it has at least one solution.

The constraint graph G associated to a CSP Π is the non-oriented graph having X as set of vertices and an edge between two vertices i and j if and only if there is a constraint R_ij between i and j. The consistency graph GC associated to Π is the graph having ∪_i D_i as set of vertices (there are Σ_{i=1}^{n} |D_i| of them) and an edge between two vertices a and b iff there is a constraint R_ij such that (a, b) ∈ R_ij.

For any domain value a, the neighbourhood of a in GC is defined by N(a) = {b ∈ ∪_i D_i | (a, b) is an edge of GC}. Remark that the neighbourhood of a value is the set of its supports.

A CSP Π' is said to be smaller than a CSP Π if and only if Π and Π' have the same constraint graph, D'_i ⊆ D_i for all i, and R'_ij ⊆ R_ij for all constraints.

In [Fre91], Freuder studies the relation of interchangeability between values, and he also defines several variations on this interchangeability notion. One of them is substitutability:

Definition 1 Let a and b be two values of a domain D_i of a CSP Π. a is substitutable for b if and only if, in any solution of Π containing b, replacing b with a yields an assignment that is still a solution of Π.

Definition 2 Let a and b be two values of a domain D_i of Π. a is neighbourhood substitutable for b if and only if N(b) ⊆ N(a).

Proposition 1 Let a and b be two values of a domain D_i of Π. If a is neighbourhood substitutable for b, then a is fully substitutable for b.
Figure 1: (a) the original CSP; (b) the CSP obtained by removing, for each i, the non-maximal elements of D_i according to ≼_i.
Notice that the converse is not true.
Proposition 2 The neighbourhood substitutability relation induces a partial order on the domain D_i of every variable i of a CSP Π. These partial orders are called NS-orders and denoted by ≼_i, where for all a, b ∈ D_i, b ≼_i a ⟺ a is neighbourhood substitutable for b.

In fact, ≼_i is the order on the vertices of D_i induced by the neighbourhood inclusion order in the graph GC.
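To make Definition 2 and the NS-orders concrete, here is a small Python sketch (our own; the dictionary-based CSP encoding and the toy constraint are assumptions, not the paper's):

```python
# A minimal sketch under an assumed encoding: domains as a dict {variable: set of
# values}; binary constraints as a dict {(i, j): set of allowed (value_of_i,
# value_of_j) pairs}, stored in both directions; a missing entry means the two
# variables are unconstrained, i.e. all pairs are allowed.

def neighbourhood(domains, constraints, i, a):
    """Supports of the value (i, a) in the consistency graph GC."""
    return {(j, b) for j, Dj in domains.items() if j != i
            for b in Dj
            if (i, j) not in constraints or (a, b) in constraints[(i, j)]}

def ns_pairs(domains, constraints, i):
    """All pairs (b, a) of D_i with b <=_i a, i.e. a neighbourhood substitutable for b."""
    return [(b, a) for a in domains[i] for b in domains[i]
            if a != b and
            neighbourhood(domains, constraints, i, b)
            <= neighbourhood(domains, constraints, i, a)]

# Toy example (made up): value 'a' of variable 1 supports both values of variable 2,
# while 'b' supports only 'b', so 'a' dominates 'b' in D_1.
domains = {1: {'a', 'b'}, 2: {'a', 'b'}}
constraints = {(1, 2): {('a', 'a'), ('a', 'b'), ('b', 'b')},
               (2, 1): {('a', 'a'), ('b', 'a'), ('b', 'b')}}
print(ns_pairs(domains, constraints, 1))   # [('b', 'a')]
```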
3 Closing CSPs according to NS-orders
Two important problems attached to CSPs are the existence and the computation of a solution. Hence, reducing the size of a problem without losing its satisfiability is very important.

Let Π be a CSP defined as in section 2. One way to reduce Π is to reduce each domain D_i according to the order ≼_i: the new domain D'_i is composed of the maximal values. Remark that if each domain of the original CSP is reduced in this way, the resulting CSP may still contain non-maximal values in some domains, according to the orders re-computed in this new CSP. In the example of figure 1, there are two ways of proceeding: either compute each order ≼_i in the original CSP and then delete all non-maximal elements, or compute a first order, reduce the CSP according to it, compute a second order in the obtained CSP, and so on. If we proceed in the order 1, 2, 3, the two methods lead to the same result: in D_1, one value can still be removed (Figure 1-b). So, in the following, we develop another method of reduction which removes all removable values.

Let us call Red_{i,a}(Π) the CSP deduced from Π by the deletion of a non-maximal value (i, a) of the domain D_i. We shall refer to this operation as the Red_{i,a} reduction operation. This new problem Red_{i,a}(Π) is smaller than Π.
Proposition 3 Red_{i,a}(Π) is satisfiable ⟺ Π is satisfiable. Moreover, each solution of Red_{i,a}(Π) is a solution of Π.

Proof: (⇒) Since Red_{i,a}(Π) is smaller than Π and has the same constraints, every solution of Red_{i,a}(Π) is a solution of Π.
(⇐) Let S be a solution of Π and v the value of variable i in S. If v = a, since a is not maximal in D_i according to ≼_i, there is a value v' in D_i such that v ≼_i v'. According to Proposition 1, the assignment deduced from S by replacing v with v' is a solution of Red_{i,a}(Π). If v ≠ a, then the solution S holds in Red_{i,a}(Π). □

Our goal is to reduce a CSP Π using the Red_{i,a} operators for all (i, a), until a CSP Π' is obtained such that Red_{i,a}(Π') = Π' for all (i, a). This can be formalized as follows:

Definition 3 A CSP is closed for neighbourhood substitutability (or NS-closed) iff for each value a of each domain D_i, there is no other value b in D_i such that b ≼_i a.

So, we search for the maximal NS-closed CSP smaller than Π. Notice that the consistency graph of such a resulting CSP can be seen as a particular case of a prime graph in graph decomposition theory [Hsu87], [MR84].

In the following, we present the algorithm NS-Closure (see Algorithm 1), which computes the closure of a CSP according to the Red_{i,a} operations for all (i, a). When a value is removed from a domain, we have to examine whether this removal implies removals of values in other domains, as the arc-consistency algorithms do. This operation is generally called propagation of value removal. So we propose an algorithm inspired by AC-6 [BC94], one of the best arc-consistency algorithms currently known.

The basic idea of this algorithm is the following: for each pair (a, b) of values of every domain D_i, we search for a value s in the neighbourhood of a and b such that s supports a but not b. Such a value s is called a splitter of (a, b). This technique is similar to the one of AC-6, which maintains a support for each value. If the pair (a, b) has no splitter, then a is not maximal according to ≼_i, so it must be removed from D_i. Remark that each unordered pair of values {a, b} is considered exactly twice: once as the pair (a, b) and once as the pair (b, a). When a value is removed from D_i, it is necessary to propagate this removal, searching for a new splitter for each pair "splitted" by this value. So, for each splitter s in D_j, the set Split_js of the pairs splitted by s is maintained.

This leads to an algorithm in two steps. In the first part, called Initialisation, a splitter is searched for each pair of values of the same domain and, for each splitter s in D_j, the pairs splitted by s are memorized in Split_js. The second part, called Deletion & Propagation, consists in deleting the first members of pairs without splitter and in propagating each such deletion to the pairs splitted by the deleted value.

In order to manage splitter searches efficiently, arbitrary total orders are considered: a total order on variables and a total order on domain values. This induces a total ordering I of all values in the CSP. Two functions use this order. The first one, FirstSplitter(i,a,b), where a and b belong to the same domain D_i, returns the first splitter of (a, b) (according to I) if such a value exists; otherwise, it returns the empty set. The second, NextSplitter((i,a,b),(j,s)), where a and b belong to D_i and s, a value of D_j, is a splitter of (a, b), returns the smallest splitter of (a, b) greater than s (according to I) if such a splitter exists; otherwise, it returns the empty set. These two functions, using I, allow the search for a splitter of (a, b) to examine only the domains of variables constrained to i.
Algorithm 1: NS-Closure
Data: a CSP Π.
Result: the CSP obtained from Π by closure under the Red_{i,a} operations for all (i, a).
begin
  // Initialisation
  List ← ∅
  for each D_i ∈ Π do
    for each a ∈ D_i do Split_ia ← ∅
  for each D_i ∈ Π do
    for each (a, b) ∈ D_i × D_i, a ≠ b do
      (j, s) ← FirstSplitter(i, a, b)
      if (j, s) = ∅ then
        if b ∈ D_i then
          // N(a) ⊆ N(b), so a must be deleted from D_i
          D_i ← D_i \ {a} ; List ← List ∪ {(i, a)}
      else Split_js ← Split_js ∪ {(i, a, b)}
  // Deletion & Propagation
  while List ≠ ∅ do
    (j, s) ← first element of List ; remove (j, s) from List
    for each (i, a, b) ∈ Split_js do
      remove (i, a, b) from Split_js
      if a ∈ D_i and b ∈ D_i then
        (j', s') ← NextSplitter((i, a, b), (j, s))
        if (j', s') = ∅ then
          // N(a) ⊆ N(b), so a must be deleted from D_i
          D_i ← D_i \ {a} ; List ← List ∪ {(i, a)}
        else Split_j's' ← Split_j's' ∪ {(i, a, b)}
end
The correctness of the algorithm is a straightforward application of Proposition 3.
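For illustration, here is a minimal, unoptimized Python sketch of the NS-closure fixpoint, reusing the neighbourhood helper of the section 2 sketch; it recomputes support sets directly instead of maintaining splitters, so it does not achieve the O(md^3) bound proved below:

```python
# A simple fixpoint sketch of NS-closure (assumed encoding as in the section 2 sketch;
# 'neighbourhood' is the helper defined there). Slower but easier to read than the
# splitter-based Algorithm 1.

def ns_closure(domains, constraints):
    """Apply the Red_{i,a} reductions until no value of a domain is dominated by
    another value of the same domain, and return the reduced domains."""
    domains = {i: set(D) for i, D in domains.items()}   # work on a copy
    changed = True
    while changed:
        changed = False
        for i in domains:
            for a in list(domains[i]):
                # a is removable if some other b of D_i satisfies N(a) <= N(b)
                if any(b != a and
                       neighbourhood(domains, constraints, i, a)
                       <= neighbourhood(domains, constraints, i, b)
                       for b in domains[i]):
                    domains[i].discard(a)
                    changed = True
    return domains
```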
Proposition 4 The time complexity of the NS-Closure algorithm is O(md^3) and its space complexity is O(nd^2).
Proof sketch:
Space complexity: First, remark that each domain contains at most d^2 pairs, so there are nd^2 pairs for the whole CSP. Split is implemented via an array of lists and, at any time of the algorithm, a pair is splitted by at most one value, so the total space used by the Split_ia sets is also O(nd^2).

Time complexity: For each pair (a, b) of values of a domain D_i, all the values c of the other domains D_j such that R_ij exists are examined at most once by the FirstSplitter or NextSplitter calls. So, for the whole D_i, there are d^2 · |{j | R_ij exists}| · d values examined. Summing over every D_i, we obtain d^3 · Σ_{i=1}^{n} |{j | R_ij exists}|, that is O(md^3). □

For such a reduction algorithm, it is important to consider not only the worst-case time complexity, but also the ratio of time complexity to performance of the reduction. When few values are removed by the algorithm, that is when most elements are maximal in each domain, all the pairs are likely to be quickly splitted, so the total time cost could be far less than O(md^3). A lower bound on the time complexity of this method seems to be Ω(nd^2), in the case where a splitter is found in constant time for each pair.

Notice that if a CSP Π is arc-consistent, then the NS-closure of Π is also arc-consistent. So, it is possible to apply the NS-closure filtering after arc-consistency. The similarities between these two methods may even allow them to be united in the same procedure. Notice also that if the CSP contains only monotone constraints, then every domain in the resulting NS-closed CSP contains 0 or 1 value.
4 Using NS-order extensions

In the previous section, we proposed a filtering of the domains according to the neighbourhood substitutability relations ≼_i. We took advantage of the fact that the removal of an element b from a domain D_i does not modify the satisfiability of the original CSP if there exists a in D_i such that b ≼_i a. However, for a pair of values (a, b), the condition of belonging to such a relation is quite strong (we have to consider every domain in the neighbourhood). Thus, the efficiency of the corresponding filtering may be low (in the worst case, every relation may be empty, that is, no value is removed).

A natural idea is to consider domain orders based on the same principle, that is neighbourhood substitutability, but taking into account only a subset of the neighbourhood of a given variable (Relaxed Neighbourhood Substitutability, or RNS-orders). This subset can be a single neighbour or a set of neighbours verifying some conditions. However, based on these orders, we cannot pre-filter without possibly losing satisfiability. Thus, instead of filtering before the resolution (the search for a solution), we propose to use these relations during the resolution. In the following, we propose an algorithm to statically compute such a relation (without filtering), then show how partial orders defined on domains can be used to improve backtrack search: first, we give an improvement of classical backtracking, then we propose two techniques to improve lookahead-type algorithms.

The computation of an RNS-order on a domain D_i of a CSP Π can be reduced to the computation of the partial order induced by the neighbourhood inclusions on the vertices of a non-oriented bipartite graph G = (V1, V2, E), where V1 is D_i and V2 is the set of values of the neighbouring domains considered in the computation of the given RNS-order. There is an edge between two values in G iff there is an edge between the same values in the consistency graph of Π. So, in the next section we propose an algorithm which computes the partial order induced by the neighbourhood inclusions in a general bipartite graph.
4.1 Computing RNS-orders without filtering
Let G = (V1, V2, E) be a non-oriented bipartite graph, where |V1| = n1, |V2| = n2, |E| = m, and each edge of E joins a vertex of V1 and a vertex of V2. For two vertices x, y ∈ V1, a 2-length path between x and y is determined by a vertex z ∈ V2 such that {x, z} and {z, y} are edges of G. Let us denote by nbchains_x(y) the number of 2-length paths between x and y. The following property is verified:

Proposition 5 nbchains_x(y) = |N(y)| ⟺ N(y) ⊆ N(x).

Proof: (⇒) nbchains_x(y) is the number of vertices of V2 which are adjacent to both y and x. If this number is equal to the number of vertices of V2 adjacent to y, that is |N(y)|, then each vertex adjacent to y is adjacent to x.
(⇐) If each vertex z of V2 adjacent to y (z ∈ N(y)) is adjacent to x, then it is possible to create a chain of length 2 between x and y containing z, and there is no other such chain. □
1
2
begin 1 ? ; for each y 2 V1 do compute jN (y)j for each y 2 V1 do nbchain(y) ? 0 for each x 2 V1 do for each neighbour z 2 V2 of x do for each neighbour y 2 V1 of z do if y =6 x then nbchain(y) ? nbchain(y) + 1 if nbchain(y) = jN (y)j then 1 ?1 [f(y; x)g
for each neighbour z 2 V2 of x do for each neighbour y 2 V1 of z do if y =6 x then nbchain(y) ? nbchain(y) ? 1
end Proposition 6 The time complexity of the Red algorithm is O(n1 :m) and its space com-
plexity is O(n21)
7
Figure 2: A difficult CSP for the NS-closure with the Red algorithm.
Proof: Time complexity: for each neighbour z of each vertex x ∈ V1, we consider the complete neighbourhood N(z). The complexity is therefore O(Σ_{x∈V1} Σ_{z∈N(x)} deg(z)). As Σ_{z∈N(x)} deg(z) ≤ m and |V1| = n1, the time complexity is O(n1 · m).

Space complexity: the structure nbchains is shared by all the vertices x of V1 in order to compute nbchains_x; its space complexity is thus O(n1). The space complexity of the partial order ≼_1 is O(n1^2), because we do not remove the redundancies induced by transitivity. □
Another way to compute partial order 1 uses the matrix product: let
M (G) be the adjacency matrix of the bipartite graph G (its lines are indexed with V1 and its rows with V2), and M (G) the transposed matrix of M (G). Let M be the product M (G):M (G) : M [i; j ] is the number of chains of length 2 between the vertices i and j ( which are in V1 ). Then we have the following property : M [i; j ] = jN (i)j , N (i) N (j ). The best algorithm for matrix product has an asymptotic time of computation, for a n vertices graph, of O(n2 376) [CW87]. So any progress in practical matrix product algorithm can also be applied to obtain the partial order preceq1. Theorically matrix product algorithms yield bounds for the computation of preceq1. Notice that for substituability purpose, only the maximal elements for 1 are needed, and their computation could be less costly than the order one itself. t
t
;
Remark 2 As we noticed in section 3, the deletion of the non-maximal elements for the orders computed on the original CSP does not give the NS-closure. i
We could apply the above algorithm on the obtained CSP again, and after a nite number of applications we could obtain the NS-closure of the original CSP. Nevertheless, this number of applications may be important. Figure 2 shows an example, in which each application can remove only two domain elements, so in the worst case, we have to perform the computation of NS-orders O(d) times. Thus, in order to compute the NS-closure for such a CSP we can have a complexity O(md4) instead of O(md3 ) which is the complexity of the NS-closure algorithm presented in the section 3.
4.2 Improving resolution algorithms
In order to improve backtrack based resolution algorihms which search for only one solution, we de ne two RNS-orders. The rst consists on considering a single neighbour for a given domain: 8
De nition 4 Let R be a constraint of a CSP . Let the order de ned on the elements of D be the following: for all a,b in D , a b if and only if fc 2 D j c supports ag fc 2 D j c supports bg (i.e. N (a) \ D N (b) \ D ) . Proposition 7 Let D be the domain of the variable i, then for all a,b in D a b if and only if a b for all j such that R exists. ij
ij
i
i
j
ij
j
j
j
i
ij
i
i
ij
For the second order, we take a static assignment ordering on the variables and compute the neighbourhood substitutability relations partially, according to this ordering. For the sake of simplicity, let us take the assignment ordering I such that the variable i precedes the variable j iff i < j. We then define a "directed" extension ≼_iI of every order ≼_i (directed NS-order) as follows:

Definition 5 Let a and b be two values of a domain D_i. a is neighbourhood substitutable for b according to I (b ≼_iI a) if and only if b ≼_ij a for all j > i.

Using Algorithm 2, the computation of one order ≼_ij is O(d^3) in time and O(d^2) in space. Thus, the computation of all these orders is O(m·d^3) in time and O(m·d^2) in space. The computation cost of the orders ≼_iI is the same as the computation cost of the orders ≼_i (obtained by considering the whole neighbourhood), so in total we have O(m·d^3) in time and O(m·d^2) in space.
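Under the assumed encoding of the earlier sketches, a ≼_ij order of Definition 4 can be obtained by feeding a single constraint to the neighbourhood_inclusion_order sketch given after Proposition 6; a possible wrapper (our own, hypothetical helper name):

```python
# Sketch: build, for one constraint R_ij, the bipartite graph D_i versus D_j and
# reuse neighbourhood_inclusion_order (defined in the earlier sketch) to get <=_ij.

def rns_order_ij(domains, constraints, i, j):
    R = constraints.get((i, j))
    adj1 = {a: {b for b in domains[j] if R is None or (a, b) in R} for a in domains[i]}
    adj2 = {b: {a for a in domains[i] if R is None or (a, b) in R} for b in domains[j]}
    return neighbourhood_inclusion_order(adj1, adj2)   # pairs (a, b) with a <=_ij b
```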
4.2.1 Classical backtracking
In this section we will use the relation ≼_iI. The following property is verified:
Proposition 8 Let a and b be two values of a domain D_i, and let A = {(1, v_1), (2, v_2), ..., (i, a)} and B = {(1, v_1), (2, v_2), ..., (i, b)} be two consistent assignments of the variables {1, ..., i}. If b ≼_iI a and B can be extended to a solution {(1, v_1), (2, v_2), ..., (i, b), (i+1, v_{i+1}), ..., (n, v_n)}, then the assignment {(1, v_1), (2, v_2), ..., (i, a), (i+1, v_{i+1}), ..., (n, v_n)} is also a solution.

Corollary 1 Let a and b be two values of a domain D_i, and let A = {(1, v_1), (2, v_2), ..., (i, a)} and B = {(1, v_1), (2, v_2), ..., (i, b)} be two consistent assignments of the variables {1, ..., i}. If b ≼_iI a and A cannot be extended to a solution, then B cannot be extended to a solution either.

Thus, in order to search for a solution, we can give a schema of the backtrack procedure, which has to be called with parameter i = 0 (Algorithm 3). The classical backtracking procedure is modified in only one point: in each recursive call of the algorithm, if a value a for the variable i is consistent (that is, the current value assignment of the variables {1, ..., i} is consistent), then we remove from the current domain D_i every value b which is smaller than a (D_i ← D_i \ {b | b ≼_iI a}).

We have two possibilities. First, suppose that the current assignment of the variables {1, ..., i} (the value of i being a) can be extended to a solution. In this case, before finding such a solution, there is no backtrack on the variable i, so the values removed from D_i have no effect: we may lose solutions, but the satisfiability of the CSP does not change (we search for only one solution).

Algorithm 3: Backtracking (D_i, i, a)
begin
  if i > 0 then value(i) ← a
  if the current value assignment of the variables {1, ..., i} is consistent then
    if i = n then return (current assignment)
    else
      if i ≠ 0 then D_i ← D_i \ {b | b ≼_iI a}
      for each value v in D_{i+1} do
        Backtracking(D_{i+1}, i + 1, v)
end
Second, if the current assignment cannot be extended to a solution, then there is no value b ≼_iI a such that, by replacing a with b, the obtained assignment can be extended to a solution (Corollary 1). Thus, the following property is verified:

Proposition 9 If a CSP Π is satisfiable, then Algorithm 3 run on Π ends by finding a solution of Π.

We did not assume any heuristics on value ordering, but it is easy to see that an order of consideration of values which respects the ordering ≼_iI (if b ≼_iI a then a is considered before b) can improve the algorithm (more values can be removed). It is also clear that Algorithm 3 can easily be completed by a lookahead filtering.
In this section we will use the extension of , in order to improve forward checking search. As we de ned above, is induced by the neighbourhood substituability according to an unique constraint. In order to use this relation, rst, we de ne a schema of lookahead algorithms (algorithm 4). We take the same assignment ordering of variables as in the previous section. In order to search for a solution, this function has to be called with LookaheadSchema(0; ;). The functions First(i), Next(i; a), and Last(i) ensure an order of consideration of the values of a domain D . Algorithm 4: LookaheadSchema (i, a) ij
i
ij
i
begin if i > 0 then value(i) ? a if i = n then return (current assignment) if Filtering(i; a) = true then i ? i + 1 ; flag ? true ; v ? first(i) while flag = true do
end
LookaheadSchema(i; v) Reconstruction(i; v) if v = last(i) then flag ? false ; else v ? next(i; v); 10
In a forward checking search, the Filtering function enumerates the domain values of non-assigned variables, and removes all values which are found incompatible to the current assignment. More precisely, for a given assignment f(1; value(1)); : : :; (i; value(i) = a)g, Filtering executes a consistency checking between a 2 D and each value b 2 D such that j > i and R exists. By taking advantage of the partial orders , the number of these consistency checks can be reduced. If the pair of values ((i; a); (j; b)) is found inconsistent on a constraint R , then not only b is removed from D , but also remove each value b' such that b' b without any consistency check (algorithm 5). In order to save removed values from a domain D , we will use the variables which are supposed empty at the start. contains the values of D which were removed because of the last assignment of the variable i. Algorithm 5: Filtering (i, a) i
j
ij
ij
ij
j
ji
j
ij
ij
j
begin if i = 0 then return (true) for all j > i do if 9R then for all b 2 D do if (a; b) 62 R then S ? fb j b bg ? [S D ?D nS if D = ; then return (false) return (true) end ij
j
ij
0
ij
0
ji
ij
j
j
j
We did not take any particular order on the elements of D when they are compared with an element a of D . However, it is clear that by taking a value ordering on D which respects the relation , we can improve the performance of Filtering. j
i
j
ji
In the case of backtrack (when the domain of a non-assigned variable becomes empty), the algorithm 4 uses the function Reconstruction in order to restore the domains. This function could be hidden in the recursive calls by adding the domains as a parameter of LookaheadSchema as we did it in the classical backtracking algorithm. But here, we decided to restore the domains \by hand" which allows us to avoid some unnecessary work. Intuitively, the idea is the following: Suppose that the current value a of the variable i has to be changed (backtrack). Let the next value to examine in D be a . For each constraint R , we have to put back to D all values which have been removed by the assignment a of i ( ), then we have to lter according to a . However, if a a, we need not restore D , because the values removed by a in D would be inevitably removed by a too. Thus, in this case, it is enough to continue the ltering D by a and keep the removed elements in . i
ij
0
j
0
ij
0
ij
j
0
j
j
11
0
ij
Algorithm 6: Reconstruction (i, a)
begin
  if a = last(i) then
    for all j > i do
      if ∃ R_ij then
        D_j ← D_j ∪ Δ_ij ; Δ_ij ← ∅
  else
    a' ← next(i, a)
    for all j > i do
      if ∃ R_ij and a' ⋠_ij a then
        D_j ← D_j ∪ Δ_ij ; Δ_ij ← ∅
end

All in all, we can summarize the two improvements on the Forward Checking algorithm induced by the relations ≼_ij as follows. First, we reduced the number of consistency checks in the filtering step of the forward checking: when a value is removed by the filtering process, all values smaller than it are removed as well, without any further check. Second, using these partial orders, we can restore the domains only PARTIALLY in case of backtrack.
Finally, we remark that in this case we cannot choose a "good" value ordering heuristic on a domain D_i, because it may happen that, for a, a' ∈ D_i and for some j ≠ k with j > i and k > i, we have a ≼_ij a' and a' ≼_ik a.
5 A Generalization: the k-NS relation
The notion of substitutability can be extended in the same way that arc-consistency has been extended to k-consistency [Fre78].

Definition 6 Let {(a,i), (b,j)} and {(c,i), (d,j)} be two pairs of values of different variables i and j. The pair {(a,i), (b,j)} is fully 2-substitutable for the pair {(c,i), (d,j)} iff every solution involving {(c,i), (d,j)} remains a solution when {(a,i), (b,j)} is substituted for {(c,i), (d,j)}.

As usual, this global notion is too strong to be computed directly, and we look for a sufficient condition which is locally computable. We define the neighbourhood N({(a,i), (b,j)}) of a pair of values of different variables as the intersection of the neighbourhoods of the two values in the consistency graph of the CSP: N({(a,i), (b,j)}) = N(a,i) ∩ N(b,j).
Definition 7 Let {(a,i), (b,j)} and {(c,i), (d,j)} be two pairs of values of different variables i and j. The pair {(a,i), (b,j)} is neighbourhood 2-substitutable for the pair {(c,i), (d,j)} iff N({(c,i), (d,j)}) ⊆ N({(a,i), (b,j)}).

This general definition is useful only for pairs of compatible values, just as 3-consistency involves only such pairs. We will refer to this relation as the 2-NS relation, defined on the cartesian products D_i × D_j.
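A tiny sketch of this test, reusing the neighbourhood helper of the section 2 sketch (again our own encoding, not the paper's):

```python
# Sketch of neighbourhood 2-substitutability: ((a,i),(b,j)) is 2-substitutable for
# ((c,i),(d,j)) iff N(c) ∩ N(d) is included in N(a) ∩ N(b).

def pair_neighbourhood(domains, constraints, i, a, j, b):
    return (neighbourhood(domains, constraints, i, a)
            & neighbourhood(domains, constraints, j, b))

def ns2(domains, constraints, i, j, pair_ab, pair_cd):
    """True iff {(a,i),(b,j)} is neighbourhood 2-substitutable for {(c,i),(d,j)}."""
    (a, b), (c, d) = pair_ab, pair_cd
    return (pair_neighbourhood(domains, constraints, i, c, j, d)
            <= pair_neighbourhood(domains, constraints, i, a, j, b))
```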
Proposition 10 Let {(a,i), (b,j)} and {(c,i), (d,j)} be two pairs of values of different variables i and j. If the pair {(a,i), (b,j)} is neighbourhood 2-substitutable for the pair {(c,i), (d,j)}, then {(a,i), (b,j)} is fully 2-substitutable for {(c,i), (d,j)}.

As previously, remark that the converse is not true. The 2-NS relation induces a partial order on each cartesian product D_i × D_j (the 2-NS-orders) and, as we proposed for the maximal elements according to the NS (or 1-NS) orders, it is desirable to reduce the problem by keeping only the maximal pairs according to the 2-NS-orders. The 2-NS-closure of a CSP (closed for the reduction associated to the 2-NS-orders) preserves the satisfiability of the original CSP. The filtering process computing the 2-NS-closure of a CSP removes pairs from relations, that is, it removes edges from the consistency graph of the CSP.

The notion of 2-substitutability can trivially be extended to k-substitutability (fully and neighbourhood): k-tuples are involved instead of pairs. Remark that these definitions match the preceding ones: full substitutability is full 1-substitutability and neighbourhood substitutability is neighbourhood 1-substitutability.
6 Conclusion

In this paper, we have followed two main directions. First, based on the neighbourhood substitutability relation defined on domains (NS-orders), we defined the property of being NS-closed. Then, for a given CSP Π, we defined the closure of Π according to this property (the NS-closure of Π), which can be obtained from Π by removing some domain values (filtering). The main interest of this approach is that if a CSP Π is satisfiable, then its NS-closure Π', which is smaller than Π, is also satisfiable, and every solution of Π' is a solution of Π too. Thus, we defined a filtering algorithm which computes the NS-closure of a CSP with time complexity O(md^3).

Second, we defined two relaxed NS-orders (RNS-orders), which were used to improve backtracking-based resolution algorithms. For a start, we defined a general algorithm which can be used to compute any RNS-order. In order to improve classical backtracking search, we defined an RNS-order on each domain according to a variable instantiation ordering. Then, with a more relaxed RNS-order (considering only one neighbour at a time), we proposed a refinement of forward checking search.

Our future research on this topic includes the following:

- In order to validate the practical usefulness of the proposed algorithms, we are preparing experimentations.
- As we previously noticed, the reduction method described in section 3 is not always efficient on real problems, because many domain values can be maximal according to the partial orders. In this case, a possible way to take advantage of these orders is to decompose such a CSP into several subproblems among which at least one subproblem allows a strong reduction, while keeping the satisfiability of the original CSP.
- The partial ordering of domains can also be used in the resolution of dynamic CSPs (adding and deleting constraints and variables [DD88]): if the neighbourhood of a maximal value becomes empty (or the neighbourhood of a maximal tuple, according to a higher level of substitutability), then this value and every smaller value can be removed. Splitters can also be kept dynamically for any pair of values in order to maintain the partial orders and to consider only reduced problems at any moment.
[BC94] Christian Bessiere and Marie-Odile Cordier. Arc-consistency and arc-consistency again. Arti cial Intelligence, 65(1):179{190, January 1994. [CW87] D. Coppersmith and S. Winograd. Matrix multiplication via arithmetic progressions. In Proceedings of 19th Annual Symposium on the Theory of Computation, pages 1{6, 1987. [DD88] Rina Dechter and Avi Dechter. Belief maintenance in dynamic constraint networks. In Proceedings of the sixth National Conference on Arti cial Intelligence, pages 37{ 42, St Paul, MN, 1988. [DV91] Yves Deville and Pascal Van Hentenryck. An ecient arc consistency algorithm for a class of CSP problems. In Proceedings of the Twelfth International Joint Conference on Arti cial Intelligence, pages 325{330, Sydney, Australia, 1991. [Fre78] Eugene C. Freuder. Synthesizing Constraint Expressions. Communications of the ACM, 21(11):958{966, November 1978. [Fre91] Eugene C. Freuder. Eliminating Interchangeable Values in Constraint Satisfaction Problems. In Proceedings of the Ninth National Conference on Arti cial Intelligence, pages 227{233, Anaheim, California, 1991. [HDT92] Pascal Van Hentenryck, Yves Deville, and Choh-Man Teng. A generic arcconsistency algorithm and its specializations. Arti cial Intelligence, 57(2{3):291{ 321, October 1992. [Hsu87] Wen-Lian Hsu. Decomposition of perfect graphs. Journal of Combinatorial Theory, B(43):70{94, 1987. [MR84] R.H. Mohring and F.J. Radermacher. Substitution decomposition for discrete structures and connections with combinatorial optimization. Annals of Discrete Mathematics, (19):257{356, 1984.
14