Models for using Stochastic Constraint Solvers in Constraint Logic Programming

Peter J. Stuckey and Vincent Tam

Dept. of Computer Science, The University of Melbourne, Parkville 3052, Australia.
[email protected]
Abstract. This paper proposes a number of models for integrating stochastic constraint solvers into constraint logic programming systems in order to solve constraint satisfaction problems efficiently. Stochastic solvers can solve hard constraint satisfaction problems very efficiently, and constraint logic programming allows heuristics and problem breakdown to be encoded in the same language as the constraints. Hence their combination is attractive. Unfortunately there is a mismatch between the kind of information a stochastic solver provides and that which a constraint logic programming system requires. We study the semantic properties of the various models of constraint logic programming systems that make use of stochastic solvers, and give soundness and completeness results for their use. We describe an example system we have implemented using a modified neural network simulator, GENET, as a constraint solver. We briefly compare the efficiency of these models against the propagation-based solver approaches typically used in constraint logic programming.
1 Introduction

This paper proposes a general framework for integrating constraint logic programming (CLP) with a stochastic constraint solver to solve constraint satisfaction problems (CSP's) more efficiently. A CSP involves a set of variables, each of which has a domain of possible values, and a constraint formula limiting the combinations of values for subsets of the variables. The task is to assign values to the variables so that the constraint formula is satisfied. CSP's occur in a large number of applications, such as computer vision, planning, resource allocation, scheduling and temporal reasoning. But CSP's are, in general, NP-complete. Thus, a general algorithm designed to solve any CSP will necessarily require exponential time in the worst case (assuming P ≠ NP).

Constraint logic programming systems have been successfully used to tackle a number of industrial CSP applications such as car sequencing [3], disjunctive scheduling [14] and firmware design [4]. Stochastic search methods have also had remarkable success in solving industrial CSP's [13] and constraint satisfaction optimisation problems (CSOP's) [1, 5].

Constraint logic programming systems use a constraint solver to direct a search for an answer to a goal. When the constraint solver determines that the constraints collected on some derivation path are unsatisfiable, the CLP system
backtracks and tries a different derivation path. Thus the key behaviour of the solver is its determining of unsatisfiability correctly. Typically, constraint solvers are incomplete; that is, they do not answer for every constraint whether it is satisfiable or unsatisfiable. For example, many real number constraint solvers treat non-linear constraints incompletely, and integer constraint solvers are also typically incomplete. The major requirement of the incomplete solver in a CLP system is determining unsatisfiability, that is, as often as possible, if a constraint C is unsatisfiable then solv(C) = false.

Most constraint solvers in CLP systems used to solve CSP's employ the technique of constraint propagation to solve the set of constraints. Thus, they are algorithmic in nature. In order to guarantee finding a solution, these solvers are augmented with some form of enumerative search. When the CSP is tight (i.e. it has few solutions over a large search space), this search may be very costly.

Many of the methods traditionally used in solving CSP's are not propagation based. A major class of CSP solving techniques are stochastic methods, for example simulated annealing, neural networks and evolutionary algorithms. These algorithms are designed to find solutions to constraint problems using a relaxation-based search. The methods are in general not guaranteed to find a solution, and typically involve a resource bound on the search which determines how long, and how likely to find a solution, the method will be. Such solvers do not usually ever determine when a constraint is unsatisfiable. But these kinds of solvers can be considerably more efficient than propagation-based solvers on large or hard instances of CSP's. The problem we examine is how such solvers can be used efficiently in CLP systems.

Earlier approaches that considered incorporating stochastic solvers into CLP systems are restricted to one of the models we discuss. Lee and Tam [9] required the program to execute only a single derivation, so that backtracking could never occur. At the end of the single derivation the stochastic solver (the GENET algorithm discussed later) determines a solution. Illera and Ortiz [6] use a genetic algorithm to search for a good solution to a constraint which is collected by a CLP system, hence using a stochastic method to tackle a CSOP. Both these papers show how using stochastic methods instead of the enumeration methods traditional to CLP can lead to substantial benefits. Neither of these works uses the stochastic constraint solver to control the search for a successful derivation; rather, it is used only as a solution finder, hence most of the issues dealt with in this paper do not arise.

This paper is organised as follows. Section 2 briefly introduces some preliminaries for subsequent discussion. Section 3 describes various models for how a stochastic solver can be used within a CLP system, while Section 4 gives soundness and completeness results for these various models. In Section 5 we briefly describe the GENET [13, 2] model, a probabilistic artificial neural network (ANN) used as the constraint solver to demonstrate the feasibility of our approach, and how we adapt the original model to support efficient incremental execution with backtracking on constraints. Section 6 gives our experimental results.
2 Preliminaries

We briefly introduce constraint logic programming; for more details see [7]. A constraint domain A is a structure defining the meaning of the primitive constraints. A primitive constraint is of the form r(t1, …, tn) where r is a relation defined by A and t1, …, tn are terms over the constraint domain, e.g. X > Y + 2 is a primitive constraint over the domain of integer constraints. A constraint is of the form c1 ∧ ⋯ ∧ cn where n ≥ 0 and c1, …, cn are primitive constraints. When n = 0, we shall usually write the constraint as true.

A valuation θ for a set S of variables is an assignment of values from the domain of A to the variables in S. Suppose S = {X1, …, Xn}; then θ may be written {X1 ↦ a1, …, Xn ↦ an}, indicating that each variable Xi is assigned the value ai. Let vars(F) denote the set of variables occurring in a formula F. Let ∃_V F represent the formula ∃w1 ⋯ ∃wk F where {w1, …, wk} = vars(F) − V. Let ∃̃F represent the formula ∃_∅ F, the existential closure of F. If θ is a valuation for S ⊇ vars(c), then it is a solution of c if the replacement of each variable by its value (as given by θ) yields a true statement, that is A ⊨ θ(c). A constraint c is satisfiable if there exists a solution of c; otherwise it is unsatisfiable.

A constraint solver, solv, for a constraint domain A takes as input any constraint c in A and returns true, false or unknown. Whenever solv(c) returns true the constraint c must be satisfiable, and whenever solv(c) returns false the constraint c must be unsatisfiable.

An atom, A, is of the form p(t1, …, tn) where p is an n-ary predicate symbol and t1, …, tn are terms. A goal G is a sequence of atoms and primitive constraints L1, …, Ln. A rule, R, is of the form A :- B where A is an atom and B is a goal; A is called the head of R and B the body of R.

Execution proceeds by mapping one state to another. A state is a pair written ⟨G | c⟩ where c is a constraint and G is a goal.
A derivation step from a state ⟨G0 | c0⟩ to a state ⟨G1 | c1⟩ is defined as follows. Suppose G0 is L1, …, Li−1, Li, Li+1, …, Lm where the Lj, 1 ≤ j ≤ m, are literals and Li is the selected literal. There are two cases.

(a) Li is a primitive constraint. Then c1 is c0 ∧ Li and, if solv(c1) = false, G1 is the empty goal, written □; otherwise G1 is L1, …, Li−1, Li+1, …, Lm.

(b) Li is an atom p(t1, …, tn). Let R be a renamed version of a rule in P, sharing no variables with ⟨G0 | c0⟩, of the form p(s1, …, sn) :- B1, …, Bk. Then c1 is c0 and G1 is L1, …, Li−1, s1 = t1, …, sn = tn, B1, …, Bk, Li+1, …, Lm.

A partial derivation is a sequence of states ⟨G0 | c0⟩ ⇒ ⟨G1 | c1⟩ ⇒ ⋯ where at each stage i ≥ 0 the selected literal is given by some selection strategy and ⟨Gi | ci⟩ ⇒ ⟨Gi+1 | ci+1⟩. A derivation is a maximal length partial derivation. A derivation is successful if it is finite and the last state ⟨□ | cn⟩ is such that solv(cn) ≠ false. If ⟨G0 | true⟩ ⇒ ⋯ ⇒ ⟨□ | cn⟩ is a successful derivation then ∃_vars(G0) cn is an answer to the goal G0. A derivation is finitely failed if it is finite and at the last state solv(cn) = false. Thus a derivation only terminates by success or by the solver detecting an unsatisfiable constraint; otherwise the derivation is infinite. A derivation is fair if it is finitely failed or each literal appearing in the derivation is eventually selected. A selection strategy is fair if it always produces fair derivations.

It will later be useful to refer to partial derivations where the last step is always a derivation step of type (a). Let ⟨G0 | c0⟩ ⇒(a) ⟨G1 | c1⟩ represent a sequence of any number of type (b) derivation steps followed by a type (a) derivation step. Clearly, for a partial derivation of the form ⟨G0 | c0⟩ ⇒(a) ⟨G1 | c1⟩ ⇒(a) ⋯ ⇒(a) ⟨Gn | cn⟩ the constraint solver is invoked on each of the constraints c1, c2, …, cn.
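To make the transition rules concrete, the following is a minimal Python sketch of derivation steps (a) and (b) under a leftmost selection strategy. It is purely illustrative (not the authors' implementation): constraints are strings, atoms are (predicate, arguments) pairs, each predicate has a single rule, and renaming of rule variables is omitted.

    # Illustrative sketch of the derivation-step semantics (our own encoding).
    # A primitive constraint is a string; an atom is (predicate_name, args);
    # solv maps a list of primitive constraints to "true"/"false"/"unknown".

    def step(goal, store, program, solv):
        """One derivation step on the state <goal | store>, leftmost selection."""
        lit, rest = goal[0], goal[1:]
        if isinstance(lit, str):                 # case (a): primitive constraint
            new_store = store + [lit]
            if solv(new_store) == "false":
                return [], new_store             # derivation finitely fails
            return rest, new_store
        name, actuals = lit                      # case (b): atom p(t1,...,tn)
        formals, body = program[name]            # renaming omitted for brevity
        eqs = ["%s=%s" % (f, a) for f, a in zip(formals, actuals)]
        return eqs + body + rest, store

    # Example: program { p(X) :- X>0 }, goal p(3), a solver that never fails.
    program = {"p": (["X"], ["X>0"])}
    goal, store = [("p", ["3"])], []
    while goal:
        goal, store = step(goal, store, program, lambda c: "unknown")
    print(store)                                 # ['X=3', 'X>0']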
3 Using a Stochastic Solver

Consider the following simple example of a stochastic constraint solver for Boolean conjunctive normal form constraints (based on an algorithm by Wu and Tang [15]).
      Bool_solve(c, ε)
          let m be the number of clauses in c
          n := ln ε / ln(1 − (1 − 1/m)^m)
          for i := 1 to n
              generate a random truth assignment θ
              if θ(c) is true return true
          endfor
          return unknown

This algorithm is a randomised polynomial time algorithm for solving a Boolean constraint c. The second argument, ε, gives a bound on its incompleteness. Applied to a satisfiable constraint c, the algorithm (assuming no individual clause is trivially false or trivially true) will return true with probability 1 − ε; this is a quite deep result of [15]. So ε is the probability that the solver returns unknown when the constraint c is satisfiable. All known complete solvers for this problem are exponential, while this algorithm is polynomial. But this solver never returns false, because it can never determine that a constraint c is unsatisfiable. Hence it seems difficult to use it in the context of a CLP system, where we need failure to prune the search space.

Because stochastic solvers in general can never determine unsatisfiability, when applied to an unsatisfiable constraint they will execute forever unless some limit is placed on them. Hence they usually have a resource limit, for example the iteration limit n in Bool_solve. Thus, we define a stochastic constraint solver as a solver that takes two arguments, a constraint and a resource bound.
Definition 1. (Stochastic Solver) A stochastic solver solv takes a constraint c and a resource limit n, and either returns

(a) true, a solution and the amount of resources used to find the solution, e.g. solv(c, n) = ⟨true, θ, m⟩ where m ≤ n, or
(b) unknown, the empty valuation and the resource limit, e.g. solv(c, n) = ⟨unknown, ε, n⟩ where ε is the empty valuation.

We also define res(⟨x1, x2, x3⟩) = x3 and truth(⟨x1, x2, x3⟩) = x1.
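For concreteness, here is a minimal Python sketch of a solver with exactly this interface, instantiated with the randomised CNF algorithm of Bool_solve above. The clause representation and helper names are our own assumptions.

    import random

    def bool_solve(clauses, variables, n):
        """Stochastic solver in the sense of Definition 1: returns a triple
        (truth value, valuation, resources used).  `clauses` is a list of
        clauses, each a list of (variable, polarity) literals."""
        for i in range(1, n + 1):
            theta = {v: random.choice([True, False]) for v in variables}
            if all(any(theta[v] == pol for (v, pol) in cl) for cl in clauses):
                return ("true", theta, i)    # solution found using i resources
        return ("unknown", {}, n)            # n <= 0 also falls through to here

    def truth(result): return result[0]
    def res(result): return result[2]

    # (x or y) and (not x or y): any valuation with y = True is a solution.
    print(bool_solve([[("x", True), ("y", True)], [("x", False), ("y", True)]],
                     ["x", "y"], 100))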
Stochastic solvers are generally randomised algorithms, hence they define a probability of finding a solution for a satisfiable constraint c. We assume that if c is a satisfiable constraint, then Pr(truth(solv(c, n)) = true) > 0 if n > 0. That is, the solver always has a chance of finding a solution given some resources. We assume that if c is an unsatisfiable constraint, then Pr(truth(solv(c, n)) = unknown) = 1 whatever resource bound n is used. That is, the solver is correct and never returns false (it is not difficult to extend the presentation here to handle solvers that return false, but it is orthogonal to the problems we tackle). We also assume, for technical convenience, that Pr(truth(solv(c, −1)) = unknown) = 1. That is, given negative resources the solver always returns unknown.

Because of the mismatch between the type of information that a stochastic solver provides and that required by a CLP system, several useful models arise for using a stochastic solver within a CLP system.
3.1 Model A

Stochastic solver as a constraint solver. The most obvious combination is to use the stochastic constraint solver to control the derivations. Clearly, because a stochastic constraint solver only ever returns true or unknown, it will never cause execution to backtrack if used directly. Hence at some stage we must assume its failure to find a solution indeed indicates that the constraint is unsatisfiable. Thus we must sometimes treat the answer unknown as false. Clearly this will lead to incompleteness, but there are already a number of sources of incompleteness for CLP systems. For general CLP programs, depth-first search and unfair literal selection lead to incompleteness. Many CLP programs to solve CSP's are guaranteed to only have finite derivations, so these problems cannot arise. But, even though guaranteed complete, the search for a solution may take too long, leading to a kind of "in practice" incompleteness. Hence we should be willing to accept the incompleteness of model A if it is not too great.
3.2 Model B

Stochastic constraint solver as a solution finder at the end of a derivation. The usual approach to solving CSP's in a CLP system is with a propagation solver. Propagation solvers are weak and do not detect many cases of unsatisfiability. Usually propagation solvers can only answer false or unknown; they can answer true when there is a unique value for each of the variables in the constraint (that is, the solver can check that a valuation is a solution). Because of this, CLP programs for solving CSP's typically have the following form:

      goal(Vs) :- setupvars(Vs), constrain(Vs), labelling(Vs).
First the variables are declared and then constrained, and finally the labelling predicate performs an enumerative search for a valuation, by setting each variable in turn to each of its possible values. The enumerative search is usually where most of the computation time of the program is spent. Enumerative searches, even with clever heuristics to order the search well, can perform very poorly; hence it is worth considering using a stochastic solver as a valuation finder. For model B, the constraint check in a derivation is performed by a propagation solver during each derivation step of type (a), and then whenever a successful derivation is found, e.g. ⟨G0 | true⟩ ⇒ ⋯ ⇒ ⟨□ | cn⟩, the stochastic solver is invoked on cn to find a solution. In this way the labelling part of the program is not required.
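A minimal sketch of the Model B control flow, assuming a hypothetical propagation solver `propagate` (answering "false" or "unknown") and a hypothetical `stochastic_solve` with the triple-returning interface of Definition 1:

    def run_model_b(constraints, propagate, stochastic_solve, resources):
        """Model B sketch: the propagation solver checks each constraint as it
        is collected; the stochastic solver is invoked only once a successful
        derivation has been reached, replacing the labelling search."""
        store = []
        for c in constraints:            # the type (a) steps of one derivation
            store.append(c)
            if propagate(store) == "false":
                return "failed"          # the engine would backtrack here
        return stochastic_solve(store, resources)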
3.3 Model C

Stochastic solver augmented by a propagation solver. This model is analogous to model A. At each derivation step the constraint is checked by the stochastic solver, and by a propagation solver. The role of the propagation solver is to detect unsatisfiability (which, when it is able to detect it, can be found much faster than by the stochastic solver). In addition, the propagation solver can communicate extra constraint information it derives to the stochastic constraint solver to improve its behaviour. The obvious example is communicating the domains of variables: whenever the propagation solver can reduce the domain of a variable, it can inform the stochastic solver of this reduced domain. This reduces the search space of the stochastic solver.
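The cooperation in Model C can be sketched as follows; `propagate` is again a hypothetical propagation solver, here assumed to return either "false" or a dictionary of (possibly reduced) variable domains that is forwarded to the stochastic solver:

    def check_model_c(store, domains, propagate, stochastic_solve, resources):
        """Model C sketch: at each type (a) step the propagation solver runs
        first; any domain reductions it finds shrink the search space handed
        to the stochastic solver."""
        reduced = propagate(store, domains)
        if reduced == "false":
            return "false", domains      # propagation detected unsatisfiability
        verdict, theta, used = stochastic_solve(store, reduced, resources)
        return verdict, reduced          # "true", or "unknown" for the usage
                                         # strategy to interpret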
3.4 Model D

A stochastic solver is used in parallel with a propagation solver. The stochastic solver is used to give information about success, answering either true or unknown. The propagation solver is used to give information about failure, answering false or unknown. Constraints are sent to both solvers independently by the run-time engine. When the stochastic solver answers true the run-time engine continues the derivation immediately, without waiting for the propagation solver. When the propagation solver answers false the run-time engine backtracks immediately, without waiting for the stochastic solver. Otherwise the run-time engine waits until both answer unknown before execution continues. A diagram of the interaction is given in Figure 1.

The combination of solvers working somewhat independently gives advantages to both: first, the stochastic solver does not need to calculate every failure (as in model C); but additionally, the propagation solver does not need to complete propagation of constraints for every satisfiable constraint store. Consider executing a goal ⟨c1, c2, c3, c4 | c⟩ where c ∧ c1 ∧ c2 ∧ c3 is satisfiable, but c ∧ c1 ∧ c2 ∧ c3 ∧ c4 is not. Then suppose the stochastic solver quickly finds solutions for c ∧ c1, c ∧ c1 ∧ c2 and c ∧ c1 ∧ c2 ∧ c3 (for example, if its solution for c is also a solution for these constraints). Before the propagation solver can finish propagation for c ∧ c1, execution has continued, adding the constraints c2, c3 and c4. Now the propagation solver may be able to detect the unsatisfiability
of c ∧ c1 ∧ c2 ∧ c3 ∧ c4 before it completes the propagation required for solving c ∧ c1 ∧ c2 ∧ c3. Overall, the failure of the derivation will then be detected faster.

[Fig. 1. A stochastic solver in parallel with a propagation solver. The run-time engine sends constraints to both solvers; the stochastic solver reports true to the engine, and the propagation solver reports false.]
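The parallel composition of Figure 1 can be sketched with two threads. This is our own rendering, assuming the same hypothetical solver interfaces as above:

    import queue, threading

    def check_model_d(store, stochastic_solve, propagate, resources):
        """Model D sketch: run both solvers in parallel; continue as soon as
        the stochastic solver says "true", backtrack as soon as the
        propagation solver says "false", otherwise wait for both."""
        answers = queue.Queue()
        threading.Thread(target=lambda: answers.put(
            ("stochastic", stochastic_solve(store, resources)[0])),
            daemon=True).start()
        threading.Thread(target=lambda: answers.put(
            ("propagation", propagate(store))), daemon=True).start()
        for _ in range(2):
            who, ans = answers.get()
            if who == "stochastic" and ans == "true":
                return "continue"        # do not wait for propagation
            if who == "propagation" and ans == "false":
                return "backtrack"       # do not wait for the stochastic solver
        return "continue"                # both answered unknown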
Usage Strategies

When a stochastic constraint solver is required to communicate with the run-time engine, as in models A, C and D, we need a way of specifying, at each derivation step of type (a), how many resources should be used. Similarly, for model B we need a way of specifying how many resources are used at the end of each derivation. This is the role of a usage strategy. A usage strategy U is a function on derivations which determines how a stochastic solver is to be used. Given a partial derivation D ending in a derivation step of type (a), ⟨G | true⟩ ⇒(a) ⋯ ⇒(a) ⟨Gn | cn⟩, the usage strategy U(D) gives (a) a resource limit R to be used in solving cn, and (b) whether an unknown result is considered unknown or false. Many usage strategies are possible, for example (both are sketched in code after this list):

– Fixed Limit: the simplest usage strategy, denoted Ufl(R):

      Ufl(R)(D) = ⟨R, false⟩

  where at each step R resources are allocated, and failure to find a solution with R resources is considered to be "proof" that the constraint is unsatisfiable.

– Derivation Limit: a more complicated strategy, denoted Udl(R):

      Udl(R)(⟨G | true⟩) = ⟨R, false⟩
      Udl(R)(D ⇒(a) ⟨Gn+1 | cn+1⟩) = ⟨m, false⟩

  where Udl(R)(D) = ⟨l, false⟩ and m = l − res(solv(cn, l)). This encodes a strategy where in each derivation R resources are available for use. After each step in the derivation, the resources used, res(solv(cn, l)), are subtracted from the remaining resources.
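A sketch of the two strategies above (our own encoding; the engine is assumed to report the resources consumed by each solver call back to the derivation limit strategy):

    def fixed_limit(R):
        """U_fl(R): R resources at every step; unknown is treated as false."""
        def U(derivation):
            return (R, "false")
        return U

    def derivation_limit(R):
        """U_dl(R): a budget of R resources for the whole derivation; the
        engine calls `charge` with res(solv(c_n, l)) after each solver call."""
        state = {"remaining": R}
        def U(derivation):
            return (state["remaining"], "false")
        def charge(used):
            state["remaining"] -= used
        return U, charge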
Many other strategies are possible, for example where the fixed limit varies with the length of the derivation. Note that model B can be seen as a particular case of model C: given a usage strategy Ub for model B (which clearly only assigns resources for successful derivations), the following usage strategy for model C yields equivalent derivations:

      Uc2b(⟨G | true⟩ ⇒(a) ⋯ ⇒(a) ⟨□ | c⟩) = Ub(⟨G | true⟩ ⇒(a) ⋯ ⇒(a) ⟨□ | c⟩)
      Uc2b(⟨G | true⟩ ⇒(a) ⋯ ⇒(a) ⟨Gn | cn⟩) = ⟨−1, unknown⟩   when Gn ≠ □
4 Semantics

In this section we give theoretical results for the use of the various models.
4.1 Success

Regardless of which model and usage strategy U is used, we have the following soundness result for successful derivations. Note that the result is slightly stronger than the usual soundness result (see [7]) because stochastic solvers return solutions.

Theorem 2. For models A, B, C or D, if ⟨G | true⟩ ⇒ ⋯ ⇒ ⟨□ | c⟩ is a derivation achieved using stochastic solver solv and usage strategy U, then P ∧ A ⊨ c → G, and if solv(c, k) = ⟨true, θ, l⟩ where U(⟨G | true⟩ ⇒ ⋯ ⇒ ⟨□ | c⟩) = ⟨k, −⟩ then P ∧ A ⊨ θ(G).

Given we are using a probabilistic solver we cannot expect to have a strong completeness result. In practice CLP systems are already incomplete because of the fixed search strategy, so completeness may not be seen as vital. But we would like to have some form of completeness.

Let us define Pr(⟨G | true⟩ ⇒(a) ⋯ ⇒(a) ⟨Gn | cn⟩, U) to be the probability that the solver applied to cn using usage strategy U returns false when cn is in fact satisfiable. Given a sequence

      ⟨G | true⟩ ⇒(a) ⟨G1 | c1⟩ ⇒(a) ⋯ ⇒(a) ⟨□ | cn⟩

where each ci is satisfiable, the overall probability that this sequence succeeds using strategy U is

      (1 − Pr(⟨G | true⟩ ⇒(a) ⟨G1 | c1⟩, U)) × ⋯ × (1 − Pr(⟨G | true⟩ ⇒(a) ⋯ ⇒(a) ⟨□ | cn⟩, U))

Clearly, as long as the probability of false failure never reaches 1 there is a finite probability that the sequence succeeds. We can guarantee this if the usage
strategy provides enough resources so that the solver at each stage has a nonzero probability of finding a solution. We call such combinations of solvers and usage strategies non-starving. Because we assume for any satisfiable constraint c that Pr(truth(solv(c, n)) = true) > 0 if n > 0, any fixed limit strategy is non-starving.

Theorem 3. Suppose program P is executed under model A, B or C using non-starving stochastic solver solv and usage strategy U. If P ∧ A ⊨ ∃̃G then there is a non-zero probability that there is a successful derivation.

For model D we have the usual completeness result for CLP, because using model D we never assume the stochastic solver has determined unsatisfiability when this is not actually the case. For the other models we would like to ensure a greater degree of completeness. There exist usage strategies that can ensure this. Suppose

      Pr(⟨G | true⟩ ⇒(a) ⋯ ⇒(a) ⟨Gm | cm⟩, U) ≤ (1/2)^(m+k)

for all derivations containing m derivation steps of type (a), where k ≥ 1. Then the probability that a derivation with l steps of type (a), where all constraints are satisfiable, is failed is less than

      (1/2)^(k+1) + (1/2)^(k+2) + ⋯ + (1/2)^(k+l)

In other words, the derivation will succeed using solv and usage strategy U with at least probability 1 − (1/2)^k. Consider the following stochastic solver, based on that at the beginning of this section.

      Bool_solve(c, n)
          for i := 1 to n
              guess a truth assignment θ for c
              if θ(c) is true return ⟨true, θ, i⟩
          endfor
          return ⟨unknown, ε, n⟩
For each constraint solve, the probability of an unknown answer for a satisfiable constraint is (from [15])

      ε = (1 − (1 − 1/m)^m)^n

where m is the size of c in terms of the number of clauses. We will assume that each primitive constraint is a single clause; thus the size of constraint cm is less than or equal to m. Now 0.25 ≤ (1 − 1/m)^m < 0.5. Hence ε < 0.75^n. If usage strategy U chooses the resources to be used at the m-th step in a derivation as

      n > (m + k) ln(1/2) / ln(3/4),   e.g. n > 2.5(m + k),

then Pr(⟨G | true⟩ ⇒(a) ⋯ ⇒(a) ⟨Gm | cm⟩, U) ≤ (1/2)^(m+k).
Thus we can expect a probability of at least 1 − (1/2)^k of finding any derivation that should succeed (we have not taken into account the dependencies between successive constraint solves in this simplistic analysis). Note that the overall resources used by a derivation are quadratic in its length using this strategy.
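The resource choice can be checked numerically. A small sanity check (our own, using the bound ε < 0.75^n derived above):

    import math

    def resources_for_step(m, k):
        """Smallest n with 0.75**n <= 0.5**(m+k), i.e. n > (m+k)ln(1/2)/ln(3/4)."""
        return math.ceil((m + k) * math.log(0.5) / math.log(0.75)) + 1

    for m in (1, 10, 100):
        n = resources_for_step(m, k=3)
        print(m, n, 0.75 ** n <= 0.5 ** (m + 3))   # the bound holds: True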
4.2 Finite Failure

Unfortunately, because of the nature of the solvers, we are unable to have any soundness result for finite failure for models A, B and C. This is because we cannot ensure that there is no false failure.
Definition 4. (False failure) A goal G is falsely failed for usage strategy U and solver solv if there is a sequence of states ⟨G | true⟩ ⇒ ⋯ ⇒ ⟨□ | cn⟩ where each ci is satisfiable, but the execution of this derivation using solv and U fails.
For model D, of course, we have the usual soundness of finite failure result for CLP. In contrast, for models A and C we do have strong completeness results for finite failure provided the usage strategy satisfies a basic condition: a usage strategy is eventually failing if there is no infinite derivation D where U(Di) = ⟨−, unknown⟩ for each finite prefix Di of D. That is, eventually on every derivation the strategy will consider an unknown answer by the solver to be false. The theorem below makes use of the program completion P*, an iff version of the program that is required to have negative consequences; see [7] for details.

Theorem 5. If P* ∧ A ⊨ ¬∃̃G then for models A and C, if we use a fair selection strategy, any derivation for G using solver solv and eventually failing usage strategy U is finitely failed.

Note that such a result does not usually hold for CLP systems using propagation solvers, because these solvers are incomplete.
4.3 Iterative Deepening

For models A, B and C, the possibility of false failure on all successful derivations means that a goal which has a solution may falsely fail. In particular, given a derivation limit usage strategy for models A or C, we may expect to falsely fail on all successful derivations if the limit is too small. This leads to the idea of re-executing the goal with more resources, to avoid the false failure and find a success. Thus we consider a form of iterative deepening approach.

Iterative deepening works by re-executing a finitely failed goal using greater resources. For example, we can double the limits in the usage strategy. So when a goal G fails using fixed limit strategy Ufl(1000), we re-execute the goal using strategy Ufl(2000). If that execution fails to find an answer then we re-execute with usage strategy Ufl(4000), etc. The aim of the approach is that a solution will be found more quickly by automatically using the minimal number of resources required to avoid false failure. If we give too many resources to solve a problem, execution will be slowed, because failure is determined by resource exhaustion; hence too many resources slow execution. Iterative deepening automatically determines the (nearly) least number of resources required to find a solution.

Clearly, using an iterative deepening approach there is no possibility of finite failure, since a goal with no solutions will run infinitely. Technically we also need to ensure that no infinite derivations are possible, so that, for example, using strategy Ufl(R) we never consider derivations of length greater than R; otherwise we cannot guarantee a goal will either succeed or fail. In practice the resource limits will usually manage this, but in theory we must use a depth-bounded usage strategy. A depth-bounded strategy is such that there exists a number L such that for any derivation of length m ≥ L, U(⟨G | true⟩ ⇒(a) ⋯ ⇒(a) ⟨Gm | cm⟩) = ⟨−1, false⟩. In other words, after some length of derivation the strategy allows only negative resources to the solver, ensuring it returns unknown, and this causes failure. For example, we can define a bounded-depth fixed limit strategy as

      Ubdfl(R)(⟨G | true⟩ ⇒(a) ⋯ ⇒(a) ⟨Gm | cm⟩) = ⟨R, false⟩    if m < R
                                                  = ⟨−1, false⟩   if m ≥ R

Thus any derivation of length ≥ R will be failed.
Using an iterative deepening approach we can ensure that eventually we will find a success if one exists. Note that iterative deepening does not make sense in the context of model D, because model D never falsely fails.

Theorem 6. If P ∧ A ⊨ ∃̃G then executing goal G under models A or C using iterative deepening and a depth-bounded usage strategy U will find a successful derivation.
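The iterative deepening driver is a simple loop. A minimal sketch, assuming a hypothetical `execute(goal, R)` that runs the goal under the depth-bounded strategy Ubdfl(R) and returns an answer constraint or None:

    def iterative_deepening(goal, execute, initial_limit=1000):
        """Re-execute a failed goal with doubled resources until it succeeds.
        A goal with no solutions will loop forever, as noted above."""
        R = initial_limit
        while True:
            answer = execute(goal, R)
            if answer is not None:
                return answer
            R *= 2                       # failure may have been false failure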
5 A Constraint Solver: GENET

We have chosen GENET [13, 2], a generic neural network simulator, to demonstrate the feasibility of our proposal. GENET can be used to solve binary CSP's with finite domains. A GENET network consists of a cluster of nodes for each domain variable in a CSP. Each node denotes a value (label) in the corresponding variable's domain, and each constraint is represented by a set of inhibitory connections between nodes with incompatible values.

In the original GENET model, the entire network must be constructed before computation starts. However, a constraint solver in a CLP system must support efficient incremental execution, since new primitive constraints are added to an existing solvable constraint during a derivation. Therefore, an efficient incremental version of GENET is necessary. A naive but efficient incremental GENET, called I-GENET, adds new primitive constraints and variables to the network as they are collected. Its incrementality originates from the re-use of the connection weights, which are computed using the heuristic learning rule in previous cycles. Thus, the network is trained while it is being built incrementally. A more detailed discussion of the incremental GENET is contained in [12].
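As we understand the GENET convergence cycle from [13, 2], each variable repeatedly moves to the value in its domain with the least weighted conflict, and at a local minimum the weights of the violated connections are reinforced. A minimal Python sketch (the data representations and the positive-penalty encoding are our own assumptions; the asynchronous network details are elided):

    import random

    def genet_step(assign, domains, violations, weight):
        """One update sweep: each variable moves to a value of minimum total
        penalty, where violations(v, d, assign) yields the constraints
        violated if variable v takes value d."""
        for v in assign:
            cost = {d: sum(weight[c] for c in violations(v, d, assign))
                    for d in domains[v]}
            best = min(cost.values())
            assign[v] = random.choice([d for d in domains[v] if cost[d] == best])

    def genet_learn(assign, violated, weight):
        """Heuristic learning at a local minimum: increase the penalty of
        every currently violated constraint to escape the minimum."""
        for c in violated(assign):
            weight[c] += 1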
6 Experimental Results

To compare the different models of using stochastic solvers, we have used ECLiPSe version 3.5.1 and GNU C 2.6.3 running on a SPARCstation 1000 to build prototypes for models A, B and C, using the backtrackable I-GENET as the stochastic constraint solver and the ECLiPSe fd library as a propagation solver. Note that the first-fail principle (i.e. choose the variable with the smallest domain to be instantiated first) is used with forward checking to improve the performance of search in the ECLiPSe system. We also include another example of model A, using a version of GENET extended to use lazy arc-consistency [11] to reduce the domains of variables. Using this solver (denoted model A′) has an effect similar to model C, where the reduction of domains comes from the propagation solver. Construction of a model D solver is not yet completed.

We give results of preliminary experiments comparing the different models using usage strategy Ufl(1000), and compare our systems against a traditional propagation-based CLP approach on a set of CSP's with and without disjunctive constraints. The Hamiltonian path calculation and disjunctive graph colouring are examples of CSP's with disjunctive constraints, while the N-queens and permutations generation problems are examples of CSP's without disjunctive constraints. In all the test cases, the CPU time for each model is the median over 10 successful runs to find a solution. All runs for the different models were terminated if the constraint solvers used 1000000 resources without finding a solution, and we aborted the execution of any ECLiPSe program if it took more than 10 hours. We use a "—" symbol to mean there were no successful runs, and a "?" symbol to denote that the execution took more than 10 hours.
Hamiltonian Path Problems
Given a graph G of n vertices, the Hamiltonian path problem is to find a Hamiltonian path of G, in which each of the n vertices of G is visited once and only once [10]. The Hamiltonian path problem is a practical CSP which is very similar to the route planning problems faced by circuses, travelling road companies and travelling salesmen. It can be regarded as a non-optimising version of the well-known travelling salesman problem, in which the salesman does not need to return to the original city.

The formulation of the program is the natural one for constraint programming, where each node i is associated with a variable Vi. A path is an assignment of an order of visiting the nodes (different values from 1 to n) to each variable, such that there is an edge from each node to its predecessor, and predecessors are chosen by disjunctive search. This formulation easily handles ad hoc constraints on the problem, such as requiring the distance between two given nodes to be at least some distance k.

      Problem   ECLiPSe   Model A      Model B   Model C       Model A′
      10        0.040     4.195 (11)   —         2.565 (15.5)  0.460 (11)
      20        1090      58.27 (21)   —         50.65 (20)    31.75 (20)
      30        ?         5493 (46)    —         4287 (46)     2687 (46)

The table above shows the execution time of ECLiPSe and the different models on Hamiltonian path problems using fixed limit strategies, with the average number of backtracks in the search given in parentheses. The first two problems are coded from interesting real-life examples in graph theory [10], and the last is a modified example obtained from [8]. In general, the stochastic models A and C outperform ECLiPSe and model B on all but the smallest example. This is because the search space for models A and C is much smaller than for ECLiPSe, since they determine unsatisfiability much earlier. Model B fails in all cases since the search space is so large, and it requires exhaustion of resources to initiate backtracking. For these problems model C betters model A, showing that the propagation solver can provide useful information for the stochastic solver during the search. In these problems the improved stochastic solver (model A′) finds useful domain reduction information without the overhead of communication with the propagation solver.
Disjunctive Graph-colouring Problems
The disjunctive graph-colouring problem is to colour a graph, possibly non-planar, with a number of hyper-arcs connecting nodes. A hyper-arc {(i1, j1), (i2, j2)} specifies that at least one of the pairs of nodes (ik, jk) must be coloured differently. The disjunctive graph-colouring problem has wide applicability in timetabling, scheduling and production planning.

      Nodes   Colours   ECLiPSe        Model A     Model B     Model C     Model A′
      10      3         19.96 (6428)   1.580 (4)   —           1.645 (4)   1.820 (4)
      20      4         1362 (65535)   0.690 (1)   —           0.770 (1)   0.770 (1)
      30      5         0.050 (0)      0.115 (0)   0.095 (0)   0.210 (0)   0.120 (0)
      40      5         0.070 (0)      0.145 (0)   0.105 (0)   0.220 (0)   0.155 (0)
      50      5         0.090 (0)      0.200 (0)   0.140 (0)   0.345 (0)   0.225 (0)

The table above shows the CPU time (with backtracks in parentheses) of ECLiPSe and the different models on a set of small disjunctive graph-colouring problems. In general, each of the approaches where the stochastic solver controls the search (models A and C) outperforms ECLiPSe when the problem requires backtracking. This is because they do much less backtracking, since they do not require a backtracking enumeration search. This is an example where no useful domain reduction information is discovered by model C or model A′; hence they do not improve on model A, and suffer some overhead.
N-queens Problem
The N-queens problem is to place N queens on an N × N chess board so that no queen can be attacked. A queen can be attacked if it is on the same column, row or diagonal as any other queen. It is a standard benchmark for CSP's since the size N can be increased without limit.

      No. of Queens   ECLiPSe   Model A   Model B   Model C   Model A′
      10              0.040     0.200     0.195     0.320     0.290
      20              0.220     3.665     2.195     4.200     4.020
      30              0.880     23.79     10.11     27.43     23.91
      40              0.810     93.12     33.58     96.04     95.82
      50              63.42     253.3     73.09     268.8     257.1
      60              3.580     594.0     146.4     625.5     605.8
      70              66.67     1172      273.9     1242      1231

The above table shows the CPU time of ECLiPSe and the different models on N-queens problems where N ranges from 10 to 70. In general, ECLiPSe outperforms the stochastic approaches because the problems do not require backtracking on constraints, and using the first-fail principle significantly improves the performance of the enumerative search strategy. Model B performs best among all the models since the problem does not involve any search on constraints, and it does not need to expend resources on derivation steps except the last one. Again, models A′ and C do extra work without gaining any useful information for these problems; thus they are slower than model A.
Permutations Generation
Given a permutation f on the integers from 1 to n, we define the monotonies of f as a vector (m1, m2, …, m(n−1)), where mi equals one if the value of the (i+1)-th element in the permutation is greater than that of the i-th element and zero otherwise, and the vector (a1, a2, …, a(n−1)) of advances of f such that ai = 1 if the integer f(i) + 1 is placed in the permutation to the right of f(i), and 0 otherwise (a short code rendering of these vectors appears at the end of this subsection). The problem is to construct all the permutations of a given range that admit a given vector of monotonies and a given vector of advances.

      Length of List   ECLiPSe   Model A   Model B   Model C   Model A′
      9                0.010     0.210     0.115     0.190     0.210
      10               0.020     0.325     0.140     0.455     0.355
      20               8.440     6.470     1.490     6.295     7.035
      30               ?         50.61     9.730     45.69     52.77

The table above shows the performance of ECLiPSe and the different models on permutations generation, given a number of vectors of different sizes for monotonies and advances. The first example is taken from [14], while the others are arbitrary examples of permutation lists with lengths from 10 to 30. For the first two test cases, ECLiPSe performs better than our models since the search space is relatively small. But when the length of the permutation list grows larger, the stochastic approaches outperform ECLiPSe. In particular, model B performs best since the problems do not require backtracking on constraints. For these problems model C, in general, performs better than models A and A′, since it gains useful information from the propagation solver, which cannot be deduced by model A′.
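To make the definition above concrete, here is a short Python rendering of the two vectors (the permutation f is given as the list [f(1), …, f(n)]):

    def monotonies(f):
        """m_i = 1 if f(i+1) > f(i), else 0, for i = 1..n-1."""
        return [1 if f[i + 1] > f[i] else 0 for i in range(len(f) - 1)]

    def advances(f):
        """a_i = 1 if the value f(i)+1 occurs to the right of f(i), else 0."""
        pos = {value: i for i, value in enumerate(f)}
        return [1 if f[i] + 1 in pos and pos[f[i] + 1] > i else 0
                for i in range(len(f) - 1)]

    print(monotonies([3, 1, 2, 4]))   # [0, 1, 1]
    print(advances([3, 1, 2, 4]))     # [1, 1, 0]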
In conclusion, using a stochastic solver to control search (models A and C) is of benefit for disjunctive problems where the search space of constraints is large. For non-disjunctive problems model B is most effective, and outperforms enumerative search in many instances. Cooperating solvers (model C) can result in significant improvements, and usually do not add too much overhead. Improving the underlying stochastic solver (model A′) is another worthwhile approach. No model is uniformly better, and the user should be prepared to experiment.
References

1. E.H.L. Aarts and J.H.M. Korst. Boltzmann machines for travelling salesman problems. European Journal of Operational Research, 39:79–95, 1989.
2. A. Davenport, E.P.K. Tsang, C.J. Wang, and K. Zhu. GENET: A connectionist architecture for solving constraint satisfaction problems by iterative improvement. In Proceedings of AAAI'94, 325–330, 1994.
3. M. Dincbas, H. Simonis, and P. Van Hentenryck. Solving the car-sequencing problem in constraint logic programming. In Procs. of the European Conf. on Art. Int., 290–295, 1988.
4. M. Dincbas, H. Simonis, and P. Van Hentenryck. Solving large combinatorial problems in logic programming. Journal of Logic Programming, 8:75–93, 1990.
5. J.J. Hopfield and D. Tank. "Neural" computation of decisions in optimization problems. Biological Cybernetics, 52:141–152, 1985.
6. A.R-A. Illera and J. Ortiz. Labelling in CLP(FD) with evolutionary programming. In Procs. of GULP-PRODE 95, 569–590, 1995.
7. Joxan Jaffar and Michael J. Maher. Constraint logic programming: A survey. Journal of Logic Programming, Volume 19/20, May 1994.
8. J.H.M. Lee, H.F. Leung and H.W. Won. Extending GENET for non-binary CSP's. In Proceedings of the Seventh IEEE International Conference on Tools with Artificial Intelligence, 338–343, November 1995.
9. J.H.M. Lee and V.W.L. Tam. A framework for integrating artificial neural networks and logic programming. International Journal on Artificial Intelligence Tools, 4(1&2):3–32, June 1995.
10. Ronald E. Prather. Discrete Mathematical Structures for Computer Science. Houghton Mifflin, 1976.
11. P. Stuckey and V. Tam. Extending GENET with lazy arc consistency. Technical Report 96/8, Department of Computer Science, University of Melbourne.
12. V.W.L. Tam. Integrating artificial neural networks and constraint logic programming. Master's thesis, Department of Computer Science, The Chinese University of Hong Kong, 1995.
13. E.P.K. Tsang and C.J. Wang. A generic neural network approach for constraint satisfaction problems. In G. Taylor, editor, Neural Network Applications, 12–22. Springer-Verlag, 1992.
14. P. Van Hentenryck. Constraint Satisfaction in Logic Programming. The MIT Press, 1989.
15. L.C. Wu and C.Y. Tang. Solving the satisfiability problem by using randomized approach. Information Processing Letters, 41:295–299, 1992.