Proceedings CPAIOR’02

Randomised Backtracking for Linear Pseudo-Boolean Constraint Problems

Steven Prestwich
Cork Constraint Computation Centre
Department of Computer Science
University College, Cork, Ireland
email: [email protected]

Abstract

Many constraint satisfaction and optimisation problems can be expressed using linear constraints on pseudo-Boolean (0/1) variables. Problems expressed in this form are usually solved by integer programming techniques, but good results have also been obtained using generalisations of SAT algorithms based on both backtracking and local search. A recent class of algorithm uses randomised backtracking to combine constraint propagation with local search-like scalability, at the cost of completeness. This paper describes such an algorithm for linear pseudo-Boolean constraint problems. In experiments it compares well with state-of-the-art algorithms on hardware verification and balanced incomplete block design generation, and finds improved solutions for three instances of the Social Golfer Problem.

1 Introduction

Propositional satisfiability (SAT) algorithms have been successfully applied to many real-world combinatorial problems. SAT is highly expressive and was the first problem shown to be NP-complete. However, it is not sufficiently expressive to model some problems in a practical sense, because the models become too large. Some researchers have proposed more general languages including nested equivalences [9], exclusive-or [2] and disjunctions of conjunctions of literals [25]. A recent language called Regular-SAT [3] generalises the notion of literal to finite domains. An older language that subsumes SAT is linear constraints on 0/1 variables, which we shall refer to as PB (the term pseudo-Boolean is sometimes used). PB problems have been solved for decades using integer programming techniques from Operations Research. Broadly speaking, Operations Research is most successful at cost reasoning while Constraint Programming specialises in feasibility reasoning. For problems in which feasibility is hard to establish, constraint-based approaches therefore seem appropriate, and interesting results have recently been obtained by generalising SAT solvers to PB. The classical SAT backtracking algorithm is the Davis-Putnam procedure in Loveland's form (DLL) [6], which uses depth-first search with inference rules such as unit propagation. Many modern systematic SAT

solvers are forms of DLL. It was extended to PB by Barth [1] and performed well on several integer programming benchmarks. However, like any depth-first search algorithm, DLL has poor scalability on some classes of problem. A local search approach often scales much better to large problems, and early examples of such algorithms for SAT are GSAT [20] and Gu’s algorithms [8]. GSAT was further improved by the addition of a random walk heuristic, giving a family of algorithms called WSAT. Walser [24] extended WSAT to PB, and the WSAT(PB) algorithm performed well on radar surveillance benchmarks and the Progressive Party Problem. Besides systematic backtracking and local search, a third class of algorithm is randomised backtracking. The aim is to combine backtracker-style constraint handling with the scalability of local search, an approach demonstrated on SAT and other combinatorial problems [14, 16, 17, 18] (see these papers for discussions of the relationships between randomised backtracking and other search algorithms). In this paper a randomised backtracker for SAT is generalised to PB and evaluated on three classes of problem: recent SAT-encoded hardware verification problems, balanced incomplete block design generation, and a scheduling problem called the Social Golfer Problem. Section 2 describes the algorithm, Section 3 evaluates it experimentally, and Section 4 concludes the paper.

2 Randomised backtracking for PB

PB constraints have the form

  =        n   <  X > d w i li    i=1   ≤      ≥

where the weights wi and the constant d are (possibly negative) integers, a literal li is either a variable vi or its negation ¬vi = 1 − vi, and each variable vi ∈ {0, 1}. Barth [1] shows that any PB constraint can be transformed into a normal form involving only ≥ and positive constants. To simplify the description and implementation of the new algorithm we do the same, but find it more convenient to think in terms of ≤ constraints. As an example of transformation, consider the constraint 3v1 + 2v2 − v3 = 2. This expands to 3v1 + 2v2 − v3 ≤ 2 and 3v1 + 2v2 − v3 ≥ 2. In the first inequality the negative weight can be removed by using the fact that −c·li is equivalent to c·¬li − c, giving 3v1 + 2v2 + ¬v3 ≤ 3. The second inequality can be expressed as −3v1 − 2v2 + v3 ≤ −2. Again the negative weights can be removed, giving 3¬v1 + 2¬v2 + v3 ≤ 3. For convenience we shall state constraints in their general form, but they are transformed before being passed to the algorithm.

Randomised backtracking for SAT has been described in previous papers [14, 17] so here we omit some details and concentrate on its extension to PB. The algorithm begins like a standard backtracker by selecting a variable using a heuristic, assigning a value to it, performing constraint propagation where possible, then selecting another variable; on reaching a dead-end (a variable that cannot be assigned any value without causing domain wipe-out) it backtracks then resumes variable selection. Constraint propagation occurs in a constraint w1 l1 + · · · + wn ln ≤ d as follows. Any currently unassigned variable in the constraint whose weight is greater than d − s, where s is the sum of the weights of the currently true literals, can have a domain value removed: 0 (false) if the literal is positive and 1 (true) if it is negative. This is a simple generalisation of the SAT propagation rule, which occurs when every literal but one has an assigned variable and is false, and where all weights are 1.
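The transformation to ≤-normal form with positive weights can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the function name and the (variable, polarity) literal representation are our own.

```python
def normalise_leq(weights, lits, d):
    """Rewrite sum(w_i * l_i) <= d so that every weight is positive.

    weights : list of (possibly negative) integer weights
    lits    : list of (var, positive) pairs; (v, False) denotes the
              negated literal ¬v = 1 - v
    d       : integer right-hand-side constant
    """
    new_w, new_l = [], []
    for w, (v, pos) in zip(weights, lits):
        if w < 0:
            # -c * l  is equivalent to  c * (¬l) - c, so flip the
            # literal's polarity and move the constant to the right
            new_w.append(-w)
            new_l.append((v, not pos))
            d += -w
        else:
            new_w.append(w)
            new_l.append((v, pos))
    return new_w, new_l, d

# The paper's example: 3v1 + 2v2 - v3 <= 2 becomes 3v1 + 2v2 + ¬v3 <= 3
w, l, d = normalise_leq([3, 2, -1], [(1, True), (2, True), (3, True)], 2)
```

A ≥ constraint is first negated into ≤ form (as in the paper's second inequality) and then passed through the same routine.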
A simple implementation trick speeds up execution: in a preprocessing phase before search begins, sort the variables in each constraint into order of decreasing weight. Then as soon as we find a variable in a constraint whose weight is no greater than d − s we can ignore the rest of the variables in that constraint.

The novel feature of the algorithm is the choice of backtracking variable: the variables selected for unassignment are chosen randomly, or using some other heuristic that does not preserve completeness. As in Dynamic Backtracking [7] only the selected variables are unassigned, without undoing later assignments. Because of this resemblance we call this form of backtracking incomplete dynamic backtracking. Unlike Dynamic Backtracking this algorithm is not complete, and we must sometimes force the unassignment of more than one variable. We do this by adding an integer noise parameter b ≥ 1 to the algorithm and unassigning b variables at each dead-end. We also add a restart parameter r: after r backtracks the search is restarted from a randomised configuration.

The only tricky part of the implementation concerns the combination of incomplete dynamic backtracking and constraint propagation. Suppose we have reached a dead-end after assigning variables v1 . . . vk, while vk+1 . . . vn remain unassigned. We would like to unassign some arbitrary variable vu where 1 ≤ u ≤ k, leaving the domains in the state they would have been in had we assigned only v1 . . . vu−1, vu+1 . . . vk. A technique to achieve this has been described in previous papers (for example [17]). Briefly, a counter cv,d is associated with each variable-value pair (v, d), denoting how many constraints would be violated if the assignment v = d were added (or, in the case of an already-assigned v, if the current assignment were replaced by v = d). This information is maintained incrementally, and at any point during the search d ∈ dom(v) iff cv,d = 0. Thus domain sizes are maintained for both unassigned and assigned variables.
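The propagation rule with the decreasing-weight early exit can be sketched as below. This is our own hedged reconstruction: the function name and data layout are assumptions, and the paper's incremental counter maintenance is omitted for brevity.

```python
def propagate(constraint, assignment):
    """Forced assignments for one constraint w1*l1 + ... + wn*ln <= d.

    constraint : (d, [(w, var, positive)]) with weights already sorted
                 into decreasing order in a preprocessing phase
    assignment : dict mapping assigned variables to 0/1
    """
    d, terms = constraint
    # s = sum of the weights of literals that are currently true
    s = 0
    for w, v, pos in terms:
        if v in assignment:
            lit_val = assignment[v] if pos else 1 - assignment[v]
            s += w * lit_val
    forced = {}
    for w, v, pos in terms:
        if v in assignment:
            continue
        if w <= d - s:
            break  # weights sorted descending: the rest are also safe
        # making this literal true would exceed d, so it must be false:
        forced[v] = 0 if pos else 1
    return forced

# Example on the constraint 3v1 + 2v2 + ¬v3 <= 3 with v1 = 1:
# v2 is forced to 0 and v3 is forced to 1
forced = propagate((3, [(3, 1, True), (2, 2, True), (1, 3, False)]), {1: 1})
```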
We call an algorithm based on incomplete dynamic backtracking constrained local search (CLS) because of (i) its local search-like scalability (for example on random 3-SAT problems [14]) and (ii) the fact that, like a backtracker, it never violates a constraint. This particular CLS implementation is called Saturn to reflect its SAT origins. The new implementation is faster and its variable selection heuristics simpler than those previously described for the SAT algorithm:

• For assignment, a variable with only one domain value remaining is randomly selected where possible, otherwise one with both domain values.

• For unassignment the reverse heuristic is used: a variable with both domain values remaining is randomly selected where possible, otherwise one with a single domain value.

These heuristics have been found to be more robust than the previous more complex versions, and can select variables in constant time. Finally, a value ordering heuristic has been found useful. It reassigns each variable to its previous assigned value where possible, otherwise to a different value. This speeds up the rediscovery of consistent partial assignments. However, to avoid stagnation it attempts to use a different (randomly-chosen) value for just one variable after each dead-end.

The algorithm described so far applies to constraint satisfaction problems expressed in PB. It can be applied to optimisation problems by solving a sequence of satisfaction problems with an increasingly tighter cost bound (this was done for the Social Golfer Problem below). An obvious heuristic is to begin the search for each solution in the neighbourhood of the previous solution, but this has not yet been implemented.
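The overall control structure — assign, detect dead-ends, unassign b random variables, restart after r backtracks — can be sketched in a heavily simplified form. This is not Saturn: it uses plain random variable and value selection in place of the domain-based heuristics above, and rechecks constraints from scratch instead of maintaining the incremental counters, so it only illustrates the search skeleton. All names are ours; constraints are in ≤-normal form with positive weights.

```python
import random

def lit_true(assign, v, pos):
    # Unassigned literals count as false: they can still be made false.
    if v not in assign:
        return False
    return assign[v] == (1 if pos else 0)

def cls_solve(nvars, constraints, b=1, r=100, max_restarts=50, seed=0):
    """Incomplete dynamic backtracking sketch for PB constraints.

    constraints : list of (d, [(w, var, positive)]) in <=-normal form
    b           : noise parameter (variables unassigned per dead-end)
    r           : restart parameter (backtracks before a restart)
    """
    rng = random.Random(seed)
    def ok(assign):
        return all(
            sum(w for w, v, pos in terms if lit_true(assign, v, pos)) <= d
            for d, terms in constraints)
    for _ in range(max_restarts):
        assign, backtracks = {}, 0
        while backtracks < r:
            if len(assign) == nvars:
                return assign  # all variables assigned, nothing violated
            v = rng.choice([x for x in range(1, nvars + 1)
                            if x not in assign])
            vals = [0, 1]
            rng.shuffle(vals)
            placed = False
            for val in vals:
                assign[v] = val
                if ok(assign):
                    placed = True
                    break
                del assign[v]
            if not placed:  # dead-end: unassign b random variables
                for u in rng.sample(list(assign), min(b, len(assign))):
                    del assign[u]
                backtracks += 1
    return None
```

Run on the paper's example 3v1 + 2v2 − v3 = 2 (as two normalised ≤ constraints), this sketch returns one of the two satisfying assignments.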

[Results-table residue: the algorithms compared include Chaff, DLM-2, DLM-3, CGRASP, QSAT, Saturn, SATO, rel_sat 1.0, Walksat, rel_sat_rand, DLM-2000, GRASP, GRASP+restarts, CLS, rel_sat 2.1, eqsatz and BDDs.]
