An Incremental Hierarchical Constraint Solver

Francisco Menezes ([email protected])

Pedro Barahona ([email protected])

Departamento de Informatica, Universidade Nova de Lisboa 2825 Monte da Caparica, PORTUGAL

Philippe Codognet ([email protected])

INRIA-Rocquencourt BP 105, 78153 Le Chesnay, FRANCE

Abstract

This paper presents an incremental method to solve hierarchies of constraints over finite domains, which borrows techniques developed for intelligent backtracking and finds locally-predicate-better solutions. A prototype implementation of this method, IHCS, was written in C and can be integrated with different programming environments; in particular, its integration with Prolog produces an instance of an HCLP language. Possible applications of IHCS are briefly illustrated with a time-tabling and a set covering problem. Because of its portability and incremental nature, IHCS is well suited for reactive systems, allowing the interactive introduction and removal of preferred constraints.¹

1 Introduction

Modelling a real-life problem by means of an explicit set of constraints always involves, to some extent, an abstraction of the context in which these constraints are to be considered. Decision support systems are usually concerned with supplying the decision maker with a set of alternative scenarios in which the most important constraints are satisfied [1]. Because not all constraints can be met by a solution, one might simply specify a set of mandatory constraints and select one of the possible solutions that satisfy them. Nevertheless, the choice of a good solution often depends on preference criteria (forming the contextual background knowledge) that are not explicit in this approach to problem formulation. A more powerful approach consists of specifying all intended constraints in some hierarchy, i.e. qualifying them either as mandatory or as mere preferences, possibly with some associated preference strength. In [2], a general scheme is proposed for Hierarchical Constraint Logic Programming (HCLP) languages, parameterized by D, the domain of the constraints, and by C, a comparator of possible solutions. The Incremental Hierarchical Constraint Solver (IHCS) that we have developed is intended as the kernel of an HCLP(FD, LPB) instance of this scheme, where FD stands for finite domains and LPB is the locally-predicate-better comparator.

Operationally, our approach diverges from the one presented in [2] because it is incremental. Instead of delaying the non-required constraints until the complete reduction of a goal, IHCS tries, in its forward phase, to satisfy constraints as soon as they appear. In case of inconsistency, a special backward algorithm is invoked. This can be seen as an "optimistic" treatment of preferred constraints (i.e. we bet that they will participate in the search for a solution), as opposed to the "pessimistic" view of [2], where non-required constraints (a source of possible inconsistency) are delayed as long as possible. The advantage is to actively use these constraints for pruning the search space. This approach nevertheless requires a specialized backward phase where dependencies between constraints, caused by their handling of common variables, are exploited to identify pertinent causes of failure. This is done much in the same way as in intelligent backtracking [9, 4], although instead of finding pertinent choice points, IHCS identifies pertinent constraints to be relaxed. Because of its portability and incremental nature, IHCS is well suited for reactive systems requiring constraint facilities, allowing the interactive introduction and removal of preferred constraints to further refine any solution found.

This paper presents a formal specification of IHCS and describes the algorithms that implement it. The paper is organized as follows. Section 2 presents the formal specification of the basic IHCS,

1 This work was developed at INRIA/Rocquencourt and at the AI Centre of UNINOVA and was funded by Délégation aux Affaires Internationales (DAI) and Junta Nacional de Investigação Científica e Tecnológica (JNICT).

as a set of transition rules over hierarchy configurations, together with supporting definitions. Section 3 describes the algorithms that perform these transitions. Some extensions to the basic IHCS are addressed in Section 4, to deal with the search for alternative solutions and the incremental removal of constraints, and to cope with disjunctions of constraints. A set-covering application and a time-tabling application are presented in Section 5, and the conclusions appear in Section 6.

2 An Incremental Hierarchical Constraint Solver - IHCS

A constraint hierarchy H is a set of labelled constraints c@level relating a set of variables ranging over finite domains; c is a constraint on some variables and level is the strength of c in the hierarchy. Level 0 corresponds to the required constraints and the other levels to the non-required (or preferred) constraints. The higher the level, the weaker the constraint. A valuation for a constraint hierarchy H is a mapping of the free variables in H to elements of their respective domains that satisfies all the required constraints (level 0). Given two valuations θ and σ, θ is locally-predicate-better than σ [2] if a) θ and σ both satisfy exactly the same number of constraints in each level up to some level k, and b) at level k+1, θ satisfies more constraints than σ.

Given a constraint hierarchy H with n variables and m constraints, V = {v_1, v_2, ..., v_n} denotes the set of variables and C = {c_1, c_2, ..., c_m} the set of constraints. In our notation, c or c_i designates any constraint from C; the index indicates the order in which the constraint was introduced into the hierarchy.

Definition 1 (Constraint Store) A constraint store S is a set of constraints ordered by introduction order, i.e., if c_i and c_j belong to S and i < j then c_i precedes c_j in S. Any operation on constraint stores preserves this ordering.

Definition 2 (Configuration) A configuration Γ of hierarchy H is a triple of disjoint constraint stores ⟨AS, RS, US⟩ such that AS ∪ RS ∪ US = C. AS is the Active Store, RS the Relaxed Store and US the Unexplored Store.

A configuration may be seen as a state of the evaluation of a hierarchy, where the active store contains all the active constraints (i.e. those that might have reduced the domains of their variables), the relaxed store is composed of the relaxed constraints, and the unexplored store is the set of candidates "queuing" for activation. We denote that a store S is consistent by S ⊬X ⊥, where X designates a network consistency algorithm (e.g. X = AC for arc-consistency [6]). S_i denotes the subset of S containing only the constraints of level i. A store S' = S ∪ {c} may be written c:S if c is the element of S' with the lowest introduction order, or S.c if c is the element of S' with the highest introduction order.

Definition 3 (Final Configuration) A configuration ⟨AS, RS, US⟩ of hierarchy H is a final configuration if, given the initial domains of the variables, the following conditions hold: 1. AS ⊬X ⊥; 2. AS ∪ {c} ⊢X ⊥ (∀c ∈ RS); 3. US = ∅.

Definition 4 (Locally-Predicate-Better) ⟨AS, RS, US⟩ is locally-predicate-better than ⟨AS', RS', US'⟩ if and only if there exists some level k > 0 such that: 1. #(AS_i ∪ US_i) = #(AS'_i ∪ US'_i) (∀i < k); 2. #(AS_k ∪ US_k) > #(AS'_k ∪ US'_k).

Definition 5 (Best Configuration) A final configuration Γ is a best configuration if there is no other final configuration Γ' which is locally-predicate-better than Γ.

Definition 6 (Promising Configuration) Γ = ⟨AS, RS, US⟩ is a promising configuration, denoted PC(Γ), if i) AS ⊬X ⊥ and ii) there is no final configuration Γ' which is locally-predicate-better than Γ.
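To make the comparator concrete, the following sketch (our illustration, not the IHCS source) compares two configurations level by level on the number of non-relaxed constraints, as in Definition 4; the counts a[k] and b[k] are assumed to hold #(AS_k ∪ US_k) for each configuration.

```c
#include <stdbool.h>

/* Returns true iff the configuration with level counts a[] is
 * locally-predicate-better than the one with counts b[]: the counts agree on
 * every level below some k and a[] is strictly larger at level k.
 * Levels are 1..nlevels; level 0 (required constraints) is never compared. */
static bool locally_predicate_better(const int a[], const int b[], int nlevels)
{
    for (int k = 1; k <= nlevels; k++) {
        if (a[k] > b[k]) return true;    /* first differing level favours a */
        if (a[k] < b[k]) return false;   /* first differing level favours b */
    }
    return false;                        /* identical counts: neither is better */
}
```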

IHCS aims at computing best configurations incrementally: given a hierarchy H with a known best configuration ⟨AS, RS, ∅⟩, if a new constraint c is inserted into H then, starting from the promising configuration ⟨AS, RS, {c}⟩, several transitions are performed until a best configuration is reached, undoing and redoing as little work as possible. The following rules define the valid transitions between configurations. If we start with a promising configuration and a solution to the hierarchy exists, the transitions always stop at the base rule with a best configuration. While the active store is consistent and the unexplored store is not empty, the forward rule keeps activating a new constraint. If a conflict is raised (the active store becomes inconsistent), the backward rule searches for an alternative promising configuration for the hierarchy, relaxing some constraints and possibly reactivating other constraints previously relaxed. More formally,

Base rule

                AS ⊬X ⊥
    ───────────────────────────────
              ⟨AS, RS, ∅⟩

Forward rule

                AS ⊬X ⊥
    ───────────────────────────────
    ⟨AS, RS, c:US⟩ → ⟨AS.c, RS, US⟩

Backward rule

    AS ⊢X ⊥   Relax ⊆ (AS ∪ US)   Activate ⊆ RS   Reset ⊆ (AS \ Relax)   PC(Γ)
    ───────────────────────────────────────────────────────────────────────────
                               ⟨AS, RS, US⟩ → Γ

    where Γ = ⟨AS', RS', US'⟩ with
        AS' = AS \ (Relax ∪ Reset)
        RS' = (RS \ Activate) ∪ Relax
        US' = (US \ Relax) ∪ Reset ∪ Activate

The main idea of the backward rule is to find the constraints pertinent to the conflict that should be relaxed (the Relax set). Since the relaxation of these constraints may also resolve previous conflicts, constraints previously relaxed may now be re-activated (the Activate set). Constraints affected by the relaxed ones must be reset (temporarily removed from the active store) in order to re-achieve maximum consistency (the Reset set). The configuration obtained, Γ = ⟨AS', RS', US'⟩, must be a promising configuration, since if no other conflict is found, future transitions performed by the forward rule will lead to the final configuration ⟨AS' ∪ US', RS', ∅⟩, which will then be a best configuration.
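The store updates of the backward rule are plain set arithmetic. The sketch below is our code, not the paper's, and assumes each store is represented as a bitset over constraint indices; it computes the new configuration ⟨AS', RS', US'⟩ from the Relax, Activate and Reset sets.

```c
#include <stdint.h>

typedef uint64_t Store;     /* bit i set <=> constraint c_i belongs to the store */

typedef struct { Store AS, RS, US; } Config;

/* One application of the backward rule's "where" clause. */
static Config backward_step(Config g, Store relax, Store activate, Store reset)
{
    Config n;
    n.AS = g.AS & ~(relax | reset);              /* AS' = AS \ (Relax u Reset)            */
    n.RS = (g.RS & ~activate) | relax;           /* RS' = (RS \ Activate) u Relax         */
    n.US = (g.US & ~relax) | reset | activate;   /* US' = (US \ Relax) u Reset u Activate */
    return n;
}
```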

3 Implementation of basic IHCS

IHCS is divided into two phases: a forward phase, performing forward transitions in which constraints are activated using an incremental arc-consistency algorithm, and a backward phase, corresponding to the backward rule, which is invoked to solve any conflict raised during the forward phase.

3.1 The Forward Algorithm

Since methods to verify strong K-consistency are exponential for K > 2 [5], weaker consistency conditions, such as arc-consistency (K = 2) [6], are usually better suited for practical implementations. The forward algorithm is an adaptation of an arc-consistency algorithm based on constraint propagation, generalized to constraints with an arbitrary number of variables. In our implementation we adapted AC-5 [10], but since arc-consistency algorithms are not the main issue of this article, a simplified algorithm is described to keep the presentation clear. The forward rule is implemented by function Forward. A counter AO is increased whenever a new constraint c is inserted into the active store, to record the activation order of that constraint (AO_c). This order will be needed in the backward phase, as will be seen later. A set of trail stacks is also kept to undo work in the backward phase when constraints are deactivated. For each constraint c, a trail stack T_c is kept to record any transformation made on data structures caused by

the activation of c. The set of all trail stacks may be seen as a single partitioned trail stack. This partitioning allows saving work that is not related to the conflict raised, as will be seen in the backward phase.
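As a rough illustration of this bookkeeping (our data layout, not the paper's C structures), each constraint can carry its activation order and its own trail partition, so that undoing the effects of one constraint leaves the work done by others untouched:

```c
enum { MAX_TRAIL = 256 };

/* One recorded domain change: which variable lost which values. */
typedef struct { int var; unsigned removed_values; } Change;

typedef struct {
    int    level;               /* 0 = required, >0 = preferred               */
    int    activation_order;    /* AO_c, set when c enters the active store   */
    Change trail[MAX_TRAIL];    /* T_c: the changes caused by activating c    */
    int    trail_top;
} ConstraintState;

static int activation_counter;  /* the global AO counter */

/* Called when constraint c moves from the unexplored to the active store. */
static void note_activation(ConstraintState *c)
{
    c->activation_order = ++activation_counter;
    c->trail_top = 0;           /* c starts with an empty trail partition */
}
```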

function Forward()
    while US = c_j:US' do
        US ← US'
        AS ← AS ∪ {c_j}
        AO ← AO + 1
        AO_cj ← AO
        Enqueue(c_j, Q)                               % Q initially empty
        while Dequeue(Q, c_k) do
            if not Revise(c_k, T_cj, Q) then
                if not Backward(c_k) then return false
    return true

Function Revise(c, T, Q) performs the removal of inconsistent values from the domains of c's variables and updates the information about dependencies between constraints (see below). All these transformations are stacked in trail T, and all active constraints over affected variables are enqueued in Q (the propagation queue). If there are no values left to satisfy c, then Revise(c, T, Q) returns false, otherwise true. When the revision of some constraint c fails, the backward algorithm examines this dependency information to find out the pertinent causes of the failure and what will be affected by the relaxation of some constraints. The domain of variable v is denoted by D_v and, for each constraint c, the set of its variables is designated by V_c (V_c ⊆ V).
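For concreteness, here is a minimal sketch of a Revise step for a binary constraint (our simplification; IHCS handles constraints of arbitrary arity and adapts AC-5). Domains are bitmasks over the values 0..31, check is the constraint's semantic test, and removed values are pushed on the given trail so the backward phase can restore them; bounds checks are omitted.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t Domain;                       /* one bit per value 0..31 */

typedef struct { int var; uint32_t removed; } TrailEntry;
typedef struct { TrailEntry e[64]; int top; } Trail;

/* Remove from *dx every value of variable xvar that has no support in *dy
 * under `check`.  Returns false on domain wipe-out (a conflict); sets
 * *changed when dependent constraints must be re-enqueued for propagation. */
static bool revise(Domain *dx, const Domain *dy, int xvar,
                   bool (*check)(int, int), Trail *t, bool *changed)
{
    uint32_t removed = 0;
    for (int a = 0; a < 32; a++) {
        if (!(*dx & (1u << a))) continue;
        bool supported = false;
        for (int b = 0; b < 32 && !supported; b++)
            if ((*dy & (1u << b)) && check(a, b))
                supported = true;
        if (!supported) removed |= 1u << a;
    }
    if (removed) {
        t->e[t->top].var = xvar;               /* record for later untrailing */
        t->e[t->top].removed = removed;
        t->top++;
        *dx &= ~removed;
        *changed = true;
    }
    return *dx != 0;
}
```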

Definition 7 (Constrainer) c (c ∈ AS) is a constrainer of v (v ∈ V_c) if it actually caused the reduction of D_v, i.e., values were removed from D_v during some revision of c.

Definition 8 (Immediate Dependent/Supporter) c_k is an immediate dependent of c_j (conversely, c_j is an immediate supporter of c_k), written c_j ↪ c_k, iff ∃v ∈ V_ck such that c_j is a constrainer of v.

Definition 9 (Immediate Related) c_j is immediately related to c_k, written c_j ↔ c_k, iff c_j ↪ c_k or c_k ↪ c_j.

A special dependency graph (DG) is used to record dependencies between constraints. The implementation of DG and its properties are explained in [7]. By analyzing DG it is possible to compute Supporters_c, the set of all supporters of c (transitive closure of ↪), and Related_c, the set of all constraints related to c (transitive closure of ↔). The dependency relation is based on local propagation of constraints in the following way: whenever a constraint c_j (c_j ∈ AS) makes a restriction on some v ∈ V_cj, any other constraint c_k (c_k ∈ AS) such that v ∈ V_ck will be reactivated and may in turn cause the reactivation of further constraints, even if they do not share any variable with c_j. The restrictions performed by c_j may consequently affect all those constraints, and for this reason they all become dependent on c_j.
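A possible way to obtain Supporters_c from the dependency graph (again our sketch, with DG kept as an adjacency matrix where dg[j][k] records c_j ↪ c_k) is a depth-first traversal of the immediate-supporter edges:

```c
enum { MAXC = 128 };

/* Marks in in_supporters[] every constraint that directly or indirectly
 * supports constraint c, i.e. the transitive closure of the
 * immediate-supporter relation recorded in dg. */
static void collect_supporters(char dg[MAXC][MAXC], int nconstraints,
                               int c, char in_supporters[MAXC])
{
    for (int j = 0; j < nconstraints; j++) {
        if (dg[j][c] && !in_supporters[j]) {
            in_supporters[j] = 1;                                    /* c_j supports c        */
            collect_supporters(dg, nconstraints, j, in_supporters);  /* so do its supporters  */
        }
    }
}
```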

3.2 The Backward Algorithm

During the backward phase, the following requirements should be met: a) only constraints pertinent to the conflict should change status (relaxed or reactivated), to avoid useless search; b) a potentially best configuration must be re-achieved, to obtain sound behavior; c) no promising configuration should be repeated, to avoid loops; d) no promising configuration should be skipped, for completeness of the algorithm; e) global consistency of the new active store must be re-achieved, undoing as little work as possible. The relaxed store RS is maintained implicitly: for each active and unexplored constraint c, a set of opponent constraints (OC_c) is kept, containing all the relaxed constraints with which c had previous conflicts, and RS = ∪ {OC_c | c ∈ AS ∪ US}. This representation allows easy access to the candidates for re-activation, should c be relaxed. The hierarchical level of c is expressed by level_c.

function Backward(in c_j)
    ASconf ← {c_k ∈ AS | level_ck > 0 and c_k ∈ Supporters_cj}             % Step 1)
    if ASconf = ∅ then return false                                        %   Conflict
    USconf ← {c ∈ US | level_c > 0}                                        %   configuration
    RSconf ← ∪ {OC_c | c ∈ ASconf ∪ USconf}                                %
    ActivateRelaxSets(⟨ASconf ∪ USconf, RSconf⟩, Activate, Relax)          % Step 2) Activate & Relax
    Reset ← {c_k ∈ AS | ∃c_j ∈ Relax, AO_ck > AO_cj and c_k ∈ Related_cj}  % Step 3) Reset set
    untrail(T_c) (∀c ∈ Reset ∪ Relax)                                      % Step 4) Untrailing
    AS ← AS \ (Reset ∪ Relax)                                              % Step 5)
    OC_c ← OC_c \ Activate (∀c ∈ C)                                        %   New
    OC_c ← OC_c ∪ Relax (∀c ∈ (ASconf ∪ RSconf ∪ USconf))                  %   configuration
    US ← (US \ Relax) ∪ Reset ∪ Activate                                   %
    return true

Conflict configuration. Step 1 of the backward algorithm computes the conflict configuration Γconf = ⟨ASconf, RSconf, USconf⟩, which includes only those constraints pertinent to the conflict (ASconf ⊆ AS, RSconf ⊆ RS and USconf ⊆ US). Γconf is thus the only portion of the whole configuration that needs to be changed to solve the conflict. ASconf and USconf are the candidates for relaxation and RSconf the candidates for re-activation. Note that although USconf does not contain any active constraint, some of its constraints may have to be relaxed (in this case, no longer activated) to ensure that all promising configurations will be tried. The possible causes of the conflict are all the supporters of c_j (the failing constraint). Those supporters are the constraints that directly or indirectly restricted the domains of the variables of c_j, so that no consistent values remained to satisfy c_j. Since required constraints may not be relaxed, ASconf only includes the non-required supporters of the failing constraint. If ASconf is empty then there is no possible solution to the conflict and the constraint hierarchy is not satisfiable. The backward rule is the only rule that inserts constraints into the unexplored store. If the current conflict is not the first to occur, then during the resolution of the previous conflict the backward rule generated a promising configuration with unexplored constraints. After some transitions, that configuration proved not to be convertible into a best configuration, since it led to the current conflict. The current conflict is thus related to the previous one, and USconf is formed by all non-required constraints left unexplored. Constraints relaxed in previous conflicts should be reconsidered for re-activation if some constraints involved in those conflicts are now relaxed: there is a chance that the new relaxations will also solve those earlier conflicts, hence allowing previously relaxed constraints to become active now. RSconf is the set of candidates for re-activation, which are the opponents in previous conflicts of any candidate for relaxation.
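Under the same bitset representation used in the earlier sketch, Step 1 can be outlined as follows (our code; supporters is the closure computed for the failing constraint, level[i] the level of c_i and oc[i] the opponent set OC_ci; the typedefs are repeated so the sketch is self-contained).

```c
#include <stdint.h>

typedef uint64_t Store;                        /* bit i set <=> c_i is in the store */
typedef struct { Store AS, RS, US; } Config;

/* Builds the conflict configuration for a failing constraint; returns 0 when
 * only required constraints support the conflict (the hierarchy has no solution). */
static int conflict_config(const Config *g, Store supporters, const int level[],
                           const Store oc[], int nconstraints, Config *conf)
{
    Store nonrequired = 0;
    for (int i = 0; i < nconstraints; i++)
        if (level[i] > 0) nonrequired |= (Store)1 << i;

    conf->AS = g->AS & supporters & nonrequired;   /* non-required supporters of the failing constraint */
    if (conf->AS == 0) return 0;

    conf->US = g->US & nonrequired;                /* all non-required unexplored constraints */
    conf->RS = 0;                                  /* opponents of every relaxation candidate */
    for (int i = 0; i < nconstraints; i++)
        if ((conf->AS | conf->US) & ((Store)1 << i))
            conf->RS |= oc[i];
    return 1;
}
```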

Activate and Relax sets. By analyzing the conflict configuration, this step determines which non-relaxed constraints should be relaxed (the Relax set) and which relaxed constraints should be activated (the Activate set) in order to obtain the next promising configuration. Since unexplored constraints are candidates for activation, one can consider only two states in which a constraint can be: relaxed or non-relaxed. Therefore, step 2 of the backward algorithm only manipulates a simplified form of configuration with just two stores, ⟨NS, RS⟩, where the first store contains the non-relaxed constraints (active or unexplored) and the second one the relaxed constraints. As mentioned before, given a constraint store S, S_i designates the subset of S containing all constraints of level i; S_<i and S_>i are the subsets of S containing all constraints of levels lower and higher than i, respectively. Given a configuration Γ = ⟨NS, RS⟩, its level i is denoted by Γ_i = ⟨NS_i, RS_i⟩. The LPB ordering is not a total ordering, since some configurations are not comparable. A total ordering is nevertheless necessary to ensure a sound and complete search for solutions without generating the same configuration more than once. Definition 10 is a refinement of Definition 4, adapted to simplified configurations and taking introduction orders into account. In case of ambiguity between configurations that are not LPB-comparable, i.e. having exactly the same number of non-relaxed constraints in each level, the introduction orders of constraints are used in Condition 2 to determine the "best" configuration.

Definition 10 (Extended LPB) ⟨NS, RS⟩ is locally-predicate-better than ⟨NS', RS'⟩ if there exists some level k such that: 1. ∀i < k, #NS_i = #NS'_i and #NS_k > #NS'_k; or

2. ∀i, #NS_i = #NS'_i, and the introduction orders of the constraints break the tie in favour of ⟨NS, RS⟩, as discussed above.

4 Extensions to the basic IHCS

Disjunctions of constraints are handled by trying their alternatives in turn; a disjunction only contributes to a conflict when all alternatives have already failed. This extension, which is formalized in [7], enables the specification of more complex constraint hierarchies, and we take advantage of the dependency graph to backtrack intelligently to alternative choices. Disjunctions, however, complicate the overall IHCS algorithm, as non-exhausted disjunctions must be integrated into conflict configurations. Since we want to minimize the number of constraints to be relaxed, it is preferable, whenever possible, to try an alternative choice rather than relaxing extra constraints. As in intelligent backtracking methods, an alternative set is associated with each disjunction (cf. the Alt sets of [4]), and disjunctions to be re-inserted are restarted from the first alternative (cf. the selective reset of [3]).

The use of disjunctive constraints is very useful for the final generation of solutions. After the pruning due to all the constraints being treated, some variables may still have several possible values in their domains. If the domain of a variable v is {w_1, ..., w_n}, then adding the constraint v = w_1 ∨ ... ∨ v = w_n ensures that a single value will be assigned to v within a best solution. We used such a constraint as the basic definition of a built-in value generator, the predicate indomain(v).

5 Applications

We integrated IHCS with Prolog to create an HCLP(FD, LPB) language, using pre-processing methods. At present we are employing YAP Prolog running on a NeXT Station 68040. In this section we describe two problems expressed in our HCLP language, namely a set-covering problem and a time-tabling problem, to illustrate the applicability and declarative nature of hierarchical constraints and the efficiency of our incremental approach to solving them.

In the set-covering problem, the goal is to minimize the number of services required to cover a set of needs (the problem variables, designated X1, ..., Xm). Each variable ranges over the set of services that cover the corresponding need. The approach that we took to solve this problem is depicted in the following HCLP program:

cover([X1, ..., Xm]) :-
    X1 = X2 ∨ X1 = X3 ∨ ... ∨ X1 = Xm @ 1,
    X2 = X3 ∨ X2 = X4 ∨ ... ∨ X2 = Xm @ 1,
    ...
    Xm-1 = Xm @ 1,
    labeling([X1, ..., Xm]).

For m needs, predicate cover/1 states m-1 disjunctive constraints of level 1. This set of constraints tries to ensure that the service assigned to variable Xi is also assigned to at least some Xj, j > i. Predicate labeling/1 simply uses the built-in predicate indomain to generate values for each variable. A best solution (one that relaxes as few constraints as possible) corresponds to a minimization of the number of services. Table 1 presents results obtained on several real-life instances, taken from a Portuguese bus company. The times reported concern the first (best) solution found, and column Min reports the minimum number of services required to cover the needs.

Table 1: Results for the set-covering problem

Needs   Services   Time    Min
13      43         0.33s   6
24      293        3.98s   7
38      67         3.57s   11

The time-tabling problem is taken from the experience in the Computer Science Department of UNL, but it is simplified so that no spatial constraints are considered (it is assumed that there are enough rooms) and each subject is already assigned to a teacher (cf. [8] for a full description). For this problem we used a multi-level hierarchy to model preferences of different strength regarding the layout of blocks of subjects in a time-table. Table 2 presents results for the generation of time-tables for three semesters. The first line reports the results obtained by specifying only required constraints (teachers' availability, blocks of the same subject on different days, non-overlapping of classes of the same semester or given by the same teacher). Each of the other lines shows the effect of adding an extra hierarchical level. The constraints in each level are intended to: level 1) avoid consecutive blocks of a subject on consecutive days; level 2) avoid consecutive blocks of a subject being more than two days apart; level 3) disallow any block of a subject with only two blocks per week from taking place on Mondays; level 4) place blocks of the same subject at the same hour. The Relaxed Constraints columns report the number of preferred constraints

relaxed in each level (in fact, the values inside round brackets do not represent relaxed constraints, since that level was not being used, but rather the number of those constraints that are not satisfied by the solution). The introduction of the preferred constraints of each level significantly increases the quality of the solution: the last one satisfies 95% of the preferences, against only 47% satisfied by the first one, with a mere slowdown penalty of 32%.

Table 2: Results for the time-tabling problem

Max.    Number of                 Relaxed Constraints
level   constraints    Time       @1     @2     @3     @4
0       356            1.80s      (16)   (1)    (7)    (15)
1       +21 = 377      1.86s      2      (4)    (7)    (11)
2       +21 = 398      1.98s      2      1      (5)    (10)
3       +11 = 409      1.98s      2      1      1      (10)
4       +21 = 430      2.33s      2      1      1      0

6 Conclusion

This paper reports a first formalization of IHCS as a set of transition rules over hierarchy configurations. We conjecture that the forward and backward algorithms presented are a) sound (only locally-predicate-better solutions are obtained), b) complete (all such solutions are computed), and c) non-redundant (no repeated solutions). These properties are yet to be formally proven, and this task is in our plans for future work. The experimental results shown in the examples are quite promising with respect to IHCS performance. However, the complexity of the algorithm (both in time and in memory requirements) is yet to be fully assessed. This assessment is likely to be related to the study of a potential replacement of the present criterion (locally-predicate-better), which aims at finding a certain kind of optimal solution, by a less demanding satisfiability criterion (e.g. a solution is acceptable if a certain threshold of preferences is met).

References

[1] P. Barahona and R. Ribeiro. Building an Expert Decision Support System: The Integration of AI and OR methods. In Knowledge, Data and Computer-Assisted Decisions. Springer-Verlag, Berlin Heidelberg, 1990.

[2] A. Borning, M. Maher, A. Martindale, and M. Wilson. Constraint hierarchies and logic programming. In Proceedings of the 6th ICLP, Lisbon, 1989. The MIT Press.

[3] C. Codognet and P. Codognet. Non-deterministic Stream AND-Parallelism based on Intelligent Backtracking. In Proceedings of the 6th ICLP, Lisbon, 1989. The MIT Press.

[4] C. Codognet, P. Codognet, and G. File. Yet Another Intelligent Backtracking Method. In Proceedings of the 5th ICLP/SLP, Seattle, 1988.

[5] Vipin Kumar. Algorithms for Constraint-Satisfaction Problems: A Survey. AI Magazine, Spring 1992.

[6] Alan K. Mackworth. Consistency in Networks of Relations. Artificial Intelligence, 8:99-118, 1977.

[7] F. Menezes and P. Barahona. Report on IHCS. Research report, Universidade Nova de Lisboa, 1993.

[8] F. Menezes, P. Barahona, and P. Codognet. An Incremental Hierarchical Constraint Solver Applied to a Timetabling Problem. In Proceedings of Avignon 93, 1993. Forthcoming.

[9] Luis Moniz Pereira and M. Bruynooghe. Deduction Revision by Intelligent Backtracking. In J.A. Campbell, editor, Implementations of Prolog, 1984.

[10] P. Van Hentenryck, Y. Deville, and C.-M. Teng. A Generic Arc Consistency Algorithm and its Specializations. Technical Report RR 91-22, K.U. Leuven, F.S.A., December 1991.

[11] M. Wilson and A. Borning. Extending Hierarchical Constraint Logic Programming: Nonmonotonicity and Inter-Hierarchy Comparison. In Proceedings of the North American Conference on Logic Programming, 1989.
