A Rule-based Approach to Dynamic Constraint Satisfaction Problems

Armin Wolf
GMD — German National Research Center for Information Technology
GMD FIRST, Kekuléstraße 7, D-12489 Berlin, Germany
E-mail: [email protected]
URL: http://www.first.gmd.de
Abstract

Practical constraint satisfaction problems (CSPs) are rarely statically fixed. For instance, in job-shop scheduling express jobs have to be added, while already planned jobs are canceled. In these dynamic environments previously occupied resources like machines and staff members have to be set free for further usage. In established constraint programming systems the dynamics of CSPs are not well supported. In fact, all state-of-the-art systems support incremental additions of constraints, but deletions are in general realized via chronological backtracking. This results in a loss of performance in dynamic environments with continuous changes of the CSPs to be solved. For finite domain constraint problems, several approaches exist that overcome this drawback while supporting arbitrary additions and deletions of constraints. However, these approaches are domain-specific; a general-purpose approach for solving DCSPs is still missing. In fact, there are Constraint Handling Rules (CHRs), successfully used to solve several CSPs, but only constraint addition and chronological backtracking are supported. Based on CHRs, a new rule-based method for solving Dynamic CSPs (DCSPs) is presented that supports arbitrary constraint additions and deletions.
1 Introduction

Practical reasoning and problem-solving tasks like the feasibility of resource allocation [25], multi-frequency allocation [7], aircraft sequencing [8] and crew allocation [12] are generally established in dynamic environments. The reasons are obvious and manifold: machines break down, satellites are installed, flights are delayed or diverted, staff or crew members get ill, etc. Thus, real-world constraint problems are under continuous change. All mentioned cases are instances of Dynamic Constraint Satisfaction Problems (DCSPs), where the constraints which have to be solved are changing over time. Additionally, these changes — arbitrary additions or deletions of constraints — are
[Figure 1 charts constraint domains against the supported dynamics (constraint addition with chronological backtracking vs. full dynamic constraints): finite domains [Sakkout et al. 97] [Fages et al. 95/97] [Codognet et al. 96/99] [Verfaillie, Schiex 93/94]; sets; CHR [Wolf 99]; Rlin, Qlin [Burg et al. 94] [Holzbaur et al. 96]; booleans, diagrams; rational trees [Wolf 98] [Meyer 99].]
Figure 1: The state of the art in dynamic constraint satisfaction.

a priori unknown. After each change the solution of the previously valid constraints has to be adapted immediately, offering a new solution with respect to the currently valid constraints. Considering established constraint programming systems like CLP(R), clp(fd), CHIP, ECLiPSe or ILOG Solver, the incremental addition of constraints is well supported while the deletion of constraints is not really satisfying. In general, deletions are realized via chronological backtracking, resulting in a loss of efficiency. There are several specialized approaches to overcome this drawback (cf. Figure 1). For finite domain (FD) constraints, several algorithms have been developed that efficiently support arbitrary constraint deletions: local repair techniques are integrated in ECLiPSe [24]; re-propagation based on the execution model after constraint deletions is integrated in clp(fd) [4, 12]. An adaptive constraint solver for FD constraints is embedded in a new reactive constraint logic programming system supporting dynamics on top
to the first equation, if the value of the "unknown" Y is fixed and the value of X is computable. Thus, the application of R1 requires the application of R2. Applying a rule means replacing the constraints matching the head by the body. In the considered example, the second equation is replaced by the value of Y and the first is replaced by the value of X. The processing and solution of both equations are summarized in Figure 2.
of a programming system [7, 8]. The solution adaptation uses a Constraint Dependency Graph (cf. [18]), which is based on the consequences of constraint propagation. Dynamic backtracking for DCSPs, adapting valuations after changes of constraints, is reported in [26], adopting dynamic backtracking [13] for static CSPs. Furthermore, there is an adaptive version of the Simplex algorithm [14] and an approach supporting intelligent backtracking [3] for linear systems; there are new adaptive unification and entailment algorithms for equations over rational trees [28, 27]; and there is a new CLP scheme supporting dynamic changes of diagrams [19]. However, a general approach supporting arbitrary constraint additions and deletions was still missing. Of course, there are Constraint Handling Rules (CHRs), which are successfully used to implement constraint solvers for different kinds of constraints and to solve practical problems [11], but only constraint additions are fully supported. In this paper the main ideas and results for an adaptation of constraint processing using CHRs [29] are reported, thus presenting a rule-based approach to solve dynamic constraint satisfaction problems.
[Figure 2 depicts the two rule applications: ?Y = 2 is rewritten to the binding Y ≐ 2, after which ?X + ?Y = 3 is rewritten to the binding X ≐ 1.]

Figure 2: Processing and solution of linear equations.
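The two-step rewriting summarized in Figure 2 can be imitated in a few lines of Python. This is an illustrative sketch only, not the paper's CHR machinery: the two rules are hard-coded as rewriting cases over a small equation store, and all names are invented for the example.

```python
def solve(equations):
    """equations: list of ('sum', x, y, c) for x + y = c, or ('eq', y, c) for y = c."""
    bindings = {}
    pending = list(equations)
    changed = True
    while changed and pending:
        changed = False
        for eq in list(pending):
            if eq[0] == 'eq':                  # an equation Y = 2 becomes a binding
                _, var, val = eq
                bindings[var] = val
                pending.remove(eq)
                changed = True
            elif eq[0] == 'sum':               # X + Y = 3 with Y known binds X
                _, x, y, c = eq
                if y in bindings:
                    bindings[x] = c - bindings[y]
                    pending.remove(eq)
                    changed = True
    return bindings

print(solve([('sum', 'X', 'Y', 3), ('eq', 'Y', 2)]))  # -> {'Y': 2, 'X': 1}
```

As in the paper's example, solving the sum equation is only possible after the binding for Y has been produced, so the loop re-scans the store until no rule applies any more.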
2 Constraint Handling Rules
It is easy to see that an addition of further constraints, e.g.
Constraint Handling Rules (CHRs) [10, 11] are a high-level, declarative language extension of the CLP scheme, especially developed to implement special-purpose constraint solvers for user-defined constraints. CHRs are rewriting rules which are used to simplify and solve constraints. There are three kinds of rules: propagation rules and simplification rules, and a combination of both, simpagation rules. In general, propagation rules are used to make implicitly given constraints explicitly available, e.g. the transitivity of the mathematical order relation ≤.

3 Adaptive Unification

Given a list of marked syntactical equations E, the adaptive unification algorithm computes a marked solution triple: unify(E) = unify(E, (⊤, ∅, [ ])) = (△, I, A)
The first argument, a tag, either signals consistency `⊤' or inconsistency `⊥'. The second argument is a set of marked variable bindings in rational solved form (cf. [16]), which is equivalent to the considered marked equations in case of their consistency. The third argument — and this is really new — is a list of some marked equations from the input, which have to be re-considered after some deletions. During the iterative process of unification the previously calculated solution triple is
¹ The transition rule for propagation is quite similar — extension of the user-defined constraints instead of replacement — and thus not further considered.
⟨y ≐ c⟩_{1,2}, ⟨u ≐ v⟩_{3}}, [⟨f(z,c) ≐ x⟩_{2}, ⟨v ≐ u⟩_{4}, ⟨g(c) ≐ z⟩_{5}]).
part of the input in the next step. Thus, there is an initial solution triple (⊤, ∅, [ ]).
Example. Let u, v, x, y, z be variables, c be a constant, and f, g be function symbols. Furthermore, a list of marked syntactical equations

E = [⟨x ≐ f(x,y)⟩_{1}, ⟨f(z,c) ≐ x⟩_{2}, ⟨u ≐ v⟩_{3}, ⟨v ≐ u⟩_{4}, ⟨g(c) ≐ z⟩_{5}]

is given. The left- and right-hand sides of the equations are unified iteratively. At the beginning of the first iteration step, no variables are bound and the equation ⟨x ≐ f(x,y)⟩_{1} becomes part of the rational solved form. An additional storage of this equation for further considerations is not necessary, because it is part of the rational solved form. In the second step, the equation ⟨f(z,c) ≐ x⟩_{2} is considered. Therefore, the variable x is dereferenced (cf. WAM or UNION/FIND): the involved variable bindings are traversed until an end is reached and the labels are combined. This yields the new equation ⟨f(z,c) ≐ f(x,y)⟩_{1,2} to be solved. For the unification of its sides the arguments are equated and the unification process is recursively applied to the list [⟨z ≐ x⟩_{1,2}, ⟨c ≐ y⟩_{1,2}]. During the consideration of the first equation the variable x is dereferenced again. As a consequence, the rational solved form is extended by the new binding ⟨z ≐ f(x,y)⟩_{1,2}. Processing of the other equation ⟨c ≐ y⟩_{1,2} adds the binding ⟨y ≐ c⟩_{1,2}, because the variable y was unbound. The initially considered equation ⟨f(z,c) ≐ x⟩_{2} is stored for further adaptations. This is necessary because the deletion of equations and bindings marked with 1 deletes the bindings of y and z. Thus the still valid equation ⟨f(z,c) ≐ x⟩_{2} would not be respected in the rational solved form and has to be re-considered. In the third step, unification of both sides of ⟨u ≐ v⟩_{3} is simple, because neither u nor v is bound. Consequently the rational solved form is extended by the variable binding ⟨u ≐ v⟩_{3}. An additional storage of this equation for adaptation is superfluous; it is part of the rational solved form.
In the fourth step, processing of the equation ⟨v ≐ u⟩_{4} shows the redundancy of this equation. Dereferencing yields the trivially solved equation ⟨v ≐ v⟩_{3,4}. Thus, the redundant equation is stored for adaptation, because a deletion of the binding of u makes it non-redundant. Consideration of the last equation ⟨g(c) ≐ z⟩_{5}, especially dereferencing of z, shows the inconsistency of the given list of equations: the derived equation ⟨g(c) ≐ f(x,y)⟩_{1,2,5} is unsolvable. It is also shown that the inconsistency is justified by at least one of the equations marked with 1, 2 or 5. The deletion of the third or fourth equation will not resolve the inconsistency. The result of the unification is unify(E) = (⊥, {⟨x ≐ f(x,y)⟩_{1}, ⟨z ≐ f(x,y)⟩_{1,2},
Re-unification shows that the deletion of the second equation is one possibility to resolve the inconsistency. Therefore, all variable bindings and equations in the solution triple justified by 2 are deleted and unification is re-started with the remaining equations on the remaining solved form: re-unify({2}, unify(E)) = unify([⟨v ≐ u⟩_{4}, ⟨g(c) ≐ z⟩_{5}], (⊤, {⟨x ≐ f(x,y)⟩_{1}, ⟨u ≐ v⟩_{3}}, [ ])).
Re-consideration of the equation ⟨v ≐ u⟩_{4} shows that it is still redundant. It is re-stored for further adaptations. Re-consideration of the last equation binds the variable z to g(c), because its previous binding lost its justification together with the deletion of the second equation. Now it is not necessary to re-store the equation ⟨g(c) ≐ z⟩_{5}, because its solution is guaranteed by the variable binding ⟨z ≐ g(c)⟩_{5} having the same justification. The final result re-unify({2}, unify(E)) = (⊤, {⟨x ≐ f(x,y)⟩_{1}, ⟨u ≐ v⟩_{3}, ⟨z ≐ g(c)⟩_{5}}, [⟨v ≐ u⟩_{4}])
is identical to that resulting from re-calculation from scratch. In general, identity between the results of re-calculation and adaptation is not given, but consistency and inconsistency are correctly detected in both cases. In case of consistency, equivalence between both sets of variable bindings in rational solved form is given. Formally, let R be a label and G be a list or set of marked equations; del(R, G) deletes all equations in G justified by R, i.e. marked with labels not disjoint from R. Now defining re-unify(R, unify(E)) = unify(del(R, A), (⊤, del(R, I), [ ])) = (△′, I′, A′)
whenever unify(E) = (△, I, A) holds, the correctness of the re-computed solution triple is given:
- E_r ⊨ ∃(del(R, E)) if and only if △′ = ⊤.
- E_r ⊨ del(R, E) ↔ I′ if del(R, E) is consistent, i.e. △′ = ⊤.
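The interplay of labels, del, and re-unify can be sketched in miniature. The following Python sketch is an assumption-laden simplification, not the paper's rational-tree algorithm: it handles only variable–variable and variable–constant equations (no function terms), with variables written as upper-case strings, and all function names are invented.

```python
def is_var(t):
    # Convention of this sketch: upper-case strings are variables.
    return isinstance(t, str) and t.isupper()

def deref(term, bindings, labels):
    # Follow bindings to the end, collecting the labels used on the way.
    while term in bindings:
        value, just = bindings[term]
        labels |= just
        term = value
    return term

def unify(equations, state=None):
    tag, bindings, store = state if state else ('T', {}, [])
    for eq in equations:
        if tag == 'F':                 # already inconsistent: keep rest for later
            store.append(eq)
            continue
        lhs, rhs, just = eq
        labels = set(just)
        l = deref(lhs, bindings, labels)
        r = deref(rhs, bindings, labels)
        if l == r:
            store.append(eq)           # redundant now, maybe not after deletions
        elif is_var(l) or is_var(r):
            var, val = (l, r) if is_var(l) else (r, l)
            bindings[var] = (val, frozenset(labels))
            if labels != set(just):
                store.append(eq)       # derived via other bindings: re-check later
        else:
            tag = 'F'                  # two distinct constants: inconsistent
            store.append(eq)
    return tag, bindings, store

def delete(R, items):
    """del(R, G): drop marked items whose labels intersect R."""
    if isinstance(items, dict):
        return {v: (t, j) for v, (t, j) in items.items() if not (j & R)}
    return [e for e in items if not (set(e[2]) & R)]

def re_unify(R, state):
    _tag, bindings, store = state
    return unify(delete(R, store), ('T', delete(R, bindings), []))

E = [('X', 'Y', {1}), ('Y', 'a', {2}), ('X', 'b', {3})]
tag, _, _ = unify(E)                   # 'F': X dereferences to 'a', conflicting 'b'
tag2, b2, _ = re_unify({1}, unify(E))  # deleting label 1 restores consistency
```

As in the paper, the inconsistency is justified by the labels collected during dereferencing, and re-unify re-starts only from the surviving bindings and the stored equations rather than from scratch.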
4 Adaptive Entailment

The adaptive entailment algorithm is a generalization of the adaptive unification algorithm. Again, there are two variants, called entail and re-entail. Both algorithms decide whether an existentially quantified conjunction of marked syntactical equations is entailed by another conjunction in rational solved form. Given some existential or local variables x̄, a set of marked variable bindings I containing only global variables, and a list of marked syntactical equations F, entail computes a marked solution triple: entail(x̄, I, F) = entail(x̄, I, F, (⊤, ∅, [ ])_∅) = (▽, J, D)_S

The first argument, a tag, either signals entailment `⊤' or disentailment `⊥'. The second argument is a set of marked variable bindings in rational solved form (only local variables are bound there), such that I ∧ F is equivalent to I ∧ J in case of entailment. Again, the third argument is a list of some marked equations from the input, which have to be re-considered after deletions of some global variable bindings necessary for entailment. In case of entailment the annotated label justifies it. In detail, it is the union of all labels of the required global variable bindings. Thus, entail(x̄, I′, F) = (⊤, J, D)_S holds for every set of marked syntactical equations I′ in rational solved form with its variables disjoint from x̄, containing all variable bindings in I which are marked with subsets of S.

Example. Let x̄ = {x, y} be a set of local variables and I = {⟨u ≐ g(v)⟩_{3}} be a set of marked global variable bindings in rational solved form. Furthermore, a list of marked syntactical equations F = [⟨x ≐ f(u)⟩_{1}, ⟨x ≐ f(g(y))⟩_{2}] is given. Iteratively, equation by equation, it is decided whether they are implied by I, and an equivalent set of variable bindings in rational solved form is calculated. At the beginning of the first iteration step, no local variables are bound. The equation ⟨x ≐ f(u)⟩_{1} is entailed, because there exists an appropriate binding of x, namely f(u). This binding becomes part of the rational solved form. An additional storage of this equation for further considerations is not necessary, because it is part of the rational solved form. Entailment is justified by the label {1}. In the second iteration step the equation ⟨x ≐ f(g(y))⟩_{2} is considered. Therefore, the variable x is dereferenced, yielding the new equation ⟨f(u) ≐ f(g(y))⟩_{1,2} to be further considered: equating the arguments of the left- and right-hand side and recursion on its result checks the equation ⟨u ≐ g(y)⟩_{1,2}. Dereferencing u results in another recursion on ⟨g(v) ≐ g(y)⟩_{1,2,3} and consequently on ⟨v ≐ y⟩_{1,2,3}. Entailment of this equation is possible, because there is an appropriate binding of y, namely v. This binding becomes part of the rational solved form. The initially considered equation ⟨x ≐ f(g(y))⟩_{2} has to be stored for further adaptations, because a deletion of the global binding ⟨u ≐ g(v)⟩_{3} endangers entailment of this equation, which then has to be re-decided. Finally, we have entail(x̄, I, F) = (⊤, {⟨x ≐ f(u)⟩_{1}, ⟨y ≐ v⟩_{1,2,3}}, [⟨x ≐ f(g(y))⟩_{2}])_{1,2,3}. Entailment is justified by {1, 2, 3}. After the deletion of ⟨u ≐ g(v)⟩_{3} in I and the addition of a new binding of u to f(a) with justification {4}, an adaptation without re-calculation is possible. Defining I′ = del({3}, I) ∪ {⟨u ≐ f(a)⟩_{4}} it holds re-entail(x̄, I′, {3}, entail(x̄, I, F)) = entail(x̄, I′, [⟨x ≐ f(g(y))⟩_{2}], (⊤, {⟨x ≐ f(u)⟩_{1}}, [ ])_{1}) = (⊤, {⟨x ≐ f(u)⟩_{1}, ⟨y ≐ a⟩_{1,2,4}}, [⟨x ≐ f(g(y))⟩_{2}])_{1,2,4} = entail(x̄, I′, F). In general, if del(R, F) = F and del(R, I) ⊆ I′ holds for any label R and any set of global variable bindings I′ in rational solved form, we define re-entail(x̄, I′, R, (▽, J, D)_S) = entail(x̄, del(R, D), (⊤, del(R, J), [ ])_Q) = (◇, K, L)_T whenever entail(x̄, I, F) = (▽, J, D)_S holds and Q is the union of all labels in del(R, J). In this case, the re-computed solution is correct:

- E_r ⊨ I′ → ∃x̄ F if and only if ◇ = ⊤.
- E_r ⊨ I′ ∧ F ↔ I′ ∧ K in case of entailment, i.e. ◇ = ⊤.
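The label bookkeeping of entail can also be caricatured in code. Like the unification sketch, this Python fragment is an illustrative assumption, not the paper's rational-tree algorithm: it handles only flat variable/constant equations, and the names entail, support, recheck are invented. Local variables may be bound as existential witnesses; every global binding that is used contributes its labels to the justification S.

```python
def entail(local_vars, global_bindings, equations):
    local = {}       # witness bindings for local (existential) variables
    support = set()  # union of the labels of all global bindings relied on
    recheck = []     # equations to re-decide after deletions of global bindings
    for eq in equations:
        lhs, rhs, just = eq
        used = set(just)

        def resolve(t):
            # Dereference through local witnesses and global bindings,
            # accumulating the labels of every global binding traversed.
            while True:
                if t in local:
                    t = local[t]
                elif t in global_bindings:
                    value, labels = global_bindings[t]
                    used.update(labels)
                    t = value
                else:
                    return t

        l, r = resolve(lhs), resolve(rhs)
        if l == r:
            recheck.append(eq)        # redundant now, maybe not after deletions
        elif l in local_vars or r in local_vars:
            var, val = (l, r) if l in local_vars else (r, l)
            local[var] = val          # existential witness found
            if used != set(just):
                recheck.append(eq)    # relied on global bindings: re-check later
        else:
            return ('F', local, recheck, support)  # not entailed by I
        support |= used
    return ('T', local, recheck, support)

# X is local; the global variable U is bound to 'a' with justification {3}.
tag, loc, rc, sup = entail({'X'}, {'U': ('a', frozenset({3}))},
                           [('X', 'U', frozenset({1}))])
```

Here X ≐ U is entailed with witness X ≐ a, the support is {1, 3} because the global binding of U was needed, and the equation lands on the re-check list exactly because that global binding might later be deleted.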
5 How to Make CHR Applications Adaptable

For an adaptation of CHR applications, it is necessary to instantiate the given operational semantics (see above) and to add to the applications their justifications. To keep the states of the transition system adaptable, the built-in constraints are replaced by a solution triple. Thus we have states ⟨C, (△, I, A)⟩. In case of consistency, i.e. △ = ⊤, a CHR H ⇔ G | U, B is in principle applicable. To decide head matching and guard entailment of this CHR with respect to some selected user-defined constraints H′ in C, entail(x̄, I, (H ≐ H′) ∧ G) is calculated, where x̄, the local variables, are the variables in the head H and the guard G of the rule. If the calculated result, say (▽, J, D)_S, signals entailment, i.e. ▽ = ⊤, the CHR is applicable and justified by S.
by a label R, i.e. marked with labels not disjoint from R, adaptation is also possible. Given the last state in a CHR derivation and the sequence of rule applications therein, an incremental adaptation algorithm was developed which computes the last state of a CHR derivation and a sequence of rule applications resulting in this state, starting with a state which is equivalent to the state we get after the deletion and normalization of the first state in the given derivation. The computed sequence is in some sense optimal, because it consists of the subsequence of the independent rule applications i_1, …, i_k with S_{i_j} ∩ R = ∅ for j = 1, …, k and of a subsequence of the other rule applications, but differently justified:
Application of a CHR means replacement of the matching head constraints², addition of built-in constraints, and normalization of the built-in constraints. The added constraints as well as the whole rule application are justified by S. To allow removing the rule application without deletion of any constraints (e.g. if a constraint addition causes an inconsistency), the constraints are also justified by a unique identifier of the applied CHR. Normalization based on unification only requires an iterative process on the added body constraints, because I ∧ (H ≐ H′) ∧ G is equivalent to I ∧ J. For any further adaptation, the necessary information is stored together with the applied rule, i.e. (x̄, H′, J, D, S): the local variables, the replaced constraints to be taken back if the rule application is eliminated, and information for adaptation of the entailment, including the justification of the application. All these required processing steps are summarized in an instantiated and adaptable transition rule (with x̄ = var(H) ∪ var(G), for a CHR H ⇔ G | U, B, applicable when entail(x̄, I, (H ≐ H′) ∧ G) = (⊤, J, D)_S).
S_0 ↦_{i_1} ⋯ ↦_{i_k} S_k ↦_{j_1} ⋯ ↦_{j_l} S_{k+l}

(the applications i_1, …, i_k are independent and kept; the applications j_1, …, j_l were previously dependent and are re-added)
6 Adaptation of CHR Derivations

Applying CHRs in sequence results in a CHR derivation
⟨C_0, (△_0, I_0, A_0)⟩_v̄ ↦_1 ⋯ ↦_n ⟨C_n, (△_n, I_n, A_n)⟩_v̄
This derivation has to be adapted whenever its initial state ⟨C_0, (△_0, I_0, A_0)⟩_v̄ changes. The adaptation is simple if user-defined constraints Z and syntactical equations E are added and the consistency of the extended last state is given, i.e. unify(E, (△_n, I_n, A_n)) is consistent. In this case
hC ^ Z; unify(E; (3 ; I ; A ))iv 7!1 7! hCn ^ Z; unify(E; (3n; In ; An ))iv 0
0
0
0
n
is a valid CHR derivation. In case of inconsistency, some of the last CHR applications n, …, n−l have to be cut, until the last state of the shortened derivation is the only inconsistent state in this derivation. The elimination of the necessary CHR applications is quite similar to adaptation after constraint deletions: the rule applications justified by n, …, n−l are removed. In general, if all the constraints in the initial state (or rule applications) have to be deleted which are justified
If the applied rules are simplification rules, it is proved that there is a re-ordering of the rule applications in the initially given CHR derivation such that the adapted sequence is the longest prefix of the re-arranged sequence of rule applications.³ The adaptation algorithm after constraint deletions proceeds as follows: It goes backwards in the given sequence of rule applications until a dependent rule application is found. If no dependent rule application is found, it stops. Otherwise:

- The consequences of the considered rule application are taken back by use of re-unify.
- It is tried to re-apply the eliminated rule application. Therefore, it is checked whether the previously matching constraints are still valid. If they are valid, applicability is decided using re-entail.
- In case of applicability, the rule is re-applied (now differently justified) at the end of the current derivation.

Termination and correctness of this algorithm are formally proved. It is also worth emphasizing that the adaptation after additions or deletions of constraints only requires the last state of the derivation to be adapted and some information about the applied rules, not the whole CHR derivation. Thus, the information to be kept for adaptation is rather small.
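The backward adaptation loop above can be caricatured in a few lines of Python. This is a deliberately simplified stand-in (an assumption, not the paper's algorithm): each recorded rule application is reduced to a name plus its justification labels, the re-unify/re-entail machinery is elided, and "re-application" succeeds whenever the application retains at least one surviving label.

```python
def adapt_after_deletion(R, applications):
    """applications: list of (name, labels). Returns the adapted sequence:
    independent applications keep their order; dependent ones are taken back
    and re-appended at the end only if they survive deletion of labels R."""
    kept, retry = [], []
    for app in reversed(applications):      # walk the derivation backwards
        name, labels = app
        if labels & R:
            retry.append((name, labels - R))   # dependent: taken back, re-tried
        else:
            kept.append(app)                   # independent: kept as-is
    kept.reverse()
    for name, labels in reversed(retry):    # re-try in their original order
        if labels:                          # stand-in for the re-entail check
            kept.append((name, labels))     # re-applied, differently justified
    return kept

apps = [('r1', {1}), ('r2', {1, 2}), ('r3', {3})]
print(adapt_after_deletion({1}, apps))  # -> [('r3', {3}), ('r2', {2})]
```

The shape of the result mirrors the paper's claim: the independent application r3 is kept unchanged, r2 is re-applied with a new justification, and r1, which was justified only by the deleted label, disappears.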
The common justification of all added constraints is exemplified by the explicit annotation of S ∪ {ρ}, with ρ the unique identifier of the rule application, to them.
⟨H′ ∧ C, (⊤, I, A)⟩_v̄ ↦_{(x̄, H′, J, D, S)} ⟨C ∧ U^{S∪{ρ}}, unify(B^{S∪{ρ}}, (⊤, I ∪ J^{S∪{ρ}}, A))⟩_v̄
7 Benchmark Results

The comparison between the adaptation of CHR derivations using the sketched incremental algorithm and a

³ It is also proved that a longest adapted subsequence of the initial sequence is not uniquely determined.
² As mentioned before, we focus on simplification rules only.
naïve recalculation from scratch shows that the adaptation is 2–4 times faster (see Figure 4). The improvements grow with the length of the adapted derivations. These results are comparable with other, more specialized approaches, e.g. [12]. The unification and entailment algorithms used during the recalculations from scratch are simplified versions of the introduced algorithms: the labels of the syntactical equations are neither used nor maintained, so the "truth maintenance" overhead is omitted. The information stored for each computation step is omitted, too.
ular, projection is applied in Figure 3. For a closer impression consider the following example: The application of ?P = C ⇔ P ≐ C to the state ⟨⟨?X + ?Y = 3⟩_{1} ∧ ⟨?Y = 2⟩_{2}, (⊤, ∅, [ ])⟩_{X,Y} results in the state ⟨⟨?X + ?Y = 3⟩_{1}, (⊤, {⟨P ≐ Y⟩_{2}, ⟨C ≐ 2⟩_{2}, ⟨Y ≐ 2⟩_{2}}, [ ])⟩_{X,Y}, involving bindings of the local variables P and C. Further simplification using early projection results in the equivalent state ⟨⟨?X + ?Y = 3⟩_{1}, (⊤, {⟨Y ≐ 2⟩_{2}}, [ ])⟩_{X,Y}. We expect further speed-ups after the integration of an early projection procedure into the adaptation algorithm.
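The effect of early projection on such a state can be sketched concretely. The following Python fragment is an illustrative assumption (invented names, flat bindings only): it drops the bindings of the rule's local variables P and C once the global bindings have been fully dereferenced, exactly mirroring the simplification of the state above.

```python
def deref(t, bindings):
    # Follow bindings to the end (labels are carried along unchanged).
    while t in bindings:
        t = bindings[t][0]
    return t

def project(bindings, local_vars):
    """Early projection: keep only bindings of global variables,
    fully dereferenced through the local (auxiliary) ones."""
    return {v: (deref(t, bindings), just)
            for v, (t, just) in bindings.items() if v not in local_vars}

# The solved form after applying the rule to <?Y = 2>: P = Y, C = 2, Y = 2,
# all justified by label 2; P and C are local to the rule application.
state = {'P': ('Y', {2}), 'C': (2, {2}), 'Y': (2, {2})}
print(project(state, {'P', 'C'}))  # -> {'Y': (2, {2})}
```

Only the globally relevant binding Y ≐ 2 survives, which is why early projection shrinks the solved form that the adaptation algorithm has to maintain.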
[Figure 4: CHR adaptation after deletions — run-time comparison of the incremental adaptation (in seconds) for derivations of 100, 200, and 400 transitions.]
References
[1] Alfred V. Aho, John E. Hopcroft, and Jeffrey D. Ullman. The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading, MA, 1974. Chapter 4: Data Structures for Set Manipulation Problems.
[2] Hassan Aït-Kaci. Warren's Abstract Machine: A Tutorial Reconstruction. The MIT Press, 1991.
[3] Jennifer Burg, Sheau-Dong Lang, and Charles E. Hughes. Finding conflict sets and backtrack points in CLP(