Problem-Structure vs. Solution-Based Methods for Solving Dynamic Constraint Satisfaction Problems

Richard J. Wallace and Diarmuid Grimes
Cork Constraint Computation Centre and Department of Computer Science, University College Cork, Cork, Ireland
Email: {r.wallace,d.grimes}@4c.ucc.ie
Abstract—A new type of reasoning-reuse technique for dynamic constraint satisfaction problems (DCSPs) is based on a form of the weighted degree strategy known as random probing, in which failures are sampled prior to search. This approach is effective with DCSPs because it can locate major bottlenecks in a problem, and because bottlenecks tend to remain stable after small or moderate changes. Here, we show that this approach is effective with various kinds of change, including changes in constraint relations and changes that transform a problem from one with solutions to one without, or vice versa. The latter, in particular, is especially troublesome for a solution-reuse method like Local Changes. We also examine a second quality metric for DCSP methods: solution stability. We show that an enhancement of probing-based search, which begins with values in the solution found before perturbation and continues to choose values that minimise conflicts with the original solution, actually improves on Local Changes for the problems tested, as well as improving average search performance further. Probing-based methods can, therefore, solve DCSPs very efficiently after many types of change, while also meeting the criterion of high solution stability.

Index Terms—dynamic constraint satisfaction; search heuristic; random probing
I. INTRODUCTION

A "dynamic constraint satisfaction problem", or DCSP, is defined as a sequence of CSPs in which each problem in the sequence is produced from the previous problem by changes such as addition and/or deletion of constraints [1]. Such problems have application in many dynamic environments, such as production scheduling, where changes occur unexpectedly and their exact features cannot be anticipated. Although several strategies have been proposed for handling DCSPs, there is still considerable scope for improvement, in particular when problems are in a critical complexity region.

In recent work we found that for hard CSPs, search performance (amount of effort) can change drastically even after small alterations that do not change the values of the basic problem parameters. At the same time, one feature that is not greatly affected is the set of variables that are the major sources of contention within a problem. It follows that information derived from assessment of these sources of contention should enhance performance even after the problem has been altered. This is what we have shown [2]. More specifically, we showed that a heuristic procedure that uses failures during iterated sampling ("random probing") can perform effectively after problem change, using information
obtained before such changes and thus avoiding the cost of further sampling. This is a new form of what Verfaillie and Jussien have called "reasoning reuse techniques" [1], which they place within the larger class of reactive (as opposed to proactive) techniques. It differs from other reasoning-reuse strategies, such as no-good learning [3], in that it is heuristic rather than inferential in nature. However, for this reason it may be more robust than existing techniques of this type.

At the same time, this initial work left several important issues unresolved. In the first place, more information is needed regarding the usefulness of the present technique for different kinds of problem change. In this paper, we show that this technique performs very well in this regard. For some forms of problem alteration (changes in constraint relations), the information gained by probing the original problem is effective across an extended series of problem alterations. At the same time, such perturbations wreak havoc with solution-reuse techniques. In addition, we show that information gained by probing can be used to improve efficiency even if the alteration makes a solvable problem unsolvable, or vice versa. Obviously, such cases are beyond the scope of solution-reuse strategies.

In the second place, the earlier work did not address the issue of solution stability. Here, "stability" refers to similarity in the solutions found for the original and perturbed problems. High stability means that these solutions are very similar, as measured by the Hamming distance between full assignments. High stability is an important criterion in areas such as production scheduling [3] and timetabling [4], [5], where a valid solution that is not similar to the original can be costly or impossible to implement. Now, it is possible that even when solution-reuse methods such as Local Changes are relatively inefficient, they still find solutions after problem change that are more similar to solutions found for the original problem than are those found using constraint weights obtained from probing. With regard to this criterion, we find that Local Changes is on average better than the probing method with hard problems, although the difference is not large. However, we are able to enhance the latter method with a value ordering heuristic which begins with the assignments in the original solution and continues to be guided by a strategy that minimises conflicts with the original solution. With this enhancement, called "solution-guided probing", solution stability is better
than for Local Changes for problems in the critical complexity region. Moreover, there is the added benefit that average search effort is also better in comparison to the basic probing method. This is because the value ordering heuristic can reduce or eliminate search when the old solution or one very close to it is still viable. Together, these results show that an approach to solving DCSPs which is based on finding maximal points of contention is often markedly superior to other reactive strategies.

The next section gives some basic definitions and describes our experimental methods. Section 3 gives results on efficiency for the present weight-reuse method, for random CSPs as well as more structured problems. Sections 4 and 5 compare efficiency of weight-reuse and solution-reuse methods across an extended series of problem alterations. Section 6 describes our solution-guided probing method and compares results for solution stability with Local Changes. Section 7 gives conclusions.

II. BACKGROUND MATERIAL

A. Definitions and notation

Following [6] and [7], we define a dynamic constraint satisfaction problem (DCSP) as a sequence of static CSPs, where each successive CSP is the result of changes in the preceding one. As in earlier work, we consider DCSPs with specific sequence lengths, especially length 1, where "length" is the number of successively altered problems starting from alteration of the initial problem. In our extended notation, P_ij(k) refers to the kth member in the sequence for DCSP_ij, where i is the (arbitrary) number of the initial problem in a set of problems, and j denotes the jth DCSP generated from problem i. However, for DCSPs of length 1 a simpler ij notation is often more perspicuous; in this case P_ij is the jth DCSP generated by perturbing base problem i. For means (shown in some tables), since the i values range over the same set in all cases, this notation can be simplified further, to P-j or Pj.

B. Experimental methods

Because they allow for greater control, many of the present experiments were done with random problems, generated in accordance with Model B [8]. The base problems had 50 variables, domain size 10, and constraint tightness 0.369. Graph density was either 0.184 or 0.19, depending on the experiment, so these problems are designated as <50,10,0.184,0.369> and <50,10,0.19,0.369>, respectively. Problems of the first type have 225 constraints in their constraint graphs, those of the second, 233. Both sets of problems are in a critical complexity region, the latter being close to the peak, although they are still small enough to allow a large number of experiments to be performed in a reasonable amount of time. (The basic difficulty of problems with these parameters is indicated by performance with lexical variable ordering under MAC: for 100 problems with density 0.184, average search nodes to find one solution was over 250,000 [9].)
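For concreteness, the following is a minimal sketch of a Model B generator consistent with the parameters above. The Python representation (constraints stored as sets of forbidden value pairs) and all names are ours, not from the paper.

```python
import random
from itertools import combinations, product

def model_b(n, d, density, tightness, seed=None):
    """Model B random binary CSP: a fixed number of constraints is chosen
    uniformly among variable pairs, and each constraint forbids a fixed
    number of value pairs, also chosen uniformly."""
    rng = random.Random(seed)
    n_constraints = round(density * n * (n - 1) / 2)  # 225 for density 0.184, 233 for 0.19
    n_forbidden = round(tightness * d * d)            # 37 forbidden pairs for tightness 0.369
    domains = {v: set(range(d)) for v in range(n)}
    all_pairs = list(product(range(d), repeat=2))
    constraints = {}
    for edge in rng.sample(list(combinations(range(n), 2)), n_constraints):
        constraints[edge] = set(rng.sample(all_pairs, n_forbidden))  # forbidden value pairs
    return domains, constraints

# a base problem of the first type described above
domains, constraints = model_b(n=50, d=10, density=0.184, tightness=0.369, seed=1)
```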
For these problems, we considered two forms of problem alteration: addition and deletion of k constraints from a base CSP, and replacement of k relations by new ones with the same tightness. In the present experiments, k was always five. Note that in both cases the number of constraints remains the same. In addition, the first type of change was carried out so that additions and deletions did not overlap.

For random problems, DCSP sequences were formed starting with 25 independently generated initial problems. In most experiments, three DCSPs of length 1 were used, starting from the same base problem. In one experiment a single set of 25 DCSPs was generated (from independent problems), each of length 20, in which the form of alteration was relation replacement. (This supplements earlier work with a sequence of the same length generated by adding and deleting constraints [2].) In these problems, graph density was always 0.184 and problems always had solutions, both before and after alteration.

A graph density of 0.19 was used for DCSPs which included problems with and without solutions. For some experiments DCSP sequences of length 1 were created; in this case there were again three sets of DCSPs based on a single set of 25 independently generated problems. Separate sets of DCSPs were created in which the base problem had solutions but the altered problem did not, or vice versa (making six sets in all). In another experiment, 25 DCSPs of length 20 were created, again based on replacement of five relations in each successive CSP in a DCSP sequence. In this experiment, all 25 base problems had solutions, and perturbed sequences were selected so that 40-60% of the problems had solutions. (This was done by generating 100 DCSPs and selecting the first 25 that met this specification.) In addition, in all but three cases after the first perturbation, the proportion of problems with solutions after the kth perturbation was between 40 and 60%.

Experiments were also performed on structured problems. In this case, "structure" means having ordered domains and a highly structured constraint graph. The problems were simplified open-shop scheduling problems, used in a recent CSP solver competition (benchmarks at http://www.cril.univ-artois.fr/lecoutre/benchmarks/benchmarks.html). These were "os-taillard-4" problems, derived from the Taillard benchmarks [10], with the time window set to the best-known value (os-taillard-4-100, solvable). Here, the base set contains ten problems. In these problems there are four jobs, each with four tasks that must be done on four different machines. Constraints prevent two operations that share the same job or require the same resource from overlapping; specifically, they are disjunctive relations of the form (X_i + dur_i ≤ X_j) ∨ (X_j + dur_j ≤ X_i). These problems have 16 variables (total number of tasks), the domains are ranges of integers starting from 0, with 100-200 values in a domain, and all variables have the same degree.

The scheduling problems were perturbed by changing upper bounds of a random sample of domains. In the original problems, domains of the 4-100 problems are all ten units greater than the corresponding unsolvable 4-95 problems, i.e. the upper limit is 10 greater in the former case. Perturbed problems were obtained by decreasing four domains of the 4-100 problems by ten units. For some DCSPs, perturbed problems were selected so that those generated remained solvable. For others, perturbed problems were selected that did not have solutions. In both cases, five sets of DCSPs of length 1 were generated from the original ten base problems.
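The two alteration operators for random problems can be sketched as follows, building on the generator sketch above (same hypothetical representation; function names are ours). The non-overlap of additions and deletions falls out of sampling new edges from pairs that were unconstrained before any deletion.

```python
import random
from itertools import combinations, product

def perturb_topology(n, constraints, k, n_forbidden, d, rng):
    """Delete k constraints and add k new ones on previously unconstrained
    pairs (same density, changed topology; additions never reuse deletions)."""
    free = [p for p in combinations(range(n), 2) if p not in constraints]
    new = dict(constraints)
    for edge in rng.sample(list(constraints), k):
        del new[edge]
    all_pairs = list(product(range(d), repeat=2))
    for edge in rng.sample(free, k):
        new[edge] = set(rng.sample(all_pairs, n_forbidden))
    return new

def perturb_relations(constraints, k, n_forbidden, d, rng):
    """Replace the relations of k constraints with fresh random relations
    of the same tightness (constraint graph unchanged)."""
    all_pairs = list(product(range(d), repeat=2))
    new = dict(constraints)
    for edge in rng.sample(list(constraints), k):
        new[edge] = set(rng.sample(all_pairs, n_forbidden))
    return new

# one perturbation step of each kind on the problem generated above
rng = random.Random(2)
c5 = perturb_topology(50, constraints, 5, 37, 10, rng)   # "5c" change
r5 = perturb_relations(constraints, 5, 37, 10, rng)      # "5r" change
```

In both operators the number of constraints and the tightness are preserved, matching the design described above; only the topology or the relations change.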
The heuristic that forms the core of the new method is an adaptive heuristic based on weighted degree. It is a version of the domain-over-weighted-degree heuristic of [11] that starts search with weights obtained by "random probing" [12]. This method involves a number of short 'probes' of the search space, where search is run to a fixed cutoff and variable selection is random. Constraint weights are updated in the normal way during probing, but the information is not used until complete search begins. In our experiments, this method was compared with the original domain-over-weighted-degree heuristic, or dom/wdeg. In this work, we used a probing schedule of 100 probes with a cutoff of 30 failures (domain wipeouts) for both types of problems. This involves 3100-3700 search nodes per problem. (Using Xlisp, each probe required about 0.2 sec; with a more powerful solver it takes about 0.01 sec.) Since we are interested in comparing quality of search using this information, the means shown in the tables are for the search phase alone. (Obviously, total effort is obtained by adding the probing effort and the search effort.)

All heuristics were employed in connection with the maintained arc consistency algorithm, using AC-3 (MAC-3). A MAC-3 style of search was also used with the scheduling problems, which involved maintaining lists of subintervals for each future domain. The performance measure was search nodes; in this work, differences in this measure corresponded to differences in constraint checks and runtime. For problems with solutions, search was for one solution; for unsolvable problems search was carried out until unsolvability was proven. (In a few cases it was necessary to impose a limit on search effort; these are noted at appropriate places in the text.) For random problems with probing and dom/wdeg, value ordering was usually randomised and mean search effort was obtained over 100 runs. For scheduling problems, value ordering was done by taking the largest or smallest value in alternating sequence, beginning with the smallest value.

III. CONSTRAINT WEIGHTS FOR DCSP SEARCH

The major finding in previous work was that despite their marked effects on search, the changes just described often do not greatly affect the locations of major points of contention (bottleneck variables). Therefore, a heuristic that assesses major sources of contention should perform well, even when using information from the base problem on a perturbed problem. Since the constraint weights obtained from random probing distinguish points of high contention [12], [9], the usefulness of this information is not lost in the face of changes such as these.
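To make the probing schedule and the weight reuse concrete, here is a minimal sketch. Here `mac_search` is a placeholder for whatever MAC solver is used (its callback interface is our assumption), and the weight update (increment the weight of a constraint whenever its propagation wipes out a domain) follows the weighted-degree scheme of [11].

```python
import random

def random_probe_weights(constraints, mac_search, n_probes=100, fail_cutoff=30, seed=0):
    """Run short probes with random variable selection, accumulating
    constraint weights on domain wipeouts; return the weights for search."""
    rng = random.Random(seed)
    weights = {c: 1 for c in constraints}          # weighted-degree initialisation
    for _ in range(n_probes):
        failures = [0]
        def on_wipeout(c):                         # called when constraint c empties a domain
            weights[c] += 1
            failures[0] += 1
        mac_search(var_order=lambda free: rng.choice(sorted(free)),
                   on_wipeout=on_wipeout,
                   should_stop=lambda: failures[0] >= fail_cutoff)
    return weights

def dom_wdeg_select(future_vars, domains, weights):
    """dom/wdeg: choose the future variable minimising |dom(v)| / wdeg(v),
    where wdeg(v) sums the weights of v's constraints with a future variable."""
    def wdeg(v):
        return sum(w for (a, b), w in weights.items()
                   if v in (a, b) and (a if b == v else b) in future_vars) or 1
    return min(future_vars, key=lambda v: len(domains[v]) / wdeg(v))
```

In the rndi condition the probes are run on each problem itself; in rndi-orig the weights from the base problem are reused for the perturbed problems (new constraints getting weight 1), which is where the amortisation discussed below comes from.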
The results in Table 1, which include data from [2] (first three entries in rows one and three), show the effectiveness of weight learning and the importance of proper assessment procedures for obtaining these weights. The basic data are for three methods: (i) dom/wdeg with no restarting, (ii) independent random probing for each problem (rndi), (iii) a single phase of random probing on the original problems (rndi-orig), after which these weights were used with the original and each of its altered problems (on each of the 100 runs with random value ordering). Here, we added a fourth method: (iv) weights obtained from dom/wdeg on the base problem were used as initial weights for perturbed problems. In the third and fourth cases, the new constraints in an altered problem were given an initial weight of 1.

The basic data show that random probing improves search performance in comparison with dom/wdeg, and there is relatively little falloff if weights from the base problem are used. We see in addition that, if weights from dom/wdeg are reused, search performance is worse on average than with the original dom/wdeg heuristic, which starts search with no information other than degree. This shows that it is important to obtain information about contention through careful sampling of failures, not just in association with CSP search. It is also consistent with the proposal that random probing provides information about global sources of contention, in partial contrast to dom/wdeg [12].

Table 1. Search Results for Perturbed Problems with Weighted Degree Heuristics

problems     dom/wdeg   rndi   rndi-orig   d/wdg-orig
random-5c    1617       1170   1216        1764
random-5r    1729       1344   1317        2055
scheduling   11,340     7972   5999        20,283

Notes. Random problems altered by adding/deleting constraints (5c) or by changing relations (5r). Scheduling problems altered by decrementing domains. Mean search nodes across all altered problems.
IV. EFFECTIVENESS OF A PROBLEM-STRUCTURE-BASED METHOD WITH VARIOUS FORMS OF PROBLEM CHANGE

The practicality of this DCSP method would be greatly enhanced if weights obtained from a single set of probes could be used at the beginning of search for a succession of CSPs, i.e. over a DCSP sequence of length greater than one. In this case, the work required for probing can be amortised across a series of subsequent searches. Previous work showed that, for DCSPs of length 20, where each successive CSP is formed from the previous one in the sequence by adding and deleting five constraints, weights from the original problem enhanced search, in comparison with dom/wdeg, over the first six problems in the series [2]. Moreover, even when the additional boost from the original weights is lost, the algorithm recovers quickly on each problem and never does significantly worse than dom/wdeg. Fig. 1 shows results for a similar set of DCSPs, but here the alterations consisted of changing five relations chosen at random. In this case, when the original weights are used there is no fall-off in the gain in efficiency over 20 successive perturbations.
[Fig. 1 here: line plot of % improvement over dom/wdeg (y-axis, 0-40) for rndi and rndi-orig, against successive perturbed problem sets P-1(k) (x-axis, 0-20).]
Fig. 1. Percent improvement with rndi and rndi-orig in comparison with dom/wdeg on successive sets of perturbed problems. Problem set k+1 in sequence is derived by changing 5 relations in each problem in set k (numbered on x-axis).
That these results are due to the effect of constraint weights can be shown by correlating weight profiles for a base problem with each of its successors. (A "weight profile" is the set of weights associated with each of the variables at the end of probing.) For this purpose, we used the Spearman rank correlation coefficient [13]; a small computational example is given after Table 2. So far this has been done for a single DCSP from each set. For the DCSP from the 20 successive add/delete series, the correlation was > 0.92 for the first four CSPs in the sequence; then it declined to about 0.85 for the next four and then to 0.80. This corresponds to the decline in enhancement observed in the experiment in [2]. For the DCSP from the 20 successive relational-change series, the correlations ranged between 0.94 and 0.97 for the entire series. For comparison, correlations among weight profiles from the same problem using different random seeds were 0.97-0.98, while for independent problems they ranged from -0.1 to 0.1. Note that, since changing relations does not add or remove constraints, all the weights can be applied to successive problems, which must contribute to the persistence of the positive effects of rndi-orig.

For problems in which alterations changed a solvable problem into an unsolvable one or vice versa, the pattern of differences in search effort was essentially the same as for problems that had solutions before and after change (Table 2). Again, random probing improves performance in comparison with dom/wdeg, and in this case there is very little fall-off in performance when the weights from the base problem are used (rndi-orig) instead of weights obtained from probing the perturbed problem directly (rndi).

Table 2. Search Results for Problems where Perturbations Change the Solvability

             sol->nosol          nosol->sol
method       base   perturbed    base   perturbed
dom/wdeg     2858   6321         4985   2390
rndi         2262   5162         3954   1886
rndi-orig    2224   5227         3943   1906

Notes. Random problems altered by adding/deleting constraints. Mean search nodes across all altered problems based on 100 runs per problem with random value ordering.
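To illustrate the profile comparison, here is a tie-free Spearman computation in Python; the weight profiles shown are made-up illustrative numbers, not data from the experiments.

```python
def spearman(x, y):
    """Spearman rank correlation for equal-length profiles without ties
    (Pearson correlation applied to the ranks)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean = (n - 1) / 2
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)   # same for ry: both are ranks 0..n-1
    return cov / var

# illustrative weight profiles for a base problem and a perturbed successor
base_profile = [12, 3, 47, 8, 30]
pert_profile = [10, 5, 44, 9, 28]
print(spearman(base_profile, pert_profile))  # 1.0: identical rankings, stable bottlenecks
```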
Fig. 2 shows the same persistent gain in efficiency (over dom/wdeg) across a long sequence of perturbations that was demonstrated in the earlier experiment of this type (Fig. 1). This means that for this form of perturbation, amortisation of probing costs can be achieved regardless of whether problems have solutions.

[Fig. 2 here: line plot of % improvement over dom/wdeg for rndi and rndi-orig against P-1(k), as in Fig. 1.]

Fig. 2. Percent improvement with rndi and rndi-orig in comparison with dom/wdeg on successive sets of perturbed problems. Successive problems in a DCSP of length 20 were derived as described in Fig. 1. In this case, however, at every step in the sequence about half the problems do not have solutions.

Results for DCSPs based on scheduling problems, where perturbations produced problems with no solutions, are shown in Table 3. (The means for rndi and rndi-orig on base problems are the same because the same seeds were used during probing.) Here, the improvement in performance after random probing is marked. And in this case the same degree of improvement is found following perturbation when the weights from the original problems are used in solving the altered problems.

Table 3. Search Results for Scheduling Problems When Perturbations Make Problems Unsolvable

method      base     perturbed
dom/wdeg    17,159   49,085
rndi        6,524    19,439
rndi-orig   6,524    19,208

Notes. Mean search nodes across all 50 altered problems. Probing results based on 10 runs per problem.
V. SEARCH WITH A SOLUTION-BASED METHOD: LOCAL CHANGES

The best-known algorithm for DCSPs that is based on solution reuse is called Local Changes. Local Changes is a complete algorithm designed to find solutions to an altered problem while conserving as much of the original assignment as possible [14]. It works by determining a minimal set of variables that must be reassigned, and undoing old assignments only when they are inconsistent with the new ones. Our version of Local Changes updates the classical description by using MAC; it also makes use of the data structures and style of control found in our basic MAC implementation. In the present experiments, for random problems the algorithm was run with a min-conflicts look-ahead value ordering [15], which has been found to be better than either lexical value ordering or the min-conflicts look-back ordering reported earlier [2].
For scheduling problems, this was not possible since the domains are fairly large; hence the ordering described in the Methods section was used. Three different variable ordering heuristics were used in connection with Local Changes: maximum forward degree (fd), the FF2 heuristic of [16] (ff2), and the domain-over-weighted-degree heuristic of [11] (dom/wdeg). FF2 chooses a variable that maximises the formula $(1 - (1 - p_2^{m})^{d_i})^{m_i}$, where $m_i$ is the current domain size of $v_i$, $d_i$ the future degree of $v_i$, $m$ is the original domain size, and $p_2$ is the original average tightness (a small scoring function is sketched after Table 4). The weighted degree heuristic was described earlier. It was used here to determine whether the efficiency of Local Changes could be improved by using a more powerful variable ordering heuristic. In all cases the same heuristic was used for the original and perturbed problems. For perturbed problems, weighted degree was used either starting with all weights initialised to 1 (dom/wdeg) or beginning with constraint weights from the base problems, with only new constraints initialised to 1 (d/wdg(r)).

Earlier work has shown that for random problems that are in a critical complexity region, Local Changes performs very poorly, giving results for search effort that are two to three orders of magnitude greater than results with methods based on random probing. These results were obtained for perturbations consisting of adding and deleting five constraints. Means are shown in Table 4 for three sets of 25 DCSPs, both for this type of change and for changes involving altered relations. Results of a similar nature were found when perturbations took the form of changing relations; recall that for this case, the maximum points of contention are often stable across a long sequence of changes, as shown in Fig. 1. Nonetheless, the mean performance of Local Changes on these problems is worse than the mean found with DCSPs obtained by changing the topology of the constraint graph.

Table 4. Performance of Local Changes with DCSPs Based on Random Problems

change   fd           ff2         dom/wdeg    d/wdg(r)
5c       7,463,591    191,503     377,596     370,230
5r       8,931,099    285,796     569,130     564,097
5c/ns    34,848,645   1,231,672   2,048,161   2,034,161

Notes. Random problems. "5c": problems altered by adding and deleting 5 constraints ("/ns" means perturbed problems had no solutions). "5r": problems altered by changing 5 relations. Mean search nodes across 75 problems.
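A direct transcription of the FF2 score, under our reading of the reconstructed formula as an estimated wipeout probability (this interpretation is ours; FF2 itself is from [16]):

```python
def ff2_score(m_i, d_i, m, p2):
    """Estimated probability that v_i's domain wipes out:
    p2**m        - a value of v_i has no support in one future constraint
    (...)**d_i   - it keeps support in all d_i future constraints
    outer **m_i  - every one of the m_i current values loses support"""
    p_value_deleted = 1 - (1 - p2 ** m) ** d_i
    return p_value_deleted ** m_i   # FF2 selects the variable maximising this
```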
One reason that the means for problems perturbed by changing relations were greater than those for problems perturbed by adding and deleting constraints is that, in the former, there were fewer cases where the original solution was still a valid assignment after problem alteration. For the changed-constraints condition there were 13 and 12 such cases out of 75 for max forward degree and ff2, respectively; for the changed-relations condition the corresponding counts were 5(6) and 4. (The parenthetical value includes one instance where the number of nodes was 1.) This also shows that there are types of change where solution-reuse methods are likely to be inefficient because of the reduced chance that the old solution will still be valid after change.
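A schematic rendering of the Local Changes control loop of [14], in the binary-CSP representation used earlier, may help fix ideas. This is a bare backtracking version, our own simplification; it omits the MAC propagation and min-conflicts orderings of the implementation tested here, and all helper names are ours.

```python
def violates(constraints, u, xu, v, xv):
    """True if u=xu and v=xv is a forbidden pair of some constraint."""
    if (u, v) in constraints:
        return (xu, xv) in constraints[(u, v)]
    if (v, u) in constraints:
        return (xv, xu) in constraints[(v, u)]
    return False

def lc_variables(fixed, assigned, todo, domains, constraints):
    """Assign every variable in `todo`. Assignments in `fixed` may not be
    undone; those in `assigned` may be unassigned to resolve conflicts."""
    if not todo:
        return {**fixed, **assigned}
    v, rest = todo[0], todo[1:]
    for value in sorted(domains[v]):   # lexical here; the paper uses min-conflicts orderings
        if any(violates(constraints, u, xu, v, value) for u, xu in fixed.items()):
            continue                   # cannot undo a protected assignment
        clash = {u for u, xu in assigned.items()
                 if violates(constraints, u, xu, v, value)}
        kept = {u: xu for u, xu in assigned.items() if u not in clash}
        # re-assign the evicted variables while protecting v's new value
        repaired = lc_variables({**fixed, v: value}, kept, sorted(clash),
                                domains, constraints)
        if repaired is not None:
            new_assigned = {u: x for u, x in repaired.items() if u not in fixed}
            result = lc_variables(fixed, new_assigned, rest, domains, constraints)
            if result is not None:
                return result
    return None

def repair(old_solution, domains, constraints):
    """DCSP use: keep the old solution, re-queueing only the variables whose
    old values violate the altered constraints; others are undone lazily."""
    broken = {v for v in old_solution for u in old_solution
              if u != v and violates(constraints, u, old_solution[u],
                                     v, old_solution[v])}
    start = {v: x for v, x in old_solution.items() if v not in broken}
    return lc_variables({}, start, sorted(broken), domains, constraints)
```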
Tests with scheduling problems show the same effects. For alterations that gave problems with solutions, in the majority of cases (35/50) the original solution was still valid for the altered problem (Table 5). But with even a single invalid assignment, the same unraveling process observed with random problems occurred, with the attendant explosion in search effort. (Since there were several cases where search reached the ten-million-node limit before finding a solution, the numeric means in Table 5 are usually lower bounds on average search effort.)

Table 5. Performance of Local Changes with DCSPs Based on Scheduling Problems

condit       probs/data   d/fd         d/wdg        d/wdg(r)
             orig         620,681      17,159       17,159
sol->sol     pert         >2,104,397   >2,148,399   >2,343,342
             #            35/10        35/10        35/11
sol->nosol   pert         >10^7        >10^7        >10^7

Notes. os-taillard-4-100 problems, altered by reducing four domains. Means based on 10 original and 50 perturbed problems. 10^7 node limit. For the sol->sol condition, the "pert" row gives mean search nodes and the "#" row gives the number of cases with 0 or >10^7 search nodes.
The data for the solution-to-no-solution DCSPs show the same type of result, although in this case search effort increases by about another order of magnitude, since now the algorithm must prove unsolvability (Table 4, "5c/ns" problems; Table 5, sol->nosol condition). For the scheduling problems, it was not possible to prove unsolvability for any of the 50 problems within the limit imposed (cf. Table 5).

VI. SOLUTION STABILITY WITH RANDOM PROBING AND LOCAL CHANGES

A. Probing versus Local Changes

Table 6. Mean Hamming Distance for Perturbed Problems with Local Changes

variant   fd     ff2
LC        30.6   31.3
LC-mc     30.4   31.1
LC-mv     25.8   28.6

Notes. Random problems altered by adding and deleting 5 constraints. Mean Hamming distances across 75 altered problems. "LC" uses lexical value ordering, "LC-mc" uses min-conflicts look-ahead (as in Table 4), "LC-mv" uses min-conflicts look-back.
Local Changes starts with the previous solution, which it then tries to repair. Table 6 shows data for this algorithm with three different value orderings; note that the numbers in the table are mean Hamming distances, so smaller values represent higher stability. In comparison, the mean for rndi-orig is 33.9. Hence, the difference was small except for cases where the value ordering was based on minimising conflicts with existing assignments.

B. An Enhanced Probing Procedure

Heretofore, search after probing was done without a specific value ordering heuristic. A simple enhancement is to begin with the assignments in the original solution before switching to some other ordering. This can be elaborated by choosing
subsequent values so as to minimise the number of conflicts with the original assignments for the unassigned variables. Using a solution-guided probing method of this form, we found that for the 75 perturbed problems in the 5c sets, the Hamming distance was significantly reduced; the mean value was 23.0. This is, in fact, better than the best values found with Local Changes (Table 6). (Note that the term "solution-guided probing" is a slight abuse of language, in that the probing itself is not solution-guided, only the search applied to the altered problem.) An additional benefit is that with solution-guided probing mean search effort is reduced by almost 50% in comparison with the mean shown in Table 1 for rndi-orig. This is because search is now avoided in cases where the original solution is still valid after perturbation, or greatly reduced in cases where there is a highly similar solution.
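The enhancement amounts to a value ordering. A minimal sketch, reusing the `violates` helper from the Local Changes sketch above (the function name and interface are ours):

```python
def solution_guided_order(v, domain, old_solution, future_vars, constraints):
    """Order v's values for search: the old solution's value first, then by
    increasing number of conflicts with the old values of future variables."""
    def conflicts(x):
        return sum(1 for u in future_vars
                   if u != v and u in old_solution
                   and violates(constraints, v, x, u, old_solution[u]))
    old = old_solution.get(v)
    return sorted(domain, key=lambda x: (x != old, conflicts(x)))
```

If the old solution is still consistent in the altered problem, every variable takes its old value first and search finishes without backtracking, which is the source of the reduction in search effort noted above.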
There are, therefore, at least three good reasons for using problem-structure-based methods rather than alternative solution-guided procedures:
• the former methods are much more efficient, by up to several orders of magnitude (and previous work has shown that weighted degree methods scale impressively [11], [12]);
• when married with solution-guided techniques, the former give solution stability that is as good as or better than methods that are primarily solution-guided;
• the former are effective in situations where the latter cannot be, e.g. when problems have no solutions.
In addition, the present methods are quite easy to code on top of the basic algorithm.

ACKNOWLEDGMENTS

This work was supported by Science Foundation Ireland under Grant 05/IN/I886.
VII. CONCLUSIONS AND FUTURE WORK

In this paper (and in our previous work [2]) we have shown that a heuristic reasoning-reuse strategy based on weighted degree is robust across a variety of forms of problem alteration and resulting conditions. We have also shown that this method can be combined with a solution-reuse technique to give a high degree of solution stability. Despite the fact that the latter technique is geared toward enhancing stability, it also reduces the average search effort by curtailing or eliminating search in cases where a solution identical or highly similar to the original solution is still viable. Solution-guided probing should also do well on easy problems, since after perturbation there are usually solutions that are very similar to the original.

Information gained with probing on problems that have no solutions can subsequently be used to find solutions more quickly in altered problems. This is another context in which heuristic reasoning reuse performs impressively; in fact, this is apparently the first demonstration of such a capacity in connection with DCSPs. In addition, probing is also effective in the reverse scenario, which is especially vexatious for solution-reuse techniques.

The present method is perhaps the first technique for DCSPs that can be said to be based on the structure of the problem, which, in turn, is related both to the constraint graph topology and to patterns of support. All other approaches, including solution-reuse and reasoning-reuse, and proactive as well as reactive strategies (cf. [1]), are based either on solutions or on tuples of values. A general difficulty with tuple-oriented approaches is that they can get bogged down in the combinatorics of the problem. In contrast, an approach that focuses on likely bottlenecks can directly address this difficulty, essentially by following the fail-first principle in a more thoroughgoing manner. We know from earlier work that probing-based methods do facilitate adherence to the fail-first policy, which is to minimize the size of the refutation whenever search departs from a solution path [9]. Given that the major bottlenecks are also more stable in the face of change than the solution sets (as shown in [2]), this results in a very robust strategy.
REFERENCES

[1] G. Verfaillie and N. Jussien, "Constraint solving in uncertain and dynamic environments: A survey," Constraints, vol. 10, no. 3, pp. 253–281, 2005.
[2] R. J. Wallace, D. Grimes, and E. C. Freuder, "Solving dynamic constraint satisfaction problems by identifying stable features," in Twenty-First International Joint Conference on Artificial Intelligence, IJCAI'09, 2009, pp. 621–627.
[3] T. Schiex and G. Verfaillie, "Nogood learning for static and dynamic constraint satisfaction problems," International Journal of Artificial Intelligence Tools, vol. 3, pp. 187–207, 1994.
[4] R. Barták, T. Müller, and H. Rudová, "A new approach to modeling and solving minimal perturbation problems," in Recent Advances in Constraints - CSCLP 2003, LNAI No. 3010, K. Apt, F. Fages, F. Rossi, P. Szeredi, and J. Váncza, Eds. Springer, 2004, pp. 233–249.
[5] T. Müller and H. Rudová, "Minimal perturbation problem in course timetabling," in Practice and Theory of Automated Timetabling V - PATAT 2004, LNCS No. 3616. Springer, 2005, pp. 126–146.
[6] R. Dechter and A. Dechter, "Belief maintenance in dynamic constraint networks," in Proc. Seventh National Conference on Artificial Intelligence - AAAI'88. AAAI Press, 1988, pp. 37–42.
[7] C. Bessière, "Arc-consistency in dynamic constraint satisfaction problems," in Proc. Ninth National Conference on Artificial Intelligence - AAAI'91. AAAI Press, 1991, pp. 221–226.
[8] I. P. Gent, E. MacIntyre, P. Prosser, B. M. Smith, and T. Walsh, "Random constraint satisfaction: Flaws and structure," Constraints, vol. 6, pp. 345–372, 2001.
[9] R. J. Wallace and D. Grimes, "Experimental studies of variable selection strategies based on constraint weights," Journal of Algorithms: Algorithms in Cognition, Informatics and Logic, vol. 63, pp. 114–129, 2008.
[10] E. Taillard, "Benchmarks for basic scheduling problems," European Journal of Operational Research, vol. 64, pp. 278–285, 1993.
[11] F. Boussemart, F. Hemery, C. Lecoutre, and L. Sais, "Boosting systematic search by weighting constraints," in Proc. Sixteenth European Conference on Artificial Intelligence - ECAI'04. IOS, 2004, pp. 146–150.
[12] D. Grimes and R. J. Wallace, "Learning to identify global bottlenecks in constraint satisfaction search," in Twentieth International FLAIRS Conference. AAAI Press, 2007, pp. 592–598.
[13] W. L. Hays, Statistics for the Social Sciences, 2nd ed. Holt, Rinehart and Winston, 1973.
[14] G. Verfaillie and T. Schiex, "Solution reuse in dynamic constraint satisfaction problems," in Twelfth National Conference on Artificial Intelligence - AAAI'94. AAAI Press, 1994, pp. 307–312.
[15] D. Frost and R. Dechter, "Look-ahead value ordering for constraint satisfaction problems," in Fourteenth International Joint Conference on Artificial Intelligence, IJCAI'95, 1995, pp. 572–578.
[16] B. M. Smith and S. A. Grant, "Trying harder to fail first," in Proc. Thirteenth European Conference on Artificial Intelligence - ECAI'98. Wiley, 1998, pp. 249–253.