Automatic Transformation of Constraint Satisfaction Problems to Integer Linear Form — an Experimental Study

Søren R. Nielsen, David Pisinger, and Per Marquardsen

Dept. of Computer Science, University of Copenhagen, Denmark
Baan Nordic, Hørkær 12A, Herlev, Denmark
Abstract. Constraint Satisfaction Problems (CSP) and Integer Linear Programming Problems (ILP) in their decision form are both NP-complete problems and thus rely on tree-search techniques for their solution. Despite several similarities in the solution algorithms, the two research fields have developed in quite separate directions. Recent attempts however indicate that a unification of the techniques may lead to faster solution algorithms in both fields. The present paper deals with an automatic transformation system which transforms CSP models to ILP models, such that these can be solved with an ILP-solver. BaaN Nordic recently developed a very flexible object oriented modeling language for CSP problems. This language accepts variables with real values, as well as logical constraints, arithmetic functions, and mathematical expressions. By allowing the decision variables to take on real values, the CSP problem is no longer defined on a finite domain, and thus normal techniques for CSP cannot be applied. Moreover, several practical problems only involve very few non-linear constraints, making it attractive to transform such models to an ILP form. The integer-linear models tend to be easier to solve, since the LP-relaxation may quickly identify unsatisfiable parts of the search tree. In particular, it is often possible to identify unsatisfiable problems by solving the LP-relaxation. The paper illustrates how CSP problems may be transformed to an equivalent linear form by introducing new boolean decision variables. Solution times for the two solution approaches are compared, showing that CSP problems with several real-valued decision variables are in general easier to solve as ILP problems. The paper is concluded with a discussion of a unified language for CSP and ILP problems.
1 Introduction
A constraint satisfaction problem (CSP) consists of a vector x of decision variables, each variable xi having a finite domain Di of possible values it can attain, and a set of constraints stating some relations between the decision variables. A solution to a given CSP is an assignment of values to the variables such that xi ∈ Di and such that all the constraints are satisfied.
An integer linear problem (ILP) wishes to maximize a linear objective function subject to some linear constraints. This may be stated as

maximize   cx
subject to Ax ≤ b
           x ≥ 0
           x_j integer for j ∈ I        (1)
where c and b are vectors of length n and m respectively, and A is a matrix of size m × n. The solution vector x is a vector of length n. In its decision form, an ILP simply asks for a feasible solution to the constraints, thus in this case the objective function may be written 0x. If all decision variables x_j are demanded to be integers, i.e. I = {1, . . . , n}, then we speak of a Pure Integer Problem (PIP). Notice that all constraints in ILP models are linear, whereas CSP models accept non-linear expressions with arithmetic functions in the constraints. For a discussion of CSP models as opposed to ILP models see Brailsford, Potts and Smith [4]. To avoid a complete exploration of the search tree, solvers for Constraint Satisfaction Problems use constraint propagation [10] to restrict the domains of variables whose values are not fixed, whereas solvers for Integer Linear Problems use bounds from LP-relaxations to prune the search tree [15].

BaaN Nordic recently proposed a new CSP language cava [2], which due to its object oriented nature is very flexible and easy to use for building CSP models. The cava language however accepts that some of the decision variables attain a real value, which somehow violates the principle of finite domains in CSP. In particular, this means that enumerative techniques cannot be used in the traditional form. The CSP-solver for the cava language uses interval arithmetic for handling variables with a continuous domain: the domain of a real-valued variable is recursively split up into still smaller units, using constraint propagation for each step. Unfortunately this approach is very time-consuming, which led to the idea of transforming CSP problems with a majority of real-valued variables into ILP problems, solving the latter by a general ILP-solver.

The benefits of an automatic transformation of CSP models to ILP models are numerous: First of all, the decision maker may choose between CSP or ILP techniques for solving a given problem. Although specific modeling languages already exist for ILP problems, it may be practical to build an ILP model using the object oriented cava language — in particular if parts of the model already have been formulated in cava. Moreover, the ILP model may be extended with an objective function if there exists a preference between the feasible solutions. Finally, due to the transformation of CSP models to ILP models, it was possible to experiment with the solution of problems written in cava before the specialized CSP-solver for cava was developed. Although the transformation algorithm may be seen as a simple prototype for integrating CSP and ILP solution techniques, it has given valuable insight into which techniques are best suited for some given classes of problems. This experience can be used to develop an automatic scheduler which selects the most appropriate solution approach for a given problem.
2 Transformation of CSP models to ILP form

A cava model may contain arithmetic functions, relations, logical constraints (e.g. (a ∨ b) ∧ ¬c) and domains on the decision variables (e.g. x ∈ {3, 6, 9}). For a more detailed description of the language see Figure 1.

Fig. 1. Overview of the cava language constructs: arithmetic functions sin(x), cos(x), tan(x), arcsin(x), arccos(x), arctan(x), log(x), exp(x), 1/x, y^x; relational operators >, ≥, …

In order to transform a CSP problem given in this form to an ILP problem, it is necessary to linearize the model using some additional decision variables. Some of the constraints may be linearized without loss of precision, while e.g. arithmetic functions need to be approximated with piecewise linear functions, thus leading to some inaccuracy in the model. The linearization of the model is performed in a number of steps as follows:

– Arithmetic functions are replaced by piecewise linear approximations of the functions.
– Products of decision variables are transformed into products of binary variables. Products of binary variables may easily be expressed as logical constraints, and thus put on binary form.
– Relations are transformed into linear inequalities with boolean variables.
– Logical predicates are transformed into ordinary logical constraints.
– Boolean logic is transformed into linear form.
– Declarations of integer variables are expressed as bounds on the variables with additional constraints. cava has a powerful syntax allowing integers with divided domains and table domains.

In the following we will give a detailed description of the transformation steps. Most of the techniques are based on general model building techniques described in Williams [14] and Mitra [11], which have been adapted to the specific structure of cava.

Arithmetic functions. Arithmetic functions may be on the form cos(x), exp(x), √x. To transform a continuous real-valued function f(x) with a single real-valued argument defined on an interval of real values [d′, d′′], we replace f(x) by a piecewise linear function f̃(x) which approximates the function (cf. Figure 2). Assume that f(x) is defined on the domain [d′, d′′]; then by inserting a number of tabular points d′ = a_0, a_1, a_2, . . . , a_n = d′′ with a_i < a_{i+1} for i = 0, . . . , n − 1 we get the approximation
f̃(x) = Σ_{i=0}^{n} b_i λ_i        (2)

where b_i = f(a_i) and where the additional constraints demand that

Σ_{i=0}^{n} a_i λ_i = x,   Σ_{i=0}^{n} λ_i = 1        (3)
At most two values of λ_i may be positive and these values must be consecutive (i.e. λ_i > 0 and λ_{i+1} > 0 and all other values of λ_j are zero). The λ-values form a so-called Special Ordered Set (SOS) of order 2 (Williams [14]). The effect of this formulation is that when x ∈ [a_i, a_{i+1}] then the only way we can obtain x in (3) is by having λ_i and λ_{i+1} nonnegative, such that a_i λ_i + a_{i+1} λ_{i+1} = x.
Fig. 2. The function sin(x) and its piecewise linear approximation
This will have the effect that the approximation (2) returns a linear interpolation between the two points (a_i, f(a_i)) and (a_{i+1}, f(a_{i+1})). By choosing sufficiently many tabular points a_0, a_1, a_2, . . . , a_n one may obtain an approximation f̃(x) with the property that |f̃(x) − f(x)| ≤ ε for all x ∈ [d′, d′′]. The constant ε should however be chosen moderately large, so that the number of λ-variables in (2) does not become too large.
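To make the breakpoint construction concrete, the following minimal Python sketch generates the tabular points and the rows corresponding to (2) and (3). It is illustrative only; the actual cava2mps compiler is written in C, and the function and variable names used here (piecewise_linear_rows, lam_i, fx) are hypothetical.

```python
import math

def piecewise_linear_rows(f, d_lo, d_hi, n, xname="x", fname="fx"):
    """Generate the breakpoints a_0..a_n and the rows (2)-(3) that tie the
    lambda variables to x and to the approximated function value."""
    a = [d_lo + i * (d_hi - d_lo) / n for i in range(n + 1)]   # tabular points a_i
    b = [f(ai) for ai in a]                                    # b_i = f(a_i)
    lam = [f"lam{i}" for i in range(n + 1)]
    rows = [
        " + ".join(f"{bi:.6f} {li}" for bi, li in zip(b, lam)) + f" - {fname} = 0",  # (2)
        " + ".join(f"{ai:.6f} {li}" for ai, li in zip(a, lam)) + f" - {xname} = 0",  # (3), first part
        " + ".join(lam) + " = 1",                                                    # (3), second part
    ]
    return a, rows

# Example: approximate sin(x) on [0, pi] with 8 segments.
breakpoints, rows = piecewise_linear_rows(math.sin, 0.0, math.pi, 8)
for row in rows:
    print(row)
```

In addition, the λ-variables must be declared as an SOS2 set (or be controlled by extra binary variables) so that at most two consecutive λ_i are nonzero, as described above.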
Transformation of products of variables. Assume that a given constraint contains a product xy of two integer decision variables x and y with domains x ∈ {0, . . . , x_u} and y ∈ {0, . . . , y_u}. If the domains do not start with zero, we may introduce new decision variables having this property, which are linked to the original decision variables with an equation stating the difference between the lower bounds on the domains. If a given constraint contains the product of more than two variables the same approach may be used recursively. The variables x and y may now be expressed in binary form using a number of binary variables

x = Σ_{i=0}^{a} 2^i δ_i,   y = Σ_{i=0}^{b} 2^i δ′_i        (4)

where a = ⌊log_2(x_u)⌋ and b = ⌊log_2(y_u)⌋. Now the product xy can be written

Σ_{i=0}^{a} Σ_{j=0}^{b} 2^{i+j} δ_i δ′_j        (5)
where the product δ_i δ′_j easily may be put in linear form by noting that δ_i δ′_j = 1 if and only if δ_i = 1 and δ′_j = 1. Introducing new variables δ′′_{ij} to replace δ_i δ′_j we get xy reformulated to the linear form

Σ_{i=0}^{a} Σ_{j=0}^{b} 2^{i+j} δ′′_{ij}        (6)
where we demand that

δ_i + δ′_j − 2δ′′_{ij} ≥ 0,   δ_i + δ′_j − δ′′_{ij} ≤ 1,   for all i, j        (7)
The first inequality has the effect δ′′_{ij} = 1 ⇒ (δ_i = 1) ∧ (δ′_j = 1), while the second constraint models the reverse implication. The transformation algorithm thus replaces the product xy with a new variable z which is set equal to the sum (6), and the constraints (4) and (7) are added to the model. The number of new binary variables and additional constraints becomes O(log_2(x_u) log_2(y_u)). This indicates that if the CSP model contains a product of several integer variables, the transformation may become too large to be applicable in practice.

If a constraint contains a product xy of variables where one or both of the variables are real, then we may use a similar technique. We simply generate a binary expansion of the variables up to a given predefined limit ε. Thus e.g. if x should be represented with 3 decimal digits (i.e. ε = 0.001), then we may use the expansion

x = Σ_{i=−10}^{a} 2^i δ_i        (8)

assuming that x has the domain 0 ≤ x ≤ 2^a.
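As an illustration of the product transformation for two integer variables, the sketch below emits the binary expansions (4), the linking constraints (7) and the linear substitute (6). It is a hedged example, not the original tool; the names dx_i, dy_j, d2_i_j and z are invented for the illustration.

```python
import math

def linearize_product(x_ub, y_ub):
    """Sketch of the product transformation: expand x and y in binary form (4),
    replace each delta_i * delta'_j by a fresh variable delta''_ij, and emit
    the linking constraints (7)."""
    a = math.floor(math.log2(x_ub))
    b = math.floor(math.log2(y_ub))
    terms, rows = [], []
    for i in range(a + 1):
        for j in range(b + 1):
            dij = f"d2_{i}_{j}"                          # delta''_ij, a new binary variable
            terms.append(f"{2 ** (i + j)} {dij}")        # contributes 2^(i+j) * delta''_ij, cf. (6)
            rows.append(f"dx_{i} + dy_{j} - 2 {dij} >= 0")   # delta'' = 1  =>  both deltas are 1
            rows.append(f"dx_{i} + dy_{j} - {dij} <= 1")     # both deltas 1  =>  delta'' = 1
    rows.append(" + ".join(f"{2 ** i} dx_{i}" for i in range(a + 1)) + " - x = 0")  # (4) for x
    rows.append(" + ".join(f"{2 ** j} dy_{j}" for j in range(b + 1)) + " - y = 0")  # (4) for y
    rows.append(" + ".join(terms) + " - z = 0")          # z replaces the product xy, cf. (6)
    return rows

for row in linearize_product(x_ub=12, y_ub=5):
    print(row)
```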
Division of variables can be expressed by using the fact that we are able to transform products of variables. Thus having a relation x/y in cava we may exchange it with a new variable z and add an additional constraint zy = x to the set of constraints. As noted before, this will introduce some inaccuracy up to a given ε chosen a priori.

Relations. Relations (or comparisons) may e.g. be on the form 7x_1 + 19x_2 ≤ 23y_4 − 7. A relation may be part of a larger boolean expression, thus we will link a boolean variable δ to the relation which takes on the value 1 if and only if the relation is satisfied. The variable δ can then be used in the next phase, where boolean expressions are modeled. We may assume that both sides of the relation are in linear form, since the above steps have transformed more complex expressions into this form. To model a relation on the form

(ax ≤ b) ⇔ (δ = 1)        (9)

where a and x are vectors of length n and b is a single constant, we need to split the implication into two parts:

(ax ≤ b) ⇐ (δ = 1)
(ax ≤ b) ⇒ (δ = 1)        (10)
The first implication in (10) may be written in linear form as

ax + (δ − 1)M ≤ b        (11)
where M is a large constant chosen as an upper bound on ax − b. If δ = 1 then we get imposed the demanded relation ax ≤ b. However if δ = 0 then we get imposed the relation ax − b ≤ M, which is a tautology. The second implication in (10) is equivalent to (δ = 0) ⇒ (ax > b), which can be written in the form

ax + δM ≥ b + ε        (12)

where M now is chosen as an upper bound on b − ax, and ε is a small number stating the tolerance for when ax is considered to exceed b. It is necessary to add this tolerance to ILP models, since strict inequalities are not allowed for optimization problems (a maximum is simply not defined for open sets). The effect of (12) is that whenever δ = 0 then we get imposed the inequality ax ≥ b + ε, while if δ = 1 then we get the inequality b − ax ≤ M − ε, which is a tautology.
In general we model relations as stated in the following table (notice that a strict equality Ax = b is modeled as a combination of two inequalities):

Relation   ILP-constraints
Ax ≤ b     Ax + (δ − 1)M ≤ b,     Ax + δM ≥ b + ε
Ax < b     Ax + (δ − 1)M ≤ b − ε, Ax + δM ≥ b
Ax > b     Ax + (1 − δ)M ≥ b + ε, Ax − δM ≤ b
Ax ≥ b     Ax + (1 − δ)M ≥ b,     Ax − δM ≤ b − ε
Ax = b     Ax ≥ b ∧ Ax ≤ b        (13)
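As an example of the first table row, the two big-M rows linking δ to Ax ≤ b could be generated as in the following sketch (a minimal illustration, assuming a valid bound M is already known; all names are hypothetical):

```python
def link_relation(ax_coeffs, b, M, eps=1e-4, delta="d"):
    """Sketch of rule (13) for 'ax <= b': emit the two big-M rows that force the
    indicator variable delta to 1 exactly when the relation holds.
    ax_coeffs maps variable names to coefficients; M bounds |ax - b|."""
    ax = " + ".join(f"{c} {v}" for v, c in ax_coeffs.items())
    return [
        f"{ax} + {M} {delta} <= {b + M}",     # ax + (delta - 1) M <= b, i.e. delta = 1 => ax <= b
        f"{ax} + {M} {delta} >= {b + eps}",   # ax + delta M >= b + eps, i.e. delta = 0 => ax > b
    ]

# Example: 7 x1 + 19 x2 <= 23 y4 - 7, rewritten as 7 x1 + 19 x2 - 23 y4 <= -7.
for row in link_relation({"x1": 7, "x2": 19, "y4": -23}, b=-7, M=1000):
    print(row)
```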
Logical predicates. cava contains the six logical predicates atLeastOne, atMostOne, oneOf, impossible, allDiff and allEqual. The first four of these take as argument a list of boolean expressions as e.g.

atLeastOne(B_1, B_2, . . . , B_n)        (14)

The meaning of this constraint is that at least one of the boolean expressions B_1, B_2, . . . , B_n must be true. The predicate atMostOne demands that not more than one boolean expression evaluates to true, while oneOf demands that exactly one expression becomes true. The predicate impossible means that not all expressions become true. These predicates can be expressed in linear form by introducing boolean variables δ_i which attain the value 1 if and only if B_i is true, using the techniques from the previous section. Then e.g. atLeastOne can be expressed as the simple inequality

δ_1 + δ_2 + . . . + δ_n ≥ 1        (15)
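A minimal sketch of the linear encodings of these four predicates, assuming the indicator variables δ_i have already been introduced (again with illustrative names only):

```python
def predicate_rows(name, deltas):
    """Sketch of the linear encodings of the cava predicates, given indicator
    variables delta_i already linked to the boolean arguments B_i."""
    s = " + ".join(deltas)
    n = len(deltas)
    return {
        "atLeastOne": [f"{s} >= 1"],        # cf. (15)
        "atMostOne":  [f"{s} <= 1"],
        "oneOf":      [f"{s} = 1"],
        "impossible": [f"{s} <= {n - 1}"],  # not all of B_1..B_n may be true
    }[name]

print(predicate_rows("oneOf", ["d1", "d2", "d3"]))   # ['d1 + d2 + d3 = 1']
```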
The predicates allDiff and allEqual take as argument a list of arithmetic expressions like in

allDiff(E_1, E_2, . . . , E_n)        (16)

The meaning of this predicate is that all the arithmetic expressions E_1, . . . , E_n should evaluate to different values. This can be expressed by use of O(n²) simple relational expressions stating that E_i ≠ E_j for i ≠ j. For allEqual we only need O(n) simple relational expressions, since we only demand that E_i = E_1 for i = 2, . . . , n.

Logical expressions. As described in the previous sections we bind a boolean variable δ_i to every relation, such that δ_i = 1 if and only if the relation is satisfied. This makes it easy to transform boolean expressions to ILP form. In the first phase we build a parse tree of the boolean expressions, such that every interior node represents a logical operator, while the leaves represent the decision variables δ_i (these may be boolean variables in the CSP model, or they may be new boolean variables which have been linked to some relations). We introduce new decision variables δ′_i for all the inner nodes, such that these variables represent the truth value of the corresponding logical operation.
It is sufficient to transform logical models consisting of only logical ∨, ∧ and ¬ into MIP-formulations, since expressions with xor, ⇐, ⇒ and ⇔ can be reformulated into logical expressions containing only the former three operations, see Figure 3.

expression   equivalent expression
B_i xor B_j  (¬B_i ∧ B_j) ∨ (B_i ∧ ¬B_j)
B_i ⇒ B_j    ¬(B_i ∧ ¬B_j)
B_i ⇐ B_j    ¬(¬B_i ∧ B_j)
B_i ⇔ B_j    (¬B_i ∧ ¬B_j) ∨ (B_i ∧ B_j)

Fig. 3. Expressing logical constraints by use of ∨, ∧ and ¬ only.
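A small sketch of this rewriting step is shown below, using a nested-tuple representation of boolean expressions that is purely illustrative (the real compiler works on its own parse tree):

```python
def eliminate_ops(expr):
    """Sketch of the rewriting in Figure 3: reduce xor, =>, <= and <=> to
    not/and/or before the linearization step.  Expressions are nested tuples."""
    if isinstance(expr, str):
        return expr
    op, *args = expr
    a = [eliminate_ops(x) for x in args]
    if op == "xor":
        return ("or", ("and", ("not", a[0]), a[1]), ("and", a[0], ("not", a[1])))
    if op == "implies":      # B_i => B_j  ==  not(B_i and not B_j)
        return ("not", ("and", a[0], ("not", a[1])))
    if op == "implied_by":   # B_i <= B_j  ==  not(not B_i and B_j)
        return ("not", ("and", ("not", a[0]), a[1]))
    if op == "iff":
        return ("or", ("and", ("not", a[0]), ("not", a[1])), ("and", a[0], a[1]))
    return (op, *a)          # not / and / or pass through unchanged

print(eliminate_ops(("implies", "d1", ("xor", "d2", "d3"))))
```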
To model ¬B_i, where B_i is a boolean expression, we assume that δ_i has been linked to the truth value of B_i. Then we may express the negation as

δ′ + δ_i = 1        (17)
In this way δ′ will take on the truth value of ¬B_i. To model B_i ∨ B_j, where B_i and B_j are boolean expressions and δ_i, δ_j the corresponding variables having the truth value of the expressions, we need the two relations

δ′ − δ_i − δ_j ≤ 0
δ_i + δ_j − 2δ′ ≤ 0        (18)

such that δ′ takes on the truth value of B_i ∨ B_j. The first constraint in (18) models the implication (δ′ = 1) ⇒ (δ_i + δ_j ≥ 1), while the second constraint models the reverse implication. In a similar way we may model B_i ∧ B_j by the two relations

2δ′ − δ_i − δ_j ≤ 0
δ_i + δ_j − δ′ ≤ 1        (19)
which means that δ′ gets the truth value of B_i ∧ B_j. The first constraint in (19) models the implication (δ′ = 1) ⇒ (δ_i + δ_j ≥ 2), and the second constraint models the reverse implication. The final result of a logical expression must obviously be set true, which is easily handled by adding the constraint δ′ = 1 for the boolean variable δ′ which represents the truth value of the whole expression. The growth of the model by these transformations is reasonable, since every logical operation introduces one new decision variable and at most two new constraints to the ILP model.
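The parse-tree transformation could be sketched as follows, again over an illustrative nested-tuple representation: each inner node receives a fresh variable t_k tied to its children by (17)–(19), and the root is finally fixed to 1.

```python
from itertools import count

_fresh = count()

def linearize(expr, rows):
    """Sketch of the parse-tree transformation: every inner node gets a fresh
    boolean variable whose truth value is tied to its children by (17)-(19).
    An expression is a variable name or a tuple ('not'|'or'|'and', ...)."""
    if isinstance(expr, str):
        return expr                       # leaf: an indicator variable delta_i
    op, *args = expr
    kids = [linearize(a, rows) for a in args]
    d = f"t{next(_fresh)}"                # fresh variable for this inner node
    if op == "not":
        rows.append(f"{d} + {kids[0]} = 1")                    # (17)
    elif op == "or":
        rows.append(f"{d} - {kids[0]} - {kids[1]} <= 0")       # (18)
        rows.append(f"{kids[0]} + {kids[1]} - 2 {d} <= 0")
    elif op == "and":
        rows.append(f"2 {d} - {kids[0]} - {kids[1]} <= 0")     # (19)
        rows.append(f"{kids[0]} + {kids[1]} - {d} <= 1")
    return d

rows = []
root = linearize(("and", ("or", "d1", "d2"), ("not", "d3")), rows)
rows.append(f"{root} = 1")                # the whole expression must be true
print(rows)
```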
Domains of variables. Simple domains, like a ≤ x ≤ b for a real number x, are easily transformed to the corresponding domains in ILP form. The same applies for integer domains on the form x ∈ {a, a + 1, . . . , b}. cava also supports table domains and separated domains. Separated domains take the form x ∈ integer[1..3; 5..7], allowing x to take on any value from either {1, 2, 3} or {5, 6, 7}. This may be expressed by the logical constraint

(1 ≤ x ≤ 3) ∨ (5 ≤ x ≤ 7),   x integer        (20)
which may be modeled using the techniques (13), (18) and (19). Table domain declarations limit the variable to values in a specified table, as e.g. x ∈ {3, 6, . . . , 15}. Such table domains can be modeled by introducing an additional integer variable y to the model. Assuming that we wish x to be in the domain a ≤ x ≤ b and to be divisible by q, then we add the constraints

x − yq = 0,   ⌈a/q⌉ ≤ y ≤ ⌊b/q⌋,   y integer        (21)
to the ILP model. Besides these declarations cava contains an @-operator indicating that a domain bound is ∞. We do not transform infinite domains since the transformations in e.g. (13) demand that an upper bound on the implied terms can be a-priori determined.
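As an illustration of the table-domain transformation (21) above, the following hedged sketch produces the coupling constraint and the bounds on the auxiliary variable y (the names are illustrative):

```python
import math

def table_domain_rows(a, b, q, xname="x", yname="y"):
    """Sketch of (21): restrict x to the multiples of q inside [a, b]
    by coupling it to an auxiliary integer variable y with x = q*y."""
    lo = math.ceil(a / q)
    hi = math.floor(b / q)
    return [
        f"{xname} - {q} {yname} = 0",   # x must be a multiple of q
        f"{lo} <= {yname} <= {hi}",     # ceil(a/q) <= y <= floor(b/q); y must be declared integer
    ]

print(table_domain_rows(3, 15, 3))      # x in {3, 6, ..., 15}
```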
3 Computational Experience
The compiler was constructed using general compiler techniques, see e.g. Aho, Sethi, Ullman [1]. Thus a parse tree of the cava model was built up during the compilation, which was then transformed to a linear form. The output was written in MPS-form, which is a standard format for ILP-solvers. The compiler, denoted cava2mps, was written in C, and all experiments were run under Windows NT. The transformed problems were solved using cplex [5], which contains a strong preprocessor whose goal is to reduce the number of constraints and variables in the model. This is quite important, since the ILP model generated by our compiler can be simplified significantly, and thus we do not need to do this explicitly. In the computational experiments we chose ε = 0.0001 in the ILP models, since a smaller value caused numerical problems for the ILP-solver.

The main purpose of the experiments is to compare the running time of the generated ILP models against the cava-solver, and to verify our expectation that the ILP-solver will outperform the cava-solver on linear models with several real-valued variables. Furthermore we want to see how well the ILP-solver performs on well-known CSP problems. The cava-solver is in an unfinished state at the time of writing, meaning that we could only solve relatively small problems with very few real variables using this approach. Tables 1 and 2 show some selected results from our experiments.
Name                   Queen12  Queen100  Template1  Template2  Template3  Assign100
Constraints                294      2582       3395       1379       1379        101
Integer variables            0         0         24         24         24          0
Boolean variables          258     10994       1824        780        780      10000
Real variables               0         0          0          0          0          0
Size (MB)                  0.1       4.9        0.7        0.3        0.3        1.8
cava sol. time (sec)      4.16         –          –          –          –          –
cplex sol. time (sec)     0.42       138      12.61       2.42      11.48       1.29

Table 1. Comparison of solution times for cava and cplex
The entries are as follows: "Name" is the name of the problem, while "Constraints", "Integer variables", "Boolean variables" and "Real variables" give the size of the transformed instance with respect to the number of constraints and variables. The size of the MPS-file is given in "Size". The two last rows indicate the solution times of respectively cava and cplex. A dash indicates that the cava-solver could not accept the instance due to its size or due to the real variables involved.

We solved the classical CSP problem n-queens with success. The 12-queen problem Queen12 was solved in half a second, while the cava-solver spent more than four seconds on the same problem. For the 100-queen problem Queen100 we solved the problem in 138 seconds, while the cava-solver was not able to handle this size of problem. We also experimented with problems from the literature which are known to be difficult even for small instances (the template design problem from Proll and Smith [13] and the social golfers problem from Darby-Dowman and Little [6]). We had moderate success with a scaled-down version of the template design problem, finding a nearly optimal solution in 11 seconds. In Table 1, Template1 is a loosely constrained version of the original problem from [13], Template2 is the same problem scaled down, and Template3 is the scaled-down version with very tight bounds. The original formulation of the problem with the tight bounds from Template3 did not terminate within reasonable time. The Assignment Problem Assign100 with 100 assignees (see e.g. Papadimitriou and Steiglitz [12] for a definition) was solved in little more than a second in spite of the very big MPS file, demonstrating the power of the ILP-solver.
Name                   Blending  SinRel  CosRel  AllDiff10  AllEqual500
Constraints                   7      42     107        273         2997
Integer variables             0       4       3         20          500
Boolean variables             0       0      10        571         1498
Real variables                7     418     639          0            0
Size (MB)                  0.01    0.06     0.1       0.06          0.7
cava sol. time (sec)      72.81       –       –          –            –
cplex sol. time (sec)      0.04    0.16    89.8       6.18         0.16

Table 2. Comparison of solution times for cava and cplex
It should be noted however that we do not expect the assignment problem to be a challenge for the cava-solver either. The blending problem Blending from Williams [14] was changed to a decision form by demanding the objective value to be within 15% of the known optimum. This problem involves several real-valued variables. As expected, the ILP-solver finds a solution very quickly, while the cava-solver uses quite some time for finding the same solution. We built models with trigonometric formulas, and managed to solve a model of the well-known sine relation SinRel in just 0.16 seconds. The results for more complicated mathematical models were disappointing. The best result among these was obtained for a model of the cosine relation CosRel, which had constraints in which two real variables and a cos-function are multiplied. This model was solved in about 1½ minutes.

To test the logical predicates from Section 2 we built some simple models using these constructions. The allDiff predicate can be handled by dedicated algorithms in the cava-solver, while the MPS model must constrain every variable to take on values different from any other variable in the predicate. AllDiff10 is an example using this predicate with 10 arguments. The model could be solved by the ILP-solver, but predicates with more arguments started to take a very long time (e.g. 50 arguments took 779 seconds to solve). We expect that the cava-solver will easily solve such instances, but it could not be tested at the present moment. The allEqual predicate is very easy to transform to ILP form, and the considered instance AllEqual500, calling the predicate with 500 arguments, could be solved in very reasonable time by the ILP-solver.

The experiments show that not too complicated arithmetic constraints can be solved quickly using ILP. The n-queens problem illustrates that modeling with logical expressions has a very good representation as an ILP model. CSP problems known to be very difficult remain so: we never managed to get any solution to the social golfers problem. Problems with linear constraints and real variables do not require any transformation but a mere rewriting from cava to MPS, and as such these problems are solved just as fast as regular linear models. The final conclusion from these experiments is that CSP and ILP each have their strengths and weaknesses, but some problems, in particular CSP problems with many real variables, may be solved more effectively in the linear form.
4 Future plans
Ahead of this paper lies the task of designing a unified language combining the best from CSP and ILP. Jönsson [9] gives an overview of such attempts presented in the literature. The unification can take place on several levels. One approach, as suggested in this article, is to use a scheduler to benchmark the performance of a given model when solved by a CSP- or an ILP-solver. The scheduler may
use the historical information to choose the best solution approach the next time the model is solved. In Darby-Dowman and Little [6] a two-phase approach is considered. The idea is to first let one solver work on the problem, and then pass on the intermediate result to the other solver. At the time of writing not much work has been done in this field. Hooker et al. [8] presented a new modelling principle called mixed logical/linear programming, which unifies the best properties from CSP and ILP. A problem is formulated in two parts, one consisting of logical expressions with discrete variables and the other consisting of linear inequalities. These two separate models are connected by mapping each inequality to a logical expression. The models are solved by combined branching/constraint propagation on the finite variables and traditional LP-techniques on the linear part. This approach is especially interesting in the context of this paper, since transformation of non-linear CSP models results in ILP models with inequalities controlled by logical expressions. Finally, Jönsson [9] considered branch-and-cut techniques for solving the ILP formulation of a cava problem. The motivation for this approach is that a branch-and-cut algorithm not only solves a problem to optimality, but also finds a tighter formulation of the problem. Since a typical cava problem is solved many times as part of an interactive decision making process, the tighter formulation should make it easier and easier to solve the ILP problem. The computational results in Jönsson show a number of instances where the tighter formulation actually contributes to faster solution times.
5 Conclusion
The present work on the transformation of CSP to ILP models may be seen as a simple prototype for integrating CSP and ILP solution techniques. It has demonstrated that when CSP models contain several real-valued variables and not too many logical constraints, it may be useful to use a general ILP-solver for solving such problems. The transformation to ILP form also means that problems described in the cava language may now be optimized by the addition of an objective function to the ILP model.

The purpose of the present work was to show that a transformation from CSP to ILP can be done. The transformation rules are however quite inefficient when the constraints become more complicated, thus in this case it could be useful to combine some steps of the transformation in order to get a more compact model. Another approach is to transform non-separable expressions using Special Ordered Sets of Order 2 without having to factorize the non-separable expressions [3]. Transforming any cava model into the best possible ILP model is a task requiring both knowledge of the exact ILP-solver implementation, a library of alternative transformation schemes (since different models are better suited with
different transformations), and a mathematical library able to calculate bounds on any mathematical expression. The present work is based on general transformation strategies which are mainly suited for logical expressions and simple arithmetic expressions.
Acknowledgement

The authors wish to thank BaaN Nordic for having made the cava language and the cava-solver available for this project.
References

1. Aho, Sethi, Ullman, "Compilers: Principles, Techniques and Tools", Addison-Wesley, (1986).
2. cava Reference Manual, Unpublished Manuscript, Baan Company, (1999).
3. E.M.L. Beale (1975), "Some Uses of Mathematical Programming Systems to Solve Problems that are not Linear", Opl. Res. Q. 26, 609–618.
4. Sally C. Brailsford, Chris N. Potts, Barbara M. Smith (1999), "Constraint satisfaction problems: Algorithms and applications", European Journal of Operational Research, 119, 557–581.
5. cplex 6.5 Reference Manual, ILOG, (1999).
6. Ken Darby-Dowman and James Little (1998), "Properties of Some Combinatorial Optimization Problems and Their Effect on the Performance of Integer Programming and Constraint Logic Programming", INFORMS Journal on Computing, 10, 276–286.
7. J.N. Hooker, M.A. Osorio (1999), "Mixed logical/linear programming", Discrete Applied Mathematics, 96–97, 395.
8. J.N. Hooker, M.A. Osorio (1999), "On integrating Constraint Propagation and Linear Programming for Combinatorial Optimization", URL: http://ba.gsia.cmu.edu/jnh/papers.html.
9. Kenneth Jönsson, Solving Constraint Satisfaction Problems Using a Branch-and-Cut Approach, Master's Thesis, Department of Computer Science (diku), University of Copenhagen, 2000.
10. Alan K. Mackworth (1977), "Consistency in networks of relations", Artificial Intelligence, 8, 99–118.
11. G. Mitra, C. Lucas, S. Moody, E. Hadjiconstantinou (1994), "Tools for reformulating logical forms into zero-one mixed integer programs", European Journal of Operational Research, 72, 263–277.
12. Christos H. Papadimitriou, Kenneth Steiglitz (1982), Combinatorial Optimization: Algorithms and Complexity, Prentice Hall, Englewood Cliffs, New Jersey.
13. Les Proll, Barbara Smith (1998), "Integer Linear Programming and Constraint Programming Approaches to a Template Design Problem", INFORMS Journal on Computing, 10, No. 3.
14. H.P. Williams (1998), Model Building in Mathematical Programming, 3rd ed., John Wiley and Sons, Chichester, England.
15. Laurence A. Wolsey (1998), Integer Programming, Wiley Interscience, New York.