
Autonomous Tuning for Constraint Programming via Artificial Bee Colony Optimization

Ricardo Soto (1,2,3), Broderick Crawford (1,4,5), Felipe Mella (1), Javier Flores (1), Cristian Galleguillos (1), Sanjay Misra (6), Franklin Johnson (7), and Fernando Paredes (8)

1. Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
   {ricardo.soto,broderick.crawford}@ucv.cl, {felipe.mella.l01,javier.flores.v,cristian.galleguillos.m}@mail.pucv.cl
2. Universidad Autónoma de Chile, Santiago, Chile
3. Universidad Científica del Sur, Lima, Perú
4. Universidad San Sebastián, Santiago, Chile
5. Universidad Central de Chile, Santiago, Chile
6. Atilim University, Ankara, Turkey, [email protected]
7. Universidad de Playa Ancha, Valparaíso, Chile, [email protected]
8. Escuela de Ingeniería Industrial, Universidad Diego Portales, Santiago, Chile, [email protected]

Abstract. Constraint Programming allows the resolution of complex problems, mainly combinatorial ones. These problems are defined by a set of variables, each subject to a domain of possible values, and a set of constraints. They are solved by a constraint satisfaction solver that explores a search tree of potential solutions. This exploration is controlled by the enumeration strategy, which is responsible for choosing the order in which variables and values are selected to generate the potential solution. Autonomous Search provides the solver with the ability to self-tune its enumeration strategy in order to select the most appropriate one for each part of the search tree. This self-tuning process is commonly supported by an optimizer that attempts to maximize the quality of the search process, that is, to accelerate the resolution. In this work, we present a new optimizer for self-tuning in constraint programming based on artificial bee colonies. We report encouraging results where our autonomous tuning approach clearly improves the performance of the resolution process.

Keywords: Artificial intelligence · Optimization · Adaptive systems · Metaheuristics

1 Introduction

© Springer International Publishing Switzerland 2015. O. Gervasi et al. (Eds.): ICCSA 2015, Part I, LNCS 9155, pp. 159–171, 2015. DOI: 10.1007/978-3-319-21404-7_12

Constraint Programming (CP) [2] is a programming paradigm used to solve constraint satisfaction and optimization problems. In this context, problems are


represented by a sequence of variables, each owning a non-empty domain of possible values, and a set of constraints. The solving process is carried out by a solver, which creates and explores a search tree of potential solutions. A solution to the problem is a complete assignment of a value to each variable such that all the constraints are satisfied. The resolution process relies on enumeration and propagation strategies. The former choose the order in which variables and values are selected for instantiation, thus creating the tree branches, while the latter prune branches that do not lead to any solution. The selection of enumeration strategies is critical to the performance of the resolution process, and the correct choice can greatly reduce the computation cost of finding a solution. However, deciding which strategy is the right one for a problem is not simple, since each enumeration strategy may behave differently depending on the problem and on the status of the search process. Autonomous Search (AS) is a framework [11] that targets this problem. The idea is to autonomously replace underperforming strategies with more promising ones based on a set of performance indicators. This self-tuning process is commonly supported by an optimizer that attempts to maximize the quality of the search process, that is, to accelerate the resolution. In this work, we present a new optimizer for self-tuning in constraint programming based on artificial bee colonies. The artificial bee colony algorithm (ABC) is a modern metaheuristic recently proposed by Karaboga [12,15], based on the intelligent behavior exhibited by bee colonies when they seek food sources. We report encouraging results where our autonomous tuning approach clearly improves the performance of the resolution process.

The rest of this work is organized as follows: Section 2 presents the related work. Sections 3 and 4 describe the problem and the proposed solution, respectively. Finally, the experimental evaluation is presented, followed by conclusions and future work.

2 Related Work

A pioneering work in AS for CP is the one presented in [4]. This framework introduced a four-component architecture allowing the dynamic replacement of enumeration strategies. The strategies are evaluated via performance indicators of the search process, and better-evaluated strategies replace worse ones during solving time. This pioneering framework was used as the basis of different related works. For instance, a more modern approach based on this idea is reported in [7]. It employs a two-layered framework where a hyper-heuristic placed on the top layer controls the dynamic selection of enumeration strategies of the solver placed on the lower layer. A hyper-heuristic can be regarded as a method to choose heuristics [11]. In this approach, two different top layers have been proposed, one using a genetic algorithm [6,19] and another using a particle swarm optimizer [8]. Similar approaches have also been implemented for solving optimization problems instead of pure CSPs [17]. In Section 5 we provide a comparison of our approach with the best AS optimizers reported in the literature.

3 Autonomous Search

As previously presented, AS aims at providing self-tuning capabilities to the solver. In this context, the idea is to autonomously control which enumeration strategy is applied to each part of the search tree during resolution. The replacement of strategies is carried out according to a quality ranking provided by a choice function (CF). A CF is mainly composed of performance indicators of the search process (see the indicators employed in Table 1) and weights that control their relevance within the equation. Formally, the CF for a strategy S_j at time t is computed as follows:

CF_t(S_j) = Σ_{i=1}^{|IN|} w_i · a_i^t(S_j)    (1)

where IN corresponds to the indicator set, w_i is a weight that controls the relevance of the i-th indicator, and a_i^t(S_j) is the score of the i-th indicator for the strategy S_j at time t. A main component of this model is the set of weights, which must be finely tuned by an optimizer. This is done by carrying out a sampling phase in which the problem is partially solved up to a given cutoff. The performance information gathered in this phase via the indicators is used as input data for the optimizer, which attempts to determine the most successful weight set for the CF. The optimizer employed in this work is ABC (see Section 4). Let us remark that this tuning process is critically important, as the correct configuration of the CF may have essential effects on the ability of the solver to properly solve specific problems. Parameter (weight) tuning is hard to achieve, as parameters are problem-dependent and their best configuration is not stable along the search [16]. Additional and detailed information about this framework can be found in [10,18].
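The weighted sum of Eq. 1 is straightforward to compute once indicator scores are available. The sketch below ranks a few strategies by their CF value; the indicator scores and weights are invented for illustration (in the actual framework, the weights are the quantities tuned by the ABC optimizer during the sampling phase).

```python
# Hypothetical indicator scores a_i^t(S_j) for three strategies, using the
# three indicators of Table 1 (SB, In1, In2). All numbers are made up.
indicators = ["SB", "In1", "In2"]
scores = {
    "S1": {"SB": 4.0, "In1": 1.0, "In2": 1.0},
    "S2": {"SB": 0.0, "In1": 2.0, "In2": 1.0},
    "S3": {"SB": 9.0, "In1": 0.0, "In2": -1.0},
}
# Example weight set w_i: e.g. penalize shallow backtracks, reward depth gains.
weights = {"SB": -0.5, "In1": 1.0, "In2": 0.8}

def choice_function(strategy):
    """Eq. 1: weighted sum of indicator scores for one strategy."""
    return sum(weights[i] * scores[strategy][i] for i in indicators)

# The strategy with the highest CF value is the one applied next.
ranking = sorted(scores, key=choice_function, reverse=True)
print(ranking[0])  # -> S2
```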

Table 1. Indicators used during the search process

| Name | Description |
|------|-------------|
| SB   | Number of shallow backtracks [3] |
| In1  | Variation of the maximum depth, calculated as: CurrentMaximumDepth − PreviousMaximumDepth |
| In2  | Calculated as: CurrentDepth − PreviousDepth. A positive value means that the current node is deeper than the one explored at the previous step |

4 Artificial Bee Colony Algorithm

The ABC algorithm [5,20] is a metaheuristic based on the intelligent behavior of honey bees when they seek new food sources in the environment. The position of a food source in space represents a solution to the problem, while its quality is associated with the amount of nectar that it holds. Within the colony there are three types of bees: employed, onlooker, and scout bees. Employed bees fly over the food source that they are exploiting and return to the hive to share the information collected about the amount of nectar in the dance area. The onlooker bees wait in the hive and choose a food source to exploit based on the dances performed by the employed bees. Employed bees whose food source has been exhausted become scout bees and go out in search of a new source, abandoning the former. This is controlled by a parameter called the limit [14], which is the only parameter other than those commonly used in population-based algorithms, such as the colony size (CS) and the maximum number of iterations (MCN). The ABC algorithm was developed under the following assumptions [1]:

1. Half of the colony consists of employed bees, while the other half corresponds to onlooker bees.
2. Each food source is exploited by only one employed bee.

The ABC algorithm proposed by Karaboga [13] is explained below. In the initial phase of the algorithm, a set of solutions x_i is randomly generated through Eq. 2, where FS is the number of food sources, j indexes the dimensions of solution x_i, Dim is the number of variables to optimize, and x_ij^max and x_ij^min are the upper and lower limits, respectively, of the j-th parameter for the i-th solution. Once initialized, the food sources are evaluated and the search process of the employed, onlooker, and scout bees is repeated.

x_ij = x_ij^min + rand(0,1) · (x_ij^max − x_ij^min),   i ∈ [1, FS], j ∈ [1, Dim]    (2)

Each employed bee tries to improve its solution by modifying one of its dimensions using Eq. 3, where φ is a random number uniformly distributed in [−1, 1], k is a randomly selected food source different from i, and j is an integer selected randomly from [1, Dim]. Once the new food source v_i is generated, it is evaluated via Eq. 4 and compared with x_i, keeping the one with the better fitness. In other words, if v_i is better than x_i, the employed bee replaces its food source with the new one and resets its counter to zero. Otherwise, it keeps the position of x_i in memory and its counter is incremented by one.

v_ij = x_ij + φ (x_ij − x_kj)    (3)

fit_i = 1 / (1 + f_i)   if f_i ≥ 0
fit_i = 1 + abs(f_i)    if f_i < 0    (4)

p_i = 0.9 · (fit_i / fit_best) + 0.1    (5)


After the employed bees complete their phase, each onlooker bee selects a source according to Eq. 5. The selection probability p_i of a food source depends on the amount of nectar that it holds, where fit_i is the fitness associated with food source i, and fit_best is the fitness associated with the best food source found so far. Once an onlooker bee has chosen a food source, it generates a new food source in the neighborhood using Eq. 3 and applies the same selection criterion as the employed bees. When the nectar of a food source is depleted, the employed bee of this source becomes a scout bee in order to search for a new one. To determine whether a food source has been abandoned, the value of the attempt counter is evaluated at the end of each iteration. If the counter of attempts to improve the food source is greater than the parameter limit, that food source is replaced by a new one generated via Eq. 2.
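The three phases described above can be sketched compactly. The following is a minimal, self-contained sketch of the ABC loop following Eqs. 2–5, applied to a toy continuous minimization problem; the objective function, bounds, and parameter values are illustrative choices, not those of the paper's CF-tuning setup.

```python
import random

def abc_minimize(f, dim, bounds, colony_size=20, limit=10, max_cycles=200, seed=42):
    """Minimal ABC sketch following Eqs. 2-5: employed, onlooker and scout phases."""
    rng = random.Random(seed)
    lo, hi = bounds
    fs = colony_size // 2                       # half employed / half onlookers, one source each

    def new_source():                           # Eq. 2: random point in the search space
        return [lo + rng.random() * (hi - lo) for _ in range(dim)]

    def fitness(fx):                            # Eq. 4: map objective value to fitness
        return 1.0 / (1.0 + fx) if fx >= 0 else 1.0 + abs(fx)

    x = [new_source() for _ in range(fs)]       # food sources (candidate solutions)
    fx = [f(s) for s in x]
    trials = [0] * fs                           # attempt counters for the 'limit' rule

    def try_improve(i):                         # Eq. 3: perturb one random dimension
        k = rng.choice([j for j in range(fs) if j != i])
        j = rng.randrange(dim)
        v = x[i][:]
        v[j] = min(max(x[i][j] + rng.uniform(-1, 1) * (x[i][j] - x[k][j]), lo), hi)
        fv = f(v)
        if fitness(fv) > fitness(fx[i]):        # greedy selection between v_i and x_i
            x[i], fx[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(fs):                     # employed-bee phase
            try_improve(i)
        best_fit = max(fitness(v) for v in fx)
        probs = [0.9 * fitness(v) / best_fit + 0.1 for v in fx]   # Eq. 5
        for _ in range(fs):                     # onlooker-bee phase (fitness-proportional)
            try_improve(rng.choices(range(fs), weights=probs)[0])
        for i in range(fs):                     # scout-bee phase: abandon exhausted sources
            if trials[i] > limit:
                x[i] = new_source()
                fx[i] = f(x[i])
                trials[i] = 0

    best = min(range(fs), key=fx.__getitem__)
    return x[best], fx[best]

# Toy usage: minimize the 2-D sphere function over [-5, 5]^2.
pos, val = abc_minimize(lambda v: sum(t * t for t in v), dim=2, bounds=(-5.0, 5.0))
print(round(val, 6))
```

In the paper's setting, f would be replaced by a measure of search-process quality obtained from the sampling phase, and each food source would encode a candidate weight vector for the CF of Eq. 1.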

5 Experimental Evaluation

In this section, we illustrate the performance of the proposed approach on a set of classical CSPs, listed below with their respective instances:

– N-Queens problem with n = {8, 10, 12, 20, 50, 75}
– Magic Square problem with size n = {3, 4, 5, 6}
– Sudoku puzzle with size n = {2, 5, 7}
– Knights Tournament with n = {5, 6}
– Quasi Group with n = {3, 5, 6}
– Langford with size n = {2} and k = {12, 16, 20, 23}

We implemented a solver using the ECLiPSe Constraint Programming System version 6.0 and Java. Tests were performed on a 3.30 GHz Intel Core i3 with 4 GB of RAM running Windows 7. Each instance is solved up to a maximum of 65535 steps; if no solution is found at that point, the run is marked t.o. (timeout). To determine the quality of a solution, we use the indicators described in Table 1. In the experiments, 8 variable selection heuristics and 3 value selection heuristics were used, which, when combined, provide a portfolio of 24 enumeration strategies (see Table 2). The results are evaluated in terms of runtime and backtracks, both widely employed performance indicators in constraint programming. Tables 3, 4, and 5 illustrate the runtime required to find a solution for each enumeration strategy individually (S1 to S24) and for the proposed approach (ABC). The results clearly validate our approach, which is the only one to solve all instances of all problems, while also achieving the best average runtime. Tables 6, 7, and 8 depict the results in terms of backtracks, which are analogous to the previous ones. This demonstrates the ability of the proposed approach to correctly select the strategy for each part of the search tree. Finally, we compare the proposed approach with the two previously reported optimizer-based online control systems, one based on a genetic algorithm (GA) [8] and the other one based on


Table 2. Portfolio used

| Id | Variable ordering | Value ordering |
|----|-------------------|----------------|
| S1 | First variable of the list | min. value in domain |
| S2 | The variable with the smallest domain | min. value in domain |
| S3 | The variable with the largest domain | min. value in domain |
| S4 | The variable with the smallest value in its domain | min. value in domain |
| S5 | The variable with the largest value in its domain | min. value in domain |
| S6 | The variable with the largest number of attached constraints | min. value in domain |
| S7 | The variable with the smallest domain; ties broken by the largest number of attached constraints | min. value in domain |
| S8 | The variable with the biggest difference between the smallest and second-smallest values of its domain | min. value in domain |
| S9–S16 | The same eight variable orderings as S1–S8, in order | mid. value in domain |
| S17–S24 | The same eight variable orderings as S1–S8, in order | max. value in domain |

particle swarm optimization (PSO) [9]. Table 9 reports the solving time and number of backtracks required by GA and PSO in contrast with our approach. This comparison shows that ABC and PSO stand out with respect to GA in terms of the number of problems solved with a low number of backtracks. Moreover, considering runtime, the ABC-based optimizer is far superior to the other ones, requiring half the time or less to find a solution for every problem. A graphical comparison can be seen in Figures 1 and 2.


Table 3. Runtime in ms for strategies S1 to S8

| Problem | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8 |
|---------|----|----|----|----|----|----|----|----|
| Q-8 | 5 | 5 | 5 | 4 | 2 | 4 | 4 | 2 |
| Q-10 | 5 | 8 | 3 | 4 | 4 | 5 | 3 | 4 |
| Q-12 | 12 | 11 | 11 | 11 | 13 | 14 | 11 | 10 |
| Q-20 | 20405 | 4867 | 20529 | 20529 | 1294 | 26972 | 15 | 93 |
| Q-50 | t.o. | t.o. | 532 | t.o. | t.o. | t.o. | 524 | t.o. |
| Q-75 | t.o. | t.o. | 4280 | t.o. | t.o. | t.o. | 4217 | t.o. |
| MS-3 | 1 | 5 | 1 | 1 | 1 | 4 | 1 | 1 |
| MS-4 | 14 | 2340 | 6 | 21 | 21 | 1500 | 6 | 11 |
| MS-5 | 1544 | t.o. | 296 | 6490 | t.o. | t.o. | 203 | 1669 |
| MS-6 | t.o. | t.o. | t.o. | t.o. | t.o. | t.o. | t.o. | t.o. |
| S-2 | 35 | 30515 | 10 | 50 | 225 | 1607 | 10 | 10 |
| S-5 | 7453 | t.o. | 2181 | 8274 | t.o. | t.o. | 2247 | 897 |
| S-7 | 26882 | t.o. | 2135 | 25486 | t.o. | t.o. | 2187 | 31732 |
| K-5 | 1825 | t.o. | 2499 | t.o. | t.o. | t.o. | t.o. | t.o. |
| K-6 | 90755 | t.o. | 111200 | 89854 | t.o. | t.o. | 39728 | t.o. |
| QG-5 | t.o. | t.o. | 7510 | t.o. | t.o. | t.o. | 9465 | t.o. |
| QG-6 | 45 | t.o. | 15 | 45 | t.o. | 3605 | 15 | t.o. |
| QG-7 | 256 | 8020 | 10 | 307 | 943 | 16896 | 10 | 16 |
| LF 2-12 | 20 | 242 | 4 | 29 | 43 | 32 | 4 | 22 |
| LF 2-16 | 70 | 70526 | 231 | 115 | 1217 | 489 | 237 | 7 |
| LF 2-20 | 191 | t.o. | 546 | 318 | 61944 | 11 | 553 | 240 |
| LF 2-23 | 79 | t.o. | 286 | 140 | 68254 | 19 | 285 | 19 |
| x̄ | 8311 | 11653.9 | 7252 | 8922.3 | 11163.5 | 3935.3 | 2986.3 | 2315.6 |

Table 4. Runtime in ms for strategies S9 to S16 Problem Q-8 Q-10 Q-12 Q-20 Q-50 Q-75 MS-3 MS-4 MS-5 MS-6 S-2 S-5 S-7 K-5 K-6 QG-5 QG-6 QG-7 LF 2-12 LF 2-16 LF 2-20 LF 2-23 x

S9 5 5 11 20349 t.o. t.o. 1 13 1498 t.o. 35 7521 26621 1908 93762 t.o. 40 240 20 69 185 79 8464.6

Strategies S10 S11 S12 S13 5 4 5 2 8 7 5 5 11 11 11 13 4780 18 23860 1250 t.o. 532 t.o. t.o. t.o. 4336 t.o. t.o. 4 1 1 1 2366 6 21 21 t.o. 297 6053 t.o. t.o. t.o. t.o. t.o. 29797 10 50 225 t.o. 2394 9015 t.o. t.o. 2069 26573 t.o. t.o 2625 t.o t.o t.o 102387 109157 t.o t.o. 9219 t.o. t.o. t.o. 15 45 t.o. 13481 10 348 1097 270 4 29 44 55291 250 118 1273 t.o. 538 312 61345 t.o. 285 140 71209 10601.3 5953.3 10337.9 11373.8

S14 4 5 14 36034 t.o. t.o. 4 1495 t.o. t.o. 1732 t.o. t.o. t.o t.o t.o. 3565 18205 32 530 11 19 4742.4

S15 S16 4 2 3 4 11 10 17 87 533 t.o. 4195 t.o. 1 1 6 11 216 1690 t.o. t.o. 10 10 2310 972 2094 30767 t.o t.o 46673 t.o 10010 t.o. 15 t.o. 11 15 5 21 235 8 541 237 278 19 3358.4 2257


Table 5. Runtime in ms for strategies S17 to S24 and ABC Problem Q-8 Q-10 Q-12 Q-20 Q-50 Q-75 MS-3 MS-4 MS-5 MS-6 S-2 S-5 S-7 K-5 K-6 QG-5 QG-6 QG-7 LF 2-12 LF 2-16 LF 2-20 LF 2-23 x

S17 5 4 11 22286 t.o. t.o. 1 88 t.o. t.o. 5 t.o. 3725 1827 96666 9743 7075 9 18 66 170 75 8339.7

S18 5 7 10 4547 t.o. t.o. 1 37 t.o. t.o. 18836 t.o. t.o. t.o t.o t.o. t.o. 1878 242 55687 t.o. t.o. 8125

S19 4 2 11 16 520 4334 1 99 t.o. t.o. 30 2590 338 2620 97388 20 125 12 4 245 562 272 5459.7

Strategies S20 S21 4 4 4 5 11 14 13135 26515 t.o. t.o. t.o. t.o. 1 1 42 147 165878 t.o. t.o. t.o. 5 100 t.o. t.o. 5350 t.o. t.o t.o 90938 t.o 10507 t.o. 6945 t.o. 9 1705 29 33 107 510 294 11 126 20 17258 2422.1

S22 2 4 13 1249 t.o. t.o. 1 37 153679 t.o. 1710 t.o. t.o. t.o t.o t.o. t.o. 9 43 1297 58732 73168 22303.4

S23 4 3 11 16 521 4187 1 102 t.o. t.o. 30 2670 378 t.o 40997 21 130 12 5 240 569 276 2640.7

S24 2 5 8 1528 t.o. t.o. 1 79 t.o. t.o. 40 t.o. 9168 t.o t.o t.o. t.o. 14 13 584 15437 10 2068.4

ABC 230 265 315 575 2770 10775 290 400 340 868 287 335 465 1132 1961 312 310 262 265 279 337 360 1051.5

Table 6. Backtracks required for strategies S1 to S8 Problem Q-8 Q-10 Q-12 Q-20 Q-50 Q-75 MS-3 MS-4 MS-5 MS-6 S-2 S-5 S-7 K-5 K-6 QG-5 QG-6 QG-7 LF 2-12 LF 2-16 LF 2-20 LF 2-23 x

S1 10 6 15 10026 >121277 >118127 0 12 910 >177021 18 4229 10786 767 37695 >145662 30 349 16 39 77 26 3611.8

Strategies S2 S3 S4 S5 S6 S7 S8 11 10 10 3 9 10 3 12 4 6 6 6 4 5 11 16 15 17 16 16 12 2539 11 10026 862 15808 11 63 >160845 177 >121277 >173869 >143472 177 >117616 >152812 818 >118127 >186617 >137450 818 >133184 4 0 0 0 4 0 0 1191 3 10 22 992 3 13 >191240 185 5231 >153410 >204361 193 854 >247013 >173930 >187630 >178895 >250986 >202927 >190877 10439 4 18 155 764 4 2 >89125 871 4229 >112170 >83735 871 308 >59828 773 10786 >81994 >80786 773 10379 >179097 767 >97176 >228316 >178970 >73253 >190116 >177103 37695 35059 >239427 >176668 14988 >194116 >103603 8343 >145656 >92253 >114550 8343 >93315 >176613 0 30 >83087 965 0 >96367 3475 1 349 4417 4417 1 4 223 1 16 29 22 1 12 24310 97 39 599 210 97 0 >158157 172 77 26314 1 172 64 >157621 64 26 29805 3 64 7 4221.5 2381.6 3878.1 5185.8 1786 1327.3 781.8


Table 7. Backtracks required for strategies S9 to S16 Problem Q-8 Q-10 Q-12 Q-20 Q-50 Q-75 MS-3 MS-4 MS-5 MS-6 S-2 S-5 S-7 K-5 K-6 QG-5 QG-6 QG-7 LF 2-12 LF 2-16 LF 2-20 LF 2-23 x

S9 10 6 15 10026 >121277 >118127 0 12 910 >177174 18 4229 10786 767 37695 >145835 30 349 16 39 77 26 3611.8

Strategies S10 S11 S12 S13 S14 S15 S16 11 10 10 3 9 10 3 12 12 6 6 6 4 5 11 16 15 17 16 16 12 2539 11 10026 862 15808 11 63 >160845 177 >121277 >173869 >143472 177 >117616 >152812 818 >118127 >186617 >137450 818 >133184 4 0 0 0 4 0 0 1191 3 10 22 992 3 13 >191240 185 5231 >153410 >204361 193 854 >247013 >174068 >187777 >179026 >251193 >203089 >191042 10439 4 18 155 764 4 2 >89125 871 4229 >112174 >83735 871 308 >59828 773 10786 >81994 >80786 773 10379 >179126 767 >97176 >228316 >178970 >73253 >190116 >177129 37695 35059 >239427 >176668 14998 >194116 >103663 8343 >145830 >92355 >114550 8343 >93315 >176613 0 30 >83087 965 0 >93820 3475 1 349 583 4417 1 4 223 1 16 29 22 1 12 24310 97 39 599 210 97 0 >158157 172 77 26314 1 172 64 >157621 64 26 29805 3 64 7 4221.5 2382 3878.1 4866.3 1786 1327.8 781.8

Table 8. Backtracks required for strategies S17 to S24 and ABC Problem Q-8 Q-10 Q-12 Q-20 Q-50 Q-75 MS-3 MS-4 MS-5 MS-6 S-2 S-5 S-7 K-5 K-6 QG-5 QG-6 QG-7 LF 2-12 LF 2-16 LF 2-20 LF 2-23 x

S17 10 6 15 10026 >121277 >118127 1 51 >204089 >237428 2 >104148 1865 767 37695 7743 2009 3 16 39 77 26 3550.1

Strategies S18 S19 S20 S21 S22 S23 S24 ABC 11 10 10 9 3 10 2 1 12 4 6 6 6 4 37 1 11 16 15 16 17 16 13 3 2539 11 10026 15808 862 11 1129 11 >160845 177 >121277 >173869 >143472 177 >117616 0 >152812 818 >118127 >186617 >137450 818 >133184 797 0 1 1 1 0 1 1 0 42 3 29 95 46 96 47 2 >176414 >197512 74063 >201698 74711 >190692 >183580 16 >176535 >231600 >190822 >239305 >204425 >204119 >214287 303 6541 9 2 89 887 9 12 1 >80203 963 >104148 >78774 >101058 963 >92557 17 >80295 187 1865 >93675 >91514 187 2626 105 >179126 767 >97178 >178970 >228316 >73253 >190116 1 >177129 37695 35059 >176668 >239427 14998 >160789 90 >130635 0 7763 >96083 >94426 0 >95406 1 >75475 89 2009 >108987 >124523 89 >89888 0 845 1 3 773 1 1 1 1 223 1 16 22 29 1 6 1 24592 98 39 210 599 98 239 0 >158028 172 77 1 26314 172 4521 1 >157649 64 26 3 29805 64 0 0 3481.6 2054.3 7706.5 1419.5 10252.4 932.4 664.2 61.5


Table 9. Runtime in ms and backtracks required by the optimizers ABC, PSO, and GA

| Problem | ABC runtime | ABC backtracks | PSO runtime | PSO backtracks | GA runtime | GA backtracks |
|---------|-------------|----------------|-------------|----------------|------------|---------------|
| Q-8 | 230 | 1 | 4982 | 3 | 645 | 1 |
| Q-10 | 265 | 1 | 7735 | 1 | 735 | 4 |
| Q-12 | 315 | 3 | 24369 | 1 | 875 | 40 |
| Q-20 | 575 | 11 | 52827 | 11 | 7520 | 3879 |
| Q-50 | 2770 | 0 | 1480195 | 0 | 6530 | 15 |
| Q-75 | 10775 | 797 | t.o. | 818 | 16069 | 17 |
| MS-3 | 290 | 0 | 2745 | 0 | 735 | 0 |
| MS-4 | 400 | 2 | 15986 | 0 | 1162 | 42 |
| MS-5 | 340 | 16 | 565155 | 14 | 1087 | 198 |
| MS-6 | 868 | 303 | t.o. | >47209 | t.o. | >176518 |
| S-2 | 287 | 1 | 10967 | 2 | 15638 | 6541 |
| S-5 | 335 | 17 | 2679975 | 13 | 8202 | 4229 |
| S-7 | 465 | 105 | 967014 | 256 | 25748 | 10786 |
| K-5 | 1132 | 1 | 4563751 | 106 | 21089 | 50571 |
| K-6 | 1961 | 90 | t.o. | 12952 | 170325 | 21651 |
| QG-5 | 312 | 1 | 59158 | 0 | 11862 | 7763 |
| QG-6 | 310 | 0 | 44565 | 0 | 947 | 0 |
| QG-7 | 262 | 1 | 28612 | 0 | 795 | 4 |
| LF 2-12 | 265 | 1 | 10430 | 1 | 1212 | 223 |
| LF 2-16 | 279 | 0 | 20548 | 0 | 1502 | 97 |
| LF 2-20 | 337 | 1 | 28466 | 1 | 1409 | 64 |
| LF 2-23 | 360 | 0 | 30468 | 3 | 1287 | 0 |
| x̄ | 1051.5 | 61.5 | 557786.8 | 675.4 | 14065.5 | 5053.6 |

Fig. 1. Comparing runtimes of adaptive approaches


Fig. 2. Comparing backtracks of adaptive approaches

6 Conclusions

Autonomous search is an interesting approach for providing additional capabilities to the solver, improving the search process by means of performance indicators and self-tuning. In this paper we have focused on the automated self-tuning of constraint programming solvers. To this end, we have presented an artificial bee colony algorithm able to find good CF parameter settings when solving constraint satisfaction problems. The experimental results demonstrate the efficiency of the proposed approach, validating its ability to adapt to dynamic environments and, as a consequence, improving the performance of the solver during the resolution process. As future work, we plan to test new modern metaheuristics for supporting CFs for AS. The incorporation of additional strategies into the portfolio would also be an interesting research direction.

Acknowledgements. Cristian Galleguillos is supported by Postgraduate Grant Pontificia Universidad Católica de Valparaíso 2015, Ricardo Soto is supported by Grant CONICYT/FONDECYT/INICIACION/11130459, Broderick Crawford is supported by Grant CONICYT/FONDECYT/1140897, and Fernando Paredes is supported by Grant CONICYT/FONDECYT/1130455.

References

1. Akay, B., Karaboga, D.: A modified artificial bee colony algorithm for real-parameter optimization. Information Sciences 192, 120–142 (2012)
2. Apt, K.R.: Principles of Constraint Programming. Cambridge University Press (2003)


3. Barták, R., Rudová, H.: Limited assignments: a new cutoff strategy for incomplete depth-first search. In: Proceedings of the 20th ACM Symposium on Applied Computing (SAC), pp. 388–392 (2005)
4. Castro, C., Monfroy, E., Figueroa, C., Meneses, R.: An approach for dynamic split strategies in constraint solving. In: Gelbukh, A., de Albornoz, A., Terashima-Marín, H. (eds.) MICAI 2005. LNCS (LNAI), vol. 3789, pp. 162–174. Springer, Heidelberg (2005)
5. Chandrasekaran, K., Hemamalini, S., Simon, S.P., Padhy, N.P.: Thermal unit commitment using binary/real coded artificial bee colony algorithm. Electric Power Systems Research 84, 109–119 (2012)
6. Crawford, B., Soto, R., Castro, C., Monfroy, E.: A hyperheuristic approach for dynamic enumeration strategy selection in constraint satisfaction. In: Ferrández, J.M., Álvarez Sánchez, J.R., de la Paz, F., Toledo, F.J. (eds.) IWINAC 2011, Part II. LNCS, vol. 6687, pp. 295–304. Springer, Heidelberg (2011)
7. Crawford, B., Soto, R., Castro, C., Monfroy, E., Paredes, F.: An extensible autonomous search framework for constraint programming. Int. J. Phys. Sci. 6(14), 3369–3376 (2011)
8. Crawford, B., Soto, R., Monfroy, E., Palma, W., Castro, C., Paredes, F.: Parameter tuning of a choice-function based hyperheuristic using particle swarm optimization. Expert Systems with Applications 40(5), 1690–1695 (2013)
9. Crawford, B., Soto, R., Monfroy, E., Palma, W., Castro, C., Paredes, F.: Parameter tuning of a choice-function based hyperheuristic using particle swarm optimization. Expert Syst. Appl. 40(5), 1690–1695 (2013)
10. Crawford, B., Soto, R., Montecinos, M., Castro, C., Monfroy, E.: A framework for autonomous search in the ECLiPSe solver. In: Mehrotra, K.G., Mohan, C.K., Oh, J.C., Varshney, P.K., Ali, M. (eds.) IEA/AIE 2011, Part I. LNCS, vol. 6703, pp. 79–84. Springer, Heidelberg (2011)
11. Hamadi, Y., Monfroy, E., Saubion, F.: Autonomous Search. Springer (2012)
12. Karaboga, D., Basturk, B.: Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems. In: Proceedings of the 12th International Fuzzy Systems Association World Congress on Foundations of Fuzzy Logic and Soft Computing, pp. 789–798 (2007)
13. Karaboga, D., Basturk, B.: On the performance of artificial bee colony (ABC) algorithm. Soft Computing 8, 687–697 (2008)
14. Karaboga, D., Ozturk, C., Karaboga, N., Gorkemli, B.: Artificial bee colony programming for symbolic regression. Information Sciences 209, 1–15 (2012)
15. Karaboga, D., Ozturk, C., Karaboga, N., Gorkemli, B.: A comprehensive survey: artificial bee colony (ABC) algorithm and applications. Artificial Intelligence Review 42, 21–57 (2014)
16. Maturana, J., Saubion, F.: A compass to guide genetic algorithms. In: Rudolph, G., Jansen, T., Lucas, S., Poloni, C., Beume, N. (eds.) PPSN 2008. LNCS, vol. 5199, pp. 256–265. Springer, Heidelberg (2008)
17. Monfroy, E., Castro, C., Crawford, B., Soto, R., Paredes, F., Figueroa, C.: A reactive and hybrid constraint solver. Journal of Experimental and Theoretical Artificial Intelligence 25(1), 1–22 (2013)


18. Crawford, B., Soto, R., Castro, C., Monfroy, E.: A hyperheuristic approach for dynamic enumeration strategy selection in constraint satisfaction. In: Ferrández, J.M., Álvarez Sánchez, J.R., de la Paz, F., Toledo, F.J. (eds.) IWINAC 2011, Part II. LNCS, vol. 6687, pp. 295–304. Springer, Heidelberg (2011)
19. Soto, R., Crawford, B., Monfroy, E., Bustos, V.: Using autonomous search for generating good enumeration strategy blends in constraint programming. In: Murgante, B., Gervasi, O., Misra, S., Nedjah, N., Rocha, A.M.A.C., Taniar, D., Apduhan, B.O. (eds.) ICCSA 2012, Part III. LNCS, vol. 7335, pp. 607–617. Springer, Heidelberg (2012)
20. Yan, X., Zhu, Y., Zou, W., Wang, L.: A new approach for data clustering using hybrid artificial bee colony algorithm. Neurocomputing 97, 241–250 (2012)
