Optim Lett (2010) 4:173–183 DOI 10.1007/s11590-009-0156-3 ORIGINAL PAPER
Investigation of selection strategies in branch and bound algorithm with simplicial partitions and combination of Lipschitz bounds

Remigijus Paulavičius · Julius Žilinskas · Andreas Grothey
Received: 5 October 2009 / Accepted: 19 October 2009 / Published online: 8 November 2009 © Springer-Verlag 2009
Abstract  The speed and memory requirements of branch and bound algorithms depend on the selection strategy, i.e. which candidate node is processed next. The goal of this paper is to experimentally investigate the influence of this choice on the performance of sequential and parallel branch and bound algorithms. The experiments have been performed by solving a number of multidimensional test problems for global optimization. A branch and bound algorithm using simplicial partitions and a combination of Lipschitz bounds has been investigated. Similar results may be expected for other branch and bound algorithms.

Keywords  Global optimization · Branch and bound · Selection strategies · Lipschitz optimization · Parallel branch and bound
1 Introduction

Many problems in engineering, physics, economics and other fields may be formulated as optimization problems in which the maximum value of an objective function must be found. Mathematically the problem is formulated as
R. Paulavičius · J. Žilinskas (B)
Institute of Mathematics and Informatics, Akademijos 4, 08663 Vilnius, Lithuania
e-mail: [email protected]; [email protected]

R. Paulavičius
e-mail: [email protected]

A. Grothey
School of Mathematics, University of Edinburgh, Edinburgh EH9 3JZ, UK
e-mail: [email protected]
f* = max_{x ∈ D} f(x),
where the objective function f(x), f : R^n → R, is a nonlinear function of continuous variables, D ⊂ R^n is the feasible region, and n is the number of variables. Besides the global optimum f*, one or all global optimizers x*: f(x*) = f* must be found.

Branch and bound is a technique for the implementation of covering global optimization methods as well as combinatorial optimization algorithms. An iteration of a classical branch and bound algorithm processes a node in the search tree representing a not yet explored subspace of the solution space. The iteration has three main components: selection of the node to process, branching of the search tree and bound calculation. Branching is implemented by division of the subspaces. The algorithm detects subspaces which cannot contain a global optimizer by evaluating bounds for the optimum over the considered subspaces. Subspaces which cannot contain a global maximum are discarded from further search, pruning the branches of the search tree. The rules of selection, branching and bounding differ from algorithm to algorithm. The main selection strategies are:

– Best first: select a candidate with the maximal upper bound. The candidate list can be implemented using a heap or priority queue.
– Depth first: select the youngest candidate. A node with the largest level in the search tree is chosen for exploration. A LIFO structure is used for the candidate list, which can be implemented using a stack. In some cases it is possible to implement this strategy without storing candidates, as shown in [19].
– Breadth first: select the oldest candidate. All nodes at one level of the search tree are processed before any node at the next level is selected. A FIFO structure is used for the candidate list, which can be implemented using a queue.
– Improved selection: based on heuristic [3,10], probabilistic [5] or statistical [18,20] criteria. The candidate list can be implemented using a heap or priority queue.
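The three basic strategies above differ only in the data structure backing the candidate list. A minimal one-dimensional Lipschitz maximization sketch illustrates this (an illustration only, not the authors' implementation; the interval bound, tolerance and function are hypothetical):

```python
import heapq
from collections import deque

def branch_and_bound(f, lo, hi, L, tol=1e-4, strategy="best"):
    """One-dimensional Lipschitz maximization skeleton.

    A node is an interval [a, b]; its upper bound is f(m) + L*(b - a)/2
    with m the midpoint (valid because |f(x) - f(m)| <= L|x - m|).
    `strategy` selects the candidate-list structure: heap (best first),
    stack (depth first) or queue (breadth first).
    """
    def ub(a, b):
        return f((a + b) / 2) + L * (b - a) / 2

    if strategy == "best":
        cand = [(-ub(lo, hi), lo, hi)]            # max-heap via negated keys
        push = lambda a, b: heapq.heappush(cand, (-ub(a, b), a, b))
        pop = lambda: heapq.heappop(cand)[1:]
    elif strategy == "depth":
        cand = [(lo, hi)]                         # LIFO stack: youngest first
        push = lambda a, b: cand.append((a, b))
        pop = cand.pop
    else:                                         # breadth first
        cand = deque([(lo, hi)])                  # FIFO queue: oldest first
        push = lambda a, b: cand.append((a, b))
        pop = cand.popleft

    best = f(lo)
    while cand:
        a, b = pop()
        if ub(a, b) <= best:
            continue                              # prune: cannot improve
        m = (a + b) / 2
        best = max(best, f(m))
        if b - a > tol:                           # branch: bisect the interval
            push(a, m)
            push(m, b)
    return best
```

All three disciplines return the same optimum here; they differ only in the order the tree is explored and in how large the candidate list grows, which is exactly what the experiments below measure.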
Node selection rules influence the efficiency of the branch and bound algorithm and the number of nodes kept in the candidate list. The goal of this paper is to experimentally investigate the influence of selection strategies on the performance of sequential and parallel algorithms. Although the experiments have been performed on a particular algorithm, described in the next section, similar features may be expected in other branch and bound algorithms as well.

2 Branch and bound with simplicial partitions and improved combination of different bounds for Lipschitz optimization

Lipschitz optimization is one of the most deeply investigated subjects of global optimization. It is based on the assumption that the slope of the objective function is bounded. The advantages and disadvantages of Lipschitz global optimization methods are discussed in [7,8]. A function f : D → R, D ⊂ R^n, is said to be Lipschitz if it satisfies the condition

| f(x) − f(y) | ≤ L ‖x − y‖, ∀x, y ∈ D,   (1)
where L > 0 is a constant called the Lipschitz constant, D is compact and ‖·‖ denotes a norm. The Euclidean norm is used most often in Lipschitz optimization, but other norms can also be considered.

Although hyper-rectangular partitions are usually used in global optimization, other types of partitions may be more suitable for some specific problems. In this paper we use simplicial branch and bound with a combination of Lipschitz bounds. The advantages and disadvantages of simplicial partitions are discussed in [21]. Since a simplex is the polyhedron in n-dimensional space with the minimal number of vertices, simplicial partitions are preferable when the values of an objective function at the vertices of partitions are used to compute bounds; otherwise values at only some of the vertices of hyper-rectangular partitions may be used [17]. However, for simplicial branch and bound the feasible region must initially be covered by simplices. The most preferable initial covering is a face-to-face vertex triangulation: a partitioning of the feasible region into finitely many n-dimensional simplices whose vertices are also vertices of the feasible region. We use a general (any-dimensional) algorithm for combinatorial vertex triangulation of a hyper-rectangle [21] into n! simplices. Simplices are subdivided into two by a hyper-plane passing through the midpoint of the longest edge and the vertices which do not belong to the longest edge.

In Lipschitz optimization the upper bound for the optimum is evaluated by exploiting the Lipschitz condition f(x) ≤ f(y) + L‖x − y‖. It is known that

| f(x) − f(y) | ≤ L_p ‖x − y‖_q,   (2)

where L_p = sup{ ‖∇f(x)‖_p : x ∈ D } is the Lipschitz constant, ∇f(x) = (∂f/∂x_1, …, ∂f/∂x_n) is the gradient of the function f(x), and 1/p + 1/q = 1, 1 ≤ p, q ≤ ∞. In the present paper the Lipschitz constants vary substantially, from 1 to over 2,639,040. These constants have been evaluated with a very fine grid search algorithm on 1,000n points for n-dimensional test problems and are thus very close to the smallest possible ones. Optimal values are given in [14].

If the bounds over a simplex are evaluated using the function values at the vertices, the lower bound for the optimum is the largest value of the function at a vertex:

LB(I) = max_{x_v ∈ V(I)} f(x_v),

where V(I) is the set of vertices of the simplex I. A combination of the upper bounds based on the extreme (infinity and first) and Euclidean norms over a multidimensional simplex was proposed and investigated in [14]:

UB_C(I) = min_{x_v ∈ V(I)} { f(x_v) + K(x_v) },   (3)

where

K(x_v) = min{ L_1 max_{x ∈ I} ‖x − x_v‖_∞, L_2 max_{x ∈ I} ‖x − x_v‖_2, L_∞ max_{x ∈ I} ‖x − x_v‖_1 }.
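The vertex triangulation into n! simplices and the longest-edge bisection described above can be sketched as follows (a simplified illustration for the unit hypercube, not the implementation of [21]):

```python
from itertools import permutations

def triangulate_unit_cube(n):
    """Combinatorial vertex triangulation: split [0,1]^n into n! simplices.

    Each permutation p of the coordinates defines one simplex: starting
    from the origin, set coordinate p[0] to 1, then p[1], and so on,
    yielding n+1 vertices that are all vertices of the hypercube.
    """
    simplices = []
    for perm in permutations(range(n)):
        verts = [[0.0] * n]
        for i in perm:
            v = list(verts[-1])
            v[i] = 1.0
            verts.append(v)
        simplices.append(verts)
    return simplices

def bisect_longest_edge(simplex):
    """Subdivide a simplex into two through the midpoint of its longest edge.

    The bisecting hyper-plane passes through that midpoint and all the
    vertices that do not belong to the longest edge.
    """
    m = len(simplex)
    # pair of vertices with the largest squared Euclidean distance
    _, (i, j) = max(
        (sum((a - b) ** 2 for a, b in zip(simplex[p], simplex[q])), (p, q))
        for p in range(m) for q in range(p + 1, m)
    )
    mid = [(a + b) / 2 for a, b in zip(simplex[i], simplex[j])]
    left = [mid if k == i else v for k, v in enumerate(simplex)]
    right = [mid if k == j else v for k, v in enumerate(simplex)]
    return left, right
```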
An improved upper bound with the first norm was proposed in [15]:

UB_F(I) = max_{x ∈ I} min_{x_v ∈ V(I)} { f(x_v) + L_∞ ‖x − x_v‖_1 }.   (4)
However, the first norm does not always give the best bounds [15]; in some cases the combination of bounds (3) may give better results. Therefore the improved combination [16] is used, where the improved bound (4) is combined with the combination of bounds based on the infinity and Euclidean norms (the simpler bound based on the first norm is not used in the combination, since the improved bound is based on it):

UB(I) = min{ UB_C(I), UB_F(I) }
      = min{ min_{x_v ∈ V(I)} { f(x_v) + K(x_v) }, max_{x ∈ I} min_{x_v ∈ V(I)} { f(x_v) + L_∞ ‖x − x_v‖_1 } },   (5)

where

K(x_v) = min{ L_1 max_{x ∈ I} ‖x − x_v‖_∞, L_2 max_{x ∈ I} ‖x − x_v‖_2 }.
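Since norms are convex, the inner maxima max_{x ∈ I} ‖x − x_v‖ in K(x_v) are attained at a vertex of the simplex, so the combination (3) can be evaluated from vertex data alone. A sketch based on that observation (hypothetical Lipschitz constants; the improved bound UB_F, which requires solving a max–min problem over the simplex, is omitted):

```python
def norms(d):
    """Return (||d||_inf, ||d||_2, ||d||_1) for a difference vector d."""
    return (max(abs(c) for c in d),
            sum(c * c for c in d) ** 0.5,
            sum(abs(c) for c in d))

def upper_bound_combination(f_vals, verts, L1, L2, Linf):
    """UB_C(I) = min over vertices v of f(v) + K(v), where
    K(v) = min(L1*max||x-v||_inf, L2*max||x-v||_2, Linf*max||x-v||_1);
    each inner max over the simplex is attained at a vertex (convexity).
    """
    ub = float("inf")
    for fv, v in zip(f_vals, verts):
        dmax = [0.0, 0.0, 0.0]          # max inf-, 2- and 1-norm distances
        for w in verts:
            d = [a - b for a, b in zip(w, v)]
            dmax = [max(m, c) for m, c in zip(dmax, norms(d))]
        K = min(L1 * dmax[0], L2 * dmax[1], Linf * dmax[2])
        ub = min(ub, fv + K)
    return ub

def lower_bound(f_vals):
    """LB(I): the best function value found at the simplex vertices."""
    return max(f_vals)
```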
It is also promising to develop improved bounds for other norms.

Apart from the standard best first, depth first and breadth first strategies, a statistical selection strategy has been implemented. In this strategy the candidate with the maximal criterion value [20]

ũ(I) = − ( f* − (1/(n+1)) Σ_{x_v ∈ V(I)} f(x_v) ) / ( ( max_{x_v ∈ V(I)} f(x_v) − (1/(n+1)) Σ_{x_v ∈ V(I)} f(x_v) )² + min_{x_v ∈ V(I)} ‖ x_v − (1/(n+1)) Σ_{x_v ∈ V(I)} x_v ‖² )^{1/2}
is chosen, where f* is the global maximum or an upper bound for it. In the developed algorithms a heap structure is used for the candidate list when the best first and statistical selection strategies are used.

3 Experimental investigation of selection strategies

In this section the results of computational experiments are presented and discussed. Various test problems (n ≥ 2) for global optimization from [7,9,11] have been used
in our experiments. Test problems with n = 2 and n = 3 are numbered according to [7] and [11]. For n ≥ 4 the problem names from [9,11] are used. Computational experiments have been performed on the computer cluster Ness at the Edinburgh Parallel Computing Centre (EPCC). It consists of two SMP boxes that form the back-end: 2.6 GHz AMD Opteron (AMD64e) processors with 2 GB of memory (32 processors divided into two 16-processor boxes), and a two-processor front-end. The cluster runs the Linux operating system (Scientific Linux) and Sun Grid Engine.
3.1 Results of sequential branch and bound algorithm

In this section the sequential branch and bound algorithm for global optimization with simplicial partitions and a combination of Lipschitz bounds has been investigated, and the results of the different selection strategies have been compared. The numbers of function evaluations (f.eval) and execution times (t(s)) using different selection strategies are shown in Table 1. The average numbers of function evaluations and average execution times for test problems of different dimensionality are shown in Table 2. For n = 2-dimensional test problems the depth first selection strategy is the least efficient. For test problems of higher dimensionality (n ≥ 3) the average numbers of function evaluations are very similar for all selection strategies and the differences are insignificant. For test problems of all dimensionalities the smallest execution time is achieved when the depth first and breadth first selection strategies are used, despite the fact that sometimes the number of function evaluations is higher. A possible reason is that the time required for insertion and deletion of elements to/from a non-prioritized structure does not depend on the number of elements in the list. Best first and statistical selection strategies require a prioritized list of candidates, and even with a heap structure insertion is time consuming when the number of elements in the list is large.

The progress of the search towards the global solution is estimated using the ratio

r(f*) = f.eval(f*) / f.eval,   (6)

where f.eval(f*) is the number of function evaluations after which the best global solution f* is found and f.eval is the number of function evaluations during the whole optimization. This value is between zero and one and shows how fast the global solution f* is found during the optimization process. The ratios r(f*) for all test problems are shown in Fig. 1. The average ratios for test problems of different dimensionalities are shown in Table 3. For almost all test problems the smallest ratio is achieved when the statistical selection strategy is used, and the average ratios for this strategy are more than two times smaller than for the other selection strategies. For test problems of dimensionalities n = 2 and n = 3 the ratios are very similar for the best first and depth first strategies, but for n ≥ 4 better results (a smaller ratio) are achieved when the depth first selection strategy is used. For almost all test problems the worst (largest) ratios are obtained with the breadth first selection strategy.
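Computing the ratio (6) only requires recording the evaluation count at which the incumbent value last improved; a minimal bookkeeping wrapper (hypothetical, not the authors' code):

```python
class EvalCounter:
    """Wrap an objective; track total evaluations and when the best was found."""

    def __init__(self, f):
        self.f, self.total = f, 0
        self.best, self.best_at = float("-inf"), 0

    def __call__(self, x):
        self.total += 1
        y = self.f(x)
        if y > self.best:
            self.best, self.best_at = y, self.total   # f.eval(f*) so far
        return y

    def ratio(self):
        """r(f*) = f.eval(f*) / f.eval, in (0, 1]."""
        return self.best_at / self.total
```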
Table 1 Numbers of function evaluations and execution time for different selection strategies

| Test problem | Best first: f.eval | t(s) | Statistical: f.eval | t(s) | Depth first: f.eval | t(s) | Breadth first: f.eval | t(s) |
|---|---|---|---|---|---|---|---|---|
| 1. [7] | 967 | 0.01 | 952 | 0.01 | 6,864 | 0.04 | 978 | 0.01 |
| 2. [7] | 188 | 0.00 | 187 | 0.00 | 191 | 0.00 | 188 | 0.00 |
| 3. [7] | 6,653 | 0.04 | 6,654 | 0.04 | 7,062 | 0.04 | 6,657 | 0.04 |
| 4. [7] | 8 | 0.00 | 8 | 0.00 | 8 | 0.00 | 8 | 0.00 |
| 5. [7] | 37 | 0.00 | 37 | 0.00 | 37 | 0.00 | 51 | 0.00 |
| 6. [7] | 1,723 | 0.01 | 1,723 | 0.01 | 1,759 | 0.01 | 1,723 | 0.01 |
| 7. [7] | 19,171 | 0.10 | 19,186 | 0.10 | 19,528 | 0.10 | 19,171 | 0.10 |
| 8. [7] | 380 | 0.00 | 379 | 0.00 | 379 | 0.00 | 381 | 0.00 |
| 9. [7] | 38,860 | 0.21 | 38,860 | 0.20 | 38,917 | 0.18 | 38,860 | 0.19 |
| 10. [7] | 1,806 | 0.01 | 1,804 | 0.01 | 2,033 | 0.01 | 1,811 | 0.01 |
| 11. [7] | 4,789 | 0.02 | 5,226 | 0.03 | 6,200 | 0.03 | 4,789 | 0.02 |
| 12. [7] | 21,153 | 0.11 | 21,153 | 0.11 | 21,228 | 0.11 | 21,155 | 0.10 |
| 13. [7] | 22,094 | 0.12 | 22,137 | 0.12 | 22,104 | 0.11 | 22,107 | 0.12 |
| 20. [7] | 3,423,480 | 54.6 | 3,423,480 | 43.8 | 3,423,480 | 33.2 | 3,423,480 | 35.4 |
| 21. [7] | 3,108 | 0.03 | 3,014 | 0.03 | 11,010 | 0.12 | 3,396 | 0.04 |
| 23. [7] | 3,145,728 | 43.8 | 3,145,728 | 42.5 | 3,145,728 | 33.4 | 3,145,728 | 37.5 |
| 24. [7] | 1,362,826 | 19.5 | 1,364,033 | 18.9 | 1,385,972 | 14.2 | 1,365,023 | 15.8 |
| 25. [7] | 16,834 | 0.19 | 16,672 | 0.19 | 20,499 | 0.21 | 17,294 | 0.20 |
| 26. [7] | 20,487 | 0.22 | 20,487 | 0.23 | 20,557 | 0.21 | 20,487 | 0.23 |
| Rosenbrock [11] | 547,041 | 7.61 | 547,029 | 6.44 | 547,174 | 5.57 | 547,158 | 6.30 |
| Levy 15 [9] | 3,845,766 | 139 | 3,845,742 | 141 | 3,846,173 | 118 | 3,846,025 | 122 |
| Rosenbrock [11] | 137,565 | 4.16 | 137,565 | 4.08 | 137,565 | 3.66 | 137,565 | 3.74 |
| Shekel 5 [9] | 535,383 | 19.1 | 534,805 | 19.2 | 535,239 | 16.3 | 534,099 | 16.7 |
| Shekel 7 [9] | 953,122 | 35.0 | 953,122 | 35.1 | 953,122 | 29.0 | 953,122 | 29.7 |
| Shekel 10 [9] | 541,963 | 19.6 | 542,131 | 19.9 | 549,400 | 16.8 | 538,864 | 16.8 |
| Schwefel 1.2 [9] | 736,640 | 26.4 | 737,385 | 25.0 | 737,592 | 22.3 | 737,821 | 22.7 |
| Powell [9] | 35,784 | 1.23 | 35,784 | 1.21 | 35,794 | 1.09 | 35,792 | 1.11 |
| Levy 9 [9] | 251,023 | 8.99 | 247,676 | 8.04 | 248,937 | 7.69 | 255,520 | 8.03 |
| Levy 16 [9] | 551,721 | 69.8 | 551,681 | 68.4 | 551,727 | 66.4 | 551,721 | 67.6 |
| Rosenbrock [11] | 5,084,996 | 663 | 5,084,996 | 636 | 5,084,996 | 635 | 5,084,996 | 650 |
| Levy 10 [9] | 4,810,354 | 643 | 4,729,766 | 595 | 4,741,881 | 595 | 4,915,111 | 628 |
| Levy 10 [9] | 294,910 | 207 | 294,910 | 206 | 294,910 | 206 | 294,910 | 205 |
| Rosenbrock [11] | 8,956,408 | 6,379 | 8,956,408 | 6,335 | 8,956,408 | 6,386 | 8,956,408 | 6,228 |
The total numbers of simplices (TNS) and the maximal sizes of the candidate list (MCL) in the search tree for different selection strategies are shown in Table 4. The average total numbers of simplices and the average maximal sizes of the candidate list are shown in Table 5. For n = 2-dimensional test problems the TNS is largest when the depth first selection strategy is used. For higher dimensionality (n ≥ 3) the
Table 2 Average numbers of function evaluations and execution time for different selection strategies

| n | Best first: f.eval | t(s) | Statistical: f.eval | t(s) | Depth first: f.eval | t(s) | Breadth first: f.eval | t(s) |
|---|---|---|---|---|---|---|---|---|
| 2 | 9,064 | 0.05 | 9,100 | 0.05 | 9,716 | 0.05 | 9,068 | 0.05 |
| 3 | 1,217,072 | 17.99 | 1,217,206 | 16.00 | 1,222,060 | 12.41 | 1,217,509 | 13.64 |
| 4 | 879,656 | 31.65 | 879,276 | 31.68 | 880,478 | 26.86 | 879,851 | 27.53 |
| 5–6 | 3,939,678 | 1,592.19 | 3,923,552 | 1,568.13 | 3,925,984 | 1,577.68 | 3,960,629 | 1,555.55 |
Fig. 1 The ratios r ( f ∗ ) for the algorithms with different selection strategies
Table 3 Average ratios r(f*) for the algorithms with different selection strategies

| n | Best first | Statistical | Depth first | Breadth first |
|---|---|---|---|---|
| 2 | 0.47 | 0.20 | 0.47 | 0.63 |
| 3 | 0.27 | 0.09 | 0.26 | 0.52 |
| 4 | 0.18 | 0.05 | 0.14 | 0.32 |
| 5–6 | 0.19 | 0.00 | 0.03 | 0.21 |
values of TNS are very similar for all selection strategies. However, the maximal size of the candidate list varies significantly depending on the selection strategy. The best results (the smallest MCL) are achieved when the depth first selection strategy is used: its MCL is up to ∼7,000 times smaller than with the other selection strategies. The MCL is largest when the breadth first selection strategy is used. When the statistical selection strategy is used the MCL is up to ∼5 times smaller than when the best first strategy is used. This explains why the execution time is smaller with statistical selection: the time required for insertion and deletion of candidates to/from the heap structure depends on the number of elements in the heap.
Table 4 The total numbers of simplices and the maximal size of candidate list

| n | Best first: TNS | MCL | Statistical: TNS | MCL | Depth first: TNS | MCL | Breadth first: TNS | MCL |
|---|---|---|---|---|---|---|---|---|
| 2 | 1,932 | 270 | 1,902 | 213 | 1,372 | 14 | 1,954 | 301 |
| 2 | 374 | 52 | 372 | 29 | 380 | 13 | 374 | 31 |
| 2 | 13,304 | 2,176 | 13,306 | 500 | 14,122 | 14 | 13,312 | 1,748 |
| 2 | 14 | 3 | 14 | 4 | 14 | 3 | 14 | 4 |
| 2 | 73 | 9 | 73 | 6 | 73 | 5 | 101 | 16 |
| 2 | 3,444 | 468 | 3,444 | 307 | 3,516 | 10 | 3,444 | 445 |
| 2 | 38,340 | 5,515 | 38,370 | 648 | 39,054 | 14 | 38,340 | 5,225 |
| 2 | 758 | 103 | 756 | 40 | 756 | 40 | 760 | 73 |
| 2 | 77,718 | 13,338 | 77,718 | 1,238 | 77,832 | 15 | 77,718 | 15,714 |
| 2 | 3,610 | 527 | 3,606 | 137 | 4,064 | 13 | 3,620 | 358 |
| 2 | 9,576 | 1,543 | 10,450 | 301 | 12,398 | 15 | 9,576 | 1,362 |
| 2 | 42,304 | 5,327 | 42,304 | 1,779 | 42,454 | 14 | 42,308 | 7,290 |
| 2 | 44,186 | 5,881 | 44,272 | 3,122 | 44,206 | 16 | 44,212 | 8,192 |
| 3 | 6,846,954 | 937,395 | 6,846,954 | 126,798 | 6,846,954 | 25 | 6,846,954 | 893,068 |
| 3 | 6,210 | 411 | 6,022 | 104 | 22,014 | 20 | 6,786 | 437 |
| 3 | 6,291,450 | 461,477 | 6,291,450 | 368,469 | 3,145,728 | 24 | 6,291,450 | 1,572,864 |
| 3 | 2,725,646 | 413,774 | 2,728,060 | 191,628 | 2,771,938 | 25 | 2,730,040 | 331,280 |
| 3 | 33,662 | 5,244 | 33,338 | 4,250 | 40,992 | 23 | 34,582 | 4,432 |
| 3 | 40,968 | 6,427 | 40,968 | 5,125 | 41,108 | 22 | 40,968 | 5,662 |
| 3 | 1,094,052 | 99,992 | 1,094,052 | 22,764 | 1,094,342 | 23 | 1,094,310 | 157,120 |
| 4 | 7,691,508 | 446,699 | 7,691,460 | 267,079 | 7,692,322 | 40 | 7,692,026 | 1,030,903 |
| 4 | 275,106 | 30,253 | 275,106 | 5,163 | 1,070,454 | 35 | 1,068,174 | 185,634 |
| 4 | 1,906,220 | 328,077 | 1,906,220 | 256,998 | 1,906,220 | 38 | 1,906,220 | 324,697 |
| 4 | 1,083,902 | 163,906 | 1,084,238 | 151,320 | 1,098,776 | 38 | 1,077,704 | 183,812 |
| 4 | 1,473,256 | 164,818 | 1,474,746 | 5,491 | 1,475,160 | 39 | 1,475,618 | 185,227 |
| 4 | 71,560 | 8,515 | 71,560 | 727 | 71,580 | 19 | 71,576 | 8,259 |
| 4 | 502,022 | 47,831 | 495,328 | 8,218 | 497,850 | 39 | 511,016 | 44,589 |
| 5 | 1,103,322 | 88,904 | 1,103,242 | 34,411 | 1,103,334 | 131 | 1,103,322 | 134,740 |
| 5 | 10,169,872 | 1,078,351 | 10,169,872 | 170,424 | 10,169,872 | 135 | 10,169,872 | 1,146,080 |
| 5 | 9,620,588 | 926,258 | 9,459,412 | 116,581 | 9,483,642 | 139 | 9,830,102 | 708,716 |
| 6 | 586,100 | 51,253 | 589,100 | 26,316 | 589,100 | 732 | 589,100 | 36,871 |
| 6 | 17,912,096 | 1,379,542 | 17,912,096 | 335,211 | 17,912,096 | 733 | 17,912,096 | 2,491,369 |
3.2 Results of parallel branch and bound algorithm

Global optimization algorithms are computationally intensive and therefore parallel computing is important [2,4,6,12,13]. In this section the parallel branch and bound algorithm with simplicial partitions and combination of Lipschitz bounds has been
Table 5 Average total numbers of simplices and average maximal size of candidate list

| n | Best first: TNS | MCL | Statistical: TNS | MCL | Depth first: TNS | MCL | Breadth first: TNS | MCL |
|---|---|---|---|---|---|---|---|---|
| 2 | 18,126 | 2,709 | 18,199 | 640 | 19,430 | 12 | 18,133 | 3,135 |
| 3 | 2,434,138 | 274,960 | 2,434,406 | 102,734 | 2,444,114 | 23 | 2,435,013 | 423,552 |
| 4 | 1,759,290 | 168,927 | 1,758,531 | 105,814 | 1,760,934 | 36 | 1,759,680 | 249,651 |
| 5–6 | 7,878,996 | 704,862 | 7,846,744 | 136,589 | 7,851,609 | 374 | 7,920,898 | 903,555 |
Table 6 Average speedup and efficiency of parallelization

| Strategy | n | 2p: s_p | e_p | 4p: s_p | e_p | 8p: s_p | e_p | 16p: s_p | e_p | Mean s_p | Mean e_p |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Best first | 2 | 1.36 | 0.68 | 1.95 | 0.49 | 2.79 | 0.35 | 4.10 | 0.26 | 2.54 | 0.44 |
| | 3 | 1.86 | 0.93 | 2.46 | 0.61 | 3.64 | 0.45 | 5.13 | 0.32 | 3.27 | 0.58 |
| | 4 | 1.95 | 0.97 | 3.65 | 0.91 | 6.96 | 0.87 | 11.14 | 0.70 | 5.98 | 0.86 |
| | 5–6 | 1.87 | 0.94 | 3.64 | 0.91 | 6.77 | 0.85 | 12.95 | 0.81 | 6.31 | 0.88 |
| | Mean | 1.76 | 0.88 | 2.93 | 0.73 | 5.04 | 0.63 | 8.33 | 0.52 | | |
| Statistical | 2 | 1.30 | 0.65 | 1.93 | 0.48 | 2.96 | 0.37 | 4.14 | 0.26 | 2.58 | 0.44 |
| | 3 | 1.83 | 0.91 | 2.50 | 0.63 | 3.78 | 0.47 | 5.02 | 0.31 | 3.28 | 0.58 |
| | 4 | 1.95 | 0.98 | 3.72 | 0.93 | 7.24 | 0.90 | 10.81 | 0.68 | 5.93 | 0.87 |
| | 5–6 | 1.87 | 0.94 | 3.62 | 0.90 | 6.85 | 0.86 | 13.25 | 0.83 | 6.40 | 0.88 |
| | Mean | 1.74 | 0.87 | 2.94 | 0.74 | 5.21 | 0.65 | 8.30 | 0.52 | | |
| Depth first | 2 | 1.61 | 0.80 | 1.38 | 0.35 | 1.47 | 0.18 | 1.39 | 0.09 | 1.47 | 0.36 |
| | 3 | 1.65 | 0.83 | 1.92 | 0.48 | 2.87 | 0.36 | 2.89 | 0.18 | 2.33 | 0.46 |
| | 4 | 1.91 | 0.96 | 3.70 | 0.92 | 6.80 | 0.85 | 9.81 | 0.61 | 5.55 | 0.84 |
| | 5–6 | 1.79 | 0.89 | 3.52 | 0.88 | 6.69 | 0.84 | 12.69 | 0.79 | 6.17 | 0.85 |
| | Mean | 1.74 | 0.87 | 2.63 | 0.66 | 4.46 | 0.56 | 6.70 | 0.41 | | |
| Breadth first | 2 | 1.35 | 0.68 | 2.03 | 0.51 | 3.04 | 0.38 | 4.99 | 0.31 | 2.85 | 0.47 |
| | 3 | 1.87 | 0.93 | 3.17 | 0.79 | 5.46 | 0.68 | 8.26 | 0.52 | 4.69 | 0.73 |
| | 4 | 1.93 | 0.96 | 3.74 | 0.94 | 7.01 | 0.88 | 9.80 | 0.61 | 5.62 | 0.85 |
| | 5–6 | 1.80 | 0.90 | 3.58 | 0.89 | 6.75 | 0.84 | 13.24 | 0.83 | 6.34 | 0.87 |
| | Mean | 1.74 | 0.87 | 3.13 | 0.78 | 5.56 | 0.70 | 9.07 | 0.57 | | |
investigated. The results of different selection strategies have been compared. An MPI version has been implemented using a parallel branch and bound template [1]. Static load balancing is used: tasks are initially distributed evenly (if possible) among p processors. If the initial number of simplices (n!) is less than the number of processors, the simplices are subdivided until the number of processors is reached. Then the initial simplices are distributed. After initialization, the processors work independently and do not exchange any tasks generated later.
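The initialization step described above (subdivide until there are at least p tasks, then deal them out) can be sketched as follows (an illustration of the static scheme only, not the MPI template of [1]):

```python
def distribute(tasks, p, subdivide):
    """Static load balancing: if there are fewer initial tasks than
    processors, subdivide tasks until there are at least p, then deal
    them round-robin over the p processors.

    `subdivide` maps one task to a list of child tasks (e.g. the two
    halves produced by longest-edge bisection of a simplex).
    """
    tasks = list(tasks)
    while len(tasks) < p:
        tasks.extend(subdivide(tasks.pop(0)))   # split the oldest task
    return [tasks[i::p] for i in range(p)]      # processor i gets tasks[i::p]
```

After this initial distribution each processor would run its own sequential branch and bound loop on its share, with no later task exchange, matching the scheme described in the text.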
The parallel algorithm has been evaluated using the standard criteria: the speedup s_p = t_1/t_p and the efficiency of parallelization e_p = s_p/p, where t_p is the time used by the algorithm on p processors. The average values of s_p and e_p are shown in Table 6. For test problems of dimensionalities n = 2 and n = 3 the best average efficiency of parallelization with various numbers of processors p is achieved when the breadth first selection strategy is used. The efficiency of parallelization is very similar when the best first and statistical selection strategies are used. The worst efficiency of parallelization for dimensionalities n = 2 and n = 3 is observed when the depth first selection strategy is used. For higher dimensionalities (n ≥ 4) the average efficiency of parallelization is similar for all selection strategies. For the same number of processors, the efficiency of parallelization decreases less for difficult (higher-dimensional) test problems than for simpler ones.
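The two criteria can be computed directly from wall-clock times; a small helper over hypothetical timing data:

```python
def parallel_metrics(times):
    """Given {p: t_p} with the sequential time at p=1, return
    {p: (s_p, e_p)} where s_p = t_1/t_p and e_p = s_p/p."""
    t1 = times[1]
    return {p: (t1 / t, t1 / (t * p)) for p, t in times.items() if p > 1}
```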
4 Conclusions

In this paper the speed and memory requirements of a sequential branch and bound algorithm and the efficiency of parallelization of the parallel version of the algorithm have been investigated and compared for different selection strategies (best first, statistical, depth first and breadth first).

Optimization time is shorter when the depth first and breadth first selection strategies are used. This is because of the time consuming heap structure required to prioritize candidates in the case of the best first and statistical selection strategies; however, the influence would be smaller for expensive objective functions which take longer to evaluate. The numbers of function evaluations required for the whole optimization are similar for all selection strategies, although the depth first strategy requires the largest number of function evaluations.

The number of function evaluations needed to locate the global solution is smallest when the statistical selection strategy is used. Therefore this strategy is preferable when the solution time is limited.

The maximal size of the candidate list varies greatly between selection strategies. The smallest maximal size occurs when the depth first selection strategy is used, and it is up to ∼7,000 times smaller than for the other selection strategies. Therefore this strategy is preferable when memory is the limiting factor. The maximal size of the candidate list is up to ∼5 times smaller when the statistical selection strategy is used than when the best first strategy is used. This explains why the optimization time is smaller with statistical selection: the time required for insertion and deletion of candidates to/from the heap structure depends on the number of elements in the heap.

The efficiency of parallelization is similar when the best first, statistical and breadth first selection strategies are used, and worst when the depth first selection strategy is used.
The efficiency of parallelization is better for difficult test problems.

Acknowledgments  This work was carried out under the HPC-EUROPA project (RII3-CT-2003-506079), with the support of the European Community – Research Infrastructure Action under the FP6 "Structuring the European Research Area" Programme. The research is partially supported by the Lithuanian State Science and Studies Foundation within the project B-03/2007 "Global optimization of complex systems using high performance computing and grid technologies".
References

1. Baravykaitė, M., Čiegis, R., Žilinskas, J.: Template realization of generalized branch and bound algorithm. Math. Model. Anal. 10(3), 217–236 (2005)
2. Čiegis, R., Henty, D., Kågström, B., Žilinskas, J. (eds.): Parallel Scientific Computing and Optimization. Springer Optimization and Its Applications, vol. 27. Springer (2009)
3. Csendes, T.: Generalized subinterval selection criteria for interval global optimization. Numer. Algorithms 37(1–4), 93–100 (2004)
4. D'Apuzzo, M., Marino, M., Migdalas, A., Pardalos, P.M., Toraldo, G.: Parallel computing in global optimization. In: Kontoghiorghes, E.J. (ed.) Handbook of Parallel Computing and Statistics, pp. 225–258. Chapman & Hall/CRC, London (2006)
5. Dür, M., Stix, V.: Probabilistic subproblem selection in branch-and-bound algorithms. J. Comput. Appl. Math. 182(1), 67–80 (2005)
6. Ferreira, A., Pardalos, P.M. (eds.): Solving Combinatorial Optimization Problems in Parallel: Methods and Techniques. Lecture Notes in Computer Science, vol. 1054. Springer (1996)
7. Hansen, P., Jaumard, B.: Lipschitz optimization. In: Horst, R., Pardalos, P.M. (eds.) Handbook of Global Optimization, pp. 407–493. Kluwer, Dordrecht (1995)
8. Horst, R., Pardalos, P.M., Thoai, N.V.: Introduction to Global Optimization. Kluwer, Dordrecht (1995)
9. Jansson, C., Knüppel, O.: A global minimization method: the multi-dimensional case. Tech. rep., TU Hamburg-Harburg (1992)
10. Kreinovich, V., Csendes, T.: Theoretical justification of a heuristic subbox selection criterion for interval global optimization. Central Eur. J. Oper. Res. 9(3), 255–265 (2001)
11. Madsen, K., Žilinskas, J.: Testing branch-and-bound methods for global optimization. Tech. Rep. IMM-REP-2000-05, Technical University of Denmark (2000)
12. Migdalas, A., Pardalos, P.M., Storøy, S. (eds.): Parallel Computing in Optimization. Applied Optimization, vol. 7. Kluwer, Dordrecht (1997)
13. Pardalos, P.M. (ed.): Parallel Processing of Discrete Problems. IMA Volumes in Mathematics and its Applications, vol. 106. Springer (1999)
14. Paulavičius, R., Žilinskas, J.: Analysis of different norms and corresponding Lipschitz constants for global optimization in multidimensional case. Inf. Technol. Control 36(4), 383–387 (2007)
15. Paulavičius, R., Žilinskas, J.: Improved Lipschitz bounds with the first norm for function values over multidimensional simplex. Math. Model. Anal. 13(4), 553–563 (2008)
16. Paulavičius, R., Žilinskas, J.: Global optimization using the branch-and-bound algorithm with a combination of Lipschitz bounds over simplices. Technol. Econ. Dev. Econ. 15(2), 310–325 (2009)
17. Sergeyev, Y.D., Kvasov, D.E.: Global search based on efficient diagonal partitions and a set of Lipschitz constants. SIAM J. Optim. 16(3), 910–937 (2006)
18. Žilinskas, A., Žilinskas, J.: Global optimization based on a statistical model and simplicial partitioning. Comput. Math. Appl. 44(7), 957–967 (2002)
19. Žilinskas, A., Žilinskas, J.: Branch and bound algorithm for multidimensional scaling with city-block metric. J. Glob. Optim. 43(2–3), 357–372 (2009)
20. Žilinskas, A., Žilinskas, J.: P-algorithm based on a simplicial statistical model of multimodal functions. TOP (submitted) (2009)
21. Žilinskas, J.: Branch and bound with simplicial partitions for global optimization. Math. Model. Anal. 13(1), 145–159 (2008)