A Partitioning Algorithm for the Mixed Integer Nonlinear Programming Problem

Biket Ergüneş, Yeditepe University, Dept. of Systems Engineering, Kayisdagi Cad., 34755 Kadıkoy, Istanbul, Turkey. E-mail: [email protected]
Linet Özdamar*, Yeditepe University, Dept. of Systems Engineering, Kayisdagi Cad., 34755 Kadıkoy, Istanbul, Turkey. E-mail: [email protected]
Onur Demir, Yeditepe University, Dept. of Computer Engineering, Kayisdagi Cad., 34755 Kadıkoy, Istanbul, Turkey. E-mail: [email protected]
Nur Gülcan, Yeditepe University, Dept. of Systems Engineering, Kayisdagi Cad., 34755 Kadıkoy, Istanbul, Turkey. E-mail: [email protected]

* Corresponding author

Abstract: An Interval Partitioning Method (IPM) is proposed to solve the (nonconvex) Mixed Integer Nonlinear Programming Problem (MINLP). The MINLP is encountered in many application areas, and solving this problem bears practical importance. This paper proposes an IPM where two tree search strategies (breadth first and mixed breadth/depth first) and three variable subdivision methods are implemented. Two of the variable subdivision methods are novel: they prioritize variables hierarchically according to several features. The IPM is implemented on a set of nonconvex MINLP instances extracted from the MINLP benchmarks, and numerical results show that its performance is quite promising.

Keywords: Mixed Integer Nonlinear Problem; Interval Partitioning Methods; global optimization; variable subdivision rules.


Biographical notes: Biket Ergunes is a PhD candidate at the Department of Industrial and Systems Engineering at Yeditepe University. Biket has a BS degree in Mathematics from Middle East Technical University and an MS degree in Applied Mathematics from the University of Texas at Dallas. Biket has over 10 years of work experience in the IT sector.

Linet Özdamar has BSc., MSc. and PhD. degrees from the Department of Industrial Engineering, Bogazici University, Istanbul, Turkey. Dr. Özdamar has held various academic and administrative positions at Marmara University, Nanyang Technological University and Yeditepe University. Dr. Özdamar is on the Scientific Board of the EURO Project Management and Scheduling Work Group, and a member of the Turkish OR Society. Having published more than 70 articles in various fields such as project scheduling, hierarchical planning, constrained optimization and emergency logistics, Dr. Özdamar has received an impressive number of citations to her work.

Onur Demir is an Assistant Professor at the Dept. of Computer Engineering at Yeditepe University. Onur has a BS degree in Computer Engineering from Marmara University and an MS degree in Computer and Informatics Engineering from Yeditepe University. His interests are computer security, optimization and algorithms.

Nur Gülcan is currently a PhD student at Yeditepe University. She has a BS degree in Mathematics from Yeditepe University.

1. Introduction and Problem Background

The Mixed Integer Nonlinear Programming Problem (MINLP) is encountered in optimization problems of various industrial areas such as energy production and distribution (Allevi et al., 2007; Baliban et al., 2012), water desalination and waste water management (Bragalli et al., 2012), chemical pooling optimization (Misener and Floudas, 2009) and other diverse areas such as transportation (Fügenschuh et al., 2010) and oil spill response (You and Leyffer, 2011). A survey of MINLP applications is given in Floudas (2009).

A MINLP is defined as follows.

min { f0(x,y) : fj(x,y) ≤ 0, j = 1,…,m, x ∈ Z^n1, y ∈ R^n2 }

Here, n1 denotes the number of discrete variables, n2 denotes the number of real variables and m denotes the number of constraints. f0(x,y) and fj(x,y), j = 1,…,m, are functions defined in the space Z^n1 x R^n2. We briefly define the domain space as X x Y, where X = X1 x … x Xn1 and Y = Y1 x … x Yn2, and Xi = [Xi^L, Xi^U] for xi, i = 1,…,n1, and Yi = [Yi^L, Yi^U] for yi, i = 1,…,n2, are the intervals over which each variable is defined.

Nonconvex MINLPs are NP-Hard, whereas convex MINLPs can be solved optimally by local methods if discrete variables are relaxed. Several optimization methods have been developed for convex MINLPs, for instance, generalized Benders decomposition (Geoffrion, 1972), branch and bound (Gupta and Ravindran, 1985), outer approximation (Duran and Grossmann, 1986), LP/NLP-based branch and bound (Quesada and Grossmann, 1992), cutting methods (Westerlund and Pettersson, 1995), branch and cut (Stubbs and Mehrotra, 1999), and others (Abhishek et al., 2010; Bonami et al., 2008). For solving nonconvex MINLPs, branch and bound and branch and reduce algorithms have been proposed (Floudas and Visweswaran, 1990; Ryoo and Sahinidis, 1995; Sahinidis, 1992; Sherali and Wang, 2001; Smith and Pantelides, 1997). Detailed surveys on solution methods are found in Leyffer et al. (2009), Hemmecke et al. (2010) and Burer and Letchford (2012).

Several publications have resulted in the development of commercial solution software. For instance, Ryoo and Sahinidis (1995; 1996) propose the Branch and Reduce algorithm for the nonconvex MINLP, laying the foundation for the software BARON. This method involves pre-processing and post-processing steps that reduce variable ranges in each spatial subdivision by applying feasibility and optimality tests. In each subdivision, the problem is relaxed and lower and upper bounds are obtained for the objective function. The algorithm proposed by Tawarmalani and Sahinidis (2004; 2005) has also resulted in the software BARON, where separable functions are reformulated by defining new variables that produce convex lower and upper envelope functions solved by outer approximation. Apart from using the domain reduction schemes found in the literature (Ryoo and Sahinidis, 1995; 1996; Shectman and Sahinidis, 1998), Tawarmalani and Sahinidis (2004) also introduce a duality-based range reduction technique.

Adjiman et al. (1998) propose the α-BB algorithm for the nonconvex MINLP. The authors reduce variable domains by using both interval analysis and solving the convex relaxation of the problem. Branching variables are selected based on their domain size and their impact on the lower bound; priority may be given to discrete variables over real variables; the selection may also depend on how close a variable's value is to a discrete value in the solution of the convex relaxation. After branching, the problem is re-convexified, new lower bounds are identified, and upper bounds are found by the generalized Benders algorithm.

More recently, Belotti et al. (2009) propose a spatial branch and bound method where convex relaxations of the problem are solved sequentially by partitioning the original solution space. The authors create linear (Liberti, 2008) and continuous relaxations for subproblems in the search tree and solve the relaxations to provide upper bounds. Feasibility and optimality based bound tightening is activated for each subproblem (Carrizosa et al., 2004; Messine, 2004). The branching scheme is based on selecting the variable causing the most infeasibility. This algorithm is embedded in the software COUENNE.
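To fix the notation of the definition above (n1 discrete variables, n2 real variables, m constraints), here is a tiny made-up instance solved by naive grid enumeration; it is purely illustrative and is neither a benchmark problem nor the method proposed in this paper.

```python
# Toy nonconvex MINLP (illustrative only, not one of the benchmark instances):
#   min  f0(x, y) = (y - 2)^2 + x*y,   x in {0, 1, 2, 3},  y in [0, 4]
#   s.t. f1(x, y) = x + y - 4 <= 0
# The bilinear term x*y makes the objective nonconvex in (x, y).

def f0(x, y):
    return (y - 2.0) ** 2 + x * y

def f1(x, y):
    return x + y - 4.0

# Naive grid enumeration over y (step 0.1), exact over the integer x.
best = min((f0(x, y / 10), x, y / 10)
           for x in range(4)
           for y in range(41)
           if f1(x, y / 10) <= 0)
print(best)  # the minimum is attained at x = 0, y = 2
```

Even at this size, the feasible region mixes a lattice with a continuum, which is why enumeration does not scale and partitioning methods are needed.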
In this paper, we use interval methods to solve the MINLP. Interval methods can be used in both deterministic models (e.g., Csendes and Ratz (1997), Markót et al. (2006) and Kearfott (2005) for interval methods in global optimization) and uncertainty models (e.g., Weber et al. (2009) and Özmen et al. (2013) for uncertain data modeling, Branzei et al. (2010) for cooperative games). Here, we propose an Interval Partitioning Method (IPM), basically a branch and bound method that reliably computes the ranges of the objective function and the constraints in given subdomains (boxes) using interval analysis methods. The lower and upper bounds of these function ranges lead to guaranteed decisions on box infeasibility and suboptimality. During the search, we identify feasible solutions to the problem in selected boxes by invoking calls to the embedded Knitro library made available by Ziena Optimization LLC (http://www.ziena.com/knitro.htm). Knitro uses interior point algorithms in the search for local minima. The latter represent valid upper bounds to be used in suboptimality tests. Boxes that are not fathomed due to infeasibility/suboptimality are repartitioned by a variable subdivision rule. A novel static/dynamic hierarchical variable subdivision rule is proposed here. The proposed IPM implements both breadth first and mixed breadth/depth first tree search strategies. Numerical experiments indicate that the IPM is a promising tool to solve the nonconvex MINLP.

2. The IPM

Although the reliable computing community has made efforts to solve the NLP (Hansen, 1992; Kearfott, 2003; Markót et al., 2006; Pedamallu et al., 2008; Ratschek and Rokne, 1988), IPM approaches proposed for the MINLP are quite rare (Vaidyanathan and El-Halwagi, 1996). Consequently, validating the performance of the IPM by further testing and developing new IPM algorithms may be promising channels of research in the MINLP area.

2.1. Basic Concepts

Before presenting the proposed IPM, we need to describe the basic interval analysis concepts. Let B be a subdomain (box) of the search space, where B ⊆ X x Y. Let F(B) be the inclusion function of any real function f defined over the box B, where f(B) = { f(x,y) | (x,y) ∈ B } ⊆ F(B) (ref. to Alefeld and Herzberger (1983) for the "Fundamental Theorem of Interval Arithmetic"). We define F^U(B) and F^L(B) as the reliable upper and lower bounds of f over B, respectively, so that the range f(B) is included in the interval [F^L(B), F^U(B)]. According to inclusion monotonicity (Alefeld and Herzberger, 1983), given a real function f (whose inclusion function is denoted by F) and two intervals B and Ω such that B ⊆ Ω, the following holds: [F^L(B), F^U(B)] ⊆ [F^L(Ω), F^U(Ω)]. Hence, by the monotonicity principle of interval operations, the range defined by [F^L(B), F^U(B)] is reduced as the size of the box B decreases. Furthermore, F(B) is α-convergent over B, that is, for all B ⊆ X x Y, w(F(B)) - w(f(B)) ≤ c·w(B)^α, where c and α are positive constants and w denotes width. That is, the ranges of the original function and its inclusion function converge as the box size is reduced.

The above theorems also hold for the inclusion functions defined in the MINLP, that is, Fj(B) and F0(B). Using the ranges of Fj(B) and F0(B) over a given box, we can discard it reliably based on guaranteed infeasibility or suboptimality. Let us then define the suboptimality and infeasibility tests used in the IPM.

Suboptimality test: If F0^L(B) > CUB, then B is a suboptimal box. Here, the current upper bound for f0(x,y), the CUB, is defined as the objective function value of the best feasible solution found so far. In our case, the CUB is obtained by invoking the Knitro local search library.

Feasibility test: If Fj^L(B) > 0 for at least one constraint j, then B is an infeasible box.

Boxes that are not guaranteed to be suboptimal or infeasible are called indeterminate boxes. These are re-partitioned until the size of the box is less than a pre-specified TolSize. Here, boxes whose volumes are less than or equal to 1 are not subdivided.

Degree of infeasibility uncertainty: The degree of infeasibility uncertainty of a constraint j over an indeterminate box B is given by max(0, Fj^U(B)). The total infeasibility uncertainty of a box B is then defined as IU(B) = Σj max(0, Fj^U(B)).
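To make the enclosure idea concrete, here is a minimal interval arithmetic sketch (our own illustration, not the authors' implementation): a natural inclusion function for a toy constraint, and the guaranteed feasibility decision it supports.

```python
# Minimal interval-arithmetic sketch (all names here are illustrative).
# An Interval supports +, -, * so that the result encloses all pointwise values.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        # Subtraction flips the other interval's endpoints.
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))

def F1(X, Y):
    # Natural inclusion function of the toy constraint f1(x, y) = x*y - y*y.
    return X * Y - Y * Y

# Box B with x in [2, 3], y in [0, 1].
Fb = F1(Interval(2, 3), Interval(0, 1))
print((Fb.lo, Fb.hi))  # -> (-1, 3): every f1(x, y) over B lies in this range
# Feasibility test for f1(x, y) <= 0: B is provably infeasible only if the
# reliable lower bound exceeds 0. Here it does not, so B stays indeterminate.
print(Fb.lo > 0)  # -> False
```

Note that the enclosure [-1, 3] may be wider than the true range; the tests above remain valid precisely because the bounds are guaranteed.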

2.2. Tree Search Strategies

In this section, we present the pseudocode of the IPM where a mixed breadth first/depth first tree search strategy is implemented.

IPM pseudocode (mixed tree search strategy)
Step 0. Partition X x Y into K boxes. Push all boxes into list L. Set CUB = ∞.
Step 1. Apply suboptimality and feasibility tests to each box B ∈ L and delete B from L if it is suboptimal, infeasible or w(B) < TolSize. Set temporary list Q = L.
Step 2. Select the box B* ∈ Q with the lowest F0^L(B) and delete B* from Q; if ties exist, then select according to the lowest IU(B), and resolve further ties according to the lowest F0^U(B). If Q = ∅, then stop and report the best solution.
Step 3. Re-partition B* into K new boxes, apply suboptimality, feasibility and box size tests to all K new boxes and push the unfathomed boxes to the short list L′. Remove B* from L and L′. Reset Q = L′.
Step 4. Repeat Steps 2 and 3 until the subtree L′ reaches a maximum depth level, D. Go to Step 5.
Step 5. Invoke the Knitro library to identify a local optimal solution in every box B ∈ L′ and update the CUB.
Step 6. Set L = L ∪ L′. Empty L′. If the CUB is updated, re-apply the suboptimality test to all B ∈ L and fathom accordingly. Reset Q = L. Go to Step 2.
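The box selection of Step 2 amounts to a lexicographic ranking, which can be sketched as follows (a hypothetical illustration: the field names, and the use of the objective upper bound as the final tie-breaker, are our assumptions).

```python
# Hypothetical sketch of Step 2's box selection: rank by the objective lower
# bound F0_low, break ties by total infeasibility uncertainty IU, then by the
# objective upper bound F0_up (field names are illustrative, not the paper's).

def select_best_box(boxes):
    """Pop and return the most promising box from the list."""
    best = min(boxes, key=lambda b: (b["F0_low"], b["IU"], b["F0_up"]))
    boxes.remove(best)
    return best

Q = [
    {"name": "B1", "F0_low": 3.0, "IU": 0.5, "F0_up": 9.0},
    {"name": "B2", "F0_low": 1.0, "IU": 2.0, "F0_up": 8.0},
    {"name": "B3", "F0_low": 1.0, "IU": 0.1, "F0_up": 7.0},
]
best = select_best_box(Q)
print(best["name"])  # B2 and B3 tie on F0_low; the lower IU selects "B3"
```

Encoding the criteria as a single tuple key keeps the optimistic criterion dominant while the pessimistic ones only break ties.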

In the above pseudocode, initially, a breadth first strategy is implemented where the best box is selected among the set of end nodes (boxes) in the whole tree, represented by the long list L. The selection is made according to the least F0^L(B), an optimistic criterion that holds the promise of finding the optimal solution. The tie resolving criterion IU(B) is a pessimistic one. Once B* is identified, the partitioning and re-selection of the best box progresses within the restricted subtree originating from B* until the subtree has a maximum depth of D levels. At that point, the local search library Knitro is invoked for every box in the subtree represented by the short list L′, hopefully discovering a feasible solution whose objective value is less than the CUB. If such a better solution is found, the CUB is updated, and then all available boxes in the long list L are subjected to the suboptimality test again. Then, the short list L′ is added to the long list L and emptied, and breadth first search is applied to the whole tree once again to select the next promising B*. Thus, the search oscillates between the whole tree and restricted promising subtrees.

This strategy has the advantage of moving down the tree from a promising box for only a few levels without going too deep. Hence, a promising subspace can be probed briefly with the hope of identifying a better CUB. In the next stage, if a better B* is found in that area, then the subspace is probed deeper. Otherwise, the search skips to a more promising subspace. Such restricted probes may result in earlier detection of feasible and better solutions compared to a pure breadth first search strategy, where the search always progresses from the best box in the whole tree. Pure breadth first strategies also demand more memory. As observed in the pseudocode above, the whole tree represented by list L is not required to be kept in active memory at all times because we mostly work with the short list L′. The mixed tree search strategy is compared with the breadth first strategy whose pseudocode is given below.

IPM pseudocode (breadth first tree search strategy)
Step 0. Partition X x Y into K boxes. Push all boxes into list L. Set CUB = ∞ and all branch depth-counters dc = 0.
Step 1. Apply suboptimality and feasibility tests to all B ∈ L and delete B from L if it is suboptimal, infeasible or w(B) < TolSize.
Step 2. Select the box B* ∈ L with the lowest F0^L(B); if ties exist, then select according to the lowest IU(B), and resolve further ties according to the lowest F0^U(B). If L = ∅, then stop and report the best solution.
Step 3. Re-partition B* into K new boxes, increment the branch-depth counter dc, apply suboptimality, feasibility and box size tests to all K new boxes and push the unfathomed boxes to list L. Remove B* from L.
Step 4. Repeat Steps 2 and 3 until every branch-depth counter in the whole tree reaches the depth level, D. Go to Step 5.
Step 5. Invoke the Knitro library to identify a local optimal solution in every box B ∈ L and update the CUB.
Step 6. If the CUB is updated, re-apply the suboptimality test to all B ∈ L and fathom accordingly. Reset all branch-depth counters dc = 0. Go to Step 2.

In the breadth first approach, we select the most promising box in the whole tree and re-partition that box. After testing the child boxes for suboptimality and infeasibility, we fathom as necessary and continue the search considering the next promising box in the whole tree. Hence, the search is free to jump from branch to branch after probing just one level into the selected subspace. We invoke the Knitro library only when all branch-depth counters dc become equal to D, that is, when we have a balanced tree. After invoking Knitro once for each end node in the tree, we reset the depth counter of each branch to zero and continue the breadth first search by selecting the best box in the whole tree. This approach requires us to keep the list L in active memory at all times. A disadvantage of this approach is that boxes are possibly fathomed later than in the mixed strategy, because it might be a long wait until the Knitro package is first invoked to identify a finite CUB. However, here Knitro is applied across a wider range of boxes, unlike the mixed strategy where only promising subtrees are probed. This feature of invoking Knitro in a box without assessing its potential of holding the optimal solution might seem to be a waste of CPU time. Nevertheless, one should keep in mind that the boxes having the least lower bounds are not guaranteed to hold the optimal solution and might mislead the search. Hence, both search methods have pros and cons.
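The oscillation between the long list L and the short list L′ can be illustrated with a deliberately simplified one-dimensional sketch of Steps 0-6. Everything here is our own toy: `bounds` stands in for the inclusion function F0, and evaluating a box midpoint stands in for the Knitro call.

```python
# A deliberately simplified 1-D sketch of the mixed breadth/depth-first search
# (our own toy; the real method uses interval inclusion functions and Knitro).

def f0(x):
    return (x - 1.3) ** 2

def bounds(lo, hi):
    # Valid enclosure of f0 over [lo, hi]: f0 is monotone on each side of 1.3.
    vals = [f0(lo), f0(hi)] + ([0.0] if lo <= 1.3 <= hi else [])
    return min(vals), max(vals)

def ipm(lo, hi, tol=1e-3, depth=3):
    cub = float("inf")                 # current upper bound (Step 0)
    L = [(lo, hi)]                     # long list of end-node boxes
    while L:
        L.sort(key=lambda b: bounds(*b)[0])
        best, L = L[0], L[1:]          # Step 2: lowest objective lower bound
        sub = [best]                   # short list L' (Steps 3-4)
        for _ in range(depth):
            sub = [h for (a, b) in sub
                   for h in ((a, (a + b) / 2), ((a + b) / 2, b))
                   if bounds(*h)[0] <= cub and (h[1] - h[0]) > tol]
        for (a, b) in sub:             # Step 5: local-search stand-in
            cub = min(cub, f0((a + b) / 2))
        # Step 6: merge L' back into L and refathom suboptimal boxes.
        L = [b for b in L + sub if bounds(*b)[0] <= cub]
    return cub

print(round(ipm(0.0, 4.0), 4))  # converges to the minimum f0(1.3) = 0.0
```

Each improvement of `cub` immediately prunes the long list, which is the mechanism both strategies rely on; the two pseudocodes differ only in when and where the upper-bounding step is triggered.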

2.3. Variable Subdivision Rules

2.3.1. The Hierarchical Variable Subdivision Rule

We develop a variable subdivision rule, denoted as Hierarchical Variable Subdivision (HVS), that prioritizes variables according to the following hierarchical structure, where Group II variables follow those in Group I.

Group I: Variables found in the objective function are prioritized as binary variables first, integer variables next and continuous variables last. Ties are resolved by choosing the variable that takes place in the smallest number of constraints.

Group II: Variables found only in the constraints are clustered and ordered by the Rank Order Clustering algorithm (ROC) (King, 1980), which clusters variables based on the specific constraints that they take place in. In other words, variables involved in the same set of constraints get into the same cluster.

Group I variables are partitioned first with the goal of impacting the objective function faster. Taking care of binary variables first is cheaper in terms of tree expansion because each binary variable can generate only two new boxes. Integer variables are partitioned next, because they often lead to constraint infeasibilities. Finally, the last priority variables are the continuous ones. Partitioning Group II variables according to the ROC sequencing helps us deal with infeasibilities in clusters of constraints. For instance, variables that belong to the first cluster of constraints are partitioned first and subspaces that are infeasible with regard to this constraint cluster are discarded rapidly. Then, the next cluster of constraints is considered, and so on. Our preliminary experiments showed us that dealing with infeasible constraints in an arbitrary order delayed convergence.

The ROC algorithm starts by constructing a variable-constraint incidence matrix C of size n x m, where n = n1 + n2 is the number of variables and the m columns represent the constraints fj(x,y), j = 1,…,m. An element of the matrix cij = 1 if xi occurs in fj(x,y); otherwise, cij = 0. The ROC is then implemented as presented below.

Rank Order Clustering Algorithm
Step 0. Repeat Steps 1 and 2 until both rows and columns are sorted in ascending order of BVi and BVj.
Step 1. Calculate the binary value BVi of each row in the matrix C. If the rows are not ordered in ascending order of BVi, then sort them and continue.
Step 2. Calculate the binary value BVj of each column. If the columns are not ordered in ascending order of BVj, sort them. Go to Step 1.

In the ROC, the binary value of each row (variable) and column (constraint) is calculated as in Eq. (1). The formulae differentiate between the locations of the 1's and 0's in each row and column by multiplying them with a power of two. Thereby, rows (and columns) having 1's at similar locations have close BVi values, and when all rows (columns) are sorted, these variables turn out to be neighbours in the sorted list of rows (columns). The variables represented by neighbouring rows fall into the same cluster and they are partitioned sequentially.

BVi = Σ_{j=1..m} cij · 2^(m-j),    BVj = Σ_{i=1..n} cij · 2^(n-i)    Eq. (1)

The idea behind using the ROC is to select and divide the ranges of the variables affecting the same group of constraints, so that these are handled together to enable faster elimination of infeasible spaces.
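The sorting loop above is short enough to sketch directly (our own illustration; the incidence matrix below is a made-up toy, not one of the paper's instances).

```python
# Compact sketch of the Rank Order Clustering loop described above.

def roc(C):
    """Reorder rows (variables) and columns (constraints) of the 0/1 incidence
    matrix C by their binary values until both orderings are stable.
    Returns the row and column permutations as indices into C."""
    n, m = len(C), len(C[0])
    rows, cols = list(range(n)), list(range(m))
    while True:
        # Step 1: binary value of row i over the current column order.
        new_rows = sorted(rows, key=lambda i: sum(
            C[i][cols[j]] << (m - 1 - j) for j in range(m)))
        # Step 2: binary value of column j over the updated row order.
        new_cols = sorted(cols, key=lambda j: sum(
            C[new_rows[i]][j] << (n - 1 - i) for i in range(n)))
        if new_rows == rows and new_cols == cols:
            return rows, cols
        rows, cols = new_rows, new_cols

# Toy matrix: x0 and x2 appear in constraints {0, 1}; x1 and x3 in {2}.
C = [[1, 1, 0],
     [0, 0, 1],
     [1, 1, 0],
     [0, 0, 1]]
rows, cols = roc(C)
print(rows)  # variables sharing the same constraints become neighbours
```

On this toy matrix the row permutation groups x1 with x3 and x0 with x2, which is exactly the clustering HVS uses to order Group II variables.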

2.3.2. The Dynamic Hierarchical Variable Subdivision Rule

In the HVS, the order in which variables are selected is static throughout the search, and boxes can become very imbalanced as the same variable is re-partitioned recursively. We therefore propose a dynamic version of the HVS, the DHVS, where the variables in the static ordered hierarchical list are assigned priorities pi, with pi = 1 for the top priority variable and pi = n for the least priority variable. When a box k is to be subdivided, we re-define pi as pi = pi / w([Xi^k,L, Xi^k,U]), where Xi^k,L and Xi^k,U are the lower and upper bounds of variable xi within box k. That is, we divide the static variable priority by its range in the box, so that when a low priority variable has a large range, it might be subdivided first in a given box. The DHVS aims at more balanced boxes.
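The priority update can be sketched in a few lines (a hypothetical illustration; the function and variable names are ours, not the paper's).

```python
# Hypothetical sketch of the DHVS priority update: the static HVS rank p_i is
# divided by the variable's width in the current box, so a low-ranked variable
# with a wide range can move to the front of the subdivision order.

def dhvs_order(static_rank, box):
    """static_rank: {var: rank}, 1 = top priority (the static HVS order).
    box: {var: (lo, hi)} bounds of each variable within the current box.
    Returns variables sorted by the dynamic priority rank / width
    (a smaller value means the variable is subdivided earlier)."""
    dyn = {v: static_rank[v] / (hi - lo) for v, (lo, hi) in box.items()}
    return sorted(dyn, key=dyn.get)

box = {"b1": (0, 1), "i1": (0, 10), "y1": (0, 100)}   # widths 1, 10, 100
rank = {"b1": 1, "i1": 2, "y1": 3}                    # static HVS order
print(dhvs_order(rank, box))  # wide ranges promote y1 and i1 ahead of b1
```

As the search narrows a variable's range, its dynamic priority value grows and the static hierarchy reasserts itself, which is how the rule balances box shapes.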

2.3.3. Rule A

Finally, we compare the HVS and the DHVS with the classical Rule A, which selects the variable with the largest width w([Xi^k,L, Xi^k,U]); that is, the widest range variable is re-partitioned first in a given box.

3. Numerical Experiments and Results

We test the performance of the two tree search strategies and the three variable subdivision rules on a set of nonconvex MINLP benchmarks collected from the literature (http://www.gamsworld.org/minlp/minlplib/minlpstat.htm, accessed on 05.06.2014). The features of these test instances are provided in Table 1, consisting of the problem names as identified in the MINLP literature, the total number of variables, the number of integral (binary and integer) variables, and the number of constraints. Although many more MINLP test instances are publicly available, the ones found in Table 1 are selected specifically due to the nonconvexity of their objective functions and/or constraints.

In Table 1, the results obtained by the two complete solvers BARON and COIN COUENNE and those obtained by the local solver Knitro are also provided, as well as the CPU times at which the algorithms converge. The maximum CPU time allowed to solve these problems is 300 CPU seconds on a XEON quad CPU machine. It is observed that for some of the test problems no feasible solution could be found within 300 CPU seconds (indicated by NS in Table 1). The performance of each solver is represented in terms of the absolute percentage deviation from the optimal objective function value ((Obtained result/Optimal result - 1.0)*100), the CPU time, the number of problems where no feasible solution could be identified (#infeas) and the number of problems where the optimal solution is found (#optim). With regard to convergence times, BARON's performance is better than that of COIN COUENNE with a few exceptional instances, but COUENNE finds a feasible solution in more instances (exceeding BARON by one instance). Neither solver can find a feasible solution for the 351-variable Nuclear instances and the small and larger Trimlon instances. The Windfac instance is also challenging. The local solver Knitro is definitely an underperformer despite the fact that the multi-start option is implemented.

Next, we provide 6 sets of results (the 3 variable subdivision rules, HVS, Rule A and DHVS, implemented with the two tree search strategies, breadth first and mixed) in Table 2. Each column in Table 2 is labeled in terms of "tree search strategy/variable subdivision rule", thereby defining the specific version of the IPM. For each problem, the program is executed for at most 300 CPU seconds. Similar to Table 1, Table 2 also reports algorithm convergence times in CPU seconds. We note that more than 90% of the convergence time taken by the IPM is consumed by the solver Knitro.
In Table 2, we observe that the CPU times taken by the IPMs are in general smaller than those of the other two software packages whose results are provided in Table 1. The best IPM is the Mixed/DHVS approach in terms of solution quality and the number of optimal solutions found, and its results are comparable with those of BARON regarding the number of feasible and optimal solutions found. The specific instances where the IPM fails are usually solved by BARON or COUENNE, while those that could not be solved by the latter two are solved by some IPM-based algorithm (e.g., the Nuclear instances and some of the Trimlon instances). The Trimlon set of instances poses a challenge to all solvers, because they use all of the allowed 300 CPU seconds. The second best IPM is the Mixed/HVS, which is a faster algorithm than the Mixed/DHVS, however, at the expense of worse solution quality. In general, the breadth first versions of the three subdivision rules fare worse than the mixed strategy versions, both in terms of solution quality and CPU times. This is not surprising, since the mixed tree search strategy consumes less memory and restricts the overall tree size by detecting a better CUB at an earlier stage of the search and probing into more promising subspaces.

Note that since the number of unsolved problems and their identities are not the same for each algorithm, we cannot directly compare performance based on the results presented in Tables 1 and 2 in terms of averages. Therefore, in Table 3, we summarize the results for the 9 algorithms using the first 16 smaller size test problems that are solved by all, so that we can have a fair comparison over average absolute percentage deviations and CPU times. In Table 3, according to the average CPU time criterion, the HVS under both search strategies and the Mixed/DHVS are the fastest convergers, whereas Knitro and COUENNE are the slowest. BARON is slower than all versions of the IPM. Similar comments are valid when maximum CPU times over the 16 test problems are considered. In terms of the average absolute deviation criterion, the BARON, COUENNE, Mixed/HVS and Mixed/DHVS approaches are the best performers. Being mindful of the fact that the performance of a complete solver may depend on the local solver that it invokes, we point out the improvement obtained by the IPM over stand-alone Knitro.
Summarizing the information provided in Tables 1, 2 and 3, we observe that the IPM is a viable alternative to other packages whose performances have been rated very highly in the literature. In fact, we would suggest that a practitioner use all available algorithms to solve the MINLP problems that they encounter.

4. Conclusion

In this study, we propose a novel hierarchical variable subdivision rule for use in interval partitioning. The goal is to solve the nonconvex MINLP. Both static and dynamic versions of this rule are proposed. The rules prioritize the variables that are to be partitioned in a given subdomain based on their occurrence in the objective function and constraints as well as their types, i.e., binary, integer or continuous. Variables that are included only in constraints are prioritized according to the Rank Order Clustering (ROC) algorithm that enables sequential partitioning of the variables concerning common constraints. A well-known rule (Rule A) that is based on balanced partitioning of boxes is also included in the performance comparison. Both static and dynamic hierarchical rules and Rule A are tested for their performance under breadth first and mixed breadth first/depth first tree search strategies. Numerical experiments are conducted on a set of nonconvex benchmark MINLP instances. The results show that the performance of the proposed dynamic hierarchical rule combined with the mixed tree search strategy is comparable to that of well-known global optimization software in terms of solution quality and convergence times. Within the limits of these experiments, we conclude that the IPM can complement available solvers in dealing with the MINLP.

Acknowledgements

This research has been partially funded by TUBITAK under project number 113M477.

References

Abhishek, K., Leyffer, S., and Linderoth, J.T. (2010) 'FilMINT: An Outer-Approximation-Based Solver for Nonlinear Mixed Integer Programs.' INFORMS Journal on Computing 22: 555-567.
Adjiman, C.S., Dallwig, S., Floudas, C.A., and Neumaier, A. (1998) 'A Global Optimization Method, alpha BB, for General Twice-Differentiable Constrained NLPs - I. Theoretical Advances.' Computers & Chemical Engineering 22 (9): 1137-1158.
Alefeld, G., and Herzberger, J. (1983) Introduction to Interval Computations. New York: Academic Press.
Allevi, E., Bertocchi, M.I., Vespucci, M.T., and Innorta, M. (2007) 'A Mixed Integer Nonlinear Optimization Model for Gas Sale Company.' Optimization Letters 1 (1): 61-69.
Baliban, R.C., Elia, J.A., Misener, R., and Floudas, C.A. (2012) 'Global Optimization of a MINLP Process Synthesis Model for Thermochemical Based Conversion of Hybrid Coal, Biomass, and Natural Gas to Liquid Fuels.' Computers & Chemical Engineering 42: 64-86.
Belotti, P., Lee, J., Liberti, L., Margot, F., and Wachter, A. (2009) 'Branching and Bounds Tightening Techniques for Non-convex MINLP.' Optimization Methods and Software 24 (4-5): 597-634.
Bonami, P., Biegler, L.T., Conn, A.R., Cornuéjols, G., Grossmann, I.E., Laird, C.D., Lee, J., Lodi, A., Margot, F., and Sawaya, N. (2008) 'An Algorithmic Framework for Convex Mixed Integer Nonlinear Programs.' Discrete Optimization 5 (2): 186-204.
Bragalli, C., D'Ambrosio, C., Lee, J., Lodi, A., and Toth, P. (2012) 'On the Optimal Design of Water Distribution Networks: a Practical MINLP Approach.' Optimization and Engineering 13: 219-246.
Branzei, R., Tijs, S.H., and Alparslan-Gök, S.Z. (2010) 'How to Handle Interval Solutions for Cooperative Interval Games.' International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 18 (2): 123-132.

Burer, S., and Letchford, A.N. (2012) 'Non-convex Mixed-Integer Nonlinear Programming: A Survey.' Surveys in Operations Research and Management Science 17 (2): 97-106.
Carrizosa, E., Hansen, P., and Messine, F. (2004) 'Improving Interval Analysis Bounds by Translations.' Journal of Global Optimization 29: 157-172.
Csendes, T., and Ratz, D. (1997) 'Subdivision Direction Selection in Interval Methods for Global Optimization.' SIAM Journal on Numerical Analysis 34: 922-938.
Duran, M.A., and Grossmann, I.E. (1986) 'An Outer-approximation Algorithm for a Class of Mixed-integer Nonlinear Programs.' Mathematical Programming 36 (3): 307-339.
Floudas, C.A., and Visweswaran, V. (1990) 'A Global Optimization Algorithm for Certain Classes of Nonconvex NLPs - I. Theory.' Computers & Chemical Engineering 14: 1397-1417.
Floudas, C.A. (2009) 'Mixed Integer Nonlinear Programming.' In Encyclopedia of Optimization, second edition, edited by C.A. Floudas and P.M. Pardalos, 2234-2247. New York: Springer.
Fügenschuh, A., Homfeld, H., Schülldorf, H., and Vigerske, S. (2010) 'Mixed-integer Nonlinear Problems in Transportation Applications.' In Proceedings of the 2nd International Conference on Engineering Optimization, edited by Rodrigues, H., et al.
Geoffrion, A.M. (1972) 'Generalized Benders Decomposition.' Journal of Optimization Theory and Applications 10 (4): 237-260.
Gupta, O.K., and Ravindran, V. (1985) 'Branch and Bound Experiments in Convex Nonlinear Integer Programming.' Management Science 31 (12): 1533-1546.
Hansen, E.R. (1992) Global Optimization Using Interval Analysis. New York: Marcel Dekker.
Hemmecke, R., Köppe, M., Lee, J., and Weismantel, R. (2010) 'Nonlinear Integer Programming.' In 50 Years of Integer Programming 1958-2008, edited by M. Jünger, T.M. Liebling, D. Naddef, G.L. Nemhauser, W.R. Pulleyblank, G. Reinelt, G. Rinaldi, and L.A. Wolsey, 561-618. Berlin: Springer.
Kearfott, R.B. (2003) 'An Overview of the GlobSol Package for Verified Global Optimization.' Talk given for the Department of Computing and Software, McMaster University.
Kearfott, R.B. (2005) 'Improved and Simplified Validation of Feasible Points: Inequality and Equality Constrained Problems.' Mathematical Programming.
King, J.R. (1980) 'Machine Component Group Formation in Group Technology.' OMEGA 8: 193-199.
Leyffer, S., Linderoth, J.T., Luedtke, J., Miller, A., and Munson, T. (2009) 'Applications and Algorithms for Mixed Integer Nonlinear Programming.' Journal of Physics: Conference Series 180.
Liberti, L. (2008) 'Reformulation Techniques in Mathematical Programming: Definitions.' In Proceedings of the 7th Cologne-Twente Workshop on Graphs and Combinatorial Optimization, edited by R. Aringhieri, R. Cordone and G. Righini. Crema.
Markót, M.C., Fernandez, J., Casado, L.G., and Csendes, T. (2006) 'New Interval Methods for Constrained Global Optimization.' Mathematical Programming 106: 287-318.
Messine, F. (2004) 'Deterministic Global Optimization Using Interval Constraint Propagation Techniques.' RAIRO Operations Research 38: 277-294.
Misener, R., and Floudas, C.A. (2009) 'Advances for the Pooling Problem: Modeling, Global Optimization, and Computational Studies (Survey).' Applied and Computational Mathematics 8 (1): 3-22.
Özmen, A., Weber, G.W., Çavuşoğlu, Z., and Defterli, Ö. (2013) 'The New Robust Conic GPLM Method with an Application to Finance: Prediction of Credit Default.' Journal of Global Optimization 56 (2): 233-249.
Pedamallu, C.S., Ozdamar, L., Csendes, T., and Vinkó, T. (2008) 'Efficient Interval Partitioning for Global Optimisation.' Journal of Global Optimization 42: 369-384.
Quesada, I., and Grossmann, I.E. (1992) 'An LP/NLP Based Branch and Bound Algorithm for Convex MINLP Optimization Problems.' Computers & Chemical Engineering 16 (10-11): 937-947.
Ratschek, H., and Rokne, J. (1988) New Computer Methods for Global Optimization. Chichester: Ellis Horwood.
Ryoo, H.S., and Sahinidis, N.V. (1995) 'Global Optimization of Nonconvex NLPs and MINLPs with Applications in Process Design.' Computers & Chemical Engineering 19: 551-566.
Ryoo, H.S., and Sahinidis, N.V. (1996) 'A Branch-and-reduce Approach to Global Optimization.' Journal of Global Optimization 8 (2): 107-139.
Sahinidis, N.V. (1992) 'Branch and Bound Experiments in Global Optimization.' AIChE Annual Meeting, Miami Beach, Florida.
Shectman, J.P., and Sahinidis, N.V. (1998) 'A Finite Algorithm for Global Optimization of Separable Concave Programs.' Journal of Global Optimization 12: 1-36.
Sherali, H.D., and Wang, H. (2001) 'Global Optimization of Nonconvex Factorable Programming Problems.' Mathematical Programming 89: 459-478.
Smith, E.M.B., and Pantelides, C.C. (1997) 'Global Optimisation of Non-convex MINLPs.' Computers & Chemical Engineering 21: S791.
Stubbs, R.A., and Mehrotra, S. (1999) 'A Branch-and-cut Method for 0-1 Mixed Convex Programming.' Mathematical Programming 86 (3): 515-532.
Tawarmalani, M., and Sahinidis, N.V. (2004) 'Global Optimization of Mixed-integer Nonlinear Programs: A Theoretical and Computational Study.' Mathematical Programming 99 (3): 563-591.
Tawarmalani, M., and Sahinidis, N.V. (2005) 'A Polyhedral Branch-and-cut Approach to Global Optimization.' Mathematical Programming 103: 225-249.

Vaidyanathan, R., and El-Halgawi, M. (1996) ‘Global Optimization of Nonconvex MINLPs by Interval Analysis.’ In Global Optimization in Engineering Design, edited by I.E. Grossmann, 175-193. Dordrecht: Kluwer Academic Publishers. Weber, G.W., Alparslan-Gök, S.Z., Söyler, B. (2009) ‘A New Mathematical Approach in Environmental and Life Sciences: Gene-environment Networks and Their Dynamics.’ Environmental Modelling and Assessment Volume 14, Number 2 pp. 267-288 Westerlund, T., and Pettersson, F. (1995) ‘A Cutting Plane Method for Solving Convex MINLP Problems.’ Computers & Chemical Engineering 19: S131-S136. You, F., and Leyffer, S. (2011) ‘Mixed-integer Dynamic Optimization for Oil-spill Response Planning with Integration of a Dynamic Oil Weathering Model.’ AIChE Journal 57 (12): 3555-3564.

20

| No. | Problem Name | # of Vars. | # of Integer Vars. | # of Constraints | BARON Cpu | BARON Abs. % Dev. | COUENNE Cpu | COUENNE Abs. % Dev. | KNITRO Cpu | KNITRO Abs. % Dev. |
|-----|--------------|-----------|--------------------|------------------|-----------|-------------------|-------------|---------------------|------------|---------------------|
| 1 | st_e13 | 2 | 1 | 2 | 0.03 | 0.00 | 0.23 | 0.00 | 0.06 | 0.00 |
| 2 | st_e27 | 4 | 2 | 6 | 0.05 | 0.00 | 0.28 | 0.00 | 0.13 | 0.00 |
| 3 | ex1226 | 5 | 3 | 5 | 0.02 | 0.00 | 0.36 | 0.00 | 0.11 | 0.00 |
| 4 | st_e15 | 5 | 3 | 5 | 0.02 | 0.00 | 0.23 | 0.00 | 0.05 | 0.00 |
| 5 | prob02 | 6 | 6 | 8 | 0.03 | 0.00 | 0.23 | 0.00 | 0.09 | 0.00 |
| 6 | ex1225 | 8 | 6 | 10 | 0.06 | 0.00 | 0.34 | 0.00 | 0.07 | 0.00 |
| 7 | trimlon2 | 8 | 8 | 12 | 0.05 | 0.00 | 0.36 | 0.00 | 0.48 | 0.00 |
| 8 | mittelman | 16 | 16 | 7 | 0.14 | 0.00 | 0.62 | 0.00 | 2.83 | 0.00 |
| 9 | spring | 17 | 12 | 8 | 0.39 | 0.00 | 1.03 | 0.00 | 0.83 | 0.00 |
| 10 | batchdes | 19 | 9 | 19 | 0.08 | 0.00 | 0.78 | 0.00 | 0.14 | 0.00 |
| 11 | ex1263a | 24 | 24 | 35 | 0.48 | 0.00 | 6.60 | 0.00 | 1.96 | 0.00 |
| 12 | ex1264a | 24 | 24 | 35 | 0.58 | 0.00 | 3.15 | 0.00 | 1.07 | 12.0 |
| 13 | trimlon4 | 24 | 24 | 24 | 4.71 | 0.00 | 89.64 | 0.00 | 160.49 | 0.00 |
| 14 | ex1265a | 35 | 35 | 44 | 0.47 | 0.00 | 2.17 | 0.00 | 2.73 | 0.00 |
| 15 | batch | 46 | 24 | 73 | 29.31 | 0.00 | 3.18 | 0.00 | 3.57 | 0.00 |
| 16 | tloss | 48 | 48 | 53 | 0.69 | 0.00 | 0.69 | 0.00 | 4.42 | 0.00 |
| 17 | trimlon6 | 48 | 48 | 36 | - | NS | - | NS | - | NS |
| 18 | trimlon7 | 63 | 63 | 42 | - | NS | - | NS | - | NS |
| 19 | ex1264 | 88 | 68 | 55 | 1.61 | 0.00 | 4.71 | 0.00 | - | NS |
| 20 | windfac | 14 | 3 | 13 | - | NS | 0.78 | 0.00 | 0.03 | 194.7 |
| 21 | ex1266a | 48 | 48 | 53 | 0.59 | 0.00 | 3.93 | 0.00 | 3.78 | 0.00 |
| 22 | ex1263 | 92 | 72 | 55 | 1.59 | 0.00 | 5.62 | 0.00 | - | NS |
| 23 | ex1265 | 130 | 100 | 74 | 1.51 | 0.00 | 13.06 | 3.00 | - | NS |
| 24 | trimlon12 | 168 | 168 | 72 | - | NS | - | NS | - | NS |
| 25 | ex1266 | 180 | 138 | 95 | 0.69 | 0.00 | 23.59 | 0.00 | - | NS |
| 26 | trimlon5 | 35 | 35 | 30 | - | NS | - | NS | - | NS |
| 27 | feedtray2 | 87 | 36 | 283 | 6.44 | 0.00 | 5.35 | 0.00 | 124.97 | 0.00 |
| 28 | nuclearvc | 351 | 168 | 317 | - | NS | - | NS | - | NS |
| 29 | nuclearvd | 351 | 168 | 317 | - | NS | - | NS | - | NS |
| 30 | nuclearve | 351 | 168 | 317 | - | NS | - | NS | - | NS |
| 31 | nuclearvf | 351 | 168 | 317 | - | NS | - | NS | - | NS |

# Infeasible solutions: BARON 9, COUENNE 8, KNITRO 12. # Optimal solutions: BARON 22, COUENNE 23, KNITRO 17. (NS: no solution found.)

Table 1. Test instance characteristics and solutions by other optimization packages.
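For readers interpreting the Abs. % Dev. columns: they report a solver's absolute percentage deviation from the best known objective value. The exact normalization is not restated in this excerpt, so the sketch below is only an illustration of the usual definition; the function name `abs_pct_dev` and the convention of dividing by the best known value are assumptions, not the paper's stated formula.

```python
def abs_pct_dev(f_found: float, f_best: float) -> float:
    """Absolute percentage deviation of a solver's objective value
    from the best known value (assumed normalization: divide by |f_best|)."""
    if f_best == 0.0:
        # Degenerate case: no meaningful relative scale, report absolute gap x 100.
        return abs(f_found) * 100.0
    return abs(f_found - f_best) * 100.0 / abs(f_best)

# A solver returning 103.0 against a best known value of 100.0 deviates by 3.0%.
print(abs_pct_dev(103.0, 100.0))
```

Under this convention, a 0.00 entry means the solver matched the best known objective value, while entries such as 12.0 or 194.7 indicate the solution found was that far (in percent) from the best known one.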


| Prob. id | # var | Breadth/HVS Cpu | Abs. % Dev. | Breadth/Rule A Cpu | Abs. % Dev. | Breadth/DHVS Cpu | Abs. % Dev. | Mixed/HVS Cpu | Abs. % Dev. | Mixed/Rule A Cpu | Abs. % Dev. | Mixed/DHVS Cpu | Abs. % Dev. |
|----------|-------|-----------------|-------------|---------------------|-------------|-------------------|-------------|----------------|-------------|-------------------|-------------|-----------------|-------------|
| st_e13 | 2 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| st_e27 | 4 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| ex1226 | 5 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| st_e15 | 5 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| prob02 | 6 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| ex1225 | 8 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| trimlon2 | 8 | 0.01 | 0.00 | 0.02 | 0.00 | 0.02 | 0.00 | 0.01 | 0.00 | 0.02 | 0.00 | 0.02 | 0.00 |
| mittelman | 16 | 0.03 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| spring | 17 | 0.02 | 0.00 | 0.02 | 0.00 | 0.02 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 |
| batchdes | 19 | 0.01 | 0.00 | 0.35 | 0.00 | 0.00 | 0.00 | 0.11 | 0.00 | 0.12 | 0.00 | 0.10 | 0.00 |
| ex1263a | 24 | 0.59 | 1.53 | 0.32 | 1.53 | 4.89 | 1.53 | 0.84 | 0.00 | 1.18 | 0.00 | 4.06 | 0.00 |
| ex1264a | 24 | 0.49 | 3.49 | 1.46 | 3.49 | 2.50 | 3.49 | 7.47 | 0.00 | 5.23 | 3.49 | 4.76 | 0.00 |
| trimlon4 | 24 | 8.81 | 0.00 | 7.69 | 0.00 | 7.69 | 0.00 | 2.59 | 0.00 | 2.93 | 0.00 | 2.93 | 0.00 |
| ex1265a | 35 | 0.86 | 0.00 | 10.2 | 0.00 | 9.94 | 0.00 | 1.08 | 0.00 | 11.7 | 0.00 | 2.15 | 0.00 |
| batch | 46 | 0.12 | 0.00 | 0.50 | 0.00 | 0.02 | 0.00 | - | NS | 0.02 | 0.00 | 0.02 | 0.00 |
| tloss | 48 | 5.28 | 0.00 | 6.97 | 0.00 | 8.53 | 0.00 | 1.74 | 0.00 | 5.86 | 0.00 | 4.00 | 0.00 |
| trimlon6 | 48 | 37.4 | 0.65 | - | NS | - | NS | 21.93 | 2.61 | - | NS | - | NS |
| trimlon7 | 63 | - | NS | - | NS | - | NS | - | NS | - | NS | - | NS |
| ex1264 | 88 | 9.02 | 0.00 | 16.4 | 0.00 | 8.89 | 0.00 | 2.67 | 0.00 | 2.23 | 0.00 | 2.27 | 0.00 |
| windfac | 14 | - | NS | - | NS | - | NS | - | NS | - | NS | - | NS |
| ex1266a | 48 | 8.01 | 0.00 | 1.88 | 0.00 | 20.0 | 0.00 | 1.61 | 0.00 | 8.95 | 0.00 | 1.57 | 0.00 |
| ex1263 | 92 | 1.58 | 147.9 | - | NS | - | NS | 10.06 | 54.59 | - | NS | - | NS |
| ex1265 | 130 | 7.29 | 0.00 | 22.1 | 9.71 | - | NS | 2.31 | 0.00 | 10.4 | 11.65 | 47.0 | 0.00 |
| trimlon12 | 168 | - | NS | - | NS | - | NS | - | NS | - | NS | - | NS |
| ex1266 | 180 | - | NS | - | NS | - | NS | - | NS | - | NS | - | NS |
| trimlon5 | 35 | 6.52 | 0.97 | 25.7 | 19.42 | 26.6 | 3.88 | 6.71 | 0.00 | 7.67 | 2.91 | 7.67 | 2.91 |
| feedtray2 | 87 | - | NS | - | NS | 68.5 | 0.00 | 403 | 0.4 | 0.36 | 0.00 | 236.7 | 0.00 |
| nuclearvc | 351 | - | NS | 49.0 | 0.13 | - | NS | - | NS | - | NS | 30.0 | 0.00 |
| nuclearvd | 351 | - | NS | - | NS | 10.6 | 2.49 | - | NS | 21.92 | 2.50 | 99.3 | 2.49 |
| nuclearve | 351 | - | NS | 74.9 | 1.02 | - | NS | - | NS | - | NS | 146.0 | 0.81 |
| nuclearvf | 351 | - | NS | 98.0 | 0.11 | - | NS | - | NS | 112.0 | 1.03 | 50.8 | 0.00 |

# Infeasible solutions: Breadth/HVS 9, Breadth/Rule A 8, Breadth/DHVS 10, Mixed/HVS 8, Mixed/Rule A 8, Mixed/DHVS 6. # Optimal solutions: Breadth/HVS 17, Breadth/Rule A 16, Breadth/DHVS 17, Mixed/HVS 19, Mixed/Rule A 18, Mixed/DHVS 22. (NS: no solution found.)

Table 2. Results for six IPM versions.


| Method | Average Cpu | Maximum Cpu | Average Abs. % Dev. | Maximum Abs. % Dev. |
|----------------|-------------|-------------|----------------------|----------------------|
| BARON | 2.32 | 29.31 | 0.00 | 0.00 |
| COUENNE | 6.87 | 89.64 | 0.00 | 0.00 |
| KNITRO | 11.19 | 160.49 | 0.01 | 12.0 |
| Breadth/HVS | 1.01 | 8.81 | 0.31 | 3.49 |
| Breadth/Rule A | 1.72 | 10.22 | 0.31 | 3.49 |
| Breadth/DHVS | 2.10 | 9.94 | 0.31 | 3.49 |
| Mixed/HVS | 0.87 | 7.47 | 0.00 | 0.00 |
| Mixed/Rule A | 1.69 | 11.73 | 0.22 | 3.49 |
| Mixed/DHVS | 1.13 | 4.76 | 0.00 | 0.00 |

Table 3. Summary of results for the first 16 test instances, where all methods solved each instance.

