The Knapsack Sharing Problem: An Exact Algorithm

Mhand Hifi∗†‡ and Slim Sadfi∗§

∗ CERMSEM, Maison des Sciences Economiques, Université de Paris 1 Panthéon-Sorbonne, 106-112 Boulevard de l'Hôpital, 75647 Paris Cedex 13, France, e-mail: [email protected]
† PRiSM, Université de Versailles St-Quentin-en-Yvelines, 45 av. des Etats-Unis, 78035 Versailles Cedex, France.
‡ Author for correspondence.
§ LRI, URA 410 CNRS, Université de Paris XI, Centre d'Orsay, 91405 Orsay, France, e-mail: [email protected]
Abstract. In this paper, we propose an exact algorithm for the knapsack sharing problem. The proposed algorithm is quite efficient in the sense that it quickly solves some large problem instances. The problem is decomposed into a series of single constraint knapsack problems, and by applying dynamic programming together with a supplementary strategy, we solve the original problem optimally. The performance of the exact algorithm is evaluated on a set of medium and large problem instances (a total of 240 problem instances). The algorithm is also parallelizable, which is one of its important features.

Keywords: combinatorial optimization, knapsack sharing, dynamic programming, sequential algorithms.
1 Introduction

In the single constraint knapsack problem (SK), we are given a knapsack of fixed capacity and
a set of objects, each of which has an associated size (weight) and profit. Generally, we distinguish two versions of the problem: the unbounded and the bounded SK problem. Both may be formulated as follows. The capacity of the knapsack is represented by c, into which we may put n types of objects. Each object of type j has a weight wj and a profit pj (wj, pj, n and c are all nonnegative integers), and we have a bounded number, say dj, of copies of each item; for the unbounded version, each dj is a nonnegative integer equal to ⌊c/wj⌋. We must determine, for j = 1, ..., n, the number of times xj the j-th item is chosen, so as to maximize the total profit without exceeding the capacity c and the bounds dj, i.e.,

    max { Σ_{j=1}^{n} pj xj | Σ_{j=1}^{n} wj xj ≤ c,  0 ≤ xj ≤ dj, xj integer, j = 1, 2, ..., n }        (1)
The SK problem (equation 1) is a classic NP-hard combinatorial optimization problem with a wide range of applications (Gilmore and Gomory, 1966; Gilmore, 1979; Hifi and Zissimopoulos, 1996; Hifi and Ouafi, 1997; Martello and Toth, 1990; Morabito and Arenales, 1995). The SK problem frequently arises as a subproblem in other combinatorial optimization problems. One of its most famous applications is the cutting stock problem, on which a great deal of work has been done since the early 1960's (Gilmore and Gomory, 1966; Gilmore, 1979) and where knapsack problems can account for about 60-80% of the total computing time (Hifi and Ouafi, 1997; Valério de Carvalho and Rodrigues, 1995).

In this paper, we study a variant of SK problems, called the Knapsack Sharing Problem (KSP). This problem has a wide range of commercial applications (see Brown, 1979; Tang, 1988). We consider the binary KSP. In this variant of the problem, the set of objects is defined by the set N = {1, ..., j, ..., n}, where each object j is associated with a profit pj and a weight wj, as for the SK problem. Furthermore, the set N is composed of m different classes of objects, i.e. for each couple (p, q), p ≠ q, p ≤ m and q ≤ m, we have Jp ∩ Jq = ∅ and ∪_{i=1}^{m} Ji = N. Moreover, given a knapsack of capacity c, we wish to determine a subset of objects to be included in the knapsack. The KSP amounts to maximizing the minimal value of a set of linear functions subject to a single linear constraint. Indeed, define xj = 1 if object j is in the solution set, and xj = 0 otherwise. Then the KSP may be formulated as follows:
    (KSP)   Maximize    min_{1≤i≤m} Σ_{j∈Ji} pj xj
            Subject to  Σ_{j∈N} wj xj ≤ c
                        xj ∈ {0, 1}, for j = 1, ..., n.
As in the SK problem, we may assume without loss of generality that wj, pj and c are positive integers, and that Σ_{j∈N} wj > c. In what follows, we consider that the classes are indexed from J1 to Jm and that the elements of each class Ji, i = 1, ..., m, are indexed from 1 to |Ji|. The so-defined problem is NP-hard, since it generalizes the famous single constraint knapsack problem (Martello and Toth, 1990, 1997), obtained by considering m = 1. The KSP is classified as KSP(Bn/m/1) (see Yamada and Futakawa, 1997), which means that we have n objects of binary (B) type divided into m classes, with one constraint.

The paper is organized as follows. First (Section 2), we present a brief review of some sequential exact and approximate algorithms for the (general) knapsack problem. In Section 3, we describe the main steps of the exact algorithm, which is composed of two main phases. The first one (Section 3.1) consists of decomposing the original KSP into a series of subproblems. Then, each of these subproblems (Section 3.2) is solved by applying a classical dynamic programming procedure (see Gilmore and Gomory, 1966; Martello and Toth, 1990) and by using a supplementary strategy. This strategy is applied in order to ensure optimality and to reduce the computing time. The main steps of the exact algorithm (called EKSP) are presented in Section 3.3. In the same section, we prove the validity of the algorithm. Finally (Section 4), different instances with different sizes and densities are generated, and benchmark results on these are given using the proposed exact algorithm. We then discuss some interesting perspectives of the proposed approach.
2 Sequential algorithms for knapsack problems

For the single constraint knapsack problem (see Martello and Toth, 1990, 1997; Syslo et al., 1983), depending on the framework of the application giving rise to the particular problem, and also on the computational resources of the solvers, a large variety of solution methods, exact and approximate, have been devised. This problem has been solved exactly (and approximately) by dynamic programming, by the use of tree search procedures, and by other approaches. A good review of the single constraint knapsack problem and its associated approaches can be found in (Gilmore and Gomory, 1966; Martello and Toth, 1990). Previous works have also developed some approaches for the general case, i.e. when the number of constraints is not limited to a single one. This problem is called the multidimensional (or multiconstraint) knapsack problem, and a detailed review of this problem is given in (Chu and Beasley, 1997). When the number of constraints is limited to two knapsack constraints, the problem is referred to as the bidimensional knapsack problem (see Freville and Plateau, 1997).

Another problem, namely the max-min allocation problem, has been studied by many authors (see Brown, 1991; Kuno et al., 1991; Luss, 1992; Pang and Yu, 1989; Tang, 1988). Different exact and approximate approaches have been tailored especially for this problem. For the particular continuous KSP, the authors of (Kuno et al., 1991) proposed a linear time solution algorithm. Another algorithm has been proposed in (Yamada and Futakawa, 1997) for the KSP(Cn/m/1). The same authors have extended their approach in order to produce a heuristic algorithm for the KSP(Bn/m/1). Their heuristic algorithm was tested on a set of randomly generated test problems with different sizes and densities, especially for the case m = 2. The authors also extended their approach for solving the case m > 2. The computational results showed that the approach performed well, in the sense that it produced high-quality solutions whilst requiring only a modest amount of computational effort (on a DECstation 3100).

In this paper, we present an algorithm for solving the KSP exactly. We try to solve it principally by applying dynamic programming techniques. Our computational experiments were executed on an IBM SP2, 120 MHz with 256 MB of RAM.
3 An exact algorithm for the KSP

We will first describe the decomposition of the original problem into a series of (bounded or binary) SK problems. Then, we briefly discuss the classical function used when dynamic programming is applied. Finally, we give the algorithmic outline for solving the KSP.
3.1 Decomposition of the KSP

Before we develop an algorithm for the KSP, let us consider the following problems, obtained by setting c̄i ≤ c, ∀i ∈ {1, ..., m}, where c̄i is a nonnegative integer:
    (SK_{Ji}^{c̄i})   max   Σ_{j∈Ji} pj xj
                     s.t.  Σ_{j∈Ji} wj xj ≤ c̄i
                           xj ∈ {0, 1}, for j ∈ Ji
Each SK_{Ji}^{c̄i} problem is associated with a specified class Ji, i = 1, ..., m, and with capacity c̄i. Notice that each SK_{Ji}^{c̄i} represents exactly an SK problem of the form (1). In what follows, we show that each SK_{Ji}^{c̄i}, i = 1, ..., m, can serve as an auxiliary problem for the KSP; by applying dynamic programming techniques to these subproblems, we produce an optimal solution (value) to the KSP. We adopt the following notations:

S(P): represents a feasible solution of the problem P, with value VS(P).
Opt(P): denotes an optimal solution of the problem P, with value VO(P).

Lemma 1. The optimal solution of the KSP realizes the following equality:
    Opt(KSP) = (S(SK_{J1}^{c̄1}), ..., S(SK_{Jm}^{c̄m}))  with value  VO(KSP) = min{VS(SK_{J1}^{c̄1}), ..., VS(SK_{Jm}^{c̄m})},

for some nonnegative vector c̄ = (c̄1, ..., c̄m) with Σ_{i=1}^{m} c̄i ≤ c.
Proof. Since the set N is taken in increasing order of indexes, the optimal solution Opt(KSP) of an instance of KSP can be written as follows:

    Opt(KSP) = (x^1_1, ..., x^1_{|J1|}, ..., x^m_1, ..., x^m_{|Jm|}),

where x^i_j denotes the j-th component of the class i, for j = 1, ..., |Ji| and i = 1, ..., m. Now, consider the vector c̄ = (c̄1, ..., c̄m) of nonnegative integers such that c̄k = Σ_{j=1}^{|Jk|} w^k_j x^k_j, k = 1, ..., m, where w^k_j is the weight of the j-th component of the class k. Clearly, the segment (x^k_1, ..., x^k_{|Jk|}) represents a feasible solution to SK_{Jk}^{c̄k} and so, Opt(KSP) = (S(SK_{J1}^{c̄1}), ..., S(SK_{Jm}^{c̄m})), with value min{VS(SK_{J1}^{c̄1}), ..., VS(SK_{Jm}^{c̄m})}. ✷
Theorem. There exists an optimal solution Opt(KSP) which can be represented as follows:

    Opt(KSP) = (Opt(SK_{J1}^{c̄1}), ..., Opt(SK_{Jm}^{c̄m})),  with value  VO(KSP) = min{VO(SK_{J1}^{c̄1}), ..., VO(SK_{Jm}^{c̄m})},

for some nonnegative vector c̄ = (c̄1, ..., c̄m) with Σ_{i=1}^{m} c̄i = c.
Proof. According to Lemma 1, Opt(KSP) can be written as follows:

    Opt(KSP) = (S(SK_{J1}^{c̄1}), ..., S(SK_{Jm}^{c̄m})).

Suppose that the max-min value is attained for the class with index α, α ∈ {1, ..., m}. Therefore, we have the following inequality:

    VS(SK_{Jα}^{c̄α}) ≤ VS(SK_{Ji}^{c̄i}), ∀i ∈ {1, ..., m}.        (2)

Let Sol be a solution represented as Sol = (Opt(SK_{J1}^{c̄1}), ..., Opt(SK_{Jm}^{c̄m})). This solution is feasible, since Σ_{i=1}^{m} c̄i ≤ c. According to the optimality condition (the dynamic programming) on each single knapsack, we have

    VS(SK_{Ji}^{c̄i}) ≤ VO(SK_{Ji}^{c̄i}), ∀i ∈ {1, ..., m}.        (3)

From (2) and (3), we have

    VS(SK_{Jα}^{c̄α}) ≤ VO(SK_{Ji}^{c̄i}), ∀i ∈ {1, ..., m}.        (4)

Suppose that β is the index of the class for which the minimum is attained for the solution Sol. From (4), we have VS(SK_{Jα}^{c̄α}) ≤ VO(SK_{Jβ}^{c̄β}). Since VS(SK_{Jα}^{c̄α}) represents the optimal solution value, we necessarily have VS(SK_{Jα}^{c̄α}) = VO(SK_{Jβ}^{c̄β}), and so Sol = (Opt(SK_{J1}^{c̄1}), ..., Opt(SK_{Jm}^{c̄m})) represents an optimal solution when the minimum is attained for the class β, i.e.

    VO(SK_{Jβ}^{c̄β}) ≤ VO(SK_{Ji}^{c̄i}), ∀i ∈ {1, ..., m}.        (5)

Now, assume that Σ_{i=1}^{m} c̄i < c, and let (c̄′1, c̄′2, ..., c̄′m) be a combination realizing

    Σ_{i=1}^{m} c̄′i = c  and  c̄′i ≥ c̄i, ∀i ∈ {1, ..., m}        (6)

(there always exists an index k such that c̄′k > c̄k, since we have supposed that Σ_{i=1}^{m} c̄i < c).

Consider another solution Sol′ = (Opt(SK_{J1}^{c̄′1}), ..., Opt(SK_{Jm}^{c̄′m})). It is clear that Sol′ is feasible since Σ_{i=1}^{m} c̄′i = c. Hence, according to equation (6), we have:

    VO(SK_{Ji}^{c̄i}) ≤ VO(SK_{Ji}^{c̄′i}), ∀i ∈ {1, ..., m}        (7)

and from (5) and (7), we have:

    VO(SK_{Jβ}^{c̄β}) ≤ VO(SK_{Ji}^{c̄′i}), ∀i ∈ {1, ..., m}.        (8)

Suppose that γ is the index of the class for which the minimum is attained for the solution Sol′. From (8), we have VO(SK_{Jβ}^{c̄β}) ≤ VO(SK_{Jγ}^{c̄′γ}). Since VO(SK_{Jβ}^{c̄β}) represents the optimal solution value, we necessarily have VO(SK_{Jβ}^{c̄β}) = VO(SK_{Jγ}^{c̄′γ}), and so Sol′ = (Opt(SK_{J1}^{c̄′1}), ..., Opt(SK_{Jm}^{c̄′m})) is an optimal solution when the minimum is attained for the class γ. This implies that Sol′ is an optimal solution and Σ_{i=1}^{m} c̄′i = c ✷
Recall that the proposed algorithm is principally based on the above theorem (the optimal solution is not unique). It can be summarized by the following two steps (which are detailed in the following sections):

1. decompose the original KSP into a series of SK problems, and

2. at each step (a level) of the algorithm, choose a capacity (called the current capacity of the (sub)problem) and use a dynamic programming procedure to evaluate the objective value at the current level.
3.2 The classical dynamic programming procedure for the single knapsack

In this part we present a brief description of the dynamic programming procedure which we have used. The main principle of dynamic programming is based upon the optimality principle: let P′ be a subproblem of the original problem P; then, the optimal solution value VO(P′) is available if the optimal solution value VO(P) of the original problem is known. The recursion described below is the key procedure of the proposed algorithm. The basic recurrence for the forward phase is standard and derived from the principle of optimality. With each knapsack problem SK_{Ji}^{c̄i}, i = 1, ..., m, is associated the knapsack function f, which may be described as follows:

    f_{yi}(g) = max{ f_{yi−1}(g), f_{yi−1}(g − w_{yi}) + p_{yi} }        (9)

where 0 ≤ g ≤ c̄, 0 ≤ yi ≤ |Ji|, f_{yi}(k) = −∞ if k < 0, and f_0(g) = 0. In our study, we consider that the parameters i, Ji and c̄ of equation (9) represent the input data of the knapsack SK_{Ji}^{c̄i}. So, with each fixed capacity g = c̄i (where c̄i is the current capacity associated with the i-th class) is associated a procedure called Knapsack(i, Ji, g), which means that we only solve the problem at the current point g, by considering that all solution values f_{yi}(x), x = 0, ..., g − 1, are available. Of course, the optimal solution value at the current level g is represented by the value f_{|Ji|}(g), which corresponds to the value of the problem SK_{Ji}^{g}.
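To make this column-by-column use of recursion (9) concrete, we sketch below a possible C implementation (C being the language of the implementation reported in Section 4, although this particular code is only illustrative). The whole table f_{y}(x) is kept in memory, and each call evaluates only the new capacity point g; the identifier names (knapsack_column), the array bounds and the demo data are assumptions made for the example, not part of the original procedure.

/* A minimal sketch of the Knapsack(i, J_i, g) procedure: fill the new
 * capacity column g of the knapsack function, assuming the columns
 * x = 0, ..., g-1 have already been computed by previous calls.      */
#include <stdio.h>

#define NMAX 8                      /* maximum class size          */
#define CMAX 64                     /* maximum capacity considered */

/* f[y][g] = best profit using the first y objects with capacity g;
 * static initialization gives f_0(g) = 0 for all g.                */
static int f[NMAX + 1][CMAX + 1];

static int knapsack_column(int n, const int w[], const int p[], int g)
{
    for (int y = 1; y <= n; y++) {
        int keep = f[y - 1][g];               /* object y rejected   */
        int take = -1;                        /* stands for -infinity */
        if (g >= w[y - 1])                    /* object y accepted   */
            take = f[y - 1][g - w[y - 1]] + p[y - 1];
        f[y][g] = keep > take ? keep : take;  /* recursion (9)       */
    }
    return f[n][g];                 /* value of the problem at level g */
}

int main(void)
{
    const int w[] = {2, 3, 4}, p[] = {3, 4, 5};   /* a 3-object class */
    for (int g = 0; g <= 6; g++)                  /* one call per point */
        printf("f_3(%d) = %d\n", g, knapsack_column(3, w, p, g));
    return 0;
}

Each call costs O(n) once the earlier columns are available, which is precisely the property that the EKSP algorithm of Section 3.3 exploits when it increases one current capacity by a single unit.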
3.3 The exact algorithm for the KSP

Now we present the general algorithm for solving the KSP. The main steps of the proposed algorithm are shown in Fig. 1 (denoted EKSP). The description uses a pseudocode (lines beginning with "✄" are comments and "←" is the assignment). At first (Step 1), the relevant variables are initialized: they are the current capacities c̄i and the solution values VO(SK_{Ji}^{c̄i}), i = 1, ..., m. In Step 2.1, the index which realizes the smallest current solution value over all classes is chosen. Intuitively, this means that we always select the class which realizes the smallest value over all the classes at the current level. This strategy consists in finding the best solution value at the current level; the search space is thus explored only in this direction. Of course, the worst case consists in exploring all points of the search space.
Input: an instance of the KSP.
Output: an optimal solution value VO(KSP).

Step 1. ✄ Initialization.
    ✄ the set of classes is indexed from 1 to m, i.e. J1, J2, ..., Jm;
    ✄ the elements of each class Ji are indexed from 1 to |Ji|.
    set c̄i ← 0 and VO(SK_{Ji}^{c̄i}) ← 0, for i = 1, ..., m.
Step 2. ✄ Main.
    While (Σ_{i=1}^{m} c̄i < c) Do
        1. ✄ select the index-class for which the current minimum value is realized
           Set ℓ ← argmin{VO(SK_{Ji}^{c̄i}), i = 1, ..., m};
        2. ✄ the capacity of the selected class is incremented
           c̄ℓ ← c̄ℓ + 1;
        3. ✄ the dynamic programming procedure is applied on the class ℓ
           Call Knapsack(ℓ, Jℓ, c̄ℓ);
    EndDo
Step 3. Exit with the optimal solution value VO(KSP) = min{VO(SK_{Ji}^{c̄i}), i = 1, ..., m}.

Figure 1: The exact algorithm for the KSP (EKSP).

In Step 2.2 the algorithm increments the current capacity c̄ℓ by one unit, and in Step 2.3 a dynamic programming procedure is applied for evaluating the solution value of SK_{Jℓ}^{c̄ℓ}. In this case, the Knapsack() procedure exits with the optimal solution value of the current level c̄ℓ, i.e., the value f_{|Jℓ|}(c̄ℓ) (denoted also VO(SK_{Jℓ}^{c̄ℓ})). Finally, the algorithm terminates when the condition of the while loop is no longer satisfied (in this case, the EKSP algorithm terminates with Σ_{i=1}^{m} c̄i = c). So, the algorithm stops (Step 3) with the optimal solution value VO(KSP), and the backtracking phase can be called for retrieving the structure of the obtained solution.

We can prove now that the EKSP algorithm produces an optimal solution.

Lemma 2. The EKSP algorithm produces a feasible solution with the same form as the solution given by the above theorem, i.e.

    S(KSP) = (Opt(SK_{J1}^{c̄1}), ..., Opt(SK_{Jm}^{c̄m}))

and with Σ_{i=1}^{m} c̄i = c.
Proof. It is clear that the EKSP algorithm terminates when the condition used in the while loop (Step 2) realizes Σ_{i=1}^{m} c̄i = c. Moreover, after each incrementation (Step 2.2) of a capacity c̄ℓ, where ℓ ∈ {1, ..., m}, the EKSP algorithm applies the Knapsack(ℓ, Jℓ, c̄ℓ) procedure in order to evaluate the optimal solution value of SK_{Jℓ}^{c̄ℓ} (Step 2.3). Hence, the EKSP algorithm terminates by optimally solving all single knapsack problems SK_{Ji}^{c̄i}, i = 1, ..., m ✷

Lemma 3. Let k be the index of the class for which the minimum value is attained when the EKSP algorithm terminates. Then we have

    ∀i ∈ {1, ..., m}, VO(SK_{Ji}^{c̄i − 1}) ≤ VO(SK_{Jk}^{c̄k})        (10)
Proof. We prove the lemma by contradiction. Suppose that

    ∃ p ∈ {1, ..., m} such that VO(SK_{Jp}^{c̄p − 1}) > VO(SK_{Jk}^{c̄k}).

This means that at Step 2.1 of the EKSP algorithm, we must select the k-th class with the capacity c̄k before the p-th class corresponding to the capacity c̄p − 1. In order to select the class corresponding to the index p (with the capacity c̄p − 1), the EKSP algorithm would be forced to increase the value of the capacity c̄k as well as the value VO(SK_{Jk}^{c̄k}). This contradicts the fact that VO(SK_{Jk}^{c̄k}) is an optimal solution value and Σ_{i=1}^{m} c̄i = c (because the algorithm terminates) ✷

Proposition. The EKSP algorithm stops after a finite number of iterations. The produced solution represents an optimal solution to the KSP, and the worst-case time complexity of the algorithm is equal to O(nc).

Proof. It is clear that the EKSP algorithm stops after a finite number of iterations, since the complexity of the algorithm is limited by the number of variables n and the capacity c of the knapsack. According to Lemma 2 and Lemma 3, the EKSP algorithm terminates with the following feasible solution

    S(KSP) = (Opt(SK_{J1}^{c̄1}), ..., Opt(SK_{Jm}^{c̄m})),

which verifies the following points:

(i) Σ_{i=1}^{m} c̄i = c;

(ii) ∀i ∈ {1, ..., m}, VO(SK_{Ji}^{c̄i − 1}) ≤ VO(SK_{Jk}^{c̄k}), where k is the index of the class realizing VO(SK_{Jk}^{c̄k}) = min{VO(SK_{J1}^{c̄1}), ..., VO(SK_{Jm}^{c̄m})}.

We are now prepared to prove that S(KSP) (produced by the algorithm) is an optimal solution to the KSP.
According to the above theorem, there exists an optimal solution which can be written as follows:

    Opt′(KSP) = (Opt′(SK_{J1}^{c̄′1}), ..., Opt′(SK_{Jm}^{c̄′m})).

Suppose that the value VO′(KSP) of Opt′(KSP) is greater than the value of S(KSP), and assume that this minimum is attained for the class t, i.e.

    VO′(SK_{Jt}^{c̄′t}) > VO(SK_{Jk}^{c̄k})        (11)

According to the optimality on the class t, we have:

    VO′(SK_{Jk}^{c̄′k}) ≥ VO′(SK_{Jt}^{c̄′t})        (12)

From (11) and (12), we deduce that VO′(SK_{Jk}^{c̄′k}) > VO(SK_{Jk}^{c̄k}), which implies that

    c̄′k > c̄k        (13)

Moreover, we have

    ∀i, VO′(SK_{Ji}^{c̄′i}) ≥ VO′(SK_{Jt}^{c̄′t}) (the optimality condition on t)        (14)

and according to (10) of Lemma 3 and (11), we have

    VO′(SK_{Jt}^{c̄′t}) > VO(SK_{Jk}^{c̄k}) ≥ VO(SK_{Ji}^{c̄i − 1}), ∀i        (15)

From (14) and (15), we have

    ∀i, VO′(SK_{Ji}^{c̄′i}) > VO(SK_{Ji}^{c̄i − 1})  ⟹  ∀i, c̄′i > c̄i − 1  ⟹  ∀i, c̄′i ≥ c̄i        (16)

and from (13) and (16), we can deduce that

    c̄′k + Σ_{i=1, i≠k}^{m} c̄′i > c̄k + Σ_{i=1, i≠k}^{m} c̄i  ⟺  Σ_{i=1}^{m} c̄′i > Σ_{i=1}^{m} c̄i = c.

Hence, Σ_{i=1}^{m} c̄′i > c and therefore such a solution does not exist.

We can now remark that, in the worst case, the algorithm (of Figure 1) runs by applying c steps. At each step:

1. An index ℓ ∈ {1, ..., m} is selected. In the worst case, the m classes are examined in order to choose the selected index.

2. For the current index ℓ, the algorithm calls the procedure Knapsack(ℓ, Jℓ, c̄ℓ). The procedure iterates |Jℓ| times for the point c̄ℓ.

According to the above points (1) and (2), the worst-case time complexity of the algorithm is equal to c(m + max_{1≤ℓ≤m} |Jℓ|). Clearly, the greatest value of |Jℓ| is equal to n − m + 1 (the case in which each class i ≠ ℓ has one element and the class corresponding to the index ℓ contains the rest of the elements). So, in the worst case, the number of iterations of the algorithm is equal to

    c(m + n − m + 1) = c(n + 1).

Hence, the worst-case time complexity of the algorithm is equal to O(cn) ✷
3.4 The EKSP Example

In order to clarify the proof of the above proposition, we take the following KSP instance into account. There are 2 classes of objects (m = 2), with 2 objects in the first class and 3 objects in the second class. Formally, the problem is formulated as follows:

    max   min{ x1 + x2, 3x3 + 3x4 + 3x5 }
    s.t.  x1 + x2 + x3 + x4 + x5 ≤ 4
          xj ∈ {0, 1}, for j = 1, ..., 5

The EKSP algorithm gives an optimal solution value equal to 2 (corresponding to the first class J1 = {1, 2}).
                            Class 1 (J1)        Class 2 (J2)
Iter.   Selected index      c̄1      VO(.)      c̄2      VO(.)      c̄1 + c̄2
0       -                   0       0          0       0          0
1       1                   1       1          0       0          1
2       2                   1       1          1       3          2
3       1                   2       2          1       3          3
4       1                   3       2          1       3          4
Table 1: Execution of the EKSP algorithm on a simple example.

The solution of this problem instance using the EKSP algorithm is illustrated in Table 1. Column 1 gives the successive iterations realized by the EKSP algorithm (Step 2). For each iteration, we report in Column 2 the index ℓ selected by Step 2.1. Steps 2.2 and 2.3 are represented, for each class, by the next pairs of columns: for example, for class J1 we report the (new) value of the current capacity c̄1 (when this class is the selected one) and the optimal solution value of SK_{J1}^{c̄1}. Finally, we report in the last column the stopping criterion, which corresponds to the condition of the while loop in Step 2 of the EKSP algorithm.

The EKSP algorithm starts (Step 1, Fig. 1) from the solution state with all variables set equal to zero (Iter. 0, Column 1). The first iteration (Iter. 1, Column 1) selects the branching class (class J1) using Step 2.1 of the Main step of EKSP. The capacity c̄1 now becomes equal to 1 and the solution value of the class J1 is equal to VO(SK_{J1}^{1}) = 1. The second iteration (Iter. 2, Column 1) selects the class J2, because the minimum solution value is realized for this class, i.e. VO(SK_{J2}^{0}) = min{VO(SK_{J1}^{1}) = 1, VO(SK_{J2}^{0}) = 0} = 0; by incrementing c̄2 by one unit and applying the dynamic programming procedure on SK_{J2}^{1}, we obtain the solution value VO(SK_{J2}^{1}) = 3. The third iteration (Iter. 3, Column 1) selects the class J1, because the minimum value is realized for VO(SK_{J1}^{1}). Hence, the capacity c̄1 is incremented and the solution value corresponding to the pair (J1, c̄1 = 2) is now equal to VO(SK_{J1}^{2}) = 2. The fourth iteration (Iter. 4, Column 1) also selects the first class. In this case, c̄1 becomes equal to 3 and the solution value of SK_{J1}^{3} is equal to 2.
The EKSP algorithm then terminates, since c̄1 + c̄2 = 4, which means that the sum of the capacities has reached c. In this case, the optimal solution value VO(KSP) = 2 is realized for the class J1 = {1, 2}. This optimal solution corresponds to the following single knapsack problems:

    max   x1 + x2                    max   3x3 + 3x4 + 3x5
    s.t.  x1 + x2 ≤ 3                s.t.  x3 + x4 + x5 ≤ 1
          x1, x2 ∈ {0, 1}                  x3, x4, x5 ∈ {0, 1}

Finally, the capacity c = 4 is shared as follows: c̄1 = 3 and c̄2 = 1, producing the optimal solution value VO(KSP) = 2. Now, if we consider (10) (of Lemma 3), we have

    VO(SK_{J2}^{c̄2 − 1 = 0}) = 0 ≤ VO(SK_{J1}^{c̄1 = 3}) = 2  and  VO(SK_{J1}^{c̄1 − 1 = 2}) = 2 ≤ VO(SK_{J1}^{c̄1 = 3}) = 2.
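To complement the trace of Table 1, the following self-contained C sketch runs the EKSP loop of Figure 1 on the instance above, reusing the column-wise dynamic programming of Section 3.2. It is our own illustrative code rather than the original implementation: the identifiers, the data layout and the tie-breaking rule in argmin (smallest index first, which reproduces the selections of Table 1) are assumptions.

/* A sketch of EKSP (Fig. 1) on the instance of this section. */
#include <stdio.h>

#define M 2                          /* number of classes  */
#define C 4                          /* knapsack capacity  */
#define NMAX 3                       /* largest class size */

static const int sz[M] = {2, 3};                        /* |J_1|, |J_2| */
static const int w[M][NMAX] = {{1, 1, 0}, {1, 1, 1}};   /* weights      */
static const int p[M][NMAX] = {{1, 1, 0}, {3, 3, 3}};   /* profits      */

static int f[M][NMAX + 1][C + 1];    /* per-class DP tables, f_0(g) = 0 */

/* Knapsack(i, J_i, g): recursion (9) on the new capacity column g. */
static int knapsack(int i, int g)
{
    for (int y = 1; y <= sz[i]; y++) {
        int keep = f[i][y - 1][g];
        int take = -1;
        if (g >= w[i][y - 1])
            take = f[i][y - 1][g - w[i][y - 1]] + p[i][y - 1];
        f[i][y][g] = keep > take ? keep : take;
    }
    return f[i][sz[i]][g];
}

/* Step 2.1: index of the class with the smallest current value. */
static int argmin(const int val[])
{
    int best = 0;
    for (int i = 1; i < M; i++)
        if (val[i] < val[best]) best = i;
    return best;
}

int main(void)
{
    int cap[M] = {0, 0}, val[M] = {0, 0}, used = 0;   /* Step 1 */

    while (used < C) {                  /* Step 2: while sum(cap) < c */
        int l = argmin(val);            /* Step 2.1                   */
        cap[l]++; used++;               /* Step 2.2                   */
        val[l] = knapsack(l, cap[l]);   /* Step 2.3                   */
    }
    int k = argmin(val);                /* Step 3                     */
    printf("VO(KSP) = %d (class %d), capacities: %d and %d\n",
           val[k], k + 1, cap[0], cap[1]);
    return 0;
}

Compiled and run, the sketch prints VO(KSP) = 2 for class 1 with the capacities 3 and 1, matching the sharing c̄1 = 3, c̄2 = 1 obtained above.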
3.5 Extension of the EKSP algorithm to the integer KSP

Generally, a much more difficult but very interesting task is to conceive a general approach able to solve families of problems instead of individual ones. In this section, we show how the method can be easily adapted for solving other unbounded and bounded knapsack sharing problems. Indeed, the EKSP algorithm can be extended for solving the general KSP, i.e., the problem in which each xj, j = 1, ..., n, is a nonnegative integer. For the general case, we just replace the recursion used by the Knapsack() procedure (equation (9)) by the general function given by equation (17):

    f_{yi}(g) = max{ f_{yi−1}(g), f_{yi}(g − w_{yi}) + p_{yi} }        (17)

where 0 ≤ g ≤ c̄, 1 ≤ yi ≤ |Ji|, f_{yi}(k) = −∞ if k < 0, and f_0(g) = 0. We also remark that the approach can be extended for solving the bounded KSP, by considering lower and upper demand values on the variables xi, i = 1, ..., n, representing the number of occurrences of the i-th object in the solution. Finally, the Knapsack() procedure of the EKSP algorithm now uses the recursion of equation (17) for solving the general KSP exactly. In this paper, our computational results (Section 4) are limited to the binary knapsack sharing problem.
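With respect to the binary recursion, the only change needed in the procedure of Section 3.2 is the row index in the "take" branch: recursion (17) reuses row yi instead of yi − 1, so that an object may be taken repeatedly. A minimal C illustration follows (again our own sketch with illustrative data, not the original code):

/* A sketch of recursion (17) for the unbounded single knapsack. */
#include <stdio.h>

#define NMAX 8
#define CMAX 64

static int f[NMAX + 1][CMAX + 1];   /* f_0(g) = 0 via static init */

static int unbounded_column(int n, const int w[], const int p[], int g)
{
    for (int y = 1; y <= n; y++) {
        int keep = f[y - 1][g];
        int take = -1;                  /* stands for -infinity       */
        if (g >= w[y - 1])              /* row y, not y-1: the object */
            take = f[y][g - w[y - 1]] + p[y - 1];   /* may repeat     */
        f[y][g] = keep > take ? keep : take;        /* recursion (17) */
    }
    return f[n][g];
}

int main(void)
{
    const int w[] = {2, 3, 4}, p[] = {3, 4, 5};
    for (int g = 0; g <= 6; g++)
        printf("f_3(%d) = %d\n", g, unbounded_column(3, w, p, g));
    return 0;
}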
4 Computational results and future works

Our approach was coded in C and tested on an IBM SP2-P2SC (120 MHz with 256 MB of RAM). This section is divided into two parts. First, we present empirical evidence of the performance of the proposed exact algorithm. In the second part, we discuss some directions for future work on the proposed approach.
4.1 Computational results

In this section we evaluate the performance of the proposed exact algorithm. Our computational study was conducted on three test problem sets (a total of 240 instances) of various sizes and densities. To aid further development of algorithms (exact and approximate) for the KSP, we have made these test problems publicly available (ftp://www.univ-paris1.fr/pub/CERMSEM/hifi/KSP).

Inst.     n      m      Inst.     n       m      Inst.     n       m
A02.x     1000   2      B02.x     2500    2      C02.x     5000    2
D02.x     7500   2      E02.x     10000   2      F02.x     20000   2
A05.x     1000   5      B05.x     2500    5      C05.x     5000    5
D05.x     7500   5      E05.x     10000   5      F05.x     20000   5
A10.x     1000   10     B10.x     2500    10     C10.x     5000    10
D10.x     7500   10     E10.x     10000   10     F10.x     20000   10
A20.x     1000   20     B20.x     2500    20     C20.x     5000    20
D20.x     7500   20     E20.x     10000   20     F20.x     20000   20
A30.x     1000   30     B30.x     2500    30     C30.x     5000    30
D30.x     7500   30     E30.x     10000   30     F30.x     20000   30
A40.x     1000   40     B40.x     2500    40     C40.x     5000    40
D40.x     7500   40     E40.x     10000   40     F40.x     20000   40
A50.x     1000   50     B50.x     2500    50     C50.x     5000    50
D50.x     7500   50     E50.x     10000   50     F50.x     20000   50
A02C.x    1000   2      B02C.x    2500    2      C02C.x    5000    2
D02C.x    7500   2      E02C.x    10000   2      F02C.x    20000   2
A05C.x    1000   5      B05C.x    2500    5      C05C.x    5000    5
D05C.x    7500   5      E05C.x    10000   5      F05C.x    20000   5
A10C.x    1000   10     B10C.x    2500    10     C10C.x    5000    10
D10C.x    7500   10     E10C.x    10000   10     F10C.x    20000   10

Table 2: Test problem details: 1 ≤ x ≤ 4.
In (Yamada and Futakawa, 1997), the authors used a scheme for generating the set of random problem instances. Their tests were based upon some "correlated", "weakly-correlated" and "strongly correlated" instances. In fact, these notions matter principally when branch and bound (B&B) procedures are applied: it is well known that "strongly correlated" instances are hard for B&B algorithms. For this reason, we have considered two types of instances: the first one represents 168 "uncorrelated" instances, and the second one contains 72 "strongly correlated" problem instances, which represent the "correlated" ones in our study. On the one hand, the "uncorrelated" instances were generated by applying an ordinary scheme. On the other hand, in order to show the behavior and the effectiveness of the EKSP algorithm on different types of instances, we have also considered 72 "correlated" problem instances.

The "uncorrelated" instances are generated as follows: for each instance, the number m (classes) is taken in the interval [2, 50], the number n (variables) is taken from [1000, 20000], and wj and pj are mutually independent and uniformly taken from [1, 100] and [1, 50], respectively. The capacity c of the KSP is equal to (Σ_{j=1}^{n} wj)/2, and the cardinality of each class Ji, i = 1, ..., m, is in [1, n − m + 1]. These test problem instances are detailed in Table 2.
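For concreteness, a possible C generator following the scheme just described is sketched below; the use of rand() with a fixed seed, the way classes are kept nonempty, and the output format are our own assumptions, not necessarily those of the original generator.

/* A possible generator for the "uncorrelated" instances above.
 * For the "correlated" variant described next, set p = w + 100. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int n = 1000, m = 5;  /* n drawn from [1000, 20000], m from [2, 50] */
    long sum_w = 0;

    srand(12345);               /* fixed seed, for reproducibility */
    printf("%d %d\n", n, m);
    for (int j = 0; j < n; j++) {
        int w = 1 + rand() % 100;            /* w_j uniform in [1, 100] */
        int p = 1 + rand() % 50;             /* p_j uniform in [1, 50]  */
        int cls = (j < m) ? j + 1            /* first m objects keep    */
                          : 1 + rand() % m;  /* every class nonempty    */
        sum_w += w;
        printf("%d %d %d\n", w, p, cls);
    }
    printf("%ld\n", sum_w / 2);  /* capacity c = (sum of w_j) / 2 */
    return 0;
}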
The other 72 instances (the "correlated" ones) are generated like the "uncorrelated" ones, except that the profit pj associated with each object is taken equal to wj + 100, for j = 1, ..., n. These instances are also detailed in the second part of Table 2.

Inst.     Opt.     kopt   c̄kopt    CPU        Inst.     Opt.     kopt   c̄kopt    CPU
A02C.1    41492    1      6392     4.77       B02C.1    103672   2      16272    30.63
A02C.2    41397    1      5897     4.72       B02C.2    103572   1      15272    30.67
A02C.3    41565    2      6565     4.71       B02C.3    104034   2      15834    29.83
A02C.4    41556    1      6756     4.78       B02C.4    104031   2      16131    29.95
A05C.1    16554    2      2354     1.89       B05C.1    41393    3      6493     12.17
A05C.2    16561    5      2761     1.89       B05C.2    41437    1      6437     12.07
A05C.3    16590    4      2590     1.88       B05C.3    41580    2      6680     11.93
A05C.4    16584    2      2484     1.92       B05C.4    41561    4      6461     11.88
A10C.1    8245     7      1245     0.95       B10C.1    20657    4      3257     6.22
A10C.2    8203     8      1203     0.98       B10C.2    20648    10     3548     6.16
A10C.3    8250     4      1350     0.97       B10C.3    20740    6      3240     6.08
A10C.4    8255     2      1255     0.98       B10C.4    20721    7      3021     6.05
C02C.1    207675   2      32375    124.46     D02C.1    312223   2      47623    277.23
C02C.2    207916   2      32816    122.12     D02C.2    311947   1      47447    278.63
C02C.3    208043   2      31943    123.82     D02C.3    311690   1      46990    279.67
C02C.4    207973   2      32773    125.93     D02C.4    311419   2      49219    282.96
C05C.1    83013    1      12913    48.84      D05C.1    124794   2      18394    110.20
C05C.2    83129    3      11929    48.09      D05C.2    124758   3      18558    110.47
C05C.3    83185    1      12585    48.58      D05C.3    124669   1      18969    111.26
C05C.4    83188    2      13288    49.56      D05C.4    124529   1      18929    112.64
C10C.1    41466    7      5966     24.47      D10C.1    62334    7      8634     55.12
C10C.2    41521    2      6621     24.13      D10C.2    62311    6      9911     55.10
C10C.3    41554    6      6454     24.36      D10C.3    62289    6      9489     55.73
C10C.4    41554    2      6554     24.78      D10C.4    62216    4      9916     56.31
E02C.1    416396   2      63496    490.98     F02C.1    830622   1      128122   2103.01
E02C.2    415426   2      65526    495.14     F02C.2    831513   2      128913   2088.17
E02C.3    415470   1      64070    498.00     F02C.3    831145   2      126745   2088.26
E02C.4    415341   2      64341    503.00     F02C.4    831445   1      125845   2084.28
E05C.1    166484   5      24684    196.40     F05C.1    332143   1      53343    808.62
E05C.2    166126   5      25226    200.63     F05C.2    332527   4      51627    794.52
E05C.3    166116   4      25616    200.34     F05C.3    332402   5      51202    794.08
E05C.4    166062   2      26262    200.30     F05C.4    332498   1      50498    796.16
E10C.1    83216    1      12916    97.56      F10C.1    166028   2      25728    403.64
E10C.2    82995    4      12695    99.00      F10C.2    166248   10     26048    398.68
E10C.3    83029    10     12929    99.69      F10C.3    166154   1      26754    399.61
E10C.4    82981    4      12481    99.91      F10C.4    166192   7      25092    398.03

Table 3: Performance of the exact approach on the "correlated" instances.
To allow some kind of comparison, we decompose the "uncorrelated" instances into two groups. The first group includes the instances for which the number of classes varies between 2 and 10, and the second one contains those for which the number of classes is greater than or equal to 20. The "correlated" instances then represent a third group, for which the number of classes varies between 2 and 10 (as for the instances of the first group). Moreover, for each pair (n, m), we have considered four instances (see Table 2). The instances A, B, C, D, E and F represent the "uncorrelated" instances with n equal to 1000, 2500, 5000, 7500, 10000 and 20000 variables, respectively. The "correlated" instances, also shown in Table 2, are denoted TyC.x, with T = A, ..., F, y = 2, 5, 10, and 1 ≤ x ≤ 4.
(a) The "Uncorrelated" instances
CPU time (sec)
1 100,00 900,00 700,00 500,00 300,00 100,00 -100,00 1000
2500
5000
7500
10000
20000
10000
20000
Variables
(b) The "Correlated" instances
CPU time (sec)
1 100,00 900,00 700,00 500,00 300,00 100,00 -100,00 1000
2500
5000
7500
Variables
Figure 2: The behavior of the EKSP on both ”correlated” and ”uncorrelated” instances.
Figures 2.a and 2.b show how the average computing time grows with the size of the instances (with 2, 5 and 10 classes) for the "uncorrelated" and the "correlated" problem instances, respectively. Each abscissa corresponds to the number of variables and each ordinate represents the average execution time over the instances (a total of 12 instances) with the same number of variables n (n ∈ [1000, 20000]). These figures summarize the results of Tables 4 and 3, respectively. As we can see from both figures, the slope of the curve corresponding to the "uncorrelated" instances is equivalent to the slope of the curve corresponding to the "correlated" ones. This confirms that the behavior of the EKSP algorithm is independent of the type of the instance.

We now examine the efficiency of the algorithm on all treated instances. For each instance, we report its optimal solution value Opt, the class kopt for which the capacity c̄kopt is attained, and the CPU time (measured in seconds), which is the time that the approach takes to produce the optimal solution. The results for the first and the second groups are shown in Tables 4 and 5, and those for the third group in Table 3. By referring to Tables 3-5, we observe that:

1. For both types of instances (uncorrelated and correlated), the EKSP algorithm is able to solve efficiently the problem instances with 20000 variables and with a number of classes between 2 and 50.

2. The EKSP algorithm solves some large instances, with n = 20000 and m = 2, within 35 minutes on average, which we think is not excessive for an exact approach.

3. For all treated instances, with m ∈ [2, 50] and 1000 ≤ n ≤ 20000, the average execution time (CPU) varies between 2.53 sec (with n = 1000) and 18.27 min (with n = 20000). We therefore consider the computing time reasonably small.

4. As we can see from Figure 2, it is important to notice the insensitivity of the algorithm to the hard problem instances. Indeed, the average execution times are equivalent for the "uncorrelated" and "correlated" instances (see Tables 4 and 3, respectively).
4.2 Future works

The dominant opinion today is that algorithms for large-scale problems remain, on average, computationally heavy for a serial machine implementation. Computational tests generally indicate that these problems can be truly difficult even for some medium problem sizes. In fact, an interesting challenge is to conceive sequential and parallel algorithms able to solve, approximately and exactly, some very complex problems of operations research and artificial intelligence. Concerning the knapsack sharing problem, we see several important directions of research.

• The first direction of research is to develop another exact approach based upon branch and bound procedures. In this case, we can use a hybrid algorithm by considering (i) the heuristic algorithm proposed by Yamada and Futakawa (1997) for reducing the original problem (fixing some variables at their optimal values), and (ii) an adaptation of Martello and Toth's (1990) approach using an implicit search procedure. We think it would be interesting to see the behavior of the hybrid approach, especially on the large correlated instances.

• Since dynamic programming techniques are used for solving a series of single constraint knapsack problems, the second direction is to apply the parallel approach proposed in (Andonov et al., 1993; Andonov and Rajopadhye, 1997) and in (Chen et al., 1990, 1992), who proposed a pipeline architecture containing a linear array of q processors (where q ≤ n, the number of objects), queues, and memory modules of size c̄i, i = 1, ..., m (the capacity of each problem SK_{Ji}^{c̄i}), for solving the single constraint knapsack problem.

• Another interesting direction of research is to see how we can adapt some existing parallel algorithms, for example the BOB Library (Cung et al., 1997; Le Cun et al., 1995), in order to compare the behavior of this approach to the sequential ones. The method can be realized by considering, for example, that processors are simultaneously attributed to each knapsack SK_{Ji}^{c̄i}, i = 1, ..., m. In this case, the alternation used by the EKSP algorithm can also be applied to the different processors.
Inst.     Opt.     kopt   c̄kopt    CPU        Inst.     Opt.     kopt   c̄kopt    CPU
A02.1     20490    1      6460     4.83       B02.1     50803    2      16331    31.24
A02.2     20419    1      5572     4.65       B02.2     50136    2      16713    29.97
A02.3     20889    2      6261     4.65       B02.3     50873    2      16161    30.22
A02.4     20564    1      6804     4.71       B02.4     52143    2      15969    30.54
A05.1     8071     5      2513     1.99       B05.1     20371    1      6532     12.23
A05.2     7995     2      2907     1.87       B05.2     20349    1      6651     12.05
A05.3     7960     3      2437     1.92       B05.3     20424    3      6234     12.64
A05.4     8115     4      2382     1.92       B05.4     20376    5      6141     12.24
A10.1     4054     7      1229     1.02       B10.1     10047    4      4311     6.37
A10.2     3525     10     4578     0.88       B10.2     9905     5      2700     6.19
A10.3     4087     9      1297     0.93       B10.3     10129    4      3064     6.21
A10.4     4037     7      1426     0.95       B10.4     10360    8      3290     6.31
C02.1     102284   2      31720    122.45     D02.1     153401   1      46191    275.23
C02.2     101565   1      30420    120.79     D02.2     152374   1      45982    275.63
C02.3     101551   2      31960    121.43     D02.3     153455   1      47206    277.71
C02.4     104568   1      31553    121.80     D02.4     155556   2      48861    273.32
C05.1     40564    1      12818    49.18      D05.1     61486    3      17642    111.59
C05.2     40798    3      12447    49.28      D05.2     61000    2      19433    110.25
C05.3     40438    3      12228    49.74      D05.3     60690    2      19272    111.77
C05.4     40449    1      12173    50.86      D05.4     60750    2      19910    113.64
C10.1     20391    10     6748     24.90      D10.1     30574    1      10292    56.22
C10.2     20252    4      6862     24.71      D10.2     30460    4      9384     55.41
C10.3     20213    8      5356     24.82      D10.3     30619    3      9565     56.01
C10.4     20858    7      6252     24.93      D10.4     31029    2      10796    55.62
E02.1     203577   2      65305    485.67     F02.1     409305   1      128532   2103.51
E02.2     204750   1      63576    500.39     F02.2     409818   2      129610   2095.02
E02.3     204462   2      63292    489.35     F02.3     409741   2      127750   2109.32
E02.4     207203   2      64153    493.72     F02.4     406213   2      128985   2095.94
E05.1     81275    1      25065    197.17     F05.1     163514   4      53956    790.04
E05.2     81240    1      26649    199.32     F05.2     162566   1      50488    784.79
E05.3     81956    3      24099    198.70     F05.3     163850   2      51271    795.36
E05.4     81196    2      26667    199.11     F05.4     163202   1      52104    796.66
E10.1     40681    5      12348    97.90      F10.1     81739    2      26189    393.53
E10.2     40828    5      13156    99.61      F10.2     81957    9      25950    395.50
E10.3     40839    7      12896    99.11      F10.3     81917    9      26148    394.67
E10.4     41406    6      11852    98.64      F10.4     81168    2      26898    393.95

Table 4: Performance of the EKSP algorithm on the instances with m ≤ 10 (classes).
5 Conclusion

In this paper, we have considered a variant of the knapsack sharing problem. We have proposed an exact approach based upon classical dynamic programming techniques. It consists of decomposing the original problem into a series of single constraint knapsack problems and solving them by applying a dynamic programming procedure, with some modifications. Empirical evidence for the algorithm has been reported through a number of experiments. These experiments were conducted on 240 problem instances and show that the algorithm is able to solve some large problem instances within reasonable computing time.

Acknowledgments. Many thanks to an anonymous referee for helpful comments that contributed to improving the first version of this paper.
References

R. Andonov, F. Raimbault and P. Quinton, "Dynamic programming parallel implementation for the knapsack problem", Technical Report PI-740, IRISA, Campus de Beaulieu, Rennes, France, 1993.

R. Andonov and S. Rajopadhye, "Optimal orthogonal tiling of 2-D iterations", Journal of Parallel and Distributed Computing, vol. 45, pp. 159-165, 1997.

J. R. Brown, "The knapsack sharing", Operations Research, vol. 27, pp. 341-355, 1979.

J. R. Brown, "Solving knapsack sharing with general tradeoff functions", Mathematical Programming, vol. 51, pp. 55-73, 1991.

G.H. Chen, M.S. Chern and J.H. Jang, "Pipeline architectures for dynamic programming algorithms", Parallel Computing, vol. 13, pp. 111-117, 1990.

G.H. Chen, M.S. Chern and J.H. Jang, "An improved parallel algorithm for 0/1 knapsack problem", Parallel Computing, vol. 18, pp. 811-821, 1992.

P. Chu and J.E. Beasley, "A genetic algorithm for the multidimensional knapsack problem", Working Paper, The Management School, Imperial College, London, England, 1997.

V.-D. Cung, S. Dowaji, B. Le Cun, T. Mautor and C. Roucairol, "Concurrent data structures and load balancing strategies for parallel branch-and-bound/A∗ algorithms", DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 30, pp. 141-161, 1997.

A. Freville and G. Plateau, "The 0-1 bidimensional knapsack problem: toward an efficient high-level primitive tool", Journal of Heuristics, vol. 2, pp. 147-167, 1997.

P.C. Gilmore and R.E. Gomory, "The theory and computation of knapsack functions", Operations Research, vol. 13, pp. 879-919, 1966.

P.C. Gilmore, "Cutting stock, linear programming, knapsacking, dynamic programming and integer programming, some interconnections", Annals of Discrete Mathematics, vol. 4, pp. 217-235, 1979.

M. Hifi and V. Zissimopoulos, "A recursive exact algorithm for weighted two-dimensional cutting", European Journal of Operational Research, vol. 91, pp. 553-564, 1996.

M. Hifi and R. Ouafi, "Best-first search and dynamic programming methods for cutting problems: the cases of one or more stock plates", Computers and Industrial Engineering, vol. 32, pp. 187-205, 1997.

T. Kuno, H. Konno and E. Zemel, "A linear-time algorithm for solving continuous maximum knapsack problems", Operations Research Letters, vol. 10, pp. 23-26, 1991.

B. Le Cun, C. Roucairol and TNN Team, "BOB: a unified platform for implementing branch-and-bound like algorithms", Working Paper, PRiSM, Université de Versailles St-Quentin-en-Yvelines, 78035 Versailles, France, 1995.

H. Luss, "Minmax resource allocation problems: optimization and parametric analysis", European Journal of Operational Research, vol. 60, pp. 76-86, 1992.

S. Martello and P. Toth, Knapsack Problems: Algorithms and Computer Implementations, John Wiley and Sons, 1990.

S. Martello and P. Toth, "Upper bounds and algorithms for hard 0-1 knapsack problems", Operations Research, vol. 45, pp. 768-778, 1997.

R. Morabito and M. Arenales, "Performance of two heuristics for solving large scale two-dimensional guillotine cutting problems", INFOR, vol. 33, pp. 145-155, 1995.

J. S. Pang and C. S. Yu, "A min-max resource allocation problem with substitutions", European Journal of Operational Research, vol. 41, pp. 218-223, 1989.

M. Syslo, N. Deo and J. Kowalik, Discrete Optimization Algorithms, Prentice-Hall, New Jersey, 1983.

C. S. Tang, "A max-min allocation problem: its solutions and applications", Operations Research, vol. 36, pp. 359-367, 1988.

J.M. Valério de Carvalho and A.J. Rodrigues, "An LP-based approach to a two-stage cutting stock problem", European Journal of Operational Research, vol. 84, pp. 580-589, 1995.

T. Yamada and M. Futakawa, "Heuristic and reduction algorithms for the knapsack sharing problem", Computers and Operations Research, vol. 24, pp. 961-967, 1997.
Inst.     Opt.     kopt   c̄kopt    CPU        Inst.     Opt.     kopt   c̄kopt    CPU
A20.1     1989     14     941      0.59       A30.1     1088     10     4612     0.54
A20.2     1465     20     6535     0.52       A30.2     747      20     8565     0.46
A20.3     2001     16     1117     0.60       A30.3     1277     2      1732     0.52
A20.4     1972     17     855      0.58       A30.4     1198     1      2688     0.52
A40.1     712      36     6072     0.50       A50.1     550      46     6117     0.54
A40.2     595      30     7961     0.50       A50.2     459      15     8060     0.51
A40.3     716      16     6233     0.50       A50.3     536      30     6785     0.51
A40.4     668      25     7073     0.46       A50.4     489      43     7639     0.53
B20.1     4997     3      1453     3.43       B30.1     2525     24     14758    2.24
B20.2     4164     20     10834    3.12       B30.2     3123     2      3920     2.47
B20.3     4996     13     1589     3.27       B30.3     3218     30     3497     2.38
B20.4     5115     8      1754     3.35       B30.4     2716     2      13142    2.19
B40.1     2173     14     8450     2.16       B50.1     1811     34     6246     2.06
B40.2     2177     39     7931     2.05       B50.2     1615     10     11083    1.94
B40.3     2181     2      8393     2.09       B50.3     1511     49     13790    1.87
B40.4     2057     34     12116    1.97       B50.4     1780     10     7464     2.03
C20.1     10113    11     3577     13.00      C30.1     6719     1      3029     9.09
C20.2     10066    12     2837     12.83      C30.2     6698     6      2192     9.18
C20.3     10079    13     2988     12.92      C30.3     6594     12     4188     9.02
C20.4     10377    1      2964     12.97      C30.4     6206     2      14914    8.69
C40.1     4644     6      10821    7.19       C50.1     3935     1      4329     6.52
C40.2     4846     40     5715     7.30       C50.2     3992     10     1348     6.61
C40.3     4588     16     12135    7.20       C50.3     3633     27     12140    6.29
C40.4     5122     7      1632     7.43       C50.4     3994     18     5175     6.54
D20.1     15276    1      5018     28.68      D30.1     10129    18     2659     20.17
D20.2     15151    15     4664     28.61      D30.2     10103    10     3424     20.09
D20.3     15256    14     4825     29.18      D30.3     10153    5      2824     20.18
D20.4     15468    2      6033     28.72      D30.4     9591     2      16959    19.39
D40.1     7606     23     2442     16.63      D50.1     6057     11     2042     14.29
D40.2     7505     35     2017     16.55      D50.2     5797     38     9774     13.84
D40.3     7074     4      16025    15.85      D50.3     5407     27     22193    13.15
D40.4     7663     33     3164     16.32      D50.4     6131     4      2038     14.24
E20.1     20274    13     5914     50.57      E30.1     13438    14     4158     35.16
E20.2     20382    7      6781     51.08      E30.2     13556    2      4107     35.57
E20.3     20368    2      6126     52.24      E30.3     13590    6      4196     36.28
E20.4     20634    10     6875     50.19      E30.4     13200    2      15214    36.22
E40.1     10098    28     3010     27.70      E50.1     8081     44     2430     24.02
E40.2     9838     18     12391    27.58      E50.2     8081     34     2329     24.31
E40.3     10150    25     2792     28.28      E50.3     8111     39     3028     24.32
E40.4     10260    23     3568     28.00      E50.4     8195     19     2459     23.67
F20.1     40884    15     12217    201.38     F30.1     27217    12     8593     138.25
F20.2     40926    16     12116    200.84     F30.2     27250    28     8557     136.94
F20.3     40936    7      12958    202.21     F30.3     27223    14     7781     138.07
F20.4     40528    16     12377    220.38     F30.4     26938    8      7940     135.72
F40.1     20393    8      6648     109.34     F50.1     16262    19     5762     88.28
F40.2     20425    38     6765     107.69     F50.2     16291    48     4974     87.19
F40.3     20428    39     5840     110.32     F50.3     16326    1      5232     88.97
F40.4     20218    30     6569     107.63     F50.4     16123    36     4739     87.94

Table 5: Performance of the EKSP algorithm on the instances with m ≥ 20 (classes).