Memorandum COSOR 95-37, 1995, Eindhoven University of Technology
Nonstrict vector summation in multi-operation scheduling

Sergey Sevastianov
Abstract

We consider several multi-operation scheduling problems with m machines and n jobs, including flow shop, open shop, assembly line, and a few special cases of job shop with the makespan criterion. It is demonstrated that the problems in question can be efficiently solved by approximation algorithms with fairly good performance guaranteed in the worst case. The algorithms constructed are based on a geometric technique called "nonstrict vector summation". It consists of assigning an (m−1)-dimensional vector to each job and then finding an order in which the resulting vectors should be summed so that all partial sums lie within half-spaces of the best possible family (with respect to a certain objective function). The partial sums are sometimes allowed to go out of this or that half-space of the family, which explains the term "nonstrict" in the title of the paper. For the open shop problem this technique guarantees its polynomial-time solution, provided that the maximum machine load (l_max) is large enough. In the case of three machines and l_max at least 7 times the maximum processing time of an operation, we can find the optimal schedule in O(n log n) time.
Keywords: open shop, flow shop, job shop, scheduling, polynomial-time approximation algorithms, sequencing of vectors.
1 Introduction

We consider the class of multi-operation scheduling models with the makespan criterion. Typical representatives of this class are the well-known flow shop, job shop, and open shop models, though we are not going to restrict our consideration to those mentioned above. Since all the problems we are to consider are known to be NP-hard, the construction of efficient approximation algorithms for the problems in question becomes of considerable practical interest. So, our main goal is constructing such algorithms. We provide them with a worst-case analysis of their performance, but not the traditional analysis of the worst-case ratio (i.e., the ratio C_max/C*_max, which is to be estimated from above). We will estimate the absolute error of each algorithm constructed, provided that a preliminary rescaling has been done. All those parameters of the model that are measured in time units must be rescaled so that the maximum processing time of an operation becomes equal to 1:

max_{j,i} p_ji =: p_max := 1.

Footnote: The paper was written while the author was on leave at the Eindhoven University of Technology, Department of Mathematics and Computing Science.

Footnote: [email protected]. Institute of Mathematics, Siberian Branch of the Russian Academy of Sciences, Universitetskii pr. 4, 630090, Novosibirsk-90, Russia. Supported by the EC programme "Human Capital and Mobility", EC Network DIMANET/PECO Contract No. ERBCIPDCT 94-0623.
(It is clear that there is no use in considering instances with all processing times equal to zero.) And though this analysis is not as common and habitual as that of the worst-case ratio, we will try to convince the reader that it has its own right to exist and, moreover, a promising future for further development.

To construct our algorithms and carry out their analysis, we apply a special geometric technique. It can be briefly outlined as follows. For each job J_j (j = 1, ..., n) in the m-machine scheduling model under consideration we define (according to a certain rule) an (m−1)-dimensional vector d_j ∈ IR^{m−1}, so that Σ d_j = 0. Then we try to find an order for the summation of the vectors {d_1, ..., d_n} such that all partial sums lie in a ball of radius as small as possible. The permutation π = (π_1, ..., π_n) found is then used for constructing schedules either in the direct way (the permutation schedule for, say, flow shop or assembly-line problems) or in an indirect way (for example, for prescribing priority orders of jobs in queues; this way works in job-shop-type problems). In our analysis we establish a simple relation: the smaller the radius of vector summation, the better the upper bound on schedule length that can be guaranteed. For schedules constructed in this way we can guarantee bounds of the type
C_max ≤ l_max + φ(m)·p_max,    (1)
where C_max is the makespan, l_max is the maximum machine load, p_max is the maximum processing time of an operation, and φ(m) is a polynomial in the number of machines. Since l_max represents a lower bound on the makespan of any schedule, we obtain an interval [l_max, l_max + φ(m)p_max] which contains both the optimum C*_max and the approximate solution C_max, the length of the interval being independent of the number of jobs. Thus, we first obtain a theoretical result which says that we can efficiently localize the optimum of any instance of the problem in question within an interval of length independent of the number of jobs, whereas we are not able to compute the optimum itself in polynomial time (unless P = NP). We call this property of this class of problems the optima localization property. Second, any result of this type is of practical interest, since it provides a polynomial-time approximation algorithm with an upper bound on the absolute error independent of the number of jobs, whereas l_max (and hence the optimum C*_max) tends to infinity for an increasing number of jobs. This ensures that the algorithms are asymptotically optimal under rather general assumptions on the problem data. The technique described above is referred to as the compact vector summation technique. For more information on this subject we refer the reader to the survey paper [1].

Yet, as we show in the present paper, better results in scheduling can be obtained via application of an advanced geometric technique called nonstrict vector summation (briefly, n.v.s.). According to the latter, different scheduling problems can be reduced to vector summation within different (not necessarily bounded) domains in IR^{m−1}, specified for each particular scheduling problem. Secondly, when defining each such domain G, we allow the summing trajectory to go out of this domain occasionally, but never for two successive steps. (That is why we call such a summation "nonstrict summation within G".) Using this technique enables us to improve the bounds obtained via application of the compact vector summation technique.

The geometric technique described above also has another kind of application. For the open shop problem it enables us to determine a rather wide subclass of polynomially solvable instances. Namely, we can construct in polynomial time an optimal schedule for a given instance of the open shop problem if the inequality
l_max ≥ η(m)·p_max    (2)
holds for some function η(m) of the number of machines. This condition seems not so arduous for instances of practical size, since η(m) is not greater than O(m²). For the case of three machines the best known result of this type is obtained by means of the n.v.s. technique. As is shown in section 5.1, the optimal schedule can be constructed in O(n log n) time, provided the sufficient condition l_max ≥ 7·p_max is satisfied. In each such case we have the property

C*_max = l_max.    (3)

It is natural to pose the question of finding the minimum function η(m) = η*(m) for which inequality (2) necessarily implies property (3). For the case m = 3 we can say so far that 4 ≤ η*(3) ≤ 7. So the question remains open even for small values of m (except the value η*(2) = 2).

The rest of the paper is organized as follows. In section 2 we formulate the scheduling problems for which some new results will be obtained in section 5. Section 3 is devoted to an introduction to nonstrict vector summation. We give a definition of this notion and prove two theorems which provide sufficient conditions under which any given s-family of vectors can be nonstrictly summed within a given convex domain in the plane. In section 4 several optimization problems are formulated. In each of them one has to find a nonstrict summation of a given family of vectors within an extremal family of half-spaces which minimizes a certain objective function. Application of the results from the previous section yields approximation algorithms for these problems with upper bounds guaranteed in the worst case. In section 5 we return to scheduling problems. In subsection 5.1 we establish a reduction of the open shop problem to the NVS1 problem from section 4 and derive its polynomially solvable case. Finally, in the remaining subsections we construct polynomial-time approximation algorithms for some scheduling problems, using their reduction to n.v.s. problems from section 4, and analyze the worst-case performance of these algorithms.
2 Settings of scheduling problems

In this section we state six scheduling problems with the makespan criterion and introduce the required notation. Next we formulate the results obtained in the paper for these problems and compare them with those obtained earlier. It will be convenient to formulate our scheduling problems as special cases of the following general problem.
Problem G. There are m machines M = {M_1, ..., M_m} that have to process n jobs J = {J_1, ..., J_n}. Each job J_j ∈ J consists of m operations O_j = {o_j1, ..., o_jm}. Operation o_ji (j = 1, ..., n; i = 1, ..., m) has to be processed on machine M_i, which requires p_ji time units. For each job J_j ∈ J a precedence relation on the set O_j is defined by a mixed graph G_j = (O_j, U_j, E_j), where O_j is the set of vertices, U_j is the set of arcs, and E_j is the set of edges; an inclusion (o′, o″) ∈ U_j means that operation o″ cannot start before o′ is completed (we will also write this as o′ → o″), whereas an inclusion (o′, o″) ∈ E_j means that these two operations cannot be processed simultaneously. No preemption in processing an operation is allowed, which means that once started at time s_ji, operation o_ji must be processed during the time interval (s_ji, s_ji + p_ji) until it is completed. The last restriction in the model is that each machine can process at most one operation at a time. We wish to find a schedule S = {s_ji ≥ 0 | j = 1, ..., n; i = 1, ..., m} (i.e., to assign to each operation o_ji a starting time s_ji for its processing) which meets the above requirements and minimizes the maximum completion time over all operations:
C_max =: max_{j,i} (s_ji + p_ji).
(We will also call the latter the "schedule length" or "makespan"; these are equivalent to the first notion, since the whole process starts at time zero.)
Now, to give an exact setting of each scheduling problem we are interested in, we need only specify the graph G_j in each case.

In the open shop problem we have U_j = ∅, E_j = K_m, which means that no precedence relation is prescribed on the set of operations of each job and no two operations of a job can be processed at a time. We will denote this problem by OS(m) for the case of m machines. (Note that the classical three-field classification of problems is not of much convenience for us, because the second field would always be empty and the third one would always indicate the C_max criterion. So, we need only invent a letter or several letters to indicate each of our problems.)

In the assembly-line problem (briefly, AL(m)) we have E_j = ∅, U_j = {(o_ji, o_jm) | i = 1, ..., m−1}. This setting can be interpreted as if for each j = 1, ..., n we have m−1 different items of one mechanism J_j; the items have to be produced by different machines M_1, ..., M_{m−1} independently and then assembled into the whole mechanism on the assembly machine M_m.

In the Akers-Friedman problem (briefly, AF(m)) each graph G_j is a chain o_{jπ_{1j}} → o_{jπ_{2j}} → ... → o_{jπ_{mj}}, where π_j = (π_{1j}, ..., π_{mj}) is a permutation of the indices {1, 2, ..., m}. Thus, for each job J_j ∈ J a machine passage route (M_{π_{1j}}, ..., M_{π_{mj}}) is given (the latter will also be denoted simply by the permutation π_j), and different jobs may in general have different routes. If all machine passage routes in the Akers-Friedman problem are identical (we may consider them equal to (1, 2, ..., m)), we have the flow shop problem (briefly, FS(m)).

Two special cases of the Akers-Friedman problem with three machines and two different machine passage routes will also be considered. Since we may assume that one of the routes is (1, 2, ..., m), to define a two-routes problem it always suffices to specify the second route. So, we will refer to our problems as R213 or R231, meaning that the second route is (2,1,3) or (2,3,1), respectively.

Let now l_i = Σ_{j=1}^n p_ji be the load of machine M_i, l_max = max_i l_i the maximum machine load, and p_max = max_{j,i} p_ji the maximum processing time of an operation.

Let us briefly formulate the results that will be obtained in this paper, and compare them with those obtained earlier for the corresponding problems. The result obtained in section 5.1 for the problem OS(3) consists of its optimal solution in O(n log n) time in the case when (2) holds for η(3) = 7. The best result obtained for this problem before (in [2]) enabled us to construct an optimal schedule in O(n²) time, provided (2) holds for η(3) = 8.5. For FS(4) we can now guarantee construction in O(n log n) time of a permutation schedule with bound (1) for φ_FS(4) = 6, whereas in [2] we could only construct a schedule for φ_FS(4) = 7.5 in O(n²) time. For FS(3) the value φ_FS(3) = 3, as was shown in [3], cannot be improved. Yet now we can construct the corresponding schedule in O(n) time instead of O(n log n). Construction of a schedule with bound (1) for φ_AF(3) = 5.5 for the AF(3) problem in O(n log n) time improves the similar result from [3] with φ_AF(3) = 6 and worse running time O(n²). The results for R213 and R231 are new: bounds (1) with φ_R213 = 4, φ_R231 = 5 and running time O(n log n). Finally, in subsection 5.2 we repeat the result obtained in [4] for the AL(3) problem (bound (1) with φ_AL(3) = 1.25 and running time O(n log n)). But now its proof requires much less effort, since it easily follows from a general theorem (proved in section 3), as does the just-mentioned result for R213. The reduction of AL(m) to the NVS5(m−1) problem demonstrated in the current paper has a certain advantage in comparison with the reduction used in [4].
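As a small illustration (an assumed Python sketch, not part of the original paper), the quantities l_i, l_max, and p_max, and the sufficient condition l_max ≥ 7·p_max for OS(3) from section 5.1, can be computed directly from the matrix of processing times:

```python
def loads(p):
    """p[j][i] is the processing time of operation o_ji
    (n jobs as rows, m machines as columns).  Returns the
    machine loads l_i, the maximum load l_max, and p_max."""
    n, m = len(p), len(p[0])
    l = [sum(p[j][i] for j in range(n)) for i in range(m)]
    return l, max(l), max(max(row) for row in p)

def os3_condition(p):
    """Sufficient condition for the three-machine open shop:
    if l_max >= 7 * p_max, an optimal schedule with
    C_max = l_max can be found in O(n log n) time."""
    _, lmax, pmax = loads(p)
    return lmax >= 7 * pmax
```

Note that the condition becomes easier to satisfy as the number of jobs grows, since l_max grows with n while p_max ≤ 1 after rescaling.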
3 Two theorems on nonstrict vector summation within a given region in the plane

In this section we introduce the notion of an s-family of vectors and give a definition of nonstrict summation of vectors within a given domain in IR^m. In two theorems we prove (in a constructive way) sufficient conditions on a domain G ⊆ IR² under which each s-family of vectors in IR² can be nonstrictly summed within G.
Consider the m-dimensional vector space IR^m. Given a family of vectors X = {x_1, ..., x_n} ⊂ IR^m, their sum will be denoted by Σ(X). A permutation π of the indices {1, 2, ..., n} will simply be referred to as permutation π.
Definition 1 Let a norm s be given in IR^m. A finite family of vectors X = {x_1, ..., x_n} ⊂ IR^m is called an s-family if Σ(X) = 0 and the norm of each vector is at most 1.

Definition 2 Let a domain G and a family of vectors X = {x_1, ..., x_n}, Σ(X) = 0, be given in IR^m. We say that a permutation π = (π_1, ..., π_n) provides a nonstrict summation of the vectors X within the domain G if, for any k = 1, ..., n, the relation x^π_{k−1} ∉ G implies x^π_k ∈ G, where x^π_k = Σ_{j=1}^k x_{π_j}.
Thus, the "nonstrictness" of vector summation within G means that partial sums are allowed to go out of G, but of any two successive sums at least one must be in G.
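Definition 2 can be restated operationally: along the trajectory of partial sums, no two successive sums may lie outside G. A small Python sketch of such a check (the membership test in_G is an assumed parameter, and vectors are taken planar for simplicity):

```python
def is_nonstrict_summation(vectors, perm, in_G):
    """Check Definition 2: for every k, if the (k-1)th partial sum
    lies outside G, then the kth partial sum must lie in G.
    The empty sum is the origin, assumed to lie in G."""
    sx = sy = 0.0
    prev_out = False
    for i in perm:
        sx += vectors[i][0]
        sy += vectors[i][1]
        out = not in_G((sx, sy))
        if prev_out and out:
            return False   # two successive partial sums left G
        prev_out = out
    return True
```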
Definition 3 Let G be a convex set and l a straight line in IR^m. Then the intersection h = l ∩ G is called a chord of G. A chord that passes through the origin is called an O-chord of G. For any vector x ∈ IR^m, x ≠ 0, the O-chord of G parallel to x is denoted by h_G(x); Q(a), a ≠ 0, stands for the ray which has its endpoint at the origin and passes through the point a. The set x + Q(a) corresponds to the ray that runs from the point x to infinity in parallel with the vector a. We say that a convex set G ⊆ IR^m is unbounded if it contains a ray x + Q(a) for some x ∈ IR^m, a ≠ 0. If a norm s is given in IR^m, then the length of a chord h is defined in the natural way. It is clear that any chord h = l ∩ G of a convex set G is an interval, or a ray, or coincides with the whole straight line l. In the last two cases the length of the chord is set equal to infinity.
Definition 4 (see [5]). The recessive cone of an unbounded convex set G ⊆ IR^m is the union of all rays Q(a) such that x + Q(a) ⊆ G for each x ∈ G. This cone will be denoted by 0⁺G.
It can be easily shown that the recessive cone is convex. Furthermore, if G is a closed set, then the cone 0+ G is closed as well. The following proposition can be easily proved.
Proposition 1 Suppose we are given a norm in IR^m, an unbounded convex set G ⊆ IR^m, and straight lines l = {a + tb | t ∈ IR}, l′ = {tb | t ∈ IR}, defined for a, b ∈ IR^m, b ≠ 0, such that 0 ∈ G and l ∩ 0⁺G ≠ ∅. Then for the chords h = l ∩ G and h′ = l′ ∩ G we have ‖h‖ ≥ ‖h′‖.
In the rest of this section we will deal with the two-dimensional space. An arbitrary nonzero vector a ∈ IR² specifies the left and the right closed half-spaces L(a) and R(a), i.e., the half-spaces stretched to the left and to the right of the straight line {λa | λ ∈ IR} when it runs in the direction of the ray Q(a). Put
L⁰(a) = IR² \ R(a),   L̄⁰(a) = L⁰(a) ∪ Q(a),
R⁰(a) = IR² \ L(a),   R̄⁰(a) = R⁰(a) ∪ Q(a).
The following simple properties hold (a ≠ 0, b ≠ 0):

L(a) = R(−a), L⁰(a) = R⁰(−a), X(−a) = −X(a), X⁰(−a) = −X⁰(a), X̄⁰(−a) = −X̄⁰(a) (X ∈ {L, R});
a ∈ L(b) ⟺ b ∈ R(a);  a ∈ L⁰(b) ⟺ b ∈ R⁰(a);  a ∈ L̄⁰(b) ⟺ b ∈ R̄⁰(a).

The three propositions below can be easily proved.
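These identities are easy to verify computationally once a concrete representation of the closed half-spaces is fixed; in the sketch below (our assumption, consistent with the counter-clockwise orientation used later) x ∈ L(a) iff the z-component of the cross product a × x is nonnegative:

```python
def cross(a, x):
    """z-component of the cross product a x x in the plane."""
    return a[0] * x[1] - a[1] * x[0]

def in_L(a, x):
    """Closed left half-space of the nonzero vector a."""
    return cross(a, x) >= 0

def in_R(a, x):
    """Closed right half-space of the nonzero vector a."""
    return cross(a, x) <= 0
```

With this representation, L(a) = R(−a) and a ∈ L(b) ⟺ b ∈ R(a) reduce to the antisymmetry of the cross product.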
Proposition 2 If a convex cone A ⊆ IR² does not coincide with IR², then A is entirely contained in some half-space L(a).
Proposition 3 A closed convex cone A ⊆ IR² can be represented as A = A(a, b) =: (L⁰(a) ∩ R⁰(b)) ∪ Q(a) ∪ Q(b) for some a ≠ 0 and b ∈ L(a), b ≠ 0, if and only if A is not a straight line and does not coincide with IR².
In view of Proposition 3 the recessive cone (except in the two cases mentioned above) will also be represented in the form 0⁺G = A(a, b). Clearly, A(a, b) ⊆ L(a) ∩ R(b), and the set on the left-hand side does not coincide with that on the right-hand side only if b = λa, λ > 0.
Proposition 4 If a, b are nonzero vectors in IR², b ∈ L(a), and Σ ∈ −A(a, b), then L(Σ) \ A(a, b) ⊆ L(Σ) \ L̄⁰(a).
Theorem 1 Let s be a norm in IR² and let G ⊂ IR² be a convex closed unbounded set such that 0 ∈ G and each O-chord of G has at least unit length (see Fig. 1). Suppose also that the recessive cone 0⁺G =: A = A(a, b) ≠ ∅ of the set G is known. Then for every s-family X = {x_1, ..., x_n} ⊂ IR² there exists (and can be found in O(n log n) time) a permutation π = (π_1, ..., π_n) that provides a nonstrict summation of the vectors X within G.
The scheme of the proof is as follows. First, we describe an algorithm for finding the permutation π. Next, a number of properties of the algorithm are established (Lemmas 1-5) which immediately imply Theorem 1. The algorithm for finding the permutation π consists of three stages that will be referred to as the preliminary, the initial, and the main one, respectively.
Figure 1: Illustration to Theorem 1 (the unbounded convex set G with 0 ∈ G, its recessive cone 0⁺G, and O-chords of s-length at least 1).

Preliminary stage. Number the vectors in X in counter-clockwise order from the vector a. Let x_1, ..., x_n be the numbering obtained.
Initial stage. If X ∩ A ≠ ∅, then, taking the numbers 1, 2, ..., j as the first j = |X ∩ A| indices of the permutation π, we obtain the total Σ_j =: Σ_{i=1}^j x_i ∈ A. It is also clear that the whole trajectory of the summation of the vectors x_1, ..., x_j lies in A.

Main stage. The subsequent indices of the permutation π will be specified in such a way
that for each k = j, ..., n the following property is satisfied at step k.

Property (†). If π_1, π_2, ..., π_k are already computed, then those indices of {1, 2, ..., n} that have not yet been used in π represent an interval of integers {s_k, s_k + 1, ..., p_k}, where p_k − s_k + 1 = n − k.

Suppose that property (†) is satisfied at step k (k < n). Then, to guarantee this property at the subsequent steps of the algorithm, we choose the next, (k+1)th, vector from the pair of vectors {x_{s_k}, x_{p_k}}, which will also be denoted (for short) as x_s ("the successor") and x_p ("the predecessor"). The reason for this notation becomes clear from Fig. 2. While writing the sum Σ_k, the index k will commonly be dropped.

The rule for the choice of the next vector is simple. Let Σ ∈ G. If Σ + x_s ∈ G, then we set π_{k+1} = s_k. If Σ + x_s ∉ G and Σ + x_p ∈ G, then we set π_{k+1} = p_k. Finally, if Σ + x_s ∉ G and Σ + x_p ∉ G (it follows from Σ(X) = 0 ∈ G that this cannot happen at step k = n − 1), then we set π_{k+1} = s_k, π_{k+2} = p_k (or vice versa, which is immaterial). If we then have Σ_{k+2} ∉ G, then "STOP." This completes the description of the algorithm.

Clearly, the algorithm meets the declared bound on its running time, since the complexity of the preliminary numbering of the vectors is O(n log n), and the subsequent construction of the permutation π can be implemented in O(n) steps.

Figure 2: The summation trajectory of the vectors X in the order of their numbering.

To justify the fact that the permutation π constructed provides a nonstrict summation within G, it suffices to prove that no "STOP" situation can arise in the algorithm. This is clear in the case G = IR². Alternatively, if G ≠ IR² (hence, A = 0⁺G ≠ IR²), then by Proposition 2 we have A ⊆ L(a). If A = L(a), then the "STOP" situation cannot occur, and moreover, the permutation π = (1, 2, ..., n) obtained by the algorithm provides a nonstrict summation of the vectors X within A ⊆ G. Suppose now that the strict inclusion A ⊂ L(a), A ≠ L(a), holds. Let X_k = {x_{s_k}, x_{s_k+1}, ..., x_{p_k}} denote the subfamily of those vectors in X that are not yet delegated to the total Σ_k.
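The three stages can be sketched in code. The fragment below is an illustrative Python rendition (with assumed membership oracles in_G for the set G and in_A for its recessive cone, and a plain deque in place of any cleverer data structure; the sorting dominates, giving O(n log n) overall):

```python
import math
from collections import deque

def nonstrict_order(vectors, in_G, in_A, a):
    """Sketch of the three-stage algorithm of Theorem 1.
    Preliminary stage: number the vectors counter-clockwise from a.
    Initial stage: indices of vectors lying in the cone A go first.
    Main stage: repeatedly append the successor x_s or the
    predecessor x_p so that the partial sum stays in G when possible;
    if both additions leave G, append both (a nonstrict step)."""
    ang_a = math.atan2(a[1], a[0])
    def key(i):
        x = vectors[i]
        return ((math.atan2(x[1], x[0]) - ang_a) % (2 * math.pi),
                math.hypot(x[0], x[1]))
    order = sorted(range(len(vectors)), key=key)
    # initial stage (for a cone A starting at the ray Q(a),
    # these indices form a prefix of the numbering)
    head = [i for i in order if in_A(vectors[i])]
    rem = deque(i for i in order if not in_A(vectors[i]))
    pi, s = [], [0.0, 0.0]
    def take(i):
        s[0] += vectors[i][0]; s[1] += vectors[i][1]
        pi.append(i)
    for i in head:
        take(i)
    while rem:
        xs, xp = vectors[rem[0]], vectors[rem[-1]]
        if in_G((s[0] + xs[0], s[1] + xs[1])):
            take(rem.popleft())
        elif in_G((s[0] + xp[0], s[1] + xp[1])):
            take(rem.pop())
        else:                       # nonstrict step: take both
            take(rem.popleft())
            if rem:
                take(rem.pop())
    return pi
```

For instance, with G the half-plane {y ≥ −1/2} (all of whose O-chords are infinite), a = (1, 0), and A = 0⁺G = {y ≥ 0}, the permutation returned keeps every partial sum in G or, at worst, leaves it for a single step.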
Lemma 1 The following inclusions hold at each step of the algorithm (provided Σ ≠ 0): x_s ∈ L(Σ), x_p ∈ R(Σ).

Proof. Summing the vectors in X in the order of their numbering produced at the preliminary stage, we obtain a summation trajectory that bounds a convex polygon D (Fig. 2). By property (†), at each step k of the algorithm the total vector Σ = Σ_k corresponds to a chord of the polygon D. This chord and the vectors in X_k form a convex polygon D′. The polygon D′ lies entirely to the left of the vector Σ (for Σ ≠ 0), i.e., D′ ⊆ L(Σ). (This remains valid with respect to any edge of the polygon, since the vectors have been numbered in counter-clockwise order.) Therefore, x_s ∈ L(Σ) and x_p ∈ R(Σ).

Lemma 2 At each step of the algorithm, the following implications hold whenever Σ ≠ 0:

Σ ∈ R(a) ⟹ x_s ∈ L(a);   Σ ∈ L(a) ⟹ x_p ∈ R(a).
Proof. Let Σ ∈ R(a). If x_s ∈ IR² \ L(a) = R⁰(a), then by the rule of numbering the vectors and due to property (†), we have X_k ⊆ R⁰(a). Hence, Σ + Σ(X_k) ≠ 0, which contradicts the property Σ(X) = 0. The second implication can be proved in a similar way.
Lemma 3 If A ⊂ L(a), A ≠ L(a), then at each step of the algorithm

Σ ∉ −A \ {0}.    (4)
Proof. A ≠ L(a) implies A ∩ (−A \ {0}) = ∅. Therefore, (4) holds at each step of the initial stage, since Σ ∈ A. Suppose that (4) is violated at step k during the main stage. Then

Σ ∈ −A ⊆ R(a) and x_s ∉ A.    (5)

Since by Lemma 1 we have x_s ∈ L(Σ), by (5) and Proposition 4 it follows that x_s ∉ L̄⁰(a), i.e., x_s ∈ R⁰(a) ∪ Q(−a). Therefore, by the rule of numbering the vectors and due to property (†), we obtain X_k ⊆ R⁰(a) ∪ Q(−a). This implies that Σ + Σ(X_k) ≠ 0, which contradicts the equality Σ(X) = 0. The contradiction proves Lemma 3.

Let G₁ = R̄⁰(b) \ G, G₂ = L̄⁰(a) \ G. Since
G₁ ⊆ R̄⁰(b) \ A = R̄⁰(b) \ {(L⁰(a) ∩ R⁰(b)) ∪ Q(a) ∪ Q(b)} = R̄⁰(b) \ L̄⁰(a)  [because −a ∈ L(b)]  = R̄⁰(b) \ {L⁰(a) ∪ Q(−a)} = R̄⁰(b) \ L(a),

the following inclusion is valid:

G₁ ⊆ R⁰(a) ∩ R̄⁰(b).    (6)

The inclusion G₂ ⊆ L̄⁰(a) ∩ L⁰(b) can be derived in a similar way.
Lemma 4 If Σ_k ∈ G, then

Σ_k + x_s ∉ G₁    (7)

and

Σ_k + x_p ∉ G₂.    (8)
Proof. Suppose that Σ_{k+1} = Σ_k + x_s and (7) does not hold. It follows from (6) that

Σ_{k+1} ∈ G₁ ⊆ R⁰(a).    (9)
This implies that step (k+1) is executed not at the initial stage (when we have Σ_k ∈ A ⊆ L(a) for each k), but at the main one. Therefore, x_s ∉ A. Using (9) and Lemmas 1 and 2, we obtain x_{s_{k+1}} ∈ L(Σ_{k+1}) ∩ L(a) = A(a, −Σ_{k+1}). By the rule of numbering the vectors, it follows that

x_s ∈ A(a, x_{s_{k+1}}) ⊆ A(a, −Σ_{k+1}) ⊆ L(a).    (10)

Relations (9) and (10) imply that

Σ_k = Σ_{k+1} − x_s ∈ R⁰(a).

Using (10), we obtain for some α ≥ 0 and β > 0 the representation x_s = αa − βΣ_{k+1} = αa − β(Σ_k + x_s), i.e., x_s = (α/(1+β))a − (β/(1+β))Σ_k. In view of Σ_k ∈ G, 0 ∈ G, the convexity of G, and a ∈ 0⁺G, it follows that

Σ_{k+1} = Σ_k + x_s = (1/(1+β))Σ_k + (α/(1+β))a ∈ G,

which contradicts the assumption Σ_{k+1} ∈ G₁ = R̄⁰(b) \ G. Property (8) can be proved in a similar way. Lemma 4 is proved.
Lemma 5 Let Σ ∈ G, Σ + x_s ∉ G, and Σ + x_p ∉ G. Then Σ + x_s + x_p ∈ G.

Proof. By Lemmas 3 and 4 we have Σ + x_s ∉ G ∪ G₁ ∪ (−A). This implies Σ + x_s ∈ G₂. Similarly, Σ + x_p ∈ G₁. We prove that

Σ + x_s + x_p ∉ G₁.    (11)

Suppose to the contrary that

Σ + x_s + x_p ∈ G₁.    (12)

Set Σ′ = Σ + x_s, Σ″ = Σ′ + x_p, and show that

[Σ′, Σ″] ∩ A ≠ ∅.    (13)
Since Σ′ ∈ G₂ ⊆ L̄⁰(a), by Lemma 2 we have x_p ∈ R(a), and by Lemma 1 we have x_p ∈ R(Σ_{k+1}). Hence, x_p can be represented as x_p = αa − βΣ_{k+1}, where α ≥ 0, β > 0, and moreover, from (12) we have β > 1. This implies Σ′ + (1/β)x_p = (α/β)a ∈ A ∩ [Σ′, Σ″], which proves (13). It follows from (13) and the relations Σ′ ∉ G, Σ″ ∉ G that the chord h =: {λΣ′ + (1−λ)Σ″ | λ ∈ IR} ∩ G is contained in the interval [Σ′, Σ″]. Since the chord is closed, we have ‖h‖ < ‖x_p‖ ≤ 1. On the other hand, due to Proposition 1, the length of each chord of the set G which has a nonempty intersection with the recessive cone of this set is not less than that of the parallel O-chord. Hence, ‖h‖ ≥ 1. The contradiction proves (11). In the same way we can establish the relation

Σ + x_s + x_p ∉ G₂.    (14)
From (11), (14), and Lemma 3 we obtain Σ + x_s + x_p ∈ IR² \ (G₁ ∪ G₂ ∪ (−A)) ⊆ G. Lemma 5 is proved.

By Lemma 5, no "STOP" situation occurs in the algorithm. Thus, the algorithm ensures a nonstrict summation of the vectors in X within G. Theorem 1 is proved.

Remark 1. The fact that the cone A is formed by all the directions in which the set G tends
to infinity was never used in the algorithm. To define and provide a basis for the algorithm of nonstrict summation within G, it suffices to have only one such direction a ∈ IR^m.

Remark 2. For a permutation which delivers a nonstrict summation of a given s-family
X = {x_1, ..., x_n} within G to exist, it is not necessary that each O-chord of G have length at least 1 (in the norm s). First, it suffices to require this condition only for those O-chords h_G(x_j) which are parallel to the vectors x_j ∈ X. Second, we need not compare ‖h_G(x_j)‖_s with unity, but only with ‖x_j‖_s. Thus, the condition ‖h_G(x_j)‖ / ‖x_j‖ ≥ 1 for each x_j ∈ X, x_j ≠ 0, is sufficient for the existence of a nonstrict summation of X within G, and this condition is independent of the norm.
Remark 3. The restriction in Theorem 1 that the s-norm of each O-chord of G must be at least 1 cannot be relaxed.
To justify this, we need not confine ourselves to constructing a counterexample, but can prove a stronger result. The latter is that a relaxation of this restriction by an arbitrarily small number leads to a situation where it becomes strongly NP-hard to check whether there exists a nonstrict summation of a given s-family of vectors within G. This situation arises even if all the vectors in X are collinear to a vector x ∈ IR^m and ‖h_G(x)‖ < 1. In the latter case, the unboundedness of G in some directions (different from x) proves to be immaterial, the problem becomes actually one-dimensional, and we may speak of "numbers" instead of "vectors." For details, we refer the reader to [6].
Definition 5 We say that three vectors y₁, y₂, y₃ ∈ IR² form a normal triple if there are numbers λ_i ≥ 0, i = 1, 2, 3, such that Σ_i λ_i y_i = 0 and Σ_i λ_i > 0.

It is easily seen that three vectors in the plane form a normal triple if and only if they do not all lie in the same open half-space. Let a radial coordinate system be defined in the plane, and let u(x) ∈ [0, 2π) denote the angular coordinate of a vector x ∈ IR². We assume for definiteness that u(x) increases while rotating the vector x counter-clockwise (similarly to a common assumption for complex numbers). We assume also that u(0) = 0. For any two vectors a and b such that u(a) ≠ u(b), a ≠ 0, b ≠ 0, the sector of the plane from the ray Q(a) counter-clockwise to the ray Q(b) is denoted by C(a, b). More formally:
C(a, b) = L(a) ∩ R(b) if b ∈ L(a),  and  C(a, b) = L(a) ∪ R(b) if b ∈ R(a).
In the case u(a) = u(b) we set C(a, b) = C(b, a) = Q(a) = Q(b). Let C⁰(a, b) denote the open sector C(a, b) \ (Q(a) ∪ Q(b)), and let ∠C(a, b) be the angular measure of the sector C(a, b), i.e., ∠C(a, b) = u(b) − u(a) if u(b) ≥ u(a), and ∠C(a, b) = u(b) − u(a) + 2π if u(b) < u(a).
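The normality of a triple can be tested directly from Definition 5; the following Python sketch (an illustration of ours, not the paper's procedure) searches for nonnegative coefficients, not all zero, with λ₁y₁ + λ₂y₂ + λ₃y₃ = 0:

```python
def is_normal_triple(y1, y2, y3, eps=1e-9):
    """Definition 5: do nonnegative lam_1, lam_2, lam_3 exist,
    not all zero, with lam_1*y1 + lam_2*y2 + lam_3*y3 = 0?"""
    vs = [y1, y2, y3]
    for v in vs:
        if abs(v[0]) <= eps and abs(v[1]) <= eps:
            return True          # put all weight on the zero vector
    for k in range(3):
        i, j = (k + 1) % 3, (k + 2) % 3
        a, b, c = vs[i], vs[j], vs[k]
        det = a[0] * b[1] - a[1] * b[0]
        if abs(det) > eps:
            # solve lam_i * a + lam_j * b = -c by Cramer's rule
            li = (-c[0] * b[1] + c[1] * b[0]) / det
            lj = (-a[0] * c[1] + a[1] * c[0]) / det
            if li >= -eps and lj >= -eps:
                return True
        elif a[0] * b[0] + a[1] * b[1] < 0:
            return True          # an anti-parallel pair cancels itself
    return False
```

The outcome agrees with the criterion above: a triple fails to be normal exactly when the three vectors all lie in one open half-space.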
Theorem 2 Let s be a norm in the plane, H be its unit ball, and suppose G to be a convex domain in IR² such that

if x₁ ∉ G, x₂ ∉ G, and [x₁, x₂] ∩ H ≠ ∅, then ‖x₂ − x₁‖ > 1.    (15)

(See Fig. 3.) Then for any s-family X = {x_1, ..., x_n} ⊂ IR² there is a permutation π = (π_1, ..., π_n) which provides a nonstrict summation of the vectors X within G, and this permutation can be found in O(n log n) time.
Figure 3: Illustration to Theorem 2 (points x₁, x₂ ∉ G with [x₁, x₂] ∩ H ≠ ∅ and ‖x₂ − x₁‖_s > 1).

Remark 4. It follows from (15) and the convexity of G that H ⊆ G and each chord of G
intersecting H has length at least 1. Conversely, if H ⊆ G and each chord of G intersecting H has length at least 1, we obtain (15), provided G is closed. However, in the proof of Theorem 2 we will use (15) without the assumption that G is closed.
Proof of Theorem 2. We describe an algorithm A for computing the desired permutation π.

Algorithm A
Stage 1 (Numbering the vectors and constructing the search tree).
We number the vectors x_i ∈ X in increasing order of their angular coordinates u(x_i). In the case that these coordinates are equal, we number the vectors in increasing order of their lengths. Finally, when both coordinates are equal, we number the vectors in an arbitrary order. Thus, all zero vectors obtain the smallest numbers (from 1 to k₀, where k₀ ≥ 0). This procedure runs in O(n log n) time. Then, following Tarjan [7], we construct a balanced binary tree T₀ with the set of (internal) nodes I₀ = {1, ..., n} by keys u(x_i), i ∈ I₀. Recall that such a search tree has the following useful property: for each internal node i, all nodes {j} in its left subtree have keys u(x_j) ≤ u(x_i), whereas all nodes in its right subtree have keys u(x_j) ≥ u(x_i). That the tree is "balanced" means that the distance from the root of the tree to any of its external nodes is O(log n). While constructing such a tree, we define the rank r(i) of each node i as the maximum integer ρ such that i is a multiple of 2^ρ. Thus, the ranks vary from 0 to ⌊log₂ n⌋, the maximum rank r = ⌊log₂ n⌋ being attained on the unique node i₀ = 2^r, which is defined as the root of the tree T₀. For each node i not equal to the root i₀ we define its parent p(i), choosing from the two nodes i + 2^{r(i)} and i − 2^{r(i)} as follows: if i + 2^{r(i)} > n, then p(i) := i − 2^{r(i)}; else if r(i + 2^{r(i)}) = r(i) + 1, then p(i) := i + 2^{r(i)}; else p(i) := i − 2^{r(i)}. Since r(p(i)) ≥ r(i) + 1 holds in each case, the depth of the tree is at most the maximum rank, i.e., ⌊log₂ n⌋. It can also be easily checked that the tree so defined is a binary tree, i.e., each of its nodes has at most two children.
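The rank and parent rules of Stage 1 are concrete enough to transcribe directly; a small Python sketch (function names are ours):

```python
def rank(i):
    """r(i): the maximal rho such that 2**rho divides i."""
    r = 0
    while i % 2 == 0:
        i //= 2
        r += 1
    return r

def parent(i, n):
    """Parent p(i) of a non-root node i in the tree on {1, ..., n},
    following the rule from the text."""
    step = 2 ** rank(i)
    if i + step > n:
        return i - step
    if rank(i + step) == rank(i) + 1:
        return i + step
    return i - step
```

For n = 10, for example, the root is i₀ = 8, and following parents from any node reaches the root in at most ⌊log₂ 10⌋ = 3 steps; the rank strictly increases along the way, which is exactly the argument for the depth bound given above.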
Stage 2 (Computing the permutation π).
The stage consists of steps k = 1, ..., n. After step k is completed, we have the first k values π_1, ..., π_k of the permutation π and the set I_k of indices of those vectors in X that are not yet delegated to the total Δ_k := Σ_{i=1}^{k} x_{π_i}. We also have a search tree T_k with node set I_k, the depth of the tree being at most r*. For k = 1, ..., k_0 we set π_k = k, which means that the zero vectors go first. Removing the indices {1, ..., k_0} from the set I_0 and deleting the corresponding nodes from the tree T_0, we obtain the set I_{k_0} and the tree T_{k_0}. (Removing a node from a search tree with n nodes takes O(log n) time [7, p. 46].) Let k steps be already completed, k_0 ≤ k ≤ n − 2.
Step k + 1 (draft description). Find indices p_k, s_k ∈ I_k such that the angular coordinates of the vectors x_{p_k}, x_{s_k} are closest to the value u_0 = u(−Δ_k) (from below and from above, respectively) over all angular coordinates of vectors x_i ∈ X_k. Formally, let
 I_k^−(u_0) := {i ∈ I_k | u(x_i) ≤ u_0}, I_k^+(u_0) := {i ∈ I_k | u(x_i) ≥ u_0}.
Set
 p_k = max{j ∈ I_k^−(u_0)} if I_k^−(u_0) ≠ ∅, and p_k = max{j ∈ I_k} otherwise;
 s_k = min{j ∈ I_k^+(u_0) \ {p_k}} if I_k^+(u_0) \ {p_k} ≠ ∅, and s_k = min{j ∈ I_k} otherwise.
(Again, we will write p, s, Δ instead of p_k, s_k, Δ_k if this does not lead to misunderstanding.) It is not hard to describe an algorithm for finding the nodes p_k, s_k in the search tree T_k that runs in O(log n) time. For π_{k+1} we will always choose one of the indices {p, s}. (The details of the choice rule will be described later.) Removing (in O(log n) time) the node π_{k+1} from T_k leads to the tree T_{k+1}. This completes the draft description of the algorithm A.
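On a static sorted array the same predecessor/successor search can be sketched with binary search (the paper uses the balanced tree T_k because indices are deleted as vectors get used; `find_p_s` and the 0-based positions are our own illustration):

```python
from bisect import bisect_left, bisect_right

def find_p_s(angles, u0):
    # angles: angular coordinates u(x_i) of the vectors still in I_k,
    # sorted increasingly (positions play the role of the indices).
    n = len(angles)
    j = bisect_right(angles, u0)
    p = j - 1 if j > 0 else n - 1        # closest from below, else the maximum index
    j2 = bisect_left(angles, u0)
    if j2 == p:                          # exclude p itself, as in the definition of s_k
        j2 += 1
    s = j2 if j2 < n else 0              # closest from above, else the minimum index
    return p, s
```

Each query is O(log n), matching the running time claimed for the tree-based search.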
The validity of Theorem 2 follows from three lemmas proved below.
Lemma 6. At each step k = 1, ..., n of the algorithm A the triple of vectors ⟨Δ_k, x_{p_k}, x_{s_k}⟩ is normal.
Proof. By the definition of the vectors x_p, x_s, at each step k we have
 −Δ_k ∈ C(x_p, x_s),  (16)
 X_k ⊆ C(x_s, x_p).  (17)
This implies ∠C(x_s, x_p) ≥ π, which is equivalent to
 ∠C(x_p, x_s) ≤ π.  (18)
(The contrary would contradict the property Σ(X) = 0.) Now the normality of the triple follows from (16) and (18). Lemma 6 is proved.
Lemma 7. Let α_Δ, α_p, α_s be coefficients which justify the normality of the triple ⟨Δ, x_p, x_s⟩, i.e.,
 α_Δ Δ + α_p x_p + α_s x_s = 0, α_i ≥ 0 for i ∈ {Δ, p, s}, Σ α_i > 0.
Let also
 Δ + x_p ∉ H and Δ + x_s ∉ H.  (19)
Then
 α_Δ < max{α_p, α_s}.  (20)
Proof. Let α_Δ ≥ max{α_p, α_s}. Then α_Δ > 0. If α_p ≥ α_s, we have
 Δ + x_p = (1 − α_p/α_Δ) x_p + (α_s/α_Δ)(−x_s) = λ_p x_p + λ_s (−x_s),
where λ_p ≥ 0, λ_s ≥ 0, λ_p + λ_s ≤ 1, x_p ∈ H, −x_s ∈ H (since H is symmetric). Hence, Δ + x_p ∈ conv{x_p, −x_s, 0} ⊆ H, which contradicts (19). Alternatively, if α_p ≤ α_s, we have the similar relation
 Δ + x_s = (α_p/α_Δ)(−x_p) + (1 − α_s/α_Δ) x_s ∈ H,
which contradicts (19). Lemma 7 is proved.
Now we are able to describe the action of the algorithm A at step k + 1 in detail.
Step k + 1 (detailed description). If Δ + x_p ∈ H, then π_{k+1} := p. If Δ + x_p ∉ H and Δ + x_s ∈ H, then π_{k+1} := s. If Δ + x_p ∉ H and Δ + x_s ∉ H, then find nonnegative coefficients α_Δ, α_p, α_s justifying the normality of the triple ⟨Δ, x_p, x_s⟩. By Lemma 7, we have (20). Therefore, only the following three cases of relations between α_Δ, α_p, α_s are possible.
Case 1 (α_s ≤ α_Δ < α_p): π_{k+1} := p.
Case 2 (α_p ≤ α_Δ < α_s): π_{k+1} := s.
Case 3 (α_Δ < min{α_p, α_s}):
 A) if Δ + x_p ∈ G, then π_{k+1} := p;
 B) if Δ + x_p ∉ G, Δ + x_s ∈ G, then π_{k+1} := s;
 C) if Δ + x_p ∉ G, Δ + x_s ∉ G, then π_{k+1} := p, π_{k+2} := s.
Step k + 1 is described completely. As one can see, in case 3C we obtain a description of steps k + 1 and k + 2 at once.
Lemma 8. Suppose that at step t = k (k_0 ≤ k ≤ n − 2) the relations
 Δ_t ∈ G,  (21)
 (Δ_t + Q(−x_0)) ∩ H ≠ ∅  (22)
hold, where x_0 is either x_{p_t} or x_{s_t}. Then (21), (22) hold either at step t = k + 1 or at step t = k + 2.
Proof. Observe that, due to Σ(X) = 0, the equality in (18) is attainable only if Δ_k and all vectors in X_k are collinear. Since the proof of Lemma 8 is trivial in this case (at each step t = k + 1, ..., n the vector x_{π_t} = x_p has direction opposite to that of the vector Δ_t), we can assume that
 ∠C(x_p, x_s) < π.  (23)
If Δ + x_p ∈ H or Δ + x_s ∈ H, set π_{k+1} := p or π_{k+1} := s, respectively, and (21), (22) hold at step t = k + 1. Assume next that (19) holds. Assume also, for definiteness, that (22) holds at step t = k for x_0 = x_s, i.e.,
 (Δ + Q(−x_s)) ∩ H ≠ ∅.  (24)
(In the case x_0 = x_p the proof of the lemma is symmetric.) Since (19) is the case, according to the description of the algorithm we find coefficients α_Δ, α_p, α_s justifying the normality of the triple ⟨Δ, x_p, x_s⟩. Observe that (23) implies α_Δ > 0. Furthermore, α_p ≠ 0. (Otherwise, the vector x_s has direction opposite to that of the vector Δ; together with (24) this implies Δ ∈ H, Δ + x_s ∈ H, which contradicts (19).) In the case α_s = 0 the vector x_p has direction opposite to that of Δ, and (21), (22) hold for Δ_{k+1} = Δ + x_p at step t = k + 1. So, from now on, we can assume that
 α_i > 0, i ∈ {Δ, p, s}.  (25)
Consider the three possible cases of relations between α_Δ, α_p, α_s.
Case 1 (α_s ≤ α_Δ < α_p).
We have π_{k+1} = p, Δ_{k+1} = Δ + x_p. By (23), (25), we have
 −Δ_{k+1} = −Δ − x_p = (α_p/α_Δ − 1) x_p + (α_s/α_Δ) x_s ∈ C°(x_p, x_s),  (26)
hence x_{s_{k+1}} = x_s. Furthermore,
 y_1 := −(α_s/α_Δ) x_s ∈ H  (27)
and
 Δ_{k+1} = Δ + x_p = (1 − α_Δ/α_p) Δ − (α_s/α_p) x_s = (1 − α_Δ/α_p) Δ + (α_Δ/α_p) y_1.  (28)
Relations (27), (28) imply (21) for t = k + 1. Let us prove (22). It follows from (24) that there exists a point z = Δ − λ x_s ∈ H, where λ ≥ 0. Denoting z_1 = (α_s/(α_s + λα_Δ)) z, we obtain z_1 ∈ H and
 z_1 = (α_s/(α_s + λα_Δ))(Δ − λ x_s) = (1 − θ_0) Δ + θ_0 y_1 ∈ [Δ, y_1],  (29)
where θ_0 := λα_Δ/(α_s + λα_Δ). If α_Δ/α_p ≥ θ_0, then (28) and (27) imply Δ_{k+1} ∈ [y_1, z_1] ⊆ H, which contradicts (19). Thus, we have α_Δ/α_p < θ_0 = λα_Δ/(α_s + λα_Δ), and λ − λα_Δ/α_p − α_s/α_p =: δ > 0. Now we can write
 Δ_{k+1} − δ x_{s_{k+1}} = Δ_{k+1} − δ x_s = (1 − α_Δ/α_p) Δ − (α_s/α_p) x_s − (λ − λα_Δ/α_p − α_s/α_p) x_s
 = (1 − α_Δ/α_p) Δ − (1 − α_Δ/α_p) λ x_s = (1 − α_Δ/α_p)(Δ − λ x_s) = (1 − α_Δ/α_p) z
 ∈ H ∩ (Δ_{k+1} + Q(−x_{s_{k+1}})),
which proves (22) for t = k + 1.
Case 2 (α_p ≤ α_Δ < α_s).
Observe that this case is not symmetric to Case 1, due to the assumption (24). Set π_{k+1} = s, Δ_{k+1} = Δ + x_s. Let z = Δ − λ x_s (λ ≥ 0) be a point in (Δ_t + Q(−x_s)) ∩ H, which exists due to (24). Then
 Δ_{k+1} = Δ + x_s ∈ [Δ − λ x_s, Δ + (α_s/α_Δ) x_s] = [Δ − λ x_s, −(α_p/α_Δ) x_p] ⊆ H,
which implies (21), (22) for t = k + 1.
Case 3 (α_Δ < min{α_p, α_s}).
Subcase A (Δ + x_p ∈ G).
We have π_{k+1} = p, Δ_{k+1} = Δ + x_p ∈ G, which means that (21) holds for t = k + 1. To prove (22), define the vectors y_1, z, z_1 and the numbers λ, θ_0 as in Case 1. Then, due to (26), we have −Δ_{k+1} ∈ C°(x_p, x_s), hence x_{s_{k+1}} = x_s and x_{p_{k+1}} ∈ C(−x_s, x_p), i.e.,
 −x_{p_{k+1}} ∈ C(x_s, −x_p).  (30)
If α_Δ/α_p ≥ θ_0, then (28) and (29) imply
 z_1 ∈ [Δ, Δ_{k+1}] ⊆ Δ_{k+1} + Q(−x_p),
hence
 (Δ_{k+1} + Q(−x_p)) ∩ H ≠ ∅.  (31)
Furthermore, δ ≤ 0 and
 Δ_{k+1} − δ x_s = (1 − α_Δ/α_p) z ∈ H ∩ (Δ_{k+1} + Q(x_s)).  (32)
From (30)–(32), due to (23), we obtain (Δ_{k+1} + Q(−x_{p_{k+1}})) ∩ H ≠ ∅, which means that (22) holds for t = k + 1. Alternatively, if α_Δ/α_p < θ_0, then δ > 0 and
 Δ_{k+1} − δ x_{s_{k+1}} = Δ_{k+1} − δ x_s = (1 − α_Δ/α_p) z ∈ H ∩ (Δ_{k+1} + Q(−x_{s_{k+1}})),
and again (22) holds for t = k + 1.
Subcase B (Δ + x_p ∉ G, Δ + x_s ∈ G).
We have π_{k+1} = s, Δ_{k+1} = Δ + x_s ∈ G, i.e., (21) holds for t = k + 1. Let us prove (22). Due to (23), (25), we have
 −Δ_{k+1} = (α_p/α_Δ) x_p + (α_s/α_Δ − 1) x_s ∈ C°(x_p, x_s),
which with (23) implies
 ∠C(−x_s, −Δ_{k+1}) < π,  (33)
 x_{s_{k+1}} ∈ C(x_s, −x_p) ⊆ C(x_s, Δ_{k+1}), i.e., −x_{s_{k+1}} ∈ C(−x_s, −Δ_{k+1}).  (34)
Since (Δ_{k+1} + Q(−Δ_{k+1})) ∩ H ≠ ∅ and, due to (24), (Δ_{k+1} + Q(−x_s)) ∩ H ⊇ (Δ + Q(−x_s)) ∩ H ≠ ∅, this with (34), (33) implies (Δ_{k+1} + Q(−x_{s_{k+1}})) ∩ H ≠ ∅, which proves (22) for t = k + 1.
Subcase C (Δ + x_p ∉ G, Δ + x_s ∉ G).
We have π_{k+1} = p, π_{k+2} = s, Δ_{k+2} = Δ + x_p + x_s, and −Δ_{k+2} ∈ C°(x_p, x_s), which with (23) implies x_{p_{k+2}} ∈ C(−x_s, x_p), x_{s_{k+2}} ∈ C(x_s, −x_p), i.e.,
 −x_{p_{k+2}} ∈ C(x_s, −x_p),  (35)
 −x_{s_{k+2}} ∈ C(−x_s, x_p).  (36)
Having defined z_1, y_1, λ, θ_0 as in Case 1, observe that the case Δ_{k+1} ∈ [Δ, z_1] is infeasible (otherwise, this would imply Δ_{k+1} ∈ G). Therefore, z_1 ∈ (Δ, Δ_{k+1}], which implies θ_0 < α_Δ/α_p and δ < 0. Consider the cases δ ≥ −1 and δ < −1 separately.
Case 1 (δ ∈ [−1, 0)).
Since Δ_{k+1} − δ x_s ∈ [Δ_{k+1}, Δ_{k+2}] and Δ_{k+1} ∉ G, from (15) and ‖Δ_{k+2} − Δ_{k+1}‖ = ‖x_s‖ ≤ 1 we obtain (21) for t = k + 2. Let us show that (22) holds for t = k + 2 and x_0 = x_{s_{k+2}}. Indeed, we have
 Δ_{k+2} − (1 + δ) x_s = Δ_{k+1} − δ x_s = (1 − α_Δ/α_p) z ∈ (Δ_{k+2} + Q(−x_s)) ∩ H.  (37)
Furthermore,
 Δ_{k+2} + (α_p(1 + δ)/(α_s + λα_Δ)) x_p = ((α_s − α_Δ)/(α_s + λα_Δ)) Δ − ((α_s − α_Δ)/(α_s + λα_Δ)) λ x_s
 = ((α_s − α_Δ)/(α_s + λα_Δ))(Δ − λ x_s) = θ′ z =: z_2,  (38)
where θ′ := (α_s − α_Δ)/(α_s + λα_Δ) ∈ (0, 1). Hence z_2 ∈ H and
 (Δ_{k+2} + Q(x_p)) ∩ H ≠ ∅.  (39)
From (36), (37), (39), (23) we obtain
 (Δ_{k+2} + Q(−x_{s_{k+2}})) ∩ H ≠ ∅,
which means that (22) holds for t = k + 2.
Case 2 (δ < −1).
It follows from (38) that z_2 = Δ_{k+2} + δ_2 x_p ∈ H, where δ_2 = α_p(1 + δ)/(α_s + λα_Δ) ∈ (−1, 0). Since we have, first, [Δ + x_s, Δ_{k+2}] ∩ H ≠ ∅, second, ‖Δ_{k+2} − Δ − x_s‖ = ‖x_p‖ ≤ 1, and third, Δ + x_s ∉ G, due to (15) we obtain Δ_{k+2} ∈ G. So, (21) holds for t = k + 2. Let us show (22) for t = k + 2 and x_0 = x_{p_{k+2}}. Since z_2 ∈ Δ_{k+2} + Q(−x_p), this implies
 (Δ_{k+2} + Q(−x_p)) ∩ H ≠ ∅.  (40)
Furthermore, it follows from (37) that
 Δ_{k+2} − (1 + δ) x_s = (1 − α_Δ/α_p) z ∈ (Δ_{k+2} + Q(x_s)) ∩ H.  (41)
From (35), (40), (41), and (23) we derive (Δ_{k+2} + Q(−x_{p_{k+2}})) ∩ H ≠ ∅, which means that (22) holds for t = k + 2. Lemma 8 is proved.
Now we can proceed with the proof of Theorem 2. It is clear that the complexity of the first stage of the algorithm A is O(n log n), since both the numbering of the vectors and the construction of the search tree T_0 can be implemented within this time. The same running time O(n log n) suffices for implementing Stage 2 of the algorithm, since at each step k = 1, ..., n the search for the nodes p_k, s_k and the subsequent removal of one of them from the search tree T_k can be implemented in O(log n) time. (We assume here that checking the inclusions x ∈ H and x ∈ G, which we have to do at each step of Stage 2 for the vectors x = Δ + x_p, x = Δ + x_s, requires constant time. This is evidently true in those special cases of Theorem 2 that will be considered in Corollaries 1–3. Moreover, the bound on the running time declared in the theorem remains true even if each such check requires O(log n) time.)
To complete the proof of Theorem 2, it remains to show that the permutation π constructed by the algorithm A provides a nonstrict summation of the vectors X within G. This is easily proved by induction on the steps t = 0, 1, ..., n. The basis of induction is the relation Δ_t = 0 ∈ H ⊆ G (t = 0, 1, ..., k_0), which implies (21), (22) for t = 0, 1, ..., k_0, whereas the induction step is justified by Lemma 8. Theorem 2 is proved.
4 Optimization problems on nonstrict vector summation within families of half-spaces

Definition 6. We say that a permutation π provides nonstrict summation of the vectors in X ⊆ IR^m within a family of domains P = {G_i | i = 1, ..., κ}, and denote this by π(X) ∈ S_m(P), if π provides nonstrict summation within each domain G_i ∈ P, i = 1, ..., κ. For any family of domains G_i ⊆ IR^m, the following remark is valid.
Remark 5. If a permutation π provides nonstrict summation of the vectors in X within the intersection G = ∩_{i=1}^{k} G_i of the domains, then it provides nonstrict summation of these vectors within the family of domains {G_i | i = 1, ..., k}.
The converse is not true. Indeed, when a family of vectors is summed within the intersection G of the domains of a given family, at least every other partial sum must lie in G, whereas G may contain no partial sums at all when the vectors are summed within the family of domains (as shown in Fig. 4 for an instance of a family consisting of two half-spaces). Moreover, the intersection G may be empty.
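This "at least every other partial sum" property suggests a simple membership test; the sketch below is our own reading of it (with a hypothetical predicate `in_G`), checking that no two consecutive partial sums leave a single domain G:

```python
def nonstrict_within(vectors, perm, in_G):
    # Sum 2-dimensional vectors in the order given by perm and verify that
    # no two consecutive partial sums lie outside the domain G.
    total = (0.0, 0.0)
    prev_out = False
    for idx in perm:
        total = (total[0] + vectors[idx][0], total[1] + vectors[idx][1])
        out = not in_G(total)
        if out and prev_out:
            return False
        prev_out = out
    return True
```

Summation within a family of domains then simply amounts to running this test once per domain.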
Figure 4: Nonstrict summation within two half-spaces but not within their intersection

We define a norm ŝ on IR^m by its unit ball Ĥ_m, centered at the origin and specified by the formula
 Ĥ_m = {x ∈ IR^m | |x(i_1)| ≤ 1, |x(i_1) − x(i_2)| ≤ 1 ∀ i_1, i_2 = 1, ..., m},
where x = (x(1), ..., x(m)). The norm ŝ plays an important role in the application of nonstrict vector summation to scheduling problems, as demonstrated in Section 5. The following proposition gives a more visual representation of the unit ball Ĥ_m.
Proposition 5. Ĥ_m = conv(B ∪ (−B)), where B = [0, 1]^m.
For a given vector c ∈ IR^m, c ≠ 0, and a number b, the closed half-space {x ∈ IR^m | (c, x) ≤ b} will be denoted by P(c, b), while the open half-space {x ∈ IR^m | (c, x) < b} will be denoted by P°(c, b). Now we formulate a few problems on finding a nonstrict summation of a given ŝ-family of vectors X ⊆ IR^m within optimal families of half-spaces in IR^m. The problems will be denoted by NVSi(m) for i = 1, 2, .... The notation NVSi without the parameter m means that the problem is defined only in IR^2.
NVS1(m). Given an ŝ-family X = {x_1, ..., x_n} ⊆ IR^m, we wish to find a permutation π = (π_1, ..., π_n) and a family of numbers b_{m+1} = {β_1, ..., β_{m+1}} such that π(X) ∈ S_m(P_1(m, b_{m+1})) for
 P_1(m, b_{m+1}) = {P(e_1 − e_2, β_1), ..., P(e_{m−1} − e_m, β_{m−1}), P(e_m, β_m), P(−e_1, β_{m+1})}
and the objective function
 φ_1(b_{m+1}) = Σ_{i=1}^{m+1} β_i
is minimum.
NVS2(m). Given an ŝ-family X = {x_1, ..., x_n} ⊆ IR^m, we wish to find a permutation π = (π_1, ..., π_n) and a family of numbers b_m = {β_1, ..., β_m} such that π(X) ∈ S_m(P_2(m, b_m)) for
 P_2(m, b_m) = {P(e_1 − e_2, β_1), ..., P(e_{m−1} − e_m, β_{m−1}), P(e_m, β_m)}
and the objective function
 φ_2(b_m) = Σ_{i=1}^{m} β_i
is minimum.
Set N_6 := {1, ..., 6}. Introduce the cyclic order 1 → 2 → ... → 6 → 1 on the set N_6, and for each item i ∈ N_6 denote its predecessor and its successor in this order by p(i) and s(i), respectively. Let I be the set of non-neighbouring pairs of items in N_6, i.e.,
 I := {(i, j) | i, j ∈ N_6, i ≠ j, i ≠ s(j), i ≠ p(j)}.
NVS3. Given an ŝ-family X = {x_1, ..., x_n} ⊆ IR^2 and the vectors a_1 = e_1, a_2 = e_2, a_3 = e_2 − e_1, a_4 = −e_1, a_5 = −e_2, a_6 = e_1 − e_2, we wish to find a permutation π = (π_1, ..., π_n) and a family of numbers b_6 = {β_1, ..., β_6} such that π(X) ∈ S_2(P_3(2, b_6)) for
 P_3(2, b_6) = {P(a_1, β_1), ..., P(a_6, β_6)}
and the objective function
 φ_3(b_6) = max_{(i,j)∈I} (β_i + β_j)
is minimum.
NVS4. Given an ŝ-family X = {x_1, ..., x_n} ⊆ IR^2, we wish to find a permutation π = (π_1, ..., π_n) and a family of numbers b_3 = {β_1, β_2, β_3} such that π(X) ∈ S_2(P_4(2, b_3)) for P_4(2, b_3) = P_1(2, b_3) and the objective function
 φ_4(b_3) = max{β_1 + β_2, β_2 + β_3}
is minimum.
NVS5(m). Given an ŝ-family X = {x_1, ..., x_n} ⊆ IR^m, we wish to find a permutation π = (π_1, ..., π_n) and a family of numbers b_m = {β_1, ..., β_m} such that π(X) ∈ S_m(P_5(m, b_m)) for
 P_5(m, b_m) = {P(e_1, β_1), ..., P(e_m, β_m)}
and the objective function
 φ_5(b_m) = max_{i=1,...,m} β_i
is minimum.
NVS6. Given an ŝ-family X = {x_1, ..., x_n} ⊆ IR^2, we wish to find a permutation π = (π_1, ..., π_n) and a family of numbers b_4 = {β_1, ..., β_4} such that π(X) ∈ S_2(P_6(2, b_4)) for
 P_6(2, b_4) = {P(e_1, β_1), P(e_2, β_2), P(e_2 − e_1, β_3), P(e_1 − e_2, β_4)}
and the objective function
 φ_6(b_4) = max{β_1 + β_3, β_2 + β_4}
is minimum.
Let us prove a lemma which provides a reduction of the NVS2 problem in m-dimensional space to the NVS1 problem in (m ; 1)-dimensional space.
Lemma 9. If we have an algorithm A that for some integer m ≥ 2 and any ŝ-family of vectors X = {x_1, ..., x_n} ⊆ IR^{m−1} solves the NVS1(m − 1) problem in time T_A(n) with bound φ_1(b_m) ≤ φ̄, then we can solve the NVS2(m) problem with bound φ_2(b_m) ≤ φ̄ in time O(T_A(n) + n).
Proof. Suppose that we have an algorithm A as stated in the lemma, and let an ŝ-family X = {x_1, ..., x_n} ⊆ IR^m be given. The desired algorithm for solving the NVS2(m) problem consists of three stages.
Algorithm A′
1. For each vector x_j ∈ X ⊆ Ĥ_m define its projection x′_j = (0, x_j(2), ..., x_j(m)) to the hyperplane Γ = {x ∈ IR^m | x(1) = 0}. Set X′ = {x′_1, ..., x′_n}. It follows from the definition of the norm ŝ that X′ ⊆ Ĥ_{m−1}, i.e., all vectors in X′ have at most unit length in the norm ŝ defined on the space Γ ≅ IR^{m−1} of coordinates 2, ..., m. And since Σ_{j=1}^{n} x′_j = 0, the family X′ is an ŝ-family in IR^{m−1}.
2. Find an approximate solution to the NVS1 problem in the space Γ with input X′, i.e., find a permutation σ = (σ_1, ..., σ_n) which provides a nonstrict summation of the vectors X′ within a family
 P′ = {P′(e_2 − e_3, β_2), P′(e_3 − e_4, β_3), ..., P′(e_{m−1} − e_m, β_{m−1}), P′(e_m, β_m), P′(−e_2, β_1)} = {P′(a_i, β_i) | i = 1, ..., m}
of half-spaces in IR^{m−1}:
 σ(X′) ∈ S_{m−1}(P′)  (42)
such that
 φ_1(b_m) = Σ_{i=1}^{m} β_i ≤ φ̄.  (43)
3. Let y_ν = Σ_{j=1}^{ν} x_{σ_j} be the νth node of the summation trajectory of the vectors x_j ∈ X according to the permutation σ. Find k such that y_k(1) = max_{ν=1,...,n} y_ν(1), and define the desired permutation π = (π_1, ..., π_n) as the cyclic shift of σ by k items:
 π_j = σ_{j+k} for j + k ≤ n; π_j = σ_{j+k−n} for j + k > n.
Algorithm A′ is described. Assign to each half-space P′(a, β) ⊆ Γ, a ∈ Γ, the half-space P(a, β) ⊆ IR^m, and let P be the family of half-spaces in IR^m assigned to the half-spaces of P′. Denote y′_ν = Σ_{j=1}^{ν} x′_{σ_j}. Since each vector x′_j differs from x_j by a vector collinear with the basis vector e_1, we have y_ν − y′_ν = ξ e_1 for some ξ ∈ IR. Hence, for any a ∈ Γ, we obtain
 (y_ν, a) = (y′_ν, a) + ξ (e_1, a) = (y′_ν, a).
If y′_ν ∈ P′(a_i, β_i) ∈ P′, then y_ν ∈ P(a_i, β_i) ∈ P. Thus, (42) implies
 σ(X) ∈ S_m(P),  (44)
i.e., a nonstrict summation of the vectors X within the family P.
Denote z_ν = Σ_{j=1}^{ν} x_{π_j}. Due to Σ_{j=1}^{n} x_j = 0, we have
 z_ν = y_{k+ν} − y_k for k + ν ≤ n; z_ν = y_{k+ν−n} − y_k for k + ν > n.
Hence, z_ν(1) ≤ 0, i.e.,
 z_ν ∈ P(e_1, 0), ν = 1, ..., n.  (45)
While summing the vectors in X, the cyclic shift of the permutation σ by k items is equivalent to moving the origin to the kth node of the summation trajectory of the vectors X (i.e., to the point y_k), the resulting summation trajectory coinciding with the original one up to a parallel translation by the vector y* := −y_k. Hence, this trajectory provides a nonstrict summation of the vectors X with respect to the family of shifted half-spaces P + y* := {P + y* | P ∈ P}, i.e., π(X) ∈ S_m(P + y*). Let us show that such a shift of all half-spaces in P by the same vector y* does not change the value of the objective function φ_1. Indeed,
 x ∈ P(a_i, β_i) + y* ⟺ (a_i, x − y*) ≤ β_i ⟺ (a_i, x) ≤ β_i + (a_i, y*) =: β′_i ⟺ x ∈ P(a_i, β′_i).
Thus, the permutation π provides a nonstrict summation of the vectors X within the family of half-spaces {P(a_i, β′_i) | i = 1, ..., m}, where
 Σ β′_i = Σ β_i + (Σ_{i=1}^{m} a_i, y*) = Σ β_i + ((e_2 − e_3) + (e_3 − e_4) + ... + (e_{m−1} − e_m) + e_m − e_2, y*) = Σ β_i.
Finally, it follows from (45) that if z_ν ∈ P(−e_2, β′_1), then
 z_ν ∈ P(e_1, 0) ∩ P(−e_2, β′_1) ⊆ P(e_1 − e_2, β′_1).
This means that the permutation π provides a nonstrict summation of the vectors X within the half-space P(e_1 − e_2, β′_1) as well. Replacing the half-space P(−e_2, β′_1) in the family P + y* by the half-space P(e_1 − e_2, β′_1), we obtain a family of half-spaces for the NVS2(m) problem with the bound
 φ_2(b_m) = Σ_{i=1}^{m} β′_i = Σ_{i=1}^{m} β_i ≤ φ̄
on its objective function guaranteed. Evidently, the running time of the algorithm A′ meets the upper bound declared in the lemma. Lemma 9 is proved.
In the rest of this section, we consider the problems NVSi in two-dimensional space. Let G_i(b) stand for the intersection of the domains in the NVSi problem:
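The shift-invariance of φ_1 in the proof above rests on the normals a_i of the family summing to zero (telescoping); a quick check (our own construction of the normals, 1-based basis vectors):

```python
def nvs1_normals(m):
    # Normals of the family P' from the proof of Lemma 9, living in the
    # coordinates 2..m of IR^m: e2-e3, ..., e_{m-1}-e_m, e_m, -e2.
    def e(i):
        v = [0] * m
        v[i - 1] = 1
        return v
    normals = [[a - b for a, b in zip(e(i), e(i + 1))] for i in range(2, m)]
    normals.append(e(m))
    normals.append([-v for v in e(2)])
    return normals
```

There are exactly m normals, and they cancel coordinate by coordinate, so Σ β′_i = Σ β_i for any shift y*.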
 G_i(b) = ∩_{P ∈ P_i(2, b)} P.
Consider the domain G_1(b_3) = G_4(b_3) for b_3 = {1, 1, 1} and the domain G_3(b_6) for b_6 = {5/4, ..., 5/4} (see Fig. 5A). Representing the basis vectors e_1, e_2 of the coordinate system so that the angle between their images equals 120° and their lengths are equal, we obtain a linear-homothetic representation of our vector space in the plane such that the unit ball Ĥ_2 of the norm ŝ is represented by the regular hexagon with unit sides, the domain G_3 becomes the
regular hexagon with sides 5/4, and the domain G_1 becomes the regular triangle circumscribed around the unit ball Ĥ_2 (see Fig. 5B). Since both domains G_1(b_3) and G_3(b_6) are closed, contain Ĥ_2, and the ŝ-norm of each chord of each domain intersecting Ĥ_2 is at least 1, condition (15) of Theorem 2 is satisfied due to Remark 4. This implies that each ŝ-family X ⊆ IR^2 can be nonstrictly summed within each of these domains. Due to Remark 5, the vectors in X can then be nonstrictly summed within the corresponding families of half-spaces. Thus, we obtain the following corollaries of Theorem 2.

Figure 5: Domains G_1 (G_4) and G_3
Corollary 1. There is an algorithm which, given an instance of the NVS1(2) problem with n vectors, finds its solution in O(n log n) time with bound φ_1(b_3) ≤ 3.
Corollary 2. There is an algorithm which, given an instance of the NVS3 problem with n vectors, finds its solution in O(n log n) time with bound φ_3(b_6) ≤ 2.5.
Corollary 3. There is an algorithm which, given an instance of the NVS4 problem with n vectors, finds its solution in O(n log n) time with bound φ_4(b_3) ≤ 2.
Now consider the domains G_5, G_6 for the problems NVS5(2) and NVS6.
Proposition 6. Given families of numbers b′ = {β′_1, β′_2} and b″ = {β″_1, ..., β″_4} with nonnegative items, each O-chord h′ of the domain G_5(b′) has length
 ‖h′‖_ŝ ≥ (√β′_1 + √β′_2)²,
and each O-chord h″ of the domain G_6(b″) has length
 ‖h″‖_ŝ ≥ min{β″_3 + β″_4, β″_1 + β″_3, β″_2 + β″_4, β″_1 + β″_2 + 2√(β″_1 β″_2)}.
Using Proposition 6, we can easily find items {β′_i}, {β″_i} of the families b′, b″ such that each O-chord h′ of the domain G_5(b′) and each O-chord h″ of the domain G_6(b″) have length at least 1. For example, this property holds for b′ = {1/4, 1/4}, b″ = {1/2, ..., 1/2} (see Fig. 6 A and B), and so the conditions of Theorem 1 are satisfied. This with Remark 5 implies the following corollaries.
Figure 6: Domains G5 and G6
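Before the corollaries, a quick numeric check of the chord bounds for the choices b′ = {1/4, 1/4} and b″ = {1/2, ..., 1/2} (hypothetical helper names):

```python
from math import sqrt

def g5_chord_bound(b1, b2):
    # Lower bound on the s-hat length of an O-chord of G5(b').
    return (sqrt(b1) + sqrt(b2)) ** 2

def g6_chord_bound(b1, b2, b3, b4):
    # Lower bound on the s-hat length of an O-chord of G6(b'').
    return min(b3 + b4, b1 + b3, b2 + b4, b1 + b2 + 2 * sqrt(b1 * b2))
```

Both bounds evaluate to exactly 1 for these families, which is the chord condition required by the theorem.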
Corollary 4. There is an algorithm which, given an instance of the NVS5(2) problem with n vectors, finds its solution in O(n log n) time with the best possible bound φ_5(b_2) ≤ 1/4 =: φ̄_5.
Corollary 5. There is an algorithm which, given an instance of the NVS6 problem with n vectors, finds its solution in O(n log n) time with the best possible bound φ_6(b_4) ≤ 1 =: φ̄_6.
That the bounds in Corollaries 4 and 5 are best possible follows from Theorem 3 stated below. To give its formulation, we first formulate two decision problems: NVS5(2, ε) and NVS6(ε). Suppose that we are given an arbitrary ε ∈ (0, 1).
NVS5(2, ε).
INSTANCE: a positive integer D ∈ Z_+ and a family of vectors X = {x_1, ..., x_n} ⊆ Z² such that ‖x_j‖_ŝ ≤ D, Σ(X) = 0.
QUESTION: do there exist a permutation π = (π_1, ..., π_n) and a family of nonnegative integers b_2 such that π(X) ∈ S_2(P_5(2, b_2)) and φ_5(b_2) ≤ ε φ̄_5 D?
NVS6(ε).
INSTANCE: the same.
QUESTION: do there exist a permutation π = (π_1, ..., π_n) and a family of nonnegative integers b_4 such that π(X) ∈ S_2(P_6(2, b_4)) and φ_6(b_4) ≤ ε φ̄_6 D?
Theorem 3. Given any ε ∈ (0, 1), the problems NVS5(2, ε) and NVS6(ε) are strongly NP-hard.
For the proof, we refer the reader to [6].
5 Algorithms for multi-operation scheduling problems

In this section we assume that all machine loads are equal, i.e.,
 l_i = l_max, i = 1, ..., m.  (46)
If this is not the case (i.e., l_i < l_max for some i), we can attain (46) by increasing the processing times of some operations on machine M_i without changing the values of l_max and p_max. (The total running time of this "alignment" procedure over all machines is O(nm).) If we consider a feasible schedule S = {s_{ji}} for the problem instance so obtained, we will see, first, that it is feasible for the initial instance as well (with shorter operations) and, second, that each theorem formulated in terms of a relation between the parameters l_max and p_max and valid for the new instance with property (46) is also valid for the original instance, since l_max and p_max remain the same. For each job J_j ∈ J, we define a vector
 d_j = (d_j(1), d_j(2), ..., d_j(m−1)) := (p_{j1} − p_{jm}, ..., p_{j,m−1} − p_{jm}) ∈ IR^{m−1}.  (47)
From (46) we have Σ_{j=1}^{n} d_j = 0. Furthermore, by the definition of d_j, we have the properties
 |d_j(i)| ≤ p_max, |d_j(i_1) − d_j(i_2)| ≤ p_max for all i, i_1, i_2 ∈ {1, ..., m−1}.
Thus, taking p_max as a new unit for measuring time, we obtain p_max = 1, and the family of vectors D = {d_1, ..., d_n} becomes an ŝ-family, according to the definition of the norm ŝ given in Section 4.
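The alignment procedure and the construction (47) can be sketched together (a greedy padding scheme of our own; the paper only needs that some padding with these properties exists):

```python
def align_loads(p):
    # Pad operations so that every machine load equals l_max, without
    # changing l_max or p_max; p[j][i] is the time of job j on machine i.
    n, m = len(p), len(p[0])
    pmax = max(max(row) for row in p)
    loads = [sum(p[j][i] for j in range(n)) for i in range(m)]
    lmax = max(loads)
    for i in range(m):
        slack = lmax - loads[i]
        for j in range(n):
            if slack <= 0:
                break
            add = min(slack, pmax - p[j][i])  # never exceed p_max
            p[j][i] += add
            slack -= add
    return p, lmax, pmax

def job_vectors(p, pmax):
    # d_j of (47), rescaled by p_max so that D becomes an s-hat family.
    m = len(p[0])
    return [[(row[i] - row[m - 1]) / pmax for i in range(m - 1)] for row in p]
```

Greedy padding always succeeds because machine i can absorb up to n·p_max − l_i ≥ l_max − l_i of extra time.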
Given a permutation π = (π_1, ..., π_n), denote
 P_k(i) := Σ_{j=1}^{k} p_{π_j i}, D_k := Σ_{j=1}^{k} d_{π_j}.
Let also {e_1, ..., e_{m−1}} be the set of basis vectors in IR^{m−1}. The difference ΔP_k(i_1, i_2) := P_k(i_1) − P_{k−1}(i_2) can be bounded from above for any i_1, i_2 ∈ {1, ..., m} and any k ∈ {1, ..., n} as follows:
 ΔP_k(i_1, i_2) = P_{k−1}(i_1) − P_{k−1}(i_2) + p_{π_k i_1} = P_k(i_1) − P_k(i_2) + p_{π_k i_2}
 ≤ min{P_{k−1}(i_1) − P_{k−1}(i_2), P_k(i_1) − P_k(i_2)} + p_max
 (using the definition of the d_j and setting d_j(m) := 0, D_k(m) := 0, e_m := 0)
 = p_max + min{D_{k−1}(i_1) − D_{k−1}(i_2), D_k(i_1) − D_k(i_2)}
 = p_max + min{(e_{i_1} − e_{i_2}, D_{k−1}), (e_{i_1} − e_{i_2}, D_k)}.  (48)
Using this representation of ΔP_k(i_1, i_2), we can formulate the following
Remark 6. Consider the family D of vectors defined by (47). If a permutation π provides a nonstrict summation of the vectors D within a half-space P(e_{i_1} − e_{i_2}, β) for i_1, i_2 ∈ {1, ..., m}, i_1 ≠ i_2, β ∈ IR, then it provides the bound
 ΔP_k(i_1, i_2) ≤ p_max + β for any k = 1, ..., n.
Indeed, under a nonstrict summation at least one of any two consecutive partial sums D_{k−1}, D_k lies in the half-space, so the minimum in (48) is at most β.
For each scheduling problem considered in this section, a polynomial-time algorithm for constructing a schedule will be described. In the case of the open shop problem, the schedule will be proved to be optimal under certain conditions. For the schedules constructed for the other problems, upper bounds on the schedule length will be derived that provide an absolute performance guarantee for each algorithm considered. The description of each such algorithm is unified and consists of two stages. At the first stage we compute the family of vectors D = {d_1, ..., d_n} ⊆ IR^{m−1} defined by (47) (assuming that (46) already holds) and find a permutation π = (π_1, ..., π_n) which provides a nonstrict summation of the vectors D within the family of half-spaces specified in the corresponding NVSi problem. At the second stage we construct the desired schedule S.
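The bound (48) can be checked numerically; the sketch below (our own helpers, 0-based machine indices, with the convention d_j(m) = 0) evaluates both sides:

```python
def delta_P(p, perm, i1, i2, k):
    # DeltaP_k(i1, i2) = P_k(i1) - P_{k-1}(i2) for 1-based step k.
    return (sum(p[perm[j]][i1] for j in range(k))
            - sum(p[perm[j]][i2] for j in range(k - 1)))

def rhs_48(p, perm, i1, i2, k):
    # p_max + min over the two consecutive partial sums D_{k-1}, D_k
    # of the inner product with e_{i1} - e_{i2}.
    m = len(p[0])
    pmax = max(max(row) for row in p)
    d = [[row[i] - row[m - 1] for i in range(m)] for row in p]  # d_j(m) = 0
    D = lambda t, i: sum(d[perm[j]][i] for j in range(t))
    return pmax + min(D(k - 1, i1) - D(k - 1, i2), D(k, i1) - D(k, i2))
```

The inequality holds for every permutation because it only replaces one operation length by p_max.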
5.1 Polynomially solvable case of the open shop problem

Theorem 4. If we have an algorithm A that for some integer m ≥ 2 and any ŝ-family of vectors X = {x_1, ..., x_n} ⊆ IR^{m−1} solves the NVS1(m − 1) problem in time T_A(m − 1, n) with bound φ_1(b_m) ≤ φ̄_1(m − 1), then, given an instance of the open shop problem with m machines and n jobs that meets the condition
 l_max ≥ (φ̄_1(m − 1) + 2m − 2) p_max,  (49)
its optimal schedule has length C*_max = l_max and can be constructed in O(T_A(m − 1, n) + mn) time.
Proof. We describe an algorithm A1 constructing an optimal schedule for any instance of the open shop problem that meets (49).
Algorithm A1
Stage 1. Find a permutation π = (π_1, ..., π_n) which provides a nonstrict summation of the vectors D ⊆ IR^{m−1} within a family of half-spaces
 {P(e_1 − e_2, β_1), ..., P(e_{m−1} − e_m, β_{m−1}), P(e_m − e_{m+1}, β_m)},
where e_m := 0, e_{m+1} := e_1, and
 Σ_{i=1}^{m} β_i ≤ φ̄_1(m − 1) p_max.  (50)
Stage 2. Construct a schedule of length l_max for the given instance as follows.
For each machine M_i ∈ M, we construct its schedule S_i that starts at time 0, terminates at time l_max, and in which the jobs J_j ∈ J are processed on this machine without any delay according to the permutation π. (Obviously, the set of schedules {S_1, ..., S_m} does not form a feasible schedule for the whole model.) These schedules will afterwards be transformed: each schedule S_i will be shifted to the right by a certain amount, and then the part of the schedule that stretches to the right of l_max is cut off and moved to the vacant place between time zero and the starting point of the schedule. Thus, it is convenient to represent the schedules {S_i} as rings {K_i} on a "drum" with circumference length equal to l_max. The ring K_1 is assumed to be immovable, the starting point of S_1 being taken for the origin. Put
 θ_i := max_{k=1,...,n} ΔP_k(i, i+1), i = 1, ..., m,  (51)
where, while defining ΔP_k(m, m+1), we assume p_{j,m+1} := p_{j,1}. Let k* be the (first) value of k at which the maximum on the right-hand side of (51) is attained for i = 1. Thus,
 θ_1 = ΔP_{k*}(1, 2) = P_{k*}(1) − P_{k*−1}(2).  (52)
Perform m − 1 successive rotations (we number them 2, 3, ..., m) of the rings K_2, ..., K_m, keeping the ring K_1 immovable.
Rotation 2. Rotate each of the rings K_2, ..., K_m to the right by the amount θ_1, retaining their inter-location. Due to (52), the time t* := P_{k*}(1) coincides with the completion times of the operations
of job π_{k*} on machine M_1 and of job π_{k*−1} on machine M_2. The time t* will afterwards be taken as the splitting point of the schedule on the drum.
Rotation ν (ν = 3, ..., m). Rotate each of the rings K_ν, ..., K_m to the right by the amount θ_{ν−1} and then by the minimum additional amount ε_{ν−1} ≥ 0 such that the time t* coincides with the completion time of an operation on machine M_ν. Clearly,
 ε_ν < p_max, ν = 2, ..., m − 1.  (53)
The total rotation (over all m − 1 rotations) of the ring K_m to the right with respect to K_1 equals θ_0 := Σ_{i=1}^{m−1} θ_i + Σ_{i=2}^{m−1} ε_i, which is equivalent to its rotation to the left by l_max − θ_0, or, which is the same, to a rotation of the ring K_1 to the right with respect to K_m by that amount. Suppose that the instance under consideration meets the condition
 l_max − θ_0 ≥ θ_m.  (54)
Then the schedule S′ obtained as the result of all m − 1 rotations is clearly feasible on the drum. The latter means that the time intervals for processing any two operations of the same job do not overlap in the schedule S′. Now, to obtain a feasible schedule of length l_max, it suffices to split all rings of the drum at the point t*. Since no time interval for processing an operation can be split by this procedure (by the choice of the amounts {ε_ν}, the point t* coincides with the completion time of an operation on each machine), the resulting schedule is feasible. This completes the description of algorithm A1. It clearly runs within the time declared in the theorem.
One can observe from the above description that algorithm A1 always constructs a schedule (possibly infeasible) of length l_max. Furthermore, in the course of the description it was proved that the schedule constructed is feasible (and hence optimal, its length being equal to l_max) as soon as the instance under consideration meets (54). Suppose now that we are given an instance of the open shop problem that meets condition (49). Let us prove that it meets (54). Indeed,
 θ_0 + θ_m = Σ_{i=1}^{m} θ_i + Σ_{i=2}^{m−1} ε_i
 ≤ (due to Stage 1 of the algorithm, (51), (53), and Remark 6)
 ≤ Σ_{i=1}^{m} max_k ΔP_k(i, i+1) + (m − 2) p_max ≤ Σ_{i=1}^{m} (β_i + p_max) + (m − 2) p_max
 ≤ (φ̄_1(m − 1) + 2m − 2) p_max ≤ (from (49)) ≤ l_max.
Theorem 4 is proved.
The theorem and Corollary 1 imply
Theorem 5. Given an instance of the open shop problem with three machines and n jobs that meets the condition
 l_max ≥ 7 p_max,  (55)
its optimal schedule has length C*_max = l_max and can be constructed in O(n log n) time.
Observe that if an instance does not meet (55) but meets (54), then Algorithm A1 still succeeds.
5.2 Assembly-line problem

Theorem 6. If we have an algorithm A that for some integer m ≥ 2 and any ŝ-family of vectors X = {x_1, ..., x_n} ⊆ IR^{m−1} solves the NVS5(m − 1) problem in time T_A(m − 1, n) with bound φ_5(b_{m−1}) ≤ φ̄_5(m − 1), then, given an instance of the assembly-line problem with m machines and n jobs, its schedule can be constructed in O(T_A(m − 1, n) + mn) time with bound
 C_max ≤ l_max + (1 + φ̄_5(m − 1)) p_max.
Proof. It was shown in [4] that we may restrict our investigation to permutation schedules only, and that the length of the permutation schedule S_π specified by a permutation π can be written as
 C_max(S_π) = max_{i=1,...,m−1} max_{k=1,...,n} ( Σ_{j=1}^{k} p_{π_j i} + Σ_{j=k}^{n} p_{π_j m} ).  (56)
The description of the algorithm is simple. At the first stage we compute a permutation π = (π_1, ..., π_n) which gives a solution to the NVS5(m − 1) problem with the vectors D, i.e., provides their nonstrict summation within a family of half-spaces {P(e_1, β_1), ..., P(e_{m−1}, β_{m−1})} such that
 max_i β_i ≤ φ̄_5(m − 1) p_max.  (57)
At the second stage we compute the permutation schedule S_π. Let us prove that the schedule S_π constructed in this way meets the desired bound. To this end, we transform formula (56), using (48), Remark 6, and (57):
 C_max(S_π) = max_{i=1,...,m−1} max_k (P_k(i) − P_{k−1}(m)) + l_max
 ≤ l_max + p_max + max_i max_k min{(e_i − e_m, D_{k−1}), (e_i − e_m, D_k)}
 ≤ l_max + p_max + max_i β_i ≤ l_max + (1 + φ̄_5(m − 1)) p_max.
Theorem 6 is proved.
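Formula (56) is easy to evaluate directly; a minimal sketch (our own function name, 0-based indices, with machine m−1 playing the role of the assembly machine M_m):

```python
def assembly_cmax(p, perm):
    # Schedule length of the permutation schedule S_pi via formula (56):
    # for each feeding machine i and each split point k, a prefix on
    # machine i plus the assembly-machine suffix starting at job k.
    n, m = len(p), len(p[0])
    best = 0.0
    for i in range(m - 1):
        for k in range(n):
            head = sum(p[perm[j]][i] for j in range(k + 1))
            tail = sum(p[perm[j]][m - 1] for j in range(k, n))
            best = max(best, head + tail)
    return best
```

For the two-machine toy instance p = [[2, 1], [1, 3]], the two job orders give lengths 6 and 5, so the permutation matters even on tiny inputs.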
In the case of three machines, this together with Corollary 4 implies
Theorem 7. Given an instance of the assembly-line problem with three machines and n jobs, its schedule can be constructed in O(n log n) time with the best possible bound
 C_max ≤ l_max + 1.25 p_max.
That the coefficient 1.25 in the bound above cannot be reduced can be justified by the instance consisting of n jobs with processing times (1, 1 − 1/n, 1 − 1/(2n)) and one more job with times (0, 1, 1/2). For this instance, we have p_max = 1, l_max = n, C_max ≥ n + 1.25 − 3/(4n).
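A quick check of the parameters of this tight instance (hypothetical helper):

```python
def tight_instance(n):
    # n jobs (1, 1 - 1/n, 1 - 1/(2n)) plus one job (0, 1, 1/2):
    # all three machine loads equal n and p_max = 1, so the bound of
    # Theorem 7 reads C_max <= n + 1.25 for this instance.
    jobs = [(1.0, 1.0 - 1.0 / n, 1.0 - 1.0 / (2 * n)) for _ in range(n)]
    jobs.append((0.0, 1.0, 0.5))
    loads = [sum(job[i] for job in jobs) for i in range(3)]
    pmax = max(t for job in jobs for t in job)
    return loads, pmax
```

The extra job makes all three loads exactly n while keeping p_max = 1, so the gap to the lower bound n + 1.25 − 3/(4n) vanishes as n grows.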
5.3 Akers–Friedman problem

Theorem 8. If we have an algorithm A that for any positive integer n and any ŝ-family X = {x_1, ..., x_n} ⊆ IR^2 solves the NVS3 problem in time T_A(n) with bound φ_3(b_6) ≤ φ̄_3 guaranteed, then, given an instance of the Akers–Friedman problem with three machines and n jobs, its schedule can be constructed in O(T_A(n) + n log n) time with bound
 C_max ≤ l_max + (3 + φ̄_3) p_max.  (58)
Proof. Describe an algorithm A3 which constructs the desired schedule. Algorithm A3 Stage 1. Applying algorithm A, nd a permutation = (1 : : : n) which provides a nonstrict summation of vectors D IR2 within a family of half-spaces
fP (e1 1) P (e2 2) P (e2 ; e1 3) P (;e1 4) P (;e2 5) P (e1 ; e2 6)g such that 3 (b6) 3 : For simplicity of notation, we assume that the jobs are numbered
according to permutation , i.e., that = (1 2 : : : n). Stage 2. Let oji denote the ith operation of job Jj . De ne a priority order p on the set of operations: 1. oj1 p ok3 oj2 p ok3 8j k 2 N , i.e., any rst or second operation of any job is more preferable than any third operation 2. oji p ok 8j < k i 2 f1 2g, i.e., the priority order on the set of rst and second operations is speci ed by permutation of job indices 3. oj3 p ok3 8j < k. The desired schedule S is now constructed by a greedy algorithm which 33
loads a machine (does not allow it to be idle) if there are operations available for
processing on that machine if at some point in time a machine becomes available and there are several operations available for processing on that machine, the algorithm schedules the operation with highest priority p . This completes the description of Algorithm A3 . For each machine Mi i = 1 2 3, at any point in time we arrange a queue for the set of operations available for processing on machine Mi as a \heap" 7, p.33], i.e., a binary tree T i with nodes-operations, and such that, rst, for the path from the root to any external node, the priority of nodes along the path is decreasing, and second, the length of each such path is O(log n). Then both the insertion of a new operation to the tree and deleting the operation with the highest priority (which is evidently located at the root of the tree) can be implemented in O(log n) time. Thus, the total running time of the algorithm is O(n log n), as declared in the theorem. Prove the bound (58). We can represent the schedule Si for a machine Mi as a sequence of work intervals si fi ) and idle intervals fi si +1 ) = 1 2 : : : Consider the rst operation oj1 = oji of a job Jj . Since it is available for processing since time zero, it cannot let an operation with less priority be scheduled before it. Hence, we can bound its completion time cji from above as follows:
$$c_{ji} \le \sum_{q=1}^{j} p_{qi} = P_j(i). \qquad (59)$$
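The priority-driven greedy dispatcher described above can be sketched as a small event simulation. This is an illustrative re-implementation with our own data layout and function names, not code from the paper; Python's `heapq` plays the role of the heap of [7]:

```python
import heapq
from collections import defaultdict

def greedy_schedule(jobs, priority):
    """Non-delay dispatcher: whenever a machine is free and operations are
    queued for it, start the queued operation with the smallest priority key.
    jobs[j] is the route of job j: a list of (machine, processing_time)."""
    release = [(0.0, j, 0) for j in range(len(jobs))]  # (time, job, op index)
    heapq.heapify(release)
    queue = defaultdict(list)      # machine -> heap of (priority key, job, op)
    free_at = defaultdict(float)   # machine -> time the machine becomes free
    start = {}                     # (job, op) -> starting time
    remaining = sum(len(route) for route in jobs)
    t = 0.0
    while remaining:
        while release and release[0][0] <= t:   # ops released by time t
            _, j, o = heapq.heappop(release)
            heapq.heappush(queue[jobs[j][o][0]], (priority(j, o), j, o))
        progressed = False
        for m in list(queue):
            if queue[m] and free_at[m] <= t:
                _, j, o = heapq.heappop(queue[m])
                start[(j, o)] = t
                free_at[m] = t + jobs[j][o][1]
                if o + 1 < len(jobs[j]):        # successor becomes available
                    heapq.heappush(release, (free_at[m], j, o + 1))
                remaining -= 1
                progressed = True
        if not progressed:                      # jump to the next event time
            times = [ft for m, ft in free_at.items() if queue[m] and ft > t]
            if release:
                times.append(release[0][0])
            t = min(times)
    return start, max(free_at.values())
```

With the priority order of Algorithm A3 (first and second operations before any third operation, ties broken by job index), the call takes the form `greedy_schedule(jobs, lambda j, o: (0 if o < 2 else 1, j, o))`.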
Consider the second operation $o_{j2} = o_{ji}$ of a job $J_j$ and suppose it is processed on machine $M_i$ within the time interval $[s_{ji}, c_{ji}) \subseteq [s_{i\nu}, f_{i\nu})$. Let $o_{j_1 i}, o_{j_2 i}, \ldots, o_{j_\mu i} = o_{ji}$ be the maximal sequence of operations executed on machine $M_i$ one after another (without delay) within the interval $[s_{i\nu}, c_{ji})$ and such that $o_{j_1 i} \succ_p o_{j_2 i} \succ_p \cdots \succ_p o_{ji}$. Then the operation $o_{j_1 i}$ is either the very first operation executed on $M_i$ in this interval, i.e., $s_{j_1 i} = s_{i\nu}$, or it is preceded by an operation of lower priority. In both cases, the gap between the arrival time of job $J_{j_1}$ at machine $M_i$ and the starting time $s_{j_1 i}$ of operation $o_{j_1 i}$ is less than $p_{max} = 1$. It follows from the definition of the priority order $\succ_p$ that $o_{j_1 i} \in \{o_{j_1 1}, o_{j_1 2}\}$ and

$$j_1 < j_2 < \cdots < j_\mu = j. \qquad (60)$$

Suppose that $o_{j_1 i} = o_{j_1 2}$, i.e., $o_{j_1 i}$ is the second operation of job $J_{j_1}$, whereas its first operation is processed on machine $M_{i_1}$, $i_1 \ne i$. Applying (59), we have

$$s_{j_1 i} < c_{j_1 i_1} + 1 \le P_{j_1}(i_1) + 1 = P_{j_1 - 1}(i) + 1 + (P_{j_1}(i_1) - P_{j_1 - 1}(i)).$$

Hence, due to (60), we obtain an upper bound on the completion time of operation $o_{ji}$:

$$c_{ji} = s_{j_1 i} + \sum_{k=1}^{\mu} p_{j_k i} < P_j(i) + 1 + \max_k\,(P_k(i_1) - P_{k-1}(i)). \qquad (61)$$
In this bound we have "forgotten" that it originates from job $J_{j_1}$: all we retain about machine $M_{i_1}$ is that $i_1 \ne i$. Alternatively, suppose that $o_{j_1 i} = o_{j_1 1}$. In this case we easily obtain the bound $c_{ji} \le P_j(i)$. Thus, bound (61) on the completion time of the second operation of any job covers both cases. This enables us to derive an upper bound on the time $\hat t_{j3}$ at which the third operation $o_{j3}$ of a job $J_j$ becomes available for processing. Assuming that the second operation of the job is processed on machine $M_{i_2}$ and the third on machine $M_{i_3}$, we obtain:

$$\hat t_{j3} = c_{ji_2} < P_j(i_2) + 1 + \max_k\,(P_k(i_1) - P_{k-1}(i_2))$$
$$= P_{j-1}(i_3) + 1 + (P_j(i_2) - P_{j-1}(i_3)) + \max_k\,(P_k(i_1) - P_{k-1}(i_2))$$

(using representation (48))

$$= P_{j-1}(i_3) + 3 + \max_k \min\{(e_{i_1} - e_{i_2}, \bar d_{k-1}),\, (e_{i_1} - e_{i_2}, \bar d_k)\} + \min\{(e_{i_2} - e_{i_3}, \bar d_{j-1}),\, (e_{i_2} - e_{i_3}, \bar d_j)\}$$
$$\le P_{j-1}(i_3) + 3 + \Delta,$$

where

$$\Delta := \max_{\substack{i_1, i_2, i_3 \in \{1,2,3\} \\ i_1 \ne i_2,\; i_2 \ne i_3}} \Big( \max_k \min\{(e_{i_1} - e_{i_2}, \bar d_{k-1}),\, (e_{i_1} - e_{i_2}, \bar d_k)\} + \max_k \min\{(e_{i_2} - e_{i_3}, \bar d_{k-1}),\, (e_{i_2} - e_{i_3}, \bar d_k)\} \Big).$$

It is clear that the time $\hat t_{j2}$ at which a second operation $o_{j2}$ becomes available for processing can be bounded by the same amount, and all the more so for any first operation $o_{j1}$ (since $\hat t_{j1} = 0$). Thus, any operation $o_{ji}$ becomes available for processing by time

$$\hat t_{ji} < P_{j-1}(i) + 3 + \Delta. \qquad (62)$$

Due to the definition of the vectors $\{a_i\}$ in the NVS3 problem, for any two indices $i_1, i_2 \in \{1, 2, 3\}$ we have $e_{i_1} - e_{i_2} = a_i$, where the correspondence between the indices $i_1, i_2$ and $i$ is given by Table 1.
 i  | i1 | i2
----+----+----
 1  | 1  | 3
 2  | 2  | 3
 3  | 2  | 1
 4  | 3  | 1
 5  | 3  | 2
 6  | 1  | 2

Table 1: the correspondence $a_i = e_{i_1} - e_{i_2}$. (We assume here $e_3 = 0$.)

If we consider any three indices $i_1, i_2, i_3 \in \{1, 2, 3\}$ such that $i_1 \ne i_2$, $i_2 \ne i_3$, and suppose that $e_{i_1} - e_{i_2} = a_{i'}$ and $e_{i_2} - e_{i_3} = a_{i''}$, then it follows from Table 1 that $i' \ne i''$, $i' \ne s(i'')$, $i'' \ne s(i')$ (with respect to the cyclic order defined on the set $N_6$ in Section 4). Thus, we obtain the bound
$$\Delta \le \max_{(i', i'') \in I} \Big( \max_k \min\{(a_{i'}, \bar d_{k-1}),\, (a_{i'}, \bar d_k)\} + \max_k \min\{(a_{i''}, \bar d_{k-1}),\, (a_{i''}, \bar d_k)\} \Big)$$

(since $\pi = (1, 2, \ldots, n)$ provides a nonstrict summation of the vectors $D = \{\bar d_1, \ldots, \bar d_n\}$ within the half-spaces of the family $\{P(a_1, \alpha_1), \ldots, P(a_6, \alpha_6)\}$)

$$\le \max_{(i', i'') \in I} (\alpha_{i'} + \alpha_{i''}) = \alpha_3(\bar b_6) \le \bar\alpha_3. \qquad (63)$$
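The correspondence of Table 1 between the normals $a_i$ and the differences $e_{i_1} - e_{i_2}$ can be checked mechanically. The sketch below hardcodes the family from the beginning of the proof (with $e_3 = 0$); the dictionary encoding is our own:

```python
# Unit vectors e1, e2 and e3 = 0 of the plane, as in the text.
e = {1: (1.0, 0.0), 2: (0.0, 1.0), 3: (0.0, 0.0)}

def sub(u, v):
    """Componentwise difference of two plane vectors."""
    return (u[0] - v[0], u[1] - v[1])

# Normals of the family {P(e1,.), P(e2,.), P(e2-e1,.), P(-e1,.), P(-e2,.), P(e1-e2,.)}.
a = {1: e[1], 2: e[2], 3: sub(e[2], e[1]),
     4: sub(e[3], e[1]), 5: sub(e[3], e[2]), 6: sub(e[1], e[2])}

# Table 1: i -> (i1, i2) with a_i = e_{i1} - e_{i2}.
table = {1: (1, 3), 2: (2, 3), 3: (2, 1), 4: (3, 1), 5: (3, 2), 6: (1, 2)}

for i, (i1, i2) in table.items():
    assert a[i] == sub(e[i1], e[i2])
```

Every ordered pair $(i_1, i_2)$ with $i_1 \ne i_2$ occurs exactly once in the table, which is what the argument above relies on.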
Consider an arbitrary machine $M_i$; let us show that the total amount $D_i$ of its idle time is less than $3 + \Delta$. If the starting time $t_0$ of its last work interval is less than $3 + \Delta$, then $D_i \le t_0 < 3 + \Delta$. Suppose now that $t_0 \ge 3 + \Delta$, and let $j^*$ be the maximum $j$ such that

$$P_{j-1}(i) + 3 + \Delta \le t_0. \qquad (64)$$

Then

$$P_{j^*}(i) + 3 + \Delta > t_0. \qquad (65)$$

It follows from (62) and (64) that all operations $\{o_{1i}, \ldots, o_{j^* i}\}$ become available for processing before time $t_0$. Since time $t_0$ is immediately preceded by an idle interval on machine $M_i$, this implies that all operations that become available for processing on this machine before time $t_0$ are also completed before time $t_0$ (due to the first rule of the greedy algorithm). Hence $j^* < n$, and due to (65),

$$D_i \le t_0 - P_{j^*}(i) < 3 + \Delta.$$

Now, using bound (63), we can conclude that each machine $M_i$ terminates its work before time

$$l_i + D_i < l_{max} + 3 + \Delta \le l_{max} + 3 + \bar\alpha_3, \qquad (66)$$

which provides the desired bound on the schedule length. Theorem 8 is proved.

Theorem 8 and Corollary 2 imply
Theorem 9 Given an instance of the Akers-Friedman problem with three machines and n jobs, its schedule can be constructed in $O(n \log n)$ time with bound

$$C_{max} \le l_{max} + 5.5\, p_{max}.$$
5.4 Two-route problems

In this section we consider the problems R213 and R231. Since they are special cases of the Akers-Friedman problem treated in the previous section, we can construct more precise approximation algorithms for them by exploiting the specific structure of their machine routes. For each problem, the set of jobs with route $(1, 2, \ldots, m)$ will be denoted by $J_1$, and the set of remaining jobs (with the other route) by $J_2$.

R213 problem.
Theorem 10 Given an instance of the R213 problem with n jobs, its schedule can be constructed in $O(n \log n)$ time with bound

$$C_{max} \le l_{max} + 4 p_{max}.$$

Proof. The algorithm for constructing the desired schedule is nearly the same as the one we applied to the Akers-Friedman problem. The difference is that at Stage 1 we have to solve the problem NVS6 instead of NVS3. Therefore, we have no problem justifying the running time bound claimed in the theorem: it remains the same, since the algorithms we apply to solve the two NVSi problems have the same running time bound.

Let us prove the desired bound on the schedule length. Due to Corollary 5 we can solve the NVS6 problem, i.e., find a permutation $\pi = (\pi_1, \ldots, \pi_n)$ which provides a nonstrict summation of the vectors $D$ within the half-spaces of the family

$$P_6(2, \bar b_4) = \{P(e_1, \alpha_1), P(e_2, \alpha_2), P(e_2 - e_1, \alpha_3), P(e_1 - e_2, \alpha_4)\}$$

with bound

$$\alpha_6(\bar b_4) = \max\{\alpha_1 + \alpha_3,\, \alpha_2 + \alpha_4\} \le 1$$

guaranteed. Again, we assume for simplicity that $\pi = (1, 2, \ldots, n)$.
Repeating the arguments applied in the course of the proof of Theorem 8 for the Akers-Friedman problem, consider the second operation $o_{j2} = o_{ji}$ of a job $J_j \in J_1$; thus, $i = 2$. Again, define the job $J_{j_1}$ in the same way and consider the two cases $o_{j_1 i} = o_{j_1 1}$ and $o_{j_1 i} = o_{j_1 2}$. In the second case we know for sure that $J_{j_1} \in J_1$ and that the first operation of this job was executed on machine $M_1$. Thus, in bound (61) we know for sure that $i_1 = 1$, $i = 2$, and we have no reason to "forget" this information (unlike in Theorem 8). Now we can say that bound (62) on the time $\hat t_{ji}$ when an operation $o_{ji}$ of a job $J_j \in J_1$ becomes available for processing holds with

$$\max_k \min\{(e_1 - e_2, \bar d_{k-1}),\, (e_1 - e_2, \bar d_k)\} + \max_k \min\{(e_2, \bar d_{k-1}),\, (e_2, \bar d_k)\} \le \alpha_2 + \alpha_4$$

in place of $\Delta$, whereas the same bound for a job $J_j \in J_2$ holds with

$$\max_k \min\{(e_2 - e_1, \bar d_{k-1}),\, (e_2 - e_1, \bar d_k)\} + \max_k \min\{(e_1, \bar d_{k-1}),\, (e_1, \bar d_k)\} \le \alpha_1 + \alpha_3.$$

In both cases we can derive the bound

$$\Delta \le \max\{\alpha_1 + \alpha_3,\, \alpha_2 + \alpha_4\} \le 1.$$

Applying this bound on $\Delta$ instead of (63) in relation (66), we obtain the bound $C_{max} < l_{max} + 4 p_{max}$. Theorem 10 is proved.
R231 problem.

Theorem 11 If we have an algorithm A that for any positive integer n and any $\hat s$-family $X = \{x_1, \ldots, x_n\} \subset \mathbb{R}^2$ solves the NVS4 problem in time $T_A(n)$ with bound $\alpha_4(\bar b_3) \le \bar\alpha_4$ guaranteed, then, given an instance of the R231 problem with n jobs, its schedule can be constructed in $O(T_A(n) + n \log n)$ time with bound

$$C_{max} \le l_{max} + (3 + \bar\alpha_4) p_{max}.$$
Proof. It basically repeats the proof of Theorem 10. The bound on $\Delta$ remains the same for jobs $J_j \in J_1$:

$$\max_k \min\{(e_1 - e_2, \bar d_{k-1}),\, (e_1 - e_2, \bar d_k)\} + \max_k \min\{(e_2, \bar d_{k-1}),\, (e_2, \bar d_k)\},$$

whereas it is modified for jobs $J_j \in J_2$:

$$\max_k \min\{(e_2, \bar d_{k-1}),\, (e_2, \bar d_k)\} + \max_k \min\{(-e_1, \bar d_{k-1}),\, (-e_1, \bar d_k)\}.$$

Applying the algorithm A to the NVS4 problem, we find a nonstrict summation of the vectors $D$ within the family of half-spaces

$$P_4(2, \bar b_3) = \{P(e_1 - e_2, \alpha_1), P(e_2, \alpha_2), P(-e_1, \alpha_3)\}$$

with bound

$$\alpha_4(\bar b_3) = \max\{\alpha_1 + \alpha_2,\, \alpha_2 + \alpha_3\} \le \bar\alpha_4.$$

This gives the upper bound $\Delta \le \bar\alpha_4$ which, being applied in (66), provides the desired bound on $C_{max}$. Theorem 11 is proved.

Theorem 11 and Corollary 3 imply
Theorem 12 Given an instance of the R231 problem with n jobs, its schedule can be constructed in $O(n \log n)$ time with bound

$$C_{max} \le l_{max} + 5 p_{max}.$$
5.5 Flow shop problem

Theorem 13 If we have an algorithm A that for some integer $m \ge 2$ and any $\hat s$-family of vectors $X = \{x_1, \ldots, x_n\} \subset \mathbb{R}^{m-1}$ solves the NVS2(m-1) problem in time $T_A(m-1, n)$ with bound $\alpha_2(\bar b_{m-1}) \le \bar\alpha_2(m-1)$, then, given an instance of the flow shop problem with m machines and n jobs, its permutation schedule can be constructed in $O(T_A(m-1, n) + mn)$ time with bound

$$C_{max} \le l_{max} + (m - 1 + \bar\alpha_2(m-1)) p_{max}.$$

Proof. The algorithm for constructing the desired schedule consists of two stages. At the first stage we compute a permutation $\pi = (\pi_1, \ldots, \pi_n)$ which gives a solution to the NVS2(m-1) problem with the vectors $D$, i.e., provides their nonstrict summation within the family of half-spaces

$$\{P(e_1 - e_2, \alpha_1), \ldots, P(e_{m-1} - e_m, \alpha_{m-1})\}$$

in $\mathbb{R}^{m-1}$ such that

$$\sum_{i=1}^{m-1} \alpha_i \le \bar\alpha_2(m-1)\, p_{max}. \qquad (67)$$

(We assume here $e_m = 0$.) At the second stage we compute the permutation schedule $S_\pi$. Let us prove that the schedule $S_\pi$ meets the desired bound. Using the well-known formula
$$C_{max}(S_\pi) = \max_{1 \le k_1 \le \cdots \le k_{m-1} \le n} \Big( \sum_{j=1}^{k_1} p_{j1} + \sum_{j=k_1}^{k_2} p_{j2} + \cdots + \sum_{j=k_{m-1}}^{n} p_{jm} \Big),$$

we transform it into an upper bound on $C_{max}(S_\pi)$:

$$C_{max}(S_\pi) = \max_{1 \le k_1 \le \cdots \le k_{m-1} \le n} \big( (P_{k_1}(1) - P_{k_1 - 1}(2)) + (P_{k_2}(2) - P_{k_2 - 1}(3)) + \cdots + (P_{k_{m-1}}(m-1) - P_{k_{m-1} - 1}(m)) \big) + l_m$$
$$\le \sum_{i=1}^{m-1} \max_k\,(P_k(i) - P_{k-1}(i+1)) + l_{max}$$

(using (48), Remark 6, and (67))

$$\le l_{max} + (m-1) p_{max} + \sum_{i=1}^{m-1} \max_k \min\{(e_i - e_{i+1}, \bar d_{k-1}),\, (e_i - e_{i+1}, \bar d_k)\}$$
$$\le l_{max} + (m-1) p_{max} + \sum_{i=1}^{m-1} \alpha_i \le l_{max} + (m - 1 + \bar\alpha_2(m-1)) p_{max}.$$

Theorem 13 is proved.
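The well-known formula invoked at the start of the proof is the critical-path representation of the makespan of a permutation schedule. As a sanity check, it can be compared with the standard corner recursion; the sketch below (our own function names, 0-based indices) does this for a small instance:

```python
from itertools import combinations_with_replacement

def cmax_dp(p):
    """Makespan of the permutation schedule processing jobs 0..n-1 in the
    given order: the corner recursion C[j][i] = p[j][i] + max(C[j-1][i], C[j][i-1])."""
    n, m = len(p), len(p[0])
    C = [[0.0] * m for _ in range(n)]
    for j in range(n):
        for i in range(m):
            above = C[j - 1][i] if j else 0.0   # same machine, previous job
            left = C[j][i - 1] if i else 0.0    # same job, previous machine
            C[j][i] = max(above, left) + p[j][i]
    return C[-1][-1]

def cmax_critical_path(p):
    """The same makespan via the critical-path formula: a maximum over all
    nondecreasing breakpoints k_1 <= ... <= k_{m-1} of the staircase sums."""
    n, m = len(p), len(p[0])
    best = 0.0
    for ks in combinations_with_replacement(range(n), m - 1):
        path = (0,) + ks + (n - 1,)   # machine i covers jobs path[i]..path[i+1]
        total = sum(p[j][i] for i in range(m)
                    for j in range(path[i], path[i + 1] + 1))
        best = max(best, total)
    return best
```

Each nondecreasing tuple of breakpoints corresponds to a monotone staircase path through the $n \times m$ grid of operations, and the recursion computes exactly the longest such path, so the two functions agree.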
Theorem 13 and Lemma 9 imply

Theorem 14 If we have an algorithm A that for some integer $m \ge 3$ and any $\hat s$-family of vectors $X = \{x_1, \ldots, x_n\} \subset \mathbb{R}^{m-2}$ solves the NVS1(m-2) problem in time $T_A(m-2, n)$ with bound $\alpha_1(\bar b_{m-1}) \le \bar\alpha_1(m-2)$, then, given an instance of the flow shop problem with m machines and n jobs, its permutation schedule can be constructed in $O(T_A(m-2, n) + mn)$ time with bound

$$C_{max} \le l_{max} + (m - 1 + \bar\alpha_1(m-2)) p_{max}.$$
Using this result and Corollary 1, we obtain
Theorem 15 Given an instance of the flow shop problem with four machines and n jobs, its permutation schedule can be constructed in $O(n \log n)$ time with bound

$$C_{max} \le l_{max} + 6 p_{max}.$$

In the case of three machines, as follows from Theorem 14, the flow shop problem reduces to the NVS1 problem in 1-dimensional space, i.e., to the problem of finding a nonstrict summation of an $\hat s$-family of numbers (with absolute values at most 1 and with sum zero) within the family of half-spaces $\{P(-e_1, \alpha_2), P(e_1, \alpha_1)\}$ which minimizes $\alpha_1 + \alpha_2$. (In other words, we wish to sum the numbers nonstrictly within an interval of minimum length.) It is easily seen that we can sum such families of numbers within any interval of length 1 containing the origin, and this can be done in $O(n)$ time. Thus, we have a solution to the NVS1(1) problem with bound $\alpha_1 + \alpha_2 \le 1$, which implies the following result.
Theorem 16 Given an instance of the flow shop problem with three machines and n jobs, its permutation schedule can be constructed in $O(n)$ time with the best possible bound

$$C_{max} \le l_{max} + 3 p_{max}.$$
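The $O(n)$ claim can be illustrated by an elementary sign-balancing rule. The sketch below is our own illustration, not the paper's procedure: it proves only the weaker (but easily verified) guarantee that every partial sum stays within distance 1 of a target interval $[\beta - 1, \beta]$ containing the origin; the sharper nonstrict bound $\alpha_1 + \alpha_2 \le 1$ of the text requires the min-of-two-consecutive-sums accounting:

```python
def nonstrict_order(xs, beta=0.5):
    """Order numbers xs (each |x| <= 1, total sum 0) so that every partial
    sum lies in [beta - 1, beta + 1], for any 0 <= beta <= 1.
    Rule: while the running sum is below beta, emit a positive number if one
    remains; otherwise emit a nonpositive one (and vice versa)."""
    pos = sorted(x for x in xs if x > 0)
    neg = sorted((x for x in xs if x <= 0), reverse=True)
    order, s = [], 0.0
    while pos or neg:
        if (s < beta and pos) or not neg:
            x = pos.pop()
        else:
            x = neg.pop()
        order.append(x)
        s += x
    return order
```

A one-line induction shows the invariant: from a sum below $\beta$ we add at most $+1$, from a sum at least $\beta$ we subtract at most $1$, and a run of one sign can only drift monotonically toward the final sum $0$, which lies inside the interval.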
Acknowledgement The work was partly done while the author was visiting Eindhoven University of Technology. The author is grateful to Jan Karel Lenstra, Cor Hurkens, and Han Hoogeveen for their kind help in arranging the visit.
References

[1] S.V. Sevast'janov, On some geometric methods in scheduling theory: a survey, Discr. Appl. Math. 55 (1994) 59-82.
[2] S.V. Sevast'janov, Vector summation in Banach space and polynomial algorithms for flow shops and open shops, Math. Oper. Res. 20 (1995) 90-103.
[3] S.V. Sevastianov, Approximation algorithms with performance guarantees for the Johnson and Akers-Friedman problems in the case of three machines (in Russian), Upravlyaemye Sistemy 22 (1982) 51-57.
[4] C.N. Potts, S.V. Sevast'janov, V.A. Strusevich, L.N. Van Wassenhove, and C.M. Zwaneveld, The two-stage assembly scheduling problem: complexity and approximation, Oper. Res. 43 (1995) 346-355.
[5] R.T. Rockafellar, Convex Analysis (Princeton University Press, Princeton, New Jersey, 1970).
[6] S.V. Sevastianov, Nonstrict vector summation in scheduling problems, Siberian Journal of Oper. Res. 1 (1994), No. 1, 67-99.
[7] R.E. Tarjan, Data Structures and Network Algorithms (SIAM, Philadelphia, PA, 1983).