HOW TO MAKE BRUCKER’S ALGORITHM POLYNOMIAL: SCHEDULING BY PERIODIZATION
Vadim G. Timkovsky
Star Data Systems Inc., Toronto, Ontario, Canada; and Department of Computing and Software, McMaster University, Hamilton, Ontario, Canada

Abstract. This paper introduces scheduling by periodization as a new approach to the design and analysis of efficient algorithms that find compactly specified schedules. The approach is demonstrated in application to the two-machine maximum-lateness unit-time job-shop scheduling problem. Earlier, Brucker proposed the largest-lag-first algorithm, which runs in pseudopolynomial time proportional to the total number of operations in the jobs. We show that, by using periodization, Brucker's algorithm can be elaborated to solve the problem in polynomial time. The result is a rare example of a pseudopolynomial algorithm that can be refined into a polynomial algorithm which, in addition, proves to have the best presently known time and space requirements. In comparison with an earlier algorithm of Timkovsky with time and space requirements both O(n²), where n is the number of jobs, the algorithm in this paper has time and space requirements O(n²) and O(n), respectively.
1. Introduction

1.1. Condensation and periodization. Number problems on scheduling interruptions form a wide class of scheduling problems which, on the one hand, are number problems [GJ79], where job processing times are not bounded by a polynomial in the problem size, and, on the other hand, allow job processing to be interrupted at any (or at least any integer) time and resumed later, possibly after a delay, on the same or another machine. Here we mean only interruptions such that the time of the resumption, or the machine resuming the job, is not known or cannot be derived from the problem input at the time of interruption. Thus, preemptive scheduling problems, and nonpreemptive scheduling problems where jobs are chains of unit-time operations without the no-wait constraint, belong to this class if their size captures job processing times only under logarithms. Hence, any polynomial-time algorithm solving a number problem on scheduling interruptions must produce a schedule that can be compactly specified, i.e., in space bounded by a polynomial in the logarithms of the job processing times. For example, condensed schedules, where the number of interruptions

1991 Mathematics Subject Classification. 68Q25, 90B35.
Key words and phrases. Polynomial, pseudopolynomial, period, algorithm, scheduling, job shop.
of each job is bounded by a polynomial in the problem size, can be compactly specified by indicating the start times and durations of the uninterrupted parts of the jobs. Polynomial-time algorithms producing condensed schedules are called condensers. Though condensation was specifically discussed for the design of polynomial-time algorithms in job-shop scheduling [T97], it is obviously appropriate for other machine environments as well, and it actually represents an approach that was tacitly implied and commonly used in all earlier work on scheduling interruptions in polynomial time. Thus, looking through publications devoted to polynomial-time algorithms in preemptive scheduling or in scheduling chains of unit-time operations (cf. [C76, LLRKS93, TSS94, B95]), one can observe that they are all condensers.

In this paper we introduce periodization as a more general and more flexible approach than condensation. It is demonstrated here in application to unit-time job-shop scheduling; however, as one can see, the approach has a much wider area of application. The main underlying idea of periodization is to view schedules as words specified by generating rules. Let S be a schedule for a job set J, let g be a fragment of S involving a set P of parts of jobs from J, and let r be a unary generating rule representing a periodic way of replicating g. Let f = r(g) be a form denoting the application of r to g. If f specifies S, we call S a periodic schedule with the generator g and the compact specification f. For example, if J consists of a single job with integer processing time p, T is a trivial schedule of the job without interruptions, P consists of a single job unit 1, and raising to a power means concatenation of job units, then T is a periodic schedule with the generator 1, since the exponential form 1^p specifies T. Further, we call 1^p a trivial periodic component. Obviously, it can specify a noninterrupted part of a job of length p as well.

Let G = {g_1, g_2, . . . , g_k} be a set of fragments of S involving sets P_1, P_2, . . . , P_k of parts of jobs from J, respectively, without common parts, and let R = {r_0, r_1, r_2, . . . , r_l} be a set of generating rules. We assume that r_1, r_2, . . . , r_l are unary generating rules, and r_0 is an l-ary generating rule representing a way of consolidating schedules. Let f = r_0(r_1(g_1), r_2(g_2), . . . , r_h(g_h)), where r_i ∈ R and g_i ∈ G for every i = 1, 2, . . . , h. If f specifies S, we call S a periodized schedule with the generating set G, periodic components r_1(g_1), r_2(g_2), . . . , r_h(g_h) and the compact specification f.

Example 1.1. A hypothetical periodized schedule with six periodic components on machines M1–M4, where the units of five jobs with processing times 6, 6, 12, 26 and 16, respectively, are shown by distinct squares. [Figure not reproduced.] The components have generators which are the leftmost entries of the respective components on M2, M3, M4, M1, M3, and M1. The generating rule of each component can be defined by a periodic shift of the generator in time with a fixed period acting together with a cyclic permutation of machines. Note that one of the periodic components has no interruptions by definition if it is known from the problem input that its eight units have to be processed alternately on M3 and M4, starting on M3, with period 2.
We call a periodized schedule polynomially periodized if its compact specification requires space polynomial in the problem size. Since we consider only polynomially periodized schedules in what follows, the term "polynomially" will be omitted. An algorithm producing a periodized schedule we call a periodizer. Note that a condensed schedule is a periodized schedule with trivial periodic components. The main point is that, even if a periodized schedule contains an exponential number of interruptions of jobs, they can obviously be specified in polynomial space by definition. And, if the periodizer is not of polynomial time, we can look for a polynomial-time imitator of the periodizer that directly produces the compact specification of the periodized schedule. Thus, the imitator can be considered as just a polynomial version of the periodizer. In this paper, using periodization, we show that a pseudopolynomial algorithm proposed earlier for a number job-shop problem on scheduling interruptions is a periodizer, and we design its polynomial imitator. In conclusion, we also show how the periodized schedule can eventually be converted into a condensed schedule at an additional space cost.

1.2. Problem formulation. The two-machine maximum-lateness unit-time job-shop scheduling problem, J2|pij = 1|Lmax, is the following: given two machines M1 and M2, where each machine can process at most one job at a time, and n jobs J1, . . . , Jn, where Jj, j = 1, . . . , n, has due date dj and consists of a chain O1j O2j . . . Omj,j of unit-time operations, where Oij, i = 1, . . . , mj, has to be processed on Mµij, µij ∈ {1, 2}, with µij ≠ µi−1,j, and started at or after the completion of Oi−1,j for i = 2, . . . , mj, find a schedule of minimum maximum lateness, i.e., minimum Lmax = max1≤j≤n Lj, where Lj = Cj − dj is the lateness and Cj is the completion time of Jj. Since the number of machines is two, the requirement µij ≠ µi−1,j means that µij = µ1j for odd i, and µij = µ2j for even i. Without loss of generality, we assume that mj > 0 and dj ≥ 0 for all j.
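To make the formulation concrete, the instance data and the objective can be sketched as follows (a minimal illustration with names of our own choosing, not part of the original formulation):

```python
# Sketch: a J2|pij=1|Lmax instance and the evaluation of a schedule.
# A job is a triple (mu1, m, d): machine of its first operation,
# number of unit-time operations, and due date; operations alternate
# between the two machines. A schedule maps (job, op_index) -> start time.

def machine(job, i):
    """Machine (1 or 2) of the i-th operation (1-based) of a job."""
    mu1, m, d = job
    return mu1 if i % 2 == 1 else 3 - mu1

def lmax(jobs, start):
    """Maximum lateness of a schedule `start`; asserts feasibility."""
    busy = set()
    for j, (mu1, m, d) in enumerate(jobs):
        for i in range(1, m + 1):
            t = start[j, i]
            assert i == 1 or t >= start[j, i - 1] + 1   # chain precedence
            key = (machine(jobs[j], i), t)
            assert key not in busy                       # machine capacity
            busy.add(key)
    return max(start[j, m] + 1 - d for j, (mu1, m, d) in enumerate(jobs))
```

For instance, two 2-operation jobs starting on different machines fit into two time units, so neither is late when both due dates are at least 2.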
The jobs Jj can be thought of as "checkered" chains of length mj, which are alternations of upper and lower identical squares (see the figure in Example 1.2). The chains can be torn to pieces, dragged apart, and included in each other without superimposing squares and without violating the order of squares in each chain. The problem is to find a proper common inclusion of the given chains. The two-machine maximum-lateness unit-time job-shop problem finds application in pipeline computing of sets of polynomials by Horner's method, continued fractions, or any other expressions that represent alternating chains of two binary operations assumed to have the same duration [T97].

Example 1.2. An instance with n = 5, m1 = 5, m2 = 4, m3 = 2, m4 = 3, m5 = 1, µ11 = µ13 = µ15 = 1, µ12 = µ14 = 2, d1 = d2 = d3 = 6, d4 = d5 = 7 has the following two schedules with Lmax = 2 and Lmax = 1, respectively. [Figure: the chains J1–J5 and the two schedules on M1 and M2; not reproduced.]
The size of J2|pij = 1|Lmax is $n + \sum_{j=1}^{n} \log m_j + \sum_{d_j \neq 0} \log d_j$. It is clear that the problem is a number problem, because the job lengths m1, m2, . . . , mn appear only under logarithms, and any unit-by-unit algorithm, i.e., one scheduling each operation of each job separately, requires time and space at least $O(\sum_{j=1}^{n} m_j)$, which is exponential in the problem size.
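The gap between the problem size and the work of a unit-by-unit algorithm can be illustrated numerically (a sketch; `problem_size` is our helper, rounding each logarithm up to whole bits):

```python
import math

def problem_size(ms, ds):
    """Bit size of an instance: n + sum_j log m_j + sum_{d_j != 0} log d_j,
    with logarithms base 2 rounded up to whole bits."""
    n = len(ms)
    return (n + sum(math.ceil(math.log2(m)) for m in ms if m > 1)
              + sum(math.ceil(math.log2(d)) for d in ds if d > 1))

# Four jobs of length 2**40 occupy a few hundred bits of input, while a
# unit-by-unit algorithm would touch sum(ms), about 4.4e12, operations.
ms, ds = [2 ** 40] * 4, [2 ** 40] * 4
```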
The rest of this section surveys previous results. Section 2 gives a simple example to explain the key points of periodization in detail and introduces a compact specification for two-machine unit-time job-shop schedules. Section 3 introduces a priority table for J2|pij = 1|Lmax and its blocks and describes Brucker's pseudopolynomial algorithm complemented by an additional preference rule. Sections 4 and 5 show that periodized schedules of the blocks consist of at most five periodic components and describe their compact specifications. In Section 6, we show that Brucker's algorithm is a periodizer and design its polynomial-time version. Section 7 is the conclusion.

1.3. Toward the chocolate windmill, around and beyond. To look through the ideas preceding the result of this paper, we will also touch upon problems close to J2|pij = 1|Lmax, using the denotations of the α|β|γ classification [LLRK93, B95]: J3, for the three-machine job-shop machine environment; pij ∈ {1, 2}, for processing times equal to one or two only; pmtn, for preemptive operations in jobs; no wait, for the no-wait constraint for jobs; P2, for two identical parallel machines; Cmax, for the maximum completion time (schedule length); ΣCj, for the total completion time; ΣUj, for the number of late jobs. The study of the complexity of unit-time job-shop scheduling originates from the paper of Lenstra and Rinnooy Kan [LRK79] demonstrating typical NP-completeness proofs. A new result in it was the proof that the yes-no version of J3|pij = 1|Cmax is NP-complete. It was announced that there is a similar proof for J2|pij ∈ {1, 2}|Cmax, and the question about the complexity status of the yes-no version of J2|pij = 1|Cmax was raised: "to introduce a competitive element we shall be happy to award a chocolate windmill to the first person establishing membership of P or NP-completeness for this problem" (p. 135). Pseudopolynomial-time unit-by-unit algorithms shed the first light on the complexity of J2|pij = 1|Cmax.
Hefetz and Adiri [HA80] proposed the Longest-Remainder-First (LRF) algorithm with time and space requirements $O(\sum_{j=1}^{n} m_j)$ that schedules operations of jobs with the longest remaining part first, i.e., by the LRF rule. In [HA82] they extended it to the Ready-Longest-Remainder-First (RLRF) algorithm solving J2|rj, pij = 1|Cmax by the LRF rule applied to ready jobs. Brucker [B81, B82] proposed the Largest-Lag-First (LLF) algorithm, solving J2|pij = 1|Lmax with the same time and space requirements by scheduling operations of the most lagging jobs first. All three are greedy algorithms following the general list scheduling strategy [C76]. It has to be noticed that J2|rj, pij = 1|Cmax and J2|pij = 1|Lmax are symmetric, i.e., their instances are in a one-to-one correspondence such that optimal schedules of the corresponding instances can be converted into one another by time reversal (cf. [TSS94, B95]). So, the problems can be thought of as two forms of the same problem. Note, however, that the reverse of an RLRF schedule is not an LLF schedule in general, and vice versa. Only in relation to J2|pij = 1|Cmax are they both LRF schedules. The question of Lenstra and Rinnooy Kan was answered by Timkovsky [T85] (see also [GJ82, p. 311], [T93, T97]), who proposed the Thirteen-Cuts (13C) algorithm solving J2|pij = 1|Cmax with time and space requirements both O(n) by decomposing the problem into 13 special cases having trivial condensed optimal schedules with at most four interruptions in total. Later on, Kubiak et al. [KSS95] proposed an O(n log n) algorithm based on the reduction to P2|pmtn|Cmax and the application of McNaughton's algorithm [M59] to
nine special cases of the problem. Their approach allows a condensed schedule with the same picture of interruptions to be found. Since the job-shopping theorem establishes a polynomial-time reduction of preemptive m-identical-parallel-machine problems to m-machine unit-time job-shop problems [T98], their result proves that P2|pmtn|Cmax and J2|pij = 1|Cmax are polynomial-time equivalent. The extensions J2|rj, pij = 1|Cmax, J2|pij = 1|Lmax and J2|pij = 1|ΣUj can still be solved in polynomial time by condensers. Introducing compressed schedules, which have a maximum number of operations in the period [0, T] for any time T ≥ 0, Timkovsky [T93, T97] described the Head-Train Thirteen-Cuts (HT13C) algorithm that finds in time O(n) a compressed condensed optimal schedule for J2|pij = 1|Cmax with at most n + 4 interruptions in total. The space requirement of the HT13C algorithm is also O(n). The compression proves to be enough to extend the HT13C algorithm to the case with different release dates by the release-date-cut technique: the multiple HT13C (MHT13C) algorithm finds a compressed condensed schedule for the earliest-release-date jobs, cuts their remainders away at the next release date, finds a compressed schedule for the next-release-date jobs together with the remainders, and so on. Thus, a condensed optimal schedule for J2|rj, pij = 1|Cmax or, by symmetry, J2|pij = 1|Lmax with at most n² + 4n interruptions in total can be found with time and space requirements both O(n²) [T93, T97]. Kravchenko [K99A] proposed an O(n⁷) algorithm with space requirement O(n⁴) to find a set with the maximum number of in-time jobs in a two-machine unit-time due-date job shop. She also observed that applying a polynomial-time algorithm for J2|pij = 1|Lmax to this set solves J2|pij = 1|ΣUj in polynomial time, because all the out-of-time jobs can be scheduled arbitrarily after the in-time jobs.
Thus, Kravchenko's algorithm complemented by the MHT13C algorithm solves the latter problem in polynomial time. In contrast to the LRF algorithm, a pseudopolynomial-time algorithm for J2|pij = 1|ΣCj produces condensed schedules and can be brought into a condenser. Timkovsky [T93] and, independently, Kravchenko [K94] observed that the Shortest-Remainder-First (SRF) algorithm, i.e., the antipode of the LRF algorithm, solves J2|pij = 1|ΣCj in time $O(\sum_{j=1}^{n} m_j)$. Later on, Kravchenko [K95] and, independently, Kubiak and Timkovsky [KT96] showed that the SRF algorithm interrupts each job at most once and that, with this property, it turns into the Earliest-Completion-First (ECF) algorithm with time requirement O(n log n) and space requirement O(n). Note that J2|no wait, pij = 1|ΣCj requires a completely different approach and can be solved in polynomial time by an algorithm of Kravchenko [K98] with time and space requirements O(n⁶) and O(n⁵), respectively. Any deviation from the above problems in the frame of the α|β|γ classification leads to NP-hardness except for only one problem, J2|rj, pij = 1|Lmax, whose complexity status remains unknown. A comprehensive picture of complexity results in unit-time job-shop scheduling is given in [T98]. The rest of the paper is devoted to designing a polynomial version of the LLF algorithm by periodization to solve J2|pij = 1|Lmax in time O(n²) with space requirement O(n). The Ready-Largest-Lag-First (RLLF) algorithm, based on application of the LLF rule to ready jobs, is also a periodizer, as is the LLF algorithm. However, in the conclusion we provide a counterexample showing that the RLLF algorithm is not optimal for J2|rj, pij = 1|Lmax.
2. Preliminaries

2.1. A simple example where condensation does not work but periodization does. One could try to design a condenser imitating LLF schedules for J2|pij = 1|Lmax. However, the following example shows that this is impossible, because there exists a special case even of J2|pij = 1|Cmax for which any LRF schedule has an exponential number of interruptions. The same example shows that a periodized LRF schedule can easily be found.

Example 2.1. [T97] Let ζk be the restricted version of J2|pij = 1|Cmax with four jobs: n = 4, m1 = m2 = m3 = m4 = 2k, µ11 = µ13 = 1, µ12 = µ14 = 2, where k > 1. Obviously, the size of ζk is log k. The LRF algorithm applied to ζk will interrupt each job at least k − 1 times. For ζ3 we have the following jobs and the following LRF schedule. [Figure: the four jobs and an LRF schedule of length 12; not reproduced.]
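The behaviour of the LRF algorithm on ζk can be checked with a short unit-by-unit simulation (our sketch; the tie-breaking priority J1 > J2 > J3 > J4 anticipates the secondary preference rule discussed next):

```python
def lrf_zeta(k):
    """Unit-by-unit LRF on ζk, ties broken by job order J1..J4.
    Four jobs of length 2k; J1, J3 start on M1, J2, J4 on M2.
    Returns, per job, the list of time units in which it runs."""
    jobs = [(1, 2 * k), (2, 2 * k), (1, 2 * k), (2, 2 * k)]
    rem = [m for _, m in jobs]          # remaining operations
    nxt = [mu for mu, _ in jobs]        # machine of the next operation
    times = {j: [] for j in range(4)}
    t = 0
    while any(rem):
        used = set()                    # one operation per job per unit
        for mach in (1, 2):
            cand = [j for j in range(4)
                    if rem[j] and nxt[j] == mach and j not in used]
            if cand:
                # longest remainder first, then highest priority (lowest index)
                j = min(cand, key=lambda i: (-rem[i], i))
                times[j].append(t)
                rem[j] -= 1
                nxt[j] = 3 - mach
                used.add(j)
        t += 1
    return times
```

For k = 3 the simulated schedule has length 4k = 12, and every job is idle between consecutive operations well over k − 1 = 2 times, matching the counts in the example.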
There exist exactly 4^{2k} different LRF schedules for ζk. All of them are of length 4k and differ from each other by transpositions of adjacent upper or lower squares within 2k consecutive quadruples (in the above LRF schedule they are under hats). The total number of interruptions ranges from 4k − 2 to 8k − 4. However, if we give priority to the jobs in the order they are listed above, then we can make sure that the LRF algorithm complemented by the secondary preference rule "take the operation in the job with the highest priority" turns into a periodizer for ζk. The above schedule then will be an example of the related periodized LRF schedule for ζ3. For arbitrary k it can be specified by a^k, where a is the four-unit fragment shown in the figure. Thus, we even have a periodic LRF schedule. Introducing smaller, single-unit generators a, b, c, d [shown in the figure], we can specify the periodic schedule by (abcd)^k as well. In this interpretation, we have a periodized schedule with four periodic components. However, in both cases a polynomial imitator of the LRF algorithm for ζk with constant time and space complexity is evident. It just lists a constant number of operations in a or in a, b, c, d, writes the word a or abcd, and points out the number k.

2.2. Compact specifications of active two-machine unit-time schedules. The main objects in further considerations will be active two-machine unit-time schedules, i.e., those without a time unit that is idle on both machines. We will view them as words over an alphabet G, the letters of which will denote generators. Each generator will represent a pair of operations on the two machines that have to be processed in a single time unit. One operation in the pair can be empty, i.e., the related machine is idle in the unit. Thus, the generators are of two types: single-operation, like a or b, and two-operation, like c or d [generator pictures not reproduced]. As schedules are represented by words, we will apply all terms related to words to schedules as well.
For example, a prefix of a schedule represented by the word w means the left part of the schedule represented by a prefix of w. A maximal periodic subword generated by one letter, like aaaaa, in a word will represent a periodic component. Define a compact specification of an active two-machine unit-time schedule to be the exponential form

$$e = \prod_{i=1}^{m} \Bigl( \prod_{j=1}^{n_i} e_{ij}^{x_{ij}} \Bigr)^{y_i}$$

with periodic components $e_{ij}^{x_{ij}}$, where $e_{ij} \in G$, $x_{ij}$ and $y_i$ are internal and external exponents, respectively, and $\prod$ denotes concatenation of words. We distinguish internal and external exponents because we assume that $(e^x)^y \neq e^{xy}$, where x and y will have the meanings of a job counter and an operation counter, respectively. The characteristic polynomial and the least characteristic value,

$$\log e = \sum_{i=1}^{m} y_i \sum_{j=1}^{n_i} x_{ij} \quad\text{and}\quad |e| = \sum_{i=1}^{m} n_i,$$
represent the length of the schedule and the length of e, respectively. Note that |e| is the number of periodic components in e. For example, $\log\,(a^3b^5c)^2(ab^2)^3 = 2(3 + 5 + 1) + 3(1 + 2) = 27$ and $|(a^3b^5c)^2(ab^2)^3| = 3 + 2 = 5$. The following lemma describes the main property of the compact specifications we use in further considerations.

Lemma 2.1. Let e be a compact specification of a schedule S. Then a compact specification of a proper prefix of S is not longer than |e| and can be found in time O(|e|).

Proof. Let ℓ be the length of a proper prefix of S; thus ℓ < log e. It is not hard to verify that the desired compact specification, its characteristic polynomial and its length are

$$e' = \prod_{i=1}^{\imath-1} \Bigl( \prod_{j=1}^{n_i} e_{ij}^{x_{ij}} \Bigr)^{y_i} \Bigl( \prod_{j=1}^{n_\imath} e_{\imath j}^{x_{\imath j}} \Bigr)^{y} \Bigl( \prod_{j=1}^{\jmath-1} e_{\imath j}^{x_{\imath j}} \Bigr) e_{\imath\jmath}^{x},$$

$$\log e' = \sum_{i=1}^{\imath-1} y_i \sum_{j=1}^{n_i} x_{ij} + y \sum_{j=1}^{n_\imath} x_{\imath j} + \sum_{j=1}^{\jmath-1} x_{\imath j} + x \quad\text{and}\quad |e'| = \sum_{i=1}^{\imath-1} n_i + \jmath,$$

respectively, where the numbers ı, y, ȷ, x should be calculated in the following sequence: find the maximum ı such that $\sum_{i=1}^{\imath-1} y_i \sum_{j=1}^{n_i} x_{ij} \le \ell$ and set $\ell_1 = \ell - \sum_{i=1}^{\imath-1} y_i \sum_{j=1}^{n_i} x_{ij}$; find the maximum y such that $y \sum_{j=1}^{n_\imath} x_{\imath j} \le \ell_1$ and set $\ell_2 = \ell_1 - y \sum_{j=1}^{n_\imath} x_{\imath j}$; find the maximum ȷ such that $\sum_{j=1}^{\jmath-1} x_{\imath j} \le \ell_2$ and set $\ell_3 = \ell_2 - \sum_{j=1}^{\jmath-1} x_{\imath j}$; set $x = \ell_3$. Since ı ≤ m and ȷ ≤ nı, we have |e′| ≤ |e|. Obviously, finding ı, where ı ≤ |e|, is the hardest part of this calculation.

2.3. Union of alternating schedules and filled-up schedules. Let S be a two-machine unit-time job-shop schedule of length λ. Define S to be an alternating schedule if λ > 1 and S has exactly one machine engaged in any time unit in [0, λ]. Define S to be a filled-up schedule if S has no idle time unit on either machine in [0, λ]. If S is a filled-up schedule, define 2S to be a schedule of length 2λ for the same set of jobs as in S that has exactly one machine engaged in any time unit in [0, 2λ] and the same sequence of operations on each machine as in S. Obviously, 2S is an alternating schedule, which we call a doubling of S.

Example 2.2. A filled-up two-machine unit-time job-shop schedule of length 11, and its doubling that starts on M2. [Figure not reproduced.] Operations of one colour here may belong to different jobs.
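Both the characteristic quantities and the prefix computation of Lemma 2.1 are straightforward to render in code. Below is a sketch with a data layout of our own choosing: a specification is a list of (component group, y) pairs, each group a list of (letter, x) pairs.

```python
def log_e(spec):
    """Characteristic polynomial: the length of the specified schedule."""
    return sum(y * sum(x for _, x in comp) for comp, y in spec)

def size_e(spec):
    """|e|: the number of periodic components."""
    return sum(len(comp) for comp, _ in spec)

def expand(spec):
    """Unfold a specification into the word it denotes (for checking only;
    this takes pseudopolynomial time, unlike the functions above)."""
    return "".join("".join(l * x for l, x in comp) * y for comp, y in spec)

def prefix(spec, ell):
    """Specification of the length-ell prefix, following the calculation in
    the proof of Lemma 2.1: peel off whole groups, then whole repetitions,
    then whole components, then a part of one component."""
    out, rem = [], ell
    for comp, y in spec:
        block = sum(x for _, x in comp)
        if rem >= y * block:                 # the whole group fits
            out.append((comp, y))
            rem -= y * block
            continue
        yy, rem = rem // block, rem % block  # whole repetitions of the group
        if yy:
            out.append((comp, yy))
        part = []
        for letter, x in comp:
            if rem >= x:
                part.append((letter, x))     # whole components
                rem -= x
            else:
                if rem:
                    part.append((letter, rem))   # a part of one component
                break
        if part:
            out.append((part, 1))
        break
    return out

# the paper's example (a^3 b^5 c)^2 (a b^2)^3
e = [([("a", 3), ("b", 5), ("c", 1)], 2), ([("a", 1), ("b", 2)], 3)]
```

On the example, `log_e(e)` is 27 and `size_e(e)` is 5, matching the text, and `expand(prefix(e, ℓ))` agrees with the first ℓ letters of `expand(e)` for every proper prefix length ℓ.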
It is easy to see that for any filled-up two-machine unit-time job-shop schedule there exist exactly two doublings: one starts on M1, the other starts on M2. Let A be the alternating schedule of 2λ successive operations in a job Jj that starts on M1, and let B be a filled-up schedule of length λ for a set of operations not belonging to Jj. Let us take the doubling 2B that starts on M2 and define the union A + B to be the schedule that is the consolidation of A and 2B in the same time interval [0, 2λ]. Obviously, A and 2B do not overlap in A + B, i.e., the union defines a feasible schedule. Our purpose is to show how, given compact specifications of A and B, to find a compact specification of A + B in polynomial time. Further, among all possible compact specifications of A we will be considering only (ab)^λ, where a is the first and b is the second operation in A.

Lemma 2.2. Let A be an alternating schedule of 2λ successive operations in one job, and let B be a filled-up schedule of length λ. Then, given their compact specifications a and b, a compact specification of A + B can be found in time O(|a| + |b|).

Proof. Let $a = (ab)^\lambda$ and $b = \prod_{i=1}^{m} \bigl( \prod_{j=1}^{n_i} e_{ij}^{x_{ij}} \bigr)^{y_i}$, where $\log b = \lambda$, be compact specifications of A and B, respectively. For each generator $e_{ij}$ define the generators $a + e_{ij}$ and $b + e_{ij}$ [pictures not reproduced] and set

$$a + b = \prod_{i=1}^{m} \Bigl( \prod_{j=1}^{n_i} \bigl( [a + e_{ij}][b + e_{ij}] \bigr)^{x_{ij}} \Bigr)^{y_i}.$$
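The construction can be sketched on expanded schedules. This is a sketch under an assumed interleaving convention, since the generator pictures are given in the paper only graphically: A keeps the even units on M1 and the odd units on M2, while the doubling 2B starts on M2.

```python
def union(A, B):
    """Union A + B of Lemma 2.2 on expanded schedules.
    A: list of 2*lam operation names of one job (alternating M1, M2, ...).
    B: list of lam units, each an (m1_op, m2_op) pair of a filled-up schedule.
    Returns 2*lam units (m1_op, m2_op) with no machine conflict."""
    lam = len(B)
    assert len(A) == 2 * lam
    out = []
    for t in range(lam):
        out.append((A[2 * t], B[t][1]))      # even unit: A on M1, B's M2-op
        out.append((B[t][0], A[2 * t + 1]))  # odd unit: B's M1-op, A on M2
    return out
```

Under this convention the union preserves the order of B's operations on each machine and keeps A on alternating machines without conflicts.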
It is easy to make sure that a + b is a compact specification of A + B that can be found in the required time.

3. Blocks of the priority table

The LLF algorithm for J2|pij = 1|Lmax can be described as follows. Let lj be the length of the remainder of Jj, i.e., the first mj − lj operations of Jj are already put in a partial schedule. Define lj − dj to be the lag of the remainder. For T = 0, 1, 2, . . . , the LLF algorithm fills the time unit [T, T + 1] on M1 and M2 with the first operations of the remainders having the largest lags among all remainders with first operations on M1 and M2, respectively. If the set of remainders with first operations on the machine is empty, then the machine remains idle in [T, T + 1]. Remainders with equal lags have equal priority. Initially, the lags are mj − dj. In other words, the LLF algorithm creates a list of all operations Oij by nondecreasing labels dj − mj + i, which can be thought of as due dates of the operations, and then fills the processing time of the machines with operations following the list.

3.1. Priority table definition. Define a priority table to be an array of n pairs (dj − mj, dj) ordered by nondecreasing dj − mj, i.e., nonincreasing lags, where in each of the first three parts of the array corresponding to the first three different largest lags the pairs with the earliest due dates are placed first. A priority table can be viewed as a set of n time axes, where the jth axis represents the time interval [dj − mj, dj] with the job Jj inserted into it and consists of upper and lower lines for operations on M1 and M2, respectively. Let f0 < f1 < · · · < fl be the different values among d1 − m1, d1, d2 − m2, d2, . . . , dn − mn, dn. Then the fragments of the priority table located in the time intervals [fk−1, fk] and [f0, fk],
where k = 1, 2, . . . , l, we call the kth column and the k-column head of width fk − fk−1 and fk − f0, respectively. A column is empty if it contains no operation. Any time unit between f0 and fl defines an equal priority for the operations in it. The earlier the time unit, the higher the priority the LLF algorithm gives. A priority table is connected if it contains no empty columns, and disconnected otherwise. It is enough to consider only the connected case, because removing empty columns does not change the priorities. Formally, this means that the due dates dj with dj > fk should be decreased by fk − fk−1 once [fk−1, fk] is empty.

Example 3.1. A connected priority table with 8 time axes and 10 columns. The columns include only black or only white squares. [Figure, with the first block and a supporting operation marked; not reproduced.] The denotation will be explained below.
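The LLF rule described at the beginning of this section admits a direct unit-by-unit rendering (a pseudopolynomial sketch of our own; the point of the paper is precisely to replace such a run by a polynomial imitator):

```python
def llf_lmax(jobs):
    """Unit-by-unit LLF for J2|pij=1|Lmax: in every time unit each machine
    takes the first operation of the remainder with the largest lag
    l_j - d_j among remainders starting on that machine.
    jobs = list of (mu1, m, d); returns the Lmax of the LLF schedule.
    Ties are broken by job index here; the paper refines this choice."""
    rem = [m for _, m, _ in jobs]       # remainder lengths l_j
    nxt = [mu for mu, _, _ in jobs]     # machine of the next operation
    comp = [0] * len(jobs)              # completion times
    t = 0
    while any(rem):
        used = set()                    # one operation per job per unit
        for mach in (1, 2):
            cand = [j for j in range(len(jobs))
                    if rem[j] and nxt[j] == mach and j not in used]
            if cand:
                j = max(cand, key=lambda i: rem[i] - jobs[i][2])  # largest lag
                rem[j] -= 1
                nxt[j] = 3 - mach
                comp[j] = t + 1
                used.add(j)
        t += 1
    return max(c - d for c, (_, _, d) in zip(comp, jobs))
```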
Let k be fixed. Then the set of axes with equal pairs (dj − mj, min{dj, fk}) we call a row of the k-column head. The number of axes in a row is its height. Note that the order on axes naturally induces an order on rows, and that the number of rows is at most k + 1. The fragment of a head that is located in the intersection of the ith row and the jth column we call a cell and denote by Cij. The height of the ith row and the width of the jth column naturally define the height and the width of the cell Cij, which we denote by hij and wij, respectively. The cell Cij is either empty or contains a rectangle Rij with hij·wij operations. We call a rectangle oriented if all first operations in its axes require the same machine, and unoriented otherwise. Each time when possible, we assume that the machine is M1, because the other situation is symmetrical. The parameter oij ∈ {0, 1, 2} will indicate the orientation of Rij: 0, 1, 2 means the requirement of both machines, only M1, or only M2, respectively. Obviously, the height of any unoriented rectangle is at least two. It is easy to make sure that any rectangle belongs to one of the following three types:

[arm] Aij : hij = 1 ∧ wij > 1,
[rod] Rij : wij = 1 ∧ oij > 0,
[box] Bij : [ hij > 1 ∧ wij > 1 ] ∨ oij = 0.
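The three types can be checked mechanically (a small sketch of the case analysis above):

```python
def rectangle_type(h, w, o):
    """Classify a nonempty rectangle: h = height, w = width,
    o = orientation (0: both machines, 1: only M1, 2: only M2).
    Note an unoriented rectangle (o = 0) has height at least two,
    so o = 0 with h = 1 never arises."""
    if h == 1 and w > 1:
        return "arm"
    if w == 1 and o > 0:
        return "rod"
    if (h > 1 and w > 1) or o == 0:
        return "box"
    raise ValueError("no type applies")
```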
3.2. Block types. Since the LLF algorithm gives the first priority to operations in columns with smaller numbers, our goal is to outline a leftmost block of the priority table that turns out to be tractable in finding a compact specification of an LLF schedule of the operations in it and allows the next block to be outlined with the same property. To obtain the completion of such a recursion in polynomial time, the number of generated blocks must be polynomial. So, the blocks must be quite large to get past long chains of operations in jobs. In this section we give the definition of such blocks by introducing block types. As we show further, the set of block types is closed under the transferring of operations by a refined LLF algorithm, and the number of generated blocks of these types is indeed polynomial. Let Dij denote a rectangle of rod or box type. The matrix of rectangles of the k-column head we call a k-column configuration. Define a block of a priority table to be one of the following twelve fragments of one-, two- or three-column configurations, which we call block types:

arm = A11,   rod = R11,   box = B11,   bit = R11 D12 D22,
bat = A11 A12 A22,   tap = A11 R12 R22,   axe = A11 A12 B22,   key = A11 R12 A13 R22 D23 D33,
pan = A11 A22,   gun = A11 R22,   pot = A11 B22,   cap = A11 R22 D23 D33,

where only a rod, a tap or a gun can be a proper fragment; the other nine block types are one-, two- or three-column configurations themselves. Herein, an arm implies l = 1, i.e., an arm is a priority table itself; a rod implies C12 = ∅ ∧ [ C21 ≠ ∅ ⇒ o21 = o11 ]; a tap or a gun implies C23 = ∅ ∧ [ C32 ≠ ∅ ⇒ o32 = o22 ]. Note that R12 contains only one operation. As the next lemma proves, a block of a priority table is defined uniquely. Blocks of the first four, the next four and the last four types we call no-handle, straight-handle and bent-handle blocks, respectively. Blocks of the last eight types are handle blocks, where the first row and the residual part without the first row we call a handle and a base, respectively. One can observe that the bases of a bat or a pan, a tap or a gun, an axe or a pot, and a key or a cap are an arm, a rod, a box, and a bit, respectively. As we will see, no-handle blocks are basic, because LLF schedules for handle blocks can easily be obtained from LLF schedules for no-handle blocks. Define a block of height h to be oriented if the rectangles R11, . . . , Rhh have the same orientation, i.e., o11 = oii for 1 ≤ i ≤ h, and unoriented otherwise.

Example 3.2. The no-handle blocks except the arm are of height three. The handle blocks except the bat and the pan are of height four. The no-handle blocks, the bases of the handle blocks, the first column of the bit, and the handles are distinguished in the figure by shadings; the middle of the axes with a single square in the rod, the tap and the gun is shown by –. The box, the axe and the pot are unoriented; all the other blocks are oriented.
SCHEDULING BY PERIODIZATION
11
[Figure. No-handle blocks: arm, rod, box, bit. Straight-handle blocks: bat, tap, axe, key. Bent-handle blocks: pan, gun, pot, cap. Diagrams not reproduced.]
Lemma 3.1. A block of a priority table is unique.

Proof. Checking the definitions of the twelve block types, one can make sure that any priority table satisfies at least one of them. Besides, any two block types exclude one another. Thus, h11 = 1 & w11 > 1 & l > 1 is the "handle" predicate identifying handle blocks. Let di be the due date of the job in axis i. Then f1 = d1 is the "bend" predicate identifying whether a handle is bent or straight, and the following algorithm Ai identifies the type of no-handle blocks if i = 1, or the type of the base of handle blocks if i = 2.

Ai : if wii = 1
       then if oii = 0 then box
            else if fi = di then rod else bit
       else if hii > 1 then box else arm.
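The identification algorithm Ai from the proof can be written out directly (a sketch; the parameter names follow the text):

```python
def identify_base(w_ii, o_ii, h_ii, f_i, d_i):
    """Algorithm A_i of Lemma 3.1: the type of a no-handle block (i = 1)
    or of the base of a handle block (i = 2)."""
    if w_ii == 1:
        if o_ii == 0:
            return "box"
        return "rod" if f_i == d_i else "bit"
    return "box" if h_ii > 1 else "arm"
```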
Further w and h will denote the width and the height of a block. We call two boxes or two bits similar if they differ only by width. Without loss of generality, we assume that axes of a block are in the standard order, where the indices of the machines processing first operations in axes are nondecreasing in each row, and the first operation in the first axis of the first row always requires M1 . In boxes and bits u and v will denote the numbers of axes which first operations require M1 and M2 , respectively. Denote the height of the first column inside a bit as s > 0. Thus, h = u + v and u ≥ s. Set t = u − s, i.e., t is the number of axes in the second row of a bit with first operations on M1 . For the bit in Example 3.2 s = 2, t = 1, v = 0. 3.3. Oriented LLF algorithm. The LLF algorithm run can be thought of transferring operations from a priority table to an LLF schedule. The main observation is that the LLF algorithm performs this transferring block by block, where the block is one of the above twelve types. Herein, LLF schedules of the blocks, as we show in the next two sections, turn out to be periodized. The related compact specifications of LLF schedules for blocks will be easier to obtain if the LLF algorithm is complemented by a preference rule of choosing an operation among operations with equal lags. We will use the two preference rules: HJF [ Highest–Job–First ] : take the operation from the job in the highest axis, and LJF [ Lowest –Job–First ] : take the operation from the job in the lowest axis. LLF schedules obtained by the HJF rule and the LJF rule we call upper and lower, respectively. The two rules are chosen as providing simpler compact specifications. Define an oriented LLF (OLLF) algorithm to be the LLF algorithm that uses the HJF or LJF rule when transferring operations from oriented or unoriented blocks, respectively. 
After transferring the operations of the first block except the operations of a residual part, which can be empty, the OLLF algorithm processes in the same way the second block, which will be the residual part or its consolidation with the next column, etc. What residual parts the OLLF algorithm leaves will be clarified in the next two sections. Further, a generic operation symbol (pictured in the figures) will denote an operation, and r the height of the residual part. Examples given in the next two sections will help to derive compact specifications of the LLF schedules for each type of block. Transferring operations by the OLLF algorithm from a block (or any set of its axes) occupying the time interval [A, B] of a given priority table will be characterized by the running front, which we define as the time interval [T, T + D] with variable ends such that:
• A ≤ T and T + D ≤ B;
• if A < T, then all operations are transferred from [A, T];
• at least one operation is transferred from each unit among [T, T + 1], [T + 1, T + 2], . . . , [T + D − 1, T + D];
• if T + D < B, then no operation is transferred from [T + D, B].
The variable T and the sum T + D are the left edge and the right edge of the running front, respectively. The maximum value of the variable D, while transferring operations from the block, is defined to be the depth of the running front.
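The front parameters can be recovered mechanically from a transfer state. The sketch below is a hypothetical helper, not from the paper: it takes per-unit transfer counts for a block occupying [A, B] with A = 0 and returns the current (T, D).

```python
def running_front(transferred, capacity):
    """Return (T, D) of the running front for a block occupying unit
    intervals 0..len(transferred)-1, where transferred[i] is the number
    of operations already transferred from unit i and capacity[i] is
    the total number of operations in unit i.  Hypothetical helper; the
    paper defines the front but gives no such procedure."""
    n = len(transferred)
    T = 0
    # left edge: every unit strictly before T is fully transferred
    while T < n and transferred[T] == capacity[T]:
        T += 1
    D = 0
    # depth: consecutive units from T that each lost at least one operation
    while T + D < n and transferred[T + D] > 0:
        D += 1
    return T, D
```

For instance, with capacities [2, 2, 2, 2, 2] and transfer counts [2, 2, 1, 1, 0], the first two units are exhausted and the next two are partially transferred, so the front is [2, 4], i.e., T = 2 and D = 2.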
4. LLF schedules for no-handle blocks

4.1. LLF schedules for oriented blocks. In the following six compact specifications we assume that the ith appearance of the supporting-operation symbol will denote the first operation of a job with ith largest lag among the jobs with first operations on M2 that are not in the block. These jobs and their first operations we call supporting (the block), and g will denote the number of supporting jobs. The same symbol will also denote the idle time unit on M2 if g = 0. The ith appearances in the LLF schedule of the four operation symbols will denote O2j+1,k+1, O2j+2,k, O2i−1,1, O2i,h, respectively, where j = ⌊(i − 1)/(h − 1)⌋, k = i − j(h − 1). All residual parts will be of width one.

Example 4.1. Arm: w = 11.
[Figure: priority table, upper LLF schedule and generators a, b.]
Compact specification: Arm = (ab)^⌊w/2⌋ a^(w−2⌊w/2⌋)
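To make the word notation concrete, a compact specification such as Arm = (ab)^⌊w/2⌋ a^(w−2⌊w/2⌋) expands into an explicit sequence of exactly w generator occurrences. A minimal sketch (the helper names are ours, not the paper's):

```python
def expand(spec):
    """Expand a compact specification, given as a list of (word, exponent)
    factors, into the explicit sequence of generators."""
    return "".join(word * exp for word, exp in spec)

def arm_spec(w):
    """Factors of the Arm specification:
    (ab)^(w//2) followed by a^(w - 2*(w//2))."""
    half = w // 2
    return [("ab", half), ("a", w - 2 * half)]
```

For w = 11 this yields (ab)^5 a, a word of 11 generator occurrences: the periodized schedule is stored in O(1) space as the factor list, while its expansion has length w.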
Example 4.2. Rod: h = 5.
[Figure: priority table, upper LLF schedule and generators a, b for the cases g ≥ h and g = 2.]
Compact specification: Rod = a^min{g,h} b^(h−min{g,h}), r = 0
Example 4.3. OBoxO [Oriented Box of Odd Width]: h = 5, w = 9.
[Figure: priority table, upper LLF schedule and generators a, b, c.]
Compact specification: OBoxO = a(b^(h−1)c)^((w−1)/2), r = h − 1
Example 4.4. OBoxE [Oriented Box of Even Width]: h = 5, w = 8.
[Figure: priority table, upper LLF schedule and generators a, b, c.]
Compact specification: OBoxE = a(b^(h−1)c)^(w/2−1) b^(h−1), r = 1
Example 4.5. OBitO [Oriented Bit of Odd Width]: h = 5, s = 2, w = 9.
[Figure: priority table, upper LLF schedule and generators a, b, c.]
Compact specification: OBitO = a(b^(h−1)c)^((w−1)/2), r = s − 1
Example 4.6. OBitE [Oriented Bit of Even Width]: h = 5, s = 2, w = 8.
[Figure: priority table, upper LLF schedule and generators a, b, c.]
Compact specification: OBitE = a(b^(h−1)c)^(w/2−1) b^s, r = h − s − 1
4.2. LLF schedules for unoriented blocks. We say that an unoriented bit has an upper gap or lower gap if u ≤ v or u > v, respectively. All residual parts will be of width one. To better understand compact specifications for bits of even width we consider bits of width two first. In the following eight compact specifications we assume that the ith appearances in the LLF schedule of the four operation symbols will denote O2j+1,k, O2j+2,k, O2p+2,u+q, O2p+1,u+q, respectively, where j = ⌊(i − 1)/u⌋, k = (j + 1)u − (i − 1), p = ⌊(i − 1)/v⌋, q = (p + 1)v − (i − 1). The other denotations have the same sense as above.
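The index bookkeeping in these formulas is mechanical; the sketch below (a hypothetical helper, not part of the paper) computes, for the ith appearance with parameter u, the pair j = ⌊(i − 1)/u⌋ and k = (j + 1)u − (i − 1), so that k cycles through u, u − 1, . . . , 1 as i grows.

```python
def upper_indices(i, u):
    """For the i-th appearance (i >= 1), return (j, k) with
    j = (i - 1) // u and k = (j + 1) * u - (i - 1); the referenced
    operations are then O_{2j+1,k} and O_{2j+2,k}."""
    j = (i - 1) // u
    k = (j + 1) * u - (i - 1)
    return j, k
```

With u = 3 the first six appearances give k = 3, 2, 1, 3, 2, 1, i.e., the column index sweeps each row of u axes from top to bottom before moving two rows down.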
Example 4.7. UBoxO [Unoriented Box of Odd Width]: u = 3, v = 2, w = 9.
[Figure: priority table, lower LLF schedule and generators a, b, c.]
Compact specification: UBoxO = (a^v b^(u−v) c^v)^((w−1)/2) a^v, r = u − v
Example 4.8. UBoxE [Unoriented Box of Even Width]: u = 3, v = 2, w = 8.
[Figure: priority table, lower LLF schedule and generators a, b, c.]
Compact specification: UBoxE = (a^v b^(u−v) c^v)^(w/2), r = 0
Example 4.9. UUBitO [Upper-gap Unoriented Bit of Odd Width]: u = 3, v = 4, s = 2, w = 9.
[Figure: priority table, lower LLF schedule and generators a, b, c.]
Compact specification: UUBitO = (a^u b^(v−u) c^u)^((w−1)/2), r = s
Example 4.10. LUBitO [Lower-gap Unoriented Bit of Odd Width]: s = v = 2, u = 5, w = 9.
[Figure: priority table, lower LLF schedule and generators a, b, c.]
Compact specification: LUBitO = (a^v b^(u−v) c^v)^((w−1)/2), r = s
Example 4.11. UUBit2 [Upper-gap Unoriented Bit of Width Two]: t = 1, u = 3, v = 4.
[Figure: priority table, lower LLF schedule and generator a.]
Compact specification: UUBit2 = a^u, r = v − t
Example 4.12. UUBitE [Upper-gap Unoriented Bit of Even Width]: t = 1, u = 3, v = 4, w = 8.
[Figure: priority table, lower LLF schedule and generators a, b, c.]
Compact specification: UUBitE = (a^u b^(v−u) c^u)^((w−2)/2) a^u, r = v − t
Example 4.13. LUBit2 [Lower-gap Unoriented Bit of Width Two]: s = v = 2, t = 3, u = 5.
[Figure: priority table, lower LLF schedule and generators a, b.]
Compact specification: LUBit2 = a^v b^(s+(t−v−|t−v|)/2), r = |t − v|
Example 4.14. LUBitE [Lower-gap Unoriented Bit of Even Width]: s = v = 2, t = 3, u = 5, w = 8.
[Figure: priority table, lower LLF schedule and generators a, b, c.]
Compact specification: LUBitE = (a^v b^(u−v) c^v)^((w−2)/2) a^v b^(s+(t−v−|t−v|)/2), r = |t − v|
Lemma 4.1. (a) If the OLLF algorithm in application to a box or a bit leaves a residual part, then it is a rod. (b) By transferring 2h operations from a box of width w > 2 or a bit of width w > 3, the OLLF algorithm turns it into a similar box or a similar bit of width w − 2, respectively. (c) The depth of the running front of a box or a bit is at most two.

Proof. (a) and (b) trivially follow from the LLF rule and the above compact specifications for Box and Bit. (c) follows from (b).

5. LLF schedules for handle blocks. Obviously, the base of a bat or pan is an arm, the base of a tap or gun is a rod, the base of an axe or pot is a box, and the base of a key or cap is a bit. The length of the part of a handle in the first column and the width of the base will be denoted as p and q, respectively. Transferring operations by the LLF algorithm from a handle block can be viewed as calling
operations in turn from the handle and the base. Odd calls transfer operations from the handle, even calls transfer operations from the base. If the base is of height more than one (which is not the case for arms, bats and pans), the handle disappears from these blocks faster than the base.

5.1. LLF schedules for straight-handle blocks. Define an alignment time to be a moment in the period of transferring operations from a straight-handle block when one of the two events happens: (i) all operations from the base are transferred except the residual part, but at least two operations in the handle are not; (ii) an operation in the handle appears inside the running front of the base. We call the base of a tap, an axe or a key weak or strong if (i) or (ii) holds, respectively. LLF schedules for bats or taps are still as easy to obtain directly as for no-handle blocks, while axes and keys require an additional technique using Lemma 2.2. Among all straight-handle blocks, a bat is the only one for which the alignment time always implies (i) because the base of bats is of height one. In the following two compact specifications, we assume that the ith appearances in the LLF schedule of the two operation symbols will denote Oi1 and Oi2, respectively. In this section, r denotes the length of the residual part if it is of height one. The other denotations have the same sense as above.

Example 5.1. OBat [Oriented Bat]: p = 6, q = 10.
[Figure: priority table, upper LLF schedule and generators a, b, c.]
Compact specification: OBat = a(bc)^⌊q/2⌋ b^(q−2⌊q/2⌋), r = p − 1
Example 5.2. UBat [Unoriented Bat]: p = 10, q = 6.
[Figure: priority table, lower LLF schedule and generators b, c.]
Compact specification: UBat = (cb)^⌊q/2⌋ c^(q−2⌊q/2⌋), r = p
It is not hard to make sure that the base of a tap is weak if and only if ⌈p/2⌉ > h − 1. Note that for taps tv = 0. In the following four compact specifications we assume that the ith appearances in the LLF schedule of the two operation symbols will denote Oi1 and O1,i+1, respectively. In Examples 5.5 and 5.6 a separate symbol will denote the supporting operations on M1. The other denotations have the same sense as above.
Example 5.3. WOTap [Weak-base Oriented Tap]: h = 4, p = 9.
[Figure: priority table, upper LLF schedule and generators a, b, c for the cases g ≥ h − 1 and g = 1.]
Compact specification: WOTap = (ab)^min{g,h−1} (cb)^(h−1−min{g,h−1})

Example 5.4. SOTap [Strong-base Oriented Tap]: h = 5, p = 4.
[Figure: priority table, upper LLF schedule and generators a, b, c for the cases g ≥ ⌈p/2⌉ and g = 1.]
Compact specification: SOTap = (ab)^min{g,⌈p/2⌉} (cb)^(⌈p/2⌉−min{g,⌈p/2⌉})

Example 5.5. WUTap [Weak-base Unoriented Tap]: p = 5, v = 2.
[Figure: priority table, lower LLF schedule and generators a, b, c for the cases g ≥ h − 1 and g = 1.]
Compact specification: WUTap = (ba)^min{g,h−1} (bc)^(h−1−min{g,h−1})

Example 5.6. SUTap [Strong-base Unoriented Tap]: p = 2, v = 4.
[Figure: priority table, lower LLF schedule and generators a, b, c for the cases g ≥ ⌈p/2⌉ and g = 0.]
Compact specification: SUTap = (ba)^min{g,⌈p/2⌉} (bc)^(⌈p/2⌉−min{g,⌈p/2⌉})
The LLF schedule for an axe or a key is defined to be the schedule the OLLF algorithm produces up to the alignment time. Operations of the blocks that are not involved in this schedule will define the residual part of the blocks. Since operations of the handle and operations of the base are not in competition until the alignment time, the OLLF schedule for an axe can be produced by taking the left part of the OLLF schedule for the base of length λ, where 2λ is the alignment time, and adding it to the left part of the handle of length 2λ. For weak bases, the whole OLLF schedule for the base should be taken. Let α denote the length of the OLLF schedule for the base. It can be calculated by taking the logarithm of the related compact specification for a box or a bit because the bases of an axe and a key are a box and a bit, respectively. For example, the length of the OLLF schedule for an unoriented box of odd width is

α = log UBoxO = ((w − 1)/2)(u + v) + v.

Thus, axes and keys with weak bases can be recognized by checking whether 2α is less than p + q − 1. For strong bases, we need to find the length ω of the left part of the LLF schedule for the base such that the 2ωth operation in the handle appears in the running front of the base. Since 2ω > p in this case, 2ω = p + 2x + 1 for odd p or 2ω = p + 2x for even p, where x is a positive integer. Due to Lemma 4.1(b), x can be found as the root of the equation

(p + 2x + 1)/(2(h − 1)) = x for odd p or (p + 2x)/(2(h − 1)) = x for even p,

respectively, where x is the number of pairs of operations transferred from the base up to the alignment time.

Lemma 5.1. Given a compact specification of the LLF schedule for the base of an axe or a key, a compact specification of an LLF schedule for the axe or the key can be found in constant time.

Proof. Let A and B be the handle and the LLF schedule for the base with specifications a and b, respectively. The specification of the left part of A of length 2λ is obviously a^(2λ) = (ab)^λ.
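As a sanity check, α and the weak-base test can be computed directly. The sketch below (hypothetical helper names) reproduces Example 5.8 further on, where the base is an unoriented box of odd width w = q = 5 with u = 2, v = 1, giving α = 7 and, with p = 11, a weak base since 2α = 14 < p + q − 1 = 15.

```python
def alpha_uboxo(w, u, v):
    """Length of the OLLF schedule for an unoriented box of odd width:
    alpha = ((w - 1) / 2) * (u + v) + v."""
    return (w - 1) // 2 * (u + v) + v

def has_weak_base(p, q, alpha):
    """An axe (or key) has a weak base iff 2 * alpha < p + q - 1."""
    return 2 * alpha < p + q - 1
```

So an axe over this base with handle length p = 11 is recognized as weak-base without expanding the schedule at all.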
Since Lemma 2.2 holds, and A + B is obviously an LLF schedule, it is enough to show that the specification bλ of the left part of B of length λ can be found in constant time. But the latter follows from Lemma 2.1 because m ≤ 3 and ni ≤ 3, i = 1, 2, . . . , m, i.e., |b| ≤ 9, for boxes and bits.

Below we provide compact specifications of LLF schedules only for axes because the compact specifications for keys can be obtained from the former by simply changing the names: “axe” to “key” and “box” to “bit”. Besides, we omit examples of the LLF schedules for keys because they do not differ in any essential way from the LLF schedules for axes. In the following four compact specifications we assume that a^(2λ) = (ab)^λ is the specification of the left part of the handle of length 2λ and that OBoxλ and UBoxλ are specifications of the left part of an LLF schedule for the base of length λ, where OBox ∈ {OBoxO, OBoxE} and UBox ∈ {UBoxO, UBoxE}. The other denotations have the same sense as above.
Example 5.7. WOAxe [Weak-base Oriented Axe]: h = 4, p = 13, q = 5, α = 7.
[Figure: priority table and upper LLF schedule.]
Components: a^(2α), OBoxα. Compact specification: WOAxe = a^(2α) + OBoxα

Example 5.8. WUAxe [Weak-base Unoriented Axe]: p = 11, q = 5, u = 2, v = 1, α = 7.
[Figure: priority table and lower LLF schedule.]
Components: a^(2α), UBoxα. Compact specification: WUAxe = a^(2α) + UBoxα

Example 5.9. SOAxe [Strong-base Oriented Axe]: h = 5, p = 9, q = 8, x = 1, ω = 6.
[Figure: priority table and upper LLF schedule.]
Components: a^(2ω), OBoxω. Compact specification: SOAxe = a^(2ω) + OBoxω

Example 5.10. SUAxe [Strong-base Unoriented Axe]: h = 5, p = 9, q = 8, x = 1, ω = 6.
[Figure: priority table and lower LLF schedule.]
Components: a^(2ω), UBoxω. Compact specification: SUAxe = a^(2ω) + UBoxω
5.2. LLF schedules for bent-handle blocks. Define the base of a pan to be weak if p > q or strong otherwise. Analogously to the LLF schedules for bats and taps, LLF schedules for pans and guns are easy to obtain directly without using Lemma 2.2. In the following two compact specifications we assume that the ith appearances in the LLF schedule of the two operation symbols will denote Oi1 and Oi2, respectively. The other denotations have the same sense as above.

Example 5.11. SOPan [Strong-base Oriented Pan]: p = 6, q = 9, r = 4.
[Figure: priority table, upper LLF schedule and generators a, b, c.]
Compact specification: SOPan = a(bc)^⌊(p−1)/2⌋ b^(p−1−2⌊(p−1)/2⌋), r = q − p + 1
Example 5.12. SUPan [Strong-base Unoriented Pan]: p = 6, q = 9, r = 3.
[Figure: priority table, LLF schedule and generators b, c.]
Compact specification: SUPan = (cb)^⌊p/2⌋ b^(p−2⌊p/2⌋), r = q − p
Example 5.13. WOPan [Weak-base Oriented Pan]: p = 9, q = 6, r = 2.
[Figure: priority table, upper LLF schedule and generators a, b, c.]
Compact specification: WOPan = a(bc)^⌊(q−1)/2⌋ b^(q−1−2⌊(q−1)/2⌋), r = p − q − 1
Example 5.14. WUPan [Weak-base Unoriented Pan]: p = 9, q = 6, r = 3.
[Figure: priority table, LLF schedule and generators b, c.]
Compact specification: WUPan = (cb)^⌊q/2⌋ b^(q−2⌊q/2⌋), r = p − q
Define the base of a gun to be weak if ⌈p/2⌉ > h − 1, or strong otherwise. This is the same inequality as for taps. Besides, as for taps, tv = 0 for guns. Thus, because handles of guns are just one unit shorter than handles of taps, guns have the same LLF schedules and compact specifications as taps do. Only the residual parts can be different. As for taps, we assume that the ith appearances in the LLF schedule of the two operation symbols will denote Oi1 and O1,i+1, respectively, and that in Examples 5.17 and 5.18 a separate symbol will denote the supporting operations on M1. The other denotations have the same sense as above.

Example 5.15. WOGun [Weak-base Oriented Gun]: h = 4, p = 9.
[Figure: priority table, upper LLF schedule and generators a, b, c for the cases g ≥ h − 1 and g = 1.]
Compact specification: WOGun = (ab)^min{g,h−1} (cb)^(h−1−min{g,h−1})

Example 5.16. SOGun [Strong-base Oriented Gun]: h = 5, p = 4.
[Figure: priority table, upper LLF schedule and generators a, b, c for the cases g ≥ ⌈p/2⌉ and g = 1.]
Compact specification: SOGun = (ab)^min{g,⌈p/2⌉} (cb)^(⌈p/2⌉−min{g,⌈p/2⌉})
Example 5.17. WUGun [Weak-base Unoriented Gun]: p = 5, v = 2.
[Figure: priority table, lower LLF schedule and generators a, b, c for the cases g ≥ h − 1 and g = 1.]
Compact specification: WUGun = (ba)^min{g,h−1} (bc)^(h−1−min{g,h−1})

Example 5.18. SUGun [Strong-base Unoriented Gun]: p = 2, v = 4.
[Figure: priority table, lower LLF schedule and generators a, b, c for the cases g ≥ ⌈p/2⌉ and g = 0.]
Compact specification: SUGun = (ba)^min{g,⌈p/2⌉} (bc)^(⌈p/2⌉−min{g,⌈p/2⌉})
The base of a pot or a cap is a box or a bit as in the case of axes and keys. Again, let α be the length of the LLF schedule for the base. Define the base to be weak if ⌊p/2⌋ > α or strong otherwise. For weak bases, the whole LLF schedule for the base should be added to the left part of the handle of length 2α. For strong bases, we need to take the left part of the OLLF schedule for the base of length ω = ⌊p/2⌋.

Lemma 5.2. Given a compact specification of the LLF schedule for the base of a pot or a cap, a compact specification of the LLF schedule for the pot or the cap can be found in constant time.

Proof. The same as for Lemma 5.1.
Below we provide compact specifications of LLF schedules for pots only because the compact specifications for caps can be obtained from the former by changing the names: “pot” to “cap” and “box” to “bit”. Besides, we omit examples of the LLF schedules for caps because they do not differ in any essential way from the LLF schedules for pots. The other denotations have the same sense as above.
Example 5.19. WOPot [Weak-base Oriented Pot]: p = 13, α = 6.
[Figure: priority table and upper LLF schedule.]
Components: a^(2α), OBoxα. Compact specification: WOPot = a^(2α) + OBoxα

Example 5.20. WUPot [Weak-base Unoriented Pot]: p = 11, α = 4.
[Figure: priority table and lower LLF schedule.]
Components: a^(2α), UBoxα. Compact specification: WUPot = a^(2α) + UBoxα

Example 5.21. SOPot [Strong-base Oriented Pot]: p = 9, ω = 4.
[Figure: priority table and upper LLF schedule.]
Components: a^(2ω), OBoxω. Compact specification: SOPot = a^(2ω) + OBoxω

Example 5.22. SUPot [Strong-base Unoriented Pot]: p = 9, ω = 4.
[Figure: priority table and lower LLF schedule.]
Components: a^(2ω), UBoxω. Compact specification: SUPot = a^(2ω) + UBoxω
6. LLF schedules in square time using linear space

6.1. Block-by-block algorithm. The following lemma establishes time and space requirements for finding an LLF schedule of a single block.

Lemma 6.1. The compact specification of the upper or lower LLF schedule of a block requires constant space and can be found in constant time.

Proof. Each of the compact specifications listed in Sections 4 and 5 has at most five generators. To specify each generator we need to store at most four numbers: two numbers to specify jobs and two numbers to specify operations in the jobs. The constant time complexity is obvious for no-handle blocks, bats, pans, taps and guns, and follows from Lemmas 5.1 and 5.2 for the other blocks, i.e., axes, keys, pots and caps.

The following Block-By-Block (BBB) algorithm forms the priority table and removes operations from it block by block, producing the compact specifications of LLF schedules of the blocks and appending them one by one to the word c, which is initially empty. Thus, it produces a compact specification of the OLLF schedule that is a concatenation of the compact specifications of the LLF schedules of the blocks which the OLLF algorithm passes while transferring operations from the priority table. At the end of each step, we indicate its time complexity, which is either evident or derived from the provided arguments.

BBB algorithm:
(1) Construct the priority table ordering axes by nonincreasing lags. – O(n log n).
(2) In each of the first three rows move axes with earliest due date to the top of the row. – O(n).
(3) Find the block of the priority table, calculate its parameters h, p, q, r, s, t, u, v, w, and put its axes in the standard order. – O(n).
(4) If the block is oriented, then find the supporting operations if any, and delete them from the priority table. – O(n).
(5) Set the prescribed generators, write out the compact specification of the LLF schedule of the block and append it to c. The constant time complexity of this step follows from Lemma 6.1. – O(1).
(6) Remove all operations of the block from the priority table except operations of the residual part, and remove an empty column and empty axes from the residual priority table if they appear. If the block has no residual part, then check whether the residual priority table is empty. If yes, then stop. – O(n).
(7) Move the axes of the residual part to the top of the residual priority table and go to Step 2. – O(n).

We can see that one cycle (2)–(7) takes time O(n), and the total time complexity of the BBB algorithm is O(n log n + Nn), where N is the number of generated blocks. Hence, to prove that the BBB algorithm is of polynomial time we need only to show that the number of generated blocks is polynomially bounded.
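The control flow of Steps (1)–(7) can be summarized structurally. In the simplified model below, an axis is reduced to its lag and block identification (Step 3) is stubbed out as peeling the top axis; this is a sketch of the loop's accounting, not of the real algorithm: one O(n log n) sort, then at most N iterations each appending a constant-size piece to c.

```python
def bbb_skeleton(lags):
    """Structural sketch of the BBB loop: sort once, then peel one
    'block' per iteration, appending a constant-size specification.
    Block identification (Step 3) is stubbed as taking the top axis."""
    table = sorted(lags, reverse=True)   # Step 1: O(n log n)
    c = []                               # compact specification, initially empty
    while table:                         # at most N = O(n) blocks
        block_lag = table.pop(0)         # stub for Steps 2-4 and 6-7
        c.append(f"spec({block_lag})")   # Step 5: O(1) by Lemma 6.1
    return "".join(c)
```

The output word grows by one constant-size factor per block, which is exactly why the whole schedule fits in O(n) space while describing Θ(nl) operations.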
6.2. Complexity of the block-by-block algorithm. Let a priority table of height H have the block B of height h. Define a top and a bottom of B to be H and H − h + 1, respectively. Define a level of the block to be the sum of its top and its bottom. Thus, if B is the first block generated by the BBB algorithm, then its top, bottom and level are n, n − h + 1 and 2n − h + 1, respectively. Note that a level is always an integer. We say that the block B is lower than the block B′ if the level of B is lower than the level of B′.

Lemma 6.2. Either the BBB algorithm generates less than eight blocks or the eighth block is lower than the first block.

Proof. First, note that the BBB algorithm generates a sequence of blocks for which the tops are nonincreasing because it does not add new axes. Hence, to prove the lemma we need to trace how tops and bottoms of blocks become lower. Let us consider twelve cases, in which the first generated block is one of the twelve types, and check how many following blocks the BBB algorithm generates to exhaust the priority table or to get a lower block.

Arm: The first block is last.

Rod, Tap, Gun or Strong-base Pan: The second block has a lower top, because of empty axes appearing after passing the first block, and not a higher bottom.

Box or Bit: The height of a box, a bit, a handle block or the length of a handle is at least two. Therefore, if the second block has a handle, then its top and its bottom are lower. Otherwise the second axis contains empty time units between two nonempty time units, which contradicts the priority table definition. Hence, the second block with the same top can only be a no-handle block, i.e., an arm, a rod or, again, a box or a bit. But an arm is the last block by definition. If the second block is a rod, then the third block is lower as shown above. If the second block is again a box or a bit of width at least two, then its bottom is lower because it contains O2,h+1. However, if the second block is of width one, it can only be an unoriented box with the same bottom as the first block (only if O1,h+1 supports the first block). If the second block is not last, then the third block contains O2,h+1 and, therefore, has a lower bottom.

Bat or Weak-base Pan: The residual part of the first block is an arm or a rod of height one. If the second block is not last, then the second axis becomes either empty or not. If it does, then the top and the bottom of the second block are lower. If not, then the second block has the same top and is a bit, a rod of height one, a tap, a gun, an axe, a pot, a key or a cap. As shown above, in the case of a bit, the fourth block is lower, and, in the case of a rod, a tap or a gun, the third block is lower. In the case of an axe, a pot, a key or a cap, the third block has a lower bottom because its height is at least three by definition.

Strong-base Axe or Strong-base Key: By Lemma 4.1(c), the depth of the running front of a box or a bit is at most two, and, by definition of the alignment time, transferring operations from a strong-base axe or a strong-base key stops when an operation in the handle appears inside the running front of the base. Hence, the residual part of the first block is of height h and consists of at most three columns, where only the last column can be of width greater than one. This means that the second, the third or the fourth block is a box or a bit of height h. As shown above, the third block after a box or a bit is lower. Hence, the sixth block after a strong-base axe or a strong-base key is lower.
Strong-base Pot or Strong-base Cap: The residual part of the handle is either empty or contains only one operation. In the former case, the second block has a lower top and the same bottom. In the latter case, the second block is a rod of height one. Then, as shown above, the third block is lower.

Weak-base Pot or Weak-base Cap: The second block can only be an arm, a rod of height one, a gun, or again a pot or a cap. In the cases of an arm, a rod, a gun, a strong-base pot and a strong-base cap, as shown above, at most the fourth block is lower. In the case of a weak-base pot or a weak-base cap, we repeat the same reasoning and conclude that either the fifth block is lower or the third block is again a weak-base pot or a weak-base cap. Then its base, as a third box or bit, is lower, as shown above.

Weak-base Axe or Weak-base Key: The second block can only be an arm, a tap, or again an axe or a key. In the cases of an arm, a tap, a strong-base axe and a strong-base key, as shown above, at most the seventh block is lower. In the case of a weak-base axe or a weak-base key, we repeat the same reasoning and conclude that either the eighth block is lower or the third block is again a weak-base axe or a weak-base key. Then its base, as a third box or bit, is lower, as shown above.

Theorem 6.1. The BBB algorithm generates O(n) blocks.

Proof. By Lemma 6.2, each eighth block reduces the block level. But the level of the first block is less than 2n. Therefore, the number of generated blocks is less than 16n.

Corollary 6.1. The BBB algorithm solves J2|pij = 1|Lmax in time O(n^2) producing an LLF schedule with space requirement O(n).

Proof. Follows from Brucker's proof that the LLF algorithm is optimal [B81, B82], the time bound O(n log n + Nn), where N = O(n) by Theorem 6.1, and Lemma 6.1.

Note that the MHT13C algorithm [T97] for the same problem has the same time bound. But in contrast to the BBB algorithm, the MHT13C algorithm requires space O(n^2).

7.
Concluding remark

Notice that a BBB schedule for J2|pij = 1|Lmax can be converted into a condensed schedule by the 13C algorithm for J2|pij = 1|Cmax. Let us consider the LLF schedule for a separated block and fix the last operations of the block in the schedule. Note that they occupy the last several time units for each type of block. Using the 13C algorithm, we can replace the residual schedule for the other operations in the block by a 13C schedule with at most four interruptions in time O(n). Hence the total number of interruptions for the block will be O(n). Since the number of blocks in the BBB schedule is O(n), we can convert it into a condensed schedule with O(n^2) interruptions in time O(n^2). The last operations were fixed; therefore, the maximum lateness remains the same. Thus, the BBB algorithm complemented by the 13C algorithm as a postprocessor turns into a condenser with the same time complexity. But the condensed schedule uses O(n) times more space than the periodized schedule. As we mentioned in Section 1, Hefetz and Adiri [HA82] extended the LRF algorithm for J2|pij = 1|Cmax to the RLRF algorithm for J2|rj, pij = 1|Cmax, applying the LRF rule to
only ready jobs. We can use the same idea to design the Ready-Largest-Lag-First (RLLF) algorithm for J2|rj, pij = 1|Lmax. It is a natural extension of the LLF algorithm, where the LLF rule should be applied to only ready jobs, i.e., successively for T = 0, 1, 2, . . . the time units [T, T + 1] on M1 and M2 should be filled by the first operations of largest-lag remainders or jobs with rj ≤ T. We can think that each time the RLLF algorithm reaches a new release date, it creates an instance of J2|pij = 1|Lmax consisting of the remainders of jobs and the jobs with the new release date, and applies the LLF algorithm to the instance up to the next release date, etc. The main observation here is that the work of the LLF algorithm between two consecutive release dates can be imitated by the work of the BBB algorithm which, by Corollary 6.1, produces only LLF schedules. Since the number of different release dates is at most n, we obtain the Multiple-Block-By-Block (MBBB) algorithm for J2|rj, pij = 1|Lmax with time complexity O(n^3) and space requirement O(n^2). However, there is a counterexample showing that the RLLF algorithm is not optimal; hence its imitator, the MBBB algorithm, is not optimal. Kravchenko [K99] was the first to make this observation. The following counterexample with six jobs is a minor reduction of Kravchenko's counterexample with seven jobs.

Example 7.1. Suppose we have the following six jobs with r1 = 0, d1 = 6, r2 = 0, d2 = 5, r3 = 0, d3 = 8, r4 = 3, d4 = 4, r5 = 6, d5 = 7, r6 = 7, d6 = 8.
[Figure: the jobs J1, . . . , J6 shown as chains of unit-time operations.]
This instance has only two RLLF schedules; for both Lmax = 1. One is shown in the figure; the other can be obtained by transposing the two rightmost operation units. Meanwhile, the optimal schedule, also shown in the figure, obviously achieves Lmax = 0.
[Figure: an RLLF schedule and an optimal schedule.]
Scheduling by periodization can be viewed as a natural outgrowth of the junction of the classic theory of nonperiodic scheduling, dealing with problems on a finite set of jobs (cf. Lawler et al. [LLRKS93]), and the intensely growing theory of periodic (or cyclic) scheduling, dealing with problems on an infinite set of jobs (cf. Serafini and Ukovich [SU89], Crama et al. [CKKL00]). Roughly, periodization represents the approach of solving nonperiodic scheduling problems by designing periodic schedules, or schedules consisting of a “small” number of “large” periodic blocks. The approach is useful for obtaining efficient approximations as well. For example, the minimum-length job-shop scheduling problem with n identical jobs, having l unit-time operations each, and m machines, which can have different speeds, can be solved in time O(l^3) by designing a periodic schedule whose length is at most 1 + (3m + l)/n times the minimum length [T86]. Kats and Levner [KL98] noticed, however, that the time bound can be brought down to O(l^2). Thus, the periodic solution can be found in polynomial time not dependent on the number of jobs, and it turns out to be asymptotically optimal in the number of jobs. On the other hand, the periodic counterpart of the problem of finding an infinite periodic minimum-flow-time schedule, as Middendorf shows [M99], is strongly NP-hard even for equal speeds.
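The approximation guarantee just cited is easy to evaluate: the bound 1 + (3m + l)/n tends to 1 as the number of jobs n grows, which is the asymptotic optimality claimed. A quick check (hypothetical helper name):

```python
def length_ratio_bound(m, l, n):
    """Upper bound, from [T86], on the periodic schedule's length
    relative to the minimum length: 1 + (3*m + l) / n."""
    return 1 + (3 * m + l) / n
```

For m = 2 machines and l = 4 operations per job, the bound is 2.0 at n = 10 jobs but already 1.001 at n = 10000.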
Acknowledgments. I would like to thank Elizabeth Hale and Judith Marcelo, who corrected the text, and Kjel Oslund for his support of the research. I am indebted to Martin Middendorf and Svetlana Kravchenko, who read the first version of the paper and made comments which helped me to avoid several omissions. Finally, I am grateful to Peter Brucker for inventing the LLF algorithm, which plays the leading part in this paper.

References

[B81] P. Brucker, Minimizing maximum lateness in two-machine unit-time job-shop, Computing 27 (1981), 367–370.
[B82] P. Brucker, A linear time algorithm to minimize maximum lateness for the two-machine, unit-time, job-shop, scheduling problem, Lect. Notes Control Inform. Sci. 38 (1982), 566–571.
[B95] P. Brucker, Scheduling algorithms, Springer, 1995.
[C76] E. G. Coffman, Jr. (ed.), Computer and job-shop scheduling theory, John Wiley, New York, 1976.
[CKKL00] Y. Crama, V. Kats, J. van de Klundert and E. Levner, Cyclic scheduling in robotic flowshops, Annals of Operations Research 96 (2000), 97–124.
[GJ79] M. R. Garey and D. S. Johnson, Computers and intractability: a guide to the theory of NP-completeness, Freeman, San Francisco, 1979.
[GJ82] M. R. Garey and D. S. Johnson, Computers and intractability: a guide to the theory of NP-completeness, Russian transl. (A. A. Fredman, ed.), Nauka, Moscow, 1982.
[HA80] N. Hefetz and I. Adiri, An efficient optimal algorithm for the two-machine, unit-time, job-shop, schedule-length, problem, Technical report, Technion, Haifa, Israel, 1980.
[HA82] N. Hefetz and I. Adiri, An efficient optimal algorithm for the two-machine, unit-time, job-shop, schedule-length, problem, Math. Oper. Res. 7 (1982), 354–360.
[KL98] V. Kats and E. Levner, Private communication.
[K94] S. A. Kravchenko, Minimizing total completion time in two-machine job shops with unit processing times, Preprint N4, Institute of Technical Cybernetics, Minsk, Belorussia, 1994 (in Russian).
[K95] S. A. Kravchenko, Optimal processing jobs in two-machine job shops with equal processing time operations, Preprint N7, Institute of Technical Cybernetics, Minsk, Belorussia, 1995 (in Russian).
[K98] S. A. Kravchenko, A polynomial time algorithm for a two-machine no-wait job-shop scheduling problem, EJOR 106 (1998), 101–107.
[K99] S. A. Kravchenko, Private communication.
[K99A] S. A. Kravchenko, Minimizing the number of late jobs for the two-machine unit-time job-shop scheduling problem, Discrete Appl. Math. 98 (1999), no. 3, 209–217.
[KSS95] W. Kubiak, S. Sethi and C. Sriskandarajah, An efficient algorithm for a job shop problem, Ann. Oper. Res. 57 (1995), 203–216.
[KT96] W. Kubiak and V. G. Timkovsky, A polynomial-time algorithm for total completion time minimization in two-machine job shop with unit-time operations, EJOR 94 (1996), 310–320.
[LLRKS93] E. L. Lawler, J. K. Lenstra, A. H. G. Rinnooy Kan, and D. B. Shmoys, Sequencing and scheduling: algorithms and complexity, in Handbook on Operations Research and Management Science, Volume 4: Logistics of Production and Inventory (S. C. Graves, A. H. G. Rinnooy Kan, and P. Zipkin, eds.), Elsevier Science Publishers B. V., Amsterdam, 1993, pp. 445–552.
[LRK79] J. K. Lenstra and A. H. G. Rinnooy Kan, Computational complexity of discrete optimization problems, Ann. Discr. Math. 4 (1979), 121–140.
[M59] R. McNaughton, Scheduling with deadlines and loss functions, Management Sci. 6 (1959), 1–12.
[M99] M. Middendorf, Unpublished manuscript.
[SU89] P. Serafini and W. Ukovich, A mathematical model for periodic scheduling problems, SIAM J. Disc. Math. 2 (1989), 550–581.
[TSS94] V. S. Tanaev, Y. N. Sotskov, and V. A. Strusevich, Scheduling theory. Multi-stage systems, Kluwer, Dordrecht, 1994.
[T85] V. G. Timkovsky, Polynomial-time algorithm for the Lenstra–Rinnooy Kan two-machine scheduling problem, Cybernetics (1985), no. 2, 109–111 (in Russian).
[T86] V. G. Timkovsky, An approximation for the cycle-shop scheduling problem, Economics and Mathematical Methods 22 (1986), no. 1, 171–174 (in Russian).
[T93] V. G. Timkovsky, The complexity of unit-time job-shop scheduling, Technical Report No. 9309, Department of Computer Science and Systems, McMaster University, Hamilton, Canada, 1993.
[T97] V. G. Timkovsky, A polynomial-time algorithm for the two-machine unit-time release-date job-shop schedule-length problem, Discr. Appl. Math. 77 (1997), 185–200.
[T98] V. G. Timkovsky, Is a unit-time job shop not easier than identical parallel machines?, Discr. Appl. Math. 85 (1998), 149–162.
Star Data Systems Inc., Commerce Court South, 30 Wellington Street West, Suite 300, Toronto, Ontario M5L 1G1, Canada E-mail address:
[email protected]