European Journal of Operational Research 51 (1991) 283-300 North-Holland


Invited Review

Mathematical programming formulations for machine scheduling: A survey

Jacek Blazewicz
Instytut Informatyki, Politechnika Poznanska, Poznan, Poland

Moshe Dror
Decision Sciences, College of Business and Public Administration, The University of Arizona, Tucson, AZ, USA, and Centre de Recherche sur les Transports, Universite de Montreal, Canada

Jan Weglarz
Instytut Informatyki, Politechnika Poznanska, Poznan, Poland

Abstract: Machine scheduling was and still is a rich and promising field for research, with applications in manufacturing, logistics, computer architecture, communications, etc. Combinatorial complexity theory has now classified the great majority of known machine scheduling problems as 'easy' or 'very hard'. However, in most cases, mathematical programming models have not accompanied the algorithmic developments for solving 'easy' scheduling problems, nor have they facilitated solutions for 'hard' problems. Nevertheless, the analysis of the mathematical programming models for some hard combinatorial problems, together with their polyhedral properties, has enabled important computational advances for such problems as the TSP. In order to assess the present status and the solution potential of mathematical programming formulations for machine scheduling, we have compiled a systematic, consistent survey of formulations. The discussion has been developed in tandem with the classification of a given problem's complexity, since 'solvability' (i.e., the status of a problem as P or NP-hard) generally cannot be easily assessed from the formulation itself. A number of excellent survey papers on machine scheduling have appeared over the years (see the reference list), but none of them has focused on mathematical formulations. This survey is the first one that attempts to compile a large number of mathematical programming formulations for scheduling into a single paper, to ease the task of model building and testing scheduling formulations. Both newcomers and experienced researchers can use it as a reference point. Ultimately, mathematical programming formulations for scheduling problems might be used as a stepping stone to computational advances for some hard problems.

Keywords: Machine scheduling, mathematical programming formulations, additional resources, computational complexity


" "A natural way to attack machine scheduling problems is to formulate them as mathematical programming models". (A Rinnooy Kan, M achine scheduling problems , Martinus Nijhoff , The Hague, 1976, p. 36). 0377-2217/91/$03.50 © 1991 - Elsevier Science Publishers B.V. (North-Holland)


1. Introduction

Let J = {J_1, ..., J_n} be a set of jobs, where each J_j (j = 1, ..., n) consists of a sequence of operations o_{hkj} (h = 1, ..., H_k; k = 1, ..., m). An operation o_{hkj} corresponds to an uninterrupted processing of task h in job J_j which has to be performed on machine M_k during a given period of time. Let M = {M_1, ..., M_m} be the set of machines. Each machine can handle at most one job at a time and a job can be processed by at most one machine at a time. We assume the processing order of a job to be known. Associated with each operation o_{hkj} (h = 1, ..., H_k; k = 1, ..., m; j = 1, ..., n) is a non-negative processing requirement p_{hkj} (in the great majority of cases p_{hkj} corresponds to the processing time), and H_k is the number of tasks of job J_j that need to be performed on machine M_k. In the case where at most one operation of J_j (j = 1, ..., n) is processed on a machine M_k (k = 1, ..., m), we drop the subscript h from o_{hkj} (o_{hkj} → o_{kj}) and from p_{hkj} (p_{hkj} → p_{kj}). In this case, if the processing order is a linear order (or a weak order) of the operations of job J_j, we can order the machines in M so that π(k) < π(i) whenever o_{π(k)j} < o_{π(i)j}, where π is a permutation of {1, ..., m} and < is the linear ordering relation defined on the set O_j = {o_{kj}}_{k=1}^{m}. In addition, with each job J_j (j = 1, ..., n) we associate three values: the release time r_j of job J_j, the due date d_j for the completion of job J_j (or the completion deadline d̄_j), and a weight w_j (a quantifiable 'importance') of job J_j. We assume that p_{hkj}, r_j, d_j, d̄_j and w_j are nonnegative integers for all h, k and j. Given a cost function f_j defined for each job as a nondecreasing real function of the completion time C_j, what is the processing order on each machine which minimizes the total cost (maximal cost, or some other operational objective criterion) for the set J? Answering this question is the task of machine scheduling. By assuming the cost functions f_j to be nondecreasing real functions of C_j, we limit the presentation to regular performance measures. We assume that the reader is familiar with the basic complexity concepts used for the classification of problems, e.g. NP, P, NP-complete, NP-hard, polynomial-time transformation, NP-hard in the strong sense, pseudopolynomial algorithm, etc. (The recommended reference for this topic is the classic book by Garey and Johnson, 1979.) These concepts are used in this paper to underscore the problem status given a mathematical programming formulation.

The objective of this paper is to present a survey of mathematical formulations for machine scheduling problems. It should not be interpreted as advocating the mathematical programming methodology as the appropriate solution-seeking approach. In fact, the mathematical programming methodology has so far been a disappointing solution tool in the machine scheduling field. Nevertheless, since the emergence of a number of successful polyhedral, cut-oriented schemes for the traveling salesman problem (TSP) and other hard combinatorial optimization problems (see Padberg and Grotschel, 1985; Padberg and Rinaldi, 1987; and Grotschel et al., 1988), this work is timely and could serve as a departure point in the development of new solution approaches.

A survey of mathematical formulations for machine scheduling problems is intrinsically tied to the complexity status of the existing solution procedures for those problems. As we shall see later, very similar mathematical formulations used to model different scheduling problems can differ drastically in their complexity status. Thus, it is pertinent to interleave the mathematical formulations presented here with an informal survey of the complexity status of the same machine scheduling problems. To facilitate the discussion of machine scheduling problems, we adopt the classification scheme introduced by Graham et al. (1979), which uses the three-field notation α|β|γ (see Appendix I) and is described (among others) in Blazewicz (1987) and Lawler (1983).

2. Single machine scheduling

Lawler (1983) provides an excellent overview of machine scheduling theory and algorithmic development. The mathematical programming formulations in this paper attempt, in part, to follow the outline of Lawler's overview paper. As such, we start with single machine scheduling and with the problem denoted by 1||L_max. The objective in this problem is to find the processing sequence for n jobs on a single machine which minimizes the maximal job lateness. (Since |J_j| = 1 ∀j in this single machine scheduling environment, we identify each job J_j with its single operation and index it simply by j.)

2.1. Minimizing the maximal lateness on a single machine: 1||L_max formulation

The 'earliest due date' (EDD) rule, sequencing the jobs according to nondecreasing due dates (Jackson, 1955), provides a simple solution to this problem. The corresponding mathematical formulation does not indicate that such a fast (O(n log n) time complexity) solution rule based on sorting is readily available, since just solving the assignment problem embedded in the formulation would require O(n^3) time. Let x_{jk} be the decision variable which takes the value 1 if job j is the k-th job processed and x_{jk} = 0 otherwise. The objective is to find the schedule that minimizes the maximal lateness L_max = max_j {L_j}. This can be formulated as follows:

Minimize Y                                                              (1)

subject to

Σ_{j=1}^{n} x_{jk} = 1   (k = 1, ..., n),                               (2)

Σ_{k=1}^{n} x_{jk} = 1   (j = 1, ..., n),                               (3)

Σ_{i=1}^{k} Σ_{j=1}^{n} p_j x_{ji} - Σ_{j=1}^{n} d_j x_{jk} ≤ Y   (k = 1, ..., n),   (4)

x_{jk} ∈ {0, 1}   (j, k = 1, ..., n).                                   (5)

The lateness of the job processed k-th in the sequence equals the left-hand side of (4). Minimizing Y (the right-hand side of (4)) is equivalent to minimizing the maximal lateness. Constraints (2) and (3), together with (5), are the very familiar assignment constraints. These constraints simply assure that each job has to be assigned to a place in a sequence of length n and each place in the sequence has to have one job assigned to it.

The generalization of the above formulation to that of minimizing f_max, where f_max = max_j {f_j(C_j)} for nondecreasing cost functions f_j, is straightforward. In order to incorporate job precedence constraints (i.e., a weak order of jobs' completion times C_j) we can add constraints of the form (6) and (7) if job J_r has to be completed before job J_q can start (J_r < J_q):

Σ_{i=1}^{k-1} x_{ri} ≥ Σ_{i=1}^{k} x_{qi}   (k = 1, ..., n),            (6)

x_{q1} = 0.                                                             (7)

In Lawler (1973), an algorithm of time complexity O(n^2) is presented which constructs the optimal sequence for the case of single machine scheduling problems with job due dates, precedence constraints and the objective of minimizing f_max (1|d_j, prec|f_max, where prec denotes a precedence relation between jobs). Generalizing the above problem by specifying a release time r_j ≥ 0 for each job j (i.e., 1|r_j|L_max), and assuming no machine idle time, requires adding to formulation (1)-(5) n constraints of the form

Σ_{i=1}^{k} Σ_{j=1}^{n} p_j x_{ji} - Σ_{j=1}^{n} (r_j + p_j) x_{jk} ≥ 0   (k = 1, ..., n).   (8)
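The contrast between the O(n^3) assignment relaxation and the O(n log n) sorting rule can be made concrete with a few lines of code. The following sketch (not part of the original survey; the job data are hypothetical) applies Jackson's EDD rule and evaluates L_max.

```python
def edd_schedule(jobs):
    """Jackson's EDD rule for 1||Lmax: sequence jobs by nondecreasing due date.

    `jobs` is a list of (processing_time, due_date) pairs; the machine is
    assumed continuously available from time 0 (no release times).
    Returns the processing order (as job indices) and the maximal lateness.
    """
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][1])  # sort by d_j
    t, l_max = 0, float("-inf")
    for j in order:
        p_j, d_j = jobs[j]
        t += p_j                      # completion time C_j of job j
        l_max = max(l_max, t - d_j)   # lateness L_j = C_j - d_j
    return order, l_max

# Example with three hypothetical jobs (p_j, d_j):
print(edd_schedule([(3, 6), (2, 4), (4, 12)]))  # EDD order: jobs 1, 0, 2
```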

If no job preemption is allowed (i.e., jobs have to be processed without interruption), then adding the job release times r_j transforms the basic problem from an easy problem to the NP-hard category of problems. According to Lawler (1983), it is an NP-hard problem even to determine whether a set of independent jobs can be completed by specified due dates. The above mathematical formulations for the problem of single machine scheduling with a job precedence relation and the objective of minimizing the maximal lateness (1|prec|L_max) on the one hand, and single machine scheduling with job release times r_j ≥ 0 and the same objective (1|r_j|L_max) on the other hand, do not provide a simple indication that the time complexity of these two problems is quite different (unless, of course, P = NP). For the first there exists an optimal algorithm of order O(n^2), while for the other only exponential-order algorithms are known.

In case preemption is permitted, i.e., the processing of any job may arbitrarily often be interrupted and resumed at a later time without penalty, then even for the problem of single machine scheduling with job precedence relations, release times, preemption and the objective of minimizing f_max (1|pmtn, prec, r_j|f_max) there exists a 'fast' O(n^2) optimal algorithm (Baker et al., 1983). Still, the mathematical formulation (1)-(7) cannot be trivially expanded to encompass this preemption possibility by relaxing the 0-1 integrality constraints on the variables x_{jk} to x_{jk} ≥ 0 and assuming that a finite number, say W < ∞, bounds the total number of processing operations required.

2.2. Minimizing the sum of weighted completion times on a single machine: 1||Σ_{j=1}^{n} w_j C_j formulation

The problem of finding the job processing sequence which, for a single machine, minimizes the weighted sum of job completion times, given a weight w_j for job j, is solved by ordering the jobs according to nondecreasing ratios ρ_j = p_j/w_j (the 'ratio rule'), which is a generalization of the 'shortest processing time' (SPT) rule (Smith, 1956).

In case all weights are equal, w_j = 1 (i.e., 1||Σ_{j=1}^{n} C_j), we have an elegant mathematical formulation of a more general problem version due to Bruno et al. (1974) (see also Bruno, 1976; Blazewicz, 1987, pp. 34-35) in the form of a transportation problem, and thus efficiently solvable (O(n^3)). We present (maybe out of order) the more general problem formulation for a number of unrelated machines in parallel, from which the single machine case can be easily deduced. Denote by R unrelated processors and consider the problem R||Σ_{j=1}^{n} C_j. Let x_{ijk} = 1 if job J_j is scheduled on processor i in the k-th position from the end, and 0 otherwise. Then the 'cost' associated with that job assignment is equal to k × p_{ij}. This leads to the following transportation problem:

Minimize Σ_{i=1}^{m} Σ_{j=1}^{n} Σ_{k=1}^{n} k p_{ij} x_{ijk}           (9)

subject to

Σ_{i=1}^{m} Σ_{k=1}^{n} x_{ijk} = 1   (j = 1, ..., n),                  (10)

Σ_{j=1}^{n} x_{ijk} ≤ 1   (i = 1, ..., m; k = 1, ..., n),               (11)

x_{ijk} ∈ {0, 1}   (i = 1, ..., m; j = 1, ..., n; k = 1, ..., n).       (12)

For the weighted problem 1||Σ_{j=1}^{n} w_j C_j a different formulation is needed. Let x_{ij} = 1 if job i is the immediate predecessor of job j in the processing sequence and 0 otherwise, and let y_{ij} = 1 if job i precedes job j in the sequence and 0 otherwise, where two fictitious jobs 0 and n+1 with p_0 = p_{n+1} = 0 mark the beginning and the end of the sequence. The completion time of job j is then Σ_{i=0}^{n} p_i y_{ij} + p_j, and the problem can be stated as:

Minimize Σ_{j=1}^{n} w_j (Σ_{i=0}^{n} p_i y_{ij} + p_j)                 (13)

subject to

Σ_{i=0}^{n} x_{ij} = 1   (j = 1, ..., n+1),                             (14)

Σ_{j=1}^{n+1} x_{ij} = 1   (i = 0, ..., n),                             (15)

Σ_{i=1}^{n} y_{iq} - Σ_{i=1}^{n} y_{ik} + (n+2) x_{qk} ≤ n+1   (q, k = 1, ..., n),   (16)

y_{ij} ∈ {0, 1}   (i, j = 1, ..., n),                                   (17)

x_{ij} ∈ {0, 1}   (i, j = 1, ..., n).                                   (18)

This formulation is based on the Miller et al. (1960) formulation for the traveling salesman problem. An optimal solution to this problem can be found in O(n log n) time complexity. In order to expand the formulation (13)-(18) to the single machine scheduling problem with job release times (and the same objective), 1|r_j|Σ_{j=1}^{n} w_j C_j, which is an NP-hard problem (NP-hard also with all w_j = 1; Lenstra, 1977), we need only to add n constraints of the form

Σ_{i=0}^{n} p_i y_{ij} - r_j ≥ 0   (j = 1, ..., n).
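Since (9)-(12) assigns the n jobs to the m·n (machine, position-from-the-end) slots with cost k·p_ij, any linear assignment routine solves it. The sketch below is only an illustration under that reading, using scipy's assignment solver on a made-up instance.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def min_total_completion_time(p):
    """Solve R||sum C_j via the positional assignment (9)-(12).

    p[i][j] is the processing time of job j on machine i (m x n array).
    Slot (i, k) means "k-th position from the end on machine i"; putting
    job j there contributes (k+1)*p[i][j] to the total completion time.
    """
    p = np.asarray(p, dtype=float)
    m, n = p.shape
    cost = np.empty((n, m * n))          # cost[j, i*n + k] = (k+1) * p[i, j]
    for i in range(m):
        for k in range(n):
            cost[:, i * n + k] = (k + 1) * p[i, :]
    rows, cols = linear_sum_assignment(cost)   # optimal job -> slot matching
    total = cost[rows, cols].sum()
    slots = {j: divmod(c, n) for j, c in zip(rows, cols)}  # job -> (machine, k)
    return total, slots

# Two hypothetical machines, three jobs:
print(min_total_completion_time([[2, 4, 3], [3, 1, 5]]))
```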

2.3. Single machine scheduling with release times and deadlines, preemption allowed: 1|pmtn, r_j, d_j|Σ_{j=1}^{n} C_j formulation

This problem is of special interest since its complexity status was open until recently (Lawler, 1983); it was proven NP-hard in the ordinary sense by Du and Leung (1989). In this section we present some results for this problem described in detail in Dror and Stern (1988). Without loss of generality assume that r_j, d_j, p_j (the release times, deadlines and processing times) are integer valued for j = 1, ..., n. In addition, denote by I_j the set of contiguous unit time periods for job j as follows: I_j = {t: t = r_j + 1, r_j + 2, ..., d_j}.

Define x_{jt} as binary decision variables for all pairs (j, t), t ∈ I_j. In Dror and Stern (1988) it is proven that, given a feasible job schedule, there exists an optimal job schedule in which for each job the processing start time corresponds to either a release time or a completion time of another job. This property enables us to formulate the corresponding mathematical model assuming processing blocks of no less than one unit increment for each job. The decision variable x_{jt} = 1 if job j is processed for one unit during period t and is 0 otherwise. Assume without loss of generality no machine idle time and denote T = Σ_{j=1}^{n} p_j. The objective of minimizing the total completion time is

" Minimize L CJ

(26)

J=1

subject to

L X Jt = PJ , 1e 11

J = 1,..., n ,

(27)

(28)

J=1 j

= 1, ..., n ;

t E IJ ,

= 0 or 1 j = 1,..., n ; t e IJ ,

Cj ;;:. O, j = l , ..., n.

(29) (30) (31)

The size of this formulation is data dependent through T = L:}= PJ1
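As an illustration of how directly (26)-(31) can be stated, the sketch below builds the model for a toy instance with the PuLP modelling library (the choice of PuLP and the instance data are assumptions of this example, not part of the survey).

```python
import pulp

# Hypothetical instance: (r_j, d_j, p_j) for each job j.
jobs = {0: (0, 4, 2), 1: (1, 5, 2), 2: (0, 5, 1)}
I = {j: range(r + 1, d + 1) for j, (r, d, p) in jobs.items()}   # unit periods I_j

prob = pulp.LpProblem("one_machine_pmtn_flowtime", pulp.LpMinimize)
x = {(j, t): pulp.LpVariable(f"x_{j}_{t}", cat="Binary") for j in jobs for t in I[j]}
C = {j: pulp.LpVariable(f"C_{j}", lowBound=0) for j in jobs}

prob += pulp.lpSum(C[j] for j in jobs)                              # (26)
for j, (r, d, p) in jobs.items():
    prob += pulp.lpSum(x[j, t] for t in I[j]) == p                  # (27)
    for t in I[j]:
        prob += C[j] >= t * x[j, t]                                 # (29)
T = sum(p for (_, _, p) in jobs.values())
for t in range(1, T + 1):
    prob += pulp.lpSum(x[j, t] for j in jobs if t in I[j]) <= 1     # (28)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({j: pulp.value(C[j]) for j in jobs}, pulp.value(prob.objective))
```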

To test whether for a given problem instance there exists a feasible solution requires O(|V|^3) time (Horn, 1974), where |V| denotes the number of nodes in the corresponding bipartite graph.

2.4. Problems with item-flow and batch-flow distinctions: 1|item-flow|Σ_{j=1}^{n} w_j C_j and 1|batch-flow|Σ_{j=1}^{n} w_j C_j formulations

In this section we restate the mathematical programming models presented in Dobson et al. (1987) for problems of scheduling the production of different part types on one machine. The objective is that of minimizing the weighted total flowtime of parts, which is equivalent to minimizing the sum of weighted completion times.

First, there is the distinction between item-flow, when a part can be delivered immediately after its processing is completed, and batch-flow, when a part waits at the machine until the rest of the parts in its batch are completed. In order to restate the mathematical formulations we need to define the processing batch, which is the number of parts processed on a machine with a single setup, and the transfer batch, which is the number of parts that are removed from the machine and transferred to inventory or to the next machine. The item-flow problem corresponds to transfer batches of unit size with arbitrarily sized processing batches, and the batch-flow problem corresponds to the case where transfer and processing batches are the same.

Some notation first. Let E be the total number of parts and B the number of different part types, which either have different processing times or require a new setup of the machine. Let d_j be the number of parts of type j waiting to be processed, s_j the setup time prior to starting the processing of a type j part, p_j the processing time of one part of type j, and w_j the holding cost for one unit of type j held for one time unit. The decision

variables are: x_{ij} = 1 if the i-th part processed is of type j and 0 otherwise; y_{ij} = 1 if the i-th part processed will be of type j and will require a setup, and 0 otherwise. The expression for the completion time of the i-th part is:

C_i = Σ_{k=1}^{i} Σ_{j=1}^{B} (s_j y_{kj} + p_j x_{kj}).                (32)

The objective function is to minimize the sum of completion times weighted by the holding costs: Σ_{i=1}^{E} Σ_{j=1}^{B} w_j C_i x_{ij}. The constraints of the item-flow problem can be stated as:

x_{ij} - x_{(i-1)j} - y_{ij} ≤ 0   ∀i, j,                               (33)

Σ_{j=1}^{B} x_{ij} = 1   (i = 1, ..., E),                               (34)

Σ_{i=1}^{E} x_{ij} = d_j   (j = 1, ..., B),                             (35)

x_{ij}, y_{ij} ∈ {0, 1}   ∀i, j.                                        (36)
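To see how (32) accumulates setup and processing times, the short sketch below (plain Python, hypothetical data) evaluates the completion times and the item-flow objective for a fixed part sequence; a setup is charged whenever the part type changes, which is what the y_{ij} variables encode.

```python
def weighted_flowtime(sequence, s, p, w):
    """Evaluate (32) and the item-flow objective for a fixed part sequence.

    sequence : list of part types (0..B-1), one entry per part processed
    s, p, w  : per-type setup time, unit processing time, holding cost
    """
    t, total, prev = 0.0, 0.0, None
    for j in sequence:
        if j != prev:          # y_{ij} = 1: new type, pay the setup s_j
            t += s[j]
        t += p[j]              # completion time C_i of the i-th part
        total += w[j] * t      # contribution w_j * C_i * x_{ij}
        prev = j
    return total

# Hypothetical instance: two part types, parts sequenced as 0,0,1,1,1.
print(weighted_flowtime([0, 0, 1, 1, 1], s=[2, 1], p=[1, 2], w=[1, 3]))
```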

For the batch-flow model the decision variables are modified as follows: x_{ij} = 1 if the i-th batch of like parts is of type j, and 0 otherwise; q_{ij} is the quantity of type j parts in the i-th batch processed. The objective function for the batch-flow model is:

Minimize Σ_{i=1}^{E} Σ_{j=1}^{B} w_j C_i q_{ij},                        (37)

where

C_i = Σ_{k=1}^{i} Σ_{j=1}^{B} (s_j x_{kj} + p_j q_{kj})   (i = 1, ..., E).   (38)

The constraints for this model are:

q_{ij} ≤ d_j x_{ij}   ∀i, j,                                            (39)

Σ_{j=1}^{B} x_{ij} = 1   (i = 1, ..., E),                               (40)

Σ_{i=1}^{E} q_{ij} = d_j   (j = 1, ..., B),                             (41)

x_{ij} ∈ {0, 1}, q_{ij} ≥ 0   ∀i, j.                                    (42)

Note that the q_{ij} are continuous variables despite the fact that the parts are discrete. Thus, a solution to this model corresponds to a lower bound on the optimal solution.

3. Parallel machine nonpreemptive scheduling

Suppose there are m parallel machines available to do the processing. We first examine the case where |J_j| = 1, ∀j, and each job can be processed (worked on) by at most one machine at a time, with each machine processing at most one job at a time. Machine i processes job j with speed s_{ij}. Thus, if only machine i processes job j, it requires a total amount of time p_j/s_{ij} for its processing. We divide the parallel machines into three types:
(P) Identical machines: All s_{ij} are equal. In this case we assume that for all i, j: s_{ij} = 1.
(Q) Uniform machines: s_{ij} = s_{ik} for all i, j, k. I.e., each machine i performs all jobs at the same speed s_i. Assume the machines are ordered such that s_1 ≥ s_2 ≥ ... ≥ s_m.
(R) Unrelated machines: There is no particular relationship between the s_{ij} values.
In order to underline at the outset the difficulty of parallel machine problems (with nonpreemptive jobs), we note that already the two identical parallel machine scheduling problem with the objective of minimizing the maximal completion time (makespan), P2||C_max, is NP-hard in the ordinary sense.

3.1. Nonpreemptive scheduling on identical parallel machines: makespan (P||C_max) and minimization of the sum of completion times (P||Σ_{j=1}^{n} C_j) formulations

First, the problem of minimizing the makespan, C_max, given m identical machines. For this problem let x_{jk}^{i} = 1 if job j is the k-th job processed on machine i, and 0 otherwise.

Minimize Y                                                              (43)

subject to

Σ_{k=1}^{n} Σ_{i=1}^{m} x_{jk}^{i} = 1   (j = 1, ..., n),               (44)

Σ_{j=1}^{n} x_{jk}^{i} ≤ 1   (k = 1, ..., n; i = 1, ..., m),            (45)

Σ_{k=1}^{n} Σ_{j=1}^{n} p_j x_{jk}^{i} ≤ Y   (i = 1, ..., m),           (46)

x_{jk}^{i} ∈ {0, 1}   (j, k = 1, ..., n; i = 1, ..., m).                (47)
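Although P||C_max is NP-hard, the positional model (43)-(47) is small enough to hand to a MIP solver for toy instances. The following sketch (again using PuLP purely as an illustration of the formulation above, with hypothetical data) does so.

```python
import pulp

p = [4, 3, 3, 2, 2]            # hypothetical processing times p_j
n, m = len(p), 2               # n jobs, m identical machines

prob = pulp.LpProblem("P_Cmax", pulp.LpMinimize)
Y = pulp.LpVariable("Y", lowBound=0)
x = {(j, k, i): pulp.LpVariable(f"x_{j}_{k}_{i}", cat="Binary")
     for j in range(n) for k in range(n) for i in range(m)}

prob += Y                                                          # (43)
for j in range(n):                                                 # (44)
    prob += pulp.lpSum(x[j, k, i] for k in range(n) for i in range(m)) == 1
for k in range(n):                                                 # (45)
    for i in range(m):
        prob += pulp.lpSum(x[j, k, i] for j in range(n)) <= 1
for i in range(m):                                                 # (46)
    prob += pulp.lpSum(p[j] * x[j, k, i]
                       for k in range(n) for j in range(n)) <= Y

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("makespan:", pulp.value(Y))   # 7 for this instance
```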

This problem is NP-hard for m ≥ 2. A generalization of this makespan complexity result was presented in Dror et al. (1987). In that paper the authors note that, given a job processing rate function dependent on the number of jobs simultaneously processed in the system (s = s(X), where X = 1, 2, ..., m is the number of jobs processed at the same time), the complexity of the problem depends on s(X). For s(X) ≤ R/X, for any positive constant R < ∞, the problem is trivial, and for s(X) > R/X it is an NP-hard problem.

In order to formulate the same problem but with the objective of minimizing the sum of the completion times (i.e. P||Σ_{j=1}^{n} C_j) we need to express a job's completion time. Denote C_k^i = Σ_{q=1}^{k} Σ_{j=1}^{n} p_j x_{jq}^{i}; this is the completion time of the job scheduled k-th on machine i. Thus, the objective

Minimize Σ_{k=1}^{n} Σ_{i=1}^{m} Σ_{q=1}^{k} Σ_{j=1}^{n} p_j x_{jq}^{i}   (48)

together with the constraints (44), (45) and (47) constitutes the formulation of that problem. This problem can be solved in O(n log n) time complexity.

To transform the above formulations to encompass the problem of job scheduling on uniform machines, Q||C_max and Q||Σ_{j=1}^{n} C_j, all one has to do is to divide p_j by s_i in (46) and in (48). Needless to note, the first of these two problems is NP-hard. The second problem can be solved in O(n log n) time complexity (Horowitz and Sahni, 1976). To extend those formulations to unrelated machines, R||C_max or R||Σ_{j=1}^{n} C_j, we need only to substitute p_j with p_{ij}. The second of these problems can be solved in polynomial time using the algorithm in Bruno (1976), while the first is obviously NP-hard.

The formulation (44), (45), (47), with the objective function (43), cannot be trivially modified to model the very similar identical machine scheduling problem with the objective of minimizing the weighted sum of completion times, P||Σ_{j=1}^{n} w_j C_j. Minimizing the weighted total completion time in nonpreemptive scheduling on identical parallel processors is NP-hard. It also requires a new formulation in which the completion time of job j is expressed explicitly. Let x_{ijk} = 1 if job i is scheduled immediately before job j on machine k, and 0 otherwise. Similarly, let y_{ijk} = 1 if job i is scheduled before job j on machine k, and 0 otherwise. In addition, add two fictitious jobs 0 and n+1 with p_0 = p_{n+1} = 0. Then the problem P||Σ_{j=1}^{n} w_j C_j can be expressed as:

Minimize Σ_{j=1}^{n} w_j (Σ_{k=1}^{m} Σ_{i=0}^{n} p_i y_{ijk} + p_j)    (49)

subject to

Σ_{j=1}^{n+1} x_{0jk} ≤ 1   (k = 1, ..., m),                            (50)

Σ_{i=0}^{n} Σ_{k=1}^{m} x_{ijk} = 1   (j = 1, ..., n+1),                (51)

y_{ijk} ≥ x_{ijk}   (i = 0, ..., n; j = 1, ..., n+1; k = 1, ..., m),    (52)

x_{ijk}, y_{ijk} ∈ {0, 1}   (i = 0, ..., n; j = 1, ..., n+1; k = 1, ..., m).   (53)

Clearly, formulations (49)-(53) can be easily modified for the problem without weights on completion times (P||Σ_{j=1}^{n} C_j); just drop the w_j's from (49). The converse does not hold, a fact which might, in some sense, be intuitively expected based on the complexity arguments for the two problems. The formulations (43)-(53) can be modified in a straightforward fashion for similar problems with unit-time jobs, without and with precedence constraints.

An interesting version of the identical parallel processor makespan problem arises when a time-sharing multiplex communication bus collects data from periodically producing sensors (see Schweitzer et al., 1988). The problem instance can be described as follows: a finite set of message types I and, for each i ∈ I, a pair of numbers p(i), d(i); p(i) is an integer (the period of i) and d(i) is a positive real (the duration of i). Set p = the least common multiple of p(1), p(2), ..., p(|I|), and consider the set of time periods 1, 2, ..., p. The objective, when assigning the messages to their corresponding time periods, is the minimization of the maximal total load of messages assigned to any time period. (Because of the periodicity of the messages, it suffices to consider the time periods from 1 to p.) The mathematical formulation of this problem (called the Periodic Loading Problem (PLP) in Schweitzer et al., 1988) is

Minimize Y                                                              (54)

subject to

Σ_{j=1}^{p(i)} x_{ij} = 1   (1 ≤ i ≤ |I|),                              (55)

Σ_{i ∈ I} Σ_{j=1}^{p(i)} d(i) x_{ij} a_{ijk} ≤ Y   (1 ≤ k ≤ p),         (56)

x_{ij} ∈ {0, 1}   ∀i, j,                                                (57)

where x_{ij} = 1 if message i is assigned to time period j and zero otherwise, and a_{ijk} = 1 if (k - j) = 0 (mod p(i)) and 0 otherwise. This Periodic Loading Problem is NP-hard by reduction from P||C_max, but simple performance-guarantee heuristics exist for it.

3.2. Parallel uniform processors with unit standard processing times: Q|p_j = 1|γ formulation (γ ∈ {C_max, Σ_{j=1}^{n} C_j, L_max})

Problems with uniform processors and unit standard processing times may be formulated as a special case of a transportation problem (Graham et al., 1979). Let x_{ijk} = 1 if job j is processed on processor i in the k-th position (from the beginning) and 0 otherwise. Then, the formulation is as follows:

Minimize z                                                              (58)

subject to

Σ_{i=1}^{m} Σ_{k=1}^{n} x_{ijk} = 1   (j = 1, ..., n),                  (59)

Σ_{j=1}^{n} x_{ijk} ≤ 1   (i = 1, ..., m; k = 1, ..., n),               (60)

x_{ijk} ∈ {0, 1}   (i = 1, ..., m; j = 1, ..., n; k = 1, ..., n),       (61)

where

z = max_{i,j,k} {(k/s_i) x_{ijk}},   if γ = C_max,                      (62)

z = Σ_{i=1}^{m} Σ_{j=1}^{n} Σ_{k=1}^{n} (k/s_i) x_{ijk},   if γ = Σ_{j=1}^{n} C_j,   (63)

z = max_{i,j,k} {(k/s_i - d_j) x_{ijk}},   if γ = L_max.                (64)

This special structure of the mathematical programming formulation allows one to solve the problem Q|p_j = 1|γ in O(n^3) time complexity.
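For γ = Σ_{j=1}^{n} C_j, (58)-(61) with (63) is a pure assignment problem over the (machine, position) slots with cost k/s_i, so an O(n^3) Hungarian-method routine solves it directly; the sketch below (illustrative only, with made-up speeds) does exactly that. The min-max criteria (62) and (64) would instead require a bottleneck-assignment variant.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def q_unit_total_completion(s, n):
    """Q|p_j=1|sum C_j via the transportation model (58)-(61) with costs (63).

    s: list of machine speeds s_i; n: number of unit-time jobs.
    Slot (i, k) = k-th position (from the beginning) on machine i, with
    completion time (k+1)/s_i for the job placed there.
    """
    m = len(s)
    cost = np.array([[(k + 1) / s[i] for i in range(m) for k in range(n)]
                     for _ in range(n)])          # identical rows: unit jobs
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

print(q_unit_total_completion(s=[2.0, 1.0], n=4))  # hypothetical: 2 machines, 4 jobs
```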

4. Parallel machine preemptive scheduling

4.1. Makespan scheduling problem: P|pmtn|C_max formulations

The set J of all the jobs will be called the main set. Number from 1 to K the processor-feasible sets, which include the main set and the subsets of the main set with cardinality no greater than m, the number of machines. Let Q_j denote the set of indices of processor-feasible sets in which job J_j may be performed, and let x_i denote the duration of processor-feasible set i. The linear programming (LP) problem may be formulated in the following way (Weglarz et al., 1977):

Minimize C_max = Σ_{i=1}^{K} x_i                                        (65)

subject to

Σ_{i ∈ Q_j} x_i ≥ p_j   (j = 1, ..., n),                                (66)

x_i ≥ 0   (i = 1, ..., K).                                              (67)

The number of variables in the above LP problem depends polynomially on the input length when the number of processors m is fixed. We may now use Khachiyan's (or Karmarkar's) procedure (Khachiyan, 1979; Karmarkar, 1984), which solves an LP problem in time polynomial in the number of variables and constraints. Hence, we may conclude that the above procedure solves the parallel machine makespan scheduling problem with preemption (Pm|pmtn|C_max) in polynomial time. The above approach is called a one-phase method. (Another LP formulation is presented in the next section.)
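Once the processor-feasible sets are enumerated, (65)-(67) is an ordinary LP. The sketch below (illustrative only; it enumerates the feasible sets by brute force, so it is suitable only for small m and n) solves it with scipy.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def pmtn_makespan(p, m):
    """Solve P|pmtn|Cmax via the processor-feasible-set LP (65)-(67).

    Enumerates every nonempty job subset of cardinality <= m (the
    processor-feasible sets), so this brute-force sketch only suits
    small instances.
    """
    n = len(p)
    feasible = [S for r in range(1, min(m, n) + 1)
                for S in combinations(range(n), r)]
    K = len(feasible)
    c = np.ones(K)                       # objective (65): sum of durations x_i
    A = np.zeros((n, K))                 # constraints (66), written as -A x <= -p
    for i, S in enumerate(feasible):
        for j in S:
            A[j, i] = 1.0
    res = linprog(c, A_ub=-A, b_ub=-np.asarray(p, float), bounds=(0, None))
    return res.fun

print(pmtn_makespan([5, 4, 3, 2], m=2))   # expected 7.0 = max(14/2, 5)
```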

4.2. Uniform and unrelated machine makespan scheduling: Q|pmtn|C_max and R|pmtn|C_max formulations

We note that the one-phase method (65)-(67) may also be generalized to cover the case of unrelated machines, R|pmtn|C_max (and thus uniform machines, Q|pmtn|C_max). In order to do so, introduce a dummy job J_0 representing machine idle time. Define S to be the set of all processor-feasible m-tuples k = (k_1, ..., k_m) of job indices; each k is characterized by k_i ∈ {0, 1, ..., n}, i ∈ {1, ..., m}, and each j ∈ {1, ..., n} occurs at most once. To each k ∈ S, associate a variable x_k representing the time during which J_{k_1}, ..., J_{k_m} are simultaneously executed on M_1, ..., M_m, respectively. The problem is then:

Minimize Σ_{k ∈ S} x_k                                                  (68)

subject to

Σ_{i=1}^{m} Σ_{k ∈ S: k_i = j} x_k / p_{ij} ≥ 1   (j = 1, ..., n),      (69)

x_k ≥ 0   (k ∈ S).                                                      (70)

This LP problem has O(n^m) variables. For a fixed number of machines, its size is bounded by a polynomial in the size of the scheduling problem. Therefore the existence of a polynomial algorithm for LP implies that unrelated machine makespan scheduling problems with preemption (Rm|pmtn|C_max) are solvable in polynomial time.

Another LP formulation, the two-phase method, was introduced in Lawler and Labetoulle (1978), Blazewicz et al. (1976), and de Werra (1984, 1988). It is the following: Let x_{ij} ∈ [0, 1] denote the part of job J_j processed on M_i. The LP formulation is as follows:

Minimize C_max                                                          (71)

subject to

C_max - Σ_{j=1}^{n} p_{ij} x_{ij} ≥ 0   (i = 1, ..., m),                (72)

C_max - Σ_{i=1}^{m} p_{ij} x_{ij} ≥ 0   (j = 1, ..., n),                (73)

Σ_{i=1}^{m} x_{ij} = 1   (j = 1, ..., n).                               (74)

Solving the above problem, we get C_max = C*_max and optimal values x*_{ij}. However, we do not know the schedule, i.e., the assignment of these parts to processors in time. It may be constructed in the following way. Let t*_{ij} = p_{ij} x*_{ij} (i = 1, ..., m; j = 1, ..., n) and let T = [t*_{ij}] be an m × n matrix. The j-th column of T, corresponding to job J_j, will be called critical if Σ_{i=1}^{m} t*_{ij} = C*_max. Denote by Y an m × m diagonal matrix whose element Y_{kk} is the total idle time on processor k, i.e., Y_{kk} = C*_max - Σ_{j=1}^{n} t*_{kj}. Columns of Y correspond to dummy jobs. Let V = [T, Y] be an m × (n + m) matrix. Now a set U containing m positive elements of matrix V can be defined as having exactly one element from each critical column, at most one element from other columns, and exactly one element from each row. We see that U corresponds to a task set which may be processed in parallel in an optimal schedule. Thus, it may be used to construct a partial schedule of length δ > 0. An optimal schedule is then produced as the union of the partial schedules. This procedure is summarized in the Algorithm below.

Algorithm.
1. Find set U.
2. Calculate the length δ of the partial schedule:

δ = v_min,   if v_min ≤ C*_max - v_max,
δ = C*_max - v_max,   otherwise,                                        (75)

where

v_min = min {v_{ij}: v_{ij} ∈ U},                                       (76a)

v_max = max_j {Σ_{i=1}^{m} v_{ij}: v_{ij} ∉ U, ∀i}.                     (76b)

3. Decrease C*_max and the v_{ij} ∈ U by δ. If C*_max = 0, an optimal schedule has been constructed. Otherwise go to Step 1.

Now we only need an algorithm that finds set U for a given matrix V. One of the possible algorithms is based on the network flow approach. A corresponding network has m nodes (m rows of V), one for each processor, and n + m nodes corresponding to jobs (columns of V). A node i from the first group is connected by an arc to a node j of the second group if and only if v_{ij} > 0. Arc flows are constrained by b from below and by c = 1 from above. The value of b is equal to 1 for arcs joining the source with processor nodes and critical job nodes with the sink, and to 0 for the other arcs. We see that finding a feasible flow in this network is equivalent to finding set U. (Compare Figure 4.7 in Blazewicz, 1987.) The overall complexity of the above approach is bounded from above by a polynomial in the binary input length. This is because the LP problem may be solved in polynomial time; the loop in the algorithm is repeated at most m × n times, and solving the network flow problem requires O(z^3) time, where z is the number of network nodes. The two-phase method is in fact a combination of the

LP formulation and an iterative network flow approach.

4.3. Scheduling problem with preemption, release times and due dates: P|pmtn, r_j, d_j|- formulation

This problem is one of testing feasibility, i.e., the existence of a schedule with no late jobs. It has been formulated as a network flow problem (Horn, 1974). Let the ready times and the deadlines be ordered on a list in such a way that e_0 < e_1 < ... < e_k, k ≤ 2n. A corresponding network has two sets of nodes (cf. Figure 4.10 in Blazewicz, 1987). The first set corresponds to time intervals in a schedule, i.e. node w_i, i = 1, ..., k, corresponds to the interval [e_{i-1}, e_i]. The second set corresponds to the task set. The capacity of the arc joining the source of the network with node w_i is equal to m(e_i - e_{i-1}) and thus corresponds to the total processing capacity of the m processors in this interval. If job J_j can be processed in interval [e_{i-1}, e_i] (because of its ready time and deadline), then w_i is joined with J_j by an arc with capacity e_i - e_{i-1}. Node J_j is joined with the sink of the network by an arc with capacity equal to p_j and a lower bound also equal to p_j. The mathematical programming formulation of the above scheduling problem is then a standard network flow formulation (see also Section 2.3 for the single machine formulation). We see that finding a feasible flow pattern corresponds to constructing a feasible schedule, and this test can be computed in O(n^3) time. A binary search can be conducted on the optimal value of L_max, with each trial value inducing deadlines which are checked for feasibility by means of the network computation. This procedure can be implemented to solve the parallel machine scheduling problem with preemption, release times, and the objective of minimizing the maximal job lateness (P|pmtn, r_j|L_max) in O(n^3 min{n^2, log n + log max_j{p_j}}) time complexity (Labetoulle et al., 1979). Note that the above approach can be generalized to cover the case of uniform machines, Q|pmtn, r_j, d_j|- (Federgruen and Groenevelt, 1986). As far as unrelated processors are concerned, the minimization of maximal lateness (R|pmtn|L_max) can be solved by an LP formulation similar to (71)-(74), but now x_{ij}^k denotes the amount of job J_j which is processed on M_i in the time interval [d_{k-1} + L_max, d_k + L_max] (due dates being ordered: d_1 ≤ d_2 ≤ ... ≤ d_n). Moreover, the algorithm is now applied to each matrix T^k = [t_{ij}^k], k = 1, ..., n.
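Horn's feasibility test described above is easy to prototype with an off-the-shelf maximum-flow routine: a feasible preemptive schedule exists iff the maximum flow saturates all job arcs, i.e. equals Σ_j p_j. The sketch below is an illustration only (it uses networkx and models the lower-bounded job arcs simply as capacities that must be saturated).

```python
import networkx as nx

def horn_feasible(jobs, m):
    """Feasibility test for P|pmtn, r_j, d_j|- via Horn's flow network.

    jobs: list of (r_j, d_j, p_j); m: number of identical machines.
    Returns True iff a preemptive schedule meeting all deadlines exists.
    """
    events = sorted({e for r, d, _ in jobs for e in (r, d)})
    G = nx.DiGraph()
    for i in range(1, len(events)):
        length = events[i] - events[i - 1]
        G.add_edge("s", ("w", i), capacity=m * length)      # interval capacity
        for j, (r, d, p) in enumerate(jobs):
            if r <= events[i - 1] and events[i] <= d:        # job available here
                G.add_edge(("w", i), ("J", j), capacity=length)
    for j, (_, _, p) in enumerate(jobs):
        G.add_edge(("J", j), "t", capacity=p)                # must receive p_j
    flow_value, _ = nx.maximum_flow(G, "s", "t")
    return flow_value == sum(p for _, _, p in jobs)

# Hypothetical instance: 3 jobs on 2 machines.
print(horn_feasible([(0, 3, 2), (0, 2, 2), (1, 4, 3)], m=2))  # True
```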

4.4. Resource and precedence constrained scheduling

Note that the model presented in Section 4.1 can be generalized to cover the case of resource constrained scheduling. The difference is the existence of s types of additional resources R_1, R_2, ..., R_s, available in m_1, m_2, ..., m_s units, respectively. Each job J_j requires for its processing one processor and additional resources specified by the resource requirements vector

r(J_j) = [r_1(J_j), r_2(J_j), ..., r_s(J_j)],

where r_l(J_j) (0 ≤ r_l(J_j) ≤ m_l, l = 1, ..., s) denotes the number of units of R_l required for the processing of job J_j. We assume here that all required resources are granted to a job before its processing begins or resumes, and they are returned by the job after its completion or in the case of its preemption. The generalizations of the two-phase method covering cases of resource requirements are described in Slowinski (1980), Blazewicz et al. (Chapter 3, 1986), de Werra (1984, 1988, 1989), Cochand et al. (1985) and Slowinski (1987).


The approaches from Section 4.1 can also be generalized to cover the restricted case of precedence constraints, i.e. the so called uniconnected activity network (p. 26, Blazewicz, 1987).

5. Job shop scheduling

5.1. Job shop scheduling: J||C_max formulation

We assume that in general the number of required operations for each job is at least one. J denotes dedicated processors, a job shop system. There is no restriction on the order in which a machine can work on different jobs, but there is a precedence relation on the operations of a job processed on the different machines (i.e., the operations {o_{hkj}}, h = 1, ..., H_k, k = 1, ..., m, are ordered according to some linear (or weak) order for each j). Again, if no more than one operation for each job is processed on any given machine, the three-index scheme is reduced to a two-index scheme. Fisher et al. (1983) present two mathematical formulations for this problem together with a solution strategy using a surrogate duality relaxation. The first formulation follows that of Manne (1960) and the second has been introduced in Fisher (1973). Let x_{hkj} denote the starting time of operation o_{hkj} in a given schedule, and let x = (x_{hkj}) be the vector of appropriate dimension of the starting times. Denote by X(C_max) the set of vectors x such that x_{hkj} + p_{hkj} - x_{h'k'j} ≤ 0 whenever o_{hkj} precedes o_{h'k'j} in the processing order of job j, and every operation completes by C_max. Let T be a given upper bound on the length of an optimal schedule, and for each ordered pair (o_{hkj}, o_{h'kq}) define y_{hkj}^{h'q} to be a 0-1 variable which takes the value 0 if o_{hkj} precedes o_{h'kq} and 1 otherwise. The capacity condition that a machine processes no more than one task at a time can then be formulated, ∀h, h', k, j, q, as

x_{hkj} + p_{hkj} - x_{h'kq} ≤ T y_{hkj}^{h'q},

x_{h'kq} + p_{h'kq} - x_{hkj} ≤ T (1 - y_{hkj}^{h'q}).

C*_max = min{C_max > 0: v/C_max ∈ conv U},                              (91)

where U is defined as follows:

u ∈ U if u_j = f_j(r_j), j = 1, 2, ..., n, for some r ∈ R,              (92)

R being the set of feasible resource allocations, i.e. R = {r: Σ_{j=1}^{n} r_j ≤ N}, the f_j(·) being the functions defined in (88). In other words, C*_max is always determined by the intersection point of the straight line (93) and the convex hull of the set U. It is very important to notice that C*_max(v) is always a convex function. Since (88) defines a univalent mapping, U can also be treated as a set of feasible resource allocations, and thus the intersection point described above defines a solution of our problem. Of course, the shape of U (and conv U) depends on the functions f_j(·), and thus we have a geometrical interpretation of optimal resource allocations for different models (88). In particular, if the f_j(·), j = 1, 2, ..., n, are such that conv U = S, where S is the simplex spanned by the points (0, ..., 0, f_j(N), 0, ..., 0), f_j(N) appearing in the j-th position, j = 1, 2, ..., n, then the described intersection point is never a feasible resource allocation. But the same value C*_max is obtained using feasible resource allocations whose convex combination gives this point. However, this means in our case that the consecutive processing of individual jobs using resource amount N is optimal. Of course, the existence of machines does not change this result, since only one from among them is needed to process a set of jobs J optimally. The same result has been proved in a different way in Dror et al. (1987). When the intersection point is itself a feasible resource allocation, C*_max is obtained as the positive solution of

Σ_{j=1}^{n} f_j^{-1}(v_j/C*_max) = N,                                   (94)

where the f_j^{-1}(·) are functions inverse to the f_j(·). Of course, r_j* = f_j^{-1}(v_j/C*_max), j = 1, 2, ..., n, are the values of the resource allocation functions r_j*(t). Notice that in this case all jobs are processed fully in parallel using constant resource amounts r_j*. It is worth stressing that for some f_j(·), (94) can be solved analytically, e.g. for f_j(r_j) = r_j^{1/β_j}, β_j ∈ {1, 2, 3, 4}. For the general case of f_j(·) we must find the points of the boundary of the set U whose convex combination gives the intersection point described above and yields the minimum value of C_max.

Returning to our problem, let us notice that the results presented above remain valid if we assume m = n (i.e. machines do not impose any constraint on r_j*(t)). This fact creates a basis for solving the problem for arbitrary m < n. Namely, in general, we must consider all maximal possibilities of assigning machines to jobs, i.e. all m-element combinations of jobs from J. Let us denote them by

J^k,   k = 1, 2, ..., p = \binom{n}{m},

and by K_j the set of indices k of the sets J^k such that J_j ∈ J^k. Of course, for each J^k we can use the results obtained for m = n, but we do not know the parts of jobs (i.e. the parts of their v_j's) processed in particular J^k. These parts will be variables in a nonlinear programming problem formulated below. Denote by x_{jk} the part of v_j processed in J^k, and by Δ*_k({x_{jk}}_{J_j ∈ J^k}) the minimum schedule length for J^k as a function of the x_{jk}'s. We obtain the following problem:

Minimize C_max = Σ_{k=1}^{p} Δ*_k({x_{jk}}_{J_j ∈ J^k})                 (95)

subject to

Σ_{k ∈ K_j} x_{jk} = v_j,   j = 1, 2, ..., n.                           (96)

ẋ_j(t) = f_j[r_{j1}(t), r_{j2}(t), ..., r_{js}(t)],   x_j(0) = 0,   x_j(C_j) = v_j,   (105)

where r_{jl}(t) is the amount of resource l allotted to job j at the moment t. The methodology of reducing this dynamic scheduling problem to mathematical programming problems is based on the assumption that the resources take part in processing particular jobs in known proportions, i.e. r_{jl}(t) = c_{jl} r_j(t), j = 1, 2, ..., n; l = 1, 2, ..., s, where the c_{jl} ≥ 0 are known parameters. On this basis we can write (105) in the form (88), typical for the single-resource case. Of course, the sets of feasible resource allocations R, U now have more complicated shapes, since they are products of the sets R_l, U_l, respectively, corresponding to the constraints on the availability of resource l, l = 1, 2, ..., s. However, the essence of the approach presented in the previous sections remains valid.

Studying the multi-resource case, it is also worth saying a few words about the corresponding vector-optimum problems. The reader has certainly noticed that, on the basis of the presented reasoning, it was also possible to formulate mathematical programming problems for finding minimum levels of N or M ensuring C_max or L_max to be less than or equal to a given value. In general, this leads to a vector-optimum problem in which we search for an optimal (in a given sense) compromise between the levels N_l, M_l, l = 1, 2, ..., s, and a given schedule performance measure. For example, if this measure is L_max, the generalization of the problem (100)-(104) has the form

Minimize [{N_l}_{l=1}^{s}, {M_l}_{l=1}^{s}, L_max]

subject to

Δ*_1({x_{j1}}_{j=1}^{n}, {N_l}_{l=1}^{s}, {M_{1l}}_{l=1}^{s}) ≤ d_1 + L_max,

Δ*_k({x_{jk}}_{j=1}^{n}, {N_l}_{l=1}^{s}, {M_{kl}}_{l=1}^{s}) ≤ d_k - d_{k-1},   k = 2, 3, ..., n,

Σ_{k=1}^{n} x_{jk} = v_j,   j = 1, 2, ..., n,

Σ_{k=1}^{n} M_{kl} ≤ M_l,   l = 1, 2, ..., s,

x_{jk}, M_{kl} ≥ 0,   j, k = 1, 2, ..., n; l = 1, 2, ..., s.

Details on finding Δ*_k(·) are given in Weglarz (1990).

Appendix I: Machine scheduling notation α|β|γ

α indicates the machine environment (single machine, parallel machines, open shop, job shop), β indicates job characteristics (independent vs. precedence constrained, etc.) and γ indicates the optimality criterion (makespan, flow time, maximum lateness, total tardiness, etc.). As we progress through the mathematical models we fill in the definitions of the terms in this notation as needed. We restrict the optimality criterion γ to regular measures, i.e., real functions f(C_1, ..., C_n) nondecreasing in every variable: f(C_1, ..., C_n) < f(C_1', ..., C_n') implies C_j < C_j' for at least one j. The set of regular measures contains most of the commonly used optimality criteria, such as: the maximal job completion time, C_max; the sum of job completion times, Σ_{j=1}^{n} C_j; the sum of weighted completion times, Σ_{j=1}^{n} w_j C_j; the maximal job lateness, L_max (L_j = C_j - d_j); the sum of weighted job tardinesses, Σ_{j=1}^{n} w_j T_j (T_j = max{0, C_j - d_j}); the sum of weighted numbers of late jobs, Σ_{j=1}^{n} w_j U_j (U_j = 1 if C_j > d_j and 0 if C_j ≤ d_j); etc. We also examine problems of machine scheduling in a hard-real-time environment, such as scheduling before deadlines, which requires testing whether a given set of tasks can feasibly be processed on time.

In order to illustrate how this problem classification scheme works, examine (i) F3||C_max, (ii) F3|no wait|C_max. F (in the first field, α) denotes dedicated machines in a flow-shop system, where each job requires one operation on each machine (i.e., |J_j| = 3, j = 1, ..., n) and the operations of each job have the same linear order (or weak order), i.e., o_{1j} < o_{2j} < o_{3j}, j = 1, ..., n. F3 indicates that this flow-shop environment consists of 3 machines. The objective criterion (γ = C_max) is that of minimizing the maximal job completion time (referred to as the makespan). The problem F3||C_max is an NP-hard problem, by reduction from the Knapsack Problem (Lenstra, 1977). The second problem, F3|no wait|C_max, has the added constraint that no job can wait for processing after the first or the second machine (represented by β = no wait). This problem was also proven to be NP-hard, by reduction from Three-Dimensional Matching (Rock, 1984). Optimality criteria and their different equivalence classes are examined in Lenstra (1977) and French (1982).

References

Baker, K.R., Lawler, E.L., Lenstra, J.K., and Rinnooy Kan, A.H.G. (1983), "Preemptive scheduling of a single machine to minimize maximum cost subject to release dates and precedence constraints", Operations Research 31, 381-386.
Blazewicz, J. (1987), "Selected topics in scheduling theory", Annals of Discrete Mathematics 31, 1-60.
Blazewicz, J., Cellary, W., Slowinski, R., and Weglarz, J. (1976), "Deterministyczne problemy szeregowania zadan na rownoleglych procesorach, Cz. I. Zbiory zadan niezaleznych", Podstawy Sterowania 6, 155-178.
Blazewicz, J., Cellary, W., Slowinski, R., and Weglarz, J. (1986), Scheduling Under Resource Constraints: Deterministic Models, Baltzer, Basel.
Blazewicz, J., Lenstra, J.K., and Rinnooy Kan, A.H.G. (1983), "Scheduling subject to resource constraints: Classification and complexity", Discrete Applied Mathematics 5, 11-24.
Bruno, J.L. (1976), "Mean weighted flow-time criterion", in: E.G. Coffman (ed.), Computer and Job Scheduling Theory, John Wiley & Sons, New York, 101-137.
Bruno, J.L., Coffman, E.G., and Sethi, R. (1974), "Scheduling independent tasks to reduce mean finishing time", Communications of the ACM 17, 382-387.
Cochand, M., de Werra, D., and Slowinski, R. (1985), "Preemptive scheduling with staircase and piecewise linear resource availability", O.R. Working Paper 85/1, EPFL, Lausanne.
Dobson, G., and Karmarkar, U.S. (1986), "Large scale shop scheduling: Formulations and decomposition", Graduate School of Management Working Paper Series No. QM8631, University of Rochester.
Dobson, G., Karmarkar, U.S., and Rummel, J.L. (1987), "Batching to minimize flow times on one machine", Management Science 33/6, 784-799.
Dror, M., and Stern, H. (1988), "Single machine scheduling with deadlines, release times, and preemption" (in preparation).


Dror, M., Stern, H., and Lenstra, J.K. (1987), "Parallel machine scheduling: Processing rates dependent on number of jobs in operation", Management Science 33/8, 1001-1009.
Du, J., and Leung, J.Y.T. (1990), "Minimizing total tardiness on one processor is NP-hard", Mathematics of Operations Research 3, 483-495.
Du, J., and Leung, J.Y.T. (1989), "Minimizing mean flow time with release time and deadline constraints", Manuscript, Computer Science Department, University of Texas at Dallas.
Federgruen, A.K., and Groenevelt, H. (1986), "Preemptive scheduling of uniform machines by ordinary network flow techniques", Management Science 33/3, 341-349.
Fisher, M.L. (1973), "Optimal solution of scheduling problems using Lagrange multipliers: Part I", Operations Research 21, 1114-1127.
Fisher, M.L. (1973), "Optimal solution of scheduling problems using Lagrange multipliers: Part II", in: Symposium on the Theory of Scheduling and its Applications, Springer, Berlin.
Fisher, M.L. (1976), "A dual algorithm for the one-machine scheduling problem", Mathematical Programming 11, 229-251.
Fisher, M.L., Lageweg, B.J., Lenstra, J.K., and Rinnooy Kan, A.H.G. (1983), "Surrogate duality relaxation for job shop scheduling", Discrete Applied Mathematics 5, 65-75.
French, S. (1982), Sequencing and Scheduling: An Introduction to the Mathematics of the Job-Shop, Ellis Horwood, Chichester.
Garey, M.R., and Johnson, D.S. (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, San Francisco, CA.
Graham, R.L. (1976), "Bounds on performance of scheduling algorithms", in: E.G. Coffman (ed.), Computer and Job Scheduling Theory, John Wiley & Sons, New York.
Graham, R.L., Lawler, E.L., Lenstra, J.K., and Rinnooy Kan, A.H.G. (1979), "Optimization and approximation in deterministic sequencing and scheduling theory: A survey", Annals of Discrete Mathematics 5, 287-326.
Grotschel, M., Lovasz, L., and Schrijver, A. (1988), Geometric Algorithms and Combinatorial Optimization, Springer-Verlag, Berlin/Heidelberg.
Horn, W.A. (1974), "Some simple scheduling algorithms", Naval Research Logistics Quarterly 21, 177-185.
Horowitz, E., and Sahni, S. (1976), "Exact and approximate algorithms for scheduling nonidentical processors", Journal of the ACM 23, 317-327.
Jackson, J.R. (1955), "Scheduling a production line to minimize maximum tardiness", Research Report 43, Management Science Research Project, University of California, Los Angeles.
Karmarkar, N. (1984), "A new polynomial-time algorithm for linear programming", Combinatorica 4, 373-395.
Khachiyan, L.G. (1979), "A polynomial algorithm in linear programming", Soviet Mathematics Doklady 20, 191-194.
Labetoulle, J., Lawler, E.L., Lenstra, J.K., and Rinnooy Kan, A.H.G. (1979), "Preemptive scheduling of uniform machines subject to release dates", Report BW 99, Mathematisch Centrum, Amsterdam.
Lageweg, B.J., Lenstra, J.K., and Rinnooy Kan, A.H.G. (1977), "Job shop scheduling by implicit enumeration", Management Science 24, 441-450.


Lawler, E.L. (1983), "Recent results in the theory of machine scheduling", in: A. Bachem, M. Grotschel and B. Korte (eds.), Mathematical Programming: The State of the Art, Springer-Verlag, Berlin, 202-234.
Lawler, E.L. (1973), "Optimal sequencing of a single machine subject to precedence constraints", Management Science 19, 544-546.
Lawler, E.L. (1977), "A 'pseudopolynomial' algorithm for sequencing jobs to minimize total tardiness", Annals of Discrete Mathematics 1, 331-342.
Lawler, E.L., and Labetoulle, J. (1978), "Preemptive scheduling of unrelated parallel processors by linear programming", Journal of the ACM 25, 612-619.
Lawler, E.L., Lenstra, J.K., and Rinnooy Kan, A.H.G. (1982), "Recent developments in deterministic sequencing and scheduling: A survey", in: M.A.H. Dempster, J.K. Lenstra and A.H.G. Rinnooy Kan (eds.), Deterministic and Stochastic Scheduling, Reidel, Dordrecht, 35-73.
Lenstra, J.K. (1977), Sequencing by Enumerative Methods, Mathematisch Centrum, Amsterdam.
Lenstra, J.K., Rinnooy Kan, A.H.G., and Brucker, P. (1977), "Complexity of machine scheduling problems", Annals of Discrete Mathematics 1, 343-362.
Manne, A.S. (1960), "On the job-shop scheduling problem", Operations Research 8, 219-223.
Miller, C.E., Tucker, A.W., and Zemlin, R.A. (1960), "Integer programming formulations and the traveling salesman problems", Journal of the ACM 7, 326-329.
Nemhauser, G.L., and Wolsey, L.A. (1988), Integer and Combinatorial Optimization, John Wiley & Sons, New York.
Padberg, M.W., and Grotschel, M. (1985), "Polyhedral computations", in: E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan and D. Shmoys (eds.), The Travelling Salesman Problem, Wiley, Chichester, 307-360.
Padberg, M.W., and Rinaldi, G. (1987), "Optimization of a 532-city symmetric travelling salesman problem by branch and cut", Operations Research Letters 6, 1-7.
Patterson, J.H., Slowinski, R., Talbot, F.B., and Weglarz, J. (1989), "An algorithm for a general class of precedence and resource constrained scheduling problems", in: R. Slowinski and J. Weglarz (eds.), Advances in Project Scheduling, Elsevier, Amsterdam, 3-28.
Pritsker, A.A.B., Watters, L.J., and Wolfe, P.M. (1969), "Multiproject scheduling with limited resources: A zero-one programming approach", Management Science 16, 93-108.

Rinnooy Kan, A.H.G. (1976), Machine Scheduling Problems: Classification, Complexity and Computations, Martinus Nijhoff, The Hague.
Rock, H. (1984), "The three-machine no-wait flow shop is NP-complete", Journal of the ACM 31, 336-345.
Schweitzer, P., Dror, M., and Trudeau, P. (1988), "Periodic loading problem: Formulation and heuristics", INFOR 26, 40-61.
Slowinski, R. (1980), "Two approaches to problems of resource allocation among project activities - a comparative study", Journal of the Operational Research Society 31, 711-723.
Slowinski, R. (1987), "Production scheduling on parallel machines subject to staircase demand", Cahiers du LAMSADE 74, Universite de Paris-Dauphine.
Smith, W.E. (1956), "Various optimizers for single-stage production", Naval Research Logistics Quarterly 3, 59-66.
Weglarz, J. (1976), "Time-optimal control of resource allocation in a complex-of-operations framework", IEEE Transactions on Systems, Man and Cybernetics SMC-6/11, 783-788.
Weglarz, J. (1981), "Project scheduling with continuously-divisible doubly-constrained resources", Management Science 27, 1040-1053.
Weglarz, J. (1989), "Project scheduling under continuous processing speed vs. resource amount functions", in: R. Slowinski and J. Weglarz (eds.), Advances in Project Scheduling, Elsevier, Amsterdam, 273-297.
Weglarz, J. (1990), "Multicriteria scheduling under continuous doubly constrained resources", Methods of Operations Research 60, 153-162.
Weglarz, J., Blazewicz, J., Cellary, W., and Slowinski, R. (1977), "An automated revised simplex method for constrained resource network scheduling", ACM Transactions on Mathematical Software, 295-300.
de Werra, D. (1984), "Preemptive scheduling, linear programming and network flows", SIAM Journal on Algebraic and Discrete Methods 5, 11-20.
de Werra, D. (1988), "On the two-phase method for preemptive scheduling", European Journal of Operational Research 37, 227-235.
de Werra, D. (1989), "Graph-theoretical models for preemptive scheduling", in: R. Slowinski and J. Weglarz (eds.), Advances in Project Scheduling, Elsevier, Amsterdam, 171-185.