Setting Due Dates in a Stochastic Single Machine Environment

Victor Portougal and Dan Trietsch
MSIS Department, The University of Auckland
Private Bag 92019, Auckland, New Zealand

1 Corresponding author

Setting Due Dates in a Stochastic Single Machine Environment

Abstract

A set of n jobs with statistically independent random processing times has to be processed on a single machine without idling between jobs and without preemption. It is required to set due dates and promise them to customers. During the production stage, earliness and tardiness against the promised due dates will be penalised. The goal is to minimise the total expected penalties. We introduce a due date setting procedure with optimum customer service level, and an O(n log n) asymptotically optimal approximate sequencing rule. Our model includes safety time and our sequence remains the same regardless of disruptions, so the result is robust. For the normal distribution we provide sufficient optimality conditions, precedence relationships that the optimal sequence must obey, and tight bounds.

Key words: Scheduling; Stochastic processes; due-date setting; customer service level; robust scheduling.

1. Introduction

Published research on assigning due dates is relatively scarce. In contrast, scheduling against due dates is a very popular research topic. In these problems the due dates are always given. The question is: 'by whom?' There is no doubt that the true objective of scheduling is to solve realistic problems. Therefore, we should answer the question in the context of real production systems. In this context, the due dates (or delivery dates) almost always appear as a result of communications with the customer. They are either promised, or agreed by negotiation. To support negotiation or to determine what the delivery date should be, the company representative always requires a scheduling or a capacity-booking mechanism. By scheduling the customer's order against the current portfolio of orders she obtains a realistic delivery date, and then promises it to the customer. If the customer is satisfied, then this date becomes the due date. If the customer wants an earlier delivery date, then it may require re-scheduling of other orders, if possible. So, in practice, scheduling comes first and due dates are set afterwards. This makes the research on assigning due dates much more useful in practice than the research on [given] due-date scheduling.

During the production stage, both earliness and tardiness against the promised due dates are penalised. The tardiness penalty models a natural part of business relations. Frequently it is proportional to the delay. Rarely, more complex penalty functions may be required, such as positive increasing convex functions (e.g., if the delay exceeds a specified period, the penalty rate doubles). Tardiness cost frequently includes not only external, or customer-related, cost (e.g., penalty, loss of goodwill), but sometimes also internal cost (e.g., additional unplanned capacity cost such as overtime and the cost of replanning the next jobs). The penalty for being early is less straightforward. Earliness cost mostly reflects internal expenses, e.g., inventory holding cost, or loss due to late payment (the product is ready, but the customer pays on the due date). It may also reflect our incentive to quote early (but reliable) due dates to satisfy customers.
In spite of this incentive, because of the reliability issue, in the current business environment tardiness cost per time unit is often much higher than earliness cost. Therefore, the natural business criterion for setting due dates is the expected sum of earliness-tardiness penalties.

The problem is inherently connected to the stochastic nature of processing times. If we ignore the stochastic nature of operations, then due dates should be set at the computed completion times of the jobs. A possible alternative is to formulate deterministic models that include due date setting. This approach dates back to Conway (1965), but has received only sporadic attention since then (e.g., Baker and Scudder, 1989). It is typically done by deciding on a due-date-related objective and then seeking the sequence that optimises it, again subject to the assumption that the due dates will be set at the computed completion times. For example, we can utilise the WSPT rule to minimise weighted flow time and then set due dates according to the results. However, some of the suggested objectives are so artificial that it is difficult to find corresponding production or service environments in reality. Cheng and Gupta (1989) provide a survey of such methods. Finally, the deterministic problem of setting due dates with minimal earliness-tardiness penalties is trivial. Its solution is to schedule the jobs in any order and to set due dates at the computed end of processing.

Once we consider stochastic variation, however, the picture changes dramatically. First, assuming continuous processing time distributions, every job will be either early or tardy almost surely. Second, the earliness-tardiness of any job depends not only on its own variability, but also on its position in the sequence: a highly variable job placed at the beginning of the sequence will add to the variability of all the jobs placed after it. Third, the delivery dates should be defined not only based on the expected completion of the job, but also with respect to customer service level. Since earliness is typically not a concern for customers--they don't even have to know about it--service levels may be computed as the probability of avoiding tardiness. Note that if we set the due date at the expected completion time of a job, then the service level for any symmetrical distribution will be 50%.
Nonetheless, the optimal service level is associated with the relative magnitude of the earliness and tardiness costs, so it should often be much higher than 50% (Portougal and Trietsch, 2001).

Moving on to the stochastic scheduling literature, we may distinguish three approaches to due dates, sorted according to their business relevance. First, some models assume stochastic due dates (and either deterministic or stochastic processing times). Next, most due-date related stochastic models assume fixed deterministic due dates and stochastic processing times. An example of this approach is Soroush and Fredendall (1994). Finally, the option that is most relevant to practice involves setting deterministic due dates under stochastic processing times. Unfortunately, not much has been published in this area. Probably the first to attract attention to this important problem was Cheng (1986, 1991). His model combines sequencing and due-date setting and he achieves optimal results for some special cases. However, he did not consider service levels. Since it is desirable to model the problem in such a way that optimal service levels will be part of the solution, we propose a new model.

In the next section, we formulate the problem. In Section 3, we derive an O(n log n) approximate solution and present a basic lower bound. In Section 4, we show that our result is asymptotically optimal. In Section 5, we provide a sufficient condition. In Section 6, we present some precedence relationships that might be useful in searching for an optimal solution. In Section 7, we develop a tighter lower bound. Section 8 is the conclusion.

2. Problem formulation and preliminary analysis

Let a set of n jobs with independent random processing times be available at time zero. Let job i have processing time pi, mean μi, and variance σi². Assume processing is on a single machine, one job at a time, starts at time zero, proceeds without intermittent idling, and preemptions are not allowed. Denote the completion time of job i by Ci and its due date by di. When a job is completed, its early-tardy cost is given by Max{αi(di-Ci),0}+Max{βi(Ci-di),0}, where αi and βi are positive weights. It is required to sequence the jobs and set due dates so that the total expected early-tardy cost will be minimised.


In a stochastic environment, assuming continuous processing time distributions, all jobs are either early or tardy, and it makes sense to try to reduce the expected cost associated with these deviations. If we do so consistently, then over the long run our total penalties will be minimised almost surely. In general, we cannot even compute our objective function value without knowing the distributions of Ci. All these distributions, with the exception of the first job, require convolutions. This can lead to very tedious calculations for the first few jobs. Once we have accounted for enough independent jobs, the central limit theorem allows the use of the normal approximation. For this reason, as we will see, asymptotic optimality does not require any assumption on the processing time distributions. Nonetheless, all our other results do. Therefore, for simplicity, we assume that all processing times are normally distributed. The literature often includes this assumption; e.g., see Soroush and Fredendall (1994). Also, Portougal and Trietsch (2001), in a context similar to ours, showed that using the normal distribution as an approximation for the purpose of setting safety allowances is robust even for distributions that are far from normal. Practically, the only alternative to making this assumption is to use simulation with complete enumeration at least for the first few jobs in the sequence (and use our approximate solution once the distribution becomes close enough to normal). In what follows we will use this assumption except for the proof of asymptotic optimality.

Finally, for the purpose of asymptotic optimality, we require the jobs to satisfy minor regularity conditions. These conditions are automatically met if the parameters of the jobs--αi, βi, μi and σi²--are independently sampled from some [not necessarily normal] multivariate distribution with a finite variance-covariance matrix. In such a case, we may expect the mix of jobs in any two large sets to be similar to each other. In addition, we may expect that no subset of jobs will dominate other subsets with a similar number of jobs.

Let αi+βi=γi, and define ki as the value that satisfies Φ(ki)=βi/γi, where Φ(k) is the standard normal cumulative distribution function (and φ(k) is the standard normal density function). That is, using a safety time allowance of kisi yields a service level of βi/γi. Denote the index of the job in the i-th position by [i], and let the variance of the completion time of job i be si²; i.e., s[i]² = Σj=1,...,i σ[j]². If we set d[i]=E(C[i])=Σj=1,...,i μ[j], then the expected early-tardy cost of job [i], ET[i], is γ[i]s[i]φ(0) (Soroush and Fredendall, 1994). Alternatively, if we set d[i]=E(C[i])+k[i]s[i], then ET[i]=γ[i]s[i]φ(k[i]), which is lower--since φ(k[i])≤φ(0). Furthermore, for any sequence, setting the due dates to d[i]=E(C[i])+k[i]s[i] ∀i is locally optimal by the newsboy model, i.e., it optimises the service levels (Portougal and Trietsch, 2001). Thus if we find the sequence that minimises ΣiET[i]=Σiγ[i]s[i]φ(k[i]) we can be sure that we have the globally optimal solution. Nonetheless, henceforth we combine these two versions by using the notation bi to denote either γiφ(0) or γiφ(ki)--thus ETi=bisi. Our focus is on determining the optimal sequence of the jobs given bi. The two versions may lead to different optimal sequences, and unless ki=0 (∀i) they will lead to different objective function values, but the same mathematical model applies.
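
For completeness, the two expected-cost expressions quoted above follow from a short newsvendor-style calculation (this derivation is ours; it uses only the definitions already introduced). For a job with weights α and β, γ=α+β, a normally distributed completion time C with standard deviation s, and due date d=E(C)+ks,

    E[early-tardy cost] = αE[Max{d-C,0}] + βE[Max{C-d,0}] = s[γφ(k) + k(αΦ(k) - β(1-Φ(k)))].

The second term inside the brackets vanishes when k=0, giving γsφ(0) for a due date set at the mean completion time, and also when Φ(k)=β/γ, i.e., at the newsboy-optimal k, giving the lower value γsφ(k).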

A numerical example may be useful here.

Example 1: Let n=2; μ1=μ2=10; σ1=3, σ2=4; α1=3√(2π), α2=9√(2π); β1=27√(2π), β2=36√(2π).

Analysis: γ1=30√(2π) and γ2=45√(2π), and since φ(0)=1/√(2π) the bi values without safety time allowances are b1=30 and b2=45. The optimal service level of job 1 is β1/γ1=27/30=0.9; and of job 2, β2/γ2=36/45=0.8. A normal table yields k1=1.282 and k2=0.842. Denote the b values here by b', for distinction, and after multiplying bi by exp(-1.282²/2) and exp(-0.842²/2), respectively, we obtain b1'=13.19 and b2'=31.57. Solving the example by complete enumeration we obtain for the sequence (1,2) Σbisi=315 and Σbi'si=197.42, and for the sequence (2,1) Σbisi=330 and Σbi'si=192.23. Accordingly we select the sequence (2,1) and set the due dates as follows: d2=10+0.842·4=13.368; d1=20+1.282·5=26.41 (since √(3²+4²)=5).

Of course, if things work right we will complete job 2 around time 10 and job 1 around time 20. Nonetheless, the safety time allowances (which cost 10.104 and 57.69, respectively) are necessary to avoid increasing the objective function value from 192.23 (including the safety time allowances cost) to 330. We may draw the following conclusions: (1) using optimal service levels is important for any sequence (in the example, the cost is reduced to about 60% after optimising the service level); (2) the optimal sequence can change when we use the optimal service level (Σbisi was minimised by the sequence (1,2), and Σbi'si by (2,1)).
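
As a quick check, the figures in Example 1 can be reproduced numerically. The following short Python sketch (ours, not part of the original paper) enumerates both sequences under the normal assumption and prints Σbisi and Σbi'si for each; it reproduces 315/197.4 and 330/192.2 (small deviations in the second decimal come from rounding k1 and k2 in the text).

    import math
    from itertools import permutations

    mu    = {1: 10, 2: 10}
    sigma = {1: 3, 2: 4}
    alpha = {1: 3, 2: 9}     # earliness weights, divided by sqrt(2*pi)
    beta  = {1: 27, 2: 36}   # tardiness weights, divided by sqrt(2*pi)

    def inv_Phi(p):
        # standard normal quantile by bisection (adequate for illustration)
        Phi = lambda k: 0.5 * (1 + math.erf(k / math.sqrt(2)))
        lo, hi = -10.0, 10.0
        for _ in range(100):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if Phi(mid) < p else (lo, mid)
        return (lo + hi) / 2

    b  = {i: alpha[i] + beta[i] for i in mu}               # gamma_i * phi(0): 30 and 45
    k  = {i: inv_Phi(beta[i] / b[i]) for i in mu}          # optimal k_i: about 1.282 and 0.842
    bp = {i: b[i] * math.exp(-k[i] ** 2 / 2) for i in mu}  # b_i' = gamma_i * phi(k_i)

    for seq in permutations(mu):
        var, cost_mean_dd, cost_opt_dd = 0.0, 0.0, 0.0
        for i in seq:
            var += sigma[i] ** 2
            s = math.sqrt(var)
            cost_mean_dd += b[i] * s    # due dates at the expected completion times
            cost_opt_dd  += bp[i] * s   # due dates with the optimal safety allowance
        print(seq, round(cost_mean_dd, 2), round(cost_opt_dd, 2))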

3. An Approximately Optimal Solution by Relaxation

We propose a new heuristic rule for the problem. The main part of the rule is to sequence the jobs such that σ[1]²/b[1] ≤ σ[2]²/b[2] ≤ ... ≤ σ[n]²/b[n]. The minor part is a tie-breaking rule, specifically σ[i]² ≤ σ[i+1]² (any remaining ties must be between identical jobs and can be broken arbitrarily). Because it is based on simple sorting, the computational complexity of the rule is O(n log n).
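
To make the rule concrete, here is a minimal Python sketch (ours; the dictionary-based job representation and function names are illustrative) that sequences jobs by non-decreasing σ²/b with the σ² tie-break and then sets the due dates d[i]=E(C[i])+k[i]s[i] of Section 2, assuming normally distributed completion times.

    import math

    def normal_quantile(p):
        # standard normal quantile by bisection (adequate for illustration)
        Phi = lambda k: 0.5 * (1 + math.erf(k / math.sqrt(2)))
        lo, hi = -10.0, 10.0
        for _ in range(100):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if Phi(mid) < p else (lo, mid)
        return (lo + hi) / 2

    def schedule(jobs):
        # jobs: list of dicts with keys 'mu', 'var', 'alpha', 'beta'
        for j in jobs:
            gamma = j["alpha"] + j["beta"]
            j["k"] = normal_quantile(j["beta"] / gamma)     # newsboy-optimal service level
            j["b"] = gamma * math.exp(-j["k"] ** 2 / 2) / math.sqrt(2 * math.pi)  # gamma*phi(k)

        # main rule: non-decreasing var/b; tie-break: non-decreasing var
        seq = sorted(jobs, key=lambda j: (j["var"] / j["b"], j["var"]))

        due_dates, mean, var = [], 0.0, 0.0
        for j in seq:
            mean += j["mu"]
            var += j["var"]
            s = math.sqrt(var)                   # standard deviation of the completion time
            due_dates.append(mean + j["k"] * s)  # d[i] = E(C[i]) + k[i]*s[i]
        return seq, due_dates

Applied to the data of Example 1 above, this returns the sequence (2,1) and, up to rounding, the due dates 13.37 and 26.41.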

In what follows we sometimes analyse the objective function in two steps. In step 1, we look at Σbisi² or parts thereof. In step 2, we draw conclusions for our true objective function, Σbisi. At stage i, we can draw the contribution of job [i] to the objective function by a rectangle with a basis of b[i] starting at Σk=1,...,i-1 b[k], and with a height of s[i]² at step 1 or s[i] at step 2. That is, the x-axis is used for b[i], cumulatively, and the y-axis for the variance or the standard deviation of the completion time, as the case may be. We may refer to the domain of the variables of step 1 as the b-σ² or the b-s² domain, and to those of step 2 as the b-σ or the b-s domain. Lemma 1 is instrumental for step 2.

Lemma 1: Let a, b, q, r > 0 satisfy a(q²+r²) ≥ bq². Then, for any s ≥ 0,

    a(√(s²+2q²+r²) - √(s²+q²)) > b(√(s²+2q²+r²) - √(s²+q²+r²)).

Proof: Denote

    Δa = √(s²+2q²+r²) - √(s²+q²);   Δb = √(s²+2q²+r²) - √(s²+q²+r²).

Clearly, Δa > Δb. Now, a(q²+r²) ≥ bq² if and only if a(q²+r²)/bq² ≥ 1, so if we can show Δa/Δb > (q²+r²)/q², the lemma will be proved (since (q²+r²)/q² ≥ b/a). Multiplying and dividing Δa by √(s²+2q²+r²) + √(s²+q²) leads to the following [circular] expression

    Δa = (q²+r²)/(2√(s²+2q²+r²) - Δa),

and similarly

    Δb = q²/(2√(s²+2q²+r²) - Δb).

Therefore,

    Δa/Δb = [(q²+r²)/q²]·[(2√(s²+2q²+r²) - Δb)/(2√(s²+2q²+r²) - Δa)],

and since Δb < Δa,

    (2√(s²+2q²+r²) - Δb)/(2√(s²+2q²+r²) - Δa) > 1,

and the lemma follows. QED
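
Because this kind of algebra is easy to mistranscribe, a quick Monte Carlo spot-check of the lemma (ours, purely illustrative) may be reassuring:

    import math, random

    random.seed(1)
    for _ in range(100000):
        a, b, q, r, s = (random.uniform(0.1, 10.0) for _ in range(5))
        if a * (q * q + r * r) < b * q * q:
            continue                # the premise of the lemma is not satisfied
        lhs = a * (math.sqrt(s*s + 2*q*q + r*r) - math.sqrt(s*s + q*q))
        rhs = b * (math.sqrt(s*s + 2*q*q + r*r) - math.sqrt(s*s + q*q + r*r))
        assert lhs > rhs            # the inequality of Lemma 1
    print("no counterexample found")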

If we interpret s², q², and q²+r² as variances, their square roots are standard deviations. The proof shows a connection between Δi (where i=a or b)--the increment of the standard deviation, which belongs to the b-s domain--and the corresponding increment of the variance--which belongs to the b-s² domain. For large enough s, Δi is approximately proportional to the corresponding variance increment (since √(s²+v) - s ≈ v/(2s) when v is small relative to s²). This explains the advantage of using σ² for sorting instead of using σ.

There are two basic cases where our rule produces optimal solutions. One is when all σi are equal to each other and our sequence calls for non-increasing b[i]. The other is when all bi are equal to each other, and our sequence then calls for non-decreasing σ[i]. Our proof of the latter result is incorporated within the proof of a more general sufficient condition (Theorem 3), which generalises these two basic cases. We prove the former now.

Lemma 2: When σi² = constant ∀i, the optimal sequence is by non-increasing bi (non-decreasing constant/bi).

Proof: By negation. Assume otherwise, and there must exist at least one adjacent pair of jobs that violate the lemma, say jobs [i] and [i+1], with b[i] < b[i+1]. Without loss of generality, let σi=1 (∀i), so the variances of the completion times of the jobs are i and i+1 respectively. The contribution of job [i] to the objective function is proportional to √i, the contribution of job [i+1] is proportional to √(i+1), and √i < √(i+1). Interchanging the two jobs attaches the larger weight b[i+1] to the smaller standard deviation and reduces the objective function by (b[i+1]-b[i])(√(i+1)-√i) > 0, contradicting the assumed optimality. QED

4. Asymptotic Optimality

For jobs late enough in the sequence, the central limit theorem makes the normal approximation of the completion times valid even if the distributions of the jobs are not normal. Therefore, our proof will still hold and our rule is asymptotically optimal even without the normality assumption.

5. A Sufficient Condition for Optimality

Theorem 3: Any sequence that satisfies σ[1]²/b[1] ≤ σ[2]²/b[2] ≤ ... ≤ σ[n]²/b[n] and σ[1]² ≤ σ[2]² ≤ ... ≤ σ[n]² yields the optimal solution.

Proof: By negation, suppose such a sequence, say s, exists but is not optimal. Then there must exist a pair of adjacent jobs in the optimal solution, say i and j (with i immediately preceding j), such that either σi² > σj² or σi²/bi > σj²/bj or both. The existence of s rules out the possibility that either σi²/bi < σj²/bj and σi² > σj², or σi² < σj² and σi²/bi > σj²/bj. Therefore, it is enough to consider σi² ≥ σj² and σi²/bi ≥ σj²/bj with at least one strict inequality. If σi² = σj², Lemma 2 applies, so only σi² > σj² and σi²/bi ≥ σj²/bj remains. Let the variance of the completion time of the combined job be given by sc² = s0² + σi² + σj², where s0² is the variance of the completion time of all the previous jobs. Let si² (sj²) be the variance of the completion time of job i (j). In step 1, we compare the values of bisi² + bjsj² before and after interchanging i and j, and by using σi²/bi ≥ σj²/bj we show that (j,i) leads to a lower result. In step 2, using σi² > σj², we show that (j,i) is better in terms of bisi + bjsj as well. For step 1, to calculate the contribution of the two jobs to the objective function, we start by combining them to one job with a weight of bi+bj and a completion time variance of sc². Then we adjust the result by subtracting biσj² (in the original sequence) or bjσi² (in case of interchanging). The adjustment is correct because combining the jobs to one involves extending a weight of bi (or bj) for an extra σj² (or σi²) variance units. But by assumption σi²/bi ≥ σj²/bj and therefore biσj² ≤ bjσi², thus completing step 1. For step 2 we invoke Lemma 1 with the following substitutions: a=bj, b=bi, q²=σj², q²+r²=σi² (thus ensuring σi² > σj²), and s²=s0². QED

Formally, Theorem 3 includes Lemma 2 and the use of non-decreasing σ[i] when bi=constant as special cases. Therefore, henceforth we will refer to it as the sufficient condition. If the sufficient condition can be satisfied, then, due to the tie-breaking rule, our sequence is guaranteed to do so.
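
Checking whether a given sequence meets the sufficient condition is straightforward. A minimal sketch, using the same illustrative job representation as the earlier snippets (each job a dict with 'var' for σ² and 'b'):

    def satisfies_sufficient_condition(seq):
        # True if, in the given order, both var/b and var are non-decreasing
        ratios = [j["var"] / j["b"] for j in seq]
        variances = [j["var"] for j in seq]
        non_decreasing = lambda xs: all(x <= y for x, y in zip(xs, xs[1:]))
        return non_decreasing(ratios) and non_decreasing(variances)

If this returns True for the sequence produced by the rule of Section 3, that sequence is optimal by Theorem 3; otherwise the bounds of Section 7 can be used to assess how far it may be from optimal.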

6. Precedence Relationships

The following is a corollary of Theorem 3, and the proof is embedded within the proof of the theorem. The corollary provides a precedence relationship that applies to adjacent jobs.

Corollary: Let jobs i and j satisfy σi²/bi ≤ σj²/bj and σi² ≤ σj², with at least one inequality strict; then any sequence where i immediately follows j cannot be optimal.

Example 2, with b1=2, b2=3, σ1=3, σ2=4, is also a case where the corollary gives the correct sequence. Therefore, these two jobs should be sequenced in the order (1,2) even if they are preceded by other jobs. The corollary only applies to adjacent jobs, however. It is an open research question whether the condition suffices to prove precedence when the jobs are separated by others. Based on incomplete but substantive analysis, and extensive numerical experiments, we conjecture that it does. We can certainly recommend treating this conjecture as true for the purpose of any heuristic where complete certainty about its optimality is not essential. We also offer a weaker statement that we can prove (Theorem 4).

Conjecture: Let jobs i and j satisfy σi²/bi ≤ σj²/bj and σi² ≤ σj², with at least one inequality strict; then i precedes j in the optimal solution.

Theorem 4: Let jobs i and j satisfy σi² ≤ σj² and bi ≥ bj, with at least one inequality strict; then i precedes j in the optimal solution.

Proof: First note that the conditions imply the sufficient condition of Theorem 3. Therefore, the theorem holds if there are no other jobs between i and j. We have to prove that when other jobs are sequenced between i and j, the precedence relationship still holds. We start with the simplest case where one job, say job k, is between i and j. We have to show that the sequence (i,k,j) is better than (j,k,i). In step 1, we look at the three jobs as a combined job with b=bi+bj+bk and σ²=σi²+σj²+σk², and subtract the following three rectangles: [a] b[1]σk² (where [1] is either i or j), [b] b[1]σ[3]² (where [3] is either j or i), [c] bkσ[3]². But under our conditions all three rectangles are larger when [1]=i and [3]=j. Step 2 is proved by invoking Lemma 1 with respect to the union of rectangles [a] and [b] (for each sequence) and with respect to rectangle [c]. To extend the proof to several intervening jobs between i and j, start by treating them as one combined job, k'. Then we observe that by sequencing it with a lower preceding variance the block gains more than the gain of job k'. QED
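
For search procedures such as the branch and bound approach mentioned in Section 7, the provable precedences of Theorem 4 can be collected up front. A small sketch (ours), again using the illustrative dict representation:

    from itertools import permutations

    def theorem4_precedences(jobs):
        # ordered pairs (i, j) of indices such that Theorem 4 forces job i before job j
        pairs = []
        for i, j in permutations(range(len(jobs)), 2):
            weak   = jobs[i]["var"] <= jobs[j]["var"] and jobs[i]["b"] >= jobs[j]["b"]
            strict = jobs[i]["var"] <  jobs[j]["var"] or  jobs[i]["b"] >  jobs[j]["b"]
            if weak and strict:
                pairs.append((i, j))
        return pairs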

7. A Tighter Lower Bound on the Objective Function

Our aim here is to find the lowest possible difference between the relaxed objective function and any true objective function, whether it uses the same sequence or not. When adding this lowest possible difference to our current bound (the objective value of the relaxed optimal solution), we will obtain a tighter lower bound. To that end, we define the associated problem by the following construction:

1. Find the relaxed optimal solution and its contour function (in the b-s² domain).
2. Sort the variances in non-decreasing order and re-index them from 1 to n.
3. Let si² = Σk=1,...,i σk².
4. Define b0=0 and compute bi recursively by subtracting Σk=0,...,i-1 bk from the argument for which the relaxed contour function yields si².
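
The construction is easy to mechanise once the relaxed optimal solution of step 1 is in hand. The sketch below (ours) takes the relaxed contour function as given, through its inverse -- a callable mapping a target cumulative variance s² to the cumulative-b argument at which the contour reaches it -- and carries out steps 2 to 4, returning the associated objective value of Theorem 5.

    import math

    def associated_lower_bound(variances, contour_inverse):
        # variances: the sigma_i^2 of the jobs
        # contour_inverse: maps s^2 to the cumulative-b argument at which the
        #                  relaxed contour function (step 1) yields that s^2
        s2, cum_b_prev, bound = 0.0, 0.0, 0.0
        for v in sorted(variances):      # step 2: variances in non-decreasing order
            s2 += v                      # step 3: s_i^2 is the running sum
            cum_b = contour_inverse(s2)  # argument where the contour reaches s_i^2
            b_i = cum_b - cum_b_prev     # step 4: subtract the earlier b_k recursively
            cum_b_prev = cum_b
            bound += b_i * math.sqrt(s2) # associated objective, in the b-s domain
        return bound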

Theorem 5: The objective function value of the associated problem based on the optimal relaxed solution is a lower bound on the original problem.

Proof: Using steps 2 to 4 of the associated construction based on any other relaxed contour function leads to a higher objective function value for the associated problem. Therefore, for finding a lower bound it is enough to consider the optimal relaxed contour function. Denote the contour functions of the original and the associated problem by cfo[.] and cfa[.]. By construction cfa[Σk=1,...,i bk] = cfo[Σk=1,...,i bk] ∀i. It follows that cfa ≥ cfo for any argument (they are identical if and only if the sufficient condition is satisfied), and cfa is convex. Therefore, the associated problem satisfies σ1²/b1 ≤ σ2²/b2 ≤ ... ≤ σn²/bn. By construction it also satisfies σ1² ≤ σ2² ≤ ... ≤ σn². It remains to show that step 2 leads to the smallest possible associated objective function. We do this by negation. Assume otherwise, and there must exist a pair of adjacent jobs in the optimal solution, say j and i, such that j precedes i but σi² < σj². In step 1 (in the b-s² domain) look at the combined job j+i such that bj+i = bj+bi and σj+i² = σj²+σi², and, when considering the original sequence (j,i), subtract from it the rectangle bjσi². When we interchange the jobs, the combined job remains intact. However, bi and bj may change (subject to their sum being constant). Specifically, the new bi value, bi', may be larger because the supporting relaxed contour function is convex. By the convexity of cfa, we have bjσi² ≤ biσj², and with bi' ≥ bi we obtain bjσi² ≤ biσj² ≤ bi'σj². So the sequence (i,j) is better or equal in the b-s² domain. Lemma 1 with q²=σi², q²+r²=σj², a=bi', and b=bj shows that it is superior in the b-s domain. QED

This lower bound is strictly tighter than the objective value of the relaxed optimal solution. First, cfa≥cfo, so its objective function value is at least as high. To this we add the area between the true associated objective function and the relaxed one. Finally, if we sequence the first few jobs by some arbitrary method, it is straightforward to obtain a similar lower bound for the contribution of the remaining jobs. This can be useful for branch and bound solution approaches.

8. Conclusion

Finding the exact solution of our problem by a polynomial procedure seems elusive, but we provided an O(n log n) asymptotically optimal solution. We recommend our rule as a sufficient approximation in practice, although we provided bounds and precedence relationships that can help in the pursuit of the optimal solution. Our rule is robust both in terms of performance (the ability to sustain shocks without significant degradation in schedule quality) and in terms of execution (avoiding the need for rescheduling) (McKay et al., 2002). The former is true because our due dates are protected by safety time cushions. The latter is true because our recommended sequence is fixed regardless of disruptions.


References

Baker, K.R. and Scudder, G. (1989), On the assignment of optimal due dates, Journal of the Operational Research Society 40, 93-95.

Baker, K.R. and Scudder, G. (1990), Sequencing with earliness and tardiness penalties: a survey, Operations Research 38, 22-36.

Cheng, T.C.E. and Gupta, M.C. (1989), Survey of scheduling research involving due-date determinations, European Journal of Operational Research 38, 156-166.

Cheng, T.C.E. (1991), Optimal assignment of slack due-dates and sequencing of jobs with random processing times on a single machine, European Journal of Operational Research 51, 348-353.

Conway, R.W. (1965), Priority dispatching and job lateness in a job shop, Journal of Industrial Engineering 16, 228-237.

McKay, K., Pinedo, M. and Webster, S. (2002), Practice-focused research issues for scheduling systems, Production and Operations Management 11, 249-258.

Portougal, V. and Trietsch, D. (2001), Stochastic scheduling with optimal customer service, Journal of the Operational Research Society 52, 226-233.

Soroush, H.M. and Fredendall, L.D. (1994), The stochastic single machine scheduling problem with earliness and tardiness costs, European Journal of Operational Research 77, 287-302.
