Economically Balanced Criticalities for Robust Project Scheduling and Control
by Dan Trietsch
The University of Auckland Business School
ISOM, Old Choral Hall, 7 Symonds Street, Auckland, New Zealand
++649-373-7599 ext 87381; fax: ++649-373-7430
[email protected]
June 2005
Abstract

Any practicable and robust project scheduling and control framework should: (1) be based on sound theory; (2) yield economical results; (3) be intuitively acceptable to decision makers; (4) require modest information inputs from users and provide easy-to-utilize outputs. Conditions 3 and 4 are important for practicability: decision makers rarely adopt a method that violates either one of them. In contrast, for conditions 1 and 2, it’s enough that practitioners believe they apply. In this paper we propose an enhanced PERT framework that satisfies the four conditions. It is motivated by the success of a popular framework that only satisfies the last two. The framework utilizes insights from stochastic sequencing and scheduling theory, extensions of the concept of economic balance to buffer-setting, and principles of data estimation and monitoring.
Keywords: Project scheduling, uncertainty modeling, economically balanced buffers, optimization by simulation
1. Introduction

One of the earliest and most successful frameworks for computerized, model-based decision support systems (DSS) is the Project Evaluation and Review Technique (PERT), a term that we use here in a wide sense including the Critical Path Method (CPM). Among other highly innovative features, the PERT framework explicitly included treatment of the stochastic aspects of project management [1, 2]. Considering modern developments, however, if we look for issues where new models may be most useful, the stochastic aspects loom large. The focus of this paper is on the potential enhancement of the PERT framework by recent stochastic management models, aiming to make it easier to use and more robust. In general, an ideal model-based DSS should:
(1) Be rooted in sound theory
(2) Yield economically optimal results (at least approximately)
(3) Be intuitively acceptable to decision makers
(4) Require modest information inputs from users and provide easy-to-utilize outputs.
Conditions 1 and 2 may seem self-evident, but they are not necessary for commercial success. Instead, it is enough that practitioners believe that they are satisfied. Furthermore, practitioners cannot judge economic results relative to the (unknown and somewhat ill-defined) optimal solution but rather relative to the available practical alternatives. From a theoretical point of view, commercial success is not a proof of validity. Nonetheless, by conditions 1 and 2 we do not imply that the models must be analytic. On the contrary, assuming that P≠NP, the pursuit of perfect optimality is sub-optimal (so it violates condition 2)! After all, even simplified versions of the problems we deal with are NP-hard (in the strong sense), so exact computations would often take longer and cost more than just completing the whole project by any feasible plan. (Ironically, this is especially likely if the project is large and therefore expensive.) Conditions 3 and 4 are more important for practicability: decision makers will rarely adopt a method that violates condition 4, and convincing them to buy involves satisfying condition 3. However, as a practical compromise, a “black box” for complex procedures, e.g., for sequencing and scheduling, should be acceptable if the inputs and outputs are simple enough, computation time is reasonable, and—last but not least—the user can test the effect of manual changes and override the recommended solution. More than
anything else, it is the user interface that must meet simplicity conditions. With this in mind, our challenge is to enhance PERT by better models in terms of conditions 1 and 2 but bearing in mind the simplicity-of-use requirement. In the context of the era in which it was developed, PERT is an outstanding framework. The network presentation—in spite of the eternal debate whether activity-on-arrow (AoA) or activity-on-node (AoN) is better—meets conditions 3 and 4 with flying colors. Furthermore, over the years, extensive and impressive results have been achieved with respect to optimal or excellent heuristic sequencing models. For comprehensive discussion of relevant but deterministic models, see Morton & Pentico [3], who focus on highly effective heuristics for resource sequencing (although projects and project organizations are only a subset of their coverage) and Demeulemeester & Herroelen [4], who discuss both optimal and heuristic approaches. The latter also include significant coverage of stochastic issues and explicit consideration of robustness. Leus [5] and Herroelen & Leus [6, 7] provide extensive surveys of the state of the art in stochastic and robust project scheduling with resource constraints. Recent robust models that allocate slack to activities to reduce the propagation of disturbances through the system and select sequences based on their performance with such slack are given in [8, 9, 10]. Because such models are certainly confined to the “black box,” and because sophisticated models that address robustness are very recent and not comprehensive yet, so far they have not really changed PERT practice significantly. Therefore, in spite of these important advances, with today’s capabilities and with 20/20 hindsight it is permissible to say that the stochastic aspects of the PERT model-base are still weak, so conditions 1 and 2 are not fully satisfied. An especially problematic issue is that the vast majority of existing stochastic models assume that the distributions of individual project activities are somehow known and that these durations are independent of each other. The classical PERT approach solicits a triplet of estimates for each activity {min, mode, max} from process owners or experts, and then the mean (µ) and the standard deviation (σ) of each activity are calculated by µ = (min + 4·mode + max)/6 and σ = (max − min)/6. But this approach violates condition 4: Woolsey [11] provides anecdotal evidence for this claim, and argues that workplace politics lead to spurious triplet estimates (which are given grudgingly). Furthermore, Tversky and
Kahneman [12] discovered that most people, emphatically including experts, are inherently incapable of providing reliable range estimates of the type that PERT requires. Typically, they grossly underestimate variation. So, in a holistic sense, the requirement of triplet estimates violates condition 1 as well. Furthermore, the assumption that the activities are independent leads to conclusions that fly in the face of experience [13], thus violating conditions 2 and 3. A modern practical approach, CC/BM,1 satisfies conditions 3 and 4 (e.g., it uses single point estimates), and, implicitly, provides a heuristic response to the problem of dependence by specifying buffers that are calculated as an arbitrary fraction of the estimated duration (e.g., 50%). Typically, the buffers are placed as late as possible, so consecutive activities do not have buffers between them. For example, the due date is set later than the expected completion time, and the difference is the project buffer. Similarly, where activities merge, feeding buffers are inserted to control the risk that an activity with a large slack will eventually become critical. Zalmenson and Zinman [16] cite this centralized buffer management as a major weakness of CC/BM that led to failures in the Israeli hi-tech industry. Their point is that by managing the buffers centrally, management robs activity owners of their autonomy and thus loses their commitment and goodwill. Incidentally, in the very early days of PERT it had already been noted that there may be good reasons to specify centrally managed buffers (at the end) and other good reasons to specify locally managed buffers [17]. CC/BM has also been criticized on technical modeling grounds by several authors, e.g., [18, 19, 20, 21]—all of whom point out various violations of conditions 1 and 2. Yet, many practitioners believe that CC/BM is competitive, safe and sound; so it is gaining popularity. The basic premise of this paper is that the success of CC/BM—even if it is only a partial success—demonstrates that decision makers want explicit and sufficient buffering as well as simple inputs and outputs. Thus, CC/BM does have its strong points (which we should emulate) and its weaknesses (which we should avoid). To move forward, we adopt the [implicit] practical input/output
[Footnote 1] The term CC/BM stands for “Critical Chain/Buffer Management,” where “Critical Chain” is simply the critical path of a resource-constrained project [4]. A more rigorous mathematical definition was given by Wiest 40 years ago, but he preferred to rename the critical path the “critical sequence” [14]. CC/BM is often referred to as CCPM (where PM stands for project management), a term coined by Larry Leach. However, at least as introduced by Leach, CCPM includes elements of PMBOK™ and elements of TQM [15]. Therefore, it is a more comprehensive system and some of the criticism against CC/BM does not apply to CCPM.
structure of CC/BM, so our main challenge is to provide sound, tractable, and plausible economical models. This implies that inputs should include single time estimates for all activities and outputs should dictate when to start each activity (assuming the physical predecessors are complete) and what sequence to follow on constrained resources. Some cost estimates are also necessary; e.g., we may need information about holding costs if we wish to include them (but very crude data should suffice). Outputs should also provide for control, i.e., monitoring the project performance and supporting corrective actions. It is also important to mention the issue of hierarchy. Some modern approaches, e.g., “vanilla” CC/BM (but not CCPM), are often marketed as if project plans are completely analogous to shop floor planning in repetitive production—i.e., the network is assumed to provide the full detail necessary for the project execution. But this is not a viable approach for anything but the smallest projects. In fact, similarly to the case in repetitive production, there is a need for hierarchy in project planning and control [15]. Specifically, project activities often represent subprojects with their own (implicit or explicit) networks, and this can be repeated at the lower levels too. Therefore, the events depicted by crisp nodes in AoA project networks often represent relatively complex handover interfaces between two or more subprojects that require significant preparation ahead of time (e.g., staging materials and resources). This is the reason why robust planning is important: changes, especially in the short term, may be difficult and expensive [9]. In our framework we will assume that there is a current baseline sequence that should be adhered to in the short term. However, during the control phase the system may signal that the current plan is in trouble, in which case the response may include (hopefully slight) changes in this sequence as well as expediting actions. Of course it is better if changes are not immediate, to facilitate their implementation without excessive cost. Thus, the preferred short-term reaction to schedule disruptions is waiting (and time buffers help prevent propagating the problem further), but in the longer term or under special circumstances the baseline sequence should be modifiable. To explain this further, stochastic variation is part of reality, but it can be in statistical control (predictable) or not. Time buffers are really designed to sustain “in-control” variation, but “out-of-control” variation—e.g., changes in design, or an excessive disruption—is what justifies extensive re-planning. By
definition, “out-of-control” problems are unpredictable, so our aim is to plan correctly for predictable variation and have a robust system in place to respond to unpredictable variation. In the remainder of the paper we discuss key ingredients of our modified PERT framework, based on published and submitted results. The intended contribution is mainly to combine these existing results into one coherent framework, but we also introduce some new results, including (i) a predictive version of the Gantt chart, (ii) an extension of bias calculations originally derived for serial projects to general networks, (iii) new results for stochastic crashing (also known as expediting), and (iv) some dynamic control aspects. In Sections 2 and 3 we discuss stochastic scheduling and sequencing (considered as two distinct subjects). In Section 4 we describe our approach for estimating the parameters of stochastic project activities. It is based on historical data and single point estimates, and corrects for bias explicitly. Section 5 discusses control, including ongoing scheduling, remedial crashing, and remedial re-sequencing. Section 6 is the conclusion.
2. Stochastic Scheduling with Economically Balanced Buffers

In the sequencing and scheduling literature, the term “scheduling” often implies both
sequencing and scheduling, but this is not appropriate in our context. We reserve this term for setting a planned time for each activity to start and, perhaps, to finish. In a stochastic environment, schedule deviations are practically unavoidable but stable sequencing is sometimes possible. Even stable sequencing is unlikely in R&D and software projects where significant backtracking and rework typically occur [22]. GERT networks can model backtracking, but we defer this to further research. Robust scheduling implies that reasonable schedule disruptions can be absorbed by the system without the need for re-sequencing and without expensive due date violations, etc. In this section we limit ourselves to scheduling decisions, so we assume that the current baseline sequence will be adhered to regardless of stochastic schedule disruptions. We enforce the current baseline sequence by adding optional precedence relationships to the project network (one for each resource and each consecutive pair of activities sequenced on it). With the optional constraints in place, we can treat the problem as if there are no resource constraints. Scheduling is necessary in projects because, on the one hand, the use of earliest start
(ES) for each activity is liable to be very wasteful, but on the other hand, using the latest start greatly increases the project delay risk. Therefore, scheduling involves gating; i.e., restricting how early an activity may start. Without loss of generality we can assume that all activities in a project have gates, but a gate may be inactive (or inert) if it is scheduled so early that it can’t delay the activity. If an activity is ready in terms of technical precedence, an active gate may still delay it. Gates are important for two reasons: First, they create time buffers. Second, they coordinate resources that are not listed explicitly in the project network by broadcasting when these resources should be ready. Let a project have n physical activities and let Y={Yi(ti)} (i=1,2,…,n) be a random vector associated with them, where ti≥0 denotes the start time of activity i and Yi(ti)≥0 is its duration. Y is called stationary if Yi(ti)=Yi(t)=Yi for any admissible t and for any i. Practically all the previous literature assumes that Y is stationary, and so shall we. Index the start activity by 0 and the finish activity by n+1 (with Y0=Yn+1=0). The starting times, {ti}, are determined by internal precedence relationships between the elements, so, as shown by Clark [23], there is statistical dependence between them. But Clark, and the vast majority of all subsequent modelers, still assumed that the physical elements Yi and Yk are independent for any i≠k. In contrast, we explicitly allow them to be correlated to each other as long as their covariance matrix, V(Y)∈R^{n×n}, is finite. The starting times are subject to gate constraints, gi, such that ti≥gi≥0 (i=0,1,…,n,n+1). For instance, if activity 2 follows activity 1, then t2=max{g2, t1+Y1} and t1≥g1≥g0. Without loss of generality, henceforth we assume that g0=0 (i.e., the project is allowed to start immediately). Otherwise, if g0>0 we can shift the clock to the left by g0. Let Ci be the economic value of increasing gi by one time unit (e.g., postponing holding costs on an activity). Ci is restricted only to marginal costs, which can be influenced by scheduling: as a rule, overhead charges should not be included. If for some i, Ci≤0, then set gi=0 and Ci=0 (because once we maximize the profit by opening the gate immediately, the remaining marginal effect is zero). Thus Ci≥0 and gi≥0. Because Yn+1 takes zero time, tn+1≥gn+1 is the project completion time. gn+1 represents the project due date, i.e., tn+1=max{gn+1, maxi=1,…,n{ti+Yi}} ≥ max{gn+1, maxi=1,…,n{gi+Yi}}. Setting gn+1>0 implies that the project is not considered finished before gn+1 even if the physical activities are completed earlier. The project delay penalty is Cn+1,
and we assume that the penalty is proportional to the delay. Cn+1>0 may reflect a management policy or a real delay penalty in the contract. It is permissible to set Cn+1>0 even if gn+1=0, which will provide an incentive to finish the project earlier. Furthermore, gn+1 may be dictated exogenously. Other delay penalties, e.g., a step function that is associated with a true deadline, require further research, but related work has been done [24, 25]. Our objective is to minimize the total expected cost, including delay, E[Σi=1,…,n+1 Ci(tn+1−gi)] (where tn+1−gi≥0 for all i, because tn+1≥ti+Yi≥ti≥gi≥0). Figure 1, from [26], shows an AoA project network with 5 activities and 6 gates. For distinction, gates terminate at square, numbered nodes while other activities terminate at circular nodes denoted by letters or at the finish node. We use the notation gi=ES to denote early start for activity i. Any gi≤max{gk | k∈B(i)}, where B(i) is the set of all predecessors of activity i, yields gi=ES. For example, suppose that the nominal critical path is 1-3-5; then under CC/BM only gates 2, 4 and 6 would be active (so g1=g3=g5=ES; e.g., by setting g1=g3=0 and g5=g2). The reason CC/BM advocates would not use active gates for the (nominally) critical activities is to avoid any risk of wasting earliness. But, to maintain generality, we do not limit our analysis to CC/BM conventions. For example, we may need additional active gates to allow activity owners to manage their own buffer autonomously or to indicate milestones. For this reason, our approach is to let decision makers specify gates where they wish. They signal this to the system by a positive Ci. To estimate Ci it is useful to consider that once a gate opens, if any predecessor activity is not complete yet, staged resources and materials are likely to have to wait. If this waiting entails no cost whatsoever, Ci=0 is appropriate (and gi=ES will result). Otherwise, Ci models the cost of waiting per time unit. It is important to stress, however, that if there are many small-cost gates, each of which alone is negligible but such that together they are sizable, then setting Ci=0 as an approximation is highly detrimental. (It is better to give each the same low value but such that their sum will be approximately correct.) Finally, to model a CC/BM-like application, if a chain of activities is controlled by one gate, say gi (with the others set to ES), then Ci should include the total cost of all the activities ahead of the gate in the chain. Let S=Σi=1,…,n+1 Ci and let pi denote the criticality of gate i, defined as the probability
that the critical path starts at gate i [2, p. 277]. We use asterisks to denote optimality; e.g., pi* is the optimal criticality of gate i. For a stationary Y, the solution should satisfy the following necessary and sufficient optimality conditions [26]:
1. For i=1,...,n, pi* ≥ Ci/S.
2. For i=1,...,n, if pi* > Ci/S then gi* = ES.
3. pn+1* = 1 − Σi=1,…,n pi* ≤ Cn+1/S.
This optimality condition extends a generalization of the newsboy model discovered
by [27] for a problem first solved by [28, 29, 30]—all independently—where optimal feeding buffers were obtained for n items required for an assembly and the objective is to minimize total holding costs plus the delay penalty. ([24] addressed the same problem but with a step penalty function.) To illustrate, suppose a project is valued at $10,000,000 and the annual holding cost charged is 18.25%. Assume there are a hundred roughly equally important gates, and that the policy is to meet the due date with a service level of 90%. First, to compute Ci, there is a holding cost of $5,000/day shared by the 100 gates, i.e., $50 per gate. The total contribution of these gates to S is $5,000. To achieve a service level of 90%, we require Cn+1=$45,000, so S=$50,000. Balance will require criticalities of 0.1% for each gate and 90% for the due date. Depending on the network structure, it might make sense to let some gates control the other gates (as is done in CC/BM); then on average their optimal criticalities will be 1% each. If, for example, to achieve a due date service level of 90% requires a project buffer of one year, then the approximate cost of the policy is $91,250. (Calculating the exact cost of such policies is best done by simulation, comparing to the Cn+1=0 option.) The customer pays for this both as part of the price and by waiting (unless early deliveries are permitted). If, instead of the policy, there is a $7,500 delay penalty per day in the contract, then S=$12,500, the optimal due date service level is 7,500/12,500 = 60%, and the optimal criticality of an average gate should be 0.4%, but if 10 gates control the others then each of the ten should be critical with probability 4%. Under this policy the customer pays less but receives less reliable due date performance. When discussing manipulations of the due date, one technical issue requires further clarification: On the one hand, the due date is optional and can be effectively removed by setting gn+1=ES, Cn+1=0. On the other hand, if Cn+1=0, there is no “anchor”; if we postpone all
gate times by a constant, the solution remains optimal. We can stipulate, however, that the earliest physical gate should open at some predetermined non-negative time, and adjust the other gates accordingly. In contrast, if it is impossible to meet the due date (i.e., pn+1*=0) but Cn+1>0, at least one other gate’s criticality must exceed Ci/S, and that gate must be scheduled by gi=ES. Notably, the project network in Figure 1 is fundamentally more complex than any network for which previous optimal solutions or optimality conditions are known. Without the gates, it is known as the interdictive graph or the forbidden graph. Although it is theoretically possible to calculate the makespan distribution of the interdictive graph [4, pp. 540-542], this is difficult and involves an NP-hard sub-problem (to identify all the embedded interdictive graphs). Hence, we will use simulation to identify the optimal solution. In our case, however, because we are seeking a stationary point, our numerical experience suggests that the process is very effective, similarly to the case reported by [27].

[Figure 1: Modeling gates as project activities. An AoA network with START and FINISH nodes, circular nodes A and B, five physical activities, and gates g1–g6 entering square nodes numbered 1–6.]
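To make the balance conditions operational, criticalities can be estimated directly from the same simulation that evaluates the expected cost. The sketch below is a minimal illustration, not the network of Figure 1: it assumes a hypothetical three-activity network (activities 1 and 2 feed activity 3, and "gate 4" is the due date), lognormal durations, and invented cost figures. In each replication the gate at which the realized critical path starts is traced backwards from the project completion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical network (not from the paper): activities 1 and 2 run in parallel and
# both feed activity 3; "gate 4" is the project due date. Durations are lognormal
# with an assumed coefficient of variation CV; the costs C_i are invented per-day figures.
MEAN = {1: 10.0, 2: 12.0, 3: 8.0}
CV = 0.3
C = {1: 50.0, 2: 50.0, 3: 100.0, 4: 800.0}
S = sum(C.values())

def draw(mean, n):
    sigma = np.sqrt(np.log(1.0 + CV**2))
    mu = np.log(mean) - 0.5 * sigma**2          # lognormal parameterized by its mean
    return rng.lognormal(mu, sigma, n)

def simulate(g, n=100_000):
    """Estimate gate criticalities p_i and the expected cost for gate times g."""
    y1, y2, y3 = draw(MEAN[1], n), draw(MEAN[2], n), draw(MEAN[3], n)
    c1, c2 = g[1] + y1, g[2] + y2               # activities 1 and 2 start at their gates
    t3 = np.maximum(g[3], np.maximum(c1, c2))   # activity 3 waits for its gate and feeders
    c3 = t3 + y3
    finish = np.maximum(g[4], c3)               # the due date acts like a zero-length activity
    # p_i = probability that the realized critical path starts at gate i.
    due = g[4] >= c3
    gate3 = ~due & (g[3] >= np.maximum(c1, c2))
    gate1 = ~due & ~gate3 & (c1 >= c2)
    gate2 = ~due & ~gate3 & (c1 < c2)
    p = {1: gate1.mean(), 2: gate2.mean(), 3: gate3.mean(), 4: due.mean()}
    cost = sum(C[i] * (finish - g[i]) for i in C).mean()
    return p, cost

gates = {1: 0.0, 2: 0.0, 3: 14.0, 4: 28.0}      # a candidate schedule (arbitrary)
p, cost = simulate(gates)
for i in sorted(C):
    print(f"gate {i}: criticality {p[i]:.3f}, target C_i/S = {C[i] / S:.3f}")
print(f"expected cost: {cost:.1f}")
# Balance check: for gates 1-3 we want p_i = C_i/S unless g_i = ES (then p_i may exceed
# it); for the due date we want p_4 <= C_4/S. A simple search over g_3 and g_4 (holding
# g_1 = g_2 = ES) would adjust the gates until these conditions hold approximately.
```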
[Figure 2: A Predictive Gantt Chart, showing activities 1–5 with their start and finish CDFs.]

After applying the optimality condition to the network of Figure 1, a predictive Gantt chart as in Figure 2 is obtained. The rectangular boxes traced in the figure depict the plan, with activities drawn in the usual way, based on the assumption that they will start on schedule (i.e., at times gi) and take the expected duration. g0 and gn+1 appear as vertical lines in the chart. Assume that the height of each activity box is 1, so we can associate its base with 0 and its top with 1. Now, replace the left and top edges by the cumulative distribution function (CDF) of the activity start time, and the bottom and right edges by the activity completion time CDF. These CDFs are part of the output of simulating the network with the given gate times. They are drawn in solid lines (with steps depicted by perpendicular solid segments). Thus the probabilistic depiction of an activity is no longer a rectangle but the area between the start and finish CDFs. For an actively gated activity, the start CDF typically includes a step at the gate time extending to a height associated with the probability (or service level) that the activity will indeed start on schedule. With an inactive gate, the start-time CDF would coincide with the CDF of the maximum of the preceding activities’ completion times. That is, the start CDF of the next activity may coincide (at least partly) with the CDF of a preceding activity, but there may be a gap between them not only due to a gate but also when there are other potential causes of delay, and the activity starts per the maximal delay. For example, the start CDF of activity 5 is steeper than any single CDF that feeds it because it follows their maximum. As soon as an activity can start (i.e., when the latest predecessor finishes), its probabilistic presentation changes because there is perfect information about the start time. But this is not all. In general, in a stochastic environment, every time an important enough event occurs (such as completion of an activity, meeting or missing a milestone, etc.),
our estimates for the future change. So the whole predictive Gantt chart can and should be updated. To continue, the picture clarifies that the buffers provide some robustness but definitely not 100% protection. Also note that areas in this figure are associated with expected values. For example, the area of an activity (between the start and finish CDFs) must match its unconditional expected value, E(Yi). Similarly, the areas of various wedges in the figure are associated with expected delays. In this particular rendition, activity 1 is feeding two gates at two different times, which can be viewed as two “de-facto due dates”: one to feed activity 3 and the other to feed activity 4. In this case, both are after the expected completion, so there is a positive “local buffer” ahead of activity 1, but negative buffers are possible too. Since activity 3 is scheduled earlier, it determines the most desired completion time of activity 1. Merely delaying activity 3 and even 4 does not yet render activity 1 critical, however, so these gate times are not really crucial due dates comparable to the project due date. At least one of the complete following paths (3 and 5 or 4, respectively) must actually violate the due date for activity 1 to be critical. So treating the start time of the following activities as an activity due date may be convenient, especially when there is a positive buffer involved, but it does not constitute a crisp indication of criticality. Meeting the next gate time guarantees that the activity will not be critical, but missing it may still be corrected downstream. Positive buffers can serve to give the people in charge of a feeding activity a balanced level of autonomy. We can use an artificial Ci to yield such a positive buffer, and in such a case Ci is a shadow price indicating how much this costs per buffer time unit. Finally, it is possible to add criticality information to the figure. This can help during the control phase. Gating is a form of continuous crashing, which establishes a connection between the two subjects. Wollmer [31] presented a basic model for continuous crashing of stochastic activities and showed that it yields a convex objective function. He also made the highly simplifying assumption that continuous crashing only shifts the distribution of an activity without changing it otherwise. It can be shown, however, that when distributions are subject to change during crashing, e.g., if they retain their coefficient of variation, the effect on the optimal solution can be sizable (because the variance is crashed too). Nonetheless, under this assumption the optimal crash set (defined as the collection of all crashing activities selected)
can be found similarly to gate scheduling. This approach is based on comparing the criticality of an activity to the cost of crashing it relative to a revised definition of S that reflects the minimal cut of the project network where arcs are weighted by their crashing cost. Gating is a special case, because the set of all gates is a cut set. In this paper, however, we assume that crashing opportunities are discrete, i.e., we can change the distribution of an activity to another distribution that is (or at least seems) more attractive, but at a cost.
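Returning to the predictive Gantt chart described above, the same simulation output can drive it directly: for each activity one collects the simulated start and finish times and plots their empirical CDFs in place of the edges of the rectangle. A minimal sketch with invented numbers for a single gated activity:

```python
import numpy as np

def empirical_cdf(samples):
    """Sorted values and cumulative probabilities for a step plot of the empirical CDF."""
    x = np.sort(samples)
    return x, np.arange(1, len(x) + 1) / len(x)

rng = np.random.default_rng(1)
gate = 14.0                                            # gate (planned start) time, invented
feed = 10.0 + rng.lognormal(np.log(4.0), 0.4, 50_000)  # latest predecessor finish time
start = np.maximum(gate, feed)                         # the gate may or may not be binding
finish = start + rng.lognormal(np.log(8.0), 0.3, 50_000)

xs, Fs = empirical_cdf(start)      # replaces the left and top edges of the activity box
xf, Ff = empirical_cdf(finish)     # replaces the bottom and right edges

# The step of the start CDF at the gate time is the service level of starting on schedule:
print("P(start on schedule):", np.mean(start == gate))
# The area between the two CDFs equals the expected duration E(Y_i):
print("E(finish) - E(start) =", finish.mean() - start.mean())
```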
3. Sequence Pseudo-Deterministically, Schedule Stochastically

Sequencing decisions are necessary whenever we have limited renewable resources.
Technological precedence constraints, however, do not involve sequencing decisions, so they are not covered by the term “sequencing.” Instead, they are treated as constraints within the sequencing model. In our formal presentation we assume that sequencing involves allocating activities to individual resources. For example, if we need three workers we must specify them by name (e.g., Tom, Dick, and Harry).2 Given a sequence, we add the necessary optional constraints and we can find by simulation not only the optimal gates given the current planned sequence but also the expected total cost. Therefore, we can utilize standard heuristics to try to improve the sequence, judged by the results of such simulations. Similarly, we can judge the effects of crashing options (see Section 5). Our main subject in this section is the selection of a baseline sequence that we can then improve heuristically, both initially and during the control phase. Perhaps the earliest paper that combines sequencing and nearly-optimal scheduling for a weighted early-tardy objective function and stochastic activities is [25], where a pair-switching heuristic is used to obtain a baseline sequence of multiple inter-related projects on a single resource. The “projects” are flights that go through a hub, and they are inter-related because they feed each other with passengers. The single resource is the airport capacity.
[Footnote 2] By this we lose some flexibility during planning, but the model would be much more complex otherwise [32]. For instance, Figure 2 in [7] depicts a particular sequence for 10 interchangeable resource units. The following optional precedence constraints enforce that sequence: 6 follows 7; 2 follows 6 & 8; 3 follows 4 or 5 (either of which releases enough resources for 3 to start). More precisely, 3 has an inclusive-or (IOR) entrance [4, p. 80]. Y is gate-independent if gate times have no influence on Y. In PERT networks, a stationary Y implies a gate-independent Y. It can be shown that our optimality condition holds for any gate-independent network, including some GERT networks. But the GERT network we obtain here, because it has an IOR entrance, is not covered: the gate times of 4 and 5 influence the probability that 3 will follow 4 (or 5), so Y is gate-dependent. A heuristic remedy is to use the optimality condition as if it holds.
Here, the analogue of our gates is the departure schedule of an aircraft: for hub arrivals, the “gates” are at respective origins, and for departures they are at the hub itself. The airport capacity forces flights to be scheduled in blocks (with a minimal separation gap between consecutive flights). Pair-switching applies within such blocks, although the composition of the blocks is also subject to change during the process (and later, during execution). Numerical experience shows that even instances coordinating as many as 49 aircraft can be solved quickly and effectively. Elmaghraby et al. [33] consider optimal sequencing and scheduling of n stochastic activities that require the same resource, with an early/tardy expected cost function. They allow idling between activities (i.e., gates are specified) and solve for the best schedule (with the best sequence) by dynamic programming. However, generalizing such an exact approach to any PERT network is not a trivial task [34]. Even the minimization of the expected makespan of a two-machine flow shop is already prohibitive [35]. Therefore, for the purpose of creating a viable and practicable framework, we must use heuristics, especially for sequencing. A basic heuristic is to represent stochastic activities by deterministic surrogates—specifically, here we will use the expected duration for this purpose. We refer to this as the pseudo-deterministic approach. Of course, the computational complexity of solving pseudo-deterministic sequencing models to optimality is overwhelming [3, 4]. Nonetheless, one feasible approach is to use an exact branch and bound approach to solve the pseudo-deterministic problem, but not to insist on closing the gap between the bounds completely. Hence, exact methods are still valuable. Indeed, we propose starting with a good pseudo-deterministic sequence and then buffering it optimally or nearly-optimally (by simulation). Practically, this approach is feasible and it has the additional advantage of utilizing the vast investment in deterministic scheduling that our profession has made over the last few decades. We now present evidence suggesting that it is also a robust approach. Consider the pseudo-deterministic makespan (project duration): its variance is zero and, by Jensen’s inequality, it is a lower bound on the true expected makespan. So stochastic variation expands the mean makespan beyond the pseudo-deterministic value and increases the variance. In an intuitive sense one can say that if a schedule has low mean expansion relative to the minimal possible pseudo-deterministic makespan and low variance then it is robust. (A sequence with minimal mean and minimal
variance is stochastically dominant in the convex sense, which implies optimality for our own objective function.) So we study pseudo-deterministic sequences in terms of their mean makespan expansion and variance. Consider the two-machine flow shop again. It is the simplest model with multiple potential critical paths and (for three or more jobs) embedded interdictive graphs; i.e., it involves the main complexities of project networks. For this model, the pseudo-deterministic solution (which is obtained by Johnson’s rule) is not optimal stochastically: a non-identical sequence—Talwar’s rule—has been proved to be stochastically dominant for exponential distributions [36]. Nonetheless, [37] show that the pseudo-deterministic solution is asymptotically optimal (when the number of jobs grows) and tends to have low variance. They also demonstrated cases where a different rule—based on achieving local optima by pair-switching—may or may not reduce the mean but increases the variance quite consistently. (With exponential distributions, it yields the same sequence as Talwar’s rule, however, so it must be superior sometimes.) [38] provide further evidence that the approach is economically valid. They studied a job shop case and a ten-machine twenty-job flow shop case, and found that if optimal buffers were included then tight pseudo-deterministic schedules consistently outperformed loose schedules. That is, in their experiments, tight schedules yielded shorter overall expected makespan and less variance. It should be noted, however, that there are no gates involved in these models (i.e., the machine time value dominates so jobs are scheduled ES). Inserting gates yields makespans that are stochastically larger, but variance around the expanded mean tends to be reduced. In an attempt to emulate these results for this paper (and add a bias case that we discuss later), a similar result was obtained, as summarized in Table 1. Two experiments were run on a five-machine eleven-job flow shop with average jobs of 40 units each. The tightest possible sequence in such a case has a pseudo-deterministic makespan of 600 (when all jobs are identical), but another schedule with less tightly packed unequal jobs was designed to take 700 while holding the total workload constant. By taking a convex combination of the two extremes we can obtain any pseudo-deterministic makespan between 600 and 700. The simulation compares the mean and the standard deviation of such convex combinations. In the first experiment (identified by “σ=8”), normal activity times were used with σ=8 everywhere. In this experiment, unlike the original one reported by [38],
the tightest possible schedule is not the strict minimum, although the difference is practically negligible. But the variance is strictly increasing with looseness, as in the original experiment. In fact, by observation it seems that any schedule with a nominal (pseudo-deterministic) makespan between 600 and 620 yields practically equivalent results, and then both the mean and variance become monotone increasing. The other experiment, identified by “bias”, will be explained further in the next section.

Table 1: Simulation results for tight and loose schedules

Nominal   µ (σ=8)   µ (bias)   σ (σ=8)   σ (bias)
600       659.32    648.51     23.55     93.05
610       658.49    647.58     23.58     91.80
620       658.98    649.58     23.81     93.36
630       661.40    653.70     24.32     94.07
640       665.40    659.60     25.01     95.01
650       670.72    666.70     25.69     96.11
660       677.07    674.58     26.28     97.27
670       684.21    683.01     26.80     98.49
680       691.91    691.84     27.25     99.76
690       700.05    700.96     27.63     101.06
700       708.53    710.28     27.96     102.38
Figure 3 depicts the experiments. Each experiment includes one line that starts at 0 near the nominal value 610 and that depicts the difference of the expected makespan from the absolute possible minimum (∆µ_). Both are quite flat between 600 and 610 or 620 and then they become monotone increasing. There is one line for the standard deviation of each experiment (σ_), and these behave similarly (although one of them is monotone increasing from the start). Finally, the figure also depicts the mean expansion over the nominal makespan and shows that it is higher for tight schedules and approaches zero for very loose ones (exp_). On the one hand, if we use such a tight schedule we must allow for the high expected expansion, but on the other hand, once we do this, the results tend to be good both in terms of combined mean and of variance. Although further research is necessary in this area—considering that the cases studied are quite basic and not extensive—our general approach is supported. But the fact that the optimum can be slightly removed from the tightest pseudo-deterministic schedule suggests that the last iota of tightness is not guaranteed to be beneficial. This indicates that strong pseudo-deterministic heuristics are very useful but going for the strictly optimal pseudo-deterministic solution is of questionable value. But it is usually the last iota of improvement
that requires the highest computational effort!
[Figure 3: Comparing tight and loose flow shops under stochastic conditions. x-axis: pseudo-deterministic (nominal) makespan, 600–700; y-axis: expansion over minimum and standard deviation; series: ∆µ_bias, ∆µ_σ=8, σ_bias, σ_σ=8, exp_bias, exp_σ=8.]
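The structure of this experiment can be reproduced in outline with a short simulation. The sketch below is illustrative only: the exact "tight" and "loose" job mixes behind Table 1 are not specified here, so it uses an identical-jobs instance and an invented unbalanced variant with the same total workload, and it splits the per-operation variance between independent noise and a common lognormal bias factor roughly as described in Section 4.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, REPS = 5, 11, 10_000        # machines, jobs, replications

def makespan(p):
    """Permutation flow shop makespan; p[j, m] = time of job j on machine m."""
    c = np.zeros((N, M))
    for j in range(N):
        for m in range(M):
            c[j, m] = max(c[j - 1, m] if j else 0.0, c[j, m - 1] if m else 0.0) + p[j, m]
    return c[-1, -1]

def experiment(base, sigma, sigma_b=0.0):
    """Mean and std of the stochastic makespan around a nominal job-time matrix."""
    spans = np.empty(REPS)
    for r in range(REPS):
        bias = rng.lognormal(-0.5 * sigma_b**2, sigma_b) if sigma_b > 0 else 1.0
        p = np.clip(bias * base + rng.normal(0.0, sigma, base.shape), 0.1, None)
        spans[r] = makespan(p)
    return spans.mean(), spans.std()

tight = np.full((N, M), 40.0)                 # identical jobs: nominal makespan 600
loose = np.full((N, M), 40.0)                 # an invented, less tightly packed variant
loose[0, :], loose[-1, :] = 60.0, 20.0        # same total workload, larger nominal makespan

for name, inst in (("tight", tight), ("loose", loose)):
    for label, kw in (("sigma=8", dict(sigma=8.0)),
                      ("bias", dict(sigma=5.6, sigma_b=0.14))):
        mu, sd = experiment(inst, **kw)
        print(f"{name:5s} {label:8s} mean {mu:7.2f}  std {sd:6.2f}")
```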
4. Transforming Single Point Estimates to Stochastic Distributions

To determine optimal project buffers (for time and, similarly, for cost), we need to know
the distributions of the project duration and of the activity costs. The cost of the project is the sum of the costs of the activities, and the duration is bounded from below by the durations of the activities on the nominal critical path. Our results here generalize a special case where there is just one chain of activities, i.e., a serial project [39]. In serial projects, the project duration is the sum of the activity durations. The same applies to costs in general, at least approximately. But for our purpose here, time is the crucial issue. From its inception, PERT was based on the assumption that activities are statistically independent, so chains with several activities are distributed approximately normally. But the independence assumption also leads to project buffers that become relatively negligible for large projects. This is a highly counterintuitive result, which may explain why CC/BM rejects it. Leach lists several reasons why such a relatively decreasing buffer is not likely to be sufficient [13]. He also cites practical experience showing significant support for buffers
that are expressed as sizable fractions of the expected makespan. Here we discuss another potential cause for the same practical observation. When activity durations are estimated, they are subject to estimation bias due to optimism, pessimism, error, etc. This bias must be treated as random, however: it is not known in advance for any project. We can and should correct for the average bias—a point first made by [1] in 1959—but we must also take its variability into account. This variability causes statistical dependence. Weather conditions, the loss of talented employees, and economic conditions also impact more than one activity, and thus cause statistical dependence. In terms of estimation, any positive dependence would likely be strongly confounded with bias, however, so we do not attempt to distinguish them. Let µi and σi be the mean and standard deviation of Yi. In the serial case, the expected project duration is µ=Σµi. Let X={Xi} be a vector of estimates of {Yi}, where we treat Xi as a random variable. Due to condition 4, we assume the elements of X (but not Y) are independent. Let the nominal estimate be ei=E(Xi), and let V(Xi) denote the variance. Bias is modeled by the introduction of an additional independent random variable, B, which multiplies X to obtain the true activity times. We use β and Vb=σb² to denote the mean and variance of B. This implies Yi=b⋅Xi (where b is the realization of B that applies to this project). Because B and X are independent, µi=β⋅ei. But the multiplication by the same realization, b, introduces [positive] dependence between the elements of Y. Specifically, σi² = β²·V(Xi) + V(Xi)·Vb + Vb·ei², and COV(Yi,Yk) = Vb·ei·ek for all i≠k. We can separate σi² into two parts, β²·V(Xi) + V(Xi)·Vb and Vb·ei². The former equals E(B²)·V(Xi) and the latter is a special case of Vb·ei·ek. This leads to σ² = E(B²)·Σ∀i V(Xi) + (σb·Σ∀i ei)². The element (σb·Σ∀i ei)² imposes a lower bound of σb·Σ∀i ei on σ. Therefore, for any positive k, a buffer of kσ is bounded from below by k·σb·Σ∀i ei, i.e., a fraction of the estimated chain duration. Similarly, the mean bias correction is also by a fraction of the estimated chain duration, namely β−1. To estimate the parameters, we limit the information elicited from process owners to single point activity estimates, ei, and estimate all the other necessary parameters from historical data. This requires a two-stage econometric model. We treat activity estimates as explanatory variables and realizations as dependent variables. Assume we have J>1 (not necessarily serial) projects in the history, and we amend our former notation by the introduction of a double index,
ij where i=1,2,...,nj and j=1,2,...,J. Here i denotes a specific activity and j a specific project with nj activities. Optionally, we may elect to treat some subprojects as projects in their own right for this purpose. For project j our data consists of nj pairs (yij,eij), where yij is the realization and eij the original activity estimate. An estimator of the mean bias of project j is given by
β̂j = (Σi=1,…,nj yij) / (Σi=1,…,nj eij),   ∀j,

leading to

β̂ = (Σj=1,…,J nj β̂j) / (Σj=1,…,J nj)   and   σ̂b² = (Σj=1,…,J nj (β̂j − β̂)²) / (Σj=1,…,J nj − J).
This completes the first stage. For the second stage we use the model V(Xij) = α1·eij + α2·eij². Let

ρij = yij/βj − eij   and   ρ̂ij = yij/β̂j − eij,

where E(ρij)=0, so E(ρij²)=V(Xij). By regressing the estimates ρ̂ij² on eij and eij² we obtain estimates for α1 and α2, which completes our task. In the long term, all these estimates should be updated by exponential smoothing whenever a new project is completed. To avoid harmful feedback fluctuations, however, the use of moving averages (e.g., considering only the last 20 projects) is strongly contraindicated [40, pp. 88-92]. Thus we showed that dependencies that may arise from common estimation errors lead to a project makespan standard deviation that is bounded from below by a fraction of the estimated duration. In addition, correcting for the average bias is typically by another fraction of the same estimated duration. We still need to extend the results to general project networks. But although the lower bound had been developed for a serial project, it applies to any specific chain of activities that we may consider, including the “nominal” critical path: the dependence identified by [23] can only increase the expected value of the true critical path beyond the expected value of the nominal critical path, so such a lower bound remains in full force. To include consistent bias in the simulation of a general project network, we generate for each run one bias realization (or several bias elements for fairly independent subprojects) plus n independent Xi values and then multiply them to obtain our simulated realizations of the correlated Yi elements. The optimal criticality
of each gate in the network can be determined as discussed in Section 2, because we did not assume that project activities are independent. The flow shop experiment with the index “bias” (reported in Table 1 and Figure 3) has been simulated this way. B was modeled by a lognormal random variable with σb=14%, which, according to data reported in [13], is quite low. V(Xi) was set to 5.6², so for a single activity the standard deviation is approximately 8 (as in the σ=8 case), with half the variance due to bias and half due to V(Xi). While the standard deviation of the simulated makespan in this experiment is almost 4 times higher, it is not much higher than 14% of the (expanded) mean! So, except through the mean expansion, the complex structure of the flow shop seems to have little effect. The bound explains most of the makespan variance, almost exactly as it would in the serial case!
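A sketch of the two-stage estimation follows, run on synthetic "historical" data; the data-generating values BETA_TRUE and SIGMA_B_TRUE and the history itself are invented stand-ins for real project records.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic history: J past projects, each with estimates e_ij and realizations y_ij.
J, BETA_TRUE, SIGMA_B_TRUE = 30, 1.3, 0.15
history = []
for j in range(J):
    n_j = int(rng.integers(8, 25))
    e = rng.uniform(5.0, 60.0, n_j)                      # single point estimates
    b_j = BETA_TRUE * rng.lognormal(-0.5 * SIGMA_B_TRUE**2, SIGMA_B_TRUE)
    x = e + rng.normal(0.0, np.sqrt(2.0 * e))            # V(X) grows with e (assumed)
    history.append((e, np.maximum(0.1, b_j * x)))        # realizations y_ij = b_j * x_ij

# Stage 1: per-project bias ratios, then pooled mean bias and bias variance.
n = np.array([len(e) for e, _ in history], dtype=float)
beta_j = np.array([y.sum() / e.sum() for e, y in history])
beta_hat = (n * beta_j).sum() / n.sum()
sigma_b2_hat = (n * (beta_j - beta_hat) ** 2).sum() / (n.sum() - J)

# Stage 2: regress squared de-biased residuals on e and e^2 to fit V(X) = a1*e + a2*e^2.
e_all = np.concatenate([e for e, _ in history])
rho = np.concatenate([y / bj - e for (e, y), bj in zip(history, beta_j)])
a1, a2 = np.linalg.lstsq(np.column_stack([e_all, e_all**2]), rho**2, rcond=None)[0]

print(f"beta_hat = {beta_hat:.3f}, sigma_b_hat = {np.sqrt(sigma_b2_hat):.3f}")
print(f"V(X) ~= {a1:.2f}*e + {a2:.4f}*e^2")
# A new single point estimate e_i is then turned into a distribution for simulation:
# Y_i = B * X_i with E(B) = beta_hat, Var(B) = sigma_b2_hat, Var(X_i) = a1*e_i + a2*e_i^2.
```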
5. Closing the Feedback Loop: The Control Phase

Our framework involves collecting single point estimates of project activities and
translating them to distributions by history-based relationships. For convenience, in the example we reported, and in practice too, it may make sense to assume a very simple distribution for both of these elements, e.g., normal for the base estimate Xi and lognormal for the bias. Our regression analysis, however, reflects the variance and bias of an “average” project in the past. This is adequate for initial planning, but for control we should utilize data about the actual performance of the activities of this project: specifically, if we record when activities receive the go-ahead (all predecessors finished, including the gate time) and when they complete, the difference is yi, the Yi realization. This information should be used for updating our estimate of b for this project, which we express as a conditional distribution. Specifically, we should be forming a progressively more precise estimate of the bias, i.e., a tighter distribution. In simulations for the remaining part of the project (which is the only part of the project that we can control), this distribution replaces the distribution of B that was used for the initial plan. Of course, even when this distribution becomes very tight, the variance of the yet unperformed elements, V(Xi), still models remaining uncertainty. Technically, to utilize information fully, we propose to use a Bayesian conditional distribution with the distribution of B as the prior. Suppose that at stage k we have feedback information about k≥1 completed activities (indexed from 1 to k for convenience). Let Y^k=Σi=1,…,k Yi (with realization y^k), e^k=Σi=1,…,k ei, X^k=Σi=1,…,k Xi, (σx^k)²=Σi=1,…,k V(Xi), and, finally, let b be the realization of B (for which we seek the conditional distribution). For random variables in capitals, the lower case represents realizations; e.g., y^k is the actual time consumed by the k activities. Recall that we assume the elements of X are independent and normal, so X^k is also normal with mean e^k and standard deviation σx^k. Even if the elements Xi are not normally distributed, for a large k and no dominating activities the normality assumption still applies approximately by the central limit theorem. By definition, y^k = x^k·b, which establishes a tight connection between the two realizations x^k and b. We can find the CDF of b|y^k by integrating the product of the density functions of X^k and of B along the hyperbola X^k·B = y^k for B≤z and dividing by the integral along the same hyperbola for any B. That is, if fb(·) and fx(·) are the respective density functions (where x represents x^k),
Pr{b ≤ z} = ∫[x=y^k/z to ∞] fx(x)·fb(y^k/x)·√(1 + (y^k/x²)²) dx  ÷  ∫[x=0 to ∞] fx(x)·fb(y^k/x)·√(1 + (y^k/x²)²) dx,

where the generic density functions can be replaced by the normal and the lognormal, as discussed. (We omit technical details with respect to the singularity at x=0.)
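A numerical sketch of this update follows, taking the normal/lognormal choice just mentioned; the feedback figures (e^k, σx^k, the prior parameters, and y^k) are invented, and the posterior CDF is evaluated by straightforward numerical integration that avoids the singularity at x = 0.

```python
import numpy as np
from scipy import integrate, stats

# Invented feedback after k completed activities:
e_k, sigma_xk = 120.0, 15.0        # mean and standard deviation of X^k (normal)
beta, sigma_b = 1.2, 0.2           # prior lognormal bias B with mean beta
y_k = 160.0                        # observed total duration of the k activities

f_x = stats.norm(loc=e_k, scale=sigma_xk).pdf
f_b = stats.lognorm(s=sigma_b, scale=beta * np.exp(-0.5 * sigma_b**2)).pdf  # E(B) = beta

def integrand(x):
    # Product of the two densities along the hyperbola X*B = y^k, arc-length weighted.
    return f_x(x) * f_b(y_k / x) * np.sqrt(1.0 + (y_k / x**2) ** 2)

lo, hi = 1e-6, e_k + 8.0 * sigma_xk             # f_x is negligible outside this range
denom, _ = integrate.quad(integrand, lo, hi, limit=200)

def posterior_cdf(z):
    """Pr{b <= z | y^k}: B <= z corresponds to x >= y^k / z on the hyperbola."""
    num, _ = integrate.quad(integrand, max(y_k / z, lo), hi, limit=200)
    return num / denom

for z in (1.1, 1.2, 1.3, 1.4, 1.5):
    print(f"P(b <= {z:.1f} | y^k) = {posterior_cdf(z):.3f}")
# During control, this progressively tighter conditional distribution of b replaces the
# prior distribution of B when the remaining part of the project is re-simulated.
```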
Because scheduled gates may entail preparation that is not depicted by the project network, our flexibility to change plans during the execution (control) phase without cost is limited. Where we still have flexibility, our approach is to continue adjusting the system—to achieve projected economic balance into the future. Considering that some activities will be past (e.g., a gate of an ongoing activity) or too late to reschedule, their costs Ci stop contributing to S (because their gates represent sunk costs). An important exception is Cn+1, which remains part of S until the project actually completes (because our decisions continue to make a difference in terms of the expected delay penalty). In general, due to conditions 3 and 4, we do not require re-sequencing after each and every event that causes the predictive Gantt chart to change. The usual response should be limited to rescheduling of future gates where no investment has been made yet. But there may be a need to do more, and a good DSS must (i) provide a clear signal to that effect, (ii) facilitate the determination of an approximately correct response interactively, and (iii) provide a check on the new plan and update the plan once the proposed change is accepted by the user.
All these require further research, but we present one feasible approach. There are two potential signals of trouble. First, it may become impossible to maintain balance because at least one gate that was supposed to have slack is suddenly too critical (even if scheduled ES). If so, the due-date performance of the project may be at more risk than anticipated. Here, the role of the DSS is to highlight the excessive risk and identify activities that present opportunities for remedial action. The activity that the gate in question controls may be one of them, but the best expediting may be applied further downstream. The approximate response is pursued interactively. The user generates and tests possible expediting options on candidate activities. To do that, we can reschedule completely to compare the result or use an approximation first. Using a pseudo-deterministic measure, an approximate measure of the expected benefit of crashing an activity is given by the product of its pre-crash criticality and its size. When this is multiplied by (1−pn+1)Cn+1, the current benefit of reducing the project expected duration by a unit, we obtain a crude measure of the benefit the expediting brings. This can be compared to the cost, and the option that has the highest net benefit, if any, is adopted. Once an option is selected we must re-schedule and verify that the option is really good by simply comparing the results of the new balanced schedule (with the costs of expediting included) with the one obtained without re-planning. The second type of signal is when an activity is delayed beyond its gate so that the project due date performance is at excessive risk (again). If other activities are available for processing on the resources that are idle due to the delay, it may be beneficial to dispatch another activity ahead of its turn. Here the role of the DSS is to alert the user to the problem and to the opportunity. In this case it is necessary to obtain an estimate of how long the delay is likely to continue (an estimate that we treat like any other estimate—including bias correction and variance allocation). The approximate testing of options and the final check proceed from here on in the same manner as for the first type of signal, except that in this case the pseudo-deterministic benefit of dispatch is based on comparing the new pseudo-deterministic schedule with the old. Also, we must take into account the cost of repeating the setup for the rescheduled activity, if any. Note that this approach avoids the pitfall of delaying the project that dispatching sometimes causes [4, p. 561]. Yet it retains the advantages of dispatching over rigid adherence to the baseline schedule (see also [7]).
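The crude expediting test can be written out directly. In the sketch below every number is invented, "size" is read as the expected time saved by the crash (one plausible reading), and the criticalities would come from the current simulation.

```python
def crash_net_benefit(criticality, time_saved, crash_cost, p_due, c_due):
    """Pseudo-deterministic screening of a crashing option: criticality * time_saved
    approximates the expected makespan reduction, (1 - p_due) * c_due is the current
    marginal value of finishing one unit earlier, and the crash cost is subtracted."""
    return criticality * time_saved * (1.0 - p_due) * c_due - crash_cost

# Hypothetical candidates: (name, pre-crash criticality, days saved, crash cost).
candidates = [("activity 7", 0.45, 3.0, 900.0),
              ("activity 12", 0.20, 5.0, 400.0)]
p_due, c_due = 0.55, 1000.0        # current due-date criticality and delay penalty per day

for name, p_i, saved, cost in candidates:
    print(f"{name}: net benefit {crash_net_benefit(p_i, saved, cost, p_due, c_due):+.0f}")
# Only an option with a positive net benefit is adopted, and even then only after
# re-balancing the schedule with the crashing cost included and confirming that the
# total expected cost indeed decreases.
```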
Our control method is quite different from the buffer monitoring of CC/BM. First, it is not buffer consumption that really matters but rather the projected effect on the objective function. We measure this directly. Next, the action limits CC/BM recommends (dividing the buffer into three bands) are not relevant to statistical control [40]. In our framework, although we do not formally identify “in-control” and “out-of-control” situations, we have two types of response: rescheduling—which suits predictable deviations—and re-planning (including re-sequencing and expediting)—which suits unpredictable shocks. Finally, for control purposes, CC/BM only considers consuming more time than expected, while better than expected performance is ignored. (To be clear, the earliness itself is utilized.) It is tempting to say that if things go better than expected we should leave well enough alone, but there may be situations in which some gates have to be advanced due to earliness in previous activities. In conclusion, our approach is based on sound stochastic analysis, utilizes information fully (good or bad), and, by the balance principle, assures economic responses to developments. Finally, it is important to discuss focusing and hierarchy. Under PERT, the focus is on the critical path, and the same focus is the key to CC/BM (although during the control phase CC/BM focuses by buffer penetration regardless of whether it is on the critical path). But under stochastic economic balance the whole concept of “critical path” (or any other appellation thereof, e.g., “critical sequence”) becomes much less relevant. Indeed, the concept is rooted in deterministic thinking! Instead, at any stage of the project execution, the activities we are engaged in constitute a cut set for the project network, and they all have varying degrees of criticality (such that the sum of criticalities in the cut set is unity) [2, p. 277]. Focusing should utilize the criticality information, and because there may be many activities in a cut set, it is necessary to focus on the most critical activities or the ones most out of balance, by the Pareto principle. But if top management focuses on some activities, the others must be delegated. Arguably, the connection between the delegated activities and the rest of the project should then be buffered so that they can be managed with an adequate degree of autonomy [41, 42]. Our model does not dictate such “autonomy-buffers,” but it enables their inclusion in a balanced way. Furthermore, since delegated activities rarely possess high criticality, they are likely to be adequately buffered automatically. Regardless,
the predictive Gantt chart provides useful information to all stakeholders by showing the status of the project, the criticality of each activity, and how it interacts with other activities. Finally, the most critical activities may be scheduled in the future, and an important central management focus is managing future activities. For example, as mentioned above, it may be necessary to decide whether to expedite some activity on the current (or forthcoming) cut set, or wait for a later stage and expedite then. Doing this well is still an art at this stage, and requires further research. But a DSS such as we propose can support such decisions.
6. Conclusion

Two relatively unknown key ideas are at the core of the proposed framework. One is
the pursuit of stochastic economic balance [26]. Assuming linear holding and penalty costs to be minimized in expectation and stationary activity distributions, economic balance entails a deceptively simple optimality condition for feeding buffers: the criticality of each input should be proportional to its earliness cost. Some adaptation is required when the system is constrained, but it is not difficult to implement by simulation. For simulation we need activity distributions, and this speaks to the second key idea: estimate activity durations as simply as possible by single point evaluations and then use historical data on bias and variance to generate plausible activity distributions. The result is a set of activity distributions that are correlated to each other (because of the common bias effect) [39]. Stochastic economic balance is a balance of buffers. It applies equally to the project buffer and to other buffers in the plan. In the case of the project buffer, the “activity” concerned is the project due date. The project due date is “critical” when the project completes on or before schedule, while if the project is late there must be another critical sequence of activities, which is associated with the feeding buffer where it starts. This result, although it was presented for gating decisions—which determine buffers—was also shown to be relevant to sequencing and crashing decisions. Because the gate selections (or the crashing decisions) are independent of each other, the result holds even if there are statistical dependencies between project activities, as we indeed anticipate. Notably, this approach invalidates or at least diminishes the focus on the critical path, which has been the centerpiece of project scheduling theory and practice. Instead, focusing requires use of the
These core ideas are useful for planning and for re-planning during the execution stage, i.e., for control. In general, our approach to control is based on the thought that we can continue to strive for economic balance with respect to the decisions that are still flexible, using updated information obtained by ongoing analysis of project performance. This information is fed into a simulation model that is used to find the economic balance. When the updated information shows that the current plan cannot be balanced appropriately, or when opportunities to gain by re-sequencing emerge, we consider re-planning.
The enhancements we propose for the PERT framework are extensive, and they open many research questions. Perhaps the most important challenge is to include GERT networks. This is necessary because R&D projects, software development projects, and the like typically include feedback and backtracking [22]. One difficulty is that IOR entrances may introduce dependence between Y and the gate times (see Footnote 2). Nonetheless, if the chance nodes are independent of the gate times, then the optimality condition applies in GERT networks too, and even when IOR entrances are included our balance result may still provide a useful heuristic. Another issue in GERT is that reasonable project due dates must include a buffer against cycling time in the network, yet when we balance the other gates against the due date the procedure does not yet “know” the final network structure.
The study of bias is another rich research direction, and it involves several sub-questions. Is it useful to generate separate bias elements for each subproject? Alternatively, is it useful to express bias as a product of several bias causes, such that each activity is subject to a different subset of causes? What is the effect of Parkinson’s Law on bias correction, and what is the effect of bias correction on Parkinson’s Law? Of course, many more research directions and questions can be raised, so any attempt to provide a complete list would be futile and counterproductive.
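The bias-related questions above presuppose a generative reading of the common-bias idea: multiply each single-point estimate by a bias factor shared by all activities of the project (calibrated from historical data) and by activity-specific noise. The lognormal form and every parameter value in the sketch below are assumptions made for illustration, not estimates taken from [39].

# Illustrative generator of correlated activity durations from single-point estimates.
# The shared bias factor induces positive correlation across the project's activities.
import random

POINT_ESTIMATE = {"design": 20.0, "build": 35.0, "test": 15.0}   # assumed values

# Hypothetical historical calibration: estimates tend to run somewhat low on average,
# with project-to-project variation in the bias plus activity-level noise.
BIAS_MU, BIAS_SIGMA = 0.14, 0.20     # lognormal parameters of the shared bias factor
NOISE_SIGMA = 0.25                   # activity-specific relative spread

def sample_project_durations():
    """One joint realization of all activity durations for one simulated project."""
    shared_bias = random.lognormvariate(BIAS_MU, BIAS_SIGMA)     # common to the whole project
    return {
        act: est * shared_bias * random.lognormvariate(0.0, NOISE_SIGMA)
        for act, est in POINT_ESTIMATE.items()
    }

if __name__ == "__main__":
    # Realizations like these would feed the simulation that balances the buffers.
    for _ in range(3):
        print({a: round(d, 1) for a, d in sample_project_durations().items()})

Separate bias elements per subproject, or a product of several bias causes, would correspond to giving each activity its own subset of shared factors in such a model.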
Acknowledgements: I am indebted to Salah E. Elmaghraby for generous help with respect to a related unpublished paper. Tiru Arthanari provided useful comments on several aspects of this work over an extended period. Last but not least, Larry Leach—although he does not agree with many of my assertions—nonetheless helped me improve the paper considerably. Any remaining errors are strictly my own responsibility.
References

1. Malcolm DG, Roseboom JH, Clark CE, Fazar W. Application of a technique for research and development program evaluation. Operations Research 1959; 7(5):646-669.
2. Elmaghraby SE. Activity Networks: Project Planning and Control by Network Models. Wiley, 1977.
3. Morton TE, Pentico DW. Heuristic Scheduling Systems with Applications to Production Systems and Project Management. Wiley, 1993.
4. Demeulemeester E, Herroelen W. Project Scheduling: A Research Handbook. Kluwer Academic Publishers, 2002.
5. Leus R. The generation of stable project plans: Complexity and exact algorithms. PhD Thesis, Department of Applied Economics, Katholieke Universiteit Leuven, 2003.
6. Herroelen W, Leus R. Project scheduling under uncertainty – Survey and research potentials. European Journal of Operational Research 2003; to appear (available online).
7. Herroelen W, Leus R. Robust and reactive project scheduling: A review and classification of procedures. International Journal of Production Research 2004; 42(8):1599-1620.
8. Tavares LV, Ferreira JAA, Coelho JS. On the optimal management of project risk. European Journal of Operational Research 1998; 107:451-469.
9. Herroelen W, Leus R. The construction of stable project baseline schedules. European Journal of Operational Research 2004; 156(3):550-565.
10. Leus R, Herroelen W. Stability and resource allocation in project planning. IIE Transactions 2004; 36(7):667-682.
11. Woolsey RE. The Fifth Column: The PERT that never was or data collection as an optimizer. Interfaces 1992; 22(3):112-114.
12. Tversky A, Kahneman D. Judgment under uncertainty: Heuristics and biases. Science 1974; 185:1124-1131.
13. Leach L. Schedule and cost buffer sizing: How to account for the bias between project performance and your model. Project Management Journal 2003; June:34-47.
14. Wiest JD. Some properties of schedules for large projects with limited resources. Operations Research 1964; 12:395-418.
15. Leach L. Critical Chain Project Management. Artech House, 2000.
16. Zalmenson E, Zinman E. The TOC—why did it fail? 2001. http://www.cybermanage.net (retrieved 30 November 2004).
17. Roman DD. The PERT system: An appraisal of Program Evaluation Review Technique. Academy of Management Journal 1962; 5(1):57-65.
18. Herroelen W, Leus R. On the merits and pitfalls of Critical Chain scheduling. Journal of Operations Management 2001; 19:559-577.
19. Herroelen W, Leus R, Demeulemeester E. Critical Chain project scheduling: Do not oversimplify. Project Management Journal 2002; 33(4):48-60.
20. Raz T, Barnes R, Dvir D. A critical look at Critical Chain Project Management. Project Management Journal 2003; 34(4):24-32.
21. Trietsch D. Why a critical path by any other name would smell less sweet: Towards a holistic approach to PERT/CPM. Project Management Journal 2005; 36(1):27-36.
22. Ford DN, Sterman JD. Overcoming the 90% syndrome: Iteration management in concurrent development projects. Concurrent Engineering: Research and Applications 2003; 11(3):177-186.
23. Clark CE. The greatest of a finite set of random variables. Operations Research 1961; 9:145-162.
24. Hopp WJ, Spearman ML. Setting safety leadtimes for purchased components in assembly systems. IIE Transactions 1993; 25(2):2-11.
25. Trietsch D. Scheduling flights at hub airports. Transportation Research—Series B (Methodology) 1993; 27B(2):133-150.
26. Trietsch D. Optimal feeding buffers for projects or supply chains by an exact generalization of the newsvendor model. Under revision for International Journal of Production Research.
27. Trietsch D, Quiroga F. Coordinating n parallel stochastic inputs by an exact generalization of the newsvendor model. Under revision for European Journal of Operational Research.
28. Ronen B, Trietsch D. A decision support system for planning large projects. Operations Research 1988; 36:882-890.
29. Kumar A. Component inventory costs in an assembly problem with uncertain supplier lead-times. IIE Transactions 1989; 21(2):112-121.
30. Chu C, Proth J-M, Xie X. Supply management in assembly systems. Naval Research Logistics 1993; 40:933-949.
31. Wollmer RD. Critical path planning under uncertainty. Mathematical Programming Study 1985; 25:164-171.
32. Rivera FA, Duran A. Critical clouds and critical sets in resource-constrained projects. International Journal of Project Management 2004; 22:489-497.
33. Elmaghraby SE, Ferreira AA, Tavares LV. Optimal start times under stochastic activity durations. International Journal of Production Economics 2000; 64:153-164.
34. Elmaghraby SE. Optimal resource allocation and budget estimation in multimodal activity networks. Proceedings of the IEPM Conference, Istanbul, 2000.
35. Elmaghraby SE, Thoney KA. The two-machine stochastic flowshop problem with arbitrary processing time distributions. IIE Transactions 1999; 31:467-477.
36. Ku P-S, Niu S-C. On Johnson’s two-machine flow shop with random processing times. Operations Research 1986; 34:130-136.
37. Portougal V, Trietsch D. Johnson’s problem with stochastic processing times and optimal service level. European Journal of Operational Research (appeared online as “Article in Press,” 29 April 2005).
38. Portougal V, Trietsch D. Stochastic scheduling with optimal customer service. Journal of the Operational Research Society 2001; 52:226-233.
39. Trietsch D. The effect of systemic errors on optimal project buffers. International Journal of Project Management 2005; 23:267-274.
40. Trietsch D. Statistical Quality Control: A Loss Minimization Approach. World Scientific, Singapore, 1999.
41. Beer S. Brain of the Firm. 2nd edition. Wiley, 1981.
42. Beer S. Diagnosing the System for Organizations. Wiley, 1985.