IIE Transactions (2000) 32, 387–401

Agent-based project scheduling

GARY KNOTTS¹, MOSHE DROR²,* and BRUCE C. HARTMAN

MIS Department, College of Business and Public Administration, The University of Arizona, Tucson, AZ 85721, USA
¹E-mail: [email protected]
²E-mail: [email protected]

Received January 1998 and accepted May 1999

Agent technology offers a new way of thinking about many of the classic problems in operations research. Among these are problems such as project scheduling subject to resource constraints. In this paper, we develop and experimentally evaluate eight agent-based algorithms for solving the multimode, resource-constrained project scheduling problem. Our algorithms differ in the priority rules used to control agent access to resources. We apply our approach to a 51-activity project originally published by Maroto and Tormos [1]. We solve the problem using two types of agent-based systems: (i) a system of simple, reactive agents that we call basic agents; and (ii) a system of more complex, deliberative agents that we call enhanced agents. Of the eight priority rules tested, we find that priority based on shortest processing time performs best in terms of schedule quality when applied by basic agents, while priority based on earliest due date performs best when applied by enhanced agents. In comparing agents across priority rules, we find that enhanced agents generate much better schedules (with makespans up to 66% shorter in some cases) and require only slightly more computation time.

1. Introduction

A project is typically a one-of-a-kind effort undertaken for the purpose of achieving a specific end objective [2]. By the nature of their novelty, all projects are associated with high levels of uncertainty [3]. Project management is a set of principles, methods, and techniques applied for the purpose of being on-time, under-budget, and up to specification [4]. One of the major functions of project management is the generation of a project schedule [5]. This includes the effective allocation of scarce resources [6] such that project objectives are most likely to be realized. This has traditionally been referred to as the Resource-Constrained Project Scheduling Problem (RCPSP). There are numerous tools and techniques available for project scheduling. These include CPM, PERT, GERT, P-GERT, Q-GERT, and VERT, to name a few. In spite of this, it is widely recognized that projects fail more often than they succeed [7–10]. In fact, a lengthy list of project failures could easily be generated. One large and dramatic example is a US Department of Defense project awarded to General Dynamics, for which the final product was delivered 2 years late, at a cost of $843 million, 66% over the originally estimated budget [11]. The

*Corresponding author

0740-817X © 2000 "IIE"

Hubble Telescope is another example of a project for which the result was virtually useless as originally deployed [10]. Yet another highly publicized example of a dramatic project failure is the Kevin Costner film, Waterworld, which was released much later than intended and $100 million over budget [12]. The key activities at the start of a project are planning and scheduling [13]. Doing this poorly is thought by many to contribute significantly to the likelihood that the project will eventually fail [14–17]. In this paper, we propose a new approach to project planning and scheduling based on agent technology. In the sections which follow, we will provide a background discussion of agent technology and a technical discussion of the resource-constrained project scheduling problem. We will then describe our scheduling methodology and demonstrate its application to a test project through a series of simulation experiments. We will present the results of our methodology as applied to this problem, then conclude with suggestions for future research.

2. Overview of agent technology

2.1. Definition of agent

Agent technology is a relatively new specialization of distributed Artificial Intelligence (DAI), which is itself a subfield of AI [18]. Perhaps due in part to its newness,

there is as yet no generally agreed-upon definition of the term agent. Fisher [19] defines an agent to be an encapsulated entity with traditional AI capabilities, while Jennings and Wooldridge [20] define an agent to be a self-contained, problem-solving entity. Davidsson et al. [21] add that an agent is an entity with the ability to interact independently with its environment through its own sensors and effectors. According to Maes [22], an agent is a system that tries to fulfill a set of goals in a complex, dynamic environment where the agent is "situated" in that environment. Nwana and Ndumu [23] define an agent as a software and/or hardware component capable of acting exactingly in order to accomplish tasks on behalf of its user. In this paper, we will describe a system consisting of a number of active entities capable of manipulating numerous passive entities. Borrowing from some of the definitions in the previous paragraph, we will define an agent to be any entity in our system that is capable, at a minimum, of changing the state of a passive entity in its environment. Thus, in the simplest terms, we recognize two very broad entity types: agents that are the "doers" and all other things which are the objects of agent actions and are, therefore, "done upon."

2.2. Agent properties

In addition to being active entities, agents may have many other properties. (We emphasize the word "may" to highlight the fact that there is no more agreement among researchers regarding the properties of agents than there is regarding the definition of the word itself.) Agents can exhibit autonomy, social ability, responsiveness, and proactiveness in addition to adaptability, mobility, veracity, and rationality [20]. Furthermore, agents may have both high-level and low-level reasoning capabilities [21] and the actions that result from these capabilities may be influenced by intentions and beliefs [24]. Finally, agents may have goals and an ability to plan based on their goals. They may be able to execute actions based on their goal-directed plans, monitor their environment to determine the effects of their actions, analyze the extent to which their actions brought about the desired changes in the environment, and replan their actions when necessary to reach their goals [25]. The extent to which agents exhibit one, many, or most of the characteristics just described is dependent on the agent type. If we categorize agents according to sophistication, then the simplest are purely reactive while the most advanced are purely deliberative [20]. These same two extremes can also be referred to as reactive and cognitive [26]. Between the extremes are hybrid agents that combine reactive and deliberative capabilities [20]. Agents can also be classified as sensing agents that maintain information about only their local state, self-conscious agents that are aware of the existence of other agents and can communicate with them, and social agents that have models of other agents' states, goals, and plans [27]. In more recent work, agents have been classified as collaborative, interface, mobile, information/Internet, reactive, and hybrid [23]. We will describe later where our agents fit into these classification schemes.

2.3. The essence of agent technology

The concept frequently cited as central to agent technology is that of emergent complexity. For examples, the interested reader is referred to the work of Jennings and Wooldridge [20], Maes [22], Nwana and Ndumu [23], Bussman and Demazeau [26], Parunak [27] and Guha and Lenat [28]. Emergent complexity is the notion that one can build a system of multiple agents in which the individual agents are simple in nature, but the system as a whole is capable of complex behavior which "emerges" from the interactions among agents [20]. Or, put another way, problem solutions are obtained as the "side effects" of agent behavior [29].

2.4. Advantages of agent technology

Agent-based systems have the advantage of being more robust, flexible, and fault tolerant than traditional systems [22]. Furthermore, the simple patterns of agent behavior are easier to program [29]. In addition, this approach often provides a means to solve problems that have previously been unsolvable and to address problems in a way that is more natural, easy, and efficient [20]. In particular, the naturalness of the approach can help us better understand the behavior of complex systems [24]. Finally, an agent-based system can often solve problems faster (by exploiting parallelism) with lower communication costs, more flexibility, and more reliability than many traditional methods [18].

2.5. Problems addressed using agent technology

Agent technology has been applied to several operations research problems. Colorni et al. [30] have applied agent technology to the traveling salesman problem. Parunak [27] discusses the use of agent technology for manufacturing scheduling. Nwana and Ndumu [23] describe agent-based work for scheduling and diary management. Liu and Sycara [31] applied agent technology to job shop scheduling and were able to obtain "very high quality solutions." This work has, in part, encouraged us to investigate the use of agent technology for another classic operations research problem, the resource-constrained project scheduling problem, as described in the next section.

3. The multimode resource-constrained project scheduling problem

In this paper, we focus specifically on the nonpreemptive, MultiMode, Resource-Constrained Project Scheduling Problem, which we formally define here and refer to hereafter as the MMRCPSP. We will use the following symbols and definitions:

t = time index;
j = activity index;
J = number of activities;
r = resource pool index;
R = number of pools of renewable resources;
Kr = total units of resource in pool r;
m = activity mode index;
Mj = number of activity execution modes for activity j;
djm = duration of activity j when executed in mode m;
kjmr = units of resource required by activity j from pool r when executed in mode m;
Pj = set of immediate predecessors of activity j;
Sj = set of immediate successors of activity j;
STj = start time of activity j;
FTj = finish time of activity j;
ESTj = earliest possible start time of activity j based on traditional critical path analysis using min{djm} and precedence constraints only;
EFTj = earliest possible finish time of activity j based on traditional critical path analysis using min{djm} and precedence constraints only;
LSTj = latest possible start time of activity j based on traditional critical path analysis using min{djm} and precedence constraints only;
LFTj = latest possible finish time of activity j based on traditional critical path analysis using min{djm} and precedence constraints only;
H = maximum project makespan, H = Σ_{j=1}^{J} max{djm | m = 1, …, Mj}.

A project consists of a set of activities j = 1, …, J where J is the total number of activities making up the project [32]. Available to the project are one or more pools of renewable resources r = 1, …, R where R is the total number of pools of renewable resources. Each pool r of renewable resources is, by definition, constrained on a per-period basis to a maximum of Kr units per period. Each of the J activities can be executed in one of Mj modes m = 1, …, Mj. Associated with each activity j and each mode m is a particular activity duration djm. Also associated with each activity j, each mode m, and each resource pool r is a particular renewable resource requirement, kjmr. For all activities j, there is some set of immediate predecessors, Pj, and immediate successors, Sj. No activity can start before all of its immediate predecessors

have been completed. In addition, no activity can start unless kjmr units of renewable resource are available. Once activities have started, they cannot be preempted (i.e., once an activity j has started in a particular mode m, it cannot be interrupted and it will exclusively hold kjmr units of resource for a duration of djm). Two dummy activities will be added to those that make up a project. The first activity is a dummy source: it will be assigned an index of j = 1 and will have only one activity execution mode. Thus, d11 = 0. The dummy source has no precedence requirements (i.e., P1 = ∅) and no resource requirements (i.e., k11r = 0 for all r). The second activity will be a dummy sink: it will be assigned an index of J and will also be limited to one activity execution mode. Thus, dJ1 = 0. The dummy sink has no resource requirements and no successors (i.e., SJ = ∅ and kJ1r = 0 for all r). For notational convenience, activities will be uniquely, sequentially, and numerically indexed by j such that the value of the index for activity j will be greater than the value of the indices for all activities in Pj. The start time of activity j will be denoted by STj and the finish time will be denoted by FTj. The earliest possible start time for activity j, ESTj, and the earliest possible finish time for activity j, EFTj, will be determined based on the results of the forward pass of a traditional critical path analysis using min{djm} and precedence constraints only. The latest possible start time for activity j, LSTj, and the latest possible finish time for activity j, LFTj, will be determined on the basis of a backward pass of a traditional critical path analysis using min{djm}, based on precedence constraints only and on the maximum makespan for the project, H = Σ_{j=1}^{J} max{djm | m = 1, …, Mj}.
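These critical-path bounds can be sketched as a forward pass (for ESTj and EFTj) and a backward pass from the horizon H (for LSTj and LFTj), each using min{djm} per activity. The four-activity network below is a hypothetical illustration, not the test project used in this paper:

```python
# predecessors[j] = Pj; activity 1 is the dummy source, 4 the dummy sink
predecessors = {1: set(), 2: {1}, 3: {1}, 4: {2, 3}}
# durations[j] = mode durations djm; the dummies have a single zero-length mode
durations = {1: [0], 2: [3, 5], 3: [4, 6], 4: [0]}

min_d = {j: min(d) for j, d in durations.items()}
H = sum(max(d) for d in durations.values())  # maximum makespan H

# Forward pass: ESTj = max EFT over Pj (activities are topologically indexed)
EST, EFT = {}, {}
for j in sorted(predecessors):
    EST[j] = max((EFT[h] for h in predecessors[j]), default=0)
    EFT[j] = EST[j] + min_d[j]

# Backward pass from H: LFTj = min LST over the successors Sj
successors = {j: {k for k in predecessors if j in predecessors[k]}
              for j in predecessors}
LST, LFT = {}, {}
for j in sorted(predecessors, reverse=True):
    LFT[j] = min((LST[k] for k in successors[j]), default=H)
    LST[j] = LFT[j] - min_d[j]

print(EST, EFT, LST, LFT, H)
```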
3.1. Integer programming formulation of the MMRCPSP

Although several objectives are possible for the MMRCPSP, we will focus on the minimization of project makespan subject to precedence and resource constraints. For deterministic time durations, this problem can be formulated as a 0–1 integer programming problem in which the binary variable xjmt equals 1 if activity j is executed in mode m and finishes at time t, and 0 otherwise. The formulation is as follows (for similar formulations see Kolisch and Sprecher [33] and Boctor [34]):

Minimize:  Σ_{m=1}^{MJ} Σ_{t=EFTJ}^{LFTJ} t · xJmt,

subject to:

Σ_{m=1}^{Mj} Σ_{t=EFTj}^{LFTj} xjmt = 1,   j = 1, …, J;   (1)

Σ_{m=1}^{Mh} Σ_{t=EFTh}^{LFTh} t · xhmt ≤ Σ_{m=1}^{Mj} Σ_{t=EFTj}^{LFTj} (t − djm) · xjmt,   j = 2, …, J, h ∈ Pj;   (2)

Σ_{j=1}^{J} Σ_{m=1}^{Mj} Σ_{s=t}^{t+djm−1} kjmr · xjms ≤ Kr,   r = 1, …, R, t = 1, …, H;   (3)

xjmt ∈ {0, 1},   j = 1, …, J, m = 1, …, Mj, t = EFTj, …, LFTj.

Constraint (1) ensures that no more than one mode and one finish time will be assigned to each activity. Constraint (2) enforces precedence requirements by ensuring that no activity will start before the completion of all of its predecessors. Constraint (3) enforces resource requirements by ensuring that no activities will consume more renewable resources than are available.

3.2. Problem complexity and heuristic algorithms

The single-mode RCPSP is NP-hard [35–37] and the multimode RCPSP, as a generalization of the single-mode RCPSP, is also NP-hard. Thus, it is unlikely that an exact algorithm can be found which can solve real-world size problems in a practical amount of time [35]. In addition, the real-life problem which we are modeling is certainly stochastic in regard to activity durations. From this, it is clear that a heuristic approach is the appropriate way to address this problem [38], as we have done in the remainder of this paper.

4. Traditional heuristic priority rules

As described by Alvarez-Valdes Olaguibel and Tamarit Goerlich [35], most heuristic algorithms for the RCPSP follow the same general approach. Initially, all project activities are placed in a set of available activities called the available set. Any activity for which all predecessors are finished is moved from the available set to the eligible set. When the eligible set contains multiple activities, these are prioritized using a prioritization rule. Once prioritized, activities are scheduled according to their priority up to the availability of resources. When an activity is scheduled, it is moved from the eligible set to the scheduled set and the units of available resources are decremented by the amount allocated to the scheduled activity. Activities remain in the scheduled set for an amount of time equal to their duration. At that point, they are moved from the scheduled set to the finished set and the number of available renewable resources is incremented by the amount previously allocated to the finished activity. This process continues until all activities are in the finished set. The heuristically-determined makespan for the project is equal to the completion time of the final activity (i.e., the finish time of the dummy sink, FTJ).

4.1. Types of heuristic algorithms

Although a large number of heuristics exist for the RCPSP, most fall into one of four broad categories [35].

The category in which a particular heuristic belongs is determined by the prioritization rule for resource allocation. If priority is based solely on one or more characteristics of the activities themselves, it is referred to as an activity-based heuristic. Examples include shortest processing time and longest processing time. If priority is based on relationships among the activities, it is referred to as a network-based heuristic. Examples include fewest immediate successors and most immediate successors. If priority is determined by parameters derived from a traditional critical path analysis, it is a CPM-based heuristic. Examples include on/off critical path, earliest start time, latest finish time, and minimum slack. If priority is determined by resource requirements, it is a resource-based heuristic. Examples include greatest resource requirement and smallest resource requirement. Heuristics that do not fall into any of the previous four categories can be placed in a catch-all category called other. Examples include various random prioritization methods. Although we have not yet described our approach in detail, let us say here that our agent-based scheduling method can be used to implement nearly any of the existing heuristics. All that is required is that the agent's behavior be based upon what we will refer to as a set of one or more "defer to" rules. Consider, for example, the shortest processing time heuristic mentioned in the previous paragraph. In a system where agents actively acquire their own resources, this heuristic could easily be implemented by incorporating a rule into each agent which says "when acquiring resources, defer to any agent with a shorter duration." To implement any other prioritization rule, one need only provide the proper "defer to" rules. In the next section, we will describe our scheduling approach.
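The general loop from the start of this section, combined with a pluggable priority rule such as the "defer to shorter duration" (shortest processing time) idea just described, can be sketched as follows. The four-activity, single-mode, single-pool instance is a hypothetical assumption for illustration:

```python
import heapq

preds = {1: set(), 2: {1}, 3: {1}, 4: {2, 3}}   # immediate predecessors Pj
dur   = {1: 0, 2: 3, 3: 2, 4: 0}                 # single-mode durations djm
need  = {1: 0, 2: 2, 3: 2, 4: 0}                 # units required from the one pool
K     = 3                                        # per-period capacity Kr

def schedule(priority):
    """Serial loop: available -> eligible -> scheduled -> finished."""
    t, free = 0, K
    finished, start = set(), {}
    running = []                                 # heap of (finish_time, j, units)
    while len(finished) < len(preds):
        eligible = [j for j in preds
                    if j not in finished and j not in start
                    and preds[j] <= finished]
        for j in sorted(eligible, key=priority): # priority rule orders eligibles
            if need[j] <= free:                  # schedule up to resource limits
                free -= need[j]
                start[j] = t
                heapq.heappush(running, (t + dur[j], j, need[j]))
        t, j, units = heapq.heappop(running)     # advance to next completion
        free += units                            # return renewable resources
        finished.add(j)
    return t                                     # makespan = finish of dummy sink

spt = lambda j: dur[j]   # shortest processing time (an activity-based rule)
print(schedule(spt))
```

Any other rule (latest finish time, greatest resource requirement, and so on) drops in by swapping the `priority` key function, mirroring the "defer to" formulation above.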

5. Our approach

As stated previously, a project consists of a set of activities. In our system, we create one agent for each activity. It is the responsibility of the agent to acquire the resources required for the activity to which it has been assigned. The mechanisms by which agents determine they are in the eligible set and the means by which they acquire resources will be described momentarily. We begin, however, with a description of the individual agents. In the following paragraphs, we will make many (and often interchangeable) references to activities and agents, so the reader must keep in mind the one-to-one relationship between the two. (For a more complete discussion of our implementation, see Knotts et al. [39,40].)

5.1. Digital circuit metaphor

Our agent model is based on standard electronic components used in simple digital circuits. Specifically, an agent's precedence and resource requirements are represented as a combination of AND, OR, NAND, NOR, and XOR gates. Although we do not mean to suggest that real digital circuits be built for our purposes, we will demonstrate how the abstract characteristics of such circuits can be used to represent an agent's requirements for activity execution. We begin with one of the most basic digital components: the AND gate [41]. The simplest of these has two inputs and one output. Since the AND gate is a digital device, the only possible input and output values are 0 and 1. The behavior of a 2-input AND gate is represented by the truth table in Table 1. This table shows that the output of the AND gate is 1 if and only if the value of both inputs is 1. Put another way, an input value of 0 at either or both inputs will result in an output value of 0.

Table 1. Truth table for a 2-input AND gate

Input 1 | Input 2 | Output
--------|---------|-------
0       | 0       | 0
0       | 1       | 0
1       | 0       | 0
1       | 1       | 1

Using the metaphor as proposed, we view the 2-input AND gate as being analogous to an agent assigned to an activity which has a total of two precedence and/or resource requirements. Clearly, the output of an AND gate becomes 1 only when the value of both inputs is 1. As an example of how we use this in our model of an agent, assume that one of the inputs represents whether or not a particular activity is finished. In other words, the value is 1 if the activity is finished. Otherwise, the value is 0. The other input represents some other activity in the same manner. Now assume there is an agent which requires that both these activities be finished before it can begin (i.e., the other activities are predecessors). In this case, the AND gate can be used as an appropriate representation of the agent where the inputs are defined to accept values that represent the state of predecessor activities. In a very similar fashion, the input values can also be used to represent the availability of a sufficient number of resources for a particular agent's requirements. In this case, the value is 1 if the number of units of resource is sufficient to satisfy the agent's requirements. Otherwise, the value is 0. As was the case for precedence requirements, the output of the AND gate is an appropriate representation of the agent's eligibility in regard to resource requirements. In a real AND gate, there is a short delay between the time its inputs become 1 and the time its output changes to 1. In the electronics field, this is called propagation delay. In our model, we recognize this as being analogous to activity duration. Generally speaking, real propagation delays are on the order of nanoseconds, but for the purposes of extending our metaphor, we will make two

assumptions. First, we can assign any value we wish to the propagation delay of an AND gate. For example, we can state that the propagation delay will be 4 days, then scale that value down by mapping 1 day of real time to 1 second of simulated time. Our second assumption is that AND gates in our model need not all have the same propagation delay. Instead, we assign different propagation delays to each AND gate corresponding to the estimated duration of the activity represented by the AND gate. Note that we can also represent project milestones by assigning a propagation delay of zero. Of course, activities in real projects are not constrained to only two predecessors and/or resource requirements. This is not a problem for our model since predecessors and/or resource requirements greater than two can be represented by an n-input AND gate, where the value of n is equal to the total number of requirements. The output of an n-input AND gate is 1 if and only if the values of all its inputs are 1. Thus, the activity represented by an n-input AND gate will not begin until all of its precedence and resource requirements are satisfied.

5.2. Additional digital elements

We now extend our metaphor by recognizing that digital circuits are typically constructed using a variety of logical elements in addition to AND gates. Among those commonly used are OR, NAND, NOR, and XOR gates. This suggests capabilities for our model not typically supported by traditional PERT/CPM-type methods. Table 2 shows the behavior of the 2-input versions of all these elements. As can be seen in Table 2, the output of an OR gate is 1 if either or both of its inputs are 1. If we use this element to model an activity as we did the AND gate, then we have an activity that begins when either or both of its inputs are 1. Note that we no longer refer to these as required inputs since the activity can begin even when only one of the two requirements is satisfied.
And as before, the value of the input can represent the completion of a predecessor activity or the availability of a sufficient number of units of resource. Finally, as was the case for an AND gate, the OR gate is not restricted to two inputs. An n-input OR gate will have an output of 1 when any one or more of its n inputs has a value of 1.
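The AND-gate reading of an agent from Section 5.1 can be sketched directly: each input is 1 when a predecessor is finished or a resource pool can cover the requirement, and the gate's output is the agent's eligibility. The activity and resource data below are hypothetical:

```python
def and_gate(inputs):
    """n-input AND gate: output is 1 iff every input is 1."""
    return int(all(v == 1 for v in inputs))

finished = {2, 3}                     # predecessor activities already completed
available = {"pool1": 6, "pool2": 8}  # free units per resource pool

agent_preds = [2, 3]                  # this agent's precedence requirements
agent_needs = {"pool1": 2, "pool2": 1}  # resource requirements in the current mode

inputs = ([1 if p in finished else 0 for p in agent_preds]
          + [1 if available[r] >= k else 0 for r, k in agent_needs.items()])
eligible = and_gate(inputs)           # 1 means the activity may start
duration = 4                          # "propagation delay" = activity duration (days)
print(eligible)
```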

Table 2. Truth table for 2-input OR, NAND, NOR, and XOR gates

Input 1 | Input 2 | OR output | NAND output | NOR output | XOR output
--------|---------|-----------|-------------|------------|-----------
0       | 0       | 0         | 1           | 1          | 0
0       | 1       | 1         | 1           | 0          | 1
1       | 0       | 1         | 1           | 0          | 1
1       | 1       | 1         | 0           | 0          | 0
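A minimal evaluator for these gates and their n-input generalizations can be checked against the truth tables; this is a sketch of the gate semantics, not of the authors' implementation:

```python
def gate(kind, inputs):
    """Evaluate an n-input logic gate over 0/1 inputs."""
    ones, n = sum(inputs), len(inputs)
    if kind == "AND":  return int(ones == n)
    if kind == "OR":   return int(ones >= 1)
    if kind == "NAND": return int(ones != n)   # logical inverse of AND
    if kind == "NOR":  return int(ones == 0)   # logical inverse of OR
    if kind == "XOR":  return int(ones == 1)   # exactly one input is 1
    raise ValueError(kind)

# Reproduce the 2-input rows of Table 2
for a in (0, 1):
    for b in (0, 1):
        print(a, b, [gate(k, [a, b]) for k in ("OR", "NAND", "NOR", "XOR")])
```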


The behavior of a NAND gate is the logical inverse of the AND gate. That is, the output of a NAND gate is 0 (rather than 1) when the values of both its inputs are 1. For the case of the n-input NAND gate, the output of the gate is 0 only when the value of all inputs is 1; otherwise the output value is 1. The behavior of a NOR gate is the logical inverse of the OR gate. In this case, the output is 1 if and only if the values of its inputs are both 0. The output of the n-input NOR gate is 1 only when the value of all its inputs is 0. Finally, the output of the 2-input XOR gate is 1 only when the value of exactly one of its inputs is 1. The output of an n-input XOR gate is 1 if and only if the value of exactly one of its inputs is 1.

5.3. Definition of basic and enhanced agents

In the simulation experiments that appear later in this paper, the performance of two types of agents is investigated: basic and enhanced. Although the precedence and resource requirements for both agent types are represented using the digital metaphor just described, the agents differ in the rules that govern their behavior in regard to mode selection. In the remainder of this section, we will describe the architecture of the system in which our agents are situated, then carefully describe the rules which govern agent behavior.

5.3.1. System architecture

The system consists of three major modules and the set of agents. Each module is described in this section, including its role in the simulation process. The architecture is illustrated in Figs. 1 and 2.

5.3.2. Problem generator module

The Problem Generator Module is invoked when the user requests that some number of problem instances be created based on a predefined network structure. As part of

Fig. 1. Agent-based system architecture (problem generation).

Fig. 2. Agent-based system architecture (simulation execution).

this process, the user must specify the number of modes to be generated for each activity, the total number of resource pools, the type of each resource pool, the maximum number of units of resource in each pool, and the minimum and maximum number of units of resource that an activity can require from each resource pool. In response, the Problem Generator writes parameter values common to all problem instances in a Global Data file. In addition, it will create a separate file for each of the requested problem instances. These files will contain the information unique to each of the problem instances. This module is intended to be used for research purposes only and is not used for the current study since the simulation experiments are all based on the Maroto and Tormos [1] project network described later.

5.3.3. Simulation manager module

Once the desired problem instances have been created, the user can begin the execution of the simulation. This is done by invoking the Simulation Manager Module. In response, the Simulation Manager will read the Network, Global Data, and Instance Files. Based on the contents of the Network File, the Simulation Manager will create one agent for each activity. It will also request that the user interactively define the rules that will govern agent behavior. Next, the Simulation Manager will begin setting up the simulation for the first problem instance. Specifically, it will create a data structure containing a representation of each agent. In addition, the Simulation Manager will write initial values for project parameters to a globally-accessible Blackboard. This will include all information required to enforce the precedence and resource requirements of activities. Finally, the Simulation Manager will set the value of the global simulation clock to zero. Upon completing the setup of a simulation, the Simulation Manager will provide an indication to the agents

through the Blackboard that they may begin working. From that point, the Simulation Manager will wait until the simulation of the current problem instance is finished. After the completion of each instance, it will write the results to a Results file. This process will continue until the final instance is finished, after which the Results file will be available for user review.

5.3.4. Agents 1 through J

Upon observing the signal to start a simulation experiment, the agents will begin to act according to the rules which define their behavior. In particular, they will scan the Blackboard to determine whether or not their precedence and resource requirements can be satisfied. In the course of doing so, they will defer to agents with higher priorities according to the priority rules defined for them. Upon determining that the resources they need are available and there are no agents with higher priority, the agents will remove the required resources from the Blackboard. In the case of a deterministic simulation experiment, the duration of an agent's activity will be provided to the agent at the time of its creation. In the case of a stochastic simulation experiment, the agent will invoke the Activity Duration Realizer with a request for a specific activity duration. The agent will provide the Activity Duration Realizer with a set of parameters that describe a particular distribution of values (e.g., the agent might provide a mean and standard deviation along with a request for a duration from a normal distribution). The parameters needed for this purpose will have been provided to the agent at the time of its creation. Once a duration is realized, a basic agent will hold the resources it acquired from the Blackboard for an amount of time equal to the realized activity duration.
In contrast, an enhanced agent may either choose to hold the resources for the duration of the activity or return them immediately in favor of considering a different execution mode. In our system, this will occur when the enhanced agent is not satisfied with the value of the realized duration returned from the Activity Duration Realizer. (The precise rules by which the agent determines whether or not the realized duration is acceptable will be given momentarily.) Upon determining from the global simulation clock that the duration of the activity has passed, the agent will return all renewable resources to the Blackboard.

5.3.5. Activity duration realizer module

Upon receiving a request from an agent, the Activity Duration Realizer will generate a random activity duration based on the parameters provided by the agent.

5.4. Comparison of basic versus enhanced agent behavior

Having presented the architecture of our system, we may now provide a clearer explanation of basic versus enhanced agent behavior. The simpler of the two, the basic

agent, is purely reactive in the sense discussed previously. That is, its capabilities are limited to perceiving its environment and reacting in a deterministic way to what it perceives. Put another way, the basic agent has no ability to deliberate on its perceptions. In contrast, an enhanced agent is capable of deliberative behavior. Specifically, an enhanced agent is able to select alternative actions on the basis of which behavior it believes is more likely to produce a desired outcome. To understand the functional differences between the two agent types as we have implemented them, recall that one of the primary responsibilities of all agents is to repeatedly scan the Blackboard in search of required resources. Furthermore, in the context of the MMRCPSP, there are multiple activity execution modes for each agent. Thus, at any given point in the execution of a simulation, an agent may have more than one resource-feasible mode. As an example, consider the resource requirements for the two modes and two resource pools shown in Table 3. If we assume there are currently 6 units of resource available in pool 1 and 8 units of resource available in pool 2, then both execution modes are feasible.

5.5. Our implementation of basic and enhanced behavior

The defining difference between a basic and an enhanced agent is the means by which a resource-feasible execution mode is selected. A basic agent, which is purely reactive, will choose to become active in the first feasible execution mode that it finds. In contrast, an enhanced agent will deliberate in regard to mode selection according to several rules. The precise behavior, and the rules which govern that behavior, for each agent type are presented in the next two subsections.

5.6. The basic agent

The basic agent begins by selecting a mode to consider for resource feasibility. The order in which modes are considered is determined by the priority rule defined for the agent (ties are broken randomly).
Upon selecting a mode for consideration, the basic agent reviews the Blackboard to determine whether or not the required resources are available. If they are, the basic agent will request an activity duration from the Activity Duration Realizer. Specifically, it will pass the value of djm to the Activity Duration Realizer. In response, the Activity Duration Realizer will select and return a value dout such that djm − (djm/2) ≤ dout ≤ djm + (djm/2). This range of values was selected to ensure that djm would be the midpoint of the uniform distribution, making it equally likely that dout would be less than or greater than djm. It also ensures that dout can never be less than zero. After a basic agent receives dout, it immediately removes the needed resources from the Blackboard and holds them for a duration equal to the realized activity duration, dout. Thus, the only rule which governs the mode selection behavior of a basic agent is:

Rule 1. Choose to become active in the first mode which is found to be resource feasible.

Table 3. Example resource requirements

Mode                   Pool 1    Pool 2
1                      2         1
2                      3         4
Resources available    6         8

5.7. The enhanced agent

The enhanced agent also evaluates the resource feasibility of activity execution modes in priority-rule order. Like the basic agent, once it finds a feasible mode, it will pass the value of djm to the Activity Duration Realizer, which will return a randomly selected value for dout as defined in the previous paragraph. Thus far, this behavior is the same as that of a basic agent. However, whereas the basic agent will not consider any modes beyond the first it finds to be resource feasible, the enhanced agent may or may not consider other modes depending on several circumstances. First, if only one mode is resource feasible, the enhanced agent will choose to become active in that mode. This decision makes sense since the agent has no a priori knowledge regarding the value of waiting for other modes to become resource feasible. Second, even if more than one mode is resource feasible, an enhanced agent will choose to become active in the first resource feasible mode it finds for which dout ≤ djm. This is justified as a means of controlling the amount of time an enhanced agent might spend in deliberation.
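Both agent types draw realized durations from the same uniform range, and the basic agent then simply commits to the first feasible mode it checks. A minimal sketch (function names are ours, not from the authors' implementation):

```python
import random

def realize_duration(d_jm, rng):
    """Uniform draw on [d_jm/2, 3*d_jm/2]: d_jm is the midpoint, so the
    realized duration is equally likely to fall below or above d_jm and
    can never be negative."""
    return rng.uniform(d_jm - d_jm / 2, d_jm + d_jm / 2)

def basic_select_mode(modes, available):
    """Rule 1 for basic agents: become active in the first mode, in
    priority order, whose demands fit the available pools.
    modes: list of (d_jm, {pool: units}); returns a mode index or None."""
    for i, (_, demands) in enumerate(modes):
        if all(available.get(r, 0) >= k for r, k in demands.items()):
            return i
    return None

# Table 3: with 6 units in pool 1 and 8 in pool 2, both modes are
# feasible, so the basic agent commits to the first one it checks.
modes = [(4, {1: 2, 2: 1}), (6, {1: 3, 2: 4})]
print(basic_select_mode(modes, {1: 6, 2: 8}))  # -> 0 (i.e., mode 1)
```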
To appreciate this, consider an alternative agent type which would evaluate all possible resource feasible modes and pick the one it determined to be ``best'' by some criterion. That approach would require considerably more deliberation than ours. Finally, if more than one mode is resource feasible, but there are none for which dout ≤ djm, an enhanced agent will choose the mode with the shortest realized duration (i.e., the one for which dout is smallest). Thus, the rules which govern the mode selection behavior of enhanced agents are:

Rule 1. If only one mode is resource feasible, choose to become active in that mode.

Rule 2. If more than one mode is resource feasible, choose to become active in the first mode for which dout ≤ djm.

Rule 3. If more than one mode is resource feasible, but dout > djm for all modes, choose to become active in the mode for which dout is smallest.

As a means of highlighting the differences between basic and enhanced agents, the next two sections provide an example of the application of each to a small project.

6. Example simulation of a small project using basic agents

In this section, we provide an example of a stochastic simulation experiment on one instance of a small project using basic agents. The example consists of five activities (the first and last of which are the dummy source and sink, respectively). The precedence requirements of the activities are presented in the form of a project network in Fig. 3, and the activity durations, resource requirements, and resource availabilities are given in Table 4.

Fig. 3. Example project network.

Table 4. Example project with K1 = 4

j    m    djm    kjm1
1    1     0      0
2    1     4      1
2    2     6      2
3    1    10      4
3    2     6      2
4    1     8      4
4    2     6      4
5    1     0      0

With this in place, the Problem Generation process is complete. Problem Generation is followed by a user request that a simulation be executed. In response, the Simulation Manager Module will read the input files that define the problem to be simulated. From this information, it will start by creating five agents. Each agent will have all the information it requires to determine its precedence and resource requirements and its priority in acquiring resources. For this example, we will use a very simple rule for prioritization: all agents will defer to any agent with an index value less than their own. (Obviously, agents will be aware of their own index value and those of other agents with which they may compete for resources.) In addition to creating the appropriate number of agents, the Simulation Manager will set up the Blackboard. Specifically, it will write a value on the Blackboard for all five agents to indicate that none of them has yet finished the activity to which it has been assigned. Regarding resources, the Simulation Manager will write a value on the Blackboard indicating that the project will begin with 4 units of resource in Pool 1 (K1 = 4). The Simulation Manager will also set the value of the global simulation clock to zero. Finally, it will provide the notification that a problem instance is ready to be run. Upon recognizing that a problem instance is ready to be run, each agent begins the simulation process by first determining whether or not its precedence requirements have been satisfied. Specifically, each agent j looks at the Blackboard to determine whether or not all activities in Pj are finished. Initially, only agent 1 will find that this is true. This agent, the dummy source, will begin and end in 0 time units and then update the Blackboard to indicate that it is finished. After agent 1 is finished, agents 2 and 3 will recognize that their precedence requirements have been satisfied and they will begin searching for a resource feasible mode. Given the priority rule of deferring to agents with lower index values, agent 2 will have the first opportunity. We will assume for this example that modes will be considered in index-value order. Thus, agent 2 will first consider mode 1. It will discover that this mode is resource feasible and, in accordance with Rule 1 for basic agents, it will choose to become active in this mode. As a result, the agent will decrement the value of K1 by 1 unit. As a final step, agent 2 will request a realized duration by submitting the value of djm = d21 = 4 to the Activity Duration Realizer. We will assume the Activity Duration Realizer returns a value of dout = 3. Thus, agent 2 will hold the acquired resources for 3 time units. Agent 3 follows agent 2 in priority and will have a chance at whatever resources remain.
In considering mode 1, agent 3 will find that the required resources are not available. For this reason, it will consider mode 2 and find that the required resources for that mode are available. Thus, it will decrement K1 by 2 units and submit d32 = 6 to the Activity Duration Realizer. We will assume the Activity Duration Realizer returns a value of dout = 8. Thus, agent 3 will hold the acquired resources for 8 time units.

The next event to occur in the simulation is the completion of agent 2 at time t = 3. At this time, agent 2 will return its units of renewable resource by incrementing the value of K1 by 1 unit. Thus, agent 2 is finished. In response to the completion of agent 2, agent 4 is precedence feasible, but the resource requirements cannot be satisfied for either of its two modes. Instead, it must wait for the completion of agent 3. This occurs at time t = 8, upon which agent 3 increments K1 by 2 units and is finished. Agent 4 can now become active. It will decrement K1 by 4 units and submit d41 = 8 to the Activity Duration Realizer. We will assume the Activity Duration Realizer returns a value of dout = 10. Thus, agent 4 starts at t = 8 and concludes at t = 18. At that point, the dummy sink starts and stops immediately. In conclusion, we see that the makespan of the project is 18 time units.
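The walk-through above can be replayed with a small event loop. This is our own illustrative sketch: the precedence arcs are inferred from the narrative (Fig. 3 is not reproduced here), and each agent's chosen mode is pinned to the units of K1 claimed and the realized duration assumed in the text.

```python
# Precedence arcs inferred from the narrative; chosen maps each agent to
# (units of K1 claimed, realized duration) for the mode it selected.
preds  = {1: [], 2: [1], 3: [1], 4: [2], 5: [3, 4]}
chosen = {1: (0, 0), 2: (1, 3), 3: (2, 8), 4: (4, 10), 5: (0, 0)}

K1, t = 4, 0
finish, active = {}, {}                  # agent -> end time / (end, units)
while len(finish) < len(preds):
    # in index-priority order, start every agent that is precedence
    # feasible and whose claimed units fit the remaining pool
    for j in sorted(preds):
        if j in finish or j in active:
            continue
        if all(p in finish for p in preds[j]):
            units, d_out = chosen[j]
            if units <= K1:
                K1 -= units
                active[j] = (t + d_out, units)
    # advance the clock to the next completion and release its resources
    j = min(active, key=lambda a: active[a][0])
    end, units = active.pop(j)
    t, K1 = end, K1 + units
    finish[j] = end

print("makespan =", finish[5])  # 18 time units, matching the example
```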

7. Example simulation of a small project using enhanced agents

In this section, we execute another simulation of the same project just considered. This time, however, we will apply enhanced agents. As before, the precedence requirements of the five activities are presented in the form of a project network in Fig. 3, and the activity durations, resource requirements, and resource availabilities are given in Table 4. All the steps of the simulation process remain the same as described in the previous section until the agents begin their work. Thus, we begin with the start and completion of agent 1 at t = 0. As before, agents 2 and 3 will then recognize that their precedence requirements have been satisfied and agent 2 will have the first opportunity to acquire resources. It will first consider mode 1 and, finding that mode to be resource feasible, it will request a realized duration. We will assume the Activity Duration Realizer returns a value of dout = 3 (the same value that was returned to agent 2 in the previous example). Agent 2 will now apply the three rules defined earlier. First, it will find that Rule 1 does not apply since both modes are resource feasible. As a result, agent 2 will apply Rule 2 and choose to become active in mode 1 since dout ≤ d21. Thus, it will decrement the value of K1 accordingly and hold resources for 3 time units. Agent 3 follows agent 2 in acquiring resources. It will find that mode 1 is not resource feasible, but mode 2 is. As was the case in the basic agent example, we will assume that the realized duration returned to agent 3 is dout = 8. Thus, in accordance with Rule 1 for enhanced agents, agent 3 will choose to become active in mode 2 and decrement K1 by the number of units it requires. Most importantly, note that this will occur regardless of the value of dout. As in the previous example, agent 4 must wait for the completion of agents 2 and 3 before it can become active.

This occurs at t = 8, at which point agent 4 is precedence feasible with two resource feasible modes. Assume, as in the previous example, that agent 4 receives a value of dout = 10 for mode 1. Although the basic agent chose to become active in this mode, the enhanced agent may not. In accordance with Rule 2, agent 4 will next consider mode 2 since dout > djm for mode 1. We will assume that the value of dout returned to agent 4 for mode 2 is 8 time units. In this case, agent 4 will apply Rule 3 and choose to become active in mode 2 since 8 < 10. This being the case, agent 4 will start at t = 8 and end at t = 16. In comparing this result with our example for basic agents, it is clear that the enhanced agents have found a schedule which is 2 time units shorter than that found by the basic agents.
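The three enhanced-agent rules can be expressed as a single selection function; the scripted draws below reproduce agent 4's decision at t = 8. The function and variable names are ours, not the authors'.

```python
def enhanced_select(feasible, realize):
    """Enhanced-agent mode selection (Rules 1-3). `feasible` is the
    priority-ordered list of resource feasible modes as (mode, d_jm)
    pairs; `realize` draws a realized duration for a given d_jm.
    Returns the chosen (mode, d_out)."""
    if len(feasible) == 1:               # Rule 1: no alternative to weigh
        mode, d_jm = feasible[0]
        return mode, realize(d_jm)
    draws = []
    for mode, d_jm in feasible:          # Rule 2: accept the first mode
        d_out = realize(d_jm)            # whose draw satisfies d_out <= d_jm
        if d_out <= d_jm:
            return mode, d_out
        draws.append((d_out, mode))
    d_out, mode = min(draws)             # Rule 3: every draw ran long, so
    return mode, d_out                   # take the smallest realized one

# Agent 4 at t = 8: mode 1 (d = 8) draws 10, mode 2 (d = 6) draws 8,
# so Rule 3 selects mode 2 and the project finishes at t = 16.
scripted = iter([10, 8])
print(enhanced_select([(1, 8), (2, 6)], lambda d: next(scripted)))  # (2, 8)
```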

8. Implications of basic versus enhanced behavior

For the purpose of demonstrating the differences between basic and enhanced agents, the previous example was specifically formulated so that it would favor enhanced agents. In general, however, it is not necessarily the case that enhanced agents will always find shorter-makespan schedules. In fact, one can easily create additional example problems where this will certainly not be the case. It is entirely possible that a tendency to favor shorter realized durations can result in projects with longer makespans. Thus, in the next section, we undertake experiments designed to examine the relative performance of basic and enhanced agents on a large number of problem instances.

9. Source of problem and generation of problem instances

The project network used as the basis for generating problem instances for our simulation experiments was originally published by Maroto and Tormos [1] and includes 51 activities. (Note that the 51st Maroto and Tormos activity is a dummy sink, but their original network did not include a dummy source. We added a dummy source, so the network used in our experiments included 52 activities, where the 1st and 52nd activities are the dummy source and sink, respectively.) The Maroto and Tormos project included durations and renewable resource requirements for one activity execution mode. Resource requirements were defined for three pools of renewable resource. We utilized the Maroto and Tormos [1] precedence requirements for all problem instances in our simulation experiments. For the first activity execution mode, activity durations and renewable resource demands were the same as those published by Maroto and Tormos. A second activity execution mode was randomly generated such that it would be likely to have a shorter duration than the first mode, but require more resources. Specifically, durations for the second mode of all j activities, dj2, were generated randomly from a uniform distribution whose mean was three quarters of the mode 1 duration and whose end points were 50% below and above this value. More formally, dj2 was randomly selected from a uniform distribution such that (3/8)dj1 ≤ dj2 ≤ (9/8)dj1 for all j. Mode 2 resource requirements for the three pools of renewable resource were generated such that they were equally likely to be 0, 1, or 2 units greater than the mode 1 requirements. In cases where this resulted in a resource requirement greater than the total units available, the requirement was set equal to the maximum available number of units for that resource pool. The 1st and 52nd activities are special cases: their durations and all resource requirements were set to zero for both modes.
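The mode 2 generation procedure can be sketched as follows. The function and variable names are ours; `capacities` holds the total units available in each of the three pools.

```python
import random

def make_mode2(d_j1, req_j1, capacities, rng):
    """Generate a second execution mode as described above: duration
    uniform on [(3/8)d_j1, (9/8)d_j1], i.e. mean (3/4)d_j1 with end
    points 50% below and above that mean; each pool demand is 0, 1, or
    2 units above the mode 1 demand, capped at the pool's capacity."""
    d_j2 = rng.uniform((3 / 8) * d_j1, (9 / 8) * d_j1)
    req_j2 = [min(k + rng.choice([0, 1, 2]), cap)
              for k, cap in zip(req_j1, capacities)]
    return d_j2, req_j2

rng = random.Random(1)
print(make_mode2(8, [2, 0, 5], [6, 6, 6], rng))
```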

10. Preliminary simulation results

Our primary focus in this work is on the multimode scheduling problem with stochastic durations. However, before discussing our results for that problem, we wish to state that our agent-based scheduling method appears to be at least comparable to the scheduling methods incorporated into several commercial software products. Maroto and Tormos [1] used eight different commercial applications to schedule the single-mode, deterministic-duration, 51-activity problem described in their article and in the previous section. The results they found are given in Table 5. (Note that Microsoft Project for Windows was used four times with different options each time.) We applied our agent-based approach to the same problem instance as given by Maroto and Tormos [1]. Since the Maroto and Tormos project is a single-mode problem, we used only basic agents. We found that the makespan determined by our method, where agent priorities are determined randomly, was 233 time units, as shown in the last line of Table 5. Our results are clearly comparable to those found by Maroto and Tormos for the commercial software they evaluated. While we cannot draw any significant conclusions about our approach based on one instance of a single problem, the comparison does at least suggest the ``feasibility'' of our method in an admittedly informal sense.

Table 5. Commercial results versus agent-based results

Commercial product                    Makespan
CA-Superproject                       224
Instaplan                             249
Micro Planner Professional            236
Microsoft Project for Windows (1)     231
Microsoft Project for Windows (2)     257
Microsoft Project for Windows (3)     233
Microsoft Project for Windows (4)     224
Project Scheduler                     226
Agent-based scheduling                233

Five hundred problem instances were generated from the Maroto and Tormos [1] project network. Eight different heuristic rules were applied to each of the 500 problem instances. The priority rules are listed and briefly described in Table 6. When the application of a priority rule resulted in a tie among agents or among execution modes, the tie was broken randomly.

Table 6. Priority rules

Acronym   Definition
SPT       defer to agents assigned to activities with (S)horter (P)rocessing (T)ime
LPT       defer to agents assigned to activities with (L)onger (P)rocessing (T)ime
FIS       defer to agents assigned to activities with (F)ewer (I)mmediate (S)uccessors
MIS       defer to agents assigned to activities with (M)ore (I)mmediate (S)uccessors
SRR       defer to agents assigned to activities with (S)maller (R)esource (R)equirements
GRR       defer to agents assigned to activities with (G)reater (R)esource (R)equirements
EST       defer to agents assigned to activities with (E)arlier (S)tart (T)imes
EDD       defer to agents assigned to activities with (E)arlier (D)ue (D)ates

The SPT, LPT, FIS, and MIS priority rules are self-explanatory; we will discuss the others in more detail. For GRR and SRR, priority is given to agents assigned to activities with greater and smaller resource requirements, respectively. For each agent, two total resource requirements were calculated, one for each mode. The two values were calculated separately as the sum of requirements across all three resource pools, Σ(r=1..3) kjmr, for m ∈ {1, 2} and j ∈ {2, ..., 51}. The larger of these two values was used to determine agent priority under GRR. The smaller of the two was used for SRR. For EST, the forward pass of a traditional critical path analysis was executed on each problem instance using, for each activity j, the mode with the shortest duration. This yielded a value for the earliest possible start time of each activity j. Based on this, priority was assigned to activities for which the value of the earliest possible start time was smaller. For EDD, the backward pass of a traditional critical path analysis was executed on each problem instance using a makespan H = Σ(j=1..J) max{djm | m = 1, ..., Mj} and, for each activity j, the mode with the shortest duration. This yielded a value for the late finish time of each activity j. Based on this, priority was assigned to activities for which the value of the late finish time was smaller. Recognize that the minimum late finish time is logically equivalent to the earliest due date [35]. Thus, we refer to this priority rule as EDD.

Simulation results are shown in Tables 7 and 8. Table 7 compares the performance of basic versus enhanced agents in terms of project schedule. The second and third columns of Table 7 are the average makespan of the project across the 500 problem instances as determined when the priority rule in the leftmost column is applied. The second column shows the average makespan for the 500 problem instances as determined by a set of basic agents. The third column shows the average makespan as determined by a set of enhanced agents. The number in parentheses following the makespan in the second and third columns is the ranking of the priority rule in terms of its performance relative to the other rules in the same column. The fourth column is the average difference in makespan between the basic and enhanced agents for the 500 problem instances. The fifth and sixth columns are the minimum and maximum differences in makespan, respectively, for the problem instances where the difference between basic and enhanced agents was the least and greatest, respectively. All differences were calculated as (basic makespan − enhanced makespan). Thus, positive values in the fourth, fifth, and sixth columns represent cases where the performance of enhanced agents was better than that of basic agents.

Table 7. Scheduling performance

Priority   Basic (rank)   Enhanced (rank)   Average diff.   Minimum diff.   Maximum diff.
SPT        175.1 (1)      175.1 (7)            0               0               0
LPT        245.8 (8)      173.0 (6)           72.8           −28             222
FIS        203.4 (6)      172.6 (5)           30.8            −5             198
MIS        201.5 (5)      171.5 (4)           30.0           −16             203
SRR        231.0 (7)      181.4 (8)           49.6           −21             217
GRR        184.8 (2)      169.0 (2)           15.8            −6             143
EST        201.1 (4)      170.2 (3)           30.9           −21             201
EDD        194.5 (3)      163.2 (1)           31.3           −21             183
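The EST and EDD priority keys described above come from standard forward and backward critical-path passes over each activity's shortest-mode duration. A sketch using the small five-activity network from Section 6 (our own function names; activities are assumed numbered in topological order, predecessors first):

```python
def forward_pass(preds, dur):
    """Earliest start times (the EST priority keys): an activity may
    start once all of its predecessors have finished."""
    est = {}
    for j in sorted(dur):
        est[j] = max((est[p] + dur[p] for p in preds[j]), default=0)
    return est

def backward_pass(succs, dur, horizon):
    """Late finish times (the EDD priority keys), computed backward
    from the horizon H = sum of each activity's longest mode duration."""
    lft = {}
    for j in sorted(dur, reverse=True):
        lft[j] = min((lft[s] - dur[s] for s in succs[j]), default=horizon)
    return lft

# Five-activity example network, shortest-mode durations from Table 4.
preds = {1: [], 2: [1], 3: [1], 4: [2], 5: [3, 4]}
succs = {1: [2, 3], 2: [4], 3: [5], 4: [5], 5: []}
dur   = {1: 0, 2: 4, 3: 6, 4: 6, 5: 0}
H     = 0 + 6 + 10 + 8 + 0            # longest mode of each activity
print(forward_pass(preds, dur))       # EST: lower start -> higher priority
print(backward_pass(succs, dur, H))   # EDD: earlier due date -> higher priority
```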

Table 8. Computation time performance (seconds)

Priority   Basic   Enhanced
SPT        0.04    0.08
LPT        0.05    0.08
FIS        0.05    0.08
MIS        0.05    0.08
SRR        0.05    0.08
GRR        0.05    0.08
EST        0.05    0.08
EDD        0.04    0.07
Table 8 compares the performance of basic versus enhanced agents in terms of the computation time required to reach a solution, where the time is given in seconds. All simulation experiments were run on the same computer, which contains a 133 MHz Pentium processor and 16 MBytes of RAM. The second and third columns of Table 8 show the average computation time to reach a solution when the priority rule in the leftmost column is applied. The second column shows the average computation time per instance required by a set of basic agents to solve the 500 problem instances. The third column shows the average computation time per instance required by a set of enhanced agents.

11. Discussion of results

Unless otherwise stated, the results discussed here are all averages taken over the 500 problem instances used for the simulation experiments. In regard to the scheduling performance of the eight priority rules, no single priority rule dominated across agent types. However, of the eight priority rules, GRR and EDD both performed relatively well regardless of agent type. GRR ranked 2nd for both agent types, while EDD ranked 3rd and 1st for basic and enhanced agents, respectively. In contrast, the performance of LPT and SRR was relatively much poorer regardless of agent type. LPT was ranked 8th and 6th, while SRR was ranked 7th and 8th, for basic and enhanced agents, respectively. In regard to the scheduling performance of the two agent types, enhanced agents were clearly superior. Enhanced agents generated better schedules for seven of the eight priority rules. For the LPT priority rule, the difference between basic and enhanced agents was 72.8 time units. At the other extreme, the performance of basic and enhanced agents was identical for SPT. It should be noted, however, that this is not coincidental. Rather, it is because the behavior of the two agent types is defined such that basic and enhanced agents will behave identically when priority is given to agents with shorter processing times. The difference between the best basic agent performance (SPT) and the worst basic agent performance (LPT) was 70.7 time units. The difference between the best enhanced agent performance (EDD) and the worst enhanced agent performance (SPT) was 11.9 time units. For problem instances where basic agents outperformed enhanced agents, the difference in performance was much less dramatic than for cases where enhanced agents outperformed basic agents. For the problem instance where basic agents were most superior to enhanced agents, the difference in performance was only 28 time units. More notably, when LPT was applied by both agent types, enhanced agents generated a schedule with a makespan 222 time units shorter than that generated by basic agents for the same problem instance. Before comparing the computational performance of priority rules, it should be noted that, regardless of the priority rule, the values required to determine priority were provided to the agents before the execution of the simulation experiment. In other words, the times shown in Table 8 are not a function of the time required to compute the parameters upon which priority was based. Thus, a fairer comparison of computation times among priority rules and across agent types is possible. With this said, it is clear from Table 8 that when one compares priority rules within agent type, there is virtually no difference in the time required to reach a solution regardless of the priority rule. Comparing across agent types, basic agents consistently reached solutions faster than enhanced agents, but the time difference in all cases was negligible. In regard to the scheduling performance of the priority rules, there is no obvious pattern in the results. Consider, for example, the case of SPT. Although the absolute results achieved using SPT were identical for basic and enhanced agents (for the reason given earlier), the relative results were very different. SPT, when applied by basic agents, ranked 1st in terms of scheduling performance. The same rule, when applied by enhanced agents, ranked 7th. Aside from SPT, however, the changes in the relative performance of the priority rules across agent types were much smaller. In fact, no other priority rule moved in ranking more than one position, except for EDD, which moved two places from 3rd to 1st. In regard to the scheduling performance of the agents, our results are clearer. Enhanced agents consistently, and in some cases dramatically, outperformed basic agents. Even the worst result achieved by the enhanced agents (181.4 time units for SRR) would have ranked 2nd relative to the performance of the basic agents.
In light of the fact that the difference in computation times between basic and enhanced agents was insignificant in every case, it appears that the small additional time spent in deliberation by the enhanced agents is well worth it. Our results, taken as a whole, are encouraging in at least one specific and important fashion. They suggest a significant distinction between traditional heuristic project scheduling methods and agent-based methods, especially in consideration of the nature of enhanced agent behavior. Recognize that a priority rule, when applied by a basic agent, will generate a result very much like the result yielded when the same priority rule is applied using more traditional non-agent-based scheduling methods. To understand this, recall that each agent in the simulation experiments we conducted has two activity execution modes, one of which will be implemented by the agent. In addition, there are certain characteristics associated with every agent, activity, and mode which determine the priority of the agent in accessing resources.

The characteristics which actually determine priority are obviously based on the priority rule being used by the agents. As described earlier, the essential difference between basic and enhanced agents is the means by which they select an activity execution mode. Consider, for example, a basic agent using the LPT priority rule. Under this rule, priority in accessing resources will go to the basic agents assigned to activities with longer durations. Note, however, that under this rule each agent will actually have two priorities, one for each activity execution mode. As our agents are implemented, this gives each agent two chances to access resources, corresponding to each of the two modes. Obviously, for LPT, the mode which will have the higher priority, and therefore the first chance to access resources, is the one with the longer duration. When this mode is resource feasible, it will certainly be the mode implemented by the basic agent. It should be noted that ties between modes of equal priority are broken randomly. In contrast, however, this may not be the mode implemented by the enhanced agent. Upon determining that the mode in question is resource feasible, an enhanced agent will then consider other modes and will in many cases implement a mode other than the one by which priority was determined. Once again, this capability is the defining distinction between basic and enhanced agents. Our discussion of basic and enhanced agents is most informative when one considers how our results appear to ``pivot'' around the SPT priority rule. As mentioned earlier, SPT was the best rule when applied by basic agents, but nearly the worst when applied by enhanced agents. This is true even though the absolute results for SPT were identical for both basic and enhanced agents. What this suggests is that the criteria used to determine priority in accessing resources need not, and perhaps ought not, be the same as the criteria used for mode selection.
This is exactly the capability provided by our enhanced agents that is provided neither by basic agents nor by most traditional project scheduling heuristics of which we are aware. This encourages us to pursue further improvements in the deliberative capabilities of our enhanced agents.

12. Conclusions

Having discussed our results in detail, we can explain the majority of our observations by recognizing a few facts. First, consider a problem which is not constrained by resource requirements. By this, we are referring to a problem for which resource requirements are not part of the formulation (or a problem for which resources are so plentiful relative to requirements that no agent ever waits for resources). In such cases, it can easily be shown that the shortest project makespan will always result when every agent becomes active in the mode with the shortest realized duration. Now we consider the other extreme in regard to resource constraints. Assume that the resource requirements of all activities are such that no two activities can ever be executed concurrently. As a simple example of how this can occur, assume there is a pool of resources containing exactly one unit and all activities require exclusive allocation of that single unit (e.g., K1 = 1 and kjm1 = 1 for all j, m). This is, of course, the most highly resource constrained problem possible, since there is no feasible schedule if, for any j, kjm1 > 1 for all m. In this case, it can again be shown that all agents should choose to become active in a mode with the shortest realized duration. And again, the scheduling problem is trivial. Finally, consider a problem where resource requirements are such that the problem is somewhere between the two extremes just discussed. Is it still the case that all agents should choose to become active in the mode with the shortest realized duration? Certainly not. To see why, consider the very small project shown in Fig. 3 and the activity durations, resource requirements, and resource availabilities given in Table 9. (To simplify this example, we will assume that the Activity Duration Realizer always returns a value of dout = djm.) Under these circumstances, one can see that a preference for shorter realized durations will result in a project makespan of 8 time units. Choosing the modes with the longer realized durations, however, will result in a project makespan of 6 time units. So for this example, a tendency to prefer shorter realized durations results in a longer project makespan. While the example just given is obviously contrived to suit our argument, the point should be clear.
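The 8-versus-6 comparison can be checked directly from Table 9: with K1 = 4, the two middle activities' mode 1 (4 units each) forces serial execution, while mode 2 (2 units each) permits concurrency. A toy calculation with the schedule logic hard-coded for this symmetric two-activity case:

```python
K1 = 4
short = {"d": 4, "k": 4}   # mode 1 of activities 2 and 3 (Table 9)
long_ = {"d": 6, "k": 2}   # mode 2 of activities 2 and 3

def makespan(mode):
    # two identical activities between the dummy source and sink
    if 2 * mode["k"] <= K1:        # both fit at once: run concurrently
        return mode["d"]
    return 2 * mode["d"]           # otherwise: run back to back

print(makespan(short), makespan(long_))  # 8 vs 6: the longer modes win
```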
It is the interaction among activity durations, conflicting resource requirements, and the level of concurrency that ultimately determines the makespan of the project. Thus, where resource availabilities are abundant relative to activity demands, it generally makes sense to be more aggressive in seeking activity execution modes with shorter realized durations.

Table 9. Small project with K1 = 4

j    m    djm    kjm1
1    1    0      0
2    1    4      4
2    2    6      2
3    1    4      4
3    2    6      2
4    1    0      0

We can now draw a general conclusion that explains the majority of our experimental results. We suggest that resources were relatively abundant for the 500 problem instances we generated in accordance with the original Maroto and Tormos [1] project. Thus, there was an advantage to any mechanism that has a bias for shorter realized activity durations. Obviously, the priority rules which favor shorter durations and the enhanced agents both have such a bias. So, in conclusion, we claim that projects with resource availabilities and demands similar to those exhibited in our 500 problem instances should be scheduled with a preference for shorter realized activity durations.

13. Future work

There are many directions in which to expand this work, including the following: (i) increased agent enhancement, in which agents seek even better results by utilizing additional deliberation time; (ii) implementation of more of the previously studied priority rules for the MMRCPSP using our agent-based methodology; (iii) creation of new priority rules especially suited for agent-based systems; and (iv) implementation of heterogeneous agent systems in which agents do not all use the same priority rule. In addition to experimental work, more theoretical work is required to determine the nature of problems that are best suited for agent-based solutions. Finally, a structured approach is needed for designing and implementing agent-based systems that increases the likelihood of a successful outcome.

References

[1] Maroto, C. and Tormos, P. (1994) Project management: an evaluation of software quality. International Transactions in Operational Research, 1, 209–221.
[2] Lewis, J.P. (1995) Project Planning, Scheduling, and Control, Irwin, Chicago, IL.
[3] Lock, D. (1996) The Essentials of Project Management, Gower, Brookfield, VT.
[4] Wallace, R. and Halverson, W. (1992) Project management: a critical success factor or a management fad. Industrial Engineering, 24, 48–50.
[5] Davis, K.R., Stam, A. and Grzybowski, R.A. (1992) Resource constrained project scheduling with multiple objectives: a decision support approach. Computers and Operations Research, 19, 657–669.
[6] Herroelen, W.S. and Demeulemeester, E.L. (1994) Project management and scheduling. European Journal of Operational Research, 90, 197–199.
[7] DeMarco, T. (1982) Controlling Software Projects, Yourdon Press, New York, NY.
[8] Fried, L. (1992) The rules of project management. Information Systems Management, 9, 71–74.
[9] Willcocks, L. and Griffiths, C. (1994) Predicting risk of failure in large-scale information technology projects. Technological Forecasting and Social Change, 47, 205–228.
[10] Forsberg, K., Mooz, H. and Cotterman, H. (1996) Visualizing Project Management, John Wiley & Sons, New York.
[11] Davidson, F.P. and Huot, J.C. (1991) Large scale projects: management trends for major projects. Cost Engineering, 33, 15–23.

[12] Weinraub, B. (1995) The name Costner acquires a question mark. New York Times, 144, (Feb 21), B1(N), C13(L).
[13] Ward, J.A. (1999) Productivity through project management: controlling the project variables. Information Systems Management, Winter, 16–21.
[14] Pulk, B.E. (1990) Improving software project management. Journal of Systems Software, 13, 231–235.
[15] McComb, D. and Smith, J.Y. (1991) System project failure: the heuristics of risk. Journal of Information Systems Management, 8, 25–34.
[16] Williams, B.R. (1995) Why do software projects fail. GEC Journal of Research, 12, 13–16.
[17] Eisner, H. (1997) Essentials of Project and Systems Engineering Management, John Wiley & Sons, New York.
[18] Moulin, B. and Chaib-Draa, B. (1996) An overview of distributed artificial intelligence, in Foundations of Distributed Artificial Intelligence, O'Hare, G.M.P. and Jennings, N.R. (eds), John Wiley & Sons, New York, pp. 3–55.
[19] Fisher, M. (1994) Representing and executing agent-based systems, in Proceedings of the ECAI '94 Workshop on Agent Theories, Architectures, and Languages, Springer-Verlag, Berlin, pp. 307–323.
[20] Jennings, N.R. and Wooldridge, M. (1995) Applying agent technology. Applied Artificial Intelligence, 9, 357–369.
[21] Davidsson, P., Astor, E. and Ekdahl, B. (1994) A framework for autonomous agents based on the concept of anticipatory systems, in Proceedings of Cybernetics and Systems '94, Vol II, World Scientific, Singapore, pp. 1427–1434.
[22] Maes, P. (1995) Modeling adaptive autonomous agents, in Artificial Life: An Overview, Langton, C.G. (ed), MIT Press, Cambridge, MA, pp. 135–162.
[23] Nwana, H.S. and Ndumu, D.T. (1997) An introduction to agent technology, in Software Agents and Soft Computing: Towards Enhancing Machine Intelligence, Nwana, H.S. and Azarmi, N. (eds), Springer-Verlag, Berlin, pp. 3–26.
[24] Singh, M.P. (1994) Multiagent Systems: A Theoretical Framework for Intentions, Know-How, and Communications, Springer-Verlag, Berlin.
[25] Rozenblit, J.W. (1992) Design for autonomy: an overview. Applied Artificial Intelligence, 6, 1–18.
[26] Bussman, S. and Demazeau, Y. (1994) An agent model combining reactive and cognitive capabilities, in Proceedings of IROS 94, Vol 3, IEEE, New York, pp. 2095–2102.
[27] Parunak, H. Van Dyke (1996) Applications of distributed artificial intelligence in industry, in Foundations of Distributed Artificial Intelligence, O'Hare, G.M.P. and Jennings, N.R. (eds), John Wiley & Sons, New York, pp. 139–164.
[28] Guha, R.V. and Lenat, D.B. (1994) Enabling agents to work together. Communications of the ACM, 37, 126–142.
[29] Ferber, J. (1996) Reactive distributed artificial intelligence: principles and applications, in Foundations of Distributed Artificial Intelligence, O'Hare, G.M.P. and Jennings, N.R. (eds), John Wiley & Sons, New York, pp. 287–314.
[30] Colorni, A., Dorigo, M. and Maniezzo, V. (1992) Distributed optimization by ant colonies, in Toward a Practice of Autonomous Systems: Proceedings of the First European Conference on Artificial Life, Varela, F.J. and Bourgine, P. (eds), MIT Press, Cambridge, MA, pp. 134–142.
[31] Liu, J.-S. and Sycara, K.P. (1998) Multiagent coordination in tightly coupled task scheduling, in Readings in Agents, Huhns, M.N. and Singh, M.P. (eds), Morgan Kaufmann, San Francisco, CA, pp. 164–171.
[32] Kolisch, R. (1996) Serial and parallel resource-constrained project scheduling methods revisited: theory and computation. European Journal of Operational Research, 96, 320–333.

[33] Kolisch, R. and Sprecher, A. (1996) PSPLIB – a project scheduling problem library. European Journal of Operational Research, 96, 205–216.
[34] Boctor, F.F. (1996) A new and efficient heuristic for scheduling projects with resource constraints and multiple execution modes. European Journal of Operational Research, 90, 349–361.
[35] Alvarez-Valdes Olaguibel, R. and Tamarit Goerlich, J.M. (1989) Heuristic algorithms for resource-constrained project scheduling: a review and empirical analysis, in Advances in Project Scheduling, Slowinski, R. and Weglarz, J. (eds), Elsevier, Amsterdam, pp. 113–134.
[36] Özdamar, L. and Ulusoy, G. (1995) A survey on the resource-constrained project scheduling problem. IIE Transactions, 27, 574–586.
[37] Kolisch, R. (1995) Project Scheduling under Resource Constraints: Efficient Heuristics for Several Problem Classes, Physica-Verlag, Heidelberg, Germany.
[38] Morton, T.E. and Pentico, D.W. (1993) Heuristic Scheduling Systems with Applications to Production Systems and Project Management, John Wiley & Sons, New York.
[39] Knotts, G., Dror, M. and Hartman, B. (1998) A project scheduling methodology derived as an analogue of digital circuit technology. Annals of Operations Research, 82, 9–27.
[40] Knotts, G., Dror, M. and Hartman, B. (1998b) A project management tool for computer-supported collaborative work during project planning, in Proceedings of the 31st Hawaii International Conference on System Sciences, Vol I, pp. 623–631.
[41] Wakerly, J.F. (1990) Digital Design Principles and Practice, Prentice Hall, Englewood Cliffs, NJ.

Biographies

Gary Knotts received his Ph.D. in Management Information Systems from the University of Arizona in 1998. His current research interests include project management and scheduling. His research has been published in Annals of Operations Research and MIS Quarterly.

Moshe Dror is a Professor in the MIS Department at the College of Business and Public Administration, University of Arizona. His current research ranges from combinatorial problems such as routing and machine scheduling to problems of cost allocation in logistics. He has written extensively on those topics, and his publications have appeared in numerous journals and as book chapters. He received a Ph.D. degree from the University of Maryland at College Park, and an I.E. degree and an M.Sc. (in Mathematical Methods) degree, both from Columbia University. He serves as a Department Editor (Applied Optimization) for IIE Transactions on Operations Engineering and on a number of other journals' editorial boards.

Bruce C. Hartman has been in both industry and academic environments. He was an IS director in both mining and Silicon Valley firms. He obtained his Ph.D. in MIS at the University of Arizona, specializing in supply chains, and taught there as well. Currently he manages large information systems projects in the manufacturing area for Lucent Technologies in San Jose, CA.

Contributed by the Project Selection, Coordination and Management Department