Chapter 19

ASYNCHRONOUS TEAMS

Sarosh Talukdar
Carnegie Mellon University

Sesh Murthy and Rama Akkiraju
T. J. Watson Labs, IBM

1 INTRODUCTION

Obtaining a good solution to a complex problem, such as the operation of an electric grid or the scheduling of a job-shop, can require a large and diverse set of skills. (An electric grid has millions of devices that must be controlled and coordinated. In job-shop scheduling, much of the problem can often be solved by a single optimization procedure, such as linear programming. But the remainder invariably requires a variety of situation-specific heuristics and insights.) The questions for the designer of a problem-solving system are which skills to include and how to package them.

In an Asynchronous Team (A-Team), skills are packaged as agents. More specifically, an A-Team is a multi-population, multi-agent system for solving optimization problems. The calculation process is as follows. The problem-to-be-solved is decomposed into sub-problems. A population of candidate-solutions is maintained for each sub-problem. These populations are iteratively changed by a set of agents. Also, solutions are made to circulate among the populations. Under conditions to be discussed, some of the solutions in each population will improve. The calculation process is terminated when the improvements cease.

The agents in an A-Team are identical in two respects. First, they are all completely autonomous (they work without any supervision). Second, they all have the same work-cycle (in every iteration, every agent performs the same three steps: select a solution from a population; modify the selected solution; then insert the modified solution in a population). In all but these two respects, the agents can, and usually should, be diverse. For instance, some may be computer programs while others are human. Some may iterate quickly while others are slow. Some may make small improvements while others make radical or even innovative changes to the candidate-solutions.


In designing a problem-solving system there are four conflicting issues of concern:

Design and assembly: How much effort will be needed to produce and upgrade the system?
Solution-quality: How close will the solutions obtained by the system be to the best possible solutions?
Solution-speed: How quickly will the system complete its calculations?
Robustness: How will solution-quality and speed be affected by failures and disturbances?

A qualitative assessment of the principal relationships between the structural features of A-Teams and these issues of concern is given in Figure 19.1.

The remainder of the chapter is organized as follows. Section 2 lists the terminology that will be used in describing problems and their solutions. Sections 3–6 elaborate on the cause–effect relationships indicated in Figure 19.1. Section 7 describes the structure of an A-Team. Section 8 describes how the quality of the solutions produced by an A-Team can, when necessary, be improved. Section 9 contains some guidelines for designing A-Teams. And Section 10 describes an A-Team that is used for solving job-shop-scheduling problems from paper mills.

2 OPTIMIZATION TERMINOLOGY

Optimization is not the best language for expressing every problem, but it is relatively simple and quite general (all problems can, at least in principle, be expressed as optimization problems). It is used here for these reasons.

An optimization problem is a search specified in terms of three components: objectives, constraints and decision variables. The objectives specify the goals of the search, the constraints specify the limits of the search, and the decision variables specify the space over which the search is to be conducted. In other words, an optimization problem has the form: optimize the objectives in O over the decision variables in X, subject to the constraints in C, where O is a set of objectives, C is a set of constraints,
and X is a set of decision variables. The values that the decision variables can take constitute a space, S, that is called decision- or solution-space.

As an example, consider the problem of simultaneously solving two equations in x and y, one of which is x = y. One of many ways of expressing this problem in optimization terms is to minimize, over x and y, a measure of the violation of the first equation, subject to the constraint x = y. Here, x and y are decision variables, the violation measure is an objective, x = y is a constraint, and the solution-space is the plane whose axes are the decision variables, x and y. The solution space can be divided into three subsets: infeasible solutions (those violating one or more constraints), feasible solutions (those meeting all the constraints), and optimal solutions (the best of the feasible solutions).

Real problems often contain either no objectives or multiple conflicting objectives. In the former case, all the feasible solutions are optimal; in the latter case, the Pareto solutions are optimal. (Two objectives conflict, as opposed to being commensurate, when one objective cannot be improved without degrading the other. A Pareto solution is a feasible solution that is not dominated by any other feasible solution. Solution X dominates solution Y if X is at least as good as Y with respect to all the objectives, and X is better than Y with respect to at least one objective.)

Two optimization problems are coupled if they share decision variables, constraints or objectives. Consider a set of coupled optimization problems. The point X is a Pareto solution of this set if X is feasible (with respect to the constraints of all the problems) and X is not dominated (with respect to the objectives of all the problems) by any other feasible point.

Suppose that the problem-to-be-solved is decomposed into B, a set of coupled sub-problems. Suppose that a set of agents, V, is assigned the task of iteratively solving the sub-problems. The result is a dynamic system with five components: B, V, the assignment of agents to sub-problems, the schedule of communications among the agents, and the schedule of the iterations by the agents. If the results of this dynamic system converge, they will do so, not to optimal solutions of the original problem but, rather, to equilibria and other attractors of the dynamic system. (These attractors are features, such as points and limit cycles, in the solution space of the sub-problems.) Note: changing any of the five components of the dynamic system will change its attractors. If:

the assignment of agents to sub-problems is one-to-one (sub-problem m is assigned to agent m),
the iteration schedule prescribes sequential iterations (the agents take turns working, each completing one iteration per turn),
V contains only optimizing agents (each agent solves its sub-problem to optimality in each iteration),
the communication schedule prescribes information broadcasts (each agent sends the results of each of its iterations to all the other agents), and
the iteration-results converge to a single point in the solution space of the original problem,


then this point is called a Nash equilibrium. Note: a Nash equilibrium represents a stalemate (once all the agents get there, no agent can leave without suffering a degradation of its objective or a violation of its constraints). Also, the Pareto solutions of the original problem, the Pareto solutions of B, and the Nash equilibria may all be distinct and quite different.
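The dominance test used in these definitions is easy to make concrete. The following minimal Python sketch is an illustration added here, not part of the chapter; the representation of a solution as a tuple of objective values, with larger values taken to be better, is an assumption made purely for the example.

```python
def dominates(x, y):
    """Return True if solution x dominates solution y.

    x and y are tuples of objective values for the same objectives;
    larger values are assumed to be better (an illustrative convention).
    x dominates y if it is at least as good on every objective and
    strictly better on at least one.
    """
    at_least_as_good = all(xi >= yi for xi, yi in zip(x, y))
    strictly_better = any(xi > yi for xi, yi in zip(x, y))
    return at_least_as_good and strictly_better


def pareto_set(solutions):
    """Return the solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]


# Example with two conflicting objectives: (3, 1), (1, 3) and (2, 2) are
# Pareto solutions, while (1, 1) is dominated by all of them.
print(pareto_set([(3, 1), (1, 3), (2, 2), (1, 1)]))
```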

3 POPULATION METHODS

Consider a problem with solution space S. A population method, when applied to this problem, generates a sequence of populations, P_1, P_2, ..., where each P_n is a population (set) of candidate solutions, and P_n is calculated from earlier populations P_k with k < n. The calculations cease when at least one member of the latest population is found to have come acceptably close to an optimal solution.

The original population method, called the Simplex method, was proposed by Spendley et al. [1]. The name comes from the method's use of a simplex to search the solution-space. A simplex in a space of dimension m is a set of m + 1 points that can be thought of as the vertices of a body with planar faces. The method obtains P_{n+1} from P_n by replacing the worst vertex in P_n by its reflection in the centroid of the other vertices. Many variations of this method have been proposed, including methods called "swarms" that claim insect societies as their inspiration, but have more in common with simplex methods than insects. On the plus side, simplex-like methods have wide applicability and easy implementation; on the minus side, they can be very slow and tend not to handle constraints well.

Genetic algorithms are better known examples of population methods than the Simplex method. In the basic genetic algorithm, solutions are represented by binary strings, and P_{n+1} is calculated from P_n in two steps. First, a "fitness function" is used to select the fittest members of P_n. Next, "crossover" and "mutation," two operators that mimic processes in natural reproduction, are repeatedly applied to the fittest members to produce the new population. On the plus side, genetic algorithms tend to produce good solutions and are fairly robust. On the minus side, they are painfully slow and are often difficult to apply, a consequence of using only string-representations and synchronous calculations (P_{n+1} must be calculated from P_n, and the faster operators have to wait for the slowest to finish). A-Teams attempt to eliminate the disadvantages of simplex-like methods and genetic algorithms by packaging skills in autonomous agents rather than in mathematical operators, such as "crossover" and "mutation."
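The reflection step described above is simple to express in code. The sketch below is an illustration added here (not from the chapter); it assumes a minimization objective f and uses NumPy for the vector arithmetic.

```python
import numpy as np

def simplex_step(vertices, f):
    """One iteration of the basic simplex reflection move.

    vertices: array of shape (m + 1, m), the current simplex in an
              m-dimensional solution space.
    f:        objective function to be minimized (an assumed convention).
    Returns a new simplex with the worst vertex replaced by its reflection
    in the centroid of the other vertices.
    """
    vertices = np.asarray(vertices, dtype=float)
    values = np.array([f(v) for v in vertices])
    worst = np.argmax(values)                      # index of the worst vertex
    others = np.delete(vertices, worst, axis=0)    # the remaining m vertices
    centroid = others.mean(axis=0)
    reflected = 2.0 * centroid - vertices[worst]   # mirror image through the centroid
    new_simplex = vertices.copy()
    new_simplex[worst] = reflected
    return new_simplex

# Example: one step on f(x, y) = x^2 + y^2; the worst vertex (2, 2) is
# reflected to (-1, -1), which has a better objective value.
f = lambda v: v[0] ** 2 + v[1] ** 2
print(simplex_step([[2.0, 2.0], [1.0, 0.0], [0.0, 1.0]], f))
```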

4 AUTONOMOUS AGENTS

Agents are the modules from which problem-solving systems can be built. Structurally, an agent can be thought of as a bundle of sensors, decision-makers and actuators. Behaviorally, an agent can be thought of as a mapping from an in-space (all the things the agent can sense) to an out-space (all the things the agent can affect). These spaces may overlap. (Note that by these definitions, a great many, very different things qualify as agents, including thermostats, computer programs, mobile robots, insects, people, societies and corporations.)

An agent is autonomous to the extent that it is unsupervised. A non-autonomous agent does only what its supervisors tell it to do; it makes no decisions for itself. In contrast, a completely autonomous agent has no supervisors; rather, it decides entirely for itself what to do and when. The exclusive use of completely autonomous agents has some advantages: No supervisory structure need be constructed. The mistakes and failures of an agent do not affect its supervisees (there are none). The speed of an agent's reactions is limited only by its capabilities, not by the delays in a chain-of-command. The agents can work asynchronously (in parallel, each at its own speed). The agents can improve themselves through learning (converting experiences into competence). However, the work of completely autonomous agents cannot be externally coordinated. Unless such agents are self-coordinating, they will tend to work at cross-purposes, creating pandemonium rather than progress.

In an A-Team, self-coordination results from the agents' selection mechanisms. (These mechanisms are part of the first step of the work-cycle of each agent. This work-cycle is outlined below. The selection mechanisms and other details are described in later sections.) The agents of an A-Team work on populations of candidate-solutions. These populations are stored in computer memories. Consider the j-th agent. In each of its iterations, this agent performs three steps: (a) it reads the incumbent population, P, in its input-memory, selects some candidate-solutions, S_j(P), and removes the selected solutions from P; (b) it modifies the selected solutions, producing M_j(S_j(P)); and (c) it inserts the modified solutions into the incumbent population in its output-memory. Here, S_j and M_j are mappings representing the selection and modification processes of the j-th agent, and the populations in this agent's input and output memories form sequences indexed by its iterations.

The agents can be divided into two categories, creators and destroyers, by differences in the way they perform the second step. Creators add solutions to the populations in their output memories (M_j(S_j(P)) is non-empty); destroyers remove solutions from the populations in their input memories (M_j(S_j(P)) is empty). The creators can be further divided into constructors and improvers. The constructors produce new solutions from scratch (S_j(P) is empty); the improvers modify existing solutions (S_j(P) is non-empty). Note that the output memory of an agent may be the same as its input memory, and several agents may share memories, resulting in configurations of the sort shown in Figure 19.2.
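A minimal sketch of this work-cycle and agent taxonomy is given below. It is an illustration added here; the class and function names, and the use of plain Python lists as memories, are assumptions, not details from the chapter.

```python
import random

class Agent:
    """Base class: one iteration = select, modify, insert."""
    def iterate(self, input_population, output_population):
        selected = self.select(input_population)      # step (a): select (and remove)
        modified = self.modify(selected)              # step (b): modify
        output_population.extend(modified)            # step (c): insert

class Constructor(Agent):
    """Creator that builds solutions from scratch (selects nothing)."""
    def __init__(self, build):
        self.build = build
    def select(self, population):
        return []
    def modify(self, selected):
        return [self.build()]

class Improver(Agent):
    """Creator that modifies existing solutions."""
    def __init__(self, improve):
        self.improve = improve
    def select(self, population):
        if not population:
            return []
        s = random.choice(population)
        population.remove(s)          # the selected solution is removed from the input population
        return [s]
    def modify(self, selected):
        return [self.improve(s) for s in selected]

class Destroyer(Agent):
    """Removes solutions from its input population; inserts nothing back."""
    def select(self, population):
        if population:
            population.remove(random.choice(population))  # in practice, biased toward poor solutions
        return []
    def modify(self, selected):
        return []

# Example wiring: one shared memory, as in the single-memory thought experiment of Section 8.
memory = [1.0, 2.0, 3.0]
Improver(lambda s: s + 0.5).iterate(memory, memory)
Destroyer().iterate(memory, memory)
```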

5 ASYNCHRONOUS WORK

Synchronous iteration schedules dictate when agents must work and when they must remain idle. They do this by specifying the order in which iterations are to occur, for instance, "no agent will begin iteration m + 1 till all the agents have completed iteration m." Asynchronous schedules do not constrain the order of iterations. Rather, they allow all the agents to work in parallel all the time, each at its own speed. In other words, asynchronous schedules allow completely autonomous agents, if they so choose, to reduce their idle times to zero.

The relationship between idle times and overall performance is neither obvious nor monotone. But in certain circumstances, reducing idle times does increase performance. A qualitative explanation follows. Consider a synchronous schedule in which the iterations of all the agents occur in lock-step, and there is just one memory that serves as both the input and the output memory for all the agents. Suppose that in its n-th iteration, each agent selects and modifies some members from the latest population, P_n, in this memory, and the aggregate of these modifications produces the next population, P_{n+1}.

In other words, P_{n+1} is the aggregate of the modifications made to P_n by all the agents in J, where J is the set of agents. In this synchronous schedule, all the agents must wait till the slowest one has finished.

This synchronous schedule can be converted to an asynchronous schedule by relaxing the constraint that the agents work in lock-step and allowing each to iterate as fast as it can. As a result, one agent may be performing its 100-th iteration while another is just on its 10-th. Symbolically, the m-th iteration of the j-th agent transforms the latest population in the memory when agent-j starts the iteration into the latest population when agent-j completes the iteration. It happens that sufficient conditions for the synchronous iterations to converge to a unique solution are only slightly less restrictive than sufficient conditions for the asynchronous iterations to converge to the same solution [2]. Therefore, one may expect that the reductions in idle time obtained through the use of asynchronous schedules will often leave solution-quality unaffected.

What about solution-speed? It seems that populations are able to transmute reductions in idle-time into increases in solution-speed when many agents are involved. We do not understand the mechanism, but populations seem to be especially helpful when the agents differ considerably in speed. Fast agents, such as those using hill-climbing techniques, iterate quickly and tend to be good at refining solutions through a succession of many incremental improvements. Slower agents, such as humans, are often better at making radical changes. These two types of changes can be combined by constraining the fast and slow agents to work in series, so a few iterations of a slow agent are always followed by many iterations of a fast agent. Populations provide a better way to combine fast and slow agents. Specifically, they free agents to work in parallel, each as fast as it can. In other words, populations seem to provide for the automatic blending of incremental and radical changes. (For experiments with mixes of very different software agents, see [3,4]; for mixes of software and human agents, see [4,16].)
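The contrast between the two schedules can be sketched in a few lines of Python. This is an illustration added here; the use of threads, a shared list as the single memory, and the two toy agent functions are all assumptions, not details from the chapter.

```python
import random
import threading

memory = []                    # one shared memory holding a population of candidate solutions
lock = threading.Lock()        # protects the shared population

def agent(modify, iterations):
    """A completely autonomous agent: it iterates at its own pace and never
    waits for any other agent (an asynchronous schedule)."""
    for _ in range(iterations):
        with lock:
            s = random.choice(memory) if memory else 0.0   # step (a): select
            if memory:
                memory.remove(s)
        s = modify(s)                                      # step (b): modify (fast or slow)
        with lock:
            memory.append(s)                               # step (c): insert

# Two very different agents: a fast incremental improver and a slow, radical one.
fast = threading.Thread(target=agent, args=(lambda s: s + random.uniform(0.0, 0.01), 2000))
slow = threading.Thread(target=agent, args=(lambda s: s + random.uniform(0.0, 5.0), 50))

memory.extend([0.0, 0.0, 0.0])   # initial population from a trivial constructor
for t in (fast, slow):
    t.start()
for t in (fast, slow):
    t.join()
print(max(memory))               # best candidate produced by the blended, parallel effort
```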

6 COMPETITION AND COOPERATION

In 1776, Adam Smith made a case for markets and competition [5]. His insight was that good solutions to certain large and difficult problems could be obtained by dividing the problem among a number of autonomous agents, and making these agents compete. This insight has become an article of faith for some designers of autonomous-agent systems, particularly designers of markets. But autonomy and competition do not, by themselves, guarantee optimality. Rather, a number of other, quite impractical conditions must also be met, such as the availability of perfect information and the absence of externalities [6]. These other conditions are invariably violated by real systems. Consequently, competitive arrangements often produce very poor solutions, such as the shortages and high prices of the California energy market of 2001. Of course, agents do not have to compete. They can, instead, cooperate.

Competition requires conflicting goals. In optimization terms, two agents can compete if they are assigned distinct but coupled problems with conflicting objectives. Cooperation requires commensurate goals. In optimization terms, two agents can cooperate if they are assigned coupled problems with commensurate or almost commensurate objectives. In other words, whether agents cooperate or compete depends, at least in part, on the problems they are assigned.

Let B be a decomposition of the problem-to-be-solved into sub-problems, such that the original problem and B have the same Pareto solutions. If B is a competitive decomposition then, on rare occasions, its Nash equilibria will be the same as the Pareto solutions of the original problem. More often, the Nash equilibria of B will be quite inferior to those Pareto solutions. In contrast, if B is a cooperative decomposition, its Nash equilibria will invariably be the same as the Pareto solutions of the original problem. In other words, cooperative decompositions produce better Nash equilibria than competitive decompositions. The generalization of this observation is: cooperation makes possible solutions of higher quality than can be obtained from competition. We suspect that this generalization is valid.

The structure of an A-Team allows for cooperation at two levels. First, the problem to be solved is decomposed into sub-problems. The team's designer can choose to make this a cooperative decomposition. Second, multiple agents are assigned to each sub-problem. These agents select and modify solutions from a population of candidate-solutions that is maintained for the sub-problem. The agents work asynchronously, each using its own methods for deciding when to work, which solutions to select, and how to modify them. The agents do not explicitly coordinate their activities with one another, nor is there a central controller. Rather, the agents cooperate through the products of their work, by modifying solutions that have been worked on by other agents. The final solution is invariably the result of contributions from many agents.

(This form of cooperation was inspired by the work-styles of social insects: bees, ants and certain wasps. Although these insects live in close-knit colonies, they have no supervisors. Certainly there is a queen, but her function is strictly reproductive; she does not lead the other colony members, nor does she issue orders to the workers. Although different castes exist within the colony, such as drones, soldiers and workers, there is no hierarchical relationship among them. Rather, they act as autonomous agents and cooperate through the products of their work. For instance, the construction of the nest proceeds without centralized control or the benefit of a blueprint to show what the finished result should be; instead, "it is the product of work previously accomplished, rather than direct communication among nestmates, that induces the insects to perform further labor. Even if the work force is constantly renewed, the nest structure already completed determines, by its location, its height, its shape and probably also its odor, what further work will be done" [19].)

7 ORGANIZATIONS, SUPER-AGENTS AND ORGANIZATION SPACE

The previous sections have covered some aspects of the structure of A-Teams. This section specifies their structure more precisely, and lists the decisions that the designers of A-Teams must make.


Lesser agents can be organized into greater (super) agents, which can be organized into still greater agents, and so on, just as cells are organized into organs, which are organized into humans, which are organized into societies and nations. The capabilities of a super-agent depend on its organization, and can range from much less than the sum of the capabilities of its constituent agents to very much more. In other words, the design of the organization is at least as important as the choice of agents. An organization is the "glue" that binds the constituent agents together. Its purpose is two-fold: to divide the labor among the agents, and to coordinate their labor. Structurally, an organization can be thought of as a stack of five networks (Figure 19.3):

1. Control Flow: a tree-like network that divides the agents into layers, showing (a) supervisory relationships (who reports to whom), and (b) how much autonomy each agent has. Nodes in this network denote agents. Directed arcs denote supervisory relationships. A number from zero to one is appended to each arc to denote the degree of control the supervisor exercises over the supervisee (the larger this number, the greater the degree of control).

2. Problem Decomposition: a network that shows the results of decomposing the problem-to-be-solved into B, a set of sub-problems. Nodes represent the sub-problems. Arcs represent the couplings among the sub-problems.

3. Sub-Problem Assignment: a bi-partite network that shows which agent is assigned to which sub-problem. Nodes are of two types; one type represents agents, the other, sub-problems. Arcs connect agents to the sub-problems they have been assigned.

4. Data Flow: a directed, bipartite network that shows who can "talk" to whom and how. There are two types of nodes; one type represents agents, the other, data stores. Arcs represent directed communication channels over which the agents can send messages to one another, post messages in the data stores, or obtain messages from the data stores. (These stores serve as bulletin boards for the agents connected to them.)

5. Work Schedule: a network that shows the order in which the agents' tasks (iterations and communications) are to be performed. Nodes in this network represent the tasks. Arcs represent the precedence constraints among the tasks.

The problem of designing an organization has a very large solution space, namely, the set of all the possible five-network stacks of the sort shown in Figure 19.3. Of course, other problems, such as aircraft and computer design, also have large solution spaces. But many of these other problems benefit from extensive simulation and verification facilities. They can use "generate and test" strategies, i.e., strategies that rely on the quick and accurate evaluation of many candidate-solutions. However, such simulation and verification facilities are not available for organizations. Therefore, it is necessary to prune their space, leaving a smaller and more easily searched sub-space. The set of A-Teams is one such sub-space. It is obtained from organization space by:

Setting the control flow network to "null" (the agents in an A-Team are completely autonomous).
Assigning multiple agents to each sub-problem.
Eliminating all the arcs from the data flow that connect pairs of agents (the agents in an A-Team communicate only with computer memories, not other agents, and cooperate only by modifying one another's work).
Making the data flow strongly cyclic, i.e., establishing paths, through agents, by which solutions can circulate among the memories.
Setting the work schedule network to "null" (the agents in an A-Team work asynchronously).

Note that each memory in an A-Team contains a population of solutions. All the solutions in a population are expressed in the same representational form. This representation can vary from one memory to another. A memory used by humans may express solutions in diagrams, while a memory used by optimization software may use vectors. One way to allow for agents that require different representations but must work on the same population is to maintain copies of the population, each in a different representation and memory.
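The pruned structure (null control flow, agents connected only to memories, cyclic data flow) is small enough to describe as a toy configuration. The sketch below is an illustration added here; the memory names, agent names and the reachability helper are assumptions, not part of the chapter.

```python
# Each agent is described only by the memory it reads from and the memory it
# writes to; there are no agent-to-agent arcs and no supervisors.
agents = {
    "constructor_1": {"reads": "runs",  "writes": "runs"},
    "improver_1":    {"reads": "runs",  "writes": "trims"},
    "improver_2":    {"reads": "trims", "writes": "runs"},
    "destroyer_1":   {"reads": "trims", "writes": "trims"},
}

def circulation_possible(agents, start, target):
    """Check that solutions can travel from memory `start` to memory `target`
    through some chain of agents (i.e., the data flow allows circulation)."""
    reachable, frontier = {start}, [start]
    while frontier:
        mem = frontier.pop()
        for a in agents.values():
            if a["reads"] == mem and a["writes"] not in reachable:
                reachable.add(a["writes"])
                frontier.append(a["writes"])
    return target in reachable

# Solutions written into "runs" can reach "trims" and circulate back again.
assert circulation_possible(agents, "runs", "trims")
assert circulation_possible(agents, "trims", "runs")
```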

8 SOLUTION-QUALITY AND CONVERGENCE

The following thought experiment helps explain how A-Teams work and how solution-quality can be improved. Consider an A-Team that contains only one memory, which is dedicated to storing a population of solutions to the problem. This memory is shared by C, a set of construction agents, I, a set of improvement agents, and D, a set of destruction agents. Suppose that the improvement agents may select solutions randomly, but make only deterministic modifications to the selected solutions. Suppose that the construction agents create an initial population of solutions, P_0, from which the agents in I and D, working asynchronously, produce a sequence of populations, P_1, P_2, .... Suppose that the work is stopped when a population, P_N, is obtained, such that no agent in I can improve the quality of the best solution in P_N. What is the quality of P_N? And, is N finite?

To address these questions, we define a distance metric in terms of iterations by improvement agents. Consider a trajectory, s_0, s_1, ..., s_M, in S, the solution space of the problem. Each step, from s_k to s_{k+1}, in this trajectory is produced by one iteration of an agent drawn from I. Thus, we can think of the trajectory as being M iterations long.

Of course, there may be other trajectories from s_0 to s_M. We define the distance from s_0 to s_M as the length, in iterations, of the shortest trajectory from s_0 to s_M. Note that the distance from s_0 to s_M is: (a) infinite, if there is no trajectory from s_0 to s_M; and (b) dependent on I. Let:

q(s) be a calculable, scalar measure of the quality of every possible solution, s, with q(s) = 1 if s is an optimal solution, and q(s) = 0 if s is very different from an optimal solution;

q_n be the quality of population P_n;

G(Q) be the set of all the solutions of quality Q or better, that is, G(Q) = {s: q(s) >= Q};

f(s, g, I) be the distance, in iterations, from s to g; f(s, g, I) is infinite if there is no trajectory from s to g; and

d(P_0, G(Q), I) be the distance, in iterations, from P_0 to G(Q), that is, the smallest value of f(s, g, I) over all s in P_0 and g in G(Q). Note: d(P_0, G(Q), I) is infinite when I does not contain the skills necessary to transform any point in P_0 into a solution of quality Q or better.

If:

d(P_0, G(Q), I) is finite;

the improvers select solutions randomly, with a bias for solutions of higher quality (the biasing details are given in [13]);

the destroyers select solutions randomly, with a bias towards solutions of lower quality (the biasing details are given in [13]); and

q_n is made non-decreasing with n by saving a copy of the best solution in P_n;

then G(Q) is reachable (the expected value of N is finite, and the expected value of q_N is Q or better). The proof can be found in [13,18].

The problem of calculating the distance, d(P_0, G(Q), I), is intractable. Therefore, the above result cannot be used to predict the quality of the solutions that will be produced by an A-Team. But it does tell us that G(Q) will be reachable if the agents use relatively simple and random strategies for selection, and if there is at least one trajectory from a point in P_0 to a point in G(Q). Furthermore, the lack of such a trajectory can be remedied by adding construction agents (thereby changing P_0) or adding improvement agents (thereby increasing the number of trajectories emanating from points in P_0, and the chances that one of them will pass through a point in G(Q)). For instance, if I allows for trajectories from a point in P_0 to a point a, and from a point b to a point in G(Q), and if we happen to augment I with an agent that can bridge the gap from a to b, then a trajectory from P_0 to G(Q) will be obtained. In other words, solution-quality tends to increase as the number and diversity of the construction and improvement agents increases. Solution-quality can also be increased by using better destroyers. (We suspect that creation, that is, construction together with improvement, and destruction are duals, and that adept destruction can compensate for inept creation, and vice-versa.)
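The conditions above (quality-biased selection, destruction biased toward poor solutions, and a saved copy of the best solution) are easy to mimic in a toy driver loop. The sketch below is an illustration added here; the quality function, the modification rule, and the particular biasing scheme (selection probability proportional to a quality-based weight) are assumptions, since the chapter defers the biasing details to its references.

```python
import random

def quality(s):
    """q(s): a problem-specific, calculable quality measure in [0, 1].
    Here, a stand-in used only for the sketch (best solutions are near 0)."""
    return max(0.0, 1.0 - abs(s))

def biased_choice(population, bias_to_high_quality=True):
    """Random selection with a quality bias (one possible scheme)."""
    weights = [quality(s) + 0.01 for s in population]
    if not bias_to_high_quality:
        weights = [max(weights) + min(weights) - w for w in weights]
    return random.choices(population, weights=weights, k=1)[0]

population = [random.uniform(-10, 10) for _ in range(20)]    # P_0 from the constructors
best = max(population, key=quality)                          # saved copy of the best solution

for n in range(1000):
    s = biased_choice(population)                            # improver: bias toward high quality
    improved = s * 0.9                                        # a deterministic modification
    population.append(improved)
    victim = biased_choice(population, bias_to_high_quality=False)  # destroyer: bias toward low quality
    population.remove(victim)
    best = max(best, max(population, key=quality), key=quality)     # keeps q_n non-decreasing

print(quality(best))
```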


Of course, increasing the number of agents could increase the total computing time. But the agents work asynchronously. Therefore, providing more computers, so the additional agents can work in parallel with all the other agents, is likely to make any increases in total computing time slight, if not negligible. In other words, one might expect, from the above analysis, that A-Teams are scale-effective: adding agents to an A-Team tends to improve its solution-quality, and adding computers tends to improve its solution-speed. There is some empirical evidence in support of this conclusion [3], but not enough to be completely convincing.

9 DESIGN GUIDELINES

The design of A-Teams, like the design of many artifacts, is a craft whose procedural knowledge does not extend beyond very general guidelines. In such situations, the designer has little choice but to build, test and modify prototypes, till one with acceptable behavior is found. Since A-Teams tend to be scale-effective, it makes sense to start with a small team, adding agents when solution-quality is to be improved, and adding computers when solution-speed is to be improved. Some guidelines and observations for designers to keep in mind are:

Problem decomposition: The decomposition of the problem-to-be-solved is critical; only the number and diversity of the agents assigned to each sub-problem affect overall performance to a greater extent. Let P be the problem to be solved, and let p be a sub-problem. Then, p can be of three types:

1. p is P itself;

2. p is related to P just as the 1-Tree problem is related to the traveling salesman problem (Figure 19.2);

3. p is a component of P, that is, an optimization problem whose set of objectives, set of constraints C, and set of decision variables X are drawn from the corresponding sets of P.

(Notice that the A-Team of Figure 19.2 contains sub-problems of all three types, while the team of Figure 19.4 contains only types 1 and 3.) All A-Teams should contain at least one sub-problem of type-1 so that solutions to P are automatically available. The quality of the final solutions to this sub-problem can usually be improved by adding sub-problems of types 2 and 3.

Agent assignment: The greater the variety of skills that are brought to bear on a sub-problem, the greater the quality of the solutions that will be obtained for it. In other words, the solution-quality of an A-Team can be improved by adding agents with new and relevant skills. In making these additions, one should keep in mind that creation and destruction appear to be duals: adept destruction can probably compensate for inept creation, and vice-versa. In other words, adding adept destroyers is as good as adding adept creators.

Data flow: Empirical evidence suggests that the circulation of solutions among the populations has a beneficial effect on both solution-quality and speed.

Population size: In our experience, solution-quality increases with population size, but at a rapidly diminishing rate.


Selection strategies: We have experimented with only two selection strategies for improvement agents: (a) randomly select solutions with a bias that makes the better solutions more likely to be selected, and (b) randomly select solutions with a bias towards solutions the agent is more likely to be able to improve. Both seem to work well. For destroyers, we have also tried two strategies: (a) randomly select solutions with a bias towards the poorer solutions, and (b) select duplicates. Both seem to work.
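Two of the strategies listed above, selection biased toward solutions a particular improver can handle, and duplicate removal by a destroyer, might look roughly like the sketch below. It is an illustration added here; the `can_improve` predicate, the weight value, and the assumption that solutions are hashable are not from the chapter.

```python
import random

def select_for_improver(population, can_improve, bias=5.0):
    """Strategy (b) for improvers: random selection biased toward solutions
    this particular improver believes it can improve."""
    weights = [bias if can_improve(s) else 1.0 for s in population]
    return random.choices(population, weights=weights, k=1)[0]

def delete_duplicates(population):
    """Strategy (b) for destroyers: remove duplicate solutions, keeping one
    copy of each (assumes solutions are hashable)."""
    seen = set()
    unique = []
    for s in population:
        if s not in seen:
            seen.add(s)
            unique.append(s)
    return unique

# Example: an improver that specializes in schedules with many late orders
# (here a solution is just a count of late orders, purely for illustration).
population = [0, 3, 3, 7, 12, 12]
picked = select_for_improver(population, can_improve=lambda s: s > 5)
population = delete_duplicates(population)     # -> [0, 3, 7, 12]
print(picked, population)
```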

10 A CASE STUDY

This section describes an A-Team, developed at IBM, for scheduling the production and distribution of paper products. Paper production is a complex task involving multiple objectives: maximize profit, maximize customer satisfaction, and maximize production efficiency. These objectives are often in conflict and their relative importance varies with the state of the production environment and with market conditions. Process interactions further increase the difficulty of the problem [9]. For instance, a scheduling improvement in one stage of the production process may negatively impact downstream processes. In paper production and distribution, the key decisions are:

(a) Order allocation: allocating orders to paper machines across multiple paper mills in different geographical locations
(b) Run formation and sequencing: forming and sequencing batches of similar types of paper on each paper machine
(c) Trimming: cutting large reels of paper produced by paper machines into smaller rolls of customer-specified widths and diameters
(d) Load planning: loading paper rolls onto vehicles for shipment

The traditional approach to paper mill scheduling is to schedule each stage in the process independently. Typically, paper manufacturers allocate orders to paper machines and sequence them manually. Then they use one software package for trim scheduling and another package for outbound logistics scheduling, and so on. Each of these packages focuses on a single process-step and attempts to create an optimized schedule based on local objectives. Since there is no interaction between applications, the complete schedule obtained by combining the sub-schedules is usually of very low quality. For example, a trim schedule that minimizes trim-loss may cause vehicles to be loaded inefficiently, unacceptably increasing shipping costs. This piecemeal approach presents schedulers1 with a single take-it-or-leave-it choice and does not illustrate the tradeoffs between competing objectives that are needed to make well informed decisions.

1 Schedulers are the people who perform the task of scheduling in an organization. The software in our model assists these schedulers in performing their tasks.

Realizing the shortcomings of the existing approaches, we at IBM Research have built a new scheduling system that considers all stages of paper production and distribution simultaneously, and generates multiple enterprise-wide schedules. These schedules are created by algorithms that take into account the interactions between the process stages and focus on enterprise-wide objectives.

The algorithms that we have developed use approaches such as linear programming, integer programming with and without randomized rounding, network flow and various heuristic methods. We combine these multiple problem-solving approaches in an A-Team to achieve iterative improvements in the solutions.

For scheduling as well as other manufacturing applications, we have found four categories of attributes to be important: Timeliness, Product-Quality, Profitability, and Disruptions. Not coincidentally, these categories reflect the concerns of the people affected by the schedules: customer service representatives, quality engineers, accountants and manufacturing supervisors. As the iterations by the software agents proceed, the schedules with the best tradeoffs among the categories are displayed to the scheduler. By examining these schedules, the human scheduler gains an understanding of the tradeoffs. She can select schedules, drastically modify them, and return them to the populations being worked on by software agents. Thereby, she can dramatically alter the course of the software agents' calculations. She can also negotiate compromises with other interested parties. When she sees a solution she feels is acceptable, she terminates the calculations.

We cannot overemphasize the importance of intimately involving human schedulers in the calculations. The asynchronous mode of work and a number of filters make such involvement possible and practical. Specifically, asynchronous work allows fast software agents to work in parallel with much slower humans. The filters allow only the solutions with the best tradeoffs to be viewed by humans, thereby keeping the humans from being overloaded with information. The filter most often used allows only non-dominated solutions to pass through it.

10.1 An Application

This section describes the construction of an A-Team for solving an instance of a paper-manufacturing problem. This problem consists of more than 5000 orders (which constitute about 8 weeks of production for 15 product types) to be scheduled on 9 machines that are located in 4 different mills.2 Once orders are allocated to machines and grouped according to their grades, these groupings (known as runs) have to be trimmed to fit the roll size requirements of the orders. The efficiency of trimming is dependent on the order composition in the groups. However, orders may not be grouped solely to increase trim efficiency, since such groupings may incur unacceptable delays of orders with tight deadlines. A partial set of evaluation metrics used in our implementation, and some of their values, are presented in columns 2–10 of Table 19.1. These evaluation metrics are customer dependent and are configurable. We obtained these metrics during the initial design study and incorporated them into the system during the benchmarking process.

2 The data were provided by one of the largest paper manufacturers in the U.S.

The manufacturing process contains two distinct stages: (1) run formation and sequencing, and (2) trimming.3 These stages are coupled: the quality of trim optimization depends on how the runs are formed and sequenced. Therefore, it is important to consider the overall global optimization problem while generating schedules.

3 A third optimization problem, namely transportation scheduling or load planning, is eliminated from this analysis for simplicity. A more detailed description of the problem and our solution approach can be found in [16].


The A-Team we use is depicted in Figure 19.4. First, we create a run formation team with its own set of constructors, improvers and destroyers for creating runs and sequencing the orders within those runs. This team generates a set of non-dominated schedules4 that serve as starting points for trim optimization. While these solutions can be evaluated at a high level based on transportation cost, due dates, and order-machine restrictions, the goodness of these solutions cannot be determined until each run in each schedule is trimmed and the amount of waste is compared. However, trimming in itself is a multi-objective optimization. Therefore, we next construct a trim team for trim optimization with suitable constructors, improvers and destroyers. This trim team creates near-optimal trim solutions, given a single run and a sequence of orders within that run. This trim team can be invoked multiple times to trim each run in a given schedule. However, there are many such schedules, generated by the run formation team, that need to be explored for overall trim efficiency. Therefore, in order to explore the best possible solutions, we employ a third team, the global optimization team, that changes the run formation and sequencing in an effort to achieve better overall solutions (including better trim) as defined by the evaluation metrics. In essence, the run formation and sequencing team and the trim team are super-agents within the global optimization team.5 Trimming is computationally intensive. Therefore, it is important to be judicious in selecting the schedules for trimming. Below, we briefly describe the algorithms that we used in the run formation and sequencing team and the trim team. The global optimization team uses a combination of constructors, improvers and destroyers from the run formation team and the trim team.

4 A schedule at this point is a sequence of runs in which each run is a grouping of similar orders in a specific sequence.
5 The same approach can be extended to include additional down-stream processes, such as sheeting and transportation planning.

10.2 Run Formation and Initial Sequencing Stage

In our system, orders are allocated to machines based on considerations such as transportation cost, due dates, trim preferences and order-machine restrictions. The constructors, improvers and destroyers used in the run formation team are:

Constructors: Many approaches, such as network flow, dispatch algorithms, linear programming models, and greedy approaches, can be used to create an initial population of solutions for order sequencing in an A-Team. In solving NP-hard problems such as these, the A-Team framework encourages the use of multiple approaches for generating initial solutions in the population. Multiple approaches could potentially cover more of the search space than one approach. In our implementation, we use dispatch algorithms for order allocation. The idea is to select one order at a time from the sorted list of remaining orders (several sorting heuristics could be used) and schedule it as the next order on a given machine; a simple sketch of this idea is given after this list. These methods create partial solutions that have order allocation information, and sometimes an initial sequencing of the orders (and hence a sequence of runs) on each machine.

Improvers: Improvement algorithms take an existing schedule and try to improve it in several different dimensions. For example, an improver may move orders between runs to reduce tardiness and improve trim efficiency, may merge runs in order to decrease the number of small runs, or may resequence runs by moving subsets of orders in a run to improve the solution of a downstream problem (e.g.,

moving a set of orders to a different run to improve trim efficiency), etc. Since improvers have inherent knowledge about what aspects of a solution they intend to improve, they are programmed to pick those solutions that exhibit weakness in those specific aspects. For example, an improver that intends to improve the tardiness of a solution would pick solutions that have many late orders.

Destroyers: In our A-Team we used a simple "delete duplicates" destruction approach. More intelligent destruction agents could be created as well.

Further details on the algorithms can be found in [8].
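A minimal sketch of such a dispatch-rule constructor follows. It is an illustration added here; the order and machine representations, the earliest-due-date sorting rule, and the function names are assumptions, not details of the paper-mill system.

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    id: str
    grade: str        # paper grade
    due: int          # due date (day number)
    tons: float

@dataclass
class Machine:
    id: str
    grades: set                                     # grades this machine can make
    schedule: list = field(default_factory=list)
    load: float = 0.0

def dispatch_construct(orders, machines):
    """Build one candidate allocation: take orders one at a time from a
    sorted list (earliest due date first, one of several possible rules)
    and append each to the least-loaded machine that can make its grade."""
    for order in sorted(orders, key=lambda o: o.due):
        candidates = [m for m in machines if order.grade in m.grades]
        machine = min(candidates, key=lambda m: m.load)   # a simple greedy choice
        machine.schedule.append(order)
        machine.load += order.tons
    return machines

orders = [Order("o1", "A", due=3, tons=20), Order("o2", "B", due=1, tons=15),
          Order("o3", "A", due=2, tons=30)]
machines = [Machine("m1", {"A"}), Machine("m2", {"A", "B"})]
for m in dispatch_construct(orders, machines):
    print(m.id, [o.id for o in m.schedule])
```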

10.3 Trim Stage

A paper machine produces large reels of paper. The process of cutting the reels into rolls of paper (based on customer specifications) is called trimming. The main objective in trimming is to minimize the trim loss (the unused portion of the reel that cannot be used to fill an order) while considering other manufacturing objectives such as on-time delivery and customer satisfaction. This, again, is a multi-objective optimization problem. The constructors, improvers and destroyers used in the trim team are:

Constructors: Trimming paper rolls can be cast as a one-dimensional cutting stock problem (CSP). This problem was studied by Gilmore and Gomory in 1961 in their seminal work [11]. Past work in this area [10,12,13] indicates that linear and integer-programming models work fairly well for generating initial trim patterns. Therefore, we use linear programming and integer-programming approaches to generate initial trim solutions. However, these solutions can be improved further by using iterative heuristic techniques.

Improvers: Trim efficiency can be improved by modifying the sequence of orders within a run or by exchanging orders with other runs in the schedule. This can be done either randomly or based on heuristics that have specific information about the required widths that could improve trim. For example, if a paper mill knows that there is a constant demand for certain standard widths such as 25" and 30", it may not mind making rolls of that size and stocking them, if doing so improves the trim efficiency, even if there are no immediate orders for those rolls (these are sometimes called "help rolls"). Trim-efficiency improvement heuristics can embody these types of domain details to improve overall trim efficiency. In deciding which solutions to improve, we use simple selection mechanisms such as random selection with a bias. For example, improvers that specialize in improving trim efficiency are programmed to randomly pick those solutions from the populations that do not have perfect trim.

Destroyers: In our trim team we used a simple "delete duplicates" destruction approach. More intelligent destruction agents could be created as well.

More details on the algorithms can be found in [14].
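For readers unfamiliar with the cutting stock problem, the toy heuristic below shows what a trim pattern is and how trim loss is measured. It is an illustration added here and is far simpler than the linear- and integer-programming constructors the system actually uses; the first-fit rule and the example widths are assumptions.

```python
def first_fit_trim(roll_widths, reel_width):
    """Pack ordered roll widths into reels of a fixed width (first-fit,
    widest rolls first). Returns the cutting patterns and the total trim loss."""
    patterns = []                                   # each pattern: roll widths cut from one reel
    for width in sorted(roll_widths, reverse=True):
        for pattern in patterns:
            if sum(pattern) + width <= reel_width:  # the roll still fits on this reel
                pattern.append(width)
                break
        else:
            patterns.append([width])                # start a new reel
    trim_loss = sum(reel_width - sum(p) for p in patterns)
    return patterns, trim_loss

# Example: customer rolls (in inches) cut from 200-inch reels.
patterns, loss = first_fit_trim([75, 60, 60, 50, 40, 30, 25], reel_width=200)
print(patterns)   # [[75, 60, 60], [50, 40, 30, 25]]
print(loss)       # total unused width across the reels
```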


The global A-Team in this application consisted of 15 constructors and 5 improvers. Each agent (constructor or improver) is an embodiment of the algorithms described above (run with various parameter settings). For our sample problem, an A-Team invocation on a single-processor IBM RS/6000 Model 591 (256 MB of memory) generated approximately 100 solutions; of these solutions, 10 were non-dominated. Table 19.1 shows the evaluations for these 10 solutions, illustrating the tradeoffs among the objectives. For example, solutions 3 and 4 have the same transportation cost, which suggests that they have almost the same allocation of orders to mills. However, they differ significantly in their tardiness and trim efficiencies. Solution 3 sacrifices order tardiness for better trim while solution 4 sacrifices trim for better on-time order delivery.

Comparison of solution 10 with the schedule generated by our customer by their traditional methods showed that our system could provide significant reductions in costs (6% savings in transportation costs and improvements in customer satisfaction through reduced tardiness were reported). In this company, as in many other paper companies, an experienced team of schedulers worked on generating the traditional schedules. They allocated and sequenced orders manually or by using a stand-alone scheduling program, and used another computer program for trimming. For fine-tuning the schedules, they used numerous heuristics they had developed over the years.

10.4 Business Impact

Our paper mill scheduling system has been fielded at several paper mills in the United States and is being used in their day-to-day operations. The system significantly improves the scheduling and decision-making process for our customers, giving them substantial monetary returns. Improvements come both from the higher quality of the solutions that our system generates and from positive changes in the business processes that our approach to decision support fosters.

In terms of solution quality, one of our customers, Madison Paper Industries, reports a reduction in trim loss of 6 tons per day and a 10% reduction in freight costs [17]. Each of these savings amounts to millions of dollars per year. Adam Stearns of Madison Paper Industries, our pilot customer, reports: "We would use our old trim package, throw orders into it and let it trim them the best it could. Then we would let the IBM module take the same orders and manipulate them to come up with a trim. We saw that the IBM package was consistently saving over two inches [of trim loss]—just an incredible amount." [17, page 74] "Testing shows that we are getting about 10% savings annually on distribution costs from the load planning piece alone, which amounts to millions of dollars... We expected the system's GUI [graphical user interface] to make load planning easier, but we didn't expect to gain these efficiencies." [17]

By 1999, 31 mills were either using or were in the process of implementing the IBM mill scheduling system. The users are primarily roll manufacturers in the corrugated and publishing paper markets.

In summary, we have developed a system for enterprise-wide scheduling of paper manufacturing and distribution using the iterative improvement framework facilitated by Asynchronous Teams. The A-Team architecture facilitated cooperation between the various computer algorithms and the human scheduler, resulting in better solutions than any one of the implemented algorithms could have achieved by working alone.

REFERENCES

[1] P.E. Gill and W. Murray (eds.) (1974) Numerical Methods for Constrained Optimization. Academic Press.


[2] S.S. Pyo (1985) Asynchronous Procedures for Distributed Processing. Ph.D. dissertation, CMU.
[3] P.S. deSouza (1993) Asynchronous Organizations for Multi-Algorithm Problems. Ph.D. dissertation, CMU.
[4] S. Sachdev (1998) An exploration of A-teams. Ph.D. dissertation, CMU.
[5] A. Smith (1776) The Wealth of Nations.
[6] G. Debreu (1959) The Theory of Value. Wiley, New York.
[7] S.N. Talukdar (1999) Collaboration rules for autonomous software agents. The International Journal of Decision Support Systems, 24, 269–278.
[8] R. Akkiraju, P. Keskinocak, S. Murthy and F. Wu (1998) An agent-based approach for multi-machine scheduling. Proceedings of the Tenth Annual Conference on Innovative Applications of Artificial Intelligence, Menlo Park, CA.
[9] C. Biermann (1993) Essentials of Pulping and Papermaking. Academic Press, San Diego.
[10] H. Dyckhoff (1990) A typology of cutting and packing problems. European Journal of Operational Research, 44, 145–159.
[11] P.C. Gilmore and R.E. Gomory (1961) A linear programming approach to the cutting stock problem. Operations Research, 9, 849–859.
[12] R.W. Haessler (1980) A note on the computational modifications to the Gilmore-Gomory cutting stock algorithm. Operations Research, 28, 1001–1005.
[13] R.W. Haessler and P.E. Sweeney (1991) Cutting stock problems and solution procedures. European Journal of Operational Research, 54, 141–150.
[14] P. Keskinocak, F. Wu, R. Goodwin, S. Murthy, R. Akkiraju, S. Kumaran and A. Derebail (2002) Scheduling solutions for the paper industry. Operations Research, 50(2), 249–259.
[15] S. Murthy (1992) Synergy in cooperating agents: designing manipulators from task specifications. Ph.D. thesis, Carnegie Mellon University, Pittsburgh, Pennsylvania.
[16] S. Murthy, R. Akkiraju, R. Goodwin, P. Keskinocak, J. Rachlin, F. Wu, S. Kumaran, J. Yeh, R. Fuhrer, A. Agarwal, M. Sturzenbecker, R. Jayaraman and R. Daigle (1999) Cooperative multi-objective decision-support for the paper industry. Interfaces, 29, 5–30.
[17] M. Shaw (1998) Madison streamlines business processes with integrated information system. Pulp & Paper, 72(5), 73–81.
[18] S. Talukdar, L. Baerentzen, A. Gove and P. deSouza (1998) Asynchronous teams: cooperation schemes for autonomous agents. Journal of Heuristics, 4, 295–321.
[19] P.P. Grassé (1967) Nouvelle expériences sur le termite de Muller (macrotermes mülleri) et considérations sur la théorie de la stigmergie. Insectes Sociaux, 14(1), 73–102.