Computers in Industry 58 (2007) 644–655 www.elsevier.com/locate/compind

Simulation-based assessment of machine criticality measures for a shifting bottleneck scheduling approach in complex manufacturing systems

Lars Mönch *, Jens Zimmermann

Chair of Enterprise-wide Software Systems, Department of Mathematics and Computer Science, University of Hagen, 58097 Hagen, Germany

Available online 18 June 2007

Abstract

In this paper, we describe adaptation techniques for a hierarchically organized multi-agent-system (MAS) applied to production control of complex job shops. The system architecture of the production control system is based on three different control layers. The mid layer implements a distributed shifting bottleneck type solution procedure. The shifting bottleneck heuristic decomposes the overall scheduling problem into scheduling problems for parallel machines. The sequence in which the resulting parallel machine scheduling problems are solved is determined by machine criticality measures. We can adapt this solution scheme in a situation dependent manner by choosing appropriate machine criticality measures. Furthermore, the performance of the shifting bottleneck scheme is also influenced by the selection of a proper subproblem solution procedure for each of the parallel machine scheduling problems. The subproblem solution procedures are typically given by heuristics, and a situation dependent parameterization of these heuristics is highly desirable. In this paper, we sketch an overall concept for the adaptation of our hierarchically organized multi-agent-system. We present results of computational experiments, based on the simulation of a dynamic environment, on the appropriate selection of machine criticality measures.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Scheduling; Multi-agent-systems; Simulation-based benchmarking; Shifting bottleneck heuristic; Adaptation

1. Introduction

Complex job shops are characterized by parallel machines, sequence dependent set-up times, a mix of different process types (for example, batch processes), prescribed due dates, and reentrant flows (cf. Ovacik and Uzsoy [1] and Mason et al. [2] for the notion of complex job shops). Semiconductor wafer fabrication facilities (wafer fabs) are examples of complex job shops. As indicated by Schömig and Fowler [3], today better operational strategies seem to be the main key to reducing costs and improving overall efficiency. New planning, scheduling, and dispatching methods are required in order to reach the goal of better operational performance. Improved software and hardware capabilities have to be taken into account during the development of more sophisticated algorithms.

* Corresponding author. E-mail address: [email protected] (L. Mönch).
0166-3615/$ – see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.compind.2007.05.010

Manufacturing systems change over time because of global competition and unforeseeable markets. Today's manufacturing systems are customer demand driven and, in contrast to the mass production type manufacturing systems of former decades, face a large number of different products and a product mix that changes over time. Therefore, meeting customer demands, and especially due dates, is extremely important. The changing customer demands result in a changing base system and base process. Here, we define the base system as the set of machines (tools) that form the manufacturing system. The base process is given by the routes of the jobs and by a work in process (WIP) distribution as an initial condition. A base system and base process that change over time lead to a changing production control system and also to a modified control process. The production control system is given by the control algorithms and the software and hardware to run these algorithms. Therefore, production control algorithms that are able to adapt to different situations are highly desirable. In this paper, we describe some steps towards reaching this goal by considering a hierarchically organized multi-agent-system (MAS) for production control of
complex job shops. We show how, in principle, an adaptive behaviour of the production control system can be obtained. Furthermore, we describe how we can assess the performance of the production control system by emulating the manufacturing system via discrete-event simulation.

The paper is organized as follows. In the next section, we describe the considered problem. Then, we discuss related literature. In Section 4, we present a concept for the adaptation of the MAS used for production control of complex manufacturing systems. Section 5 describes our benchmarking architecture and explains the experimental setting. Finally, in Section 6, we present the results of the simulation-based benchmarking efforts.

2. Multi-agent-system architecture and solution approach for production control

In this section, we briefly discuss a hierarchically organized MAS used for production control of complex manufacturing systems. Then we give a short outline of the shifting bottleneck heuristic and discuss its potential for adaptation.

2.1. Production control via a hierarchically organized multi-agent-system

We suggest a hierarchical multi-layer approach to solve the production control problem in complex job shops. We assume that the job shop is decomposable, i.e., the job shop is formed by a set of work areas. Each work area consists of a set of groups of parallel machines (also called tool groups or work centres). The tool groups of a work area are located in the same
region of the shop floor. The decomposition of the machinery into work areas can be used to decompose the routes of the products into a set of macro operations. Each macro operation is formed by the process steps that have to be performed on the machines of a single work area.

The suggested hierarchical control approach is presented in [4] and [5]. It separates the control functionality into three different layers. The layers correspond to the physical decomposition of the shop floor into work areas and tool groups. We consider the entire manufacturing system as the top layer of the hierarchy. The decision task of the top layer consists in determining start dates and planned completion dates for each macro operation of a job. The start and completion dates are sent to the middle layer and used as instructions.

The middle layer is formed by the different work areas. We consider a separate decision-making unit for each work area. A work area decision-making unit is supported by a scheduling entity that basically encapsulates a shifting bottleneck type solution scheme [6]. We use an exchange mechanism for start and completion dates across the work areas in order to reduce the total weighted tardiness of the jobs. Total weighted tardiness (TWT) is the sum of the weighted tardiness w_j T_j over all jobs j = 1, ..., n, where w_j is the weight (priority) of job j. C_j denotes the completion time and d_j the due date of job j. The notation T_j = max(0, C_j − d_j) is used for abbreviation. We call this approach the distributed shifting bottleneck heuristic (DSBH). This distributed scheduling approach is presented in [7].

The base layer is basically formed by decision-making units that are related to single work centres. The work centre decision-making units try to implement the schedules from the

Fig. 1. Distributed hierarchical approach for production control.
middle layer as closely as possible. In the case of disturbances, a contract net type resource allocation scheme is used to make processing decisions for each single job. We use Fig. 1 to outline the main idea of the distributed hierarchical approach.

The solution of the decision problems on the three layers requires intra- and inter-layer communication. Agents are an appropriate way to implement the production control approach because they have strong communication capabilities and allow for the implementation of coordination schemes [8]. We distinguish between decision-making agents and staff agents according to the PROSA reference architecture [9]. Decision-making agents solve decision problems while the staff agents support them. The staff agents are divided into monitoring and scheduling agents. We consider a production system agent, which is supported by a production scheduling agent, to determine start and end dates of the jobs with respect to each single work area. We assign a work area agent as a decision-making agent to each single work area. Each work area agent is supported by a work area scheduling agent and a work area monitoring agent. Finally, we consider work centre agents in order to represent work centres. Job agents and agents for preventive maintenance issues are derived from the PROSA order agent type (see [5] for more details on the resulting MAS).

2.2. Adaptation potential of the shifting bottleneck heuristic

A modified shifting bottleneck heuristic is described in [2]. The main ingredient of the shifting bottleneck heuristic is a disjunctive graph that is used to capture the relations and dependencies between jobs on different tool groups. The nodes of the disjunctive graph represent the processing of operations of the jobs on certain tools or tool groups. Conjunctive arcs between the nodes are used to model the routes of a job. Disjunctive arcs are required in order to represent scheduling decisions among the jobs on a given tool group.
The weight of an arc is given by the processing time of the node from which the arc emanates. When batching tools occur, the graph has to be modified by including artificial batch nodes [2]. We use Fig. 2 to show a disjunctive graph for the jobs 1, 2, 3 and the machines 1, 2, 3, 4. The notation [i, j] is used for a node that is associated with the processing of job j on machine i. The processing time of job j on machine i is denoted by p_ij. The ready time of job j is denoted by r_j. An artificial start node s and an ending node e are introduced. Furthermore, we use the node v_j to represent the due date of job j. We refer to [10] for a more detailed description of disjunctive graphs.

The main steps of the shifting bottleneck heuristic can be described as follows [6,10]:

1. Denote the set of all tool groups by M. We use the notation M0 for the set of tool groups that have already been sequenced or scheduled. Set M0 = ∅ initially.
2. Identify and solve the subproblems for each tool group i ∈ M − M0.
3. Identify a critical tool group k ∈ M − M0.
4. Sequence the critical tool group using the subproblem solution obtained in Step 2. Set M0 = M0 ∪ {k} for update purposes.
5. (Optionally) re-optimize the schedule for each tool group m ∈ M0 − {k} by exploiting the information provided by the newly added disjunctive arcs for tool group k.
6. If M = M0, terminate the heuristic. Otherwise, go to Step 2.

Scheduling methods for the resulting subproblems (called subproblem solution procedures) have to be developed in Step 2. In the case of complex job shops, various process restrictions like batching tools, parallel machines, machine dedications, and sequence-dependent setup times have to be taken into account during the development of subproblem solution procedures. Subproblem solution procedures are usually given by specific dispatching rules [1,2] or by local search techniques [1]. Specific criticality measures have to be chosen in Step 3 of the shifting bottleneck heuristic. Note that the application of a machine criticality measure leads to a sequence of subproblems. We face the following three problems:

Fig. 2. Disjunctive graph for three jobs and four machines.


1. Determination of the most critical or bottleneck tool group in each iteration of the shifting bottleneck heuristic at a given point of time (problem P1),
2. Selection of appropriate subproblem solution procedures to solve the resulting tool group scheduling problems at a given point of time (problem P2),
3. Once a specific subproblem solution procedure has been selected, a parameterization of the subproblem solution procedure is necessary (problem P3).

In this paper, we mainly address problem P1 because it is the base for problems P2 and P3.

3. Related literature

The shifting bottleneck heuristic for job shops was originally described for the makespan performance measure in [6]. Extensions to other performance measures and more realistic process conditions are presented, for example, by Ovacik and Uzsoy [1] and by Ivens and Lambrecht [11]. For this paper, the modifications carried out by Mason et al. [2] are the most important. Basically, these modifications allow for the usage of shifting bottleneck type algorithms to schedule wafer fabs. However, only fixed machine criticality measures and a fixed subproblem solution scheme are assumed in these papers.

The choice of appropriate machine criticality measures is investigated in [12–16]. Holtsclaw and Uzsoy investigated the effect of different subproblem solution procedures and different machine criticality measures in [15]. A genetic algorithm is used by Dorndorf and Pesch [16] to find the best sequence of subproblems. Aytug et al. suggest various machine criticality measures in [12]. However, it is hard to find a subproblem sequence that consistently outperforms the remaining sequences. Therefore, machine learning techniques, especially inductive decision trees, are suggested in [13] and [14] to solve the problem of finding a best sequence for solving the subproblems in the shifting bottleneck heuristic.
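To make the object of these studies concrete, the shifting bottleneck loop of Section 2.2 (Steps 1–6) can be sketched as follows. This is a minimal illustration; the subproblem solver and the criticality measure are pluggable stand-ins, and the tool group names are invented, not taken from the cited papers.

```python
def shifting_bottleneck(tool_groups, solve_subproblem, criticality):
    """Return the order in which tool groups are sequenced.

    solve_subproblem(k) -> schedule for tool group k (Step 2)
    criticality(k, schedule) -> criticality value (higher = more critical, Step 3)
    """
    scheduled, order = set(), []
    while scheduled != set(tool_groups):                              # Step 6
        candidates = set(tool_groups) - scheduled
        schedules = {k: solve_subproblem(k) for k in candidates}      # Step 2
        k = max(candidates, key=lambda g: criticality(g, schedules[g]))  # Step 3
        order.append(k)                                               # Step 4: fix k
        scheduled.add(k)
        # Step 5 (optional re-optimization of already sequenced groups) omitted.
    return order

# Toy instance: criticality taken as the queue length of each tool group.
queues = {"litho": 7, "etch": 3, "implant": 5}
order = shifting_bottleneck(list(queues), lambda k: None,
                            lambda k, s: queues[k])
```

Here the longest queue is sequenced first; any of the criticality measures discussed above can be substituted without changing the loop.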
The machine learning approach creates a large computational burden because many test instances have to be considered and enumeration schemes have to be used to consider all possible subproblem sequences. Test problems with 5 and 10 machines and a small number of jobs are investigated in these papers. Hence, the suggested method is not extendable to the job shops in our case. Furthermore, only static environments are considered in all papers that deal with machine criticality issues. In the spirit of static scheduling problems (cf., for example, [10]), we call an environment static if the ready times of all jobs are the same and machine breakdowns are not modelled. Also influenced by the notion of dynamic scheduling problems [10], the term dynamic environment is used for the opposite situation. We use the shifting bottleneck heuristic in a rolling horizon manner. We are able to consider unequal ready times and machine breakdowns by emulating the production process.

Some attempts have been made to apply machine learning techniques to (adaptive) scheduling problems and especially to MAS. Piramuthu et al. [17] consider inductive decision trees in order to select appropriate dispatching rules from a set of given rules. Some attempts to parameterize the Apparent Tardiness Cost
dispatching rule are described, using neural networks and case-based reasoning techniques, in [10]. Some literature regarding learning in MAS is summarized in [18]. Some more recent applications considering reinforcement learning are discussed in [19]. The selection of appropriate dispatching rules via a genetic algorithm and simulation is discussed in [20]. The main drawback of tools like neural networks, decision trees, or genetic algorithms stems from the huge computational burden caused by simulation-based test instance evaluation or multiple replications. The usage of machine learning techniques applied to holonic systems and MAS in manufacturing is discussed by Monostori [21].

The third group of related work is given by hierarchical production control schemes applied to complex job shops. We refer to the more recent papers [22] and [23]. However, the approaches in these papers do not contain adaptive features.

4. Adaptation concept for the hierarchically organized multi-agent-system

In this section, we first describe some general principles for adaptive production control. Then, we apply these principles to our hierarchically organized MAS. We present details for the solution of problem P1.

4.1. Principles of adaptation in production control systems

We define the machinery of the shop floor with its capabilities as the base system B. Furthermore, we define the routes and the workload (represented by a work in process distribution of the jobs) of a certain manufacturing system as the base process PB. The base process is characterized by a usage of the resources of the base system by activities. More formally, PB is described by the mapping

PB : XB × ZB → ZB × YB,   (1)

where we denote by: XB, input of the process PB; YB, output of the process PB; ZB, state set of the process PB.

Beside the base system and base process, we consider the control system C and the control process PC. The (production) control system consists of production control algorithms, the corresponding software, and the control computers. The control algorithms are model-based, i.e., they contain an internal model that is used to represent the base system and the current state of the base process. Furthermore, a parameter adjustment component is also part of the control system. The control system obtains a reference input. It is used to modify either the parameter adjustment component (goal modification) or the internal model (image modification). The control process uses the functionality of the control system in order to produce control instructions m that influence the base process. Therefore, the control system has to consider current data from the base process, also called feedback information. The control process itself is given by the mapping

PC : XC × ZC → ZC × YC.   (2)

Again, as in the case of the base process, we use the notation: XC, input of process PC; YC, output of process PC; ZC, state set of process PC.
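As an illustration of these mappings (a toy example of ours, not from the paper), both processes can be written as state-transition functions that map an (input, state) pair to a (next state, output) pair and are coupled in a feedback loop:

```python
# Toy sketch of Eq. (1) and Eq. (2): P_B and P_C as state-transition functions.
# All capacities, states, and signals are invented stand-ins.
from typing import Tuple

def base_process(jobs_released: int, wip: int) -> Tuple[int, int]:
    """Toy P_B: the WIP level is the state; completed jobs are the output Y_B."""
    completed = min(wip + jobs_released, 3)   # assumed capacity: 3 jobs per period
    return wip + jobs_released - completed, completed

def control_process(feedback: int, model_state: int) -> Tuple[int, int]:
    """Toy P_C: updates the internal model from feedback, emits instruction m."""
    model_state = feedback                    # image update from feedback data
    instruction = max(0, 3 - model_state)     # release jobs up to capacity
    return model_state, instruction

wip, model = 2, 0
for _ in range(3):                            # closed loop: P_C drives P_B
    model, m = control_process(wip, model)
    wip, done = base_process(m, wip)
```

The point of the sketch is only the signature structure of (1) and (2): the control process consumes feedback from the base process and produces the instructions m that become the base process input.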


The overall architecture is shown in Fig. 3. An adaptive system is defined as a feedback control system in which the control actions are generated automatically [24]. Flexibility in manufacturing systems is the ability to cope with changes [25]. Changes can stem from the base system B and the base process PB. Adaptive production control systems are a prerequisite for ensuring flexibility in manufacturing systems. However, the term flexibility is broader because it is a function of the physical attributes of the manufacturing system and also depends on the organization of the physical components and the used control mechanisms.

Fig. 3. Architecture of adaptive production control systems.

We differentiate between off-line and on-line systems. When an adaptive system operates in actual time, we call it an on-line system. In on-line systems, the modification of the parameter adjustment component and of the internal model takes place simultaneously with the generation of the control instructions. In contrast, off-line systems use a stepwise procedure to modify parameters or change the internal model.

The performance of certain production control schemes very often depends on the situation in the manufacturing system. Situation attributes can be derived in on-line or off-line settings and can be used to determine appropriate parameterization attributes. More formally, a parameterization can be described for both types of models as a mapping v:

v : D1 × ... × Dn → R1 × ... × Rk,  (a1, ..., an) ↦ (w1, ..., wk),   (3)

where we denote by: Di, domain of situation attribute i; ai, concrete realization of situation attribute i; Ri, range of parameterization attribute i; wi, concrete realization of parameterization attribute i. Hence, a typical parameterization task consists in determining a set of appropriate situation attributes, a set of parameterization attributes, and the concrete form of the mapping v. Note that we differentiate between explicit mappings, given, for example, by regression equations or other types of functions, and implicit mappings like neural networks and decision trees. Determining these mappings is usually a non-trivial task that requires a large amount of computational experiments.

4.2. Adaptation of production control schemes within the multi-agent-system

In order to adapt the production control strategies of the MAS to different situations, we mainly have to solve the problems P1, P2, and P3.

4.2.1. Solution approach for problem P1

We suggest the usage of a combined machine criticality measure for solving problem P1. The first measure takes the workload of a certain tool group into account. It sums the processing, load, and unload times of all jobs waiting in front of a certain tool group k, i.e., we determine the quantity

crit_k(TMGL) = (B~ / n) Σ_{j=1}^{n} (p_j + l_j + u_j),   (4)

where we denote by: l_j, load time of job j; p_j, processing time of job j; u_j, unload time of job j; B~, average size of a batch, i.e., the average number of jobs that form a batch; n, number of jobs queuing in front of tool group k. We choose the tool group with the highest workload for scheduling first. Workload oriented criticality measures are discussed in the literature, for example, in [15] and [12].

The second measure exploits the idea that bottleneck tool groups are more important than other ones. Hence, we consider the bottleneck tool groups as the most critical ones in our shifting bottleneck scheme. We denote this measure by crit_k(BN). This type of measure is discussed in [26].

The third measure is intended to quantify the "amount" of constraint violation caused by a certain subproblem. Because we are interested in minimizing total weighted tardiness, we consider a weighted slack-based criticality measure that seems to be new in the literature. We derive the quantity

crit_k(WSLACK) = (1/n) Σ_{j=1}^{n} w_j [max(1, (d_j − t − Σ_{s=l}^{n_j} (p_js + l_js + u_js)) / (n_j − l))]^{−1},   (5)

where we denote by: t, current time of decision-making; l, current process step of job j; n_j, number of process steps of job j; l_js, load time for process step s of job j; p_js, processing time for process step s of job j; u_js, unload time for process step s of job j. This measure takes the slack of the jobs into account and normalizes it by dividing it by the number of remaining process steps. A small slack should lead to a high priority of the tool group. Therefore, we consider its reciprocal value and multiply it with the weight of the job. We take the tool group with the highest weighted slack measure for scheduling first.

We consider a fourth criticality measure. We calculate a schedule for each single tool group. Then, based on these scheduling decisions, we determine the total weighted tardiness with respect to the due dates on the entire job shop (cf. [10] for more details). We denote this criticality measure for tool group
k by crit_k(TWT). This measure is also used for benchmarking purposes because it is usually used as a default measure in shifting bottleneck approaches to minimize total weighted tardiness [10].

We use the measures crit_k(TMGL), crit_k(BN), crit_k(WSLACK), and crit_k(TWT) in order to sequence the tool groups. Then, for a fixed tool group, we denote the position in the sequence with respect to crit_k(o) by rank(crit_k(o)). We calculate the combined index

rank(crit_k) = w1 rank(crit_k(TMGL)) + w2 rank(crit_k(WSLACK)) + w3 rank(crit_k(BN)) + w4 rank(crit_k(TWT))   (6)

to determine the criticality of a fixed tool group. We assume w1 + w2 + w3 + w4 = 1 and wi ≥ 0. Note that we are interested in finding a criticality measure that performs well in many situations. Therefore, a combination of several single measures that are known to perform well in many situations, together with an appropriate weighting of these measures, is highly desirable.

The parameterization task consists of two subtasks:

1. Find appropriate attributes that describe a certain situation in the manufacturing system.
2. Find a proper mapping that assigns to each situation description an appropriate set of parameterization attributes.

We use the attributes "due date characteristic" and "load of the manufacturing system" as attributes for situation description. In order to solve the two subtasks, we carry out a simulation study to find 4-tuples (w1, w2, w3, w4) that lead to small total weighted tardiness values. Note that the number of necessary simulation runs is much smaller than for learning by inductive decision trees or neural networks. We ensure an adaptive behaviour of the manufacturing system with respect to P1 by determining the due date characteristic and the load in a periodic manner and then choosing appropriate 4-tuples (w1, w2, w3, w4). The length of the period is given by the average raw processing time of the jobs. Other appropriate periods of time can be determined by simulation.
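For illustration, the combined index of Eq. (6) can be computed as follows. The tool group names and measure values are invented, and the convention that the highest measure value receives rank position 1 (so that the smallest combined index marks the most critical tool group) is our assumption, not stated in the text:

```python
def rank_positions(values):
    """Map tool group ids to 1-based rank positions; highest value gets rank 1."""
    order = sorted(values, key=values.get, reverse=True)
    return {k: pos for pos, k in enumerate(order, start=1)}

def combined_rank(measures, weights):
    """measures: name -> {tool_group: value}; weights: name -> w_i (sum to 1)."""
    ranks = {name: rank_positions(vals) for name, vals in measures.items()}
    groups = next(iter(measures.values())).keys()
    return {g: sum(weights[m] * ranks[m][g] for m in measures) for g in groups}

# Dummy measure values for three tool groups A, B, C (illustrative only).
measures = {
    "TMGL":   {"A": 40.0, "B": 25.0, "C": 10.0},
    "WSLACK": {"A": 0.8,  "B": 1.6,  "C": 0.4},
    "BN":     {"A": 0.9,  "B": 0.5,  "C": 0.2},
    "TWT":    {"A": 120.0, "B": 60.0, "C": 15.0},
}
weights = {"TMGL": 0.2, "WSLACK": 0.4, "BN": 0.2, "TWT": 0.2}

scores = combined_rank(measures, weights)
most_critical = min(scores, key=scores.get)  # smallest combined rank position
```

With these dummy values, tool group A leads three of the four single rankings, so it obtains the smallest combined index and is scheduled first.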
The internal model is updated in an event-driven manner. The internal model is basically given by the blackboard-type datalayer, described in more detail in Section 5.1, and the scheduling graph. The update of the internal model is done automatically. The current due date characteristic and the load can be determined appropriately. However, the construction of the mapping v between situations and parameter settings needs outer intervention, i.e., simulation experiments, and has to be done in a non-automated manner.

The determination of appropriate weights in the weighted sum of the criticality measures can be supported by the construction of an appropriate fuzzy rule base [28] that can be used to determine these weights. The input and output values of the fuzzy system are linguistic values, for example, low load, medium load, and high load of the manufacturing system. The affiliation of the values is arranged by membership functions.

Fig. 4. Fuzzy membership functions for the tightness of due dates.

The membership functions are different for each manufacturing system, and it is usually hard to generalize them. Fig. 4 shows example membership functions for the situation attribute "due date characteristic". In Fig. 4, we measure the tightness or wideness of the due dates by taking the ratio of the difference between the due dates and the current time and the remaining (raw) processing time of the jobs. The resulting quantity is used to characterize the due dates. We differentiate between very tight, tight, and wide due dates. Careful simulation experiments are necessary in order to determine the model specific membership functions. The determination of the membership functions has to be based on the simulation results of Section 6. There, we obtain appropriate 4-tuples for specific situations. By means of the fuzzy membership functions we can interpolate between these specific situations. However, carrying out the details is part of future research.

The shifting bottleneck type solution algorithms for each work area are encapsulated in work area scheduling agents. Each work area scheduling agent contains a scheduling graph of the shifting bottleneck heuristic. The graph basically serves as part of the internal model in the sense of Fig. 3. The graphs are updated in an on-line manner. We have to add a new type of staff agent, which we call an adjustment agent. Adjustment agents basically represent, and eventually learn, the mapping functions v between the situation and parameterization attributes. In our MAS, we have to add adjustment agents for each work area that encapsulate the fuzzy rule bases for each shifting bottleneck algorithm. An update of these adjustment agents is carried out in an off-line manner, i.e., simulation is used to find the appropriate membership functions.

4.2.2. Solution approach for problem P2

We suggest a mix of dispatching based subproblem solution procedures and more advanced subproblem solution procedures based on genetic algorithms [27] for the solution of problem P2. Here, we basically use the genetic algorithm based subproblem solution procedures for the bottleneck machines, whereas the simpler dispatching rule based subproblem solution procedures are applied to the non-bottleneck machines. Again, we use simulation in order to determine an appropriate subproblem solution procedure mix for a given due date setting type and a given load of the manufacturing system.
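A hedged sketch of this solver mix: the utilization threshold used to declare a tool group a (dynamic) bottleneck, and all names below, are our assumptions rather than values from the paper.

```python
# Assumed threshold for flagging a tool group as a dynamic bottleneck.
BOTTLENECK_UTILIZATION = 0.85

def pick_solver(tool_group, utilization):
    """Select a subproblem solution procedure per tool group."""
    if utilization[tool_group] >= BOTTLENECK_UTILIZATION:
        return "genetic_algorithm"   # placeholder for the GA-based procedure [27]
    return "dispatching_rule"        # placeholder for the cheaper dispatching rule

utilization = {"litho": 0.93, "etch": 0.70}
mix = {g: pick_solver(g, utilization) for g in utilization}
```

The expensive procedure is thus spent only where it matters, which keeps the overall running time of the shifting bottleneck iteration acceptable.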


Additionally, it is necessary to determine the (dynamic) bottleneck tool groups.

4.2.3. Solution approach for problem P3

Note that we use variants of the apparent tardiness cost (ATC) dispatching rule [10] as subproblem solution procedures. The ATC rule is a composite dispatching rule that takes into account the due dates, the weights, and the processing times of the jobs. The ATC index of job j is calculated as

I_j(t) = (w_j / p_j) exp(−max(d_j − p_j − t, 0) / (k p̄)),   (7)

where we denote by: k, scaling parameter for the slack of job j; p̄, average processing time of the remaining jobs. The performance of the ATC rule is strongly influenced by the choice of the parameter k. We suggest inductive decision trees to perform this task in [29]. The choice of k depends on the tightness and the range of the due dates and ready times. Therefore, the inductive decision tree is used to map these tightness and range values to a k value, i.e., the mapping v is given by the decision tree. A simulation system collects scheduling test cases during the simulation run, determines near-optimal k values by an iterative procedure, and finally uses these results to update the decision tree with new examples. The selection of the scaling parameter k may serve as an example for the solution of problem P3. Again, the inductive decision trees may be encapsulated in special adjustment agents that support the work area scheduling agents during decision-making.

5. Benchmarking issues

In this section, we first describe a simulation-based benchmarking approach applied to the solution of problem P1. Then, we describe the design of the simulation experiments.

5.1. Simulation-based benchmarking approach

We use the software architecture described by Mönch et al. [30] to carry out the experiments. A similar testbed-based approach is taken by Cavalieri et al. in [31]. The centre point of the architecture is a data layer that contains all the information needed to construct the scheduling graphs and make the scheduling decisions. The data layer sits between a simulation model that emulates the manufacturing process of interest and the scheduling application for the shifting bottleneck heuristic. The objects of the data layer are updated in an event-driven manner by appropriate simulation events. Calculated schedules are submitted to the simulation engine in order to use the information of the schedules in a dispatching-based manner. The architecture allows for rolling horizon type scheduling as well as for event-driven rescheduling activities. The benchmark architecture is shown in Fig. 5.

We use the simulation tool AutoSched AP 7.3 to carry out the experiments. AutoSched AP is a C++ framework that strongly supports modelling features in the semiconductor

Fig. 5. Simulation-based benchmark architecture.


Table 1
Factorial design for Model A and Model B

Factor                                      Level                                    Count (Model A)    Count (Model B)
Production control process (PC) dependent:
  Scheduling horizon                        Scheduling interval (top layer): 4 h     1                  1
                                            Scheduling interval (DSBH): 4 h          1                  1
Base process (PB) dependent:
  Due date setting                          Tight, wide                              1 (FF = 1.4),      1 (FF = 1.6)
                                                                                     1 (FF = 1.5)
  Load of the system                        High, very high                          2                  2

manufacturing domain. The blackboard-type data layer is implemented in the C++ programming language. A COM interface is used to ensure interoperability between the blackboard and the MAS, which is implemented in the C# programming language using the .NET middleware for communication purposes (see [5] for more details). Parts of the internal models of the process, i.e., the shifting bottleneck graphs, are updated in an on-line manner based on the updated objects of the data layer. Note that our approach has some advantages for the purpose of integrating the MAS into a real environment. The simulation that emulates the production process can, in principle, be replaced by a manufacturing execution system (MES) (cf. [32], where the integration of a scheduling application into the system landscape of a wafer fab by means of the data layer is described).
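The coupling just described, with simulation events updating a blackboard, schedules written back, and periodic rescheduling over a rolling horizon, can be sketched as follows. All class, method, and field names are illustrative assumptions, not the actual FABMAS or AutoSched AP interfaces.

```python
class DataLayer:
    """Blackboard-style data layer between the simulation model and the
    shifting bottleneck scheduler (illustrative sketch only)."""

    def __init__(self):
        self.jobs = {}      # job id -> current job state from the simulation
        self.schedule = {}  # machine/tool group -> sequence of job ids

    def on_simulation_event(self, event):
        # The simulation pushes events (job moves, breakdowns, ...) so the
        # scheduler always sees an up-to-date picture of the shop floor.
        self.jobs[event["job"]] = event["state"]

    def publish_schedule(self, schedule):
        # Calculated schedules are written back; the simulation uses them
        # in a dispatching-based manner.
        self.schedule = schedule

def rolling_horizon(data_layer, scheduler, events, interval=4):
    """Recompute the schedule every `interval` hours (rolling horizon)."""
    next_run = 0
    for event in events:  # events are assumed sorted by time
        data_layer.on_simulation_event(event)
        if event["time"] >= next_run:
            data_layer.publish_schedule(scheduler(data_layer.jobs))
            next_run = event["time"] + interval
```

The 4 h interval mirrors the scheduling intervals of Table 1; an event-driven variant would additionally trigger `publish_schedule` on disruptive events such as bottleneck breakdowns.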

Furthermore, we choose scheduling intervals for the top and the medium layer for which it is known that they lead to a small total weighted tardiness [5]. We use a fixed weighting scheme for the jobs: on average, 50% of the jobs have weight 1, 35% have weight 5, and the remaining 15% have weight 10. We summarize the experimental design in Table 1. We simulate 50 days. No independent replications of the simulation runs are required because we do not include any stochastic behaviour of the manufacturing system in our experiments.
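As a reminder, the performance measure used throughout is total weighted tardiness (TWT), computed from the job weights, completion times, and due dates. A minimal sketch, with the job representation as an assumption:

```python
def total_weighted_tardiness(jobs):
    """TWT = sum over jobs j of w_j * max(C_j - d_j, 0).

    jobs: iterable of (weight w_j, completion time C_j, due date d_j).
    """
    return sum(w * max(c - d, 0.0) for w, c, d in jobs)
```

With the weighting scheme above, a late weight-10 job contributes ten times the tardiness penalty of an equally late weight-1 job, which is what makes the TWT-based criticality measures sensitive to the job mix.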

5.2. Design of experiments for a situation dependent choice of the machine criticality measures

We use two different simulation models. The first model is a reduced variant of the MIMAC Testbed Data Set 1 (Fowler et al. [33,34]). It contains two routes with 100 and 103 steps, respectively. The process flow is highly reentrant. The jobs are processed on 146 machines that are organized into 37 tool groups. Among the tools are batching tools. The model contains four work areas. We denote this model by Model A. The second model is the full MIMAC Testbed Data Set 1. It contains over 200 machines that are organized into over 80 tool groups. The tool groups form five work areas. The model contains two routes with 210 and 245 steps, respectively. The second model is called Model B.

We expect a different behaviour of the machine criticality schemes for complex job shops under high and very high load. Furthermore, the due date setting is also important. The (external or customer) due dates of the jobs are calculated by using the flow factor concept. The flow factor FF is defined as the ratio of the cycle time to the raw process time [35]. We set

d_j = FF * sum_{k=1}^{u_j} p_{jk} + r_j,   (8)

where p_{jk} denotes the processing time of processing step k that is required to produce job j, u_j the number of processing steps of job j, and r_j its ready time.

Table 2
TWT values for different machine criticality schemes (FF = 1.4, Model A)

Scenario  w1    w2    w3    w4    High load   Very high load
1         0     1     0     0     1.04995     1.0045
2         0.2   0.8   0     0     0.99863     1.0620
3         0.4   0.6   0     0     0.97214     0.9839
4         0.6   0.4   0     0     1.00605     0.9978
5         0.8   0.2   0     0     0.96380     0.9804
6         1     0     0     0     1.02542     0.9566
7         0     0.8   0.2   0     1.01637     0.9950
8         0.2   0.6   0.2   0     0.98864     1.0398
9         0.4   0.4   0.2   0     1.07377     1.0009
10        0.6   0.2   0.2   0     1.03128     1.0080
11        0.8   0     0.2   0     1.02962     0.9874
12        0     0.6   0.4   0     0.98786     0.9468
13        0.2   0.4   0.4   0     1.03769     1.0242
14        0.4   0.2   0.4   0     0.97302     0.9783
15        0.6   0     0.4   0     1.07168     0.9937
16        0     0.4   0.6   0     0.98380     0.9331
17        0.2   0.2   0.6   0     1.01457     0.9664
18        0.4   0     0.6   0     1.08405     1.0450
19        0     0.2   0.8   0     1.07690     0.9870
20        0.2   0     0.8   0     1.03070     0.9780
21        0     0     1     0     1.01905     1.0017
22        0.2   0.2   0.4   0.2   1.01343     1.0062
23        0.2   0.4   0.2   0.2   1.03373     1.0301
24        0.4   0.2   0.2   0.2   1.10775     0.9391
25        0.2   0.2   0.2   0.4   1.01598     0.9940
26        0.2   0.2   0     0.6   1.07001     1.0343
27        0.4   0     0     0.6   1.09357     0.9787
28        0     0.2   0     0.8   1.06021     0.9953
29        0.2   0     0     0.8   1.03473     1.0320
30        0.4   0.4   0     0.2   0.99924     1.0541
31        0.2   0.4   0     0.4   1.00058     0.9964
32        0.4   0.2   0     0.4   1.07166     0.9950
33        0     0.4   0     0.6   1.05478     0.9883
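The due date setting of Eq. (8) amounts to scaling a job's raw process time by the flow factor and offsetting by its ready time. A minimal sketch, with the job representation as an assumption:

```python
def flow_factor_due_date(flow_factor, processing_times, ready_time):
    """Due date d_j = FF * sum_k p_jk + r_j, as in Eq. (8).

    flow_factor: FF, the ratio of cycle time to raw process time;
    processing_times: p_j1, ..., p_ju_j of the job's routing steps;
    ready_time: r_j.
    """
    raw_process_time = sum(processing_times)
    return flow_factor * raw_process_time + ready_time
```

A smaller FF (e.g. 1.4) yields tighter due dates than a larger one (e.g. 1.5), since less queueing time is budgeted on top of the raw process time.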



6. Computational experiments

In a first scenario, we consider Model A with high and with very high load and tight due dates, i.e., we use FF = 1.4. We show the corresponding computational results in Table 2. All values are ratios of the total weighted tardiness obtained by the shifting bottleneck approach with the criticality measure given by Eq. (6) to that obtained by a shifting bottleneck heuristic with the TWT-based criticality measure discussed in Section 4.2. It turns out that for both high and very high load of the system there are 4-tuples (w1, w2, w3, w4) that lead to small TWT values. In the case of a high load, the 4-tuple (w1, w2, w3, w4) = (0.8, 0.2, 0.0, 0.0) provides the smallest TWT value and improves the value obtained by using crit_k(TWT) by 4%. The chosen weights show that the workload of a tool group is the dominant criterion in the case of a highly loaded manufacturing system. The number of jobs queuing in front of the tool groups is low for a large number of tool groups; hence, bottleneck- or TWT-based measures are less important. The 4-tuple (w1, w2, w3, w4) = (0.0, 0.4, 0.6, 0.0) leads to results that are 7% better than the corresponding results with the TWT-based criticality measure in the case of a very high load of the manufacturing system. The bottleneck criticality measure and the slack-based criticality measure are dominant in this situation. A very high load of the manufacturing system causes a situation where the workload of many tool groups, and consequently also the TWT value, is high. Therefore, these two measures are not very appropriate for differentiating with respect to the sequencing of the subproblems. We show the corresponding results for wider due dates (FF = 1.5) in Table 3. In this situation, the room for improvement is larger than in the case of FF = 1.4. We obtain improvement rates of up to 26%. In contrast to the previous situation, the bottleneck criticality measure is the dominant measure for wider due dates. Because of the wider due dates, the flow of jobs through the manufacturing system is smoother and the number of bottlenecks decreases. We obtain improvement rates of up to 10% for a very highly loaded system.

Table 3
TWT values for different machine criticality schemes (FF = 1.5, Model A)

Scenario  w1    w2    w3    w4    High load   Very high load
1         0     1     0     0     0.9338      0.9471
2         0.2   0.8   0     0     0.8002      0.9537
3         0.4   0.6   0     0     0.8552      1.0723
4         0.6   0.4   0     0     0.9245      1.0274
5         0.8   0.2   0     0     0.8725      1.0312
6         1     0     0     0     0.8644      0.9806
7         0     0.8   0.2   0     0.7734      0.9610
8         0.2   0.6   0.2   0     0.9307      0.9615
9         0.4   0.4   0.2   0     0.8250      0.9620
10        0.6   0.2   0.2   0     0.8490      0.9371
11        0.8   0     0.2   0     0.8515      0.9519
12        0     0.6   0.4   0     0.9022      0.9756
13        0.2   0.4   0.4   0     0.8266      0.9056
14        0.4   0.2   0.4   0     0.8150      0.9598
15        0.6   0     0.4   0     0.9418      0.9813
16        0     0.4   0.6   0     0.7431      0.9794
17        0.2   0.2   0.6   0     0.8615      0.9986
18        0.4   0     0.6   0     0.7869      0.9459
19        0     0.2   0.8   0     0.8452      0.9622
20        0.2   0     0.8   0     0.9295      0.9861
21        0     0     1     0     0.8068      1.0397
22        0.2   0.2   0.4   0.2   0.8205      0.9612
23        0.2   0.4   0.2   0.2   0.9196      1.0204
24        0.4   0.2   0.2   0.2   0.8561      1.0154
25        0.2   0.2   0.2   0.4   0.7818      0.9659
26        0.2   0.2   0     0.6   0.7669      0.9321
27        0.4   0     0     0.6   0.9092      0.9240
28        0     0.2   0     0.8   0.9161      0.9584
29        0.2   0     0     0.8   0.7978      1.0295
30        0.4   0.4   0     0.2   0.9224      0.9656
31        0.2   0.4   0     0.4   0.8268      1.0470
32        0.4   0.2   0     0.4   0.8009      0.9707
33        0     0.4   0     0.6   0.8368      0.9452

The results for FF = 1.4 and FF = 1.5 are presented graphically in Fig. 6. This figure clearly demonstrates that wider due dates and a lower load of the manufacturing system

Fig. 6. TWT values for the different scenarios from Tables 2 and 3.

Table 4
TWT values for different machine criticality schemes (FF = 1.6, Model B)

Scenario  w1    w2    w3    w4    Very high load
1         0     1     0     0     1.4277
2         1     0     0     0     1.0207
3         0.4   0.4   0.2   0     1.2499
4         0.4   0.2   0.4   0     1.4442
5         0     0.4   0.6   0     1.2543
6         0     0     1     0     1.1359
7         0.4   0.4   0     0.2   1.0050
8         0.2   0.2   0.4   0.2   1.1188
9         0.2   0.4   0.2   0.2   1.0411
10        0.4   0.2   0.2   0.2   1.0902
11        0.6   0.2   0     0.2   1.2671
12        0.2   0.6   0     0.2   1.0573
13        0.8   0     0     0.2   1.1207
14        0     0.8   0     0.2   1.4051
15        0.2   0.2   0.2   0.4   1.1350
16        0.2   0.4   0     0.4   1.0085
17        0.4   0.2   0     0.4   1.0702
18        0.6   0     0     0.4   1.0368
19        0     0.6   0     0.4   1.2149
20        0     0.4   0     0.6   1.0815
21        0.2   0.2   0     0.6   0.9600
22        0.4   0     0     0.6   0.9262
23        0     0.2   0     0.8   0.9941
24        0.2   0     0     0.8   0.9717

lead to better performance of the combined machine criticality measure. In a second scenario, we consider Model B, which is of more realistic size than Model A. We present the results obtained for Model B in Table 4. Based on the results of the previous experiment, we decided to evaluate only a subset of scenarios in order to reduce the simulation effort. We show only results for a highly loaded system and due dates obtained by the setting FF = 1.6. It turns out that the 4-tuple (w1, w2, w3, w4) = (0.4, 0.0, 0.0, 0.6) leads to the smallest TWT value for Model B. Model B contains five work areas, each of which contains more than 10 tool groups. Therefore, the number of possible sequences for solving the subproblems is much greater than in


case of Model A. Furthermore, the workload and the TWT value of a single subproblem are much more appropriate for differentiating between the different subproblems. We present the obtained results graphically in Fig. 7. From our experiments with the two models we can conclude that the combined criticality measure with an appropriate weighting of the single measures outperforms each of the single measures with respect to TWT. However, more research effort is required in order to establish more general insights and to overcome some of the difficulties caused by the model dependency of our results. For example, we expect that the number of tool groups, i.e., the number of different scheduling problems to be solved, has a significant influence on an appropriate weight setting. The design of the fuzzy rule base requires such insights into the weight setting process as a prerequisite.

7. Conclusions and future research

In this paper, we discuss a concept for the adaptation of a hierarchically organized MAS to different system conditions. We describe a general adaptation architecture for production control systems. The basic idea consists in constructing a mapping between certain situation and parameterization attributes. Then, we apply this architecture to our hierarchically organized MAS. We describe our benchmarking architecture and explain how we may implement an adjustment component as a special staff agent. We present the results of simulation experiments that demonstrate the advantage of the suggested approach for choosing appropriate machine criticality measures for the shifting bottleneck heuristic. We obtain a machine criticality measure that is basically a weighted sum of several machine criticality measures discussed in the literature. By adjusting the weighting factors appropriately, we obtain a machine criticality measure that performs well in many situations.
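The weighted-sum measure can be sketched as a convex combination of normalized single measures per tool group. The normalization and the names below are illustrative assumptions; the four single measures and their exact combination are given by Eq. (6) of the paper.

```python
def combined_criticality(measures, weights):
    """Combined machine criticality: weighted sum of normalized single
    criticality measures per tool group (illustrative sketch).

    measures: dict tool_group -> tuple of the four single criticality
    values (e.g. workload-, bottleneck-, slack-, and TWT-based);
    weights: (w1, w2, w3, w4) with w1 + w2 + w3 + w4 = 1.
    """
    # Normalize each single measure by its maximum over all tool groups
    # so the weights act on comparable scales (an assumed choice).
    maxima = [max(vals[i] for vals in measures.values()) or 1.0
              for i in range(len(weights))]
    return {group: sum(w * v / mx for w, v, mx in zip(weights, vals, maxima))
            for group, vals in measures.items()}

def subproblem_order(measures, weights):
    """Sequence the parallel-machine subproblems by decreasing criticality."""
    scores = combined_criticality(measures, weights)
    return sorted(scores, key=scores.get, reverse=True)
```

Setting, for instance, weights = (0.8, 0.2, 0.0, 0.0) reproduces the workload-dominated setting that performed best for Model A under high load.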
The techniques suggested in this paper may help to solve a problem shared by a large number of heuristic scheduling approaches applied to relevant industrial problems. The problem consists in

Fig. 7. TWT values for the different scenarios from Table 4.



determining an appropriate parameterization of a given heuristic in a given situation. We believe that this parameterization problem reduces the chance of a successful implementation of many heuristics on the shop floor.

There are several logical directions for future research. First of all, we have to use the adaptation procedures for the problems P1, P2, and P3 in an integrated manner. So far, we do not know the performance of the MAS if we apply the adaptation techniques for P1, P2, and P3 at the same time. Here, further simulation experiments are necessary. A second direction of future research is the implementation of fuzzy rule bases as described in Section 4.2. To this end, we have to determine the concrete shape of the fuzzy membership functions. Again, simulation experiments are necessary in order to assess the performance of the suggested approach. The corresponding adjustment agents have to be implemented in our MAS. A third direction is given by the question of when the due date and load characteristics of the system should be determined. So far, only periodic updates are performed. However, it seems worthwhile to also investigate the possibility of event-driven updates (for example, after the breakdown of a leading bottleneck machine or the occurrence of ''hot'' jobs). Furthermore, it is interesting to investigate which other situation attributes besides ''load'' and ''due date characteristic'' are necessary in order to ensure adaptive production control system behaviour. Attributes like ''order release characteristic'', ''product mix'', ''number of tool groups'', ''dynamic bottlenecks'', or ''average tardiness for a shift'' seem to be possible candidates for describing situations.

References

[1] I.M. Ovacik, R. Uzsoy, Decomposition Methods for Complex Factory Scheduling Problems, Kluwer Academic Publishers, Massachusetts, 1997.
[2] S.J. Mason, J.W. Fowler, W.M. Carlyle, A modified shifting bottleneck heuristic for minimizing total weighted tardiness in complex job shops, Journal of Scheduling 5 (3) (2002) 247–262.
[3] A. Schömig, J.W. Fowler, Modelling semiconductor manufacturing operations, in: Proceedings of the 9th ASIM Dedicated Conference Simulation in Production and Logistics, 2000, pp. 55–64.
[4] L. Mönch, M. Stehli, J. Zimmermann, FABMAS—an agent-based system for semiconductor manufacturing processes, in: Proceedings of the First International Conference on Industrial Applications of Holonic and Multi-Agent-Systems, Lecture Notes in Artificial Intelligence 2744, 2003, pp. 258–267.
[5] L. Mönch, M. Stehli, J. Zimmermann, I. Habenicht, The FABMAS multi-agent-system prototype for production control of wafer fabs: design, implementation, and performance assessment, Production Planning & Control 17 (7) (2007) 701–716.
[6] J. Adams, E. Balas, D. Zawack, The shifting bottleneck procedure for job shop scheduling, Management Science 34 (1988) 391–401.
[7] L. Mönch, R. Driessel, A distributed shifting bottleneck heuristic for complex job shops, Computers and Industrial Engineering 49 (2005) 673–680.
[8] M. Wooldridge, An Introduction to Multiagent Systems, Wiley, Chichester, 2002.
[9] H. Van Brussel, J. Wyns, P. Valckenaers, L. Bongaerts, P. Peeters, Reference architecture for holonic manufacturing systems: PROSA, Computers in Industry, Special Issue on Intelligent Manufacturing Systems, 37 (3) (1998) 225–276.

[10] M. Pinedo, Scheduling: Theory, Algorithms, and Systems, second ed., Prentice Hall, 2002.
[11] P. Ivens, M. Lambrecht, Extending the shifting bottleneck procedure to real-life applications, European Journal of Operational Research 90 (1996) 252–268.
[12] H. Aytug, K. Kempf, R. Uzsoy, Measures of subproblem criticality in decomposition algorithms for shop scheduling, International Journal of Production Research 41 (5) (2002) 865–882.
[13] H. Aytug, K. Kempf, R. Uzsoy, Integrating machine learning and decomposition heuristics for complex factory scheduling problems, in: Proceedings of the NSF Design and Manufacturing Conference, Vancouver, BC, Canada, 2000.
[14] V. Osisek, H. Aytug, Discovering subproblem prioritization rules for shifting bottleneck algorithms, Journal of Intelligent Manufacturing 15 (2004) 55–67.
[15] H.H. Holtsclaw, R. Uzsoy, Machine criticality measures and subproblem solution procedures in shifting bottleneck methods: a computational study, Journal of the Operational Research Society 47 (1996) 666–677.
[16] U. Dorndorf, E. Pesch, Evolution based learning in a job shop scheduling environment, Computers and Operations Research 22 (1995) 25–40.
[17] S. Piramuthu, N. Raman, M.J. Shaw, S.H. Park, Integration of simulation modelling and inductive learning in an adaptive decision support system, Decision Support Systems 9 (1993) 127–142.
[18] G. Weiss (Ed.), Adaptation and Learning in Multi-Agent Systems, Lecture Notes in Artificial Intelligence 1042, Springer, Heidelberg, 1996.
[19] B.C. Csaji, B. Kadar, L. Monostori, Improving multi-agent based scheduling by neurodynamic programming, in: Proceedings of the First International Conference on Industrial Applications of Holonic and Multi-Agent-Systems, Lecture Notes in Artificial Intelligence 2744, 2003, pp. 110–123.
[20] C.D. Geiger, R. Uzsoy, H. Aytug, Autonomous learning of effective dispatch policies for flowshop scheduling problems, in: Proceedings of the Industrial Engineering Research Conference, 2003.
[21] L. Monostori, AI and machine learning techniques for managing complexity, changes and uncertainties in manufacturing, Engineering Applications of Artificial Intelligence 16 (4) (2003) 277–291.
[22] F.D. Vargas-Villamil, D.E. Rivera, A model predictive control approach for real-time optimization of reentrant manufacturing lines, Computers in Industry 45 (2001) 45–57.
[23] F.D. Vargas-Villamil, D.E. Rivera, K.G. Kempf, A hierarchical approach to production control of reentrant semiconductor manufacturing lines, IEEE Transactions on Control Systems Technology 11 (3) (2003) 578–587.
[24] C. Bierwirth, Adaptive Search and the Management of Logistics Systems, Kluwer Academic Publishers, Dordrecht, 2000.
[25] D. Gupta, J.A. Buzacott, A framework for understanding flexibility of manufacturing systems, Journal of Manufacturing Systems 8 (2) (1989) 89–97.
[26] R. Uzsoy, C.-S. Wang, Performance of decomposition procedures for job shop scheduling problems with bottleneck machines, International Journal of Production Research 38 (2000) 1271–1286.
[27] L. Mönch, R. Schabacker, D. Pabst, J.W. Fowler, Genetic algorithm based subproblem solution procedure for the modified shifting bottleneck heuristic for complex job shops, European Journal of Operational Research 177 (3) (2007) 2100–2118.
[28] V. Ravi, P.J. Reddy, H.-J. Zimmermann, Fuzzy rule base generation for classification and its minimization via modified threshold accepting, Fuzzy Sets and Systems 120 (2) (2001) 271–279.
[29] J. Zimmermann, L. Mönch, Simulationsbasierte Bewertung von Parametrisierungsverfahren für Produktionssteuerungsansätze, in: Proceedings of the 11. ASIM-Fachtagung ''Produktion und Logistik'', 2004, pp. 189–198.
[30] L. Mönch, O. Rose, R. Sturm, A simulation framework for performance assessment of shop-floor control systems, SIMULATION: Transactions of the Society for Modeling and Simulation International 79 (3) (2003) 163–170.
[31] S. Cavalieri, M. Macchi, P. Valckenaers, Benchmarking the performance of manufacturing control systems: design principles for a web-based simulated testbed, Journal of Intelligent Manufacturing 14 (1) (2003) 43–58.

[32] L. Mönch, Scheduling-Framework für Jobs auf parallelen Maschinen in komplexen Produktionssystemen, WIRTSCHAFTSINFORMATIK 46 (6) (2004) 470–480.
[33] J.W. Fowler, G. Feigin, R. Leachman, Semiconductor Manufacturing Testbed: Data Sets, Arizona State University, 1995.
[34] MASM test data sets, http://www.eas.asu.edu/masmlab, 2005.
[35] L.F. Atherton, R.W. Atherton, Wafer Fabrication: Factory Performance and Analysis, Kluwer Academic Publishers, Boston, Dordrecht, London, 1995.

Lars Mönch is a Professor for Enterprise-wide Software Systems in the Department of Mathematics and Computer Science at the University of Hagen, Germany. He received a master's degree in applied mathematics and a PhD in the same subject from the University of Göttingen, Germany. He earned a Habilitation degree in Information Systems from the Technical University of Ilmenau in 2005. He worked in the area of object-oriented software development for two years after receiving his PhD degree. His


current research interests are in simulation-based production control of semiconductor wafer fabrication facilities, applied optimization, multi-agent-systems, and artificial intelligence applications in manufacturing. He is a member of GI (German Chapter of the ACM), GOR (German Operations Research Society), SCS, and INFORMS. He has been a member of the Intelligent Manufacturing Systems Network of Excellence (IMS-NoE) of the European Community, SIG 4 (Benchmarking and Performance Measurement of Online-Scheduling-Systems), since 2004.

Jens Zimmermann is a PhD student at the Chair of Enterprise-wide Software Systems at the University of Hagen, Germany. He received a master's degree in information systems from the Technical University of Ilmenau. He is interested in semiconductor manufacturing, simulation, multi-agent-systems, and machine learning. He is a member of GI. He has been a member of the Intelligent Manufacturing Systems Network of Excellence (IMS-NoE) of the European Community, SIG 4 (Benchmarking and Performance Measurement of Online-Scheduling-Systems), since 2004.
