Journal of Intelligent Manufacturing, 14, 43-58, 2003. © 2003 Kluwer Academic Publishers. Manufactured in The Netherlands.

Benchmarking the performance of manufacturing control systems: design principles for a web-based simulated testbed

SERGIO CAVALIERI
Dipartimento di Ingegneria, Università di Bergamo, Viale Marconi 5, I-24044 Dalmine (BG), Italy
E-mail: [email protected]

MARCO MACCHI
Dipartimento di Ingegneria Gestionale, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano, Italy
E-mail: [email protected]

PAUL VALCKENAERS
Mechanical Engineering Department (P.M.A.), Katholieke Universiteit Leuven, Celestijnenlaan 300 B, B-3001 Leuven, Belgium
E-mail: [email protected]

Received July 2001 and accepted January 2002

The paper reports the main research activities currently carried out for designing and developing a test-bench service. This service would act as the main reference point for establishing benchmarks on which research results can be compared. These benchmarks will be made available through web technology. The paper, after a first outline of the main features of the project and its overall vision, is particularly focused both on the design principles related to the construction of good benchmark cases and on the technological issues related to the provision of a web-based simulation environment for supporting interactivity between remote scheduling and control systems and a locally resident simulation system.

Keywords: Scheduling and control, simulation, object-oriented, benchmarking

1. Introduction

Manufacturing scheduling and control is considered one of the most commercially important topics in the manufacturing area and a popular research domain in various academic fields, including industrial engineering, operations research, artificial intelligence, holonic and multi-agent systems. From the industrial side, the adoption of highly reactive and efficient scheduling and control systems strongly affects the level of productivity and utilization of a manufacturing enterprise, particularly under the pressure of shortened product cycles, reduced batch sizes and a broader variety of items to be produced. From the research side, a considerable amount of work has been done in the area of manufacturing systems control. This work has been focused mainly on getting away from traditional centralized forms of control and moving to more decentralized forms, involving multiple decision makers which can be arranged with various coordination structures (for a more comprehensive and detailed view of the evolution and state of the art of new emerging distributed control architectures, see Dilts et al., 1991; Shen and Norrie, 1999, and numerous references therein).

However, a number of questions arise from the research that has so far been conducted in this area. Of primary importance is the assessment of objective and clear methods for determining which is the best control architecture for a given manufacturing problem. Secondly, experimental campaigns are often conducted on toy simulation cases (e.g. idealized shop floors with a limited number of machines), thus, on the one hand, shielding researchers from the complexities that a real factory would impose and, on the other hand, increasing the diffidence and skepticism of practitioners about the real industrial applicability of these systems (Hanks et al., 1993). As a result, it is evident that without a clear answer to these fundamental issues, the technology gap between research and industrial application would dramatically widen. How can this dichotomy be resolved?

Co-operation in the IMS-WG (Esprit 21955) revealed a strong need from both sides for the design and development of realistic and industrially relevant test cases. These test cases would specifically address the evaluation and stress-testing of the performance of scheduling and control systems based on new technological paradigms. As proof of the growing interest of the research and industrial community in this topic, similar efforts reported overseas can be pointed out. Going back to 1993, Van Dyke Parunak, in his MASCOT proposal (Van Dyke Parunak, 1993), already showed the need to use industrial-strength benchmark problems for filling the gap between research and application. Two years later, Drummond (1995) developed a web site on scheduling benchmarks, with the main aim of serving as a repository of benchmark test cases and of their solutions. Despite the high relevance of the cases collected, the web site suffers from relying on a "static" database, without providing any kind of support to the user in his/her design and experimentation activities. In Canada, Brennan (1997, 1999) has been working on an experimental test bed for the evaluation of alternative control modes. Unlike previous works on this topic, he proposes a modular architecture which consists of a discrete-event simulation model and a communication shell, intended to emulate the operation of the real manufacturing system, and a state/control module, used to implement alternative decision-making schemes. This modular structure results in a system that has the ability to deal with any type of manufacturing system (e.g. by changing the simulation model) and any type of control architecture (e.g. by changing the control model). Finally, a strong debate on the critical issues related to the construction of meaningful benchmark problems and industrially relevant test cases has also animated the AI community (see as an example the article published in the AI journal by Hanks et al., 1993).

In Europe, a Special Interest Group (SIG) on Benchmarking and Performance Measures, coordinated by K.U.Leuven and the University of Bergamo, has recently been formed within the European IMS Network of Excellence (IMS-NoE, 2001).1 The core activity of the SIG will be to provide a web-based benchmarking service. One of the major features of the benchmarking service, which makes it different from other analogous projects, is its main goal of providing users with an interactive environment simulating the physical system and a variety of manufacturing scenarios through the use of web-based simulation. Emulations of shop floor systems and their control systems will be brought together to execute performance tests. To this end, suitable performance criteria will also be elaborated. These tests will address all relevant aspects, including ensuring the quality of the scheduling and control software itself, the deployment effort in a shop floor and the productivity of the shop floor system itself.

Accordingly, this paper intends to provide the reader with the main theoretical foundations and design specifications underlying the development of good benchmark cases and to point out the most relevant technological issues related to the provision of a web-based simulation environment for supporting interactivity between scheduling and control systems and remote simulated models of selected shop floors. In particular, Section 2 will be dedicated to the assessment of the fundamental issues driving the design and implementation of a generic test bed, considering the main user requirements coming from researchers and practitioners as well. Section 3 will describe the overall architecture of the future test-bench service. Section 4 will be devoted to the presentation of a reference benchmark framework for organizing application domain knowledge and implementing model construction of a generic manufacturing test bed. Sections 5 and 6 will discuss design specifications for enabling automatic simulation model building and for developing inter-process communication methods in order to guarantee interoperability between remote scheduling and control systems and emulated manufacturing test beds. Finally, in Section 7 the main research activities that will be carried out during the next three years by the established SIG will be presented and discussed.

Fig. 1. Filling the gap between industrial users, researchers and technology vendors (adapted from Van Dyke Parunak, 1993).

2. User requirements model

Within the project, the fundamental issues driving the design and implementation of a generic test bed for manufacturing control systems have been defined taking into account user requirements and some indications from the literature with similar objectives. These issues have been derived considering that test cases would not be used only by the research community but also by industrial users and practitioners for evaluating and comparing different approaches to scheduling and control problems. In fact, along with the vision proposed by Van Dyke Parunak (1993), such a test-bench service would represent "the most effective medium for communicating industrial needs and, on the other side, to software vendors and researchers for demonstrating the relative virtues of their products against common benchmarks, thus stimulating competition and promoting advanced products". What are the main requirements for the design of a meaningful test bed that could be used for different purposes by the three main categories of actors identified in Fig. 1?

2.1. The practitioner's perspective

From an industrial practitioner's point of view, there exists a strong need to ascertain the potential of new technological paradigms in addressing and solving most of the new manufacturing needs. Bussmann and McFarlane (1999) list some of these requirements, summarized as: increasing process complexity, constant product changes, volatile output and high robustness. Unavoidably, these requirements turn out to be the main design specifications driving the development of effective manufacturing scheduling and control systems. According to Dilts et al. (1991), such systems need to guarantee reliability, fault tolerance, reconfigurability, adaptability and software modifiability, without neglecting at the same time traditional "static" performance measures such as cost minimization.

As Hanks et al. (1993) maintain in their survey on test beds for the validation of planning agents, a realistic test case should encompass the complexity of the real world not only in terms of the size of the problem (e.g. the number of production resources and jobs) but also in the possibility to trigger exogenous or unplanned events as they occur at execution time. However, practitioners are in the majority of cases quite reluctant to devote their time to providing an exhaustive and clear description of a production system. Hence, an additional design requirement resides in the development of an intelligent and easily manageable user interface, which would have a twofold purpose: (a) put industrial "problem-makers" at ease in setting up and detailing their production systems; (b) provide a standardized common representation of different benchmark models (Cavalieri et al., 1999). For this reason, the need for a descriptive framework arises. It should ease the design and instantiation of new test beds. It should support the experimentation process by means of both a static description of production systems and a description of their manufacturing scenarios. Finally, it should address the problem of defining a clear and sound performance measurement system in order to allow for a precise and thorough evaluation of a tested scheduling system.

In the literature, there has been a proliferation of proposals of frameworks for describing a generic production system. However, the majority of these works lack generality, fitting their description to specific production systems, with particular emphasis on the production context of flexible manufacturing systems (Booth, 1998; Elia and Menga, 1994; Park et al., 1997). An interesting contribution is provided by Zhang and Zhang (1997). They propose an object-oriented model of a production system, made up of a product module, for the description of a generic product and its components, a planning module, defining the process plan and the evaluation measures, and a resource module, describing a set of production resources. Garetti and Bartolotta (1995) also adopt an object-oriented model named MES (Manufacturing Entity Structure) for defining a generic production system; they propose a general architecture of a workbench for the design of industrial production systems, based on structural, technological and management aspects. As regards the definition of the performance measurement system for evaluating the tested scheduling system, there are several proposals for modeling performance measurement systems, both at the enterprise level and at the shop floor level (see as a reference the works of Bititci, 1995; White, 1996; Bongaerts et al., 1999).

2.2. The researcher's perspective

The above design specifications also cover most of the requirements from the scientific community. As already mentioned in Section 1, the activity of researchers is mainly focused on finding new solutions to emerging manufacturing problems. In this respect, the construction of an experimental test bed is of secondary importance and mainly functional to the validation and assessment of the potential of their prototypical systems. Hence, the availability of realistic test cases and the possibility to count on objective comparisons with alternative control modes applied to the same case would greatly improve the quality of their research results. These comparisons should not be limited to a mere score ranking. Different measures of plan quality should be available, sensing not only the achievement of a goal state (e.g. on-time delivery of products) but also considering other quality factors such as, for example, the level of nervousness of a schedule whenever conditions for rescheduling arise. In addition, considerable added value would be given by the possibility to directly plug their scheduling and control systems, through the use of proper standard communication interfaces, into a prebuilt discrete-event simulation engine emulating the operation of the manufacturing system. This feature would also relieve researchers of the time-consuming activity of designing and constructing a simulation model, which is otherwise necessary when only the manufacturing data of the problem are available.
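As a purely illustrative sketch of what such plan-quality measures could look like in code (the interface, record and metric names below are assumptions for this example, not part of the benchmarking service), a measure can be expressed as a function over a schedule and then compared across control modes:

```java
import java.util.List;

/** Hypothetical schedule representation: one due date and one planned completion time per job. */
record Schedule(List<Double> dueDates, List<Double> completionTimes) {}

/** A plan-quality measure, as sketched here, maps a schedule to a single score. */
interface PlanQualityMetric {
    double evaluate(Schedule s);
}

/** Example 1: total tardiness, a classical goal-state measure (on-time delivery). */
class TotalTardiness implements PlanQualityMetric {
    public double evaluate(Schedule s) {
        double tardiness = 0.0;
        for (int i = 0; i < s.dueDates().size(); i++) {
            tardiness += Math.max(0.0, s.completionTimes().get(i) - s.dueDates().get(i));
        }
        return tardiness;
    }
}

/** Example 2: schedule nervousness, sketched as the average absolute shift of completion
 *  times between the schedules before and after a rescheduling event. */
class Nervousness {
    public double evaluate(Schedule before, Schedule after) {
        int n = Math.min(before.completionTimes().size(), after.completionTimes().size());
        double shift = 0.0;
        for (int i = 0; i < n; i++) {
            shift += Math.abs(after.completionTimes().get(i) - before.completionTimes().get(i));
        }
        return n == 0 ? 0.0 : shift / n;
    }
}
```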

2.3. Design requirements for a good test bed

As a result of the integration of these different, but partially overlapping, perspectives, the construction process of manufacturing test beds that can be properly exploited both by the scientific and industrial communities should respond to the following design features:

(1) Exogenous events: a test bed should also comprehend unplanned manufacturing events such as machine disruptions, rush orders and order deletions.
(2) Complexity of the world: a test bed needs to be a realistic and industrial-strength case, not merely a "toy case".
(3) Support in the problem description: production engineers should rely on a friendly and intelligent user interface in describing their production system and clearly defining the problem to be handled.
(4) Clear separation of the physical world from the controlling environment: a test bed needs to make a clear distinction between the scheduler and controller and the simulated world where it operates; the interface should therefore be clean, well defined and well documented; a designer must be able to determine easily what actions are available to the scheduler, how the actions are executed by the test bed and how information about the world is communicated back to the scheduler.
(5) Clear inter-process communication interface between controller and remote emulator: it should be manageable for the experimenter to plug his control system into the remote emulator by using a standard communication protocol that is easy to interface with; otherwise, it would be less time consuming to create a local simulation model of the manufacturing system.
(6) Supporting experimentation: a test bed should provide a convenient way for the experimenter to vary the behavior of the worlds in which the scheduler is to be tested; thus, given the same production system, by plugging it into different production scenarios, it would be possible to evaluate the performance and the robustness of the proposed scheduling solver considering a variety of sample problems and conditions.
(7) Good measures of scheduling quality: a set of well-defined and consistent performance metrics (including going concerns next to closed-form objective functions) needs to be clearly defined in order to understand thoroughly the main reasons for a scheduler's behavior and not merely obtain a synthetic score ranking alternative solutions.

Fig. 2. Main components of the test-bench service.

3. Architecture of the test-bench service

Based on these design specifications, the architecture of the test-bench service will be structured into three main software components (see Fig. 2):

(1) Test-bench assistant: a visual interactive environment for assisting the designer of a test-bench case in inputting all the main data of the industrial case he/she wants to propose to the scientific and industrial community. The functionality of this tool is twofold: (i) promoting and easing the proposal and submission of new test cases; (ii) providing a unique standard format for the description of a test case.
(2) Test-bench virtual library: a collection of physical and virtual (i.e. emulated) industrial test-bench cases. For each test case, an exhaustive description of its main technological, structural and production data as well as of the main performance criteria will be provided. Moreover, the best solutions to date and their performance values will be published.
(3) Test-bench emulator and evaluator: a web-based remote simulation service for the experimentation, testing and performance analysis of submitted scheduling proposals. This service will be founded on an on-line web-based simulation tool on which the model of a selected test bed will run. This tool will also have a communication layer for setting up an on-line data/command exchange with remote control systems. At the end of a simulation cycle, the server will elaborate the output experimental data, which will be published and compared with the reference solutions to date.

It is evident that, apart from the virtual library, whose test cases will be provided by the industrial companies joining the IMS Network as members of the SIG, an intensive research activity is required to properly design the other two main components of the service. As a result, the following sections of the paper will mainly deal with: (a) the design of a benchmark framework for describing a generic production system in terms of technological, structural and production data and performance criteria; (b) the design specifications for integrating the logical model of a production system and its simulation model; (c) the design specifications for enabling an interactive web-based test-bench emulation and performance evaluation.
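Purely as an illustration of how the third component could be exposed to remote users, the following fragment outlines a minimal service interface; all type and method names (TestBenchEmulator, ExperimentResult, and so on) are assumptions made for the sake of the example and not part of the actual service specification:

```java
import java.util.Map;

/** Hypothetical description of an experiment: which test case to emulate and under which scenario. */
record ExperimentRequest(String testCaseId, String scenarioId, double speedFactor) {}

/** Hypothetical result record: the performance values computed by the evaluator. */
record ExperimentResult(String experimentId, Map<String, Double> performanceIndicators) {}

/** A minimal sketch of the test-bench emulator and evaluator seen as a remote service. */
interface TestBenchEmulator {
    /** Starts an emulation run of the selected test bed; returns an experiment identifier. */
    String startExperiment(ExperimentRequest request);

    /** Collects the evaluated output data once the simulation cycle has finished. */
    ExperimentResult collectResults(String experimentId);
}
```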

4. Design of a framework for the construction of a test bed

The above list of requirements makes evident that the construction of a good test case is not trivial. To reduce the complexity of building up a manufacturing case, Cavalieri et al. (1999) claimed the need to first assess a descriptive benchmark framework. In order to establish the general guidelines for designing an exhaustive benchmark framework, there is the need to state clearly the real nature of a manufacturing benchmark problem. In Cavalieri et al. (1999) a synthetic but, hopefully, comprehensive definition has been given. A benchmark problem is stated as "the definition of a production system model which is run under a specific manufacturing scenario in order to compare performance results of different attempts of solution". As a result of this definition, a benchmark problem requires the proper specification of the following design issues:

* the production system taken as a reference (specification of the static features);
* the manufacturing scenario (specification of the dynamic features);
* the measures of performance to be used, in order to allow for an objective comparison between the results of different solutions on the same problem.

Figure 3 sketches out the three main design dimensions. The logical model of the framework is the representation of the design variables constituting the framework itself and of their mutual relations as well. It is derived from the MES modeling method proposed by Garetti and Bartolotta (1995). By using the Unified Modeling Language (UML) notation (Jacobson et al., 1992), each design variable is translated into a specific class of objects characterized by a well-defined set of attributes and internal methods (Fig. 4). By setting a proper value for each of the design parameters specified within the benchmark framework, a production engineer should be able to provide a fully detailed and exhaustive description of a manufacturing system and clearly define the domain analysis and the nature of the scheduling problem to be solved.

In the following, each of the design axes of the descriptive framework will be detailed, with the exception of the performance dimension, which is currently under study. By way of example, only class diagrams of the production system model are reported here, while Fig. 8 provides an overall view of the whole class diagram of the current version of the framework. Object classes are represented in the UML notation by using the Rational Rose object-oriented CASE tool.

Fig. 3. The three axes of the benchmark framework.

Fig. 4. Representation of design variables into object classes.

Fig. 5. Class diagram of the physical components.

4.1. The production system model

The production system model provides a description of the structural and technological features of the test bed. The structural features describe the physical configuration of the production system. This requires the proper assessment of the physical components (e.g. workstations, transporters, storage) and of the plant layout (see Figs 5 and 6). In particular, as regards the physical components, three main types of workstations can be considered for building up a benchmark problem (a sketch of how such components translate into object classes follows the list):

* machining stations, which process one single job at a time;
* assembly stations, which are devoted to the assembly of several components or subgroups into one single output entity;
* loading/unloading stations, which are dedicated to the loading/unloading of jobs into/from the production system.
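The following fragment is a minimal sketch, in plain Java, of how such physical components might be rendered as object classes in the spirit of Fig. 5; the class and attribute names are illustrative assumptions, not the framework's actual UML definitions:

```java
/** Base class sketch for any physical workstation of the production system. */
abstract class Workstation {
    final String id;
    int inputBufferCapacity;   // input buffer attached to the workstation
    int outputBufferCapacity;  // output buffer attached to the workstation

    Workstation(String id) { this.id = id; }
}

/** Machining station: processes one single job at a time. */
class MachiningStation extends Workstation {
    double processingRate;     // nominal (deterministic) processing rate
    MachiningStation(String id) { super(id); }
}

/** Assembly station: joins several components or subgroups into one output entity. */
class AssemblyStation extends Workstation {
    int numberOfInputComponents;
    AssemblyStation(String id) { super(id); }
}

/** Loading/unloading station: entry and exit point of jobs into/from the system. */
class LoadUnloadStation extends Workstation {
    LoadUnloadStation(String id) { super(id); }
}
```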


Fig. 6. Relationship diagram of lay-out and physical components classes.

The transport means can be modeled by:

* a transport delay only (i.e. not considering transport capacity constraints);
* conveyors, moving several parts in parallel;
* AGVs or, in general, serial transporters.

Finally, storage is present in a production system as:

* input/output buffers for the workstations;
* system buffers, generally used as inter-operational de-coupling points.

The technological features are captured by the process plans of the products being processed in the plant. A process plan models a sequence of operations that have to be performed for a given product (Fig. 7). In detail, each operation requires the proper definition of:

* the machines on which the operation can be performed (sometimes with alternatives);
* the (estimated) duration of the operation;
* the precedence constraints between the operations; process plans can be executed in only one sequence (i.e. fixed process plan, no routing flexibility) or in any sequence (with routing flexibility); routing flexibility can be modeled by precedence graphs or AND-OR graphs;
* the set-up each operation requires (and the related set-up times, possibly sequence dependent).

A minimal code sketch of such a process plan structure is given below.
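Again as an illustrative assumption (the names and structure below are not taken from the framework itself), an operation and a process plan could be captured roughly as follows:

```java
import java.util.List;
import java.util.Map;

/** One operation of a process plan: alternative machines, duration and set-up requirements. */
class Operation {
    final String id;
    List<String> alternativeMachines;                 // workstations on which it can be performed
    double estimatedDuration;                         // estimated processing time
    String requiredSetup;                             // set-up the operation requires
    Map<String, Double> sequenceDependentSetupTimes;  // set-up time given the previous set-up

    Operation(String id) { this.id = id; }
}

/** A process plan: the operations of a product and the precedence constraints among them.
 *  An empty precedence map would model full routing flexibility; a chain models a fixed plan. */
class ProcessPlan {
    final String productType;
    List<Operation> operations;
    Map<String, List<String>> precedence;  // operation id -> ids of operations that must precede it

    ProcessPlan(String productType) { this.productType = productType; }
}
```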


Fig. 7. Class diagram of process plan.

4.2. The manufacturing scenario

The definition of a manufacturing scenario aims at collecting events or activities dependent on the dynamic behavior of the manufacturing domain. A manufacturing scenario can be split into two sub-scenarios according to the generating domain:

* plant scenario, which is concerned with the dynamic behavior of the physical plant;
* operational scenario, which is concerned with the dynamic behavior of operational and management systems.

A plant scenario points out events or activities related to the functioning of manufacturing physical components, such as:

* machine breakdown, which is due to maintenance parameters and busy times (mean time between failures/mean time to repair); machine breakdown can be referred only to the bottleneck station or extended to all stations;
* stochastic variations on set-up times and operation processing times;
* stochastic variations on transport service time, which also depends on the type of transporter being selected (serial or parallel);
* material arrivals, being dependent on suppliers.

An operational scenario is best represented by the order mix. It details the input data to be used by the scheduling and control system. It considers the way the release of the production plan is conceived. The production plan defines the production mix to be managed by the scheduling and control system under study within a rolling horizon. The release of an order mix requires in particular the definition of:

* the production orders to be scheduled (type of products and lot quantity);
* the expected release dates;
* the expected due dates;
* other scheduling conditions, if required (e.g. product costs or product quality).

A sketch of how such an order mix could be represented is given below.
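As a minimal illustration (the record names and fields below are assumptions made only for this example), the operational scenario could be reduced to a list of order records released over the rolling horizon:

```java
import java.util.List;

/** Hypothetical production order of the order mix. */
record ProductionOrder(
        String productType,      // which process plan the order refers to
        int lotQuantity,         // number of parts to be produced
        double expectedRelease,  // expected release date (in simulation time)
        double expectedDueDate   // expected due date (in simulation time)
) {}

/** Hypothetical operational scenario: the order mix released to the control system under test. */
record OperationalScenario(List<ProductionOrder> orderMix) {}
```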

5. Integrating logical and simulation modeling

Logical models of manufacturing systems will serve as test bed specifications within the test-bench service: in fact, starting from test bed models, the test-bench service would provide automated building of simulation models to enable the experimentation of manufacturing control systems in simulation environments. Being a static representation of manufacturing systems, however, the object-oriented representation of manufacturing test beds described in Section 4 asks for further integration to achieve automated simulation model building. The architecture adopted in the project for enabling automated building of simulation models from logical models has been directly inherited from the SES/MB architecture proposed by Zeigler (1990) (see Fig. 9). According to Zeigler:

(1) The system entity structure base (SES) is a declarative base where conceptual components of a given system are represented as entities, their own attributes and relationships with other entities; in other words, the SES base stores component models providing a static representation of system components and their properties.

(2) The model base (MB) is a behavioral base where component models are "expressed in dynamic and symbolic formalisms"; the MB base stores component models providing a behavioral representation of system components in terms of the sequence of actions and activities performed and the interactions exhibited with other components.
(3) Simulation models are finally built in the MB base; in particular, model building consists in "retrieving" behavioral models from the "model structure" base; the retrieving process is driven by an SES model instanced in the SES base; the SES model is input to the MB base by means of a "transform" process.

The model building of manufacturing simulation models in the test-bench service applies the principles underlying the SES/MB scheme:

* the UML model of manufacturing test beds, related to a static description of manufacturing system components and their manufacturing scenarios, constitutes the SES base;
* the MB base is a simulation library providing behavioral models of manufacturing system components;
* simulation models of manufacturing test beds are finally generated in the MB base by means of the "transform" and "retrieve" processes driven by UML models instanced in the SES base.

Fig. 8. Overview of the class diagram of the benchmark framework.

Fig. 9. The SES/MB architecture (adapted from Zeigler, 1990).

The model translation procedure that is currently being developed consists of three steps:

(1) the selection (from the MB simulation library) of the behavioral models of the components that constitute the simulated test bed;
(2) the assignment of attribute values for each selected component model;
(3) a linkage among related component models.

Selection of behavioral models is driven by the objects constituting the test bed model instanced in the SES base. As an example, let us consider a logical model of a test bed constituted by several machining stations. These stations are not linked to any manufacturing scenario. In this case, the behavioral models constituting the simulated test bed are processors that simulate machining at a deterministic processing rate. Conversely, let us consider a logical model of a test bed related to some specific manufacturing scenarios applied to the machining stations (e.g. stochastic variations on operation processing times or machine breakdowns). In this second case, the behavioral models are processors that simulate machining at a stochastic processing rate and include in their state diagrams (and, thus, simulate) also breakdown and recovery states.

Moreover, each component of the model is characterized by specific attributes. The values of these attributes are directly inherited from the object instances of the logical models in the SES base. For example, the processing rates of the machining stations of the previous example have values which are directly provided by the "table operation/workstation" of the UML model. Other machining parameters are assigned by means of attribute values of machining object instances (e.g. capacity) or events of manufacturing scenarios (e.g. mean time between failures for machine breakdowns). In the last step, component models are linked together based on the relationships provided in the UML model; the whole manufacturing test bed is finally built in the simulation environment.

As said, the model translation process is still under development. Being structured on a modular approach, which was deemed the most natural way to translate object components of the logical description of a test case into corresponding building blocks within the simulation model, it of course requires a one-to-one correspondence between logical modules and the simulation model components available in a simulation library.
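A minimal sketch of these three steps, assuming hypothetical names for the simulation library and its building blocks (nothing below reflects the project's actual implementation), could look like this:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical logical component taken from the SES (UML) model. */
record LogicalStation(String id, double processingRate) {}

/** Hypothetical behavioral building block taken from the MB simulation library. */
interface BehavioralModel {
    void setAttribute(String name, Object value);  // used in step 2: attribute assignment
    void connectTo(BehavioralModel downstream);    // used in step 3: linkage
}

/** Hypothetical MB simulation library: returns a behavioral model for a component type;
 *  a different model may be returned when a manufacturing scenario (e.g. breakdowns) applies. */
interface SimulationLibrary {
    BehavioralModel retrieve(String componentType, boolean hasScenario);  // step 1: selection
}

/** Sketch of the transform/retrieve process driven by the logical (SES) model. */
class ModelTranslator {
    private final SimulationLibrary library;

    ModelTranslator(SimulationLibrary library) { this.library = library; }

    Map<String, BehavioralModel> translate(List<LogicalStation> stations,
                                           Map<String, List<String>> links,
                                           boolean stochasticScenario) {
        Map<String, BehavioralModel> simModels = new HashMap<>();
        // Steps 1 and 2: select a behavioral model per logical component and copy its attributes.
        for (LogicalStation s : stations) {
            BehavioralModel m = library.retrieve("MachiningStation", stochasticScenario);
            m.setAttribute("processingRate", s.processingRate());
            simModels.put(s.id(), m);
        }
        // Step 3: link the component models according to the relationships of the UML model.
        links.forEach((from, targets) ->
                targets.forEach(to -> simModels.get(from).connectTo(simModels.get(to))));
        return simModels;
    }
}
```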

6. Interprocess communication between remote controller and plant emulator

The benchmarking approach makes a clear distinction between the manufacturing control and the emulation of the plant that is being controlled. This differs from common simulation practice, in which emulation and control are interwoven. The underlying idea is that the manufacturing control software used for benchmarking/simulation is identical to the software in the real factory, if and when installed. This section first describes current experience with two simulation software products (ARENA and SILK). Next it addresses the key issue involved.

ARENA is one of the oldest packages in the simulation domain with a significant market share. It is mainly a third-generation software, like for instance C and FORTRAN, with a modern graphical user interface. It provides real-time simulation, in which progress in simulation time is proportional to real-world time while the simulation executes, and it provides on-line communication over a single TCP/IP socket. Today, VBA support in ARENA provides more flexible mechanisms for communication over the computer network (e.g. multiple sockets). The software also allows developing your own building blocks (i.e. template elements).

Referring to our project, the solution with ARENA consists of developing paired sets of building blocks in the simulation package and matching Java classes to cooperate with these building blocks. This also includes a general service to route messages on the respective sides (emulation and control); this is a kind of postal service where the building blocks and the Java class methods manage the addressing. Note that everything has to be managed in a single address space and represented in numerical formats because of limitations in ARENA. The more recent possibility to use VBA blocks in ARENA probably allows less complicated solutions. This solution requires the development of building blocks for the resources (i.e. equipment) in the factory, but allows any configuration of such equipment to be emulated and controlled without programming. It suffices to develop the emulation with the graphical user interface. The Java-based manufacturing control would observe the factory configuration and automatically generate the required objects (class instances) and Java threads. The Java class providing the interface to the emulation has a state-of-the-art design, including event notification services.
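To make the message-routing idea more concrete, the fragment below sketches how a remote controller might exchange simple text commands with the emulator over a single TCP/IP socket; the address, message format and class name are invented for the example and do not correspond to the actual ARENA building blocks or Java classes developed in the project:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

/** Minimal sketch of the controller side of a socket-based link to a plant emulator. */
class EmulatorLink implements AutoCloseable {
    private final Socket socket;
    private final PrintWriter out;
    private final BufferedReader in;

    EmulatorLink(String host, int port) throws Exception {
        socket = new Socket(host, port);
        out = new PrintWriter(socket.getOutputStream(), true);
        in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    }

    /** Sends a control command addressed to one emulated resource (the "postal service" idea). */
    void send(String resourceId, String command) {
        out.println(resourceId + ";" + command);   // e.g. "M1;START_JOB_42"
    }

    /** Blocks until the emulator reports the next plant event (e.g. an operation completion). */
    String nextEvent() throws Exception {
        return in.readLine();
    }

    @Override
    public void close() throws Exception {
        socket.close();
    }
}
```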

The main disadvantages of the ARENA option are: (1) the ARENA software has inherited the limitations of its initial design, where state-of-the-art object-oriented software provides much enhanced functionality, and (2) it is necessary to master two software development environments (i.e. Java and ARENA) with weak support for the link in between. It is especially hard to motivate people to invest in learning non-mainstream software like simulation packages; learning to master Java is always a good investment. The main advantages are: (1) the relative maturity/stability of a well-established simulation package and (2) the fact that software maintenance in ARENA is only required when new types of equipment/processes need to be emulated. Novel configurations of equipment which is already covered only require the user to build up the new configuration with the graphical user interface.

SILK is a Java software library. Evidently, this is state-of-the-art software technology, and integration with a Java manufacturing control does not pose any problems. Whereas the research with ARENA has targeted existing factories, the work with SILK has only just started to address real-life factories. It is too early to comment on the computing power requirements of SILK when the emulation is used for realistic cases. However, the advantages of SILK are sufficient to make it our first choice for research purposes. The reduced effort for training is crucial. In addition, the level of functionality that can be obtained is significantly higher, but more experience is needed to make final comments on this matter.

The key issue in this technology is time. Currently, the emulations are simulations that run in real time: one time unit in simulation corresponds to one time unit in reality. The requirement behind this is that the manufacturing control must have a realistic image of the (emulated) underlying manufacturing system. Among other things, it must be able to distinguish situations in which it has ample time to make a decision from situations in which delaying a decision will make equipment idle. This requirement is largely fulfilled when the time units in emulation and reality are identical. Unfortunately, this requirement imposes severe limits on how much faster than reality benchmarks and tests can be performed. When the emulation speed is increased, the test results start to change significantly at some point dependent on the test case at hand. This limits the amount of testing that can be performed in a given time frame. More fundamentally, only emulations with an identical time unit (speed factor set equal to 1) are realistic. Any speed-up distorts reality.
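The following is a small sketch, under the real-time requirement just stated, of how an emulation loop could pace simulation time against wall-clock time with a configurable speed factor (a factor of 1 keeps emulation and reality aligned; anything larger is the kind of speed-up that may distort results); the class is purely illustrative:

```java
/** Illustrative pacing of an emulation: advances simulated time in fixed steps and sleeps
 *  so that wall-clock progress matches simulated progress divided by the speed factor. */
class PacedEmulationClock {
    private final double speedFactor;   // 1.0 = real time; > 1.0 = faster than reality
    private double simTimeSeconds = 0.0;

    PacedEmulationClock(double speedFactor) { this.speedFactor = speedFactor; }

    /** Advances the emulation by one step of simulated time. */
    void advance(double stepSeconds) throws InterruptedException {
        simTimeSeconds += stepSeconds;
        long wallClockMillis = (long) (stepSeconds * 1000.0 / speedFactor);
        Thread.sleep(wallClockMillis);   // with speedFactor == 1, simulation tracks reality
    }

    double now() { return simTimeSeconds; }
}
```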

However, there is an inherent problem with the reproducibility of results when the manufacturing control is decentralized. Bongaerts et al. (1999) discuss this phenomenon, in which tiny differences in the timing of events and control decisions have a macroscopic impact on the behavior of the overall system. In other words, it is an illusion to expect distributed control systems, attached to a manufacturing system or its emulation, to repeat their behavior exactly when presented with virtually identical situations. The right questions to investigate are: (1) how to develop a test/benchmark suite that will reveal the most relevant possible behaviors with sufficient probability, and (2) how to design a control system such that this matter has negligible negative impact.

Finally, research continues to address this matter. With the enhanced possibilities of SILK, the team is looking into better mechanisms to communicate time and timing between the control and the emulation. For instance, the control system will be able to insert events into the simulation (e.g. a time-out for submitting bids in a contract net protocol). The investigations and developments are ongoing. The final aim is to leave real-time emulation and implement as-fast-as-possible execution while accounting for all important interactions between emulation and control.

7. Conclusions and future actions

As can be drawn from the previous section, further research efforts need to be undertaken in order to provide a fully functional and effective web-based benchmarking service. Indeed, the authors are quite confident that, through the common efforts of the researchers and practitioners joining the SIG on Benchmarking and Performance Measures, it will shortly be possible to count on a first release of the whole system. At the current date, a prototype of the test-bench assistant based on a fully graphical drag-and-drop user interface is available. The software tool will support the user in inputting all the technological, structural and production data of a production system according to the specification of the benchmark framework (described in Section 4). In addition, the graphical environment will automatically render the semantic relations between the physical and logical instances of the framework.

In the mid-term, the expected results are: (a) simple emulations and simple controllers on the web service; (b) learning how to use the web service and establishing working practices; and (c) the definition of performance criteria. In the long term, the benchmarking service aims to be a catalyst for interaction amongst developers. The goal is to have a multi-criteria comparison amongst systems, which is required for an in-depth discussion of the respective merits of scheduling and control systems. In addition, the SIG activity will establish the basis for developing such a service into a clearinghouse that provides mutual anonymity between factory operators and scheduling/control system suppliers until the two parties mutually agree to cooperate more closely, when the benchmark results indicate that this is worthwhile. Moreover, the benchmarking service could be used for learning purposes, since it would allow students to have access to an ample variety of simulation cases to which to apply and evaluate the characteristics and performance of the control logic studied or developed, and the sharing of distributed teaching resources through the provision of web interactive courses.

Acknowledgments This paper is the result of a joint work of the authors. In particular, Sergio Cavalieri has written Sections 1, 2, 3, 4, 4.1, 7, Marco Macchi has contributed to Sections 4.2, 4.3, 5 and Paul Valckenaers has produced Section 6.

Notes

1. The IMS-NoE (IMS Network of Excellence) is a European Thematic Network funded by the IST Programme. The IMS-NoE is constituted by 71 members from the main EU and EU-associated countries. The Network will last three years and its activities started on June 1, 2002. Within the Network, SIG 4 on Benchmarking and Performance Measures will count on the support of more than 20 industrial and research partners.

References

Bartolotta, A., McLean, C., Tina Lee, Y. and Jones, A. (1998) Production systems engineering: requirement analysis for discrete-event simulation. NISTIR 6154, National Institute of Standards and Technology, US Department of Commerce.

Bongaerts, L., Indrayadi, Y., Van Brussel, H. and Valckenaers, P. (1999) Predictability of hierarchical, heterarchical and holonic control. Proceedings of the Second International Workshop on Intelligent Manufacturing Systems, Leuven.

Brennan, R. W. (2000) Performance comparison and analysis of reactive and planning-based control architectures for manufacturing. Robotics and Computer Integrated Manufacturing, 16, 191-200.

Brennan, R. W. and Rogers, P. (1997) A simulation testbed for comparing the performance of alternative control architectures. Proceedings of the 1997 Winter Simulation Conference, 880-887.

Bussmann, S. and McFarlane, D. C. (1999) Rationales for holonic manufacturing control. Proceedings of the Second International Workshop on Intelligent Manufacturing Systems, Leuven, Belgium, 177-184.

Cavalieri, S., Bongaerts, L., Macchi, M., Taisch, M. and Wyns, J. (1999) A benchmark framework for manufacturing control, in Proceedings of the Second International Workshop on Intelligent Manufacturing Systems 1999, Van Brussel, H. and Valckenaers, P. (eds), Leuven, September 22-24, 225-236.

Cavalieri, S., Taisch, M., Garetti, M. and Macchi, M. (2000) An experimental benchmarking of two multi-agent systems for production scheduling and control. Computers in Industry (Elsevier Science), 43, 139-152.

Dilts, D. M., Boyd, N. P. and Whorms, H. H. (1991) The evolution of control architectures for automated manufacturing systems. Journal of Manufacturing Systems, 10(1), 79-93.

Drummond, M. (1995) Scheduling benchmarks and related resources. URL: http://ic-www.arc.nasa.gov/ic/projects/xfr/papers/benchmark-article.html

Hanks, S., Pollack, M. and Cohen, P. (1993) Benchmarks, test beds, controlled experimentation, and the design of agent architectures. AI Magazine, 14, 17-42.

IMS-NoE (2001) Network of Excellence on Intelligent Manufacturing Systems.

IMS-WG (Esprit 21955) Working Group on Intelligent Manufacturing Systems.

Jacobson, I., Christerson, M., Jonsson, P. and Övergaard, G. (1992) Object-Oriented Software Engineering, Addison-Wesley.

Shen, W. and Norrie, D. H. (1999) Agent based systems for intelligent manufacturing: a state of the art survey. Knowledge and Information Systems, 1, 129-156.

Van Dyke Parunak, H. (1993) MASCOT: A virtual factory for research and development in manufacturing scheduling and control. ITI Tech Memo 93-02.

Zeigler, B. P., Lou, C. J. and Kim, T. G. (1990) Model base management for multifaceted systems. IEEE, 25-31.