Embedding Optimization with Discrete Event Simulation

Anna Rotondo, Paul Young, John Geraghty
School of Mechanical and Manufacturing Engineering, Dublin City University, Dublin, Ireland
email: [email protected], [email protected], [email protected]

Georgios Dagkakis, Cathal Heavey
University of Limerick, Limerick, Ireland
email: [email protected], [email protected]

Although simulation is not an optimization technique in itself, there is growing interest in its use for scenario analysis and optimization. This paper proposes a dual architecture in which the simulation uses optimization within its run to decide how it should progress in simulated time, while an external algorithm also uses the model to evaluate different designs. We test this architecture using an open-source simulation engine called ManPy. A case study of a manufacturing line is presented, in which a linear programming method is used inside the simulation run to decide how operators should be allocated to machines, and an external ant colony optimization algorithm uses the simulation to optimize the weights given to the different linear programming objectives.

Keywords: Discrete event simulation, Optimisation, open-source simulation library, ManPy
1 Introduction
Discrete Event Simulation (DES) is arguably one of the most widely accepted and used Operations Research (OR) methodologies (Shannon 1998; Karabuk & Grant 2007; Jahangirian et al. 2010). Due to its ability to model a wide variety of systems, it is often preferred to mathematical methods (Seila 1995; Tako & Robinson 2012). However, DES is traditionally computationally expensive (Fujimoto 1990; Can et al. 2008), which deters its use within optimization methodologies. This paper proposes an architecture for incorporating an Open Source (OS) simulation engine into optimization methodologies. The proposed architecture is hybrid: simulation is used to evaluate the different scenarios that an external optimization algorithm produces, while optimization algorithms are also used inside the simulation model to address decisions that arise during a run. The DES engine used is called ManPy (https://github.com/nexedi/dream) and is part of a wider DES platform called “simulation based application Decision support in Real-time for Efficient Agile Manufacturing” (DREAM). The DREAM project (http://dream-simulation.eu) started in October 2012 and its objective is to address the multi-faceted problems that deter manufacturing enterprises from more widely adopting simulation-based decision support applications. The rest of the paper is organized as follows: in the following section we review the literature on the combination of optimization and simulation. Section 3 describes the dual optimization architecture and in Section 4 we elaborate on how it was applied to a case study. We conclude by summing up our work and outlining future research.
2 Literature Review
The main strength of DES is its ability to be adapted to a wide variety of systems where mathematical methods fail to provide a solution (Banks 2000). As Law and Kelton (2000) state, modelers are often quickly led to use simulation due to the complexity of the studied systems and of the models needed to represent them validly. For this reason DES has become one of the most popular OR techniques (Jahangirian et al. 2010). On the other hand, its traditional use has been in the evaluation of systems, whereas researchers have been reluctant to use it in optimization. DES is a traditionally complex technique (Kuljis & Paul 2001) and users of simulation tools are rarely also experts in optimization algorithms (Lacksonen 2001). Another barrier is the computational overhead of DES, which makes it expensive to incorporate into optimization. Moreover, the fact that stochastic DES produces estimates rather than definitive results makes it harder to use with optimization algorithms that try to find improving directions in a solution space (Banks et al. 2000). Nonetheless, in recent decades there has been increasing interest in integrating simulation with optimization. It is indicative that most Commercial-Off-The-Shelf (COTS) software packages provide modules for optimization (April & Glover 2003). Lacksonen (2001) compares four simulation optimization algorithms
in 25 pilot problems. Although he found general differences between the algorithms, he claims that the most robust methodology may depend on the size and the application domain of the problem. COTS simulation packages are typically linked to a small number of optimizers, which may limit their use. The academic literature has also actively turned its attention towards the combination of simulation with optimization. Many researchers have used DES with popular optimization methodologies such as Genetic Algorithms (GA) (Hegazy 1999; Azzaro-Pantel et al. 1998) or Ant Colony Optimization (ACO) (Brailsford et al. 2007). To overcome the computational burden, researchers also develop efficient techniques by creating metamodels of systems, which approximate the system and reduce the computational requirements (Can & Heavey 2012). The most typical use of simulation within optimization involves simulation as the evaluator of different designs. Azzaro-Pantel et al. (1998) describe this as a “two-stage methodology”, where first a simulation model is developed to describe the system and then an algorithm uses the model to evaluate the different scenarios it produces. Figueira and Almada-Lobo (2014) taxonomize several additional categories of interaction between simulation and optimization. The purpose of this research is to propose a hybrid integration of simulation with optimization, where a DES model can interface with an optimizer within its run, but an external algorithm also uses the model to evaluate different designs. This architecture combines the use of simulation as both a means to evaluate alternative scenarios in a metaheuristic optimization context and a means of anticipating the system status in order to enable optimization in an iterative optimization-based simulation context (Figueira & Almada-Lobo 2014). We use an OS simulation library called ManPy, which is written in Python and makes use of the SimPy software (http://simpy.readthedocs.org/) to implement the process-based simulation world view. With an OS tool, the interface with optimization is not limited by the specific attributes of a COTS package. Moreover, the tool is written in the Python programming language, which provides a natural environment for developing scripts for computational science problems (Langtangen 2004).
3 Optimisation Architecture
The dual optimization architecture illustrated in this paper exploits ManPy’s capability to be easily embedded into a comprehensive platform and to interface with different modules. The core element of the optimisation architecture is the hybridization of scenario-based optimization and optimisation within a scenario. As a result, the simulation module is used both to evaluate alternative solutions and to generate the information required for optimal decision making at multiple decision points during simulated time. A schematic representation of this hybrid optimization architecture is given in Figure 1.
Figure 1. Hybrid optimisation architecture

The scenario-based optimization module is external to the simulation model and is responsible for generating potential scenarios and selecting the best solution among those explored. Scenario-based optimization approaches necessarily require prediction models or tools that can anticipate the effects that system settings have on system performance; for this reason, simulation is well suited to scenario-based optimization, as it captures the complex and dynamic relationships between input and output variables. In its most straightforward implementation, this outer optimisation module can consist of a full enumeration approach (i.e. all possible scenarios are investigated
and the best solution is selected) or of design of experiments approaches, whereby scenarios are developed based on the experimental plan most appropriate for the purposes of the analysis. Alternatively, for scenario-based optimization, evolutionary optimisation algorithms founded on population-based search logic can be implemented by introducing a feedback loop that uses the simulation results intelligently to guide the search through the solution space. Although scenario-based optimization provides initial settings that uniquely define the scenarios to be investigated, these settings may not be sufficient to solve decision problems encountered during a simulation run. In this case, the dual optimization architecture illustrated in Figure 1 provides a solution, in that it combines the functionality of the outer optimization module for setting the scenarios to be investigated with the capability of internal optimization algorithms to solve decision problems that arise during a simulation run. These problems are generally specific to the system being simulated and require problem-specific optimisation routines that use information on the system’s status to generate solutions that are optimal in relation to the contingent conditions of the system. In order to solve the decision problem and proceed, the simulation run waits for the solution suggested by the optimization routine and applies it in the model. In other words, the optimisation routine is invoked by the simulation model every time a decision has to be made on specific issues and generates solutions based on both the optimisation settings provided by the scenario-based optimisation approach and the information on the current system status that is available through simulation-updated system variables. A wide range of optimisation approaches, including mathematical programming and heuristic rules, can be used as optimisation routines; as for standard optimization problems, the choice of the optimization algorithm and the solution approach depends on the nature, complexity and size of the problem to be investigated. In the following section, a case study that exemplifies the implementation of the architecture and its deployment for solving a practical problem is presented. As the focus of this paper is the optimization architecture itself and not the particular optimization approaches used, the use case is presented to show that such an architecture can be effectively used to solve practical optimization problems and that ManPy’s flexibility allows for its implementation in an open-source simulation platform.
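To make the interaction concrete, the control flow of the dual architecture can be summarized with a minimal Python sketch; all names (run_scenario, inner_decision_routine, outer_optimizer) and the toy state and scoring logic are hypothetical illustrations of the pattern, not the ManPy API or the case-study code.

```python
# Minimal sketch of the dual (hybrid) optimization architecture.
# All names and the toy state/scoring logic are illustrative placeholders.
import random

def inner_decision_routine(system_state, settings):
    """Problem-specific optimization invoked during the simulation run;
    picks the machine to serve next based on the current system status."""
    return max(system_state["machines"],
               key=lambda m: settings["w_wip"] * m["wip"] + settings["w_idle"] * m["idle_time"])

def run_scenario(settings, horizon=100, seed=0):
    """Simulate one scenario; the inner routine is called at each decision point."""
    rng = random.Random(seed)
    throughput = 0
    for _ in range(horizon):
        system_state = {"machines": [{"wip": rng.randint(0, 10),
                                      "idle_time": rng.randint(0, 60)} for _ in range(3)]}
        decision = inner_decision_routine(system_state, settings)  # optimization within the run
        throughput += min(decision["wip"], 1)                      # apply the decision and advance
    return throughput

def outer_optimizer(candidate_settings):
    """Scenario-based (outer) optimization: each candidate is evaluated by simulation."""
    results = {i: run_scenario(s) for i, s in enumerate(candidate_settings)}
    best = max(results, key=results.get)
    return candidate_settings[best], results[best]

if __name__ == "__main__":
    candidates = [{"w_wip": w, "w_idle": 1 - w} for w in (0.2, 0.5, 0.8)]
    print(outer_optimizer(candidates))
```

The outer loop only sees scenario settings and a resulting performance figure, while every decision taken inside a run is delegated to the inner routine; this separation is what allows either module to be swapped for a different optimization approach.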
4 Case Study
The case study presented here is inspired by a problem experienced at various manual assembly lines operating in a medical device fabrication company. These lines can be classified as dual-resource constrained systems, as their productivity is constrained by the availability of both workstations and operators (Xu et al. 2011). Indeed, due to staffing decisions at a strategic level and/or operator absenteeism, the number of operators assigned to a line during a production shift may be less than the number of workstations to be operated; moreover, based on capacity planning decisions, workstations can be shut down during a shift. The staffing policy at the company involved in this study is based on a “3 tasks to 3 operators” concept, whereby an operator is trained on up to 3 tasks and training for a task is imparted to up to 3 operators; the line is balanced so that one task corresponds to one workstation. At the beginning of each shift, the production supervisor assigns operators to workstations based on the number of operators and machines available and on the Work In Progress (WIP) across the line. A rule of thumb generally adopted at the company is that, in case of operator shortage, the supervisor assigns operators to the first workstations in the line so that WIP is built up at the intermediate workstations; towards the end of the shift or during the following shift, operators are moved to the last workstations so as to rebalance the line’s WIP. The implementation of this rule and its effect on production also depend on the supervisor’s experience and promptness in making reallocation decisions based on the line status. The objective here is to develop a decision support tool able to generate optimal operator assignment schedules that maximize throughput for a shift. Figure 2 shows the schematic layout of a pilot assembly line as modelled in ManPy. The parallel machines (or sub-lines) that operate in the system are considered equivalent from both a performance and a training requirement perspective, so that an operator trained on a particular task can perform that task at any available machine of the corresponding workstation; a workstation is intended here as the set of machines at which the same operational task is performed.
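For illustration only, the “3 tasks to 3 operators” staffing structure and the grouping of machines into workstations can be captured with a simple skills mapping of the kind the assignment logic consumes; the identifiers below are hypothetical and do not come from the case-study data.

```python
# Illustrative "3 tasks to 3 operators" skills structure (hypothetical data).
# Each operator is trained on up to 3 tasks; each task is known by up to 3 operators.
skills = {
    "OP1": ["TaskA", "TaskB", "TaskC"],
    "OP2": ["TaskA", "TaskD"],
    "OP3": ["TaskB", "TaskD", "TaskE"],
}

# Workstations group the parallel machines (sub-lines) that perform the same task.
workstations = {
    "TaskA": ["M1a", "M1b"],
    "TaskB": ["M2"],
    "TaskC": ["M3"],
    "TaskD": ["M4a", "M4b"],
    "TaskE": ["M5"],
}

# Feasible assignments: an operator may only be placed at a machine
# belonging to a workstation whose task he/she is trained on.
feasible_pairs = [(op, m)
                  for op, tasks in skills.items()
                  for task in tasks
                  for m in workstations.get(task, [])]
```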
Figure 2. Layout of a pilot assembly line
4.1 Optimisation

The simulation model of the assembly line can be used to efficiently solve the operator assignment problem and assist the line supervisor in making re-assignment decisions that minimize the impact of operator or machine unavailability and maximize throughput. The optimization logic suggested here is based on the dual architecture presented in the previous section. The first optimization aspect concerns the operators’ assignment decision at a specific point in time; this problem has been formulated as a multi-objective binary linear programming (LP) problem where the binary optimisation variables consist of the assignment variables $x_{ij}$ (i.e. operator $i$ is assigned to machine $j$ if $x_{ij} = 1$) and different objectives are taken into account. Following a multi-attribute utility approach (Morrice et al. 1998), the objectives are first normalised and then grouped into a combined metric calculated as a weighted sum, so that the objective function can be formulated as:

$$\max f(x_{ij}) = \sum_{k} w_k \cdot obj_k(x_{ij}) \qquad (1)$$

where $w_k$ represents the weight of the $k$-th objective ($obj_k(x_{ij})$) and each objective is a function of the assignment variables. The objectives considered in this formulation include:
• Obj1 - Maximize the number of assigned operators: this avoids leaving operators idle;
• Obj2 - Prioritise the assignment of operators to machines with the highest WIP: this is inspired by both the current assignment practice adopted at the company and heuristic approaches suggested in the dual-resource constrained system literature (Xu et al. 2011);
• Obj3 - Minimise unnecessary operator transfers: this avoids re-assigning operators to different machines when there is no variation of the operator/line availability configuration;
• Obj4 - Support a uniform distribution of operators across workstations: if possible, operators should be assigned so as to guarantee that at least one machine per workstation is operational;
• Obj5 - Prioritise assignment to machines that have been idle longest: this avoids situations whereby an operator is repeatedly assigned to the same machine when he/she could operate machines that have been idle for a long time.

The relevant constraints impose that an operator can be assigned to at most one machine and, vice versa, that a machine can be operated by at most one operator. Training constraints are implicitly applied during the definition of the decision variables, as an assignment variable $x_{ij}$ is initialized only if operator $i$ is trained on the task performed at machine $j$. The information required to solve the assignment problem in this LP formulation includes the list of machines and operators available at the current time, the WIP level at all machines, the operators’ skills and, if available, the previous assignment plan. Although the LP method can work independently of the simulation model as long as the relevant input information is provided, its integration within the simulation module enables the generation of good assignment schedules for an entire production shift, since the simulation model can predict operator and machine availability (if random unavailability is modelled) and the WIP level at the time when assignment decisions have to be made. The LP problem has been coded using PuLP, a Python library that offers pre-defined methods to model an LP problem and generates LP or MPS files that are subsequently solved by linear programming solvers such as GLPK, COIN CLP/CBC and CPLEX. The ManPy model was formulated to use the LP code as a black box.
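A minimal PuLP sketch of this type of formulation is given below; the data (operators, machines, skills, WIP) and the simplified normalised terms for Obj1 and Obj2 are illustrative placeholders, not the actual model used in the case study.

```python
# Illustrative PuLP sketch of the weighted multi-objective assignment LP.
# Data (operators, machines, trained pairs, wip, weights) are placeholders.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

operators = ["OP1", "OP2", "OP3"]
machines = ["M1", "M2", "M3", "M4"]
trained = {("OP1", "M1"), ("OP1", "M2"), ("OP2", "M2"), ("OP2", "M3"),
           ("OP3", "M3"), ("OP3", "M4")}                      # skills matrix
wip = {"M1": 5, "M2": 12, "M3": 3, "M4": 8}                   # current WIP per machine
w = {"obj1": 1.0, "obj2": 2.0}                                # objective weights (from the outer search)

prob = LpProblem("operator_assignment", LpMaximize)

# Binary assignment variables, created only for feasible (trained) operator/machine pairs.
x = {(i, j): LpVariable(f"x_{i}_{j}", cat=LpBinary) for (i, j) in trained}

# Simplified normalised objective terms: number of assignments and WIP coverage.
obj1 = lpSum(x.values()) / min(len(operators), len(machines))
obj2 = lpSum(wip[j] * x[i, j] for (i, j) in x) / sum(wip.values())
prob += w["obj1"] * obj1 + w["obj2"] * obj2                   # weighted sum, cf. Eq. (1)

# Each operator on at most one machine; each machine run by at most one operator.
for i in operators:
    prob += lpSum(x[i, j] for (ii, j) in x if ii == i) <= 1
for j in machines:
    prob += lpSum(x[i, j] for (i, jj) in x if jj == j) <= 1

prob.solve()
# The allocation returned to the simulation as {"Operator_Id": "Machine_Id"}.
allocation = {i: j for (i, j) in x if value(x[i, j]) == 1}
print(allocation)
```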
A generic ManPy object called EventGenerator is activated at given intervals and invokes a method. In this specific model, an instance of EventGenerator was added in order to invoke a custom ManPy object built for the needs of the case, called SkilledOperatorRouter. This checks the state of the system and, if re-allocation of operators is required, invokes the LP method, sending all the information it needs about the state (e.g. the lists of operators and machines on shift). The LP method returns a Python dictionary that describes the decided allocation, of the form {“Operator_Id”: “Machine_Id”}. The SkilledOperatorRouter then imposes the allocation on the model. This may not happen instantaneously, since some operators may need to finish the operation currently in process before they are moved to the next station.

The second optimization aspect of the assignment problem concerns the definition of optimal weights for the LP objective function. The quality of the LP solution obviously depends on the weights assigned to the various objectives; this is also because the objectives considered are conflicting in terms of their impact on productivity. For instance, Obj3 and Obj5 are introduced to generate easily implementable assignment schedules, but their impact on productivity is generally unpredictable. As the assignment plan should strike the right balance between productivity and ease of implementation, and considering the experimental evidence that the performance of the objective weights is closely related to the system’s status, an approach that optimizes the weights for given system conditions has been developed. Optimisation is performed by means of an algorithm inspired by the MAX-MIN Ant System optimization logic (Stützle & Hoos 2000). During each ant cycle, ants select a weight for each objective so that the LP problem is fully defined. The ants are then simulated and the corresponding results are ranked based on productivity performance; the shift throughput is used as the performance metric. The generation of the new population of ants entails a pheromone update mechanism based on an elitist logic whereby the best two ants from the current cycle are used to update the weight selection probabilities for each objective. The pheromone values are updated so that the probability of selecting the weights carried by the best ants is reinforced; an evaporation rate regulates the relevance of previous weight probabilities during the pheromone update. A best-so-far elitist logic is used so that the ants that have performed best since the first cycle are kept in the solution pool. Upon verification of the termination criteria (i.e. simulation of a pre-defined number of cycles or solution convergence), the procedure concludes by selecting the best solution, that is, the combination of LP weights that generates assignment schedules with the highest throughput. This ant colony optimization (ACO) approach is used as the outer optimization module of the hybrid optimisation architecture in Figure 1, as it generates scenarios that define the settings of the optimization parameters and selects among them. The ACO approach was implemented in Python and set to call the ManPy model each time it needs to evaluate a design. In this case, ACO sends to ManPy the same simulation model, changing only the weights that the SkilledOperatorRouter will use when calling the LP method.
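The outer ACO loop can be sketched as follows; run_manpy_model is a hypothetical stand-in for executing the ManPy model with a given set of LP weights and returning the shift throughput, and the pheromone update is a simplified version of the elitist logic described above.

```python
# Simplified sketch of the ACO weight search (MAX-MIN / elitist flavour).
# run_manpy_model is a hypothetical placeholder for the actual ManPy call.
import random

candidate_weights = {k: [0.5, 1.0, 1.5] for k in ("obj1", "obj2", "obj3")}  # discretized ranges
pheromone = {k: [1.0] * len(v) for k, v in candidate_weights.items()}
evaporation, n_ants, n_cycles = 0.1, 5, 40
best_so_far = (None, float("-inf"))

def run_manpy_model(weights):                      # placeholder evaluation
    return sum(weights.values()) + random.random()

for cycle in range(n_cycles):
    ants = []
    for _ in range(n_ants):
        # Each ant picks one weight per objective, biased by the pheromone levels.
        weights = {k: random.choices(candidate_weights[k], weights=pheromone[k])[0]
                   for k in candidate_weights}
        throughput = run_manpy_model(weights)      # simulate the scenario
        ants.append((weights, throughput))
    ants.sort(key=lambda a: a[1], reverse=True)
    if ants[0][1] > best_so_far[1]:
        best_so_far = ants[0]                      # best-so-far elitism
    # Evaporate, then reinforce the weights carried by the two best ants of this cycle.
    for k in pheromone:
        pheromone[k] = [(1 - evaporation) * p for p in pheromone[k]]
        for weights, _ in ants[:2]:
            pheromone[k][candidate_weights[k].index(weights[k])] += 1.0

print("best weights:", best_so_far[0], "throughput:", best_so_far[1])
```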
4.2 Results

In order to validate the optimization architecture illustrated in the previous section, experiments have been run using a SimPy model of the pilot assembly line in Figure 2. Following a consolidated practice in DREAM, proof of concept of simulation and optimization methodologies is performed using COTS simulation packages or SimPy, and validated approaches are subsequently implemented in ManPy. In the experiment shown in this section, range values for each objective weight have been provided to the ACO algorithm in order to give a clear initial indication of the objectives that are considered more relevant to the generation of good assignment solutions from an end-user / domain expert perspective (Table 1). A sixth objective (“Fill sub-line”) has been added to accommodate a requirement that is specific to the assembly line modelled (i.e. either of the sub-lines in the first segment of the assembly line should be “filled” with operators before the other sub-line is considered). The input weight ranges provided to the ACO algorithm are reported in Table 1; these input values have been established based on domain experts’ considerations. Industrial engineers working at the real plant have suggested that the assignment of operators to machines with the highest WIP (Obj2) and full assignment to either sub-line (Obj6) are critical to guarantee production continuity; on the contrary, although not practical, the scenario of frequently moving operators to different machines (Obj3) is considered acceptable, hence a lower average weight has been set for Obj3. The weight increment used is 0.1.
Table 1. User-defined objectives' weights
Objective                                           Min weight value   Max weight value
Obj1: Max number of assigned operators              0.5                1.5
Obj2: Assignment to machines with higher WIP        1.5                2.5
Obj3: Min assignment variations                     0                  1
Obj4: Uniform assignment across stations            0.5                1.5
Obj5: Machines with furthest last assignment time   0.5                1.5
Obj6: Fill sub-line                                 1                  2
Production is organized in two daily shifts of 8 hours each and, for both shifts, 12 operators have been randomly selected; this implies that, after each assignment, at least three machines in the line are forced to stay idle due to the lack of available operators. Deterministic time-related parameters have been used for the experiment. Standard operator break times have been modelled (i.e. lunch and coffee breaks); machines have been considered failure-free. The ACO population size and number of generations have been set to five and forty, respectively. The system has been simulated for 3000 minutes, which corresponds to a production time of almost five shifts. The solution converged after 194 runs; the computational time was circa 20 minutes on a 2.6 GHz Intel Core i7 processor. The results obtained show significant dispersion in terms of the number of units produced, as this varies from 800 to 1840 depending on the objectives’ weights set by ACO (Table 2). Moreover, the lowest output value (800 units) is observed with relatively high frequency, which means that the risk of setting ineffective objectives’ weights is considerably high. A more detailed analysis of the optimization results has shown that the risk of a poor weight selection is even more significant, as the differences between optimal weights and ineffective weights can be minimal. For instance, the weight combinations [1.3, 2.1, 0.7, 1.5, 1.1, 1.5] and [1.3, 2.1, 0.1, 1.5, 1.5, 1.6] generate 800 and 1840 units, respectively; it is interesting to note that in both combinations the weights of the factors initially considered most relevant are practically the same, whereas small weight differences for both Obj3 and Obj5 cause dramatic changes in the operators’ assignment solution. Finally, it is worth noting that the discrete nature of the output results is mainly due to the fact that batch production is performed in the line (e.g. one batch consists of 80 units). Moreover, poor objectives’ weights can cause deadlock situations whereby operators are repeatedly assigned to the same machine, so that starvation or blockage can be frequently observed in the line; in this regard, it is interesting to note that, of the two combinations of weights reported above, the one with the lowest weight for Obj3 (i.e. minimization of operator transfers) generates the poorer production results.

Table 2. ACO results - Number of units produced
Number of units produced   800    1120   1520   1840
Relative Frequency         0.39   0.14   0.01   0.46
Although the results obtained demonstrate the importance of effectively selecting the weights of the LP objective function, the use of deterministic input variables can compromise the practical validity or credibility of the optimization results. The assumption of deterministic processing times and operator breaks in a manual manufacturing environment is generally not realistic and, for this reason, approaches capable of discerning good and robust solutions from unstable assignment solutions would be a desirable addition to the dual optimization architecture. In this regard, ten “optimal” weight combinations have been randomly selected and the robustness of the operators’ assignment solution to the stochasticity of the input parameters has been investigated. This has been achieved by using the operators’ assignment schedule associated with a weight combination as input information; in this context, the LP method is logically replaced by the input schedule. Figure 3 reports the results obtained: the blue rectangles are delimited by the minimum and maximum number of units produced across ten stochastic replications, the average number of units produced across the ten replications is reported in red, and the dashed cyan line corresponds to the number of units obtained in the deterministic run (1840). It is evident that when stochastic input variables are considered, the output results reflect this stochasticity and distinctions between good and robust solutions emerge. Among the ten weight combinations considered, only two (i.e. solutions 1 and 3) prove robust to stochastic variations of the input variables, as the line throughput is consistently equal to 1840 units. On the contrary, four solutions prove significantly unstable, as the maximum number of units produced in the stochastic runs is significantly lower than the nominal deterministic optimum. This suggests that the integration of approaches able to investigate the robustness of the optimal solutions (e.g. the Markowitz approach (Fu et al. 2005) or stochastic dominance tests (Kleijnen & Gaury 2003)) could
considerably improve the quality of the ACO search directions and, as a consequence, the validity of the optimization results.
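A minimal sketch of this robustness check is given below, assuming a hypothetical evaluate_schedule function that runs one stochastic replication of the model with a fixed assignment schedule in place of the LP method.

```python
# Sketch of the robustness check: a fixed assignment schedule (logically
# replacing the LP method) is simulated under stochastic inputs and the
# throughput is summarized across replications.
# evaluate_schedule is a hypothetical stand-in for the stochastic simulation run.
import random
import statistics

def evaluate_schedule(schedule, seed):
    rng = random.Random(seed)
    return 1840 - 80 * rng.choice([0, 0, 1, 2])   # placeholder throughput (units)

def robustness_summary(schedule, n_replications=10, deterministic_optimum=1840):
    outputs = [evaluate_schedule(schedule, seed) for seed in range(n_replications)]
    return {"min": min(outputs),
            "max": max(outputs),
            "mean": statistics.mean(outputs),
            "robust": min(outputs) == deterministic_optimum}

print(robustness_summary({"OP1": "M1", "OP2": "M2"}))
```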
Figure 3. Stochastic evaluation of optimal results
Conclusions

A dual simulation-based optimization architecture and its implementation in an open-source simulation environment have been illustrated in this paper. The optimization architecture combines the use of simulation as both a means to evaluate alternative scenarios in a metaheuristic optimization context and a means of anticipating the system status in order to enable optimization in an iterative optimization-based simulation context (Figueira & Almada-Lobo 2014). A case study concerning workforce allocation in a dual-resource constrained assembly line operating in a medical device fabrication company has been discussed in order to demonstrate the validity of the optimization architecture and highlight potential improvements. In this regard, it has been noted that the architecture could be further enhanced by exploiting the capability of simulation to conduct stochastic analyses and, hence, by incorporating approaches able to evaluate the robustness of optimal solutions. With respect to the use case illustrated, partitioning the simulated time into several time intervals would also improve the solution quality, as the optimization of the objectives’ weights could be performed at a greater level of detail (i.e. optimal weights would be found for each time interval). In the near future, the architecture will be embedded in a simulation platform, based on the open-source Python library ManPy, that avails of a COTS-like graphical user interface, and experiments will be carried out using cloud simulation.
Acknowledgements

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7-2012-NMP-ICT-FoF) under grant agreement n° 314364.
References

April, J., F. Glover. 2003. Practical introduction to simulation optimization. Proceedings of the 2003 Winter Simulation Conference, vol. 1, 71–78.
Azzaro-Pantel, C., L. Bernal-Haro, P. Baudet, S. Domenech, L. Pibouleau. 1998. A two-stage methodology for short-term batch plant scheduling: discrete-event simulation and genetic algorithm. Computers and Chemical Engineering 22(10) 1461–1481.
Banks, J. 2000. Introduction to simulation. Proceedings of the 2000 Winter Simulation Conference, vol. 1, 9–16.
Banks, J., J. S. Carson, B. L. Nelson, D. M. Nicol. 2000. Discrete-Event System Simulation. Prentice Hall, Englewood Cliffs, NJ, USA.
Brailsford, S. C., W. J. Gutjahr, M. S. Rauner, W. Zeppelzauer. 2007. Combined discrete-event simulation and ant colony optimisation approach for selecting optimal screening policies for diabetic retinopathy. Computational Management Science 4(1) 59–83.
Can, B., A. Beham, C. Heavey. 2008. A comparative study of genetic algorithm components in simulation-based optimisation. Proceedings of the 40th Winter Simulation Conference.
Can, B., C. Heavey. 2012. A comparison of genetic programming and artificial neural networks in metamodeling of discrete-event simulation models. Computers & Operations Research 39(2) 424–436.
Figueira, G., B. Almada-Lobo. 2014. Hybrid simulation-optimization methods: A taxonomy and discussion. Simulation Modelling Practice and Theory 46 118–134.
Fu, M. C., F. W. Glover, J. April. 2005. Simulation optimization: a review, new developments, and applications. Proceedings of the Winter Simulation Conference, vol. 1, 83–95.
Fujimoto, R. 1990. Parallel discrete event simulation. Communications of the ACM 33(10) 30–53.
Hegazy, T. 1999. Optimization of resource allocation and levelling using genetic algorithms. Journal of Construction Engineering and Management 125(3) 167–175.
Jahangirian, M., T. Eldabi, A. Naseer, L. K. Stergioulas, T. Young. 2010. Simulation in manufacturing and business: A review. European Journal of Operational Research 203(1) 1–13.
Karabuk, S., G. Grant. 2007. A common medium for programming operations-research models. IEEE Software 24(5) 39–47.
Kelton, W., A. Law. 2000. Simulation Modeling and Analysis. McGraw-Hill, New York.
Kleijnen, J. P. C., E. Gaury. 2003. Short-term robustness of production management systems: A case study. European Journal of Operational Research 148(2) 452–465.
Kuljis, J., R. J. Paul. 2001. An appraisal of web-based simulation: whither we wander? Simulation Practice and Theory 9(1-2) 37–54.
Lacksonen, T. 2001. Empirical comparison of search algorithms for discrete event simulation. Computers & Industrial Engineering 40(1) 133–148.
Langtangen, H. P. 2006. Python Scripting for Computational Science. Springer, Berlin, Heidelberg.
Morrice, D. J., J. Butler, P. W. Mullarkey. 1998. An approach to ranking and selection for multiple performance measures. Proceedings of the 30th Winter Simulation Conference, 719–725.
Seila, A. F. 1995. Introduction to simulation. Proceedings of the 27th Winter Simulation Conference, 7–15.
Shannon, R. E. 1998. Introduction to the art and science of simulation. Proceedings of the 30th Winter Simulation Conference, 7–14.
Stützle, T., H. H. Hoos. 2000. MAX-MIN Ant System. Future Generation Computer Systems 16(8) 889–914.
Tako, A. A., S. Robinson. 2012. The application of discrete event simulation and system dynamics in the logistics and supply chain context. Decision Support Systems 52(4) 802–815.
Xu, J., X. Xu, S. Q. Xie. 2011. Recent developments in Dual Resource Constrained (DRC) system research. European Journal of Operational Research 215(2) 309–318.