PMS 2008

Eleventh International Workshop on Project Management and Scheduling April 28-30, 2008 İstanbul, Turkey

PROCEEDINGS

Camera-ready abstracts provided by the authors

ISBN 978-9944-62-618-7

Addresses of the Editors:

Prof. Funda Şerifoğlu
Düzce University
Konuralp Yerleşkesi, Beçiyörükler
Düzce 81620 TURKEY
Tel: +90-380-542 11 01
Fax: +90-380-542 11 03
E-mail: [email protected]

Prof. Ümit Bilge
Boğaziçi University
Department of Industrial Engineering
Bebek, İstanbul 34342 TURKEY
Tel: +90-212-359 7071
Fax: +90-212-265 1800
E-mail: [email protected]

Published by:
Özbaş Basım ve Tanıtım Hizmetleri
Karakolhane Cad. Feritbey Sok. No. 46
Kadıköy – İstanbul
Tel: +90-216-330 1101
Fax: +90-216-345 9272

Preface

EURO, the Association of European Operational Research Societies within IFORS, the International Federation of Operational Research Societies, aims to promote operational research throughout Europe. An important instrument to this end is the EURO Working Groups (EWG), which are organizational frameworks for small groups of researchers interested in a specific operational research topic and which provide a forum for promoting research in the areas represented by EURO. The EURO Working Group on Project Management and Scheduling was established by Professors Luís Valadares Tavares and Jan Weglarz during the EURO VIII Conference in Lisbon in September 1986, where it was decided to organize a workshop every two years. The main objectives of these workshops are to gather the most promising theoretical and applied advances in project management and scheduling, and to assess both the state of the art of this field and its potential to support management systems.

The first workshop was held in Lisbon in July 1988, organized by L. Tavares, and attracted sixty participants from 16 countries. A special issue of EJOR on Project Management and Scheduling, including selected papers from this workshop, was published in November 1990. The following workshops were organized in Compiègne (1990), Como (1992), Leuven (1994), Poznań (1996), İstanbul (1998), Osnabrück (2000), Valencia (2002), Nancy (2004), and Poznań (2006). The 20th anniversary of the foundation of the working group was celebrated in Poznań, whereas the 20th anniversary of the first workshop will be celebrated during the 11th International Workshop on Project Management and Scheduling (PMS 2008) in İstanbul.

The proceedings at hand contain 72 papers by 165 authors from 23 countries. These valuable contributions were accepted by the reviewers (2-3 reviewers per paper), who are members of the program committee and distinguished researchers of the associated fields.

There will be two plenary talks in this workshop: Prof. Peter Brucker (University of Osnabrück, Germany) will talk on "Machine Scheduling: Past, Current, and Possible Future Challenges" and Prof. Erik Demeulemeester (Katholieke Universiteit Leuven, Belgium) will talk on "Robust Project Scheduling". To enrich the program, we have also included two semi-plenary talks: Prof. Vincent T'Kindt (University of Tours, France) will talk on "Multicriteria Machine Scheduling: Theory, Models and Algorithms" and Prof. Selim Aktürk (Bilkent University, Turkey) will talk on "Bi-Criteria Scheduling with Controllable Processing Times".

We hope that the program will offer you an opportunity to exchange ideas and inspire each other to stay active in the dynamic field of project management and scheduling. The workshop sessions will be held in the conference rooms of the Engineering School of Boğaziçi University. The tradition of a get-together welcome reception and a conference dinner will be continued in this workshop, too. We hope that you will also have time to discover some of the rich variety of historical, cultural and natural beauties of İstanbul.

We sincerely thank all who have helped to make PMS 2008 as fruitful an event as all of its predecessors have been, in particular the program committee and the non-member reviewers, the organizing committee and the administrative staff. We also wish to thank EURO for providing support to organize this event. We are both very honored and happy to welcome you to İstanbul and wish you an enjoyable stay.

Program Co-Chairs:
Prof. Funda Sivrikaya Şerifoğlu, Düzce University

Prof. Ümit Bilge Boğaziçi University

iii

Organization Committee
Funda Sivrikaya Şerifoğlu (Co-Chair / Düzce University)
Ümit Bilge (Co-Chair / Boğaziçi University)
Gülay Barbarosoğlu (Boğaziçi University)
Taner Bilgiç (Boğaziçi University)
Mahmut Ekşioğlu (Boğaziçi University)
Resul Kara (Düzce University)
Kamer Sözer (Boğaziçi University)
Gündüz Ulusoy (Sabancı University)
Ali Tamer Ünal (Boğaziçi University)

Program Committee
Alessandro Agnetis (Università degli Studi di Siena, Italy)
Ali Allahverdi (Kuwait University, Kuwait)
Lucio Bianco (University of Roma "Tor Vergata", Italy)
Jacek Blazewicz (Poznan University of Technology, Poland)
Peter Brucker (University of Osnabrueck, Germany)
Jacques Carlier (Université de Technologie de Compiègne, France)
Erik Demeulemeester (Katholieke Universiteit Leuven, Belgium)
Andreas Drexl (Christian-Albrechts-Universität zu Kiel, Germany)
Salah Elmaghraby (North Carolina State University, USA)
Selçuk Erengüç (University of Florida, USA)
Willy Herroelen (Katholieke Universiteit Leuven, Belgium)
Mikhail Kovalyov (Belarusian State University, Belarus)
Wieslaw Kubiak (Memorial University of Newfoundland, Canada)
Chung-Yee Lee (Hong Kong University of Science and Technology, China)
Linet Özdamar (Yeditepe University, Turkey)
James H. Patterson (Indiana University, USA)
Erwin Pesch (University of Siegen, Germany)
Marie-Claude Portmann (École des Mines de Nancy (INPL), France)
Chris Potts (University of Southampton, UK)
Avraham Shtub (Israel Institute of Technology, Israel)
Funda Sivrikaya Şerifoğlu (Düzce University, Turkey)
Roman Slowinski (Poznan University of Technology, Poland)
Gündüz Ulusoy (Sabancı University, Turkey)
Luis Valadares Tavares (Technical University of Lisbon, Portugal)
Vicente Valls (Universidad de Valencia, Spain)
Jan Weglarz (Poznan University of Technology, Poland)
Jürgen Zimmermann (Clausthal University of Technology, Germany)

iv

Table of Contents

Preface
Funda Sivrikaya Şerifoğlu, Ümit Bilge .......... iii
Plenary Talk: Machine Scheduling: Past, Current, and Possible Future Challenges
Peter Brucker .......... 1
Plenary Talk: Robust Project Scheduling
Erik Demeulemeester .......... 2
Semi-Plenary Talk: Multicriteria Machine Scheduling: Theory, Models and Algorithms
Vincent T'Kindt .......... 3
Semi-Plenary Talk: Bi-Criteria Scheduling with Controllable Processing Times
M. Selim Aktürk .......... 4
Resource Constrained Project Scheduling Problem: A Neurogenetic Approach
Anurag Agarwal, Selçuk Çolak, Selçuk Erengüç .......... 5
The Two-Stage Assembly Flowshop Scheduling Problem with Two Criteria
Ali Allahverdi, Fawaz S. Al-Anzi .......... 9
The Resource-Constrained Activity Insertion Problem with Minimum and Maximum Time Lags
Christian Artigues, Cyril Briand .......... 14
A Double Genetic Algorithm for the MRPCP/Max
Francisco Ballestín, Agustín Barrios, Vicente Valls .......... 19
The Resource-Constrained Project Scheduling Problem as a Multi-Objective Problem: The Regular Case
Francisco Ballestín, Rosa Blanco .......... 23
Exact Method for Hybrid Flowshop with Batching Machines and Tasks Compatibilities
Adrien Bellanger, Ammar Oulamara .......... 27
Setting Gates for Activities in the Stochastic Project Scheduling Problem Through the Cross Entropy Methodology
Illana Bendavid, Boaz Golany .......... 31
A New Approach to the Project Scheduling Problem with Generalized Precedence Relations
Lucio Bianco, Massimiliano Caramia .......... 35
Solving a Permutation Flow Shop Problem with Blocking and Transportation Delays
Jacques Carlier, Mohamed Haouari, Mohamed Kharbeche, Aziz Moukrim .......... 39
Negotiation Models for Logistic Platform Planning and Scheduling
Susana Carrera, Khalida Chami, Renato Guimaraes, Marie Claude Portmann, Wahiba Ramdane Cherif .......... 43

v

Project Scheduling with Stochastic Activity Durations, Uncertain Activity Outcomes and Maximum-NPV Objective
Stefan Creemers, Marc Lambrecht, Roel Leus .......... 48
Improving the Preemptive Bound for the Single Machine Min-Max Lateness Problem Subject to Release Times
F. Della Croce, Vincent T'Kindt .......... 52
A Conflict Repairing Harmony Search Metaheuristic and its Application for Bi-Objective Resource Constrained Project Scheduling Problems
György Csébfalvi, Oren Eliezer, Blanka Lang, Roni Levi .......... 56
A Harmony Search Metaheuristic for the Resource Constrained Project Scheduling Problem and its Multi-Mode Version
György Csébfalvi, Anikó Csébfalvi, Etelka Szendrői .......... 60
A Branch-and-Price Algorithm to Minimize the Maximum Lateness on a Batch Processing Machine
Denis Daste, Christelle Guéret, Chams Lahlou .......... 64
Managing Projects in a Matrix Organization: Simulation-Based Training
Lior Davidovitch, Avi Parush, Avraham Shtub .......... 68
RESCON: A Classroom MFC Application for the RCPSP
Filip Deblaere, Erik Demeulemeester, Willy Herroelen .......... 72
New Approximate Solutions for Customer Order Scheduling
José M. Framiñán .......... 75
Tree and Local Search for Parallel Machine Scheduling Problems with Precedence Constraints and Setup Times
Bernat Gacias, Christian Artigues, Pierre Lopez .......... 79
Scheduling for Dynamic Mixed Model Assembly Line: A Realistic Approach
José P. García-Sabater, Carlos Andrés, Ramón Companys .......... 85
An Application Oriented Approach for Scheduling a Production Line with Two Dimension Setups
José P. García-Sabater, Carlos Andrés, Cristobal Miralles, Julio Juan García-Sabater .......... 90
Tree-based Methods for Resource Investment and Resource Levelling Problems
Thorsten Gather, Jürgen Zimmermann, Jan-Hendrik Bartels .......... 94
The Sequential Ordering Problem: A New Approach
David Gómez-Cabrero, Francisco Ballestín, Vicente Valls .......... 99
Project Completion Time in a Multi-Critical Paths Environment
Amnon Gonen .......... 103
Scheduling Assembly Lines with Flexible Operations to Minimize the Makespan
Hakan Gültekin, Yves Crama .......... 108
Tighter Lower Bounds via Dual Feasible Functions
Mohamed Haouari, Lotfi Hidri, Mahdi Jemmali .......... 112

vi

A New Branch-and-Bound Method for the Multi-Skill Project Scheduling Problem: Application to Total Productive Maintenance Problem
Taher Hassani, Cedric Pessan, Emmanuel Néron .......... 115
Robustness Measures and a Scheduling Algorithm for Discrete Time/Cost Tradeoff Problem
Öncü Hazır, Erdal Erel, Mohamed Haouari .......... 119
Robust Optimization Models for the Discrete Time/Cost Tradeoff Problem
Öncü Hazır, Erdal Erel, Yavuz Günalay .......... 123
Qualification of Multi-Skilled Human Resources Performing Project Work
Christian Heimerl, Rainer Kolisch .......... 127
Timing Problem for Scheduling an Airborne Radar
Yann Hendel, Ruslan Sadykov .......... 132
Minimizing Mean Flow Time for the Two Machines Semi-Malleable Jobs Scheduling Problem
Yann Hendel, Wieslaw Kubiak .......... 136
Selection and Planning of Downsizeable Projects with Variable Workload
Jade Herbots, Willy Herroelen, Roel Leus .......... 140
Enhanced Energetic Reasoning for Parallel Machine Scheduling
Lotfi Hidri, Anis Gharbi, Mohamed Haouari, Chefi Triki .......... 144
Discrepancy and Backjumping Heuristics for Flexible Job Shop Scheduling
Abir Ben Hmida, Mohamed Haouari, Marie-José Huguet, Pierre Lopez .......... 148
Polynomial Cases and PTAS for Just-in-Time Scheduling on Parallel Machines around a Common Due Date
Nguyen Huynh Tuong, Ameur Soukhal .......... 152
New Generation A-Team for Solving the Resource Constrained Project Scheduling
Piotr Jędrzejowicz, Ewa Ratajczak-Ropel .......... 156
Apportionment Methods in Discrete Resource Allocation Problems
Joanna Józefowska, Łukasz Józefowski, Wieslaw Kubiak .......... 160
Fast Neighborhood Search for the Single Machine Earliness-Tardiness Scheduling Problem
Safia Kedad-Sidhoum, Francis Sourd .......... 164
Management of New Product Development: The Impact of Competition and Market Characteristics
Janne Kettunen, Yael Grushka-Cockayne, Bert De Reyck, Zeger Degraeve, Ahti Salo .......... 169
An Exact Algorithm for The Two-Machine Job Shop Problem with no Intermediate Storage
Soulef Khalfallah, Mohamed Haouari, Pierre Dejax .......... 173
On the Slack Determination for Robust RCPSP
Mohamed Ali Khemakhem, Hédi Chtourou .......... 177
On the Expression of Robust RCPSP Solution
Mohamed Ali Khemakhem, Hédi Chtourou .......... 181

vii

Issues in Distributed Scheduling
Mahmut Kurşun, Ali Tamer Ünal, Kamer Sözer .......... 185
A Hybrid Genetic Algorithm for the Multi-Mode Resource Constrained Project Scheduling Problem
Antonio Lova, Pilar Tormos, Mariamar Cervantes, Federico Barber .......... 189
Bounds in a Berth and Quay Cranes Allocation Problem
Maciej Machowiak, Jacek Błażewicz, Ceyda Oğuz .......... 193
A Two Phase Approach to Multi-Hoist Scheduling Problem
Antonio Manca, Paola Zuddas, Eric Niel, Corinne Subai .......... 195
New Iterated Pareto Greedy Algorithms for the Multi-Objective Flowshop Problem
Gerardo Minella, Rubén Ruiz, Michele Ciavotta .......... 199
Rescheduling for New Orders with Setup Times
Cédric Mocquillon, Christophe Lenté, Vincent T'Kindt .......... 203
A Note on Scheduling with Learning Effect
Dariusz Okolowski .......... 206
Temporal Constraint and Due Date Infeasibilities: A Multi-Objective Approach
Ángeles Pérez, Pilar Lino, Sacramento Quintanilla, Vicente Valls .......... 210
Load Balancing by Migrating Processes
Matthieu Pérotin, Patrick Martineau, Carl Esswein .......... 214
Approximate Procedures for Minimizing Total Completion Time in a Single Machine Scheduling Problem Subject to Release Dates
Mohamed Ali Rakrouki, Talel Ladhari .......... 218
Mixed-Integer Linear Programming Formulation for High Level Synthesis
André Rossi, Marc Sevaux .......... 222
Minimization of Makespan and Maximum Tardiness Subject to a Maximum Tardiness Bound in Flowshop Problems
Rubén Ruiz, Ali Allahverdi .......... 227
Two Models for the Just-in-Time Scheduling Problem with Delivery Dates
Nina Runge, Francis Sourd .......... 232
A Heuristic Efficient Solution for Non-Delay Resource Constrained Project Schedule
Arik Sadeh, Yuval Cohen, Ofer Zwikael .......... 236
A Model for Allocating and Scheduling Machines
Hérica L. Sánchez, Servio B. Guillén, Laura Z. Plazola .......... 240
Makespan Minimization in the Relocation Problem Subject to Release Dates
Sergey V. Sevastyanov, Bertrand M.T. Lin, H.L. Huang .......... 244
Solving Make-or-Buy Problems Under Limited Work-in-Process Cost
Natalia V. Shakhlevich, Akiyoshi Shioura, Vitaly A. Strusevich .......... 249
Does Project Management Methodology Improve Project Performance? – A Case Study
Avraham Shtub, Shai Rozenes .......... 253

viii

Lower Bounds for Total Weighted Tardiness Minimization on Parallel Machines
Nizar Souayah, Imed Kacem, Mohamed Haouari, Chengbin Chu .......... 257
Project Scheduling for Production Planning: a Stochastic Programming Approach with Feeding Precedence Constraints
Tullio Tolio, Marcello Urgo, Arianna Alfieri .......... 261
Energygrass Supply Scheduling to Minimize Investment and Operating Costs
László Torjai, Mónika Pitz .......... 265
Local Search Engineering for Highly Constrained Hybrid Flow Line Problems
Thijs Urlings, Thomas Stützle, Rubén Ruiz .......... 269
An Iterated Greedy Approach for the Unrelated Parallel Machine Scheduling Problem with Sequence Dependent Setup Times
Eva Vallada, Dario Diotallevi, Rubén Ruiz .......... 274
Dynamic Project Scheduling in Service Management
Vicente Valls, David Gómez-Cabrero .......... 278
A Comparison of Various Population-Based Meta-Heuristics to Solve the MRCPS
Vincent Van Peteghem, Mario Vanhoucke .......... 282
Heuristic Algorithms for Minimizing Single Machine Weighted Earliness-Tardiness with a Common Due Date
Fulgencia Villa, Ramon Alvarez-Valdes, Enric Crespo, Jose Tamarit .......... 285
Performance Guarantees of Very Large-Scale Neighborhoods for Minimizing Makespan on Identical Machines
Tjark Vredeveld, Tobias Brueggemann, Johann L. Hurink, Gerhard J. Woeginger .......... 289
Exact vs. Heuristic Approach to Continuous Resource Allocation in Discrete-Continuous Project Scheduling
Grzegorz Waligóra, Jan Węglarz .......... 293
A Project Scheduling Approach for Dismantling Nuclear Power Plants
Jürgen Zimmermann, Jan-Hendrik Bartels .......... 297
Author Index .......... 301

ix

PLENARY TALK
Machine Scheduling: Past, Current, and Possible Future Challenges
Peter Brucker
Institute of Mathematics, University of Osnabrueck, Germany
e-mail: [email protected]
Keywords: deterministic scheduling, complexity, applications.

In the first part of the talk some of the milestones in the history of deterministic machine scheduling will be reviewed. The field started more than fifty years ago, together with the first efforts in operations research, which was a fast-growing research area at that time. One exciting early result was S.M. Johnson's algorithm for solving the two-machine flow shop problem, published in 1954. Other early results were published by J.R. Jackson, R. McNaughton, and W.E. Smith, who developed polynomial algorithms for special cases of single and parallel machine scheduling problems. During the next decade, efficient algorithms for further special cases of single and parallel machine and shop scheduling problems were developed. For other scheduling problems it was impossible to calculate optimal solutions even for small-sized instances. The most famous hard problem at that time was an instance of a job shop problem with 10 jobs and 10 machines published in a book by Muth and Thompson in 1963. It remained open for more than two decades before it was solved by a clever branch-and-bound procedure developed by Carlier and Pinson. One of the most important results in combinatorial optimization was the introduction of the concept of NP-completeness by S.A. Cook in 1971, and a follow-up paper by R.M. Karp proved the NP-completeness of a large number of combinatorial problems. Researchers started to classify scheduling problems into polynomially solvable and NP-hard ones. The fact that only special cases turned out to be polynomially solvable, while most applications of scheduling in practice lead to NP-hard problems, justified the need for heuristics and approximation algorithms.
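Johnson's 1954 rule mentioned above is simple enough to state in a few lines. The sketch below (an illustration added to these notes, not part of the talk) orders jobs for the two-machine flow shop and computes the makespan of the resulting sequence:

```python
def johnsons_rule(jobs):
    """Order jobs (a, b) = (machine-1 time, machine-2 time) to minimize
    the two-machine flow-shop makespan, per Johnson (1954)."""
    # Jobs faster on machine 1 go first, in increasing order of a;
    # the remaining jobs go last, in decreasing order of b.
    front = sorted((j for j in jobs if j[0] < j[1]), key=lambda j: j[0])
    back = sorted((j for j in jobs if j[0] >= j[1]), key=lambda j: -j[1])
    return front + back

def makespan(seq):
    """Completion time of the last job on machine 2 for a given sequence."""
    t1 = t2 = 0
    for a, b in seq:
        t1 += a                # machine 1 finishes this job at t1
        t2 = max(t2, t1) + b   # machine 2 starts once both are ready
    return t2
```

For example, the instance `[(3, 2), (1, 4), (2, 3)]` is sequenced as `[(1, 4), (2, 3), (3, 2)]` with makespan 10, which brute-force enumeration confirms is optimal.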
In the second part of the talk, the main stream of current research will be discussed and possible future challenges will be indicated. More recent and ongoing research has concentrated on scheduling problems which in practice usually lead to complex situations; one has to model these situations and develop heuristics that provide good solutions. Research activities will continue in areas such as online scheduling, cyclic scheduling, scheduling with controllable data, inverse scheduling, scheduling and routing, and multi-objective scheduling, with applications in computing, robotic cell scheduling, logistics, communication, and transport. On the theoretical side, there are some special machine scheduling problems for which the complexity status remains open despite large efforts to resolve it; solving such problems remains a challenge.

PMS 2008, April 28-30, İstanbul, Turkey

1

PLENARY TALK
Robust Project Scheduling
Prof. Erik Demeulemeester
Department of Decision Sciences and Information Management, Katholieke Universiteit Leuven, Belgium
e-mail: [email protected]

Over the last decades, a great deal of research has been done on project scheduling: optimal algorithms as well as good (meta)heuristic procedures have been devised for a multitude of project scheduling problems. Notwithstanding this enormous research effort, there is an endless list of projects that have failed to be executed on time and within budget. This is partly due to the gap between theory and practice, and we should continue to strive to eliminate this gap. However, a major reason for the time and cost overruns might lie in the fact that the project scheduling community has mainly focused on deterministic project scheduling problems, largely overlooking the impact of uncertainty in a project. Indeed, activities might take longer or shorter to execute, resources might fail, rework might be necessary, or project activities might be inserted or deleted over the course of the project. These distortions might result in project delays, due date violations, cost overruns or nervousness throughout the supply chain of the project. Basically, from a scheduling point of view, there are two ways of dealing with the uncertainty in a project. The first is reactive scheduling, where the planner revises or reoptimizes a schedule whenever a schedule breakage occurs. The second is proactive-reactive scheduling, in which the planner constructs a baseline schedule that takes statistical knowledge of the project uncertainty into account and is as insensitive as possible to disruptions in the project. This second way is typically referred to as robust project scheduling. Note that in robust project scheduling a reactive approach is always necessary (and important), as it is practically impossible to perfectly protect the baseline schedule. The tutorial will give an extensive overview of the research that has been performed on robust project scheduling.

First, an overview will be given of the different objectives that are typically used in robust project scheduling. Next, both proactive and reactive procedures will be discussed for the case where the uncertainty in the project only affects the duration of the activities. The following part will discuss the approaches that have been proposed for the case in which resources break down or become unavailable. Finally, conclusions will be drawn and topics for future research will be indicated.
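One simple proactive device from this literature is protecting the baseline with time buffers sized to activity variability. The toy sketch below (an illustration under strong simplifying assumptions — a pure chain of activities and a buffer of `k` standard deviations per activity — not a procedure from the talk) shows the idea:

```python
def buffered_baseline(activities, k=1.0):
    """Toy proactive baseline for a chain of activities.

    Each activity is (mean_duration, std_dev). A buffer of k standard
    deviations is appended after each activity, so the planned start of
    the next activity can absorb typical duration overruns.
    """
    starts, t = [], 0.0
    for mean, std in activities:
        starts.append(t)
        t += mean + k * std  # planned duration plus protective buffer
    return starts
```

Larger `k` yields a more stable but longer baseline, which is precisely the stability/makespan tradeoff that robust project scheduling objectives try to balance.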

 

2


SEMI-PLENARY TALK
Multicriteria Machine Scheduling: Theory, Models and Algorithms
Vincent T'Kindt
University of Tours, France

[The abstract text is not recoverable from this copy.]


3

SEMI-PLENARY TALK
Bi-Criteria Scheduling with Controllable Processing Times
M. Selim Aktürk
Dept. of Industrial Engineering, Bilkent University, Turkey
e-mail: [email protected]
Keywords: scheduling, bi-criteria optimization, controllable processing times, robotic cell.

In the current literature, the process planning and scheduling levels are linked through timing data. After calculating locally optimal process parameters (i.e. machining conditions) that minimize the manufacturing cost, the processing time is passed to the scheduling level as data. In reality, however, the time it takes to process each part can be controlled (albeit at higher cost) by changing the machining conditions. Since it is well known that scheduling problems are extremely sensitive to processing time data, controllable processing times provide additional flexibility in finding solutions to the scheduling problem, which in turn can improve the overall performance of the production system. Therefore, the dual objectives are minimization of the manufacturing cost and of a regular scheduling performance measure. In practice, the relative importance of the cost and time objectives for the decision maker may vary over time: if the workload is heavy, scheduling-related objectives become more important; if it is relatively light, the manufacturing cost is. Therefore, we propose new methods to generate a set of nondominated solutions for this bi-criteria problem, as discussed in Gurel and Akturk (2007). Processing time controllability and nonlinear cost functions complicate the scheduling problem; we therefore use recent advances in conic mixed-integer programming to model these problems. Controllable processing times enable the decision maker to generate alternative schedules with varying manufacturing cost and scheduling performance, and hence bring additional solution flexibility to reactive scheduling, as shown in Turkcan et al. (2008).

In rescheduling it is highly desirable to catch up with the original schedule as soon as possible by reassigning the jobs to the machines and compressing their processing times. On the other hand, one must also keep the manufacturing cost due to compression of the jobs low. Thus, one is faced with a tradeoff between the match-up time and manufacturing cost criteria. Akturk et al. (2008) introduce alternative match-up scheduling problems for finding schedules on the efficient frontier of this time/cost tradeoff. A manufacturing cell consisting of a number of CNC machines and a material handling robot is called a robotic cell. The cycle time of the cell is affected by the robot move sequence as well as by the processing times of the parts on the machines. In Gultekin et al. (2008), we determined a set of nondominated solutions for these two competing measures of cycle time and manufacturing cost. As a result, we find the robot move sequence as well as the processing times of the parts on each machine that not only minimize the cycle time but, for the first time in the robotic cell scheduling literature, also minimize the manufacturing cost. In this presentation, my main aim is to show the effectiveness of controlling processing times for both predictive and reactive single or nonidentical parallel CNC machine scheduling, and for robotic cell scheduling problems.
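The time/cost tradeoff behind these bi-criteria models can be illustrated on the simplest possible case. The sketch below is a toy example added here (single machine, makespan equal to the sum of processing times, and linear compression costs — a deliberate simplification of the nonlinear, conic models used in the cited work):

```python
def time_cost_frontier(jobs):
    """Toy time/cost tradeoff for controllable processing times.

    jobs: list of (normal_time, min_time, unit_compression_cost).
    On a single machine with makespan = sum of processing times,
    compressing jobs in increasing order of unit cost traces the
    breakpoints of the efficient (makespan, total cost) frontier.
    """
    makespan = sum(p for p, _, _ in jobs)
    cost = 0.0
    frontier = [(makespan, cost)]
    for p, lo, c in sorted(jobs, key=lambda j: j[2]):  # cheapest first
        makespan -= p - lo          # fully compress this job
        cost += c * (p - lo)
        frontier.append((makespan, cost))
    return frontier
```

Each point dominates no other: moving left along the frontier always costs strictly more, which is the shape of tradeoff the ε-constraint and match-up approaches above explore in richer settings.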

References
[1] Akturk, M.S., Atamturk, A. and Gurel, S. (2008). Match-up scheduling with manufacturing cost considerations. Research Report BCOL 08.01, University of California, Berkeley.
[2] Gultekin, H., Akturk, M.S. and Karasan, O.E. (2008). Bicriteria robotic cell scheduling. Journal of Scheduling, to appear.
[3] Gurel, S. and Akturk, M.S. (2007). Optimal allocation and processing time decisions on nonidentical parallel CNC machines: ε-constraint approach. European Journal of Operational Research, 183(2), 591-607.
[4] Turkcan, A., Akturk, M.S. and Storer, R.H. (2008). Predictive/reactive scheduling with controllable processing times in flexible manufacturing systems. Under review, IIE Transactions.


Resource Constrained Project Scheduling Problem: A Neurogenetic Approach
Anurag Agarwal 1, Selcuk Colak 2, and Selcuk Erenguc 1

1 Department of Information Systems and Operations Management, Warrington College of Business Administration, University of Florida, USA
Email: [email protected], [email protected]

2 Department of Business, College of Economics and Administrative Sciences, Cukurova University, Turkey
Email: [email protected]

Keywords: project scheduling, genetic algorithms, neural networks

1. Introduction
The resource-constrained project scheduling problem (RCPSP) is a classical problem in the scheduling literature. The problem is widely applicable in project management and production scheduling and is known to be strongly NP-hard (Blazewicz et al., 1983). The objective is to schedule the activities of a project so as to minimize the project makespan subject to precedence and resource constraints. The quantities of available resources are assumed to be known and fixed for the entire duration of the project. Resource requirements and processing times for each activity are also known and fixed a priori. Preemption of activities is not allowed. This problem has been well researched for over four decades. In recent years, research efforts have focused on developing a variety of metaheuristics inspired by natural phenomena, such as genetic algorithms, neural networks, ant-colony optimization, simulated annealing and electromagnetism-like algorithms. In this paper we propose a new hybrid approach, called the Neurogenetic approach, for solving this problem.
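To make the problem statement concrete, the following sketch implements a serial schedule-generation scheme, a standard RCPSP construction heuristic (shown here, as an editorial illustration, for a single renewable resource and a given precedence-feasible priority list; each activity's demand is assumed not to exceed capacity):

```python
def serial_sgs(durations, demands, capacity, preds, order):
    """Serial schedule-generation scheme for the RCPSP (single resource).

    durations[i], demands[i]: processing time and resource use of activity i.
    capacity: constant resource availability; preds[i]: set of predecessors.
    order: precedence-feasible priority list of activity indices.
    Returns a dict of start times.
    """
    horizon = sum(durations)
    usage = [0] * (horizon + 1)   # resource profile over time
    start = {}
    for i in order:
        # earliest precedence-feasible start time
        t = max((start[j] + durations[j] for j in preds[i]), default=0)
        # shift right until the resource fits over [t, t + durations[i])
        while any(usage[s] + demands[i] > capacity
                  for s in range(t, t + durations[i])):
            t += 1
        start[i] = t
        for s in range(t, t + durations[i]):
            usage[s] += demands[i]
    return start
```

Permuting `order` changes the resulting makespan; the priority-list space is exactly what metaheuristics for the RCPSP, including the approach proposed here, search over.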

2. The proposed approach
The proposed Neurogenetic (NG) approach is a hybrid of genetic algorithm (GA) and neural-network (NN) approaches. The GA approach has been applied successfully to this problem since the mid-nineties (Alcaraz et al. 2004). A neural-network approach was applied recently in Colak et al. (2006). While the GA performs well as a global-search technique, the NN approach has proven to be an effective local-search technique. We propose hybridizing these approaches to benefit from their complementary advantages: GAs provide diversification in the search while NNs provide intensification. In this approach we interleave GA and NN search iterations. When switching between GA and NN, the best solution from one approach is fed to the other. Interleaving requires an ability to switch back and forth between the two search techniques; the switching strategies are explained in Section 2.4.

2.1 The neural network approach
In the neural network approach, for an n-activity RCPSP, the algorithm attempts to determine a weight vector w with n elements (w1, w2, ..., wn) such that when a given heuristic, such as LFT (Latest Finish Time), is applied to the weighted processing times (w1*t1, w2*t2, ..., wn*tn) instead of the original processing times (t1, t2, ..., tn), the optimal (or best) solution is obtained. The procedure for determining such a weight vector starts with an identity vector and modifies the weight elements iteratively using a weight-modification strategy similar to those used in traditional neural networks. See Colak et al. (2006) for details.

2.2 Genetic algorithms approach
In the genetic algorithms approach, a chromosome of n genes is used to describe the priority order of activities. The crossover scheme used is random two-point crossover. We use a mutation


rate of 0.01. Both serial and parallel schedule generation schemes are applied. We apply the double justification strategy (Valls et al. 2005) and also the better-of-forward-and-backward-schedule strategy to every solution obtained.

2.3 The neurogenetic interleaving algorithm
Figure 1 outlines the steps of the Neurogenetic algorithm. Step 3 runs the GA iterations while Step 7 runs the NN iterations. The algorithm returns to Step 3 if more iterations are required. In this algorithm, a unique solution means a solution not obtained thus far.

Step 1: Decide on the number of unique solutions to search, say u; the number of interleavings to use, say t; the GA-to-NN distribution of unique solutions, say p and (1-p); the number of GA solutions to feed to the NN at each interleaving, say m; and the population size for the GA, say s. Compute the number of unique GA solutions per interleaving, g = (p*u)/t, and the number of unique NN solutions per interleaving, a = (1-p)*u/(t*m).
Step 2: Generate the initial population of s chromosomes for the GA using the NN approach.
Step 3: Run the GA search for as many generations as it takes to generate g unique solutions.
Step 4: If u unique solutions have not been found, continue; else stop.
Step 5: Select m solutions from the GA population.
Step 6: Determine NN weights for each of the m selected solutions.
Step 7: Run the NN search for a unique solutions for each of the m solutions.
Step 8: If the NN finds solutions better than those in the GA population, translate them to GA chromosomes and include them in the population, replacing the worst solutions in the population.
Step 9: If u unique solutions have not been found, go to Step 3; else stop.

Figure 1: The Neurogenetic Algorithm
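The core idea of Section 2.1, applying a base priority rule to weighted durations, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a serial schedule-generation scheme, a single renewable resource, and a shortest-weighted-duration rule standing in for LFT; all names are illustrative.

```python
def serial_schedule(durations, weights, preds, demand, capacity):
    """Build a schedule: repeatedly pick the eligible activity with the
    smallest weighted duration, then place it at the earliest time that
    respects precedence and the resource capacity. Returns the makespan."""
    n = len(durations)
    start, finish = {}, {}
    scheduled = set()
    while len(scheduled) < n:
        eligible = [i for i in range(n)
                    if i not in scheduled and all(p in scheduled for p in preds[i])]
        i = min(eligible, key=lambda i: weights[i] * durations[i])
        t = max([finish[p] for p in preds[i]], default=0)
        # advance t until resource usage stays within capacity
        while True:
            usage = max((sum(demand[j] for j in scheduled
                             if start[j] <= tau < finish[j])
                         for tau in range(t, t + durations[i])), default=0)
            if usage + demand[i] <= capacity:
                break
            t += 1
        start[i], finish[i] = t, t + durations[i]
        scheduled.add(i)
    return max(finish.values())

# Toy instance: 4 unit-demand activities, precedences 0->2 and 1->3, capacity 2.
durs = [3, 2, 2, 4]
preds = [[], [], [0], [1]]
print(serial_schedule(durs, [1, 1, 1, 1], preds, [1, 1, 1, 1], 2))
```

With all weights equal to 1 the rule reduces to the base heuristic; the NN search would then perturb the weights iteratively to improve the resulting makespan.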

2.4 Switching between approaches
For a problem instance of, say, 9 activities, a chromosome in the GA approach may look like this: (1,4,6,5,2,3,8,7,9), where the genes of the chromosome represent the activity numbers. The order of activities determines the order of assignment of activities. To switch to the NN encoding, we need to determine a weight vector (w1, w2, ..., w9) such that, for a given priority rule such as MinSlack, the priority order of activities using weighted processing times is the same as the order given by the chromosome.

Algorithm for Switching from GA Encoding to NN Encoding
Let param represent the vector of priority-rule parameters. For example, for the MinSlack priority rule, param is the vector of slacks of the activities; for the MWR (most work remaining) priority rule, param is the vector of work remaining for each activity. The general algorithm to determine such a weight vector for the NN approach is:

Loop until, for each gene, the cardinal position of the gene based on w*param is the same as its target cardinal position
    If a gene is out of place, i.e. its position differs from the target position
        Let wa and wb represent the weights corresponding to the out-of-place gene and the target-position gene
        If (wa*parama > wb*paramb and positiona > positionb) and the heuristic is based on non-increasing order of param then
            Set wa = 0.1 + wb * (paramb/parama)
        Elseif (wa*parama > wb*paramb and positiona > positionb) and the heuristic is based on non-decreasing order of param then
            Set wa = wb * (paramb/parama) - 0.1
        End If
    End If
End Loop
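The GA-to-NN switch can be sketched for a rule that sorts activities in non-increasing order of w*param. The 0.1 offset follows the paper; the loop structure and the choice to raise the weight of the activity that belongs at the contested position (rather than lower the out-of-place one) are simplifications of ours, so treat this as illustrative, not the authors' exact procedure.

```python
def weights_for_order(target_order, param, eps=0.1):
    """Find a weight vector w such that sorting activities by w[i]*param[i]
    in non-increasing order reproduces target_order (a list of activity
    indices). While some activity is out of place, bump the weight of the
    activity that should occupy that position."""
    n = len(param)
    w = [1.0] * n
    while True:
        current = sorted(range(n), key=lambda i: -w[i] * param[i])
        if current == list(target_order):
            return w
        # first out-of-place position
        pos = next(k for k in range(n) if current[k] != target_order[k])
        a, b = current[pos], target_order[pos]  # a sits where b belongs
        # make b's weighted value just exceed a's: w_b = eps + w_a*(param_a/param_b)
        w[b] = eps + w[a] * (param[a] / param[b])

# Example: make the weighted values reproduce the order [2, 0, 1].
params = [5.0, 4.0, 3.0]
w = weights_for_order([2, 0, 1], params)
order = sorted(range(3), key=lambda i: -w[i] * params[i])
print(order)
```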


In the above algorithm, 0.1 is an arbitrary value used to alter the relative weights of wa and wb. The algorithm always generates a weight vector w such that w*param gives the same ordering as the GA chromosome, because it loops until all genes are appropriately placed.

Algorithm for Switching from NN Encoding to GA Encoding
Switching from NN to GA is straightforward: find the ordering of activities using the weighted parameters; this ordering is the order of activities in the chromosome.
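The NN-to-GA direction is indeed a single sort; a sketch, again assuming a rule based on non-increasing order of the weighted parameter:

```python
def chromosome_from_weights(w, param):
    # Order activities by weighted parameter, non-increasing; this order
    # is the GA priority chromosome.
    return sorted(range(len(w)), key=lambda i: -w[i] * param[i])

# Weighted values are 3.0, 6.0 and 5.0, so the chromosome is [1, 2, 0].
print(chromosome_from_weights([1.0, 2.0, 1.0], [3.0, 3.0, 5.0]))
```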

3. Computational experiments and results
We implemented the three approaches (NN, GA and NG) in Visual Basic 6.0 and executed the experiments on a Pentium IV 2.8 GHz personal computer. Well-known benchmark problem instance sets from PSPLIB (http://www.bwl.uni-kiel.de/Prod/psplib/index.html) were used to evaluate the algorithms. Datasets J30, J60 and J90 each consist of 480 problem instances with four resource types and 30, 60 and 90 activities, respectively. Dataset J120 consists of 600 problem instances with four resource types and 120 activities. The stopping criterion is to stop if the solution equals the lower bound or if a predetermined maximum number of schedules has been evaluated: in our case either 1,000 or 5,000. Table 1 displays the average deviations obtained by the GA approach alone, the NN approach alone, and the NG approach for both 1,000 and 5,000 schedules. For the J30 instances, the deviations are from known optima, whereas for the J60, J90 and J120 instances the deviations are from the critical-path-based lower bound. For every dataset, the NG approach improved upon the GA-alone and NN-alone results. For example, for the J30 instances with 1,000 schedules, the GA and NN approaches alone gave deviations of 0.19 and 0.25 percent respectively, while NG gave a deviation of 0.13 percent. Table 2 shows our results in comparison with other results in the literature. For each problem set, for both 1,000 and 5,000 evaluations, our results are very competitive. For the NG approach with 1,000 solutions, the CPU times for the J30, J60, J90 and J120 instances averaged 1.96, 3.4, 5.25 and 18.6 seconds; for the GA alone, they were 1.11, 2.07, 3.11 and 13.12 seconds, respectively.

4. Conclusions
In the proposed Neurogenetic approach, we interleave search iterations of the neural network (NN) approach and the genetic algorithms (GA) approach. The GA acts as a global-search mechanism while the NN acts as a local-search mechanism. Owing to the complementary characteristics of the two search approaches, the interleaved approach provides better results than either approach alone using the same number of iterations. We also propose strategies for switching from one approach to the other.

Table 1 - Average percent deviations for the NN, GA and Neurogenetic approaches.
For J30 instances the deviations are from known optima; for J60, J90 and J120 they are from the critical-path-based lower bound.

                                 Number of Unique Schedules Evaluated
Approach             Dataset        1000        5000
NN approach alone    J30            0.25        0.11
GA approach alone    J30            0.19        0.15
Neurogenetic         J30            0.13        0.10
NN approach alone    J60           11.72       11.39
GA approach alone    J60           11.66       11.52
Neurogenetic         J60           11.52       11.29
NN approach alone    J90           11.21       11.10
GA approach alone    J90           11.31       11.11
Neurogenetic         J90           11.17       11.06
NN approach alone    J120          34.94       34.57
GA approach alone    J120          35.11       34.95
Neurogenetic         J120          34.65       34.15

Table 2 - Comparison with other results in the literature (average deviations).
For J30 instances the deviations are from known optima; for J60, J90 and J120 they are from the critical-path-based lower bound.

                                                         # of Schedules
Algorithm                SGS     Reference                 1,000    5,000
J30
Neurogenetic (FBI)       both    this paper                 0.13     0.10
Sampling - LFT - FBI     both    Tormos and Lova (2003)     0.23     0.14
GA - forw.-backw.        both    Alcaraz et al. (2004)      0.25     0.06
HNA - FBI                both    Colak et al. (2006)        0.25     0.11
Sampling - LFT - FBI     both    Tormos and Lova (2001)     0.25     0.15
GA - hybrid, FBI         serial  Valls et al. (2003)        0.27     0.06
J60
Neurogenetic (FBI)       both    this paper                11.52    11.29
GA - hybrid, FBI         serial  Valls et al. (2003)       11.56    11.10
HNA                      both    Colak et al. (2006)       11.72    11.39
Scatter Search - FBI     serial  Debels et al. (2004)      11.73    11.10
GA - forw.-backw., FBI   both    Alcaraz et al. (2004)     11.89    11.19
Sampling - LFT - FBI     both    Tormos and Lova (2003)    12.04    11.72
J120
GA - hybrid, FBI         serial  Valls et al. (2003)       34.57    32.54
Neurogenetic (FBI)       both    this paper                34.65    34.15
HNA                      both    Colak et al. (2006)       34.94    34.57
Scatter Search - FBI     serial  Debels et al. (2004)      35.22    33.10
GA - FBI                 serial  Valls et al. (2005)       35.39    33.24

REFERENCES
Alcaraz, J., Maroto, C., and Ruiz, R. (2004). Improving the performance of genetic algorithms for the RCPS problem. Proceedings of the Ninth International Workshop on Project Management and Scheduling, 40-43.
Blazewicz, J., Lenstra, J.K. and Rinnooy Kan, A.H.G. (1983). Scheduling projects to resource constraints: classification and complexity. Discrete Applied Mathematics, 5, 11-24.
Colak, S., Agarwal, A., and Erenguc, S.S. (2006). Resource-constrained project scheduling problem: a hybrid neural approach. In Perspectives in Modern Project Scheduling (J. Weglarz and J. Jozefowska, eds), 297-318.
Debels, D., De Reyck, B., Leus, R. and Vanhoucke, M. (2006). A hybrid scatter search/electromagnetism meta-heuristic for project scheduling. European Journal of Operational Research, 169(2), 638-653.
Tormos, P. and Lova, A. (2003). An efficient multi-pass heuristic for project scheduling with constrained resources. International Journal of Production Research, 41(5), 1071-1086.
Valls, V., Ballestin, F., and Quintanilla, M.S. (2003). A hybrid genetic algorithm for the RCPSP. Technical report, Department of Statistics and Operations Research, University of Valencia.
Valls, V., Ballestin, F., and Quintanilla, M.S. (2005). Justification and RCPSP: a technique that pays. European Journal of Operational Research, 165(2), 375-386.



The Two-Stage Assembly Flowshop Scheduling Problem with Two Criteria

Ali Allahverdi 1, Fawaz S. Al-Anzi 2

1 Department of Industrial and Management Systems Engineering, Kuwait University, P.O. Box 5969, Safat, Kuwait, Fax: 965 481 6137, e-mail: [email protected]
2 Department of Computer Engineering, Kuwait University, P.O. Box 5969, Safat, Kuwait, Fax: 965 481 6137, e-mail: [email protected]

Keywords: Assembly flowshop, bicriteria, makespan, mean completion time, heuristic.

1. Introduction
Consider the situation where there are n jobs such that each job has more than two operations. The first m operations of a job are performed at the first stage in parallel and the final operation is conducted at the second stage. Each of the m operations of a job at the first stage is performed by a different machine, and the last operation, on the machine at the second stage, may start only after all m operations at the first stage are completed. Each machine can process only one job at a time. The described problem is known as the two-stage assembly flowshop scheduling problem with m operations at the first stage and one operation at the second stage. It should be noted that the problem reduces to the two-machine flowshop scheduling problem when there is only one machine at the first stage, i.e., m=1.

The assembly flowshop scheduling problem was introduced independently by Lee et al. (1993) and Potts et al. (1995). The two-stage assembly scheduling problem has many applications in industry. Potts et al. (1995) described an application in personal computer manufacturing where central processing units, hard disks, monitors, keyboards, etc. are manufactured at the first stage, and all the required components are assembled to customer specification at a packaging station (the second stage). Lee et al. (1993) described another application in a fire engine assembly plant: the body and chassis of fire engines are produced in parallel in two different departments, and when the body and chassis are completed and the engine has been delivered (purchased from outside), they are fed to an assembly line where the fire engine is assembled. Another application is in the area of query scheduling on distributed database systems, Allahverdi and Al-Anzi (2006). Lee et al. (1993) considered the problem with m=2 while Potts et al. (1995) considered the problem with an arbitrary m.
Both studies addressed the problem with respect to makespan minimization, and both proved that the problem with this objective function is NP-hard in the strong sense for m=2. Lee et al. (1993) discussed a few polynomially solvable cases and presented a branch-and-bound algorithm. Moreover, they proposed three heuristics and analyzed their error bounds. Potts et al. (1995) showed that the search for an optimal solution may be restricted to permutation schedules. They also showed that any arbitrary permutation schedule has a worst-case ratio bound of two, and presented a heuristic with a worst-case ratio bound of 2-1/m. Hariri and Potts (1997) also addressed the same problem, developed a lower bound and established several dominance relations. They also presented a branch-and-bound algorithm incorporating the lower bound and dominance relations. Another branch-and-bound algorithm was proposed by Haouari and Daouas (1999). Sun et al. (2003) also considered the same problem with the makespan objective function and proposed heuristics to solve it. Allahverdi and Al-Anzi (2006) obtained a dominance relation for the same problem when setup times are considered separate from processing times. They also proposed two evolutionary heuristics (a particle swarm optimization and a tabu search) as well as a simple yet efficient algorithm with negligible computational time. Tozkapan et al. (2003) considered the two-stage assembly scheduling problem with the total weighted flowtime performance measure. They showed that permutation schedules are dominant for the problem with this performance measure. They developed a lower bound and a dominance relation, and utilized them in a branch-and-bound algorithm. It should be noted that the performance measures of flowtime and completion time are equivalent when jobs are


ready at time zero. It should also be noted that total and mean completion time are equivalent performance measures. Al-Anzi and Allahverdi (2006) considered the same problem with the total completion time criterion. They obtained optimal solutions for two special cases and proposed a simulated annealing heuristic, a tabu search heuristic, and a hybrid tabu search heuristic. They compared their heuristics with the existing ones and showed that their hybrid tabu search heuristic is the best. The research mentioned so far addresses only the single criterion of either makespan or mean completion time, while most real-life problems require the decision maker to consider both criteria before arriving at a decision. The problem with both makespan and mean completion time has not been addressed for the two-stage assembly scheduling problem and is the topic of the current paper. In this paper, we address the two-stage assembly flowshop scheduling problem with a weighted sum of makespan and mean completion time as the objective.

2. Problem Definition
We assume that n jobs are simultaneously available at time zero and that preemption is not allowed, i.e., any started operation has to be completed without interruption. Each job consists of a set of m+1 operations. The first m operations are performed at stage one on m parallel machines while the last operation is performed at stage two on the assembly machine. Let
t[i,j]: operation time of the job in position i on machine j, i=1, ..., n, j=1, ..., m,
p[i]: operation time of the job in position i on the assembly machine,
C[i]: completion time of the job in position i.
Note that job k is complete once all of its operations t[k,j] (j=1, ..., m) and p[k] are completed, where the operation p[k] may start only after all operations t[k,j] (j=1, ..., m) have been completed. Potts et al. (1995) and Tozkapan et al. (2003) showed that permutation schedules are dominant with respect to the makespan and total flowtime (completion time) criteria, respectively. Therefore, permutation schedules are also dominant for the problem addressed in this paper, and we restrict our search for the optimal solution to permutation schedules. In other words, the sequence of jobs on all of the machines, including the assembly machine, is the same. It can be shown that the completion time of the job in position j is as follows (Al-Anzi and Allahverdi, 2006b).

C[j] = max{ max_{k=1,...,m} Sum_{i=1}^{j} t[i,k], C[j-1] } + p[j], where C[0] = 0.

Mean Completion Time: MCT = (1/n) Sum_{i=1}^{n} C[i].

Makespan: Cmax = C[n].

If the weight given to the makespan is denoted by λ, then the objective function (OF) is defined by the following equation:

OF = λ Cmax + (1-λ) MCT, where 0 ≤ λ ≤ 1.
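The recursion and objective above can be computed directly. A minimal sketch: the job sequence is assumed given (a permutation schedule), and the weight value and toy data are illustrative.

```python
def objective(t, p, lam):
    """t[i][j]: stage-1 time of the job in position i on machine j;
    p[i]: assembly time of the job in position i; lam: makespan weight.
    Returns (Cmax, MCT, OF) with OF = lam*Cmax + (1-lam)*MCT."""
    n, m = len(t), len(t[0])
    C = []
    prev = 0.0  # C[j-1], with C[0] = 0
    for j in range(n):
        # stage-1 ready time: the largest machine load over the m machines
        ready = max(sum(t[i][k] for i in range(j + 1)) for k in range(m))
        prev = max(ready, prev) + p[j]
        C.append(prev)
    mct = sum(C) / n
    return C[-1], mct, lam * C[-1] + (1 - lam) * mct

# Two jobs, two stage-1 machines: Cmax = 5, MCT = 4.5, OF = 4.75 for lam = 0.5.
print(objective([[2, 1], [1, 3]], [2, 1], 0.5))
```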


2.2 Second step
For a given leaf node N0 of the first step of the branch and bound, a complete schedule of the jobs is obtained, and for each job we have its completion time at the first stage, which can be considered as a release date for the second stage. The second step of the branch-and-bound procedure considers the problem PB(m2)|rj, GP = INT, k < n|Cmax. The node N0 is considered as the root of the second step of the tree, and each node describes a list of batches. In other words, a node Nl2 at level l of step two describes a list of batches constructed by adding a batch BNl2 of unscheduled jobs; thus all combinations of compatible jobs are considered. To limit the size of the tree we use lower bounds similar to those described for the first step of the branch and bound. Specific dominance rules that consider batch properties are also used to limit the size of the tree; they are given in the following.
• If there is an available job which could be added to the current batch B without any change in the processing time of B, then B is not a candidate to extend the list of batches.
• Let cbj = max{tk, rj} + aj be the smallest completion time of a candidate batch B' composed of unscheduled jobs and containing an unscheduled job j; if another batch B has a release date greater than cbj, then B is not a candidate to extend the list of batches.

3 Preliminary computational results
In order to evaluate the performance of this branch-and-bound algorithm, we carried out a series of preliminary experiments. The algorithm is coded in C++ and runs on an Intel Pentium M 1.5 GHz with 512 MB RAM. The processing times of jobs at the first stage are generated from a uniform distribution on [5, 100]; at the second stage, the initial endpoints aj of the processing-time intervals are generated from a uniform distribution on [5, 100], and the terminal endpoints bj are given by bj = aj + 0.05 × aj. We tested instances with 5 machines at each stage and a batch capacity of 2. The preliminary experiments show that instances with 12 tasks are solved easily in a few seconds. More experiments will be presented in the final version of this paper.
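The instance generation described above can be reproduced as follows. A sketch: whether the times are rounded to integers is not stated, so continuous values are used, and the seed is arbitrary.

```python
import random

def generate_instance(n, seed=0):
    """Stage-1 times from U[5, 100]; stage-2 processing-time intervals
    [a_j, a_j + 0.05*a_j], as in the experimental setup described above."""
    rng = random.Random(seed)
    stage1 = [rng.uniform(5, 100) for _ in range(n)]
    a = [rng.uniform(5, 100) for _ in range(n)]
    b = [aj + 0.05 * aj for aj in a]
    return stage1, a, b

# A 12-task instance, matching the size used in the preliminary experiments.
s1, a, b = generate_instance(12)
```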






Setting Gates for Activities in the Stochastic Project Scheduling Problem Through the Cross Entropy Methodology

Bendavid I. and Golany B.

Faculty of Industrial Engineering and Management, Technion - Israel Institute of Technology, Haifa 32000, Israel
e-mail: [email protected], [email protected]

Keywords: Project scheduling, gates, cross entropy

1. Introduction

The project scheduling problem has given rise to an extensive literature that addresses numerous variations of the basic problem. The various formulations differ from each other in the characterization of the activity durations, the existence of resource constraints and, most importantly, their objective function. Of the several techniques developed since the 1950s, the most important and most widely used remain the Critical Path Method (CPM, e.g. Krishnamoorthy, 1968), the Metra-Potential Method (MPM, e.g. Zhan, 1994) and the Program Evaluation and Review Technique (PERT, e.g. Fazar, 1959). These methods are characterized by their objective, makespan minimization, which has been the most popular objective in the project scheduling domain. Another approach to project scheduling is CC/BM (Critical Chain Scheduling and Buffer Management; see Goldratt (1997) and Rand (2000) for a detailed description). The particularity of this method is that it makes a clear analogy between inventory and time. The way to protect the critical chain from uncertainty is to add safety times to the critical chain itself and also each time a feeding path merges with the critical chain. The addition of the buffers allows the project manager to control the schedule of the project and to reduce the influence of the stochastic durations. In recent years, new methodologies have been developed with objective functions that take into account the financial aspects of the project, mainly the maximization of the net present value (NPV). Analogously to CC/BM, this methodology also attempts to intensify the control over the project schedule, with the idea that starting activities as soon as possible (according to precedence constraints) is not always optimal.

More recently, a new approach to project scheduling under uncertainty has emerged. This approach consists of determining in advance a gate for each activity, i.e. a time before which the activity cannot begin. Trietsch (2006) explains the motivation for gates in an environment where resources are booked and planned to be ready for an activity at a predetermined time. The goal of this paper is to develop a methodology that determines a gate for each activity so as to minimize the expected penalty costs. To do so, we first need to understand the reason for this approach. In an environment with stochastic durations, we cannot know for sure the start and finish times of each activity; this can lead to one of two outcomes: (1) a specific activity is potentially ready to start its processing, since all its predecessors are finished, but cannot actually start because the resources required for this activity were planned to arrive at a later time; (2) the resources required for a specific activity are ready, since they were planned to arrive at an earlier time, but the activity is not ready to start its processing because of precedence constraints. In both cases penalty costs are incurred. In the first case, the penalty cost is similar to a holding cost, incurred because the activity is ready sooner than planned. This holding cost can result from an alternative cost due to investing money to process preceding activities sooner than needed. It can also result from an indirect cost in cases where the products of preceding activities deteriorate if not used immediately or within a specific time interval. In the second case, the penalty cost is similar to a shortage cost, incurred because the activity is ready later than planned. Again, this shortage cost can result from an alternative cost due to ordering the resources sooner than needed. It can also result from an indirect cost in cases where the resources deteriorate if not used immediately or within a specific time interval.
The scheduling problem is to determine the vector of gates that minimizes the expected sum of holding and shortage costs. Similarly to the CC/BM and NPV-maximization methodologies, our goal in this paper is to control the schedule of the project by determining start-time constraints, with the idea that starting activities as soon as feasible is not


always optimal. We consider the following environment: a project is composed of n activities. We define Pi to be the set of all predecessors of activity i, and Si to be the set of all successors of activity i. Each activity i has a duration Yi with realization yi, probability density function fi and cumulative distribution function Fi. It is assumed that the distribution of Yi is bounded on the interval [ai, bi], 0 ≤ ai ≤ bi. In addition, we define for each activity i a holding cost per unit time hi and a shortage cost per unit time pi. We have to determine for each activity i a gate gi such that the activity can start at its gate or after all its predecessors are processed, whichever is later. Each activity has an individual due date equal to the gate of its immediate successor: if the processing of the activity ends before its due date, holding costs are incurred, and if it ends after its due date, shortage costs are incurred. The entire project has a due date d (generally imposed exogenously) that is also the due date of the last activities of the project. To determine a gate for each activity we choose a suboptimal approach, namely a general heuristic method: the Cross-Entropy method. The application of the method is explained in the next section.

2. A Cross Entropy Approach

2.1. Description of the Cross Entropy (CE) method
The CE method, developed by Rubinstein and Kroese (2004), is a general heuristic method for solving estimation and optimization problems. For optimization problems, a general outline of the CE method is as follows: translate the underlying optimization problem into a meaningful estimation problem, called the associated stochastic problem. For a minimization problem, for example, the estimation problem may be the expected number of times the objective function gives a lower value than a specific threshold. The CE algorithm then involves two phases: (1) generation of a sample of random data (demands, durations, etc.) according to a specified random mechanism, with simultaneous calculation of the objective function; (2) updating the parameters of the random mechanism (on the basis of the data collected) in order to produce a "better" sample in the next iteration, one that improves the value of the objective function. The application of the CE method to our scheduling problem is explained in the next section.

2.2. Application of the CE method to serial projects
We first apply the CE method to a network with activities in series (a serial project). In the first step we define the initial distribution of the gates. Since we have no a priori information on the values of the gates, we could start from a discrete uniform distribution between 0 and the due date of the project d. The number of possible vectors would then be (d+1)^n. Since there is a direct relationship between the number of possibilities and N, the number of vectors we have to generate in each step, we have to limit the number of possibilities for the vector of gates. The first way to limit this number is to use the constraint g_k ≥ g_{k-1} for all k = 2, ..., n. The second way is to fix the gate of activity k such that there is a chance to finish the remaining activities on time. For example, the gate of activity n should not be greater than d - a_n; otherwise, with probability 1, we will finish the last activity after the due date. For activity k, k = 1, ..., n, the minimal time to complete the remaining activities is Sum_{u=k}^{n} a_u; therefore the gate should not be greater than d - Sum_{u=k}^{n} a_u.

In the second step, we generate a sample of N vectors of gates and we calculate the cost of the project for each vector. To calculate the cost of a specific vector of gates Gi , we generate N vectors of activity durations, calculate for each realization the cost of the project, calculate the average of these N costs. This average is the estimator of the expected cost of the project for the specific vector of gates Gi . For a specific realization of activity durations y = y1 ,K, yn , the cost

32

PMS 2008, April 28-30, İstanbul, Turkey

is:

∑ (h ( g n

k =1

k

k +1

− tk − yk )+ + pk ( tk + yk − g k +1 ) + ) where tk is the start time of activity k ,

t1 = g1 , tk = max { g k , tk −1 + yk −1} and g n +1 = d . Then, these costs are ordered from the smallest to the biggest. In the third step we update the distribution of the gates according to the results of the last step. For each good performance, i.e. lowest costs, we note the value of the gates. The probability for each value of the gates will be the frequency of this value in the best performances, i.e. the number of time we get this value out of all the best performances. In the fourth step, we will stop if for each gate, the probability to be equal to a specific value is close to 1. Otherwise we have to perform another iteration. 2.3. Direct extension of the CE method to non-serial projects The application of the CE method for non-serial projects is based on the algorithm developed above for serial projects. The differences between the serial and non-serial algorithms are in the initial distribution of the vector of gates and in the calculation of the cost: (1) The initial distribution of the vector of gates: to limit the number of possibilities for the vectors we need to generate, we can use the constraint adapted to the network structure of the project: g k ≥ g j for all

j ∈ Pk, for all k = 2,…,n. In serial projects, the second way to limit this number was to fix the gate of activity k such that there exists a chance to finish the remaining activities on time. In non-serial projects, it is more complicated to know, from a specific activity, the remaining time until the completion of the project. Using the fact that there is one terminal activity n, we bounded the gates of all the activities by d − an. (2) The cost calculation in non-serial projects: for a specific realization y = y1,…,yn, the cost is:



∑_{k=1}^{n} [ ∑_{j∈Sk} ( hk (gj − tk − yk)⁺ + pk (tk + yk − gj)⁺ ) ]

where tk is the start time of activity k, t1 = g1, tk = max{gk, max_{j∈Pk}(tj + yj)}, and gn+1 = d.

2.4. Heuristic method for non-serial projects using the serial CE method

When we apply the direct extension of the CE method to non-serial projects with a large number of activities, the number of possible vectors of gates grows. Therefore, the number N of generated vectors should also grow, which considerably increases the running time of the algorithm. To reduce this problem, we propose the following heuristic method: (1) Find the critical path of the non-serial project, for example the path with the largest expected duration. (2) Apply the algorithm for serial projects developed in Section 2.2 to the critical path and find the gates of all activities on the critical path. (3) For each non-critical path, apply the algorithm for serial projects developed in Section 2.2, where the due date of this path is the gate of the activity at which the path merges with the critical path. This heuristic algorithm reduces the size of the problem. However, it has two drawbacks: the network structure is not preserved, and the problems solved are still not large enough.

2.5. CE method using a continuous distribution

To overcome the problems described in Section 2.4, we use a continuous distribution for the generation of the vector of gates instead of a discrete distribution. For simplicity, as explained in Kroese et al. (2006), we use the normal distribution. In the first iteration, we generate a vector of gates from a normal distribution with an initial mean (for example, the early start time of the activity) and an initial standard deviation (for example, d/6). In the next iterations, for each activity, instead of updating the probability of each possible value of the gate, we have to update just two values: the mean and the standard deviation.
In particular, the mean of the normal distribution in the next iteration will be the average of the gates that gave the best performances, and the standard deviation will be the standard deviation of these gates. Of course, the results obtained for the gates are continuous values. We feed these continuous values into the CE method with the discrete distribution (developed in Section 2.3), generating the gate of each activity as one of the two following values: the floor and the ceiling of the continuous gate obtained by the algorithm.
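The continuous-variant loop just described (sample gate vectors from a normal distribution, score each by Monte-Carlo simulation, refit mean and standard deviation on the elite sample) can be sketched as follows. This is a minimal illustration, not the authors' code: the cost rates, exponential durations, sample sizes and elite fraction are all assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 5, 20.0                       # number of activities, due date
h = np.full(n, 1.0)                  # holding cost rates (assumed)
p = np.full(n, 4.0)                  # shortage cost rates (assumed)
mean_dur = np.full(n, 3.0)           # expected activity durations (assumed)

def project_cost(g, y):
    """Cost of one duration realization y under gate vector g (serial project)."""
    t = np.empty(n)
    t[0] = g[0]
    for k in range(1, n):
        t[k] = max(g[k], t[k - 1] + y[k - 1])   # t_k = max{g_k, t_{k-1}+y_{k-1}}
    g_next = np.append(g[1:], d)                # g_{n+1} = d
    early = np.maximum(g_next - t - y, 0.0)     # (g_{k+1} - t_k - y_k)^+
    late = np.maximum(t + y - g_next, 0.0)      # (t_k + y_k - g_{k+1})^+
    return float(np.sum(h * early + p * late))

def expected_cost(g, samples=100):
    ys = rng.exponential(mean_dur, size=(samples, n))
    return np.mean([project_cost(g, y) for y in ys])

mu = np.cumsum(np.append(0.0, mean_dur[:-1]))   # initial means: early start times
sigma = np.full(n, d / 6.0)                     # initial standard deviation d/6
for _ in range(15):
    G = rng.normal(mu, sigma, size=(60, n))     # sample N = 60 gate vectors
    costs = np.array([expected_cost(g) for g in G])
    elite = G[np.argsort(costs)[:6]]            # keep the 10% best vectors
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
print(np.round(mu, 2))                          # converged gate vector
```

Rounding each converged mean to its floor or ceiling then yields the discrete gates used in Section 2.3.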


2.6. Computational study

We checked the performance of the developed algorithms on projects with up to 50 activities. We generated the networks with ProGen (see Kolisch et al., 1995) and added to them random values for the due date, the costs, and the parameters of the duration distributions. To study the performance of the CE algorithm, we compare it to three other heuristic methods:

▪ Early Start (ES). In this method, the gates are determined according to the early start times. Therefore, g1 = 0 and gk = max_{j∈Pk} {gj + E(Yj)} for all k = 2,…,n.

▪ Late Start (LS). In this method, the gates are determined according to the late start times with respect to the due date of the project. Therefore, gn+1 = d and gk = min_{j∈Sk} {gj − E(Yk)} for all k = 1,…,n.

▪ Random gate. In this method, we generate a vector of gates randomly. We run this method for the same CPU time that the CE algorithm took to converge, and choose the vector of gates that yielded the lowest cost.

The algorithms based on the CE method developed in this paper allowed us to solve relatively large problems in a reasonable time. In all the examples, they gave the best performance, i.e., the lowest costs, surpassing the other methods.
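The ES and LS reference heuristics are one-pass recursions over the network; a sketch for a serial example (the activity numbering convention, the predecessor/successor sets P and S, and the data are ours):

```python
def es_gates(n, P, E_Y):
    """Early Start gates: g_1 = 0, g_k = max_{j in P_k} (g_j + E[Y_j]).
    Activities 1..n are assumed topologically numbered (our convention)."""
    g = {1: 0.0}
    for k in range(2, n + 1):
        g[k] = max(g[j] + E_Y[j] for j in P[k])
    return g

def ls_gates(n, S, E_Y, d):
    """Late Start gates: g_{n+1} = d, g_k = min_{j in S_k} (g_j - E[Y_k])."""
    g = {n + 1: d}
    for k in range(n, 0, -1):
        g[k] = min(g[j] - E_Y[k] for j in S[k])
    return g

E_Y = {1: 2.0, 2: 3.0, 3: 4.0}           # expected durations (invented)
P = {2: [1], 3: [2]}                     # predecessor sets
S = {1: [2], 2: [3], 3: [4]}             # successor sets (4 = project end)
print(es_gates(3, P, E_Y))               # {1: 0.0, 2: 2.0, 3: 5.0}
print(ls_gates(3, S, E_Y, 10.0))         # {4: 10.0, 3: 6.0, 2: 3.0, 1: 1.0}
```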

3. Concluding Remarks

This paper deals with the stochastic project scheduling problem through a gating approach. The objective is to determine the gate of each activity in order to minimize the expected holding and shortage costs. For this purpose, we chose the Cross-Entropy method. We first applied the method to serial and non-serial projects using a discrete distribution on small examples. Then, we applied the CE method using a continuous distribution on larger examples. To check the performance of the developed algorithms, we compared them to three simple heuristic methods: Early Start (ES), Late Start (LS) and Random gate. In all the examples, the algorithms based on the CE method developed in this paper gave the best performance, i.e., the lowest costs.

References

Goldratt, E.M. (1997). Critical Chain. North River Press, Great Barrington, MA.
Fazar, W. (1959). Program evaluation and review technique. American Statistician, 13, 2, p. 10.
Kolisch, R., A. Sprecher and A. Drexl (1995). Characterization and generation of a general class of resource-constrained project scheduling problems. Management Science, 41, 10, 1693-1703.
Krishnamoorthy, M. (1968). Critical path method: a review. Technical report, Michigan Univ., Dept. of Industrial Engineering, Ann Arbor, MI, US.
Kroese, D.P., R.Y. Rubinstein and S. Porotsky (2006). The Cross-Entropy Method for Continuous Multi-extremal Optimization. Methodology and Computing in Applied Probability, 8, 383-407.
Rand, G.K. (2000). Critical chain: the theory of constraints applied to project management. International Journal of Project Management, 18, 173-177.
Rubinstein, R.Y. and D.P. Kroese (2004). The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation, and Machine Learning. Springer, New York.
Trietsch, D. (2006). Optimal feeding buffers for projects or batch supply chains by an exact generalization of the newsvendor result. International Journal of Production Research, 44, 4, 627-637.
Zhan, J. (1994). Heuristics for scheduling resource-constrained projects in MPM networks. European Journal of Operational Research, 76, 1, 192-205.


A New Approach to the Project Scheduling Problem with Generalized Precedence Relations

Lucio Bianco1, Massimiliano Caramia1
1 Dipartimento di Ingegneria dell'Impresa, Università di Roma "Tor Vergata", Via del Politecnico, 1 - 00133 Roma, Italy. e-mail: bianco,[email protected]

Keywords: Generalized precedence relationships, lower bound, resource constraints, time-lag.

1. Background

In project scheduling without resource constraints and with strict finish-to-start relationships, one can define an acyclic network whose nodes are the activities and whose arcs are the precedence constraints (AON network), and compute the minimum project duration as the length of the critical path, i.e., the longest path from the initial activity to the final activity in such an activity network (see, e.g., Moder et al., (1983); Radermacher, (1985)). Under this assumption, the computation of the critical path can be accomplished by means of the well known forward and backward pass recursion algorithm (see, e.g., Kelley, (1963)), which allows the calculation of the earliest and the latest starting time of each activity. The computational complexity of this algorithm is O(m), where m is the number of arcs of the network. However, in a project it is often necessary to specify other kinds of temporal constraints. Following Elmaghraby and Kamburowski (1992), we denote such constraints as Generalized Precedence Relations (GPRs). GPRs allow one to model minimum and maximum time-lags between a pair of activities (see, e.g., Dorndorf, (2002); Neumann et al., (2002)). Four types of GPRs can be distinguished: Start-to-Start (SS), Start-to-Finish (SF), Finish-to-Start (FS) and Finish-to-Finish (FF). A minimum time-lag (SSijmin(δ), SFijmin(δ), FSijmin(δ), FFijmin(δ)) specifies that activity j can start (finish) only if its predecessor i has started (finished) at least δ time units before. Analogously, a maximum time-lag (SSijmax(δ), SFijmax(δ), FSijmax(δ), FFijmax(δ)) imposes that activity j should be started (finished) at most δ time units beyond the starting (finishing) time of activity i. GPRs can be represented in a so-called standardized form by transforming them, e.g., into minimum Start-to-Start precedence relationships by means of the so-called Bartusch et al.'s transformations (Bartusch et al., (1988)).
Thus, applying such transformations to a given AON activity network with GPRs leads to a standardized activity network where each arc carries a label lij representing the time-lag between the two activities i and j (De Reyck, (1998)). If more than one time-lag lij between i and j exists, only the largest lij is considered. It should be noted that standardized networks, differently from networks without GPRs, may contain cycles. As a consequence, a Resource Unconstrained Project Scheduling Problem (RUPSP) with GPRs and minimum makespan objective:
• cannot be solved by the forward and backward recursion algorithm;
• can be solved by computing the longest path from the source node to the sink node using an algorithm with a worst case complexity O(nm), where n is the number of activities;
• can admit directed cycles of positive length, which means that the project has no feasible schedule.
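The longest-path computation on a possibly cyclic standardized network can be sketched with a label-correcting scheme in O(nm): if labels still improve after n−1 rounds, a positive-length cycle exists and the project is infeasible. The node numbering and data layout below are our own.

```python
def earliest_starts(n, arcs):
    """Longest-path (earliest-start) computation on a standardized network.

    arcs: list of (i, j, lag) meaning s_j >= s_i + lag; node 0 is the dummy
    source.  Label-correcting with at most n-1 rounds, O(n*m); returns None
    if a positive-length cycle makes the project infeasible."""
    s = [float("-inf")] * n
    s[0] = 0.0
    for _ in range(n - 1):
        changed = False
        for i, j, lag in arcs:
            if s[i] + lag > s[j]:
                s[j] = s[i] + lag
                changed = True
        if not changed:
            return s
    for i, j, lag in arcs:          # still improving after n-1 rounds: positive cycle
        if s[i] + lag > s[j]:
            return None
    return s

print(earliest_starts(3, [(0, 1, 3.0), (0, 2, 2.0), (1, 2, 4.0)]))  # [0.0, 3.0, 7.0]
print(earliest_starts(2, [(0, 1, 1.0), (1, 0, 1.0)]))               # None
```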

2. A network representation of RUPSP with GPRs

In Bianco and Caramia (2007a) we proposed a network formulation of RUPSP with GPRs to minimize the completion time. It is based on a new formulation such that the standardized network obtained is always acyclic, since the precedence relationships between each pair of activities are only of the Finish-to-Start type with zero time-lags. In order to understand the proposed model, consider an acyclic AON network like the one in Figure 1, representing a given project, where the labels on the nodes denote the activity durations while the labels on the arcs encode time-lags.

Figure 1. A network with GPRs

Without loss of generality we assume that a project network has one (single) dummy source node and one (single) dummy sink node. If all the GPRs are transformed into SSijmin type precedence relationships by means of Bartusch et al.'s transformations, we obtain the standardized network in Figure 2, where the label on a generic arc (i,j) is the Start-to-Start minimum time-lag between i and j, and the label on the generic node i is the duration of the corresponding activity i (note that the network contains a cycle).

Figure 2. The AON standardized network and its critical path (bold arcs)

Given an AON network with GPRs, let us now examine whether it can be transformed into an AON acyclic network with all constraints of the Finish-to-Start type and zero time-lag. To this aim let us consider all GPRs between two activities i and j. They have the following form:

si + SSijmin ≤ sj ≤ si + SSijmax
si + SFijmin ≤ fj ≤ si + SFijmax
fi + FSijmin ≤ sj ≤ fi + FSijmax
fi + FFijmin ≤ fj ≤ fi + FFijmax

where si (sj) denotes the starting time, and fi (fj) the finishing time, of activity i (j). These relations can be transformed into FSij constraints by means of the following transformation rules:

si + SSijmin ≤ sj → fi + lij ≤ sj, with lij = −di + SSijmin
sj ≤ si + SSijmax → sj ≤ fi + lij, with lij = −di + SSijmax


si + SFijmin ≤ fj → fi + lij ≤ sj, with lij = −di − dj + SFijmin
fj ≤ si + SFijmax → sj ≤ fi + lij, with lij = −di − dj + SFijmax
fi + FSijmin ≤ sj → fi + lij ≤ sj, with lij = FSijmin
sj ≤ fi + FSijmax → sj ≤ fi + lij, with lij = FSijmax
fi + FFijmin ≤ fj → fi + lij ≤ sj, with lij = FFijmin − dj
fj ≤ fi + FFijmax → sj ≤ fi + lij, with lij = FFijmax − dj
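The eight standardization rules can be collected in a small lookup table; a sketch with our own naming (the first spot-check reproduces the lag −1 that, in the worked example of this paper, yields the bound d4′ ≥ −1 from FF45min(3) with d5 = 4):

```python
def fs_lag(kind, bound, delta, d_i, d_j):
    """Lag l_ij of the FS-type constraint replacing a GPR of the given kind
    ('SS', 'SF', 'FS', 'FF') and bound ('min' or 'max') with time-lag delta,
    per the transformation rules above; d_i, d_j are the activity durations."""
    rules = {
        ("SS", "min"): -d_i + delta, ("SS", "max"): -d_i + delta,
        ("SF", "min"): -d_i - d_j + delta, ("SF", "max"): -d_i - d_j + delta,
        ("FS", "min"): delta, ("FS", "max"): delta,
        ("FF", "min"): delta - d_j, ("FF", "max"): delta - d_j,
    }
    return rules[(kind, bound)]

print(fs_lag("FF", "min", 3, 2, 4))   # 3 - 4 = -1
print(fs_lag("SF", "max", 5, 2, 3))   # -2 - 3 + 5 = 0
```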

Now, it is possible to define a new AON network where, between each pair of nodes (i,j) of the original network related to a temporal constraint, we insert a dummy activity (dummy node) i′ whose duration di′ satisfies di′ ≥ lij in the case of a minimum time-lag, and di′ ≤ lij in the case of a maximum time-lag. Note that lij in the former case is a lower bound on the value of di′, and in the latter case is an upper bound on its value. This new network will have only precedence relationships of the FS type with zero time-lags, since the latter are embedded in the durations di′ associated with the dummy activities. It follows that the resulting graph is acyclic, and the problem is that of evaluating all the di′ values for which either lower or upper bounds are a priori known. Referring to the previous example, the transformed AON network is represented in Figure 3, where, based on the proposed transformation, d1′ ≥ 0, d2′ ≥ 2, d2′′ ≤ 4, d2′′′ ≥ 0, d3′ ≥ −3, d4′ ≥ −1, d5′ ≥ 0, and the minimum completion time of the project is given by the earliest starting time of node 6. This example can be easily generalized.

Figure 3. The transformed acyclic network

With this network we associate a linear mathematical programming formulation whose dual formulation offers optimality conditions with which the computation of the minimum completion time can be done in O(m) time complexity. We prove that the optimization problem underlying the dual formulation is that of finding an augmenting path of longest length on a unit capacity network.

3. The RCPSP with GPRs: a lower bound

We also studied the problem with resource constraints, which is known to be NP-hard. De Reyck and Herroelen (1998) presented an exact branch-and-bound algorithm for the project scheduling problem with resource constraints and GPRs. Two other exact algorithms are present in the literature (see Bartusch et al., (1988); Demeulemeester and Herroelen, (1997)), but they refer only to the so-called precedence diagramming case, i.e., the one in which GPRs are only of minimum type. Lower bounds are also available for this problem. In particular, two classes of lower bounds are known in the literature, i.e., constructive and destructive lower bounds. The first class is formed by those lower bounds associated with relaxations of the mathematical formulation of the problem. If, on the one hand, we relax the resource constraints, the lower bound is the critical path, i.e., the optimal solution of the corresponding RUPSP; if, on the other hand, one relaxes the precedence relationships and solves the mathematical model to optimality, one obtains another constructive lower bound, i.e.,

max_{k∈R} ∑_{i=1}^{n} rik di / Rk

where R is the set of resource types, Rk is the availability of a generic resource type, and rik is the amount of resource type Rk required by activity i. Destructive lower bounds, instead, are obtained by means of an iterated binary-search-based routine, as reported e.g. in Klein and Scholl (1999). For the project scheduling problem with GPRs and scarce resources, we exploited the network model proposed for RUPSP and tried to get rid of the resource constraints. We restricted our analysis to those pairs of activities for which a GPR exists, to determine a lower bound on the minimum makespan. On these pairs we verified whether the resource constraints were active or not and, in case of a positive answer, we proved some results with which the problem reduces to a new RUPSP with different lags and additional disjunctive constraints. With this problem one can associate an integer linear program whose linear relaxation can be solved by means of a network flow approach, exactly as for the problem without resource constraints (Bianco and Caramia, (2007b)). For both problems, i.e., with and without scarce resources, computational results confirmed a better practical performance of the proposed method with respect to the competing ones.
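The constructive resource-based bound above is a one-liner; a sketch with invented data (the ceiling is our addition, valid when the makespan is integral):

```python
import math

def resource_lower_bound(durations, req, avail):
    """max over resource types k of sum_i r_ik * d_i / R_k, precedence relaxed.

    durations[i]: d_i; req[k][i]: r_ik; avail[k]: availability R_k."""
    return max(math.ceil(sum(r * d for r, d in zip(req[k], durations)) / avail[k])
               for k in avail)

d = [3, 2, 4]                                # activity durations (invented)
req = {"R1": [2, 1, 1], "R2": [0, 2, 3]}     # per-activity requirements
avail = {"R1": 2, "R2": 4}                   # resource availabilities
print(resource_lower_bound(d, req, avail))   # max(ceil(12/2), ceil(16/4)) = 6
```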

References

Bartusch, M., R.H. Möhring and F.J. Radermacher (1988). Scheduling Project Networks with Resource Constraints and Time Windows. Annals of Operations Research, 16, 201-240.
Bianco, L. and M. Caramia (2007a). A New Formulation of the Resource-Unconstrained Project Scheduling Problem with Generalized Precedence Relations to Minimize the Completion Time. Technical Report DII, University of Rome "Tor Vergata", submitted.
Bianco, L. and M. Caramia (2007b). A New Lower Bound for the Resource-Constrained Project Scheduling Problem with Generalized Precedence Relationships. Technical Report DII, University of Rome "Tor Vergata".
Demeulemeester, E.L. and W.S. Herroelen (2002). Project Scheduling: A Research Handbook. Kluwer Academic Publishers, Boston.
Demeulemeester, E.L. and W.S. Herroelen (1997). A branch-and-bound procedure for the generalized resource-constrained project scheduling problem. Operations Research, 45, 201-212.
De Reyck, B. (1998). Scheduling Projects with Generalized Precedence Relations: Exact and Heuristic Approaches. Ph.D. Thesis, Department of Applied Economics, Katholieke Universiteit Leuven, Leuven, Belgium.
De Reyck, B. and W. Herroelen (1998). A branch-and-bound procedure for the resource-constrained project scheduling problem with generalized precedence relations. European Journal of Operational Research, 111 (1), 152-174.
Dorndorf, U. (2002). Project Scheduling with Time Windows. Physica-Verlag.
Elmaghraby, S.E.E. and J. Kamburowski (1992). The Analysis of Activity Networks under Generalized Precedence Relations (GPRs). Management Science, 38 (9), 1245-1263.
Franck, B.K., K. Neumann and C. Schwindt (2001). Truncated branch-and-bound, schedule-construction, and schedule improvement procedures for resource-constrained project scheduling. OR Spektrum, 23, 297-324.
Kelley, J.E. (1963). The critical path method: Resource planning and scheduling. In Industrial Scheduling (J.F. Muth and G.L. Thompson, eds.), pp. 347-365. Prentice Hall, N.J.
Klein, R. and A. Scholl (1999). Computing lower bounds by destructive improvement: An application to resource-constrained project scheduling. European Journal of Operational Research, 112, 322-346.
Moder, J.J., C.R. Phillips and E.W. Davis (1983). Project Management with CPM, PERT and Precedence Diagramming. Van Nostrand Reinhold Company, Third Edition.
Neumann, K., C. Schwindt and J. Zimmermann (2002). Project Scheduling with Time Windows and Scarce Resources. Lecture Notes in Economics and Mathematical Systems 508, Springer.
Radermacher, F.J. (1985). Scheduling of project networks. Annals of Operations Research, 4, 227-252.


Solving a Permutation Flow Shop Problem with Blocking and Transportation Delays

Jacques Carlier1, Mohamed Haouari2, Mohamed Kharbeche2, Aziz Moukrim1
1 UMR CNRS 6599 Heudiasyc, Centre de Recherches de Royallieu, Université de Technologie de Compiègne, e-mail: jacques.carlier, [email protected]
2 ROI - Combinatorial Optimization Research Group, Ecole Polytechnique de Tunisie, 2078 La Marsa, Tunisie, e-mail: [email protected], [email protected]

Keywords: Permutation flow shop, blocking, time delays, branch-and-bound.

1. Problem definition

We are given a job set J = {1, 2,…, n} where each job has to be processed non-preemptively on m machines M1, M2,…, Mm in that order. The processing time of job j on machine Mi is pij. At time t = 0, all jobs are available at an input device denoted by M0. After completion, each job must be taken from Mm to an output device denoted by Mm+1 (for convenience, we set pm+1,j = 0, ∀j ∈ J). The transfer between machines Mi and Mk (i, k = 0,…, m+1) is performed by means of a single robot and takes τik units of time. The machines have no input or output buffering facilities. Consequently, after processing a job j on machine Mi (i = 1,…, m), the latter remains blocked until the robot picks j and transfers it to Mi+1. Such a move can only be performed if machine Mi+1 is free (that is, no job is being processed by or is waiting at Mi+1). At any time, each machine can process at most one job and each job can be processed on at most one machine. Moreover, the robot can transfer at most one job at any time. The problem is to find a processing order of the n jobs, the same for each machine (because of the blocking constraint, passing is not possible), such that the time Cmax at which all the jobs are completed (the makespan) is minimized. In the sequel, we partially relax the constraint requiring that the robot can transfer at most one job at any time, and we propose to investigate the flow shop problem with blocking and transportation delays. This problem can be viewed as a generalization of the much studied flow shop with blocking (Ronconi, 2005). An overview of the literature on flow shop scheduling with blocking can be found in Hall and Sriskandarajah (1996).

2. Lower bounds

2.1. One-machine based lower bounds

As a consequence of the transportation delays, the minimum elapsed time on machine Mi between the completion of a job j and the starting of a job k is

δi = τi,i+1 + τi+1,i−1 + τi−1,i,  i = 1,…,m. (1)

Also, by setting for each job j ∈ J and each machine Mi (i = 1,…,m):

▪ a head rij = ∑_{k=1}^{i−1} pkj + ∑_{k=0}^{i−1} τk,k+1 if i > 1, and r1j = τ01,
▪ a tail qij = ∑_{k=i+1}^{m} pkj + ∑_{k=i}^{m} τk,k+1 if i < m, and qmj = τm,m+1,

a simple lower bound is

LB1 = max_{1≤i≤m} { min_{1≤j≤n} rij + ∑_{j=1}^{n} pij + (n−1)δi + min_{1≤j≤n} qij }. (2)
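Bound (2), together with the heads, tails and delays of (1), can be computed directly; a sketch (the data layout and the toy instance are ours):

```python
def lb1(m, n, p, tau):
    """One-machine bound LB1 of (2).  p[i][j]: processing times (machines
    1..m, jobs 0..n-1); tau[(a, b)]: robot travel time between devices a
    and b (M0 = input, M_{m+1} = output).  Data layout is our choice."""
    # delta_i = tau_{i,i+1} + tau_{i+1,i-1} + tau_{i-1,i}   -- equation (1)
    delta = {i: tau[(i, i + 1)] + tau[(i + 1, i - 1)] + tau[(i - 1, i)]
             for i in range(1, m + 1)}
    # heads r_ij and tails q_ij
    r = {(i, j): sum(p[k][j] for k in range(1, i))
                 + sum(tau[(k, k + 1)] for k in range(0, i))
         for i in range(1, m + 1) for j in range(n)}
    q = {(i, j): sum(p[k][j] for k in range(i + 1, m + 1))
                 + sum(tau[(k, k + 1)] for k in range(i, m + 1))
         for i in range(1, m + 1) for j in range(n)}
    return max(min(r[i, j] for j in range(n))
               + sum(p[i][j] for j in range(n)) + (n - 1) * delta[i]
               + min(q[i, j] for j in range(n))
               for i in range(1, m + 1))

p = {1: [2, 3], 2: [4, 1]}                               # invented instance
tau = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (2, 0): 1, (3, 1): 1}
print(lb1(2, 2, p, tau))   # 13
```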

Actually, we can derive a better bound by observing that if job h is scheduled immediately after job j, then the minimum elapsed time on machine Mi (i = 2,…,m) between the completion of j and the starting of h is given by:

sijh = max( pij + τi,i+1 + τi+1,i−1, pi−1,h + τi,i−2 + τi−2,i−1 ) − pij + τi−1,i,  i = 2,…,m. (3)

Now, define δij = min_{h≠j} sijh, i = 2,…,m, j = 1,…,n (≡ minimum elapsed time on Mi after the completion of j), and let δi[l] denote the l-th smallest value of δij (j = 1,…,n). Then we get the lower bound

LB2 = max_{2≤i≤m} { min_{1≤j≤n} rij + ∑_{j=1}^{n} pij + ∑_{l=1}^{n−1} δi[l] + min_{1≤j≤n} qij }. (4)

Clearly, a valid relaxation is a one-machine problem with heads, tails, and setup times, 1|rj, qj, sjk|Cmax. A relaxation of this problem is a 1|rj, qj|Cmax obtained by setting rj′ = rij, pj′ = pij + δij, and qj′ = qij − δij. Hence, a third lower bound is obtained by allowing preemption. For each machine Mi (i = 2,…,m), let LB3i denote the makespan of the corresponding optimal preemptive schedule. Then a valid lower bound is:

LB3 = max_{2≤i≤m} LB3i. (5)
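The preemptive relaxation 1|rj, qj, pmtn|Cmax can be solved exactly by the classical rule "always run the available job with the largest tail"; a sketch (our implementation, not the authors'):

```python
import heapq

def preemptive_makespan(jobs):
    """Optimal makespan of the preemptive relaxation 1|r_j, q_j, pmtn|Cmax:
    at every instant, run the available job with the largest tail, preempting
    when a new job is released.  jobs: list of (r_j, p_j, q_j) triples."""
    jobs = sorted(jobs)                        # by release date r_j
    heap, t, i, best, n = [], 0, 0, 0, len(jobs)
    while i < n or heap:
        if not heap and t < jobs[i][0]:
            t = jobs[i][0]                     # idle until the next release
        while i < n and jobs[i][0] <= t:
            r, p_, q = jobs[i]
            heapq.heappush(heap, (-q, p_))     # max-tail first
            i += 1
        neg_q, p_ = heapq.heappop(heap)
        run = p_ if i >= n else min(p_, jobs[i][0] - t)
        t += run
        if run < p_:
            heapq.heappush(heap, (neg_q, p_ - run))   # preempted remainder
        else:
            best = max(best, t - neg_q)        # completion time + tail
    return best

print(preemptive_makespan([(0, 3, 2), (1, 2, 5)]))   # 8
```

Here the makespan 8 is tight: the second job alone forces r + p + q = 1 + 2 + 5 = 8.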

A further relaxation of 1|rj, qj, sjk|Cmax is obtained by setting all heads and tails to min_{j∈J} rj and min_{j∈J} qj, respectively. The resulting relaxation is equivalent to finding a shortest Hamiltonian path in a complete directed graph, where the nodes represent the jobs and the distance matrix is (sijk). We can transform this problem into an equivalent asymmetric traveling salesman problem (ATSP) by adding to the graph a dummy node n+1 and dummy zero-cost arcs (j, n+1) and (n+1, j), for j = 1,…,n. For a given machine Mi, let ziATSP denote the value of the shortest cycle. Since solving this relaxed problem is NP-hard, we compute a lower bound on ziATSP. In our implementation, we have derived a tight lower bound LBiATSP by solving an enhanced linear programming ATSP formulation which is based on assignment constraints as well as lifted Miller-Tucker-Zemlin subtour elimination constraints (Desrochers and Laporte, 1991). The resulting lower bound is

LB4 = max_{2≤i≤m} { min_{1≤j≤n} rij + LBiATSP + min_{1≤j≤n} qij }. (6)

2.2. A two-machine based lower bound

First, we consider the special case where m = 2. We shall prove that the problem is polynomially solvable. Given a permutation σ of the n jobs with a corresponding makespan Cmax, we denote by Si,σ(j) the start time on machine Mi (i = 1, 2) of the job at position j (j = 1,…,n).


The time interval [0, Cmax] can be partitioned into 2n+1 sub-intervals I1, J1, I2, J2,…, In, Jn, In+1, where I1 = [0, S2,σ(1) − τ12], Ij = [S2,σ(j−1), S2,σ(j) − τ12] for j = 2,…,n, In+1 = [S2,σ(n), Cmax], and Jj = [S2,σ(j) − τ12, S2,σ(j)] for j = 1,…,n. Note that:

▪ During each interval Ij = [S2,σ(j−1), S2,σ(j) − τ12] (j = 2,…,n), either machine M1 or machine M2 is blocked for

λσ(j−1),σ(j) = |(p2,σ(j−1) + τ23 + τ31) − (p1,σ(j) + τ20 + τ01)| units of time. (7)

▪ During the interval I1 = [0, S2,σ(1) − τ12], machine M2 remains idle for (τ12 + p1,σ(1)) units of time.
▪ During the interval In+1 = [S2,σ(n), Cmax], machine M1 remains idle for (p2,σ(n) + τ23) units of time.

Now, for each machine Mi (i = 1, 2), denote by Pi, Ti, and Wi the total processing time, transportation time from and to Mi, and waiting time, respectively. We have:

P1 = ∑_{j=1}^{n} p1,σ(j) and P2 = ∑_{j=1}^{n} p2,σ(j)

T1 = τ01 + (n−1)(τ20 + τ01) + n·τ12 and T2 = n·(τ12 + τ23) + (n−1)·τ31

W1 + W2 = ( ∑_{j=2}^{n} λσ(j−1),σ(j) ) + (p2,σ(n) + τ23) + (τ01 + p1,σ(1))

Now, since Pi + Ti + Wi = Cmax for i = 1, 2, we get

∑_{j=1}^{n} p1,σ(j) + ∑_{j=1}^{n} p2,σ(j) + T1 + T2 + ∑_{j=2}^{n} λσ(j−1),σ(j) + (p2,σ(n) + τ23) + (τ01 + p1,σ(1)) = 2Cmax. (8)

It follows that minimizing the makespan amounts to minimizing

∑_{j=2}^{n} λσ(j−1),σ(j) + (p2,σ(n) + τ23) + (τ01 + p1,σ(1)). (9)

Setting aj = p2j + τ23 + τ31 and bj = p1j + τ20 + τ01 for j = 1,…,n, and a0 = τ20, b0 = τ31, we get

λjk = |aj − bk|,  ∀j ≠ k, j, k = 0,…,n. (10)

The problem defined by (9) amounts to finding a permutation σ = (σ(0), σ(1),…, σ(n)) with σ(0) ≡ 0 such that ∑_{j=1}^{n} λσ(j−1),σ(j) + λσ(n),σ(0) is minimized. Clearly, this is a Traveling Salesman Problem with a Gilmore and Gomory distance matrix. This problem is solvable in polynomial time (Gilmore and Gomory, 1964). This result is a generalization of a previous similar result by Reddi and Ramamoorthy (1972) for a no-wait two-machine flow shop problem.
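The Gilmore-Gomory matrix of (10) and objective (9) can be checked on a toy instance; the sketch below only evaluates the tour cost of a given permutation (the polynomial-time Gilmore-Gomory algorithm itself is not reproduced, and the data are invented):

```python
def tour_cost(sigma, p1, p2, tau):
    """Objective (9) for permutation sigma, written as the cost of the closed
    tour 0 -> sigma(1) -> ... -> sigma(n) -> 0 in the matrix
    lambda_jk = |a_j - b_k| of (10); node 0 is the dummy job."""
    a = {0: tau[2, 0], **{j: p2[j] + tau[2, 3] + tau[3, 1] for j in p2}}
    b = {0: tau[3, 1], **{j: p1[j] + tau[2, 0] + tau[0, 1] for j in p1}}
    nodes = [0] + list(sigma) + [0]
    return sum(abs(a[i] - b[k]) for i, k in zip(nodes, nodes[1:]))

p1 = {1: 2, 2: 3}                  # processing times on M1 (invented)
p2 = {1: 4, 2: 1}                  # processing times on M2 (invented)
tau = {(2, 0): 1, (0, 1): 1, (2, 3): 1, (3, 1): 1}
print(tour_cost((1, 2), p1, p2, tau))   # 6
print(tour_cost((2, 1), p1, p2, tau))   # 10
```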

Clearly, if m > 2, then we consider each pair of consecutive machines (Mi, Mi+1) (i = 1,…,m−1) and solve the corresponding two-machine problem. Let LBi5 denote the optimal makespan. Then a valid lower bound is

LB5 = max_{1≤i≤m−1} { min_{1≤j≤n} rij + LBi5 + min_{1≤j≤n} qi+1,j }. (11)

3. Exact Branch-and-Bound

We have implemented a branch-and-bound algorithm based on the proposed lower bounds. We will present extensive computational results on randomly generated instances, which show that instances with up to 20 jobs and 4 machines can be solved to optimality.

References

[1] Desrochers, M. and Laporte, G. (1991). Improvements and extensions to the Miller-Tucker-Zemlin subtour elimination constraints. Operations Research Letters, 10, 27-36.
[2] Gilmore, P.C. and Gomory, R.E. (1964). Sequencing a one state-variable machine: a solvable case of the traveling salesman problem. Operations Research, 12, 655-679.
[3] Hall, N.G. and Sriskandarajah, C. (1996). A survey of machine scheduling problems with blocking and no-wait in process. Operations Research, 44, 510-525.
[4] Reddi, S.S. and Ramamoorthy, C.V. (1972). On the flowshop sequencing problems with no-wait in process. Operational Research Quarterly, 23, 323-330.
[5] Ronconi, D.P. (2005). A branch-and-bound algorithm to minimize the makespan in a flowshop with blocking. Annals of Operations Research, 138, 53-65.


[Illegible abstract, pp. 43-45: the font used for this paper was lost during text extraction and its characters could not be recovered. The surviving fragments indicate a mixed-integer programming formulation with quantity, stock, workforce and timing variables (Q_a,p,t, S_z,p,t, TW_w,o,p,t, Ch_t, Move_d,t, T_d, DE_d, DT_d) and constraints numbered (1)-(24).]

 α( t (M axCht + M inCht ) × CU ch) + β( d∈ND (DEd × ECΔd ) + (DTd ×  T CΔd )) + γ p∈NP ((M Cp × (N U Ep,t + N DEp,t ) + LCp × (N U Tp,t + t∈NT N DTp,t )))

 &      ) +   & 7 & &   . 5     6        6  7     ) +  ) &    7 * 6 A % 3+ 7 * 7

8   7+ + 6  & 6  ) 8 6   &  6   + +      & )  & +6 + 7 *  ) 6   ) & % 46

PMS 2008, April 28-30, İstanbul, Turkey

 G%1+  J%J%  - K%% #""!' 3+ 8  + )   

*% 3        % 0I % #0 #!!% #%.+  % -  L% & %   6 1% #""C '   

* 7+     & 7 7 % . &  M     +  % 00 % !0C0% 0%/AF F %  

 J%1% G@@@'  + &  ) = 6         6  +  6%    3      

 +  % @C % 22@2@0% !%E6 % / %  %J%  + %.% #""  ' + & & )  7  +  6     

 %    J  )     

+  % !0 % 0#200I% %AA % #""  '  9   6      F&    &  8        % 3+F N    A%

PMS 2008, April 28-30, İstanbul, Turkey

47

Project Scheduling with Stochastic Activity Durations, Uncertain Activity Outcomes and Maximum-NPV Objective Stefan Creemers1 , Marc Lambrecht1 and Roel Leus1 1

Katholieke Universiteit Leuven, Belgium

e-mail: [email protected]

Keywords: Project scheduling, net present value, stochastic activity durations. 1 Introduction & problem description Project profitability is often measured by the project's net present value (NPV), the discounted value of the project's cash flows. Project NPV is affected by the project schedule, however, and in capital-intensive industries the timing of expenditures has a major impact on project feasibility and profitability. Scheduling projects to maximize NPV in a deterministic setting has been studied under a broad range of contractual arrangements and planning constraints, but often in practice there is significant uncertainty, especially regarding the durations of the activities. Time/resource trade-offs with stochastic activity durations, in which the resource allocation influences the mean and/or the variance of the durations, have been investigated by Burt (1977), Elmaghraby (1993), and Gerchak (2000), among others. In this text, we will not be concerned with resource allocation and assume that such decisions have already been made at a higher hierarchical decision level. On the other hand, we will incorporate the concept of activity success or failure, as suggested by De Reyck and Leus (n.d.) in the context of deterministic durations, and which positions this work especially within the context of Research-and-Development (R&D). Tilson et al. (2006) investigate project scheduling with stochastic activity durations to maximize expected NPV; the authors describe how to find an optimal dynamic policy from a finite set of scheduling policies (for definitions see infra). Buss and Rosenblatt (1997) also maximize expected NPV and additionally consider activity delays. Both Tilson et al. and Buss and Rosenblatt use the continuous-time Markov chain (CTMC) described by Kulkarni and Adlakha (1986) as a starting point for their algorithm.
With the same basis, we develop a dynamic program to determine an optimal dynamic scheduling policy for a project with stochastic activity durations and activity failures. The main contributions of our work are twofold: (1) we achieve a significant performance improvement compared to the existing models, allowing for the study of more general problem classes; and (2) we combine the concepts of activity failures and stochastic activity durations. A project consists of a set of activities N = {0, 1, . . . , n}, which are to be processed without interruption; we define Ni = N \{i} (i ∈ N ) and N0n = N \{0, n}. The duration Di of activity i is a random variable (r.v.); the vector (D0 , D1 , . . . , Dn ) is denoted by D. A is a (strict) partial order on N , representing technological precedence constraints. (Dummy) activities 0 and n represent the start and end of the project, respectively, and are the (unique) least and greatest element of the partially ordered set (N, A). P r[Di = 0] = 1 for i = 0, n; for the remaining


activities i ∈ N0n we assume that P r[Di < 0] = 0 (P r[e] represents the probability of event e). Each activity i ∈ Nn has a probability of technical success (PTS) pi ; we assume that p0 = 1 and consider the outcomes of the different tasks to be independent. Quantity ci represents the cost (cash outflow) of activity i ∈ Nn , which is a nonpositive integer; this cost is incurred at the start of the activity. Overall project success generates an end-of-project payoff C ≥ 0, which is received at the start of activity n. This final project payoff is only achieved when all activities are successful; consequently, the probability π of project success equals ∏i∈Nn pi . Information on activity success and duration becomes available only at the end of the activity. Without loss of generality, we assume that C is large enough for the project to be undertaken. Finally, in order to account for the time value of money, we define r to be the applicable continuous discount rate: the present value of a cash flow c incurred at time t equals ce^{−rt}. We use the lowercase vector d = (d0 , d1 , . . . , dn ) to represent one particular realization (or sample, or scenario) of D. For a given realization d, we can produce a schedule s, i.e., a vector of starting times (s0 , s1 , . . . , sn ) with si ≥ 0 for all i ∈ N . Schedule s is feasible if si + di ≤ sj for all (i, j) ∈ A. While a solution to a scheduling problem with deterministic durations usually takes the form of a deterministic schedule, the execution of a project with stochastic durations can best be seen as a dynamic decision process. A solution is a policy Π, which defines actions at decision times. Decision times are typically t = 0 (the start of the project) and the completion times of activities; a tentative next decision time can also be specified by the decision maker. An action can entail the start of a set of activities that is `feasible', meaning that a feasible schedule is constructed gradually through time.
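The two basic definitions of this paragraph, the feasibility test si + di ≤ sj and the discounting rule ce^{−rt}, can be sketched in a few lines; the project data below are invented for illustration and are not taken from the paper.

```python
import math

# Toy instance: 0 and 3 are the dummy start/end activities.
A = {(0, 1), (0, 2), (1, 3), (2, 3)}          # precedence relation A
d = {0: 0, 1: 4, 2: 2, 3: 0}                  # one realization of D
s = {0: 0, 1: 0, 2: 0, 3: 4}                  # candidate starting times

def feasible(s, d, A):
    """Schedule s is feasible iff s_i + d_i <= s_j for every (i, j) in A."""
    return all(s[i] + d[i] <= s[j] for (i, j) in A)

def present_value(c, t, r):
    """Discounted value of a cash flow c incurred at time t: c * e^(-r t)."""
    return c * math.exp(-r * t)

print(feasible(s, d, A))                       # True: 0+4<=4 and 0+2<=4 hold
print(round(present_value(100.0, 4, 0.05), 2)) # 81.87
```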

2 Model formulation & proposed algorithm The durations of the activities i ∈ N0n are mutually independent exponentially distributed r.v.s with mean 1/µi , µi > 0. At any time instant t, each activity's status is either idle, active or finished; we write Ωi (t) ∈ {0, 1, 2}, respectively, for i ∈ N . The state of the system is defined by the status of the individual activities and is represented by the vector Ω(t) = (Ω0 (t), Ω1 (t), . . . , Ωn (t)). State transitions take place each time an activity finishes (only then can new activities be started). The project's starting and finishing conditions are Ωi (0) = 0 and Ωi (t) = 2, ∀t ≥ T (∀i ∈ N ), where T indicates the project completion time. It can be shown that {Ω(t), t ≥ 0} is a CTMC on the state space Q, with Q containing all states of the system that can be visited by the transitions. Enumerating all possible states is not efficient, because typically the majority of the states do not satisfy the precedence constraints; efficiently constructing Q is key to the performance of any algorithm. When the objective function is the expected makespan, the early-start policy is always optimal; in this case, an upper bound on |Q| is 2^n. NPV, on the other hand, is a non-regular measure of performance: starting activities as early as possible is not necessarily optimal, and so |Q| ≤ 3^n. Tilson et al. (2006) develop a simple yet efficient algorithm to produce a set of possible states; this set contains Q but may be strictly larger. Additionally, to the best of our knowledge, all related studies in the literature reserve memory space to store the entire state space of the CTMC; Buss and Rosenblatt (1997) point out that some method of decomposition to reduce these memory requirements would allow for considerable efficiency enhancements. In what follows, we present an algorithm that considerably improves upon the storage and computational requirements of

earlier algorithms by means of efficient creation of Q and decomposition of the network of state transitions. Our algorithm consists of two main steps. In a first step, we distinguish subset-maximal sets of activities that are allowed to be executed in parallel; Kulkarni and Adlakha (1986) refer to these sets as uniformly directed cuts or UDCs. Let U = {U0 , U1 , . . . , U|U | } denote the set of UDCs. Logically, set U has the following properties:

(1) ∀U ∈ U, ∀{i, j} ⊆ U : (i, j) ∉ A ∧ (j, i) ∉ A;
(2) ∀U ∈ U, ∀V ⊊ U : V ∉ U.
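These two properties say that a UDC is a subset-maximal antichain of (N, A), which for a tiny instance can be checked by brute force; the precedence relation below is invented, and this enumeration is a sketch for illustration only (it is exponential, not the efficient construction the paper proposes).

```python
from itertools import combinations

# Brute-force enumeration of the UDCs (subset-maximal antichains) of a
# small, transitively closed precedence relation A on activities N.
N = [1, 2, 3, 4]
A = {(1, 3), (1, 4), (2, 4)}                  # made-up precedence pairs

def is_antichain(S):
    # property (1): no two members of S are comparable under A
    return all((i, j) not in A and (j, i) not in A
               for i, j in combinations(S, 2))

def udcs(N, A):
    anti = [set(S) for k in range(1, len(N) + 1)
            for S in combinations(N, k) if is_antichain(S)]
    # property (2): keep only the subset-maximal antichains
    return [S for S in anti if not any(S < T for T in anti)]

print(sorted(sorted(S) for S in udcs(N, A)))  # [[1, 2], [2, 3], [3, 4]]
```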

We associate with each element U of set U a rank number r(U ) = |{i ∈ N : ∃j ∈ U | (i, j) ∈ A}|, which counts the number of predecessor activities; the elements U are indexed in non-decreasing rank r(U ). In the second step of the algorithm, we compute the expected NPV for all states in the CTMC. Each UDC U can be associated with a set of states σ(U ) ⊂ Q in which one or more activities in U are active and the remaining activities in N \U are either idle or finished. Function σ() is such that σ(U) ≡ {σ(U1 ), . . . , σ(U|U | )} is a partition of Q: each state only appears in the corresponding UDC with highest index. Thanks to the memoryless property of the exponential distribution, the probability that activity i finishes first among the active activities is µi / ∑j∈N µj δj (t), where δj (t) equals unity if activity j is active at time t and 0 otherwise. When an activity finishes, we may decide to start one or more new activities; not starting any eligible activity is also possible if at least one activity remains active. The decision made determines the transition to the next state, and the (undiscounted) cost ci is incurred for each started activity i. The appropriate discount factor to be applied on entry of the next state at time t is µ(t)/(r + µ(t)), with µ(t) = ∑i∈N µi δi (t). We apply a backward dynamic-programming recursion to determine optimal decisions and associated expected-NPV values for each state. The algorithm is started in the highest-indexed UDC U|U | , and expected-NPV values for states associated with lower-ranked UDCs are stepwise computed. As the algorithm progresses, the states in higher-ranked UDCs will no longer be required for further computation and therefore the memory they occupy can be freed (whence the usefulness of the decomposition of the project network into different UDCs).

3 Performance The models currently available in the literature are able to solve project instances with up to 25 activities within reasonable time limits, with performance depending on the density of the network. More specifically, out of 30 randomly generated networks with 25 activities, Tilson et al. (2006) solve 29, 20 and 0 networks when densities amount to 75%, 50% and 25%, respectively. The algorithm proposed has been tested on the J10, J30 and J60 instance sets of the well-known PSPLIB dataset with appropriately generated additional data. In the table below, we report computational results for all J10 and J30 and a subset of J60 instances (specifically, the 161 highest-indexed instances). For J10, a comparison is included with a full-enumeration procedure. Further research is needed if high-quality solutions are to be developed for realistically-sized scheduling problems; we are convinced, however, that the insights and results provided in this paper can serve as guidelines in this process.


                     J10 enumeration   J10 algorithm   J30 algorithm   J60 algorithm
avg CPU time (sec)        41.741           0.0057          18.895           5,091
max CPU time (sec)        341.56           2.9027          442.25          39,014

References Burt J., 1977, Planning and dynamic control of projects under uncertainty, Management Science, Vol. 24, pp. 249-258. Buss A., M. Rosenblatt, 1997, Activity delay in stochastic project networks, Operations Research, Vol. 45, pp. 126-139. De Reyck B., R. Leus, n.d., R&D-project scheduling when activities may fail, IIE Transactions, to appear. Elmaghraby S., 1993, Resource allocation via dynamic programming in activity networks, European Journal of Operational Research, Vol. 64, pp. 199-215. Gerchak Y., 2000, On the allocation of uncertainty-reduction effort to minimize total variability, IIE Transactions, Vol. 32, pp. 403-407. Kulkarni V., V. Adlakha, 1986, Markov and Markov-regenerative PERT networks, Operations Research, Vol. 34, pp. 769-781. Tilson V., M. Sobel and J. Szmerekovsky, 2006, Scheduling projects with stochastic activity durations to maximize EPV, Technical Memorandum Number 812, Department of Operations, Weatherhead School of Management, Case Western Reserve University, Cleveland, Ohio.


[Pages 52-55: the prose of this abstract, on lower bounds for the single-machine scheduling problem 1|rj |Lmax (n jobs j with release dates rj , processing times pj and due dates dj , minimizing the maximum lateness Lmax = maxj Lj = maxj (Cj − dj ), where Cj is the completion time of job j), is illegible in the source scan owing to font-encoding corruption. The legible mathematical content comprises:

- a pairwise bound: for any two jobs k and l, Lk,l = min{rk + pk + pl − dl , rl + pl + pk − dk } ≤ Lmax (if k precedes l then Ll ≥ rk + pk + pl − dl ; otherwise Lk ≥ rl + pl + pk − dk );
- partial bounds L1 = max{L1a , L1b }, L2 = max{L2a , L2b }, L3 and L4 , obtained from Schrage-type schedules of job subsets with modified release dates, together with L5 = maxk≠l Lk,l ;
- an O(n log n) overall procedure: compute a Schrage-based bound LBJ , the bound L5 and a heuristic upper bound U BS ; if max{LBJ , L5 } = U BS then U BS is proven optimal and the procedure stops, otherwise L1 , L2 , L3 , L4 are computed and the bound max{LBJ , L5 , min{L1 , L2 , L3 , L4 }} is returned;
- experiments on instances generated in the manner of Carlier (1982), with n ranging from 50 to 1000, reporting the relative improvement Δ = 100 × (Γref − Γ)/Γref , where Γref = Lmax (S) − Lmax (OP T ) is the gap of the reference bound and Γ = max{LBJ , L5 , min{L1 , L2 , L3 , L4 }} − Lmax (OP T ); the result table (columns Δmin , Δavg , Δmax , N onOpt and N bImp for each n from 50 to 1000) is illegible.

The reference list, which includes Carlier (1982), "The one-machine sequencing problem", European Journal of Operational Research, is also largely illegible.]
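A minimal sketch of the legible ingredients, with invented job data (an illustration only, not the paper's algorithm): Schrage's rule (among released jobs, start the one with the earliest due date) yields a feasible schedule and hence an upper bound on Lmax, while the pairwise quantity min{rk + pk + pl − dl , rl + pl + pk − dk } is a valid lower bound; when the two values meet, the schedule is optimal.

```python
import heapq

def schrage(jobs):
    """Non-preemptive Schrage schedule for 1|r_j|L_max: at each decision
    point, start the released job with the earliest due date. Returns the
    maximum lateness of the resulting schedule (an upper bound)."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    ready, t, lmax, k = [], 0, float("-inf"), 0
    while k < len(order) or ready:
        while k < len(order) and jobs[order[k]][0] <= t:
            j = order[k]; k += 1
            heapq.heappush(ready, (jobs[j][2], j))  # key: due date d_j
        if not ready:                               # idle until next release
            t = jobs[order[k]][0]
            continue
        d, j = heapq.heappop(ready)
        t += jobs[j][1]                             # process job j
        lmax = max(lmax, t - d)
    return lmax

def pairwise_lb(jobs):
    """max over job pairs (k, l) of
    min{r_k + p_k + p_l - d_l, r_l + p_l + p_k - d_k}."""
    n = len(jobs)
    return max(min(jobs[k][0] + jobs[k][1] + jobs[l][1] - jobs[l][2],
                   jobs[l][0] + jobs[l][1] + jobs[k][1] - jobs[k][2])
               for k in range(n) for l in range(n) if k != l)

jobs = [(0, 3, 4), (1, 2, 5), (3, 4, 9)]            # (r_j, p_j, d_j), made up
print(schrage(jobs), pairwise_lb(jobs))             # 0 0: bound meets bound
```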

A Conflict Repairing Harmony Search Metaheuristic and its Application for Bi-objective Resource-constrained Project Scheduling Problems György Csébfalvi1, Oren Eliezer2, Blanka Láng3, Roni Levi4 1

University of Pécs, Hungary e-mail: [email protected]

2

Ort Braude Academic College of Engineering, Israel e-mail: [email protected] 3

Corvinus University of Budapest, Hungary e-mail: [email protected]

4

S & E Engineering and Project Management, Israel e-mail: [email protected]

Keywords: Project management, bi-objective resource-constrained project scheduling, heuristics, harmony search metaheuristic.

1.

Introduction

In this paper we present a conflict repairing harmony search metaheuristic for the resource-constrained project scheduling problem (RCPSP). Theoretically the optimal schedule searching process is formulated as a mixed integer linear programming (MILP) problem with big-M constraints, which can be solved for small-scale projects in reasonable time. The MILP formulation is based on the forbidden set concept. In the forbidden set oriented model (see Alvarez-Valdés and Tamarit (1993)) a resource-feasible schedule is represented by the set of the inserted conflict repairing relations. According to the implicit resource constraint handling, in this model the resource-feasibility is not affected by the feasible activity shifts (movements). In the time oriented model (see Pritsker, Watters, and Wolfe (1969)), a resource-feasible schedule is represented by the activity starting times. In this model, according to the explicit resource constraint handling, an activity movement may destroy the resource-feasibility. In order to illustrate the essence and viability of the proposed harmony search metaheuristic, we present computational results for three bi-objective resource-constrained project scheduling problems. In the presented problems, as a primary objective we minimize the project’s makespan, and as a secondary objective (1) we maximize the robustness of the schedule, where the robustness is measured by the total free float (TFF) of the schedule, (2) we identify a schedule that maximizes the net present value (NPV), or (3) we schedule the activities such that the hammock cost (HC) of the project is minimized. We tested our approach on the first 40 problems of benchmark set J30 generated with ProGen. In this set the number of non-dummy activities is 30. All problem instances require four resource types. The instance details are described by Kolisch et al. (1995). To generate the exact solutions a state-of-the-art MILP solver (CPLEX) was used with default settings.

2.

A conflict repairing harmony search metaheuristic

The presented harmony search metaheuristic is a “conflict repairing” version of the originally time oriented “Sounds of Silence” algorithm developed by Csébfalvi (2007) for the resource-constrained project scheduling problem (RCPSP). The central element of the algorithm is a serial forward list scheduling procedure with an unusual activity list generator borrowed from the original time oriented version. The conflict repairing version of the “Sounds of Silence” algorithm is based on the forbidden set concept. A forbidden activity set is identified such that: (1) all activities in the set may be executed concurrently, (2) the usage of some resource by these activities exceeds the resource availability, and (3) the set does not contain another forbidden set


as a proper subset. A resource conflict can be repaired explicitly by inserting a network feasible precedence relation between two forbidden set members, which will guarantee that not all members of the forbidden set can be executed concurrently. An inserted explicit conflict repairing relation (as a side effect) may repair one or more other conflicts implicitly at the same time. In the conflict repairing version, the primary variables are conflict repairing relations, and a solution will be a makespan minimal resource-feasible solution set, in which every movable activity can be shifted without affecting the resource feasibility. In the traditional “time oriented” model the primary variables are starting times, therefore an activity shift may destroy the resource feasibility. The makespan minimal solutions of the conflict repairing model are immune against the activity movements, so we can introduce a (not necessarily regular) secondary performance measure to select the “best” makespan minimal resource feasible solution from the generated solution sets. In the “Sounds of Silence” algorithm, according to the applied replacement strategy (whenever the algorithm obtains a solution superior to the worst solution of the current repertoire, the worst solution will be replaced by the better one), the quality of the population increases step by step. As the searching process progresses, the size of the makespan minimal subset of the population increases. The larger the makespan minimal subset size, the higher the chance to get a good solution for the secondary criterion. It is well known that the crucial point of the conflict repairing model is the forbidden set computation. In the conflict repairing “Sounds of Silence” algorithm the “conductor” uses a simple (but fast and effective) rule of thumb to decrease the time requirement of the forbidden set computation.
In the forward list scheduling process the conductor (without explicit forbidden set computation) inserts a precedence relation i → j between an already scheduled activity i and the currently scheduled activity j whenever they are connected without lag (Xi + Di = Xj ). The result of the forward list scheduling process will be an active schedule without “visible” conflicts. After that, the conductor (in exactly one step) repairs all of the hidden (invisible) conflicts, always inserting the “best” conflict repairing relation for each forbidden set. In this context “best” means a relation i → j between two forbidden set members for which the lag (Sj − Si − Di ) is maximal. In the language of music, the result of the conflict repairing process will be a robust (flexible) “Sounds of Silence” melody, in which the musicians have some freedom to enter the performance without affecting the esthetic value of the composition. Naturally, when we introduce a secondary criterion for which the esthetic value is a function of the starting times, the freedom of the musicians totally disappears. When a secondary objective is given, then in the improvisation phase the conductor first selects a promising makespan set, and after that selects from this set a schedule with a promising secondary criterion value. According to the conflict repairing nature of the algorithm, the evaluation of a secondary performance measure will usually be a simple task.
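The selection of the “best” repairing relation can be sketched as follows; F, S and D are hypothetical names for a forbidden set, the current start times and the durations, and for brevity the sketch omits the check that the inserted relation i → j is network feasible.

```python
def best_repair(F, S, D):
    """Pick the ordered pair (i, j) in forbidden set F maximizing the lag
    S[j] - S[i] - D[i]; i -> j is the relation to insert."""
    return max(((i, j) for i in F for j in F if i != j),
               key=lambda ij: S[ij[1]] - S[ij[0]] - D[ij[0]])

F = [1, 2, 3]
S = {1: 0, 2: 2, 3: 5}        # start times (made up)
D = {1: 4, 2: 3, 3: 2}        # durations (made up)
print(best_repair(F, S, D))   # (1, 3): lag 5 - 0 - 4 = 1 is maximal
```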

3.

The investigated bi-objective resource-constrained project scheduling problems

In this section we briefly describe the investigated bi-objective resource-constrained project scheduling problems. In these problems, as a primary objective we minimize the project’s makespan, and as a secondary objective (1) we maximize the robustness of the schedule, where the robustness is measured by the resource-constrained total free float (RCFF) of the schedule, (2) we identify a schedule that maximizes the resource-constrained net present value (RCNPV), or (3) we schedule the activities such that the resource-constrained total hammock cost (RCHC) of the project is minimized. 3.1. Maximization of the resource-constrained total free float The concept of float and criticality plays a central role in project management. However, the recent literature does not offer a general and useful measure for criticality (flexibility) in resource constrained projects. This paper presents a resource constrained total free float model to cope with this problem. The presented resource constrained total free float measure (RCFF) is defined as the sum of the free floats of the activities. The free float is defined as the amount of time that an activity can slip without delaying the start of its successors and while maintaining resource feasibility. In the proposed approach, a resource-constrained project is characterized by its "best" schedule,


where best means a makespan minimal resource constrained schedule for which the RCFF measure is maximal. Therefore, first we have to solve the resource-constrained makespan minimization problem. After that, fixing the makespan according to the makespan minimal resource-constrained solution, we have to solve the robustness maximization problem. Theoretically this problem can be formulated as a mixed integer linear programming (MILP) problem with big-M constraints, which can be solved for small-scale projects in reasonable time (Levi (2004)). It is important to note that in the harmony search algorithm the RCFF measure of a “repaired” schedule can be evaluated in polynomial time using the traditional CPM analysis. 3.2. Maximization of the resource-constrained net present value It is well known that NPV is an irregular performance measure. In the resource-constrained NPV maximization, we have to fix the project’s makespan according to the makespan minimal resource-constrained solution, after which we have to solve the resource-constrained NPV maximization problem. This problem can be formulated as a mixed integer linear programming (MILP) problem, which can be solved for small-scale projects in reasonable time (see, for example, Icmeli and Erenguc (1996)). It is important to note that in the harmony search algorithm the RCNPV measure of a “repaired” schedule can be maximized in polynomial time by replacing the traditional predecessor-successor formulation with a totally unimodular predecessor-successor formulation (Pritsker et al. (1969)). 3.3. Minimization of the resource-constrained total hammock cost The concept of hammock activities plays a central role in project management. They are used to fill the time span between other "normal" activities since their duration cannot be calculated or estimated at the initial stage of project planning.
Typically, they have been used to denote usage of equipment needed for a particular subset of activities without predetermining the estimated time the equipment must be present on site. Over the past few years the use of hammocks has become popular, and most computer software on project scheduling (in the unconstrained case) can now treat them as a part of the whole project analysis process. Nonetheless, some confusion still exists among hammock users, related to the procedure that must be used to calculate their durations after the normal time analysis is performed. In the unconstrained case, Harhalakis (1990) proposed the first rigorous algorithm to calculate the hammock durations. Theoretically the resource-constrained case can be formulated as a mixed integer linear programming (MILP) problem, which can be solved for small-scale projects in reasonable time (Eliezer (2007)). It is important to note that in the harmony search algorithm the RCHC measure of a “repaired” schedule can be evaluated in polynomial time using the traditional CPM analysis.
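The CPM-style free-float evaluation that these secondary objectives rely on can be sketched on a given schedule: the free float of an activity is its slack to the earliest successor start, and the total measure sums these values. The data and function name below are invented, and for brevity the sketch does not re-check resource feasibility of the shifts, which the RCFF definition above requires.

```python
# Total free float of a given schedule: S = start times, D = durations,
# succ = immediate successors (toy data, not from the paper).
def total_free_float(S, D, succ):
    ff = {i: min((S[j] - (S[i] + D[i]) for j in succ[i]), default=0)
          for i in S}
    return sum(ff.values())

S = {1: 0, 2: 4, 3: 6}
D = {1: 3, 2: 2, 3: 1}
succ = {1: [2, 3], 2: [], 3: []}
print(total_free_float(S, D, succ))   # 1: activity 1 can slip one period
```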

4.

Computational results

We tested our approach on the first 40 problems of benchmark set J30 generated with ProGen. In this set the number of non-dummy activities is 30. To generate the exact solutions a state-of-the-art MILP solver (CPLEX) was used with default settings. The hammock members and the cash flow values were generated randomly. The proposed algorithm has been programmed in Visual C++ Version 6.0. The computational results were obtained by running the algorithm on a 1.8 GHz Pentium IV IBM PC with 256 MB of memory under the Microsoft Windows XP operating system. The results of the experiments are presented in Tables 1-3, which show average distances with respect to the optimal solutions.

Table 1. Computational results for J30 (RCFF)

Iterations              100     500     1000
Average Distance (%)    33.60   18.29   11.35
Standard Deviation      13.38   11.15    9.12
Average Time (sec)       1.23    5.11   11.33

Table 2. Computational results for J30 (RCNPV)

Iterations              100     500     1000
Average Distance (%)     5.10    3.19    1.45
Standard Deviation       8.45    5.17    2.81
Average Time (sec)       2.13    3.95    8.85

Table 3. Computational results for J30 (RCHC)

Iterations              100     500     1000
Average Distance (%)     5.60    2.15    0.25
Standard Deviation      11.27    6.51    1.92
Average Time (sec)       0.41    0.95    1.85

5.

Conclusion

In this paper, we have presented a conflict repairing harmony search metaheuristic for the RCPSP with secondary objectives. The computational results show that “Sounds of Silence” is a fast and high quality algorithm for the selected bi-objective resource-constrained problems.

References Alvarez-Valdés, R., Tamarit, J. M. (1993). The project scheduling polyhedron: Dimensions, facets and lifting theorems. European Journal of Operational Research, 67, 204-220. Csébfalvi, G. (2007). Sounds of Silence: A harmony search metaheuristic for the resource-constrained project scheduling problem. European Journal of Operational Research (under review). Eliezer, O. (2007). Resource-constrained hammock cost minimization, Working Paper, University of Pécs, Hungary. Harhalakis, G. (1990). Special features of precedence network charts. European Journal of Operational Research, 49, 50-59. Icmeli, O., Erenguc, S. S. (1996). A branch and bound procedure for the resource constrained project scheduling problem with discounted cash flows. Management Science, 42, 1395-1408. Kolisch, R., Sprecher, A., Drexl, A. (1995). Characterization and generation of a general class of resource-constrained project scheduling problems. Management Science, 41, 1693-1703. Lee, K. S., Geem, Z. W. (2005). A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice. Computer Methods in Applied Mechanics and Engineering, 194, 3902-3933. Levi, R. (2004). Criticality in Resource Constrained Projects. PhD Dissertation, University of Pécs, Hungary.

Pritsker, A. A. B., Watters, L. J., and Wolfe, P. M. (1969). Multiproject scheduling with limited resources: A zero-one programming approach. Management Science, 16, 93-108.


A Harmony Search Metaheuristic for the Resource-Constrained Project Scheduling Problem and its Multi-mode Version

György Csébfalvi 1, Anikó Csébfalvi 2, Etelka Szendrői 3
1 University of Pécs, Hungary, e-mail: [email protected]
2 University of Pécs, Hungary, e-mail: [email protected]
3 University of Pécs, Hungary, e-mail: [email protected]

Keywords: Project management, single-mode and multi-mode resource-constrained scheduling, heuristics, harmony search.

1. Introduction

In this paper, we present a harmony search metaheuristic for the resource-constrained project scheduling problem (RCPSP) and its multi-mode version (MRCPSP). The central element of the "Sounds of Silence" algorithm is a serial forward-backward list scheduling procedure without improvement. Within this element, the only but extremely important novelty is the proposed activity list generator, which is practically independent of the applied metaheuristic frame. The fast and effective activity list generator is based on an "unusual" integer linear programming model, which can be solved in polynomial time by relaxing the integrality assumption. In the multi-mode case the activity list generator is combined with an activity mode generator, which is based on a simple integer linear programming model. To illustrate the effectiveness of the proposed heuristic, in the single-mode case we considered a total of 1560 instances from three sets of standard RCPSP test problems: J30, J60, and J120. In the multi-mode case we considered the feasible instances from set J30MM (540 feasible instances). In each case, the instance details are described by Kolisch et al. (1995).

2. The single-mode resource-constrained project scheduling model

In order to model the resource-constrained project scheduling problem (RCPSP), we consider the following. A single project consists of $N$ real activities $i \in \{1,2,\ldots,N\}$ with nonpreemptable durations of $D_i$ periods. Furthermore, activity $i \in \{0, N+1\}$ is defined to be the unique dummy source (sink) with zero duration. The activities are interrelated by precedence and resource constraints. Let $IPS = \{\, i \to j \mid i < j,\ i \in \{0,\ldots,N\},\ j \in \{1,\ldots,N+1\} \,\}$ denote the set of immediate predecessor-successor relations. In order to be processed, activity $i$ requires $R_{ir}$ units of resource type $r \in \{1,\ldots,R\}$ during every period of its duration. Since resource $r$ is only available with the constant period availability of $\overline{R}_r$ units for each period, activities might not be scheduled at their earliest (precedence-feasible) start time but later. Let $\underline{D}$ denote the precedence-feasible minimal makespan, let $\overline{D}$ denote the sum of the activity durations, which is an "extremely weak" upper bound on the precedence- and resource-feasible makespan, and fix the position of the unique dummy sink in period $\overline{D}+1$. Let $X_i$, $\underline{X}_i \le X_i \le \overline{X}_i$, denote the start time of activity $i$, $i \in \{1,2,\ldots,N\}$, where $\underline{X}_i$ ($\overline{X}_i$) denotes the earliest (latest) starting time of activity $i$ in the unconstrained (only precedence-feasible) case. Our objective is to schedule the activities such that precedence and resource constraints are met and the makespan of the project is minimized.


3. The single-mode harmony search algorithm

The harmony search (HS) algorithm was developed by Lee and Geem (2005) in analogy with the music improvisation process, in which music players improvise the pitches of their instruments to obtain a better harmony. In HS, the optimization problem is specified as follows:

$$\min f(X), \qquad X = \{\, X_i \mid \underline{X}_i \le X_i \le \overline{X}_i,\ i \in \{1,2,\ldots,N\} \,\}$$

In the language of music, $X$ is a melody whose aesthetic value is represented by $f(X)$: the lower the value of $f(X)$, the higher the quality of the melody. In the band, the number of musicians is $N$, and musician $i$ is responsible for sound $X_i$. The "improvisation" process is driven by two parameters: (1) according to the memory consideration rate (MCR), each musician chooses a sound from his/her memory with probability MCR, or a totally random value with probability (1-MCR); (2) according to the sound adjusting rate (SAR), the sound selected from memory is modified with probability SAR. The algorithm starts with a totally random "memory upload" phase, after which the band begins to "improvise". During the improvisations, whenever a new melody is better than the worst one in the memory, the worst is replaced by the better one. Naturally, the two most important parameters of the HS algorithm are the memory size (MemorySize) and the number of improvisations (Improvisations). The HS algorithm is an "explicit" one, because it operates directly on the sounds. In the case of the RCPSP, we can only define an "implicit" algorithm, and without introducing a "conductor" we cannot manage the problem efficiently. First, we show how the original problem can be transformed into the world of music. Here, the resource profiles $U_r = \{\, U_{tr} \mid t \in \{1,2,\ldots,\overline{D}\} \,\}$, $r \in \{1,2,\ldots,R\}$, form a "polyphonic melody". So, assuming that in every phrase only the "high sounds" are audible, the transformed problem is the following: find the shortest "Sounds of Silence" melody by "improvisation"! Naturally, a "high sound" in music is analogous to an overload ($U_{tr} > \overline{R}_r$) in scheduling. In the original HS, an improvisation is a random modification of randomly selected sounds. In our approach, an improvisation is a random perturbation of a promising melody. Melody selection as a task is connected to the conductor (the shorter the duration, the higher the chance that a melody will be selected by the conductor). But this is not enough: in our approach all of the decisions are the conductor's responsibility, and the musicians therefore form only a "decision support system". In the language of music, the RCPSP can be summarized as follows: (1) the band consists of $N$ musicians; (2) the polyphonic melody consists of $R$ phrases and $N$ polyphonic sounds; (3) each musician $i \in \{1,2,\ldots,N\}$ is responsible for exactly one polyphonic sound; (4) each polyphonic sound $i \in \{1,2,\ldots,N\}$ is characterized by the set $\{\, X_i, D_i, R_{ir} \mid r \in \{1,2,\ldots,R\} \,\}$, and the polyphonic sounds (musicians) form a partially ordered set according to the precedence (predecessor-successor) relations; (5) each phrase $r \in \{1,2,\ldots,R\}$ is additive for the simultaneous sounds; (6) in each phrase only the high sounds are audible: $\{\, U_{tr} \mid U_{tr} > \overline{R}_r,\ t \in \{1,2,\ldots,T\} \,\}$; (7) in each repertoire uploading (improvisation) step, each musician $i \in \{1,2,\ldots,N\}$ has the right to present (modify) an idea $I_i \in [-1, 1]$ about $X_i$, where a large positive (negative) value means that the musician wants to enter the melody as early (late) as possible; (8) in the repertoire uploading phase the musicians improvise freely, $I_i = \mathrm{RandomGauss}(0, 1, -1, +1)$, where the function $\xi = \mathrm{RandomGauss}(\mu, \sigma, \alpha, \beta)$ generates random numbers from a truncated ($\alpha \le \xi \le \beta$) normal distribution with mean $\mu$ and standard deviation $\sigma$; (9) in the improvisation phase the "freedom of imagination" decreases step by step, $I_i = \mathrm{RandomGauss}(I_i, S, -1, +1)$, where the standard deviation $S$ is a decreasing function of the progress; (10) each of the possible decisions of the harmony searching process (melody selection and idea-driven melody construction) is the conductor's responsibility; and (11) the band tries to find the shortest "Sounds of Silence" melody by improvisation. Our "magic" conductor solves a simple but "unusual" integer linear programming (ILP) problem to balance the effect of the more or less opposing ideas about a shorter "Sounds of Silence" melody:


$$\min \left\{\, \sum_{i=1}^{N} I_i \, X_i \;\middle|\; X_i + D_i \le X_j,\ i \to j \in IPS;\ \underline{X}_i \le X_i \le \overline{X}_i,\ X_i \text{ integral},\ i \in \{1,2,\ldots,N\} \,\right\}$$

Usually, the result of the optimization is a "very interesting" melody, with "extremely long breaks" between the more or less loud parts (themes). Fortunately, our conductor uses this schedule only to define the final starting (entering) order of the sounds (musicians). When two or more activities have the same starting time, the conductor breaks the tie by random permutation. The schedule generation scheme transforms activity list $L$ into a schedule $X(L)$ by taking the activities one by one in the order (reverse order) of the activity list and scheduling them at the earliest (latest) precedence- and resource-feasible start time.
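The two building blocks of this section, the truncated-normal sampler RandomGauss and the schedule generation scheme that turns an activity list into a schedule, can be sketched as follows. This is an illustrative single-resource, forward-only reimplementation, not the authors' code; all names are ours:

```python
import random

def random_gauss(mu, sigma, lo, hi):
    """Truncated normal sample via rejection, mimicking RandomGauss(mu, sigma, lo, hi)."""
    while True:
        x = random.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

def serial_sgs(order, durations, predecessors, demands, capacity, horizon):
    """Serial schedule generation: take activities in list order and place
    each at its earliest precedence- and resource-feasible start time.
    Single renewable resource for brevity; horizon must be large enough
    (e.g. the sum of all durations)."""
    load = [0] * horizon  # resource usage per period
    start = {}
    for i in order:
        # earliest precedence-feasible start
        t = max((start[p] + durations[p] for p in predecessors[i]), default=0)
        # push right until the resource profile admits the activity
        while any(load[u] + demands[i] > capacity
                  for u in range(t, t + durations[i])):
            t += 1
        start[i] = t
        for u in range(t, t + durations[i]):
            load[u] += demands[i]
    return start

# Three unit-demand activities on a capacity-1 resource, 1 precedes 3:
# the list (1, 2, 3) yields the starts {1: 0, 2: 2, 3: 4}.
print(serial_sgs([1, 2, 3], {1: 2, 2: 2, 3: 2},
                 {1: [], 2: [], 3: [1]}, {1: 1, 2: 1, 3: 1}, 1, 10))
```

In the full algorithm the backward pass works analogously on the reversed list, and the conductor's ILP (below in the original text) supplies the activity order.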

4. The multi-mode resource-constrained project scheduling model

In the multi-mode model each activity $i \in \{1,2,\ldots,N\}$ may be executed in one out of $M$ modes. Performing activity $i$ in mode $m \in \{1,2,\ldots,M\}$ takes $D_{im}$ periods and is supported by a set of $R$ renewable and a set of $C$ non-renewable resources. The per-period availability of renewable resource $r \in \{1,2,\ldots,R\}$ is $\overline{R}_r$. The overall capacity of non-renewable resource $c \in \{1,2,\ldots,C\}$ is $\overline{C}_c$. Let the per-period renewable (overall non-renewable) resource requirement of activity $i$ in mode $m$ from resource $r$ ($c$) be denoted by $R_{imr}$ ($C_{imc}$), respectively. Let $\underline{D}$ denote the precedence-feasible minimal makespan for $D_i = \min\{\, D_{im} \mid m \in \{1,2,\ldots,M\} \,\}$, let $\overline{D}$ denote the sum of the activity durations for $D_i = \max\{\, D_{im} \mid m \in \{1,2,\ldots,M\} \,\}$, and fix the position of the unique dummy sink in period $\overline{D}+1$. Let $X_i$, $\underline{X}_i \le X_i \le \overline{X}_i$, denote the start time of activity $i$, $i \in \{1,2,\ldots,N\}$, where $\underline{X}_i$ ($\overline{X}_i$) denotes the earliest (latest) starting time of activity $i$ for $D_i = \min\{\, D_{im} \mid m \in \{1,2,\ldots,M\} \,\}$ in the unconstrained (only precedence-feasible) case. The objective of the MRCPSP is to find an assignment of modes to activities as well as precedence- and resource-feasible starting times for all activities such that the makespan of the project is minimized.

5. Multi-mode harmony search algorithm

In the case of the MRCPSP each musician $i \in \{1,2,\ldots,N\}$ is characterized by a set of disjunctive polyphonic sounds. In the world of music, a non-renewable resource can be interpreted as the "energy" requirement of the performance from a given energy type: it may be the "physical energy" needed to sound a sound, or the "spiritual energy" needed to control the "quality" of the sounded sounds continuously. It is also a natural assumption that the "total energy" of the band is limited for each type, and that the total energy will be consumed by the performance. In each step, each musician has the right to present (modify) a probabilistic idea $A_i \in [1, M]$ about the "best" sound (mode) $M_i$, and an idea $I_i \in [-1, 1]$ about its "best" starting position $X_i$. In the repertoire uploading phase $A_i$ is generated freely from a truncated normal distribution on $[1, M]$, while in the improvisation phase $A_i$ serves as the mean of the distribution from which the perturbed $A_i$ is generated, $A_i = \mathrm{RandomGauss}(A_i, S, 1, M)$, where the standard deviation $S$ is a decreasing function of the progress. The final decision of the conductor is based on two simple ILP problems. The first, the "energy allocation model", is used by the conductor to select the sounds from the disjunctive sets according to the presented (not necessarily harmonious) ideas:

$$\min \left\{\, \sum_{i=1}^{N} \sum_{m=1}^{M} P_{im} \, D_{im} \, M_{im} \;\middle|\; \sum_{i=1}^{N} \sum_{m=1}^{M} M_{im} \, C_{imc} \le \overline{C}_c,\ c \in \{1,2,\ldots,C\};\ \sum_{m=1}^{M} M_{im} = 1,\ i \in \{1,2,\ldots,N\};\ M_{im} \in \{0,1\} \,\right\}$$

where

$$P_{im} = 1 - \int_{m-0.5}^{m+0.5} \mathrm{Gauss}(x, A_i, S)\, dx$$

is a weighting coefficient. The second ILP model, which defines the starting order of the musicians (sounds), is exactly the same as in the single-mode case.
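The weighting coefficients $P_{im}$ can be evaluated in closed form from the normal CDF. A small sketch (ours, not the authors' implementation; names are illustrative):

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of Normal(mu, sigma) at x, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def mode_weight(m, a_i, s):
    """P_im = 1 - integral of Gauss(x, A_i, S) over [m - 0.5, m + 0.5].
    Modes close to the musician's idea a_i get small weights, so the
    minimization objective of the energy allocation model favours them."""
    return 1.0 - (normal_cdf(m + 0.5, a_i, s) - normal_cdf(m - 0.5, a_i, s))

# With idea a_i = 2.0 and S = 0.5, mode 2 is cheaper than mode 1:
# mode_weight(2, 2.0, 0.5) ~ 0.317, mode_weight(1, 2.0, 0.5) ~ 0.843
```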


6. Computational results

The proposed algorithm has been programmed in Visual C++. The computational results were obtained by running the algorithm on a 1.8 GHz Pentium IV IBM PC under Windows XP. In the single-mode case, the "Sounds of Silence" algorithm gives nearly three times better results than the recently best state-of-the-art heuristic (a hybrid genetic algorithm (HGA) developed by Valls et al. (2007)) for the instance sets J60 and J120 (Table 1), and it is competitive with other state-of-the-art heuristics for the instance set J30. Table 1 shows the average deviation with respect to the optimal solutions (J30) and with respect to the critical path lengths (J60 and J120). The average solution time was 4 seconds for J120.

Table 1. Valls et al. versus "Sounds of Silence" results (average deviation, %)

Iterations                100     500     1000    5000    50000
Valls et al.       J30    -       -       0.27    0.06    0.02
Sounds of Silence  J30    1.17    0.72    0.59    -       -
Valls et al.       J60    -       -       11.56   11.10   10.73
Sounds of Silence  J60    4.71    4.20    4.02    -       -
Valls et al.       J120   -       -       34.07   32.54   31.24
Sounds of Silence  J120   13.01   12.04   11.72   -       -

In the multi-mode case the algorithm is competitive with the recently best population-based heuristic (a simulated annealing algorithm developed by Bouleimen and Lecocq (2003)) for J30MM with 5000 iterations, which is a promising preliminary result. In our case the average deviation from the optimal solution is 3.1%, while for the best solution it is 2.61%. The average solution time was 4.1 seconds, which is a competitive result.

7. Conclusions

In this paper, we have presented a harmony search metaheuristic for the RCPSP and its multi-mode version. The computational results show that the "Sounds of Silence" approach is a fast and high-quality algorithm.

References
Bouleimen, K., Lecocq, H. (2003). A new efficient simulated annealing algorithm for the resource-constrained project scheduling problem and its multiple mode version. European Journal of Operational Research, 149, 268-281.
Csébfalvi, G. (2007). Sounds of Silence: A harmony search metaheuristic for the resource-constrained project scheduling problem. European Journal of Operational Research (under review).
Kolisch, R., Sprecher, A., Drexl, A. (1995). Characterization and generation of a general class of resource-constrained project scheduling problems. Management Science, 41, 1693-1703.
Lee, K. S., Geem, Z. W. (2005). A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Computer Methods in Applied Mechanics and Engineering, 194, 3902-3933.
Valls, V., Ballestin, F., Quintanilla, M. S. (2007). A hybrid genetic algorithm for the resource-constrained project scheduling problem. European Journal of Operational Research, doi:10.1016/j.ejor.2006.12.033.


[The following abstract is unrecoverable from the source owing to font-encoding damage: its title, authors, literature review, computational tables, and reference list cannot be reconstructed. The surviving problem statement is as follows.]

We consider the scheduling of $n$ jobs on a single batch processing machine of capacity $b$. Each job $j$ is characterized by a triple $(p_j, d_j, s_j)$, where $p_j$ is its processing time, $d_j$ its due date, and $s_j$ its size. Jobs are grouped into batches whose total size may not exceed $b$; all jobs of a batch are processed together, so the completion time $C_j$ of job $j$ is the completion time of its batch. The objective is to minimize the maximum lateness $L_{\max} = \max_j (C_j - d_j)$; in the three-field notation the problem is $1 \mid p\text{-batch},\, b < n,\, \text{non-identical} \mid L_{\max}$.

Managing Projects in a Matrix Organization: Simulation-based Training

Lior Davidovitch 1, Avi Parush 2, and Avy Shtub 1
1 Industrial Engineering and Management, Technion - Israel Institute of Technology, Haifa, Israel 32000
2 Department of Psychology, Carleton University, Ottawa, ON, Canada, K1S 5B6

1. Introduction
The matrix organization is a combination of the functional organization and the project-based organization. In the matrix, cooperation between managers is vital for good performance, due to the complexity of communication and the split authority typical of such organizations. Training managers to manage projects in a matrix organization is therefore extremely important. Problems such as leading without authority, difficulty of project control, resource multiplexing, complex organizational interfaces, complex cost accounting, organizational ambiguity, priority conflicts and interruptions are typical of the matrix structure. These problems call for a well-coordinated management style. In the present paper we study one aspect of this problem: training project managers how to coordinate their projects, which is vital for the matrix organization to perform well.

2. Team Training
Traditional training based on static models and case studies is the backbone of most teaching and training programs in the area of project management. The successful use of a simulator called PMT (Project Management Trainer) for teaching project management at the individual level was reported and presented in the literature and in PMS 2004 [2, 3, 10]. In order to teach and train project managers at the team level, a new simulator has been developed: PTB (Project Team Builder). The PTB can be integrated with a commercial project management tool (Microsoft Project©). The simulator is designed for training teams in a dynamic, stochastic, multi-project environment. The PTB contains a built-in history mechanism, which allows users to review their previous decisions and actions and to rerun the simulation from a stored history point. The history mechanism is important as it encourages meta-cognitive processes and enables debriefing - analysis of the decision-making process by reviewing past states and decisions, as opposed to analysis of results only. The simulator can be used both in single-user mode and in multi-user mode. Team learning in a multi-project environment is important due to the simple fact that in the "real world" teams perform most projects. Team learning has been defined as a process in which a team takes action, obtains and reflects upon feedback, and makes changes to adapt or improve [1, 4, 9]. A team can be defined as two or more people who interact dynamically, interdependently and adaptively and who share at least some common goals or purpose. Team knowledge is more than the collection of knowledge of individual team members; team knowledge is a result of interactions among team members [5]. The Kolb Team Learning Experience (KTLE) is a structured way to help a team develop the essential competencies necessary for team learning. The KTLE provides a comprehensive tool to help teams learn to solve problems and work together to arrive at effective solutions [6]. The purpose of the debriefing procedure is to support students in examining the decision-making process during the planning phase and the execution of the decisions during the execution phase. The written debriefing procedure consisted of three stages: general debriefing, detailed debriefing, and project scenario analysis. In the general debriefing stage, the team members are requested to define the operating methodology of the simulator, both for the planning phase and for the execution phase, while operating the control activities.


In the detailed stage, the decision-making procedure before and while running the scenario is investigated. The students have to explain how they solved conflicts between team members. The final debriefing stage focused on the analysis of the scenario. Teams that used the history mechanism analyze the scenario based on saved history information. Teams that did not have access to the history mechanism analyzed the project only at termination.

3. The experiment
Our experimental design focused on three variables:
1. Experience in project management: two groups participated in the study, an experimental group composed of graduate engineering students and a control group composed of undergraduate engineering students.
2. History recording mode: teams using the history recording mechanism versus teams not using it.
3. Team debriefing procedure: teams performing a formal debriefing procedure versus teams not performing one.
The two graduate/undergraduate groups were fully crossed with the two history recording conditions and with the two debriefing procedure conditions, resulting in eight experimental/control groups, according to Table 1:

Table 1: Group Characteristics

Group   Graduate   History   Debriefing
1       √          X         X
2       √          √         X
3       √          X         √
4       √          √         √
5       X          X         X
6       X          √         X
7       X          X         √
8       X          √         √

The paradigm used in [2] for individual learning was also employed in this study for team learning. The experiment was based on two phases during which the history mechanism and the debriefing procedure were introduced. Phase I: Basic Learning of a Simple Scenario (SP) – Teams were assigned to one of the eight groups. Participants in all eight groups were given the same simple multi-project scenario (each team member managed a single project). Each scenario was run four times. Phase II: Transfer to a complex Multi Project (MP) scenario – Team participants in all eight groups were given a new multi-project management scenario. The scenario was run once.

4. Results and conclusions
In the comparison between the experimental group and the control group, the findings show a better learning process for the graduate student teams. Moreover, when observing the MP scenario performance, which reflected transfer abilities, it was found that the trend persisted and performance was better for the experimental group. The results are summarized in Figure 1:


Figure 1. Results of the experiment: mean profit over five simulation runs for the eight groups (Control/Experimental, with/without History (H), with/without Debriefing (D))

In order to check the influence of the history mechanism on team learning, the first phase (SP) was carried out with and without the history mechanism, both by the experimental group and by the control group. The findings indicate that, using the PTB, the teams that used the history mechanism show a better learning process. This trend persists in the second phase of transfer to a different scenario (MP). The phenomenon of better performance even in the first simulation run for teams that used the history mechanism (while the history database contains just the information about the current simulation run) is unique to team learning and was not found in our simulation-based individual learning study [2]. Our conclusion is that when a team runs a simulation scenario, many gaps in individual knowledge can be compensated for by the knowledge of other team members; this compensation is enhanced by using the built-in history mechanism. Team knowledge, using the history mechanism, is more than the collection of knowledge of individual team members; it emerges as a result of interactions among team members [5].

In order to check the influence of the debriefing procedure, the first-phase simulation runs were carried out with and without the team debriefing procedure, both for the experimental group and the control group. The findings show a better learning process when using the team debriefing procedure. These findings also hold for the second phase of transfer to a different scenario (MP). This is defined in Kolb's model [7] as the second and third stages of the Kolb Team Learning Experience (KTLE), called reflective observation and abstract conceptualization, moving learners from passive recipients of information to learners who experience phenomena, write about them, and think about how their experiences relate to concepts and theories considered in their learning session.

An important conclusion is derived from the second-order interactions of the three factors. The interaction of the experimental group with the history mechanism is significant, hence providing the conclusion that the students that used the history mechanism achieved better learning performance, and the impact of the history mechanism is significantly higher for graduate students than for undergraduate students. Moreover, the findings suggest that using a formal debriefing procedure improves performance and actually supports and improves learning. Another important finding is the significant influence of the interaction between the history mechanism and the debriefing procedure. A significantly better learning process was found for teams using both the history mechanism and the debriefing procedure. This finding is an extension of the Kolb Team Learning Experience [8] and adds the integration of the history mechanism and the debriefing procedure to team learning. The Kolb Team Learning Experience model applicable to this study can be explained by its four stages: 1. Concrete Experience – the participants acquire knowledge about the subject using their experience and via the history mechanism that stores all the relevant information and enables the learner to describe the experience in terms of who, what, when, where, and how.


2. Reflective Observation – the experience is viewed from different points of view while using a written team debriefing procedure, which guides the learners to elaborate the scope and add more meaning and perspectives to the event. The information, which is the main knowledge base for the debriefing procedure, is gathered by the history keeping mechanism and can be loaded by each team participant on request. 3. Abstract Conceptualization – in order to relate concepts from the readings and lectures to the experience in the activity, the learners can use the guiding questions in the debriefing procedure. These questions direct the learners to the background theories and enable the expansion of knowledge from the experience gathered. 4. Active Experimentation – the participants apply what has been learned during the written debriefing procedure by using the history information in relevant case studies - a simulation-based scenario, which can be the same scenario or a different one. The use of the history mechanism was found to be very powerful, since it gives the students a strong tool to enhance their learning process. The same can be said of the team debriefing procedure, which elaborates the scope of various conclusions about the decision-making and enhances the team learning process. With these two mechanisms integrated together, the students are more active in the decision-making process and the knowledge gathered is applied to other scenarios.

References
[1] Akgun, A. E., J. C. Byrne, H. Keskin, and G. S. Lynn, "Transactive memory system in new product development teams," IEEE Transactions on Engineering Management, vol. 53, no. 1, pp. 95-111, 2006.
[2] Davidovitch, L., A. Parush, and A. Shtub, "Simulation-based learning in engineering education: performance and transfer in learning project management," Journal of Engineering Education, vol. 95, no. 4, pp. 289-299, 2006.
[3] Davidovitch, L., A. Parush, and A. Shtub, "Simulation-based learning: The learning–forgetting–relearning process and impact of learning history," Computers and Education, in press.
[4] Edmondson, A., "Psychological safety and learning behavior in work teams," Administrative Science Quarterly, vol. 44, pp. 350-383, 1999.
[5] Klimoski, R. and S. Mohammed, "Team mental model: construct or metaphor?," Journal of Management, vol. 20, pp. 403-437, 1994.
[6] Kayes, A. B., C. D. Kayes, and D. A. Kolb, "Developing teams using the Kolb team learning experience," Simulation & Gaming, vol. 36, no. 3, pp. 355-363, 2005.
[7] Kolb, D., Experiential Learning: Experience as a Source of Learning. New Jersey: Prentice Hall, 1984.
[8] Kolb, D., I. M. Rubin, and J. S. Osland, Organizational Behavior: An Experimental Approach. Englewood Cliffs, NJ: Prentice-Hall, 1991.
[9] Sarin, S. and C. McDermott, "The effect of team leader characteristics on learning, knowledge application, and performance of cross-functional new product development teams," Decision Sciences, vol. 34, no. 4, pp. 707-739, 2003.
[10] Shtub, A., "PMT – The project management trainer," Ninth International Workshop on Project Management and Scheduling, PMS 2004, pp. 430-433, 2004.


RESCON: A Classroom MFC Application for the RCPSP

Filip Deblaere 1, Erik Demeulemeester 1, and Willy Herroelen 1
1 Department of Decision Sciences and Information Management, Katholieke Universiteit Leuven, Belgium, e-mail: [email protected]

Keywords: RCPSP, Educational Software, Visualization, Scheduling Algorithms.

1. Introduction

The resource-constrained project scheduling problem (RCPSP) involves the determination of a baseline schedule of the project activities that satisfies the finish-start precedence relations and the renewable resource constraints under the objective of minimizing the project duration. We have developed an educational tool that visualizes (amongst others) project networks, baseline schedules, resource profiles and Gantt charts. The tool also features a number of known scheduling algorithms for solving the RCPSP, including a tabu search metaheuristic and an exact branch-and-bound procedure. The remainder of this paper is organized as follows. In the next section we give a brief introduction to the RCPSP. The third and final section elaborates on the features implemented in the educational software tool.

2. The basic RCPSP

The basic RCPSP (Demeulemeester & Herroelen, 2002) involves a project network G(N,A) with a set N of nodes representing the project activities. The activities in the network are subject to zero-lag finish-start precedence constraints (i,j) ∈ A, indicated by the arcs of the network. We assume the presence of m renewable resource types, with a per period availability ak, k ∈ K with K = {1,…,m}. The project activities i ∈ N require an integer per period amount rik of resource type k, k ∈ K. A solution to the RCPSP consists of a vector of start times si, i ∈ N, such that the resource and precedence constraints are satisfied and the project makespan is minimized. Feasible schedules for the RCPSP can easily be obtained by a so-called schedule generation scheme (Kelley, 1963; Brooks & White, 1965), using an activity list – usually ordered according to a certain priority rule – as input. Numerous procedures have been developed for solving the RCPSP, both heuristic and exact. The most successful exact procedure for the RCPSP appears to be dedicated branch-and-bound (Demeulemeester & Herroelen, 1992, 1997).

3. An educational tool

An educational tool for visualizing the RCPSP¹ has been developed in Microsoft® Visual C++. It has a window-based Graphical User Interface (GUI) and runs on all Win32® platforms. The software is capable of handling projects consisting of any number of activities and any number of resource types. Project files in the ``rcp'' format (the format used in the project scheduling library PSPLIB (Kolisch & Sprecher, 1997) and in the RanGen network generator (Demeulemeester et al., 2003)) can be read in. Alternatively, the user can build up a project network from scratch, starting with an empty project and adding activities and precedence relations one at a time. These networks can then be exported to a file in the aforementioned ``rcp'' format. Early and late start schedules as well as resource-feasible schedules can be calculated, their corresponding resource profiles can be visualized (see Figure 1), and the slack values of the different activities can be graphically displayed in the form of a Gantt chart (see Figure 2). The software also supports the calculation of various project statistics, such as the resource strength, the order strength and the coefficient of network complexity, to name a few.

¹ Publicly downloadable from http://www.econ.kuleuven.be/filip.deblaere/public/rescon/

Figure 1. Resource profiles

Figure 2. Gantt chart with slack values

Feasible schedules can be calculated using a large number of constructive heuristics. The software supports both the serial and the parallel schedule generation scheme, in combination with forward, backward or bidirectional planning. The user can choose from eight popular priority rules (e.g., latest finish time, minimum slack, ...), yielding a total of 48 heuristics for calculating a feasible schedule. We have also implemented an exact procedure for solving the RCPSP. It is a branch-and-bound procedure based on the procedure by Demeulemeester and Herroelen (1992), yielding optimal solutions in a relatively short computation time (provided the number of activities is limited). For large projects, the exact procedure may not be able to calculate a solution in a reasonable computation time. Therefore, we have implemented a rather rudimentary tabu search procedure (Glover, 1989, 1990) that attempts to strike a balance between computation time and solution quality. The software provides the possibility of summarizing the results (i.e., the obtained makespans) of the 48 constructive heuristics, the tabu search metaheuristic and the exact procedure graphically, on a single timeline. A screenshot of this feature is shown in Figure 3.

Figure 3. Algorithm performance summary

Also, for educational purposes, we provide the opportunity for ``what-if'' analysis: project parameters (such as activity durations, resource availabilities and precedence constraints) can be changed and the schedule can be recalculated, so that the immediate effects on the baseline schedule are just one click away.

References
[1] Brooks, G. & White, C. (1965). An algorithm for finding optimal or near optimal solutions to the production scheduling problem, Journal of Industrial Engineering 16: 34–40.
[2] Demeulemeester, E. & Herroelen, W. (1992). A branch-and-bound procedure for the multiple resource-constrained project scheduling problem, Management Science 38: 1803–1818.
[3] Demeulemeester, E. & Herroelen, W. (1997). New benchmark results for the resource-constrained project scheduling problem, Management Science 43: 1485–1492.
[4] Demeulemeester, E. & Herroelen, W. (2002). Project scheduling - A research handbook, Vol. 49 of International Series in Operations Research & Management Science, Kluwer Academic Publishers, Boston.
[5] Demeulemeester, E., Vanhoucke, M. & Herroelen, W. (2003). RanGen: A random network generator for activity-on-the-node networks, Journal of Scheduling 6: 17–38.
[6] Glover, F. (1989). Tabu search, Part I, INFORMS Journal on Computing 1: 190–206.
[7] Glover, F. (1990). Tabu search, Part II, INFORMS Journal on Computing 2: 4–32.
[8] Kelley, J. (1963). The critical-path method: Resources planning and scheduling, in J. Muth & G. Thompson (eds), Industrial Scheduling, Prentice Hall, Englewood Cliffs, pp. 347–365.
[9] Kolisch, R. & Sprecher, A. (1997). PSPLIB – A project scheduling library, European Journal of Operational Research 96: 205–216.

PMS 2008, April 28-30, İstanbul, Turkey

New Approximate Solutions for Customer Order Scheduling

J. M. Framinan¹

¹ Industrial Management, School of Engineering, University of Seville, Spain, e-mail: [email protected]

Keywords: Order scheduling, total completion time, heuristics, greedy search.

1. Introduction

We consider a facility with m machines in parallel. Each machine can produce one particular product type. We assume that there are n customer orders, each composed of some or all of the product types that can be manufactured on the m machines. The total amount of processing required by order i on machine j is pij. The problem is to schedule the orders so that the sum of the completion times of all orders is minimized. There are several practical applications of this model, including finishing operations in the paper industry (Leung et al. 2005a), the manufacturing of semi-finished lenses (Ahmadi et al. 2005), and the pharmaceutical industry (Leung et al. 2005b). The problem is usually denoted in the literature as PD||∑Cj, and its NP-hardness was first established by Wagneur and Sriskandarajah (1993), although Leung et al. (2005) reopened the question by discovering a flaw in the proof. Finally, the problem was shown to be NP-hard in the strong sense even for two machines by Roemer (2006). A number of heuristics have been proposed in the literature for the problem. The Shortest Total Processing Time (STPT) heuristic (Sung & Yoon 1998) constructs a sequence of orders by starting with an empty schedule and selecting as the next order to be scheduled the one with the smallest total amount of processing over all m machines among the unscheduled orders. Similarly, the Shortest Maximum Processing Time (SMPT) heuristic (Sung & Yoon 1998) selects the order with the smallest maximum amount of processing time on any of the m machines. The Smallest Maximum Completion Time (SMCT) heuristic (Wang & Cheng 2007) first sequences the orders in non-decreasing order of their processing times on each machine j; then it computes SCi, the completion time of each order i, as the maximum among the completion times on all machines according to the schedules obtained before. Finally, the orders are scheduled in non-decreasing order of SCi. Leung et al. (2005a) propose the Earliest Completion Time (ECT) heuristic, also suggested by Ahmadi et al. (2005). This heuristic generates a sequence of orders one at a time; at each step it selects as the next order the one that would be completed the earliest.
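The ECT heuristic described above can be sketched in a few lines. The code below is a hedged reconstruction from the description, not the authors' implementation; the three-order, two-machine instance is invented, and every order is assumed to need at least one machine.

```python
# Hedged sketch of the ECT (Earliest Completion Time) heuristic for order
# scheduling (PD||sum Cj): orders are appended one at a time, always choosing
# the unscheduled order that would be completed the earliest given the
# current load of the m dedicated machines.

def ect(p):
    """p[i][j] = processing required by order i on the dedicated machine j.
    Returns (sequence of orders, sum of completion times)."""
    n, m = len(p), len(p[0])
    load = [0] * m                # work already assigned to each machine
    unscheduled = set(range(n))
    sequence, total = [], 0

    def completion(i):
        # an order finishes when its last required component finishes
        return max(load[j] + p[i][j] for j in range(m) if p[i][j] > 0)

    while unscheduled:
        best = min(unscheduled, key=completion)
        total += completion(best)           # evaluated before updating loads
        for j in range(m):
            load[j] += p[best][j]
        sequence.append(best)
        unscheduled.remove(best)
    return sequence, total

p = [[3, 0],   # order 0 needs only machine 0
     [1, 2],
     [2, 3]]
seq, sum_c = ect(p)
```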

2. New heuristics for the problem

Among the heuristics presented in the previous section, ECT clearly outperforms the rest, according to the results in Leung et al. (2005a). Indeed, the performance of the ECT heuristic is exceptional, obtaining solutions that are, on average, within just around 4% of those found by a lengthy tabu search procedure specifically designed for the problem. Such exceptional performance of a pure greedy heuristic suggests a peculiar structure of the solution space, and opens some opportunities for improvement by inserting the chosen order in positions other than the last one and/or by reinserting already scheduled orders in the last position of the partial schedule. Two 'classical' strategies for doing so are the following:
- The so-called NEH-C insertion strategy, based on the NEH heuristic (Nawaz et al. 1983). According to this insertion strategy, the jobs are ordered and then inserted, according to this order, in all possible slots of the partial schedule. Among the so-obtained partial schedules, the one yielding the best value of the objective function is selected for the next iteration. This strategy provides very good results for the flowshop scheduling problem with makespan objective.
- The so-called FL-C strategy (Framinan and Leisten 2003), which consists of applying pairwise interchange to the partial solutions found by the NEH-C procedure. This strategy is particularly good for the flowshop scheduling problem with flowtime objective.
Both NEH-C and FL-C require an initial sorting of the orders. Clearly, this sorting can be provided by ECT, but other options are available as well. In particular, sorting the jobs in ascending order of their sum of processing times (Longest Processing Times - LPT) seems to be an effective option for both NEH-C and FL-C in the flowshop scheduling problem with the flowtime criterion (see e.g. Framinan et al. 2005). Therefore, in our case we adapt LPT by applying it on each machine and selecting the best sequence among the m schedules so obtained. In the following, we refer to this sorting mechanism as LPT*. In addition to the aforementioned heuristics, we propose two new heuristics. They are based on ECT, which is known to provide very good solutions, but we allow the re-insertion of jobs in the intermediate steps. These are:
- SHIFT-k. The heuristic starts with all orders unscheduled. At step k, let Sk be the partial schedule of length k obtained by inserting in the last position of Sk−1 the unscheduled order yielding the lowest (partial) flowtime, i.e. Sk := Sk−1σr, where ΣC(Sk−1σr) ≤ ΣC(Sk−1σi) for every i in the set of unscheduled orders. Then, we look for the most suitable neighbouring position for σr by removing the job in position j (j = k − 2,…,1) in Sk and inserting it in position k − 1 (next to σr). If a (partial) solution better than the current one is obtained, it replaces Sk as the best current solution.
- SHIFT-k-OPT. This heuristic is similar to SHIFT-k, but, if during the reinsertion phase at step k a better solution is found, then all jobs in positions j (j = k − 2,…,1) in Sk are again removed and inserted in position k − 1. This process is repeated until no better solution is found.
The rationale behind these two strategies (specifically designed for the problem) is to accommodate the newly inserted order with the best possible neighbour, by reinserting all already scheduled orders next to it. Finally, in view of the good results provided by the SHIFT-k and SHIFT-k-OPT strategies (see Table 1 in the next section), we also use this reinsertion mechanism to build a Greedy Search Algorithm for the problem.
The basic steps are the following:

Step 0: Set MAX_ITER, the maximum number of iterations. Set the current number of iterations to zero.
Step 1: Obtain an initial solution Scurr by applying SHIFT-k. Let Sbest := Scurr.
Step 2: While the number of iterations is lower than MAX_ITER:
  Step 2.1. Obtain i, a random number between 1 and (n − 1). Remove the job in position i from Scurr and insert it in position n.
  Step 2.2. Apply SHIFT-k-OPT to the sequence obtained in Step 2.1. Let S* be the so-obtained sequence. Let Scurr := S*.
  Step 2.3. Perform a local search phase by pairwise exchange among all jobs in S*. If a better solution is found in the process, assign it to Scurr.
  Step 2.4. If ∑Cj(Scurr) < ∑Cj(Sbest), then Sbest := Scurr.
  Step 2.5. Increase the number of iterations.
Step 3: Return Sbest as the solution of the problem.
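The greedy append plus reinsertion loop of SHIFT-k can be sketched as follows. This is an illustrative reconstruction from the description above, not the author's code; the flowtime evaluation assumes the dedicated parallel-machine model of the introduction, and the instance is invented.

```python
# Illustrative sketch of SHIFT-k: orders are appended greedily (as in ECT);
# after each append, every earlier order is tentatively reinserted just
# before the newcomer, keeping only improving moves.

def flowtime(p, seq):
    """Sum of order completion times for p[i][j] (order i, machine j);
    every order is assumed to need at least one machine."""
    m = len(p[0])
    load = [0] * m
    total = 0
    for i in seq:
        for j in range(m):
            load[j] += p[i][j]
        total += max(load[j] for j in range(m) if p[i][j] > 0)
    return total

def shift_k(p):
    seq, remaining = [], set(range(len(p)))
    while remaining:
        # greedy append: pick the order minimizing the partial flowtime
        best = min(remaining, key=lambda i: flowtime(p, seq + [i]))
        remaining.remove(best)
        seq = seq + [best]
        k = len(seq)
        # try reinserting each earlier order next to the newcomer, nearest first
        for j in range(k - 3, -1, -1):
            shorter = seq[:j] + seq[j + 1:]
            cand = shorter[:-1] + [seq[j]] + shorter[-1:]
            if flowtime(p, cand) < flowtime(p, seq):
                seq = cand
    return seq, flowtime(p, seq)

p = [[3, 0],   # order 0 needs only machine 0
     [1, 2],
     [2, 3]]
seq, total = shift_k(p)
```

SHIFT-k-OPT would simply repeat the inner reinsertion loop after every accepted move until no further improvement is found.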

3. Computational results

In order to test the different proposals of the previous section, we built a test-bed taken from Leung et al. (2005). The test-bed consists of 480 instances, 30 for each combination of number of orders and number of machines (see e.g. Table 1), where each order requires processing on k machines, k being a random number between 1 and m. The processing times are generated according to a uniform [1,99] distribution. The heuristics under comparison are the following:
- LPT*, as a benchmark heuristic
- ECT, as the best-so-far constructive heuristic for the problem
- NEH-C(LPT*): the NEH-C strategy with the initial ordering provided by LPT*
- NEH-C(ECT): the NEH-C strategy with the initial ordering provided by ECT
- FL-C(LPT*): the FL-C strategy with the initial ordering provided by LPT*
- FL-C(ECT): the FL-C strategy with the initial ordering provided by ECT
- SHIFT-k: the SHIFT-k heuristic
- SHIFT-k-OPT: the SHIFT-k-OPT heuristic
- TSL: the Tabu Search algorithm as in Leung et al. (2005a)
- GSA: the Greedy Search Algorithm described in the previous section (MAX_ITER = 1000)
The results in terms of average RPD (Relative Percentage Deviation) from the best-known solution are shown in Table 1 for the different problem sizes, and the 99% confidence intervals of the means are given in Figure 1. As a summary of both, it can be seen that the results achieved by SHIFT-k and SHIFT-k-OPT are better than (and statistically different from) those obtained by ECT. Also, the results obtained by FL-C(ECT) are statistically different from (and better than) those obtained by the rest of the heuristics. Among the local search approaches, the superior performance of GSA over TSL is also statistically significant. It is also worth noting that NEH-C(ECT) and ECT produce the same results, indicating the particular structure of the problem.

Table 1. Relative Percentage Deviation of the different heuristics

n     m    LPT*     ECT     NEH-C    NEH-C    FL-C     FL-C     SHIFT-k  SHIFT-k  GSA      TSL
                            (LPT*)   (ECT)    (LPT*)   (ECT)             -OPT
20    2    32.104   1.417   2.385    1.417    0.493    0.316    0.715    0.613    0.016    0.131
20    5    24.589   2.180   3.081    2.180    1.124    0.853    1.550    1.363    0.022    0.030
20    10   21.827   2.850   3.998    2.850    1.797    1.272    2.029    1.866    0.033    0.068
20    20   17.478   2.291   4.265    2.291    1.652    1.269    1.787    1.732    0.003    0.069
50    2    43.303   2.492   3.656    2.492    0.904    0.357    1.118    0.803    0.025    0.019
50    5    36.265   3.770   5.587    3.770    1.819    2.023    2.878    2.671    0.025    0.140
50    10   30.412   3.231   7.196    3.231    2.982    2.076    2.691    2.550    0.040    0.187
50    20   26.564   3.473   7.378    3.473    3.433    2.528    3.042    2.930    0.022    0.317
100   2    48.927   3.446   4.488    3.446    1.538    0.774    1.560    1.160    0.022    0.019
100   5    39.593   3.813   6.643    3.813    2.278    2.316    3.029    2.929    0.016    0.304
100   10   36.495   3.215   8.931    3.215    3.881    2.370    2.830    2.755    0.000    0.519
100   20   31.490   2.932   8.749    2.932    4.082    2.368    2.694    2.653    0.000    0.685
200   2    50.890   3.856   4.675    3.856    1.752    1.017    1.779    1.340    0.010    0.039
200   5    41.245   4.961   7.291    4.961    2.249    2.965    3.849    3.650    0.005    0.203
200   10   37.534   3.601   9.457    3.601    3.631    2.807    3.198    3.171    0.000    0.301
200   20   34.391   2.999   10.935   2.999    4.762    2.673    2.857    2.823    0.000    0.306
Avg.       34.569   3.158   6.170    3.158    2.399    1.749    2.350    2.188    0.015    0.209

[Figure omitted: 99% confidence intervals of the RPD (y-axis, 0 to 7) for each heuristic (x-axis): ECT/NEH-C(ECT), NEH-C(LPT*), SHIFT-k, SHIFT-k-OPT, FL-C(ECT), GSA, TSL]

Figure 1. 99% Confidence Intervals for the heuristics under comparison

Finally, in Table 2 we show the average CPU times for the different heuristics depending on the problem size. As can be seen, the SHIFT-k and SHIFT-k-OPT strategies are rather fast, while FL-C(ECT) is very time consuming (around 60 times the CPU time of SHIFT-k-OPT), with results that are only marginally better. Finally, the GSA approach proves to be much more efficient than TSL, as it requires less CPU time to achieve better results.

4. Conclusions

For the problem of customer order scheduling with the objective of flowtime minimisation, we present the heuristics SHIFT-k and SHIFT-k-OPT, which improve upon the performance of the best-so-far heuristic while both remain sufficiently fast. The FL-C strategy seems to be successful as well, at the price of a higher CPU effort. Regarding high-quality, time-consuming local search approaches for the problem, we embed the SHIFT-k-OPT mechanism into a Greedy Search Algorithm, obtaining the best quality of results for the problem while being less CPU-time demanding than the Tabu Search algorithm proposed by Leung et al. (2005a). In addition, the analysis carried out (omitted in this summary) also shows the robustness of the GSA approach with respect to the maximum number of iterations, as well as with respect to the number of jobs to be removed (and subsequently reinserted) in Step 2.1 of the algorithm.

Table 2. CPU time of the different heuristics (LPT* is not depicted as it is zero for all problem sizes)

n     m    ECT     NEH-C    NEH-C    FL-C      FL-C      SHIFT-k  SHIFT-k  GSA       TSL
                   (LPT*)   (ECT)    (LPT*)    (ECT)              -OPT
20    2    0.000   0.000    0.001    0.003     0.005     0.000    0.001    0.527     10.681
20    5    0.000   0.001    0.001    0.005     0.007     0.001    0.002    1.053     14.381
20    10   0.001   0.001    0.001    0.009     0.011     0.002    0.001    1.673     17.957
20    20   0.000   0.001    0.002    0.016     0.016     0.003    0.002    2.793     26.241
50    2    0.002   0.002    0.004    0.095     0.102     0.004    0.006    6.736     106.431
50    5    0.003   0.006    0.008    0.200     0.204     0.007    0.010    14.740    327.848
50    10   0.004   0.009    0.013    0.336     0.342     0.013    0.016    24.656    407.009
50    20   0.008   0.016    0.024    0.566     0.576     0.022    0.027    42.301    608.391
100   2    0.009   0.018    0.029    1.370     1.433     0.031    0.072    50.944    781.950
100   5    0.021   0.041    0.061    3.045     3.118     0.057    0.086    131.566   2167.525
100   10   0.034   0.069    0.103    5.170     5.233     0.098    0.131    218.330   2812.885
100   20   0.057   0.118    0.176    8.957     8.966     0.166    0.204    349.328   4796.554
200   2    0.068   0.140    0.208    20.539    21.482    0.220    0.901    422.105   4708.247
200   5    0.154   0.311    0.471    46.672    47.922    0.467    0.921    443.938   19760.093
200   10   0.262   0.534    0.800    80.407    81.171    0.780    1.110    1074.184  28217.012
200   20   0.452   0.907    1.365    137.532   138.500   1.344    1.685    1892.195  67453.903
Avg.       0.067   0.136    0.204    19.058    19.318    0.201    0.323    292.317   8113.231

References
Ahmadi, R., U. Bagchi and T.A. Roemer (2005). Coordinated scheduling of customer orders for quick response. Naval Research Logistics, 52, 493-512.
Framinan, J.M. and R. Leisten (2003). An efficient constructive heuristic for flowtime minimisation in permutation flowshops. OMEGA, 31, 311-317.
Framinan, J.M., R. Leisten and R. Ruiz-Usano (2005). Comparison of heuristics for flowtime minimisation in permutation flowshops. Computers & Operations Research, 32, 1237-1254.
Leung, J.Y-T., H. Li and M. Pinedo (2005a). Order scheduling in an environment with dedicated resources in parallel. Journal of Scheduling, 8, 355-386.
Leung, J.Y-T., H. Li and M. Pinedo (2005b). Order scheduling models: an overview. In Multidisciplinary Scheduling: Theory and Applications (G. Kendall, E.K. Burke, S. Petrovic and M. Gendreau, eds.), Springer, New York, 38-53.
Leung, J.Y-T., H. Li, M. Pinedo and C. Sriskandarajah (2005). Open shops with jobs overlap - revisited. European Journal of Operational Research, 163, 569-571.
Nawaz, M., E.E. Enscore and I.A. Ham (1983). A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. OMEGA, 11, 91-95.
Roemer, T.A. (2006). A note on the complexity of the concurrent open shop problem. Journal of Scheduling, 9, 389-396.
Sung, C.S. and S.H. Yoon (1998). Minimizing total weighted completion time at a pre-assembly stage composed of two feeding machines. International Journal of Production Economics, 54, 247-255.
Wagneur, E. and C. Sriskandarajah (1993). Open shops with jobs overlap. European Journal of Operational Research, 71, 366-378.
Wang, G. and T.C.E. Cheng (2007). Customer order scheduling to minimize total weighted completion time. OMEGA, 35, 623-626.



Tree and Local Search for Parallel Machine Scheduling Problems with Precedence Constraints and Setup Times

B. Gacias, C. Artigues and P. Lopez

LAAS-CNRS, Université de Toulouse, France
{bgacias,artigues,lopez}@laas.fr

Keywords: Parallel machine scheduling, setup times, precedence constraints, limited discrepancy search, local search.

1 Introduction

This paper deals with parallel machine scheduling with precedence constraints and setup times between the execution of jobs. We consider the optimization of two different criteria: the minimization of the sum of completion times and the minimization of the maximum lateness. These two criteria have a particular interest in production scheduling. The sum of completion times is a criterion that maximizes the production flow and makes possible the minimization of the work-in-process inventories. In the minimization of the maximum lateness, the due dates can be associated with the delivery dates of products. This is a due date satisfaction goal, aiming to penalize as little as possible the customer who is delivered with the longest delay. These problems are strongly NP-hard (Graham et al., 1979). The parallel machine scheduling problem has been widely studied (Cheng and Sin, 1990), especially because it appears as a relaxation of more complex problems like the hybrid flow shop scheduling problem or the RCPSP (Resource-Constrained Project Scheduling Problem). However, the literature on parallel machine scheduling with precedence constraints and setup times is quite limited. The problems that have either precedence constraints or setup times, but not both, can be solved by list scheduling algorithms. That is, there exists a total ordering of the jobs (a list) that, when a given allocation rule is applied, reaches the optimal solution (Schutten, 1994). This rule is the Earliest Completion Time (ECT) rule: it consists in allocating every job to the machine that allows it to be completed the earliest. This reasoning unfortunately no longer works when precedence constraints and setup times are considered together, as shown in Hurink and Knust (2001), so we have to modify the way the problem is solved and consider both scheduling and resource allocation decisions. In Section 2, we formally define our particular case, the parallel machine scheduling problem with setup times and precedence constraints between jobs. The methods and techniques of local and tree search used to solve the problem are described in Sections 3 and 4. Section 5 is dedicated to the computational experiments.
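The list scheduling scheme with the ECT allocation rule mentioned above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: jobs are taken in a precedence-compatible list order and each is placed on the machine where it would finish earliest, respecting machine availability, sequence-dependent setups and predecessor completions; the four-job instance is invented.

```python
# Sketch of list scheduling with the ECT allocation rule on parallel machines
# with precedence constraints and sequence-dependent setup times.

def list_schedule(p, preds, setup, m, job_list):
    """Return completion times C[i] for the given priority list (which must
    list every job after all of its predecessors)."""
    C = {}
    machine_free = [0] * m    # time at which each machine becomes idle
    last_on = [None] * m      # last job sequenced on each machine

    for i in job_list:
        ready = max((C[q] for q in preds.get(i, [])), default=0)

        def start_on(k):
            # machine k is usable after its idle time plus the setup from
            # its last job, and never before i's predecessors are finished
            s = setup.get((last_on[k], i), 0) if last_on[k] is not None else 0
            return max(machine_free[k] + s, ready)

        k = min(range(m), key=lambda q: start_on(q) + p[i])  # ECT rule
        C[i] = start_on(k) + p[i]
        machine_free[k] = C[i]
        last_on[k] = i
    return C

p = {1: 3, 2: 2, 3: 2, 4: 1}
preds = {4: [1]}              # job 1 must precede job 4
setup = {(2, 3): 2}           # setup needed if job 3 directly follows job 2
C = list_schedule(p, preds, setup, 2, [1, 2, 3, 4])
```

The point made above is that no single list fed to this rule is guaranteed to reach the optimum once precedences and setups interact, which is why the tree search of Section 4 branches on both the job order and the machine choice.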

2 Problem definition

We consider the following problem, in which a set J of n jobs needs to be processed on m parallel machines. The precedence relations between the jobs and the setup times, incurred when different jobs are sequenced on the same machine, must be respected. Preemption is not allowed, which means that each job i is processed without interruption during pi time units on the same machine. A machine can process no more than one job at a time. The decision variables of the problem are Si, the start time of job i, and Ci, the completion time of job i, where Ci = Si + pi. ri and di denote the release date and the due date of job i, respectively. We denote by E the set of precedence constraints between jobs. The relation (i, j) ∈ E, with i and j ∈ J, means that job i is performed before job j (i ≺ j). So job j can start only after the end of job i (Sj ≥ Ci). Finally, we define sij as the setup time of job j when it is processed immediately after job i on the same machine. Thus, for two jobs i and j processed successively on the same machine, we have either Sj ≥ Ci + sij if i precedes j, or Si ≥ Cj + sji if j precedes i. The problems under consideration are then P|prec,sij|∑Ci and P|prec,sij|Lmax.

3 A hybrid Tree-Local search method

3.1 Limited discrepancy tree search

To solve the problems under consideration, we use a method based on the discrepancies with respect to a reference heuristic. Such a method relies on the assumed good performance of this reference heuristic, thus making an ordered local search around the solution given by the heuristic. First, it explores the solutions with few discrepancies from the heuristic solution, and then it moves away from this solution until it has covered the whole search space. In this context, the principle of LDS (Limited Discrepancy Search) (Harvey and Ginsberg, 1995) is to explore first the solutions with discrepancies at the top of the tree, since it assumes that the heuristic makes the most important mistakes at the high levels, where it has still taken very few decisions. Several methods based on LDS have been proposed in order to increase its efficiency, for instance ILDS (Korf, 1996), DDS (Walsh, 1997) and DBDFS (Beck and Perron, 2000), which have been devised to avoid redundancy, and YIELDS (Karoui et al., 2007), where learning process notions are integrated.
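As a toy illustration of this discrepancy ordering (not the authors' implementation), the following sketch enumerates the leaves of a binary decision tree in LDS fashion: in waves of increasing discrepancy count, with discrepancies taken as high in the tree as possible first, as the text attributes to LDS.

```python
# Toy illustration of LDS ordering on a binary tree of a given depth.
# Branch 0 is the heuristic choice; branch 1 is a discrepancy.

def lds_leaves(depth, max_disc):
    """Yield 0/1 decision vectors with at most max_disc discrepancies,
    in waves of increasing discrepancy count."""
    def probe(prefix, remaining):
        if len(prefix) == depth:
            yield tuple(prefix)
            return
        if remaining > 0:               # deviate as high in the tree as possible
            yield from probe(prefix + [1], remaining - 1)
        yield from probe(prefix + [0], remaining)  # heuristic branch
    for wave in range(max_disc + 1):
        for leaf in probe([], wave):
            if sum(leaf) == wave:       # keep only leaves new to this wave
                yield leaf

leaves = list(lds_leaves(3, 1))
# visits the all-heuristic leaf first, then single-discrepancy leaves top-down
```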

3.2 Large neighborhood local search based on LDS

CDS (Climbing Discrepancy Search) (Milano and Roli, 2002) is a large neighborhood search method based on LDS. At each iteration it carries out a k-discrepancy search around the best current solution. If a better solution is found, then CDS takes its neighborhood as the new neighborhood to explore. If no better solution is found, then k is increased by one. CDDS (Climbing Depth-bounded Discrepancy Search) mixes the principles of CDS and of DDS (Hmida et al., 2007). The neighborhood of the best solution is limited not only by the number of discrepancies but also by the depth in the tree. In this work, we propose two variants of CDS and CDDS for the problems at hand. HD-CDDS (Hybrid Discrepancy CDDS) performs a search similar to CDDS, but, if for a defined depth level dmax we cannot find a better solution, then we authorize a small number of discrepancies at all levels. This method solves the problem of incompatibility between the limitation by depth level and the precedence constraints. The second variant, MC-CDS (Mix Counting CDS), is an application of CDS but with a modification in the way the discrepancies are counted: we use a binary counting for the discrepancies at the top levels of the tree and a non-binary counting for the remaining levels. We define in Section 4 the concepts of binary and non-binary discrepancy counting, as well as the other components of the LDS called at each iteration by the CDS local search method.

4 Branch-and-Bound components for P|prec,sij|∑Ci and P|prec,sij|Lmax

A tree structure with both levels of decisions (scheduling and resource allocation) is defined in Section 4.1. The exploration strategy (branching rules), the heuristics, and the definition of a discrepancy are explained in Section 4.2. The specific methods of node evaluation, such as lower bounds, constraint propagation mechanisms and dominance rules, are introduced in Section 4.3.

4.1 Tree structure

The problem cannot always be efficiently solved by a list algorithm, since it includes precedence constraints and setup times together (Hurink and Knust, 2001). In our case we have not only to find the best list of jobs but also to specify the best resource allocation. For practical purposes, we have mixed both levels of decision: one branch is associated with the choice of the next job to schedule and also with the choice of the machine. One node represents a list of p jobs and a partial schedule of these p jobs, and it entails at most (n − p)m child nodes. A solution is reached when we obtain a node with p = n. We suggest the following proposition in order to reduce the number of nodes to explore: for every job x having t (direct or indirect) successor jobs, we consider the assignments on the first min(m, t + 1) machines that allow x to be completed as soon as possible. That is, we schedule according to the ECT rule for all jobs, except for such jobs x with successors; for those, we consider scheduling them on more than one machine, to prevent them from blocking the best assignment for their successor jobs.

4.2 Exploration strategy

An initial solution is first obtained by the use of simple heuristics. For the job selection we use the SPT (Shortest Processing Time) rule for min ∑Ci and EDD (Earliest Due Date) for min Lmax. Once a job is selected, it is assigned to a machine according to the ECT (Earliest Completion Time) heuristic. Because of the existence of two types of decisions, we consider here two types of discrepancies: discrepancies on job selection and discrepancies on resource allocation. In the case of a p-ary tree, we have two different ways to count the discrepancies. In the first mode (binary), we consider that choosing the heuristic decision corresponds to 0 discrepancies, while any other choice corresponds to 1 discrepancy. The other mode (non-binary) considers that the farther we are from the heuristic choice, the more discrepancies we count. We suggest testing both modes for the job selection heuristic. For these decisions, the heuristic is likely to make important errors, since the setup times are not considered and they play a main role in job scheduling. On the other hand, for the choice of the machine, we use the non-binary mode, since we assume that the allocation heuristic only makes a few errors (ECT is a high-performance heuristic for this problem). We propose three different branching rules. The first one, called LDS-depth, is a classical depth-first search, but where the solutions obtained are limited by the allowed discrepancies. The other two strategies consider the number of discrepancies in the order the solutions are reached. The node to explore is the node with the smallest number of discrepancies, and with the smallest depth for the strategy called LDS-top, and with the largest depth for the strategy called LDS-low.

4.3 Node evaluation

The node evaluation differs depending on the studied criterion. For min ∑Ci, it consists in computing a lower bound. We selected the bound suggested in Nessah et al. (2005); it is based on the resolution of a one-machine relaxation of the problem. For min Lmax, the evaluation consists in triggering a satisfiability test based on constraint propagation involving energetic reasoning (Lopez and Esquirol, 1996). The energy is produced by the resources and consumed by the jobs. We apply this test to verify whether the best solution reachable from the current node can be at least as good as the best current solution. We determine the minimum energy consumed by the jobs (Econsumed) over a time interval ∆ = [t1, t2] and we compare it with the available energy Eproduced = m(t2 − t1); if Econsumed > Eproduced we can cut the branch. In our problem we also have to consider the energy consumed by the setup times. For an interval ∆ in which a set F of k jobs consume energy, we can easily show that the minimum number of setups that occur is k − m. So, we have to take the k − m shortest setup times of the set {sij}, i, j ∈ F, into account, and the energy consumed in an interval ∆ is

Econsumed = ∑i∈F max(0, min(pi, t2 − t1, ri + pi − t1, t2 − d′i + pi)) + ∑i=1..k−m s[i],

where the s[i] are the setup times of the set {sij}, i, j ∈ F, sorted in non-decreasing order, and d′i = Zbest + di. We also propose some dominance rules to solve the problems. They consist in trying to find whether there exists a dominant node, visited earlier or later, that allows us to prune the current node. The first one is a global dominance rule based on active schedules and max-flow computation (Leus and Herroelen, 2003). The other two rules have been designed to be compatible with the allowed discrepancies. They are also based on active schedules. For a given schedule, the dominance rules search for a combination of jobs such that one job starts earlier (S′i < Si) and the start times of the other jobs are not delayed (S′j ≤ Sj, ∀j ≠ i). This combination of jobs has to be achievable within the number of authorized discrepancies.
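The energetic satisfiability test can be sketched directly from the formula above. The code is an illustrative reconstruction, not the authors' implementation; `d_prime` stands for d′i = Zbest + di, and the two-job instance is invented.

```python
# Hedged sketch of the energetic satisfiability test: the minimum work that
# the jobs of F must receive inside [t1, t2], plus the k - m shortest setup
# times, must not exceed the energy m*(t2 - t1) supplied by the machines;
# otherwise the branch can be cut.

def energetic_cut(F, p, r, d_prime, setup, m, t1, t2):
    """Return True if the node can be pruned (consumed energy > produced)."""
    consumed = sum(
        max(0, min(p[i], t2 - t1, r[i] + p[i] - t1, t2 - d_prime[i] + p[i]))
        for i in F)
    k = len(F)
    if k > m:  # at least k - m setups must occur within the interval
        setups = sorted(setup[i][j] for i in F for j in F if i != j)
        consumed += sum(setups[:k - m])
    produced = m * (t2 - t1)
    return consumed > produced

# Two jobs that must both complete by time 5 cannot share one machine:
prune = energetic_cut([0, 1], {0: 4, 1: 4}, {0: 0, 1: 0},
                      {0: 5, 1: 5}, {0: {1: 1}, 1: {0: 1}}, 1, 0, 5)
# ... but with two machines the interval supplies enough energy:
ok = energetic_cut([0, 1], {0: 4, 1: 4}, {0: 0, 1: 0},
                   {0: 5, 1: 5}, {0: {1: 1}, 1: {0: 1}}, 2, 0, 5)
```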

5 Computational experiments In the literature we have not found instances for this particular problem, so we propose to test the methods on a set of randomly generated instances. We rst compare the dierent proposed variants of the LDS method to determine the best one for being included inside the CDS scheme. In the comparaison between the two dierent ways to count the discrepancies, binary and non-binary, we can say that the binary mode has shown a higher performance than the non-binary one. Out of a set of 60 instances, binary mode has found the best solution over 90 of the instances, independently of the branching rule. For the three branch rules comparison we nd that LDS-top (83.33) is the most ecient, since it reaches the best solutions more times and also with the shortest average search time. For the evaluation of the lower bound and of the energetic reasoning we nd that both allow the reduction of the search time and whenP the search cannot be nished we nd better solutions when we use them (55 for lb( Ci ) and 87 for the energetic 82

PMS 2008, April 28-30, İstanbul, Turkey

reasoning) than when we do not (45% and 68%, respectively). The best combinations for the node evaluation are the lower bound (for min ΣCi) and the energetic reasoning (for min Lmax) mixed with the local dominance rule (90% and 93%, respectively). Finally, we compare the four variants of the hybrid tree local search methods (CDS, CDDS, HD-CDDS, MC-CDS) implemented with LDS-top, the local dominance rule and binary counting (except for MC-CDS, which supposes a mixed counting). HD-CDDS reaches the best solution for more instances (70%) than the other methods and it also presents the smallest mean deviation from the best known solution (about 3%).

6 Conclusion In this paper we have studied limited discrepancy-based search methods and we have also proposed local search methods based on them. We have suggested an energetic reasoning scheme integrating setup times and we have proposed new global and local dominance rules that consider the discrepancies. These methods could be used to solve more complex problems involving setup times, like the hybrid flow shop or the RCPSP.

References
J. C. Beck and L. Perron. Discrepancy-bounded depth first search. In Second International Workshop on Integration of AI and OR Technologies for Combinatorial Optimization Problems (CP-AI-OR'00), Paderborn, Germany, 2000.
T. Cheng and C. Sin. A state-of-the-art review of parallel-machine scheduling research. European Journal of Operational Research, 47:271-292, 1990.
R. L. Graham, E. L. Lawler, J. K. Lenstra, and A. Rinnooy Kan. Optimization and approximation in deterministic sequencing and scheduling: a survey. Annals of Discrete Mathematics, 5:287-326, 1979.
W. D. Harvey and M. L. Ginsberg. Limited discrepancy search. In Proceedings of the 14th IJCAI, 1995.
A. Hmida, M. J. Huguet, P. Lopez, and M. Haouari. Climbing depth-bounded discrepancy search for solving hybrid flow shop scheduling problems. European Journal of Industrial Engineering, 1(2):223-243, 2007.
J. Hurink and S. Knust. List scheduling in a parallel machine environment with precedence constraints and setup times. Operations Research Letters, 29:231-239, 2001.
W. Karoui, M.-J. Huguet, P. Lopez, and W. Naanaa. YIELDS: A yet improved limited discrepancy search for CSPs. LNCS 4510, pp. 99-111, Springer, 4th International Conference on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems (CP-AI-OR'07), Brussels, Belgium, 2007.
R. Korf. Improved limited discrepancy search. In Proceedings of the 13th AAAI, 1996.
R. Leus and W. Herroelen. Stability and resource allocation in project planning. IIE Transactions, 36(7):667-682, 2003.
P. Lopez and P. Esquirol. Consistency enforcing in scheduling: A general formulation based on energetic reasoning. 5th International Workshop on Project Management and Scheduling (PMS'96), pp. 155-158, Poznań, Poland, 1996.
M. Milano and A. Roli. On the relation between complete and incomplete search: an informal discussion. In Proceedings of CP-AI-OR'02, Le Croisic, France, 2002.
R. Nessah, Ch. Chu, and F. Yalaoui. An exact method for the Pm/sds, ri/ΣCi problem. Computers and Operations Research, 34:2840-2848, 2005.
J. M. J. Schutten. List scheduling revisited. Operations Research Letters, 18:167-170, 1994.
T. Walsh. Depth-bounded discrepancy search. APES Group, Department of Computer Science, 1997.



Scheduling for Dynamic Mixed Model Assembly Line: A Realistic Approach José Pedro García-Sabater*, Carlos Andrés and Ramón Companys Departamento de Organización de Empresas, Universidad Politécnica de Valencia, Edificio 7D, Camino de Vera s/n, 46022, Valencia, Spain e-mail: [email protected]

1. Introduction The Mixed Model Assembly Line Sequencing Problem (MALSP), also known as the Car Sequencing Problem (CSP), has a very strong empirical motivation, as it defines the working pace of tens of automakers all over the world. A first approach to the problem appeared in (Okamura and Yamashita 1979). The problem was later stated by (Monden 1994) and has been studied by many different authors, who added different real characteristics: (Inman and Bulfin 1991), (Yano and Rachamadugu 1991), (Guerre et al. 1995) and (Pinedo and Xiuli 1999). This paper introduces what we have called the Dynamic Car Sequencing Problem. In car factories the assembly sequence changes quickly because the system must adapt its production mix to the dynamic demand by using a product pool that changes permanently. The dynamic character is becoming increasingly important given the growing trend towards mass customization, the continuous effort on cost reduction, the increased flexibility required of manufacturing plants, and the tightening of links with suppliers. The mass customization trend is increasing the variety and complexity of the units to be assembled, and the classical assumption of a small number of unit types to be sequenced in a static way is no longer tenable. Cost reduction is a requirement of competitive markets, driven by the waste reduction efforts in the automobile market. Since the first waste factor is overcapacity, the trend is to eliminate any worker on the line not working at full capacity, which means that the sequence should be as smooth as possible. This paper presents a dynamic heuristic rule to sequence products in a mixed model assembly line. This rule considers the previous constraints both to dynamically define the sequence and to react to environmental changes, and it has been applied to a real problem in a Spanish car factory. 2. The dynamic mixed model scheduling problem
The car production process starts with the welding of the so-called "body". Parts to be assembled at the welding shop have been previously cut and pressed at the press shop; however, the press shop may be considered as a supplier. Through this process of welding and assembling, the car receives some of its characteristics, such as the number of doors, sun-roof, etc. At this stage, product complexity is low, although a large number of parts have been added to it. At the end of the welding process the product is called "Body-In-White" (BiW). In the factory studied, a model may have more than 20 different BiWs, and the factory produces 1 to 4 different models. Then the Body-In-White is prepared and painted, becoming the so-called "Body-In-Color" (BiC). Only one more feature is added, yet it sometimes means an enormous growth in complexity: some manufacturers allow more than 15 different colors for each model, which means several hundreds (or even thousands) of different BiCs to be managed (Garcia-Sabater 2000). The original sequence that entered the body shop is highly disrupted during the long process of welding and painting, and differs from the predicted one. Some of the reasons for this sequence disruption are the existence of parallel buffers and parallel production lines, rework of faulty cars, and variation of the production sequence due to material shortage at some stages of the process. So once the painting process is finished, cars have to enter the assembly line. Given that the sequence calculated for the assembly line (the sequence with which production started) is not the sequence exiting the paint shop, some adjustments should be carried out. For these adjustments to be made, many plants have a buffer before the assembly line, containing bodies-in-color, that allows some kind of selection. An attempt to recover the original sequence with this selection is performed at this point.



Sophisticated buffers, like those reported in (Monden 1994), are used in the rescheduling process. These buffers use a series of parallel buffer lines to allow for the rescheduling procedure. The sequencer chooses, unit by unit, one from among those found at the first place of each parallel line. A more sophisticated buffer appears at the car factory under study in this line of research. It is an Automated Storage and Retrieval System (ASRS) where any unit at the buffer is available at any time. The more units there are at the buffer, the greater the availability and the easier it is to define a sequence similar to the original one. The dynamic mixed model scheduling problem can be defined from the previous comments. It consists of defining a subsequence for each and every period of time, obtained as an ordered combination of the units available at the start of that period (Aigbedo and Monden 1997). The period of time should be equivalent to the number of units in the subsequence multiplied by the cycle time of the line. Along with the previous and next subsequences, the sequence definition should seek to achieve a good sequence for the assembly line in terms of component consumption, options appearance, and workload smoothing. It can be said that as the variety and volume of each subassembly grow, the economic lot size decreases until a lot size of one is reached, which means that suppliers should be producing (or at least shipping) the exact product demanded by the client (i.e. the assembly line) when asked. Therefore, every supplier should react at the same pace as the assembly line. At any rate, when the supplier's work is synchronized with the assembly line, it is also part of the assembly line, because it has to follow the same production program, defined by the sequence, and any error or stock out will affect the whole system, not only the assembly line but also the rest of the connected suppliers.
When this situation is attained, the sequencing process becomes far more critical for more production units, and each disruption of the system can affect both the manufacturer and the suppliers. Any sequence unsteadiness might produce a stock out on any part of the extended assembly line, and the whole process would eventually stop.

3. The dynamic car sequencing problem
3.1 Notation
Indices
i: index for available products to be sequenced
j: index for sequenced units (ordered from the end to the beginning)
k: index for constraints
Constants
α_{i,k} ∈ {0,1}: 1 if product i has the characteristic associated to constraint k
ω_k: the weight assigned to the violation of constraint k
m_k, M_k, L_k: the parameters of constraint k: not less than m_k units, and not more than M_k units, within each L_k consecutive units
DSEG_i: date of segmentation of unit i (due date). It is relevant to note that most products will have the same due date, since a factory might produce more than 1000 units per day
NSEQ_i: number of sequence of unit i in the predicted sequence for day DSEG_i
Variables
δ_{i,k} ∈ {0,1}: 1 if introducing unit i into the sequence would violate constraint k
ε_{j,k} ∈ {0,1}: 1 if introducing unit i into the sequence would violate the control constraint associated to constraint k

3.2 Criteria
To know whether or not the introduction of a new product i in the sequence would violate a constraint, we introduce a binary variable δ_{i,k} with a value of 0 in the case that the constraint is not violated, and a value of 1 in the opposite case. Constraint k will be violated,



by its maximum (or its minimum) if we try to sequence a unit with α_{i,k} = 1 and Σ_{j=1}^{L_k−1} α_{j,k} + 1 ≥ M_k (or with α_{i,k} = 0 and Σ_{j=1}^{L_k−1} α_{j,k} < m_k). These relations might be expressed as

δ_{i,k} = 1 ← ( (α_{i,k} = 1) ∧ (Σ_{j=1}^{L_k−1} α_{j,k} ≥ M_k) ) ∨ ( (α_{i,k} = 0) ∧ (Σ_{j=1}^{L_k−1} α_{j,k} ≤ m_k) )
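This violation test can be sketched in a few lines of Python; the data layout (per-unit characteristic vectors, constraints as (m_k, M_k, L_k, ω_k) tuples, most-recently-sequenced unit first) is an assumption made for illustration only:

```python
def violation_weight(unit, window, constraints):
    """Weighted violation score of one candidate unit: sum of the weights
    ω_k of every constraint k that sequencing the unit next would violate.
    unit: characteristic vector (α_{i,k} for each k)
    window: already-sequenced units, most recent first
    constraints: list of (m_k, M_k, L_k, ω_k) tuples"""
    uw = 0
    for k, (mk, Mk, Lk, wk) in enumerate(constraints):
        # occurrences of characteristic k in the last L_k - 1 positions
        recent = sum(u[k] for u in window[:Lk - 1])
        if (unit[k] == 1 and recent >= Mk) or (unit[k] == 0 and recent <= mk):
            uw += wk
    return uw
```

With one constraint "at most 1 unit with the characteristic in any 2 consecutive positions" (m_k=0, M_k=1, L_k=2, ω_k=100), a candidate carrying the characteristic right after another such unit scores 100, while a candidate without it scores 0.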

The proposed heuristic orders units by using a criteria hierarchy: first, the units minimizing the expected constraint violation; then, over that set of best units, those with minimum DSEG_i; and over the remaining set of best units, the one with minimum NSEQ_i. For the sake of simplicity, this heuristic will be defined as LBH1 ≡ UW↑ DSEG↑ NSEQ↑, where UW_i = Σ_k ω_k δ_{i,k}. This is the basic criterion that has been used at the factory where the problem was stated. New criteria have been tested. For example, the CW heuristic has been defined to improve regularity. A larger (or shorter) subsequence has to be defined, with associated minimum and maximum limits. Those minimum and maximum limits are evaluated depending on the actual presence of a given characteristic in the warehouse. With such constraints the system leads to a sequence where the more usual products are sequenced first.

CW_i = Σ_k ω_k ε_{i,k}, where

ε_{i,k} = 1 ← ( (α_{i,k} = 1) ∧ (Σ_{j=1}^{L'_k} α_{j,k} ≥ M'_k) ) ∨ ( (α_{i,k} = 0) ∧ (Σ_{j=1}^{L'_k} α_{j,k} ≤ m'_k) )

The original Monden's method attempted to choose a particular unit by considering some kind of Euclidean distance from the expected consumption rate. The same parameter could not be employed because the appearance ratio is variable and unknown. Yet an appearance ratio can be calculated by counting the units at the buffer. Thus REG was defined as

REG_i = Σ_k ω_k (α_{i,k} − Σ_i α_{i,k} / N)²

All those parameters might be combined to give the following list-based heuristics:

LBH2 ≡ UW↑ CW↑ DSEG↑ NSEQ↑
LBH3 ≡ UW↑ CW↑ REG↑ DSEG↑ NSEQ↑
LBH4 ≡ UW↑ REG↑ DSEG↑ NSEQ↑
LBH5 ≡ UW↑ REG↓ DSEG↑ NSEQ↑

4. Evaluation of the heuristics Computational experiments were performed to compare the heuristics' ability to match the expected results provided by the car manufacturer. Given that creating a set of problems is very difficult due to the complexity of the problem, we decided to compare them on real problems. Problems were taken from a real manufacturer with the following characteristics: the buffer where the units are stored has a capacity of 500 units; the number of constraints ranged from 20 to 25, depending on the day; the penalty of each constraint ranged from 50 to 1,000. Thirty-five "static pictures" of the content of the buffer ASRS (set of available units) were chosen. The "static pictures" were taken at 11:15 or at 19:15, and the data were captured during the next 3 hours. The experiment evaluated the sequencing of 300 units with each "static picture"; therefore a total of 10,500 units were sequenced. As already stated, there were two main criteria that the users wanted to consider. The first objective of the DCSP is to reduce constraint violation. To measure this, we added the total violated weight and divided it by the number of units sequenced, obtaining the so-called "mean weight per unit". For each sequencing day we also consider whether the procedure has


obtained the best result; this is the "% success". At the manufacturer's, where the work has been done, a behaviour metric called BTD was used. BTD is the percentage of units produced during the day for which they were forecast. Given certain special circumstances, a new metric called BTD+1 was also used. BTD+1 is the percentage of units that have been produced on the forecast day or during the next day. The constraint violation of each heuristic is shown in the following table, where they are compared with the values offered by the company (Act Seq):

Table 1: Constraint Violation Metrics
                                              LBH1    LBH2   LBH3    LBH4   LBH5   Act Seq
Mean weight of violated constraints per unit  235.08  271.7  209.0   271.7  208.0  689.12
% success                                     71%     65%    68.57%  65%    77%    17%

The first row in Table 1 represents the mean violated weight per unit over the 10,500 sequenced units. One may easily observe that our LB heuristics outperform the actual sequencing procedure on constraint violation. LBH5 obtained the best result in 27 out of the 35 problems. The actual procedure obtains a good result in 7 problems, because these problems had no trouble fulfilling constraints. When comparing Table 1 with Table 2, we see that the BTD metrics worsen as the overall quality of the sequence improves.

Table 2: Build-To-Date Metrics
        LBH1   LBH2   LBH3   LBH4   LBH5   Act Seq
BTD     55.4%  54.5%  64.6%  54.5%  56.7%  76.1%
BTD+1   83.4%  87.7%  86.2%  87.7%  82.3%  94.8%

Although Build-To-Date does not improve with the new methodologies, the results show a good balance between Build-To-Date (85% BTD+1) and mean violated constraints per unit. Moreover, the data analysis shows that when no production problems were faced, the BTD was just as good as with the process currently being used. 5. Conclusions This paper focused on the development of five new heuristics to solve the dynamic MALSP for an automotive factory. Here, different requirements and new realistic problem constraints have been defined.
The constraints have been considered in the definition of the heuristic procedures. In order to check the validity of the proposed heuristics, they have been tested against real sequencing. The heuristics have been evaluated on their ability to develop a sequence with a low constraint-violation index. This is accomplished by collecting and analyzing a set of real sequences covering more than one month's production. The units considered were either those inside the buffer or those entering during the sequencing process, so the dynamic feature of the problem was tested. Acknowledgements This work has been developed within Spanish Government project GESCOFLOW DPI2004-02598. References Aigbedo, H. and Monden, Y., A parametric procedure for multicriterion sequence scheduling for Just-In-Time mixed-model assembly lines. International Journal of Production Research, 1997, 35, 2543-2564. Inman, R.R. and Bulfin, R.L., Sequencing JIT Mixed Model Assembly Lines, Management Science, 37(7), 901-904, 1991. Garcia-Sabater, J.P., Modelos, Métodos y Algoritmos de Resolución del Problema de Secuenciación de Unidades Homogéneas en el Sector del Automóvil, PhD Thesis, Universidad Politécnica de Valencia, 2000.



Guerre, F., Frein, Y., and Bouffard, R., An efficient procedure for solving a car sequencing problem, Proceedings ETFA-95 Symposium on Emerging Technologies and Factory Automation, 2, 385-393, 1995. Monden, Y., Toyota Production System: An Integrated Approach, Chapman and Hall, 1994. Okamura, H. and Yamashita, Y., A heuristic algorithm for the assembly line model-mix sequencing problem to minimize the risk of stopping the conveyor, International Journal of Production Research, 17(3), 233, 1979. Pinedo, M. and Xiuli, C., Operations Scheduling with Applications in Manufacturing and Services, McGraw-Hill, 1999. Yano, C. and Rachamadugu, R., Sequencing to minimize work overload in assembly lines with product options, Management Science, 37(5), 572-586, 1991.



An Application Oriented Approach for Scheduling a Production Line with Two Dimension Setups Jose P. Garcia-Sabater1, Carlos Andres2, Cristobal Miralles3, and Julio Juan Garcia-Sabater4 Dpto. Organización de Empresas - UPV Camí de Vera s/n, 46071 Valencia, Spain {1jpgarcia; 2candres; 3cmiralles; 4jugarsa}@omp.upv.es

1. Introduction Many production systems with closed-loop facilities have to deal with the problem of scheduling batches in consecutive loops (e.g. electrolytic painting of automotive parts in closed conveyors, cyclic painting of metallic furniture…). This paper will show a real productive system which combines cyclic and batch scheduling. The studied facility paints rear-view mirrors of varying geometries and colours for automotive manufacturers. The paint line consists of a moving train that forms a continuous loop and contains a fixed number of hollow spaces (positions), which are used to fix the so-called jigs where one or several products are to be hung. In this system, a problem of scheduling with setups of two types arises due to the described physical configuration. In the following sections the real problem will be described and modelled.

2. Real problem description The problem of cyclic batch scheduling arises in manufacturing facilities with closed conveyors. The studied facility is part of a manufacturer of plastic components with varying shapes and colours, and it has a closed conveyor where two-dimensional setups exist. Each product is defined by its geometry and its colour. The manufacturer has a closed-loop paint line that consists of a moving train forming a continuous loop, which contains a fixed number of hollow spaces, or so-called positions. Products are fixed on each hollow using a special tool, the so-called jig. Each jig might hold a specific and limited number of parts of a given geometry. For the sake of clarity, and without losing generality, we will consider that only one product is held on each jig; in the real problem this can be achieved by dividing the number of products by the required demand. It is important to note that every geometry uses a different type of jig, but that the same jig can be used for multiple colours. This means that when the product geometry to be painted changes, the jigs must also be changed, but when there is only an alteration in the colour it is not necessary to change the jigs. If the colour to be painted on successive units is different, the application of solvent through the pipes that are used to paint might be required. Therefore, when a change of colour is to be scheduled, a setup cost arises (horizontal setup). When a change of geometry is to be scheduled, a setup cost might be paid in terms of lost capacity. But, and here is the novelty, when a different geometry is scheduled at the same position in successive loops, a setup cost is also to be paid (vertical setup). The parts pass continuously through a painting area located at a fixed position on the line.
In the case studied, batch scheduling is desirable to minimize setup time between consecutive batches of similar parts while giving good response to the customer, and cyclic scheduling is desirable to reduce work-in-progress inventory between the facility and the automotive customers. Summarizing, each type of jig supports a defined number of parts (in our case 1 is general enough). In the system, different quantities of jigs exist for each geometry and it is not possible to exceed the maximum number of jigs per geometry daily. Two types of setups can be considered: horizontal setups and vertical setups. 1) Horizontal setups are conventional changeovers between consecutive batches. They are related to two types of changes: geometry changes and colour changes. Geometry changes are required, for instance, when the software controlling the pipes needs to be updated. Colour changes might require solvent to clean the pipes or space between two colours. 2) Vertical setups: as previously mentioned, when a geometry change is required in the same position but in the following loop, a jig change must be carried out. It is necessary to use worker capacity to do this change. The use of a worker has an associated cost, and this is the reason why every geometry change has an associated setup cost. For these reasons we have called this problem a two-dimensional setup scheduling problem. A considerable amount of literature has been written concerning setup scheduling (Allahverdi, 2007), but there are no references dealing with problems with two types of setups.

3. Problem statement We understand that the problem we are presenting is new (to the best of our knowledge) and therefore, for the sake of clarity, a parsimonious statement process has been adopted. Figure 1 shows the schedule implications of the problem presented. The figure shows three cycles or loops with a number of different geometries (G1, G2, G3) and colours (C1, C2, C3) in each loop. There are setup costs due to jig changes in consecutive loops and setup costs due to colour changes in consecutive positions.

Loop 1: G1C1  G1C1  G1C1  G2C1  G2C1
Loop 2: G2C1  G2C1  G2C2  G2C2  G2C2
Loop 3: G3C1  G3C1  G3C3  G2C3  G2C4

Figure 1. Schedules for Situation 1 at successive loops

The problem can be stated as follows: sequence a set of products (each defined by its geometry and its colour), each in a given quantity, knowing that on each position only one unit can be scheduled. The number of units of a given model in any set of BLOCS consecutive positions is limited by the availability of jigs to hang them. We have to minimize: (1) the number of colour changes (considering consecutive units), since each one has a cost (mainly in solvents); (2) the number of jig changes between the same place of consecutive loops (mainly workforce cost); (3) the number of empty positions (mainly energy cost). In this problem the geometry changeover in consecutive units is not relevant.

4. MILP formulation The following indices, parameters and variables are used to set the model.
Indices:
i: index for the set of products (i = 1..NP)
h: index for the set of geometries (h = 0..NH)
j: index for the positions (j = 1..L)
Parameters:
L: number of positions to schedule (L = Σ_i Q_i)
CC: cost of one colour change
CJ: cost of one jig change
NP: number of products
NC: number of colours
NH: number of geometries
PPL: number of positions per loop
Q_i: demand of product i
K_i: colour of product i, K_i ∈ [0, NC]
H_i: geometry of product i, H_i ∈ [0, NH]
Variables:
x_{i,j} ∈ {0,1}: 1 if product i is to be scheduled at position j
α_j ∈ {0,1}: 1 if a colour change exists between position j−1 and j
β_j ∈ {0,1}: 1 if a jig change exists between position j and position j−PPL
Model:

min CC Σ_{j>1} α_j + CJ Σ_{j>PPL} β_j

s.t.
Σ_j x_{i,j} = Q_i   ∀i   (c.1)
Σ_i x_{i,j} ≤ 1   ∀j   (c.2)
(Σ_i K_i x_{i,j} ≠ Σ_i K_i x_{i,j−1}) → (α_j = 1)   ∀j > 1   (c.3)
(Σ_i H_i x_{i,j−PPL} ≠ Σ_i H_i x_{i,j}) → (β_j = 1)   ∀j > PPL   (c.4)

The objective is to minimize the total cost of the schedule considering two types of cost: the cost of changing the colour (mainly solvent cost) and the cost of changing the jig (mainly manpower cost). The set of constraints (c.1) ensures that the required demand is scheduled. Constraints (c.2) state that each position is either empty or holds exactly one product. Constraints (c.3) define the value of α_j depending on whether a colour change has been scheduled. Constraints (c.4) define the value of β_j depending on whether a jig change has been scheduled. The previous model has been linearized through the following transformations. (c.3) can be transformed into the following set of constraints:

NC·α_j ≥ Σ_i K_i x_{i,j} − Σ_i K_i x_{i,j−1}   ∀j > 1   (c.3a)
NC·α_j ≥ Σ_i K_i x_{i,j−1} − Σ_i K_i x_{i,j}   ∀j > 1   (c.3b)

(c.4) can be transformed into the following set of constraints:

NH·β_j ≥ Σ_i H_i x_{i,j−PPL} − Σ_i H_i x_{i,j}   ∀j > PPL   (c.4a)
NH·β_j ≥ Σ_i H_i x_{i,j} − Σ_i H_i x_{i,j−PPL}   ∀j > PPL   (c.4b)
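The two cost terms of the objective can be checked on a candidate schedule with a small evaluator; the encoding (one product index per position, colour and geometry lookup lists) and the function name are illustrative assumptions, not part of the model:

```python
def schedule_cost(seq, colour, geom, ppl, cc, cj):
    """Total cost of a schedule: CC per colour change between consecutive
    positions (horizontal setups) plus CJ per jig/geometry change between
    the same position of consecutive loops (vertical setups).
    seq holds one product index per position; ppl is positions per loop."""
    cost = 0
    for j in range(1, len(seq)):                # horizontal: colour changes
        if colour[seq[j]] != colour[seq[j - 1]]:
            cost += cc
    for j in range(ppl, len(seq)):              # vertical: jig changes
        if geom[seq[j]] != geom[seq[j - ppl]]:
            cost += cj
    return cost
```

For example, two products with different colours and geometries placed as [0, 0, 1, 1] on a loop of 2 positions incur one colour change and two jig changes.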

In this paper our aim is to test the MIP model for this novel problem. In order to do so, two sets of instances were defined and solved optimally: small and medium. For the small instances NP=3, NC=2, NH=2, L=12 and PPL=4. For the medium set NP=5, NC=3, NH=3, L=30 and PPL=10. The following combinations of CC and CJ were tested: CC-CJ = {(1-1), (1-2), (2-1), (1-4), (4-1)}. For each problem we randomly generated the geometry and colour assigned to each product. The demand has been generated trying to balance the amount of units of each product to be scheduled. Each instance has been replicated five times. The model in LP format has been solved with CPLEX 9.1 on a Pentium IV 3.2 GHz computer with 1 GByte of RAM. All instances were solved optimally (small instances in less than 0.1 seconds and medium instances in less than 45 minutes). The next figures show two optimal results for an instance of medium size, with CC=2 and CJ=1, and with CC=1 and CJ=2, respectively:

Loop 1: G1C1  G1C1  G2C1  G2C1  G2C1  G1C1  G1C1  G2C1  G2C1  G2C1
Loop 2: G1C1  G1C1  G2C3  G2C3  G2C3  G1C2  G1C2  G3C2  G3C2  G3C2
Loop 3: G1C2  G1C2  G2C3  G2C3  G2C3  G1C2  G1C2  G3C2  G3C2  G3C2

Figure 2. Instance solved with CC=2 and CJ=1

Loop 1: G1C2  G1C2  G1C2  G3C2  G3C2  G1C2  G2C3  G2C3  G2C1  G2C1
Loop 2: G1C1  G1C1  G1C1  G3C2  G3C2  G1C2  G2C1  G2C1  G2C1  G2C1
Loop 3: G1C1  G1C1  G1C1  G3C2  G3C2  G1C2  G2C3  G2C3  G2C3  G2C3

Figure 3. Instance solved with CC=1 and CJ=2

The first solution has a total cost of 11, while the second solution has a total cost of 6.

5. Conclusions In this paper we have addressed a scheduling problem that can be considered innovative in the field of scheduling with setups. It considers two different types of changeovers: those between two consecutive batches (horizontal setups) and those due to the relation between consecutive loops in the same position (vertical setups). The objective is to schedule while minimizing the cost of both types of setups. The cost of a setup is related to an effective cost (either solvent or workforce), but also to an opportunity cost of lost capacity. The case treated in this paper does not fall into any category of the known scheduling problems in the literature, but it bears certain similarities with cyclic scheduling, batch scheduling and sequence-dependent flowshop scheduling. The problem has been modelled and solved optimally with CPLEX for small and medium instances. Some heuristics for real-size instances are now under development.

Acknowledgements This work was developed under research project GESCOFLOW (DPI2004-02598) supported by the Spanish National Science & Technology Commission CICYT.

References Allahverdi, A., Ng, C.T., Cheng, T.C.E., and Kovalyov, M.Y., A survey of scheduling problems with setup times or costs, European Journal of Operational Research, 2007 (accepted for publication).



Tree-Based Methods for Resource Investment and Resource Levelling Problems Thorsten Gather1 , Jürgen Zimmermann1 , and Jan-Hendrik Bartels1 . 1 Clausthal University of Technology, Clausthal-Zellerfeld, Germany E-mail: {thorsten.gather, juergen.zimmermann, jan-hendrik.bartels}@tu-clausthal.de

Keywords: project scheduling, resource investment problem, resource levelling problem

1 Introduction The paper at hand considers a new tree-based enumeration method for resource investment and resource levelling problems exploiting some fundamental properties of spanning trees devised by Gabow and Myers (1978). We consider project scheduling problems with time windows as for instance described in Neumann et al. (2003). The project under consideration is given by an activity-on-node network N with activity set V := {0, 1, ..., n, n+1}, arc set E ⊂ V × V and arc weights δ_ij (i, j ∈ V). Activities 0 and n+1 represent the beginning and completion, respectively, of the project. Let p_i ∈ Z≥0 be the duration of activity i ∈ V, which is assumed to be carried out without interruption. Moreover, let S_i ≥ 0 be the start time of activity i ∈ V. Given S_0 := 0 (i.e., the project begins at time zero), S_{n+1} represents the project duration. A vector S = (S_0, S_1, ..., S_{n+1}) with S_i ≥ 0 (i ∈ V) is called a schedule. Time lags between the activities are specified by minimum and maximum time lags. If there is a minimum time lag d^min_ij between two activities i and j, N contains an arc ⟨i, j⟩ with weight δ_ij := d^min_ij. If there is a maximum time lag d^max_ij, there is an arc ⟨j, i⟩ with weight δ_ji := −d^max_ij. In consequence of the existence of minimum as well as maximum time lags, N generally contains cycles. The set of schedules satisfying the temporal constraints S_j − S_i ≥ δ_ij for all arcs ⟨i, j⟩ ∈ E, given by the underlying minimum and maximum time lags, is denoted by S_T. In practice the success of a project depends in many cases on how a set of scarce renewable resources k ∈ R is utilized. If these resources have to be purchased (e.g., expensive machinery) and we want to minimize the total procurement cost, we obtain the so-called resource investment problem (RI). Let c_k ≥ 0 be the procurement cost per unit of resource k ∈ R and r_k(S, t) the amount of resource k used at time t given schedule S.
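The encoding of minimum and maximum time lags as arc weights, and the resulting time-feasibility condition, can be sketched as follows (a minimal illustration with assumed data structures, not the authors' implementation):

```python
def arcs_from_lags(min_lags, max_lags):
    """Build the arc set of network N: a minimum lag d_ij^min gives arc <i, j>
    with weight d_ij^min; a maximum lag d_ij^max gives the reverse arc <j, i>
    with weight -d_ij^max (which is why N generally contains cycles)."""
    arcs = [(i, j, d) for (i, j), d in min_lags.items()]
    arcs += [(j, i, -d) for (i, j), d in max_lags.items()]
    return arcs

def is_time_feasible(S, arcs):
    """S is time-feasible iff S_j - S_i >= delta_ij for every arc <i, j>."""
    return all(S[j] - S[i] >= d for i, j, d in arcs)
```

For instance, a minimum lag of 2 and a maximum lag of 5 between activities 0 and 1 admit S_1 = 3 but reject S_1 = 6.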
Then we minimize the objective function

    f(S) := Σ_{k∈R} ck · max_{0≤t≤d} rk(S, t)    (RI)

Often some measure of the variation of resource utilization is to be minimized if the resources k ∈ R should be used evenly over time. In this case, we consider a resource levelling problem (RL). For instance, let ck ≥ 0 be a cost incurred per utilized unit of resource k ∈ R and per time unit; then a possible objective function for levelling the resource utilization is

    f(S) := Σ_{k∈R} ck ∫_0^d rk²(S, t) dt    (RL)

(cf. Neumann et al. (2003)). Our project scheduling problem now consists of minimizing objective function RI or RL over the set of all time-feasible schedules, i.e.,

    Minimize f(S)  subject to  S ∈ ST    (P)

Let O ⊂ V × V be a strict order (i.e., an asymmetric and transitive binary relation) in activity set V.

ST(O) := {S ∈ ST | Sj ≥ Si + pi ∀ (i, j) ∈ O} is called the order polytope of O. As a matter of course, for the empty strict order O = ∅ we have ST(∅) = ST, and if O is the (finite) set of all inclusion-minimal feasible strict orders in activity set V, we have ST = ∪_{O∈O} ST(O) (cf. Neumann et al. (2003)). For problem P with objective function RI (RL), there is always a minimal point (extreme point) S of some order polytope ST(O) which is a minimizer of f on ST ≠ ∅ (cf. Neumann et al. 2003). Moreover, consider network N(O), which results from project network N by adding an arc ⟨i, j⟩ with weight pi for each pair (i, j) ∈ O. If N already contains an arc ⟨i, j⟩, its weight δij is replaced by max(δij, pi). Then each extreme point S of some order polytope ST(O) corresponds to a spanning tree, where the n + 1 arcs of such a spanning tree T, say arcs ⟨i, j⟩ ∈ E^T with weights δ^T_ij, correspond to n + 1 linearly independent binding temporal constraints Sj − Si = δ^T_ij (⟨i, j⟩ ∈ E^T), as described by Nübel (2001). In particular, each minimal point of an order polytope ST(O) corresponds to a spanning outtree of N(O) with root 0. Together with S0 = 0, the corresponding linear system of equations has a unique solution, namely the vertex in question. An optimal solution to problem P with objective function RI or RL can now be determined as follows. We consecutively fix start times of activities such that, step by step, temporal constraints Sj − Si ≥ δ^T_ij become binding. For objective function RI we have to ensure that the corresponding arcs constitute an outtree rooted at node 0, and for RL an arbitrary spanning tree of some network N(O). In our contribution we introduce an approach based on the bridge concept of Gabow and Myers (1978), which focuses on the enumeration of non-redundant spanning outtrees.
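As a small executable illustration of the temporal constraints Sj − Si ≥ δij (a sketch under our own naming, not code from the paper), the earliest-start schedule, i.e. the unique minimal point of ST when ST ≠ ∅, can be computed by label correcting on the longest-path problem in N. Maximum time lags introduce negative arc weights and cycles; a positive-weight cycle means ST = ∅.

```python
# Sketch (not from the paper): computing the earliest-start schedule ES with
# ES_j - ES_i >= delta_ij for all arcs, via label correcting (Bellman-Ford on
# longest paths). Arc weights may be negative due to maximum time lags.

def earliest_starts(n, arcs):
    """arcs: list of (i, j, delta_ij); nodes 0..n+1; returns ES, or None if a
    positive-weight cycle makes the temporal constraints infeasible."""
    ES = [0] * (n + 2)
    for _ in range(n + 2):          # at most |V| relaxation rounds
        changed = False
        for i, j, d in arcs:
            if ES[i] + d > ES[j]:   # constraint S_j >= S_i + delta_ij violated
                ES[j] = ES[i] + d
                changed = True
        if not changed:
            return ES
    return None                      # still changing after |V| rounds: ST empty

# Example: a min lag of 3 from activity 1 to 2 and a max lag of 5, encoded as
# arc <2,1> with weight -5; activity 2 has duration 2 up to the sink 3.
arcs = [(0, 1, 0), (1, 2, 3), (2, 1, -5), (2, 3, 2)]
print(earliest_starts(2, arcs))  # [0, 0, 3, 5]
```

The same routine reports infeasibility when a cycle of positive total weight exists, matching the observation that N may contain cycles once maximum time lags are present.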

2 Tree-based enumeration approach

In what follows we present a new approach enumerating all time-feasible spanning (out-)trees for network N(O). First we introduce how to generate all spanning outtrees rooted at node 0. This concept will then be modified to enumerate only time-feasible spanning outtrees and, finally, all arbitrary time-feasible spanning trees of N(O). For this purpose we term an outtree T with V^T = V and E^T ⊆ E a spanning outtree, and T̃ with V^T̃ ⊂ V and E^T̃ ⊂ E a sub-outtree of N(O). The set of nodes that are not part of T̃ is denoted by V^A = V \ V^T̃. Finally, we term an arc ⟨x, v⟩ of the current outtree T̃ a bridge if there is no alternative arc that connects some node i ∈ V^T̃ with node v. Figure 1(a) shows a case where arc ⟨x, v⟩ is not a bridge; this is due to the dotted arcs connecting T̃ with v. Given a sub-outtree T̃, we enumerate all spanning outtrees T with E^T ⊃ E^T̃ based on a depth-first search. Consequently, we expand a given outtree in each iteration by an arc leading from some node i ∈ V^T̃ to a node j ∈ V^A.

Figure 1: Bridge test (a) for general spanning outtrees and (b) for arbitrary time-feasible spanning trees

Having enumerated all spanning outtrees containing T̃, we destroy T̃ by deleting the last added arc ⟨x, v⟩ and enumerate the remaining outtrees containing arcs E^T̃ \ {⟨x, v⟩}. Thus, we inductively find all spanning outtrees of N(O). To avoid redundancy we perform a so-called bridge test each time we remove an arc from tree T̃. This test is crucial for the performance of our approach. For the implementation we used a so-called pre-order on the set of tree nodes, as proposed by Gabow and Myers (1978). If some arc ⟨x, v⟩ that has been removed from T̃ turns out not to be a bridge, we temporarily remove ⟨x, v⟩ from network N(O) as well. Thus, we prevent this arc from being selected for the current sub-outtree T̃ again. Subsequently, we expand T̃ via an alternative arc ⟨i, v⟩. If, otherwise, ⟨x, v⟩ represents a bridge, we further reduce T̃ and restore all arcs that have been removed from network N(O) within iteration |E^T̃|, since we may select these arcs (e.g. ⟨x, v⟩) again once we have modified T̃ \ {⟨x, v⟩}. The algorithm terminates if we are back in iteration 0 and the bridge test shows the last removed arc to be a bridge.
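The enumeration scheme can be illustrated by a much-simplified sketch (an assumption on our part, not the authors' implementation): each step either commits to one expanding arc or permanently excludes it, so every spanning outtree is generated exactly once; the pre-order bridge test, which prunes exclusion branches that cannot be completed, is omitted here.

```python
# Simplified sketch of the enumeration scheme (not the paper's implementation):
# every spanning outtree rooted at node 0 is generated exactly once by either
# committing to an expanding arc or permanently banning it. The bridge test of
# Gabow and Myers (1978), which would detect hopeless exclusion branches early,
# is omitted for brevity.

def spanning_outtrees(n, arcs):
    """n: number of nodes 0..n-1 (root 0); arcs: list of (i, j) arcs.
    Returns all spanning outtrees rooted at 0 as frozensets of arcs."""
    results = []

    def expand(tree, reached, banned):
        if len(reached) == n:                    # tree spans all nodes
            results.append(frozenset(tree))
            return
        for a in arcs:
            i, j = a
            if i in reached and j not in reached and a not in banned:
                # branch 1: all spanning outtrees containing tree + {a}
                expand(tree | {a}, reached | {j}, banned)
                # branch 2: exclude a from now on (this is where a bridge
                # test would decide whether continuing can still succeed)
                banned = banned | {a}

    expand(frozenset(), {0}, frozenset())
    return results

# Example: 3 nodes with arcs 0->1, 0->2, 1->2, 2->1 admit three spanning outtrees
trees = spanning_outtrees(3, [(0, 1), (0, 2), (1, 2), (2, 1)])
print(len(trees))  # 3
```

Without the bridge test the exclusion branch may run into dead ends; the point of the Gabow-Myers test is exactly to cut those off before any work is wasted.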

3 Enumeration of time-feasible trees

To find an optimal solution to function RI efficiently, we had to modify the original algorithm so that we generate only time-feasible outtrees according to some order network N(O). Since in an outtree of network N(O) an arc ⟨i′, j′⟩ may become infeasible if some arc ⟨i, j⟩ becomes binding, we must ensure that we do not add arcs to T̃ that lead to infeasible start times for one or more activities. Therefore, we test in each iteration whether one or more arcs become infeasible when we add arc ⟨i, j⟩ to T̃. In this case, we lock these infeasible arcs until we remove ⟨i, j⟩ from T̃ again. This procedure guarantees that T̃ can only be expanded time-feasibly. However, we may reach a state where we cannot time-feasibly expand an incomplete outtree. We call such an incomplete and non-expandable tree a pseudo-tree. In general, the introduced procedure remains applicable if we find a pseudo-tree; we only need to enhance the bridge test by additional cases. Considering pseudo-trees, we need to refine the definition of the set V^A = V \ V^T̃. This is due to the fact that we must now further distinguish between the set V^A of nodes that are not part of T̃ but have been part of the last pseudo-tree T, and an additional set V^B of nodes that have not been part of T, with V^A ∪ V^B = V \ V^T̃. To decide whether an arc ⟨x, v⟩ represents a bridge or not, we need to test whether one of the following conditions holds:

(1) There is a feasible arc ⟨i, v⟩ leading from some node i ∈ V^T̃ to node v.

(2) There is a feasible arc ⟨i, j⟩ leading from some node i ∈ V^T̃ to some node j ∈ V^B and there is a directed path from j to v.

(3) There is a feasible arc ⟨j, v⟩ leading from some node j ∈ V^B to v, given that at least one locked arc leading from T̃ to j becomes unlocked when ⟨x, v⟩ is removed.

If none of these conditions holds, ⟨x, v⟩ is a bridge. As mentioned, we modified the concept in order to find the candidate solutions for problem RL. To find an optimal solution to RL we need to enumerate all arbitrary time-feasible spanning trees. That means that we expand T̃ not only by arcs forming a directed path from some node i ∈ V^T̃ to some node v ∈ V^A. For arbitrary spanning trees, each arc or sequence of arcs connecting some node i ∈ V^T̃ with some node j ∈ V^A in any direction becomes relevant. With regard to the last disconnected node v of T̃, the nodes j ∈ V^A are exactly the successors of v in the search tree of the last spanning tree or pseudo-tree, respectively. The criterion for some arc ⟨x, v⟩ to be a bridge is then that there does not exist any sequence of arcs connecting T̃ with any successor of node v in the search tree, or with v itself. Notice that the direction of the arcs can now be neglected. The three cases introduced for the enumeration of outtrees can easily be adapted to the general case. The bridge test for arbitrary time-feasible spanning trees or pseudo-trees, respectively, is illustrated in Figure 1(b).

4 Preliminary computational study & outlook

Preliminary computational results show that the proposed procedure is promising, solving instances with up to 30 activities for objective function RI and up to 20 activities for function RL to optimality in reasonable time. In the course of our study we compared the introduced approach with a comparable implementation of the approach proposed by Nübel (2001), which is the most promising one in the open literature. Table 1 shows, by way of example, the results of our approach as well as of the approach of Nübel for the resource levelling problem RL on two test sets with 10 and 20 activities and 1, 3, or 5 renewable resources, presented by Weglarz (1999). Each of the test sets contains 270 problem instances, and the depicted values represent the percentage of instances that could be solved to optimality in 10, 60, and 2000 seconds (Opt10, Opt60, Opt2000), respectively, as well as the average computation time (CT). The tests were performed on a Pentium 4 with 3.2 GHz clock speed and 768 MB memory. As for problems RI as well as RL a candidate schedule for an optimal solution may be represented by more than one time-feasible spanning (out-)tree, we have developed an enhancement of the proposed approach that avoids the generation of (out-)trees leading to redundant schedules. This improvement leads to a significant speed-up.

           Our approach     Nübel            Our approach     Nübel
           10 activities    10 activities    20 activities    20 activities
Opt2000    98.15%           97.93%           22.63%           21.91%
Opt60      90.37%           89.63%           14.10%           13.09%
CT         68.33s           74.89s           174.50s          215.00s

Table 1: Computational results

References

1. Gabow, H.N., Myers, E.W. (1978). Finding all spanning trees of directed and undirected graphs. SIAM Journal on Computing 7, 280-287.
2. Neumann, K., Schwindt, C., Zimmermann, J. (2003). Project Scheduling with Time Windows and Scarce Resources. Springer, Berlin.
3. Nübel, H. (2001). The resource renting problem subject to temporal constraints. OR Spektrum 23, 359-381.
4. Weglarz, J. (1999). Project Scheduling - Recent Models, Algorithms and Applications. Kluwer, Boston.


The Sequential Ordering Problem: A New Approach

David Gómez-Cabrero¹, Francisco Ballestín², Vicente Valls³

¹ Group of Information and Communication Systems, Universidad de Valencia, Spain. e-mail: [email protected]
² Dpto. de Estadística e Investigación Operativa, Universidad Pública de Navarra, Spain. e-mail: [email protected]
³ Dept. de Estadística e Investigación Operativa, Universidad de Valencia, Spain. e-mail: [email protected]

Keywords: sequential ordering problem, genetic algorithm, crossover operator

1. Introduction

The Travelling Salesman Problem (TSP) is one of the problems that has attracted most interest from the research community. A generalization of the TSP is the Asymmetric Travelling Salesman Problem (ATSP) with precedence relationships; this problem is known as the Sequential Ordering Problem (SOP). The SOP consists of finding a route through a given set of cities with the lowest possible cost, in such a way that given precedence constraints between cities are satisfied. The SOP has been applied to routing problems where pickups must be performed before delivering some products, as in [1]; to on-line routing of a stacker crane in an automated storage system, as in [2]; and to scheduling problems with precedence between activities, as in [3]. The SOP can be represented by means of two graphs. The first one, D = (V, A), is a complete digraph where V = {1, ..., n} is the set of cities and A is the set of connecting arcs between cities. The dummy cities 1 and n represent the beginning and the end of the tour. Each arc has an associated cost cij ≥ 0; we consider c1j = cj(n+2) = 0 for all j ∈ V. The second graph, P = (V, R), is a digraph where R represents the precedence relationships. An arc (i, j) ∈ R denotes a precedence relation between i and j, i.e. node i must precede node j in any solution. City i is a predecessor of city j if there exists an (i, j) path in graph P. A solution for the SOP is any precedence-feasible permutation of the cities (i.e., a vector S = (s1, …, sn) of integer numbers between 1 and n that satisfies: (i, j) ∈ R implies that city j is not placed before city i). The SOP was initially proposed by Escudero ([4] and [5]). As a generalization of the TSP, the SOP is NP-hard. Few heuristic algorithms have been proposed for the SOP. The first heuristic algorithm and the first improving method for the SOP were proposed in [4] and [5], respectively.
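The definitions above translate directly into a feasibility check for a candidate tour (a hypothetical helper of our own, not from the paper):

```python
# Hypothetical helper illustrating the SOP definitions above (not from the
# paper): evaluate a tour S against a precedence set R and arc costs c_ij.

def tour_cost(S, cost, R):
    """S: tuple of cities (a permutation); cost: dict (i, j) -> c_ij >= 0;
    R: set of pairs (i, j) meaning city i must precede city j.
    Returns the tour cost, or None if a precedence constraint is violated."""
    pos = {city: k for k, city in enumerate(S)}
    if any(pos[i] > pos[j] for (i, j) in R):
        return None                                # some j placed before its i
    return sum(cost[S[k], S[k + 1]] for k in range(len(S) - 1))

R = {(2, 3)}                                       # city 2 must precede city 3
cost = {(1, 2): 5, (2, 3): 2, (3, 4): 0, (1, 3): 1, (3, 2): 2, (2, 4): 0}
print(tour_cost((1, 2, 3, 4), cost, R))            # 7
print(tour_cost((1, 3, 2, 4), cost, R))            # None
```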
Different meta-heuristic methods have also been proposed: genetic algorithms [6], [7]; serial and parallel rollout algorithms [8]; and an ant-colony algorithm [9]. Genetic algorithms (GA) are based on natural selection as it is understood in biology: given an initial population of individuals, individuals are crossed and mutated and a new offspring is generated. A selection process selects, from the old population and the offspring, the individuals that will survive to the next generation; given an evaluation function, survivors are selected from among the best individuals. The origins of GA can be found in [10]. A classical text which settled the computational genetic theory background and its applications is [11]. A state-of-the-art monograph is [12]. Valls et al. [13] have proposed a genetic algorithm HGA for the resource-constrained project scheduling problem (RCPSP). In HGA, individuals are ordered lists of activities. HGA uses the peak crossover operator. This operator selects "good" sub-lists of the father and inserts them into the mother. The aim is that the children inherit "good" orderings of the activities that in the parents led to good solutions. Valls et al. [14] have proposed an evolutionary algorithm EVA for the resource-constrained project scheduling problem subject to temporal constraints (RCPSP/max). EVA uses the conglomerate-based crossover operator, which is a generalization of the peak crossover operator. Given three activity lists L1, L2, and L3, this new operator first obtains the "good" sub-lists of L2 and L3, then selects a subset of the elements so obtained and inserts them in L1. The meaning of "good" is context dependent. In the case of the RCPSP it refers to resource utilization. For the case of the RCPSP/max it refers to feasibility achievement. In this paper we investigate and validate new ways of implementing the fundamentals of the above mentioned crossover operators for the efficient solution of the SOP.
We have developed a new class of crossover operators that share a common modular structure and differ in the implementation of the modules. We have also developed a genetic algorithm GA to test the relative efficiency of the proposed operators and to compare the different versions of GA with state-of-the-art algorithms. The rest of the paper is organized as follows. In Section 2, we present the structure of the algorithm GA. Section 3 is devoted to the description of the modular structure of the new class of crossover operators and proposes different implementations of the modules. Due to the lack of space we omit the precise description of some procedures; detailed information can be obtained from the authors. Computational tests are included in Section 4. Finally, a summary and some concluding remarks appear in Section 5.

2. Genetic algorithm

Fig. 1 shows an outline of the proposed genetic algorithm. The algorithm starts by generating an initial population, POP, of nPOP solutions (individuals). The initial population is computed by a greedy randomized algorithm (GRA) that is similar to other methods used to generate initial solutions (see [6], [7] and [9]). Next, a local search algorithm is applied to each solution. POP is ordered in increasing order of cost. We use the generational replacement policy (see [12]): at each iteration, nPOP new individuals are generated and inserted in NPOP, which starts as an empty set. For the next generation, the best nPOP individuals from POP ∪ NPOP are selected and replace population POP; NPOP is then emptied. No two individuals with the same evaluation are allowed in POP; if two individuals have the same fitness value, the newest one replaces the eldest. The random selection of individuals in step 3.1 is biased by their position in POP (lower order implies greater probability). Generate_Individual is the crossover operator that will be explained in Section 3. The algorithm finishes when the number of solutions generated is greater than maxsol1 (Condition 1) or when the number of solutions generated without improving the worst individual in POP is greater than maxsol2 (Condition 2). The algorithm that computes the initial solutions is a randomized constructive algorithm based on the "nearest-neighbour" search. A solution is computed by first inserting city 1 in the first position and then sequentially inserting the remaining cities. The city to be inserted next is selected from the set CANDIDATE, which includes all the cities whose predecessor cities have already been inserted. The selection process is based on a roulette method considering costs cij, where i denotes the last city inserted in the partial solution and j denotes the candidate city.
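The randomized nearest-neighbour construction can be sketched as follows. Note that the exact roulette weighting is our assumption: the abstract only states that selection is based on the costs cij, so we take weights inversely related to the arc cost.

```python
import random

# Sketch of the greedy randomized constructive algorithm: cities whose
# predecessors are all inserted form CANDIDATE; the next city is drawn by a
# roulette biased towards cheap arcs from the last inserted city. The exact
# weighting (1 / (1 + cost)) is our assumption, not fixed by the abstract.

def construct(n, cost, R, rng=random):
    preds = {j: {i for (i, k) in R if k == j} for j in range(1, n + 1)}
    tour, inserted = [1], {1}                     # city 1 is always first
    while len(tour) < n:
        candidate = [j for j in range(1, n + 1)
                     if j not in inserted and preds[j] <= inserted]
        weights = [1.0 / (1.0 + cost[tour[-1], j]) for j in candidate]
        nxt = rng.choices(candidate, weights=weights, k=1)[0]
        tour.append(nxt)
        inserted.add(nxt)
    return tour

# Toy instance where the precedences force the order 1, 2, 3, 4
R = {(2, 3), (2, 4), (3, 4)}
cost = {(i, j): 1 for i in range(1, 5) for j in range(1, 5) if i != j}
print(construct(4, cost, R))  # [1, 2, 3, 4]
```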
Local Search is the 3-opt path-preserving algorithm presented in [9], using the STACK-IN search policy for low values of maxsol1 and the COM search policy for high values of maxsol1. nPOP is set to 50.

1. Generate initial population POP = {S1, S2, …, SnPOP}
2. NPOP = ∅
3. While no stopping rule is fulfilled: For i = 1 to nPOP:
   3.1. Randomly select S’ and S’’ from POP
   3.2. S = Generate_Individual(Si; S’, S’’)
   3.3. S* = Local_Search(S)
   3.4. NPOP = NPOP ∪ {S*}
4. POP = {nPOP best individuals in POP ∪ NPOP ordered by cost}
5. Go to 2

Fig. 1. Genetic Algorithm.
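A minimal executable reading of Fig. 1 is sketched below (our own sketch: the crossover, local search, and stopping rules are stand-ins for the paper's components, and the stopping conditions are reduced to a fixed number of generations).

```python
import random

# Minimal sketch of the generational loop of Fig. 1. Generate_Individual,
# Local_Search and the two stopping conditions are stand-ins (arbitrary
# callables and a fixed generation count).

def genetic_algorithm(init_pop, crossover, local_search, fitness, max_gens):
    n_pop = len(init_pop)
    pop = sorted((local_search(s) for s in init_pop), key=fitness)
    for _ in range(max_gens):
        npop = []
        # rank-biased selection: lower position in POP -> higher probability
        weights = [len(pop) - k for k in range(len(pop))]
        for i in range(len(pop)):
            s1, s2 = random.choices(pop, weights=weights, k=2)
            npop.append(local_search(crossover(pop[i], s1, s2)))
        # no two individuals with the same evaluation: the dict keeps the
        # newest copy per fitness value; then keep the n_pop best
        merged = {fitness(s): s for s in pop + npop}
        pop = sorted(merged.values(), key=fitness)[:n_pop]
    return pop

# Toy usage: minimizing the sum of a tuple with a componentwise-min "crossover"
random.seed(0)
result = genetic_algorithm([(i, i + 1) for i in range(5)],
                           crossover=lambda a, b, c: tuple(min(x, y) for x, y in zip(a, b)),
                           local_search=lambda s: s,
                           fitness=sum,
                           max_gens=10)
print(result[0])  # (0, 1)
```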

3. Crossover Operators

In this section the common modular structure of a new class of crossover operators is presented. This structure is context-free. We also propose alternative implementations of the modules. Different crossover operators can be obtained by combining the different implementations of the modules. The common structure. Given three solutions S, S’ and S’’, a new solution S* is obtained in three steps: (a) generate a set, GSL, of good sub-lists from S’ and S’’; (b) select an ordered subset, OS, from GSL; and (c) construct a new solution S* using OS. To completely specify a crossover operator it is required to determine the implementation of the three modules and the evaluation function that will be used to measure the "goodness" of a sub-list.

We have developed five different evaluation functions for sub-lists. They combine in different ways information about the sum of the costs of the arcs included in a sub-list, the initial city, the final city, the number of cities in a sub-list, the cost of the shortest path between cities, and the current best solution. We have also developed two different methods to generate "good" sub-lists from a given solution FATHER. In both methods, minc and maxc denote the minimum and the maximum number of cities that can be included in a sub-list, respectively. The first method, CUT, consists in randomly partitioning FATHER into sub-lists of sizes between minc and maxc and then selecting those sub-lists whose evaluation value is lower than a given threshold. The second method, PCUT, iteratively generates sub-lists from FATHER. Each sub-list is generated as follows: an initial sub-list SL is constructed as the sub-list of FATHER that contains minc cities and starts at a randomly selected city i1. Sub-list SL is iteratively enlarged by adding cities one by one at its end in the order of FATHER. After each addition, SL is evaluated and the best SL obtained so far is maintained. The process stops when the number of cities in SL is greater than maxc, when a predefined threshold value is reached by the evaluation function, or when the final dummy city is reached. In this last case, the construction of the next sub-list starts from the second city in FATHER. The best sub-list generated, SL*, is included in GSL. The cities in SL* are eliminated from FATHER. Constructing an ordered subset of sub-lists. Let us suppose that GSL = {SL1, ..., SLk} is a set of "good" sub-lists obtained by applying the above procedure to solutions S’ and S’’. This sub-section presents a procedure to construct an ordered subset OS of GSL that can be used to generate a new solution.
This procedure works on an auxiliary graph L = (M, N), where M is the set of "good" sub-lists and N is the set of arcs: (SLp, SLq) is an arc iff SLp ∩ SLq = ∅ and SLq is not a predecessor of SLp with respect to graph P. SLq is a predecessor of SLp with respect to graph P iff there exist both a city i in SLq and a city j in SLp such that (i, j) ∈ R. Given a sub-list SLp in GSL, let us define F(SLp) = {SLq ∈ GSL : (SLp, SLq) ∈ N}. The following procedure constructs the ordered subset OS:

1. Define OS = ∅ and Pool = M
2. While (Pool ≠ ∅)
   2.1. Select a sub-list SL from Pool. Insert SL at the end of OS.
   2.2. Make Pool = Pool ∩ F(SL)

The sub-list SL is selected in step 2.1 by means of a roulette method based on the evaluation of the sub-lists (lower evaluation, higher probability). For the justification of this procedure and further details we refer to [15]. Given the ordered set OS, we have considered two methods for generating a new solution S*. The first one, IN1, is the same as the one used in [14] and uses a third parent S as a guiding structure. The second method, IN2, constructs a new SOP instance by condensing every sub-list into a single super-city in both graphs D and P. Note that this method does not use a third parent.
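The selection loop can be sketched executably (our stubs: `evaluate` and the compatibility predicate stand in for the paper's evaluation functions and the auxiliary graph L):

```python
import random

# Sketch of the ordered-subset construction: repeatedly roulette-pick a
# sub-list (lower evaluation -> higher probability) and keep in the pool only
# sub-lists compatible with everything selected so far. `compatible(p, q)`
# stands in for "(p, q) is an arc of the auxiliary graph L".

def ordered_subset(sublists, compatible, evaluate, rng=random):
    OS, pool = [], list(sublists)
    while pool:
        weights = [1.0 / (1.0 + evaluate(sl)) for sl in pool]
        sl = rng.choices(pool, weights=weights, k=1)[0]
        OS.append(sl)
        pool = [q for q in pool if q != sl and compatible(sl, q)]
    return OS

# Toy usage: sub-lists as city tuples, compatibility = disjointness,
# evaluation = length (all stand-ins)
sublists = [(1, 2), (3, 4), (2, 5)]
disjoint = lambda p, q: not (set(p) & set(q))
print(ordered_subset(sublists, disjoint, len))
```

By construction, every sub-list appended later is compatible with all sub-lists selected before it, which is exactly what the shrinking pool guarantees.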

4. Computational experiments

The experiments were performed on a personal Pentium M computer at 1.7 GHz. The algorithms have been coded in C. For test instances, we have used different test sets extracted from the library TSPLIB [16]. The set SET1 contains the instance sets ft53, ft70 and kro124p. SET2 contains the instances in SET1 plus the instance prob100, which is especially difficult to solve. SET3 contains the rbg instances with at most 253 cities. SET4 contains the rbg instances with more than 253 cities. To instantiate a crossover operator it is necessary to specify: the sub-list generation method (CUT, PCUT), the evaluation function used in the sub-list generation method PCUT (Eval1, ..., Eval5), the evaluation function used in the subset construction method (Eval1, ..., Eval5), and the solution generation method (IN1 or IN2). Given that the sub-list generation method CUT does not use any evaluation function, this gives a total of 60 different crossover operators. To compare all possible crossover operators, a test-rejection process was undertaken. A crossover operator is evaluated by running the GA 5 times with this crossover operator on each instance in a given instance set. For each combination of crossover operator and instance, the average cost (AVG), the minimum cost (MIN), and the average computational time were computed. We also computed the deviations of MIN and AVG from the cost lower bounds LB as in [17]. The first, second and third experiments were conducted on the sets SET1 and SET2, SET3, and SET4, respectively. SET1 and SET2 are considered generally easy to solve, whereas the instances in SET4 are by far the most difficult. The analysis of the results obtained in one experiment determines the crossovers that are not rejected and will be tested in the next experiment. Results of these experiments will be presented during the PMS workshop.

5. Conclusions

We have proposed a new class of crossover operators to be used in a genetic algorithm for the SOP, which extend and adapt the genetic crossover proposed in [14]. The computational experiments on standard test instances show that the more powerful versions of our algorithm are competitive with state-of-the-art algorithms for the SOP, and for some instances they obtain better results on average, as happens with prob100, which is considered one of the hardest SOP instances.

Acknowledgements This research was partially supported by the Ministerio de Educación y Ciencia under contract DPI2007-63100.

References

[1] Fiala Timlin, M.T., Pulleyblank, W.R.: Precedence constrained routing and helicopter scheduling: Heuristic design. Interfaces 22, Vol. 3 (1992) 100-111.
[2] Ascheuer, N.: Hamiltonian Path Problems in the On-line Optimization of Flexible Manufacturing Systems. PhD Thesis, Tech. Univ. Berlin (1995). Available at http://www.zib.de/ZIBbib/publications.
[3] Ascheuer, N., Escudero, L.F., Grötschel, M., Stoer, M.: A cutting plane approach to the sequential ordering problem (with applications to job scheduling in manufacturing). SIAM Journal on Optimization 3, Vol. 1 (1993) 25-42.
[4] Escudero, L.F.: An inexact algorithm for the sequential ordering problem. European Journal of Operational Research 37 (1988) 236-249.
[5] Escudero, L.F.: On the implementation of an algorithm for improving a solution to the sequential ordering problem. Trabajos de Investigación Operativa 3 (1988) 117-140.
[6] Chen, S., Smith, S.: Commonality and genetic algorithms. Technical Report CMU-RI-TR-96-27, The Robotics Institute, Carnegie Mellon University (1996).
[7] Seo, D., Moon, B.: A Hybrid Genetic Algorithm Based on Complete Graph Representation for the Sequential Ordering Problem. Lecture Notes in Computer Science, Vol. 2723. Springer-Verlag, Heidelberg (2003) 669-680.
[8] Guerrero, F., Mancini, M.: A cooperative parallel rollout algorithm for the sequential ordering problem. Parallel Computing 29 (2003) 663-677.
[9] Gambardella, L.M., Dorigo, M.: An Ant Colony System hybridized with a New Local Search for the Sequential Ordering Problem. INFORMS Journal on Computing 12, Vol. 3 (2000) 237-255.
[10] Holland, J.H.: Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, Michigan (1975).
[11] Goldberg, D.E.: Genetic Algorithms in Search, Optimization and Machine Learning. Pearson Education (1989).
[12] Reeves, C.R., Rowe, J.: Genetic Algorithms - Principles and Perspectives: A Guide to GA Theory. Kluwer Academic Publishers (2003).
[13] Valls, V., Ballestín, F., Quintanilla, S.: A Hybrid Genetic Algorithm for the RCPSP. To be published in European Journal of Operational Research.
[14] Valls, V., Ballestín, F., Barrios, A.: An evolutionary algorithm for the resource constrained project scheduling problem. Proceedings of the Tenth International Workshop on Project Management and Scheduling (2006).
[15] Ballestín, F.: Nuevos métodos de resolución del problema de secuenciación de proyectos con recursos limitados. PhD Dissertation, Universidad de Valencia (2001).
[16] TSPLIB: http://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/.
[17] http://www.idsia.ch/luca/has-sop.html.


Project Completion Time in a Multi Critical Paths Environment

Amnon Gonen

Management of Technology Department, Holon Institute of Technology - H.I.T, Israel. e-mail: [email protected]

Key words: PERT, central limit theorem, project completion time, simulation

1. Introduction

The current study examines an intensive research area in project management: the estimated completion time of a project. Up until now, most of the literature has used the Central Limit Theorem (CLT) to estimate project length. Past studies have tended to analyze the project network by looking for the longest path from the beginning to the end of the project; this path is termed the critical path. After defining the critical path, project completion time is estimated by the average of the sum of the activities' durations along the critical path. This approach ignores the possibility that other paths, which are not critical, will last longer than the critical path. This possibility is not only theoretical but, as we will see, also very practical. Most of the previously published books on project management, PERT, and CPM present the project completion time as the length of the critical path (see Burke [1992], Wysocki [1995], Winston [1991], Meredith and Mantel [1995], Stevenson [1999]). The authors assume that the CLT conditions hold in general and therefore that the length of the critical path can be approximated by the sum of its activities' durations. Moreover, they present project completion time as the length of the critical path. In real-world projects there are many cases where the critical path is not unique. According to CPM, to shorten the project length one should shorten the critical path until it is no longer critical; this shortening of critical paths is repeated, so that at the end there are many equal-length paths that are all critical. In this case, it is not enough to estimate the critical path length; instead, one should look for the maximum length among all of the critical paths. The current study focuses on project completion time when there are several critical paths or several paths that are close to what we term a 'critical path'. Many articles have been published about the evaluation of project completion time.
Dodin [1985] provided some lower and upper bounds on project completion time. He accomplished this by reducing the PERT network to a single equivalent activity starting at the beginning and ending with the project's conclusion. The most important part is the upper bound, which provides decision-makers with an upper bound on the project's duration. However, the author does not provide a measure for the distance between the upper bound and the project's real completion time. Bendell et al. [1995] evaluate project completion time when the activity times are Erlang distributed. They divide the methods of evaluation into five families: 1. the analytical approach, which is usually very limited; 2. numerical integration over the convolution of all the activities participating in the critical path; 3. moment methods, which progressively reduce the project network to a single arc and use its first 4 moments to estimate the project's completion time; 4. PERT analyses; and 5. simulation. Abdelkader [2003] evaluates completion times when activity durations are Weibull distributed. He also uses the moment method and provides some bounds for the completion time. All these evaluations provide bounds for project length, ignoring the distance between the bound and the real project completion time. Shtub et al. [1994] discuss the uncertainty and the probability distribution of the critical path in detail. They provide an example of a simulation that shows the different results of different simulations. Later, they define the Criticality Index (CI) of an activity as 'the proportion of runs in which the activity was on a critical path'. Their analysis of project duration includes a very important distinction among paths that include the same activities and therefore cannot be treated as independent. They present the distribution function of project duration as a function of all the paths' distributions.
Furthermore, they state that if there are n independent paths from the beginning to the end of the project, X1, X2, …, Xn, then the project length X = Max{X1, X2, …, Xn} satisfies:

    P(X ≤ τ) = P(X1 ≤ τ) · P(X2 ≤ τ) · … · P(Xn ≤ τ)

A more general result is that if we denote by Fi the probability distribution of the random variable Xi, then F(τ) = F1(τ) F2(τ) … Fn(τ) (I), where all Xi are independent random variables. Our aim is to estimate E(X) when X ~ F(x) = F1(x) F2(x) … Fn(x).

1.1. The Main Problem

Let us assume that there are several critical paths in the PERT network. Let X1, X2, …, Xn be the lengths of all the paths in the project network. Let X be the project length X = Max{X1, X2, …, Xn}, where X, X1, X2, …, Xn are random variables. Let Fi(x) be the probability distribution of the random variable Xi, and F(n, x) be the probability distribution of the random variable X. Let us define "ninety percent" as the point x0.9 that satisfies P(X < x0.9) = 0.9. A random variable Y is truncated if there is a real (finite) number y such that P(Y ≤ y) = 1 and P(Y ≤ −y) = 0; e.g., if Y is uniformly distributed on [a, b], then it is truncated, whereas if Y is normally distributed, then it is not truncated. Gonen [2007] showed that if the random variables X1, X2, …, Xn are truncated and mutually independent with identical truncated distribution Fi(x), and if b = min{x : Fi(x) = 1} (b exists since the random variables are truncated), then F(n, x) tends to

1 x  b , when n tends to infinity. 0 x  b

 (b, x)  

Moreover Gonen [2007] showed that under these conditions E(x)=b. Applying the above to n random variables that distributes U(a,b), it can be seen that

x0.9 = (0.9)^(1/n) (b − a) + a

1.2. Simulation Results
In order to extend the results to other probability distributions, we ran simulations of projects with several critical paths. We assumed that there are K critical paths that are independent and identically distributed; the length of each critical path is a random variable Xj, j = 1, …, K. We then examined the maximum X of these K critical paths. We analyzed different probability distributions of Xj by running the same procedure (test) N = 2000 times. The simulation results were statistically analyzed by testing goodness-of-fit for the distribution of the maximum. Finally, we added some sensitivity analyses with respect to the number of critical paths K.
We use the following notation and abbreviations throughout the remainder of this paper. The original distribution is the probability distribution of the critical path's length Xj. The maximum distribution (or distribution of the maximum) is the probability distribution of X = Max(X1, X2, …, XK); X is called the maximum random variable, and the maximum is the maximum of all the critical paths. The term ninety percent is defined as the point x0.9 that satisfies P(X < x0.9) = 0.9, where X is the maximum random variable.
The classical measurement of project completion time is obtained by summing up the lengths of the critical path's arcs. We compare the distribution of the maximum to this classical approach in order to learn more about the importance of measuring the maximum. The measurement criterion used is as follows:

MC = [Average(Max(X1, X2, …, XK)) − E(Xj)] / σ_Xj    (II)

The MC measures the difference between the maximum and the original average on the standard deviation scale.
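The MC criterion can be reproduced with a short Monte-Carlo sketch. The parameters below reflect one reading of the paper's notation — "Weibull (2,1)" taken as scale 2, shape 1, i.e., an exponential with mean 2, which matches the averages reported later in Table 2 — and the run count N = 2000 from the text; the function name `mc_criterion` is ours.

```python
import random
import statistics

def mc_criterion(sample, k, n_runs=2000, seed=42):
    """Monte-Carlo estimate of MC = (Average(Max(X1..XK)) - E(Xj)) / SD(Xj)."""
    rng = random.Random(seed)
    maxima = [max(sample(rng) for _ in range(k)) for _ in range(n_runs)]
    base = [sample(rng) for _ in range(100_000)]      # reference sample for E(Xj), SD(Xj)
    return (statistics.fmean(maxima) - statistics.fmean(base)) / statistics.stdev(base)

# Assumed reading of "Weibull (2,1)": scale 2, shape 1 (exponential, mean 2)
weibull_path = lambda rng: rng.weibullvariate(2.0, 1.0)
mc_100 = mc_criterion(weibull_path, k=100)            # close to the 4.19 of Table 2
```

For K = 100 the estimate lands near 4.19, in line with the table for the Weibull case; changing `k` reproduces the growth of MC with the number of critical paths.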


1.3. The Simulation Test Plan
The simulation should provide us with the expected completion time and a "ninety percent" bound for it that assures a deviation of only 10% above it. Gonen [2007] verified the above results by simulation: truncated distributions like the uniform and triangular were tested, and the maximum converged to the upper bound of the density function's support. In this paper we analyze the cases where all the critical paths are normally distributed, Erlang distributed, or Weibull distributed. In each test, we analyzed the following: the criterion MC, which shows the difference between the maximum value and the original distribution's average on the standard deviation scale; sensitivity to K, the number of critical paths; the distribution of the maximum X; and the "ninety percent" bound on the completion time using the F(K,x) distribution versus the "ninety percent" bound using the original distribution.

1.4. Normal Distribution N(0,1)
The most common probability distribution is the normal distribution. In most of the literature, the authors assume that the critical path duration is normally distributed. The following table shows, for each number of critical paths K, the average of the maximum, its standard deviation, and the criterion MC.

Table 1. Results for K normal N(0,1) variants

K AV(Max) SD(Max) MC
10 1.5345 0.0137 1.5345
20 1.8663 0.0102 1.8663
30 2.0454 0.0119 2.0454
50 2.2512 0.0119 2.2512
70 2.3779 0.0113 2.3779
90 2.4699 0.0089 2.4699
100 2.5069 0.0117 2.5069

(Since the original distribution is N(0,1), MC coincides with AV(Max).)

From the above table, it can be seen that in the normal case the average of the maximum keeps increasing; since the normal distribution is not truncated, the above theoretical results do not hold. Moreover, when we simulated K = 200,000, the average of the maximum was 4.27, showing that it is not bounded, as the normal distribution itself is not bounded. The MC ranges between 1.5 and 2.5; that is, using the maximum instead of the original average raises the estimate by up to 2.5 standard deviations, a fact that cannot be ignored. The "ninety percent" bound of the maximum is 3.07, while the "ninety percent" bound of the normal distribution N(0,1) is 1.3; the resulting difference of 1.77 standard deviations again shows the size of the underestimation.
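For the normal case these quantities can also be checked exactly, since the maximum of K independent N(0,1) variables has distribution Φ(x)^K. The sketch below is an independent check using only the standard library, not the authors' code.

```python
from statistics import NormalDist

K = 100
nd = NormalDist()

# "ninety percent" point of the maximum: solve Phi(x)**K = 0.9
x_90 = nd.inv_cdf(0.9 ** (1.0 / K))        # about 3.07, as reported above

# E[max] = integral of x * K * phi(x) * Phi(x)**(K-1), Riemann sum on [-8, 8]
step = 1e-3
xs = [-8.0 + i * step for i in range(int(16.0 / step) + 1)]
e_max = step * sum(x * K * nd.pdf(x) * nd.cdf(x) ** (K - 1) for x in xs)
# e_max comes out near 2.51, in line with the simulated 2.5069 for K = 100
```

Both the 3.07 "ninety percent" bound and the ~2.5 average of the maximum for K = 100 are reproduced analytically.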

Figure 1. Average of the maximum of the normal N(0,1) variant (averages 1.53, 1.87, 2.05, 2.16, 2.25, 2.32, 2.38, 2.43, 2.47, 2.51 for K = 10, 20, …, 100)

From Figure 1 it looks as if the average of the maximum converges to 2.5. However, this result is misleading: as noted above, the maximum is not bounded. Due to the importance of the normal distribution, we examined the distribution of the maximum together with its "ninety percent" point and its average for 100 normal deviates with 2001 sample points. It was found that the distribution with the best fit is an Erlang distribution shifted by 1.16 units, with parameters (0.135, 10); that is, a sum of 10 exponential variants with a mean of 0.135.

1.5. Weibull Distribution
The Weibull distribution has been used in previous papers such as Abdelkader [2004]; therefore, we added this type of probability distribution. The Weibull distribution is often used to represent non-negative task times that are skewed to the right. The following table shows, for each number of critical paths K, the average of the maximum, its standard deviation, and the criterion MC.

Table 2. Results for K Weibull (2,1) variants

K AV(Max) SD(Max) MC

10 5.8485 0.0528 1.9330

20 7.1806 0.0771 2.5990

30 7.9704 0.0630 2.9939

50 8.9815 0.0607 3.4995

70 9.6508 0.0541 3.8341

90 10.1544 0.0534 4.0859

100 10.3643 0.0560 4.1909

From the table above, it can be seen that the average of the maximum and the MC increase with K. However, they do not converge, since the Weibull probability distribution is not truncated. The MC of 4.19 for K = 100 is quite high and shows a significant difference between the maximum and the original distributions. The "ninety percent" of the maximum distribution is 13.73, while the "ninety percent" of the original distribution is 4.4. As the difference is very large, it is important to use the maximum distribution instead of the original one.

1.6. Erlang Distribution
The Erlang distribution has been used in previous papers such as Bendell et al. [1995]; with k > 1 it represents an asymmetric probability distribution. The Erlang distribution is often used to represent non-negative task times that are skewed to the right, mainly when the task is composed of successive sub-tasks, each of which has an exponential distribution. The following table shows, for each number of critical paths K, the average of the maximum, its standard deviation, and the criterion MC.

Table 3. Results for K Erlang (1,2) variants

K AV(Max) SD(Max) MC
10 4.627 0.018 1.314
20 5.444 0.023 1.722
30 5.917 0.028 1.959
40 6.255 0.033 2.128
50 6.509 0.034 2.255
60 6.719 0.034 2.360
70 6.895 0.042 2.448
80 7.040 0.039 2.520
90 7.174 0.037 2.587
100 7.292 0.035 2.646

From the table above, the same phenomenon as for the Weibull distribution can be seen: the average of the maximum and the MC increase with K. However, they do not converge, since the Erlang probability distribution is not truncated. The "ninety percent" of the maximum distribution is 9.21, while the "ninety percent" of the original distribution is 3.9. As the difference is very large, it is important to use the maximum distribution instead of the original one.

1.7. Conclusions
The current study presents the distribution of project completion time when there are several critical paths. Simulation results were presented for non-truncated probability distributions. The following table summarizes the simulation tests for 100 critical paths:


Table 4. Summary of the results of three possible probability distributions for K = 100

Probability distribution | Average length of critical path | Average duration of the maximum | MC | Ninety percent of one critical path | Ninety percent of the maximum
Normal (0,1) | 0 | 2.5 | 2.5 | 1.3 | 3.07
Weibull (2,1) | 2 | 10.36 | 4.19 | 4.4 | 13.73
Erlang (1,2) | 1 | 7.29 | 2.64 | 3.9 | 9.21

It can be seen that for some distributions the difference between the maximum distribution and the original one cannot be ignored. Both the average of the maximum and the ninety percent point are sometimes much higher than for a single critical path. The results show the need for further research in characterizing project completion time.

References
Abdelkader, Y.H. Evaluating project completion times when activity times are Weibull distributed. European Journal of Operational Research, 2004, Vol. 157, Iss. 3.
Bendell, A., Solomon, D., Carter, J.M. Evaluating project completion times when activity times are Erlang distributed. The Journal of the Operational Research Society, 1995, Vol. 46.
Burke, R. Project Management: Planning and Control, Second Edition, John Wiley & Sons, 1992.
Dodin, B. Bounding the project completion time distribution in PERT networks. Operations Research, Vol. 33, No. 4, 1985.
Gonen, A. Estimating Project Completion Times – Simulation and Analytic Approach. IEEM 2007 Proceedings, Singapore, 2007.
Meredith, J.R., Mantel, S.J. Jr. Project Management: A Managerial Approach, Third Edition, John Wiley & Sons, 1995.
Shtub, A., Bard, J.F., Globerson, S. Project Management: Engineering, Technology, and Implementation, Prentice Hall, 1994.
Stevenson, W.J. Production/Operations Management, Sixth Edition, Irwin/McGraw-Hill, 1999.
Winston, W.L. Operations Research: Applications and Algorithms, Second Edition, Boston, PWS-KENT, 1991.
Wysocki, R.K., Beck, R. Jr., Crane, D.B. Effective Project Management, John Wiley & Sons, 1995.


Scheduling Assembly Lines with Flexible Operations to Minimize the Makespan

Hakan Gültekin (1) and Yves Crama (2)

(1) TOBB Economy and Technology University, Turkey, e-mail: [email protected]
(2) University of Liege, Belgium, e-mail: [email protected]

Keywords: Flexible manufacturing systems, assembly lines, scheduling, production control

1. Introduction and problem definition

In this study we consider a production line with two machines and a buffer in between them. n identical parts are to be produced on these machines. The system is assumed to work as a flow shop; that is, all parts pass through the machines in the same sequence, namely machine 1 then machine 2. Each part has three operations to be performed by the machines. The first operation can only be performed by the first machine, with a processing time of f1, and the third operation can only be performed by the second machine, with a processing time of f2. On the other hand, both machines are capable of performing the second operation, which has a processing time of s. Preemption is not allowed: once an operation is started, it must be completed without interruption. Each machine can process one part at a time, and one part can be processed by only one machine at any time instant. The problem is to determine the assignment of the second operation to the machines for each of the n parts so as to maximize the throughput rate. Such problems arise in many different practical settings. One example comes from flexible manufacturing systems where automated CNC machines are used. These machines are highly flexible and can perform different operations as long as the required cutting tool is loaded in the tool magazine of the machine. However, these tool magazines have limited capacity, and there is not enough space for every tool required to produce a part. Additionally, especially in metal cutting industries, some tools are very expensive, and having multiple copies of such tools is not economically justifiable. Hence, some tools are loaded only on the first machine and some others only on the second, so the corresponding operations can only be performed by the relevant machine. On the other hand, other tools have multiple copies and are loaded on both machines.
Hence the corresponding operations can be performed by both machines. Further applications of the model above are reviewed in Gupta et al. (2004). In that study, the authors consider a 2-machine flow shop with an infinite capacity buffer producing multiple parts. Similar to the current study, they assume each part has three operations, where the first operation must be performed on the first machine, the second can be performed on either machine, and the last must be performed on the second machine. Even without the fixed operations on the machines, this problem turns out to be NP-hard. As a consequence, they design approximation algorithms and also present a polynomial time approximation scheme. In another study, Daniels et al. (2004) assume each worker is trained to perform a subset of the required operations, and that the assignment of workers to work stations can be changed dynamically. The processing times of the operations are assumed to be a function of the number of workers assigned to each operation. They prove that a large portion of the available benefit associated with labour flexibility can be realized with a relatively small investment in cross-training. Their results suggest that, in order to obtain high-quality solutions, scheduling, skill allocation, and resource assignment decisions must be coordinated. Let us denote our problem as P(n|B), where n represents the number of parts to be processed and B represents the capacity of the buffer between the machines. In this study, we consider different problem variations according to the buffer capacity. More specifically, we consider the case with no buffer between the machines, denoted by B = 0, a finite capacity buffer


denoted by B = b, and an infinite capacity buffer denoted by B = ∞. For example, P(n|b) denotes a problem with a buffer capacity of b in which n identical parts will be produced, and P(n|0) denotes a problem without buffer space between the machines. Let T_i^j, C_i^j and p_i^j denote the starting time, completion time and actual processing time of the part in the i-th position on machine j = 1, 2, respectively. Note that the completion time of a part on a machine equals its starting time on that machine plus its processing time on it: C_i^j = T_i^j + p_i^j. Under the settings of this study, only two different processing time alternatives are possible. More specifically, let p_i = (p_i^1, p_i^2) denote the vector of processing times of part i on both machines. Then this part has a Type 1 processing time if p_i = (f1, f2 + s) and a Type 2 processing time if p_i = (f1 + s, f2). An example will be helpful in understanding.

Example 1. Consider P(n|∞) with f1 = 15, f2 = 20 and s = 15. If this problem is solved under the assumption of the classical assembly line balancing literature, then the allocation of the flexible operation is identical for all parts. Under this assumption the best possible solution is attained by assigning the flexible operation to the first machine for all parts. The cycle time of this solution is 30, with 10 time units of idle time on the second machine for each part. Under the assumptions of this study, fixing the flexible operation to the same machine for all parts is a limiting assumption: the assignment can change from one part to another, which may lead to a smaller cycle time. For the given example, the optimal solution is found by assigning the flexible operation to the first machine for two of every three consecutive parts and to the second machine for the remaining ones. The cycle time of this solution is 25, i.e., more than a 16% improvement over the classical assembly line balancing solution. This improvement should be weighed against the additional cost incurred by cross-training the workers or duplicating the necessary tools to obtain this flexibility. Moreover, the importance of the improvement becomes more evident when one recalls that this is the improvement in the time required to produce a single part, and large numbers of parts are usually produced in such systems.
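Example 1 can be checked with the standard 2-machine flow shop recurrence (an unlimited buffer is assumed in this sketch); the per-part time of the mixed pattern approaches the cycle time of 25 as n grows, against 30 for the fixed assignment.

```python
def makespan(jobs):
    """Makespan of a 2-machine flow shop with unlimited intermediate buffer.
    jobs: list of (p1, p2) processing-time pairs, in sequence order."""
    c1 = c2 = 0
    for p1, p2 in jobs:
        c1 += p1                     # machine 1 never idles
        c2 = max(c2, c1) + p2        # machine 2 waits for the part and for itself
    return c2

f1, f2, s = 15, 20, 15
n = 30
fixed = [(f1 + s, f2)] * n                                      # flexible op always on machine 1
mixed = [(f1 + s, f2), (f1 + s, f2), (f1, f2 + s)] * (n // 3)   # 2-of-3 pattern of Example 1
# fixed gives 920 (30 per part); mixed gives 790 (about 26 per part for n = 30,
# tending to 25 per part as n grows)
```

The 920 vs. 790 gap for only 30 parts already shows the value of switching the flexible operation between parts.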

2. Problem variations and solution procedures

As already mentioned in the previous section, we consider different problem variations regarding the size of the buffer. We modelled the problem as a Mixed Integer Programming formulation which can be used to solve all variations. However, this formulation is impractical for large n, since the required CPU time increases drastically as n increases. Thus, we developed specific solution procedures for the problem variations. We first proved some results valid for all variations. Let the reverse problem be the one in which the f1 and f2 values are switched. We proved that the optimal makespan values of the original and the reverse problems are equal for all variations of the problem. Furthermore, we proved that, for a given schedule, changing the processing time of the first part in the sequence to a Type 1 processing time and that of the last part to a Type 2 processing time does not increase the makespan. As a result, in order to determine the optimal schedule, it is sufficient to search among schedules starting with a Type 1 processing time and ending with a Type 2 processing time. In the following we present specific solution procedures for the considered problem variations.

2.1. No buffer between the machines
Since there is no buffer space between the machines, if the second machine is not idle just after the first machine completes the processing of a part, the first machine becomes blocked and waits for the second machine to complete the processing of the previous part. Similarly, when the second machine completes the processing of a part, it is unloaded immediately; however, if the processing of the next part in the sequence continues on the first machine, the second machine becomes idle and waits for this part. For this case we proved that there are three alternatives for the assignment of the operations to the machines, depending on the problem parameters. Note that, from the earlier discussion in this paper, the assignments for the first and the last parts are known; hence, we consider the assignment for the remaining ones. Then either the flexible operations are allocated to the first machine for all parts, or to the second machine for all parts, or the assignments are


changed from one part to another. More formally, we can state the main result and the solution procedure for this section as follows:

Theorem 1. The problem P(n|0) can be solved in constant time with the following solution procedure:
If f1 ≥ f2, we have the following sub-cases:
- If 2f1 ≥ 2f2 + s, set p_i^1 = f1 for i = 1, 2, …, (n − 1) and p_n^1 = f1 + s.
- Otherwise, starting from the n-th part with p_n^1 = f1 + s, down to the 2nd part, switch the assignment for each consecutive part. Let p_1^1 = f1.
Else, if f1 < f2, we have the following sub-cases:
- If 2f2 ≥ 2f1 + s, set p_1^1 = f1 and p_i^1 = f1 + s for i = 2, …, n.
- Otherwise, starting with p_1^1 = f1, switch the assignment for each consecutive part until the (n − 1)-st part. Let p_n^1 = f1 + s.
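As an illustration of the no-buffer case (a sketch of ours, not the authors' proof), the blocking recurrence below compares a fixed assignment with an alternating one for f1 = f2 = 10 and s = 10, a setting in which 2f1 < 2f2 + s, so switching the assignment between parts should pay off.

```python
def makespan_no_buffer(jobs):
    """Makespan of a 2-machine flow shop with no intermediate buffer:
    a part blocks machine 1 until machine 2 is free to take it."""
    depart = 0                       # time machine 1 is freed of the previous part
    c2 = 0                           # completion of the previous part on machine 2
    for p1, p2 in jobs:
        c1 = depart + p1             # machine 1 starts only once it has been freed
        start2 = max(c1, c2)         # machine 2 takes the part when both are ready
        depart = start2              # machine 1 stays blocked until the handover
        c2 = start2 + p2
    return c2

f1, f2, s = 10, 10, 10               # here 2*f1 < 2*f2 + s
n = 10
fixed = [(f1 + s, f2)] * n
alternating = [(f1 + s, f2), (f1, f2 + s)] * (n // 2)
# fixed gives 210 (20 per part); alternating gives 170 (15 per part asymptotically)
```

The alternating pattern cuts the long-run time per part from 20 to 15 in this instance, matching the intuition behind the switching sub-cases of the theorem.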

2.2. Infinite capacity buffer between the machines
For classical makespan minimization problems in 2-machine flow shops with infinite capacity buffers, the well-known algorithm of Johnson (1954) provides the optimal solution in O(n log n) time. In this study, if the numbers of parts which have Type 1 and Type 2 processing times are given, then the Johnson sequence can be found in constant time: simply, all the parts with Type 1 processing times are sequenced first, followed by the parts with Type 2 processing times. However, the numbers of Type 1 and Type 2 parts are also decision variables in this study. We developed a procedure to determine the numbers of Type 1 and Type 2 processing times in the optimal solution. For this purpose we first derived a lower bound for the makespan value when the buffer capacity is unlimited. Let r = ((n − 1)(f2 + s − f1) + s) / (2s), and let ⌊r⌋ and ⌈r⌉ denote the largest integer smaller than r and the smallest integer larger than r, respectively. Then the makespan cannot be less than

Cmax ≥ min{ f1 + nf2 + (n − ⌊r⌋)s,  nf1 + f2 + ⌈r⌉s }

If the first argument attains the minimum, let r* denote ⌊r⌋; otherwise, let it denote ⌈r⌉. The following theorem is the main result of this section.

Theorem 2. The problem P(n|∞) can be solved in constant time by assigning the flexible operations to the second machine for the first (n − r*) parts and to the first machine for the remaining parts.

Although the buffer capacity is assumed to be unlimited in this section, the solution procedure presented here may require only a small buffer capacity, depending on the problem parameters. However, again depending on the parameters, very large buffer capacities may also be required. For example, if f1 = 7, f2 = 5, s = 12 and 100 parts are to be produced, a buffer capacity of at least 34 parts is required by the solution procedure above. In some practical situations there is limited physical space allocated for buffers, and such a capacity may exceed the available space. Additionally, in some industries, since the produced parts are very valuable, minimizing WIP inventories is one of the main objectives. In the next section we handle these situations and consider the problem with a finite capacity buffer between the machines.

2.3. Finite capacity buffer between the machines
In the previous section we considered the problem P(n|∞) and provided a simple solution procedure. In this section we present another algorithm for the same problem which is computationally more demanding than the earlier procedure. Since the algorithm looks 2 parts ahead in order to determine the assignment for a part, it is named 2-Parts Ahead (2PA).
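Before turning to the algorithm, the Section 2.2 example (f1 = 7, f2 = 5, s = 12, n = 100) can be replayed with a short simulation. This is a sketch under one reading of the formulas for r and the bound, not the authors' code; it computes the lower bound, builds the Type-1-first schedule, and measures the peak number of parts waiting between the machines.

```python
import math

def schedule_stats(jobs):
    """Infinite-buffer 2-machine flow shop: returns (makespan, peak buffer),
    the buffer holding parts finished on M1 but not yet started on M2."""
    c1_list, t2_list = [], []
    c1 = c2 = 0
    for p1, p2 in jobs:
        c1 += p1
        t2 = max(c2, c1)             # start of this part on machine 2
        c2 = t2 + p2
        c1_list.append(c1)
        t2_list.append(t2)
    peak = max(sum(1 for k in range(j + 1) if t2_list[k] > c1_list[j])
               for j in range(len(jobs)))
    return c2, peak

f1, f2, s, n = 7, 5, 12, 100
r = ((n - 1) * (f2 + s - f1) + s) / (2 * s)                     # 41.75
first_arg = f1 + n * f2 + (n - math.floor(r)) * s
second_arg = n * f1 + f2 + math.ceil(r) * s
lb = min(first_arg, second_arg)                                 # 1209
r_star = math.floor(r) if lb == first_arg else math.ceil(r)     # 42
jobs = [(f1, f2 + s)] * (n - r_star) + [(f1 + s, f2)] * r_star  # Type 1 parts first
mk, peak = schedule_stats(jobs)
```

The simulated makespan (1209) matches the lower bound, and the peak occupancy of 34 waiting parts matches the buffer requirement quoted above.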


Algorithm 2PA:
1. If (f1 < f2), swap the values of f1 and f2.
2. Set T_1^1 = 0, T_1^2 = f1.
3. For j = 1 to (n − 1) do:
   If (T_j^1 + 2f1 + s ≤ T_j^2 + f1 + f2), set p_j^1 = f1 + s, C_j^1 = T_j^1 + f1 + s, p_j^2 = f2, C_j^2 = max{T_j^2, C_j^1} + f2;
   Otherwise, set p_j^1 = f1, C_j^1 = T_j^1 + f1, p_j^2 = f2 + s, C_j^2 = max{T_j^2, C_j^1} + f2 + s;
   If (j < n − 1), T_{j+1}^1 = C_j^1, T_{j+1}^2 = C_j^2;
   Otherwise, p_n^1 = f1 + s, p_n^2 = f2, C_n^1 = C_{n−1}^1 + f1 + s, T_n^2 = max{C_{n−1}^2, C_n^1}, Z = C_n^2 = T_n^2 + f2.
4. If (f1 < f2), for j = 1 to n do: p̂_j^1 = p_{n−j+1}^2, p̂_j^2 = p_{n−j+1}^1, T̂_j^1 = Z − C_{n−j+1}^2, T̂_j^2 = Z − C_{n−j+1}^1.

This algorithm runs under the f1 ≥ f2 assumption. For a given problem, if f1 < f2, then using the reversibility property the values of f1 and f2 are switched and the reverse problem is solved; the schedule for the original problem is then recovered in Step 4 of the algorithm. We proved that this algorithm attains the lower bound on the makespan presented in the previous section. Furthermore, we proved that any schedule generated by the 2PA algorithm never requires a buffer size greater than 3.

3. Conclusion

In this study, an assembly line consisting of two machines and a buffer space, producing identical parts, is considered. The machines are assumed to be flexible enough to perform different operations. As a consequence, each part is assumed to have one operation to be performed on the first machine, one operation to be performed on the second machine, and a flexible operation that can be performed by either machine. The problem was to determine the assignment of the flexible operations to the machines that minimizes the long-run average time required to produce one part. Different cases regarding the capacity of the buffer were considered. We proved that when there is no buffer space, or when there is an infinite capacity buffer between the machines, the optimal time required to produce one part can be found in constant time, and we presented procedures to determine the optimal assignment of the flexible operation for each part in the sequence. For the case of a finite capacity buffer between the machines, we presented an algorithm that determines the optimal assignment of the flexible operations when the buffer capacity is not smaller than 3. Possible future research directions include considering the same problem when the buffer capacity is restricted to 1 or 2. Additionally, one may consider the case where there is more than one flexible operation per part; these operations could be assigned to different machines for the same part. Finally, increasing the number of machines in the system is another research option. However, in this case different assumptions regarding the assignment of the flexible operations are required: (i) all machines are capable of performing the flexible operations, hence they can be assigned to any machine; (ii) there is a flexible operation between any two consecutive machines; and (iii) each operation has its own set of machines to which it can be assigned.

References
Daniels, R.L., Mazzola, J.B. and Shi, D. (2004). Flow shop scheduling with partial resource flexibility. Management Science, 50(5):658–669.
Gupta, J.N.D., Koulamas, C.P., Kyparisis, G.J., Potts, C.N. and Strusevich, V.A. (2004). Scheduling three-operation jobs in a two-machine flow shop to minimize makespan. Annals of Operations Research, 129:171–185.
Johnson, S.M. (1954). Optimal two- and three-stage production schedules with setup times included. Naval Research Logistics Quarterly, 1:61–68.


Tighter Lower Bounds via Dual Feasible Functions

Mohamed Haouari (1,2), Lotfi Hidri (1), Mahdi Jemmali (1)

(1) ROI Combinatorial Optimization Research Group, Tunisia, e-mail: [email protected]
(2) Faculty of Business Administration, Bilkent University, Turkey.

Keywords: Dual feasible functions, Lower bound, Parallel machines, Hybrid flow shop.

1. Dual feasible functions

A function f : [0, 1] → [0, 1] is said to be dual feasible if for any finite set S of positive real numbers, we have the relation

∑_{x∈S} x ≤ 1  ⇒  ∑_{x∈S} f(x) ≤ 1

The concept of dual feasible functions (DFFs) was first introduced by Johnson (1973) in the context of bin packing. In the last few years, there has been a resurgence of interest in DFFs as a new tool in combinatorial optimization. This is directly attributed to the successful application by Fekete and Schepers (2001), who used DFFs to derive a class of tight lower bounds for the one-dimensional bin packing problem. We describe a general procedure that might prove useful for tightening lower bounds for a wide class of combinatorial optimization problems. We illustrate this procedure on two well-studied machine scheduling problems, the identical parallel machine scheduling problem (P || Cmax) and the two-stage hybrid flow shop problem (F(P) || Cmax), and we present the results of computational experiments that provide evidence of the effectiveness of the proposed procedure.

2. Application to the identical parallel machine scheduling problem

P || Cmax is defined as follows. Given a set J of n independent jobs and m identical parallel machines (with n > m ≥ 2), each job j ∈ J has to be processed non-preemptively for p_j units of time by exactly one machine. The problem is to find an assignment of the n jobs to the m machines such that the makespan is minimized. Let I = (p_1, p_2, ..., p_n) define an instance of P || Cmax. Assume that f(.) is a DFF and LB(.) is a lower bounding procedure, that is, LB(I) is a valid lower bound on the optimal value of the makespan C*max(I). Also, for each trial value C we denote by f(I,C) the P || Cmax instance with n jobs and m machines and with the modified processing times p̃_j = f(p_j / C) for j = 1, ..., n. We have the following result.

Lemma 1. If LB(f(I,C)) > 1, then C + 1 is a valid lower bound on C*max(I).

Proof. Assume that there exists a job partition S_1, S_2, ..., S_m such that

∑_{j∈S_p} p_j ≤ C   ∀p = 1, ..., m    (1)

Thus, we have

∑_{j∈S_p} p_j / C ≤ 1  ∀p = 1, ..., m  ⇒  ∑_{j∈S_p} f(p_j / C) ≤ 1  ∀p = 1, ..., m    (2)

Therefore C*max(f(I,C)) ≤ 1. On the other hand, if LB(f(I,C)) > 1, then (1) does not hold, and consequently C + 1 is a valid lower bound on C*max(I).

A nice consequence of this lemma is that, given a pair (C_l, C_u) of lower and upper bounds on the optimal makespan, respectively, we can (possibly) derive an enhanced lower bound by performing a line search along [C_l, C_u] and determining the largest C within this interval such that LB(f(I,C)) > 1.
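The Lemma 1 line search can be sketched as follows. The code uses the u^(h) family retained in the computational experiments below, but as a deliberately crude stand-in for the L̃FS bound of the paper it uses only the elementary bound max(mean machine load, longest job); the toy instance is ours.

```python
import math

def u_h(x, h):
    """Fekete-Schepers DFF: u_h(x) = x if (h+1)x is integer, else floor((h+1)x)/h."""
    y = (h + 1) * x
    return x if abs(y - round(y)) < 1e-9 else math.floor(y) / h

def lb_trivial(p, m):
    """Elementary P||Cmax lower bound: max(average machine load, longest job)."""
    return max(sum(p) / m, max(p))

def dff_bound(p, m, h_range=range(1, 51)):
    """Largest C + 1 validated by Lemma 1 over a line search on trial values C."""
    best = math.ceil(lb_trivial(p, m))
    for c in range(best, sum(p) + 1):             # crude search interval [C_l, C_u]
        if max(lb_trivial([u_h(x / c, h) for x in p], m) for h in h_range) > 1:
            best = max(best, c + 1)               # Lemma 1: C + 1 is a valid bound
    return best

jobs, machines = [5, 5, 5], 2
```

For this instance the trivial bound is 8, while the DFF line search certifies 10, which is the true optimum (two jobs on one machine, one on the other).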

3. Application to the hybrid flow shop problem

The Hybrid Flowshop Scheduling Problem (F(P) || Cmax) can be defined as follows. Each of n jobs from the job set J = {1, 2, ..., n} has to be processed non-preemptively on m production stages Z_1, Z_2, ..., Z_m in that order. The processing time of job j ∈ J on stage Z_i (i = 1, ..., m) is p_ij. Each stage Z_i consists of m_i parallel identical machines. The objective is to construct a schedule for which the makespan is minimized. Given an F(P) || Cmax instance I and the corresponding set of feasible schedules Σ, a directed graph G(σ) can be associated with each feasible schedule σ ∈ Σ (Nowicki and Smutnicki, 1998), the makespan Cmax(σ) of the schedule σ being equal to the value of the longest (or critical) path P*(σ) in G(σ). This critical path is a sequence of m blocks B_1, B_2, ..., B_m where each block B_i (i = 1, ..., m) is a sequence of consecutive operations that are processed on the same machine. Hence, we have

∑_{i=1}^{m} ∑_{j∈B_i} p_ij = Cmax(σ)    (3)

Moreover, denote by P(σ) the set of paths in G(σ). Then, we have

∑_{(i,j)∈P} p_ij ≤ Cmax(σ)   ∀P ∈ P(σ)    (4)

We can use the above-described DFF-based procedure for deriving a tight lower bound. Indeed, denote by LB(.) a lower bounding procedure and let C be a trial value. Assume that there exists σ ∈ Σ such that

∑_{(i,j)∈P*(σ)} p_ij ≤ C    (5)

Thus, ∑_{(i,j)∈P*(σ)} p_ij / C ≤ 1. Hence, ∑_{(i,j)∈P*(σ)} f(p_ij / C) ≤ 1. Therefore, it follows that if a lower bound on the optimal makespan of the modified instance defined by setting p̃_ij = f(p_ij / C) for i = 1, ..., m and j = 1, ..., n is strictly larger than 1, then C + 1 is a valid lower bound on the optimal makespan of the genuine instance. Here again, a line search may be performed in order to derive the largest possible lower bound.

4. Computational experiments
We denote by LB^DFF(P || Cmax) and LB^DFF(F(P) || Cmax) the derived lower bounds. In our implementation, we considered three families of DFFs, as in Fekete and Schepers (2001). These DFFs were tested and only one of them was retained, because the other two did not improve the lower bounds. The retained DFFs u^(h) (h ∈ {1, ..., 50}) are defined as u^(h)(x) = x if (h + 1)x is integer, and u^(h)(x) = ⌊(h + 1)x⌋ / h otherwise. Instead of (2), we used the stronger condition

max_{h∈{1,...,50}} {LB(u^(h)(I,C))} > 1    (6)
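The dual feasibility of the retained family can be spot-checked numerically; this is a sanity sketch of ours, not part of the reported experiments.

```python
import math
import random

def u_h(x, h):
    """Retained DFF: u_h(x) = x if (h+1)x is integer, else floor((h+1)x)/h."""
    y = (h + 1) * x
    return x if abs(y - round(y)) < 1e-9 else math.floor(y) / h

rng = random.Random(1)
ok = True
for _ in range(1000):
    xs = [rng.random() for _ in range(rng.randint(1, 6))]
    total = sum(xs)
    if total > 1:
        xs = [x / total for x in xs]          # rescale so that sum(xs) <= 1
    for h in range(1, 11):
        if sum(u_h(x, h) for x in xs) > 1 + 1e-9:
            ok = False                        # would contradict dual feasibility
```

No violation is found over the random sets, as expected from the definition of a DFF.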

4.1. Results on P || Cmax
We compared LB^DFF(P || Cmax) with the lower bound, denoted L̃FS, of Haouari et al. (2006a). This latter lower bound has been shown to perform extremely well. Given a modified instance, we used L̃FS to compute the corresponding lower bound. Two classes of instances were generated. The instances of Class 1 were generated as follows: for each n ∈ {20, 30, 50, 70, 80, 100, 150}, the number of machines m is set equal to 2n/5 and the processing times are drawn from the discrete uniform distribution on [n/5, n/2]. For each value of n, 20 instances were generated. The instances of Class 2 were generated as follows: for each n ∈ {7, 10, 12, 15} and each number of machines m ∈ {3, 4, 5}, the processing times are drawn from the discrete uniform distribution on [50, 100]. For each combination (n, m), 10 instances were generated. The results are displayed in Table 1.

Table 1. Comparison of LB^DFF(P || Cmax) and L̃FS

i LB PDFF ||C max = L FS

i LB PDFF ||C max > L FS

Class 1

94.29%

5.71%

Class 2

85.84%

14.16%

4.2. Results on F(P) || Cmax

We considered two-stage instances for which tight lower bounds are available. We compared LB^DFF(F(P)||Cmax) with the SPT-rule based lower bound LB_SPT described in Haouari et al. (2006b). Again, this latter lower bound has been used for computing the lower bound corresponding to the modified instance. The instances have been randomly generated in the following way: n = 20, m1 = m2 = 10, and p1j, p2j are drawn randomly and uniformly from [1, 20]. 500 instances were generated as indicated. The results are displayed in Table 2.

Table 2. Comparison of LB^DFF(F(P)||Cmax) and LB_SPT

LB^DFF(F(P)||Cmax) = LB_SPT    LB^DFF(F(P)||Cmax) > LB_SPT
26.8%                          73.2%

Tables 1 and 2 clearly demonstrate that the proposed DFF-based procedure delivers very tight lower bounds.

References
[1] Fekete, S., Schepers, J. (2001). New classes of fast lower bounds for bin packing problems, Mathematical Programming 91, 11–31.
[2] Haouari, M., Gharbi, A., Jemmali, M. (2006a). Tight bounds for the identical parallel machine scheduling problem, International Transactions in Operational Research 13, 529–548.
[3] Haouari, M., Hidri, L., Gharbi, A. (2006b). Optimal scheduling of a two-stage hybrid flow shop, Mathematical Methods of Operations Research 64, 107–124.
[4] Johnson, D.S. (1973). Near-optimal bin packing algorithms. PhD thesis, Massachusetts Institute of Technology.
[5] Nowicki, E., Smutnicki, C. (1998). The flow shop with parallel machines: a tabu search approach, European Journal of Operational Research 106, 226–253.


A New Branch-and-Bound Method for the Multi-Skill Project Scheduling Problem: Application to Total Productive Maintenance Problem

T. Hassani (1), C. Pessan (1,2) and E. Néron (1)

(1) Laboratoire d'Informatique de l'Université de Tours, Polytech'Tours, 64 av. Jean Portalis, F-37200 Tours
[email protected]

(2) SKF France SA, Industrial Division, MDGBB Factory, 204, bd. Charles de Gaulle, F-37542 Saint-Cyr-sur-Loire CEDEX
[email protected]

Keywords: Project scheduling, skills, branch-and-bound

1. Abstract

The Total Productive Maintenance (TPM) problem occurs on some SKF ball bearing production lines. Production is stopped on the whole line and maintenance operations on the machines are then processed. There are three types of maintenance operations: curative, i.e., repairing a machine that does not perform as it should (quality reliability); preventive, i.e., replacing in advance mechanical parts that have to be changed before the next TPM; and amelioration, i.e., improving some parts of the production line in order to speed up production. These operations require skilled operators. The goal is to restart production as soon as possible in order to minimize the loss of production. These problems can be modelled as a Multi-Skill Project Scheduling Problem (MSPSP). Moreover, the size of the industrial instances we have to solve is such that they can be tackled with an exact method. The aim of this paper is to present a new branching scheme for the MSPSP in order to solve the TPM more efficiently.

2. Problem definition

The TPM problem has already been studied [5] [6]. It is extremely close to the Multi-Skill Project Scheduling Problem [1]. For the sake of completeness we present the main features of this problem and the related notations. Each maintenance operation is an activity i ∈ {1, ..., n} (with n the number of activities). These activities are linked by end-to-start precedence constraints, so an activity cannot start before all its predecessors are completed. In order to perform these activities, M staff members m ∈ {1, ..., M} are available. To be processed, activities require specific skills that cannot be performed by all staff members. For instance, an electronics specialist would not be able to perform a mechanical operation. The maintenance operations require K skills named Sk, k ∈ {1, ..., K}. We denote MSm,k = 1 if person m masters skill Sk and 0 otherwise. In the SKF case, there exist hierarchical levels of skills: a person with level x for a skill Sk will not be able to perform a task that requires this skill at a higher level y > x. This problem is equivalent to the general problem without hierarchical levels [1], obtained by adding specific skills corresponding to each level. Lastly, we denote the skill requirements of each activity: bi,k is the number of persons mastering skill Sk required to perform activity i. We assume that a staff member is able to perform at most one skill for one activity at a time. The only difference between the MSPSP and the TPM lies in the additional disjunctive constraints of the TPM. These constraints are introduced for security reasons, due to the material configuration of the production line: for instance, when someone is working on one specific machine, one should avoid having another technician doing something else on the same machine. So, these are disjunctive constraints that are not related to precedence constraints. A simple way to model these

disjunctive constraints is to add a common resource requirement for all disjunctive activities, corresponding for instance to the machine on which these activities are performed. For example, if activities i, j, l cannot be performed at the same time, we add a skill Sdis_ijl that can only be performed by a virtual person dis_ijl. This person only has the skill Sdis_ijl, and activities i, j and l are the only ones to require this skill. Figure 1 presents a TPM instance: the disjunctive/conjunctive graph used to model precedence and machine disjunctions, the skill requirements of the activities, and their processing times. In this example, a specific skill (Sdis_456) and a specific resource (dis_456) corresponding to the disjunction (A4 - A5 - A6) should be added to the initial instance presented below.

[Figure 1. Example of TPM: a precedence graph over the activities, each node labelled pi | bi,1, bi,2 (processing time and skill requirements), with a machine disjunction between activities 4, 5 and 6.]

Staff-skill matrix MSm,k (e.g., person 4 is able to do skill 1):

Person    MSm,1    MSm,2
1         1        -
2         1        -
3         -        1
4         1        1

In Figure 1, the dotted line represents the disjunctive constraint resulting from machine constraints. The table shows which skills are mastered by the four staff members.

3. Branching scheme description

Morineau and Néron [1] have presented an exact method for solving the MSPSP that is based on a time-window splitting branching scheme, inspired by the one proposed by Carlier for solving the m-machine problem [2] and the RCPSP [3]. The only difference between the method proposed by Morineau and Néron and the one presented in this paper lies in the branching scheme. Thus we mainly focus on the description of the branching scheme in the remainder of this paper.

3.1 Existing branching scheme

The branching scheme used for solving the MSPSP in [1] is based on time-window splitting. Let us consider the current node N, and let UB be the best known solution. First, time windows, i.e., release dates ri(UB-1) and deadlines di(UB-1) of the operations, are computed according to UB - 1 and the precedence graph. One activity is chosen and the set of its feasible starting dates is calculated: ti ∈ [ri(UB-1), di(UB-1) - pi]. Then two nodes N1 and N2 are created from N, such that the set of feasible starting times is partitioned into two disjoint subsets of equal length. A leaf node is reached when ri(UB-1) + pi = di(UB-1) for each activity. Finally, person assignments to activities must be checked; then a feasible solution improving UB is found. This latter problem, known as the Fixed Job Scheduling Problem (FJSP), is NP-complete [4], but can be solved efficiently in practice. This branching scheme based on time-window splitting has two main drawbacks:
• The depth of the search tree depends on the slacks of the activities, growing with Σi log2(di(Root) - ri(Root) - pi). Thus it is not polynomially bounded by the size of the problem.
• The starting time of an activity is not necessarily fixed at each node of the search tree. An efficient condition that can be used to prune the search tree is based on detecting a contradiction in the deduced FJSP, using a CP all-different constraint. But to deduce an FJSP instance from a node, one has to consider only the central mandatory parts of the activities.
This dominance condition based on the central mandatory parts of activities can be efficient only when some starting times of activities are fixed, so that the slacks are significantly reduced. The branching scheme that we propose here fixes the starting time of at least one activity at each branching step, in order to reduce the slacks of the activities and thus use the FJSP-based dominance condition as efficiently as possible.
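For concreteness, the window-splitting step can be sketched as follows (an illustrative fragment, not the authors' code): the feasible start-time set [ri, di - pi] of the chosen activity is split into two disjoint halves, one per child node.

```python
def split_window(r, d, p):
    """Split the feasible start-time set [r, d - p] of an activity into
    two disjoint halves, one for each child node of the search tree."""
    lo, hi = r, d - p
    mid = (lo + hi) // 2
    return (lo, mid), (mid + 1, hi)
```

Repeatedly halving a window of width w takes about log2(w) levels, which is why the overall tree depth grows with the sum of the logarithms of the activity slacks.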


3.2 Starting time based branching scheme

Let us consider a node N, given by a set of scheduled activities SC(N), i.e., activities for which starting times are fixed, and a time point t(N) corresponding to the completion of a fixed activity in SC(N). The root node is such a node, with an empty set of scheduled activities and time point t = 0. Let IP(N) be the set of activities in progress at time t(N) for node N. A set of eligible activities EL(N) is determined, i.e., activities having all their predecessors completed at time t(N) that may be scheduled simultaneously with IP(N). Formally, checking whether an activity c ∈ EL(N) can be scheduled at time t(N), taking into account the already scheduled activities, is equivalent to solving an instance of the FJSP. Here two weaker conditions are used first, in order to detect non-feasible starting times of the activity. The first one can be checked in polynomial time using a max-flow formulation. The second one consists in solving the FJSP made up of IP(N) ∪ {c}. Finally, the feasibility of the FJSP corresponding to SC(N) ∪ {c} is tested. If one of these conditions is not satisfied, activity c cannot be scheduled at time t(N); otherwise a node is created scheduling activity c at time t(N). A node is created for each of these activities, plus a node in which no activity is scheduled at time point t(N), and t is increased to the next completion time of an in-progress activity.

[Figure 2. Example of a branching step: from node N with t(N) = 1 and EL(N) = {4, 5}, nodes N1 and N2 schedule activities 4 and 5 at t = 1, and node N3 schedules no activity, moving the time point to t(N3) = 2.]
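One simple way to realize the polynomial feasibility check mentioned above is to treat it as a bipartite assignment problem. The sketch below (illustrative names, not the authors' implementation) expands each requirement bi,k into unit "slots" and runs Kuhn's augmenting-path matching, which is equivalent to a unit-capacity max-flow: the requirements are coverable iff every slot can be matched to a distinct person mastering the needed skill.

```python
def feasible_assignment(active, b, MS):
    """Can the skill requirements of the activities in `active` be covered
    by distinct staff members?  b[i][k] = number of persons with skill k
    needed by activity i; MS[m][k] = 1 if person m masters skill k."""
    slots = [(i, k) for i in active
                    for k in range(len(b[i]))
                    for _ in range(b[i][k])]
    owner = [None] * len(MS)          # person -> index of matched slot

    def try_assign(s, seen):
        for m in range(len(MS)):
            if MS[m][slots[s][1]] and m not in seen:
                seen.add(m)
                # person m is free, or its current slot can be re-matched
                if owner[m] is None or try_assign(owner[m], seen):
                    owner[m] = s
                    return True
        return False

    return all(try_assign(s, set()) for s in range(len(slots)))
```

With the staff of Figure 1 (persons 1 and 2 master S1, person 3 masters S2, person 4 both), an activity requiring one person with S1 and two with S2 is feasible, whereas simultaneous requirements totalling five persons are not.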

The key point of this branching scheme is that at each node a starting time is fixed and is never modified in the sub-tree rooted at this node. Thus partial schedules are built from time zero onwards, and a leaf node is reached when all activities are scheduled. Notice that a scheduled activity is an activity whose starting time is fixed, while the assignments of staff members to its skill requirements are not fixed: only the existence of one feasible assignment satisfying the skill requirements is checked during the branching step. To prune the search tree, the dominance condition based on checking the feasibility of an FJSP can be applied. The FJSP tested corresponds to the set of scheduled activities and the set of central mandatory parts of the not yet scheduled activities, as described in Section 3.1. The temporal decomposition of the problem described in [1] can be used to reduce the size of the relevant FJSPs. During the exploration of the search tree, solutions are built starting from a node, in order to possibly improve the upper bound. The method, inspired by the serial schedule generation scheme for the RCPSP, considers at each node a list of scheduled activities (including in-progress activities), and for each activity to be scheduled it determines its earliest starting time with respect to both precedence and resource constraints. The drawback of this approach is that determining this earliest starting time requires solving one FJSP for each time point tried; thus it may be time consuming.

4. Solving the fixed job scheduling problem

Both checking that an FJSP instance has a solution and detecting a contradiction are used intensively in the method that we propose, to check the feasibility of a partial schedule or to prune the search tree. In the method proposed by Morineau and Néron, the FJSP is solved using an integer linear programming formulation. We propose here to use a CP-based formulation to either find a solution or detect a contradiction on the FJSP instance. It is useful in our method to prune nodes corresponding to infeasible partial schedules.

[Figure 3. FJSP corresponding to the left node of Figure 2, and one feasible solution. The instance lists the number of persons per skill required by each activity (e.g., 2 : 1, - means activity 2 requires one person with skill S1, and persons P2 and P4 can satisfy such a requirement); the feasible solution shown assigns persons to activities (e.g., 2 : P4 and 1 : P1, P3).]

5. Conclusion

In this paper we have presented a new branching scheme for solving the Multi-Skill Project Scheduling Problem. This branching scheme consists in building partial schedules from time zero onwards. The resulting branch-and-bound method is used for solving the TPM. Experimental results will be presented at the conference. One promising research direction is to use a branching scheme inspired by extension alternatives [8], which basically consists in adding more than one activity at a time to the partial schedule. Moreover, the disjunctive features of the TPM can be used to speed up the search, for instance by adding specific one-machine time-bound adjustments.

References
[1] Bellenguez-Morineau, O. and Néron, E. (2007). A branch-and-bound method for solving multi-skill project scheduling problem, RAIRO - Operations Research, 41, 2, 155–170.
[2] Carlier, J. (1987). Scheduling jobs with release dates and tails on identical machines to minimize the makespan, European Journal of Operational Research, 29, 298–306.
[3] Carlier, J. and Latapie, B. (1991). Une méthode arborescente pour résoudre les problèmes cumulatifs, RAIRO - Recherche Opérationnelle, 25, 3, 311–340.
[4] Kolen, A. and Kroon, L. (1991). On the computational complexity of (maximum) class scheduling, European Journal of Operational Research, 54, 23–28.
[5] Nakajima, S. (1988). Introduction to Total Productive Maintenance, Productivity Press, Cambridge, MA.
[6] Pessan, C. and Néron, E. (2007). Multi-skill project scheduling problem and Total Productive Maintenance, MISTA 07, Paris.
[7] Stinson, J. P., Davis, E. W. and Khumawala, B. M. (1978). Multiple resource-constrained scheduling using branch-and-bound, AIIE Transactions, 10, 3, 252–259.


Robustness Measures and a Scheduling Algorithm for Discrete Time/Cost Tradeoff Problem

Öncü Hazır (1,2), Erdal Erel (1), and Mohamed Haouari (1)

(1) Faculty of Business Administration, Bilkent University, 06800, Ankara - Turkey
e-mail: [email protected], [email protected], [email protected]

(2) Industrial Engineering Department, Çankaya University, 06530, Ankara - Turkey
e-mail: [email protected]

Keywords: Project scheduling, robustness, simulation

1. Introduction

The majority of studies in the project scheduling literature assume complete information and a deterministic environment. However, in practice, projects are subject to different sources of uncertainty that may arise from the work content, resource availabilities, the project network, etc. A schedule that is optimal with respect to some performance measure, such as project duration or cost, may be largely affected by these disruptions. Therefore, project scheduling algorithms should take these disruptions into account and consider robustness as well. It is important to develop analytical methods for generating robust project schedules, so that the generated schedules have the ability to protect performance against unexpected events. In this research, we address the discrete time/cost tradeoff problem (DTCTP), which is a well-known multi-mode project scheduling problem. We introduce some robustness measures and a robust scheduling algorithm. To the best of our knowledge, this research is the first work that focuses on robustness measures and robust scheduling in multi-mode project networks. The DTCTP is a practically relevant multi-mode project scheduling problem of which three versions have been studied in the literature, namely, the deadline problem (DTCTP-D), the budget problem (DTCTP-B) and the efficiency problem (DTCTP-E). In DTCTP-D, given a set of time/cost pairs (modes) and a project deadline δ, each activity is assigned to one of the possible modes so that the total cost is minimized. Conversely, the budget problem minimizes the project duration while meeting a given budget B. Finally, DTCTP-E is the problem of constructing efficient time/cost points over the set of feasible project durations. The DTCTP can be formally defined as follows: a project with n activities is represented by an AON graph G(N, E).
Two dummy activities corresponding to the project start and end, activity 0 and activity n+1, are included in the network. Activity j performed in mode m is characterized by a processing time pjm and a cost cjm. A mixed integer programming (MIP) model of the DTCTP-D can be stated as follows:

Min  Σ(j ∈ N) Σ(m ∈ Mj) cjm xjm                                   (1)

Subject to

Σ(m ∈ Mj) xjm = 1                          for all j ∈ N          (2)
Cj - Ci - Σ(m ∈ Mj) pjm xjm ≥ 0            for all (i, j) ∈ E     (3)
Cn+1 ≤ δ                                                          (4)
C0 = 0                                                            (5)
xjm ∈ {0, 1}                               for all j ∈ N, m ∈ Mj  (6)

The continuous decision variable Cj denotes the completion time of activity j. The binary decision variable xjm assigns modes to the activities (6). While minimizing the total cost (1), a unique mode should be assigned to each activity (2), precedence constraints should not be violated (3), and the deadline should be met (4). Despite its importance in practice, the research on the DTCTP is rather new due to its inherent computational complexity. In their comprehensive review paper, De et al. (1995) discuss the problem characteristics and some exact and approximate solution strategies. The reader is also referred to Demeulemeester et al. (1996, 1998) for exact algorithms and to Akkan et al. (2005) for approximate algorithms. None of these studies involve any protection mechanism against uncertainty. In contrast, this research addresses the uncertainty in activity durations and concentrates on relevant robustness measures and robust scheduling algorithms.

2. Robustness measures (RM)

In the project scheduling literature, there are only a few studies that propose measures to assess the robustness of project schedules (Al-Fawzan and Haouari (2005), Kobylański and Kuchta (2007) and Lambrechts et al. (2007)). These studies address the randomness in the durations of the activities of single-mode networks and suggest the use of surrogate measures. Nevertheless, they do not experimentally test the quality of the measures. Clearly there is a need for further work to develop new measures and to test their quality and efficiency. We also address the fluctuations in activity durations and propose time-based measures to evaluate the robustness of project schedules with respect to these fluctuations. Furthermore, we compare the quality of these measures by using simulation. The surrogate measures we propose are basically derived from the activity slacks. We concentrate on quality robustness, which is the insensitivity of the schedule performance with respect to disruptions. Total slack (TS), the amount of time by which the completion time of an activity can exceed its earliest completion time without delaying the project completion time, is closely related to quality robustness. We propose and test the following quality robustness measures:
RM1: Average total slack.
RM2: Coefficient of variation of the slack/duration ratio.
RM3: Percentage of potentially critical activities, i.e., activities whose slack is less than 25% of the activity duration.
RM4: Ratio of the project buffer size to the project deadline.
We use Monte Carlo simulation to generate a random set of realizations of the activity durations and to test the robustness measures on these realizations. The following performance measures (PM) are used in the simulation:
1) The probability that the project ends within the deadline.
2) The percentage excess of the project completion time over the deadline.
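The performance measures above can be estimated by Monte Carlo simulation. Below is a minimal Python sketch on a toy four-activity network (all data are illustrative, not one of the test instances): it samples lognormal activity durations with a given coefficient of variation, computes the early start schedule by a CPM forward pass, and estimates PM1, the probability of meeting the deadline.

```python
import math
import random

# Toy AON network (illustrative): dummy 0 -> {1, 2} -> 3 -> dummy 4.
succ = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
base = {1: 4.0, 2: 6.0, 3: 3.0}   # baseline durations; dummies take 0
deadline = 12.0
CV = 0.5                          # coefficient of variation of durations

def sample_duration(d):
    """Lognormal with mean d and coefficient of variation CV."""
    s2 = math.log(1 + CV ** 2)
    return random.lognormvariate(math.log(d) - s2 / 2, math.sqrt(s2))

def makespan(dur):
    """CPM forward pass; node ids are topologically ordered."""
    C = {v: 0.0 for v in succ}
    for v in sorted(succ):
        for w in succ[v]:
            C[w] = max(C[w], C[v] + dur.get(w, 0.0))
    return C[4]

def pm1(n_rep=5000, seed=42):
    """PM1: estimated probability that the project meets the deadline."""
    random.seed(seed)
    hits = sum(
        makespan({a: sample_duration(d) for a, d in base.items()}) <= deadline
        for _ in range(n_rep))
    return hits / n_rep
```

With the baseline durations the deterministic makespan is 9; under lognormal noise PM1 drops below 1, and correlating such estimates with slack-based measures over many instances yields regressions like those reported in Table 1.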
After simulating the projects, we select the robustness measure that has the highest correlation with the performance measures as the best metric to represent robustness. The following algorithm is used to test the robustness measures using simulation:
1. Given the scheduling policy (SP), generate an initial baseline schedule. Then calculate the RM of each schedule.
2. Monte Carlo simulation:
a. Set the activity time of each node in the network to a random number generated from the activity time distribution.
b. Generate the early start schedule (ESS) by using the randomly generated durations and classical CPM calculations. Record the activity completion times and calculate the PM.
c. Repeat steps (a) and (b) Nr (number of replications) times.
3. Calculate the correlation between the RM and the PM.
To model the activity durations, we use a lognormal distribution with mean equal to the baseline duration and a coefficient of variation of 0.5. We use 36 random instances generated by Akkan et al. (2005) to test the proposed methods. To evaluate the relationship between the robustness and the performance measures, we run regression models and report the coefficient of determination (R2). Table 1 gives the average R2 over all problem instances that have the same coefficient of network complexity. Before running the regression models, the necessary assumptions are checked. Table 1 demonstrates that the buffer size is the best robustness measure regardless of the network complexity. Furthermore, measures RM1 and RM3 also have high correlations with the performance measures.

Table 1. Average % R2 values for the regression of robustness measures on performance measures

           RM1      RM2      RM3      RM4
PM1        95.03    52.92    82.51    97.04
PM2        92.61    52.69    81.56    95.79

3. Scheduling Algorithm

Using the insight revealed by the simulation results, we generate the baseline schedule by maximizing the project buffer size, with the intention that the schedule contain sufficient safety time to absorb unanticipated disruptions. However, while maximizing the robustness measure, the project cost should remain within acceptable limits. We generate the robust schedule following a two-phase methodology:
1. Given a project deadline, DTCTP-D is formulated and solved exactly. The objective value of the optimal solution, B0, sets a threshold budget value for the next phase. To improve the robustness of the schedule, the threshold budget may be amplified, i.e., B = (1 + η)B0, η ≥ 0.
2. Given the projected budget B, an initial baseline schedule is generated by solving DTCTP-B exactly. This phase inserts a project buffer to protect the schedule against disruptions while controlling the project cost.
We apply the above two-phase algorithm to schedule project networks. In order to solve small DTCTP-D and DTCTP-B instances, we use the CPLEX 9.1 optimization software. The exact algorithm for DTCTP-D is based on the formulation (1)-(6). For large-scale instances, however, we solve the instances with Benders decomposition. The Benders reformulation of DTCTP-D is given below; DTCTP-B can be reformulated similarly. In this formulation, s = 1, ..., S refers to a path from node 0 to node n+1 and wijs is the incidence vector of path s.

Min  Σ(j ∈ N) Σ(m ∈ Mj) cjm xjm

Subject to

Σ((i,j) ∈ E) Σ(m ∈ Mj) pjm wijs xjm - δ ≤ 0      s = 1, ..., S           (7)
Σ(m ∈ Mj) xjm = 1                                for all j ∈ N
xjm ∈ {0, 1}                                     for all j ∈ N, m ∈ Mj
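The two-phase logic can be illustrated on a toy instance by plain enumeration (a hypothetical two-activity serial project with made-up data, not a CPLEX or Benders implementation): phase 1 solves DTCTP-D for the threshold budget B0, phase 2 re-solves with the amplified budget B = (1 + η)B0 to minimize the duration, and the project buffer is δ minus the resulting duration.

```python
from itertools import product

# Hypothetical instance: activities 1 and 2 in series between dummies 0 and 3.
modes = {1: [(2, 10), (4, 6)], 2: [(3, 12), (5, 7)]}   # (duration, cost) pairs
edges = [(0, 1), (1, 2), (2, 3)]                        # topologically ordered
deadline = 8

def project_duration(dur):
    """CPM forward pass over the (topologically ordered) edge list."""
    C = {0: 0}
    for i, j in edges:
        C[j] = max(C.get(j, 0), C[i] + dur.get(j, 0))
    return C[max(C)]

def schedules():
    """Enumerate every mode assignment as a (duration, cost) pair."""
    acts = sorted(modes)
    for combo in product(*(modes[a] for a in acts)):
        dur = {a: dc[0] for a, dc in zip(acts, combo)}
        yield project_duration(dur), sum(dc[1] for dc in combo)

def two_phase(eta):
    # Phase 1: DTCTP-D -- minimum cost meeting the deadline.
    B0 = min(c for d, c in schedules() if d <= deadline)
    # Phase 2: DTCTP-B with amplified budget -- minimize the duration.
    dur = min(d for d, c in schedules() if c <= (1 + eta) * B0)
    return B0, dur, deadline - dur   # threshold budget, duration, buffer
```

On this instance, η = 0 reproduces the cheapest deadline-feasible schedule (duration 7, buffer 1), while η = 0.3 buys the faster modes and triples the buffer; the exact methods in the paper play the same role on realistic instances.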

In order to assess the effectiveness of the proposed scheduling algorithm in protecting the project from disruptions under various settings, we use simulation. We use coefficients of variation of 0.25 and 0.5 to characterize small and moderate variability in activity durations, respectively. In this research, robustness is evaluated both with the project buffer size, which was shown to be the best of the proposed metrics for representing robustness, and with the performance measures. The robustness measures are outputs of the scheduling algorithm, whereas the performance measures are outputs of the simulation. The results of comprehensive computational experiments will be provided.

References
Akkan, C., Drexl, A., Kimms, A. (2005). Network decomposition-based benchmark results for the discrete time-cost tradeoff problem, European Journal of Operational Research, 165.
Al-Fawzan, M. A., Haouari, M. (2005). A bi-objective model for robust resource-constrained project scheduling, International Journal of Production Economics, 96, 175-187.
De, P., Dunne, E.J., Ghosh, J.B., Wells, C.E. (1995). The discrete time/cost trade-off problem revisited, European Journal of Operational Research, 81, 225-238.
Demeulemeester, E., Herroelen, W. and Elmaghraby, S.E. (1996). Optimal procedures for the discrete time/cost trade-off problem in project networks, European Journal of Operational Research, 88, 50-68.
Demeulemeester, E., De Reyck, B., Foubert, B., Herroelen, W. and Vanhoucke, M. (1998). New computational results for the discrete time/cost trade-off problem in project networks, Journal of the Operational Research Society, 49, 1153-1163.
Kobylański, P., Kuchta, D. (2007). A note on the paper by Al-Fawzan, M. A. and Haouari, M. about a bi-objective problem for robust resource-constrained project scheduling, accepted for publication in International Journal of Production Economics.
Lambrechts, O., Demeulemeester, E., Herroelen, W. (2007). A tabu search procedure for developing robust predictive project schedules, accepted for publication in International Journal of Production Economics.


Robust Optimization Models for the Discrete Time/Cost Tradeoff Problem

Öncü Hazır (1,2), Erdal Erel (1), and Yavuz Günalay (3)

(1) Faculty of Business Administration, Bilkent University, 06800, Ankara - Turkey
e-mail: [email protected], [email protected]

(2) Industrial Engineering Department, Çankaya University, 06530, Ankara - Turkey
e-mail: [email protected]

(3) Faculty of Business, Bahçeşehir University, 34100, Beşiktaş - Istanbul - Turkey
e-mail: [email protected]

Keywords: Project scheduling, robust optimization, interval data

1. Introduction

The discrete time/cost tradeoff problem (DTCTP) is a well-known NP-hard multi-mode project scheduling problem of which three versions have been studied in the literature, namely, the deadline problem (DTCTP-D), the budget problem (DTCTP-B) and the efficiency problem (DTCTP-E). In DTCTP-D, given a set of time/cost pairs (modes) and a project deadline δ, each activity is assigned to one of the possible modes so that the total cost is minimized. Conversely, the budget problem minimizes the project duration while meeting a given budget B. Finally, DTCTP-E is the problem of constructing efficient time/cost points over the set of feasible project durations. Existing studies on the DTCTP assume complete information and a deterministic environment; however, projects are subject to different sources of uncertainty and protection is required. To minimize the effect of unexpected events on project performance, five fundamental scheduling approaches have been discussed in the literature: stochastic scheduling, fuzzy scheduling, sensitivity analysis, reactive scheduling, and robust (proactive) scheduling (Herroelen and Leus, 2005). In stochastic project scheduling, the activity durations are modeled as random variables and probability distributions are used. Fuzzy project scheduling uses fuzzy membership functions to model activity durations instead of probability distributions. The effects of parameter changes are investigated in sensitivity analysis. In reactive scheduling, the schedule is modified when a disruption occurs, whereas in robust scheduling anticipation of variability is incorporated into the schedule and schedules that are insensitive to disruptions are generated. Valls et al. (2007) propose proactive-reactive scheduling procedures for a real-life problem in the management of service centers. Robust optimization is a modeling approach to generate a plan that is acceptable even in worst-case scenarios.
Unlike stochastic programming, this optimization approach does not require any assumptions regarding the underlying probability distribution of the uncertain data, which is difficult to determine accurately. The most widely studied robust optimization models are minmax and minmax regret models. The major shortcoming of these approaches is their inherent over-pessimism: since worst-case scenarios are considered, they might result in poor solutions for many of the other scenarios. To eliminate over-pessimism, we develop two alternative models in which only a subset of the uncertain parameters is allowed to deviate from their estimates, i.e., uncertainty is modeled using intervals and only a subset of the problem parameters is driven to their upper bounds. The models are evaluated using various robustness measures. In this research, we focus on robust scheduling and formulate the robust DTCTP using two alternative approaches. In order to solve the robust models, we develop exact and heuristic algorithms. The major contributions of this research are the incorporation of uncertainty into a practically relevant project scheduling problem and the solution algorithms. Furthermore, to the best of our knowledge, this research is the first application of robust optimization to multi-mode project scheduling.


2. Robust DTCTP with Interval Data

An activity-on-node (AON) graph representation is used to define the precedence relationships of the DTCTP. A project with n activities is represented by an AON graph G(N, E), where N = {0, 1, ..., n+1} is the set of nodes (activities) and E = {(i, j): i, j ∈ N and i must precede j} is the set of given precedence relationships among the activities. In N, two dummy activities, 0 and n+1, indicate the project start and completion instants. Activity j performed in mode m is characterized by a processing time pjm and a cost cjm. Given the modes and a project deadline δ, each activity is assigned to one of the possible modes so that the total cost is minimized. A mixed integer programming (MIP) model of the DTCTP-D can be stated as follows:

Min  Σ(j ∈ N) Σ(m ∈ Mj) cjm xjm                                   (1)

Subject to

Σ(m ∈ Mj) xjm = 1                          for all j ∈ N          (2)
Cj - Ci - Σ(m ∈ Mj) pjm xjm ≥ 0            for all (i, j) ∈ E     (3)
Cn+1 ≤ δ                                                          (4)
C0 = 0                                                            (5)
xjm ∈ {0, 1}                               for all j ∈ N, m ∈ Mj  (6)

The continuous decision variable Cj denotes the completion time of activity j = 0, 1, ..., n+1. The binary decision variable xjm assigns modes to the activities (6). While minimizing the total cost (1), a unique mode should be assigned to each activity (2), precedence constraints should not be violated (3), and the deadline should be met (4). Note that C0 and Cn+1 represent the project start and completion times, respectively. This research examines project environments in which the timely completion of critical activities is crucial. Build-Operate-Transfer (BOT) projects are good examples of such projects; they favor early completion. The BOT model describes the situation in which a public service or an infrastructure investment is made and operated for a specific period by a private enterprise, and then transferred to a public institution. We fix the activity durations and deal with cost uncertainty in these types of projects. This type of uncertainty can seriously affect the profitability of the projects; hence protection against deviations in the total cost becomes the key concern of project managers. Two robust optimization models and solution algorithms for the DTCTP-D are proposed in the following sections. In both models, the uncertainty is modeled using intervals, i.e., cjm ∈ [c̲jm, c̄jm] for all j ∈ N \ {0, n+1} and m ∈ Mj.

The traditional minmax criterion focuses on the worst-case alternative, which corresponds to the scenario where each cost c_jm is given by c̄_jm, the upper bound of the corresponding interval. However, this notion of robustness is extremely pessimistic. One recent approach that controls the pessimism level is the robust discrete optimization approach proposed by Bertsimas and Sim (2003). They assume that only a subset of the uncertain parameters is allowed to deviate from their estimates; in other words, only Γ of the activity cost parameters (out of a total of n) show random behavior and are therefore considered to be at their upper bounds. If Γ = 0, the influence of the cost deviations is disregarded and the deterministic problem with nominal cost values is obtained, whereas if Γ = n, all possible cost deviations are considered and the problem becomes a minmax optimization problem. We apply the Bertsimas and Sim approach to formulate the robust DTCTP-D as follows: at most Γ (0 ≤ Γ ≤ n) activities are assumed to have cost values at their upper bounds and the remaining n − Γ coefficients are forced to be deterministic, i.e., they are set to their respective

124

PMS 2008, April 28-30, İstanbul, Turkey

nominal values (c_jm = (c̲_jm + c̄_jm)/2, with maximum deviations d_jm = c̄_jm − c_jm). The restricted uncertainty model can be expressed as follows:

Min_{x∈X_D}  Σ_{j∈N} Σ_{m∈M_j} c_jm x_jm  +  Max_{u_j∈{0,1}, Σ_{j∈N} u_j ≤ Γ}  Σ_{j∈N} Σ_{m∈M_j} d_jm x_jm u_j        (7)

In this model, X_D denotes the set of feasible solutions to the DTCTP-D. The set of coefficients that are subject to uncertainty is determined by the binary variables u_j, and only Γ of these variables are allowed to be one. The model chooses the variables in X_D with the greatest influence on the objective function. In order to solve the proposed uncertainty model, an exact and a heuristic algorithm based on Benders decomposition are developed. The Benders reformulation of the robust DTCTP-D is given below. In this formulation, s = 1, …, S indexes the paths from node 0 to node n+1, w^s is the incidence vector of path s, and u^k, k = 1, …, K, are the deviation vectors generated for the inner maximization.
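For a fixed mode assignment x, the inner maximization in the restricted uncertainty model simply selects the Γ largest deviation terms d_jm x_jm. A minimal sketch of this evaluation (function name and data are illustrative, not from the paper):

```python
def worst_case_cost(costs, deviations, gamma):
    """Objective of the restricted uncertainty model for a fixed mode
    assignment: nominal cost plus the gamma largest deviations."""
    return sum(costs) + sum(sorted(deviations, reverse=True)[:gamma])

# hypothetical single-mode choices for 4 activities
costs = [10, 20, 15, 5]   # nominal costs c_jm of the chosen modes
devs = [4, 1, 6, 2]       # deviations d_jm of the chosen modes
print(worst_case_cost(costs, devs, 0))   # 50: deterministic problem
print(worst_case_cost(costs, devs, 2))   # 60: the two largest deviations added
print(worst_case_cost(costs, devs, 4))   # 63: the minmax case
```

Sweeping gamma from 0 to n traces the trade-off between nominal cost and worst-case protection.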

Min Σ_{j∈N} Σ_{m∈M_j} c_jm x_jm + z

s.t.

Σ_{j∈N} Σ_{m∈M_j} d_jm u_j^k x_jm − z ≤ 0,       k = 1, …, K
Σ_{(i,j)∈E} Σ_{m∈M_j} p_jm w_ij^s x_jm ≤ δ,      s = 1, …, S        (8)
Σ_{m∈M_j} x_jm = 1,                              j ∈ N
x_jm ∈ {0, 1},                                   j ∈ N, m ∈ M_j
z ≥ 0
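The path constraints above require every 0 → n+1 path to respect the deadline δ; for a fixed mode assignment this reduces to checking that the longest path (the project duration) does not exceed δ, which a single forward pass over a topological order computes. A small sketch with hypothetical data:

```python
def project_duration(n, edges, dur):
    """Longest-path length from node 0 to n+1 in an AON graph.
    dur[j] is the duration of the selected mode of activity j;
    nodes 0..n+1 are assumed to be numbered in topological order."""
    C = [0.0] * (n + 2)                 # earliest completion times
    for j in range(n + 2):
        est = max((C[i] for i, k in edges if k == j), default=0.0)
        C[j] = est + dur[j]
    return C[n + 1]

# hypothetical 3-activity project: paths 0-1-3-4 and 0-2-4
edges = [(0, 1), (0, 2), (1, 3), (3, 4), (2, 4)]
dur = [0, 3, 5, 2, 0]                   # dummies 0 and n+1 take no time
print(project_duration(3, edges, dur))  # 5.0 = max(3 + 2, 5)
```

A candidate x is deadline-feasible exactly when `project_duration(...) <= delta`.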

In the alternative criticality-based model, the activities whose cost values are at their upper bounds are chosen among the critical activities. Activities with a sufficient amount of slack provide flexibility in scheduling and in resource allocation: it is possible both to delay their starting times and to elongate their durations via lower resource allocations. Owing to this flexibility, such activities pose less risk to the cost targets than critical activities do. The following model illustrates the criticality-based approach. CR refers to the set of critical or potentially critical activities. We use the slack/duration ratio to assess the criticality of activities and define as potentially critical the activities whose total slack is less than 100ξ % of the activity duration, where ξ will be called the slack-duration threshold (SDT) from now on; i.e. CR = {j : TS_j/p_j ≤ ξ}, where TS_j refers to the total slack of activity j. In this study we set the SDT to 25 %, i.e. ξ = 0.25.

Min_{x∈X_D}  Σ_{j∈N} Σ_{m∈M_j} c_jm x_jm  +  Max_{u_j∈{0,1}, Σ_{j∈CR} u_j ≤ Γ}  Σ_{j∈N} Σ_{m∈M_j} d_jm x_jm u_j        (9)

The criticality requirement makes the model more complex. In order to solve the criticality-based model, a tabu search (TS) heuristic algorithm is developed. TS is a local-search improvement heuristic that has proven effective on many difficult combinatorial optimization problems (e.g., Hazır et al., 2007). It has a punishment mechanism to avoid getting trapped at local optima by forbidding or penalizing moves that would cause cycling among previously visited solution points. These forbidden moves are called "tabu". A short-term memory keeps track of the move attributes that have changed during the recent past, and these attributes become tabu for a short time.
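The set CR can be obtained from a standard critical-path pass: compute earliest and latest start times, take the total slack TS_j as their difference, and keep the activities with TS_j/p_j ≤ ξ. A minimal sketch with ξ = 0.25 and hypothetical data (illustrative only; the paper does not prescribe this particular implementation):

```python
def potentially_critical(n, edges, p, xi=0.25):
    """Return CR = {j : TS_j / p_j <= xi} for an AON project.
    Nodes 0..n+1 (dummies 0 and n+1 with p = 0) and the edge list
    are assumed to be in topological order."""
    N = n + 2
    ES = [0.0] * N                       # earliest start times (forward pass)
    for i, j in edges:
        ES[j] = max(ES[j], ES[i] + p[i])
    horizon = ES[N - 1]                  # project duration (p[n+1] = 0)
    LS = [horizon] * N                   # latest start times (backward pass)
    for i, j in reversed(edges):
        LS[i] = min(LS[i], LS[j] - p[i])
    return {j for j in range(1, n + 1) if (LS[j] - ES[j]) / p[j] <= xi}

# hypothetical project: paths 0-1-3-4 (length 4) and 0-2-4 (length 5)
edges = [(0, 1), (0, 2), (1, 3), (3, 4), (2, 4)]
p = [0, 3, 5, 1, 0]
print(potentially_critical(3, edges, p))   # {2}: only the critical activity
```

Activities 1 and 3 carry one unit of slack each (ratios 1/3 and 1/1, both above ξ), so only activity 2 enters CR.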


3. Model Comparison

In order to compare the proposed robust models, we use several robustness measures. Existing robust scheduling studies generally address machine environments and follow a scenario-based approach in which scenarios for the job attributes must be defined. They basically employ two types of robustness measures: direct measures, which are derived from realized performances, and heuristic approaches, which utilize simple surrogate measures. The computational burden of optimizing direct measures is generally high compared to surrogate measures. We refer the reader to Sabuncuoğlu and Gören (2005) for a more detailed discussion of the measures. Since achieving project completion time and project cost targets is crucial for project managers, we assess the robustness of project schedules both in terms of cost and in terms of time. The comparison is based on the following measures:
1. Cost-based measures:
   a. Expected realized cost
   b. Worst-case cost
2. Time-based measures:
   a. Average total slack
   b. Percentage of potentially critical activities
   c. Ratio of project buffer size to project deadline

36 problem instances are generated from the data set provided by Akkan et al. (2005) and are used to compare the proposed models. Two additional parameters, namely the pessimism level (Γ) and the uncertainty factor (γ), with d_jm = γ·c_jm, are required to model the robust DTCTP. We program all the algorithms in the C language on a Sun UltraSPARC 12x400 MHz workstation with 3 GB RAM; the optimization software CPLEX 9.1 is called to solve the integer programs. We compare the schedules generated by the two alternative models and assess the effectiveness and efficiency of the algorithms under various problem settings with computational experiments. The results of the comprehensive computational experiments will be discussed during the presentation.

References

Akkan, C., Drexl, A., Kimms, A. (2005). Network decomposition-based benchmark results for the discrete time–cost tradeoff problem. European Journal of Operational Research, 165: 339–358.
Bertsimas, D. and Sim, M. (2003). Robust discrete optimization and network flows. Mathematical Programming, 98: 49–71.
Hazır, Ö., Günalay, Y., Erel, E. (2007). Customer order scheduling problem: a comparative metaheuristics study. International Journal of Advanced Manufacturing Technology. DOI 10.1007/s00170-007-0998-8.
Herroelen, W., Leus, R. (2005). Project scheduling under uncertainty: survey and research potentials. European Journal of Operational Research, 165: 289–306.
Sabuncuoğlu, I. and Gören, S. (2005). A review of reactive scheduling research: proactive scheduling and new robustness and stability measures. Technical Report IE/OR 2005-02, Department of Industrial Engineering, Bilkent University, Ankara.
Valls, V., Gómez-Cabrero, D., Pérez, M.A. and Quintanilla, S. (2007). Project scheduling optimization in service centre management. Tijdschrift voor Economie en Management, 52(3): 341–365.


Qualification of Multi-Skilled Human Resources Performing Project Work

Christian Heimerl and Rainer Kolisch
TUM Business School, Technische Universität München
e-mail: christian.heimerl, [email protected]

Keywords: Resource allocation, learning, forgetting, qualification.

1 Problem Statement

Assigning human resources to project work while taking into account resource-specific skills and efficiencies is a general planning task which has to be performed in any organization. It is of particular importance for service firms where, compared to manufacturing firms, labour intensity is higher and multi-skilled resources are more common. The efficiencies of human resources are usually dynamic due to the acquisition of knowledge, such that more experienced human resources have learned and therefore have higher efficiencies. On the other hand, there is depreciation of knowledge, which can be caused by internal and/or external effects. Internal effects depict, e.g., forgetting (cf. Gutjahr et al. [5]); external effects can represent, e.g., technological progress, which renders some older experience useless in the new environment (cf. Chen and Edgington [3]). We consider two goals when assigning project work to human resources: the operative goal is to minimize the costs for performing a given amount of project work during the planning horizon. On the strategic level, however, we need to decide who is applying and improving which skills in order to achieve given company-wide efficiencies measured in production rates. We consider a project context but assume that the projects and their schedules are given. We refer to a work package (s, t) with demand r_st as the aggregated amount of project work requiring skill s ∈ S in period t = 1, . . . , T, where T denotes the planning horizon. The pool of human resources consists of internal human resources R^i and external human resources R^e, i.e. R = R^i ∪ R^e. The demand of work packages requiring skill s can be assigned to human resources k ∈ R_s = R^i_s ∪ R^e_s with cost rates c_kt in period t. R_s is the set of human resources capable of performing project work which requires skill s. The working times of human resources are limited by time-dependent availabilities R_kt.
Learning effects can be observed for repetitive tasks in manual, cognitive and knowledge-based work (cf. e.g. Arzi and Shtub [1], Boh et al. [2], Nembhard and Uzumeri [9]), and Boh et al. [2] showed that they are present in project environments, too. Furthermore, by breaking up the project work into skill-specific work packages (s, t), we assume that the work packages, in contrast to the projects themselves, do have a repetitive character. Learning curves are a well-known concept to describe learning processes (cf. Wright [10]) and an extensive amount of literature deals with different types of learning curves (cf. e.g. Nembhard and Uzumeri [8], Yelle [11] for an overview). Learning curves are usually monotonically decreasing and convex functions. A learning curve f_ks(z_ks) describes the unit production time, i.e. the amount of time

127

that human resource k requires to produce one additional unit after having produced z_ks units (i.e. its experience) using skill s. We intentionally index the function f_ks with human resource k and skill s, since the learning process depends on the type of work and on the abilities of the person to adopt new knowledge. Furthermore, the argument z_ks is also indexed with k and s, i.e. we do not consider cross-skill or team learning effects. The time F_ks(z_ks) that human resource k requires to produce the first z_ks units using skill s can be expressed by the integral F_ks(z_ks) = ∫_{z′=0}^{z_ks} f_ks(z′) dz′. The time τ_ks required to produce x_ks units after having produced z_ks − x_ks units can then be calculated by τ_ks = F_ks(z_ks) − F_ks(z_ks − x_ks). We define z_kst as the experience level of resource k in skill s at the end of period t. Human resource k's amount of knowledge gained by performing project work using skill s in period t is represented by x_kst. Human resource k's amount of depreciated knowledge in skill s in period t is denoted by β_kst. Given an initial experience level z_ks0 of human resource k in skill s at the end of period t = 0, z_kst can be calculated by z_kst = z_ks(t−1) − β_kst + x_kst. This approach is similar to dynamic inventory models, with z_kst being the current inventory of knowledge, β_kst being the demand or loss of knowledge, and x_kst being the ordered and instantaneously delivered (or produced) knowledge (cf. e.g. Zipkin [12]). Strategic goals might require keeping or developing certain skills across the company. Therefore, the company's efficiency, measured as the sum of the production rates of the internal human resources in skill s at the end of the planning horizon T and defined by

Σ_{k∈R^i_s} 1 / f_ks(z_ksT)                                        (1)

should attain the level φ_s. Note that the production rate is the reciprocal of the unit production time calculated by f_ks(·). φ_s is the guaranteed internal production rate for skill s at the end of the planning horizon if all internal human resources are used for skill s.

2 Model

We model the outlined problem as a nonlinear program employing the following decision variables: the amount of work done by human resource k with skill s in period t is denoted by x_kst; human resource k's experience in skill s cumulated up to period t, considering the depreciation of knowledge, is denoted by z_kst; finally, τ_kst is the time human resource k requires to process the amount of work x_kst of skill s in period t.

Z = Min Σ_{s∈S} Σ_{k∈R_s} Σ_{t=1}^{T} c_kt · τ_kst                                      (2)

subject to

z_kst = z_ks(t−1) − β_kst + x_kst,           k ∈ R_s, s ∈ S, t = 1, . . . , T           (3)
τ_kst = F_ks(z_kst) − F_ks(z_kst − x_kst),   k ∈ R_s, s ∈ S, t = 1, . . . , T           (4)
Σ_{k∈R^i_s} 1 / f_ks(z_ksT) ≥ φ_s,           s ∈ S                                      (5)
Σ_{k∈R_s} x_kst ≥ r_st,                      s ∈ S, t = 1, . . . , T                    (6)
Σ_{s∈S} τ_kst ≤ R_kt,                        k ∈ R, t = 1, . . . , T                    (7)
x_kst, τ_kst ≥ 0,                            k ∈ R_s, s ∈ S, t = 1, . . . , T           (8)
z_kst ∈ ℝ,                                   k ∈ R_s, s ∈ S, t = 1, . . . , T           (9)

The objective function (2) minimizes the costs accruing from performing the work packages (s, t). Constraints (3) are the dynamic experience level constraints, where the experience level at the end of period t is the experience level of the preceding period (t − 1), decreased by the depreciation of knowledge and increased by the acquisition of knowledge in period t. Constraints (4) calculate the time τ_kst required to perform x_kst units of work package (s, t). Constraints (5) enforce the targeted skill efficiency levels at the end of the planning horizon. Constraints (6) ensure that the demand r_st of work package (s, t) is assigned to the human resources; fractional assignment of work packages to human resources is allowed. Constraints (7) restrict the availability of the human resources. Constraints (8) and (9) define the decision variables. If there is no learning and no forgetting, model (2)–(9) becomes linear with the learning function f_ks(z_ks) = 1/η_ks, where η_ks is the static efficiency of resource k w.r.t. skill s (cf. e.g. Heimerl and Kolisch [6, 7]).
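In the linear special case without learning and forgetting, τ_kst = x_kst/η_ks; for a single skill and period, and ignoring the target constraints (5), the cheapest fractional assignment is then obtained greedily by effective unit cost c_kt/η_ks. A sketch under these simplifying assumptions (data and function name are hypothetical):

```python
def assign_single_skill(demand, resources):
    """Cheapest fractional assignment of `demand` units of one work
    package when tau = x / eta (no learning, no forgetting).
    resources: list of (cost_rate, eta, avail_time).
    Greedy on effective unit cost c/eta, which is optimal for this
    single-skill, single-period special case."""
    order = sorted(range(len(resources)),
                   key=lambda k: resources[k][0] / resources[k][1])
    x = [0.0] * len(resources)
    cost = 0.0
    for k in order:
        if demand <= 0:
            break
        c, eta, avail = resources[k]
        x[k] = min(demand, eta * avail)   # work units k can absorb
        cost += c * x[k] / eta            # cost = rate * time = c * x / eta
        demand -= x[k]
    return cost, x

# hypothetical: 10 work units, two internal resources + one external
cost, x = assign_single_skill(10, [(50, 2.0, 3), (40, 1.0, 4), (80, 1.0, 100)])
print(cost, x)   # 310.0 [6.0, 4.0, 0.0]
```

With several skills competing for the same resources this greedy is no longer sufficient and the full model is needed.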

3 Implementation and Test Setup

We implemented the model in C++ using Ipopt 3.3.2 of the COIN-OR library (cf. [4]), employing an adaptation of the exponential learning function (cf. Nembhard and Uzumeri [8]), i.e.

f_ks(z_ks) = a_ks e^{−λ_ks z_ks} + b_ks                                        (10)

and therefore

F_ks(z_ks) = (a_ks / λ_ks) (1 − e^{−λ_ks z_ks}) + b_ks z_ks.                   (11)

b_ks > 0 represents the steady-state unit production time, λ_ks ≥ 0 is the learning rate and a_ks ≥ 0 the learning potential of human resource k in skill s. We chose this learning function for its ability to depict steady-state unit production times and for its mathematical tractability. However, the implementation can easily be adapted to other learning functions.
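Under (10) and (11), the processing-time constraint (4) can be evaluated in closed form. A small numerical sketch with hypothetical parameter values:

```python
import math

def f(z, a, lam, b):
    """Unit production time after z units of experience, eq. (10)."""
    return a * math.exp(-lam * z) + b

def F(z, a, lam, b):
    """Cumulative time for the first z units, eq. (11)."""
    return (a / lam) * (1.0 - math.exp(-lam * z)) + b * z

def tau(z, x, a, lam, b):
    """Time to produce x units after z - x units of experience (constraint (4))."""
    return F(z, a, lam, b) - F(z - x, a, lam, b)

a, lam, b = 2.0, 0.1, 1.0          # hypothetical learning parameters
print(f(0, a, lam, b))              # 3.0: unit time with no experience
print(round(tau(10, 10, a, lam, b), 3))   # time for the first 10 units
print(round(tau(20, 10, a, lam, b), 3))   # the next 10 units take less time
```

The second batch of 10 units is faster than the first, illustrating the learning effect, while the unit time approaches the steady state b as experience grows.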


In our computational study we choose the parameters for the external human resources in the following way: every skill can be performed by external human resources with unlimited availabilities, i.e. |R^e_s| = 1 ∀s and R_kt = ∞ ∀t, k ∈ R^e_s. External human resources are subject to neither learning nor forgetting, i.e. a_ks = 0 ∀s, k ∈ R^e_s. Using our model we would like to gain insights into the following questions:

• What is the influence of current and target skill efficiency levels on costs? What are the costs of qualifying human resources?
• Will human resources be qualified broadly or specialized? Are there recognizable patterns in the qualification strategy?
• For which skills will personnel be qualified and which skills will be outsourced? What is the influence of the depreciation of knowledge β_kst on this effect?

We will present computational results regarding these questions.

References

[1] Y. Arzi and A. Shtub. Learning and forgetting in mental and mechanical tasks: A comparative study. IIE Transactions, 29(9):759–768, 1997.
[2] W. F. Boh, S. A. Slaughter, and J. A. Espinosa. Learning from experience in software development: A multilevel analysis. Management Science, 53(8):1315–1331, Aug. 2007.
[3] A. N. K. Chen and T. M. Edgington. Assessing value in organizational knowledge creation: Considerations for knowledge workers. MIS Quarterly, 29(2):279–309, June 2005.
[4] Coin-OR. http://www.coin-or.org/projects/Ipopt.xml.
[5] W. J. Gutjahr, S. Katzensteiner, P. Reiter, C. Stummer, and M. Denk. Competence-driven project portfolio selection, scheduling and staff assignment. Working Paper, 2007.
[6] C. Heimerl and R. Kolisch. Integrated manpower allocation and multi-project scheduling in an IT environment. In J. Jozefowska and J. Weglarz, editors, Abstracts of the Tenth International Workshop on Project Management and Scheduling, Poznan, Poland, Apr. 2006.
[7] C. Heimerl and R. Kolisch. Scheduling and staffing multiple projects with a multi-skilled workforce. Technical Report SOM 2-2007, Lehrstuhl für Technische Dienstleistungen und Operations Management, Technische Universität München, Sept. 2007.
[8] D. A. Nembhard and M. V. Uzumeri. An individual-based description of learning within an organization. IEEE Transactions on Engineering Management, 47(3):370–378, 2000.
[9] D. A. Nembhard and M. V. Uzumeri. Experiential learning and forgetting for manual and cognitive tasks. International Journal of Industrial Ergonomics, 25(4):315–326, 2000.


[10] T. Wright. Factors affecting the cost of airplanes. Journal of Aeronautical Science, 3:122–128, 1936.
[11] L. E. Yelle. The learning curve: Historical review and comprehensive survey. Decision Sciences, 10:302–328, 1979.
[12] P. H. Zipkin. Foundations of Inventory Management. McGraw-Hill, Boston, 2000.


[The next three contributions (pp. 132–143 of the proceedings) are illegible in this copy: their embedded fonts were lost during extraction. The surviving fragments indicate (i) an earliness-tardiness project scheduling model over activities N = {1, …, n} with predecessors b(j) < j, durations p_j > 0 and due dates d_j, minimizing F = Σ_{j∈N} (α_j E_j + β_j T_j) with E_j = max{0, d_j − x}, T_j = max{0, x − d_j} and x = S_j − S_{b(j)}, together with two linear-programming reformulations; (ii) a polynomial dynamic program over states (i, j, k) with value function f(i, j, k) for a scheduling problem with two job families A_1, …, A_n and B_1, …, B_m; and (iii) an analysis of expected output E[Y] under a production function f_a(aM) = α a M^γ with γ > 1, distinguishing the cases a ∈ [0, 1/(1+β)], a ∈ (1/(1+β), 1/(1−β)] and a ∈ (1/(1−β), ∞).]
Enhanced Energetic Reasoning for Parallel Machine Scheduling

Lotfi Hidri 1, Anis Gharbi 1,2, Mohamed Haouari 1,3, and Chefi Triki 4

1 Combinatorial Optimization Research Group-ROI, Polytechnic School of Tunisia
e-mail: [email protected]
2 Industrial Engineering Department, College of Engineering, King Saud University, Saudi Arabia
e-mail: [email protected]
3 Faculty of Business Administration, Bilkent University, 06800 Ankara, Turkey
e-mail: [email protected]
4 Department of Mathematics, University of Salento, Italy
e-mail: [email protected]

Keywords: Feasibility and adjustment procedures, energetic reasoning, branch-and-bound

1. Introduction

We address the identical parallel machine scheduling problem with release dates and delivery times, which is formally described as follows: a set J of n jobs has to be scheduled on m identical parallel machines (n ≥ m ≥ 2). Each job j (1 ≤ j ≤ n) has a processing time p_j; a release date r_j (head) on which the job becomes available for processing; and a delivery time q_j (tail) that must elapse between its completion on the machine and its exit from the system. Each machine processes at most one job at a time and each job cannot be processed by more than one machine at a time. Preemption is not allowed and all machines are available from time zero onwards. The objective is to find a schedule that minimizes the makespan. This problem, denoted by P|r_j,q_j|Cmax, is strongly NP-hard. In this paper, we are concerned with branch-and-bound algorithms for the P|r_j,q_j|Cmax. The main contribution of this work is the development of new feasibility tests and adjustment procedures that aim at detecting infeasible nodes and improving the values of the lower bounds. For that purpose, we focus on the decision variant of the P|r_j,q_j|Cmax, which consists in checking the existence of a feasible schedule with makespan not exceeding a given value C. This decision problem is denoted by P|r_j,d_j|−, where d_j = C − q_j, and it amounts to checking the existence of a feasible schedule such that each job j is processed nonpreemptively within its prescribed time window [r_j,d_j]. We propose several improvements of the so-called energetic reasoning (ER), which was initiated by Lahrichi (1982), originally proposed for cumulative scheduling by Erschler et al. (1991), and thereafter evolved by several researchers (Baptiste et al. (1999); Néron et al. (2001); Tercinet et al. (2006)).
Our computational experiments show that, in contrast to the classical ER, the proposed enhanced ER substantially improves the performance of the best existing branch-and-bound algorithm for the P | rj ,qj | Cmax. Also, it is worth emphasizing that although this paper mainly focuses on parallel machine scheduling, the proposed techniques can be extended to more complex scheduling problems including cumulative scheduling and job shop scheduling problems. The remainder of this paper is organized as follows. In Section 2, the ER is briefly described. Section 3 presents the main contribution: enhanced ER-based feasibility and adjustment procedures are detailed. The results of a preliminary computational study are reported in Section 4.
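The reduction to the decision variant can be sketched as a binary search on the trial makespan C, with each tail q_j turned into a deadline d_j = C − q_j. The stand-in oracle below checks only two necessary conditions, so in general the search returns a lower bound on Cmax; the paper instead uses full branch-and-bound feasibility tests. All names and data here are illustrative:

```python
def minimize_makespan(jobs, feasible, lo, hi):
    """Binary search on the trial makespan C for P|r_j,q_j|Cmax:
    each tail q_j becomes a deadline d_j = C - q_j and a feasibility
    oracle for P|r_j,d_j|- is consulted.  jobs = [(r, p, q), ...];
    hi must be an upper bound on the optimum."""
    while lo < hi:
        C = (lo + hi) // 2
        if feasible([(r, p, C - q) for r, p, q in jobs]):
            hi = C
        else:
            lo = C + 1
    return lo

def necessary_conditions(windows, m=1):
    """Illustrative oracle: window fit and aggregate capacity only."""
    span = max(d for _, _, d in windows) - min(r for r, _, _ in windows)
    return (all(r + p <= d for r, p, d in windows)
            and sum(p for _, p, _ in windows) <= m * span)

jobs = [(0, 2, 2), (0, 2, 0)]        # hypothetical (r_j, p_j, q_j), m = 1
print(minimize_makespan(jobs, necessary_conditions, 0, 10))   # 4
```

On this toy instance the necessary conditions happen to be tight, so the returned value 4 is also the optimal makespan.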

2. The energetic reasoning

Given a time interval [t1,t2], the energetic reasoning (ER) is based on the computation of the part of the jobs that must be processed in any feasible schedule between times t1 and t2. The part of job j which needs to be completed within [t1,t2] is called its work over the time interval [t1,t2]. To compute this mandatory work, the jobs are either left-shifted or right-shifted within their time windows [r_j, d_j]; i.e. a job either starts at r_j or finishes at d_j. The left-work of a job j over [t1,t2], denoted by W^l_j, is defined as the part of the job that must be processed between t1 and t2 if the job starts at its release date r_j. Symmetrically, the right-work of a job j, denoted by W^r_j, is defined as the part of

the job that must be processed between t1 and t2 if it finishes at its deadline d_j. Thus, the work of a job j over [t1,t2], denoted by W_j, is equal to the minimum of its left-work and its right-work. Néron et al. (2001) proposed the following formulae for the computation of the left, right and total work:

W^l_j = min(t2 − t1, p_j, max(0, r_j + p_j − t1)),
W^r_j = min(t2 − t1, p_j, max(0, t2 − d_j + p_j)),
W_j = min(W^l_j, W^r_j).

The total work over the time interval [t1,t2] is defined by W = Σ_{j∈J} W_j. Clearly, the instance is infeasible if W > m(t2 − t1). Moreover, the time bounds of a job j may be adjusted as follows. Let s_j = m(t2 − t1) − W + W_j denote the slack of job j over [t1,t2], i.e. the maximum amount of time that might be allocated to job j during the time interval [t1,t2]. Then r_j can be adjusted to max(r_j, t2 − s_j) if s_j < W^l_j, and d_j can be adjusted to min(d_j, t1 + s_j) if s_j < W^r_j. Baptiste et al. (1999) proved that the only non-dominated values of t1 and t2 (with t1 < t2) that need to be considered in the energetic reasoning are the following ones:

t1 ∈ {r_j : j ∈ J} ∪ {d_j : j ∈ J} ∪ {r_j + p_j : j ∈ J},
t2 ∈ {r_j : j ∈ J} ∪ {d_j : j ∈ J} ∪ {d_j − p_j : j ∈ J},

together with t2 = r_j + d_j − t1 for each j ∈ J and each candidate t1, and t1 = r_j + d_j − t2 for each j ∈ J and each candidate t2.
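The formulae above translate directly into the basic ER feasibility test; a minimal sketch on a hypothetical instance:

```python
def work(job, t1, t2):
    """Left-, right- and total work of job = (r, p, d) over [t1, t2]."""
    r, p, d = job
    wl = min(t2 - t1, p, max(0, r + p - t1))   # job left-shifted to start at r
    wr = min(t2 - t1, p, max(0, t2 - d + p))   # job right-shifted to finish at d
    return wl, wr, min(wl, wr)

def energetic_feasible(jobs, m, t1, t2):
    """Classical ER test: False if mandatory work exceeds capacity."""
    W = sum(work(j, t1, t2)[2] for j in jobs)
    return W <= m * (t2 - t1)

# two hypothetical jobs (r_j, p_j, d_j), each needing 3 of the 4 slots
jobs = [(0, 3, 4), (0, 3, 4)]
print(energetic_feasible(jobs, 1, 0, 4))   # False: W = 6 > 1 * (4 - 0)
print(energetic_feasible(jobs, 2, 0, 4))   # True:  W = 6 <= 2 * (4 - 0)
```

The same W and W_j values feed the slack-based adjustments of r_j and d_j described above.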

3. The enhanced energetic reasoning

The goal of this section is to describe new, improved feasibility conditions and adjustment procedures that can be performed for a given time interval [t1,t2]. In this way, only a reduced subset of time intervals may need to be checked. In our experiments, we show that using an O(n) number of time intervals within the enhanced energetic reasoning yields substantially better results than those obtained with the classical energetic reasoning. In the sequel, only jobs with W_j ≠ 0 are considered. Actually, a job j such that r_j ≤ t1 requires an amount of work equal to r_j + p_j − t1 if and only if it is processed within the time interval [r_j, r_j + p_j]. That is, there is necessarily one machine loaded with job j during the time interval [t1, t1+1]. Consequently, there are at most m jobs having r_j ≤ t1 that would require an amount of work equal to r_j + p_j − t1. Similarly, there are at most m jobs having d_j ≥ t2 that would require an amount of work equal to t2 − d_j + p_j. An immediate consequence of this observation is that the minimum total amount of work over [t1,t2] can be more accurately estimated by allowing no more than m left-shifted jobs and no more than m right-shifted jobs. For that purpose, define:

x_j = 1 if j starts at r_j ≤ t1, and 0 otherwise;
y_j = 1 if j finishes at d_j ≥ t2, and 0 otherwise;
z_j = 1 if j is processed inside [t1,t2], and 0 otherwise.

Hence, a better estimate of the total work over [t1,t2], denoted by W̃, is equal to the optimal value of the following 0-1 programming model:

(P1): Minimize Σ_{j∈J} (W^l_j x_j + W^r_j y_j + p_j z_j)
subject to
x_j + y_j + z_j = 1,            for j ∈ J
Σ_{j∈J : r_j ≤ t1} x_j ≤ m
Σ_{j∈J : d_j ≥ t2} y_j ≤ m
x_j, y_j, z_j ∈ {0, 1},         for j ∈ J.

Interestingly, (P1) can be solved in polynomial time using the following minimum-cost flow reformulation. Consider the network depicted in Figure 1, constructed as follows:
• a source node s and a sink node t;
• three nodes (L, R, I) representing the left, right and inside positions, connected to the sink node by zero-cost arcs with capacities m, m and n, respectively;
• n job nodes (J1, …, Jn), each of which is connected to the source node by an arc with unit capacity and zero cost, and to nodes L, R and I by three unit-capacity arcs with respective costs W^l_j, W^r_j, and p_j.

One can readily check that there is a strict one-to-one correspondence between feasible s-t flows having a value equal to n and valid job assignments. Moreover, the cost of such a flow is equal to the total workload of the corresponding job assignment.

Figure 1. The network associated to (P1)

Although (P1) is solvable in polynomial time, it might be useful for some applications to have a faster approximate solution. To that aim, we introduce a relaxed version of (P1) which slightly underestimates W̃ while considerably reducing the computational burden. Let (P2) denote the relaxed version of (P1) obtained by replacing z_j by 1 − x_j − y_j and relaxing the constraint x_j + y_j ≤ 1. Clearly, (P2) can be easily solved by setting x_j = 1 for the m jobs having the largest p_j − W^l_j and y_j = 1 for the m jobs having the largest p_j − W^r_j. Let Ŵ denote the optimal objective value of (P2). Hence, W̄ = max(Ŵ, W) is a valid lower bound on the total work over [t1,t2].

Clearly, the energetic reasoning is enhanced if the feasibility test and the adjustments are performed using W̃ or W̄ instead of W. In our experiments, the only sets of time intervals [t1,t2] considered for the enhanced energetic reasoning are {[r_j, d_j] : j ∈ J} ∪ {[d_j − p_j, r_j + p_j] : j ∈ J and d_j − p_j < r_j + p_j}. The overall procedure is reiterated until an infeasibility is detected or no further adjustment is performed.
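Solving (P2) as described takes a single sort per shift direction; a sketch on a hypothetical instance (the eligibility filters mirror the capacity constraints of (P1)):

```python
def relaxed_total_work(jobs, m, t1, t2):
    """Greedy optimum of (P2): start with every job 'inside'
    (contributing p_j) and grant the m largest left-shift and the m
    largest right-shift reductions to eligible jobs."""
    n = len(jobs)
    wl = [min(t2 - t1, p, max(0, r + p - t1)) for r, p, d in jobs]
    wr = [min(t2 - t1, p, max(0, t2 - d + p)) for r, p, d in jobs]
    total = sum(p for _, p, _ in jobs)
    left = sorted((jobs[i][1] - wl[i] for i in range(n)
                   if jobs[i][0] <= t1), reverse=True)[:m]
    right = sorted((jobs[i][1] - wr[i] for i in range(n)
                    if jobs[i][2] >= t2), reverse=True)[:m]
    return total - sum(left) - sum(right)

# hypothetical jobs (r_j, p_j, d_j); the classical W over [3, 5] is 4
jobs = [(0, 4, 6), (1, 4, 7), (2, 4, 8)]
print(relaxed_total_work(jobs, 1, 3, 5))   # 6: a sharper estimate than W = 4
```

Since (P2) may apply both shifts to the same job, its value can fall below W on some instances, which is why the bound actually used is max(Ŵ, W).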

4. Preliminary computational results

The performance of the proposed procedures has been assessed through an empirical comparison with the feasibility and adjustment techniques which are based on the classical energetic reasoning. The test problems are generated in the following way: The number of jobs n is taken equal to 60, 70, 80, and 90. The number of machines m is taken equal to 3, 5, and 7. The processing times, heads and tails are drawn from the discrete uniform distribution on [1, n]. For each combination of n and m, 10 instances are generated. As it has been observed by Gharbi and Haouari, (2007), this test generation yields very challenging instances. Indeed, in our experiments, we found that one of the best existing branch-and-bound algorithms (Gharbi and Haouari, (2002)), denoted hereafter by B&B1, fails to solve all of the instances with 5 and 7 machines within a time limit of 300 seconds. All the procedures were coded in C and implemented in Visual C++ 6.0 on a Pentium IV 3.2 GHz Personal Computer with 1.5 GB RAM. We have embedded the proposed feasibility and adjustment procedures in the branch-and-bound algorithm (B&B2) proposed by Gharbi and Haouari, (2005) to obtain three variants, namely Classical_ER, Exact_ER, and Approximate_ER, where the embedded feasibility and adjustment procedures are based on the computation of W, W and W , respectively. We report in Table 1 a summary of the results that were obtained with the five branch-and-bound variants. For each variant, we report the average 146

PMS 2008, April 28-30, İstanbul, Turkey

CPU time (Time) as well as the average percentage of unsolved instances within the time limit of 300 sec. (Unsolved).

Table 1. Performance of the branch-and-bound algorithms

Procedure        Time    Unsolved
B&B1             280.11  93.34
B&B2             73.34   16.67
Classical_ER     193.46  44.17
Exact_ER         145.75  25.84
Approximate_ER   34.34   4.17

Table 1 provides strong evidence of the worth of embedding the proposed techniques. We observe that embedding the classical energetic reasoning in B&B2 makes its performance considerably worse. However, if the classical energetic reasoning is replaced by the enhanced approximate one, then we observe a dramatic decrease of the percentage of unsolved instances (90.56%) and of the CPU time (82.24%). Moreover, the obtained algorithm (Approximate_ER) largely outperforms the two best algorithms of the literature.

Table 2 depicts the detailed CPU times and percentages of unsolved instances (in parentheses) of B&B2, Classical_ER, and Approximate_ER, according to the variation of n and m. We observe that instances with m = 3 are easy to solve, since all of them have been solved by both B&B2 and Approximate_ER within an average CPU time of about 2 seconds. Yet, Classical_ER fails to solve 12.50% of these instances and requires on average a CPU time of 120.92 seconds. Also, it is worth noting that only 20% of the largest instances (n = 90 and m = 7) have been solved by Classical_ER within the time limit of 300 seconds, while Approximate_ER was able to solve all of them within an average CPU time of about 35 seconds.

Table 2. Detailed performance of B&B2, Classical_ER, and Approximate_ER

      B&B2                               Classical_ER                          Approximate_ER
n     m=3        m=5         m=7         m=3         m=5         m=7          m=3      m=5        m=7
60    0.27(0)    52.43(10)   108.67(20)  28.36(0)    128.80(20)  192.77(40)   0.26(0)  35.58(10)  52.87(0)
70    4.61(0)    56.03(10)   147.59(40)  81.47(0)    168.33(40)  273.49(70)   2.93(0)  28.38(0)   89.38(20)
80    1.39(0)    178.07(50)  140.45(30)  146.91(10)  282.40(80)  266.00(80)   0.96(0)  59.85(0)   84.32(20)
90    1.91(0)    60.09(10)   128.58(30)  226.93(40)  267.51(70)  258.57(80)   1.58(0)  20.93(0)   35.06(0)

References
Baptiste, Ph., Le Pape, C., Nuijten, W. (1999). Satisfiability tests and time bound adjustments for cumulative scheduling problems. Annals of Operations Research, 92, 305-333.
Erschler, J., Lopez, P., Thuriot, C. (1991). Raisonnement Temporel sous Contraintes de Ressources et Problèmes d'Ordonnancement. Revue d'Intelligence Artificielle, 5, 7-32.
Gharbi, A., Haouari, M. (2002). Minimizing Makespan on Parallel Machines Subject to Release Dates and Delivery Times. Journal of Scheduling, 5, 329-355.
Gharbi, A., Haouari, M. (2005). Optimal Parallel Machines Scheduling with Availability Constraints. Discrete Applied Mathematics, 148, 63-87.
Gharbi, A., Haouari, M. (2007). An Approximate Decomposition Algorithm for Scheduling on Parallel Machines with Heads and Tails. Computers and Operations Research, 34, 868-883.
Lahrichi, A. (1982). Ordonnancements : la notion de « parties obligatoires » et son application aux problèmes cumulatifs. RAIRO-RO, 16, 241-262.
Néron, E., Baptiste, Ph., Gupta, J.N.D. (2001). Solving hybrid flow shop problem using the energetic reasoning and global operations. Omega, 29, 501-511.
Tercinet, F., Néron, E., Lenté, C. (2006). Energetic reasoning and bin-packing problem for bounding a parallel machine scheduling problem. 4OR, 4, 297-318.


Discrepancy and Backjumping Heuristics for Flexible Job Shop Scheduling
A. Ben Hmida1,2, M. Haouari1, M.-J. Huguet2 and P. Lopez2
1 Unité ROI, Ecole Polytechnique de Tunisie, Tunisie, e-mail: [email protected]
2 LAAS-CNRS, Université de Toulouse, France, e-mails: {abenhmid, huguet, lopez}@laas.fr

Keywords: Scheduling, job shop, discrepancy, makespan.

1.

Problem statement

The Flexible Job Shop Problem (FJSP) is a generalization of the traditional Job Shop scheduling Problem (JSP), in which a set of n jobs must be processed on a set of m machines in the shortest amount of time. Every job Ji (i = 1, ..., n) consists of si operations Oi1, Oi2, ..., Oisi which must be processed in the given order. Every operation must be assigned to a unique machine r, selected among a given subset, which processes the operation during pir time units. Solving the flexible job shop problem consists in assigning a specific machine to each operation of each job, as well as sequencing all operations assigned to each machine, such that successive operations of a job do not overlap and each machine processes at most one operation at a time. Job preemption and job splitting are not allowed. The objective is to find a schedule that minimizes the maximum completion time, or makespan.

As a generalization of the job shop problem, the FJSP is known to be strongly NP-hard (Garey et al., 1976). Brucker and Schlie (1990) propose a polynomial algorithm for solving the FJSP with two jobs in which the processing times are identical whatever the machine chosen to perform an operation. Brandimarte (1993) was the first to use a decomposition approach for the FJSP: he solved the assignment problem using some dispatching rules and then focused on the resulting job shop subproblems, which are solved using a tabu search heuristic. Hurink et al. (1994) solve the problem with multi-purpose machines and propose two neighborhoods based on the concept of block. Chambers (1996) proposed a tabu search method to solve the problem. Mastrolilli and Gambardella (2000) proposed two neighborhood structures based on the displacement of an operation in the disjunctive graph. The authors showed that if a feasible solution has no neighbor according to the first neighborhood, then it is an optimal solution. The second neighborhood is an extension of the first one; it preserves the property of optimality in the absence of a neighbor, and the authors showed its connectivity. According to their experiments, in spite of the absence of connectivity of the first neighborhood, the latter gives better results than the second one because of its faster execution. Kacem et al. (2002) used a genetic algorithm (GA) to solve the FJSP, with two approaches that solve the assignment and the sequencing subproblems jointly: the first is an approach by localization, and the second is an evolutionary approach controlled by the assignment model, applying the GA to solve the FJSP. Xia and Wu (2005) proposed a hybrid of particle swarm optimization and simulated annealing as a local search algorithm.

In this abstract, we propose to improve a discrepancy-based method, called CDDS, which was adapted to solve the flexible job shop problem in a previous work (Ben Hmida et al., 2007b). We propose applying discrepancies to some relevant variables chosen by using two types of heuristics. The remainder of this abstract is organized as follows. Section 2 introduces the principles of CDDS. Section 3 presents its adaptation for the problem under study and proposes a discrepancy strategy to limit the tree search. Section 4 presents the performance of CDDS via an example and a series of tests. Finally, Section 5 gives some concluding remarks and directions for future work.


2.

Climbing Depth-bounded Discrepancy Search

CDDS is a tree search method based on the discrepancy principle, which expands the search by visiting the neighborhood of an initial solution. It combines the Climbing Discrepancy Search (CDS) method (Milano and Roli, 2002) and the Depth-bounded Discrepancy Search (DDS) method (Walsh, 1997). The CDDS method was initially developed to solve hybrid flow shop problems (Ben Hmida et al., 2007a) and has proved its efficiency in this domain. It has then been adapted to solve the flexible job shop problem and has provided promising results, especially on instances with a higher degree of flexibility (Ben Hmida et al., 2007b).

The CDDS method starts from an initial solution suggested by a given heuristic. Nodes with a discrepancy equal to 1 are explored first, then those with a discrepancy equal to 2, and so on. When a leaf with an improved value of the objective function is found, the reference solution is updated, the number of discrepancies is reset to 0, and the exploration of the neighborhood is restarted. To limit the tree search expansion, the CDDS strategy applies discrepancies only at the top of the tree, to correct early mistakes of the instantiation heuristic (for more details see Ben Hmida et al., 2007a). The method can be improved by using constraint propagation, e.g. the forward checking strategy (Haralick and Elliot, 1980), which removes inconsistent values from the domains of not-yet-instantiated variables involved in a constraint with the assigned variable; one can also use a more refined mechanism.

Although this method showed its efficiency for the resolution of hybrid flow shop problems (Ben Hmida et al., 2007a), it remains difficult to adapt to the FJSP (Ben Hmida et al., 2007b), especially because of the considerable number of parameters to define: initial solution, search heuristics, discrepancy strategy, and tree search expansion. To improve our CDDS method for the FJSP, and more precisely its discrepancy strategy, we introduce some specific heuristics for applying discrepancies.
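The discrepancy mechanism can be illustrated with a generic sketch (the tiny tree interface and names are illustrative, not the authors' implementation): following the heuristic's first-ranked choice is free, any other choice consumes one discrepancy, and discrepancies are only allowed above a depth bound. The climbing step (restarting around each improved reference solution) is omitted for brevity.

```python
def cdds_sketch(children, is_leaf, cost, root, max_disc, disc_depth):
    """Explore a search tree with at most `max_disc` deviations from the
    heuristic (rank-0) branch, allowed only at depth < disc_depth."""
    best = {"node": None, "cost": float("inf")}

    def explore(node, depth, disc_left):
        if is_leaf(node):
            if cost(node) < best["cost"]:
                best["node"], best["cost"] = node, cost(node)
            return
        for rank, child in enumerate(children(node)):
            if rank == 0:                               # heuristic choice: free
                explore(child, depth + 1, disc_left)
            elif disc_left > 0 and depth < disc_depth:  # spend one discrepancy
                explore(child, depth + 1, disc_left - 1)

    for k in range(max_disc + 1):   # waves with 0, 1, 2, ... discrepancies
        explore(root, 0, k)
    return best["node"], best["cost"]
```

On a binary tree of depth 3 where branch 0 is the heuristic choice, with one discrepancy allowed at the root only, the reachable leaves are exactly the heuristic path and the paths deviating once at the top.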

3.

Adaptation of CDDS for Flexible Job Shop Problem

3.1. Instantiation heuristics

It seems reasonable that the efficiency of discrepancy-based methods depends closely on the quality of the initial solution (Harvey, 1995). In our approach, the initial solution is determined by the use of several heuristics: (1) Selection of operations: we first give priority to the operation belonging to the job with the earliest start time (EST) and, in case of ties, we consider the operation belonging to the job with the longest duration (LDJ). (2) Assignment of a machine to the selected operation: the operation previously chosen is assigned to the machine on which it completes as soon as possible; this heuristic is called Earliest Completion Time (ECT). The heuristic is dynamic: the machine with the highest priority depends on the machines previously loaded. After both instantiations, we use a simple forward checking constraint propagation mechanism to update the finishing time of the selected operation as well as the starting time of its successor operation. We also maintain the availability date of the chosen resource.

3.2. Tree search expansion

To limit the tree search expansion, we introduce a lower bounding strategy. A lower bounding strategy is useful to speed up the search for the optimal solution and to improve the quality of the first solution found in the tree. The following trivial lower bound is computed after a variable instantiation:

LB = Cij + ∑_{l=j+1}^{si} min pil

(where Cij is the completion time of Oij, and the minimum is taken over the machines able to process the remaining operation Oil)
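The bound above can be sketched directly (the data layout is assumed for illustration): add to the completion time of the just-instantiated operation the fastest processing option of each remaining operation of the same job.

```python
def trivial_lower_bound(c_ij, remaining_options):
    """Lower bound on the job's completion after instantiating O_ij:
    c_ij plus, for each remaining operation of the job, its minimal
    processing time over the alternative machines."""
    return c_ij + sum(min(options) for options in remaining_options)
```

For example, with C_ij = 5 and two remaining operations processable in (3, 4) and (2, 6) time units, the bound is 5 + 3 + 2 = 10.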

3.3. Discrepancy strategy

In our problem, the initial leaf (with 0 discrepancies) is a solution, since we do not constrain the makespan value. We may use the discrepancy principle to expand the tree search for visiting the neighborhood of this initial solution. In a previous work, we developed three strategies for applying discrepancies: considering discrepancies only on operation selection variables; considering discrepancies only on resource allocation variables; mixing the two kinds of discrepancies.


The latter strategy gives the best solutions (Ben Hmida et al., 2007b), but all three strategies lead to huge computing times, since they visit the entire neighborhood and recalculate the starting times of operations and their assignments following the dynamic heuristic (ECT). To restrict the search, we propose to backjump to promising choice points (Huguet et al., 2004). We therefore apply discrepancies on some relevant variables chosen by using two types of heuristics: permutation of two adjacent critical operations carried out by the same resource (discrepancy on a selection variable); replacement of a critical operation on another resource (discrepancy on an allocation variable, restricted to critical operations). This leads us to recalculate only the starting times of the subset of operations that are actually concerned by the discrepancy.
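A minimal sketch of these restricted discrepancy points (the data structures are illustrative assumptions): only adjacent critical operations on the same resource are swap candidates, and only critical operations are reassignment candidates.

```python
def discrepancy_moves(sequence, assignment, critical_ops, alternatives):
    """Candidate discrepancy points: swap two adjacent critical operations
    on the same resource, or move a critical operation to another resource."""
    moves = []
    for a, b in zip(sequence, sequence[1:]):
        if a in critical_ops and b in critical_ops and assignment[a] == assignment[b]:
            moves.append(("swap", a, b))          # discrepancy on a selection variable
    for op in critical_ops:
        for r in alternatives[op]:
            if r != assignment[op]:
                moves.append(("move", op, r))     # discrepancy on an allocation variable
    return moves
```

For instance, with two critical operations in a row on machine M1, the only moves generated are their swap and the reassignment of a critical operation to an alternative machine.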

4.

Computational results

The CDDS procedure described in Section 3 has been tested on different problem instances from the literature (Brandimarte, 1993; Hurink et al., 1994). Brandimarte: the data set consists of 10 problems with the number of jobs ranging from 10 to 20, the number of machines ranging from 4 to 15, and the number of operations per job ranging from 5 to 15. Hurink: the data set consists of 129 test problems created from 43 classical JSP instances, divided into three subsets, Edata, Rdata and Vdata, depending on the average number of alternative machines for each operation; the number of jobs ranges from 6 to 30, and the number of machines ranges from 5 to 15.

Table 2. Comparison with the Tabu Search of Mastrolilli and Gambardella (M.G.) on 10 FJSP instances from Brandimarte

instances  n   m   LB   M.G.  CDDS  %dev  CPU(M.G.)  CPU(CDDS)
Mk01       10  6   36   40    40    0.0   0.01       0.1
Mk02       10  6   24   26    26    0.0   0.73       0.2
Mk03       15  8   204  204*  204*  0.0   0.01       0.2
Mk04       15  8   48   60    60    0.0   0.08       0.03
Mk05       15  4   168  173   182   5.2   0.96       0.2
Mk06       10  15  33   58    60    3.4   3.26       0.1
Mk07       20  5   133  144   139   -3.5  8.91       0.3
Mk08       20  10  523  523*  523*  0.0   0.02       0.8
Mk09       20  10  299  307   307   0.0   0.15       0.4
Mk10       20  15  165  198   212   7.1   7.69       0.3
Average                            1.2   2.18       0.26

Table 2 compares our CDDS algorithm with the tabu search (TS) algorithm proposed by Mastrolilli and Gambardella (2000) on 10 FJSP problem instances from Brandimarte (1993). The second and third columns report the number of jobs and the number of machines of each instance, respectively. The fourth column reports the best known lower bound (Mastrolilli and Gambardella, 2000). The fifth column reports the best results of TS. The sixth and seventh columns report our makespan and the relative deviation with respect to the TS algorithm. The remaining columns report the CPU times. Results show that the solutions are comparable in time and quality.

Table 3 shows computational results over the two instance classes. The first column reports the data set, the second column the number of instances in each class, and the third column the average number of alternative machines per operation. The next column reports the percentage deviation of the best solution obtained by our CDDS with respect to the best known lower bound. The table shows that our algorithm is stronger with a higher degree of flexibility (Hurink Vdata).

Table 3. Deviation percentage over the best known lower bound

Data set      num  alt   CDDS (%)
Brandimarte   10   2.59  17.02
Hurink Edata  43   1.15  15.81
Hurink Rdata  43   2     9.85
Hurink Vdata  43   4.31  1.11

5.

Conclusions and further works

In this abstract, a Climbing Depth-bounded Discrepancy Search (CDDS) method is presented to solve flexible job shop scheduling problems with the objective of minimizing the makespan. Our CDDS approach is based on ordering heuristics and involves a backjumping heuristic to apply two types of discrepancies. The test problems are benchmarks used in the literature. Our results do not improve on those obtained with a tabu search, but in terms of makespan the CDDS method provides promising results. Developments can still be made to improve the solution quality of the CDDS algorithm. Moreover, other variants of the CDDS algorithm may be envisaged, for instance by including efficient lower bounds for the FJSP.

References
Ben Hmida, A., Huguet, M.-J., Lopez, P. and Haouari, M. (2007a). Climbing depth-bounded discrepancy search for solving hybrid flow shop problems. European J. Industrial Engineering, 1(2):223–243.
Ben Hmida, A., Huguet, M.-J., Lopez, P. and Haouari, M. (2007b). Climbing depth-bounded discrepancy search for solving flexible job shop scheduling problems. Proceedings MISTA'07, Paris (France), pp. 217–224.
Brandimarte, P. (1993). Routing and scheduling in a flexible job shop by tabu search. Annals of Operations Research, 22:158–183.
Brucker, P., Schlie, R. (1990). Job-shop scheduling with multi-purpose machines. Computing, 45:369–375.
Chambers, J.B. (1996). Classical and flexible job shop scheduling by tabu search. PhD thesis, University of Texas at Austin, USA.
Dauzère-Pérès, S., Paulli, J. (1997). An integrated approach for modeling and solving the general multiprocessor job shop scheduling problem using tabu search. Annals of Operations Research, 70:281–306.
Garey, M.R., Johnson, D.S., Sethi, R. (1976). The complexity of flow shop and job shop scheduling. Mathematics of Operations Research, 1:117–129.
Haralick, R., Elliot, G. (1980). Increasing tree search efficiency for constraint satisfaction problems. Artificial Intelligence, 14:263–313.
Harvey, W.D. (1995). Nonsystematic backtracking search. PhD thesis, CIRL, University of Oregon, USA.
Huguet, M.-J., Lopez, P., Ben Hmida, A. (2004). A limited discrepancy search method for solving disjunctive scheduling problems with resource flexibility. Proceedings PMS'2004, Nancy, pp. 229–302.
Hurink, E., Jurisch, B., Thole, M. (1994). Tabu search for the job shop scheduling problem with multi-purpose machines. Operations Research Spektrum, 15:205–215.
Kacem, I., Hammadi, S., Borne, P. (2002). Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems. IEEE Transactions on Systems, Man, and Cybernetics-Part C, 32(1):1–13.
Mastrolilli, M., Gambardella, L.M. (2000). Effective neighborhood functions for the flexible job shop problem. Journal of Scheduling, 3:3–20.
Milano, M., Roli, A. (2002). On the relation between complete and incomplete search: an informal discussion. Proceedings CPAIOR'02, pp. 237–250.
Walsh, T. (1997). Depth-bounded discrepancy search. Proceedings IJCAI-97, pp. 1388–1395.
Xia, W., Wu, Z. (2005). An effective hybrid optimization approach for multi-objective flexible job-shop scheduling problem. Computers & Industrial Engineering, 48:409–425.


Polynomial Cases and PTAS for Just-In-Time Scheduling on Parallel Machines around a Common Due Date
Nguyen Huynh Tuong and Ameur Soukhal
Laboratoire d'Informatique, Université François Rabelais Tours, France
[email protected], [email protected]

Keywords: Just-In-Time Scheduling; Parallel machine scheduling; Weighted earliness/tardiness; PTAS; Polynomial cases.

1. Introduction

In this paper, we minimize the total weighted earliness–tardiness on m parallel machines; that is, each job should be completed as closely as possible to its due date. This approach stems from the "just-in-time" philosophy in management and production theory. Here, n independent jobs, processed without preemption, should be completed at the same date: a common due date. All jobs are ready at time zero. Two versions of the common due date are considered, restrictive and unrestrictive, depending on whether the constraint that no resource is available before time zero must be respected (restrictive version) or not (unrestrictive version). Special cases of the considered scheduling problem are shown to be NP-hard even if m = 1, see Yuan (1992), Hall and Posner (1991), Hoogeveen and Van de Velde (1991), Hall et al. (1991). Moreover, according to Sourd and Kedad-Sidhoum (2003), minimizing the total weighted earliness–tardiness on a single machine with equal-size jobs (pi = p, i = 1, . . . , n) is still open. In the case of unit processing times (pi = 1), the considered problem is shown to be polynomial (Mosheiov and Yovel (2006)). For the symmetric weighted earliness–tardiness single machine scheduling problem, an FPTAS is proposed by Kovalyov and Kubiak (1999). We show that minimizing the total weighted earliness–tardiness on m parallel machines with identical processing times is polynomial for both the restrictive and the unrestrictive due date. Then, in the case of an unrestrictive common due date, a polynomial time approximation scheme (PTAS) is proposed.

In the following, the notations are presented.
• J[i] or [i]: the job at the ith position for a given sequence;
• Si and Ci: the starting time and the completion time of Ji, i = 1, . . . , n;
• Ei: the earliness of Ji, i = 1, . . . , n, Ei = max(d − Ci, 0);
• Ti: the tardiness of Ji, i = 1, . . . , n, Ti = max(Ci − d, 0);
• αi and βi: the early and tardy penalty cost per unit of time of Ji;
• α[i] and β[i]: the early and tardy penalty costs per unit of time of the job J[i];
• wi(1): an earliness vector, given by (α1 α2 . . . αn);
• wi(2): a tardiness vector, given by (β1 β2 . . . βn);
• W[i]1: the sum of weights of the jobs scheduled before J[i], given by ∑_{l=1}^{i−1} α[l] if C[i] ≤ d, and 0 otherwise;
• W[i]2: the sum of weights of the ith job and of all jobs scheduled after J[i], given by ∑_{l=i}^{n} β[l] if S[i] ≥ d, and 0 otherwise;
• W[i]: the total sum of weights, given by W[i]1 + W[i]2;
• Z = f(S) = ∑i (αi Ei + βi Ti): the objective function.

In the case of a restrictive due date, the scheduling problem is noted Pm|di = d, restrictive| ∑(αi Ei + βi Ti), and it is noted Pm|di = d, unrestrictive| ∑(αi Ei + βi Ti) in the case of an unrestrictive due date. In the following, we present some well-known properties.
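With these notations, the objective value of a schedule follows directly from the completion times; a minimal sketch of the evaluation:

```python
def earliness_tardiness_cost(C, d, alpha, beta):
    """Z = sum_i (alpha_i * E_i + beta_i * T_i), with E_i = max(d - C_i, 0)
    and T_i = max(C_i - d, 0)."""
    return sum(a * max(d - c, 0) + b * max(c - d, 0)
               for c, a, b in zip(C, alpha, beta))
```

For example, with d = 5, completion times (3, 5, 8), α = (2, 1, 1) and β = (1, 1, 3), the cost is 2·2 + 0 + 3·3 = 13.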


1. There is no intermediate idle time between two adjacent jobs.
2. The optimal schedule is V-shaped around the common due date on each machine: the jobs completed before or on the common due date d are scheduled in non-increasing order of the ratio pi/αi, and the jobs starting on or after d are scheduled in non-decreasing order of the ratio pi/βi.
3. For each machine, with a restrictive due date, there exists an optimal schedule in which, if the starting time of the first job is not zero, then there exists one job whose completion time is d.
4. With an unrestrictive due date, there exists an optimal solution where exactly m jobs are scheduled on time.
5. In the case of equal processing times, there exists an optimal solution where at most u jobs are scheduled on each machine, with u = ⌈n/m⌉.

2. Pm|pi = p, di = d, unrestrictive| ∑(αi Ei + βi Ti )

We show that the total weighted earliness–tardiness problem on m machines with an unrestrictive common due date is equivalent to an assignment problem. To illustrate our approach, we consider m = 2. Let job Ji1 (resp. Ji2) be scheduled on time on the first (resp. second) machine. On each machine, concerning the set of early jobs: the job processed just before Ji1 or Ji2 is at the first position, the job processed just before the previous one is at the second position, etc. Similarly, concerning the set of tardy jobs: the job processed just after Ji1 or Ji2 is at the first position, the job processed just after the previous one is at the second position, etc. Then, the cost of assigning an early job Jl to position k1 is k1 p αl, which corresponds to the contribution of this job to the objective function. Similarly, the cost of assigning a tardy job Jl to position k2 is k2 p βl. For each machine, the assignment costs of jobs are thus given by the matrix Q((2u+1)×n) with elements (qk,l)1≤k≤2u+1; 1≤l≤n, where the lth column corresponds to job Jl; the first row corresponds to the job scheduled on time; the (2i)th row corresponds to the ith early position counted from (d − p); and the (2i + 1)th row corresponds to the ith tardy position counted from d. Consequently, the elements qk,l are defined as follows (see Figure 1 for more detail):

q1,l = 0          (l = 1..n)
q2k,l = kpαl      (l = 1..n; k = 1..u − 1)
q2k+1,l = kpβl    (l = 1..n; k = 1..u − 1)

Q = [ 0          0          ...  0
      pα1        pα2        ...  pαn
      pβ1        pβ2        ...  pβn
      2pα1       2pα2       ...  2pαn
      2pβ1       2pβ2       ...  2pβn
      ...        ...        ...  ...
      (u−1)pα1   (u−1)pα2   ...  (u−1)pαn
      (u−1)pβ1   (u−1)pβ2   ...  (u−1)pβn ]

Figure 1: Matrix of job contribution assignment costs for each machine

Hence, for P2|pi = p, di = d, unrestrictive| ∑(αi Ei + βi Ti ), the assignment costs of jobs are given by the matrix M(2(2u+1)×n) (M = [Q Q]), in which the element in the kth row and lth column is noted mk,l.


A set S = (mk1,1; mk2,2; . . . ; mkn,n) of n elements from M is called a feasible solution, with Z = f(S) = ∑_{r=1}^{n} mkr,r, if and only if no two of its elements are from the same row (obviously, no two elements of S can be from the same column). Therefore, to determine an optimal solution S∗ we should find a feasible set of n elements from M that minimizes Z. S∗ can be obtained by Edmonds and Karp's algorithm (1972) in O(n·(m·u)²) = O(n³).

Theorem 1. Edmonds and Karp's algorithm applied on the transposed matrix M^T(n×m(2u+1)) = [Q^T Q^T . . . Q^T] gives an optimal solution for the scheduling problem Pm|pi = p, di = d, unrestrictive| ∑(αi Ei + βi Ti ) with m ≥ 1 in O(n³).
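The reduction can be checked on a toy instance (a sketch: brute-force enumeration stands in for the Edmonds and Karp algorithm, and only the rows actually defined by the q_{k,l} formula are built):

```python
from itertools import permutations

def build_Q(p, alpha, beta, u):
    """Per-machine cost matrix: one on-time row, then early/tardy rows
    for positions k = 1..u-1 (q_{2k,l} = k p alpha_l, q_{2k+1,l} = k p beta_l)."""
    n = len(alpha)
    rows = [[0] * n]
    for k in range(1, u):
        rows.append([k * p * a for a in alpha])   # early position k
        rows.append([k * p * b for b in beta])    # tardy position k
    return rows

def optimal_cost(p, alpha, beta, m):
    n = len(alpha)
    u = -(-n // m)                                # ceil(n/m) jobs per machine
    M = build_Q(p, alpha, beta, u) * m            # one block of rows per machine
    # brute force: assign a distinct row (position) to every job (column)
    return min(sum(M[row][job] for job, row in enumerate(choice))
               for choice in permutations(range(len(M)), n))
```

With m = 2, p = 2, α = (3, 1, 5), β = (4, 2, 10), two jobs finish exactly at d (cost 0 each) and the cheapest remaining slot is the first early position of the job with α = 1, so the optimal cost is 2·1 = 2.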

3. Pm|pi = p, di = d, restrictive| ∑(αi Ei + βi Ti )

Following the same approach as presented in Section 2, we define the matrix M as follows. Let k = ⌊(d − 1)/p⌋ (that is, k = ⌊d/p⌋ − 1 if d = (k + 1)p, and k = ⌊d/p⌋ otherwise), and let δ1 = d − kp, δ2 = p − δ1.

[Figure: timeline of one machine around the common due date d, showing the early jobs J[3]early, J[2]early, J[1]early completing before d, a job Ji straddling d with offsets δ1 and δ2 and completion time Ci, and the tardy jobs J[1]tardy, J[2]tardy, J[3]tardy, J[4]tardy after d]

If Si < d and Ci ≥ d, then the job Ji is called a splitting job. It is easy to show that there exists an optimal solution with exactly m splitting jobs. According to Property 3, the completion time of each splitting job Ji is Ci = d or Ci = (k + 1)p > d. Hence, for each machine, we define two matrices of assignment costs of jobs. The first one, Q1, corresponds to the case Ci = d; the second one, Q2, corresponds to the case Ci > d. Since in the case Ci = d there is no early job scheduled at the lth (k ≤ l ≤ u) position, Q1 is obtained by removing the corresponding rows l wi(1) from Q. The matrix Q2, ((1+k+u)×n), is defined as follows:

Q2^T = [δ2 wi(2)T  δ1 wi(1)T  (δ2+p)wi(2)T  (δ1+p)wi(1)T  (δ2+2p)wi(2)T  . . .  (δ1+(k−1)p)wi(1)T  (δ2+kp)wi(2)T  (δ2+(k+1)p)wi(2)T  (δ2+(k+2)p)wi(2)T  . . .  (δ2+(u−1)p)wi(2)T]

that is,

Q2 = [ δ2β1             δ2β2             ...  δ2βn
       δ1α1             δ1α2             ...  δ1αn
       (δ2+p)β1         (δ2+p)β2         ...  (δ2+p)βn
       (δ1+p)α1         (δ1+p)α2         ...  (δ1+p)αn
       (δ2+2p)β1        (δ2+2p)β2        ...  (δ2+2p)βn
       ...              ...              ...  ...
       (δ1+(k−1)p)α1    (δ1+(k−1)p)α2    ...  (δ1+(k−1)p)αn
       (δ2+kp)β1        (δ2+kp)β2        ...  (δ2+kp)βn
       (δ2+(k+1)p)β1    (δ2+(k+1)p)β2    ...  (δ2+(k+1)p)βn
       ...              ...              ...  ...
       (δ2+(u−1)p)β1    (δ2+(u−1)p)β2    ...  (δ2+(u−1)p)βn ]

Figure 2: Matrix of job contribution assignment costs for each machine


Therefore, the first x (0 ≤ x ≤ m) blocks of the matrix M correspond to the x matrices Q1, and the remaining blocks correspond to the (m − x) matrices Q2.

Theorem 2. Edmonds and Karp's algorithm applied on the matrix M gives an optimal solution for Pm|pi = p, di = d, restrictive| ∑(αi Ei + βi Ti ) in O(mn³).

4. PTAS for Pm|di = d, unrestrictive| ∑(αi Ei + βi Ti )

Afrati et al. (2000) define an Oε(n log n)-time approximation scheme (PTAS) for the average weighted completion time problem on unrelated machines, noted Rm|| ∑ wiCi. By following the approach of Afrati et al., we show that the considered scheduling problem admits a PTAS. Let ε be a positive value with 0 < ε < 1. According to ε, the set of jobs is partitioned into two subsets: decided jobs and undecided jobs. The decided early jobs (resp. decided tardy jobs) are the jobs with αi < εβi (resp. βi < εαi). This set is computed in O(n). Based on the idea of Afrati et al. (2000), the undecided job set can also be partitioned into an early job set and a tardy job set, in 2^O(log²(1/ε)/ε²) time. Let us then consider only two subsets: the early job set, formed with all early jobs (decided and undecided), and the tardy job set, formed with all tardy jobs (decided and undecided). We show that scheduling the tardy job set corresponds to determining an optimal solution for Pm|| ∑ wiCi, which admits a PTAS in Oε(n log n) time (Afrati et al., 2000). Similarly, we show that scheduling the jobs from the early job set corresponds to Pm|| ∑ wiSi. With minor modifications of the PTAS given by Afrati et al. (2000), we show that Pm|| ∑ wiSi also admits a PTAS in Oε(n log n) time. Consequently, we have the following theorem:

Theorem 3. There exists a PTAS with complexity Oε(n log n) for Pm|di = d, unrestrictive| ∑(αi Ei + βi Ti ).
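The first partitioning step of the scheme can be sketched directly (illustrative code, not the authors' implementation): a job is decided early when its earliness weight is much smaller than its tardiness weight, and symmetrically.

```python
def partition_jobs(alpha, beta, eps):
    """Split jobs into decided-early, decided-tardy and undecided sets,
    following the alpha_i < eps * beta_i (resp. beta_i < eps * alpha_i) rule."""
    early, tardy, undecided = [], [], []
    for i, (a, b) in enumerate(zip(alpha, beta)):
        if a < eps * b:
            early.append(i)       # cheap to finish early
        elif b < eps * a:
            tardy.append(i)       # cheap to finish tardy
        else:
            undecided.append(i)   # handled by the enumeration step
    return early, tardy, undecided
```

For instance, with α = (1, 10, 5), β = (10, 1, 5) and ε = 0.5, job 0 is decided early, job 1 decided tardy, and job 2 remains undecided.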

Acknowledgements The authors would like to thank the research group "Operations Research" of CNRS for its support of the project "ORDO-COO-SC", of which this study is a part.

References
[1] Afrati, F., Bampis, A., Kenyon, C. and Milis, I. (2000). A PTAS for the average weighted completion time problem on unrelated machines. Journal of Scheduling, 3, 323–332.
[2] Edmonds, J. and Karp, R.M. (1972). Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM, 19, 248–264.
[3] Hall, N.G., Kubiak, W. and Sethi, S.P. (1991). Earliness–tardiness scheduling problem, II: Deviation of completion times about a restrictive common due date. Operations Research, 39, 847–856.
[4] Hall, N.G. and Posner, M.E. (1991). Earliness–tardiness scheduling problem, I: Weighted deviation of completion times about a common due date. Operations Research, 39, 836–846.
[5] Hoogeveen, J.A. and Van de Velde, S.L. (1991). Scheduling around a small common due date. European Journal of Operational Research, 55, 237–242.
[6] Kovalyov, M.Y. and Kubiak, W. (1999). A fully polynomial approximation scheme for the weighted earliness–tardiness problem. Operations Research, 47, 757–761.
[7] Mosheiov, G. and Yovel, U. (2006). Minimizing weighted earliness–tardiness and due-date cost with unit processing-time jobs. European Journal of Operational Research, 172, 528–544.
[8] Sourd, F. and Kedad-Sidhoum, S. (2003). The one-machine problem with earliness and tardiness penalties. Journal of Scheduling, 6, 533–549.
[9] Yuan, J. (1992). The NP-hardness of the single machine common due date weighted tardiness problem. Systems Science and Mathematical Sciences, 5, 328–333.


New Generation A-Team for Solving the Resource Constrained Project Scheduling
Piotr Jędrzejowicz1, Ewa Ratajczak-Ropel1

1 Department of Information Systems, Gdynia Maritime University, Poland
e-mail: {pj, ewra}@am.gdynia.pl

Keywords: Project scheduling, resource constraints, heuristic, agent system

1.

Introduction

The paper proposes an agent-based approach to solving instances of the single-mode and multi-mode resource-constrained project scheduling problem, known respectively as the RCPSP and the MRCPSP. The RCPSP and MRCPSP have attracted a lot of attention, and many exact and heuristic algorithms have been proposed for solving them (Hartmann and Kolisch (2006)). On the other hand, multi-agent systems are an important and intensively expanding area of research and development, and a number of multi-agent approaches have been proposed to solve different types of optimization problems. One of them is the concept of an asynchronous team (A-Team), originally introduced by Talukdar et al. (1996). The idea of the A-Team was used to develop JABAT, a JADE-based environment for solving a variety of computationally hard optimization problems (Barbucha et al. (2006)). JABAT is a middleware supporting the construction of Internet-accessible dedicated A-Team architectures based on the population-based approach.

In this paper, a JABAT-based A-Team architecture dedicated to solving RCPSP and MRCPSP instances is proposed and experimentally validated. To solve instances of the RCPSP/MRCPSP, so-called optimization agents are used. Optimization agents represent heuristic algorithms such as tabu search or path relinking. The proposed system is accessible via the Internet and offers the possibility of using open computational resources over the Web. Moreover, the mobile agents used in JABAT allow for the decentralization of computations and the use of multiple hardware platforms in parallel, resulting in more effective use of the available resources and a reduction of computation time.

2. Problem formulation

A single-mode resource-constrained project scheduling problem consists of a set of n activities, where each activity has to be processed without interruption to complete the project. The dummy activities 1 and n represent the beginning and the end of the project. The duration of activity j, j = 1, ..., n, is denoted by d_j, where d_1 = d_n = 0. There are r renewable resource types. The availability of each resource type k in each time period is r_k units, k = 1, ..., r. Each activity j requires r_jk units of resource k during each period of its duration, where r_1k = r_nk = 0, k = 1, ..., r. All parameters are non-negative integers. There are precedence relations of the finish-start type with a zero parameter value (i.e. FS = 0) defined between the activities; in other words, activity i precedes activity j if j cannot start until i has been completed. The structure of the project can be represented by an activity-on-node network G = (SV, SA), where SV is the set of activities and SA is the set of precedence relationships. SS_j (SP_j) is the set of successors (predecessors) of activity j, j = 1, ..., n. It is further assumed that 1 ∈ SP_j, j = 2, ..., n, and n ∈ SS_j, j = 1, ..., n − 1. The objective is to find a schedule S of activity starting times [s_1, ..., s_n], where s_1 = 0 and the resource constraints are satisfied, such that the schedule duration T(S) = s_n is minimized. The problem formulated above is a generalization of the classical job shop scheduling problem and belongs to the class of NP-hard optimization problems (Blazewicz et al. (1983)). Therefore, to obtain within a reasonable time limit solutions to the larger problem instances that are common in real-life applications, approximate algorithms are needed.

In the case of the MRCPSP each activity j, j = 1, ..., n, may be executed in one out of M_j modes. The activities may not be preempted, and a mode once selected may not change, i.e., a job j once started in mode m has to be completed in mode m without interruption. Performing job j in mode m takes d_jm periods and is supported by a set R of renewable and a set N of non-renewable resources. Considering the time horizon, that is, an upper bound T on the project's makespan, one has the available per-period amount of each renewable (doubly constrained) resource as well as a certain overall capacity of each non-renewable (doubly constrained) resource. The objective is to find a makespan-minimal schedule that meets the constraints imposed by the precedence relations and the limited resource availability. Obviously, the multi-mode problem cannot be computationally easier than the RCPSP.
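The formulation above can be made concrete with a small sketch: a serial schedule-generation scheme that decodes a precedence-feasible activity list into start times s_1, ..., s_n under a single renewable resource. The six-activity instance and all names below are hypothetical illustrations, not data from the paper.

```python
# Sketch of a serial schedule-generation scheme for the single-mode RCPSP:
# activities are scheduled in list order at the earliest precedence- and
# resource-feasible start time. Instance data below is hypothetical.

def serial_sgs(durations, demands, capacity, predecessors, activity_list):
    """Return (start_times, makespan) for a precedence-feasible activity list."""
    horizon = sum(durations.values())
    free = [capacity] * (horizon + 1)   # remaining resource units per period
    start = {}
    for j in activity_list:
        # precedence: j may not start before all its predecessors finish
        est = max((start[i] + durations[i] for i in predecessors[j]), default=0)
        # resource feasibility: earliest t >= est with enough free capacity
        t = est
        while any(free[u] < demands[j] for u in range(t, t + durations[j])):
            t += 1
        for u in range(t, t + durations[j]):
            free[u] -= demands[j]
        start[j] = t
    makespan = max(start[j] + durations[j] for j in activity_list)
    return start, makespan

# Hypothetical instance: dummy activities 1 and 6, resource capacity r_1 = 4.
durations    = {1: 0, 2: 3, 3: 2, 4: 2, 5: 1, 6: 0}
demands      = {1: 0, 2: 2, 3: 3, 4: 2, 5: 1, 6: 0}
predecessors = {1: [], 2: [1], 3: [1], 4: [2], 5: [3], 6: [4, 5]}

start, T = serial_sgs(durations, demands, 4, predecessors, [1, 2, 3, 4, 5, 6])
print(start, T)   # T(S) = s_6, the start time of the dummy end activity
```

Note that the makespan equals the start time of the dummy end activity, matching the objective T(S) = s_n in the formulation.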

3. JABAT architecture

JABAT is a middleware that allows designing and implementing A-Team architectures for solving various combinatorial optimization problems. The problem-solving paradigm on which the proposed system is based can best be described as a population-based approach. JABAT produces solutions to combinatorial optimization problems using a set of optimization agents, each representing a so-called improvement algorithm. Each improvement algorithm, when supplied with a potential solution to the problem at hand, tries to improve this solution. To escape getting trapped in a local optimum, an initial population of solutions (individuals) is generated or constructed. Individuals forming the initial population are, at the following computation stages, improved by independently acting agents, thus increasing the chances of reaching the global optimum. The main functionality of the proposed environment is organizing and conducting the process of searching for the best solution. It involves a sequence of the following steps: generating an initial population of solutions and storing them in the common memory; activating optimization agents, which apply solution improvement algorithms to solutions drawn from the common memory and store them back after the attempted improvement, using some user-defined replacement strategy; and continuing the reading-improving-replacing cycle until a stopping criterion is met. To perform this cycle, two main classes of agents are used. The first class, called OptiAgent, is a base class for all optimization agents. The second class, called SolutionManager, is used to create agents or classes of agents responsible for maintaining and updating the individuals in the common memory. All agents act in parallel. Each OptiAgent represents a single improvement algorithm (for example simulated annealing, tabu search, a genetic algorithm, local search heuristics, etc.).
Other important classes in JABAT are Task, which represents an instance or a set of instances of the problem, and Solution, which represents a solution. The TaskManager class is used to initialize the agents and maintain the system, and the PlatformManager class to maintain the different platforms. Objects of the above classes also act as agents. JABAT makes it possible to use open computational resources over the Web. The use of mobile agents in JABAT can bring decentralization of computations, resulting in a more effective use of the available resources and a reduction of the computation time. From the user's point of view, JABAT is a web application which provides the opportunity to submit and solve optimization problems. After having registered to the system, the user gets the opportunity to solve any particular instance of the problem. Using the WWW interface, the user can upload files with instance data and choose a set of parameters defining the manner in which the search for a solution is carried out by the system. This may include, for example, a selection of the optimization agents to be used and a definition of the strategy used to maintain the population of solutions. The user provides these parameters using a special form available on the system Web page. After the computation process has stopped, a report with the results becomes accessible to the user on the Web page. The JABAT environment can be used by any registered user to solve different optimization problems, provided suitable agents are implemented in the system. So far, the implemented and available teams of agents can be used to solve instances of the following problems: the resource-constrained project scheduling problem (the A-Team architecture presented in this paper), the travelling salesman problem, the clustering problem and the vehicle routing problem.
The JABAT environment has been designed and implemented using JADE (Java Agent Development Framework), a software framework supporting the implementation of multi-agent systems. More detailed information about the JABAT architecture and its implementations can be found in (Barbucha et al. (2006)).
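The reading-improving-replacing cycle described above can be sketched in a few lines. The toy objective and the two "agents" below are hypothetical stand-ins for JABAT's OptiAgent heuristics, and the sketch runs sequentially, whereas a real A-Team executes its agents asynchronously and in parallel.

```python
# Minimal sketch of an A-Team cycle: a shared population ("common memory"),
# independent improvement agents, and a replacement strategy. The fitness
# function and both agents are hypothetical stand-ins, not JABAT code.
import random

random.seed(7)

def fitness(x):                      # toy objective to be minimized
    return (x - 42) ** 2

def hill_agent(x):                   # improvement move in a small neighbourhood
    return min((x - 1, x, x + 1), key=fitness)

def jump_agent(x):                   # diversifying random move
    return random.randint(0, 100)

agents = [hill_agent, jump_agent]
memory = [random.randint(0, 100) for _ in range(10)]   # initial population
init_f = min(map(fitness, memory))

for _ in range(500):                 # stopping criterion: iteration budget
    i = random.randrange(len(memory))            # draw an individual
    agent = random.choice(agents)                # a ready optimization agent
    candidate = agent(memory[i])                 # attempted improvement
    if fitness(candidate) <= fitness(memory[i]): # replacement strategy
        memory[i] = candidate

best = min(memory, key=fitness)
print(best, fitness(best))
```

The replacement strategy here (accept if not worse) is one simple choice; JABAT leaves this, like the population size and stopping criterion, to user-defined configuration.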


4. JABAT for solving RCPSP and MRCPSP

The JABAT environment has been used to develop an architecture dedicated to obtaining solutions of RCPSP and MRCPSP problem instances. Developing such an architecture in the JABAT environment requires designing and coding a set of optimization agents specialized in solving the problem at hand. In addition, the user should define and implement the strategy of maintenance and evolution of the set of solutions stored in the common memory. To solve the RCPSP and MRCPSP problems, the proposed architecture includes all the required classes describing the problem, the optimization procedures and the strategy of searching for the best solution. All of these have been implemented as JABAT objects. The set of classes forms a package called RCPSP, which contains the following classes: RCPSP_Task, inheriting from the Task class; RCPSP_Solution, inheriting from the Solution class; and the Activity, Mode and Resource classes. RCPSP_Task identifies instances, whose attributes include a list of activities and a list of the available renewable and non-renewable resources. Resource identifies both renewable and non-renewable resources, storing values representing the numbers of resource units. Mode identifies activity modes, whose attributes include the number, the duration and a list of the required resources of both types. Finally, Activity identifies activities, whose attributes include the number, a list of modes and lists of predecessors and successors. Additionally, the RCPSP_TaskOntology and RCPSP_SolutionOntology classes have been defined by overriding TaskOntology and SolutionOntology, respectively. An ontology is a class enabling the definition of the vocabulary and semantics for the content of the messages exchanged between agents.
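The data model carried by the RCPSP package classes can be sketched as follows. The sketch is language-shifted (Python rather than JABAT's Java), and every field name is an illustrative assumption, not the actual JABAT API.

```python
# Hypothetical sketch of the RCPSP package data model described above:
# Resource, Mode, Activity and RCPSP_Task. Field names are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Resource:                # renewable or non-renewable resource
    units: int
    renewable: bool

@dataclass
class Mode:                    # one execution mode of an activity
    number: int
    duration: int
    demands: List[int]         # units required of each resource

@dataclass
class Activity:
    number: int
    modes: List[Mode]
    predecessors: List[int] = field(default_factory=list)
    successors: List[int] = field(default_factory=list)

@dataclass
class RCPSP_Task:              # one problem instance
    activities: List[Activity]
    resources: List[Resource]

# Single-mode (RCPSP) activities have exactly one Mode each.
act = Activity(2, [Mode(1, 3, [2])], predecessors=[1], successors=[4])
task = RCPSP_Task([act], [Resource(units=4, renewable=True)])
print(task.activities[0].modes[0].duration)
```

In the multi-mode case the modes list simply holds the M_j alternative Mode objects of activity j.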
To solve the RCPSP and MRCPSP, five optimization agents representing different heuristic algorithms are used: a local search agent (LSA), a tabu search agent (TSA), a crossover agent (CA), a precedence tree agent (PTA) and a path relinking agent (PRA). LSA is a simple local search algorithm which finds a local optimum by moving each activity to all possible places in the solution; in the case of the MRCPSP the algorithm additionally checks all possible modes of the activity. TSA is a version of the tabu search algorithm (Jedrzejowicz and Ratajczak (2003)) in which the neighborhood of the initial solution is searched by performing moves that are not tabu. In the considered TSA a move is understood as an exchange of two activities; in the case of the MRCPSP the move includes a mode exchange too. The selected moves are remembered on a tabu list, and the best solution found is remembered. CA, proposed in (Jedrzejowicz and Ratajczak (2003)), is a heuristic based on the one-point crossover operator: two initial solutions are crossed until a better solution is found or all crossing points have been checked. In the case of the MRCPSP, the algorithm additionally checks all possible modes of the activity. PTA, proposed in (Jedrzejowicz and Ratajczak (2003)), is a heuristic based on the precedence tree algorithm: it finds an optimal solution by enumeration for a partition of the schedule consisting of some of the activities, and then finds solutions of the successive partitions shifted by a fixed step; the best solution found is remembered. PRA is based on a path-relinking algorithm: for a pair of solutions from the population a path between them is constructed, and the best of the feasible solutions on the path is selected. All optimization agents co-operate through the common memory during the search for the best solution to the problem instance; the common memory stores the population of individuals (solutions).
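The LSA neighbourhood described above can be sketched as follows: each activity is removed from the activity list and reinserted at every precedence-feasible position, accepting the first improving move until no move improves. The cost function below is a hypothetical stand-in for the schedule-generation and makespan evaluation used by the actual agent.

```python
# Sketch of the LSA move neighbourhood: reinsert each activity at every
# precedence-feasible list position, keeping the first improving move.
# The cost function and instance are hypothetical illustrations.

def precedence_feasible(seq, predecessors):
    """A list is feasible if every activity appears after its predecessors."""
    seen = set()
    for j in seq:
        if any(p not in seen for p in predecessors[j]):
            return False
        seen.add(j)
    return True

def lsa(seq, predecessors, cost):
    improved = True
    while improved:                          # repeat until a local optimum
        improved = False
        for pos, j in enumerate(list(seq)):
            rest = seq[:pos] + seq[pos + 1:]
            for k in range(len(seq)):        # all reinsertion positions
                cand = rest[:k] + [j] + rest[k:]
                if (precedence_feasible(cand, predecessors)
                        and cost(cand) < cost(seq)):
                    seq, improved = cand, True
                    break
            if improved:
                break
    return seq

# Hypothetical example: cost rewards placing heavily weighted activities early.
predecessors = {1: [], 2: [1], 3: [1], 4: [2, 3]}
weight = {1: 0, 2: 5, 3: 1, 4: 0}
cost = lambda s: sum(i * weight[j] for i, j in enumerate(s))
print(lsa([1, 3, 2, 4], predecessors, cost))
```

For the MRCPSP the same scan would additionally try every mode of the moved activity at each position.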
When, and how often, an agent of each kind receives a solution or a set of solutions from the common memory with a view to improving its quality is determined by a user-defined strategy. In the strategy proposed for the discussed RCPSP and MRCPSP problems, individuals are randomly chosen from the population and sent to those optimization agents which are ready to start searching for better solutions. After the computation, the improved individuals replace the original ones stored in the common memory. The set of configuration variables defining a strategy includes the kinds and numbers of optimization agents used, the selection procedure, the population size and the computation time allowed.

5. Computational experiment results

To validate the proposed approach, a computational experiment has been carried out using benchmark instances of the single-mode and multi-mode RCPSP from PSPLIB (Kolisch and Sprecher (1996), PSPLIB). The experiment involved computations with different kinds and numbers of optimization agents. Table 1 presents the results for the single-mode RCPSP. The results have been obtained using the following variant of the A-Team architecture: one LSA, one TSA, one CA and one PRA agent. The population of solutions consisted of 50 elements, and the initial population was generated randomly. The computation for one problem instance was interrupted after 50 solutions not better than the best one stored in the common memory had been forwarded from the optimisation agents back to the common memory. The computation results were evaluated in terms of the mean relative error calculated as the deviation from the optimal (n = 30) or best known solution (Mean RE best), the mean relative error calculated as the deviation from the critical path lower bound (Mean RE CPLB), and the mean computation time (Mean CT) on 2 computers with 2.0 GHz processors.

Table 1. Experiment results, single-mode RCPSP

Number of activities   Mean RE best   Mean RE CPLB   Mean CT
30                     0.39 %         14.32 %        14.17 s
60                     0.64 %         10.27 %        38.24 s
90                     1.89 %         11.63 %        42.17 s
120                    3.00 %         21.59 %        99.88 s

The experiment results have been compared with the results obtained using the different kinds of methods collected in Hartmann and Kolisch (2006). The proposed approach proved promising in both respects: the quality of solutions and the computation time.
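The relative errors reported in Table 1 are, in principle, the average percentage deviation of the obtained makespans from a reference value (the best known solution for Mean RE best, the critical path lower bound for Mean RE CPLB). The makespans below are hypothetical illustrations, not the paper's data.

```python
# Sketch of the evaluation measure behind Table 1: mean relative error of
# obtained makespans against reference values. Data below is hypothetical.

def mean_relative_error(obtained, reference):
    """Mean of 100 * (T - ref) / ref over all instances, in percent."""
    errors = [100.0 * (t - r) / r for t, r in zip(obtained, reference)]
    return sum(errors) / len(errors)

obtained   = [43, 58, 71, 90]   # makespans produced by the A-Team (hypothetical)
best_known = [43, 57, 70, 88]   # optimal / best known makespans (hypothetical)
print(round(mean_relative_error(obtained, best_known), 2))
```

Replacing the best-known values with critical path lower bounds yields the Mean RE CPLB column in the same way.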

6. Conclusions

The experiment results show that the proposed JABAT implementation is an effective tool for solving both single- and multi-mode resource-constrained project scheduling problems. It is based on a new generation A-Team implementation allowing for the parallel use of different hardware platforms and offering WWW access; both the instance data and the environment configuration parameters may be set via the Internet. Future research will concentrate on finding more effective A-Team configurations and evolution strategies, and on further developing the JABAT environment. In the case of the JABAT application to the RCPSP, the research will concentrate on developing the optimization agents and the search-for-solution strategies, as well as on a more effective use of remote platforms.

References

Barbucha, D., I. Czarnowski, P. Jędrzejowicz, E. Ratajczak and I. Wierzbowska (2006). JADE-Based A-Team as a Tool for Implementing Population-Based Algorithms, Proc. VI Int. Conf. on Intelligent Systems Design and Applications, vol. 3, IEEE Computer Society, Los Alamitos, 144-149.

Blazewicz, J., J. Lenstra and A. Rinnooy Kan (1983). Scheduling subject to resource constraints: Classification and complexity. Discrete Applied Mathematics 5, 11-24.

Jedrzejowicz, P. and E. Ratajczak (2003). Population Learning Algorithm for Resource-Constrained Project Scheduling, in Pearson, D.W., Steele, N.C., Albrecht, R.F. (Eds.), Artificial Neural Nets and Genetic Algorithms, Springer Computer Science, Wien, 223-228.

Kolisch, R. and A. Sprecher (1996). PSPLIB - A project scheduling problem library. European Journal of Operational Research 96, 205-216.

Hartmann, S. and R. Kolisch (2006). Experimental Investigation of Heuristics for Resource-Constrained Project Scheduling: An Update. European Journal of Operational Research 174, 23-37.

Talukdar, S., L. Baerentzen, A. Gove and P. de Souza (1996). Asynchronous Teams: Co-operation Schemes for Autonomous, Computer-Based Agents, Technical Report EDRC 18-59-96, Carnegie Mellon University, Pittsburgh.

JADE, http://jade.tilab.com/
PSPLIB, http://129.187.106.231/psplib


[The following contribution did not survive text extraction; only equation fragments are recoverable. The underlying problem allocates N units of a resource among n activities i = 1, 2, ..., n, each with a cost function f_i:

minimize  Σ_{i=1}^{n} f_i(x_i)    (1)

subject to  Σ_{i=1}^{n} x_i = N    (2)

over allocations x_i, i = 1, 2, ..., n. A further fragment defines, for an amount h ≥ 0 split into s parts, a vector a and associated probabilities p:

a = (a_1, a_2, a_3, ..., a_s),  p = (p_1, p_2, p_3, ..., p_s),  Σ_{i=1}^{s} a_i = h.]