IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT, VOL. 53, NO. 1, FEBRUARY 2006
A Simulation-Based Optimization Framework for Product Development Cycle Time Reduction

Hisham M. E. Abdelsalam and Han P. Bao
Abstract—By the mid-1990s, the importance of the early introduction of new products to both market share and profitability had become fully understood, and reducing product time-to-market became essential to remaining competitive. Since product development projects (PDPs) are based on information content and their accompanying information-dominated methods, an efficient methodology for reducing PDP time first requires an understanding of the information flow among the different project processes. One tool that helps achieve this understanding is the design structure matrix (DSM). Because much of the time involved in a complex PDP is attributable to its expensive iterative nature, resequencing project activities for efficient execution becomes the next requirement. This paper presents a simulation-based optimization framework that determines the sequence of activity execution within a PDP that minimizes total project iterative time given stochastic activity durations. A mathematical model representing the problem is built as an MS Excel module, and Visual Basic for Applications (VBA) is used to interface this module with a metaheuristic optimization algorithm, simulated annealing, and the commercial risk analysis software Crystal Ball to solve the model.

Index Terms—Manufacturing, Monte Carlo simulation, optimization, project management, sequencing.
I. INTRODUCTION
Manufacturing firms in the United States have almost universally recognized the need to reconsider traditional methods of product development and introduction [16], [29], [38]–[40]. In order for a product to be competitive, it needs to be introduced quickly without compromising its performance [12]. Thus, reduction of product development cycle time has become an essential goal [16]. The significance of time-to-market is further demonstrated by [4], [21], and [43]. The difficulties in designing complex engineering products arise not only from their technical complexity but also from the managerial complexity necessary to coordinate the interactions between the different engineering disciplines, which imposes additional challenges on the design process [48]. Moreover, a great deal of uncertainty is usually involved in the estimation of activity durations. To clarify the problem situation, the simple rich picture diagram shown in Fig. 1 is used to explore the connections and interdependencies
among the different components of concurrent product development, to present its complexity, and to help define both the wider system of interest (WSOI) and the narrower system of interest (NSOI). The seven phases of product development shown in the rich picture diagram represent the WSOI. The NSOI, which is the focus of the current research, consists of the first four phases together or, in general, the design process.

The design process itself is typically a complex system. The main approach to handling such a system is to build a model that imitates the real system, or the desired system in our case. Typically, this includes: 1) defining the system of interest; 2) defining the system boundary; 3) decomposing the system into subsystems and further into smaller components; and 4) defining the relationships among these components. Following these steps, the system will be decomposed into possibly hundreds of activities (components) and thousands of variable interchanges among them. The sequence of performing these activities strongly affects the time (and, hence, the cost) needed to realize the whole project. So, a tool is needed to arrange (resequence) project activities for efficient execution. Moreover, product development projects (PDPs) are characterized by their iterative nature; some activities may need to be redone until a satisfactory solution is reached. In the planning and scheduling phase, the duration of each activity iteration cannot be precisely estimated. Besides the setup time of the first iteration, learning factors and correlation with other activities affect the activity duration in the following iterations. Thus, the complexity of the presented problem arises from three major aspects: 1) the large number of activities; 2) the complex interaction profile; and 3) uncertainty in activity durations.

This paper presents an Excel-based module that determines the optimal sequence of activity execution within a PDP to minimize total project iterative time given stochastic activity times. The design structure matrix (DSM) is the cornerstone of the presented module. The DSM improves understanding of the project by providing a compact visualization and a clear picture of the information flow patterns among its activities. A mathematical model for the problem is built in Excel and then optimized via a framework that interfaces a metaheuristic optimization algorithm, simulated annealing (SA), with the commercial risk analysis software Crystal Ball [15].

Following the introduction, the rest of the paper is organized as follows. Section II gives background on the DSM and presents research work related to the scope of this research. The section further introduces the fundamentals of the SA algorithm and briefly reviews some related work on interfacing
Fig. 1. Rich picture diagram.
optimization methods with simulation. The methodology used and the associated software implementation are presented in Section III. The performance of SA is presented in Section IV, followed by an illustrative case study in Section V. Finally, discussions and concluding remarks are given in Section VI.

II. THEORETICAL BACKGROUND

A. Design Structure Matrix (DSM)

A product development project (PDP) fundamentally differs from a construction (or a manufacturing) project in two major aspects: 1) while the latter is task-based, a PDP is based on information content and its accompanying information-dominated design and manufacturing methods and 2) a typical PDP is characterized by its highly coupled and interdependent activities, which must converge iteratively to an acceptable design solution [7], [8]. The most common causes of such repetition are activities beginning work without the necessary information, the arrival of new information, changes to that information causing rework, or reevaluated assumptions in previous activities [5], [6], [9], [14]. Since they do not tolerate feedback relationships, the well-known Program Evaluation and Review Technique (PERT) and critical path method (CPM) succeed only if activities are sequential and/or parallel, and fail significantly where iterative relationships exist.

The idea of representing the system architectural components and relationships in the form of a matrix is not new, but the term "Design Structure Matrix" (DSM) was coined by Steward [44], [45] to denote a generic matrix-based model for project information flow analysis. Since then, the DSM has become a popular representation and analysis tool for system modeling, especially for purposes of decomposition and integration [7]. The basic DSM is a simple $n$-square binary matrix with $m$ nonempty elements, where $n$ is the number of system elements and $m$ is the number of couplings among the different system elements. A cell, thus, can assume one of only two values (0 or 1), represented as an "x" mark or an empty cell.
Fig. 2. Design structure matrix (DSM).
An example of a DSM is shown in Fig. 2. Activity names are placed on the left-hand side of the matrix as row headings and across the top as column headings in the same order (the order of their execution); a main DSM assumption is that activities are undertaken in the order listed from top to bottom. An off-diagonal mark (x) represents a coupling (an information flow, or a dependency) between two activities. If activity $i$ receives information from activity $j$, then the matrix element in row $i$, column $j$ contains an off-diagonal mark (x); otherwise, the cell is empty. Marks below the diagonal (subdiagonal marks) are indicative of feed-forward couplings (i.e., from upstream activities to
downstream activities), while those above the diagonal (superdiagonal) represent feedback couplings (i.e., from downstream activities to upstream activities). As they imply iterations, the latter type of coupling should be eliminated if possible or otherwise reduced to the greatest extent possible. If certain feedback couplings cannot be eliminated, the activities are grouped into iterative subcycles. For example, in Fig. 2, activities (1, 2, and 3) and activities (6, 7, 8, 9, and 10) are grouped into two iterative subcycles. A primary goal in basic DSM analysis is to minimize the number of feedbacks and their scope by restructuring or rearchitecting the process [7]; in other words, to resequence the execution of the activities so as to bring the DSM into as lower-triangular a form as possible. To achieve this goal, Steward [44], [45] proposed a two-phase approach: partitioning and tearing. Partitioning is based on the system structure and involves resequencing the DSM activities in order to: 1) eliminate as many feedback couplings as possible; 2) pull the remaining feedbacks as close to the diagonal as possible; and, finally, 3) group the activities into blocks such that each block represents an iterative subcycle. In the second phase, tearing, each block resulting from phase one is considered individually. Tearing is based on the semantics of the system and aims to order the activities within each block so as to achieve the same objectives. In addition to Steward's partitioning heuristic, several methods are found in the literature: the Path Searching method [17], the Reachability Matrix method [47], the Triangularization algorithm [24], and the Powers of the Adjacency Matrix method [26].

Fig. 3. Generic SA algorithm.
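To make the matrix representation concrete, the short sketch below (in Python, for illustration only; the activities and couplings are hypothetical rather than those of Fig. 2) encodes a binary DSM as a nested list and identifies feedback couplings as the marks above the diagonal.

```python
# Binary DSM: dsm[i][j] = 1 means activity i (row) receives information
# from activity j (column). Activities are listed in execution order.
activities = ["A", "B", "C", "D", "E"]          # hypothetical names
dsm = [
    [0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0],   # B depends on A (feed-forward) and on D (feedback)
    [1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 1, 0],
]

# Marks below the diagonal (i > j) are feed-forward couplings; marks above
# the diagonal (i < j) are feedback couplings and imply iteration.
feedbacks = [(activities[i], activities[j])
             for i in range(len(activities))
             for j in range(len(activities))
             if i < j and dsm[i][j]]
print("feedback couplings:", feedbacks)   # -> [('B', 'D')]
```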
B. Related Work

Although the theory of the DSM has been applied in many areas, most DSM-related research has focused on deploying it in managing engineering design projects [3], [7]. Many DSM-related analysis models have been cited in the literature. One of the most famous DSM-based tools is DeMaid/GA. The Design Manager's Aid for Intelligent Decomposition (DeMaid) is a knowledge-based tool that was released to the public in 1989 with the objective of aiding the design manager in understanding the interactions among the different components of a large complex system [32], [33], [35]. The original version of DeMaid included functions for minimizing the feedback couplings; sequencing the design activities; grouping activities into iterative subcycles; decomposing these subcycles into a hierarchical structure; and displaying the sequence of activities in a DSM format [34]. Since its first release, DeMaid has witnessed many enhancements. One major enhancement came in 1996 when a genetic algorithm (GA) was incorporated to optimize the sequence of processes within each iterative subcycle based on time, cost, and iteration requirements [37]. The tool, hence, became DeMaid/GA. One noticeable shortcoming, though, is that the GA-based optimization takes place after partitioning, i.e., DeMaid/GA optimizes the order of activities within each circuit rather than optimizing the order of activities with respect to the system as a whole. Moreover, DeMaid/GA assumes deterministic activity time and cost.

A method for structuring problem activities into an optimal order and decomposing them into subproblems was described by Altus et al. [1]. The method incorporates a GA into a computer program called AGENDA (A GENetic algorithm for Decomposition of Analyses) with the objective of reducing the extent of feedbacks or, in other words, minimizing the "total length of feedbacks" of the system. AGENDA considered optimizing the sequence of all the activities within the DSM, but the optimization was based merely on the total length of feedbacks and considered neither activity time nor iteration requirements.

The Problem Solving Matrix software (PSM32) was developed in the 1990s by Problematics, LLC [30]. The software applies Steward's methodology and has three main functions: 1) partitioning; 2) tearing; and 3) impact/change tracing. The Analytical Design Planning Technique (ADePT) was developed over the period from 1994 to 2000 to offer an approach to planning construction design projects. The ADePT methodology, presented in [2] and [3], consists of three consecutive yet integrated stages. The second stage aims at identifying iterations within the design project and arranging its activities using a partitioning algorithm. Cho [13] introduced an integrated project management framework that consists of three modules. The first module is a DSM-based analysis of the project in which activities are sequenced to have the minimum number of feedbacks from a structural view. The model does not optimize the sequence of activities; rather, it deploys Steward's methodology to reach a better sequence with a reduced number of feedback couplings.

Excluding DeMaid and AGENDA, existing DSM-based models do not: 1) obtain an optimum activity sequence; 2) tolerate uncertainty in activity time and cost; or 3) consider hard (logical) constraints. They implicitly assumed that activity order could be
Fig. 4. SA versus local search. (a) Local search. (b) SA.
changed freely. As will be discussed later in Section III-B, this assumption might seriously affect the feasibility of the solution reached.
C. Simulated Annealing (SA)

The SA algorithm is a metaheuristic algorithm that provides near-optimal solutions to hard combinatorial optimization problems. A metaheuristic is "an iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search spaces using learning strategies to structure information in order to find efficiently near-optimal solutions [28]." SA has its origin in statistical mechanics. As its name implies, SA exploits an analogy between the annealing process of solids and the solving of combinatorial optimization problems. Interest began with the work of [22] and, independently, [11]. Since then, SA has been applied to a large number of operations research problems. A good survey of SA applications can be found in [23].

SA is a stochastic optimization technique. As the generic SA algorithm shown in Fig. 3 depicts, it constructs a sequence of solution configurations (a walk or a path) through a set of permissible solutions called the state space. Based on the current solution and a certain acceptance criterion, a transition mechanism determines which solution to step to next. The algorithm steps from the current configuration to another configuration from its neighborhood according to the Metropolis criterion: if the objective function (being minimized) decreases, the configuration is accepted unconditionally; otherwise, it is accepted, but only with some probability. The basic structure for an SA implementation consists of the following basic elements: 1) a representation of possible solution configurations (search space); 2) a generation mechanism (a means of selecting a new solution from the neighborhood of the current solution); 3) a means of evaluating the problem objective function (energy); and 4) a cooling (annealing) schedule. For further discussion and details of the algorithm, refer to [46].
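The generic algorithm of Fig. 3 is not reproduced here, so the following minimal Python sketch illustrates the loop it describes: a random neighbor is generated, improving moves are always accepted, and worsening moves are accepted with probability exp(-Δ/T) under a geometric cooling schedule. The parameter values and the neighbor and objective functions are placeholders, not the paper's settings.

```python
import math
import random

def simulated_annealing(initial, objective, neighbor,
                        t0=100.0, t_final=0.1, alpha=0.95, moves_per_temp=50):
    """Generic SA loop: minimize `objective` starting from `initial`."""
    current, current_cost = initial, objective(initial)
    best, best_cost = current, current_cost
    t = t0
    while t > t_final:
        for _ in range(moves_per_temp):
            candidate = neighbor(current)
            delta = objective(candidate) - current_cost
            # Metropolis criterion: always accept improvements,
            # accept worse solutions with probability exp(-delta / t).
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= alpha  # exponential (geometric) cooling
    return best, best_cost
```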
Classical neighborhood (or local search) methods form a general class of approximate heuristics based on the concept of exploring the neighborhood of the current solution. Neighboring solutions are generated via a specified generation mechanism, and the algorithm accepts only those neighborhood moves that lead to incremental improvement of the objective function, as shown in Fig. 4(a). Thus, the inherent problems with this class of algorithms are that: 1) they can easily be trapped in local optima and 2) they depend entirely on the initial solution. By allowing perturbations to move to a worse solution according to a controlled mechanism, as shown in Fig. 4(b), SA is able to avoid local optima and potentially finds a more promising downhill path. Although finding the global optimum with SA is not fully guaranteed, SA provides a near-optimal solution. Furthermore, these accepted uphill moves provide solutions independent of the initial solution.

D. Interfacing Optimization Methods With Simulation

The theoretical basis of Monte Carlo (MC) simulation has long been known, but it traces its modern origin and name to the work of [27], who coined the term during the Manhattan Project of World War II. The application of MC requires only that the system be described by probability density functions (pdfs). The process then proceeds by random sampling from these pdfs using random number generators to generate artificial history data. The random numbers generated are further used in calculations to duplicate the expected system outputs. Although the method is relatively simple in concept, the widespread use of MC applications is linked directly to the breakthrough in the computational capabilities of computers. For further discussion, Law and Kelton [25] is suggested.

There has been considerable recent research devoted to finding methods to optimize a simulation [25]. These methods generally involve guiding a sequence of simulation runs by supplying the simulation model with a set of system configurations, with the results from simulating earlier configurations being used to suggest a new promising direction in the search space.
Reviewing the bulk of the literature related to simulation optimization, it can be concluded that most of the research emphasized classical methods and stochastic approximation methods. Recognizing the limitations of these methods, researchers started to investigate different techniques, such as metaheuristics, which can achieve preferable results in less time. A recognized software package that employs metaheuristics in simulation optimization is OptQuest (developed by OptTek Systems, Inc.). This software uses tabu search and scatter search methods linked to the commercial risk analysis tool Crystal Ball. For more details, refer to [19]. Bulgak and Sanders [10] integrated an extension of the SA algorithm with a discrete event simulation of a manufacturing system to find optimal buffer sizes for asynchronous assembly systems that involve automated inspection as well as automated assembly. Gelfand and Mitter [18] presented a theoretical analysis of the SA algorithm when the objective function includes noise. SA was further applied to a stochastic optimization problem in [31]. Haddock and Mittenthal [20] investigated the feasibility of using an SA algorithm in conjunction with a simulation model to optimize a nonconvex, nonconcave objective function of the input parameters. SA was applied to a flow shop scheduling problem with stochastic processing times in [42].

III. METHODOLOGY AND PRODUCT

A. Iterations Consideration

The number of iterations required for a certain feedback loop to converge differs from one feedback to another depending on: 1) how good the original upstream activity estimates were; 2) the sensitivity of downstream activities to these estimates; and 3) the required quality of the final output. Thus, while the output quality can be improved by performing more iterations, extra time will be added to the total project duration as a result. In order to incorporate this tradeoff, the presented model allows defining a strength for each coupling. These strengths are then quantified in the form of an "iteration factor" representing the number of iterations required for convergence. Table I shows the seven available coupling strength levels and their associated default iteration factor values.

TABLE I ITERATION FACTOR VALUES

Although these values are supplied to the presented model directly, they can be determined through the sensitivity analysis detailed in [36]. In this approach, coupling strengths are defined in terms of the local normalized sensitivities. The chain rule is then used to obtain the total output behavioral response derivatives in terms of the local sensitivities of each subsystem. Once the local sensitivities are known, the total derivatives of the output response quantities with respect to the design variables can be determined from the solution of the matrix set of global sensitivity equations.

As stated earlier, the basic DSM is a binary matrix. Such a matrix features a single attribute: the "existence" or "absence" of a coupling interface between different elements. The DSM was later modified to hold multiple attributes; such a DSM is referred to as a "numerical DSM." Numerical DSMs allowed the development of more complex DSM analysis algorithms. In the current research, a numerical DSM is used in which coupling marks are replaced with numbers (iteration factors). The model assumes that an activity is fully repeated in each iteration. Learning can be accounted for, roughly, by the appropriate choice of coupling strength, which, in turn, defines the number of iterations for an activity. For example, when only a small percentage of the activity is to be reworked in case of iteration, a very weak strength can be assigned to this coupling. On the other hand, when not much learning or other improvement is expected from one iteration to the next, a strong coupling strength can be chosen. Taking into consideration that the user defines the iteration factor itself, a wider range of values can be used.

B. Hard (Logical) Constraints

The presented model supports two sets of hard constraints.

1) The model assumes that both the start and finish activities are known. Hence, their orders will be fixed to 1 and $n$, respectively, where $n$ is the number of activities in the project.

2) In some cases, it is infeasible to switch the direction of a coupling from a feedback to a feed-forward, even though doing so would reduce total project time. For example, consider the case shown schematically in Fig. 5. In this case, there is only one input to the activity "Initial Data," and this input comes from the activity "Revised Data." As a result, an unconstrained optimization will try to assign orders to these activities in a way that guarantees that the coupling (Revised Data, Initial Data) is a feed-forward one, because this, of course, will improve the objective function(s). On the other hand, it can easily be noticed that this solution is practically infeasible in the sense that it contradicts logic: "Revised Data" cannot be performed before "Initial Data." Thus, to avoid reaching such an infeasible solution, a second set of hard logical constraints is developed and tailored according to the nature of the problem.

Fig. 5. Hard constraint.
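As an illustration of how such hard constraints might be checked, the sketch below (a hypothetical Python helper, not part of the paper's Excel/VBA module) verifies that a proposed sequence keeps the start and finish activities fixed and respects a user-supplied list of logical precedence pairs such as the (Initial Data, Revised Data) case of Fig. 5.

```python
def is_feasible(order, start, finish, precedence):
    """Check the hard constraints described above on a proposed sequence.

    order      -- dict mapping activity name -> execution position (1..n)
    start/finish -- activities whose positions must be 1 and n
    precedence -- iterable of (a, b) pairs meaning `a` must precede `b`
                  (e.g., ("Initial Data", "Revised Data"))
    """
    n = len(order)
    if order[start] != 1 or order[finish] != n:
        return False
    return all(order[a] < order[b] for a, b in precedence)

# Hypothetical usage mirroring the Fig. 5 situation.
seq = {"Initial Data": 1, "Analysis": 2, "Revised Data": 3, "Release": 4}
print(is_feasible(seq, "Initial Data", "Release",
                  [("Initial Data", "Revised Data")]))   # -> True
```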
Fig. 7. Time computations example.
Fig. 6. Time computations heuristic.
C. Calculating Time

Each activity has an estimated completion time (activity duration), and each feedback coupling has an estimated number of iterations required for its convergence. The current research concerns minimizing the total project time due to feedback loops, or the project iterative time (PIT). To determine the PIT, the times of all activities contained within a feedback loop are summed and multiplied by the loop's iteration factor; the iterative times of all feedback loops are then summed. To determine the iterative time for each loop and, hence, for the whole project, the heuristic presented in Fig. 6 is applied. To illustrate the heuristic, consider the simple hypothetical example shown in Fig. 7. The DSM shown has five activities (with associated activity times shown) and two feedback couplings (with corresponding iteration factors shown). The steps for computing the PIT go as follows.

1) Define the set of activities in the DSM and their associated times.

2) Determine the set of feedback couplings in the current DSM sequence and their associated iteration factors.

3) Consider the first coupling in the set: a) the times associated with the activities within this loop are summed to determine the coupling time; b) the coupling iterative time is determined by multiplying this quantity by the iteration factor corresponding to the current feedback coupling.

4) Step 3) is repeated for the second coupling.

5) Finally, the PIT is equal to the sum of the iterative times of all feedback couplings in the DSM determined previously.

Some published work, for example, [9] and [13], has implemented richer simulation models. These models aimed at determining project completion time and, hence, at studying the effect of project architecture on completion time. The work presented in this paper moves a step further, namely, determining the optimal structure. A different objective function was used, though: total iterative time. To reduce calculation time, a relatively simple algorithm was used to calculate the total time due to iterations. The algorithm assumes that all interim activities are affected by any coupling. This might seem an oversimplification, especially with such a small example (Fig. 7). But when one considers real-life problems, like the one shown in Fig. 8, in which a large number of feedbacks exist and, thus, once rework is triggered by an activity, almost all activities will be affected, the assumption can be accepted. The assumption, thus, reduces the amount of calculation needed and does not, to a large extent, violate logic. Moreover, the algorithm implicitly reduces the number of crossovers by penalizing activities contained in more than one feedback. In Fig. 7, the duration of activity 3 will be added eight times to the objective function (five times because it is included in one coupling and three times because it is included in the other). The optimization process will then swap the order of activities 3 and 4 to reduce the objective function from 540 to 300.
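A minimal Python sketch of the PIT calculation described by the heuristic of Fig. 6, under the stated assumption that every activity positioned between the two ends of a feedback coupling is repeated in each iteration; the durations and iteration factors below are hypothetical and are not the values of Fig. 7.

```python
def project_iterative_time(order, durations, feedbacks):
    """Sum, over all feedback couplings, the loop time multiplied by its iteration factor.

    order     -- list of activity names in execution order
    durations -- dict: activity name -> (deterministic) duration
    feedbacks -- dict: (source, destination) coupling -> iteration factor,
                 where the destination appears earlier in `order` than the source
    """
    position = {a: k for k, a in enumerate(order)}
    pit = 0.0
    for (src, dst), factor in feedbacks.items():
        lo, hi = position[dst], position[src]
        if lo >= hi:
            continue  # not a feedback in this sequence
        loop_time = sum(durations[a] for a in order[lo:hi + 1])
        pit += factor * loop_time
    return pit

# Hypothetical five-activity example with two feedback couplings.
order = ["1", "2", "3", "4", "5"]
durations = {"1": 10, "2": 20, "3": 30, "4": 20, "5": 10}
feedbacks = {("4", "2"): 5, ("5", "3"): 3}   # (source, destination): iteration factor
print(project_iterative_time(order, durations, feedbacks))
```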
Fig. 8. Complex DSM ([41]).
D. Modified SA Algorithm

The SA algorithm used in this research is a modified version of the naïve SA algorithm presented in Section II-C. The modified algorithm follows the same steps as the generic SA but adds a second stage that keeps a record of the best solution found. The objective of this modification is to ensure that the final solution provided is the best one achieved. Thus, in cases where SA moves toward a locally optimal solution, the algorithm can be redirected to avoid being trapped in it. Moreover, since SA was developed to handle deterministic combinatorial optimization problems (the acceptance or rejection of a new solution is based on one point estimate of the objective function), the algorithm had to be modified to tolerate stochastic objective functions. The current research presents a modified version of the SA algorithm that extends its application to stochastic problems, where the value of the objective function is represented by a probability distribution. The basic idea is that the acceptance or rejection of a proposed solution is based on the comparison of some statistical measures of its objective function distribution with those of the current solution. This is done through the use of a "utility function" (UF). The proposed UF: 1) is an additive UF; 2) consists of four attributes; and 3) assumes attribute independence. Four statistical measures were chosen to be the attributes of the UF: the mean (or expected value), the variance, the range, and the maximum value. The first measure, the mean, is a central tendency measure. The concept is familiar to all decision makers. Moreover, it is based on all observations; thus, the mean is greatly affected by any extreme value (a useful characteristic here because the methodology tends to be risk averse). The second and third measures are variation measures. The variance considers how the observations distribute or cluster and measures the average scatter around the mean, and the range measures the total spread in the data. Finally, the fourth measure, the maximum value, helps identify how large the objective function value might become. A proposed solution is accepted if its UF is larger than the UF of the current solution; otherwise, the Metropolis criterion is applied. Thus, by minimizing the effect of uncertainty (variation) in activity durations on the objective function, a robust solution, rather than just an optimal one, will be determined. Having these components in place, the key element of the SA comparison rules is the UF, which combines the performance attributes and allows direct comparisons of solutions
$$\mathrm{UF} = w_{1} I_{\text{mean}} + w_{2} I_{\text{var}} + w_{3} I_{\text{range}} + w_{4} I_{\max}$$

where $w_{1}, \ldots, w_{4}$ are weights defined by the user and $I_{\text{mean}}$, $I_{\text{var}}$, $I_{\text{range}}$, and $I_{\max}$ are the indices of the mean, variance, range, and maximum values, respectively. These index values are determined according to predefined rules; for example, Table II shows the different values one of these indices can hold.

TABLE II INDEX VALUES

The weights introduced in the UF serve as importance factors. Their values are adjusted based on the decision maker's attitude toward risk.
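Because the index rules of Table II are not reproduced in the extracted text, the Python sketch below uses a hypothetical rule (the relative improvement of each measure over the current solution) simply to illustrate how an additive, weighted UF can compare two objective-function distributions; all numerical values are placeholders.

```python
def utility(measures, reference, weights):
    """Additive utility of a candidate's objective-function distribution.

    measures / reference -- dicts with keys 'mean', 'variance', 'range', 'max'
                            for the candidate and the current solution
    weights              -- user-defined importance factors (same keys)

    Hypothetical index rule: each index is the relative reduction of the measure
    with respect to the reference (positive means the candidate is better).
    """
    utility_value = 0.0
    for key in ("mean", "variance", "range", "max"):
        index = (reference[key] - measures[key]) / reference[key]
        utility_value += weights[key] * index
    return utility_value

# Risk-averse weighting: variation measures weighted relatively high (hypothetical values).
weights = {"mean": 0.4, "variance": 0.3, "range": 0.2, "max": 0.1}
current  = {"mean": 20000, "variance": 2.0e6, "range": 9000, "max": 26000}
proposed = {"mean": 16000, "variance": 1.2e6, "range": 6000, "max": 21000}
print(utility(proposed, current, weights) > 0)   # accept outright if utility improves
```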
E. The Product

This research presents a simulation-based optimization framework, shown in Fig. 9, that minimizes PDP time through the resequencing of its activities. Incorporating a Monte Carlo simulation module permits the framework to tackle projects with stochastic activity durations (in which durations follow a probability distribution rather than a one-point estimate). The presented framework integrates several tools: 1) the DSM; 2) mathematical modeling; 3) SA; and 4) Monte Carlo simulation.

Fig. 9. The presented framework.

To implement the framework, an Excel-based module was developed. Given the activities, activity durations, couplings, and iteration requirements, the tool performs the following: 1) models the given data in a DSM format; 2) prepares the Excel sheets required for calculations based on the mathematical
model of the problem; 3) carries out the optimization process of the model; and 4) presents the final optimized activity sequence in a DSM format. The module was built using Visual Basic for Applications (VBA).

F. The Process

Starting with some initial solution configuration, the following process is repeated. At each iteration, the objective function evaluation module (the mathematical model) receives a new solution configuration from the optimizer. Crystal Ball carries out the simulation of the model with this configuration. The expected value of the objective function is obtained directly from Crystal Ball. This value is fed back to the optimizer, which returns a new solution configuration. Then, a new iteration starts. The process proceeds until the simulated annealing stopping criterion is reached. More details of the process are given below.

1) Initialization: In the proposed optimization algorithm, it is required to determine the optimal sequence of project activities $(x_1, x_2, \ldots, x_n)$, where $x_i$ is the order of execution of activity $i$, $i = 1, \ldots, n$, and $n$ is the number of project activities. One constraint is imposed: no duplication is allowed; that is, two activities cannot assume the same order. The optimization process starts with one initial solution configuration and then proceeds with the other steps. In most of the research cited in the literature, SA proved to be a robust optimization algorithm, largely independent of the initial solution configuration. Hence, in the presented research, an initial solution configuration is generated by randomly assigning orders to the different activities, as shown in Fig. 10. As noted in Section III-B, both the start and end activities are assumed known in advance and, hence, their orders remain fixed during the optimization process.

2) Objective Function Evaluation: One of the main characteristics of the SA algorithm is that it does not require derivative
Fig. 10. Solution representation.
Fig. 11. Generating a neighboring solution.
information. It merely needs to be supplied with an objective function value for each trial solution it generates. Thus, the evaluation of the problem objective functions is essentially a "black box" operation as far as the algorithm is concerned. On the other hand, it is very important that the objective function evaluations be performed efficiently for the sake of overall computational efficiency, especially in the many applications where these functions are complex and can be the most computationally intensive activity. Once a solution configuration is generated, it is fed to the "objective function evaluation" module of the framework, where a Monte Carlo simulation of the system is performed and a set of statistical measures of the objective function are then determined. To carry out the simulation part, Crystal Ball is used. Incorporating such commercial software, although it might slow the
process down, it eases the building of the framework and provides high compatibility with the other components (MS Excel and VBA). Furthermore, it provides features and functions that would not be easy to build again, for example, the variety of pdfs, reports, and sensitivity analysis.

Fig. 12. First example. (a) Initial random sequence. (b) Optimal sequence.

Fig. 13. Second example. (a) Initial random sequence. (b) Optimal sequence (minimum time). (c) Optimal sequence (minimum number of feedback couplings).
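The Monte Carlo evaluation step can be pictured with the following plain-Python stand-in for the Crystal Ball run: stochastic activity durations are sampled, the candidate sequence is evaluated on each trial, and the four statistical measures used by the utility function are returned. The distributions and the toy objective are hypothetical.

```python
import random
import statistics

def evaluate(duration_dists, objective, trials=800):
    """Monte Carlo estimate of the objective-function distribution for one
    solution configuration (a stand-in for the Crystal Ball simulation).

    duration_dists -- dict: activity -> (mean, std) of a normal duration (hypothetical)
    objective      -- callable mapping sampled durations to the PIT of the
                      candidate sequence (e.g., the sketch in Section III-C)
    """
    samples = []
    for _ in range(trials):
        durations = {a: max(0.0, random.gauss(mu, sigma))
                     for a, (mu, sigma) in duration_dists.items()}
        samples.append(objective(durations))
    return {
        "mean": statistics.fmean(samples),
        "variance": statistics.variance(samples),
        "range": max(samples) - min(samples),
        "max": max(samples),
    }

# Hypothetical usage: three activities, with a toy objective standing in for the PIT.
dists = {"1": (20, 2), "2": (15, 3), "3": (30, 5)}
print(evaluate(dists, lambda d: 5 * sum(d.values())))
```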
3) Generation Mechanism (Neighborhood Moves): The execution of the SA algorithm involves performing a sequence of iterations. At each iteration, the current solution is randomly perturbed to create a new solution in its neighborhood. Thus, the way in which new solutions are generated plays a very important role in the SA algorithm.
Fig. 14. Process flow chart.
Fig. 15. Initial DSM.
The solution generating technique should: 1) introduce small random changes in such a way that the generated solution is feasible and 2) allow all possible solutions in the neighborhood to be examined. The presented research applies a "pairwise exchange" perturbation strategy, in which two activities are randomly selected and "swapped," as shown in Fig. 11.
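A short sketch of the pairwise exchange move: two interior activities are chosen at random and swapped, leaving the start and finish activities in their fixed positions (per the hard constraints of Section III-B); the sequence used in the usage line is hypothetical.

```python
import random

def pairwise_exchange(order):
    """Return a neighboring sequence by swapping two randomly chosen activities.

    The first and last positions are excluded so that the start and finish
    activities keep their fixed orders.
    """
    new_order = list(order)
    i, j = random.sample(range(1, len(order) - 1), 2)  # interior positions only
    new_order[i], new_order[j] = new_order[j], new_order[i]
    return new_order

# Hypothetical usage with a five-activity sequence.
print(pairwise_exchange(["1", "2", "3", "4", "5"]))
```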
4) Cooling Schedule: The objective of the cooling schedule is to achieve a finite-time implementation of the SA algorithm. In designing the cooling schedule, four rules have to be specified: 1) an initial temperature $T_{0}$; 2) a rule for decrementing the temperature; 3) a final temperature $T_{f}$ or stopping criterion; and 4) a length for the Markov chains. While the first three rules manage a finite sequence of the control parameter (temperature), the fourth rule manages a finite sequence of transitions at each value of the control parameter.
Fig. 16. Initial solution statistics.
Fig. 17. Objective function measures.
The selection of the initial temperature $T_{0}$ is very important. On the one hand, the value of $T_{0}$ should be high enough to allow all, or most, transitions to be accepted; this, of course, would result in a lot of time consumed at the beginning of the process without progress toward the optimal solution. On the other hand, a low initial temperature would reduce the quality of the final solution. The simplest and most common temperature decrement rule is $T_{k+1} = \alpha T_{k}$, where $\alpha$ is a constant close to, but smaller than, one. This exponential cooling scheme (ECS) was first proposed in [11]. The final temperature $T_{f}$ is determined by fixing the number of temperature values to be used. Alternatively, the search can be stopped when it ceases to make progress. One of the methods used to define lack of progress is when no improvement (i.e., no new best solution) is found in an entire Markov chain at one temperature. The length $L_{k}$ of the $k$th Markov chain is based on the number of transitions needed to achieve a quasi-equilibrium at each temperature value $T_{k}$; $L_{k}$ depends on the size and nature of the problem and is independent of $k$. In this research, the Markov chain is bounded by a maximum number of accepted and rejected transitions, whichever comes first.
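The schedule just described can be sketched as follows: exponential temperature decrements, with the Markov chain at each temperature bounded by a maximum number of accepted or rejected transitions, whichever limit is reached first. The acceptance test is left abstract, and the parameter values are placeholders rather than those used in the case study.

```python
import random

def run_cooling_schedule(accept_move, t0=100.0, t_final=0.1, alpha=0.95,
                         accept_limit=60, reject_limit=60):
    """Drive the annealing temperature; `accept_move(t)` attempts one transition
    at temperature t and returns True if it was accepted."""
    t = t0
    while t > t_final:
        accepted = rejected = 0
        # Markov chain at this temperature: stop when either counter hits its limit.
        while accepted < accept_limit and rejected < reject_limit:
            if accept_move(t):
                accepted += 1
            else:
                rejected += 1
        t *= alpha  # exponential cooling: T_{k+1} = alpha * T_k
    return t

# Dummy acceptance test, only to show how the schedule is driven.
run_cooling_schedule(lambda t: random.random() < 0.5)
```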
IV. BENCHMARKING: SA SOLUTION COMPARED WITH COMPLETE ENUMERATION

The convergence of SA to the global optimum has been discussed in many research articles. Yet, for NP-hard problems, such as the one tackled in this research, it cannot be proved that the solution reached is the global optimum; that is why we refer to it as "near optimal." To provide more confidence regarding the quality of its solutions and to demonstrate its performance in this domain of problems, the presented SA algorithm is used to optimally resequence two small DSMs with deterministic activity times, and the results are compared with those of complete enumeration of both DSMs. Fig. 12(a) shows the initial randomly sequenced DSM of the first example, resulting in eight feedback couplings and a total iterative time of 2630 units. The optimally resequenced DSM, as determined by complete enumeration, is shown in Fig. 12(b) with only four feedbacks and a total iterative time of 1610 units.
Fig. 18. Solution robustness.
Fig. 19. Final DSM.
The SA algorithm reached this solution after 33 evaluations, compared with a complete enumeration of 720 evaluations (keep in mind that the first and last activities are fixed). A relatively larger DSM is shown in Fig. 13, with an initial solution configuration, Fig. 13(a), of 13 feedbacks and a total iterative time of 4610 units. The optimally resequenced DSM is shown in Fig. 13(b) with 13 feedbacks and a total iterative time of 3490 units. The SA algorithm reached this solution after 53 evaluations, compared with a complete enumeration of 40 320 evaluations. In both examples, the objective was to minimize total iterative time. Notice that in the first example, the minimum time was associated with the minimum number of feedback couplings; in the second example, on the contrary, a larger number of feedbacks was obtained. Fig. 13(c) shows the activity sequence that results in the minimum number of feedback couplings for the second example. Although this sequence resulted in only eight feedbacks, the total iterative time was 3920 units. Thus, the optimal solution, or DSM structure, does not necessarily have the minimum number of feedbacks.

V. ILLUSTRATIVE CASE STUDY

The project presented here is adapted from [37]. It consists of 22 activities and 39 couplings. The project was taken from a larger conceptual design project.
Fig. 20. Final solution statistics.
TABLE III INITIAL VERSUS FINAL SOLUTION
Fig. 14 represents the process flowchart for this project. "The main problems with this type of chart are that it is difficult to determine where to begin the design activity and which processes are iterative [34]." Each activity has an associated duration represented by a probability distribution function. For example, the "Dynamic Model" time follows a normal distribution with a mean equal to 20 and a standard deviation of 2 time units. As a start, the sequence of the activities was randomly ordered. Fig. 15 shows the DSM corresponding to this initial solution configuration. As shown, this configuration results in 23 feedbacks and 16 feed-forward couplings. The expected total time required for this sequence to converge is 19 374 time units. The probability distribution of the total project time and the associated statistical measures are shown in Fig. 16.

A. Settings

1) SA Cooling Schedule: The initial and final values of the control parameter, referred to as $T_{0}$ and $T_{f}$, respectively, are specified along with the cooling rate $\alpha$. To determine whether the system is metastable, two counters, AcceptCount and RejectCount, were introduced to keep track of the number of accepted and rejected solutions, respectively, at each temperature. Iterations at each temperature halt when either AcceptCount or RejectCount reaches its defined threshold, AcceptCountLimit or RejectCountLimit, respectively. For this problem, both thresholds were set to 60. The values of the different SA parameters were chosen after several pilot runs to gain sufficient solution quality with shorter optimization runs. The fine-tuning of the SA parameters depends mainly on the nature and size of the problem. For a larger problem, or when an extremely accurate solution is needed, a longer cooling schedule can be used.

2) Simulation: For each proposed solution configuration, the simulation module (Crystal Ball) performs a number of trials/runs (called the simulation length) to determine the statistical measures for this configuration. The accuracy of the output generally improves as the simulation length increases.
Fig. 21. Sensitivity chart.
Fig. 22. Triangularization algorithm.
An ideal situation, of course, is to have the simulation stop when the output distribution stabilizes. In this research, a fixed simulation length of 800 trials was chosen after conducting several pilot runs. The small mean standard error associated with this length implies high confidence in the results. An alternative procedure would be to fix the mean standard error and let the simulation length vary, but this would result in increased solution time. Future research may investigate: 1) whether the simulation length can vary as the optimization proceeds toward the optimum solution, i.e., using a smaller number of runs at first and then increasing this number when nearing the solution and 2) the determination of the simulation length as a function of DSM size.
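As a rough illustration of how the adequacy of a fixed simulation length might be checked, the sketch below estimates the standard error of the mean from the samples of one configuration; this particular check is an assumption about how such a judgment could be made, not a procedure described in the paper, and the samples are hypothetical.

```python
import random
import statistics

def mean_standard_error(samples):
    """Standard error of the sample mean: s / sqrt(n)."""
    return statistics.stdev(samples) / len(samples) ** 0.5

# Hypothetical PIT samples standing in for an 800-trial simulation run.
samples = [random.gauss(9500, 800) for _ in range(800)]
relative_error = mean_standard_error(samples) / statistics.fmean(samples)
print(f"relative standard error of the mean: {relative_error:.3%}")
```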
B. Results

The optimal solution configuration was obtained after 6150 evaluations (compared with the 20! possible solutions, given that the first and last activities are fixed). Fig. 17 illustrates the metastable values of the four statistical measures of the objective function at different temperatures. Fig. 17 suggests that the stopping criterion can be changed: instead of fixing the final temperature, the optimization process can be terminated when no significant improvement of the optimal solution is achieved within several consecutive temperature decrements. Although such a stopping criterion may reduce the optimization time, the quality of the final solution could be affected. In some cases, a sudden large improvement may occur after this halting period.
Fig. 23. Partitioning (1).
Thus, the choice of either criterion depends on a tradeoff between time and solution quality.

Fig. 18 illustrates how the modified algorithm aims toward finding a robust solution: not only was the mean minimized, but also the range. Fig. 19 shows the DSM corresponding to the final optimal solution. Only 7 feedbacks are left instead of 23. Fig. 19 further suggests that some activities can be gathered into iterative blocks; these are: 1) activities 6 to 11; 2) activities 8 and 9; and 3) activities 14 to 17. Fig. 20 illustrates a summary of the optimal solution statistics, followed by Table III, which presents the improvement (reduction) in PIT resulting from the resequencing of activities. A large variance reduction can be noticed. Finally, Fig. 21 is the sensitivity chart. This chart helps identify some critical activities based on their contribution to the variance of the objective function (project time).

C. Comparison With Other Approaches

To further illustrate the performance of the presented approach, the same case study will be solved using two alternative approaches: 1) the triangularization algorithm [24] and 2) partitioning using PSM32 [30]. It should be noted, though, that neither approach considers activity time or iteration requirements; thus, the comparison will be on a deterministic basis. Fig. 22 shows the final DSM obtained by the triangularization algorithm, with 16 feedback couplings and a total iterative time of 20 830 units. The final DSM obtained by the partitioning algorithm is shown in Fig. 23, with 11 feedback couplings and a total iterative time of 13 830 units. Despite the fact that partitioning yielded a lower number of feedbacks, the solution can be considered infeasible since the order of activity "Initial Data" is set to 21, which contradicts logic. By manually moving this activity to be the first one, a feasible solution
was obtained; the DSM is shown in Fig. 24, with 11 feedbacks and a total iterative time of 13 510 units. Table IV compares the results of the three approaches. As shown, SA obtained a DSM with both the minimum number of feedbacks and the minimum time.

VI. DISCUSSIONS AND CONCLUSION

A PDP is typically a complex system. The main approach to handling such a system involves decomposing it into subsystems and further into smaller components and defining the relationships among these components. Following these steps, the system will be decomposed into possibly several hundred activities (components) and thousands of variable interchanges among these activities. The sequence of performing these activities strongly affects the time needed to realize the whole project. This paper presented a simulation-based optimization framework that determines the optimal sequence of activity execution within a PDP that minimizes project duration given that activity durations are stochastic. The framework interfaces a metaheuristic optimization algorithm, SA, with the commercial risk analysis software Crystal Ball and was implemented as an MS Excel module.

The DSM provided an effective tool for understanding the system structure. To find an optimal activity sequence of the DSM representing a design project in terms of PIT, a metaheuristic optimization algorithm, SA, was used to solve a mathematical program (model) representing the DSM structure. The model supported: 1) imposing logical constraints on the activity sequence and 2) defining stochastic activity times. The introduction of logical constraints to the model provided a means of control over the activity sequence of the DSM. Moreover, such constraints can be used when considering multi-DSM (or multiproject) cases. In such cases, these constraints can be set to maintain some logical activity order or to represent resource constraints.
Fig. 24. Partitioning (2).
TABLE IV COMPARISONS OF RESULTS
Thus, multi-DSM problems can be optimized through the same optimization module developed in this research.

The model assumed sequential execution of activities and did not allow for activity concurrency or overlapping. Further investigation can consider cases in which activities can start without receiving all required input information. Furthermore, cases in which an activity can provide some output to other activities before it finishes can also be incorporated in a future model.

Since SA was originally developed to handle deterministic objective functions, the presented research involved modifying the SA algorithm to handle stochastic objective functions (multipoint estimates) rather than deterministic ones (one-point estimates). The goal here was to determine a robust solution rather than merely an optimal one. This was achieved by modifying the acceptance and rejection rules of the SA algorithm. The performance of SA was benchmarked against complete enumeration and showed high efficiency. The presented approach was further applied to a design project, and a robust solution with minimum PIT was reached. The results were then
compared with two alternative approaches, and SA showed favorable results. The presented framework used commercial risk analysis software (Crystal Ball) to carry out the Monte Carlo simulation. Crystal Ball provides features and functions that would not be easy to build again within the scope of this research, for example, the variety of pdfs, reports, and sensitivity analysis. On the other hand, it slows the optimization process down considerably. Thus, a basic Monte Carlo algorithm could be put directly into VBA to speed things up.
REFERENCES [1] S. S. Altus, I. M. Kroo, and P. J. Gage, "A genetic algorithm for scheduling and decomposition of multidisciplinary design problems," Trans. ASME, vol. 118, no. 4, pp. 486–489, 1996. [2] S. Austin, A. Baldwin, B. Li, and P. Waskett, "Analytical design planning technique (ADePT): A dependency structure matrix tool to schedule the building design process," Construct. Manage. Econ., vol. 8, pp. 173–182, 2000. [3] S. Austin, A. Baldwin, B. Li, and P. Waskett, "Application of the analytical design planning technique to construction project management," Project Manage. J., vol. 31, no. 2, pp. 48–59, 2000. [4] J. D. Blackburn, "New product development: The new time wars," in Time-Based Competition, J. D. Blackburn, Ed. Homewood, IL: Business One Irwin, 1991. [5] T. R. Browning, "Modeling and analyzing cost, schedule, and performance in complex system product development," Ph.D. dissertation, MIT, Cambridge, MA, 1998. [6] T. R. Browning, "The design structure matrix," in Technology Management Handbook, R. C. Dorf, Ed. Boca Raton, FL: Chapman & Hall, 1999, pp. 103–111. [7] T. R. Browning, "Applying the design structure matrix to system decomposition and integration problems: A review and new directions," IEEE Trans. Eng. Manage., vol. 48, no. 3, pp. 292–306, Aug. 2001. [8] T. R. Browning, "Modeling the customer value of product development processes," Syst. Eng., vol. 6, no. 1, pp. 49–61, 2003.
[9] T. R. Browning and S. D. Eppinger, “Modeling impacts of process architecture on cost and schedule risk in product development,” IEEE Trans. Eng. Manage., vol. 49, no. 4, pp. 443–458, 2002. [10] A. A. Bulgak and J. L. Sanders, “Integrating a modified simulated annealing algorithm with the simulation of a manufacturing system to optimize buffer sizes in automatic assembly systems,” in Proc. 20th Conf. Winter Simulation, San Diego, CA, 1988, pp. 684–690. [11] V. Cerny, “Thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm,” J. Optim. Theory Applicat., vol. 45, no. 1, pp. 41–51, 1985. [12] A. K. Chakravarty, “Overlapping design and build cycles in product development,” Eur. J. Oper. Res., vol. 134, pp. 392–424, 2001. [13] S.-H. Cho, “An integrated method for managing complex engineering projects using the design structure matrix and advanced simulation,” M.S. thesis, MIT, Cambridge, MA, 2001. [14] S. Denker, D. V. Steward, and T. R. Browning, “Planning concurrency and managing iteration in projects,” Project. Manage. J., vol. 32, no. 3, pp. 31–38, 2001. [15] Decision Engineering, Inc. [Online]. Available: http://www.decisioneering.com [16] J. Fiksel, Design for Environment: Creating Eco-Efficient Products & Processes. New York: McGraw-Hill, 1991. [17] D. A. Gebala and S. D. Eppinger, “Methods for analyzing design procedures,” in Proc. ASME 3rd Int. Conf. Design Theory Methodology, 1991, pp. 227–233. [18] S. B. Gelfand and S. K. Mitter, “Simulated annealing with noisy or imprecise energy measurements,” J. Optim. Theory Applicat., vol. 62, pp. 49–62, 1989. [19] F. Glover, J. P. Kelly, and M. Laguna, “New advances and applications of combining simulation and optimization,” in Proc. 1996 Winter Simulation Conf., J. M. Charnes, D. J. Morrice, D. T. Brunner, and J. J. Swain, Eds., pp. 144–152. [20] J. Haddock and J. Mittenthal, “Simulation optimization using simulated annealing,” Compute. Ind. Eng., vol. 22, no. 4, pp. 387–395, 1992. [21] C. H. House and R. L. Price, “The return map: Tracking product teams,” Harvard Bus. Rev., pp. 92–101, Jan.–Feb. 1991. [22] S. Kirkpatrick, C. D. Gerlatt, Jr., and M. P. Vecchi, “Optimization by simulated annealing,” IBM Research Rep., RC 9355, 1982. [23] C. Koulaman, S. R. Antony, and R. Jean, “A survey of simulated annealing applications to operations research problems,” Omega, vol. 22, no. 1, pp. 41–56, 1994. [24] N. L. Kusiak and J. Wang, “Reengineering of design and manufacturing processes,” Comput. Ind. Eng., vol. 26, no. 3, pp. 521–536, 1994. [25] A. M. Law and W. D. Kelton, Simulation Modeling and Analysis, 3rd ed. New York: McGraw-Hill, 2000. [26] W. P. Ledet and D. M. Himmelblau, “Decomposition procedures for the solving of large scale systems,” Advances Chem. Eng., vol. 8, pp. 185–254, 1970. [27] N. Metropolis and S. Ulam, “The Monte Carlo method,” J. Amer. Statist. Assoc., vol. 44, pp. 335–341, 1949. [28] I. H. Osman and J. P. Kelly, “Meta-heuristics: An overview,” in Meta-Heuristics: Theory and Application, I. H. Osman and J. P. Kelly, Eds. Norwell, MA: Kluwer, 1996. [29] R. Pichler and P. Smith, “Developing your products in half the time,” Critical EYE, pp. 1–4, Dec. 2003. [30] Problematics. [Online]http://www.problematics.com [31] N. Roenko, “Simulated annealing under uncertainty,” Inst. F. Operations Research, Univ. Zurich, Zurich, Switzerland, Tech. Rep., 1990. [32] J. L. Rogers, “Knowledge-based tool for multilevel decomposition of a complex design problem,” NASA, TP-2903, 1989. 
[33] J. L. Rogers, "DeMaid—A Design Manager's Aid for Intelligent Decomposition User's Guide," NASA, TM-101575, 1989. [34] J. L. Rogers, "DeMaid/GA—An enhanced design manager's aid for intelligent decomposition," presented at the 6th AIAA/USAF/NASA/ISSMO Symp. Multidisciplinary Anal. Optim., Seattle, WA, Sep. 4–6, 1996, AIAA Paper no. 96-4157. [35] J. L. Rogers and S. L. Padula, "An intelligent advisor for the design manager," in Proc. 1st Int. Conf. Comput. Aided Optimum Design of Structures, Southampton, U.K., 1989, pp. 169–177. [36] J. L. Rogers and C. L. Bloebaum, "Ordering design tasks based on coupling strengths," in Proc. 5th AIAA/USAF/NASA/ISSMO Symp. Multidisciplinary Anal. Optim., Panama City, FL, 1994, AIAA paper no. 94-4326, also NASA TM 109137.
[37] J. L. Rogers, C. M. McCulley, and C. L. Bloebaum, "Integrating a genetic algorithm into a knowledge-based system for ordering complex design processes," in Proc. Artif. Intell. Design Conf., Stanford Univ., Stanford, CA, 1996, also NASA TM-110247. [38] P. G. Smith and D. G. Reinertsen, "Faster to market," Mechan. Eng., pp. 68–70, Dec. 1998. [39] P. G. Smith and D. G. Reinertsen, Developing Products in Half the Time: New Rules, New Tools. New York: Wiley, 1998. [40] P. G. Smith, "From experience: Reaping benefit from speed to market," J. Prod. Innov. Manage., pp. 222–230, May 1999. [41] R. P. Smith and S. D. Eppinger, "Identifying controlling features of engineering design iteration," Manage. Sci., vol. 43, no. 3, pp. 276–293, 1997. [42] D. G. So and K. A. Dowsland, "Simulated annealing: An application to simulation optimization," presented at the OR35 Conf., University of York, York, U.K., Sep. 1993. [43] G. Stalk, "Time: The next source of competitive advantage," Harvard Bus. Rev., vol. 66, pp. 41–51, Jul.–Aug. 1988. [44] D. V. Steward, Systems Analysis and Management: Structure, Strategy, and Design. New York: PBI, 1981. [45] D. V. Steward, "The design structure system: A method for managing the design of complex systems," IEEE Trans. Eng. Manage., vol. 8, pp. 71–74, 1981. [46] P. J. Van Laarhoven and E. H. Aarts, Simulated Annealing: Theory and Applications. Amsterdam, The Netherlands: Reidel, 1987. [47] J. N. Warfield, "Binary matrices in system modeling," IEEE Trans. Syst., Man, Cybern., vol. 3, pp. 441–449, 1973. [48] A. Yassine, K. Chelst, and D. Falkenburg, "Engineering design management: An information structure approach," Int. J. Prod. Res., vol. 37, no. 13, pp. 2957–2975, 1999.
Hisham M. E. Abdelsalam received the B.Sc. degree in mechanical engineering from Cairo University, Cairo, Egypt, in 1996, and the M.Sc. and Ph.D. degrees in mechanical engineering from Old Dominion University, Norfolk, VA, in 2000 and 2003, respectively. His work experience is primarily related to academic work at the Decision Support Department, Cairo University. He works as a project management consultant for several Egyptian agencies. His current research interests are enterprise project management, systems analysis methodologies, decision support systems, and design process optimization.
Han P. Bao received the B.S.E., M.Sc., and Ph.D. degrees in industrial engineering from the University of New South Wales, Sydney, Australia, in 1968, 1972, and 1976, respectively. His work experience is primarily related to academic work at the universities of Purdue, Bradley, NC State University, Missouri-Columbia, and Old Dominion. He worked in industry for two years as a Design Engineer for the Postmaster General’s Office, Sydney. Currently, he is the Mitsubishi Kasei Professor of Manufacturing Engineering at Old Dominion University, Norfolk, VA. His current research interests are in the areas of concurrent engineering, decision support systems, fuzzy logic applications, and life cycle engineering. Dr. Bao is a member of the Society of Manufacturing Engineers (SME), American Society of Mechanical Engineers (ASME), and the Society of Allied Weight Engineers (SAWE). He is a registered Professional Engineer with the State of Missouri.