J Heuristics (2009) 15: 479–501 DOI 10.1007/s10732-008-9074-2

A cross entropy based algorithm for reliability problems

Marco Caserta · Marta Cabo Nodar

Received: 30 January 2006 / Revised: 3 March 2008 / Accepted: 6 March 2008 / Published online: 21 March 2008 © Springer Science+Business Media, LLC 2008

Abstract The Cross Entropy method has recently been applied to combinatorial optimization problems with promising results. This paper proposes a Cross Entropy based algorithm for reliability optimization of complex systems, where one wants to maximize the reliability of a system through optimal allocation of redundant components while respecting a set of budget constraints. We illustrate the effectiveness of the proposed algorithm on two classes of problems, software system reliability optimization and complex network reliability optimization, by testing it on instances from the literature as well as on randomly generated large scale instances. Furthermore, we show how a Cross Entropy-based algorithm can be fine-tuned by using a training scheme based upon the Response Surface Methodology. Computational results show the effectiveness as well as the robustness of the algorithm on different classes of problems.

Keywords Reliability · Cross entropy · Metaheuristics

M. Caserta: Institut für Wirtschaftsinformatik, Universität Hamburg, Von-Melle-Park 5, 20146 Hamburg, Germany. e-mail: [email protected]
M.C. Nodar: Instituto Tecnologico de Monterrey, Calle del Puente, 222, Col. Ejidos de Huipulco Del. Tlalpan, México DF 14380, México

1 Introduction

Many modern systems, both hardware and software, are characterized by a high degree of complexity. To enhance the reliability of such systems, it is vital to define techniques and models aimed at optimizing the design of the system itself. This paper presents a new metaheuristic-based algorithm aimed at tackling the general system reliability problem via optimal allocation of redundant components with fixed resources.


Due to recent developments in information and communication technology, and in view of the increasing dependency of human beings on systems, the reliability of both hardware and software systems has become a key issue. In this paper, we first consider one of the most relevant applications of reliability optimization, the software reliability problem. Forty percent of software development cost is spent on testing to remove errors and to ensure high quality (Kuo et al. 2001). One traditional way to increase the reliability of a software system is to increase the time spent debugging. However, due to the high debugging costs, an alternative solution is more often pursued: the use of redundant software modules. In software development, redundancies are programs developed by different teams based on the same specifications. Often, in order to ensure the independence of the different modules, these programs are written using different languages, platforms, development methodologies and testing strategies. Another relevant reliability problem is the complex network reliability optimization problem. A complex system is a system that cannot be broken down into groups of series and parallel components. The problem is to find the optimal allocation of redundant components throughout the network in order to maximize the overall system reliability, while taking into account a set of knapsack-type budget constraints. Both classes of problems, when modeled, fall into the category of combinatorial optimization problems. Such combinatorial optimization problems are known to be NP-hard (Garey and Johnson 1979), which implies, if P ≠ NP, that there is no polynomial time algorithm for these kinds of problems. For this reason, researchers have devoted much attention to the development of approximate, or heuristic-based, approaches for this family of problems. The most recent trend in combinatorial optimization is focused on the development of metaheuristic algorithms to tackle large scale problems (Feo and Resende 1995; Glover and Laguna 1997; Dorigo and Di Caro 1999; Blum and Roli 2003). Metaheuristics are intelligent strategies able to extract knowledge during the search process. Throughout the search, a metaheuristic algorithm constantly evaluates the solutions obtained with the aim of extracting useful information to improve the future search. An interesting classification of metaheuristic schemes along different dimensions is provided in Blum and Roli (2003). One innovative metaheuristic is the "Cross Entropy" method (CE), proposed by Rubinstein (1997, 1999, 2001). Cross Entropy was proposed as an adaptive algorithm used to estimate rare event probabilities within complex networks. In recent years, however, Cross Entropy has gained much attention for its ability to effectively tackle combinatorial optimization problems (Rubinstein 1999; Rubinstein and Kroese 2004). The Cross Entropy method has been successfully applied to complex combinatorial optimization problems, such as traveling salesman problems (De Boer et al. 2005), vehicle routing problems (Chepuri and Homem De Mello 2005), buffer allocation problems (Alon et al. 2005), maximal cut problems (Rubinstein 2002), and sequence alignment problems in genetics (Keith and Kroese 2002).
This paper proposes a cross entropy based algorithm for two classes of complex systems reliability optimization problems, the software reliability problem and the general network reliability problem. We introduce two simple schemes, called "projection scheme" and "normalization scheme", to deal with knapsack-type and set covering-type constraints, respectively. Furthermore, we show that, as long as a basic mathematical property is enforced, the algorithm effectively solves different kinds of reliability problems, regardless of the system configuration. As presented in the following section, in order for the algorithm to find good quality solutions, it is vital that a "monotonic property" can realistically be assumed. This assumption requires that adding a component to the system increases both the overall reliability and the resource usage.

The paper has the following structure: Sect. 2 presents the formulation of the general reliability problem. Section 3 provides some insight about how Cross Entropy can be used to solve a reliability problem. Section 4 illustrates how to tackle two important optimization problems within the reliability realm and how to deal with knapsack-type and set covering-type constraints. We present computational results to prove the effectiveness of the proposed algorithm. Finally, Sect. 5 concludes with some remarks.

2 Problem formulation

Let us consider the general system reliability optimization problem via redundancy allocation:

\text{(RP)}: \quad \max\; R_s = f(x) \quad \text{s.t.}\quad g(x) \le b, \quad x \in \mathbb{Z}^n_+,

where Rs indicates the overall system reliability, f : Z^n → R+ represents the objective function, a mapping of Z^n into R+, g = (g_1, g_2, ..., g_K) is a set of K knapsack-type constraints, with g_l : Z^n → R+, and b ∈ R^K_+ is the resource availability vector. It is worth noting that neither f nor the g_l, l = 1, ..., K, is required to be linear. Generally speaking, the functional structure of f depends on the system configuration (e.g., series, parallel, non-series-non-parallel configurations, etc.) and is usually nonlinear. Examples of different system configurations are provided in Sects. 4.1 and 4.2. Finally, the variable vector x is an integer vector of size n, where x_j indicates how many elements of component j are to be added to the system.

The purpose of the paper is to present a robust algorithm for the general (RP) problem with knapsack-type constraints, especially suited for very large scale instances. The underlying assumption required for the algorithm to find good quality solutions is that the objective function f as well as the constraint functions g_l, l = 1, ..., K, satisfy the "monotonic property" presented below in Definition 1.

We now introduce the notion of monotonic property for the class of functions studied in this paper, which resembles the one of Crama (1987). Let f : Ω → R+ be a function defined on the domain Ω. Let us define the marginal contribution of x_i in f as:

\partial_i f(x) = f_+(x) - f(x), \qquad (1)

where

f_+(x) = f(x_1, \ldots, x_{i-1}, x_i + 1, x_{i+1}, \ldots, x_n), \qquad f(x) = f(x_1, \ldots, x_{i-1}, x_i, x_{i+1}, \ldots, x_n),

for each x ∈ Ω.

Definition 1 A function f : Ω → R+ is monotonic at x_i if ∂_i f(x) has constant sign on Ω. Furthermore, we say that f : Ω → R+ is variable-wise monotonic if, for each x_i, with i = 1, ..., n, f is monotonic at x_i.

The problems studied in this paper require that, depending upon the class of reliability problems considered, either Ω ⊆ Z^n or Ω ⊆ B^n, where B^n denotes the n-dimensional hypercube {0, 1}^n. When Ω ⊆ B^n, we assume that x_i = 0 and x_i + 1 = 1. In addition, we also require that all components in the system have positive marginal contributions, with respect to the objective function as well as to the constraint functions. In the context of reliability problems, if f and the g_l present the monotonicity property of Definition 1, then adding a new component to the system always has the double effect of increasing both the reliability and the resource usage of the overall system. For instance, for a parallel block with f(x) = 1 - \prod_j (1 - r_j)^{x_j}, one has \partial_i f(x) = r_i (1 - r_i)^{x_i} \prod_{j \ne i} (1 - r_j)^{x_j} > 0, so f is variable-wise monotonic. This means that both the objective function and the constraint functions are monotonically positive with respect to each variable x_i, with i = 1, ..., n. It is worth noting that, besides the monotonic property, no further assumptions, for example about the system configuration or the functional structure of the constraints, are required for the algorithm to work effectively.

3 The CE method for reliability problems

The most important features of the CE method have been thoroughly exposed in De Boer et al. (2005). For this reason, the paper only presents those features of the CE method that are relevant to the problems discussed here. Since it is possible to encode an integer reliability problem into a corresponding binary reliability problem, an obvious representation of candidate solutions for the general reliability problem is through a binary string of size n', where n' = \sum_{j=1}^{n} ub_j, with ub_j indicating the upper bound of the discrete variable x_j. For this reason, we only present how to apply the CE method to a binary reliability problem. Further indications about how to transform the integer reliability problem into the corresponding binary problem are provided in Sect. 4.3.

Let us consider a general 0–1 integer maximization problem:

\text{(P)}: \quad z^* = \max_{x \in \mathcal{X}} f(x),

where X ⊆ B^n represents the feasible region. The CE method associates a stochastic estimation problem to (P). With this aim, let us define a family of density functions φ on X, parameterized by a vector u ∈ [0, 1]^n. Since we want to generate binary vectors to identify which components should be added to the system, let us assume that we use a family of Bernoulli density functions with the following pdf:

\phi(x, u) = \prod_{i=1}^{n} (u_i)^{x_i} (1 - u_i)^{1 - x_i}. \qquad (2)


The associated stochastic estimation problem is

\text{(EP)}: \quad P_u(f(X) \ge z) = \sum_{x \in \mathcal{X}} I_{\{f(x) \ge z\}} \, \phi(x, u),

where I_{f(x)≥z} is the indicator function, whose value is 1 if f(x) ≥ z and 0 otherwise, and P_u is the probability measure that a random state X, drawn under φ(·, u), has performance function value greater than or equal to a given threshold z. We want to estimate l = P_u(f(X) ≥ z) for z = z*. When the size of X is large enough, simulation is an acceptable approach to find x* such that z* = f(x*) ≥ f(x) for all x ∈ X. We could estimate l via Monte Carlo simulation by drawing a random sample X_1, ..., X_N from φ(x, u) and computing \frac{1}{N} \sum_{i=1}^{N} I_{\{f(X_i) \ge z\}}. However, the event X_i = x* is a rare event, with probability 1/|X|. Hence, the size of the sample required to estimate l might be extremely large. Alternatively, it is possible to estimate l via Importance Sampling. Let us take a random sample X_1, ..., X_N from a different density function φ(x, p). We want to determine the optimal reference parameter associated with P_p(f(X) ≥ z). As illustrated in De Boer et al. (2005), the parameter p can be estimated by

\hat{p} = \arg\max_{p} \frac{1}{N} \sum_{i=1}^{N} I_{\{f(X_i) \ge z\}} \ln \phi(X_i, p), \qquad (3)

where the X_i are generated under the pdf φ(·, p). Since φ(·, p) is given by (2), and since each component X_ij only takes values 0 or 1, finding p̂ under (3) corresponds to solving

\frac{\partial}{\partial p_j} \sum_{i=1}^{N} I_{\{f(X_i) \ge z\}} \ln \phi(X_i, p) = 0,

which gives the optimal updating rule:

\hat{p}_j = \frac{\sum_{i=1}^{N} I_{\{f(X_i) \ge z\}} X_{ij}}{\sum_{i=1}^{N} I_{\{f(X_i) \ge z\}}}, \qquad j = 1, \ldots, n. \qquad (4)

If we establish an arbitrary initial value for p0, rule (4) can be iteratively used with the aim of generating a sequence of increasing threshold values z0, z1, ..., converging either to the global optimum z* or to a value close to it. At each iteration t, the new value of zt is used to generate a better vector p(t+1). The new vector p(t+1) is, in turn, used to draw a new, hopefully better, sample population, which will lead to a better value of z(t+1). The process stops either when we reach the global optimum value z* or when the vector p converges to a vector in X, which implies that any random state drawn under φ(·, p) will converge to the same solution in X. When using rule (4) to generate a random population, however, we find that, depending on the structure of the feasible region of the problem, many sample points might be infeasible. The issue becomes more delicate when dealing with a non-convex feasible region, as in the case of the problems discussed in this paper. We investigated different approaches aimed at coping with the presence of several knapsack-type constraints, which, in the context of reliability problems, represent budget limitations. We now briefly present three possible schemes that allow feasibility to be enforced when generating sample points, along with the advantages and drawbacks of each.

3.1 Acceptance–rejection approach

The acceptance–rejection approach analyzes each randomly generated vector X_i and checks its feasibility. The vector is accepted if feasible; otherwise it is rejected and a new sample vector X_i is drawn. The main advantage of the method is its simplicity, along with the fact that the vector p need not be modified. Furthermore, no manipulation of the random vector X_i is required. However, one major drawback of the method is that, depending on the structure of the feasible region, the majority of the sample vectors X_i might be infeasible and, hence, rejected.

3.2 Penalty approach

The penalty approach is based upon the idea of relaxing the constraints, in a similar fashion to what is done with Lagrangian relaxation. For example, with respect to the problem (RP), the penalty method would work on the relaxed problem

\text{(RPL)}: \quad \max \{\, L(x, \lambda) = f(x) + \lambda^{\top} (b - g(x)) : \lambda \ge 0, \; x \in B^n \,\}.

However, when implemented, the method proved to be very sensitive to the value of the penalty vector λ = (λ1, ..., λK). Furthermore, owing to the nonlinearity of f, deriving an updating rule for the penalty values is not a trivial task.

3.3 Projection approach

We devised a simple mechanism to overcome the infeasibility problem. This method, called the projection method, exploits the monotonic property of the constraints of (RP). In other words, the monotonic property of Definition 1 is a necessary and sufficient condition for the projection approach to work on (RP). The basic idea is to apply a modification scheme to the random vector X_i such that the final result is a feasible vector X_i^p. Let us consider the following definition:

Definition 2 Given a vector x ∈ B^n and a subspace X = {x ∈ B^n : g(x) ≤ b}, a projection of x on X is a vector x^p ∈ P_X(x), where:

P_{\mathcal{X}}(x) = \{\, x^p \in \mathcal{X} : x_j - x^p_j \ge 0, \; j = 1, \ldots, n \,\}.

In Sect. 4 we will show how the projection method can be easily implemented, provided that the monotonic property of Definition 1 is satisfied by all the constraints. Furthermore, we will discuss why the monotonic property can be introduced without loss of generality when dealing with reliability problems.
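To make the mechanics concrete, the following sketch shows one CE iteration under the Bernoulli model (2), with the elite threshold taken at the (1 − ρ)-quantile and the update of rule (4), combined with the acceptance–rejection check of Sect. 3.1. The function names and the unsmoothed update are our own simplifications, not the authors' code.

```python
import numpy as np

def ce_iteration(p, f, is_feasible, N=1000, rho=0.1, rng=None):
    """One CE iteration: draw N feasible binary vectors from the Bernoulli
    pdf (2), keep the elite above the (1-rho)-quantile, update p via rule (4)."""
    rng = rng or np.random.default_rng()
    samples = []
    while len(samples) < N:                        # acceptance-rejection (Sect. 3.1)
        x = (rng.random(p.size) < p).astype(int)   # X_i ~ phi(., p)
        if is_feasible(x):
            samples.append(x)
    X = np.array(samples)
    scores = np.array([f(x) for x in X])
    z = np.quantile(scores, 1.0 - rho)             # threshold z_t
    elite = X[scores >= z]                         # points with I{f(X_i) >= z} = 1
    return elite.mean(axis=0), z                   # rule (4): elite component means
```

Iterating `p, z = ce_iteration(p, f, is_feasible)` produces the sequence of thresholds z0, z1, ... described above; in practice the paper uses the smoothed update (6) of Sect. 4.3 instead of the raw elite mean.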


4 Reliability optimization

In this section we present in detail the CE-based algorithm for the general reliability problem. First, though, we introduce two classes of reliability problems: software reliability problems (SRP) and complex systems reliability problems (CSRP). The main difference between the two problems is related to the system configuration: while software systems are generally series systems, the term complex systems refers to systems that are neither series nor parallel. We will show that the CE-based algorithm can solve reliability problems regardless of the system configuration. Furthermore, we will illustrate how a discrete variable problem, such as (CSRP), can easily be transformed into a binary problem and, hence, be tackled using the proposed CE-based algorithm.

The section is organized as follows: in the next two subsections, we offer a brief introduction to both problems, along with some bibliographical references for the interested reader. Next, in Sect. 4.3, we present the steps of the general algorithm. Finally, in Sects. 4.4–4.6 we illustrate the effectiveness of the algorithm on both classes of problems, by presenting results on both benchmark problems drawn from the literature and randomly generated very large scale instances.

4.1 Software reliability problem and its formulation

The software system reliability problem consists of identifying the system configuration that maximizes the overall reliability through redundancy allocation, while taking into account a set of resource constraints. The general software system studied is made up of several programs, each performing a different function. Each program contains a set of modules. For each module, a number of different versions exist, each characterized by a cost and a reliability level. The objective is to maximize reliability by choosing the optimal set of versions for each module, allowing redundancy. The mathematical formulation of a software reliability problem with m modules (each one performing a different function), m_i versions for each module i (\sum_{i=1}^{m} m_i = n) and K resource constraints resembles the one of (RP) of Sect. 2 and is provided below:

\text{(SRP)}: \quad \max\; R_s = f(x) \quad \text{s.t.}\quad g(x) \le b; \quad c^i x \ge 1, \; i = 1, \ldots, m; \quad x \in B^n,

where f : B^n → R+, g = (g_1, g_2, ..., g_K) represents K knapsack-type constraints, with g_l : B^n → R+, b ∈ R^K_+ is the resource availability vector, and c^i is a (1 × n) vector whose component j is 1 if x_j belongs to module i, and 0 otherwise. Finally, the variable vector x is a binary vector, where a value of 1 indicates that version j is in the system and 0 that it is not. Note that the monotonic assumption is realistic for this class of problems, since adding a redundant component to the system always increases the overall reliability as well as the resource consumption, while removing a component from the system reduces both.
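As an illustration of the kind of objective and constraint functions involved, the sketch below evaluates f(x) and checks feasibility for an (SRP)-like instance. It assumes the usual series arrangement in which module i works when at least one selected version works, and takes the knapsack constraints linear; both assumptions are ours for illustration, since (SRP) requires neither.

```python
import numpy as np

def srp_reliability(x, r):
    """f(x) for a series software system: module i works when at least one
    selected version works, so R_s = prod_i (1 - prod_j (1 - r[i][j])**x[i][j])."""
    Rs = 1.0
    for ri, xi in zip(r, x):
        mod_fail = np.prod([(1.0 - rij) ** xij for rij, xij in zip(ri, xi)])
        Rs *= 1.0 - mod_fail
    return Rs

def srp_feasible(x, g, b):
    """Checks g(x) <= b and the covering constraints c^i x >= 1; g[k][i][j] is
    the usage of version j of module i under resource k (taken linear here)."""
    if any(sum(xi) < 1 for xi in x):          # at least one version per module
        return False
    for gk, bk in zip(g, b):                  # knapsack-type budget constraints
        used = sum(gk[i][j] * x[i][j]
                   for i in range(len(x)) for j in range(len(x[i])))
        if used > bk:
            return False
    return True
```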


For this reason, we can assume that both the objective function and the constraints are monotonic in x. The first set of constraints, g(x) ≤ b, represents economic and financial constraints (not necessarily linear), while the second set of constraints, c^i x ≥ 1, implies that at least one version must be selected for each module. This second type of constraint is common to other well-known formulations, such as the set covering formulation, where one needs to cover each row with at least one column. A number of exact and heuristic-based approaches have been proposed in recent years, each of them addressing a specific class of problems. What emerges from the analysis of the literature is that existing models and algorithms can be used only under specific circumstances. Some of the most relevant approaches are those proposed by Kohda and Inoue (1982), Shi (1987), Belli and Jedrzejowicz (1991), Berman and Ashrafi (1993), as well as Kim and Yum (1993) and Ravi et al. (1997).

4.2 Complex systems reliability problem and its formulation

Problem (CSRP) can be defined as "the optimal redundancy allocation in complex system configurations with multiple knapsack constraints". The mathematical formulation for this problem is the same as the one of (SRP), presented in Sect. 4.1, except that the constraint set c^i x ≥ 1, i = 1, ..., m, now becomes x_i ≥ 1, i = 1, ..., n, where n indicates the total number of components in the system. The difference is that, while in (SRP) one wants to identify which elements should be added to the system, in (CSRP) each element must be included in the system and the optimization problem consists of identifying how many times an element must be replicated. An excellent overview of different methods proposed to solve complex system reliability problems is given in Kuo and Prasad (2000).

4.3 CE-based algorithm

In this section, we present the CE-based algorithm for (SRP). We will illustrate two of the most important features of the algorithm, namely the normalization process, used to guarantee the satisfaction of the second set of constraints of (SRP), the set covering-type constraints, and the projection scheme adapted to the problem, which ensures that the knapsack-type constraints are satisfied. Finally, we will briefly present how the same algorithm can be used to solve the (CSRP). The set of constraints c^i x ≥ 1 in (SRP) can be seen as a set covering-type constraint, where each module has at least one version in the final solution. In order to ensure that this set of constraints is satisfied, we apply a normalization scheme during the construction of each point of the CE population. Let us suppose that we are trying to decide which versions of module i should be included in the solution. Let us assume that module i has m_i versions, and let x_ij = 1 if version j of module i is added to the final solution and x_ij = 0 otherwise. The CE scheme randomly establishes which of these versions should be in the final solution, by generating a random value for each version x_ij. However, the Cross Entropy mechanism does not guarantee that at least one version per module will be chosen. In order to make sure that each module has at least one version in the final solution, we normalize the probability values p_ij in such a way that the sum of the p_ij over the remaining versions is 1.


However, we want to make sure that the probability of including a component within the system is still determined by the p_ij values of the CE method. For this reason, let us suppose we generate a random permutation π = (π_1, ..., π_{m_i}) of the components 1, ..., m_i. Let us now suppose that the Cross Entropy scheme establishes that the first version of module i should not be included in the optimal solution, that is, x_{iπ_1} = 0. We want to increase the probabilities that at least one of the remaining versions of module i, versions π_2 to π_{m_i}, be included in the final solution. Procedure Normalization() presents the normalization mechanism, which guarantees that at least one version per module will be included in the final solution. This procedure ensures that if, for example, only one version of module i is left to be assigned, say version x_il, and no versions belonging to module i have been set to 1, p_il gets a value of 1 and, hence, version l of module i is added to the solution. Once a vector X feasible with respect to the set covering-type constraints has been built, we reset the vector p to its original value, eliminating the effects of the normalization process and hence ensuring that the remaining points of the Cross Entropy population are generated under the original vector p.

Procedure 1: Normalization()
Require: module i, current version j, random permutation of components π
Ensure: "normalized" vector p_i^t
1: if \sum_{k=1}^{j} x_{iπ_k} = 0 then
2:   compute ne = |{x_{ik} : p^t_{ik} > 0, k = π_{j+1}, ..., π_{m_i}}|   {remaining versions}
3:   p^t_{ik} ← max(p^t_{ik}, 1/ne), k = π_{j+1}, ..., π_{m_i}   {normalize probabilities}
4: end if

Besides ensuring that each module has at least one version in the final solution, we need to ensure that the knapsack-type constraints are satisfied. This is accomplished by means of a projection scheme. This scheme takes as input an infeasible vector and gradually reduces the number of versions in the solution until a solution X^p ∈ P_X(X) feasible with respect to the knapsack-type constraints is reached. The projection scheme iteratively sets to zero the version x_r chosen as

r = \arg\max_{w = 1, \ldots, n; \; x_w = 1} \frac{\bar{g}_w}{\partial_w f(x)}, \qquad (5)

where \bar{g}_w = \sum_{l=1}^{K} \partial_w g_l / K is the average amount of resources released when setting x_w to zero, and \partial_w f(x) is the variation in the objective function value. Through this scheme, we iteratively set to 0 the variable that generates the minimum decrease of the objective function per unit of resource released. Obviously, version x_r can be set to zero only if c^i x ≥ 1 still holds for all modules.

Remark 1 At this point, it becomes clearer why the monotonic property of Definition 1 is a necessary and sufficient condition for the projection scheme to work: since ∂_r g_l > 0 for each constraint l and each version r, when component x_r is removed from X, a positive amount of resources is released. By iteratively removing components from X using rule (5), either a feasible solution is reached or, if no feasible solution can be reached,¹ a value of zero is assigned to the objective function value of vector X.

Remark 2 Owing to the monotonic property of the constraints, the projection scheme applied to an infeasible vector x ∈ B^n always terminates in at most O(n) steps.

We now illustrate the overall algorithm for the software system reliability problem. To simplify the presentation, we introduce some notation. Let p_i = (1/m_i, ..., 1/m_i) be a vector of m_i components that represents the initial probabilities on each module. Let us indicate with N the Cross Entropy population size, and with ρ the quantile. The termination criterion for the algorithm is given by the maximum number of iterations allowed (max). Algorithm CE-RO presents the proposed scheme.

Algorithm 2: CrossEntropyReliabilityOptimization (CE-RO)
Require: Cross Entropy population N, Cross Entropy quantile ρ
Ensure: best solution x
 1: t ← 0                                        {iteration counter}
 2: p^t = (p^t_1, ..., p^t_m)                    {initial probability vector}
 3: while t ≠ max do
 4:   for all CE points X^l, l = 1, ..., N do
 5:     for all modules i = 1, ..., m do
 6:       generate a random permutation π = (π_1, ..., π_{m_i})
 7:       for all versions j = π_1, ..., π_{m_i} do
 8:         generate a random number r ∈ [0, 1]  {uniform distribution}
 9:         if r ≤ p^t_{ij} then
10:           X^l_{ij} ← 1                       {version added to lth point}
11:         else
12:           X^l_{ij} ← 0
13:           call Normalization()               {modify vector p_i}
14:         end if
15:       end for
16:     end for
17:     if X^l not feasible then
18:       project X^l into the feasible space    {apply projection scheme}
19:     end if
20:     reset vector p^t                         {undo effects of normalization}
21:   end for
22:   sort {X^l}                                 {f(X^(1)) ≤ f(X^(2)) ≤ ... ≤ f(X^(N))}
23:   update p^{t+1} using (6)
24:   t ← t + 1                                  {update iteration counter}
25: end while

1 Due to the presence of the set covering-type constraint, the projection scheme does not guarantee that a feasible solution will be reached.
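The inner loop of Algorithm CE-RO (lines 4–20) can be sketched as follows. The data layout (one probability vector per module) and the helper signatures df, g_bar and feasible are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def draw_point(p, rng):
    """Generate one CE point (lines 5-16 plus Procedure 1); p[i][k] is the
    current probability of version k of module i. The original p is not
    modified, which corresponds to the reset of line 20."""
    x, q = [], [pi.copy() for pi in p]           # q: working copy for normalization
    for pi in q:
        mi = len(pi)
        perm = rng.permutation(mi)               # random permutation pi
        xi = np.zeros(mi, dtype=int)
        for j, k in enumerate(perm):
            if rng.random() <= pi[k]:
                xi[k] = 1
            elif xi[perm[:j + 1]].sum() == 0:    # Normalization(): no version yet
                rest = perm[j + 1:]
                ne = max(int((pi[rest] > 0).sum()), 1)
                pi[rest] = np.maximum(pi[rest], 1.0 / ne)
        x.append(xi)
    return x

def project(x, df, g_bar, feasible):
    """Projection per rule (5): repeatedly drop the version r maximizing the
    released resources g_bar per unit of objective decrease df (both positive
    by the monotonic property), keeping at least one version per module."""
    while not feasible(x):
        cand = [(g_bar(x, i, k) / df(x, i, k), i, k)
                for i, xi in enumerate(x) for k, v in enumerate(xi)
                if v == 1 and xi.sum() > 1]
        if not cand:
            return None                          # footnote 1: objective set to 0
        _, i, k = max(cand)
        x[i][k] = 0
    return x
```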


Remark 3 We implemented a "smoothed" version of the updating rule for CE, as in De Boer et al. (2005), in such a way that, at each iteration t:

p^{t+1}_{ij} = \alpha \, p^t_{ij} + (1 - \alpha) \, \frac{\sum_{l=(1-\rho)N+1}^{N} X^{(l)}_{ij}}{\rho N}, \qquad (6)

where X^(l) is the lth point within the Cross Entropy population sorted in ascending order with respect to the objective function value and 0.1 ≤ α ≤ 0.2. The smoothed update prevents the algorithm from converging too fast to a degenerate vector, hence fostering a more thorough exploration of the search space.

To apply the same algorithm to (CSRP), we tested two different types of encoding. We first tested a binary encoding, where each discrete variable x_j was coded via a binary vector of size ⌊log_2 ub_j⌋ + 1, with ub_j indicating the maximum value taken by x_j. The second type of encoding, the decimal encoding, assigned a binary variable to each possible value of variable x_j, hence requiring a binary vector of length ub_j. While the first encoding presents the advantage of reducing the length of the binary vector and, consequently, the search space of the CE scheme, we observed that the second type of encoding proved to be superior in terms of convergence, in the sense that the binary encoding might lead to sub-optimal solutions, especially for small- and medium-size instances. However, when we tested both types of encoding on large instances for a fixed amount of computational time, we observed that with the binary encoding the algorithm could perform a higher number of CE iterations compared to what we obtained with the decimal encoding, hence exploring a larger portion of the search space. Overall, we observed that, while the two types of encoding produced comparable results on very large scale instances, the decimal encoding reached better results on small- and medium-size instances. For this reason, in this paper we only present results obtained using the decimal encoding.

Furthermore, when applying the CE-based algorithm with the decimal encoding to (CSRP), we had to take into account an additional constraint for each variable x_j: if a binary vector of length ub_j is required to codify all the possible values of variable x_j, we need to make sure that only one component of such a vector is set to 1, while the remaining components are set to 0. This type of constraint is enforced within the CE scheme by means of a "roulette" scheme. In the following, let us indicate with X_j = (x_{j1}, ..., x_{jm}) the binary vector used to codify variable x_j and with p_j = (p_{j1}, ..., p_{jm}) the probability value associated with each component, where m = ub_j. The "roulette" mechanism was implemented in the following way:

1: generate a random number r ∈ [0, 1]
2: set k ← 1
3: set cum ← p_{jk}
4: while cum < r do
5:   k ← k + 1
6:   cum ← cum + p_{jk}
7: end while
8: set x_{jk} ← 1

Furthermore, we slightly changed the rule of the projection scheme: rather than setting a variable to zero, we just reduce by one unit the value of a variable x_r, provided that the set covering-type constraint is still enforced. In terms of the decimal encoding, this means that, if variable x_r had a value of k ≥ 2, that is, the kth component assigned to module r was set to 1, we set component k to 0 and component k − 1 to 1.
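A sketch of the roulette step for one decimal-encoded variable x_j; the guard against floating-point rounding on the last component is our addition.

```python
import numpy as np

def roulette(p_j, rng):
    """Pick the single active component of the decimal encoding of x_j:
    index k is chosen with probability p_j[k] (p_j sums to 1), and exactly
    one component of the returned 0/1 vector is set."""
    r = rng.random()
    cum, k = p_j[0], 0
    while cum < r and k < len(p_j) - 1:   # walk the cumulative distribution
        k += 1
        cum += p_j[k]
    x_j = np.zeros(len(p_j), dtype=int)
    x_j[k] = 1                            # x_{jk} <- 1
    return x_j

# e.g. p_j = np.array([0.2, 0.5, 0.3]) encodes P(x_j = 1), P(x_j = 2), P(x_j = 3)
```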

4.4 Experiment plan

In order to draw sound conclusions about the effectiveness and robustness of the proposed scheme, we present the results of the algorithm on some problems drawn from the literature on reliability optimization as well as on randomly generated problems of larger size. All the computational tests presented in this section have been carried out on a Pentium 4 1.2 GHz Linux workstation with 256 MB of RAM. The CE algorithm as well as the algorithms taken from the literature have been implemented in C++ and compiled with the GNU g++ compiler using the -O option.²

The objective of the experiment plan is twofold:
1. on the one hand, we compared the performance of the algorithm, in terms of solution quality, computational effort and robustness, with other approaches presented in the literature. The proposed algorithm was tested on a pool of problems and measured against the best known heuristic schemes from the literature. Furthermore, in order to evaluate the solution quality, we tested the results of our algorithm against those of Ilog Cplex 10.0 (2007);
2. on the other hand, we performed a descriptive study of the algorithm, to better understand the effect of some of the key parameters of the heuristic scheme on the solution quality. With this aim, we designed an experiment that uses the "Response Surface Methodology" to find the optimal combination of a set of factor levels for any given instance size.

In the next section, we first present the results of the algorithm on some well-known instances drawn from the literature on (SRP). Next, we illustrate how the fine tuning of the algorithm can be carried out, to identify good values of two algorithmic parameters, namely the Cross Entropy population size (N) and the Cross Entropy quantile (ρ). In Sect. 4.6 we present the results of the algorithm on the (CSRP), both on problems from the literature and on randomly generated large scale instances.

4.5 Results on software reliability problems

The benchmark instances from the literature on (SRP) are of small size and could easily be solved via enumeration. However, we included them in this section for the sake of clarity, since they make the behavior of the proposed algorithm easier to understand.

2 All the results presented in this section, along with the software codes and the instances used, are publicly available at http://www.ccm.itesm.mx/goal/.


Table 1 Reliability and cost for Berman and Ashrafi (1993) Problem P2

Version   Module 1      Module 2      Module 3
          rj    cj      rj    cj      rj    cj
1         0.90  3       0.95  3       0.98  3
2         0.80  1       0.80  2       0.94  2
3         0.85  2       0.70  1       -     -

Table 2 Iterations of the proposed algorithm on Berman and Ashrafi (1993) problem

Iteration   p1     p2     p3     p4     p5     p6     p7     p8
0           0.33   0.33   0.33   0.33   0.33   0.33   0.50   0.50
1           0.20   1.00   0.80   1.00   0.00   0.80   1.00   0.00
2           0.00   1.00   1.00   1.00   0.00   1.00   1.00   0.00

We then devised a scheme to generate large scale instances of (SRP). We implemented the best-known approach for this class of problems and compared its performance with that of the proposed algorithm. Finally, we tested both algorithms against the Cplex solver in order to provide a measure of the solution quality.

Berman and Ashrafi (1993) present four different models, corresponding to four system configurations. We solved problems P1 to P4 with the proposed algorithm and always found the global optimal solution in less than 0.01 CPU seconds. Here we provide an iteration-by-iteration analysis of the algorithm's behavior when solving problem P2. Table 1 reports the parameters of the problem, as in Berman and Ashrafi (1993). The system is made up of three modules arranged in series, and eight versions to be distributed among the modules. There is only one knapsack-type constraint, with a total budget of 10. In the table, r_j and c_j indicate the reliability value and the resource usage of component j, respectively. Table 2 reports the results of the CE algorithm on problem P2, using N = 10 and ρ = 0.2. The final solution p² corresponds to the global optimal solution x2 = x3 = x4 = x6 = x7 = 1, with a reliability value of 0.9363. Computational time is less than 0.01 CPU seconds. To test problem P1, we force the algorithm to include exactly one version per module. The algorithm finds the global optimum solution x2 = x4 = x8 = 1, with a reliability value of 0.7144, in less than 0.01 CPU seconds. Berman and Ashrafi (1993) also present two more cases with a slight variation in the objective function structure. These models, called problems P3 and P4, deal with several programs, each performing a specific function. The proposed algorithm reaches the global optimal solution in each of these two problems, P3 and P4, in less than 0.01 CPU seconds and within a maximum of 4 iterations (with N = 10 and ρ = 0.2). In order to further test the effectiveness of the algorithm, with special emphasis on very large instances of (SRP), we devised a mechanism to generate random instances.
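Before moving on, the reported optimum for P2 can be checked directly against Table 1, assuming versions are indexed x1–x3 (module 1), x4–x6 (module 2) and x7–x8 (module 3), and that the series system works when every module has at least one working version:

```python
# Reliability/cost data from Table 1; module 3 has only two versions.
r = [[0.90, 0.80, 0.85], [0.95, 0.80, 0.70], [0.98, 0.94]]
c = [[3, 1, 2], [3, 2, 1], [3, 2]]
chosen = [[1, 2], [0, 2], [0]]   # x2 = x3 = x4 = x6 = x7 = 1 (0-based per module)

Rs, cost = 1.0, 0
for i, idx in enumerate(chosen):
    fail = 1.0
    for j in idx:
        fail *= 1.0 - r[i][j]    # all chosen versions of module i must fail
        cost += c[i][j]
    Rs *= 1.0 - fail

print(round(Rs, 4), cost)        # 0.9363 10 -- matches the text and the budget
```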


It takes as input the number of modules in the system (m), the maximum number of versions per module (v), and the number of resource constraints (K), and defines m_i, the total number of versions within each module i; r_ij, the reliability value of each version j within module i; g_ijk, the resource usage of each version j within module i in constraint k; and b_k, the right hand side value of each constraint. The random values are uniformly generated as follows:

− m_i ∈ [2, v]
− r_{ij} = 0.80 + (0.99 − 0.80) \, \omega_{ij}
− g_{ijk} = 1 + 4 \, \epsilon_{ijk}
− b_k = \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m_i} g_{ijk} + \max\{3 \times \min_j\{g_{ijk}\}, \; 2 \times \max_j\{g_{ijk}\}\}

for i = 1, ..., m, j = 1, ..., m_i and k = 1, ..., K, where ω_ij and ε_ijk are random numbers in [0, 1].
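A sketch of a generator implementing the sampling rules above; the function name is ours, and we read the min/max in the definition of b_k as being taken over all versions of all modules for a fixed constraint k.

```python
import numpy as np

def generate_srp_instance(m, v, K, rng=None):
    """Random (SRP) instance following the uniform sampling rules above."""
    rng = rng or np.random.default_rng()
    mi = rng.integers(2, v + 1, size=m)                 # m_i in [2, v]
    r = [0.80 + 0.19 * rng.random(n) for n in mi]       # r_ij
    g = [[1.0 + 4.0 * rng.random(n) for n in mi]        # g_ijk, one list per k
         for _ in range(K)]
    # b_k: min/max over all versions of all modules (our reading)
    b = [0.5 * sum(gi.sum() for gi in gk) +
         max(3.0 * min(gi.min() for gi in gk),
             2.0 * max(gi.max() for gi in gk))
         for gk in g]
    return mi, r, g, b
```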

Along with the random generation process, we used the Ilog Cplex 10.0 solver to solve the problems to optimality. The heuristic solution obtained by the proposed algorithm was then compared with the global optimum. To perform these tests, we chose problems of type P2, where only one function is performed within the software system and redundancy is allowed. The random instances generated are of medium and large scale. We randomly generated and solved problems with the following combinations of m and v:

− m ∈ M = {50, 70, 100, 150, 200}
− v ∈ V = {20, 40, 50, 80}.

In addition, for each combination of parameters (m, v), we randomly generated ten different instances. This implies the generation of 10 × 5 × 4 = 200 different instances. The study on randomly generated instances had a two-fold objective: on the one hand, we wanted to test the effectiveness of the algorithm in terms of solution quality, by comparing the solution obtained by the algorithm on each instance with the global optimal solution given by Cplex as well as with our implementation of Berman and Ashrafi's (1994) dynamic algorithm, in terms of CPU time and solution quality. On the other hand, we aimed at testing the effect that two algorithmic parameters, N and ρ, have on the solution quality. We first present the steps followed in the calibration of the proposed algorithm, by illustrating how a scheme inspired by the "Response Surface Methodology" can be used to find the optimal values of the parameters N and ρ. We first investigated the statistical significance of both parameters on the solution quality and, after a preliminary analysis, reached the conclusion that the parameter ρ is not statistically significant with respect to the solution quality, provided that it is chosen in the interval [0.1, 0.2]. We next fixed 7 different levels for factor N, namely {300, 500, 750, 1000, 1500, 2000, 3000}, chose ρ = 0.1, and ran a 7 × 1 factorial design. We ran each instance with the different values of N, with a fixed maximum CPU time of 300 seconds and a fixed maximum number of iterations of 100. Using this method, we collected 200 × 7 observations, which were used to generate the final regression model describing what the value of N should be for a given instance size in order to obtain the best solution with the proposed algorithm, i.e., N = f(m, v), where m is the number of modules in the system and v is the number of versions per module.


Table 3 Regression coefficients and p-values for the SRP

      Estimate     Std. error   t value   Pr(>|t|)
β0    -1.443e+03   6.298e+01    -2.291    0.000553
β1     7.818e+01   1.346e+01     5.808    1.44e-08
β2    -5.270e-01   1.175e-01    -4.484    1.00e-05
β3     1.110e-03   3.121e-04     3.558    0.000427
β4     1.034e+02   5.795e+01     1.785    0.000162
β5    -8.587e+00   2.371e+00    -3.622    0.000336
β6     1.320e-01   2.988e-02     4.416    1.35e-05

Fig. 1 Response surface of cross entropy population—N = f (m, v)

The model tested is:

N = \beta_0 + \beta_1 m + \beta_2 m^2 + \beta_3 m^3 + \beta_4 v + \beta_5 v^2 + \beta_6 v^3. \qquad (7)

Table 3 and Fig. 1 present the fitted surface as well as the regression statistics for the model. As Table 3 shows, the coefficients of the polynomial model are significant up to the third order and the model itself is statistically significant, with a p-value of less than 2.2e-16. Analyzing Fig. 1, it is possible to draw interesting conclusions. First, it is worth noting that the surface has been drawn in such a way that the minimum value of N required to reach the best possible solution is identified.


This means that, for medium-size instances, such as those on the left side of the figure, the algorithm reached the global optimum for every tested value of N. In such cases, we selected the value of N that required the minimum amount of computational time. This observation explains the shape of the surface on the left side of the figure, where the value of N seems to drop drastically. On the other hand, we can observe that the value of N required to reach the best solutions grows with the number of modules, until a threshold instance size is reached. Beyond this threshold, the value of N slightly decreases, to account for the fact that a fixed CPU time has been set for each instance: if larger values of N were chosen, the algorithm would not terminate within the allotted time and, hence, would return inferior quality solutions.

We next proceeded to validate the statistical model and to measure the effectiveness of the algorithm in terms of solution quality. We randomly generated 200 new instances under the aforementioned scheme and grouped them into 5 different classes, labeled R1 to R5. To provide an indication of the size of the search space, we also computed the average number of versions per module. Table 4 summarizes the most relevant results of this part of the experiment. For each instance, the value of N is determined by (7) with the coefficients reported in Table 3. In Table 4, the first column presents the instance class name, while columns two and three indicate how many modules and, on average, how many versions per module are in the system. The fourth column presents the computational time of the Cplex solver, measured as CPU time. Under the column B&A we present the results of our implementation of Berman and Ashrafi's (1994) dynamic algorithm, in terms of CPU time and accuracy of the solution (a value of NA indicates that the algorithm did not terminate within the maximum allotted time, fixed at 1500 seconds). Finally, the last four columns, under C&C, present the results of the proposed algorithm, in terms of CPU time, solution quality, number of iterations required, and standard deviation of the solution value over 10 different runs. It is worth noting that all the time measures refer to the total running time of the algorithm and that our algorithm was stopped after at most 300 CPU seconds. The solution quality γ is computed, for each instance class, as:

\gamma = \frac{1}{T} \sum_{t=1}^{T} \frac{z^*_t - z^{\bullet}_t}{z^{\bullet}_t}, \qquad (8)

where t = 1, 2, ..., T, with T = 10, accounts for the fact that ten instances are generated for each pair (m, v). In addition, z*_t indicates the best solution found on instance t by Cplex. This solution is either the global optimum, if found in less than the maximum CPU time (set to 10,000 seconds), or the best lower bound reached by the scheme within the allotted time. Finally, z•_t is the best solution found on the same instance by the heuristic algorithms: Berman and Ashrafi (1994) when • = B&A, or the proposed algorithm when • = C&C. The results presented in Table 4 provide a measure of the effectiveness of the algorithm in terms of solution quality. It is worth noting that a value of γ = 0 implies that the global optimum has been found for all ten instances of a given class. The proposed algorithm finds the global optimum for medium size instances and near-optimal solutions for large scale problems. The gap between the heuristic solution of the algorithm and the global optimum (or the best lower bound) is less than or equal to 1%.
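For concreteness, (8) amounts to the following computation over the T instances of a class; the reliability values below are hypothetical.

```python
def gap(z_star, z_heur):
    """Solution quality per (8): mean relative gap over the T instances of a
    class; z_star from Cplex, z_heur from the heuristic being evaluated."""
    T = len(z_star)
    return sum((zs - zh) / zh for zs, zh in zip(z_star, z_heur)) / T

# hypothetical values for T = 3 instances of one class:
print(gap([0.981, 0.974, 0.990], [0.980, 0.974, 0.988]))  # ~0.001
```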


Table 4 Comparison of algorithm performances for the SRP

Problem   Problem size     CPLEX     B&A               C&C
class     m     avg. mi    Time^a    Time^a   γ^b      Time^a   γ^b     No. iters   Std. dev.
R1        50    11         28        74       0        1        0       28          0.0000
                26         44        247      0        3        0       33          0.0006
                37         69        398      0        5        0       36          0.0007
                43         356       NA       NA       9        0.007   43          0.0005
R2        70    9          264       801      0        15       0       30          0.0008
                19         521       973      0        19       0       31          0.0010
                33         783       NA       NA       26       0       37          0.0009
                38         1493      NA       NA       25       0.006   35          0.0011
R3        100   10         598       1311     0        32       0       33          0.0011
                22         853       NA       NA       47       0       51          0.0010
                38         1284      NA       NA       57       0.008   56          0.0008
                45         1721      NA       NA       64       0.009   58          0.0015
R4        150   8          496       NA       NA       55       0       35          0.0010
                26         997       NA       NA       51       0.005   43          0.0011
                32         1359      NA       NA       71       0       51          0.0012
                40         5789      NA       NA       97       0.006   54          0.0009
R5        200   12         1227      NA       NA       64       0       47          0.0015
                20         3595      NA       NA       102      0       51          0.0013
                35         10000     NA       NA       144      0.09    68          0.0014
                41         10000     NA       NA       193      0.006   73          0.0010

a CPU time on a Pentium 4 1.2 GHz Linux Workstation
b Computed as in (8)

In addition, the low values of standard deviation testify to the robustness of the proposed scheme. Furthermore, it is possible to conclude that the algorithm outperforms Berman and Ashrafi's (1994) algorithm, both in terms of solution quality and computational time.

4.6 Results on complex systems reliability problems

In a fashion similar to what was presented in Sect. 4.5, we carried out experiments on well-known problems from the literature as well as on randomly generated large scale instances of the complex system configuration problem. The benchmark set is made up of 33 instances originally proposed by Nakawara and Miyazaki (1981), but also used by a number of researchers, e.g., Coit and Smith (1996a, 1996b), Hsieh (2002), Liang and Smith (2004), and You and Chen (2005). The best known results have been reported by You and Chen (2005) and, consequently, we will compare the results of the proposed algorithm with those reported by these authors.


Table 5 Parameters of benchmark problems as in You and Chen (2005)

Subsystem   Choice 1          Choice 2          Choice 3          Choice 4
l           r1l   c1l   w1l   r2l   c2l   w2l   r3l   c3l   w3l   r4l   c4l   w4l
1           0.90  1     3     0.93  1     4     0.91  2     2     0.95  2     5
2           0.95  2     8     0.94  1     10    0.93  1     9     -     -     -
3           0.85  2     7     0.90  3     5     0.87  1     6     0.92  4     4
4           0.83  3     5     0.87  4     6     0.85  5     4     -     -     -
5           0.94  2     4     0.93  2     3     0.95  3     5     -     -     -
6           0.99  3     5     0.98  3     4     0.97  2     5     0.96  2     4
7           0.91  4     7     0.92  4     8     0.94  5     9     -     -     -
8           0.81  3     4     0.90  5     7     0.91  6     6     -     -     -
9           0.97  2     8     0.99  3     9     0.96  4     7     0.91  3     8
10          0.83  4     6     0.85  4     5     0.90  5     6     -     -     -
11          0.94  3     5     0.95  4     6     0.96  5     6     -     -     -
12          0.79  2     4     0.82  3     5     0.85  4     6     0.90  5     7
13          0.98  2     5     0.99  3     5     0.97  2     6     -     -     -
14          0.90  4     6     0.92  4     7     0.95  5     6     0.99  6     9

The 33 variations of the (CSRP) are series-parallel systems with two knapsack-type constraints, representing weight and cost, and 14 different links. In Table 5 we present the problem parameters, as provided in You and Chen (2005). For each link, up to four different types of components can be chosen, each one with its own characteristics in terms of reliability, cost and weight. The first column indicates the link, or subsystem, number; columns 2–4 give specific information about type 1 components, where r_1l is the reliability of component 1 in link l, c_1l gives the cost of the same component and, finally, w_1l indicates the weight. The meaning of the remaining columns is the same, except that they refer to types 2, 3, and 4, respectively. What changes among the instances is the total cost C and the total weight W available, varying in the ranges W ∈ [159, 191] and C ∈ [110, 133].

In Table 6, we report our results on the benchmark instances along with the best known results taken from the literature, as in You and Chen (2005). We ran the algorithm 10 times on each instance. In Table 6, the first column gives the instance number, the second and third columns give the total weight W and cost C allowed, the fourth column presents the best known solution from the literature (You and Chen 2005) and the last column gives the best result obtained by the algorithm. While in Table 6 we only report the best result obtained by our algorithm, Fig. 2 accounts for the variability of these results. Both the table and the figure allow conclusions to be drawn about the effectiveness and the robustness of the proposed algorithm: out of 33 benchmark instances, the algorithm finds a new best solution 21 times, while matching the best known solution in the remaining cases. Along the same lines, Fig. 2 provides a measure of the robustness of the algorithm. The range of performance over 10 different runs is similar to the variability reported in You and Chen (2005) and in Liang and Smith (2004), with an average standard deviation of 0.000678.
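For reference, the objective evaluated on these benchmark instances can be sketched as follows, assuming the standard series-parallel structure in which link l works when at least one of its (possibly replicated) components works; the overall system reliability is the product of the per-link terms over the 14 links.

```python
def subsystem_reliability(x, r):
    """Parallel block of one link: 1 - prod_j (1 - r_j)**x_j, assuming the
    x_j redundant copies of each component choice fail independently."""
    fail = 1.0
    for xj, rj in zip(x, r):
        fail *= (1.0 - rj) ** xj
    return 1.0 - fail

# link 1 of Table 5 with one copy of choice 1 and two of choice 3 (made-up x):
print(subsystem_reliability([1, 0, 2, 0], [0.90, 0.93, 0.91, 0.95]))  # 0.99919
```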

Table 6 Results on 33 benchmark instances from You and Chen (2005)

No.   W     C     Best known   Best found
1     191   130   0.98681      0.98804
2     190   130   0.98642      0.98668
3     189   130   0.98592      0.98635
4     188   130   0.98538      0.98700
5     187   130   0.98469      0.98590
6     186   129   0.98418      0.98550
7     185   130   0.98350      0.98517
8     184   130   0.98299      0.98299
9     183   129   0.98226      0.98491
10    182   130   0.98152      0.98343
11    181   129   0.98103      0.98273
12    180   128   0.98029      0.98029
13    179   126   0.97950      0.97950
14    178   125   0.97840      0.97840
15    177   126   0.97760      0.97838
16    176   124   0.97669      0.97746
17    175   125   0.97571      0.97618
18    174   123   0.97493      0.97622
19    173   122   0.97383      0.97383
20    172   123   0.97303      0.97303
21    171   122   0.97193      0.97309
22    170   120   0.97076      0.97177
23    169   121   0.96929      0.97033
24    168   119   0.96813      0.96813
25    167   118   0.96634      0.96768
26    166   116   0.96504      0.96504
27    165   117   0.96371      0.96579
28    164   115   0.96242      0.96242
29    163   114   0.96064      0.96064
30    162   115   0.95919      0.95919
31    161   113   0.95803      0.95803
32    160   112   0.95571      0.95656
33    159   110   0.95456      0.95602

Finally, in order to test the effectiveness of the algorithm on very large scale instances, we randomly generated instances of the complex system of Fig. 3 with up to 200 components and up to five knapsack-type constraints. For each system size, we generated ten instances using the random scheme described in Sect. 4.5 to define r_j, the reliability of each component, g_jk, the resource usage of each component in each constraint, and b_k, the resource availability for each constraint.


Fig. 2 Variability of results over 20 runs for each benchmark instance

Fig. 3 Complex system configuration: 5-link system

The number of components in each one of the five links of the system is randomly chosen as well, by first allocating a component to each link and then recursively and randomly assigning each of the remaining n − 5 components to a link indicated by a random number generated in the range [1, 5]. In order to provide an indication of the size of the search space of each problem class, we report a measure of the average upper bound of each variable, ū. We also ran Cplex to find the global optimal solution for each one of the problems. In addition, in order to draw conclusions about the solution quality of our algorithm, we implemented two well-known heuristic algorithms presented in the literature: we coded Aggarwal's (1976) algorithm as well as Shi's (1987) algorithm. The parameters of the CE algorithm were calibrated in a fashion similar to that presented in Sect. 4.5. After running a set of training tests, we defined the statistical model that outputs the "best" values of ρ and N. While ρ was statistically not significant in the range [0.1, 0.2], we found a model similar to the one illustrated in the previous section for the value of N. Hence, the value of N was fixed on an instance basis, considering the number of components within the system.
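Since the system of Fig. 3 is not series-parallel, f(x) must be evaluated on the network structure itself. A minimal sketch, assuming Fig. 3 is the classic five-link bridge (an assumption on our part) and using exhaustive state enumeration over its minimal path sets:

```python
from itertools import product

# Minimal path sets of the classic 5-component bridge (assumed topology):
# links 1-2, 3-4, 1-5-4, 3-5-2, with 0-based labels below.
PATHS = [(0, 1), (2, 3), (0, 4, 3), (2, 4, 1)]

def bridge_reliability(R):
    """Exact system reliability by enumerating the 2**5 link states; R[i] is
    the (redundancy-boosted) reliability of link i, e.g. 1 - (1 - r_i)**x_i."""
    total = 0.0
    for state in product((0, 1), repeat=5):
        if any(all(state[i] for i in path) for path in PATHS):
            pr = 1.0
            for up, Ri in zip(state, R):
                pr *= Ri if up else 1.0 - Ri
            total += pr
    return total

print(bridge_reliability([0.9] * 5))  # 0.97848, the textbook bridge value
```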


Table 7 Results of Aggarwal's, Shi's and Caserta-Cabo's algorithms on large scale randomly generated instances

n     ū    CPLEX     Aggarwal         Shi              Caserta-Cabo
           Time^a    Time^a   γ^b     Time^a   γ^b     Time^a   γ^b     No. iters   Std. dev.
10    15   53        0.01     0.0     0.01     0.0     0.01     0.0     14          0.0001
20    27   421       0.2      0.02    0.3      0.0     0.1      0.0     17          0.0007
50    46   1349      1.2      0.04    1.5      0.01    0.6      0.0     21          0.0011
100   51   10000     8.6      0.05    11.3     0.02    2.6      0.02    19          0.0015
200   48   10000     22.4     0.12    36.1     0.05    9.4      0.0     28          0.0012

a CPU time on a Pentium 4 1.2 GHz Linux Workstation
b Computed as in (8)

Table 7 presents the computational results. The first column provides the number of components of the system, n; the second column gives an indication of the dimension of the search space, by giving the average value ū of the upper bound of the variables, dictated by the constraints of the problem; the third column gives the CPU time required by the Cplex solver to reach the global optimum (maximum running time allowed of 10,000 CPU seconds); the fourth and fifth columns present the results of Aggarwal's (1976) algorithm; the sixth and seventh columns give the results of Shi's (1987) algorithm; finally, the last four columns illustrate the results of the proposed algorithm. The quality of the solution found by each algorithm is measured using (8). The results presented in Table 7 show that the algorithm is quite effective and robust in solving the class of problems presented in this section, both in terms of solution quality and computational time, especially when dealing with very large scale problems. Just as in the case of Table 4, a value of γ equal to zero indicates that the global optimum has been found for each one of the ten instances of a given class. From the analysis of Table 7 we can see that our algorithm finds the global optimum even for very large scale instances, with up to 200 components in the system. Finally, Fig. 4 provides a measure of the robustness of the algorithm when tested on very large scale instances. Both Fig. 4 and the last column of Table 7 support the conclusion that the algorithm is robust, with very low variability, and that an average run of the proposed algorithm provides good quality solutions.

5 Conclusion

We have presented a Cross Entropy-based algorithm for reliability optimization problems. We exploit a basic mathematical property of the problem, namely that the objective function as well as the constraint functions are non-decreasing in x. The proposed scheme provides some insight on how to adapt the CE paradigm in order to deal with knapsack-type constraints as well as set covering-type constraints. In addition, by employing a mechanism inspired by the "Response Surface Methodology", which helped us determine good values of the algorithmic parameters for any given instance size, we have contributed to the understanding of how to fine-tune a CE-based algorithm.


Fig. 4 Variability of results over 10 runs for each randomly generated instance type

We tested the robustness of the algorithm on two different classes of problems, both within the reliability realm. In both cases, the algorithm proved to be very effective in terms of solution quality as well as computational time. The proposed algorithm is especially suited for very large scale instances, when exact approaches are doomed to fail.

References

Aggarwal, K.K.: Redundancy optimization in general systems. IEEE Trans. Reliab. R-25, 330–332 (1976)
Alon, G., Kroese, D., Raviv, T., Rubinstein, R.Y.: Application of the cross-entropy method to the buffer allocation problem in a simulation-based environment. Ann. Oper. Res. 134(1), 137–151 (2005)
Belli, F., Jedrzejowicz, P.: An approach to the reliability optimization of software with redundancy. IEEE Trans. Softw. Eng. 17(3), 310–312 (1991)
Berman, O., Ashrafi, N.: Optimization models for reliability of modular software systems. IEEE Trans. Softw. Eng. 19(11), 1119–1123 (1993)
Berman, O., Ashrafi, N.: Optimal design of large software-systems using N-version programming. IEEE Trans. Reliab. 43(2), 344 (1994)
Blum, C., Roli, A.: Metaheuristics in combinatorial optimization: overview and conceptual comparison. ACM Comput. Surv. 35(3), 268–308 (2003)
Chepuri, K., Homem De Mello, T.: Solving the vehicle routing problem with stochastic demands using the cross-entropy method. Ann. Oper. Res. 134(1), 153–181 (2005)
Coit, D.W., Smith, A.E.: Penalty guided genetic search for reliability design optimization. Comput. Ind. Eng. 30(4), 895–904 (1996a)
Coit, D.W., Smith, A.E.: Reliability optimization of series-parallel systems using a genetic algorithm. IEEE Trans. Reliab. 45(2), 254–266 (1996b)
Crama, Y.: Recognition problems for special classes of polynomials in 0–1 variables. Math. Program. 44, 139–155 (1987)
De Boer, P., Kroese, D.P., Mannor, S., Rubinstein, R.Y.: A tutorial on the cross-entropy method. Ann. Oper. Res. 134(1), 19–67 (2005)
Dorigo, M., Di Caro, G.: The ant colony optimization meta-heuristic. In: New Ideas in Optimization, pp. 11–32. McGraw-Hill, Cambridge (1999)
Feo, T.A., Resende, G.C.: Greedy randomized adaptive search procedures. J. Glob. Optim. 6, 109–133 (1995)
Garey, M.R., Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, New York (1979)
Glover, F., Laguna, M.: Tabu Search. Kluwer Academic, Dordrecht (1997)
Hsieh, Y.C.: A linear approximation for redundant reliability problems with multiple component choices. Comput. Ind. Eng. 44, 91–103 (2002)
Ilog Cplex 10.0: http://www.ilog.com. ILOG Cplex Component Libraries (2007)
Keith, J., Kroese, D.P.: SABRES: sequence alignment by rare event simulation. In: Proceedings of the 2002 Winter Simulation Conference, pp. 320–327, San Diego (2002)
Kim, J.H., Yum, B.J.: A heuristic method for solving redundancy optimization problem in complex systems. IEEE Trans. Reliab. 42(4), 572–578 (1993)
Kohda, T., Inoue, K.: A reliability optimization method for complex systems with the criterion of local optimality. IEEE Trans. Reliab. R-31(1), 109–111 (1982)
Kuo, W., Prasad, V.R.: An annotated overview of system-reliability optimization. IEEE Trans. Reliab. 49(2), 176–187 (2000)
Kuo, W., Prasad, V.R., Tillman, F.A., Hwang, C.: Optimal Reliability Design. Cambridge University Press, Cambridge (2001)
Liang, Y.C., Smith, A.E.: An ant colony optimization algorithm for the redundancy allocation problem (RAP). IEEE Trans. Reliab. 53(3), 417–423 (2004)
Nakawara, Y., Miyazaki, S.: Surrogate constraints algorithm for reliability optimization problems with two constraints. IEEE Trans. Reliab. R-30(2), 175–180 (1981)
Ravi, V., Murty, B.S.N., Reddy, P.: Nonequilibrium simulated annealing algorithm applied to reliability optimization of complex systems. IEEE Trans. Reliab. 46(2), 233–239 (1997)
Rubinstein, R.Y.: Optimization of computer simulation models with rare events. Eur. J. Oper. Res. 99, 89–112 (1997)
Rubinstein, R.Y.: The cross-entropy method for combinatorial and continuous optimization. Methodol. Comput. Appl. Probab. 2, 127–190 (1999)
Rubinstein, R.Y.: Combinatorial optimization, cross-entropy, ants and rare events. In: Uryasev, S., Pardalos, P.M. (eds.) Stochastic Optimization: Algorithms and Applications, pp. 304–358. Kluwer, New York (2001)
Rubinstein, R.Y.: The cross-entropy method and rare-events for maximal cut and bipartition problems. ACM Trans. Model. Comput. Simul. 12(1), 27–53 (2002)
Rubinstein, R.Y., Kroese, D.P.: The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte Carlo Simulation, and Machine Learning. Springer, Berlin (2004)
Shi, D.H.: A new heuristic algorithm for constrained redundancy-optimization in complex systems. IEEE Trans. Reliab. R-36(5), 621–623 (1987)
You, P.S., Chen, T.C.: An efficient heuristic for series-parallel redundant reliability problems. Comput. Oper. Res. 32(8), 2117–2127 (2005)
