Kawaljeet Singh et. al. / International Journal of Engineering Science and Technology Vol. 2(10), 2010, 6022-6030

EMPIRICAL INVESTIGATION OF METAHEURISTICS APPLICATION IN INDUSTRIAL ENGINEERING

Kawaljeet Singh1
University Computer Centre, Punjabi University, Patiala 147002 (India)

Leena Jain2*
Regional Institute of Management and Technology, Mandi Gobindgarh 147301 (India)
Ph.D. Research Scholar, University College of Engineering, Punjabi University, Patiala 147002 (India)

Abstract: Metaheuristics are renowned for providing very efficient solutions to many of today's combinatorial optimization problems in engineering, industrial, economic and scientific domains such as transportation, bioinformatics, logistics and business. Metaheuristics present themselves as a highly promising choice for near-optimal solutions in reasonable time where exact approaches are not applicable due to extremely large running times or other limitations. A metaheuristic is a master strategy that guides and modifies other heuristics to produce solutions beyond those that are normally generated in a quest for local optimality. This paper highlights various contemporary real-life applications of metaheuristics to industrial engineering and NP-hard problems reported in the literature.

Keywords: Metaheuristic Methods, Industrial and Engineering Applications, Genetic Algorithm, Ant Colony Optimization

1. Introduction

Metaheuristics can solve combinatorial optimization problems, such as cutting and packing, routing, network design, assignment, scheduling, or timetabling problems; continuous parameter optimization problems; or the optimization of non-linear structures such as neural networks or tree structures as they often appear in computational intelligence. Evolutionary Algorithms (EAs), in particular, comprise a variety of related algorithms that are based on the processes of evolution in nature. In contrast to several other metaheuristics, they work on a set of concurrent solutions and can easily be parallelized.
Especially the combination of evolutionary algorithms with problem-specific heuristics, local-search-based techniques, approximation methods and exact techniques often makes highly efficient optimization algorithms possible for many areas of application. Metaheuristics are generally applied to problems for which there is no satisfactory problem-specific algorithm or heuristic, or when it is not practical to implement such a method. Most commonly used metaheuristics are targeted at combinatorial optimization problems, but can of course handle any problem that can be recast in that form, such as solving Boolean equations. In spite of overly optimistic claims by some of their advocates, metaheuristics are not a panacea, and their indiscriminate use is often much less efficient than even the crudest problem-specific heuristic, by several orders of magnitude.

Main features of a good metaheuristic [1] [2] [3] [4] [5] [6] [7]

 Population intrinsic parallelism
 Indirect coding
 Cooperation-adapted crossover
 Local search in solution space
 Diversity needs to be controlled
 Easy-to-implement restarts
 Randomness

ISSN: 0975-5462

6022

Commonly used metaheuristic methods:

[1] TS: Tabu search [Glover (1989), Glover (1990)]
[2] SA: Simulated annealing [Kirkpatrick (1983)]
[3] TA: Threshold accepting [Dueck, Scheuer (1990), Moscato and Fontanari (1990)]
[4] VNS: Variable neighborhood search [Hansen, Mladenovic (1997), Hansen, Mladenovic (1999), Hansen, Mladenovic (2002)]
[5] ILS: Iterated local search [Stützle (1999), Lourenço et al. (2001)]
[6] GA: Genetic algorithm [Holland (1975), Goldberg (1989)]
[7] HS: Harmony search [Lee and Geem (2004), Geem, Saka (2007)]
[8] MA: Memetic algorithm [Moscato (1989)]
[9] ACO: Ant colony optimization [Dorigo et al. (1991), Colorni et al. (1992a, 1992b)]
[10] SS: Scatter search [Laguna, Glover, Martí (2000)]

Innumerable variants and hybrids of these techniques have been proposed, and many more applications of metaheuristics to specific problems have been reported. This is an active field of research, with a considerable literature, a large community of researchers and users, and a wide range of applications.

1.1. Tabu Search

Tabu Search (TS) is a heuristic method originally proposed by Glover in 1989 for solving various combinatorial problems in the operations research literature. In several cases, the methods described provide solutions very close to optimality and are among the most effective, if not the best, at tackling the difficult problems at hand. TS has proven extremely popular for finding good solutions to the large combinatorial problems encountered in many practical settings. Fred Glover focused his research on allowing local search methods to overcome local optima. The basic principle of TS is to pursue local search whenever it encounters a local optimum, allowing non-improving moves; cycling back to previously visited solutions is prevented by the use of memories, called tabu lists, that record the recent history of the search, a key idea that can be linked to Artificial Intelligence concepts. Industrial applications of the tabu search method are illustrated in Table 1 [Glover and Laguna (1997)].

Table 1: Illustrative tabu search applications

Scheduling  Flow-Time Cell Manufacturing  Heterogeneous Processor Scheduling  Workforce Planning  Classroom Scheduling  Machine Scheduling  Flow Shop Scheduling  Job Shop Scheduling  Sequencing and Batching Telecommunications  Call Routing  Bandwidth Packing  Hub Facility Location  Path Assignment  Network Design for Services  Customer Discount Planning  Synchronous Optical Networks Location and Allocation  Multicommodity Location/Allocation  Quadratic Assignment  Multilevel Generalized Assignment  Lay-Out Planning  Off-Shore Oil Exploration Routing  Vehicle Routing  Capacitated Routing  Time Window Routing  Mixed Fleet Routing

ISSN: 0975-5462

Design  Computer-Aided Design  Fault Tolerant Networks  Transport Network Design  Architectural Space Planning  Diagram Coherency  Fixed Charge Network Design  Irregular Cutting Problems Production, Inventory and Investment  Flexible Manufacturing  Just-in-Time Production  Capacitated MRP  Part Selection  Multi-item Inventory Planning  Volume Discount Acquisition  Fixed Mix Investment Logic and Artificial Intelligence  Probabilistic Logic  Clustering  Pattern Recognition/Classification  Data Integrity  Neural Network |Training and Design Graph Optimization  Graph Partitioning  Graph Coloring  Clique Partitioning  Maximum Clique Problems  Maximum Planner Graphs

6023

Kawaljeet Singh et. al. / International Journal of Engineering Science and Technology Vol. 2(10), 2010, 6022-6030  Traveling Salesman  Traveling Purchaser General Combinational Optimization  Zero-One Programming  Fixed Charge Optimization  Nonconvex Nonlinear Programming  All-or-None Networks  Bilevel Programming

 P-Median Problems Technology  Seismic Inversion  Electrical Power Distribution  Engineering Structural Design  Minimum Volume Ellipsoids  Space Station Construction  Circuit Cell Placement
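The tabu-list idea of Section 1.1 can be sketched in a few lines. The following is a minimal illustrative sketch, not Glover's full method; the toy objective, the +/-1 neighborhood and the tenure value are assumptions invented for the example:

```python
def tabu_search(f, start, neighbors, iters=100, tenure=5):
    """Minimise f while a short-term memory (tabu list) of recently
    visited solutions prevents cycling back to them."""
    current = best = start
    tabu = [start]
    for _ in range(iters):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        # move to the best admissible neighbour, even if it is non-improving
        current = min(candidates, key=f)
        if f(current) < f(best):
            best = current
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)            # forget the oldest entry (recency memory)
    return best

# toy problem: minimise (x - 7)^2 over the integers with +/-1 moves
best = tabu_search(lambda x: (x - 7) ** 2, start=0,
                   neighbors=lambda x: [x - 1, x + 1])
```

Note that the best solution found is tracked separately from the current one, since the tabu list deliberately forces the search through non-improving moves.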

1.2. Simulated Annealing

Simulated annealing (SA), first proposed by Kirkpatrick et al. (1983), is a method suitable for solving large-scale optimization problems. This algorithm, among few other heuristics, is suitable for complicated problems where the global optimum is hidden among many local optima. The idea of the method is an analogy with the way molten metals cool and anneal. If the process is cooled slowly, the system is able to find the minimum energy state; slow cooling is therefore essential for ensuring that a low energy state is achieved. A standard SA procedure begins by generating an initial solution at random. At each stage, a small random change is made in the current solution. The objective function value of the new solution is then calculated and compared with that of the current solution. A move is made to the new solution if it has a better value, or if the probability function implemented in SA has a higher value than a randomly generated number; otherwise, a new solution is generated and evaluated [Kolahan et al. (2007)]. The simulated annealing metaheuristic has been used for the vehicle routing and scheduling problem, the cutting stock problem [Loris (1999)], the process allocation problem [Stella (1992)], binary quadratic problems [Kengo and Hiroyuki (2001)], etc.

1.3. Threshold Accepting

Threshold accepting (TA) is a local search method first described by Dueck and Scheuer [Dueck and Scheuer (1990)] and Moscato and Fontanari [Moscato and Fontanari (1990)]. A classical local search starts with a random feasible solution and then explores its neighbourhood in the solution space by moving (usually randomly) from its current position, accepting a new solution if and only if it improves the objective function. TA overcomes the problem of stopping in local minima by also allowing uphill moves, that is, TA also accepts new solutions which lead to higher objective function values.

Applications of the threshold accepting metaheuristic:

 Vehicle Routing Problem with Time Windows [Bräysy et al. (2005), Bräysy and Gendreau (2002)]
 Optimization Heuristics in Econometrics [Winker (2000)]
 Curriculum-based course timetabling [Geiger (2008)]
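The acceptance rules of Sections 1.2 and 1.3 can be contrasted in code: SA accepts uphill moves probabilistically via the Metropolis criterion, while TA accepts them deterministically below a threshold. This is a minimal sketch on an invented one-dimensional objective; the temperature, cooling factor and threshold values are illustrative assumptions:

```python
import math, random

def accept_sa(delta, temp, rng):
    """SA rule: always accept improvements; accept an uphill move with
    probability exp(-delta / temp), compared against a random number."""
    return delta <= 0 or rng.random() < math.exp(-delta / temp)

def accept_ta(delta, threshold):
    """TA rule: deterministically accept any move that worsens the
    objective by less than the current threshold (no randomness)."""
    return delta < threshold

def anneal(f, start, neighbor, rng, temp=10.0, cooling=0.95, iters=500):
    """Standard SA loop: small random change, compare objectives, move or retry."""
    current = best = start
    for _ in range(iters):
        cand = neighbor(current, rng)
        if accept_sa(f(cand) - f(current), temp, rng):
            current = cand
            if f(current) < f(best):
                best = current
        temp *= cooling        # slow cooling toward a low-energy state
    return best

rng = random.Random(0)
best = anneal(lambda x: (x - 3.0) ** 2, start=20.0,
              neighbor=lambda x, r: x + r.uniform(-1, 1), rng=rng)
```

Swapping `accept_sa` for `accept_ta` (with a slowly decreasing threshold) turns the same loop into a threshold accepting heuristic.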

1.4. Variable Neighborhood Search

Variable Neighborhood Search (VNS) is a recent metaheuristic for combinatorial and global optimization [Hansen and Mladenovic (1997), Hansen and Mladenovic (1999), Hansen and Mladenovic (2002)]. It exploits a relatively unexplored approach to the design of local search heuristics: change of neighborhood within the search.

Industrial applications of VNS:
[1] Design of an oil pipeline in Gabon [Brimberg (2003)]
[2] Pooling problem from the oil and pulp-and-paper industries [Audet (2004)]
[3] Spread spectrum radar polyphase code design problem [Mladenovic (2003)]

1.5. Iterated Local Search

Iterated local search (ILS) [Stützle (1999)] is a general metaheuristic. It has two basic operators for generating new solutions: one is a local search and the other is a perturbation operator. When the local search is trapped in a local optimum, the perturbation operator is applied to the local optimum to generate a new starting point for the local search. It is desirable that the generated starting point lie in a promising area of the search space. A commonly used perturbation operator is conventional mutation, which can produce a starting point in a neighboring area of the local optimum. Another perturbation operator is the guided mutation operator [Zhang et al. (2005), Zhang et al. (2004)].
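The perturb-and-restart loop described above can be sketched as follows. The hill-climbing embedded heuristic and the two-basin toy objective are assumptions made purely for illustration:

```python
import random

def iterated_local_search(f, start, local_search, perturb, rng, iters=100):
    """ILS: keep a single chain of local optima; perturb the incumbent,
    re-run the embedded local search, keep the better of the two optima."""
    best = local_search(f, start)
    for _ in range(iters):
        candidate = local_search(f, perturb(best, rng))
        if f(candidate) < f(best):
            best = candidate
    return best

def hill_climb(f, x):
    """Embedded black-box heuristic: greedy descent over +/-1 integer moves."""
    while True:
        nxt = min((x - 1, x, x + 1), key=f)
        if nxt == x:
            return x
        x = nxt

# toy landscape with two basins: a local optimum at 10, the global one at -10
f = lambda x: (x - 10) ** 2 if x >= 0 else (x + 10) ** 2 - 5

rng = random.Random(7)
best = iterated_local_search(f, start=15, local_search=hill_climb,
                             perturb=lambda x, r: x + r.randint(-20, 20), rng=rng)
```

Plain hill climbing from 15 stops at the local optimum 10; the perturbation jumps eventually land the search in the basin of the global optimum at -10.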


The essence of the iterated local search metaheuristic can be given in a nutshell: one iteratively builds a sequence of solutions generated by the embedded heuristic, leading to far better solutions than if one were to use repeated random trials of that heuristic. This simple idea has a long history, and its rediscovery by many authors has led to many different names for iterated local search, like iterated descent [Baum (1986)], large-step Markov chains [Martin et al. (1991)], iterated Lin-Kernighan [Johnson (1990)], chained local optimization [Martin (1996)], or combinations of these [Applegate et al. (1999)]. There are two main points that make an algorithm an iterated local search: [1] there must be a single chain that is being followed (this excludes population-based algorithms); [2] the search for better solutions occurs in a reduced space defined by the output of a black-box heuristic. Historically, local search has been the most frequently used embedded heuristic, but in fact it can be any optimizer, deterministic or not.

1.6. Genetic Algorithms

Genetic algorithms (GAs) are based on biological principles of evolution and provide an interesting alternative to "classic" gradient-based optimization methods. They are particularly useful for highly nonlinear problems and models when computation time is not a primary concern. Continuity of functions is not required. Similar to other methods such as simulated annealing, they perform better than gradient-based methods in finding a global optimum if a problem is highly nonlinear and features multiple local minima. In general, GAs approach the entire design space randomly and then improve the found design points by applying genetics-based principles and probabilistic selection criteria. Applications of genetic algorithms by domain and by technique are summarized in Table 2.
GAs can be examined as a number of different things:

[1] GAs as problem solvers
[2] GAs as a challenging technical puzzle
[3] GAs as a basis for competent machine learning
[4] GAs as a computational model of innovation and creativity
[5] GAs as a computational model of other innovating systems
[6] GAs as a guiding philosophy

The algorithm is started with a set of solutions (represented by chromosomes) called the population. Solutions from one population are taken to form a new population through repeated application of mutation, crossover, inversion and selection operators. This is motivated by the hope that the new population will be better than the old one. Solutions selected to form new solutions (offspring) are chosen according to their fitness: the more suitable they are, the more chances they have to reproduce. This is repeated until some condition (for example, the number of populations or improvement of the best solution) is satisfied. The basic genetic operators are: [1] crossover and [2] mutation.

Crossover Operator
This operator randomly chooses a locus and exchanges the subsequences before and after that locus between two chromosomes to create two offspring. For example, the strings 1101100100110110 and 1101111000011110 could be crossed over after the fifth locus in each to produce the two offspring 1101111000011110 and 1101100100110110. The crossover operator roughly mimics biological recombination between two single-chromosome (haploid) organisms. Crossover can then look like this (| is the crossover point):

Chromosome 1   11011 | 00100110110
Chromosome 2   11011 | 11000011110

Offspring 1    11011 | 11000011110
Offspring 2    11011 | 00100110110


There are other ways to do crossover; for example, we can choose more crossover points. Crossover can be rather complicated and depends very much on the encoding of the chromosome. A crossover designed for a specific problem can improve the performance of the genetic algorithm.

Mutation Operator
After a crossover is performed, mutation takes place. This is to prevent all solutions in the population from falling into a local optimum of the solved problem. Mutation randomly changes the new offspring. For binary encoding we can switch a few randomly chosen bits from 1 to 0 or from 0 to 1. Mutation can then look like this:

Original offspring 1   1101111000011110
Original offspring 2   1101100100110110
Mutated offspring 1    1100111000011110
Mutated offspring 2    1101101100110110

The mutation depends on the encoding, as does the crossover. For example, when we are encoding permutations, mutation could exchange two genes.

Table 2: Application of genetic algorithms

Some application areas by domain:
[1] Industrial design by parameterization
[2] Scheduling
[3] Network design by construction
[4] Routing
[5] Time series prediction
[6] Database mining
[7] Control systems
[8] Artificial life systems
[9] Chemistry: molecular conformation

Some application areas by technique:
[1] Binary chromosomes for set membership and function optimization
[2] Real-valued chromosomes for function optimization
[3] Order-based chromosomes for optimization by construction
[4] Tree-based chromosomes for genetic programming, decision theory, database mining, etc.
[5] Domain-specific chromosomes for specialized solutions to particular problems
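The one-point crossover and bit-flip mutation operators described above can be sketched directly on the paper's example strings. The helper names and the mutation rate are illustrative assumptions:

```python
import random

def crossover(p1, p2, point):
    """One-point crossover: exchange the substrings after `point`."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(chromosome, rate, rng):
    """Bit-flip mutation: each gene flips with probability `rate`."""
    return "".join("10"[int(b)] if rng.random() < rate else b for b in chromosome)

# the example from the text: cross the two parents after the fifth locus
c1, c2 = crossover("1101100100110110", "1101111000011110", point=5)
mutated = mutate(c1, rate=0.05, rng=random.Random(4))
```

Because the two parents share their first five bits, crossing after the fifth locus reproduces the parent strings, exactly as in the example above.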

1.7. Harmony Search

The harmony search (HS) is a music-inspired evolutionary algorithm mimicking the improvisation process of music players [Geem et al. (1999)]. HS is simple in concept, has few parameters, and is easy to implement, with a theoretical background in the stochastic derivative [Geem (2007)]. The algorithm was originally developed for discrete optimization and later extended to continuous optimization [Lee and Geem (2005)]. The Harmony Search algorithm has the following merits:

[1] HS does not require differential gradients; thus it can consider discontinuous functions as well as continuous functions.
[2] HS can handle discrete variables as well as continuous variables.
[3] HS does not require initial value settings for the variables.
[4] HS is free from divergence.
[5] HS may escape local optima.
[6] HS may overcome the drawback of the GA's building block theory, which works well only if the relationship among variables in a chromosome is carefully considered. If neighboring variables in a chromosome have a weaker relationship than remote variables, building block theory may not work well because of the crossover operation. HS, however, explicitly considers the relationship using an ensemble operation.
[7] HS has a novel stochastic derivative applied to discrete variables, which uses the musician's experience as a search direction.

The applications that Harmony Search has so far tackled in various industrial fields are shown in Table 3 [Geem (2008)].
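The improvisation loop of HS can be sketched for a single continuous variable as follows. The parameter values (harmony memory size, memory considering rate hmcr, pitch adjusting rate par, bandwidth bw) and the toy objective are illustrative assumptions, not values from the cited papers:

```python
import random

def harmony_search(f, bounds, rng, hms=10, hmcr=0.9, par=0.3, bw=0.1, iters=2000):
    """Minimal HS sketch: improvise a new value from harmony memory
    (prob. hmcr), pitch-adjust it (prob. par), or pick it at random;
    the new harmony replaces the worst member of memory if it is better."""
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi)] for _ in range(hms)]   # harmony memory
    for _ in range(iters):
        if rng.random() < hmcr:
            x = rng.choice(memory)[0]                      # memory consideration
            if rng.random() < par:
                x += rng.uniform(-bw, bw)                  # pitch adjustment
        else:
            x = rng.uniform(lo, hi)                        # random selection
        x = min(max(x, lo), hi)
        worst = max(memory, key=lambda h: f(h[0]))
        if f(x) < f(worst[0]):
            worst[0] = x                                   # update harmony memory
    return min((h[0] for h in memory), key=f)

rng = random.Random(0)
best = harmony_search(lambda x: (x - 2.0) ** 2, bounds=(-10, 10), rng=rng)
```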


Table 3: Illustrative harmony search applications

Civil Engineering
 Water Network Design
 Dam Scheduling

Structural Engineering
 Dome Truss Design
 Grillage Structure Design
 Transmission Tower Design

Traffic Engineering
 School Bus Routing

Mechanical Engineering
 Pipeline Leakage Detection

Energy Engineering
 Pump Switching

Space Engineering
 Satellite Heat Pipe Design

Information Technology
 Web-Based Hydrologic Optimization
 Use of Space Technology in Disaster Management

Geological Engineering
 Soil Slope Stability

Environmental Engineering
 Flood Model Parameter Calibration

Agricultural Engineering
 Large-Scale Irrigation Network Design

Petroleum Engineering
 Petroleum Structure Mooring

Industrial Engineering
 Fluid-Transport Minimal Spanning Tree
 Metaheuristics for meltshop scheduling in the steel industry
 Ant Colony Optimization to Solve the Train Timetabling Problem of Mass Rapid Transit
 Designing Survivable Fiber-Optic Networks
 The material allocation problem in the steel industry

1.8. Memetic Algorithm

Memetic Algorithms (MAs) are evolutionary algorithms (EAs) that apply a separate local search process to refine individuals (i.e., improve their fitness by hill climbing). These methods are inspired by models of adaptation in natural systems that combine the evolutionary adaptation of populations of individuals with individual learning within a lifetime. Additionally, MAs are inspired by Richard Dawkins's concept of a meme [Dawkins (1976)], which represents a unit of cultural evolution that can exhibit local refinement. Under different contexts and situations, MAs are also known as hybrid EAs, genetic local search, Baldwinian EAs, Lamarckian EAs, etc. MAs comprise a broad class of metaheuristics. The method is based on a population of agents and has proved to be of practical success in a variety of problem domains; MAs constitute one of the most successful approaches for combinatorial optimization in general, and for the approximate solution of NP optimization problems in particular. Unlike traditional evolutionary computation approaches, MAs are concerned with exploiting all available knowledge about the problem under study. According to Pastorino [Pastorino (2004)], an MA is able to improve convergence time, making it more favorable than a GA. In an MA, local search is performed between generations, in addition to the techniques used by a GA to explore the search space, namely recombination/crossover and mutation. For this reason, the Memetic Algorithm is also known as a hybrid GA [Moscato (2002)]. Local search is performed to improve the fitness of the population (in a localized region of the solution space) so that the next generation has "better" genes than its parents, hence the claim that Memetic Algorithms can reduce convergence time. Memetic Algorithms incorporate the concept of memes by allowing individuals to "change" before the next population is produced. Individuals may "copy" parts of genes from other individuals to improve their own fitness. The local search algorithm adopted in a Memetic Algorithm is not an optional mechanism but a fundamental feature. Applications of MAs are shown in Table 4.

Table 4: Illustrative Memetic algorithm applications

 Graph partitioning
 Multidimensional knapsack
 Travelling salesman problem
 Quadratic assignment problem
 Set cover problem
 Minimal graph colouring
 Max independent set problem
 Bin packing problem
 Generalized assignment problem
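The GA-plus-local-search structure of Section 1.8 can be sketched as follows. The averaging crossover, the mutation step and the toy objective are assumptions made for illustration; because refined individuals re-enter the population, this is the Lamarckian variant:

```python
import random

def local_search(f, x):
    """The 'meme': greedy hill climbing over +/-1 integer moves."""
    while True:
        nxt = min((x - 1, x, x + 1), key=f)
        if nxt == x:
            return x
        x = nxt

def memetic(f, rng, pop_size=10, gens=30, span=100):
    """GA-style recombination and mutation, with a local search pass
    refining every offspring before elitist selection."""
    pop = [local_search(f, rng.randint(-span, span)) for _ in range(pop_size)]
    for _ in range(gens):
        offspring = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)
            child = (a + b) // 2 + rng.randint(-5, 5)    # crossover + mutation
            offspring.append(local_search(f, child))     # individual learning
        pop = sorted(pop + offspring, key=f)[:pop_size]  # elitist selection
    return pop[0]

# toy objective whose local optima all satisfy x % 10 == 3
rng = random.Random(3)
best = memetic(lambda x: abs(x % 10 - 3) + abs(x) / 50.0, rng)
```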

1.9. Ant Colony Optimization

This heuristic, first introduced by Colorni et al. [Colorni et al. (1992a, 1992b)], imitates the way ants search for food and find their way back to their nest. First, an ant explores its neighborhood randomly. As soon as a source of food is found, it starts to transport food to the nest, leaving traces of pheromone on the ground which guide other ants to the source. The intensity of the pheromone traces depends on the quantity and quality of the food available at the source, as well as on the distance between source and nest, as for a short distance more ants will travel on the same


trail in a given time interval. As the ants preferably travel along important trails, their behavior is able to optimize their work. Pheromone trails evaporate, and once a source of food is exhausted the trails will disappear and the ants will start to search for other sources. For the heuristic, the search area of the ant corresponds to a discrete set from which the elements forming the solutions are selected; the amount of food is associated with an objective function, and the pheromone trail is modeled with an adaptive memory. Real-life engineering and industrial applications of the ant colony optimization technique are illustrated in Table 5.

ACO characteristics:

 Exploits a positive feedback mechanism
 Demonstrates a distributed computational architecture
 Exploits a global data structure that changes dynamically as each ant traverses the route
 Has an element of distributed computation to it involving the population of ants
 Involves probabilistic transitions among states, or rather between nodes

Table 5: Famous real-life engineering and industrial applications of ant colony optimization

Routing
 TSP (Traveling Salesman Problem)
 Vehicle Routing
 Sequential Ordering

Assignment
 QAP (Quadratic Assignment Problem)
 Graph Coloring
 Generalized Assignment
 Frequency Assignment
 University Course Timetabling

Scheduling
 Job Shop
 Open Shop
 Flow Shop
 Total tardiness (weighted/non-weighted)
 Project Scheduling
 Group Shop

Subset
 Multi-Knapsack
 Max Independent Set
 Redundancy Allocation
 Set Covering
 Weight Constrained Graph Tree Partition
 Arc-weighted L-cardinality tree

Other
 Shortest Common Supersequence
 Constraint Satisfaction
 2D-HP protein folding
 Bin Packing

Machine Learning
 Classification Rules
 Bayesian networks
 Fuzzy systems

Network Routing
 Connection-oriented network routing
 Connectionless network routing
 Optical network routing
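The pheromone mechanism described in Section 1.9 can be sketched on a four-city TSP. The parameter values (alpha, beta, evaporation rate rho, ant count) are illustrative assumptions:

```python
import random

def aco_tsp(dist, n_ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5, rng=None):
    """ACO sketch for the TSP: ants build tours node by node, choosing the next
    city with probability proportional to pheromone^alpha * (1/distance)^beta;
    trails are reinforced by 1/tour_length and evaporate by factor rho."""
    n = len(dist)
    rng = rng or random.Random(0)
    tau = [[1.0] * n for _ in range(n)]          # pheromone (adaptive memory)
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in tour]
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in choices]
                tour.append(rng.choices(choices, weights)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
            for k in range(n):                   # pheromone deposit
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
        tau = [[(1 - rho) * t for t in row] for row in tau]   # evaporation
    return best_tour, best_len

# toy instance: four cities on a unit square; the optimal tour has length 4.0
s = 2 ** 0.5
d = [[0, 1, s, 1], [1, 0, 1, s], [s, 1, 0, 1], [1, s, 1, 0]]
tour, length = aco_tsp(d)
```

Evaporation keeps the adaptive memory from saturating, so trails to abandoned "food sources" (poor edges) fade over time, just as in the natural analogy.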

1.10. Scatter Search

Scatter Search is an evolutionary method which works on a set of solutions called the reference set. The solutions in this set are combined in order to obtain new solutions that are better than the original ones. The reference set stores the best solutions that have been generated so far; to determine whether a solution is good, both its quality and its diversity are considered. A test case generator based on Scatter Search uses the control flow graph in order to determine the covered branches. Each node has a solution set, and the algorithm tries to make the sets as diverse as possible, using a diversity function to generate solutions that can cover different branches of the program. The goal of the algorithm is to obtain maximum branch coverage, i.e., there must be solutions that allow covering all the nodes of the control flow graph. Since these solutions are stored in the nodes, the goal is, therefore, that every node have at least one element in its solution set.

Conclusion

A lot of advancement has been made in finding exact solutions to combinatorial optimization problems using techniques such as integer programming, dynamic programming, cutting planes, and branch and cut methods. Still, there are many hard combinatorial problems which remain unsolved and require good heuristic methods. "Optimal solutions" are in many cases meaningless, as in practice we are often dealing with models that are rough simplifications of reality. The goal of metaheuristics is to produce good-quality solutions efficiently without necessarily providing any guarantee of solution quality. Modern metaheuristics include simulated annealing, genetic algorithms, tabu search, GRASP, ant colony optimization, and their hybrids. No doubt metaheuristics have been one of the most stimulating topics to explore in the field of optimization.


References

[1] Applegate, D., Cook, W. and Rohe, A. (1999). Chained Lin-Kernighan for large traveling salesman problems, Technical Report No. 99887, Forschungsinstitut für Diskrete Mathematik, University of Bonn, Germany.
[2] Audet, C., Brimberg, J., Hansen, P. and Mladenovic, N. (2004). Pooling problem: alternate formulations and solution methods, Management Science 50(6), pp. 761-776. DOI: 10.1287/mnsc.1030.0207.
[3] Baum, E. B. (1986). Iterated descent: A better algorithm for local search in combinatorial optimization problems, Technical report, Caltech, Pasadena, CA.
[4] Bräysy, O. and Gendreau, M. (2005). Vehicle routing problem with time windows, Part I: Route construction and local search algorithms, Transportation Science 39(1), pp. 104-118.
[5] Bräysy, O. and Gendreau, M. (2005). Vehicle routing problem with time windows, Part II: Metaheuristics, Transportation Science 39(1), pp. 119-139.
[6] Bräysy, O., Berger, J., Barkaoui, M. and Dullaert, W. (2002). A Threshold Accepting Metaheuristic for the Vehicle Routing Problem with Time Windows, Internal Report STF42 A02014, SINTEF Applied Mathematics, Department of Optimisation, Oslo, Norway.
[7] Brimberg, J., Hansen, P., Lih, K.W., Mladenovic, N. and Breton, M. (2003). An oil pipeline design problem, Operations Research 51(2), pp. 228-239.
[8] Colorni, A., Dorigo, M. and Maniezzo, V. (1992a). An investigation of some properties of an ant algorithm. In: Parallel Problem Solving from Nature, Vol. 2, R. Männer and B. Manderick (eds.), North-Holland, Amsterdam, pp. 509-520.
[9] Colorni, A., Dorigo, M. and Maniezzo, V. (1992b). Distributed optimization by ant colonies. In: Proceedings of the First European Conference on Artificial Life (ECAL-91), F.J. Varela and P. Bourgine (eds.), The MIT Press, Cambridge, MA, pp. 134-142.
[10] Dawkins, R. (1976). The Selfish Gene, Clarendon Press, Oxford.
[11] Dorigo, M., Maniezzo, V. and Colorni, A. (1991). Positive feedback as a search strategy, Technical Report 91-016, Dipartimento di Elettronica, Politecnico di Milano, Milan, Italy.
[12] Dueck, G. and Scheuer, T. (1990). Threshold Accepting: A General Purpose Optimization Algorithm Appearing Superior to Simulated Annealing, Journal of Computational Physics 90, pp. 161-175.
[13] Geem, Z.W. (2007). Novel Derivative of Harmony Search Algorithm for Discrete Design Variables, Applied Mathematics and Computation, doi:10.1016/j.amc.2007.09.049.
[14] Geem, Z.W. (2008). Harmony Search Applications in Industry. In: Studies in Fuzziness and Soft Computing, Vol. 226, Springer, Berlin/Heidelberg, ISSN 1434-9922, DOI 10.1007/978-3-540-77465-5.
[15] Geem, Z.W., Kim, J.H. and Loganathan, G.V. (1999). A New Heuristic Optimization Algorithm: Harmony Search, Simulation 76(2), pp. 60-68.
[16] Geem, Z.W. (2006). Optimal Cost Design of Water Distribution Networks using Harmony Search, Engineering Optimization 38(3), pp. 259-280.
[17] Geiger, M.J. (2008). An application of the Threshold Accepting metaheuristic for curriculum based course timetabling. In: Proceedings of the 7th PATAT Conference, Canada.
[18] Glover, F., Laguna, M. and Martí, R. (2000). Fundamentals of Scatter Search and Path Relinking, Control & Cybernetics 39(3), pp. 653-684.
[19] Glover, F. and Laguna, M. (1997). Tabu Search, Kluwer Academic Publishers, Norwell, MA, USA.
[20] Glover, F. (1990). Tabu Search - Part II, ORSA Journal on Computing 2(1), pp. 4-32.
[21] Glover, F. (1989). Tabu Search - Part I, ORSA Journal on Computing 1(3), pp. 190-206.
[22] Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, p. 41.
[23] Hansen, P. and Mladenovic, N. (1997). Variable neighborhood search, Computers & Operations Research 24(11), pp. 1097-1100.
[24] Hansen, P. and Mladenovic, N. (1999). An Introduction to Variable Neighborhood Search. In: Meta-Heuristics 98: Theory & Applications, S. Voß, S. Martello, C. Roucairol and I. H. Osman (eds.), Kluwer Academic Publishers, Norwell, MA, pp. 433-458.
[25] Hansen, P. and Mladenovic, N. (2002). Developments of Variable Neighborhood Search. In: Essays and Surveys in Metaheuristics, C. Ribeiro and P. Hansen (eds.), Kluwer Academic Publishers, Norwell, MA, pp. 415-439.
[26] Lourenço, H.R., Martin, O. and Stützle, T. (2001). A beginner's introduction to iterated local search. In: Proceedings of the 4th Metaheuristics International Conference, pp. 545-550.
[27] Holland, J. H. (1975). Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor.
[28] Johnson, D. S. (1990). Local optimization and the travelling salesman problem. In: Proceedings of the 17th Colloquium on Automata, Languages, and Programming, LNCS 443, Springer-Verlag, Berlin, pp. 446-461.
[29] Kengo, K. and Hiroyuki, N. (2001). Performance of simulated annealing-based heuristic for the unconstrained binary quadratic programming problem, European Journal of Operational Research 134(1), pp. 103-119.
[30] Kirkpatrick, S., Gelatt, C. D. and Vecchi, M. P. (1983). Optimization by Simulated Annealing, Science 220, pp. 671-680.
[31] Kolahan, F., Abolbashari, M.H. and Mohitzadeh, S. (2007). Simulated Annealing Application for Structural Optimization, World Academy of Science, Engineering and Technology, p. 35.
[32] Lee, K.S. and Geem, Z.W. (2005). A New Meta-Heuristic Algorithm for Continuous Engineering Optimization: Harmony Search Theory and Practice, Computer Methods in Applied Mechanics and Engineering 194(36-38), pp. 3902-3933.
[33] Lee, K.S. and Geem, Z.W. (2004). A New Structural Optimization Method Based on the Harmony Search Algorithm, Computers & Structures 82(9-10), pp. 781-798.
[34] Loris, F. (1999). An application of simulated annealing to the cutting stock problem, European Journal of Operational Research 114(3), pp. 542-556.
[35] Martin, O. and Otto, S.W. (1996). Combining simulated annealing with local search heuristics, Annals of Operations Research 63, pp. 57-75.
[36] Martin, O., Otto, S.W. and Felten, E.W. (1991). Large-step Markov chains for the traveling salesman problem, Complex Systems 5(3), pp. 299-326.
[37] Mladenovic, N., Petrovic, J., Kovacevic-Vujcic, V. and Cangalovic, M. (2003). Solving the spread spectrum radar polyphase code design problem by Tabu search and Variable neighborhood search, European Journal of Operational Research 151(2), pp. 389-399.
[38] Moscato, P. (1989). On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms, Caltech Concurrent Computation Program Report 826.
[39] Moscato, P. (2002). Memetic Algorithms Home Page, accessed 27 September 2005, http://www.densis.fee.unicamp.br/~moscato/memetic_home.html.
[40] Moscato, P. and Fontanari, J. (1990). Stochastic Versus Deterministic Update in Simulated Annealing, Physics Letters A 146, pp. 204-208.


[41] Pastorino, M. (2004). Reconstruction Algorithm for Electromagnetic Imaging, IEEE Transactions on Instrumentation and Measurement 53, pp. 692-699.
[42] Saka, M.P. (2007). Optimum Geometry Design of Geodesic Domes Using Harmony Search Algorithm, Advances in Structural Engineering 10(6), pp. 595-606.
[43] Stella, S. (1992). Simulated annealing applied to the process allocation problem, European Journal of Operational Research 60(3), pp. 327-334.
[44] Stützle, T. (1999). Iterated local search for the quadratic assignment problem, Technical Report AIDA-99-03, Darmstadt University of Technology, Computer Science Department, Intellectics Group.
[45] Winker, P. (2000). Optimization Heuristics in Econometrics: Applications of Threshold Accepting, John Wiley and Sons Ltd., Chichester, UK.
[46] Zhang, Q., Sun, J. and Tsang, E.P.K. (2005). Evolutionary algorithm with the guided mutation for the maximum clique problem, IEEE Transactions on Evolutionary Computation 9(2), pp. 192-200.
[47] Zhang, Q., Sun, J., Tsang, E.P.K. and Ford, J.A. (2004). Combination of guided local search and estimation of distribution algorithm for solving the quadratic assignment problem. In: Proceedings of the Bird of a Feather Workshops, Genetic and Evolutionary Computation Conference, pp. 42-48.
