Genetic algorithms with decomposition procedures


IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 33, NO. 3, JUNE 2003

Genetic Algorithms With Decomposition Procedures for Multidimensional 0-1 Knapsack Problems With Block Angular Structures

Kosuke Kato, Member, IEEE, and Masatoshi Sakawa, Member, IEEE

Abstract—This paper presents a detailed treatment of genetic algorithms with decomposition procedures as developed for large scale multidimensional 0-1 knapsack problems with block angular structures. Through the introduction of a triple string representation and the corresponding decoding algorithm, it is shown that a potential solution satisfying not only block constraints but also coupling constraints can be obtained for each individual. Then genetic algorithms with decomposition procedures are presented as an approximate solution method for multidimensional 0-1 knapsack problems with block angular structures. Many computational experiments on numerical examples with 30, 50, 70, 100, 150, 200, 300, 500, and 1000 variables demonstrate the feasibility and efficiency of the proposed method.

Index Terms—Block angular structures, decomposition procedures, genetic algorithms, multidimensional 0-1 knapsack problems, triple strings.

Manuscript received January 1, 2000. This paper was recommended by Associate Editor V. Chankong. The authors are with the Department of Artificial Complex Systems Engineering, Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima, Japan 739-8527 (e-mail: [email protected]). Digital Object Identifier 10.1109/TSMCB.2003.811126

I. INTRODUCTION

THE INCREASING complexity of modern-day society has brought new problems involving very large numbers of variables and constraints. Due to the high dimensionality of such problems, it becomes difficult to obtain optimal solutions for large scale programming problems. Fortunately, however, most of the large scale programming problems arising in applications almost always have a special structure that can be exploited. One familiar structure is the block angular structure of the constraints, which can be used to formulate subproblems. From this point of view, in the early 1960s, Dantzig and Wolfe [6], [7] introduced an elegant and attractive decomposition method for linear programming problems. The Dantzig–Wolfe decomposition method, when applied to large scale linear programming problems with block angular structures, implies that the entire problem can be solved by solving a coordinated sequence of independent subproblems, and the process of coordination is shown to be finite. After the publication of the Dantzig–Wolfe decomposition algorithm [6], [7], subsequent works on large scale linear and nonlinear programming problems with block angular structures have been numerous [6], [7], [16]–[21], [25], [27], [33], [39], [40], [41]. Among the nonlinear extensions of the decomposition method, the dual decomposition method proposed by

Lasdon [24] and the primal decomposition method proposed by Geoffrion [12] are well known for solving large scale nonlinear programming problems with block angular structures. A brief and unified survey of the major approaches to large scale mathematical programming proposed before 1970 can be found in the papers by Geoffrion [10], [11]. More comprehensive discussions of the major large scale mathematical programming methods proposed through the early 1970s can be found in Lasdon [25] and Wismer [41]. Unfortunately, however, a generalization of the results along this line to discrete optimization problems with block angular structures has not yet been established.

Genetic algorithms [22], initiated by Holland, his colleagues, and his students at the University of Michigan in the 1970s as stochastic search techniques based on the mechanism of natural selection and natural genetics, have received a great deal of attention regarding their potential as optimization techniques for solving discrete optimization problems or other hard optimization problems. Although genetic algorithms were not widely known at the beginning, after the publication of Goldberg's book [13] they have attracted considerable attention in a number of fields as a methodology for optimization, adaptation, and learning. Recent applications of genetic algorithms to optimization problems, especially to various kinds of single-objective discrete optimization problems and other hard optimization problems, show continuing advances [1], [8], [9], [26], [38]. In particular, Khuri et al. [23] proposed a genetic algorithm, GENEsYs, for 0-1 multiple knapsack problems and showed its good performance on various test problems. Furthermore, Christou et al. [4] proposed a GA-based solution method for a class of large-scale graph partitioning problems formulated as block angular quadratic assignment problems, which combines the subproblem coordination paradigm of price-directive decomposition methods with knapsack and genetic approaches to the utilization of "building blocks" of partial solutions. Unfortunately, however, it seems difficult to extend these methods to more general multidimensional 0-1 knapsack problems.

As a natural extension of single-objective 0-1 programming problems, Sakawa et al. [29], [32] formulated multiobjective multidimensional 0-1 knapsack problems by assuming that the decision maker may have a fuzzy goal for each of the objective functions. After eliciting linear membership functions, the fuzzy decision of Bellman and Zadeh [2] was adopted for combining these functions. For deriving a satisficing solution for the decision maker by solving the formulated problem, a genetic algorithm with double strings [29], [32], which generates only

1083-4419/03$17.00 © 2003 IEEE

KATO AND SAKAWA: GENETIC ALGORITHMS WITH DECOMPOSITION PROCEDURES

feasible solutions without using penalty functions for treating the constraints, was proposed. Also, through the combination of the desirable features of both the interactive fuzzy satisficing methods for continuous variables [28] and the genetic algorithm with double strings [30], an interactive fuzzy satisficing method to derive a satisficing solution for the decision maker to multiobjective multidimensional 0-1 knapsack problems was proposed [30], [34], [35]. Furthermore, they extended the proposed method to deal with multiobjective multidimensional 0-1 knapsack problems involving fuzzy numbers [36], [37].

Under these circumstances, in this paper, we focus on large scale multidimensional 0-1 knapsack problems with block angular structures. By utilizing the block angular structure of the problem, a triple string representation and the corresponding decoding algorithm are introduced, and it is shown that a potential solution satisfying not only the block constraints but also the coupling constraints can be obtained for each individual. Then genetic algorithms with decomposition procedures are presented as an approximate solution method for multidimensional 0-1 knapsack problems with block angular structures. Through a number of computational experiments on numerical examples with 30, 50, 70, 100, 150, 200, 300, 500, and 1000 variables, the feasibility and efficiency of the proposed method are demonstrated.

II. MULTIDIMENSIONAL 0-1 KNAPSACK PROBLEMS WITH BLOCK ANGULAR STRUCTURES

As a 0-1 version of the linear programming problems with block angular structures discussed by Dantzig and Wolfe [6], [7], consider the following large scale 0-1 programming problem of the block angular structure:

maximize  c_1 x_1 + c_2 x_2 + ... + c_p x_p
subject to  A_1 x_1 + A_2 x_2 + ... + A_p x_p <= b^0
            B_r x_r <= b^r,  r = 1, ..., p
            x_r in {0, 1}^(n_r),  r = 1, ..., p          (1)

where c_r, r = 1, ..., p, are n_r-dimensional cost factor row vectors; x_r, r = 1, ..., p, are n_r-dimensional column vectors of 0-1 decision variables; A_1 x_1 + ... + A_p x_p <= b^0 denotes the m_0-dimensional coupling constraints, where A_r, r = 1, ..., p, are m_0 x n_r coefficient matrices; and the inequalities B_r x_r <= b^r, r = 1, ..., p, are the m_r-dimensional block constraints, where B_r, r = 1, ..., p, are m_r x n_r coefficient matrices. Here, it is assumed that each element of A_r, B_r, b^0, and b^r is non-negative. The problem (1) can then be viewed as a multidimensional 0-1 knapsack problem with a block angular structure.

For example, consider a project selection problem in a company having a number of divisions, where each division has its own limited amounts of internal resources and the divisions are coupled by limited amounts of shared resources. The manager must determine which projects to approve so as to maximize the total profit under the resource constraints. Such a project selection problem can be formulated as a multidimensional 0-1 knapsack problem with a block angular structure of the form (1).
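As an illustration of the data layout in problem (1), the two families of constraints can be checked as follows. This is a hypothetical pure-Python sketch (the paper specifies no implementation); all names and the list-of-lists representation are our assumptions.

```python
def dot(row, x):
    """Inner product of a constraint row with a 0-1 vector."""
    return sum(a * xi for a, xi in zip(row, x))

def feasible(x_blocks, A_blocks, b0, B_blocks, b_blocks):
    """Check the coupling and block constraints of problem (1).

    x_blocks : list of 0-1 vectors x_1, ..., x_p (one per block)
    A_blocks : list of m0 x n_r coupling-constraint matrices A_r
    b0       : m0-vector, right-hand side of the coupling constraints
    B_blocks : list of m_r x n_r block-constraint matrices B_r
    b_blocks : list of m_r-vectors b^r
    """
    # Coupling constraints: A_1 x_1 + ... + A_p x_p <= b^0
    for i in range(len(b0)):
        total = sum(dot(A[i], x) for A, x in zip(A_blocks, x_blocks))
        if total > b0[i]:
            return False
    # Block constraints: B_r x_r <= b^r for every block r
    for B, x, b in zip(B_blocks, x_blocks, b_blocks):
        for row, bi in zip(B, b):
            if dot(row, x) > bi:
                return False
    return True
```

A solution is feasible only when both checks pass, which is exactly the property the decoding algorithm of Section III enforces by construction.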

Fig. 1. Double string.

Fig. 2. Division of an individual into p subindividuals.

Fig. 3. Triple string representation.

III. GENETIC ALGORITHMS WITH DECOMPOSITION PROCEDURES

A. Coding and Decoding

For 0-1 programming problems of the knapsack type, Sakawa et al. [29]–[32], [34], [35], [37] proposed a double string representation, as shown in Fig. 1, together with a decoding algorithm that generates only feasible solutions. In Fig. 1, for a certain j, s(j) denotes the index of a variable in the solution space, while g_s(j) denotes the value (0 or 1) of the s(j)th variable.

In view of the special structure of the problem (1), it seems quite reasonable to define an individual S as an aggregation of p subindividuals s_1, ..., s_p, each corresponding to one block constraint, as shown in Fig. 2.

If these subindividuals are represented by double strings, a phenotype (subsolution) satisfying the corresponding block constraint can be obtained for each subindividual s_r, r = 1, ..., p, by the decoding algorithm proposed by Sakawa et al. [29]–[32], [34], [35], [37]. Unfortunately, however, the simple combination of these subsolutions does not always satisfy the coupling constraints. To cope with this problem, a triple string representation, as shown in Fig. 3, and a corresponding decoding algorithm are presented as extensions of the double string representation and its decoding algorithm. By using the proposed representation and decoding algorithm, a phenotype (solution) satisfying both the block constraints and the coupling constraints can be obtained for each individual S.

To be more explicit, in the triple string representing the subindividual that corresponds to the rth block, the upper string gives the priority of the block in decoding, s_r(j) denotes the index of a variable in the phenotype, and g_s_r(j) is its 0-1 value. By decoding an individual (genotype) through the following algorithm, the resulting solution (phenotype) always becomes feasible. In the algorithm, n_r denotes the number of variables in the rth block, a_{r,s_r(j)} is the s_r(j)th column vector of the coupling constraint coefficient matrix A_r, and b_{r,s_r(j)} is the s_r(j)th column vector of the block constraint coefficient matrix B_r.

Decoding Algorithm for Triple String:
Step 1: Set the decoding priority k := 1 and proceed to Step 2.
Step 2: Find the block r whose priority in the upper string is equal to k, and proceed to Step 3.
Step 3: For this block r, set j := 1, initialize the accumulated left-hand sides of the block and coupling constraints, and repeat the following procedures.
(a) If g_s_r(j) = 1, tentatively set x_s_r(j) := 1 and go to (b). Otherwise, i.e., if g_s_r(j) = 0, set x_s_r(j) := 0 and go to (c).
(b) If both the block constraint B_r x_r <= b^r and the coupling constraints A_1 x_1 + ... + A_p x_p <= b^0 remain satisfied, fix x_s_r(j) := 1, update the accumulated left-hand sides, and go to (c). Otherwise, set x_s_r(j) := 0 and go to (c).
(c) If j = n_r, regard the decoded values as the phenotype of the subindividual represented by the triple string and go to Step 4. Otherwise, set j := j + 1 and return to (a).
Step 4: If k = p, stop and regard x = (x_1, ..., x_p) as the phenotype of the individual S. Otherwise, set k := k + 1 and return to Step 2.

Fig. 4. Example for the decoding algorithm.

To illustrate how the decoding algorithm actually works, consider the simple numerical example with two blocks and five variables shown in Fig. 4. Since the values in the upper string denote the priority of decoding, the second subindividual is decoded first. Because the first elements of the middle string and the lower string of the second subindividual are "2" and "1," respectively (marked "_"), the second variable in the second block is set to 1 if both the block constraints and the coupling constraints are satisfied. In this example, all constraints are satisfied, and the variable is fixed to 1. Next, although the second elements of the middle string and the lower string of the second subindividual are "1" and "1," respectively (marked "_"), the first variable of the second block is set to 0, since the block constraint would be violated if this variable were set to 1. These procedures are repeated until all values of the variables in the phenotype are fixed. In this example, a feasible solution is obtained from the given individual by use of the decoding algorithm.

B. Fitness

In genetic algorithms, an individual S is evaluated using some measure of fitness. As the fitness of each individual S for the problem (1), it would be reasonable to adopt the following function:

f(S) = (c_1 x_1(S) + ... + c_p x_p(S)) / (c_1 1 + ... + c_p 1)          (2)

where S denotes an individual represented by a triple string, x(S) = (x_1(S), ..., x_p(S)) is the phenotype of S, and 1 denotes the all-ones vector. It should be noted here that f(S) = 1 if x_j(S) = 1 for all j and this solution is feasible, while f(S) = 0 if x_j(S) = 0 for all j, so the fitness satisfies 0 <= f(S) <= 1. For convenience, the fitness of an individual S is also used as the fitness of each of its subindividuals s_r, r = 1, ..., p.

In reproduction operations based on the ratio of the fitness of each individual to the total fitness, such as expected value selection, it is a problem that the probability of selection depends on the relative ratios of the fitness values. Thus, the following linear scaling is adopted.

Linear scaling: The fitness f_i of the ith individual is transformed into

f'_i = a f_i + b

where the coefficients a and b are determined so that the mean fitness f_mean of the population becomes a fixed point and the maximal fitness f_max of the population becomes twice as large as the mean fitness, i.e., so that f'_mean = f_mean and f'_max = 2 f_mean, which gives a = f_mean/(f_max - f_mean) and b = f_mean (f_max - 2 f_mean)/(f_max - f_mean).

C. Reproduction

Various kinds of reproduction methods have been proposed. Among them, Sakawa et al. investigated the performance of six reproduction operators, i.e., ranking selection, elitist ranking selection, expected value selection, elitist expected value selection, roulette wheel selection, and elitist roulette wheel selection, and confirmed that elitist expected value selection is relatively efficient for multiobjective 0-1 programming problems incorporating the fuzzy goals of the decision maker [29], [32]. Based mainly on this experience [29], elitist expected value selection, i.e., elitism and expected value selection combined, is adopted as the reproduction operator, where the two ingredients are summarized as follows.

Elitism: If the fitness of the best individual in past populations is larger than that of every individual in the current population, preserve that individual in the current population.

Expected value selection: For a population consisting of N individuals, the expected number of copies of (each subindividual of) the ith individual in the next population is given by

N_i = N f_i / (f_1 + ... + f_N).

The integral part of N_i gives the definite number of copies of the ith individual preserved in the next population, while the decimal part of N_i is used as the probability of preserving one additional copy in the next population.
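Before turning to the variation operators, the triple string decoding procedure of Section III-A can be sketched in pure Python. The function name, argument layout, and list-based encoding below are illustrative assumptions, not the authors' implementation; blocks are decoded in priority order, and a variable is fixed to 1 only if neither its block constraints nor the coupling constraints are violated.

```python
def decode(priority, index, value, A_blocks, b0, B_blocks, b_blocks):
    """Decode a triple-string genotype into a feasible phenotype.

    priority : priority[r] is the decoding rank of block r (upper string)
    index    : index[r] is a permutation of variable positions in block r
    value    : value[r][j] is the 0-1 gene attached to position index[r][j]
    """
    p = len(index)
    x = [[0] * len(idx) for idx in index]      # phenotype, initially all 0
    used0 = [0.0] * len(b0)                    # coupling LHS accumulated so far
    for r in sorted(range(p), key=lambda q: priority[q]):
        usedr = [0.0] * len(b_blocks[r])       # block-r LHS accumulated so far
        for j, v in zip(index[r], value[r]):
            if v == 0:
                continue                       # gene says 0: leave variable at 0
            ok_block = all(usedr[i] + B_blocks[r][i][j] <= b_blocks[r][i]
                           for i in range(len(b_blocks[r])))
            ok_coupling = all(used0[i] + A_blocks[r][i][j] <= b0[i]
                              for i in range(len(b0)))
            if ok_block and ok_coupling:       # fix to 1 only if still feasible
                x[r][j] = 1
                for i in range(len(b_blocks[r])):
                    usedr[i] += B_blocks[r][i][j]
                for i in range(len(b0)):
                    used0[i] += A_blocks[r][i][j]
    return x
```

Because every 1-assignment is accepted only after both feasibility checks pass, the returned phenotype satisfies the block and coupling constraints by construction.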

Fig. 5. Example of PMX for upper string.
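The partially matched crossover illustrated in Fig. 5 can be sketched as a generic swap-based PMX for permutation strings. This is our illustrative rendering under an assumed list representation, not the authors' code; each exchange inside the matching section is a swap, so both offspring remain valid permutations.

```python
import random

def pmx(parent_a, parent_b, rng=random):
    """Partially matched crossover for permutation strings (upper string).

    Two cut points are chosen at random; within the matching section, the
    gene of the other parent is brought into place by swapping, so every
    intermediate string (and each offspring) stays a permutation.
    """
    n = len(parent_a)
    h, k = sorted(rng.sample(range(n), 2))     # two distinct crossover points
    child_a, child_b = list(parent_a), list(parent_b)
    for j in range(h, k + 1):
        # bring parent_b's gene at position j into child_a via a swap
        i = child_a.index(parent_b[j])
        child_a[j], child_a[i] = child_a[i], child_a[j]
        # and symmetrically for child_b
        i = child_b.index(parent_a[j])
        child_b[j], child_b[i] = child_b[i], child_b[j]
    return child_a, child_b
```

The same swap-based repair, applied to (index, value) pairs instead of single genes, gives the PMX for double strings used on each subindividual.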

D. Crossover

If a single-point or multi-point crossover is directly applied to the upper or middle string of individuals of the triple string type, some element of an offspring string may take the same number as another element, so that the offspring no longer represents a permutation. The same violation occurs when solving traveling salesman or scheduling problems through genetic algorithms [15]. In order to avoid this violation, a crossover method called partially matched crossover (PMX) [14] is modified to be suitable for triple strings. PMX is applied as usual to upper strings, whereas, for each pair consisting of a middle string and a lower string, PMX for double strings [29]–[32], [34]–[37] is applied to every subindividual. The detailed procedures of the crossover method for triple strings are as follows.

Partially Matched Crossover (PMX) for Upper String: Let X be the upper string of an individual and Y be the upper string of another individual, and prepare copies X' and Y' of X and Y, respectively.
Step 1) Choose two crossover points u and v (u < v) at random on these strings.
Step 2) Set j := u and repeat the following procedures.
a) Find the position j' such that X'(j') = Y(j), interchange X'(j) with X'(j'), and set j := j + 1.
b) If j > v, stop and let X' be the offspring of X. Otherwise, return to a).
Step 2 is carried out for Y' in the same manner, as shown in Fig. 5.

Partially Matched Crossover (PMX) for Double String: Let (s_r, g) be the middle and lower parts of a subindividual in the rth subpopulation, and let (t_r, h) be the middle and lower parts of another subindividual in the rth subpopulation. First, prepare copies (s'_r, g') and (t'_r, h') of them, respectively.
Step 1) Choose two crossover points u and v (u < v) at random on these strings.
Step 2) Set j := u and repeat the following procedures.
a) Find the position j' such that s'_r(j') = t_r(j), interchange the pair (s'_r(j), g'(j)) with (s'_r(j'), g'(j')), and set j := j + 1.
b) If j > v, stop. Otherwise, return to a).
Step 3) Replace the part from u to v of the lower string g' with the corresponding part of h, and let the result be the offspring of (s_r, g).
This procedure is carried out for (t'_r, h') in the same manner, as shown in Fig. 6.

E. Mutation

It is considered that mutation plays the role of local random search in genetic algorithms. Only for the lower string of a triple

string, mutation of the bit-reverse type is adopted and applied to every subindividual. For the upper string, and for the middle and lower strings of a triple string, the inversion defined by the following algorithm is adopted.
Step 1) After determining two inversion points at random, pick out the part of the string between them.
Step 2) Arrange the substring in reverse order.
Step 3) Put the rearranged substring back into the string.
Fig. 7 illustrates examples of mutation.

Fig. 6. Example of PMX for double string.

Fig. 7. Examples of mutation.

F. Computational Procedures of the Genetic Algorithms

When applying genetic algorithms with decomposition procedures to the problem (1), an approximate optimal solution of desirable precision must be obtained in reasonable time. For this reason, two parameters, the minimal search generation I_min and the maximal search generation I_max, are introduced into the genetic algorithms with decomposition procedures.

Now we are ready to introduce the genetic algorithm with decomposition procedures as an approximate solution method for multidimensional 0-1 knapsack problems with block angular structures.

Computational Procedures:
Step 1) Set the generation index t := 0 and determine the parameter values for the population size N, the probability of crossover p_c, the probability of mutation p_m, the probability of inversion p_i, the convergence criterion ε, the minimal search generation I_min, and the maximal search generation I_max.
Step 2) Generate N individuals whose subindividuals are of the triple string type at random.
Step 3) Evaluate each individual (subindividual) on the basis of the phenotype obtained by the decoding algorithm, and calculate the mean fitness f_mean and the maximal fitness f_max of the population. If f_max - f_mean < ε and t >= I_min, or if t > I_max, regard an individual with the maximal fitness as an approximate optimal individual and terminate. Otherwise, set t := t + 1 and proceed to Step 4.
Step 4)* Apply the reproduction operator to all subpopulations r = 1, ..., p.
Step 5)* Apply the PMX for double strings to the middle and lower parts of every subindividual according to the probability of crossover p_c.
Step 6)* Apply the mutation operator of the bit-reverse type to the lower part of every subindividual according to the probability of mutation p_m, and apply the inversion operator to the middle and lower parts of every subindividual according to the probability of inversion p_i.
Step 7) Apply the PMX for upper strings according to p_c.
Step 8) Apply the inversion operator for upper strings according to p_i, and return to Step 3.

It should be noted here that the operations in the steps marked with * can be applied to every subindividual of all individuals independently. As a result, it is theoretically possible to reduce the amount of working memory needed to solve the problem and to carry out parallel processing.
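The overall search loop of the computational procedures can be summarized in a hedged skeleton in which the operators of this section are passed in as callbacks. All names, the termination test, and the default parameter values below are illustrative assumptions, not the paper's settings.

```python
import random

def genetic_algorithm(init, fitness, reproduce, crossover, mutate,
                      pop_size=100, p_cross=0.8, p_mut=0.05,
                      eps=0.02, t_min=50, t_max=500, rng=None):
    """Skeleton of the search loop (Steps 1-8); operators are callbacks.

    Terminates when the population has converged (f_max - f_mean < eps)
    after at least t_min generations, or when t_max generations elapse.
    """
    rng = rng or random.Random()
    pop = [init(rng) for _ in range(pop_size)]              # Step 2
    best = max(pop, key=fitness)                            # elitist record
    for t in range(1, t_max + 1):
        fits = [fitness(s) for s in pop]                    # Step 3
        f_mean = sum(fits) / len(fits)
        if max(fits) - f_mean < eps and t >= t_min:
            break
        pop = reproduce(pop, fits, rng)                     # Step 4
        for i in range(0, pop_size - 1, 2):                 # Steps 5 and 7
            if rng.random() < p_cross:
                pop[i], pop[i + 1] = crossover(pop[i], pop[i + 1], rng)
        pop = [mutate(s, rng) if rng.random() < p_mut else s
               for s in pop]                                # Steps 6 and 8
        cand = max(pop, key=fitness)
        if fitness(cand) > fitness(best):                   # keep best ever
            best = cand
    return best
```

In the paper's method, init would generate triple strings, fitness would decode and apply (2), and the crossover and mutation callbacks would be the triple string PMX and bit-reverse/inversion operators.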

IV. NUMERICAL EXPERIMENTS

To investigate the feasibility and efficiency of the proposed method, consider multidimensional 0-1 knapsack problems with block angular structures having 30, 50, 70, 100, 150, and 200 variables.

Each element of the cost vectors c_r and of the coefficient matrices A_r and B_r, r = 1, ..., p, in the numerical examples corresponding to the problem (1) was selected at random from a given closed interval. On the basis of these values, each element of the right-hand sides b^0 and b^r, r = 1, ..., p, was determined by

b^0_i = α Σ_{r=1,...,p} Σ_{j=1,...,n_r} a^r_{ij},  i = 1, ..., m_0
b^r_i = β Σ_{j=1,...,n_r} w^r_{ij},  i = 1, ..., m_r

where a^r_{ij} and w^r_{ij} denote the elements of the matrices A_r and B_r, respectively, and the positive constants α and β denote the degree of looseness of the coupling constraints and the block constraints, respectively. To be more specific, the constraints become looser as α and β increase toward 1, while they become tighter as these constants decrease toward 0. If α ≠ β, the constraints corresponding to the smaller constant have more influence on feasibility, and the other constraints, corresponding to the larger constant, might be redundant. For this reason, we set α = β in the following experiments.

Our numerical experiments were performed on a personal computer (processor: Celeron 466 MHz, memory: 128 MB, OS: Windows NT 4.0) using a Visual C++ compiler (version 6.0). For comparison, the genetic algorithm with double strings [29]–[32], [34], [35], [37] was directly applied to the same problems. Also, in order to compare the obtained results with the corresponding exact optimal solutions or incumbent values, the same problems were solved using LP_SOLVE [3] by M. Berkelaar.¹ The parameter values used in both the genetic algorithm with double strings and the genetic algorithm with decomposition procedures, namely, the population size, the probabilities of crossover, mutation, and inversion, the convergence criterion, and the minimal and maximal search generations, were fixed through our computational experience and were used in all trials of both algorithms.

¹LP_SOLVE [3] solves (mixed integer) linear programming problems. The implementation of the simplex kernel was mainly based on the text by Orchard-Hays [27]. The mixed integer branch and bound part was inspired by Dakin [5].

First, consider the following multidimensional 0-1 knapsack problem (3) with three blocks, 30 variables, and six coupling constraints, each block having its own block constraints. As a numerical example with 30 variables, we used the values, as shown at the bottom of the next page, of c_r, A_r, and B_r, r = 1, 2, 3, determined at random as described above. For each chosen setting of α and β, ten trials of this example were performed through both the


TABLE I EXPERIMENTAL RESULTS FOR 30 VARIABLES (TEN TRIALS)

TABLE II EXPERIMENTAL RESULTS FOR 50 VARIABLES (TEN TRIALS)

genetic algorithm with double strings (GADS) and the genetic algorithm with decomposition procedures (GADP). Also, for comparison with exact optimal solutions, each example was solved by LP_SOLVE [3]. Table I shows the experimental results for the 30-variable problems, where Best, Average, Worst, Time, AG, and # represent the best value, the average value, the worst value, the average processing time, the average generation at which the best value was obtained, and the number of trials in which the best value was obtained, respectively.

For problems with 30 variables, it can be seen from Table I that optimal solutions were obtained in all ten trials by both GADS and GADP. However, concerning the processing time, as expected, LP_SOLVE was much faster than GADP and GADS. It should also be noted that GADP requires more processing time than GADS. As a result, for problems with 30 variables, there is no evidence of an advantage of GADP over GADS and LP_SOLVE.

Next, consider another multidimensional 0-1 knapsack problem with a block angular structure, which has five blocks, 50 variables, and ten coupling constraints. The results obtained through ten trials of GADS and GADP for each of the problems are shown in Table II, together with the experimental results of LP_SOLVE. From Table II, it can be seen that GADP succeeds in all ten trials for certain settings of α and β, while LP_SOLVE cannot locate an optimal solution in some cases. Furthermore, the required processing time of GADP is lower than that of GADS and LP_SOLVE. As a result, for problems with 50 variables, GADP seems more desirable than GADS and LP_SOLVE.

Similar computational experiments were performed on numerical examples with 70, 100, 150, and 200 variables, and the corresponding results are shown in Tables III–VI, respectively.
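A generator for test instances of the kind described above might look as follows. The sampling interval [1, 100] and the default α = β = 0.5 are illustrative placeholders (the paper's actual intervals and looseness values did not survive in this copy); each right-hand side is the looseness constant times the corresponding coefficient row sum.

```python
import random

def make_instance(p, n, m0, m, alpha=0.5, beta=0.5, seed=0, lo=1, hi=100):
    """Generate a random block angular 0-1 knapsack instance.

    p  : number of blocks       n : variables per block
    m0 : coupling constraints   m : block constraints per block
    """
    rng = random.Random(seed)
    rand_matrix = lambda rows, cols: [[rng.randint(lo, hi)
                                       for _ in range(cols)]
                                      for _ in range(rows)]
    c = [[rng.randint(lo, hi) for _ in range(n)] for _ in range(p)]
    A = [rand_matrix(m0, n) for _ in range(p)]   # coupling coefficient blocks
    B = [rand_matrix(m, n) for _ in range(p)]    # block constraint matrices
    # coupling RHS: alpha times the i-th row sum across all blocks
    b0 = [alpha * sum(sum(A[r][i]) for r in range(p)) for i in range(m0)]
    # block RHS: beta times the row sum within the block
    b = [[beta * sum(B[r][i]) for i in range(m)] for r in range(p)]
    return c, A, b0, B, b
```

With alpha and beta near 0 the all-ones solution is far from feasible (tight constraints), while values near 1 make the constraints loose, matching the looseness interpretation in the text.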


It is significant to note here that LP_SOLVE cannot locate optimal solutions for problems with 70, 100, 150, and 200 variables, although it occasionally gives incumbent solutions for problems with 70 and 100 variables. In contrast, although the accuracy of the best solutions obtained through GADP and GADS tends to decrease compared with the cases of 30 or 50 variables, on average GADP gives better results than GADS with respect to the accuracy of the obtained solutions. Furthermore, the processing time of GADS increases more rapidly than that of GADP as the number of variables grows. It is therefore interesting to see how the processing times of both GADS and GADP change with the size of the problem. Fig. 8 shows the processing times for typical problems with 30, 50, 70, 100, 150, and 200 variables through GADP and GADS. From Fig. 8, it is observed that the processing time, as a function of the problem size, increases in a linear fashion for GADP, while increasing in a quadratic fashion for GADS.

It is now appropriate to see how the processing time of GADP changes with the increased size of the problem. Fig. 9 shows


TABLE III EXPERIMENTAL RESULTS FOR 70 VARIABLES (TEN TRIALS)

Fig. 8. Processing time for typical problems with 30, 50, 70, 100, 150, and 200 variables (GADS versus GADP).

TABLE IV EXPERIMENTAL RESULTS FOR 100 VARIABLES (TEN TRIALS)

TABLE V EXPERIMENTAL RESULTS FOR 150 VARIABLES (TEN TRIALS)

TABLE VI EXPERIMENTAL RESULTS FOR 200 VARIABLES (TEN TRIALS)

Fig. 9. Processing time for typical problems with 30, 50, 70, 100, 150, 200, 300, 500, and 1000 variables (GADP).

the processing times for typical problems with 30, 50, 70, 100, 150, 200, 300, 500, and 1000 variables through GADP. As depicted in Fig. 9, the processing time of GADP increases almost linearly with the size of the problem.

Here, we refer to the theoretical aspect of the computational complexity of the GAs. We concentrate on the processing time of the decoding procedure, since it accounts for about 80% of the total processing time of the GAs. At a rough estimate, decoding an individual requires about (m_0 + Σ_{r} m_r) Σ_{r} n_r multiplications, additions, and comparisons in the GADS, while about Σ_{r} (m_0 + m_r) n_r such operations are required in the GADP. Letting n̄ and m̄ be the averages of n_r and m_r, r = 1, ..., p, the computational quantity of the GADS is on average (m_0 + p m̄) p n̄, while that of the GADP is on average (m_0 + m̄) p n̄. Namely, for fixed average block sizes, the computational complexity of the GADS is approximately O(p^2), while that of the GADP is approximately O(p). This estimation seems consistent with the experimental results.

From our numerical experiments through GADS and GADP, it is confirmed that, as an approximate solution method for multidimensional 0-1 knapsack problems with block angular structures, GADP is more efficient and effective than GADS.

V. CONCLUSIONS

In this paper, we have focused on large scale multidimensional 0-1 knapsack problems with block angular structures. By utilizing the block angular structure of the problem, a triple string representation and the corresponding decoding algorithm were introduced, and it was shown that a potential solution satisfying not only the block constraints but also the coupling constraints can be obtained for each individual. Then genetic algorithms with decomposition procedures were presented as an approximate solution method for multidimensional 0-1 knapsack problems with block angular structures. Through a large number of computational experiments on numerical examples with 30, 50, 70, 100,


150, 200, 300, 500, and 1000 variables, the feasibility and efficiency of the proposed method were demonstrated. In the near future, applications of the proposed method to real-world decision making situations, as well as extensions to more general cases, will be required.

REFERENCES [1] T. Bäck, D. B. Fogel, and Z. Michalewicz, Hand Book of Evolutionary Computation. London, U.K.: Oxford Univ. Press, 1997. [2] R. E. Bellman and L. A. Zadeh, “Decision making in a fuzzy environment,” Manage. Sci., vol. 17, pp. 141–164, 1970. [3] M. Berkelaar. lp_solve 2.0. [Online]. Available: ftp://ftp.es.ele.tue.nl/ pub/lp_solve [4] I. T. Christou, W. Martin, and R. R. Meyer, “Genetic algorithms as multicoodinators in large-scale optimization,” Dept. of Computer Sciences, University of Wisconsin, Madison, WI, Tech. Rep. MP-TR-96-14, 1996. [5] R. J. Dakin, “A tree search algorithm for mixed integer programming problems,” J. Comput., vol. 8, pp. 250–255, 1965. [6] G. B. Dantzig and P. Wolfe, “Decomposition principle for linear programs,” Oper. Res., vol. 8, pp. 101–111, 1960. , “The decomposition algorithm for linear programming,” Econo[7] metrica, vol. 29, pp. 767–778, 1961. [8] L. Davis, Ed., Genetic Algorithms and Simulated Annealing. San Mateo, CA: Morgan Kaufmann, 1987. [9] M. Gen and R. Cheng, Genetic Algorithms & Engineering Design. New York: Wiley, 1996. [10] A. M. Geoffrion, “Elements of large-scale mathematical programming,” Manage. Sci., vol. 16, pp. 652–675, 1970. [11] , “Elements of large-scale mathematical programming,” Manage. Sci., vol. 16, pp. 676–691, 1970. [12] , “Primal resource-directive approaches for optimizing nonlinear decomposable systems,” Oper. Res., vol. 18, pp. 375–403, 1970. [13] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley, 1989. [14] D. E. Goldberg and R. Lingle, “Alleles, loci, and the traveling salesman problem,” in Proc. 1st Int. Conf. Genetic Algorithms Their Applications. Pittsburgh, PA: Lawrence Erlbaum, 1985, pp. 154–159. [15] J. J. Grefenstette, R. Gopal, B. Rosmaita, and D. Van Gucht, “Genetic algorithms for the traveling salesman problem,” in Proc. 1st Int. Conf. Genetic Algorithms Their Applications. Pittsburgh, PA: Lawrence Erlbaum, 1985, pp. 
160–168. [16] Y. Y. Haimes, K. Tarvainen, T. Shima, and J. Thadathil, Hierarchical Multiobjective Analysis of Large-Scale Systems. New York: Hemisphere, 1989. [17] D. M. Himmelblau, Ed., Decomposition of Large-Scale Problems. Amsterdam, The Netherlands: North-Holland, 1973. [18] J. K. Ho and E. Loute, “Computational experience with advanced implementation of decomposition algorithms for linear programming,” Math. Programming, vol. 27, pp. 283–290, 1983. [19] J. K. Ho and R. P. Sundarraj, “An advanced implementation of the Dantzig–Wolfe decomposition algorithm for linear programming,” Math. Programming, vol. 20, pp. 303–326, 1981. [20] , “Computational experience with advanced implementation of decomposition algorithms for linear programming,” Math. Programming, vol. 27, pp. 283–290, 1983. [21] , DECOMP: An Implementation of Dantzig–Wolfe Decomposition for Linear Programming. New York: Springer-Verlag, 1989. [22] J. H. Holland, Adaptation in Natural and Artificial Systems. Ann Arbor, MI: Univ. Michigan Press, 1992. [23] S. Khuri, T. Bäck, and J. Heitkötter, “The zero/one multiple knapsack problem and genetic algorithms,” in Proc. 1994 ACM Symp. Applied Computing, 1994, pp. 188–193. [24] L. S. Lasdon, “Duality and decomposition in mathematical programming,” IEEE Trans. Syst. Sci. Cybern., vol. SSC-4, pp. 86–100, 1968. [25] , Optimization Theory for Large Scale Systems. New York: Macmillan, 1970.
[26] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs. Berlin, Germany: Springer-Verlag, 1992.
[27] W. Orchard-Hays, Advanced Linear Programming: Computing Techniques. New York: McGraw-Hill, 1968.
[28] M. Sakawa, Fuzzy Sets and Interactive Multiobjective Optimization. New York: Plenum, 1993.
[29] M. Sakawa, M. Inuiguchi, H. Sunada, and K. Sawada, “Fuzzy multiobjective combinatorial optimization through revised genetic algorithms” (in Japanese), J. Jpn. Soc. Fuzzy Theory Syst., vol. 6, no. 1, pp. 177–186, 1994.
[30] M. Sakawa, K. Kato, and T. Shibano, “An interactive fuzzy satisficing method for multiobjective multidimensional 0-1 knapsack problems through genetic algorithms,” in Proc. IEEE Int. Conf. Evolutionary Computation, 1996, pp. 243–246.
[31] M. Sakawa, K. Kato, H. Sunada, and Y. Enda, “An interactive fuzzy satisficing method for multiobjective 0-1 programming problems through revised genetic algorithms” (in Japanese), J. Jpn. Soc. Fuzzy Theory Syst., vol. 7, no. 2, pp. 361–370, 1995.
[32] M. Sakawa, K. Kato, H. Sunada, and T. Shibano, “Fuzzy programming for multiobjective 0-1 programming problems through revised genetic algorithms,” Eur. J. Oper. Res., vol. 97, pp. 149–158, 1997.
[33] M. Sakawa and F. Seo, “Interactive multiobjective decisionmaking for large-scale systems and its application to environmental systems,” IEEE Trans. Syst., Man, Cybern., vol. SMC-10, pp. 796–806, 1980.
[34] M. Sakawa and T. Shibano, “Interactive fuzzy programming for multiobjective 0-1 programming problems through genetic algorithms with double strings,” in Fuzzy Logic Foundations and Industrial Applications, D. Ruan, Ed. Norwell, MA: Kluwer, 1996, pp. 111–128.
[35] M. Sakawa and T. Shibano, “Multiobjective fuzzy satisficing methods for 0-1 knapsack problems through genetic algorithms,” in Fuzzy Evolutionary Computation, W. Pedrycz, Ed. Norwell, MA: Kluwer, 1997, pp. 155–177.
[36] M. Sakawa and T. Shibano, “An interactive fuzzy satisficing method for multiobjective 0-1 programming problems with fuzzy numbers through genetic algorithms with double strings,” Eur. J. Oper. Res., vol. 107, pp. 564–574, 1998.
[37] M. Sakawa and T. Shibano, “An interactive approach to fuzzy multiobjective 0-1 programming problems using genetic algorithms,” in Evolutionary Computations and Intelligent Systems, M. Gen and Y. Tsujimura, Eds. Gordon & Breach, 1999.
[38] M. Sakawa and M. Tanaka, Genetic Algorithms (in Japanese). Tokyo, Japan: Asakura, 1995.
[39] M. Sakawa and H. Yano, “An interactive fuzzy satisficing method using augmented minimax problems and its application to environmental systems,” IEEE Trans. Syst., Man, Cybern., vol. SMC-15, pp. 720–729, 1985.
[40] M. Sakawa, H. Yano, and T. Yumine, “An interactive fuzzy satisficing method for multiobjective linear-programming problems and its application,” IEEE Trans. Syst., Man, Cybern., vol. SMC-17, pp. 654–661, 1987.
[41] D. A. Wismer, Ed., Optimization Methods for Large-Scale Systems … With Applications. New York: McGraw-Hill, 1971.

Kosuke Kato (M’02) was born in Aichi, Japan, on December 28, 1967. He received the B.E. and M.E. degrees in biophysical engineering from Osaka University, Osaka, Japan, in 1991 and 1993, respectively, and the D.E. degree from Kyoto University, Kyoto, Japan, in 1999. From 1994 to 2001, he was a Research Associate in the Department of Industrial and Systems Engineering, Faculty of Engineering, Hiroshima University, Hiroshima, Japan. Since 2001, he has been with the Department of Artificial Complex Systems Engineering, Graduate School of Engineering, Hiroshima University, as a Research Associate. His current research interests are in evolutionary computation, large-scale programming, and fuzzy/stochastic multiobjective programming. Dr. Kato is a member of the Institute of Electronics, Information and Communication Engineers of Japan, the Japan Society for Fuzzy Theory and Systems, and the Operations Research Society of Japan.


Masatoshi Sakawa (M’91) was born in Matsuyama, Japan, on August 11, 1947. He received the B.E., M.E., and D.E. degrees in applied mathematics and physics from Kyoto University, Kyoto, Japan, in 1970, 1972, and 1975, respectively. From 1975, he was a Research Associate at Kobe University, Kobe, Japan, where, from 1981, he was an Associate Professor in the Department of Systems Engineering. From 1987 to 1990, he was a Professor in the Department of Computer Science at Iwate University, Iwate, Japan. From 1990 to 2001, he was a Professor in the Department of Industrial and Systems Engineering, Hiroshima University, Hiroshima, Japan. At present, he is a Professor in the Department of Artificial Complex Systems Engineering at Hiroshima University. He was an Honorary Visiting Professor in the Computation Department of the University of Manchester Institute of Science and Technology (UMIST), Manchester, U.K., sponsored by the Japan Society for the Promotion of Science (JSPS), from March to December 1991. He was also a Visiting Professor at the Kyoto Institute of Economic Research, Kyoto University, from April 1991 to March 1992. His research and teaching activities are in the area of systems engineering, especially mathematical optimization, multiobjective decision making, fuzzy mathematical programming, large-scale optimization, multilevel mathematical programming, and game theory. In addition to about 250 articles in national and international journals, he is the author or coauthor of five books in English and 15 books in Japanese. Dr. Sakawa is a member of the Japan Association of Automatic Control Engineering, the Operations Research Society of Japan, the Institute of Electronics, Information and Communication Engineers of Japan, the Society of Instrument and Control Engineers of Japan, the Japan Society for Fuzzy Theory and Systems, and the International Fuzzy Systems Association (IFSA).
