From GAs to Artificial Immune Systems: Improving Adaptation in Time Dependent Optimization

Alessio Gaspar
[email protected] http://www.essi.fr/~gaspar
Philippe Collard
[email protected] I3S Laboratory 2000 Rte des Colles Sophia Antipolis BIOT 06460 (France)
Abstract
This paper explores Time Dependent Optimization (Tdo) as a measure of adaptiveness in artificial systems. We first discuss this choice and review classical Tdo models in order to propose a canonical benchmark. Then, we underline the central role of diversity in adaptive dynamics for biological and cybernetic systems and illustrate it with a state of the art of evolutionary Tdo (Etdo). A Simple Artificial Immune System (Sais) is then proposed and experimentally compared to Etdo approaches. Encouraging results are explained by strong analogies between Sais and GAs, as well as by Sais's ability to maintain stable heterogeneous populations as a model of Idiotypic Networks. We conclude on the relevance of Artificial Immune Systems as genuinely adaptive artificial systems.
1 Introduction

1.1 Are GAs Adaptive Systems?
Directly inspired by the Darwinian model of natural evolution, the Genetic Algorithm (GA) was introduced by John Holland in the sixties as an artificial adaptive system ([14]). Since then, its successful application as an optimizer on non-trivial combinatorial problems has progressively narrowed the research of the growing GA community to optimization-related issues. Even if GAs are not Function Optimizers ([6, 12]), they nevertheless share with them many features which make it difficult to decide whether they are innovative optimization techniques or genuinely adaptive artificial systems. Their success over traditional optimizers in a wide range of applications is seen as evidence of their superior adaptiveness. But their efficiency as optimizers ends up contradicting some of the principles of natural evolution. In particular, GAs converge and therefore progressively lose diversity as the optimum spreads over their population. Such a behavior is fundamental for optimizers, which are valued according to the certainty and speed of their convergence, but for artificial adaptive systems this homogenization is completely at odds with the polymorphism observed in populations undergoing natural evolution. Moreover, convergence progressively inhibits the effects of crossover and reduces the GA's exploration dynamics to mutation only. Here lies, indeed, the fundamental misconception of
adaptation that leads many to consider Genetic Algorithms as adaptive systems. A natural system is alive as long as it stays in viable states. Therefore, viable states must enable the system either to reach another viable state or to stay in the current one. By analogy, this rule carries over to artificial evolution: a GA ceases to be an adaptive system as soon as it reaches a state which inhibits any further adaptation. For instance, a GA converging toward a uniform population clearly reaches such a non-viable state. This reveals a perception of adaptation as a time-bounded dynamic where it should be a perpetual one. This vision is incompatible with Artificial Life purposes as well as with the pragmatic requirements of Tdo. In order to investigate adaptive behaviors in an optimization framework, this paper advocates the use of a particular class of optimization problems which requires a reactive rather than a convergent dynamic from the system. The next section details the Time Dependent Optimization (Tdo) problem, underlining its key role as well as the nature of the solutions we are going to consider in this paper.

1.2 A Rosetta Stone for Adaptiveness
A static fitness function associates to each individual a real-valued fitness. Similarly, a Time Dependent Fitness Function associates such a fitness to each individual at time t. The Time Dependent Optimization problem (Tdo), which consists in tracking the successive optima of such a non-stationary function, is known to be poorly handled by classical GAs. As underlined early on by Salomon and Eggenberger ([23]), Tdo lies at the crossroads of research on evolutionary optimization and on adaptive behaviors. In order to optimize such environments efficiently, GAs must abandon their convergent dynamic and maintain their ability to react to unpredictable environmental changes. Tdo can therefore be considered indifferently as (1) the problem of maintaining a viable state in a reactive GA, (2) efficient optimization in dynamic environments or (3) the realization of adaptive vs. convergent dynamics in artificial systems. Each can be studied with the convenience of an optimization framework, i.e. numeric evaluation of progress, reproducibility of experiments and availability of benchmarks. For these reasons, we advocate exploring non-convergent optimization dynamics to measure improvements in GAs' adaptiveness. Improving adaptiveness therefore means counteracting convergence. We
will refer to such approaches as Diversity Respectful Evolutionary Computation, as they rely on the maintenance of polymorphism in populations to prevent their homogenization. The central role of diversity in these approaches is justified by biology and cybernetics. For biologists, the elitist nature of Darwin's survival-of-the-fittest theory differs from the diversity observed in ecosystems, where no species dominates. Similarly, redundancy seems to be the keyword in structures like DNA, neural networks or immune systems, which all dispatch information over thousands of elements and, for the two latter, efficiently implement robust associative memory features. W. Ross Ashby, one of the fathers of cybernetics, expressed this idea with the principle of selective variety: "The larger the variety of configurations a system undergoes, the larger the probability that at least one of these configurations will be selectively retained". Clearly, the larger the polymorphism, the larger the probability that at least one individual will be selectively retained [when the optimum changes]. Whether it is called redundancy or variety, diversity is the keystone of adaptive dynamics and the foremost motivation of many evolutionary approaches to Tdo.

1.3 Scope of the Paper
This paper is organized as follows. The next section reviews classical Tdo tasks and discusses their parameters in order to propose a canonical benchmark. Section 3 presents four evolutionary approaches to Tdo, underlining their strengths and weaknesses. Then, we introduce the motivations that lead us to consider Artificial Immune Systems as good candidates for Tdo and detail our model, named Sais, before evaluating it in Section 5 by comparison with other evolutionary approaches. We finally discuss both the reactive and memory capacities of Sais and present work in progress.
2 Tdo Models
We review classical Time Dependent Optimization problems and their parameters before justifying the choice of Pattern Tracking as a canonical benchmark.

2.1 State of the Art
For now, we restrict ourselves to simple models rather than industrial ones, which will be investigated in further work to evaluate the scalability of our approach.

Pattern Tracking. In this model, introduced by K. Pettit and E. Swigger [21], the fitness of a chromosome is its similarity to an arbitrary bit-string which represents the current optimum and which changes regularly. F. Vavak and T. Fogarty ([25]) expressed the advantage of Pt: "the problem used in this study [...] does not represent any particular application and was chosen so that analysis of the results was easy and clear.".

Dynamical Gaussians. Dynamical Gaussians were first used by Helen Cobb and
J. J. Grefenstette [3]. In their model, a multidimensional landscape is composed of randomly rising and flattening gaussians. We already proposed a simplified version of this environment ([4, 8]), which turned out to be similar to the one used by Vavak et al. ([26]). When optimizing a Mdg, a chromosome encodes n variables and tracks an optimum sliding along the diagonal of the search space. The fitness function is $f_t(x) = \frac{1}{n}\sum_{i=1}^{n} e^{-\frac{(x_i - t)^2}{2(\sigma\ell)^2}}$, where t is the generation, $\ell$ the chromosome's length and $\sigma$ a bell-shape coefficient.

Dynamical Knapsack Problem. Given a set of n objects {o_i} of weights {w_i}, the Knapsack Problem (Kp) consists in choosing the subset of objects whose total weight fits closest to the weight capacity w_c of a container. GAs naturally apply by evolving n-gene-long chromosomes where each bit x_i specifies whether the i-th object is taken (x_i = 1) or not (x_i = 0). The fitness is $f(x) = \sum_{i=1}^{n} w_i x_i$, with respect to w_c. Goldberg and Smith [10] defined the Time Varying Kp (TVKP), at first by changing the interpretation of a particular bit and then by varying the w_i values over time. In both cases, the GA had to be able to switch between two different problems. N. Mori et al. ([19]) proposed a Recurrently Varying Kp (RVKP) where the weight limit cycles through three different values over time, in order to test the memory ability of adaptive systems.
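Returning to the Mdg model, the fitness above translates into a short Python sketch. The function name, the default value of σ and the choice of tying the scale ℓ to the number of encoded variables are our own illustrative assumptions, not settings taken from [3, 4, 8, 26].

```python
import math

def mdg_fitness(x, t, sigma=0.1):
    """Sketch of a time-dependent multidimensional Gaussian fitness.

    x     : list of real values decoded from the chromosome
    t     : current generation; the optimum slides along the diagonal x_i = t
    sigma : bell-shape coefficient controlling the width of the peak
    """
    n = len(x)
    ell = n  # illustrative choice: scale the bell with the number of variables
    return sum(math.exp(-((xi - t) ** 2) / (2.0 * (sigma * ell) ** 2))
               for xi in x) / n
```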
2.2 Parameterizing a Tdo Problem
We propose a way to tune Tdo difficulty by parameterizing the transitions which alter the fitness landscape.

Continuousness of Transitions. Transitions in Pattern Tracking and in the TVKP switch from one fitness landscape to another. On the other hand, the Mdg model implements a progressive switch by respectively decreasing and increasing the fitness of the previous and next optima. This distinguishes Continuous from Alternative Tdo problems.

Transition Period. The Transition Period Tp is the number of generations between two transitions. If the environment features n different optima over g generations, then Tp = g/n.

Transition Range. When only one optimum moves around, the Transition Range is defined as the Hamming distance between two consecutive optima, Tr = H(O_t, O_{t+1}). This has already proved to be a good measure of Tdo difficulty ([4]); more global measures, such as the average fitness variation, would require considering (too) large parts of the search space.

Meta Dynamics. By Meta Dynamics we mean the rules determining the new optimum. This definition encompasses characteristics such as the motion of the optimum in the search space and the cyclic, aperiodic or random nature of the succession of optima.

2.3 Toward a Canonical Benchmark
Despite its simplicity, the Pattern Tracking model enables the tuning of all the parameters previously discussed, and even continuous
transitions can be achieved fairly easily. In fact, dynamic difficulty can easily be adjusted, both qualitatively and quantitatively, and other environments, such as Kp, only differ by an increased static complexity. Even if this paper only deals with the dynamic aspects, it is worth noticing that our model of Pt includes an extra parameter allowing the tuning of its static complexity as well. Evaluation starts with an XOR between the evaluated chromosome and the current optimum. The resulting bit-string is then complemented and the fitness is computed by applying a onemax function to it. The extra parameter is the replacement of the onemax function by any other function defined over the unitation, such as the well-known trap functions ([9]) whose deceptiveness is easily tunable. Finally, Pattern Tracking also allows an intuitive interpretation of results in terms of the search space's metric, whereas other environments alter the fitness landscape in non-trivial but consequently barely analyzable ways. For these reasons we consider Pt as suitable for comparisons.
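To make the benchmark concrete, here is a minimal Python sketch of the Pt evaluation (XOR with the current optimum, complementation, then a function of the unitation) together with a simple meta-dynamics that moves the optimum by Tr bits every Tp generations. Names such as pt_fitness and next_optimum are ours, and flipping randomly chosen loci is only one possible meta-dynamics.

```python
import random

def pt_fitness(chromosome, optimum, shaping=lambda u: u):
    """Pattern Tracking fitness: XOR with the current optimum, complement the
    result, then apply a function of the unitation u (identity = onemax)."""
    u = sum(1 - (c ^ o) for c, o in zip(chromosome, optimum))  # matching bits
    return shaping(u)

def next_optimum(optimum, tr, rng=random):
    """Meta-dynamics: move the optimum to Hamming distance Tr by flipping Tr loci."""
    new = list(optimum)
    for locus in rng.sample(range(len(new)), tr):
        new[locus] ^= 1
    return new

# Usage: a 40-bit optimum changing every Tp = 50 generations with Tr = 5
ell, tp, tr = 40, 50, 5
optimum = [random.randint(0, 1) for _ in range(ell)]
for generation in range(400):
    if generation > 0 and generation % tp == 0:
        optimum = next_optimum(optimum, tr)
    # ... evaluate each individual with pt_fitness(individual, optimum) ...
```

Passing a deceptive trap over u as the `shaping` argument would tune the static difficulty, as described above.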
3 Diversity Respectful GAs
The success of GAs in Tdo relies on the preservation of diverse populations, which can be achieved in two ways: by heterostasic approaches, which preserve diversity, or by heterogenesic ones, which explicitly re-introduce diversity. Five approaches belonging to these classes are discussed hereafter.

3.1 Alleles Preservation
This heterostasic approach has been investigated separately by Rowe and East ([22]) and by Culberson ([5]). Their respective Direct Replacement and Gene Invariant approaches are highly similar and share a common principle: preserving the initial population's diversity. More specifically, the allele coverage of even small populations ensures that they hold the necessary genetic material to adapt to environmental changes. By enforcing the replacement of parents by their offspring after crossover, the authors prevent alleles from disappearing at any locus. Without convergence, the initial diversity is preserved and eases further adaptations.

3.2 Diversity Re-introduction
Instead of preserving diversity, heterogenesic approaches counteract convergence by an explicit re-introduction of random genetic material. Various methods have been proposed to date, such as the Hypermutation and the Random Immigrant operators ([11, 3]). The first monitors the population's best fitness and temporarily boosts the mutation rate (hypermutation rate) if a degradation is observed. The choice of the monitoring criterion can be critical, as discussed by H. Cobb herself. A decrease of the best fitness from one generation to the next cannot detect changes in environments where the successive optima feature increasing fitness values over time. On the other hand, triggering hypermutation whenever a convergence rate is reached boils down
to periodically restarting the GA (see below). In this respect, the Random Immigrant technique is better, as it replaces a given percentage of the population by random individuals at each generation to simulate immigration. The drawback is that this dynamic is not correlated with the environment and provides diversity regardless of real needs. It is therefore paradoxical to speak of enhanced adaptiveness.

3.3 Restart Approach
Rather than improving GAs' diversity management, some approaches accentuate their convergent dynamics to make them even faster. Krishnakumar framed such an approach in a restart-when-needed philosophy ([18]). His Micro GA (µGA) works with a minimal population (5 individuals). Once convergence is achieved, the best chromosome is saved and the µGA restarted. The µGA outperformed various GAs, given the same number of fitness evaluations, on a real-world problem.

3.4 Local Search
F. Vavak, T. C. Fogarty and K. Jukes have investigated the benefits of a hybrid evolutionary approach embedding a local search operator triggered whenever a convergence rate is reached ([26]). This so-called shift operator adds to or subtracts from the values encoded by the chromosome the value of a random shift register whose size increases progressively. This method differs from hypermutation insofar as the amount of diversity introduced grows progressively rather than suddenly. The method was successful on several real-world applications but remains ill-suited to combinatorial problems (e.g. Kp and Pt) as opposed to numerical ones. An alternative solution is to randomly flip an increasing number of bits instead of adding or subtracting an increasing numerical value. This approach introduces diversity more progressively than Random Immigrants or hypermutation do.
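A minimal sketch of this bit-flip variant, assuming the caller supplies the fitness function and decides when the operator is triggered, could look as follows; the growing-radius schedule and the names are illustrative rather than taken from [26].

```python
import random

def bitflip_local_search(individual, fitness, max_radius, rng=random):
    """Local search whose perturbation radius grows only while no improvement
    is found, so diversity is introduced progressively rather than suddenly."""
    best, best_fit = list(individual), fitness(individual)
    radius = 1
    while radius <= max_radius:
        candidate = list(best)
        for locus in rng.sample(range(len(candidate)), radius):
            candidate[locus] ^= 1            # flip `radius` randomly chosen bits
        candidate_fit = fitness(candidate)
        if candidate_fit > best_fit:         # keep improvements and reset the radius
            best, best_fit, radius = candidate, candidate_fit, 1
        else:
            radius += 1                      # otherwise widen the search
    return best, best_fit
```

As in [26], such an operator would only be triggered once a given convergence rate is reached.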
4 A Simple Artificial Immune System
We propose a Simple Artificial Immune System (Sais) as an alternative artificial adaptive system.

4.1 Biological Metaphor
All animals are protected from potentially harmful pathogens by humors and other non-specific defense systems classified as Natural Immunity. Vertebrates feature an additional adaptive immunity characterized by (1) a specific response destroying substances (antigens) recognized as non-self and (2) a memory enabling more efficient responses in case of further infection by a similar antigen. Let us consider a brief overview of the main actors by which the immune responses are processed:
Lymphocytes. They are white blood cells generated in the bone marrow, featuring receptors matching the 3D shape of a specific antigen. Auto-immune diseases occur when lymphocytes matching self cells are released in the body instead of being eliminated in the bone marrow by a process called negative selection. In the lymphoid organs, some lymphocytes specialize into either B-Cells (released in the fluids: Humoral Immunity) or T-Cells processed in the Thymus (Cellular Immunity).
T-Cells (Thymus-originated cells). They can be either Helper T-Cells which, along with B-Cells, build up the immune response (cf. below), or Cytotoxic T-Cells which eliminate the antigens detected by the immune system.
B-Cells (Bone-marrow-originated cells). They produce antibodies, a special group of blood proteins (immunoglobulins) which bind to antigens to immobilize them, prevent them from entering healthy cells, or catalyse a destructive reaction with the so-called complement system. Antibodies also produce InterLeukins to alert Helper T-Cells when outnumbered by antigens. B-Cells combine with Helper T-Cells to transform into Blast Cells, which divide and clone, Plasma Cells, which massively produce the antibody, or Memory Cells, which ease the next reaction to similar antigens.
The primary response of the immune system is originated by antibodies that produce an interleukin (Il1) which causes Helper T-Cells to transform into LymphoBlasts which, in turn, trigger the immune response at various levels. At first, the production of blood cells (and therefore lymphocytes) in the bone marrow is increased, then the growth of Killer T-Cells is promoted along with the clonal selection of B-Cells into Plasma Cells. The latter goes along with a phase of Somatic Hypermutation of the produced clones, which allows the exploration of better matches with the antigen. The secondary response is due to the fact that B-Cells can also be activated by other B-Cells and not only by antigens. Therefore, B-Cells can be preserved in the population and activated by being integrated in a so-called idiotypic network, featuring a self-reinforcing dynamics in a group of B-Cells that match each other in a circular manner. This reinforcement in the absence of any infection preserves the structures needed to respond to any further invasion by a particular antigen already encountered. We propose a computer model loosely based on this complex structure with the hope of embedding principles helping the realization of more adaptive artificial systems. By analogy, the current optimum plays the role of an antigen which has to be matched by B-Cells. Sais is expected to feature both primary (reactive) and secondary (memory of previous infections) immune responses. Sais starts with an initial random population of so-called B-Cells, each able to detect a given antigen specified by a binary string of ℓ bits. Then, at each generation, it applies three operators, Evaluation, Clonal Selection and Recruitment, which are discussed hereafter.
4.2 Evaluation Phase
For the sake of Pattern Tracking, Evaluation consists in matching these strings against the current optimum. This results in a "fitness value", here referred to as exogenic activation and noted ex_i = ℓ − h(B_i, O_t), where B_i refers to the i-th B-Cell of the population P, O_t to the optimum at time t, and h to the Hamming distance between them. This evaluation allows us to consider the group of the n B-Cells featuring the highest exo-activation (n being a heuristic value). Because B-Cells, contrary to GAs' chromosomes, can also be activated according to the presence of similar B-Cells, we compute a so-called endogenic activation for the remaining ones. For all of these, the endo-activation is computed as en_i = (1/N_s − d_i) + 1, where N_s is the number of different types of B-Cells in the population and d_i the density of the i-th B-Cell in the population. This method already integrates a sharing principle, as it tends to enforce convergence, under evolutionary dynamics, toward a set of equally represented categories. This model attempts to capture the memory capacity of natural immune systems. It is based on the Idiotypic Network Hypothesis ([17, 20]), which states that B-Cells can activate other B-Cells and thus form a network dynamics that preserves B-Cells able to suitably respond to previously encountered antigens.

4.3 Clonal Selection Phase
The Clonal Selection mimics the natural immune systems' evolutionary dynamics ([13]), which clones all B-Cells activated by an antigen. We apply a K-Tournament to the B-Cells featuring the best over-average exo-activation values to generate a population P_ex. Then, we simulate the effects of Somatic Hypermutation (which, in natural immune systems, mutates the DNA of the B-Cells resulting from clonal selection ([13])) by applying a random number of one-point mutations to each member of P_ex. Mutants are only kept if they improve their exo-activation. Concerning endo-activated B-Cells, the clonal selection applies differently: they also generate, by selection and replication, an intermediate population P_en, but do not undergo any mutations. This enforces an equilibrated distribution of all strings that have been useful. Their densities in the population will fade with time, leading the oldest B-Cells to disappear. But meanwhile, no disruptiveness due to mutation or crossover will, contrary to what happens with niches ([9]) in GAs, disturb the efficiency of this memory model.
4.4 Recruitment Phase
At this point, P_ex and P_en respectively contain the offspring of the exogenously and endogenously activated B-Cells. The recruitment consists in reintroducing worthy B-Cells into P and discarding old and worthless ones. The mechanism used differs, once again, according to the category of B-Cell concerned. Elements from P_en directly replace their homologs in P, due to the equilibrating role of the memory dynamics implemented here. Elements from P and P_ex undergo a selection process on the basis of their exo-activation. Practically, we use a K-Tournament scheme here as well, putting each element of P in competition with the offspring in P_ex.
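To summarize how the three operators fit together, here is a compact Python sketch of one Sais generation on Pattern Tracking under the definitions above. The tournament handling, the way P_en replaces its homologs and the single clone per selected cell (clonal factor 1) are simplifying assumptions of ours, so this should be read as an illustration of the principle rather than as the exact implementation.

```python
import random
from collections import Counter

def sais_generation(pop, optimum, n_exo, k=4, rng=random):
    """One Sais generation (sketch): evaluation, clonal selection, recruitment.

    pop     : list of B-Cells, each a list of bits
    optimum : current antigen, a bit list of the same length
    n_exo   : heuristic number of cells treated as exogenously activated
    """
    ell = len(optimum)

    # Evaluation: exogenic activation ex_i = ell - Hamming distance to the antigen
    def ex(cell):
        return ell - sum(c != o for c, o in zip(cell, optimum))

    ranked = sorted(pop, key=ex, reverse=True)
    exo_cells, endo_cells = ranked[:n_exo], ranked[n_exo:]

    # Endogenic activation en_i = (1/Ns - d_i) + 1, favouring equally represented types
    counts = Counter(tuple(c) for c in endo_cells)
    def en(cell):
        density = counts[tuple(cell)] / len(endo_cells)
        return (1.0 / len(counts) - density) + 1.0

    # Clonal selection of exo-activated cells: K-tournament, then Lamarckian somatic
    # hypermutation (a one-point mutation is kept only if it improves ex)
    p_ex = []
    for _ in range(len(exo_cells)):
        clone = list(max(rng.sample(exo_cells, min(k, len(exo_cells))), key=ex))
        for _ in range(rng.randint(1, ell)):
            mutant = list(clone)
            mutant[rng.randrange(ell)] ^= 1
            if ex(mutant) > ex(clone):
                clone = mutant
        p_ex.append(clone)

    # Endo-activated cells are replicated by selection only, without any mutation
    p_en = [list(max(rng.sample(endo_cells, min(k, len(endo_cells))), key=en))
            for _ in range(len(endo_cells))] if endo_cells else []

    # Recruitment: P_en directly replaces its homologs; P and P_ex compete on ex
    pool = exo_cells + p_ex
    recruits = [list(max(rng.sample(pool, min(k, len(pool))), key=ex))
                for _ in range(n_exo)]
    return recruits + p_en
```

Under this sketch, the non-specific settings of Section 5 (population size 40, K = 4) would apply unchanged; the number n of exo-activated cells and the clonal factor are Sais-specific choices discussed there.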
4.5 Remarks
Sais is an over-simplified model of the immune system, designed to be minimal while still capturing the essential mechanisms by which immune systems realize primary and secondary responses to dynamically changing complex environments. Among all the details keeping Sais away from its natural homolog, let us cite the choice of binary strings, the fact that we more or less assimilate antibodies and B-Cell densities, and the implementation of somatic hypermutation as a Lamarckian operator ([13]). We also rely on the idiotypic network hypothesis which, although advocated by some theorists ([17, 20]), is still largely contested by experimental researchers in the field. The following results will show that Sais nevertheless captures the essence of the immune adaptive response and is therefore worth being studied further.

5 Experimental Study
We compare various GAs on a Pattern Tracking problem involving bit-strings of length ℓ = 40. Experiments are conducted over 400 generations and the dynamic parameters are set to Tp = 50 generations and Tr = 5 for the first transition. The Transition Range is then increased by 5 at each transition in order to make the changes more and more difficult to adapt to. The curves plot the best fitness vs. generations and are averaged over 200 experiments.
5.1 First Results
At first, we compare the following GAs: Goldberg's Simple Genetic Algorithm (Sga) with a bitwise mutation rate of 0.08 (a value we found, in unpublished experiments, to be optimal for a Sga), and a Hybrid GA (Hybrid) featuring a Lamarckian local search with radius 40 to locally improve fitness. We also evaluated a Random Immigrant GA (Riga) with rate 0.05, Hypermutation (Hyper) with rate 0.2, the µGA (population size 5) and Giga. We started from the authors' standard settings and experimented with different values to arrive at these choices. Non-specific parameters were set as follows: one-point crossover rate 0.7, population size 40, selection by K-Tournament (K = 4) and a worst-individuals replacement scheme. The resulting curves are plotted by order of efficiency (Fig. 1), which is evaluated both by the fitness level achieved and, inversely, by the depth of the drops at each transition. The Hybrid GA proved to be the most efficient, while the other GAs featured similar behaviors at different levels. Only Giga featured a dropping-off depth that diminished over time.

Figure 1: Classical GAs on Pattern Tracking (best fitness vs. generations for Hybrid, Sga, Hyper, Riga, Giga and the µGA).

Figure 2 brackets Sais' efficiency between the Hybrid GA and Hypermutation. We can observe that: (1) Sais converges quicker than Hypermutation and maintains a higher fitness level on average. (2) It is nevertheless not as good as Hybrid and ranks third out of the seven approaches compared. (3) Its drops are smaller, and this quality holds despite a transition difficulty that increases over time. Let us discuss these last two points in more detail.

Figure 2: Sais, Hybrid and Hypermutation (best fitness vs. generations).

5.2 Reactiveness: Sais vs Local Search
In the previous section, the Hybrid approach proved to be the best one. Its main strength is its ability to converge very quickly so that the maximal fitness level is kept high. We now compare Sais with Hybrid experimentally in order to investigate the former's reactiveness. Through Somatic Hypermutation, Sais features a local search with the same radius as the Hybrid approach (at most 40). Nevertheless, a Clonal Factor determines the number of times an activated B-Cell is cloned and therefore the extra evaluations performed.

Figure 3: Hybrid Method (radius 40) and Sais (best fitness vs. generations, zoomed).

Figure 3 compares Hybrid and Sais on a zoomed-in scale. To compensate for the under-exploitation of the search radius, we increase the Clonal Factor from 1 to 10 but, in order to provide a fair comparison, we also introduce a Hybrid flipping all 40 bits and keeping each alteration only if it improves the fitness (Full Hybrid). Results (Fig. 4) show that, despite its weakness in its basic form, Sais potentially makes better use of the computational resources it is given. In fact, it comes very close to the Full Hybrid results at only a quarter of the cost in number of evaluations. The trade-off between reactiveness and computation time is therefore to the advantage of Sais by a factor of four in comparison with the best evolutionary approach from our state of the art. The next section investigates other properties of Sais, still by comparing it to the best GA found so far which features the investigated property.

Figure 4: Improved Sais (clonal factor 10) and Full Hybrid (best fitness vs. generations, zoomed).

Contrary to most GAs tested, Sais limits the magnitude of the dropping-off phenomenon. Under difficult transitions, a diverse population helps to maintain the fitness level (cf. the principle of selective variety in 1.2). Nevertheless, GAs featuring a high mutation rate do not limit the dropping-off depth.

5.3 Idiotypic Memory: Memory and Robustness
As Giga features the closest behavior, we compare it to our approach (Fig. 5). This kind of behavior can be characteristic of either a strong ability to keep populations diversified, or a capacity to memorize previously encountered optima. Giga definitely provides a random diversity, but we are going to show that, beyond this robustness to transitions which keeps the fitness level from dropping too deep, Sais also preserves in priority the B-Cells that have previously been useful. It appears that Sais realizes a trade-off between diversity preservation and convergence: it features both a higher fitness level and a lower diversity than Giga. We interpret this as a sign that its diversity is due to the past experience of the system rather than to randomness. In fact, as Tr increases, a new optimum is more and more likely to be close to a previous one. In such a context, an immune system would remember these strings and therefore reduce the dropping-off depth, as observed in Fig. 5.

Figure 5: Sais vs Giga: Memory Effect (best fitness vs. generations).

We use a Cyclic Pattern Tracking (Cpt), in which the three successive optima 100, 010 and 001 alternate in a loop (each digit standing for 14 bits), to determine whether Sais features an Idiotypic Memory or not. Figure 6 plots the fitness of the best B-Cell of each generation and the densities of each of the three consecutive optima. This is a snapshot of a single run; results are not averaged yet. After one encounter, Sais keeps a sample of each optimum and faces further transitions perfectly, i.e. when an old optimum is encountered again the system reacts immediately thanks to the presence in the population of at least one matching B-Cell. As a consequence, no drop is observed in the fitness level of the system. This situation is ideal, but the instability of the idiotypic memory can cause unpredictable forgetting processes. Indeed, the densities of the B-Cells undergo wide variations. We suggest that systems featuring such oscillating behaviors are particularly sensitive to the finiteness of populations, which can turn an oscillation of large amplitude into a pure and simple disappearance. Population size therefore becomes a sensitive parameter, able to change the behavior of the system qualitatively.

Figure 6: Sais on Cpt (single experiment; best fitness and densities of the three optima vs. generations).

Beyond this experiment, which reveals the potential of the approach, we averaged over 200 experiments to provide fair results. Figure 7 reveals that, even if the idiotypic memory dynamics is unstable, it reduces on average the need to search for the new optimum and provides a diversity in the population that embeds some of the already encountered optima.

Figure 7: Sais on Cpt (200 experiments; best fitness and densities of the three optima vs. generations).
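For reference, the cyclic meta-dynamics used here can be sketched in a few lines of Python; the transition period is left as a parameter since only the three block-structured optima and their looping order are specified above.

```python
def cpt_optimum(generation, tp, block=14):
    """Cyclic Pattern Tracking: the optimum loops over the patterns 100, 010, 001,
    each digit standing for `block` bits (3 x 14 = 42 bits in the experiment above)."""
    phase = (generation // tp) % 3
    return [1 if i // block == phase else 0 for i in range(3 * block)]
```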
6 Discussion

6.1 From GAs to Immune Algorithms
Experimental results confirmed the ability of Sais to handle Tdo. This success is partly explained by the numerous analogies that can be drawn with the GAs presented in the state of the art of Section 3. In fact, Sais implicitly integrates many
of the mechanisms designed over the last 20 years to improve GAs' adaptiveness for Tdo. At first, Clonal Selection mimics evolutionary dynamics, as activated elements spread exponentially. The exploitation of this information is guaranteed by the Somatic Hypermutation mechanism, which can be compared, as we did previously, to a Lamarckian local search operator similar to the one already used by F. Vavak et al. ([26]). Above-average individuals are randomly perturbed to locally explore their neighborhood. The good information is therefore used to guide the search and to start from already interesting areas. The recruitment mechanism has also often been investigated ([2, 24]) as the keystone of the immune system's adaptiveness and compared to sharing or crowding methods insofar as it also maintains stable heterogeneous populations. Moreover, the mechanism itself is very similar, and the analogy has even been extended to machine learning algorithms such as Classifier Systems and Case Based Learning ([7, 15]), which require heterogeneous populations. The combination of such state-of-the-art mechanisms, which have already proved to improve GAs' adaptiveness, makes Sais a valuable candidate for Tdo. Beyond this efficiency, we want to underline the qualitative differences that make it a further step toward adaptiveness in artificial systems.

6.2 Qualitative Differences
More generally, the Artificial Immune System metaphor is a powerful one which embeds a twofold adaptiveness. At first, the system is reactive in the sense that it is able to discover new optima. This corresponds to the primary response of the immune system and to the convergence phase of a GA. Nevertheless, and unlike GAs, Sais preserves diversity so that it remains able to react over time. The convergence is therefore never complete, and this diversity acts like a preventive measure. Sais' dynamics naturally integrates the necessity of diverse populations as it is, in the logic of an immune system, the only viable state. This notion of viability, of enabling further adaptations, is precisely what GAs were lacking. Moreover, the diversity preserved by Sais has two sources. At the beginning, the non-activated B-Cells included in the initial random population are preserved. Then, as new optima are encountered, the successive transitions affect the system, which gains experience and improves its reactiveness to the "already encountered cases". Further reactions are therefore conditioned by the past of the system, which has kept a memory of its successive adaptations in order to improve its behavior on "known ground". Finally, the method used to implement such an immune memory, inspired by the idiotypic network theory, is based on the simple idea that the elements that do not contribute to the exploration of the search space must not undergo disruptive operators. In Sais, previously useful elements only undergo selection, based on an endogenic fitness loosely modeled on idiotypic network interactions.
6.3 A taste of things to come
For now, Sais features an above-average reactiveness to environmental changes, making the best use of the computational resources it is provided with. It also reduces the drop-off phenomenon in general, and more particularly when facing previously encountered optima, where it uses the experience acquired from previous transitions. Further work will be concerned with studying the idiotypic memory dynamics, which is still unstable for now and requires careful parameterization to be efficient. We are currently working on a self-adjustment mechanism that would dynamically size P_en and P_ex. Many choices may also be reconsidered: the assimilation of B-Cells and antibodies, the choice of simple mutations to implement somatic hypermutation ([7]), and so on. Nevertheless, Sais appears as a promising artificial adaptive system opening new perspectives in the study of adaptive dynamics through Time Dependent Optimization tasks.

Acknowledgments
The authors would like to thank M. B. Settembre for her advice and proofreading, and to share with her the results as they shared the work. We also acknowledge J. Culberson for making the Giga code available.
Bibliography
[1] F. Abbattista, G. DiGioia, and A. F. G. DiSanto. An associative memory based on the immune networks. In Proc. of the International Conference on Neural Networks (ICNN), 1996.
[2] H. Bersini and F. Varela. Hints for adaptive problem solving gleaned from immune networks. In Proceedings of the First Conference on Parallel Problem Solving From Nature, pages 343-354, 1990.
[3] H. Cobb and J. Grefenstette. Genetic algorithms for tracking changing environments. In Icga-5, 1993.
[4] P. Collard, C. Escazut, and A. Gaspar. An evolutionary approach for time dependent optimization. IJAIT: International Journal on Artificial Intelligence Tools, 6(4):665-695, 1997.
[5] J. Culberson. Genetic invariance: A new paradigm for genetic algorithm design. Technical report, University of Alberta, 1992.
[6] K. DeJong. Genetic algorithms are not function optimizers. In FOGA: Foundations of Genetic Algorithms, volume 1. Morgan Kaufmann, 1991.
[7] J. Farmer. A rosetta stone for connectionism. Physica D, 42:153-187, 1990.
[8] A. Gaspar and P. Collard. Time dependent optimization with a folding genetic algorithm. In IcTai-97, pages 207-214. IEEE Computer Society Press, November 1997.
[9] D. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, 1989.
[10] D. Goldberg and R. Smith. Nonstationary function optimization using genetic algorithms with dominance and diploidy. In Icga-2, 1987.
[11] J. Grefenstette. Genetic algorithms for changing environments. In R. Manner and B. Manderick, editors, Parallel Problem Solving from Nature 2, pages 465-501. Elsevier Science Publishers B.V., 1992.
[12] I. Harvey. Cognition is not computation: Evolution is not optimisation. In 7th International Conference on Artificial Neural Networks (ICANN97), number 1196 in LNCS, pages 685-690. Springer Verlag, 7-10 October 1997.
[13] R. Hightower, S. Forrest, and A. Perelson. The Baldwin Effect in the Immune System: Learning by Somatic Hypermutation. Addison-Wesley, 1996.
[14] J. H. Holland. Adaptation in Natural and Artificial Systems. MIT Press, 2nd edition, 1992.
[15] J. Hunt, D. Cooke, and H. Holstein. Case memory and retrieval based on the immune system. Lecture Notes in Artificial Intelligence: Case-Based Reasoning Research and Development, 1010:205-216, 1995.
[16] J. E. Hunt and D. E. Cooke. Learning using an artificial immune system. In Journal of Network and Computer Applications: Special Issue on Intelligent Systems: Design and Application, 1996.
[17] N. Jerne. Towards a network theory of the immune system. Annals of Immunology, 125(C):373-389, 1974.
[18] K. Krishnakumar. Micro-genetic algorithms for stationary and non-stationary function optimization. In SPIE Intelligent Control and Adaptive Systems, number 1196, pages 289-296, 1989.
[19] N. Mori, S. Imanishi, H. Kita, and Y. Nishikawa. Adaptation to a changing environment by means of the memory based thermodynamical genetic algorithm. In 7th International Conference on Genetic Algorithms, pages 299-306. Morgan Kaufmann, 1998.
[20] A. Perelson. Immune network theory. Immunological Review, 110:5-36, 1989.
[21] K. Pettit and E. Swigger. An analysis of genetic based pattern tracking and cognitive based component tracking models of adaptation. In Proceedings of the National Conference on AI (AAAI-83). Morgan Kaufmann, 1983.
[22] J. Rowe and I. East. Direct replacement: A genetic algorithm without mutation which avoids deception. In T. Fogarty, editor, Evolutionary Computation. Springer, 1995.
[23] R. Salomon and P. Eggenberger. Adaptation on the evolutionary time scale: A working hypothesis and basic experiments. In Evolution Artificielle, pages 297-308, 1998.
[24] R. Smith, S. Forrest, and A. Perelson. Searching for diverse, cooperative populations with genetic algorithms. Evolutionary Computation, 1(2):127-149, 1993.
[25] F. Vavak and T. Fogarty. A comparative study of steady state and generational genetic algorithms for use in nonstationary environments. In Proceedings of the Society for the Study of Artificial Intelligence and Simulation of Behavior, Workshop on Evolutionary Computation '96, pages 301-307. University of Sussex, 1996.
[26] F. Vavak, T. Fogarty, and K. Jukes. A genetic algorithm with variable range of local search for adaptive control of the dynamic systems. In Mendel '96: Proceedings of the 2nd Mendelian Conference on Genetic Algorithms, 1996.