Uncertainty in iterated cooperation games

Peter Andras

Abstract—The emergence and evolution of cooperation among selfish individuals is a key question of theoretical biology. Uncertainty about the outcomes of interactions between individuals is an important determinant of cooperative behavior. Here we describe a model that allows the analysis of the effects of such uncertainty on the level of cooperation. We show that in iterated cooperation games the level of cooperation increases with the level of outcome uncertainty, and that this holds both when individuals communicate their cooperation intentions and when they do not.

I. INTRODUCTION

EMERGENCE and evolution of cooperation in communities of selfish individuals is a cornerstone of theoretical biology research. While cooperation is common among a wide range of animals, plants and microbes [1-3], it is not very clear why selfish and possibly unrelated individual organisms would help others, who may also compete with them for resources. Current theories suggest that cooperation may be rooted in kin or similarity-based selection [4-6], and in direct or indirect reciprocity [4-9]. The usual approach to study the emergence and evolution of cooperation is based on playing simple games in which individuals may choose to cooperate or cheat (e.g. the Prisoner's Dilemma game) [5]. In many cases all individuals play against all others and the level of cooperation (i.e. the percentage of individuals who made cooperation decisions) is evaluated after such rounds [6]. The individuals may be simulated agents or actual humans or animals. In real world situations the level of cooperation may be influenced by many factors (e.g. the trustworthiness of potential partners, the potential dangers, the predictability of the outcome) [10-13]. In general, these factors determine the uncertainty of the outcome of interactions between individuals. Generally, more uncertainty about outcomes is likely to induce a higher level of cooperation between individuals. For example, mole rats living in arid areas face more uncertainty in terms of finding food than mole rats living in wet lands. At the same time, mole rats living in arid lands constitute significantly larger groups (i.e. cooperate more) than similar animals living in wet areas [14]. This role of uncertainty about interaction outcomes is usually not included in models of the emergence and evolution of cooperation.

Manuscript received: December 1, 2007. Peter Andras is with the School of Computing Science, University of Newcastle, Newcastle upon Tyne, NE1 7RU, UK (corresponding author: phone: +44-191-2227946; fax: +44-191-2228232; e-mail: [email protected]).

Here we follow our earlier work [15,16] and describe an agent-based simulation that includes the modeling of outcome uncertainty. Our simulation environment also allows the investigation of the effect of communication about cooperation intentions. We show that, in accordance with the expectation based on observing natural cases, the level of cooperation increases with the level of outcome uncertainty. We show that this is the case both when the agents communicate about their cooperation intentions and when they do not.

The rest of the paper is structured as follows. In Section II we discuss the relationship between outcome uncertainty and cooperation, considering real world examples. In Section III we describe the agent-based simulation environment. Section IV describes the variant of the agent simulation in which agents can communicate about their cooperation intentions. Section V contains the presentation of the results. Finally, in Section VI we discuss the implications of the results and draw the conclusions.

II. UNCERTAINTY AND COOPERATION

Uncertainty is present in many forms in the context of potentially cooperative interactions between living organisms. For example, the risk of predation may be uncertain in the case of fish schools [11], the availability of food items for mole rats may be very uncertain in arid environments [14], or the readiness for cooperation of potential partners may be uncertain in the case of humans [12,13]. Generally, such uncertainty is equivalent to the uncertainty of outcomes of potentially cooperative interactions. Sharing predation risk or resources with others may lead to benefits like reduced predation risk or shared resources in the future; however, the magnitude of these benefits is uncertain.

It is expected that higher uncertainty induces a higher level of cooperation between individuals, because individuals can share their individually experienced uncertainty by cooperating with others. Experimental studies in humans show that in more uncertain interaction contexts humans are indeed more likely to cooperate [10,12,13]. In the case of animals, plants and microbes, experimental evidence also indicates that factors inducing more uncertainty (e.g. presence of poisons, predators, lack of warmth and sunshine, lack of water required for generation of food resources) imply increased levels of cooperation [1,11,14,17,18]. However, uncertainty is usually not included in the modeling studies that aim to investigate the mechanisms that

underpin the emergence and evolution of cooperation among selfish individuals. In our earlier work [15,16] we introduced the modeling of contextual uncertainty by incorporating uncertainty in the determination of the outcome of interactions between individuals. Here we continue this work and adopt the simplifying assumption that all kinds of context-driven uncertainty that may influence the likelihood of cooperation decisions are jointly represented by the uncertainty of the outcome of interactions.

III. AGENT-BASED SIMULATION

The agents populate a two-dimensional rectangular world, which is wrapped on both pairs of edges (up and down, right and left). A position in the world may be occupied by more than one agent, and positions of agents can be arbitrarily close (i.e. the world is not divided into a grid of disjoint places). The dimensions of the world are set to 100 × 100.

The agents in our simulation own resources that they use to maintain themselves and also to generate new resources. The agents 'live' for at most 60 time turns. The agents may die earlier if they run out of resources. When they reach the end of their life they may produce a number of offspring agents. The number of offspring depends on the amount of resources owned by the agent, more resources implying a larger number of offspring. If a dying agent has R amount of resources, the mean amount of resources in the agent community at that moment is R_m, and the standard deviation of resources is R_s, then the number of offspring of the agent is calculated as

    n = α · (R − (R_m − β·R_s)) / R_s + n_0                    (1)

where α, β, n_0 are parameters of the simulation environment. If n is negative or R = 0, the agent has no offspring. If n > n_max, where n_max is the allowed upper limit of offspring, the number of offspring is set to n_max. The offspring of an agent inherit its resources divided equally between them. The locations of the offspring are set by a small random modification of the position of the parent agent.

In each time turn each agent tries to choose an interaction partner. The partner is chosen from those agents that are located close enough (i.e. in the neighbourhood) to the agent that is looking for a partner. An agent may be chosen as a partner if it is not already partnered up with another agent. An agent may remain without a partner in a time turn if it cannot find any agent in its neighbourhood that could become its partner. The neighbourhood of an agent is defined as the set of ten closest agents, where the distance between agents is measured in the two-dimensional world populated by the agents.

After finding a partner the agents play a prisoner's dilemma type game with uncertain outcomes. The uncertainty of outcomes represents all uncertainties that may influence the interaction between the agents. The outcome uncertainty is implemented by having as outcomes of the games distributions of possible outcome values from which the agents pick their actual outcome value. For the sake of simplicity we use normal distributions characterized by a mean value and a variance, i.e. N(m, σ²) with probability density function

    φ(x) = (1 / (√(2π)·σ)) · e^(−(x−m)² / (2σ²)).

Playing the game determines the mean value of the distribution, while the variance of the distribution (σ²) is a set value that characterizes the outcome uncertainty of the game. The agents participate in the game with their available resources, which determine the mean value of the outcome distribution. The function determining the mean value is

    f(R) = a · 1 / (1 + e^(−R + R_0))                          (2)

where a and R_0 are parameters and R is the amount of available resources. The parameters are set such that the game operates on the convex part of the function, where f(2x) ≥ 2·f(x). In order to preserve the prisoner's dilemma conditions, the game matrix determining the mean values of the outcome distributions is set as follows

    A1 \ A2   Coop                             Cheat
    Coop      f(R_1) + Δ/2,  f(R_2) + Δ/2      α·f(R_1),  f(R_2) + Δ
    Cheat     f(R_1) + Δ,  α·f(R_2)            f(R_1),  f(R_2)

where Δ = [f(R_1 + R_2) − f(R_1) − f(R_2)]+ (i.e., it takes only the positive values of the expression in brackets and is zero if the value of the expression is negative), 0 < α < 1 is a parameter, and R_1, R_2 are the amounts of resources available to the agents A1, A2. Each cell lists first the mean outcome of A1 and then the mean outcome of A2.

After determining the mean values of the outcome distributions for both agents, each agent picks an actual outcome value from the normal distribution determined by this mean value and the variance σ² that characterizes the outcome uncertainty of the world of the agents. The actual outcome values become the new amounts of resources available to the agents. Note that the actual outcome value may be below or above the mean value given by the game matrix. If the outcome uncertainty is high (i.e. σ² is large) the likelihood of getting much more or much less than the mean value is relatively large. If the outcome uncertainty is low (i.e. σ² is small) in most cases the actual outcome value will be close to the mean value determined by the game matrix.

The cooperation/cheating decision of an agent depends on its intention to cooperate. Each agent has a set level of intention to cooperate, I_coop. When playing a game with a chosen partner the agent decides to cooperate with probability equal to I_coop. Agents inherit their intention to cooperate from their parent with small random variation

    I_coop = I_coop^parent + ε                                 (3)

where ε is chosen from a uniform random distribution over [−0.025, 0.025] (if I_coop < 0 according to the above equation, I_coop is set to 0, and if I_coop > 1 according to the calculation, then I_coop is set to 1).

At the end of each time turn the agents make a random move, i.e. their position is updated according to the equation

    (x_new, y_new) = (x, y) + (ξ_x, ξ_y)                        (4)

where (x, y) is the old position of the agent, (x_new, y_new) is the new position of the agent, and ξ_x, ξ_y are random values from a uniform distribution over [−5, 5].
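As an illustration of how the uncertain-outcome game can be implemented, the sketch below computes the mean outcomes from equation (2) and the game matrix and then samples the actual outcomes from N(m, σ²). It is a minimal Python sketch, not the original simulation code; the parameter values (a, R_0, α, σ²) and all function names are assumptions made here for illustration.

```python
import math
import random

# Illustrative parameter values; the paper does not report the exact settings used.
A, R0, ALPHA = 10.0, 5.0, 0.5   # a, R_0 of equation (2) and α of the game matrix
VARIANCE = 0.5                  # outcome uncertainty σ² (0.3, 0.5 or 0.7 in the paper)

def f(resources):
    """Mean-value function of equation (2): f(R) = a / (1 + exp(-R + R_0))."""
    return A / (1.0 + math.exp(-resources + R0))

def mean_payoffs(r1, r2, choice1, choice2):
    """Mean outcomes for agents A1, A2 given their Coop/Cheat choices (game matrix)."""
    delta = max(f(r1 + r2) - f(r1) - f(r2), 0.0)   # Δ = [f(R1+R2) − f(R1) − f(R2)]+
    if choice1 == "coop" and choice2 == "coop":
        return f(r1) + delta / 2, f(r2) + delta / 2
    if choice1 == "coop" and choice2 == "cheat":
        return ALPHA * f(r1), f(r2) + delta
    if choice1 == "cheat" and choice2 == "coop":
        return f(r1) + delta, ALPHA * f(r2)
    return f(r1), f(r2)

def play_game(r1, r2, i_coop1, i_coop2, variance=VARIANCE):
    """One uncertain-outcome game: each agent cooperates with probability I_coop,
    and the actual outcomes are drawn from N(mean, σ²) around the matrix values."""
    c1 = "coop" if random.random() < i_coop1 else "cheat"
    c2 = "coop" if random.random() < i_coop2 else "cheat"
    m1, m2 = mean_payoffs(r1, r2, c1, c2)
    sd = math.sqrt(variance)
    return random.gauss(m1, sd), random.gauss(m2, sd), c1, c2
```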

The simulation was run for 400 time turns each time. Each simulation was initialized with 1500 agents at randomly set positions, with random initial resource amounts and intentions to cooperate. In each time turn the agents search for an interaction partner, and if they find one, they play the above described game to generate their new resource amount. If an agent cannot find a partner it generates its new amount of resources as if it were playing a cheating/cheating game with another agent (i.e. the mean value of the resource value distribution from which it picks its new resource amount is set to f(R), where R is the amount of its current resources). Agents move randomly at the end of each time turn and deduct a fixed amount of living costs from their resource amount. Agents may die because they run out of resources, or because they reach the end of their life (at most 60 time turns). When an agent dies and still has available resources, it may generate offspring, which inherit its intention to cooperate (with small variation). The offspring initially form a cluster around the place of their parent and gradually move away through random movements.

We note that in the context of our simulation, the cooperation/cheating decision of an agent is governed by the same probability of cooperation (I_coop) throughout the life of the agent. Modification of these probabilities happens as agents generate their offspring. Selection favors successful agents, which have a higher number of offspring. This means
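The per-turn dynamics described above can be summarized in a sketch of one time turn. This builds on the functions of the previous sketch (f, play_game, VARIANCE); choose_partner and make_offspring are hypothetical helpers standing in for the neighbourhood-based partner choice and for the reproduction rule of equations (1) and (3), and the agent attribute names are choices made here, not the paper's implementation.

```python
import math
import random

def run_time_turn(agents, living_cost, world_size=100, step=5, max_age=60):
    """One time turn of the simulation as described above.  Agents are assumed
    to carry .pos, .resources, .i_coop and .age attributes."""
    unpaired = set(agents)
    for agent in agents:
        if agent not in unpaired:
            continue                       # already interacted as someone's partner
        unpaired.discard(agent)
        partner = choose_partner(agent, unpaired)  # one of the ten closest unpaired agents
        if partner is None:
            # no partner: outcome drawn as in a cheat/cheat game with mean f(R)
            agent.resources = random.gauss(f(agent.resources), math.sqrt(VARIANCE))
        else:
            unpaired.discard(partner)
            agent.resources, partner.resources, _, _ = play_game(
                agent.resources, partner.resources, agent.i_coop, partner.i_coop)

    next_generation = []
    for agent in agents:
        # random move with wrap-around (equation (4)) and a fixed living cost
        dx, dy = random.uniform(-step, step), random.uniform(-step, step)
        agent.pos = ((agent.pos[0] + dx) % world_size, (agent.pos[1] + dy) % world_size)
        agent.resources -= living_cost
        agent.age += 1
        if agent.resources <= 0 or agent.age >= max_age:
            next_generation.extend(make_offspring(agent))  # empty if n <= 0 or R = 0
        else:
            next_generation.append(agent)
    return next_generation
```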

that the distribution of I_coop values changes as new generations of agents enter the simulation. These changes represent the change of decisional strategies within the simulated agent community and reflect the adaptation of the agent community to its environment.

IV. COMMUNICATION OF INTENTIONS

In the real worlds of organisms, interactions usually include communication between the participants. These communications, in a general sense, are aimed at informing the interaction partner about the 'intentions' of the organism initiating the communication. For example, microbes may communicate their 'intentions' through the insertion of certain proteins into their cell membrane [17], or monkeys may use certain gestures to indicate their intentions [3]. Communicating these intentions may contribute to the establishment of cooperative interactions. In order to analyze the influence of communication of intentions in the context of outcome uncertainty, the above described agent-based simulation was extended to incorporate communication about the intentions of agents. We implemented a simple agent language that is used to communicate about cooperation intentions. Below we describe this language and its use.

All agents speak a language consisting of the symbols '0', 's', 'i', 'y', 'n', 'h' and 't' (the lexicon of the language). The meanings of these symbols are as follows: '0' – no intention of communication; 's' – start of communication; 'i' – maintaining the communication; 'y' – indication of the willingness to engage in possible cooperation; 'n' – indication of no further interest in communication; 'h' – cooperation (ready to share the benefits of joint use of resources); 't' – cheating (ready to steal the benefits of possible joint use of resources). The last two symbols, 'h' and 't', imply the actual cooperation and cheating decisions in the context of the interaction. The first four symbols are ranked according to their positive contribution towards engagement in cooperation (the least positive is '0' and the most positive is 'y': '0' ≤ 's' ≤ 'i' ≤ 'y').

The agents generate communication symbols when they engage in interaction with another agent. Each agent has its own realization of the language. The language is represented in the form of two-input probabilistic production rules

    u_current, u'_current → {u_next^1 : p_1, …, u_next^k : p_k}                (5)

where u_current is the last communication symbol produced by the agent, u'_current is the last communication symbol produced by the partner of the agent, u_next^1, …, u_next^k are the next communication symbols that can be produced by the agent, and p_1, …, p_k are the probabilities of production of these communication symbols, with Σ_{j=1}^{k} p_j = 1.
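The two-input production rules of equation (5) can be stored, for one agent, as a mapping from symbol pairs to next-symbol distributions. The sketch below is one illustration of this representation and of sampling from it; the dictionary layout, the probability values (compare the example rule (6) below) and the function name are choices made here, not the paper's implementation, and a full rule table covering all symbol pairs is assumed.

```python
import random

# One agent's realization of the language: for each (own last symbol, partner's
# last symbol) pair, a probability distribution over the next symbol to produce.
# The probability values are illustrative only (compare rule (6) below).
rules = {
    ("0", "0"): {"s": 0.6, "0": 0.3, "n": 0.1},
    ("s", "s"): {"i": 0.7, "y": 0.2, "n": 0.1},
    ("i", "i"): {"y": 0.4, "i": 0.5, "n": 0.1},
}

def next_symbol(rules, own_last, partner_last):
    """Sample the next communication symbol from the production rule that
    matches the current symbol pair (equation (5))."""
    distribution = rules[(own_last, partner_last)]
    symbols, probabilities = zip(*distribution.items())
    return random.choices(symbols, weights=probabilities, k=1)[0]
```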

The probabilities are specific to each agent, i.e. they represent the individual realization of the language for each agent. For example, a probabilistic production rule is

    i, i' → {y : 0.4, i : 0.5, n : 0.1}                          (6)

which means that after producing the symbol 'i' and receiving the symbol 'i' from the communication partner, the agent will produce the symbol 'y' with probability 0.4, the symbol 'i' with probability 0.5, and the symbol 'n' with probability 0.1. The probability of the symbol pair ('y', 'y') being followed by the generation of the symbol 'h' is given by the intention to cooperate of the agent (I_coop).

The production rules of the language satisfy intention consistency rules that impose constraints on the probabilities of the production rules. These intention consistency rules are in agreement with human and animal behavior, where the expression of friendly or cooperative signals makes the generation of further friendly (cooperative) signals more likely [3]. If an increasingly positive symbol can be produced using two rules after receiving the same communication symbol from the partner, its production is more likely after the earlier production of a more positive symbol (e.g. a handshake is more likely after a big friendly smile than after a shy little smile). More formally, if s_0, s_1, s_2, s_3 are communication symbols such that according to their positivity ranking s_1 ≤ s_2 ≤ s_3, and s_3 can be produced according to the production rules s_1, s_0 → s_3 : p_1 and s_2, s_0 → s_3 : p_2, where p_1, p_2 are the probabilities of application of these rules, then p_1 ≤ p_2. Similarly, if the production rules are s_0, s_1 → s_3 : p_1 and s_0, s_2 → s_3 : p_2, and s_1 ≤ s_2 ≤ s_3 holds, then again p_1 ≤ p_2.

After selecting a collaborating partner the agents may engage in a communication process. The communication process starts properly after both agents have communicated the 's' symbol. We set a limit (L_1) for the preliminary communication (i.e. before 's' has been communicated from both sides). If two agents do not reach the proper start of the communication within a communication of length L_1 they stop further communication. During the communication process the agents use their own realization of the common language to produce communication symbols. The communication process ends either with the communication of an 'n' symbol (i.e., signalling no further interest), or with the communication of the 'y' symbol by both partners (or by automatic stopping of the communication according to the set rules). After this each agent decides whether to cooperate or cheat by producing the symbol 'h' or 't'.
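Putting the pieces together, the sketch below illustrates one possible reading of the two-stage communication process. It reuses next_symbol from the previous sketch; the values of L_1 and L_2 are illustrative, the agent attribute names are assumptions, and the temporary intention update anticipates equation (7) from the next paragraph. It is a sketch under these assumptions, not the paper's implementation.

```python
import random

POSITIVITY = {"0": 0, "s": 1, "i": 2, "y": 3}   # ranking '0' <= 's' <= 'i' <= 'y'

def negotiate(agent_a, agent_b, l1=5, l2=10, delta=0.025):
    """Two-stage communication between two agents, each exposing .rules (as in
    the previous sketch) and .i_coop.  Returns the pair of 'h'/'t' decisions,
    or None if the communication is abandoned."""
    last = {"a": "0", "b": "0"}
    boost = {"a": agent_a.i_coop, "b": agent_b.i_coop}
    started = {"a": False, "b": False}

    def speak(who, me):
        other = "b" if who == "a" else "a"
        symbol = next_symbol(me.rules, last[who], last[other])
        # temporary intention increase for equally or more positive symbols (eq. (7))
        if POSITIVITY.get(symbol, -1) >= POSITIVITY.get(last[who], -1):
            boost[who] = 1 - (1 - delta) * (1 - boost[who])
        last[who] = symbol
        started[who] = started[who] or symbol == "s"

    # preliminary stage: both agents must produce 's' within L1 steps
    for _ in range(l1):
        speak("a", agent_a)
        speak("b", agent_b)
        if started["a"] and started["b"]:
            break
    else:
        return None

    # main stage: continue until the pair ('y','y'), an 'n', or L2 steps pass
    for _ in range(l2):
        speak("a", agent_a)
        speak("b", agent_b)
        if "n" in last.values():
            return None
        if last["a"] == "y" and last["b"] == "y":
            return tuple("h" if random.random() < boost[w] else "t" for w in ("a", "b"))
    return None
```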

We impose a communication length limit (L_2) on this second stage of communication. If the agents do not reach the communication of 'y' symbols within L_2 steps, they stop their communication.

During each communication process, as an agent produces equally or more positive symbols its intention to cooperate increases. The intention to cooperate of the agent increases temporarily, and the increased intention to cooperate is valid only for the current communication process. The update equation of the intention to cooperate is

    I_coop(t + 1) = 1 − (1 − δ) · (1 − I_coop(t))                (7)

where I_coop(0) = I_coop, t is the counter of communication symbols produced by the agent so far within the current communication process, and δ is a parameter (δ = 0.025). For example, with δ = 0.025 and I_coop(t) = 0.5, equation (7) gives I_coop(t + 1) = 1 − 0.975 · 0.5 = 0.5125.

When agents reproduce at the end of their life, their offspring inherit the language of the parent agent, possibly with some small random modifications of the language rule probabilities. This means that the offspring of an agent will speak the agent language in a very similar manner (using production rules with similar probabilities), which may facilitate cooperative interactions between them.

Although the increase of the intention to cooperate happens in the case of all agents, those that have a very low intention to cooperate only increase an originally low probability of cooperation, which means that they will not necessarily cooperate at the end of the communication process. This cooperation intention increase principle is in agreement with human and animal behavior, where a sequence of friendly signals increases the likelihood of a friendly ending of the interaction, even if the original intentions were less friendly [3].

This relatively simple language simulates the communication of cooperation intentions to other agents. As the communication goes on, the intention to cooperate increases as the agents produce equally or increasingly positive symbols. In this way, if an agent can keep the communication with its partner going (below the length limit), it can increase the likelihood of reaching an actual cooperative interaction.

V. RESULTS

First we analyzed the simulations without the implementation of communication of intentions. We chose three levels of outcome uncertainty (σ² = 0.3, 0.5, and 0.7). We ran 20 randomly initialized simulations for each uncertainty value and calculated the level of cooperation in each simulation for each time turn. The level of cooperation was calculated as the percentage of cooperative interactions in the current population of agents. For each time turn we calculated the average and standard deviation of the level of cooperation. The results are shown in Figure 1.

These results show that the level of cooperation increases with time more steeply in the case of higher levels of outcome uncertainty. This confirms our expectation, based on the examination of natural situations, that more outcome uncertainty implies a higher level of cooperation within a community of selfish individuals.

Figure 1. The evolution of the average level of cooperation in case of simulations with no intention communication, and for three levels of outcome uncertainty. The three levels of outcome uncertainty were 0.3 (nic03 curve), 0.5 (nic05 curve) and 0.7 (nic07 curve). The vertical bars show standard deviations.

Second we analyzed the simulations that have implemented the communication of intentions. In this case we expect to see possibly more cooperation at all considered levels of outcome uncertainty. We again ran 20 simulations for all three levels of outcome uncertainty, determined the levels of cooperation for each simulation and for each time turn, and calculated the averages and standard deviations of the levels of cooperation for each time turn. The results are shown in Figure 2.

Figure 2. The evolution of the average level of cooperation in case of simulations with intention communication, and for three levels of outcome uncertainty. The three levels of outcome uncertainty were 0.3 (wic03 curve), 0.5 (wic05 curve) and 0.7 (wic07 curve). The vertical bars show standard deviations.

The results show again that more outcome uncertainty implies a higher level of cooperation within the community. In addition, the results indicate that in the case of intention communication the high levels of cooperation are reached faster than in the case of simulations with no intention communication. The comparison of the evolution of the level of cooperation in simulations with and without intention communication for all three levels of outcome uncertainty is presented in Figures 3-5.

Figure 3. The comparison of the evolution of the average level of cooperation in case of simulations with and without intention communication – outcome uncertainty 0.3. With intention communication – wic03 curve, no intention communication – nic03 curve. The vertical bars show standard deviations.

Figure 4. The comparison of the evolution of the average level of cooperation in case of simulations with and without intention communication – outcome uncertainty 0.5. With intention communication – wic05 curve, no intention communication – nic05 curve. The vertical bars show standard deviations.

Figures 3-5 indeed show that the level of cooperation rises higher and more quickly in the case of simulations incorporating the communication of cooperation intentions. This is again in good agreement with our expectation, based on natural examples, where more communication between potential cooperation partners appears to increase the likelihood of actual cooperative interactions.

Figure 5. The comparison of the evolution of the average level of cooperation in case of simulations with and without intention communication – outcome uncertainty 0.7. With intention communication – wic07 curve, no intention communication – nic07 curve. The vertical bars show standard deviations.

We note that at the beginning the level of cooperation in simulations with communication of intentions is smaller than the level of cooperation in simulations without the communication of intentions. The likely reason for this is that the initial population is set randomly, including the random setting of the language rule probabilities. At this stage it is possible that many communication interactions end before reaching the cooperation/non-cooperation conclusion, simply because they reach the imposed length limits on communication processes. After more than one generation (one generation of agents lives on average for 60 time turns) – after around 80 time turns – those agents that are better able to reach the conclusion of their communication processes have more offspring. After this the level of cooperation rises quickly in agent populations with the ability to communicate cooperation intentions.

VI. DISCUSSION AND CONCLUSIONS

Our work shows that the effects of the environment that influence the uncertainty of outcomes in potentially cooperative interactions make an important contribution to the determination of the level of cooperation in communities of individuals. While there is some experimental evidence for this [1,10-14,17,18], the role of outcome uncertainty in the determination of the level of cooperation has not yet been analyzed in detail through analytical or simulation studies (although see [15,16]). Our simulation environment is the first which allows this analysis to be done.

Generally, we found that the likelihood of cooperation increases as the outcome uncertainty increases. A simple principle-level explanation of this is that by cooperating individuals share their potentially experienced outcome uncertainty, and through this sharing they reduce their actually experienced outcome uncertainty. However, to understand the mechanisms of how outcome uncertainty influences the level of cooperation, further studies are needed.

So far, our experiments confirm that the positive effect of outcome uncertainty on the level of cooperation is valid in both cases that we considered, i.e. simulations with and without the communication of cooperation intentions. The results show that the increase in the level of cooperation is faster and goes higher in the case of simulations with communication about cooperation intentions. This is likely to be due to the fact that communications, and the language used for these communications, induce an additional selection pressure on individuals. In the case without communications, the key feature of individuals that matters is their intention to cooperate. In the case of simulations with communications it also matters that they are able to communicate their intentions sufficiently quickly and effectively. The additional constraint imposed by the language is also expressed by the fact that in simulations with communications the starting level of cooperation is lower than in simulations without communications. As we already noted, this is due to the fact that randomly initialized individuals may not be able to reach the conclusion of their communication processes because of the communication length limits. After the bad communicators are weeded out in a couple of generations, communications about intentions facilitate the establishment of cooperative interactions.

Our results indicate that a key mechanism behind establishing cooperative interactions may be the presence of uncertainty about the outcomes of potentially cooperative interactions. This mechanism may work independently of other cooperation-inducing mechanisms proposed so far, such as kin selection [4-6] and direct or indirect reciprocity [4-9]. Outcome uncertainty induced cooperation may explain the emergence of cooperative behavior among unrelated individuals without requiring reliance on similarity or on memories of past direct or observed interactions.

Our results also point to the importance of communications about cooperation intentions. Efficient use of such communications adds selection constraints on individuals. Those individuals are favored who can use the language of communications efficiently, and who use it to reduce their effectively experienced outcome uncertainty by participating in many cooperative interactions. This suggests that the use of an intention communication language is another mechanism that facilitates the emergence of cooperation among individuals experiencing uncertain interaction outcomes.

The presented work provides input for the setting and organization of biological and social experiments focused on mechanisms behind cooperative behavior. In particular it suggests experiments where the outcome uncertainty is controlled, and the expectation is that more outcome uncertainty will lead to more cooperative interactions. Further experiments may also look at the role of an intention communication language and the effect of efficient use of

such language on the level of cooperation, with the expectation that more cooperation will emerge in groups that use such a language more efficiently.

NOTE: For the purpose of reproduction and further analysis of the presented results, the software code used for the above described simulations is made accessible. A version of the simulation code is publicly accessible from the webpage of reference [16]. This version includes the simulation without the agent language. The version that includes the simulation with the agent language can be obtained by emailing the author.

REFERENCES

[1] R.M. Callaway, et al., "Positive interactions among alpine plants increase with stress", Nature, vol. 417, pp. 844-847, 2002.
[2] N.J. Mehdiabadi, et al., "Social evolution: Kin preference in a social microbe", Nature, vol. 442, pp. 881-882, 2006.
[3] L.A. Dugatkin, Cooperation Among Animals: An Evolutionary Perspective. New York: Oxford University Press, 1997.
[4] R.L. Trivers, "The evolution of reciprocal altruism", Quarterly Review of Biology, vol. 46, pp. 35-57, 1971.
[5] R. Axelrod and W.D. Hamilton, "The evolution of cooperation", Science, vol. 211, pp. 1390-1396, 1981.
[6] R.L. Riolo, M.D. Cohen, and R. Axelrod, "Evolution of cooperation without reciprocity", Nature, vol. 414, pp. 441-443, 2001.
[7] O. Leimar and P. Hammerstein, "Evolution of cooperation through indirect reciprocity", Proceedings of the Royal Society of London Series B: Biological Sciences, vol. 268, pp. 745-753, 2001.
[8] M.A. Nowak and K. Sigmund, "Evolution of indirect reciprocity by image scoring", Nature, vol. 393, pp. 573-577, 1998.
[9] G. Roberts and T.N. Sherratt, "Development of cooperative relationships through increasing investment", Nature, vol. 394, pp. 175-179, 1998.
[10] T. Kameda, M. Takezawa, R.S. Tindale, and C.M. Smith, "Social sharing and risk reduction – Exploring a computational algorithm for the psychology of windfall gains", Evolution and Human Behavior, vol. 23, pp. 11-33, 2002.
[11] B.H. Seghers, "Schooling behaviour in the guppy (Poecilia reticulata): an evolutionary response to predation", Evolution, vol. 28, pp. 486-489, 1974.
[12] B.D. Pulford and A.M. Coleman, "Ambiguous games: Evidence for strategic ambiguity aversion", Quarterly Journal of Experimental Psychology, vol. 60, pp. 1083-1100, 2007.
[13] B. Majolo, K. Ames, R. Brumpton, R. Garratt, K. Hall, and N. Wilson, "Human friendship favours cooperation in the iterated prisoner's dilemma", Behaviour, vol. 143, pp. 1383-1395, 2006.
[14] A.C. Spinks, J.U.M. Jarvis, and N.C. Bennett, "Comparative patterns of philopatry and dispersal in two common mole-rat populations: implications for the evolution of mole-rat sociality", Journal of Animal Ecology, vol. 69, pp. 224-234, 2000.
[15] P. Andras, G. Roberts, and J. Lazarus, "Environmental risk, cooperation and communication complexity", in Adaptive Agents and Multi-Agent Systems, E. Alonso, D. Kudenko, and D. Kazakov, Eds., Berlin: Springer-Verlag, 2003, pp. 49-65.
[16] P. Andras, J. Lazarus, G. Roberts, and S.J. Lynden, "Uncertainty and cooperation: Analytical results and a simulated agent society", JASSS – Journal of Artificial Societies and Social Simulation, vol. 9, paper 1/7, 2006.
[17] E. Drenkard and F.M. Ausubel, "Pseudomonas biofilm formation and antibiotic resistance are linked to phenotypic variation", Nature, vol. 416, pp. 740-743, 2002.
[18] M. De Bono, D.M. Tobin, M.W. Davis, L. Avery, and C.I. Bargmann, "Social feeding in Caenorhabditis elegans is induced by neurons that detect aversive stimuli", Nature, vol. 419, pp. 899-903, 2002.