Adaptivity in Networked Complex Systems

Anthony H. Dekker
Defence Science and Technology Organisation (DSTO), Joint Operations Division
Department of Defence, Canberra ACT 2600, Australia
Email:
[email protected]
Abstract. The fields of Complex Adaptive Systems and Network Science are both important in producing agile forces for 21st century environments. Complex Adaptive Systems theory tells us that agility requires the ability to adapt at different timescales and levels. These levels range from the tactical level to the levels of doctrine change and platform acquisition. Network Science identifies the key attributes for networks underlying agile systems: first, a low average distance, and second, robustness through multiple independent pathways. This paper illustrates these principles with a simple simulation model of collaborative problem-solving. Agents in the model attempt to solve mathematical problems by exchanging information through a network. As the problems alter, the model demonstrates that larger environmental changes require greater degrees of adaptivity, and shows the value of efficient networks. The model also indicates a synergistic benefit, where fast networks enhance adaptivity. This provides a useful lesson for the designers of Defence systems.

Keywords: Complex Adaptive Systems, Network Science, Agility, Collaboration, Simulation

1. INTRODUCTION

Significant contributions to military theory have been made in the past decade in the scientific fields of Complex Adaptive Systems and Network Science. The Complex Adaptive Systems perspective focuses attention on the ways in which Defence systems adapt to complex changing environments. It has contributed to an understanding of complex warfighting techniques for 21st century conflicts (Australian Army 2004), and to an understanding of what it means for a force to be agile. Methods for studying Complex Adaptive Systems include mathematical techniques from physics, agent-based modelling techniques from computer science, and evolutionary optimisation approaches inspired by biology (Coveney and Highfield 1995, Kauffman 1995, Solé and Goodwin 2000, Grisogono 2006).
Network Science has added depth to the concept of Network Centric Warfare or NCW (Alberts et al. 1999, Alberts and Hayes 2003, Court 2006), and has articulated in general terms both the potential advantages and the fundamental limits of NCW. For example, Network Science identifies the average distance in a networked system as the most important network attribute in determining system performance. The first links to be added to, or upgraded in, an initially poor network will produce a very substantial drop in average distance. This translates to a faster system response time, and greater agility. Later additions or upgrades of links, however, have progressively smaller impact on average distance, so that the process of improved networking eventually reaches a regime of diminishing returns. Further development of the key principles for 21st century joint forces requires an integration of the Complex Adaptive
Systems and Network Science approaches. Together, these two ways of understanding modern Defence systems will produce more precise advice to decision-makers, and a more solid basis for forward planning. The work presented here is intended to be a first step in such an integrated approach, building on related past work in Network Science and Complex Adaptive Systems (Atkinson and Moffat 2005, Unewisse and Grisogono 2007, Grisogono and Spaans 2008). In this paper, we will first survey some of the basic principles of Complex Adaptive Systems and Network Science, and then outline a simple simulation experiment that models collaborative decision-making in a networked organisation faced with a changing environment. Analysis of the simulation results identifies some important adaptivity principles.

2. COMPLEX ADAPTIVE SYSTEMS

The Complex Adaptive Systems approach to studying Defence Systems sees the Defence Organisation, as well as a joint force, and even an individual soldier, sailor, or airman as a complex system, interacting with an environment which is also a complex system. As the environment changes, the Defence system must adapt or fail. There are two key aspects to such adaptivity: the time-scale of adaptivity and the degree of adaptivity.

The environment of a Defence system changes at a certain rate (which may vary with time), and the Defence system must respond sufficiently quickly to such changes. If the Defence system does not respond swiftly enough, its actions will be inappropriate for its circumstances, and it will fail. As military technology has developed from foot-soldier to horse cavalry to vehicles powered by internal combustion to aircraft, hostile forces have been able to move within striking range more rapidly, and have required more agile responses. Such tactical/operational agility has involved better sensors, improved Command and
Control, and faster platforms which have the ability to change their mode of operation more quickly (Dekker 2006).

However, the degree of adaptivity is also important. Tactical/operational agility involves responses to a relatively small environmental change, involving a shift in position of a hostile unit whose existence is already known from intelligence information. A Defence system which can respond to such changes with lightning swiftness may, however, still fail to make the greater adjustments that are required by larger environmental changes. For example, the system may struggle with the adjustment from conventional warfare to counterinsurgency (U.S. Dept. of the Army 2007).

Table 1 illustrates two further levels of agility which involve greater adjustments, though over longer time-scales. Organisational agility involves adjusting organisational structures and procedures in response to changes in the nature of threats. The key drivers for this kind of agility are organisational interoperability (Clark and Moon 2001), organisational learning (Senge 1997, Warne 1999), and encouragement of human creativity (Dekker 2006).

Over even longer time-scales, agility involves the ability to reconceptualise fundamental goals, structures, doctrine, and equipment of Defence organisations. In the past, such responses have been driven by technological development. Nazi Germany, for example, achieved successes in the early years of World War II because (thanks largely to Heinz Guderian) it reconceptualised its structures, doctrine, and platforms to take advantage of aircraft and vehicles powered by internal combustion. This reconceptualisation, partly outlined by Guderian in his book Achtung – Panzer! (Guderian 1937), did not so much involve the invention of aircraft and tanks – they had already been developed, and were available in greater numbers and greater quality in countries like France. Rather, this reconceptualisation involved the development of techniques which today would be
called Mission Command, Network Enabled Operations, and Time-Sensitive Targeting; together with the refinement of these techniques via experimentation.
In the future, equally dramatic reconceptualisation of fundamental goals, structures, doctrine, and equipment will be essential to respond to the new threat environment of the 21st century. Such responses will require excellent strategic planning, effective scientific advice and experimentation, agile acquisition techniques, organisational learning, and an encouragement of human creativity. The most effective vision for the future may come from the lower and middle echelons of the Defence hierarchy – Guderian, for example, was a colonel when he first published articles outlining his ideas. Such a vision will almost certainly receive opposition from those strongly invested in older approaches – just as Guderian was given a hostile reception by horse-cavalry officers. Resolving the resulting conflicts will require objective experimental data.
Table 1: Levels of Agility

Level | Type | Timescale | Drivers
1 | Tactical / operational (OODA loop) | Seconds to days | Sensors, C2, platform speed, platform flexibility
2 | Organisational agility | Days to years | Organisational interoperability, organisational learning, creativity
3 | Re-evaluate, Reconceptualise, Re-equip | Years | Organisational learning, creativity, strategic planning, agile acquisition
The essence of a Complex Adaptive System (Grisogono 2006) is that it has: a concept of success or failure in its interaction with the environment; a source of variation in its internal state, which influences its interaction with the environment; and a selection process to choose the best variations, based on feeding back the results of real or simulated interactions with the environment.

At lower levels of adaptivity, the variation in military systems involves changing unit position and changing situation awareness pictures. At higher levels of adaptivity, the variation covers changing organisational structure, procedures, doctrine, and goals.
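To make this variation-selection-feedback cycle concrete, the following short sketch (in Python) shows the three elements working together. It is purely illustrative and is not the model used later in this paper; the pool of ten candidate variations per iteration and the toy evaluation function in the usage example are assumptions introduced here.

import random

def adapt(state, propose_variation, evaluate, iterations=100):
    """Generic variation-selection loop: generate variations of the current
    state, score each against the (real or simulated) environment, and keep
    the best-scoring one -- the feedback that drives adaptation."""
    best_score = evaluate(state)
    for _ in range(iterations):
        candidates = [propose_variation(state) for _ in range(10)]  # variation
        for candidate in candidates:
            score = evaluate(candidate)      # feedback from the environment
            if score > best_score:           # selection: the success concept
                state, best_score = candidate, score
    return state, best_score

# Toy usage: adapt a single number towards an environmental "target" of 42.
best, score = adapt(
    state=0.0,
    propose_variation=lambda s: s + random.uniform(-1.0, 1.0),
    evaluate=lambda s: -abs(s - 42.0),
)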
Since the adaptivity process involves a feedback loop, rapid adaptivity requires iterating through this loop as quickly as possible. Also important are mechanisms for creatively generating many possible variations, and simulation/experimentation methods for evaluating multiple variations concurrently.

3. NETWORK SCIENCE

The field of Network Centric Warfare or NCW (Alberts et al. 1999, Alberts and Hayes 2003, Court 2006) began with the kind of networking technology used by Guderian – radio and encryption – augmented with the more advanced technologies associated with the Internet. NCW is closely associated with and partially inspired by the field of Network Science (Atkinson and Moffat 2005). Network Science began with Leonhard Euler in 1735, but has blossomed particularly in the past few decades (Barabási 2002, Watts 2003). Network Science contributes to NCW by providing a taxonomy of possible network structures, as well as a number of potentially useful network metrics.

When a Complex Adaptive System has an underlying network structure, the average distance between nodes in the network is the most significant network influence on system dynamics (Dekker 2005a, 2005b, 2007a, 2007b, 2007c, 2008). This is essentially because, all other things being equal, a low average network distance permits faster response times. There are a number of different network topologies, including
small-world networks and scale-free networks, which can produce such low average distances. Because relatively few additional links can make a network “small-world” (Watts 2003), the process of reducing average distance by improved networking eventually reaches a regime of diminishing returns.

Also important is the connectivity of the underlying network (Dekker 2004, 2005c, 2007c). A high connectivity means that there are multiple independent pathways between nodes, giving robustness in the face of node destruction or of blockages in information flow. Other network metrics may also be important in particular circumstances. In the simulation experiment described in this paper, we will explore the interaction between network topology and adaptivity in a simulated Complex Adaptive System.

4. A SIMULATION EXPERIMENT

The simulation experiment presented here involves an abstract model of collaborative problem-solving within a networked organisation, modified from Dekker (2007c). Although highly simplified, models like this illustrate general principles of collaboration and adaptivity, and provide an indication of what might be expected in a real-world system.
In the experiment, a network of 100 agents is connected as shown in Figure 1. Agents in the network have, on average, 4 neighbours. Each agent has a problem to solve, which requires finding several data items that “fit together” in a given way.

Figure 1: Agent network for the experiment, highlighting three agents

In order to solve its problem, each agent has an internal generator which produces data items. Different agents have different generators, and each generator can only produce a limited number of different data items. However, agents can also receive data items from other agents via the network, and they have a small memory, which holds up to six data items they have seen.

An agent succeeds, at a particular timestep in the simulation, when the combination of its generated data item, its memory, and its incoming messages at that timestep together contain a valid solution to its problem. Prior to succeeding, an agent has at least a sense of which data items are potentially useful. It is potentially useful data items that are stored in its memory. Similarly, the data items which an agent sends to its neighbours in the network are potentially useful data items from its generator or its memory.

The specific problems dealt with by the agents involve numerical data items, and are of two kinds: finding a set of four perfect squares adding up to a target number which is different for each agent (e.g. 0² + 1² + 1² + 4² = 18), which is possible by a theorem of Lagrange (Weisstein 2006a); and finding a pair of prime numbers adding up to an (even) target number which is different for each agent (e.g. 3 + 5 = 8), which is possible according to the famous conjecture by Goldbach (Weisstein 2006b).

In the first case, an agent believes a number to be useful if it is a perfect square less than or equal to its target (this is a slight variation of the definition of “useful” in
previous work). In the second case, an agent believes a number to be useful if it is a prime less than its target. Both cases are intended to be simple abstractions of real-world problems.

The simulation proceeds in three phases, each 200 timesteps long. In the first (training) phase, each agent has an instance of the first (Lagrange) problem. In the second (operational) phase, each agent has a different instance of the Lagrange problem. Finally, in the third (altered) phase, each agent has an instance of the Goldbach problem. There are thus two kinds of adaptivity required: learning from the first (training) phase which numbers are useful, followed by the more difficult task of relearning this in the third (altered) phase. In the real world, as we have seen, there will be more than two levels of adaptivity.

Figure 2 shows performance of the 100 agents in a typical run. The horizontal axis shows time, and the vertical axis shows the number of agents which have succeeded. Performance in the second phase is clearly better than the first phase, because agents have learnt potentially useful numbers and stored them in their memory. Performance in the third phase is worse, however, because the previous learning is irrelevant to the altered situation, where it is primes rather than squares that are useful.

Figure 2: A typical simulation run (base case)
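The agent mechanics described above can be sketched in a few dozen lines of Python. The paper does not publish code, and details such as the contents of each generator, how many items are sent per timestep, and the exact memory-replacement rule are not fully specified, so the class below (including the helper names is_square, is_prime and solves, and the random generator) is a hypothetical illustration under those assumptions rather than the implementation used for the experiment; the adaptive_memory flag anticipates improvement (M) introduced below, and improvement (A), generator copying, is omitted.

import itertools
import random

def is_square(n):
    # The "useful" test for the Lagrange problem: n is a perfect square.
    return n >= 0 and int(n ** 0.5) ** 2 == n

def is_prime(n):
    # The "useful" test for the Goldbach problem: n is prime.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

class Agent:
    MEMORY_SIZE = 6  # each agent remembers at most six data items

    def __init__(self, target, phase, repertoire, adaptive_memory=False):
        self.target = target              # this agent's own target number
        self.phase = phase                # "lagrange" or "goldbach"
        self.repertoire = repertoire      # limited set of items its generator can produce
        self.memory = []
        self.adaptive_memory = adaptive_memory  # sketch of improvement (M)

    def useful(self, n):
        # The agent's heuristic sense of which data items are potentially useful.
        if self.phase == "lagrange":
            return is_square(n) and n <= self.target
        return is_prime(n) and n < self.target

    def solves(self, items):
        # True if some combination of k items (with repetition) hits the target.
        k = 4 if self.phase == "lagrange" else 2
        return any(sum(c) == self.target
                   for c in itertools.combinations_with_replacement(set(items), k))

    def remember(self, n):
        if self.useful(n) and n not in self.memory:
            if len(self.memory) < self.MEMORY_SIZE:
                self.memory.append(n)
            elif self.adaptive_memory:
                # Adaptive memory: replace the oldest item rather than ignoring new ones.
                self.memory.pop(0)
                self.memory.append(n)

    def step(self, incoming):
        """One timestep: generate an item, combine it with memory and incoming
        messages, test for success, then pass on potentially useful items."""
        generated = random.choice(self.repertoire)
        seen = [generated] + self.memory + list(incoming)
        success = self.solves(seen)
        for n in list(incoming) + [generated]:
            self.remember(n)
        outgoing = [n for n in [generated] + self.memory if self.useful(n)]
        return success, outgoing

# For example, an agent whose Lagrange target is 18 succeeds once the items
# 0, 1 and 16 are all visible, since 0 + 1 + 1 + 16 = 18 (items may be
# reused within a combination).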
Given this simulation testbed, we investigate three possible improvements:

(R) The first improvement replaces the default network by a random network, which has a lower average distance (on average, 3.4 rather than 12.9). This means that fewer network “hops” are required to pass information from one agent to another (indirectly connected) agent. By facilitating communication, the random network thus gives each agent better access to the data items generated by the other 99 agents. (A sketch illustrating this difference in average distance appears after Table 2.)

(M) The second improvement makes the memory of agents more adaptive. The default memory accepts no more data items after it is full, while the adaptive memory replaces old data items in the memory by new ones. This helps agents learn new information in the third (altered) phase of the experiment.

(A) The third improvement allows the agents to adapt procedures as well as learning new information. When an agent receives useful information from another agent that has succeeded at its task, the receiving agent copies the data-item generator from the sending agent.

The experiment involves 100 simulation runs for each of the eight possible combinations of these improvements.

Table 2: Percentage of successful agents

Effects | Phase 1 (training) | Phase 2 (operational) | Phase 3 (altered)
Base case | 50.9 | 66.5 | 13.6
R | 33.5 | 23.3 | 11.1
M | – | 2.1 | 41.1
A | – | – | 12.4
R&M | – | – | 6.7
Average for R&M&A | 84.5 | 91.9 | 84.9
Variance predicted | 90% | 86% | 97%
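The average-distance contrast behind improvement (R) can be checked with a standard graph library. The sketch below assumes the networkx package is available, and assumes that the default network of Figure 1 is approximately a degree-4 ring lattice – an assumption, but one that reproduces the quoted average distance of 12.9; the random network is modelled as a fully rewired Watts–Strogatz graph of the same size and density.

import networkx as nx

N, K = 100, 4  # 100 agents with an average of 4 neighbours, as in the experiment

# Assumed default network: a ring lattice (Watts-Strogatz graph, no rewiring).
default_net = nx.watts_strogatz_graph(N, K, p=0.0)

# Improvement (R): a random network of the same size and density, kept connected.
random_net = nx.connected_watts_strogatz_graph(N, K, p=1.0, seed=1)

for name, g in [("default", default_net), ("random (R)", random_net)]:
    print(name, round(nx.average_shortest_path_length(g), 1))
# Typical output: about 12.9 for the lattice and roughly 3.4 for the random
# network, in line with the figures quoted for improvement (R).

Varying the rewiring probability between these two extremes also illustrates the diminishing-returns effect noted in Section 3: the first rewired links produce most of the drop in average distance.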
5. EXPERIMENTAL RESULTS

Table 2 shows the effect of the three improvements on the average results for the three phases. All entries in the table are statistically significant at the 10⁻⁸ level, by analysis of variance and t-tests. Dashes in the table indicate no statistically significant effect. The combinations R&A, M&A, and R&M&A also have no significant interaction effect.

It can be seen that the first (training) phase benefits substantially from the improved network (R), but not from either of the adaptivity improvements. The second (operational) phase has better results than the first phase, because even the default memory permits learning from the first phase. There is again a substantial effect for the improved network (R), and a small effect due to the improved memory (M).

The third (altered) phase of the experiment is more challenging for the agents. The base case has on average only 13.6% of the agents solving their problems successfully. The improved network (R) and the two adaptivity improvements (M and A) increase this to 84.9%, however. The improved memory is particularly important, because of the need to “unlearn” lessons from the first two phases. There is a statistically significant interaction effect between the improved network (R) and the adaptive memory (M), adding 6.7 to the result on top of the independent effects of R and M (13.6 + 11.1 + 41.1 + 12.4 + 6.7 = 84.9). This reflects the fact that the value of the network is increased by better information management of messages from other agents. Conversely, the value of information management is increased by networked access to a greater amount of information.

6. DISCUSSION

The simulation experiment presented here leads us to draw three conclusions. First, the experiment confirms the findings from Network Science that a lower average distance – provided by the random (R)
network – improves performance in a networked system. This is something that not only network designers, but also the designers of command processes, should take into account. In particular, they should facilitate rapidly evolving informal networks.

Second, the experiment illustrates the benefit of adaptivity, and highlights the fact that larger environmental changes require greater degrees of adaptivity. The switch from phase 2 to phase 3 involved completely new problems for the agents, and was therefore a larger change than the switch from phase 1 to phase 2. Consequently, the switch from phase 2 to phase 3 required a greater degree of adaptivity – adaptive procedures (A) as well as improved memory and learning (M). In Defence Acquisition, both the system being acquired and the Acquisition organisation itself should be adaptive and versatile. This requires effective knowledge management, together with the ability to learn from experience. At all the levels of adaptivity shown in Table 1, there should be accessible repositories of information, which are updated as a result of success or failure.

Finally, the experiment shows that there can be synergy between network quality and adaptivity – specifically, the adaptive memory (M). Where one part of the system adapts based on the success of another part of the system, the value of improved networking is increased. To take advantage of this phenomenon, network designers should ensure that information delivered by the network will be passed to the adaptive mechanisms of destination Defence systems. It is not sufficient to pass information to a destination platform, only to have it be lost there.

Our experiment has therefore not only confirmed findings from the fields of Complex Adaptive Systems and Network Science, but also shown that interesting phenomena occur at the nexus between adaptivity and improved networking. Further exploration of the intersection between
Complex Adaptive Systems and Network Science perspectives is likely to generate additional insights. One immediate lesson, however, is the need to link the adaptive components of different Defence systems.

7. REFERENCES

Alberts, D.S., Garstka, J.J., and Stein, F.P. (1999), Network Centric Warfare: Developing and Leveraging Information Superiority, 2nd edition, CCRP Press, Washington. Available online at www.dodccrp.org/files/Alberts_NCW.pdf

Alberts, D.S. and Hayes, R.E. (2003), Power to the Edge, CCRP Press, Washington. At www.dodccrp.org/files/Alberts_Power.pdf

Atkinson, S.R. and Moffat, J. (2005), The Agile Organization, CCRP Press, Washington. At www.dodccrp.org/files/Atkinson_Agile.pdf

Australian Army (2004), Complex Warfighting.

Barabási, A.-L. (2002), Linked, Perseus Publishing.

Clark, T. and Moon, T. (2001), “Interoperability for Joint and Coalition Operations,” Australian Defence Force Journal, 151, pp. 23–36. Available online at www.defence.gov.au/publications/dfj/adfj151.pdf

Court, G. (2006), “Validating the NEC Benefits Chain,” Proceedings of 11th International Command and Control Research and Technology Symposium (ICCRTS), CCRP Press, Washington. Available online at www.dodccrp.org/events/11th_ICCRTS/html/papers/155.pdf

Coveney, P. and Highfield, R. (1995), Frontiers of Complexity, Faber and Faber.

Dekker, A.H. (2004), “Simulating Network Robustness: Two Perspectives on Reality,” Proceedings of 2004 SimTecT conference, pp 125–130, ISBN: 0-9578879-3-0, Simulation Industry Association of Australia.

Dekker, A.H. (2005a), “Network Topology and Military Performance,” in Zerger, A. and Argent, R.M. (eds), MODSIM 2005 International Congress on Modelling and Simulation, Modelling and Simulation Society of Australia and New Zealand, pp 2174–2180, ISBN: 0-9758400-2-9. At www.mssanz.org.au/modsim05/proceedings/papers/dekker.pdf

Dekker, A.H. (2005b), “C4ISR, the FINC Methodology, and Operations in Urban Terrain,” Journal of Battlefield Technology, 8 (1), March, pp 25–28.

Dekker, A.H. (2005c), “Simulating Network Robustness for Critical Infrastructure Networks,” Proceedings of 28th Australasian Computer Science Conference, Conferences in Research and Practice in Information Technology, 38. Available online at crpit.com/confpapers/CRPITV38Dekker.pdf

Dekker, A.H. (2006), “Measuring the Agility of Networked Military Forces,” Journal of Battlefield Technology, 9 (1), March, pp 19–24.

Dekker, A.H. (2007a), “Can Complex Systems Be Engineered? Lessons from Life,” Proceedings of 2007 Systems Engineering/Test and Evaluation (SETE) conference.

Dekker, A.H. (2007b), “Using Tree Rewiring to Study ‘Edge’ Organisations for C2,” Proceedings of 2007 SimTecT conference, pp 83–88, ISBN: 0-9775257-2-4, Simulation Industry Association of Australia.

Dekker, A.H. (2007c), “Studying Organisational Topology with Simple Computational Models,” Journal of Artificial Societies and Social Simulation, 10 (4). Available at jasss.soc.surrey.ac.uk/10/4/6.html

Dekker, A.H. (2008), “Network Effects in Epidemiology,” Proceedings of 2008 SimTecT conference, Simulation Industry Association of Australia.

Grisogono, A.-M. (2006), “The Implications of Complex Adaptive Systems Theory for C2,” Proceedings of 2006 Command and Control Research and Technology Symposium (CCRTS), CCRP Press, Washington. At www.dodccrp.org/events/2006_CCRTS/html/papers/202.pdf

Grisogono, A.-M. and Spaans, M. (2008), “Adaptive Use of Networks to Generate an Adaptive Task Force,” Proceedings of 13th International Command and Control Research and Technology Symposium (ICCRTS), CCRP Press, Washington. At www.dodccrp.org/events/13th_iccrts_2008/CD/html/papers/021.pdf

Guderian, H. (1937), Achtung – Panzer! (English translation, Cassell, 1992).

Kauffman, S.A. (1995), At Home in the Universe: The Search for the Laws of Self-Organization and Complexity, Oxford University Press.

Senge, P. (1997), “Building Learning Organisations,” in Organisation Theory: Selected Readings, 4th edition, Pugh, D. (ed), Penguin.

Solé, R. and Goodwin, B. (2000), Signs of Life: How Complexity Pervades Biology, Basic Books.

Unewisse, M. and Grisogono, A.-M. (2007), “Adaptivity Led Networked Force Capability,” Proceedings of 12th International Command and Control Research and Technology Symposium (ICCRTS), CCRP Press, Washington. At www.dodccrp.org/events/12th_ICCRTS/CD/html/papers/200.pdf

U.S. Dept. of the Army (2007), Counterinsurgency Field Manual, University of Chicago Press.

Warne, L. (1999), “Understanding Organisational Learning in Military Headquarters: Findings from a Pilot Study,” Proceedings of the 10th Australasian Conference on Information Systems.

Watts, D. (2003), Six Degrees: The Science of a Connected Age, Vintage.

Weisstein, E.W. (2006a), “Lagrange’s Four-Square Theorem,” Wolfram MathWorld, mathworld.wolfram.com/LagrangesFourSquareTheorem.html

Weisstein, E.W. (2006b), “Goldbach Conjecture,” Wolfram MathWorld, mathworld.wolfram.com/GoldbachConjecture.html