Towards Autonomic Computing: Agent-Based Modelling, Dynamical Systems Analysis, and Decentralised Control

Tom De Wolf and Tom Holvoet
DistriNet, Dept of Computer Science, Katholieke Universiteit Leuven
Celestijnenlaan 200A, B-3001 Heverlee (Leuven), Belgium
{Tom.DeWolf,Tom.Holvoet}@cs.kuleuven.ac.be

Abstract

Autonomic computing aims to deal with the complexity of today's systems by letting the system handle the complexity autonomously. This is a very hard and challenging domain because current systems are complex, distributed, interconnected and rapidly changing. We firmly believe that a main challenge in conquering autonomic systems is the integration of three existing research communities: the multi-agent systems community allows natural modelling of the system and explicitly considers autonomous behaviour and distributed interaction; dynamical systems theory allows analysis of the dynamics of these models; and the decentralised control community can use insights gathered from the analysis to create decentralised control mechanisms that control the dynamics of autonomic systems. This paper describes this generic perspective on autonomic computing, gives an overview of the relevant work done in each community, and describes the contribution of each community to conquering autonomic computing.

1. Introduction

The paradigm of autonomic computing aims to design and build computing systems capable of running themselves, adjusting to varying circumstances, and preparing their resources to handle the imposed workloads most efficiently. These autonomic systems must anticipate needs and allow users to concentrate on what they want to accomplish rather than on figuring out how to manipulate the computing systems to get them there [24]. An important fact is that current systems are becoming more and more complex and distributed, with a high degree of local activity. Examples of such systems are telecommunication networks, flexible manufacturing networks, urban traffic networks, economic systems such as a supply chain, and, in the simulation context, ecological systems. Autonomic computing efforts should be directed towards coping with these systems and their complexity, distribution, localisation, etc. The key idea and challenge in autonomic computing is that systems must be able to take the initiative to decide and do things themselves. The systems must become autonomous so that the real complexity can be hidden from the user of the system. In the autonomic computing perspective this is described by the goal to achieve the self-X principles and mechanisms. Self-management means that the system decides on its own what it must do to manage itself, e.g. keeping its behaviour stable. Performing self-diagnosis to check its status, and having the possibility of automatic self-adaptation to changes in this status, are very important aspects of achieving self-management. Self-adaptation can exist in different forms, such as self-optimisation, self-configuration and self-healing. This means that systems must handle more and more complexity on their own. The ultimate challenge is to find mechanisms that can control the dynamics of these complex, distributed, interconnected and rapidly changing systems. Today, there are already several research communities that look into parts of automatically handling this complexity. We consider three important communities. In the multi-agent community, agent-based modelling is seen as a natural way to model complex systems, because the ability of an agent to be autonomous maps onto the self-X principles, and the ability to make interactions between components of a system explicit and handle them in a flexible way supports a more distributed complexity. The community on dynamical systems theory, closely related to chaos theory and complexity theory, focuses on analysing the dynamics of the system to gather insights that can be used in an improved solution for the problem. The decentralised control community sees handling complexity as impossible with centralised control schemes and tries to find the right decentralised control mechanisms for the problem.
In this paper we propose a generic perspective on handling autonomic computing. The main idea is shown in Figure 1. The three research communities mentioned above are represented as building blocks to engineer autonomic systems, and the arrows represent the close relation between them and the need for them to interact and complement each other. Agents make it possible to model our system in a flexible way. Dynamical systems theory allows us to analyse the dynamics of such a model, also in unstable conditions. And decentralised control can use the insights from the analysis to control the dynamics of such a system in a way that is scalable, flexible and robust. It is our firm belief that the main challenge in conquering autonomic computing is the integration of these three communities.

Figure 1. Building blocks of Autonomic Computing Research: modelling the dynamics (autonomous agents), analysing the dynamics (dynamical systems theory, "chaos"), and controlling the dynamics (decentralised control).

This paper gives a quick overview of the relevant work present in each community and its importance to autonomic computing. We end with a conclusion in which we formulate several challenges in integrating the three building blocks.

2. Agent-Based Modelling

We must be able to model the system we want to build. Agent-based models of a system are based on the multi-agent paradigm. Such a model is a description of a system, its entities and their properties, and may be used for further study of its characteristics. The model consists of a set of agents that encapsulate the behaviours of the individual components that make up the system. An agent is an abstract software engineering concept: an autonomous entity that perceives its environment, reacts to changes in this environment by adapting its behaviour to the current needs, and interacts with other agents. Detailed definitions can be found in [54], [53] and [11]. Simulation is typically used to experiment with and analyse ABMs. Agent-based modelling is a very natural and flexible way to model today's complex, distributed, interconnected and interacting industrial systems: the self-X principles can be mapped onto the explicit notion of autonomous agents, and modelling a system as a group of interacting agents maps directly onto the distributed and communicative nature of today's systems. We further substantiate the usefulness of ABM by elaborating on some typical characteristics.
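The perceive-adapt-interact cycle of an agent described above can be made concrete with a minimal sketch. All class and method names here are illustrative, not taken from any particular agent platform:

```python
class Agent:
    """Minimal autonomous agent: perceives, decides, acts (illustrative sketch)."""

    def __init__(self, name, environment):
        self.name = name
        self.environment = environment  # shared state the agent lives in

    def perceive(self):
        # Observe only the locally visible part of the environment.
        return self.environment.get(self.name, 0)

    def decide(self, percept):
        # Adapt behaviour to the current situation: here, a trivial rule.
        return "work" if percept < 10 else "rest"

    def act(self, action):
        # Acting changes the shared environment, which is also how
        # indirect interaction with other agents can happen.
        if action == "work":
            self.environment[self.name] = self.environment.get(self.name, 0) + 1

    def step(self):
        self.act(self.decide(self.perceive()))

env = {}
agents = [Agent(f"a{i}", env) for i in range(3)]
for _ in range(5):
    for a in agents:
        a.step()
print(env)  # each agent has worked 5 times: {'a0': 5, 'a1': 5, 'a2': 5}
```

The key point is that each agent runs its own perceive-decide-act loop; no central component drives the system.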

2.1. Interaction vs. Algorithms

In [50] interaction is shown to be more powerful than algorithms for computer problem solving, overturning the prevalent view that all computing is expressible as algorithms. Some things, such as network routing, on-line supply chain management and agile manufacturing, cannot be done without interaction. This radical notion that interactive systems are more powerful problem-solving engines than algorithms was the basis for new paradigms for computing technology built around the unifying concept of interaction. Interaction is a key part of the problem domain of complex adaptive systems. In such systems, interaction increasingly dominates the individual actions taken by the subparts of the system. For example, in interaction-centred manufacturing, each machine only does a small part of the whole production process; having one machine that does it all is not flexible enough. Thus, the interaction between the machines is much more important. This interaction must be based on a very efficient schedule, and at the same time the system must be flexible enough to anticipate changes in the products and orders that come in. Therefore, we need a model that incorporates interaction. Using ABM to model today's complex adaptive systems makes it possible to consider interaction more explicitly than in object-oriented engineering, where interaction is limited to sending messages or calling procedures on other objects. In multi-agent systems, for example, there can also be a notion of indirect interaction through the environment in which the agents live. In addition, an agent is autonomous in deciding when, how and with what to interact, which allows a flexible use of the interaction concept.
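Indirect interaction through the environment can be sketched with a pheromone-style toy model (an illustrative sketch in the spirit of stigmergy, not from the cited work): agents never message each other, yet their deposits in a shared environment coordinate their movement.

```python
import random

random.seed(1)

# A 1-D ring of cells holding pheromone levels; agents interact
# indirectly by depositing and sensing marks, never by messaging.
SIZE = 20
pheromone = [0.0] * SIZE

class Forager:
    def __init__(self, pos):
        self.pos = pos

    def step(self):
        # Sense neighbouring cells and move toward the stronger mark;
        # ties are broken randomly (the agent decides autonomously).
        left = pheromone[(self.pos - 1) % SIZE]
        right = pheromone[(self.pos + 1) % SIZE]
        if left > right:
            self.pos = (self.pos - 1) % SIZE
        elif right > left:
            self.pos = (self.pos + 1) % SIZE
        else:
            self.pos = (self.pos + random.choice((-1, 1))) % SIZE
        pheromone[self.pos] += 1.0  # deposit: the interaction medium

agents = [Forager(random.randrange(SIZE)) for _ in range(10)]
for _ in range(100):
    for a in agents:
        a.step()
    # evaporation keeps the signal adaptive
    for i in range(SIZE):
        pheromone[i] *= 0.9

# Agents cluster on a few reinforced trails without direct communication:
# the pheromone field ends up uneven, with peaks well above the mean.
print(max(pheromone), sum(pheromone) / SIZE)
```

The environment itself carries the coordination signal; this is exactly the flexibility that message-passing-only designs lack.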

2.2. Agents vs. Equations

A massive body of research has been devoted to modelling system dynamics using differential equations [12, 22, 40, 32]: a system is modelled as a set of equations, and executing the model consists of evaluating the equations. In [37, 39] a comparison is made between equation-based models (EBMs) and agent-based models (ABMs). Simulation is the general term that applies to both methods, which are distinguished as (agent-based) emulation and (equation-based) evaluation. [37, 39] give some practical considerations for choosing between the two approaches:

• The underlying structure of the model: With ABMs the internal behaviour of an agent does not need to be known in order to execute the model. This has two significant advantages for complex adaptive systems. First, ABMs are naturally modular and can easily be distributed across a network of processors. Second, the natural distribution of ABMs in turn enables firms to maintain proprietary information about their internal operations, so groups of firms can conduct joint modelling exercises while keeping the agents on their own computers and under their own control. In other words, ABMs are better suited to domains where the natural unit of decomposition is the individual rather than the observable (properties, variables, ...), as is the case for EBMs, and where the physical distribution of the computation across multiple processors is desirable.

• Naturalness of its representation of the system: ABMs are easier to construct because certain behaviours are difficult to translate into equations. ABMs also support more direct experimentation with 'what-if' scenarios, without having to translate them into equations, and ABMs are easier to translate back into practice: it is a one-to-one mapping.

• Realism and accuracy: There is growing evidence that behavioural emulations are more accurate than numerical evaluations for systems of multiple interacting entities. In many domains, ABMs give more realistic results than EBMs for manageable levels of representational detail. Since partial differential equations (PDEs) are computationally complete, one can model any ABM with PDEs. However, a PDE model may be much too complex for reasonable manipulation and comprehension, and EBMs based on simpler formalisms than PDEs may yield less realistic results regardless of the level of detail in the representation.

In short, ABMs are most appropriate for domains characterised by a high degree of localisation and distribution and dominated by discrete decisions. EBMs are most naturally applied to systems that can be modelled centrally, and in which the dynamics are dominated by physical laws rather than information processing. Examples indicating that agent-based modelling has already been used for complex adaptive systems such as agile manufacturing systems and electricity distribution can be found in [15, 8, 44]. Another comparison of the two approaches with respect to the predator-prey problem can be found in [52].

2.3. Tools

In [37] Parunak argues that the widespread realisation of the advantages of agent-based modelling will depend on the availability of tools for this approach, and that the ABM community should encourage the development and refinement of such tools. A number of tools that enable agent-based modelling and simulation of systems are already available. Examples are Swarm, developed at the Santa Fe Institute [45], and NetLogo/StarLogo [1], which are well suited for modelling complex systems that develop over time. StarLogo is for example used in [26]. These tools allow one to explore the connection between the micro-level behaviour of individuals and the macro-level patterns that emerge from the interaction of many individuals. Of course, this is only one kind of support. Tool support for engineering complex systems is more than simulation support, which is still mostly low-level modelling with a programming language. A lot of work still needs to be done on tools that allow us to analyse, design, debug and test such systems at a higher level, where agent-based concepts are the basic modelling entities.

3. Analysis of Dynamics

Besides modelling a system using agent-based models, we need approaches to analyse the model in order to understand its dynamics, i.e. the behaviour of the system. Analysing the dynamics can enable us to make predictions about the global behaviour, and to gather insights into the effects of certain parts, or even a single variable, on the global behaviour of the system. Without this understanding of the dynamics of a system, building autonomic computing systems would be stuck in a mere trial-and-error technique. Today we often make use of simple measures (e.g. averages over time) to characterise the behaviour of the system. Although these are useful characterisations for static systems, they are less helpful if the system is constantly subjected to change [48]. Real-world, industrial systems are becoming more and more dynamic. For example, in manufacturing systems, competitive pressures are reducing product lifetimes and thus forcing systems to change frequently. Machine failures and upgrades also force the system to adapt. In such a situation, the system needs to be very flexible and dynamic, so we need more advanced analysis techniques. Understanding the dynamics of complex systems by reasoning analytically from the detailed internal structure of the system is not feasible: analytic solutions are unavailable for systems beyond even a very modest level of complexity. The alternative approach is to study systems while they are executing and attempt to generalise from their behaviour. A lot of tools and techniques are available from the dynamical systems community. The multi-agent systems that we use to model the systems we consider are dynamical systems, whose effective design, monitoring and control require the application of techniques developed in dynamical systems theory [33, 34].
To analyse the dynamics of today's complex systems, the community of dynamical systems theory offers [47]: a conceptual framework with which dynamical systems can be characterised, experimental techniques that can estimate these characteristics from time series of state variables in an observed system, and a good framework to explain principles of adaptive, self-organising behaviour. Development of dynamical systems theory in a specific problem domain, for example manufacturing, can provide a new vocabulary for understanding what happens in that system and new technologies to manage and control those systems. In this section we first give a small introduction to the concepts of dynamical systems theory, followed by an overview of analysis techniques.

3.1. Concepts of Dynamical Systems Theory

Important concepts in dynamical systems theory are the system state space, trajectory and attractor. The state space of a dynamical system is a collection of time-dependent variables that capture the behaviour of the system over time. The concept of a trajectory refers to the system's path through state space, i.e. the successive values of the state variables viewed as coordinates in state space. In many dynamical systems, the trajectory eventually enters a subspace from which it never emerges. This subspace exerts an attractive force that draws the system into it: a so-called attractor. Several disjoint attractors can be present in a system, each with its own set of initial conditions from which trajectories enter that attractor, called the attractor's basin of attraction. [33] gives a short introduction to these concepts.
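These concepts are visible in even a one-dimensional system. The sketch below uses the logistic map, a standard textbook example (not taken from the paper): for one parameter value the trajectory is drawn into a point attractor, for another into a period-2 limit cycle.

```python
def logistic_trajectory(r, x0=0.2, steps=500):
    """Iterate the logistic map x -> r*x*(1-x) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 2.5: trajectories from this basin are drawn to a point attractor,
# the fixed point x* = 1 - 1/r = 0.6.
tail = logistic_trajectory(2.5)[-5:]
print(tail)  # all values converge to 0.6

# r = 3.2: the attractor is a period-2 limit cycle; the trajectory
# keeps alternating between two values instead of settling on one.
tail2 = logistic_trajectory(3.2)[-4:]
print(tail2)
```

The "basin of attraction" is visible too: any x0 in (0, 1) leads to the same attractor for a given r, while x0 = 0 stays at the (unstable) origin.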

3.2. How to Apply Dynamical Analysis with Dynamical Systems Theory [48]

The next step is to know how to apply dynamical systems theory. This involves two parts: we need to identify the data we need to measure, and we need to interpret that data.

What Data to Measure? First of all we have to gather data from the system. The challenge is to select the right data from the abundance of information that is available. To be able to do this we need some idea of what different kinds of measurements can tell us about the system. So far there is no systematic approach to determine this, and exploring possible measures is the only way to learn what sorts of information will be most valuable. An important aspect to consider is the cost associated with collecting each class of measurement; if this cost is too high, the measurement may not be worth the effort. The most useful data is time-dependent data, because the dynamics of a system is always considered as evolving over time, and the state space in dynamical systems theory is built from time-dependent variables. This results in time series that can be analysed. For example, in a transport module of a factory you can measure the evolution of the transit times between two sensors to detect congestion patterns; these patterns may not be detectable in inter-arrival times at a single sensor [48, 49]. Another example is found in [38]: in analysing the dynamics of a supply chain you can measure the evolution of the number of orders placed at a certain factory or site in the chain, and the inventory evolution is also useful. These are all very application-specific values, and future exploration may lead to the discovery of some general guidelines for choosing the right values.
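A transit-time series like the one above can be turned into a time-delay analysis in a few lines. This is a hedged sketch on synthetic data: the congestion pattern is simulated, not measured, and serves only to show the mechanics of the embedding.

```python
import math

# Synthetic transit times: a baseline with a periodic congestion component
# (period 20 samples, purely illustrative).
series = [10.0 + 3.0 * math.sin(2 * math.pi * t / 20) for t in range(200)]

# Time-delay embedding: pair each value with the previous one. Distinct
# geometric patterns in these pairs (bands, squares, loops) are what the
# manufacturing studies [48, 49] look for when plotted.
pairs = list(zip(series[:-1], series[1:]))

# For a periodic series the embedded points trace a closed loop (a limit
# cycle) rather than filling the plane; a crude check is that the
# trajectory revisits its starting neighbourhood after one period.
dist = math.dist(pairs[0], pairs[20])  # 20 = the period of the signal
print(dist)
```

In a real analysis one would plot `pairs` and look for the diagonal bands or squares described below; here the near-zero return distance already signals periodic rather than divergent dynamics.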

How to Interpret the Data? After gathering the data we can analyse it. Dynamical systems theory offers a number of analysis techniques:

• Trajectories in state space can be used to find the attractors. Different types of dynamics lead to different attractor geometries, including point attractors (for systems that converge to a static state), limit cycles (for systems with periodic behaviour), and fractal structures ('strange attractors', characteristic of formally chaotic systems). In [33] a comparison is made between these kinds of attractors and classes of systems. A divergent trajectory, or the absence of an attractor, corresponds to a system out of control. A point attractor is related to a goal-oriented system. A limit cycle or strange attractor involves a going concern, a system that has to maintain a certain goal behaviour.

• Dynamics can be further quantified in terms of behaviour in the frequency domain, for example using Fourier or Hartley transformations. Fourier analysis is based on the concept that real-world signals can be approximated by a sum of sinusoids, each at a different frequency. The frequency of each sinusoid in the series is an integer multiple of the frequency of the signal being approximated. Plotting the magnitude or amplitude of each sinusoid on the y-axis against its frequency on the x-axis generates a frequency spectrum. The spectrum is a set of vertical lines or bars that indicate how the system behaves in the frequency domain, and this can yield very useful insights.

• System attractors (if of sufficiently low dimension) can be reconstructed with time-delay plots, provided that the available data is sufficiently rich and accurate. In these plots each value in a time series is plotted on the y-axis against the previous value on the x-axis. Time-delay plots can be used to find distinguishing patterns of behaviour; this has been done in manufacturing [48, 49] and supply chains [38, 37, 39]. An example plot for manufacturing is shown in Figure 2: at the top the time series of transit times, at the bottom the corresponding time-delay plot. Later we refer back to this figure and describe what interpretations we can make from it.

• Computation of the Lyapunov exponents of attractors can be done from time series. These exponents quantify both the strength of attraction exerted by the attractor on nearby points in the state space and the degree to which neighbouring points within the attractor diverge from one another (the degree of chaotic behaviour). The Lyapunov exponent can thus serve as a stability measure for the system [30]. The exponent is for example used in [31] to find out how chaotic the behaviour of a robotic system is. Different control algorithms for the robot can then be compared based on this measure of chaotic behaviour.

• There are many other techniques, not described here, such as power spectra [9], BDS [4], the S statistic [41] to identify the presence of deterministic dependencies among successive elements in a time series, Poincaré plots and measures of fractal dimension. But again, we need to learn which techniques are most appropriate.

Figure 2. Time-Delay Plot Example for Transit Times in Manufacturing

Applying these methods to characterise the system in terms of its formal dynamics is not enough. Besides knowing that the system has an attractor of a certain shape or dimensionality, we must also correlate particular dynamical patterns with domain-specific behaviour. This can allow us to find out where there are problems or opportunities to control the behaviour of that system, as described later in the section about decentralised control. The mapping onto classes of systems in [33], described above, is a first step in that direction, but more domain-specific and detailed interpretations are needed. A first example of such an interpretation is found in [39] for supply chains: after some time, sites further down the supply chain are more likely to follow a large order with another large one, and a small order with another small one. In other words, their orders have become correlated in time. This was found by interpreting a stretched pattern around the x=y line in time-delay plots for order time series. A second example is found in [48], through analysis of transit-time series with time-delay plots (Figure 2): squares correspond to line stoppages, imperfect squares correspond to interacting stoppages, and a distinct diagonal band in which transit times vary in small steps corresponds to dynamic congestion. In the next section we describe some examples of how a good interpretation can lead to very effective decentralised control mechanisms.

4. Decentralised Control

The final building block in conquering autonomic systems is controlling them. This means that we want to influence the behaviour of the system in such a way that it behaves according to the requirements. The large majority of control applications implemented today fall into the category of centralised control. This approach requires global knowledge of the system to create the controller, which is self-limiting in that control design for large systems becomes mathematically and practically intractable. There is also the communication difficulty of providing all the relevant system measurements to the global controller in a timely manner. Centralised schemes require high levels of connectivity, impose a substantial computational burden, are typically more sensitive to failures and modelling errors than decentralised schemes, and are not scalable, which is a property that autonomic computing systems are required to have. To be successful, the structure of the system must be exploited, and many systems today consist of a lot of units that directly interact with their nearest neighbours. Decentralised control systems consist of controllers that are designed and operated with limited knowledge of the complete system: there is no central decision maker and the information flow stays local. Decentralised control is actually a self-organising emergent property of the system. It maps very well onto the definition of self-organisation given in [6]: 'a process in which patterns at the global level of a system emerge solely from numerous interactions among the lower-level components of the system. Moreover, the rules specifying interactions among the system's components are executed using only local information, without reference to the global pattern'. Decentralised control allows control to be applied to these more complex systems; however, globally optimal performance cannot be guaranteed as firmly as with global controllers [19].
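A minimal illustration of control with only local information (a toy sketch, not taken from the cited literature): nodes in a ring balance their load by exchanging with nearest neighbours. There is no central decision maker, yet the global load distribution evens out.

```python
# Each node sees only its two ring neighbours; there is no global view.
loads = [40.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

def step(loads):
    n = len(loads)
    # Every node moves toward the average of itself and its neighbours:
    # a purely local rule, executed everywhere in parallel. The total
    # load is conserved (each value is redistributed with weight 1/3).
    return [
        (loads[(i - 1) % n] + loads[i] + loads[(i + 1) % n]) / 3
        for i in range(n)
    ]

for _ in range(200):
    loads = step(loads)

print([round(x, 3) for x in loads])  # all close to the global mean, 5.0
```

The even distribution is an emergent global pattern: no node ever computes, or even knows, the global mean.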
A lot of work has been done on decentralising control. Some work simply extends existing centralised control mechanisms to be decentralised, but this is a suboptimal solution [27]. Other work states that decentralised control of such systems amounts to applying the agent paradigm, which is inherently decentralised [3, 5, 26], or to using robust and flexible behaviours from nature, referred to as swarm intelligence [29]. In this paper we consider decentralised control to be more explicit. In this section we give an overview of the literature. There are many possible mechanisms, but the most interesting track for engineering autonomic computing appears to be the one where the control is a result of the analysis done with dynamical systems theory. This approach exploits the information and insights gathered from the behaviour analysis to control the system. Other approaches can also benefit from analysing the system behaviour with the techniques from dynamical systems theory.

4.1. Hierarchical Control Mechanisms

If the performance degradation of a completely decentralised solution is unacceptable and a completely centralised solution is prohibitively complex or expensive, a compromise has to be found. Hierarchical control mechanisms are such a compromise: they still have a certain degree of decentralisation but also keep a degree of centralisation through a higher-level control layer that controls lower-level units. In [13] a hierarchical set of control agents is used in the system: the bottom layers actuate and sense, while higher layers average inputs and distribute control signals to lower layers. [40] gives an overview and equation-based analysis of multi-level control mechanisms. Computational savings with respect to a central scheme are completely problem dependent, because there is a need to iterate and coordinate, and this can be needed more in some problems than in others. The knowledge of the model is decentralised because neither the coordinator nor the local controllers need to know the global problem. [14] distributes and decomposes an on-line optimisation problem to move a piece of paper in a certain direction over a plate of many small air-jets. Small groups of jets handle the subproblem of optimising their contribution to the movement of the paper. Each subproblem is then handled by a local controller that is a constraint solver, and feedback loops are used to guide these solvers. In [21] hierarchical control is one of many organisations for distributed control. Such an organisation determines the information about the system available to each agent for its control computation. Another way to organise is what they call a multi-hierarchy: multiple hierarchical organisations exist next to each other, each based on different criteria by which agents are related to each other. For example, one hierarchy can be based on physical distance, another on problem-specific organisational distance. The advantage is that control can be enforced through multiple hierarchies, each with its own effect.
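The layering described for [13], where bottom layers sense and actuate while higher layers average inputs and distribute control signals, can be sketched as follows. This is a simplified toy, not the paper's actual controller; the class names and the averaging rule are illustrative assumptions.

```python
class LocalController:
    """Bottom layer: senses its own unit and applies a correction."""
    def __init__(self, value):
        self.value = value

    def sense(self):
        return self.value

    def actuate(self, correction):
        self.value += correction

class Coordinator:
    """Higher layer: averages the sensed inputs and pushes each local
    unit a fraction of the way toward that average. It never needs a
    global model, only the aggregated readings of its children."""
    def __init__(self, children, gain=0.5):
        self.children = children
        self.gain = gain

    def step(self):
        avg = sum(c.sense() for c in self.children) / len(self.children)
        for c in self.children:
            c.actuate(self.gain * (avg - c.sense()))

units = [LocalController(v) for v in (2.0, 4.0, 12.0)]
coord = Coordinator(units)
for _ in range(20):
    coord.step()
print([round(u.value, 3) for u in units])  # all near the mean, 6.0
```

The coordinator only iterates and aggregates; the knowledge of each unit's state stays in the local controllers, which matches the decentralised-knowledge argument above.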

4.2. Control as a Result of Dynamical Systems Theory

This track exploits the insights from analysing the system behaviour with dynamical systems theory. The motivation behind that understanding is the desire to control the system more effectively and efficiently. It can give us engineering principles to guide implementations for optimal control and to achieve the wanted behaviour. Many dynamical systems exhibit different modes of behaviour, depending on the values of certain critical parameters. It can be expected that the analysis shows which parameters and influence points are critical for a specific system, and when they exhibit which behaviour. [47] formulates this approach in three questions:

1. What is the desirable behaviour? Besides highly stable behaviour, recent research in adaptive systems suggests that some systems that exhibit learning and high adaptivity may operate on the border between ordered and disordered behaviour.

2. What parameters allow adjustment of the behaviour? Control parameters must be identified and their mapping onto system behaviour must be established.

3. What control mechanisms? Learn the mapping between the entities in the system that are accessible to management and the desired control parameters. Which mechanisms are most effective in managing those control parameters?

In [38] this is applied to supply chain management, and the notion of controllability is introduced. It describes the extent to which users can deliberately steer the system trajectory in state space. This requires us to identify the 'users' of the system. In the case of a supply chain it has been shown that the consumer can exert control through the stream of orders it issues: the magnitude of these orders can determine to which point attractor or limit cycle the system is drawn. The manufacturing sites in the example can also exert control, but here it must be done through the choice of parameter settings for themselves and the algorithms they use. Thus, for control as well as for monitoring, an agent in a going concern (i.e. a system maintaining a certain behaviour) may be able to contribute to system-level maintenance goals by manipulating simple but critical parameters in its environment. There is no need for detailed symbolic modelling of the actual structure of the environment. In [33] three cases of controlling such a going concern are identified.
Two cases are concerned with keeping the trajectory of the system through state space on the desired attractor, i.e. the wanted behaviour: one case is when environmental forces threaten to move it to a region that leads to another attractor; the other is when one wishes to shift the system deliberately to a different attractor, a different behaviour. The third case is more complex. The basic idea is that chaotic behaviour, or a strange attractor, may be desirable because of the increased volume of state space to which it gives the system access. To take advantage of this intrinsic flexibility we must be able to steer the trajectory of the system so that it is locked onto a part of the strange attractor, a limit cycle. It is shown that if the value of an appropriate system parameter can be determined, the system can be balanced to achieve this. Another mechanism concerns limit detection. Certain aspects of the system may limit the behaviour in achieving its desired form. By identifying these limiting parts through analysis of the dynamics, the parts can provide influence points for control mechanisms that correct such limits. For example, this approach was used in [39], where oscillations in the inventory levels of storage sites in a supply chain are diagnostic of storage capacity mismatches. These oscillations might be difficult to detect without the dynamical analysis. By increasing the capacity of a storage site, the supply chain became more efficient. Stabilisation of the system behaviour is another control that can be enforced. Earlier we mentioned the Lyapunov exponent as a measure of stability, and in [31] this is used to control the stability of robotic behaviour. Other examples that control stability are [20] and [23], where more diversity in the units that compose the system leads to more stability. This diversity emerged out of a homogeneous group by rewarding agents according to their actual performance, and this led to the elimination of chaos. A problem in controlling the behaviour is eliminating so-called disturbances: unwanted events that influence or disturb the behaviour of the system. For example, a civil structure such as a building can be disturbed by an earthquake. In [43, 2] analysis of the frequency domain of acceleration measurements is used to control the stability of a building during an earthquake. This shows that the frequency domain can be a useful tool to detect disturbances and eliminate them, by counteracting, for example, frequencies with a high magnitude. A key point is that local actions can influence the global behaviour in a dramatic way, and that results from dynamical systems analysis are needed to know where and which local actions are useful. In [49] undesirable emergent behaviour is damped out by using micro-controllers, acting as agents, that monitor their environment and take local action.
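The frequency-domain disturbance detection described for [43, 2] can be illustrated with a naive discrete Fourier transform on a synthetic acceleration signal. This is a sketch: the signal is invented, and real systems use proper spectral analysis on measured data.

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive DFT: magnitude of each frequency bin (O(n^2), fine for a demo)."""
    n = len(signal)
    return [
        abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))
        for k in range(n // 2)
    ]

# Synthetic 'acceleration' trace: a weak baseline oscillation plus a
# strong disturbance at 5 cycles per window (e.g. an induced vibration).
N = 64
signal = [
    0.1 * math.sin(2 * math.pi * 1 * t / N) + 2.0 * math.sin(2 * math.pi * 5 * t / N)
    for t in range(N)
]

mags = dft_magnitudes(signal)
dominant = max(range(1, len(mags)), key=lambda k: mags[k])
print(dominant)  # 5: the disturbance frequency, the one to counteract
```

Once the dominant disturbance frequency is identified, a controller can target exactly that component, which is the idea behind counteracting high-magnitude frequencies.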

4.3. The Importance of Feedback in Decentralised Control

Decentralised control is a self-organising emergent property of the system. For self-organising phenomena, two basic modes of interaction are important: positive and negative feedback [6]. Negative feedback loops occur when a small change applied to the system triggers an opposing response that counteracts that change. For example, an increase in temperature in an air-conditioned room triggers a cooler that decreases the temperature. Positive feedback loops promote changes in a system. The snowballing effect of positive feedback takes an initial change in a system and reinforces that change in the same direction as the initial deviation: a self-enhancement or amplification. In [6], Camazine et al. describe that, as a result of positive feedback and the amplification of small changes or local actions, emergent behaviour can occur. Of course, negative feedback is important to prevent this amplification from becoming a destructive explosion of the initial change.
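The two feedback modes can be sketched in a few lines; this is an illustrative toy, not an example taken from [6]. The same update rule acts as negative feedback when the correction opposes the deviation, and as positive feedback when it reinforces it:

```python
def feedback_step(x, target, gain):
    """One update of x under feedback toward (or away from) target.
    gain > 0: negative feedback, the deviation shrinks each step.
    gain < 0: positive feedback, the deviation is amplified."""
    return x + gain * (target - x)

# Negative feedback: an air-conditioner pulling room temperature to 20.
temp = 30.0
for _ in range(30):
    temp = feedback_step(temp, target=20.0, gain=0.3)
print(temp)  # deviation shrank to 10 * 0.7**30, essentially 20.0

# Positive feedback: the reinforcing sign snowballs a small deviation.
dev = 1.0
for _ in range(10):
    dev = feedback_step(dev, target=0.0, gain=-0.5)
print(dev)  # grew by a factor 1.5 per step: 1.5**10
```

The unbounded growth in the second loop is exactly the "destructive explosion" that a coupled negative-feedback term would cap.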

Self-enhancing positive feedback coupled with antagonistic or opposing negative feedback provides a powerful mechanism for creating structure and pattern in many systems involving large numbers of components. In [40], a whole section is devoted to a theory of decentralised feedback control, and the use of feedback in the decentralised scheme is considered a logical consequence of the extensive use of feedback in the centralised control scheme. This is applied in [3] to smart-matter-like modules that must grow a certain structure. A module can be in 'seed' mode and then attract other modules in order to grow the structure further, and those modules can in turn become seeds; thus a self-enhancing positive feedback is at work. To achieve a certain structure, each module has specific mode-changing rules defined. An example of such a rule is: 'If in seed mode, emit scent, and if a module has appeared in the direction of growth, set that module to seed mode, and go to final mode'. In [14], local constraint-based controllers use feedback loops to optimise the local constraint solvers. This provides online tuning of solvers, i.e. adaptive solver control.
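The seed rule from [3] can be illustrated with a deliberately simplified one-dimensional sketch (the grow_line function below is our illustration, not the original multi-robot controller): each seed recruits its free neighbour in the growth direction, promotes it to seed, and finalises itself.

```python
FREE, SEED, FINAL = "free", "seed", "final"

def grow_line(n):
    """Grow a line of n modules from a single seed using the rule:
    'if in seed mode, set the module in the growth direction to seed
    mode, and go to final mode' - recruitment is the positive feedback."""
    modules = [FREE] * n
    modules[0] = SEED
    pos = 0
    while pos < n and modules[pos] == SEED:
        if pos + 1 < n:
            modules[pos + 1] = SEED   # recruit the next module as a new seed
        modules[pos] = FINAL          # this module's growth step is done
        pos += 1
    return modules

print(grow_line(5))  # all five modules end in 'final' mode
```

The negative-feedback counterpart in this toy is the transition to 'final' mode, which stops each module from recruiting more than once.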

4.4. Market-Based Decentralised Control

In observing free capitalist markets, it is evident that the individual intentions of consumers and sellers alike yield an efficient means of societal resource allocation. The laws of supply and demand are the fundamental building blocks in determining the equilibrium price of goods in a decentralised economy. Borrowing the concept of a marketplace for application in a control system is rather new to the field of control [27]; the market price communicates the global information. In [27] this is applied to the decentralised control of large-scale civil structures (e.g. a wall) with agents scattered over the structure. The agents have a kind of wealth, and the control force that they apply to the structure locally is directly proportional to the amount of power purchased on the virtual marketplace. This results in a very effective mechanism to control the stability of the structure in heavy weather (e.g. a storm). In [18], a control strategy for unstable dynamical systems based on market mechanisms is presented. Weak control forces can counter departures from the wanted configuration while they are still small. The market mechanism leads to stability by focusing control forces on those parts of the system where they are most needed. Moreover, it does so even when some actuators fail to work, and without requiring the agents to have a detailed model of the physical system. Results show that market control can be more robust than a completely local control mechanism. While market-based control has been shown to be robust and successful in coordinating dispersed agents, one of its chief drawbacks is that the speed of the market computation is relatively slow, thereby limiting its application [25]. More generally, [35] introduces the notion of control entropy. Entropy is a measure of the disorder or randomness in a closed system. In [36], the entropy view on self-organisation in multi-agent systems is explained. The basic idea is the following: there is a coupling between the macro level that hosts self-organisation (and an apparent reduction in entropy, i.e. more order) and the micro level (where random processes greatly increase entropy, i.e. more disorder). Metaphorically, the micro level serves as an 'entropy leak'. It is shown that coordination can arise through coupling the macro level to an entropy-increasing process at the micro level (e.g. pheromone evaporation in the ant paradigm, or money spending and currency flows in economies). Some form of 'currency', 'energy', or 'pheromone' is a convenient mechanism for creating such an entropy-increasing process. This suggests engineering a 'currency' or 'energy' that generates a field to which agents can align, which supports the market-based control approach.
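The core of the market idea can be sketched in a few lines. This is a drastic simplification of [18, 27] (a single clearing round, with bids equal to local deviations; the function and its parameters are our own illustration): a fixed power budget is allocated in proportion to each agent's bid, so control force concentrates where departures from the desired configuration are largest.

```python
def market_allocation(errors, budget=1.0):
    """One clearing round of a toy control market: each agent bids its
    local deviation |e|; the clearing price is total demand / budget,
    and each agent buys power (= control force) worth bid / price."""
    demand = sum(abs(e) for e in errors)
    if demand == 0:
        return [0.0] * len(errors)     # nothing to correct, no power sold
    price = demand / budget            # market-clearing price per unit power
    return [abs(e) / price for e in errors]

forces = market_allocation([0.1, 0.5, 0.2, 0.2])
print(forces)  # the agent with the largest deviation (0.5) buys the most force
```

Note how the allocation degrades gracefully: if an agent (actuator) drops out, its bid simply disappears and the budget is redistributed among the remaining bidders, which is one intuition behind the robustness results in [18].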

4.5. Blue-prints as a Guide

In this approach the desired behaviour is achieved by following a 'blue-print'. In most cases this is seen in the context of getting a group of agents to form a certain pattern in the environment. We distinguish three forms of blue-print control:
• The blue-print is already present in the environment explicitly. The agents then just have to arrange themselves with respect to the marks in the environment. This approach is used in [7], where the pattern to form is put into the environment as a field and agents follow the field up the gradient to get to the wanted positions. This way the environment is of course a central controller, which is not really what we are looking for.
• In the second form, the blue-print is still known centrally, in a so-called beacon that is placed into the environment [51]. The difference here is that agents go to the beacon, and the beacon then acts as a coordinator that gives each agent the right instructions without giving it the whole blue-print. There is still a central coordinator, but now it can be placed locally in the environment. So, for example in a network, you do not need access to all the nodes to put the blue-print there; one local node is sufficient.
• In the third form, each agent builds up a blue-print of its own by communicating the information it knows to its neighbours. This way the blue-print propagates through the system. This can be very useful if the blue-print is not available in advance and must be built at run-time, for example through exploration of the environment. In [17] this is applied to guide a swarm of flight vehicles in order to explore a terrain as efficiently as possible. The blue-print they get is a representation of the area to cover with the mutual information gain plotted over it. So each agent goes to areas where no other agent has been before, i.e. where there is a high mutual information gain according to the information in its blue-print at that moment. Propagation to neighbours gradually gives the agents global knowledge. Also in [55], the idea of information propagation is used to give each individual agent as much information as needed to guide its actions. In general, information propagation about the desired behaviour is a useful mechanism.
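The first form, agents climbing a gradient field that encodes the blue-print, is easy to sketch. The helpers below are our illustration on a one-dimensional grid, not the controller of [7]: the field makes target positions local maxima, and each agent greedily steps uphill.

```python
def build_field(size, targets):
    """Blue-print encoded as an environment field on a 1-D grid:
    each cell holds minus the distance to the nearest target,
    so target cells are the local maxima of the field."""
    return [max(-abs(i - t) for t in targets) for i in range(size)]

def climb(pos, field):
    """Agent rule: repeatedly move to the neighbouring cell with the
    highest field value; stop on a local maximum (a target position)."""
    while True:
        neighbours = [p for p in (pos - 1, pos + 1) if 0 <= p < len(field)]
        best = max(neighbours, key=lambda p: field[p])
        if field[best] <= field[pos]:
            return pos                 # local maximum reached
        pos = best

field = build_field(20, targets=[4, 15])
print([climb(p, field) for p in (0, 9, 19)])  # [4, 4, 15]
```

Each agent needs only purely local sensing (its two neighbouring cells), which is what makes the environment, rather than the agents, the central controller in this form.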

4.6. Other

Several other decentralised control mechanisms have been proposed. The list given in this paper is by no means exhaustive, but it gives a good idea of the work that has been done.
Constraint-based Control: Control mechanisms that use constraint solving have been applied in [56] and [14]. In [56], the constraint-solving components distributed in the system are deliberative, goal-oriented agents. In [14], a hierarchical decomposition of the problem into local constraint solvers is used.
Genetic or Evolutionary Control: Some use genetic algorithms to achieve and detect wanted behaviour in systems. For example, in [10, 28] a genetic approach results in achieving emergent computation in cellular automata.
Artificial Physics Laws: In this approach agents react to artificial forces that are motivated by natural physical laws. In [42], examples are shown for various regular geometric configurations of agents. In [16], a global monitoring framework is added that uses only limited global information to check that the desired geometric pattern emerges over time as expected. If not, the global monitor steers the agents to self-repair. The challenge here is to find a mapping between the desired behaviours and a force law, which may be infeasible in some situations.
Adapting Organisational Forms: [46] proposes a way to control the scalability of a system with respect to the needs of the current situation of the agent or task. This is done by adapting the organisational form at run-time. A certain organisational form may be very scalable for situation x but not for situation y. The idea is to switch to the form that is optimal for the situation the system or agent is in. A distributed method triggers this reorganisation, for example only when a majority of the agents want the new form.
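The artificial-physics idea can be illustrated with a minimal one-dimensional sketch (our toy, not the force law of [42]): two agents feel an inverse-square force that attracts when their separation exceeds a desired distance R and repels when it is smaller, so the separation settles near R.

```python
def pair_step(x1, x2, R=1.0, G=0.01, clip=0.05):
    """One step of a toy artificial-physics law for two agents on a line:
    clipped inverse-square attraction beyond separation R, repulsion
    within it. Both agents move symmetrically along the line."""
    d = abs(x2 - x1)
    mag = min(G / d ** 2, clip)        # clip keeps the force bounded at small d
    direction = 1 if d > R else -1     # attract if too far apart, else repel
    if x1 < x2:
        return x1 + direction * mag, x2 - direction * mag
    return x1 - direction * mag, x2 + direction * mag

x1, x2 = 0.0, 3.0
for _ in range(500):
    x1, x2 = pair_step(x1, x2)
print(abs(x2 - x1))  # settles close to the desired separation R = 1.0
```

With many agents and pairwise forces, the same rule yields the regular geometric configurations studied in [42]; finding a force law that produces an arbitrary desired pattern is the hard mapping problem noted above.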

5. Conclusion

We can conclude that the three proposed building blocks are essential to conquer autonomic systems because they form a process to engineer those systems. Agents make it possible to model our system with a focus on interaction and autonomy, which are very important aspects of autonomic systems. Dynamical systems theory allows us to analyse the dynamics of a system, also in unstable conditions. And decentralised control is the only way we can control the dynamics of such a system in a way that is scalable, flexible and robust. Conquering complex autonomic systems demands an integration of these different research disciplines. A lot of work has already been done separately on each building block; a lot of work still needs to be done to integrate them and to offer complete decentralised control solutions for industrial applications. The ultimate integration is to come to systems that are self-diagnostic, applying dynamical systems analysis to their own current behaviour. The system achieves self-adaptation by using the right control mechanisms to control its behaviour according to the insights from the analysis. This self-managing or autonomic behaviour of the system is implemented in a natural way by the multi-agent paradigm. This leaves a number of challenges to conquer; we give a few:
• Different research communities have to come together and work with each other intensively.
• Developing tools for agent-based modelling in which dynamical systems theory can be easily applied and control mechanisms can be easily inserted into the model is important for acceptance in industry and efficient application of the approach.
• In dynamical systems theory we must identify all the available techniques and relate them to the insights that can be gained from each technique. This gives engineers, or the system itself, a tool-box from which the appropriate technique can be selected based on the kind of insight wanted. Again, tools are important.
• We must be able to map certain wanted effects and insights to specific control mechanisms so that these can be selected automatically. Engineering control mechanisms directly based on such insights, and exploiting dynamical systems theory to create new kinds of control mechanisms that result directly from the theory, is a possible track.

References

[1] Netlogo, 2003. The Center for Connected Learning and Computer-Based Modelling, Northwestern University, (http://ccl.northwestern.edu/netlogo/).
[2] B. F. Spencer Jr., S. Dyke, and M. Sain. Experimental verification of acceleration feedback control strategies for seismic protection. In Proceedings of the Japanese Society of Civil Engineers 3rd Colloquium on Vibration Control of Structures, volume Part A, pages 259–265, Tokyo, Japan, August 1995.

[3] H. Bojinov, A. Casal, and T. Hogg. Multiagent control of self-reconfigurable robots. In Proceedings of ICMAS 2000, Boston, MA, USA, 2000.
[4] W. Brock, W. Dechert, and J. Sheinkman. A test of independence based on the correlation dimension. In SSRI, number 8702. Department of Economics, University of Wisconsin, Madison, 1987.
[5] Butler, Kotay, Rus, and Tomita. Generic decentralized control for a class of self-reconfigurable robots. In IEEE Intl Conf on Robotics and Automation (ICRA), 2002.
[6] S. Camazine, J.-L. Deneubourg, N. Franks, J. Sneyd, G. Theraulaz, and E. Bonabeau. Self-Organization in Biological Systems. Princeton University Press, 2001.
[7] B. Carlson, V. Gupta, and T. Hogg. Controlling agents in smart matter with global constraints. In Proceedings of the AAAI Workshop on Constraints and Agents, July 1997.
[8] D. Cockburn and N. R. Jennings. ARCHON: A distributed artificial intelligence system for industrial applications. In G. M. P. O'Hare and N. R. Jennings, editors, Foundations of Distributed Artificial Intelligence, pages 319–344. John Wiley & Sons, 1996.
[9] J. P. Crutchfield, J. D. Farmer, N. H. Packard, R. S. Shaw, G. Jones, and R. Donnelly. Power spectral analysis of a dynamical system. Phys. Lett., 76A:1–4, 1980.
[10] J. P. Crutchfield and M. Mitchell. The evolution of emergent computation. In Proceedings of the National Academy of Sciences, USA 92 (23), 1995.
[11] J. Ferber. Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence. Addison Wesley, 1999.
[12] J. W. Forrester. Industrial Dynamics. MIT Press, Cambridge, MA, 1961. Republished by Productivity Press, Portland, OR.
[13] K. D. Frampton. Decentralized control of structural acoustic radiation. In Proceedings of ASME IMECE, 2001.
[14] M. P. Fromherz, L. S. Crawford, C. Guettier, and Y. Shang. Distributed adaptive constrained optimization for smart matter systems. In Proceedings of AAAI Spring Symposium on Intelligent Embedded and Distributed Systems, Stanford, CA, USA, March 2002.
[15] W. T. Goh and Z. Zhang. An agent-based modelling and simulation approach to agile manufacturing systems control. In Proc. of 27th International Conference on Computers and Industrial Engineering, Beijing, China, October 2000. ISBN: 7-900043-38-1/TP-38.
[16] D. F. Gordon, W. M. Spears, O. Sokolsky, and I. Lee. Distributed spatial control, global monitoring and steering of mobile physical agents. In Proceedings of IEEE International Conference on Information, Intelligence, and Systems, November 1999.
[17] B. P. Grocholsky, H. F. Durrant-Whyte, and P. W. Gibbens. Information-theoretic approach to decentralized control of multiple autonomous flight vehicles. In Proceedings of SPIE Vol. 4196, Sensor Fusion and Decentralized Control in Robotic Systems III, November 2000.
[18] O. Guenther, T. Hogg, and B. A. Huberman. Power markets for controlling smart matter. Technical report, Xerox PARC, 1997.
[19] T. Hogg and J. G. Chase. Quantum smart matter. PhysComp96, 1996.

[20] T. Hogg and B. A. Huberman. Controlling chaos in distributed systems. IEEE Trans. on Systems, Man and Cybernetics, 21(6):1325–1332, November/December 1991.
[21] T. Hogg and B. A. Huberman. Controlling smart matter. Smart Materials and Structures, 7:R1–R14, 1998.
[22] K. R. Howard. Unjamming traffic with computers. Scientific American, 1997.
[23] B. A. Huberman and T. Hogg. The emergence of computational ecologies. In L. Nadel and D. Stein, editors, 1992 Lectures in Complex Systems, volume V of SFI Studies in the Sciences of Complexity, pages 185–205. Addison-Wesley, Reading, MA, 1993.
[24] IBM. Autonomic Computing: IBM's Perspective on the State of Information Technology. IBM, 2001.
[25] W. B. Jackson, C. Mochon, K. V. Schuylenbergh, D. K. Biegelsen, M. P. Fromherz, T. Hogg, and A. A. Berlin. Distributed allocation using analog market wire computation and communication. In Proceedings of Mechatronics'2000, Atlanta, GA, USA, September 2000.
[26] K. Kaster. Towards understanding cooperation, decentralized control, and foraging behavior in groups of robots. In The 33rd Annual Midwest Instruction and Computing Symposium, 2000.
[27] J. P. Lynch and K. H. Law. Decentralized control techniques for large-scale civil structural systems. In Proceedings of the 20th International Modal Analysis Conference (IMAC XX), Los Angeles, CA, USA, February 2002.
[28] M. Mitchell, P. T. Hraber, and J. P. Crutchfield. Revisiting the edge of chaos: Evolving cellular automata to perform computations. Complex Systems, 1993.
[29] P. Molnár and J. Starke. Control of distributed autonomous robotic systems using principles of pattern formation in nature and pedestrian behaviour. IEEE Transactions on Systems, Man and Cybernetics, Part B, 31(3):433–436, 2001.
[30] P. J. Moylan and D. J. Hill. Stability criteria for large-scale systems. IEEE Transactions on Automatic Control, AC-23(2):143–149, April 1978.
[31] U. Nehmzow and K. Walker. Is the behaviour of a mobile robot chaotic? In Proceedings of the AISB'03 Symposium on Scientific Methods for Analysis of Agent-Environment Interaction, 2003.
[32] S. Omohundro. Modelling cellular automata with partial differential equations. Physica D, 10:128–134, 1984.
[33] H. V. D. Parunak. A dynamical systems perspective on agent-based 'going concerns'. Technical report, Center for Electronic Commerce, 1998.
[34] H. V. D. Parunak. From chaos to commerce: Practical issues and research opportunities in the nonlinear dynamics of decentralized manufacturing systems. In Proceedings of Second International Workshop on Intelligent Manufacturing Systems, pages k15–k25. K.U.Leuven, 1999.
[35] H. V. D. Parunak. Self-organizing behavior: Principles of self-organizing systems. Presented at PAAM'99, slides on web, 1999. http://www.erim.org/vparunak/Presentations/PAAM99tutorial2.pdf.
[36] H. V. D. Parunak and S. Brueckner. Entropy and self-organization in multi-agent systems. In J. P. Müller, E. Andre, S. Sen, and C. Frasson, editors, Proceedings of the Fifth International Conference on Autonomous Agents, pages 124–130, Montreal, Canada, 2001. ACM Press.

[37] H. V. D. Parunak, R. Savit, and R. L. Riolo. Agent-based modeling vs. equation-based modeling: A case study and users' guide. In MABS, pages 10–25, 1998.
[38] H. V. D. Parunak, R. Savit, R. L. Riolo, and S. J. Clark. DASCh: Dynamic analysis of supply chains. Final report, CEC/ERIM, 1999.
[39] V. Parunak and R. VanderBok. Modeling the extended supply network, 1998.
[40] N. Sandell, P. Varaiya, M. Athans, and M. Safonov. Survey of decentralized control methods for large scale systems. IEEE Transactions on Automatic Control, AC-23(2):108–128, 1978.
[41] R. Savit and M. Green. Time series and dependent variables. Physica, D50(95), 1991.
[42] W. M. Spears and D. F. Gordon. Using artificial physics to control agents. In Proceedings of IEEE International Conference on Information, Intelligence, and Systems, November 1999.
[43] B. F. Spencer Jr., J. Suhardjo, and M. K. Sain. Frequency domain optimal control strategies for aseismic protection. Journal of Engineering Mechanics, 120(1):135–159, 1994.
[44] S. Forrest and T. Jones. Modeling complex adaptive systems with Echo, 1994.
[45] Swarm, 2003. Swarm Development Group, (http://www.swarm.org/).
[46] P. J. Turner and N. R. Jennings. Improving the scalability of multi-agent systems. In Proceedings of the First International Workshop on Infrastructure for Scalable Multi-Agent Systems, volume 1887 of Lecture Notes in Artificial Intelligence, Barcelona, Spain, 2000. Springer Verlag. Revised papers published 2001.
[47] H. Van Parunak. Complexity theory in manufacturing engineering: Conceptual roles and research opportunities. Technical report, Industrial Technology Institute, 1993.
[48] H. Van Parunak. The heartbeat of the factory: Understanding the dynamics of agile manufacturing enterprises. Technical report, Industrial Technology Institute, 1995.
[49] R. S. VanderBok and H. V. D. Parunak. Managing emergent behavior in distributed control systems. ISA TECH, 1997.
[50] P. Wegner. Why interaction is more powerful than algorithms. Communications of the ACM, May 1997.
[51] J. Werfel. Autonomous multi-agent construction, December 2002.
[52] W. G. Wilson. Resolving discrepancies between deterministic population models and individual-based simulations. American Naturalist, 151(2):116–134, 1998.
[53] M. Wooldridge. An Introduction to MultiAgent Systems. Wiley, 2002.
[54] M. Wooldridge and N. Jennings. Intelligent agents: Theory and practice. Knowledge Engineering Review, 10(2), June 1995.
[55] M. Yim, J. Lamping, E. Mao, and J. G. Chase. Rhombic dodecahedron shape for self-assembling robots. Technical report P9710777, Xerox PARC SPL, 1997.
[56] Y. Zhang, M. P. Fromherz, L. S. Crawford, and Y. Shang. A general constraint-based control framework with examples in modular self-reconfigurable robots. In 2002 IEEE/RSJ Conference on Intelligent Robots and Systems (IROS 2002), Lausanne, Switzerland, September 2002.