
GENERATING CIRCULATION DIAGRAMS FOR ARCHITECTURE AND URBAN DESIGN USING MULTI-AGENT SYSTEMS

RENEE PUUSEPP

A thesis submitted in partial fulfilment of the requirements of the School of Architecture and Visual Arts, University of East London for the degree of Doctor of Philosophy

April 2011

Abstract

For decades, cyberneticians, systems theorists and researchers in the fields of Artificial Intelligence and Artificial Life have been looking for methods of building intelligent computer applications that can solve complex problems. Many design problems are complex by nature, and solving them requires a certain degree of intelligence. It therefore comes as no surprise that sophisticated computational applications have become increasingly popular amongst academics and practitioners in various design disciplines. Despite the recent success of generative design methods, many new modelling paradigms from AI and AL research remain largely unexplored in the context of architectural and urban design. One such paradigm is multi-agent modelling. Although thoroughly explored and implemented in a diverse range of subject areas, from the social sciences to economics, multi-agent systems have rarely been deployed in the design disciplines. This thesis explores multi-agent systems for conceptual design development – for generating circulation diagrams. Besides studying several known models in the architectural and urban design context, a few novel ones are proposed. Instead of drawing on existing urban and architectural theory, the inspiration for building circulation models comes from processes found in nature, where movement based on local navigational decisions leads to the emergence of highly complex and adaptable networks. Following the synthetic modelling approach, it is argued that studying and building simple agent-based models creates in-depth knowledge about the underlying principles of network development processes and allows one to gradually move towards building more sophisticated models. Once the principles of generating circulation systems are well understood, they can be used for creative purposes in designing circulation in buildings and settlements.

The main aim of this thesis is to develop and expose generative methods for the early stages of the design process. By investigating ways of building, validating and controlling generative models, it is demonstrated how these models can be integrated into the design workflow.


Table of contents

Abstract
Table of contents
List of figures
Acknowledgements
Chapter 1: Introduction
1.1 Computer scientific background
1.3 Computing circulation diagrams
1.4 Outline of the proposed research
1.5 Goals of the research
1.6 Methodology
Chapter 2: Computational modelling
2.1 Models in scientific research
2.2 Types of models
2.3 Models in architecture and urban design
2.4 Computational design methods
2.5 Generative design
2.6 Chapter summary
Chapter 3: Modelling circulation – an architectural review
3.1 The essence of circulation
3.2 Diagrams of circulation
3.3 Topological circulation networks
3.4 Optimal designs
3.5 Computational models
3.6 Chapter summary


Chapter 4: Agent-based modelling and multi-agent systems: nature, origin and modern applications
4.1 Background
4.2 Definitions of agent
4.3 Taxonomy of ABM
4.4 Properties and behaviour of agent-based models
4.5 Applications of multi-agent systems
4.6 Chapter summary
Chapter 5: Building blocks of agent-based circulation models
5.1 Design of the circulation agent
5.2 Design of the environment
5.3 Behaviour of agents
5.4 Environmental processing
5.5 Agent-environment interaction
5.6 Communication in circulation agents
5.7 The setting out configuration
Chapter 6: Prototypes
6.1 Emergent path formation: Loop Generator
6.2 Channel network formation: Stream Simulator
6.3 Cellular automaton and hill-climbing agents: Labyrinth Traverser
6.4 Network optimisation algorithm: Network Modeller
6.5 Way-finding agents and ant colony optimisation
6.6 Stigmergic building agents
6.7 Space forming agents
6.8 Discussion
Chapter 7: Case study 1 – a multi-agent system for generating street layouts
7.1 Developing the prototype
7.2 Generating diagrams in context
7.3 Quantitative analysis and evaluation of diagrams
7.4 Conclusions and discussions
Chapter 8: Case study 2 – an ant colony optimisation algorithm for generating corridor systems
8.1 Ant colony optimisation algorithms
8.2 Selected ACO algorithm
8.3 Testing ACO parameters
8.4 Generating corridor networks for office buildings
8.5 Observations and conclusions
Chapter 9: Controlling the diagram
9.1 Emergent behaviour OF and IN agent colonies
9.2 Development of the diagram
9.3 Flexibility and sensitivity – an exploratory analysis of multi-agent models
9.4 Means of control
9.5 Discussion
Chapter 10: Discussion and conclusions
10.1 Multi-agent models for generating circulation systems
10.2 Implications to the design process
10.3 In search for parsimonious models
10.4 Complete design diagrams
10.5 Self-criticism
10.6 Discussion. The author's view
Bibliography
Appendix 1: Submission to the open international ideas competition for Nordhavnen
Appendix 2: Submission to the international ideas competition for Tallinn City Hall
Glossary of terms

List of figures

Figure 2.1: Walter Christaller, model of central place theory, the 1930s. Source: (Christaller no date)
Figure 2.2: Ebenezer Howard, Garden City model, 1898. Source: (Howard no date)
Figure 2.3: Bill Hillier, computer generated settlement models. Source: (Hillier 1989)
Figure 3.1: Constructive diagram. Source: (Alexander 1964, p. 88)
Figure 3.2: Functional circulation diagram of Yokohama terminal. Source: (Ferre et al. 2002, front cover)
Figure 3.3: London Underground map. Source: (Beck 2010)
Figure 3.4: Bridges in Leidsche Rijn by Maxwan Architects. Source: (Maxvan no date)
Figure 3.5: Circulation diagram of Louis Vuitton store. Source: (UNStudio no date)
Figure 3.6: From left to right: composition, configuration and constitution of street networks. Source: (Marshall 2005, p. 86)
Figure 3.7: Topological classification of networks. Source: Haggett and Chorley (1969, p. 7)
Figure 3.8: Tree-cities: Chandigarh, Brazil, Greenbelt in Maryland, plan of Tokyo. Source: (RUDI no date)
Figure 4.1: Cooperation typology in multi-agent systems. Source: Franklin (Doran et al. 1997)
Figure 5.1: Glider in action – gliding. This cellular automaton agent has 4 different postures that it ceaselessly repeats
Figure 5.2: The game of chase. Smaller (black) agents are attracted to bigger (red) agents, who, in turn, are repelled by smaller ones. With a large number of agents, such a 'game' reveals some important mechanisms that lie behind the complex behaviours of simple agents
Figure 5.3: A swarm grammar – a branching diagram created by tracing agents using formal movement rules
Figure 5.4: A context-sensitive swarm grammar – the branch length is defined by the available amount of 'light'
Figure 5.5: Swarm grammars with hierarchical rules
Figure 5.6: Stable structures as computed with a cellular automata algorithm. Darker cells are less stable than lighter ones
Figure 6.1: Typical movement patterns of simple reactive agents. Emergent trails form open and closed loops to facilitate the continuity of the agents' movements
Figure 6.2: A sequence of snapshots illustrating the development of closed loops in 2D
Figure 6.3: The body plan of the 2D Loop Generator agent. α denotes the angle between the front sensor and a side sensor
Figure 6.4: Tests with different sensory configurations – agents with 3 sensors. The angle (α) between front and side sensors (from left to right): 0, 22.5, 45, 67.5, 90 and 120 degrees
Figure 6.5: An agent's 'body plan' in 3D – the development of minimal sensory configurations that produced continuous circulation diagrams
Figure 6.6: Generated 3D circulation diagrams
Figure 6.7: A sequence of snapshots illustrating the development of closed loops in 3D
Figure 6.8: The input landscape (left) and the resulting stream channels (right)
Figure 6.9: A typical progress of the channel formation algorithm
Figure 6.10: Tests with 1, 10, 100, 500, 1000 and 1500 agents
Figure 6.11: All tests with 1000 agents
Figure 6.12: A labyrinth solved by Labyrinth Traverser
Figure 6.13: The gradient is computed with the diffusion method: the redness of each patch shows the proximity to a point in the labyrinth
Figure 6.14: A sequence of images showing the progress of the agent
Figure 6.15: A network diagram generated with Network Modeller
Figure 6.16: Examples of network diagrams generated with the same configuration of target nodes. The variety of diagrams has been achieved with different control parameters
Figure 6.17: A typical process of network formation and optimisation
Figure 6.18: Recognisable minimal spanning trees generated with the network optimisation algorithm


Figure 6.19: Input-output coupling. The agent obtains input from the digital 3D model and from the reference map. Motor output is generated by interpreting input according to syntactical rules
Figure 6.20: Sensory input. Sensors acquire their value from the environment and from the corresponding location on the reference map
Figure 6.21: Interpretation rule set: red circles show activated sensors, the arrow shows the resultant movement direction. Different agents have different rules to map inputs to outputs. 75% of these rules are passed to 'offspring' to maintain the explorative diversity of the population
Figure 6.22: Way-finding in corridor-like layouts. All shown tests were successful as the colony was able to learn the route between two points. The reference map is laid on top of the layout of the digital model
Figure 6.23: Way-finding in quasi-urban layouts. Environmental features play a crucial role in the competition between popular routes. It is not always the shortest route that is preferred by agents
Figure 6.24: An 'arcade' built using a simple set of stigmergic rules and linear, forward-directed movement
Figure 6.25: Tall structures built by agents executing evolved stigmergic building rules and linear, upward-directed movement
Figure 6.26: A sequence of images showing the collective building activity of an agent colony. Agents place blocks of various sizes according to a shared set of stigmergic rules
Figure 6.27: Agglomerations of building blocks placed by agents. Each agent evolves its own building rules during the simulation
Figure 6.28: Generated 'pheromone' trails and the respective structures built by agents. Given a simple building rule, agents were capable of channelling their movement but often failed to establish continuous circulation patterns
Figure 6.29: Development of built structures that form circulation channels
Figure 6.30: A sequence of images showing dynamic feedback between circulation routes and built blocks
Figure 6.31: Emergence of vertical circulation
Figure 6.32: Development of vertical circulation and stacked blocks
Figure 6.33: Selected outputs of the simulation

Figure 6.34: Uniform distribution of agents following a simple rule in the simulated world
Figure 6.35: Formation of a cellular structure
Figure 6.36: Self-organisation of cellular agents in a bounded region
Figure 6.37: Self-organisation of cellular agents in a semi-confined area
Figure 7.1: Aerial photo of Nordhavnen (Google 2011a)
Figure 7.2: Stream Simulator modified: orthogonal stream channels are generated with the user-defined segment length
Figure 7.3: Circulation diagrams with many setting-out points and a small number of attractors. From left to right: tree structure with 1 attractor, 1 circuit with 2 attractors, multiple circuits with multiple attractors
Figure 7.4: Many-to-many relationship between setting-out points and attractors. All of these diagrams have 11 attractors placed across the landscape
Figure 7.5: Diagrams with attractors of differentiated magnitude – some attractors (larger dots) are more appealing to agents than others (smaller dots)
Figure 7.6: A generated diagram classified as a 4-grade road system. This exercise was done manually by counting the width (in pixels) and the strength (darkness) of the road segments in the diagram
Figure 7.7: The input image and the grid representing access to urban blocks. The shape of the area was partly driven by the competition brief and partly defined by the design team
Figure 7.8: The input map with attractors and the resultant diagram. Dots represent attractors, with the size indicating the importance
Figure 7.9: Distinct diagrams generated with an identical initial configuration
Figure 7.10: Diagrams generated with a uniform attractor grid
Figure 7.11: Sequence of images showing the development of the circulation network
Figure 7.12: Development of the topology diagram
Figure 7.13: Topology diagrams generated with a growing number of attractors (1-25)
Figure 7.14: Tests with randomly placed attractors
Figure 7.15: Tests with manually placed attractors
Figure 7.16: Tests 'in silico' – internal efficiency rises until connectivity has reached its ceiling but the maximum length is not yet achieved

Figure 7.17: An 'in silico' diagram with 5 attractors where the highest connectivity value has been achieved, but the total length of the network has not been exhausted
Figure 7.18: Graphs show the change of network parameters 'in silico' (left) and in the context of the Nordhavnen input map (right)
Figure 7.19: A near-optimal diagram with internal connectivity of 0.95 (as calculated after Song and Knaap (2004)) or 0.67 (as calculated with the proposed method of taking the shape of junctions into account)
Figure 7.20: Orenco Station, Portland (Google 2011b)
Figure 7.21: Hammarby Sjöstad, Stockholm (Google 2011c)
Figure 7.22: Vauban, Freiburg (Google 2011d)
Figure 7.23: Three representations of a diagram (from left to right) – topological, frequency diagram, and combined diagram
Figure 7.24: A selection of diagrams produced with the Nordhavnen model
Figure 8.1: The setting-out configuration showing the lower (source) patch, the upper (target) patch and the shortest routes
Figure 8.2: Number of agents tested – 50 (top left), 200 (top right), 500 (bottom left) and 2000 (bottom right)
Figure 8.3: Evaporation rates tested – 0.00001 (top left), 0.003 (top right), 0.01 (bottom left) and 0.03 (bottom right)
Figure 8.4: Adjust rates tested – 0.03 (top left), 0.1 (top right), 0.3 (bottom left) and 3 (bottom right)
Figure 8.5: A continuous gradient of pheromone, with extreme values around the source point and the target, leads to the successful detection of the shortest path
Figure 8.6: The problem in context: the task was to find a quality solution for the internal circulation on all 3 floors of the building. The image shows floors 2, 3 and 4 with the generated subdivision (coloured patches) in the perimeter polygon, and the proposed structural grid
Figure 8.7: Test run with 6 stair-core agents (from left to right). The algorithm solved the problem in 65 steps
Figure 8.8: Generating the 2nd floor diagram with 5 stair cores
Figure 8.9: Generating the 3rd floor diagram with 9 stair cores
Figure 8.10: Generating the 4th floor diagram with 6 stair cores
Figure 8.11: Generated solution (top) versus manually modified solution (bottom)

Figure 8.12: Form diagram + generative process = constructive diagram
Figure 9.1: Alignment and cohesion in 2D
Figure 9.2: Development of the diagram in 2D
Figure 9.3: Alignment and cohesion in 3D
Figure 9.4: Development of the diagram in 3D
Figure 9.5: The angle between the front and the side sensor (α) has a crucial impact on the generated diagrams. From left to right: α = 20, 45, 70 and 95 degrees
Figure 9.6: Flocking agents. Testing the behaviour of agents with different field-of-view angles
Figure 9.7: The field-of-view (FOV) angle affects the flock's behaviour. A larger angle leads to a more coherent flock, but it can cause the flock as a whole to move around


Acknowledgements

The journey from the early research idea to completing this work would not have been possible without the many people who helped me along the way. I am most grateful to all of them for their support and encouragement, and would like to take this opportunity to thank some of them individually.

Foremost, I would like to express my gratitude to Paul Coates for exposing me to the awe-inspiring world of generative design. He served as a living encyclopaedia and an unlimited source of wisdom and inspiration for anything related to bottom-up ways of computing design solutions. I would also like to thank Professor Allan Brimicombe – my second supervisor – who provided me with practical guidelines for structuring and presenting my work.

I am thankful to The Graduate School at the University of East London for its generous funding of the first three years of my research. Many thanks go to my colleagues at Slider Studio for their relaxed attitude towards my absence from the office during the most intense periods. In particular, I would like to thank Michael Kohn for practical advice and for occasionally bringing my feet back to the ground. I would also like to acknowledge Mæ architects, with whom I worked during the Nordhavnen design competition.

I am eternally grateful to my closest family – my parents and my brother – for their ceaseless encouragement in times when I most needed their support. Finally, I would like to thank Len - Marje Len Murusalu - for brightening up my days during the last few months before submitting this thesis.


Chapter 1: Introduction

The digital revolution has drastically changed the business and culture of almost every discipline in modern society. Like other professionals, architects, designers and urban planners are eagerly adopting new technologies in the hope of improving their designs, making their lives easier, providing quicker services, or simply astonishing their clients and members of the public. New digital tools are now part of the essential skill set in any contemporary design practice. Digital drafting and modelling are overtaking traditional hand sketching and physical prototype modelling. However, new digital tools are often used to computerise 'old' analogue technology, and the full power of computation is not utilised. As an alternative to the widespread trend of merely replacing traditional methods of design, digital tools can lead to new computational design methods.

Despite the recent popularity of new computational modelling methods in architecture and urban design (Terzidis 2006; Coates 2010), typical digital design researchers are still busily computerising traditional methods. A large proportion of digital design tools are associated with parametric modelling (Aish and Woodbury 2005; Schumacher 2008) – a technique that can still be seen as computerisation rather than computation. Little support is given to computational methods for concept design exploration (Reffat 2006). This thesis tries to fill this gap and expand the choice of digital tools in the early stages of the design process by introducing novel computational design methods.

Computation opens the opportunity to bring new methods of analysis and synthesis into the architectural and urban design discipline. Most methods currently used in professional practice and in academia are analytical in nature. For example, space syntax techniques are suitable for analysing urban (Batty 2004) and building (Calogero 2008) layouts.
A few computational methods are used for synthesising design proposals, of which even fewer relate to state-of-the-art computational paradigms (Kalay 2004, p. 205-293). This thesis investigates generative mechanisms for modelling solutions for architectural and urban design purposes. It attempts to construct computational models that help designers to explore a greater variety of possible solutions at the conceptual design stage. These computational models can be used for design synthesis and, once validated, provide insightful
information for solving design problems.

The lack of relevant theoretical foundations in urban design and architecture has forced the author of this thesis to look at a number of other disciplines in order to support the generative modelling approach. In that respect, complexity and systems theory have been particularly useful. Architectural and urban design theory simply helps to classify and assess the generated results. It is argued that circulation in buildings and cities is a design problem that is well suited to demonstrating the powers of bottom-up computational modelling – modelling a system by defining the behaviour of its components.

In this thesis, models for synthesising design solutions are called generative models – a particular type of computational construct used for creative purposes. However, not all generative models are appropriate for modelling circulation systems. The movement of people through and around an environment is an inherently decentralised, complex (Koutamanis, Leusen and Mitossi 2001) and dynamic (Helbing et al. 2002) phenomenon. The very nature of circulation in human settlements and buildings almost requires the use of bottom-up models for analytical and generative purposes. Helbing et al. claim that spatio-temporal patterns in pedestrian crowds can be reproduced as self-organised phenomena. It is argued that both types of models – analytical and generative – can be effectively constructed using self-organising multi-agent systems.

Multi-agent systems are composed of autonomous units – agents – interacting within a virtual environment (Gilbert 2004). A colony of agents is a complex adaptive system that has a strong tendency to self-organise and exhibit emergent behaviour (Macal and North 2006). Multi-agent models can be used for modelling spatial phenomena where the behaviour of agents is partly determined by the spatial and geometrical structure of their environment (Batty 2003).
Whereas there are several multi-agent models used for evaluating circulation design, generative models are few and far between. This thesis seeks to build, analyse and deploy models for generating circulation and to generalise the findings in order to propose a generative design methodology.
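As a concrete, deliberately minimal illustration of what ‘autonomous units interacting within a virtual environment’ means in code, the sketch below runs a small colony of reactive agents that each make purely local, slightly randomised navigation decisions while recording the path they travel. All names and parameters here are invented for illustration; this is not one of the thesis models.

```python
import random


class Agent:
    """A minimal reactive agent: it knows only its own position and a target."""

    def __init__(self, x, y, target, rng):
        self.x, self.y = x, y
        self.target = target
        self.rng = rng
        self.trail = [(x, y)]           # positions visited so far

    def step(self):
        # Move one unit toward the target with a small random deviation.
        # Local, probabilistic decisions like this are the only "intelligence"
        # each agent has, yet a colony of them can produce coherent paths.
        tx, ty = self.target
        dx, dy = tx - self.x, ty - self.y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist < 1.0:
            return                      # arrived
        self.x += dx / dist + self.rng.uniform(-0.2, 0.2)
        self.y += dy / dist + self.rng.uniform(-0.2, 0.2)
        self.trail.append((self.x, self.y))


def run_colony(n_agents, target, steps, seed=0):
    """Run a colony of agents from the origin toward a shared target."""
    rng = random.Random(seed)
    agents = [Agent(0.0, 0.0, target, rng) for _ in range(n_agents)]
    for _ in range(steps):
        for a in agents:
            a.step()
    return agents
```

Each agent's `trail` is, in effect, a rudimentary circulation diagram: overlaying the trails of many agents reveals where movement concentrates.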


1.1 Computer science background

“It seems fair to say that we live in an Era of Decentralization” (Resnick 1999, p. 1)

Computer technology has changed the way scientific models are built and hypotheses are tested. New technology allows the exploration of models that are far beyond the reach of a scientist without a computer. Modernist Newtonian science created simple models that expressed laws of nature or mathematical ideas with a few equations (Holland 1998, p. 17). The change in scientific research from numerical mathematics towards decentralised and distributed ways of exploring complex phenomena was propelled by the rise of new computational modelling methods from the 1950s to the 1970s. These methods grew out of automata theory in cognitive science (Von Neumann 1951), generative grammar in language theory (Chomsky 1956), the system dynamics approach (Forrester 1972) and sociology (Schelling 1971).

In the 1990s, a new generation of complexity theorists sought to understand life through computational modelling and simulation. Holland (1998, p. 117) believed that systems in nature can be modelled at different levels – from ecosystems to organisms, organs and individual cells. Capra (1996, p. 194) argued that computational models help to understand many important principles of living systems. Computers opened new opportunities because it was now possible to execute long sequences of instructions at high speed. According to Holland, this gave scientists the capacity to explore models that are orders of magnitude more complex. Constantly improving computational resources equip scientists with tools for exploring processes in massively parallel ‘worlds’ where the whole system is modelled from smaller components and from defined relationships between these components. The common understanding among computer scientists and practitioners in the area is that any system can be mapped into a program running on a universal computing machine (Galanter 2003).
Once a system can be explicitly defined, it is possible to create a computational model that is better observable than the system itself, as it can be executed and stopped at any time. Dynamic and decentralised models are now pervasive in many disciplines, from social sciences (Gilbert 2004) to geography (Castle
and Crooks 2006), from economics (Tesfatsion 2006) and logistics (De Schutter et al. 2003) to cognitive psychology (Morse and Ziemke 2008).

Besides studying and explaining existing systems in nature, bottom-up modelling can serve as a useful method of designing new systems. According to the constructionist philosophy, “models are not passive reflections of reality, but active constructions by the subject” (Heylighen and Joslyn 2001). These active constructions can help to solve the issues involved in designing new complex systems. Resnick (1999) calls this approach stimulation rather than simulation. While the latter is concerned with exploring natural phenomena, the former uses principles found in natural systems for building new systems with different functional purposes.

One of the new paradigms of modelling complex systems that has recently gained popularity is called agent-based modelling (ABM). ABM relates to the decentralised and bottom-up logic of assembling complex systems out of small well-defined units. Regardless of the popularity of agent-based modelling methods (Castle and Crooks 2006), there is no commonly accepted definition for an agent (Shoham and Leyton-Brown 2009, p. xiii). Minsky (1988) – one of the forerunners of artificial intelligence research – has described an agent as a unit of specific functionality that can be used for constructing agencies that have more sophisticated functionality. He argues that intelligence can be understood as a combination of non-intelligent agents – something very complicated can be explained and modelled as societies of simple agents. Complex agencies can be seen as multi-agent systems, which are indeed now being widely explored in a number of disciplines for modelling dynamic systems (Vidal 2007). Despite its widespread success, multi-agent modelling is relatively unknown amongst scholars and practitioners in the field of architecture and urban design.
Yet, many design and planning problems are very complex by nature – even “wicked”, according to Rittel and Webber (1973). Such problems would benefit from sophisticated computational models. If intelligence can be created by aggregating simpler agents together in a computer model, as Minsky suggests, then such a model could be useful for design purposes. It is envisaged that multi-agent systems will lead to new intelligent models for solving complex design problems. Additionally, Deaton and Winebrake (2000, p. 1) point out that virtually all environmental problems are dynamic system problems. As the foremost concern of the architectural task is the design of
environment, it seems plausible that multi-agent systems, as a dynamic modelling method, are suitable for designing the environment.

1.2 Computational design

Architecture is not a coherent scientific discipline in the traditional sense – it lacks a scientific basis and a rigorous research tradition (Kalay 2004, p. xvi). There is no universally accepted methodology or process for how design solutions are created and validated. It is highly debatable whether a single globally accepted design methodology is even needed. Nevertheless, many aspects of design can be created and analysed in isolation using scientific methods. These methods are often borrowed from other disciplines such as engineering, applied physics, mathematics and even biology. Architects have been adopting methods that originate from these disciplines for centuries. More recently, a new trend of deploying ideas from computer science has emerged.

Back in the 1970s, Stafford Beer complained that computers are used on the wrong side of the equation, “busily taking over exactly the old system” (Beer 1974, p. 40). Instead, Beer suggests that computers can be used as variety handlers – machines for creating and testing a variety of scenarios. Even now, this is not how architects tend to see the role of computers. Although computers are widely used in architectural practice, this use has been fairly limited. Kalay thinks that computational methods in design have emulated the habitual methods used by designers. Digital applications are mostly replacing traditional drafting, sketching and modelling, whereas the full power of computing is seldom harnessed. While computational modelling has pervaded many corners of contemporary life, architects are still where Beer thought the rest of the world was in the 1970s.

According to some researchers, the discipline of architecture needs to adopt new ways of synthesising and evaluating design. For example, Terzidis (2006, p. xi) and Kalay (2004, p. xvi) argue for the necessity of using computation on the right side of the equation – for computing solutions rather than computerising traditional methods.
The constructionist methods of building dynamic computational models for studying complex systems can be brought quite directly into architecture and urban design. An architectural solution rarely resolves just a single isolated issue; in most
cases, several interrelated problems need to be considered simultaneously in the design process. Different parts of a design are synthesised into a solution that has to meet a number of different goals. An architectural solution can be seen as a complex phenomenon – an aggregation where various sub-systems that otherwise have independent goals have to come together.

According to systems theory, complex phenomena can be explored in dynamic computer simulations (Capra 1996, p. 194). Dynamic modelling provides the means to synthesise complex phenomena and as such can be used for generating design solutions as well. The underlying principle herein is that space and the environment are the result of certain dynamic processes. These processes can be simulated in a dynamic model, but also stimulated, turning the model into a generative tool. Dynamic modelling is being used in many fields related to architecture and urban design, such as pedestrian movement analysis (Batty 2003) and geographical information systems (Silva, Seixas and Farias 2005). It is time that design disciplines embraced it as well.

1.3 Computing circulation diagrams

Circulation is defined as the means by which access is provided through and around an environment; it is the part of buildings and settlements where people move from place to place (Clerkin 2005). Clerkin asserts that architecture is not experienced statically but dynamically, via circulation space. The spatial layout should reflect the dynamics of locomotion. In the light of the constructionist approach, circulation space can be seen not only as a facilitator but also as a product of locomotion. It is envisaged that circulation can be modelled using computational methods that capture some dynamic properties of movement.

Using movement as the generator of form is not an entirely novel idea in architecture; mobility is often considered a key driver of design solutions. Arnheim (1977, p. 148-151) has pointed out that the architectural task permits two abstract types of buildings – shelters and burrows. Whereas the first type is ignorant of the dynamics of circulation, the second one derives its form from the motor behaviour of its users. There is little evidence in the literature that motor behaviour can be used for generating circulation for architectural purposes. However, there are a growing
number of authors who have been exploring circulation network formation in nature. Turner (2000), for example, has been studying circulation in social insects and has found that colonies of insects, by following relatively simple individual agendas, can collectively create intricate nest architectures that naturally incorporate circulation networks. It has also been demonstrated that this circulation formation process can be successfully simulated in dynamic computer models using ABM, and particularly multi-agent systems (Ladley and Bullock 2005). In the light of recent developments in dynamic system modelling, it is fairly logical to assume that multi-agent systems can also be appropriated for the task of synthesising circulation systems for architecture.

Agent-based modelling is a relatively new and unexplored concept in the architectural design discipline. As opposed to the traditional design process, where architects develop schemes from an external observer’s point of view, agents can inhabit and explore digital models in situ – they are directly embedded in the environment that is being modelled. Therefore, if agents can be used in design synthesis, then the generated solutions are possibly more closely related to the experience of end users. Multi-agent systems are flexible, adapt to changes in their environment (Mertens, Holvoet and Berbers 2004) and allow a step-by-step approach to solving problems (Bonabeau, Dorigo and Theraulaz 1999). This thesis argues that such systems can be used for making tools that generate design scenarios that fit into context.

Adaptivity plays a crucial role in both natural and artificial agent colonies. For instance, Deneubourg, Pasteels and Verhaeghe (1983) have demonstrated with the help of a mathematical model that randomness in behaviour has an adaptive advantage for ants. Due to this probabilistic behaviour, multi-agent models can be used as generators of a large number of solutions.
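The kind of stigmergic trail formation that such simulations study can be sketched in a few lines. The following toy model is written in the spirit of this literature rather than as a reimplementation of any cited model; its grid size, evaporation rate and other parameters are arbitrary choices for illustration. Agents random-walk on a grid, biased toward cells with more pheromone, and deposit pheromone wherever they go, so that a few heavily used trails emerge from initially uniform wandering.

```python
import random


def trail_model(size=20, n_agents=30, steps=200, evaporation=0.05, seed=1):
    """Stigmergic trail formation on a grid: agents random-walk, biased
    toward cells with more pheromone, and deposit pheromone as they go.
    Positive feedback makes a few shared trails emerge."""
    rng = random.Random(seed)
    pher = [[0.0] * size for _ in range(size)]
    agents = [(size // 2, size // 2) for _ in range(n_agents)]  # start at centre

    for _ in range(steps):
        new_agents = []
        for (x, y) in agents:
            # Candidate moves: the four grid neighbours (clipped to the grid).
            moves = [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= x + dx < size and 0 <= y + dy < size]
            # Weight each move by local pheromone (+1 so empty cells stay reachable).
            weights = [pher[i][j] + 1.0 for (i, j) in moves]
            nx, ny = rng.choices(moves, weights=weights)[0]
            pher[nx][ny] += 1.0         # deposit: the stigmergic write-back
            new_agents.append((nx, ny))
        agents = new_agents
        # Evaporation keeps unused routes from persisting forever.
        pher = [[v * (1.0 - evaporation) for v in row] for row in pher]
    return pher
```

The positive feedback loop (deposit → attraction → more deposits) is the essence of the path formation processes explored in the prototype models, while evaporation provides the adaptivity: abandoned routes fade away.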
This is arguably how computational models should be used in design disciplines – as variety handlers. The ability of dynamic models to generate multiple solutions is exactly what Beer means by using computers as variety handlers on the right side of the equation (Beer 1974, p. 43).

Generative multi-agent models are treated as diagramming machines in this thesis. The output of a multi-agent system should not be taken literally as a final design proposal but should be seen as an intermediary stage in the design process. All of the proposed models developed for this thesis should be treated as generators of
circulation diagrams rather than fully fledged problem solvers. Berkel and Bos (2006) describe diagrams as visual tools that can be used for compressing information. Indeed, multi-agent systems can generate a large amount of information that can possibly be captured in a diagrammatic form. Besides shape and topology, such diagrams can also represent other information, such as the intensity and speed of movement, potential congestion areas, and collisions between flows. These diagrams are constructive diagrams in Alexander’s (1964, p. 88) terms, as both the form and the requirements are expressed simultaneously.

Diagrams always feature an element of hermeneutics. Sevaldson (2000) argues that diagrammatic thinking frees computer-generated material from its determined context. He adds that diagrams can be ‘instrumentalised’ by reinterpreting, redefining, and remapping the visual material in a qualitative and playful manner. Although it is not a goal of this thesis to explain the process of interpretation or to demonstrate how diagrams can be converted into concrete design proposals, it is important to acknowledge that diagrams are abstract machines that have to be applied in order to convey non-abstract form.

1.4 Outline of the proposed research

This thesis explores generative design methods by investigating the use of multi-agent systems as tools for synthesising design solutions. Designing circulation in buildings and settlements is thought to be an appropriate task for which multi-agent systems offer an alternative approach to traditional design methods. Multi-agent systems are chosen mostly for their flexibility and adaptivity, which potentially make them ideal tools for designers.

Since there are just a few examples of how such tools are used in design processes, it is suggested that relevant models from computer science and from design-related disciplines should be studied. These models are decomposed into basic building blocks from which new models can be constructed. Once each block has been examined in depth, the next step is to build prototype models. Prototype models, in turn, are tested in the context of two practical design assignments that make up the case studies chapters of this thesis. Proposed prototypes and case study models are analysed quantitatively and qualitatively in order to find out the key principles that can be used for building and deploying models for solving circulation
design issues. Finally, the proposed models are compared with one another in the hope of discovering patterns of parsimonious models.

The next chapter (Chapter 2) continues by investigating computational models that have been constructed and tested by various academics and practitioners across several disciplines. Besides popular scientific models, the focus is on computational design models, and on generative models of design in particular. It is argued that traditional models in architecture and urban design are static and can be greatly improved by introducing dynamic modelling methods. The following chapter (Chapter 3) explores topics surrounding circulation and surveys existing computational models of natural and artificial circulation networks. It is established that circulation networks in buildings and urban settlements share many similarities with movement networks in nature. Similarly to natural networks, artificial ones lend themselves easily to computational modelling. The last chapter in the literature survey (Chapter 4) builds towards the argument that multi-agent systems are indeed an appropriate way of generating circulation networks. By looking at various applications of multi-agent systems, the chapter filters out key ideas for building generative models. Chapters 2 to 4 are intended to serve as the literature survey for this dissertation.

Following the synthetic modelling approach (Deaton and Winebrake 2000), Chapter 5 discusses the basic components that are needed in order to construct models for generating circulation diagrams. According to Morse and Ziemke (2008), the commonly accepted tasks of synthetic modelling are to test the sufficiency of a theory, to test the necessity of alternative theories, and to explore the interactive potential of agent-environment embedding. The task of synthetic modelling in this thesis is the latter one.
Models are built not in order to understand existing phenomena but to synthesise interesting and meaningful architectural diagrams from a defined set of components. Chapter 6 proceeds by investigating the potential of multi-agent systems in solving issues of circulation. Several prototype models are proposed and built in order to create knowledge that is particular to the given goals of this thesis. The proposed models feature many kinds of multi-agent systems, from simple reactive agents to more sophisticated learning agents. The chapter also discusses methods of validating generative models through the validation of the generated output. Generated circulation diagrams are compared to existing circulation networks, which – according to Brimicombe and Li (2008) – aligns
with the notion of external validation. Internally, models are validated qualitatively by exposing pseudo-code and describing algorithms step by step. Prototype models are also verified according to the verification principle proposed by Batty and Torrens (2005) – a model that is fine-tuned to one set of input data has to work on a different set of data as well.

The following chapters (Chapter 7 and Chapter 8) present two case studies where multi-agent systems are used in the context of real design tasks. The main goal of both case studies is to demonstrate practical applications of generating circulation diagrams. Whereas the first one illustrates the use of a multi-agent system as a generator of street networks for a large urban redevelopment project, the second case study constructs and experiments with a model for exploring circulation in an office building. Chapter 7 also illustrates how a model can be externally validated by comparing the parameters of generated circulation diagrams to the parameters of circulation networks in the real world and to existing construction standards. Both case studies also discuss ways of interpreting the generated diagrams.

The last part of this thesis (Chapter 9 and Chapter 10) revisits the models, extracts general building principles and highlights the main control mechanisms of multi-agent models for generating circulation diagrams. An exploratory analysis – studying system behaviour under different sets of parameters – is carried out on two separate models in order to understand the sensitivity and flexibility of multi-agent models. Finally, two patterns are proposed for building generative models.
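The idea of external validation — comparing parameters of a generated diagram against those of a real-world network — can be sketched as follows. The metrics and the tolerance threshold below are illustrative choices of the sketch, not the ones used in the thesis; the network is assumed to be given as a node dictionary and an edge list.

```python
import math


def network_metrics(nodes, edges):
    """Summary statistics for a circulation network given as a node dict
    {id: (x, y)} and an edge list [(id_a, id_b), ...]."""
    degree = {n: 0 for n in nodes}
    total_length = 0.0
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
        (xa, ya), (xb, yb) = nodes[a], nodes[b]
        total_length += math.hypot(xb - xa, yb - ya)
    return {
        "avg_degree": sum(degree.values()) / len(nodes),
        "total_length": total_length,
    }


def externally_valid(generated, reference, tolerance=0.25):
    """Crude external validation: every metric of the generated network must
    lie within `tolerance` (relative) of the reference network's metric."""
    return all(
        abs(generated[k] - reference[k]) <= tolerance * abs(reference[k])
        for k in reference
    )
```

In practice one would compare richer characteristics (segment length distributions, angular deviation, block sizes), but the pattern — extract comparable statistics from both networks, then check the relative deviations — stays the same.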

1.5 Goals of the research

This thesis is conceived in reaction to the one-sidedness of digital tools and to the lack of computational methods in contemporary design practice. Architects are still stuck with old modelling paradigms, and new dynamic modelling methods are largely ignored. The aim of this work is to develop design tools that are based on dynamic computational models. In particular, the aim is to show how multi-agent systems can be used for solving circulation issues in the early stages of the design process. It is hoped to reach a deeper understanding of how multi-agent systems should be constructed in order to generate useful circulation diagrams.

Main objectives of this thesis are:

• Find out the main building blocks and control principles of multi-agent systems for generating circulation diagrams
• Construct prototype models using these building blocks
• Investigate emergent path formation processes in prototype models
• Construct multi-agent models that generate meaningful diagrams in the context of practical design tasks
• Verify proposed models in different contexts and with different sets of input data
• Validate models by comparing generated network diagrams against some key characteristics of real-world circulation networks
• Propose design patterns of generative models based on the findings of this thesis

Subsequently, the research questions are formulated as follows:

• Can multi-agent systems be programmed to meet the underlying principles of circulation systems?
• Can multi-agent systems be used for generating flexible circulation diagrams?
• How can designers use bottom-up models generatively in the design process? How can designers gain control over such models?
• What is the simplest design of a multi-agent system for generating circulation diagrams?
• How do multi-agent models fit into the overall design process? Can these models be helpful in achieving multiple goals of a design proposal?

This thesis makes an original contribution to the shift in attitude towards modelling in the architectural design discipline. It does so by introducing generative multi-agent systems for modelling circulation networks and by proposing several novel prototype models. Whereas most of the models are original in terms of implementation or have not been used in a design context before, some of them are truly novel multi-agent models in terms of observed emergent behaviour and generated movement patterns. An unparalleled tripartite approach to gaining
control over the generated output is offered. As a conclusion of this thesis, two unique design patterns for generative modelling are extracted.

1.6 Methodology

In order to answer the research questions and meet the objectives outlined in the previous section, this thesis follows the synthetic modelling approach. Synthetic modelling is seen herein as a way of creating knowledge by actively building, testing and analysing models. It is believed that if one wishes to understand how meaningful circulation diagrams can be generated, one has to be involved both in the construction and in the deployment of computational models. One needs to be able to control a model by changing its working principles and then find out how these changes influence the end result. For this purpose, a rapid and agile development approach is seen as particularly appropriate. According to rapid development methods, constructing and testing models is a cyclical process that has to be repeated several times in order to get the model right.

Rapid development methods are implemented foremost at the prototyping stage of this research. At this stage, the basic principles of agent-environment interaction, the agent architecture and the environmental configuration are developed, tested and validated. Once these basic principles are understood well enough, one can proceed to the next stage of constructing prototype models by combining these basic elements.

Prototype models are then investigated through exploratory analysis, which is an essential part of the synthetic modelling approach. At this stage, the model is scrutinised by conducting a series of experiments with different internal parameters and initial settings. This stage is extremely important for the computational designer who wishes to deploy models in the context of real design tasks. It is argued that exploratory analysis is a critical part of the development process that leads to an understanding of how the behaviour and the outcome of models can be controlled.

The final stage of the synthetic modelling methodology in this thesis is the deployment of prototype models in context.
At this stage, models are tested in terms of their output – diagrams. After all, the usability of a model can be critically
assessed only by looking at the diagrams that have been generated in the context of design tasks.


Chapter 2: Computational modelling

The revolution in information technology has fundamentally changed the architect’s office. This change has shifted the architect from the drawing board to the computer. Today, nearly all practices rely heavily on CAD programs. However, the actual design methodology has not undergone a similarly dramatic shift – metaphors from the pre-computer era are still actively in use in CAD. The computer applications that architects are now using are often created for a wholly different purpose, or developed by software companies that are divorced from architectural practice. The lack of relevant computational methodology and purpose-made tools can seriously harm the quality of design output. If architects are not responsible for the production of their own tools and in charge of devising new methods, the technology will always pose serious limits to their creativity.

Architectural design is a speculative process. The real implications of the design object can never be accurately foretold before it is actually built. The designer always predicts the deployment and performance of the building, which always changes in the post-construction reality. Luckily, there are methods to minimise the difference between the estimated and the actual performance. Modelling is one such method.

Models in architecture and urban design are a common form of representation. Models help architects to visualise and communicate their design intents, develop concepts, and organise the data involved in the process. Three-dimensional models are essential for understanding complex buildings. Modelling does not only produce static descriptions of geometry, but can also help to understand the performance and usage patterns of the built environment.

The special focus of this chapter is on generative modelling methods. Generative methods require quite a different kind of design process to what is usually practised in the architectural office.
Traditionally, modelling takes place once the design is already largely conceived, and models are produced as static descriptions of the proposed solution. In the generative process, models are dynamically evolved through computer programming. This approach requires the exposure of all the design parameters and explicitly defined relationships between the design drivers. The
fundamental hypothesis herein is that generative methods provide the means to improve solutions to spatial problems and help to demystify the whole design process. The argument here is that architects need to rethink their modelling methods if they want to respond to the ever-increasing complexity of the architectural task.

This chapter is a literature survey of computational modelling in architecture. In order to put architectural modelling in a larger context, the chapter starts by looking at some definitions and types of scientific models. This is followed by a brief summary of models in traditional architectural and urban design practice. These models simply serve as references and for classifying the results of the generative models proposed in this thesis. The terminology of scientific models helps here to distinguish between different kinds of models in architecture and, eventually, leads to the definition of generative modelling. The last section outlines generative modelling methods and illustrates these with some examples found in the literature.

2.1 Models in scientific research

“Models are, by definition, a simplification of some reality which involves distilling the essence of that reality to some lesser representation.” (Batty and Torrens 2005, p. 756)

Scientific modelling is a process of abstraction that is practised in several disciplines and used for many different purposes. Nevertheless, there seems to be a general agreement that a model is a simplification of a real-world phenomenon and is constructed in order to capture or understand reality. Models are most often built to study complex systems in an attempt to gain a deeper understanding of these systems. The model is an abstract theory of how something functions (Lynch 1981, p. 277); it helps to visualise a theory or a part of it (Skyttner 1996, p. 59). Morse and Ziemke (2008) point out the purposes of modelling from the cognitive science perspective: 1) to test the sufficiency of a theory with respect to observational data, 2) to query the necessity of alternative theories, and 3) to explore the interactive potential of agent-environment embedding.

The general logic behind scientific modelling executes the principle of ‘understanding through making’. The constructivists assume that knowledge can be
obtained by building (Resnick 1999) rather than by following the reductionist logic of deconstructing the modelled phenomenon into smaller chunks. The constructivist approach is believed to preserve the richness of the model structure (Batty and Torrens 2005). According to the cybernetic view, a model is a representation of processes in the world that allows predictions about its future states (Heylighen and Joslyn 2001). Resnick (1999) offers an alternative, educational view – models can stimulate thinking rather than simulate real-world processes.

Gilbert and Terna (2000) scrutinise the use of models from the social sciences perspective and adopt quite a broad definition. Models are, in their interpretation, simplifications of reality that can be expressed verbally, in terms of statistical or mathematical equations, or with computer simulations. Scholl (2001) explains that while linguistic models maintain a high level of flexibility, mathematical models sacrifice flexibility in favour of rigour of formulation and consistency of structure. As opposed to linguistic and mathematical models, simulation models are implemented dynamically over a period of time. Haggett and Chorley (1969, p. 285) argue that simulation models shed light on the step-by-step evolution of features that would otherwise be difficult to explain; the focus in such models is on the process of formation rather than on the actual form itself. In addition to the aforementioned models, Morgan (2003) offers yet another type of model – the mental construction – that helps us to understand the world. Referring to Bayesian philosophy, he argues that our perceptions are models that we use to make educated guesses about the outside world.

The advent of digital computers fuelled the use of simulation models. According to Holland (1998, p. 17), scientific models used to be simple because it was impossible to handle complex models.
The simplicity was reinforced by the fact that it was possible to express important physical laws with a few equations. Programmed computers brought speed and accuracy to the execution of long calculations and made it possible to “explore models that were orders of magnitude more complex” (Holland 1998, p. 17). Thus, computer-based simulations rendered complex systems analysis tangible (Johnson 2002, p. 87). Computers permit scientists and scholars to explore new ideas and concepts, and give access to modelling techniques that were previously inaccessible (Resnick 1999). Resnick explains that computer modelling provides an opportunity to learn through exploration and experimentation. Certain aspects of the real world can be easily converted into a simplified digital representation (Castle and Crooks 2006), which makes the construction of computer models inexpensive. Holland (1998, p. 12) also points out that a model can now be started, stopped, examined, and restarted at any time – something that is impossible for real dynamic systems.

Jay Forrester (1972), in discussing the use of the computer in social systems and policy modelling, argues that mental models are prone to mistakes. Once the basic structure and interactions in a social system have been agreed, the mind struggles to follow the individual statements in order to draw correct conclusions about the behaviour of the system. The computer, on the other hand, can routinely and accurately trace through such sequences. For a human being it is easier to get the basic structure of a system right than to make correct assumptions about its implied behaviour. The latter, as Forrester argues, is better left to computers.

Another virtue of computational models becomes evident in studying decentralised systems. Observing and participating in a decentralised system is not enough to understand it – one has to construct the system instead (Resnick 1994, p. 5). Computational models of decentralised systems “provide accessible instances of emergence, greatly enhancing understanding of the phenomenon” (Holland 1998, p. 12). The speed of contemporary computers allows one to construct a model of thousands of simple interacting units and observe the emergence of higher-order behaviour.

Scientific models require validation in order to be useful for decision making and truthful to the phenomena modelled. Gilbert and Terna (2000) argue that the same holds true whether we deal with a mathematical equation, a statistical equation, or a computer program.
Validation involves two steps – observation of the model and observation of the process or phenomenon that is being modelled. The output from the model has to be sufficiently similar to the data collected from the real world. The quality of the model can then be assessed post factum by measuring how accurately it has predicted subsequent events. Batty and Torrens (2005) set forth two principles of best practice in developing scientific models. The first is the principle of parsimony – Occam’s razor – which states that a better model is one that can explain the modelled phenomenon with less data. The second is the principle of independence, which requires the model to be independent of its data: if the model is driven by a certain data set, it has to be validated with a different one.

2.2 Types of models

Different disciplines treat models in slightly different ways. This section outlines some of the commonly accepted dualities used to distinguish different models, but offers no comprehensive taxonomic system. Instead, the categories are chosen for a particular purpose – to devise a system for discriminating between computational design models.

Dynamic and static models. The distinction between static and dynamic models is probably the most common one. For the purpose mentioned above, it is crucial to understand this duality in order to discriminate descriptive object models from process-oriented models. Castle and Crooks (2006), referring to Longley, explain the difference between the two types: while in static models input and output both correspond to the same point in time, the output of a dynamic model belongs to a later state in time than its input. As the name suggests, static models do not change state (D'Souza and Wills 1998, p. 46). A static model is often considered a description of an object, while dynamic models are about the behaviour of objects and the interrelationships between these objects. According to Ruiz-Tagle (2007), static models are used to understand the structure of systems, while dynamic ones are created for simulating and observing the behaviour of systems over time. From his – an architect’s and a geographer’s – perspective, static models of cities fall into three categories: mathematical models used for planning and observing urban development, physical and environmental geography models, and social models derived from Luhmann’s philosophy. Dynamic models, on the other hand, include optimisation models for transportation networks and land use, models for operational control, spatial evolution models, and agent-based and cellular automata models. The tasks of dynamic and static models also differ. While static models can predict impacts, vulnerabilities, or sensitivities, dynamic models can be used to assess ‘what if’ scenarios (Castle and Crooks 2006). Dynamic models allow one to change the input parameters and observe the model behaviour under several distinct conditions, which is particularly useful for modelling complex dynamic systems. Simulation of complex systems differs from static modelling in that simulations are open-ended (Batty and Torrens 2005). Deaton and Winebrake (2000, p. 1) argue that almost all environmental problems involve a large number of interconnected units that change over time, and should be modelled dynamically. Haggett and Chorley (1969, p. 285-302) use dynamic models to explain certain observed features of channel networks. In order to simulate the evolution of networks, they have to implement the formation model over time. Without programmable computers, such a modelling technique is hardly conceivable. Mathematicians and scientists have used differential equations to model dynamical systems for centuries, and many computer applications still re-implement the same approach (Resnick 1999). However, the more fundamental opportunity for using computers in dynamic system modelling is based on completely new representations.

Analogue and digital models. Analogue models usually replicate certain aspects of the modelled system in a physical medium at a different scale from the original. Analogue models are scaled models that represent reality in miniature (Castle and Crooks 2006). The success of analogue models depends on the scalability of the real-world phenomenon. In architecture, analogue models – sometimes also known as physical models – are usually scaled-down replicas of the design object. The advantage of analogue models over digital models is their materiality (Kalay 2004, p. 133). Digital models conduct all operations using computers and reduce the relevant aspects of the design object to a sequence of binary values (Castle and Crooks 2006). The translation of building geometry into a symbolic data structure is non-trivial and prone to information loss (Kalay 2004, p. 133).
However, the amount of information a digital model can handle greatly surpasses that of analogue models.
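To make the static/dynamic duality discussed above concrete, here is a minimal sketch (not from the thesis; the growth law, function names and all numbers are illustrative): the same logistic growth model expressed once statically, where input and output refer to a single point in time, and once dynamically, where each state feeds the next.

```python
import math

def logistic_static(x0, r, k, t):
    """Static model: the closed-form logistic curve evaluated at time t."""
    return k / (1 + (k / x0 - 1) * math.exp(-r * t))

def logistic_dynamic(x0, r, k, steps, dt=0.01):
    """Dynamic model: the state at each step is computed from the previous one."""
    x = x0
    for _ in range(steps):
        x += r * x * (1 - x / k) * dt  # Euler update of dx/dt = r*x*(1 - x/k)
    return x

# The two views agree closely at t = 10 (1000 steps of dt = 0.01), but only
# the dynamic version exposes the step-by-step evolution of the state.
static_value = logistic_static(1.0, 0.5, 100.0, 10.0)
dynamic_value = logistic_dynamic(1.0, 0.5, 100.0, 1000)
```

The dynamic version can be stopped, examined and restarted at any step – the property Holland attributes to computer models – whereas the static version merely maps inputs to outputs at one instant.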

Individual and aggregate models. Castle and Crooks (2006) claim that it is theoretically possible to model any dynamical system using a set of rules for the behaviour of its constituent parts. Typical individual-based models are crowd simulations and flocking models. Although individual-based models are now widely acknowledged, it was only a few decades ago that Reynolds (1987) developed the first computational example of such a model. One of the greatest advantages of individual-based models is their ability to simulate emergence (Holland 1998, p. 12). Depending on the size and resolution of the modelled system, however, it is not always practical to model individual behaviours. In such cases, individual actions are aggregated and the behaviour of the system is modelled as a whole (Castle and Crooks 2006).

Probabilistic (stochastic) and deterministic models. Probabilistic models are, quite simply, models that use random processes as part of their logic. The randomness is achieved by introducing random number generators into the model (Williams 2005). While deterministic models always produce the same output with respect to the initial configuration, the output of a probabilistic model may vary. Pierre-Simon Laplace said that "probability is nothing but common sense reduced to calculation" (Bazzani et al. 2008). One can argue that, if a level of reasoning is beyond the scope of the model, the output values of such reasoning can be replaced with random values. By introducing randomness, one can therefore conveniently simulate undefined input parameters of the model. Randomness in computational models can also be useful for overcoming local minima problems. For example, Deneubourg, Pasteels and Verhaeghe (1983) have shown with a simple mathematical model that the probabilistic behaviour of ants offers an adaptive advantage in foraging.
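This use of randomness can be sketched minimally (a hypothetical landscape and parameters, not Deneubourg et al.'s model): a greedy descent on a function with two minima gets stuck in the nearer, shallower basin, while the same descent with random perturbations can jump the barrier and reach the deeper one.

```python
import random

def f(x):
    """A one-dimensional landscape with a shallow and a deep minimum."""
    return x**4 - 3 * x**2 + x

def descend(start, step=0.1, jitter=0.0, iters=200, seed=0):
    """Greedy descent; with jitter > 0, candidate moves are randomly perturbed."""
    rng = random.Random(seed)
    x = start
    for _ in range(iters):
        noise = rng.uniform(-jitter, jitter)
        # keep whichever of {stay, move left, move right} has the lowest f
        x = min((x, x - step + noise, x + step + noise), key=f)
    return x

deterministic = descend(1.0)           # stuck in the shallow minimum near x = 1.1
stochastic = descend(1.0, jitter=2.0)  # perturbations let it reach the deep basin
```

The deterministic run always terminates in the local minimum closest to its start, whereas the stochastic run, being greedy but noisy, occasionally lands beyond the barrier into the deeper basin and keeps descending there.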

2.3 Models in architecture and urban design

The traditional use of models in architecture and urban design differs from the scientific approach to modelling. Architectural models are seldom abstractions of existing phenomena – whilst science is looking for a theory of explanation, architecture is looking for a theory of generation (Frazer 1995, p. 12). Architects utilise models to create reality, not necessarily to analyse it. Longley points out that there are two particular uses of the term ‘model’ in architecture: to denote the representation of physical artefacts (such as scaled-down block models of towns) or to designate abstract spatial relations (Longley 2004). The latter is further explained by Lynch, who describes the model as a picture of how the environment is meant to be made – “a description of a form or a process which is a prototype to follow” (Lynch 1981, p. 277). In his terms, the model is an idealised example that provides principles for organising the environment. Lynch believes that each city model corresponds to a city theory – a city model is an expansion of the concept of urban design (Shane 2005, p. 31-32). Models are there to create order among the chaos of the city. They are a practical necessity and help to manage the complexity of real problems.

Figure 2.1: Walter Christaller, model of central place theory, the 1930s. Source: (Christaller no date)

Abstract models of ideal cities such as Walter Christaller’s diagram of central place theory and Ebenezer Howard’s Garden City diagram (see Figure 2.1 and Figure 2.2 respectively) are typical of early industrial-era city theory. Since then, city theory has come a long way. Lynch’s models of the City of Faith, the City as a Machine, and the City as an Organism (Lynch 1981) mark the shift from static representation towards the view of cities as self-organising systems (Shane 2005, p. 54). One can no longer find models akin to Christaller’s hexagonal diagram in contemporary city and urban theory. City models, as Portugali explains, are now “first and foremost a method of representation of ideas about the dynamics and structure of the phenomenon we term city” (Portugali 1999, p. 93).


Figure 2.2: Ebenezer Howard, Garden City model, 1898. Source: (Howard no date)

Traditional methods that designers have used for centuries have become inadequate for representing the dynamic nature of cities. Computer models seem more appropriate for that purpose. One of the first attempts to demonstrate the ability of modern computer systems to simulate the richness of interactions in cities was Forrester’s Urban Dynamics model (Batty and Torrens 2005). Since then, there has been an abundance of computational simulation models that investigate dynamic phenomena in cities, usually dealing with land use patterns, urban growth, urban economics and transportation planning (Batty 2008). However, most of these models scrutinise cities from the social scientist’s (e.g. Birkin et al. 2008) or geographer’s (e.g. Antoni 2001) perspective, and seldom investigate the urban environment from the designer’s standpoint. It is paradoxical that contemporary city models do not take the dynamics of the built environment into account. The urban environment, like any other environment, is certainly dynamic from the systems theory perspective (Deaton and Winebrake 2000). Although there are many models concerned with social processes that produce abstract spatial patterns (e.g. Crooks 2008), one can hardly find any model that treats the built environment dynamically, or any that investigates the feedback mechanisms between social dynamics and the built environment. The task of defining how the structure of the built environment is produced by social interactions is not an easy one. Design professionals, who probably lack the technical know-how or feel that the subject is outside their domain, are simply ignoring it. Yet the task is a rather creative one, and design professionals should be engaged in it. Bill Hillier is one of the few researchers who have been interested in how urban morphology is informed by social aspects. He uses social concepts to explain and generate settlement layouts of unplanned villages and medieval town centres (Hillier 1989). In his beady-ring model (see Figure 2.3), Hillier treats space as if it were composed of nothing but mobile individuals. He explains that in such a discrete system, individuals can give rise to a settlement structure simply by reacting to local rules (Hillier and Hanson 1984). The beady-ring model has been computationally validated by Coates (2009).

Figure 2.3: Bill Hillier, computer generated settlement models. Source: (Hillier 1989)

Physical models have been used for representing buildings since at least the mid-19th century (Richardson 1859). Whereas the uptake of computational models among urban design professionals has been slow, the same cannot be said of architectural designers. Digital models of building geometry have become central to the design process in many architectural practices. Frazer (1995, p. 26) asserts that computer models allow ideas to be developed, visualised and evaluated without the expense of actual construction. Leaving aside the traditional static representations of building geometry, there are several models that deal directly with form finding. Robert Hooke was notably the first to recognise the possibility of inverting a hanging form to create a structure of pure compression (Kilian and Ochsendorf 2005). Gaudí, for example, used such an analogue computer for finding optimal structures for the Sagrada Familia church and the Church of Colònia Güell (Williams 2008). Equally famous are Frei Otto’s models that utilise the same technique to produce the form of the Mannheim gridshell and the roof of Munich’s Olympic Stadium (Otto and Rasch 1995). Although there is still a dedicated place for physical models for representation purposes, computational models now prevail in the form-finding process. Even Gaudí’s hanging string model has found a digital counterpart (Kilian and Ochsendorf 2005).
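The logic of such digital hanging models can be illustrated with a small sketch (a simple relaxation scheme assumed here for illustration, not the method of Kilian and Ochsendorf): free nodes of a pinned chain are repeatedly pulled towards the average of their neighbours while a uniform load drags them down; inverting the settled shape gives a form in pure compression.

```python
def relax_chain(nodes=11, gravity=-0.05, stiffness=0.5, iters=4000):
    """Let the free nodes of a pinned chain settle under a uniform nodal load."""
    ys = [0.0] * nodes  # vertical positions; ys[0] and ys[-1] stay pinned at 0
    for _ in range(iters):
        for i in range(1, nodes - 1):
            target = (ys[i - 1] + ys[i + 1]) / 2  # average of the two neighbours
            ys[i] += stiffness * (target - ys[i]) + gravity
    return ys

shape = relax_chain()  # a sagging, symmetric funicular polygon
```

With a load uniform per node, the settled polygon approximates a parabola – the funicular shape of a uniformly distributed load; a chain loaded along its own length would settle into a catenary instead.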

2.4 Computational design methods

From the early days of digital technology, computers have been used in design evaluation and synthesis. Cross (1977) lists a number of programs for analysing design briefs, synthesising floor plans, and evaluating design solutions. Whereas analytical software is quite extensively deployed in the contemporary design process, design synthesis mostly remains at the level of conventional modelling in CAD, reducing the model to a representation of surface geometry. Such modelling can fairly be termed ‘manual’, since the selection and transformation of vertices and faces of the surface geometry are controlled by the designer’s hand movements. Frazer (1995, p. 66) argues that modelling in architectural design practice occurs mainly after the design is already conceived, and ‘what if’ models common in dynamic modelling are hard to find. All CAD applications make certain assumptions about form and its manipulation, but designers should develop their own programs (Frazer 1995, p. 26). Weinstock (2006) proposes that, instead, one can use mathematical models to generate and evolve forms and structures in morphogenetic processes.

Parametric modelling

One modelling paradigm that has gained a lot of attention lately in architectural communities is called parametric modelling. Patrik Schumacher (2008) has even hailed ‘Parametricism’ as a new architectural style. He points out that this new style can only exist via sophisticated computational techniques of scripting and parametric modelling in specialist CAD software. In a parametric model, design objects are described as a set of architectural elements and the interrelationships between these elements. According to Aish and Woodbury (2005), the parametric model is a constrained collection of schemata. The parametric model is propagation-based and acyclic (Aish and Woodbury 2005), which makes it essentially static. Parametric models depend on input parameters that can be dynamically manipulated by the designer or a computer program. However, without embedding it in a dynamic process, the parametric model remains just another representation of the building geometry.
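The propagation-based character of such models can be shown with a toy sketch (illustrative names and values, not any particular CAD system): a handful of driving parameters propagate forward, acyclically, into derived floor-plate placements; changing one input recomputes everything downstream, but the model itself has no internal dynamics.

```python
def tower_model(floors, floor_height, twist_per_floor):
    """Derive per-floor placements from three driving parameters (acyclic propagation)."""
    return [
        {"level": i, "elevation": i * floor_height, "rotation": i * twist_per_floor}
        for i in range(floors)
    ]

a = tower_model(floors=5, floor_height=3.5, twist_per_floor=4.0)
b = tower_model(floors=5, floor_height=4.0, twist_per_floor=4.0)  # one input changed
```

Every derived value in `b` that depends on `floor_height` differs from `a`, yet the model never evolves over time – it only re-evaluates the same dependency graph, which is what makes it essentially static.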

Analytic, synthetic and evaluation methods

There is a general consensus that the major components of the architectural design process are analysis, synthesis and evaluation (Cross 1977, p. 13; Kalay 2004, p. 299). Design brief analysis happens at the first stage; design solutions are generated at the synthesis stage and assessed at the evaluation stage. At each stage, several computational methods exist to assist the designer. Analytic computational methods deal with the design brief and site-specific problems. These methods combine simple design problems into an overall design problem by constructing hierarchical problem trees (Cross 1977, p. 19-20). According to Milne (Cross 1977, p. 30), clustering problems into branches helps the designer to find compatible design solutions for a complex problem by checking the solution against every sub-problem in the branch. Cross (1977, p. 40) believes that computer evaluation is the least controversial of the three stages, since it is about rational appraisal and there is little creativity left. At the evaluation stage, various performance aspects of design solutions are verified against the design goals and requirements; it is the feedback part of the design cycle (Kalay 2004, p. 299). Evaluation methods are positivist – they combine empirical knowledge with mathematical constructs. Typical computational evaluation methods are thermal, solar and structural analysis, fluid dynamics simulation, and pedestrian and vehicular traffic simulation. Synthesis methods, as opposed to evaluation, are constructivist and essentially different from positivist analytical methods. Besides extreme positivist positions, computer modelling is also capable of supporting the extreme constructivist approach (Scholl 2001) and can therefore be used for design synthesis. Synthetic design methods for automatically generating design solutions have long been a fascination of researchers in architectural computing (Cross 1977, p. 33). The earliest computational synthetic methods are operational research methods (Kalay 2004, p. 201). These methods, mainly developed in the 1970s, utilise procedural logic to find solutions in situations where the problem domain and the solution space are fully known (Kalay 2004, p. 201). These procedural methods were originally invented for solving room layout and space-allocation problems (Cross 1977). Another class of synthetic methods are heuristic methods. Computer applications using heuristic methods are often called expert systems. These are usually developed in the vein of old symbolic Artificial Intelligence research and emulate the habitual methods used by designers (Kalay 2004, p. 231). An expert system employs certain intellectual rules of thumb designed by the system creator to logically derive the solution (Skyttner 1996, p. 191). The heuristic rules are usually formulated as a series of ‘if-then’ statements. A separate group of synthetic methods is concerned with form finding via physics simulation. A model of catenary arch formation (Kilian and Ochsendorf 2005) was mentioned earlier in this chapter. Computational methods have also been used to simulate Buckminster Fuller’s tensegrity structures (Tibert and Pellegrino 2003). All the above-mentioned methods are essentially computational alternatives to traditional design methods. In contrast, Frazer (1995) believes that the computer can be used as an evolutionary accelerator and a generative force to aid the design process in a non-traditional sense. He sees a new kind of architectural model as an adaptive blueprint – a computer program that can generate location-specific solutions. His approach can tentatively be called the generative design approach.
It borrows concepts and techniques from complex systems modelling and contemporary Artificial Intelligence and Artificial Life research. The subject is discussed in depth in the following section.
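The ‘if-then’ structure of heuristic expert systems can be sketched in a few lines (the rules below are invented examples, not from any cited system): facts are matched against rule conditions, and fired conclusions are added to the fact base until nothing new can be derived – a simple forward-chaining scheme.

```python
def run_rules(facts, rules):
    """Forward-chain: apply 'if-then' rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            # a rule fires when all of its condition facts are already known
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical design rules of thumb, encoded as (condition set, conclusion)
rules = [
    ({"site_is_narrow"}, "prefer_linear_circulation"),
    ({"prefer_linear_circulation", "many_rooms"}, "use_double_loaded_corridor"),
]
result = run_rules({"site_is_narrow", "many_rooms"}, rules)
```

Note how the second rule only fires after the first has added its conclusion – the chained derivation is what lets a small rule base emulate a designer's habitual reasoning.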

2.5 Generative design

This section is an attempt to define generative design and identify its role in the overall design process. Generative design is presented here as a collection of methods that share some common characteristics. Every design process has generative elements that may not be immediately apparent. At the stage of design synthesis, generative processes can combine computer automation and manual intervention in different proportions (Herr and Kvan 2007). Zee and Vrie (2008) argue that generative designing is not a traditional design process: it employs different arithmetic methods to generate alternative design solutions to the design problem. This process allows the designer to find solutions to complex problems that cannot be found in a traditional way (Zee and Vrie 2008). Herr and Kvan (2007) agree that generative methods facilitate the exploration of alternative solutions and are motivated by the increasing complexity of the design task. They believe that computers should be used “as variance-producing engines to navigate large solution spaces and to achieve unexpected but viable solutions” (Herr and Kvan 2007). In an attempt to define generative art, Galanter (2003) gives a somewhat different description of generative systems. He sees generative art as a system in which the artist uses a procedural invention that has a certain degree of autonomy contributing to the end result. This view can be adapted to generative design too. Generative design methods are ‘autonomous’ in the sense that they execute a type of logic that distinguishes them from the rest of the design process. Generative methods have their roots deep in system dynamics modelling. In the generative design process, solutions to a design problem emerge as a result of dynamical processes. McCormack, Dorin and Innocent (2004) argue that generative systems incorporate system dynamics into the production of the artefact. These systems offer a philosophy and methodology for treating the world in terms of dynamic processes and their outcomes (McCormack, Dorin and Innocent 2004). Whereas other modelling methods try to reduce the complexity of the modelled phenomenon, generative methods aim to produce complexity.
In the generative process, the production of complexity usually happens through aggregation. Frazer (1995) sees the generative model as an abstract design solution. This model is not a one-off blueprint but of a more generic type: it can be implemented in the context of the local environment and site-specific requirements, and can generate a variety of different solutions. Such generative design models can be validated in two ways. Firstly, the generative model, like scientific models, has to comply with the principle of independence (Batty and Torrens 2005): the model has to be independent of the original set of data that was used to design and calibrate it, and accept different sets. Secondly, generative models need to produce results of sufficient variety. In this thesis, generative design is seen as a synthetic design method. Generative design models feature feedback mechanisms and are therefore cyclical in nature. The feedback ranges from simple mechanisms, where the model takes its own output for input, to relatively complex ones incorporating design evaluation routines. Generative design is an iterative and dynamic process where solutions to the design problem are found through the repetition of design development cycles. Generative design employs various modelling methods, and there is general agreement among various authors about which methods belong to the generative design category. Most authors consider models of self-organisation, swarm systems, evolutionary methods and generative grammars to be generative methods (e.g. McCormack, Dorin and Innocent 2004; Zee and Vrie 2008). As opposed to the generative design process, where the feedback between the synthesis and evaluation stages is mediated by the designer or a separate computer program, these models are inherently generative – they feature internal feedback loops. By taking their output as input, generative models are self-referential. McCormack, Dorin and Innocent (2004) discuss the relevance of these methods to contemporary design practice, and claim that they are mainly needed to develop novel design solutions.

Models of self-organisation

This group consists of several distinct models that all follow some principles of self-organisation. Typical models here are cellular automata (CA), swarm and particle systems, and agent-based models. Collectively, these can be described as individual-based models. They all follow the logic of bottom-up aggregation, where global structure emerges from local interactions. The most popular self-organisation models in generative design are probably CA models. Besides the abundance of CA models in geography for simulating urban growth and resident dynamics (e.g. Junfeng 2003; Polidori and Krafta 2004), there are many models that deal with the design of the built environment directly. The suitability of CA models for generating settlement structures has been recognised by Portugali (1999). He argues that the discrete cellular structure of such models makes them a natural tool to represent the discrete spatial structure of real cities. The first attempt to bring CA models to architectural design was made by Coates (1996). He and his students have produced a number of experiments exploring the form-finding capabilities of such models (e.g. Coates, Derix and Simon 2003). CA models have been used to generate 3D massing solutions at the urban scale (König and Bauriedel 2004), explore different massing options for a high-density urban block (Herr and Kvan 2007), explore the façade composition of high-rise buildings (Herr 2003), and develop aggregated building solutions (Krawczyk 2002). Architects and urban designers are often fascinated by cellular automata models for their relative simplicity and their ability to produce complex-looking outcomes. The origin and functional mechanism of CA models are further discussed in Chapter 4. Although the CA model is a dynamic model, the topology of its structure is fixed – cells in such models can change state but not location. An alternative to fixed-structure models is the mobile agent-based model. Testa et al. (2001) argue that every component in a system, including components in the environment, can be treated as an agent. They propose an architectural model where agents, endowed with a set of constraints and preferences, represent houses. These agents can dynamically interact with one another and generate new urban forms. Agent-based modelling as a generative design method is also scrutinised in depth in Chapter 4.
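The CA mechanism can be sketched minimally (an illustrative growth rule, not any of the cited models): cells on a grid change state purely on the basis of local neighbour counts; here an empty cell becomes ‘built’ when exactly one orthogonal neighbour is built, so a single seed grows into a branching figure.

```python
def step(grid):
    """One synchronous CA update on a square grid of 0/1 cells."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 0:
                # count built cells among the four orthogonal neighbours
                built = sum(
                    grid[i + di][j + dj]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= i + di < n and 0 <= j + dj < n
                )
                if built == 1:  # grow only where exactly one neighbour is built
                    new[i][j] = 1
    return new

grid = [[0] * 7 for _ in range(7)]
grid[3][3] = 1        # a single seed cell
for _ in range(2):
    grid = step(grid)  # two generations of growth
```

Nothing in the rule mentions the global figure; the cross-shaped structure that appears is an aggregate of purely local decisions – the bottom-up logic described above.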

Generative grammars

Generative grammar originates from the linguistic theory of Noam Chomsky (Chomsky 1956). It refers to a generative model for producing syntactically correct sentences from individual words. According to Duarte (2004), a generative grammar consists of substitution rules that are recursively applied to an initial assertion to generate the final statement. Grammar-based models exploit the principle of database amplification by generating complex forms from simple specifications (McCormack, Dorin and Innocent 2004). In design, generative grammars are usually called shape grammars; instead of combining individual words, they operate with shapes or shape descriptors. The pioneering shape grammar work of Stiny and Gips (1972) was all done by hand. Their underlying aim was to use generative techniques to produce sculptures and paintings and to develop an understanding of what makes good art (Stiny and Gips 1972). Several computer implementations have followed their original work. Duarte (2004) has developed a shape grammar for the houses designed by the architect Álvaro Siza at Malagueira. Duarte uses a set of heuristics to evolve the final design by comparing the description of generated designs with the description of the desired house. This method allows him to generate Siza’s designs on demand without the architect. A particular group of generative grammars is called L-systems or Lindenmayer systems, originally developed to generate fractal-like branching structures in plants (Prusinkiewicz and Lindenmayer 1990). L-systems have also been explored in the design context. For example, Testa and Weiser (2002) have used an L-system-based program to grow morphogenetic surface structures and free-form honeycomb trusses. Parish and Mueller (2001) have developed a procedural city modelling methodology using L-system generators for street networks and additional shape grammars for generating buildings.
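The rewriting principle behind L-systems is easy to sketch (a textbook bracketed branching rule; the symbols and rule are illustrative): every symbol that has a rule is replaced in parallel each generation, so a short axiom amplifies into a long branching description – the database amplification mentioned above.

```python
def rewrite(axiom, rules, generations):
    """Apply all substitution rules in parallel, once per generation."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Conventional turtle reading: 'F' = draw forward, '+'/'-' = turn,
# '[' / ']' = push/pop a branch point
rules = {"F": "F[+F]F[-F]F"}
first = rewrite("F", rules, 1)
second = rewrite("F", rules, 2)
```

A single character expands to an 11-symbol string after one generation and a 61-symbol string after two; interpreted by a drawing turtle, the result is a plant-like branching figure.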

Evolutionary design models

Evolutionary design models are based on evolutionary computation algorithms, originally developed in Artificial Intelligence research. Evolutionary computation is an iterative process that uses the Darwinian principle of selection to choose the fittest individuals from a population of solutions, and recombines the selected solutions in order to achieve the desired result. There are a few discernible methods – genetic algorithms, evolutionary strategies and evolutionary programming – that all belong to the general class of evolutionary algorithms (De Jong 2006, p. 1-2). All evolutionary algorithms are essentially optimisation techniques in which the process converges towards an optimal solution with respect to given fitness functions. For instance, genetic algorithms (GA) – the most popular branch of evolutionary algorithms – are used for solving combinatorial optimisation problems (e.g. Krink and Vollrath 1997; Heppenstall, Evans and Birkin 2007). As a general approach, GA models utilise the mechanics of natural genetics (Goldberg 2008) by coding the features of a phenotype into binary ‘gene’ strings and recombining these strings to evolve novel solutions.
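The GA mechanics just described can be condensed into a toy sketch (a textbook ‘one-max’ fitness and invented parameters, nothing like a real design encoding): bit strings are selected by fitness, recombined by one-point crossover, and perturbed by point mutation.

```python
import random

def evolve(pop_size=30, length=16, generations=60, seed=1):
    """Tiny GA on bit strings; fitness = number of ones ('one-max')."""
    rng = random.Random(seed)
    fitness = lambda genes: sum(genes)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)       # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(length)] ^= 1    # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the parents survive unchanged, the best fitness never decreases; crossover and mutation supply the variation from which selection assembles a near-optimal string within a few dozen generations.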

There are strong arguments for borrowing concepts from natural evolution in order to develop design solutions. Cross (1977, p. 7) argues that developing the well-fitting forms of design objects is similar to the evolution of the forms of organisms in nature. Designers have always used an iterative process of creating a number of solutions, selecting among them, making improvements and recombining the solutions – a process quite reminiscent of natural evolution (Galanter 2003). Evolutionary systems offer the designer an opportunity to use aesthetic selection to breed design solutions in a controlled manner (McCormack, Dorin and Innocent 2004). A large number of designs can be generated in a short time, and the emergent form is often unexpected (Frazer 1995). Several researchers have employed genetic algorithms for morphogenetic and structural design purposes. Funes and Pollack (1999), for example, have developed a GA-based greedy algorithm to generate structurally sound configurations that are assembled out of parts. Mahdavi and Hanna (2004) have compared a stochastic genetic algorithm with a deterministic gradient-based search to optimise the geometry of space frame structures. Although their GA is outperformed by the deterministic method in terms of speed, the geometries it finds reveal more variation in shape. Frazer and his students at the Architectural Association have developed several genetic algorithms to evolve conceptual design solutions. For instance, they have devised an algorithm to generate yacht hulls, and another for evolving Tuscan columns (Frazer 1995, p. 61-63). Evolutionary techniques other than GAs are less frequent in design-related research. Some efforts have been made at combining evolutionary algorithms with cellular automata for designing bracing structures in the context of tall buildings (Kicinger, Arciszewski and De Jong 2004).
Following Hillier and Hanson’s (1984) groundbreaking work, Fuhrmann and Gotsman (2006) have proposed a greedy system for evolving housing layouts that are analysed by calculating visibility graphs. Genetic programming has been introduced to architectural design by Coates et al. (1997), whose algorithm operates on shape grammar rules to generate conceptual designs.


2.6 Chapter summary

This chapter began by outlining the role of scientific modelling and continued by describing the traditional use of modelling in architectural practice. It investigated generative design methods as a sub-category of computational modelling. The goal was to set generative modelling into a larger methodological context and to identify its place in the overall design process. Whereas traditional design practice sees models as static representations, the contemporary computational approach utilises models in order to explore dynamic processes. Such dynamic models can be turned into digital fabrication tools that can greatly enhance the design process.

Generative models have a particular place amongst computational design methods – as opposed to analytical and evaluation methods, generative methods are used for synthesising design solutions. Various authors have proposed and tested several generative design methods in the context of design tasks. It is commonly accepted that generative models offer an alternative to traditional design methods. It is often pointed out that the main benefits of generative modelling are the ability to create a variety of solutions and to respond to the increased complexity of design problems. During the generative process, design solutions are developed in an iterative manner – solutions are grown rather than invented. As a result, generated design solutions are flexible and contextual.

The next chapter discusses a design issue – circulation – that is thought to be sufficiently complex to evaluate the suitability of generative modelling as a method of design. Circulation is usually a key part of buildings and settlements. The value of a set of tools that enables the designer to quickly generate a variety of solutions at an early stage should not be underestimated. Additionally, circulation lends itself to both quantitative and qualitative analysis, which makes the evaluation of generated solutions fairly straightforward.


Chapter 3: Modelling circulation – an architectural review

This chapter presents a survey of studies and modelling techniques related to circulation in architectural design. This particular topic has been chosen because it is suitable for proving the usefulness of generative design methods in architecture. The circulation realm makes an ideal sandbox for testing out a bottom-up approach for several reasons. The most important reason is that natural circulation systems are inherently bottom-up and should, therefore, be modelled from the bottom up as well.

Circulation herein is seen as a dedicated area for movement that connects various parts of a building or settlement together into a coherent network of spaces. Crossing borders between private and public spaces, circulation provides opportunities for pedestrian and vehicular access. The circulation network creates and handles flows, links activities together and makes the space continuous. The circulation network channels matter and energy between spaces – it enables and is informed by movement. Movement can be a major shaper of the environment.

As opposed to many other issues in design, circulation lends itself quite easily to computation. Circulation routes can be measured and analysed parametrically; movement corridors can be optimised against several criteria, and circulation networks can be modelled by the act of movement. The underlying assumption in this thesis is that circulation diagrams can be generated within computer simulations using mobile agents.

In order to investigate the circulation realm, this chapter first looks at the general approach to circulation. Then it reviews relevant architectural representation techniques, and investigates the graph-theoretical perspective of circulation diagrams as networked graphs. After scrutinising the topological view of circulation networks, the last section examines some historical and contemporary computational models for generating circulation networks.


3.1 The essence of circulation

“Just as blood vessels of the vascular network distribute energy and materials to cells in an organism, road networks distribute energy, materials and people in an urban area.”

(Samaniego and Moses, p. 23)

Circulation is a flow of matter and energy, an orderly stream through a network of channels. The circulation network is a system of exchange among specialised spaces (Mitchell 2006). A common principle of organisation has been found at many levels of spatially distributed systems. Similarly to a complex organism that consists of organs linked by the vascular network, a large building consists of specialised rooms linked by circulation systems, and an urban environment consists of specialised buildings and public spaces connected by pedestrian and vehicular circulation networks (Mitchell 2006).

Circulation in architecture is often thought of as the means by which access is provided through and around an environment. Perhaps the most important notion in this definition is that of accessibility. From an inhabitant’s point of view, those parts of the environment that are not accessible usually offer no utilitarian benefit to the inhabitant. Such spaces are the result of insufficient access or an inappropriate layout of circulatory networks. Access to a space is provided via an entrance, across a threshold. Along with other border elements, the threshold separates spaces of different quality or purpose. As opposed to the border and the threshold, circulation does not define these spaces but facilitates the transition between them. Good access to a space is not always desired, and some degree of control over it is often required. As a result of control, people have different opportunities for movement and, therefore, circulation networks appear hierarchical and personalised (Alexander 1964).

Pedestrian and vehicular movement is often a major driver in the design process. In urban design, the street network is a key element that, together with built forms, constitutes the structure of the environment. This network defines the opportunity for motion and the means of access.
With respect to mobility, there are two basic solutions available for building design – the shelter and the burrow (Arnheim 1977, p. 148-151). Whereas the abstract type of the burrow is the result of the inhabitant’s physical penetration, a shelter cares about its user’s movement only secondarily, deriving its form from its own function instead. The burrow type of building is directly informed by the user’s acts of movement, providing more space at locations where the user wants more freedom of direction (Arnheim 1977, p. 149).

Good circulation in buildings and urban environments requires many design criteria to be met. The circulation network has to handle anticipated flows and provide access to desired spaces. An obvious aspect is also the economy of the layout – solutions that optimise the length of journeys between connected spaces are usually better in terms of occupational and construction costs. However, an optimised layout is not always desired. Many architects and designers value the user experience of moving through an environment. Kevin Lynch (1981, p. 146), for example, affirms that circulation is not just about shortest paths and can provide aesthetic pleasure, hence the optimisation of road and corridor lengths often comes at a cost. Besides the experience of moving through, Varawalla (2004) also values conceptual clarity in circulation. The clarity of the layout makes the environment legible and facilitates wayfinding. Clarity is often understood as an appropriate manifestation of hierarchy (Marshall 2005, p. 30). With respect to street networks, many contemporary urban design guidebooks encourage high permeability and connectivity (e.g. Llewelyn-Davies 2000; Evans 2008).

Circulation is often the most attractive and active area of the built environment. In some cases it constitutes the major part of the venue – airports, racing courses, subway and train stations all feature large dedicated spaces solely intended to accommodate movement. Ancillary spaces in these venues generally follow the circulation logic.
In hospitals and schools, the circulation network is also a key driver of spatial organisation. The way in which people move in these buildings is the fundamental generator of the plan (Varawalla 2004). In architectural practice, the area dedicated to circulation is often treated as a standard value – a percentage of the total floor area (Ireland 2008). This value is typically specific to a building type. In some building typologies circulation often shares the same space with other activities. Such integrated circulation solutions can create more flexible indoor spaces, and are more practical in terms of floor area than segregated circulation solutions.

Although circulation is an important issue in architecture, there is surprisingly little comprehensive literature available on the subject. One can find a bulk of publications that suggest codes and guidelines for designing circulation systems (e.g. Llewelyn-Davies 2000; Evans 2008), but rigorous overviews of the subject are rare. Koutamanis et al. (2001) point out that most analytical studies and design guidebooks appear to accept a reductive logic. This reductionist approach tends to over-constrain the architectural brief and limit the possible solution space. Marshall (2005, p. 29) adds that the desired patterns expressed in the literature are too often couched in terms of verbal descriptions of properties, or solely demonstrated by means of illustrative plans. Most of the descriptions and examples either provide irrelevant details or oversimplify the dynamics of circulation networks. Besides the rulebooks and design guides, there is an abundance of architectural design publications (e.g. Berkel and Bos 1999; Jormakka 2002) that present the concept of movement and circulation as a driver of built form. Although this literature can be very inspirational, it is hardly useful for devising computational models to generate circulation.

3.2 Diagrams of circulation

“The diagram is an invisible matrix, a set of instructions, that underlies – and most importantly, organises – the expression of features in any material construct.”

Sanford Kwinter (Reiser+Umemoto 2006, foreword)

The diagram is a popular representation method for expressing conceptual ideas in architecture. It helps to understand the building through examining its various systems. Diagrams are tools that help to manipulate the building form and spatial organisation in the early stages of the design process. Diagrams are popular among architectural design practitioners because they convey information visually – they appear to operate between form and word (Somol 2006). According to UN Studio’s architects Berkel and Bos (2006), diagrams liberate architecture from language and allow architects to encode and compress spatial information. They see diagrams as a kind of map that always points to something. Diagrams, in Berkel and Bos’ terms, are abstract representations of relationships (Berkel and Bos 1999, p. 75).

Moussavi and Zaera-Polo (2006) from Foreign Office Architects (FOA) point out that it takes several forms of mediation for a diagram to become a building. Similarly to drawings and graphs, diagrams belong to the arsenal of non-representational architecture. The FOA architects insist that diagrams should not be associated with a lack of control. Spuybroek (NOX architects) agrees that the diagram is a very clearly defined network of relationships, but “it is completely vague in its formal expression” (Spuybroek 2006). A diagram for him is an input/output machine with two operational modes: contracting data into graphical representations, or expanding these representations into spatial form. Although different architects may interpret diagrams in different ways, they all generally adopt Deleuze’s view of diagrams as abstract machines. The Deleuzian diagram maps the relations between forces and, instead of representing anything real, constructs a real to come (Jormakka 2002, p. 49). There also seems to be agreement among architects that diagrams, although not representational, are the designer’s instruments for producing form.

Figure 3.1: Constructive diagram. Source: (Alexander 1964, p. 88)

The use of diagrams is not an entirely new method of work in architecture – Somol (2006) claims that diagrams became ‘actualized’ in the era of modern architecture in the 1960s. One of the forerunners of contemporary architectural diagramming, Christopher Alexander, argues that the diagram has to convey some kind of idea about the physical form in order to be useful for an architect (Alexander 1964, p. 87). In his opinion, there are two types of diagrams – form and requirement diagrams. Whereas the form diagram points to a physical shape, the requirement diagram is a non-iconic notation of some constraints or properties (Jormakka 2002, p. 28). The latter is only interesting if it somehow informs the former. Combined together, these two types result in a diagram that Alexander calls a constructive diagram. A well-known example of his constructive diagram is the diagram representing traffic flows. In this diagram (see Figure 3.1) the arrows represent the channels of flow, while the line thickness conveys the capacity of these channels.

Figure 3.2: Functional circulation diagram of Yokohama terminal. Source: (Ferre et al. 2002, front cover)

‘Abstract machines’ representing movement and circulation are amongst the most often used diagrams. Architects seem to think naturally in terms of networks when they solve circulation issues. Many architects use topological representations of circulation to organise the architecture of their designs. Foreign Office Architects deployed a diagram of movement (see Figure 3.2) to inform the design process of the Yokohama ferry terminal. The form of the terminal building is generated from this circulation diagram that “aspires to eliminate the linear structure characteristic of piers, and the directionality of the circulation” (Jodidio 2001, p. 220-221), highlighting topological relationships rather than showing exact topographical proximities. Perhaps the tendency to represent circulation using topological diagrams comes from engineers, who prefer these for the sake of clarity. For example, the famous topological map of the London Underground (see Figure 3.3) was originally drafted by an electrical engineer, Harry Beck (Hadlaw 2003).


Figure 3.3: London Underground map. Source: (Beck 2010)

The process of turning a diagram into an architectural design proposal is illustrated by Maxwan Architects. The diagrams of 50 bridges in Leidsche Rijn look suspiciously similar to Alexander’s diagram of traffic flows (compare Figure 3.4 and Figure 3.1) because they follow a similar logic in which circulation responds to local needs, and the width of paths represents the intensity of expected flows. What is fascinating about Maxwan’s work is that these diagrams are then quite directly mapped to the actual physical shape – the constructive diagram becomes a constructed one.

Figure 3.4: Bridges in Leidsche Rijn by Maxwan Architects. Source: (Maxwan no date)


Planar network diagrams depicting cities and street networks have become a standard in architectural and urban design practice. Compared to the abundance of city and street diagrams, the abstract representation of circulation in buildings is still a relatively neglected method of work. Such a bias towards flat 2D representations suggests that the nature of circulation in buildings is harder to conceive and visualise via diagrams. However, diagramming techniques have developed further from the early modernist bubble diagrams, and new computational tools have made it possible to construct and explore more complex diagrams (Raisbeck 2007). A practice that has extensively deployed 3D circulation diagrams (see Figure 3.5) throughout the last decade is the Amsterdam-based UN Studio. In the case of complex architectural briefs, movement studies appear to be a cornerstone of UN Studio’s design proposals. In the Arnhem Central station project their flow diagrams examine pedestrian connections with the systems of infrastructure. These diagrams give architects a quick comprehensive insight and generate more effective connections between transport systems and other programmes (Berkel and Bos 1999).

Figure 3.5: Circulation diagram of the Louis Vuitton store. Source: (UNStudio no date)

Circulation is represented implicitly in many architectural sketches; the movement network is not directly extractable from, yet is present in, the sketch representation. However, it tends to be much more explicit in city diagrams. The reason is not immediately apparent – perhaps the patterns of movement at the urban level correlate with the built form to a greater extent than in buildings. Consequently, the means of circulation are directly manifested in the form of the built environment. Kevin Lynch, theorising about a good city form, lists the most common geometrical city patterns – grid, radio-centric and capillary (Lynch 1981, p. 424-425). These patterns refer to the composition of urban blocks as well as to the street network.

With the advances in computer technology in the 1960s and 1970s, there was a wave of researchers trying to quantitatively explore circulation in buildings (e.g. Tabor 1971; Willoughby 1975). Circulation was thought to be suitable for computational analysis and optimisation. Much in the vein of systems theory and the modernist architectural movement, several applications were devised to develop optimal layout plans and to minimise travel costs between activities (Tabor 1971). These applications often required generalised floor plans and led to standardised views on circulation-based building typologies. Willoughby (1975), for example, divided office layouts according to circulation and the spatial arrangement of activities into five distinct layout typologies: slabs, courts, crosses, fish-bones, and open plans. Despite the mass of quantitative research carried out, the simplistic top-down computer models and the complexity involved in designing satisfactory building layouts prevented this approach from breaking through into mainstream architectural practice.

The taxonomy of circulation patterns is nowadays best established in highway engineering, and is also being adopted by the urban design community. Marshall (2005, p. 45-67) uses the grade-separated system and classifies streets and roads as freeways, expressways, arterials, collectors or local streets. Some grade-separated systems give a finer grain, dividing arterials, for example, into throughways, boulevards and avenues (Bochner and Dock 2003). The grade-separated classification system defines the possible hierarchy of a street network.
In a good circulation system, as Lynch (1981) points out, local streets feed into arterials, which feed into expressways. Such hierarchies match the assumed traffic flows, and have also been observed to develop naturally in unplanned settlements (Lynch 1981). A representation of the grade-separated system is not very different from what Alexander describes as a requirement diagram (Alexander 1964), where circulation elements are defined solely by their capacity to handle flows of a certain magnitude. Besides the hierarchical classification, Marshall (2005, p. 86-87) also discriminates between geometry- and topology-based representation systems.

Altogether, he distinguishes between three different modes of representation (see Figure 3.6) – constitutional (hierarchical), configurational (topological) and compositional (geometrical). If a constitutional representation is an abstraction from configuration, the latter is an abstraction of composition (Marshall 2005, p. 86), and the composition is a rather direct representation of the actual geometrical street layout. Whereas the constitutional diagram resembles Alexander’s requirement diagram, the configuration diagram would translate as the form diagram. Combined together, the configuration and constitution diagrams produce a constructive diagram. This constructive diagram, in turn, when adapted to a specific site, is the generator of the compositional road layout.

Figure 3.6: From left to right: composition, configuration and constitution of street networks. Source: (Marshall 2005, p. 86)

The topological view of circulation networks supports a great deal of scientific studies that are collected together under the graph theory umbrella. In graph theory, movement and flow patterns are reduced to the most basic and elemental topological form – graph networks (Haggett and Chorley 1969). A graph network is made of lines and points, which are sometimes also referred to as ‘edges’ or ‘links’, and ‘vertices’ or ‘nodes’, respectively. Batty claims that the tendency to articulate urban form using graph-theoretic principles has a long tradition (Batty 2004), and particularly points out space syntax research and Philip Steadman’s topological studies of different building typologies.
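The edge-and-vertex vocabulary above translates directly into code. As a minimal sketch (the space names are hypothetical), a circulation network can be stored as an adjacency list, from which a basic graph-theoretical measure such as node degree can be read off:

```python
from collections import defaultdict

def build_graph(links):
    """Store a circulation network as an adjacency list: node -> neighbours."""
    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)  # undirected: movement is possible in both directions
    return graph

# Hypothetical layout: a hall connecting two rooms and a stair
network = build_graph([("hall", "room1"), ("hall", "room2"), ("hall", "stair")])
degrees = {node: len(nbrs) for node, nbrs in network.items()}
```

The degree of a node is the simplest indicator of its role in the network – here the hall, with degree three, is the layout's connective hub.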


There is an interesting connection between the graph-theoretical view of movement networks and Gibson’s (1979) ecological approach to spatial perception. In graph theory, circulation is represented as a network consisting of lines and nodes. Gibson thinks of movement in a similar way – the lines of locomotion link up the points of observation. The medium, as Gibson describes the environment, is defined by this network. This analogy shows that the graph-theoretical approach is not necessarily just an abstract mathematical model of space, but aligns with Gibson’s psychological view of how people perceive space.

3.3 Topological circulation networks

The topological view of networks is central to spatial analysis in geographical science (Batty 2004). Haggett and Chorley (1969) have used graph-theoretical methods to analyse and categorise spatial systems in geography, and have built a solid base for any discipline dealing with spatial data and flows. Naturally, their work serves well for the purpose of classifying circulation patterns in architecture and urban design.

Figure 3.7: Topological classification of networks. Source: Haggett and Chorley (1969, p. 7)

Haggett and Chorley (1969, p. 3) divide networks into three main topological classes – branching nets, circuit nets and barrier nets (see also Figure 3.7). Branching networks, the simplest class of topological nets, are distinguished by their hierarchical tree-like structure. A tree network can take an infinite number of geometrical forms even if the topology of the network remains the same. As opposed to branching nets, circuit networks feature closed loops or circuits. With the same number of nodes in two circuit networks, the number of links may vary, leading to multiple topologies. Barrier networks are intrinsically different from the two previous network classes – these nets consist of links that block or resist the flow instead of channelling it.

Topological circulation networks in architecture and urban design can be classified in a way similar to Haggett and Chorley’s. In his well-known essay ‘A City is not a Tree’, Alexander (1965) distinguishes between two essentially different spatial organisations – a semi-lattice and a tree. The organisation is a tree when urban elements work together and are nested, but do not overlap. In a tree-like structure, units are connected to other units through the medium of that unit as a whole. According to Alexander, many examples of tree-like cities have been proposed and built throughout history (see Figure 3.8), from Roman army camps to Chandigarh by Le Corbusier and Brasília by Lucio Costa. Other examples include the Tokyo plan by Kenzo Tange, a plan for Mesa City by Paolo Soleri, and a garden city in Maryland by Clarence Stein.
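Whether a connected network is a branching (tree) net or a circuit net can be checked mechanically: the cyclomatic number μ = e − v + 1 (links minus nodes plus one) counts the independent circuits, and a tree is exactly the case μ = 0. A small sketch, not drawn from the cited sources:

```python
def classify(num_nodes, links):
    """Classify a connected net by its cyclomatic number mu = e - v + 1.

    mu counts independent circuits: mu == 0 means a tree (branching net),
    mu > 0 means at least one closed loop (circuit net).
    """
    mu = len(links) - num_nodes + 1
    return "branching" if mu == 0 else "circuit"

# Four nodes in a chain form a tree; one extra link closes a circuit
chain = [(0, 1), (1, 2), (2, 3)]
loop = chain + [(3, 0)]
```

Every additional link beyond v − 1 in a connected network necessarily closes another circuit, which is why trees are the sparsest connected nets.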

Figure 3.8: Tree-cities: Chandigarh, Brasília, Greenbelt in Maryland, plan of Tokyo. Source: (RUDI no date)

Although the tree structure may seem appealing for its clarity, the reality of social structures in a contemporary settlement, as Alexander points out, is a heavily overlapping one – systems of acquaintances and friendships in a community are semi-lattices, not trees. Unplanned urban environments seem to follow this pattern naturally, evolving semi-lattice structures without master plans. Artificially implemented tree-like urban organisation leads to low connectivity and permeability in the environment and high segregation between neighbourhoods and groups of inhabitants, and is often associated with several social problems. Despite the obvious criticism, tree-like urban patterns are widespread in European and North American cities built during the heights of urban sprawl in the 20th century. Only recently have the problems with this highly hierarchical organisation been widely acknowledged (Rogers 1999). Whereas tree-like organisation is commonly considered unwelcome in contemporary urban design practice, the same cannot be said about buildings. Tree-like pedestrian circulation networks are not only acceptable but even encouraged in some building typologies. The tree structure offers a higher degree of control in airports and train stations, schools and hospitals; a classical apartment house also tends to be a tree.

3.4 Optimal designs

Circulation networks lend themselves relatively well to quantitative analysis. The work of Haggett and Chorley (1969) shows that many network parameters can be easily calculated. The network density parameter, for example, can be evaluated using a few different methods – by calculating the number, or some characteristics, of network elements per area unit. Other geometrical characteristics are more difficult to capture in any quantitative fashion. The shape of a network, for example, appears to be a very difficult feature to convey by numbers.

Several network parameters can be taken into account when optimising a circulation layout. As different problems have different optimal solutions, optimisation is seldom an objective procedure. The main question here is: what is the network optimised for? Mitchell (2006) points out that the total network cost involves the fixed costs of building it and the interactive costs of using it. Interactive costs depend on the distance of trips and the transportation volume. Haggett and Chorley (1969, p. 111-118) express essentially the same principle – that networks can be optimised for build cost or travelling costs: total cost over time equals user costs in that time plus build cost.
The efficiency of networks can be calculated from the average distance travelled within an area boundary. If the network links have different capacities, efficiency can also be calculated from the average time a trip takes. Haggett and Chorley (1969, p. 126-130) assert that any real-world efficiency measures are taken under complex assumptions, and there is no single solution to minimum distance networks.
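The average-distance measure of efficiency can be illustrated with a short sketch. This is an illustration under simplifying assumptions that the cited authors do not make – unit-length links and equal trip demand between all node pairs:

```python
from collections import deque

def mean_trip_length(graph):
    """Average shortest-path length over all ordered node pairs (unit links)."""
    nodes = list(graph)
    total = pairs = 0
    for source in nodes:
        # Breadth-first search gives shortest hop-counts from `source`
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for target in nodes:
            if target != source:
                total += dist[target]
                pairs += 1
    return total / pairs

# A square block layout, then the same layout with one extra diagonal link
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
braced = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
```

For the plain square the average trip is 4/3 links; the added diagonal shortens it to 7/6 at the price of one extra link to build – the build-cost versus user-cost trade-off in miniature.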

Circulation networks can be designed to optimise distance in many ways. The travelling salesman circuit, for example, is a minimal unbroken chain that connects all the nodes in the network (Haggett, Cliff and Frey 1977, p. 76). A minimal spanning tree is the shortest network joining a collection of points so that any point can be reached from any other point (Haggett, Cliff and Frey 1977, p. 76). Steiner trees, a class of minimal spanning trees, are widely used to inform the design of real-world circulation structures such as highways and oil pipelines (Herring 2004).

The travelling salesman problem and the minimal spanning tree problem are amongst several well-known problems in network theory that are easy to state and can be easily solved by trivial methods in theoretical situations, but become intractable in real-world situations. The difficulties arise from the exponentially explosive nature of combinatorial mathematics (Haggett, Cliff and Frey 1977, p. 77-79). While simple cases of optimal networks can be easily computed, any network with a larger number of nodes is hardly ever the optimal one. To design a minimal distance network, heuristic algorithms are preferable. The Steiner tree problem, for instance, is best solved heuristically, because exact algorithms take exponentially more time as the networks grow larger (Herring 2004).

The task of architecture and urban design is not necessarily an optimisation task. As Kevin Lynch (1981, p. 146) points out, movement can be a source of enjoyment that becomes a design intent instead. However, minimal networks can certainly be applied to buildings where the speed and ease of getting from one place to another is of essential importance, or where the cost of building the movement infrastructure is crucial.
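A minimal spanning tree of the kind described above can be grown greedily. The sketch below uses Prim's algorithm on four hypothetical points (the coordinates are arbitrary); unlike a Steiner tree, it connects only the given points and introduces no new junctions.

```python
import math

def minimal_spanning_tree(points):
    """Prim's algorithm: grow the shortest network joining all points."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    connected = {0}  # start the tree from an arbitrary point
    edges = []
    while len(connected) < len(points):
        # Pick the cheapest link from the tree to an unconnected point
        i, j = min(
            ((i, j) for i in connected
             for j in range(len(points)) if j not in connected),
            key=lambda ij: dist(points[ij[0]], points[ij[1]]),
        )
        connected.add(j)
        edges.append((i, j))
    return edges, sum(dist(points[i], points[j]) for i, j in edges)

# Four hypothetical building entrances at the corners of a unit square
entrances = [(0, 0), (1, 0), (0, 1), (1, 1)]
tree_edges, tree_length = minimal_spanning_tree(entrances)
```

For these four corners the spanning tree is 3.0 units long; the optimal Steiner tree, with two added junction points, would measure 1 + √3 ≈ 2.73 – the kind of saving that makes heuristic Steiner solvers attractive for pipelines and highways.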

3.5 Computational models

Algorithmic solutions to spatial problems have generated enthusiasm in many architectural theorists and fascinated practitioners for decades. With the ever-increasing availability of computing power, more and more computational solutions can be, and actually are, tested, opening up new possibilities for architects. Generative models shed light on network formation processes, offering an alternative view to descriptive models. Descriptive steady-state models are not helpful for understanding dynamic networks, since networks are usually formed by an iterative step-by-step process (Haggett and Chorley 1969, p. 285). Lynch (1981) adds that the computer makes it possible to explore a view of the city different from an intuitive and descriptive analysis. The computer can help us model the city as “the cumulative product of the repeated decisions of many persons and agencies - actors who have diverse goals and resources” (Lynch 1981, p. 336). When the actors in such models are made to represent pedestrian activity, it is not difficult to see how the model becomes a useful tool for solving circulation issues. This kind of computational modelling is not only useful at the urban scale, but is equally applicable to architectural design problems.

Circulation analysis is usually carried out early in the design process, analysing existing spatial arrangements, or later with respect to a particular problem such as evacuation (Koutamanis, Leusen and Mitossi 2001). This analysis relates to way-finding, where routes are searched for on the basis of some normative criteria. Despite the success of analytical models, pedestrian circulation in architectural computing is a relatively neglected area. Koutamanis et al. (2001) outline four reasons for this:

• the complexity of, and lack of data on, human interactions with the building
• the complexity of computer simulations
• weak briefs and the reductive logic of building codes
• the lack of integration with design synthesis

With respect to the purpose of this thesis, the last reason is worthy of special attention. The case studies (see Chapter 7 and Chapter 8) present a few methods of integrating computational models into the architectural design process. Both of these studies are also tested in the professional context of architectural competitions. Computational models for finding suitable circulation layouts are often accompanied by automatic activity location procedures. Activity location, however, brings further complexity into the model. Activities in a building typically have various associations that may be asymmetric or ambiguous (Ireland 2008). Even if the associations are clearly mapped out in distance matrices between activities, a building with 20 activities may permute in 2.5 trillion ways (Tabor 1971, p. 47). In consequence, no final layout is the optimum – all generative methods are heuristic (Tabor 1971, p. 20).


Taxonomy

From the perspective of geography, Haggett and Chorley (1969) categorise network simulation models typologically. They further subdivide simulation models of branching networks into growth, random and capture models, and models of circuit networks into colonisation and interconnection models. While growth models typically start with un-eroded land and tend towards a static equilibrium, capture models start with an existing landscape or network structure. Random models are not generally useful as they give little insight into the evolution of the network. Colonisation models of circuit networks explore space outwards from source nodes, establishing new nodes; interconnection models, on the other hand, explore possible links between given nodes.

With respect to the computational logic involved, models for generating building layouts and circulation networks can broadly be divided into two main categories: additive and permutational methods (Tabor 1971, p. 1). Whereas additive techniques assemble activities piece by piece on an empty floor, permutational methods typically modify pre-processed building layouts. Additive methods involve three stages: creating the initial framework and establishing the boundaries, automatic location of activities, and manual modification of the output. Permutational methods have an additional stage – creating an initial building layout – between the first and the second step of additive methods. Models with automatic location of activities have an evolutionary nature: piecemeal improvements are sought to achieve circulatory efficiency (Tabor 1971, p. 56).

Network development models can also be categorised by sequencing techniques. Typical spatial network sequencing methods are node-connecting (travelling salesman problem solvers), space-filling (e.g. the Hilbert curve) and space-partitioning (e.g. Voronoi subdivision) methods.
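
The permutational approach described above can be illustrated with a short sketch. The following Python fragment is an illustrative reconstruction, not Tabor's actual procedure, and all names in it are hypothetical: it seeks piecemeal improvements by swapping the locations of pairs of activities whenever a swap reduces the total travel cost implied by a trip-frequency matrix and a distance matrix.

```python
import itertools

def travel_cost(layout, flows, dist):
    # layout[a] is the location index assigned to activity a;
    # cost = trips between every activity pair x distance between
    # the locations those activities occupy.
    n = len(layout)
    return sum(flows[a][b] * dist[layout[a]][layout[b]]
               for a in range(n) for b in range(n))

def improve_by_swaps(layout, flows, dist):
    """Permutational method: keep swapping the locations of two
    activities whenever the swap lowers total travel cost, until
    no single swap improves the layout (a local optimum)."""
    layout = list(layout)
    best = travel_cost(layout, flows, dist)
    improved = True
    while improved:
        improved = False
        for a, b in itertools.combinations(range(len(layout)), 2):
            layout[a], layout[b] = layout[b], layout[a]
            cost = travel_cost(layout, flows, dist)
            if cost < best:
                best, improved = cost, True
            else:  # the swap did not help - undo it
                layout[a], layout[b] = layout[b], layout[a]
    return layout, best
```

For four activities along a corridor (locations 0-3, distance measured in steps apart) with heavy traffic between activities 0 and 3, the search relocates the busy pair next to each other. Because only improving swaps are accepted, the procedure terminates in a local optimum – which is precisely why, as Tabor notes, such generative methods are heuristic rather than optimal.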
Berntson (1997) offers a botanist’s view of networks, classifying plant root models into developmental (centrifugal) and functional (centripetal) ordering sequences. A separate conceptual class of developmental models is formed of aggregation and individual-based models. A classic example of this group is the diffusion limited aggregation model (Batty 2005, p. 124-129).


Examples

The tractable and quantifiable nature of networks has resulted in an abundance of computational models for generating circulation systems. The vast number of solutions that have been proposed for solving minimum spanning tree and travelling salesman-type problems is beyond the interest of this thesis. It is perhaps only worth mentioning Adamatzky’s (2001, p. 105-170) approach to solving minimal route problems. He has shown that solutions to many such problems can be computed bottom-up in cellular automata collectives. The extensive exploration of heuristic models for automatic floor plan and circulation generation in the 1960s has been documented by Tabor (1971) – several modellers of that period engineered computational methods for factories, hospitals, educational buildings and offices. All these models broadly represent a mechanistic view of circulation and tend to leave out other spatial qualities. The generated results are usually diagrams that are intended to satisfy the circulation condition and do not take into account organisational, functional, environmental, geographical, structural, legal, or financial criteria (Tabor 1971). Somewhat more advanced are contemporary office layout generators that derive the floor plate shape from circulation (Shpuza and Peponis 2006). To guarantee that the final design matches given criteria, Shpuza (2006) chooses a preferred circulation system in advance, but also explores the potential of deriving circulation from the floor plate shape.

Marshall (2005, p. 222-228) shows how highway engineering rules can be plugged into the generative process. He suggests a computational approach of turning the constitutional rules of road hierarchy into local road configurations. The program “can generate a diversity of layout patterns which can themselves be adapted to local circumstances” (Marshall 2005, p. 227-228).
Akin to Marshall’s idea, several shape grammar based models have been developed by various authors. Pascal Mueller has participated in creating CityEngine, a procedural city modelling software that enables designers to grow street networks quickly. The generative method behind CityEngine is a shape grammar similar to context-sensitive L-systems (Parish and Mueller 2001). A similar approach has been used for modelling road patterns found in informal settlements in South Africa (Glass, Morkel and Bangay 2006).

Methods borrowed from AI research and evolutionary computing have often been used in generative modelling. For example, Zhang and Armstrong (2005) have introduced genetic algorithms to locate corridors in a 2D lattice of cells. Diffusion limited aggregation has been used for explaining growth processes of city networks (Batty 2005, p. 50-51) and for generating street networks (Nickerson 2008). A computational model for generating leaf venation patterns (Runions et al. 2005) is capable of producing circulation networks that share so many commonalities with street networks that it can be considered for architectural and urban design purposes. A few agent-based models have also been used for generating circulation systems. Goldstone and Roberts (2006) have proposed an agent-based model for studying and reproducing self-organised trail systems in groups of humans. Ireland (2008) has introduced an agent-based model that also aims to sort out the desired relationships between activities in a building. Another agent-based model, inspired by tunnelling ants, simulates the growth of networks (Buhl et al. 2006).
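
To give a flavour of how such bottom-up processes generate networks, the sketch below implements a toy version of diffusion limited aggregation. It is an illustrative reconstruction, not the models of Batty or Nickerson, and the grid size, particle count and walk rules are arbitrary choices: random walkers on a grid stick to a growing cluster on contact, and the cluster develops the characteristic branching, dendritic structure.

```python
import random

def dla(size=41, particles=120, seed=1):
    """Toy diffusion limited aggregation: random walkers on a grid
    stick to the growing cluster as soon as they touch it, producing
    a branching, tree-like pattern reminiscent of dendritic street
    growth. All parameters are illustrative."""
    random.seed(seed)
    cluster = {(size // 2, size // 2)}  # seed site at the centre
    for _ in range(particles):
        x, y = random.randrange(size), random.randrange(size)
        while True:
            # stick when 4-adjacent to the cluster
            if any((x + dx, y + dy) in cluster
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                cluster.add((x, y))
                break
            # otherwise take one random step, wrapping at the edges
            x = (x + random.choice((-1, 0, 1))) % size
            y = (y + random.choice((-1, 0, 1))) % size
    return cluster
```

Every cell attaches adjacent to an earlier one, so the result is a connected, tree-like structure grown entirely from local decisions – no global plan of the network exists anywhere in the program.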

3.6 Chapter summary

As a part of the literature survey, this chapter looked at the issues of circulation in architecture and urban design. It was established that the essence of circulation is to provide access to specialised spaces and the means to move through and around an environment. It was also shown that circulation in buildings and settlements shares many commonalities with circulation networks in nature. Therefore, it was argued that formation principles and classification methods of circulation can be borrowed from the spatial analysis of natural phenomena – from network analysis in geography. Network analysis is used in assessing the results of prototype and case study models later in this thesis.

Architects and urban designers often use diagrammatic representations in order to express circulation in buildings at the conceptual design stage. This validates the assumption that diagrams are indeed useful tools in developing spatial solutions. However, the diagrams presented in this chapter are fairly abstract and are not suitable for thorough analysis and evaluation. It is envisaged that if these diagrams were based on computational models, it would open up the opportunity to analyse and evaluate diagrammatic forms and would eventually help find better design solutions. Although several computational methods for network optimisation exist, there are only a few for generating circulation networks in the first place. The main objective of Chapters 6 to 8 is to propose several new generative models that fill this gap.

The following chapter surveys agent-based models and discusses the possibility of using such models for generating circulation networks. It scrutinises agent modelling techniques in the hope of extracting principles that can be reused for simulating network formation processes in nature. These principles can then be used for constructing generative models. Agent-based modelling is potentially a very powerful method for solving circulation problems because of its bottom-up nature – just like natural circulation systems, agent-based models consist of many interacting particles.


Chapter 4: Agent-based modelling and multi-agent systems: nature, origin and modern applications

The previous chapter introduced the generative approach for modelling circulation systems bottom-up. In this thesis, bottom-up modelling is seen as a method of achieving a global behaviour of the system by defining the individual behaviours of its components. Agent-based modelling is one such bottom-up method and has already proven useful in analysing the urban environment. There is also some evidence that agent modelling can be turned into a generative tool to assist architects in the design process. The nature of agent-based modelling seems to make it well suited to solving circulation problems – circulation is defined by accessibility and locomotion, and mobile agents are ideal for representing the movement of individuals in the context of the built environment.

Agent-based modelling (ABM) has rapidly gained popularity over the last few decades, resulting in a plethora of scientific studies across several disciplines. In fact, it has been so popular amongst scientists, scholars and practitioners of distributed computing that it is practically impossible to give an exhaustive yet concise overview of this relatively new computational paradigm. There are two main reasons why it is so difficult to compile a summary of agent-based models. Firstly, ABM is a cross-disciplinary paradigm and as such cannot be reviewed from the perspective of a single domain. Secondly, the terminology used in ABM has not been universally agreed upon – there is still a lot of confusion even when it comes to defining what an agent is. Therefore, with the specific focus on multi-agent systems, only a few selected definitions of agent are investigated in this chapter. This chapter begins by looking at the theoretical foundations and historical references of the ABM paradigm.
It then continues with some popular definitions of agent and explores the most common properties that have been assigned to agents. After scrutinising the behaviour of stereotypical agent models, the last section outlines the uses of ABM in different specialist fields, focusing particularly on architectural design applications.


4.1 Background

Concepts of agent and agency

Before looking at the exact scientific definitions of the term agent, it is worthwhile exploring the philosophical thoughts that surround it. Referring back to some key figures in system theory, behavioural science, biology, complexity theory and computer science of the last century, one can observe how ideas of decentralisation have developed. Traces of decentralised thinking can already be found in the era of classical Newtonian science. Resnick (1994, p. 7) points out that Adam Smith’s work on market control and economy from more than 200 years ago suggested a decentralist approach. Despite some early examples, this way of thinking has only recently become prevalent in many scientific discourses. Decentralised systems are best understood by modelling (Resnick 1999), and ABM is probably the most popular method for doing so.

To understand what the new paradigm is all about, one has to comprehend the notions of agent and agency. Two terms – entity and action – are often thought to compose the notion of agent. As Wan and Braspenning (1996) point out, neither of these terms can be decomposed into smaller notions; they can only be described via synonyms. Agency, a concept often used together with agents, provides a deeper insight – in order to understand agents, one can look at what a group of agents constitutes. Minsky (1988) describes agency as a set of smaller functions working in parallel. These functions, if combined together, form a higher-level agent. Thus, each sub-agent can be described, in relation to the agency, as its function. Minsky suggests that the relationship between agents and agency is similar to that between parts and whole. Whereas the agent is a holistic concept and does not lend itself to further decomposition, agency can be described in terms of sub-agents and their interrelationships. An agency is thus a system of agents, and as such is distinguished from its constituents by organisation (Skyttner 1996, p. 36).
Besides early thinkers in economics, the decentralised approach also found supporters in biology (Resnick 1994). At the beginning of the 20th century, Wheeler – a biologist studying ant colonies – came to the conclusion that an organism is a dynamic agency acting in an unstable environment (Wheeler 1911, p. 308). He could not give a fully encompassing definition because “the organism is neither a thing nor a concept, but a continual flux or process, and hence forever changing and never completed” (Wheeler 1911, p. 308). However, he was able to describe the organism via its parts that, in the case of ant colonies, are organisms themselves. Much later in the century, Maturana and Varela (1980) had to coin a new term – autopoiesis – to characterise the nature of living systems. In the theory of autopoiesis they see living systems together with their environment, structurally coupled via the sensory-motor loop.

The structure of an agency can be considerably simpler than that of its parts. Wheeler (1911) noticed that ant colonies resemble much lower-level organisms than ants themselves. Niklas Luhmann explains that, moving up to a higher-level system, a reduction of complexity occurs because its elements are unified (Luhmann 1984, p. 27). As opposed to this reduction, complexity can also be increased through selective conditioning – by creating connections between these elements. Similarly to Wheeler’s view on organisms, several modern complexity theorists see living systems as composed of sub-agents. Kelly (1994, p. 50), for example, insists that minds and bodies, blurring inseparably across each other’s boundaries, are made of swarms of sublevel things. Organisms, in his sense, are multi-agent systems; organisms are agencies. Holland (1998) goes one step further and claims that the same can be said about systems at many levels: ecosystems, societies, organisms. In his opinion, individual entities and the connections between entities in these systems can be modelled in computer simulations. “These individuals”, he adds, “go by the name of agents, and the models are called agent-based models” (Holland 1998, p. 117).

Agents and the environment

System theory treats the system and its environment holistically, but also draws the boundary between them. Luhmann (1984, p. 29) uses the notion of system differentiation to describe the repetition of the difference between system and environment. System differentiation highlights the hierarchical nature of nested systems. One of the most important requirements of system differentiation is the boundary definition – how a system is identified in its environment. Batty and Torrens (2005) point out that the interactions within the system are denser than those between the system and its environment. Recognising the difference in density thus allows one to identify the boundaries.

The ways systems are coupled with the environment, and how the interaction with the environment is organised, are common subjects of study in system science (Skyttner 1996, p. 3). In the theory of autopoiesis, Maturana and Varela (1980, p. 9) stress that a living system cannot be observed independently of its environment. Structural coupling can create a new system that belongs to a different domain than its subcomponents do; the coupled systems retain their identity. System coupling presupposes an entity’s ability to learn how to adapt its motor outputs to its sensory input. In order to do so, the entity needs to possess some kind of internal representation of its environment (Merleau-Ponty 1979, p. 128).

Programmable agents

The history of programmable agents dates back to the 1950s, when Walter Grey, a neurophysiologist and roboticist, built the first autonomous robots, called Elmer and Elsie (Carranza and Coates 2000). Although conceptually very simple, Machina speculatrix, as Grey dubbed his invention, acted unpredictably (Grey 1951). This illustrated the fact that simple sensory-motor systems, placed within a dynamic environment, can display complex, life-like behaviour. Grey continued his experiments by building Machina docilis – a robot featuring an internal memory element that enabled it to perform a simple learning task (Grey 1951). Machina docilis was presumably the first artificial agent with memory and learning ability.

In the 1960s, following Grey’s work, Valentino Braitenberg continued developing simple reactive machines. The so-called Braitenberg vehicles became famous for displaying complex, uncanny behaviour (Arbib 2003). Later, in the 1980s, Braitenberg published a book with a series of designs for robots that, from the perspective of an observer, behaved as if they had ‘taste’, expressing ‘fear’ and ‘aggression’ (Braitenberg 1986). In contrast to Braitenberg vehicles, Rodney Brooks suggested a different robot modelling approach that he called subsumption architecture (Brooks 1991a). Brooks proposed decomposing a robot’s architecture so that the robot’s internal mechanisms are organised into loosely interacting layers that function in parallel. In the subsumption architecture, the sensory input is mapped quite directly to the motor output, yielding a tight system-environment coupling. This approach was one of the first steps towards a new paradigm – embodied cognitive science (Pfeifer and Scheier 2001).

In parallel to the first robotic agents, the cellular automaton (CA) – another concept that had a great influence on modern agent-based models – was developed. CA was originally introduced by John von Neumann (1951) in the 1940s to explain neural processes in the brain. Von Neumann recognised that, although his automata model was far less complex than natural organisms, studying organisms helped to create better automata models, and artificial automata were useful in order to better understand natural processes. One of the best definitions of CA is given by Adamatzky:

“Cellular automaton is a local regular graph, usually an orthogonal lattice, each node of which takes discrete states and updates its states in a discrete time; all nodes of the graph update their states in parallel using the same state transition rule.” (Adamatzky 2001, p. 11)
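
This definition can be made concrete in a few lines of code. The sketch below is an illustrative implementation with hypothetical names: it applies the transition rule of Conway's Game of Life synchronously to every node of an orthogonal lattice, representing live cells as a set of coordinates.

```python
from collections import Counter

def life_step(live):
    """One synchronous update: every cell of the lattice applies the
    same transition rule in parallel, exactly as in the definition
    above. `live` is the set of (x, y) coordinates of live cells."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 live neighbours; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A 'glider' pattern reappears displaced by (1, 1) after four steps.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
```

Note that the rule is purely local – each cell sees only its eight neighbours – yet patterns such as the glider persist and translate across the lattice, which is exactly the agent-like behaviour discussed below.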

Von Neumann’s theory was tested by John Conway, who around 1970 created one of the first programmed examples of CA – the well-known Game of Life (Adamatzky 2001, p. 185). With simple transition rules switching cells on and off, Game of Life can produce surprisingly complex and persistent dynamic patterns. The self-replicating patterns of ‘gliders’ and ‘spaceships’ – agents that can be observed only together with their environment (Adamatzky 2001, p. 185) – come very close to the notion of autopoiesis. As pointed out by Rodrigues and Raper (1999), the concept of CA is very similar to that of agent. On the one hand, cells in automata can be seen as immobile agents in static networks reacting to stimuli in their immediate neighbourhood; on the other hand, the persistent patterns of CA can be treated as self-replicating agencies living in the cellular space. Therefore, one can classify CA as a subtype of agent-based models. In Chapter 5, this idea is explored in more depth.

Although CA models have been exhaustively explored in numerous models and analytical studies, Batty (2005, p. 76) argues that most of the applications to date have been educational. Recently, coupling these models with mobile agents has become very common in dynamic systems studies (e.g. Dijkstra and Timmermans 2002; Parker et al. 2003; Batty 2005). Borrowing from Seymour Papert’s turtle-based Logo language (Johnson 2002), Mitchel Resnick invented a simple yet powerful application combining mobile agents (turtles) and CA – StarLogo (Resnick 1994). StarLogo has spawned a whole generation of scientists testing their ideas by modelling dynamic systems in computer simulations. NetLogo, the successor of StarLogo, has since been used to build models in a variety of disciplines including biological systems (Wilensky 2001), mathematics (Wilensky 1998), social science (Wilensky 2004), and chemistry and physics (Wilensky 2002).

In 1987, Craig Reynolds introduced the first model of flocking agents – boids (Reynolds 1987). Using simple local rules to steer individual boids, Reynolds was able to show that flocking is not guided by a central leader, but emerges from the acts of individual boids. The artificial flock successfully simulated the complex behaviour of natural flocks, with individual boids simply reacting to their immediate neighbours by following three rules: cohesion, separation and alignment. Swarm programming continued to develop rapidly in the 1990s, when several phenomena observed in natural insect colonies were simulated in computer models. Dorigo, Maniezzo and Colorni developed a metaheuristic optimisation method dubbed ant colony optimisation (Gutjahr 2008); an overview of ant colony optimisation is given in Chapter 8. Subsequently, the concept of indirect communication in agent colonies was thoroughly investigated (Bonabeau, Dorigo and Theraulaz 1999; Buhl et al. 2006) and tested in simulation models (Deneubourg, Theraulaz and Beckers 1992; Theraulaz and Bonabeau 1995). Stigmergy – a form of indirect communication – is further explored in section 4.4.
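
Reynolds' three rules translate directly into code. The sketch below is an illustrative reconstruction, not Reynolds' original implementation, and the rule weights and crowding radius are arbitrary: it computes one steering decision for a single boid from the positions and velocities of its neighbours.

```python
import math

def steer(boid, neighbours):
    """One steering step from the three local flocking rules:
    cohesion (head for the neighbours' centre), separation (move
    away from neighbours that are too close) and alignment (match
    the neighbours' average velocity). A boid is ((x, y), (vx, vy))."""
    (x, y), (vx, vy) = boid
    if not neighbours:                   # no neighbours: keep gliding
        return ((x + vx, y + vy), (vx, vy))
    n = len(neighbours)
    cx = sum(p[0] for p, _ in neighbours) / n      # neighbours' centre
    cy = sum(p[1] for p, _ in neighbours) / n
    avx = sum(v[0] for _, v in neighbours) / n     # average velocity
    avy = sum(v[1] for _, v in neighbours) / n
    # separation only counts neighbours inside a crowding radius of 1.0
    sx = sum(x - p[0] for p, _ in neighbours if math.dist((x, y), p) < 1.0)
    sy = sum(y - p[1] for p, _ in neighbours if math.dist((x, y), p) < 1.0)
    vx += 0.01 * (cx - x) + 0.05 * sx + 0.1 * (avx - vx)
    vy += 0.01 * (cy - y) + 0.05 * sy + 0.1 * (avy - vy)
    return ((x + vx, y + vy), (vx, vy))
```

Applied to every boid each time step, these purely local adjustments are all that is needed for coherent flocking to emerge, with no leader anywhere in the program.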

Benefits and limitations of ABM

Although ABM has generated a great deal of attention among scientists in various disciplines, business applications of multi-agent systems are still extremely rare. One of the reasons must be the complexity involved in designing these systems. Bonabeau, Dorigo and Theraulaz (1999, p. 271) argue that one of the greatest challenges in programming multi-agent systems is to make simple agents solve user-defined tasks. Despite being notoriously difficult to set up, distributed computation models are sometimes the best approach, especially in cases where the problem being solved is itself distributed. Agent-based models can be used in cases where a centralised approach is impossible – where the information involved is gathered across different domains or over a large area, or the amount of data is vast (Huhns and Stephens 1999). In some cases, the speed of distributed computing is substantially greater than that of linear processing (Bonabeau, Dorigo and Theraulaz 1999, p. 26).

Multi-agent systems have several advantages compared to traditional top-down and linear techniques. According to Castle and Crooks (2006), there are three incentives for a modeller to use these techniques – agent-based models capture emergent phenomena, provide natural descriptions of systems and are flexible. In the context of geospatial modelling, Castle and Crooks see the flexibility of agent-based models in their applicability to different modelling environments and their good response to different control parameter configurations. Flexibility can also mean that the multi-agent system responds collectively to external perturbations without agents being explicitly reprogrammed (Bonabeau, Dorigo and Theraulaz 1999, p. 19). The colony can cope with a dynamic environment much better than a single agent. This also leads to greater robustness – the failure of an individual agent does not affect the whole colony.

One of the major drawbacks of running multi-agent models in computer simulations is the sheer amount of computation needed. Larger colonies usually solve the given problems more accurately but also require more resources, as each agent has to perceive and act independently. Agent-based models are also very sensitive to configuration parameters (Brimicombe and Li 2008), and getting these parameters right can be very time-consuming (Castle and Crooks 2006). Although an agent-based model can provide deep insights into complex systems, the model is only as useful as the purpose for which it was constructed in the first place (Castle and Crooks 2006).
Purely speculative models are sometimes created without a particular purpose in mind and fail to contribute to any professional or academic discipline or to the field of ABM generally. To avoid such a case, one has to be clear about the purpose of the model.


4.2 Definitions of agent

As mentioned earlier in this chapter, many authors writing about ABM stress that no official definition of agent exists (Dijkstra and Timmermans 2002; Silva, Seixas and Farias 2005). Wan and Braspenning (1996) argue that an agent cannot be decomposed into simpler notions; it can only be described via synonyms. They add that no remotely mature theory of agency or agenthood exists, and that multi-agent systems lack coherent theoretical foundations too. Despite such criticism, several authors have put forward formal definitions. Probably the most generic definition – the agent is one who acts – is given by Wan and Braspenning (1996). Russell and Norvig (1995) describe an agent as something that perceives and acts. A somewhat lengthier definition is given by Wooldridge:

“An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives.”

(Wooldridge 1999, p. 5)
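
The definitions above share a minimal computational core: a perceive-act loop coupling an agent to its environment. The sketch below is a hypothetical illustration of that core (the class and method names are this author's, not from any cited source), using a thermostat as a classic example of a simple reactive agent.

```python
class Agent:
    """Minimal skeleton behind the definitions quoted above: an
    agent is something that perceives (receives a percept) and acts."""
    def act(self, percept):
        raise NotImplementedError

class Thermostat(Agent):
    # A purely reactive agent: its action is a direct function of
    # the current percept, with no internal world model.
    def __init__(self, target):
        self.target = target
    def act(self, temperature):
        return "heat on" if temperature < self.target else "heat off"

class Room:
    # Toy environment: the temperature drifts down unless heated.
    def __init__(self, temperature):
        self.temperature = temperature
    def sense(self):
        return self.temperature
    def react(self, action):
        self.temperature += 1.0 if action == "heat on" else -0.5
        return self.temperature

def run(agent, environment, steps):
    """Couple agent and environment: each action feeds back into the
    environment, which produces the next percept."""
    percept = environment.sense()
    for _ in range(steps):
        percept = environment.react(agent.act(percept))
    return percept
```

The loop in `run` makes the situatedness in Wooldridge's definition explicit: the agent never manipulates the environment directly, it only emits actions and receives the resulting percepts.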

Blumberg and Galyean (1995) see an autonomous agent as a software system in a complex and dynamic environment, trying to achieve some set goals. Regardless of such a variety of formal definitions, certain characteristics attributed to agents and multi-agent systems keep appearing throughout the literature. These frequently assigned attributes are autonomy, adaptivity, intelligence, reproduction, self-sufficiency, embodiment, situatedness, encapsulation, reactivity, pro-activeness, social ability, and goal-directedness. The rest of this section reviews these different qualities as found in the literature.

Autonomy is the characteristic most often attributed to agents. Whereas Pfeifer and Scheier (2001, p. 25-27) see autonomy as freedom from external control, Blumberg and Galyean (1995) do not see a conflict between autonomy and directivity – agents can still be autonomous while accepting directions that influence their behaviour at multiple levels. Autonomy, an essential property of agents, relates strongly to adaptivity (Wan and Braspenning 1996). Adaptivity is the agent’s ability to sustain its entity and identity in a dynamic environment – the capacity to survive in changing conditions. Ross Ashby used the term homeostasis to explain the organism’s ability to maintain its internal states (Pfeifer and Scheier 2001, p. 92). Natural agents preserve their entity by means of autopoiesis; non-natural agents have to mimic this to achieve adaptivity (Wan and Braspenning 1996). Probably the closest to the essence of autopoiesis among artificial agents come the models with self-regenerating cellular agencies (Butler et al. 2001; Støy 2004).

Intelligence is another quality frequently ascribed to agents and agencies (e.g. Izquierdo-Torres 2004; Nembrini et al. 2005). Pfeifer and Scheier insist that no generally accepted definition of intelligence exists (Pfeifer and Scheier 2001) and that, in many cases, intelligence is attributed to agents by the observer. Pfeifer and Scheier add that intelligence must be seen with respect to the agents’ habitat. Skyttner claims that intelligence is a property of living systems that cannot be attributed to artificial agents (Skyttner 1996, p. 185). Brooks, on the other hand, sees intelligence as an emergent property of certain complex systems (Brooks 1991b).

Self-sufficiency is an agent’s ability to sustain itself over extended periods of time by maintaining its energy supply – an essential property of complete agents (Pfeifer and Scheier 2001, p. 86-88). Most artificial agents are dependent on their creators; all agents are dependent on their environment. Agents also need to be embodied in order to interact with their environment. The body defines the agent’s sensory configuration and potential means of interaction. According to Brooks, only embodied systems can build their Merkwelt (internal perceptual world) via physical grounding (Brooks 1991b). A characteristic related to embodiment is situatedness. Wooldridge (1999) sees agents as computer systems situated in some environment. In system theory, a subject can only be studied together with its environment (Skyttner 1996, p. 3).
Embodied agents cannot exist without the environment; they are always embedded in their surroundings. Agents often feature encapsulation – a concept borrowed from object-oriented programming and closely related to embodiment, in which internal methods and properties are hidden behind an interface. Objects are defined as entities that encapsulate and process data and communicate through message passing (Wooldridge 1999). Wooldridge adds that, in contrast to traditional object-oriented programming where objects are manipulated from outside, in agent-oriented programming decisions lie with the agent. Agents cannot be told what to do because they are autonomous.

Some agents are pro-active – they are directed towards certain goals. According to Minsky, a system’s goal-driven behaviour is produced by the Difference-Engine; agents are pushed into action by various differences between the desired situation and the actual situation (Minsky 1988, p. 78). Wooldridge explains that agents are “able to exhibit goal-directed behaviour by taking the initiative in order to satisfy their design objectives” (Wooldridge 1999, p. 8). In agent-based models, there are two types of goals: the local – an agent’s own selfish goal – and the global – a system designer’s goal (Shoham and Leyton-Brown 2009, p. 1). As opposed to assigning desires and goals to agents, Brooks claims that “intelligent behaviour can be generated without having explicit reasoning systems present” (Brooks 1991b, p. 23).

Perceptive agents can also be supplied with a priori knowledge about the environment. These agents usually possess a representation of the world – a ‘mental map’ of their environment (Castle and Crooks 2006). According to Brooks (1991a), the failure of classical Artificial Intelligence (AI) research occurred because people tried to build exhaustive internal representations in order to create intelligent systems. As he points out, the human ‘Merkwelt’ – our internal representation system – is not necessarily a suitable representation for artificial agents. Instead, Brooks favoured the subsumption architecture, in which he used dynamically generated internal representations (Brooks 1991b). The Situated Action paradigm generally follows Brooks, but also benefits from some symbolic representations (Wan and Braspenning 1996). As opposed to pro-active agents, reactive agents act upon stimuli received from the dynamic environment. Many agent modellers combine reactive behaviour with pro-active behaviour (Brooks 1991b; Wooldridge 1999).
Reactive agents can solve tasks by being naturally opportunistic, responding to changes in the environment when promising circumstances present themselves (Brooks 1991b). Intelligent agents can have social abilities – they interact with other agents (and possibly humans) in order to satisfy their design objectives (Wooldridge 1999). Agents can acquire information about the world by querying other agents and the environment in their immediate neighbourhood, searching for specific attributes (Castle and Crooks 2006). In addition to direct communication, agents can also communicate via their environment.

Mobility is an important property of agents. Castle and Crooks claim that mobility is a particularly useful feature of agents in spatial simulations (Castle and Crooks 2006). However, the ability to move is not critical in order to define agents – it is plausible to see cells in CA models as immobile agents (Rodrigues and Raper 1999).

4.3 Taxonomy of ABM

Types of agents

In the previous section, some properties commonly assigned to agents were investigated. It is quite natural to categorise agents by these properties, using terms like ‘reactive agents’, ‘intelligent agents’, ‘mobile agents’, ‘embodied agents’, etc. Other, more generic categories have been suggested to distinguish between different types of agents. Again, there is no globally accepted taxonomy of agents used by all the players in the field. The variety of agents used in experiments is truly astonishing; the majority of authors, however, use ambiguous taxonomies.

Complete agents form a class of agents that are autonomous, self-sufficient, embodied, and situated (Pfeifer and Scheier 2001). Inspired by natural agents, animals and humans, Pfeifer and Scheier describe complete agents as entities that are capable of surviving in the real world. In order to sustain themselves, complete agents have to maintain their energy supply and behave autonomously in an environment without human intervention. All biological agents are complete agents, and some artificial agents can be complete too. As Pfeifer and Scheier argue (2001, p. 185), real-world robotic agents can be constructed in a way that fulfils the criteria for completeness. Although many robots meet several of the mentioned requirements, none of the robots they present is truly complete. They argue that artificial agents are built to achieve a particular task, to study general principles of intelligent behaviour, or to model certain aspects of natural systems. Besides real-world robots, another subclass of artificial agents – simulated agents – live in computer models. As it is theoretically possible to simulate any physical process in the computer, any physical robot can be simulated (Pfeifer and Scheier 2001). However, as the authors add, a physically realistic simulation is extremely difficult to develop and requires enormous computational power.


Since autonomous agents are always situated, they can be distinguished by their environment. Several authors have used the term 'spatial agents' (Rodrigues and Raper 1999) or 'space-agents' (Adamatzky 2001) to set them apart from non-spatial ones. Rodrigues and Raper define a spatial agent as “an autonomous agent that can reason over representations of space” (Rodrigues and Raper 1999, p. 4); spatial agents make spatial concepts computable. Adamatzky, on the other hand, uses the term space-agents to distinguish them from graph-agents.

In the vein of classical AI, agents with complex symbolic reasoning or a central symbolic model are still quite common in several disciplines. Russell and Norvig generically call this type knowledge-based agents (Russell and Norvig 1995, p. 194). Within this category, Wooldridge (1999) describes three architectures: logical agents, whose decisions are made via logical deduction; belief-desire-intention agents, whose decisions are based on a model of human practical reasoning; and layered architectures, where decision making is realised through layered software. Another type – the cognitive or deliberative agent – contains a representation of the world, has memory, and operates via symbolic reasoning (Rodrigues and Raper 1999). Designs of the deliberative agent often have severe problems with symbol grounding and the frame of reference (Pfeifer and Scheier 2001). Some authors also mention interface agents – a metaphor used to describe software-based assistants in computer applications. According to Rodrigues and Raper (1999), interface agents are semi-intelligent and semi-autonomous programs; they are of no interest in this thesis.

Types of models

Regardless of the type of agent used, agent-based models can be classified by other generic principles. Gilbert (2004) offers a duality-based taxonomy distinguishing abstract models from descriptive ones, artificial from realistic, positive from normative, and spatial from network:

• Abstract models do not mimic any real-world process, but produce more general knowledge by exploring concepts. While descriptive models are concerned with modelling something that already exists in order to understand it, the findings of abstract models are not directly applicable to any existing process.

• Realistic multi-agent models are inspired by real societies and give insights into how these societies work. Artificial models use completely made-up agents to achieve a certain engineering task.

• Normative models are often concerned with making suggestions about what policies should be followed. Positive models, on the other hand, are descriptive and analytical about the phenomena studied, helping to understand rather than to advise.

• Spatial models deploy a representation of some spatial environment, often a 2D lattice of cells, a map, or a 3D geometrical model. Agents are usually capable of moving freely around in such models. In a network model, on the other hand, the geometry of the environment is irrelevant – the relationships between agents and network nodes are more important. Spatiality of the model is a particularly relevant notion in the context of architectural and urban design.

Gilbert (2004) gives this overview from the social science perspective. Castle and Crooks (2006) offer yet another classification that distinguishes agent-based models by purpose: predictive models are constructed for evaluating scenarios and foreseeing future states; explanatory models strive to explore theoretical aspects and create hypotheses.

4.4 Properties and behaviour of agent-based models

Emergence

Emergence often lies at the heart of the bottom-up modelling approach of generative design. In order to generate circulation diagrams, one can greatly benefit from observing and analysing the emergent behaviour of biological agent colonies. Even deeper insight can be obtained by programming agent colonies following the principles found in nature.

In multi-agent models, the boundaries between the individual and the colony are blurred; it is often hard to tell the difference between individual and group behaviour. Emergent behaviour can arise from agent-environment interaction – Braitenberg (1986) has shown that a simple sensory-motor system can display unpredictable behaviour. Emergent phenomena can also arise in populations of repeatedly interacting agents (Sen et al. 2007). Both kinds of interactivity – agent-environment and agent-agent – can produce unpredictable results and insights into how behavioural patterns can emerge from simple rules designed at the level of individuals.

Emergence is a concept widely used to describe processes or patterns that are unplanned and surprising; it is a property of the system that is not contained in its parts. Williams (2008) explains emergence simply as a process of sudden and unexpected appearance. Goldstein (2005) elaborates the idea by stating that emergence refers to the arising of novel patterns and structures. He also stresses that emergence happens at the macro-level of complex systems, as opposed to the micro-level processes from which it arises. In contrast to the traditional view of emergence as the result of self-organising processes, Goldstein claims that it is often constructed – created in heavily controlled conditions in laboratories. Indeed, most computer simulations displaying emergent properties have been carefully set up by the system's designers and programmers. Nevertheless, the value of these simulations is hard to overestimate. According to Holland (1998, p. 12), computer models can provide access to understanding emergence as they can be started, stopped, and observed at the desired pace – something that is impossible with natural dynamic systems.

Well-known examples of the emergent behaviour of biological agents are flocking and nest building in insect colonies. Both have been simulated, at a certain level of abstraction, with ABM. Reynolds' model of flocking boids had an important role to play in changing the understanding of how birds coordinate their behaviour at colony level (Reynolds 1987). Theraulaz and Bonabeau, on the other hand, built a formal model to explore the nest-building behaviour of wasps (Theraulaz and Bonabeau 1995). Using simple building rules, their artificial insects were capable of creating structures of astonishing complexity. According to Gilbert and Terna (2000), there are two distinct types of emergence: unforeseen and unpredictable. Unforeseen emergence occurs at an equilibrium state when some sort of cyclical behaviour appears. Unpredictable emergence is chaotic behaviour of the system that is observable but much harder to reverse engineer.

Learning

The ability to learn is a crucial component of ABM; without it, it is very difficult to use agent colonies to optimise the generated circulation networks. While emergence in agent-based models refers to the arising of novel patterns and structures, learning is often associated with changes in an agent's behaviour to maintain its desired state. According to the environment-modification principle in systems theory, agents have to choose between two main strategies (Skyttner 1996, p. 73): one option is to change the environment to suit one's needs; the other is to adapt to it – to learn to live in new conditions. Maturana and Varela (1980, p. 35) define learning in living systems as the process of modifying one's conduct in order to maintain one's basic circularity. It goes without saying that the capacity to learn is a key property of intelligent behaviour (Shoham and Leyton-Brown 2009). Brooks lists four things that an intelligent agent can learn: representations of the world, calibration of sensors and actuators (motors), how individual behaviours can interact, and new behavioural modules (Brooks 1991b).

Learning can happen at two levels: at the level of individuals, where an agent changes its behaviour during its existence, and at the colony level, where the whole group of agents modifies its course of action in a certain way. A classic example of a colony's learning process is the trail formation of foraging ants, where the colony constantly searches for shorter trails to food. Populations of agents can also undergo phylogenetic development – they adapt to their ecological niche over generations by means of evolution. In addition to behavioural and evolutionary learning, Pfeifer and Scheier (2001, p. 485) describe two further levels at which learning can take place: rapid changes in the environment can cause physiological or sensory adaptation. Sweating, for example, is a physiological response to rising temperature; contraction of the pupils classifies as a sensory adaptation.

Learning mechanisms are of major concern in modelling intelligent behaviour. Evolutionary algorithms have been heavily deployed in designing artificial agents (e.g. Sims 1994; Stanley, Bryant and Miikkulainen 2005). The individual-level learning of artificial agents draws upon many well-known machine learning algorithms, of which the most exploited in ABM is reinforcement learning (Vidal 2007, p. 70). Reinforcement learning occurs when agents learn to map sensory inputs to motor outputs by trial and error, receiving rewards if certain states have been achieved. The environment is treated as a black box and agents do not need any previous knowledge or symbolic representation of the world (Wan and Braspenning 1996). Vidal (2003) points out that learning in multi-agent communities can be quite challenging as the target for an individual agent keeps changing and the agents cannot learn from examples. He argues that multi-agent systems where agents share information or otherwise help each other can be seen as extensions of traditional machine learning algorithms. Learning in such systems can happen collaboratively, when agents collectively create and share global knowledge, or in competition, when each agent wants to be the best (Vidal 2007, p. 63). In multi-agent systems, it is difficult to separate the phenomenon of learning from that of teaching – all agents involved in the process usually gain some benefit (Shoham and Leyton-Brown 2009).
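The trial-and-error scheme described above can be illustrated with a toy example. The sketch below uses Q-learning, one common reinforcement learning algorithm, on an invented five-cell corridor world; the environment, reward and all parameters are hypothetical, chosen only to make the idea concrete:

```python
import random

# Toy corridor: states 0..4, goal at state 4; actions 0 = left, 1 = right.
# The environment is a black box to the agent: it only observes the state,
# the reward, and the next state, and learns by trial and error.
def env_step(s, a):
    s2 = max(0, min(4, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == 4 else 0.0)

random.seed(0)
Q = [[0.0, 0.0] for _ in range(5)]           # Q[state][action] value table
alpha, gamma, eps = 0.5, 0.9, 0.1            # learning rate, discount, exploration
for _ in range(200):                         # training episodes
    s = 0
    while s != 4:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        a = random.randrange(2) if random.random() < eps \
            else max((0, 1), key=lambda x: Q[s][x])
        s2, r = env_step(s, a)
        # reward-driven update of the sensory-motor mapping
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max((0, 1), key=lambda x: Q[s][x]) for s in range(4)]
```

After training, the greedy policy in every non-goal state is 'move right' – the mapping from state to action has been learned purely from rewards, with no model of the corridor.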

Communication in multi-agent systems; stigmergy

Communication is an essential part of any multi-agent system – it enables agents to coordinate their behaviour in ways that benefit the whole colony. Communicating with others helps an individual agent to understand and be understood, but also facilitates the colony achieving its goals (Huhns and Stephens 1999). There are several communication strategies and methods that can be used with multi-agent systems. This section scrutinises some of these, focusing on indirect communication through the environment.

Communication in multi-agent systems is a diverse faculty dealing with a range of issues from message protocols to general coordination strategies. Huhns and Stephens lay out the communication infrastructure, specifying interaction methods and mechanisms: 1) communication can happen via shared memory, with agents having access to a communal database, or be message-based, with agents communicating with one another; 2) communication can be connected or connectionless; 3) messages can be exchanged from a single sender to a single receiver (point-to-point) or from a single agent to many others (multicast or broadcast); 4) messages can be pushed or pulled; and 5) communication can be synchronous or asynchronous. Doran et al. (1997) point out that coordination in colonies can happen in a non-communicative manner, with agents simply observing the other agents' behaviour. However, it is reasonable to argue that these agents still communicate, albeit no messages are deliberately sent.
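Some of the infrastructure options listed by Huhns and Stephens can be made concrete with a small sketch. The code below is an illustrative toy, not any particular framework: a shared dictionary stands for shared-memory (blackboard) communication, and per-agent mailboxes stand for asynchronous, message-based communication with point-to-point and broadcast delivery:

```python
from collections import deque

# Shared-memory communication: any agent may read or write the blackboard.
blackboard = {}

# Message-based, asynchronous communication: each agent has a mailbox,
# so sender and receiver need not act at the same time.
mailboxes = {"a": deque(), "b": deque()}

def send(to, msg):
    """Point-to-point: one sender pushes a message to one receiver."""
    mailboxes[to].append(msg)

def broadcast(sender, msg):
    """Broadcast: one sender, every other agent receives."""
    for name, box in mailboxes.items():
        if name != sender:
            box.append(msg)

blackboard["food_at"] = (3, 4)   # agent a posts to shared memory
send("b", "follow me")           # a -> b, point-to-point, pushed
broadcast("a", "danger")         # a -> everyone else
```

Agent b can later pull both messages from its mailbox in order, long after a has moved on – the asynchrony that also characterises the environment-mediated communication discussed next.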

Messages between agents can be sent and received either directly or indirectly (Keil and Goldin 2006). Direct messaging involves at least two parties – a sender and a receiver – communicating simultaneously. Indirect communication is mediated through the environment and does not require the simultaneous presence of both parties; one party can simply leave a message and the other one can pick it up later. This type of communication is also known as stigmergy. Compared to direct communication, stigmergy is a lightweight solution (Hadeli et al. 2004). A designer of distributed artificial systems using stigmergy can replace direct communication with indirect, reducing the complexity of an individual agent (Bonabeau, Dorigo and Theraulaz 1999, p. 16).

There is a particular reason why this thesis is so interested in stigmergy – stigmergy can be seen as an environmental modification principle. From the point of view of architectural design, leaving a message leads to environmental changes, and stigmergy can, therefore, be seen as an environmental design strategy. The stigmergic modification principles can be used to design the environment following a set of modification rules embedded into the agent's sensory-motor loop. Certain cues in the environment can then trigger a particular building action that, in turn, leads to a new environmental configuration.

Stigmergy was first described by Pierre-Paul Grassé in 1959, observing the nest construction process of termites (Holland and Melhuish 1999). Stigmergy is a class of mechanisms that, according to Theraulaz and Bonabeau, mediate animal-animal interaction (Hadeli et al. 2004). Stigmergy has claimed a lot of attention among scientists studying social insects and their nests (e.g. Deneubourg, Theraulaz and Beckers 1992; Turner 2000). Social insects communicate in many different ways – they use tappings, strokings, graspings, antennations, tastings, etc. – but most of the signals are based on chemicals (Wilson 1980, p. 192-193) that, when dropped in the environment, provide information to other members of the colony. These chemicals are known as pheromones.

Bonabeau, Dorigo and Theraulaz (1999) consider the activities of a social insect colony as a process of self-organisation at the heart of which lies stigmergy; they describe the nest-building process of social insects as a process of self-assembly. Stigmergy, in their opinion, facilitates self-organisation in the colony. In stigmergic self-organisation, spatiotemporal structures arise mainly from the agents' actions rather than from environmental physics. That does not mean that physics is not involved, but it has a secondary role to play (Holland and Melhuish 1999).

Bonabeau, Dorigo and Theraulaz (1999, p. 205-208) distinguish between two different stigmergic mechanisms: qualitative or discrete stigmergy, and quantitative or continuous stigmergy. In the first case, different spatial arrangements can trigger different behaviour of agents; in the second case, the spatial stimuli influence the action of agents in a quantitative way. Qualitative stigmergy affects the agent's choice of action; quantitative stigmergy affects the frequency, strength, length, or other quantitative properties of the agent's action. Whereas continuous stigmergy usually amplifies the subsequent behaviour at a location, there is no positive feedback mechanism in discrete stigmergy – the stimulus is transformed into another, qualitatively different, stimulus. Both kinds of stimuli can be attractive or repulsive, activating or inhibiting, and depend on the local context (Bonabeau, Dorigo and Theraulaz 1999, p. 206). Holland and Melhuish (1999) also make the distinction between a passive and an active form of stigmergy. Passive stigmergy takes place when a previous action at a location does not influence the subsequent action, only affects its outcome. In the case of active stigmergy, both quantitative and qualitative effects can happen (Holland and Melhuish 1999). Although stigmergy is mostly associated with insect colonies, Parunak (2006) points out that humans also use environmentally mediated signals to communicate.

Holland and Melhuish (1999) suggest that the best way of studying stigmergy is simulating it in a computer program. In a simulation, an agent has two key abilities: to move through the environment or to change it. The change can be made by adding or subtracting material, or by changing the qualitative properties of the material (Holland and Melhuish 1999).
The building algorithms used in stigmergic simulations are formulated as series of if-then decision loops (Bonabeau, Dorigo and Theraulaz 1999, p. 209). Many insect colonies have individuals specialised for different tasks. This partitioning of tasks is a phenomenon of stigmergy that is needed to avoid task switching, and saves energy and time (Ramos, Fernandes and Rosa 2007). However, if agents are to respond to different stimuli and perform different activities triggered by the same stimuli, the task of programming an artificial colony becomes very difficult (Bonabeau, Dorigo and Theraulaz 1999, p. 205-251).
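A minimal simulation of quantitative stigmergy, in the spirit described above, might look as follows. Everything here is an arbitrary illustration rather than a model from the literature: agents on a one-dimensional ring of cells deposit pheromone where they step, the pheromone evaporates, and the local concentration biases each agent's next step – a positive-feedback loop that amplifies use of already-used cells:

```python
import random

random.seed(2)
CELLS, DEPOSIT, EVAP = 20, 1.0, 0.02
pher = [0.0] * CELLS                         # pheromone field: the environment
agents = [random.randrange(CELLS) for _ in range(10)]

for _ in range(300):
    for i, pos in enumerate(agents):
        left, right = (pos - 1) % CELLS, (pos + 1) % CELLS
        # quantitative stigmergy: probability of stepping right grows
        # with the pheromone concentration sensed on the right
        bias = (pher[right] + 0.1) / (pher[left] + pher[right] + 0.2)
        agents[i] = right if random.random() < bias else left
        pher[agents[i]] += DEPOSIT           # mark the environment in passing
    pher = [p * (1 - EVAP) for p in pher]    # evaporation acts as forgetting
```

No agent addresses another directly; the pheromone field is the only channel, and evaporation keeps the 'message' current – the negative feedback that balances the amplification.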

According to Bonabeau, Dorigo and Theraulaz (1999), the benefits of using stigmergy-based multi-agent systems are:

• Incremental improvement of a solution – stigmergy facilitates a step-by-step approach to solving optimisation problems.

• Flexibility – an agent colony can cope with external perturbations since it can handle different spatial stimuli.

• Robustness – the success of the whole colony is not dependent on the poor performance of an individual agent.

• Increased speed – a colony can find solutions much quicker than individuals separately.

• Qualitative leap – theories of self-organisation claim that the behaviour of an interacting population is qualitatively different from that of a single agent.

• Top-down control is not appropriate when dealing with a large number of agents.

• Stigmergic communication-based systems also naturally tend towards optimised solutions (Valckenaers et al. 2001).

Drawbacks of stigmergic agent-based models include the lack of global knowledge – agents can get stuck in a local solution – and the difficulty of programming, since both the state of the system and the environment constitute the solution to the problem (Bonabeau, Dorigo and Theraulaz 1999, p. 20).

Coordination in multi-agent systems

Competition and cooperation are two coordination mechanisms in ABM that control and encourage development in multi-agent systems. Competition happens naturally between agents sharing the same ecological niche; in most cases agents compete for resources or for opportunities to reproduce. Cooperation, on the other hand, is often presented as a concept that distinguishes multi-agent systems from object-oriented systems and expert systems (Doran et al. 1997). Grand (2001) argues that two systems affecting one another always display the results of cooperation and competition.


Huhns and Stephens (1999) argue that coordination is possible in multi-agent systems because of communication, and that it leads to more coherent systems. They argue that coordination is a property of a system of agents in a shared environment, and see collaboration and competition as parts of it. From the system designer's point of view, collaboration can be seen as a coordination strategy between social agents, whereas competition can trigger coordination between self-interested agents (Huhns and Stephens 1999).

Franklin proposes a typology of cooperation (see Figure 4.1) in which multi-agent systems are divided into two main categories: independent and cooperative systems (Doran et al. 1997). As independent agents have their own individual goals, they collaborate accidentally without being aware of it – independent agents cooperate only from the observer's viewpoint. Cooperative agents can be either communicative or not. Non-communicative agents collaborate by observing one another's actions without explicitly sending messages. Communicative agents exchange messages directly or through the environment. Deliberative agents plan actions together; negotiating agents do the same while also competing with one another.

Figure 4.1: Cooperation typology in multi-agent systems. Source: Franklin (Doran et al. 1997)

Elaborate agent architectures (e.g. deliberative agents) are explicitly designed for collaboration, but simpler agents can collaborate effectively too. Doran et al. (1997) point out that, although reactive agents do not have the capacity of prediction or intention, collaboration can be emergent. Cooperation happens naturally because all agents benefit from the overall well-being of the colony. In the case of simple reactive agents, cooperation often arises out of competition, with individuals forming syndicates for a better existence (Grand 2001). Portugali (1999) calls this self-organising principle 'the cooperation principle'.

4.5 Applications of multi-agent systems

Overview

Multi-agent systems are particularly good for distributed problem solving in dynamic environments (Huhns and Stephens 1999). The ability to deal with distributed data and cope with dynamic inputs has made ABM popular in many disciplines; it is also why these systems are potentially very useful for architectural tasks. In recent years, ABM has been utilised in a number of different ways, ranging from social experiments to industrial design and engineering applications. All these applications fall roughly into two main categories: models that emulate natural systems in the hope of understanding them better, and models that build artificial systems in order to tackle complex problems and generate appropriate solutions. Whereas designing models of the first category demands biological plausibility and rigorous observations, the designer of an artificial system for problem solving only needs to grasp the broad concept of the underlying mechanisms (Bonabeau, Dorigo and Theraulaz 1999, p. 8).

Bonabeau (2001) lists four areas of application in a business context where ABM can be used: flow, traffic and evacuation management; stock market simulation; operational risk and organisational design; and diffusion of innovation and adoption dynamics. Flow and traffic management – and particularly pedestrian modelling – is the most often practised application related to architectural and urban design; an overview of that field is given later in this section. Traffic planning is a classic example of ABM since the behaviour of individual vehicles can easily be mimicked with autonomous agents. ABM is also suitable for stock market simulation, since the dynamics of a market result from the behaviour and interaction of many individual agents. The same applies to operational risk management, where agent-based models are used for producing valuable qualitative or semi-quantitative insights into an organisation's design.
Diffusion models have proven to be useful in understanding individual decision-making in a community.


Popular applications

Considering the number of experiments, a few disciplines in which multi-agent systems are now extensively exploited clearly stand out. Most of the agent-based models in sociology, economics, geography, and biology study behaviour and control in complex systems at different levels of abstraction. The first agent-based model in sociology was developed by Schelling in 1978 to study housing segregation (Macal and North 2006). Since then, a plethora of social simulations have been used to explore patterns in politics, attitude change in societies, social segregation, formation of settlements, anti-social behaviour, validating policies, etc. (Gilbert 2004). In biology, multi-agent models have been developed to study the transmission of viruses, the growth of bacterial colonies, and multi-cellular interaction and behaviour (Castle and Crooks 2006). A popular application in geography is clustering spatial data from databases – a task where traditional methods would fail because the databases are so vast (Macgill 2000). ABM is gaining popularity in economic studies as a means to understand market dynamics (Heppenstall, Evans and Birkin 2007). Traffic control (De Schutter et al. 2003) and network routing (Sim and Sun 2002) applications benefit from the individual-oriented approach of ABM. Besides the above-mentioned fields, ABM has been used to a lesser extent in archaeology, agricultural economics, urban simulation (Parker et al. 2003), and even in studying the origin of languages (Steels 1997).

Game theory, defined as the mathematical study of interaction among independent, self-interested agents (Shoham and Leyton-Brown 2009), is another field where multi-agent applications have proven successful. Most ABM solutions in game theory are geared towards understanding the behaviour of well-informed people. Alternatively, some models are developed to create automatic opponents for game players (Vidal 2003).
A classic example of ABM in game theory is given by Sen et al. (2007), who describe an agent model for the Prisoner's Dilemma – a well-known social dilemma with two players. Agent-based models have been used in both non-cooperative and coalitional game theory; whereas in the former the basic modelling units are individuals, in the latter the basic modelling unit is the group (Shoham and Leyton-Brown 2009).

Agent-based models have also been extensively used in some relatively new and emerging fields that cross the borders of traditional scientific domains. It is possible that the ABM paradigm is partially responsible for the emergence and development of such new fields. The simulations in one such overlapping area of two disciplines – sociology and geography – can be collectively termed geospatial simulations (Castle and Crooks 2006). Geospatial simulations, often operating on geographical information systems' (GIS) databases, benefit from the mobility of individual agents and the multi-agent system's ability to deal with large distributed datasets. The general goal of geospatial simulations is to understand the emergence of patterns, trends, and other characteristics in societies. According to Castle and Crooks (2006), ABM has been used in geospatial simulations to reconstruct the settlements of ancient civilisations, study the dynamics of civil violence, explain the spatial patterns of unemployment, evaluate the recreational use of land, and study the coordination of social networks within three-dimensional landscape terrains. Although ABM has been a part of the geospatial sciences for more than ten years (Ligmann-Zielinska and Jankowski 2007), the rise of GIS has generated vast datasets that provide new challenges for agent modellers, and novel technology in geographic information science is suggesting new modelling techniques.

Architectural design related applications

Thanks to its growing popularity in many disciplines, ABM is now taking its first cautious steps in architecture and related design realms. The reasons for using multi-agent systems in design disciplines should be obvious. Raisbeck (2007), recognising the great potential of ABM in urban design, points out that, with a new mimetic and functional range of locomotion, structural optimisation, pattern formation, and learning capacity, agent-based software enables cities to be modelled from the bottom up. However, he acknowledges that the software technologies developed to date are exploratory, and it may take some time before they become exploitative technologies. Raisbeck sees traces of ABM already in the work of Team X architects in the 1960s – a bold statement that the author fails to back up with any examples. Another evangelist of ABM and swarming in architecture is Neil Leach who, similarly to Raisbeck, stays at the visionary level, stating the potential of using such systems in architecture (Leach 2004).

The most successful applications of agent-based models in architecture have been developed for pedestrian movement and evacuation studies. Individuals in these models are represented as agents that follow simple rules; complex human motivations and reasoning tend to be left out. Pedestrian modelling with agents is mostly used in analysing urban settings and large shopping mall environments. A more thorough overview of the field is given in the next section of this chapter.

Although the majority of ABMs – pedestrian movement, evacuation, and crowding models – tend to be analytical, there is a growing trend of using agents in the generative design process. Reffat (2003) proposes a model where agents generate new design concepts by exploring two-dimensional sketches; he justifies the use of ABM by claiming that creativity in architectural design can be seen as the emergence of new forms and relationships between these forms. Gomes et al. (1998) describe a design process supported by distributed agents where 3D solid objects are presented as reactive agents responding to the designer's actions in a CAD environment. Gu and Maher (2003) propose a generative model combining shape grammar with ABM to construct virtual environments. More artistic and abstract models have been proposed by Nembrini et al. (2005), Jacob and Mammen (2007), Mammen, Jacob and Kokai (2005), Maciel (2008), and Carranza and Coates (2000). Most of these models are concerned with form-finding; the latter, for example, generates novel 3D shapes by wrapping a continuous surface around the trails left behind by swarming agents. From the professional perspective, the majority of these models are immature and commonly ignored by the community of architectural practitioners. Slightly more tangible are some agent-based models for industrial and product design applications: multi-agent systems have been used to develop electro-mechanical devices (Campbell, Cagan and Kotovsky 1998), design chairs (Crecu and Brown 2000) and ships (Jin and Zhao 2007).
Despite the potential of generating architectural circulation systems with mobile agents, there are only a handful of models that use ABM for this purpose. Batty (2003) shows how diffusion-limited aggregation (DLA) – a process of dendritic growth – can theoretically be used in urban policy analysis and to simulate the growth of a city. DLA is based on Brownian movement, where random-walk particles aggregate into tree-like networks. A similar process is used by Nickerson (2008), who asserts that the model can be used to create a set of designs for urban infrastructure. Derix (2008) proposes a generative approach to modelling circulation networks using an ant colony that travels between entry and exit points in a building footprint. Discussing people's movement in airports, Williams (2008) suggests that it is feasible to generate circulation in a bottom-up manner using very simple agents – particles. He further elaborates this idea by adding that these particles can be given agendas such as sitting, waiting or browsing in a bookshop, so that appropriate space can evolve for these functions as well as for circulation (Williams 2008).
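The DLA process mentioned above is straightforward to sketch. The following toy version – grid size, walker count and step budget are all arbitrary – releases random walkers on a small lattice and lets each stick when it touches the growing cluster, producing the dendritic, tree-like aggregate that Batty relates to urban growth:

```python
import random

random.seed(3)
N = 41
grid = [[False] * N for _ in range(N)]
grid[N // 2][N // 2] = True                      # seed particle at the centre

def touches(x, y):
    """True if (x, y) is 4-adjacent to a cell already in the cluster."""
    return any(grid[y + dy][x + dx]
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
               if 0 <= x + dx < N and 0 <= y + dy < N)

for _ in range(150):                             # release random walkers
    x, y = random.randrange(N), random.randrange(N)
    for _ in range(5000):                        # Brownian walk on the grid
        if not grid[y][x] and touches(x, y):
            grid[y][x] = True                    # stick to the cluster and stop
            break
        dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y = max(0, min(N - 1, x + dx)), max(0, min(N - 1, y + dy))

size = sum(sum(row) for row in grid)             # cells aggregated so far
```

Because a walker is far more likely to hit the tips of the cluster than its sheltered interior, growth concentrates at the extremities – the origin of the dendritic, network-like form.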

Pedestrian modelling with agents

It is not hard to see why pedestrian modelling has gained so much from the advances in ABM techniques. In contrast to using differential equations to describe crowd movement, agent-based models can simulate discontinuities in an individual's behaviour with ease. Agents, when confronting obstacles in the environment, can deal with the situation ad hoc using simple navigational rules. Castle and Crooks (2006) point out that ABM is a natural way of simulating a system composed of real-world entities, and is inherently suited to simulating people in a very realistic way. This is exactly the case with pedestrian modelling, where agents have to mimic certain aspects of human behaviour relatively accurately. As long as the target is clearly defined, an individual's locomotion can be described using quantitative rules, making it suitable for programming.

Helbing, an early player in the field, developed one of the first mathematical models for mimicking the behaviour of pedestrians (Batty 2003). Referenced by almost every scholar of crowd modelling, he clearly favours individual-based methods over older modelling methods such as queuing models, transition matrix models, stochastic models, and route choice behaviour models (Helbing et al. 2002). The individual-based models are preferred for their ability to simulate the self-organising effects occurring in pedestrian crowds. The trade-off between an individual's selfish and unselfish actions can lead to flocking or turbulence in a crowd (Batty 2003) – phenomena that other models fail to capture. Helbing claimed that the movement of a crowd is very similar to the movement of fluids and gases, but that focusing on the behaviour of individuals (agent-based models) is favourable since this approach is more flexible (Helbing et al. 2002).

There seems to be a slight disagreement between modellers about how closely the actual movement of pedestrians needs to be simulated.
Batty (2003) claims that getting the movement right is crucial, while Silva, Seixas and Farias (2005) and Dijkstra, Timmermans and de Vries (2007) disagree, proposing models where the actual movement of pedestrians is only a small component of the approach. Instead of fine-tuning the individual movement rules, they combine local decisions with planning and motivations. This disagreement arises partly because different models are built at different levels of abstraction. The same can be said about the representation of the environment with which agents interact. The representations used range from topological discrete networks and simple grids to continuous topographical environments with 3D surface geometry. Some models reduce the environment to a network consisting of sequences of straight paths (Silva, Seixas and Farias 2005); others use a lattice of cells (Dijkstra and Timmermans 2002) or heavily simplified shapes (Kukla et al. 2001) to represent the space. Some researchers use a pre-processed environmental representation based on a space syntax technique called the axial line (Penn 2001). This "exosomatic visual architecture" of axial lines guides agents through the environment (Turner 2003). However, in more complex and less intelligible environments, the axial line model has to be amended by making major attractors and generators of movement 'visible' to agents (Penn 2001). Space syntax research has also tried to combine agents with Gibson's psychological theory of visual perception (Turner 2003). This approach borrows the concept of affordances (Gibson 1979) to build agents that can perceive objects in the environment along with their possibilities for use. The agent is then directly guided by the affordances in the environment without any reference to superior representational models (Turner and Penn 2002). Pedestrian modelling has been used for several purposes.
The most common ones are probably the evaluation of architectural configurations (Turner 2003) and of the built environment (Dijkstra and Timmermans 2002; Calogero 2008), and evacuation modelling in buildings (Helbing et al. 2002). Other uses include assessing infrastructural changes to promote walking (Kukla et al. 2001), and estimating shopping patterns in large malls (Penn and Turner 2002; Dijkstra, Timmermans and de Vries 2007). The latter allows merchants to carry out economic analysis for positioning goods and organising departments (Batty 2003). Pedestrian modelling has also been used to control crowd movement at festivals (Batty 2003).

A related area of study to pedestrian crowd modelling is concerned with way-finding in buildings and urban environments. Some way-finding models also integrate ABM techniques. Samani et al. (2007), for example, assess the design of digital signs that help agents find their way in unfamiliar indoor environments. A similar approach is also developed by Hölscher et al. (2007). Raubal (2001) proposes a method based on Gibson's theory of affordances (Gibson 1979) to simulate agents' navigation in airports. Kuipers, Tecuci and Stankiewicz (2003) evaluate a computational way-finding algorithm that helps agents create and use cognitive maps for navigational purposes. Nearly all agent-based pedestrian models are analytical in nature – they are primarily designed to evaluate rather than generate designs. A more design-oriented approach has been proposed by Dijkstra and Timmermans (2002), who combine agents with a cellular automata model. The authors claim that their approach is very useful for architects and urban planners. Nevertheless, they do not show how it can be used in the design process, nor outline any generative rules for modifying the environment. The purpose of crowd models in the design process remains analytical, occasionally providing valuable feedback to architects. If this analysis is to influence the design output, one has to remodel and analyse the design repeatedly. Instead of taking the design back and forth between conception and analysis, one can devise a generative computer program to close this loop. The following section gives clues as to how an analytical pedestrian model can be turned into a generative one where agents use stigmergy to alter their environment.

Stigmergic building algorithms

Stigmergy is a coordination method that allows turning multi-agent swarms into generative systems that can not only move through the environment but also modify it. From the point of view of this thesis, it offers a great opportunity to combine algorithms that generate the circulation with rules that generate the spaces served by that very same circulation. This way, the circulation system and the served spaces can emerge simultaneously. The first algorithms mimicking the building behaviour of social insects started to emerge at the beginning of the 1990s. Deneubourg et al. (1992) introduced two algorithms – a sequential and a stigmergic algorithm – to simulate the nest-building behaviour of wasps and termites in a 2D lattice space. By comparing these two algorithms, they concluded that the stigmergic algorithm is better suited for colonies whereas the sequential one is more adapted to a solitary individual. Following the first experiments, Theraulaz and Bonabeau (1995) started working in 3D, seeking minimal models for stigmergic activities to happen. Their simulation takes place in a cellular lattice space where agents deposit bricks according to information in a lookup table. They come up with two types of algorithms – coordinated and non-coordinated. Coordinated algorithms divide the shape into modular sub-shapes; in non-coordinated ones the building stimulus configuration overlaps and affects the entire building process. Although the algorithm contains probabilistic elements, coherent nest-like structures emerge with coordinated building activity. Using the same algorithm, the built structures can be somewhat different, but they exhibit a high degree of structural similarity (Bonabeau et al. 2000). Non-coordinated algorithms are unstable and lead to different outcomes in different simulations. A few years after their first experiments, Bonabeau et al. (1998) came up with a stigmergic building algorithm to simulate the construction of pillars, walls, and the royal chamber in a termite colony. They argued that building different nest morphologies does not require a change in behaviour, but can be achieved by the random distribution of agents or by introducing external environmental influences such as air movement. Having devised an algorithm to simulate the spider web construction process, Krink and Vollrath (1997) come to a similar conclusion – complex animal architecture is the result of simple behaviour patterns interacting with a dynamic environment.
Stewart and Russell (2003) further expand on this idea by noting that the complexity is believed to lie within the environment; agents are just uncovering it. Possibly the most complex model based on stigmergic building activities in multi-agent colonies is introduced by Ladley and Bullock (2005). These authors propose a simulation model in which virtual termites build the royal chamber, covered walkways, tunnel intersections and chamber entrances. The simulation, based on the previous models by Deneubourg et al. (1992) and Bonabeau et al. (1998), takes place in a 3D lattice world. In contrast to its predecessors, this model implements some constraints on what structures can be built, and determines how these structures affect the movement of agents and the diffusion of pheromone. This approach adds another layer to the stigmergic simulation as it includes feedback between the information distribution, movement, and the built structure. The authors also introduce the notion of wind to their simulation. Wind directly affects the pheromone gradients, influencing the overall structure of the nest. Ladley and Bullock use several different types of pheromone, and their termites have allocated tasks. The pheromone, emitted by a special type of termite, diffuses in the environment and decays over time. The building behaviour is stimulated by pheromone templates and guided by a small set of rules. All the stigmergic building algorithms discussed above utilise active stigmergy – the building action of an insect is guided by the previously built structures at a location – and qualitative stigmergy – different built forms trigger qualitatively different building actions. As most of the models are abstract and concerned with exploring stigmergic building rules, the model proposed by Ladley and Bullock (2005) clearly stands out. Besides being biologically plausible, the model features a feedback mechanism between agents' movement patterns and the built structure. This concept can potentially be used in generating building and urban layouts that satisfy accessibility criteria.
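To make the lookup-table mechanism behind these algorithms concrete, the sketch below shows a minimal qualitative-stigmergy rule on a 2D lattice: an agent deposits a 'brick' only when the already-built neighbourhood matches a stimulating configuration. This is an illustrative Python reconstruction, not code from any of the cited models; the grid encoding, rule table and brick values are all assumptions.

```python
def build_step(grid, x, y, rules):
    """Qualitative stigmergy on a 2D lattice: deposit a brick at (x, y)
    only if the local 3x3 neighbourhood matches a stimulating
    configuration in the lookup table of building rules."""
    h, w = len(grid), len(grid[0])
    # Read the neighbourhood (wrapping at the edges) as a tuple key.
    key = tuple(grid[(y + dy) % h][(x + dx) % w]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    if key in rules:
        grid[y][x] = rules[key]  # the rule dictates which brick to place
        return True
    return False

# One seed rule: an entirely empty neighbourhood stimulates brick type 1.
SEED_RULE = {(0,) * 9: 1}
```

Richer rule tables (as in the coordinated algorithms above) would map many neighbourhood configurations to different brick types; the feedback arises because each deposit changes the keys that nearby agents read next.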

4.6 Chapter summary

This chapter gave an overview of agent-based modelling – of its origins, the concepts behind the paradigm, definitions of the agent, and the taxonomy of models. Once the necessary background literature was explored, several generic applications of multi-agent systems were discussed and analysed. Despite the popularity of multi-agent systems, a relatively small number of examples investigating their generative potential in architecture were discovered. Most of the models that were found have been developed for pedestrian movement analysis and the evaluation of building and urban layouts. Based on the review in this chapter, one can argue that multi-agent systems could help to solve circulation issues for several reasons. Firstly, there is a correspondence between the mobility of agents in multi-agent systems and the distributed nature of movement in real-world environments. Multi-agent systems capture the emergent phenomena of natural circulation networks. Using mobile agents for generating spaces for movement simply makes sense. Secondly, multi-agent systems are flexible – a model can be deployed in different contexts and responds well to different control parameters. This allows a designer to generate a variety of solutions and pick the one that best fits the design brief. Thirdly, agents are embedded in the modelled environment – they 'see' the environment from the inside out. This can possibly help to create layouts that can be navigated more intuitively or – at least – offer an alternative to the traditional top-down way of designing circulation. The next chapter proceeds by defining the essential building blocks that are needed in order to build multi-agent systems for generating circulation diagrams. These are then recombined into prototype models that help to validate the assumptions made above.


Chapter 5: Building blocks of agent-based circulation models

This chapter gives an overview of the basic components of agent-based circulation models. These components are present in most of the prototypes and case studies presented in the following chapters. They are the building blocks that commonly need to be addressed when programming circulation models with agents. Naturally, there are other ways to structure multi-agent systems, but the break-down convention given in this chapter has proven useful when building prototypes (see Chapter 6) and case studies (see Chapter 7 and Chapter 8) at a later stage. These blocks can be synthesised in various ways in order to build prototypes and complete models – models that generate useful diagrams in a site-specific context. The method of breaking down a complete model is often called reverse-engineering. Vidal (2007, p. 9) points out that the difficulty of reverse-engineering emergent phenomena lies in taking a description of what the model should do and figuring out how an individual agent should behave. The task of reverse-engineering emergent phenomena can be achieved using algorithms and tools from systems theory and artificial intelligence research (Vidal 2007, p. 9). This chapter borrows many conceptual ideas from these fields and adapts them to solve the specific task of generating meaningful circulation diagrams. The approach proposed here is to follow the system-theoretical distinction between the system and its environment, and to break the complete model down into mobile agents and their environment. While the agent is obviously a must-have component of multi-agent systems, the environment is a compound of situated systems (Weyns et al. 2005). The importance of the system-environment distinction cannot be overestimated, but the model needs to be broken down into smaller components still.
This thesis proposes a convention that divides agent-based circulation models into the following topics: 1) design of the agent, 2) design of the environment, 3) movement and behaviour of the agent, 4) environmental processes, 5) interaction between the agent and its environment, 6) communication between agents, and 7) general settings of the simulation model. Although interconnected and overlapping at various levels, these topics can also be scrutinised independently. For example, the behaviour of the agent can be studied separately from the environment, as long as the input from the environment is reduced to abstract input values. In programming terms, the proposed topics can be organised into commonly used modules that allow the developer to use standard methods without having to spend long hours on programming, validating and testing the basic components. This chapter, however, focuses on the theoretical principles of how these modules are built, and no exact programmatic interface is discussed. There are several ways in which each of the proposed modules can be implemented, and there are subtle differences in how these modules can be assembled into a coherent whole. Once the basic elements of agent-based circulation models are well understood, one needs some practical experience in building complete models. It is advisable to begin with smaller tests and prototypes and gradually move towards more complete models.

5.1 Design of the circulation agent

The most common type of agent in circulation models is the mobile agent. This certainly does not come as a surprise, since motion is the key characteristic of any kind of circulation system. Hence, agents in circulation models can alter their location and are designed to cope with changing surroundings. They can move around in the same environment or, in extreme cases, even move from one environment to another. However, in this section, the notion of environment is treated abstractly as a set of input parameters to the system. Similarly, the output of the system is treated without further investigation of what that output actually means to the environment or to other systems in it. One can argue that it is controversial to look at the system without taking the environment into consideration, especially when dealing with mobility. It is widely accepted that motion can only happen in relation to a background system – the environment. However, there is a clear benefit to scrutinising the system in isolation, because this allows decoupling input from output outside of the system's boundary. It is suggested that in this way it is easier to understand the internal structure and logic of the agent. There are essentially two kinds of mobile circulation agents. The first and most common is the embodied agent. The embodied agent has a notional body that defines its location in the model's coordinate system and is the basis of all local 'somatic' calculations that the agent executes. The body is often expressed as a geometrical entity (e.g. a sphere) in the digital model. The second type is a less traditional agent that is, in a way, inseparable from its environment. This agent is made of several generic components that also constitute the environment, and the agent is only distinguishable from its environment by the state of these components. An example of such an agent is the 'glider' in Conway's Game of Life (Adamatzky 2001, p. 185) that dwells in a 2D cellular space. As opposed to embodied agents that simply move around by transforming their body from one location to another, cellular agents can be said to move around by changing the state of cells according to cellular-automata-type propagation rules. The body of the latter type of agent is just a collection of cells of the same state. In order to move around, cellular mobile agents have more than one bodily configuration; the glider, for example, has 4 postures (see Figure 5.1). The difficulty of using cellular agents for generating useful circulation diagrams is their lack of persistence. Gliders, for instance, may disappear when colliding with other objects that contain cells of the same state. Additionally, the local rules for cellular agents are difficult to code (Støy 2004), and that defeats the objective of designing parsimonious agents. Therefore, cellular agents are not further investigated in this thesis.

Figure 5.1: Glider in action – gliding. This cellular automaton agent has 4 different postures that it ceaselessly repeats
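The glider's mode of locomotion can be reproduced with a few lines of standard Game of Life code. The sketch below is the canonical formulation of Conway's rules, written in Python for illustration: after one full cycle of the glider's four postures, the pattern reappears translated by one cell diagonally, so the 'agent' has moved even though only cell states have changed.

```python
from collections import Counter

def life_step(cells):
    """One synchronous Game of Life generation; cells is a set of live
    (x, y) coordinates. A cell is alive in the next generation if it
    has exactly 3 live neighbours, or 2 when it is already alive."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

# The glider in one of its four postures (y grows downwards).
GLIDER = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
```

Stepping the rule four times yields the same five-cell shape shifted by (+1, +1) – movement without any object being transformed.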

In the object-oriented programming (OOP) paradigm, embodied circulation agents are best described as objects. However, in contrast to traditional objects, circulation agents are active components of the model. Embodied agents have their own properties and methods – or data and functions – that can be wrapped in a class definition in most programming languages that support OOP. Each instance of that class normally carries the same set of functions that define how it moves around, interacts with the environment, and records data. At the same time, depending on its past interactions, each instance can carry a unique set of data. Since this data is often used as input to the agent's functions, the behaviour of different agents of the same class can diverge. All circulation agents generally have sensors for scoping the immediate neighbourhood of the agent's body. Whether these sensors are somehow expressed in the digital presence of the agent is a different matter. Sometimes it makes the programmer's task easier if the sensors are given a visual form in order to facilitate observations. This, however, is not always needed, and sensors can be defined as coordinates relative to the agent's body. The latter method also uses the available computational resources more sparingly. Besides their visual appearance, an agent's sensors can also function in different ways and facilitate the production of different kinds of input. The simplest sensor has two states – on or off – and either does or does not produce input. More sophisticated sensors can measure scalar values and allow the agent to respond to more delicate differences in input. Sensors can be classified according to their activation function. Collision detectors, for example, are sensors that are turned on when colliding with certain geometric objects in the digital model. This is usually done by testing whether the visibility line between the agent's body and the sensor intersects with a line in 2D or with a face in 3D. Proximity detectors, on the other hand, are activated if objects appear within a certain range of the sensor. Proximity can be calculated by measuring the minimal distance between the object and the agent, or by using a fixed sensory position and measuring the distance to the intersection point between the visibility line and the model's geometry. Other types of sensors can respond to existing quantitative stimuli in the environment.
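As an illustration of this object-oriented structure, the Python sketch below defines a minimal embodied agent class: a body position, an orientation, sensors stored as coordinates relative to the body (the resource-sparing option mentioned above), and per-instance memory. All names are illustrative assumptions, not an interface prescribed by the thesis.

```python
import math

class CirculationAgent:
    """Embodied mobile agent: a notional body, relative sensor offsets,
    and per-instance state that can diverge between instances of the
    same class as interactions accumulate."""

    def __init__(self, x, y, heading=0.0):
        self.x, self.y = x, y      # notional body location
        self.heading = heading     # orientation in radians
        self.memory = []           # unique per-instance data
        # Sensors as coordinates relative to the body, ahead-left and
        # ahead-right of the direction of travel.
        self.sensor_offsets = [(1.0, 0.5), (1.0, -0.5)]

    def sensor_positions(self):
        """World coordinates of the sensors, rotated with the heading."""
        c, s = math.cos(self.heading), math.sin(self.heading)
        return [(self.x + c * dx - s * dy, self.y + s * dx + c * dy)
                for dx, dy in self.sensor_offsets]

    def step(self, speed=1.0):
        """Actuator: move the body one step along the current heading."""
        self.x += speed * math.cos(self.heading)
        self.y += speed * math.sin(self.heading)
```

Each instance shares these methods with its class, but accumulates its own `memory` and position, which is how agents of one class can come to behave differently.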
Much like a thermometer, a sensor can pick up an environmental parameter and produce output that is understandable to the agent. There are practically unlimited ways to organise different sensors into sensory configurations. To start with, the number of sensors an agent has depends on what the agent is supposed to 'sense' and how delicate the response needs to be. Having multiple sensors allows the agent to compare input values in order to select an appropriate response. A higher number of sensors potentially leads to more informed decisions. However, sensory calculations are often quite expensive, and it is generally advisable to keep the number as low as possible. Circulation agents operating in 3D environments naturally need a larger sensory space than those in 2D. Sensors can be arranged symmetrically around the agent's body, or can be organised asymmetrically. An asymmetrical arrangement usually leads to spatially biased behaviour and can be very useful in generating continuous flows of movement (see section 6.1 and Figure 6.3). Sensors can be grouped according to their function, with different groups providing input to different internal decision mechanisms: movement sensors are wired to the mechanism dealing with locomotion, while building sensors influence the agent's building behaviour. Sensors can be either fixed or reconfigurable. Reconfigurable sensors – much like the antennae of insects – are dynamically adjusted to the environment and help to reduce the overall number of sensors needed. If sensors are seen as generating the input to the agent, then the output from the agent can be said to invoke actuators. Since all the agents described in this thesis inhabit virtual environments and do not need actual mechanical devices in order to undertake actions, actuators are simply algorithms that are triggered by the agent's internal mechanisms. A common output from a mobile agent is a vector that defines which way the agent moves. In the case of building agents, the output can also be a geometrical object along with additional values that define the transformation matrix of the placed object. The actuator algorithm then takes this object and these values, derives the transformation parameters from the values, and adds the object to the model geometry. The actuator algorithm can also modify existing objects in the model – change the position of a geometrical entity, for instance. In such a case, the object that is being modified has to be part of both the input and the output parameters. The process of mapping sensory input to actuator output is known as sensory-motor coupling.
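The two 2D detector types described earlier, collision and proximity sensors, can be sketched as follows. The segment-intersection test assumes general position (no collinear endpoints), which is acceptable for a diagrammatic sketch; function names and the point-obstacle simplification are illustrative assumptions.

```python
import math

def _orient(a, b, c):
    """Sign of the signed area of triangle abc (counter-clockwise test)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def collision_sensor(body, sensor, wall):
    """Binary (on/off) sensor: True when the visibility line from the
    agent's body to the sensor position crosses a 2D wall segment.
    Assumes general position; collinear cases are not handled."""
    p, q = body, sensor
    r, s = wall
    d1, d2 = _orient(p, q, r), _orient(p, q, s)
    d3, d4 = _orient(r, s, p), _orient(r, s, q)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def proximity_sensor(sensor, obstacle, radius):
    """Scalar sensor: distance to a point obstacle, or None when the
    obstacle lies outside the sensor's range."""
    d = math.hypot(sensor[0] - obstacle[0], sensor[1] - obstacle[1])
    return d if d <= radius else None
```

The binary sensor yields on/off input of the simplest kind; the scalar sensor lets the agent respond to more delicate differences, as discussed above.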
Sensory-motor coupling can be treated as a function of an agent that is ignorant of the agent's external environment and can almost be scrutinised in complete isolation from the rest of the model. The only thing that the coupling function shares with other modules is the array of parameters that is put through the agent – arguments that are received from sensors and passed forward to actuators. As mentioned earlier, these arguments can take various forms: real numbers, more complex objects with specific properties, or even arrays of objects. The main point here is that these arguments are somehow transformed, and the output is always qualitatively or quantitatively different from the input. There are several ways in which the internal transformation of input to output can take place. The simplest form of transformation is an arithmetical function that converts numerical input into numerical output. The output is quantitatively different from the input and can be mathematically expressed as follows: output = f(input). Although very simple, this method is ideal for creating simple feedback systems that can lead to complex outcomes. Probably the most common method of mapping input to output is using a switch statement. Switch statements explicitly define cases of what happens to the output when the input meets certain criteria. As opposed to the simple arithmetical transformation, the output can also be qualitatively different from the input. For example, if the agent is given a certain object then the output is 90°; however, if the agent encounters a different type of object, then the output could be 60°. To put that in context, this could mean that if the agent encounters a cube it tries to avoid colliding with it by turning 90°; in the case of encountering a cylinder, the turning angle can be slightly smaller. The switch statement method is a way of introducing heuristics into sensory-motor coupling, is relatively easy to implement, and is supported by the vast majority of programming languages. Recently, there has been a rise in the use of connectionist models for sensory-motor coupling in agents. Connectionism tries to model the intelligent behaviour of biological organisms by deploying artificial 'brains' – artificial neural networks (Pfeifer and Scheier 2001). There are several different architectures and functional principles of artificial neural networks, and most of these are well beyond the interest of this thesis. Although there is evidence that neural networks can work very well with agents (e.g.
Stanley, Bryant and Miikkulainen 2005), the design of such agents is considered too complicated and contradicts the objective of finding parsimonious solutions for circulation agents. However, the simplest of neural networks – perceptrons (Rosenblatt 1958) – are tested in the following chapter. In these experiments, each input sensor of an agent is connected to every actuator via a series of numerical weights. The sensory-motor mapping is a simple mathematical function where each output is the sum of all input values multiplied by their respective weights.
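The two coupling mechanisms discussed in this section can be sketched in a few lines of Python. Python has no literal switch statement, so a dictionary plays that role; the perceptron follows the weighted-sum description above, with a simple delta-rule update standing in for the weight adjustment. The 90°/60° values follow the worked example in the text; all other names and rates are illustrative assumptions.

```python
# Switch-statement coupling: a qualitative input (the type of object
# encountered) maps to a quantitative output (a turning angle in degrees).
TURN_RULES = {"cube": 90.0, "cylinder": 60.0}

def switch_couple(sensed_object, default_turn=0.0):
    return TURN_RULES.get(sensed_object, default_turn)

class Perceptron:
    """Single-layer perceptron: each actuator output is the sum of all
    sensor inputs multiplied by their respective weights."""

    def __init__(self, weights):
        self.weights = weights  # weights[j][i] links sensor i to actuator j

    def forward(self, inputs):
        return [sum(w * x for w, x in zip(row, inputs))
                for row in self.weights]

    def train(self, inputs, targets, rate=0.1):
        # Delta rule: nudge each weight to reduce the output error; this
        # is the weight adjustment that makes the coupling adaptable.
        outputs = self.forward(inputs)
        for j, row in enumerate(self.weights):
            error = targets[j] - outputs[j]
            for i, x in enumerate(inputs):
                row[i] += rate * error * x
```

The switch variant hard-codes the heuristics; the perceptron variant lets past interactions reshape the mapping, at the cost of less predictable behaviour.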

Although both the input and the output can only be numerical values, the sensor and actuator algorithms can interpret these values and convert them into acceptable formats. For instance, the activation function of a sensor can produce an input value of 1 or 0 depending on the collision with the model geometry. During the process of computing the output, the agent's internal parameters can be altered. In turn, this can influence how input is mapped to output in the future. In connectionist models, this learning mechanism simply involves the adjustment of weights, and can be used to train the network to respond appropriately to respective input. Single-layer perceptrons are relatively easy to train and make the agent's behaviour adaptable. The design of a circulation agent involves three main topics: how agents acquire information from the environment (input), what operations agents can perform, and what decision mechanism (behaviour controller) connects perceptions to actions. Each of these topics can be considered separately but, eventually, they all need to be collated in order to meet the objectives of the complete model. At the abstract level of input-output coupling, there is no difference whether an agent inhabits a 2D or 3D environment. However, dimensionality becomes a crucial aspect of design when one has to lay out the exact geometry of the agent's sensors. If there is no agreement between the environmental geometry and an agent's bodily traits, then there is little chance that the model will be useful. Therefore, in designing the agent, one has to know how information is distributed in the environment and how the environment can be changed, and have a view of what kind of circulation diagrams are expected as the outcome of the complete model.

5.2 Design of the environment

The notion of environment is frequently used by architects and urban designers interchangeably with the notion of built structures. The environment in the architectural design discipline is too often considered as the end product – the result of design and building processes. This approach treats the environment as some sort of object that can be analysed, prepared, constructed, and completed. Rather than being in a dynamic relationship with the activities taking place in it, the environment is seen as something that is defined prior to inhabitation.

For systems designers, the environment has a slightly different meaning. The environment is always seen with respect to the system and cannot be defined independently from it. In systems theory, the environment is a setting – physical or virtual – that produces system inputs and consumes its outputs (Keil and Goldin 2006). This means that the system acts upon information received from the environment, but the system is also capable of changing its environment. This dynamic feedback loop is the main generator of environmental complexity and is also a source of complex behaviour. This thesis supports the argument that architectural design solutions can be conceived following the principles of dynamic feedback, and that the built environment should be seen in the same way as in systems theory. In computer programming, the word 'environment' is also sometimes used to describe the hardware and software platforms on which agent-based models are executed. Keil and Goldin (2006) call this the execution environment, which is different from the application environment. The latter is the logical entity of the model that represents the space where agents perform their job (Keil and Goldin 2006). In this chapter, the word 'environment' always denotes the application environment. The environment is an essential part of multi-agent systems. For the purposes of this thesis, it plays a crucial role not only as the background for circulation agents but also as the facilitator of communication between agents and the storage for the generated circulation diagrams. Weyns et al. (in Keil and Goldin 2006) argue that the environment is independently active, provides a means for decentralised coordination, and acts as a shared memory for the agent colony. The environment is clearly distinguishable from agents only by the fact that it does not have objectives – it can be active but not proactive. Processes that take place in the environment happen blindly, without any specific goals.
Most of these processes tend towards greater entropy and are, in that respect, dissimilar to the self-organising processes that take place in agent colonies. Typical environmental processes are decay and diffusion, through which the structure of the environment tends towards a uniform distribution of energy and matter. Section 5.4 looks at these processes in greater detail. Although the environment is seen as a collection of entities without objectives, this is so only from the perspective of an observer who does not participate in the simulation. The environment of an individual within the simulation can contain not only mindless objects but also other goal-driven agents. An agent can perceive other agents directly as objects. Other agents – from the point of view of this particular perceiving agent – are then part of its environment. There are several ways to categorise the environments of multi-agent systems. Keil and Goldin (2006) propose a taxonomy that distinguishes multi-agent environments along three dimensions: physical versus virtual, persistent versus amnesic, and dynamic versus static. This taxonomy is perfectly applicable to circulation agent models as well. All prototypes and case study models in this thesis are executed in virtual environments. This is not only a matter of personal preference, but mainly because it is impractical (and presumably extremely difficult) to generate circulation diagrams with robotic agents. Architectural diagrams are abstract and do not need to be grounded in the physical environment. Therefore, the precision and level of detail required in building physical agent-based models is beyond what is needed for generating these diagrams. Persistent environments have a memory of past interactions and are capable of passing information from one agent to another. Circulation agents can use persistency in their favour and develop patterns of behaviour that are impossible in amnesic environments. All environments that support stigmergic communication are capable of preserving information. At the same time, over-persistent environments can work against the purpose of the model. This holds true in optimisation algorithms where the colony is supposed to find progressively shorter circulation paths (see section 6.5). In order to facilitate learning at the colony level, the environment needs to be able to slowly 'forget' past interactions. Dynamic environments, as opposed to static ones, change their configurations over time. Agents that inhabit dynamic environments have to be more adaptable because changing environmental configurations demand that agents change their behaviour.
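The two environmental processes named above, decay and diffusion, can be sketched on a scalar grid as follows. The rates, the von Neumann neighbourhood and the reflecting edges are illustrative assumptions; both processes push the field towards uniformity – decay lets the environment slowly 'forget', diffusion spreads information spatially.

```python
def decay(field, rate=0.1):
    """Exponential decay: every cell loses a fraction of its value,
    letting the environment gradually 'forget' past interactions."""
    return [[v * (1.0 - rate) for v in row] for row in field]

def diffuse(field, rate=0.25):
    """Each cell shares a fraction of its value equally with its four
    von Neumann neighbours (indices clamped at the edges, so boundary
    cells reflect their share back), smoothing the field towards a
    uniform distribution while conserving the total."""
    h, w = len(field), len(field[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            v = field[y][x]
            out[y][x] += v * (1.0 - rate)
            share = v * rate / 4.0
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny = min(max(y + dy, 0), h - 1)
                nx = min(max(x + dx, 0), w - 1)
                out[ny][nx] += share
    return out
```

In a pheromone-style model, one decay-and-diffuse pass per time step is what turns a persistent environment into a slowly forgetting one.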
All of the environments in prototype models (see the next chapter) where agents use stigmergic building algorithms can be classified as dynamic environments. Wooldridge (1999) adds another environment classification parameter to Keil and Goldin’s (2006) taxonomy – accessibility. This parameter allows Wooldridge to classify the environments of multi-agent systems into two large groups. In accessible environments, an agent can scope all the information in the model regardless of its position. This gives the agent a bird’s-eye view of its environment and allows it to take well-informed steps towards achieving its goals. However, it defeats the bottom-up nature of agent-based circulation models and hardly leads to the emergence of unpredictable diagrams or helps to progress bottom-up design thinking. As will be argued in further detail in Chapter 9, partially inaccessible environments are preferred for generating circulation diagrams. In these environments, agents can obtain information from their immediate neighbourhood without knowledge of the global structure or the whereabouts of their goals. The representation of the environment in virtual multi-agent models is no different from that of other digital models and is defined by the underlying spatial representation. There are essentially two distinct models of spatial representation available: continuous and discrete. Whereas the continuous model follows the classical way of representing space as known from conventional physics, discrete models are often used in relatively new areas such as quantum physics, and the rise of these models is closely related to the advent of computer technology (Kopperman et al. 2006). Depending on the spatial representation used, the environment can be said to be continuous or discrete. As demonstrated in the prototype and case study models later in this thesis, both types of environments can be successfully used in multi-agent models. The difference between continuous and discrete models can be illustrated with the following example. Given two points in a continuous model, an infinite number of further points can be fitted between them. In discrete models, on the other hand, the number of points is limited. Therefore, with respect to agent-based models, there is a finite number of actions and perceptions available to agents in discrete models (Wooldridge 1999). This also means that discrete models reduce the necessary calculations, and simulations executed in discrete environments are lighter in terms of the required computational power. 
Multi-agent models for generating circulation diagrams contain objects that provide the means for agents to interact with their environment. Details about possible interaction mechanisms are explored later in this chapter; it suffices here to say that agents either add or subtract objects or change the quality of objects in the environment. The representation of objects relates to the model of spatial representation; objects in continuous and discrete environments are represented in different ways. In discrete environments, objects exist as collections of discrete units of a particular quality. A discrete object consists of basic elements (pixels in 2D, voxels in 3D) that are distinguished from the rest of the environment by visual (e.g. colour) or non-visual (e.g. weight) properties. In continuous models, objects are defined as collections of basic geometrical entities such as lines, points and surfaces. The most common object representation in continuous 3D space is made of vertices, edges and triangles (or quadrangles) – also known as mesh type constructs (Shepherd 2009). Many agent-based navigational algorithms (most notably ant colony optimisation algorithms) use environment-mediated communication to coordinate the colony’s actions. Instead of directly modifying the model geometry, agents add markers – digital equivalents of smelly substances – to the model for way-finding purposes. Although this can all take place in continuous space, it is easier and computationally less expensive to execute the simulation in a discrete model where space is represented as a finite matrix of equally distributed markers. Instead of actually dropping markers, agents can modify some properties (smelliness in this case) of these markers. From the programmer’s point of view, this has several advantages. Firstly, agents do not add objects to the model and the model geometry remains unchanged. Hence, there is no need to use dynamic arrays of markers. Secondly, the sensory algorithm that picks up discrete markers is considerably simpler than its counterpart in continuous space. Thirdly, the matrix of markers provides a unique opportunity to pre-compute the environment and make some parts of it qualitatively different from others. This gives the programmer an additional means of controlling the model. Further details of this method are discussed in Chapter 9. Discrete models are topological models of space, at least with respect to how computer scientists use the notion of ‘topology’ (Kopperman et al. 2006). All topological models try to reduce the complex nature of continuous environments into simpler and computationally less expensive representations. 
A popular way involves reducing spatial features into a graph that preserves the topological structure of the environment. For example, space syntax uses topological models called axial maps for representing and analysing street networks (Penn 2001). Werner, Krieg-Brückner and Herrmann (2000) argue that route-based navigation using internal topological representations also happens in animals and humans. Topological models have also been used for way-finding and navigational purposes in agent modelling (Calogero 2008). The problem with topological representations is that they cannot easily be constructed in partially inaccessible or dynamic models, unless the representation is recomputed during the simulation.
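Returning to the marker matrix described earlier, the idea of modifying marker properties rather than adding objects can be reduced to a few lines. The following is an illustrative sketch (not code from any of the thesis prototypes); the class name, grid size and evaporation rate are invented for the purpose of illustration:

```python
# A sketch of a discrete marker matrix: instead of dropping marker objects,
# agents modify a 'smelliness' value stored at fixed grid coordinates, so the
# model geometry never changes and no dynamic arrays are needed.

class MarkerGrid:
    def __init__(self, width, height, evaporation=0.05):
        self.width, self.height = width, height
        self.evaporation = evaporation  # fraction lost per time step
        self.smell = [[0.0] * width for _ in range(height)]

    def deposit(self, x, y, amount=1.0):
        """An agent raises the marker value at its current cell."""
        self.smell[y][x] += amount

    def sense(self, x, y):
        """An agent reads the marker value at a cell."""
        return self.smell[y][x]

    def evaporate(self):
        """Environmental decay: every marker loses a fixed fraction."""
        for row in self.smell:
            for x in range(self.width):
                row[x] *= (1.0 - self.evaporation)

grid = MarkerGrid(10, 10)
grid.deposit(3, 4, amount=2.0)
grid.evaporate()
print(grid.sense(3, 4))  # → 1.9 (2.0 deposited, then 5% evaporated)
```

The evaporation step sketched here is the environment's 'forgetting' mechanism discussed in section 5.4.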

Besides pre-processed topological representations there are alternative ways to pre-compute the environment in order to help agents find their way. Processed topological models, such as space syntax’s depth map, provide a clue as to how local navigation can be facilitated in simple agents. Methods like cellular automata based diffusion can also be used to pre-compute relative distance values in the environment. This allows simple hill-climbing agents to find the shortest paths between two locations (see the Labyrinth Traverser prototype in the next chapter). Agent-based navigation algorithms in appropriately pre-computed environments require significantly less computation, and environmental pre-processing is therefore preferred in performance-driven models. However, it is not always a favourable solution. Pre-computing does not work well in dynamic environments where agents change the structure of their environment. In such models, if a change happens in the original representation of the environment it also needs to be replicated in the pre-computed representation. Keeping structural changes synchronised across both representations requires constant reprocessing which, instead of improving the speed, slows the simulation down. One significant design consideration that can help to reduce the need for computational resources is the size of the model space. Agents can inhabit a bounded ‘universe’ – the container that defines the outer limits of their environment. This keeps the colony together in a constrained space and facilitates coordination between agents, but can also produce unwanted artefacts in agents’ behavioural patterns. A well-known behaviour that mobile agents exhibit in tightly closed environments is boundary following. In order to avoid forced behavioural patterns, the ‘universe’ can be virtually infinite. 
However, this may lead to another problem – the colony can simply disperse in the unbounded environment and it becomes very difficult to observe emergent behavioural phenomena due to the loss of activity concentration. A way to solve both of the described issues – boundary following and the loss of focus – is to wrap the ‘universe’. A wrapped, toroidal environment with no edges is created simply by gluing each edge of the bounded environment to its opposite edge. This method helps to keep the size of the model small yet does not limit the freedom of agents’ movement. 
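The wrapping described above amounts to a modulo operation on agent coordinates. A minimal sketch, with the universe dimensions chosen arbitrarily:

```python
# A sketch of a wrapped ('toroidal') universe: positions that leave one edge
# re-enter from the opposite edge, so agents never follow a boundary and the
# colony cannot disperse beyond the model's bounds.

WIDTH, HEIGHT = 100.0, 100.0

def wrap(x, y):
    """Glue each edge of the bounded environment to its opposite edge."""
    return x % WIDTH, y % HEIGHT

def step(x, y, dx, dy):
    """Move an agent and wrap its new position back into the universe."""
    return wrap(x + dx, y + dy)

print(step(99.0, 50.0, 3.0, 0.0))   # → (2.0, 50.0): exits right, re-enters left
print(step(1.0, 2.0, -4.0, -5.0))   # → (97.0, 97.0): exits at the bottom-left
```

Python's modulo operator returns a non-negative result for negative inputs, which is what makes the one-line wrap sufficient here.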

5.3 Behaviour of agents

Intelligent behaviour is often thought of as something common to living organisms (Skyttner 1996, p. 185). However, behaviour can also be seen as a phenomenon that emerges from simple rules of interaction between an agent and its environment (Braitenberg 1986). In the latter case, intelligent behaviour is assigned to the agent by an external observer – the agent seems to behave intelligently because it seems to have a purpose. What may actually be happening is that the agent’s behaviour is defined by a small set of logical rules that are executed locally. This section explains what the basic algorithms for controlling agents’ movement (behaviour controllers) are, and gives an overview of related concepts for building agent-based circulation models. The simplest behaviour controller algorithms are probably attraction and repulsion algorithms. These algorithms operate with a single vector calculated by comparing the position of an agent with its point of attraction (or repulsion). The new position of the agent is calculated by adding this vector to (or subtracting it from) the agent’s current position, and the agent moves closer to the attractor or steps away from the repellent. Such a simple behavioural controller can generate surprisingly intricate movement patterns at the colony level. Consider a simulation with two types of agents: smaller agents that are attracted to the bigger ones, and bigger agents that are repelled by the smaller ones (see Figure 5.2). This ‘game of chase’ has a very simple set-up but it can display quite complex patterns of movement. The number of agents that take part in this simulation has a crucial impact on its progress. There is no interesting observable behaviour with only two agents in the simulation – these agents would resort to linear movement across their universe. The situation changes a bit if the contact-avoiding agent is slightly faster than its stalker. In this case, agents in a wrapped universe come to a stand-still. 
However, as soon as there are several agents taking part in the ‘game’, it becomes a lot more interesting. The movement of agents hardly repeats itself and equilibrium is reached much more slowly, if at all. There is also an interesting emergent behaviour amongst agents of the same type – they tend to form groups. Although one can easily explain why this happens by referring back to the basic attract-repel rules, it is quite difficult to foresee this behaviour prior to executing the simulation. And it is

even more difficult to predict the movement patterns when new attraction-repulsion behaviours are added to agents. For instance, if an additional rule is created whereby smaller agents repel other smaller agents, the simulation becomes increasingly dynamic. Smaller agents now form tight groups to encircle the closest big agent but then spread out to catch a new one if it escapes. For an observer who does not know the rules of the game it might even look as though the black agents coordinate the chase; their behaviour is reminiscent of that of some animal predators. The lesson learnt from the game of chase is that very simple bottom-up rules can be aggregated in order to generate interesting (and potentially useful) behaviour. This is also an underlying principle in the prototype models and case studies presented in this thesis.
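The attract-repel controller behind the game of chase can be sketched in a few lines. The vector arithmetic follows the description above, but the speeds, starting positions and function names are illustrative assumptions rather than the parameters of any actual simulation:

```python
# A sketch of the two basic controllers: a chaser steps towards its target
# (attraction), while the target steps away from the chaser (repulsion).
import math

def attraction_step(pos, target, speed):
    """Move 'speed' units along the vector from pos towards target."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return pos
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

def repulsion_step(pos, threat, speed):
    """Step away from the threat: attraction with the vector negated."""
    dx, dy = pos[0] - threat[0], pos[1] - threat[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return pos
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

chaser, runner = (0.0, 0.0), (10.0, 0.0)
for _ in range(5):                      # the runner is slightly faster,
    chaser = attraction_step(chaser, runner, 1.0)
    runner = repulsion_step(runner, chaser, 1.2)
print(round(runner[0] - chaser[0], 2))  # → 11.0: the faster runner keeps its lead
```

With only these two agents the movement is a straight-line pursuit, as noted above; the complex patterns only appear once several agents of each type interact.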

Figure 5.2: The game of chase. Smaller (black) agents are attracted to bigger (red) agents which, in turn, are repelled by the smaller ones. With a large number of agents, such a ‘game’ reveals some important mechanisms that lie behind the complex behaviours of simple agents

Slightly more intricate behaviour controllers are needed for another well-known method of navigation – hill-climbing (Dorigo, Maniezzo and Colorni 1996). Hill-climbing involves a comparison algorithm for choosing between inputs of the same type, and agents need at least two sensory positions in order to perform this routine. 

The controller algorithm converts the information received from different locations in the agent’s environment into a one-dimensional array that is linearly sorted by some parameter, and the agent then moves towards the point in the model that has the highest position in that array. An agent with two sensors can thus hill-climb the landscape of values until it reaches the local maximum – the highest point in its immediate neighbourhood. On its own, this behaviour controller is of little use for generating interesting movement patterns. It only becomes interesting when agents start to influence the landscape by changing the parameters that are used for hill-climbing. This initiates a dynamic feedback loop from which several interesting patterns of movement can be generated (e.g. see the Stream Simulator prototype in Chapter 6).
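The routine can be illustrated with a small sketch combining a pre-computed value landscape (in the spirit of the diffusion-based pre-computing mentioned in the previous section) with a hill-climbing agent. The grid, the wall layout and all names are invented for illustration; this is not the code of any thesis prototype:

```python
# A sketch of hill-climbing on a pre-computed landscape: the environment is
# first flooded with values that rise towards a goal cell, and a simple agent
# then repeatedly steps to the best-valued neighbouring cell.
from collections import deque

def flood_values(width, height, goal, walls):
    """Pre-compute the landscape: higher values lie closer to the goal."""
    values = {goal: width * height}          # the goal gets the maximum value
    queue = deque([goal])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nx < width and 0 <= ny < height \
                    and (nx, ny) not in walls and (nx, ny) not in values:
                values[(nx, ny)] = values[(x, y)] - 1
                queue.append((nx, ny))
    return values

def hill_climb(start, values):
    """The agent climbs the gradient until it reaches a local maximum."""
    path, pos = [start], start
    while True:
        x, y = pos
        neighbours = [(nx, ny) for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
                      if (nx, ny) in values]
        best = max(neighbours, key=values.get)
        if values[best] <= values[pos]:
            return path                       # local maximum reached
        path.append(best)
        pos = best

walls = {(1, 0), (1, 1)}                      # a small barrier in a 4 x 3 grid
values = flood_values(4, 3, goal=(3, 0), walls=walls)
path = hill_climb((0, 0), values)
print(path)  # a route from (0, 0) around the barrier to the goal (3, 0)
```

Because the landscape here is static, the agent simply finds a shortest route; the interesting dynamics described above arise only when agents also modify the values they climb.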

Figure 5.3: A swarm grammar – a branching diagram created by tracing agents using formal movement rules

In addition to hill-climbing and attraction-repulsion algorithms, which are all bottom-up navigational methods, agents can also have a predefined list of navigation rules. These formal rules can be indifferent to the agents’ surroundings, but the agent’s movement can still be context sensitive if some input parameters are received directly from the environment. Again, there are many ways in which the two different types of input – from the set of formal rules and from the agent’s sensors – can be combined. All these methods can be collectively termed swarm grammars (Von Mammen and Jacob 2007). The grammar rules usually define where agents turn or how fast they go, but also when new agents are created and what their initial orientation is. This kind of colony can be used for modelling growth systems similar to those created with L-system algorithms (see Figure 5.3). The algorithm that generates these complex-looking structures is actually very simple. It relies on the recursive replication of a new generation of agents at each step and has a set of rules for controlling an agent’s heading. The process starts with a single agent that takes a step forward, leaving a trail behind. The agent then ‘hatches’ several new agents. These new agents are given a predefined heading that is relative to their parent’s heading. All

agents then move forward and ‘hatch’ another generation of agents. The result is a highly ordered and repetitive branching diagram.
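The recursive hatching rule just described can be sketched as follows. The step length, shrink factor and branch angles are invented for illustration and are not taken from any thesis prototype:

```python
# A sketch of a swarm grammar: each generation of agents steps forward,
# leaves a trail segment, and 'hatches' children at fixed relative headings,
# producing an ordered, repetitive branching diagram.
import math

def grow(x, y, heading, depth, step=10.0, branch_angles=(-25.0, 25.0)):
    """Return the trail segments traced by a branching colony of agents."""
    if depth == 0:
        return []
    rad = math.radians(heading)
    nx, ny = x + step * math.cos(rad), y + step * math.sin(rad)
    segments = [((x, y), (nx, ny))]          # the trail left by this agent
    for angle in branch_angles:              # 'hatch' the next generation
        segments += grow(nx, ny, heading + angle, depth - 1, step * 0.8)
    return segments

trail = grow(0.0, 0.0, heading=90.0, depth=4)
print(len(trail))  # → 15 segments: 1 + 2 + 4 + 8 over four generations
```

Making the rules context sensitive, as in Figure 5.4, would amount to reading the step length or branch angles from the environment at each recursion instead of using fixed constants.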

Figure 5.4: A context-sensitive swarm grammar – the branch length is defined by the available amount of ‘light’

The same grammar rules can be used in conjunction with environmental probing. Every agent, while still following formal rules, changes its trajectory according to some local parameters (see Figure 5.4). As a result, the colony’s movement is highly structured and adaptive at the same time. Context sensitive algorithms can be quite sophisticated. If the diagrams generated by the swarm grammar algorithm are to be adopted for generating circulation systems, one can imagine an algorithm that detects all intersections of agents’ movement trails. Agents could stop moving as soon as they encounter an existing trail left behind by another agent. This breaks the symmetric and hierarchical nature of the generated diagrams. Swarm grammars can have very detailed rules as to how agents move (e.g. straight forward or in a zigzagging manner), when they ‘hatch’, how many agents are in each new generation, and how they behave when crossing existing trails. There can be a hierarchy of agents that all have their own rules of behaviour. Figure 5.5 shows the results of an elaborated swarm grammar where the rules of each generation can be defined via a graphical user interface. The user has control over several parameters and can even introduce randomness into the generated pattern by leaving some of these parameters undefined.

Figure 5.5: Swarm grammars with hierarchical rules


A related but fundamentally different swarm behaviour controller, invented by Reynolds (1987), is the well-known flocking algorithm. Flocking agents generate interesting dynamic patterns even in uniform and featureless environments. The whole idea is that the flock is its own primary environment, where each agent tries to re-position and align itself according to its closest neighbours’ locations and headings. The behaviour of simulated flocks is usually entirely deterministic – the seemingly chaotic behaviour is introduced by locating the agents randomly at the beginning of the simulation. Additionally, the flock can interact with its environment and give the observer the impression of intelligent behaviour (Carranza and Coates 2000). Both methods of controlling the behaviour of agent colonies – flocking and swarm grammars – are useful in studying the coordination mechanisms between agents. However, they are of little use to designers wishing to explore the dynamic development of circulation diagrams. The problem with these methods is that there is no immediate feedback between the agents’ behaviour and their environment. One can combine flocking and swarm grammar methods with simpler methods, but this leads to quite complicated behaviour controller architectures. Although simple methods such as hill-climbing do not necessarily involve feedback loops by definition, they are computationally inexpensive enough to be combined with environmental modification and environmental processing algorithms. This combination can create powerful generative feedback systems.
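Reynolds-style flocking can be sketched with three steering terms – separation, alignment and cohesion – applied to each agent's velocity. The weights, neighbourhood radius and integration scheme below are illustrative assumptions, not Reynolds' exact formulation:

```python
# A sketch of flocking: each agent steers relative to its neighbours only,
# updating all agents synchronously from the previous state of the flock.

def flock_step(agents, radius=10.0, w_sep=0.5, w_ali=0.05, w_coh=0.01):
    """agents: list of dicts with 'pos' and 'vel' as (x, y) tuples."""
    new_agents = []
    for a in agents:
        ax, ay = a['pos']
        neighbours = [b for b in agents if b is not a
                      and (b['pos'][0]-ax)**2 + (b['pos'][1]-ay)**2 < radius**2]
        vx, vy = a['vel']
        if neighbours:
            n = len(neighbours)
            cx = sum(b['pos'][0] for b in neighbours) / n   # flock centre
            cy = sum(b['pos'][1] for b in neighbours) / n
            mvx = sum(b['vel'][0] for b in neighbours) / n  # mean heading
            mvy = sum(b['vel'][1] for b in neighbours) / n
            vx += w_coh*(cx-ax) + w_ali*(mvx-vx) + w_sep*(ax-cx)/n
            vy += w_coh*(cy-ay) + w_ali*(mvy-vy) + w_sep*(ay-cy)/n
        new_agents.append({'pos': (ax+vx, ay+vy), 'vel': (vx, vy)})
    return new_agents

flock = [{'pos': (0.0, 0.0), 'vel': (1.0, 0.0)},
         {'pos': (5.0, 0.0), 'vel': (0.0, 1.0)}]
flock = flock_step(flock)
print(flock[0]['vel'])  # separation pushes the agents apart while
print(flock[1]['vel'])  # alignment nudges their headings together
```

Note that the update is deterministic, as the text observes: with the same starting positions the same patterns always emerge, which is why random initial placement is used to vary the outcome.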

5.4 Environmental processing

Environmental processing is an important part of most multi-agent circulation models. These processes define how the environment responds to the input from agents and – if programmed appropriately – facilitate the emergence of continuous yet flexible circulation diagrams. Environmental processing routines define what happens to the objects that are added to the model by agents, how these objects respond to local conditions, and how they change over time. In short, these routines characterise the behaviour of multi-agent environments. This thesis is interested in two distinct groups of processes: environmental decay processes and transformation processes. Whereas decay processes change non-geometrical 

substances that are left behind by agents, transformation processes influence geometrical objects that have been added to the model by agents. Decay processes include evaporation and diffusion algorithms that are extremely useful for generating dynamic circulation patterns. These algorithms also facilitate circulation network optimisation and the colony’s search for shorter routes. The concept is borrowed from real-world processes, where energy in the environment tends towards the equilibrium state and greater entropy. Transformation processes include several different routines that deal with the stability of objects in the model and the dynamic rearrangement of these objects when they are out of balance. The relationship of these methods to circulation may not be immediately apparent and requires some further explanation. Geometrical objects in agent-based circulation models can serve as barriers to or facilitators of agents’ movement. Therefore, the exact position of these objects plays a crucial role in defining where circulation can take place and where it cannot. Now, if transformation processes define where objects end up in the model, then these processes are indeed influencing the emergence of circulation diagrams. Among other tasks, decay algorithms control the distribution of substances dropped by agents into their environment. The simplest way of representing a substance is to give dedicated coordinates in the model numeric values that indicate the amount of substance available in any particular location. The change in distribution values can take place in several ways, but the two most common ones are diffusion and evaporation. Probably the most popular diffusion algorithm implements a cellular automata based method (Adamatzky 2001) where each discrete coordinate in space propagates a portion of its value to its neighbouring coordinates. 
Coordinates can also be programmed to lose the propagated portion from their original values, in which case substance values disperse in the environment, mimicking the diffusion of chemicals in nature. During the process of diffusion, gradients develop in the model. A gradient of values can be used by agents to hill-climb and reach the original insertion point of the particular substance. Diffusion also creates redundancy in the model, so that agents can detect values from a larger area and thus require less navigational precision. Evaporation algorithms can be seen as a variation of diffusion, with the exception that the portion of substance values is not passed on to neighbouring 

coordinates but simply eradicated. Evaporation plays the role of forgetting in the colony’s learning process. Without evaporation, values in the environment do not change over time and the colony is incapable of forgetting what it has once learned. The evaporation rate needs to be carefully calculated according to the size of the simulation, the number of agents in it, and the amount of evaporating substance available. Geometry transformation processing can vary from lightweight solutions to extremely sophisticated algorithms. Generally, these processes can be decomposed into two distinct packages of computation: stability computation and dynamics computation. The simplest geometry transformation processes solve stability issues heuristically and completely ignore dynamics. In order to compute dynamics, there is a large number of physics engines of varying accuracy and performance that all use Newtonian laws of motion (Bourg 2001). Full-fledged physics engines commonly calculate rigid body dynamics, but can also include soft and deformable body computation, or even computational fluid dynamics. Some of the engines are open source and available as code libraries, ready for anyone who wishes to include them in their computational models. The computational logic used by powerful physics engines can be very complicated and way beyond the scope of this thesis. However, some of the prototypes introduced in the following chapter use the functionality provided by an open source physics engine – the Open Dynamics Engine (Smith 2007). The problem with using powerful physics engines in combination with multi-agent systems is that both require substantial computational resources. Accurate calculations of rigid body dynamics – let alone soft bodies – can slow down the whole simulation and can make the process of finding suitable methods for agent movement unacceptably long. 
Although this may not be a problem at the production stage, it is a serious hindrance in building multi-agent prototype models that are to be deployed at the early stages of the design process, where the speed and agility of testing various methods is of utmost importance. In the latter case, one may need to choose a different approach for calculating the stability of structures that are built or modified by agents. Some of the prototypes presented in the following chapter deploy a qualitative method for calculating the stability of geometrical objects in the model. This method is lightweight compared to the simulated-physics approach because it uses heuristics rather than empirical mathematics. The main idea behind this method is that 

objects in the model can be glued together via dedicated sockets and connectors. According to the heuristic rules, an object becomes stable when it lies on the ground plane or is attached to other objects via at least two connectors. Additionally, the method can be extended to take into account the position of connections. In the latter case, the object is declared stable when its centre of geometry is within the bounding cube that includes the two (or more) connector points. This method has obvious shortcomings. Firstly, objects need to be fairly simple and have predefined sockets and connectors. Secondly, the algorithm does not work very well in continuous space because of the modular nature of the objects. The greatest advantage of the socket-connector method is its speed of execution, which makes it appropriate for use in multi-agent models. Agents can add objects to the model or shift existing ones around without a significant slow-down due to the stability calculations. The dynamics of objects are completely ignored or, alternatively, reduced to a highly simplified notion of gravity that makes unstable objects drop until a stable state is found. Another heuristic method, used in one of the prototypes in this thesis, is based on cellular automata computation. It takes place in discrete space where geometrical building blocks are represented as cubic cells. Each of these cells has a value that indicates its stability. Cells that rest on the ground plane, or are directly above such cells, have the maximum stability value. A portion of this value is then propagated to cells in the immediate neighbourhood of the stable cells. Once the values are propagated across the model, the stability of each cell is assessed individually and those cells that fall below a threshold value are simply removed from the model. Figure 5.6 illustrates the outcome of randomly created geometry that has been modified by the described algorithm. 
Similarly to the socket-connector method, the cellular automata based method is fast enough to be plugged into multi-agent simulations.
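The cellular automata stability heuristic can be sketched in two dimensions as follows. The maximum value, fall-off fraction and threshold are invented for illustration and do not correspond to the parameters of the actual prototype:

```python
# A sketch of the stability heuristic: grounded cells (and cells stacked
# directly above them) get the maximum stability value, a fraction of which
# propagates to neighbours; cells below a threshold are removed.

def prune_unstable(cells, max_value=1.0, falloff=0.5, threshold=0.2):
    """cells: set of (x, z) coordinates; z == 0 rests on the ground plane."""
    stability = {}
    # Seed: cells with an unbroken column of cells down to the ground.
    for (x, z) in cells:
        if all((x, k) in cells for k in range(z + 1)):
            stability[(x, z)] = max_value
    # Propagate a portion of each value to immediate neighbours until settled.
    changed = True
    while changed:
        changed = False
        for (x, z) in cells:
            for nb in ((x+1, z), (x-1, z), (x, z+1), (x, z-1)):
                if nb in cells:
                    v = stability.get((x, z), 0.0) * falloff
                    if v > stability.get(nb, 0.0):
                        stability[nb] = v
                        changed = True
    return {c for c in cells if stability.get(c, 0.0) >= threshold}

# A grounded column with a long unsupported side arm:
cells = {(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (3, 2)}
stable = prune_unstable(cells)
print(sorted(stable))  # the far end of the arm falls below the threshold
```

As in Figure 5.6, cells further from a supported column receive progressively smaller values, so long cantilevers are pruned while short ones survive.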


Figure 5.6: Stable structures as computed with the cellular automata algorithm. Darker cells are less stable than lighter ones

5.5 Agent-environment interaction

One of the biggest challenges in designing and building multi-agent systems that can generate meaningful circulation diagrams is to achieve an appropriate feedback system between the agents’ behaviour and the processes taking place in the environment. Often, it is not possible to define the feedback solely by using simple programming constructs such as conditional logic or control flow loops – feedback mechanisms should be inherent in the system’s architecture. The common logic is that environmental processes and agents operate with the same system variables. These variables can be geometrical objects or some abstract non-geometrical substances. Hence the key to successful agent-environment interaction lies in how and when these variables are initiated or modified from either side – by agents and by environmental processes. From the agents’ perspective, interaction with objects – geometrical or non-geometrical – can be divided into three groups. The first group can be termed displacement routines; these involve agents shifting objects around in the model. For example, displacement routines can be used for spatial sorting (Holland and Melhuish 1999). As a rule, objects that are shifted around are not geometrically altered but re-located. Related but slightly different are modification routines. As opposed to displacement routines, agents that perform a modification routine actually change the qualities of objects or the values of numeric (non-geometric) variables. A classic example of the modification routine is the ant colony optimisation algorithm (Dorigo, Maniezzo and Colorni 1996), where agents modify ‘pheromone’ values in the environment. The same group also includes routines where agents can modify the 

geometry of objects. The third group contains aggregation routines, where agents add or subtract elements to or from the model. These routines can operate with geometrical objects only. Many of the prototype models in the next chapter belong to that group. The key to constructing generative feedback systems is to find appropriate environmental responses to each of these interaction mechanisms. An environmental process can amplify the changes made to the model by agents, it can reduce and mitigate the impact of these changes, or it can be indifferent to the direction of change. In the first case – amplification – the whole system creates a positive feedback loop that, according to a popular cybernetic view (Beer 1974), may lead to catastrophic results in the real world. In a virtual multi-agent circulation simulation, the result is rarely so dramatic because one can easily introduce programmatic thresholds that prevent certain values from growing beyond acceptable limits. A simple positive feedback mechanism in the context of a multi-agent system is described in the following chapter (see the Stream Simulator prototype). Multi-agent systems that contain only positive feedback mechanisms quickly lead to fixed diagrams, and are therefore difficult to use as variety generators. Whereas in positive feedback loops the changes made by agents are amplified by environmental processes, negative feedback is achieved when environmental processes mitigate these changes. This scenario is more likely to produce interesting results and is also more true to processes in the natural world – the environment tends to fight back. In systems theory, this is called negative feedback and it keeps systems in balance (Skyttner 1996). An appropriately tuned multi-agent circulation model featuring negative feedback mechanisms can produce a constantly changing diagram. 
If a fixed diagram is needed for design purposes, one can simply stop the simulation and take a snapshot of the generated model. According to the third scenario, environmental processes are indifferent to the changes made by agents. This is usually the case when simulated physics is used to transform the objects added or moved by agents. Such a process can support agents’ goals but can also work against them – it depends on the particular situation. If agents have the ability to learn, they can start using environmental processes to their benefit. 
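The interplay between an amplifying agent action and a mitigating environmental response can be reduced to a numerical sketch. The deposit and evaporation rates below are invented for illustration and are not the parameters of any thesis prototype:

```python
# A sketch of coupled feedback on a single trail value: agents amplify the
# trail by depositing on it (positive feedback), while evaporation mitigates
# the change (negative feedback), so the value settles at a stable balance.

def run_trail(deposit=1.0, evaporation=0.1, steps=200):
    value = 0.0
    for _ in range(steps):
        value += deposit              # agents reinforce the trail each step
        value *= (1.0 - evaporation)  # the environment 'forgets' a fraction
    return value

# With both mechanisms active the value settles near deposit*(1-e)/e:
print(round(run_trail(), 2))                  # → 9.0, a stable balance
print(run_trail(evaporation=0.0, steps=200))  # → 200.0: positive feedback only
```

The second call illustrates the text's point about purely positive feedback: without a mitigating process the value grows without bound (or until a programmatic threshold intervenes) and the diagram quickly becomes fixed.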

There are several ways in which the agents’ interaction with their environment and the processes happening separately in the environment can be combined. One has to consider the purpose of the model and then choose appropriate components to pursue it. Although the right choice of feedback mechanisms is crucial, it does not automatically lead to a successful model. Take the example of an ant colony optimisation algorithm. The number of agents and the size of their ‘world’ have a great impact on the appropriate ‘pheromone’ evaporation rate. On the one hand, if the evaporation happens too quickly, the colony will struggle to capitalise on already-found paths between its ‘nest’ and the ‘food source’. On the other hand, if the ‘pheromone’ evaporation rate is too slow, the colony’s explorative behaviour is hindered and the shortest paths may remain undiscovered (see further explanation in Chapter 8). Therefore, the model needs to be viewed holistically and the parameters of independent algorithms have to be fine-tuned simultaneously. Although the agents’ behaviour and environmental processing have to work well together for the combined effect, there is a benefit in treating them as separate programmatic modules. Both modules access and share the same objects and data in the model, but can use different representations of space. For instance, the agents’ movement works best in the continuous representation, while objects in the model and the environmental processing can reside in the discrete representation. Whereas the latter helps to save computational resources, and the blocky nature (see Figure 5.6) of discrete models may be acceptable at the early stages of the design process, the continuous nature of movement usually requires the continuous representation. As long as the programming modules are sufficiently decoupled, there is no clash between the different representations. 
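The bridge between the two representations amounts to mapping a continuous agent position to a discrete cell index whenever the agent reads or writes the environment. A minimal sketch, with the cell size chosen arbitrarily:

```python
# A sketch of dual representation: agents move in continuous coordinates,
# while markers and objects live on a discrete grid addressed by cell index.
import math

CELL_SIZE = 2.5

def to_cell(x, y):
    """Continuous position -> discrete grid index (the lossy direction)."""
    return int(math.floor(x / CELL_SIZE)), int(math.floor(y / CELL_SIZE))

def cell_centre(i, j):
    """Discrete cell -> a continuous position, recoverable without data loss."""
    return (i + 0.5) * CELL_SIZE, (j + 0.5) * CELL_SIZE

print(to_cell(7.9, 0.4))   # → (3, 0)
print(cell_centre(3, 0))   # → (8.75, 1.25)
```

The asymmetry visible here is the one discussed in the text: a cell maps back to an unambiguous continuous position, but many continuous positions collapse into the same cell, so the continuous-to-discrete direction is where information can be lost.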
Objects in the discrete space can be used in the continuous representation without loss of data and require virtually no additional processing. However, when agents' interactions – adding objects to the model, for example – are translated from the continuous into the discrete spatial representation, some data can be lost. It is the duty of the programmer to make sure that the lost information has no impact on the overall behaviour of the model.

Dual representations of the environment are also used in way-finding agents that possess spatial memory. Spatial memory is the agent's internal representation, also known as the cognitive map (Kuipers, Tecuci and Stankiewicz 2003). Cognitive mapping is a well-known subject and was thoroughly explored by cognitive psychologists (e.g. Stea 1974; Downs and Stea 1977) several decades ago. Cognitive maps have also been used before in the context of agent-based modelling by Ramos, Fernandes and Rosa (2007). This section only skims the surface of the subject and the main body of cognitive mapping studies remains outside the interests of this thesis.

Cognitive maps are representations of the actual environment but are shielded from external processes. This does not mean that internal representations are static – they can be subject to decay and automatic reorganisation processes similar to those that take place in the agent's environment. The internal representation is normally developed during the agent's interaction with the environment, and these representations are useful in helping the agent to make navigational decisions. Such way-finding decisions are usually made by combining input received directly from the environment with data acquired from the cognitive map. Wherever there are incoherencies between the actual environment and the cognitive map, a learning-like mechanism is deployed to amend the latter for a closer match. The greatest advantage of cognitive maps is that they are fully accessible to the agent. Even if the information captured in these maps is incomplete, they offer a global view and can provide more useful navigational information than the agent's immediate surroundings.
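The learning-like amendment of a cognitive map towards a closer match with the sensed environment could, for instance, be sketched as a simple incremental update. The function, data layout and learning rate below are assumptions of this sketch only, not the thesis implementation:

```python
# Sketch: nudge the agent's internal map towards freshly sensed values.
# Where the map and the environment agree, nothing changes; where they
# disagree, the remembered value moves a fraction of the way towards
# the actual one (a minimal 'learning-like mechanism').

def update_cognitive_map(cognitive_map, sensed, learning_rate=0.3):
    """Amend remembered cell values towards freshly sensed ones.

    cognitive_map: dict of cell -> remembered value (agent-internal)
    sensed: dict of cell -> value just perceived from the environment
    """
    for cell, actual in sensed.items():
        remembered = cognitive_map.get(cell, actual)
        cognitive_map[cell] = remembered + learning_rate * (actual - remembered)
    return cognitive_map
```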

5.6 Communication in circulation agents

Communication in multi-agent systems is an extensive topic and can hardly be covered in a single section. Communication is a basic requirement of every healthy agent colony and an essential ingredient for coordinating the colony's activities. Unlike a single-agent system, a system containing more than one agent renders no additional value unless its agents can communicate with one another efficiently. However, not all kinds of communication are relevant for the purposes of this thesis. Communication in the agent colony is interesting from the perspective of generating circulation diagrams only if it has an impact on the environment.

Keil and Goldin (2006) highlight that there are two essentially different ways of using the environment for communication: trivial and nontrivial. The trivial use of the environment for communication involves direct message transport between agents. The environment in this case solely acts as a means of passing the message – after the message has been delivered, the structure of the environment remains the same as it was before. The message itself, however, can be influenced by the structure and layout of the environment. As opposed to direct messaging, agents can also use the environment to pass messages indirectly. According to Keil and Goldin (2006), this is a nontrivial use of the environment for communication, and it is closely related to the notion of stigmergy. In this thesis, the terms indirect communication and stigmergic communication are used interchangeably.

The main idea behind stigmergic communication is to use the environment as a message board. In order to communicate, agents have to leave messages on this board. This allows any agent in the colony – including the one that left the message – to come back to it at a later time. It also allows the agent to be amnesic – the agent itself requires no memory because all the information can be stored in the environment. The environment can contain several overlapping messages, and creating a new message can change existing ones. Additionally, older messages can be wiped clean by environmental decay processes. This enables the agent colony to rewrite messages and improve its communication; effectively, it allows the colony to learn.

Stigmergy is of special interest in this thesis because stigmergic agents have the ability both to perceive the environment and to change it. Most of the prototype and case study models in the following chapters use stigmergic principles to a degree. However, stigmergy is not so much seen as a way to exchange information between agents, but as a generative mechanism that leads to the emergence of circulation diagrams. Although one can treat direct communication as an isolated issue or as a layer that can be added to the multi-agent system, stigmergic communication is inherently a part of the agent's perceptual and behavioural mechanisms.
Stigmergy cannot and does not need to be addressed separately – it is embedded in the agents' sensory-motor coupling routines. Therefore, stigmergic communication can be viewed as a matter of the agent's internal design rather than something that happens between individuals in the colony. Stigmergic communication has two stages. Firstly, agents receive messages from the environment via sensors; this process was described in detail earlier in this chapter (see section 5.1). Secondly, agents change the environment by executing actuator functions; these functions were also covered in the beginning of this chapter together with the sensory mechanisms. As discussed earlier, sensory input can be converted into actions in various ways according to the agent's internal processes known as sensory-motor coupling. Stigmergic communication deals with questions of how the agent perceives and changes the environment, and how its perception is transformed into action. Hence, stigmergy is foremost an issue of the agent's sensory-motor design. Naturally, the environment also plays an important part in stigmergic communication as it carries the messages. However, decoding and encoding the messages can only happen internally within the agent.

There are two main reasons why stigmergy is the preferred method of communication in this thesis. For a start, it requires agents both to perceive and to change the environment – two essential issues in the architectural design discourse. Thus, stigmergy can be seen as a rule-based design methodology, which makes it particularly attractive for the purposes of this thesis. Stigmergic communication in an agent colony becomes the key shaper of the environment and a design driver of circulation diagrams. Stigmergy is also preferred because, compared to other communication and coordination mechanisms, it is a light-weight method of coordinating the colony's activities (Hadeli et al. 2004). Stigmergic communication does not require massive computational resources and – as long as the design of agents and their sensory-motor coupling is in place – it does not require the programmer to devise direct message protocols. Instead, the programmer simply has to define how the agent receives data from the environment, how it changes the environment, and how the sensory input is mapped to the motor output.

There are two different types of stigmergy: qualitative and quantitative (Bonabeau, Dorigo and Theraulaz 1999, p. 205). Both of these can be useful for generating circulation diagrams.
Qualitative stigmergy has been used in this thesis mainly in stigmergic building routines (see section 6.6) where agents add or remove building blocks to existing structures. Different configurations of building blocks can trigger qualitatively different responses from the agent. For example, encountering a single building block may cause the agent to add another block on top of the existing one. Facing the new configuration of two blocks may instead lead to the action of removing one of the blocks. These two actions are qualitatively different; hence, this is qualitative stigmergy in action.

Quantitative stigmergy, on the other hand, is usually deployed in path-laying and way-finding agent colonies (see the next chapter for a description). Albeit by definition quantitative stigmergy affects a quantitative property of the agent's action (Bonabeau, Dorigo and Theraulaz 1999, p. 108), the most common use in this thesis is the reverse: the strength of the signal from the environment has an impact on the agent's behaviour. This means that, depending on the local environmental conditions, the agent's behaviours are either evoked or inhibited.

As discussed above, stigmergic communication can lead to the emergent coordination of the colony's activities; it is coordination without a coordinator. Messages that are left behind can be seen as by-products of the agents' interaction with their environment. Indeed, agents do not need to be aware that they are communicating with other agents – they simply follow their own agenda and coordination can happen involuntarily. The success of generated circulation diagrams is largely dependent on how well the communication in the colony takes place.
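The evoking and inhibiting of behaviours by local signal strength can be illustrated with a minimal threshold sketch. The function name, the threshold value and the excitatory/inhibitory distinction as coded here are assumptions of this sketch, not the thesis implementation:

```python
# Sketch of quantitative stigmergy as used in this thesis: the local
# 'pheromone' strength does not select a qualitatively different action,
# it decides whether a given behaviour fires at all.

def behaviour_active(signal_strength, threshold=0.5, inhibitory=False):
    """Return True if the behaviour fires given the local signal.

    An excitatory behaviour is evoked when the signal exceeds the
    threshold; an inhibitory one is suppressed in the same condition.
    """
    evoked = signal_strength > threshold
    return not evoked if inhibitory else evoked
```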

5.7 The setting out configuration

The term setting out configuration is coined for this thesis in order to describe the generic parameters and settings of multi-agent simulations. Its origins can be found in the technical terminology of architectural design, where the setting out drawing is the basic drawing that defines the global position of the design and the key reference elements for the entire set of architectural drawings. In the context of this thesis, the setting out configuration is a group of settings that define the general layout of the scene where agent-based simulations take place, the target and source points for the agents, and the initial parameters and configuration of agents.

The setting out configuration is explicitly defined by the creator (programmer) of the multi-agent model, and the impact it has on the model's success cannot be overestimated. The setting out configuration provides the best means to influence the progress and control the outcome of the model. It sets the size of the model, defines existing objects in the model that agents can interact with, and pre-processes the environmental parameters in the model. The setting out configuration also defines where agents are first located, where their targets are (in case there are any), and where agents are repositioned once they have reached their targets. Last but not least, it defines the configuration of newly created instances of agents – a recorded image of the agent. This configuration is usually a formal description of the sensory-motor mapping rules which characterise the agents' initial behavioural patterns. Behavioural patterns can change during the simulation as agents interact with the model and learn new sensory-motor mapping rules. These rules can then be recorded into new configurations.

The configurational settings can be created and implemented in two essentially different ways. Fixed settings can be stored in computer files, whereas variable settings can be defined interactively by the user just before running the simulation. For example, a file can contain geometric information about the model's environment, while certain objects or parameters in the environment are initiated dynamically via the graphical user interface. Storing the setting out configuration in files allows one to come back to it at any time, whereas the interactive approach of keeping the settings temporarily in computer memory offers speed and flexibility for testing out different configurations.

Configuration settings for creating circulation diagrams can be divided into groups in several different ways. One way to categorise settings is to look at how agents are placed in the model and how many targets there are. Agents can all be given the same source, they can be started from several different sources, or they can be scattered across the model space. Additionally, there can be one or more targets for agents. A special case is a model without any targets; generated circulation diagrams, in this case, tend to be less controllable and more abstract. Different combinations of source and target point configurations lead to circulation network structures that match Haggett and Chorley's (1969) classification of networks into demand cases.
Single source and single target configurations generate paths, multiple source and single target configurations generate tree topologies, and multiple source and multiple target scenarios normally generate circuits.

Configuration settings can also be categorised according to the level of pre-processing that is carried out before the simulation is started. Pre-processing is an excellent way to control the simulation process, but it can also be very expensive in terms of computing power. Pre-processing is discussed in detail in Chapter 9.

There are a few standard configurational settings. Haggett and Chorley (1969) acknowledge capture models, interconnection models and colonisation models. Capture models start with an existing network structure and agents can only move along predefined paths. In interconnection models, agents explore potential links between the given nodes. In colonisation models, agents start from source points, but can explore space freely.
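The correspondence between source/target combinations and the demand cases after Haggett and Chorley (1969) can be summarised in a short sketch. The function name and return labels are this sketch's own, purely illustrative:

```python
# Sketch: classify a setting out configuration into a demand case by
# counting its source and target points, following the correspondence
# described above (path, tree, circuit; targetless as a special case).

def demand_case(n_sources, n_targets):
    """Map a (sources, targets) count to the expected network topology."""
    if n_targets == 0:
        return 'targetless (less controllable, more abstract)'
    if n_sources == 1 and n_targets == 1:
        return 'path'
    if n_targets == 1:
        return 'tree'
    return 'circuit'
```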


Chapter 6: Prototypes

Before looking at two case studies of how agent-based modelling can be utilised in the context of architectural design projects, this chapter introduces several prototype models. These prototype models are a step between understanding the basic components of agent-based models and using them for generating design solutions. For computational designers, prototype models are essentially sketching devices – proof-of-concept models that are modular enough to be used in larger simulations. One can achieve good results by combining prototype models only if one knows the principles of how the prototype models work. Prototype models can also be sketching devices for architects wishing to explore abstract models of movement and circulation.

Several of the prototype models presented in this chapter can be used to generate constructive diagrams – diagrams that contain both the form and the requirement diagram. In the case of circulation diagrams, the form diagram describes some topological or geometrical properties of a movement network, whereas the requirement diagram denotes the frequency of movement or the density of moving units. Although deployed out of context and without site-specific requirements, these architectural diagrams can still be informative and useful as design instruments. Diagrams, according to UNStudio's architects (Berkel and Bos 2006), are not representational, but instrumental in the production of new objects or situations. Similarly, the circulation diagrams discussed in this chapter should be seen as tools to create new spatial arrangements.

Agent-based circulation models belong to the wider group of bottom-up models, where solutions are developed over time and emerge from local decisions made at the level of the individual agent. Such models are a good way to explore patterns and general principles of movement, but they do not necessarily contain optimisation routines.
However, many of the prototypes presented in this chapter have elements of optimisation inherently built in. With respect to circulation in buildings and urban environments, it is often difficult to validate prototype models individually. This is because they are developed to observe a phenomenon or to deal with a particular aspect of circulation while neglecting others. However, these prototypes provide a way of validating the complete computational model at a later date. If a prototype model is observed to always produce diagrams of a specific quality, the same quality can often be observed and recognised in models that are based on this particular prototype. For example, if agents in a prototype model generate circuit network diagrams, they are capable of doing the same in production models – models that are used in the design process.

Execution platforms

The prototypes presented in this chapter have been built on several different programming platforms, ranging from professional software packages to more academic applications. Although the emphasis in this chapter is on the conceptual mechanisms for developing agent-based models, it is worthwhile to discuss the pros and cons of the software platforms that were used to build these experiments. Different development environments present researchers with different opportunities and difficulties that influence the way a prototype is implemented. As a consequence, the final computational model may be biased by particular programming habits. In theory, good and flexible programming languages should allow the programmer to choose between different programming styles and facilitate several ways of solving computational problems. In practice, however, a prototype is often built using the most readily available methods that are peculiar to a programming language or to a development environment.

In terms of the ease of learning a new programming language and development environment, NetLogo (Wilensky 1999) is probably the simplest one presented herein. NetLogo offers a simple graphical user interface (GUI) combined with a purpose-made programming language, specifically developed for exploring emergent phenomena in multi-agent simulations. There are two main simulation objects readily available to the programmer: turtles and patches. Whereas turtles are agents capable of moving around in the digital environment, patches – arranged in a grid layout – can be said to represent this environment. Both of these simulation objects have several built-in methods and properties at hand to facilitate agile prototype development. NetLogo's GUI makes it very easy to control simulation parameters at run-time and allows the observer to test different variable values quickly. A major shortcoming of this modelling environment is its embryonic architecture, which makes it very difficult to write extensive simulations.

Nevertheless, NetLogo is an exceptionally good platform for fast prototyping and for exploring how complex patterns can emerge from simple interactions between agents.

Another powerful development environment used for building prototypes of multi-agent simulations is based on Blender (Blender Foundation no date) – an open source 3D content creation and modelling software – and powered by the Python (Python Software Foundation no date) programming language. Unlike NetLogo, Python is a general purpose object-oriented language supported by a wide community of users. It comes with several standard libraries and, when needed, additional software modules can easily be introduced. One such module, used for gearing Blender up for agent-based simulations, is based on the OpenSteer (Reynolds 2004) libraries that were developed for simulating the steering behaviour of autonomous vehicles in gaming and animation. The possibility of adding external modules makes Blender/Python a powerful platform for carrying out complex simulations. However, the lack of integrated modules reduces the agility of prototyping and demands a well-organised approach to handling complicated simulations.

The rest of the prototypes presented in this chapter have been executed in the context of professional CAD applications (Autodesk's AutoCAD and Bentley's MicroStation), written in their integrated development language (Visual Basic for Applications – VBA). The benefit of running multi-agent simulations directly in CAD is that simulations can be carried out in an environment that is familiar to most architects and urban designers. This makes it possible to run simulations and use standard CAD tools almost in parallel, without having to convert the data when the generated diagrams are taken further into a more detailed design proposal. This also presents the opportunity of integrating different modelling methods seamlessly into a fluent workflow.
Since VBA provides access to the standard drafting elements in CAD, any generative design method can become a truly integrated tool. The drawbacks of this solution are mainly of a technical nature and include the speed of execution. Whereas the performance issue may not be obvious in small-scale simulations, it can pose serious limits when dealing with large colonies of agents or complex surface geometries.

When building agent-based simulations for architecture, different development environments have their own plusses and minuses and it is hard to give preference to any one of them. Each simulation has to be evaluated separately and a particular development environment should be selected according to the expected functionality. In general, it is advisable to first develop prototype models on simpler platforms that allow quick testing and rebuilding. Once the desired behaviour of the simulation is achieved, one can replicate the working algorithms on a more advanced platform. The purpose of the advanced platform is to achieve better performance, to offer more interactivity or to support better integration with the traditional design workflow.

6.1 Emergent path formation: Loop Generator

The Loop Generator is conceptually the simplest prototype in this chapter. Yet, for all its relative simplicity, it is capable of producing a significant variety of movement patterns. The algorithm that drives the prototype is inspired by the path formation behaviour found in social insects, particularly in termites (Bonabeau et al. 1998) and ants (Deneuborg, Pasteels and Veraeghe 1983). The use of software agents to mimic the behaviour of natural insect colonies is an obvious choice: if one can simulate the path-finding behaviour of insects, one can reproduce the path formation artificially. One of the most important concepts borrowed from the natural world and used extensively in this and the following prototypes is stigmergy – an environment-mediated coordination mechanism.

Unlike ants and termites, the agents in the Loop Generator prototype are not driven by the need to survive, but are simply locked into simple sensory-motor loops. These agents move around in the environment and are drawn to certain attractors or markers – artificial 'pheromone'. By moving around they alter the attractiveness of particular locations which, in turn, attract more agents. As a result of such a simple feedback mechanism, a colony of agents can produce looping movement diagrams – macro patterns that emerge from micro behaviours (see Figure 6.1). To put it differently: agents start chasing their tails, and generate continuous flows and even closed loops. These patterns manifest the continuity of agents' movement and are facilitated and constrained by their manoeuvrability and sensory configurations. The Loop Generator is by no means intended to be a plausible model of natural phenomena; the objective is to investigate a simple and robust prototype for generating flexible circulation diagrams that can be useful for architects at the conceptual design stage.

Figure 6.1: Typical movement patterns of simple reactive agents. Emergent trails form open and closed loops to facilitate the continuity of the agents’ movements

The simulation takes place in a bounded 2D 'universe' and agents are constrained to move within a user-defined rectangular area. The model space is continuous and there is a virtually infinite number of possible locations an agent can occupy. The environment, on the other hand, is represented as a discrete 'pheromone' grid which gives the model its idiosyncratic granular look. On initialisation, agents are placed randomly within the given bounds with random headings. At the same time, all 'pheromone' coordinates are initialised with a zero value. Each agent has a notional body and three sensors – one in front and two placed symmetrically on either side at a declared angle from the front sensor (see Figure 6.3). When an agent 'senses' the environment, each of its sensors receives the value of the closest 'pheromone' coordinate. Sensor values are compared and the sensor with the highest value wins. If there is a tie, the winning sensor is selected randomly from among the sensors with the highest value. Once the winning sensor is selected, the agent turns towards the sensor location and alters the environment by increasing the 'pheromone' value of the coordinate closest to its body. If the agent happens to reach the edge of the 'universe', it turns back towards the centre of the simulated world. In order to generate dynamic movement patterns, the environment slowly 'forgets' previous activity as the 'pheromone' evaporates.
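The sense-and-turn cycle just described can be sketched in Python (the prototype itself was built on Blender/Python). Grid indexing, parameter values and function names are invented for this sketch, and boundary handling and evaporation are left out:

```python
import math
import random

# Sketch of one Loop Generator step: read three sensors against the
# discrete 'pheromone' grid, turn towards the winning sensor (ties
# broken randomly), then deposit pheromone at the cell nearest the body.

def sense_and_turn(x, y, heading, grid, alpha=math.radians(45),
                   sensor_dist=2.0, cell=1.0, rng=random):
    """Return the agent's new heading; mutates grid with a deposit."""
    offsets = (-alpha, 0.0, alpha)          # two side sensors, one front
    readings = []
    for off in offsets:
        sx = x + sensor_dist * math.cos(heading + off)
        sy = y + sensor_dist * math.sin(heading + off)
        key = (round(sx / cell), round(sy / cell))  # nearest grid cell
        readings.append((grid.get(key, 0.0), off))
    best = max(r for r, _ in readings)
    # Tie-break: pick randomly among sensors sharing the highest value
    winners = [off for r, off in readings if r == best]
    heading += rng.choice(winners)
    # Deposit pheromone at the cell closest to the agent's body
    body = (round(x / cell), round(y / cell))
    grid[body] = grid.get(body, 0.0) + 1.0
    return heading
```

Coupled with an evaporation step on the grid, repeated application of this rule by many agents is what produces the reinforcing trails described above.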


The emergence of clear paths takes place over time and is worth closer observation (see Figure 6.2). First, random and fragmented strips of 'pheromone' trails start appearing all over the simulated world. As the simulation progresses, these fragments are joined into continuous trails of 'pheromone' that often feature closed loops at their ends. These loops enable agents to turn around and reinforce the path. Eventually, larger circuits emerge and clear paths of high 'pheromone' concentration appear. These paths, however, seldom keep the same shape for long and can travel slowly from one place to another.

Figure 6.2: A sequence of snapshots illustrates the development of closed loops in 2D

The Loop Generator prototype is built on the Blender/Python platform and the steering behaviour of agents is constructed with the help of the OpenSteer library for simulating vehicle movement. Agents are treated as vehicles with mass and are subject to inertia. This allows one to simulate realistic movement and prevents agents from turning too steeply at high speed. However, this alone does not lead to the emergence of continuous and looping paths, as agents can still simply revolve around a point in space at low speed. Another aspect of the agent's design is equally important – the position of its sensors. The simplest functional 'body plan' found has three sensors at a certain angle (α) from one another (see Figure 6.3).

Figure 6.3: The body plan of the 2D Loop Generator agent. α denotes the angle between the front sensor and a side sensor


Thus, the formation of continuous paths takes place for two reasons. The sensory configuration forces agents to move forward; agents cannot 'see' what happens behind them and are designed to move in the direction of a sensor. In this way, agents are given directionality. Simulated inertia prevents agents from making sudden turns. Given the directionality and simulated physics, it is quite simple to get the colony of agents to form paths. If the principles of sensory configuration or simulated physics are somehow violated, the result can be either a complete absence of movement patterns or very strong clustering. Figure 6.4 illustrates the possible outcomes of sensory modifications. The lack of any patterns occurs when sensors are placed too close to one another. The edge-following effect – observed in many agent-based models in this thesis – becomes evident with a slightly larger angle between sensors. Angles between 40 and 70 degrees lead to the patterns that are most useful in the context of generating circulation diagrams. If the angle gets even wider, agents start forming clusters. The size and number of these clusters depend not only on the actual angle value, but also on the number of agents in the simulation.
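As a rough summary of these observations, the relationship between the sensor angle and the resulting pattern can be sketched as a lookup. The regime boundaries below are approximate readings of the behaviour described above (and of the tested angles in Figure 6.4), not exact thresholds from the thesis:

```python
# Sketch: approximate mapping of the angle between front and side
# sensors to the pattern regime observed in the 2D Loop Generator.
# Boundary values are illustrative, not measured.

def pattern_regime(alpha_degrees):
    """Classify the expected emergent pattern for a sensor angle."""
    if alpha_degrees < 20:
        return 'no patterns'          # sensors too close together
    if alpha_degrees < 40:
        return 'edge following'
    if alpha_degrees <= 70:
        return 'circulation paths'    # most useful range
    return 'clustering'               # size also depends on agent count
```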

Figure 6.4: Tests with different sensory configurations – agents with 3 sensors. The angle (α) between front and side sensors (from left to right): 0, 22.5, 45, 67.5, 90 and 120 degrees

3D Loop Generator

The success of experiments with the two-dimensional Loop Generator motivated further development of the prototype in 3D space. The 3D version of the Loop Generator was built on the same platform (Blender/Python) with a few substantial changes to the original algorithm. In many ways, updating the dimensionality was trivial, but several interesting issues were discovered. Although some of these issues were already apparent in the 2D version, the third dimension amplified them. Adding another dimension made the prototype more sensitive, and getting it to work required more delicate fine-tuning of the key parameters. For example, the OpenSteer parameters for the mass and step length of agents had to be carefully chosen according to the spatial metrics of the 'pheromone' grid and to the agent's bodily metrics.

Figure 6.5: An agent’s ‘body plan’ in 3D – the development of minimal sensory configurations that produced continuous circulation diagrams

The design of a suitable sensory configuration for the agent turned out to be surprisingly non-trivial. Figure 6.5 illustrates the development of the configurations, where both the right number of sensors and their positioning had a crucial impact on the agent's behaviour. The first attempt to transfer the 2D configuration (see Figure 6.3) into 3D was simply to lower the side sensors and lift the middle one higher so that the agent could turn up and down as well. This, however, prevented the agent from moving in a straight line and caused excessive zigzagging. In order to stabilise the movement, a configuration of five sensors was tested. Although this configuration enabled agents to follow straight lines, they had trouble following existing 'pheromone' trails at steep turns. This impelled a search for a more agile design, which was finally achieved by doubling the number of sensors, placing one layer closer to and the other further away from the agent's body. Further tests with a larger number of sensors did not significantly improve the agents' navigational skills but made the execution of the algorithm slower. Hence, the double-layer configuration with 10 sensors was found to be optimal.

Another major change that had to be introduced to the original 2D prototype concerned the 3D equivalent of the edge-following effect – plane following. Plane following took place not only at the bottom and top layers of the 'universe' but also on other vertical and horizontal planes at the edges of the simulated world. This phenomenon was caused for the very same reason as edge following – agents' movement was constrained and the edge plane received a larger 'pheromone' concentration. In the 3D environment, circulation on the vertical planes was discouraged for obvious architectural reasons, and additional changes to the algorithm were therefore required. Rapid development methods were deployed in order to find ways to prevent the occurrence of this unwanted effect. Once a possible solution was conceived, it was immediately implemented and tested. After several unsuccessful attempts, two different solutions were found particularly useful. Firstly, agents were given the ability to emit less 'pheromone' once they encountered the edge of the simulated world. Secondly, the agents' capacity to drop 'pheromone' was associated with their speed. This change relied on the fact that agents had to reduce their velocity when turning back from the edge. Being capable of adding 'pheromone' only when their speed was sufficiently high prevented the 'pheromone' build-up close to the edges.

Figure 6.6: Generated 3D circulation diagrams

The development process of a 3D circulation diagram is not much different from that in 2D. However, circuits and loops tend to be much longer, and short, clear doughnut-like shapes occur less often (see Figure 6.6). Due to the complex nature of the 3D environment, agents lose their track at steep turns quite easily. Although agents maintain their explorative behaviour throughout the simulation, it occurs more often in the early stages of the process. The 'pheromone' trails keep changing to better suit the locomotive abilities of the agents, which eventually leads to the emergence of optimal circulation paths (see Figure 6.7). Whereas in 2D there are usually several closed loops of 'pheromone' trails, in 3D one large continuous circuit containing a few subloops is more common. This one big loop sometimes takes the shape of a twisted '8' that can be reduced to a single closed loop (see Figure 6.7) from which agents cannot easily escape.

Figure 6.7: A sequence of snapshots illustrates the development of closed loops in 3D

Both the 2D and 3D diagrams generated with Loop Generator are quite abstract, and it may be hard to see how they can be useful for an architect. However, one needs to bear in mind that the diagrams are generated in silico, outside any architectural context, and that the prototype is a theoretical mechanism to test whether the continuity of movement can be captured in a flexible diagram. The continuity of the diagram ensures that access is provided from any point adjacent to the circulation path to any other such point. The flexibility of the diagram allows the prototype to be combined with other computational and manual modelling methods to create useful architectural diagrams. Later in this chapter, there is an example where Loop Generator is plugged into another, more complicated prototype that involves feedback between circulation systems and the spaces that are accessible from the circulation (see section 6.6).

6.2 Channel network formation: Stream Simulator

This prototype is loosely based on the natural drainage channel formation mechanism described by Bridge (2003, p. 5-8). In order to initiate the channel network formation process, a few basic requirements have to be satisfied. Firstly, the landscape has to be erodible by water and at least gently sloping. Secondly, there has to be enough water around. If the water has enough gravitational power, it starts to erode the landscape locally. Progressive erosion and stream formation continue if the power of flow increases downstream (Bridge 2003). One can find several existing prototypes in the literature that have simulated such mechanisms at different levels of abstraction (e.g. Haggett and Chorley 1969; Willgoose, Bras and Rodriguez-Iturbe 1991).

The actuator in the Stream Simulator prototype is a simple agent following a hill-climbing procedure, and the channels are the trails of agents climbing downhill. The agent finds the quickest downhill course by comparing the heights of local neighbouring areas, eroding the landscape at the same time. Each following agent that chooses the same route reinforces the trails left behind by the first agent. This is a typical positive feedback model in which a patch of landscape that has been eroded has a higher probability of being eroded again. The result of such behaviour can be seen in Figure 6.8.

The Stream Simulator prototype is a highly abstract model and does not take many aspects of natural channel formation into account. For example, it ignores sedimentary transportation and concentration processes. Stream Simulator features no negative feedback, nor does it have an environmental repair mechanism that would make the prototype truly generative. However, the aim of this prototype is not to create an original and generative design model but to contribute to the Network Modeller prototype described later in this chapter. The main objectives are to study the behaviour of the prototype and to build a foundation for generative prototypes.

Figure 6.8: The input landscape (left) and the resulting stream channels (right)

An example of a computational model that simulates erosion and the formation of river channels can be found in the NetLogo models library (Dunham, Tisue and Wilensky 2004). Like this erosion model, Stream Simulator is built in NetLogo. Unlike the NetLogo model, which uses a cellular automaton approach, Stream Simulator uses mobile agents. The Stream Simulator prototype, akin to Loop Generator, features stigmergic feedback loops.

Users of the prototype can define the initial landscape by importing a previously prepared image where lighter areas represent hills and darker areas represent valleys (see Figure 6.8). The graphical user interface allows the user to control the size of the agent colony and to export snapshot images of the current state of the simulation. Conceptually, the algorithm behind the interface is a simple one. Agents are distributed randomly over the landscape and climb downhill. The slope of the landscape is defined locally by comparing the colour values of neighbouring patches. Once an agent moves to a new position, the patch underneath it is 'eroded' – its colour becomes slightly darker. The channel formation has begun, and if the slope of the landscape is sufficient, constant streams start to emerge (see Figure 6.9). Once in a while, all agents are redistributed across the landscape and the whole cycle repeats itself. In time, a tree-like channel network appears. Depending on the input image, the topology of these tree-like networks can vary in terms of the number of branches, but the type of the network always remains a tree. Except for the random initial distribution, the algorithm is purely deterministic and would lead to exactly the same result each time it is executed.
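The core of this algorithm can be condensed into a short sketch. The code below is an illustrative Python reconstruction (the actual prototype is a NetLogo model); the grid representation, constants and function names are assumptions.

```python
import random

def lowest_neighbour(height, x, y):
    """Steepest-descent choice: the neighbouring cell with the smallest height."""
    rows, cols = len(height), len(height[0])
    best = (x, y)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            nx, ny = x + dx, y + dy
            if 0 <= nx < rows and 0 <= ny < cols:
                if height[nx][ny] < height[best[0]][best[1]]:
                    best = (nx, ny)
    return best

def simulate(height, n_agents=50, rounds=10, walk_length=20, erosion=0.01):
    """Agents are repeatedly scattered over the landscape and walk downhill,
    'eroding' (lowering) every cell they step on - a positive feedback loop,
    since eroded cells attract subsequent agents."""
    rows, cols = len(height), len(height[0])
    for _ in range(rounds):
        for _ in range(n_agents):
            x, y = random.randrange(rows), random.randrange(cols)
            for _ in range(walk_length):
                x, y = lowest_neighbour(height, x, y)
                height[x][y] -= erosion
    return height
```

On a sloping input grid, repeated runs of `simulate` carve ever-deeper preferred channels, mirroring the stream formation in Figure 6.9.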

Figure 6.9: A typical progress of channel formation algorithm

The channel network formation does not necessarily have to be simulated with a colony of agents. A single agent can achieve qualitatively the same effect as 1000 agents. Tests with 1, 10, 100, 500, 1000 and 1500 agents (Figure 6.10) produce the same variety of networks as tests with 1000 agents (Figure 6.11). There is no notable difference in terms of network density, segment length or network shape (after Haggett and Chorley 1969). Testing the influence of the colony size on the network characteristics may easily lead to the opposite conclusion. The difficulty in comparing the results of simulations with different numbers of agents lies in recognising the network development stage. A smaller number of agents yields slower network development, which could give the impression of the emergence of a different kind of network. The only difference the colony size makes is the speed of execution: one agent takes much longer to form recognisable stream channels than 1000. This is because of the peculiar way the simulation has been set up – once all agents have executed their commands, all landscape patches execute theirs, and the algorithm that recalculates the landscape takes its toll. Thus, in simulations with a larger number of agents, the time spent running patch commands is proportionally smaller, and the network formation is faster.

Figure 6.10: Tests with 1, 10, 100, 500, 1000 and 1500 agents

Figure 6.11: All tests with 1000 agents

Stream Simulator is capable of generating tree-like networks but cannot produce other network topologies. This fact imposes some limits on how the prototype can be used in the context of architectural and urban design. While some building typologies such as hospitals, schools, prisons, smaller airports, harbour terminals and train stations accept or even require a branching circulation structure, others need more complex circuit networks. The key characteristic of a tree-like circulation system is the one-to-many relationship. There is a single point of convergence at one end of the circulation system: the entrance to or the exit from the network is the same for all users of the network. Naturally, there are no loops in such a circulation system, but one can still move back and forth in a section of the network without passing the point of convergence. Whereas branching networks work very well with some building typologies, such a circulation topology is generally discouraged in contemporary urban design. Tree-like street networks lead to low connectivity and low penetrability – two undesirable properties of any urban design scheme.

Stream Simulator can be combined with other prototypes in order to build more complicated and versatile generative design applications. The next prototype in this chapter – Labyrinth Traverser – can be efficiently plugged into the Stream Simulator code in order to produce diagrams for circuit networks. An example of how Stream Simulator is combined with the Labyrinth Traverser prototype is described in section 6.4. The Nordhavnen case study in the following chapter is also partly based on the Stream Simulator prototype.

6.3 Cellular automaton and hill-climbing agents: Labyrinth Traverser

The incentive for building this prototype was to develop an algorithm capable of finding the shortest path between user-defined points. The objective was to make it work in a complex digital environment with obstacles, for which the labyrinth served as a suitable metaphor. Like the previous prototype, Labyrinth Traverser was designed in NetLogo.

Figure 6.12: A labyrinth solved by Labyrinth Traverser


The Labyrinth Traverser prototype combines two agent-based techniques: hill-climbing and diffusion. In order to facilitate these two techniques, there are two types of agents. The first type is an immobile state agent, known as the 'patch' in NetLogo (a cell in cellular automata terms); the other is a mobile agent – NetLogo's 'turtle'. Topologically fixed patches are specified as either empty (white in colour – see Figure 6.12) or occupied (black). This defines where the mobile agents can or cannot move. A patch also contains data about the distance to each of the target points placed via the graphical user interface (red squares in Figure 6.12). This data is a value that is propagated from the target point to the closest patch and then from one patch to another. During this propagation, the passed value is lessened so that a gradient field of values is formed. Patches can gain the target data only from their immediate neighbours, which means that passing on the values can only happen locally. This cellular automaton type of diffusion mechanism has been modelled and described in detail by Adamatzky (2001, p. 11-17). A similar method has also been used in the Daedalus computer program for creating and solving mazes (Pullen 2007).

Figure 6.13: The gradient is computed with the diffusion method: the redness of each patch shows the proximity to a point in the labyrinth

Once the user has initiated a target point, the patch closest to the target gets the maximum value. By propagating a percentage of this value to its neighbours, the value then starts to diffuse over the landscape. Since the black 'occupied' patches neither receive nor propagate the value, the diffusion of values is influenced by the labyrinth layout (see Figure 6.13). Once the target value has diffused across the labyrinth, a mobile agent can find the target by simply climbing the resultant gradient. The agent can easily discover the shortest way to the target by comparing the propagated target values locally, without any overall vision of the labyrinth's layout (see Figure 6.14). This method is sufficiently robust to work in any kind of 2D layout in which a shortest route exists. This makes Labyrinth Traverser a practical asset in the computational designer's toolbox. Although the prototype is not a generative design application, it can be used in many different applications for generating circulation diagrams, and is particularly useful for optimising circulation networks. The prototype introduced in the following section presents an example where Labyrinth Traverser is combined with Stream Simulator. This combined prototype is capable of optimising network layouts and, in some cases, even generating minimal spanning trees.
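The two techniques, diffusion of target values and gradient climbing, can be illustrated with a compact Python sketch (the actual prototype is a NetLogo model; the grid representation and all names below are assumptions):

```python
def diffuse(values, walls, decay=0.9, iterations=100):
    """Spread a seeded target value over free cells; 'occupied' cells (walls)
    neither receive nor propagate it, so the gradient bends around obstacles."""
    rows, cols = len(values), len(values[0])
    for _ in range(iterations):
        nxt = [row[:] for row in values]
        for x in range(rows):
            for y in range(cols):
                if walls[x][y]:
                    continue
                for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1)):
                    if 0 <= nx < rows and 0 <= ny < cols and not walls[nx][ny]:
                        # receive a lessened value from each free neighbour
                        nxt[x][y] = max(nxt[x][y], values[nx][ny] * decay)
        values = nxt
    return values

def climb(values, walls, start):
    """Hill-climb the diffused gradient; the returned path ends at the target."""
    rows, cols = len(values), len(values[0])
    x, y = start
    path = [(x, y)]
    while True:
        free = [(nx, ny) for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1))
                if 0 <= nx < rows and 0 <= ny < cols and not walls[nx][ny]]
        best = max(free, key=lambda p: values[p[0]][p[1]])
        if values[best[0]][best[1]] <= values[x][y]:
            return path          # no higher neighbour: the target is reached
        x, y = best
        path.append((x, y))
```

Seeding a single cell with the maximum value and calling `diffuse` followed by `climb` reproduces, on a toy grid, the behaviour shown in Figures 6.13 and 6.14.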

Figure 6.14: A sequence of images showing the progress of the agent

The potential value of this prototype is in analysing layouts of buildings and settlements; Labyrinth Traverser with its easy-to-use interface can find and measure relatively accurately the distance between any points in a digital 2D environment. The prototype can also be used in order to build generative design applications in many possible combinations with other computational and generative techniques.

6.4 Network optimisation algorithm: Network Modeller

The Network Modeller prototype is a combination of two prototypes described earlier in this chapter: Labyrinth Traverser and Stream Simulator. In order to fully understand Network Modeller, it is recommended to read the previous sections that introduce these prototypes. In a way, the two prototypes are quite similar: mobile agents in both cases are simple hill-climbers. The difference lies in the way the input is fed into the prototype. Labyrinth Traverser relies on a field of values propagated by a diffusion-based algorithm from a user-defined point via mouse click; in Stream Simulator the input is fed into the program as an image and the field of values is extracted from bitmap colour values. The latter prototype also features a stigmergic feedback loop, while the former does not.

The two simple prototypes foster a more intricate one. In the Network Modeller prototype, agents, which still retrieve information only from their immediate neighbourhood, are not just hill-climbing the landscape of values. Instead, they execute a more articulated route selection mechanism. On their way, agents leave slowly fading 'pheromone' trails in the landscape (see Figure 6.15). They always try to get closer to their target but also attempt to follow the strongest 'pheromone' trails. Consequently, agents choose the most trodden path that takes them closer to their target.

Figure 6.15: A network diagram generated with Network Modeller

Similarly to the original prototypes, Network Modeller is built in NetLogo. It has a slightly more complicated graphical user interface, which gives the user better control over the simulation than its predecessors. Once the user has manually seeded the target points and the 'target' values have been propagated across the landscape, the simulation is started. Mobile agents are then randomly set to these user-defined points and assigned their target destinations. A mobile agent chooses its travelling direction by comparing the 'pheromone' values of patches within its local neighbourhood. All the neighbouring patches that have a lower 'target' value than the patch closest to the agent are omitted from this calculation. In other words, the agent chooses the patch with the highest 'pheromone' value from the set of patches with a high 'target' value. If the 'pheromone' value of two or more patches is equal, the choice is made randomly among these patches. The prototype optimises the network of paths because 'pheromone' trails in the environment evaporate – the environment can forget old paths, and new, shorter paths emerge. Dropped 'pheromone' also diffuses in the environment, which is again useful for optimisation: diffusion creates less sharp gradients of 'pheromone' and enables more delicate navigation. Whereas Labyrinth Traverser and Stream Simulator are not truly generative models, since neither features negative feedback, the negative ('pheromone' evaporation) and positive (stigmergy) feedback loops definitely make the Network Modeller prototype a generative one.
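The route selection rule can be paraphrased in Python (an illustrative sketch, not NetLogo code from the prototype; the tuple representation of neighbouring patches is an assumption):

```python
import random

def choose_patch(neighbours, current_target_value):
    """neighbours: list of (patch_id, target_value, pheromone_value) tuples.
    Discard patches with a lower 'target' value than the agent's current patch,
    then pick the strongest 'pheromone' among the rest; ties break randomly."""
    closer = [n for n in neighbours if n[1] > current_target_value]
    if not closer:
        return None                          # nowhere to go that is closer
    strongest = max(n[2] for n in closer)
    return random.choice([n[0] for n in closer if n[2] == strongest])

def update_pheromone(field, evaporation=0.02):
    """Negative feedback: every patch loses a fraction of its 'pheromone',
    so untravelled paths are gradually forgotten."""
    for row in field:
        for i in range(len(row)):
            row[i] *= (1.0 - evaporation)
```

Note that a patch with the overall strongest trail is ignored if it leads away from the target, which is exactly what keeps the agents goal-directed.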

Figure 6.16: Examples of network diagrams generated with the same configuration of target nodes. The variety of diagrams have been achieved with different control parameters

Besides defining the network target points and the input image, there are a few important parameters that the user of Network Modeller controls. Firstly, the user has to decide the number of mobile agents participating in the simulation. The optimal number of agents is difficult to determine since it depends on so many other aspects of the simulation: the number of target points, the distance between these points, the size of the 'universe' and so on. The user can manipulate the size of the agent colony at runtime and so influence the development process of the network. An important user-controlled parameter is the amount of 'pheromone' dropped into the environment by agents. Larger quantities lead to the quick establishment of strong paths early in the simulation, which hinders the optimisation of network length; too small a 'pheromone' drop rate prevents the formation of a stable network.

The other two parameters that can be manipulated during the simulation define how the 'pheromone' behaves once it has been left behind by the agents. As described above, the 'pheromone' diffuses and evaporates in the environment. The rates of both of these processes are exposed to the user via the graphical interface. Whereas larger diffusion rates are helpful in forcing two adjacent paths to join into a single path, smaller rates yield clearly defined paths. The evaporation rate, on the other hand, can be effectively manipulated to regulate the speed of optimisation. Higher evaporation rates can cause rapid changes in the network shape and topology; smaller rates usually lead to the quick formation of a steady network.

All of the control parameters influence the simulation process to a greater or lesser extent. These parameters control different aspects of the simulation and need to be considered in parallel. The 'pheromone' evaporation rate, for example, needs to be chosen in proportion to the number of agents in the simulation – the more agents there are, the lower the required rate. Although the effect of each parameter can be defined individually, the concurrence of different parameters is difficult to foretell. The actual values of these parameters have little significance – what is important here is to understand how the user can change the course of diagram development and which way the parameters need to be tuned in order to achieve a desired effect. This process is further explained later in this section. Figure 6.16 illustrates the variety of network diagrams that have been generated in a single simulation.
The ability to manipulate some control parameters at runtime makes Network Modeller a flexible and interactive tool that could potentially be used for generating movement diagrams for buildings and urban areas.

Figure 6.17: A typical process of network formation and optimisation


Without user interaction, the process of network development is typically fast even with a relatively large number of agents and targets (tests with 2000 agents and 25 targets were carried out). An example of the development cycle is demonstrated in Figure 6.17. At the beginning of the simulation the agents have no trails to follow and take the shortest course to their targets. Before they even reach their targets for the first time, they may become attracted to trails laid out by agents travelling on a different course – the optimisation process has started. Once an agent has reached its goal, it is assigned a new one. However, by then the 'pheromone' landscape has changed and the agent executes the more complex navigation algorithm described earlier. The positive feedback mechanism makes its contribution to the process and certain paths are quickly reinforced. The network is now in its most connected shape, often even having two separate routes between two target points. But soon the effect of 'pheromone' evaporation takes its toll: certain paths gain more traffic and become dominant, while weaker paths dwindle and may eventually disappear completely. At this late stage the network has become stable and, assuming that the control parameters are not changed, it can be declared fully developed.

Tests with a low number of target points show that the resultant path networks often resemble minimal spanning trees. Figure 6.18 displays several different trees generated with the prototype. Although with a low success rate, Network Modeller can be classified as a heuristic method for finding minimal spanning trees. In some cases, the network takes the shape of a Steiner tree – a minimal spanning tree featuring emergent network nodes: Steiner vertices (Herring 2004). The formation of minimal spanning trees is not guaranteed with Network Modeller.
The prototype should be considered as a computational assistant of optimised network design rather than a solver that always converges to a single optimal solution.

Figure 6.18: Recognizable minimal spanning trees generated with the network optimisation algorithm


A user can interrupt the typical development process of a network diagram in several ways. The process can be sped up by reinforcing the half-developed structure. This can be done during the simulation by increasing the agents' 'pheromone' drop rate or decreasing the 'pheromone' evaporation rate. One can also encourage more explorative behaviour of agents, and further development of the network, by decreasing the 'pheromone' drop rate, reducing the number of participating agents, or increasing the evaporation and diffusion of 'pheromone' at runtime. Ultimately, the responsibility for finding appropriate parameters lies with the designer. Suitable diagrams can be easily generated through trial-and-error experimentation in which parameters are changed and visual feedback is obtained at runtime. The knowledge gained through real-time interactive play is a key to success – knowing the rules of thumb of how the prototype behaves is more valuable than knowing the exact parameters that lead to a specific result.

The ability to freeze the diagram at a certain stage of development, or to promote an alternative development, is a useful feature of Network Modeller. It allows designers to generate a wealth of circulation diagrams that are more or less optimised for travel or build cost. In Network Modeller, the optimisation is implicit and can be controlled via the parameters outlined earlier in this section.

Network Modeller is a step from a simple agent-based prototype towards a computational application that has the potential to become deployable in an architect's office. It satisfies several requirements that are often demanded of such an application: Network Modeller can output a variety of diagrams, responds promptly to user interactions and is relatively transparent. Chapter 7 presents a case study where Network Modeller is developed further and deployed in the context of professional design work.
Although some modifications to the prototype are introduced in order to make it more suitable for the particular requirements of the case study, the essence of the prototype remains the same. One of the modifications introduced later is context sensitivity. This has been deliberately omitted from Network Modeller to keep it a pure prototype – having a context would make the prototype a special case.

6.5 Way-finding agents and ant colony optimisation

Ant colony optimisation (ACO) is a computational method for finding near-optimum paths in networked graphs. There are several well-known ACO algorithms, and the field of application varies. Typical ACO implementations deal with network optimisation problems such as the travelling salesman problem (Dorigo, Maniezzo and Colorni 1996) and minimum spanning trees (Neumann and Witt 2008). A more detailed overview of ACO methods and algorithms can be found in Chapter 8. The prototype described in this section is loosely based on an ACO method and is combined with additional way-finding methods and evolutionary optimisation techniques. This way-finding prototype is programmed in AutoCAD's integrated development environment using the VBA programming language. It has earlier featured in a paper by Puusepp and Coates (2007).

The objective of building this prototype was to study how agents can develop a way-finding mechanism that helps the colony navigate between two points in a digital 3D model with the help of an internal representation of the model. As the agents were designed to receive some sensory information directly from the model, it was hoped to find out which spatial layouts facilitate or hinder the way-finding process in this multi-agent system using stigmergy. In order to keep the number of inputs from the environment low, a method of storing sensory-motor rules was proposed. The only stimulus an agent receives directly from the environment is the collision detection of surfaces in the digital 3D model. Most navigational decisions taken by the agents are based on a reference map. For the agent colony, this reference map can be seen as an internal representation of the environment. While the map evolves during the simulation, the correct rules to interpret the map develop in sync with it. Both the reference map and the set of interpretation rules are developed from scratch using a trial-and-error reinforcement learning methodology (Pfeifer and Scheier 2001, p. 490-493). The goal of the agents is to find a way to a target point while learning to interpret sensory input and altering their internal representation of the environment. Successful agents are rewarded by upgrading their value; the value of an unsuccessful agent is reduced to the minimum.

This prototype does not directly relate to any theory of human way-finding because one of the principles was to keep the prototype simple. Human way-finding processes are far more complex than those in the proposed prototype (Chase 1982; Timpf et al. 1992). However, the proposed prototype does borrow the notion of internal representation that originates from the early thinkers of the cognitive mapping approach to way-finding – from Tolman and later Piaget (Chase 1982).

Figure 6.19: Input-output coupling. The agent obtains input from the digital 3D model and from the reference map. Motor output is generated interpreting input according to syntactical rules

Agents use a combination of a computational reference map (the agents' internal representation of the environment) and a set of rules that determine how that map is interpreted (see Figure 6.19). When an agent is first created, it has no rules stored in its memory – the evolution of the reference map and the development of rules take place over time as the agent explores the digital environment. The initiation and fade-out of data in the map is computed using the pheromone trail algorithm: the information that is fed to the reference map when agents explore the 3D model disappears gradually. The interpretation rules have to develop and change according to this dynamic map. The pursuit of targets is additionally facilitated by trivial vision – if the line of sight between the agent and its target is clear (does not intersect the geometry of the 3D model), the agent takes an automatic step towards the goal.

Besides individual learning, the development of interpretation rules also takes place at the phylogenetic level. The evolution of agents is similar to the evolution of Braitenberg's vehicle type 6 (Braitenberg 1986, p. 26-28), where a single agent – but not necessarily the best-performing one – is chosen for reproduction. However, in the process of reproduction only 75% of the agent's interpretation rules are transmitted to its offspring.

Figure 6.20: Sensory input. Sensors acquire their value from the environment and from the corresponding location on the reference map.

The design of the agent is fairly simple. The agent consists of a central 'body' and six sensors attached to it, forming three axes of symmetry. The consideration behind this hexagonal design is to give agents sufficient liberty of motion while retaining symmetry, thus leaving undefined which side is the front or the back. The sensors are combined into three identical axes, which have a major influence over the activation function (see Figure 6.20). Each sensor axis has four possible states: two polarised states (only one sensor active), both sensors active, and both sensors passive. The three sensor axes together yield a sensory space of 64 possible input combinations. The output space, or motor space, is much smaller, containing only six possible directions. The agent can take only one step at a time and cannot combine different directions. Although the motor space is relatively small, the mapping of inputs to outputs results in 384 different combinations. The agent's behaviour is simply controlled by a list of input-output matches – the interpretation rule set (see Figure 6.21). In addition to these rules, the agent possesses some persistence in its actions – if the sensory input appears to be unfamiliar, the agent continues in the previously chosen direction. This helps the colony to maintain its explorative behaviour.
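A minimal sketch of such an interpretation rule set, together with the persistence fallback and the 75% rule inheritance, might look as follows in Python (the original prototype is written in VBA; all class and function names here are assumptions):

```python
import random

class RuleAgent:
    """Maps one of 64 possible sensor states (three axes, four states each)
    to one of six movement directions via a learned interpretation rule set."""

    def __init__(self):
        self.rules = {}            # (axis1, axis2, axis3) -> direction 0..5
        self.last_direction = 0

    def act(self, sensor_state):
        """Apply the matching rule; on unfamiliar input, persist in the
        previously chosen direction (keeps behaviour explorative)."""
        direction = self.rules.get(sensor_state, self.last_direction)
        self.last_direction = direction
        return direction

def reproduce(parent, fraction=0.75):
    """Offspring inherits only 75% of the parent's interpretation rules,
    maintaining the explorative diversity of the population."""
    child = RuleAgent()
    keep = random.sample(sorted(parent.rules), int(len(parent.rules) * fraction))
    child.rules = {k: parent.rules[k] for k in keep}
    return child
```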

The reference map is a hexagonal network of interconnected nodes. Each node contains one or more values. If an agent moves in the environment, corresponding values on the map are adjusted according to the success of the agent. A node is just a passive piece of information – agents gain meaningful information by comparing the adjacent nodes.

Figure 6.21: Interpretation rule set: red circles show activated sensors, the arrow shows the resultant movement direction. Different agents have different rules to map inputs to output. 75% of these rules are passed to ‘offspring’ to maintain explorative diversity of the population

The progress in the agents' behaviour and the development of the reference map and the interpretation rules tend to follow a standard pattern. As the first meandering agent finds its way to the target, all nodes on the reference map that the agent has stepped on are positively adjusted. This kind of learning technique, based on a long-term rewarding system, is classified as reinforcement learning. When the next agent tries to repeat the same route, it may turn out that the interpretation rules no longer match the changed reference map. Thus, new rules have to be invented to 'read' the modifications made by the first agent. If the next agent is capable of finding the target in the slightly changed situation (with positively adjusted nodes), it reinforces the reference map, and appropriate rules have been stored to interpret it.
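The reward mechanism can be summarised in a few lines. The sketch below is an illustrative Python paraphrase of the reinforcement step (the actual prototype is in VBA; names and magnitudes are assumptions):

```python
def reinforce(reference_map, visited_nodes, success, reward=1.0, penalty=0.5):
    """Long-term reward: after an agent finishes its run, every node it
    stepped on is adjusted up (target reached) or down (target missed)."""
    delta = reward if success else -penalty
    for node in visited_nodes:
        reference_map[node] = reference_map.get(node, 0.0) + delta

def fade(reference_map, rate=0.05):
    """Pheromone-style decay: information fed into the map gradually fades,
    so the interpretation rules must keep up with the changing map."""
    for node in reference_map:
        reference_map[node] *= (1.0 - rate)
```

The decay step is what forces the interpretation rules to co-evolve with the map: a route that is no longer travelled slowly disappears from the colony's internal representation.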

Figure 6.22: Way-finding in corridor-like layouts. All shown tests were successful as the colony was able to learn the route between two points. The reference map is laid on top of the layout of digital model


Certain features of the digital environment facilitate way-finding. It is not always the shortest route that becomes the most popular – it is usually the most suitable route for the particular kind of agents. The complexity of the environment can be easily assessed by the time it takes the agent colony to navigate through it. The time consumed is usually proportional to the number of changes in the direction of motion that agents have to make on their way to the target. Some routes are more difficult to learn as they require intricate sequences of such changes. For example, the U-turn is easier for agents to pass through than the S-curve (see Figure 6.22).

Figure 6.23: Way-finding in quasi-urban layouts. Environmental features play a crucial role in the competition between popular routes. It is not always the shortest route that is preferred by agents

A few interesting behavioural phenomena can be pointed out:

1. Agents acquire different techniques to move around in the environment. Some of them try to keep away from environmental obstacles; others, in contrast, develop a 'wall following' tactic.

2. Some agents tend to travel to locations where they have clear visual contact with their target, without actually getting closer to the target (see Figure 6.23).

3. If a route to the target has been found, some agents still keep exploring and finding other ways than the established one. The first discovered route does not necessarily become the most used one (see Figure 6.23).

4. Unplanned competition between agents with different targets occasionally takes place. The nature of the pheromone trail algorithm prevents agents from using the same trail in both directions. One-way 'traffic' tends to force the other way out.

6.6 Stigmergic building agents

This section explores agent-based methods that can be collectively termed stigmergic building algorithms. As opposed to the previous prototypes in this chapter, stigmergic building algorithms allow the agent colony not just to alter values in the environment for navigational purposes but also to modify the geometrical properties of its surroundings. The experiments described in this section were built over an extended period of time and represent a collection of loosely coupled methods rather than converging into a coherent prototype. The methods presented here are primarily concerned with the rules of how an agent interacts with its environment in terms of perceiving and changing the geometry of that environment. However, these methods can be combined with other algorithms governing the movement and navigation of agents, resulting in complex dynamic models. The aim of such experiments is to investigate the suitability of agent-based techniques for creating generative models featuring dynamic feedback loops between circulation and the geometrical configuration of the environment.

Stigmergic building algorithms were first explored by Theraulaz and Bonabeau (1995) and later elaborated by Bonabeau et al. (2000). These authors developed a method that mimics the building behaviour of social insects and achieved remarkable results in replicating insect nest architectures found in nature. The prototypes in this section build on some of their methods and interpret these in the context of architectural design. All prototypes in this section have been developed in the Python programming language and run in Blender. The Blender/Python platform is well suited for resolving the level of complexity involved in stigmergic building algorithms and provides sufficient speed of execution, yet maintains the agility needed for prototyping.
In order to explore the potential of stigmergic building methods and discover some likely issues associated with related algorithms, the first experiment is solely aimed at figuring out the rules by which an agent can add geometry to an existing 3D model. The algorithm described subsequently ignores the notion of circulation and the agent is simply given a linear movement vector that propels it in a fixed direction at a constant speed. On its way the agent encounters geometrical objects and is instructed to react to these by placing new objects into the 3D model.

The agent – not much different in its appearance from the 3D Loop Generator agents (see section 6.1 and Figure 6.5) – possesses sensors that can detect collisions with existing objects in the model, and has a set of rules for adding new objects. The agent can place a number of predefined 3D objects into its digital environment. All of these objects are rectangular solids derived from the primitive cube. When the agent receives information from the environment via its sensors, it compares the input to a predefined look-up table. The rules in this look-up table define which type of object is selected for placement and also prescribe the rotation and the location of the newly added object in relation to the agent's position at the time of placement. Once the object is added to the model, it becomes subject to the physics simulation and interacts with objects that already exist in the model.
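The sense-then-place mechanism described above can be condensed into a minimal one-dimensional sketch. The column-based world encoding and the single rule below are illustrative assumptions made so the loop is runnable; the thesis prototype operates on 3D solids under simulated physics in Blender.

```python
# A minimal 1D sketch of the look-up-table mechanism: the agent moves
# forward one column per step, senses the stack of blocks under it, and
# places new blocks according to a rule table.

def sense(world, x):
    """Return the heights occupied in the agent's current column."""
    return tuple(sorted(world.get(x, [])))

def step(world, x, rules):
    """Build according to the matched rule, then move one column forward."""
    placements = rules.get(sense(world, x), [])
    for dx, z in placements:          # (column offset, height) of new block
        world.setdefault(x + dx, []).append(z)
    return x + 1

# A single rule closes the stigmergic loop: a lone ground block triggers
# a block stacked on top of it plus a 'seed' block one column ahead,
# which in turn triggers the same rule on the next step, and so on.
rules = {(0,): [(0, 1), (1, 0)]}

world = {0: [0]}                      # a single seed block at column 0
x = 0
for _ in range(5):
    x = step(world, x, rules)
```

Running the loop propagates a repetitive 'arcade' of two-block columns, illustrating how the agent's own placements provide the stimulus for its next action.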

Figure 6.24: An ‘arcade’ built using a simple set of stigmergic rules and linear forward directed movement

If the rules in the look-up table happen to be right, this stigmergic building sequence combined with the linear movement can trigger a feedback loop where the newly created objects provide further stimulus to the agent. The agent then places more objects and the stigmergic loop is closed. Structures built this way often have high degrees of continuity and repetitiveness due to the linearity of the agent's movement. Figure 6.24 presents the result of a simple look-up table with three rules for mapping sensory input to building output. When the agent encounters a single block in the environment, it stacks a new block on top of it and places another one next to it. The agent then faces two vertical stacks, which triggers the third action: the agent places a horizontal 'beam' on top of these stacks. Theoretically, more detailed look-up tables could be invented to produce intricate structures, but in practice this quickly becomes a very complex task. It is perhaps easier to describe higher level targets and devise an algorithm that creates the rules in the look-up table automatically.

In order to develop meaningful stigmergic building rules automatically, an evolutionary strategy is proposed. Strategies for evolving stigmergic building rules were explored earlier by Bonabeau et al. (2000) and von Mammen et al. (2005), but – according to the research conducted by the author of this thesis – these simulations have never before been carried out in the context of simulated physics. The strategy proposed herein involves four major steps: generation of initial building rules, simulation of stigmergic building with these rules, comparison of built structures, and recombination of initial building rules. These steps are repeated several times until a satisfactory look-up table containing stigmergic building rules has evolved. The evolutionary strategy is devised to create a set of rules to build high structures – a task that can be easily measured. For that purpose the agent that executes the building process is given a continuous upward motion vector instead of one parallel to the ground plane. If the agent encounters an unknown spatial configuration, a new rule for placing a rectangular cuboid within a certain range of the agent is created and immediately tested. If the cuboid intersects with any of the existing objects in the model, the rule is reinvented and tested again until a suitable solution is found.
Once a cuboid has been successfully added to the model it becomes part of the physics simulation; the agent records the respective stigmergic building rule into its look-up table and moves on. When the agent later encounters a previously built spatial configuration, it tries to execute a recorded stigmergic rule and can either validate or overrule it. The simulation takes place over a defined period of time or until the agent receives no further stimuli from the model. The whole simulation is then repeated with a number of agents that all have their own individual look-up tables. The few agents that manage to build the highest structures can start the next round of simulation with the look-up tables they developed in the previous round, whereas the others have to start from scratch.
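The four-step loop just described (generate, simulate, compare, recombine) can be sketched as a generic evolutionary strategy. The rule encoding and the toy build() fitness below are assumptions made so the loop runs standalone; the thesis version evaluates structure height inside a physics simulation instead.

```python
import random

random.seed(42)

def random_rules(n=6):
    # each rule: stimulus id -> vertical offset of the placed cuboid
    return {i: random.choice([-1, 0, 1]) for i in range(n)}

def build(rules, steps=20):
    """Toy stand-in for the building simulation: returns structure height."""
    height = 0
    for t in range(steps):
        height = max(0, height + rules[t % len(rules)])
    return height

def evolve(pop_size=10, rounds=15, elite=3):
    population = [random_rules() for _ in range(pop_size)]
    for _ in range(rounds):
        ranked = sorted(population, key=build, reverse=True)
        parents = ranked[:elite]                  # best builders survive
        # recombine elite rule tables gene by gene; the rest start from scratch
        children = [{k: random.choice(parents)[k] for k in parents[0]}
                    for _ in range(pop_size - 2 * elite)]
        fresh = [random_rules() for _ in range(elite)]
        population = parents + children + fresh
    return max(build(r) for r in population)
```

The key design choice mirrors the text: only the agents with the tallest structures carry their look-up tables into the next round, while a portion of the population is reinitialised to keep exploring.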

Figure 6.25: Tall structures built by agents by executing evolved stigmergic building rules and linear upward directed movement

The time needed to develop a good set of stigmergic building rules is unpredictable due to the probabilistic nature of the evolutionary strategy. In some cases, agents cannot achieve more than stacking up just a few cuboids, while agents in other simulations develop their look-up tables fairly quickly and display a variety of different ways of erecting tall structures (see Figure 6.25). In some simulations, a set of building rules evolves far enough to allow agents to build infinitely high structures. However, the described method has many disadvantages. For a start, there are limitations to the structures that a single agent can build – the agent's movement is simply too linear to lead to the emergence of complex structures. The process of finding good rules is also very slow, mainly because of the large number of choices an agent can make in placing a cuboid. Simulated physics algorithms (based on Smith 2007) introduce some redundancy to the model as placed cuboids are adjusted according to the gravitational force, but that is not enough. There is no direct relationship between a successful building action and the subsequent actions of an agent; the success of building a desired structure depends too much on chance.


Despite the many limitations of the proposed agent-based method of developing tall structures, the experiment demonstrates that it is feasible to devise a system where stigmergic building rules are evolved during the simulation rather than explicitly predefined by the programmer. As long as the built structure can be computationally evaluated, the evolutionary strategy can help to reduce the bulk of work that the development of stigmergic building rules would otherwise require. Once a set of building rules has evolved and become robust enough, it can be used in a different context.

Figure 6.26 shows a simulation where a structure is created as a result of the collective effort of an agent colony in which all individuals share the same stigmergic building rules. The movement of agents in the digital environment is randomised. However, in order to reduce the number of possible sensory input configurations, agents are always aligned perpendicular to the cardinal directions of the model's coordinate system. Even with a limited set of stigmergic rules (three rules were used), freely moving agents can build structures that are far more complex than those built by agents constrained to linear movement. Unlike the previous experiment, which implemented simulated physics, this one uses a qualitative method for testing the stability of built structures. The method is based on the connector-socket system (see section 5.4 for details) where a new block can be attached to an existing one only if both blocks have the respective joints. This simplified structural stability computation reduces the demand for computational resources and allows simulations to run with a larger number of agents. However, the number of agents used in the simulation has no impact other than on the speed of the building process because the agents' movement is not coordinated through the environment in any way.
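The qualitative attachment test can be sketched as follows. The encoding of joints as matching world-space positions is an illustrative assumption; the actual connector-socket system is defined in section 5.4.

```python
# A sketch of the connector-socket stability test: a new block attaches
# only if one of its connectors meets a matching socket on an existing
# block, avoiding any physics computation.

def can_attach(new_block, existing_blocks):
    """Blocks are dicts with 'connectors' and 'sockets' sets of
    world-space joint positions."""
    for other in existing_blocks:
        if new_block["connectors"] & other["sockets"]:
            return True
    return False

# a grounded base block exposing one socket on its top face
base = {"connectors": set(), "sockets": {(0, 0, 1)}}
# a candidate block whose bottom connector lines up with that socket
block = {"connectors": {(0, 0, 1)}, "sockets": {(0, 0, 2)}}
```

A simple set intersection replaces the full physics check, which is what makes larger colonies tractable.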
An obvious way of improving the building simulation algorithm is to add rules defining how agents change their movement trajectory when they encounter specific spatial configurations. These rules can be added to the existing look-up table of building rules so that a specific spatial configuration triggers both building and subsequent movement activity. The following experiment scrutinises how movement and building activity can be synchronised through the environment.


Figure 6.26: A sequence of images showing the collective building activity of an agent colony. Agents are placing blocks of various sizes according to a shared set of stigmergic rules

As opposed to previous experiments, this simulation takes place in discrete space where agents can only move from one cell to a neighbouring one in a 3D lattice (see Figure 6.27). Each cell in the lattice can accommodate a single building block if it has enough structural support from its neighbours; the stability value of each block in the built structure is computed with a cellular automata method (see section 5.4) that defines how well each block is supported. Agents can add new blocks to the model according to their internal rules, which are developed during the simulation. These rules specify where agents deposit additional blocks in the immediate neighbourhood of 26 cells (the 3D Moore neighbourhood) around the agent's current location. A similar kind of setup was previously used by Theraulaz and Bonabeau (1995) in their studies of stigmergic building algorithms. However, unlike in the proposed prototype, their agents move around the model randomly and the simulation does not incorporate any structural calculations.
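The 26-cell sensing and building neighbourhood referred to above can be enumerated directly; this short sketch simply lists the lattice offsets around a given cell.

```python
# Enumerate the 3D Moore neighbourhood: all 26 lattice cells adjacent
# to the agent's current cell, excluding the cell itself.
from itertools import product

def moore_neighbourhood(cell):
    x, y, z = cell
    return [(x + dx, y + dy, z + dz)
            for dx, dy, dz in product((-1, 0, 1), repeat=3)
            if (dx, dy, dz) != (0, 0, 0)]
```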

Figure 6.27: Agglomerations of building blocks placed by agents. Each agent evolves its own building rules during the simulation


Stigmergic building rules in this experiment are slightly more complicated than in the previous examples. An agent has no look-up table that specifies the building activity according to the received sensory input. Instead, the agent is designed as a single layer perceptron with sensory input from the 26 neighbouring cells. The sensory inputs are fully connected to the respective 26 building outputs via a series of weights. The input values are received from the built structure in the agent's local neighbourhood and indicate the stability values of surrounding blocks. These input values range from 0 to 1 (0 denotes an empty cell, 0.1 a weakly supported block and 1 a block supported directly on the ground). They are then multiplied by the connection weights and mapped to output values, where an activation function defines on which neighbouring cells new blocks are deposited. The agent's movement is computed in a similar way – the input values are multiplied by the connection weights and the neighbouring cell with the highest total value is occupied by the agent in the next time step.

Agents are subject to an additional movement constraint – they can only move along the surface of existing blocks or across the ground plane. This constraint ensures that an agent always receives sensory information from the environment and does not 'fly' around high above the ground level. However, agents can leave the ground level by building higher structures and then climbing them. An agent can adjust the connection weights between sensory inputs and building outputs by evaluating the success of newly built structures. It simply records the last building output and compares it to the status of the structure after some time. If the blocks that were added to the model still persist, the respective connection weights are increased. If these blocks have become unstable, the weights are decreased.
This way, an agent can 'learn' what structures are likely to stand up, and it can save its resources for building something that is well supported and unlikely to collapse. Agents' connection weights can also have negative values, which means that, besides adding blocks, agents can remove them from the model. This can become useful when an agent gets locked into a densely built environment and cannot find its way out. Agents have only a limited number of blocks that they can deposit during the simulation, but they can gain more by 'eating' existing blocks in the model.
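The perceptron architecture and its persistence-based weight update can be sketched roughly as follows. N=26 matches the 3D Moore neighbourhood; the activation threshold, learning rate and weight initialisation are illustrative assumptions rather than the prototype's actual values.

```python
# A sketch of the perceptron builder: 26 stability inputs are weighted
# into 26 build outputs; weights are reinforced when the placed blocks
# persist and weakened when they collapse.
import random

N = 26                      # 3D Moore neighbourhood size
random.seed(1)

class PerceptronBuilder:
    def __init__(self):
        # fully connected input-to-output weights, small random start
        self.w = [[random.uniform(-0.1, 0.1) for _ in range(N)]
                  for _ in range(N)]
        self.last_output = [0] * N

    def build(self, stability):
        """stability: 26 values in [0, 1]; returns 0/1 build decisions."""
        self.last_output = [
            1 if sum(self.w[o][i] * stability[i] for i in range(N)) > 0.5
            else 0
            for o in range(N)]
        return self.last_output

    def reinforce(self, persisted, rate=0.05):
        """persisted[o] is True if the block built at output o survived."""
        for o in range(N):
            if self.last_output[o]:
                delta = rate if persisted[o] else -rate
                for i in range(N):
                    self.w[o][i] += delta
```

The update rule is deliberately crude: it compares the last building output with the structure's later state and shifts all contributing weights in one direction, which is enough to bias the agent towards well-supported placements over time.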

There are a few interesting phenomena in the behaviour of the agent colony that are worth mentioning. Whereas a single agent seldom produces surprising results, individuals in the colony start competing over the available resources. Even though there is no direct communication mechanism between the agents, their activities are coordinated through the environment. Occasionally two or more agents start changing the same structure at a particular location by adding and removing blocks simultaneously. Quite often an agent places a block while removing another block from a different location. If a nearby agent then removes the placed block and adds a block where the first agent has just removed one, these two agents may get locked into a loop where both of them maintain their current energy levels while building the same structure over and over again. If these agents add more than they take from the environment, new structures can quickly emerge from their collaborative effort. This behaviour – also spotted during live simulations – is not predefined anywhere in the algorithm but emerges from simple rules at the level of individuals and is fuelled by stigmergic mechanisms. It proves the concept that complex behaviour can be achieved by coordinating the behaviour of simpler agents through the environment.

Although the above described experiment reveals the collaborative building powers of the colony, it suffers from a poorly designed movement algorithm. The perceptron architecture seems to be well suited to the task of adding and removing building blocks, but it does not appear to be a good solution for generating continuous circulation. Having too much freedom of movement and sensors in all directions around the agent can work against the purpose of generating ordered movement patterns: it is very difficult to achieve continuous movement without forward directed sensors (see Figure 6.3 and Figure 6.5).


Figure 6.28: Generated ‘pheromone’ trails and the respective structures built by agents. Given a simple building rule, agents were capable of channelling their movement but often failed to establish continuous circulation patterns

A way to introduce a notion of circulation into the above model is to replace the perceptron-based movement algorithm with a method described earlier in this chapter (see the Loop Generator prototype in section 6.1). Combining the 'pheromone' following algorithm with the stigmergic building algorithm is a demanding task. Both of these algorithms produce consistent and potentially useful output for creating conceptual architecture diagrams when executed separately, but when executed sequentially, unforeseeable complexities may arise. One has to make sure that the output from the building algorithm does not stop the development of continuous movement trails. Newly added blocks can create barriers that hinder circulation to a certain extent, but these blocks should not be placed in the middle of heavily used routes. On the other hand, there is little reason to have structure in parts of the model where agents do not go. There needs to be a finely tuned balance between the movement algorithm and the placed blocks, as well as between the building algorithm and the existing circulation routes.

One way to solve this task is to introduce additional conditionals and functions to the algorithms. Blocks, for example, should remain at a location for an extended time only if they are occasionally visited by agents. If no agent comes close to a block for a while, it 'decays' and is removed from the model. Similarly, agents can emit 'pheromone' only if they occasionally come into contact with blocks. As mentioned above, the building routine also needs to be changed – blocks should not be placed at locations of high 'pheromone' concentration. These additional rules ensure that the development of the 'pheromone' gradient and the built structure happen in parallel and depend on one another; the prototype has to feature quite sophisticated dynamic feedback mechanisms. Figure 6.28 illustrates two representations of the same model: the circulation diagram and the built geometry.
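The mutual dependency rules can be condensed into a single update step: blocks decay unless visited, agents deposit 'pheromone' only near blocks, and new blocks are not placed on strong trails. Positions are one-dimensional numbers here for brevity, and all distances, rates and thresholds are illustrative assumptions.

```python
# One coupled update step for the block/'pheromone' feedback loop.
# blocks: {position: remaining_life}; pheromone: {position: strength}

def update(blocks, agents, pheromone, visit_radius=2.0,
           decay=1, lifespan=10, build_threshold=5.0):
    survivors = {}
    for pos, life in blocks.items():
        visited = any(abs(pos - a) <= visit_radius for a in agents)
        life = lifespan if visited else life - decay   # refresh or decay
        if life > 0:
            survivors[pos] = life
    for a in agents:
        near_block = any(abs(pos - a) <= visit_radius for pos in survivors)
        if near_block:
            # emit 'pheromone' only when in contact with blocks
            pheromone[a] = pheromone.get(a, 0) + 1
        elif pheromone.get(a, 0) < build_threshold:
            # build only away from strong trails
            survivors[a] = lifespan
    return survivors, pheromone
```

Iterating this step couples the two processes: trails persist only where blocks are, and blocks persist only where agents pass.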

Figure 6.29: Development of built structures that form circulation channels

The development of the model is generally less dynamic than the emergence of 'pheromone' trails in Loop Generator. A strong positive feedback mechanism is present in the simulation. Blocks aggregate around heavily used trails, which, in turn, is likely to increase the usage of these trails as agents encounter more blocks and deposit more 'pheromone'. Once a clear circulation corridor has developed (see Figure 6.29), agents cannot easily escape it. Agents find themselves constantly following the same route because of the strong 'pheromone' concentration and also because the blocks aligned along both sides of the route prevent them from choosing alternative directions. However, smaller corrections to routes take place all the time, leading to the gradual adjustment and optimisation of the circulation diagram.

The problem with combining continuous movement and stigmergic building algorithms becomes evident when one chooses to experiment with complex building rules or opts to increase the agents' sensory space. Since agents roam freely around the model, they confront existing structure from various standpoints and the sensory input can be completely different depending on which way they approach the structure. Stigmergic building algorithms can be considerably simplified to match the purpose for which the model is built in the first place. The prototype is more likely to be used at the early conceptual stage rather than at later stages of the design process, and the level of detail in the model should reflect that. It is relatively uncommon in architectural design practice to resolve the structure of a building before circulation and functional diagrams have been conceived. Hence, there is an argument that the building blocks should represent spaces rather than structure.

Figure 6.30: Sequence of images showing dynamic feedback between circulation routes and built blocks

The following experiment is built on top of the previously described prototype that combines stigmergic building algorithms with movement algorithms. In this case, however, building blocks should be seen as functions or spaces in the modelled building. This allows the stigmergic building algorithm to be much simpler and possibly more useful for architects at the conceptual design stage. The prototype abandons the idea of agents as trainable perceptrons whose building activity is triggered by certain spatial configurations. Instead, the building algorithm of an agent is triggered when the 'pheromone' concentration in the nearby environment rises above a defined threshold. This ensures that new blocks are placed only at locations with enough 'traffic' around them. The circulation and the blocks depend on one another: blocks need to remain accessible to agents, and agents can only emit circulation 'pheromone' if they encounter existing blocks. This feedback mechanism leads to the dynamic development of the model (see Figure 6.30), where the spatial configuration and the circulation emerge in parallel; one can always be sure that all the spaces (blocks) are accessible via circulation routes which, in turn, run in close proximity to these spaces.

As opposed to the simplified stigmergic building algorithm, the algorithm controlling the development and stability of blocks is made more sophisticated. In order to make a block more dynamic and responsive to the circulation and to existing blocks, a newly created block is allowed to change its shape and find a good fit with respect to other elements in the model. Once a block is placed by an agent, it starts to grow and occupy the areas next to the location where it was originally created, and it keeps developing until it reaches a certain size. The growth of the block is controlled by a cellular automata based algorithm (Adamatzky 2001, p. 11-17) – a method of simulating the diffusion of chemicals in real environments. This method has many advantages. Most notably, a block can take any shape and adapt to the spatial configuration of existing blocks in the model. The algorithm can also be modified to take the circulation into account – diffusion can be limited in areas with high 'pheromone' concentration and thus prevent the blocks from hindering the movement of agents along highly used circulation routes.

The development of the block structure takes place over an extended period of time. The first blocks are placed on the ground level and encourage circulation routes to grow longer; as a consequence, more blocks are added in open, block-free areas. If an agent encounters a block on its way, it avoids the block by steering away from it or by trying to climb across it. If the agent favours climbing, it can place additional blocks on top of the existing ones.
As long as the 'pheromone' concentration is high enough, the agent does not care whether the newly placed block is grounded or not. But the block itself is subject to the simulated gravitational force and cannot persist without sufficient support from other stable blocks. Thus, the development of the block structure on higher levels is less likely than on the ground floor.
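The diffusion-limited growth of a block can be sketched as a single cellular automata step on a 2D grid. The 4-cell neighbourhood, the tuple-based cell encoding and the 'pheromone' threshold are illustrative assumptions; the thesis version follows Adamatzky's chemical diffusion method in 3D.

```python
# One growth step: a block spreads into free neighbouring cells until it
# reaches its target size, skipping cells that are occupied by other
# blocks or lie on strong 'pheromone' trails.

def grow(block_cells, occupied, pheromone, target_size, threshold=5.0):
    """block_cells: set of (x, y) cells; returns the enlarged block."""
    if len(block_cells) >= target_size:
        return block_cells
    frontier = set()
    for (x, y) in block_cells:
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nxt not in occupied and nxt not in block_cells
                    and pheromone.get(nxt, 0) < threshold):
                frontier.add(nxt)
    grown = block_cells | frontier
    # trim any overshoot so the block stops at its target size
    while len(grown) > target_size:
        grown.remove(max(grown))
    return grown
```

Because the frontier test consults the 'pheromone' field, heavily used routes act as barriers and the block moulds itself around the circulation, which is the behaviour the text describes.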


Figure 6.31: Emergence of vertical circulation

Figure 6.31 depicts a situation where several vertical circulation routes – ‘stair cores’ – have emerged simultaneously. As there are no blocks on the ground level to support new blocks at higher levels, these routes are likely to disappear. These cores often appear at the edges of the simulated world where agents’ movement is restricted and they look for alternative directions. In some cases, vertical circulation routes emerge when agents are forced to change their heading as a result of existing blocks on their way. In these cases, it is more likely that a new layer of blocks is started at a higher level because the existing block on the ground that caused this vertical movement provides enough support.

Figure 6.32: Development of vertical circulation and stacked blocks

It may happen that a few blocks are stacked up and a tower-like formation appears (see Figure 6.32). However, this usually requires several conditions to be satisfied. Firstly, the vertical circulation has to be constrained in order to prevent agents from escaping it. In the case of the example in Figure 6.32, the 'stair core' was squeezed between stacked blocks and the corner of the simulated world. Secondly, horizontal movement routes have to feed into the vertical one. This guides more agents to the vertical core and the bigger circulation loop is closed. Once these requirements are satisfied, a vertical stack of blocks may appear.

Figure 6.33: Selected outputs of the simulation

The main difficulty with vertical stacks is maintaining the continuity of circulation – it is extremely rare that agents are capable of generating uninterrupted circuits that incorporate vertical elements. Without loops that keep the agents constantly on the same track, the circulation is bound to change. Although vertical stacks occasionally occur, it is more common that the simulation leads to single or double layered agglomerations of blocks. Figure 6.33 presents several models where the movement routes are organised into circuit networks that are likely to persist longer than disconnected networks. Some of these examples have fairly minimal circulation compared to the number of blocks it serves. This is partly because of the probabilistic nature of the building and movement algorithms, but it also depends on some variable parameters in the algorithm. These parameters allow one to gain control over the simulation and drive it in the desired direction. For example, in order to manipulate the compactness of the block model, one has simply to modify the amount of 'pheromone' that agents drop in the environment. The same can be done by changing the 'pheromone' evaporation speed. One can also control the size of building blocks and can choose to stop the simulation once a certain number of blocks have been placed. This functionality is particularly appealing in the context of an architectural brief where a number of rooms with defined sizes have to be accommodated on a site.

6.7 Space forming agents

The following prototypes are clearly distinguishable from all other prototypes described earlier in this chapter. Although they are based on agent modelling techniques and the basic modelling unit is a mobile agent, the notion of circulation in these prototypes is highly speculative. Agents in this section do not leave traces of circulation behind when they move around, nor do they organise other elements in the model to create open space for movement. These agents belong to self-organisational colonies – they are capable of forming structures that can be interpreted as spatial layouts where certain features of the structure can be seen as circulation paths. Similar or related experiments using agent colonies for creating spatial patterns have been carried out by Adamatzky and Holland (1998) and, in the architectural context, by Coates (2010).

Figure 6.34: Uniform distribution of agents following a simple rule in the simulated world

Whereas in previous prototypes there are always elements in the model that are independent of agents, a colony of space forming agents can be completely self-referential and organise itself without any additional objects in its environment. This does not mean that these agents exist in a vacuum. That would violate the notion of system-environment distinction – an essential requirement for defining the agent in the first place. A single agent still occupies the environment and interacts with it, but this environment can consist solely of other similar agents.

Space forming agents do not necessarily need to perform complicated tasks in order to achieve interesting results at the colony level. The simple repulsive behaviour of moving as far as possible from the closest fellow agent in the colony leads to a uniform distribution of agents across the simulated universe and to the emergence of a global hexagonal structure (see Figure 6.34). With an additional modification, this prototype can be programmed to visualise the space around each agent and to reveal the topological skeleton of a uniform yet organic space. In order to do so, one can introduce a second class of agents that move away from the closest agent of the first class but ignore agents of their own class. Figure 6.35 illustrates the uniform distribution of first class agents (larger dots) and the development process of the topological skeleton – also known as the Voronoi diagram (Adamatzky 2001, p. 36-65) – formed by the second class of agents (smaller dots). A similar approach of approximating Voronoi diagrams in collectives of mobile finite automata is described by Adamatzky and Holland (1998).
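The nearest-neighbour repulsion rule can be sketched in a few lines. The step size, world bounds and colony size are illustrative assumptions; the original prototype is a NetLogo model, and this Python sketch only reproduces the basic rule.

```python
# One repulsion step: each agent moves directly away from its nearest
# neighbour. Iterated many times, this spreads the colony towards the
# uniform, hexagonal-like packing described in the text.
import math
import random

random.seed(7)

def repel_step(agents, step=0.5, world=100.0):
    moved = []
    for i, (x, y) in enumerate(agents):
        others = [p for j, p in enumerate(agents) if j != i]
        nx, ny = min(others, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
        d = math.hypot(x - nx, y - ny) or 1e-9
        # move away from the nearest agent, clamped to the world bounds
        x = min(world, max(0.0, x + step * (x - nx) / d))
        y = min(world, max(0.0, y + step * (y - ny) / d))
        moved.append((x, y))
    return moved

agents = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]
for _ in range(200):
    agents = repel_step(agents)
```

The second agent class described above would use the same step but measure distance only to agents of the first class, which drives it onto the boundaries between cells.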

Figure 6.35: Formation of a cellular structure

If the resultant distribution of agents is analysed in spatial-architectural terms, the larger agents would represent particular spatial functions and the smaller agents would be the dividing elements that confine the space allocated to each of these functions. But equally, the skeleton formed by the smaller agents can be seen as a circulation network. To be more precise: the structure reveals a potential diagram for optimised circulation. Even though it is a very abstract diagram, the output of this repulsion-based model does meet the basic requirements for circulation: it is continuous and it connects and provides access to all of the confined spaces. The network diagram is also intrinsically optimised – each smaller agent lies on a medial axis (Dey and Zhao 2003) between the two closest larger agents.

The simplicity and universality of the repulsion model make it an ideal starting point for building more intricate prototypes. The abstract nature of the generated skeleton network can be made more tangible when the prototype is redeployed in the context of model-specific constraints. For this purpose, agents can be redesigned to recognise and react to additional cues in their environment that are not other agents. For example, agents can be instructed to stay away from certain objects in the model or, on the contrary, be attracted to other elements. The original repulsion model is built in NetLogo, which is a well-suited development environment for abstract agent-based prototypes. However, it sets some limitations on how much extra functionality can be added. If one needs to gain more control over the output, it is advisable to migrate the prototype to a different development environment. The following examples are all programmed by the author of this thesis in VBA and run in a professional CAD application – MicroStation. Although much slower at executing multi-agent simulations, this platform facilitates the interaction between agents and other modelling elements, allows the designer to contribute to the model, and provides better management tools for the extended coding exercise. In order to improve the speed of execution, the agent-based approach of approximating the Voronoi diagram is replaced by a deterministic algorithm (Dey and Zhao 2003) that calculates the Voronoi cell surrounding each individual agent.
Thus, an agent consists of a mobile nucleus that moves around and interacts with other agents, and of a cellular area that surrounds and is occupied by this nucleus. In contrast to the NetLogo prototype, the CAD version enables the designer to constrain the movement of agents to a custom-shaped bounded region (see Figure 6.36). The designer is also given a set of interactive tools for locating agents and defining the target area that each agent strives to obtain. This target area defines the desirable size of the Voronoi cell occupied by the agent. The agent has two ways of achieving its target: it can tweak its repulsion strength or change its internal pressure according to the difference between the desirable and the actual size. By increasing the repulsion strength, the agent pushes other agents further away so that its cell can grow larger; by increasing the internal pressure, the agent pushes the corners of its cell further away from its nucleus and wins more space this way. Provided that there is enough available space, an agent can quickly achieve its target size by manipulating these two parameters. If space is in short supply, agents compete for it by increasing the pressure and repulsion strength, but they gain little, if anything at all, because all the other agents are doing exactly the same. However, the colony does reach an equilibrium state, and even if the individual targets are not met, each agent occupies a territory that is proportional to its target area.
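The feedback loop described above can be sketched in a few lines of Python (the actual prototype was written in VBA for MicroStation). The class, parameter names and the stand-in area function are illustrative assumptions, not the thesis code; in the real model the actual cell area would come from the Voronoi computation.

```python
class CellularAgent:
    """An agent that steers the area of its Voronoi cell towards a target.

    Illustrative sketch: in the real prototype `actual_area` would come
    from the deterministic Voronoi computation, not from the caller.
    """

    def __init__(self, target_area, gain=0.5):
        self.target_area = target_area
        self.repulsion = 1.0  # pushes neighbouring nuclei further away
        self.pressure = 1.0   # pushes the cell's own corners outwards
        self.gain = gain      # how aggressively the agent reacts

    def update(self, actual_area):
        # Positive error means the cell is too small, so push harder;
        # negative error relaxes both parameters again.
        error = (self.target_area - actual_area) / self.target_area
        factor = 1 + self.gain * error
        self.repulsion = max(0.1, self.repulsion * factor)
        self.pressure = max(0.1, self.pressure * factor)
        return error

agent = CellularAgent(target_area=100.0)
for _ in range(50):
    # Stand-in for the simulation: here the cell area simply grows
    # with the repulsion strength, which is enough to show convergence.
    agent.update(60.0 + 30.0 * agent.repulsion)
```

With this stand-in environment the agent settles on a repulsion strength at which its cell area matches the target, mirroring the equilibrium behaviour of the colony.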

Figure 6.36: Self-organisation of cellular agents in a bounded region

The ability to constrain agents to a predefined region is useful if one knows the exact shape and size of this region. The algorithm works reasonably well if the area matches the accumulated target area of the agents, even though it leaves the colony no freedom to find its own outline shape. However, if one wants to experiment intuitively with different regions or different numbers of agents, this approach quickly becomes tiresome. Fortunately, there is a way to facilitate exploration while maintaining a degree of control over the outline shape of the colony: introducing a different type of agent that brings flexibility into the model (see Figure 6.37). The new type of agent has no target area and does not have to keep its repulsion strength or internal pressure constant. These agents therefore act as a kind of pneumatic cushion that contracts if other agents need more space and expands if others shrink. One can choose to place these ‘cushions’ along the perimeter of the predefined region and let the colony define its own outline.


Figure 6.37: Self-organisation of cellular agents in a semi-confined area

6.8 Discussion

All prototypes presented in this chapter are used for experiments that are helpful in studying bottom-up modelling techniques. These techniques can be used by computational designers who wish to integrate the notion of circulation into their spatial models and, amongst other architectural concerns, evaluate their models in terms of accessibility and navigation. All proposed prototypes are just computational sketches and none of them can be used to generate meaningful circulation diagrams unless deployed in the context of project-specific constraints and controlled by a thoughtful designer. However, these prototypes reveal a great deal about how agent-based models can be designed and built in order to facilitate the emergence of circulation without explicit top-down definition. Emergence is the key that makes it possible to integrate bottom-up prototypes into larger computational models – it allows circulation to grow dynamically with the rest of the model.

According to the definition outlined in detail in section 10.3, many prototypes described in this chapter can be classified as generative design models. All of these prototypes feature feedback mechanisms that give them the ability to change and adapt. A generative design prototype can adapt to changes precisely because of its inherent feedback mechanisms and because the proactive parts of the model (agents) constantly monitor their environment, reacting appropriately to different stimuli. There is no fixed or predefined output from a generative prototype, and one can use it to produce a variety of diagrams. The generated diagram depends on the initial configuration of the model, on the designer’s interactive input and on computer-generated random values. There is a great potential benefit in using such flexible and generative models in the design process – they can be combined with other computational modelling methods and they can help in constructing a complete computational model of spatial schemata.

There are essentially three types of circulation-generating agents presented in this chapter: path laying and way finding agents, space building agents, and space forming agents. Whereas the first two types are concerned with altering passive objects in their environment according to defined rules, agents of the third type are spatial entities and the model structure is constructed of these agents. Space forming agents perceive other agents directly and form spatial structures purely by repositioning themselves in relation to their environment. In such models there is no need for static ‘building blocks’, as the agents themselves represent the environment to other agents. These can be the simplest type of agents to program, but it is equally possible to build very intricate architectures of space forming agents. The structure made of agents can itself be interpreted as a circulation diagram. However, space forming agents can also be used together with path laying agents in order to achieve diagrams with a better definition of circulation.

Path laying and way finding agents deploy several well-known techniques to navigate their environment. The two most common computational methods are hill-climbing and stigmergy – the first helps agents to navigate and the second ensures that this navigation is coordinated throughout the colony. Stigmergy is also an essential component of prototypes that involve space building agents. These agents complete the stigmergic cycle by first sensing the environment and then changing its structure, which, in turn, creates new sensations.
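A minimal one-dimensional sketch of these two techniques in Python (the prototypes themselves were built in NetLogo): hill-climbing moves an agent to the neighbouring cell with the strongest signal, and a stigmergic deposit on arrival reinforces the chosen route for later agents. The field, deposit size and grid length are invented for illustration.

```python
import random

SIZE = 20
PEAK = 15  # location an earlier trail has made attractive

# Initial 'pheromone' field sloping up towards the peak.
pheromone = [max(0.0, 0.15 - 0.01 * abs(i - PEAK)) for i in range(SIZE)]

def step(position):
    neighbours = [p for p in (position - 1, position + 1) if 0 <= p < SIZE]
    # Hill-climbing: follow the strongest neighbouring pheromone value,
    # breaking exact ties at random.
    best = max(neighbours, key=lambda p: (pheromone[p], random.random()))
    pheromone[best] += 0.001  # stigmergic deposit reinforces the route
    return best

pos = 10
for _ in range(20):
    pos = step(pos)
```

Run for twenty steps from position 10, the agent climbs the gradient and then oscillates around the reinforced peak, illustrating the positive feedback that coordinates route choice across a colony.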
The changes to the environment can be made in many different ways: by adding or subtracting objects, or by distorting the existing geometry in the model. On top of that, the environment can have its own rules that further modify the inactive objects added by agents. These objects, however, cannot be considered agents, as they possess no goal-directedness or pro-activity.

Every agent-based prototype needs to be validated in order to show that its generated output is suitable to represent circulation in buildings and urban environments. However, none of the prototypes presented in this chapter is validated against measurements of connectivity, permeability, length of circulation, or in fact any other quantitative parameter that could be compared with the measurements of existing and validated building or urban layouts. Instead, these prototypes are mainly concerned with the most basic requirement of circulation: providing access. Accessibility can be granted in two ways. Firstly, discrete coordinates in the model that are never visited by agents during the simulation are excluded from the accessible space; the rest of the model can then be said to be accessible. Secondly, agents can be located strategically in the environment and programmed to move between all of the points that need to be accessible. To put it another way: any model using one of these measures is internally validated in terms of access.

Besides the accessibility validation, many of the prototypes offer great opportunities to incorporate validation and quantitative evaluation routines. Agents can easily be programmed to measure the distances they travel which, in turn, can be used for calculating the average trip length in the whole circulation network. Also, the generated diagrams can be assessed in terms of connectivity and permeability using additional evaluation algorithms; an example of how this can be done is presented in Chapter 7. Some of the prototypes incorporate circulation optimisation mechanisms – ant colony optimisation, for example – that keep the length of circulation diagrams near optimal. However, the full validation of circulation diagrams can only be carried out when a prototype is deployed in a design context and, eventually, it is down to the designer to use the prototypes appropriately in order to generate compatible output.

Designers are encouraged to use an experimental approach similar to the one that proved extremely useful in building and testing the prototypes. Since it is very hard to estimate the behaviour of a prototype and the interplay between different control parameters before it has actually been deployed, a rapid prototyping approach is better suited to the purpose.
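The first accessibility measure mentioned above can be reduced to a simple set operation, sketched here in Python with an invented grid and invented agent traces: any grid cell never visited during the simulation is excluded from the accessible space, and the recorded traces also yield the average trip length directly.

```python
# A 4x4 grid of discrete coordinates (illustrative size).
GRID = [(x, y) for x in range(4) for y in range(4)]

# Hypothetical per-agent visit traces recorded during a simulation run.
traces = [
    [(0, 0), (0, 1), (1, 1), (2, 1)],
    [(3, 3), (2, 3), (2, 2), (2, 1)],
]

# Cells never visited by any agent are excluded from the accessible space.
visited = {cell for trace in traces for cell in trace}
accessible = [cell for cell in GRID if cell in visited]
coverage = len(accessible) / len(GRID)

# The same traces give the average trip length, in grid steps.
avg_trip = sum(len(trace) - 1 for trace in traces) / len(traces)
```

In a real prototype the traces would accumulate over the whole run, so coverage converges towards the genuinely accessible fraction of the model.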
Many prototypes described in this chapter were the result of recurring development cycles and trial-and-error sessions that allowed the final algorithms and appropriate parameters to ‘evolve’. Considering all that is said above, the prototypes in this chapter are experiments that serve the purpose of offering architects alternatives to traditional methods for solving spatial puzzles. Although alien to many contemporary architects, computational prototypes can help to produce output that architects are more familiar with – diagrams. A diagram is an abstract machine that many architects find useful in their work; diagrams can be used by architects to develop holistic design concepts and to construct less abstract representations of the built environment. Most of the images in this chapter depict constructive diagrams that represent both the topological and geometrical forms of circulation, but also include requirement parameters such as frequency of use. These constructive diagrams are snapshots of the simulation and not the end product. In constant change, a diagram is never completed and, being responsive to external change, can be driven by designers to suit project-specific needs.


Chapter 7: Case study 1 – a multi-agent system for generating street layouts

The previous chapter introduced several prototypes of multi-agent systems for generating circulation diagrams. This chapter builds on the knowledge gained through creating and testing these prototypes, and shows how multi-agent systems can be deployed in the context of architectural design projects. The two case studies presented in this thesis support the argument that multi-agent systems can add value to the design process. Although this value can be measured in terms of efficiency, the main contribution of the proposed computational approach lies in demystifying the design process – making it more explicit.

This chapter gives an overview of the development of a computational tool that assists urban designers in modelling street networks for large urban areas. The site chosen for testing was Nordhavnen – an old harbour area on the outskirts of Copenhagen (see Figure 7.1). This particular site was chosen for two reasons. Firstly, the site was the subject of an open international ideas competition and offered a great opportunity to develop and test computational tools while working together with a team of professional designers. Secondly, the sheer size (200 hectares) and the complexity of the study site encouraged the development of computational methods that would give the team some confidence in making design decisions. In order to provide sufficient evidence for this decision-making process, the proposed tools were geared to facilitate generating and evaluating multiple scenarios.

The computational design methodology for the Nordhavnen competition involved developing a few add-ons to a CAD package (Bentley MicroStation), and a generative design model for generating street layouts. The latter forms the main body of this case study and is subsequently referred to as the Nordhavnen model. The Nordhavnen model was built in NetLogo and utilised a multi-agent system for generating circulation systems in a bounded region. It borrowed some elements from the prototype described in detail earlier, and became essentially a successor of the Network Modeller prototype (see section 6.4).


The Nordhavnen international competition invited teams of architects, urban designers and other related professionals to create a vision for a sustainable city of the future. Competition entrants were asked to propose a solution for a large industrial peninsula with several basins and quays in northern Copenhagen, with an option to claim additional land from the sea. The competition brief envisaged the new piece of Copenhagen housing 40 000 new residents and creating the same number of new jobs. The fully developed Nordhavnen was required to offer its residents and visitors all the qualities of urban living, with an emphasis on sustainability. One of the major requirements in the brief asked for a sustainable transportation system promoting walking, cycling and public transport, and prioritising these over the use of private cars.

The challenge for the design team was to propose an economical transport network that would meet the needs of local residents but also serve the passenger and industrial harbours on the peninsula. In order to meet this challenge the team devised a design methodology including computational tools and models. One of the proposed models – the Nordhavnen computational model – was intended to help the team design a road network for the site.

Figure 7.1: Aerial photo of Nordhavnen (Google 2011a)

One of the biggest challenges regarding circulation in Nordhavnen was to connect the peninsula to the rest of Copenhagen. Since the peninsula is joined to the mainland by only a narrow strip of land, all traffic from Nordhavnen had to pass through this narrow passage. Besides the attractors of local traffic, the site also featured an existing industrial port – a major attractor of through traffic. Although the future Nordhavnen was mainly to accommodate local residents and local businesses, the competition brief also envisaged a new passenger terminal on the east side of the peninsula. It was obvious from the start that the links to Copenhagen and the harbours would become the major shapers of the road network in Nordhavnen.

Besides the obvious constraints and official requirements expressed in the brief, there were some crucial decisions made by the design team that influenced the development of the Nordhavnen computational model. At an early stage of the design process, the team decided to implement a gridded city pattern. This was thought to simplify the design task and help to meet the floor space requirements, while maintaining flexibility and variety in the urban environment. The grid allowed the team to develop generic building typologies and manage height and density efficiently across the site. Another important design decision made by the team was to concentrate higher buildings in the centre of the peninsula in order to leave the seaside less densely populated and possibly dedicated to recreational activities. This was considered a natural way of dealing with an urban environment of the size and predicted population of Nordhavnen.

7.1 Developing the prototype

The objective of developing the Nordhavnen computational model was to create a flexible and easy-to-use generative design application. The hope was that the model would become generic enough to be used later in other urban design projects: the development time was far too long to justify a computational model that would be used only once, so the model had to fit into the traditional practice of urban design projects. However, the intent was not to create an all-purpose road network generator, but to develop a model that produced gridded city patterns while taking into account additional site-specific information.

There were four main requirements that the circulation networks generated by the model were supposed to meet. The first and most important was the accessibility requirement – from each urban block in the grid layout one had to be able to reach any other block via the circulation system. The second requirement was variety – the model was intended to produce several different circulation scenarios even with identical input. The last two requirements concerned the validity of the generated diagrams: the network layout had to be reasonable and suitable for circulation in settlements. In order to be reasonable, the total length of the network and the average trip length within the network had to be optimised to a certain extent. In order to be suitable for settlements, the network had to create sufficient connectivity and permeability in the environment.

The first attempt to meet these requirements was made by extending the Stream Simulator prototype, which is explained in detail in section 6.2. It suffices to say here that the prototype was a simple agent-based model combining hill-climbing and stigmergy. Stream Simulator, which generated only tree-like patterns, was chosen as a starting point for the Nordhavnen model because it already satisfied several of the four requirements mentioned above. Firstly, with a homogeneous slope gradient in the input landscape, Stream Simulator always produced continuous circulation patterns and provided access to every single patch in the landscape. This occurred for a simple reason – mobile agents were actually distributed all over the landscape and converged to form circulation channels, rather than the other way around. Secondly, there was an inherent route optimisation mechanism embedded in the prototype: fostered by a positive feedback mechanism, the agents’ choices of routes were coordinated throughout the landscape. Thirdly, the prototype was a probabilistic model in the sense that the agents were placed in the landscape randomly, which yielded a different result every time the prototype was deployed. Stream Simulator also allowed the user to control the input landscape and thereby gain a certain amount of control over the outcome without any programming skills. Since it could only produce branching networks, the one requirement Stream Simulator was not able to meet was generating diagrams with sufficient connectivity and permeability.

In order to work with the grid layout, the Stream Simulator prototype had to be extended. The extended version was designed to support any user-defined grid spacing parameters and produced orthogonal branching patterns only (see Figure 7.2). The implementation of this modification was quick and straightforward – whereas the agents in the original prototype acquired information from their immediate neighbourhood, in the modified version agents probed the environment by comparing data at adjacent grid points. Since the agents travelled on the grid coordinates only, they could not make any diagonal movement.
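The grid-constrained probing described above can be sketched as follows, with Python standing in for the NetLogo original; the landscape function and spacing are invented. The agent compares values at the four orthogonally adjacent grid points and steps to the lowest one, so only orthogonal movement is possible.

```python
SPACING = 5  # user-defined grid spacing, in landscape units

def height(x, y):
    # Stand-in landscape: a bowl whose lowest point is at (0, 0).
    return x * x + y * y

def downhill_step(x, y):
    # The agent probes the four adjacent grid points only, so it can
    # never move diagonally; including its current position lets it
    # stay put once no neighbour is lower.
    candidates = [(x, y),
                  (x + SPACING, y), (x - SPACING, y),
                  (x, y + SPACING), (x, y - SPACING)]
    return min(candidates, key=lambda p: height(*p))

pos = (20, -15)
for _ in range(10):
    pos = downhill_step(*pos)
```

Like water in the Stream Simulator analogy, the agent descends the landscape in orthogonal grid steps until it reaches the lowest grid point.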

Figure 7.2: Stream Simulator modified: orthogonal stream channels are generated with the user defined segment length

Branching street patterns, based on the use of cul-de-sacs, are generally discouraged in urban design because of their low permeability and connectivity. In its final report, the Urban Task Force argues (Rogers 1999) that tree-like street patterns are bad for both cars and pedestrians: for cars, this type of circulation network concentrates congestion at the branching points; for pedestrians, it often means indirect journeys even between neighbours and increases the use of cars.

To avoid generating branching street systems and to build a model that also generates circuit networks, the Stream Simulator prototype borrowed some concepts from another prototype: Labyrinth Traverser (see section 6.3). The Labyrinth Traverser prototype is an equally simple model combining cellular-automaton-based diffusion with hill-climbing, and provides a robust computation of the shortest path between given points. From the perspective of the Nordhavnen competition, the shortest path computation allows one to control which areas should be more readily accessible. Whereas Stream Simulator offers a way to optimise the total length of the network, Labyrinth Traverser contributes to the optimisation of the average trip length within the network. Combining these two prototypes leads to a multivariate network optimisation in which both the total length of the network and the average trip length are balanced. This optimisation is facilitated by the landscape’s ability to revert to its original state over time – shorter routes are more likely to persist than longer ones.

In a way, the Nordhavnen computational model is an extended version of the Network Modeller prototype (see section 6.4). It does, however, possess some features that turn it from a simple concept into a potentially useful application for urban designers. One difference between the Network Modeller prototype and the Nordhavnen model is the way mobile agents are distributed across the landscape. While in the former the agents travel between user-defined network nodes, in the latter the agents depart from every single grid point (setting-out point) and travel to the given target nodes. Distributing the initial setting-out points for agents across the landscape ensures that every single urban block in the scheme is accessible. Two other main changes concern context sensitivity and enhanced user interactivity. The Nordhavnen model can be controlled by the user in two ways: by preparing the input map of the landscape and by locating the target nodes in the network. The input map is a black-and-white image defining where the agents can or cannot go. The target nodes indicate destinations of special importance (attractors) in the scheme. In the case of Nordhavnen, these attractors would be the city of Copenhagen, the harbour, and some local centres.
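The diffusion-plus-hill-climbing idea borrowed from Labyrinth Traverser can be sketched in Python: a breadth-first wavefront (a discrete stand-in for the cellular-automaton diffusion) spreads from a target cell through a small invented maze, and an agent then climbs down the resulting distance field, which recovers a shortest path. This is an illustrative reconstruction, not the prototype's code.

```python
from collections import deque

maze = ["....#",
        ".##.#",
        ".....",
        "#.##.",
        "....."]

def diffuse(target):
    # Breadth-first wavefront: each cell stores its step-distance to target.
    dist = {target: 0}
    queue = deque([target])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(maze) and 0 <= nx < len(maze[0])
                    and maze[ny][nx] != '#' and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist

def climb(start, dist):
    # Hill-climb 'down' the distance field to recover a shortest path.
    path = [start]
    while dist[path[-1]] != 0:
        x, y = path[-1]
        path.append(min(((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)),
                        key=lambda c: dist.get(c, float('inf'))))
    return path

dist = diffuse((4, 4))
path = climb((0, 0), dist)
```

In this maze the climb returns a nine-cell path, i.e. a shortest route of eight moves around the walls.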

Figure 7.3: Circulation diagrams with many setting-out points and a small number of attractors. From left to right: tree structure with 1 attractor, 1 circuit with 2 attractors, multiple circuits with multiple attractors

The generated topology of the circulation diagrams depends on the number of attractors (see Figure 7.3). Single-attractor scenarios always lead to the formation of branching networks that cover the whole landscape but never feature closed loops in their structure. Once a couple of additional attractors are introduced, the model starts generating circuit networks; if more attractors are added, the number of closed loops tends to increase as well. The many-to-few relationship between setting-out points and attractors often leads to diagrams where the areas around attractors are well connected and roads form circuits, while more remote areas are connected via smaller branching networks. This is an interesting effect of the Nordhavnen model that is paralleled by real-world examples of urban settlements where peripheral areas are connected to each other via the town centre. However, this effect should not be encouraged in suburbia, as it goes against the principles of high connectivity and permeability in contemporary urban design practice (Rogers 1999).

Figure 7.4: Many-to-many relationship between setting-out points and attractors. All of these diagrams have 11 attractors placed across the landscape

In order to achieve better connectivity in peripheral areas, the Nordhavnen model was altered to support many-to-many relationships. This was done simply by introducing a routine that automatically adds several attractor points randomly across the landscape. The change was immediately apparent: although cul-de-sacs were still present in the diagrams, longer branching roads had almost completely disappeared (see Figure 7.4). The peripheral areas had suddenly become well connected and the whole network permeable. Alongside this positive effect, the replacement of the many-to-few with the many-to-many relationship raised a technical issue – running the model with a greater number of attractors required greater computational resources. This posed some limits on the size of the landscape, on the maximum granularity of diagrams, and on the number of attractors for the optimal use of the model.

Figure 7.5: Diagrams with attractors of differentiated magnitude – some attractors (larger dots) are more appealing to agents than others (smaller dots)

The automatic distribution of attractors proved useful in terms of achieving more favourable diagrams, but it also brought up another issue – the attractor points were now out of the user’s control. Placing all the attractors manually was time-consuming and hindered the design process flow; automatic placement was too random. To get around these problems, an additional parameter was introduced to the model: the magnitude of attraction. All automatically placed attractors were given a magnitude value of 1, while the magnitude of manually placed attractors was defined by the user. The magnitude simply indicated the attractor’s capacity for attracting agents. The resultant diagrams with differentiated magnitudes (see Figure 7.5) gave control back to the user. Once the control over the location and the magnitude of attractors was regained, the model was ready for experiments. The user could now manipulate the magnitude of an attractor depending on the assumed importance of a particular urban function, or to reflect the desired number of vehicles and pedestrians at the location.

The Nordhavnen model is built in a way that eliminates the necessity of validating the suitability of diagrams for urban circulation with respect to accessibility. As described earlier in this chapter, the requirement of accessibility is satisfied inherently in the model and each urban block is connected to the circulation by default. The diagrams generated with the Nordhavnen model, however, are subject to external validation and can be validated qualitatively or quantitatively. For example, most of the diagrams presented in this chapter can be validated as road systems based on their resemblance to road systems in existing urban settlements (qualitative validity). The diagrams can also be evaluated with respect to the connectivity and the length of the network (quantitative validity). The methods of validation are explained in detail later in this chapter.
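The magnitude mechanism amounts to weighted random target selection, sketched here in Python; the attractor names and magnitude values are invented for illustration.

```python
import random

attractors = {
    "harbour": 5,       # manually placed, high importance
    "city_link": 4,
    "local_centre": 2,
    "filler_a": 1,      # automatically placed, magnitude 1
    "filler_b": 1,
}

def choose_target(rng=random):
    # Each agent picks a target with probability proportional to the
    # attractor's magnitude of attraction.
    names = list(attractors)
    weights = [attractors[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

counts = {n: 0 for n in attractors}
for _ in range(10_000):
    counts[choose_target()] += 1
```

Over many draws, an attractor of magnitude 5 is chosen roughly five times as often as a magnitude-1 filler, which is what lets the designer steer agent traffic by tuning magnitudes.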

Figure 7.6: A generated diagram classified as a four-grade road system. This exercise was done manually by counting the width (in pixels) and the strength (darkness) of the road segments in the diagram

The diagrams can also be analysed in terms of the traditional road grading system often used by highway engineers (Marshall 2005, p. 15). Figure 7.6 shows how a generated diagram can be interpreted as a graded road hierarchy. The fact that these diagrams lend themselves to this kind of analysis does not yet mean that they make good street patterns. It is usually required in such a hierarchy-based system that lower grade roads feed the roads just above them – 4th grade roads should only feed 3rd grade roads, which in turn feed 2nd grade roads, and so on (Marshall 2005, p. 170). Since the Nordhavnen model is an interactive one, the user of the model is ultimately responsible for generating working diagrams. For example, to meet the requirements of the grade-based system the designer should avoid placing attractors too far away from other attractors.


7.2 Generating diagrams in context

In order to generate useful output for the design team, the computational model had to be deployed in the context of the Nordhavnen site. This was achieved by introducing a routine to import pre-processed 2D map data into NetLogo. The image was first prepared in a CAD application, then converted into a black-and-white image and, eventually, loaded into NetLogo’s model world. White pixels in the pixel map indicated the site area where the gridded road pattern could occur; black pixels denoted the surrounding water or areas outside the competition boundary (see Figure 7.7).

Once the input map was prepared and loaded into the program, the grid pattern that formed the basis for the road structure was established within the defined boundary. The grid spacing was controlled by the user and was eventually decided by the design team according to what was considered a reasonable urban block size. The underlying principle was to create an urban structure with sufficient connectivity, providing each block in the area with at least one access point to the road network. The input map thus served the purpose of defining access to the urban blocks, and agents navigated the grid by moving from one grid point to an adjacent one. In doing so, agents were capable of crossing water (black pixels) provided that both of the grid points on opposite sides were situated inside the boundary.
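The import and grid set-up steps can be sketched with a tiny invented pixel map, Python standing in for NetLogo: grid points are kept only where the underlying pixel is white, while movement between two adjacent grid points is allowed even across black pixels (implying a bridge), provided both endpoints lie on the site.

```python
W, B = 1, 0  # white = site, black = water or outside the boundary

# Tiny illustrative input map, 5 pixels wide and 3 pixels tall.
pixel_map = [
    [W, B, W, W, W],
    [W, W, W, W, W],
    [W, W, B, W, W],
]

SPACING = 2  # grid spacing, in pixels

# Keep a grid point only where the underlying pixel is white.
grid = [(x, y)
        for y in range(0, len(pixel_map), SPACING)
        for x in range(0, len(pixel_map[0]), SPACING)
        if pixel_map[y][x] == W]

def can_move(a, b):
    # Orthogonally adjacent grid points only; water between them is
    # crossable (a bridge) as long as both endpoints are grid points.
    return (a in grid and b in grid
            and abs(a[0] - b[0]) + abs(a[1] - b[1]) == SPACING)
```

Here the move from (0, 0) to (2, 0) is allowed even though the pixel between them is water, whereas no grid point exists on a water pixel, so no route can terminate there.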

Figure 7.7: The input image and the grid representing access to urban blocks. The shape of the area was partly driven by the competition brief and partly defined by the design team


Before the agents were set in motion, one had to locate the desired attractors in the area to serve as targets for the agents. Each attractor was given a value of importance that defined its attractiveness to agents. Whereas some low-value targets were placed automatically on the map to ensure uniform connectivity across the whole site, other, more important targets were manually located where the design team had decided the key attractors should be. The key attractors included such important functions in Nordhavnen as the harbour, the main road linking Nordhavnen with Copenhagen city, access to the planned tunnel, and the new Nordhavnen city centre (see Figure 7.8).

Figure 7.8: The input map with attractors and the resultant diagram. Dots represent attractors with the size indicating the importance

Once the input map was loaded and all attractors placed, the Nordhavnen model was ready to generate circulation diagrams. All the participating agents were located randomly at the grid points and each was assigned a target. More important targets had a higher probability of being assigned to agents than less important ones. This probabilistic nature of the model led to a variety of diagrams even when the initial configuration remained unchanged (see Figure 7.9). The formation of circulation diagrams also depended on many control variables that were exposed on the graphical user interface (GUI). These user-controlled variables defined the number of participating agents, the strength of the ‘pheromone’ trail left behind by the agents, and the speed of evaporation and diffusion of this ‘pheromone’. This allowed one to drive the optimisation and development process and deploy it as a design tool.

Figure 7.9: Distinct diagrams generated with an identical initial configuration

Although the development process of a diagram depends on many different parameters and supports a variety of output, the generated circulation diagrams display certain common emergent features. The most heavily used routes typically occur at the centre of the area but are seldom close to other frequently used routes. There also appears to be a fade-out of the frequency of route usage towards the edges of the area, unless an important attractor sits right at the edge of the site. This tendency towards centralisation becomes even more obvious in highly connected diagrams where attractors are placed on every single grid point (see Figure 7.10). The hierarchy of routes seems to develop naturally, with less frequently used routes feeding traffic to busier routes that, in turn, form clear circuits at the heart of the area. This leads to higher connectivity in the centre, whereas peripheral areas tend to form branching network structures.

Figure 7.10: Diagrams generated with uniform attractor grid


Figure 7.11 illustrates the progress of the Nordhavnen model from an initial configuration to a fully developed diagram. The network development typically follows a pattern where higher-grade routes become visible at an early stage; this is followed by the appearance of lower-grade routes and, eventually, local access routes become visible. A typical diagram features a few frequently used closed loops forming a continuous circuit of the heavily used roads. These routes often cross canals that break the site into smaller peninsulas, suggesting the use of bridges to maintain efficient traffic circulation in Nordhavnen. In many diagrams a bridge appears at the heart of the site to traverse the inlet to the internal basin. Curiously enough, there is also a crossing at the very same location in Nordhavnen (see Figure 7.1), although the input map does not reflect this minute detail. The southern part of Nordhavnen is usually connected to the rest of the area via a strong route. This is the major link to the city of Copenhagen, and a potential bottle-neck for the through traffic.

Figure 7.11: Sequence of images showing the development of circulation network


7.3 Quantitative analysis and evaluation of diagrams

As discussed earlier in this chapter, all generated diagrams have to be validated if one is to evaluate their suitability as circulation systems in urban design. The Nordhavnen model is built in a manner that makes the validation of the most basic requirement of urban circulation – accessibility – redundant. The agent-based model is configured to connect all the urban blocks automatically to a single network of roads. Whereas the accessibility criterion is validated internally, other parameters of the generated diagrams are subject to external validation. There are two main measures that have been taken into account in the proposed quantitative analysis. The first parameter to be measured is the total length of the generated network. The desire to keep the length of road networks minimal in any real-world case is driven by the need to keep construction costs as low as possible. This purely quantitative measure is often rivalled by another evaluation parameter of urban circulation – connectivity. Although connectivity can be assessed solely on a quantitative basis, it is associated with the quality of the urban environment. In the era of urban sprawl, high connectivity has become a key measure of movement networks and urban form. Clearly, it is difficult to outline what makes a good street network. Llewelyn-Davies points out that the reason why some routes are better than others depends on many intangible factors and route assessment can therefore never be an exact science (Llewelyn-Davies 2000, p.34). Furthermore, the purpose of this section is not to assess the quality of generated diagrams but to show that these diagrams meet the basic quantitative requirements for connectivity. Qualitative value judgements can be made independently of the quantitative measures that are used here to validate circulation diagrams generated with the Nordhavnen model.
The method proposed later in this section is intended to help designers favour one diagram over another. There are several reasons why high connectivity is thought to be an important indicator of urban design schemes. According to Benfield (in Song and Knaap 2004), better connectivity leads to more cycling and walking, less motorised traffic, cleaner air, and a greater sense of community. Although connectivity is, at heart, a qualitative measure of urban space, there are several existing methods that try to quantify it.

Song and Knaap’s (2004) measures of connectivity take into account the number of urban blocks, the number of street segments and nodes, the lengths of cul-de-sacs, the total length of the road network, and the distance between access points in the neighbourhood. They also distinguish between internal and external connectivity – the first considers just one neighbourhood, whereas the other involves several. Other attempts to quantify connectivity include calculating road intersections and total road length per area unit (Dill 2004), the number of dwellings per urban block, the median perimeter of blocks, and the median length of cul-de-sacs (Song and Knaap 2004). One measure of connectivity often used by urban researchers is the link-to-node ratio that – Dill (2004) suggests – should be about 1.4 in good urban environments. The measurement of connectivity preferred herein is called ‘internal connectivity’ (Song and Knaap 2004) or ‘connected node ratio’ (Dill 2004). This is a simple ratio that operates with classical elements of urban street networks: road intersections and cul-de-sacs. Although reasonably used cul-de-sacs can benefit the local neighbourhood by reducing through-traffic and creating alternatives to the grid-like street layout, they are counterproductive in terms of permeability and ease of navigation. Internal connectivity according to Song and Knaap (2004) is calculated as follows:

internal connectivity = road intersections / (road intersections + cul-de-sacs)

Dill, referring to the INDEX model of Criterion Planners Engineers, argues against networks with internal connectivity values less than 0.5 and favours designs with values of 0.7 or higher (Dill 2004). Song and Knaap have evaluated single-family neighbourhoods in Forest Glen and Orenco Station in the Portland area, and have calculated internal connectivity values of 0.67 and 0.81 respectively (Song and Knaap 2004). Based on these references, it is fairly safe to say that street networks with internal connectivity values from 0.7 (high connectivity, some cul-de-sacs) to 1.0 (fully connected, no cul-de-sacs) would be an indicator of successful urban design. However, connectivity parameters are dependent on a number of urban design characteristics. City centres accommodate much heavier traffic than residential areas in suburbia and road network connectivity reflects that. Dill (2004) has shown that metropolitan

areas in Portland have higher network density and internal connectivity (connected node ratio) than peripheral areas. Therefore, one has to consider the location and the typology of the measured area when assessing street networks. Internal connectivity and total road length tend to be inversely proportional parameters. The Nordhavnen model can generate designs ranging from minimal branching networks with low connectivity values to fully connected networks with escalating total lengths. In order to assist the validation process, a combined measure of ‘internal efficiency’ is proposed here. Internal efficiency is simply calculated by dividing internal connectivity by total length and can be used for finding near-optimal states where building more roads adds little value to the quality of urban space. In real-world cases, it is ultimately a matter of judgement how much quality is gained by spending more money on road building in order to increase connectivity. Internal efficiency is a relative measurement, and cannot be used to compare different urban schemes. However, it is useful in evaluating different scenarios for the same scheme.
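The two measures can be expressed directly in code. The sketch below is a minimal Python transcription of the equations above; the function names and the choice of passing raw counts are illustrative assumptions, not part of the original model.

```python
def internal_connectivity(intersections, cul_de_sacs):
    """Song and Knaap's internal connectivity:
    road intersections / (road intersections + cul-de-sacs)."""
    return intersections / (intersections + cul_de_sacs)

def internal_efficiency(intersections, cul_de_sacs, total_length):
    """The combined measure proposed in the text:
    internal connectivity divided by the total network length."""
    return internal_connectivity(intersections, cul_de_sacs) / total_length
```

A diagram with 8 intersections, 2 cul-de-sacs and a total length of 100 units would score a connectivity of 0.8 and an efficiency of 0.008; the absolute efficiency value is meaningless on its own, but comparing it across scenarios of the same scheme reveals the near-optimal region.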

Figure 7.12: Development of the topology diagram

In order to calculate the road network length, internal connectivity and internal efficiency, one had to turn the diagrams presented earlier in this chapter (frequency diagrams) into topology diagrams. This required no further changes to the model – it was possible to generate topological diagrams purely by altering the

variables exposed via the GUI. The diffusion of the ‘pheromone’ had to be set to 0, and a gradual reduction of the ‘evaporation’ parameter at run-time resulted in unambiguous diagrams. The development of topology diagrams (see Figure 7.12) was similar to that of frequency diagrams, except that the fully developed topology diagram indicated no usage data or any road hierarchy. Once the topology diagram was generated, the quantitative parameters in question were calculated by a deterministic algorithm. Since the Nordhavnen model was built in NetLogo, a fairly simplistic algorithm based on counting patches was devised. The road network length was calculated by simply counting all black patches. For calculating the internal connectivity, different types of road intersection had to be determined and counted. T-junctions were any of the black patches that had 3 other black patches in their neighbourhood, while X-junctions had 4 and cul-de-sacs had only one.
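The patch-counting idea translates into a few lines of Python. This is a sketch, not the original NetLogo code; it assumes the road is held as a set of black patch coordinates and that ‘neighbourhood’ means the four orthogonal neighbours.

```python
def classify_junctions(black):
    """Classify road patches by the number of black orthogonal neighbours,
    as described in the text: 1 neighbour -> cul-de-sac,
    3 neighbours -> T-junction, 4 neighbours -> X-junction."""
    counts = {"cul_de_sac": 0, "t_junction": 0, "x_junction": 0}
    for (row, col) in black:
        neighbours = sum((row + dr, col + dc) in black
                         for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))
        if neighbours == 1:
            counts["cul_de_sac"] += 1
        elif neighbours == 3:
            counts["t_junction"] += 1
        elif neighbours == 4:
            counts["x_junction"] += 1
    return counts
```

The network length is then simply the number of black patches, and the junction counts feed straight into the connectivity equations.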

Figure 7.13: Topology diagrams generated with a growing number of attractors (1–25)

The quantitative analysis carried out on the Nordhavnen model involved several rounds of testing cycles. The objective of this testing was to find out general trends of network development and interdependencies between different parameters. Tests were repeated with different numbers of attractors (see Figure 7.13) which were either placed manually following the designer’s intuition or automatically using the random placement routine built into the model. In the latter case, tests involving a

certain number of attractors were carried out at least three times and the result with the highest connectivity was selected for comparison. Although the probabilistic nature of the automatic placement routine rendered the results inconclusive, one could surmise the general behaviour of the model (see Figure 7.14). The graphs showing the number of attractors on the horizontal axis seemed to take the shape of an exponential curve where the parameters on the vertical axis approached a ceiling value at a decelerating pace.

Figure 7.14: Tests with randomly placed attractors

Figure 7.15: Tests with manually placed attractors

This trend becomes even clearer when attractors are located carefully by a designer (see Figure 7.15). The ceiling value of connectivity in an ideal situation is 1.0, but in the Nordhavnen model the ceiling is lower for a simple reason – some grid points have just a single neighbour. The ceiling value of the total length parameter may vary from scheme to scheme but the trend is similar to that of connectivity. The only difference is that the highest possible connectivity is normally achieved earlier than the maximum length of the network, creating a peak in the internal efficiency graph. It is difficult to observe this phenomenon in the context of Nordhavnen, but it becomes immediately apparent when the model is deployed in isolation (see Figure 7.16).


Figure 7.16: Tests ‘in silico’ – internal efficiency rises until connectivity has reached its ceiling while the maximum length has not yet been achieved

The reason why maximum connectivity can coincide with a low network length can be observed in Figure 7.17. The maximum connectivity may be achieved much earlier than the maximum length of the network because the method of counting road intersections does not take the shape of these intersections into account. Whereas it makes no difference to the connectivity whether road junctions are T or X (cross) shaped, it does influence the network length. Also, the connectivity measured this way does not reflect the quality of urban design patterns well. It fails in cases where the urban blocks are disproportionately stretched in one dimension. Although the connectivity can be high across the whole site, elongated urban blocks can lead to very low connectivity at the local neighbourhood scale.

Figure 7.17: An ‘in-silico’ diagram with 5 attractors where the highest connectivity value has been achieved, but the total length of network has not been exhausted

The described method is problematic in terms of assessing the quality of urban circulation conclusively. One can overcome this problem by bringing in other calculations such as measuring urban block proportions, but that would inevitably make the whole process of evaluation a lot more complex. Instead, one can expect an

easier solution. It is not difficult to alter the equation of internal connectivity in order to make it more suitable for evaluating urban design schemes. The method described earlier does not consider the type of road intersections whatsoever, and that is the main reason why so many high-connectivity diagrams turn out to be unsuitable as urban circulation diagrams. With a little modification of the equation, this can be changed. If one counted X- and T-shaped junctions separately and gave these separate weightings, the equation could reflect network connectivity more accurately. The internal connectivity is proposed to be measured as follows:

internal connectivity = X-junctions / (X-junctions + T-junctions + 2 × cul-de-sacs)

An additional rule has to be introduced in counting junctions that are too close to the boundary of the site to become X-shaped. In that case T-junctions should be counted as X-junctions, and then the maximum internal connectivity value can be 1.0. In the suggested equation, X-junctions and T-junctions have different importance to the connectivity parameter, with X-junctions having greater influence. Also, T-junctions and cul-de-sacs are not treated equally – the latter contributes less to connectivity, counting double in the denominator. The advantage of the proposed method over the method that does not consider the shape of the junctions is apparent – it allows a much finer measurement of connectivity. For example, the diagram in Figure 7.17 has a connectivity index of 1.0 when the type of junctions is not taken into account. This is also the highest possible connectivity of the complete grid. With the suggested method of calculation, the internal connectivity is 0.15, which is far away from the maximum value of 1.0 of the complete grid. The advantage of using the modified equation for calculating internal connectivity is best understood by observing the behaviour of variables in function graphs (see Figure 7.18). Both the total length and the internal connectivity parameters in this graph progress at continuously slowing rates. These parameters show similar growth tendencies, sometimes featuring sudden changes in growth rate at roughly the same point. With the growing number of attractors both the internal connectivity and the total length increase at a decelerating pace but never really decrease. The combined value (internal connectivity/total length) of these parameters – internal efficiency – however, tends to reach its highest value with a relatively small

number of attractors. There is a point where the growth of internal efficiency stops and adding further attractors would yield low gains in internal connectivity against increasing build costs. As long as an acceptable connectivity has been achieved, there seems to be little benefit in adding more attractors.
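The modified equation can also be written as a short function. The sketch below is an interpretation, assuming the junction counts have already been obtained (for instance with a patch-counting routine) and that boundary T-junctions to be promoted to X-junctions are passed in separately; the parameter names are hypothetical.

```python
def modified_connectivity(x_junctions, t_junctions, cul_de_sacs, boundary_t=0):
    """Proposed measure: X / (X + T + 2 * cul-de-sacs).
    T-junctions too close to the site boundary to become X-shaped
    (boundary_t) are counted as X-junctions, so a complete grid
    can still reach the maximum value of 1.0."""
    x = x_junctions + boundary_t
    t = t_junctions - boundary_t
    return x / (x + t + 2 * cul_de_sacs)
```

A network of 5 X-junctions and nothing else scores 1.0, while one X-junction alongside two T-junctions and a cul-de-sac scores 0.2, reflecting the heavier penalty on dead ends.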

Figure 7.18: Graphs show the change of network parameters ‘in silico’ (left) and in the context of Nordhavnen input map (right)

The number of attractors leading to the highest internal efficiency depends on the particular set-out of the model. In the Nordhavnen model, this near-optimal solution can be achieved with 9 attractors (see Figure 7.19). The corresponding internal connectivity tends to fall between 0.65 and 0.95 if the shape of street intersections is not taken into account. This shows that a near-optimal solution in terms of connectivity and road lengths is well above the commonly accepted minimal limit of the connectivity measure (Dill 2004; Song and Knaap 2004).

Figure 7.19: A near-optimal diagram with internal connectivity of 0.95 (as calculated after Song and Knaap (2004)) or 0.67 (as calculated with the proposed method of taking the shape of junctions into account)


The new way of measuring internal connectivity proposed in this section provides an alternative way of assessing street networks compared to the method that does not differentiate between types of road intersection. The downside is the lack of extensive research analysing existing networks using this method. However, the objective of this section is not to carry out this research but to validate the generative model. One can do so by looking at a few schemes that are commonly considered good examples of contemporary urban design and are of comparable size and typology to Nordhavnen. The internal connectivity parameters of street networks in Orenco Station, Hammarby Sjöstad and Vauban are well above the lowest acceptable value of 0.7 (Dill 2004). This allows one to make an assumption about acceptable connectivity values as measured using the proposed method. Orenco Station is perhaps not the most glamorous example of contemporary urban design, but it is selected because Song and Knaap have studied the area in depth (Song and Knaap 2004) and highlight it as a good environment in terms of connectivity. Orenco Station is a pedestrian-friendly, high-density urban town centre featuring a light railway station. The area (see Figure 7.20) has 71 street network nodes – intersections and cul-de-sacs. Song and Knaap’s measure of the internal connectivity in the Orenco Station area is 0.81. Using the proposed method of taking the shape of street intersections into account, the value of internal connectivity is 0.30. Hammarby Sjöstad and Vauban are both promoted by CABE as examples of good urban design practice (CABE no date) and it is therefore assumed that the street networks in these areas meet best urban design practice. Once fully developed, Hammarby Sjöstad (see Figure 7.21) will be a 200-hectare city district in Stockholm, housing a population of 20 000 and providing jobs for a further 10 000 people (CABE no date).
The size, location and typology of the area are very similar to those of Nordhavnen. In its current state, the internal connectivity value in Hammarby Sjöstad is 0.76 after Song and Knaap and 0.29 following the proposed method. Vauban in Freiburg (see Figure 7.22) is a medium-density area of 5000 inhabitants and 600 jobs built on the principles of a sustainable model district (Vauban no date). Famous for its well-designed circulation, Vauban has an internal connectivity of 0.91 measured using Song and Knaap’s method and 0.32 following the proposed method.


Figure 7.20: Orenco station, Portland (Google 2011b)

Figure 7.21: Hammarby Sjöstad, Stockholm (Google 2011c)

Figure 7.22: Vauban, Freiburg (Google 2011d)

Based on the examples described in this section, one can conclude that street networks with an internal connectivity value of 0.3 or higher as measured using the

proposed method are suitable for urban design schemes of Nordhavnen’s size and typology. Figure 7.19 presents a street network diagram generated with the Nordhavnen computational model that meets this requirement. Hence, the Nordhavnen model is a viable method for generating street network diagrams with acceptable internal connectivity.

7.4 Conclusions and discussions

The model presented in this chapter is a generative design model following what is referred to as the sensor-reactor-environment pattern (see Chapter 10 for further explanation). In the Nordhavnen model, simple mobile agents navigate the environment by sensing their immediate surroundings and reacting to traces in the environment left behind by other agents. The Nordhavnen model shows how the natural phenomenon of stigmergy can be used to turn a simulation into a generative tool for architects and urban designers. The stigmergic feedback loop leads to the automatic optimisation of road network diagrams but leaves enough control to the user of the model. An experienced designer can use the model creatively and integrate it in the design process. Despite its conceptual simplicity, the model is capable of producing a wide variety of diagrams.

Figure 7.23: Three representations of a diagram (from left to right) – topological, frequency diagram, and combined diagram

The Nordhavnen model integrates components of design synthesis and evaluation. The generated diagrams do not solely display the topological relationships but can also express the estimated usage frequency (see Figure 7.23). While the

topology diagram defines the structure of the road network and allows calculating connectivity and shortest distances between points, the frequency diagram informs about the type of individual roads. The latter implies a hierarchy of roads and gives an architect a guide as to which roads should be designed as boulevards or highways, which can be treated as local roads, and which can be left mainly for pedestrians.

Figure 7.24: A selection of diagrams produced with The Nordhavnen model

The Nordhavnen model could help design teams in several ways. It can be used as a scenario generator since it can produce a variety of realistic road diagrams (see Figure 7.24) in a matter of minutes – something that a human designer can hardly do. The model also allows evaluating the generated diagrams with a single press of a button – measurement routines are seamlessly integrated into the content production application and quantitative parameters are immediately available to designers. The evaluation methods presented in this chapter include internal connectivity measurements based on the number of different types of road junction, and network length parameters. However, other methods of evaluation could easily be introduced.


Although the road generation in the Nordhavnen model is highly automated, one can control this process via the GUI. Driving the model in order to generate meaningful circulation diagrams requires some common-sense logic and some practical experience. The greatest involvement of a designer is required in interactively locating attractor points in the modelled urban site. The location and number of attractors play a crucial role in generating acceptable road network diagrams. For example, a single attractor point always leads to tree-like networks – akin to many existing rural settlements. Since tree-like street networks are greatly discouraged in contemporary urban design, one has to introduce several urban attractors instead. These attractors also need to maintain a certain distance from one another to avoid mono-centric structures in the diagram. Multiple-attractor scenarios match much more closely the usual many-to-many relationship between urban amenities (schools, markets, shops etc.) and local residences in real life. Many-to-many relationships also lead to better connected street networks. Once the designer has mastered the basic rules of locating urban attractors, it is simple enough to generate diagrams with acceptable connectivity. The bulk of successful diagrams presented in this chapter provides sufficient evidence that the Nordhavnen model can produce street networks of high connectivity. Besides the external validation, the model is also validated internally: stigmergic mechanisms ensure that the total length of the network is optimised, and accessibility to individual network nodes is guaranteed by the algorithm that controls agents’ movement. The proposed model for generating road networks is two-dimensional. Nordhavnen is relatively flat compared to the scale of the area, and that makes the landscape’s third dimension redundant.
However, if the model were to be redeployed in a different context, landscape gradients could play a more important role in route selection. How could one use the model on a hilly site? There are several ways in which the model can be adapted to this situation. For example, one way is to define a maximum height gradient that agents can climb; agents can then exclude routes that exceed this threshold from their route selection mechanism. Another way is to pre-process the grid, making the height differences between adjacent nodes less dramatic. This can be done by moving the grid nodes so that the gradients would be within an acceptable range. The latter does not necessarily require sophisticated surface relaxation algorithms, but can be a simple probabilistic model.
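The first adaptation – a gradient threshold on route selection – could be sketched as follows. This is a speculative illustration rather than part of the implemented model; the adjacency mapping, the unit link lengths and the 0.1 default threshold are all assumptions.

```python
def climbable_links(links, heights, max_gradient=0.1):
    """Remove links whose height gradient exceeds a threshold.
    `links` maps each node to its neighbouring nodes and `heights`
    gives node elevations; link lengths are taken as one grid unit,
    so the gradient is simply the height difference."""
    return {node: [n for n in neighbours
                   if abs(heights[n] - heights[node]) <= max_gradient]
            for node, neighbours in links.items()}
```

An agent's roulette-wheel choice would then operate on the filtered neighbour list, so routes steeper than the threshold simply never enter the selection.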

The Nordhavnen model – like any other computational model – has its limitations. The main limitation is purely technical and is imposed by the modelling environment (NetLogo) used for developing the model. Although conceptually simple, the model has so many lines of code that it is hard to manage further changes and add more functionality. If further functionality is required, the model would need to be reproduced in a different development environment. Without the limitations of the NetLogo modelling environment, several useful features could be added. Agents could use more detailed algorithms to choose their way and, for example, try to avoid steep frequency gradients. This would probably lead to a stronger hierarchy in road networks, which some urban designers desire. During the development of the Nordhavnen model one major change was considered but abandoned for the above reason. This change would have introduced a feedback mechanism between the attractors and the traffic created by agents’ movement. In this scenario, a set of rules would have governed the repositioning of attractors according to the frequency of road usage. If such feedback mechanisms were introduced between attractors and traffic frequency, the model would have a greater generative effect. However, the task of this thesis is not to explore embellished generative models, but to discover minimal ones that render obvious benefits to the urban design process. Regardless of the technical limitations, the Nordhavnen model is a prototype capable of producing good-quality circulation diagrams. The model can be used as a variance production mechanism. It is a front-end design tool that can provide a competitive advantage at the early stages of the design process, where the design team has to explore several scenarios in a short space of time.
The Nordhavnen model meets all the requirements stated at the beginning of this chapter: it generates a wide variety of distinct circulation diagrams that are validated in terms of accessibility and connectivity, and optimised for the total length of the network. The outputs of the model should not be confused with fully developed design proposals but should be seen as conceptual diagrams. Although conceptual, these diagrams are constructive diagrams and convey useful information about both the shape and the requirements of the street network.

Nordhavnen competition project credits

Design concepts featuring in this chapter have been developed collaboratively by Slider Studio and Mæ architects for the Nordhavnen international open ideas competition. The Nordhavnen computational model was designed and developed single-handedly by the author of this thesis during and after the competition.


Chapter 8: Case study 2 – an ant colony optimisation algorithm for generating corridor systems

The second case study explores the use of multi-agent systems at the scale of large office buildings. As opposed to the Nordhavnen model (see previous chapter) that generated street network diagrams, the purpose of the multi-agent system in this chapter is to generate office floor plate layouts and analyse these in terms of evacuation and fire regulations. Whereas a simple hill-climbing model is proposed in order to locate stair cores, an ant colony optimisation algorithm is deployed for finding the shortest routes for corridors. As part of this chapter, a brief overview of ant colony optimisation techniques is given. The main purpose of this case study is to illustrate the use of multi-agent systems in a real-life design process and to explore the potential of integrating generative models with traditional design methods. It is expected that generative models will lead to design solutions that are not intuitively apparent to designers. The study also investigates methods that enable designers to control bottom-up computational models efficiently.

8.1 Ant colony optimisation algorithms

Ant colony optimisation (ACO) belongs to the general category of agent-based algorithms inspired by the behaviour of biological ant and termite colonies. ACO mimics the ant colony’s foraging behaviour that helps the colony to locate the food resources closest to the nest while constantly optimising the route to these resources. Such behaviour relies on stigmergy – a form of communication where ants exchange information by modifying their local environment (Dorigo, Birattari and Stützle 2006). Stigmergic communication, as distinguished from other forms of communication, is indirect and non-symbolic, and all the information is exchanged locally (Dorigo, Birattari and Stützle 2006). ACO was initially developed in the early 1990s by Dorigo, Maniezzo and Colorni (Gutjahr 2008). They found inspiration in Deneuborg’s work (Dorigo, Birattari and Stützle 2006) on the foraging behaviour of Argentine ants. A typical route

optimisation process, as observed by Deneuborg et al. (1983), takes place when ants deposit pheromone along the path between their nest and the food source. In doing so, ants follow previously dropped pheromone trails and thus, by deploying a positive feedback mechanism (Dorigo, Maniezzo and Colorni 1996), reinforce already existing trails. Occasionally, due to randomness in ants’ behaviour (Deneuborg, Pasteels and Veraeghe 1983) or disturbances in the environment, ants wander off the existing path and find alternative routes to the food source. Both competing paths attract ants and eventually – owing to the evaporation of pheromone – the shorter path becomes prominent. Pheromone evaporation plays a crucial role here; if the colony cannot forget already discovered paths, it is unable to learn new, shorter ones.

ACO algorithms are metaheuristic algorithms for solving combinatorial optimisation problems (Dorigo and Socha 2007). Typical problems solvable by ACO are the travelling salesman problem (Dorigo, Maniezzo and Colorni 1996) and minimum spanning trees (Neumann and Witt 2008). As a swarm intelligence method (Dorigo, Birattari and Stützle 2006), ACO is suitable for agent-based modelling techniques. Acting on local cues, simple programmable agents can find solutions quickly, and the quality of solutions rises over time (Blum and Dorigo 2004). The oldest and conceptually simplest ACO variant is Ant System (Gutjahr 2008). The main characteristics of Ant System, as outlined by Dorigo et al. (1996), are positive feedback, distributed computing, and the use of constructive greedy heuristics. Although ACO models have been around for almost two decades, new applications still keep emerging, and new improved algorithms are being constantly developed. Only recently have the first few comprehensive and rigorous studies of ant colonies’ runtime behaviour appeared (Gutjahr 2008; Neumann and Witt 2008).

Over the last decade, there have been plenty of applications for ACO.
The most common ones are vehicular traffic planning (Rizzoli et al. 2007), project scheduling (Ritchie 2003) and network routing (Sim and Sun 2002) applications. In fields more closely related to architecture, urban design and engineering, ACO algorithms have been used for ship design (Jin and Zhao 2007), optimising steel frame structures (Camp, Bichon and Stovall 2005), generating street networks in an urban context based on minimum spanning tree computation (Nickerson 2008), and optimising the coverage of personal communication systems in the urban environment using a bitmap database of buildings (Pechač 2002).
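At the heart of the Ant System variant mentioned above is a probabilistic transition rule: an ant chooses its next node with a probability proportional to the pheromone level raised to a power α, multiplied by a heuristic desirability raised to a power β. The Python sketch below illustrates this rule in roulette-wheel form; the nested-dictionary data layout and the default exponents are illustrative assumptions, not prescriptions from the literature.

```python
import random

def choose_next(current, candidates, tau, eta, alpha=1.0, beta=2.0):
    """Ant System transition rule: pick node j with probability
    proportional to tau[current][j]**alpha * eta[current][j]**beta,
    i.e. pheromone level weighted by heuristic desirability."""
    weights = [tau[current][j] ** alpha * eta[current][j] ** beta
               for j in candidates]
    total = sum(weights)
    r = random.random() * total          # spin the roulette wheel
    for j, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return j
    return candidates[-1]                # guard against rounding error
```

With α = 0 the rule degenerates into a greedy heuristic search; with β = 0 the ants follow pheromone alone, which is closer to the trail-following behaviour exploited in the previous chapter.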

8.2 Selected ACO algorithm

There are several versions of the original ACO algorithm proposed by Dorigo in his PhD thesis (Gutjahr 2008). Different implementations of the algorithm have been developed to increase speed and robustness, or to tailor it to the specific needs of particular applications. The algorithm proposed in this thesis is a simple version of ACO, not much different from the Ant System (Dorigo, Maniezzo and Colorni 1996). The aim of developing and testing an already-invented algorithm is twofold – to gain a deeper understanding of how the key parameters affect the performance of the algorithm in practice, and to assess the suitability of the algorithm in a generative design model. The aim here is not to find the best-performing ACO but to test the selected algorithm for generating corridor systems. The ant colony simulation in this case study was set up within a 2D CAD environment in Bentley’s Microstation XM 2008. Before running the ACO algorithm, one had to construct a line drawing representing the network of interconnected paths. This network defined all the possible links the agents could use in order to move from one location to another. All the experiments in this study were conducted on a particular type of network constructed with purpose-made CAD tools designed and programmed by the author of this thesis. While a more thorough overview of these tools is given in section 6.7, it is sufficient here to mention that a closed-loop network (Haggett and Chorley 1969) was made using a Voronoi subdivision algorithm that was modified to suit the purpose of this case study. The Voronoi network possesses an important quality – ordered irregularity – that makes it suitable for the conducted experiments. The Voronoi network connects the nearest nodes and allows calculating topological distance by counting links instead of measuring actual topographical distance.
With a typical node having no more than three links to its neighbouring nodes, the network is simple but offers a sufficient variety of routes through it. The network used in the following experiments was a bounded network (Haggett and Chorley 1969, p. 53) enclosed within a rectangular area consisting of 99 Voronoi patches, 201 nodes, and 300 links. The target and the source patch were selected in relation to the size of the entire network (see Figure 8.1). There were two possible solutions for the shortest route from the source patch to the target. Both of the routes, the green one and the red one, were 10 steps long.

Figure 8.1: The setting out configuration showing the lower (source) patch and the upper (target) patch and the shortest routes
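The thesis measures topological distance by counting links rather than metric length. As a minimal, stand-alone sketch of this idea (the node names and the adjacency dictionary are illustrative, not taken from the actual CAD network), link counting can be implemented with a breadth-first search:

```python
from collections import deque

def topological_distance(links, source, target):
    """Breadth-first search that counts links between two nodes,
    mirroring the network's topological (not topographical) distance."""
    # links: dict mapping each node to the nodes it connects to
    frontier = deque([(source, 0)])
    visited = {source}
    while frontier:
        node, steps = frontier.popleft()
        if node == target:
            return steps
        for neighbour in links[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append((neighbour, steps + 1))
    return None  # target unreachable

# A toy network: most nodes have no more than three links, as in the thesis.
network = {
    "a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"],
    "d": ["b", "c", "e"], "e": ["d"],
}
```

On the real 201-node Voronoi network the same routine would report, for instance, the 10-step length of the two shortest routes in Figure 8.1.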

Step-by-step description of the selected algorithm:

•	Preparatory routines pick up the setting-out configuration (a 2D CAD drawing) and establish the network by defining nodes, links and patches. The target and the source patch are also retrieved from the drawing. Agents are located at one of the nodes at the source patch.

•	Every agent ‘sniffs’ the target pheromone of adjacent links, selects one of the links and moves to the node at the opposite side of this link. The selection mechanism for links used in this study is a roulette wheel type of selection. In the case when links have no pheromone around the agent’s current position, all links have equal weighting and the choice is made randomly. On their way to the target, agents adjust the ‘source’ pheromone at links, which carries information about their origin patch.

•	The further an agent goes, the smaller the amount of pheromone it drops into the environment – the emitting capacity of an unsuccessful agent decreases over time. This prevents agents carrying pheromone too far from the source patch but helps them to develop uniform pheromone gradients inclining towards the source.

•	The dropped pheromone evaporates in time. This helps the colony to forget already discovered paths and find new, shorter ones. Forgetting is an essential component in the colony’s learning procedure.

•	If an agent reaches its target, it turns around and starts finding its way back by following the ‘home patch pheromone’.

•	Test phase. A test agent hill-climbs the pheromone gradient from the source patch to the target. The procedure is similar to the process described above, with the only exception being the selection routine. While in the main routine agents choose between neighbouring nodes by deploying the roulette wheel selection on the connecting links, the behaviour of the test agent is deterministic and it prefers links with the highest pheromone concentration.
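The roulette wheel selection described above, including the uniform fallback when no link carries pheromone, can be sketched as follows. This is a hedged reconstruction; the function and variable names are my own, not taken from the original VBA implementation:

```python
import random

def roulette_select(links, pheromone, rng=random):
    """Spin a roulette wheel where each link's slice is proportional to
    its pheromone value; choose uniformly when no pheromone is present."""
    weights = [pheromone.get(link, 0.0) for link in links]
    total = sum(weights)
    if total == 0.0:
        # no pheromone around the current position: equal weighting
        return rng.choice(links)
    threshold = rng.random() * total
    running = 0.0
    for link, weight in zip(links, weights):
        running += weight
        if running >= threshold:
            return link
    return links[-1]  # guard against floating-point rounding
```

The alternative weighting tested later in the chapter – adding a constant to every link’s value – would amount to replacing the first line of the body with `weights = [pheromone.get(l, 0.0) + c for l in links]`.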

Pseudo code of the selected algorithm:

Initialise
    Get network from CAD drawing
    Place agents
Continue until an acceptably short path is found
    For agents = m
        Choose a next node
        Step
        Update pheromone values
    Test agent
        For test-steps = n
            Choose the ‘smelliest’ link
            Step
    Draw a path
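The pseudocode can be made concrete with a toy, single-pheromone simplification: agents wander randomly out from the source, mark the link back the way they came with an emission that decays per step, the field evaporates, and a deterministic test agent then hill-climbs the gradient from the target back to the source. All names and parameter values below are illustrative, not the thesis implementation:

```python
import random

def run_colony(links, source, target, n_agents=200, adjust=0.3,
               evap=0.01, max_steps=25, seed=42):
    """Toy single-pheromone variant of the algorithm above. tau[(a, b)]
    stores how strongly moving a -> b points back towards the source."""
    rng = random.Random(seed)
    tau = {}
    for _ in range(n_agents):
        node = source
        for step in range(max_steps):
            nxt = rng.choice(links[node])  # explorative random walk
            # the reverse move nxt -> node retraces the agent's trail;
            # the deposit decays the further the agent has travelled
            key = (nxt, node)
            tau[key] = tau.get(key, 0.0) + adjust / (step + 1)
            node = nxt
            if node == target:
                break
        # evaporation: the colony slowly forgets stale routes
        for key in tau:
            tau[key] *= (1.0 - evap)
    return tau

def hill_climb(links, tau, start, goal, max_steps=25):
    """Deterministic test agent: always take the 'smelliest' unvisited link."""
    node, path = start, [start]
    for _ in range(max_steps):
        if node == goal:
            return path
        candidates = [n for n in links[node] if n not in path]
        if not candidates:
            return None  # dead end
        node = max(candidates, key=lambda n: tau.get((node, n), 0.0))
        path.append(node)
    return None

# A tiny bounded network with two equally short routes (0-1-3-4 and 0-2-3-4).
net = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
```

Running `hill_climb(net, run_colony(net, 0, 4), 4, 0)` traces one of the two shortest routes back to the source, the same behaviour the test agent exhibits on the Voronoi network.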

Depending on the network shape, size and complexity, the first paths are usually discovered early on in the process. Typically, these paths are not the shortest ones; only in time, when the pheromone gradient has developed to its mature state, do shorter paths start to emerge. By increasing the distance between the source patch and the target patch, finding routes becomes exponentially more difficult. It often takes the colony longer to settle on a path if there is more than one shortest path – paths compete with one another until one of them becomes dominant. In the most common scenario, the colony’s search converges to a solution in a continuously decelerating manner, with new paths being discovered less and less frequently.

There are several scenarios for finding the shortest route. With the simplistic algorithm described earlier in this study, it is not guaranteed that the shortest path is going to be found at all. Additional changes need to be introduced in order to increase the chances of finding optimal routes. For example, Blum and Dorigo’s (2004) experiments show that limiting the pheromone values to the interval from 0 to 1 leads to more robust algorithms. The colony’s behaviour depends on a number of parameters. Although some of them are tested in the following section, the aim of this study is not to develop solid ACO algorithms, but rather to focus on plugging the ACO into the generative program.

8.3 Testing ACO parameters

This section describes the explorative analysis that was conducted in order to better understand the proposed ACO algorithm. A number of tests were carried out to find the most influential parameters that affect the behaviour of an agent colony. Images in this section use a simple colour coding system. Each line represents a link in the Voronoi network that agents can use for travelling. Whereas the red dotted line shows the shortest path found, the colour of links indicates the pheromone concentration as follows:

•	Red – high concentration

•	Yellow – high-medium concentration

•	Green – medium-low concentration

•	Blue – low concentration
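The exact numeric cut-offs between the bands are not stated in the text; a hypothetical quantiser mapping a link’s concentration to the four colours could look like this (the 25% band boundaries are an assumption):

```python
def pheromone_colour(value, maximum):
    """Map a link's pheromone concentration to one of the four colour
    bands used in the diagrams. The 25% band boundaries are assumed."""
    ratio = value / maximum if maximum > 0 else 0.0
    if ratio > 0.75:
        return "red"     # high concentration
    if ratio > 0.50:
        return "yellow"  # high-medium concentration
    if ratio > 0.25:
        return "green"   # medium-low concentration
    return "blue"        # low concentration
```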

The first test was set up to explore what role the size of the colony plays in finding the shortest path between two patches in the network. There were two possible shortest routes, both comprising 10 steps. The maximum length of a route was defined as 25 steps. In all tests the colony could find several ways to travel to the destination point. The 50 and 200 strong colonies found a route 12 steps long, while larger colonies found both of the shortest routes (see Figure 8.2).

Figure 8.2: Number of agents tested – 50 (top left), 200 (top right), 500 (bottom left) and 2000 (bottom right)

Agents were programmed to use a roulette wheel selection method to choose between available links for navigating the network. This method favoured links with higher target pheromone values, but also gave links with lower pheromone values a proportional opportunity to be selected. Early tests suggested modifying the algorithm in order to improve the route optimisation process. As shown in Figure 8.2, the longer routes found by the 50 and 200 strong colonies partially overlap with the shortest ones. In these tests the found routes were established relatively quickly, while the lack of explorative behaviour prevented further optimisation. To preserve the explorative nature of the agent colony, the algorithm was modified by changing the weights in the roulette wheel selection method. Adding a constant value to each link’s pheromone value increased the chance of links with the weakest ‘smell’ being visited by agents. The test was run again with 200 agents and this time the colony could find the shortest path. However, the colony was unable to maintain this path and kept searching for better ones – it seemed to prefer a slightly longer route. Therefore, further tests were carried out with the earlier version of the roulette wheel selection algorithm.

The agent colony’s failure to maintain the shortest path required further testing of the parameters needed to sustain the colony’s explorative behaviour in its search for the best solution. It was found that higher evaporation rates tend to consolidate a colony’s selected routes much more quickly but often prevented it from finding shorter ones. Lower evaporation rates resulted in a more uniform pheromone distribution and helped the colony to find an optimal way (see Figure 8.3). However, with low evaporation rates and a smaller number of agents, the colony did not find the shortest way and was locked into the solution found early on in the process, because the system’s ability to forget was severely reduced.

Figure 8.3: Evaporation rates tested – 0.00001 (top left), 0.003 (top right), 0.01 (bottom left) and 0.03 (bottom right)

A similar but reversed effect was observed with the pheromone adjustment rate (see Figure 8.4). When agents released smaller quantities of pheromone into the environment, several long routes were established but the shortest path was not discovered. Increasing the adjustment rate helped the colony to find the shortest path, while values that were too high reduced its ability to forget already learned routes. Essentially, the pheromone adjustment and evaporation rates counteract one another. In order to find the shortest routes, these parameters have to work in balance. Increasing or decreasing both parameters proportionally at the same time does not drastically change the colony’s behaviour. The lesson here is that whereas the evaporation parameter can be seen as an environmental property and can be modified globally, the adjustment parameter can be modified at the agent level.
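The counteracting effect can be seen in the update rule for a single, repeatedly reinforced link. With a deposit of `adjust` and an evaporation rate of `evap` per cycle (the names are mine), the pheromone level settles at the ratio adjust/evap, so scaling both rates by the same factor leaves the equilibrium unchanged:

```python
def settled_pheromone(adjust, evap, cycles=10000):
    """Iterate tau <- (1 - evap) * tau + adjust on one link until the
    deposit and the evaporation balance out (tau -> adjust / evap)."""
    tau = 0.0
    for _ in range(cycles):
        tau = (1.0 - evap) * tau + adjust
    return tau
```

Doubling both rates, for example, converges to the same level, which mirrors the observation that proportional changes to both parameters do not drastically alter the colony’s behaviour.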

Figure 8.4: Adjust rates tested – 0.03 (top left), 0.1 (top right), 0.3 (bottom left) and 3 (bottom right)

The tested parameters were explored only within a certain range of all possible values. The task was to find out whether these parameters are the key ones, and it was deliberately chosen not to explore the complexities that would arise from changing the parameters concurrently. The effect of the tested parameters can be summarised as follows:

•	Number of agents. Using a higher number of agents increases the chance of finding the optimal route. However, it also slows down the algorithm.

•	Roulette wheel selection. An alternative selection weighting was tested in order to give less advantage to already established routes and to incentivise the colony’s explorative behaviour. This, however, resulted in the colony’s inability to maintain the shortest routes.

•	Evaporation rate. A slower rate allows a more uniform distribution of pheromone, which leads to the discovery of optimal routes. Too small a value, though, hinders the colony’s explorative behaviour.

•	Adjust rate. The effect is inversely proportional to the evaporation rate – higher rates prevent the colony from finding new paths, while too small a rate yields an uneven distribution of pheromone and hinders the discovery of the shortest path.

Achieving a good distribution of pheromone in the network is of the utmost importance for a successful search. Smaller adjustment and higher evaporation rates generate fragmented pheromone patterns. The opposite combination – higher adjustment rates and lower evaporation rates – tends towards more uniform fields. Although uniformity is needed, small changes in pheromone values can also help the colony discover new and shorter routes. For the best result, a continuous and steep gradient of pheromone is needed between the target and the source point (see Figure 8.5).

Figure 8.5: A continuous gradient of pheromone with extreme values around the source point and the target leads to the successful detection of the shortest path

Each tested parameter seems to have a zone of effective operation. No parameter alone controls the system’s ability to find and forget routes. However, changing a parameter increases or decreases these abilities. Decreasing the evaporation rate, for example, increases the system’s ability to find new routes, but values that are too small could affect its capacity to maintain routes already found. A small adjustment rate prevents agents from finding routes in addition to the first one found. Too large a rate, on the other hand, decreases the system’s ability to forget what it has already learned. The rates at which agents add pheromone and the environment gradually loses it need to be controlled simultaneously. More complex networks naturally urge one to reduce the evaporation rate and increase the adjustment rate. With a small colony of agents this may still not produce the expected results; in that case, having more agents is the easiest way to find the shortest path. To achieve better results with an increased colony size, one has to make sure that the adjustment rate is dropped and the evaporation rate is increased proportionally.

As with many other way-finding algorithms, agents in the proposed ACO often exhibit edge-following behaviour. With the given network shape, they tend to find the target patch more quickly if the patch is closer to an edge. This happens because links along the edges of the tested network are longer than those in the middle and lead to the target in fewer steps. As mentioned earlier, good pheromone distribution in the environment has a crucial influence on the colony’s ability to find quality solutions. In order to keep track of the pheromone field development process, a colour coding system was devised. This enabled the observer to obtain an overview of the whole process at a glance. From the programmer’s point of view, being able to predict the system’s state by observing pheromone distribution turned out to be a very useful method at a later stage, when the ACO algorithm was incorporated within the generative design program.

8.4 Generating corridor networks for office buildings

This section describes how the chosen ACO algorithm was used in an architectural design process. The ACO algorithm itself has been described in detail in the previous sections. The whole process of generating circulation networks for buildings was developed while working together with a team of professional architects on an architectural project. The building in this case study was an administrative building in Tallinn (Tallinna Linnavalitsus – TLV). The brief for the building was given as part of the documentation prepared for an international design competition. The brief asked for a clear definition of circulation within the building, where the internal movement of office workers was segregated from the publicly accessible circulation space for visitors. Besides the required clarity, additional constraints were derived from Estonian building regulations and standards as well as from urban and site-specific conditions.

The team of architects working on the competition project decided to develop a generative methodology for solving design issues typically associated with office buildings. Rather than just helping the team on this particular project, the proposed design methodology was intended to be of a more generic type. As part of the overall methodology, a set of programmed design tools was developed and deployed on the project to help the team of architects design dynamic Voronoi networks (see section 6.7 for a description). The Voronoi network represented the layout of office spaces within the building, and with the dynamic growth of Voronoi patches, the team was able to meet the brief’s spatial requirements. To be more specific, it was possible to control the position and size of Voronoi patches in two ways – by either manipulating the Voronoi seed points or by changing the patches’ internal ‘pressure’. The first option was useful for generating distinct Voronoi subdivision topologies, whereas changing the patches’ pressure altered their size without any topological change. The toolkit helped the team of architects achieve the desired area for patches within constrained networks.

Although it gave them the advantage of quickly generating massing solutions for the TLV building, the shortcomings of this method were equally apparent. Architects, used to working within the traditional office design methodology, could not instinctively work with the typical Voronoi network diagram. The internal subdivisions of the generated building mass demanded an unconventional approach to several design issues. One of the biggest issues was that the team had to come up with a working circulation diagram tailored to the internal office layout. Due to the lack of time during the competition, the team eventually used a mix of both computational and traditional methods. An additional – purely computational – algorithm was proposed shortly after the competition.

Similarly to the Nordhavnen case study (Chapter 7.1), the idea for the prototype was conceived during the architectural competition. The prototype development, however, was completed after the design proposal had already been submitted to the competition. Nevertheless, working through the competition became an essential part of the development cycle. As the prototype program was completed after finishing the competition work, the results presented in this thesis do not entirely match the original competition submission. As mentioned earlier, the design team used a programmable toolset to generate preliminary massing solutions. Once a satisfactory massing was proposed, the team faced a problem – the generated Voronoi subdivisions did not directly suggest to the architects how the internal circulation should work. In order to solve the circulation, the architects had to find suitable locations for stair cores and corridors – something that was far from explicitly apparent in the generated Voronoi diagrams.

Figure 8.6: Problem in the context: the task was to find a quality solution for the internal circulation on all 3 floors of the building. The image shows floors 2, 3 and 4 with the generated subdivision (coloured patches) in the perimeter polygon, and the proposed structural grid

While each Voronoi patch represented a room, the task was to connect all the patches to stair cores. With the goal of creating an economical layout, the team wanted to get away with as few stair cores as possible. Therefore, one had to position the smallest number of cores within the building footprint while following all the given building regulations. Estonian fire regulations for buildings state 45 meters as the maximum distance from any room in the building to the closest two stair cores. Naturally, as a practical measure, the team of architects also had to make sure that the stair cores on every floor would line up.

Given the 2D Voronoi massing diagram, a few solutions for the circulation were considered. The team first hoped to use the property of 2D Voronoi diagrams of approximating polygons’ medial axes, as described by Dey and Zhao (2003). The medial axis can be used for constructing the minimal spanning tree connecting all the Voronoi patches within a given polygon (Haverkort and Bodlaender 1999). This method, however, is applicable only in diagrams where all the Voronoi patches are lined along the polygon’s edges, and the polygon itself does not contain any holes. This was not the case with the TLV building. As the building featured internal atria and some of the patches were disconnected from the perimeter polygon, a different solution was required.

The complex spatial layout and topology called for a more sophisticated approach. A simple modeller algorithm combined with ant colony optimisation was chosen instead of the medial axis transform method. As discussed in Chapter 2 of this thesis, computational modelling becomes generative when modelling algorithms are combined with analytical ones. The modelling part of the program was inspired by the manual exercise executed by the team of architects. The ACO algorithm, as described earlier in this chapter, was deployed for assessing the generated output and feeding it back to the modelling routine. The combination of modelling and analytical modules allowed the computational designer to devise, quite easily, a greedy algorithm that gradually converged towards an optimal solution.

The task for the modeller algorithm was to find acceptable locations for all stair cores within the building perimeter. By moving the cores around, the program was geared to find a solution where all the given Voronoi patches were connected to at least two cores within a given maximum radius (45 meters). To find the shortest routes between stair cores and all Voronoi patches, the analytical ACO algorithm was deployed. The number and the initial location of stair cores were dictated by the user-defined setting-out configuration. The final solution was then sought using a greedy algorithm. Each stair core occupying a Voronoi patch was allowed to move to one of its neighbouring patches. With stair cores looking for local maxima in terms of connectedness, it was expected that a solution would be found where all spaces were connected to stair cores within the given limitations.

Possessing a temporary bodily substance in the form of a Voronoi patch and having the freedom to move around pursuing its target, the stair core object meets the basic definition of a mobile agent. The ‘stair-core agent’, responsible for finding the best solution from its own perspective, was not controlled by the program directly. As it turned out, the proposed program for generating circulation diagrams for an office building deployed two kinds of agents – ants in the ACO module, and more abstract stair-core agents in the generative module. According to Adamatzky (2001), these two breeds of agents are of different kinds – stair-core agents are space-based agents, whereas ants are graph-based.
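The connection criterion used by the generative module can be isolated as a small predicate. Here the `distance` callable stands in for the distances the ACO module reports, and the 45-unit limit mirrors the fire-regulation rule read topologically; all names are illustrative:

```python
def patch_is_connected(patch, cores, distance, limit=45):
    """A patch counts as connected when at least two stair cores lie
    within the allowed maximum travel distance (45 m in the brief)."""
    within_reach = [core for core in cores if distance(patch, core) <= limit]
    return len(within_reach) >= 2
```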

Step-by-step description of the generative algorithm:

•	Preparatory routines pick up the network, define nodes, links and patches, and identify stair cores.

•	Run the ACO (see the description in section 8.2) to find all patches that satisfy the given requirements – every patch needs to be connected to at least two stair cores within the allowed maximum distance.

•	Move a stair core to one of its neighbouring patches and repeat the ACO. If the overall count of connected patches is higher than at the previous stage (global rule), or the number of patches connected to this particular stair core has increased (local rule), confirm the new position. If this is not the case, the stair core is first moved back to its previous location, and then another patch in the neighbourhood is chosen. If a better local or global solution is found, another stair core takes its turn.

•	The simulation stops when all the patches are connected.

Pseudo code:

Initialise
    Position all stair cores
Continue until all patches connected
    Each stair-core
        Move a stair-core to one of the neighbouring patches
        Each patch
            Do ACO algorithm
            Find two acceptably close stair-cores (targets)
            If both closer than 45 m then
                Connect the patch
        If better global or local solution found then
            Do the next stair-core
        Else
            Move the stair-core back to previous location and repeat
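A hedged Python sketch of the greedy relocation loop follows. The connectivity score is passed in as a callable, standing in for the ACO assessment; only the global rule is modelled, and a rejected trial simply never replaces the current layout (which plays the role of moving the core back). All names are my own:

```python
import random

def relocate_cores(neighbours, cores, count_connected, max_rounds=100, seed=0):
    """Greedy permutation: each stair core in turn tries a neighbouring
    patch and keeps the move only when the total number of connected
    patches strictly improves; stop at a local optimum."""
    rng = random.Random(seed)
    best = count_connected(cores)
    for _ in range(max_rounds):
        improved = False
        for i in range(len(cores)):
            options = neighbours[cores[i]][:]
            rng.shuffle(options)
            for candidate in options:
                if candidate in cores:
                    continue  # patch already occupied by another core
                trial = cores[:i] + [candidate] + cores[i + 1:]
                score = count_connected(trial)
                if score > best:
                    cores, best = trial, score  # confirm the move
                    improved = True
                    break
        if not improved:
            break  # no core can improve the layout any further
    return cores, best

# Toy example: five patches in a row; a core 'connects' itself and its
# immediate neighbours, so a central core covers the most patches.
row = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
def coverage(cores):
    return len({p for p in row for c in cores if abs(p - c) <= 1})
```

Starting with a single core on patch 0, the loop shifts it inward until coverage stops improving, which is the hill-climbing behaviour the stair-core agents exhibit in the TLV runs.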

The algorithm is a traditional permutational method for generating built forms and layouts to minimise travel costs (Tabor 1971, pp. 56-59). Permutational methods first create the framework and boundaries, then sort out the initial spatial layout, and then automatically locate functions (stair cores, in this case study). Tabor also includes manual modification, where the designer adjusts the layout to match requirements that are not explicitly expressed in the program.

The architect’s role in running the TLV program is twofold. Firstly, the architect has to set up the initial layout of the building, define the ACO parameters (number of agents, evaporation rate etc.) and the global requirements, such as the maximum distance to the two closest stair cores. The initial configuration has to be a network of shapes drawn in a 2D CAD environment (see Figure 8.6). Using a colour coding system, the architect is also invited to define the initial position of stair cores. Secondly, the architect can influence the generative process by modifying the network shape, by removing or moving around stair cores, or by adding new ones. Once the initial layout has been set up, the program can be left to work its way through the problem until it finds a solution that satisfies all the requirements. Alternatively, the program can be stopped at any time to make further amendments to the layout.

It is often difficult for a designer to estimate how many stair cores are needed to solve the problem. With the program, it is easy enough to test several differently configured layouts and find acceptable solutions. As outlined earlier, the program is configured to connect all patches while the perimeter of the building remains static. However, it is possible to deploy the program as a form-finding device. An obvious tactic for obtaining variable boundary solutions would be to create a larger network of patches than actually required. The program can then run until the desired number of patches has been connected. The TLV program can also be used effectively in ‘manual mode’ – one can prevent stair cores from moving by breaking the respective loop in the algorithm. In this case, the architect is responsible for repositioning stair cores after the computer has analysed the layout.

The proposed generative algorithm outputs simple and easily comprehensible graphics. All the following images in this section can be understood by the same key:

•	White patches – ‘connected’ patches that satisfy the 45-meter rule

•	Red patches – stair cores

•	Black patches – disconnected patches

•	Red lines – suggested corridors

All tests were conducted on a Voronoi network within a square boundary. The network consisted of 99 Voronoi patches, 201 nodes, and 300 links. The sequence of images in Figure 8.7 illustrates how the algorithm solved a given task by repositioning stair cores in order to connect all the patches. Initially positioned in the centre of the area, stair-core agents quickly developed a tactic of spreading out. Curiously, stair cores found good locations close to the edges of the network. This was – as often happens with agent-based models in bounded environments – because of the edge-following behaviour of the ants in the ACO algorithm. Voronoi patches on the edges typically had only four neighbouring patches, instead of the five or more that patches in the middle of the network had.

Figure 8.7: Test run with 6 stair-core agents (from left to right). The algorithm solved the problem in 65 steps


Test results (see Figure 8.7), produced without any attempt at generating a meaningful diagram in architectural terms, showed good performance and validated the algorithm internally. They also indicated some technical weaknesses, which are discussed later in this section. The final architectural diagrams were generated by rerunning the algorithm on the previously created form diagrams (see Figure 8.6) of the 2nd, 3rd and 4th floors. As there was no coordination between stair cores on separate layers, this exercise served the single purpose of finding the right number of stair cores per floor. Working with the 2nd floor form diagram, a solution with 5 stair cores (see Figure 8.8) was found by the algorithm in 51 steps, with the final position of the stair cores being considerably different from the initial position. The cores – originally located in a zigzag pattern along the building’s longest facades – travelled to the centre of the massing and gathered around the internal atria. The generated diagram suggested that shorter routes occur around the atria, and that the ‘ragged’ outer façade is too expensive in terms of circulation length.

Figure 8.8: Generating 2nd floor diagram with 5 stair cores

The 3rd floor diagram (see Figure 8.9) was generated much faster. Instead of 51 steps, it took only 20 to find an acceptable solution. The quicker run is assumed to be due to the better initial distribution of stair cores, but also to the different number of stair cores. It seems that a larger number of stair cores provides more flexibility in terms of the layout.


Figure 8.9: Generating the 3rd floor diagram with 9 stair cores

The 4th floor (see Figure 8.10) took the longest to run – 96 steps. The initial configuration was again one of the main reasons for this. As stair-core agents gathered around the internal atria, some of them got stuck behind other agents. Some cores that had already found good locations resisted moving. While competing for patches, these stair cores were often approached by other cores, triggering further movement. Thus, by competing with one another, the stair cores collectively achieved the bigger task of connecting more patches.

Figure 8.10: Generating the 4th floor diagram with 6 stair cores


Since the diagrams for all floors were generated irrespective of the floors above or below, another stage in the design process was needed. The final stage involved some manual modification of the diagram and deploying the analytical module of the algorithm one more time. Figure 8.11 compares the initially generated solution with the final manually modified solution, where stair cores at different levels overlap. With the new constraint and different Voronoi network layouts on each floor, it turned out to be much harder to connect all patches on all floors. As a consequence, some of the Voronoi patches remained unconnected. However, this was acceptable in the context of a relatively loose competition brief. An obvious solution to that problem was to increase the number of stair cores. Alternatively, if the final building boundary had not been fixed, a designer could have preferred to introduce a network with a larger number of patches, encouraging quite a different distribution of cores.

Figure 8.11: Generated solution (top) versus manually modified solution (bottom)

The output of the proposed algorithm should not be seen as a final design proposal. Although the generated solutions represent circulation systems in the building, they are still diagrams and subject to further interpretation. There are two main reasons why the generated diagrams cannot be regarded as traditional architectural drawings. Firstly, the proposed program handles only topological relationships; the 45-meter rule has to be interpreted as a topological rule. Secondly, in designing the program, the Estonian fire regulations for buildings were interpreted somewhat loosely. The program ignores the fact that the regulations add another requirement to evacuation routes – the maximum allowable distance from a room to the two closest stair cores has to be reduced if the two evacuation routes overlap. Nonetheless, architects can produce results that satisfy the given rules using the generative program. The number crunching task can be left to the computer, while people can focus on the more creative aspects of designs.

Since the generated diagrams have both qualitative and quantitative properties, they can be described as constructive diagrams in Alexander’s (1964) terms. The quality of the diagrams lies in the topological layout of internal spaces and their connectivity to stair cores, satisfying Alexander’s definition of the form diagram. Quantitative aspects, such as the maximum allowable distance between rooms and corridors, on the other hand, meet the requirement diagram definition.

8.5 Observations and conclusions

The proposed computational model for generating topology-based room layouts and circulation systems falls into the category of permutational methods (Tabor 1971, pp. 56-59). These methods usually involve four stages: creation of the framework, creation of the initial layout, automatic modification of the layout, and manual alterations to it. Getting good quality results quickly relies on smooth and coordinated actions at all of these stages. However, the best results are sometimes achieved by skilfully jumping between stages and creating loops of actions in the process by repeating some stages and skipping others. Depending on the nature of the project, the most time consuming stage is usually the first, but with the right kind of tools (see section 6.7 for a description of the dynamic Voronoi network tools), one can greatly speed up the workflow.

The originality of the proposed design process for the TLV building lies in the algorithm deployed at the automatic modification stage. The combination of generative and analytical modules creates a unique algorithm that – assuming that better solutions exist – gradually optimises the initial configuration. There are also a few drawbacks to the algorithm. Firstly, the speed of finding acceptable solutions could be much better. The speed is, however, not so much dependent on the actual algorithm as on the CAD platform and the particular deployment language (Visual Basic for Applications). Written in a lower-level programming language, the algorithm could work much faster, making the workflow more fluent. The second issue emerged while using the ACO algorithm to assess the generated layouts. Due to the probabilistic nature of the algorithm, identical configurations sometimes led to different results. To overcome this problem, certain amendments need to be made to the ACO algorithm to obtain consistent results. This can be done either by using the same array of randomly generated numbers for simulating random choice, or by improving the search algorithm for guaranteed results.

There are a few aspects of the behaviour of the generative algorithm that are worth highlighting here. The search process for the optimal core layout is catalysed by the competition between stair-core agents trying to connect to as many patches as possible. Competition between individual stair cores pushes them into collaboration in the global quest for better overall connectivity. Collaboration is, therefore, the result of competition. Another interesting behaviour – edge following – renders some of the Voronoi network’s characteristics more visible. As shown in the test run and in generating diagrams for individual floors, stair-core agents tend to locate themselves either along the building perimeter or around the internal atria. This clearly indicates the incoherencies in the otherwise quite uniform network topology – travel distances along the outer or inner perimeter are considerably smaller than in the middle of the network.

The diagrams generated for the TLV competition are constructive circulation diagrams by nature.
As illustrated in Figure 8.12, the formation of a constructive diagram takes place when the form diagram is fed into the generative program to produce circulation systems. The output of this process encompasses both the topological layout of rooms and the number and relative locations of stair cores – the diagrammatic form satisfying circulation requirements.


Figure 8.12: Form diagram + generative process = constructive diagram

Building legislation and local spatial constraints provide a good source for designing fitness functions for generative design algorithms. Observing the traditional design workflow provides inspiration for developing a heuristic computational approach. However, there is a great benefit in having a working prototype ready before the actual design process starts. In practice – whether in the context of design competitions or commissioned work – there is usually very little chance of developing a completely new approach. It is easier to produce useful results by adopting already existing methods.

To apply generative modelling methods in the context of a practical architectural design project, it is essential for a computational designer to work together with a team of professional architects. Without accepting the traditional workflow, there is a danger that the generative contribution gets lost. As usually happens, members of the team, working on different issues, modify one another's tasks – the design constantly develops over time and team members have to acknowledge it. Adaptive integration of one's work is, therefore, of the utmost importance. From the computational designer's perspective, computational models have to be flexible enough to endure changes in the design. Although never actually deployed in the context of teamwork, the algorithm developed for the TLV building is potentially suitable for flexible integration. The distributed nature of an agent system allows it to deal successfully with external perturbations (Bonabeau, Dorigo and Theraulaz 1999, p. 16) and reconfigure itself in response to changes in the environment. The TLV algorithm can potentially be adjusted to work simultaneously with a form-finding exercise where the Voronoi network layout is dynamically altered either by designers or by other generative algorithms.


The proposed algorithm for locating stair cores within a given layout can lead to solutions that are not immediately obvious to designers. Naturally, some floor plate layouts are simple enough that designers do not need computational tools for finding the right stair core configuration. However, when the layout is more complex and the number of stair cores is larger, solutions become more difficult to find and the benefits of computation become apparent. With complex floor plate layouts, it is hard even to define the right number of stair cores needed, let alone to find an acceptable configuration. Nevertheless, designers need a degree of control in order to quickly test different options. In this case study, control is offered through the setting-out configuration – a designer is expected to create an initial layout that becomes the basis for further computational modifications. During computational modifications, it is essential that the progress is visually communicated back to the designer. The designer can then stop the process at any time and adjust the layout manually – add or remove stair cores, for example. In this way, the designer remains actively engaged and the solution can be found much more quickly.
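The stop-and-adjust workflow described above can be expressed as a simple loop. The sketch below is illustrative only: the one-dimensional list of stair-core positions, the `score` and `mutate` placeholders, and the `manual_edit` callback are assumptions standing in for the actual TLV generative and analytical modules.

```python
import random

def interactive_search(layout, steps=200, pause_every=50, manual_edit=None):
    """Automatic modification punctuated by optional manual intervention."""

    def score(l):
        # Placeholder fitness: prefer stair cores that are spread apart.
        return sum(abs(a - b) for a in l for b in l)

    def mutate(l):
        # One automatic modification: nudge a randomly chosen core.
        l = list(l)
        i = random.randrange(len(l))
        l[i] += random.choice([-1, 1])
        return l

    best = layout
    for step in range(1, steps + 1):
        candidate = mutate(best)
        if score(candidate) >= score(best):  # keep only non-worsening moves
            best = candidate
        # Periodically hand control back to the designer, who may
        # add or remove stair cores before the search continues.
        if manual_edit and step % pause_every == 0:
            best = manual_edit(best)
    return best
```

In a real deployment the `manual_edit` hook would be a pause in the modelling interface rather than a callback, and `score` would be replaced by the connectivity analysis of the generative model.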

TLV competition project credits

The competition entry for the TLV building was submitted by Slider Studio Ltd. The TLV computational model was designed and developed single-handedly by the author of this thesis during and after the competition.


Chapter 9: Controlling the diagram

This chapter reflects on the case studies and prototypes discussed earlier in this thesis. The objective herein is to analyse these models in order to establish some principles of how control over bottom-up simulations can be maintained, and to highlight some general aspects of generating circulation diagrams with multi-agent systems. There are several issues in deploying bottom-up models that need to be considered before the selection of a particular control mechanism can be made. While it is important to understand how circulation diagrams develop, the greatest challenge is to decide whether a model is capable of generating adequate diagrams for given design problems. It is argued that understanding the development process is the key to controlling the model, while the flexibility and the sensitivity of the model also play important roles. Once the general aspects of the development process are understood, the exact methods of control can be discussed.

Why is gaining control such an important issue in the first place? The answer is quite straightforward – generated diagrams are not the end product but intermediate tools that help the designer to shape design proposals. If anything changes in the design brief or if spatial requirements change, the diagram may have to change as well. Without control over the underlying multi-agent model, the designer has no means of making appropriate changes to the diagram. Only if the diagram responds to the changed requirements and brief can the model truly become an integral part of the design workflow.

9.1 Emergent behaviour of and in agent colonies

Emergence is often used to describe phenomena that are not explicitly prescribed in the model, and it is well observable in many multi-agent systems. Gilbert (1995) argues that we can talk about emergence in agent colonies only when we can discover an exact description of the global state of the system. In circulation agent colonies, this concise description of the global state manifests itself in clear diagrams that can be analysed in topographical and topological terms. Generating successful diagrams relies on the ability of dynamic systems to produce certain recurring patterns. These patterns do not have to be strictly repetitive, but they often display a high level of orderliness. Some models produce a great variety of forms; others are fairly limited and generate recurring patterns. The common underlying principle in these models is that the description of generated patterns is not defined in the code itself, but is created when agents execute their behavioural program. An important property of emergent patterns is that they are usually flexible and can therefore be used in different situations and in dynamically changing environments.

The critical question from the designer's perspective is how to control emergent models. In order to answer that question, one needs to understand the reasons for emergence. It is argued herein that one has to learn how the colony's behaviour emerges from individual behaviours before the appropriate control mechanisms can be chosen. 'Emergence' is a somewhat perilous word; it can easily be assigned to patterns that look complex from the observer's perspective but are actually predefined in the model's code. Therefore, one needs to carefully analyse the process that has generated these patterns. It is quite simple to identify emergent patterns when models are constructed following agent-based modelling techniques. If agents' behaviour is based solely on the information received from their immediate environment and this information is processed by their internal sensory-motor routines, then the resultant movement patterns in the colony deserve to be called emergent. There is another point of possible confusion, however. Colonies of randomly behaving agents can also – by blind chance – form patterns that seem to be ordered and can therefore be mistaken for emergent ones. These patterns can sometimes be observed as snapshots of the process, although they may not necessarily be persistent. Hence, they cannot be said to be emergent.
One can talk about emergence in circulation models only if there is evidence of movement flow of a certain persistency. A system cannot always be distinctly classified as persistent or not, but the persistency of a model can be measured as a state of the model in time. Movement patterns can be emergent only if they are sufficiently persistent to be described as a tendency in the colony's movement during a defined period of time.

During the active research stage for this thesis, several multi-agent models were built and studied in order to understand the behaviour in and of colonies. The most successful of these models were covered in Chapter 6, and some new ones were also introduced in the case study chapters (see Chapter 7 and Chapter 8). While studying these models, several kinds of emergent phenomena were observed and it became clear that emergence can be observed at two distinct levels – at the colony level and at the level of individual agents. Gilbert (1995) claims that all multi-agent systems can be described in terms of actions of individual agents or, at the global level, in terms of actions of the colony. Similarly, emergent behaviour can be described at two levels:

1) The behaviour of an individual agent in the colony emerges from its interaction with certain features in its environment. There are a number of behaviours that are common to many of the studied models. Such behaviours can be seen as general principles of agents' movement, although they are not explicitly defined in the algorithms that control the movement. For example, quite common emergent behaviours are crowd following in flocking models and agents generating circular movement patterns by following their own trail. The most often observed emergent behaviour, however, is agents following edges in the environment. In fact, edge following is so common that in some of the studied models it happened even when the prototype had serious flaws in the agent's design or in its movement controller algorithms. Although edge following can be the first sign of emergent behaviour, it has to be treated with caution because it can simply be an artefact of a faulty algorithm or a mistake in the computational logic. Both the truly emergent and the artefactual edge following occur in similar locations in the model: along the objects in the model, or along the edges of the simulated world.

2) Emergent behaviour of the colony can be observed when the motion of individual agents is coordinated at a higher level. Possibly the most famous computational model displaying the emergent behaviour of a colony of agents is Reynolds' (1987) flocking model. Although this model has not been used in the experiments for generating circulation diagrams, it can reveal generic principles of flocking agents. Later in this chapter, the flocking model is analysed in order to explain the sensitivity of multi-agent systems. The behaviour of the flock can be depicted by describing it as the behaviour of a single entity. It can be said, for example, that the flock is moving left, or that it is splitting into two. The behaviour of space-forming agents, on the other hand, can be better conveyed by describing the colony as a structure. It can be said that the colony forms a Voronoi structure (see section 6.7), for example.

Both of the described emergent phenomena – flocking behaviour and spatial patterns – can be observed simply by watching the agents. Other types of emergent behaviour of the colony are not always immediately apparent. In some instances, the general trend of movement can be discovered by visualising the data retrieved from the environment rather than by observing agents. This is the case with many models using stigmergic communication. Patterns of movement in such models can often be observed and studied better when the stigmergic messages left behind by agents are rendered. In the Loop Generator prototype (see section 6.1), for example, the paths of movement can be traced by giving the 'pheromone' – a substance that is created and perceived by agents – a graphical representation. Something that is very hard to spot simply by looking at the agents can be made immediately obvious by giving the stigmergic 'message' a graphical appearance. Graphics can be very powerful indeed – not only for understanding the behaviour of the colony, but also for drawing out the diagram. Similarly to Loop Generator, the shortest path formation in ant colony optimisation algorithms can easily be observed by visualising the markers left behind by agents. Whereas individual agents can choose different paths, the most used path can easily be identified by looking at the 'pheromone' concentration in the environment. Emergent coordination in agent colonies that rely on quantitative stigmergy can usually be visualised by highlighting stigmergic markers in the environment. Emergent coordination that is based on qualitative stigmergy, however, is much harder to achieve and to observe. Nevertheless, as explained in section 6.6, it is possible to get two stigmergic building agents to cooperate and build a structure together. More often, these agents compete with one another for available resources, and cooperation happens only occasionally.
Additionally, when cooperation is achieved, it is very difficult to capture or replicate it. The problem is that two cooperating agents not only have to have finely tuned (or evolved) behavioural controllers, but they also have to be positioned correctly with respect to one another and their actions have to be synchronised. But once two agents are locked into a loop that holds them together and makes them work in unison, a new system is born. Maturana and Varela (1980) call this type of coordination system coupling. More specifically, they claim that:

“Whenever the conduct of two or more unities is such that there is a domain in which the conduct of each one is a function of the conduct of the others, it is said that they are coupled in that domain.” (Maturana and Varela 1980, p. 107)

They also explain that although both of the agents retain their identity, system coupling leads to the generation of a new entity that may exist in a different domain from the coupled agents. Understanding system coupling principles also helps to understand the distinction between the behaviour of individuals and the behaviour of the colony.
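The stigmergic rendering discussed in this section can be illustrated with a minimal sketch: agents deposit a 'pheromone' value into a grid as they move, the field evaporates each tick, and it is the field – not the agents – that is drawn. The grid size, deposit and evaporation values below are arbitrary choices for the sketch, not figures taken from the Loop Generator prototype.

```python
import random

SIZE, DEPOSIT, EVAPORATION = 20, 1.0, 0.05

def step(agents, field):
    """Move each agent one random step and update the stigmergic field."""
    for a in agents:
        a[0] = (a[0] + random.choice([-1, 0, 1])) % SIZE
        a[1] = (a[1] + random.choice([-1, 0, 1])) % SIZE
        field[a[1]][a[0]] += DEPOSIT            # leave a stigmergic 'message'
    for row in field:
        for x in range(SIZE):
            row[x] *= 1.0 - EVAPORATION         # evaporation erodes unused paths

def render(field):
    """Give the 'pheromone' a graphical appearance: darker glyph = more traffic."""
    shades = " .:*#"
    return "\n".join(
        "".join(shades[min(len(shades) - 1, int(v))] for v in row)
        for row in field)

if __name__ == "__main__":
    field = [[0.0] * SIZE for _ in range(SIZE)]
    agents = [[random.randrange(SIZE), random.randrange(SIZE)] for _ in range(10)]
    for _ in range(200):
        step(agents, field)
    print(render(field))
```

With random walkers the rendered field shows only diffuse traces; in the actual prototype, the agents' sensory-motor rules make the deposits reinforce one another into legible paths.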

9.2 Development of the diagram

Having established that the individual behaviour of an agent emerges from its interaction with the environment, while the coordination of individuals in the colony emerges from the behaviour of these individuals, it is a good time to scrutinise the relationship between the development of the diagram and the coordination process. It is argued here that understanding the connection between the development process and coordination is key to gaining successful control over the model. However, this task is not a trivial one. Coordination in the colony is, after all, an observed phenomenon and – although simple enough to be described in qualitative terms – is difficult to quantify. Coordination can be seen as an ultimate characteristic that describes how well the actions of individual agents are synchronised. Unfortunately, all individual behaviours are fairly complex and equally difficult to measure. Instead, one has to resort to measuring certain quantitative parameters of individual agents, such as their position, speed of movement and direction of movement. However, the coordination of the colony cannot be measured simply as the sum of a certain parameter over all individuals. For example, in the case of Reynolds' (1987) flocking model (see also Figure 9.6), one cannot draw far-reaching conclusions about the level of coordination by calculating the average distance between agents. Nor can conclusions be drawn by calculating the average movement direction of all agents. The behaviour of the flock is too complex to be quantified using this kind of reductionist measure.

Nevertheless, it is possible to quantify coordination when one looks at the problem from a different angle. It is proposed that coordination can be measured between the closest neighbours in the colony. Comparing the parameters of an individual agent with those of its closest neighbour gives an insight into how well these two are coordinated with one another. For example, if the distance to the closest agent is relatively large compared to the size of the 'world', then the agent can be said to be poorly coordinated with the colony. Once calculations for individual agents are carried out, an average value of the coordination parameter can be declared.

In order to validate the proposed method of measuring coordination within a multi-agent colony, the Loop Generator prototype (see section 6.1) is used as a case study. Although it is acknowledged that different coordination parameters are needed for analysing different prototype models, it is the general approach – quantifying the behaviour of the colony based on closest-neighbour calculations – that foremost needs to be validated. The coordination parameters that are thought to best characterise the colony's behaviour in Loop Generator are alignment and cohesion. Whereas the cohesion parameter is calculated as the average distance to the closest neighbour in the colony, the alignment parameter quantifies the average angle between the agent's movement vector and that of its closest neighbour. The alignment parameter indicates the coordination of movement direction; the cohesion parameter expresses the compactness of the colony.
In order to understand the behaviour of the colony, the cohesion and alignment parameters need to be scrutinised together. A small cohesion distance does not yet mean that the colony is well coordinated – chances are that agents are simply clustered around some hot spots and their movement is fairly random. But if the alignment angle is small as well, one can be sure that agents are not just close together but are also following the same paths. The proposed algorithms behind the alignment and cohesion calculations are straightforward and need no detailed description. In both cases, the closest neighbour for each agent is found by iterating through the whole colony and finding the smallest Euclidean distance between the agents. The average cohesion, which is expressed in abstract model units (see Figure 9.1 and Figure 9.3), is then calculated by aggregating all distances to the closest neighbour and dividing by the total number of agents. Similarly, the alignment parameter is found by calculating the angle between the heading vectors of the closest agents. Because Loop Generator generates two-way movement along the same path, the alignment value is capped at 90 degrees and the angle between movement vectors pointing in opposite directions is counted as 0. Hence, the average alignment angle in a colony of randomly moving agents is around 45 degrees.
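The calculations just described can be written down in a few lines. The sketch below is a reconstruction from the description above, not the thesis code; agents are assumed to be `(x, y, heading_in_degrees)` tuples.

```python
import math

def closest_neighbour(i, agents):
    """Index of the nearest other agent by Euclidean distance."""
    others = (j for j in range(len(agents)) if j != i)
    return min(others, key=lambda j: math.dist(agents[i][:2], agents[j][:2]))

def cohesion(agents):
    """Average distance to the closest neighbour, in abstract model units."""
    return sum(
        math.dist(agents[i][:2], agents[closest_neighbour(i, agents)][:2])
        for i in range(len(agents))) / len(agents)

def alignment(agents):
    """Average angle (degrees) between each agent's heading and that of its
    closest neighbour.  Because movement along a path is two-way, the angle
    is folded so that opposite headings count as 0 and the cap is 90."""
    total = 0.0
    for i in range(len(agents)):
        j = closest_neighbour(i, agents)
        theta = abs(agents[i][2] - agents[j][2]) % 180.0
        total += min(theta, 180.0 - theta)
    return total / len(agents)
```

Two agents heading in exactly opposite directions along the same path therefore score an alignment of 0, and a colony of randomly headed agents averages about 45 degrees, as stated above.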

Figure 9.1: Alignment and cohesion in 2D

Figure 9.1 depicts a typical graph of the alignment and cohesion parameters in a 2D 'world'. The development of the respective diagram is shown in Figure 9.2. As one can see from the graph, alignment in the colony improves quickly at the beginning of the simulation. The same trend can be observed visually – already after 100 steps a recognisable circulation diagram has developed (see Figure 9.2). Cohesion follows a similar trend to the alignment graph, with the average distance to the closest neighbour dropping by almost 30% during the first 200 steps. The rapid descent in the alignment graph stops sooner than it does in the cohesion graph. This means that agents have gathered around certain areas but channels of movement are not yet fully formed.

Figure 9.2: Development of the diagram in 2D


After the initial rapid development of the diagram and the changes in the graphs, the simulation settles down to follow a much smoother development pattern. Then something interesting happens and the cohesion graph takes a short but steep turn back – the level of coordination in the colony is suddenly reduced. Similar, if less dramatic, fluctuations happen in the alignment graph. Although the general trend – a constantly slowing decline – remains the same, sudden changes keep occurring. One can only conclude that, for some reason, existing circulation paths are abandoned by the colony while new ones are generated. This can be observed in the development of the visual movement patterns as well. The diagram can be relatively static during a certain period (see steps 100-200 in Figure 9.2) and then suddenly change (steps 200-250) to form new circuits or an entirely new circulatory system.

Figure 9.3: Alignment and cohesion in 3D

The 3D version of Loop Generator has a similar development cycle (see Figure 9.3 and Figure 9.4) to the 2D prototype, with some significant exceptions. Curiously enough, compared to the 2D 'world', the development cycle appears to be faster in 3D. Otherwise the alignment and cohesion graphs follow similar trends – the rapid improvement of coordination at the beginning of the simulation gradually eases off when several closed circuits develop in the diagram. The fluctuations that are apparent in the graphs of the 2D prototype are less frequent and much smoother. One could speculate that this is caused by a different sensory configuration – agents in the 3D 'world' have 10 sensors instead of the 3 in 2D. However, it is more likely that the nature of 3D diagrams prevents the occurrence of rapid changes and facilitates a continuous and smoother diagram development. Agents in the 2D 'world' are more likely to cross paths with other agents than in 3D. Crossing an existing but weakly developed path forces an agent to select between several routes, which causes disturbances in the colony's behaviour and, as a consequence, it takes longer to form the diagram. This is also manifested in the shape of the diagram – 3D diagrams feature mainly Y-shaped branching points, whereas in 2D some of the junctions are also X-shaped.

Figure 9.4: Development of the diagram in 3D

It is worth recalling here that the Loop Generator prototype is largely a deterministic model. Apart from the randomised setting-out configuration, the only probabilistic choice is made by the agent when the sensory input is not differentiable and there are two or more possible movement directions. It is believed that changes in the diagram and fluctuations in the coordination parameters are inherent in the model. The coordination parameter graphs follow roughly the same development pattern. The length of the development cycle depends on several circumstances. The greatest influence is the density of agents in the 'world'. Too many agents can lead to a situation where the concentration of 'pheromone' is uniform across the environment and no diagram appears. Too few agents, on the other hand, prevent the emergence of continuous paths. The relaxation time of the model – the time that it takes to reach the stage where the first recognisable circulation diagram has developed – depends heavily on the size of the colony and the size of the 'world'. Besides size, there are several other parameters that play crucial roles in the diagram's development process. The next section scrutinises the effects of some of these parameters on the behaviour of the colony.

9.3 Flexibility and sensitivity – an exploratory analysis of multi-agent models

Multi-agent models offer several opportunities for the designer to engage in the development process of circulation diagrams. Although not all multi-agent models are inherently interactive, they are dynamic models and it is relatively easy to build ones that respond to input from the designer. Since agents in such models acquire information about their surroundings locally, they can cope with a dynamically changing environment. As long as the sensory input can be recognised and processed by the agent, the global structure of the environment can be of any configuration or shape. The environment does not have to be static and can accept new input.

Different multi-agent models are flexible to different degrees. Some models are fairly rigid and function properly only within a narrow range of environmental input; others are more dynamic and remain operational even when drastic changes take place in the agents' environment. In general, flexibility can be defined as the system's (i.e. the agent's) ability to cope with disruptions in the system's environment. The environment can be inherently dynamic in the sense that two or more environmental processes interact with one another, or it can be made dynamic by allowing input from the user who controls the model. A truly flexible model can adapt to various situations and change its behaviour rapidly. In reality, every model works efficiently only within a certain range of changing environmental parameters. That is mainly because building truly dynamic systems is very complex and time consuming (Bonabeau, Dorigo and Theraulaz 1999). Moreover, flexibility is important only if it serves the purpose of the model. The underlying purpose of all circulation models in this thesis is to generate diagrams within different spatial configurations. While different in layout, these spatial configurations should be of the same spatial representation (see section 5.2), and the objects in them should be of the same representation too.
For example, if an agent-based circulation model is built to work in a setting-out configuration of continuous spatial representation, comparative tests can be executed solely in this particular type of environment. This does not necessarily reduce the complexity of the model, but it aligns quite well with the workflow of the designer. The designer can use one tool (e.g. a CAD package) for preparing the content as long as the multi-agent model is built to accept the output formats of this tool. The model's flexibility, therefore, is foremost valued for its ability to generate diagrams in different spatial layouts of the same spatial representation, and not for coping in truly dynamic environments.

Flexibility in multi-agent models is often associated with learning (e.g. Wan and Braspenning 1996; Ramos, Fernandes and Rosa 2007). Learning at the level of individual agents is a well-explored domain and, according to Vidal (2003), ought to be used when the designer of the system cannot define all the possible input configurations that agents may encounter during the simulation. Implementing machine learning algorithms in order to facilitate learning in agents is bound to make the multi-agent system a lot more complex (Vidal 2003). Therefore, this thesis is not so much interested in learning at the individual level as it is in learning at the colony level. Learning at the colony level allows a system of circulation agents to adapt to different environmental layouts and still produce useful circulation diagrams. This can be achieved inexpensively through stigmergic communication, which is a lightweight solution and helps to keep the model relatively simple yet flexible. It does not require individual agents to learn anything new in the sense that they do not have to change their sensory-motor coupling rules. According to Vidal (2003), learning agents are most often "selfish utility maximisers" – they seek to gain payoffs from participating in the simulation. However, the colony does not need to gain anything in order to generate appropriate diagrams. The adaptation of the circulation diagram is propelled by the inherent flexibility in the system of non-learning autonomous agents. In this respect, the diagram is the result of the colony's self-organisation.

In contrast to the setting-out configuration, the development of circulation diagrams can also be controlled by manipulating parameter variables in the model. In order to gain good control over a multi-agent model, one needs to find the parameters that have the greatest impact on the behaviour of the agent colony. Some parameters are more sensitive, and the model can produce acceptable results only when these parameters fall into a narrow range of values.
Other parameters are less sensitive and can take a much wider range of values while the model still produces useful output. Naturally, it is critical to get the more sensitive parameters right during the model's design and building process – getting these parameters wrong can be a costly mistake in terms of time, but even more so in terms of producing new knowledge. Once the model is built, some parameters can be tested interactively at runtime. Whereas an individual parameter can be changed instantly, it can take a while before the resulting changes in the colony's behaviour become apparent in the diagram. Many of the multi-agent models tested in this thesis have a time lag that can prevent one from grasping the effect of changing the respective parameters immediately. The response time is mainly dependent on the lightness of the model – the faster the model runs, the quicker the effect of changed parameters can be observed.

The difficulty of finding the right set of parameters for a successful model is revealed when several parameters are tested or fine-tuned in parallel. Additional complexity is introduced when these models are stochastic. A large number of runs needs to be carried out in order to quantitatively validate the model, and the validation may quickly become intractable (Brimicombe and Li 2008) – there are simply too many variables in the model. Batty and Torrens (2005) suggest that qualitative validation is more plausible in such cases. The following two examples illustrate both the qualitative and the quantitative ways of exploring the sensitivity of a model. Both examples investigate the effect of a critical parameter on the behaviour of the colony, but the approach is slightly different. Firstly, the effect of an agent design parameter in the Loop Generator prototype (see section 6.1) is explored qualitatively by observing the patterns of 'pheromone' distribution in the environment. The parameter tested is the angle α (see Figure 6.3) between the agent's frontal and lateral sensors. Secondly, the behaviour of the colony in the well-known flocking model (Reynolds 1987) is explored both quantitatively and qualitatively. The parameter under scrutiny is similar to that of the first example – the angle of the agents' field of view. Agents in Reynolds' flocking model compute their behaviour according to the positions of their closest neighbours, and this parameter defines how the closest neighbours are selected – only those that reside within the given field of view are included in the calculations. Compared to many advanced multi-agent models, Loop Generator is a very simple model. However, this does not mean that analysing it quantitatively is simple.
Instead, it is suggested that, in the context of generating circulation patterns, it is more viable to conduct qualitative analysis purely by observing the generated 'pheromone' patterns. Figure 9.5 depicts typical tests carried out with different α (angle between the frontal and lateral sensors) values. While section 6.1 explains the behaviour of the colony in greater detail, it suffices here to say that the proposed way of analysing the effect of changing the agents' sensory configuration is purely qualitative. The 'pheromone' distribution patterns in the different tests are clearly different and allow one to choose an appropriate configuration according to the desired effect.

Figure 9.5: The angle between the front and the side sensor (α) has a crucial impact on the generated diagrams. From left to right: α = 20, 45, 70 and 95 degrees

The flocking model offers a similar opportunity for qualitative observation. However, there is an important distinction between observing the flocking model and observing the Loop Generator model. Whereas in Loop Generator the observed subject is the diagram left behind by agents, in the flocking model it is the agents themselves that are being observed. In addition to qualitative analysis, the behaviour of agents in the flocking model can be analysed quantitatively. Whereas qualitative analysis describes the behaviour of the whole flock, quantitative analysis is carried out with respect to individual agents.
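The field-of-view parameter examined in these tests can be sketched as a neighbour filter. This is a generic boids-style reconstruction, not the exact implementation used in the experiments; the function name, the `radius` cut-off and the tuple representation are assumptions.

```python
import math

def visible_neighbours(agent, others, fov_deg, radius=10.0):
    """Return the agents inside `agent`'s field of view.

    Agents are (x, y, heading_in_degrees) tuples.  A neighbour is visible
    if it lies within `radius` and within +/- fov_deg / 2 of the heading;
    only visible neighbours would enter the flocking computation."""
    ax, ay, heading = agent
    visible = []
    for other in others:
        dx, dy = other[0] - ax, other[1] - ay
        if (dx == 0 and dy == 0) or math.hypot(dx, dy) > radius:
            continue
        bearing = math.degrees(math.atan2(dy, dx))
        offset = (bearing - heading + 180.0) % 360.0 - 180.0  # signed, in [-180, 180)
        if abs(offset) <= fov_deg / 2.0:
            visible.append(other)
    return visible
```

With a narrow `fov_deg`, an agent that drifts past its neighbours loses sight of them entirely – the mechanism behind the flock splitting into small groups – whereas a wide angle keeps all nearby agents mutually visible.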

Figure 9.6: Flocking agents. Testing the behaviour of agents with different field-of-view angles

Figure 9.6 illustrates four tests with different field-of-view parameters – the visibility angle of individual agents that defines which other agents are included in the flocking computation. In the first test with a narrow field-of-view, agents do not form a coherent flock and the colony breaks up into small groups of agents. Once an agent breaks away from the flock, it cannot ‘see’ other agents and wanders away, attracting a few other agents to follow its lead. If the field-of-view angle is increased, a more coherent behaviour emerges – agents keep together and the whole colony can suddenly move together in an unpredictable direction. Increasing the angle further does not reduce the coherence of the flock but it does reduce its ability to move around.

With a wide field-of-view, agents simply cannot develop a forward-directed movement and keep turning around at the same location. Occasionally, a few agents can break away from the larger group, but the colony as a whole remains relatively static.
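A minimal sketch of how such a field-of-view filter might be implemented is given below. The function name, the flat tuple representation of positions and the sensing radius are illustrative assumptions, not taken from Reynolds’ original model.

```python
import math

def neighbours_in_fov(agent_pos, agent_heading, others, fov_deg, radius):
    """Return the positions of the other agents that fall within the
    sensing radius and inside the field of view, i.e. within fov_deg/2
    either side of the agent's heading (heading given in radians)."""
    visible = []
    half_fov = math.radians(fov_deg) / 2.0
    for ox, oy in others:
        dx, dy = ox - agent_pos[0], oy - agent_pos[1]
        if not 0 < math.hypot(dx, dy) <= radius:
            continue
        # smallest absolute angle between the heading and the neighbour
        diff = abs((math.atan2(dy, dx) - agent_heading + math.pi)
                   % (2 * math.pi) - math.pi)
        if diff <= half_fov:
            visible.append((ox, oy))
    return visible
```

The flocking rules (separation, alignment, cohesion) would then operate only on the list this filter returns, which is exactly why the angle changes the colony’s overall behaviour.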

Figure 9.7: Field-of-view (FOV) angle affects the flock’s behaviour. Larger angle leads to a more coherent flock but it can cause the flock as a whole to move around

In order to quantify the flocking behaviour, one can observe two general trends in the colony. The first trend characterises the coherence of the colony while the other describes the movement of individual agents. There are several ways to express these trends in numbers but all of them are inconclusive – the colony’s behaviour is too complex to be quantified by a single parameter. However, the movement patterns shown in Figure 9.6 can be successfully described by observing how the coherence and the movement parameters change over time. Figure 9.7 presents two sequences in the form of graphs that show the effect of the field-of-view angle on the coherence and the mobility parameters. The coherence parameter is a measure of how many agents there are in the biggest group; the mobility parameter measures how many agents remain within a certain range from their original location. While there is an almost linear relationship between the field-of-view angle and the coherence, the mobility parameter has a less linear graph. Whereas agents with a field-of-view angle of less than 90 degrees tend to move away from their original location fairly quickly, a wider view makes them more static. One can reach an interesting conclusion when scrutinising these graphs simultaneously. One can see that a field-of-view angle of 90 degrees leads to a coherent group where agents are highly mobile, and can conclude (with certain reservations) that the flocking behaviour has emerged. This can then be validated via visual observations in Figure 9.6.

Both models analysed in this section – the Loop Generator prototype and the flocking model – contain stochastic elements. In order to analyse the behaviour, and more specifically the sensitivity, of such models statistically, multiple runs have to be carried out.

The aim of the qualitative sensitivity analysis is to find the parameters that render the greatest value to the user of the model. These parameters can then be made readily available to the user via the graphical user interface. However, one needs to be careful in doing so because it also exposes the model to potentially untested combinations of parameters that can easily lead to runtime errors. Additionally, some parameters are very sensitive and have an extremely narrow operational range. These parameters are better defined as programmatic constants, since exposing them makes it more difficult for the user to find a working configuration and makes the model less controllable.
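The two measures described above can be sketched as follows. The proximity-based grouping used here for the coherence measure is an assumption for illustration, since the text does not specify how group membership is computed.

```python
from collections import Counter

def coherence(positions, link_dist):
    """Size of the largest proximity-connected group of agents: two
    agents belong to the same group if a chain of pairwise distances
    below link_dist connects them (simple union-find)."""
    n = len(positions)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            (xi, yi), (xj, yj) = positions[i], positions[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= link_dist ** 2:
                parent[find(i)] = find(j)
    return max(Counter(find(i) for i in range(n)).values())

def agents_in_home_range(start, current, home_range):
    """The mobility measure: how many agents still remain within
    home_range of their original location."""
    return sum(1 for (sx, sy), (cx, cy) in zip(start, current)
               if (cx - sx) ** 2 + (cy - sy) ** 2 <= home_range ** 2)
```

Plotting both values at each timestep, per field-of-view angle, reproduces the kind of graph sequences shown in Figure 9.7.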

9.4 Means of control

In building successful circulation models with multi-agent systems, one needs to know how such bottom-up systems can be controlled. There are several possible control mechanisms and numerous ways of implementing them. The aim here is not to discuss the arguments for and against different implementations but to point out some general principles. Broadly speaking, the control mechanisms can be divided into three large groups: the model can be controlled by preparing the environment, by modifying the agents’ behavioural rules, or by adjusting the agents’ and environmental variables via a graphical user interface. Controlling the model by modifying the setting-out configuration was discussed in depth in section 5.7. The designer normally prepares the environment, defines different qualities of this environment according to some principles, and builds a static model from which information can be obtained during the model’s runtime. The second group consists of programmatic methods for controlling agents and includes all kinds of algorithms that agents use for retrieving information from their environment, sensory-motor coupling rules, behavioural controllers and actuator functions (see sections 5.1 and 5.3). The third group allows interactive input from the designer at runtime and normally involves controlling environmental processing parameters (e.g. the speed of the environmental decay process – see section 5.4), or some dynamic agent parameters (e.g. velocity). The latter group also includes interactive modifications to the environmental configuration.

Control mechanisms can be dynamic or static. Dynamic control is typically implemented through the GUI. In simple multi-agent models, the user interface can be deployed for controlling several parameters simultaneously, but not all kinds of parameters can be easily controlled this way. The parameters that are most suitable for dynamic control are environmental parameters (e.g. ‘pheromone’ evaporation rate), the parameters used for steering the agent through the environment (e.g. velocity), the strength of environmental modifications made by the agent (e.g. ‘pheromone’ drop rate), and activation thresholds of the agent’s sensory input (i.e. how sensitive the agent is). Naturally, one can also dynamically control the variables in the agents’ sensory-motor coupling rules.

Static control is normally implemented by preparing the environment or modifying the control algorithms of the model outside the runtime loop. This means that once the generative model has been started, no direct interaction with the model takes place. In this case, the user can still maintain a level of control, assuming that the general principles of the model are well understood. This understanding can be developed through designing and building the model or, alternatively, through interaction and deployment. Designers do not necessarily need to know how to program in order to use the advantages that generative methods of design can offer them.
Prototype and case study models presented in previous chapters all share one control mechanism – the use of programmatic methods while building and modifying them. All these models were programmed by the author of this thesis and the actual algorithms were either invented or recreated from examples retrieved from the literature. Naturally, not every line of code was written from scratch and some code libraries were used where appropriate. For example, the Loop Generator prototype (see section 6.1) used an external library to help construct steering behaviours for mobile agents.

Quite a few prototype models also feature control mechanisms based on pre-processing the environment, as opposed to interactive changes made at runtime. Such models are, for example, Stream Simulator (see section 6.2), Labyrinth Traverser (section 6.3), Network Modeller (section 6.4), the way-finding prototype (section 6.5) and the self-organisation model in a bounded environment with cellular agents (section 6.7). In Stream Simulator, the designer is expected to create a colour map of an imaginary (or real) landscape, indicating valleys with a darker and ridges with a lighter colour. This colour map is then converted into numerical values that agents consume as sensory input. Labyrinth Traverser used the input image in a different way – black pixels in the image indicated areas where agents could not go while white pixels denoted open areas. One useful pre-processing method is the positioning of source points and targets for agents. This allows the designer to predefine desired circulation network points and even define how attractive or important these points are. The positioning of source points and targets can be efficiently done via the graphical user interface.

Some of the prototypes have a graphical user interface that facilitates interactive engagement of the designer. For instance, in Network Modeller and in the Nordhavnen model (the latter is based on the Network Modeller prototype) the designer controls the evaporation and diffusion of ‘pheromone’, and the strength of ‘pheromone’ dropped by agents. Additionally, the size of the agent colony can be modified via the GUI. The greatest control over the outcome of multi-agent models is definitely achieved by modifying the model through the source code.
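The two image-based pre-processing schemes mentioned above – grayscale maps read as continuous sensory values (as in Stream Simulator) and black-and-white images read as obstacle masks (as in Labyrinth Traverser) – can be sketched roughly as follows. The nested-list representation of 0–255 pixel values and both function names are assumptions for illustration.

```python
def to_cost_field(gray):
    """Stream-Simulator-style reading: normalise 0-255 grayscale pixels
    into [0.0, 1.0], so dark valleys map low and light ridges map high."""
    return [[v / 255.0 for v in row] for row in gray]

def to_obstacle_mask(gray, threshold=128):
    """Labyrinth-Traverser-style reading: pixels darker than the
    threshold are impassable (True), lighter pixels are open (False)."""
    return [[v < threshold for v in row] for row in gray]
```

In practice, the grayscale rows would come from an image-loading library; the agents then sample the resulting field or mask at their current position as sensory input.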
Programmatic intervention allows one to change the general sequence of the program flow and the sensory-motor coupling rules that lie at the core of most multi-agent models. It can be said that getting the sensory-motor coupling right is the most critical part of building multi-agent models for generating circulation diagrams. One of the biggest challenges is to achieve an effective balance between goal-directed and reactive behaviour. Wooldridge (1999) claims that building purely goal-directed systems is not hard, and neither is building purely reactive systems. However, combining the two in a single model can be a very demanding task.

In the Nordhavnen case study model, the balance between reactive and goal-directed movement is the key to generating meaningful diagrams. The agents’ behaviour is defined by reactive sensory-motor procedures, yet the resulting behaviour can be considered goal-directed as it helps agents to navigate to their targets using the shortest possible route. At the same time, agents are also programmed to choose heavily used routes over less travelled ones in their immediate neighbourhood – and that can be considered reactive behaviour. Now, the question is how to combine these routines, or – more importantly – how to resolve conflicts when the two rules clash. For example, one can imagine a situation where an agent needs to turn left to get closer to its target but all the other roads in the neighbourhood are more heavily used than the one on the left. The solution here is quite a simple one but perhaps not immediately obvious. At first, it may appear that an agent can make the right choice by calculating a weighting for each available road, taking into account its distance from the target and its size. However, after several tests with different weightings, it became clear that the proposed algorithm created situations where agents got stuck in a closed loop and could not reach their targets. A better solution had to be invented. It turned out to be a matter of ordering the rule sets. Agents first had to select all routes that would take them closer to the target and then had to select the most heavily used of these routes. This sequence proved to be a robust method and agents were able to get to their targets using the most heavily used routes.
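The ordered rule set described above can be expressed compactly. This is a minimal sketch, assuming routes are given as (endpoint, usage) pairs; it is not the actual Nordhavnen implementation.

```python
import math

def choose_route(position, target, routes):
    """Ordered rule set: first keep only the routes whose endpoint takes
    the agent closer to its target, then pick the most heavily used of
    those. Each route is an (endpoint, usage) pair; returns None when no
    route brings the agent closer."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    current = dist(position, target)
    closer = [r for r in routes if dist(r[0], target) < current]
    if not closer:
        return None
    return max(closer, key=lambda r: r[1])
```

Because the goal-directed filter runs first, a heavily used route leading away from the target can never win, which is what prevents the closed loops that a combined weighting produced.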

9.5 Discussion

After gaining control over multi-agent models, the designer can assess the universality of the models by testing the same model in a different context. Introducing randomness into the model can prevent it from converging on the generic diagrams it would otherwise produce. However, probabilistic models can reveal much deeper patterns than deterministic ones. If some patterns of movement networks keep recurring in different contexts, then there is a chance that a universal diagram can indeed be achieved. In a deterministic model, the generated pattern is universal by default. However, if the setting-out configuration remains unchanged, then purely deterministic models produce invariant network patterns and are therefore inflexible. Although probabilistic models can generate universal patterns of movement, they can also help to find several alternative solutions and as such provide more insight for the designer into how the circulation space could be organised.

Deterministic models are better for understanding whether multiple agents are needed for generating the diagram. Labyrinth Traverser is a classic example of a deterministic model where a single agent generates exactly the same diagram as several agents deployed in parallel. In Stream Simulator, there is also no difference in terms of the quality of the generated diagram whether the simulation is run with a single agent or with multiple agents. However, it is difficult to quantitatively validate this statement because it is a probabilistic model and two diagrams are seldom identical. Stigmergic models featuring positive feedback are indifferent to the number of agents in the colony. However, as soon as elements of negative feedback or environmental processing algorithms (e.g. decay and diffusion) are introduced into the model, there is a higher probability that the colony can generate qualitatively different diagrams. If diagrams that are created by a single agent are similar to those that are created by a colony of agents, then these diagrams cannot be considered truly emergent. Truly emergent diagrams can only happen in multi-agent colonies.

The emergence of movement networks makes multi-agent models particularly useful for designers who wish to explore circulation diagrams. With the help of such models, designers are capable of rapidly generating several qualitatively different diagrams. This gives them a potential advantage of exploring more options and, eventually, can lead to better solutions. However, achieving control over the diagram is much harder. Emergence cannot be controlled directly but only indirectly. And this indirect control is sometimes difficult to achieve without in-depth knowledge of the model.
This suggests that designers should have the necessary programming skills to maximise the benefit of multi-agent systems. Although scripting is increasingly popular amongst the younger generation of designers and architects, multi-agent systems are probably beyond their average skill set. It seems that there is scope for a software platform that allows building multi-agent systems for architectural design purposes without expert knowledge of programming languages.


Chapter 10: Discussion and conclusions

This chapter draws upon and synthesises the individual conclusions from Chapters 5 through 9 and discusses their original contribution to knowledge and potential implications for the architectural design discipline. This thesis has taken an ordered approach to building multi-agent models for generating circulation diagrams. Based on a thorough literature survey, it has been established that diagrams are useful tools for designers, and it has been argued that multi-agent systems are appropriate for modelling circulation diagrams. The basic building blocks of multi-agent circulation models have been defined, and several prototype models have been proposed, built, studied and analysed. These prototype models form the basis of the knowledge used for building the case study models. The case study models demonstrate how computational diagramming with multi-agent systems can be part of the design process and highlight the benefits of using bottom-up models for architectural and urban design purposes. Building and using these models also fulfils one of the main goals of this thesis – gaining a deeper understanding of the generative modelling of circulation systems.

It is argued that multi-agent circulation models can be successfully used at the early stages of the design process. Computational models offer an alternative to more traditional methods of designing such as hand-sketching and traditional CAD modelling. New methods can bring unparalleled speed to the creative process of design: what took days before can now be generated in a matter of minutes. With the help of generative models, one designer can generate a variety of solutions. Having multiple solutions gives the designer more choice and can lead towards better design proposals. If generated solutions are compared qualitatively or quantitatively, then the solution that best meets the design brief can be selected for further development.
Leaving aside qualitative aesthetic comparisons, generated solutions lend themselves quite easily to quantitative analysis. That is because the material generated by a computational model conforms to the same system of representation. The same method of quantitative analysis can be carried out on every single solution as long as the generative model remains unchanged. For a simple example, if the generated material is represented as a mesh-type construct, then it is possible to compare individual solutions with respect to their surface area. Furthermore, solutions can be compared programmatically without the designer’s intervention and the generative model can be used for automated design optimisation. However, one needs to be careful here – in most cases, design solutions have to meet multiple and often conflicting goals. Therefore, single-variable optimisation can lead to solutions that are optimised in one respect but completely fail to meet other requirements.

The computational modelling approach presented in this thesis makes design a more transparent process. In order to construct a dynamic computational model, one needs to be able to explicitly define all the components of that model and also write the algorithms that operate on these components. Therefore, such a model can always be traced back to its basic components and algorithms. The appropriateness and correctness of each component and algorithm can then be evaluated. This makes the model amenable to validation. Generated solutions can therefore be explained by asserting the correctness of the principles used to build them. Hence, computational modelling facilitates a rational debate over proposed design solutions.
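The programmatic comparison of generated solutions mentioned above might look like the following sketch, which ranks generated network diagrams by a single metric – here, hypothetically, total edge length rather than mesh surface area. The caveat about conflicting goals applies: minimising one metric alone may discard otherwise good solutions.

```python
import math

def total_length(network):
    """Total edge length of a generated network diagram, given as a
    list of ((x1, y1), (x2, y2)) line segments."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in network)

def select_best(solutions, metric=total_length):
    """Single-variable selection: return the solution minimising the
    metric. Real briefs are multi-objective, so this alone can favour
    solutions that fail other requirements."""
    return min(solutions, key=metric)
```

Because every solution comes from the same generative model and shares one representation, the same `metric` function applies to all of them without any manual conversion.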

10.1 Multi-agent models for generating circulation systems

One of the questions this thesis has been trying to answer is whether multi-agent systems can be programmed to follow the underlying principles of circulation systems. In order to answer that question, one can look at the principles of how different circulation networks form in nature. The natural movement patterns of people are thought to be too complex for this task. Instead, one can study those of simpler organisms. Several works from authors in the field of Artificial Intelligence (Pfeifer and Scheier 2001; Dorigo, Birattari and Stützle 2006) suggest that contemporary computational models are capable of simulating intelligence at the level of insects and insect colonies. Colonies of social insects can collectively create intricate nest architectures that naturally incorporate circulation networks (Turner 2000). These networks facilitate the transport of food and building material and provide individuals in the colony with access to different parts of the nest. With various degrees of success, several academics have attempted to model the nest-building process of social insects computationally (Ladley and Bullock 2005; Buhl et al. 2006). However, these models have seldom been tested and deployed in the architectural design context.

Regardless of the advances in contemporary theory, insect nest-building behaviour may still be too complex to be accurately reproduced in a computational model. In this thesis, a different approach is taken. Artificial multi-agent systems do not necessarily need to simulate all the aspects of natural agents. Instead, one can focus on a more specific behaviour in the colony. What if some logical rules of the complex behaviour of insects could be extracted and recombined to create a colony with the single purpose of generating circulation networks? Surely, achieving circulation network formation is simpler and computationally more affordable than simulating the complete nest-building behaviour.

There are a few important mechanisms found in natural agent colonies that can be simulated. Probably the most important of these is the self-organisation of circulation networks (Goldstone and Roberts 2006). In social insects, movement is driven by local interactions (Dorigo, Birattari and Stützle 2006). If these local interactions are defined well enough, then appropriate sensory-motor rules can be devised for artificial agents as well. Consequently, some behavioural aspects of natural agent colonies can be recreated, leading to the emergence of circulation networks.

Multi-agent systems are indeed found to be an appropriate method for generating networks of circulation. This thesis has demonstrated once again that global patterns of movement can arise from local interactions. More importantly, it has been shown that this principle can be successfully replicated in a generative model for synthesising design solutions. Unlike many older methods of computing circulation networks (Tabor 1971), multi-agent systems are flexible and can adapt to changes in the environment.
This opens up new possibilities for designers since the environment can be modified interactively at runtime. A related and equally important concept borrowed from nature is stigmergy – a widely observed communication method in termites (Wilson 1980; Turner 2007) that is also often used in building artificial multi-agent systems (Holland and Melhuish 1999; Buhl et al. 2006). Stigmergic communication is the coordinating force in many of the prototypes presented in Chapter 6. A general conclusion of this thesis is that stigmergy is an essential component in generating and optimising circulation networks with multi-agent models. The movement of stigmergic agents is guided by the information retrieved from their environment while the agents actively change this information. In other words: generated networks are conceived by and – at the same time – facilitate the agents’ movement. This reciprocal causality leads to movement networks that are optimised with respect to the perception and the action of agents. While stigmergy is generally seen as a coordination (Valckenaers et al. 2001) and communication (Izquierdo-Torres 2004) method in multi-agent colonies, this thesis has demonstrated that stigmergy can also be used as a method of architectural modelling.

It has been argued and demonstrated with several prototype models in this thesis that circulation networks can be generated following the logic of network formation found in insect colonies. A great number of topographically and topologically different networks can be generated (see Chapters 6, 7 and 8). One can ask whether this is an appropriate method of modelling networks for human use. In its most basic principles, the movement of people is no different from the movement of other organisms. At the abstract level, all movement networks have two essential qualities – they provide access to required areas and facilitate the continuity of movement between these areas. Multi-agent models can generate circulation networks that possess both of these qualities. As demonstrated with many of the computational models in this thesis, colonies of mobile artificial agents can produce continuous movement networks (e.g. see Loop Generator – section 6.1) and can be used for providing access (see the discussion in section 6.8). Most of the proposed models are capable of producing a variety of network diagrams (e.g. see the Network Modeller prototype in section 6.4). This makes multi-agent models useful as explorative design tools. The generated diagrams are dynamic (see Loop Generator – section 6.1) and adaptive with respect to the environment (see cellular agents in the context – section 6.7).
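The reciprocal causality described above – agents reading the environment while simultaneously rewriting it – can be illustrated with a deliberately minimal stigmergic loop on a one-dimensional ring of cells. All names and parameter values are illustrative and do not correspond to any particular prototype in this thesis.

```python
import random

def step(pheromone, agents, drop=1.0, evaporation=0.1):
    """One stigmergic update on a 1-D ring of cells: each agent moves to
    the neighbouring cell with the strongest 'pheromone' (ties broken at
    random), deposits on arrival, and then the whole field evaporates."""
    n = len(pheromone)
    for i, pos in enumerate(agents):
        left, right = (pos - 1) % n, (pos + 1) % n
        if pheromone[left] > pheromone[right]:
            pos = left
        elif pheromone[right] > pheromone[left]:
            pos = right
        else:
            pos = random.choice((left, right))
        agents[i] = pos
        pheromone[pos] += drop     # movement reinforces the trail...
    for c in range(n):
        pheromone[c] *= (1.0 - evaporation)  # ...while decay erases it
    return pheromone, agents
```

Even this toy loop shows the positive feedback at work: the trail guides the movement, and the movement strengthens the trail, while evaporation (negative feedback) keeps unused routes from persisting.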
Multi-agent systems offer ways to optimise circulation diagrams (see Chapter 8) and to validate their suitability as circulation networks in terms of connectivity and accessibility (see Chapter 7). Based on all the above, multi-agent systems can be considered suitable for generating circulation systems.

This thesis has shown that multi-agent systems can be successfully used for design purposes. It has been demonstrated that such systems can not only generate solutions but can also be deployed for design analysis at the same time. This makes multi-agent systems a unique method of modelling circulation networks where design synthesis is combined seamlessly with analysis. However, there is a caveat – the movement of agents has to be reinterpreted in order to use it in the design process. Generated material should be treated as architectural diagrams – abstract machines that are not representational but instrumental for producing new objects and situations (Berkel and Bos 2006). Multi-agent models also provide the necessary flexibility that leads to variety and variation in the generated circulation diagrams. Additionally, such models can provide insights into the process of natural circulation network formation.

Similarly to natural systems, diagrams are emergent phenomena in artificial multi-agent based circulation models. The diagram is generated bottom-up from the collective behaviour of the agent colony, which places it outside the direct control of the designer. Only once in-depth knowledge of such models is gained is it possible to control the diagram successfully. The following section discusses this idea in greater depth.

10.2 Implications to the design process

Generative design as a set of computational methods for finding solutions to design problems is becoming increasingly popular among designers and architects (Zee and Vrie 2008). There are several ways in which generative methods are used in the design process. In many cases, testing and deploying these methods serves the purpose of finding novel forms (Sevaldson 2000). The method proposed and tested in this thesis can be described as computational diagramming – generating design solutions in a diagrammatic form. The purpose of this work is not to find novel forms but to introduce novel methods of design – creating novel forms can only ever be a result of the methods used, never the main objective.

Generative and computational design methods can and should have a wider purpose than solely creating original and interesting design solutions. New methods can help the design discipline move to a new level. At this level, methods of creating design proposals – especially those that take place at the early stages of the design process – are better grounded. As proposals can be computed, early design concepts can be explained, questioned and debated in a reasonable and even scientific way. Generative design models that are based on computational logic offer a way of making architecture more accessible to logical reasoning.

Based on the findings of this thesis, multi-agent models are well suited for generating diagrams that can be used in the search for appropriate movement networks in buildings or cities. These generated diagrams are informative and can help architects to make design decisions about the layout, intensity or topological configuration of the circulation space. Multi-agent models can be used for creating diagrams that are both requirement and form diagrams in Alexander’s sense – they are constructive diagrams (Alexander 1964). Sevaldson (2000) claims that generative dynamic diagrams can fertilise the design process, and suggests a slightly altered but not essentially alien role for the designer through selection, interpretation, analysis and modification. This thesis has demonstrated that such a role can indeed be fulfilled by designers. Sevaldson also argues that generated diagrams are subject to different modes of interpretation, which allows designers to avoid a direct and banal translation of the generated material. Again, this thesis has adopted this view, and has shown how different modes of interpretation can be used in the context of design tasks.

If a new method of design is proposed, a reasonable question arises: why is the new method better than conventional ones such as hand-sketching and traditional CAD modelling? In order to answer that question, one needs to understand that this thesis is not claiming the superiority of generative design methods – they simply have certain benefits over the conventional methods. As identified in this thesis, one of the biggest advantages of using computational models is their reusability. Methods that use computation can be made generic enough to be deployed in different contexts and in different design projects. Assuming that the chosen method is right for a given task, it can be deployed over and over again until an acceptable design solution is found.
This makes generative methods suitable for optimisation – a solution can be compared to and replaced with a newly generated solution in each iterative development cycle. Generative methods can also feature inherent optimisation routines (e.g. see Chapter 8). And last but not least, generative methods can potentially reveal solutions that would otherwise remain undiscovered. The last claim rests solely on the increased ability of the designer to create more solutions than is possible with conventional methods.

This thesis has shown that multi-agent modelling can be used as a generative design method and can become part of the overall design flow. The choice of the model has to be made depending on the design brief and on the expected outcome.

One has to follow certain steps in order to integrate the model into the overall process. Firstly, one has to gather sufficient information about the context in which the generative model will be executed. Secondly, the generated solution has to be interpreted appropriately. The information contained in the diagram needs to be transformed into a different and more explicit design representation. Constructive diagrams can be interpreted and transformed in different ways. The topology of the diagram can be used as the basis for designing the actual circulation network layout. The information about the intensity of usage that is expressed in the diagram can inform the geometry of designed buildings or streetscapes. It would also be possible to construct a model for producing diagrams that directly generate the geometry of the solution. However, this thesis has disregarded this approach because it leaves no space for interpretation and diminishes the designer’s role in the process.

In order to use multi-agent models efficiently for generating design solutions, a level of control over these models has to be maintained. The control can be achieved in three essentially different ways: feeding appropriate input into the model, writing and modifying the model’s code, and changing the model’s parameters interactively during runtime. While the first and the third options provide the designer with indirect control mechanisms, the second is more direct and definitely offers the greatest degree of control. However, achieving full control over the generated diagrams is very difficult. The proposed generative models are built in a bottom-up manner and the circulation diagrams are the result of the multi-agent colony’s interaction with its environment. Thus, the diagram is an emergent phenomenon and no direct control is possible. In order to generate meaningful diagrams, all three above-mentioned control mechanisms are required.
The designer has to understand how different input information influences the network formation processes, has to be able to change algorithms that drive the model, and has to interact with the computational process.

10.3 In search of parsimonious models

This thesis has been studying multi-agent models for architectural and urban design purposes by building them. Deaton and Winebrake (2000) call this approach synthetic modelling and place it somewhere between synthesis and analysis: systems are studied by building them out of defined components and then analysing them. They suggest using exploratory analysis, conducting a series of experiments in order to understand how the system responds to changed conditions. Due to the synthetic approach, this thesis has largely ignored the practice of classifying the proposed models, mainly because using an existing taxonomy is not a natural part of synthetic modelling. Prototype models (Chapter 6) are only classified following the established taxonomies in geographical network analysis (Haggett and Chorley 1969) or in agent-based modelling (e.g. Castle and Crooks 2006).

This section tries to remedy this lack of coherent classification. Based on the conducted research, two distinct patterns for building generative design models were discovered: the Modeller-Evaluator-Interpreter pattern and the Sensor-Actuator-Environment pattern. These patterns serve as abstract schemata for how generative models can be designed and built. They are based on the experience gained through the modelling and analysis of the prototype and case study models presented earlier in this thesis. Both patterns belong to the domain of generative design – they meet most of the characteristics and properties that are normally associated with it. According to various authors, generative models need to be dynamic (McCormack, Dorin and Innocent 2004), navigate a large solution space (Herr and Kvan 2007), feature autonomous units (Galanter 2003) and be independent from external processes (Batty and Torrens 2005). All models that follow the Modeller-Evaluator-Interpreter pattern or the Sensor-Actuator-Environment pattern feature feedback loops and lead to a dynamic process where design solutions are found through iterative development.
Both patterns feature distinct modules – building blocks that can be seen as sets of explicitly defined and computable instructions. Whereas these building blocks in the Modeller-Evaluator-Interpreter pattern do not need to be programmatic and can be replaced with the activity of a human designer, the Sensor-Actuator-Environment pattern offers less flexibility – it is best implemented throughout as a computer program.

The Modeller-Evaluator-Interpreter pattern can be considered a traditional design process where Modeller is the module that first creates design proposals, Evaluator analyses these proposals, and Interpreter takes the results of the analysis and instructs Modeller how to change or recreate the proposal. When all of these modules are programmed, the whole model turns into a purely computational one. Modeller, for example, can generate new proposals by deploying a parametric design algorithm. This algorithm can be initiated with a set of random or user-defined parameters but can later take its input parameters directly from Interpreter. Evaluator can be computational as well. It can be an algorithm that simply measures some geometrical parameters of the proposal, or it can be a complex assessment routine that analyses the model in terms of its thermal performance, daylight factors, wind load etc. Interpreter is likely the most complex part of the model, but it can equally be the simplest one. Basically, it takes the evaluated design proposals, 'decides' what needs to be changed and instructs Modeller to create alternative solutions. The key word here is 'decide', which suggests that Interpreter is a complex module with some kind of embedded intelligence – human or artificial. Nevertheless, Interpreter can also be an extremely simple module that, for example, takes the parameters of the best design proposal (as computed by Evaluator) and changes these parameters blindly in the hope of creating even better proposals.

The Sensor-Actuator-Environment pattern is completely different from the Modeller-Evaluator-Interpreter pattern. As said earlier, none of its modules can be replaced with the activity of a human designer – all models based on this pattern have to be computational. However, it is a perfectly suitable pattern for agent-based modelling. In fact, the pattern has been inferred from the particulars of agent-based models found in the literature and of those built for this thesis. The reason this thesis is so interested in this pattern is that it can be used for building simple models that meet all of the goals that interactive multi-agent systems for generating circulation diagrams ought to have.
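Before turning to that pattern, the Modeller-Evaluator-Interpreter loop described above can be sketched in a few lines of Python. This is a hypothetical minimal instance, not code from the case study models: the 'proposal' is reduced to a bare parameter list, Evaluator scores proximity to an arbitrary target, and Interpreter perturbs the best parameters blindly – exactly in the spirit of the simplest Interpreter mentioned above.

```python
import random

def modeller(params):
    """Create a design proposal from a parameter set (stands in for a
    parametric design algorithm). Here the 'proposal' is just the
    parameters themselves; a real Modeller would build geometry."""
    return params

def evaluator(proposal):
    """Score a proposal. A real Evaluator might measure corridor lengths
    or daylight factors; here it simply prefers parameters close to an
    arbitrary target vector (an illustrative assumption)."""
    target = [0.3, 0.7, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(proposal, target))

def interpreter(best_params, step=0.05):
    """Blindly perturb the best parameters so far, hoping to do better."""
    return [p + random.uniform(-step, step) for p in best_params]

def generate(iterations=200):
    params = [random.random() for _ in range(3)]
    best, best_score = params, evaluator(modeller(params))
    for _ in range(iterations):
        params = interpreter(best)             # Interpreter instructs...
        score = evaluator(modeller(params))    # ...Modeller, then Evaluator
        if score > best_score:                 # keep only improvements
            best, best_score = params, score
    return best, best_score

best, score = generate()
```

The whole diagram is represented globally and re-evaluated on every cycle. The Sensor-Actuator-Environment pattern, by contrast, carries no such global description of the solution at all.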
Models that are based on this pattern are simple because the global description of the generated diagram is not a part of the model – the diagram is an emergent phenomenon. Modules in this pattern are concerned with computation at the agent level, which allows one to leave out global descriptions, thus making the model computationally lighter. The whole concept of the Sensor-Actuator-Environment pattern is built around the systems-theoretical view of the system (agent) and its environment, where the environment produces input to the agent and consumes its output (Keil and Goldin 2006). The agent is further broken down into the Sensor and the Actuator module. The first module is the recipient of information from the agent's environment, while the second one is the proactive module that changes something either in the agent itself or in its environment. Once the environment or the agent's view of the environment has changed, Sensor receives new information and the generative loop is closed. Models that are built according to the Sensor-Actuator-Environment pattern are generative by default. These models feature dynamic feedback loops, are independent from the rest of the design process, can be used for exploring a wide search space, feature independent units (agents), and – most importantly – generate design solutions through numerous reiterations and continuous development.

The two case study models in Chapter 7 and Chapter 8 follow the two different generative design patterns. Whereas Chapter 8 proposes a model that has been built after the Modeller-Evaluator-Interpreter pattern, the Nordhavnen model (see Chapter 7) follows the Sensor-Actuator-Environment pattern. Studying the differences between these two case study models is significant in terms of the search for the simplest models. Before one can decide which pattern produces simpler models, it is worth taking a closer look at how the patterns are implemented in the first place.

In the model for creating corridor systems (Chapter 8), the generative loop between Modeller, Evaluator and Interpreter is closed programmatically. However, more value can be gained when the designer intervenes from time to time and manually alters the geometry in the model. When the process is fully computational and no human intervention takes place, an appropriate solution may never be found. Modeller in this case is a simple algorithm that modifies the layout of space. These modifications, as mentioned above, can be made by a human modeller instead.
Interpreter is an equally simple algorithm that compares the evaluated solution with the previously generated one and instructs Modeller to either proceed with the current layout or revert to the previous solution. The most sophisticated part of the model is Evaluator – it features a multi-agent system for finding the shortest corridors and evaluates the total area of accessible spaces. Although the multi-agent system is used for suggesting the geometry of corridors and can be seen as a generative routine in its own right, its main function is to evaluate the existing spatial layout.

The Nordhavnen multi-agent model is built after the Sensor-Actuator-Environment pattern. The Sensor and the Actuator modules are both part of the agent's design and are connected together via a sensory-motor coupling algorithm. This algorithm is just a little more complicated than a traditional hill-climbing algorithm. The Environment module is also a simple computational construct that contains discrete spatial units with a couple of adjustable parameters and features a few simple environmental processes. All modules are seamlessly integrated into a single computational model where the human designer can influence the process by feeding the model appropriate input data or by interactively changing some parameters at runtime. Despite its simplicity, the Nordhavnen model can search a wide solution space and can adapt to different input data.

The differences between the two case study models are fairly obvious. The Nordhavnen model is a simple generative model, whereas the model for generating corridor systems is more complicated in terms of both its programmatic composition and its deployment. The latter model has more complicated modules and requires active input from the designer. It also remains operational within narrower design constraints. The former only demands the designer's input in setting out the model, accepts a wider range of input data and generates a wider variety of diagrams. In conclusion, the model based on the Sensor-Actuator-Environment pattern is a simpler model for generating circulation diagrams than the one based on the Modeller-Evaluator-Interpreter pattern.

Another clue in the search for parsimonious patterns of multi-agent models for generating circulation systems can be found in the prototype models (see Chapter 6). Two of the models – Loop Generator and Network Modeller – that follow the Sensor-Actuator-Environment pattern are also the simplest truly generative models studied in this thesis. Of these two, Loop Generator is the simplest of the proposed models for many reasons. For a start, neither designer input nor interactive engagement is needed.
The simplicity of the model is also manifested in the simplicity of its modules. The Sensor-Actuator module, for example, is a simple hill-climbing agent. Additionally, the setting out of the model is extremely basic, and the only process that takes place in Environment is the evaporation of trails that are left behind by agents. Loop Generator leads to the emergence of a diverse range of circulation network diagrams and also features inherent (and emergent) network optimisation routines. It has all the main characteristics that are assigned to generative models – it is dynamic, produces emergent outcomes, creates the solution iteratively, features autonomous units, and is independent from the rest of the design process. Based on the findings of this thesis, the simplest multi-agent model for generating circulation diagrams follows the Sensor-Actuator-Environment pattern.
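A minimal sketch of a Loop Generator-like model, following the Sensor-Actuator-Environment pattern, could look as follows. The grid size, parameter values and the hill-climbing rule are illustrative assumptions, not the prototype's actual code: hill-climbing agents read trail intensity in their neighbourhood (Sensor), move towards the strongest trail with some noise and deposit new trail (Actuator), while the environment's only process is evaporation.

```python
import random

GRID = 40                                      # toroidal grid of spatial units
trail = [[0.0] * GRID for _ in range(GRID)]    # the Environment module
EVAPORATION = 0.95                             # the only environmental process
DEPOSIT = 1.0

def neighbours(x, y):
    """Moore neighbourhood on a toroidal grid."""
    return [((x + dx) % GRID, (y + dy) % GRID)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def sense_and_act(agent):
    """Sensor: read trail intensity around the agent.
    Actuator: hill-climb towards the strongest trail, with noise,
    and deposit trail back into the environment (stigmergy)."""
    x, y = agent
    cells = neighbours(x, y)
    if random.random() < 0.2:                  # noise keeps exploration alive
        nx, ny = random.choice(cells)
    else:
        nx, ny = max(cells, key=lambda c: trail[c[0]][c[1]])
    trail[nx][ny] += DEPOSIT                   # closes the stigmergic loop
    return (nx, ny)

agents = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(50)]
for _ in range(300):
    agents = [sense_and_act(a) for a in agents]
    for row in range(GRID):                    # evaporation of trails
        for col in range(GRID):
            trail[row][col] *= EVAPORATION
```

After the run, the `trail` field is the emergent diagram: cells with high intensity mark heavily used paths. Note that without some forward bias agents in this bare sketch tend to oscillate between neighbouring cells; it demonstrates only the structure of the pattern, not the full behaviour of Loop Generator.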

10.4 Complete design diagrams

Besides generating circulation diagrams, one of the objectives of this thesis is to discover methods of integrating other design goals into the multi-agent model. The practical question here is whether circulation systems can be generated in parallel with other parts of the design solution. For example, can a road network diagram and an urban massing diagram be generated in the same model? If the answer is yes, then it would be possible to generate informative and contextual solutions – complete design diagrams.

The drawback in building models that generate complete diagrams lies in the increased algorithmic complexity. As demonstrated in Chapter 8, it is fairly straightforward to combine different agent-based systems in a single model. In this case study, the hill-climbing routine is combined with ant colony optimisation. These routines are deployed sequentially in different modules of the program. While hill-climbing is a part of Modeller, ant colony optimisation is a part of the Evaluator module. If both of the routines were part of the same module (e.g. Modeller), then this module would become a lot more difficult to manage. In order to retain the simplicity of modules, it would make sense to keep algorithms that are responsible for modelling different aspects of the design solution in separate modules. However, this approach would contradict the Modeller-Evaluator-Interpreter pattern, as all modelling is expected to be carried out in Modeller. Hence, it is not easy to use the Modeller-Evaluator-Interpreter pattern for building models that generate complete design diagrams.

Luckily, there is an alternative – one can follow the Sensor-Actuator-Environment pattern instead. There are two ways of using this pattern for modelling complete design diagrams so that the model still remains easily manageable. Firstly, the Environment module can encapsulate processes that are responsible for some design goals.
Section 6.6 presents several models featuring stigmergic building and environmental processes that shape the spatial diagram. In a typical model of this kind, agents place certain objects into the environment (part of the Actuator module), where these objects become the subject of environmental processing (part of the Environment module). The common issue discovered with this approach is that the Actuator module becomes increasingly complicated, and it is difficult to program flexible stigmergic building rules that reflect the changes during the diagram's development.

Alternatively, one can consider a model that contains several qualitatively different Sensor-Actuator systems – a model that features agents of different kinds. In this type of model, several multi-agent systems of different 'species' coexist and communicate with other 'species' through the environment. For example, one type of agent can represent the massing elements (e.g. houses) while other agents are responsible for creating circulation diagrams (e.g. roads). Both types of agents can retrieve information directly from their environment, and by reacting to this information they also modify the environment for other agents. The complexity of the model can be managed by treating different agents as self-contained objects. The closest model built according to the suggested approach, although extremely simplistic, is the formation of the cellular structure illustrated in Figure 6.35. In this model, there are two types of agents – ones that form 'spaces' and others that form the circulation diagram. There are no environmental processes, nor are there any stigmergic building routines present in the model. One can only imagine how more sophisticated models of this type could be created. For instance, agents that represent abstract building blocks (e.g. houses and flats) could relocate themselves according to the level of access that is provided by agents that generate circulation diagrams. The integration of several multi-agent systems is likely to create feedback loops between circulation and spatial layout agents, leading to the development of complete diagrams where different design goals are simultaneously satisfied.
This is exactly what is needed for solving complex spatial problems. Therefore, it is possible to generate complete diagrams featuring circulation systems and other aspects of the design solution in a parallel manner. Based on the experience of building and working with several multi-agent models, it has been found that the Sensor-Actuator-Environment pattern is well suited for creating such models.
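The coexistence of agent 'species' communicating through the environment can be sketched as follows. All names and rules here are hypothetical illustrations rather than a reconstruction of any thesis model: 'road' agents drift towards the nearest house and deposit access values into a shared environment layer, while 'house' agents relocate towards better-served cells – a crude stand-in for the feedback loop described above.

```python
import random

GRID = 20
access = [[0.0] * GRID for _ in range(GRID)]   # shared environment layer
houses = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(10)]
road_agents = [(GRID // 2, GRID // 2)] * 5

def step_road(pos):
    """Road agents drift towards the nearest house and deposit access."""
    x, y = pos
    hx, hy = min(houses, key=lambda h: abs(h[0] - x) + abs(h[1] - y))
    x += (hx > x) - (hx < x)                   # one step towards the house
    y += (hy > y) - (hy < y)
    access[x][y] += 1.0                        # roads raise local access
    return (x, y)

def step_house(pos):
    """House agents relocate to the best-served neighbouring cell."""
    x, y = pos
    options = [(min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))
               for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    return max(options, key=lambda c: access[c[0]][c[1]])

for _ in range(100):
    road_agents = [step_road(a) for a in road_agents]
    houses = [step_house(h) for h in houses]
```

Neither species addresses the other directly; all communication passes through the `access` layer of the environment, which is what keeps each agent type a self-contained object.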


10.5 Self-criticism

The work presented in this thesis has demonstrated that multi-agent models can be successfully used for generating circulation solutions in buildings and settlements. However, the work also raises many new questions and issues. One of the main issues from the designer's perspective is that good circulation systems do not automatically yield good overall design solutions; other design goals have to be considered simultaneously. From understanding how multi-agent systems help to solve circulation, the research agenda could now move towards generating solutions that meet multiple goals. For example, in parallel to circulation, one has to think of how routines that generate good lighting conditions, thermal performance or spatial layouts can be integrated. Whereas some of the prototype models scratched the surface of this topic, dedicated research could explore it more thoroughly.

Section 6.6 explored the integration of stigmergic building routines with circulation. It was hoped that agents could create circulation diagrams and, at the same time, organise the otherwise inactive geometry of the environment. Contrary to the desired effect, the proposed approach often led to over-constrained computational models lacking flexibility and adaptive powers. It turned out to be difficult to coordinate the building activity with the motor behaviour – adding new geometry to the agent's immediate neighbourhood severely reduced its mobility. Also, the management of such models turned out to be too complicated. In reaction, it was suggested that it is perhaps better to conceive spatial structures not as part of the environment, subject to environmental processing, but as autonomous agents with their own goals. For example, a room in a building, or even the entire building, can be conceived as a mobile agent with its own peculiar sensory-motor loop.
Since multi-agent systems are flexible, it should be perfectly possible to combine different types of agents in the search for design solutions with multiple goals. A logical continuation of this research would investigate computational design models consisting of several multi-agent systems, where each multi-agent system has its own goals.

One of the issues discovered during the research for this thesis is the integration of computational evaluation tools into generative models. There are a number of powerful applications in use in architectural practices for assessing design performance criteria, from lighting and thermal analysis to energy consumption and structural analysis. Clearly, the Modeller-Evaluator-Interpreter pattern would accommodate such analytical tools in the Evaluator module, but this is not the issue here. The problem is that sophisticated analytical tools require detailed design information that is not available in the early stages of the design process. Therefore, one of the questions that this thesis has raised relates to the analysis of generated material. What are the appropriate analytical tools for assessing the performance of generated solutions at the conceptual design stage?

A similar question arises when assessing the aesthetics of generated solutions. Naturally, this task can be carried out by a human designer, but this would interrupt the computational program flow. Besides, this type of aesthetic appraisal would need to be done from the global perspective and not from the local – the agents' – perspective. The question here is whether an agent can base its motor decisions on what it 'sees'. What would be appropriate perceptual algorithms for that purpose? If such algorithms were invented, then multi-agent systems could possibly lead to greater artificial design intelligence. This, however, stimulates a whole new range of questions about the role of the designer. If computational models were to become more intelligent in terms of being capable of solving complex design problems, then what would it mean for design professionals? Surely, such models could not simply be considered digital tools. Instead, these tools would start taking over some of the tasks that are traditionally assigned to designers.

One of the questions that this thesis has not found a satisfying answer to concerns the production of a large number of solutions – one of the main advantages of using generative models. If solutions can be generated ceaselessly, how can one know when to stop this process?
Presumably, one can assess this by taking into account the potential benefits and the cost of producing new solutions. But even then the problem remains intractable. Since there is hardly ever a single optimal solution to a design problem, it is very difficult – if not impossible – to decide when to stop searching for new solutions.

Earlier in this chapter, it was argued that computational models facilitate rational debates over design solutions. Since multi-agent models can be decomposed into behaviours of individual agents, it is possible to validate these models by validating the principles of sensory-motor coupling in agents. For instance, if an agent turns left when it encounters an obstacle on its right (assuming that agents cannot overcome obstacles), then this is considered a logical sensory-motor coupling rule which validates the model internally. However, this is a trivial example and does not offer much for validating models that generate circulation networks in buildings and urban settlements. The issue of validation is a lot clearer in analytical models, where the behavioural rules of agents can actually be compared to the real-world behaviour of the agents that are being simulated. Although circulation networks can be analysed by simulating people's movement (Dijkstra, Timmermans and de Vries 2007), one cannot easily use these navigational principles for generating circulation networks. The reason for this is a simple one – the geometry of the environment in analytical models exists prior to the computation, whereas in generative models it is created during the computation. Designing circulation is a creative process where the solution is the ultimate goal. The generated material does not simulate anything; a generative model is not constructed in order to observe a phenomenon by any means. Therefore, the question remains: how can generative models be validated?

The only satisfactory response to that question offered in this thesis is to validate models through validating the generated output. The key parameters of generated networks can be compared to the parameters of real-world circulation networks. The Nordhavnen model, for example, was validated by comparing the connectivity of generated road network diagrams to the average road network connectivity in existing urban areas (see Chapter 7). Nevertheless, an issue remains – generative models produce a variety of diagrams, and the quality of these diagrams depends on the experience and skills of the designer. What if some generated diagrams match and others differ from the parameters of real-world road networks? Does that make the model invalid or unverified?
This thesis has not found a satisfactory answer to that question. Perhaps the problem of validation in generative modelling should be approached in a different way. Since many generative models can produce an almost infinite number of solutions, it is impractical to validate all of these. Instead, one can statistically measure the success of the model by comparing acceptable generated solutions to non-acceptable ones and calculating an average success rate. This rate would indicate how difficult it is to use the model for generating validated solutions.

Although this research has provoked a new range of questions that were not answered in this thesis, the selected methodology proved generally successful. The synthetic modelling approach – identifying basic building blocks, constructing prototype models of these blocks, deploying exploratory analysis, and testing prototypes in the context of real design tasks – worked very well. Gaining experience and creating new understanding by building simple prototypes proved to be an appropriate way of constructing more sophisticated models, while attempts at conceiving the whole model without verifying interim models were doomed to fail.
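The proposed statistical validation can be made concrete with a small sketch. The connectivity index (edges per node), the benchmark value and the tolerance below are illustrative assumptions, not figures from the Nordhavnen study.

```python
def connectivity_index(edges, nodes):
    """Average connectivity of a network diagram: edges per node."""
    return len(edges) / len(nodes)

def success_rate(diagrams, benchmark=1.4, tolerance=0.2):
    """Fraction of generated diagrams whose connectivity falls within
    'tolerance' of a real-world benchmark value."""
    accepted = [d for d in diagrams
                if abs(connectivity_index(*d) - benchmark) <= tolerance]
    return len(accepted) / len(diagrams)

# Each diagram is (edges, nodes); the values below are made-up examples.
diagrams = [
    ([("a", "b"), ("b", "c"), ("c", "a")], ["a", "b", "c"]),              # 1.0
    ([("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")], ["a", "b", "c"]),  # 1.33
    ([("a", "b")], ["a", "b"]),                                           # 0.5
]
rate = success_rate(diagrams)   # only the second diagram is within 1.4 ± 0.2
```

A low rate would signal that acceptable diagrams are rare and that the model demands considerable skill from the designer; a high rate would suggest the model generates validated solutions with little effort.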

10.6 Discussion: the author's view

My research confirms that multi-agent models are useful design tools. A specific focus in my thesis has been on using these tools at the early stages of the design process. Most of the previous research in the area has been done inconsistently or has explored individual models in isolation. In order to demonstrate the value of multi-agent models, I have chosen to look at a common design problem in architecture and urban planning – circulation. Circulation is a part of almost any architectural or urban design proposal. It has to suit the context and the design brief, but it is generic enough to justify building computational models.

Leaving man-made architecture aside, many living and non-living systems in nature feature circulation networks that are not the result of carefully planned and executed design decisions. Yet these networks are usually well adapted to their surroundings, organically integrated into the 'landscape' and, in several respects, well optimised. The secret of these networks lies within the way their global structure is created out of individual and local actions. Contemporary computational theory suggests that some natural phenomena can be simulated using bottom-up systems (Resnick 1994). Multi-agent systems are bottom-up systems by default and make a perfect method for following the bottom-up principles of natural network formation. Yet the multi-agent systems that I have designed and built are abstract models – I have not tried to accurately simulate processes in nature. Neither have I tried to explain exactly how the circulation of people and goods in buildings and cities happens.

The approach here is a more abstract one: I have been looking at how certain principles observed in nature (e.g. stigmergy) can be reused for creating diagrams of circulation – abstract representations of design solutions. These diagrams are rich in information and can represent other qualities besides geometrical form. In fact, these diagrams match what Alexander (1964) describes as a constructive diagram. Multi-agent models seem to be particularly suitable for generating constructive diagrams because in such models both the shape and the intensity of movement can be easily captured.

In synthetic modelling, knowledge is produced by building models and observing their behaviour by means of exploratory analysis. The explorative-analytical aspect here is extremely important for the designer. Control over diagrams can only be gained through understanding the internal workings of the model and knowing the effect of critical parameters. The synthetic modelling approach is also particularly suitable for a type of architectural modelling where the actual form of the design solution is derived from a set of site-specific constraints and functional principles. The model for creating design solutions should be general enough to be algorithmically describable, yet flexible enough to generate a solution that fits organically onto the site. While building the model is naturally a part of the synthetic approach, fine-tuning the model to find a better fit with the site is a matter of exploratory analysis. And again, multi-agent models are perfectly suitable for this type of modelling due to their generic yet context-sensitive nature.

Computational modelling in general, and multi-agent modelling in particular, have something new to offer to the discipline of architecture. This something is more than just a novel method of design.
The algorithmic approach to creating design solutions has the capacity to bring rational argumentation into the design discourse at the early stage of the creative process. Naturally, there are many computational methods already in use in architectural practice. However, most of these methods are used at the analytical stage – in structural or environmental performance analysis. My particular interest, in contrast, lies with conceptual design models. Whereas at the conceptual stage architects normally play around with abstract diagrams and speculative design concepts, computation provides an opportunity to explain the reasons and the mechanisms behind creating these conceptual and diagrammatic design solutions. Computational methods can lay the foundations for a more scientific approach in design. Imagine how much more meaningful disputes over design proposals could be if design concepts were broken down into algorithms rather than presented as abstract holistic ideas. The difference here is the difference between process and phenomena – abstract concepts can only be observed, while algorithmic concepts can also be analysed.

Multi-agent models for generating circulation diagrams bring bottom-up thinking into the design domain. This requires a shift from stereotypical design thinking, where design proposals are conceived during a creative yet largely non-transparent process, towards a reasoned and analysable way of creating designs. Instead of thinking about broad concepts, the designer needs to think systematically about processes. No overarching spatial visions are required. Modelling circulation networks bottom-up forces the designer to think in a more user-centred way – when, how and why individuals use the circulation network. Surely, these individuals are just computational constructs in the model and represent abstract mobile units, but this does not change the fact that the decisions of these individuals are based on locally available information. Compared to abstract conceptual thinking, modelling at this level is definitely closer to the level of end users. While abstract conceptual thinking praises the grand vision of the designer, the latter approach derives the solution from the requirements of individuals – the end users. Following the bottom-up logic, design solutions can only emerge during the design process and are not predefined in any way.

I can see the role of the designer becoming more similar to that of the synthetic modelling practitioner. The designer has new tasks: assessing the design brief, choosing an appropriate computational model to meet the brief, preparing the setting-out configuration, generating multiple solutions with the model, evaluating the generated solutions, re-configuring the model if needed, and generating further solutions.
This process leads to solutions that are developed over several design iterations – these solutions are openly evolved rather than conceived in isolation. Going through several design iterations has traditionally been expensive because individual solutions are usually created by repeating the whole design process. Computational models can take this pain away and allow the designer to explore a greater variety of solutions. This, in turn, makes the likelihood of finding good design solutions higher.

The iterative development of design solutions offers yet another opportunity to improve the practice of design. Because solutions to particular problems (e.g. circulation) are grown rather than conceived, they are adaptable and can evolve in unison with other parts of the proposal (e.g. the allocation of spaces). The designer's role here is not that of a scientific researcher, but of a synthesiser who makes use of scientifically validated models in order to orchestrate different design aspects into a complete solution.

My thesis makes an original contribution to the discipline of architecture in several respects:

1. Originality of research. The methods for generating circulation systems using agent-based modelling techniques have not been extensively researched before; there are only a few examples of scientific work that investigate the possibilities of bottom-up models for synthesising design diagrams. My thesis deploys the synthetic modelling approach and follows it throughout by building prototype models, exploring them analytically, and testing some of these models in the practical context of architectural and urban design.

2. Novelty of generative models. Some of the prototype models that I have built and tested in Chapter 6 have not been used in the architectural context before. The Network Modeller prototype is one such model. The technical implementation of this prototype is fairly simple, yet the range of its potential applications in architecture and urban design is wide. None of the algorithms that I have used in Network Modeller are novel in terms of multi-agent modelling techniques; however, the purpose of using this model for generating design solutions is unique. Loop Generator, on the other hand, is novel in a much wider sense. Although it is very simple in terms of the algorithms used, I have found no record in the literature of any other model in which continuous circulation networks are generated from the individual actions of purely reactive agents. Most of my prototypes fall into one of the categories of computational network models as defined in geographical network analysis (Haggett and Chorley 1969): Network Modeller and the Nordhavnen model, for example, are interconnection models, while Stream Simulator is a capture model. Loop Generator, however, does not fit this classification system because the system expects all networks to have source and/or target nodes. Loop Generator has no source or target nodes; its circulation networks emerge from the movement of reactive agents as a result of a simple stigmergic feedback loop. Yet the model can generate movement diagrams of great variety.

3. Contribution to the rational design debate. Computational models bring concept design to a whole new level where solutions can be argued for or against by discussing the appropriateness and correctness of the algorithms that generated them in the first place.

4. Demonstration of how natural path formation techniques can be used at the urban scale for generating circulation networks that can be validated against real-world road networks. Additionally, I have proposed a novel evaluation method.

5. New patterns of generative models for design. I have spelled out two patterns for constructing generative models for design purposes. The Modeller-Evaluator-Interpreter pattern has only been implied in previous design research (Galanter 2003; Zee and Vrie 2008), while the Sensor-Motor-Environment pattern has not been mentioned in the design context before.

6. New control methodology. I have suggested that effective control over generative models can be gained by following a tri-partite methodology: programmatically altering the model, modifying the input and setting-out configuration, and interactively changing the agents' parameters and the environmental processing parameters at runtime.
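The stigmergic feedback loop behind a Loop Generator type of model can be illustrated with a minimal sketch. All class names, parameter values and the anti-reversal heading rule below are my own illustrative assumptions, not the thesis code: reactive agents deposit a trace on the cell they occupy and steer towards the strongest neighbouring trace, while the environment evaporates all traces each step, so that repeated movement reinforces a small set of emergent trails.

```python
import random

GRID = 40          # toroidal lattice size
EVAPORATE = 0.9    # fraction of trace retained by the environment each step
DEPOSIT = 1.0      # trace deposited by an agent on its current cell

# Eight neighbouring cell offsets used for local sensing.
NEIGHBOURS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
              if (dx, dy) != (0, 0)]

class Walker:
    """A purely reactive agent: it senses nearby traces, moves towards the
    strongest one (never straight back the way it came), and deposits a
    trace of its own."""
    def __init__(self, rng):
        self.x = rng.randrange(GRID)
        self.y = rng.randrange(GRID)
        self.heading = rng.choice(NEIGHBOURS)

    def step(self, field, rng):
        back = (-self.heading[0], -self.heading[1])
        # Score each candidate cell by its trace; a little noise breaks ties
        # and provides exploratory movement on an empty field.
        candidates = [(field[(self.x + dx) % GRID][(self.y + dy) % GRID]
                       + 0.01 * rng.random(), dx, dy)
                      for dx, dy in NEIGHBOURS if (dx, dy) != back]
        _, dx, dy = max(candidates)
        self.heading = (dx, dy)
        self.x = (self.x + dx) % GRID
        self.y = (self.y + dy) % GRID
        field[self.x][self.y] += DEPOSIT   # stigmergic reinforcement

def run(steps=300, agents=30, seed=1):
    rng = random.Random(seed)
    field = [[0.0] * GRID for _ in range(GRID)]
    walkers = [Walker(rng) for _ in range(agents)]
    for _ in range(steps):
        for w in walkers:
            w.step(field, rng)
        for row in field:                  # environmental evaporation
            for j in range(GRID):
                row[j] *= EVAPORATE
    return field
```

The essential point of the sketch is that there are no source or target nodes anywhere: the trail pattern is produced solely by the feedback between deposition and evaporation.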

I have been looking for simple generative models that can produce meaningful circulation diagrams – diagrams that feature distinct movement paths of different qualities. The simplest model that I have found is the reactive multi-agent model that follows the Sensor-Actuator-Environment pattern (see section 10.3). I strongly believe that this pattern can be used for inventing standard models for architectural and urban design. Such models – if constructed and validated by a specialist in the field – can lead to a new and commonly accepted way of creating and testing design solutions.
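As a rough illustration of how the Sensor-Actuator-Environment pattern separates concerns, the skeleton below decomposes an agent into a sensor method that reads the environment, an actuator method that moves the agent and writes back into the environment, and an environment that runs its own process between agent updates. The class and method names, the one-dimensional ring world, and the decay process are my own assumptions for the sketch, not the interfaces of the thesis prototypes.

```python
from dataclasses import dataclass

@dataclass
class Environment:
    """Shared medium that agents sense and modify; it also runs its own
    process (here, a simple decay) between agent updates."""
    values: list
    decay: float = 0.95

    def read(self, i):
        return self.values[i % len(self.values)]

    def write(self, i, amount):
        self.values[i % len(self.values)] += amount

    def update(self):
        self.values = [v * self.decay for v in self.values]

class Agent:
    """Sensor-Actuator agent on a ring of cells: sense() maps the local
    environment to a percept, act() maps that percept to a motion and a
    deposit. The agent holds no model of the world beyond its position."""
    def __init__(self, position):
        self.position = position

    def sense(self, env):
        # Percept: which neighbouring cell carries the stronger trace?
        left = env.read(self.position - 1)
        right = env.read(self.position + 1)
        return -1 if left > right else 1

    def act(self, env, direction):
        self.position = (self.position + direction) % len(env.values)
        env.write(self.position, 1.0)

    def step(self, env):
        self.act(env, self.sense(env))

def simulate(steps=50, cells=20, start_positions=(0, 5, 10)):
    env = Environment([0.0] * cells)
    population = [Agent(p) for p in start_positions]
    for _ in range(steps):
        for agent in population:
            agent.step(env)
        env.update()   # environmental process runs after all agents act
    return env
```

Because sensing, acting and environmental processing are kept in separate methods, each of the three control channels mentioned above can be modified independently: the model code, the setting-out configuration (`start_positions`), and the runtime parameters (`decay`, deposit amount).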

Bibliography

Adamatzky, A. (2001). Computing in Nonlinear Media and Automata Collectives. Bristol, Institute of Physics Publishing.
Adamatzky, A. and O. Holland (1998). "Voronoi-Like Nondeterministic Partition of a Lattice by Collectives of Finite Automata." Mathematical and Computer Modelling 28(10): 73-93.
Aish, R. and R. Woodbury (2005). "Multi level interaction in parametric design." Lecture Notes in Computer Science 3638: 151-162.
Alexander, C. (1964). Notes on the Synthesis of Form. Cambridge, Harvard University Press.
Alexander, C. (1965). "A City is not a Tree." Architectural Forum 122(1): 58-62.
Antoni, J.-P. (2001). Urban sprawl modelling: A methodological approach. 12th European Colloquium on Quantitative and Theoretical Geography. St-Valery-en-Caux.
Arbib, M. A. (2003). "From Rana computatrix to Human Language: Towards a Computational Neuroethology of Language Evolution." Royal Society of London Transactions Series A 361(1811): 2345-2379.
Arnheim, R. (1977). The Dynamics of Architectural Form. Los Angeles, University of California Press.
Batty, M. (2003). Agent-based Pedestrian Modelling. Advanced Spatial Analysis: The CASA Book of GIS. P. A. Longley and M. Batty. New York, ESRI: 81-108.
Batty, M. (2004). A new theory of space syntax. CASA Working Papers. London, Centre for Advanced Spatial Analysis (UCL).
Batty, M. (2005). Cities and Complexity: Understanding Cities with Cellular Automata, Agent-Based Models, and Fractals. Cambridge, Massachusetts, MIT Press.
Batty, M. (2008). Fifty Years of Urban Modeling: Macro-Statics to Micro-Dynamics. The Dynamics of Complex Urban Systems. S. Albeverio, D. Andrey, P. Giordano and A. Vancheri. Heidelberg, Physica-Verlag: 1-20.
Batty, M. and P. M. Torrens (2005). "Modelling and prediction in a complex world." Futures 37: 745-766.
Bazzani, A., M. Capriotti, B. Giorgini, G. Melchiorre, S. Rambaldi, G. Servizi and G. Turchetti (2008). A Model for Asystematic Mobility in Urban Space. The Dynamics of Complex Urban Systems. S. Albeverio, D. Andrey, P. Giordano and A. Vancheri. Heidelberg, Physica-Verlag: 59-74.
Beck, H. (2010). Retrieved 29.08.2010, from http://britton.disted.camosun.bc.ca/beck_map.jpg.
Beer, S. (1974). Designing Freedom. London, John Wiley.
Berkel, B. v. and C. Bos (1999). Imagination. Amsterdam, UN Studio & Goose Press.
Berkel, B. v. and C. Bos (2006). Diagrams. Theories and Manifestoes of Contemporary Architecture. C. Jencks and K. Kropf. Chichester, West Sussex, Wiley-Academy: 325-327.
Berntson, G. M. (1997). "Topological scaling and plant root system architecture: developmental and functional hierarchies." New Phytologist 135(4): 621-634.
Birkin, M. H., A. G. D. Turner, B. Wu, P. M. Townend and J. Xu (2008). An Architecture for Social Simulation Models to Support Spatial Planning. The Third International Conference on e-Social Science. Oxford.
Blender Foundation (no date). "Blender." Retrieved 17.07.2009, from www.blender.org.
Blum, C. and M. Dorigo (2004). "The hyper-cube framework for ant colony optimization." Systems, Man, and Cybernetics, Part B 34(2): 1161-1172.
Blumberg, B. M. and T. A. Galyean (1995). Multi-Level Direction of Autonomous Creatures for Real-Time Virtual Environments. Computer Graphics.
Bochner, B. and F. Dock (2003). Street Systems and Classification to Support Smart Growth. 2nd Urban Street Symposium. Anaheim.

Bonabeau, E. (2001). Agent-based modeling: Methods and techniques for simulating human systems. Adaptive Agents, Intelligence, and Emergent Human Organization: Capturing Complexity through Agent-Based Modeling. Irvine.
Bonabeau, E., M. Dorigo and G. Theraulaz (1999). Swarm Intelligence: From Natural to Artificial Systems. New York, Oxford University Press.
Bonabeau, E., S. Guerin, D. Snyers, P. Kuntz and G. Theraulaz (2000). "Three-dimensional Architectures Grown by Simple 'Stigmergic' Agents." BioSystems 56: 13-32.
Bonabeau, E., G. Theraulaz, J. L. Deneuborg, N. Franks, O. Rafaelsberger, J. Joly and S. Blanco (1998). "A Model for the Emergence of Pillars, Walls and Royal Chambers in Termite Nests." Philosophical Transactions: Biological Sciences 353(1375): 1561-1576.
Bourg, D. M. (2001). Physics for Game Developers. Sebastopol, O'Reilly.
Braitenberg, V. (1986). Vehicles: Experiments in Synthetic Psychology. Cambridge, Massachusetts, MIT Press.
Bridge, J. S. (2003). Rivers and Floodplains: Forms, Processes, and Sedimentary Record. New York, Wiley-Blackwell.
Brimicombe, A. J. and C. Li (2008). "Agent-based services for the validation and calibration of multi-agent models." Computers, Environment and Urban Systems 32(6): 464-473.
Brooks, R. A. (1991a). "Intelligence without representation." Artificial Intelligence 47: 139-159.
Brooks, R. A. (1991b). Intelligence without reason. Technical Report: AIM-1293. Cambridge, Massachusetts Institute of Technology.
Buhl, J., J. Gautrais, J. L. Deneuborg, P. Kuntz and G. Theraulaz (2006). "The Growth and Form of Tunneling Networks in Ants." Journal of Theoretical Biology 243: 287-298.
Butler, Z., K. Kotay, D. Rus and K. Tomita (2001). Cellular Automata for Decentralized Control of Self-Reconfigurable Robots. ICRA Workshop on Modular Self-Reconfigurable Robots.
CABE (no date). "Case studies." Retrieved 02.03.2011, from http://www.cabe.org.uk/casestudies.
Calogero, E. (2008). Getting from A to B and Back: A Representational Framework for Pedestrian Movement Simulation in a School Environment. EDRA 39th Annual Conference, Veracruz.
Camp, C. V., B. J. Bichon and S. P. Stovall (2005). "Design of Steel Frames Using Ant Colony Optimization." Journal of Structural Engineering 131(3): 369-379.
Campbell, M. I., J. Cagan and K. Kotovsky (1998). A-Design: Theory and Implementation of an Adaptive, Agent-Based Method of Conceptual Design. Artificial Intelligence in Design. Lisbon.
Capra, F. (1996). The Web of Life. London, HarperCollins.
Carranza, P. M. and P. S. Coates (2000). The use of Swarm Intelligence to generate architectural form. Generative Art Conference, Milan.
Castle, C. J. E. and A. T. Crooks (2006). Principles and concepts of agent-based modelling for developing geospatial simulations. CASA Working Papers. London, Centre for Advanced Spatial Analysis (UCL).
Chase, W. G. (1982). Spatial representations of taxi drivers. Acquisition of Symbolic Skills. D. R. Rogers and J. A. Sloboda. New York, Plenum: 391-405.
Chomsky, N. (1956). "Three models for the description of language." IRE Transactions on Information Theory 2: 113-124.
Christaller, W. (no date). Retrieved 11.05.2009, from http://www.answers.com/topic/centralplace-theory-1.
Clerkin, P. (2005). "Glossary of Architectural Terms." Retrieved 18.07.2010, from http://www.archiseek.com.
Coates, P. S. (2009). Computational model of beady ring. R. Puusepp. London.
Coates, P. S. (2010). Programming.Architecture. London, Routledge.
Coates, P. S., T. Broughton and A. Tan (1997). The Use of Genetic Programming in Exploring 3D Design Worlds. London, Centre for Environment and Computing in Architecture, University of East London.
Coates, P. S., C. Derix and C. Simon (2003). Morphogenetic CA 69' 40' 33 north. Generative Art Conference, Milan.

Coates, P. S., N. Healy, C. Lamb and W. L. Voon (1996). The Use of Cellular Automata to Explore Bottom-Up Architectonic Rules. Eurographics UK Chapter 14th Annual Conference. London.
Grecu, D. L. and D. C. Brown (2000). Expectation formation in multi-agent design systems. Artificial Intelligence in Design '00, Kluwer Academic Publishers.
Crooks, A. T. (2008). Constructing and Implementing an Agent-Based Model of Residential Segregation through Vector GIS. CASA Working Papers. London, Centre for Advanced Spatial Analysis (UCL).
Cross, N. (1977). The Automated Architect. London, Pion.
D'Souza, D. F. and A. C. Wills (1998). Objects, Components, and Frameworks with UML: The Catalysis Approach. Reading, Massachusetts, Addison-Wesley.
De Jong, K. A. (2006). Evolutionary Computation: A Unified Approach. Cambridge, Massachusetts, MIT Press.
De Schutter, B., S. P. Hoogendoorn, H. Schuurman and S. Stramigioli (2003). "A Multi-Agent Case-Based Traffic Control Scenario Evaluation System." Intelligent Transportation Systems 1(12): 678-683.
Deaton, M. and J. Winebrake (2000). Dynamic Modelling of Environmental Systems. London, Springer.
Deneuborg, J. L., J. M. Pasteels and J. C. Veraeghe (1983). "Probabilistic Behaviour in Ants: a Strategy of Errors?" Journal of Theoretical Biology 105: 259-271.
Deneuborg, J. L., G. Theraulaz and R. Beckers (1992). Swarm Made Architectures. Toward a Practice of Autonomous Systems, Proceedings of the First European Conference on Artificial Life, MIT Press.
Derix, C. (2008). Genetically Modified Spaces. Space Craft: Developments in Architectural Computing. D. Littlefield. London, RIBA Publishing: 48-57.
Dey, T. K. and W. Zhao (2003). "Approximating the Medial Axis from the Voronoi Diagram with a Convergence Guarantee." Algorithmica 38(1): 179-200.
Dijkstra, J. and H. J. P. Timmermans (2002). "Towards a multi-agent model for visualizing simulated user behavior to support the assessment of design performance." Automation in Construction 11: 135-145.
Dijkstra, J., H. J. P. Timmermans and B. de Vries (2007). Empirical estimation of agent shopping patterns for simulating pedestrian movement. CUPUM07 Computers in Urban Planning and Urban Management, Iguassu Falls.
Dill, J. (2004). Measuring Network Connectivity for Bicycling and Walking. TRB Annual Meeting. Portland.
Doran, J., S. Franklin, N. Jennings and T. Norman (1997). "On Cooperation in Multi-Agent Systems." The Knowledge Engineering Review 12: 309-314.
Dorigo, M., M. Birattari and T. Stützle (2006). Ant Colony Optimization: Artificial Ants as a Computational Intelligence Technique. IRIDIA - Technical Report. Brussels.
Dorigo, M., V. Maniezzo and A. Colorni (1996). "The Ant System: Optimization by a colony of cooperating agents." Transactions on Systems, Man, and Cybernetics - Part B 26(1): 1-13.
Dorigo, M. and K. Socha (2007). An Introduction to Ant Colony Optimization. IRIDIA - Technical Report. Brussels.
Downs, R. M. and D. Stea (1977). Maps in Minds: Reflections of Cognitive Mapping. New York, Harper & Row.
Duarte, J. P. (2004). The Virtual Architect. Generative Art Conference, Milan.
Dunham, G., S. Tisue and U. Wilensky (2004). NetLogo Erosion model. Evanston, Center for Connected Learning and Computer-Based Modeling, Northwestern University.
Evans, R. (2008). Urban Design Compendium 2: Delivering Quality Places. English Partnerships, The Housing Corporation.
Ferre, A., T. Sakamoto, M. Kubo, T. Daniell and E. van Goethem, Eds. (2002). The Yokohama Project. Barcelona.
Forrester, J. W. (1972). Understanding the counterintuitive behaviour of social systems. Systems Behaviour. J. Beishon and G. Peters, The Open University Press.

Frazer, J. (1995). An Evolutionary Architecture. London, Architectural Association.
Fuhrmann, O. and C. Gotsman (2006). "On the algorithmic design of architectural configurations." Environment and Planning B: Planning and Design 33: 131-140.
Funes, P. J. and J. B. Pollack (1999). Computer Evolution of Buildable Objects. Evolutionary Design by Computers. P. J. Bentley. San Francisco, Morgan Kaufmann.
Galanter, P. (2003). What is Generative Art? Complexity Theory as a Context for Art Theory. Generative Art Conference, Milan.
Gibson, J. (1979). The Ecological Approach to Visual Perception. Boston, Houghton Mifflin.
Gilbert, N. (1995). Emergence in Social Simulation. Artificial Societies: The Computer Simulation of Social Life. N. Gilbert and R. Conte. London, UCL Press: 144-157.
Gilbert, N. (2004). Agent-based social simulation: dealing with complexity. Guildford, University of Surrey.
Gilbert, N. and P. Terna (2000). "How to build and use agent-based models in social science." Mind and Society 1(1): 57-72.
Glass, K. R., C. Morkel and S. B. Bangay (2006). Duplicating Road Patterns in South African Informal Settlements Using Procedural Techniques. 4th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa, New York.
Goldberg, D. E. (2008). Genetic algorithms in search, optimization, and machine learning. Lecture given at University of Illinois, Urbana-Champaign.
Goldstein, J. (2005). "Emergence, Creativity, and the Logic of Following and Negating." The Innovation Journal 12(23).
Goldstone, R. and M. Roberts (2006). "Self-organized Trail Systems in Groups of Humans." Complexity 11(6): 43-50.
Gomes, B., J. Bento, S. Scheer and R. Cerquiera (1998). Distributed Agents Supporting Event-driven Design Processes. Artificial Intelligence in Design, Kluwer Academic Publishers.
Google (2011a). "Satellite view of Nordhavnen, Copenhagen." Retrieved 02.03.2011.
Google (2011b). "Map of Orenco Station, Portland." Retrieved 02.03.2011.
Google (2011c). "Map of Hammarby Sjöstad, Stockholm." Retrieved 02.03.2011.
Google (2011d). "Map of Vauban, Freiburg." Retrieved 02.03.2011.
Grand, S. (2001). Creation: Life and How to Make It. London, Phoenix.
Grey, W. W. (1951). "A Machine That Learns." Scientific American 185(2): 60-63.
Gu, N. and M. L. Maher (2003). "A Grammar for the Dynamic Design of Virtual Architecture Using Rational Agents." International Journal of Architectural Computing 1(4): 489-501.
Gutjahr, W. J. (2008). "First Steps to the Runtime Complexity Analysis of Ant Colony Optimization." Computers and Operations Research 35(9): 2711-2727.
Hadeli, P. Valckenaers, M. Kollingbaum and H. v. Brussel (2004). "Multi-agent coordination and control using stigmergy." Computers in Industry 53(1): 75-96.
Hadlaw, J. (2003). "The London Underground Map: Imagining Modern Time and Space." Design Issues 19(1): 25-36.
Haggett, P. and R. J. Chorley (1969). Network Analysis in Geography. London, Butler & Tanner.
Haggett, P., A. D. Cliff and A. Frey (1977). Locational Models. London, Edward Arnold.
Haverkort, H. J. and H. L. Bodlaender (1999). Finding a Minimal Tree in a Polygon with its Medial Axis.
Helbing, D., I. J. Farkas, P. Molnar and T. Vicsek (2002). Simulation of Pedestrian Crowds in Normal and Evacuation Situations. Pedestrian and Evacuation Dynamics. M. Schreckenberg and S. D. Sarma. Berlin, Springer Verlag: 21-58.
Heppenstall, A. J., A. J. Evans and M. H. Birkin (2007). "Genetic algorithm optimisation of an agent-based model for simulating a retail market." Environment and Planning B: Planning and Design 34: 1051-1070.
Herr, C. M. (2003). Using Cellular Automata to Challenge Cookie-Cutter Architecture. Generative Art Conference, Milan.
Herr, C. M. and T. Kvan (2007). "Adapting cellular automata to support the architectural design process." Automation in Construction 16: 61-69.

Herring, M. (2004). "The Euclidean Steiner Tree Problem." Mathematics and Computer Science. Retrieved 01.04.2009, from www.denison.edu/academics/departments/mathcs/herring.pdf.
Heylighen, F. and C. Joslyn (2001). Cybernetics and Second-Order Cybernetics. Encyclopedia of Physical Science & Technology. R. A. Meyers. New York, Academic Press.
Hillier, B. (1989). "Architecture of the urban object." Ekistics 56(334/335): 5-21.
Hillier, B. and J. Hanson (1984). The Social Logic of Space. Cambridge, Cambridge University Press.
Holland, J. (1998). Emergence: From Chaos to Order. Oxford, Oxford University Press.
Holland, O. and C. Melhuish (1999). "Stigmergy, self-organisation, and sorting in collective robotics." Artificial Life 5(2): 173-202.
Hölscher, C., S. Büchner, M. Brösamle, T. Meilinger and G. Strube (2007). Signs and Maps - Cognitive Economy in the Use of External Aids for Indoor Navigation. 29th Annual Conference of the Cognitive Science Society. Nashville.
Howard, E. (no date). Retrieved 11.05.2009, from www.oliviapress.co.uk.
Huhns, M. N. and L. M. Stephens (1999). Multiagent Systems and Societies of Agents. Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. G. Weiss. Cambridge, MIT Press.
Ireland, T. (2008). Sniffing space. Generative Art Conference, Milan.
Izquierdo-Torres, E. (2004). Collective Intelligence in Multi-Agent Robotics: Stigmergy, Self-Organization and Evolution.
Jacob, C. and S. v. Mammen (2007). "Swarm grammars: growing dynamic structures in 3D agent spaces." Digital Creativity 1(18): 54-64.
Jin, Y. and Y. Zhao (2007). "Chaotic Ant Colony Algorithm for Preliminary Ship Design." Natural Computation 4: 776-781.
Jodidio, P. (2001). Architecture Now! 3. Stuttgart.
Johnson, S. (2002). Emergence: The Connected Lives of Ants, Brains, Cities and Software. London, Penguin.
Jormakka, K. (2002). Flying Dutchman: Motion in Architecture. Basel, Birkhäuser.
Junfeng, J. (2003). Transition Rule Elicitation for Urban Cellular Automata Models. Enschede, International Institute for Geo-information Science and Earth Observation.
Kalay, Y. E. (2004). Architecture's New Media: Principles, Theories, and Methods of Computer-Aided Design. Cambridge, Massachusetts, MIT Press.
Keil, D. and D. Goldin (2006). Indirect Interaction in Environments for Multiagent Systems. Environments for Multiagent Systems II. D. Weyns, H. V. D. Parunak and F. Michel. Utrecht, Springer: 68-87.
Kelly, K. (1994). Out of Control: The Biology of Machines. London, Fourth Estate.
Kicinger, R., T. Arciszewski and K. A. De Jong (2004). Morphogenesis and structural design: cellular automata representations of steel structures in tall buildings. Congress on Evolutionary Computation, Portland.
Kilian, A. and J. Ochsendorf (2005). "Particle-spring Systems for Structural Form Finding." Journal of the International Association for Shell and Spatial Structures 46.
König, R. and C. Bauriedel (2004). Computer-generated Urban Structures. Generative Art Conference, Milan.
Kopperman, R., P. Panangaden, M. B. Smyth, D. Spreen and J. Webster (2006). "Spatial Representation: Discrete vs. Continuous Computational Models." Theoretical Computer Science 365(12): 169-170.
Koutamanis, A., M. v. Leusen and V. Mitossi (2001). Route analysis in complex buildings. CAAD Futures, Eindhoven.
Krawczyk, R. J. (2002). Architectural Interpretation of Cellular Automata. Generative Art Conference, Milan.
Krink, T. and F. Vollrath (1997). "Analysing Spider Behaviour with Rule-based Simulations and Genetic Algorithms." Journal of Theoretical Biology 185: 321-331.

Kuipers, B., D. Tecuci and B. J. Stankiewicz (2003). "The Skeleton in the Cognitive Map: A Computational and Empirical Exploration." Environment and Behaviour 35(1): 81-106.
Kukla, R., J. Kerridge, A. Willis and J. Hine (2001). "PEDFLOW: Development of an Autonomous Agent Model of Pedestrian Flow." Transportation Research Record 1774(1): 11-17.
Ladley, D. and S. Bullock (2005). "The Role of Logistic Constraints in Termite Construction of Chambers and Tunnels." Journal of Theoretical Biology 234: 551-564.
Leach, N. (2004). Swarm Tectonics. Digital Tectonics. N. Leach, D. Turnbull and C. Williams. London, Wiley: 70-77.
Ligmann-Zielinska, A. and P. Jankowski (2007). "Agent-based models as laboratories for spatially explicit planning policies." Environment and Planning B: Planning and Design 34: 316-335.
Llewelyn-Davies (2000). Urban Design Compendium 1: Urban Design Principles. English Partnerships, The Housing Corporation.
Longley, P. A. (2004). "Geographical Information Systems: on modelling and representation." Progress in Human Geography 28(1): 108-116.
Luhmann, N. (1984). Social Systems. Frankfurt, Suhrkamp Verlag.
Lynch, K. (1981). Good City Form. Cambridge, Massachusetts, MIT Press.
Macal, C. M. and M. J. North (2006). Tutorial on Agent-based Modeling and Simulation Part 2: How to Model with Agents. Winter Simulation Conference.
Macgill, J. (2000). Using flocks to drive a Geographical Analysis Engine. Artificial Life VII, Cambridge, Massachusetts, MIT Press.
Maciel, A. (2008). Artificial Intelligence and the Conceptualisation of Architecture. Space Craft: Developments in Architectural Computing. D. Littlefield. London, RIBA Publishing: 64-72.
Mahdavi, H. S. and S. Hanna (2004). Optimising Continuous Microstructures: A Comparison of Gradient-Based and Stochastic Methods. The Joint 2nd International Conference on Soft Computing and Intelligent Systems and 5th International Symposium on Advanced Intelligent Systems. Yokohama.
Mammen, S. v., C. Jacob and G. Kokai (2005). Evolving Swarms that Build 3D Structures. IEEE Congress on Evolutionary Computation. Edinburgh.
Marshall, S. (2005). Streets & Patterns. Abingdon, Oxon, Spon Press.
Maturana, H. R. and F. J. Varela (1980). Autopoiesis and Cognition: The Realization of the Living. Boston, Reidel.
Maxvan (no date). Retrieved 11.04.2009, from www.maxwan.com.
McCormack, J., A. Dorin and T. Innocent (2004). Generative Design: a paradigm for design research. Futureground, Design Research Society, Melbourne.
Merleau-Ponty, M. (1979). Phenomenology of Perception. Suffolk, St Edmundsbury Press.
Mertens, K., T. Holvoet and Y. Berbers (2004). Adaptation in a Distributed Environment. First International Workshop on Environments for Multiagent Systems. New York.
Minsky, M. (1988). The Society of Mind. London, Pan Books.
Mitchell, W. J. (2006). E-topia: Information and Communication Technologies and the Transformation of Urban Life. The Network Society: From Knowledge to Policy. G. Cardoso and M. Castells. Sais, Center for Transatlantic Relations and Johns Hopkins University.
Morgan, M. (2003). The Space Between Our Ears. London, Weidenfeld & Nicholson.
Morse, A. F. and T. Ziemke (2008). "On the role(s) of modelling in cognitive science." Pragmatics & Cognition 16(1): 37-56.
Moussavi, F. and A. Zaera-Polo (2006). On Instruments: Diagrams, Drawing and Graphs. Theories and Manifestoes of Contemporary Architecture. C. Jencks and K. Kropf. Chichester, West Sussex, Wiley-Academy: 337-339.
Nembrini, J., N. Reeves, E. Poncet, A. Martinoli and A. Winfield (2005). Mascarillons: flying swarm intelligence for architectural research. Swarm Intelligence Symposium, Pasadena.
Neumann, F. and C. Witt (2008). Ant Colony Optimization and the Minimum Spanning Tree Problem. Lecture Notes in Computer Science. Berlin, Springer. 5313: 153-166.
Nickerson, J. V. (2008). Generating Networks. Design Computing and Cognition, Atlanta.

Otto, F. and B. Rasch (1995). Finding Form: Towards an Architecture of the Minimal. Stuttgart, Axel Menges.
Parish, Y. and P. Mueller (2001). Procedural Modeling of Cities. SIGGRAPH.
Parker, D. C., S. M. Manson, M. A. Janssen, M. J. Hoffmann and P. Deadman (2003). "Multi-Agent Systems for the Simulation of Land-Use and Land-Cover Change: A Review." Annals of the Association of American Geographers 93(2): 314-337.
Parunak, H. V. D. (2006). A Survey of Environments and Mechanisms for Human-Human Stigmergy. Environments for Multiagent Systems II. D. Weyns, H. V. D. Parunak and F. Michel. Utrecht.
Pechač, P. (2002). Application of Ant Optimisation Algorithm on the Recursive Propagation Model for Urban Microcells. XXVIIth General Assembly of the International Union of Radio Science. Maastricht: 569.
Penn, A. (2001). Space Syntax and Spatial Cognition. Or, why the axial line? Space Syntax 3rd International Symposium, Atlanta.
Penn, A. and A. Turner (2002). Space Syntax Based Agent Simulation. International Conference on Pedestrian and Evacuation Dynamics, Duisburg.
Pfeifer, R. and C. Scheier (2001). Understanding Intelligence. Cambridge, Massachusetts, MIT Press.
Polidori, M. and R. Krafta (2004). Environment - Urban Interface within Urban Growth. Developments in Design & Decision Support Systems in Architecture and Urban Planning. J. P. v. Leeuwen and H. J. P. Timmermans. Eindhoven, Eindhoven University of Technology: 49-62.
Portugali, J. (1999). Self-Organization and the City. Berlin, Springer-Verlag.
Prusinkiewicz, P. and A. Lindenmayer (1990). The Algorithmic Beauty of Plants. New York, Springer-Verlag.
Pullen, W. D. (2007). Daedalus.
Puusepp, R. and P. S. Coates (2007). "Simulations with cognitive and design agents." International Journal of Architectural Computing 5: 100-114.
Python Software Foundation (no date). Retrieved 17.07.2009, from www.python.org.
Raisbeck, P. (2007). "Provocative Agents: Agent Based Modelling Systems and the Global Production of Architecture."
Ramos, V., C. Fernandes and A. C. Rosa (2007). Social Cognitive Maps, Swarm Collective Perception and Distributed Search on Dynamic Landscapes.
Raubal, M. (2001). "Ontology and epistemology for agent-based wayfinding simulation." Geographical Information Science 15(7): 653-665.
Reffat, R. M. (2003). Architectural Exploration and Creativity using Intelligent Design Agents. 21st eCAADe Conference, Graz.
Reffat, R. M. (2006). "Computing in Architectural Design: Reflection and approach to New Generations of CAAD." ITcon 11: 655-668.
Reiser+Umemoto (2006). Atlas of Novel Tectonics. New York, Princeton Architectural Press.
Resnick, M. (1994). Turtles, Termites, and Traffic Jams: Explorations in Massively Parallel Microworlds. Cambridge, Massachusetts, MIT Press.
Resnick, M. (1999). Decentralized Modeling and Decentralized Thinking. Modeling and Simulation in Science and Mathematics Education. W. Feurzeig and N. Roberts. New York, Springer: 114-137.
Reynolds, C. W. (1987). "Flocks, Herds, and Schools: A Distributed Behavioral Model." Computer Graphics 21(4): 25-34.
Reynolds, C. W. (2004). Retrieved 17.07.2009, from http://opensteer.sourceforge.net.
Richardson, T. A. (1859). The Art of Architectural Modelling in Paper. London, John Weale.
Ritchie, G. (2003). Static Multi-processor Scheduling with Ant Colony Optimisation & Local Search. School of Informatics. Edinburgh, University of Edinburgh.
Rittel, H. W. J. and M. M. Webber (1973). "Dilemmas in a General Theory of Planning." Policy Sciences 4: 155-169.

Rizzoli, A. E., R. Montemanni, E. Lucibello and L. M. Gambardella (2007). "Ant colony optimization for real-world vehicle routing problems." Swarm Intelligence 1(2): 135-151.
Rodrigues, A. and J. Raper (1999). Defining Spatial Agents. Spatial Multimedia and Virtual Reality. J. Raper and A. Câmara. London, Taylor & Francis: 111-129.
Rogers, R. (1999). Towards an Urban Renaissance. London, Urban Task Force.
Rosenblatt, F. (1958). "The Perceptron: a Probabilistic Model for Information Storage and Organization in the Brain." Psychological Review 65(6): 386-408.
RUDI (no date). "Classic: A city is not a tree, part one, by Christopher Alexander." Retrieved 28.04.2009, from http://www.rudi.net/pages/8755.
Ruiz-Tagle, J. V. (2007). "Modeling and Simulating the City: Deciphering the Code of a Game of Strategy." International Journal of Architectural Computing 5(3): 571-587.
Runions, A., M. Fuhrer, B. Lane, P. Federl, A. Rolland-Lagan and P. Prusinkiewicz (2005). Modeling and visualization of leaf venation patterns. SIGGRAPH.
Russell, S. and P. Norvig (1995). Artificial Intelligence: A Modern Approach. London, Prentice-Hall.
Samani, N. N., L. Hajibabai, M. R. Delavar, M. R. Malek and A. U. Frank (2007). An Agent-based Indoor Wayfinding Based on Digital Sign System. Urban Data Management. M. Rumor, V. Coors, E. M. Fendel and S. Zlatanova. London, Taylor & Francis: 511-521.
Samaniego, H. and M. E. Moses (2008). "Cities as organisms: Allometric scaling of urban road networks." Journal of Transport and Land Use 1(1): 21-39.
Schelling, T. C. (1971). "Dynamic Models of Segregation." Journal of Mathematical Sociology 1: 143-186.
Scholl, H. (2001). Agent-based and System Dynamics Modeling: A Call for Cross Study and Joint Research. 34th Annual Hawaii International Conference on System Sciences. Honolulu.
Schumacher, P. (2008). The future is parametric. Building Design. 19.09.2008.
Sen, S., S. Saha, S. Airiau, T. Candale, D. Banerjee, D. Chakraborty, P. Mukherjee and A. Gursel (2007). Robust Agent Communities. Autonomous Intelligent Systems: Agents and Data Mining. V. Gorodetsky, C. Zhang, V. A. Skormin and L. Cao, Springer. 4476: 28-45.
Sevaldson, B. (2000). Dynamic Generative Diagrams. 18th eCAADe Conference, Weimar.
Shane, D. G. (2005). Recombinant Urbanism: Conceptual Modeling in Architecture, Urban Design, and City Theory. Chichester, West Sussex, John Wiley.
Shepherd, P. (2009). Digital Architectonics in Practice. 27th eCAADe Conference, Istanbul.
Shoham, Y. and K. Leyton-Brown (2009). Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. New York, Cambridge University Press.
Shpuza, E. (2006). Floorplate Shapes and Office Layouts: A Model of the Effect of Floorplate Shape on Circulation Integration. Architecture, Georgia Institute of Technology. PhD Thesis.
Shpuza, E. and J. Peponis (2006). Floorplate shapes and office layouts: a model of the relationship between shape and circulation integration. 5th International Symposium of Space Syntax, Delft.
Silva, C. A., R. B. Seixas and O. L. Farias (2005). Geographical Information Systems and Dynamic Modeling via Agent Based Systems. Advances in Geographical Information Systems. Bremen.
Sim, K. M. and W. H. Sun (2002). Multiple Ant-Colony Optimization for Network Routing. First International Symposium on Cyber Worlds. Washington, IEEE Computer Society.
Sims, K. (1994). Evolving Virtual Creatures. Computer Graphics (SIGGRAPH '94 Proceedings).
Skyttner, L. (1996). General Systems Theory: An Introduction. Basingstoke, Macmillan.
Smith, R. (2007). "Open Dynamics Engine." Retrieved 25.03.2010, from http://ode.org/.
Somol, E. (2006). Dummy Text, or the Diagrammatic Basis of Contemporary Architecture. Diagram Diaries. P. Eisenman. London, Wiley-Academy: 5-11.
Song, Y. and G.-J. Knaap (2004). "Measuring Urban Form: Is Portland Winning the War on Sprawl?" Journal of the American Planning Association 70(2): 210-225.
Spuybroek, L. (2006). Machining Architecture. Theories and Manifestoes of Contemporary Architecture. C. Jencks and K. Kropf. Chichester, West Sussex, Wiley-Academy: 351-352.

Stanley, K. O., B. D. Bryant and R. Miikkulainen (2005). Evolving Neural Network Agents in the NERO Video Game. Symposium on Computational Intelligence and Games. Piscataway.
Stea, D. (1974). Architecture in the Head. Designing for Human Behavior. J. Lang. Stroudsburg, Dowden, Hutchinson & Ross: 157-168.
Steels, L. (1997). "The synthetic modeling of language origins." Evolution of Communication Journal 1(1): 1-34.
Stewart, R. and A. Russell (2003). Emergent Structures Built by a Minimalist Autonomous Robot Using a Swarm-inspired Template Mechanism. Australian Conference on Artificial Life, Canberra.
Stiny, G. and J. Gips (1972). Shape Grammars and the Generative Specification of Painting and Sculpture. IFIP Congress 71. C. V. Freiman. Amsterdam: 1460-1465.
Støy, K. (2004). Controlling Self-Reconfiguration using Cellular Automata and Gradients. 8th International Conference on Intelligent Autonomous Systems, Amsterdam.
Tabor, P. (1971). Traffic in Buildings 1: Pedestrian Circulation in Offices. Cambridge, University of Cambridge School of Architecture.
Terzidis, K. (2006). Algorithmic Architecture. Oxford, Elsevier.
Tesfatsion, L. (2006). Agent-based Computational Economics: A Constructive Approach to Economic Theory. Handbook of Computational Economics. L. Tesfatsion and K. L. Judd. North-Holland, Elsevier.
Testa, P., U.-M. O'Reilly, D. Weiser and I. Ross (2001). "Emergent Design: a crosscutting research program and design curriculum integrating architecture and artificial intelligence." Environment and Planning B: Planning and Design 28(4): 481-498.
Testa, P. and D. Weiser (2002). "Emergent Structural Morphology." Architectural Design, Special Issue, Contemporary Techniques in Architecture 72(1): 12-16.
Theraulaz, G. and E. Bonabeau (1995). "Modelling the Collective Building of Complex Architectures in Social Insects with Lattice Swarms." Journal of Theoretical Biology 177(4): 381-400.
Tibert, A. G. and S. Pellegrino (2003). "Review of Form-Finding Methods for Tensegrity Structures." International Journal of Space Structures 18(4): 209-223.
Timpf, S., C. S. Volta, D. W. Pollock, A. U. Frank and M. J. Egenhofer (1992). "A conceptual model of wayfinding using multiple levels of abstraction." Lecture Notes in Computer Science 639.
Turner, A. (2003). "Analysing the visual dynamics of spatial morphology." Environment and Planning B: Planning and Design 30: 657-676.
Turner, A. and A. Penn (2002). "Encoding natural movement as an agent-based system: an investigation into human pedestrian behaviour in the built environment." Environment and Planning B: Planning and Design 29: 473-490.
Turner, S. (2000). "Architecture and morphogenesis in the mound of Macrotermes michaelseni (Sjöstedt) (Isoptera: Termitidae, Macrotermitinae) in northern Namibia." Cimbebasia 16: 143-175.
Turner, S. (2007). Homeostasis, complexity, and the problem of biological design. Complexity and Philosophy, Stellenbosch.
UNStudio (no date). "Louis Vuitton store." Retrieved 11.03.2009, from www.unstudio.com.
Valckenaers, P., M. Kollingbaum, H. v. Brussel, O. Bochmann and C. Zamfirescu (2001). The Design of Multi-Agent Coordination and Control Systems using Stigmergy. IWES'01 Conference, Bled.
Varawalla, H. (2004). "The importance of the 'design of circulation' for hospitals." Healthcare Management. Retrieved 14.10.2008, from http://www.expresshealthcaremgmt.com/20040531/architecture01.shtml.
Vauban (no date). Retrieved 02.03.2011, from www.vauban.de.
Vidal, J. M. (2003). Learning in Multiagent Systems: An Introduction from a Game-Theoretic Perspective. Adaptive Agents and Multiagent Systems. E. Alonso, Springer. 2636: 202-215.
Vidal, J. M. (2007). Fundamentals of Multiagent Systems: with NetLogo Examples.


Von Mammen, S. and C. Jacob (2007). Genetic Swarm Grammar Programming: Ecological Breeding like a Gardener. IEEE Congress on Evolutionary Computation.
Von Neumann, J. (1951). The General and Logical Theory of Automata. Cerebral Mechanisms in Behavior. L. A. Jeffress. New York, Wiley: 1-41.
Wan, A. D. M. and P. J. Braspenning (1996). Adaptive Agent Design Based on Reinforcement Learning and Tracking. 6th Belgian-Dutch Conference on Machine Learning. Maastricht.
Weinstock, M. (2006). Morphogenesis and the Mathematics of Emergence. Theories and Manifestoes of Contemporary Architecture. C. Jencks and K. Kropf. Chichester, West Sussex, Wiley-Academy: 351-352.
Werner, S., B. Krieg-Brückner and T. Herrmann (2000). "Modelling Navigational Knowledge by Route Graphs." Spatial Cognition II: 295-316.
Weyns, D., H. V. D. Parunak, F. Michel, T. Holvoet and J. Ferber (2005). "Environments for Multiagent Systems: State-of-the-Art and Research Challenges." Environments for Multiagent Systems: 1-47.
Wheeler, W. (1911). "The Ant Colony as an Organism." Journal of Morphology 22: 307-325.
Wilensky, U. (1998). NetLogo Pursuit model. Evanston, Center for Connected Learning and Computer-Based Modeling, Northwestern University.
Wilensky, U. (1999). "NetLogo." Retrieved 17.07.2009, from http://ccl.northwestern.edu/netlogo.
Wilensky, U. (2001). NetLogo Rabbits Grass Weeds model. Evanston, Center for Connected Learning and Computer-Based Modeling, Northwestern University.
Wilensky, U. (2002). NetLogo DLA model. Evanston, Center for Connected Learning and Computer-Based Modeling, Northwestern University.
Wilensky, U. (2004). NetLogo Rebellion model. Evanston, Center for Connected Learning and Computer-Based Modeling, Northwestern University.
Willgoose, G., R. Bras and I. Rodriguez-Iturbe (1991). "A Coupled Channel Network Growth and Hillslope Evolution Model: 2. Nondimensionalization and Applications." Water Resources Research 27(7): 1685-1696.
Williams, C. (2005). Computers and the Design and Construction Process. Visions for the Future of Construction Education: Teaching Construction in a Changing World. M. Voyatzaki, European Network of Heads of Schools of Architecture.
Williams, C. (2008). Practical Emergence. Space Craft: Developments in Architectural Computing. D. Littlefield. London, RIBA Publishing: 72-79.
Willoughby, T. M. (1975). "Building forms and circulation patterns." Environment and Planning B: Planning and Design 2(1): 59-87.
Wilson, E. (1980). Sociobiology. Cambridge, Harvard University Press.
Wooldridge, M. (1999). Intelligent Agents. Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. G. Weiss. Cambridge, MIT Press.
Zee, A. v. d. and B. de Vries (2008). Design by Computation. Generative Art Conference, Milan.
Zhang, X. and M. P. Armstrong (2005). Using a Genetic Algorithm to Generate Alternatives for Multiobjective Corridor Location Problems. Geocomputation, University of Michigan.


Appendix 1: Submission to the open international ideas competition for Nordhavnen

“Planting a future for Copenhagen” by Slider Studio and Mæ architects, board 1


“Planting a future for Copenhagen” by Slider Studio and Mæ architects, board 2


“Planting a future for Copenhagen” by Slider Studio and Mæ architects, board 3


“Planting a future for Copenhagen” by Slider Studio and Mæ architects, board 4


“Planting a future for Copenhagen” by Slider Studio and Mæ architects, circulation diagram


Appendix 2: Submission to the international ideas competition for Tallinn City Hall

“Agora” by Slider Studio, board 1


“Agora” by Slider Studio, board 2


“Agora” by Slider Studio, board 3


“Agora” by Slider Studio, board 4


Glossary of terms

Agent – an autonomous unit of computation. As explained in Chapter 4, no universally accepted definition exists. In this thesis, an agent is seen as a piece of computer code – an object in object-oriented programming terms – that is distinguishable from its environment and possesses some control over its actions

Algorithm – a sequence of well-defined instructions or a description of a procedure for solving a problem. Explicitly written algorithms can be executed by computers

Ant colony optimisation – a metaheuristic method that simulates the ant colony’s foraging behaviour in order to find and optimise paths between given points
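As an illustrative sketch only (not code from the thesis; all names and parameter values are mine), the core pheromone feedback loop of ant colony optimisation can be reduced to choosing among candidate paths in proportion to their pheromone, reinforcing shorter paths more strongly, and letting pheromone evaporate:

```python
import random

def aco_shortest(path_lengths, iterations=100, n_ants=20, evaporation=0.1):
    """Return the index of the path the colony converges on."""
    pheromone = [1.0] * len(path_lengths)
    for _ in range(iterations):
        for _ in range(n_ants):
            # Each ant picks a path with probability proportional to pheromone.
            total = sum(pheromone)
            r, acc, choice = random.uniform(0, total), 0.0, 0
            for i, p in enumerate(pheromone):
                acc += p
                if r <= acc:
                    choice = i
                    break
            # Shorter paths receive more pheromone (positive feedback).
            pheromone[choice] += 1.0 / path_lengths[choice]
        # Evaporation lets the colony forget poor early choices.
        pheromone = [p * (1 - evaporation) for p in pheromone]
    return pheromone.index(max(pheromone))
```

In a full ACO algorithm the candidates are edges of a graph rather than whole paths, but the select-reinforce-evaporate cycle is the same.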

Bottom-up modelling – a modelling methodology where the global behaviour of a system is generated by defining the behaviour of the system's components at the local level. Multi-agent systems are typical examples of bottom-up modelling

Cellular automaton – a computational model where nodes in a regular graph take discrete states and “update their states in parallel using the same state transition rule.”1
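A minimal one-dimensional example (my own sketch, using Wolfram's rule numbering, not code from the thesis): every cell reads its own state and its two neighbours' states, and all cells apply the same transition rule in parallel:

```python
# One synchronous update step of an elementary cellular automaton.
def ca_step(cells, rule=110):
    n = len(cells)
    # Bit k of `rule` gives the next state for the neighbourhood whose
    # three binary states (left, self, right) encode the integer k.
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]
```

Iterating `ca_step` on a row of cells produces the familiar space-time diagrams of elementary automata; two- and three-dimensional automata generalise the neighbourhood but keep the same parallel update scheme.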

Circulation – the part of buildings and settlements that facilitates movement of people from place to place; “the means by which access is provided through and around”2 an environment

Complete (circulation) model – a computational model where circulation diagrams and the layout of non-circulatory spaces are generated in parallel

1 Adamatzky, A. (2001). Computing in Nonlinear Media and Automata Collectives. Bristol, Institute of Physics Publishing.
2 Clerkin, P. (2005). "Glossary of Architectural Terms." Retrieved 18.07.2010, from http://www.archiseek.com.


Design pattern – a theoretical schema that serves as a template for building computational models

Diagram – an abstract, information-rich representation of a solution from which design proposals can be derived by means of interpretation

Edge following – the movement of agents along the edges of obstacles or the edges of the bounded environment

Environment – a physical or virtual setting that consumes the system's (agent's) outputs and produces its inputs3. In the architectural design context, the term is used as shorthand for "built environment"

Generative design/modelling – a design methodology where dynamic processes and feedback loops are used for generating design solutions

Greedy algorithm – an algorithm where decisions are made based on locally optimal choices in hope of reaching the global optimum
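A classic illustration (my own sketch, not from the thesis) is greedy coin change: at each step take the largest coin that fits. For this coin set the locally optimal choice happens to be globally optimal, but greedy algorithms offer no such guarantee in general:

```python
# Greedy coin change: repeatedly take the largest coin not exceeding
# the remaining amount.
def greedy_change(amount, coins=(50, 20, 10, 5, 2, 1)):
    result = []
    for coin in coins:  # coins assumed sorted largest first
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result
```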

Heuristic – describing an approach that discovers solutions through rules of thumb and experience

Hill-climbing – a routine for finding optimal solutions based on locally available information
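In one dimension the routine can be sketched as follows (an illustrative example of mine, not the thesis's implementation): move to the best neighbouring point until no neighbour scores higher, i.e. a local optimum is reached:

```python
# Discrete hill-climbing: step towards the best neighbour until
# no neighbour improves the score (a local optimum).
def hill_climb(score, start, step=1):
    x = start
    while True:
        best = max([x - step, x + step], key=score)
        if score(best) <= score(x):
            return x  # no neighbour is better: stop here
        x = best
```

Note that the routine can get stuck on a local peak; this is exactly the weakness that stochastic methods such as roulette wheel selection are meant to mitigate.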

Medial axis – the set of points that have more than one closest point on the object's boundary

Mobile agent – an agent capable of moving around in its environment without maintaining topological relationships with other agents or its environment

3 Keil, D. and D. Goldin (2006). Indirect Interaction in Environments for Multiagent Systems. Environments for Multiagent Systems II. D. Weyns, H. V. D. Parunak and F. Michel. Utrecht, Springer: 68-87.


Model – a description of a system; a representation of a process or an object

Program – a set of executable algorithms or instructions, written in a programming language, for achieving a goal

Reactive agent – an agent that reacts to stimuli from its environment without having longer term goals

Roulette wheel selection – a semi-random selection mechanism that gives proportional advantage to solutions of higher fitness
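The mechanism can be sketched in a few lines (an illustrative example of mine, not from the thesis): an individual's chance of being picked equals its share of the total fitness, so fitter solutions are favoured without excluding weaker ones:

```python
import random

# Fitness-proportional ("roulette wheel") selection: spin a wheel whose
# segments are sized by fitness and return the individual the pointer lands on.
def roulette_select(population, fitnesses):
    total = sum(fitnesses)
    r = random.uniform(0, total)
    acc = 0.0
    for individual, fitness in zip(population, fitnesses):
        acc += fitness
        if r <= acc:
            return individual
    return population[-1]  # guard against floating-point rounding
```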

Self-organisation – a process that leads to an ordered structure in systems without external assistance; “the spontaneous reduction of entropy in a dynamic system” 4

Sensory-motor coupling – the process of mapping the system’s (agent’s) sensory inputs to its motor outputs

Setting out configuration – the initial state of model’s components prior to the execution

Spanning tree – a tree-like network of nodes and links where any point in the network can be reached from any other point. The minimal spanning tree is the shortest of all spanning trees5
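The minimal spanning tree can be computed with Prim's algorithm, sketched below (my own illustration, not code from the thesis): grow the tree from one node by always adding the cheapest edge that reaches a node not yet in the tree:

```python
# Prim's algorithm: returns the edges of a minimal spanning tree over
# `nodes`, where `weight(a, b)` gives the cost of connecting a to b.
def minimal_spanning_tree(nodes, weight):
    visited = {nodes[0]}
    edges = []
    while len(visited) < len(nodes):
        # Cheapest edge from the tree to any unvisited node.
        a, b = min(
            ((u, v) for u in visited for v in nodes if v not in visited),
            key=lambda e: weight(*e),
        )
        visited.add(b)
        edges.append((a, b))
    return edges
```

For circulation diagrams the nodes might be destinations and the weight their Euclidean distance, giving the shortest network that still connects every point.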

Stigmergy – a type of indirect communication in groups of individuals where messages are exchanged through their shared environment

4 Heylighen, F. and C. Joslyn (2001). Cybernetics and Second-Order Cybernetics. Encyclopedia of Physical Science & Technology. R. A. Meyers. New York, Academic Press.
5 Haggett, P., A. D. Cliff and A. Frey (1977). Locational Models. London, Edward Arnold.


Swarm – a group of agents that collectively carry out a distributed problem solving task6

Synthetic modelling – a methodology of producing knowledge by constructing models. Models are typically explored by observing how they respond to changed parameters

System – a conceptual or physical entity consisting of interacting parts7

Voronoi diagram – a partition of space into cells, where the edges of cells form a minimum energy network8
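A discrete version is easy to sketch (an illustrative example of mine, not the thesis's implementation): assign every grid cell to its nearest site, so that the cell boundaries fall halfway between neighbouring sites:

```python
# Discrete Voronoi partition: label each cell of a width x height grid
# with the index of its nearest site (squared Euclidean distance).
def voronoi_labels(width, height, sites):
    def nearest(x, y):
        return min(
            range(len(sites)),
            key=lambda i: (sites[i][0] - x) ** 2 + (sites[i][1] - y) ** 2,
        )
    return [[nearest(x, y) for x in range(width)] for y in range(height)]
```

The borders between differently labelled regions trace the Voronoi edges; in circulation modelling these edges are candidate paths equidistant from the surrounding sites.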

6 Deneubourg, J. L., G. Theraulaz and R. Beckers (1992). Swarm Made Architectures. Toward a Practice of Autonomous Systems, Proceedings of The First European Conference on Artificial Life, MIT Press.
7 Tabor, P. (1971). Traffic in buildings 1: pedestrian circulation in offices. Cambridge, University of Cambridge School of Architecture.
8 Coates, P. S. (2010). Programming.Architecture. London, Routledge.


GENERATING CIRCULATION DIAGRAMS FOR ARCHITECTURE AND URBAN DESIGN USING MULTI-AGENT SYSTEMS

RENEE PUUSEPP

A thesis submitted in partial fulfilment of the requirements of the School of Architecture and Visual Arts, University of East London for the degree of Doctor of Philosophy

April 2011

Abstract

For decades, cyberneticians, systems theorists and researchers in the fields of Artificial Intelligence and Artificial Life have been looking for methods of building intelligent computer applications that can solve complex problems. By nature, many design problems are complex, and solving them requires a certain degree of intelligence. It therefore comes as no surprise that sophisticated computational applications have become increasingly popular amongst academics and practitioners in various design disciplines. Despite the recent success of generative design methods, many modelling paradigms from AI and AL research remain largely unexplored in the context of architectural and urban design. One such paradigm is multi-agent modelling. Although multi-agent systems have been thoroughly explored and implemented in a diverse range of subject areas, from the social sciences to economics, design disciplines have largely refrained from deploying them.

This thesis explores multi-agent systems for conceptual design development – for generating circulation diagrams. Besides studying several known models in the architectural and urban design context, a few novel ones are proposed. Instead of drawing on existing urban and architectural theory, the inspiration for building circulation models comes from processes found in nature, where movement based on local navigational decisions leads to the emergence of highly complex and adaptable networks. Following the synthetic modelling approach, it is argued that studying and building simple agent-based models creates in-depth knowledge about the underlying principles of network development processes and allows one to gradually move towards building more sophisticated models. Once the principles of generating circulation systems are well understood, they can be used creatively in designing circulation in buildings and settlements.
The main aim of this thesis is to develop and expose generative methods for the early stages of the design process. By investigating ways of building, validating and controlling generative models, it is demonstrated how these models can be integrated into the design workflow.


Table of contents

Abstract .......... ii
Table of contents .......... iii
List of figures .......... vi
Acknowledgements .......... xii
Chapter 1: Introduction .......... 1
1.1 Computer scientific background .......... 3
1.3 Computing circulation diagrams .......... 6
1.4 Outline of the proposed research .......... 8
1.5 Goals of the research .......... 10
1.6 Methodology .......... 12
Chapter 2: Computational modelling .......... 14
2.1 Models in scientific research .......... 15
2.2 Types of models .......... 18
2.3 Models in architecture and urban design .......... 20
2.4 Computational design methods .......... 24
2.5 Generative design .......... 26
2.6 Chapter summary .......... 32
Chapter 3: Modelling circulation – an architectural review .......... 33
3.1 The essence of circulation .......... 34
3.2 Diagrams of circulation .......... 36
3.3 Topological circulation networks .......... 43
3.4 Optimal designs .......... 45
3.5 Computational models .......... 46
3.6 Chapter summary .......... 50
Chapter 4: Agent-based modelling and multi-agent systems: nature, origin and modern applications .......... 52
4.1 Background .......... 53
4.2 Definitions of agent .......... 59
4.3 Taxonomy of ABM .......... 62
4.4 Properties and behaviour of agent-based models .......... 64
4.5 Applications of multi-agent systems .......... 72
4.6 Chapter summary .......... 80
Chapter 5: Building blocks of agent-based circulation models .......... 82
5.1 Design of the circulation agent .......... 83
5.2 Design of the environment .......... 88
5.3 Behaviour of agents .......... 94
5.4 Environmental processing .......... 98
5.5 Agent-environment interaction .......... 102
5.6 Communication in circulation agents .......... 105
5.7 The setting out configuration .......... 108
Chapter 6: Prototypes .......... 111
6.1 Emergent path formation: Loop Generator .......... 114
6.2 Channel network formation: Stream Simulator .......... 120
6.3 Cellular automaton and hill-climbing agents: Labyrinth Traverser .......... 124
6.4 Network optimisation algorithm: Network Modeller .......... 126
6.5 Way-finding agents and ant colony optimisation .......... 132
6.6 Stigmergic building agents .......... 137
6.7 Space forming agents .......... 151
6.8 Discussion .......... 155
Chapter 7: Case study 1 – a multi-agent system for generating street layouts .......... 159
7.1 Developing the prototype .......... 161
7.2 Generating diagrams in context .......... 168
7.3 Quantitative analysis and evaluation of diagrams .......... 172
7.4 Conclusions and discussions .......... 182
Chapter 8: Case study 2 – an ant colony optimisation algorithm for generating corridor systems .......... 187
8.1 Ant colony optimisation algorithms .......... 187
8.2 Selected ACO algorithm .......... 189
8.3 Testing ACO parameters .......... 192
8.4 Generating corridor networks for office buildings .......... 197
8.5 Observations and conclusions .......... 207
Chapter 9: Controlling the diagram .......... 211
9.1 Emergent behaviour OF and IN agent colonies .......... 211
9.2 Development of the diagram .......... 215
9.3 Flexibility and sensitivity – an exploratory analysis of multi-agent models .......... 219
9.4 Means of control .......... 225
9.5 Discussion .......... 228
Chapter 10: Discussion and conclusions .......... 230
10.1 Multi-agent models for generating circulation systems .......... 231
10.2 Implications to the design process .......... 234
10.3 In search for parsimonious models .......... 236
10.4 Complete design diagrams .......... 241
10.5 Self-criticism .......... 243
10.6 Discussion. The author's view .......... 246
Bibliography .......... 251
Appendix 1: Submission to the open international ideas competition for Nordhavnen .......... 261
Appendix 2: Submission to the international ideas competition for Tallinn City Hall .......... 266
Glossary of terms .......... 270

List of figures

Figure 2.1: Walter Christaller, model of central place theory, the 1930s. Source: (Christaller no date) .......... 21
Figure 2.2: Ebenezer Howard, Garden City model, 1898. Source: (Howard no date) .......... 22
Figure 2.3: Bill Hillier, computer generated settlement models. Source: (Hillier 1989) .......... 23
Figure 3.1: Constructive diagram. Source: (Alexander 1964, p. 88) .......... 37
Figure 3.2: Functional circulation diagram of Yokohama terminal. Source: (Ferre et al. 2002, front cover) .......... 38
Figure 3.3: London Underground map. Source: (Beck 2010) .......... 39
Figure 3.4: Bridges in Leidsche Rijn by Maxwan Architects. Source: (Maxvan no date) .......... 39
Figure 3.5: Circulation diagram of Louis Vuitton store. Source: (UNStudio no date) .......... 40
Figure 3.6: From left to right: composition, configuration and constitution of street networks. Source: (Marshall 2005, p. 86) .......... 42
Figure 3.7: Topological classification of networks. Source: Haggett and Chorley (1969, p. 7) .......... 43
Figure 3.8: Tree-cities: Chandigarh, Brasília, Greenbelt in Maryland, plan of Tokyo. Source: (RUDI no date) .......... 44
Figure 4.1: Cooperation typology in multi-agent systems. Source: Franklin (Doran et al. 1997) .......... 71
Figure 5.1: Glider in action – gliding. This cellular automaton agent has 4 different postures that it ceaselessly repeats .......... 84
Figure 5.2: The game of chase. Smaller (black) agents are attracted to bigger (red) agents which, in turn, are repelled from smaller ones. With a large number of agents, such a 'game' reveals some important mechanisms that lie behind the complex behaviours of simple agents .......... 95
Figure 5.3: A swarm grammar – a branching diagram created by tracing agents using formal movement rules .......... 96
Figure 5.4: A context-sensitive swarm grammar – the branch length is defined by the available amount of 'light' .......... 97
Figure 5.5: Swarm grammars with hierarchical rules .......... 97
Figure 5.6: Stable structures as computed with a cellular automata algorithm. Darker cells are less stable than lighter ones .......... 102
Figure 6.1: Typical movement patterns of simple reactive agents. Emergent trails form open and closed loops to facilitate the continuity of the agents' movements .......... 115
Figure 6.2: A sequence of snapshots illustrates the development of closed loops in 2D .......... 116
Figure 6.3: The body plan of the 2D Loop Generator agent. α denotes the angle between the front sensor and a side sensor .......... 116
Figure 6.4: Tests with different sensory configurations – agents with 3 sensors. The angle (α) between front and side sensors (from left to right): 0, 22.5, 45, 67.5, 90 and 120 degrees .......... 117
Figure 6.5: An agent's 'body plan' in 3D – the development of minimal sensory configurations that produced continuous circulation diagrams .......... 118
Figure 6.6: Generated 3D circulation diagrams .......... 119
Figure 6.7: A sequence of snapshots illustrates the development of closed loops in 3D .......... 120
Figure 6.8: The input landscape (left) and the resulting stream channels (right) .......... 121
Figure 6.9: A typical progress of the channel formation algorithm .......... 122
Figure 6.10: Tests with 1, 10, 100, 500, 1000 and 1500 agents .......... 123
Figure 6.11: All tests with 1000 agents .......... 123
Figure 6.12: A labyrinth solved by Labyrinth Traverser .......... 124
Figure 6.13: The gradient is computed with the diffusion method: the redness of each patch shows the proximity to a point in the labyrinth .......... 125
Figure 6.14: A sequence of images showing the progress of the agent .......... 126
Figure 6.15: A network diagram generated with Network Modeller .......... 127
Figure 6.16: Examples of network diagrams generated with the same configuration of target nodes. The variety of diagrams has been achieved with different control parameters .......... 128
Figure 6.17: A typical process of network formation and optimisation .......... 129
Figure 6.18: Recognizable minimal spanning trees generated with the network optimisation algorithm .......... 130
Figure 6.19: Input-output coupling. The agent obtains input from the digital 3D model and from the reference map. Motor output is generated by interpreting input according to syntactical rules .......... 133
Figure 6.20: Sensory input. Sensors acquire their value from the environment and from the corresponding location on the reference map .......... 134
Figure 6.21: Interpretation rule set: red circles show activated sensors, the arrow shows the resultant movement direction. Different agents have different rules to map inputs to output. 75% of these rules are passed to 'offspring' to maintain explorative diversity of the population .......... 135
Figure 6.22: Way-finding in corridor-like layouts. All shown tests were successful as the colony was able to learn the route between two points. The reference map is laid on top of the layout of the digital model .......... 135
Figure 6.23: Way-finding in quasi-urban layouts. Environmental features play a crucial role in the competition between popular routes. It is not always the shortest route that is preferred by agents .......... 136
Figure 6.24: An 'arcade' built using a simple set of stigmergic rules and linear forward directed movement .......... 138
Figure 6.25: Tall structures built by agents by executing evolved stigmergic building rules and linear upward directed movement .......... 140
Figure 6.26: A sequence of images showing the collective building activity of an agent colony. Agents are placing blocks of various sizes according to a shared set of stigmergic rules .......... 142
Figure 6.27: Agglomerations of building blocks placed by agents. Each agent evolves its own building rules during the simulation .......... 142
Figure 6.28: Generated 'pheromone' trails and the respective structures built by agents. Given a simple building rule, agents were capable of channelling their movement but often failed to establish continuous circulation patterns .......... 145
Figure 6.29: Development of built structures that form circulation channels .......... 146
Figure 6.30: Sequence of images showing dynamic feedback between circulation routes and built blocks .......... 147
Figure 6.31: Emergence of vertical circulation .......... 149
Figure 6.32: Development of vertical circulation and stacked blocks .......... 149
Figure 6.33: Selected outputs of the simulation .......... 150
Figure 6.34: Uniform distribution of agents following a simple rule in the simulated world .......... 151
Figure 6.35: Formation of a cellular structure .......... 152
Figure 6.36: Self-organisation of cellular agents in a bounded region .......... 154
Figure 6.37: Self-organisation of cellular agents in a semi-confined area .......... 155
Figure 7.1: Aerial photo of Nordhavnen (Google 2011a) .......... 160
Figure 7.2: Stream Simulator modified: orthogonal stream channels are generated with the user-defined segment length .......... 163
Figure 7.3: Circulation diagrams with many setting-out points and a small number of attractors. From left to right: tree structure with 1 attractor, 1 circuit with 2 attractors, multiple circuits with multiple attractors .......... 164
Figure 7.4: Many-to-many relationship between setting-out points and attractors. All of these diagrams have 11 attractors placed across the landscape .......... 165
Figure 7.5: Diagrams with attractors of differentiated magnitude – some attractors (larger dots) are more appealing to agents than others (smaller dots) .......... 166
Figure 7.6: A generated diagram classified as a 4-grade road system. This exercise was done manually by counting the width (in pixels) and the strength (darkness) of the road segments in the diagram .......... 167
Figure 7.7: The input image and the grid representing access to urban blocks. The shape of the area was partly driven by the competition brief and partly defined by the design team .......... 168
Figure 7.8: The input map with attractors and the resultant diagram. Dots represent attractors with the size indicating the importance .......... 169
Figure 7.9: Distinct diagrams generated with an identical initial configuration .......... 170
Figure 7.10: Diagrams generated with uniform attractor grid .......... 170
Figure 7.11: Sequence of images showing the development of circulation network .......... 171
Figure 7.12: Development of the topology diagram .......... 174
Figure 7.13: Topology diagrams generated with growing number of attractors (1-25) .......... 175
Figure 7.14: Tests with randomly placed attractors .......... 176
Figure 7.15: Tests with manually placed attractors .......... 176
Figure 7.16: Tests 'in silico' – internal efficiency rises until connectivity has reached its ceiling but the maximum length is not achieved yet .......... 177
Figure 7.17: An 'in-silico' diagram with 5 attractors where the highest connectivity value has been achieved, but the total length of network has not been exhausted .......... 177
Figure 7.18: Graphs show the change of network parameters 'in silico' (left) and in the context of Nordhavnen input map (right) .......... 179
Figure 7.19: A near-optimal diagram with internal connectivity of 0.95 (as calculated after Song and Knaap (2004)) or 0.67 (as calculated with the proposed method of taking the shape of junctions into account) .......... 179
Figure 7.20: Orenco station, Portland (Google 2011b) .......... 181
Figure 7.21: Hammarby Sjöstad, Stockholm (Google 2011c) .......... 181
Figure 7.22: Vauban, Freiburg (Google 2011d) .......... 181
Figure 7.23: Three representations of a diagram (from left to right) – topological, frequency diagram, and combined diagram .......... 182
Figure 7.24: A selection of diagrams produced with the Nordhavnen model .......... 183
Figure 8.1: The setting out configuration showing the lower (source) patch and the upper (target) patch and the shortest routes .......... 190
Figure 8.2: Number of agents tested – 50 (top left), 200 (top right), 500 (bottom left) and 2000 (bottom right) .......... 193
Figure 8.3: Evaporation rates tested – 0.00001 (top left), 0.003 (top right), 0.01 (bottom left) and 0.03 (bottom right) ..........
194 Figure 8.4: Adjust rates tested – 0.03 (top left), 0.1 (top right), 0.3 (bottom left) and 3 (bottom right)................................................................................................................ 195 Figure 8.5: Continuous gradient of pheromone with extreme values around source point and the target lead to the successful detection of the shortest path ................ 196 Figure 8.6: Problem in the context: the task was to find a quality solution for the internal circulation on all 3 floors of the building. The image shows floors 2, 3 and 4 with the generated subdivision (coloured patches) in the perimeter polygon, and the proposed structural grid ............................................................................................... 199 Figure 8.7: Test run with 6 stair-core agents (from left to right). The algorithm solved the problem in 65 steps ................................................................................................ 203 Figure 8.8: Generating 2nd floor diagram with 5 stair cores ........................................ 204 Figure 8.9: Generating the 3rd floor diagram with 9 stair cores .................................. 205 Figure 8.10: Generating the 4th floor diagram with 6 stair cores ................................ 205 Figure 8.11: Generated solution (top) versus manually modified solution (bottom) .. 206 x

Figure 8.12: Form diagram + generative process = constructive diagram ................... 209 Figure 9.1: Alignment and cohesion in 2D .................................................................... 217 Figure 9.2: Development of the diagram in 2D............................................................. 217 Figure 9.3: Alignment and cohesion in 3D .................................................................... 218 Figure 9.4: Development of the diagram in 3D............................................................. 219 Figure 9.5: The angle between the front and the side sensor (α) has a crucial impact on the generated diagrams. From left to right: α = 20, 45, 70 and 95 degrees ................ 223 Figure 9.6: Flocking agents. Testing the behaviour of agents with different field-of-view angles ............................................................................................................................ 223 Figure 9.7: Field-of-view (FOV) angle affects the flock’s behaviour. Larger angle leads to a more coherent flock but it can cause the flock as a whole to move around ............ 224


Acknowledgements

The journey from the early research idea to completing the work would not have been possible without many people who helped me through the course. I am most grateful to all these people for their support and encouragement, and would like to take the opportunity to thank some of them individually. Foremost, I would like to express my gratitude to Paul Coates for exposing me to the awe-inspiring world of generative design. He served as a living encyclopaedia and an unlimited source of wisdom and inspiration for anything related to bottom-up ways of computing design solutions. I would also like to thank Professor Allan Brimicombe – my second supervisor – who provided me with practical guidelines for structuring and presenting my work. I am thankful to The Graduate School at the University of East London for generous funding of the first three years of my research. Many thanks go to my colleagues from Slider Studio for having a relaxed attitude towards my absence from the office during the most intense periods. Particularly, I would like to thank Michael Kohn for practical advice and occasionally bringing my feet back to the ground. I would also like to acknowledge Mæ architects with whom I worked during the Nordhavnen design competition. I am eternally grateful to my closest family – my parents and my brother – for ceaseless encouragement in times when I most needed their support. Finally, I would like to thank Len (Marje Len Murusalu) for brightening up my days during the last few months before submitting this thesis.


Chapter 1: Introduction

The digital revolution has drastically changed the business and culture of almost every discipline in modern society. Like other professionals, architects, designers and urban planners are eagerly adopting new technologies in the hope of improving their designs, making their life easier, providing quicker services, or just astonishing their clients and members of the public. New digital tools are now a part of the essential skill set in any contemporary design practice. Digital drafting and modelling are overtaking traditional hand sketching and physical prototype modelling. However, new digital tools are often used for computerising ‘old’ analogue technology and the full power of computation is not utilised. As an alternative to the widespread trend of replacing traditional methods of design, digital tools can lead to new computational design methods. Despite the recent popularity of new computational modelling methods in architecture and urban design (Terzidis 2006; Coates 2010), typical digital design researchers are still busily computerising traditional methods. A large proportion of digital design tools are mostly associated with parametric modelling (Aish and Woodbury 2005; Schumacher 2008) – a technique that can still be seen as computerisation rather than computation. Little support is given to computational methods for concept design exploration (Reffat 2006). This thesis tries to fill this gap and expand the choice of digital tools in the early stages of the design process by introducing novel computational design methods. Computation opens the opportunity for bringing new methods of analysis and synthesis into the architectural and urban design discipline. Most methods that are currently used in professional practice and in the academy are of an analytical nature. For example, space syntax techniques are suitable for analysing urban (Batty 2004) and building (Calogero 2008) layouts.
A few computational methods are used for synthesising design proposals, of which even fewer relate to state-of-the-art computational paradigms (Kalay 2004, p. 205-293). This thesis investigates generative mechanisms for modelling solutions for architectural and urban design purposes. It attempts to construct computational models that help designers to explore a greater variety of possible solutions at the conceptual design stage. These computational models can be used for design synthesis and, once validated, provide insightful

information for solving design problems. The lack of relevant theoretical foundations in urban design and architecture has forced the author of this thesis to look at a number of other disciplines in order to support the generative modelling approach. In that respect, complexity and system theory have been particularly useful. Architectural and urban design theory simply helps to classify and assess the generated results. It is argued that circulation in buildings and cities is a design problem well suited to demonstrating the powers of bottom-up computational modelling – modelling a system by defining the behaviour of its components. In this thesis, models for synthesising design solutions are called generative models – a particular type of computational constructs that are used for creative purposes. However, not all generative models are appropriate for modelling circulation systems. The movement of people through and around an environment is an inherently decentralised, complex (Koutamanis, Leusen and Mitossi 2001) and dynamic (Helbing et al. 2002) phenomenon. The very nature of circulation in human settlements and buildings almost requires the use of bottom-up models for analytical and generative purposes. Helbing et al. claim that spatio-temporal patterns in pedestrian crowds can be reproduced as self-organised phenomena. It is argued that both types of models – analytical and generative – can be effectively constructed using self-organising multi-agent systems. Multi-agent systems are composed of autonomous units – agents – interacting within a virtual environment (Gilbert 2004). A colony of agents is a complex adaptive system that has a strong tendency to self-organise and exhibit emergent behaviour (Macal and North 2006). Multi-agent models can be used for modelling spatial phenomena where the behaviour of agents is partly determined by the spatial and geometrical structure of their environment (Batty 2003).
Whereas there are several multi-agent models used for evaluating circulation design, generative models are few and far between. This thesis seeks to build, analyse and deploy models for generating circulation and to generalise the findings in order to propose a generative design methodology.
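The notion of a multi-agent system outlined above can be made concrete with a minimal sketch. The Python below is a hypothetical illustration (the class, the attractor and all parameter values are invented for this example and are not the models built in later chapters): autonomous agents make local navigational decisions in a shared environment, and a global movement pattern arises from the repeated application of a simple rule.

```python
import random

class Agent:
    """A minimal reactive agent: it knows only its own position and
    steers by a simple local rule relative to a single attractor."""
    def __init__(self, x, y, rng):
        self.x, self.y = x, y
        self.rng = rng

    def step(self, attractor):
        # Local navigational decision: pick one axis at random and
        # move one cell along it towards the attractor.
        ax, ay = attractor
        if self.rng.random() < 0.5:
            self.x += (ax > self.x) - (ax < self.x)
        else:
            self.y += (ay > self.y) - (ay < self.y)

def run(n_agents=10, steps=50, attractor=(20, 20), seed=1):
    rng = random.Random(seed)  # seeded for reproducible runs
    agents = [Agent(0, 0, rng) for _ in range(n_agents)]
    for _ in range(steps):
        for agent in agents:
            agent.step(attractor)
    return [(a.x, a.y) for a in agents]

positions = run()
```

Every agent drifts towards the attractor, but the probabilistic axis choice means runs with different seeds trace different paths; this controlled variety is what makes such systems interesting as generators rather than mere calculators.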


1.1 Computer scientific background

“It seems fair to say that we live in an Era of Decentralization” (Resnick 1999, p. 1)

Computer technology has changed the way scientific models are built and hypotheses are tested. New technology allows the exploration of models that are far beyond the reach of a scientist without the computer. Modernist Newtonian science created simple models that expressed laws of nature or mathematical ideas with few equations (Holland 1998, p. 17). The shift in scientific research from numerical mathematics towards decentralised and distributed ways of exploring complex phenomena was propelled by the rise of new computational modelling methods from the 1950s to 1970s. These methods grew out of automata theory in cognitive science (Von Neumann 1951), generative grammar in language theory (Chomsky 1956), the system dynamics approach (Forrester 1972) and sociology (Schelling 1971). In the 1990s, a new generation of complexity theorists sought to understand life through computational modelling and simulations. Holland (1998, p. 117) believed that systems in nature can be modelled at different levels – from ecosystems to organisms and organs and individual cells. Capra (1996, p. 194) argued that computational models help to understand many important principles of living systems. Computers opened new opportunities because it was now possible to execute long sequences of instructions at high speeds. According to Holland, this gave scientists the capacity to explore models that are orders of magnitude more complex. Constantly improving computational resources equip scientists with tools for exploring processes in massively parallel ‘worlds’ where the whole system is modelled from smaller components and from defined relationships between these components. The common understanding among computer scientists and practitioners in the area is that any system can be mapped into a program running on a universal computing machine (Galanter 2003).
Once a system can be explicitly defined, it is possible to create a computational model that is more readily observable than the system itself, as it can be executed and stopped at any time. Dynamic and decentralised models are now pervasive in many disciplines from social sciences (Gilbert 2004) to geography (Castle


and Crooks 2006), from economics (Tesfatsion 2006) and logistics (De Schutter et al. 2003) to cognitive psychology (Morse and Ziemke 2008). Besides studying and explaining existing systems in nature, bottom-up modelling can serve as a useful method of designing new systems. According to the constructionist philosophy, “models are not passive reflections of reality, but active constructions by the subject” (Heylighen and Joslyn 2001). These active constructions can help solve issues involved in designing new complex systems. Resnick (1999) calls this approach stimulation rather than simulation. While the latter is concerned with exploring natural phenomena, the former uses principles found in natural systems for building new systems with different functional purposes. One of the new paradigms of modelling complex systems that has recently gained a lot of popularity is called agent-based modelling (ABM). ABM relates to the decentralised and bottom-up logic of assembling complex systems out of small, well-defined units. Regardless of the popularity of agent-based modelling methods (Castle and Crooks 2006), there is no commonly accepted definition for an agent (Shoham and Leyton-Brown 2009, p. xiii). Minsky (1988) – one of the forerunners of artificial intelligence research – has described an agent as a unit of a specific functionality that can be used for constructing agencies that have more sophisticated functionality. He argues that intelligence can be understood as a combination of non-intelligent agents – something very complicated can be explained and modelled as societies of simple agents. Complex agencies can be seen as multi-agent systems that are indeed now being widely explored in a number of disciplines for modelling dynamic systems (Vidal 2007). Despite its widespread success, multi-agent modelling is relatively unknown amongst scholars and practitioners in the field of architecture and urban design.
Yet, many design and planning problems are very complex by nature – even “wicked”, according to Rittel and Webber (1973). Such problems would benefit from sophisticated computational models. If intelligence can be created by aggregating simpler agents together in a computer model as suggested by Minsky, then this model could be useful for design purposes. It is envisaged that multi-agent systems lead to new intelligent models for solving complex design problems. Additionally, Deaton and Winebrake (2000, p. 1) point out that virtually all environmental problems are dynamic system problems. As the foremost concern of the architectural task is the design of the

environment, it seems plausible that multi-agent systems, as a dynamic modelling method, are suitable for designing the environment.

1.2 Computational design

Architecture is not a coherent scientific discipline in its traditional sense – it lacks a scientific basis and a rigorous research tradition (Kalay 2004, p. xvi). There is no universally accepted methodology or process for how design solutions are created and validated. It is highly debatable whether a single globally accepted design methodology is even needed. Nevertheless, many aspects of design can be created and analysed in isolation using scientific methods. These methods are often borrowed from other disciplines such as engineering, applied physics, mathematics and even biology. Architects have been adopting methods that originate from these disciplines for centuries. More recently, a new trend of deploying ideas from computer science has emerged. Back in the 1970s, Stafford Beer complained that computers are used on the wrong side of the equation, “busily taking over exactly the old system” (Beer 1974, p. 40). Instead, Beer suggests that computers can be used as variety handlers – machines for creating and testing a variety of scenarios. Even now, this is not how architects tend to see the role of computers. Although computers are widely used in architectural practice, this use has been fairly limited. Kalay thinks that computational methods in design have emulated habitual methods used by designers. Digital applications are mostly replacing traditional drafting, sketching and modelling, whereas the full power of computing is seldom harnessed. While computational modelling has pervaded many corners of contemporary life, architects are still where Beer thought the rest of the world was in the 1970s. According to some researchers, the discipline of architecture needs to adopt new ways of synthesising and evaluating design. For example, Terzidis (2006, p. xi) and Kalay (2004, p. xvi) argue for the necessity of using computation on the right side of the equation – for computing solutions rather than computerising traditional methods.
The constructionist methods of building dynamic computational models for studying complex systems can be quite directly brought into architecture and urban design. An architectural solution rarely resolves just a single isolated issue; in most

cases, several interrelated problems need to be considered simultaneously in the design process. Different parts of design are synthesised into a solution that has to meet a number of different goals. An architectural solution can be seen as a complex phenomenon – an aggregation where various sub-systems that have otherwise independent goals have to come together. According to system theory, complex phenomena can be explored in dynamic computer simulations (Capra 1996, p. 194). Dynamic modelling provides the means to synthesise complex phenomena and as such can be used for generating design solutions as well. The underlying principle herein is that the space and the environment are the result of certain dynamic processes. These processes can be simulated in a dynamic model, but also stimulated, turning the model into a generative tool. Dynamic modelling is being used in many fields related to architecture and urban design, such as pedestrian movement analysis (Batty 2003) and geographical information systems (Silva, Seixas and Farias 2005). It is time that design disciplines embraced it as well.

1.3 Computing circulation diagrams

Circulation is defined as the means by which access is provided through and around an environment; it is the part of buildings and settlements where people move from place to place (Clerkin 2005). Clerkin asserts that architecture is not experienced statically but dynamically via circulation space. The spatial layout should reflect the dynamics of locomotion. In the light of the constructionist approach, circulation space can be seen not only as a facilitator but also as a product of locomotion. It is envisaged that circulation can be modelled by using computational methods that capture some dynamic properties of movement. Using movement as the generator of form is not an entirely novel idea in architecture; mobility is often considered a key driver of design solutions. Arnheim (1977, p. 148-151) has pointed out that the architectural task permits two abstract types of buildings – shelters and burrows. Whereas the first type is ignorant of the dynamics of circulation, the second one derives its form from the motor behaviour of the users. There is little evidence in the literature that motor behaviour can be used for generating circulation for architectural purposes. However, there are a growing

number of authors who have been exploring circulation network formation in nature. Turner (2000), for example, has been studying circulation in social insects and has found that colonies of insects can collectively create intricate nest architectures that naturally incorporate circulation networks by following relatively simple individual agendas. It has also been demonstrated that this circulation formation process can be successfully simulated in dynamic computer models using ABM and particularly multi-agent systems (Ladley and Bullock 2005). In the light of recent developments in dynamic system modelling, it is fairly logical to assume that multi-agent systems can also be appropriated for the task of synthesising circulation systems for architecture. Agent-based modelling is a relatively new and unexplored concept in the architectural design discipline. As opposed to the traditional design process where architects develop schemes from the external observer’s point of view, agents can inhabit and explore the digital models in situ – they are directly embedded in the environment that is being modelled. Therefore, if agents can be used in design synthesis then generated solutions are possibly more closely related to the experience of end users. Multi-agent systems are flexible, adapt to changes in their environment (Mertens, Holvoet and Berbers 2004) and allow a step-by-step approach to solving problems (Bonabeau, Dorigo and Theraulaz 1999). This thesis argues that such systems can be used for making tools for generating design scenarios that fit into context. Adaptivity plays a crucial role in both natural and artificial agent colonies. For instance, Deneubourg, Pasteels and Verhaeghe (1983) have demonstrated with the help of a mathematical model that randomness in behaviour has an adaptive advantage for ants. Due to the probabilistic behaviour, multi-agent models can be used as generators of a large number of solutions.
This is how computational models should arguably be used in design disciplines – as variety handlers. The ability of dynamic models to generate multiple solutions is exactly what Beer means by using computers as variety handlers on the right side of the equation (Beer 1974, p. 43). Generative multi-agent models are treated as diagramming machines in this thesis. The output of a multi-agent system should not be taken literally as a final design proposal but should be seen as an intermediary stage in the design process. All of the proposed models developed for this thesis should be treated as generators of

circulation diagrams rather than fully fledged problem solvers. Berkel and Bos (2006) describe diagrams as visual tools that can be used for compressing information. Indeed, multi-agent systems can generate a large amount of information that can possibly be captured in a diagrammatic form. Besides the shape and topology, such diagrams can also represent other information such as intensity and speed of movement, potential congestion areas and collisions between flows. These diagrams are constructive diagrams in Alexander’s (1964, p. 88) terms as both the form and requirements are expressed simultaneously. Diagrams always feature an element of hermeneutics. Sevaldson (2000) argues that diagrammatic thinking frees computer-generated material from its determined context. He adds that diagrams can be ‘instrumentalised’ by reinterpreting, redefining, and remapping the visual material in a qualitative and playful manner. Although it is not a goal of this thesis to explain the process of interpretation and to demonstrate how diagrams can be converted into concrete design proposals, it is important to acknowledge that diagrams are abstract machines that have to be applied in order to convey non-abstract form.
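The stigmergic mechanism underlying the ant-inspired models cited in this section (pheromone deposit, evaporation and reinforcement) can be sketched in a few lines. The Python below is a hypothetical illustration; the grid size, rates and routes are invented for this example and are not the models built in later chapters.

```python
def evaporate(grid, rate):
    """All pheromone decays by a fixed proportion each time step."""
    return [[cell * (1.0 - rate) for cell in row] for row in grid]

def deposit(grid, path, amount=1.0):
    """Agents completing a route leave pheromone on every cell of it."""
    for x, y in path:
        grid[y][x] += amount

def simulate(size=5, steps=200, evap=0.05):
    grid = [[0.0] * size for _ in range(size)]
    busy = [(x, 0) for x in range(size)]          # route used every step
    quiet = [(x, size - 1) for x in range(size)]  # route used every 2nd step
    for t in range(steps):
        grid = evaporate(grid, evap)
        deposit(grid, busy)
        if t % 2 == 0:
            deposit(grid, quiet)
    return grid

pheromone = simulate()
```

Because evaporation continually erodes all trails while deposits reinforce only the routes in use, the frequently travelled route ends up markedly stronger than the rarely travelled one; agents reading this gradient would channel onto the stronger trail, which is the positive feedback at the heart of trail formation.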

1.4 Outline of the proposed research

This thesis explores generative design methods by investigating the use of multi-agent systems as tools for synthesising design solutions. Designing circulation in buildings and settlements is thought to be an appropriate task for which multi-agent systems offer an alternative approach to traditional design methods. Multi-agent systems are chosen mostly for their flexibility and adaptivity, which potentially make them ideal tools for designers. Since there are just a few examples of how such tools are used in design processes, it is suggested that relevant models from computer science and from design-related disciplines should be studied. These models are decomposed into basic building blocks from which new models can be constructed. Once each block has been examined in depth, the next step is to build prototype models. Prototype models, in turn, are tested in the context of two practical design assignments that make up the case studies chapters in this thesis. Proposed prototypes and case study models are analysed quantitatively and qualitatively in order to find out the key principles that can be used for building and deploying models for solving circulation

design issues. Finally, proposed models are compared with one another in the hope of discovering patterns of parsimonious models. The next chapter (Chapter 2) continues with investigating computational models that have been constructed and tested by various academics and practitioners across several disciplines. Besides popular scientific models, the focus is on computational design models and on generative models of design in particular. It is argued that traditional models in architecture and urban design are static and can be greatly improved by introducing dynamic modelling methods. The following chapter (Chapter 3) explores topics surrounding circulation and surveys existing computational models of natural and artificial circulation networks. It is established that circulation networks in buildings and urban settlements share many similarities with movement networks in nature. Similarly to natural networks, artificial ones lend themselves easily to computational modelling. The last chapter in the literature survey (Chapter 4) builds towards the argument that multi-agent systems are indeed an appropriate way of generating circulation networks. By looking at various applications of multi-agent systems, the chapter filters out key ideas for building generative models. Chapters 2 to 4 are intended to serve as a literature survey for this dissertation. Following the synthetic modelling approach (Deaton and Winebrake 2000), Chapter 5 discusses the basic components that are needed in order to construct models for generating circulation diagrams. According to Morse and Ziemke (2008), the commonly accepted tasks of synthetic modelling are to test the sufficiency of a theory, to test the necessity of alternative theories, and to explore the interactive potential of agent-environment embedding. The task of synthetic modelling in this thesis is the latter one.
Models are built not in order to understand existing phenomena, but to synthesise interesting and meaningful architectural diagrams from a defined set of components. Chapter 6 proceeds with investigating the potential of multi-agent systems for solving issues of circulation. Several prototype models are proposed and built in order to create knowledge that is particular to the given goals of this thesis. The proposed models feature many kinds of multi-agent systems, from simple reactive agents to more sophisticated learning agents. The chapter also discusses methods of validating generative models through the validation of the generated output. Generated circulation diagrams are compared to existing circulation networks, which – according to Brimicombe and Li (2008) – aligns

with the notion of external validation. Internally, models are validated qualitatively by exposing pseudocode and describing algorithms step by step. Prototype models are also verified according to the verification principle proposed by Batty and Torrens (2005) – a model that is fine-tuned to a set of input data has to work on a different set of data as well. The following chapters (Chapter 7 and Chapter 8) present two case studies where multi-agent systems are used in the context of real design tasks. The main goal of both case studies is to demonstrate practical applications of generating circulation diagrams. Whereas the first one illustrates the use of a multi-agent system as a generator of street networks for a large urban redevelopment project, the second case study constructs and experiments with a model for exploring circulation in an office building. Chapter 7 also illustrates how the model can be externally validated by comparing the parameters of generated circulation diagrams to the parameters of circulation networks in the real world and to existing construction standards. Both case studies also discuss the ways of interpreting generated diagrams. The last part of this thesis (Chapter 9 and Chapter 10) revisits the models, extracts general building principles and highlights the main control mechanisms of multi-agent models for generating circulation diagrams. An exploratory analysis – studying the system behaviour under different sets of parameters – is carried out on two separate models in order to understand the sensitivity and flexibility of multi-agent models. Finally, two patterns are proposed for building generative models.
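The external validation mentioned above compares network parameters of generated diagrams with those of real street networks. One common parameter in the planning literature is the link-to-node ratio; the Python sketch below is a simplified illustration of this kind of measure, not the exact internal connectivity index of Song and Knaap (2004) used in Chapter 7, and the grid network is an invented example.

```python
def connectivity_index(edges):
    """Link-to-node ratio: number of links divided by number of nodes."""
    nodes = {node for edge in edges for node in edge}
    return len(edges) / len(nodes)

# A 3x3 grid of junctions (9 nodes) joined by 12 street segments.
grid_edges = []
for y in range(3):
    for x in range(3):
        if x < 2:
            grid_edges.append(((x, y), (x + 1, y)))
        if y < 2:
            grid_edges.append(((x, y), (x, y + 1)))

ratio = connectivity_index(grid_edges)  # 12 links / 9 nodes
```

A pure tree network has a ratio just below 1, while added circuits push it higher; comparing such figures for a generated diagram against reference neighbourhoods gives a quantitative handle on external validity.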

1.5 Goals of the research

This thesis is conceived in reaction to the one-sidedness of digital tools and to the lack of computational methods in contemporary design practices. Architects are still stuck with old modelling paradigms, and new dynamic modelling methods are largely ignored. The aim of this work is to develop design tools that are based on dynamic computational models. In particular, the aim is to show how multi-agent systems can be used for solving circulation issues in the early stages of the design process. The hope is to reach a deeper understanding of how multi-agent systems should be constructed in order to generate useful circulation diagrams.

Main objectives of this thesis are:

• Find out the main building blocks and control principles of multi-agent systems for generating circulation diagrams, and construct prototype models using these building blocks
• Investigate emergent path formation processes in prototype models
• Construct multi-agent models that generate meaningful diagrams in the context of practical design tasks
• Verify proposed models in different contexts and with different sets of input data
• Validate models by comparing generated network diagrams against some key characteristics of real-world circulation networks
• Propose design patterns of generative models based on the findings of this thesis

Subsequently, the research questions are formulated as follows:

• Can multi-agent systems be programmed to meet the underlying principles of circulation systems?

• Can multi-agent systems be used for generating flexible circulation diagrams?

• How can designers use bottom-up models generatively in the design process? How can designers gain control over such models?

• What is the simplest design of a multi-agent system for generating circulation diagrams?

• How do multi-agent models fit into the overall design process? Can these models help to achieve the multiple goals of a design proposal?

This thesis makes an original contribution to the shift in attitude towards modelling in the architectural design discipline. It does so by introducing generative multi-agent systems for modelling circulation networks and by proposing several novel prototype models. Whereas most of the models are original in terms of implementation or have not been used in a design context before, some of them are truly novel multi-agent models in terms of observed emergent behaviour and generated movement patterns. An unparalleled three-part approach to gaining control over the generated output is offered. As a conclusion of this thesis, two unique design patterns for generative modelling are extracted.

1.6 Methodology

In order to answer the research questions and meet the objectives outlined in the previous section, this thesis follows the synthetic modelling approach. Synthetic modelling is seen herein as a way of creating knowledge by actively building, testing and analysing models. It is believed that if one wishes to understand how meaningful circulation diagrams can be generated, one has to be involved in both the construction and the deployment of computational models. One needs to be able to control the model by changing its working principles and then find out how these changes influence the end result. For this purpose, a rapid, agile development approach is seen as particularly appropriate. According to rapid development methods, constructing and testing models is a cyclical process that has to be repeated several times in order to get the model right.

Rapid development methods are foremost implemented at the prototyping stage of this research. At this stage, the basic principles of agent-environment interaction, agent architecture and environmental configuration are developed, tested and validated. Once these basic principles are understood well enough, one can proceed to the next stage of constructing prototype models by combining these basic elements.

Prototype models are then investigated through exploratory analysis, an essential part of the synthetic modelling approach. At this stage, the model is scrutinised by conducting a series of experiments with different internal parameters and initial settings. This stage is extremely important for the computational designer who wishes to deploy models in the context of real design tasks. It is argued that exploratory analysis is a critical part of the development process that leads to an understanding of how the behaviour and the outcome of models can be controlled.

The final stage of the synthetic modelling methodology in this thesis is the deployment of prototype models in context. At this stage, models are tested in terms of their output – diagrams. After all, the usability of a model can be critically assessed only by looking at the diagrams that have been generated in the context of design tasks.
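In practice, the exploratory-analysis stage amounts to a systematic parameter sweep. The sketch below is a hypothetical illustration, assuming the model has been reduced to a stub function (`run_model`) that returns a single invented outcome metric; in an actual study the metric would be some measured property of the generated circulation diagram.

```python
import itertools
import random

def run_model(trail_decay, agent_count, seed=0):
    # Stand-in for a multi-agent circulation model: a real model would
    # simulate agents and measure the generated network; this stub just
    # returns an invented 'network length' metric for the given parameters.
    rng = random.Random(seed)
    return agent_count * (1.0 - trail_decay) * rng.uniform(0.9, 1.1)

# Exploratory analysis: run the model over a grid of parameter settings
# and several random seeds, then inspect how the outcome varies.
results = {}
for decay, agents in itertools.product([0.1, 0.5, 0.9], [50, 100]):
    runs = [run_model(decay, agents, seed=s) for s in range(5)]
    results[(decay, agents)] = sum(runs) / len(runs)

for params, mean_length in sorted(results.items()):
    print(params, round(mean_length, 1))
```

Averaging over several seeds separates the effect of the parameters from the effect of stochastic noise, which is the core of understanding how a model can be controlled.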


Chapter 2: Computational modelling

The revolution in information technology has fundamentally changed the architect's office. This change has moved the architect from the drawing board to the computer. Today, nearly all practices rely heavily on CAD programs. However, the actual design methodology has not undergone a similarly dramatic shift – metaphors from the pre-computer era are still actively in use in CAD. The computer applications that architects now use are often created for a wholly different purpose, or developed by software companies that are divorced from architectural practice. The lack of a relevant computational methodology and purpose-made tools can seriously harm the quality of design output. If architects are not responsible for the production of their own tools and in charge of devising new methods, the technology will always pose serious limits to their creativity.

Architectural design is a speculative process. The real implications of the design object can never be accurately foretold before it is actually built. The designer always predicts the deployment and performance of the building, which inevitably changes in the post-construction reality. Fortunately, there are methods to minimise the difference between the estimated and the actual performance. Modelling is one such method.

Models in architecture and urban design are a common form of representation. Models help architects to visualise and communicate their design intent, develop concepts, and organise the data involved in the process. Three-dimensional models are essential for understanding complex buildings. Modelling does not only produce static descriptions of geometry, but can also help to understand the performance and usage patterns of the built environment.

The special focus of this chapter is on generative modelling methods. Generative methods require quite a different kind of design process from what is usually practised in the architectural office. Traditionally, modelling takes place once the design is already largely conceived, and models are produced as static descriptions of the proposed solution. In the generative process, models are dynamically evolved through computer programming. This approach requires the exposure of all the design parameters and explicitly defined relationships between the design drivers. The fundamental hypothesis herein is that generative methods provide means to improve the solutions to spatial problems and help to demystify the whole design process. The argument here is that architects need to rethink their modelling methods if they want to respond to the ever-increasing complexity of the architectural task.

This chapter is a literature survey of computational modelling in architecture. In order to put architectural modelling in a larger context, the chapter starts by looking at some definitions and types of scientific models. This is followed by a brief summary of models in traditional architectural and urban design practice. These models serve as references for classifying the results of the generative models proposed in this thesis. The terminology of scientific models helps to distinguish between different kinds of models in architecture and eventually leads to the definition of generative modelling. The last section outlines generative modelling methods and illustrates them with examples found in the literature.

2.1 Models in scientific research

“Models are, by definition, a simplification of some reality which involves distilling the essence of that reality to some lesser representation.” (Batty and Torrens 2005, p. 756)

Scientific modelling is a process of abstraction that is practised in several disciplines and used for many different purposes. Nevertheless, there seems to be general agreement that a model is a simplification of a real-world phenomenon, constructed in order to capture or understand that reality. Models are most often built to study complex systems in an attempt to gain a deeper understanding of those systems. The model is an abstract theory of how something functions (Lynch 1981, p. 277); it helps to visualise a theory or a part of it (Skyttner 1996, p. 59). Morse and Ziemke (2008) point out the purposes of modelling from the cognitive science perspective: 1) to test the sufficiency of a theory with respect to observational data, 2) to query the necessity of alternative theories, and 3) to explore the potential of embedding a theory in a theory.

The general logic behind scientific modelling follows the principle of 'understanding through making'. The constructivists assume that knowledge can be obtained by building (Resnick 1999) rather than by following the reductionist logic of deconstructing the modelled phenomenon into smaller chunks. The constructivist approach is believed to preserve the richness of the model structure (Batty and Torrens 2005). According to the cybernetic view, the model is a representation of processes in the world that allows predictions about its future states (Heylighen and Joslyn 2001). Resnick (1999) offers an alternative, educational view – models can stimulate thinking rather than simulate real-world processes.

Gilbert and Terna (2000) scrutinise the use of models from the social sciences perspective and adopt quite a broad definition. Models are, in their interpretation, simplifications of reality that can be expressed verbally, in terms of statistical or mathematical equations, or with computer simulations. Scholl (2001) explains that while linguistic models maintain a high level of flexibility, mathematical models sacrifice flexibility in favour of rigour of formulation and consistency of structure. As opposed to linguistic and mathematical models, simulation models are implemented dynamically over a period of time. Haggett and Chorley (1969, p. 285) argue that simulation models shed light on the step-by-step evolution of features that would otherwise be difficult to explain; the focus in such models is on the process of formation rather than on the actual form itself. In addition to the aforementioned models, Morgan (2003) offers yet another type of model – the mental construction – that helps us to understand the world. Referring to Bayesian philosophy, he argues that our perceptions are models that we use to make educated guesses about the outside world.

The advent of digital computers fuelled the use of simulation models. According to Holland (1998, p. 17), scientific models used to be simple because it was impossible to handle complex ones. The simplicity was reinforced by the fact that important physical laws could be expressed with a few equations. Programmed computers brought speed and accuracy to the execution of long calculations and made it possible to "explore models that were orders of magnitude more complex" (Holland 1998, p. 17). Thus, computer-based simulations rendered complex systems analysis tangible (Johnson 2002, p. 87). Computers permit scientists and scholars to explore new ideas and concepts, and give access to modelling techniques that were previously inaccessible (Resnick 1999). Resnick explains that computer modelling provides an opportunity to learn through exploration and experimentation. Certain aspects of the real world can be easily converted into simplified digital representations (Castle and Crooks 2006), which makes the construction of computer models inexpensive. Holland (1998, p. 12) also points out that the model can now be started, stopped, examined, and restarted at any time – something that is impossible for real dynamic systems.

Jay Forrester (1972), discussing the use of the computer in social systems and policy modelling, argues that mental models are prone to mistakes. Once the basic structure and interactions in a social system have been agreed, the mind struggles to follow the individual statements in order to draw correct conclusions about the behaviour of the system. The computer, on the other hand, can routinely and accurately trace through such sequences. For a human being it is easier to get the basic structure of a system right than to make correct assumptions about its implied behaviour. The latter, as Forrester argues, is better left to computers.

Another virtue of computational models becomes evident in studying decentralised systems. Observing and participating in a decentralised system is not enough to understand it – one has to construct the system instead (Resnick 1994, p. 5). Computational models of decentralised systems "provide accessible instances of emergence, greatly enhancing our understanding of the phenomenon" (Holland 1998, p. 12). The speed of contemporary computers allows one to construct a model of thousands of simple interacting units and observe the emergence of higher-order behaviour.

Scientific models require validation in order to be useful for decision making and truthful to the phenomena modelled. Gilbert and Terna (2000) argue that the same holds true whether we deal with a mathematical equation, a statistical equation, or a computer program.
The validation involves two steps – observation of the model and observation of the process or phenomenon that is being modelled. The output from the model has to be sufficiently similar to the data collected from the real world. The quality of the model can then be assessed post factum by measuring how accurately it has predicted subsequent events. Batty and Torrens (2005) set forth two principles of best practice for developing scientific models. The first is the principle of parsimony – Occam's razor – which states that a better model is one that can explain the modelled phenomenon with less data. The second is the principle of independence, which requires the model to be independent from the data. If the model is driven by a certain data set, it has to be validated with a different set of data.

2.2 Types of models

Different disciplines treat models in slightly different ways. This section outlines some of the commonly accepted pairs of dualities used to distinguish different models, but offers no comprehensive taxonomic system. Instead, the categories are chosen for a particular purpose – to devise a system for discriminating between computational design models.

Dynamic and static models. The distinction between static and dynamic models is probably the most common one. For the above-mentioned purpose, it is crucial to understand this duality in order to discriminate descriptive object models from process-oriented models. Castle and Crooks (2006), referring to Longley, explain the difference between these two types: while in static models the input and output both correspond to the same point in time, the output of a dynamic model belongs to a later state in time than its input. Even the name suggests that static models do not change state (D'Souza and Wills 1998, p. 46). A static model is often considered a description of an object, while dynamic models are about the behaviour of objects and the interrelationships between these objects. According to Ruiz-Tagle (2007), static models are used to understand the structure of systems, while dynamic ones are created for simulating and observing the behaviour of systems over time. From his – an architect's and a geographer's – perspective, static models of cities fall into three categories: mathematical models used for planning and for observing urban development, physical and environmental geography models, and social models derived from Luhmann's philosophy. Dynamic models, on the other hand, include optimisation models for transportation networks and land use, models for operational control, spatial evolution models, and agent-based and cellular automata models.

The tasks of dynamic and static models also differ. While static models can predict impacts, vulnerabilities, or sensitivities, dynamic models can be used to assess 'what if' scenarios (Castle and Crooks 2006). Dynamic models allow one to change the input parameters and observe the model behaviour under several distinct conditions, which is particularly useful for modelling complex dynamic systems. Simulation of complex systems differs from static modelling in that simulations are open-ended (Batty and Torrens 2005). Deaton and Winebrake (2000, p. 1) argue that almost all environmental problems involve a large number of interconnected units that change over time, and should therefore be modelled dynamically. Haggett and Chorley (1969, p. 285-302) use dynamic models to explain certain observed features of channel networks. In order to simulate the evolution of networks, they have to implement the formation model over time. Without programmable computers, such a modelling technique is hardly conceivable. Mathematicians and scientists have used differential equations to model dynamical systems for centuries, and many computer applications still re-implement the same approach (Resnick 1999). However, the more fundamental opportunity for using computers in dynamic system modelling is based on completely new representations.

Analogue and digital models. Analogue models usually replicate certain aspects of the modelled system in a physical medium at a different scale from the original. Analogue models are scaled models that represent reality in miniature (Castle and Crooks 2006). The success of analogue models depends on the scalability of the real-world phenomenon. In architecture, analogue models – sometimes also known as physical models – are usually scaled-down replicas of the design object. The advantage of analogue models over digital models is their materiality (Kalay 2004, p. 133). Digital models conduct all operations using computers and reduce relevant aspects of the design object to a sequence of binary values (Castle and Crooks 2006). The translation of building geometry into a symbolic data structure is non-trivial and prone to information loss (Kalay 2004, p. 133). However, the amount of information a digital model can handle greatly surpasses that of analogue models.

Individual and aggregate models. Castle and Crooks (2006) claim that it is theoretically possible to model any dynamical system using a set of rules for the behaviour of its constituent parts. Typical individual-based models are crowd simulations and flocking models. Although individual-based models are now widely acknowledged, it was only a few decades ago that Reynolds (1987) developed the first computational example of such models. One of the greatest advantages of individual-based models is their ability to simulate emergence (Holland 1998, p. 12). Depending on the size and the resolution of the modelled system, however, it is not always practical to model individual behaviours. In these cases, the individual acts are aggregated and the behaviour of the system is modelled as a whole (Castle and Crooks 2006).
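A minimal individual-based model needs only a handful of lines. The sketch below is a hypothetical illustration, not Reynolds's actual boids algorithm: each agent follows a single local cohesion rule, yet the population as a whole contracts into a cluster – a modest example of aggregate behaviour arising from individual rules.

```python
import random

random.seed(7)  # fixed seed so the run is repeatable

class Agent:
    """A minimal agent: a position plus one local behavioural rule."""
    def __init__(self):
        self.x = random.uniform(0.0, 100.0)
        self.y = random.uniform(0.0, 100.0)

    def step(self, others, rate=0.1):
        # Cohesion rule: drift a little towards the centroid of the others.
        cx = sum(a.x for a in others) / len(others)
        cy = sum(a.y for a in others) / len(others)
        self.x += (cx - self.x) * rate
        self.y += (cy - self.y) * rate

agents = [Agent() for _ in range(20)]
for _ in range(50):
    for a in agents:
        a.step([b for b in agents if b is not a])

# The individual rule says nothing about clustering, yet the group clusters.
spread = max(a.x for a in agents) - min(a.x for a in agents)
print(f"horizontal spread after 50 steps: {spread:.2f}")
```

An aggregate model of the same system would instead track a single summary variable (such as the population centroid) and lose the individual trajectories entirely.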

Probabilistic (stochastic) and deterministic models. Probabilistic models are, quite simply, models that use random processes as part of their logic. The randomness is achieved by introducing random number generators into the model (Williams 2005). While deterministic models always produce the same output with respect to the initial configuration, the output of a probabilistic model may vary. Pierre Simon Laplace has said that "probability is nothing but common sense reduced to calculation" (Bazzani et al. 2008). One can argue that, if a level of reasoning is beyond the scope of the model, the output values of such reasoning can be replaced with random values. Therefore, by introducing randomness, one can conveniently stand in for input parameters that the model leaves undefined. Randomness in computational models can also be useful for escaping local minima. For example, Deneubourg, Pasteels and Verhaeghe (1983) have shown with a simple mathematical model that the probabilistic behaviour of ants offers an adaptive advantage in foraging.
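The duality can be made concrete with a toy choice model (a hypothetical sketch, not the Deneubourg model itself): a deterministic rule always returns the same option for the same scores, whereas a stochastic rule samples options in proportion to their scores – and, because the seed is exposed, the stochastic run can still be reproduced exactly.

```python
import random

def deterministic_choice(scores):
    """Always pick the highest-scoring option: same input, same output."""
    return max(range(len(scores)), key=lambda i: scores[i])

def probabilistic_choice(scores, rng):
    """Pick an option with probability proportional to its score."""
    total = sum(scores)
    r = rng.uniform(0.0, total)
    cumulative = 0.0
    for i, s in enumerate(scores):
        cumulative += s
        if r <= cumulative:
            return i
    return len(scores) - 1

scores = [1.0, 3.0, 6.0]
print(deterministic_choice(scores))  # always index 2

rng = random.Random(42)  # seeding makes the stochastic run repeatable
picks = [probabilistic_choice(scores, rng) for _ in range(1000)]
print(picks.count(2) / 1000)         # roughly 0.6, varying run to run without a seed
```

The deterministic rule would lock an agent onto one path forever; the stochastic rule lets it occasionally explore weaker options, which is exactly how randomness helps a model escape local minima.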

2.3 Models in architecture and urban design

The traditional use of models in architecture and urban design differs from the scientific approach to modelling. Architectural models are seldom abstractions of existing phenomena – whilst science is looking for a theory of explanation, architecture is looking for a theory of generation (Frazer 1995, p. 12). Architects utilise models to create the reality, not necessarily to analyse it. Longley (2004) points out that there are two particular uses of the term 'model' in architecture: to denote the representation of physical artefacts (such as scaled-down block models of towns) or to designate abstract spatial relations. The latter is further explained by Lynch, who describes the model as a picture of how the environment is meant to be made – "a description of a form or a process which is a prototype to follow" (Lynch 1981, p. 277). In his terms, the model is an idealised example that provides principles for organising the environment. Lynch believes that each city model corresponds to a city theory – a city model is an expansion of the concept of urban design (Shane 2005, p. 31-32). Models are there to create order among the chaos of the city. They are a practical necessity and help to manage the complexity of real problems.

Figure 2.1: Walter Christaller, model of central place theory, the 1930s. Source: (Christaller no date)

Abstract models of ideal cities such as Walter Christaller's diagram of central place theory and Ebenezer Howard's Garden City diagram (see Figure 2.1 and Figure 2.2 respectively) are typical of the city theory of the early industrial era. Since then, city theory has come a long way. Lynch's models of the City of Faith, the City as a Machine, and the City as an Organism (Lynch 1981) mark the shift from static representation towards the view of cities as self-organising systems (Shane 2005, p. 54). One can no longer find models akin to Christaller's hexagonal diagram in contemporary city and urban theory. City models, as Portugali explains, are now "first and foremost a method of representation of ideas about the dynamics and structure of the phenomenon we term city" (Portugali 1999, p. 93).


Figure 2.2: Ebenezer Howard, Garden City model, 1898. Source: (Howard no date)

Traditional methods that designers have used for centuries have become inadequate for representing the dynamic nature of cities. Computer models seem to be more appropriate for that purpose. One of the first attempts to demonstrate the ability of modern computer systems to simulate the richness of interactions in cities was Forrester's Urban Dynamics model (Batty and Torrens 2005). Since then, there has been an abundance of computational simulation models that investigate dynamic phenomena in cities, usually dealing with land use patterns, urban growth, urban economics and transportation planning (Batty 2008). However, most of these models scrutinise cities from the social scientist's (e.g. Birkin et al. 2008) or the geographer's (e.g. Antoni 2001) perspective, and seldom investigate the urban environment from the designer's standpoint.

It is paradoxical that contemporary city models do not take the dynamics of the built environment into account. The urban environment, like any other environment, is certainly dynamic from the Systems Theory perspective (Deaton and Winebrake 2000). Although there are many models concerned with social processes that produce abstract spatial patterns (e.g. Crooks 2008), one can hardly find any model that treats the built environment dynamically, or any that investigates the feedback mechanisms between social dynamics and the built environment. The task of defining how the structure of the built environment is produced by social interactions is not an easy one. Design professionals, who probably lack the technical know-how or feel that the subject is outside their domain, are simply ignoring it. However, this task is a rather creative one and design professionals should be engaged in it.

Bill Hillier is one of the few researchers who have been interested in how urban morphology is informed by social aspects. He uses social concepts to explain and generate settlement layouts of unplanned villages and medieval town centres (Hillier 1989). In his beady-ring model (see Figure 2.3), Hillier treats space as if it were composed of nothing but mobile individuals. He explains that in such a discrete system, individuals can give rise to the settlement structure by simply reacting to local rules (Hillier and Hanson 1984). This beady-ring model has been computationally validated by Coates (2009).

Figure 2.3: Bill Hillier, computer generated settlement models. Source: (Hillier 1989)
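The flavour of such a model can be conveyed with a much-simplified sketch (the rules below are an illustrative reduction, not Hillier and Hanson's specification or Coates's implementation): each 'dwelling' is a closed cell attached to an open cell, and new pairs may only be added next to the existing open space.

```python
import random

def grow_settlement(steps=60, seed=1):
    """Aggregate closed/open cell pairs on a grid using only local rules,
    loosely in the spirit of Hillier's beady-ring experiments."""
    rng = random.Random(seed)
    open_cells, closed_cells = {(0, 0)}, {(0, 1)}
    directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        # Pick a random existing open cell and try to attach a new pair to it.
        ox, oy = rng.choice(sorted(open_cells))
        dx, dy = rng.choice(directions)
        new_open = (ox + dx, oy + dy)
        new_closed = (new_open[0] + dx, new_open[1] + dy)
        # Keep open and closed space disjoint; otherwise skip this attempt.
        if new_open not in closed_cells and new_closed not in open_cells:
            open_cells.add(new_open)
            closed_cells.add(new_closed)
    return open_cells, closed_cells

open_cells, closed_cells = grow_settlement()
print(len(open_cells), "open cells,", len(closed_cells), "closed cells")
```

Even this crude version shows the point of the argument: no global plan is stated anywhere, yet a contiguous body of open space emerges from purely local placement decisions.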

Physical models have been used for representing buildings since at least the mid-19th century (Richardson 1859). Whereas the uptake of computational models among urban design professionals has been slow, the same cannot be said about architectural designers. Digital models of building geometry have become central to the design process in many architectural practices. Frazer (1995, p. 26) asserts that computer models allow ideas to be developed, visualised and evaluated without the expense of actual construction. Leaving aside the traditional static representations of building geometry, there are several models that deal directly with form finding. Robert Hooke was notably the first to recognise the possibility of inverting a hanging form to create a structure of pure compression (Kilian and Ochsendorf 2005). Gaudí, for example, used such hanging models as analogue computers for finding optimal structures for the Sagrada Familia church and the Church of Colònia Güell (Williams 2008). Equally famous are Frei Otto's models, which utilise the same technique to produce the form of the Mannheim gridshell and the roof of Munich's Olympic Stadium (Otto and Rasch 1995). Although there is still a dedicated place for physical models for representation purposes, computational models now prevail in the form-finding process. Even Gaudí's hanging string model has found a digital counterpart (Kilian and Ochsendorf 2005).
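Hooke's inversion principle can be sketched numerically. The relaxation below is a minimal illustration under simplifying assumptions (nodes stay on a fixed horizontal spacing and sag under a constant pull per node, so the result is a parabola rather than a true catenary); it is not the method of any of the cited works.

```python
def hang_chain(n=11, span=10.0, iterations=2000, gravity=0.01):
    """Relax a chain of n nodes pinned at both ends: each free node moves
    to the average height of its neighbours, then sags by a constant pull."""
    xs = [span * i / (n - 1) for i in range(n)]
    ys = [0.0] * n
    for _ in range(iterations):
        for i in range(1, n - 1):
            ys[i] = (ys[i - 1] + ys[i + 1]) / 2.0 - gravity
    return xs, ys

xs, ys = hang_chain()
arch = [-y for y in ys]    # invert the hanging (tension) form
print(round(max(arch), 2)) # rise of the compression arch at mid-span: 0.25
```

Mirroring the sagging curve about the horizontal axis turns pure tension into pure compression, which is the essence of the hanging models used by Gaudí and Otto.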

2.4 Computational design methods

From the early days of digital technology, computers have been used in design evaluation and synthesis. Cross (1977) lists a number of programs for analysing design briefs, synthesising floor plans, and evaluating design solutions. Whereas analytical software is quite extensively deployed in the contemporary design process, design synthesis mostly remains at the level of conventional modelling in CAD, reducing the model to a representation of surface geometry. This kind of modelling can fairly be termed 'manual', since the selection and transformation of vertices and faces of the surface geometry are controlled by the designer's hand movements. Frazer (1995, p. 66) argues that modelling in architectural design practice occurs mainly after the design is already conceived, and the 'what if' type of models common to dynamic modelling are hard to find. All CAD applications make certain assumptions about the form and its manipulation, but designers should develop their own programs (Frazer 1995, p. 26). Weinstock (2006) proposes that one can instead use mathematical models to generate and evolve forms and structures in morphogenetic processes.

Parametric modelling

One modelling paradigm that has lately gained a lot of attention in architectural communities is parametric modelling. Patrik Schumacher (2008) has even hailed 'Parametricism' as a new architectural style. He points out that this new style can only exist via sophisticated computational techniques of scripting and parametric modelling in specialist CAD software. In a parametric model, design objects are described as a set of architectural elements and the interrelationships between these elements. According to Aish and Woodbury (2005), the parametric model is a constrained collection of schemata. The parametric model is propagation-based and acyclic (Aish and Woodbury 2005), which makes it essentially static. Parametric models depend on input parameters that can be dynamically manipulated by the designer or a computer program. However, without embedding it in a dynamic process, the parametric model remains just another representation of the building geometry.
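The propagation-based character of a parametric model can be shown with a toy example (the schema below – a facade width divided into bays – is invented for illustration): derived values are pure functions of upstream parameters, so changing an input re-propagates through the acyclic dependency graph, but nothing in the model evolves over time of its own accord.

```python
class ParametricModel:
    """A toy acyclic, propagation-based parametric model: derived values
    are pure functions of upstream parameters and recompute on change."""
    def __init__(self, width, bay_count):
        self.params = {"width": width, "bay_count": bay_count}

    def set_param(self, name, value):
        # Changing an input parameter implicitly re-propagates downstream.
        self.params[name] = value

    @property
    def bay_width(self):
        return self.params["width"] / self.params["bay_count"]

    @property
    def mullion_positions(self):
        return [i * self.bay_width for i in range(self.params["bay_count"] + 1)]

facade = ParametricModel(width=12.0, bay_count=4)
print(facade.bay_width)   # 3.0
facade.set_param("bay_count", 6)
print(facade.bay_width)   # 2.0
```

However many times the parameters are changed, each evaluation maps inputs to outputs at a single point in time – which is why the text above classes parametric models as essentially static.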

Analytic, synthetic and evaluation methods

There is a general consensus that the major components of the architectural design process are analysis, synthesis and evaluation (Cross 1977, p. 13; Kalay 2004, p. 299). The design brief is analysed at the first stage, design solutions are generated at the synthetic stage, and assessed at the evaluation stage. At each stage, there exist several computational methods to assist the designer.

Analytic computational methods deal with the design brief and site-specific problems. These methods combine simple design problems into an overall design problem by constructing hierarchical problem trees (Cross 1977, p. 19-20). According to Milne (Cross 1977, p. 30), clustering problems into branches helps the designer to find compatible design solutions for a complex problem by checking the solution against every sub-problem in the branch.

Cross (1977, p. 40) believes that computer evaluation is the least controversial of the three stages since it is about rational appraisal and there is little creativity left. At the evaluation stage, various performances of design solutions are verified against the design goals and requirements; it is the feedback part of the design cycle (Kalay 2004, p. 299). Evaluation methods are positivist – they combine empirical knowledge with mathematical constructs. Typical computational evaluation methods are thermal, solar and structural analysis, fluid dynamics simulation, and pedestrian and vehicular traffic simulation.

Synthesis methods, as opposed to evaluation, are constructivist and essentially different from positivist analytical methods. Besides supporting extreme positivist positions, computer modelling is also capable of supporting the extreme constructivist approach (Scholl 2001) and can, therefore, be used for design synthesis. The synthetic design methods of automatically generating design solutions have long been a fascination of researchers in architectural computing (Cross 1977, p. 33).

The earliest computational synthetic methods are operational research methods (Kalay 2004, p. 201). These methods, mainly developed in the 1970s, utilise procedural logic to find solutions in situations where the problem domain and the solution space are fully known (Kalay 2004, p. 201). These procedural methods were originally invented for solving room layout and space-allocation problems (Cross 1977).

Another class of synthetic methods are heuristic methods. Computer applications using heuristic methods are often called expert systems. These are usually developed in the vein of old symbolic Artificial Intelligence research and emulate the habitual methods used by designers (Kalay 2004, p. 231). An expert system employs certain intellectual rules of thumb designed by the system creator to logically derive the solution (Skyttner 1996, p. 191). The heuristic rules are usually formulated as series of ‘if-then’ statements.

A separate group of synthetic methods is concerned with form-finding via physics simulation. A model of catenary arch formation (Kilian and Ochsendorf 2005) was mentioned earlier in this section. Computational methods have also been used to simulate Buckminster Fuller’s tensegrity structures (Tibert and Pellegrino 2003).

All the above-mentioned methods are essentially computational alternatives to traditional design methods. In contrast, Frazer (1995) believes that the computer can be used as an evolutionary accelerator and a generative force to aid the design process in a non-traditional sense. He sees a new kind of architectural model as an adaptive blueprint – a computer program that can generate location-specific solutions. His approach can be tentatively called the generative design approach. It borrows concepts and techniques from complex systems modelling and contemporary Artificial Intelligence and Artificial Life research. The subject is discussed in depth in the following section.
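The ‘if-then’ heuristic rules mentioned above can be illustrated with a minimal sketch of an expert system. The rules, property names and thresholds below are invented purely for illustration and are not drawn from any of the cited systems; a real expert system would encode a designer's actual rules of thumb.

```python
# A minimal rule-based (expert system) sketch: each heuristic is an
# 'if-then' rule that fires when its condition holds for a design state.
# All rules and thresholds here are hypothetical, for illustration only.

def make_rules():
    return [
        # (condition, advice) pairs playing the role of 'if-then' rules
        (lambda d: d["corridor_width"] < 1.2,
         "widen corridor to meet minimum egress width"),
        (lambda d: d["rooms"] > 8 and d["exits"] < 2,
         "add a second exit"),
        (lambda d: d["daylight_ratio"] < 0.1,
         "enlarge window area"),
    ]

def apply_rules(design, rules):
    """Return the advice of every rule whose condition is satisfied."""
    return [advice for cond, advice in rules if cond(design)]

design = {"corridor_width": 1.0, "rooms": 10, "exits": 1, "daylight_ratio": 0.2}
for advice in apply_rules(design, make_rules()):
    print(advice)
```

In this sketch the system creator's knowledge lives entirely in the rule list, which is precisely why such systems can only emulate the habits they were given.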

2.5 Generative design

This section is an attempt to define generative design and identify its role in the overall design process. Generative design is presented here as a collection of methods that share some common characteristics. Every design process has generative elements that may not be immediately apparent. At the stage of design synthesis, generative processes can combine computer automation and manual intervention in different quantities (Herr and Kvan 2007).

Zee and Vrie (2008) argue that generative designing is not a traditional design process. It employs different arithmetic methods to generate alternative design solutions to the design problem. This process allows the designer to find solutions to complex problems that cannot be found in a traditional way (Zee and Vrie 2008). Herr and Kvan (2007) agree that generative methods facilitate the exploration of alternative solutions and are motivated by the increasing complexity of the design task. They believe that computers should be used “as variance-producing engines to navigate large solution spaces and to achieve unexpected but viable solutions” (Herr and Kvan 2007).

In an attempt to define generative art, Galanter (2003) gives a somewhat different description of generative systems. He sees generative art as a system where the artist uses a procedural invention that has a certain degree of autonomy contributing to the end result. This view can be adapted to generative design too. Generative design methods are ‘autonomous’ in the sense that they execute a type of logic that distinguishes them from the rest of the design process.

Generative methods have their roots deep in systems dynamics modelling. In the generative design process, solutions to a design problem emerge as a result of dynamic processes. McCormack, Dorin and Innocent (2004) argue that generative systems incorporate system dynamics into the production of artefacts. These systems offer a philosophy and methodology to treat the world in terms of dynamic processes and their outcomes (McCormack, Dorin and Innocent 2004). Whereas other modelling methods try to reduce the complexity of the modelled phenomenon, generative methods aim to produce complexity.
In the generative process, the production of complexity usually happens through aggregation. Frazer (1995) sees the generative model as an abstract design solution. This model is not a one-off blueprint but is of a more generic type. It can be implemented in the context of the local environment and site-specific requirements, and can generate a variety of different solutions. This kind of generative design model can be validated in two ways. Firstly, the generative model, similarly to scientific models, has to comply with the principle of independence (Batty and Torrens 2005). The model has to be independent from the original set of data that was used to design and calibrate the model, and accept different sets. Secondly, generative models need to produce results of sufficient variety.

In this thesis, generative design is seen as a synthetic design method. Generative design models feature feedback mechanisms and are, therefore, cyclical in nature. The feedback ranges from simple mechanisms, where the model takes its own output for input, to relatively complex ones incorporating design evaluation routines. Generative design is an iterative and dynamic process where solutions to the design problem are found through the repetition of design development cycles.

Generative design employs various modelling methods. There is a general agreement among various authors about which methods belong to the generative design category. Most authors consider models of self-organisation, swarm systems, evolutionary methods and generative grammars to be generative methods (e.g. McCormack, Dorin and Innocent 2004; Zee and Vrie 2008). As opposed to the generative design process, where the feedback between the synthesis and evaluation stages is mediated by the designer or a separate computer program, these models are inherently generative – they feature internal feedback loops. By taking their output as input, generative models are self-referential. McCormack, Dorin and Innocent (2004) discuss the relevance of these methods to contemporary design practice, and claim that they are mainly needed to develop novel design solutions.
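The cyclical feedback mechanism described above – a model that takes its own output as the next input, closed by an evaluation routine – can be sketched in a few lines. The ‘design’ below is deliberately reduced to a single number pushed towards a target value; a real generative model would operate on geometry or topology, and the target, step size and threshold here are arbitrary illustrative choices.

```python
# A minimal sketch of the generative feedback cycle: generate a new
# candidate from the previous output, evaluate it, and repeat until
# the evaluation passes. All numbers here are illustrative only.

TARGET = 100.0

def generate(design):
    """Propose a modified design: a small step towards the target."""
    return design + 0.25 * (TARGET - design)

def evaluate(design):
    """Score the design: 1.0 would be a perfect match with the target."""
    return 1.0 - abs(TARGET - design) / TARGET

design, score = 10.0, 0.0
while score < 0.99:              # repeat the design development cycle
    design = generate(design)    # the output becomes the next input
    score = evaluate(design)
print(round(design, 1), round(score, 2))
```

The point of the sketch is structural: synthesis (`generate`) and evaluation (`evaluate`) form a closed loop, so the solution is grown iteratively rather than produced in one pass.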

Models of self-organisation

This group consists of several distinct models that all follow some principles of self-organisation. Typical models here are cellular automata (CA), swarm and particle systems, and agent-based models. Collectively, these models can be described as individual-based models. They all execute the logic of bottom-up aggregation where the global structure emerges from local interactions.

The most popular self-organisation models in generative design are probably CA models. Besides the abundance of CA models in geography for simulating urban growth and residential dynamics (e.g. Junfeng 2003; Polidori and Krafta 2004), there are many models that deal with the design of the built environment directly. The suitability of CA models for generating settlement structures has been recognised by Portugali (1999). He argues that the discrete cellular structure of such models makes them a natural tool to represent the discrete spatial structure of real cities. The first attempt to bring CA models to architectural design was made by Coates (1996). He and his students have produced a number of experiments exploring the form-finding capabilities of such models (e.g. Coates, Derix and Simon 2003). CA models have been used to generate 3D massing solutions at the urban scale (König and Bauriedel 2004), explore different massing options for a high-density urban block (Herr and Kvan 2007), explore the façade composition of high-rise buildings (Herr 2003), and develop aggregated building solutions (Krawczyk 2002). Architects and urban designers are often fascinated by cellular automata models for their relative simplicity and their ability to produce complex-looking outcomes. The origin and functional mechanism of CA models are further discussed in Chapter 4.

Although the CA model is a dynamic model, the topology of its structure is fixed – cells in such models can change state but not location. An alternative to fixed-structure models is the mobile agent-based model. Testa et al. (2001) argue that every component in a system, including components in the environment, can be treated as an agent. They propose an architectural model where agents, endowed with a set of constraints and preferences, represent houses. These agents can dynamically interact with one another and generate new urban forms. Agent-based modelling as a generative design method is also scrutinised in depth in Chapter 4.
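The bottom-up logic of cellular automata – fixed cells changing state according to purely local rules – can be shown with a minimal one-dimensional sketch. The choice of Wolfram's rule 30, the lattice size and the number of steps are illustrative; they are not taken from any of the cited CA models.

```python
# A minimal one-dimensional cellular automaton: each cell updates its
# binary state from its own state and that of its two neighbours.
# Rule 30 (an arbitrary illustrative choice) encodes the update table.

RULE = 30  # Wolfram rule number: bit n gives the new state for neighbourhood n

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        left, mid, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (mid << 1) | right  # 0..7
        out.append((RULE >> neighbourhood) & 1)
    return out

# Start with a single live cell and watch global structure emerge
# from nothing but local interactions.
cells = [0] * 31
cells[15] = 1
for _ in range(12):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Note that, exactly as stated above, cells change state but never location: the complexity of the printed pattern emerges solely from the repeated application of a local rule.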

Generative grammars

Generative grammar originates from the linguistic theory of Noam Chomsky (Chomsky 1956). It refers to a model for generating syntactically correct sentences from individual words. According to Duarte (2004), a generative grammar consists of substitution rules that are recursively applied to an initial assertion to generate the final statement. Grammar-based models exploit the principle of database amplification by generating complex forms from simple specifications (McCormack, Dorin and Innocent 2004). In design, generative grammars are usually called shape grammars; instead of combining individual words, they operate with shapes or shape descriptors.

The pioneering work on shape grammars by Stiny and Gips (1972) was all done by hand. Their underlying aim was to use generative techniques to produce sculptures and paintings, and to develop an understanding of what makes good art (Stiny and Gips 1972). Several computer implementations have followed their original work. Duarte (2004) has developed a shape grammar for the houses designed by the architect Álvaro Siza at Malagueira. Duarte has used a set of heuristics to evolve the final design by comparing the description of generated designs with the description of the desired house. This method allows him to generate Siza’s designs on demand without the architect.

A particular group of generative grammars is called L-systems or Lindenmayer systems, originally developed to generate fractal-like branching structures in plants (Prusinkiewicz and Lindenmayer 1990). L-systems have also been explored in the design context. For example, Testa and Weiser (2002) have used an L-system-based program to grow morphogenetic surface structures and free-form honeycomb trusses. Parish and Mueller (2001) have developed a procedural city modelling methodology using L-system generators for street networks, and additional shape grammars for generating buildings.
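The recursive substitution described above, and the database-amplification effect of growing a long string from a short axiom, can be demonstrated with a minimal deterministic L-system. The rewriting rule below is a classic plant-like branching rule (with '[' and ']' marking branch start and end in turtle-graphics notation); it is a generic textbook example, not a rule from the cited projects.

```python
# A minimal L-system (D0L) sketch: the substitution rules are applied
# in parallel to every symbol of the current string at each generation.
# 'F' stands for a line segment; '+'/'-' for turns; '[' / ']' push and
# pop the turtle state, encoding branching.

RULES = {"F": "F[+F]F[-F]F"}  # a classic plant-like branching rule

def rewrite(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

print(rewrite("F", RULES, 1))        # F[+F]F[-F]F
print(len(rewrite("F", RULES, 3)))   # string length grows rapidly
```

Even after three generations the single-symbol axiom has amplified into a string of hundreds of symbols – a compact illustration of how simple specifications generate complex forms.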

Evolutionary design models

Evolutionary design models are based on evolutionary computation algorithms, originally developed in Artificial Intelligence research. Evolutionary computation is an iterative process that uses the Darwinian principle of selection to choose the fittest individuals from a population of solutions, and recombines the selected solutions in order to achieve the desired result. There are a few discernible methods – genetic algorithms, evolutionary strategies and evolutionary programming – that all belong to the general class of evolutionary algorithms (De Jong 2006, p. 1-2). All evolutionary algorithms are essentially optimisation techniques where the process converges towards an optimal solution with respect to given fitness functions. For instance, genetic algorithms (GA) – the most popular branch of evolutionary algorithms – are used for solving combinatorial optimisation problems (e.g. Krink and Vollrath 1997; Heppenstall, Evans and Birkin 2007). As a general approach, GA models utilise the mechanics of natural genetics (Goldberg 2008) by coding the features of a phenotype into binary ‘gene’ strings and recombining these strings to evolve novel solutions.
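The mechanics just described – binary ‘gene’ strings, selection of the fittest, recombination and mutation – can be condensed into a short sketch. The fitness function here (simply counting 1-bits, the textbook ‘OneMax’ problem) is a toy stand-in for a real design-evaluation routine; the population size, string length and mutation rate are arbitrary illustrative values.

```python
# A minimal genetic algorithm sketch: candidate solutions are binary
# 'gene' strings; the fittest are selected and recombined (one-point
# crossover) with occasional mutation. The fitness function is a toy.

import random

random.seed(1)
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(genes):
    return sum(genes)                 # toy fitness: count of 1-bits

def crossover(a, b):
    cut = random.randrange(1, LENGTH)  # one-point recombination
    return a[:cut] + b[cut:]

def mutate(genes, rate=0.02):
    return [g ^ 1 if random.random() < rate else g for g in genes]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]            # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # converges towards the maximum, LENGTH
```

Replacing `fitness` with a structural or spatial evaluation routine is all that separates this toy from the design-oriented GAs discussed below.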

There are strong arguments for borrowing concepts from natural evolution in order to develop design solutions. Cross (1977, p. 7) argues that developing the well-fitting forms of design objects is similar to the evolution of the forms of organisms in nature. Designers have always used the iterative process of creating a number of solutions, selecting among them, making improvements and recombining the solutions – a process quite reminiscent of natural evolution (Galanter 2003). Evolutionary systems offer the designer an opportunity to use aesthetic selection to breed design solutions in a controlled manner (McCormack, Dorin and Innocent 2004). A large number of designs can be generated in a short time and the emergent form is often unexpected (Frazer 1995).

Several researchers have employed genetic algorithms for morphogenetic and structural design purposes. Funes and Pollack (1999), for example, have developed a GA-based greedy algorithm to generate structurally sound configurations that are assembled out of parts. Mahdavi and Hanna (2004) have compared a stochastic genetic algorithm with a deterministic gradient-based search to optimise the geometry of space frame structures. Although their GA is outperformed by the deterministic method in terms of speed, the geometries found by it reveal more variation in shape. Frazer and his students at the Architectural Association have developed several genetic algorithms to evolve conceptual design solutions. For instance, they have devised an algorithm to generate yacht hulls, and another for evolving Tuscan columns (Frazer 1995, p. 61-63).

Evolutionary techniques other than GAs are less frequent in design-related research. Some efforts have been made in combining evolutionary algorithms with cellular automata for designing bracing structures in the context of tall buildings (Kicinger, Arciszewski and De Jong 2004).
Following Hillier and Hanson’s (1984) groundbreaking work, Fuhrmann and Gotsman (2006) have proposed a greedy system for evolving housing layouts that are analysed by calculating visibility graphs. Genetic programming has been introduced to architectural design by Coates et al. (1997), whose algorithm operates on shape grammar rules to generate conceptual designs.


2.6 Chapter summary

This chapter began by outlining the role of scientific modelling and continued by describing the traditional use of modelling in architectural practice. It investigated generative design methods as a sub-category of computational modelling. The goal was to set generative modelling into a larger methodological context and to identify its place in the overall design process. Whereas traditional design practice sees models as static representations, the contemporary computational approach utilises models in order to explore dynamic processes. Such dynamic models can be turned into digital fabrication tools that can greatly enhance the design process.

Generative models have a particular place amongst computational design methods – as opposed to analytical and evaluation methods, generative methods are used for synthesising design solutions. Various authors have proposed and tested several generative design methods in the context of design tasks. It is commonly accepted that generative models offer an alternative to traditional design methods. It is often pointed out that the main benefits of generative modelling are the ability to create a variety of solutions and to respond to the increased complexity of design problems. During the generative process, design solutions are developed in an iterative manner – solutions are grown rather than invented. As a result, generated design solutions are flexible and contextual.

The next chapter discusses a design issue – circulation – that is thought to be a sufficiently complex task with which to evaluate the suitability of generative modelling as a method of design. Circulation is usually a key part of buildings and settlements. The value of a set of tools that enables the designer to quickly generate a variety of solutions at an early stage cannot be overestimated. Additionally, circulation lends itself to both quantitative and qualitative analysis, which makes the evaluation of generated solutions fairly straightforward.


Chapter 3: Modelling circulation – an architectural review

This chapter presents a survey of studies and modelling techniques related to circulation in architectural design. This particular topic has been chosen because it is suitable for proving the usefulness of generative design methods in architecture. The circulation realm makes an ideal sandbox for testing out the bottom-up approach for several reasons. The most important reason is that natural circulation systems are inherently bottom-up and should, therefore, be modelled from the bottom up as well.

Circulation herein is seen as a dedicated area for movement that connects various parts of a building or settlement together into a coherent network of spaces. Crossing borders between private and public spaces, circulation provides opportunities for pedestrian and vehicular access. The circulation network creates and handles flows, links activities together and makes the space continuous. The circulation network channels matter and energy between spaces – it enables and is informed by movement. Movement can be a major shaper of the environment.

As opposed to many other issues in design, circulation lends itself quite easily to computation. Circulation routes can be measured and analysed parametrically; movement corridors can be optimised against several criteria, and circulation networks can be modelled by the act of movement. The underlying assumption in this thesis is that circulation diagrams can be generated within computer simulations using mobile agents.

In order to investigate the circulation realm, this chapter first looks at the general approach to circulation. Then it reviews relevant architectural representation techniques, and investigates the graph-theoretical perspective of circulation diagrams as networked graphs. After scrutinising the topological view of circulation networks, the last section examines some historical and contemporary computational models for generating circulation networks.


3.1 The essence of circulation

“Just as blood vessels of the vascular network distribute energy and materials to cells in an organism, road networks distribute energy, materials and people in an urban area.”

(Samaniego and Moses, p. 23)

Circulation is a flow of matter and energy, an orderly stream through a network of channels. The circulation network is a system of exchange among specialised spaces (Mitchell 2006). A distinct principle of organisation has been found common across many levels of spatially distributed systems. Similarly to a complex organism that consists of organs linked by the vascular network, a large building consists of specialised rooms linked by circulation systems, and an urban environment consists of specialised buildings and public spaces connected by pedestrian and vehicular circulation networks (Mitchell 2006).

Circulation in architecture is often thought of as the means by which access is provided through and around an environment. Perhaps the most important notion in this definition is that of accessibility. From an inhabitant’s point of view, those parts of the environment that are not accessible usually offer no utilitarian benefit to the inhabitant. Such spaces are always manifested in the lack of sufficient access or in the inappropriate layout of circulatory networks. Access to a space is provided via an entrance, across a threshold. Alongside other border elements, the threshold separates spaces of different quality or purpose. As opposed to the border and threshold, circulation does not define these spaces but facilitates the transition between them. Good access to a space is not always desired, and some degree of control over it is often required. As a result of control, people have different opportunities for movement and, therefore, circulation networks appear hierarchical and personalised (Alexander 1964).

Pedestrian and vehicular movement is often a major driver in the design process. In urban design, the street network is a key element that, together with built forms, constitutes the structure of the environment. This network defines the opportunity for motion and the means of access.
With respect to mobility, there are two basic solutions available for building design – the shelter and the burrow (Arnheim 1977, p. 148-151). Whereas the abstract type of the burrow is the result of the inhabitant’s physical penetration, a shelter cares about its user’s movement only secondarily, and derives its form from its own function instead. The burrow type of building is directly informed by the user’s acts of movement, providing more space at locations where the user wants more freedom of direction (Arnheim 1977, p. 149).

Good circulation in buildings and urban environments requires many design criteria to be met. The circulation network has to handle anticipated flows and provide access to desired spaces. An obvious aspect is also the economy of the layout – solutions that optimise the length of journeys between connected spaces are usually better in terms of occupational and construction costs. However, an optimised layout is not always desired. Many architects and designers value the user experience of moving through an environment. Kevin Lynch (1981, p. 146), for example, affirms that circulation is not just about shortest paths and can provide aesthetic pleasure; hence the optimisation of road and corridor lengths often comes at a cost. Besides the experience of moving through, Varawalla (2004) also values conceptual clarity in circulation. The clarity of layout makes the environment legible and facilitates wayfinding. Clarity is often understood as an appropriate manifestation of hierarchy (Marshall 2005, p. 30). With respect to street networks, many contemporary urban design guidebooks encourage high permeability and connectivity (e.g. Llewelyn-Davies 2000; Evans 2008).

Circulation is often the most attractive and active area of the built environment. In some cases it constitutes the major part of the venue – airports, racing courses, subway and train stations all feature large dedicated spaces solely intended to accommodate movement. Ancillary spaces in these venues generally follow the circulation logic.
In hospitals and schools, the circulation network is also a key driver of spatial organisation. The way in which people move in these buildings is the fundamental generator of the plan (Varawalla 2004). In architectural practice, the area dedicated to circulation is often treated as a standard value – a percentage of the total floor area (Ireland 2008). This value is typically specific to a building type. In some building typologies circulation often shares the same space with other activities. Such integrated circulation solutions can create more flexible indoor spaces, and are more practical in terms of floor area than segregated circulation solutions.

Although circulation is an important issue in architecture, there is surprisingly little comprehensive literature available on the subject. One can find a bulk of publications that suggest codes and guidelines for designing circulation systems (e.g. Llewelyn-Davies 2000; Evans 2008), but rigorous overviews of the subject are rare. Koutmanis et al. (2001) point out that most analytical studies and design guidebooks appear to accept a reductive logic. This reductionist approach tends to over-constrain the architectural brief and limit the possible solution space. Marshall (2005, p. 29) adds that the desired patterns expressed in the literature are too often couched in terms of verbal descriptions of properties, or solely demonstrated by means of illustrative plans. Most of the descriptions and examples either provide irrelevant details or oversimplify the dynamics of circulation networks. Besides the rulebooks and design guides, there is an abundance of architectural design publications (e.g. Berkel and Bos 1999; Jormakka 2002) that present the concept of movement and circulation as a driver of built form. Although this literature can be very inspirational, it is hardly useful for devising computational models to generate circulation.

3.2 Diagrams of circulation

“The diagram is an invisible matrix, a set of instructions, that underlies - and most importantly, organises - the expression of features in any material construct.”

Sanford Kwinter (Reiser+Umemoto 2006, foreword)

The diagram is a popular representation method for expressing conceptual ideas in architecture. It helps to understand a building through examining its various systems. Diagrams are tools that help to manipulate the building form and spatial organisation in the early stages of the design process. The reason why diagrams are so popular among architectural design practitioners is that they convey information visually – they appear to operate between form and word (Somol 2006). According to UN Studio’s architects Berkel and Bos (2006), diagrams liberate architecture from language and allow architects to encode and compress spatial information. They see diagrams as a kind of map that always points to something. Diagrams, in Berkel and Bos’ terms, are abstract representations of relationships (Berkel and Bos 1999, p. 75).

Moussavi and Zaera-Polo (2006) from Foreign Office Architects (FOA) point out that it takes several forms of mediation for a diagram to become a building. Similarly to drawings and graphs, diagrams belong to the arsenal of non-representational architecture. The FOA architects insist that diagrams should not be associated with a lack of control. Spuybroek (NOX architects) agrees that the diagram is a very clearly defined network of relationships, but “it is completely vague in its formal expression” (Spuybroek 2006). A diagram for him is an input/output machine with two operational modes: contracting data into graphical representations, or expanding these representations into spatial form. Although different architects may interpret diagrams in different manners, they all generally adopt Deleuze’s view of diagrams as abstract machines. The Deleuzian diagram maps the relations between forces and, instead of representing anything real, constructs a real to come (Jormakka 2002, p. 49). There also seems to be an agreement among architects that diagrams, although not representational, are the designer’s instruments to produce form.

Figure 3.1: Constructive diagram. Source: (Alexander 1964, p. 88)

The use of diagrams is not an entirely new method of work in architecture – Somol (2006) claims that diagrams became ‘actualized’ in the era of modern architecture in the 1960s. One of the forerunners of contemporary architectural diagramming, Christopher Alexander, argues that a diagram has to convey some kind of idea about the physical form in order to be useful for an architect (Alexander 1964, p. 87). In his opinion, there are two types of diagrams – form and requirement diagrams. Whereas the form diagram points to a physical shape, the requirement diagram is a non-iconic notation of some constraints or properties (Jormakka 2002, p. 28). The latter is only interesting if it somehow informs the former. Combined together, these two types result in a diagram that Alexander calls a constructive diagram. A well-known example of his constructive diagram is the diagram representing traffic flows. In this diagram (see Figure 3.1) the arrows represent the channels of flow, while the line thickness conveys the capacity of these channels.

Figure 3.2: Functional circulation diagram of Yokohama terminal. Source: (Ferre et al. 2002, front cover)

‘Abstract machines’ representing movement and circulation are amongst the most frequently used diagrams. Architects seem to naturally think in terms of networks when they solve circulation issues. Many architects use topological representations of circulation to organise the architecture of their designs. Foreign Office Architects have deployed a diagram of movement (see Figure 3.2) to inform the design process of the Yokohama ferry terminal. The form of the terminal building is generated from this circulation diagram that “aspires to eliminate the linear structure characteristic of piers, and the directionality of the circulation” (Jodidio 2001, p. 220-221), highlighting topological relationships rather than showing exact topographical proximities. Perhaps the tendency to represent circulation using topological diagrams comes from engineers, who prefer these for the sake of clarity. For example, the famous topological map of the London Underground (see Figure 3.3) was originally drafted by the electrical engineer Harry Beck (Hadlaw 2003).


Figure 3.3: London Underground map. Source: (Beck 2010)

The process of turning a diagram into an architectural design proposal is illustrated by Maxwan Architects. The diagrams of 50 bridges in Leidsche Rijn look suspiciously similar to Alexander’s diagram of traffic flows (compare Figure 3.4 and Figure 3.1) because they follow a similar logic where circulation responds to the local needs, and the width of paths represents the intensity of expected flows. What is fascinating about Maxwan’s work is that these diagrams are then quite directly mapped to the actual physical shape – the constructive diagram becomes a constructed one.

Figure 3.4: Bridges in Leidsche Rijn by Maxwan Architects. Source: (Maxvan no date)


Planar network diagrams depicting cities and street networks have become a standard in architectural and urban design practice. Compared to the abundance of city and street diagrams, the abstract representation of circulation in buildings is still a relatively neglected method of work. Such a bias towards flat 2D representations suggests that the nature of circulation in buildings is harder to conceive and visualise via diagrams. However, diagramming techniques have developed further from the early modernist bubble diagrams, and new computational tools have made it possible to construct and explore more complex diagrams (Raisbeck 2007). A practice that has extensively deployed 3D circulation diagrams (see Figure 3.5) throughout the last decade is the Amsterdam-based UN Studio. In the case of complex architectural briefs, movement studies appear to be a cornerstone of UN Studio’s design proposals. In the Arnhem Central station project their flow diagrams examine pedestrian connections with the systems of infrastructure. These diagrams give architects a quick comprehensive insight and generate more effective connections between transport systems and other programmes (Berkel and Bos 1999).

Figure 3.5: Circulation diagram of the Louis Vuitton store. Source: (UNStudio no date)

Circulation is represented in many architectural sketches implicitly; the movement network is not directly extractable from, but is nevertheless present in, the sketch representation. However, it tends to be much more explicit in city diagrams. The reason is not immediately apparent – perhaps the patterns of movement at the urban level correlate with the built form to a greater extent than in buildings. Consequently, the means of circulation are directly manifested in the form of the built environment. Kevin Lynch, theorising about a good city form, lists the most common geometrical city patterns – grid, radio-centric and capillary (Lynch 1981, p. 424-425). These patterns refer to the composition of urban blocks as well as to the street network.

With the advances in computer technology in the 1960s and 1970s, there was a wave of researchers trying to quantitatively explore circulation in buildings (e.g. Tabor 1971; Willoughby 1975). Circulation was thought to be suitable for computational analysis and optimisation. Much in the vein of systems theory and the modernist architectural movement, several applications were devised to develop optimal layout plans and to minimise travel costs between activities (Tabor 1971). These applications often required generalised floor plans and led to standardised views on circulation-based building typologies. Willoughby (1975), for example, divided office layouts according to circulation and the spatial arrangement of activities into five distinct layout typologies: slabs, courts, crosses, fish-bones, and open plans. Despite the mass of quantitative research carried out, the simplistic top-down computer models and the complexity involved in designing satisfactory building layouts prevented this approach from breaking through into mainstream architectural practice.

The taxonomy of circulation patterns is nowadays best established in highway engineering, and is also being adopted by the urban design community. Marshall (2005, p. 45-67) uses the grade separated system and classifies streets and roads as freeways, expressways, arterials, collectors or local streets. Some grade separated systems give a finer grain, dividing arterials, for example, into throughways, boulevards and avenues (Bochner and Dock 2003). The grade separated classification system defines the possible hierarchy of a street network.
In a good circulation system, as Lynch (1981) points out, local streets feed into arterials, which feed into expressways. Such hierarchies match the assumed traffic flows, and have also been observed to develop naturally in unplanned settlements (Lynch 1981). A representation of the grade separated system is not very different from what Alexander describes as a requirement diagram (Alexander 1964), where circulation elements are defined solely by their capacity to handle flows of a certain magnitude. Besides the hierarchical classification, Marshall (2005, p. 86-87) also discriminates between geometry- and topology-based representation systems.

Altogether, he distinguishes between three different modes of representation (see Figure 3.6) – constitutional (hierarchical), configurational (topological) and compositional (geometrical). If a constitutional representation is an abstraction of configuration, the latter is an abstraction of composition (Marshall 2005, p. 86), while the composition is a rather direct representation of the actual geometrical street layout. Whereas the constitutional diagram resembles Alexander’s requirement diagram, the configuration diagram translates as the form diagram. Combined, the configuration and constitution diagrams produce a constructive diagram. This constructive diagram, in turn, when adapted to a specific site, is the generator of the compositional road layout.

Figure 3.6: From left to right: composition, configuration and constitution of street networks. Source: (Marshall 2005, p. 86)

The topological view of circulation networks supports a great deal of scientific studies collected under the graph theory umbrella. In graph theory, movement and flow patterns are reduced to the most basic and elemental topological form – graph networks (Haggett and Chorley 1969). A graph network is made of lines and points, sometimes also referred to as ‘edges’ or ‘links’, and ‘vertices’ or ‘nodes’, respectively. Batty claims that the tendency of articulating urban form using graph-theoretic principles has a long tradition (Batty 2004), and particularly points out space syntax research and Philip Steadman’s topological studies of different building typologies.
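As a minimal illustration (not taken from the sources cited above), such a graph reduction can be stored as an adjacency list; the node names and links here are purely hypothetical:

```python
# A street network reduced to its graph form: vertices (nodes) and
# edges (links), stored as an adjacency list.
street_graph = {
    "A": ["B", "C"],        # node A links to B and C
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B"],             # D is a cul-de-sac: a single link
}

num_nodes = len(street_graph)
# Each undirected edge appears twice in an adjacency list, hence // 2.
num_edges = sum(len(nbrs) for nbrs in street_graph.values()) // 2

print(num_nodes, num_edges)  # 4 4
```

All the topological measures discussed in this chapter (connectivity, circuits, hierarchy) can be computed from such a structure regardless of the geometry the network takes on the ground.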


There is an interesting connection between the graph-theoretical view of movement networks and Gibson’s (1979) ecological approach to spatial perception. In graph theory, circulation is represented as a network consisting of lines and nodes. Gibson thinks of movement in a similar way – the lines of locomotion link up the points of observation. The medium, as Gibson describes the environment, is defined by this network. This analogy shows that the graph-theoretical approach is not necessarily just an abstract mathematical model of space, but aligns with Gibson’s psychological view of how people perceive space.

3.3 Topological circulation networks

The topological view of networks is central to spatial analysis in geographical science (Batty 2004). Haggett and Chorley (1969) have used graph-theoretical methods to analyse and categorise spatial systems in geography, and have built a solid base for any discipline dealing with spatial data and flows. Naturally, their work serves well for the purpose of classifying circulation patterns in architecture and urban design.

Figure 3.7: Topological classification of networks. Source: Haggett and Chorley (1969, p. 7)

Haggett and Chorley (1969, p. 3) divide networks into three main topological classes – branching nets, circuit nets and barrier nets (see also Figure 3.7). Branching networks, the simplest class of topological nets, are distinguished by their hierarchical tree-like structure. A tree network can take an infinite number of geometrical forms even if the topology of the network remains the same. As opposed to branching nets, circuit networks feature closed loops or circuits. With the same number of nodes in two circuit networks, the number of links may vary, leading to multiple topologies. Barrier networks are intrinsically different from the two previous network classes – these nets consist of links that block or resist the flow instead of channelling it.

Topological circulation networks in architecture and urban design can be classified in a similar way to Haggett and Chorley’s. In his well-known essay A City is Not a Tree, Alexander (1965) distinguishes between two essentially different spatial organisations – a semi-lattice and a tree. The organisation is a tree when urban elements work together and are nested, but do not overlap. In a tree-like structure, units are connected to other units only through the medium of the larger unit that contains them as a whole. According to Alexander, many examples of tree-like cities have been proposed and built throughout history (see Figure 3.8), from Roman army camps to Chandigarh by Le Corbusier and Brasília by Lucio Costa. Other examples include the Tokyo plan by Kenzo Tange, a plan for Mesa City by Paolo Soleri, and a garden city in Maryland by Clarence Stein.

Figure 3.8: Tree-cities: Chandigarh, Brasília, Greenbelt in Maryland, plan of Tokyo. Source: (RUDI no date)

Although the tree structure may seem appealing for its clarity, the reality of social structures in a contemporary settlement, as Alexander points out, is a heavily overlapping one – systems of acquaintances and friendships in a community are semi-lattices, not trees. Unplanned urban environments seem to follow this pattern naturally, evolving semi-lattice structures without master plans. Artificially implemented tree-like urban organisation leads to low connectivity and permeability in the environment and high segregation between neighbourhoods and groups of inhabitants, and is often associated with several social problems. Despite the obvious criticism, tree-like urban patterns are wide-spread in European and North American cities built during the heights of urban sprawl in the 20th century. Only recently have the problems with this highly hierarchical organisation been widely acknowledged (Rogers 1999).

Whereas tree-like organisation is commonly considered unwelcome in contemporary urban design practice, the same cannot be said about buildings. Tree-like pedestrian circulation networks are not only acceptable but even encouraged in some building typologies. The tree structure offers a higher degree of control in airports and train stations, schools and hospitals; a classical apartment house also tends to be a tree.

3.4 Optimal designs

Circulation networks lend themselves relatively well to quantitative analysis. The work of Haggett and Chorley (1969) shows that many network parameters can be easily calculated. The network density parameter, for example, can be evaluated using a few different methods – by calculating the number or some characteristics of network elements in a unit of area. Other geometrical characteristics are more difficult to capture in any quantitative fashion. The shape of a network, for example, appears to be a very difficult feature to convey by numbers.

Several network parameters can be taken into account when optimising the circulation layout. As different problems have different optimal solutions, optimisation is seldom an objective procedure. The main question here is: what is the network optimised for? Mitchell (2006) points out that the total network cost involves the fixed costs of building it and the interactive costs of using it. Interactive costs depend on the distance of trips and the transportation volume. Haggett and Chorley (1969, p. 111-118) express essentially the same principle – networks can be optimised for building costs or travelling costs: total cost over time equals user costs in that time plus the building cost.
The efficiency of networks can be calculated by the average distance travelled within an area boundary. If the network links have different capacities, efficiency can also be calculated by the average time a trip takes. Haggett and Chorley (1969, p. 126-130) assert that any real-world efficiency measures are taken under complex assumptions, and there is no single solution to minimum distance networks.
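The average-distance measure can be sketched directly. The fragment below (link lengths are invented for illustration) computes all-pairs shortest distances with the Floyd-Warshall algorithm and averages them over all distinct node pairs:

```python
import math

# Links of a small hypothetical network with travel distances.
links = [("A", "B", 1.0), ("B", "C", 2.0), ("A", "C", 4.0), ("C", "D", 1.5)]
nodes = sorted({n for a, b, _ in links for n in (a, b)})

# All-pairs shortest distances (Floyd-Warshall).
dist = {(a, b): (0.0 if a == b else math.inf) for a in nodes for b in nodes}
for a, b, d in links:
    dist[(a, b)] = dist[(b, a)] = d
for k in nodes:
    for i in nodes:
        for j in nodes:
            dist[(i, j)] = min(dist[(i, j)], dist[(i, k)] + dist[(k, j)])

# Efficiency as the mean shortest trip between distinct nodes.
pairs = [(i, j) for i in nodes for j in nodes if i < j]
mean_trip = sum(dist[p] for p in pairs) / len(pairs)
print(round(mean_trip, 2))  # 2.58
```

Weighting each pair by expected traffic volume, or replacing distances with travel times, gives the capacity-sensitive variants mentioned above.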

Circulation networks can be designed to optimise distance in many ways. The travelling salesman circuit, for example, is a minimal unbroken chain that connects all the nodes in the network (Haggett, Cliff and Frey 1977, p. 76). A minimal spanning tree is the shortest network joining a collection of points so that any point can be reached from any other point (Haggett, Cliff and Frey 1977, p. 76). Steiner trees, a class of minimal spanning trees, are widely used to inform the design of real-world circulation structures such as highways and oil pipelines (Herring 2004). The travelling salesman problem and the minimal spanning tree problem are amongst several well-known problems in network theory that are easy to state and can be solved by trivial methods in theoretical situations, but become intractable in real-world situations. The difficulties arise from the exponentially explosive nature of combinatorial mathematics (Haggett, Cliff and Frey 1977, p. 77-79). While simple cases of optimal networks can be easily computed, a network with a large number of nodes is hardly ever the optimal one. To design a minimal distance network, heuristic algorithms are preferable. The Steiner tree problem, for instance, is best solved heuristically, because exact algorithms take exponentially more time as the networks grow larger (Herring 2004).

The task of architecture and urban design is not necessarily an optimisation task. As Kevin Lynch (1981, p. 146) points out, movement can be a source of enjoyment, which then becomes a design intent in itself. However, minimal networks can certainly be applied to buildings where the speed and ease of getting from one place to another is of essential importance, or where the cost of building the movement infrastructure is crucial.
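For the minimal spanning tree mentioned above, a greedy algorithm does yield an exact answer in polynomial time. The sketch below uses Prim's algorithm; the five sites and their distances are invented for illustration:

```python
import heapq

# Symmetric point-to-point distances between five hypothetical sites.
edges = {
    (0, 1): 2.0, (0, 2): 3.0, (1, 2): 1.0, (1, 3): 4.0,
    (2, 3): 5.0, (2, 4): 2.5, (3, 4): 1.5,
}

def prim_mst(n, edges):
    """Grow a minimal spanning tree from node 0 and return its total length."""
    adj = {i: [] for i in range(n)}
    for (i, j), w in edges.items():
        adj[i].append((w, j))
        adj[j].append((w, i))
    visited, total, heap = {0}, 0.0, list(adj[0])
    heapq.heapify(heap)
    while heap and len(visited) < n:
        w, j = heapq.heappop(heap)   # cheapest link out of the tree so far
        if j in visited:
            continue
        visited.add(j)
        total += w
        for nxt in adj[j]:
            heapq.heappush(heap, nxt)
    return total

print(prim_mst(5, edges))  # 7.0
```

The contrast with the travelling salesman and Steiner problems is instructive: those admit no known exact polynomial-time algorithm, which is why the heuristics discussed above are preferred in practice.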

3.5 Computational models

Algorithmic solutions to spatial problems have generated enthusiasm in many architectural theorists and fascinated practitioners for decades. With the ever-increasing availability of computing power, more and more computational solutions can be and actually are tested, opening up new possibilities for architects. Generative models shed light on network formation processes, offering an alternative view to descriptive models. Descriptive steady-state models are not helpful for understanding dynamic networks, since networks are usually formed by an iterative step-by-step process (Haggett and Chorley 1969, p. 285). Lynch (1981) adds that the computer makes it possible to explore a view of the city different to an intuitive and descriptive analysis. The computer can help us model the city as “the cumulative product of the repeated decisions of many persons and agencies - actors who have diverse goals and resources” (Lynch 1981, p. 336). When the actors in such models are made to represent pedestrian activity, it is not difficult to see how the model becomes a useful tool for solving circulation issues. This kind of computational modelling is not only useful at the urban scale, but is equally applicable to architectural design problems.

Circulation analysis is usually carried out early in the design process, analysing existing spatial arrangements, or later with respect to a particular problem such as evacuation (Koutamanis, Leusen and Mitossi 2001). This analysis relates to way-finding, where routes are searched on the basis of some normative criteria. Apart from the success of analytical models, pedestrian circulation in architectural computing is a relatively neglected area for several reasons. Koutamanis et al. (2001) outline four of these reasons:

• the complexity and lack of data of human interactions with the building

• the complexity of computer simulations

• weak briefs and the reductive logic of building codes

• lack of integration with design synthesis

With respect to the purpose of this thesis, the last reason is worthy of special attention. The case studies (see Chapter 7 and Chapter 8) present a few methods of integrating computational models into the architectural design process. Both of these studies were also tested in the professional context of architectural competitions. Computational models for finding suitable circulation layouts are often accompanied by automatic activity location procedures. Activity location, however, brings further complexity into the model. Activities in a building typically have various associations that may be asymmetric or ambiguous (Ireland 2008). Even if the associations are clearly mapped out in distance matrices between activities, a building with 20 activities may permute in 2.5 trillion ways (Tabor 1971, p. 47). In consequence, no final layout is the optimum – all generative methods are heuristic (Tabor 1971, p. 20).


Taxonomy

From the perspective of geography, Haggett and Chorley (1969) categorise network simulation models according to their topological classification. They further subdivide simulation models of branching networks into growth, random and capture models, and models of circuit networks into colonisation and interconnection models. While growth models typically start with un-eroded land and tend to reach a static equilibrium, capture models start with an existing landscape or network structure. Random models are not generally useful, as they give little insight into the evolution of a network. Colonisation models in circuit networks explore space from source nodes by establishing new nodes; interconnection models, on the other hand, explore possible links between given nodes.

With respect to the computational logic involved, models for generating building layouts and circulation networks can broadly be divided into two main categories of additive and permutational methods (Tabor 1971, p. 1). Whereas additive techniques assemble activities piece by piece on an empty floor, permutational methods typically modify pre-processed building layouts. Additive methods involve three stages: creating the initial framework and establishing the boundaries, automatic location of activities, and manual modification of the output. Permutational methods have another stage of creating an initial building layout between the first and the second step of additive methods. The models with automatic location of activities are evolutionary in nature – piecemeal improvements are sought to achieve circulatory efficiency (Tabor 1971, p. 56).

Network development models can also be categorised by sequencing techniques. Typical spatial network sequencing methods are node connecting (travelling salesman problem solvers), space filling (e.g. Hilbert curve), and space partitioning (e.g. Voronoi subdivision) methods.
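Space-filling sequencing can be made concrete with a small sketch. The function below is the standard distance-to-coordinate conversion for the Hilbert curve, included purely for illustration: it maps a one-dimensional ordering onto a two-dimensional lattice so that consecutive steps are always adjacent cells.

```python
def hilbert(order):
    """Return the (x, y) lattice cells visited by a Hilbert curve of given order."""
    def d2xy(n, d):
        # Standard conversion of curve distance d to cell coordinates.
        x = y = 0
        t, s = d, 1
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:            # rotate the quadrant if needed
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y

    n = 2 ** order
    return [d2xy(n, d) for d in range(n * n)]

path = hilbert(2)          # order-2 curve fills a 4 x 4 lattice
print(path[:4])            # [(0, 0), (1, 0), (1, 1), (0, 1)]
```

Every cell of the lattice is visited exactly once and each step moves to a neighbouring cell, which is what makes such curves usable as a sequencing device for laying out spaces or routes.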
Berntson (1997) offers a botanist’s view of networks, classifying plant root models into developmental (centrifugal) and functional (centripetal) ordering sequences. A separate conceptual class of developmental models is formed of aggregation and individual-based models. A classic example of this group is the diffusion limited aggregation model (Batty 2005, p. 124-129).
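The diffusion limited aggregation process just mentioned can be sketched in a few lines: particles released at the edge of a lattice random-walk until they touch the growing cluster, where they stick. This is a minimal illustrative version with invented lattice size and particle count, not Batty's implementation:

```python
import random

random.seed(1)
SIZE = 21
grid = {(SIZE // 2, SIZE // 2)}  # seed particle at the centre

def aggregate_one():
    """Release a particle at the lattice edge and random-walk it
    until it lands next to the cluster, where it sticks."""
    while True:  # pick a free starting cell on the left or right edge
        x, y = random.choice([0, SIZE - 1]), random.randrange(SIZE)
        if (x, y) not in grid:
            break
    while True:
        nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        if any(n in grid for n in nbrs):   # touching the cluster: stick here
            grid.add((x, y))
            return
        x, y = random.choice(nbrs)         # otherwise keep diffusing
        x = max(0, min(SIZE - 1, x))       # clamp to the lattice
        y = max(0, min(SIZE - 1, y))

for _ in range(30):
    aggregate_one()
print(len(grid))  # 31: the seed plus 30 aggregated particles
```

Because particles arriving by random walk are far more likely to hit protruding tips than sheltered interiors, the cluster grows dendritic, tree-like arms rather than a compact blob.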


Examples

The tractable and quantifiable nature of networks has resulted in an abundance of computational models for generating circulation systems. The vast number of solutions that have been proposed for solving minimal spanning tree and travelling salesman-type problems is beyond the interest of this thesis. It is perhaps only worth mentioning Adamatzky’s (2001, p. 105-170) approach to solving minimal route problems: he has shown that solutions to many such problems can be computed bottom-up in cellular automata collectives.

The extensive exploration of heuristic models for automatic floor plan and circulation generation in the 1960s has been documented by Tabor (1971) – several modellers in that period engineered computational methodologies for factories, hospitals, educational buildings and offices. All these models broadly represent a mechanistic view of circulation and tend to leave out other spatial qualities. The generated results are usually diagrams that are intended to satisfy the circulation condition and do not take into account organisational, functional, environmental, geographical, structural, legal or financial criteria (Tabor 1971). Somewhat more advanced are the contemporary office layout generators that derive the floor plate shape from circulation (Shpuza and Peponis 2006). To guarantee that the final design matches given criteria, Shpuza (2006) chooses a preferred circulation system in advance, but also explores the potential of deriving circulation from the floor plate shape.

Marshall (2005, p. 222-228) shows how highway engineering rules can be plugged into the generative process. He suggests a computational approach of turning the constitutional rules of road hierarchy into local road configurations. The program “can generate a diversity of layout patterns which can themselves be adapted to local circumstances" (Marshall 2005, p. 227-228).
Akin to Marshall’s idea, several shape grammar based models have been developed by various authors. Pascal Mueller has participated in creating CityEngine, a procedural city modelling application that enables designers to quickly grow street networks. The generative method behind CityEngine is a shape grammar similar to context-sensitive L-systems (Parish and Mueller 2001). A similar approach has been used for modelling road patterns found in informal settlements in South Africa (Glass, Morkel and Bangay 2006).

Methods borrowed from AI research and evolutionary computing have often been used in generative modelling. For example, Zhang and Armstrong (2005) have introduced genetic algorithms to locate corridors in a 2D lattice of cells. Diffusion limited aggregation has been used for explaining the growth processes of city networks (Batty 2005, p. 50-51) and for generating street networks (Nickerson 2008). A computational model for generating leaf venation patterns (Runions et al. 2005) is capable of producing circulation networks that share so many commonalities with street networks that it can be considered for architectural and urban design purposes. A few agent-based models have also been used for generating circulation systems. Goldstone and Roberts (2006) have proposed an agent-based model for studying and reproducing self-organised trail systems in groups of humans. Ireland (2008) has introduced an agent-based model that also aims to sort out the desired relationships between activities in a building. Buhl et al. (2006) describe an agent-based model, inspired by tunnelling ants, that simulates the growth of networks.

3.6 Chapter summary

As a part of the literature survey, this chapter looked at the issues of circulation in architecture and urban design. It was established that the essence of circulation is to provide access to specialised spaces and the means to move through and around an environment. It was also shown that circulation in buildings and settlements shares many commonalities with circulation networks in nature. Therefore, it was argued that formation principles and classification methods of circulation can be borrowed from the spatial analysis of natural phenomena – from network analysis in geography. Network analysis is used in assessing the results of the prototype and case study models later in this thesis.

Architects and urban designers often use diagrammatic representations in order to express circulation in buildings at the conceptual design stage. This validates the assumption that diagrams are indeed useful tools in developing spatial solutions. However, the diagrams presented in this chapter are fairly abstract and are not suitable for thorough analysis and evaluation. It is envisaged that if these diagrams were based on computational models, it would open up the opportunity to analyse and evaluate diagrammatic forms and would eventually help in finding better design solutions. Although several computational methods for network optimisation exist, there are only a few for generating circulation networks in the first place. The main objective of Chapters 6 to 8 is to propose several new generative models that fill this gap.

The following chapter surveys agent-based models and discusses the possibility of using such models for generating circulation networks. It scrutinises agent modelling techniques in the hope of extracting principles that can be reused for simulating network formation processes in nature. These principles can then possibly be used for constructing generative models. Agent-based modelling is potentially a very powerful method for solving circulation problems because of its bottom-up nature – just like natural circulation systems, agent-based models consist of many interacting particles.


Chapter 4: Agent-based modelling and multi-agent systems: nature, origin and modern applications

The previous chapter introduced the generative approach for modelling circulation systems bottom-up. In this thesis, bottom-up modelling is seen as a method of achieving a global behaviour of a system by defining the individual behaviours of its components. Agent-based modelling is one such bottom-up method and has already proven useful in analysing the urban environment. There is also some evidence that agent modelling can be turned into a generative tool to assist architects in the design process. The nature of agent-based modelling seems to make it well suited to solving circulation problems – circulation is defined by accessibility and locomotion, and mobile agents are ideal for representing the movement of individuals in the context of the built environment.

Agent-based modelling (ABM) has rapidly gained popularity over the last few decades, resulting in a plethora of scientific studies across several disciplines. In fact, it has been so popular amongst scientists, scholars and practitioners of distributed computing that it is practically impossible to give an exhaustive yet concise overview of this relatively new computational paradigm. There are two main reasons why it is so difficult to compile a summary on agent-based models. Firstly, ABM is a cross-disciplinary paradigm and as such cannot be reviewed from the perspective of a single domain. Secondly, the terminology used in ABM has not been universally agreed – there is still a lot of confusion even when it comes to defining what an agent is. Therefore, with the specific focus on multi-agent systems, only a few selected definitions of agent are investigated in this chapter.

This chapter begins by looking at the theoretical foundations and historical references of the ABM paradigm.
It then continues with some popular definitions of agent and explores the most common properties that have been assigned to agents. After scrutinising the behaviour of stereotypical agent models, the last section outlines the uses of ABM in different specialist fields, focusing particularly on architectural design applications.


4.1 Background

Concepts of agent and agency

Before looking at the exact scientific definitions of the term agent, it is worthwhile exploring the philosophical thoughts that surround it. Referring back to some key figures in system theory, behavioural science, biology, complexity theory and computer science of the last century, one can observe how ideas of decentralisation have developed. Traces of decentralised thinking can already be found in the era of classical Newtonian science. Resnick (1994, p. 7) points out that Adam Smith’s work on markets and the economy from more than 200 years ago suggested a decentralist approach. Despite some early examples, this way of thinking has only recently become prevalent in many scientific discourses. Decentralised systems are best understood by modelling (Resnick 1999), and ABM is probably the most popular method for doing so.

To understand what the new paradigm is all about, one has to comprehend the notions of agent and agency. Two terms – entity and action – are often thought to compose the notion of agent. As Wan and Braspenning (1996) point out, neither of these terms can be decomposed into smaller notions; they can only be described via synonyms. Agency, a concept often used together with agents, provides a deeper insight – in order to understand agents, one can look at what a group of agents constitutes. Minsky (1988) describes agency as a set of smaller functions working in parallel. These functions, if combined together, form a higher level agent. Thus, each sub-agent can be described, in relation to the agency, as its function. Minsky suggests that the relationship between agents and agency is similar to that between parts and whole. Whereas an agent is a holistic concept and does not lend itself to further decomposition, an agency can be described in terms of sub-agents and their interrelationships. An agency is thus a system of agents, and as such is distinguished from its constituents by organisation (Skyttner 1996, p. 36).
Besides some early thinkers in economics, the decentralised approach also found supporters in biology (Resnick 1994). At the beginning of the 20th century, Wheeler – a biologist studying ant colonies – came to the conclusion that an organism is a dynamic agency acting in an unstable environment (Wheeler 1911, p. 308). He could not give a fully encompassing definition because “the organism is neither a thing nor a concept, but a continual flux or process, and hence forever changing and never completed” (Wheeler 1911, p. 308). However, he was able to describe the organism via its parts which, in the case of ant colonies, are organisms themselves. Much later in the century, Maturana and Varela (1980) had to coin a new term – autopoiesis – to characterise the nature of living systems. In the theory of autopoiesis, they see living systems together with their environment, structurally coupled via the sensory-motor loop.

The structure of an agency can be considerably simpler than that of its parts. Wheeler (1911) noticed that ant colonies resemble much lower level organisms than ants themselves. Niklas Luhmann explains that, in moving up to a higher level system, a reduction of complexity occurs because its elements are unified (Luhmann 1984, p. 27). As opposed to this reduction, complexity can also be increased through selective conditioning – by creating connections between these elements. Similarly to Wheeler’s view on organisms, several modern complexity theorists see living systems as composed of sub-agents. Kelly (1994, p. 50), for example, insists that minds and bodies, blurring inseparably across each other’s boundaries, are made of swarms of sublevel things. Organisms, in his sense, are multi-agent systems; organisms are agencies. Holland (1998) goes one step further and claims that the same can be said about systems at many levels: ecosystems, societies, organisms. In his opinion, the individual entities and connections between entities in these systems can be modelled in computer simulations. “These individuals”, he adds, “go by the name of agents, and the models are called agent-based models” (Holland 1998, p. 117).

Agents and the environment

System theory treats the system and its environment holistically, but also draws a boundary between them. Luhmann (1984, p. 29) uses the notion of system differentiation to describe the repetition of the difference between system and environment. System differentiation highlights the hierarchical nature of nested systems. One of the most important requirements of system differentiation is the boundary definition – how a system is identified in its environment. Batty and Torrens (2005) point out that the interactions within a system are denser than those between the system and its environment. Recognising this difference in density thus allows one to identify the boundaries.

The ways systems are coupled with the environment and how the interaction with the environment is organised are common subjects of study in system science (Skyttner 1996, p. 3). In the theory of autopoiesis, Maturana and Varela (1980, p. 9) stress that a living system cannot be observed independently from its environment. Structural coupling can create a new system that belongs to a different domain than its subcomponents do; coupled systems retain their identity. System coupling presupposes an entity’s ability to learn how to adapt its motor outputs to its sensory input. In order to do so, the entity needs to possess some kind of internal representation of its environment (Merleau-Ponty 1979, p. 128).

Programmable agents

The history of programmable agents dates back to the 1950s when W. Grey Walter, a neurophysiologist and roboticist, built the first autonomous robots, called Elmer and Elsie (Carranza and Coates 2000). Although conceptually very simple, Machina speculatrix, as Walter dubbed his invention, acted unpredictably (Grey 1951). This illustrated the fact that simple sensory-motor systems, placed within a dynamic environment, can display complex, life-like behaviour. Walter continued his experiments by building Machina docilis – a robot featuring an internal memory element that enabled it to perform a simple learning task (Grey 1951). Machina docilis was presumably the first artificial agent with memory and learning ability.

In the 1960s, following Walter’s work, Valentino Braitenberg continued developing simple reactive machines. The so-called Braitenberg vehicles became famous for displaying complex, uncanny behaviour (Arbib 2003). Later, in the 1980s, Braitenberg published a book with a series of designs for robots that, from the perspective of an observer, behaved as if they had ‘taste’, expressing ‘fear’ and ‘aggression’ (Braitenberg 1986). In contrast to Braitenberg vehicles, Rodney Brooks suggested a different robot modelling approach that he called subsumption architecture (Brooks 1991a). Brooks proposed decomposing a robot’s architecture so that the robot’s internal mechanisms are organised into loosely interacting layers that function in parallel. In the subsumption architecture, the sensory input is mapped quite directly to the motor output, yielding a tight system-environment coupling. This approach was one of the first steps towards a new paradigm – embodied cognitive science (Pfeifer and Scheier 2001).

In parallel to the first robotic agents, the cellular automaton (CA) – another concept that had a great influence on modern agent-based models – was developed. CA was originally introduced by John von Neumann (1951) in the 1940s to explain neural processes in the brain. Von Neumann recognised that, although his automata model was far less complex than natural organisms, studying organisms helped to create better automata models, and artificial automata were useful in order to better understand natural processes. One of the best definitions of CA is given by Adamatzky:

“Cellular automaton is a local regular graph, usually an orthogonal lattice, each node of which takes discrete states and updates its states in a discrete time; all nodes of the graph update their states in parallel using the same state transition rule.” (Adamatzky 2001, p. 11)
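This definition translates almost directly into code. The sketch below (an illustration, not from the sources cited here) implements a binary CA on an orthogonal lattice with a fully synchronous update; the shared transition rule used for illustration is the well-known Game of Life rule, and the test pattern is the standard glider:

```python
from collections import Counter

def step(live):
    """One synchronous update of all cells; `live` is a set of (x, y) cells."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Game of Life rule: born with 3 neighbours, survives with 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after4 = glider
for _ in range(4):
    after4 = step(after4)

# After four parallel updates the glider reappears translated by (1, 1).
print(after4 == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Note that the rule is purely local (each cell sees only its eight neighbours) and identical for every node, yet the glider pattern persists and moves across the lattice as a coherent unit.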

Von Neumann’s theory was tested at the end of the 1960s by John Conway, who created one of the first programmed examples of CA – the well-known Game of Life (Adamatzky 2001, p. 185). With simple transition rules switching cells on and off, the Game of Life can produce surprisingly complex and persistent dynamic patterns. The self-replicating patterns of ‘gliders’ and ‘spaceships’ – agents that can be observed only together with their environment (Adamatzky 2001, p. 185) – come very close to the notion of autopoiesis. As pointed out by Rodrigues and Raper (1999), the concept of CA is very similar to that of agent. On the one hand, cells in automata can be seen as immobile agents in static networks reacting to stimuli in their immediate neighbourhood; on the other hand, the persistent patterns of CA can be treated as self-replicating agencies living in the cellular space. Therefore, one can classify CA as a subtype of agent-based models. In Chapter 5, this idea is explored in more depth.

Although CA models have been exhaustively explored in numerous models and analytical studies, Batty (2005, p. 76) argues that most of the applications to date have been educational. Recently, coupling these models with mobile agents has become very common in dynamic systems studies (e.g. Dijkstra and Timmermans 2002; Parker et al. 2003; Batty 2005). Borrowing from Seymour Papert’s turtle-based Logo language (Johnson 2002), Mitchel Resnick invented a simple yet powerful application combining mobile agents (turtles) and CA – StarLogo (Resnick 1994). StarLogo has spawned a whole generation of scientists testing their ideas by modelling dynamic systems in computer simulations. NetLogo, the successor of StarLogo, has now been used to build models in a variety of disciplines including biological systems (Wilensky 2001), mathematics (Wilensky 1998), social science (Wilensky 2004), and chemistry and physics (Wilensky 2002).

In 1987, Craig Reynolds introduced the first model of flocking agents – boids (Reynolds 1987). Using simple local rules to steer individual boids, Reynolds was able to show that flocking is not guided by a central leader, but emerges from the acts of individual boids. The artificial flock successfully simulated the complex behaviour of natural flocks, with individual boids simply reacting to their immediate neighbours by following three rules: cohesion, separation and alignment. Swarm programming continued to develop rapidly in the 1990s, when several phenomena observed in natural insect colonies were simulated in computer models. Dorigo, Maniezzo and Colorni developed a metaheuristic optimisation method dubbed ant colony optimisation (Gutjahr 2008); an overview of ant colony optimisation is given in Chapter 8. Subsequently, the concept of indirect communication in agent colonies was thoroughly investigated (Bonabeau, Dorigo and Theraulaz 1999; Buhl et al. 2006) and tested in simulation models (Deneuborg, Theraulaz and Beckers 1992; Theraulaz and Bonabeau 1995). Stigmergy – a form of indirect communication – is further explored in section 4.4.
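The three boids rules can be sketched compactly. In the fragment below the weights, neighbourhood radius and starting flock are illustrative assumptions, not Reynolds' original parameters; each boid steers using only its nearby neighbours:

```python
import math

def step(boids, r=5.0, w_coh=0.01, w_sep=0.05, w_ali=0.05):
    """One update of a flock; each boid is a (position, velocity) pair of 2D tuples."""
    new = []
    for i, (p, v) in enumerate(boids):
        nbrs = [(q, u) for j, (q, u) in enumerate(boids)
                if j != i and math.dist(p, q) < r]       # local perception only
        if nbrs:
            cx = sum(q[0] for q, _ in nbrs) / len(nbrs)  # cohesion: steer toward
            cy = sum(q[1] for q, _ in nbrs) / len(nbrs)  #   the neighbours' centre
            sx = sum(p[0] - q[0] for q, _ in nbrs)       # separation: steer away
            sy = sum(p[1] - q[1] for q, _ in nbrs)       #   from crowding
            ax = sum(u[0] for _, u in nbrs) / len(nbrs)  # alignment: match the
            ay = sum(u[1] for _, u in nbrs) / len(nbrs)  #   neighbours' heading
            v = (v[0] + w_coh * (cx - p[0]) + w_sep * sx + w_ali * (ax - v[0]),
                 v[1] + w_coh * (cy - p[1]) + w_sep * sy + w_ali * (ay - v[1]))
        new.append(((p[0] + v[0], p[1] + v[1]), v))
    return new

flock = [((0.0, 0.0), (1.0, 0.0)), ((1.0, 0.0), (0.0, 1.0)),
         ((0.0, 1.0), (1.0, 1.0))]
for _ in range(10):
    flock = step(flock)
print(len(flock))  # 3
```

No boid knows the state of the flock as a whole; the coherent group motion is entirely a product of these three local steering terms, which is precisely the bottom-up logic this thesis borrows for circulation models.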

Benefits and limitations of ABM Although ABM has generated a great deal of attention among scientists in various disciplines, business applications of multi-agent systems are still extremely rare. One of the reasons must be the complexity involved in designing these systems. Bonabeau, Dorigo and Theraulaz (1999, p. 271) argue that one of the greatest challenges in programming multi-agent systems is to make simple agents solve user-defined tasks. Despite being notoriously difficult to set up, distributed computation models are sometimes the best approach, especially in cases where the problem being solved is distributed. Agent-based models can be used in cases where the centralised approach

is impossible – the information involved is gathered across different domains or over a large area, or the amount of data is vast (Huhns and Stephens 1999). In some cases, the speed of distributed computing is substantially greater than that of linear processing (Bonabeau, Dorigo and Theraulaz 1999, p. 26). Multi-agent systems have several advantages compared to traditional top-down and linear techniques. According to Castle and Crooks (2006) there are three incentives to a modeller for using these techniques – agent-based models capture emergent phenomena, provide natural descriptions of systems and are flexible. In the context of geospatial modelling, Castle and Crooks see the flexibility of agent-based models as being applicable in different modelling environments while responding well to different control parameter configurations. The flexibility, on the other hand, can also mean that the multi-agent system responds collectively to external perturbations without agents being explicitly reprogrammed (Bonabeau, Dorigo and Theraulaz 1999, p. 19). The colony can cope with a dynamic environment much better than a single agent. This also leads to greater robustness – the failure of an individual agent does not affect the whole colony. One of the major drawbacks of running multi-agent models in computer simulations is the sheer amount of computation required. Larger colonies usually solve the given problems more accurately but also require more resources as each agent has to perceive and act independently. Agent-based models are also very sensitive to configuration parameters (Brimicombe and Li 2008) and getting these parameters right can be very time-consuming (Castle and Crooks 2006). Although an agent-based model can provide deep insights into complex systems, the model is only as useful as the purpose for which it was constructed in the first place (Castle and Crooks 2006).
Purely speculative models are sometimes created without a particular purpose in mind and fail to contribute to any professional or academic discipline or to the field of ABM generally. To avoid such a case, one has to be clear about the purpose of the model.


4.2 Definitions of agent

As mentioned earlier in this chapter, many authors writing about ABM stress that no official definition of agent exists (Dijkstra and Timmermans 2002; Silva, Seixas and Farias 2005). Wan and Braspenning (1996) argue that an agent cannot be decomposed into simpler notions; it can only be described via synonyms. They add that no remotely mature theory of agency or agenthood exists, and that multi-agent systems lack coherent theoretical foundations too. Despite such criticism, several authors have put forward their own formal definitions. Probably the most generic definition – the agent is one who acts – is given by Wan and Braspenning (1996). Russell and Norvig (1995) describe an agent as something that perceives and acts. A somewhat lengthier definition is given by Wooldridge:

“An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives.”

(Wooldridge 1999, p. 5)

Blumberg and Gakyean (1995) see an autonomous agent as a software system in a complex and dynamic environment, trying to achieve some set goals. Regardless of such a variety of formal definitions, certain characteristics attributed to agents and multi-agent systems keep appearing throughout the literature. These often assigned attributes are autonomy, adaptivity, intelligence, reproduction, self-sufficiency, embodiment, situatedness, encapsulation, reactivity, pro-activeness, social ability, and goal-directedness. The rest of this section gives a review of these different qualities found in the literature.

Autonomy is the characteristic most often attributed to agents. Whereas Pfeifer and Scheier (2001, p. 25-27) see autonomy as freedom from external control, Blumberg and Gakyean (1995) do not see a conflict between autonomy and directivity – agents can still be autonomous while accepting directions that influence their behaviour at multiple levels. Autonomy, an essential property of an agent, relates strongly to adaptivity (Wan and Braspenning 1996). Adaptivity is the agent’s ability to sustain its entity and identity in a dynamic environment; the capacity to survive in

changing conditions. Ross Ashby used the term homeostasis to explain the organism’s ability to maintain its internal states (Pfeifer and Scheier 2001, p. 92). Natural agents preserve their entity by means of autopoiesis; non-natural agents have to mimic that to achieve adaptivity (Wan and Braspenning 1996). Among artificial agents, probably the closest to the essence of autopoiesis are the models with self-regenerating cellular agencies (Butler et al. 2001; Støy 2004). Intelligence is another quality frequently ascribed to agents and agencies (e. g. Izquierdo-Torres 2004; Nembrini et al. 2005). Pfeifer and Scheier insist that no generally accepted definition of intelligence exists (Pfeifer and Scheier 2001), and, in many cases, intelligence is attributed to agents by the observer. Pfeifer and Scheier add that intelligence must be seen with respect to the agents’ habitat. Skyttner claims that intelligence is a property of living systems that cannot be attributed to artificial agents (Skyttner 1996, p. 185). Brooks, on the other hand, sees intelligence as an emergent property in certain complex systems (Brooks 1991b). Self-sufficiency is the agent’s ability to sustain itself over extended periods of time by maintaining its energy supply; an essential property of complete agents (Pfeifer and Scheier 2001, p. 86-88). Most artificial agents are dependent on their creators; all agents are dependent on their environment. Agents also need to be embodied in order to interact with their environment. The body defines the agent’s sensory configuration and potential means of interaction. According to Brooks, only embodied systems can build their Merkwelt (internal perceptual world) via physical grounding (Brooks 1991b). A characteristic related to embodiment is situatedness. Wooldridge (1999) sees agents as computer systems situated in some environment. In systems theory, a subject can only be studied together with its environment (Skyttner 1996, p. 3). 
Embodied agents cannot exist without the environment; they are always embedded in their surroundings. Agents often feature encapsulation – the concept of hiding internal methods and properties behind an interface is borrowed from object-oriented programming and closely related to embodiment. Objects are defined as entities that encapsulate and process data and communicate through message passing (Wooldridge 1999). Wooldridge adds that, in contrast to traditional object-oriented programming where objects are manipulated from outside, in agent-oriented programming decisions lie with the agent. Agents cannot be told what to do because they are autonomous.

Some agents are pro-active – they are directed to certain goals. According to Minsky, a system’s goal-driven behaviour is produced by the Difference-Engine; agents are pushed into action by various differences between the desired situation and the actual situation (Minsky 1988, p. 78). Wooldridge explains that agents are “able to exhibit goal-directed behaviour by taking the initiative in order to satisfy their design objectives” (Wooldridge 1999, p. 8). In agent-based models, there are two types of goals: the local – an agent’s own selfish goal, and the global – a system designer’s goal (Shoham and Leyton-Brown 2009, p. 1). As opposed to assigning desires and goals to agents, Brooks claims that “intelligent behaviour can be generated without having explicit reasoning systems present” (Brooks 1991b, p. 23). Perceptive agents can also be supplied with a priori knowledge about the environment. These agents usually possess a representation of the world – a ‘mental map’ of their environment (Castle and Crooks 2006). According to Brooks (1991a), the failure of classical Artificial Intelligence (AI) research occurred because people tried to build exhaustive internal representations in order to create intelligent systems. As he points out, the human ‘Merkwelt’ – our internal representation system – is not necessarily a suitable representation for artificial agents. Instead, Brooks favoured the subsumption architecture where he used dynamically generated internal representations (Brooks 1991b). The Situated Action paradigm generally follows Brooks, but it also benefits from some symbolic representations (Wan and Braspenning 1996). As opposed to pro-activity, reactive agents act upon stimuli received from the dynamic environment. Many agent modellers combine reactive behaviour with pro-active behaviour (Brooks 1991b; Wooldridge 1999). 
Reactive agents can solve tasks by responding to changes in their surroundings, being naturally opportunistic when promising circumstances present themselves (Brooks 1991b). Intelligent agents can have social abilities – they interact with other agents (and possibly humans) in order to satisfy their design objectives (Wooldridge 1999). Agents can acquire information about the world by querying other agents and the environment in the immediate neighbourhood, searching for specific attributes (Castle and Crooks 2006). In addition to direct communication, agents can also communicate via their environment.
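The contrast between reactive and goal-directed (pro-active) behaviour described above can be illustrated with a minimal sketch. Both classes and all stimulus and action names below are hypothetical, invented purely for illustration; the goal-directed variant loosely echoes Minsky’s Difference-Engine idea of acting on the gap between the desired and the actual situation.

```python
class ReactiveAgent:
    """Acts on the current stimulus only; keeps no world model."""
    def __init__(self, rules):
        self.rules = rules                 # stimulus -> action table

    def act(self, percept):
        # Unmapped stimuli fall through to a default behaviour.
        return self.rules.get(percept, "wander")

class GoalDirectedAgent:
    """Difference-driven: acts on the gap between goal and actual."""
    def __init__(self, goal):
        self.goal = goal

    def act(self, perceived_value):
        if perceived_value < self.goal:
            return "increase"
        if perceived_value > self.goal:
            return "decrease"
        return "rest"

reactive = ReactiveAgent({"obstacle": "turn", "light": "approach"})
print(reactive.act("obstacle"))            # prints "turn"
print(reactive.act("silence"))             # prints "wander"
print(GoalDirectedAgent(goal=20).act(18))  # prints "increase"
```

The reactive agent has no memory and no goal; the goal-directed one acts only when the perceived state deviates from its target.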

Mobility is an important property of agents. Castle and Crooks claim that mobility is a particularly useful feature of agents in spatial simulations (Castle and Crooks 2006). However, the ability to move is not critical in order to define agents – it is plausible to see cells in CA models as immobile agents (Rodrigues and Raper 1999).

4.3 Taxonomy of ABM

Types of agents In the previous section, some properties commonly assigned to agents were investigated. It is quite natural to categorise agents by these properties, using terms like ‘reactive agents’, ‘intelligent agents’, ‘mobile agents’, ‘embodied agents’, etc. Other, more generic categories have been suggested to distinguish between different types of agents. Again, there is no globally accepted taxonomy of agents used by all the players in the field. The variety of agents used in experiments is truly astonishing; the majority of authors, however, use ambiguous taxonomies. Complete agents form a class of agents that are autonomous, self-sufficient, embodied, and situated (Pfeifer and Scheier 2001). Inspired by natural agents, animals and humans, Pfeifer and Scheier describe complete agents as entities that are capable of surviving in the real world. In order to sustain themselves, complete agents have to maintain their energy supply, and behave autonomously in an environment without human intervention. All biological agents are complete agents and some artificial agents can be complete too. As Pfeifer and Scheier argue (2001, p. 185), real-world robotic agents can be constructed in a way that fulfils the criteria for completeness. Although many robots meet several of the mentioned requirements, none of the robots they present is truly complete. They argue that artificial agents are built to achieve a particular task, study general principles of intelligent behaviour, or model certain aspects of natural systems. Besides real-world robots, another subclass of artificial agents – simulated agents – lives in computer models. As it is theoretically possible to simulate any physical process on a computer, any physical robot can be simulated (Pfeifer and Scheier 2001). However, as the authors add, a physically realistic simulation is extremely difficult to develop and requires enormous computational power.


Since autonomous agents are always situated, they can be distinguished by their environment. Several authors have used the term ‘spatial agents’ (Rodrigues and Raper 1999) or ‘space-agents’ (Adamatzky 2001) in order to distinguish them from non-spatial ones. Rodrigues and Raper define a spatial agent as “an autonomous agent that can reason over representations of space” (Rodrigues and Raper 1999, p. 4); spatial agents make spatial concepts computable. Adamatzky, on the other hand, uses the term space-agents to distinguish them from graph-agents. In the vein of classical AI, agents with complex symbolic reasoning or a central symbolic model are still quite common in several disciplines. Russell and Norvig call this type generically knowledge based agents (Russell and Norvig 1995, p. 194). In this category, Wooldridge (1999) describes three architectures: logical agents – decisions are made via logical deduction, belief-desire-intention agents – decisions are based on a model of human practical reasoning, and layered architectures – the decision making is realised through layered software. Another type – the cognitive or deliberative agent – contains a representation of the world, has memory and operates via symbolic reasoning (Rodrigues and Raper 1999). Designs of deliberative agents often have severe problems of symbol-grounding and frame of reference (Pfeifer and Scheier 2001). Some authors also mention interface agents – a metaphor used to describe software based assistants in computer applications. According to Rodrigues and Raper (1999), interface agents are semi-intelligent and semi-autonomous programs, and are of no interest in this thesis.

Types of models Regardless of the type of agent used, agent-based models can be classified by other generic principles. Gilbert (2004) offers a duality-based taxonomy distinguishing abstract models from descriptive ones, artificial from realistic, positive from normative, and spatial from network models:

• Abstract models do not mimic any real-world process, but produce more general knowledge by exploring concepts. While descriptive models are concerned with modelling something that already exists in order to understand it, the findings of abstract models are not directly applicable to any existing process.

• Realistic multi-agent models are inspired by real societies, and give insights into how these societies work. Artificial models use completely made-up agents to achieve a certain engineering task.

• Normative models are often concerned with making suggestions about what policies should be followed. Positive models, on the other hand, are descriptive and analytical about the phenomena studied, helping to understand rather than to advise.

• Spatial models deploy a representation of some spatial environment, often a 2D lattice of cells, a map, or a 3D geometrical model. Agents are usually capable of moving freely around in such models. In a network model, on the other hand, the geometry of the environment is irrelevant – the relationships between agents and network nodes are more important. Spatiality of the model is a particularly relevant notion in the context of architectural and urban design.

Gilbert (2004) gives this overview from the social science perspective. Castle and Crooks (2006) offer yet another classification that distinguishes agent-based models by their purpose: predictive models are constructed for evaluating scenarios and foreseeing future states; explanatory models strive to explore theoretical aspects and create hypotheses.

4.4 Properties and behaviour of agent-based models

Emergence Emergence often lies at the heart of the bottom-up modelling approach of generative design. In order to generate circulation diagrams one could greatly benefit from observing and analysing the emergent behaviour of biological agent colonies. Even deeper insight can be obtained by programming agent colonies following the principles found in nature. In multi-agent models, the boundaries between an individual and the colony are blurred. It is often hard to tell the difference between the individual and the group behaviour. Emergent behaviour can arise from the agent-environment interaction – Braitenberg (1986) has shown that a simple sensory-motor system can display

unpredictable behaviour. Emergent phenomena can also happen in populations of repeatedly interacting agents (Sen et al. 2007). Both kinds of interactivity – agent-environment and agent-agent – can produce unpredictable results and insights into how behavioural patterns can emerge from simple rules designed at the level of individuals. Emergence is a concept widely used to describe processes or patterns that are unplanned and surprising; it is a property of the system that is not contained in its parts. Williams (2008) explains emergence simply as a process of sudden and unexpected appearance. Goldstein (2005) elaborates the idea by stating that emergence refers to the arising of novel patterns and structures. He also stresses that emergence happens at the macro level in complex systems, as opposed to the micro-level processes from which it arises. In contrast to the traditional view of emergence as the result of self-organising processes, Goldstein claims that it is often constructed – created in heavily controlled conditions in laboratories. Indeed, most of the computer simulations displaying emergent properties have been carefully set up by the system’s designers and programmers. Nevertheless, the value of these simulations is hard to overestimate. According to Holland (1998, p. 12), computer models can provide access to understanding emergence as they can be started, stopped and observed at a desired pace – something that is impossible with natural dynamic systems. Well-known examples of emergent behaviour of biological agents are flocking and nest building in insect colonies. Both of these phenomena have been simulated, to a certain level of abstraction, on computers with ABM. Reynolds’ model of flocking boids had an important role to play in changing the understanding of how birds coordinate their behaviour at colony level (Reynolds 1987). 
Theraulaz and Bonabeau, on the other hand, built a formal model to explore the nest-building behaviour of wasps (Theraulaz and Bonabeau 1995). Using simple building rules, their artificial insects were capable of creating structures of astonishing complexity. According to Gilbert and Terna (2000), there are two distinct types of emergence: unforeseen and unpredictable emergence. Unforeseen emergence occurs at an equilibrium state when some sort of cyclical behaviour appears. Unpredictable emergence is a chaotic behaviour of the system that is observable but much harder to reverse engineer.
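Reynolds’ three steering rules are concrete enough to sketch. The following simplified 2D version is illustrative only – the weights, the neighbourhood handling, and the crude Manhattan-distance test used for separation are arbitrary assumptions, not Reynolds’ own formulation.

```python
def steer(boid, neighbours, w_coh=0.01, w_sep=0.05, w_ali=0.1):
    """Return a velocity adjustment (dvx, dvy) for one boid.
    boid = (x, y, vx, vy); neighbours = list of such tuples."""
    if not neighbours:
        return (0.0, 0.0)
    x, y, vx, vy = boid
    n = len(neighbours)
    # Cohesion: steer towards the neighbours' centre of mass.
    cx = sum(b[0] for b in neighbours) / n - x
    cy = sum(b[1] for b in neighbours) / n - y
    # Separation: steer away from neighbours that are too close
    # (crude Manhattan-distance threshold, purely illustrative).
    sx = sum(x - b[0] for b in neighbours
             if abs(x - b[0]) + abs(y - b[1]) < 2)
    sy = sum(y - b[1] for b in neighbours
             if abs(x - b[0]) + abs(y - b[1]) < 2)
    # Alignment: match the neighbours' average velocity.
    ax = sum(b[2] for b in neighbours) / n - vx
    ay = sum(b[3] for b in neighbours) / n - vy
    return (w_coh * cx + w_sep * sx + w_ali * ax,
            w_coh * cy + w_sep * sy + w_ali * ay)

# A lone, stationary boid below a small flock is pulled towards it
# (cohesion) while starting to match the flock's heading (alignment).
flock = [(0, 10, 1, 0), (2, 10, 1, 0)]
dvx, dvy = steer((1, 0, 0, 0), flock)
assert dvx > 0 and dvy > 0
```

Each boid evaluates only its local neighbours; no rule references the flock as a whole, which is exactly why the coordinated motion counts as emergent.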

Learning The ability to learn is a crucial component of ABM. Without this ability, it is very difficult to use agent colonies to optimise the generated circulation networks. While emergence in agent-based models refers to the arising of novel patterns and structures, learning is often associated with changes in an agent’s behaviour to maintain its desired state. According to the environment-modification principle in systems theory, agents have to choose between two main strategies (Skyttner 1996, p. 73). One option is to change the environment to suit one’s needs; the other is to adapt to it – to learn to live in new conditions. Maturana and Varela (1980, p. 35) define learning in living systems as the process of modifying one’s conduct in order to maintain one’s basic circularity. It goes without saying that the capacity to learn is a key property of intelligent behaviour (Shoham and Leyton-Brown 2009). Brooks lists four things that an intelligent agent can learn: representations of the world, calibration of sensors and actuators (motors), how individual behaviours can interact, and new behavioural modules (Brooks 1991b). Learning can happen at two levels: at the level of individuals, where an agent changes its behaviour during its existence, and at the colony level, where the whole group of agents modifies its course of action in a certain way. A classic example of the colony’s learning process is the trail formation of foraging ants where the colony constantly searches for shorter trails to food. Populations of agents can also undergo phylogenetic development – they adapt to their ecological niche over generations by means of evolution. In addition to behavioural and evolutionary learning, Pfeifer and Scheier (2001, p. 485) describe two further levels where learning can take place. Rapid changes in the environment can cause physiological or sensory adaptation. 
Sweating, for example, is a physiological response to rising temperature; contraction of the pupils classifies as a sensory adaptation. Learning mechanisms are of major concern in modelling intelligent behaviour. Evolutionary algorithms have been heavily deployed in designing artificial agents (e. g. Sims 1994; Stanley, Bryant and Miikkulainen 2005). The individual-level learning of artificial agents draws upon many well-known machine learning algorithms of which the most exploited in ABM is reinforcement learning (Vidal 2007, p. 70). Reinforcement learning occurs when agents learn to map sensory inputs to motor outputs by trial and error, receiving rewards if certain states have been achieved. The

environment is treated as a black box and agents do not need any previous knowledge or symbolic representation of the world (Wan and Braspenning 1996). Vidal (2003) points out that learning in multi-agent communities can be quite challenging as the target for an individual agent keeps changing and the agents cannot learn from examples. He argues that multi-agent systems where agents share information or otherwise help each other can be seen as extensions to traditional machine learning algorithms. Learning in such systems can happen collaboratively when agents collectively create and share the global knowledge, or in competition when each agent wants to be the best (Vidal 2007, p. 63). In multi-agent systems, it is difficult to separate the phenomenon of learning from that of teaching – all agents involved in the process usually gain some benefits (Shoham and Leyton-Brown 2009).
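The trial-and-error idea behind reinforcement learning can be illustrated with a toy sketch. The two-action ‘bandit’ environment and all parameter values below are invented purely for illustration: the agent keeps a value estimate per action, occasionally explores, and nudges its estimates towards the rewards it receives, treating the environment as a black box.

```python
import random

def learn(rewards, steps=2000, alpha=0.1, epsilon=0.1, seed=42):
    """Trial-and-error value learning on a two-action environment.
    rewards: dict mapping action -> mean reward of that action."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in rewards}           # value estimates
    for _ in range(steps):
        if rng.random() < epsilon:           # explore occasionally
            a = rng.choice(list(rewards))
        else:                                # otherwise exploit
            a = max(q, key=q.get)
        r = rewards[a] + rng.gauss(0, 0.1)   # noisy reward signal
        q[a] += alpha * (r - q[a])           # shift estimate towards r
    return q

q = learn({"left": 0.2, "right": 0.8})
assert q["right"] > q["left"]   # the better action is discovered
```

The agent needs no model or symbolic representation of the environment; the reward signal alone shapes its mapping from situations to actions.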

Communication in multi-agent systems; stigmergy Communication is an essential part of any multi-agent system – it enables agents to coordinate their behaviour in ways that benefit the whole colony. Communicating with others helps an individual agent to understand and be understood, but also facilitates the colony achieving its goals (Huhns and Stephens 1999). There are several communication strategies and methods that can be used with multi-agent systems. This section scrutinises some of these, focusing on indirect communication through the environment. Communication in multi-agent systems is a broad faculty dealing with a range of issues from message protocols to general coordination strategies. Huhns and Stephens lay out the communication infrastructure specifying interaction methods and mechanisms: 1) communication can happen via shared memory, with agents having access to a communal database, or be message-based, with agents communicating with one another; 2) communication can be connected or connectionless; 3) messages can be exchanged from a single sender to a single receiver (point-to-point) or from a single agent to many others (multicast or broadcast); 4) messages can be pushed or pulled; or 5) communication can be synchronous or asynchronous. Doran et al. (1997) point out that coordination in colonies can happen in a non-communicative manner, with agents simply observing the other agents’ behaviour. However, it is reasonable to argue that these agents still communicate, even though no messages are deliberately sent.
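Two of the mechanisms listed above – point-to-point versus broadcast delivery, and asynchronous (pulled) message exchange – can be illustrated with a minimal sketch. The `MessageBus` class and the agent names are hypothetical, invented for illustration only.

```python
from collections import defaultdict, deque

class MessageBus:
    """Asynchronous, message-based communication via mailboxes."""
    def __init__(self):
        self.mailboxes = defaultdict(deque)
        self.agents = set()

    def register(self, agent_id):
        self.agents.add(agent_id)

    def send(self, sender, receiver, content):
        """Point-to-point: one sender, one receiver."""
        self.mailboxes[receiver].append((sender, content))

    def broadcast(self, sender, content):
        """Broadcast: one sender, every other registered agent."""
        for agent_id in self.agents - {sender}:
            self.mailboxes[agent_id].append((sender, content))

    def pull(self, agent_id):
        """Asynchronous receipt: read whenever the agent is ready."""
        box = self.mailboxes[agent_id]
        return box.popleft() if box else None

bus = MessageBus()
for a in ("ant-1", "ant-2", "ant-3"):
    bus.register(a)
bus.send("ant-1", "ant-2", "food at (3, 4)")
bus.broadcast("ant-3", "danger")
print(bus.pull("ant-2"))   # ('ant-1', 'food at (3, 4)')
print(bus.pull("ant-2"))   # ('ant-3', 'danger')
print(bus.pull("ant-1"))   # ('ant-3', 'danger')
```

Because messages wait in mailboxes until pulled, sender and receiver never need to be active at the same time – the same decoupling that indirect, stigmergic communication takes to its extreme.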

Messages between agents can be sent and received either directly or indirectly (Keil and Goldin 2006). Direct messaging involves at least two parties – a sender and a receiver – communicating simultaneously. Indirect communication is mediated through the environment and does not require the simultaneous presence of both parties; one party can simply leave a message and the other one can pick it up later. This type of communication is also known as stigmergy. Compared to direct communication, stigmergy is a lightweight solution (Hadeli et al. 2004). A designer of distributed artificial systems using stigmergy can replace direct communication with indirect, reducing the complexity of an individual agent (Bonabeau, Dorigo and Theraulaz 1999, p. 16). There is a particular reason why this thesis is so interested in stigmergy – stigmergy can be seen as an environmental modification principle. From the point of view of architectural design, leaving a message leads to environmental changes, and stigmergy can, therefore, be seen as an environmental design strategy. The stigmergic modification principles can be used to design the environment following a set of modification rules embedded into the agent’s sensory-motor loop. Certain cues in the ambient environment can then trigger a particular building action that, in turn, leads to a new environmental configuration. Stigmergy was first described by Pierre-Paul Grassé in 1959, observing the nest construction process of termites (Holland and Melhuish 1999). Stigmergy is a class of mechanisms that, according to Theraulaz and Bonabeau, mediate animal-animal interaction (Hadeli et al. 2004). Stigmergy has attracted a lot of attention among scientists studying social insects and their nests (e. g. Deneuborg, Theraulaz and Beckers 1992; Turner 2000). Social insects communicate in many different ways; they use tappings, strokings, graspings, antennations, tastings etc., but most of the signals are based on chemicals (Wilson 1980, p. 
192-193) that, when dropped in the environment, provide information to other members of the colony. These chemicals are known as pheromones. Bonabeau, Dorigo and Theraulaz (1999) consider the activities of a social insect colony as a process of self-organisation at the heart of which lies stigmergy; they describe the nest-building process of social insects as a process of self-assembly. Stigmergy, in their opinion, facilitates self-organisation in the colony. In stigmergic self-organisation, spatiotemporal structures arise mainly from the agents’

action rather than from the environmental physics. That does not mean that physics is not involved, but it has a secondary role to play (Holland and Melhuish 1999). Bonabeau, Dorigo and Theraulaz (1999, p. 205-208) distinguish between two different stigmergic mechanisms: qualitative or discrete stigmergy, and quantitative or continuous stigmergy. In the first case, different spatial arrangements can trigger different behaviour of agents; in the second case, the spatial stimuli influence the action of agents in a quantitative way. Qualitative stigmergy affects the agent’s choice of action; quantitative stigmergy affects the frequency, strength, length, or other quantitative properties of the agent’s action. Whereas continuous stigmergy usually amplifies the subsequent behaviour at a location, there is no positive feedback mechanism in discrete stigmergy – the stimulus is transformed into another, qualitatively different, stimulus. Both kinds of stimuli can be attractive or repulsive, activating or inhibiting, and depend on the local context (Bonabeau, Dorigo and Theraulaz 1999, p. 206). Holland and Melhuish (1999) also make the distinction between a passive and an active form of stigmergy. Passive stigmergy takes place when a previous action at a location does not influence the subsequent action, only its outcome. In the case of active stigmergy, both quantitative and qualitative effects can occur (Holland and Melhuish 1999). Although stigmergy is mostly associated with insect colonies, Parunak (2006) points out that humans also use environmentally mediated signals to communicate. Holland and Melhuish (1999) suggest that the best way of studying stigmergy is simulating it in a computer program. In a simulation, an agent has two key abilities: to move through the environment or to change it. The change can be done by adding or subtracting material, or by changing the qualitative properties of the material (Holland and Melhuish 1999). 
The building algorithms used in stigmergic simulations are formulated as a series of if-then decision rules (Bonabeau, Dorigo and Theraulaz 1999, p. 209). Many insect colonies have individuals specialised for different tasks. The partitioning of tasks is a phenomenon of stigmergy that is needed to avoid task switching, and saves energy and time (Ramos, Fernandes and Rosa 2007). However, if agents are to respond to different stimuli and perform different activities triggered by the same stimuli, the task of programming an artificial colony becomes very difficult (Bonabeau, Dorigo and Theraulaz 1999, p. 205-251).
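The quantitative (continuous) form of stigmergy described above can be illustrated with a minimal sketch: agents deposit pheromone where they act, deposits evaporate over time, and later agents prefer the most strongly marked location, which creates the positive feedback loop of continuous stigmergy. All function names and parameter values are illustrative assumptions, not drawn from any cited model.

```python
def deposit(field, cell, amount=1.0):
    """An agent marks a cell by adding pheromone to it."""
    field[cell] = field.get(cell, 0.0) + amount

def evaporate(field, rate=0.1):
    """Deposits decay over time, so unused trails fade away."""
    for cell in list(field):
        field[cell] *= (1.0 - rate)

def choose(field, cells):
    """Quantitative stigmergy: prefer the strongest-marked cell."""
    return max(cells, key=lambda c: field.get(c, 0.0))

field = {}
# Two agents repeatedly reinforce cell (1, 1); (5, 5) gets one visit.
for _ in range(3):
    deposit(field, (1, 1))
    deposit(field, (1, 1))
    evaporate(field)
deposit(field, (5, 5))

# A later agent follows the accumulated trail rather than the
# one-off mark: the heavily reinforced cell wins.
assert choose(field, [(1, 1), (5, 5), (9, 9)]) == (1, 1)
```

Note that the agents never exchange messages directly; the environment itself carries all the coordinating information, and evaporation keeps stale information from dominating.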

According to Bonabeau, Dorigo and Theraulaz (1999), the benefits of using stigmergy-based multi-agent systems are:

• Incremental improvement of a solution – stigmergy facilitates a step-by-step approach to solving optimisation problems.

• Flexibility – an agent colony can cope with external perturbations since it can handle different spatial stimuli.

• Robustness – the success of the whole colony is not dependent on the poor performance of an individual agent.

• Increased speed – a colony can find solutions much quicker than individuals working separately.

• Qualitative leap – theories of self-organisation claim that the behaviour of an interacting population is qualitatively different from that of a single agent.

• Top-down control is not appropriate when dealing with a large number of agents.

• Systems based on stigmergic communication also naturally tend towards optimised solutions (Valckenaers et al. 2001).

Drawbacks of stigmergic agent-based models include the lack of global knowledge – agents can get stuck in a local solution; and the difficulty of programming, since both the state of the system and the environment constitute the solution to the problem (Bonabeau, Dorigo and Theraulaz 1999, p. 20).

Coordination in multi-agent systems Competition and cooperation are two coordination mechanisms in ABM that control and encourage development in multi-agent systems. Competition happens naturally between agents sharing the same ecological niche. In most cases agents compete for resources or opportunities to reproduce. Cooperation, on the other hand, is often presented as a concept that distinguishes multi-agent systems from object-oriented systems and expert systems (Doran et al. 1997). Grand (2001) argues that two systems affecting one another always display the results of cooperation and competition.


Huhns and Stephens (1999) argue that coordination is possible in multi-agent systems because of communication, and that it leads to more coherent systems. They argue that coordination is a property of a system of agents in a shared environment, and actually see collaboration and competition as parts of it. From the system designer’s point of view, collaboration can be seen as a coordination strategy between social agents, whereas competition can trigger coordination between self-interested agents (Huhns and Stephens 1999). Franklin proposes a typology of cooperation (see Figure 4.1) where multi-agent systems are divided into two main categories: independent and cooperative systems (Doran et al. 1997). As independent agents have their own individual goals, they collaborate accidentally without being aware of it – independent agents cooperate only from the observer’s viewpoint. Cooperative agents can be either communicative or not. Non-communicative agents collaborate by observing one another’s actions without explicitly sending messages. Communicative agents exchange messages directly or through the environment. Deliberative agents plan actions together; negotiating agents do the same while also competing with one another.

Figure 4.1: Cooperation typology in multi-agent systems. Source: Franklin (Doran et al. 1997)

Elaborate agent architectures (e.g. deliberative agents) are explicitly designed for collaboration, but simpler agents can collaborate effectively too. Doran et al. (1997) point out that, although reactive agents do not have the capacity for prediction or intention, collaboration can be emergent. Cooperation happens naturally because all agents benefit from the overall well-being of the colony. In the case of simple reactive agents, cooperation often arises out of competition, with individuals forming syndicates for a better existence (Grand 2001). Portugali (1999) calls this self-organising principle 'the cooperation principle'.

4.5 Applications of multi-agent systems

Overview

Multi-agent systems are particularly good for distributed problem solving in dynamic environments (Huhns and Stephens 1999). The ability to deal with distributed data and cope with dynamic inputs has made ABM popular in many disciplines, and it is why these systems are potentially very useful for architectural tasks as well. In recent years, ABM has been utilised in a number of different ways, ranging from social experiments to industrial design and engineering applications. All these applications fall roughly into two main categories: models that emulate natural systems in the hope of better understanding them, and models that build artificial systems in order to tackle complex problems and generate appropriate solutions. Whereas designing models of the first category demands biological plausibility and rigorous observations, the designer of an artificial system for problem solving only needs to grasp the broad concept of the underlying mechanisms (Bonabeau, Dorigo and Theraulaz 1999, p. 8).

Bonabeau (2001) lists four areas of application in a business context where ABM can be used: flow, traffic and evacuation management; stock market simulation; operational risk and organisational design; and diffusion of innovation and adoption dynamics. Flow and traffic management – and particularly pedestrian modelling – is the application most often practiced in relation to architectural and urban design; an overview of that field is given later in this section. Traffic planning is a classic example of ABM since the behaviour of individual vehicles can be easily mimicked with autonomous agents. ABM is also suitable for stock market simulation since the dynamics of a market result from the behaviour and interaction of many individual agents. The same applies to operational risk management, where agent-based models are used for producing valuable qualitative or semi-quantitative insights into an organisation's design. Diffusion models have proven useful in understanding individual decision-making in a community.

Popular applications

Considering the number of experiments, a few disciplines where multi-agent systems are now extensively exploited clearly stand out. Most agent-based models in sociology, economics, geography, and biology study behaviour and control in complex systems at different levels of abstraction. The first agent-based model in sociology was developed by Schelling in 1978 to study housing segregation (Macal and North 2006). Since then, a plethora of social simulations have been used to explore patterns in politics, attitude change in societies, social segregation, the formation of settlements, anti-social behaviour, the validation of policies, etc. (Gilbert 2004). In biology, multi-agent models have been developed to study the transmission of viruses, the growth of bacterial colonies, and interaction and behaviour at the multi-cellular level (Castle and Crooks 2006). A popular application in geography is clustering spatial data from databases – a task where traditional methods would fail because the databases are so vast (Macgill 2000). ABM is gaining popularity in economic studies as a means to understand market dynamics (Heppenstall, Evans and Birkin 2007). Traffic control (De Schutter et al. 2003) and network routing (Sim and Sun 2002) applications benefit from the individual-oriented approach of ABM. Besides the above-mentioned fields, ABM has been used to a lesser extent in archaeology, agricultural economics, urban simulation (Parker et al. 2003), and even in studying the origin of languages (Steels 1997).

Game theory, defined as the mathematical study of interaction among independent, self-interested agents (Shoham and Leyton-Brown 2009), is another field where multi-agent applications have proven successful. Most ABM solutions in game theory are geared towards understanding the behaviour of well-informed people. Alternatively, some models are developed to create automatic opponents for game players (Vidal 2003).
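As an illustration of how little machinery such social simulations need, a minimal Schelling-style segregation sketch follows. The lattice size, tolerance threshold, random relocation rule and wrap-around neighbourhood are illustrative assumptions, not Schelling's original formulation.

```python
import random
from collections import Counter

def neighbours(grid, x, y, size):
    """Non-empty Moore neighbours of cell (x, y) on a wrapping lattice."""
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0):
                kind = grid[((x + dx) % size, (y + dy) % size)]
                if kind is not None:
                    found.append(kind)
    return found

def schelling_step(grid, size, threshold, rng):
    """Relocate every agent whose share of like neighbours falls below threshold."""
    empty = [cell for cell, kind in grid.items() if kind is None]
    for (x, y), kind in list(grid.items()):
        if kind is None or not empty:
            continue
        ns = neighbours(grid, x, y, size)
        if ns and sum(n == kind for n in ns) / len(ns) < threshold:
            target = rng.choice(empty)      # move to a random empty cell
            grid[target], grid[(x, y)] = kind, None
            empty.remove(target)
            empty.append((x, y))

rng = random.Random(42)
SIZE = 10
grid = {(x, y): rng.choice(['A', 'B', None])
        for x in range(SIZE) for y in range(SIZE)}
initial = Counter(v for v in grid.values() if v is not None)
for _ in range(20):
    schelling_step(grid, SIZE, 0.5, rng)
final = Counter(v for v in grid.values() if v is not None)
```

Even with these crude rules, clusters of like agents tend to form over repeated steps, which is the emergent effect Schelling's model is known for.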
A classic example of ABM in game theory is given by Sen et al. (2007), who describe an agent model to solve the Prisoner's Dilemma – a well-known social dilemma with two players. Agent-based models have been used in both non-cooperative and coalitional game theory. Whereas in the former the basic modelling unit is the individual, in the latter it is the group (Shoham and Leyton-Brown 2009).

Agent-based models have also been extensively used in some relatively new and emerging fields that cross the borders of traditional scientific domains. It is possible that the ABM paradigm is partially responsible for the emergence and development of such new fields. The simulations in one such overlapping area of two disciplines – sociology and geography – can be collectively termed geospatial simulations (Castle and Crooks 2006). Geospatial simulations, often operating on geographical information system (GIS) databases, benefit from the mobility of individual agents and the multi-agent system's ability to deal with large distributed datasets. The general goal of geospatial simulations is to understand the emergence of patterns, trends, and other characteristics in societies. According to Castle and Crooks (2006), ABM has been used in geospatial simulations to reconstruct the settlements of ancient civilisations, study the dynamics of civil violence, explain the spatial patterns of unemployment, evaluate the recreational use of land, and study the coordination of social networks within three-dimensional landscape terrains. Although ABM has been a part of the geospatial sciences for more than 10 years (Ligmann-Zielinska and Jankowski 2007), the rise of GIS has generated vast data sets that provide new challenges for agent modellers, and novel technology in geographic information science is suggesting new modelling techniques.

Architectural design related applications

Thanks to its growing popularity in many disciplines, ABM is now taking its first cautious steps in architecture and design related realms. The reasons for using multi-agent systems in design disciplines should be obvious. Raisbeck (2007), recognising the great potential of ABM in urban design, points out that, with a new mimetic and functional range of locomotion, structural optimisation, pattern formation, and learning capacity, agent-based software enables cities to be modelled from the bottom up. However, he acknowledges that the software technologies developed to date are exploratory, and it may take some time before they become exploitative technologies. Raisbeck sees traces of ABM already in the work of Team X architects in the 1960s – a bold statement that the author fails to back up with any examples. Another evangelist of ABM and swarming in architecture is Neil Leach who, similarly to Raisbeck, also stays at the visionary level, stating the potential of using such systems in architecture (Leach 2004).

The most successful applications of agent-based models in architecture have been developed for pedestrian movement and evacuation studies. Individuals in these models are represented as agents that follow simple rules; complex human motivations and reasoning tend to be left out. Pedestrian modelling with agents is mostly used in analysing urban settings and large shopping mall environments. A more thorough overview of the field is given in the next section of this chapter.

Although the majority of ABMs – pedestrian movement, evacuation, and crowding models – tend to be analytical, there is a growing trend of using agents in the generative design process. Reffat (2003) proposes a model where agents generate new design concepts by exploring two-dimensional sketches. He justifies the use of ABM by claiming that creativity in architectural design can be seen as the emergence of new forms and of relationships between these forms. Gomes et al. (1998) describe a design process supported by distributed agents where 3D solid objects are represented as reactive agents responding to the designer's actions in a CAD environment. Gu and Maher (2003) propose a generative model combining shape grammar with ABM to construct virtual environments. More artistic and abstract models have been proposed by Nembrini et al. (2005), Jacob and Mammen (2007), Mammen, Jacob and Kokai (2005), Maciel (2008), and Carranza and Coates (2000). Most of these models are concerned with form-finding; the latter, for example, generates novel 3D shapes by wrapping a continuous surface around the trails left behind by swarming agents. From the professional perspective the majority of these models are immature and commonly ignored by the community of architectural practitioners. Slightly more tangible are some agent-based models for industrial and product design applications: multi-agent systems have been used to develop electro-mechanical devices (Campbell, Cagan and Kotovsky 1998), design chairs (Crecu and Brown 2000) and ships (Jin and Zhao 2007).
Despite the potential of generating architectural circulation systems with mobile agents, there are only a handful of models that use ABM for this purpose. Batty (2003) shows how diffusion-limited aggregation (DLA) – a process of dendritic growth – can theoretically be used in urban policy analysis and to simulate the growth of a city. DLA is based on Brownian movement, where random-walk particles aggregate into tree-like networks. A similar process is used by Nickerson (2008), who asserts that the model can be used to create a set of designs for urban infrastructure. Derix (2008) proposes a generative approach to model circulation networks using an ant colony that travels between entry and exit points in a building footprint. Discussing people's movement in airports, Williams (2008) suggests that it is feasible to generate circulation in a bottom-up manner using very simple agents – particles. He further elaborates this idea by adding that these particles can be given agendas such as sitting, waiting or browsing in a bookshop so that appropriate space can evolve for these functions, as well as for circulation (Williams 2008).
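The DLA process mentioned above is easy to sketch: random walkers launched at the lattice edge wander until they touch the growing cluster and stick to it. The lattice size, the wrap-around walk and the sticking rule below are simplifying assumptions rather than a faithful reproduction of Batty's model.

```python
import random

def dla(n_particles=60, size=41, seed=7):
    """Grow a DLA cluster of n_particles cells on a size x size lattice."""
    rng = random.Random(seed)
    centre = (size // 2, size // 2)
    cluster = {centre}                       # seed particle at the centre
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while len(cluster) < n_particles:
        # launch a walker at a random cell on the lattice boundary
        edge = rng.choice(range(size))
        x, y = rng.choice([(edge, 0), (edge, size - 1),
                           (0, edge), (size - 1, edge)])
        while True:
            if any((x + dx, y + dy) in cluster for dx, dy in moves):
                cluster.add((x, y))          # walker touches the cluster: stick
                break
            dx, dy = rng.choice(moves)
            # wrap around the lattice edges for simplicity;
            # real DLA implementations usually kill walkers that stray too far
            x, y = (x + dx) % size, (y + dy) % size
    return cluster
```

The resulting point set forms a dendritic, tree-like cluster of exactly the kind Batty relates to urban growth.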

Pedestrian modelling with agents

It is not hard to see why pedestrian modelling has gained so much from the advances in ABM techniques. In contrast to using differential equations to describe crowd movement, agent-based models can simulate discontinuities in an individual's behaviour with ease. Agents, when confronting obstacles in the environment, can deal with the situation ad hoc using simple navigational rules. Castle and Crooks (2006) point out that ABM is a natural way of simulating a system composed of real-world entities, and is inherently suited to simulating people in a very realistic way. This is exactly the case with pedestrian modelling, where agents have to mimic certain aspects of human behaviour relatively accurately. As long as the target is clearly defined, an individual's locomotion can be described using quantitative rules, making it suitable for programming.

Helbing, an early player in the field, developed one of the first mathematical models for mimicking the behaviour of pedestrians (Batty 2003). Referenced by almost every scholar of crowd modelling, he clearly favours individual-based methods over older modelling methods such as queuing models, transition matrix models, stochastic models, and route choice behaviour models (Helbing et al. 2002). The individual-based models are preferred for their ability to simulate self-organising effects occurring in pedestrian crowds. The trade-off between an individual's selfish and unselfish actions can lead to flocking or turbulence in crowds (Batty 2003) – phenomena that other models fail to capture. Helbing claimed that the movement of crowds is very similar to the movement of fluids and gases, but that focusing on the behaviour of individuals (agent-based models) is favourable since this approach is more flexible (Helbing et al. 2002).

There seems to be a slight disagreement between modellers about how closely the actual movement of pedestrians needs to be simulated.
Batty (2003) claims that getting the movement right is crucial, while Silva, Seixas and Farias (2005) and Dijkstra, Timmermans and de Vries (2007) disagree, proposing models where the actual movement of pedestrians is only a small component of the approach. Instead of fine-tuning the individual movement rules, they combine local decisions with planning and motivations. This disagreement arises partly because different models are built at different levels of abstraction.

The same can be said about the representation of the environment with which agents interact. The representations used range from topological discrete networks and simple grids to continuous topographical environments with 3D surface geometry. Some models reduce the environment to a network consisting of sequences of straight paths (Silva, Seixas and Farias 2005); others use a lattice of cells (Dijkstra and Timmermans 2002) or heavily simplified shapes (Kukla et al. 2001) to represent the space. Some researchers use a pre-processed environmental representation based on a space syntax technique called the axial line (Penn 2001). This 'exosomatic visual architecture' of axial lines guides agents through the environment (Turner 2003). However, in more complex and less intelligible environments, the axial line model has to be amended by making major attractors and generators of movement 'visible' to agents (Penn 2001). Space syntax research has also tried to combine agents with Gibson's psychological theory of visual perception (Turner 2003). This approach borrows the concept of affordances (Gibson 1979) to build agents that can perceive objects in the environment together with their possibilities for use. The agent is then directly guided by the affordances in the environment without any reference to higher-level representational models (Turner and Penn 2002).

Pedestrian modelling has been used for several purposes.
The most common are probably the evaluation of architectural configurations (Turner 2003) and of the built environment (Dijkstra and Timmermans 2002; Calogero 2008), and evacuation modelling in buildings (Helbing et al. 2002). Other uses include assessing infrastructural changes to promote walking (Kukla et al. 2001) and estimating shopping patterns in large malls (Penn and Turner 2002; Dijkstra, Timmermans and de Vries 2007). The latter allows merchants to carry out economic analysis for positioning goods and organising departments (Batty 2003). Pedestrian modelling has also been used to control crowd movement at festivals (Batty 2003).

A related area of study to pedestrian crowd modelling is concerned with way-finding in buildings and urban environments. Some way-finding models also integrate ABM techniques. Samani et al. (2007), for example, assess the design of digital signs that help agents find their way in unfamiliar indoor environments. A similar approach is also developed by Hölscher et al. (2007). Raubal (2001) proposes a method based on Gibson's theory of affordances (Gibson 1979) to simulate agents' navigation in airports. Kuipers, Tecuci and Stankiewicz (2003) evaluate a computational way-finding algorithm to help agents create and use cognitive maps for navigational purposes.

Nearly all agent-based pedestrian models are analytical in nature – they are primarily designed to evaluate rather than generate designs. A more design-oriented approach has been proposed by Dijkstra and Timmermans (2002), who combine agents with a cellular automata model. The authors claim that their approach is very useful for architects and urban planners. Nevertheless, they do not show how it can be used in the design process, nor do they outline any generative rules for modifying the environment. The purpose of crowd models in the design process remains analytical, occasionally providing valuable feedback to architects. If this analysis is to influence the design output, one has to remodel and analyse the design repeatedly. Instead of taking the design back and forth between conception and analysis, one can devise a generative computer program to close this loop. The following section gives clues as to how an analytical pedestrian model can be turned into a generative one where agents use stigmergy to alter their environment.
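The flavour of Helbing-style models discussed in this section can be conveyed with a heavily simplified sketch in which each pedestrian relaxes towards a desired velocity while being exponentially repelled by nearby obstacles. The parameter values (relaxation time, repulsion strength and range) are illustrative assumptions, not Helbing's calibrated ones.

```python
import math

def social_force_step(pos, vel, goal, obstacles, dt=0.1,
                      v0=1.3, tau=0.5, A=2.0, B=0.3):
    """One Euler step of a simplified social-force pedestrian model (2D)."""
    # driving force: relax towards the desired velocity pointing at the goal
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(gx, gy) or 1e-9
    desired = (v0 * gx / dist, v0 * gy / dist)
    fx = (desired[0] - vel[0]) / tau
    fy = (desired[1] - vel[1]) / tau
    # repulsive forces decaying exponentially with distance to each obstacle
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy) or 1e-9
        f = A * math.exp(-d / B)
        fx += f * dx / d
        fy += f * dy / d
    vel = (vel[0] + fx * dt, vel[1] + fy * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

# a pedestrian walking towards a distant goal with no obstacles
# gradually approaches the desired walking speed v0
pos, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(200):
    pos, vel = social_force_step(pos, vel, goal=(100.0, 0.0), obstacles=[])
```

With many such pedestrians interacting, the superposition of driving and repulsive forces produces the lane formation and turbulence effects that Helbing describes.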

Stigmergic building algorithms

Stigmergy is a coordination method that allows turning multi-agent swarms into generative systems that can not only move through the environment but also modify it. From the point of view of this thesis, it offers a great opportunity to combine algorithms that generate the circulation with rules that generate the spaces served by that very same circulation. This way the circulation system and the served spaces can emerge simultaneously.

The first algorithms mimicking the building behaviour of social insects started to emerge in the early 1990s. Deneubourg et al. (1992) introduced two algorithms – a sequential and a stigmergic algorithm – to simulate the nest building behaviour of wasps and termites in a 2D lattice space. By comparing these two algorithms, they concluded that the stigmergic algorithm is more suited to colonies whereas the sequential one is more adapted to a solitary individual. Following the first experiments, Theraulaz and Bonabeau (1995) started working in 3D, seeking minimal models in which stigmergic activities could happen. Their simulation takes place in a cellular lattice space where agents deposit bricks according to information in a lookup table. They propose two types of algorithms – coordinated and non-coordinated. Coordinated algorithms divide the shape into modular sub-shapes; in non-coordinated ones the building stimulus configuration overlaps and affects the entire building process. Although the algorithm contains probabilistic elements, coherent nest-like structures emerge with coordinated building activity. Using the same algorithm, the built structures can be somewhat different, but they exhibit a high degree of structural similarity (Bonabeau et al. 2000). Non-coordinated algorithms are unstable and lead to different outcomes in different simulations.

A few years after their first experiments, Bonabeau et al. (1998) came up with a stigmergic building algorithm to simulate the construction of pillars, walls, and the royal chamber in a termite colony. They argued that building a different nest morphology does not require behavioural change, but can be achieved through the random distribution of agents or by introducing external environmental influences such as air movement. Having devised an algorithm to simulate the spider web construction process, Krink and Vollrath (1997) come to a similar conclusion – complex animal architecture is the result of simple behaviour patterns interacting with a dynamic environment.
Stewart and Russell (2003) further expand on this idea by noting that the complexity is believed to lie within the environment; agents are merely uncovering it.

Possibly the most complex model based on stigmergic building activities in multi-agent colonies is introduced by Ladley and Bullock (2005). These authors propose a simulation model in which virtual termites build the royal chamber, covered walkways, tunnel intersections and chamber entrances. The simulation, based on the previous models by Deneubourg et al. (1992) and Bonabeau et al. (1998), takes place in a 3D lattice world. In contrast to its predecessors, this model implements constraints on what structures can be built, and determines how these structures affect the movement of agents and the diffusion of pheromone. This approach adds another layer to the stigmergic simulation as it includes feedback between information distribution, movement, and the built structure. The authors also introduce the notion of wind to their simulation. Wind directly affects the pheromone gradients, influencing the overall structure of the nest. Ladley and Bullock use several different types of pheromone, and their termites have allocated tasks. The pheromone, emitted by a special type of termite, diffuses in the environment and decays over time. The building behaviour is stimulated by pheromone templates and guided by a small set of rules.

All the stigmergic building algorithms discussed above utilise active stigmergy – the building action of an insect is guided by the previously built structures at a location – and qualitative stigmergy – different built forms trigger qualitatively different building actions. As most of the models are abstract and concerned with exploring the stigmergic building rules, the model proposed by Ladley and Bullock (2005) clearly stands out. Besides being biologically plausible, the model features a feedback mechanism between agents' movement patterns and the built structure. This concept can potentially be used in generating building and urban layouts that satisfy accessibility criteria.
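The pheromone mechanics shared by these models – deposition, diffusion and decay on a lattice – can be sketched as follows. The diffusion and decay rates, and the loss of pheromone at the lattice edge, are illustrative assumptions.

```python
def diffuse_and_decay(field, diffusion=0.2, decay=0.05):
    """One update of a pheromone field stored as a 2D list.

    Each cell keeps (1 - diffusion) of its pheromone and spreads the rest
    evenly to its four von Neumann neighbours; everything then decays.
    """
    rows, cols = len(field), len(field[0])
    new = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            p = field[r][c]
            new[r][c] += p * (1 - diffusion)
            share = p * diffusion / 4
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    new[nr][nc] += share
                # pheromone diffusing off the lattice is simply lost
    return [[v * (1 - decay) for v in row] for row in new]

field = [[0.0] * 5 for _ in range(5)]
field[2][2] = 100.0          # an agent deposits pheromone at the centre
for _ in range(10):
    field = diffuse_and_decay(field)
```

After a few updates the deposit has smeared into a gradient centred on the deposition point – exactly the kind of template that guides building and movement behaviour in the models above.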

4.6 Chapter summary

This chapter gave an overview of agent-based modelling – its origins, the concepts behind the paradigm, definitions of the agent, and a taxonomy of models. Once the necessary background literature was explored, several generic applications of multi-agent systems were discussed and analysed. Despite the popularity of multi-agent systems, a relatively small number of examples investigating their generative potential in architecture were discovered. Most of the models that were found have been developed for pedestrian movement analysis and the evaluation of building and urban layouts.

Based on the review in this chapter, one can argue that multi-agent systems could help to solve circulation issues for several reasons. Firstly, there is a correspondence between the mobility of agents in multi-agent systems and the distributed nature of movement in real-world environments. Multi-agent systems capture emergent phenomena of natural circulation networks; using mobile agents for generating spaces for movement simply makes sense. Secondly, multi-agent systems are flexible – a model can be deployed in different contexts and responds well to different control parameters. This allows a designer to generate a variety of solutions and pick the one that best fits the design brief. Thirdly, agents are embedded in the modelled environment – they 'see' the environment from the inside out. This can possibly help to create layouts that can be navigated more intuitively or – at least – offer an alternative to the traditional top-down way of designing circulation. The next chapter proceeds by defining the essential building blocks needed to build multi-agent systems for generating circulation diagrams. These are then recombined into prototype models that help to validate the assumptions made above.

Chapter 5: Building blocks of agent-based circulation models

This chapter gives an overview of the basic components of agent-based circulation models. These components are present in most of the prototypes and case studies presented in the following chapters; they are the building blocks that commonly need to be addressed when programming circulation models with agents. Naturally, there are other ways to structure multi-agent systems, but the break-down convention given in this chapter has proven useful when building prototypes (see Chapter 6) and case studies (see Chapter 7 and Chapter 8) at a later stage. These blocks can be synthesised in various ways in order to build prototypes and complete models – models that generate useful diagrams in a site-specific context.

The method of breaking down a complete model is often called reverse-engineering. Vidal (2007, p. 9) points out that the difficulty of reverse-engineering emergent phenomena lies in taking a description of what the model should do and figuring out how an individual agent should behave. This task can be achieved using algorithms and tools from systems theory and artificial intelligence research (Vidal 2007, p. 9). This chapter borrows many conceptual ideas from these fields and adapts them to solve the specific task of generating meaningful circulation diagrams.

The approach proposed here is to follow the system-theoretical distinction between the system and its environment, and to break the complete model down into mobile agents and their environment. While the agent is obviously a must-have component of multi-agent systems, the environment is a compound of situated systems (Weyns et al. 2005). The importance of the system-environment distinction cannot be overestimated, but the model needs to be broken down into smaller components.
This thesis proposes a convention to divide agent-based circulation models into the following topics: 1) design of the agent, 2) design of the environment, 3) movement and behaviour of the agent, 4) environmental processes, 5) interaction between the agent and its environment, 6) communication between agents, and 7) general settings of the simulation model. Although interconnected and overlapping at various levels, these topics can also be scrutinised independently. For example, the behaviour of the agent can be studied separately from the environment, as long as the input from the environment is reduced to abstract input values.

In programming terms, the proposed topics can be organised into commonly used modules that allow the developer to use standard methods without having to spend long hours on programming, validating and testing the basic components. This chapter, however, focuses on the theoretical principles of how these modules are built, and no exact programmatic interface is discussed. There are several ways in which each of the proposed modules can be implemented, and there are subtle differences in how these modules can be assembled into a coherent whole. Once the basic elements of agent-based circulation models are well understood, one needs some practical experience in building complete models. It is advisable to begin with smaller tests and prototypes and gradually move towards more complete models.
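In OOP terms, the proposed break-down might translate into a module skeleton along the following lines. The class names and the trivial walk-right behaviour are hypothetical placeholders chosen for illustration, not a prescribed interface.

```python
class Environment:
    """Topics 2 and 4: the environment and its processes."""
    def __init__(self, width, height):
        self.width, self.height = width, height

    def update(self):
        pass  # environmental processes (e.g. pheromone decay) would go here

    def contains(self, x, y):
        return 0 <= x < self.width and 0 <= y < self.height


class Agent:
    """Topics 1, 3, 5 and 6: the agent, its movement and interactions."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, env):
        # placeholder behaviour: walk right until reaching the boundary
        if env.contains(self.x + 1, self.y):
            self.x += 1


class Simulation:
    """Topic 7: general settings and the main loop."""
    def __init__(self, env, agents, steps):
        self.env, self.agents, self.steps = env, agents, steps

    def run(self):
        for _ in range(self.steps):
            for agent in self.agents:
                agent.step(self.env)
            self.env.update()


sim = Simulation(Environment(10, 10), [Agent(0, 5)], steps=3)
sim.run()
```

Keeping the agent, the environment and the simulation settings in separate classes like this is what allows each topic to be scrutinised and swapped out independently, as argued above.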

5.1 Design of the circulation agent

The most common type of agent in circulation models is the mobile agent. This certainly does not come as a surprise, since motion is the key characteristic of any kind of circulation system. Hence, agents in circulation models can alter their location, and are designed to cope with changing surroundings. They can move around in the same environment or, in extreme cases, even move from one environment to another. However, in this section, the notion of environment is treated abstractly as a set of input parameters into the system. Similarly, the output of the system is treated without further investigation of what that output actually means to the environment or to other systems in it. One can argue that it is controversial to look at the system without taking the environment into consideration, especially when dealing with mobility – it is widely accepted that motion can only happen in relation to a background system, the environment. However, there is a clear benefit to scrutinising the system in isolation, because this allows decoupling input from output outside of the system's boundary. It is suggested that in this way it is easier to understand the internal structure and logic of the agent.

There are essentially two kinds of mobile circulation agents. The first and most common is the embodied agent. The embodied agent has a notional body that defines its location in the model's coordinate system and is the basis of all local 'somatic' calculations that the agent executes. The body is often expressed as a geometrical entity (e.g. a sphere) in the digital model. The second type is a less traditional agent that, in a way, is inseparable from its environment. This agent is made of several generic components that also constitute the environment, and the agent is only distinguishable from its environment by the state of these components. An example of such an agent is the 'glider' in Conway's Game of Life (Adamatzky 2001, p. 185) that dwells in a 2D cellular space. As opposed to embodied agents that simply move around by transforming their body from one location to another, cellular agents can be said to move around by changing the state of cells according to cellular-automata-type propagation rules. The body of the latter type of agent is just a collection of cells of the same state. In order to move around, cellular mobile agents have more than one bodily configuration; the glider, for example, has 4 postures (see Figure 5.1). The difficulty with using cellular agents to generate useful circulation diagrams is their lack of persistency: gliders, for instance, may disappear when colliding with other objects that contain cells of the same state. Additionally, the local rules for cellular agents are difficult to code (Støy 2004), which defeats the objective of designing parsimonious agents. Therefore, cellular agents are not investigated further in this thesis.

Figure 5.1: Glider in action – gliding. This cellular automata agent has 4 different postures that it ceaselessly repeats
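The glider's four-posture cycle can be reproduced with a few lines of Game of Life code: after four generations the same five-cell pattern reappears, shifted diagonally by one cell. The coordinate convention below (y increasing downwards) is an assumption for illustration.

```python
from collections import Counter

def life_step(alive):
    """One generation of Conway's Game of Life on an unbounded grid."""
    counts = Counter((x + dx, y + dy)
                     for x, y in alive
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # a cell is alive next generation with 3 neighbours,
    # or with 2 neighbours if it is already alive
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# the glider: four generations later it has moved one cell diagonally,
# cycling through its four postures on the way
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
```

This also illustrates the persistency problem noted above: the 'agent' is nothing but a transient pattern of cell states, and any interference with those cells destroys it.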

In the object-oriented programming (OOP) paradigm, embodied circulation agents are best described as objects. However, in contrast to traditional objects, circulation agents are active components of the model. Embodied agents have their own properties and methods – or data and functions – that can be wrapped in a class definition in most programming languages that support OOP. Each instance of that class normally carries the same set of functions defining how it moves around, interacts with the environment, and records data. At the same time, depending on its past interactions, each instance can carry a unique set of data. Since this data is often used as input to the agent's functions, the behaviour of different agents of the same class can diverge.

All circulation agents generally have sensors for scoping the immediate neighbourhood of the agent's body. Whether these sensors are somehow expressed in the digital presence of the agent is a different matter. Sometimes it makes the programmer's task easier if the sensors are given a visual form in order to facilitate observations. This, however, is not always needed, and sensors can instead be defined as coordinates relative to the agent's body; the latter method also uses the available computational resources more sparingly. Besides differing in visual appearance, an agent's sensors can also function in different ways and produce different kinds of input. The simplest sensor has two states – on or off – and either does or does not produce input. More sophisticated sensors can measure scalar values and allow the agent to respond to more delicate differences in input.

Sensors can be classified according to their activation function. Collision detectors, for example, are a type of sensor turned on when colliding with certain geometric objects in the digital model. This is usually done by testing whether the visibility line between the agent's body and the sensor intersects with a line in 2D or with a face in 3D. Proximity detectors, on the other hand, are activated if objects appear within a certain range of the sensor. Proximity can be calculated by measuring the minimal distance between the object and the agent, or by using a fixed sensory position and measuring the distance to the intersection point between the visibility line and the model's geometry. Other types of sensors can respond to existing quantitative stimuli in the environment.
Much like a thermometer, a sensor can pick up an environmental parameter and produce output that is understandable to the agent.

There are literally unlimited ways to organise different sensors into sensory configurations. To start with, the number of sensors an agent has depends on what the agent is supposed to ‘sense’ and how delicate the response needs to be. Having multiple sensors allows the agent to compare input values in order to select an appropriate response. A higher number of sensors potentially leads to more informed decisions. However, sensory calculations are often quite expensive and it is generally advisable to keep the number as low as possible. Circulation agents operating in 3D environments naturally need a larger sensory space than those in 2D. Sensors can be arranged symmetrically around the agent’s body, or can be organised asymmetrically. The asymmetrical arrangement usually leads to a spatially biased behaviour and can be very useful in generating continuous flows of movement (see section 6.1 and Figure 6.3). Sensors can also be grouped according to their function, with different groups providing input to different internal decision mechanisms: movement sensors are wired to the mechanism dealing with locomotion, while building sensors influence the agent’s building behaviour. Finally, sensors can be either fixed or reconfigurable. Reconfigurable sensors – much like the antennae of insects – are dynamically adjusted to the environment and help to reduce the overall number of sensors needed.

If sensors are seen as generating the input to the agent, then the output from the agent can be said to evoke actuators. Since all the agents described in this thesis inhabit virtual environments and do not need actual mechanical devices in order to undertake actions, actuators are simply algorithms that are triggered by the agent’s internal mechanisms. A common output from a mobile agent is a vector that defines which way the agent is moved. In the case of building agents, the output can also be a geometrical object together with additional values that define the transformation matrix of the placed object. The actuator algorithm then takes this object and these values, derives the transformation parameters from the values, and adds the object to the model geometry. The actuator algorithm can also modify existing objects in the model – change the position of a geometrical entity, for instance. In such a case, the object that is being modified has to be a part of both the input and the output parameters. The process of mapping sensory input to actuator output is known as sensory-motor coupling.
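As an illustration, a proximity detector of the kind described above can be sketched in a few lines of code. The following Python sketch is illustrative only – it assumes a 2D continuous environment in which obstacles are reduced to centre points, and all names are hypothetical:

```python
import math

class ProximitySensor:
    """Binary proximity detector held at a fixed offset from the agent's body.

    The sensor is stored purely as coordinates relative to the agent,
    so it needs no visual form in the model.
    """

    def __init__(self, offset, detection_range):
        self.offset = offset          # (dx, dy) relative to the agent's position
        self.range = detection_range  # activation radius

    def read(self, agent_pos, obstacles):
        """Return 1 (on) if any obstacle centre lies within range, else 0 (off)."""
        sx = agent_pos[0] + self.offset[0]
        sy = agent_pos[1] + self.offset[1]
        for ox, oy in obstacles:
            if math.hypot(ox - sx, oy - sy) <= self.range:
                return 1
        return 0
```

A sensor configured as `ProximitySensor((1.0, 0.0), 0.5)` sits one unit ahead of the agent’s body and switches on whenever an obstacle centre comes within half a unit of that position – a two-state sensor in the terms used above.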
Sensory-motor coupling can be treated as a function of an agent that is ignorant of the agent’s external environment and can be scrutinised almost in complete isolation from the rest of the model. The only thing that the coupling function shares with other modules is the array of parameters that is put through the agent – arguments that are received from sensors and passed forward to actuators. As mentioned earlier, these arguments can take various forms: real numbers, more complex objects with specific properties, or even arrays of objects. The main point here is that these arguments are somehow transformed, and the output is always qualitatively or quantitatively different from the input.

There are several ways in which the internal transformation of input to output can take place. The simplest form of transformation is an arithmetical function that converts numerical input into numerical output. The output is quantitatively different from the input and can be mathematically expressed as follows:

output = f(input)

Although very simple, this method is ideal for creating simple feedback systems that can lead to complex outcomes.

Probably the most common method of mapping input to output is using a switch statement. Switch statements explicitly define cases of what happens to the output when the input meets certain criteria. As opposed to the simple arithmetical transformation, the output can also be qualitatively different from the input. For example, if the agent is given a certain object then the output is 90°; however, if the agent encounters a different type of object, then the output could be 60°. To put that in context, this could mean that if the agent encounters a cube it tries to avoid colliding with it by turning 90°; in the case of encountering a cylinder, the turning angle can be slightly smaller. The switch statement method is a way of introducing heuristics into sensory-motor coupling, is relatively easy to implement, and is supported in the vast majority of programming languages.

Recently, there has been a rise in the use of connectionist models for sensory-motor coupling in agents. Connectionism tries to model the intelligent behaviour of biological organisms by deploying artificial ‘brains’ – artificial neural networks (Pfeifer and Scheier 2001). There are several different architectures and functional principles of artificial neural networks, and most of these are well beyond the interest of this thesis. Although there is evidence that neural networks can work very well with agents (e.g. Stanley, Bryant and Miikkulainen 2005), the design of such agents is considered too complicated and contradicts the objective of finding parsimonious solutions for circulation agents. However, the simplest of neural networks – perceptrons (Rosenblatt 1958) – are tested in the following chapter. In these experiments, each input sensor of the agent is connected to every single actuator via a series of numerical weights. The sensory-motor mapping is a simple mathematical function where each output is the sum of all input values multiplied by their respective weights.

Although both the input and the output can only be numerical values, the sensor and actuator algorithms can interpret these values and convert them into acceptable formats. For instance, the activation function of a sensor can produce an input value of 1 or 0 depending on the collision with the model geometry. During the process of computing the output, the agent’s internal parameters can be altered. In turn, this can influence how the input is mapped to the output in the future. In connectionist models, this learning mechanism simply involves the adjustment of weights, and can be used to train the network to respond appropriately to the respective input. Single-layer perceptrons are relatively easy to train and make the agent’s behaviour adaptable.

The design of a circulation agent involves three main topics: input – how agents acquire information from the environment; what operations agents can perform; and what decision mechanism (behaviour controller) connects perceptions to actions. Each of these topics can be considered separately but, eventually, they all need to be collated in order to meet the objectives of the complete model. At the abstract level of input-output coupling, there is no difference whether an agent inhabits a 2D or 3D environment. However, dimensionality becomes a crucial aspect of design when one has to lay out the exact geometry of the agent’s sensors. If there is no agreement between the environmental geometry and an agent’s bodily traits then there is little chance that the model will be useful. Therefore, in designing the agent, one has to know how information is distributed in the environment, how the environment can be changed, and have a view of what kind of circulation diagrams are expected as the outcome of the complete model.
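The perceptron mapping described above can be sketched as follows. This is an illustrative Python sketch rather than the actual code used in the experiments; the delta-rule weight update shown in `train_step` is one common way of adjusting weights and is assumed here purely for demonstration:

```python
def perceptron_output(inputs, weights):
    """Sensory-motor mapping of a single-layer perceptron: each output is
    the sum of all sensor inputs multiplied by their respective weights."""
    return [sum(i * w for i, w in zip(inputs, row)) for row in weights]

def train_step(inputs, weights, targets, rate=0.1):
    """Adjust the weights towards target outputs (delta rule), so that the
    agent's response to a given sensory input becomes adaptable."""
    outputs = perceptron_output(inputs, weights)
    for row, out, target in zip(weights, outputs, targets):
        error = target - out
        for j, i in enumerate(inputs):
            row[j] += rate * error * i
    return weights
```

Each row of `weights` wires every sensor to one actuator; repeated calls to `train_step` nudge the weights until the outputs match the targets.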

5.2 Design of the environment

The notion of environment is frequently used by architects and urban designers interchangeably with the notion of built structures. In the architectural design discipline, the environment is too often considered as the end product – the result of design and building processes. This approach treats the environment as some sort of object that can be analysed, prepared, constructed, and completed. Rather than standing in a dynamic relationship with the activities taking place in it, the environment is seen as something that is defined prior to inhabitation.

For systems designers, the environment has a slightly different meaning. The environment is always seen with respect to the system and cannot be defined independently from it. In systems theory, the environment is a setting – physical or virtual – that produces system inputs and consumes its outputs (Keil and Goldin 2006). This means that the system acts upon the information received from the environment, but the system is also capable of changing its environment. This dynamic feedback loop is the main generator of environmental complexity and is also a source of complex behaviour. This thesis supports the argument that architectural design solutions can be conceived following the principles of dynamic feedback and that the built environment should be seen in the same way as in systems theory.

In computer programming, the word ‘environment’ is also sometimes used to describe the hardware and software platforms on which agent-based models are executed. Keil and Goldin (2006) call this the execution environment, as distinct from the application environment. The latter is the logical entity of the model that represents the space where agents perform their job (Keil and Goldin 2006). In this chapter, the word ‘environment’ always denotes the application environment.

The environment is an essential part of multi-agent systems. For the purposes of this thesis, it plays a crucial role not only as the background for circulation agents but also as the facilitator of communication between agents and as the storage for the generated circulation diagrams. Weyns et al. (in Keil and Goldin 2006) argue that the environment is independently active, provides a means for decentralised coordination, and acts as a shared memory for the agent colony. The environment is clearly distinguishable from agents only by the fact that it does not have objectives – it can be active but not proactive. Processes that take place in the environment happen blindly, without any specific goals. Most of these processes tend towards greater entropy and are, in that respect, dissimilar to the self-organising processes that take place in agent colonies. Typical environmental processes are decay and diffusion processes, in which the structure of the environment tends towards a uniform distribution of energy and matter. Section 5.4 looks at these processes in greater detail.

Although the environment is seen as a collection of entities without objectives, this is so only from the perspective of an observer who does not participate in the simulation. The environment of an individual within the simulation can contain not only mindless objects but also other goal-driven agents. An agent can perceive other agents directly as objects. Other agents – from the point of view of this particular perceiving agent – are then a part of its environment.

There are several ways to categorise the environments of multi-agent systems. Keil and Goldin (2006) propose a taxonomy that distinguishes multi-agent environments along three dimensions: physical versus virtual, persistent versus amnesic, and dynamic versus static. This taxonomy is perfectly applicable to circulation agent models as well. All prototypes and case study models in this thesis are executed in virtual environments. This is not only a matter of personal preference, but mainly because it is impractical (and presumably extremely difficult) to generate circulation diagrams with robotic agents. Architectural diagrams are abstract and do not need to be grounded in the physical environment. Therefore, the precision and level of detail required in building physical agent-based models is beyond what is needed for generating these diagrams. Persistent environments have a memory of past interactions and are capable of passing information from one agent to another. Circulation agents can use persistency in their favour and develop patterns of behaviour that are impossible in amnesic environments. All environments that support stigmergic communication are capable of preserving information. At the same time, over-persistent environments can work against the purpose of the model. This holds true in optimisation algorithms where the colony is supposed to find progressively shorter circulation paths (see section 6.5). In order to facilitate learning at the colony level, the environment needs to be able to slowly ‘forget’ past interactions. Dynamic environments, as opposed to static ones, change their configurations over time. Agents that inhabit dynamic environments have to be more adaptable because changing environmental configurations demand that agents change their behaviour.
All of the environments in the prototype models (see the next chapter) where agents use stigmergic building algorithms can be classified as dynamic environments.

Wooldridge (1999) adds another environment classification parameter to Keil and Goldin’s (2006) taxonomy – accessibility. This parameter allows Wooldridge to classify the environments of multi-agent systems into two large groups. In accessible environments, an agent can scope all the information in the model regardless of its position. This gives the agent a bird’s eye view of its environment and allows it to take well-informed steps in order to achieve its goals. However, it defeats the bottom-up nature of agent-based circulation models and hardly leads to the emergence of unpredictable diagrams or helps to progress bottom-up design thinking. As will be argued in further detail in Chapter 9, partially inaccessible environments are preferred for generating circulation diagrams. In these environments, agents can obtain information from their immediate neighbourhood without knowledge of the global structure or the whereabouts of their goals.

The representation of the environment in virtual multi-agent models is no different from other digital models and is defined by the underlying spatial representation. There are essentially two distinct models of spatial representation available: continuous and discrete. Whereas the continuous model follows the classical way of representing space known to conventional physics, discrete models are often used in relatively new areas such as quantum physics, and the rise of these models is closely related to the advent of computer technology (Kopperman et al. 2006). Depending on the spatial representation used, the environment can be said to be continuous or discrete. As demonstrated in the prototype and case study models later in this thesis, both types of environments can be successfully used in multi-agent models. The difference between continuous and discrete models can be illustrated with the following example: given two points in a continuous model, there is literally an infinite number of points that one can fit between them; in a discrete model, on the other hand, the number of points is limited. Therefore, with respect to agent-based models, there is a finite number of actions and perceptions available to agents in discrete models (Wooldridge 1999). This also means that discrete models reduce the necessary calculations, and simulations executed in discrete environments are lighter in terms of the required computational power.
Multi-agent models for generating circulation diagrams contain objects that provide the means for agents to interact with their environment. Details about possible interaction mechanisms are explored later in this chapter; it suffices here to say that agents either add or subtract objects, or change the quality of objects in the environment. The representation of objects relates to the model of spatial representation; objects in continuous and discrete environments are represented in different ways. In discrete environments, objects exist as collections of discrete units of a particular quality. A discrete object consists of basic elements (pixels in 2D, voxels in 3D) that are distinguished from the rest of the environment by visual (e.g. colour) or non-visual (e.g. weight) properties. In continuous models, objects are defined as collections of basic geometrical entities such as lines, points and surfaces. The most common object representation in 3D continuous space is made of vertices, edges and triangles (or quadrangles) – also known as mesh-type constructs (Shepherd 2009).

Many agent-based navigational algorithms (most notably ant colony optimisation algorithms) use environment-mediated communication to coordinate the colony’s actions. Instead of directly modifying the model geometry, agents add markers – digital equivalents of smelly substances – to the model for way-finding purposes. Although this can all take place in continuous space, it is easier and computationally less expensive to execute the simulation in a discrete model where the space is represented as a finite matrix of equally distributed markers. Instead of actually dropping markers, agents can modify some property (‘smelliness’ in this case) of these markers. From the programmer’s point of view, this has several advantages. Firstly, agents do not add objects to the model and the model geometry remains unchanged; hence, there is no need to use dynamic arrays of markers. Secondly, the sensory algorithm that picks up discrete markers is considerably simpler than its counterpart in continuous space. Thirdly, the matrix of markers provides a unique opportunity to pre-compute the environment and make some parts of it qualitatively different from others. This gives the programmer an additional means of controlling the model. Further details of this method are discussed in Chapter 9.

Discrete models are topological models of space, at least with respect to how computer scientists use the notion of ‘topology’ (Kopperman et al. 2006). All topological models try to reduce the complex nature of continuous environments into simpler and computationally less expensive representations.
A popular approach involves reducing spatial features into some kind of graph that preserves the topological structure of the environment. For example, space syntax uses topological models called axial maps for representing and analysing street networks (Penn 2001). Werner, Krieg-Brückner and Herrmann (2000) argue that route-based navigation using internal topological representations also happens in animals and humans. Topological models have also been used for way-finding and navigational purposes in agent modelling (Calogero 2008). The problem with topological representations is that they cannot easily be used in partially inaccessible or dynamic models, unless the representation is recomputed during the simulation.

Besides pre-processed topological representations, there are alternative ways to pre-compute the environment in order to help agents find their way. Processed topological models, such as space syntax’s depth map, provide a clue as to how to facilitate local navigation in simple agents. Methods such as cellular automata based diffusion can also be used to pre-compute relative distance values in the environment. This allows simple hill-climbing agents to find the shortest paths between two locations (see the Labyrinth Traverser prototype in the next chapter). Agent-based navigation algorithms in appropriately pre-computed environments require significantly less computation, and environmental pre-processing is therefore preferred in performance-driven models. However, it is not always a favourable solution. Pre-computing does not work well in dynamic environments where agents change the structure of their environment. In such models, if a change happens in the original representation of the environment it also needs to be replicated in the pre-computed representation. Keeping structural changes updated in both representations would, instead of improving the speed, involve constant reprocessing and slow the progress down.

One significant design consideration that can help to reduce the need for computational resources is the size of the model space. Agents can inhabit a bounded ‘universe’ – the container that defines the outer limits of their environment. This keeps the colony together in a constrained space and facilitates coordination between agents, but can also produce unwanted artefacts in agents’ behavioural patterns. A well-known behaviour that mobile agents exhibit in tightly closed environments is boundary following. In order to avoid forced behavioural patterns, the ‘universe’ can be made virtually infinite. However, this may lead to another problem – the colony can simply disperse in the unbounded environment, and it becomes very difficult to observe emergent behavioural phenomena due to the loss of activity concentration. A way to solve both of the described issues – boundary following and the loss of focus – is to wrap the ‘universe’. Creating a wrapped toroidal environment with no edges is done simply by gluing every edge of the bounded environment to its opposite edge. This method helps to keep the size of the model small, yet does not limit the freedom of the agents’ movement.
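In code, wrapping of this kind amounts to taking every coordinate modulo the size of the ‘universe’. A minimal Python sketch (names illustrative, any number of dimensions):

```python
def wrap(position, size):
    """Glue each edge of the bounded 'universe' to its opposite edge by
    taking every coordinate modulo the universe size (a toroidal space)."""
    return tuple(c % s for c, s in zip(position, size))
```

With a 10 × 10 universe, an agent stepping to (10.5, −1.0) re-enters at (0.5, 9.0), so no boundary is ever encountered.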

5.3 Behaviour of agents

Intelligent behaviour is often thought of as something common to living organisms (Skyttner 1996, p. 185). However, behaviour can also be seen as a phenomenon that emerges from simple rules of interaction between an agent and its environment (Braitenberg 1986). In the latter case, intelligent behaviour is assigned to the agent by an external observer – the agent seems to behave intelligently because it seems to have a purpose. What may actually be happening is that the agent’s behaviour is defined by a small set of logical rules that are executed locally. This section explains the basic algorithms for controlling agents’ movement (behaviour controllers), and gives an overview of related concepts for building agent-based circulation models.

The simplest behaviour controller algorithms are probably attraction and repulsion algorithms. These algorithms operate with a single vector calculated by comparing the position of an agent with its point of attraction (or repulsion). The new position of the agent is calculated by adding (or subtracting) this vector to the agent’s current position, so that the agent moves closer to the attractor or steps away from the repellent. Such a simple behavioural controller can generate surprisingly intricate movement patterns at the colony level. Consider a simulation where there are two types of agents – agents that are attracted to the other type (the smaller agents – see Figure 5.2), and others that are repelled by the first ones. This ‘game of chase’ has a very simple set-up but it can display quite complex patterns of movement. The number of agents that take part in the simulation has a crucial impact on its progress. There is no interesting observable behaviour with only two agents in the simulation – these agents resort to linear movement across their universe. The situation changes a little if the contact-avoiding agent is slightly faster than its stalker; in this case, agents in a wrapped universe come to a standstill. However, as soon as several agents take part in the ‘game’, it becomes a lot more interesting. The movement of agents hardly repeats itself and equilibrium is reached much more slowly, if at all. There is also an interesting emergent behaviour amongst agents of the same type – they tend to form groups. Although one can easily explain why this happens by referring back to the basic attract-repel rules, it is quite difficult to foresee this behaviour prior to executing the simulation. And it is even more difficult to predict the movement patterns when new attraction-repulsion behaviours are added to agents. For instance, if an additional rule is introduced whereby smaller agents repel other smaller agents, the simulation becomes increasingly dynamic. Smaller agents now form tight groups to encircle the closest big agent, but then spread out to catch a new one if this one escapes. To an observer who does not know the rules of the game it might even look as if the black agents coordinate the chase; the behaviour of the smaller agents is reminiscent of that of some animal predators. The lesson learnt from the game of chase is that very simple bottom-up rules can be aggregated in order to generate interesting (and potentially useful) behaviour. This is also an underlying principle in the prototype models and case studies presented in this thesis.

Figure 5.2: The game of chase. Smaller (black) agents are attracted to bigger (red) agents who, in turn, are repelled by the smaller ones. With a large number of agents, such a ‘game’ reveals some important mechanisms that lie behind the complex behaviours of simple agents
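The attract-repel controllers behind the game of chase reduce to a single normalised vector added to, or subtracted from, the agent’s position. A minimal Python sketch of the two rules (illustrative names, 2D):

```python
import math

def step_towards(agent, target, speed):
    """Attraction: move the agent one step along the vector to the target."""
    dx, dy = target[0] - agent[0], target[1] - agent[1]
    dist = math.hypot(dx, dy) or 1.0   # avoid division by zero at contact
    return (agent[0] + speed * dx / dist, agent[1] + speed * dy / dist)

def step_away(agent, threat, speed):
    """Repulsion: move the agent one step along the opposite vector."""
    dx, dy = agent[0] - threat[0], agent[1] - threat[1]
    dist = math.hypot(dx, dy) or 1.0
    return (agent[0] + speed * dx / dist, agent[1] + speed * dy / dist)
```

Calling `step_towards` for every small agent and `step_away` for every big one, once per time step, is all the ‘game’ requires; everything else emerges.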

Slightly more intricate behaviour controllers are needed for another well-known method of navigation – hill-climbing (Dorigo, Maniezzo and Colorni 1996). Hill-climbing involves a comparison algorithm for choosing between inputs of the same type, and agents need at least two sensory positions in order to perform this routine. The controller algorithm converts the information received from different locations in the agent’s environment into a one-dimensional array that is linearly sorted by some parameter, and the agent then moves towards the point in the model that has the highest position in that array. An agent with two sensors can thus hill-climb the landscape of values until it reaches the local maximum – the highest point in its immediate neighbourhood. Now, this behaviour controller alone is of little use for generating interesting movement patterns. It only becomes interesting when agents start to influence the landscape by changing the parameters that are used for hill-climbing. This initiates a dynamic feedback loop through which several interesting patterns of movement can be generated (e.g. see the Stream Simulator prototype in Chapter 6).
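A hill-climbing step of this kind can be sketched as follows. The sketch is illustrative: the positions sampled by the agent’s sensors are sorted by the value found there, and the agent takes the top entry, staying put once it occupies a local maximum:

```python
def hill_climb_step(agent_pos, value_at, sensor_offsets):
    """Sample the value landscape at the agent's position and at each
    sensor offset, sort the sampled positions by value, and move to the
    highest one. The agent stays put at a local maximum."""
    candidates = [agent_pos] + [
        (agent_pos[0] + dx, agent_pos[1] + dy) for dx, dy in sensor_offsets
    ]
    # The one-dimensional array of sampled positions, linearly sorted.
    candidates.sort(key=value_at, reverse=True)
    return candidates[0]
```

The `value_at` function stands in for whatever parameter is being climbed – a pheromone concentration, a pre-computed distance field, or any other scalar landscape.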

Figure 5.3: A swarm grammar – a branching diagram created by tracing agents using formal movement rules

In addition to hill-climbing and attraction-repulsion algorithms, which are both bottom-up navigational methods, agents can also have a predefined list of navigation rules. These formal rules can be indifferent to the agents’ surroundings, but the agent’s movement can still be context sensitive if some input parameters are received directly from the environment. Again, there are many ways in which the two different types of input – from the set of formal rules and from the agent’s sensors – can be combined. All these methods can be collectively termed swarm grammars (Von Mammen and Jacob 2007). The grammar rules usually define where agents turn or how fast they go, but also when new agents are created and what their initial orientation is. This kind of colony can be used for modelling growth systems similar to those created with L-system algorithms (see Figure 5.3). The algorithm that generates these complex-looking structures is actually very simple. It relies on the recursive replication of a new generation of agents at each step and has a set of rules for controlling an agent’s heading. The process starts with a single agent that takes a step forward, leaving a trail behind. The agent then ‘hatches’ several new agents. These new agents are given a predefined heading that is relative to their parent’s heading. All agents then move forward and ‘hatch’ another generation of agents. The result is a highly ordered and repetitive branching diagram.
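The recursive hatching routine described above can be sketched as follows (an illustrative Python sketch; the trail is recorded as a list of line segments, and each entry in `branch_angles` hatches one child per generation):

```python
import math

def grow(pos, heading, branch_angles, step, generations, trails):
    """Trace a swarm-grammar diagram: the agent steps forward leaving a
    trail segment, then 'hatches' one child per relative heading in
    branch_angles; each child repeats the routine recursively."""
    if generations == 0:
        return
    new_pos = (pos[0] + step * math.cos(heading),
               pos[1] + step * math.sin(heading))
    trails.append((pos, new_pos))          # the trail left behind
    for angle in branch_angles:            # hatch the next generation
        grow(new_pos, heading + angle, branch_angles, step,
             generations - 1, trails)
```

With two branch angles and three generations the routine traces 1 + 2 + 4 = 7 segments – the highly ordered, repetitive branching structure described above.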

Figure 5.4: A context-sensitive swarm grammar – the branch length is defined by the available amount of ‘light’

The same grammar rules can be used in conjunction with environmental probing. Every agent, while still following formal rules, changes its trajectory according to some local parameters (see Figure 5.4). As a result, the colony’s movement is highly structured and adaptive at the same time. Context-sensitive algorithms can be quite sophisticated. If the diagram generated by the swarm grammar algorithm is to be adopted for generating circulation systems, one can imagine an algorithm that detects all intersections of agents’ movement trails. Agents could stop moving as soon as they encounter an existing trail left behind by another agent. This allows the symmetric and hierarchical nature of the generated diagrams to be broken. Swarm grammars can have very detailed rules as to how agents move (e.g. straight ahead or in a zigzagging manner), when they ‘hatch’ and how many agents are in each new generation, and how they behave when crossing existing trails. There can be a hierarchy of agents that all have their own rules of behaviour. Figure 5.5 shows the results of an elaborated swarm grammar where the rules of each generation can be defined via a graphical user interface. The user has control over several parameters and can even introduce randomness into the generated pattern by leaving some of these parameters undefined.

Figure 5.5: Swarm grammars with hierarchical rules


A related but fundamentally different swarm behaviour controller, invented by Reynolds (1987), is the well-known flocking algorithm. Flocking agents generate interesting dynamic patterns even in uniform and featureless environments. The whole idea is that the flock is its own primary environment, where each agent tries to re-position and align itself according to its closest neighbours’ locations and headings. The behaviour of simulated flocks is usually entirely deterministic – the seemingly chaotic behaviour is introduced by locating the agents randomly at the beginning of the simulation. Additionally, the flock can interact with its environment and give the observer the impression of intelligent behaviour (Carranza and Coates 2000). Both methods of controlling the behaviour of agent colonies – flocking and swarm grammars – are useful in studying the coordination mechanisms between agents. However, they are of little use to designers wishing to explore the dynamic development of circulation diagrams. The problem with these methods is that there is no immediate feedback between the agents’ behaviour and their environment. One can combine flocking and swarm grammar methods with simpler methods, but this leads to quite complicated behaviour controller architectures. Although simple methods such as hill-climbing do not necessarily involve feedback loops by definition, they are computationally inexpensive enough to be combined with environmental modification and environmental processing algorithms. This combination can create powerful generative feedback systems.
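For reference, one step of a Reynolds-style flocking update can be sketched as below. This is an illustrative Python reduction of the three classic steering rules – cohesion, separation and alignment – with hypothetical weights and a hypothetical separation radius; Reynolds’ original formulation differs in detail:

```python
import math

def flock_velocity(agent, neighbours, w_coh=0.01, w_sep=0.1, w_ali=0.05):
    """One flocking update for an agent given as (x, y, vx, vy):
    steer towards the neighbours' centre (cohesion), away from very close
    neighbours (separation), and towards their mean heading (alignment)."""
    px, py, vx, vy = agent
    if not neighbours:
        return (vx, vy)
    n = len(neighbours)
    cx = sum(b[0] for b in neighbours) / n   # centre of the local flock
    cy = sum(b[1] for b in neighbours) / n
    ax = sum(b[2] for b in neighbours) / n   # mean velocity of neighbours
    ay = sum(b[3] for b in neighbours) / n
    close = [b for b in neighbours if math.hypot(px - b[0], py - b[1]) < 1.0]
    sep_x = sum(px - b[0] for b in close)    # push away from crowding boids
    sep_y = sum(py - b[1] for b in close)
    return (vx + w_coh * (cx - px) + w_sep * sep_x + w_ali * (ax - vx),
            vy + w_coh * (cy - py) + w_sep * sep_y + w_ali * (ay - vy))
```

Note that the function depends only on the neighbours – the flock really is its own primary environment, which is precisely why there is no feedback to the surrounding model.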

5.4 Environmental processing

Environmental processing is an important part of most multi-agent circulation models. These processes define how the environment responds to the input from agents and – if programmed appropriately – facilitate the emergence of continuous yet flexible circulation diagrams. Environmental processing routines define what happens to the objects that are added to the model by agents, how these objects respond to local conditions, and how they change in time. In short, these routines characterise the behaviour of multi-agent environments.

There are essentially two distinct groups of processes that this thesis is interested in: environmental decay processes and transformation processes. Whereas decay processes change non-geometrical substances that are left behind by agents, transformation processes influence geometrical objects that have been added to the model by agents. Decay processes include evaporation and diffusion algorithms that are extremely useful for generating dynamic circulation patterns. These algorithms also facilitate circulation network optimisation and the colony’s search for shorter routes. The concept is borrowed from real-world processes where energy in the environment tends towards an equilibrium state and greater entropy. Transformation processes include several different routines that deal with the stability of objects in the model and the dynamic rearrangement of these objects when they are out of balance. The relationship of these methods to circulation may not be immediately apparent and requires some further explanation. Geometrical objects in agent-based circulation models can serve as barriers to or facilitators of agents’ movement. Therefore, the exact position of these objects plays a crucial role in defining where circulation can take place and where it cannot. Now, if transformation processes define where objects end up in the model, then these processes indeed influence the emergence of circulation diagrams.

Among other tasks, decay algorithms control the distribution of substances dropped by agents into their environment. The simplest way of representing a substance is to give dedicated coordinates in the model numeric values that indicate the amount of substance available at any particular location. The change in distribution values can take place in several ways, but the two most common ones are diffusion and evaporation. Probably the most popular diffusion algorithm implements a cellular automata based method (Adamatzky 2001) where each discrete coordinate in space propagates a portion of its value to its neighbouring coordinates.
Coordinates can also be programmed to lose the propagated portion from their original values, in which case substance values disperse in the environment mimicking the diffusion of chemicals in nature. During the process of diffusion, gradients are developed in the model. A gradient of values can be used by agents to hill-climb and reach the source – the original insertion point of the particular substance. Diffusion also creates redundancy in the model so that agents can detect values from a larger area and thus require less navigational precision. Evaporation algorithms can be seen as a variation of diffusion, with the exception that the portion of substance values is not passed on to neighbouring

coordinates but simply eradicated. Evaporation plays the role of forgetting in the colony’s learning process. Without evaporation, values in the environment do not change over time and the colony is incapable of forgetting what it has once learned. The evaporation rate needs to be carefully calculated according to the size of the simulation, the number of agents in it, and the amount of evaporating substance available. Geometry transformation processing can vary from lightweight solutions to extremely sophisticated algorithms. Generally, these processes can be decomposed into two distinct packages of computation: stability computation and dynamics computation. The simplest geometry transformation processes solve stability issues heuristically and completely ignore dynamics. In order to compute dynamics, there is a large number of physics engines of various accuracy and performance that all use Newtonian laws of motion (Bourg 2001). Full-fledged physics engines commonly calculate rigid body dynamics, but can also include soft and deformable bodies’ computation, or even computational fluid dynamics. Some of the engines are open source and available as code libraries ready for anyone who wishes to include these in their computational models. The computational logic used by powerful physics engines can be very complicated and way beyond the scope of this thesis. However, some of the prototypes introduced in the following chapter use the functionality provided by an open source physics engine – Open Dynamics Engine (Smith 2007). The problem of using powerful physics engines in combination with multi-agent systems is that both require substantial computational resources. Accurate calculations of rigid body dynamics – let alone soft bodies – can slow down the whole simulation and can make the process of finding suitable methods for agent movement unacceptably long.
Although this may not be a problem at the production stage, it is a serious hindrance in building multi-agent prototype models that are to be deployed at the early stages of the design process where the speed and agility of testing various methods are of utmost importance. In the latter case, one may need to choose a different approach for calculating the stability of structures that are built or modified by agents. Some of the prototypes presented in the following chapter deploy a qualitative method for calculating the stability of geometrical objects in the model. This method is lightweight compared to the simulated physics approach because it uses heuristics rather than empirical mathematics. The main idea behind this method is that

objects in the model can be glued together via dedicated sockets and connectors. According to the heuristic rules, an object becomes stable when it lies on the ground plane or is attached to other objects via at least two connectors. Additionally, the method can be extended to take into account the position of connections. In the latter case, the object is declared stable when its centre of geometry is within the bounding cube including the two (or more) connector points. There are obvious shortcomings of this method. Firstly, objects need to be fairly simple and need to have predefined sockets and connectors. Secondly, the algorithm does not work very well in continuous space because of the modular nature of the objects. The greatest advantage of the socket-connector method is the speed of execution that makes it appropriate to be used in multi-agent models. Agents can add objects to the model or shift existing ones around without a significant slow-down due to the stability calculations. Dynamics of objects are completely ignored or, alternatively, there is a highly simplified notion of gravity that causes unstable objects to drop until a stable state has been found. Another heuristic method used in one of the prototypes in this thesis is based on cellular automata computation. It takes place in discrete space where geometrical building blocks are represented as cubic cells. Each of these cells has a value that indicates its stability. Cells that rest on the ground plane or are directly above such cells have the maximum stability value. A portion of this value is then propagated to cells in the immediate neighbourhood of the stable cells. Once the values are propagated across the model, the stability of each cell is assessed individually and those cells that fall below a threshold value are simply removed from the model. Figure 5.6 illustrates the outcome of randomly created geometry that has been modified by the described algorithm.
Similarly to the socket-connector method, the cellular automata based method is fast enough to be plugged into multi-agent simulations.
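A simplified two-dimensional sketch of the cellular-automata stability heuristic might look as follows. Cells resting on the ground plane, or stacked directly above grounded cells, receive the maximum stability value; a portion of that value propagates to neighbouring cells, and cells falling below a threshold are removed. The decay factor, threshold, and function names are illustrative assumptions, not the thesis’s exact implementation.

```python
# Sketch of the cellular-automata stability heuristic in 2D.
# cells: set of (x, y) occupied positions; y = 0 is the ground plane.

def stability(cells, decay=0.5):
    """Return a dict mapping each cell to a stability value in [0, 1]."""
    values = {}
    # Grounded cells, and cells stacked directly above grounded cells,
    # get the maximum stability value (process bottom-up).
    for x, y in sorted(cells, key=lambda c: c[1]):
        if y == 0 or values.get((x, y - 1), 0.0) == 1.0:
            values[(x, y)] = 1.0
    # Propagate a portion of stability to remaining neighbouring cells
    # until the values settle.
    changed = True
    while changed:
        changed = False
        for x, y in cells:
            if values.get((x, y), 0.0) == 1.0:
                continue
            best = max((values.get(n, 0.0) for n in
                        [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]),
                       default=0.0)
            v = best * decay
            if v > values.get((x, y), 0.0):
                values[(x, y)] = v
                changed = True
    return values

def remove_unstable(cells, threshold=0.2):
    """Cells that fall below the threshold are removed from the model."""
    vals = stability(cells)
    return {c for c in cells if vals.get(c, 0.0) >= threshold}
```

For example, a grounded tower carrying a long horizontal cantilever keeps the cells nearest the support (stability 1.0, 0.5, 0.25) but sheds the outermost cell (0.125, below the threshold) – the same pruning behaviour illustrated in Figure 5.6.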


Figure 5.6: Stable structures as computed with cellular automata algorithm. Darker cells are less stable than lighter ones

5.5 Agent-environment interaction

One of the biggest challenges in designing and building multi-agent systems that can generate meaningful circulation diagrams is to achieve an appropriate feedback system between the agents’ behaviour and the processes taking place in the environment. Often, it is not possible to define the feedback solely by using simple programming constructs such as conditional logic or control flow loops – feedback mechanisms should be inherent in the system’s architecture. The common logic is that environmental processes and agents operate with the same system variables. These variables can be geometrical objects or some abstract non-geometrical substances. Hence, the key to successful agent-environment interaction lies in how and when these variables are initiated or modified from either side – by agents and by environmental processes. From the agents’ perspective, interaction with objects – geometrical or non-geometrical – can be divided into three groups. The first group can be termed displacement routines and these involve agents that shift objects around in the model. For example, displacement routines can be used for spatial sorting (Holland and Melhuish 1999). As a rule, objects that are shifted around are not geometrically altered but re-located. Related but slightly different are modification routines. As opposed to displacement routines, agents that perform a modification routine actually change the qualities of objects or the value of numeric (non-geometric) variables. A classic example of the modification routine is the ant colony optimisation algorithm (Dorigo, Maniezzo and Colorni 1996), where agents modify ‘pheromone’ values in the environment. The same group also includes routines where agents can modify the

geometry of objects. The third group contains aggregation routines where agents add or subtract elements to or from the model. These routines can operate with geometrical objects only. Many of the prototype models in the next chapter belong to that group. The key to constructing generative feedback systems is to find appropriate environmental responses to each of these interaction mechanisms. An environmental process can amplify the changes done to the model by agents, it can reduce and mitigate the impact of these changes, or it can be indifferent to the direction of change. In the first case – amplification – the whole system creates a positive feedback loop that, according to a popular cybernetic view (Beer 1974), may lead to catastrophic results in the real world. In a virtual multi-agent circulation simulation, the result is rarely so dramatic because one can easily introduce programmatic thresholds that prevent certain values from exceeding acceptable limits. A simple positive feedback mechanism in the context of a multi-agent system is described in the following chapter (see the Stream Simulator prototype). Multi-agent systems that contain only positive feedback mechanisms quickly lead to fixed diagrams, and are therefore difficult to use as variety generators. Whereas in positive feedback loops the changes done by agents are amplified by environmental processes, negative feedback is achieved when environmental processes mitigate these changes. This scenario is more likely to produce interesting results and is also more true to the processes in the natural world – the environment tends to fight back. In systems theory, this is called negative feedback and it keeps systems in balance (Skyttner 1996). An appropriately tuned multi-agent circulation model featuring negative feedback mechanisms can produce a constantly changing diagram.
If a fixed diagram is needed for design purposes, one can simply stop the simulation and get a snapshot of the generated model. According to the third scenario, environmental processes are indifferent to the changes done by agents. This is usually the case when using simulated physics to transform the objects added or moved by agents. Such a process can support agents’ goals but can also work against them – it really depends on the particular situation. If agents have the ability to learn, they can start using environmental processes for their benefit.
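The interplay of positive and negative feedback can be illustrated with a minimal trail-laying simulation: agents deposit a substance where they walk and are more likely to move onto strongly marked cells (amplification), while uniform evaporation mitigates those changes and keeps the diagram in flux. All names and parameter values below are illustrative assumptions.

```python
# Minimal sketch of a feedback loop between agents and environment:
# deposits attract agents (positive feedback), evaporation erodes
# trails (negative feedback). Parameters are illustrative.
import random

SIZE, DEPOSIT, EVAPORATION = 10, 1.0, 0.1

def step(agents, trail):
    for i, (x, y) in enumerate(agents):
        # Candidate moves: the four orthogonal neighbours on a torus.
        moves = [((x + dx) % SIZE, (y + dy) % SIZE)
                 for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]]
        # Positive feedback: stronger trails are more likely to be chosen.
        weights = [1.0 + trail[m] for m in moves]
        agents[i] = random.choices(moves, weights)[0]
        trail[agents[i]] += DEPOSIT          # reinforce the trail
    # Negative feedback: evaporation erodes all trails uniformly.
    for cell in trail:
        trail[cell] *= (1.0 - EVAPORATION)

random.seed(1)
trail = {(x, y): 0.0 for x in range(SIZE) for y in range(SIZE)}
agents = [(5, 5) for _ in range(20)]
for _ in range(50):
    step(agents, trail)
```

After a few dozen steps, deposits concentrate along the cells the colony favours while evaporation keeps abandoned trails from persisting – a constantly changing diagram rather than a frozen one.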

There are several ways that the agents’ interaction with their environment and the processes happening separately in the environment can be combined. One has to consider the purpose of the model and then choose appropriate components to pursue it. Although the right choice of feedback mechanisms is crucial, it does not automatically lead to a successful model. Take the example of an ant colony optimisation algorithm. The number of agents and the size of their ‘world’ have a great impact on the ‘pheromone’ evaporation rate. On the one hand, if the evaporation happens too quickly, the colony will struggle to capitalise on already found paths between their ‘nest’ and the ‘food source’. On the other hand, if the ‘pheromone’ evaporation rate is too slow, the colony’s explorative behaviour is hindered and the shortest paths may remain undiscovered (see further explanation in Chapter 8). Therefore, the model needs to be viewed holistically and the parameters of independent algorithms have to be fine-tuned simultaneously. Although the agents’ behaviour and environmental processing have to work well together for the combined effect, there is a benefit in treating them as separate programmatic modules. Both of the modules access and share the same objects and data in the model, but can use different representations of space. For instance, the agents’ movement works best in the continuous representation, but objects in the model and the environmental processing can happen in the discrete representation. Whereas the latter helps to save computational resources and the blocky nature (see Figure 5.6) of discrete models may be acceptable at the early stages of the design process, the continuous nature of movement usually requires the continuous representation. As long as the programming modules are sufficiently decoupled, there is no clash between different representations.
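The translation between the two representations can be sketched in a couple of lines. The cell size parameter and function names are illustrative assumptions: continuous agent positions are snapped to grid cells for environmental processing, and cells are mapped back to their centre points for continuous movement.

```python
# Sketch of translating between continuous and discrete representations.
CELL = 2.5  # assumed size of one discrete cell in model units

def to_discrete(x, y):
    """Continuous position -> grid cell index. Data can be lost here:
    all points within one cell map to the same index."""
    return (int(x // CELL), int(y // CELL))

def to_continuous(i, j):
    """Grid cell index -> continuous cell-centre coordinates. The round
    trip back to the discrete index is lossless."""
    return ((i + 0.5) * CELL, (j + 0.5) * CELL)
```

Note the asymmetry: converting a cell to its centre point and back always recovers the same cell, whereas converting an arbitrary continuous position to a cell discards the within-cell offset – the kind of information loss the programmer must ensure is harmless to the model’s overall behaviour.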
Objects in the discrete space can be used in the continuous representation without the loss of data and require virtually no additional processing. However, when agents’ interactions – adding objects to the model, for example – are translated from the continuous into the discrete spatial representation, some data can be lost. It is the duty of the programmer to make sure that the lost information has no impact on the overall behaviour of the model. Dual representations of the environment are also used in way-finding agents that possess spatial memory. Spatial memory is the agent’s internal representation that is also known as the cognitive map (Kuipers, Tecuci and Stankiewicz 2003). Cognitive mapping is a well-known subject and has been thoroughly explored by

cognitive psychologists several decades ago (e.g. Stea 1974; Downs and Stea 1977). Cognitive maps have also been used earlier in the context of agent-based modelling by Ramos, Fernandes and Rosa (2007). This section only skims the surface of the subject and the main body of cognitive mapping studies remains outside the interests of this thesis. Cognitive maps are representations of the actual environment but are shielded from the external processes. This does not mean that internal representations are static – they can be subject to decay and automatic reorganisation processes similar to those that take place in the agent’s environment. The internal representation is normally developed during the agent’s interaction with the environment, and these representations are useful in helping the agent to make navigational decisions. Such way-finding decisions are usually made by combining input received directly from the environment and data acquired from the cognitive map. Wherever there are incoherencies between the actual environment and the cognitive map, a learning-like mechanism is deployed to amend the latter for a closer match. The greatest advantage of cognitive maps is that they are fully accessible by the agent. Even if the information captured in these maps is incomplete, they offer a global view and can provide more useful navigational information than the information acquired from the agent’s immediate environment.
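The cognitive-map mechanism described above can be sketched as an agent that keeps an internal copy of the environment, amends it wherever local perception contradicts the map, and uses the map’s partial global view for navigation. The class and method names are illustrative assumptions, not the thesis’s implementation.

```python
# Sketch of a cognitive map: an internal representation amended on
# perception and queried for navigational decisions. Names assumed.

class MappingAgent:
    def __init__(self):
        # Internal representation, shielded from external processes;
        # unexplored locations are simply absent from the map.
        self.cognitive_map = {}

    def perceive(self, environment, position):
        """Read the actual local value and reconcile the map: wherever
        map and environment disagree, the map is amended to match."""
        actual = environment[position]
        if self.cognitive_map.get(position) != actual:
            self.cognitive_map[position] = actual  # learning-like update
        return actual

    def best_known(self):
        """A navigational decision using the global (if incomplete) view
        the map offers: the most attractive location seen so far."""
        if not self.cognitive_map:
            return None
        return max(self.cognitive_map, key=self.cognitive_map.get)

# Usage: the agent explores, the environment changes, and a revisit
# triggers the learning-like amendment of the internal map.
env = {0: 1.0, 1: 5.0, 2: 2.0}
agent = MappingAgent()
agent.perceive(env, 0)
agent.perceive(env, 1)
```

The map is only as current as the agent’s last visit to each location, which is exactly the incoherence the learning-like mechanism corrects on each new perception.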

5.6 Communication in circulation agents

Communication in multi-agent systems is an extensive topic and can hardly be covered in a single section. Communication is a basic requirement of every healthy agent colony and an essential ingredient for coordinating the colony’s activities. Unlike single-agent systems, a system containing more than one agent renders additional value only if agents can efficiently communicate with one another. However, not all kinds of communication are relevant for the purposes of this thesis. Communication in the agent colony is interesting from the perspective of generating circulation diagrams only if it has an impact on the environment. Keil and Goldin (2006) highlight that there are two essentially different ways of using the environment for communication: trivial and nontrivial. The trivial use of the environment for communication involves direct message transport between agents. The environment

in this case solely acts as a means of passing the message – after the message has been delivered, the structure of the environment remains the same as it was before. The message itself, however, can be influenced by the structure and layout of the environment. As opposed to direct messaging, agents can use the environment to pass messages indirectly. According to Keil and Goldin (2006), this is a nontrivial use of the environment for communication. It is closely related to the notion of stigmergy. In this thesis, indirect communication and stigmergic communication have been used interchangeably. The main idea behind stigmergic communication is to use the environment as a message board. In order to communicate, agents have to leave messages on this message board. This allows any agent in the colony – including the one that left the message – to come back to it at a later date. It also allows the agent to be amnesic – no memory is required within the agent because all the information can be stored in the environment. The environment can contain several overlapping messages, and creating a new message can possibly change existing ones. Additionally, older messages can be wiped clean by environmental decay processes. This enables the agent colony to rewrite messages and improve its communication. Effectively, it allows the colony to learn. Stigmergy is of special interest in this thesis because stigmergic agents have the ability to perceive the environment and change it. Most of the prototype and case study models in the following chapters use stigmergic principles to a degree. However, stigmergy is not so much seen as a way to exchange information between agents, but as a generative mechanism that leads to the emergence of circulation diagrams. Although one can treat direct communication as an isolated issue or as a layer that can be added to the multi-agent system, stigmergic communication is inherently a part of the agent’s perceptual and behavioural mechanisms.
Stigmergy cannot and does not need to be addressed separately – it is embedded in the agents’ sensory-motor coupling routines. Therefore, stigmergic communication can be viewed as a matter of the agent’s internal design rather than something that happens between individuals in the colony. Stigmergic communication has two stages. Firstly, agents receive messages from the environment via sensors. This process was described in detail earlier in this chapter (see section 5.1). Secondly, agents change the environment by executing actuator functions. These functions were also covered in the beginning of this chapter

together with the sensory mechanisms. As discussed earlier, sensory input can be converted into actions in various ways according to the agent’s internal processes known as sensory-motor coupling. Stigmergic communication deals with questions of how the agent perceives and changes the environment, and how its perception is transformed into action. Hence, stigmergy is foremost an issue of the agent’s sensory-motor design. Naturally, the environment also plays an important part in stigmergic communication as it carries the messages. However, decoding and encoding the messages can only happen internally within the agent. There are two main reasons why stigmergy is the preferred method of communication in this thesis. For a start, it requires agents both to perceive and change the environment – two essential issues in the architectural design discourse. Thus, stigmergy can be seen as a rule-based design methodology and because of that it is particularly attractive for the purposes of this thesis. Stigmergic communication in an agent colony becomes the key shaper of the environment and a design driver of circulation diagrams. Stigmergy is also preferred because, when compared to other communication and coordination mechanisms, it is a light-weight method of coordinating the colony’s activities (Hadeli et al. 2004). Stigmergic communication does not require massive computational resources and – as long as the design of agents and sensory-motor coupling is in place – it does not require the programmer to figure out direct message protocols. Instead, the programmer has to simply define how the agent receives data from the environment, how it changes the environment, and how the sensory input is mapped to the motor output. There are two different types of stigmergy: qualitative and quantitative (Bonabeau, Dorigo and Theraulaz 1999, p. 205). Both of these can be useful for generating circulation diagrams.
Qualitative stigmergy has been used in this thesis mainly in stigmergic building routines (see section 6.6) where agents add building blocks to or remove them from existing structures. Different configurations of building blocks can trigger qualitatively different responses from the agent. For example, encountering a single building block may cause the agent to add another block on top of the existing one. Facing the new configuration of two blocks may instead lead to the action of removing one of the blocks. These two actions are qualitatively different; hence, this is qualitative stigmergy in action. Quantitative stigmergy, on the other hand, is usually

deployed in path-laying and way-finding agent colonies (see next chapter for description). Although, by definition, quantitative stigmergy affects a quantitative property of the agent’s action (Bonabeau, Dorigo and Theraulaz 1999, p. 108), the most common use in this thesis is the reverse: the strength of the signal from the environment has an impact on the agent’s behaviour. This means that, depending on the local environmental conditions, the agent’s behaviours are either evoked or inhibited. As discussed above, stigmergic communication can lead to the emergent coordination of the colony’s activities; it is coordination without a coordinator. Messages that are left behind can be seen as by-products of the agents’ interaction with their environment. Indeed, agents do not need to be aware that they are communicating with other agents – they simply follow their own agenda and coordination can happen involuntarily. The success of generated circulation diagrams is largely dependent on how well the communication in the colony takes place.
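The contrast between the two types of stigmergy can be sketched as two response functions. The rules, threshold, and names below are illustrative assumptions rather than the thesis’s exact building routines: the qualitative response switches between different kinds of action depending on the local configuration, while the quantitative response evokes or inhibits the same action depending on signal strength.

```python
# Sketch contrasting qualitative and quantitative stigmergy.
import random

def qualitative_response(local_blocks):
    """Different configurations trigger qualitatively different actions:
    e.g. one block -> build on top; two blocks -> remove one."""
    if local_blocks == 1:
        return "add"
    if local_blocks == 2:
        return "remove"
    return "ignore"

def quantitative_response(signal, rng):
    """The strength of the environmental signal evokes or inhibits the
    same behaviour: stronger trails are followed more often."""
    probability = signal / (1.0 + signal)
    return "follow" if rng.random() < probability else "wander"
```

In the qualitative case the agent’s repertoire of actions changes with the configuration it faces; in the quantitative case only the likelihood of one action changes with the signal it senses.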

5.7 The setting out configuration

The term setting out configuration is coined for this thesis in order to describe the generic parameters and settings of multi-agent simulations. Its origins can be found in the technical terminology of architectural design discourse, where the setting out drawing is the basic drawing that defines the global position of the design and the key reference elements for the entire set of architectural drawings. In the context of this thesis, the setting out configuration is a group of settings that define the general layout of the scene where agent-based simulations take place, the target and source points for the agents, and the initial parameters and configuration of agents. The setting out configuration is explicitly defined by the creator (programmer) of the multi-agent model and the impact it has on the model’s success cannot be overestimated. The setting out configuration provides the best means to influence the progress and control the outcome of the model. It sets the size of the model, defines existing objects in the model that agents can interact with, and pre-processes the environmental parameters in the model. The setting out configuration also defines where agents are first located, where their targets are (in case there are any), and where agents are repositioned once they have achieved their targets. Last but not least, it defines the configuration of newly created instances of agents – a

recorded image of the agent. This configuration is usually a formal description of the sensory-motor mapping rules which characterise the agents’ initial behavioural patterns. Behavioural patterns can be changed during the simulation when agents interact with the model and learn new sensory-motor mapping rules. These rules can then be recorded into new configurations. The configurational settings can be created and implemented in two essentially different ways. Fixed settings can be stored in computer files, whereas variable settings can be defined interactively by the user just before running the simulation. For example, a file can contain geometric information of the model’s environment. Certain objects or parameters in the environment, on the other hand, can be initiated dynamically via the graphical user interface. Storing the setting out configuration in files allows one to come back to it at any time, whereas the interactive approach of keeping the settings temporarily in computer memory offers speed and flexibility for testing out different configurations. Configuration settings for creating circulation diagrams can be divided into groups in several different ways. One way to categorise settings is to look at how agents are placed in the model and how many targets there are. Agents can all be given the same source, they can be started from several different sources, or they can be scattered across the model space. Additionally, there can be one or more targets for agents. A special case here is a model without any targets. Generated circulation diagrams, in this case, tend to be less controllable and more abstract. Different combinations of source and target point configurations lead to circulation network structures that match the demand cases of Haggett and Chorley’s network classification system (Haggett and Chorley 1969).
Single source and single target configurations generate paths, multiple source and single destination configurations generate tree topologies, and multiple source and multiple target scenarios normally generate circuits. Configuration settings can also be categorised according to the level of pre-processing that is carried out before the simulation is started. Pre-processing is an excellent way to control the simulation process, but it can also be very expensive in terms of computing power. Pre-processing is discussed in detail in Chapter 9. There are a few standard configurational settings. Haggett and Chorley (1969) acknowledge capture models, interconnection models and colonisation models. Capture models

start with an existing network structure and agents can only move along predefined paths. In interconnection models, agents explore potential links between the given nodes. In colonisation models, agents start from source points, but can explore space freely.
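A setting out configuration, together with the source/target demand-case mapping after Haggett and Chorley, might be sketched as a small data structure. The class layout, field names, and default values are illustrative assumptions.

```python
# Sketch of a setting out configuration and the demand-case mapping:
# single source/single target -> path, many sources/one target -> tree,
# many sources/many targets -> circuit. Names and defaults assumed.
from dataclasses import dataclass, field

@dataclass
class SettingOutConfiguration:
    model_size: tuple                 # extent of the scene
    sources: list                     # where agents are first located
    targets: list = field(default_factory=list)  # may be empty
    agent_count: int = 50             # initial colony size
    sensor_range: float = 5.0         # initial agent configuration

    def expected_topology(self):
        if not self.targets:
            return "uncontrolled"     # no targets: abstract diagrams
        if len(self.sources) == 1 and len(self.targets) == 1:
            return "path"
        if len(self.targets) == 1:
            return "tree"
        return "circuit"
```

Storing such a configuration in a file supports reproducible runs, while constructing it interactively just before a simulation supports the quick testing of alternatives described above.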


Chapter 6: Prototypes

Before looking at two case studies of how agent-based modelling can be utilised in the context of architectural design projects, this chapter introduces several prototype models. These prototype models are a step between understanding the basic components of agent-based models and using them for generating design solutions. For computational designers, prototype models are essentially sketching devices – proof of concept models that are modular enough to be used in larger simulations. One can achieve good results by combining prototype models only if one knows the principles of how the prototype models work. Prototype models can also be sketching devices for architects wishing to explore abstract models of movement and circulation. Several of the prototype models presented in this chapter can be used to generate constructive diagrams – diagrams that contain both the form and the requirement diagram. In the case of circulation diagrams, the form diagram describes some topological or geometrical properties of a movement network, whereas the requirement diagram denotes the frequency of movement or the density of moving units. Although deployed out of context and without site-specific requirements, these architectural diagrams can still be informative and useful as design instruments. Diagrams, according to UNStudio’s architects (Berkel and Bos 2006), are not representational, but instrumental in the production of new objects or situations. Similarly, circulation diagrams discussed in this chapter should be seen as tools to create new spatial arrangements. Agent-based circulation models belong to the wider group of bottom-up models, where solutions are developed over time and emerge from local decisions made at the level of an agent. Such models are a good way to explore patterns and general principles of movement, but they do not necessarily contain optimisation routines.
However, many of the prototypes presented in this chapter have elements of optimisation built in inherently. With respect to circulation in buildings and urban environments, it is often difficult to validate prototype models individually. This is because they are developed to observe a phenomenon or deal with a particular aspect of circulation while neglecting others. However, these prototypes provide a way of validating the complete computational model at a later date. If a prototype model is

observed to always produce diagrams of specific quality, the same quality can often be observed and recognised in models that are based on this particular prototype. For example, if agents in a prototype model generate circuit network diagrams then they are capable of doing the same in production models – models that are used in the design process.

Execution platforms

The prototypes presented in this chapter have been built on several different programming platforms ranging from professional software packages to more academic software applications. Although the emphasis in this chapter is on the conceptual mechanisms for developing agent-based models, it is worthwhile to discuss the pros and cons of the software platforms that were used to build these experiments. Different development environments present researchers with different opportunities and difficulties that influence the way the prototype is implemented. As a consequence, the final computational model may be biased and influenced by particular programming habits. In theory, good and flexible programming languages should allow the programmer to choose between different programming styles and facilitate several ways of solving computational problems. In practice, however, the prototype is often built using the most readily available methods that are peculiar to a programming language or to a development environment. In terms of the ease of learning a new programming language and development environment, NetLogo (Wilensky 1999) is probably the simplest one presented herein. NetLogo offers a simple graphical user interface (GUI) combined with a purpose-made programming language. The NetLogo programming language is specifically developed for exploring emergent phenomena in multi-agent simulations. There are two main simulation objects readily available for the programmer: turtles and patches. Whereas turtles are seen as agents capable of moving around in the digital environment, patches – arranged in a grid layout – can be said to represent this environment. Both of these simulation objects have several in-built methods and properties at hand to facilitate agile prototype development. NetLogo’s GUI makes it very easy to control simulation parameters at run-time and allows the observer to test different variable values quickly.
A major shortcoming of this modelling environment is its embryonic architecture that makes it very difficult to write extensive simulations.

Nevertheless, NetLogo is an exceptionally good platform for fast prototyping and for exploring how complex patterns can emerge from simple interactions between agents. Another powerful development environment used for building prototypes of multi-agent simulations is based on Blender (Blender Foundation no date) – an open source 3D content creation and modelling software – and powered by the Python (Python Software Foundation no date) programming language. Unlike NetLogo, Python is a general purpose object-oriented language supported by a wide community of users. It comes with several standard libraries and, when needed, additional software modules can be easily introduced. One such module that has been used for gearing Blender up for agent-based simulations is based on the OpenSteer (Reynolds 2004) libraries that were developed for simulating the steering behaviour of autonomous vehicles for gaming and animation. The possibility of adding external modules makes Blender/Python a powerful platform for carrying out complex simulations. However, the lack of integrated modules reduces the agility of prototyping and demands a well-organised approach to handling complicated simulations. The rest of the prototypes presented in this chapter have been executed in the context of professional CAD applications (Autodesk’s AutoCAD and Bentley’s MicroStation) and written in the integrated development language (Visual Basic for Applications – VBA). The benefit of running multi-agent simulations directly in CAD is that simulations can be carried out in an environment that is familiar to most architects and urban designers. This makes it possible to run simulations and use standard CAD tools almost in parallel without having to convert the data when the generated diagrams are taken further into a more detailed design proposal. This also presents the opportunity of integrating different modelling methods seamlessly into a fluent workflow.
Since VBA provides access to the standard drafting elements in CAD, any generative design method can become a truly integrated tool. The drawbacks of this solution are mainly of a technical nature and include the speed of execution. Whereas the performance issue may not be obvious in small-scale simulations, it can pose serious limits when dealing with large colonies of agents or complex surface geometries. When building agent-based simulations for architecture, different development environments have their own pluses and minuses and it is hard to give preference to any one of them. Each simulation has to be evaluated separately and a particular development environment should be selected according to the expected functionality. In general, it is advisable first to develop prototype models on simpler platforms that allow quick testing and rebuilding. Once the desired behaviour of the simulation is achieved, one can replicate the working algorithms on a more advanced platform. The purpose of the advanced platform is to achieve better performance, to offer more interactivity or to integrate better with the traditional design workflow.

6.1 Emergent path formation: Loop Generator

The Loop Generator is conceptually the simplest prototype in this chapter. Yet, for all its relative simplicity, it is capable of producing a significant variety of movement patterns. The algorithm that drives the prototype is inspired by the path formation behaviour found in social insects, particularly in termites (Bonabeau et al. 1998) and ants (Deneubourg, Pasteels and Verhaeghe 1983). The use of software agents to mimic the behaviour of natural insect colonies is an obvious choice: if one can simulate the path-finding behaviour of insects, one can reproduce the path formation artificially. One of the most important concepts borrowed from the natural world and used extensively in this and the following prototypes is stigmergy – an environment-mediated coordination mechanism. Unlike ants and termites, the agents in the Loop Generator prototype are not driven by the need to survive, but are simply locked into simple sensory-motor loops. These agents move around in the environment and are drawn to certain attractors or markers – artificial ‘pheromone’. By moving around they alter the attractiveness of particular locations which, in turn, attract more agents. As a result of this simple feedback mechanism, a colony of agents can produce looping movement diagrams – macro patterns that emerge from micro behaviours (see Figure 6.1). To put it differently: agents start chasing their tails, and generate continuous flows and even closed loops. These patterns manifest the continuity of agents’ movement and are facilitated and constrained by their manoeuvrability and sensory configurations. The Loop Generator is by no means intended to be a plausible model of natural phenomena; the objective is to investigate a simple and robust prototype for generating flexible circulation diagrams that can be useful for architects at the conceptual design stage.

Figure 6.1: Typical movement patterns of simple reactive agents. Emergent trails form open and closed loops to facilitate the continuity of the agents’ movements

The simulation takes place in a bounded 2D ‘universe’ and agents are constrained to move within a user-defined rectangular area. The model space is continuous and there is a virtually infinite number of possible locations an agent can occupy. The environment, on the other hand, is represented as a discrete ‘pheromone’ grid, which gives the model its idiosyncratic granular look. On initialization, agents are placed randomly within the given bounds with random headings. At the same time, all ‘pheromone’ coordinates are initiated with a zero value. Each agent has a notional body and three sensors – one in front and two placed symmetrically on either side at a declared angle from the front sensor (see Figure 6.3). When an agent ‘senses’ the environment, each of its sensors receives the value of the closest ‘pheromone’ coordinate. Sensor values are compared and the sensor with the highest value wins. If there is a tie, the winning sensor is selected randomly from among the sensors with the highest value. Once the winning sensor is selected, the agent turns towards the sensor location and alters the environment by increasing the ‘pheromone’ value of the coordinate closest to its body. If the agent happens to reach the edge of the ‘universe’, it turns back towards the centre of the simulated world. In order to generate dynamic movement patterns, the environment slowly ‘forgets’ previous activity as the ‘pheromone’ evaporates.
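To make the mechanism concrete, the sensing–turning–depositing cycle can be sketched in Python. This is an illustrative rendering only – the thesis prototype runs on Blender/Python with OpenSteer physics – and all names, the grid size and the parameter values here are assumptions; inertia is approximated by a partial turn towards the winning sensor.

```python
import math
import random

GRID = 50                  # pheromone grid is GRID x GRID cells (assumed size)
ALPHA = math.radians(45)   # angle between the front and a side sensor
SENSOR_DIST = 2.0          # distance of the sensors from the agent's body
DEPOSIT = 1.0              # pheromone added under the agent each step
EVAPORATION = 0.01         # fraction of pheromone lost per step

pheromone = [[0.0] * GRID for _ in range(GRID)]

def cell(x, y):
    """Clamp a continuous position to the nearest pheromone cell."""
    return (min(GRID - 1, max(0, int(round(x)))),
            min(GRID - 1, max(0, int(round(y)))))

def step(agent):
    """Sense with three sensors, turn towards the strongest reading,
    move forward and deposit pheromone (stigmergy)."""
    offsets = (-ALPHA, 0.0, ALPHA)                 # left, front, right sensor
    readings = []
    for off in offsets:
        sx = agent['x'] + SENSOR_DIST * math.cos(agent['heading'] + off)
        sy = agent['y'] + SENSOR_DIST * math.sin(agent['heading'] + off)
        i, j = cell(sx, sy)
        readings.append((pheromone[i][j], off))
    best = max(r for r, _ in readings)
    winner = random.choice([off for r, off in readings if r == best])  # break ties randomly
    agent['heading'] += winner * 0.5               # partial turn stands in for inertia
    agent['x'] += math.cos(agent['heading'])
    agent['y'] += math.sin(agent['heading'])
    if not (0 <= agent['x'] < GRID and 0 <= agent['y'] < GRID):
        # at the edge of the 'universe', turn back towards the centre
        agent['heading'] = math.atan2(GRID / 2 - agent['y'],
                                      GRID / 2 - agent['x'])
        agent['x'] = min(GRID - 1.0, max(0.0, agent['x']))
        agent['y'] = min(GRID - 1.0, max(0.0, agent['y']))
    i, j = cell(agent['x'], agent['y'])
    pheromone[i][j] += DEPOSIT                     # alter the environment

def evaporate():
    """The environment slowly 'forgets' previous activity."""
    for row in pheromone:
        for j in range(GRID):
            row[j] *= 1.0 - EVAPORATION
```

Running `step` and `evaporate` in alternation for a colony of such agents is what produces the feedback between movement and environment described above.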


The emergence of clear paths takes place over time and is worth closer observation (see Figure 6.2). At first, random and fragmented strips of ‘pheromone’ trail start appearing all over the simulated world. As the simulation progresses, these fragments join into continuous trails of ‘pheromone’ that often feature closed loops at their ends. These loops enable agents to turn around and reinforce the path. Eventually, larger circuits emerge and clear paths of high ‘pheromone’ concentration become visible. These paths, however, seldom keep the same shape for long and can drift slowly from one place to another.

Figure 6.2: A sequence of snapshots illustrates the development of closed loops in 2D

The Loop Generator prototype is built on the Blender/Python platform and the steering behaviour of agents is constructed with the help of the OpenSteer library for simulating vehicle movement. Agents are treated as vehicles with mass and are subject to inertia. This allows one to simulate realistic movement and prevents agents from turning too steeply at high speed. However, this alone does not lead to the emergence of continuous and looping paths, as agents can still simply revolve around a point in space at low speed. Another aspect of the agent’s design is equally important – the position of its sensors. The simplest functional ‘body plan’ found has three sensors at a certain angle (α) from one another (see Figure 6.3).

Figure 6.3: The body plan of the 2D Loop Generator agent. α denotes the angle between the front sensor and a side sensor


Thus, the formation of continuous paths takes place for two reasons. First, the sensory configuration forces agents to move forward; agents cannot ‘see’ what happens behind them and are designed to move in the direction of a sensor. In this way, agents are given directionality. Second, simulated inertia prevents agents from making sudden turns. Given the directionality and the simulated physics, it is quite simple to get the colony of agents to form paths. If the principles of sensory configuration or simulated physics are violated, the result can be either a complete absence of movement patterns or very strong clustering. Figure 6.4 illustrates the possible outcomes of sensory modifications. A lack of any patterns occurs when the sensors are placed too close to one another. The edge-following effect – observed in many agent-based models in this thesis – becomes evident with a slightly larger angle between sensors. Angles between 40 and 70 degrees lead to the patterns that are the most useful in the context of generating circulation diagrams. If the angle gets even wider, agents start forming clusters. The size and number of these clusters depends not only on the actual angle, but also on the number of agents in the simulation.

Figure 6.4: Tests with different sensory configurations – agents with 3 sensors. The angle (α) between front and side sensors (from left to right): 0, 22.5, 45, 67.5, 90 and 120 degrees

3D Loop Generator

The success of experiments with the two-dimensional Loop Generator motivated further development of the prototype in 3D space. The 3D version of Loop Generator was built on the same platform (Blender/Python) with a few substantial changes to the original algorithm. In many ways, updating the dimensionality was trivial, but several interesting issues were discovered. Although some of these issues were already apparent in the 2D version, the third dimension amplified them. Adding another dimension made the prototype more sensitive, and getting it to work required more delicate fine-tuning of the key parameters. For example, the OpenSteer parameters for the mass and step length of agents had to be chosen carefully according to the spatial metrics of the ‘pheromone’ grid and to the agent’s bodily metrics.

Figure 6.5: An agent’s ‘body plan’ in 3D – the development of minimal sensory configurations that produced continuous circulation diagrams

The design of a suitable sensory configuration for the agent turned out to be surprisingly non-trivial. Figure 6.5 illustrates the development of the configurations, where both the number of sensors and their positioning had a crucial impact on the agent’s behaviour. The first attempt to transfer the 2D configuration (see Figure 6.3) into 3D was simply to lower the side sensors and lift the middle one higher so that the agent could turn up and down as well. This, however, prevented the agent from moving in a straight line and caused excessive zigzagging. In order to stabilise the movement, a configuration of five sensors was tested. Although this configuration enabled agents to follow straight lines, they had trouble following existing ‘pheromone’ trails at steep turns. This prompted a search for a more agile design, which was finally achieved by doubling up the number of sensors, placing the first layer closer to and the second one further away from the agent’s body. Further tests with a larger number of sensors did not significantly improve the agents’ navigational skills but made the execution of the algorithm slower. Hence, the double-layer configuration with 10 sensors was found to be optimal. Another major change that had to be introduced to the original 2D prototype concerned the 3D equivalent of the edge-following effect – plane following. Plane following took place not only at the bottom and top layers of the ‘universe’ but also on the other vertical and horizontal planes at the edges of the simulated world. This phenomenon was caused for the very same reason as edge following – agents’ movement was constrained and the edge plane received a larger ‘pheromone’ concentration. In the 3D environment, circulation on the vertical planes was discouraged for obvious architectural reasons, and additional changes to the algorithm were therefore required. Rapid development methods were deployed in order to find solutions that prevent this unwanted effect. Once a possible solution was conceived, it was immediately implemented and tested. After several unsuccessful attempts, two solutions were found particularly useful for avoiding plane following. Firstly, agents were made to emit less ‘pheromone’ once they encountered the edge of the simulated world. Secondly, the agents’ capacity to drop ‘pheromone’ was tied to their speed. This change exploited the fact that agents had to reduce their velocity when turning back from the edge. Being able to add ‘pheromone’ only when their speed was sufficiently high prevented ‘pheromone’ build-up close to the edges.
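The two fixes can be sketched as a single deposit rule. The threshold and factor values below are hypothetical – the thesis does not give the actual Blender/Python values – so this is a sketch of the logic only.

```python
SPEED_THRESHOLD = 0.8    # assumed minimum speed at which an agent still deposits
DEPOSIT = 1.0            # assumed normal deposit per step
EDGE_FACTOR = 0.25       # assumed reduction applied near an edge plane

def deposit_amount(speed, near_edge):
    """How much 'pheromone' an agent drops this step.

    Agents slowing down to turn back from an edge deposit nothing
    (the speed-gating fix), and agents near an edge deposit only a
    fraction of the normal amount (the reduced-emission fix).
    """
    if speed < SPEED_THRESHOLD:
        return 0.0
    return DEPOSIT * (EDGE_FACTOR if near_edge else 1.0)
```

Because both fixes act only on deposition, the navigation code is untouched; the edge planes simply stop accumulating enough ‘pheromone’ to attract agents.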

Figure 6.6: Generated 3D circulation diagrams

The development process of a 3D circulation diagram is not much different from that in 2D. However, circuits and loops tend to be much longer, and short, clear doughnut-like shapes occur less often (see Figure 6.6). Due to the complex nature of the 3D environment, agents lose the trail at steep turns quite easily. Although agents maintain their explorative behaviour throughout the simulation, exploration occurs more often at the early stage of the process. The ‘pheromone’ trails keep changing to suit better the locomotive abilities of agents, which eventually leads to the emergence of optimal circulation paths (see Figure 6.7). Whereas in 2D there are usually several closed loops of ‘pheromone’ trails, one large continuous circuit containing a few sub-loops is more common in 3D. This one big loop sometimes takes the shape of a twisted ‘8’ that can be reduced to a single closed loop (see Figure 6.7) from which agents cannot easily escape.

Figure 6.7: A sequence of snapshots illustrates the development of closed loops in 3D

Both the 2D and 3D diagrams generated with Loop Generator are quite abstract, and it may be hard to see how they can be useful for an architect. However, one needs to bear in mind that the diagrams are generated in silico, outside any architectural context, and that the prototype is a theoretical mechanism to test whether the continuity of movement can be captured in a flexible diagram. The continuity of the diagram ensures that any point adjacent to the circulation path can be accessed from any other such point. The flexibility of the diagram allows the prototype to be combined with other computational and manual modelling methods to create useful architectural diagrams. Later in this chapter, there is an example where Loop Generator is plugged into another, more complicated prototype that involves feedback between circulation systems and the spaces that are accessible from the circulation (see section 6.6).

6.2 Channel network formation: Stream Simulator

This prototype is loosely based on the natural drainage channel formation mechanism described by Bridge (2003, p. 5-8). In order to initiate the channel network formation process, a few basic requirements have to be satisfied. Firstly, the landscape has to be erodible by water and at least gently sloping. Secondly, there has to be enough water around. If the water has enough gravitational power, it starts to erode the landscape locally. Progressive erosion and stream formation continue if the power of flow increases downstream (Bridge 2003). One can find several existing prototypes in the literature that have simulated such mechanisms at different levels of abstraction (e.g. Haggett and Chorley 1969; Willgoose, Bras and Rodriguez-Iturbe 1991). The actuator in the Stream Simulator prototype is a simple agent following a hill-climbing procedure, and the channels are the trails of agents climbing downhill. The agent finds the quickest downhill course by comparing the heights of local neighbouring areas and erodes the landscape at the same time. Each following agent that chooses the same route reinforces the trails left behind by the first agent. This is a typical positive feedback model, in which a patch of landscape that has been eroded has a higher probability of being eroded again. The result of such behaviour can be seen in Figure 6.8. The Stream Simulator prototype is a highly abstract model and does not take many aspects of natural channel formation into account. For example, it ignores sedimentary transportation and concentration processes. Stream Simulator features neither negative feedback nor an environmental repair mechanism that would make the prototype truly generative. However, the aim of this prototype is not to create an original and generative design model but to contribute to the Network Modeller prototype described later in this chapter. The main objectives are to study the behaviour of the prototype and to lay a foundation for generative prototypes.

Figure 6.8: The input landscape (left) and the resulting stream channels (right)

An example of a computational model that simulates erosion and the formation of river channels can be found in the NetLogo models library (Dunham, Tisue and Wilensky 2004). Like this erosion model, Stream Simulator is built in NetLogo. Unlike the NetLogo model, which uses a cellular automaton approach, Stream Simulator uses mobile agents. The Stream Simulator prototype, akin to Loop Generator, features stigmergic feedback loops. Users of the prototype can define the initial landscape by importing a previously prepared image in which lighter areas represent hills and darker areas represent valleys (see Figure 6.8). The graphical user interface allows the user to control the size of the agent colony and to export snapshot images of the current state of the simulation. Conceptually, the algorithm behind the interface is a simple one. Agents are distributed randomly over the landscape and climb downhill. The slope of the landscape is defined locally by comparing the colour values of neighbouring patches. Once an agent moves to a new position, the patch underneath it is ‘eroded’ – its colour becomes slightly darker. The channel formation has begun, and if the slope of the landscape is sufficient, constant streams start to emerge (see Figure 6.9). Once in a while, all agents are redistributed across the landscape and the whole cycle repeats itself. In time, a tree-like channel network appears. Depending on the input image, the topology of these tree-like networks can vary in the number of branches, but the network always remains a tree. Apart from the random initial distribution, the algorithm is purely deterministic and would otherwise lead to exactly the same result each time it is executed.
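The downhill movement and erosion just described can be sketched as follows. The actual prototype is written in NetLogo; this Python rendering is a simplification (a plain height field stands in for the image, and the periodic redistribution is reduced to re-seeding agents at random), and all names and values are assumptions.

```python
import random

W, H = 20, 20
# height field: a larger value means higher ground (stands in for image brightness)
height = [[(x + y) * 0.5 for y in range(H)] for x in range(W)]
ERODE = 0.05     # how much a visited patch is lowered ('darkened')

def neighbours(x, y):
    """Yield the (up to 8) in-bounds neighbours of a patch."""
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx or dy) and 0 <= x + dx < W and 0 <= y + dy < H:
                yield x + dx, y + dy

def step(agent):
    """Move to the lowest neighbouring patch and erode it. Positive feedback:
    an eroded patch is lower, so it is more likely to be chosen again."""
    x, y = agent
    nx, ny = min(neighbours(x, y), key=lambda p: height[p[0]][p[1]])
    if height[nx][ny] >= height[x][y]:
        return agent                 # local minimum reached: stay put
    height[nx][ny] -= ERODE          # 'erode' the destination patch
    return (nx, ny)

def run(n_agents=50, steps=40):
    """Drop agents at random and let each carve its downhill trail."""
    for _ in range(n_agents):
        agent = (random.randrange(W), random.randrange(H))
        for _ in range(steps):
            agent = step(agent)
```

Repeated calls to `run` correspond to the periodic redistribution of agents; trails accumulate where descents coincide, which is where the channels appear.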

Figure 6.9: A typical progress of channel formation algorithm

The channel network formation does not necessarily have to be simulated with a colony of agents. A single agent can achieve qualitatively the same effect as 1000 agents. Tests with 1, 10, 100, 500, 1000 and 1500 agents (Figure 6.10) produce the same variety of networks as tests with 1000 agents (Figure 6.11). There is no notable difference in terms of network density, segment length or network shape (after Haggett and Chorley 1969). Testing the influence of the colony size on the network characteristics may easily lead to the opposite conclusion. The difficulty in comparing the results of simulations with different numbers of agents lies in recognizing the stage of network development. A smaller number of agents yields slower network development, which could give the impression that a different kind of network is emerging. The only difference the colony size makes is to the speed of execution: one agent takes much longer to form recognizable stream channels than 1000. This is because of the peculiar way the simulation has been set up – once all agents have executed their commands, all landscape patches execute theirs, and the algorithm that updates the landscape takes its toll. Thus, in simulations with a larger number of agents the time spent running patch commands is proportionally smaller, and the network formation is faster.

Figure 6.10: Tests with 1, 10, 100, 500, 1000 and 1500 agents

Figure 6.11: All tests with 1000 agents

Stream Simulator is capable of generating tree-like networks, but cannot produce other network topologies. This fact imposes some limits on how the prototype can be used in the context of architectural and urban design. While some building typologies, such as hospitals, schools, prisons, smaller airports, harbour terminals and train stations, accept or even require a branching structure of circulation, others need more complex circuit networks. The key characteristic of a tree-like circulation system is the one-to-many relationship. There is a single point of convergence at one end of the circulation system: the entrance to or the exit from the network is the same for all its users. Naturally, there are no loops in such a circulation system, but one can still move back and forth in a section of the network without passing the point of convergence. Whereas branching networks work very well for some building typologies, such a circulation topology is generally discouraged in contemporary urban design. Tree-like street networks lead to low connectivity and low penetrability – two undesirable properties in any urban design scheme. Stream Simulator can be combined with other prototypes in order to build more complicated and versatile generative design applications. The next prototype in this chapter – Labyrinth Traverser – can be efficiently plugged into the Stream Simulator code in order to produce diagrams for circuit networks. An example of how Stream Simulator is combined with the Labyrinth Traverser prototype is described in section 6.4. The Nordhavnen case study in the following chapter is also partly based on the Stream Simulator prototype.

6.3 Cellular automaton and hill-climbing agents: Labyrinth Traverser

The incentive for building this prototype was to develop an algorithm capable of finding the shortest path between user-defined points. The objective was to make it work in complex digital environments with obstacles, for which the labyrinth served as a suitable metaphor. Like the previous prototype, Labyrinth Traverser was designed in NetLogo.

Figure 6.12: A labyrinth solved by Labyrinth Traverser


The Labyrinth Traverser prototype combines two agent-based techniques: hill-climbing and diffusion. In order to facilitate these two techniques, there are two types of agents. The first type is an immobile state agent known as the ‘patch’ in NetLogo (the cell in cellular automata); the other is a mobile agent – NetLogo’s ‘turtle’. Topologically fixed patches are specified as either empty (white in colour – see Figure 6.12) or occupied (black). This defines where the mobile agents can and cannot move. A patch also contains data about its distance to each of the target points that are placed via the graphical user interface (red squares in Figure 6.12). This data is a value that is propagated from the target point to the closest patch and then from one patch to another. During this propagation, the passed value is diminished so that a gradient field of values is formed. Patches can gain the target data only from their immediate neighbours, which means that values are passed on only locally. This cellular-automata type of diffusion mechanism has been modelled and described in detail by Adamatzky (2001, p. 11-17). A similar method has also been used in the Daedalus computer program for creating and solving mazes (Pullen 2007).

Figure 6.13: The gradient is computed with the diffusion method: the redness of each patch shows the proximity to a point in the labyrinth

Once the user has initiated a target point, the patch closest to the target gets the maximum value. By propagating a percentage of this value to its neighbours, the value then starts to diffuse over the landscape. Since the black ‘occupied’ patches neither receive nor propagate the value, the diffusion of values is influenced by the labyrinth layout (see Figure 6.13). Once the target value has diffused across the labyrinth, a mobile agent can find the target by simply climbing the resultant gradient. The agent can easily discover the shortest way to the target by comparing the propagated target values locally, without any overall vision of the labyrinth’s layout (see Figure 6.14). This method is sufficiently robust to work in any kind of 2D layout in which a route to the target exists. This makes Labyrinth Traverser a practical asset in the computational designer’s toolbox. Although the prototype is not a generative design application, it can be used in many different applications for generating circulation diagrams, and is particularly useful for optimising circulation networks. The prototype introduced in the following section presents an example where Labyrinth Traverser is combined with Stream Simulator. That prototype is capable of optimising network layouts and, in some cases, even generating minimal spanning trees.
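The two techniques can be sketched together: a diffusion pass builds the gradient field around a target, and a mobile agent then climbs it using only local comparisons. The thesis prototype is written in NetLogo; this Python rendering, its names, and the toy grid and wall layout are all assumptions.

```python
W, H = 7, 7
DECAY = 0.9                              # fraction of a value passed on to neighbours
walls = {(3, y) for y in range(1, 7)}    # a wall column with a single gap at (3, 0)

def in_bounds(x, y):
    return 0 <= x < W and 0 <= y < H and (x, y) not in walls

def neighbours(x, y):
    return [(nx, ny) for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
            if in_bounds(nx, ny)]

def diffuse(target, iterations=50):
    """Spread the target value locally until a stable gradient field forms."""
    field = [[0.0] * H for _ in range(W)]
    tx, ty = target
    for _ in range(iterations):
        field[tx][ty] = 1.0              # the patch closest to the target keeps the maximum
        nxt = [row[:] for row in field]
        for x in range(W):
            for y in range(H):
                if (x, y) in walls:
                    continue             # occupied patches neither receive nor pass values
                best = max((field[nx][ny] for nx, ny in neighbours(x, y)), default=0.0)
                nxt[x][y] = max(field[x][y], DECAY * best)
        field = nxt
    return field

def climb(field, start, max_steps=50):
    """Hill-climb the gradient field to the target using only local comparisons."""
    path = [start]
    x, y = start
    for _ in range(max_steps):
        nx, ny = max(neighbours(x, y), key=lambda p: field[p[0]][p[1]])
        if field[nx][ny] <= field[x][y]:
            break                        # no higher neighbour: the target is reached
        x, y = nx, ny
        path.append((x, y))
    return path
```

Because the value decays by a constant factor per patch, the field encodes wall-avoiding distance, so the climbed route follows a shortest path around the wall.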

Figure 6.14: A sequence of images showing the progress of the agent

The potential value of this prototype is in analysing layouts of buildings and settlements; Labyrinth Traverser, with its easy-to-use interface, can find and measure relatively accurately the distance between any two points in a digital 2D environment. The prototype can also be used to build generative design applications in many possible combinations with other computational and generative techniques.

6.4 Network optimisation algorithm: Network Modeller

The Network Modeller prototype is a combination of two prototypes described earlier in this chapter: Labyrinth Traverser and Stream Simulator. In order to fully understand Network Modeller, it is recommended to read the previous sections that introduce these prototypes. In a way, the two prototypes are quite similar. Mobile agents in both cases are simple hill-climbers. The difference lies in the way the input is fed into the prototype. Labyrinth Traverser relies on the field of values propagated by a diffusion-based algorithm from a user-defined point via a mouse click; in Stream Simulator the input is fed into the program as an image and the field of values is extracted as bitmap colour values. The latter prototype also features a stigmergic feedback loop, while the former does not. Together, the two simple prototypes foster a more intricate one. In the Network Modeller prototype, agents, which still retrieve information from their immediate neighbourhood, are not just hill-climbing the landscape of values. Instead, they execute a more articulated route selection mechanism. On their way, agents leave slowly fading ‘pheromone’ trails in the landscape (see Figure 6.15). They always try to get closer to their target but also attempt to follow the strongest ‘pheromone’ trails. Consequently, agents choose the most trodden path that takes them closer to their target.

Figure 6.15: A network diagram generated with Network Modeller

Similarly to the original prototypes, Network Modeller is built in NetLogo. It has a slightly more complicated graphical user interface, which gives the user better control over the simulation than its predecessors offer. Once the user has manually seeded the target points and the ‘target’ values have been propagated across the landscape, the simulation is started. Mobile agents are then randomly assigned to these user-defined points and given a target destination. A mobile agent chooses its travelling direction by comparing the ‘pheromone’ values of patches within its local neighbourhood. All neighbouring patches that have a lower ‘target’ value than the patch closest to the agent are omitted from this calculation. In other words: the agent chooses the patch with the highest ‘pheromone’ value amongst the set of patches with a high ‘target’ value. If the ‘pheromone’ values of two or more patches are equal, the choice is made randomly among these patches. The prototype optimises the network of paths because ‘pheromone’ trails in the environment evaporate – the environment can forget old paths, and new, shorter paths emerge. Dropped ‘pheromone’ also diffuses in the environment, which is again useful for optimisation. Diffusion creates less sharp gradients of ‘pheromone’ and enables more delicate navigation. Whereas Labyrinth Traverser and Stream Simulator are not truly generative models, since neither features negative feedback, the negative (‘pheromone’ evaporation) and the positive (stigmergic) feedback definitely make the Network Modeller prototype a generative one.
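The route selection rule just described can be sketched as a small function. The thesis prototype is written in NetLogo, so the names and the data layout (plain dictionaries mapping patches to field values) are assumptions.

```python
import random

def choose_next(current, neighbours, target_value, pheromone_value):
    """Pick the neighbouring patch with the strongest 'pheromone' among
    those at least as close to the target as the current patch."""
    # omit patches with a lower 'target' value (i.e. further from the target)
    candidates = [p for p in neighbours
                  if target_value[p] >= target_value[current]]
    if not candidates:
        return current                   # no patch takes the agent closer: stay put
    best = max(pheromone_value[p] for p in candidates)
    # break ties between equally marked patches randomly
    return random.choice([p for p in candidates if pheromone_value[p] == best])
```

The ‘target’ field gives every move a direction, while the ‘pheromone’ field biases the choice towards the most trodden path, which is what lets trails reinforce themselves.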

Figure 6.16: Examples of network diagrams generated with the same configuration of target nodes. The variety of diagrams have been achieved with different control parameters

Besides defining the network target points and the input image, there are a few important parameters the user of Network Modeller controls. Firstly, the user has to decide the number of mobile agents participating in the simulation. The optimal number of agents is difficult to determine since it depends on so many other aspects of the simulation: the number of target points, the distance between these points, the size of the ‘universe’ and so on. The user can manipulate the size of the agent colony at run time and so influence the development process of the network. Another important user-controlled parameter is the amount of ‘pheromone’ dropped into the environment by agents. Larger quantities lead to the quick establishment of strong paths early in the simulation, which hinders the optimisation of network length; too small a ‘pheromone’ drop rate prevents the formation of a stable network. The other two parameters that can be manipulated during the simulation define how the ‘pheromone’ behaves once it has been left behind by the agents. As described above, the ‘pheromone’ diffuses and evaporates in the environment. The rate of both of these processes is exposed to the user via the graphical interface. Whereas larger diffusion rates are helpful in forcing two adjacent paths to join together into a single path, smaller rates yield clearly defined paths. The evaporation rate, on the other hand, can be effectively manipulated to regulate the speed of optimisation. Higher evaporation rates can cause rapid changes in the network shape and topology; smaller rates usually lead to the quick formation of a steady network. All of the control parameters influence the simulation process to a greater or lesser extent. These parameters control different aspects of the simulation and need to be considered in parallel. The ‘pheromone’ evaporation rate, for example, needs to be chosen in proportion to the number of agents in the simulation – the more agents there are, the smaller the rate required. Although the effect of each parameter can be described individually, the interplay of different parameters is difficult to foretell. The actual values of these parameters have little significance – what is important here is to understand how the user can change the course of diagram development and which way the parameters need to be tuned in order to achieve a desired effect. This process is further explained later in this section. Figure 6.16 illustrates the variety of network diagrams that have been generated in a single simulation.
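A minimal sketch of how the diffusion and evaporation parameters might act on the ‘pheromone’ field each step, assuming a simple 4-neighbour diffusion scheme; the thesis prototype is written in NetLogo, so this scheme, its names and its default rates are assumptions.

```python
def update_pheromone(grid, diffusion_rate=0.2, evaporation_rate=0.05):
    """One environment step: each cell shares a fraction of its 'pheromone'
    equally with its 4 neighbours, then every cell loses a fraction to
    evaporation (the negative feedback that lets old paths be forgotten)."""
    w, h = len(grid), len(grid[0])
    nxt = [[0.0] * h for _ in range(w)]
    for x in range(w):
        for y in range(h):
            share = grid[x][y] * diffusion_rate
            nxt[x][y] += grid[x][y] - share        # keep what is not shared
            nbrs = [(nx, ny)
                    for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
                    if 0 <= nx < w and 0 <= ny < h]
            for nx, ny in nbrs:
                nxt[nx][ny] += share / len(nbrs)   # spread the shared fraction
    return [[v * (1.0 - evaporation_rate) for v in row] for row in nxt]
```

Diffusion conserves the total ‘pheromone’ while smoothing its gradient; only the evaporation factor removes it, which is why the two rates pull the network towards merged paths and towards forgetting, respectively.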
The ability to manipulate some control parameters at runtime makes Network Modeller a flexible and interactive tool that could potentially be used for generating movement diagrams for buildings and urban areas.

Figure 6.17: A typical process of network formation and optimisation

129

Without user interaction, the process of network development is typically fast, even with a relatively large number of agents and targets (tests with 2000 agents and 25 targets were carried out). An example of the development cycle is demonstrated in Figure 6.17. At the beginning of the simulation, the agents have no trails to follow and take the shortest course to their targets. Before they even reach their targets for the first time, they may become attracted to trails laid out by agents travelling on a different course – the optimisation process has started. Once an agent has reached its goal, it is assigned a new one. By then, however, the ‘pheromone’ landscape has changed and the agent executes the more complex navigation algorithm described earlier. The positive feedback mechanism makes its contribution to the process and certain paths are quickly reinforced. The network is now in its most connected shape, often even having two separate routes between two target points. But soon the effect of ‘pheromone’ evaporation takes its toll: certain paths gain more traffic and become dominant. Weaker paths dwindle and may eventually disappear completely. In its late state the network has become stable and, assuming that the control parameters are not changed, it can be declared fully developed. Tests with a low number of target points show that the resultant path networks often resemble minimal spanning trees. Figure 6.18 displays several different trees generated with the prototype. Although its success rate is low, Network Modeller can be classified as a heuristic method for finding minimal trees. In some cases, the network takes the shape of a Steiner tree – a minimal tree featuring emergent network nodes: Steiner vertices (Herring 2004). The formation of minimal spanning trees is not guaranteed with Network Modeller.
The prototype should be considered as a computational assistant of optimised network design rather than a solver that always converges to a single optimal solution.
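The deposit–diffuse–evaporate cycle that drives this development can be sketched as a single update step. The following fragment is an illustrative reconstruction, not the prototype’s actual code; the dictionary-based field representation, the function name and all rate values are assumptions.

```python
def step_pheromone(field, agent_cells, drop_rate=1.0,
                   evaporation=0.05, diffusion=0.1):
    """One update of a 'pheromone' field stored as a dict {(x, y): value}.

    Agents deposit at their cells; every cell then shares a diffusion
    fraction with its four orthogonal neighbours and loses an
    evaporation fraction. All parameter values are illustrative.
    """
    field = dict(field)
    for cell in agent_cells:                      # deposit
        field[cell] = field.get(cell, 0.0) + drop_rate
    new = {}
    for (x, y), value in field.items():           # diffuse
        shared = value * diffusion
        new[(x, y)] = new.get((x, y), 0.0) + value - shared
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            new[nb] = new.get(nb, 0.0) + shared / 4.0
    return {c: v * (1.0 - evaporation) for c, v in new.items()}  # evaporate
```

Calling this step repeatedly with the agents’ current positions reproduces the qualitative behaviour described above: trails sharpen where traffic concentrates and fade where it does not.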

Figure 6.18: Recognizable minimal spanning trees generated with the network optimisation algorithm


A user can interrupt the typical development process of a network diagram in several ways. The process can be sped up by reinforcing the half-developed structure: during the simulation, the agents’ ‘pheromone’ drop rate can be increased or the rate at which ‘pheromone’ evaporates from the environment decreased. One can also encourage more explorative behaviour and further development of the network by decreasing the ‘pheromone’ drop rate, reducing the number of participating agents, or increasing the evaporation and diffusion of ‘pheromone’ at run time.

Ultimately, the responsibility for finding appropriate parameters lies with the designer. Suitable diagrams can be generated through trial-and-error experimentation in which parameters are changed and visual feedback is obtained at run time. The knowledge gained through real-time interactive play is key to success – knowing the rules of thumb of how the prototype behaves is more valuable than knowing the exact parameters that lead to a specific result. The ability to freeze the diagram at a certain stage of development, or to promote an alternative development, is a useful feature of Network Modeller. It allows designers to generate a wealth of circulation diagrams that are more or less optimised for travel or build cost. In Network Modeller, the optimisation is implicit and can be controlled via the parameters outlined earlier in this section.

Network Modeller is a step from a simple agent-based prototype towards a computational application with the potential to be deployed in an architect’s office. It satisfies several requirements that are often demanded of such an application: it can output a variety of diagrams, responds promptly to user interactions and is relatively transparent. Chapter 7 presents a case study where Network Modeller is developed further and deployed in the context of professional design work. Although some modifications are introduced in order to make the prototype more suitable for the particular requirements of the case study, its essence remains the same. One of the modifications introduced later is context sensitivity. This has been deliberately omitted from Network Modeller to keep it a pure prototype – introducing a context would make the prototype a special case.
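The two modes of interaction outlined above – reinforcing a half-developed structure and encouraging further exploration – amount to opposite adjustments of the same control parameters. The class below is a purely hypothetical illustration of this idea; its name, attributes and scaling factors are assumptions and do not appear in the prototype.

```python
class ControlParams:
    """Illustrative runtime controls for a Network Modeller-style simulation."""

    def __init__(self):
        self.drop_rate = 1.0      # 'pheromone' deposited per agent step
        self.evaporation = 0.05   # fraction lost from the field per step
        self.diffusion = 0.10     # fraction shared with neighbours per step

    def reinforce(self):
        """Freeze the half-developed network: stronger trails, less decay."""
        self.drop_rate *= 2.0
        self.evaporation *= 0.5

    def explore(self):
        """Encourage further exploration: weaker trails, faster decay."""
        self.drop_rate *= 0.5
        self.evaporation *= 2.0
        self.diffusion *= 1.5
```

In an interactive session, the designer would call `reinforce()` or `explore()` while watching the diagram develop, mirroring the trial-and-error workflow described above.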

6.5 Way-finding agents and ant colony optimisation

Ant colony optimisation (ACO) is a computational method for finding near-optimum paths in networked graphs. There are several well-known ACO algorithms and their fields of application vary. Typical ACO implementations deal with network optimisation problems such as the travelling salesman problem (Dorigo, Maniezzo and Colorni 1996) and minimum spanning trees (Neumann and Witt 2008). A more detailed overview of ACO methods and algorithms can be found in Chapter 8.

The prototype described in this section is loosely based on an ACO method combined with additional way-finding methods and evolutionary optimisation techniques. This way-finding prototype is programmed in AutoCAD’s integrated development environment using the VBA programming language and has earlier featured in a paper by Puusepp and Coates (2007). The objective of building this prototype was to study how agents can develop a way-finding mechanism that helps the colony navigate between two points in a digital 3D model with the help of an internal representation of that model. As the agents were designed to receive some sensory information directly from the model, it was hoped to find out which spatial layouts facilitate or hinder the way-finding process in this stigmergic multi-agent system.

In order to keep the number of inputs from the environment low, a method of storing sensory-motor rules was proposed. The only stimulus an agent receives directly from the environment is the collision detection of surfaces in the digital 3D model. Most navigational decisions taken by the agents are based on a reference map. For the agent colony, this reference map can be seen as an internal representation of the environment. While the map evolves during the simulation, the rules to interpret it develop in sync with it. Both the reference map and the set of interpretation rules are developed from scratch using a trial-and-error reinforcement learning methodology (Pfeifer and Scheier 2001, p. 490-493). The goal of the agents is to find a way to a target point while learning to interpret sensory input and altering their internal representation of the environment. Successful agents are rewarded by upgrading their value; the value of an unsuccessful agent is reduced to a minimum.

This prototype does not directly relate to any theory of human way-finding because one of the principles was to keep the prototype simple. Human way-finding processes are far more complex than those in the proposed prototype (Chase 1982; Timpf et al. 1992). However, the proposed prototype does borrow the notion of internal representation, which originates from the early thinkers of the cognitive mapping approach to way-finding – from Tolman and later Piaget (Chase 1982).

Figure 6.19: Input-output coupling. The agent obtains input from the digital 3D model and from the reference map. Motor output is generated by interpreting the input according to syntactical rules

Agents use a combination of a computational reference map (the agents’ internal representation of the environment) and a set of rules that determine how that map is interpreted (see Figure 6.19). When an agent is first created, it has no rules stored in its memory – the evolution of the reference map and the development of rules take place over time as the agent explores the digital environment. The initiation and fade-out of data in the map is computed using the pheromone trail algorithm: the information that is fed to the reference map when agents explore the 3D model disappears gradually. The interpretation rules have to develop and change in step with this dynamic map. The pursuit of targets is additionally facilitated by trivial vision – if the line of sight between the agent and its target is clear (does not intersect with the geometry of the 3D model), the agent takes an automatic step towards the goal.

Besides individual learning, the development of interpretation rules also takes place at the phylogenetic level. The evolution of agents is similar to the evolution of Braitenberg’s vehicle type 6 (Braitenberg 1986, p. 26-28), where a single agent – but not necessarily the best performing one – is chosen for reproduction. However, in the process of reproduction only 75% of the agent’s interpretation rules are transmitted to its offspring.

Figure 6.20: Sensory input. Sensors acquire their value from the environment and from the corresponding location on the reference map.

The design of the agent is fairly simple. The agent consists of a central ‘body’ and six sensors attached to it, forming three axes of symmetry. The consideration behind this hexagonal design is to give agents sufficient liberty of motion while retaining symmetry, leaving undefined which side is the front or the back. Sensors are combined into three identical axes which have a major influence over the activation function (see Figure 6.20). Each sensor axis has four possible states: two polarised states (only one sensor active), both sensors active, and both sensors passive. All three sensor axes together yield a sensory space with 64 possible input combinations. The output space, or motor space, is much smaller, containing only six possible directions: the agent can take only one step at a time and cannot combine different directions. Although the motor space is relatively small, the mapping of inputs to outputs results in 384 different combinations. The agent’s behaviour is controlled simply by a list of input-output matches – the interpretation rule set (see Figure 6.21). In addition to these rules, the agent possesses some persistence in its actions: if the sensory input appears to be unfamiliar, the agent continues in the previously chosen direction. This helps the colony to maintain its explorative behaviour.
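The sensory space described above can be made concrete with a small encoding sketch: three axes with four states each give 4 × 4 × 4 = 64 input states, and the persistence rule acts as the fallback when a state has no stored match. This is an illustrative reconstruction; the function names and the order in which axes are packed into the state number are assumptions.

```python
def encode_axes(sensors):
    """Encode six boolean sensors (three opposed pairs) into one of
    64 input states. Sensors 0-1 form axis 0, 2-3 axis 1, 4-5 axis 2."""
    state = 0
    for i in range(3):
        a, b = sensors[2 * i], sensors[2 * i + 1]
        state = state * 4 + (a * 2 + b)   # each axis contributes 0..3
    return state

def choose_direction(rules, state, last_direction):
    """Look the state up in the agent's interpretation rule set (a dict
    of state -> direction 0..5); if the input is unfamiliar, persist in
    the previously chosen direction."""
    return rules.get(state, last_direction)
```

With 64 possible states and 6 possible directions per state, the space of distinct state-to-direction mappings an agent can hold reflects the 384 input-output combinations mentioned above.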

The reference map is a hexagonal network of interconnected nodes. Each node contains one or more values. When an agent moves in the environment, the corresponding values on the map are adjusted according to the success of the agent. A node is just a passive piece of information – agents gain meaningful information by comparing adjacent nodes.

Figure 6.21: Interpretation rule set: red circles show activated sensors, the arrow shows the resultant movement direction. Different agents have different rules to map inputs to outputs. 75% of these rules are passed to ‘offspring’ to maintain the explorative diversity of the population

The progress in the agents’ behaviour and the development of the reference map and the interpretation rules tend to follow a standard pattern. As the first meandering agent finds a way to its target, all nodes on the reference map that the agent has stepped on are positively adjusted. This kind of learning technique, based on a long-term reward system, is classified as reinforcement learning. When the agent tries to repeat the same route, it may turn out that the interpretation rules no longer match the changed reference map. Thus, new rules have to be invented to ‘read’ the modifications made by the first agent. If the next agent is capable of finding the target in the slightly changed situation (with positively adjusted nodes), it reinforces the reference map, and appropriate rules for interpreting this map have been stored.
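A minimal sketch of this reward-based map update, assuming the map is stored as node-to-value pairs; the reward and decay rates are illustrative assumptions rather than the prototype’s actual parameters.

```python
def reinforce_route(ref_map, visited_nodes, reward=0.2, decay=0.02):
    """Reward-based map update: nodes on a successful route are
    positively adjusted, while the whole map slowly fades (the
    'pheromone trail' behaviour described in the text)."""
    new_map = {node: value * (1.0 - decay) for node, value in ref_map.items()}
    for node in visited_nodes:
        new_map[node] = new_map.get(node, 0.0) + reward
    return new_map
```

Repeated application of this update is what gradually invalidates older interpretation rules: the map the rules were learned against keeps shifting underneath them.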

Figure 6.22: Way-finding in corridor-like layouts. All shown tests were successful, as the colony was able to learn the route between two points. The reference map is laid on top of the layout of the digital model


Certain features of the digital environment facilitate way-finding. It is not always the shortest route that becomes the most popular, but usually the most suitable route for the particular kind of agents. The complexity of the environment can be assessed by the time it takes the agent colony to navigate through it. The time consumed is usually proportional to the number of changes in the direction of motion that agents have to make on their way to the target. Some routes are more difficult to learn as they require intricate sequences of such changes. For example, the U-turn is easier for agents to pass through than the S-curve (see Figure 6.22).
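The proxy for route complexity mentioned above – the number of changes in the direction of motion – can be computed directly from a path. The helper below is an illustrative sketch for grid paths; the function name and path representation are assumptions.

```python
def direction_changes(path):
    """Count the turns along a path given as a sequence of grid cells:
    a simple proxy for how hard a route is for the colony to learn."""
    steps = [(b[0] - a[0], b[1] - a[1]) for a, b in zip(path, path[1:])]
    return sum(1 for s, t in zip(steps, steps[1:]) if s != t)
```

On this measure a straight corridor scores 0, a U-turn 2 and an S-curve 3, matching the observation that the S-curve is the harder layout to learn.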

Figure 6.23: Way-finding in quasi-urban layouts. Environmental features play a crucial role in the competition between popular routes. It is not always the shortest route that is preferred by agents

A few interesting behavioural phenomena can be pointed out:

1. Agents acquire different techniques to move around in the environment. Some of them try to keep away from environmental obstacles; others, in contrast, develop a ‘wall following’ tactic.

2. Some agents tend to travel to locations where they have clear visual contact with their target, without actually getting closer to the target (see Figure 6.23).

3. Even once a route to the target has been found, some agents keep exploring and finding ways other than the established one. The first route discovered does not necessarily become the most used one (see Figure 6.23).

4. Unplanned competition between agents with different targets occasionally takes place. The nature of the pheromone trail algorithm prevents agents from using the same trail in both directions: one-way ‘traffic’ tends to force the other way out.

6.6 Stigmergic building agents

This section explores agent-based methods that can be collectively termed stigmergic building algorithms. As opposed to the previous prototypes in this chapter, stigmergic building algorithms allow the agent colony not just to alter some values in the environment for navigational purposes but also to modify the geometrical properties of its surroundings. The experiments described in this section were built over an extended period of time and represent a collection of loosely coupled methods rather than converging to a coherent prototype. The methods presented here are primarily concerned with the rules of how an agent interacts with its environment in terms of perceiving and changing the geometry of that environment. However, these methods can be combined with other algorithms governing the movement and navigation of agents, resulting in complex dynamic models. The aim of these experiments is to investigate the suitability of agent-based techniques for creating generative models featuring dynamic feedback loops between circulation and the geometrical configuration of the environment.

Stigmergic building algorithms were first explored by Theraulaz and Bonabeau (1995) and later elaborated by Bonabeau et al. (2000). These authors have developed a method that mimics the building behaviour of social insects and have achieved remarkable results in replicating insect nest architectures found in nature. The prototypes in this section build on some of their methods and try to interpret these in the context of architectural design. All prototypes in this section have been developed in the Python programming language and run in Blender. The Blender/Python platform is well suited to resolving the level of complexity involved in stigmergic building algorithms and provides sufficient speed of execution, yet maintains the agility needed for prototyping.
In order to explore the potential of stigmergic building methods and discover some likely issues associated with related algorithms, the first experiment is solely aimed at figuring out the rules of how an agent can add geometry to an existing 3D model. The algorithm described subsequently ignores the notion of circulation, and the agent is simply given a linear movement vector that propels it in a fixed direction at a constant speed. On its way the agent encounters geometrical objects and is instructed to react to these by placing new objects into the 3D model. The agent – not much different in its appearance from the 3D Loop Generator agents (see section 6.1 and Figure 6.5) – possesses sensors that can detect collisions with existing objects in the model, and has a set of rules for adding new objects.

The agent can place a number of predefined 3D objects into its digital environment. All of these objects are rectangular solids derived from the primitive cube. When the agent receives information from the environment via its sensors, it compares the input to a predefined look-up table. The rules in this look-up table define which type of object is selected for placement and also prescribe the rotation and location of the newly added object in relation to the agent’s position at the time of placement. Once the object is added to the model, it becomes subject to physics simulation and interacts with objects that already exist in the model.
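The look-up mechanism can be sketched as follows: a sensed pattern selects a rule, and the rule prescribes the object type, its offset from the agent and its rotation. This is a hypothetical reconstruction; the rule format, field names and offsets are assumptions for illustration only.

```python
def place_object(lookup, sensor_pattern, agent_pos):
    """Stigmergic building rule: match the sensed pattern against a
    look-up table and return the object to add, positioned relative to
    the agent's current location. Returns None if no rule matches."""
    rule = lookup.get(sensor_pattern)
    if rule is None:
        return None                       # unfamiliar input: build nothing
    obj_type, offset, rotation = rule
    position = tuple(p + o for p, o in zip(agent_pos, offset))
    return {"type": obj_type, "position": position, "rotation": rotation}
```

In the Blender prototype the returned object would then be handed to the physics simulation; here the result is simply a description of what to place and where.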

Figure 6.24: An ‘arcade’ built using a simple set of stigmergic rules and linear forward directed movement

If the rules in the look-up table happen to be right, this stigmergic building sequence combined with linear movement can trigger a feedback loop where the newly created objects provide further stimulus to the agent. The agent then places more objects and the stigmergic loop is closed. Structures built this way often have high degrees of continuity and repetitiveness due to the linearity of the agent’s movement. Figure 6.24 presents the result of a simple look-up table with three rules for mapping sensory input to building output. When the agent encounters a single block in the environment, it stacks a new block on top of it and another one next to it. The agent now faces two vertical stacks, which triggers the third action: the agent places a horizontal ‘beam’ on top of these stacks. Theoretically, more detailed look-up tables can be invented to produce intricate structures, but in practice this quickly becomes a very complex task. It is perhaps easier to describe higher-level targets and devise an algorithm that creates the rules in the look-up table automatically.

In order to automatically develop meaningful stigmergic building rules, an evolutionary strategy is proposed. Strategies for evolving stigmergic building rules have been explored before by Bonabeau et al. (2000) and von Mammen et al. (2005), but – according to the research conducted by the author of this thesis – these simulations have never before been carried out in the context of simulated physics. The strategy proposed herein involves four major steps: generation of initial building rules, simulation of stigmergic building with these rules, comparison of the built structures, and recombination of the initial building rules. These steps are repeated until a satisfactory look-up table containing stigmergic building rules has evolved.

The evolutionary strategy is devised to create a set of rules for building high structures – a task that can be easily measured. For that purpose the agent that executes the building process is given a continuous upward motion vector instead of moving parallel to the ground plane. If the agent encounters an unknown spatial configuration, a new rule for placing a rectangular cuboid within a certain range of the agent is created and immediately tested. If the cuboid would intersect with any of the existing objects in the model, the rule is reinvented and tested again until a suitable solution is found.
Once a cuboid has been successfully added to the model, it becomes part of the physics simulation; the agent records the respective stigmergic building rule in its look-up table and moves on. When an agent later encounters a previously built spatial configuration, it tries to execute a recorded stigmergic rule and can either validate or overrule it. The simulation runs for a defined period of time or until the agent receives no further stimuli from the model. The whole simulation is then repeated with a number of agents that all have their own individual look-up tables. The few agents that manage to build the highest structures start the next round of simulation with the look-up tables developed in the previous round, whereas the others have to start from scratch.
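The four-step strategy – generate, simulate, compare, recombine – can be condensed into a short loop. The sketch below is a schematic reconstruction under stated assumptions: `simulate(rules)` stands in for a whole building simulation and is assumed to return the achieved height together with the look-up table the agent ended up with; the 75% inheritance fraction comes from the text, everything else is illustrative.

```python
import random

def evolve_rules(simulate, generations=10, population=8, keep=2,
                 inherit=0.75, rng=random):
    """Evolutionary strategy for stigmergic building rules.

    Tables start empty because rules are invented during simulation.
    After each round the best builders pass on a fraction of their
    rules; the rest of the population starts from scratch."""
    tables = [{} for _ in range(population)]
    best = {}
    for _ in range(generations):
        results = sorted((simulate(t) for t in tables),
                         key=lambda r: r[0], reverse=True)   # compare heights
        best = results[0][1]
        survivors = [r[1] for r in results[:keep]]
        tables = []
        for i in range(population):
            if i < keep:       # elite agents inherit a fraction of rules
                tables.append({k: v for k, v in survivors[i].items()
                               if rng.random() < inherit})
            else:              # the others start from scratch
                tables.append({})
    return best
```

The fitness measure here is simply the structure’s height, mirroring the easily measurable goal chosen for the experiment.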

Figure 6.25: Tall structures built by agents by executing evolved stigmergic building rules and linear upward directed movement

The time consumed in developing a good set of stigmergic building rules is unpredictable due to the probabilistic nature of the evolutionary strategy. In some cases, agents cannot achieve more than stacking up just a few cuboids, while agents in other simulations develop their look-up tables fairly quickly and display a variety of different ways of erecting tall structures (see Figure 6.25). In some simulations, a set of building rules evolves far enough to allow agents to build indefinitely high structures. However, the described method has many disadvantages. For a start, there are limitations to the structures that a single agent can build – the agent’s movement is simply too linear to lead to the emergence of complex structures. The process of finding good rules is also very slow, mainly because of the large number of choices an agent can make in placing a cuboid. Simulated physics algorithms (based on Smith 2007) introduce some redundancy to the model, as placed cuboids are adjusted according to the gravitational force, but that is not enough. There is no direct relationship between a successful building action and the subsequent actions of an agent; the success of building a desired structure depends too much on chance.


Despite the many limitations of the proposed agent-based method of developing tall structures, the experiment demonstrates that it is feasible to devise a system where stigmergic building rules are evolved during the simulation rather than explicitly predefined by the programmer. As long as the built structure can be computationally evaluated, the evolutionary strategy can help to reduce the bulk of work that the development of stigmergic building rules would otherwise require. Once a set of building rules has evolved and become robust enough, it can be used in a different context.

Figure 6.26 shows a simulation where a structure is created as a result of the collective effort of an agent colony in which all individuals share the same stigmergic building rules. The movement of agents in the digital environment is randomised; however, in order to reduce the number of possible sensory input configurations, agents are always aligned perpendicular to the cardinal directions of the model’s coordinate system. Even with a limited set of stigmergic rules (three rules were used), freely moving agents can build structures that are far more complex than those built by agents constrained to linear movement. Unlike the previous experiment, which implemented simulated physics, this one uses a qualitative method for testing the stability of built structures. The method is based on the connector-socket system (see section 5.4 for details), where a new block can be attached to an existing one only if both blocks have the respective joints. This simplified structural stability computation reduces the demand for computational resources and allows simulations to run with a larger number of agents. However, the number of agents used in the simulation affects nothing but the speed of the building process, because the agents’ movement is not coordinated through the environment in any way.
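The connector-socket test can be expressed as a single predicate: a block may attach on a given face of an existing block only if the existing block offers a connector there and the new block has a socket on the opposing face. The data layout below is an illustrative assumption; section 5.4 defines the actual system.

```python
OPPOSITE = {"top": "bottom", "bottom": "top",
            "north": "south", "south": "north",
            "east": "west", "west": "east"}

def can_attach(existing, new, face):
    """Qualitative stability test in the spirit of the connector-socket
    system: attachment on `face` requires a connector on the existing
    block and a matching socket on the opposing face of the new one."""
    return face in existing["connectors"] and OPPOSITE[face] in new["sockets"]
```

Because this check is a constant-time set lookup rather than a physics step, it is cheap enough to run for every placement attempt of a large colony.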
An obvious way of improving the building simulation algorithm is to add rules governing how agents change their movement trajectory when they encounter specific spatial configurations. These rules can be added to the existing look-up table of building rules so that a specific spatial configuration triggers both building and subsequent movement activity. The following experiment scrutinises how movement and building activity can be synchronised through the environment.


Figure 6.26: A sequence of images showing the collective building activity of an agent colony. Agents are placing blocks of various sizes according to a shared set of stigmergic rules

As opposed to the previous experiments, this simulation takes place in discrete space where agents can only move from a cell to a neighbouring one in a 3D lattice (see Figure 6.27). Each cell in the lattice can accommodate a single building block if it has enough structural support from its neighbours; the stability value of each block in the built structure is computed with a cellular automata method (see section 5.4) that defines how well each block is supported. Agents can add new blocks to the model according to their internal rules, which are developed during the simulation. These rules specify where agents deposit additional blocks in their immediate neighbourhood of 26 cells (the 3D Moore neighbourhood) around the agent’s current location. A similar kind of setup has previously been used by Theraulaz and Bonabeau (1995) in their studies of stigmergic building algorithms. However, unlike in the proposed prototype, their agents move around the model randomly and the simulation does not incorporate any structural calculations.
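A cellular-automata-style stability computation of this kind can be sketched as an iterated relaxation over the lattice: ground-level blocks are fully supported, and every other block inherits a reduced fraction of the best support available among its face neighbours. The decay factor and iteration count are illustrative assumptions, not the values used in section 5.4.

```python
def stability(blocks, iterations=10, decay=0.9):
    """CA-style stability values for blocks in a 3D lattice.

    `blocks` is a set of (x, y, z) cells; z == 0 is the ground level.
    Returns a dict mapping each block to a value in [0, 1], where 1
    means directly grounded and 0 means unsupported."""
    values = {cell: (1.0 if cell[2] == 0 else 0.0) for cell in blocks}
    for _ in range(iterations):
        for (x, y, z) in blocks:
            if z == 0:
                continue
            neighbours = [(x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
                          (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)]
            support = max((values.get(n, 0.0) for n in neighbours),
                          default=0.0)
            values[(x, y, z)] = decay * support
    return values
```

These values play the role of the sensory input described below: 0 for an empty cell, a small value for a weakly supported block, and 1 for a block standing directly on the ground.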

Figure 6.27: Agglomerations of building blocks placed by agents. Each agent evolves its own building rules during the simulation


Stigmergic building rules in this experiment are slightly more complicated than in the previous examples. An agent has no look-up table that specifies the building activity according to the received sensory input. Instead, an agent is designed as a single-layer perceptron with sensory input from the 26 neighbouring cells. The sensory inputs are fully connected to the respective 26 building outputs via a series of weights. The input values are received from the built structure in the agent’s local neighbourhood and indicate the stability values of the surrounding blocks. These input values, ranging from 0 to 1 (0 denotes an empty cell, 0.1 a weakly supported block and 1 a block supported directly on the ground), are multiplied by the connection weights and mapped to output values, where an activation function defines in which neighbouring cells new blocks are deposited. The agent’s movement is computed in a similar way – the input values are multiplied by the connection weights and the neighbouring cell with the highest total value is occupied by the agent in the next time step. There is an additional movement constraint on agents – they can only move along the surface of the existing blocks or across the ground plane. This constraint ensures that the agent always receives sensory information from the environment and does not ‘fly’ around high above the ground level. However, agents can leave the ground level by building higher structures and then climbing them.

An agent can adjust the connection weights between sensory inputs and building outputs by evaluating the success of newly built structures. It simply records the last building output and compares it to the status of the structure after some time. If the blocks that were added to the model still persist, the respective connection weights are increased. If these blocks have become unstable, the weights are decreased. This way, an agent can ‘learn’ what structures are likely to stand up, and it can save its resources for building something that is well supported and unlikely to collapse. The connection weights can also have negative values, which means that, besides adding blocks, agents can remove them from the model. This can become useful when an agent gets locked into a densely built environment and cannot find its way out. Agents have only a limited number of blocks that they can deposit during the simulation, but they can gain more by ‘eating’ existing blocks in the model.
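The perceptron architecture and the persistence-based weight update can be sketched as follows. The thresholds, initial weights and learning rate are illustrative assumptions; only the 26-input/26-output topology and the reinforce-or-weaken rule come from the description above.

```python
class BuilderPerceptron:
    """Single-layer perceptron mapping the stability values of the 26
    Moore-neighbourhood cells to 26 build/remove outputs."""

    def __init__(self, threshold=0.5):
        self.weights = [[0.1] * 26 for _ in range(26)]
        self.threshold = threshold

    def decide(self, inputs):
        """Per output cell: +1 build a block, -1 remove one, 0 do nothing.
        Negative weights make removal possible, as noted in the text."""
        actions = []
        for out in range(26):
            total = sum(w * x for w, x in zip(self.weights[out], inputs))
            actions.append(1 if total > self.threshold
                           else -1 if total < -self.threshold else 0)
        return actions

    def learn(self, out, inputs, persisted, rate=0.05):
        """Reinforce the weights behind a block that survived; weaken
        them if the block has since become unstable."""
        sign = 1.0 if persisted else -1.0
        self.weights[out] = [w + sign * rate * x
                             for w, x in zip(self.weights[out], inputs)]
```

Movement would be computed from the same weighted sums by occupying the neighbouring cell with the highest total, subject to the surface-following constraint described above.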

There are a few interesting phenomena in the behaviour of the agent colony that are worth mentioning. Whereas a single agent seldom produces surprising results, individuals in the colony start competing over the available resources. Even though there is no direct communication mechanism between the agents, their activities are coordinated through the environment. Occasionally two or more agents start changing the same structure at a particular location by adding and removing blocks simultaneously. Quite often an agent places a block while removing another block from a different location. If a nearby agent then removes the placed block and adds a block where the first agent has just removed one, these two agents may become locked into a loop in which both of them maintain their current energy levels while building the same structure over and over again. If these agents add more than they take from the environment, new structures can quickly emerge from their collaborative effort. This behaviour – also spotted during live simulations – is not predefined anywhere in the algorithm but emerges from the simple rules at the level of individuals and is fuelled by stigmergic mechanisms. It proves the concept that complex behaviour can be achieved by coordinating the behaviour of simpler agents through the environment.

Although the experiment described above reveals the collaborative building powers of the colony, it suffers from a poorly designed movement algorithm. The perceptron architecture seems well suited to the task of adding and removing building blocks, but it is not a good solution for generating continuous circulation. Having too much freedom of movement and sensors in all directions around the agent can work against the purpose of generating ordered movement patterns: it is very difficult to achieve continuous movement without forward-directed sensors (see Figure 6.3 and Figure 6.5).


Figure 6.28: Generated ‘pheromone’ trails and the respective structures built by agents. Given a simple building rule, agents were capable of channelling their movement but often failed to establish continuous circulation patterns

A way to introduce a notion of circulation into the above model is to replace the perceptron-based movement algorithm with a method described earlier in this chapter (see the Loop Generator prototype in section 6.1). Combining the ‘pheromone’ following algorithm with the stigmergic building algorithm is a demanding task. Both of these algorithms produce consistent and potentially useful output for creating conceptual architecture diagrams when executed separately, but when executed sequentially, unforeseeable complexities may arise. One has to make sure that the output from the building algorithm does not stop the development of continuous movement trails. Newly added blocks can create barriers that hinder circulation to a certain extent, but these blocks should not be placed in the middle of heavily used routes. On the other hand, there is little reason to have structure in parts of the model where agents do not go. There needs to be a finely tuned balance between the movement algorithm and the placed blocks, as well as between the building algorithm and the existing circulation routes.

One way to solve this task is to introduce additional conditionals and functions to the algorithms. Blocks, for example, should remain at a location for an extended time only if they are occasionally visited by agents. If no agent comes close to a block for a while, it ‘decays’ and is removed from the model. Similarly, agents can emit ‘pheromone’ only if they occasionally come into contact with blocks. As mentioned above, the building routine also needs to be changed – blocks should not be placed at locations of high ‘pheromone’ concentration. These additional rules ensure that the development of the ‘pheromone’ gradient and the built structure happens in parallel, each dependent on the other; the prototype has to feature quite sophisticated dynamic feedback mechanisms. Figure 6.28 illustrates two representations of the same model: the circulation diagram and the built geometry.
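The additional conditionals can be sketched as one coupled update: blocks decay when unvisited, and building is suppressed where ‘pheromone’ concentration is high. The data structures, thresholds and radius below are illustrative assumptions rather than the prototype’s actual parameters.

```python
def update_blocks_and_trails(blocks, pheromone, agent_cells,
                             visit_radius=1, decay_after=20,
                             build_threshold=5.0):
    """One step of the mutual-dependence rules.

    `blocks` maps a cell to the number of steps since an agent last came
    near it. Unvisited blocks eventually 'decay' and are removed.
    Returns the surviving blocks and a predicate telling whether a new
    block may be placed at a cell (not on a heavily used route)."""
    surviving = {}
    for cell, idle in blocks.items():
        near = any(abs(cell[0] - a[0]) <= visit_radius and
                   abs(cell[1] - a[1]) <= visit_radius for a in agent_cells)
        idle = 0 if near else idle + 1
        if idle < decay_after:              # unvisited blocks decay away
            surviving[cell] = idle

    def may_build(cell):
        return pheromone.get(cell, 0.0) < build_threshold

    return surviving, may_build
```

The symmetric rule – that agents emit ‘pheromone’ only when near blocks – would sit in the agents’ movement loop and use the same proximity test.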

Figure 6.29: Development of built structures that form circulation channels

The development of the model is generally less dynamic than the emergence of ‘pheromone’ trails in Loop Generator. A strong positive feedback mechanism is present in the simulation: the aggregation of blocks happens around heavily used trails, which, in turn, is likely to increase the usage of these trails as agents encounter more blocks and deposit more ‘pheromone’. Once a clear circulation corridor has developed (see Figure 6.29), agents cannot easily escape it. Agents find themselves constantly following the same route because of the strong ‘pheromone’ concentration and also because the blocks aligned at both sides along the route prevent them from choosing alternative directions. However, smaller corrections to routes take place all the time, leading to gradual adjustment and optimisation of the circulation diagram.

The problem with combining continuous movement and stigmergic building algorithms becomes evident when one chooses to experiment with complex building rules or opts to increase the agents’ sensory space. Since agents roam freely around the model, they confront existing structures from various standpoints, and the sensory input can be completely different depending on which way they approach a structure. Stigmergic building algorithms can be considerably simplified to meet the purpose for which the model is built in the first place. The prototype is more likely to be used at the early conceptual stage rather than at later stages in the design process, and the level of detail in the model should reflect that. It is relatively uncommon in architectural design practice to solve the structure of a building before circulation and functional diagrams have been conceived. Hence, there is an argument that the building blocks should represent spaces rather than structure.

Figure 6.30: Sequence of images showing dynamic feedback between circulation routes and built blocks

The following experiment is built on top of the previously described prototype that combines stigmergic building algorithms with movement algorithms. In this case, however, building blocks should be seen as functions or spaces in the modelled building. This allows the stigmergic building algorithm to be much simpler and is possibly more useful for architects at the conceptual design stage. The prototype abandons the idea of agents as trainable perceptrons whose building activity is triggered by certain spatial configurations. Instead, the building algorithm of an agent is triggered when the ‘pheromone’ concentration in the nearby environment is above a defined threshold. This ensures that new blocks are placed only at locations with enough ‘traffic’ around. The circulation and the blocks are dependent on one another: blocks need to remain accessible to agents, and agents can only emit circulation ‘pheromone’ if they encounter existing blocks. This feedback mechanism leads to the dynamic development of the model (see Figure 6.30), where the spatial configuration and the circulation emerge in parallel; one can always be sure that all the spaces (blocks) are accessible via circulation routes which, in turn, run in close proximity to these spaces.

As opposed to the simplified stigmergic building algorithm, the algorithm controlling the development and stability of blocks is made more sophisticated. In order to make a block more dynamic and responsive to the circulation and to existing blocks, a newly created block is allowed to change its shape and find a good fit with respect to other elements in the model. Once a block is placed by an agent, it starts to grow and occupy the areas next to the location where it was originally created, and keeps developing until it reaches a certain size. The growth of the block is controlled by a cellular automata based algorithm (Adamatzky 2001, p. 11-17) – a method of simulating the diffusion of chemicals in real environments. This method has many advantages. Most notably, a block can take any shape and adapt to the spatial configuration of existing blocks in the model. The algorithm can also be modified to take the circulation into account – diffusion can be limited in areas with high ‘pheromone’ concentration and thus prevent the blocks from hindering the movement of agents along highly used circulation routes. The development of the block structure takes place over an extended period of time. The first blocks are placed at ground level and encourage circulation routes to grow longer; as a consequence, more blocks are added in open, block-free areas. If an agent encounters a block on its way, it avoids the block by steering away from it or by trying to climb across it. If the agent favours climbing, it can place additional blocks on top of the existing ones.
As long as the ‘pheromone’ concentration is high enough, the agent does not care whether the newly placed block is grounded or not. But the block itself is subject to a simulated gravitational force and cannot persist without sufficient support from other stable blocks. Thus, the development of block structure at higher levels is less likely than at ground level.
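A minimal sketch of the cellular-automata-style growth rule described above, written in Python rather than the NetLogo/VBA of the original prototypes; the function name `grow_block`, the 4-neighbour expansion and the pheromone threshold are illustrative assumptions, not the thesis implementation.

```python
def grow_block(seed, target_size, pheromone, threshold=0.5):
    """Grow a block from `seed` by CA-style expansion into neighbouring
    cells, skipping cells with high 'pheromone' so that heavily used
    routes stay clear.  `pheromone` maps (x, y) -> concentration."""
    block = {seed}
    frontier = [seed]
    while frontier and len(block) < target_size:
        x, y = frontier.pop(0)
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if len(block) >= target_size:
                break
            if (nx, ny) in block:
                continue
            if pheromone.get((nx, ny), 0.0) >= threshold:
                continue  # do not grow into a busy circulation route
            block.add((nx, ny))
            frontier.append((nx, ny))
    return block
```

Because growth simply flows around high-concentration cells, the resulting block can take any shape and adapt to the circulation around it, which is the property the text highlights.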


Figure 6.31: Emergence of vertical circulation

Figure 6.31 depicts a situation where several vertical circulation routes – ‘stair cores’ – have emerged simultaneously. As there are no blocks on the ground level to support new blocks at higher levels, these routes are likely to disappear. These cores often appear at the edges of the simulated world where agents’ movement is restricted and they look for alternative directions. In some cases, vertical circulation routes emerge when agents are forced to change their heading as a result of existing blocks on their way. In these cases, it is more likely that a new layer of blocks is started at a higher level because the existing block on the ground that caused this vertical movement provides enough support.

Figure 6.32: Development of vertical circulation and stacked blocks

It may happen that a few blocks are stacked up and a tower-like formation appears (see Figure 6.32). However, this usually requires several conditions to be satisfied. Firstly, the vertical circulation has to be constrained in order to prevent agents from escaping it. In the case of the example in Figure 6.32, the ‘stair core’ was squeezed between stacked blocks and the corner of the simulated world. Secondly, horizontal movement routes have to feed into the vertical one. This guides more agents to the vertical core, and the bigger circulation loop is closed. Once these requirements are satisfied, a vertical stack of blocks may appear.

Figure 6.33: Selected outputs of the simulation

The main difficulty with vertical stacks is maintaining the continuity of circulation – agents are only seldom capable of generating uninterrupted circuits that incorporate vertical elements. Without loops that keep the agents constantly on the same track, the circulation is bound to change. Although vertical stacks occasionally occur, it is more common for the simulation to lead to single- or double-layered agglomerations of blocks. Figure 6.33 presents several models where the movement routes are organised into circuit networks that are likely to persist longer than disconnected networks. Some of these examples have fairly minimal circulation compared to the number of blocks served by it. This is partly because of the probabilistic nature of the building and movement algorithms, but it also depends on some variable parameters in the algorithm. These parameters allow one to gain control over the simulation and drive it in the desired direction. For example, in order to manipulate the compactness of the block model, one simply has to modify the amount of ‘pheromone’ that agents drop in the environment. The same can be done by changing the ‘pheromone’ evaporation speed. One can also control the size of building blocks and can choose to stop the simulation once a certain number of blocks have been placed. This functionality is particularly appealing in the context of an architectural brief where a number of rooms with defined sizes have to be accommodated on a site.
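The relationship between the two knobs mentioned here – deposit amount and evaporation speed – can be made explicit with a one-line update rule. Assuming a cell receives a fixed deposit per tick and loses a fixed fraction per tick, the concentration converges to deposit / rate, which is why either parameter can be used to tune compactness. The function below is an illustrative sketch, not code from the prototype.

```python
def steady_state(deposit, evaporation_rate, steps=1000):
    """Iterate c <- (1 - rate) * c + deposit.

    The fixed point is deposit / evaporation_rate, so raising the deposit
    or lowering the evaporation speed has the same effect on the
    equilibrium 'pheromone' level."""
    c = 0.0
    for _ in range(steps):
        c = (1.0 - evaporation_rate) * c + deposit
    return c
```

Doubling the deposit (or halving the evaporation rate) doubles the equilibrium concentration, which in the prototype translates into stronger, more compact trails.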

6.7 Space forming agents

The following prototypes are clearly distinguishable from all the other prototypes described earlier in this chapter. Although they are based on agent modelling techniques and the basic modelling unit is a mobile agent, the notion of circulation in these prototypes is highly speculative. Agents in this section do not leave traces of circulation behind when they move around, nor do they organise other elements in the model to create open space for movement. These agents belong to self-organising colonies – they are capable of forming structures that can be interpreted as spatial layouts where certain features of the structure can be seen as circulation paths. Similar or related experiments using agent colonies for creating spatial patterns have been carried out by Adamatzky and Holland (1998) and, in the architectural context, by Coates (2010).

Figure 6.34: Uniform distribution of agents following a simple rule in the simulated world

Whereas in the previous prototypes there are always elements in the model that are independent of agents, a colony of space forming agents can be completely self-referential and organise itself without any additional objects in its environment. This does not mean that these agents exist in a vacuum. That would violate the notion of the system–environment distinction – an essential requirement for defining the agent in the first place. A single agent still occupies the environment and interacts with it, but this environment can consist solely of other similar agents. Space forming agents do not necessarily need to perform complicated tasks in order to achieve interesting results at the colony level. The simple repulsive behaviour of getting as far as possible from the closest fellow agent in the colony leads to the uniform distribution of agents across the simulated universe and to the emergence of a global hexagonal structure (see Figure 6.34). With an additional modification, this prototype can be programmed to visualise the space around each agent and to reveal the topological skeleton of a uniform yet organic space. In order to do so, one can introduce another class of agents that are repelled by the closest agent of the first class but ignore agents of their own class. Figure 6.35 illustrates the uniform distribution of first-class agents (larger dots) and the development process of the topological skeleton – also known as the Voronoi diagram (Adamatzky 2001, p. 36-65) – that is formed by the second class of agents (smaller dots). A similar approach of approximating Voronoi diagrams in collectives of mobile finite automata is described by Adamatzky and Holland (1998).
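The repulsion rule can be stated very compactly. The sketch below is a hypothetical Python rendering of the behaviour (the original was a NetLogo model); the function name, the step size and the bounded square world are assumptions made only for illustration.

```python
import math

def step_away_from_nearest(agents, speed=0.1, size=10.0):
    """One update: each agent moves directly away from its nearest
    neighbour, clamped to a size x size world.  Repeated, this drives
    the colony towards a uniform, roughly hexagonal distribution."""
    new_positions = []
    for i, (x, y) in enumerate(agents):
        # find the closest fellow agent
        nx, ny = min((p for j, p in enumerate(agents) if j != i),
                     key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
        d = math.hypot(x - nx, y - ny) or 1e-9
        # step away along the line connecting the two agents
        x = min(size, max(0.0, x + speed * (x - nx) / d))
        y = min(size, max(0.0, y + speed * (y - ny) / d))
        new_positions.append((x, y))
    return new_positions
```

With two agents the pair simply drifts apart until the world boundary stops them; with many agents the same local rule spaces the whole colony evenly.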

Figure 6.35: Formation of a cellular structure

If the resultant distribution of agents is now analysed in spatial-architectural terms, the larger agents would represent particular spatial functions and the smaller agents would be the dividing element that confines the space allocated for each of these functions. But equally, the skeleton formed by smaller agents can be seen as a circulation network. To be more precise: the structure reveals the potential diagram for optimised circulation. Even though it is a very abstract diagram, the output of this repulsion-based model does meet the basic requirements for circulation: it is continuous and it connects and provides access to all of the confined spaces. The network diagram is also intrinsically optimised – each smaller agent is on a medial axis (Dey and Zhao 2003) between the two closest larger agents.

The simplicity and universality of the repulsion model make it an ideal starting point for building more intricate prototypes. The abstract nature of the generated skeleton network can be made more tangible when the prototype is redeployed in the context of model-specific constraints. For this purpose, agents can be redesigned to recognise and react to additional cues in their environment that are not other agents. For example, agents can be instructed to stay away from certain objects in the model or, on the contrary, be attracted to other elements.

The original repulsion model is built in NetLogo, which is a well-suited development environment for abstract agent-based prototypes. However, it sets some limitations on how much extra functionality can be added. If one needs to gain more control over the output, it is advisable to migrate the prototype to a different development environment. The following examples are all programmed by the author of this thesis in VBA and run in a professional CAD application – MicroStation. Although much slower at executing multi-agent simulations, this environment facilitates the interaction between agents and other modelling elements, allows the designer to contribute to the model, and provides better management tools for the extended coding exercise. In order to improve the speed of execution, the agent-based approach of approximating the Voronoi diagram is replaced by a deterministic algorithm (Dey and Zhao 2003) that calculates the Voronoi cell surrounding each individual agent.
Thus, an agent consists of a mobile nucleus that moves around and interacts with other agents, and of a cellular area that surrounds this nucleus. In contrast to the NetLogo prototype, the CAD version enables the designer to constrain the movement of agents to a custom-shaped bounded region (see Figure 6.36). The designer is also given a set of interactive tools for locating agents and defining the target area that each agent is geared to obtain. This target area defines the desirable size of the Voronoi cell occupied by the agent. The agent has two ways of achieving its target: it can tweak its repulsion strength or change its internal pressure according to the difference between the desirable and the actual size. By increasing the repulsion strength the agent pushes other agents further away so that its cell can grow larger; by increasing the internal pressure the agent can push the corners of its cell further away from its nucleus and win more space this way. Provided that there is enough available space, an agent can quickly achieve its target size by manipulating these two parameters. If space is in short supply, agents compete for it by increasing the pressure and repulsion strength, but they gain little if anything at all because all the other agents are doing exactly the same. However, the colony does achieve an equilibrium state, and even if the individual targets are not met, each agent occupies a territory that is proportional to its target area.
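The two balancing mechanisms can be read as simple proportional feedback on the cell-size error. The Python below is a speculative sketch of that rule – the original VBA implementation is not reproduced here, and the dictionary representation of an agent and the gain value are assumptions.

```python
def adjust(agent, actual_area, gain=0.05):
    """Proportional feedback: increase repulsion strength and internal
    pressure when the agent's Voronoi cell is smaller than its target
    area, decrease them when it is larger.

    `agent` is a dict with keys 'target', 'repulsion' and 'pressure'
    (a hypothetical representation)."""
    error = (agent['target'] - actual_area) / agent['target']
    agent['repulsion'] = max(0.0, agent['repulsion'] * (1.0 + gain * error))
    agent['pressure'] = max(0.0, agent['pressure'] * (1.0 + gain * error))
    return agent
```

When every agent runs this rule and space is scarce, the parameters inflate together and cancel out, which is consistent with the equilibrium described in the text: cell sizes settle in proportion to the targets rather than at the targets themselves.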

Figure 6.36: Self-organisation of cellular agents in a bounded region

The ability to constrain agents within a predefined region is useful functionality if one knows the exact shape and size of this region. The algorithm works reasonably well if the area matches the accumulated target area of the agents, even if it leaves the colony no freedom to find its own outline shape. However, if one wants to experiment intuitively with different regions or different numbers of agents, this approach quickly becomes tiresome. Luckily, there is a way to facilitate exploration while maintaining a degree of control over the outline shape of the colony. This can be done by introducing a different type of agent that brings flexibility into the model (see Figure 6.37). The new type of agent has no target area and does not have to keep its repulsion strength or internal pressure parameters constant. Therefore, these agents act as a kind of pneumatic cushion that contracts if other agents need more space and expands if others get smaller. One can choose to place these ‘cushions’ along the perimeter of the predefined region and let the colony define its own outline.


Figure 6.37: Self-organisation of cellular agents in a semi-confined area

6.8 Discussion

All the prototypes presented in this chapter are used for experiments that are helpful in studying bottom-up modelling techniques. These techniques can be used by computational designers who wish to integrate the notion of circulation into their spatial models and, amongst other architectural concerns, evaluate their models in terms of accessibility and navigation. All the proposed prototypes are just computational sketches, and none of them can be used to generate meaningful circulation diagrams unless deployed in the context of project-specific constraints and controlled by a thoughtful designer. However, these prototypes reveal a great deal about the principles of how agent-based models can be designed and built in order to facilitate the emergence of circulation without explicit top-down definition. Emergence is the key that makes it possible to integrate bottom-up prototypes into larger computational models – it allows circulation to grow dynamically with the rest of the model.

According to the definition outlined in detail in section 10.3, many prototypes described in this chapter can be classified as generative design models. All of these prototypes feature feedback mechanisms that give them the ability to change and adapt. A generative design prototype can adapt to changes precisely because of its inherent feedback mechanisms and because the proactive parts of the model (agents) constantly monitor their environment, reacting appropriately to different stimuli. There is no fixed or predefined output from a generative prototype, and one can use it to produce a variety of diagrams. The generated diagram depends on the initial configuration of the model, on the designer’s interactive input and on computer-generated random values. There is great potential benefit in using such flexible generative models in the design process – they can be combined with other computational modelling methods and they can help in constructing a complete computational model of spatial schemata.

There are essentially three types of circulation generating agents presented in this chapter: path laying and way finding agents, space building agents, and space forming agents. Whereas the first two types are concerned with altering passive objects in their environment according to defined rules, the agents of the third type are spatial entities, and the model structure is constructed of these agents themselves. Space forming agents perceive other agents directly and form spatial structures purely by repositioning themselves in relation to their environment. In such models, there is no need for static ‘building blocks’ as the agents themselves represent the environment to other agents. These agents can be the simplest type of agents to program, but it is equally possible to build very intricate architectures of space forming agents. The structure made of agents can be interpreted as a circulation diagram. However, space forming agents can also be used together with path laying agents in order to achieve diagrams with a better definition of circulation.

Path laying and way finding agents deploy several well-known techniques to navigate their environment. The two most common computational methods are hill-climbing and stigmergy – the first helps agents to navigate and the second ensures that this navigation is coordinated throughout the colony. Stigmergy is also an essential component of prototypes that involve space building agents. These agents complete the stigmergic cycle by first sensing the environment and then changing its structure, which, in turn, creates new sensations.
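As a reminder of what hill-climbing amounts to in these prototypes, the following sketch moves an agent one cell towards the highest neighbouring ‘pheromone’ value, staying put at a local maximum. It is an illustrative Python fragment, not code from any of the prototypes, and the grid size and field representation are assumptions.

```python
def hill_climb(pos, field, grid=8):
    """One hill-climbing move on a grid: step to the neighbouring cell
    with the highest field value (e.g. 'pheromone' concentration);
    the current cell is included so a local maximum is a fixed point."""
    x, y = pos
    candidates = [(x, y)] + [(x + dx, y + dy)
                             for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                             if (dx, dy) != (0, 0)
                             and 0 <= x + dx < grid and 0 <= y + dy < grid]
    return max(candidates, key=lambda c: field.get(c, 0.0))
```

Stigmergy then closes the loop: the agents' own deposits reshape the field that this rule climbs, which is what coordinates navigation across the colony.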
The changes to the environment can be made in many different ways – by adding or subtracting objects or by distorting the existing geometry in the model. On top of that, the environment can have its own rules that further modify the inactive objects added by agents. These objects, however, cannot be considered agents and possess no goal-directedness or pro-activity.

Every agent-based prototype needs to be validated in order to prove that its generated output is suitable to represent circulation in buildings and urban environments. However, none of the prototypes presented in this chapter is validated against measurements of connectivity, permeability, length of circulation, or in fact against any other quantitative parameter that can be compared with the measurements of existing and validated building or urban layouts. Instead, these prototypes are mainly concerned with the most basic requirement of circulation: providing access. Accessibility can be granted in two ways. Firstly, discrete coordinates in the model that are never visited by agents during the simulation are disregarded as accessible spaces; the rest of the model can then be said to be accessible. Secondly, agents can be located strategically in the environment and programmed to move between all of the points that need to be accessible. To put it another way: any model using one of these measures is internally validated in terms of access.

Besides the accessibility validation, many of the prototypes offer great opportunities to incorporate validation and quantitative evaluation routines. Agents can easily be programmed to measure the distances they travel, which, in turn, can be used to calculate the average trip lengths in the whole circulation network. Also, the generated diagrams can be assessed in terms of connectivity and permeability using additional evaluation algorithms. An example of how that can be done is presented in Chapter 7. Some of the prototypes incorporate circulation optimisation mechanisms – ant colony optimisation, for example – that keep the length of circulation diagrams near optimal. However, the full validation of circulation diagrams can only be carried out when a prototype is deployed in the design context and, eventually, it is down to the designer to use the prototypes appropriately in order to generate compatible output. Designers are encouraged to use an experimental approach similar to the one that proved extremely useful in building and testing the prototypes. Since it is very hard to estimate the behaviour of a prototype and the interplay between different control parameters before it has actually been deployed, a rapid prototyping approach is better suited for the purpose.
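Both evaluation measures mentioned here reduce to very small computations, sketched below in Python; the function names and the grid representation are illustrative assumptions rather than routines from the prototypes.

```python
def accessible_fraction(visited, grid=(10, 10)):
    """First validation measure: the share of grid cells visited at
    least once during the simulation.  Cells never visited by agents
    are disregarded as accessible space."""
    total = grid[0] * grid[1]
    return len(set(visited)) / total

def average_trip_length(trips):
    """Mean of the distances travelled by agents between targets,
    usable as a quantitative score for a generated network."""
    return sum(trips) / len(trips)
```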
Many of the prototypes described in this chapter were the result of recurring development cycles and trial-and-error sessions that allowed the final algorithms and appropriate parameters to ‘evolve’. Considering all that is said above, the prototypes in this chapter are experiments that serve the purpose of offering architects alternatives to traditional methods for solving spatial puzzles. Although alien to many contemporary architects, computational prototypes can help to produce output that architects are more familiar with – diagrams. A diagram is an abstract machine that many architects find useful in their work; diagrams can be used by architects to develop holistic design concepts and to construct less abstract representations of the built environment. Most of the images in this chapter depict constructive diagrams that represent both topological and geometrical forms of circulation, but also include requirement parameters such as frequency of use. These constructive diagrams are snapshots of the simulation and not the end product. Being in constant change, a diagram is never completed and, as it remains responsive to external change, can be driven by designers to suit project-specific needs.


Chapter 7: Case study 1 – a multi-agent system for generating street layouts

The previous chapter introduced several prototypes of multi-agent systems for generating circulation diagrams. This chapter builds on the knowledge gained through creating and testing these prototypes, and shows how it is possible to deploy multi-agent systems in the context of architectural design projects. The two case studies presented in this thesis support the argument that multi-agent systems can render value for the design process. Although this value can be measured in terms of efficiency, the main contribution of the proposed computational approach lies in demystifying the design process – making it more explicit.

This chapter gives an overview of the development of a computational tool that assists urban designers in modelling street networks for large urban areas. The site chosen for testing was Nordhavnen – an old harbour area on the outskirts of Copenhagen (see Figure 7.1). This particular site was chosen for two reasons. Firstly, the site was the subject of an open international ideas competition and offered a great opportunity to develop and test computational tools while working together with a team of professional designers. Secondly, the sheer size (200 hectares) and the complexity of the study site encouraged the development of computational methods that would give the team some confidence in making design decisions. In order to provide sufficient evidence for this decision-making process, the proposed tools were geared to facilitate generating and evaluating multiple scenarios. The computational design methodology for the Nordhavnen competition involved developing a few add-ons to a CAD package (Bentley MicroStation) and a generative design model for generating street layouts. The latter forms the main body of this case study and is subsequently referred to as the Nordhavnen model.
The Nordhavnen model was built in NetLogo and utilised a multi-agent system for generating circulation systems in a bounded region. It borrowed some elements from the prototype described in detail earlier and became, essentially, a successor of the Network Modeller prototype (see section 6.4).


The Nordhavnen international competition invited teams of architects, urban designers and other related professionals to create a vision for a sustainable city of the future. Entrants were asked to propose a solution for a large industrial peninsula with several basins and quays in northern Copenhagen, with an option to claim additional land from the sea. The competition brief envisaged that the new piece of Copenhagen would house 40 000 new residents and create the same number of new jobs. The fully developed Nordhavnen was required to offer its residents and visitors all the qualities of urban living, with an emphasis on sustainability issues. One of the major requirements in the brief asked for a sustainable transportation system promoting walking, cycling and public transport, and prioritising these over the use of private cars. The challenge for the design team was to propose an economical transport network that would meet the needs of local residents but also serve the passenger and industrial harbours on the peninsula. In order to meet this challenge, the team devised a design methodology including computational tools and models. One of the proposed models – the Nordhavnen computational model – was intended to help the team design a road network for the site.

Figure 7.1: Aerial photo of Nordhavnen (Google 2011a)

One of the biggest challenges regarding circulation in Nordhavnen was to connect the peninsula to the rest of Copenhagen. As the peninsula is joined to the mainland by a narrow strip of land, all the traffic from Nordhavnen had to pass through a narrow passage. Besides attractors for local traffic, the site also featured an existing industrial port – a major attractor of through traffic. Although the future Nordhavnen was mainly to accommodate local residents and local businesses, the competition brief also envisaged a new passenger terminal on the east side of the peninsula. It was obvious from the start that the links to Copenhagen and the harbours would become the major shapers of the road network in Nordhavnen.

Besides some obvious constraints and official requirements expressed in the brief, there were some crucial decisions made by the design team that influenced the development of the Nordhavnen computational model. At an early stage of the design process, the team decided to implement a gridded city pattern. This was thought to simplify the design task and help to meet the floor space requirements while maintaining flexibility and variety in the urban environment. The grid allowed the team to develop generic building typologies and manage height and density efficiently across the site. Another important design decision made by the team was to concentrate higher buildings in the centre of the peninsula in order to leave the seaside less densely populated and possibly dedicated to recreational activities. This was considered a natural way of dealing with an urban environment of the size and predicted population of Nordhavnen.

7.1 Developing the prototype

The objective of developing the Nordhavnen computational model was to create a flexible and easy-to-use generative design application. It was hoped that the model would become generic enough to be used later in other urban design projects. The development time was far too long to justify a computational model that would be used just once – the model had to fit into the traditional practice of urban design projects. However, the intent was not to create an all-purpose road network generator, but to develop a model that produced gridded city patterns taking into account additional site-specific information.

There were four main requirements that the circulation networks generated by the model were supposed to meet. The first and most important was the accessibility requirement – from each urban block in the grid layout one had to be able to reach any other block via the circulation system. The second requirement was variety – the model was intended to produce several different circulation scenarios even with identical input. The last two requirements were concerned with the validity of the generated diagrams: the network layout had to be reasonable and suitable for circulation in settlements. In order to be reasonable, the total length of the network and the average trip length within the network had to be optimised to a certain extent. In order to be suitable for settlements, the network had to create sufficient connectivity and permeability in the environment.

The first attempt to meet the above requirements for the Nordhavnen model was made by extending the Stream Simulator prototype. The prototype is explained in detail in section 6.2. It suffices to say here that the prototype was a simple agent-based model combining hill-climbing and stigmergy. The Stream Simulator prototype, which generated only tree-like patterns, was chosen as a starting point for the Nordhavnen model because it already satisfied several of the four requirements mentioned above. Firstly, with a homogeneous slope gradient of the input landscape, Stream Simulator always produced continuous circulation patterns and provided access to every single patch in the landscape. This occurred for a simple reason – mobile agents were actually distributed all over the landscape and they converged to form circulation channels rather than the other way around. Secondly, there was an inherent route optimisation mechanism embedded in the prototype. Fostered by a positive feedback mechanism, the agents’ choice of routes was coordinated through the landscape. Thirdly, the prototype was a probabilistic model in the sense that the agents were placed in the landscape randomly, which yielded a different result every time the prototype was deployed. Stream Simulator also allowed the user to control the input landscape and gain a certain amount of control over the outcome without any programming skills.
Since it could only produce branching networks, the one requirement that Stream Simulator was not able to meet was to generate diagrams with sufficient connectivity and permeability. In order to work with the grid layout, the Stream Simulator prototype had to be extended. The extended version of the prototype was designed to support user-defined grid spacing parameters and produced orthogonal branching patterns only (see Figure 7.2). The implementation of this modification was quick and straightforward – while the agents in the original prototype were acquiring information from their immediate neighbourhood, in the modified version agents probed the environment by comparing data at adjacent grid points. Since the agents travelled on grid coordinates only, they could not make any diagonal movements.

Figure 7.2: Stream Simulator modified: orthogonal stream channels are generated with the user defined segment length

Branching street patterns, based on the use of cul-de-sacs, are generally discouraged in urban design because of their low permeability and connectivity. In its final report, the Urban Task Force (Rogers 1999) argues that tree-like street patterns are bad for both cars and pedestrians. For cars, this type of circulation network concentrates congestion at the branching points; for pedestrians, it often means indirect journeys even between neighbours and increases the use of cars. To avoid generating branching street systems and to build a model that also generates circuit networks, the Stream Simulator prototype borrowed some concepts from another prototype: Labyrinth Traverser (see section 6.3). The Labyrinth Traverser prototype is an equally simple model that combines cellular automaton based diffusion with hill-climbing and provides a robust computation of the shortest path between given points. From the perspective of the Nordhavnen competition, the shortest path computation allows one to control which areas should be more readily accessible. Whereas Stream Simulator offers a way to optimise the total length of the network, Labyrinth Traverser contributes to the optimisation of the average trip length within the network. Combining these two prototypes leads to a multivariate network optimisation where both the total length of the network and the average trip length are balanced. This optimisation is facilitated by the landscape's ability to revert to its original state over time – shorter routes are more likely to persist than longer ones. In a way, the Nordhavnen computational model is an extended version of the Network Modeller prototype (see section 6.4). It does, however, possess some features that turn it from a simple concept into a potentially useful application for urban designers. One difference between the Network Modeller prototype and the Nordhavnen model is the way mobile agents are distributed across the landscape. While in the former the agents travel between user-defined network nodes, in the latter the agents depart from every single grid point (setting-out point) and travel to the given target nodes. Distributing the initial setting-out points for agents across the landscape ensures that every single urban block in the scheme is accessible. Two other main changes concern context sensitivity and enhanced user interactivity. The Nordhavnen model can be controlled by the user in two ways: by preparing the input map of the landscape and by locating the target nodes in the network. The input map is a black-and-white image defining where the agents can or cannot go. The target nodes indicate destinations of special importance (attractors) in the scheme. In the case of Nordhavnen, these attractors would be the city of Copenhagen, the harbour, and some local centres.
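The shortest-path mechanism borrowed from Labyrinth Traverser – cellular automaton based diffusion followed by hill-climbing – can be sketched in a few lines. The sketch below is illustrative Python rather than the original NetLogo implementation; the function names and data structures are my own assumptions.

```python
from collections import deque

def diffuse_from(target, passable):
    """Flood-fill distances from the target across passable grid points,
    mimicking the cellular-automaton diffusion of Labyrinth Traverser."""
    dist = {target: 0}
    queue = deque([target])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in passable and nxt not in dist:
                dist[nxt] = dist[(x, y)] + 1
                queue.append(nxt)
    return dist

def hill_climb(start, dist):
    """Follow the steepest descent of the diffused field: each step moves
    to the neighbour closest to the target, yielding a shortest path."""
    path = [start]
    x, y = start
    while dist[(x, y)] > 0:
        x, y = min(((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)),
                   key=lambda p: dist.get(p, float("inf")))
        path.append((x, y))
    return path
```

On a 5 × 5 grid, diffusing from one corner and hill-climbing from the opposite corner recovers a shortest path of 8 moves, which is what the combined prototype exploits when balancing trip lengths.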

Figure 7.3: Circulation diagrams with many setting-out points and a small number of attractors. From left to right: tree structure with 1 attractor, 1 circuit with 2 attractors, multiple circuits with multiple attractors

The generated topology of circulation diagrams depends on the number of attractors (see Figure 7.3). Single attractor scenarios always lead to the formation of branching networks that cover the whole landscape but never feature closed loops in their structure. Once a couple of additional attractors are introduced, the model starts generating circuit networks. If more attractors are added, the number of closed loops tends to increase as well. The many-to-few relationship between setting-out points and attractors often leads to diagrams where the areas around attractors are well connected and roads form circuits, while more remote areas are connected via smaller branching networks. This is an interesting effect of the Nordhavnen model that is paralleled by real-world examples of urban settlements where peripheral areas are connected to each other via the town centre. However, this effect should not be encouraged in suburbia as it goes against the principles of high connectivity and permeability in contemporary urban design practice (Rogers 1999).

Figure 7.4: Many-to-many relationship between setting-out points and attractors. All of these diagrams have 11 attractors placed across the landscape

In order to achieve better connectivity in peripheral areas, the Nordhavnen model was altered to support many-to-many relationships. This was done simply by introducing a routine to the model that automatically adds several attractor points randomly across the landscape. The change was immediately apparent: although cul-de-sacs were still present in the diagrams, longer branching roads had almost completely disappeared (see Figure 7.4). The peripheral areas had suddenly become well connected and the whole network permeable. Alongside the positive effect, the replacement of the many-to-few with the many-to-many relationship raised a technical issue – running the model with a greater number of attractors required greater computational resources. This posed some limits to the size of the landscape, to the maximum granularity of diagrams, and to the number of attractors for the optimal use of the model.

Figure 7.5: Diagrams with attractors of differentiated magnitude – some attractors (larger dots) are more appealing to agents than others (smaller dots)

The automatic distribution of attractors proved useful in terms of achieving more favourable diagrams but it also brought up another issue – the attractor points were now out of the user's control. Placing all the attractors manually was time consuming and hindered the design process flow; automatic placement was too random. To get around these problems, an additional parameter was introduced to the model: the magnitude of attraction. All automatically placed attractors were given a magnitude value of 1, while the magnitude of manually placed attractors was defined by the user. The magnitude simply indicated the attractor's capacity to attract agents. The resultant diagrams with differentiated magnitudes (see Figure 7.5) gave control back to the user. Once the control over the location and the magnitude of attractors was regained, the model was ready for experiments. The user could now manipulate the magnitude of an attractor depending on the assumed importance of a particular urban function or to reflect the desired number of vehicles and pedestrians at the location. The Nordhavnen model is built in a way that eliminates the necessity of validating the suitability of diagrams for urban circulation with respect to accessibility. As described earlier in this chapter, the requirement of accessibility is satisfied inherently in the model and each urban block is connected to the circulation by default. The diagrams generated with the Nordhavnen model, however, are subject to external validation and can be validated qualitatively or quantitatively. For example, most of the diagrams presented in this chapter can be validated as road systems based on their resemblance to road systems in existing urban settlements (qualitative validity). The diagrams can also be evaluated with respect to the connectivity and the length of the network (quantitative validity). The methods of validation are explained in detail later in this chapter.

Figure 7.6: A generated diagram classified as a 4 grade road system. This exercise was done manually by counting the width (in pixels) and the strength (darkness) of the road segments in the diagram

The diagrams can also be analysed in terms of the traditional road grading system often used by highway engineers (Marshall 2005, p. 15). Figure 7.6 shows how a generated diagram can be interpreted as a graded road hierarchy. The fact that these diagrams lend themselves to this kind of analysis does not in itself mean that they make good street patterns. It is usually required in this hierarchy-based system that lower grade roads feed into the roads just above them – 4th grade roads should only feed into 3rd grade roads, which in turn feed into 2nd grade roads, and so on (Marshall 2005, p. 170). Since the Nordhavnen model is an interactive one, the user of the model is ultimately responsible for generating working diagrams. For example, to meet the requirement of the grade-based system the designer should avoid placing attractors too far away from other attractors.


7.2 Generating diagrams in context

In order to generate useful output for the design team, the computational model had to be deployed in the context of the Nordhavnen site. This was achieved simply by introducing a routine to import pre-processed 2D map data into NetLogo. The image was first prepared in a CAD application, then converted into a black-and-white image and, eventually, loaded into NetLogo's model world. White pixels in the pixel map indicated the site area where the gridded road pattern could occur; black pixels denoted the surrounding water or areas outside the competition boundary (see Figure 7.7). Once the input map was prepared and loaded into the program, the grid pattern that formed the basis for the road structure was established within the defined boundary. The grid spacing was controlled by the user and was eventually decided by the design team according to what was considered a reasonable urban block size. The underlying principle was to create an urban structure with sufficient connectivity, providing each block in the area with at least one access point to the road network. The input map thus served the purpose of defining the access to the urban blocks, and agents navigated the grid by moving from one grid point to an adjacent one. In doing so, agents were capable of crossing water (black pixels) provided that both of the grid points on opposite sides were situated inside the boundary.
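The mapping from input image to navigable grid can be sketched as follows. This is an illustrative Python reconstruction of the idea, not the NetLogo routine; `mask` stands in for the black-and-white image and all names are assumptions.

```python
def grid_from_mask(mask, spacing):
    """Place grid points at the given spacing wherever the input map is
    white (True); black (False) pixels are water or areas outside the
    competition boundary. `mask` is a row-major 2D list of booleans."""
    h, w = len(mask), len(mask[0])
    return {(x, y)
            for y in range(0, h, spacing)
            for x in range(0, w, spacing)
            if mask[y][x]}

def grid_links(points, spacing):
    """Agents move between adjacent grid points; note that a link may
    cross black pixels (water) as long as both endpoints lie inside the
    boundary, which is how bridges can emerge in the diagrams."""
    links = set()
    for (x, y) in points:
        for nxt in ((x + spacing, y), (x, y + spacing)):
            if nxt in points:
                links.add(((x, y), nxt))
    return links
```

A narrow water channel between two white grid points thus still yields a link, mirroring the bridge-suggesting behaviour described above.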

Figure 7.7: The input image and the grid representing access to urban blocks. The shape of the area was partly driven by the competition brief and partly defined by the design team


Before the agents were set in motion, one had to locate desirable attractors in the area to serve as targets for the agents. Each attractor was given a value of importance that defined its attractiveness to agents. Whereas some low value targets were placed automatically on the map to ensure uniform connectivity across the whole site, other, higher importance targets were manually located where the design team had decided the key attractors should be. The key attractors included such important functions in Nordhavnen as the harbour, the main road linking Nordhavnen with the city of Copenhagen, access to the planned tunnel, the new Nordhavnen city centre etc. (see Figure 7.8).

Figure 7.8: The input map with attractors and the resultant diagram. Dots represent attractors with the size indicating the importance

Once the input map was loaded and all attractors placed, the Nordhavnen model was ready for generating circulation diagrams. All the participating agents were located randomly at the grid points and were assigned a target. More important targets had a higher probability of being chosen as a target by agents than less important ones. This probabilistic nature of the model led to a variety of diagrams even if the initial configuration remained unchanged (see Figure 7.9). The formation of circulation diagrams also depended on many control variables that were exposed on the graphical user interface (GUI). These user-controlled variables defined the number of participating agents, the strength of the 'pheromone' trail left behind by the agents, and the speed of evaporation and diffusion of this 'pheromone'. This allowed one to drive the optimisation and development process and deploy it as a design tool.
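The probabilistic target assignment described above amounts to a weighted random choice, with the probability of an attractor being picked proportional to its importance. A minimal sketch in Python (names are assumptions, not the NetLogo code):

```python
import random

def assign_targets(agents, attractors, seed=None):
    """Assign each agent a target attractor with probability proportional
    to the attractor's magnitude of attraction. `attractors` maps a
    target id to its importance value."""
    rng = random.Random(seed)
    ids = list(attractors)
    weights = [attractors[i] for i in ids]
    return {agent: rng.choices(ids, weights=weights)[0] for agent in agents}
```

With, say, a harbour of magnitude 5 and a local centre of magnitude 1, roughly five sixths of the agents head for the harbour, which is what shapes the emergent road hierarchy.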

Figure 7.9: Distinct diagrams generated with an identical initial configuration

Although the development process of a diagram depends on many different parameters and supports a variety of output, the generated circulation diagrams display certain common emergent features. The most heavily used routes typically occur at the centre of the area but are seldom close to other frequently used routes. There also appears to be a fade-out in the frequency of route usage towards the edges of the area unless an important attractor sits right at the edge of the site. This tendency towards centralisation becomes even more obvious in highly connected diagrams where attractors are placed on every single grid point (see Figure 7.10). A hierarchy of routes seems to develop naturally, with less frequently used routes feeding traffic to busier routes that, in turn, form clear circuits at the heart of the area. This leads to higher connectivity in the centre, whereas peripheral areas tend to form branching network structures.

Figure 7.10: Diagrams generated with uniform attractor grid


Figure 7.11 illustrates the progress of the Nordhavnen model from an initial configuration to a fully developed diagram. The network development typically follows a pattern where higher grade routes become visible at an early stage; this is followed by the appearance of lower grade routes and, eventually, local access routes become visible. A typical diagram features a few frequently used closed loops forming a continuous circuit of heavily used roads. These routes often cross the canals that break the site into smaller peninsulas, suggesting the use of bridges to maintain efficient traffic circulation in Nordhavnen. In many diagrams a bridge appears at the heart of the site to traverse the inlet to the internal basin. Curiously enough, there is also a crossing at the very same location in Nordhavnen (see Figure 7.1), although the input map does not reflect this minute detail. The southern part of Nordhavnen is usually connected to the rest of the area via a strong route. This is the major link to the city of Copenhagen, and a potential bottleneck for the through traffic.

Figure 7.11: Sequence of images showing the development of circulation network


7.3 Quantitative analysis and evaluation of diagrams

As discussed earlier in this chapter, all generated diagrams have to be validated if one is to evaluate their suitability as circulation systems in urban design. The Nordhavnen model is built in a manner that makes the validation of the most basic requirement of urban circulation – accessibility – redundant. The agent-based model is configured to connect all the urban blocks automatically to a single network of roads. Whereas the accessibility criterion is validated internally, other parameters of the generated diagrams are subject to external validation. There are two main measures that have been taken into account in the proposed quantitative analysis. The first parameter to be measured is the total length of the generated network. The desire to keep the length of road networks minimal in any real world case is driven by the need to keep construction costs as low as possible. This purely quantitative measure is often rivalled by another evaluation parameter of urban circulation – connectivity. Although connectivity can be assessed solely on a quantitative basis, it is associated with the quality of the urban environment. In the era of urban sprawl, high connectivity has become a key measure of movement networks and urban form. Clearly, it is difficult to outline what makes a good street network. Llewelyn-Davies points out that the reason why some routes are better than others depends on many intangible factors and route assessment can therefore never be an exact science (Llewelyn-Davies 2000, p. 34). Furthermore, the purpose of this section is not to assess the quality of generated diagrams but to show that these diagrams meet the basic quantitative requirements for connectivity. Qualitative value judgements can be made independently of the quantitative measures that are used here to validate circulation diagrams generated with the Nordhavnen model.
The method proposed later in this section is given in order to help designers favour one diagram over another. There are several reasons why high connectivity is thought to be an important indicator of urban design schemes. According to Benfield (in Song and Knaap 2004), better connectivity leads to more cycling and walking, less motorised traffic, cleaner air, and a greater sense of community. Although connectivity is ultimately a qualitative property of urban space, there are several existing methods that try to quantify it.

Song and Knaap's (2004) measures of connectivity take into account the number of urban blocks, the number of street segments and nodes, the lengths of cul-de-sacs, the total length of the road network, and the distance between access points in the neighbourhood. They also distinguish between internal and external connectivity – the first considers just one neighbourhood, whereas the other involves several. Other attempts to quantify connectivity include calculating road intersections and total road length per area unit (Dill 2004), the number of dwellings per urban block, the median perimeter of blocks, and the median length of cul-de-sacs (Song and Knaap 2004). One measure of connectivity often used by urban researchers is the link-to-node ratio that – Dill (2004) suggests – should be about 1.4 in good urban environments. The measurement of connectivity preferred herein is called 'internal connectivity' (Song and Knaap 2004) or 'connected node ratio' (Dill 2004). This is a simple ratio that operates with classical elements of urban street networks: road intersections and cul-de-sacs. Although reasonably used cul-de-sacs can benefit the local neighbourhood by reducing through-traffic and creating alternatives to the grid-like street layout, they are counterproductive in terms of permeability and the ease of navigation. Internal connectivity according to Song and Knaap (2004) is calculated as follows:

internal connectivity = road intersections / (road intersections + cul-de-sacs)
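For illustration, the ratio can be expressed directly in code (a minimal Python sketch, not part of the original model):

```python
def internal_connectivity(road_intersections, cul_de_sacs):
    """Song and Knaap's (2004) internal connectivity: the proportion of
    street network nodes that are true intersections rather than dead
    ends. Returns 1.0 for a network with no cul-de-sacs."""
    return road_intersections / (road_intersections + cul_de_sacs)
```

A network with 7 intersections and 3 cul-de-sacs thus scores 0.7, the lower bound of the range recommended below.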

Dill, referring to the INDEX model of Criterion Planners Engineers, argues against networks with internal connectivity values less than 0.5 and favours designs with values of 0.7 or higher (Dill 2004). Song and Knaap have evaluated single-family neighbourhoods in Forest Glen and Orenco Station in the Portland area, and have calculated internal connectivity values of 0.67 and 0.81 respectively (Song and Knaap 2004). Based on these references, it is fairly safe to say that street networks with internal connectivity values from 0.7 (high connectivity, some cul-de-sacs) to 1.0 (fully connected, no cul-de-sacs) would be an indicator of successful urban design. However, connectivity parameters depend on a number of urban design characteristics. City centres accommodate much heavier traffic than residential areas in suburbia and the road network connectivity reflects that. Dill (2004) has shown that metropolitan areas in Portland have higher network density and internal connectivity (connected node ratio) than peripheral areas. Therefore, one has to consider the location and the typology of the measured area when assessing street networks. Internal connectivity and total road length tend to be inversely proportional parameters. The Nordhavnen model can generate designs ranging from minimal branching networks with low connectivity values to fully connected networks with escalating total lengths. In order to assist the validation process, a combined measure of 'internal efficiency' is proposed here. Internal efficiency is simply calculated by dividing internal connectivity by total length and can be used to find near-optimal states where building more roads adds little value to the quality of urban space. In real world cases, it is ultimately a matter of judgement how much quality is gained by spending more money on road building in order to increase connectivity. Internal efficiency is a subjective measurement, and cannot be used to compare different urban schemes. However, it is useful in evaluating different scenarios for the same scheme.
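The proposed internal efficiency measure, and its use in comparing scenarios of the same scheme, can be sketched as follows (illustrative Python with made-up numbers):

```python
def internal_efficiency(internal_connectivity, total_length):
    """Proposed combined measure: connectivity gained per unit of road
    built. Only meaningful for comparing scenarios of the same scheme."""
    return internal_connectivity / total_length

# Three hypothetical scenarios as (connectivity, total network length):
scenarios = [(0.50, 900), (0.80, 1200), (0.95, 2000)]
best = max(scenarios, key=lambda s: internal_efficiency(*s))
```

Note that the highest-connectivity scenario is not the most efficient one: the mid-range scenario wins, mirroring the peak in the internal efficiency graphs discussed later in the chapter.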

Figure 7.12: Development of the topology diagram

In order to calculate the road network length, internal connectivity and internal efficiency, one had to turn the diagrams presented earlier in this chapter (frequency diagrams) into topology diagrams. This required no further changes to the model – it was possible to generate topology diagrams purely by altering the variables exposed via the GUI. The diffusion of the 'pheromone' had to be set to 0, and a gradual reduction of the 'evaporation' parameter at run-time resulted in unambiguous diagrams. The development of topology diagrams (see Figure 7.12) was similar to that of frequency diagrams, except that the fully developed topology diagram indicated no usage data or any road hierarchy. Once the topology diagram was generated, the quantitative parameters in question were calculated by a deterministic algorithm. Since the Nordhavnen model was built in NetLogo, a fairly simplistic algorithm based on counting patches was devised. The road network length was calculated by simply counting all black patches. For calculating the internal connectivity, the different types of road intersections had to be determined and counted. T-junctions were black patches that had 3 other black patches in their neighbourhood, while X-junctions had 4 and cul-de-sacs had only one.
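The patch-counting routine described above can be sketched as follows (an illustrative Python reconstruction of the idea, not the original NetLogo code; `black` is assumed to be a set of road patch coordinates):

```python
def classify_nodes(black):
    """Count junction types on a patch grid: a black patch with 3 black
    von Neumann neighbours is a T-junction, one with 4 is an X-junction,
    and one with a single black neighbour is a cul-de-sac."""
    t_junctions = x_junctions = cul_de_sacs = 0
    for (x, y) in black:
        n = sum((x + dx, y + dy) in black
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
        if n == 1:
            cul_de_sacs += 1
        elif n == 3:
            t_junctions += 1
        elif n == 4:
            x_junctions += 1
    return t_junctions, x_junctions, cul_de_sacs
```

The network length is then simply `len(black)`. As a sanity check, a plus-shaped cluster of five patches yields one X-junction and four cul-de-sacs.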

Figure 7.13: Topology diagrams generated with a growing number of attractors (1–25)

The quantitative analysis carried out on the Nordhavnen model involved several rounds of testing cycles. The objective of this testing was to find general trends of network development and interdependencies between different parameters. Tests were repeated with different numbers of attractors (see Figure 7.13), which were either placed manually following the designer's intuition or automatically using the random placement routine built into the model. In the latter case, tests involving a certain number of attractors were carried out at least three times and the result with the highest connectivity was selected for comparison. Although the probabilistic nature of the automatic placement routine rendered the results inconclusive, one could surmise the general behaviour of the model (see Figure 7.14). The graphs showing the number of attractors on the horizontal axis seemed to take the shape of an exponential curve where the parameters on the vertical axis approached a ceiling value at a decelerating pace.

Figure 7.14: Tests with randomly placed attractors

Figure 7.15: Tests with manually placed attractors

This trend becomes even clearer when attractors are located carefully by a designer (see Figure 7.15). The ceiling value of connectivity in an ideal situation is 1.0, but in the Nordhavnen model the ceiling is lower for a simple reason – some grid points have just a single neighbour. The ceiling value of the total length parameter may vary from scheme to scheme, but the trend is similar to that of connectivity. The only difference is that the highest possible connectivity is normally achieved earlier than the maximum length of the network, creating a peak in the internal efficiency graph. It is difficult to observe this phenomenon in the context of Nordhavnen, but it becomes immediately apparent when the model is deployed in isolation (see Figure 7.16).


Figure 7.16: Tests 'in silico' – internal efficiency rises until connectivity has reached its ceiling but the maximum length has not yet been achieved

The reason why maximum connectivity can coincide with a low network length can be observed in Figure 7.17. The maximum connectivity may be achieved much earlier than the maximum length of the network because the method of counting road intersections does not take the shape of these intersections into account. Whereas it makes no difference to the connectivity whether road junctions are T or X (cross) shaped, it does influence the network length. Also, the connectivity measured this way does not reflect the quality of urban design patterns well. It fails in cases where urban blocks are disproportionately stretched in one dimension. Although the connectivity can be high across the whole site, elongated urban blocks can lead to very low connectivity at the local neighbourhood scale.

Figure 7.17: An 'in silico' diagram with 5 attractors where the highest connectivity value has been achieved, but the total length of the network has not been exhausted

The described method is problematic in terms of assessing the quality of urban circulation conclusively. One can overcome this problem by bringing in other calculations, such as measuring urban block proportions, but that would inevitably make the whole process of evaluation a lot more complex. Instead, one can look for an easier solution. It is not difficult to alter the equation of internal connectivity in order to make it more suitable for evaluating urban design schemes. The method described earlier does not consider the type of road intersections whatsoever, and that is the main reason for so many high-connectivity diagrams turning out to be unsuitable representations of urban circulation. With a little modification of the equation, this can be changed. If one counted X and T-shaped junctions separately and gave them separate weightings, the equation could reflect network connectivity more accurately. The internal connectivity is proposed to be measured as follows:

internal connectivity = X junctions / (X junctions + T junctions + 2 × cul-de-sacs)

An additional rule has to be introduced for counting junctions that are too close to the boundary of the site to become X-shaped. In that case, T-junctions should be counted as X-junctions, so that the maximum internal connectivity value can still reach 1.0. In the suggested equation, X-junctions and T-junctions have different importance to the connectivity parameter, with X-junctions having greater influence. Also, T-junctions and cul-de-sacs are not treated equally – the latter carry a heavier weighting and are thus penalised more. The advantage of the proposed method over the method that does not consider the shape of the junctions is apparent – it allows a much finer measurement of connectivity. For example, the diagram in Figure 7.17 has a connectivity index of 1.0 when the type of junctions is not taken into account. This is also the highest possible connectivity of the complete grid. With the suggested method of calculation, the internal connectivity is 0.15, which is far from the maximum value of 1.0 of the complete grid. The advantage of using the modified equation for calculating internal connectivity is best understood by observing the behaviour of the variables in function graphs (see Figure 7.18). Both the total length and the internal connectivity parameters in this graph progress at continuously slowing rates. These parameters show similar growth tendencies, occasionally featuring sudden jumps in growth at roughly the same point. With a growing number of attractors, both the internal connectivity and the total length increase at a decelerating pace but never really decrease. The combined value (internal connectivity/total length) of these parameters – internal efficiency – however, tends to reach its highest value with a relatively small number of attractors. There is a point where the growth of internal efficiency stops and adding further attractors would yield low gains in internal connectivity against increasing build costs. As long as acceptable connectivity has been achieved, there seems to be little benefit in adding more attractors.
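The modified equation, together with the boundary rule, can be expressed directly (illustrative Python; the junction counts are assumed to come from the patch-counting routine described earlier in this chapter):

```python
def weighted_connectivity(x_junctions, t_junctions, cul_de_sacs):
    """Proposed refinement of internal connectivity: X-junctions count
    fully, T-junctions dilute the score, and cul-de-sacs are penalised
    with a double weighting. T-junctions too close to the site boundary
    to become X-shaped are assumed to have already been reclassified as
    X-junctions, so a complete grid scores 1.0."""
    return x_junctions / (x_junctions + t_junctions + 2 * cul_de_sacs)
```

For instance, a small network with 2 X-junctions, 4 T-junctions and 2 cul-de-sacs scores 0.2, whereas the unweighted Song and Knaap ratio would report 0.75 for the same counts.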

Figure 7.18: Graphs show the change of network parameters ‘in silico’ (left) and in the context of Nordhavnen input map (right)

The number of attractors leading to the highest internal efficiency depends on the particular set-up of the model. In the Nordhavnen model, this near-optimal solution can be achieved with 9 attractors (see Figure 7.19). The corresponding internal connectivity tends to fall between 0.65 and 0.95 if the shape of street intersections is not taken into account. This shows that a near-optimal solution in terms of connectivity and road length is well above the commonly accepted minimum connectivity values (Dill 2004; Song and Knaap 2004).

Figure 7.19: A near-optimal diagram with internal connectivity of 0.95 (as calculated after Song and Knaap (2004)) or 0.67 (as calculated with the proposed method of taking the shape of junctions into account)


The new way of measuring internal connectivity proposed in this section provides an alternative way of assessing street networks compared to the method that does not differentiate between types of road intersections. The downside is the lack of extensive research analysing existing networks with this method. However, the objective of this section is not to carry out such research but to validate the generative model. One can do so by looking at a few schemes that are commonly considered good examples of contemporary urban design and are of comparable size and typology to Nordhavnen. The internal connectivity parameters of street networks in Orenco Station, Hammarby Sjöstad and Vauban are well above the lowest acceptable value of 0.7 (Dill 2004). This allows one to make an assumption about acceptable connectivity values as measured using the proposed method. Orenco Station is perhaps not the most glamorous example of contemporary urban design, but it is selected because Song and Knaap have studied the area in depth (Song and Knaap 2004) and highlight it as a good environment in terms of connectivity. Orenco Station is a pedestrian friendly, high density urban town centre featuring a light railway station. The area (see Figure 7.20) has 71 street network nodes – intersections and cul-de-sacs. Song and Knaap's measure of the internal connectivity in the Orenco Station area is 0.81. Using the proposed method of taking the shape of street intersections into account, the value of internal connectivity is 0.30. Hammarby Sjöstad and Vauban are both promoted by CABE as examples of good urban design practice (CABE no date) and it is therefore assumed that the street networks in these areas meet the best urban design practice. Once fully developed, Hammarby Sjöstad (see Figure 7.21) is a 200 hectare city district in Stockholm, housing a population of 20 000 and providing jobs for a further 10 000 people (CABE no date).
The size, location and typology of the area are very similar to those of Nordhavnen. In its current state, the internal connectivity value in Hammarby Sjöstad is 0.76 after Song and Knaap and 0.29 following the proposed method. Vauban in Freiburg (see Figure 7.22) is a medium density area of 5000 inhabitants and 600 jobs built on the principle of a sustainable model district (Vauban no date). Famous for its well-designed circulation, Vauban has an internal connectivity of 0.91 measured using Song and Knaap's method and 0.32 following the proposed method.


Figure 7.20: Orenco station, Portland (Google 2011b)

Figure 7.21: Hammarby Sjöstad, Stockholm (Google 2011c)

Figure 7.22: Vauban, Freiburg (Google 2011d)

Based on the examples described in this section, one can conclude that street networks with an internal connectivity value of 0.3 or higher, as measured using the proposed method, are suitable for urban design schemes of Nordhavnen's size and typology. Figure 7.19 presents a street network diagram generated with the Nordhavnen computational model that meets this requirement. Hence, the Nordhavnen model is a viable method for generating street network diagrams with acceptable internal connectivity.

7.4 Conclusions and discussions

The model presented in this chapter is a generative design model following what is referred to as the sensor-reactor-environment pattern (see Chapter 10 for further explanation). In the Nordhavnen model, simple mobile agents navigate the environment by sensing their immediate surroundings and reacting to traces left in the environment by other agents. The Nordhavnen model shows how the natural phenomenon of stigmergy can be used to turn a simulation into a generative tool for architects and urban designers. The stigmergic feedback loop leads to the automatic optimisation of road network diagrams but leaves enough control to the user of the model. An experienced designer can use the model creatively and integrate it in the design process. Despite its conceptual simplicity, the model is capable of producing a wide variety of diagrams.

Figure 7.23: Three representations of a diagram (from left to right) – topological, frequency diagram, and combined diagram

The Nordhavnen model integrates components of design synthesis and evaluation. The generated diagrams do not solely display the topological relationships but can also express the estimated usage frequency (see Figure 7.23). While the topology diagram defines the structure of the road network and allows calculating connectivity and shortest distances between points, the frequency diagram informs about the type of individual roads. The latter implies a hierarchy of roads and gives the architect a guide as to which roads should be designed as boulevards or highways, which can be treated as local roads, and which can be left mainly for pedestrians.

Figure 7.24: A selection of diagrams produced with The Nordhavnen model

The Nordhavnen model could help design teams in several ways. It can be used as a scenario generator since it can produce a variety of realistic road diagrams (see Figure 7.24) in a matter of minutes – something that a human designer can hardly do. The model also allows evaluating the generated diagrams with a single press of a button – measurement routines are seamlessly integrated into the content production application and quantitative parameters are immediately available to designers. The evaluation methods presented in this chapter include internal connectivity measurements based on the number of different types of road junctions, and network length parameters. However, other methods of evaluation could easily be introduced.


Although road generation in the Nordhavnen model is highly automated, one can control the process via the GUI. Driving the model to generate meaningful circulation diagrams requires some common-sense logic and practical experience. The greatest involvement of the designer lies in interactively locating attractor points on the modelled urban site. The location and number of attractors play a crucial role in generating acceptable road network diagrams. For example, a single attractor point always leads to tree-like networks – akin to many existing rural settlements. Since tree-like street networks are strongly discouraged in contemporary urban design, one has to introduce several urban attractors instead. These attractors also need to maintain a certain distance from one another to avoid mono-centric structures in the diagram. Multiple-attractor scenarios match much more closely the usual many-to-many relationship between urban amenities (schools, markets, shops etc.) and local residences in real life. Many-to-many relationships also lead to better connected street networks. Once the designer has mastered the basic rules of locating urban attractors, it is simple enough to generate diagrams with acceptable connectivity.

The bulk of successful diagrams presented in this chapter provides sufficient evidence that the Nordhavnen model can produce street networks of high connectivity. Besides this external validation, the model is also validated internally: stigmergic mechanisms ensure that the total length of the network is optimised, and accessibility to individual network nodes is guaranteed by the algorithm that controls the agents’ movement.

The proposed model for generating road networks is two-dimensional. Nordhavnen is relatively flat compared to the scale of the area, which makes the landscape’s third dimension redundant.
However, if the model were to be redeployed in a different context, landscape gradients could play a more important role in route selection. How could one use the model on a hilly site? There are several ways in which the model can be adapted to this situation. For example, one way is to define the maximum height gradient that agents can climb; agents can then exclude routes that exceed this threshold from their route selection mechanism. Another way is to preprocess the grid, making the height differences between adjacent nodes less dramatic. This can be done by moving the grid nodes so that the gradients fall within an acceptable range. The latter does not necessarily require sophisticated surface relaxation algorithms, but can be a simple probabilistic model.
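The first adaptation could be sketched as a simple filter in the agents’ route selection. The code below is only an illustration under assumed data structures (node heights and link lengths held in dictionaries); the gradient threshold of 0.15 is an arbitrary placeholder, not a value from the model:

```python
def climbable_links(node, neighbours, heights, link_length, max_gradient=0.15):
    """Filter out links whose slope exceeds what an agent can climb.

    node: id of the agent's current node
    neighbours: iterable of adjacent node ids
    heights: dict mapping node id -> terrain height at that node
    link_length: dict mapping (a, b) -> horizontal length of the link
    max_gradient: maximum rise/run an agent will accept (assumed value)
    """
    passable = []
    for nb in neighbours:
        run = link_length[(node, nb)]
        rise = abs(heights[nb] - heights[node])
        if rise / run <= max_gradient:  # slope within the climbable range
            passable.append(nb)
    return passable
```

The agent’s existing selection mechanism would then operate only on the returned, climbable subset of links.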

The Nordhavnen model – like any other computational model – has its limitations. The main limitation is purely technical and is imposed by the modelling environment (NetLogo) used for developing the model. Although conceptually simple, the model has so many lines of code that it is hard to manage further changes and add more functionality. If further functionality is required, the model would need to be reproduced in a different development environment. Without the limitations of the NetLogo modelling environment, several useful features could be added. Agents could use more detailed algorithms to choose their way and, for example, try to avoid steep frequency gradients. This would probably lead to the stronger hierarchy in road networks that is desired by some urban designers. During the development of the Nordhavnen model one major change was considered but abandoned for the above reason. This change would have introduced a feedback mechanism between the attractors and the traffic created by the agents’ movement. In this scenario, a set of rules would have governed the repositioning of attractors according to the frequency of road usage. If such feedback mechanisms between attractors and traffic frequency were introduced, the model would have a greater generative effect. However, the task of this thesis is not to explore embellished generative models, but to discover minimal ones that render obvious benefits to the urban design process.

Regardless of the technical limitations, the Nordhavnen model is a prototype capable of producing good quality circulation diagrams. The model can be used as a variance production mechanism. It is a front-end design tool that can provide a competitive advantage at the early stages of the design process, where the design team has to explore several scenarios in a short space of time.
The Nordhavnen model meets all the requirements stated at the beginning of this chapter: it generates a wide variety of distinct circulation diagrams that are validated in terms of accessibility and connectivity, and optimised for the total length of the network. The outputs of the model should not be confused with fully developed design proposals but should be seen as conceptual diagrams. Although conceptual, these diagrams are constructive and convey useful information about both the shape and the requirements of the street network.

Nordhavnen competition project credits

Design concepts featured in this chapter were developed collaboratively by Slider Studio and Mæ architects for the Nordhavnen international open ideas competition. The Nordhavnen computational model was designed and developed single-handedly by the author of this thesis during and after the competition.


Chapter 8: Case study 2 – an ant colony optimisation algorithm for generating corridor systems

The second case study explores the use of multi-agent systems at the scale of large office buildings. As opposed to the Nordhavnen model (see previous chapter), which generated street network diagrams, the purpose of the multi-agent system in this chapter is to generate office floor plate layouts and analyse these in terms of evacuation and fire regulations. Whereas a simple hill-climbing model is proposed in order to locate stair cores, an ant colony optimisation algorithm is deployed for finding the shortest routes for corridors. As part of this chapter, a brief overview of ant colony optimisation techniques is given. The main purpose of this case study is to illustrate the use of multi-agent systems in a real-life design process and to explore the potential of integrating generative models with traditional design methods. It is expected that generative models will lead to design solutions that are not intuitively apparent to designers. The study also investigates methods that enable designers to control bottom-up computational models efficiently.

8.1 Ant colony optimisation algorithms

Ant colony optimisation (ACO) belongs to the general category of agent-based algorithms inspired by the behaviour of biological ant and termite colonies. ACO mimics the foraging behaviour that helps an ant colony to locate the food resources closest to the nest while constantly optimising the routes to these resources. This kind of behaviour relies on stigmergy – a form of communication in which ants exchange information by modifying their local environment (Dorigo, Birattari and Stützle 2006). Stigmergic communication, as distinguished from other forms of communication, is indirect and non-symbolic, and all the information is exchanged locally (Dorigo, Birattari and Stützle 2006). ACO was initially developed in the early 1990s by Dorigo, Maniezzo and Colorni (Gutjahr 2008). They found inspiration in Deneuborg’s work (Dorigo, Birattari and Stützle 2006) studying the foraging behaviour of Argentine ants. A typical route

optimisation process, as observed by Deneuborg et al. (1983), takes place when ants deposit pheromone along the path between their nest and the food source. In doing so, ants follow previously dropped pheromone trails and thus, through a positive feedback mechanism (Dorigo, Maniezzo and Colorni 1996), reinforce already existing trails. Occasionally, due to randomness in the ants’ behaviour (Deneuborg, Pasteels and Veraeghe 1983) or disturbances in the environment, ants wander off the existing path and find alternative routes to the food source. Both competing paths attract ants, and eventually – driven by the evaporation of pheromone – the shorter path becomes prominent. Pheromone evaporation plays a crucial role here; if the colony cannot forget already discovered paths, it is unable to learn new, shorter ones.

ACO algorithms are metaheuristic algorithms for solving combinatorial optimisation problems (Dorigo and Socha 2007). Typical problems solvable by ACO are the travelling salesman problem (Dorigo, Maniezzo and Colorni 1996) and minimum spanning trees (Neumann and Witt 2008). As a swarm intelligence method (Dorigo, Birattari and Stützle 2006), ACO is well suited to agent-based modelling techniques. Acting on local cues, simple programmable agents can find solutions quickly, and the quality of the solutions rises over time (Blum and Dorigo 2004). The oldest and conceptually simplest ACO variant is Ant System (Gutjahr 2008). The main characteristics of Ant System, as outlined by Dorigo et al. (1996), are positive feedback, distributed computing, and the use of constructive greedy heuristics. Although ACO models have been around for almost two decades, new applications keep emerging, and new, improved algorithms are constantly being developed. Only recently have the first comprehensive and rigorous studies of ant colonies’ runtime behaviour appeared (Gutjahr 2008; Neumann and Witt 2008). Over the last decade, there have been plenty of applications for ACO.
The most common ones are vehicular traffic planning (Rizzoli et al. 2007), project scheduling (Ritchie 2003) and network routing (Sim and Sun 2002) applications. In fields more closely related to architecture, urban design and engineering, ACO algorithms have been used for ship design (Jin and Zhao 2007), optimising steel frame structures (Camp, Bichon and Stovall 2005), generating street networks in an urban context based on minimum spanning tree computation (Nickerson 2008), and optimising the coverage of personal communication systems in the urban environment using a bitmap database of buildings (Pechač 2002).

8.2 Selected ACO algorithm

There are several versions of the original ACO algorithm proposed by Dorigo in his PhD thesis (Gutjahr 2008). Different implementations of the algorithm have been developed to increase its speed and robustness, or to tailor it to the specific needs of particular applications. The algorithm proposed in this thesis is a simple version of ACO, not much different from the Ant System (Dorigo, Maniezzo and Colorni 1996). The aim of developing and testing an already-invented algorithm is twofold: to gain a deeper understanding of how the key parameters affect the performance of the algorithm in practice, and to assess the suitability of the algorithm in a generative design model. The aim here is not to find the best performing ACO but to test the selected algorithm for generating corridor systems.

The ant colony simulation in this case study was set up within a 2D CAD environment in Bentley’s Microstation XM 2008. Before running the ACO algorithm, one had to construct a line drawing representing the network of interconnected paths. This network defined all the possible links the agents could use in order to move from one location to another. All the experiments in this study were conducted on a particular type of network constructed with purpose-made CAD tools designed and programmed by the author of this thesis. While a more thorough overview of these tools is given in section 6.7, it is sufficient here to mention that a closed loop network (Haggett and Chorley 1969) was made using a Voronoi subdivision algorithm that was modified to suit the purpose of this case study. The Voronoi network possesses an important quality – ordered irregularity – that makes it suitable for the conducted experiments. The Voronoi network connects the nearest nodes and allows topological distance to be calculated by counting links instead of measuring actual topographical distance.
With a typical node having no more than three links to its neighbouring nodes, the network is simple but offers a sufficient variety of routes through it. The network used in the following experiments was a bounded network (Haggett and Chorley 1969, p. 53) enclosed within a rectangular area consisting of 99 Voronoi patches, 201 nodes, and 300 links. The target and the source patch were selected in relation to the size of the entire network (see Figure 8.1). There were two

possible solutions for the shortest route from the source patch to the target. Both routes, the green and the red one, were 10 steps long.

Figure 8.1: The setting-out configuration showing the lower (source) patch, the upper (target) patch and the shortest routes

Step-by-step description of the selected algorithm:

• Preparatory routines pick up the setting-out configuration (a 2D CAD drawing) and establish the network by defining nodes, links and patches. The target and the source patch are also retrieved from the drawing. Agents are located at one of the nodes of the source patch.

• Every agent ‘sniffs’ the target pheromone of adjacent links, selects one of the links and moves to the node at the opposite end of this link. The selection mechanism for links used in this study is a roulette wheel type of selection. When the links around the agent’s current position carry no pheromone, all links have equal weighting and the choice is made randomly. On their way to the target, agents adjust the ‘source’ pheromone on links, which carries information about their origin patch.

• The further an agent goes, the smaller the amount of pheromone it drops into the environment – the emitting capacity of an unsuccessful agent decreases over time. This prevents agents from carrying pheromone too far from the source patch but helps them to develop uniform pheromone gradients inclining towards the source.

• The dropped pheromone evaporates over time. This helps the colony to forget already discovered paths and find new, shorter ones. Forgetting is an essential component of the colony’s learning procedure.

• If an agent reaches its target, it turns around and starts finding its way back by following the ‘home patch pheromone’.

• Test phase. A test agent hill-climbs the pheromone gradient from the source patch to the target. The procedure is similar to the process described above, the only exception being the selection routine: whereas in the main routine agents choose between neighbouring nodes by applying the roulette wheel selection to the connecting links, the behaviour of the test agent is deterministic and it prefers the links with the highest pheromone concentration.

Pseudo code of the selected algorithm:

Initialise
    Get network from CAD drawing
    Place agents
Continue until an acceptably short path is found
    For agents = m
        Choose a next node
        Step
        Update pheromone values
    Test agent
        For test-steps = n
            Choose the ‘smelliest’ link
            Step
    Draw a path
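As an illustration only, the pseudo code above can be condensed into a runnable sketch. The version below simplifies the thesis algorithm considerably – it uses a single pheromone field, omits the decaying emitting capacity and the homeward journey, and represents the network as a plain adjacency dictionary – so all names and parameter values are assumptions:

```python
import random

def aco_shortest_path(adj, source, target, n_agents=50, n_iters=200,
                      deposit=1.0, evaporation=0.05, max_steps=20, seed=1):
    """Minimal ant-system sketch: agents walk from source to target,
    successful agents reinforce the links of their route in proportion
    to its shortness, and pheromone evaporates every iteration."""
    rng = random.Random(seed)
    # Pheromone is stored per undirected link, initialised uniformly.
    pher = {frozenset((a, b)): 1.0 for a in adj for b in adj[a]}

    def roulette(node):
        # Roulette wheel selection weighted by link pheromone.
        nbrs = adj[node]
        weights = [pher[frozenset((node, nb))] for nb in nbrs]
        return rng.choices(nbrs, weights=weights)[0]

    for _ in range(n_iters):
        for _ in range(n_agents):
            path, node = [source], source
            while node != target and len(path) <= max_steps:
                node = roulette(node)
                path.append(node)
            if node == target:
                # Shorter routes receive a larger deposit per link.
                for a, b in zip(path, path[1:]):
                    pher[frozenset((a, b))] += deposit / len(path)
        for link in pher:
            pher[link] *= 1.0 - evaporation  # forgetting

    # Test phase: a deterministic agent follows the strongest links,
    # preferring nodes it has not yet visited.
    path, node = [source], source
    while node != target and len(path) <= max_steps:
        nbrs = [nb for nb in adj[node] if nb not in path] or adj[node]
        node = max(nbrs, key=lambda nb: pher[frozenset((node, nb))])
        path.append(node)
    return path
```

On a small network with two candidate routes, the colony typically converges on the shorter one; the deposit and evaporation parameters interact in the manner explored in the next section.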

Depending on the network shape, size and complexity, the first paths are usually discovered early on in the process. Typically, these paths are not the shortest ones, and only in time, when the pheromone gradient has developed to its mature state, do shorter paths start to emerge. By increasing the distance between the source patch and the target patch, finding routes becomes exponentially more difficult. It often takes the colony longer to settle on a path if there is more than one shortest path – paths compete with one another until one of them becomes dominant. In the most common scenario, the colony’s search converges to a solution in

a continuously decelerating manner, with new paths being discovered less and less frequently. There are several scenarios for finding the shortest route. With the simplistic algorithm described earlier in this study, it is not guaranteed that the shortest path will be found at all. Additional changes need to be introduced in order to increase the chances of finding optimal routes. For example, Blum and Dorigo’s (2004) experiments show that limiting the pheromone values to the interval from 0 to 1 leads to more robust algorithms. The colony’s behaviour depends on a number of parameters. Although some of them are tested in the following section, the aim of this study is not to develop solid ACO algorithms, but rather to focus on plugging the ACO into the generative program.

8.3 Testing ACO parameters

This section describes the explorative analysis that was conducted in order to better understand the proposed ACO algorithm. A number of tests were carried out to identify the most influential parameters affecting the behaviour of an agent colony. Images in this section use a simple colour coding system. Each line represents a link in the Voronoi network that agents can use for travelling. Whereas a red dotted line shows the shortest path found, the colour of a link indicates its pheromone concentration as follows:

• Red – high concentration

• Yellow – high-medium concentration

• Green – medium-low concentration

• Blue – low concentration

The first test was set up to explore what role the size of the colony plays in finding the shortest path between two patches in the network. There were two possible shortest routes, both comprising 10 steps. The maximum length of a route was defined as 25 steps. In all tests the colony could find several ways to travel to the destination point. Colonies of 50 and 200 agents found a route 12 steps long, while larger colonies actually found both of the shortest routes (see Figure 8.2).

Figure 8.2: Number of agents tested – 50 (top left), 200 (top right), 500 (bottom left) and 2000 (bottom right)

Agents were programmed to use a roulette wheel selection method to choose between the available links when navigating the network. This method favoured links with a higher target pheromone value, but also gave links with lower pheromone values a proportional chance of being selected. Early tests suggested modifying the algorithm in order to improve the route optimisation process. As shown in Figure 8.2, the longer routes found by the 50- and 200-strong colonies partially overlap with the shortest ones. In these tests the found routes were established relatively quickly, while the lack of explorative behaviour prevented further optimisation. To preserve the explorative nature of the agent colony, the algorithm was modified by changing the weights in the roulette wheel selection method. Adding a constant value to each link’s pheromone value increased the chance of the links with the weakest ‘smell’ being visited by agents. The test was run again with 200 agents, and this time the colony could find the shortest path. However, the colony was unable to maintain this path and kept searching for better ones – it seemed to prefer a slightly longer route. Therefore, further tests were carried out with the earlier version of the roulette wheel selection algorithm.
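For illustration, a roulette wheel selection of this kind could look like the following Python sketch. The `bias` argument stands for the constant that was added in the modified variant; the function names and the fallback behaviour are assumptions rather than the original CAD implementation:

```python
import random

def roulette_select(links, pheromone, bias=0.0, rng=random):
    """Pick a link with probability proportional to pheromone + bias.

    links: candidate link ids around the agent's current node
    pheromone: dict mapping link id -> pheromone value
    bias: constant added to every weight; a positive value gives
          weakly marked links a better chance of being explored
    """
    weights = [pheromone[link] + bias for link in links]
    total = sum(weights)
    if total == 0:
        # No pheromone anywhere: all links are equally likely.
        return rng.choice(links)
    pick = rng.uniform(0, total)
    cumulative = 0.0
    for link, weight in zip(links, weights):
        cumulative += weight
        if pick <= cumulative:
            return link
    return links[-1]  # guard against floating point rounding
```

With `bias = 0`, a link holding all the pheromone is chosen every time; a positive bias redistributes some probability to the weaker links, which is exactly the trade-off between exploitation and exploration described above.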

The agent colony’s failure to maintain the shortest path required further testing of the parameters needed to sustain the colony’s explorative behaviour in search of the best solution. It was found that higher evaporation rates tend to consolidate a colony’s selected routes much more quickly but often prevented it from finding shorter ones. Lower evaporation rates resulted in a more uniform pheromone distribution and helped the colony to find an optimal way (see Figure 8.3). However, with low evaporation rates and a smaller number of agents, the colony did not find the shortest way and was locked to the solution found early on in the process, because the system’s ability to forget was severely reduced.

Figure 8.3: Evaporation rates tested – 0.00001 (top left), 0.003 (top right), 0.01 (bottom left) and 0.03 (bottom right)

A similar but reversed effect was observed with the pheromone adjustment rate (see Figure 8.4). When agents released smaller quantities of pheromone into the environment, several long routes were established but the shortest path was not discovered. Increasing the adjustment rate helped the colony to find the shortest path, while values that were too high reduced its ability to forget already learned routes. Essentially,

pheromone adjustment and evaporation rates counteract one another. In order to find the shortest routes, these parameters have to work in balance. Increasing or decreasing both parameters proportionally at the same time does not drastically change the colony’s behaviour. The lesson here is that whereas the evaporation parameter can be seen as an environmental property and can be modified globally, the adjustment parameter can be modified at the agent level.
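The balance between the two rates can be made explicit with a one-link thought experiment. If agents add a fixed amount d of pheromone per tick and a fraction e of it evaporates, the pheromone level converges to d/e – so scaling both rates by the same factor leaves the equilibrium unchanged, consistent with the observation above. A small illustrative sketch:

```python
def pheromone_equilibrium(deposit, evaporation, ticks=10000, start=0.0):
    """Iterate p <- (1 - e) * p + d on a single, constantly used link.

    The fixed point satisfies p = (1 - e) * p + d, i.e. p = d / e.
    """
    p = start
    for _ in range(ticks):
        p = (1.0 - evaporation) * p + deposit
    return p
```

Both parameter sets in the test below settle at the same level, which illustrates why proportional changes to the two rates barely alter the colony’s behaviour.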

Figure 8.4: Adjust rates tested – 0.03 (top left), 0.1 (top right), 0.3 (bottom left) and 3 (bottom right)

The tested parameters were explored only within a certain range of all possible values. The task was to find out whether these parameters are the key ones, and it was deliberately chosen not to explore the complexities that would arise from changing the parameters concurrently. The effect of the tested parameters can be summarised as follows:

• Number of agents. Using a higher number of agents increases the chance of finding the optimal route. However, it also slows down the algorithm.

• Roulette wheel selection. An alternative selection weighting was tested in order to give less advantage to already established routes and to incentivise the colony’s explorative behaviour. This, however, resulted in the colony’s inability to maintain the shortest routes.

• Evaporation rate. A slower rate allows a more uniform distribution of pheromone, which leads to the discovery of optimal routes. Too small a value, though, hinders the colony’s explorative behaviour.

• Adjust rate. The effect is inversely proportional to that of the evaporation rate – higher rates prevent the colony from finding new paths, while rates that are too small yield an uneven distribution of pheromone and hinder the discovery of the shortest path.

Achieving a good distribution of pheromone in the network is of the utmost importance for a successful search. Smaller adjustment and higher evaporation rates generate fragmented pheromone patterns. The opposite values – higher adjustment rates and lower evaporation rates – tend towards more uniform fields. Although uniformity is needed, small variations in pheromone values can also help the colony to discover new and shorter routes. For the best result, a continuous and steep gradient of pheromone is needed between the target and the source point (see Figure 8.5).

Figure 8.5: A continuous gradient of pheromone with extreme values around the source point and the target leads to the successful detection of the shortest path

Each tested parameter seems to have a zone of effective operation. No parameter alone controls the system’s ability to find and forget routes.

However, changing a parameter increases or decreases these abilities. Decreasing the evaporation rate, for example, increases the system’s ability to find new routes, but values that are too small can affect its capacity for maintaining those already found. A small adjustment rate prevents agents from finding routes additional to the first one found. Too large a rate, on the other hand, decreases the system’s ability to forget what it has already learned. The rate at which agents add pheromone and the rate at which the environment loses it need to be controlled simultaneously. More complex networks naturally urge one to reduce the evaporation rate and increase the adjustment rate. With a small colony of agents this may still not produce the expected results. In that case, adding more agents is the easiest way to find the shortest path. To achieve better results with an increased colony size, one has to make sure that the adjustment rate is decreased and the evaporation rate is increased proportionally.

As with many other way-finding algorithms, agents in the proposed ACO often exhibit edge-following behaviour. With the given network shape, they tend to find the target patch more quickly if the patch is closer to an edge. This happens because links along the edges of the tested network are longer than those in the middle and lead to the target in fewer steps.

As mentioned earlier, good pheromone distribution in the environment has a crucial influence on the colony’s ability to find quality solutions. In order to keep track of the pheromone field development process, a colour coding system was devised. This enabled the observer to obtain an overview of the whole process at a glance. From the programmer’s point of view, being able to predict the system’s state by observing the pheromone distribution turned out to be a very useful method at a later stage, when the ACO algorithm was incorporated within the generative design program.

8.4 Generating corridor networks for office buildings

This section describes how the chosen ACO algorithm was used in the architectural design process. The ACO algorithm itself has been outlined in detail in the previous sections. The whole process of generating circulation networks for buildings was developed while working together with a team of

professional architects on an architectural project. The building in this case study was an administrative building in Tallinn (Tallinna Linnavalitsus – TLV). The brief for the building was given as part of the documentation prepared for an international design competition. The brief asked for a clear definition of circulation within the building, where the internal movement of office workers was segregated from the publicly accessible circulation space for visitors. Besides the required clarity, additional constraints were derived from Estonian building regulations and standards as well as from the urban and site-specific conditions.

The team of architects working on the competition project decided to develop a generative methodology for solving design issues typically associated with office buildings. Rather than just helping the team on this particular project, the proposed design methodology was intended to be of a more generic type. As part of the overall methodology, a set of programmed design tools was developed and deployed on the project to help the team of architects to design dynamic Voronoi networks (see section 6.7 for a description). The Voronoi network represented the layout of office spaces within the building, and with the dynamic growth of Voronoi patches, the team was able to meet the brief’s spatial requirements. To be more specific, it was possible to control the position and size of Voronoi patches in two ways – by either manipulating the Voronoi seed points or by changing the patches’ internal ‘pressure’. The first option was useful for generating distinct Voronoi subdivision topologies, whereas changing the patches’ pressure altered their size without any topological change. The toolkit helped the team of architects achieve the desired area for patches within constrained networks.
Although it gave them the advantage of quickly generating massing solutions for the TLV building, the shortcomings of this method were equally apparent. Architects, used to working within the traditional office design methodology, could not instinctively work with the typical Voronoi network diagram. The internal subdivisions of the generated building mass demanded an unconventional approach to several design issues. As one of the biggest issues, the team had to come up with a working circulation diagram tailored to the internal office layout. Due to the lack of time during the competition, the team eventually used a mix of both computational and traditional methods. An additional – purely computational – algorithm was proposed shortly after the competition.

Similarly to the Nordhavnen case study (Chapter 7), the idea for the prototype was conceived during the architectural competition. The prototype development, however, was completed after the design proposal had already been submitted to the competition. Nevertheless, working through the competition became an essential part of the development cycle. As the prototype program was completed after the competition work had finished, the results presented in this thesis do not entirely match the original competition submission. As mentioned earlier, the design team used a programmable toolset to generate preliminary massing solutions. Once a satisfactory massing was proposed, the team faced a problem – the generated Voronoi subdivisions did not directly suggest to architects how the internal circulation should work. In order to solve the circulation, architects had to find suitable locations for stair cores and corridors – something that was far from explicitly apparent in the generated Voronoi diagrams.

Figure 8.6: Problem in the context: the task was to find a quality solution for the internal circulation on all 3 floors of the building. The image shows floors 2, 3 and 4 with the generated subdivision (coloured patches) in the perimeter polygon, and the proposed structural grid

While each Voronoi patch represented a room, the task was to connect all the patches to stair cores. With the goal of creating an economical layout, the team wanted to get away with as few stair cores as possible. Therefore, one had to position the smallest number of cores within the building footprint while following all the applicable building regulations. Estonian fire regulations for buildings state 45 meters as the

maximum distance from any room in the building to the closest two stair cores. Naturally, as a practical measure, the team of architects also had to make sure that the stair cores on every floor would line up.

Given the 2D Voronoi massing diagram, a few solutions for the circulation were considered. The team first hoped to use the property of 2D Voronoi diagrams of approximating polygons’ medial axes, as described by Dey and Zhao (2003). The medial axis can be used for constructing the minimal spanning tree connecting all the Voronoi patches within a given polygon (Haverkort and Bodlaender 1999). This method, however, is applicable only to diagrams where all the Voronoi patches are aligned along the polygon’s edges and the polygon itself does not contain any holes. This was not the case with the TLV building. As the building featured internal atria and some of the patches were disconnected from the perimeter polygon, a different solution was required. The complex spatial layout and topology called for a more sophisticated approach. A simple modeller algorithm combined with ant colony optimisation was chosen instead of the medial axis transform method.

As discussed in Chapter 2 of this thesis, computational modelling becomes generative when modelling algorithms are combined with analytical ones. The modelling part of the program was inspired by the manual exercise executed by the team of architects. The ACO algorithm, as described earlier in this chapter, was deployed for assessing the generated output and feeding it back to the modelling routine. The combination of modelling and analytical modules allowed the computational designer to devise, quite easily, a greedy algorithm that gradually converged towards an optimal solution. The task for the modeller algorithm was to find acceptable locations for all stair cores within the building perimeter.
By moving the cores around, the program was geared to find a solution in which all the given Voronoi patches were connected to at least two cores within a given maximum radius (45 meters). To find the shortest routes between stair cores and all Voronoi patches, the analytical ACO algorithm was deployed. The number and the initial locations of stair cores were dictated by the user-defined setting-out configuration. The final solution was then sought using a greedy algorithm. Each stair core occupying a Voronoi patch was allowed to move to one of its neighbouring patches. With stair cores looking for local maxima in terms of

connectedness, it was expected that a solution would be found in which all spaces were connected to stair cores within the given limitations. Possessing a temporary bodily substance in the form of a Voronoi patch and having the freedom to move around pursuing its target, the stair-core object meets the basic definition of a mobile agent. The 'stair-core agent', responsible for finding the best solution from its own perspective, was not controlled by the program directly. As it turned out, the proposed program for generating circulation diagrams for an office building deployed two kinds of agents – ants in the ACO module, and more abstract stair-core agents in the generative module. According to Adamatzky's (2001) classification, these two breeds of agents differ: stair-core agents are space-based, whereas ants are graph-based.

Step-by-step description of the generative algorithm:

• Preparatory routines pick up the network, define nodes, links and patches, and identify stair cores.

• Run the ACO algorithm (see the description in section 7.2) to find all patches that satisfy the given requirement – every patch needs to be connected to at least two stair cores within the allowed maximum distance.

• Move a stair core to one of its neighbouring patches and repeat the ACO run. If the overall count of connected patches is higher than at the previous stage (global rule), or if the number of patches connected to this particular stair core has increased (local rule), confirm the new position. If this is not the case, move the stair core back to its previous location and choose another patch in the neighbourhood. Once a better local or global solution is found, another stair core takes its turn.

• The simulation stops when all the patches are connected.

Pseudo code:

Initialise
    Position all stair cores
Continue until all patches connected
    Each stair-core
        Move a stair-core to one of the neighbouring patches
        Each patch
            do ACO algorithm
            Find two acceptably close stair-cores (targets)
            If both closer than 45m then
                Connect the patch
        If better global or local solution found then
            do the next stair-core
        Else
            Move the stair-core back to previous location and repeat

The algorithm is a traditional permutational method for generating built forms and layouts to minimise travel costs (Tabor 1971, pp. 56-59). Permutational methods first create the framework and boundaries, then sort out the initial spatial layout, and then automatically locate functions (stair cores, in this case study). Tabor also includes manual modification, where the designer adjusts the layout to match requirements that are not explicitly expressed in the program.

The architect's role in running the TLV program is two-fold. Firstly, the architect has to set up the initial layout of the building, define the ACO parameters (number of agents, evaporation rate etc.) and global requirements such as the maximum distance to the two closest stair cores. The initial configuration has to be a network of shapes drawn in a 2D CAD environment (see Figure 8.6). Using a colour-coding system, the architect is also invited to define the initial position of stair cores. Secondly, the architect can influence the generative process by modifying the network shape, by removing or relocating stair cores, or by adding new ones. Once the initial layout has been set up, the program can be left to work its way through the problem until it finds a solution that satisfies all the requirements. Alternatively, the program can be stopped at any time to make further amendments to the layout. It is often difficult for a designer to estimate how many stair cores are needed to solve the problem. With the program, it is easy enough to test several differently configured layouts and find acceptable solutions.

As outlined earlier, the program is configured to connect all patches while the perimeter of the building remains static. However, it is possible to deploy the program as a form-finding device. An obvious tactic to get variable boundary solutions would be

creating a larger network of patches than actually required. The program can then run until the desired number of patches has been found. The TLV program can also be used effectively in 'manual mode' – one can prevent stair cores from moving by breaking the respective loop in the algorithm. In this case, the architect is responsible for repositioning stair cores after the computer has analysed the layout. The proposed generative algorithm outputs simple and easily comprehensible graphics. All the following images in this section share the same key:

• White patches – 'connected' patches that satisfy the 45-meter rule

• Red patches – stair cores

• Black patches – disconnected patches

• Red lines – suggested corridors

All tests were conducted on a Voronoi network within a square boundary. The network consisted of 99 Voronoi patches, 201 nodes, and 300 links. The sequence of images in Figure 8.7 illustrates how the algorithm solved a given task by repositioning stair cores in order to connect all the patches. Initially positioned in the centre of the area, stair-core agents quickly developed a tactic of spreading out. Curiously, stair cores found good locations close to the edges of the network. This was – as often happens with agent-based models in bounded environments – because of the edge-following behaviour of ants in the ACO algorithm. Voronoi patches on the edges typically had only 4 neighbouring patches, instead of the 5 or more neighbours that patches in the middle of the network had.

Figure 8.7: Test run with 6 stair-core agents (from left to right). The algorithm solved the problem in 65 steps


Test results (see Figure 8.7), produced without any attempt to generate a meaningful diagram in architectural terms, showed good performance and validated the algorithm internally. They also indicated some technical weaknesses, which will be discussed later in this section. Final architectural diagrams were generated by rerunning the algorithm on the previously created form diagrams (see Figure 8.6) of the 2nd, 3rd and 4th floors. As there was no coordination between stair cores on separate layers, this exercise served the single purpose of finding the right number of stair cores per floor. Working with the 2nd floor form diagram, a solution with 5 stair cores (see Figure 8.8) was found by the algorithm in 51 steps, with the final position of stair cores being considerably different from the initial one. The cores – originally located in a zigzag pattern along the building's longest façades – travelled to the centre of the massing and gathered around the internal atria. The generated diagram suggested that shorter routes occur around atria, and that the 'ragged' outer façade is too expensive in terms of circulation length.

Figure 8.8: Generating 2nd floor diagram with 5 stair cores

The 3rd floor diagram (see Figure 8.9) was generated much faster. Instead of 51 steps, it took only 20 to find an acceptable solution. The quicker run is attributed to the better initial distribution of stair cores, but also to the different number of stair cores. It seems that a larger number of stair cores provides more flexibility in terms of the layout.


Figure 8.9: Generating the 3rd floor diagram with 9 stair cores

The 4th floor (see Figure 8.10) took the longest to run – 96 steps. The initial configuration was again one of the main reasons for this. As stair-core agents gathered around the internal atria, some of them got stuck behind other agents. Some cores that had already found good locations were reluctant to move. While competing for patches, these stair cores were often approached by other cores, triggering further movement. Thus, by competing with one another, stair cores collectively achieved the bigger task of connecting more patches.

Figure 8.10: Generating the 4th floor diagram with 6 stair cores


Since diagrams for all floors were generated irrespective of the floor above or below, another stage in the design process was needed. The final stage involved some manual modification of the diagram and deploying the analytical module of the algorithm just one more time. Figure 8.11 compares the initially generated solution with the final, manually modified solution where stair cores at different levels overlap. With the new constraint and different Voronoi network layouts on each floor, it turned out to be much harder to connect all patches on all floors. As a consequence, some of the Voronoi patches remained unconnected. However, this was acceptable in the context of a relatively loose competition brief. An obvious solution to that problem was to increase the number of stair cores. Alternatively, if the final building boundary had not been fixed, a designer could have preferred to introduce a network with a larger number of patches, encouraging quite a different distribution of cores.
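The vertical coordination described above – making sure that stair cores on different levels overlap – can be sketched as a simple plan-position comparison. The function below is an illustrative assumption, not the procedure actually used in the project; positions are 2D plan coordinates and the tolerance value is arbitrary.

```python
import math

# Sketch of a vertical-alignment check: every stair core on a floor should
# have a counterpart within `tolerance` (in plan units) on the floor above.
def cores_line_up(floors, tolerance=0.5):
    """floors: one list of (x, y) stair-core positions per storey."""
    for lower, upper in zip(floors, floors[1:]):
        for cx, cy in lower:
            if not any(math.hypot(cx - ux, cy - uy) <= tolerance
                       for ux, uy in upper):
                return False
    return True
```

Such a check could be run after each manual modification to flag cores that have drifted out of alignment between storeys.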

Figure 8.11: Generated solution (top) versus manually modified solution (bottom)

The output of the proposed algorithm should not be seen as a final design proposal. Although generated solutions represent circulation systems in the building, they are still diagrams and subject to further interpretation. There are two main reasons why the generated diagrams cannot be regarded as traditional architectural

drawings. Firstly, the proposed program handles only topological relationships; the 45-meter rule has to be interpreted as a topological rule. Secondly, in designing the program, Estonian fire regulations for buildings were interpreted somewhat loosely. The program ignores the fact that the regulations add another requirement to evacuation routes – the maximum allowable distance from a room to the two closest stair cores has to be reduced if the two evacuation routes overlap. Nonetheless, architects can use the generative program to produce results that satisfy the given rules. The number-crunching task can be left to the computer, while people can focus on more creative aspects of designs. Since generated diagrams have both qualitative and quantitative properties, they can be described as constructive diagrams in Alexander's (1964) terms. The quality of the diagrams lies in the topological layout of internal spaces and their connectivity to stair cores, satisfying Alexander's definition of the form diagram. Quantitative aspects, such as the maximum allowable distance between rooms and corridors, on the other hand, meet the definition of the requirement diagram.

8.5 Observations and conclusions

The proposed computational model for generating topology-based room layouts and circulation systems falls into the category of permutational methods (Tabor 1971, pp. 56-59). These methods usually involve four stages: creation of the framework, creation of the initial layout, automatic modification of the layout, and manual alterations to it. Getting good-quality results quickly relies on smooth and coordinated actions at all of these stages. However, the best results are sometimes achieved by skilfully jumping between stages and creating loops of actions in the process by repeating some stages and skipping others. Depending on the nature of the project, the most time-consuming stage is usually the first, but with the right kind of tools (see section 6.7 for the description of dynamic Voronoi network tools), one can significantly speed up the workflow. The originality of the proposed design process for the TLV building lies in the algorithm deployed at the automatic modification stage. The combination of generative and analytical modules creates a unique algorithm that – assuming that better solutions exist – gradually optimises the initial configuration. There are also a

few drawbacks to the algorithm. Firstly, the speed of finding acceptable solutions could be much better. The speed is, however, not so much dependent on the actual algorithm as on the CAD platform and the particular deployment language (Visual Basic for Applications). Written in a lower-level programming language, the algorithm could work much faster, making the workflow more fluent. The second issue emerged while using the ACO algorithm to assess the generated layouts. Due to the probabilistic nature of the algorithm, identical configurations sometimes led to different results. To overcome this problem, certain amendments need to be made to the ACO algorithm to get consistent results. This can be done either by reusing the same array of randomly generated numbers for simulating random choice, or by improving the search algorithm for guaranteed results.

There are a few interesting aspects of the behaviour of the generative algorithm that are worth highlighting here. The search process for the optimal core layout is catalysed by the competition between stair-core agents trying to connect to as many patches as possible. Competition between individual stair cores pushes them into collaboration in the global quest for better overall connectivity. Collaboration is, therefore, the result of competition. Another interesting behaviour – edge following – renders some of the Voronoi network's characteristics more visible. As shown in the test run and in generating diagrams for individual floors, stair-core agents tend to locate themselves either along the building perimeter or around internal atria. This clearly indicates the incoherencies in the otherwise quite uniform network topology – travel distances along the outer or inner perimeter are considerably smaller than in the middle of the network. Diagrams generated for the TLV competition are constructive circulation diagrams by nature.
As illustrated in Figure 8.12, the formation of a constructive diagram takes place when the form diagram is fed into the generative program to produce circulation systems. The output of this process encompasses both the topological layout of rooms and the number and relative locations of stair cores – the diagrammatic form satisfying circulation requirements.


Figure 8.12: Form diagram + generative process = constructive diagram
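The first of the fixes suggested earlier for the ACO's probabilistic behaviour – reusing the same sequence of random numbers across runs – amounts to seeding the random number generator. A minimal illustration follows; the path-choice function and pheromone values are hypothetical, not taken from the TLV program.

```python
import random

# Hypothetical ACO-style probabilistic choice: pick the next link with
# probability proportional to its pheromone level.
def choose_link(links, pheromone, rng):
    weights = [pheromone[link] for link in links]
    return rng.choices(links, weights=weights, k=1)[0]

# Seeding the generator fixes the 'array of random numbers', so identical
# configurations always lead to identical sequences of choices.
def run(seed, steps=10):
    rng = random.Random(seed)
    pheromone = {"a": 1.0, "b": 2.0, "c": 0.5}
    links = sorted(pheromone)
    return [choose_link(links, pheromone, rng) for _ in range(steps)]
```

Two runs with the same seed reproduce each other exactly, which makes the analytical module's assessment of identical layouts consistent.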

Building legislation and local spatial constraints provide a good source for designing fitness functions for generative design algorithms, while observing the traditional design workflow provides inspiration for developing a heuristic computational approach. However, there is a great benefit in having a working prototype ready before the actual design process starts. In practice – whether in the context of design competitions or commissioned work – there is usually very little chance of developing a completely new approach. It is easier to produce useful results by adopting already existing methods.

To apply generative modelling methods in the context of a practical architectural design project, it is essential for a computational designer to work together with a team of professional architects. Without accepting the traditional workflow, there is a danger that the generative contribution gets lost. As usually happens, members of the team, working on different issues, modify one another's tasks – the design constantly develops over time, and team members have to acknowledge this. Adaptive integration of one's work is, therefore, of the utmost importance. From the computational designer's perspective, computational models have to be flexible enough to endure changes in the design. Although never actually deployed in the context of teamwork, the algorithm developed for the TLV building is potentially suitable for flexible integration. The distributed nature of an agent system allows it to deal successfully with external perturbations (Bonabeau, Dorigo and Theraulaz 1999, p. 16) and to reconfigure itself in response to changes in the environment. The TLV algorithm could potentially be adjusted to work alongside a form-finding exercise in which the Voronoi network layout is dynamically altered either by designers or by other generative algorithms.


The proposed algorithm for locating stair cores within a given layout can lead to solutions that are not immediately obvious to designers. Naturally, some floor plate layouts are simple enough that designers do not need computational tools for finding the right stair core configuration. However, when the layout is more complex and the number of stair cores is larger, solutions become more difficult to find and the benefits of computation become apparent. With complex floor plate layouts, it is hard even to define the right number of stair cores needed, let alone to find an acceptable configuration. Nevertheless, designers need a degree of control in order to test different options quickly. In this case study, the control is offered through the setting-out configuration – a designer is expected to create an initial layout that becomes the basis for further computational modifications. During computational modifications, it is essential that progress is visually communicated back to the designer. The designer can then stop the process at any time and adjust the layout manually – add or remove stair cores, for example. In this way, the designer remains actively engaged and the solution can be found much more quickly.

TLV competition project credits

The competition entry for the TLV building was submitted by Slider Studio Ltd. The TLV computational model was designed and developed single-handedly by the author of this thesis during and after the competition.


Chapter 9: Controlling the diagram

This chapter reflects on the case studies and prototypes discussed earlier in this thesis. The objective herein is to analyse these models in order to establish some principles of how control over bottom-up simulations can be maintained, and to highlight some general aspects of generating circulation diagrams with multi-agent systems. There are several issues in deploying bottom-up models that need to be considered before the selection of a particular control mechanism can be made. While it is important to understand how circulation diagrams develop, the greatest challenge is to decide whether a model is capable of generating adequate diagrams for given design problems. It is argued that understanding the development process is the key to controlling the model, whereas the flexibility and the sensitivity of the model also play important roles. Once the general aspects of the development process are understood, the exact methods of control can be discussed.

Why is gaining control such an important issue in the first place? The answer is quite straightforward – generated diagrams are not the end product but intermediate tools that help the designer to shape design proposals. If anything in the design brief or the spatial requirements changes, the diagram may have to change as well. Without control over the underlying multi-agent model, the designer has no means of making appropriate changes to the diagram. Only if the diagram responds to the changed requirements and brief can the model truly become an integral part of the design workflow.

9.1 Emergent behaviour OF and IN agent colonies

The term 'emergence' is often used to describe phenomena that are not explicitly prescribed in the model; such phenomena are readily observable in many multi-agent systems. Gilbert (1995) argues that we can talk about emergence in agent colonies only when we can discover an exact description of the global state of the system. In circulation agent colonies, this concise description of the global state manifests itself in clear diagrams that can be analysed in topographical and topological terms. Generating successful diagrams relies on the ability of dynamic systems to produce certain recurring

patterns. These patterns do not have to be strictly repetitive, but they often display a high level of orderliness. Some models produce a great variety of forms; others are fairly limited and generate recurring patterns. The common underlying principle in these models is that the description of generated patterns is not defined in the code itself, but is created when agents execute their behavioural program. An important property of emergent patterns is that they are usually flexible and can therefore be used in different situations and in dynamically changing environments.

The critical question from the designer's perspective is how to control emergent models. In order to answer that question, one needs to understand the reasons for emergence. It is argued herein that one has to learn how the colony's behaviour emerges from individual behaviours before the appropriate control mechanisms can be chosen. 'Emergence' is a somewhat perilous word; it can easily be assigned to patterns that look complex from the observer's perspective but are actually predefined in the model's code. Therefore, one needs to analyse carefully the process that has generated these patterns. It is quite simple to identify emergent patterns when models are constructed following agent-based modelling techniques. If agents' behaviour is based solely on information received from their immediate environment, and this information is processed by their internal sensory-motor routines, then the resultant movement patterns in the colony deserve to be called emergent. There is another point of possible confusion, however. Colonies of randomly behaving agents can also – by blind chance – form patterns that seem to be ordered and can therefore be mistaken for emergent ones. These patterns can sometimes be observed as snapshots of the process, although they may not necessarily be persistent. Hence, they cannot be said to be emergent.
One can talk about emergence in circulation models only if there is evidence of movement flow of a certain persistency. A system cannot always be distinctly classified as persistent or not, but the persistency of a model can be measured as a state of the model in time. Movement patterns can be emergent only if they are sufficiently persistent to be described as a tendency in the colony's movement during a defined period of time. During the active research stage for this thesis, several multi-agent models were built and studied in order to understand the behaviour in and of colonies. The most successful of these models were covered in Chapter 6, and some new ones were introduced in the case study chapters (see Chapter 7 and Chapter 8). While

studying these models, several kinds of emergent phenomena were observed, and it became clear that emergence can be looked at on two distinct levels – at the colony level and at the level of individual agents. Gilbert (1995) claims that all multi-agent systems can be described in terms of actions of individual agents or, at the global level, in terms of actions of the colony. Similarly, emergent behaviour can be described at two levels:

1) The behaviour of an individual agent in the colony emerges from its interaction with certain features in its environment. There are a number of behaviours that are common to many of the studied models. Such behaviours can be seen as general principles of agents' movement, although they are not explicitly defined in the algorithms that control the movement. Quite common emergent behaviours, for example, are crowd following in flocking models and agents generating circular movement patterns by following their own trail. The most often observed emergent behaviour, however, is agents following edges in the environment. In fact, edge following is so common that in some of the studied models it happened even when the prototype had serious flaws in the agent's design or in its movement controller algorithms. Although edge following can be the first sign of emergent behaviour, it has to be treated with caution because it can simply be an artefact of a faulty algorithm or a mistake in the computational logic. Both the truly emergent and the artefactual edge following occur in similar locations in the model: along objects in the model, or along the edges of the simulated world.

2) Emergent behaviour of the colony can be observed when the motion of individual agents is coordinated at a higher level. Possibly the most famous computational model displaying the emergent behaviour of a colony of agents is Reynolds' (1987) flocking model. Although this model has not been used in the experiments for generating circulation diagrams, it can reveal generic principles of flocking agents. Later in this chapter, the flocking model is analysed in order to explain the sensitivity of multi-agent systems. The behaviour of the flock can be depicted by describing it as the behaviour of a single entity. It can be said, for example, that the flock is moving

left, or that it is splitting into two. The behaviour of space-forming agents, on the other hand, can be better conveyed by describing the colony as a structure. It can be said, for example, that the colony forms a Voronoi structure (see section 6.7).

Both of the described emergent phenomena – flocking behaviour and spatial patterns – can be observed simply by watching the agents. Other types of emergent behaviour of the colony are not always immediately apparent. In some instances, the general trend of movement can be discovered by visualising the data retrieved from the environment rather than by observing agents. This is the case with many models using stigmergic communication. Patterns of movement in such models can often be observed and studied better when the stigmergic messages left behind by agents are rendered. In the Loop Generator prototype (see section 6.1), for example, the paths of movement can be traced by giving the 'pheromone' – a substance that is created and perceived by agents – a graphical representation. Something that is very hard to spot simply by looking at the agents can be made immediately obvious by giving the stigmergic 'message' a graphical appearance. Graphics can be very powerful indeed, not only for understanding the behaviour of the colony, but also for drawing out the diagram. Similarly to Loop Generator, the shortest path formation in ant colony optimisation algorithms can easily be observed by visualising the markers left behind by agents. Whereas individual agents can choose different paths, the most used path can easily be identified by looking at the 'pheromone' concentration in the environment.

Emergent coordination in agent colonies that rely on quantitative stigmergy can usually be visualised by highlighting stigmergic markers in the environment. Emergent coordination based on qualitative stigmergy, however, is much harder to achieve and to observe. Nevertheless, as explained in section 6.6, it is possible to get two stigmergic building agents to cooperate and build a structure together. More often, these agents compete with one another for available resources, and cooperation happens only occasionally.
Additionally, when cooperation is achieved, it is very difficult to capture or replicate it – the problem is that the two cooperating agents not only have to have finely tuned (or evolved) behavioural controllers, but they also have to be positioned correctly with respect to the other agent's position and

their actions have to be synchronised. But once two agents are locked into a loop that holds them together and makes them work in unison, a new system is born. Maturana and Varela (1980) call this type of coordination system coupling. More specifically, they claim that:

“Whenever the conduct of two or more unities is such that there is a domain in which the conduct of each one is a function of the conduct of the others, it is said that they are coupled in that domain.” (Maturana and Varela 1980, p. 107)

They also explain that although both of the agents retain their identity, system coupling leads to the generation of a new entity that may exist in a different domain from the coupled agents. Understanding the principles of system coupling also helps one to understand the distinction between the behaviour of individuals and the behaviour of the colony.
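The idea of rendering stigmergic markers rather than the agents themselves, as described above for Loop Generator, can be illustrated with a minimal grid model. All parameters and the ASCII rendering are assumptions made for this sketch; the actual prototypes draw the 'pheromone' in a CAD environment.

```python
import random

# Minimal stigmergy sketch: agents deposit 'pheromone' on a grid while
# performing a bounded random walk, and the field evaporates every step.
# Rendering the field (here as ASCII shading) reveals the most-used areas,
# which are hard to see by watching the agents themselves.
SIZE, EVAPORATION, DEPOSIT = 12, 0.95, 1.0

def step(agents, field, rng):
    for agent in agents:
        x, y = agent
        field[y][x] += DEPOSIT                    # leave a marker behind
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        agent[0] = min(SIZE - 1, max(0, x + dx))  # stay inside the 'world'
        agent[1] = min(SIZE - 1, max(0, y + dy))
    for row in field:                             # evaporate everywhere
        for x in range(SIZE):
            row[x] *= EVAPORATION

def render(field):
    shades = " .:-=+*#"                           # low to high concentration
    top = max(max(row) for row in field) or 1.0
    return "\n".join(
        "".join(shades[min(len(shades) - 1,
                           int(value / top * (len(shades) - 1)))]
                for value in row)
        for row in field)

rng = random.Random(7)
field = [[0.0] * SIZE for _ in range(SIZE)]
agents = [[SIZE // 2, SIZE // 2] for _ in range(20)]
for _ in range(200):
    step(agents, field, rng)
print(render(field))
```

Watching the twenty wandering dots says little, but the rendered field immediately shows where movement has concentrated – the same effect the thesis exploits when drawing out diagrams from pheromone concentrations.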

9.2 Development of the diagram

Having established that the individual behaviour of an agent emerges from its interaction with the environment, while the coordination of individuals in the colony emerges from the behaviour of these individuals, it is a good time to scrutinise the relationship between the development of the diagram and the coordination process. It is argued here that understanding the connection between the development process and coordination is key to gaining successful control over the model. This task, however, is not a trivial one. Coordination in the colony is, after all, an observed phenomenon and – although simple enough to be described in qualitative terms – is difficult to quantify. Coordination can be seen as an ultimate characteristic that describes how well the actions of individual agents are synchronised. Unfortunately, all individual behaviours are fairly complex and equally difficult to measure. Instead, one has to resort to measuring certain quantitative parameters of individual agents, such as their position, speed of movement, direction of movement etc. However, the coordination of the colony cannot be measured simply as the sum of a certain parameter over all individuals. For example, in the case of Reynolds' (1987) flocking model (see also

Figure 9.6), one cannot make far-reaching conclusions about the level of coordination by calculating the average distance between agents. Nor can conclusions be made by calculating the average movement direction of all agents. The behaviour of the flock is too complex to be quantified using this kind of reductionist measurement. Nevertheless, it is possible to quantify coordination when one looks at the problem from a different angle. It is proposed that coordination can be measured between the closest neighbours in the colony. Comparing the parameters of an individual agent with those of its closest neighbour gives an insight into how well the two are coordinated with one another. For example, if the distance to the closest agent is relatively large compared to the size of the 'world', then the agent can be said to be poorly coordinated with the colony. Once calculations for individual agents are carried out, an average value of the coordination parameter can be derived.

In order to validate the proposed method of measuring coordination within a multi-agent colony, the Loop Generator prototype (see section 6.1) is used as a case study. Although it is acknowledged that different coordination parameters are needed for analysing different prototype models, it is the general approach – quantifying the behaviour of the colony based on closest-neighbour calculations – that foremost needs to be validated. The coordination parameters thought to best characterise the colony's behaviour in Loop Generator are alignment and cohesion. Whereas the cohesion parameter is calculated as the average distance to the closest neighbour in the colony, the alignment parameter quantifies the average angle between an agent's movement vector and that of its closest neighbour. The alignment parameter indicates coordination in movement direction; the cohesion parameter expresses the compactness of the colony.
In order to understand the behaviour of the colony, the cohesion and alignment parameters need to be scrutinised together. A small cohesion distance does not yet mean that the colony is well coordinated – chances are that agents are simply clustered around some hot-spots while their movement remains fairly random. But if the alignment angle is small as well, one can be sure that agents are not just close together but are also following the same paths. The proposed algorithms behind the alignment and cohesion calculations are straightforward and need no detailed description. In both cases, the closest neighbour of each agent is found by iterating through the whole colony and finding the smallest Euclidean distance between the agents. The average cohesion, which is expressed in

abstract model units (see Figure 9.1 and Figure 9.3) is then calculated by aggregating all distances to the closest neighbour and divided by the total number of agents. Similarly, the alignment parameter is found by calculating the angle between the heading vectors of closest agents. Because Loop Generator generates two-way movement along the same path, the alignment value is capped to 90 degrees and the angle between opposite directional movement vectors is calculated 0. Hence, the average alignment angle in a colony of randomly moving agents is around 45 degrees.
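The calculations described above can be sketched as follows. This is a minimal Python illustration, not the prototype's actual code; the function names and data layout are assumptions made for clarity.

```python
import math

def closest_neighbour(i, positions):
    """Index of, and distance to, the agent nearest to agent i
    (brute-force search through the whole colony)."""
    best, best_d = None, float("inf")
    for j, p in enumerate(positions):
        if j != i and math.dist(positions[i], p) < best_d:
            best, best_d = j, math.dist(positions[i], p)
    return best, best_d

def cohesion(positions):
    """Average distance to the closest neighbour, in abstract model units."""
    n = len(positions)
    return sum(closest_neighbour(i, positions)[1] for i in range(n)) / n

def alignment(positions, headings):
    """Average angle (degrees) between each agent's heading and its closest
    neighbour's heading, folded to the 0-90 range so that opposite
    directions count as 0 (paths carry two-way movement)."""
    total = 0.0
    for i in range(len(positions)):
        j, _ = closest_neighbour(i, positions)
        (hx, hy), (gx, gy) = headings[i], headings[j]
        cos_a = (hx * gx + hy * gy) / (math.hypot(hx, hy) * math.hypot(gx, gy))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        total += min(angle, 180.0 - angle)   # cap at 90 degrees
    return total / len(positions)
```

With randomly distributed headings, the folded angle is uniform on 0–90 degrees, so the expected `alignment` value is indeed 45 degrees, matching the observation above.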

Figure 9.1: Alignment and cohesion in 2D

Figure 9.1 depicts a typical graph of the alignment and cohesion parameters in a 2D ‘world’. The development of the respective diagram is shown in Figure 9.2. As one can see from the graph, alignment in the colony improves quickly at the beginning of the simulation. The same trend can be observed visually – already after 100 steps a recognisable circulation diagram has developed (see Figure 9.2). Cohesion follows a similar trend to the alignment graph, with the average distance to the closest neighbour dropping almost 30% during the first 200 steps. The rapid descent in the alignment graph stops sooner than it does in the cohesion graph. This means that agents are gathered around certain areas but channels of movement are not yet fully formed.

Figure 9.2: Development of the diagram in 2D


After the initial rapid development of the diagram and the changes in the graphs, the simulation settles down to follow a much smoother development pattern. Then something interesting happens and the cohesion graph takes a short but steep turn back – the level of coordination in the colony is suddenly reduced. Similar, if less dramatic, fluctuations occur in the alignment graph. Although the general trend – a constantly slowing decline – remains the same, sudden changes keep occurring. One can only conclude that, for some reason, existing circulation paths are abandoned by the colony while new ones are generated. This can be observed in the development of the visual movement patterns as well. The diagram can be relatively static during a certain period (see steps 100-200 in Figure 9.2) and then suddenly change (steps 200-250) to form new circuits or an entirely new circulatory system.

Figure 9.3: Alignment and cohesion in 3D

The 3D version of Loop Generator has a similar development cycle to the 2D prototype (see Figure 9.3 and Figure 9.4), with some significant exceptions. Curiously enough, compared to the 2D ‘world’ the development cycle appears to be faster in 3D. Otherwise the alignment and cohesion graphs follow similar trends – the rapid improvement of coordination at the beginning of the simulation gradually eases out as several closed circuits develop in the diagram. The fluctuations that are apparent in the graphs of the 2D prototype are less frequent and much smoother. One can speculate that this is caused by a different sensory configuration – agents in the 3D ‘world’ have 10 sensors instead of the 3 in 2D. However, it is more likely that the nature of 3D diagrams prevents the occurrence of rapid changes and facilitates a continuous and smoother diagram development. Agents in the 2D ‘world’ are more likely to cross paths with other agents than in 3D. Crossing an existing but weakly developed path forces an agent to select between several routes, which causes disturbances in the colony’s behaviour and, as a consequence, it takes longer to form the diagram. This is also manifested in the shape of the diagram – 3D diagrams feature mainly Y-shaped branching points, whereas in 2D some of the junctions are also X-shaped.

Figure 9.4: Development of the diagram in 3D

It is worth remembering here that the Loop Generator prototype is largely a deterministic model. Besides the randomised setting-out configuration, the only probabilistic choice is made by the agent when the sensory input is not differentiable and there are two or more possible movement directions. It is believed that changes in the diagram and fluctuations in the coordination parameters are inherent in the model. The coordination parameter graphs follow roughly the same development pattern. The length of the development cycle depends on several circumstances. The greatest influence is the density of agents in the ‘world’. Too many agents can lead to a situation where the concentration of ‘pheromone’ is uniform across the environment and no diagram appears. Too few agents, on the other hand, prevent the emergence of continuous paths. The relaxation time of the model – the time it takes to reach the stage where the first recognisable circulation diagram has developed – depends heavily on the size of the colony and the size of the ‘world’. Besides the size, there are several other parameters that play crucial roles in the diagram’s development process. The next section scrutinises the effects of some of these parameters on the behaviour of the colony.
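The single probabilistic step – choosing among equally strong sensor readings – can be made concrete with a short sketch. This is a hypothetical Python formulation; `choose_direction` and its argument layout are not taken from the prototype.

```python
import random

def choose_direction(sensor_values, rng=random):
    """Pick the direction with the strongest 'pheromone' reading. When the
    input is not differentiable (several sensors tie for the maximum), the
    choice among them is made at random, which is the model's only
    stochastic step besides the initial setting-out configuration."""
    best = max(sensor_values)
    candidates = [i for i, v in enumerate(sensor_values) if v == best]
    return rng.choice(candidates)
```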

9.3 Flexibility and sensitivity – an exploratory analysis of multi-agent models

Multi-agent models offer several opportunities for the designer to engage in the development process of circulation diagrams. Although not all multi-agent models are inherently interactive, they are dynamic models and it is relatively easy to build ones that respond to input from the designer. Since agents in such models acquire information about their surroundings locally, they can cope with a dynamically changing environment. As long as the sensory input can be recognised and processed by the agent, the global structure of the environment can be of any configuration or shape. The environment does not have to be static and can accept new input.

Different multi-agent models are flexible to different degrees. Some models are fairly rigid and function properly only within a narrow range of environmental input; others are more dynamic and remain operational even when drastic changes take place in the agents’ environment. In general, flexibility can be defined as the system’s (i.e. the agent’s) ability to cope with disruptions in the system’s environment. The environment can be inherently dynamic in the sense that two or more environmental processes interact with one another, or it can be made dynamic by allowing input from the user who controls the model. A truly flexible model can adapt to various situations and change its behaviour rapidly. In reality, every model works efficiently only within a certain range of changing environmental parameters. That is mainly because building truly dynamic systems is very complex and time-consuming (Bonabeau, Dorigo and Theraulaz 1999). Moreover, flexibility is important only if it serves the purpose of the model. The underlying purpose of all circulation models in this thesis is to generate diagrams within different spatial configurations. While different in layout, these spatial configurations should share the same spatial representation (see section 5.2), and the objects in them should share the same representation too.
For example, if an agent-based circulation model is built to work in a setting-out configuration of continuous spatial representation, comparative tests can be executed solely in this particular type of environment. This does not necessarily reduce the complexity of the model, but it aligns quite well with the workflow of the designer. The designer can use one tool (e.g. a CAD package) for preparing the content as long as the multi-agent model is built to accept the output formats of this tool. The model’s flexibility, therefore, is foremost valued for its ability to generate diagrams in different spatial layouts of the same spatial representation, and not for coping in truly dynamic environments.

Flexibility in multi-agent models is often associated with learning (e.g. Wan and Braspenning 1996; Ramos, Fernandes and Rosa 2007). Learning at the level of individual agents is a well-explored domain and, according to Vidal (2003), ought to be used when the designer of the system cannot define all the possible input configurations that agents may encounter during the simulation. Implementing machine learning algorithms in order to facilitate learning in agents is bound to make the multi-agent system a lot more complex (Vidal 2003). Therefore, this thesis is not so much interested in learning at the individual level as it is in learning at the colony’s level. Learning at the colony’s level allows a system of circulation agents to adapt to different environmental layouts and still produce useful circulation diagrams. This can be achieved inexpensively through stigmergic communication, which is a light-weight solution and helps to keep the model relatively simple yet flexible. It does not require individual agents to learn anything new in the sense that they do not have to change their sensory-motor coupling rules. According to Vidal (2003), learning agents are most often “selfish utility maximisers” – they seek to gain payoffs from participating in the simulation. However, the colony does not need to gain anything in order to generate appropriate diagrams. The adaptation of the circulation diagram is propelled by the inherent flexibility in the system of non-learning autonomous agents. In this respect, the diagram is the result of the colony’s self-organisation.

As opposed to the setting-out configuration, the development of circulation diagrams can be controlled through manipulating parameter variables in the model. In order to gain good control over a multi-agent model, one needs to find the parameters that have the greatest impact on the behaviour of the agent colony. Some parameters are more sensitive, and the model can produce acceptable results only when these parameters fall into a narrow range of values.
Other parameters are less sensitive and can take values in a much wider range while the model still produces useful output. Naturally, it is critical to get the more sensitive parameters right during the model’s design and building process – getting these parameters wrong can be a costly mistake in terms of time, but even more so in terms of producing new knowledge. Once the model is built, some parameters can be tested interactively at runtime. Whereas an individual parameter can be changed instantly, it can take a while before the effect on the colony’s behaviour becomes apparent in the diagram. Many of the multi-agent models tested in this thesis have a time lag that can prevent one from grasping the effect of changing the respective parameters immediately. The response time is mainly dependent on the lightness of the model – the faster the model runs, the quicker the effect of changed parameters can be observed.

The difficulty of finding the right set of parameters for a successful model is revealed when several parameters are tested or fine-tuned in parallel. Additional complexity is introduced when these models are stochastic. A large number of runs needs to be carried out in order to quantitatively validate the model, and the validation may quickly become intractable (Brimicombe and Li 2008) – there are simply too many variables in the model. Batty and Torrens (2005) suggest that qualitative validation is more plausible in such cases.

The following two examples illustrate both qualitative and quantitative ways of exploring the sensitivity of a model. Both examples investigate the effect of a critical parameter on the behaviour of the colony, but the approach is slightly different. Firstly, the effect of an agent design parameter in the Loop Generator prototype (see section 6.1) is explored qualitatively by observing the patterns of ‘pheromone’ distribution in the environment. The parameter tested is the angle α (see Figure 6.3) between the agent’s frontal and lateral sensors. Secondly, the behaviour of the colony in the well-known flocking model (Reynolds 1987) is explored both quantitatively and qualitatively. The parameter under scrutiny is similar to the first example – the angle of the agents’ field of view. Since agents in Reynolds’ flocking model compute their behaviour according to the positions of their closest neighbours, this parameter defines how the closest neighbours are selected – only those that reside within the given field of view are included in the calculations. Compared to many advanced multi-agent models, Loop Generator is a very simple model. However, this does not mean that analysing it quantitatively is simple.
Instead, it is suggested that, in the context of generating circulation patterns, it is more viable to conduct a qualitative analysis purely by observing the generated ‘pheromone’ patterns. Figure 9.5 depicts typical tests carried out with different values of α (the angle between the frontal and lateral sensors). While section 6.1 explains the behaviour of the colony in greater detail, it suffices here to say that the proposed way of analysing the effect of changing the agents’ sensory configuration is purely qualitative. The ‘pheromone’ distribution patterns in the different tests are clearly different and allow one to choose an appropriate configuration according to the desired effect.

Figure 9.5: The angle between the front and the side sensor (α) has a crucial impact on the generated diagrams. From left to right: α = 20, 45, 70 and 95 degrees

The flocking model offers a similar opportunity for qualitative observation. However, there is an important distinction between observing the flocking model and observing the Loop Generator model. Whereas in Loop Generator the observed subject is the diagram left behind by the agents, in the flocking model it is the agents themselves that are observed. In addition to qualitative analysis, the behaviour of agents in the flocking model can be analysed quantitatively: where the qualitative analysis describes the behaviour of the whole flock, the quantitative analysis is carried out with respect to individual agents.

Figure 9.6: Flocking agents. Testing the behaviour of agents with different field-of-view angles

Figure 9.6 illustrates four tests with different field-of-view parameters – the visibility angle of an individual agent that defines which other agents are included in the flocking computation. In the first test, with a narrow field of view, agents do not form a coherent flock and the colony breaks up into small groups. Once an agent breaks away from the flock, it cannot ‘see’ the other agents and wanders away, attracting a few other agents to follow its lead. If the field-of-view angle is increased, a more coherent behaviour emerges – agents keep together and the whole colony can suddenly move together in an unpredictable direction. Increasing the angle further does not reduce the coherence of the flock, but it reduces its ability to move around. With a wide field of view, agents simply cannot develop a forward-directed movement and keep turning around at the same location. Occasionally, a few agents can break away from the larger group, but the colony as a whole remains relatively static.
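The field-of-view filter itself can be sketched along the following lines. This is an assumed Python formulation for illustration, not Reynolds' original code; the function name and parameters are hypothetical.

```python
import math

def neighbours_in_view(agent_pos, agent_heading, others, fov_deg, radius):
    """Return the positions an agent 'sees': those within `radius` and
    inside a field of view of `fov_deg` degrees centred on its heading.
    Only these neighbours would enter the flocking computation."""
    half = math.radians(fov_deg) / 2.0
    hx, hy = agent_heading
    hnorm = math.hypot(hx, hy)
    visible = []
    for ox, oy in others:
        dx, dy = ox - agent_pos[0], oy - agent_pos[1]
        d = math.hypot(dx, dy)
        if d == 0 or d > radius:
            continue                      # out of range (or the agent itself)
        cos_a = (hx * dx + hy * dy) / (hnorm * d)
        if math.acos(max(-1.0, min(1.0, cos_a))) <= half:
            visible.append((ox, oy))
    return visible
```

Narrowing `fov_deg` shrinks the returned neighbour set, which is precisely what lets agents lose sight of the flock and break away in the first test above.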

Figure 9.7: Field-of-view (FOV) angle affects the flock’s behaviour. Larger angle leads to a more coherent flock but it can cause the flock as a whole to move around

In order to quantify the flocking behaviour, one can observe two general trends in the colony. The first trend characterises the coherence of the colony while the other describes the movement of individual agents. There are several ways to express these trends in numbers, but all of them are inconclusive on their own – the colony’s behaviour is too complex to be quantified by a single parameter. However, the movement patterns shown in Figure 9.6 can be successfully described by observing how the coherence and the movement parameters change over time. Figure 9.7 presents two sequences in the form of graphs that show the effect of the field-of-view angle on the coherence and mobility parameters. The coherence parameter is a measure of how many agents there are in the biggest group; the mobility parameter measures how many agents remain within a certain range of their original location. While there is an almost linear relationship between the field-of-view angle and the coherence, the mobility graph is less linear. Agents with a field-of-view angle of less than 90 degrees tend to move away from their original location fairly quickly, whereas a wider view makes them more static.

One can reach an interesting conclusion when scrutinising these graphs simultaneously. A field-of-view angle of 90 degrees leads to a coherent group in which agents are highly mobile, and one can conclude (with certain reservations) that the flocking behaviour has emerged. This can then be validated via visual observations in Figure 9.6.

Both models analysed in this section – the Loop Generator prototype and the flocking model – contain stochastic elements. In order to analyse the behaviour, and more specifically the sensitivity, of such models statistically, multiple runs have to be carried out. The aim of the qualitative sensitivity analysis is to find the parameters that render the greatest value to the user of the model. These parameters can then be made readily available to the user via the graphical user interface. However, one needs to be careful in doing so because it also exposes the model to potentially untested combinations of parameters that can easily lead to runtime errors. Additionally, some parameters are very sensitive and have an extremely narrow operational range. Such parameters are better defined as programmatic constants, since exposing them makes it more difficult for the user to find a working configuration and makes the model less controllable.
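The coherence and mobility measures are straightforward to compute under the stated definitions. The Python sketch below is one assumed formulation (using a simple union-find for group membership), not the code behind Figure 9.7; the function names and thresholds are illustrative.

```python
def biggest_group(positions, link_range):
    """Coherence: size of the largest group, where agents closer than
    `link_range` are linked and groups are the transitive closure of links
    (connected components, via union-find)."""
    n = len(positions)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if dx * dx + dy * dy <= link_range * link_range:
                parent[find(i)] = find(j)   # merge the two groups
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

def stayers(start, current, home_range):
    """Mobility proxy: how many agents are still within `home_range` of
    their original location (a high count means a static colony)."""
    count = 0
    for (sx, sy), (cx, cy) in zip(start, current):
        if (cx - sx) ** 2 + (cy - sy) ** 2 <= home_range ** 2:
            count += 1
    return count
```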

9.4 Means of control

In building successful circulation models with multi-agent systems, one needs to know how such bottom-up systems can be controlled. There are several possible control mechanisms and numerous ways of implementing them. The aim here is not to discuss the arguments for and against different implementations but to point out some general principles. Broadly speaking, the control mechanisms can be divided into three large groups: the model can be controlled by preparing the environment, by modifying the agents’ behavioural rules, or by adjusting the agents’ and environmental variables via a graphical user interface. Controlling the model by modifying the setting-out configuration was discussed in depth in section 5.7. The designer normally prepares the environment, defines its different qualities according to some principles, and builds a static model from which information can be obtained during the model’s runtime. The second group consists of programmatic methods for controlling agents and includes all kinds of algorithms that agents use for retrieving information from their environment, sensory-motor coupling rules, behavioural controllers and actuator functions (see sections 5.1 and 5.3). The third group allows interactive input from the designer at runtime and normally involves controlling environmental processing parameters (e.g. the speed of the environmental decay process – see section 5.4) or some dynamic agent parameters (e.g. velocity). The latter group also includes interactive modifications to the environmental configuration.

Control mechanisms can be dynamic or static. Dynamic control is typically implemented through a GUI. In simple multi-agent models, the user interface can be deployed for controlling several parameters simultaneously, but not all kinds of parameters can be easily controlled this way. The parameters most suitable for dynamic control are environmental parameters (e.g. the ‘pheromone’ evaporation rate), the parameters used for steering the agent through the environment (e.g. velocity), the strength of environmental modifications made by the agent (e.g. the ‘pheromone’ drop rate), and the activation thresholds of the agent’s sensory input (i.e. how sensitive the agent is). Naturally, one can also dynamically control the variables in the agents’ sensory-motor coupling rules.

Static control is normally implemented by preparing the environment or by modifying the control algorithms of the model outside the runtime loop. This means that once the generative model has been started, no direct interaction with the model takes place. In this case, the user can still maintain a level of control, assuming that the general principles of the model are well understood. This understanding can be developed through designing and building the model or, alternatively, through interaction and deployment. Designers do not necessarily need to know how to program in order to use the advantages that generative methods of design can offer them.
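As an illustration, the two environmental parameters most often exposed for dynamic control – the evaporation rate and the drop strength – could drive an update step like the one below. This is a hypothetical sketch on a plain grid of floats; the actual prototypes' data structures are not reproduced here.

```python
def step_pheromone(grid, deposits, evaporation_rate, drop_strength):
    """One environment update: every cell first loses a fraction of its
    'pheromone' (evaporation), then each agent deposit adds `drop_strength`
    to its cell. Both rates are exactly the kind of parameter a GUI slider
    could expose at runtime."""
    for row in grid:
        for x in range(len(row)):
            row[x] *= (1.0 - evaporation_rate)
    for (x, y) in deposits:
        grid[y][x] += drop_strength
    return grid
```

Raising the evaporation rate makes old trails fade faster, while raising the drop strength makes fresh trails dominate; the balance between the two shapes how quickly the diagram can reorganise.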
The prototype and case study models presented in the previous chapters all share one control mechanism – the use of programmatic methods while building and modifying them. All these models were programmed by the author of this thesis, and the actual algorithms were either invented or recreated from examples retrieved from the literature. Naturally, not every line of code was written from scratch and some code libraries were used where appropriate. For example, the Loop Generator prototype (see section 6.1) used an external library to help construct steering behaviours for mobile agents.

Quite a few prototype models also feature control mechanisms based on pre-processing the environment, as opposed to interactive changes made at runtime. Such models are, for example, Stream Simulator (see section 6.2), Labyrinth Traverser (section 6.3), Network Modeller (section 6.4), the way-finding prototype (section 6.5) and the self-organisation model in a bounded environment with cellular agents (section 6.7). In Stream Simulator, the designer is expected to create a colour map of an imaginary (or real) landscape, indicating valleys with darker and ridges with lighter colour. This colour map is then converted into numerical values that agents consume as sensory input. Labyrinth Traverser used the input image in a different way – black pixels in the image indicated areas where agents could not go, while white pixels denoted open areas. One useful pre-processing method is the positioning of source points and targets for agents. This allows the designer to predefine desired circulation network points and even define how attractive or important these points are. The positioning of source points and targets can be done efficiently via the graphical user interface.

Some of the prototypes have a graphical user interface that facilitates the interactive engagement of the designer. For instance, in Network Modeller and in the Nordhavnen model (the latter is based on the Network Modeller prototype), the designer controls the evaporation and diffusion of ‘pheromone’ and the strength of the ‘pheromone’ dropped by agents. Additionally, the size of the agent colony can be modified via the GUI. The greatest control over the outcome of multi-agent models is undoubtedly achieved by modifying the model through its source code.
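The two image pre-processing conventions mentioned above – a greyscale height map for Stream Simulator and a black-and-white obstacle mask for Labyrinth Traverser – might be sketched as follows. This is illustrative Python with assumed function names; the prototypes' actual conversion code is not reproduced here.

```python
def to_height_field(pixels):
    """Stream Simulator style pre-processing: a greyscale map (0 = black
    valley, 255 = white ridge) becomes a numeric field, here normalised to
    0.0-1.0, that agents can consume as sensory input."""
    return [[v / 255.0 for v in row] for row in pixels]

def to_obstacle_mask(pixels, threshold=128):
    """Labyrinth Traverser style pre-processing: dark pixels mark cells
    agents cannot enter (False), light pixels mark open space (True).
    The threshold value is an assumption."""
    return [[v >= threshold for v in row] for row in pixels]
```

Either function turns a designer-supplied image into a static environment layer that the agents then read locally at runtime.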
Programmatic intervention allows one to change the general sequence of the program flow and the sensory-motor coupling rules that lie at the core of most multi-agent models. It can be said that getting the sensory-motor coupling right is the most critical part of building multi-agent models for generating circulation diagrams. One of the biggest challenges is to achieve an effective balance between goal-directed and reactive behaviour. Wooldridge (1999) claims that building purely goal-directed systems is not hard, and neither is building purely reactive systems. However, combining the two in a single model can be a very demanding task.

In the Nordhavnen case study model, the balance between reactive and goal-directed movement is the key to generating meaningful diagrams. The agents’ behaviour is defined by reactive sensory-motor procedures, yet the resulting behaviour can be considered goal-directed as it helps agents navigate to their targets using the shortest possible route. At the same time, agents are also programmed to choose heavily used routes over less travelled ones in their immediate neighbourhood – and that can be considered reactive behaviour. Now, the question is how to combine these routines, or – more importantly – how to resolve conflicts when the two rules clash. For example, one can imagine a situation where an agent needs to turn left to get closer to its target, but all the other roads in the neighbourhood are more heavily used than the one on the left. The solution here is quite simple but perhaps not immediately obvious. At first, it may appear that an agent can make the right choice by calculating a weighting for each available road, taking into account its distance from the target and its size. However, after several tests with different weightings, it became clear that the proposed algorithm created situations where agents got stuck in a closed loop and could not reach their target. A better solution had to be invented. It turned out to be a matter of ordering the rule sets. Agents first had to select all routes that would take them closer to the target, and then had to select the most heavily used of these routes. This sequence proved to be a robust method, and agents were able to get to their targets using the most heavily used routes.
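The ordered rule set can be expressed compactly. The sketch below is a hypothetical Python rendering of the described sequence, not the Nordhavnen model's source; the data layout (routes as candidate positions, usage as a lookup table) and the dead-end fallback are assumptions added for completeness.

```python
import math

def pick_route(position, target, roads, usage):
    """Two-step choice in the described order: first keep only the roads
    that bring the agent closer to its target, then take the most heavily
    used of those. Ordering the rules this way avoids the closed loops
    that a single distance/usage weighting produced."""
    here = math.dist(position, target)
    closer = [r for r in roads if math.dist(r, target) < here]
    if not closer:          # dead end: fall back to any road (an added guard)
        closer = roads
    return max(closer, key=lambda r: usage[r])
```

In the left-turn example above, the agent takes the less used road on the left because the usage rule is only applied after the distance rule has filtered the candidates.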

9.5 Discussion

After gaining control over multi-agent models, the designer can assess the universality of the models by testing the same model in different contexts. Introducing randomness into the model can prevent the emergence of the generic diagrams that the model would otherwise generate. However, probabilistic models can reveal much deeper patterns than deterministic ones. If some patterns of movement networks keep recurring in different contexts, then there is a chance that a universal diagram can indeed be achieved. In a deterministic model, the generated pattern is universal by default. However, if the setting-out configuration remains unchanged, then purely deterministic models produce invariant network patterns and are therefore inflexible. While probabilistic models can generate universal patterns of movement, they can also help to find several alternative solutions and, as such, provide the designer with more insight into how the circulation space can potentially be organised.

Deterministic models are better for understanding whether multiple agents are needed for generating the diagram. Labyrinth Traverser is a classic example of a deterministic model where a single agent generates exactly the same diagram as several agents deployed in parallel. In Stream Simulator, there is also no difference in the quality of the generated diagram whether the simulation is run with a single agent or with multiple agents. However, it is difficult to quantitatively validate this statement because it is a probabilistic model and two diagrams are seldom similar. Stigmergic models featuring only positive feedback are indifferent to the number of agents in the colony. However, as soon as elements of negative feedback or environmental processing algorithms (e.g. decay and diffusion) are introduced into the model, there is a higher probability that the colony can generate qualitatively different diagrams. If the diagrams created by a single agent are similar to those created by a colony of agents, then these diagrams cannot be considered truly emergent. Truly emergent diagrams can only happen in multi-agent colonies.

The emergence of movement networks makes multi-agent models particularly useful for designers who wish to explore circulation diagrams. With the help of such models, designers are capable of rapidly generating several qualitatively different diagrams. This gives them the potential advantage of exploring more options and, eventually, can lead to better solutions. However, achieving control over the diagram is much harder. Emergence cannot be controlled directly, only indirectly. And this indirect control is sometimes difficult to achieve without in-depth knowledge of the model.
This suggests that designers should have the necessary programming skills to maximise the benefit of multi-agent systems. Although scripting is increasingly popular amongst the younger generation of designers and architects, multi-agent systems are probably beyond their average skill set. It seems that there is scope for a software platform that allows building multi-agent systems for architectural design purposes without expert knowledge of programming languages.


Chapter 10: Discussion and conclusions

This chapter draws upon and synthesises the individual conclusions from Chapters 5 through 9 and discusses their original contribution to knowledge and their potential implications for the architectural design discipline. This thesis has taken an ordered approach to building multi-agent models for generating circulation diagrams. Based on a thorough literature survey, it has been established that diagrams are useful tools for designers, and it has been argued that multi-agent systems are appropriate for modelling circulation diagrams. The basic building blocks of multi-agent circulation models have been defined, and several prototype models have been proposed, built, studied and analysed. These prototype models form the basis of the knowledge that is used for building the case study models. The case study models demonstrate how computational diagramming with multi-agent systems can be a part of the design process and highlight the benefits of using bottom-up models for architectural and urban design purposes. Building and using these models also fulfils one of the main goals of this thesis – gaining a deeper understanding of the generative modelling of circulation systems.

It is argued that multi-agent circulation models can be successfully used at the early stages of the design process. Computational models offer an alternative to more traditional methods of designing such as hand-sketching and traditional CAD modelling. New methods can bring unparalleled speed to the creative process of design. What took days before can now be generated in a matter of minutes. With the help of generative models, one designer can generate a variety of solutions. Having multiple solutions gives the designer more choice and can lead towards better design proposals. If the generated solutions are compared qualitatively or quantitatively, then the solution that meets the design brief best can be selected for further development.
Leaving aside qualitative aesthetic comparisons, generated solutions lend themselves quite easily to quantitative analysis. That is because the material generated by a computational model conforms to the same system of representation. The same method of quantitative analysis can be carried out on every single solution as long as the generative model remains unchanged. For a simple example, if the generated material is represented as a mesh-type construct, then it is possible to compare individual solutions with respect to their surface area. Furthermore, solutions can be compared programmatically without the designer’s intervention, and the generative model can be used for automated design optimisation. However, one needs to be careful here – in most cases, design solutions have to meet multiple and often conflicting goals. Therefore, single-variable optimisation can lead to solutions that are optimised in one respect but completely fail to meet other requirements.

The computational modelling approach presented in this thesis makes design a more transparent process. In order to construct a dynamic computational model, one needs to be able to explicitly define all the components of that model and also to write the algorithms that operate on these components. Therefore, such a model can always be traced back to its basic components and algorithms. The appropriateness and correctness of each component and algorithm can then be evaluated. This makes the model amenable to validation, and generated solutions can be explained by referring to the exactness of the building principles used. Hence, computational modelling facilitates a rational debate over proposed design solutions.

10.1 Multi-agent models for generating circulation systems

One of the questions this thesis has been trying to answer is whether multi-agent systems can be programmed to follow the underlying principles of circulation systems. In order to answer that question, one can look at how different circulation networks form in nature. The natural movement patterns of people are thought to be too complex for this task; instead, one can study those of simpler organisms. Several works from authors in the field of Artificial Intelligence (Pfeifer and Scheier 2001; Dorigo, Birattari and Stützle 2006) suggest that contemporary computational models are capable of simulating intelligence at the level of insects and insect colonies. Colonies of social insects can collectively create intricate nest architectures that naturally incorporate circulation networks (Turner 2000). These networks facilitate the transport of food and building material and give individuals in the colony access to different parts of the nest. With varying degrees of success, several academics have attempted to model the nest building process of social insects computationally (Ladley and Bullock 2005; Buhl et al. 2006). However, these models have seldom been tested and deployed in the architectural design context.

Regardless of the advances in contemporary theory, insect nest building behaviour may still be too complex to reproduce accurately in a computational model. In this thesis, a different approach is exercised. Artificial multi-agent systems do not necessarily need to simulate all the aspects of natural agents; instead, one can focus on a more specific behaviour in the colony. What if some logical rules of the complex behaviour of insects could be extracted and recombined to create a colony with the single purpose of generating circulation networks? Surely, achieving circulation network formation is simpler and computationally more affordable than simulating the complete nest building behaviour. There are a few important mechanisms found in natural agent colonies that can be simulated. Probably the most important of these is the self-organisation of circulation networks (Goldstone and Roberts 2006). In social insects, movement is driven by local interactions (Dorigo, Birattari and Stützle 2006). If these local interactions are defined well enough, then appropriate sensory-motor rules can be devised for artificial agents as well. Consequently, some behavioural aspects of natural agent colonies can be recreated, leading to the emergence of circulation networks.

Multi-agent systems are indeed found to be an appropriate method for generating circulation networks. This thesis has demonstrated once again that global patterns of movement can arise from local interactions. More importantly, it has been shown that this principle can be successfully replicated in a generative model for synthesising design solutions. Unlike many earlier methods of computing circulation networks (Tabor 1971), multi-agent systems are flexible and can adapt to changes in the environment.
This opens up new possibilities for designers, since the environment can be modified interactively at runtime. A related and equally important concept borrowed from nature is stigmergy – a widely observed communication method in termites (Wilson 1980; Turner 2007) that is also often used in building artificial multi-agent systems (Holland and Melhuish 1999; Buhl et al. 2006). Stigmergic communication is the coordinating force in many of the prototypes presented in Chapter 6. A general conclusion of this thesis is that stigmergy is an essential component in generating and optimising circulation networks with multi-agent models. The movement of stigmergic agents is guided by information retrieved from their environment while the agents actively change this information. In other words, generated networks are conceived by and – at the same time – facilitate the agents' movement. This reciprocal causality leads to movement networks that are optimised with respect to the perception and action of agents. While stigmergy is generally seen as a coordination (Valckenaers et al. 2001) and communication (Izquierdo-Torres 2004) method in multi-agent colonies, this thesis has demonstrated that stigmergy can also be used as a method of architectural modelling.

It has been argued and demonstrated with several prototype models in this thesis that circulation networks can be generated following the logic of network formation found in insect colonies. A great number of topographically and topologically different networks can be generated (see Chapters 6, 7 and 8). One can ask whether this is an appropriate method of modelling networks for human use. In its most basic principles, the movement of people is no different from the movement of other organisms. At an abstract level, all movement networks have two essential qualities: they provide access to required areas and they facilitate the continuity of movement between these areas. Multi-agent models can generate circulation networks that possess both of these qualities. As demonstrated with many of the computational models in this thesis, colonies of mobile artificial agents can produce continuous movement networks (e.g. see Loop Generator – section 6.1) and can be used for providing access (see the discussion in section 6.8). Most of the proposed models are capable of producing a variety of network diagrams (e.g. see the Network Modeller prototype in section 6.4). This makes multi-agent models useful as explorative design tools. The generated diagrams are dynamic (see Loop Generator – section 6.1) and adaptive with respect to the environment (see cellular agents in the context – section 6.7).
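The reciprocal causality described above – trails guide the agents' movement while the agents reinforce the trails and the environment erodes them – can be sketched minimally. The grid size, deposit amount, evaporation rate and exploration constant below are arbitrary illustrative values, not parameters taken from the thesis's models:

```python
import random

def stigmergic_step(trails, agents, rng, deposit=1.0, evaporation=0.05):
    """One cycle of stigmergic feedback on a toroidal grid.

    `trails` maps grid cells to trail intensity; `agents` is a list of
    mutable [x, y] positions."""
    size = 40  # illustrative toroidal grid size
    for agent in agents:
        x, y = agent
        nbrs = [((x + dx) % size, (y + dy) % size)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        # Perception: trail intensity biases the move; the small constant
        # keeps some exploration alive on unmarked ground.
        weights = [trails.get(n, 0.0) + 0.1 for n in nbrs]
        agent[:] = rng.choices(nbrs, weights=weights)[0]
        # Action: reinforce the trail on the cell just entered.
        cell = tuple(agent)
        trails[cell] = trails.get(cell, 0.0) + deposit
    # Environmental process: evaporation erodes unused trails.
    for cell in list(trails):
        trails[cell] *= 1.0 - evaporation
        if trails[cell] < 1e-3:
            del trails[cell]
```

Iterating this step concentrates trail intensity along frequently travelled cells while rarely used marks evaporate away – the basic mechanism by which the prototype models let networks self-organise.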
Multi-agent systems offer ways to optimise circulation diagrams (see Chapter 8) and to validate their suitability as circulation networks in terms of connectivity and accessibility (see Chapter 7). Based on all the above, multi-agent systems can be considered suitable for generating circulation systems. This thesis has shown that multi-agent systems can be successfully used for design purposes. It has been demonstrated that such systems can not only generate solutions but can also be deployed for design analysis at the same time. This makes multi-agent systems a unique method of modelling circulation networks in which design synthesis is combined seamlessly with analysis. However, there is a caveat – the movement of agents has to be reinterpreted in order to use it in the design process. Generated material should be treated as architectural diagrams – abstract machines that are not representational but instrumental for producing new objects and situations (Berkel and Bos 2006). Multi-agent models also provide the flexibility that leads to variety and variation in generated circulation diagrams. Additionally, such models can provide insights into the process of natural circulation network formation. As in natural systems, diagrams are emergent phenomena in artificial multi-agent based circulation models. The diagram is generated bottom-up from the collective behaviour of the agent colony, which places it outside the direct control of the designer. Only when in-depth knowledge of such models is gained does it become possible to control the diagram successfully. The following section discusses this idea in greater depth.

10.2 Implications for the design process

Generative design, as a set of computational methods for finding solutions to design problems, is becoming increasingly popular among designers and architects (Zee and Vrie 2008). There are several ways in which generative methods are used in the design process. In many cases, testing and deploying these methods serves the purpose of finding novel forms (Sevaldson 2000). The method proposed and tested in this thesis can be described as computational diagramming – generating design solutions in a diagrammatic form. The purpose of this work is not to find novel forms but to introduce novel methods of design; novel forms can be a by-product of the methods used, but never the main objective.

Generative and computational design methods can and should have a wider purpose than solely creating original and interesting design solutions. New methods can help the design discipline move to a new level – one at which methods of creating design proposals, especially those used at the early stages of the design process, are better grounded. As proposals can be computed, early design concepts can be explained, questioned and debated in a reasoned and even scientific way. Generative design models that are based on computational logic offer a way of making architecture more accessible to logical reasoning.

Based on the findings of this thesis, multi-agent models are well suited for generating diagrams that can be used in the search for appropriate movement networks in buildings or cities. These generated diagrams are informative and can help architects make design decisions about the layout, intensity or topological configuration of circulation space. Multi-agent models can be used for creating diagrams that are both requirement and form diagrams in Alexander's sense – they are constructive diagrams (Alexander 1964). Sevaldson (2000) claims that generative dynamic diagrams can fertilise the design process, and suggests a slightly altered but not essentially alien role for the designer through selection, interpretation, analysis and modification. This thesis has demonstrated that such a role can indeed be fulfilled by designers. Sevaldson also argues that generated diagrams are subject to different modes of interpretation, which allows the designer to avoid a direct and banal translation of the generated material. Again, this thesis has adopted this view, and has shown how different modes of interpretation can be used in the context of design tasks.

If a new method of design is proposed, a reasonable question arises: why is the new method better than conventional ones such as hand-sketching and traditional CAD modelling? In order to answer that question, one needs to understand that this thesis is not claiming the superiority of generative design methods – they simply have certain benefits over the conventional methods. As identified in this thesis, one of the biggest advantages of computational models is their reusability. Methods that use computation can be made generic enough to be deployed in different contexts and in different design projects. Assuming that the method is right for a given task, it can be deployed over and over again until an acceptable design solution is found.
This makes generative methods suitable for optimisation – a solution can be compared to and replaced with a newly generated solution in each iterative development cycle. Generative methods can also feature inherent optimisation routines (e.g. see Chapter 8). And last but not least, generative methods can potentially reveal solutions that would otherwise remain undiscovered. This last claim rests solely on the designer's increased ability to create more solutions than is possible with conventional methods. This thesis has shown that multi-agent modelling can be used as a generative design method and can become part of the overall design flow. The choice of the model has to be made depending on the design brief and on the expected outcome.

One has to follow certain steps in order to integrate the model into the overall process. Firstly, one has to gather sufficient information about the context in which the generative model is to be executed. Secondly, the generated solution has to be interpreted appropriately: the information contained in the diagram needs to be transformed into a different and more explicit design representation. Constructive diagrams can be interpreted and transformed in different ways. The topology of the diagram can be used as the basis for designing the actual circulation network layout. The information about intensity of usage that is expressed in the diagram can inform the geometry of designed buildings or streetscapes. It would also be possible to construct a model that directly generates the geometry of the solution. However, this thesis has disregarded this approach because it leaves no space for interpretation and diminishes the designer's role in the process.

In order to use multi-agent models efficiently for generating design solutions, a level of control over these models has to be maintained. This control can be achieved in three essentially different ways: feeding appropriate input into the model, writing and modifying the model's code, and changing the model's parameters interactively during runtime. While the first and the third options provide a designer with indirect control mechanisms, the second one is more direct and offers the greatest degree of control. However, achieving full control over the generated diagrams is very difficult. The proposed generative models are built in a bottom-up manner, and the circulation diagrams are the result of the multi-agent colony's interaction with its environment. Thus, the diagram is an emergent phenomenon and no direct control is possible. In order to generate meaningful diagrams, all three of the above-mentioned control mechanisms are required.
The designer has to understand how different input information influences the network formation processes, has to be able to change algorithms that drive the model, and has to interact with the computational process.

10.3 In search of parsimonious models

This thesis has studied multi-agent models for architectural and urban design purposes by building them. Deaton and Winebrake (2000) call this approach synthetic modelling, and place it somewhere between synthesis and analysis: systems are studied by building them out of defined components and then analysing them. They suggest using exploratory analysis – conducting a series of experiments – in order to understand how the system responds to changed conditions. Due to this synthetic approach, this thesis has largely ignored the practice of classifying the proposed models, mainly because using an existing taxonomy is not a natural part of synthetic modelling. Prototype models (Chapter 6) are classified only following established taxonomies in geographical network analysis (Haggett and Chorley 1969) or in agent-based modelling (e.g. Castle and Crooks 2006). This section tries to remedy this lack of coherent classification and, based on the conducted research, proposes two distinct patterns for building generative design models: the Modeller-Evaluator-Interpreter pattern and the Sensor-Actuator-Environment pattern. These patterns serve as abstract schemata for how generative models can be designed and built. They are based on the experience gained through the modelling and analysis of the prototype and case study models presented earlier in this thesis.

Both of the patterns belong to the domain of generative design – they meet most of the characteristics and properties that are normally associated with it. According to various authors, generative models need to be dynamic (McCormack, Dorin and Innocent 2004), navigate a large solution space (Herr and Kvan 2007), feature autonomous units (Galanter 2003) and be independent from external processes (Batty and Torrens 2005). All models that follow the Modeller-Evaluator-Interpreter pattern or the Sensor-Actuator-Environment pattern feature feedback loops and lead to a dynamic process in which design solutions are found through iterative development.
Both of the patterns feature distinct modules – building blocks that can be seen as sets of explicitly defined and computable instructions. Whereas the building blocks in the Modeller-Evaluator-Interpreter pattern do not need to be programmatic and can be replaced with the activity of a human designer, the Sensor-Actuator-Environment pattern offers less flexibility – it is best implemented throughout as a computer program. The Modeller-Evaluator-Interpreter pattern can be seen as a traditional design process in which Modeller is the module that first creates design proposals, Evaluator analyses these proposals, and Interpreter takes the results of the analysis and instructs Modeller how to change or recreate the proposal. When all of these modules are programmed, the whole model becomes a purely computational one. Modeller, for example, can generate new proposals by deploying a parametric design algorithm. This algorithm can be initiated with a set of random or user-defined parameters but can later take its input parameters directly from Interpreter. Evaluator can be computational as well: it can be an algorithm that simply measures some geometrical parameters of the proposal, or it can be a complex assessment routine that analyses the model in terms of its thermal performance, daylight factors, wind load and so on. Interpreter is likely the most complex part of the model, but it can equally be the simplest. Essentially, it takes the evaluated design proposals, 'decides' what needs to be changed, and instructs Modeller to create alternative solutions. The key word here is 'decide', which suggests that Interpreter is a complex module with some kind of embedded intelligence – human or artificial. Nevertheless, Interpreter can also be an extremely simple module that, for example, takes the parameters of the best design proposal (as computed by Evaluator) and changes these parameters blindly in the hope of creating even better proposals.

The Sensor-Actuator-Environment pattern is completely different from the Modeller-Evaluator-Interpreter pattern. None of its modules can be replaced with the activity of a human designer – all models based on this pattern have to be computational. However, it is perfectly suitable for agent-based modelling. In fact, the pattern has been inferred from the particulars of agent-based models found in the literature and of those built for this thesis. The reason this thesis is so interested in this pattern is that it can be used for building simple models that meet all of the goals that interactive multi-agent systems for generating circulation diagrams ought to have.
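The Modeller-Evaluator-Interpreter cycle described above can be sketched as a generic loop. The toy problem at the bottom – a one-parameter proposal scored against a hypothetical optimum, with a 'blind' Interpreter that perturbs the best parameters – is an illustrative assumption, not one of the thesis's actual case study models:

```python
import random

def mei_loop(modeller, evaluator, interpreter, params, iterations=300):
    """Generic Modeller-Evaluator-Interpreter cycle.

    modeller(params) -> proposal; evaluator(proposal) -> score (higher is
    better); interpreter(best_params, score, best_score) -> new params."""
    best_params, best_score = params, float("-inf")
    for _ in range(iterations):
        proposal = modeller(params)      # Modeller: create a proposal
        score = evaluator(proposal)      # Evaluator: analyse the proposal
        if score > best_score:           # keep the best proposal seen so far
            best_params, best_score = params, score
        # Interpreter: steer the next cycle (here, blindly perturb the best)
        params = interpreter(best_params, score, best_score)
    return modeller(best_params), best_score

# Toy demonstration (hypothetical optimum at 3.0):
rng = random.Random(0)
modeller = lambda p: p
evaluator = lambda x: -(x - 3.0) ** 2
interpreter = lambda best, score, best_score: best + rng.uniform(-0.5, 0.5)
solution, best_score = mei_loop(modeller, evaluator, interpreter, params=0.0)
```

When Interpreter is this simple, the loop degenerates into random hill-climbing; a more intelligent Interpreter – human or artificial – would change the parameters with some reasoning behind the choice.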
Models that are based on this pattern are simple because the global description of the generated diagram is not part of the model – the diagram is an emergent phenomenon. Modules in this pattern are concerned with computation at the agent level, which allows one to leave out global descriptions, thus making the model computationally lighter. The whole concept of the Sensor-Actuator-Environment pattern is built around the systems-theoretical view of the system (agent) and its environment, in which the environment produces input to the agent and consumes its output (Keil and Goldin 2006). The agent is further broken down into the Sensor and the Actuator modules. The first module is the recipient of information from the agent's environment, while the second is the proactive module that changes something either in the agent itself or in its environment. Once the environment or the agent's view of the environment is changed, Sensor receives new information and the generative loop is closed. Models built according to the Sensor-Actuator-Environment pattern are generative by default: they feature dynamic feedback loops, are independent from the rest of the design process, can be used for exploring a wide search space, feature independent units (agents), and – most importantly – generate design solutions through numerous reiterations and continuous development.

The two case study models in Chapter 7 and Chapter 8 follow the two different generative design patterns. Whereas Chapter 8 proposes a model built after the Modeller-Evaluator-Interpreter pattern, the Nordhavnen model (see Chapter 7) follows the Sensor-Actuator-Environment pattern. Studying the differences between these two case study models is significant for the search for the simplest models. Before one can decide which pattern produces simpler models, it is worth taking a closer look at how the patterns are implemented in the first place. In the model for creating corridor systems (Chapter 8), the generative loop between Modeller, Evaluator and Interpreter is closed programmatically. However, more value is gained when the designer intervenes from time to time and manually alters the geometry in the model; when the process is fully computational and no human intervention takes place, an appropriate solution may never be found. Modeller in this case is a simple algorithm that modifies the layout of space. These modifications, as mentioned above, can be made by a human modeller instead.
Interpreter is an equally simple algorithm that compares the evaluated solution with the previously generated one and instructs Modeller either to proceed with the current layout or to revert to the previous solution. The most sophisticated part of the model is Evaluator – it features a multi-agent system for finding the shortest corridors and evaluates the total area of accessible spaces. Although the multi-agent system is used for suggesting the geometry of corridors and can be seen as a generative routine in its own right, its main function is to evaluate the existing spatial layout. The Nordhavnen multi-agent model is built after the Sensor-Actuator-Environment pattern. The Sensor and the Actuator modules are both part of the agent's design and are connected together via a sensory-motor coupling algorithm.

This sensory-motor coupling algorithm is only a little more complicated than a traditional hill-climbing algorithm. The Environment module is also a simple computational construct that contains discrete spatial units with a couple of adjustable parameters and features a few simple environmental processes. All modules are seamlessly integrated into a single computational model in which the human designer can influence the process by feeding the model appropriate input data or by interactively changing some parameters at runtime. Despite its simplicity, the Nordhavnen model can search a wide solution space and can adapt to different input data.

The differences between the two case study models are fairly obvious. The Nordhavnen model is a simple generative model, whereas the model for generating corridor systems is more complicated, not only in terms of its programmatic composition but also in terms of its deployment. The latter model has more complicated modules and requires active input from the designer; it also remains operational only within narrower design constraints. The former demands the designer's input only in setting out the model, accepts a wider range of input data and generates a wider variety of diagrams. In conclusion, the model based on the Sensor-Actuator-Environment pattern is a simpler model for generating circulation diagrams than the one based on the Modeller-Evaluator-Interpreter pattern.

Another clue in the search for parsimonious patterns of multi-agent models for generating circulation systems can be found in the prototype models (see Chapter 6). Two of the models – Loop Generator and Network Modeller – that follow the Sensor-Actuator-Environment pattern are also the simplest truly generative models studied in this thesis. Of these two, Loop Generator is the simplest for many reasons. For a start, neither designer input nor interactive engagement is needed.
The simplicity of the model is also manifested in the simplicity of its modules: the Sensor-Actuator module, for example, is a simple hill-climbing agent. Additionally, the setting out of the model is extremely basic, and the only process that takes place in Environment is the evaporation of the trails left behind by agents. Loop Generator leads to the emergence of a diverse range of circulation network diagrams and also features inherent (and emergent) network optimisation routines. It has all the main characteristics assigned to generative models – it is a dynamic model, it produces emergent outcomes, the solution is created iteratively, it features autonomous units, and it is independent from the rest of the design process. Based on the findings of this thesis, the simplest multi-agent model for generating circulation diagrams follows the Sensor-Actuator-Environment pattern.
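A minimal sketch in the spirit of the Sensor-Actuator-Environment pattern is given below: a toroidal grid environment whose only process is trail evaporation, and hill-climbing agents whose sensory-motor coupling simply moves towards the strongest neighbouring trail. All values and class shapes are illustrative assumptions; this is not the thesis's actual Loop Generator code:

```python
import random

class Environment:
    """Discrete trail field; the only environmental process is evaporation."""
    def __init__(self, size=32, evaporation=0.1):
        self.size, self.evaporation = size, evaporation
        self.trails = {}

    def sense(self, cell):
        return self.trails.get(cell, 0.0)

    def mark(self, cell):
        self.trails[cell] = self.sense(cell) + 1.0

    def evaporate(self):
        self.trails = {c: v * (1 - self.evaporation)
                       for c, v in self.trails.items() if v > 1e-3}

class Agent:
    """Sensor reads neighbouring trails; Actuator moves (hill-climbing) and marks."""
    def __init__(self, x, y, rng):
        self.x, self.y, self.rng = x, y, rng

    def step(self, env):
        nbrs = [((self.x + dx) % env.size, (self.y + dy) % env.size)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        # Sensory-motor coupling: climb towards the strongest trail,
        # breaking ties randomly so that loops can emerge.
        best = max(nbrs, key=lambda c: (env.sense(c), self.rng.random()))
        self.x, self.y = best
        env.mark(best)

def run(steps=200, agents=10, seed=42):
    """Close the generative loop: agents act, the environment evaporates."""
    rng = random.Random(seed)
    env = Environment()
    colony = [Agent(rng.randrange(env.size), rng.randrange(env.size), rng)
              for _ in range(agents)]
    for _ in range(steps):
        for a in colony:
            a.step(env)
        env.evaporate()
    return env
```

No global description of the network appears anywhere in the code: whatever diagram forms in `env.trails` is an emergent product of the sensor-actuator loop, which is exactly the economy that makes this pattern the most parsimonious of the two.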

10.4 Complete design diagrams

Besides generating circulation diagrams, one of the objectives of this thesis is to discover methods of integrating other design goals into the multi-agent model. The practical question here is whether circulation systems can be generated in parallel with other parts of the design solution. For example, can a road network diagram and an urban massing diagram be generated in the same model? If the answer is yes, then it would be possible to generate informative and contextual solutions – complete design diagrams. The drawback in building models that generate complete diagrams lies in the increased algorithmic complexity.

As demonstrated in Chapter 8, it is fairly straightforward to combine different agent-based systems in a single model. In that case study, the hill-climbing routine is combined with ant colony optimisation. These routines are deployed sequentially in different modules of the program: while hill-climbing is part of Modeller, ant colony optimisation is part of Evaluator. If both of the routines were part of the same module (e.g. Modeller), then this module would become much more difficult to manage. In order to retain the simplicity of modules, it would make sense to keep algorithms that are responsible for modelling different aspects of the design solution in separate modules. However, this approach would contradict the Modeller-Evaluator-Interpreter pattern, as all modelling is expected to be carried out in Modeller. Hence, it is not easy to use the Modeller-Evaluator-Interpreter pattern for building models that generate complete design diagrams.

Luckily, there is an alternative – one can follow the Sensor-Actuator-Environment pattern instead. There are two ways of using this pattern for modelling complete design diagrams so that the model still remains easily manageable. Firstly, the Environment module can encapsulate processes that are responsible for some of the design goals.
Section 6.6 presents several models featuring stigmergic building and environmental processes that shape the spatial diagram. In a typical model of this kind, agents place certain objects into the environment (part of the Actuator module), where these objects become the subject of environmental processing (part of the Environment module). The common issue discovered with this approach is that the Actuator module becomes increasingly complicated, and it is difficult to program flexible stigmergic building rules that reflect the changes during the diagram's development.

Alternatively, one can consider a model that contains several qualitatively different Sensor-Actuator systems – a model that features agents of different kinds. In this type of model, several multi-agent systems of different 'species' coexist and communicate with other 'species' through the environment. For example, one type of agent can represent massing elements (e.g. houses) while other agents are responsible for creating circulation diagrams (e.g. roads). Both types of agents can retrieve information directly from their environment; by reacting to this information, they also modify the environment for the other agents. The complexity of the model can be managed by treating the different agents as self-contained objects. Although extremely simplistic, the closest model built according to the suggested approach is the formation of the cellular structure illustrated in Figure 6.35. In this model, there are two types of agents – ones that form 'spaces' and others that form the circulation diagram. There are neither environmental processes nor stigmergic building routines present in the model. One can imagine how more sophisticated models of this type could be created. For instance, agents that represent abstract building blocks (e.g. houses and flats) could relocate themselves according to the level of access provided by the agents that generate circulation diagrams. The integration of several multi-agent systems is likely to create feedback loops between circulation and spatial layout agents, leading to the development of complete diagrams in which different design goals are simultaneously satisfied.
This is exactly what is needed for solving complex spatial problems. Therefore, it is possible to generate complete diagrams featuring circulation systems and other aspects of the design solution in a parallel manner. Based on the experience of building and working with several multi-agent models, it has been found that the Sensor-Actuator-Environment pattern is well suited for creating such models.


10.5 Self-criticism

The work presented in this thesis has demonstrated that multi-agent models can be successfully used for generating circulation solutions in buildings and settlements. However, the work also raises many new questions and issues. One of the main issues from the designer's perspective is that good circulation systems do not automatically yield good overall design solutions; other design goals have to be considered simultaneously. From understanding how multi-agent systems help to solve circulation, the research agenda could now move towards generating solutions that meet multiple goals. For example, in parallel to circulation, one has to think of how routines that generate good lighting conditions, thermal performance or spatial layouts can be integrated. Whereas some of the prototype models scratched the surface of this topic, dedicated research could explore it more thoroughly.

Section 6.6 explored the integration of stigmergic building routines with circulation. It was hoped that agents could create circulation diagrams and organise the otherwise inactive geometry of the environment at the same time. Contrary to the desired effect, the proposed approach often led to over-constrained computational models lacking flexibility and adaptive powers. It turned out to be difficult to coordinate the building activity with the motor behaviour – adding new geometry to the agent's immediate neighbourhood severely reduced its mobility. The management of such models also turned out to be too complicated. In response, it was suggested that it is perhaps better to conceive of spatial structures not as part of the environment subject to environmental processing, but as autonomous agents with their own goals. For example, a room in a building, or even the entire building, can be conceived as a mobile agent with its own peculiar sensory-motor loop.
Since multi-agent systems are flexible, it should be perfectly possible to combine different types of agents in the search for design solutions with multiple goals. A logical continuation of this research would investigate computational design models consisting of several multi-agent systems, each with its own goals.

One of the issues discovered during the research for this thesis is the integration of computational evaluation tools into generative models. There are a number of powerful applications in use in architectural practices for assessing design performance criteria, from lighting and thermal analysis to energy consumption and structural analysis. Clearly, the Modeller-Evaluator-Interpreter pattern would accommodate such analytical tools in the Evaluator module, but this is not the issue here. The problem is that sophisticated analytical tools require detailed design information that is not available in the early stages of the design process. Therefore, one of the questions that this thesis has raised relates to the analysis of generated material: what are the appropriate analytical tools for assessing the performance of generated solutions at the conceptual design stage?

A similar question arises when assessing the aesthetics of generated solutions. Naturally, this task can be performed by a human designer, but this would interrupt the computational program flow. Besides, this type of aesthetic appraisal would need to be made from the global perspective and not from the local – the agents' – perspective. The question here is whether an agent can base its motor decisions on what it 'sees'. What would be appropriate perceptual algorithms for that purpose? If such algorithms were invented, then multi-agent systems could possibly lead to greater artificial design intelligence. This, however, stimulates a whole new range of questions about the role of the designer. If computational models were to become more intelligent in terms of being capable of solving complex design problems, what would it mean for design professionals? Surely, such models could no longer simply be considered digital tools; instead, they would start taking over some of the tasks that are traditionally assigned to designers.

One of the questions to which this thesis has not found a satisfying answer concerns the production of a large number of solutions – one of the main advantages of using generative models. If solutions can be generated ceaselessly, how can one know when to stop the process?
Presumably, one can assess this by weighing the potential benefits against the cost of producing new solutions. But even then the problem remains intractable. Since there is hardly ever a single optimal solution to a design problem, it is very difficult – if not impossible – to decide when to stop searching for new solutions.

Earlier in this chapter, it was argued that computational models facilitate rational debates over design solutions. Since multi-agent models can be decomposed into behaviours of individual agents, it is possible to validate these models by validating the principles of sensory-motor coupling in agents. For instance, if an agent turns left when it encounters an obstacle on its right (assuming that agents cannot overcome obstacles), then this is considered a logical sensory-motor coupling rule which validates the model internally. However, this is a trivial example and does not offer much for validating models that generate circulation networks in buildings and urban settlements.

The issue with validation is a lot clearer in analytical models, where the behavioural rules of agents can actually be compared to the real-world behaviour of the agents being simulated. Although circulation networks can be analysed by simulating people’s movement (Dijkstra, Timmermans and de Vries 2007), one cannot easily use these navigational principles for generating circulation networks. The reason for this is a simple one: the geometry of the environment in analytical models exists prior to the computation, whereas in generative models it is created during the computation. Designing circulation is a creative process where the solution is the ultimate goal. Generated material does not simulate anything; a generative model is not constructed in order to observe a phenomenon. Therefore, the question remains: how can generative models be validated?

The only satisfactory response to that question offered in this thesis is to validate models through validating the generated output. The key parameters of generated networks can be compared to the parameters of real-world circulation networks. The Nordhavnen model, for example, was validated by comparing the connectivity of generated road network diagrams to the average road network connectivity of existing urban areas (see Chapter 7). Nevertheless, an issue remains: generative models produce a variety of diagrams, and the quality of these diagrams depends on the experience and skills of the designer. What if some generated diagrams match the parameters of real-world road networks while others differ from them? Does this make the model invalid or unverified?
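The kind of output validation described here can be illustrated with a small sketch. The beta index (links per node) from geographical network analysis stands in for a connectivity measure; the function names and the 15% tolerance threshold are my own illustrative assumptions, not figures from the thesis.

```python
def beta_index(nodes, edges):
    """Beta index from network analysis: number of links per node."""
    if not nodes:
        raise ValueError("network has no nodes")
    return len(edges) / len(nodes)

def within_tolerance(generated, benchmark, tolerance=0.15):
    """Accept a generated diagram whose connectivity index lies within a
    relative tolerance of a real-world benchmark (the 15% threshold is a
    hypothetical choice)."""
    return abs(generated - benchmark) <= tolerance * benchmark

# Toy generated diagram: four nodes joined in a ring (four links).
nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(beta_index(nodes, edges))                         # 1.0
print(within_tolerance(beta_index(nodes, edges), 1.1))  # True
```

A real comparison would of course use measured connectivity values from existing urban areas as the benchmark.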
This thesis has not found a satisfactory answer to that question. Perhaps the problem of validation in generative modelling should be approached in a different way. Since many generative models can produce an almost infinite number of solutions, it is impractical to validate all of these. Instead, one can statistically measure the success of the model by comparing the generated solutions that are acceptable to those that are not, and calculating an average success rate. This rate would indicate how difficult it is to use the model for generating validated solutions.

Although this research has provoked a new range of questions that were not answered in this thesis, the selected methodology proved generally successful. The synthetic modelling approach – identifying basic building blocks, constructing prototype models of these blocks, deploying exploratory analysis, and testing prototypes in the context of real design tasks – worked very well. Gaining experience and creating new understanding by building simple prototypes proved to be an appropriate way of constructing more sophisticated models, while attempts to conceive the whole model without verifying interim models were doomed to fail.
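The proposed success-rate measure could be sketched as follows. The acceptance predicate and the threshold in the example are placeholders, since the thesis leaves the acceptance criteria open.

```python
def success_rate(solutions, is_acceptable):
    """Fraction of generated solutions judged acceptable;
    0.0 for an empty sample."""
    if not solutions:
        return 0.0
    return sum(1 for s in solutions if is_acceptable(s)) / len(solutions)

# Toy sample of solutions scored 0..1, accepted above an arbitrary threshold.
sample = [0.9, 0.4, 0.7, 0.2, 0.8]
print(success_rate(sample, lambda score: score >= 0.6))  # 0.6
```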

10.6 Discussion: the author’s view

My research confirms that multi-agent models are useful design tools. A specific focus in my thesis has been on using these tools at the early stages of the design process. Most of the previous research in the area has either been done inconsistently or explores individual models in isolation. In order to demonstrate the value of multi-agent models, I have chosen to look at a common design problem in architecture and urban planning: circulation. Circulation is a part of almost any architectural or urban design proposal. It has to suit the context and the design brief, but it is generic enough to justify building computational models.

Leaving aside man-made architecture, many living and non-living systems in nature feature circulation networks that are not the result of carefully planned and executed design decisions. Yet these networks are usually well adapted to their surroundings, organically integrated into the ‘landscape’ and, in several respects, well optimised. The secret of these networks lies in the way their global structure is created out of individual and local actions. Contemporary computational theory suggests that some natural phenomena can be simulated using bottom-up systems (Resnick 1994). Multi-agent systems are bottom-up systems by default and are therefore a perfect method for following the bottom-up principles of natural network formation. Yet the multi-agent systems that I have designed and built are abstract models – I have not tried to accurately simulate processes in nature. Neither have I tried to explain how the circulation of people and goods in buildings and cities actually happens.

The approach here is a more abstract one: I have been looking at how certain principles observed in nature (e.g. stigmergy) can be reused for creating diagrams of circulation – abstract representations of design solutions. These diagrams are rich in information and can represent qualities other than geometrical form. In fact, these diagrams match what Alexander (1964) describes as a constructive diagram. Multi-agent models seem to be particularly suitable for generating constructive diagrams because in such models both the shape and the intensity of movement can be easily captured.

In synthetic modelling, knowledge is produced by building models and observing their behaviour by means of exploratory analysis. The explorative-analytical aspect here is extremely important for the designer. Control over diagrams can only be gained through understanding the internal workings of the model and knowing the effect of critical parameters. The synthetic modelling approach is also particularly suitable for a type of architectural modelling where the actual form of the design solution is derived from a set of site-specific constraints and functional principles. The model for creating design solutions should be general enough to be algorithmically describable, yet flexible enough to generate a solution that fits organically onto the site. While building the model is naturally a part of the synthetic approach, fine-tuning the model for a better fit with the site is a matter of exploratory analysis. And again, multi-agent models are perfectly suitable for this type of modelling due to their generic yet context-sensitive nature.

Computational modelling in general, and multi-agent modelling in particular, have something new to offer to the discipline of architecture. This something is more than just a novel method of design.
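The stigmergic principle mentioned above – deposit-and-follow with evaporation – can be sketched in a few lines. This is a generic illustration of the mechanism, not the thesis’s implementation; the grid size, deposit amount and evaporation rate are arbitrary assumptions.

```python
import random

GRID = 20
# A 'pheromone' field: the shared environment the agents read and write.
field = [[0.0] * GRID for _ in range(GRID)]

def step(agents, deposit=1.0, evaporation=0.05):
    # Evaporate every cell a little each time step.
    for row in field:
        for x in range(GRID):
            row[x] *= (1.0 - evaporation)
    for agent in agents:
        x, y = agent
        # Candidate moves: 4-neighbourhood cells that stay on the grid.
        moves = [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < GRID and 0 <= y + dy < GRID]
        # Follow the strongest trail; break ties randomly.
        best = max(field[my][mx] for mx, my in moves)
        nx, ny = random.choice([m for m in moves if field[m[1]][m[0]] == best])
        field[ny][nx] += deposit  # depositing reinforces the trail (feedback)
        agent[0], agent[1] = nx, ny

agents = [[random.randrange(GRID), random.randrange(GRID)] for _ in range(10)]
for _ in range(100):
    step(agents)
```

After enough steps, the positive feedback between deposition and trail-following concentrates movement onto a few cells – the raw material of a movement diagram.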
The algorithmic approach to creating design solutions has the capacity to bring rational argumentation into the design discourse at the early stage of the creative process. Naturally, there are many computational methods already in use in architectural practice. However, most of these methods are used at the analytical stage – in structural or environmental performance analysis. My particular interest, by contrast, lies with conceptual design models. Whereas at the conceptual stage architects normally play around with abstract diagrams and speculative design concepts, computation provides an opportunity to explain the reasons and the mechanisms behind these conceptual and diagrammatic design solutions. Computational methods can lay the foundations for a more scientific approach to design. Imagine how much more meaningful disputes over design proposals could be if design concepts were broken down into algorithms rather than presented as abstract holistic ideas. The difference here is the difference between process and phenomenon – abstract concepts can only be observed, while algorithmic concepts can also be analysed.

Multi-agent models for generating circulation diagrams bring bottom-up thinking into the design domain. This requires a shift from stereotypical design thinking, where design proposals are conceived in a creative yet largely non-transparent process, towards a reasoned and analysable way of creating designs. Instead of thinking about broad concepts, the designer needs to think systematically about processes. No overarching spatial visions are required. Modelling circulation networks bottom-up forces the designer to think in a more user-centred way – when, how and why individuals use the circulation network. Admittedly, these individuals are just computational constructs in the model and represent abstract mobile units, but this does not change the fact that the decisions of these individuals are based on locally available information. Compared to abstract conceptual thinking, modelling at this level is definitely closer to the level of end users. While abstract conceptual thinking praises the grand vision of the designer, the latter approach derives the solution from the requirements of individuals – the end users. Following the bottom-up logic, design solutions can only emerge during the design process and are not predefined in any way.

I can see the role of the designer becoming more similar to that of the synthetic modelling practitioner. The designer has the new tasks of assessing the design brief, choosing an appropriate computational model to meet the brief, preparing the setting-out configuration, generating multiple solutions with the model, evaluating the generated solutions, re-configuring the model if needed, and generating further solutions.
This process leads to solutions that are developed over several design iterations – these solutions are openly evolved rather than conceived in isolation. Going through several design iterations has traditionally been expensive because individual solutions are usually created by repeating the whole design process. Computational models can take this pain away and allow the designer to explore a greater variety of solutions. This, in turn, increases the likelihood of finding good design solutions.
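The iterative generate-evaluate-reconfigure cycle described above maps naturally onto the Modeller-Evaluator-Interpreter pattern. The sketch below is a minimal, hypothetical rendering of that cycle; the three module bodies are placeholders, not the thesis’s models.

```python
import random

def modeller(config):
    """Generate a candidate solution from the current configuration
    (a stand-in for a multi-agent generative run)."""
    return config["bias"] + random.random()

def evaluator(candidate):
    """Score a candidate (a stand-in for any performance analysis)."""
    return candidate

def interpreter(score, config):
    """Re-configure the model in response to the evaluation."""
    if score < 1.0:
        config["bias"] += 0.1
    return config

random.seed(1)                       # repeatable run for illustration
config = {"bias": 0.0}
best = None
for iteration in range(20):          # each pass is one cheap design iteration
    candidate = modeller(config)
    score = evaluator(candidate)
    if best is None or score > best:
        best = score
    config = interpreter(score, config)
```

The point is not the placeholder arithmetic but the shape of the loop: because each iteration is cheap, many solutions can be generated and compared before committing to one.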

The iterative development of design solutions offers yet another opportunity to improve the practice of design. Because solutions to particular problems (e.g. circulation) are grown rather than conceived, they are adaptable and can evolve in unison with other parts of the proposal (e.g. the allocation of spaces). The designer’s role here is not that of a scientific researcher, but that of a synthesiser who makes use of scientifically validated models in order to orchestrate different design aspects into a complete solution. My thesis makes an original contribution to the discipline of architecture in several respects:

1. Originality of research. Nobody has extensively researched methods for generating circulation systems using agent-based modelling techniques. There are just a few examples of scientific work that investigate the possibilities of bottom-up models for synthesising design diagrams. My thesis deploys the synthetic modelling approach and follows it throughout by building prototype models, exploring them analytically, and testing some of these models in the practical context of architectural and urban design.

2. Novelty of generative models. Some of the prototype models that I have built and tested in Chapter 6 have not been used in the architectural context before. The Network Modeller prototype is one such model. The technical implementation of this prototype is fairly simple, yet the area of potential applications in architecture and urban design is wide. None of the algorithms that I have used in Network Modeller are novel in terms of multi-agent modelling techniques; however, the purpose of using this model for generating design solutions is unique. Loop Generator, on the other hand, is a novel model in a much wider sense. Although very simple in terms of the algorithms used, I have found no record in the literature of any other model in which continuous circulation networks are generated from the individual actions of purely reactive agents. Most of my prototypes fall into one of the categories of computational network models as defined in geographical network analysis (Haggett and Chorley 1969). Network Modeller and the Nordhavnen model, for example, are interconnection models, while Stream Simulator is a capture model. Loop Generator, however, does not fit this classification system, because the system expects all networks to have source and/or target nodes. Loop Generator has no source or target nodes, and circulation networks emerge from the movement of reactive agents as a result of a simple stigmergic feedback loop. Yet the model can generate movement diagrams of great variety.

3. Contribution to the rational design debate. Computational models bring concept design to a whole new level where solutions can be argued for or against by discussing the appropriateness and correctness of the algorithms that generated these solutions in the first place.

4. Demonstration of how natural path-formation techniques can be used at the urban scale for generating circulation networks that can be validated against real-world road networks. Additionally, I have proposed a novel evaluation method.

5. New patterns of generative models of design. I have spelled out two patterns for constructing generative models for design purposes. The Modeller-Evaluator-Interpreter pattern has only been implied in previous design research (Galanter 2003; Zee and Vrie 2008), while the Sensor-Motor-Environment pattern has not been mentioned in the design context before.

6. New control methodology. I have suggested that effective control over generative models can be gained by following a tri-partite methodology: programmatically altering the model, modifying the input and the setting-out configuration, and interactively changing the agent’s parameters and environmental processing parameters at runtime.

I have been looking for simple generative models that can produce meaningful circulation diagrams – diagrams that feature distinct movement paths of different qualities. The simplest model that I have found is the reactive multi-agent model that follows the Sensor-Motor-Environment pattern (see section 10.3). I strongly believe that this pattern can be used for inventing standard models for architectural and urban design. Such models – if constructed and validated by a specialist in the field – can lead to a new and commonly accepted way of creating and testing design solutions.
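A minimal rendering of such a reactive sensor-to-motor coupling, using the trivial rule discussed earlier in this chapter (‘turn left when an obstacle is sensed on the right’). The class and method names are my own illustrative choices, not the thesis’s implementation.

```python
HEADINGS = ["N", "E", "S", "W"]  # compass directions, clockwise

class ReactiveAgent:
    """Purely reactive agent: the percept is mapped directly to a motor
    action with no internal world model."""

    def __init__(self, heading="N"):
        self.heading = heading

    def sense(self, environment):
        """Sensor: read only locally available information."""
        return environment.get("obstacle_right", False)

    def act(self, obstacle_right):
        """Motor: turn left (anticlockwise) when an obstacle is sensed
        on the right; otherwise keep the current heading."""
        if obstacle_right:
            i = HEADINGS.index(self.heading)
            self.heading = HEADINGS[(i - 1) % len(HEADINGS)]
        return self.heading

agent = ReactiveAgent("N")
print(agent.act(agent.sense({"obstacle_right": True})))   # W
print(agent.act(agent.sense({"obstacle_right": False})))  # W
```

Everything else in such a model – trails, networks, diagrams – arises from many such agents acting in a shared environment, not from any single rule.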


Bibliography

Adamatzky, A. (2001). Computing in Nonlinear Media and Automata Collectives. Bristol, Institute of Physics Publishing.
Adamatzky, A. and O. Holland (1998). "Voronoi-Like Nondeterministic Partition of a Lattice by Collectives of Finite Automata." Mathematical and Computer Modelling 28(10): 73-93.
Aish, R. and R. Woodbury (2005). "Multi level interaction in parametric design." Lecture Notes in Computer Science 3638: 151-162.
Alexander, C. (1964). Notes on the Synthesis of Form. Cambridge, Harvard University Press.
Alexander, C. (1965). "A City is not a Tree." Architectural Forum 122(1): 58-62.
Antoni, J.-P. (2001). Urban sprawl modelling: A methodological approach. 12th European Colloquium on Quantitative and Theoretical Geography. St-Valery-en-Caux.
Arbib, M. A. (2003). "From Rana computatrix to Human Language: Towards a Computational Neuroethology of Language Evolution." Royal Society of London Transactions Series A 361(1811): 2345-2379.
Arnheim, R. (1977). The Dynamics of Architectural Form. Los Angeles, University of California Press.
Batty, M. (2003). Agent-based Pedestrian Modelling. Advanced Spatial Analysis: The CASA Book of GIS. P. A. Longley and M. Batty. New York, ESRI: 81-108.
Batty, M. (2004). A new theory of space syntax. CASA Working Papers. London, Centre for Advanced Spatial Analysis (UCL).
Batty, M. (2005). Cities and Complexity: Understanding Cities with Cellular Automata, Agent-Based Models, and Fractals. Cambridge, Massachusetts, MIT Press.
Batty, M. (2008). Fifty Years of Urban Modeling: Macro-Statics to Micro-Dynamics. The Dynamics of Complex Urban Systems. S. Albeverio, D. Andrey, P. Giordano and A. Vancheri. Heidelberg, Physica-Verlag: 1-20.
Batty, M. and P. M. Torrens (2005). "Modelling and prediction in a complex world." Futures 37: 745-766.
Bazzani, A., M. Capriotti, B. Giorgini, G. Melchiorre, S. Rambaldi, G. Servizi and G. Turchetti (2008). A Model for Asystematic Mobility in Urban Space. The Dynamics of Complex Urban Systems. S. Albeverio, D. Andrey, P. Giordano and A. Vancheri. Heidelberg, Physica-Verlag: 59-74.
Beck, H. (2010). Retrieved 29.08.2010, from http://britton.disted.camosun.bc.ca/beck_map.jpg.
Beer, S. (1974). Designing Freedom. London, John Wiley.
Berkel, B. v. and C. Bos (1999). Imagination. Amsterdam, UN Studio & Goose Press.
Berkel, B. v. and C. Bos (2006). Diagrams. Theories and Manifestoes of Contemporary Architecture. C. Jencks and K. Kropf. Chichester, West Sussex, Wiley-Academy: 325-327.
Berntson, G. M. (1997). "Topological scaling and plant root system architecture: developmental and functional hierarchies." New Phytologist 135(4): 621-634.
Birkin, M. H., A. G. D. Turner, B. Wu, P. M. Townend and J. Xu (2008). An Architecture for Social Simulation Models to Support Spatial Planning. The Third International Conference on e-Social Science. Oxford.
Blender Foundation (no date). "Blender." Retrieved 17.07.2009, from www.blender.org.
Blum, C. and M. Dorigo (2004). "The hyper-cube framework for ant colony optimization." Systems, Man, and Cybernetics, Part B 34(2): 1161-1172.
Blumberg, B. M. and T. A. Galyean (1995). Multi-Level Direction of Autonomous Creatures for Real-Time Virtual Environments. Computer Graphics.
Bochner, B. and F. Dock (2003). Street Systems and Classification to Support Smart Growth. 2nd Urban Street Symposium. Anaheim.


Bonabeau, E. (2001). Agent-based modeling: Methods and techniques for simulating human systems. Adaptive Agents, Intelligence, and Emergent Human Organization: Capturing Complexity through Agent-Based Modeling. Irvine.
Bonabeau, E., M. Dorigo and G. Theraulaz (1999). Swarm Intelligence: from Natural to Artificial Systems. New York, Oxford University Press.
Bonabeau, E., S. Guerin, D. Snyers, P. Kuntz and G. Theraulaz (2000). "Three-dimensional Architectures Grown by Simple ‘Stigmergic’ Agents." BioSystems 56: 13-32.
Bonabeau, E., G. Theraulaz, J. L. Deneubourg, N. Franks, O. Rafaelsberger, J. Joly and S. Blanco (1998). "A Model for the Emergence of Pillars, Walls and Royal Chambers in Termite Nests." Philosophical Transactions: Biological Sciences 353(1375): 1561-1576.
Bourg, D. M. (2001). Physics for Game Developers. Sebastopol, O'Reilly.
Braitenberg, V. (1986). Vehicles: Experiments in Synthetic Psychology. Cambridge, Massachusetts, MIT Press.
Bridge, J. S. (2003). Rivers and Floodplains: Forms, Processes, and Sedimentary Record. New York, Wiley-Blackwell.
Brimicombe, A. J. and C. Li (2008). "Agent-based services for the validation and calibration of multi-agent models." Computers, Environment and Urban Systems 32(6): 464-473.
Brooks, R. A. (1991a). "Intelligence without representation." Artificial Intelligence 47: 139-159.
Brooks, R. A. (1991b). Intelligence without reason. Technical Report: AIM-1293. Cambridge, Massachusetts Institute of Technology.
Buhl, J., J. Gautrais, J. L. Deneubourg, P. Kuntz and G. Theraulaz (2006). "The Growth and Form of Tunneling Networks in Ants." Journal of Theoretical Biology 243: 287-298.
Butler, Z., K. Kotay, D. Rus and K. Tomita (2001). Cellular Automata for Decentralized Control of Self-Reconfigurable Robots. ICRA Workshop on Modular Self-Reconfigurable Robots.
CABE (no date). "Case studies." Retrieved 02.03.2011, from http://www.cabe.org.uk/casestudies.
Calogero, E. (2008). Getting from A to B and Back: A Representational Framework for Pedestrian Movement Simulation in a School Environment. EDRA 39th Annual Conference, Veracruz.
Camp, C. V., B. J. Bichon and S. P. Stovall (2005). "Design of Steel Frames Using Ant Colony Optimization." Journal of Structural Engineering 131(3): 369-379.
Campbell, M. I., J. Cagan and K. Kotovsky (1998). A-Design: Theory and Implementation of an Adaptive, Agent-Based Method of Conceptual Design. Artificial Intelligence in Design. Lisbon.
Capra, F. (1996). The Web of Life. London, HarperCollins.
Carranza, P. M. and P. S. Coates (2000). The use of Swarm Intelligence to generate architectural form. Generative Art Conference, Milan.
Castle, C. J. E. and A. T. Crooks (2006). Principles and concepts of agent-based modelling for developing geospatial simulations. CASA Working Papers. London, Centre for Advanced Spatial Analysis (UCL).
Chase, W. G. (1982). Spatial representations of taxi drivers. Acquisition of Symbolic Skills. D. R. Rogers and J. A. Sloboda. New York, Plenum: 391-405.
Chomsky, N. (1956). "Three models for the description of language." IRE Transactions on Information Theory 2: 113-124.
Christaller, W. (no date). Retrieved 11.05.2009, from http://www.answers.com/topic/centralplace-theory-1.
Clerkin, P. (2005). "Glossary of Architectural Terms." Retrieved 18.07.2010, from http://www.archiseek.com.
Coates, P. S. (2009). Computational model of beady ring. R. Puusepp. London.
Coates, P. S. (2010). Programming.Architecture. London, Routledge.
Coates, P. S., T. Broughton and A. Tan (1997). The Use of Genetic Programming in Exploring 3D Design Worlds. London, Centre for Environment and Computing in Architecture, University of East London.
Coates, P. S., C. Derix and C. Simon (2003). Morphogenetic CA 69’ 40’ 33 north. Generative Art Conference, Milan.


Coates, P. S., N. Healy, C. Lamb and W. L. Voon (1996). The Use of Cellular Automata to Explore Bottom-Up Architectonic Rules. Eurographics UK Chapter 14th Annual Conference. London.
Crecu, D. L. and D. C. Brown (2000). Expectation formation in multi-agent design systems. Artificial Intelligence in Design '00, Kluwer Academic Publishers.
Crooks, A. T. (2008). Constructing and Implementing an Agent-Based Model of Residential Segregation through Vector GIS. CASA Working Papers. London, Centre for Advanced Spatial Analysis (UCL).
Cross, N. (1977). The Automated Architect. London, Pion.
D'Souza, D. F. and A. C. Wills (1998). Objects, Components, and Frameworks with UML: The Catalysis Approach. Reading, Massachusetts, Addison-Wesley.
De Jong, K. A. (2006). Evolutionary computation: a unified approach. Cambridge, Massachusetts, MIT Press.
De Schutter, B., S. P. Hoogendoorn, H. Schuurman and S. Stramigioli (2003). "A Multi-Agent Case-Based Traffic Control Scenario Evaluation System." Intelligent Transportation Systems 1(12): 678-683.
Deaton, M. and J. Winebrake (2000). Dynamic Modelling of Environmental Systems. London, Springer.
Deneubourg, J. L., J. M. Pasteels and J. C. Verhaeghe (1983). "Probabilistic Behaviour in Ants: a Strategy of Errors?" Journal of Theoretical Biology 105: 259-271.
Deneubourg, J. L., G. Theraulaz and R. Beckers (1992). Swarm Made Architectures. Toward a Practice of Autonomous Systems, Proceedings of the First European Conference on Artificial Life, MIT Press.
Derix, C. (2008). Genetically Modified Spaces. Space Craft: Developments in Architectural Computing. D. Littlefield. London, RIBA Publishing: 48-57.
Dey, T. K. and W. Zhao (2003). "Approximating the Medial Axis from the Voronoi Diagram with a Convergence Guarantee." Algorithmica 38(1): 179-200.
Dijkstra, J. and H. J. P. Timmermans (2002). "Towards a multi-agent model for visualizing simulated user behavior to support the assessment of design performance." Automation in Construction 11: 135-145.
Dijkstra, J., H. J. P. Timmermans and B. de Vries (2007). Empirical estimation of agent shopping patterns for simulating pedestrian movement. CUPUM07 Computers in Urban Planning and Urban Management, Iguassu Falls.
Dill, J. (2004). Measuring Network Connectivity for Bicycling and Walking. TRB Annual Meeting. Portland.
Doran, J., S. Franklin, N. Jennings and T. Norman (1997). "On Cooperation in Multi-Agent Systems." The Knowledge Engineering Review 12: 309-314.
Dorigo, M., M. Birattari and T. Stützle (2006). Ant Colony Optimization: Artificial Ants as a Computational Intelligence Technique. IRIDIA - Technical Report. Brussels.
Dorigo, M., V. Maniezzo and A. Colorni (1996). "The Ant System: Optimization by a colony of cooperating agents." Transactions on Systems, Man, and Cybernetics - Part B 26(1): 1-13.
Dorigo, M. and K. Socha (2007). An Introduction to Ant Colony Optimization. IRIDIA - Technical Report. Brussels.
Downs, R. M. and D. Stea (1977). Maps in Minds: Reflections of Cognitive Mapping. New York, Harper & Row.
Duarte, J. P. (2004). The Virtual Architect. Generative Art Conference, Milan.
Dunham, G., S. Tisue and U. Wilensky (2004). NetLogo Erosion model. Evanston, Center for Connected Learning and Computer-Based Modeling, Northwestern University.
Evans, R. (2008). Urban Design Compendium 2: Delivering Quality Places. English Partnerships, The Housing Corporation.
Ferre, A., T. Sakamoto, M. Kubo, T. Daniell and E. van Goethem, Eds. (2002). The Yokohama Project. Barcelona.
Forrester, J. W. (1972). Understanding the counterintuitive behaviour of social systems. Systems Behaviour. J. Beishon and G. Peters, The Open University Press.


Frazer, J. (1995). An Evolutionary Architecture. London, Architectural Association.
Fuhrmann, O. and C. Gotsman (2006). "On the algorithmic design of architectural configurations." Environment and Planning B: Planning and Design 33: 131-140.
Funes, P. J. and J. B. Pollack (1999). Computer Evolution of Buildable Objects. Evolutionary Design by Computers. P. J. Bentley. San Francisco, Morgan Kaufmann.
Galanter, P. (2003). What is Generative Art? Complexity Theory as a Context for Art Theory. Generative Art Conference, Milan.
Gibson, J. (1979). The Ecological Approach to Visual Perception. Boston, Houghton Mifflin.
Gilbert, N. (1995). Emergence in Social Simulation. Artificial Societies: The Computer Simulation of Social Life. N. Gilbert and R. Conte. London, UCL Press: 144-157.
Gilbert, N. (2004). Agent-based social simulation: dealing with complexity. Guildford, University of Surrey.
Gilbert, N. and P. Terna (2000). "How to build and use agent-based models in social science." Mind and Society 1(1): 57-72.
Glass, K. R., C. Morkel and S. B. Bangay (2006). Duplicating Road Patterns in South African Informal Settlements Using Procedural Techniques. 4th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa, New York.
Goldberg, D. E. (2008). Genetic algorithms in search, optimization, and machine learning. Lecture given at University of Illinois, Urbana-Champaign.
Goldstein, J. (2005). "Emergence, Creativity, and the Logic of Following and Negating." The Innovation Journal 12(23).
Goldstone, R. and M. Roberts (2006). "Self-organized Trail Systems in Groups of Humans." Complexity 11(6): 43-50.
Gomes, B., J. Bento, S. Scheer and R. Cerquiera (1998). Distributed Agents Supporting Event-driven Design Processes. Artificial Intelligence in Design, Kluwer Academic Publishers.
Google (2011a). "Satellite view of Nordhavnen, Copenhagen." Retrieved 02.03.2011.
Google (2011b). "Map of Orenco Station, Portland." Retrieved 02.03.2011.
Google (2011c). "Map of Hammarby Sjöstad, Stockholm." Retrieved 02.03.2011.
Google (2011d). "Map of Vauban, Freiburg." Retrieved 02.03.2011.
Grand, S. (2001). Creation: Life and How to Make It. London, Phoenix.
Grey, W. W. (1951). "A Machine That Learns." Scientific American 185(2): 60-63.
Gu, N. and M. L. Maher (2003). "A Grammar for the Dynamic Design of Virtual Architecture Using Rational Agents." International Journal of Architectural Computing 1(4): 489-501.
Gutjahr, W. J. (2008). "First Steps to the Runtime Complexity Analysis of Ant Colony Optimization." Computers and Operations Research 35(9): 2711-2727.
Hadeli, P. Valckenaers, M. Kollingbaum and H. v. Brussel (2004). "Multi-agent coordination and control using stigmergy." Computers in Industry 53(1): 75-96.
Hadlaw, J. (2003). "The London Underground Map: Imagining Modern Time and Space." Design Issues 19(1): 25-36.
Haggett, P. and R. J. Chorley (1969). Network Analysis in Geography. London, Butler & Tanner.
Haggett, P., A. D. Cliff and A. Frey (1977). Locational Models. London, Edward Arnold.
Haverkort, H. J. and H. L. Bodlaender (1999). Finding a Minimal Tree in a Polygon with its Medial Axis.
Helbing, D., I. J. Farkas, P. Molnar and T. Vicsek (2002). Simulation of Pedestrian Crowds in Normal and Evacuation Situations. Pedestrian and Evacuation Dynamics. M. Schreckenberg and S. D. Sarma. Berlin, Springer Verlag: 21-58.
Heppenstall, A. J., A. J. Evans and M. H. Birkin (2007). "Genetic algorithm optimisation of an agent-based model for simulating a retail market." Environment and Planning B: Planning and Design 34: 1051-1070.
Herr, C. M. (2003). Using Cellular Automata to Challenge Cookie-Cutter Architecture. Generative Art Conference, Milan.
Herr, C. M. and T. Kvan (2007). "Adapting cellular automata to support the architectural design process." Automation in Construction 16: 61-69.


Herring, M. (2004). "The Euclidean Steiner Tree Problem." Mathematics and Computer Science. Retrieved 01.04.2009, from www.denison.edu/academics/departments/mathcs/herring.pdf.
Heylighen, F. and C. Joslyn (2001). Cybernetics and Second-Order Cybernetics. Encyclopedia of Physical Science & Technology. R. A. Meyers. New York, Academic Press.
Hillier, B. (1989). "Architecture of the urban object." Ekistics 56(334/335): 5-21.
Hillier, B. and J. Hanson (1984). The Social Logic of Space. Cambridge, Cambridge University Press.
Holland, J. (1998). Emergence: From Chaos to Order. Oxford, Oxford University Press.
Holland, O. and C. Melhuish (1999). "Stigmergy, self-organisation, and sorting in collective robotics." Artificial Life 5(2): 173-202.
Hölscher, C., S. Büchner, M. Brösamle, T. Meilinger and G. Strube (2007). Signs and Maps - Cognitive Economy in the Use of External Aids for Indoor Navigation. 29th Annual Conference of the Cognitive Science Society. Nashville.
Howard, E. (no date). Retrieved 11.05.2009, from www.oliviapress.co.uk.
Huhns, M. N. and L. M. Stephens (1999). Multiagent Systems and Societies of Agents. Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. G. Weiss. Cambridge, MIT Press.
Ireland, T. (2008). Sniffing space. Generative Art Conference, Milan.
Izquierdo-Torres, E. (2004). Collective Intelligence in Multi-Agent Robotics: Stigmergy, Self-Organization and Evolution.
Jacob, C. and S. v. Mammen (2007). "Swarm grammars: growing dynamic structures in 3D agent spaces." Digital Creativity 1(18): 54-64.
Jin, Y. and Y. Zhao (2007). "Chaotic Ant Colony Algorithm for Preliminary Ship Design." Natural Computation 4: 776-781.
Jodidio, P. (2001). Architecture Now! 3. Stuttgart.
Johnson, S. (2002). Emergence: The Connected Lives of Ants, Brains, Cities and Software. London, Penguin.
Jormakka, K. (2002). Flying Dutchman: Motion in Architecture. Basel, Birkhäuser.
Junfeng, J. (2003). Transition Rule Elicitation for Urban Cellular Automata Models. Enschede, International Institute for Geo-information Science and Earth Observation.
Kalay, Y. E. (2004). Architecture's New Media: Principles, Theories, and Methods of Computer-Aided Design. Cambridge, Massachusetts, MIT Press.
Keil, D. and D. Goldin (2006). Indirect Interaction in Environments for Multiagent Systems. Environments for Multiagent Systems II. D. Weyns, H. V. D. Parunak and F. Michel. Utrecht, Springer: 68-87.
Kelly, K. (1994). Out of Control: The Biology of Machines. London, Fourth Estate.
Kicinger, R., T. Arciszewski and K. A. De Jong (2004). Morphogenesis and structural design: cellular automata representations of steel structures in tall buildings. Congress on Evolutionary Computation, Portland.
Kilian, A. and J. Ochsendorf (2005). "Particle-spring Systems for Structural Form Finding." Journal of the International Association for Shell and Spatial Structures 46.
König, R. and C. Bauriedel (2004). Computer-generated Urban Structures. Generative Art Conference, Milan.
Kopperman, R., P. Panangaden, M. B. Smyth, D. Spreen and J. Webster (2006). "Spatial Representation: Discrete vs. Continuous Computational Models." Theoretical Computer Science 365(12): 169-170.
Koutamanis, A., M. v. Leusen and V. Mitossi (2001). Route analysis in complex buildings. CAAD Futures, Eindhoven.
Krawczyk, R. J. (2002). Architectural Interpretation of Cellular Automata. Generative Art Conference, Milan.
Krink, T. and F. Vollrath (1997). "Analysing Spider Behaviour with Rule-based Simulations and Genetic Algorithms." Journal of Theoretical Biology 185: 321-331.


Kuipers, B., D. Tecuci and B. J. Stankiewicz (2003). "The Skeleton in the Cognitive Map: A Computational and Empirical Exploration." Environment and Behavior 35(1): 81-106.
Kukla, R., J. Kerridge, A. Willis and J. Hine (2001). "PEDFLOW: Development of an Autonomous Agent Model of Pedestrian Flow." Transportation Research Record 1774(1): 11-17.
Ladley, D. and S. Bullock (2005). "The Role of Logistic Constraints in Termite Construction of Chambers and Tunnels." Journal of Theoretical Biology 234: 551-564.
Leach, N. (2004). Swarm Tectonics. Digital Tectonics. N. Leach, D. Turnbull and C. Williams. London, Wiley: 70-77.
Ligmann-Zielinska, A. and P. Jankowski (2007). "Agent-based models as laboratories for spatially explicit planning policies." Environment and Planning B: Planning and Design 34: 316-335.
Llewelyn-Davies (2000). Urban Design Compendium 1: Urban Design Principles. English Partnerships, The Housing Corporation.
Longley, P. A. (2004). "Geographical Information Systems: on modelling and representation." Progress in Human Geography 28(1): 108-116.
Luhmann, N. (1984). Social Systems. Frankfurt, Suhrkamp Verlag.
Lynch, K. (1981). Good City Form. Cambridge, Massachusetts, MIT Press.
Macal, C. M. and M. J. North (2006). Tutorial on Agent-based Modeling and Simulation Part 2: How to Model with Agents. Winter Simulation Conference.
Macgill, J. (2000). Using flocks to drive a Geographical Analysis Engine. Artificial Life VII, Cambridge, Massachusetts, MIT Press.
Maciel, A. (2008). Artificial Intelligence and the Conceptualisation of Architecture. Space Craft: Developments in Architectural Computing. D. Littlefield. London, RIBA Publishing: 64-72.
Mahdavi, H. S. and S. Hanna (2004). Optimising Continuous Microstructures: A Comparison of Gradient-Based and Stochastic Methods. The Joint 2nd International Conference on Soft Computing and Intelligent Systems and 5th International Symposium on Advanced Intelligent Systems, Yokohama.
Mammen, S. v., C. Jacob and G. Kokai (2005). Evolving Swarms that Build 3D Structures. IEEE Congress on Evolutionary Computation, Edinburgh.
Marshall, S. (2005). Streets & Patterns. Abingdon, Oxon, Spon Press.
Maturana, H. R. and F. J. Varela (1980). Autopoiesis and Cognition: The Realization of the Living. Boston, Reidel.
Maxwan (no date). Retrieved 11.04.2009, from www.maxwan.com.
McCormack, J., A. Dorin and T. Innocent (2004). Generative Design: a paradigm for design research. Futureground, Design Research Society, Melbourne.
Merleau-Ponty, M. (1979). Phenomenology of Perception. Suffolk, St Edmundsbury Press.
Mertens, K., T. Holvoet and Y. Berbers (2004). Adaptation in a Distributed Environment. First International Workshop on Environments for Multiagent Systems, New York.
Minsky, M. (1988). The Society of Mind. London, Pan Books.
Mitchell, W. J. (2006). E-topia: Information and Communication Technologies and the Transformation of Urban Life. The Network Society: From Knowledge to Policy. G. Cardoso and M. Castells. SAIS, Center for Transatlantic Relations and Johns Hopkins University.
Morgan, M. (2003). The Space Between Our Ears. London, Weidenfeld & Nicolson.
Morse, A. F. and T. Ziemke (2008). "On the role(s) of modelling in cognitive science." Pragmatics & Cognition 16(1): 37-56.
Moussavi, F. and A. Zaera-Polo (2006). On Instruments: Diagrams, Drawing and Graphs. Theories and Manifestoes of Contemporary Architecture. C. Jencks and K. Kropf. Chichester, West Sussex, Wiley-Academy: 337-339.
Nembrini, J., N. Reeves, E. Poncet, A. Martinoli and A. Winfield (2005). Mascarillons: flying swarm intelligence for architectural research. Swarm Intelligence Symposium, Pasadena.
Neumann, F. and C. Witt (2008). Ant Colony Optimization and the Minimum Spanning Tree Problem. Lecture Notes in Computer Science. Berlin, Springer. 5313: 153-166.
Nickerson, J. V. (2008). Generating Networks. Design Computing and Cognition, Atlanta.


Otto, F. and B. Rasch (1995). Finding Form: Towards an Architecture of the Minimal. Stuttgart, Axel Menges.
Parish, Y. and P. Mueller (2001). Procedural Modeling of Cities. SIGGRAPH.
Parker, D. C., S. M. Manson, M. A. Janssen, M. J. Hoffmann and P. Deadman (2003). "Multi-Agent Systems for the Simulation of Land-Use and Land-Cover Change: A Review." Annals of the Association of American Geographers 93(2): 314-337.
Parunak, H. V. D. (2006). A Survey of Environments and Mechanisms for Human-Human Stigmergy. Environments for Multiagent Systems II. D. Weyns, H. V. D. Parunak and F. Michel. Utrecht.
Pechač, P. (2002). Application of Ant Optimisation Algorithm on the Recursive Propagation Model for Urban Microcells. XXVIIth General Assembly of the International Union of Radio Science. Maastricht: 569.
Penn, A. (2001). Space Syntax and Spatial Cognition. Or, why the axial line? Space Syntax 3rd International Symposium, Atlanta.
Penn, A. and A. Turner (2002). Space Syntax Based Agent Simulation. International Conference on Pedestrian and Evacuation Dynamics, Duisburg.
Pfeifer, R. and C. Scheier (2001). Understanding Intelligence. Cambridge, Massachusetts, MIT Press.
Polidori, M. and R. Krafta (2004). Environment – Urban Interface within Urban Growth. Developments in Design & Decision Support Systems in Architecture and Urban Planning. J. P. v. Leeuwen and H. J. P. Timmermans. Eindhoven, Eindhoven University of Technology: 49-62.
Portugali, J. (1999). Self-Organization and the City. Berlin, Springer-Verlag.
Prusinkiewicz, P. and A. Lindenmayer (1990). The Algorithmic Beauty of Plants. New York, Springer-Verlag.
Pullen, W. D. (2007). Daedalus.
Puusepp, R. and P. S. Coates (2007). "Simulations with cognitive and design agents." International Journal of Architectural Computing 5: 100-114.
Python Software Foundation (no date). Retrieved 17.07.2009, from www.python.org.
Raisbeck, P. (2007). "Provocative Agents: Agent Based Modelling Systems and the Global Production of Architecture."
Ramos, V., C. Fernandes and A. C. Rosa (2007). Social Cognitive Maps, Swarm Collective Perception and Distributed Search on Dynamic Landscapes.
Raubal, M. (2001). "Ontology and epistemology for agent-based wayfinding simulation." Geographical Information Science 15(7): 653-665.
Reffat, R. M. (2003). Architectural Exploration and Creativity using Intelligent Design Agents. 21st eCAADe Conference, Graz.
Reffat, R. M. (2006). "Computing in Architectural Design: Reflections and an Approach to New Generations of CAAD." ITcon 11: 655-668.
Reiser+Umemoto (2006). Atlas of Novel Tectonics. New York, Princeton Architectural Press.
Resnick, M. (1994). Turtles, Termites, and Traffic Jams: Explorations in Massively Parallel Microworlds. Cambridge, Massachusetts, MIT Press.
Resnick, M. (1999). Decentralized Modeling and Decentralized Thinking. Modeling and Simulation in Science and Mathematics Education. W. Feurzeig and N. Roberts. New York, Springer: 114-137.
Reynolds, C. W. (1987). "Flocks, Herds, and Schools: A Distributed Behavioral Model." Computer Graphics 21(4): 25-34.
Reynolds, C. W. (2004). Retrieved 17.07.2009, from http://opensteer.sourceforge.net.
Richardson, T. A. (1859). The Art of Architectural Modelling in Paper. London, John Weale.
Ritchie, G. (2003). Static Multi-processor Scheduling with Ant Colony Optimisation & Local Search. School of Informatics. Edinburgh, University of Edinburgh.
Rittel, H. W. J. and M. M. Webber (1973). "Dilemmas in a General Theory of Planning." Policy Sciences 4: 155-169.


Rizzoli, A. E., R. Montemanni, E. Lucibello and L. M. Gambardella (2007). "Ant colony optimization for real-world vehicle routing problems." Swarm Intelligence 1(2): 135-151.
Rodrigues, A. and J. Raper (1999). Defining Spatial Agents. Spatial Multimedia and Virtual Reality. J. Raper and A. Câmara. London, Taylor & Francis: 111-129.
Rogers, R. (1999). Towards an Urban Renaissance. London, Urban Task Force.
Rosenblatt, F. (1958). "The Perceptron: a Probabilistic Model for Information Storage and Organization in the Brain." Psychological Review 65(6): 386-408.
RUDI (no date). "Classic: A city is not a tree, part one, by Christopher Alexander." Retrieved 28.04.2009, from http://www.rudi.net/pages/8755.
Ruiz-Tagle, J. V. (2007). "Modeling and Simulating the City: Deciphering the Code of a Game of Strategy." International Journal of Architectural Computing 5(3): 571-587.
Runions, A., M. Fuhrer, B. Lane, P. Federl, A. Rolland-Lagan and P. Prusinkiewicz (2005). Modeling and visualization of leaf venation patterns. SIGGRAPH.
Russell, S. and P. Norvig (1995). Artificial Intelligence: A Modern Approach. London, Prentice-Hall.
Samani, N. N., L. Hajibabai, M. R. Delavar, M. R. Malek and A. U. Frank (2007). An Agent-based Indoor Wayfinding Based on Digital Sign System. Urban Data Management. M. Rumor, V. Coors, E. M. Fendel and S. Zlatanova. London, Taylor & Francis: 511-521.
Samaniego, H. and M. E. Moses (2008). "Cities as organisms: Allometric scaling of urban road networks." Journal of Transport and Land Use 1(1): 21-39.
Schelling, T. C. (1971). "Dynamic Models of Segregation." Journal of Mathematical Sociology 1: 143-186.
Scholl, H. (2001). Agent-based and System Dynamics Modeling: A Call for Cross Study and Joint Research. 34th Annual Hawaii International Conference on System Sciences, Honolulu.
Schumacher, P. (2008). The future is parametric. Building Design. 19.09.2008.
Sen, S., S. Saha, S. Airiau, T. Candale, D. Banerjee, D. Chakraborty, P. Mukherjee and A. Gursel (2007). Robust Agent Communities. Autonomous Intelligent Systems: Agents and Data Mining. V. Gorodetsky, C. Zhang, V. A. Skormin and L. Cao, Springer. 4476: 28-45.
Sevaldson, B. (2000). Dynamic Generative Diagrams. 18th eCAADe Conference, Weimar.
Shane, D. G. (2005). Recombinant Urbanism: Conceptual Modeling in Architecture, Urban Design, and City Theory. Chichester, West Sussex, John Wiley.
Shepherd, P. (2009). Digital Architectonics in Practice. 27th eCAADe Conference, Istanbul.
Shoham, Y. and K. Leyton-Brown (2009). Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. New York, Cambridge University Press.
Shpuza, E. (2006). Floorplate Shapes and Office Layouts: A Model of the Effect of Floorplate Shape on Circulation Integration. Architecture, Georgia Institute of Technology. PhD Thesis.
Shpuza, E. and J. Peponis (2006). Floorplate shapes and office layouts: a model of the relationship between shape and circulation integration. 5th International Symposium of Space Syntax, Delft.
Silva, C. A., R. B. Seixas and O. L. Farias (2005). Geographical Information Systems and Dynamic Modeling via Agent Based Systems. Advances in Geographical Information Systems, Bremen.
Sim, K. M. and W. H. Sun (2002). Multiple Ant-Colony Optimization for Network Routing. First International Symposium on Cyber Worlds, Washington, IEEE Computer Society.
Sims, K. (1994). Evolving Virtual Creatures. Computer Graphics (SIGGRAPH '94 Proceedings).
Skyttner, L. (1996). General Systems Theory: An Introduction. Basingstoke, Macmillan.
Smith, R. (2007). "Open Dynamics Engine." Retrieved 25.03.2010, from http://ode.org/.
Somol, E. (2006). Dummy Text, or the Diagrammatic Basis of Contemporary Architecture. Diagram Diaries. P. Eisenman. London, Wiley-Academy: 5-11.
Song, Y. and G.-J. Knaap (2004). "Measuring Urban Form: Is Portland winning the war on sprawl?" Journal of the American Planning Association 70(2): 210-225.
Spuybroek, L. (2006). Machining Architecture. Theories and Manifestoes of Contemporary Architecture. C. Jencks and K. Kropf. Chichester, West Sussex, Wiley-Academy: 351-352.


Stanley, K. O., B. D. Bryant and R. Miikkulainen (2005). Evolving Neural Network Agents in the NERO Video Game. Symposium on Computational Intelligence and Games, Piscataway.
Stea, D. (1974). Architecture in the Head. Designing for Human Behavior. J. Lang. Stroudsburg, Dowden, Hutchinson & Ross: 157-168.
Steels, L. (1997). "The synthetic modeling of language origins." Evolution of Communication Journal 1(1): 1-34.
Stewart, R. and A. Russell (2003). Emergent Structures Built by a Minimalist Autonomous Robot Using a Swarm-inspired Template Mechanism. Australian Conference on Artificial Life, Canberra.
Stiny, G. and J. Gips (1972). Shape Grammars and the Generative Specification of Painting and Sculpture. IFIP Congress 71. C. V. Freiman. Amsterdam: 1460-1465.
Støy, K. (2004). Controlling Self-Reconfiguration using Cellular Automata and Gradients. 8th International Conference on Intelligent Autonomous Systems, Amsterdam.
Tabor, P. (1971). Traffic in buildings 1: pedestrian circulation in offices. Cambridge, University of Cambridge School of Architecture.
Terzidis, K. (2006). Algorithmic Architecture. Oxford, Elsevier.
Tesfatsion, L. (2006). Agent-based Computational Economics: a Constructive Approach to Economic Theory. Handbook of Computational Economics. L. Tesfatsion and K. L. Judd. North-Holland, Elsevier.
Testa, P., U.-M. O'Reilly, D. Weiser and I. Ross (2001). "Emergent Design: a crosscutting research program and design curriculum integrating architecture and artificial intelligence." Environment and Planning B: Planning and Design 28(4): 481-498.
Testa, P. and D. Weiser (2002). "Emergent Structural Morphology." Architectural Design, Special Issue, Contemporary Techniques in Architecture 72(1): 12-16.
Theraulaz, G. and E. Bonabeau (1995). "Modelling the Collective Building of Complex Architectures in Social Insects with Lattice Swarms." Journal of Theoretical Biology 177(4): 381-400.
Tibert, A. G. and S. Pellegrino (2003). "Review of Form-Finding Methods for Tensegrity Structures." International Journal of Space Structures 18(4): 209-223.
Timpf, S., C. S. Volta, D. W. Pollock, A. U. Frank and M. J. Egenhofer (1992). "A conceptual model of wayfinding using multiple levels of abstraction." Lecture Notes in Computer Science 639.
Turner, A. (2003). "Analysing the visual dynamics of spatial morphology." Environment and Planning B: Planning and Design 30: 657-676.
Turner, A. and A. Penn (2002). "Encoding natural movement as an agent-based system: an investigation into human pedestrian behaviour in the built environment." Environment and Planning B: Planning and Design 29: 473-490.
Turner, S. (2000). "Architecture and morphogenesis in the mound of Macrotermes michaelseni (Sjöstedt) (Isoptera: Termitidae, Macrotermitinae) in northern Namibia." Cimbebasia 16: 143-175.
Turner, S. (2007). Homeostasis, complexity, and the problem of biological design. Complexity and Philosophy, Stellenbosch.
UNStudio (no date). "Louis Vuitton store." Retrieved 11.03.2009, from www.unstudio.com.
Valckenaers, P., M. Kollingbaum, H. v. Brussel, O. Bochmann and C. Zamfirescu (2001). The Design of Multi-Agent Coordination and Control Systems using Stigmergy. IWES'01 Conference, Bled.
Varawalla, H. (2004). "The importance of the ‘design of circulation’ for hospitals." Healthcare Management. Retrieved 14.10.2008, from http://www.expresshealthcaremgmt.com/20040531/architecture01.shtml.
Vauban (no date). Retrieved 02.03.2011, from www.vauban.de.
Vidal, J. M. (2003). Learning in Multiagent Systems: An Introduction from a Game-Theoretic Perspective. Adaptive Agents and Multiagent Systems. E. Alonso, Springer. 2636: 202-215.
Vidal, J. M. (2007). Fundamentals of Multiagent Systems: with NetLogo Examples.


Von Mammen, S. and C. Jacob (2007). Genetic Swarm Grammar Programming: Ecological Breeding like a Gardener. IEEE Congress on Evolutionary Computation.
Von Neumann, J. (1951). The General and Logical Theory of Automata. Cerebral Mechanisms in Behavior. L. A. Jeffress. New York, Wiley: 1-41.
Wan, A. D. M. and P. J. Braspenning (1996). Adaptive Agent Design Based on Reinforcement Learning and Tracking. 6th Belgian-Dutch Conference on Machine Learning, Maastricht.
Weinstock, M. (2006). Morphogenesis and the Mathematics of Emergence. Theories and Manifestoes of Contemporary Architecture. C. Jencks and K. Kropf. Chichester, West Sussex, Wiley-Academy: 351-352.
Werner, S., B. Krieg-Brückner and T. Herrmann (2000). "Modelling Navigational Knowledge by Route Graphs." Spatial Cognition II: 295-316.
Weyns, D., H. V. D. Parunak, F. Michel, T. Holvoet and J. Ferber (2005). "Environments for Multiagent Systems: State-of-the-Art and Research Challenges." Environments for Multiagent Systems: 1-47.
Wheeler, W. (1911). "The Ant Colony as an Organism." Journal of Morphology 22: 307-325.
Wilensky, U. (1998). NetLogo Pursuit model. Evanston, Center for Connected Learning and Computer-Based Modeling, Northwestern University.
Wilensky, U. (1999). "NetLogo." Retrieved 17.07.2009, from http://ccl.northwestern.edu/netlogo.
Wilensky, U. (2001). NetLogo Rabbits Grass Weeds model. Evanston, Center for Connected Learning and Computer-Based Modeling, Northwestern University.
Wilensky, U. (2002). NetLogo DLA model. Evanston, Center for Connected Learning and Computer-Based Modeling, Northwestern University.
Wilensky, U. (2004). NetLogo Rebellion model. Evanston, Center for Connected Learning and Computer-Based Modeling, Northwestern University.
Willgoose, G., R. Bras and I. Rodriguez-Iturbe (1991). "A Coupled Channel Network Growth and Hillslope Evolution Model: 2. Nondimensionalization and Applications." Water Resources Research 27(7): 1685-1696.
Williams, C. (2005). Computers and the Design and Construction Process. Visions for the Future of Construction Education: Teaching Construction in a Changing World. M. Voyatzaki, European Network of Heads of Schools of Architecture.
Williams, C. (2008). Practical Emergence. Space Craft: Developments in Architectural Computing. D. Littlefield. London, RIBA Publishing: 72-79.
Willoughby, T. M. (1975). "Building forms and circulation patterns." Environment and Planning B: Planning and Design 2(1): 59-87.
Wilson, E. (1980). Sociobiology. Cambridge, Harvard University Press.
Wooldridge, M. (1999). Intelligent Agents. Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. G. Weiss. Cambridge, MIT Press.
Zee, A. v. d. and B. d. Vries (2008). Design by Computation. Generative Art Conference, Milan.
Zhang, X. and M. P. Armstrong (2005). Using a Genetic Algorithm to Generate Alternatives for Multiobjective Corridor Location Problems. Geocomputation, University of Michigan.


Appendix 1: Submission to the open international ideas competition for Nordhavnen

“Planting a future for Copenhagen” by Slider Studio and Mæ architects, board 1


“Planting a future for Copenhagen” by Slider Studio and Mæ architects, board 2


“Planting a future for Copenhagen” by Slider Studio and Mæ architects, board 3


“Planting a future for Copenhagen” by Slider Studio and Mæ architects, board 4


“Planting a future for Copenhagen” by Slider Studio and Mæ architects, circulation diagram


Appendix 2: Submission to the international ideas competition for Tallinn City Hall

“Agora” by Slider Studio, board 1


“Agora” by Slider Studio, board 2


“Agora” by Slider Studio, board 3


“Agora” by Slider Studio, board 4


Glossary of terms

Agent – an autonomous unit of computation. As explained in Chapter 4, no universally accepted definition exists. In this thesis, an agent is seen as a piece of computer code – an object in object-oriented programming terms – that is distinguishable from its environment and possesses some control over its actions

Algorithm – a sequence of well-defined instructions or a description of a procedure for solving a problem. Explicitly written algorithms can be executed by computers

Ant colony optimisation – a metaheuristic method that simulates the ant colony’s foraging behaviour in order to find and optimise paths between given points

Bottom-up modelling – a modelling methodology where the global behaviour of a system is generated by defining the behaviour of system’s components at the local level. Multi-agent systems are typical examples of bottom-up modelling

Cellular automaton – a computational model where nodes in a regular graph take discrete states and “update their states in parallel using the same state transition rule.”1
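As an illustrative sketch only (this code is not from the thesis, and the function name `ca_step` is hypothetical), a one-dimensional binary cellular automaton can be written in a few lines of Python:

```python
def ca_step(cells, rule=30):
    """One synchronous update of a 1D binary cellular automaton.

    Every cell applies the same state transition rule in parallel:
    its next state is looked up in the 8-bit rule table using the
    3-cell neighbourhood (with wrap-around) as the index.
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A single live cell under Wolfram's rule 30:
row = ca_step([0, 0, 0, 1, 0, 0, 0])  # -> [0, 0, 1, 1, 1, 0, 0]
```

Iterating `ca_step` and stacking the rows reproduces the familiar triangular patterns of elementary cellular automata.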

Circulation – the part of buildings and settlements that facilitates movement of people from place to place; “the means by which access is provided through and around”2 an environment

Complete (circulation) model – a computational model where circulation diagrams and the layout of non-circulatory spaces are generated in parallel

1 Adamatzky, A. (2001). Computing in Nonlinear Media and Automata Collectives. Bristol, Institute of Physics Publishing.
2 Clerkin, P. (2005). "Glossary of Architectural Terms." Retrieved 18.07.2010, from http://www.archiseek.com.

270

Design pattern – a theoretical schema that serves as a template for building computational models

Diagram – an abstract and information rich representation of a solution from which design proposals can be derived by means of interpretation

Edge following – the movement of agents along the edges of obstacles or the edges of the bounded environment

Environment – a physical or virtual setting that consumes the system's (agent's) outputs and produces its inputs3. In the architectural design context, the term is used as shorthand for the "built environment"

Generative design/modelling – a design methodology where dynamic processes and feedback loops are used for generating design solutions

Greedy algorithm – an algorithm where decisions are made based on locally optimal choices in the hope of reaching the global optimum
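A minimal sketch of the greedy strategy (not from the thesis; the coin-change problem and the function name `greedy_change` are chosen purely for illustration):

```python
def greedy_change(amount, coins=(50, 20, 10, 5, 2, 1)):
    """Greedy coin change: repeatedly take the largest coin that fits.

    Each step is only locally optimal; for canonical coin systems such
    as this one the greedy choice also yields the global optimum, but
    for other denominations it can fail to do so.
    """
    result = []
    for coin in coins:
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

greedy_change(88)  # -> [50, 20, 10, 5, 2, 1]
```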

Heuristic – describing an approach that discovers solutions through rules of thumb and experience

Hill-climbing – a routine for finding optimal solutions based on locally available information

Medial axis – a set of points that are equidistant from more than one point on the object's boundary

Mobile agent – an agent capable of moving around in its environment without maintaining the topological relationships with other agents or its environment3

3 Keil, D. and D. Goldin (2006). Indirect Interaction in Environments for Multiagent Systems. Environments for Multiagent Systems II. D. Weyns, H. V. D. Parunak and F. Michel. Utrecht, Springer: 68-87.

271

Model – a description of a system; a representation of a process or an object

Program – a set of executable algorithms or instructions, written in a programming language, for achieving a goal

Reactive agent – an agent that reacts to stimuli from its environment without having longer term goals

Roulette wheel selection – a semi-random selection mechanism that gives proportional advantage to solutions of higher fitness
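The mechanism can be sketched in a few lines of Python (an illustrative example, not thesis code; `roulette_select` is a hypothetical name):

```python
import random

def roulette_select(population, fitness):
    """Roulette wheel selection: each candidate's chance of being
    picked is proportional to its share of the total fitness."""
    total = sum(fitness)
    spin = random.uniform(0, total)  # where the "wheel" stops
    running = 0.0
    for candidate, f in zip(population, fitness):
        running += f
        if spin <= running:
            return candidate
    return population[-1]  # guard against floating-point round-off
```

Because the selection is only semi-random, weaker candidates are still occasionally chosen, which helps preserve diversity in evolutionary and swarm-based models.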

Self-organisation – a process that leads to an ordered structure in systems without external assistance; "the spontaneous reduction of entropy in a dynamic system"4

Sensory-motor coupling – the process of mapping the system’s (agent’s) sensory inputs to its motor outputs

Setting out configuration – the initial state of a model's components prior to the execution

Spanning tree – a tree-like network of nodes and links where any point in the network can be reached from any other point. The minimal spanning tree is the shortest of all spanning trees5
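As a hedged illustration (not from the thesis), a minimal spanning tree over a set of 2D points can be grown with Prim's algorithm, adding one shortest edge at a time; the function name `minimal_spanning_tree` is hypothetical:

```python
def minimal_spanning_tree(points):
    """Prim's algorithm on the complete graph of 2D points:
    repeatedly add the shortest edge that connects a node already
    in the tree to a node outside it."""
    def d2(a, b):  # squared Euclidean distance is enough for comparison
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    in_tree = {0}
    edges = []
    while len(in_tree) < len(points):
        i, j = min(
            ((i, j) for i in in_tree
                    for j in range(len(points)) if j not in in_tree),
            key=lambda e: d2(points[e[0]], points[e[1]]),
        )
        edges.append((i, j))
        in_tree.add(j)
    return edges  # n - 1 edges connecting all n points
```

For n points the result always has n - 1 edges and no cycles, which is what makes tree-like circulation networks economical but redundancy-free.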

Stigmergy – a type of indirect communication in groups of individuals where messages are exchanged through their shared environment

4 Heylighen, F. and C. Joslyn (2001). Cybernetics and Second-Order Cybernetics. Encyclopedia of Physical Science & Technology. R. A. Meyers. New York, Academic Press.
5 Haggett, P., A. D. Cliff and A. Frey (1977). Locational Models. London, Edward Arnold.


Swarm – a group of agents that collectively carry out a distributed problem solving task6

Synthetic modelling – a methodology of producing knowledge by constructing models. Models are typically explored by observing how they respond to changed parameters

System – a conceptual or physical entity consisting of interacting parts7

Voronoi diagram – a partition of space into cells, where the edges of cells form a minimum energy network8

6 Deneubourg, J. L., G. Theraulaz and R. Beckers (1992). Swarm Made Architectures. Toward a Practice of Autonomous Systems, Proceedings of the First European Conference on Artificial Life, MIT Press.
7 Tabor, P. (1971). Traffic in buildings 1: pedestrian circulation in offices. Cambridge, University of Cambridge School of Architecture.
8 Coates, P. S. (2010). Programming.Architecture. London, Routledge.
