Published by AGH University of Science and Technology Press
KU 0450
© Wydawnictwa AGH, Kraków 2012

ISBN 978-83-7464-487-7

Editor-in-Chief: Jan Sas
Editorial Committee: Tomasz Szmuc (Chairman), Marek Capiński, Jerzy Klich, Witold K. Krajewski, Tadeusz Sawik, Mariusz Ziółko
Reviewers: Prof. dr hab. Franciszek Seredyński, Prof. dr hab. Jan Werewka
Author's affiliation: AGH University of Science and Technology, Faculty of Electrical Engineering, Automatics, Computer Science and Electronics, Institute of Applied Computer Science
Cover Design: Paweł Sepielak
Desktop publishing: Paweł Skrzyński
Publisher Office: Wydawnictwa AGH, al. Mickiewicza 30, 30-059 Kraków
tel. +4812 617 32 28, tel./fax +4812 636 40 38
e-mail: [email protected]
www.wydawnictwa.agh.edu.pl

Contents

Streszczenie .......................................................... 8
Summary ............................................................... 9
1. Introduction ....................................................... 13
2. The Invisible Hand of the Market paradigm .......................... 18
   2.1. The Invisible Hand of the Market mechanism in the historical perspective ... 18
   2.2. Criticism of the Invisible Hand of the Market concept ......... 23
   2.3. The contemporary perspective of the Invisible Hand of the Market ... 25
3. Collective Intelligence ............................................ 27
   3.1. Origins of Collective Intelligence ............................ 27
   3.2. Symptoms of Collective Intelligence in nature ................. 30
        3.2.1. Social structure of bacterial colonies ................. 32
        3.2.2. Social structure of insects: ants, termites and bees ... 33
        3.2.3. Social structure of birds .............................. 39
        3.2.4. Social structures among mammals ........................ 41
        3.2.5. Collective Intelligence manifestations in nature – summary ... 42
   3.3. Collective Intelligence computational model ................... 43
        3.3.1. Molecular model of computation ......................... 43
        3.3.2. Formal description ..................................... 45
        3.3.3. Collective Intelligence Quotient – IQS ................. 49
   3.4. Designing a molecular model of computation .................... 50
        3.4.1. A proposal to generalise the ACO algorithm in the molecular computational model ... 53
   3.5. Random Prolog Processor, implementing the molecular computation model ... 53
4. Market simulation models ........................................... 56

   4.1. Analysis of previous and current approaches and solutions ..... 58
   4.2. An attempt at a synthesis of market simulation models ......... 63
        4.2.1. Merchants as market players ............................ 63
        4.2.2. Producers as market participants ....................... 64
        4.2.3. Government as a market participant ..................... 65
        4.2.4. Market modelling ....................................... 65
        4.2.5. Method of running simulations .......................... 66
5. Market model concept for the purposes of ASIHM processes simulation  68
   5.1. Agent based systems ........................................... 69
        5.1.1. Agent definitions ...................................... 70
        5.1.2. Agent classifications .................................. 72
        5.1.3. Multi-agent systems .................................... 72
   5.2. Modelling an agent – the M-Agent architecture ................. 74
   5.3. Basic elements of the CIMAMSS system .......................... 78
        5.3.1. Environment ............................................ 78
        5.3.2. Environment space ...................................... 80
        5.3.3. The agent in the CIMAMSS system – the market participant ... 83
        5.3.4. Resources, goods and commodities ....................... 86
        5.3.5. Decision-making ........................................ 88
   5.4. Market modelling .............................................. 89
        5.4.1. Budgetary limitation ................................... 92
        5.4.2. Utility theory ......................................... 93
        5.4.3. Conclusions for agent modelling ........................ 98
        5.4.4. Transaction modelling .................................. 99
        5.4.5. Agent migration ........................................ 104
        5.4.6. Production in the CIMAMSS system ....................... 105
   5.5. CIMAMSS model – final conclusions ............................. 106
        5.5.1. Agent types and structure .............................. 107
        5.5.2. The design of the computational layer .................. 108
6. Pilot implementations of ASIHM process simulations ................. 110
   6.1. CIMAMSS model characteristics ................................. 110
   6.2. Comments on implementing the CIMAMSS model .................... 111
        6.2.1. System architecture .................................... 112
        6.2.2. Transaction implementation ............................. 114
        6.2.3. The JESS rule system ................................... 117
        6.2.4. World editor and visualisation ......................... 124
   6.3. Defining the IQS quotient for the market ...................... 129
        6.3.1. GDP definition in the CIMAMSS model .................... 130
   6.4. Simulating the ASIHM process in a barter economy .............. 131
        6.4.1. Experiment preparation ................................. 131
        6.4.2. Dynamics of the parameters of the world in ASIHM simulations ... 135
        6.4.3. The IQS study and results from the ASIHM study ......... 140
7. Conclusions ........................................................ 143
References ............................................................ 147


PAWEŁ SKRZYŃSKI

Zastosowanie teorii kolektywnej inteligencji do analizy paradygmatu niewidzialnej ręki rynku

Streszczenie

The main goal of this work is to present a concept for analysing the invisible hand of the market paradigm, classic in economics, using the computational model of collective intelligence and the metric it provides, IQS, to measure the effects of the invisible hand of the market. As a result of the author's work, an agent-based market simulation system was created, which uses mathematical models provided by microeconomics to describe the behaviour of a single agent and the interactions between agents. The simulation system has a layered architecture, in which the upper layer is the agent layer, while the lower (computational) layer is the molecular model of computation commonly used in research on the nature of collective intelligence ([107]). The theoretical part of the work discusses the history of the notion of the invisible hand of the market, in both its historical and its contemporary form. This part also discusses manifestations of collective intelligence in nature and popular algorithms modelled on observations of phenomena occurring in the social structures of insects, birds and mammals (e.g. ACO [28]). The second, more practical part discusses the agent-based approach to building simulation models, with particular emphasis on the M-Agent model ([15]). On this basis a formal model of a market simulation system is proposed, which was then implemented and used to run simulations of invisible hand of the market processes and to measure their effects. In the author's opinion, the proposed approach opens a new road in the analysis of the invisible hand of the market paradigm; proposals for further research directions are given in the conclusion.


PAWEŁ SKRZYŃSKI

Using Collective Intelligence Theory to Analyse the Invisible Hand of the Market Paradigm

Summary

The main purpose of this monograph is to present a concept for analysing the invisible hand of the market paradigm, which has become a classic concept in economic theory. The approach presented is based on the collective intelligence computational model together with the measure it provides, IQS, which is used to quantify the effect of the invisible hand process. As a result of the author's work, an agent-based simulation model has been proposed and successfully implemented; it adapts mathematical models derived from microeconomics to model market participants' behaviour. The proposed system has a layered architecture in which the higher layer is agent-based, whereas the lower, computational layer uses the molecular model of computation introduced by collective intelligence ([107]). The theoretical part of this monograph reviews the invisible hand paradigm from both the traditional and the modern perspective. It also reviews collective intelligence models used in computer science, including popular and widely used algorithms based on the behaviour of the social structures of ants, bees, birds and mammals (e.g. ACO [28]). Practical aspects of the presented approach are elaborated in the second part of the monograph. This part starts with a review of agent-based models, with special emphasis on the M-Agent architecture ([15]), which is used to define a formal model of the simulation system. It is then shown how this model can be transformed into the molecular computational model used by collective intelligence. The model has been implemented and experiments have been performed, focused on measuring the effect of the invisible hand process. In the author's opinion, the approach presented in the monograph opens a new way to analyse such processes. Further research paths are briefly described at the end.


To my wife and children.

I am heartily thankful to Prof. Stanisław Szydło, Prof. Tadeusz Szuba and Prof. Tomasz Szmuc, whose encouragement, guidance and support from the initial to the final stage enabled me to develop an understanding of the subject.

1. Introduction

This monograph presents the results of research on using the Collective Intelligence theory to design a model supporting analyses of the economic processes popularly referred to as the Invisible Hand of the Market (ASIHM [1]). This notion, introduced by A. Smith ([100]), will be considered in detail in the following chapters of this monograph, while this introduction merely outlines the subject of the thesis. The concept of the Invisible Hand of the Market is a traditional notion first described by A. Smith in his book published in 1776 ([100]) as the process in which every individual works in a way guaranteeing the greatest profit to the social structure to which he/she belongs, even though in the majority of cases this is not his/her intention and he/she knows nothing about the public interest. A. Smith even claimed that everyone does more good for the public interest without realising it, paying attention only to their own interest, than if they consciously tried to act for that purpose. From the modern perspective, the Invisible Hand of the Market has a much more general meaning ([49, 90]). It is a process whose results are achieved in a decentralised fashion, without overt agreements between its participants. The first distinctive feature of this process is that it is unintended: the goals pursued by individual market participants are neither synchronised nor identical with the results of this process, and the result is achieved "by the way", as it were. In addition, this process occurs even though its participants may be unaware of it; this is why it is called "invisible". A. Smith argued that consumers wanting to buy goods at the lowest possible price and producers wanting to achieve the greatest possible income (which forces them to invest in the most profitable industries, i.e. those for which there is the greatest demand) bring about general economic growth.
One of the most beneficial aspects of the free market is that it forces people to think, indirectly, about what others need, as the

[1] As Adam Smith's notion of the Invisible Hand of the Market will appear numerous times in this monograph, the abbreviation ASIHM will be used below. The term will also be capitalised, as the author considers it a proper noun.


"business" desire to meet these needs leads them to improve their own circumstances. What is particularly noteworthy when one analyses the above deliberations taken from economic theory is the similarity of the Invisible Hand of the Market process to the Collective Intelligence process, which is formally described by computer science ([1, 109, 105, 106]). This observation forms the original motivation for this work. It seems that using the Collective Intelligence computational model to analyse economic processes running in a free market, including the Invisible Hand of the Market, can bring a new dimension to the analysis of these phenomena and chart a new road to Adam Smith's Invisible Hand of the Market (ASIHM) paradigm. The theory is founded on a computational model ([109]) which departs from the orderly, deterministic calculation process (like the one executed by today's typical digital processors) towards a molecular, non-deterministic computational model. A specific case of this computational model, and the only one that has been physically implemented with success, is Adleman's biochemical, so-called DNA computer ([1, 64, 37]). The loss of determinism in this computational model is very well offset by the natural parallelism of the calculations that manifests itself, which means that this kind of computer gains an advantage in multi-threaded calculations. It turns out that this model requires Boolean algebra (0/1 calculus) to be abandoned as the mathematical basis of calculations and replaced with calculations in first-order predicate calculus. In other words, calculations are transferred to the domain of mathematical logic.
Interestingly enough, the above model is still binary in the structural sense: just as a digital computer uses only two symbols, 0 and 1, to code and process information, the computer considered here uses only two types of objects, the information molecule and the membrane, from which the calculation structure is built and which determine the course of calculations. In very simple terms, information is carried in this model by so-called information molecules, which transport facts, rules and goals of the calculations. Information molecules move quasi-chaotically within an environment configured by membranes. At the moment of a meeting (understood in a general sense and referred to as a rendezvous below), if the appropriate logical expressions match, a reasoning step occurs and results in offspring molecules which carry the conclusions of this reasoning further. In this reasoning system, the logical process runs in a multi-threaded, chaotic, parallel fashion, with the threads intertwining and meshing together, while reasoning runs "forward", "backward" and "from the inside out" at the same time. Simulations have proven that this computational model is surprisingly fast and effective, but physically building it is a major issue. The main problem is therefore to find physical phenomena in the world around us which can be controlled and used to construct such a computer.
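The rendezvous mechanism described above can be illustrated with a toy simulation. The sketch below is a minimal illustration under assumed simplifications (propositional facts instead of first-order predicates, rules as plain premise/conclusion pairs, no membranes); it is not the author's Random Prolog Processor:

```python
import random

# Toy rendezvous loop (illustrative assumption, not the monograph's system):
# facts and rules are "information molecules"; a random meeting of a rule
# with a matching fact molecule yields an offspring molecule (the conclusion).
rules = [("rain", "wet_ground"), ("wet_ground", "slippery")]  # (premise, conclusion)
molecules = ["rain"]                                          # initial fact molecules

random.seed(0)
for _ in range(100):                  # quasi-chaotic mixing steps
    fact = random.choice(molecules)   # molecules meet at random (rendezvous)
    premise, conclusion = random.choice(rules)
    if fact == premise and conclusion not in molecules:
        molecules.append(conclusion)  # offspring molecule carries the conclusion

print(sorted(molecules))
```

Inference here emerges from random encounters rather than from a central inference engine: reasoning threads start and finish non-deterministically, and with enough mixing steps both conclusions are almost surely derived.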

The above properties of the model lead to the observation that the Invisible Hand of the Market (ASIHM) process taking place in the market and the Collective Intelligence process are very similar in their nature. The essence of these two phenomena is determined by their characteristics, which stem from the following rules:
1. Individuals in social structures cooperate with one another in a chaotic, non-continuous fashion;
2. Individuals behave (e.g. in terms of their situation) quasi-chaotically, due to the difficulties and opportunities of everyday life;
3. The majority of actions are uncoordinated and parallel; reasoning/production processes are initiated, suspended and restarted;
4. Cooperation processes interpenetrate and impact one another in a way not controlled by the individual;
5. It is difficult to separate the results of threads based on cooperation from those based on antagonism/enmity;
6. Within the social structure, contradictory reasoning sequences exist at the same time;
7. Resources and means are distributed in time, space and between individuals; in addition, they appear and disappear in a non-deterministic way;
8. It is difficult to unambiguously interpret elements of Collective Intelligence processes: e.g. a given individual may be interpreted differently from the perspective of various interpenetrating logical processes;
9. Very often a phenomenon can be observed, but cannot be interpreted in a "reliable" way;
10. Collective Intelligence is a "transient" process, which means that it can "manifest itself" and wane after a while.
The above characteristics justify the claim that the processes of Collective Intelligence and the Invisible Hand of the Market are of very similar natures. Hence it seems legitimate to use the Collective Intelligence computational model to formally describe the ASIHM process.
However, the fundamental obstacle is that the effective calculation of Collective Intelligence requires first mapping the specific social structure to the corresponding molecular calculation model, which is far from easy (or unambiguous). So what is expected from the Collective Intelligence theory when the Invisible Hand of the Market process is analysed? The purpose of this research is to build a simulation model of a simplified market which would then be "fine-tuned" to obtain the spontaneous appearance of reasoning series performing the self-regulation functions of this market; these series would then be analysed.

The above observations allow the propositions of this monograph to be formulated at two levels of abstraction: the conceptual level and the implementation level. At the conceptual level: verifying the hypothesis that the computational model of Collective Intelligence supports the description of the family of market processes called the Invisible Hand of the Market, by building a simulation model based on economic theory and Collective Intelligence in which such processes will occur. Building this model is a highly innovative concept and it should bring a new dimension to the analysis of the invisible hand of the market phenomenon. At the implementation level it is possible to:
− use the molecular computational model introduced by Collective Intelligence to create a market simulation for the purpose of studying economic processes referred to as ASIHM;
− study the efficiency of the market depending on the impact of external economic variables (for instance the interest rate) using the simulation model developed.
To make proving the propositions of this monograph possible, it is necessary to present a concept of a simulation system based on microeconomic theory (to model the behaviour of market players), intended to allow the market to be modelled so that the Invisible Hand of the Market process can start running in it, and thus to allow the author to qualitatively and quantitatively analyse the paradigm of Adam Smith's Invisible Hand of the Market (ASIHM) at a different abstraction level: that of the molecular model of calculations and the methods provided by the Collective Intelligence theory. In addition, the use of a computer simulation in which the behaviour of the basic units, the market players, is modelled using microeconomics will make it possible to analyse the market with the use of macroeconomic ratios, which brings this approach closer to the school of thought called neoclassical economics. The approach adopted in this monograph is illustrated in figure 1.1.
Fig. 1.1. Concept of the work – a general diagram of the reasoning; the new approach to ASIHM. Source: own work.

A side objective, necessary to complete the presented research process, is to write, based on the model developed, the market simulator software which will be used for experiments. The range of uses of this software may extend beyond the analysis of the Invisible Hand of the Market process and Collective Intelligence.

The monograph is structured as follows. Chapter 1 offers an introduction to the work and discusses the main goals of the research. In the second chapter, the basic notions are discussed and the current state of knowledge in the field of research on the paradigm of the Invisible Hand of the Market is reviewed; this chapter presents both the historical and the contemporary perspective. Chapter 3, offering an introduction to the Collective Intelligence theory, presents the theory from both the sociological and the mathematical point of view. This chapter contains examples of Collective Intelligence manifestations in nature and successful implementations of algorithms and computer systems born out of observations of the behaviour of social structures found in nature. The chapter concludes with a discussion of the computational model of Collective Intelligence. Chapter 4 reviews the attempts made so far to create market simulation models. Chapter 5 represents a key part of this monograph, as it contains an analysis of the concept of a market simulation model based on the theory of multi-agent system design, in which the behaviour of a single agent is modelled using microeconomic theory. This concept originates in the Collective Intelligence theory, but the way the elements of the social structure formed by market players interact comes from microeconomics. The chapter also describes both the method of transforming the economic model into a molecular computational model used in studying Collective Intelligence and the methods used for analysing this intelligence. Chapter 6 contains a review of pilot experiments carried out by the author using the above concepts, implemented in the form of a software suite developed in accordance with the requirements formulated in the previous chapter; for reasons of length, the design and implementation of the software are greatly condensed there. Chapter 7 presents the summary, containing conclusions, an assessment of the experimental results obtained, and an outline of further research directions.


2. The Invisible Hand of the Market paradigm

This chapter briefly describes the current state of knowledge achieved through research on Collective Intelligence and on the family of economic processes to which the paradigm of the Invisible Hand of the Market can be applied. It should be noted that research on economic processes running in a free market, conducted within economics, has been separate from research on Collective Intelligence, done by both sociologists and computer scientists. This monograph is an attempt to combine these two research fields. In the first part of the chapter, the subject of the Invisible Hand of the Market is discussed, starting with the historical perspective and leading to its current, broader understanding.

2.1. The Invisible Hand of the Market mechanism in the historical perspective

For years now, the notion of the Invisible Hand of the Market [1] has been stirring controversy. It is hard to believe that something never seen by anyone can arouse emotions. In everyday life, it is customary to put all economic processes for which no other reason can be found down to the operation of the invisible hand. So where has this notion, used in both science and daily life for years, come from? Its source can be traced back to the 17th century, when Thomas Hobbes, having fled civil war-torn England to hide from his political opponents in France, formulated his concept of the Leviathan ([42, 43]). Hobbes, assuming that human nature is egoistic, argued that it was necessary to establish absolute power. He believed this to be the only way of taming the egoistic human nature, or else humanity ran the risk of irreversibly drowning in chaos.

[1] As the notion of the "invisible hand of the market" has been widely used for years, it will be capitalised in this monograph as a proper noun.

To better illustrate this concept, he used

the metaphor of a sea hybrid [2], the aforementioned Leviathan [3]. Over a century later, the thesis of the egoism of human nature was elaborated further by Adam Smith, who explained its reasons as follows ([100]): "As every individual, therefore, endeavours as much as he can both to employ his capital in the support of domestic industry, and so to direct that industry that its produce may be of the greatest value; every individual necessarily labours to render the annual revenue of the society as great as he can. He generally, indeed, neither intends to promote the public interest, nor knows how much he is promoting it. By preferring the support of domestic to that of foreign industry, he intends only his own security; and by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention." This quote comes from A. Smith's famous work, An Inquiry into the Nature and Causes of the Wealth of Nations ([100]), and represents the essence of the thoughts of this Scottish thinker and economist. It was an attack of a kind on the mercantilist trade philosophy (then dominant in Europe), according to which unregulated aspiration to private profit inescapably had to lead to anarchy. A. Smith used the "Invisible Hand" theory to describe a mechanism characteristic of capitalist economies, whereby the activities of particular individuals, driven by their egoistic intention to satisfy their own needs, actually contribute to meeting society's needs as well. By means of the ASIHM, A.
Smith argued that the market mechanism is able to self-regulate the process of satisfying social needs, and thus rejected the need for state interventionism and protectionism as the condition for achieving the public interest: "...he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention. Nor is it always the worse for the society that it was no part of it. By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it." The central theme of the above work by A. Smith is the operation of the "Invisible Hand", the essence of which is that it is not from the benevolence of the baker,

[2] In the sense of something composed of different, often mismatched parts.
[3] The title of Hobbes' book, which is the source of the above metaphor, is taken from the Old Testament, in which Leviathan (which in contemporary Hebrew means a whale) is mentioned, inter alia, in the Book of Job as one of the sea monsters, and in the Book of Psalms, where it has a more negative connotation.


that we expect our bread, but from his regard to his own self-interest. A. Smith saw the perspective of a promising analysis, discovering that under certain social conditions, nowadays often called "functional competition", private interests are in fact harmonised with the social interest. Without collective regulation or a common plan, a market economy still operates in accordance with orderly rules of behaviour. Every individual, being one of many, can exert only insignificant impact on the overall situation in the market. As a result, he/she accepts prices as given and only has the freedom to choose the quantities bought and sold at these prices, driven by the motive of maximising his/her personal benefits. However, the sum total of these isolated actions determines the prices. Every person, considered separately, follows the prices in his/her choices; however, the prices themselves are governed by the sum total of the individual reactions. The "Invisible Hand" of the market thus produces a social effect independent of the will and intentions of individuals. In 17th and 18th century Europe, pay, prices, interest rates, employment, foreign trade, as well as the quantity of goods and services, were subject to strict government controls. The purpose of these controls was to realise the governing class's vision of social justice by managing what was produced and how it was produced and distributed. The idea was widespread that any action motivated by the pursuit of private profit must be antisocial by this very fact. Even today, Keynesian economics holds that a free market economy cannot satisfy the public interest because it is governed by the profit motive rather than by consciously planned social objectives. Yet for A. Smith, self-interest was an obviously constructive and coordinating force.
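The mechanism just described (each participant takes prices as given, yet the sum of their isolated reactions governs the price) can be shown numerically. The linear demand and supply curves and the adjustment coefficient below are illustrative assumptions, not taken from the monograph:

```python
# Illustrative sketch: price-taking agents whose isolated decisions jointly
# determine the market price via excess demand (a tatonnement process).
def demand(p):
    # total quantity consumers would buy at price p (assumed linear form)
    return max(0.0, 100.0 - 2.0 * p)

def supply(p):
    # total quantity producers would offer at price p (assumed linear form)
    return 3.0 * p

p = 1.0
for _ in range(1000):
    excess = demand(p) - supply(p)   # no agent sees this aggregate directly
    p += 0.01 * excess               # price reacts to excess demand only

print(round(p, 2))  # → 20.0
```

The price converges to the equilibrium p = 20, where demand equals supply (100 − 2p = 3p), although no individual agent ever computes the equilibrium itself.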
In striving to meet their own needs, people taking care of their own interest had to refer to the interests of others. Self-interest is a stimulus, a reason for cooperating and coordinating one's own activities with those of others ([51]). Critics of the market system perceived profit as an unjust charge on employees' wages, but A. Smith saw it as a stimulus, a gratification which persuades a producer to strive to meet the needs of others. He felt that competition between producers would keep profits and prices low, so that consumers would not be overcharged. In his reasoning, he also presented a simple proof of the benefits accruing from free trade: it is not profitable for anyone to produce something they can buy cheaper from someone else. He argued that what is prudent in the private life of every family (in the micro scale) can hardly be folly in the life of a great kingdom (in the macro scale). A. Smith knew history, politics and economics very well. When he pronounced his famous words about the Invisible Hand, he was drawing on his extensive knowledge, and not just on deductive reasoning ([100]): "It is not from the benevolence of the butcher, the brewer or the baker, that we expect our dinner, but from their regard to their own self-interest.

We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages." If this passage from A. Smith's book is taken out of context, as it very often is, it may suggest a very narrow, cynical view of human behaviour. However, if we read it in its whole context, A. Smith's thesis is simply logical. In a complex society, we simply cannot rely on the benevolence of others to satisfy all our desires and needs. People are given to charity – at least the majority of them – but their charity has its limits. As A. Smith said, an individual person ([100]): „...stands at all times in need of the cooperation and assistance of great multitudes (of people), while his whole life is scarce sufficient to gain the friendship of a few persons. (...) He will be more likely to prevail if he can interest their self-love in his favour, and show them that it is for their own advantage to do for him what he requires of them." The essence of the Invisible Hand is the conviction that individuals' striving to further their own interest within the free market leads to an allocation of resources efficient from the perspective of the entire society. This turned into a legend that the entire Wealth of Nations is based on such naive reasoning, on the so-called doctrine of the „spontaneous harmony of interests" ([7]). It sometimes seems that this only means arithmetically adding up instances of individual satisfaction: if everyone maximises their satisfaction whenever they are allowed to, a laissez-faire system ([117]) will maximise the satisfaction of the entire society's needs. However, in fact, in his proof of the „maximum satisfaction" doctrine, A. Smith went much further. 
In Book I, Chapter 7 ([100]) he demonstrated that free competition drives prices down towards production costs, thus leading to the optimum allocation of resources within an industry. In Book I, Chapter 10 ([100]) he showed that free competition in the market for the means of production tends to equalise the „net benefit" of using those means across all industries, and thus to achieve the optimum allocation of means between industries. He did not prove that the various means are combined in the production process in the best proportions, or that the product sold is distributed in the best way between individual consumers. Neither did he address the fact that economies of scale and external effects in production and consumption often hinder achieving the competitive optimum, although his analysis of public facilities does contain a kernel of such reasoning. However, he did take the first step towards a theory of the optimum allocation of given resources under perfect competition ([51]). The Invisible Hand is nothing more than the automatic equilibrium mechanism of a competitive market, A. Smith claimed. If competition is perfect and the market is not deficient, it will squeeze as many useful goods and services out of the available

resources as possible. However, if monopolies, environmental pollution or similar market deficiencies spread, the efficacy of the Invisible Hand may be destroyed ([5, 51, 112]). The paradox of the Invisible Hand is that even if every person separately behaves in a non-cooperative fashion, the economic result is socially efficient. What is more, competitive equilibrium means that no individual can improve their situation by changing their strategy if all others resolutely stick to their strategies ([51]). The law of supply and demand ([5, 112]) indicates that the quantities of a given product that are purchased and offered for sale change in opposite directions as a result of a price change: as the price grows, the quantity purchased falls, but the quantity offered for sale grows. If these two regularities are put together, we find that at a given time and in a given market, there is only one price of a given good at which the quantity purchased is equal to the quantity offered for sale. This is the so-called equilibrium price ([5, 112]). Market prices get set under the influence of mutual competition, as a result of the interplay of supply and demand in particular markets ([57]). In free competition markets, supply curves are determined by the marginal cost ([5, 112]). A. Smith perceived the market as a method of forging cooperation between strangers. „Give me what I want and I will give you what you want" is the offer which forms the cornerstone of every market deal ([7, 51, 112]). However, it is true that A. Smith's personal belief in the benefits stemming from the „Invisible Hand" was only to a limited extent due to a static analysis of allocation efficiency under perfect competition. 
He deemed the decentralised pricing system desirable due to its dynamic impact of broadening the market and increasing the benefits of the division of labour, or in simple words, because it was a powerful machine stimulating the accumulation of capital and the growth of income. Although he never said this in so many words, A. Smith was deeply aware of the imperfections of the market system. He also conceded that the market often adjusts to changes slowly and may not maintain the appropriate quantities of certain goods without government intervention. The Wealth of Nations did not try to prove that the free-market system is perfect. It was rather a classical account of the relative advantages of a free market system compared to alternative economic systems ([7]). Smith was definitely on the side of the ordinary people. He believed that replacing monopolistic enterprises with state regulation of the economy would probably spoil the economy rather than improve it. A. Smith's views opened the way for the industrial revolution and the golden age of capitalism. His book, published in 1776 ([100]), still remains a classic economic work today ([7]). Mystifying economics with the Invisible Hand of the Market had far-reaching consequences. Even the terms market and economy themselves are so imprecise and ambiguous that they cause a lot of misunderstandings; all the more so the term Invisible Hand of the Market. In the simplest sense this expression should be synonymous with the word people. In this situation, requesting that the Invisible Hand of the Market be allowed to work should be equivalent to allowing people to act. The Invisible Hand of the Market has also been understood as the hand of God. It can be understood even more generally, as a self-organising general harmony, an abstract property of reality. In that situation it can mean anything, or in other words nothing. The emptier a concept, the more useful it can turn out to be to its user, who can use it to justify solutions favourable to themselves. One could even be tempted to draft a law on using this metaphor. It would say ([7]): „demand that all industries with the exception of your own be subjected to the power of the Invisible Hand of the Market". In conclusion, it is worth noting that A. Smith is also the point of reference for one of the main contemporary currents in business ethics – utilitarianism. A. Smith's theory aimed at demonstrating that the market and market economy are most natural, consistent with human nature and the Creator's intention. The same is true for the concept of homo economicus4 and the Invisible Hand of the Market, as well as the laws governing socio-economic life. In A. Smith's opinion, under free market conditions, if a person aspiring to maximise his utility function follows the law and moral principles, he/she automatically contributes, as it were, to achieving social objectives. In a sense, the market system itself is a kind of educator in virtue and an effective way of bringing up a person of integrity.
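The law of supply and demand discussed in this section can be illustrated with a minimal numerical sketch. The linear demand and supply curves below use purely hypothetical coefficients, not values from the source; the point is only that putting the two regularities together yields a single equilibrium price.

```python
# Illustrative sketch (not from the source): the unique equilibrium price
# for linear demand Qd(p) = a - b*p and linear supply Qs(p) = c + d*p.
# All coefficients below are made-up example values.

def equilibrium(a, b, c, d):
    """Return (price, quantity) where Qd(p) == Qs(p).

    Setting a - b*p = c + d*p gives p* = (a - c) / (b + d).
    """
    p = (a - c) / (b + d)
    q = a - b * p
    return p, q

# Example: demand Qd = 100 - 2p, supply Qs = 10 + 4p
p_star, q_star = equilibrium(a=100, b=2, c=10, d=4)
print(p_star, q_star)  # price 15.0, quantity 70.0
```

At any price above p*, the quantity offered exceeds the quantity purchased, and below p* the reverse holds, which is exactly the single-crossing property described in the text.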

2.2. Criticism of the Invisible Hand of the Market concept
The concept behind the metaphor of the Invisible Hand is often used as an argument for economic liberalism and has been criticised numerous times by supporters of state interventionism. They argue that there are many circumstances which prevent the public interest from being achieved through the market mechanism, which they believe justifies regulatory action by the state. A doctrine that undermined the assumptions of classical economics was Keynesianism ([101]). According to this doctrine, state interventionism is necessary to correct the operation of market forces. Unlike the classical economists, Keynesians claim that prices do not adjust with complete flexibility and that there is significant price stickiness (particularly downwards). The private economy does not reach equilibrium through market forces alone under a given state policy, and market deficiencies lead to forced unemployment and excessive GDP fluctuations. According to Keynes' followers, the Invisible Hand of the Market cannot convert egoistic, private interests into the social optimum. Even if competitive markets can fully utilise the means of production, in their view they cannot determine the optimum levels of employment and production – this must be done by the government. Stiglitz is among the more important contemporary critics of the concept of the Invisible Hand of the Market. The main argument against the Invisible Hand is the existence of public goods ([103]). Their characteristic feature is that they can satisfy the needs of many people at the same time, but the cost of producing them is greater than the benefit any single individual can reap. For this reason, the Invisible Hand of the Market will not lead to such a good being created, even though its existence is beneficial at the level of the whole society; supplying it therefore requires the action of a public institution. Examples of such goods are national defence, an efficient court system, scientific research on a new type of drug, roads, schools etc. Another argument against the autonomy of the market mechanism in meeting social needs is the existence of external effects or of information asymmetry in the market ([112]). The tobacco industry is frequently given as an example here: although it produces goods desired by a part of the society, the actual social effects of its operation are very detrimental. 

4 In free translation, the economic human – a concept of the individual assuming that a human, as a rationally acting being, always strives to maximise his/her profits and makes choices based on the economic value of their results. In the colloquial sense, a „homo oeconomicus" is a person acting in accordance with this principle.
In this case, the Invisible Hand of the Market leads to a situation in which there is an overproduction of specific goods above the socially desirable level (called the production of anti-goods, which include cigarettes, illegal drugs, alcohol and gambling). An important argument of interventionists is the imperfect competition in the economy, particularly the existence of monopolies. The Invisible Hand of the Market ensures the socially desirable level of production of a given good only if the strict assumptions of perfect competition are met, which almost never happens in real life ([51]). Another issue is many people's fear of the Invisible Hand of the Market, due to their ignorance of the essence of this phenomenon. If something is allegedly so powerful as to be responsible for all matters of the economy, then it is likely to cause fear, which can easily be used as a pretext by all sorts of saviours. The defence they propose against this threat is to limit personal freedoms. Thus the Invisible Hand of the Market becomes useful not just to entrepreneurs or corporate officers, but also to government officials. By effectively manipulating people and their fear of the menacing „invisible hand of the market", they can boost their power. They will then claim that wage cuts or price hikes are not caused by human action reflected in the law of supply and demand ([5, 112]), but by some „Invisible Hand of the Market". So when the state fails to meet its obligations, this apparently has nothing to do with the errors made by politicians, but is the consequence of the operation of the evil „Invisible Hand of the Market", which in this case becomes its own caricature. Mark Blaug also notes in his Economic Theory ([7]) that the Invisible Hand of the Market is not a rule to be accepted uncritically. He claims that the theory of the second best leads to one of the objections against the rule of the „Invisible Hand": the inability to create a partial welfare economics that solves its problems „piecemeal". He reaches the conclusion that the „public" nature of certain goods significantly reduces the accuracy of the „Invisible Hand theorem" in a way that A. Smith had never dreamt of. In a milestone article published in 1956, Lipsey and Lancaster ([63]) proved that if the optimum conditions are not met in at least two markets, Paretian welfare theory ([112]) cannot justify a policy aimed at eliminating the imperfection in one of those two. Movement towards the Pareto optimum is not enough: either we reach the best solution – „the first best" – or there are no grounds to choose between the subsequent „second-best" solutions of the second, third etc. grade. Lipsey's and Lancaster's proof, greatly simplified, is as follows: let us assume that we have a certain general equilibrium system with constraints expressed by two equations and that we solve this system for the „second-best" optimum using a normal technique of maximisation under the given constraints. Let us assume that one of these two constraints concerns a certain political parameter, e.g. 
a customs duty, and the problem consists in determining whether reducing that duty would improve social welfare. Proving that it must is impossible – and this is precisely the point of what the authors call the „general theory of second best" ([63]).

2.3. The contemporary perspective of the Invisible Hand of the Market
Today, the ASIHM (Adam Smith's Invisible Hand of the Market) is understood much more broadly. In the contemporary perspective, the Invisible Hand of the Market is a meta-process whose results are achieved in a decentralised way, without overt agreements between its participants [Joyce01]. In addition, this process is unintended, and the goals pursued by individual market players are neither synchronised nor identical with the results of this process: the result is achieved „by the way", as it were. However, this process has a strong regulatory impact on the market. Its participating agents may be unaware of

it: this is why the process is called „invisible". It becomes visible if the market is analysed from a higher level. It is assumed that this process occurs in a free market. One of the most beneficial aspects of the free market is that it forces people to think, if only indirectly, about what others need, since the „business" desire to meet those needs is what lets market participants improve their own circumstances. The observation that the above economic phenomenon fits the Collective Intelligence computational model surprisingly well is the origin of this thesis. Consequently, the following chapter presents the current state of knowledge in the field of Collective Intelligence.


3. Collective Intelligence

This chapter briefly presents the current state of knowledge about Collective Intelligence and the computational model developed for its purposes. After discussing the origin of the Collective Intelligence concept, its manifestations in nature will be presented somewhat more broadly. This approach stems from the intention to show how a popular algorithm based on observing the collective behaviour of ants can be transformed into a molecular computational model, which is later used to develop a market simulation model. This transformation, consisting in changing the terminology and the interpretation, also makes it possible to transfer the ant algorithm to the market economy world, in which market players run business operations. It is worth noting that the computations carry over to the corresponding molecular model in a similar way. What is important is that the majority of the proposed name and interpretation changes are generalisations.

3.1. Origins of Collective Intelligence
Collective Intelligence1 is, in other words, a form of intelligence that appears during the cooperation and rivalry between many individuals. Before this concept became established in science, various names were used to describe the phenomenon. The concept of Collective Intelligence grew out of the following family of terms:
− collective behaviours,
− swarm intelligence2,
− bacterial communities,
− synergy,
− social cooperation,

1 http://en.wikipedia.org/wiki/Collective_Intelligence
2 Swarm Intelligence should also be treated as a separate branch of research, to which we will return later in this chapter.


− collective mind,
− mind of the swarm,
− global brain,
− social mind,
− social organism,
− social intention,
− other terms containing the word „social".
All these terms describe a behaviour or a process which exhibits symptoms of intelligence and which manifests itself in apparently uncoordinated actions of a given group of individuals – from very primitive forms of life like bacteria, through an ant colony or a beehive, all the way to people. The first references to Collective Intelligence appeared in computer science in 1968, along with the development of computer networks, when Robert Taylor and Joseph Licklider, the creators of ARPANET, the precursor to the Internet, wrote ([62]): „What will on-line interactive communities be like? In most fields they will consist of geographically separated members, sometimes grouped in small clusters and sometimes working individually. They will be communities not of common location, but of common interest." Collective Intelligence manifests itself in various forms of unanimity, or when people, insects ([26, 27, 83]), or even bacteria ([89]) take decisions. In philosophy, the idea of Collective Intelligence (CI) was developed by Pierre Levy ([60]) in 1997, based on the concept of virtual reality, including network communications. The Internet is not just a source of a huge volume of information, but also an incredible opportunity for quick and effective communication regardless of the interlocutors' locations. The formal definition of Collective Intelligence will be presented later in this chapter; in this part, its basic characteristics will be discussed and an informal definition given. It is worth noting that for Collective Intelligence to manifest itself, a group of at least two interacting entities is necessary, so we need a definition of something that represents such an entity. In Collective Intelligence, the notion of an agent is broadly used. 
This has been defined in the simplest words by Russell and Norvig ([87]): „An agent is just something that perceives and acts."

The notion of an agency3 will be used later in this monograph, because the concept of a being suggests a live creature – and this is not necessary for Collective Intelligence ([107]). The current rapid progress in research on intelligent software agents proves that Collective Intelligence can successfully manifest itself in e.g. computer networks whose components have no characteristics of living creatures. A key feature of Collective Intelligence is its ability to solve problems harder than those the individuals can solve. By distinguishing these features, we can define Collective Intelligence informally: Collective Intelligence arises within a group of agents when the set of problems that can be solved by this group is greater than the set of problems solvable by the same agents without interactions between them. In other words, there is some synergy effect connected with the group. A structured list of conditions which have to be met for Collective Intelligence to arise is given in a publication by T. Szuba ([107]):
1. „G": for Collective Intelligence to arise, at least two entities or two agents are necessary. An agent is understood as „something that can perceive and act". An entity is understood as „something that perceives, acts and is alive". „G" comes from the word group.
2. „A or B": at least two entities or agents between whom an interaction will occur are necessary. „A or B" comes from the term Agents or Beings.
3. „I": the foundation on which Collective Intelligence arises is an interaction between agents, which can manifest itself in various ways. An example here can be a simple observation conducted by two people or the DNA exchange between bacteria.
4. „PS": the ability to solve problems. „PS" comes from the Problem Solving ability. It is worth noting that even a short-term cooperation of two agents to solve some problem may bring about Collective Intelligence. 
If the set of problems (labelled A) which a group of agents (at least two) can solve acting separately is different from the set of problems (labelled B) which can be solved if the agents cooperate, then we can say that Collective Intelligence has manifested itself. Starting from this observation, it is notable that „different" does not necessarily mean „greater" – consequently, we can talk of positive Collective Intelligence if set A is a proper subset of set B, or of negative Collective Intelligence if set B is a proper subset of set A4. It is worth noting that before either positive or negative Collective Intelligence was formally described, it appeared in folk proverbs: e.g. the saying „Two heads are 

3 A detailed discussion of the agency notion is presented in Chapter 4.
4 Let us consider the example of a group of prisoners running away from pursuit – there is certainly Collective Intelligence at play here, but as a group they may be running away slower (less effectually) than a single prisoner acting alone could.


better than one"5 means, in the Collective Intelligence field, that the problem-solving ability of two people is greater than that of a single person. What is more, when two people cooperate, it may turn out that they can solve a problem that a single person could not. In this case we are dealing with positive Collective Intelligence. Negative Collective Intelligence is illustrated by the saying „Too many cooks spoil the broth": as a result of too many agents cooperating, the ability of the group to solve the problem decreases. Even though each cook could solve the problem of cooking the broth alone, the cooperation of too many agents leads to a situation where either the problem becomes unsolvable or the time needed to solve it is greater than it was for a single agent – the cook. This saying illustrates the folk understanding of negative Collective Intelligence. Another example is given by the biblical saying „But who associates with the stupid will be left a fool".
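The set-based distinction between positive and negative Collective Intelligence introduced above (comparing the set A of problems solvable separately with the set B solvable in cooperation) can be written down in a few lines of code. This is a purely illustrative sketch, not from the source; the problem names are invented.

```python
# Illustrative sketch: classifying Collective Intelligence by comparing
# the set of problems solvable by agents acting separately (A) with the
# set solvable when they cooperate (B). Problem names are made up.

def classify_ci(separately, cooperating):
    A, B = set(separately), set(cooperating)
    if A < B:          # cooperation strictly enlarges the solvable set
        return "positive"
    if B < A:          # cooperation strictly shrinks it
        return "negative"
    if A == B:
        return "none"  # no synergy effect either way
    return "mixed"     # sets differ but neither contains the other

print(classify_ci({"find food"}, {"find food", "build nest"}))  # positive
print(classify_ci({"escape alone"}, set()))                     # negative
```

The second call is the prisoners example from the footnote: each agent could solve the problem alone, but the cooperating group cannot.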

3.2. Symptoms of Collective Intelligence in nature
Before the formal definition of Collective Intelligence and the computational model developed for its purpose are discussed, it is worth highlighting the symptoms of Collective Intelligence in nature. These symptoms can be found in forms as primitive as colonies of bacteria, through more complex ones like ants or bees, up to the most developed ones such as mammals, including people. Interestingly, research conducted on the Collective Intelligence of animal structures has sometimes become the source of new branches of science, as happened with the analysis of ant colonies, which produced the well-known ACO (Ant Colony Optimization) algorithm ([26])6. This algorithm has been applied successfully to problems related to searching a graph, such as the travelling salesman problem ([27]). Notably, it has advantages over approaches based on the genetic algorithm7 ([69]) or the simulated annealing algorithm8 ([56]) when the graph undergoes dynamic changes. The characteristic features of this algorithm ([28]) are its ability to adapt to changes taking place in the real world and its natural parallelism. The Collective Intelligence of ants will be discussed more broadly later in this sub-chapter.

5 Translation of a popular Polish saying.
6 A year after the first publication of the ACO (Ant Colony Optimisation) algorithm, the first international conference dealing with this field was held: From Ant Colonies to Artificial Ants: First International Workshop on Ant Colony Optimization (ANTS'98), Brussels, Belgium, 1998.
7 Currently, this is a very broad field of science. A portal dealing with genetic computing: http://www.geneticcomputing.com
8 The idea of the algorithm may also be found on Wikipedia: http://en.wikipedia.org/wiki/Simulated_annealing.


We should start the discussion of the symptoms of Collective Intelligence in nature, among animals, by clarifying the concept of instinct. It is generally thought that animals do not so much think as follow their instinct9. So considerations of Collective Intelligence among animals should refer to the idea of instinct. The notion is discussed more broadly in T. Szuba's book ([107]); here it is outlined only briefly: instinct is a mechanism which allows animals to develop even complex patterns of behaviour in reaction to situations which they can experience in the natural environment, thus allowing them to better adapt to it and survive in it. For the purposes of Collective Intelligence (so far introduced informally), we should provide a more formal definition of instinct ([107])10:
Definition 3.1. Instinct. This is a group of computational processes running inside an individual and forcing it to evolve to solve a specific problem.
Lemma 3.1. The link between instinct and Collective Intelligence. Collective Intelligence of primitive beings is the result of partly coordinated actions, which in turn are the result of computational processes forming part of the instinct, and lead to solving problems in a way that is most economical or impossible for a single being ([107, 108]).
The above lemma means that some elements of instinct must exist to provide an interface for the problem-solving computational processes occurring within the individual, so as to ensure cooperation or interaction. For beings that have awareness, this interface may be identified with a simple profit/loss calculation. These definitions eliminate a basic difficulty, namely distinguishing between group and individual behaviour. As a result, Collective Intelligence can be seen as the extension of instinct for beings having no awareness, whereas at the level of humans it can be the result of action more aware than instinctive. 

9 However, this approach is now being abandoned. The approach presented in this monograph is based on the one presented in T. Szuba's book ([107]).
10 The definition repeated after T. Szuba is the more formal one. Wikipedia (http://en.wikipedia.org/wiki/Instinct) contains the following definition: Instinct is the inherent inclination of a living organism toward a particular behavior. The fixed action patterns are unlearned and inherited. The stimuli can be variable due to imprinting in a sensitive period or also genetically fixed. Examples of instinctual fixed action patterns can be observed in the behavior of animals, which perform various activities (sometimes complex) that are not based upon prior experience, such as reproduction and feeding among insects. Sea turtles, hatched on a beach, automatically move toward the ocean, and honeybees communicate by dance the direction of a food source, all without formal instruction. Other examples include animal fighting, animal courtship behaviour, internal escape functions, and building of nests. Another term for the same concept is innate behaviour.


3.2.1. Social structure of bacterial colonies
The encyclopaedia definition says that bacteria11 are among the most primitive forms of life. They are classified as Prokaryota and further split into Eubacteria and Archaebacteria, shortened to Bacteria and Archaea, which developed from a common ancestor. Bacteria are divided depending on whether they need oxygen to live or not, on their shapes, type of motion etc. They reproduce asexually and inherit identical genetic material from their parents. However, every bacterium has its own phenotype12, which is due to changes in the DNA. Apart from this, changes to the genetic material can be caused artificially in bacteria. They can occur naturally as a result of mutations or genetic recombination. Mutations are the result of errors in DNA replication or are caused by a mutagen. The chance of a mutation occurring, and the time needed for it to materialise, differ within every species, or even within the same bacterium. A single bacterium cannot be said to exhibit intelligence. As an individual, it follows a set pattern in its actions. However, as a population or a colony, bacteria acquire a kind of intelligence. By exchanging information they „learn" from one another how to defend themselves from a hazard. Bacteria live in „colonies" and their populations are counted in millions. It is worth noting a certain relationship: the less intelligence and independence a given species shows, the more individuals are necessary for a Collective Intelligence to arise that is able to solve a problem a single individual cannot. Based on this observation, we can move on to analysing bacteria from the perspective of Collective Intelligence, by checking whether they exhibit the four characteristics introduced earlier in this chapter. The „A or B" characteristic is certainly there. All that is left to do is to show that the „G", „PS" and „I" characteristics also hold. 
Research on bacteria has demonstrated that a drug which is effective against a single bacterial organism may turn out to be ineffectual when applied to a colony of bacteria – just as though bacteria were able to protect one another by behaving like specialised cells in an animal ([68]). This is because in bacteria, it is not the individual that matters, but the entire population. If a certain population of bacteria is subjected to the effect of an antibiotic, we can assume that one colony will survive as a result of a suitable mutation. It will then rebuild the entire population, so when analysed from

11 http://en.wikipedia.org/wiki/Bacteria
12 This is the set of features of an organism including its physiological characteristics, fertility, behaviour, ecology, life cycle, biological changes and the impact of the environment on the organism. The phenotype is strictly associated with the genotype, as it is the interplay between the genotype and the environment that produces the phenotype. This is why the same genotype can produce different phenotypes in various environments (referred to as phenotype plasticity) or, conversely, the same phenotype may arise from different genotypes.


the perspective of the species, the bacteria will have survived. In addition, the colony rebuilding the population is already resistant to the antibiotic, so the entire new population is resistant to it. The „PS" characteristic is therefore also true. Bacteria are able to form extensive network structures (consortia, bio-aggregates etc.) in which the synergy between the organisms forming the network determines the success or failure of the entire structure ([14]). So, based on microbiologists' research, it can be said that a bacterial colony has the „G" characteristic. Another issue to be considered is therefore how bacteria interact: this is necessary to confirm the „I" characteristic. It turns out that a bacterial colony can use three types of communication which ensure that bacteria cooperate ([3]):
− chemical reactions initiated by a group of bacteria to inform their neighbours of some discovery;
− transferring fragments of genetic material from one part of the population to another: this turns out to be a productive method, and there are several types of such transfer, generally referred to as DNA recombination ([34]);
− observing the products of other bacteria's metabolism, which allows them to reason about their success or failure.
The above methods offer an effective communication mechanism not just between the elements of a single bacterial colony, but also between bacteria of various types. Research on bacterial communication thus suggests that a bacterial colony exhibits the last of the proposed characteristics, namely „I". Microbiologists' discoveries have led researchers to use the term „Collective Intelligence" for bacteria ([4, 35, 40]). Some researchers have gone a step further and called bacteria „a computer for environmental problem solving", „a bacterial networked brain", „bacterial wisdom" ([4]).
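The population-level survival argument above (a rare resistant mutant survives the antibiotic and rebuilds the colony) can be caricatured in a few lines of code. All numbers below are invented for illustration; this is in no way a model of real bacterial genetics.

```python
# Illustrative sketch (assumed toy numbers): why an antibiotic that kills
# individual bacteria can fail against a population. Each cell carries a
# resistance mutation with small probability; survivors rebuild the
# colony, and the rebuilt population inherits the resistance.
import random

def treat_population(size, p_mutation, rng):
    """Apply an antibiotic: only cells carrying a (rare) resistance
    mutation survive; return the number of survivors."""
    return sum(1 for _ in range(size) if rng.random() < p_mutation)

def simulate(size=1_000_000, p_mutation=1e-4, seed=42):
    rng = random.Random(seed)
    survivors = treat_population(size, p_mutation, rng)
    if survivors == 0:
        return "population wiped out"
    # The survivors multiply back to the original size; every descendant
    # inherits the resistance, so the rebuilt population is immune.
    return f"{survivors} resistant cells rebuild an immune population"

print(simulate())
```

With a million cells and a mutation probability of 1e-4, some resistant survivors are virtually guaranteed, which is exactly the „it is not the individual that matters, but the entire population" argument in set terms: the problem „survive the antibiotic" is in the colony's set B but not in the single cell's set A.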

3.2.2. Social structure of insects: ants, termites and bees
Insects are the most numerous group of animals: they are estimated to count as many as a million species. They inhabit all land environments, have readapted themselves to the water environment and were the first animals to develop the ability to fly. Insect body sizes range from 0.25 mm to over 350 mm. Insects can be divided into the wingless (Apterygota) and the winged (Pterygota). Insects are much more complex than bacteria: their bodies are made up of three parts, the head, the thorax and the abdomen. The head is composed of a monolithic case shielding the brain, which receives impulses from the antennae protruding from the head and from other sensors in the form of sensory hairs distributed over the entire body. The antennae are responsible for the senses of touch and smell. All insects have three pairs of legs protruding from the thorax. The central part of an insect’s nervous system is the cord running from

the head along the entire body. The brain receives signals from the antennae and the eyes, which can be simple or more complex and consist of between 6 and 28 light-sensitive elements.
Ant social structure
What is most significant from the point of view of Collective Intelligence is that some insects have developed very complex social forms and have inspired researchers working on Artificial Intelligence, who mapped them onto models ([8]) successfully used for optimization problems, such as ACO ([26, 27]) or the bee algorithms ([81])13. As this is an overview chapter and there is a great deal of literature (e.g. [8, 80]) on the Collective Intelligence of insects, the analysis below will be restricted to manifestations of Collective Intelligence among ants and bees, as these can be deemed fields which, although they started from Collective Intelligence, now form a separate branch of computer science14 referred to as Artificial Intelligence (AI)15. One of the best examples of the manifestation of Collective Intelligence among insects is the social structure of ants. As observing anthills is easy and there are many publications on the subject, we can say that in the case of ants the Collective Intelligence level is very high (at the current stage of this monograph this is a rather intuitive claim, as a formal method of measuring Collective Intelligence will be given lower down). Algorithms modelled on the behaviour of ants have successfully been used to solve difficult problems such as the travelling salesman problem ([26, 122]) or task scheduling in multi-processor systems ([111]). A single ant cannot be called an intelligent being: it only makes quasi-chaotic moves characteristic of an automaton and cannot survive as a single creature outside a larger group. Thus an ant can be treated as an automaton merely responding to stimuli coming from the environment.
However, if the entire ant social structure is considered, it quickly turns out that it exhibits a high level of Collective Intelligence. Ants live in highly organised colonies whose central place is the anthill – the nest. Inside the nest there is one (or, rarely, several) reproducing ant, called the queen, which does not move and whose only role is to lay eggs. Food is supplied to it by specialised worker ants. A different type of worker is responsible for carrying eggs to special areas where they incubate. The ants that hatch from the eggs, when they reach maturity, replace workers or the queen (although it often happens that the ants to replace the queen are only produced after it has died). The nest is protected by soldier ants, while food is supplied by workers who gather it outside the nest. So high specialisation can be seen among ants, and this ensures that the entire colony will survive (in this sense, an ant colony resembles a single organism whose individual parts – the ants – play specific functions, and no part of the organism can survive independently). To ensure survival, communication is necessary. It has been discussed in detail by Hölldobler ([44]), whose publication distinguishes twelve different types of reactions to outside stimuli:
1. alarm;
2. attraction;
3. drafting for a new source of food or another part of the nest;
4. assistance in hatching new individuals;
5. exchange of bodily fluids;
6. exchange of solid food;
7. group effect: a ban on taking a specific action or an order to take it;
8. recognising an anthill member or a specific sub-group in the anthill;
9. determination of belonging to a sub-group;
10. control of rival reproducing individuals;
11. territorial signalling;
12. sexual communication.
Ants communicate by secreting chemicals called pheromones whose detection triggers specific reactions16. Wilson ([115]) claims that the social behaviour of ants is controlled by chemical receptors. Ants communicate using three basic pheromone types:
1. A pheromone produced by soldier ants when the nest is attacked. The soldiers’ reaction to this pheromone is to concentrate around the area where it appeared, biting the aggressor or excreting formic acid. The disappearance of the pheromone indicates that the nest has been abandoned and the queen has moved to a safe location.
2. A pheromone excreted by the queen to produce a reproducing ant (reproduction).
3. A pheromone used to mark the trail to/from the nest. The fresher the trail, the more it stimulates ants to follow it.

13 This algorithm even has its own official website: http://www.bees-algorithm.com/
14 It should be remembered that the notion of Collective Intelligence is broader than that of Artificial Intelligence. However, in the area of computer science, it can be assumed that research on Collective Intelligence is a branch of Artificial Intelligence.
15 As the term „Artificial Intelligence” will be used frequently lower down, the abbreviation AI will be used.
16 There are many kinds of ants and today it is known that some ants communicate by touch or even by sound, but from the computer science point of view this is immaterial.


procedure ACO
  while (not end)
    generateSolutions();
    actualizePheromones();
    otherActions();
  end while
end procedure
Listing 3.1. Ant Colony Optimization algorithm – pseudo-code. Source: Dorigo ([25]).
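The scheme of Listing 3.1 can be fleshed out as a minimal runnable sketch for a small symmetric travelling-salesman instance. Everything concrete below (function names, parameter values, the toy four-city instance) is an illustrative assumption, not Dorigo's original code; the transition rule, evaporation and deposit follow the standard ACO formulas (3.1)–(3.3) discussed in the text.

```python
import math
import random

def tour_length(tour, dist):
    # total cycle length, including the return edge to the start city
    return sum(dist[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

def build_tour(dist, tau, alpha, beta, rng):
    n = len(dist)
    tour = [rng.randrange(n)]
    while len(tour) < n:
        i = tour[-1]
        candidates = [j for j in range(n) if j not in tour]
        # transition rule (3.1): weight ~ tau^alpha * eta^beta, with eta = 1/d
        weights = [(tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)
                   for j in candidates]
        tour.append(rng.choices(candidates, weights=weights)[0])
    return tour

def aco(dist, ants=10, iters=100, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]
    best = None
    for _ in range(iters):
        tours = [build_tour(dist, tau, alpha, beta, rng) for _ in range(ants)]
        # evaporation (3.2): tau <- (1 - rho) * tau
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        # deposit (3.3): each ant adds 1/L_k on the edges of its tour
        for tour in tours:
            L = tour_length(tour, dist)
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / L
                tau[j][i] += 1.0 / L
            if best is None or L < best[0]:
                best = (L, tour)
    return best

# four cities on a unit square: the optimal cycle is the perimeter, length 4
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
d = [[math.dist(a, b) if a != b else 1e-9 for b in pts] for a in pts]
length, tour = aco(d)
print(round(length, 6))  # -> 4.0
```

Even on this toy instance the two ingredients of the text are visible: short edges are favoured a priori (η), and edges of short completed tours are reinforced a posteriori (τ).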

The last issue to be determined, as with the previous examples, is whether the „PS” characteristic is present. The first such algorithm was presented by M. Dorigo in 1992 and is called the Ant System (AS) ([25, 26]). The algorithm has successfully been applied to solve NP-complete problems: the travelling salesman problem ([27]) or the QAP17 ([36]). A more general approach which extended that research was Ant Colony Optimization (ACO) ([28]), successfully used in discrete optimisation problems. The algorithm used for illustration purposes is the ACO algorithm employed for finding an approximate solution to the travelling salesman problem: a travelling salesman must visit each city exactly once and return to the starting point. In the ACO algorithm, the ant population travels through the graph of the cities according to the following rules:
1. Every city must be visited exactly once;
2. A more distant city stands a lower chance of being selected;
3. The more intense the pheromone trail between two cities, the greater the probability of this trail being selected;
4. After its journey, an ant leaves a pheromone trace on the edges of the graph it has traversed, the trace being the more intense the shorter the cycle was.
An example pseudo-code is shown in Listing 3.1. The probability of an ant moving from state i to state j is given by ([25]):

p_{i,j} = τ_{i,j}^α · η_{i,j}^β / Σ_j τ_{i,j}^α · η_{i,j}^β,     (3.1)

where:
− τ_{i,j} is the amount of pheromone deposited for the transition from state i to j,
− α ≥ 0 is a parameter controlling the influence of τ_{i,j},
− η_{i,j} is the desirability of the transition from i to j (a priori knowledge, typically 1/d_{i,j}, where d_{i,j} is the distance between i and j),
− β ≥ 1 is a parameter controlling the influence of η_{i,j}.
Pheromones are updated as follows:

τ_{i,j} = (1 − ρ)·τ_{i,j} + Δτ_{i,j},     (3.2)

17 Quadratic Assignment Problem.

where:
− τ_{i,j} is the amount of pheromone deposited for the transition from state i to j,
− ρ is the pheromone evaporation coefficient,
− Δτ_{i,j} is the amount of pheromone deposited, typically given for a TSP problem (with moves corresponding to arcs of the graph) by:

Δτ_{i,j}^k = 1/L_k if ant k uses edge (i, j) in its tour, and 0 otherwise,     (3.3)

where L_k is the cost of the k-th ant’s journey, usually its length.
There are many extensions of this algorithm. In the elitist strategy, the best solution found so far deposits pheromone after each iteration together with all the ants. In the min-max strategy, maximum τ_max and minimum τ_min pheromone levels are introduced; only the globally best solution (or the best solution in a given iteration) deposits pheromone, and all edges are initialised at τ_max and reinitialised when their pheromone level is close to vanishing. In the ranking strategy, all solutions are assessed by the adaptation function and the amount of pheromone deposited depends on its value: the better the adaptation, the more pheromone is left. ACO algorithms have also been successfully used in graph colouring, in sequencing problems ([111]), in vehicle traffic control ([71]) and in telecommunication networks ([91]).
The social structure of bees
Another example of insects that exhibit complex social behaviour are bees. A single bee cannot survive outside a society ([92]). Bees build nests on trees18 and the centre of each nest is a queen surrounded by infertile young female workers and male drones. Drones have only a reproductive function, while workers, whose numbers can even exceed 80 thousand, are responsible for building the nest, caring for eggs and larvae, cleaning, and collecting food outside the nest. The only fertile female

18 This obviously concerns their natural occurrence, and not their artificial keeping by humans.


individual in the swarm is the queen, laying as many as 1,500 eggs a day. Its anatomy is different from that of drones and workers: it has a longer abdomen, teeth in its jaws (unlike its progeny) and can use its sting multiple times, unlike workers, whose stings fall off after use, leading to the workers’ death. Unlike workers, it has no basket (on the rear pair of legs) for holding nectar or pollen, nor can it produce wax. The queen lives for 1 to 3 years, while the workers live only for several dozen days. Drones die immediately after copulating with the queen and until that time they do nothing. The only function of the queen19, apart from laying eggs, is to produce pheromones encouraging workers to take care of it and chemically identifying the swarm. The substances secreted by the queen give every swarm its own characteristic scent, which allows the workers to recognise the members of their own community. If a bee from another family comes to a nest, it may get killed – it will survive if it has brought nectar or pollen. A young bee that has not yet acquired the scent of its own swarm may also survive. The organisation of the swarm’s work consists in the transmission of pheromone signals between workers. A phenomenon of bee social life is the special signalling dance that a worker bee-gatherer performs in the nest after a new source of food is discovered. If the source is close, the bee dances in a circle; if it is further away (10–40 m), it dances along a crescent line; an even more distant source of food is signalled with figure eights. The curves and angles that the gatherer bee traces when dancing provide additional information on the location of the source relative to the sun, and the type of movements tells other bees about the difficulty of the venture. These considerations are just the starting point for analysing a swarm of bees from the point of view of Collective Intelligence ([8, 92]).
Obviously, the „G” characteristic is present in this swarm, just as the „A or B” one. Communication is ensured primarily by pheromones and the dance, supplemented by the acoustic waves made by the wings – so the „I” characteristic is fulfilled. The „PS” characteristic needs a broader discussion. The methodology discussed in the previous sub-chapter, presenting the computational potential of ants, can only partially be applied to bees. The bee algorithm, developed by observing bee behaviour, was first published 13 years after the algorithm based on observations of ant behaviour: in 2005 ([81]). The operation of this algorithm follows the method by which bees collect food. In its basic version, the algorithm consists in local searching combined with random searching and is useful for solving combinatorial ([83, 61]) and optimization problems ([82]).

19 The function of the queen is complex and not completely clear, so the words „the only function” apply more to the computational models developed based on bee behaviour.


In order to find food (nectar, pollen), a swarm of bees can search areas located within a 14 km radius of the nest. In general, trails leading to flowers with a lot of nectar and/or pollen should be frequented by a large number of bees, while those leading to flowers with little nectar should be taken by few bees. The search process starts with dispatching scouts who search the area in a random fashion. When flowers produce pollen, the swarm keeps a certain constant percentage of the population as scouts. When a scout returns to the nest with information about a trail to food exceeding a certain quality level (measured, inter alia, by the sugar content of the nectar), it deposits the nectar or pollen and starts dancing the waggle dance, which is the basis of swarm communication. This dance contains three types of information: the direction in which to fly to reach the food, the distance between the food source and the nest, and the food quality. This information allows the swarm to send worker-gatherers precisely to the target. Every worker’s entire knowledge of the environment comes from this very dance. The dance allows various trails to be compared with regard to the quality, the food available at the destination and the energy cost of reaching that destination. Having danced, the scout flies back to the source of food, followed by other bees. This lets the bees collect food quickly and effectively. All this time, the scout monitors the quality of the food source, as it determines the next dance: if the trail remains good, this will be announced and more bees will follow it again.
The algorithm based on these observations is founded on several parameters: the number of scouts (n), the number of locations selected from among the n visited locations (m), the number of best locations selected from among the m locations (e), the number of bees recruited for the e best locations (NEP), the number of bees recruited for the remaining m − e locations (NSP), the initial size of a search patch (NGH), which contains a location and its neighbourhood, and the stop criterion. The pseudo-code of the simplest form of this algorithm is given in Listing 3.2 ([82]). Observations of the behaviour of bees and of the method of information exchange between them bring to light one manifestation of Collective Intelligence and certainly support the computational model of Collective Intelligence discussed below in this chapter. Work has also been done on an artificial robot bee, headed by Michelsen ([70]). This work also provides hints as to the method of implementing a human–animal interface that could be used to study insect intelligence. However, the study of the intelligence of single insects seems a less promising direction than researching their Collective Intelligence.
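The n/m/e/NEP/NSP/NGH scheme just described can be turned into a compact runnable sketch for a one-dimensional minimisation task. The concrete names, parameter defaults and the patch-shrinking step below are illustrative assumptions layered on that scheme, not Pham's reference implementation.

```python
import random

def bees_algorithm(f, lo, hi, n=20, m=5, e=2, nep=10, nsp=3, ngh=0.5,
                   iters=60, shrink=0.95, seed=1):
    """Minimise f on [lo, hi] with the basic Bees Algorithm scheme:
    n scouts, m selected sites, e elite sites, NEP/NSP recruited bees,
    NGH patch size (here shrunk each iteration, a common variant)."""
    rng = random.Random(seed)
    scouts = [rng.uniform(lo, hi) for _ in range(n)]   # initPopulation: random solutions
    for _ in range(iters):
        scouts.sort(key=f)                             # evalPopulation
        new_scouts = []
        for rank, site in enumerate(scouts[:m]):       # m selected locations
            recruits = nep if rank < e else nsp        # more bees to the e best locations
            local = [min(max(site + rng.uniform(-ngh, ngh), lo), hi)
                     for _ in range(recruits)]         # search the patch around the site
            new_scouts.append(min(local + [site], key=f))  # best bee per location
        # remaining bees keep scouting at random
        new_scouts += [rng.uniform(lo, hi) for _ in range(n - m)]
        scouts = new_scouts
        ngh *= shrink                                  # optional patch shrinking
    return min(scouts, key=f)

# toy run: minimise (x - 3)^2, so the swarm should settle near x = 3
best_site = bees_algorithm(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
print(round(best_site, 2))
```

The division of labour mirrors the biology: elite sites get many recruits (exploitation of a good trail), while a fixed fraction of the population always scouts at random (exploration).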

procedure BA
  initPopulation();  // random solutions
  evalPopulation();
  while (not end)  // new population
    determineLocationsToSearchTheirNeighbourhood();
    // more bees to e best locations
    chooseBeesForLocations();
    evalPopulation();
    chooseTheBestBeeForEachLocation();
    assignRemainingBeesForRandomSearch();
  end while
end procedure
Listing 3.2. Bee Optimization algorithm – pseudo-code. Source: Pham ([82]).

3.2.3. Social structure of birds
Birds also provide an example of collective behaviour. They fly in flocks which are often V-shaped. The first emulation of this behaviour was described by Reynolds ([85]), who provided the basic rules of the behaviour of a single member of the flock. The program was called „Boids”20. The behavioural complexity of a structure made up of birds (boids) is the consequence of simple rules followed by a single individual:
− separation – avoid crowding (short-distance repulsion),
− alignment – steer towards the average heading of neighbours,
− cohesion – aim at the average position of the neighbours (long-distance attraction).
It turns out that when bird motion was recorded using sensitive cameras and then analysed ([30]), the actual motion of birds complies with these rules, with cohesion applying to 5–10 neighbours independently of the distance to them. These simple rules can be used to describe behaviours as complex as flock splitting and merging (e.g. when bypassing an obstacle). An attempt has also been made to apply this pattern of behaviour to people, whose behaviour turned out to be very similar: if 5% of a „herd” changes its direction, the rest will follow, and if one person is chosen to be the „predator” and everyone is to avoid him/her, the behaviour is similar to that of a school of fish when threatened21. This algorithm is used very broadly now: from screensavers to modelling pedestrian behaviour. It was also used in the work on the film „Batman Returns” to model the motion of a flock of penguins.

20 This term comes from „bird-like object”; since that time these rules have been widely used in computer graphics, providing realistic emulations of the motion of birds, fish and other flocking/schooling animals.
21 http://psychcentral.com/news/2008/02/15/herd-mentality-explained/1922.html, link correct as of: 2009-08-23.
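The three rules can be expressed directly in code. The sketch below is a schematic 2-D update in which the weights, the separation radius and the use of all flock-mates as „neighbours” are illustrative assumptions, not Reynolds' original implementation.

```python
import math

def step(boids, sep_r=1.0, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """One synchronous update of (position, velocity) pairs under the three rules."""
    new = []
    for i, (p, v) in enumerate(boids):
        others = [b for j, b in enumerate(boids) if j != i]
        # cohesion: aim at the average position of the neighbours
        cx = sum(q[0][0] for q in others) / len(others)
        cy = sum(q[0][1] for q in others) / len(others)
        # alignment: steer towards the average heading of the neighbours
        ax = sum(q[1][0] for q in others) / len(others)
        ay = sum(q[1][1] for q in others) / len(others)
        # separation: short-distance repulsion from crowding neighbours
        sx = sy = 0.0
        for (qx, qy), _ in others:
            if math.dist(p, (qx, qy)) < sep_r:
                sx += p[0] - qx
                sy += p[1] - qy
        vx = v[0] + w_coh * (cx - p[0]) + w_ali * (ax - v[0]) + w_sep * sx
        vy = v[1] + w_coh * (cy - p[1]) + w_ali * (ay - v[1]) + w_sep * sy
        new.append(((p[0] + vx, p[1] + vy), (vx, vy)))
    return new

# three boids with different headings gradually cohere and align
flock = [((0.0, 0.0), (1.0, 0.0)),
         ((5.0, 0.0), (0.0, 1.0)),
         ((0.0, 5.0), (1.0, 1.0))]
for _ in range(200):
    flock = step(flock)
headings = [v for _, v in flock]
```

After a couple of hundred steps the three velocity vectors become nearly identical while the positions stay clustered: no boid has a notion of the flock, yet a flock emerges from the local rules.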


3.2.4. Social structures among mammals
Both ants and bees are relatively primitive life-forms, which makes it difficult to discover the nature of their behaviours. Regardless of their simple anatomy, the social behaviour they exhibit is surprisingly complex. It has inspired optimisation algorithms which have proven efficient and have found many applications ([8, 111, 83, 91]). It is much easier for people to understand mammals, which are a higher and much more developed life-form. Their numerous senses let them collect information from their surroundings extremely well. The characteristic feature of systems based on the behaviour of primitive agents (bacteria, ants, bees) was the need to use a large number of agents to solve a specific problem. When trying to model systems based on observations of more complex organisms, as the complexity of a single agent increases, it may turn out that the number of agents at which Collective Intelligence is manifested is smaller. Let us remember that even just two individuals can forge a cooperation that yields very good results (i.e. the Collective Intelligence of this structure is positive). Mammals are a very good example of the predator–prey relationship. This is a very interesting relationship, an evolutionary „arms race”. It is not so much about the individual physical features of species, but rather about herd behaviour. Some species extend the care and protection of their young to the entire herd. When wolves attack a buffalo herd, the latter stand in a circle with heads lowered, the young sheltered inside the circle. Wolves cannot break through such a barrier. On the other hand, the wolves try to act jointly to force the buffalo to move so that the latter would lose the advantage stemming from this formation. What is more, a wolf is a loner by nature.
However, when it finds it difficult to find food, it will join other wolves in a pack, allowing it to hunt larger and stronger animals and, as a result, improving the chances of a single individual to obtain food. Larger predators like wolves, cheetahs or lions exhibit very well-developed cooperation during the hunt. When cheetahs hunt, they can surround their potential prey, select the appropriate target to be attacked, organise a kind of „battue”, and jointly catch up with and kill the prey. Instinct plays a key role here. All the cats know which animal will be the prey; they can jointly stalk the target and skilfully position themselves so that the chase ends in success. However, contrary to appearances, the prey is not defenceless. This is well illustrated by antelopes living on the African savannah. While the entire herd is grazing, not all animals are vigilant, yet if one antelope senses danger, the whole herd flees in reaction. If we treat the herd as a Collective Intelligence, it becomes a set of agents, every one of which watches and registers its surroundings, while the alarm of one of them drives the entire herd. So a single antelope, even though it does not sense the danger itself, can be warned, as it does not have to rely on its own senses only.

There certainly are more manifestations of Collective Intelligence among mammals. Interestingly, however, the most frequently implemented systems are those based on similarities taken from research on beings less developed than mammals: ants and bees. One example of a study of the Collective Intelligence of an abstract structure made up of a hunter, a dog and a rabbit is that by Polański ([84]).

3.2.5. Collective Intelligence manifestations in nature – summary
The purpose of the research presented here is not to review all signs of Collective Intelligence found in nature, so the study of examples has been restricted to the ones most frequently implemented and applied in computer systems. On the one hand, the most obvious examples of Collective Intelligence in nature have been considered; on the other, it turns out that it is these obvious examples that have been successfully implemented. Another important observation is that the successful implementations of the algorithms discussed in computer science were made possible by using the results of research conducted in other scientific disciplines. So it seems that if biology was useful for developing a model of ant or bee behaviour for the purposes of optimisation algorithms, then using economic theory to model the behaviour of a market player (who, from the software point of view, is an entity very similar to an ant or a bee) should also yield good results. A popular optimisation algorithm born out of the observation of the herd behaviour of various animals is PSO (Particle Swarm Optimization) ([52, 22, 114]). It is used for optimisation problems and for projecting the reaction of a social structure to certain stimuli. An advantage of this algorithm is its simplicity: its implementation does not require any in-depth mathematical education, the algorithm is quick to implement and produces good results even in its basic form. Just like any optimisation algorithm, it requires definitions of the goal function as well as of the social structure, which assigns a neighbourhood to every element in the structure. Then a population is initiated, made up of random solutions to the problem, whose elements are called particles22. The next step of the algorithm is an iterative process of improving these results, in which every particle „memorises” its best location.
Information on the adaptation of a particle is available to its neighbours, and movements in the search-space are determined by this very information. In this algorithm, the swarm (present also in the name of the algorithm) is modelled by particles whose attributes are their location in the search-space and their speed. The particles move (quasi-chaotically23) in the space and can execute two types of reasoning: storing their best position and knowledge about the position of the best neighbour and, optionally, of the globally best particle. This algorithm is significant from the perspective of the computational model to be discussed in the following chapter.

22 Hence the name of the algorithm.
23 Compare: http://en.wiktionary.org/wiki/quasi.

The greatest number of Collective Intelligence manifestations can be found among people. Signs of Collective Intelligence among people are probably so complicated that it is frequently impossible to identify all the mechanisms which influence the manifestation of Collective Intelligence, which in turn poses problems in modelling. The social behaviours of ants or bees, even though they exhibit a high level of complexity, do not cause such problems. However, there is a theory which provides a good description of people’s behaviour and their interactions from the perspective of the free market. This field is microeconomics, which offers a precise description and mathematical models of market players’ behaviour. This theory forms the starting point for developing a model of a market in which Collective Intelligence is manifested. Creating a Collective Intelligence model for the market represents the key part of this project.
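The PSO scheme described above can be written down in a few lines. This is a minimal one-dimensional global-best sketch; the inertia and attraction constants (w, c1, c2) are commonly used illustrative values, not ones prescribed by the text.

```python
import random

def pso(f, lo, hi, particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Global-best PSO: each particle stores its own best position,
    and the swarm shares knowledge of the globally best one."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(particles)]  # random initial solutions
    v = [0.0] * particles
    pbest = x[:]                        # each particle "memorises" its best location
    gbest = min(x, key=f)               # knowledge of the globally best particle
    for _ in range(iters):
        for i in range(particles):
            r1, r2 = rng.random(), rng.random()
            # inertia + pull towards own best + pull towards the global best
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] += v[i]
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
            if f(x[i]) < f(gbest):
                gbest = x[i]
    return gbest

# toy run: minimise (t - 2)^2, so the swarm should settle near t = 2
best_x = pso(lambda t: (t - 2.0) ** 2, -10.0, 10.0)
print(round(best_x, 3))  # ≈ 2.0
```

The two „types of reasoning” mentioned in the text map directly onto `pbest` (a particle's own memory) and `gbest` (the neighbourhood's shared knowledge).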

3.3. Collective Intelligence computational model
This chapter presents the foundations of the Collective Intelligence theory from the angle of the computational model used for its purposes. Even though the term „Collective Intelligence” started appearing as early as the 1960s ([62]), it was only T. Szuba’s publications ([105, 106, 107]) that provided a computational model developed enough to handle Collective Intelligence.

3.3.1. Molecular model of computation
The considerations presented in the previous sub-chapter support the observation that an organised society can be formed not just by humans but also by much simpler beings. Even though every collective structure presented is different and governed by different rules, they have all formed by evolution, and designing any of these structures far exceeds the intellectual capability of a single member. As the evolution process runs for a specific purpose – for the individual to better adjust to the conditions of its environment and thus to improve its chances of survival – social structures can also be said to arise for a specific purpose. In order for the creation of a social structure to be profitable, the method of its organisation must be forced by the appearance of a gain greater than the loss24 (incurred when the group is being created). Every group, regardless of the complexity level of the individuals forming it, shows a certain effectiveness in achieving the goals it was created for. This effectiveness can be measured using a special indicator applied within a suitably built model.

24 Losses are understood in general terms here; they may be energy losses or of any other kind.

Based on the research of T. Szuba ([105, 106, 107]), consisting in the observation of the behaviours of various social groups, the computational model of Collective Intelligence presented below was proposed, which allows such measurements to be taken. The theory is founded on a computational model which departs from the orderly, deterministic calculation process – like the one executed by today’s typical digital processors – towards a molecular ([1]), non-deterministic computational model25. A specific case of this computational model (and one that has been physically implemented with success) is Adleman’s biochemical, so-called DNA computer26 ([2]). The loss of determinism in this computational model is very well offset by the natural parallelism of the calculations, which means that this kind of computer gains an advantage in multi-threaded calculations. It turns out that this model requires abandoning Boolean algebra as the basis of computations (0/1 calculus) and replacing it with calculations in first-order predicate calculus, i.e. computations are transferred into the field of mathematical logic. Interestingly enough, the model is still binary in the structural sense, just as a digital computer uses only two symbols – 0/1 – to code and process information. In very simple terms, information is carried in this model by so-called information molecules, which transport facts, rules and the purposes of calculations. Information molecules move quasi-chaotically27 within an environment configured by membranes. At the moment of a meeting (understood in a general sense28 and referred to as a rendezvous below), if the appropriate logical expressions match, a reasoning process occurs and results in progeny molecules which transport the conclusions of this reasoning further.
In this reasoning system, the logical process runs in a multi-threaded, chaotic, parallel fashion, with the threads intertwining and meshing together, while reasoning runs „forward”, „backward” and „from inside out” at the same time. Simulations have proven that this computational model is surprisingly fast and effective ([109]), but physically building it – constructing a molecular computer – is a major issue. The problem is therefore to find physical phenomena in the world around us which can be controlled and used to construct such a computer. Figure 3.1 shows a simple, general concept, an example of such a processor. Figure 3.2, in turn, illustrates the course of the computational (reasoning) process in such a computer with Feynman diagrams29.

25 Such a computer (just as an analogue one) is a non-Turing computer.
26 DNA computing: http://en.wikipedia.org/wiki/DNA_computer.
27 In this case it is very difficult to distinguish between what is random and what is chaotic. It turns out that people of very high intelligence, endowed with funds allowing them to make decisions freely, ultimately make moves which, to an outside observer, would best be described by a random process, even though their action is thought out (calculated). Their actions are sensitive to a change in initial conditions, so this should be a chaotic process.
28 The rendezvous concept depends on the metric used in the specific computational space.

Fig. 3.1. An example of computational space CS with internal structure and information molecules. Source: T. Szuba ([107])
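To make the rendezvous mechanism concrete, the following toy simulation treats facts and rules as molecules meeting at random in a single „soup”. The specific facts, rules and rendezvous policy are invented for illustration and are not part of Szuba's formalism; membranes and molecule removal are omitted.

```python
import random

# information molecules: facts are atoms, rules are (premises, conclusion) pairs
facts = {"a", "b"}
rules = [({"a", "b"}, "c"), ({"c"}, "d"), ({"a", "d"}, "goal")]

rng = random.Random(0)
for _ in range(1000):                         # quasi-chaotic rendezvous attempts
    premises, conclusion = rng.choice(rules)  # a rule molecule meets the fact soup
    if premises <= facts:                     # "unification" succeeds: premises satisfied
        facts.add(conclusion)                 # a progeny molecule carries the conclusion
    if "goal" in facts:                       # a goal molecule has been produced
        break

print(sorted(facts))  # -> ['a', 'b', 'c', 'd', 'goal']
```

Despite the random order of meetings, the chain of conclusions a, b ⊢ c ⊢ d ⊢ goal emerges; this is the spirit of the non-deterministic, parallel inference process the chapter describes, reduced to a sequential toy.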

3.3.2. Formal description
Once we have an intuitive base, we can formally define the computational process in such a processor (Definitions 3.2, 3.3). Definitions 3.2 and 3.3 formally describe what has been presented graphically in Figure 3.1. Definition 3.2 says that if, in the formal sense, two information molecules carrying facts, rules or computation goals meet (this rendezvous depends on how the space metric is defined), and if they „fit”, then progeny molecules will be formed and will carry the results (conclusions) of this reasoning further. The „parent” molecules may (but need not) be removed. Definition 3.3 formally sets the conditions for the appearance of a network of interweaving reasonings leading from the facts that initially existed to the final conclusion. The network of such reasonings is called an N-element reasoning below.

29 Very basic information on Feynman diagrams is to be found at: http://en.wikipedia.org/wiki/Feynman_diagram.


Fig. 3.2. Feynman diagrams applied to describe inference process. Source: T. Szuba ([107]).

Once we have such a computational model, we can attempt to formally define what the Collective Intelligence of a given social structure is and how it operates (definitions 3.4, 3.7). Definition 3.2. in CS N .  o  n Generalised inference Let: CS = ...CSji , ..., CSlk and let the following relations occur: R CSij , CSlk   (R denotes rendezvous), U CSij , CSlk (U denotes that unification of the necessary type can be successfully applied); C(one or more CSnm conclusions)denotes that  CSnm are satisfiable ⇒ one or more CSnm molecules and possibly R CSij , CSlk molecules, where R denotes removal, are created. The definition given above by T. Szuba ([107]) is constructed on the basis of the fundamental mathematical logic definition of inference or logical consequence ([6, 58]) saying that: „A wff G is an inference from a set of axioms {A1 , ..., An } if any interpretation that simultaneously satisfies all of the axioms also satisfies G”. Definition 3.3. N-element reasoning in CS N . Let there be given a computational space CS of any level: CS n = nm } and the allowed set of inferences SI in the form of: {CS1ni , CSm 46

{set of premises ⊆ CS} →^{I_j} {set of conclusions ⊆ CS},

and one or more information molecules of the goal CS_goal. We say that {I_{a_0}, ..., I_{a_{N-1}}} ⊆ SI is an N-element reasoning in CS^n if for every reasoning I ∈ {I_{a_0}, ..., I_{a_{N-1}}} its premises belong to CS^n at the moment of firing this inference, if in addition all {I_{a_0}, ..., I_{a_{N-1}}} can be connected into one tree by common conclusions and premises, and if finally CS_goal ∈ {set of conclusions for I_{a_{N-1}}}.

Definition 3.4. Collective Intelligence as a momentary property of a social structure.
Basic symbols: let there be given a set S{...} of individuals labeled indiv_1, ..., indiv_n in the environment Env; nothing needs to be assumed about them apart from the fact that these individuals are observable. Let there be given a time period [t_start, t_end] for the purpose of assessing the Collective Intelligence of the social structure formed by S{...}; let there be given a universe of problems U for the environment Env, and an assessment of the complexity of problems Probl_i denoted by f_0^{Probl_i}(n). As Collective Intelligence may apply to formal problems as well as physical ones, we have to write:

f_0^{Probl_i} =
  { if Probl_i is a computational problem: use the standard complexity definition, where n denotes the size of the problem;
  { if Probl_i is a physical problem: use physical measures in any proper physical units (e.g. weight, size, time etc.) to express n.
                                                                     (3.4)

Let us express the problem solving ability (Abl) of the population S, if the elements of S do not interact with each other, by the following formula:

Abl_U^{all indiv} =_def ⋃_{Probl_i ∈ U} { max_s max_n f_0^{Probl_i}(n) }                (3.5)
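The mechanics of Definition 3.2 can be sketched as a toy interpreter. This is a hypothetical miniature, not the RPP itself: all names are illustrative, molecules drift through a one-dimensional discrete space (a deterministic drift stands in for the model's random motion), and a rendezvous of a matching fact and rule fires an inference that creates a progeny molecule. Here the parents are kept rather than removed.

```python
# Toy interpretation of Definition 3.2 (all names hypothetical).
# Rendezvous R: a fact molecule and a rule molecule occupy the same cell.
# Unification U: the rule's premise matches the fact's term.
# Conclusion C: a progeny fact molecule carrying the conclusion is created.

def run(molecules, size, max_steps):
    """Move molecules, fire inferences at shared cells, return derived facts."""
    for _ in range(max_steps):
        for m in molecules:
            m["pos"] = (m["pos"] + m["speed"]) % size   # motion characteristics
        facts = [m for m in molecules if m["kind"] == "fact"]
        rules = [m for m in molecules if m["kind"] == "rule"]
        for f in facts:
            for r in rules:
                if f["pos"] == r["pos"] and r["premise"] == f["term"]:
                    molecules.append({"kind": "fact", "term": r["conclusion"],
                                      "pos": r["pos"], "speed": 0})
    return {m["term"] for m in molecules if m["kind"] == "fact"}

molecules = [
    {"kind": "fact", "term": "food_found", "pos": 0, "speed": 1},
    {"kind": "rule", "premise": "food_found", "conclusion": "return_to_nest",
     "pos": 3, "speed": 0},
]
print(sorted(run(molecules, size=5, max_steps=5)))
# → ['food_found', 'return_to_nest']
```

The progeny fact is itself a molecule, so a second rule with premise return_to_nest could fire on it in a later step: this is exactly how chains of inferences (the N-element reasonings of Definition 3.3) arise.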

Given the above definitions, the Collective Intelligence definition can now be formulated ([107]):

Definition 3.5. Weak definition of Collective Intelligence.
If we assume that individuals coexist and interact in some way, we say that a weak Collective Intelligence emerges because of cooperation, interaction, or coexistence in S if at least one problem Probl_0 can be pointed to, such that it can be solved by a lone individual with the support of the group, or by some individuals working together, such that

f_0^{Probl_0}(n_0) >_{significantly} f_0^{Probl_i}(n) ∈ Abl_U^{all indiv}                (3.6)

or

∃ Probl_0 such that (∀n  Probl_0 ∉ Abl_U^{all indiv}) ∧ Probl_0 ∈ U                (3.7)

Definition 3.6. Strong definition of Collective Intelligence ([107]).
We say that a strong Collective Intelligence emerges because of cooperation, interaction, or coexistence in S if at least one problem Probl_0 can be pointed to, such that it can be solved by a lone individual with the support of the group, or by some individuals working together, such that

f_0^{Probl_0}(n_0) >_{significantly} f_0^{Probl_i}(n) ∈ Abl_U^{all indiv}                (3.8)

or

∃ Probl_0 such that (∀n  Probl_0 ∉ Abl_U^{all indiv}) ∧ Probl_0 ∈ U                (3.9)

and

¬∃ indiv ∈ S such that his individual abilities are reduced                (3.10)

An extended discussion of and commentary on these definitions, together with multiple examples, can be found in T. Szuba's book ([107]).
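The weak-emergence test can be sketched directly: compute Abl_U^{all indiv} from lone-individual results (best result per problem, as in eq. (3.5)) and compare it with what the interacting population achieves. This is an illustrative reading, not the book's formalism; the names, the data and the numeric significance factor are hypothetical.

```python
# Hypothetical check of the weak definition of Collective Intelligence.
# individual_results: f0 achieved by each individual working alone, per problem.
# group_results: f0 achieved by the same population when interacting.

def weak_ci_emerges(individual_results, group_results, significance=1.5):
    # Abl^{all indiv}_U: best lone-individual result per problem (eq. 3.5)
    abl = {}
    for indiv in individual_results.values():
        for problem, f0 in indiv.items():
            abl[problem] = max(abl.get(problem, 0.0), f0)
    for problem, f0_group in group_results.items():
        if problem not in abl:
            return True            # eq. (3.7): a new problem becomes solvable
        if f0_group >= significance * abl[problem]:
            return True            # eq. (3.6): "significantly >"
    return False

individual_results = {
    "indiv_1": {"carry_load": 1.0},
    "indiv_2": {"carry_load": 1.2},
}
# Cooperation solves a problem no individual could solve alone:
group_results = {"carry_load": 1.3, "build_bridge": 1.0}
print(weak_ci_emerges(individual_results, group_results))  # → True
```

The strong definition would additionally require checking condition (3.10), i.e. that no individual's own abilities were reduced by the interaction.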

3.3.3. Collective Intelligence Quotient – IQS

The computation model presented would be useless without a goal function: an indicator defining the degree of organisation of the modelled social structure. The indicator introduced by T. Szuba ([105, 106, 107]) is probabilistic and based on measuring the social structure's capability of reaching the assumed goal. Its name – IQS – means the intelligence quotient (IQ) of a social group (S). With the mathematical apparatus prepared in the previous sub-chapter, it is possible to define Collective Intelligence as the IQS quotient.

Definition 3.7. Collective Intelligence Quotient – IQS ([107]).
Based on an N-element reasoning, we can give the definition of IQS for S{...} over a domain of problems U as the probability P that the conclusion CS_goal will be made within CS^n after time t as a result of the N-element reasoning occurring. We note this as: IQS = O(t, N).

The idea behind Definitions 3.4 and 3.7 is as follows: the social structure, if there is no cooperation, has a limited capability of solving a certain pool of problems, established by individually testing (to avoid a positive or a negative interaction) who can do what and who is best at solving a given problem. Based on such a test, it is possible to compile a list of „what can be done” and of the best result for a given problem. Collective Intelligence manifests itself when, as a result of cooperation, competition, mutual observation etc., completely new problems appear which the social structure can solve, or when the speed, complexity range or effectiveness of solving problems already solved previously increases. Based on the above definitions, it is also possible to define a measure of Collective Intelligence, i.e. its IQ, called the IQS (IQ Social) quotient.
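Since IQS is defined as a probability, for any concrete model it can be estimated by Monte Carlo: run the randomised reasoning process many times and count how often the goal conclusion appears within the time budget t. The reasoning process below is only a stand-in for a real N-element reasoning (each step, one pending inference of an N-element chain fires with probability p); the names and parameters are hypothetical.

```python
import random

# Hypothetical Monte Carlo estimate of IQS = P(CS_goal derived within time t).

def estimate_iqs(n_inferences, t, p_fire, runs=10_000, seed=0):
    rng = random.Random(seed)
    successes = 0
    for _ in range(runs):
        done = 0
        for _ in range(t):
            if done < n_inferences and rng.random() < p_fire:
                done += 1          # one more inference of the N-element chain
        if done == n_inferences:   # CS_goal derived within the time budget
            successes += 1
    return successes / runs

# A structure whose inferences fire more reliably per step has a higher IQS:
low = estimate_iqs(n_inferences=5, t=10, p_fire=0.3)
high = estimate_iqs(n_inferences=5, t=10, p_fire=0.8)
print(low < high)  # → True
```

This mirrors the probabilistic comparison discussed below: both structures can in principle reach the goal, but the „more intelligent” one does so with higher probability within the same time t.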
What is significant here is realising that chaotic, parallel reasoning processes like those taking place within a social structure can only be assessed in probabilistic terms: „a more intelligent social structure is more likely to solve a given problem faster than a competitive social structure”. Obviously, a more intelligent social structure will have a greater real domain of problems it can solve, but the other, competitive one may also solve such problems, just less probably, or with the same probability in exceptionally favourable situations. The IQS concept is presented in Definition 3.7. It is understandable that the IQS may become negative, which means that in certain situations the individuals (participants of the social structure) become more intelligent than the entire structure (e.g. in the state of a civil war30) ([107, 108]).

Further in these considerations, the term „information molecule” will be replaced with the concept of an „agent”. The „information molecule” term is more general, and

30

This example is discussed more broadly in T. Szuba’s publications.


in some studies of the nature of Collective Intelligence the term „agent” is not appropriate ([107]). In the context of this publication, the use of the term „agent” is very much justified31.

So the question should be asked: what do we expect from the Collective Intelligence theory in the case of the ASIHM? We want to build a simulation model of a simplified market and then fine-tune it so that strings of reasonings performing self-regulating functions for that market start appearing spontaneously; these strings we will be able to consider as the mechanism of Adam Smith's Invisible Hand of the Market, and this should pave the way for analysing this paradigm. Here we should recall the concept of declarative programming, implemented so well in the PROLOG programming system32. In this system, the plan of computations of a given program need not be designed explicitly: it is enough to provide the PROLOG system with the appropriate facts, rules and goals. The course of computations will emerge when the program is executed. However, at the beginning (notwithstanding errors), PROLOG usually creates a program from the logical components supplied which is different from what was intended. It is only by analysing what has happened that the system can be forced to act as expected. Similarly, when designing a market model, its elements should be defined so that:
− the model stays consistent with the reality;
− when the market is disrupted by some factor (irresponsible politics, demand/supply shocks, anomalies etc.), the logical elements making up descriptions of agents and market elements should, very probably and intensely (in the sense of affecting the market), „combine” into strings of reasonings leading to the initiation of self-regulating (defensive) mechanisms of the market.
Because of the above observations, in further considerations stress will be placed on matters related to designing the market model so as to „bring about the ASIHM process”.

3.4. Designing a molecular model of computation

The molecular model of computation presented before has many advantages from the point of view of its implementation. First, it is characterised by natural parallelism, so problems can be effectively computed on clusters of computers. Secondly, there are many tools, other than the Prolog language, which support its implementation while ensuring easy integration with software written

31

This issue will be discussed more broadly in the next chapter. Basic information on Prolog might be found on Wikipedia: http://en.wikipedia.org/ wiki/Prolog_programming_language 32


in other programming languages33. Thirdly, attempts to build a computer that runs a non-deterministic model of computations, which is useful for describing many phenomena, have ended in success ([1]). However, a fundamental problem is to describe a physical phenomenon from the surrounding world using such a model. As an example, an attempt to compile an ACO algorithm into a molecular computation model will be presented. It turns out that such a transformation and reinterpretation make the algorithm much more versatile, as it actually becomes a universal optimisation algorithm for social structures34. The compilation of the ant algorithm to the molecular computational model has the following form:
1. Food deposits are treated as information molecules of the C^0 class located in the CS information space, unable to move, carrying only one fact, for instance about the type and quantity, e.g.: food(type, quantity). To make further discussion easier, let us represent information molecules of this type with the symbol C_z^0. The location of these molecules is preset, as part of the initial conditions for calculations or simulations.
2. The pheromone trail left by an ant after discovering a food deposit, on its way back to the anthill to inform it, is treated as information molecules of the C^0 class located within the CS information space. They are not capable of moving, so these molecules carry only one logical expression of a fact type, with a functional expression inside which codes the gradual dissipation of the pheromone smell, e.g. pheromone(intensity(time)). To make further discussion easier, let us represent information molecules of this type with the symbol C_f^0.
3. A single ant is treated as an information molecule of the C^1 class (i.e. containing other information molecules inside), with its own motion characteristics v_identifier in the CS. To make further discussion easier, let us represent information molecules of this type with the symbol C_m^1.
In the literature, a single ant is generally treated as an „automaton”. It is therefore justified to assume that the information molecule representing an ant contains inside its membrane a small reasoning system composed of logical expressions (carried by C^0 molecules): facts, reasoning rules and goal expressions allowing the „ant behaviour” to be implemented, leading to the determination of v_identifier. In particular, the reasoning rules found in it should support:

33 Examples of such tools are provided by the Jess or Drools rule engines, which will be discussed more broadly below, as they have been used alongside the Java language to design the prototype simulation system.
34 The transformation presented is an original proposition by the author and T. Szuba, while its interpretation is the subject of further research.


(a) The correct behaviour of the ant if it has a rendezvous with a C_z^0 type molecule (finds a food deposit);
(b) The correct behaviour of the ant if it has a rendezvous with a C_f^0 type molecule (pheromone);
(c) The membrane of the ant molecule is transparent to food molecules C_z^0 and pheromone molecules C_f^0, so that the reasoning can encounter the facts, rules and goal expressions contained within it (i.e. forming the ant's psychology);
(d) Molecule C_m^1 makes subsequent moves inside the CS (moves generally described by the characteristics v_identifier) based on the current logical status of the set of facts, rules and goal expressions contained in it:

i. When it is looking for food, those moves are Brownian motions modified by local obstacles (membranes). In other words, this is a process of random searching.
ii. If a food deposit is found, this movement starts the motion towards the anthill, with the periodic production and placement of C_f^0 molecules used to mark the way to the found food in the CS.
iii. If the ant is in the „looking for food” state and it has a rendezvous with a C_f^0 pheromone molecule, a reasoning process occurs within C_m^1 and a movement decision is taken: ignoring the pheromone if it is already old (based on the expression intensity(time)), or changing the hitherto motion characteristics and making a determined move in the direction charted by this and subsequent information molecules C_f^0.
4. The CS information space is shaped (initial configuration) with membranes so that it reflects the complexity of the anthill neighbourhood in which the „ant world” operates.
The above compilation of the ant algorithm into a molecular computational model does not change the nature of the algorithm itself; it only transfers it into a different computational model which is to execute this algorithm. It can therefore be said that in this computational model as well, the ant algorithm will, after a certain time, generate the shortest (optimum) way between the anthill and the food deposit. It should be kept in mind that, apart from sporadic crises, an anthill cannot rely on one single food source found from time to time which requires the transport capacity of all workers. Usually, at any one moment, several food sources are available, so a mechanism for the efficient distribution of the transport resources of the anthill between sources that have been consumed to various levels at that moment must be introduced.
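Items 2 and 3.iii can be sketched as a fragment of the ant molecule's internal reasoning. This is a hypothetical illustration, not the book's implementation: the pheromone molecule C_f^0 carries intensity(time), a dissipating value, and the ant molecule C_m^1 follows a trail only while the smell is still fresh enough; the half-life and threshold values are assumptions.

```python
# Hypothetical reasoning inside C_m^1 on rendezvous with a pheromone C_f^0.

PHEROMONE_HALF_LIFE = 10.0  # assumed time units

def intensity(initial, age):
    """intensity(time): exponential dissipation of the pheromone smell."""
    return initial * 0.5 ** (age / PHEROMONE_HALF_LIFE)

def ant_decision(pheromone_age, threshold=0.25, initial=1.0):
    """Movement decision of item 3.iii: follow the trail or keep searching."""
    if intensity(initial, pheromone_age) >= threshold:
        return "follow_trail"    # change motion characteristics v_identifier
    return "random_search"       # trail too old: keep Brownian-like search

print(ant_decision(pheromone_age=5))    # fresh trail  → follow_trail
print(ant_decision(pheromone_age=40))   # stale trail  → random_search
```

The decay built into intensity(time) is precisely what lets the anthill redistribute its transport resources: trails to exhausted sources fade and stop attracting workers without any central decision.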

3.4.1. A proposal to generalise the ACO algorithm in the molecular computational model

If the nomenclature and interpretation are changed, the ACO algorithm can be transferred to the world of agents running business operations in a free market. It is worth noting that the corresponding molecular computational model is transferred in parallel. There is a popular saying that suggests this nomenclature and interpretation change for the pheromone that marks the trail to a food deposit: in the business world we sometimes informally talk of the „perfume of success” (of somebody). What is important is that the majority of the proposed name and interpretation changes are generalisations. The proposed nomenclature and interpretation changes are:
− CS computational space → free market;
− ant → agent (person, household or company);
− food → consumer good;
− pheromone → information on the success of another agent;
− pheromone intensity → information currency.
A rendezvous between information molecules has the same nature in this case. Information molecules representing individual objects generally do not change their type, so an agent will still be represented by an information molecule of the 1st degree, for instance. It is also assumed that, unlike in the ant algorithm, there are many „food sources”, i.e. goods (whereas we assume that anything that is of any utility to an agent is a good).

3.5. Random Prolog Processor, implementing the molecular computation model

The publication of T. Szuba ([107]) presents, in addition to the computational model concept of Collective Intelligence, also its implementation in the form of a Random Prolog Processor (RPP). The molecular computational model introduces the notion of an information molecule as the medium carrying information in the system. Molecules are located in the space (the model does not impose restrictions as to the space structure: it can be a continuous or a discrete space) which, in addition, contains membranes influencing the motion of molecules. The information molecule is interpreted as any type of object, which can be a single fact, a rule, an agent of any complexity level or even a whole group of such agents.

a(1).                                  % (fact)

answer(Y) :- a(X),                     % (rule)
    greater(X, 0),
    sum(X, 5, Y),
    retract(a(X)),
    assert(message("molecule a(X) is killed")).

|-- answer(6)                          % (logical result)

Listing 3.3. RPP reasoning in the PROLOG language

The natural formalism for describing molecules is a language of predicates, which allows the molecule to be described by a set of facts and rules. Because information molecules can be embedded (an information molecule can be fixed inside another molecule), they can be treated as a computational space (CS). A 1st-level molecule containing Prolog clauses such as facts, rules and goals is denoted by:

CS^1 = {c_1, c_2, ..., c_n}                (3.11)

Consequently, a 0-level molecule is made up of single facts, goals and rules. It is worth noting that a single program in Prolog is consistent with this model – its space is formed by the computer's memory. Membranes are represented by the characters |...|, which enclose facts, goals and rules. Thus:

CS^1 = {c_1, c_2, ..., c_n} ≡ {|c_1, c_2, ..., c_n|}                (3.12)
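The membrane notation of eq. (3.12) nests: a level-1 molecule is a membrane enclosing level-0 clauses, and a membrane may itself contain membranes, giving CS^n. A minimal sketch of this nesting (the tuple-of-strings representation is hypothetical, chosen only for illustration):

```python
# Hypothetical nested representation: a string is a level-0 clause, a tuple
# is a membrane |...| enclosing whatever molecules it contains.

def level(molecule):
    """Nesting depth: a bare clause is level 0, a membrane adds one level."""
    if isinstance(molecule, str):              # a single fact, rule or goal
        return 0
    return 1 + max((level(m) for m in molecule), default=0)

cs1 = ("a(1).", "answer(Y) :- a(X).")          # {| c1, c2 |}, eq. (3.12)
cs2 = (cs1, "goal(answer(Z)).")                # a molecule inside a molecule
print(level("a(1)."), level(cs1), level(cs2))  # → 0 1 2
```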

RPP syntax is the same as in Prolog. The definition of reasoning in the RPP is given in Definition 3.3. An example of a simple execution of an RPP reasoning in the PROLOG language is presented in Listing 3.3. After the rendezvous, the following computational process occurs:
1. The clauses a(1) and a(X) are unified ⇒ {X/1}.
2. The logical expression greater(1, 0) evaluates to True, and the numerical evaluation of sum(1, 5, Y) yields {Y/6} (evaluation of embedded predicates).
3. retract and assert remove one parent and launch a molecule with a message in the RPP.

4. The logical expression (the mother rule) is unified to the fact answer(6), since no subexpression evaluated to False and the embedded predicates executed successfully. The answer(6) information molecule can be considered to be the main progeny (and the molecule with the message a side one). The mother molecule of the rule keeps moving, just like the progeny molecule answer(6), whereas the molecule message(...) is added to the set of solutions.
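The four steps above can be replayed outside Prolog. This is a hypothetical re-coding for illustration only (the RPP executes real Prolog clauses, and the dictionary representation here is an assumption):

```python
# Hypothetical replay of the Listing 3.3 rendezvous: the fact a(1) meets the
# answer/1 rule, unification binds X, the embedded predicates greater/2 and
# sum/3 are evaluated, the parent fact is retracted and a message molecule is
# asserted alongside the progeny answer(6).

def rendezvous(facts):
    # step 1: unify a(1) with a(X)  =>  {X/1}
    x = facts.pop("a")                       # retract(a(X)): parent removed
    # step 2: embedded predicates: greater(X, 0), then sum(X, 5, Y) => {Y/6}
    if not x > 0:
        facts["a"] = x                       # rule not applicable: restore
        return facts
    y = x + 5
    # steps 3-4: assert the message molecule and the progeny answer(Y)
    facts["message"] = f"molecule a({x}) is killed"
    facts["answer"] = y
    return facts

result = rendezvous({"a": 1})
print(result["answer"])   # → 6
```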


4. Market simulation models

Notwithstanding many attempts ([48, 90, 121, 93]) at designing market simulation models, not all can be considered convergent with the research direction presented here. Some of the studies focused on developing a new approach to the problem of allocating resources based on abstractions taken from economic theory (such as the laws of supply and demand) and computer science (multi-agent systems) ([121]). Other research presented the Invisible Hand of the Market by reference to the equilibrium price problem in the evolutionary market theory ([90]). The design of economic simulators was also attempted for the purposes of computer games1. This chapter discusses previous attempts to simulate the market2, particularly approaches based on multi-agent systems, which the author considers close to this work3 (multi-agent systems and basic related notions are discussed briefly in the following chapter). These approaches fall within the direction of research referred to as Agent-Based Computational Economics4 (ACE). However, the approach presented by the author is more „object”-oriented in the sense of making use of real market elements. Combining these with Collective Intelligence makes it possible to derive abstract computational processes from the real market. An interesting tool developed as part of ACE is Repast (REcursive Porous Agent Simulation Toolkit): an agent framework created for the purposes of sociology and available free of charge. Repast makes it possible to systematise and research complex social behaviour by offering functionality for designing controlled and repeatable computational experiments. It is notable that the intended purpose of this framework was much more general and not restricted to economics. Another interesting project initiated as part of ACE is the JAMEL5 project, which is a distributed macro-economic simulator implemented in Java. The model used in JAMEL boils down to a multi-agent system in which money is the endogenous variable6 ([93]).

1 It is worth noting games with a cult following today, such as SimCity or Civilization.
2 It should be noted that the scope of this monograph is narrower and covers only the analysis of the invisible hand of the market processes. However, to make the analysis possible, a market simulator has to be designed which will be used to analyse these specific processes. The author has not found information in the literature on projects aimed at designing a simulator for such purposes.
3 Not just close, but forming a source of inspiration.
4 A very good starting point is the website of Professor Leigh Tesfatsion of Iowa State University: http://www.econ.iastate.edu/tesfatsi/ace.htm.

In today's world, the level of complexity of relationships within the economy goes beyond the capability of theoretical analyses based on micro- and macroeconomic models. By their very nature, even complex models simplify reality a lot. Another problem is the issue of scale: the majority of macroeconomic models are restricted to considerations in which only two entities (e.g. a producer and a consumer) are involved, rarely more7. At the same time, economic decisions increasingly impact individuals and social groups. As a result, parameterised economic models illustrating relationships in selected areas of the market are becoming popular in scientific and commercial circles. Market simulators designed with the involvement of various scientific and commercial institutions support scientists and decision makers to the following extent:
− researching the market reaction to various, frequently quite improbable, events, such as analyses of the consequences of a hypothetical explosion of the prices of water, food, oil or other goods;
− predicting reactions to individual business decisions: even small steps taken by the boards of large corporate groups can significantly and often irreversibly impact the standing of the corporation and its affiliated companies, so simulating the market response to individual decisions allows some potentially dangerous errors to be avoided;
− empirically checking the accuracy of existing economic models describing the market: the simulation can be used to confirm, correct or reject hypotheses.
In addition, division lines based on the nature of data contained in the system and the method of managing market agents can be drawn.
Depending on the nature of data contained in the system, the following are distinguished:
− fictitious simulators, operating on abstract or untrue data;
− real simulators, operating, for instance, on the real volatility of share prices or exchange rates, employed particularly often for running equity investment games without using real money.

5

Project home website: http://p.seppecher.free.fr/jamel.
6 In other words, the explained or internal variable: the variable whose values are estimated by the statistical model (in particular the econometric model). Its counterpart is the explaining, i.e. exogenous, variable. Sometimes the explained variable is also called the dependent variable; in algebra this is another name for a function of arguments referred to as independent variables.
7 Lower down, selected economic problems used by the author to design the simulator will be presented.


Depending on the method of managing market agents, the following are distinguished: − automatic management – decisions of the majority of agents are made automatically following defined reasoning rules; − manual management – the majority of agents take decisions in accordance with instructions from equity investors. The operation of simulators depends very much on the model (theory) forming the foundation for the system operation. The following section is an attempt at a synthetic review of popular models of market operation as well as a description of market simulators in existence.

4.1. Analysis of previous and current approaches and solutions

The considerations forming part of this monograph focus on the processes of the Invisible Hand of the Market, which are generally assumed to run in a free market ([49]). Consequently, this chapter discusses selected approaches to modelling free market behaviours. A broad-ranging study by R. Buda ([12]) defines the following models of goods exchange:
− The perfect competition model – a theoretical model based on the price mechanism. It assumes that the buyer has its maximum buying price, the seller a minimum sale price, and market players have free access to the information needed to conclude transactions. The model is based on the work of the auctioneer, who calls out subsequent prices until an equilibrium is reached between buyers and sellers. In this version of the algorithm, the auctioneer raises prices by the same increment starting from the minimum price, and adopts the point at which the difference between supply and demand is at its minimum as the equilibrium. In this model, it is assumed that no transactions are made outside the equilibrium found. This assumption is wrong in so far as, in reality, transactions are made on the market outside the supply and demand equilibrium.
− The imperfect competition model – SINGUL. This model is based on a calculation of the quantity and price of the commodity. It assumes that there is only one commodity and there are agents in the market who can meet a limited number of other agents and negotiate by offering higher and higher prices if they are buying, or lower and lower ones if they are selling. The agents cannot speculate – they can buy only as much of the commodity as they need. At first, the quantity of the commodity possessed, the quantity needed, the maximum and the minimum

price are set. Every agent meets a specific, randomly set number of agents. Then, depending on the difference between the quantity of the commodity possessed and desired, the agent becomes either a seller or a buyer. If the agent's preferred quantity of the possessed commodity is the same as its actual inventory, the agent leaves the market and does not take part in further transactions. During the negotiations, the agents do not have the same level of motivation: the one with a greater difference between the quantities possessed and desired is more willing to accept the counterparty's offer. Negotiations are successful if the price reached falls within the ranges of accepted prices of both agents. It may sometimes happen that the ranges of accepted prices diverge; then it is assumed that one of the agents (selected at random) will find its position incorrect and another one more interesting. It is assumed that every agent can hold as many meetings as necessary to achieve its objective and leave the market (so there is no guarantee that the objective will be reached in a finite number of steps).
− The imperfect competition model – a game of suppliers and buyers. A scientific experiment model based on the EXCHANGE software ([12]). This experimental method was developed to abandon assumptions idealising the competitive market. It supports running an unusual ceteris paribus8 analysis, but it must be noted that this method is analogous, and not identical, to a market. The experimental procedure features five rules:
• insatiability – the agent's utility function is a monotone function of its inventory;
• validity – the agent's gain depends on the activities of other agents;
• dominance – a prize motivates every agent;
• confidentiality – every agent knows only its own information;
• parallelism – the market mechanism must resemble the real world which it attempts to represent.
EXCHANGE can run in two modes: pilot or operator.
The operator is a market player, while the pilot moderates the market. Every operator’s job is to maximise its utility function ([112]) with its liquid resources. Generally, every operator communicates with the pilot (moderator), communicating its needs to 8

Ceteris paribus is Latin for „everything else being equal”. It may mean that all other conditions are equal or that the circumstances are the same. When this expression is used, it means that, in order to simplify the reasoning, the possibility that certain events or conditions could disturb the relationship between the premise and the conclusion has been consciously set aside. Ceteris paribus may also represent the belief in a certain inertia of the laws governing reality: if something has been true for a long time, it is not likely to stop prevailing soon. In a scientific experiment, ceteris paribus usually means that the researcher keeps all the independent variables unchanged, apart from the one selected for experimenting with.


buy and sell goods. The pilot makes various calculations based on the operators' messages and ultimately calculates the effects of the transactions, the change of utility and the classifications, and returns these results to the operators.
An attempt to focus on disturbances of the initial state is presented in November's publication ([78]). The experiment itself consisted in forming two groups: sellers offering goods and buyers wanting to buy some product. Every simulation cycle consisted in every buyer taking the buying decision according to the action pattern described below. Every individual seller/buyer is characterised by a set of features such as:
− the period of market entry;
− the possible factors for impacting the market;
− the desired quality of the product;
− buying preferences.
An important element of the model is the introduction of a mechanism of decreasing interest in a given commodity. However, some characteristic features of the free market are not reflected (e.g. the possibility of bankruptcy). It is possible to try solving the problem of decreasing interest by forcing agents to move towards a segment of the market which is more „interesting” in some regard. This proposal was presented by Eichelberger and Hadzikadic ([29]). Interestingly, in their research the free market model was used to automatically value features in a certain set. Agents are forced to move to find possible environments offering better economic prospects (in other models agents do not move even though the economic situation clearly supports this). In addition, what is exchanged in this model are observations of other agents as to the value of certain attributes, and not tangible objects.
Nauberg and Bertels ([76]) also demonstrate a market exchange model, this time with an analysis of simulation results. In their article, they present a model that simulates the behaviour of a heterogeneous environment of merchants.
These merchants are modelled as autonomous agents whose aggregated behaviour translates into the behaviour of the entire market. The model focuses on examining the role of information reaching the market and the impact of heterogeneousness on market development. The main purpose of the presented model is not to predict market evolution, but rather to gain a deeper understanding of the phenomena governing financial markets. The main conclusions from the simulations completed are as follows:
− information reaching the market has a significant impact on its behaviour;
− only the introduction of heterogeneousness makes the dynamics of the modelled market similar to those observed in the real world.

In the environment there are many agents, each of which has its own knowledge and method of proceeding. The detailed model was demonstrated using the example of the stock market. Every agent is capable of deciding whether to buy or sell shares and of forming expectations as to their future value. The agent follows these expectations when deciding on the price at which to sell or buy the share. Its offer is then assessed by other merchants in the market (unfortunately, the publication does not detail the rule according to which this is done). The agent then modifies its rules of proceeding depending on the success of the given deal. Every agent has to take the following decisions:
− whether to buy or to sell the given share;
− at what price to do so.
In the model presented in the publication, agents take these decisions following a set of rules. These are if-then decision-making rules which consist of an antecedent and a consequent. The antecedent is coded by a sequence of characters – „1”, „0” or „#” – which is compared to a binary sequence representing the current market situation. The character „#” in a given place fits both „1” and „0”. The consequent of the rule is composed of two parameters – „a” and „b” – used to compute the agent’s expected price E of the share including the dividend (Pt – share value in period t, dt – the dividend for period t):

E(Pt+1 + dt+1) = a (Pt + dt) + b    (4.1)
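As a concrete illustration, the rule encoding described above can be sketched in a few lines. The „1”/„0”/„#” antecedent matching and the (a, b) consequent of equation (4.1) are taken from the text; the function names and the example rule values are illustrative assumptions, since the publication does not specify an implementation.

```python
def matches(antecedent: str, market_state: str) -> bool:
    """Check a rule antecedent ('1', '0', '#') against a binary state string.
    '#' is a wildcard fitting both '0' and '1'."""
    return len(antecedent) == len(market_state) and all(
        a == '#' or a == s for a, s in zip(antecedent, market_state)
    )

def expected_price(a: float, b: float, p_t: float, d_t: float) -> float:
    """Consequent of a rule: E(P_{t+1} + d_{t+1}) = a * (P_t + d_t) + b, eq. (4.1)."""
    return a * (p_t + d_t) + b

# Hypothetical rule that fires whenever the second bit of the state is 1.
rule = {"antecedent": "#1#", "a": 1.05, "b": 0.2}
state = "011"
e = None
if matches(rule["antecedent"], state):
    e = expected_price(rule["a"], rule["b"], p_t=100.0, d_t=2.0)
```

An agent holding many such rules would evaluate only those whose antecedents match the current market-state vector, which is exactly the fitting step the model describes.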

At the beginning of the simulation, every agent receives 900 randomly generated decision-making rules, which are then reduced during the learning phase using a genetic algorithm. This algorithm is based on the assumption that rules generating more precise expected share prices will contribute to increasing the profit and the chances of the agent’s survival in the market. The simulation is split into two phases. The first is the learning phase, during which the decision-making rules of agents are reduced from 900 to several hundred. This is done using genetic algorithms on an artificially generated data set. The data consists of a time series of information reaching the market and share prices. The information is expressed by a number from the range of −3 (very bad), ..., 0 (neutral), ..., +3 (very good). The share price is strictly linked to the information reaching the market according to the following formula:

Pt = (1 + a It−1) Pt−1,    (4.2)

where I is the „value” of information, and P is the share price. The Pt series is called the reference time series of share prices.
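The artificial data set of the learning phase can be generated directly from equation (4.2). A minimal sketch follows; the information range −3..+3 is from the text, while the sensitivity parameter a, the starting price and the random seed are illustrative assumptions.

```python
import random

def reference_prices(p0: float, info: list, a: float = 0.01) -> list:
    """Generate the reference share-price series of equation (4.2):
    P_t = (1 + a * I_{t-1}) * P_{t-1}, information I in -3 (very bad) .. +3 (very good)."""
    prices = [p0]
    for i in info:
        prices.append((1 + a * i) * prices[-1])
    return prices

random.seed(42)  # arbitrary seed, for reproducibility only
info_series = [random.randint(-3, 3) for _ in range(100)]
prices = reference_prices(100.0, info_series)
```

The resulting series is the reference time series against which the prices generated by the agents are later compared.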

The initial simulation phase having been completed, the main simulation is run to determine the following statistics:
− the coefficient of correlation between the reference time series of share prices and the series of prices generated in the simulation;
− the skewness – a positive value of this parameter is characteristic of data from real financial markets.
Thus Neuberg and Bertels describe a model of a heterogeneous environment of independent agents who have their own knowledge stored in the form of decision-making rules. This model is described rather generally, allowing the overall concept to be understood, but it lacks several important details. The publication does not describe how the market condition vector is created, based on which the agents’ decision-making rules are matched. It is not detailed how transactions are concluded between agents and how this impacts the condition vector. Neither is it precisely explained how the genetic algorithm for reducing the agents’ decision-making rules operates in the learning phase.
The literature also contains descriptions of pure simulation programs, such as the publication by J. Chmiel ([21]), who describes a market game. Its main purpose is to analyse the business decisions of players who manually control agents in accordance with independently drawn conclusions and their own opinions. The simulation model includes an extensive set of features making the agent similar to an enterprise, inter alia:
− an internal structure of the agent (departments);
− communication channels inside the agent (between departments);
− the notion of a distribution network/media/advertising.
The publication describes a set of statistics based on which decision-makers take subsequent business steps; however, these statistics can form an integral part of any simulator (after appropriate adjustment):
− profit and loss analysis;
− cash flow statement;
− asset and liability reports;
− sales and resource analyses;
− financial service analyses.
The publication also contains example simulation windows which can be useful when designing software of this type. For the purposes of this monograph, the visualisation system can be adapted, but it is necessary to replace the expectation of the player’s decision with the appropriate algorithms, like the one by Buda ([12]) or the (simpler) one by Wind ([116]).

The publication of S. Izquierdo and L. Izquierdo ([45]) presents a model of a multi-agent system in which the authors analyse a secondary market as exemplified by the secondary market for cars. This model introduces the parameter of quality variability (various products – cars – are of various quality) and of quality uncertainty (the quality of the product cannot be determined before it is purchased and used). The study is an analysis of how the quality parameter can impact the market (mainly the destructive impact is discussed) and distort consumers’ certainty. The system contains two types of market agents: sellers and buyers (consumers). Sellers sell products by setting the minimum rate they are ready to accept, while buyers offer a purchase price based on the expected quality of the product, calculated based on their own experience or information on the experience of others received through the consumers’ „social network”. The quantity of products offered for sale is finite and during one round, sellers and buyers can conclude one transaction. The „social network” is created by randomly joining agents into pairs using a parameter governing the number of links – from complete linking to the complete lack of links. The authors have discovered that without a „social network”, the buyer’s certainty drops to a level at which the market becomes completely illiquid. However, if a network is introduced, experience is accumulated during system operation (both individual and „collective” experience) which ensures that the market remains stable. In addition, experiments show how good collective experience can offset a single bad experience of an individual.
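The buyers’ quality expectation described above can be sketched as follows. Pooling one’s own experience with neighbours’ reports by simple averaging is our simplification – the cited publication does not fix an exact formula here – and the function name is an illustrative assumption.

```python
from statistics import mean
from typing import Optional

def expected_quality(own, neighbours) -> Optional[float]:
    """Pool the buyer's own experienced qualities with those reported by agents
    linked to it in the 'social network'. Returns None if no experience is
    available, modelling the certainty collapse observed without a network."""
    pooled = list(own)
    for reports in neighbours:
        pooled.extend(reports)
    return mean(pooled) if pooled else None

# One bad individual experience offset by good collective experience:
buyer_own = [0.2]                   # a single bad purchase
network = [[0.9, 0.8], [0.85]]      # neighbours report good quality
q = expected_quality(buyer_own, network)
```

With the network present, the pooled estimate stays well above the single bad experience, which mirrors the stabilising effect the authors report.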

4.2. An attempt at a synthesis of market simulation models

Based on the above considerations, a synthesis of the common components making up the model (market players, goods) can be presented. In the following sections, the common elements are collated while design similarities and differences are analysed.

4.2.1. Merchants as market players

An important element of the market consists in the definition of the role of agents representing Merchants. In the previously discussed article by Neuberg and Bertels ([76]), merchants were presented as independent, interactive agents with a strong influence on market behaviour. They have their own reason and knowledge, but follow the patterns below:
− predicting behaviours on the market;
− testing the consistency of predictions;

− formulating offers verifiable by others;
− learning and changing the rules of predictions and decisions based on other agents’ verification of offers.
Merchant-agents also have rules for predicting commodity price developments. The rules are represented by a series of conditions whose fulfilment causes a specific course of action to be taken. Every agent uses genetic algorithms to modify its rules so that they work more simply and efficiently. When doing this, the agent calculates the predicted prices and dividends.
A decision was made to define the factors influencing all agents as precisely as possible. They may be defined by their happiness, prosperity, aggression, information on their own inventory of commodities and the needs of their neighbours. They have their own reasoning rules (what to take from their environment, what to consume etc.). In the right conditions (the correct combination of the factors of energy, happiness and the quantity of goods), the agents can even multiply. However, it is unclear whether such a multitude of parameters could be configured so that no parameter needlessly dominates the system.
In the article ([78]), agents are split into single entities, each of which follows its own character and memory of experiences when analysing the market situation. The impact of advertising influencing the agent, as well as differences in prices and the availability of goods for which there is a demand, were also simulated. On the other hand, in some papers a decision was made to combine requirements into one type, e.g. for the purpose of highlighting the government’s influence on the survivability and prosperity of agents. The approach is somewhat different in the publication of Eichelberger and Hadzikadic ([29]), who present the merchant as a person not acquainted with the entire market and having limited ability to get to know it.
This is because the market is learned only by meeting other merchants, learning their prices and needs and adjusting one’s offers to the information encountered. To simplify the simulation, uniform product types were adopted in this study as well.
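A merchant that learns the market only through encounters, as described above, can be sketched as a simple offer-adjustment rule. The idea of adjusting one’s offer to locally encountered prices is from the cited study; the smoothing rule and the learning-rate parameter are our assumptions, added only to make the mechanism concrete.

```python
def adjust_offer(current_offer: float, observed_prices: list,
                 learning_rate: float = 0.3) -> float:
    """Nudge the merchant's offer a fraction of the way toward the mean of the
    prices observed when meeting other merchants. With no meetings this round,
    the agent has learned nothing and keeps its offer unchanged."""
    if not observed_prices:
        return current_offer
    local_mean = sum(observed_prices) / len(observed_prices)
    return current_offer + learning_rate * (local_mean - current_offer)

offer = adjust_offer(10.0, [12.0, 14.0])  # local mean of encountered prices is 13.0
```

Because each agent only ever sees the prices of the merchants it happens to meet, no agent’s view of the market is complete – which is precisely the limitation the authors model.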

4.2.2. Producers as market participants

Another aspect of modelling is simulating producer behaviour. Some sources ([76]) omit the existence of producers as agents and instead simply simulate limited and controlled sources of products for which there is a demand. In contrast, others ([78]) suggest a strong stress on the simulation of advertising, quality improvements and the quality/price ratio of a given product/required commodity. Producers are subsystems that adopt long-term strategies which they can adjust to market developments in line with established rules. In Chmiel’s model ([21]), in turn, the simulation focuses on the flow of information, which forms the main reason for any changes in the process of producing/manufacturing goods.

Another method of modelling producers is to institute them as entities equal to merchants (also in the form of agents of practically the same type), but fulfilling different functions, as well as reasoning, examining and deciding differently (or at least within a different scope).

4.2.3. Government as a market participant

The government is less frequently seen as a factor in designing systems of this type. What is more, it is not always included at all ([76], [78]), as though it had no impact on the market simulation9. This is probably an oversight, as the government can play a rather significant role in restricting the market and also modifying its behaviour. Quite a popular method is not to create a separate agent ([78]), but simply to picture the government as a set of parameters influencing the market (such as the level of taxes, or setting minimum and maximum prices and the like). Parameters like this support an interesting demonstration of the government’s impact on the economic situation. However, this approach has a rather significant drawback: it is not possible for such a government to learn from the results achieved previously, or to reason and on this basis apply specific decisions to the current market situation. Implementing the government as a separate agent facilitates a more effective use of its functions, such as adjusting the tax revenues collected as well as the method of their distribution between merchants, producers and other agents. Some systems also offer the opportunity to change the government depending on the general satisfaction with its work, decisions and impact on the condition of the population.
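The „government as a set of parameters” approach described above can be sketched as a transaction filter. The parameters (tax level, minimum and maximum prices) are those named in the text; the field names and the exact clamping/splitting rule are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class GovernmentParams:
    tax_rate: float       # fraction of each transaction collected as tax
    price_floor: float    # administratively set minimum price
    price_ceiling: float  # administratively set maximum price

    def apply(self, price: float):
        """Clamp the agreed price to the permitted band and split off the tax.
        Returns (net amount received by the seller, tax collected)."""
        p = min(max(price, self.price_floor), self.price_ceiling)
        tax = p * self.tax_rate
        return p - tax, tax

gov = GovernmentParams(tax_rate=0.2, price_floor=1.0, price_ceiling=100.0)
net, tax = gov.apply(150.0)  # the ceiling clamps the price to 100 before taxing
```

Note that such a parameter set is static: it cannot observe outcomes and adapt, which is exactly the drawback that motivates implementing the government as a separate agent instead.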

4.2.4. Market modelling

A primary feature of an overwhelming majority of approaches is considering information provided to the market as the most important data source. The market must correctly react to information such as differences in prices, demand or supply. The right processes must be simulated to obtain the end result of the simulator’s operation. In the article by Neuberg and Bertels ([76]), the ability of a given product to survive is strictly linked to the profit from its production. Even if it is profitable, sometimes a surplus of other products generating more dividend can cause its suspension or even withdrawal from the market, which can have a dramatic impact on the consumers using it. The article also stresses the ultimate condition: the simulation ends when the demand on the market is fully covered by the products on it, with the simultaneous profitability of producers. In the model of November and Johnstone

9 The government may influence the market in various ways: it can be a market player when it buys services to build investment projects (e.g. infrastructure), or it can be the seller (e.g. by granting licenses). In addition, privatisation and tax redistribution (or rather its method) may influence the market situation.


([78]), just like in many other models, the entire market simulation is based on the fundamental laws of supply and demand and their dependence on the price, value and quantity of available products and of the materials from which they can be produced. An information surplus may have a negative impact on the ability to measure the results of various factors, but in November and Johnstone’s ([78]) publication it forms the key to proving that the market will always be governed (at least partially) by the laws of chaos. What is noticeable in the study by Eichelberger ([29]) is the lack of any artificial limitations which could contribute to changing the situation on the market. A different approach is to simulate the market as a great auction ([12]), bringing in the psychological aspect; the whole exercise is to represent the striving for market equilibrium which actually occurs in contacts between agents. This model must also contain social information and define its impact on the method by which agents take decisions.

4.2.5. Method of running simulations

The analysis of various implementations of market simulations also brings to light completely different principles associated directly with the rules of conducting simulations in the model. In the study by Neuberg and Bertels ([76]), before work starts, agents may modify the observation and reasoning rules in order to reduce them to simpler, less numerous, but possibly more efficient ones. Others ([29]) suggest operating in extreme situations to study the influence of a given factor and also „translating” such a situation into the experience collected by the system. Neuberg and Bertels ([76]) distinguish two ways of interacting with the system modelling the market behaviour:
− inputting data at the beginning (configuration) and waiting for the effects of the simulator’s operation;
− interacting with processes taking place in the system on a current basis, after inputting initial data to begin with (a much more interesting solution).
In the publication by Wind ([116]), the general characteristics are defined using three planes, comprising:
− generalisation – the models need not necessarily be applicable to specific markets/behaviours;
− micro-analysis of individual behaviours and processes taking place within the modelled system in order to take decisions, using previous experience and the effects of immediately preceding actions;
− interactivity – the ability to influence the simulation on a current basis.

The analysis of the above problems leads to several important observations. Firstly, information forms a key element of system operation. Without the right implementation of its sources, priorities, extraction and processing, the system would have serious problems with simulating a world of even slight resemblance to the real one. In many publications it is also noticeable that the agents are kept strongly isolated from one another (so that they do not form the representation of some whole), though without stressing their specificity and individuality too much where not necessary. Isolating agents and differentiating them may contribute significantly to achieving the intended effect when modelling the system.


5. Market model concept for the purposes of ASIHM processes simulation

This chapter presents an outline of a market model concept whose implementation in the form of market simulator software will be used for studying ASIHM processes. It should be noted that the concept presented evolved during the author’s research and represents an extension of the concepts presented in his first articles on modelling the ASIHM ([96, 97]), as well as his subsequent publications ([98, 99]). The model deployed by multi-agent systems, and in particular the M-Agent model described in the publications of K. Cetnarowicz ([16, 17, 18, 19, 20]), was used to develop a formal market model. Thus the proposed market model is based on the architecture of an agent system at a high level of abstraction. This approach is due to the ease of adapting economic models describing the behaviour of a single market player, represented by the abstraction of an agent in a multi-agent model. In the next step, a transformation was proposed for a model thus defined, allowing a transition into the molecular computational model used by Collective Intelligence. The selected approach can thus be illustrated in a layered way: the top layer consists of economic models of the market, which are then adapted by building the model of a multi-agent system which forms the lower layer – this model was called CIMAMSS (Collective Intelligence based Multi-Agent Market Simulation System). The third, lowest layer is the computing layer utilising the molecular computational model. In the transition from layer two to layer three, everything except the information flow and its processing is omitted. Market agents from the multi-agent model layer are directly mapped to information molecules of the third layer. The approach selected by the author is illustrated in figure 5.1. Because a layered model is introduced, the basic notions used by scientists working with multi-agent systems, used for the second layer, also need presenting.

Fig. 5.1. A layered model of a market simulation system. Source: own development.

5.1. Agent based systems

Multi-agent systems are those composed of many computational entities which interact with one another and are referred to as agents1. Agents have two basic features: autonomous action, i.e. deciding what to do to achieve the planned objectives, and interacting with other agents, which means not just exchanging data in the sense of a computer system, but represents analogies to social behaviours observed in everyday life: cooperation, negotiations, coordination etc. Such systems became the object of scientists’ interest in the 1980s, and by the mid-1990s became generally recognisable, leading to a dramatic increase in the interest in them. Increased interest in this programming paradigm coincides with the explosive growth of the Internet, due to the opportunities the paradigm offers for employing massive open systems (such as the Web). However, systems of this type seem primarily to be a natural metaphor of something called artificial social systems ([120]). One of those is the market system, so this paradigm seems the most suitable for creating a model of it. What is more, only in such systems can Collective Intelligence manifest itself. This section discusses the major definitions (as the literature contains very many) of the concepts of agency and the agent, and attempts to systematise the key features of an agent based on the definitions cited. An agent classification follows.

1 Detailed definitions of an agent found in the literature are presented lower down in this chapter.


Further in the chapter, the concept of a multi-agent system is discussed, including the benefits stemming from this approach to developing software.

5.1.1. Agent definitions

The distinguishing feature of agent-based systems is that they are founded on an abstract notion of an agent. Even though the term „agent” is now broadly used by specialists from different fields, it unfortunately lacks a universal definition. Many scientists and institutions have attempted to formulate a precise definition of agency. The definitions the author ([95]) finds most important are presented below ([33]):
1. An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors. AIMA is the acronym of Artificial Intelligence: A Modern Approach, the title of the book by Russell and Norvig ([87]), a best-seller of 1995, used as a teaching aid at over two hundred universities. The AIMA definition is strongly dependent on what we define as the environment, and as perceiving and acting upon the environment. If we understand the environment as anything that takes some input and delivers some output, then any computer program is an agent. So if we want to distinguish an agent from a program, we must introduce additional restrictions.
2. P. Maes of the MIT defines: autonomous agents are computational systems that inhabit some complex dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed ([66], [67]). This definition adds one critical element to the previous one: the agent must be characterised by autonomy in its actions.
3. S. Virdhagriswaran of Crystaliz, in an article2 dealing with the technology of mobile agents, defines an agent as an entity representing two orthogonal concepts: the ability for autonomous execution and the ability to perform domain oriented reasoning. This definition stresses the ability of autonomous execution.
4. B. Hayes-Roth of Stanford University introduces the term intelligent agent as something that continuously executes the following three activities: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions ([41]). The above definition stresses interpreting stimuli to take action in response to them.
5. The IBM website contains an article3, „IBM’s Intelligent Agent Strategy”, discussing eight different possible types of agent applications. Intelligent agents


2 http://www.crystaliz.com/logicware/mubot.html.
3 http://activist.gpl.ibm.com:81/WhitePaper/ptc2.htm.

are understood there as software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user’s goals or desires. In this definition, the main stress is on the independence characterising the agent.
6. Wooldridge and Jennings define an agent as a hardware or (more usually) software-based unit with the following properties ([118]):
− autonomy: agents have control over their actions;
− social ability: agents interact with other agents (and possibly humans) via some kind of communication language;
− reactivity: agents respond in a timely fashion to changes that occur in the broadly-understood environment in which they are located;
− pro-activeness: their reactions to stimuli are more complex and lead to the agent achieving certain goals for which it has been created.
This definition combines features of previous ones and adds the ability of agents to communicate with one another.
7. M. Coen defines agents, for the purposes of the SodaBot system (a development environment for agent-based systems built at the artificial intelligence laboratory of the MIT), as programs that engage in dialogs and negotiate and coordinate the transfer of information ([24]). This definition is somewhat different from the remaining ones, as it mainly stresses the communication between agents.
8. The FIPA (Foundation for Intelligent Physical Agents)4 organization defines agents as computational processes with some autonomy, supporting the functionality for which the application was developed. Agents communicate by exchanging messages which represent speech acts and are coded in the ACL (Agent Communication Language).
The definitions quoted are highly varied, but they also have many similarities, with different authors stressing various aspects of „agency”. Based on the above definitions, the following characteristics of an agent can be distinguished:
1. Intelligent/learning (adaptive): changes its behaviour depending on the stimuli coming from the outside;
2. Communicative, sociable: communicates with other agents ([31]);
3. Autonomous: has control over its behaviour;

4 Official web site: http://www.fipa.org.


4. Reactive: reacts to stimuli in real time;
5. Rational: the agent does not take action contrary to the goal(s) it is following;
6. Goal-oriented: reactions to stimuli are more complex;
7. Mobile: it can move within the environment it operates in;
8. Flexible: its reactions are not fixed;
9. Has character: it features something like an emotional state.
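The recurring elements of the definitions above (AIMA’s sensors and effectors, Maes’ autonomy, Hayes-Roth’s perceive–reason–act cycle) can be collected into a minimal agent skeleton. The class layout and method names below are our sketch, not a formalisation from any of the cited sources.

```python
from abc import ABC, abstractmethod
from typing import Any

class Agent(ABC):
    @abstractmethod
    def perceive(self, environment: Any) -> Any:
        """Sense the environment (AIMA: perceiving through sensors)."""

    @abstractmethod
    def decide(self, percept: Any) -> Any:
        """Reason over percepts to choose an action (Hayes-Roth)."""

    @abstractmethod
    def act(self, action: Any, environment: Any) -> None:
        """Affect the environment (AIMA: acting through effectors)."""

    def step(self, environment: Any) -> None:
        """One autonomous perceive-decide-act cycle."""
        self.act(self.decide(self.perceive(environment)), environment)
```

A concrete agent fills in the three abstract methods; the shared `step` cycle is what makes the unit autonomous in Maes’ sense of sensing and acting without external control.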

5.1.2. Agent classifications

The above definitions are rather general, so attempts have been made to further classify agents. One such attempt is presented by Franklin and Graesser ([33]). The split made by the authors is presented in a diagram below. They distinguish three basic agent classes: living organisms (biological agents), robots and computational agents, the latter further split into the subclasses of software agents and artificial life agents. Software agents are sub-divided depending on their intended function.

5.1.3. Multi-agent systems

Agent-based systems have been gaining popularity since the 1990s, as this technology introduced a new paradigm to the conceptual analysis, design and implementation of IT systems, while the applications of agent-based systems also became attractive in distributed environments ([104]). It is possible for such a system to be composed of a single agent, but the real potential of this technology comes from projects composed of several or more agents who communicate and interact with one another. Such systems are called Multi-Agent Systems (MAS). In the 1960s and 1970s, the majority of artificial intelligence researchers developed theories and technologies concerning the behaviour and reasoning of a single unit (e.g. expert systems ([65])). However, as artificial intelligence developed, it turned out that real problems are too complicated and complex to solve using a single program. In addition, such a program was in itself limited by the use of simplified models. These conclusions led to further work which produced, among others, agent-based systems. However, a single agent was still a being limited by its knowledge and computing resources. As it was known that the most powerful tools for tackling complexity are modularity and abstraction, in the 1990s research was directed ([87]) at designing systems composed of a larger number of agents. Such systems are called multi-agent ones. Multi-agent systems offer this modularity: if the problem domain is complex, large and unpredictable, the only way of getting to grips with it is to develop components of a specific functionality which solve specific sub-problems. Such decomposition allows every agent to use the most suitable methods to solve the sub-problem

it is responsible for. As individual components of the system become more interdependent, the agents must coordinate their actions more and more closely to ensure that these dependencies are managed to the appropriate standard. Real problems require the application of distributed, open systems. An open system is one capable of dynamically changing its structure depending on the situation. One of the characteristic features of such systems is that not all their components can be determined a priori, while the components themselves can change over time. The system may be composed of a group of heterogeneous agents developed by various people using different environments. Probably the best example of such a system is the Internet, which can be treated as one huge, distributed source of information resources with nodes within a network implemented by various organisations. In an open environment of an information source, communication links can appear and disappear in unforeseen ways. Currently, the use of agents on the Internet is restricted to acquiring and filtering information. However, in the near future, agents will be able to collect information, use it to reason and execute complex jobs to support the solution of a specific problem from any given field. This ability requires communication between agents and some coordinating action, which significantly increases the capability of a single agent. Research on multi-agent systems covers, inter alia, the behaviour and identification of autonomous agents who interact with one another and the environment surrounding them. Researching such systems goes beyond the range of a single system, which could be represented by e.g. an expert system, if only because of the additional „sociological” dimension associated with the cooperation of many agents and the opportunities offered by using that dimension to solve various real problems.
A multi-agent system can be defined as a network of loosely connected subsystems – problem solvers – which interact with one another to solve a problem beyond the capability of any single one. Such subsystems (called agents) can by their very nature be autonomous and heterogeneous. The following characteristic features of a multi-agent system can be distinguished:
− no single agent has complete information on how a problem should or could be solved;
− the system has no global control;
− data is distributed;
− calculations are made asynchronously.
It is worth noting that all these features also appear in Collective Intelligence and its computational model ([107]). In addition, the abstraction of an agent seems well suited for encapsulating economic models describing the behaviour of a market player (discussed in the following chapter).
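The four characteristic features listed above can be illustrated in a toy form: each agent holds only a fragment of the data (partial knowledge, distributed data), no coordinator computes the answer itself (no global control), and the partial results arrive asynchronously via a queue. The problem chosen (summing distributed fragments) and all names are illustrative assumptions.

```python
import asyncio

async def agent(name: str, local_data: list, results: asyncio.Queue) -> None:
    """Each agent sees only its own data fragment - no agent holds the whole problem."""
    await asyncio.sleep(0)                 # yield control: agents run asynchronously
    await results.put((name, sum(local_data)))

async def solve(fragments: dict) -> int:
    """Combine the agents' partial solutions; no single agent computes the total."""
    results: asyncio.Queue = asyncio.Queue()
    await asyncio.gather(*(agent(n, d, results) for n, d in fragments.items()))
    total = 0
    while not results.empty():
        total += (await results.get())[1]
    return total

total = asyncio.run(solve({"a": [1, 2], "b": [3, 4], "c": [5]}))
```

Even in this toy, the decomposition mirrors the MAS rationale: each subsystem solves the sub-problem it is responsible for, and only their cooperation yields the global solution.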

Thus multi-agent systems are increasingly widely used because of the following advantages they offer ([104]):
− the ability to solve problems too big for single systems with their limited resources, and the risk that a single system of such a type could have too little capacity;
− the ability to connect and ensure cooperation between existing systems if reprogramming them would be too costly and time-consuming (especially as every system is relatively frequently upgraded);
− the ability to apply them to a certain class of systems which by their very nature are composed of loosely interconnected autonomous components – it is natural that such systems can be designed applying the MAS;
− delivering solutions in a situation where information is distributed (e.g. finding information on the Internet);
− delivering solutions when the expertise is distributed (e.g. in medicine);
− raising the system capacity by their natural ability to run computations in parallel;
− improving the system stability, as system components (agents) are found dynamically – if a system component becomes unstable in its operation, another one of the same functionality can be found;
− the ease of extending the system by adding new agents;
− easy maintainability – a system like this is easier to maintain due to its highly modular nature: all anomalies in the system behaviour are local and do not propagate onto the whole system;
− system flexibility – agents of various skills can organise themselves to solve the „current problem”;
− reusability – agents of a specific functionality can be used in various systems.

5.2. Modelling an agent – the M-Agent architecture

Based on the considerations from the previous chapter, it can be assumed that the environment in which market participants are embedded is an indispensable element of every market model. The environment provides market players with resources which may be distributed unequally and which enable the players to survive. In some multi-agent system models ([79]), it is assumed that the environment provides agents with energy, which is unlimited, but may also be unequally distributed and enables the agents to survive. However, a more general approach seems to be more

suitable. It is associated with the direction of research on agent-based systems which stresses the cooperation and interactions of many agents embedded in a common environment ([79, 46]). In this approach, the environment is defined by referring to an agent operating in it, as something that feeds information to the agent and stimulates it to act. On the other hand, an agent can take action that influences the environment. An even more general approach is presented by M. Kisiel-Dorohinicki ([54, 55]), who defined the environment as those elements of the agent world which are not agents. In this approach, the condition of the agents and the environment is described in the categories of resources and information possessed or contained, which is divided into quanta (measurable portions). Physical variables determining the agent’s ability to perform an action are assumed as the resources. Resources are exhaustible, and at any given time every quantum of a resource is assigned to a specific agent or a part of the environment. Information, in turn, constitutes the basis for the agent to decide on the action to take and can be duplicated. Whereas the quantity of a resource assigned to an agent or a part of the environment is always well defined, in this model information can be incomplete or uncertain. An action is understood as an atomic (non-divisible) operation or activity that can be executed by an agent and in a given state of the system has certain effects: on the one hand it impacts the condition of the agent, on the other the condition of the environment and possibly of other agents. The above concept forms the starting point for constructing a multi-agent economic model for the purposes of the ASIHM simulation. This model will be called the CIMAMSS (Collective Intelligence based Multi-Agent Market Simulation System) and is understood here as the simplest possible reflection of the market reality, or the economy5.
Literature describes many approaches to modelling agents ([118, 119, 74, 94, 11]), but none of them seems to provide a formalization sufficient for modelling the ASIHM with a multi-agent system. The simplified model of an agent-based system which will form the foundation for describing the CIMAMSS system will be designed using the M-Agent architecture concept presented by K. Cetnarowicz ([15, 16, 17, 18, 19, 20]) and the EMAS system concept of M. Kisiel-Dorohinicki ([54, 55]). This concept calls for observing and modelling the environment from the perspective of a single agent, whereas the model of the world surrounding it forms the starting point for taking decisions on selecting the strategy that best achieves the specified goal. The agent is able to adapt to changing conditions through a learning process which consists in observing the effects of actions taken, which influence its behaviour in the future.

5 An economy is understood as a set composed of: the economic system of a given country, the capital, the market players, and the area in which they operate, which can (but need not) be identified with the country's territory.


The general pattern of an M-Agent's action is cyclic and consists of:
1. Observing the state of the environment;
2. Taking a decision on the most adequate method of action;
3. Executing operations in accordance with the decision taken.

In this model, the agent has many methods of action available, and the decision-making process, aimed at selecting the most suitable one, is performed as follows:
1. The agent builds a model of the perceived environment (m);
2. The agent establishes the projected changes of the model (m′) which can result from individual ways of acting (s);
3. The agent evaluates the projected changes to select the most appropriate way of acting in the light of the current goal (q);
4. The agent selects and executes the best strategy of action.

The formal definition of an agent in this model is as follows ([20]):

ag = {M, Q, S, I, X, L, m, q, s},   (5.1)

where:
− M – set of possible models of the environment, representing the agent's knowledge about the environment, m ∈ M;
− Q – ordered set of the agent's goals, q ∈ Q;
− S – set of possible strategies of the agent, s ∈ S;
− I – observation operator, m = I(M, V), where V is the state of the environment;
− X – operator of strategy execution, V′ = X(s, V), where V′ is the predicted state of the environment after execution of strategy s;
− L – adaptation (learning) operator.

This concept, in which the starting point is the principle of observing and modelling the environment from the point of view of a single agent, seems very well suited to developing a model of the ASIHM, whose genesis is the behaviour of a single human (market player) striving to maximise its own utility function (the measure of 'its interest') and acting within a social structure of many people following the same principle.

The model presented herein adapts the rules of designing a multi-agent system using the M-Agent architecture to developing a market model in which ASIHM processes will run. Consequently, some elements proposed in the M-Agent ([20]) and EMAS ([54]) architectures will be omitted, and elements representing the market nature of the system will be added. It seems legitimate to present the model in two phases: in the first, basic notions will be introduced (following the primary notions referred to earlier) and the skeleton of the model (shown in figure 5.2) will be presented. In the second phase, this skeleton will be detailed by narrowing down the notions introduced earlier and by adapting models taken from microeconomic theory. The author's main intention is to develop a market model, and this is reflected in the definitions and in the decisions on the directions of detailing the model.
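The observe–decide–execute cycle and the operators I and X from equation (5.1) can be sketched in a few lines of Python. All class and function names below are free illustrations of the author's symbols, not an implementation taken from the cited works; the environment model is simplified to a plain dictionary.

```python
# A minimal sketch of the M-Agent decision cycle (eq. 5.1).
# All names here are illustrative assumptions, not the cited architecture's API.

class MAgent:
    def __init__(self, strategies, goal):
        self.S = strategies        # set of possible strategies, s ∈ S
        self.q = goal              # current goal q ∈ Q: scores an environment model
        self.m = None              # current model of the environment, m ∈ M

    def observe(self, environment):
        """Observation operator I: build a model m of the perceived environment."""
        self.m = dict(environment)     # here the model is a plain snapshot

    def predict(self, strategy):
        """Strategy-execution operator X: projected model m' after applying s."""
        return strategy(self.m)

    def step(self, environment):
        """One cycle: observe, evaluate projected changes, execute the best strategy."""
        self.observe(environment)
        best = max(self.S, key=lambda s: self.q(self.predict(s)))
        return best(self.m)            # execute the selected strategy


# Toy usage: two strategies, a goal that maximises the agent's resource level.
gather = lambda m: {**m, "res": m["res"] + 1}
rest   = lambda m: dict(m)
agent = MAgent({gather, rest}, goal=lambda m: m["res"])
print(agent.step({"res": 3}))          # the 'gather' strategy wins
```

The goal q is modelled as a scoring function over environment models, so "evaluating projected changes" reduces to a `max` over strategies, which mirrors steps 1–4 of the decision process above.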

Fig. 5.2. A simplified diagram of the design of a multi-agent system. Source: own development based on a study by M. Kisiel-Dorohinicki ([54])

To create the labels, the following general rules were used:
− symbols beginning with lower-case letters denote individual objects, values or relations (e.g. ag – agent);
− capitalised symbols denote a space or a set (e.g. Ag – a set of agents);
− all labels concern the state at time moment t, so the symbol ag(t) is simplified to the notation ag;

− to represent a specific element of a set or a specific value, numbers are used as subscripts, so ag1 denotes the first agent, whereas for a general, non-specific reference, e.g. to the i-th agent, the subscript i is used: agi.

5.3. Basic elements of the CIMAMSS system

The CIMAMSS multi-agent system consists of a set of market participants called market agents (agents ∈ Ag) and the environment (env) within which they operate at a specific time t:

CIMAMSS = ⟨agents, env, t⟩   (5.2)

All elements of this system, together with their variants, are described below.

5.3.1. Environment

The environment defines the space within which market agents operate and provides them with resources necessary for their survival. This space is restricted physically and in time. Just as in the model presented by M. Kisiel-Dorohinicki ([54]), a resource is understood as a certain physical value that is divisible into quanta (exists in portions), is exhaustible (a limited quantity of the resource is available), and may be distributed non-uniformly within the environment and available to agents in different ways. An additional feature introduced here is that the environment can renew a resource (its available quantity can increase over time). Resources are supplied by the environment and can be taken from it by agents. As in M. Kisiel-Dorohinicki's model, a resource can be assigned to the environment or to an agent. However, there is a major terminological difference, necessary to maintain consistency with the economic models on which the agent's actions are based: when a resource is taken from the environment and assigned to an agent, it is called a good. Once a good has become the subject of trading between agents, it is called a commodity6. The absorption of a good by an agent is associated with a certain outlay, whose precise implementation is discussed later in this chapter together with market agents.

In addition, the environment defines certain economic parameters associated with the financial system: the interest rate of loans cr and the deposit interest rate dr. These parameters are taken into account by agents when deciding to take out a loan or place a deposit (financial market transactions). Another parameter of the environment is the tax rate together with the method of tax redistribution (in the simulation, this parameter represents the government's fiscal policy – it is the only way in which government actions were accounted for in the simulation7).

The above considerations can thus be used to write a formal definition of the environment: generally, the environment is a set of certain resources and information.

Definition 5.1. Environment:

env = ⟨Res, Inf⟩,   (5.3)

6 This terminology is generally accepted in works on economics: a commodity is a good that is traded; a good, on the other hand, is the object consumed.

whereas8:
− Res – the set of resources in the environment;
− Inf – the set of information.

This representation allows much freedom, as resources and information can directly represent the input or output data of the system and can thus be transferred to agents. In addition, it can easily be detailed for the requirements of the ASIHM. And so:

Inf = ⟨cr, dr, tr, ra, ri⟩,   (5.4)

whereas:
− cr – credit interest rate;
− dr – deposit interest rate;
− tr – tax rate in % (flat-rate tax);
− ra – an algorithm defining the tax redistribution method;
− ri – the radius within which an agent observes the world and interacts with other agents9.

This representation imposes no restrictions on the form of the environment, but in practice an environment with a spatial structure is most frequently used. This is due to the specific nature of multi-agent systems, which are generally distributed. Consequently, definition 5.2 for an environment with a spatial structure takes the form of a triple with space as its third element.

7 This parameter was not taken into account in all of the experiments completed.
8 The meaning of the particular components will be discussed later.
9 This radius makes sense only for an environment with the spatial structure discussed below.


Definition 5.2. An environment with a spatial structure:

env = ⟨Res, Inf, space⟩,   (5.5)

whereas:
− Res – environment resource set;
− Inf – information set (note: information is globally available within the entire space);
− space – the space of the system.

A system whose environment has a spatial structure is characterised by a number of properties:
− topology: the possible locations of resources and agents;
− defined positions of agents;
− defined places where resources are found;
− the range of agent observation and action, including the ability to move, the observation radius, etc.
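The structures of definitions 5.1 and 5.2 could be encoded, for instance, as plain data classes. Field names follow the text's symbols (Res, Inf, cr, dr, tr, ra, ri); the Python layout and the sample values are assumptions of this sketch, not part of the CIMAMSS specification.

```python
# Illustrative encoding of definitions 5.1 and 5.2; sample values are assumed.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Information:
    cr: float             # credit interest rate
    dr: float             # deposit interest rate
    tr: float             # flat tax rate, in %
    ra: Optional[object]  # algorithm defining the tax-redistribution method
    ri: int               # radius of observation and interaction

@dataclass
class Environment:
    Res: dict                       # resources: name -> quantity (simplified)
    Inf: Information                # information, globally available
    space: Optional[object] = None  # present only under definition 5.2

env = Environment(Res={"grain": 120.0},
                  Inf=Information(cr=0.08, dr=0.04, tr=19.0, ra=None, ri=2))
print(env.Inf.ri, env.space)   # 2 None
```

Leaving `space` optional reflects the two-tier definition in the text: the plain pair of definition 5.1, extended to the spatial triple of definition 5.2 when a topology is present.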

5.3.2. Environment space

In the majority of multi-agent system deployments, the environment has the structure of a graph. The reason is that such systems are frequently built as decentralised ones running within a computer network, which has exactly that structure, making it easy to design simulations. For the purposes of a market simulation, such a space can consist of a two-dimensional grid. This is obviously quite a simplification of the world around us, which is three-dimensional10, but it should not compromise the generality of the considerations. The space of the environment can be described, in line with definition 5.3, as a quadruple (figure 5.3).

Definition 5.3. Environment space:

space = ⟨Pl, Tr, Aloc, Rloc⟩,   (5.6)

10 For the purposes of an economic simulation this simplification seems justified: in documents executed when e.g. a car is sold, the transaction location is given as the locality and address, but the storey number or the elevation at which the transaction was finalised is not specified. Thus the need to introduce a third spatial dimension may arise only in special situations which are not the subject of these considerations.


whereas:
− Pl – the set of possible locations;
− Tr – a relation describing the agents' ability to move between locations;
− Aloc – an injective function defining agents' locations (an agent can be at only one place at a given time);
− Rloc – an injective relation defining the distribution of resources.

Fig. 5.3. A multi-agent system featuring an environment with a spatial structure. Source: own development based on a study by M. Kisiel-Dorohinicki ([54])

It is worth noting that if the graph ⟨Pl, Tr⟩ is non-oriented, which corresponds to the Tr relation being symmetric, Tr is the relation of neighbourhood between nodes. In the special case of the 2D grid in the CIMAMSS model, the Tr relation given in definition 5.4 is used.

Definition 5.4. Agent transfer in the CIMAMSS system. The Tr relation defines agent transfer as follows:

Tr = {((x1, y1), (x2, y2)) : |x1 − x2| ≤ 1, |y1 − y2| ≤ 1}   (5.7)

whereas:
− K – a finite subset of N, the natural numbers from 0 to k ∈ N;
− Pl – the set of possible locations, with pl = (xi, yj) ∈ Pl.

Every node pl ∈ Pl can be described in terms of the resources and information associated with it:

pl = ⟨Res_pl, Inf_pl⟩   (5.8)

and, assuming that information is available globally (for each pl ∈ Pl), this can be simplified to the following notation:

pl ≡ ⟨Res_pl⟩   (5.9)

It is generally assumed that resources are available only to agents located in the surroundings of the node in which the resources are located. In the CIMAMSS model, an additional restriction is assumed: resources are available only to agents present in the same location as the specific resource. In the CIMAMSS model, the graph ⟨Pl, Tr⟩ is connected, and consequently every agent is able to reach every place in the environment. It therefore becomes natural to define the distance metric as the minimal number of moves from node to node while maintaining the neighbourhood relation (customarily referred to as a "hop"11 in IT terminology). This metric (the distance between two points within the mesh) can be defined as below.

Definition 5.5. The metric of distance within the environment space. The metric is a distance function d assigning a real number to two points in space, d : Pl × Pl → R:

d(p(i1,j1), p(i2,j2)) = max(|i1 − i2|, |j1 − j2|)   (5.10)

Since the Tr relation allows each coordinate to change by at most 1 in a single move, the minimal hop count is the larger of the two coordinate differences; note also that the distance of a point from itself is zero (as required by the definition of a metric). In such a space, the surroundings of a node are described using the above metric, on the basis of which the interaction radius ri introduced in definition 5.1, equation 5.4, is calculated.

11 The term 'hop' is widely used for the metric of network node distance in routing algorithms within a computer network.
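The Tr relation of definition 5.4 and the hop metric of definition 5.5 translate directly into code. The sketch below is illustrative; since one move may change each grid coordinate by at most 1, the minimal number of hops between two nodes is the larger of the two coordinate differences (the Chebyshev distance).

```python
# Hop distance on the 2D grid of definition 5.4. Function names are illustrative.

def tr(p1, p2):
    """Tr relation (eq. 5.7): can an agent move from p1 to p2 in one step?"""
    (x1, y1), (x2, y2) = p1, p2
    return abs(x1 - x2) <= 1 and abs(y1 - y2) <= 1

def d(p1, p2):
    """Distance metric of definition 5.5: minimal number of hops."""
    (i1, j1), (i2, j2) = p1, p2
    return max(abs(i1 - i2), abs(j1 - j2))

def neighbourhood(p, ri, Pl):
    """Nodes within the interaction radius ri of definition 5.1, eq. 5.4."""
    return {q for q in Pl if d(p, q) <= ri}

Pl = {(i, j) for i in range(5) for j in range(5)}
print(d((0, 0), (3, 1)))                  # 3 hops: diagonal moves cover both axes
print(len(neighbourhood((2, 2), 1, Pl)))  # 9: the node itself plus its 8 neighbours
```

The surroundings used for resource access and agent interaction are then simply `neighbourhood(loc, ri, Pl)`.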


5.3.3. The agent in the CIMAMSS system – the market participant

As modelling the agent is a key element in designing the model for the purposes of simulating ASIHM processes, this element will be discussed in two stages. The first stage presents the general approach to modelling an agent in an agent-based system, based on the M-Agent concept presented by K. Cetnarowicz ([16, 17, 18, 19, 20]) and the EMAS system concept of M. Kisiel-Dorohinicki ([54, 55]). At the second stage, the presented model is supplemented with details associated with market behaviour modelling, based on microeconomic theory and mathematical models of the so-called Vienna school of economics ([50]).

Actions performed by an agent

An agent is defined by the set of actions it can execute (abbreviated to Act), whereas the internal structure of an agent, which determines, among other things, the decision-making algorithms responsible for selecting a specific action to be taken, is defined as a set of profiles (abbreviated to Pr).

Definition 5.6. Agent. An agent is composed of a set of actions that it can take and a set of profiles which describe the agent's condition:

ag = ⟨Act, Pr⟩,   (5.11)

whereas:
− Act – the set of actions that the agent can take;
− Pr – the set of the agent's profiles.

Agent's profile

The profile of an agent describes the agent's condition from the point of view of a specific aspect of the agent-based system operation. The following profiles are distinguished:
− the physical (also called energy) profile, associated with the possession of a given resource by the agent;
− the intellectual (also called information) profile, associated with modelling a selected aspect of the agent-based system operation;
− the spatial profile, associated with an environment having a spatial structure (this assumption has been made in subsection 5.1.2). In such an environment, the agent can feature the movement (migration) action, designated "mi".

Definition 5.7. The agent's physical profile:

pr_res = ⟨res, St, Gl⟩   (5.12)

where:
− res – the quantity of resources/goods currently held;
− St – a set of strategies associated with the goods/resources held;
− Gl – a set of goals concerning the resources held.

Definition 5.8. The agent's intellectual profile:

pr_mdl = ⟨mdl, St, Gl⟩   (5.13)

where:
− mdl – a set of information representing the agent's knowledge of the condition of the world surrounding it ("mdl" stands for "model");
− St – a set of strategies concerning the world model;
− Gl – a set of goals associated with the world model.

The agent builds the world model by collecting information from the surrounding world and observing other agents (e.g. through interactions in the form of commodity exchanges), assuming that it features the appropriate actions representing information-sourcing mechanisms. The knowledge found in the model is usually incomplete due to the fragmentation of observations, limited contacts with other agents and the dynamic changes taking place in the agent's surroundings.

Definition 5.9. The agent's spatial profile:

pr_map = ⟨map, St, Gl⟩,   (5.14)

where:
− map – a model of the part of the space (this part is associated with ...) known by the agent;
− St – a set of strategies concerning the space, in particular the strategy related to the action of moving;
− Gl – a set of goals associated with the space (in the CIMAMSS model this is movement towards a 'better market', discussed in the following chapter).

In the case of the space from definition 5.3 and an observation radius equal to ri, according to the metric from definition 5.5, the space model map may have the following form:

map = {pl : pl = loc(ag) ∨ d(loc(ag), pl) ≤ ri}   (5.15)
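The known part of the space reads directly as a set comprehension: the agent knows every location within its observation radius. The sketch below is self-contained and all helper names are assumptions.

```python
# A sketch of the spatial-profile map: the part of the space an agent knows is
# every location within its observation radius ri. Names are illustrative.

def hop_distance(p1, p2):
    # minimal number of 8-neighbourhood grid moves between two nodes
    return max(abs(p1[0] - p2[0]), abs(p1[1] - p2[1]))

def build_map(agent_loc, Pl, ri):
    """map = { pl : d(loc(ag), pl) <= ri } — includes the agent's own node."""
    return {pl for pl in Pl if hop_distance(agent_loc, pl) <= ri}

Pl = {(i, j) for i in range(4) for j in range(4)}
print(sorted(build_map((0, 0), Pl, 1)))   # the four cells (0,0), (0,1), (1,0), (1,1)
```

Because the distance of a node from itself is zero, the `pl = loc(ag)` disjunct is covered automatically by the radius test.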

The agent's strategies

The set of strategies defines the agent's knowledge about the possibility of influencing the inventory of the resources it holds or the world surrounding it.

Definition 5.10. The agent's strategy set:

St = {st : res → res′ ∨ mdl → mdl′}   (5.16)

where:
− res – the quantity of resources/goods currently held;
− mdl – the set of information representing the agent's knowledge about the condition of the world surrounding it.

M. Kisiel-Dorohinicki ([54]) presents the following breakdown of the elements belonging to the set of strategies:
− Simple strategies – demonstrate how an agent perceives the influence of single actions on the resources associated with the profile or on the model:

st = [act], act ∈ Act   (5.17)

− External strategies – describe the effects of other agents' actions, provided this knowledge is obtainable.
− Complex strategies – describe the effects of executing sequences of simple strategies (i.e. actions) and/or external ones, if the agent features the planning ability:

st = [st1, st2, ..., stn],   (5.18)

where sti = [act] or sti is an external strategy.

The agent's goals

M. Kisiel-Dorohinicki ([54]) assumed that the agent's goals describe the agent's requirements concerning the resources it holds or the model, from the point of view of a specific profile, giving the agent grounds for deciding to take specific actions. This approach is rather flexible and leaves a lot of freedom in implementing this element in a specific model of an agent-based system. For every agent, two types of goals are introduced:

− active goals – indicating a direction of change of the model or the resource that is beneficial from the perspective of the agent (or, specifically, its profile);
− conservative goals – defining the bounding conditions on the resource or the model, which block the execution of actions that would lead to exceeding the limitations applicable to a given profile.

M. Kisiel-Dorohinicki assumes in his publications that the majority of goals are active in combination with a specific configuration of the profile (i.e. a specific inventory of resources or condition of the model), and conservative in the remaining cases. He also distinguishes goals that are always active – including maximising the inventory of the resource held (particularly important from the point of view of modelling ASIHM processes and the economic models used for this purpose) – and goals that are always conservative (goals for which there is no execution strategy). Introducing this division of the agent's goals makes it possible to reduce the agent's decision-making problem to the selection of a strategy for achieving the selected active goal that does not contradict any conservative goal. This approach helps overcome a basic difficulty, namely that of developing an agent's decision-making model reflecting the operation of a market participant.

The agent's actions

Based on the definitions presented previously, the agent's operations can be interpreted as follows. Based on the resources held and the current world model, the agent takes decisions to:
1. select the active goal, represented by gl∗;
2. select the strategy for achieving goal gl∗, represented by st∗;
3. execute the actions making up strategy st∗, represented by act∗.
The agent's activities change the condition of its surroundings, made up of the condition of the environment and of other agents, as well as the condition of the agent itself, which can gain or lose specific resources and extend its knowledge.

5.3.4. Resources, goods and commodities

Definition 5.1 introduces the notion of a resource supplied by the environment. In the CIMAMSS model, the notion of a resource is replaced with the notion of a good or a commodity depending on the context in which it appears; this is necessary to stay consistent with the concept names of economic theory (cf. [112, 5]). When an agent collects a resource, this resource is called a good, whereas when the good is traded on the market, it is called a commodity. In addition, resources supplied by the environment can be equated with primary goods, which can be further processed after they are collected by the agent. If the agent processes a good in order to produce another good from it, the name product or semi-processed product is used in this context.

Resources are countable and quantifiable and have the following attributes:
− Name: the name is a unique identifier, i.e. no other resource can exist under the same name (however, the same resource can be present in various places and quantities within the environment or in the possession of various agents).
− Colour: for visualisation purposes.
− Dependent goods: a set of goods and their quantities needed to produce one unit of the good (note: this attribute makes sense only in the context of an agent, i.e. only if the agent is capable of processing goods, as the environment is not capable of doing so12).
− Quantity: in every context where a resource occurs, it does so in a certain finite quantity. In a specific location within the environment, the resource is present in a certain quantity. This is also true in the context of an agent, i.e. an agent can hold a strictly specified quantity of a given good. If a good is the object of a transaction (in this context it is called a 'commodity'), then a defined quantity of it is exchanged within the transaction.
− Expiry date: this attribute makes sense only in the context of an agent; a good can be consumed by the agent only for a certain time from the moment it is collected from the environment.
− Growth rate: makes sense only in the context of the environment; it is the percentage by which the quantity of the resource increases within a unit of time (a measure of the renewability of the resource in the environment).

Thus resources can be defined as:

Res = ⟨N, Attr⟩,   (5.19)

where:
− N – resource name;
− Attr – attribute set.

12 This also represents a certain adopted simplification. A situation is imaginable in which the environment supplies a resource in the form of plants which, with the passage of time, turn into hard coal. In the model described, such a function would have to be mapped by defining two independent goods in the environment: the plant and the coal.
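The attribute list above maps naturally onto a record type. The field names, defaults and the renewal step below are assumptions of this sketch, not the book's code.

```python
# An illustrative encoding of the resource attributes of eq. 5.19.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Resource:
    name: str                     # unique identifier
    colour: str = "grey"          # visualisation only
    quantity: float = 0.0         # finite amount in the given context
    growth_rate: float = 0.0      # % increase per time unit (environment only)
    expiry: Optional[int] = None  # time-to-live once held by an agent
    dependent_goods: dict = field(default_factory=dict)  # inputs per unit produced

    def renew(self):
        """One time step of renewal in the environment (measure of renewability)."""
        self.quantity *= 1.0 + self.growth_rate / 100.0

wood = Resource("wood", quantity=100.0, growth_rate=5.0)
wood.renew()
print(round(wood.quantity, 6))   # 105.0
```

Attributes that only make sense in one context (e.g. `expiry` for agents, `growth_rate` for the environment) are simply left at their neutral defaults in the other context.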


5.3.5. Decision-making

The above considerations of the internal architecture of an agent justify the observation that the architecture presented leaves some freedom as to the actions that can be taken. As more than one profile is allowed, which means that multiple goals are allowed, the method of selecting the active goal and the strategy of achieving it becomes significant. Müller ([73]) proposed a layered agent architecture in which subsequent layers correspond to the levels of abstraction of information known about the environment13. Following this approach, M. Kisiel-Dorohinicki ([54]) proposed introducing a relation structuring the set of profiles, which at the same time allows the agent's decision-making to be determined by introducing a hierarchy of profiles. Such a structure defines the priorities of active goals on the one hand, and the sequence of looking for a strategy to achieve them on the other. The author's goal was to introduce profile "intelligence layers" (also called "cognition layers" by the author). Profiles lower down in the hierarchy are to represent the more basic needs of the agent (most often associated with the resources held) and are to have a more reactive nature due to their structure, mainly made up of simple strategies. Profiles that are higher up should be characterised by a greater abstraction of the model and a greater number of complex strategies making up the profile. The author considers the talk of "intelligence layers" in the context of such an architecture to be a bit excessive, but there is an economic model very similar to the hierarchy introduced. This model originates from psychology and is referred to as Maslow's hierarchy of needs ([39]), after the author of the model. It introduces a hierarchy of needs as their sequence from the most basic ones (physiological needs) to higher-level needs, which arise only once those lower down are satisfied.
These needs are graphically represented by a pyramid, as shown in figure 5.4. According to this model, a rational person does not start satisfying their needs from the top of the pyramid14. The following chapter deals with adapting this model for the purposes of simulating the ASIHM. On the other hand, one of the main assumptions of economic analysis is that market participants behave rationally. Consequently, it seems reasonable to attempt adapting this model to the needs of market simulation. Under the above assumptions, the process by which an agent takes a decision progresses through three stages, each embedded in the next:

13 Depending on whether just one layer or all layers interact with the environment, the layers are arranged vertically (only one interacts with the environment) or horizontally (all layers interact with the environment).
14 In management science, this is translated into the following practice: the management should motivate staff taking into account the level of every employee in the hierarchy of needs.


Fig. 5.4. Maslow’s hierarchy of needs. Source: Wikipedia.

1. The first stage is the search for an active goal that is the lowest in the hierarchy of profiles and for which an execution strategy can be found.
2. The second stage is the search for the lowest-placed strategy for achieving the specified active goal.
3. The third stage is the verification of the actions belonging to the proposed strategy against the conservative goals of all profiles.

The process of decision-making in such a model is presented in figure 5.5.
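The three-stage procedure can be sketched as a single scan over the profile hierarchy: pick the lowest active goal that yields a strategy, then verify that strategy against every conservative goal. All structures below are illustrative assumptions, not the EMAS implementation.

```python
# A sketch of the three-stage decision procedure over a profile hierarchy.
# Profiles are ordered bottom-up; each holds active and conservative goals.

def decide(profiles, state):
    """An active goal is a callable returning a strategy (list of actions) or
    None; a conservative goal is a predicate over (state, strategy)."""
    conservative = [g for p in profiles for g in p["conservative_goals"]]
    for profile in profiles:                       # stage 1: lowest active goal
        for goal in profile["active_goals"]:
            strategy = goal(state)                 # stage 2: find a strategy
            if strategy is None:
                continue
            if all(ok(state, strategy) for ok in conservative):
                return strategy                    # stage 3: verification passed
    return []                                      # no admissible strategy found

# Toy usage: a physical profile that wants to consume when resources run low,
# guarded by a conservative goal forbidding a negative money balance.
eat_goal = lambda s: ["consume"] if s["res"] < 5 else None
no_debt  = lambda s, strat: s["money"] >= 0
profiles = [{"active_goals": [eat_goal], "conservative_goals": [no_debt]}]
print(decide(profiles, {"res": 2, "money": 10}))   # ['consume']
```

The failed-verification branch simply falls through to the next candidate goal, which corresponds to the return arrow (d)–(e) in figure 5.5.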

5.4. Market modelling

The first chapter contains a detailed discussion of the paradigm of the Invisible Hand of the Market. A basic problem, however, is to find a mathematical model that would support developing a model of the market player's behaviour consistent with this paradigm. The basic feature of the ASIHM process is that individuals (market players) follow their own interest and, by striving to satisfy their own needs as fully as possible, must account for the needs of other market players (a producer interested in selling the commodities it produces must take the consumers' needs into account). This leads to a global improvement of the economic situation that the market players never consider when taking their decisions.

Fig. 5.5. The decision-making process of an agent: a) selection of the active goal; b) selection of the strategy of achieving it; c) verification of actions; d) negative execution result – return to stage 2 (under b); e) strategy re-selection; f) successful verification of the action. Source: M. Kisiel-Dorohinicki ([54]).


Thus, in trying to explain the behaviour of a human (a market player), economics provides the basis on which the entire analysis rests. Economic theory ([112, 7, 5]) overwhelmingly uses structures based on two simple rules:
− the optimum rule, which states that people try to choose the best pattern (structure) of consumption that they can afford;
− the equilibrium rule, which states that prices of commodities keep adjusting until the quantities demanded by people become equal to the quantities offered (the supply).

The first rule is almost a tautology: when people can choose freely, they choose the things they want and not those they do not. Deviations from this rule (which can, and certainly do, occur in the real world) lie beyond the boundaries of economic behaviour, and there are no mathematical models for them. The second rule is somewhat harder. A situation is conceivable in which, at a given moment, the demand from market players (referred to below as market agents) and the supply are not equal, so something has to change. These changes may take a lot of time to occur and may, in addition, cause other changes with a destabilising impact on the entire system. Such a scenario can occur, but usually does not, as shown by economists' observations ([112]). Thus the economic theory of the consumer is very simple and, in addition, consistent with the ASIHM paradigm: consumers are assumed to choose the best set of goods they can afford. So the problem boils down to selecting the right mathematical model for describing the choice of the "best set of goods" by a consumer and describing what is understood by the expression "they can afford".
The issue of selecting "the best set of goods" is solved by utility theory, a field of economics, decision theory and game theory which provides a mathematical description of the relation between the choices made by individuals based on their preferences and the likelihood of such a choice, its utility and the related risk. What is understood by the expression "they can afford" is explained by the concept of the budgetary limitation.

To develop a market model for the purposes of simulating the ASIHM process, a model describing the consumer's behaviour is not enough. For the ASIHM process to take place, it is also necessary to employ a model describing the method by which market players exchange goods. Such models are provided by the market exchange theory, which studies the so-called general equilibrium ([112]): how supply and demand conditions interact in many markets (for single goods) to determine the prices of many goods. The mathematical model used in this theory introduces certain simplifications, which are partly consistent with the ASIHM paradigm. Firstly, the market is assumed to be competitive, i.e. every consumer and producer treats the price as given and optimises its behaviour accordingly. This is fully consistent with the ASIHM paradigm (cf. chapter 1). Secondly, a significant limitation of the model is that observation is restricted to the smallest possible number of goods and consumers, which in practice means that two goods and two consumers are considered. The problem of adapting this model to an agent-based simulation system is solved by introducing an additional level of abstraction responsible for 'pairing off' the consumers who will make an exchange. The restriction to two goods can be overcome by introducing the following split: one good versus all other goods – which is also the economic interpretation of this restriction. The last simplification introduced by the mathematical model of the exchange theory is that the problem of general equilibrium is considered in two stages. One begins with an economy in which market players have constant endowments of goods, and analyses the method of their exchange between the players, omitting production from the picture. This case is referred to as 'pure exchange' ([112, 5]). In the second stage, production is introduced and its behaviour is analysed in the general equilibrium model. This approach makes it easy to design an agent-based market simulation system by linking these stages to separate strategies within the intellectual profile of the agent (cf. definition 5.8). As the above models seem sufficient to design a market simulation system consistent with the ASIHM paradigm, the following subsections look closer into the notions making up the consumer choice theory and the market exchange theory.
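A minimal "pure exchange" step between a paired-off couple of agents can be sketched as follows: a proposed swap of fixed quantities of two goods is executed only if it raises both agents' utilities. The Cobb-Douglas utility function used here is an assumption of this sketch, not a model taken from the book.

```python
# A minimal pure-exchange sketch for two agents and two goods. The utility
# u(x1, x2) = x1**a * x2**(1-a) (Cobb-Douglas) is assumed for illustration.

def utility(x1, x2, a=0.5):
    return (x1 ** a) * (x2 ** (1 - a))

def try_trade(endow_a, endow_b, give_a, give_b):
    """Agent A offers give_a units of good 1 for give_b units of good 2."""
    new_a = (endow_a[0] - give_a, endow_a[1] + give_b)
    new_b = (endow_b[0] + give_a, endow_b[1] - give_b)
    if utility(*new_a) > utility(*endow_a) and utility(*new_b) > utility(*endow_b):
        return new_a, new_b          # mutually beneficial: the exchange happens
    return endow_a, endow_b          # otherwise it is refused

# A holds mostly good 1, B mostly good 2: swapping 3-for-3 helps both.
a, b = try_trade((8, 2), (2, 8), 3, 3)
print(a, b)   # (5, 5) (5, 5)
```

Because a trade executes only when both utilities strictly increase, repeated pairwise trades of this kind can only move the pair towards the equilibrium allocations of the pure-exchange model, never away from them.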

5.4.1. Budgetary limitation
The notion of a budgetary limitation is introduced using the concept of a basket of consumer goods, which is a list of numbers telling us the quantity of each good chosen by a consumer. For two goods, this is a pair of numbers (x1, x2), where x1 indicates the quantity of good 1, and x2 that of good 2. The formal notation is given in definition 5.11.

Definition 5.11. Consumer goods basket.

X = (x1, x2, ..., xn), n ∈ N    (5.20)

where:
− xi – the quantity of the i-th good.

The budgetary limitation determines which goods baskets are affordable for the consumer within the defined budget, represented by m, at the set prices of goods, and which are not. The budgetary limitation says that the sum of money spent on all

goods cannot be greater than the budget in hand. Permissible goods baskets meeting that condition are called the budgetary set (definition 5.12).

Definition 5.12. Budgetary set.

BS = { (x1, x2, ..., xn) : Σ_{i=1}^{n} xi pi ≤ m },  i, n ∈ N, m ∈ R    (5.21)

where:
− xi – the quantity of the i-th good;
− pi – the price of the i-th good;
− m – the consumer’s budget.

So any goods-buying strategy of a market agent within its intellectual profile must meet the budgetary limitation.
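As a sketch, the budgetary limitation of definition 5.12 can be checked with a few lines of Python; the function name and the list-based basket representation are illustrative only:

```python
def in_budget_set(basket, prices, m):
    """Check the budgetary limitation (5.21): the total spend
    sum(xi * pi) must not exceed the budget m."""
    if len(basket) != len(prices):
        raise ValueError("basket and price list must have equal length")
    return sum(x * p for x, p in zip(basket, prices)) <= m
```

For example, with prices (1, 2) and budget 10, the basket (2, 3) is affordable, while at prices (1, 3) it is not.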

5.4.2. Utility theory
As previously mentioned, utility theory is a branch of economics, decision theory and game theory dealing with the mathematical description of the relations between the choices made by individuals based on their preferences. In accordance with the optimum rule discussed at the beginning of this subsection, which forms the basis of economic analysis, people can be said to prefer options leading to greater benefits. The rule assumes that individuals tend to select those solutions which have the greatest expected value of the prize. The problem, however, is that the expected and the actual values of a decision differ. Consider a player having one unit of money. The player can use it to buy a lottery ticket and win 100 units with a probability of 0.125, or keep the money and not play. From the mathematical point of view, taking the risk is the right choice: the expected value of the gain from the game is 12.5 units, while keeping the money yields 1 unit. In reality, however, the probability of winning is low and the number of entries into the game limited (the actual value converges to the expected one only as the number of attempts approaches infinity): consequently, individuals most often choose the certain, lower gain. The most popular formulation of utility theory is that of von Neumann and Morgenstern ([77]). In this theory, four axioms define rational decision-making. These axioms are based on the definition of preference.

Definition 5.13. Strict preference. Let a relation ≻ of a strong order on a budgetary set be given. Consumer basket X1 is strictly preferred over consumer basket X2 if and only if X1 ≻ X2.

Definition 5.14. Weak preference. Let a relation ⪰ of a weak order on a budgetary set be given. Consumer basket X1 is weakly preferred over consumer basket X2 if and only if X1 ⪰ X2.

Definition 5.15. Indifference of consumer baskets. Let a relation of an order on a budgetary set be given. Basket X1 is indifferent to basket X2 when neither X1 ≻ X2 nor X2 ≻ X1 holds. The indifference of two baskets is represented by the symbol „∼”: X1 ∼ X2.

Indifference curves
Preferences can also be characterised graphically using a structure called the indifference curve. Such a curve shows all baskets that are indifferent to a set basket. Figure 5.6 shows three curves: baskets on curve I2 are preferred over those on curve I1, but less than baskets on curve I3.

Preference types
Depending on the market player (or, more exactly, his/her preferences), indifference curves may be shaped differently. For goods which are perfect substitutes (an agent substitutes one good for the other at a constant rate), these curves are straight lines with a slope of −1. For goods which are always consumed in constant proportions (in economics, such goods are called perfectly complementary goods), these curves are L-shaped. A more precise discussion of the shapes of indifference curves is available, e.g., in the publication by Varian ([112]).

Definition 5.16. Von Neumann-Morgenstern axioms.
− Completeness – the individual has well-defined preferences and can decide between any two options. Mathematically, for every two lotteries A and B: A ⪰ B or B ⪰ A (or both, in which case A ∼ B).
− Transitivity – this assumes that the individual makes decisions consistently, i.e. for every A, B, C such that A ⪰ B and B ⪰ C, it is true that A ⪰ C.
− Independence – the preference in the choice between two options does not change if each is mixed with a third one. Mathematically, for two lotteries A, B with A ⪰ B, any lottery C, and parameter t belonging to (0, 1]: tA + (1 − t)C ⪰ tB + (1 − t)C.

Fig. 5.6. Indifference curves. Source: Wikipedia.

− Continuity – for any three lotteries A, B, C where the individual prefers A over B and B over C, there is a combination of A and C indifferent to B, i.e. for A ⪰ B ⪰ C there has to be a probability p such that B ∼ pA + (1 − p)C.

If all the above axioms are fulfilled, the individual can be said to behave rationally and his preferences can be expressed using a utility function. The utility function assigns a number to every possible consumer basket so that more preferred baskets are assigned greater numbers than less preferred ones. In other words, when an individual always chooses the most preferred option, it is certain that his next choice between two options will be made based on expected utility, so as to maximise it. The utility of every choice can be expressed as a linear combination of the results and their probabilities. Thus a market player taking a decision tries to select not the option with the greatest expected value, but the one which maximises his expected utility.

Definition 5.17. Von Neumann-Morgenstern theorem. For any rational agent (meeting the above four axioms), there exists a function that assigns to every lottery A a value u(A) so that:

L ≺ M if and only if Eu(L) < Eu(M)    (5.22)

where Eu(L) denotes the expected value of u in L:

Eu(p1 A1 + ... + pn An) = p1 u(A1) + ... + pn u(An)    (5.23)

This function is determined unambiguously (up to multiplication by a positive scalar and addition of a constant) by the preferences between simple lotteries, i.e. ones taking the form pA + (1 − p)B, in which only one of two results is possible. The utility function created – f(x) – has several important features, including:
− f'(x) > 0 – the more of a given good, the greater its total utility;
− f''(x) < 0 – subsequent units of the same good contribute less and less to the total utility;
− f'''(x) > 0 – the differences between the contributions of subsequent units to the total utility become smaller and smaller.

Yet in reality consumers tend to avoid risk: if the expected value of a transaction is 0, an individual will forgo taking part in it. This risk avoidance means that the utility functions of consumers differ. An example of risk avoidance is a game in which the probability of winning is 0.2 and the prize is equal to ten times the stake. Staking $1,000 of savings, the majority of people will not play, even though, if they followed the rule of maximising the expected value, they should, since:

0.2 ∗ $10,000 + 0.8 ∗ $0 = $2,000 > 1 ∗ $1,000

However, if a person is rational, this will be accounted for in their utility function, so we can expect that in the majority of cases:

0.2 ∗ u($10,000) + 0.8 ∗ u($0) < u($1,000)

This is why risk-neutral consumers have a linear utility function, risk-seeking ones a convex function, and risk-averse ones a concave function. The degree of risk avoidance can thus be ascertained by measuring the curvature of the utility function. Critics accuse utility theory of not being of much use in predicting human choices in various situations, because the model assumes too much: people rarely have access to all the information needed to take the decision that is best for them.
The advantage of the model introduced by von Neumann and Morgenstern is that it mathematically describes the phenomenon of striving to maximise one’s own benefit, which forms the basis of the Invisible Hand of the Market paradigm.
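The risk-aversion argument above can be made concrete with a short sketch. The square-root utility used here is an assumption (any concave function yields the same qualitative result), and the numbers follow the lottery from the text: a stake of $1,000, a prize of ten times the stake, and a winning probability of 0.2:

```python
import math

def expected_utility(outcomes, probs, u):
    """Expected utility Eu = p1*u(A1) + ... + pn*u(An), cf. eq. (5.23)."""
    return sum(p * u(a) for a, p in zip(outcomes, probs))

u = math.sqrt  # an assumed concave (risk-averse) utility function

eu_play = expected_utility([10_000, 0], [0.2, 0.8], u)  # 0.2 * 100 = 20.0
u_keep = u(1_000)                                       # about 31.6
# u_keep > eu_play: the risk-averse agent keeps the certain $1,000,
# even though the expected monetary value of playing ($2,000) is higher.
```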

Ordinal utility and cardinal utility
If it is only important to order baskets of goods, i.e. the scale of the utility function matters only insofar as it ranks different consumer baskets, this is referred to as ordinal utility. If what is important is the magnitude of the difference in utility between any two baskets, this is referred to as cardinal utility.

Examples of the utility function and assumptions on preferences
Assuming that the utility function u is given for specific goods x1, ..., xn, the indifference curve represents all points (x1, ..., xn) for which u(x1, ..., xn) is constant; this is called the level set (or contour line) of the function. Figure 5.7 shows indifference curves for an example utility function:

u(x1, x2) = x1 ∗ x2    (5.24)

Fig. 5.7. Indifference curves k = x1 x2 for various values of k. Source: Own development.
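For the utility function (5.24), the level sets can be computed directly: on the curve of level k, the quantities satisfy x2 = k/x1. A minimal sketch, with an illustrative function name:

```python
def indifference_curve(k, x1_values):
    """Points (x1, x2) on the indifference curve of u(x1, x2) = x1 * x2
    at utility level k: every returned point satisfies x1 * x2 = k."""
    return [(x1, k / x1) for x1 in x1_values if x1 > 0]
```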

The subsection on preference types briefly discusses examples of goods for which the indifference curve takes various shapes. It is also known that the shape of this curve depends on the utility function. Thus, when developing a market simulation model, note should be taken that in the case of a market agent, the

utility function is an attribute of the agent on which the majority of its actions depend. This information should be kept in the agent’s intellectual profile (see definition 5.8) as part of the information about its surrounding world (symbolised by mdl in that definition). As indifference curves can have completely different shapes depending on the type of goods, the appropriate selection of the utility function for simulation studies should be considered. It is worthwhile using certain general assumptions on preferences usually made by economists and called „well-behaved preferences”. The first assumption states that more is better, which can be expressed in numbers: assuming that (x1, ..., xn) is a basket and (y1, ..., yn) is a basket of goods containing at least the same quantity of all goods and more of one, then (y1, ..., yn) ≻ (x1, ..., xn). This assumption is called the monotonicity of preferences. The second assumption states that average values are preferred over extremes, which can be presented mathematically as follows: for any two baskets (x1, ..., xn) and (y1, ..., yn) located on the same indifference curve, an average basket can be made:

((x1 + y1)/2, ..., (xn + yn)/2),    (5.25)

which will be at least as good as, or strictly preferred over, these two baskets. Geometrically, this means that the set of baskets weakly preferred over (x1, ..., xn) is a convex set. This assumption is therefore called the convex preference assumption. It turns out that the preferences described by the utility function u(x1, x2) = x1 x2 illustrated in figure 5.7 fulfil both of these assumptions. In addition, this function is a special case of the Cobb-Douglas utility function ([23]):

u(x1, x2) = x1^c ∗ x2^d,    (5.26)

where: − c, d ∈ R and c > 0 and d > 0.

5.4.3. Conclusions for agent modelling
Economists generally believe that the Cobb-Douglas function represents human preferences for the majority of goods well ([5, 112]). Consequently, in all experiments on the ASIHM, the utility function defined in the intellectual profile of a market agent will have the form of the generalised Cobb-Douglas utility function given by definition 5.18.

Definition 5.18. Cobb-Douglas utility function.

u(x1, ..., xn) = x1^c1 ∗ ... ∗ xn^cn,    (5.27)

where: − c1 , ..., cn ∈ R and c1 , ..., cn > 0.
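Definition 5.18 translates directly into code. The sketch below also checks the positivity condition on the exponents; the names are illustrative:

```python
def cobb_douglas(quantities, exponents):
    """Generalised Cobb-Douglas utility (5.27):
    u(x1, ..., xn) = x1**c1 * ... * xn**cn, with every ci > 0."""
    if any(c <= 0 for c in exponents):
        raise ValueError("all exponents must be positive")
    u = 1.0
    for x, c in zip(quantities, exponents):
        u *= x ** c
    return u
```

With c1 = c2 = 1 this reduces to the function u(x1, x2) = x1 ∗ x2 used in figure 5.7, and monotonicity of preferences holds: enlarging any quantity raises the utility.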

5.4.4. Transaction modelling
A fundamental feature of a market system is that market players (market agents) exchange goods with one another. This subsection discusses the exchange model used in the ASIHM process simulator. To develop an exchange model, a general rule is used which states that if there is an opportunity to improve someone’s situation without worsening someone else’s, then the situation is not Pareto-optimal. The purpose of concluding transactions between market agents is thus to reach a state which is Pareto-optimal within the clusters of agents bounded by the radius of interaction. Formally, an outline of the exchange algorithm can be described as below.

Symbols:
− N – the quantity of goods on the market;
− {ag1, ag2, ..., agm} – the set of agents exchanging goods;
− {u1, u2, ..., um} – the agents’ utility functions;
− G^0 = (G^0_1, G^0_2, ..., G^0_m) – the initial allocation of the agents’ goods, in the form G^0_i = (g^0_i1, ..., g^0_in), where g^0_ik denotes the quantity of the k-th good held by the i-th agent;
− G^F = (G^F_1, G^F_2, ..., G^F_m) – the final allocation of the agents’ goods, in the form G^F_i = (g^F_i1, ..., g^F_in), where g^F_ik denotes the quantity of the k-th good held by the i-th agent.

Definition 5.19. Effective allocation according to Pareto.
1. The allocation of goods following the transaction must be as good as or better than prior to the transaction:

Σ_{i=1}^{m} u_i(g^0_i1, ..., g^0_in) ≤ Σ_{i=1}^{m} u_i(g^F_i1, ..., g^F_in)    (5.28)

2. There cannot be an allocation G^K better than the final allocation:

¬∃ G^K : Σ_{i=1}^{m} u_i(g^K_i1, ..., g^K_in) > Σ_{i=1}^{m} u_i(g^F_i1, ..., g^F_in)    (5.29)

The exchange of goods between two market participants is graphically illustrated using a tool called the Edgeworth rectangle ([86]), which is based on the indifference curves discussed above. It restricts considerations to situations in which there are two participants and two goods. The hatched area in figure 5.8 indicates allocations that are better than the initial allocation, which means that by exchanging goods both participants (A and B) improve their situation.

Fig. 5.8. Edgeworth’s rectangle. The width of the rectangle measures the overall quantity of good 1, and its height the quantity of good 2. A and B are the transacting participants. Source: Own development.

In the CIMAMSS, certain simplifications were made to help implement the mechanism of concluding transactions; these are discussed after the pseudo-code below, which illustrates how transactions are concluded in the entire system. As can be seen in listing 5.1, for every agent its neighbours are found (line 3) based on the metric presented in definition 5.5 and the interaction radius from definition 5.1. In the next step, transactions are concluded with subsequent neighbours (by calling the

1. procedure Transactions
2.   foreach (agent)
3.     N := znajdzSasiadow(ri);
4.     foreach (sasiad : N)
5.       transakcja(sasiad);
6.     end foreach;
7.   end foreach
8. end procedure

Listing 5.1. Transactions algorithm – pseudo-code (znajdzSasiadow – find neighbours, transakcja – conclude a transaction with a neighbour). Source: Own development.

procedure transakcja(sasiad)), following the rule of reaching a Pareto-optimal situation, which by definition makes the situation of every entity the best possible given the utilities of the other entities. Thus, labelling the agents taking part in the transaction A and B (line 5 of listing 5.1) and setting the utility level of agent B at ū, we ask for the best possible allocation for agent A. The maximisation problem can be written as:

max_{x1A, x2A, x1B, x2B} uA(x1A, x2A)    (5.30)

subject to:
− uB(x1B, x2B) = ū;
− x1A + x1B = ω1;
− x2A + x2B = ω2;
where:
− xiA, xiB – the quantities of good i held by agents A and B;
− ω1 – the total quantity of good 1;
− ω2 – the total quantity of good 2.

The Lagrangian of this problem can be written as:

L = uA(x1A, x2A) − λ[uB(x1B, x2B) − ū]    (5.31)

and further, with the goods constraints included:

L = uA(x1A, x2A) − λ[uB(x1B, x2B) − ū] − µ1(x1A + x1B − ω1) − µ2(x2A + x2B − ω2),    (5.32)

where:

− λ – the Lagrange multiplier for the utility constraint;
− µ1, µ2 – the Lagrange multipliers for the goods-quantity constraints.

Differentiating the Lagrangian with respect to each variable produces four first-order conditions for the optimum solution:

∂L/∂x1A = ∂uA/∂x1A − µ1 = 0    (5.33)

∂L/∂x2A = ∂uA/∂x2A − µ2 = 0    (5.34)

∂L/∂x1B = −λ ∂uB/∂x1B − µ1 = 0    (5.35)

∂L/∂x2B = −λ ∂uB/∂x2B − µ2 = 0    (5.36)

If the first equation is divided by the second, and the third by the fourth, it turns out that the marginal rates of substitution15 between the two goods are the same for both agents (otherwise there would be a potential exchange that would improve the situation of one of the consumers). With regard to the Transactions procedure whose pseudo-code is presented above, we can consider a cluster k of agents delimited by the interaction radius ri. The question should be asked whether the presented algorithm will lead, within the cluster, to an effective allocation as given in definition 5.19. It is worth noting that there can be an optimum allocation GF within the cluster which might not be achieved by executing a sequence of transactions among the agent pairs with the set order of pairing. The author believes that this limitation is not very significant, particularly as agents can migrate (clusters are not permanent) and there is an economic reason for it (particularly in the context of the ASIHM): every market participant strives to maximise

15 A convenient concept used in microeconomics representing the exchange rate at which the consumer is willing to exchange good X for good Y and vice versa. It depends on the utility curves achieved by consuming the given goods and on the inventories held. In other words, the marginal rate of substitution is the ratio of the growth in the consumption of one good to the drop in the consumption of another (or others) such that the utility achieved by the consumer does not change, under the assumption that his/her indifference curve remains unchanged. As the utility achieved by consuming subsequent portions of a given good generally falls, the marginal rate of substitution is decreasing.

its own benefit and concludes transactions sequentially, aiming to maximise its own benefit in every transaction. In order to ensure a Pareto-optimal situation within the cluster, a market player would sometimes have to conclude transactions which, at the level of the two transacting agents, do not lead to a Pareto-optimal situation; it is also worth noting that such behaviour is not compliant with the Invisible Hand of the Market paradigm. The last issue to resolve is to write down the rules that allow the transakcja() procedure called within the Transactions procedure to be implemented: this procedure represents a transaction between two agents (as illustrated by the Edgeworth rectangle discussed above and shown in figure 5.8 for two goods). As said previously, the principle adopted for executing transactions between two agents is based on the Pareto rule: as a result of the transaction, the situation of at least one agent must improve. There is an obvious limitation: the final allocation (following the transaction) must be achievable. The rules for concluding transactions between two agents can be written down using the symbols below.

Symbols:
− n – the number of goods defined in the world;
− G1 = (g11, g12, ..., g1n) – the list of goods held by agent 1 before the transaction (initial allocation);
− G2 = (g21, g22, ..., g2n) – the list of goods held by agent 2 before the transaction (initial allocation);
− G'1 = (g'11, g'12, ..., g'1n) – the list of goods held by agent 1 after the transaction (final allocation);
− G'2 = (g'21, g'22, ..., g'2n) – the list of goods held by agent 2 after the transaction (final allocation);
− U1, U2 – the utility functions of the agents: U : R^n → R.

Rules:
1. The total utility of the goods held by the agents must not fall as a result of the transaction:

U1(G'1) + U2(G'2) ≥ U1(G1) + U2(G2)    (5.37)

2. As a result of the transaction, the situation of neither agent can deteriorate:

U1(G1) ≤ U1(G'1) and U2(G2) ≤ U2(G'2)    (5.38)

3. The final allocation must be achievable:

∀i ∈ ⟨1, n⟩ ⊂ N : g'1i + g'2i = g1i + g2i    (5.39)

4. There is no allocation better than the final allocation:

¬∃ G''1, G''2 : U1(G''1) + U2(G''2) > U1(G'1) + U2(G'2)    (5.40)

respecting the limitation:

∀i ∈ ⟨1, n⟩ ⊂ N : g''1i + g''2i = g1i + g2i    (5.41)
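Rules 1–3 are directly checkable for a candidate final allocation; rule 4 (optimality) additionally requires a search over alternative allocations and is omitted from this sketch. All names are illustrative:

```python
def transaction_admissible(G1, G2, G1f, G2f, U1, U2, eps=1e-9):
    """Check rules 1-3 for a two-agent transaction: the total utility must
    not fall (5.37), neither agent may end up worse off (5.38), and the
    final allocation must be achievable, i.e. conserve every good (5.39)."""
    achievable = all(abs((a + b) - (af + bf)) < eps
                     for a, b, af, bf in zip(G1, G2, G1f, G2f))
    no_loss = U1(G1f) >= U1(G1) and U2(G2f) >= U2(G2)
    total_up = U1(G1f) + U2(G2f) >= U1(G1) + U2(G2)
    return achievable and no_loss and total_up
```

For example, with the utility u(x1, x2) = x1 ∗ x2 for both agents, moving from allocations (4, 1) and (1, 4) to (2.5, 2.5) and (2.5, 2.5) is admissible: both agents' utilities rise from 4 to 6.25 and both goods are conserved.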

5.4.5. Agent migration
In the computational model introduced by Collective Intelligence, discussed in chapter 3, information molecules make pseudo-Brownian motions within the CS computational space. In the multi-agent system model adopted for describing the market, the agent likewise has a spatial profile that defines the way it moves. This subsection discusses the way agents move in the CIMAMSS system. As a result of concluding a transaction with another agent, the utility of the goods held by the agent rises. Having concluded the transaction, the agent moves in the direction of the „better market”, which in theory should increase its chances of further successful transactions. The method of calculating the displacement resulting from a single transaction is presented below.

Symbols:
− p1, p2 – the positions of the transacting agents;
− p'1, p'2 – the new positions of the transacting agents;
− d(p1, p2) – the distance between positions p1 and p2;
− u1, u2 – the utilities of the goods held by agents 1 and 2 prior to the transaction;
− u'1, u'2 – the utilities of the goods held by agents 1 and 2 following the transaction;
− v – the vector determined by positions p1, p2, normalised to length 1.

With the above symbols, the new position of agent 1 is calculated as follows:

p'1 = p1 + d(p1, p2) ∗ v ∗ (u'1 − u1)/(2u1)   if u'1 − u1 < 2u1
p'1 = p2                                      if u'1 − u1 ≥ 2u1    (5.42)

The position of the second agent is calculated analogously. Thus, if the utility gain u'1 − u1 reaches 2u1 or more, the agent moves to the place of the agent with which it concluded the beneficial transaction; if the gain is smaller, it moves in the direction of that agent, the displacement being the greater the more the utility increased.
If the agent has concluded no transaction, it moves randomly to a neighbouring point (1 unit distant in the set metric).
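The displacement rule (5.42) can be sketched as follows; a Euclidean 2D distance is assumed here for readability, while the system itself uses the metric of definition 5.5:

```python
def migrate(p1, p2, u1, u1_new):
    """Move agent 1 toward its transaction partner at p2, eq. (5.42):
    a utility gain of 2*u1 or more moves it all the way to p2; a smaller
    gain covers the fraction (u1_new - u1) / (2 * u1) of the distance."""
    gain = u1_new - u1
    if gain >= 2 * u1:
        return p2
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    if dx == 0 and dy == 0:
        return p1
    # d(p1, p2) times the unit vector toward p2 is simply (dx, dy)
    frac = gain / (2 * u1)
    return (p1[0] + dx * frac, p1[1] + dy * frac)
```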

5.4.6. Production in the CIMAMSS system
The simplest economic model in which production is considered is called the Robinson Crusoe economy ([112]). In this model, only one consumer, one producer and two goods are considered. Due to this simplification, the model is unsuitable for direct use in the CIMAMSS system, which features many producers. What is interesting, however, is an extension of this model in which the same person is a producer and a consumer at the same time (and consumes the goods it produces itself). In the extended model, the roles of the producer and the consumer are played in turns, so a labour market and a market for the produced good are established to coordinate these two roles. When playing the role of the producer, the agent is guided by the criterion of profit maximisation and observes the labour cost and the price of the good to decide how much labour to hire and how much of the good to produce to generate income from labour. The CIMAMSS model assumes a convenient simplification concerning the expenditure on production, understood as obtaining a good from the environment or transforming one good into another: the only expenditure incurred for production is utility (whose level, in turn, is raised by the consumption of goods). In a simple microeconomic model of profit maximisation, two inputs x1 and x2 are assumed, with the second input held at a constant level (x2 = const). The production function f defines the dependency of the quantity of the good produced on the inputs incurred. If p denotes the price of the product, with w1 and w2 representing the prices of the inputs, the problem of profit maximisation can be written as follows:

max_{x1} [p f(x1, x2) − w1 x1 − w2 x2]    (5.43)

If the quantity of the product is denoted by y, profit π is given by the following equation:

π = p y − w1 x1 − w2 x2    (5.44)

If the above equation is solved for y, the product quantity can be expressed as a function of input x1. The equation thus transformed describes lines of identical profit (isoprofit lines): all combinations of the inputs and the product which yield the constant profit level π. An agent maximising its profit chooses the combination of input and product which lies on the highest isoprofit line tangent to the production curve (figure 5.9).

Fig. 5.9. Profit maximisation. The agent chooses the combination of the expenditure and the product that lies on the highest line of identical profit. Source: Own development.
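A brute-force sketch of the producer's decision: the agent scans a grid of input levels and picks the one maximising profit (5.44). The grid search stands in for the tangency condition and is an assumption of this sketch, not the system's actual procedure:

```python
def maximise_profit(f, p, w1, w2, x2, x1_grid):
    """Profit-maximising input level x1 for pi = p*f(x1, x2) - w1*x1 - w2*x2,
    cf. eq. (5.43)-(5.44), with the second input x2 held constant."""
    profit = lambda x1: p * f(x1, x2) - w1 * x1 - w2 * x2
    best_x1 = max(x1_grid, key=profit)
    return best_x1, profit(best_x1)
```

For instance, with f(x1, x2) = sqrt(x1), p = 10 and w1 = 1, the analytic optimum x1 = 25 (where p ∗ f'(x1) = w1) is recovered on an integer grid.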

5.5. CIMAMSS model – final conclusions
In line with the considerations presented in this chapter, the CIMAMSS system is described as a triple:

CIMAMSS =def ⟨Ag, env, t⟩,    (5.45)

where:
− Ag – the set of agents;
− env – an environment which is a 2D discrete space with the metric given in definition 5.5;
− t – time, which is discrete in the model.

The environment defines a set of global parameters – (ri, cr, dr) – available to all agents to the same extent. The interaction radius ri is interpreted as the visibility range of a fragment of the market to a single agent: transactions can only be concluded with agents at a distance no greater than ri in the sense of the metric from definition 5.5. In addition, an agent can look into transactions concluded by other agents visible to it. The cr and dr parameters are interpreted as the credit interest rate and the deposit interest rate in simulations of the ASIHM process within a model with a financial system, which is discussed in greater depth in chapter 6.4.
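The triple (5.45) and the global parameters can be sketched as plain data structures; all field names here are illustrative, not the system's actual identifiers:

```python
from dataclasses import dataclass, field

@dataclass
class Environment:
    ri: int           # interaction radius: an agent's market visibility range
    cr: float = 0.0   # credit interest rate (financial-system extension, ch. 6.4)
    dr: float = 0.0   # deposit interest rate (financial-system extension, ch. 6.4)

@dataclass
class CIMAMSS:
    """The triple <Ag, env, t> of eq. (5.45); time t is discrete."""
    agents: list = field(default_factory=list)
    env: Environment = field(default_factory=lambda: Environment(ri=1))
    t: int = 0
```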

5.5.1. Agent types and structure
The CIMAMSS system includes two types of agents: the basic agent type used in simulations is the market agent, whereas the supplementary type is the financial agent, appearing in simulations of the ASIHM process within a model featuring a financial system (chapter 6.4). The two agent types are not differentiated from the perspective of the environment, but they differ fundamentally in their internal architecture (available actions and the practical implementation of the profile structure). Every market agent can take the following actions: migration (cf. section 5.4.5), production (cf. section 5.4.6), transaction (cf. section 5.4.4), consumption, dying (when the utility indicator falls to 0), as well as placing money in deposit with a financial agent and taking out a loan (system enhancements discussed in chapter 6.4). In addition, every market agent has a spatial profile and three (or four in the case of the extension, see chapter 6.4) intellectual profiles: production prf_prod, consumption and transactional prf_trans. The basic strategies forming part of the profiles are: the movement strategy, the consumption strategy and the goods production strategy (collecting resources from the environment and processing them). For the purposes of the above profiles, the agent has the following attributes (the remaining, enhancing attributes are discussed in chapter 6.4):
− Utility function – defines the agent’s preferences for the goods available on the market.
− Collecting cost function – defines the cost of collecting individual resources from the environment. This attribute has the form of a set of functions, where every function applies to one resource (each is a function of one variable – the utility).
− Production function – defines what the agent can produce and in what quantities. This attribute has the form of a set of production functions, where every function applies to one product (each is a function of one variable – the utility).
− Goods list – defines the type and quantity of goods held by the agent.
− Own transaction list – describes transactions concluded by the agent.
− External transaction list – characterises transactions concluded by agents within the observation radius.
− Utility – the current value of utility (utility rises as goods are consumed, and the agent incurs a cost of survival which reduces this indicator with every

round – thus utility also represents a generalised concept of vital energy in the CIMAMSS model16).
− Survival cost – the loss of energy (utility) which the agent suffers in every round, intuitively interpreted as the cost of surviving (an agent with no goods to consume should not be able to survive in the system17).

A financial agent can migrate, although its migration profile is defined differently from that of a market agent (cf. chapter 6.4); it can also grant loans and accept deposits. A financial agent’s attributes are as follows:
− Deposit list – identifies the deposits accepted;
− Loan list – identifies the loans granted by the agent;
− Money balance – the money available for financial activities.

When taking decisions on the volume of production, which form part of the production profile strategy, the agent follows the profit maximisation rule described above. However, if the production function is linear, there can be an infinite number of points of tangency. Consequently, an arbitrary limit has been introduced which sets the maximum possible level of utility designated for production, Umax (understood as collecting resources from the environment or processing the resources held), at:

Umax = (U − 4 ∗ UL) ∗ 0.25,    (5.46)

where:
− U – the current agent utility level;
− UL – the level of utility necessary to survive a round.

In the decision on the specific good to be produced, the agent also follows the profit maximisation rule: it chooses the good whose production maximises its utility.
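The production cap (5.46) is a one-liner; the clamp to zero for agents whose utility is too low to spare any is an added safeguard in this sketch, not part of the definition:

```python
def max_production_utility(U, UL):
    """Utility an agent may spend on production this round, eq. (5.46):
    Umax = (U - 4 * UL) * 0.25. A non-positive value (clamped to 0 here,
    an assumption of this sketch) means the agent produces nothing."""
    return max(0.0, (U - 4 * UL) * 0.25)
```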

5.5.2. The design of the computational layer
It is worth noting that the agent abstraction was used during the design of the CIMAMSS system to structure the concepts and to adapt the economic models more clearly. Introducing this structure, by using the concepts of a profile and a strategy to order the rules by which the agent acts, does not in any way impact the ability to deploy the system at the computational level with the use of the Molecular Computational Model discussed in chapter 3. The molecular layer represents the Collective Intelligence computational processes of agents which cause the Invisible Hand to come about. When introducing that layer, one can present it by analogy to the body (layers 1 and 2 in figure 5.1, represented by the economic and agent models) and the soul of a human (layer 3 in figure 5.1, represented by the molecular model). In the transition from layer 2 to layer 3, everything except the information flow and processing is omitted. Market agents from the agent-based model layer are mapped directly to information molecules of the third layer. Thus an agent is a layer-1 information molecule featuring a membrane surrounding facts (agent attributes from the agent model layer), rules (the agent’s action rules stored in the form of profiles and strategies in the agent-based model) and the reasoning conducted, which drives the actions taken (buying, selling, migrating, producing etc.). The goods (resources), in turn, constitute layer-0 information molecules, incapable of moving. A rendezvous at the molecular layer thus corresponds to a transaction being concluded between agents or to collecting a good from the environment. Note that as a result of a rendezvous, when the appropriate facts can be matched, a chain of reasoning occurs, resulting in the production of new information molecules transporting the conclusions of this reasoning. In the economic model, a transaction causes goods (or money) to flow between agents. Adopting the amount of money and the inventory of goods held from the economic layer as facts in the molecular layer, it is noticeable that a change in the agent’s condition resulting from a transaction corresponds to the formation of a new information molecule in the molecular layer.

16 This is not fully consistent with microeconomic models, which do not deal with this issue at all, but it is widely used in multi-agent systems.
17 The loss of energy by the agent can be switched off by setting this indicator to zero.
The agent's movements in the economic environment are determined by its search for a "better market", i.e. the agent moves in the direction in which the exchange improves its utility the most. Such movements are treated as quasi-chaotic in the molecular layer. There is one more important issue associated with mapping an exchange to a rendezvous. The economic world is in a sense virtual: for an exchange to occur, the agents do not have to be physically present in the same location. This interpretation is suggested by reality itself: to play their role, banks need not move physically, and business can be transacted over the phone, online, by exchanging documents by post, etc.


6. Pilot implementations of ASIHM process simulations

This chapter is structured as follows. First, the method of implementing the CIMAMSS model as a software suite is presented; its high-level requirements come from chapter 5, but the detailed ones (e.g. the ergonomics of software operation) are omitted due to their large volume1 and loose connection to this monograph. The author assumes that the attached screenshots give a good impression of the application functionality. The following part describes the method of designing the Collective Intelligence Quotient (IQS) related to ASIHM processes, which plays a fundamental role in the research conducted. Other, auxiliary indicators used in the simulations come from agent systems and economics, and are described later in this chapter. The chapter ends with a discussion of the simulation results for three example research problems.

6.1. CIMAMSS model characteristics

The concept of the simulation model proposed in this work, which uses the Collective Intelligence computational model at the computational layer, theoretically offers wide-ranging application opportunities. The architecture, for which the multi-agent system forms the starting point, allows various economic (mathematical) models to be adapted to describe agent behaviour through the appropriate implementation of the strategies forming part of the intellectual profile. The CIMAMSS system adapts models which seem suitable for studying Invisible Hand of the Market processes. Practical implementations of ASIHM process simulations are pilot studies by nature, and there are slight differences between those implementations at the level of deploying particular strategies (not at the level of the architecture). Due to the pilot nature of

1 The code volume is some 20 thousand lines of code (including the code generated automatically, e.g. by the JAXB library).


these simulations, one can hardly speak of sanctioned guidelines for ASIHM simulations, but the proposed architecture seems to be the right direction of research. All the implementations of ASIHM process simulations discussed in this chapter are founded on the architecture proposed by the ASIHM and on the same design assumptions:
− Event-driven simulation mechanisms are applied to give the subordinate objects activity, understood as having the initiative to take action and to react to the stimuli received (there is no need to introduce a multi-threaded architecture in which every agent would be a separate thread, as this would significantly limit simulations covering several thousand agents);
− The initial state of the system is fully configurable, but a series of decisions taken during its operation is made using quasi-random generators;
− The computational layer – the molecular computational model – is built using the JESS rule engine (Java Expert System Shell, discussed in more detail later in this chapter). Successful attempts have also been made to adapt the Drools system. This choice was due to the easy integration with Java, the language selected for implementing the CIMAMSS system, which allows the simulation systems to be run on various hardware platforms. It also accelerated the implementation process thanks to the great number of available libraries (including those for handling XML and mathematical expressions);
− While the system runs, a series of parameters is monitored, which enables the broadest possible spectrum of observation of its operation. For some of them, this is an attempt to move from microeconomic models (adopted to describe the behaviour of a single market player) to the macroeconomy, in the sense that the indicators used to describe the entire system have their origins in macroeconomics.
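The first assumption above – event-driven, single-threaded simulation instead of one thread per agent – can be illustrated with a minimal discrete-event loop. This is a sketch under assumptions, not the CIMAMSS scheduler itself; the EventLoop and Event names are hypothetical:

```java
import java.util.PriorityQueue;

// Minimal single-threaded discrete-event scheduler: agent activity is modelled
// as timestamped events drawn from one priority queue, so simulations with
// thousands of agents need no per-agent threads (hypothetical class names).
public class EventLoop {
    static final class Event {
        final long time;
        final Runnable action;
        Event(long time, Runnable action) { this.time = time; this.action = action; }
    }

    private final PriorityQueue<Event> queue =
            new PriorityQueue<>((a, b) -> Long.compare(a.time, b.time));
    private long now = 0;

    // An agent (or the environment) schedules its next action at a future time.
    public void schedule(long at, Runnable action) {
        queue.add(new Event(at, action));
    }

    public long now() { return now; }

    // Processes events in timestamp order; handlers may schedule further events.
    public int run() {
        int processed = 0;
        while (!queue.isEmpty()) {
            Event e = queue.poll();
            now = e.time;
            e.action.run();
            processed++;
        }
        return processed;
    }
}
```

A handler scheduled at time 3 can itself schedule a follow-up at time 4, and the loop interleaves the actions of all agents on a single thread.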

6.2. Comments on implementing the CIMAMSS model

In this chapter, comments concerning the implementation of the CIMAMSS system in compliance with the rules of chapter 5 are collected. To begin with, the basic architecture of the system as well as the method of storing the main data on the agents and their world (the environment) are discussed. This is followed by a wider description of how the computational layer is implemented, i.e. the deployment of the molecular computational model, for which the Jess rule engine ([47]) was selected. This section also discusses the motivations behind this choice, because originally, in T. Szuba's paper ([107]), this computational model was deployed using the Prolog language. The section ends with a short description of program operation – how to define the world and view the environmental parameters during the simulation.

6.2.1. System architecture

As previously mentioned, the Java language was chosen for implementing the CIMAMSS system. As a detailed discussion of the implementation does not fall within the scope of this study2, this chapter only contains a brief mention of the most important classes comprising the system and of the data storage method (excluding all the classes forming part of the GUI). Figure 6.1 presents the basic classes forming the core of the system.

Fig. 6.1. A diagram of classes forming the core of the system implementing the CIMAMSSI model. Source: own development.

The Simulation and SimulationController classes are responsible for running the simulation process. The Data class constitutes the container for the agents, which are represented by the MarketAgent class (or classes inheriting from it). An agent's actions, constituting its intellectual and spatial profiles, are implemented by classes which implement the IMarketAgentAlgorithm interface (the SimpleMarketAlgorithm class is presented in the diagram).

2 Among other reasons, because of the current size of the system and the continuing process of its development and refactoring.


All elements of key importance for system operation are stored in the form of XML files. The JAXB library maps the XML onto Java objects3. The world in which the agents operate is represented in the CIMAMSS by the environment, i.e. by the following classes (figure 6.2):
− AbstractEnvironment – represents the "packaging" for the agents, resources and environmental configuration;
− EnvironmentConfig – defines the size of the space and all the global variables;
− Agents – a container for the definitions of all agents in the system (represented by the AbstractMarketAgent and AbstractFinancialAgent classes in figure 6.3);
− Hoards – contains the definition of all resources found in the environment (represented by the Good class, which also represents goods held by an agent and traded on the market) as well as the method of their renewal (defined by the Resource class), in accordance with figure 6.4.

Fig. 6.2. JAXB classes connected with defining the environment in the CIMAMSSI implementation. Source: own development.

3 For more information, see the library's official documentation at: https://jaxb.dev.java.net/.


Fig. 6.3. JAXB classes connected with defining the agents in the CIMAMSSI implementation. Source: own development.

Fig. 6.4. JAXB classes connected with defining the resources and goods in the CIMAMSSI implementation. Source: own development.

6.2.2. Transaction implementation

Section 5.4.4 presented the rules of concluding transactions in the system. To solve the optimisation problem presented there (under the given restrictions), a simple genetic algorithm ([69]) with real coding was used. A chromosome (i.e. the genotype of an individual representing a potential problem solution in the population) is created based on the allocation of goods to the two agents concluding the transaction, using the following symbols:
− G1 = (g11, g12, ..., g1n) – the list of goods held by agent 1 before the transaction (initial allocation);

− G2 = (g21, g22, ..., g2n) – the list of goods held by agent 2 before the transaction (initial allocation).

The individual's genotype is therefore always 2n long, and an example chromosome can look as follows:

(g11, g12, ..., g1n, g21, g22, ..., g2n)

The individual's fitness function is defined as the sum of the utilities of the agents concluding the transaction:

Fitness(g_1, ..., g_n, g_{n+1}, ..., g_{2n}) = U_1(g_1, ..., g_n) + U_2(g_{n+1}, ..., g_{2n})

What is left to define are the genetic operators: selection, cross-breeding and mutation. The size of the initial population was set at 20 individuals. This population is made up of one individual representing the initial allocation of goods and 19 randomly generated solutions representing potential problem solutions, where every generated solution must meet the limitation of a feasible allocation, i.e.:

∀ i ∈ ⟨1, n⟩ ⊂ N : g_i + g_{n+i} = g_i^0 + g_{n+i}^0    (6.1)

where:
− (g_1^0, ..., g_{2n}^0) – the genotype of the individual representing the initial solution;

− (g_1, ..., g_{2n}) – the genotype of the randomly generated individual.

The cross-breeding operator is defined as a variant of classical two-point cross-breeding: two natural numbers r_1, r_2 satisfying 1 ≤ r_1 < r_2 ≤ n are randomly selected – they represent the cross-breeding points. The method of genetic material exchange is shown in figure 6.5 below. It is easy to see that this way of executing the cross-breeding operator does not breach the limitation concerning the achievable allocation. However, the second limitation, resulting from the Pareto rule, is a problem: as a result of the transaction, the situation of no agent can deteriorate. Consequently, it was assumed that if Descendant 1 does not comply with this limitation, it is replaced by parent Individual 1; similarly, if Descendant 2 does not comply, it is replaced by parent Individual 2.

The mutation operator is defined as follows: two natural numbers i, j are randomly selected from the interval (0, n) for which g_i and g_{n+j} are not equal to zero4. The mutation follows the scheme below.

4 In reality, g_i will be greater than or equal to 1, as it was assumed that an agent can only hold integral quantities of goods.


Fig. 6.5. Looking for the optimum allocation as part of a transaction – execution of the crossbreeding operator. Source: own development.

Individual before mutation:

(g_1, ..., g_i, ..., g_j, ..., g_n, ..., g_{n+i}, ..., g_{n+j}, ..., g_{2n})    (6.2)

Individual after mutation:

(g_1, ..., g_i − 1, ..., g_j + 1, ..., g_n, ..., g_{n+i} + 1, ..., g_{n+j} − 1, ..., g_{2n})    (6.3)

The probability of cross-breeding occurring was empirically set at 0.8, and the probability of mutation at 0.2. The selection operator is defined as follows: the next population is made up of the 20 best individuals from the ancestral and descendant populations (according to the evaluation function Fitness) which fulfil the second limitation, resulting from the Pareto rule, expressed by equations 6.4 and 6.5:

U_1(g_1^0, ..., g_n^0) ≤ U_1(g_1, ..., g_n)    (6.4)

and:

U_2(g_{n+1}^0, ..., g_{2n}^0) ≤ U_2(g_{n+1}, ..., g_{2n})    (6.5)

where:
− (g_1^0, ..., g_{2n}^0) – the genotype of the individual representing the initial solution.

The algorithm used is listed below (the stop criterion was set as the lack of improvement of the best result in five subsequent iterations) – see listing 6.1.

procedure GA
  t := 0;
  init P[0];                  // initial population
  eval P[0];
  while (stop criterion not fulfilled)
    P'[t] := variation P[t];  // genetic operators
    eval P'[t];
    P[t+1] := select(P'[t], P[t]);
  end while
end procedure

Listing 6.1. Simple genetic algorithm – pseudo code. Source: Z. Michalewicz ([69]).
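The operators above can be sketched in Java. The sketch below is simplified to mutation-only hill climbing (the full algorithm additionally uses a population of 20 and two-point cross-breeding); the class name AllocationGA and the linear utility functions standing in for U1 and U2 are hypothetical, chosen only to make the example self-contained:

```java
import java.util.Random;

// Sketch of the transaction-allocation search described above. A chromosome of
// length 2n stores agent 1's goods at indices [0, n) and agent 2's at [n, 2n).
public class AllocationGA {
    static final Random RND = new Random(42);

    // Hypothetical linear utilities standing in for U1 and U2.
    static double u1(int[] g, int n) {
        double s = 0;
        for (int i = 0; i < n; i++) s += (i + 1) * g[i];
        return s;
    }
    static double u2(int[] g, int n) {
        double s = 0;
        for (int i = n; i < 2 * n; i++) s += (2 * n - i) * g[i];
        return s;
    }
    static double fitness(int[] g, int n) { return u1(g, n) + u2(g, n); }

    // Constraint (6.1): the total amount of each good is conserved.
    static boolean feasible(int[] g, int[] g0, int n) {
        for (int i = 0; i < n; i++)
            if (g[i] < 0 || g[n + i] < 0
                    || g[i] + g[n + i] != g0[i] + g0[n + i]) return false;
        return true;
    }

    // Mutation (6.2) -> (6.3): one unit of good i moves from agent 1 to agent 2
    // and one unit of good j moves back, so per-good totals stay unchanged.
    static int[] mutate(int[] g, int n) {
        int[] c = g.clone();
        int i = RND.nextInt(n), j = RND.nextInt(n);
        if (c[i] > 0 && c[n + j] > 0) { c[i]--; c[j]++; c[n + i]++; c[n + j]--; }
        return c;
    }

    // Accept a mutant only if it is feasible, Pareto-acceptable with respect to
    // the initial allocation (conditions 6.4 and 6.5), and improves total fitness.
    static int[] improve(int[] g0, int n, int iterations) {
        int[] best = g0.clone();
        for (int k = 0; k < iterations; k++) {
            int[] cand = mutate(best, n);
            if (feasible(cand, g0, n)
                    && u1(cand, n) >= u1(g0, n)
                    && u2(cand, n) >= u2(g0, n)
                    && fitness(cand, n) > fitness(best, n)) best = cand;
        }
        return best;
    }
}
```

Because mutation moves one unit of a good in each direction, every candidate automatically satisfies constraint (6.1); the Pareto conditions (6.4) and (6.5) are then enforced explicitly, as in the replacement rule for descendants described above.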

6.2.3. The JESS rule system

To implement the molecular computational model, it is necessary to use a language in which it is natural to express information molecules constituting facts in their simplest form, or rules in a complex form (cf. chapter 3). T. Szuba's publication ([107]) proposes a concept for implementing this model using the Prolog language, which yields the so-called "Random Prolog Processor" (cf. chapter 3, section 3.5). An attempt to transfer this concept to the implementation of a CIMAMSS model causes a problem, as it hinders integrating code written in Prolog with the Java code providing the graphic interface of the system, the ability to save and retrieve basic objects using XML files, and defining the world in such a form. Due to the above problems, in this study the rule language provided by Jess ([47]) was used to implement the molecular computation model. The advantage of the Jess library is its very easy integration with code written in Java, because all of Jess was itself written in that language. In addition, Jess makes it possible to manipulate Java objects and reason about them.

Jess is an acronym of Java Expert System Shell ([47]) and, as the name indicates, it is a language for designing expert systems in Java. Jess originates from CLIPS5, a rule-based programming language designed for creating systems of this class based on rule representations of knowledge. At the beginning, Jess was simply an implementation of CLIPS in Java, but today it has many features distinguishing it from its ancestor. Jess was developed by Ernest Friedman-Hill at Sandia National Laboratories (the first version appeared in 1995). On the one hand, Jess is a fully developed script language whose syntax resembles that of the majority of rule languages, in particular CLIPS; on the other, it can be treated as a Java library that performs reasoning based on knowledge represented by rules. Jess supports both forward and backward chaining, and its big advantage is the previously mentioned ability to directly manipulate Java objects and reason about them. In the Jess script environment, Java objects can be created, interfaces implemented and methods called on Java objects without having to compile the code. Rules can also be defined. What makes Jess unique in this class of languages, and constitutes its significant advantage, is the easy access to the basic and advanced functionality of Java, including its libraries. Because the Jess library is implemented in Java, it is independent of the executing platform and transferable between operating systems. The only thing required to launch Jess and use its script language is a Java Virtual Machine (JVM). In addition, Jess also makes it possible to directly manipulate the operation of the reasoning engine itself. Before such steps are taken, Sandia Laboratories have to be contacted to obtain their permission.

5 CLIPS is a language designed for creating expert systems with rule representation of knowledge, developed by the Software Technology Branch (STB) of the NASA/Lyndon B. Johnson Space Center in 1986.

Jess syntax basics

Jess rules can be written in two ways: using XML via JessML6 and using the native Jess rule language (originating from CLIPS; the latter is the usage method recommended by the library authors). The JessML format is not described here, as the purpose of this chapter is not to teach programming in Jess, but just to sketch its functionality to justify choosing it for implementing the molecular computational model. The syntax of Jess is very similar to that of its ancestor, CLIPS. The following are available to the programmer: symbols, which constitute the most elementary entity in Jess (similar to identifiers in other programming languages), numerical values and strings.
Comments are preceded with a semicolon (single-line, LISP-style comments) or enclosed in blocks delimited by "/*" and "*/" (C-style). Data is stored in variables, and a value is assigned to a variable using the "bind" command. Global variables are defined with the "defglobal" instruction. Lists are a fundamental concept in the syntax (just as in CLIPS and Prolog). A large library of ready-made functions is available, and the programmer's own functions can be defined using the "deffunction" instruction. The "defadvice" command makes it possible to add code that is always executed at the beginning or the end of a function call and that has access to the arguments passed to the function. The available functions also include ones controlling the execution of the code, inter alia: if, while, for, foreach and try.

6 More details on the JessML format are available in the online documentation of the Jess system at: http://www.jessrules.com/jess/docs/71/xml.html.


(deftemplate agent
  "Simple market agent."
  (slot id (type STRING))
  (slot utility)
  (slot position)
  (slot preferences)
  (slot resources))

Listing 6.2. Defining a template in Jess. Source: own development.

Jess stores knowledge in the form of facts, which constitute the so-called working memory. Facts come in three types: unordered facts (with slots or multislots representing object attributes, defined by the "deftemplate" command), shadow facts (storing references to Java objects) and ordered facts (without slots, the most efficient). New facts are added with the "assert", "add" and "definstance" commands; they are deleted with the "retract" and "undefinstance" commands. Slots of an existing fact can be changed with the "modify" command. The "clear" command is used to clear the working memory. Fact schemes can be inherited using the word "extend" when creating a "deftemplate". Every fact can be defined using a fact template. An example showing the method of defining a template is shown in listing 6.2. In the case of shadow facts there is no need to specify the template explicitly, as it is specified by the Java code, as shown in listing 6.3. The Jess rule engine runs the reasoning in the system, so it is necessary to tie this definition of an agent written in Java to the Jess code used for reasoning. This is done as shown in listing 6.4. It should be noted that to make reasoning on slots defined in this way possible, the Java objects must comply with the POJO (Plain Old Java Objects) rules and implement the serialisation mechanism. In rule systems, knowledge is stored in the form of structures called rules, which resemble the if-then construct from programming languages. Rules are created using the "defrule" structure. They consist of a pattern (LHS7) found on the left side of the rule and an action (RHS8) on the right. One should remember that the LHS cannot contain function calls. When matching facts, regular expressions from the java.util.regex package can be used. Every rule can have the salience parameter assigned to it, which is defined using an integer or variable value or a function call,

7 Left Hand Side.
8 Right Hand Side.


public class SimpleAgent implements Serializable {
    private long utility;
    // remaining attributes

    // setters and getters
    public long getUtility() {
        return utility;
    }

    public void setUtility(long u) {
        this.utility = u;
    }

    // remaining methods
}

Listing 6.3. Java code specifying the template. Source: own development.

(deftemplate SimpleAgent
  (declare (from-class SimpleAgent)))

Listing 6.4. Jess and Java binding. Source: own development.

and which sets the priority of the rule on firing. The logical keyword builds the tree of logical dependencies used with the "retract" command. The "run" command, in turn, starts the reasoning process. The main reasoning mechanism is forward chaining. To use backward chaining (for selected facts), the "backchain-reactive TRUE" property should be declared when creating the fact scheme, and those facts can then be used in the appropriate rules. Rules and schemes are split into modules using the "defmodule" structure, similar to that known from CLIPS. It allows facts and rules to be grouped into easily manageable modules. The "defquery" structure supports querying the working memory. In addition, there is also the jess.Filter object, which allows the memory to be queried directly.

Jess working with Java

It has been mentioned previously that the Jess rule engine integrates very well with code written in Java. In order to use Jess in Java code, the Jess library in the form of the jess.jar file has to be added to the classpath variable or otherwise made visible to the compiler through the development environment. All operations provided by the Jess engine are available from the Java layer through an object of the

jess.Rete class. Jess can be used and launched in the following ways:
− scripts written only in Jess;
− scripts written only in Jess, using new commands of the language defined by the user in Java;
− scripts written only in Jess, using new commands of the language defined by the user in Java and/or the Java API, with the main() function provided by Jess;
− scripts in Jess and Java, using new commands of the language defined by the user in Java and/or the Java API, with the main() function provided by the user or the application server;
− code written in Java, with Jess scripts loaded and executed while the program is running;
− code written entirely in Java, utilising the Jess functionality through its Java API.
In the CIMAMSSI system implementation, the last method was used.

Implementing a molecular computation model in the CIMAMSSI system using JESS

As described in chapter 5, the intellectual and spatial profiles of the market agent consist of strategies describing the agent's behaviour as well as parameters describing the agent and its environment. The appropriate definition of rules allows the right agent behaviour (action) strategy to be selected (cf. figure 5.5). The component responsible for integrating the Java code with the Jess engine includes, inter alia, a module which receives on its input the agent instance, the behaviour parameters and a file of rules for the Jess engine, and which then uses the decision returned by the rule engine to initiate the action provided for by the rules. The basic logical unit in the component is the rule. It is represented (figure 6.6) by the IRule interface. The rule contains the criterion (i.e. the condition), represented by the Condition class (together with the ICriteria interface), and the conclusion (IConclusion). A condition may contain other conditions within it (the ICriteriaContainer interface), which allows a logical tree of conditions to be created.
Conditions come in two types (figure 6.8): a condition composed of a set of criteria connected with the AND or OR logical operators (the JessBasedCriterionOperator class) and a simple condition (the JessBasedCriterion class). The latter contains a reference to a parameter (AgentParameter), an operator and a value. A conclusion, which is an element of a rule, is represented by the JessBasedConclusion class (figure 6.9) and contains a reference to an agent's strategy number (AgentAction). The rule is represented by the JessBasedRule class (figure 6.10).

Fig. 6.6. Data model interface. Source: own development.

Every unit described (rule, condition, conclusion) implements the IEditableObject interface (figure 6.10), and as a result every element has the right editor assigned to it, allowing the user to modify each of these elements. In figures 6.7, 6.9 and 6.10 these are the appropriate classes inheriting from AbstractEditableObjectEditor. Every element of the tree has a menu assigned to it (figure 5.8). The JessBasedTreeData class is responsible for storing and opening the entire project and exporting it to a file in the Jess language. In the agent action module shown in figure 6.12, the JessBasedAlgorithm class is responsible for running the Jess rule engine with the appropriate file and for obtaining the appropriate instance of the agent's action strategy (IAgentStrategy) from the StrategyFactory. Figure 6.13 shows the classes implementing agent action strategies in the polymorphic method go(agent:MarketAgent).

Abridged description of the Rete algorithm

The algorithm used in Jess to match facts to the patterns stored in rules is Rete ([32]). This algorithm is also used by many other rule languages, such as CLIPS, ART, OPS5 and OPS83. The simplest approach to the matching problem is to check every rule in turn. If the existing facts satisfy the rule, the rule is activated (sent to be executed9). This approach is fine when only one run of the algorithm is executed, but if there are many subsequent runs, particularly with little input data, the same checking operations are executed many times (this is the characteristic of rule-based


9 Or in other words, fired.

Fig. 6.7. Rule condition representation – class diagram. Source: own development.

expert systems referred to as temporal redundancy). If the number of rules is big, the efficiency of such a system becomes a problem. In 1979, Charles Forgy, in his subsequently published Ph.D. dissertation ([32]), proposed solving this problem with the Rete10 algorithm. This algorithm raises the efficiency of matching rules to facts at the cost of increased memory usage. On every run, the state of the automaton performing the matching is stored. In the following run, this state is restored and calculations are only made for the differences that appeared on the fact list. This requires memory in which partial matches of each rule are stored: sets of facts which fulfil the conditions of a rule, from the first to the last condition. This leads to a significant optimisation guideline: in order to limit the number of partial matches, the most specific conditions should be placed at the beginning of the rule. Two networks are formed, the so-called Pattern Network and the Join Network, in

10 From the Latin word meaning a net.


Fig. 6.8. Tree elements representation – class diagram. Source: own development.

which rules are compiled and the reasoning is executed. These networks are made up of nodes and leaves symbolising partial matches and their combination into structures which carry the logic contained in the rules. An additional optimisation is performed on the networks: their common fragments are shared.
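The memoisation idea behind Rete can be illustrated with a toy two-condition join in Java. This is a drastic simplification of the Pattern and Join Networks, with hypothetical names, but it shows the key property: asserting a new fact triggers work proportional to the change, not a re-scan of all rules against all facts:

```java
import java.util.ArrayList;
import java.util.List;

// Toy incremental matcher for one rule: "seller(x) AND buyer(y) => trade(x, y)".
// Facts matching each condition are cached, so a newly asserted fact is joined
// only against the opposite memory instead of re-checking everything.
public class TinyRete {
    private final List<String> sellerMemory = new ArrayList<>(); // condition 1 matches
    private final List<String> buyerMemory  = new ArrayList<>(); // condition 2 matches
    final List<String> activations = new ArrayList<>();          // rule instantiations

    public void assertSeller(String x) {
        sellerMemory.add(x);
        for (String y : buyerMemory) activations.add("trade(" + x + "," + y + ")");
    }

    public void assertBuyer(String y) {
        buyerMemory.add(y);
        for (String x : sellerMemory) activations.add("trade(" + x + "," + y + ")");
    }
}
```

Each assertion costs only one pass over the opposite memory, which is the essence of trading memory for time described above; the real Rete network generalises this to arbitrarily many conditions and shared sub-patterns.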

6.2.4. World editor and visualisation

In order to facilitate configuring the world used in the simulations, the program features a visual editor supporting the following:
− defining the environment;
− defining the agents present in the environment and their distribution;
− defining all resources found in the environment (and the optional dependencies between goods if one good is produced by processing another);
− writing/reading the defined world to/from XML files via the JAXB library.
Figure 6.14 shows the dialog box for defining global parameters of the environment (upper right-hand part), adding a new good with dependencies (lower right-hand part), and defining the location of resources, including their initial quantities as well as the method of renewal, understood as the number of units of the good provided by the environment during one round.

Fig. 6.9. Conclusion representation (of the rule) – class diagram. Source: own development.

Fig. 6.10. Classes for representing a rule – class diagram. Source: own development.


Fig. 6.11. Application user interface – class diagram (Jess module). Source: own development.

Fig. 6.12. Agent action module – class diagram (Jess module). Source: own development.


Fig. 6.13. Action strategies according to the Command design pattern. Source: own development.

Figure 6.15 shows the windows for defining new agents. For convenience, adding a new agent runs in two stages: in the first, the template of the agent is defined by adding it to the list at the top; in the second, a specific instance of the agent is added by placing it on the map (the instance can be modified later by clicking the "Modify" button). The world editor makes it possible to quickly and conveniently configure the environment for simulating ASIHM processes. It should be noted that the screenshots are presented here only for orientation; the detailed use of the software will be described in a user's guide provided with the final version of the software11.

11 The software is still undergoing development and the author's aim is to make it more widely available. Consequently, in addition to supplying the software suite itself, a user's guide, FAQ, project web site etc. must be provided. However, that is not the purpose of this monograph.


Fig. 6.14. Dialog box for defining global environment parameters and resources. Source: own development.

Fig. 6.15. Agent definition dialog box. Source: own development.


6.3. Defining the IQS quotient for the market

The definition, presented in chapter 2, of the IQS quotient used to "measure" Collective Intelligence is based on the probability that a given problem will be solved by a social group. To compute the IQS quotient, this probability is compared to the probability of the same problem being solved by a single member of the group, which forms the point of reference. Thus, two elements are necessary to determine the IQS: the definition of the problem to be solved and the identification of the point of reference. In the case of a market simulation, the adaptation of this quotient becomes the main issue: how should the quotient be defined for the market if the problem being solved by market participants does not manifest itself explicitly? To answer this question, it is worth noting that economics is the science of how society and the individual decide to use resources which also have alternative uses: to produce various goods and distribute them for consumption now or in the future, to various individuals and groups within society. It is assumed that all resources have alternative uses and by definition are scarce ([7, 10]). This observation leads to the conclusion that there should be a way of measuring the degree of goods utilisation. It turns out that, according to its definition, Gross Domestic Product (GDP) describes the value of final goods and services produced within a country in a specified unit of time (most often one year). In the case of the GDP, the geographic criterion is the only and decisive one: the origin of the capital, the ownership of the company etc. are immaterial. This observation is important due to the existence of a natural spatial limitation in the CIMAMSSI model, which follows from the definition of the environment.
In this model there is no notion of a service, so the definition of the GDP should be based on measuring the value of all goods "produced" by market players in a given unit of time. This measurement is entirely possible; the details of how to perform it are presented later in this chapter. In the case of market simulations, the "problem" is the optimum utilisation of environmental resources by market agents. As noted previously, this problem is measurable with the GDP, a primary indicator of macroeconomics. The only issue left to solve is finding a point of reference. In accordance with the considerations presented in chapter 2, the reference point should be determined by comparing how quickly the problem (or a number of problems) is solved by a social group to how quickly the same problem is solved by the single units forming the group, which boils down to excluding communication mechanisms within the group. In the case of the CIMAMSS model this is very easily done, as the only act of communication taking place between the market agents is the exchange of goods. Consequently, the IQS quotient can be defined as the difference between the GDP achieved in a simulation during which the agents communicate and that achieved in a simulation in which those

mechanisms are unavailable (which means that each agent can only consume goods it obtains from the environment by itself). Thus this quotient measures the growth of the GDP, as presented in definition 6.1. Definition 6.1. The market IQS. IQS = GDPI − GDPRef ,

(6.6)

whereas: − GDPI – GDP in simulation with interactions enabled (transactions) given the environment state and the agents’ definitions; − GDPRef – GDP in simulation with interactions disabled (transactions not allowed, each agent is producing goods to consume by itself). It is worth noting that the radius of agent interaction introduced in the CIMAMSS model is an indicator which indirectly determines the ‘strength’ of communication between agents. If the agent’s interaction radius is increased, that agent’s ability to conclude transactions with other agents goes up. Consequently, the author’s research idea is to observe the influence of the value of this indicator on the IQS.
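Definition 6.1 reduces to a single subtraction once the two GDP figures are available. A minimal sketch follows; the illustrative numbers are the Ref. and Ri = 5 averages reported later in table 6.2:

```python
def market_iqs(gdp_interacting: float, gdp_reference: float) -> float:
    """Definition 6.1: IQS = GDP_I - GDP_Ref (equation 6.6)."""
    return gdp_interacting - gdp_reference

# Illustrative values: the Ref. and Ri = 5 averages from table 6.2.
iqs = market_iqs(307_875, 158_161)
print(iqs)  # 149714, matching the IQSmarket entry for Ri = 5 in table 6.3
```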

6.3.1. GDP definition in the CIMAMSS model

To calculate the IQS quotient for the market, it is necessary to determine the method of calculating the GDP in the CIMAMSS model. According to the macroeconomic definition, the GDP is equal to the total value of final goods and services produced, calculated by deducting from the total production the value of the goods and services used up in producing it. Thus it is the added value¹² created within a company, and the GDP is the total added value produced by all businesses. Consequently, from the production side:

GDP = total production of a country − indirect consumption = the total value added from all sectors of the national economy

Although the practical calculation of the GDP using the above formula is onerous (which is why other formulas are used in practice), because national statistics contain direct measures of neither the total production nor the indirect consumption, in the market simulation carried out based on the CIMAMSS model proposed here this formula can be used almost directly. It should be noted that in this model, production takes two forms:

¹² Wikipedia defines added value as the growth in the value of goods resulting from a specific production process or the creation of a service; thus the source of added value is labour. In business, it is the difference between the total sales revenue and the total cost of the external resources used for production (raw materials, energy and external services associated with the specific production).

− sourcing goods directly from the environment – during this action the agent incurs no cost other than the "energy" cost (measured by a drop in its utility¹³);
− producing a good from the goods held – apart from the "energy" expenditure, indirect goods are consumed.

The above observations allow this indicator to be defined as follows¹⁴:

GDP_CIMAMSS = Σ_{a_i ∈ Agents} [ Σ_{j=1..k} ( p_{a_i j} − Σ_{n=1..m} g_{a_i jn} ) + Σ_{p=1..r} e_{a_i p} ],   (6.7)

where:
− p_{a_i j} – the amount of good j produced by agent i;
− e_{a_i p} – the amount of resource p gathered by agent i from the environment;
− g_{a_i jn} – the amount of good n used in the production of good j by agent i.
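Equation 6.7 can be sketched in code. The per-agent record layout used below (dictionaries keyed by good and resource indices) is an assumption made for illustration, not the actual CIMAMSS data structure:

```python
# A sketch of equation 6.7: for every agent, net production (output minus the
# goods used up as inputs) plus the resources gathered from the environment.
def gdp_cimamss(agents):
    total = 0.0
    for a in agents:
        # produced[j] = p_{a_i j}; used[j][n] = g_{a_i jn}; gathered[p] = e_{a_i p}
        produced_net = sum(
            p_j - sum(a["used"].get(j, {}).values())
            for j, p_j in a["produced"].items()
        )
        total += produced_net + sum(a["gathered"].values())
    return total

agent = {
    "produced": {1: 8.0},       # 8 units of good 1 produced...
    "used": {1: {2: 3.0}},      # ...consuming 3 units of good 2 as input
    "gathered": {2: 5.0},       # 5 units of resource 2 taken from the environment
}
print(gdp_cimamss([agent]))  # (8 - 3) + 5 = 10.0
```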

6.4. Simulating the ASIHM process in a barter economy

The model developed and its pilot implementation were used to carry out experiments simulating ASIHM processes in a barter economy: the transactions concluded between agents consist in swapping one good for another, using the model taken from market exchange theory and discussed in chapter 5.

6.4.1. Experiment preparation

In order to conduct simulation research of the ASIHM process using the software developed, an environment with the following parameters was prepared:
− Space size: 100×100;
− Four defined goods: marked as G1 (shown in red in figure 6.16), G2 (purple), G3 (green), G4 (grey);

¹³ A certain simplification has been made here by assuming that the energy expended costs nothing – the market agent consumes energy to survive and incurs costs to obtain it (by buying consumer goods). This simplification does not change the nature of the GDP, but it allows the difficulty of estimating these costs to be avoided.
¹⁴ This formula contains one more simplification: the GDP is normally given in the national currency, so money is the unit of measure, whereas this formula describes the GDP as the total amount of goods produced by all agents. By assuming the price of every good to be 1, i.e. adopting every good as a numeraire ([112]), we can move to a monetary unit.

Tab. 6.1. Resource configuration for the environment used in experiments

Good name   Initial amount   Increase per turn   Total within 100 turns
G1          5,000            5,000               505,000
G2          5,000            5,000               505,000
G3          5,000            5,000               505,000
G4          5,000            5,000               505,000
Total       20,000           20,000              2,020,000

− Resource location: resources were placed on 20 "islands" shaped as rectangles of approx. 15×15. Every island has 25 centres providing resources of every type. The initial number of units of a resource found in a centre was arbitrarily set at 10, and in every round the centre supplies another 10 units of each good. Table 6.1 compiles the initial quantity of resources supplied by the environment, the number of units of every resource supplied during a round, and the total amount of goods. The location of resources on islands was adopted arbitrarily, based on earlier experiments, to prove that the migration profile was defined reasonably, i.e. that an agent's movement decisions were taken based on the strategy telling it to look for the best market (details are presented in chapter 5).

In the environment thus defined, 200 agents from two classes (each class with the same number of instances – one hundred agents) were randomly placed:
− Class one agents (symbol: Ag1) have preferences defined using the Cobb-Douglas function for two goods, G1 and G2, whereas their production capacities are defined for goods G1 and G3. These agents are therefore partially self-sufficient: they can consume the good G1 that they produce, but good G3 can be produced only for the purpose of its subsequent exchange for good G2 or G1. For the purposes of the consumption profile, a function based on the simplest version of the Cobb-Douglas preference function¹⁵ was assumed:

u(g1, g2) = (g1 · g2)/2 + g1 + g2,   (6.8)

¹⁵ It can be seen that this is the Cobb-Douglas preference function to which the elements g1 + g2 were added. In the CIMAMSS model this function serves not only to compare two consumption baskets when executing an exchange, but also to measure the utility growth resulting from consumption. Without the sum elements, if the consumption of one good were equal to 0 then, regardless of the level of consumption of the second good, the utility growth would also be zero.


Fig. 6.16. A fragment of the environmental space used in experiments. Source: own development.

where g1, g2 denote the amounts of goods G1 and G2 respectively. The goods production function was defined in the same way for both goods and for agents of both classes, as described by equation 6.9:

y = (2/5)·x for x ≤ 10;  y = 2·log10(10·x) for x > 10,   (6.9)

where x in equation 6.9 means the quantity of good G1 or G2 in this case. This definition allows economies of scale to be achieved, as usually occurs in production (the unit production cost goes down as the production volume goes up).
− Class two agents (symbol: Ag2) have similarly defined preferences: the utility function (also a Cobb-Douglas preference function in its simplest version) used in consumption is established for goods G3 and G4, whereas the production profile allows goods G2 and G4 to be produced. Just like agents of class Ag1, agents of this class are partially self-sufficient: they can consume the good G4 that they produce, but good G2 can be produced only for the purpose of its subsequent exchange for good G3 or G4.
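The two functions above can be transcribed directly; note that the two branches of the production function meet at x = 10, since (2/5)·10 = 4 = 2·log10(100):

```python
import math

def utility(g1: float, g2: float) -> float:
    """Equation 6.8: the Cobb-Douglas core plus the additive terms g1 + g2."""
    return g1 * g2 / 2 + g1 + g2

def production(x: float) -> float:
    """Equation 6.9: linear up to x = 10, then logarithmic."""
    if x <= 10:
        return 2 * x / 5
    return 2 * math.log10(10 * x)

print(production(10))   # 4.0 - both branches agree at the breakpoint
print(production(100))  # 2*log10(1000) = 6.0
print(utility(0, 7))    # 7.0 - nonzero thanks to the additive terms (footnote 15)
```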

The remaining parameters for both agent classes are defined identically. The initial utility level is 10 and the utility loss per round is 0.5 for every agent. This loss is interpreted as "energy"-related and was set arbitrarily by the author, as no basis for it could be found in the mathematical models coming from microeconomics; it is the minimum quantity of energy the agent uses to survive. If an agent's utility level falls to 0, this agent is eliminated from the market¹⁶ (this is why the concept of utility here is broader than in microeconomics). Consequently, at these arbitrarily set parameters, every agent was able to survive in the environment for 20 rounds without concluding any transactions or producing any goods. Due to the location of resources on islands, this gives an agent whose initial location was randomly selected in a desert (the area between islands) a chance to reach resource-rich areas (the distance between islands and the initial utility indicator were, admittedly, set arbitrarily, but based on preliminary experiments).

What requires broader discussion is the method of defining the consumption and production profiles of agents. It is easy to see that the preferences of the two agent classes could be defined in such a way that transactions would have no significance – every agent would prefer different goods, leaving no room for exchanges that could improve the situation of the exchanging agents. Of course this situation is far removed from reality, as the majority of people have similar, although not identical, preferences for the available goods. In addition, the production function is defined in the model so that an agent can extract (obtain from the environment) only one good contributing to raising its utility, and not all of them. Such an arbitrary choice also seems fully intuitive and realistic, as in the real world almost no one is self-sufficient.

However, the production profiles (the agent can survive without concluding transactions with other agents) may seem to have been defined very 'liberally', in the sense that it would be simple to define an environment guaranteeing the achievement of a higher IQS. The production and consumption strategies have also been defined arbitrarily, in the simplest possible way. During a round, every agent consumes half of the available resources, while the production strategy is more complex:
− the cost incurred to extract a resource is solely energy-related and causes energy to be lost – this means that the agent does not (directly) consume other goods for the production;
− the agent uses no more than 50% of its current utility level for production;
− if several goods can be sourced from the environment, the decision on the amount of each kind of good extracted is taken on the basis of the level of utility which the good will bring when consumed and the level of utility brought by the hitherto transactions using the extracted good.

In addition, to ensure that goods intended only for exchange are produced, a limitation was introduced: if it is possible to obtain a good that the agent cannot consume (as it is designated for subsequent exchange), the agent uses at least 20% of the total energy designated for production to extract this good.

¹⁶ The word "die" is avoided intentionally, as it has a biological association which the author wanted to avoid.
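One possible reading of these extraction rules is sketched below. The way the two utility criteria are combined into a single score, and the proportional reallocation used to enforce the 20% floor, are assumptions made for illustration – the text does not specify them:

```python
# An illustrative reading of the extraction rules: the energy budget is capped
# at 50% of current utility, goods are weighted by a per-good score (assumed
# here to already combine consumption utility and past transaction utility),
# and an exchange-only good receives at least 20% of the budget.
def extraction_budget(utility_level: float) -> float:
    return 0.5 * utility_level

def allocate(budget, scores, exchange_only):
    total = sum(scores.values())
    shares = {g: budget * s / total for g, s in scores.items()}
    floor = 0.2 * budget
    for g in exchange_only:
        if shares[g] < floor:
            deficit = floor - shares[g]
            shares[g] = floor                     # enforce the 20% minimum
            others = [h for h in shares if h not in exchange_only]
            other_total = sum(shares[h] for h in others)
            for h in others:                      # take the deficit proportionally
                shares[h] -= deficit * shares[h] / other_total
    return shares

budget = extraction_budget(10.0)                  # an agent at utility 10 -> 5.0
shares = allocate(budget, {"G1": 9.0, "G3": 1.0}, exchange_only={"G3"})
print(shares)  # G3 lifted to 1.0 (20% of 5.0), G1 reduced to 4.0
```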

6.4.2. Dynamics of the parameters of the world in ASIHM simulations

For the environment thus prepared, 20 simulations of 100 rounds each were carried out for two situations:
1. A simulation with communication mechanisms between agents enabled (the ability to conclude transactions);
2. A simulation with transaction mechanisms disabled – in order to determine the IQS quotient measuring the Invisible Hand of the Market. In this case, the production profile of the agents is slightly modified: they produce only the goods they can consume.

In addition, a decision was made to analyse the influence of the interaction radius on the IQS. Thus, for situation 1, simulations were carried out with interaction radiuses from 1 to 10. During the simulation, the following parameters were measured:
− the GDP, which measures the total quantity of goods produced – this is the basic ratio used to calculate the IQS of the market;
− the GDP per capita – the average quantity of goods produced by one agent;
− the average utility of agents – the sum of the utility levels (the value of the utility indicator plus the utility of all goods held) of all market participants, divided by the number of participants;
− the number of market participants – due to the assumptions made, this number may fall as a result of agents being eliminated from the market (an agent is removed from the market environment if its utility level drops to zero and it has no goods that it could consume);
− the turnover in the market – the number of units of goods exchanged by the agents.

Table 6.2 compiles the average (Avg.) values of the above parameters from 20 simulations after 100 iterations, together with their standard deviations (SD). "Ref." indicates the results of the 20 simulations which constitute the reference situation for the purpose of determining the IQS quotient of the market.

Tab. 6.2. Results – average values (Avg.) and standard deviations (SD) after 100 turns of the simulation

Market factor           Ref. Avg. | SD     Ri = 1 Avg. | SD    Ri = 2 Avg. | SD
GDP                     158,161 | 1,536    272,976 | 1,776     289,305 | 1,738
Agents population       148.95 | 7.37      146.85 | 2.37       156.80 | 1.41
Average utility level   798 | 33           3,596 | 123         3,612 | 132
Transactions value      NA                 101,123 | 1,212     146,591 | 2,134
GDP per capita          1,054 | 98         1,869 | 112         1,852 | 134

Market factor           Ri = 3 Avg. | SD   Ri = 4 Avg. | SD    Ri = 5 Avg. | SD
GDP                     294,326 | 1,812    301,143 | 1,798     307,875 | 1,524
Agents population       159.23 | 1.52      163.12 | 1.56       167.95 | 1.22
Average utility level   3,734 | 145        3,784 | 156         3,812 | 182
Transactions value      167,234 | 1,923    179,415 | 2,156     189,315 | 2,162
GDP per capita          1,849 | 141        1,846 | 131         1,838 | 141

Market factor           Ri = 7 Avg. | SD   Ri = 8 Avg. | SD    Ri = 10 Avg. | SD
GDP                     312,124 | 1,523    314,612 | 1,532     318,300 | 1,564
Agents population       169.61 | 1.26      171.34 | 1.34       173.90 | 1.31
Average utility level   3,841 | 181        3,867 | 184         4,012 | 189
Transactions value      195,312 | 2,321    198,724 | 2,412     201,321 | 2,658
GDP per capita          1,846 | 143        1,836 | 138         1,827 | 141
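The protocol described above (20 reference runs plus 20 runs per interaction radius, averaged per batch) can be sketched as follows; `run_simulation` is a hypothetical stand-in for the CIMAMSS simulator, assumed to return the measured parameters after 100 rounds:

```python
# A sketch of the experimental protocol. `run_simulation(transactions, radius)`
# is a hypothetical interface returning a dict of measured parameters
# (GDP, population, average utility, turnover) for one 100-round run.
from statistics import mean, stdev

def experiment(run_simulation, runs=20, radii=range(1, 11)):
    results = {}
    # Reference condition: transaction mechanisms disabled.
    ref = [run_simulation(transactions=False, radius=None) for _ in range(runs)]
    results["Ref"] = {k: (mean(r[k] for r in ref), stdev(r[k] for r in ref))
                      for k in ref[0]}
    # Interaction conditions: one batch of runs per interaction radius.
    for ri in radii:
        batch = [run_simulation(transactions=True, radius=ri) for _ in range(runs)]
        results[f"Ri={ri}"] = {k: (mean(b[k] for b in batch), stdev(b[k] for b in batch))
                               for k in batch[0]}
    return results
```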

Figure 6.17 shows the development of the GDP over time for three selected simulations (the extreme runs) in the reference situation, with the average value shown in red. It can be seen that at the beginning of the simulation, the agents whose locations were randomly chosen away from resources needed time to reach them; the situation stabilises after a dozen rounds. The small values of the standard deviation mean that the simulations develop in a stable way.

Fig. 6.17. GDP change dynamics in selected simulations for the reference situation. Source: own development.

Figure 6.18¹⁷ shows the dynamics of change in the number of market participants for three selected simulations (two extreme runs and the average course, shown in red in figure 6.18). Just as for the GDP, the situation stabilises once agents have found resource sources. The dramatic drop in the number of agents after 20 rounds is a result of the initial configuration of parameters – every agent can survive in the environment for 20 rounds without extracting additional resources. Thus, after 20 rounds, agents who have not managed to reach areas with resources go extinct.

After communication mechanisms (market transactions) were enabled, 200 simulations were conducted for various interaction radiuses – 20 simulations for each of the 10 different lengths of this radius. The intended purpose was to study the impact of the length of this radius on market behaviour. The hypothesis that springs

¹⁷ Note: the characteristics are decreasing (agents are eliminated from the environment, hence their number cannot increase). The curves seem to imply, wrongly, that the number of agents increases slightly at some moments – this is due to an approximation using a spline function when drawing the curves.

Fig. 6.18. Population quantity change dynamics in selected simulations for the reference situation. Source: own development.

to mind is as follows: an increase in the value of this indicator should have a positive impact on the GDP. Figure 6.19 shows the dynamics of GDP changes for various lengths of the interaction radius (ri = 1 – yellow, ri = 5 – green, ri = 10 – blue) against the reference situation (red). The graph shows the average values of this indicator from 20 simulations. It can be seen that setting an interaction radius longer than 5 does not change much in an environment defined like this.

Fig. 6.19. GDP change dynamics depending on the value of the interaction radius against the reference GDP. Source: own development.

Figure 6.20 compares the GDP generated during 100 rounds of simulations (the average of 20 simulations) for various lengths of the interaction radius ri. It can be seen that, for the environment defined in this way, results improve significantly up to ri = 5. An analysis of the dynamics of GDP change depending on the radius also brings to light one more interesting phenomenon: for all values of the interaction radius the situation stabilises after a dozen rounds of simulation, but for longer interaction radiuses a slight growth of the GDP can be observed in the rounds following this stabilisation. The economic interpretation of this phenomenon is that the conditions of sustainable economic growth have been reached.

Fig. 6.20. Comparison of the GDP generated during 100 rounds of simulations depending on the interaction radius to the reference GDP. Source: own development.

Figure 6.21 shows the dynamics of changes in the size of the agent population for various lengths of the interaction radius compared to the reference situation (the graph shows the curve of the average value of 20 simulations) – in black. In the reference situation, the sudden drop in the population can be seen to occur later, due to the modification of the production profile: in the reference situation an agent can survive 20 rounds without extracting resources, whereas if communication mechanisms are enabled, the agent extracts goods that it cannot consume, which results in energy losses, and if it fails to find another agent with which it could trade, it is eliminated from the market environment. The yellow curve on the graph shows the values for ri = 1, which turned out to be the worst. Thus it can be seen that the

length of this radius has a significant impact on the size of the population; even increasing ri from 1 to 2 yields a clear improvement in the results.

Fig. 6.21. Comparison of the dynamics of changes in the number of the market participant population. Source: own development.

6.4.3. The IQS study and results from the ASIHM study

The results obtained, compiled in table 6.2, support the claim that it was possible to configure the world, and to take a series of arbitrary decisions (based on a large number of trial simulations), in a way that allowed market stability to be achieved. The initial drop in the number of agents is due to their looking for sources of resources and a better market to exchange on. Agents who have not managed to get to a source of food are eliminated from the environment (it is worth noting that mortality is slightly higher in the reference situation, as from the point of view of each agent the quantity of resources available to it is smaller, and consequently the average distance to the closest source of an extractable resource is longer). After a dozen or so simulation rounds the market stabilises, both for the reference situation and for the market with social communication mechanisms. Notable are the relatively small standard deviations from the average values of the basic economic parameters, which provides additional evidence supporting the conclusion that a stable market situation was achieved during the simulation.

The results compiled in table 6.2 show one more interesting phenomenon: the positive correlation of the GDP with the turnover, which is consistent with real data for the Polish economy. This supports the belief that the assumptions made when designing the model were right, and that the direction of research consisting in adapting subsequent microeconomic models to the EcoMASCI model may in the future allow a model of the Polish economy to be designed, making these simulations more practical from the perspective of economics.
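The positive GDP–turnover correlation noted above can be checked directly from the 100-turn averages in table 6.2, computing the Pearson coefficient by hand:

```python
# Pearson correlation between the GDP and turnover averages from table 6.2
# (interaction radiuses 1-5, 7, 8, 10; the reference run has no turnover).
from math import sqrt

gdp      = [272_976, 289_305, 294_326, 301_143, 307_875, 312_124, 314_612, 318_300]
turnover = [101_123, 146_591, 167_234, 179_415, 189_315, 195_312, 198_724, 201_321]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))

print(round(pearson(gdp, turnover), 2))  # a strong positive correlation, about 0.98
```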

Tab. 6.3. Selected results of the study of IQSmarket and ∆IQS for chosen Ri values

            Ref   Ri = 1    Ri = 2    Ri = 3    Ri = 4
IQSmarket   NA    114,815   131,144   136,165   142,982
∆IQS        NA    0.73      0.83      0.86      0.90

            Ri = 5    Ri = 6    Ri = 7    Ri = 8    Ri = 10
IQSmarket   149,714   153,451   153,693   156,451   160,139
∆IQS        0.95      0.97      0.97      0.99      1.01

The results obtained allow the IQS of the market (IQSmarket) to be calculated. As this is the first study of this kind, a certain interpretation of the result is necessary. Since the IQS quotient based on the GDP is not very meaningful by itself, it is worthwhile introducing an auxiliary indicator whose value can be confronted with the results obtained in the reference social group – a market made up of "self-sufficient" agents who do not trade. Table 6.3 compiles the IQSmarket results and the ∆IQS calculated as follows:

∆IQS = IQSmarket / GDP_Ref = (GDP_I − GDP_Ref) / GDP_Ref,   (6.10)

where:
− GDP_I – the GDP in a simulation with interactions enabled (transactions allowed), given the environment state and the agents' definitions;
− GDP_Ref – the GDP in a simulation with interactions disabled (transactions not allowed; each agent produces goods to consume by itself).

The ∆IQS can be interpreted as a parameter representing the efficiency of the market stemming from the occurrence of the social mechanisms defined by the ASIHM paradigm, which comprise:
− the ability to exchange goods on the market;
− agents' production profiles accounting for the needs of others (based on the agent's experience from previous transactions and market observations).

This indicator can quantify the ASIHM and depends on certain external variables associated with the market. In these experiments, the variable chosen was the agent interaction radius, which determined an agent's ability to conclude a transaction. For the world researched, the market efficiency measured by the ∆IQS increased from 73% to 101% as a result of the operation of ASIHM processes. It should be noted that the value of this ratio depends very strongly on the method of defining agents' preferences and their production capacities. It should also be added that in the model designed, a negative value of this indicator was never achieved; this is due to the use of the Pareto rule, discussed in chapter 5, to implement the transaction mechanism.

It should be noted that the IQS was introduced in the computational model of Collective Intelligence because human intelligence is measured using the intelligence quotient (IQ). A 70% IQ increase in a person can mean moving from the lowest intelligence level to the highest (genius). In turn, the economic genesis of the IQS for the market, based on the GDP, allows the results achieved to be interpreted as representing very dynamic economic growth. It is worth remembering that when an economy achieves a double-digit GDP growth rate, economists consider this phenomenal (e.g. the name "Asian Tigers" was coined precisely because of the high, sustainable economic growth observed in Asian countries).
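Applying equation 6.10 to the GDP averages in table 6.2 reproduces the ∆IQS values reported in table 6.3:

```python
def delta_iqs(gdp_interacting: float, gdp_reference: float) -> float:
    """Equation 6.10: relative GDP growth attributable to market interaction."""
    return (gdp_interacting - gdp_reference) / gdp_reference

gdp_ref = 158_161                        # reference GDP (table 6.2)
for ri, gdp in {1: 272_976, 5: 307_875, 10: 318_300}.items():
    print(ri, round(delta_iqs(gdp, gdp_ref), 2))
# ri = 1 -> 0.73, ri = 5 -> 0.95, ri = 10 -> 1.01, as in table 6.3
```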


7. Conclusions

In this monograph, the author has attempted to describe the paradigm of the Invisible Hand of the Market using a Collective Intelligence computational model, together with selected problems of building simulation systems for the purposes of economics. To do this, a model of a multi-agent system was proposed in which the behaviour of individual agents representing market players is described using mathematical models originating from economic theory, in particular the Preference Theory and the Theory of Market Exchange. It was then proposed to transform the model thus described into the molecular computational model used by Collective Intelligence. The combination of the multi-agent and Collective Intelligence paradigms was aimed at developing a tool combining the advantages of both approaches and creating an innovative model describing the paradigm of the Invisible Hand of the Market.

The model described was deployed, on a pilot basis, as a market simulator, which made it possible to conduct experiments in which the IQS indicator introduced by the Collective Intelligence computational model was defined for measuring the Invisible Hand of the Market. This allowed the author to verify the hypothesis constituting the proposition of this monograph, namely that this model facilitates not just describing, but also measuring, the family of market processes referred to as the Invisible Hand of the Market. Apart from the capability of applying the Collective Intelligence computational model to describe this paradigm and propose its measurement, the prototype built supports studying market efficiency depending on various factors, which may include various strategies of goods production and consumption by market players, various migration strategies etc. Another advantage of the multi-agent model proposed herein is the ease of its extension by adapting other economic models which describe the behaviour of market players.
It is worth noting that the proposed indicator for measuring the effects of the Invisible Hand of the Market is based on the macroeconomic definition of the GDP and, in addition, makes it possible to measure the social behaviour of market players described by mathematical microeconomic models consistent with the paradigm of the Invisible Hand of the Market.

The author believes that the greatest achievements of this monograph include:
1. The proposed approach yielded a very flexible architecture of the system model, which can easily be enhanced by adding extra elements.
2. The layered architecture, whose upper layer consists of a multi-agent model of the system, allows new economic models to be easily adapted without any impact on the lower, computational layer, made up of a molecular computational model.
3. Proposing an indicator that allows the results of the operation of the Invisible Hand of the Market to be measured within the simulation model. This indicator originates from the Collective Intelligence theory, where it originally allowed the synergy created by a social structure to be measured; in this monograph it was based on the macroeconomic definition of the GDP while retaining all the characteristics of its precursor. The introduction of this indicator proposes a kind of bridge between microeconomics – which studies individual markets and the behaviour of individual market players – and macroeconomics – which describes the economy as a whole. It should be noted that the IQS of a market allows the measurement of the synergy which stems from the market players creating a social structure within which communication is described using microeconomic models. The author believes this to mark an important step towards merging micro- and macroeconomics, which is a current research problem in neoclassical economics. In addition, the proposed definition is an attempt at answering the question asked in the publication of T. Szuba ([107]): "Is it possible to measure the Collective Intelligence of a whole nation?" It seems that the proposed IQS definition, based on the definition of the GDP, partly answers that question: it is possible, and it has been done in this simulation model. However, there is a fundamental difficulty in transferring the definition of this ratio to the real world, as no point of reference can be defined there (in the simulation model, the reference point is an economy in which everyone produces goods only for their own needs).
4. Proposing and implementing a system founded on a layered architecture, whose upper layer is a multi-agent system and whose lower, computational layer is a molecular computational model, justifies the hypothesis that such a computational model can be successfully designed for selected models of multi-agent systems.
5. Deploying the system architecture described above allowed certain extra-functional features, such as ease of system expansion and scalability, to be achieved. These features result from the correct application of the multi-agent paradigm, which splits the system into loosely coupled components communicating with asynchronous messages. The system can be made scalable because many computers can be combined into one virtual machine executing the multi-agent program, which has the additional benefit of making the system highly reliable by reducing the dependencies between its elements.
6. Hitherto approaches to multi-agent market simulations were reviewed to provide the starting point for defining the position of the EcoMASCI model in the research conducted.

It should be noted that the above problems do not exhaust the subject discussed; they merely allow the foundations of the new methodology to be built. Regardless of its layered structure, the presented system is highly complex, and grasping all the aspects of its operation seems difficult. Key assumptions for the operation of the system have been described at a highly general level to avoid difficulties with their interpretation, but taking any steps towards practical development requires making numerous arbitrary decisions, which may have serious consequences. The solutions adopted in the monograph, both at the level of the system model and of its construction, are the result of in-depth studies and a great number of simulation experiments. They mark only the first step towards a cohesive methodology for describing the processes of the Invisible Hand of the Market.

The scope of the author's upcoming research is outlined in the general concept of the EcoMASCI model discussed in chapter four and in the pilot experimental results collected in some of his publications from this field of research ([99]). This research should encompass problems such as:
− Introducing a money-based exchange model into the EcoMASCI system (it is worth noting that the exchange model on which the experiments described in chapter five were based was a typical barter economy in which only goods are swapped). It turns out that the introduction of money can give the social group formed by market players an additional communication mechanism, as one of the functions of money is precisely that: facilitating the exchange of goods in the market. A very interesting analysis of the function of money from the perspective of Collective Intelligence is presented in chapter five of T. Szuba's book ([107]).
− Introducing another element of the economic system, namely the financial sphere (represented in the real world by banks, insurance companies etc.), into the EcoMASCI system. In theory, the purpose for which the financial sphere exists is to stimulate production in the real sphere; from the Collective Intelligence perspective, introducing the financial sphere together with the services it offers (e.g. extending loans) provides another mechanism of communication within the social structure formed by market players.

The plans for the more distant future include approaching the description of the Invisible Hand of the Market paradigm as the effect of a so-called Spontaneous Computational Process arising in the virtual world of the market, as indicated by the word "computational". This process, understood as a time-ordered series of changes and states occurring one after another, implies that the object will change over time, but that its state will be observable at a given moment. The spontaneity of the phenomenon sought is expressed in the way it arises and develops: by definition, according to the paradigm of the Invisible Hand of the Market, this process is not designed and initiated by the purposeful activity of a market player, but is a phenomenon arising spontaneously, as a by-product of his/her actions, emerging in the environment with the help of random events. T. Szuba's team has already studied such a process arising in a virtual world, though not that of the market but of the Internet ([102]). An attempt to adapt this approach to enhance the model proposed in this monograph would constitute an interesting supplement and continuation of this research direction.


References

[1] Adleman L.: Molecular Computation of Solutions to Combinatorial Problems. [In:] Science, vol. 266, 1994, 1021–1024.
[2] Adleman L.: On Constructing a Molecular Computer. [In:] DNA Based Computers, DIMACS: Series in Discrete Mathematics, American Mathematical Society, 1996.
[3] Ben-Jacob E., Cohen I., Gutnick D. L.: Cooperative organization of bacterial colonies – from genotype to morphotype. [In:] Annual Review of Microbiology, vol. 52, 1998.
[4] Ben-Jacob E.: Bacterial wisdom, Gödel's Theorem and creative genomic webs. [In:] Physica A, vol. 248, 1998, 57–76.
[5] Begg D., Fischer S., Dornbusch R.: Mikroekonomia. Polskie Wydawnictwo Ekonomiczne, ed. 4, 2008.
[6] Ben-Ari M.: Mathematical Logic for Computer Science. Prentice Hall, 1993.
[7] Blaug M.: Teoria ekonomii. Ujęcie retrospektywne. Wydawnictwo Naukowe PWN, 1994, 78–79, 81–82, 598, 599, 601, 604, 701.
[8] Bonabeau E., Dorigo M., Theraulaz G.: Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, New York, 1999.
[9] Bonabeau E., Meyer C.: Swarm Intelligence: A Whole New Way to Think About Business. [In:] Harvard Business Review, vol. 5, 2001, 107–114.
[10] Bochenek M.: Szkice o ekonomii i ekonomistach. Toruń, 2004, p. 10.
[11] Brewka G., Coradeschi S., Perini A., Traverso P.: Proceedings of ECAI'2006, 17th European Conference on Artificial Intelligence, Frontiers in Artificial Intelligence and Applications. IOS Press, 2006, ISBN 1-58603-642-4.
[12] Buda R.: Market Exchange Modelling Experiment, Simulation Algorithms, and Theoretical Analysis. MPRA paper no. 4196, University Library of Munich, Germany, 1999.
[13] Bylok F., Sikora J., Sztumska B.: Wybrane aspekty socjologii rynku. Wydawnictwo Wydziału Zarządzania Politechniki Częstochowskiej, 2001.

[14] Caldwell D. E., Costerton J. W.: Are bacterial biofilms constrained to Darwin's concept of evolution through natural selection? [In:] Microbiologia SEM, vol. 12, 1996.
[15] Cetnarowicz K.: M-Agent Architecture Based Method of Development of Multiagent Systems. [In:] Proceedings of the International Conference on Physics Computing, 1996.
[16] Cetnarowicz E., Nawarecki E., Cetnarowicz K.: Agent oriented technology of decentralized systems based on the M-Agent architecture. [In:] Proceedings of the MCPL'97 IFAC/IFIP Conference, 1997.
[17] Cetnarowicz K., Dobrowolski G., Żabińska M.: Środowisko do symulacji systemów wieloagentowych w oparciu o architekturę M-agenta. [In:] Proceedings of I Krajowa Konferencja Metody i systemy komputerowe w badaniach naukowych i projektowaniu inżynierskim, Krakowskie Centrum Informatyki Stosowanej CCATIE, Kraków, 1997.
[18] Cetnarowicz K., Dobrowolski G., Żabińska M.: M-Agent Architecture and its application to the agent oriented technology. [In:] Proceedings of the International Workshop on Distributed Artificial Intelligence and Multi-Agent Systems DAIMAS'97, St. Petersburg, 1997.
[19] Cetnarowicz K.: Problemy projektowania i realizacji systemów wieloagentowych. Uczelniane Wydawnictwa Naukowo-Dydaktyczne AGH, 1999.
[20] Cetnarowicz K., Gruer P., Hilaire V., Koukam A.: A formal specification of M-agent architecture. [In:] From Theory to Practice in Multi-Agent Systems: Second International Workshop of Central and Eastern Europe on Multi-Agent Systems, CEEMAS 2001, Springer-Verlag, LNAI, vol. 2296, 2002.
[21] Chmiel J.: System symulacji rynku z działającymi przedsiębiorstwami. 2010, http://www.ia.pw.edu.pl/~janusz/wdec/lab/lab10_jsim.pdf.
[22] Clerc M.: Particle Swarm Optimization. ISTE, 2006.
[23] Cobb C., Douglas P.: A Theory of Production. [In:] American Economic Review, vol. 18, 1928, 139–165.
[24] Coen M.: SodaBot: A Software Agent Environment and Construction System. [In:] Proceedings of the 1994 CIKM Workshop on Intelligent Information Agents, 1994.
[25] Dorigo M.: Optimization, Learning and Natural Algorithms. Ph.D. dissertation, Politecnico di Milano, Italy, 1992.
[26] Dorigo M., Gambardella L. M.: Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem. [In:] IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, 1997, 53–66.
[27] Dorigo M., Gambardella L. M.: Ant colonies for the traveling salesman problem. [In:] BioSystems, vol. 43, 1997, 73–81.
[28] Dorigo M., Blum C.: Ant colony optimization theory: A survey. [In:] Theoretical Computer Science, vol. 344, 2005, 243–278.

[29] Eichelberger C., Hadzikadic M.: Investigating Agent Strategies within a Complex Adaptive System of Purchasing Agents for Estimating Attribute Relevance. [In:] Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, 2006.
[30] Feder T.: Statistical physics is for the birds. [In:] Physics Today, vol. 60, no. 10, 2007, 28–30.
[31] Foner L.: What's an Agent, Anyway? A Sociological Case Study. White paper, 1996.
[32] Forgy C.: Rete: A Fast Algorithm for the Many Pattern/Many Object Pattern Match Problem. [In:] Artificial Intelligence, vol. 19, 1982, 17–37.
[33] Franklin S., Graesser A.: Is It an Agent, or Just a Program? A Taxonomy for Autonomous Agents. [In:] Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages, Springer-Verlag, 1996.
[34] Freifelder D.: Molecular Biology. Jones and Bartlett Publishing, Boston, 1987.
[35] Fuqua W. C., Winans S. C., Greenberg E. P.: Quorum sensing in bacteria: the LuxR-LuxI family of cell density-responsive transcriptional regulators. [In:] Journal of Bacteriology, vol. 176, 1994.
[36] Gambardella L. M., Taillard E., Dorigo M.: Ant colonies for the Quadratic Assignment Problem. [In:] Journal of the Operational Research Society, vol. 50, 1999, 167–176.
[37] Gifford D.: On the Path to Computation with DNA. [In:] Science, vol. 266, 1994, 993–994.
[38] Gladwell M.: Punkt przełomowy. Świat Książki, Warszawa, 2005.
[39] Griffin R.: Podstawy zarządzania organizacjami. PWN, Warszawa, 1998.
[40] Harshey R. M.: Bees aren't the only ones: swarming in gram-negative bacteria. [In:] Molecular Microbiology, vol. 13, 1994.
[41] Hayes-Roth B.: An Architecture for Adaptive Intelligent Systems. [In:] Artificial Intelligence: Special Issue on Agents and Interactivity, 1995.
[42] Hobbes T.: Lewiatan, czyli materia, forma i władza państwa kościelnego i świeckiego. PWN, Warszawa, 1954.
[43] Hobbes T.: Elementy filozofii. PWN, Warszawa, 1956.
[44] Hölldobler B., Wilson E. O.: The Ants. The Belknap Press of Harvard University Press, 1990.
[45] Izquierdo S., Izquierdo L.: The impact on market efficiency of quality uncertainty without asymmetric information. [In:] Workshop on Agent-Based Models of Consumer Behaviour and Market Dynamics, Guildford, UK, 2006.
[46] Jennings N. R., Sycara K., Wooldridge M. J.: A Roadmap of Agent Research and Development. [In:] Journal of Autonomous Agents and Multi-Agent Systems, vol. 1, issue 1, 1998.

[47] Jess online documentation web page, http://www.jessrules.com/jess/docs/71.
[48] Jorion P.: Adam Smith's Invisible Hand Revisited. An Agent-Based Simulation of the New York Stock Exchange. http://www.pauljorion.com/index-page-7.html, 2005.
[49] Joyce H.: Adam Smith and the invisible hand. [In:] Millennium Mathematics Project, Plus Magazine, 2001.
[50] Kasperski J.: Szkoła austriacka wobec socjalizmu, interwencjonizmu i współczesnych problemów wolnego rynku. Wydawnictwo Prohibita, Warszawa, 2009.
[51] Kamerschen D., McKenzie R., Nardinelli C.: Ekonomia. Wydawnictwo Bernardinum, ed. 4, Pelplin, 1999.
[52] Kennedy J., Eberhart R.: Particle swarm optimization. [In:] Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, 1995, 1942–1948.
[53] de Kerckhove D.: Powłoka kultury. Zysk i S-ka, Warszawa, 1998.
[54] Kisiel-Dorohinicki M.: Zastosowanie procesów ewolucyjnych w systemach wieloagentowych. Ph.D. dissertation, AGH University of Science and Technology, Kraków, 2000.
[55] Kisiel-Dorohinicki M., Dobrowolski G., Nawarecki E.: Evolutionary multi-agent system in multiobjective optimisation. [In:] Applied Informatics: Artificial Intelligence and Applications: Proceedings of the IASTED'01, ACTA Press, 2001.
[56] Kirkpatrick S., Gelatt C. D., Vecchi M. P.: Optimization by Simulated Annealing. [In:] Science, New Series, vol. 220, no. 4598, 1983, 671–680.
[57] Klimczak B.: Mikroekonomia. Wydawnictwo AE Wrocław, ed. 7, 2006.
[58] Kogge P. M.: The Architecture of Symbolic Computers. McGraw-Hill, 1991.
[59] Landreth H., Colander D. C.: Historia myśli ekonomicznej. Wydawnictwo Naukowe PWN, Warszawa, 2005, p. 32.
[60] Levy P.: Die kollektive Intelligenz. Bollmann Kommunikation & Neue Medien, Mannheim, 1997, 120–123.
[61] Wong L. P., Low M. Y. H., Chong C. S.: Bee colony optimization with local search for traveling salesman problem. [In:] 6th IEEE International Conference on Industrial Informatics INDIN, IEEE, Piscataway, 2008, 1019–1025.
[62] Licklider J., Taylor R.: The Computer as a Communication Device. [In:] Science and Technology, April 1968.
[63] Lipsey R. G., Lancaster K.: The General Theory of Second Best. [In:] The Review of Economic Studies, vol. 24, issue 1, 1956, 11–32.
[64] Lipton R. J.: DNA Solution of Hard Computational Problems. [In:] Science, vol. 268, 1995, 542–545.

[65] Lucas P., van der Gaag L.: Principles of Expert Systems. Addison-Wesley, 1991.
[66] Maes P.: Designing Autonomous Agents. MIT Press, Cambridge, MA, 1990.
[67] Maes P.: Artificial Life Meets Entertainment: Lifelike Autonomous Agents. [In:] Communications of the ACM, Special Issue on New Horizons of Commercial and Industrial AI, vol. 38, no. 11, 1995.
[68] Mathieu L. G., Sonea S.: Time to drastically change the century-old concept about bacteria. [In:] Tribune, vol. 8, 1996.
[69] Michalewicz Z.: Genetic Algorithms + Data Structures = Evolution Programs. Springer, 1992.
[70] Michelsen A., Andersen B. B., Storm J., Kirchner W. H., Lindauer M.: How honeybees perceive communication dances, studied by means of a mechanical model. [In:] Behavioural Ecology and Sociobiology, vol. 30, 1992.
[71] Montemanni R., Gambardella L. M., Rizzoli A. E., Donati A. V.: Ant colony system for a dynamic vehicle routing problem. [In:] Journal of Combinatorial Optimization, vol. 10, December 2005, 327–343.
[72] Morawski W.: Socjologia ekonomiczna. Wydawnictwo Naukowe PWN, Warszawa, 2001.
[73] Muller J. P.: The Design of Intelligent Agents: A Layered Approach. Springer-Verlag, Lecture Notes in Artificial Intelligence, vol. 1177, 1996.
[74] Muller J. P., Wooldridge M., Jennings N. R.: Intelligent Agents III: Proceedings of ECAI'96 Workshop on Agent Theories, Architectures and Languages. Springer-Verlag, Lecture Notes in Artificial Intelligence, vol. 1193, 1997.
[75] Nakashima T., Ishibuchi H., Oh C.-H.: Competition between strategies for a market selection game. http://www.complexity.org.au/ci/vol06/ishibuchi-oh/ishibuchi-oh.html, 2007.
[76] Neuberg L., Bertels K.: Heterogeneous trading agent. [In:] Complexity, vol. 8/3, 2003, 28–35.
[77] von Neumann J., Morgenstern O.: Theory of Games and Economic Behavior. Princeton University Press, 1953.
[78] November P., Johnstone D.: Simulating Complex Non-linear Dynamic Systems in Marketing. [In:] Proceedings of the 17th International Conference of the System Dynamics Society and 5th Australian & New Zealand Systems Conference, Victoria University of Wellington Publishing, New Zealand, 1999.
[79] Nwana H. S.: Software Agents: An Overview. [In:] The Knowledge Engineering Review, vol. 11/3, 1996.
[80] Pasteels J. M., Deneubourg J.-L.: From Individual to Collective Behavior in Social Insects. Birkhäuser Verlag, Basel, 1987.

[81] Pham D. T., Ghanbarzadeh A., Koc E., Otri S., Rahim S., Zaidi M.: The Bees Algorithm. Technical Note, Manufacturing Engineering Centre, Cardiff University, 2005.
[82] Pham D. T., Ghanbarzadeh A., Koc E., Otri S., Rahim S., Zaidi M.: The Bees Algorithm – A Novel Tool for Complex Optimisation Problems. [In:] Proceedings of the IPROMS 2006 Conference, 2006, 454–461.
[83] Pham D. T., Koc E., Lee J. Y., Phrueksanant J.: Using the Bees Algorithm to schedule jobs for a machine. [In:] Proceedings of the Eighth International Conference on Laser Metrology, CMM and Machine Tool Performance, LAMDAMAP, Euspen, Cardiff, UK, 2007, 430–439.
[84] Polański P.: Analiza IQS abstrakcyjnej struktury myśliwy + pies w procesie polowania na królika. M.Sc. dissertation, AGH University of Science and Technology, Kraków, 2007.
[85] Reynolds C.: Flocks, herds and schools: A distributed behavioral model. [In:] SIGGRAPH '87: Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, Association for Computing Machinery, 1987, 25–34.
[86] Rubinstein A., Osborne M.: A Course in Game Theory. MIT Press, Cambridge, ISBN 0262650401, 1994.
[87] Russell S. J., Norvig P.: Artificial Intelligence – A Modern Approach. Prentice Hall, New York, 1995.
[88] Samuelson P., Nordhaus W. D.: Ekonomia. Wydawnictwo Naukowe PWN, Warszawa, 2004.
[89] Schab P., Wielicki P.: Modelowanie i analiza Kolektywnej Inteligencji kolonii bakterii w sytuacji zagrożenia. M.Sc. dissertation, AGH University of Science and Technology, Kraków, 2007.
[90] Schnabl H.: A Close Eye on the Invisible Hand. [In:] Journal of Evolutionary Economics, vol. 6, 1996, 261–280.
[91] Schoonderwoerd R., Holland O., Bruten J., Rothkrantz L.: Ant-based load balancing in telecommunication networks. [In:] Adaptive Behaviour, vol. 5, no. 2, 1997, 169–207.
[92] Seeley T. D.: The Wisdom of the Hive. Harvard University Press, 1995.
[93] Seppecher P.: Un modèle macroéconomique multi-agents avec monnaie endogène. Université de la Méditerranée – Aix-Marseille II, 2009.
[94] Singh M. P., Rao A. S., Wooldridge M.: Intelligent Agents IV: Proceedings of ECAI'97 Workshop on Agent Theories, Architectures and Languages. Springer-Verlag, Lecture Notes in Artificial Intelligence, vol. 1365, 1998.
[95] Skrzynski P., Turek M.: Agentowa platforma wymiany wiedzy zgodna ze standardem FIPA. M.Sc. dissertation, AGH University of Science and Technology, Kraków, 2002.

[96] Skrzynski P., Szuba T.: Próba wyjaśnienia paradygmatu "niewidzialnej ręki rynku Adama Smitha" w oparciu o model obliczeniowy Kolektywnej Inteligencji. [In:] Automatyka: Półrocznik Akademii Górniczo-Hutniczej, ISSN 1429-3447, Wydawnictwa AGH, vol. 12/3, 2008, 975–992.
[97] Skrzynski P., Szuba T.: Koncepcja i realizacja molekularnego modelu obliczeń w analizie paradygmatu niewidzialnej ręki rynku Adama Smitha. [In:] Automatyka: Półrocznik Akademii Górniczo-Hutniczej, ISSN 1429-3447, Wydawnictwa AGH, vol. 13/3, 2009, 1455–1467.
[98] Skrzynski P., Szuba T., Szydło S.: Symulacja mechanizmów rynkowych w oparciu o model kolektywnej inteligencji. [In:] Innowacyjno-efektywnościowe problemy teorii i praktyki zarządzania, ed. Piotr Łebkowski, ISBN 978-83-7464-248-4, AGH Uczelniane Wydawnictwa Naukowo-Dydaktyczne, 2009, 21–29.
[99] Skrzynski P., Szuba T.: Invisible hand process simulation based on collective intelligence computational model. [In:] Recent Advances in Intelligent Information Systems, eds. Mieczysław A. Kłopotek [et al.], Polish Academy of Sciences, Institute of Computer Science, ISBN 978-83-60434-59-8, Academic Publishing House EXIT, 2009, 541–550.
[100] Smith A.: Badania nad naturą i przyczynami bogactwa narodów. Wydawnictwo Naukowe PWN, Warszawa, 1954.
[101] Snowdon B., Vane H., Wynarczyk P.: Współczesne nurty teorii makroekonomii. Wydawnictwo Naukowe PWN, Warszawa, 1998.
[102] Stasiak K.: Analiza możliwości zaistnienia spontanicznego pasożytniczego procesu obliczeniowego w środowisku Internetu. M.Sc. dissertation, AGH University of Science and Technology, Kraków, 2007.
[103] Stiglitz J.: Ekonomia sektora publicznego. Wydawnictwo Naukowe PWN, Warszawa, 2004.
[104] Sycara K.: Multiagent systems. [In:] AI Magazine, vol. 19, no. 2, Intelligent Agents, 1998.
[105] Szuba T.: A Molecular Quasi-random Model of Computations Applied to Evaluate Collective Intelligence. [In:] Future Generation Computing Journal, vol. 14, 2001, 321–339.
[106] Szuba T.: A Formal Definition of the Phenomenon of Collective Intelligence and its IQ Measure. [In:] Future Generation Computing Journal, vol. 17, 2001, 489–500.
[107] Szuba T.: Computational Collective Intelligence. Wiley and Sons, New York, 2001.
[108] Szuba T.: Was There Collective Intelligence Before Life on Earth? Considerations on the Formal Foundations of Intelligence, Life and Evolution. [In:] The Journal of General Evolution, vol. 58, 2002, 61–80.

[109] Szuba T.: A Molecular Quasi-random Model of Computations Applied to Evaluate Collective Intelligence. [In:] Future Generation Computing Journal, vol. 14, 1998, 321–339.
[110] Tovey M.: Collective Intelligence: Creating a Prosperous World at Peace. Carleton University Press, 2008.
[111] Tumeo A., Pilato C., Ferrandi F., Sciuto D., Lanzi P. L.: Ant colony optimization for mapping and scheduling in heterogeneous multiprocessor systems. [In:] 2008 International Conference on Embedded Computer Systems: Architectures, Modeling and Simulation, IEEE, Piscataway, 2008, 142–149.
[112] Varian H. R.: Mikroekonomia. PWN, Warszawa, 1995.
[113] Weiss G.: Multiagent Systems – A Modern Approach to Distributed Artificial Intelligence. MIT Press, 1999.
[114] Wilke D. N.: Analysis of the particle swarm optimization algorithm. M.Sc. dissertation, University of Pretoria, 2005.
[115] Wilson E. O.: A chemical releaser of alarm and digging behavior in the ant Pogonomyrmex badius (Latreille). [In:] Psyche, vol. 65, 1958.
[116] Wind Y.: On the study of industrial buying behavior: Current practices and future trends. [In:] Industrial Marketing Management, vol. 1/4, 1972, 411–466.
[117] Winiarski B.: Polityka gospodarcza. Wydawnictwo Naukowe PWN, ed. 3, Warszawa, 2006.
[118] Wooldridge M., Jennings N. R.: Intelligent Agents: Proceedings of ECAI'94 Workshop on Agent Theories, Architectures and Languages. Springer-Verlag, Lecture Notes in Artificial Intelligence, vol. 890, 1995.
[119] Wooldridge M., Muller J. P., Tambe M.: Intelligent Agents II: Proceedings of IJCAI'95 Workshop on Agent Theories, Architectures and Languages. Springer-Verlag, Lecture Notes in Artificial Intelligence, vol. 1037, 1996.
[120] Wooldridge M.: An Introduction to MultiAgent Systems. John Wiley and Sons Ltd, ISBN 978-0-470-51946-2, 2009.
[121] Ygge F.: Market-Oriented Programming and its Application to Power Load Management. Ph.D. dissertation, Lund University, 1998.
[122] Zhaoquan C., Huang H.: Ant colony optimization algorithm based on adaptive weight and volatility parameters. [In:] Proceedings of the 2008 Second International Symposium on Intelligent Information Technology Application, Shanghai, IEEE, Piscataway, 2008, 142–149.
