Resources Management System for Distributed Platforms Based on Multi-agent Systems

Francisco Hidrobo and Jose Aguilar
Centro de Microelectronica y Sistemas Distribuidos (CEMISID). Universidad de los Andes. La Hechicera. 5101. Merida. Venezuela.
email: [email protected], [email protected]

Abstract

In this work we propose a management system for distributed/parallel environments, designed using multi-agent theory. The resource management system provides a mechanism to adapt the execution flow of a program to the platform (particularly, to its interconnection network), or vice versa. The system consists of a group of agents that represent user applications, processors in the system, etc., which cooperate with the purpose of optimizing the performance of the platform. The system is simulated in order to test different design configurations under different workload assumptions (number of agents, relationships among them, etc.), so as to determine the architecture of the agents (centralized, partially or completely distributed) and its influence on the system. Keywords: Multi-agent Systems, Parallel and Distributed Processing, Resource Management Systems, Distributed Operating Systems.

1 Introduction

The utilization of high performance computing platforms has grown, and many efforts are being carried out in the areas of computer architecture and operating systems with the purpose of making these platforms efficient. One of the problems that still persists is adapting the execution flow of the applications to the platforms, or vice versa, without degrading performance. This is because the execution flows of parallel applications (parallel programming paradigms) follow particular communication graphs among their tasks; therefore, the applications are tied to a given interconnection topology. In this way, the performance of a parallel program depends to a great extent on the configuration of the architecture on which it will be executed [6, 9, 10, 11]. In general, developing parallel programs under the restriction of having to adapt them to a fixed interconnection topology is a difficult and not very natural procedure; on the other hand, constant modifications of the interconnection network to adapt it dynamically to the communication needs of the programs are expensive and impossible in multiprocessing environments. This outlines the necessity of a Resources Management System (RMS) that decides whether to change the topology of the platform or to group the tasks of an application, so as to optimize the performance of the platform with the greatest degree of flexibility [11, 16]. This RMS would be part of the operating system of a distributed platform, as part of the distributed administration module [9, 15]. This module is particular to distributed operating systems, not being present in the operating systems of sequential machines, and its objective is the efficient use of the different resources of the system.
Multi-agent Systems (MASs) are being thoroughly studied at the present time; they are the connection point of diverse areas, such as distributed artificial intelligence, emergent systems and collective intelligence, among others [3, 5, 8, 12, 13, 14]. The MAS theory studies and designs organizations of autonomous agents, in such a way that they can act on their physical and social environments, and communicate with them, to carry out their tasks collectively.

In this work we propose the design of a RMS using concepts of the MAS theory. This system decides whether to adapt the topology of the platform to the applications, or to adapt the applications to the interconnection topology of the system (task grouping, etc.). The proposed system consists of a group of agents that represent user applications, processors in the system, etc., which cooperate with the purpose of optimizing the platform performance. The RMS is based on the work presented in [11, 16]. The work most similar to our approach was carried out by the MAS group of the MIT [10], in which a MAS design, called Challenger, is presented for the distributed assignment of resources (CPU). That system is formed by a group of agents which manage local resources (CPU time) and communicate to share their resources. Challenger is based on a group of homogeneous agents, where each one behaves as a buyer and a seller trying to maximize its benefit. The advantage of this approach is the capacity for learning, which makes the system work for a wide range of conditions. However, the tasks are considered independent; for that reason this approach cannot be used for the manipulation of cooperating tasks (parallel applications). We carry out simulations to study possible configurations of our system. These simulations allow us to define whether the system should be centralized, distributed, or partially distributed in the organization of its agents. According to these results, we define the number of agents of each type and the interrelations among them. This work is organized in the following way: Section 2 presents the motivation of our work. Section 3 contains an introduction to the MAS theory and the RMS proposed for parallel and distributed systems. Section 4 presents the design of our RMS based on MASs. Section 5 describes the experimental part. Finally, the last section presents the conclusions.

2 Motivation

A Distributed Operating System (DOS) governs the operation of a distributed computer system and provides an abstraction of the virtual machine to its users [9, 15]. The basic objective of a DOS is transparency, that is, components and resources can be used at the moment and in the place where they are required. Also, DOSs provide means to share resources. In DOSs there exists a group of aspects to consider that are not common in traditional operating systems; among these aspects is the distributed administration of processes (tasks), which allows the different resources of the system to be used efficiently. The aspects to consider in order to carry out this last activity are [6, 7, 9, 15]:

- Tasks Assignment: for the group of tasks, a decision should be made about which processors, and which other resources, they will use. To carry out such a decision we can use several criteria, such as balancing the load in the system, minimizing the communication among processors, minimizing the interference among parallel tasks, etc.
- Tasks Migration: consists of transferring the state of a task from one machine to another so that the task can continue its execution on the new machine. This migration is carried out, mainly, to ensure load balancing or to guarantee the completion of the tasks (in case of a fault in some processor).
- Faults Tolerance: one of the advantages of distributed systems is their resistance to faults. Among the fault tolerance mechanisms are the task replication protocols, which are based on the assignment of a task to different processors [9, 15]. The copies can be active (they are executed concurrently) or passive (only one task is executed and the rest of the copies are executed when a fault happens to the original task). Another fault tolerance mechanism is the checkpoint protocol [9].
- Processors Interconnection: the interconnection of the processors has a great influence on the bandwidth and saturation point of the communications in the system. In this sense, for a RMS, the key point is the capacity of the system to support the largest number of connections among processes located in different places.
- Management of Reconfigurable Systems: another key aspect is the mechanism that will be used to ensure an efficient execution of tasks. There are two approaches: one where the platform is adapted to the execution flow of the programs, and another where the program is adapted to the interconnection topology of the system.
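The task assignment criterion above (balancing load while deciding which processor each task uses) can be illustrated with a minimal sketch. The greedy largest-first heuristic below is our own illustration, not the RMS's actual algorithm, and the task names and sizes in the example are hypothetical:

```python
def assign_tasks(task_sizes, n_processors):
    """Assign each task (largest first) to the currently least-loaded
    processor -- the classic LPT heuristic for load balancing."""
    loads = [0.0] * n_processors
    assignment = {}
    for task, size in sorted(task_sizes.items(), key=lambda kv: -kv[1]):
        # pick the processor with the smallest accumulated load
        p = min(range(n_processors), key=lambda i: loads[i])
        assignment[task] = p
        loads[p] += size
    return assignment, loads

# hypothetical example: four tasks on two processors
assignment, loads = assign_tasks({"a": 4.0, "b": 3.0, "c": 2.0, "d": 1.0}, 2)
```

With these sizes the heuristic ends with both processors equally loaded (5.0 each), which is the load-balancing objective named in the text.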

3 Theoretical Aspects

3.1 Resource Management System

The RMS that we propose allows adapting the platform to the applications, or the applications to the interconnection topology of the system (task grouping, etc.). The objective of the RMS is to optimize the system performance, according to its current state [9, 11, 15, 16]. The RMS must handle the requirements of multiple applications. Some alternatives exist to handle such requirements, for example [1, 2, 4, 11, 16]:

- Automatic detection of requirements: the applications are pre-compiled so that the RMS can detect the communications among the tasks that compose them. In this way, the RMS detects the execution flow of the application.
- Static Requirements Definition: under this approach the applications specify the topology on which they should be executed.
- Dynamic Requirements Definition: the applications can specify their requirements once they have been initiated, and can change them during their execution.

In [11, 16], we propose a RMS that consists of three phases:

- Characterization of the system and of the applications: in this phase, the system and the applications are described (each one is modeled using graphs).
  - Parallel architecture characterization: interconnection topology, computing power of each place, etc.
  - Definition of the execution flow of the applications (parallel programming paradigm), which allows defining the task graph for each application.
  - System analysis: the parameters of the system are determined to evaluate the communication costs, execution costs, etc.
- Analysis of the topology: in this phase the system is adapted to the program, or vice versa.
  - Task grouping: the tasks are clustered in such a way that the number of groups is similar to the number of processors. The grouping is carried out so as to optimize the following criteria: minimizing the communication among the processors and balancing the workload among the processors.
  - Topology matching: once we have the grouping graph, we carry out a topology matching that satisfies the requirements of the application and fulfills the restrictions of the system. This is done by adapting the grouping graph to the interconnection topology, or vice versa. In the first case, the number of connections of the grouping graph is adapted to the connections of the interconnection network of the system. In the second case, virtual connections are generated in the system to adapt it to the grouping graph. In this way, a virtual topology is generated.
- Execution of the application: in this phase the tasks are assigned to the processors according to the graph generated in the previous phase. Then, the tasks are executed according to their task precedences.
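The task grouping step can be sketched as a greedy clustering that repeatedly merges the two groups joined by the heaviest communication edge, until the number of groups matches the number of processors. This is a minimal illustration under our own assumptions, not the actual algorithm of [11, 16]:

```python
def group_tasks(n_tasks, comm, n_groups):
    """Greedily merge the two groups joined by the heaviest communication
    edge until at most n_groups groups remain (comm maps (u, v) -> weight).
    Keeping heavy edges inside groups minimizes inter-processor traffic."""
    groups = {i: {i} for i in range(n_tasks)}
    leader = {i: i for i in range(n_tasks)}
    while len(groups) > n_groups:
        best = None
        for (u, v), w in comm.items():
            gu, gv = leader[u], leader[v]
            if gu != gv and (best is None or w > best[0]):
                best = (w, gu, gv)
        if best is None:
            # no crossing edges left: merge the two smallest groups
            gu, gv = sorted(groups, key=lambda g: len(groups[g]))[:2]
        else:
            _, gu, gv = best
        groups[gu] |= groups.pop(gv)
        for t in groups[gu]:
            leader[t] = gu
    return list(groups.values())

# hypothetical example: 4 tasks clustered onto 2 processors
gs = group_tasks(4, {(0, 1): 5, (2, 3): 4, (1, 2): 1}, 2)
```

Note that this sketch only addresses the communication criterion; the paper's grouping also balances the workload among groups.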

3.2 Multi-agent Systems

The agents paradigm emerges from Artificial Intelligence [3, 5, 8, 12, 13, 14]. MASs cover a diversity of aspects (biological, social, etc.), and have been used as integration techniques of other areas (Expert Systems, Artificial Neural Networks, etc.) for the resolution of different problems. A MAS is a system whose activities are the fruit of the interaction among the entities that compose it, called agents, which are relatively autonomous and independent, and work according to complex modes of cooperation, conflict and concurrence, to survive and perpetuate themselves. These systems enrich the area of Artificial Intelligence using sociological metaphors (such as cooperation, negotiation, etc.) and biological metaphors (such as self-organization, adaptation, etc.). It is difficult to find in the literature a definition of agent that is a consensus. We will give the definition of Ferber [8]: an agent is a hardware or software entity that can act on itself or on its environment. It has a partial representation of its environment and can communicate with other agents. An agent has individual objectives, and its behavior is the result of its observations, knowledge, abilities and of the interrelations it can have with other agents or with its environment. An agent has some of the following properties:

- Autonomy: it can operate without the direct intervention of external entities.
- Cooperation and Collaboration: it can cooperate and collaborate with other agents.
- Mobility: it can move itself (its code and data) to a remote place.
- Intelligence: it can carry out cognitive activities. These activities can range from a simple inference rule to the execution of self-learning mechanisms.
- Asynchrony: the behavior of an agent can be governed by its own rules.
- Reactive behavior: it can perceive its environment and respond to stimuli that come from it.
- Proactive behavior: it can have its own objectives and act based on them.

In general, an agent can be described by:

- Tasks: that is, the group of functions and activities of the agent.
- Knowledge: abilities and information that the agent possesses to carry out its tasks. This knowledge can be acquired in some of the following ways: specified by the designer (through some formalism), incorporated dynamically during its evolution, coming from other sources of knowledge (other agents, etc.), or generated by its own learning mechanisms.
- Communications: they define the form of interaction of the agent with the environment and with other agents.

4 Design of the Resources Management System based on Multi-agent Systems

4.1 Characterization of the High Performance Computing Environments

The description of a High Performance Computing Environment (HPCE) is made in terms of its components and of the interrelations among them. An HPCE can be defined as a platform composed of diverse resources (processors, communication systems, disks, memory, operating system, etc.) that provides computing services for applications which solve specific problems. Next, we present the aspects which characterize an HPCE:

- Processors (CPUs): they are responsible for the execution of tasks. They are characterized, mainly, by their memory capacities and their speeds.
- Interconnection system: it represents the mechanisms, the topology and the technology used for the communication among the processors.
- Applications management (for example: task assignment and load balancing applications, etc.): it allows defining the approaches used for resource management in the system, with the purpose of optimizing its performance.
- Structure of the applications (parallel programming paradigm): it expresses the execution flow and the communication needs of the applications. This structure can be represented using task graphs, with the respective description of each task and the communication requirements among them.

4.2 Multi-agent system design

In this work we propose to study the problems of task assignment, load balancing, topology matching and optimization of communication requirements for applications composed of cooperating tasks (parallel or concurrent). These aspects should be considered by every distributed resource management module of operating systems for HPCE-type platforms [9, 11, 15, 16]. The MASs allow us to propose an approach where the RMS components are represented by agents with diverse capacities and characteristics. Our approach is composed of the following agents:

- Decision agents: they carry out the function of topology managers, and have knowledge of the platform configuration and of the global state. They have an interface with the application that allows them to make decisions about the planning type that must be used for a given application.
- Planning agents: they solve the task assignment and load balancing problems. There are two types of planning agents, according to the definition of Section 3.1, that is, depending on whether the application adapts itself to the platform, or vice versa.
- Execution agents: these agents execute the tasks which compose the application. There are as many execution agents as processors in the platform. Each one of them executes one or more tasks.

Figure 1 presents a model of the multi-agent architecture. In our approach, the application interacts directly with a decision agent. The decision agent, according to the system and application parameters, requests one of the planning agents. As we have said previously, the existence of two types of planning agents is due to the use of the works [11, 16], where two approaches are proposed for topology matching. Finally, the execution agents run the programs assigned by the planning agent.

4.2.1 Functional analysis

In this section, we present a detailed description of each agent.

Decision agent

- It analyzes the global state of the system.
- It analyzes the communication requirements of the application (parallel programming paradigm).
- It makes the decision about the planning type to use.
- It calls the appropriate planning agent.

Planning agent type 1


[Figure 1 shows the application connected to a decision agent, which dispatches to planning agents 1 and 2, which in turn drive the execution agents.]
Figure 1: Multi-agent system architecture

- It groups tasks so that the number of tasks is smaller than or equal to the number of processors (grouping graph).
- It verifies the system restrictions, that is, it should eliminate edges of the grouping graph so that the degree of the graph is smaller than or equal to the maximum number of connections that each execution agent has.

Planning agent type 2

- It groups tasks so that the number of tasks is smaller than or equal to the number of processors (grouping graph).
- It establishes connections among the execution agents to configure a virtual topology on the system. This topology is the same as the grouping graph of the application.

Execution agents

- It executes the tasks which are assigned to it.
- It saves and reports its state to the decision agent.

4.2.2 Behavior Analysis

The operation of the MAS is defined, fundamentally, by the behavior of the agents that compose it. This behavior is based on their capacities, external stimuli, and their knowledge. In general, these aspects define the agents' structure.

Decision agent

It receives each application to execute on the system. Also, it decides the planning type to use. The decision agent structure is described as follows:

- Capacities: it models and memorizes the HPCE. Also, it makes decisions about the type of planning to use.
- Knowledge: it stores the system global state and classifies the applications according to their programming paradigms (tree, divide and execute, pipeline, etc.).
- Connectivity: it requires communication with all the agents of the MAS.

Planning agent

The planning agent receives the requirements of the decision agent and applies a procedure (different for each type of planning) to obtain the task assignment of the application. Then, it assigns the tasks to the execution agents. The planning agent structure is the following:

- Capacities: its main capacity is organization. This agent plans, coordinates, assigns and controls the execution of the tasks of the application.
- Knowledge: it maintains a control table of tasks in execution, and another one with the proposed assignment.
- Connectivity: it communicates with all the agents of the MAS.

Execution Agents

An execution agent receives the tasks assigned to it and executes them. Also, it reports the state of those tasks and its own state. The execution agent has the following structure:

- Capacities: it is an agent which executes tasks.
- Knowledge: it knows its computing and storage capacities. Additionally, it maintains a connection structure that indicates whether or not it can establish communication with other execution agents.
- Connectivity: it communicates with the decision and planning agents, indicating its state to them. On the other hand, it is interconnected with the rest of the execution agents according to the interconnection topology of the system.

4.2.3 Communication Requirements

The MAS communication requirements are expressed in terms of the information exchange among the agents. The communication in the MAS occurs between:

- Application - decision agent: in this case, we regard the application as an agent with a particular structure that requests the execution service from the decision agent.
- Decision agent - planning agent: after the decision agent verifies the system conditions and the application requirements, it sends the application flow to a planning agent. Also, the decision agent receives from the planning agent the results of the application execution.
- Decision agent - execution agent: the communication between the execution agents and the decision agents has as its objective the determination of the global state of the MAS. The execution agents report, periodically, their state to the decision agent. Also, the decision agents can request reports.
- Planning agent - execution agent: the communication between the planning agent and the execution agents consists of the task assignment, according to the execution plan generated by the former.
- Execution agent - execution agent: the communication among the execution agents is dynamic. It is determined by the HPCE topology and/or the connections established to satisfy the virtual topology requirements.
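The exchanges above can be sketched as typed messages routed between agent inboxes. The message kinds, field names and helper below are our own illustration, not an interface defined in the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    receiver: str
    kind: str      # e.g. "state_report", "task_assignment", "execution_request"
    payload: dict = field(default_factory=dict)

def route(message, inboxes):
    """Deliver a message to the receiver's inbox (a dict of lists)."""
    inboxes.setdefault(message.receiver, []).append(message)
    return inboxes

# hypothetical example: an execution agent reports its state to a decision agent
inboxes = {}
route(Message("exec-1", "decision-1", "state_report", {"load": 0.4}), inboxes)
```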

5 Simulations

In this section a simulator of the MAS is presented, to test and evaluate different configurations of it. The simulator is composed of the following events:

1. Application generation
   (a) Reception of the application (decision agent). (Duration: t1)
   (b) Analysis of the application parameters and its classification according to its parallel programming paradigm (decision agent). (Duration: t2)
   (c) Determination of the system global state (decision agent). (Duration: t3)
   (d) Determination of the planning scheme according to the application type and the system state (decision agent). (Duration: t4)
   (e) Request to the appropriate planning agent (decision agent). (Duration: t5)
   (f) Reception of the application (planning agent).
   (g) Planning agent type 1:
       i. Obtaining the grouping graph according to the number of tasks and communication requirements. This time is directly proportional to the application size (number of tasks) and to the number of execution agents. (Duration: t6)
       ii. Carrying out modifications on the grouping graph so that it is adapted to the interconnection topology of the execution agents. This time depends on the grouping graph and on the execution agent interconnection topology. (Duration: t7)
   (h) Planning agent type 2:
       i. Obtaining the grouping graph according to the number of tasks and communication requirements. (Duration: t6)
       ii. Establishment of virtual connections among the execution agents to generate the virtual topology. (Duration: t8)
   (i) Generation of the new arrival for that application type (decision agent).
2. Task assignment (planning agent)
   (a) Determination of the state of the execution agents. (Duration: t9)
   (b) Task assignment to the execution agents according to their capacities and their real state. (Duration: t10)
3. Task creation (execution agent)
   (a) Creation of the tasks which have been assigned to it. (Duration: t11)
   (b) Generation of the task culmination event.
4. Task culmination
   (a) The execution agent notifies the planning agent of the task culmination. (Duration: t12)
   (b) The planning agent saves the state of the task. (Duration: t13)
5. Culmination of the application
   (a) The planning agent notifies the decision agent of the application culmination.
   (b) The decision agent calculates statistics of the application execution.
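The event flow above can be sketched as a minimal discrete-event loop. The single-server queue below is a heavily simplified stand-in for the full simulator (one planning agent, one aggregate service time per application), with all names our own:

```python
import heapq

def simulate(arrivals, service_time, horizon):
    """Minimal event-driven skeleton: applications arrive, queue for a single
    planning agent, and are served for service_time each.
    Returns the mean wait before service starts (an MA-like measure)."""
    events = [(t, "arrival", i) for i, t in enumerate(arrivals)]
    heapq.heapify(events)   # event list ordered by time
    busy_until = 0.0
    waits = []
    while events:
        t, kind, i = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            start = max(t, busy_until)   # wait if the agent is busy
            waits.append(start - t)
            busy_until = start + service_time
    return sum(waits) / len(waits) if waits else 0.0

# hypothetical example: three closely spaced arrivals, 5-minute service
ma = simulate([0.0, 1.0, 2.0], 5.0, 100.0)
```

With these inputs the waits are 0, 4 and 8 minutes, giving a mean of 4.0.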

5.1 Applications structure

The applications are represented using a task graph [6, 7]:

G = (V, A), where:

V = {1, ..., n} is the group of n nodes (tasks), and A = [a_ij] is the adjacency matrix (communication). In this way, we have an arrangement of n elements, each one of which is a task with its size (its execution time). The adjacency matrix can be a weight matrix, where each weight indicates the communication time between two given tasks. A value of 0 indicates that there is no communication between those two tasks.
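The G = (V, A) representation can be encoded directly, with V as the vector of task sizes and A as the communication weight matrix. A minimal sketch (symmetric communication is our assumption, as is the use of a dict of edges as input):

```python
def make_task_graph(sizes, comms):
    """Build G = (V, A): V holds each task's execution time, A is the n x n
    weight matrix where A[i][j] is the communication time between tasks
    i and j, and 0 means no communication."""
    n = len(sizes)
    V = list(sizes)
    A = [[0] * n for _ in range(n)]
    for (i, j), w in comms.items():
        A[i][j] = w
        A[j][i] = w   # assume communication cost is symmetric
    return V, A

# hypothetical example: three tasks, one communicating pair
V, A = make_task_graph([3, 2, 4], {(0, 1): 7})
```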

5.2 Application types

The applications are classified by three parameters, whose values can vary according to three ranges: high, medium and small. The parameters are:

- Tasks Number (NT): number of processes, concurrent or not, which compose the application. Ranges: high: (50 - 70), medium: (20 - 30), small: (5 - 10).
- Tasks average size (TSA): average execution time (in minutes) of the tasks on a processor of relative speed 1. Ranges: high: (40 - 60), medium: (15 - 25), small: (1 - 8).
- Communication Percentage (CA): percentage of time, with respect to the execution time, that is used to communicate two given tasks. Ranges: high: (30 - 50), medium: (15 - 20), small: (2 - 10).

Taking into account the previous parameters and the typical applications in HPCE, we have defined the following application types:

1. Thin granularity with many tasks and little communication. (NT = high, TSA = small, CA = small)
2. Thin granularity with high data dependence. (NT = high, TSA = small, CA = high)
3. Medium granularity with little communication. (NT = medium, TSA = medium, CA = small)
4. Medium granularity with high data dependence. (NT = medium, TSA = medium, CA = high)
5. Thick granularity with many tasks and little communication. (NT = high, TSA = high, CA = small)
6. Thick granularity with few tasks and very little communication. (NT = small, TSA = high, CA = small)
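The six application types and their parameter ranges can be sampled as follows. This generator is a sketch based on the ranges of Section 5.2; uniform sampling within each range is our assumption, since the paper does not state the distribution:

```python
import random

# Parameter ranges from Section 5.2 (NT, TSA, CA).
RANGES = {
    "NT":  {"high": (50, 70), "medium": (20, 30), "small": (5, 10)},
    "TSA": {"high": (40, 60), "medium": (15, 25), "small": (1, 8)},
    "CA":  {"high": (30, 50), "medium": (15, 20), "small": (2, 10)},
}
# The six application types as (NT level, TSA level, CA level).
TYPES = {
    1: ("high", "small", "small"),
    2: ("high", "small", "high"),
    3: ("medium", "medium", "small"),
    4: ("medium", "medium", "high"),
    5: ("high", "high", "small"),
    6: ("small", "high", "small"),
}

def generate_application(app_type, rng=random):
    """Sample one application of the given type (NT integer, TSA/CA real)."""
    nt_l, tsa_l, ca_l = TYPES[app_type]
    return {
        "NT":  rng.randint(*RANGES["NT"][nt_l]),
        "TSA": rng.uniform(*RANGES["TSA"][tsa_l]),
        "CA":  rng.uniform(*RANGES["CA"][ca_l]),
    }
```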

5.3 Arrival rate

The time between arrivals of the applications is a fundamental parameter to measure the load on the system. Therefore, we propose different values for the arrival rates of the applications, which allow representing low, medium and high load states. Also, the application types are grouped according to their tasks: the first group contains the applications of types 1, 2 and 3, and the second group the types 4, 5 and 6. In this way, we have the following table:


Load     Group 1        Group 2
Low      15 - 30 min    40 - 60 min
Medium   10 - 15 min    25 - 35 min
High     4 - 9 min      8 - 20 min

Table 1: Arrival rates according to the load
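Sampling inter-arrival times from Table 1 can be sketched as follows, again assuming uniform sampling within each range (an assumption of ours, not stated in the paper):

```python
import random

# Inter-arrival ranges (minutes) from Table 1;
# group 1 = application types 1-3, group 2 = types 4-6.
ARRIVAL = {
    "low":    {1: (15, 30), 2: (40, 60)},
    "medium": {1: (10, 15), 2: (25, 35)},
    "high":   {1: (4, 9),   2: (8, 20)},
}

def next_arrival(now, load, group, rng=random):
    """Return the time of the next arrival for the given load and group."""
    lo, hi = ARRIVAL[load][group]
    return now + rng.uniform(lo, hi)

# hypothetical example: next group-1 arrival under high load, starting at t = 0
t = next_arrival(0.0, "high", 1)
```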

5.4 Results

The tests on the simulator were carried out with three configurations:

- Centralized configuration (C): there is only one decision agent, one planning agent for each planning type, and the same number of execution agents as processors in the system.
- Total distribution configuration (TD): there are as many decision agents and planning agents (per planning type) as execution agents in the system.
- Partial distribution configuration (PD): we studied different alternatives, having decision agents and planning agents for groups of execution agents. PD-N: where N is the number of decision agents (each one with two types of planning agents).

The tests were carried out in order to compare the different configurations, taking the arrival rates shown in Table 1 and setting the values of the ti in each test according to Table 2. We set to 3 all the times, to guarantee that each activity takes less time than the shortest inter-arrival time of the applications. Also, t7 and t8 have different values, because they represent activities which consume a lot of time. On the other hand, t10 has a different value according to the planning type used.

Time   t1  t2  t3  t4  t5  t6  t7  t8  t9  t10    t11  t12  t13
Value  3   3   3   3   3   3   9   6   3   5, 10  3    3    3

Table 2: Duration times of the events

In each case, we take into account the following values to measure the performance:

- MA: average time that the system takes to start one application, that is, the average time since the application arrives at the system until it begins to be executed.
- DA: average waiting time of an event on the decision agents.
- PA: average waiting time of an event on the planning agents.
- %DA: average occupation percentage of the decision agents.
- %PA: average occupation percentage of the planning agents.

Tables 3, 4 and 5 show that C has the highest MA in all the cases. Also, PD reduces MA and DA without significantly increasing PA, while presenting a higher occupation percentage for the agents (%DA and %PA) than TD. TD achieves the minimum MA, but it presents the worst use of the resources, that is, lower values for %DA and %PA. We can also observe from the same tables that %DA is always bigger than %PA.

pn = 4:
        MA      DA     PA    %DA  %PA
C       2316.4  636.6  0.2   100  35
TD      42.4    0.04   1.53  80   60
PD-2    1373    360.4  1.86  100  60
PD-4    not applicable
PD-8    not applicable

pn = 8:
        MA      DA     PA    %DA  %PA
C       2316.4  636.6  0.2   100  35
TD      35.5    0      0     40   30
PD-2    1373    360.4  1.86  100  60
PD-4    42.4    0.04   1.53  80   60
PD-8    not applicable

pn = 16:
        MA      DA     PA    %DA  %PA
C       2316.4  636.6  0.2   100  35
TD      35.5    0      0     20   15
PD-2    1373    360.4  1.86  100  60
PD-4    42.4    0.04   1.53  80   60
PD-8    35.5    0      0     40   30

Table 3: Results for high load

pn = 4:
        MA    DA     PA    %DA  %PA
C       2045  540.4  1.9   100  60
TD      35.5  0      0     40   30
PD-2    41.7  0.16   1.12  78   60
PD-4    not applicable
PD-8    not applicable

pn = 8:
        MA    DA     PA    %DA  %PA
C       2045  540.4  1.9   100  60
TD      35.5  0      0     20   15
PD-2    41.7  0.16   1.12  78   60
PD-4    35.5  0      0     40   30
PD-8    not applicable

pn = 16:
        MA    DA     PA    %DA  %PA
C       2045  540.4  1.9   100  60
TD      35.5  0      0     10   7.5
PD-2    41.7  0.16   1.12  78   60
PD-4    35.5  0      0     40   30
PD-8    35.5  0      0     20   15

Table 4: Results for medium load

If we take into account that a good approach should have low values of DA, PA and MA, because that means reducing the waiting time, and high values of %DA and %PA, because that means there are no idle agents in the system, then PD-2 is the best scheme for all the cases with low and medium load, while PD-4 is the best for high load with 8 or more processors. On the other hand, TD is the best for high load with fewer than 8 processors.

pn = 4:
        MA    DA    PA    %DA  %PA
C       56.3  1.87  3.7   90   70
TD      35.5  0     0     22   17
PD-2    35.6  0     0.04  44   32
PD-4    not applicable
PD-8    not applicable

pn = 8:
        MA    DA    PA    %DA  %PA
C       56.3  1.87  3.7   90   70
TD      35.5  0     0     11   8.4
PD-2    35.6  0     0.04  44   32
PD-4    35.5  0     0     22   16
PD-8    not applicable

pn = 16:
        MA    DA    PA    %DA  %PA
C       56.3  1.87  3.7   90   70
TD      35.5  0     0     5.6  4.2
PD-2    35.6  0     0.04  44   32
PD-4    35.5  0     0     22   16
PD-8    35.5  0     0     11   8.4

Table 5: Results for low load

6 Conclusions

According to the results of this work, we can observe that the configuration of the MAS (number of agents of each type, interrelations among them, etc.) depends on the characteristics of the platform and on the arrival rates of the different types of applications that will be executed. In this way, PD-2 is the best scheme for almost all the load cases, while PD-4 is better for high load with 8 or more processors. The construction of the simulator allows studying a wide range of possibilities and analyzing different configuration alternatives for the MAS. Future work will study a reconfiguration scheme for the MAS, which takes into account all the approaches that have been evaluated in this work, to modify dynamically the system configuration according to the characteristics of the platform and of the applications. This work will be incorporated into the design of an operating system for HPCE, inside the distributed resource management module that is currently under development in our work group.

References

[1] Adano J., Trejo L., Programming Environment for Phase-Reconfigurable Parallel Programming on Supernode, Journal of Parallel and Distributed Computing, Vol 23, pp 273-292, 1994.
[2] Baiardi F., Ciuffolini D., Lomartine A., Montanari D., Pesce G., A Tool for Parallel System Configuration and Program Mapping based on Genetic Algorithms, Programming Environments for Massively Parallel Distributed Systems, Monte Verita, pp 379-383, 1994.
[3] Langton C. (Ed), Artificial Life III, Addison Wesley, 1994.
[4] Talbi E., Configuration Automatique d'une architecture parallele, Calculateurs Paralleles, Vol 3, pp 7-26, 1994.
[5] Wooldridge M., Jennings N.R., Intelligent Agents: Theory and Practice, Knowledge Engineering Review, 1994.
[6] Aguilar J., L'Allocation de Taches, l'Equilibrage de Charge et l'Optimisation Combinatoire, PhD Thesis, Universite Rene Descartes-Paris V, Paris, France, 1995.
[7] Aguilar J., Parallel Programs: task graph generation & task allocation, Proceedings of the 10th International Conference on Mathematical & Computer Modelling and Scientific Computing, pp 854-860, July 1995.
[8] Ferber J., Les Systemes Multi-Agents vers Intelligence Collective, Inter Editions, 1995.
[9] Tanenbaum A., Distributed Operating Systems, Prentice Hall, 1995.
[10] Chavez A., Moukas A., Maes P., Challenger: A Multi-agent System for Distributed Resource Allocation, MIT Media Lab, Cambridge, MA, 1996.
[11] Hidrobo F., Aguilar J., Algorithms for adapting Parallel Programs to the interconnection topology of Parallel Processing Systems, Proceedings of the International Conference on Informatic Systems Analysis & Synthesis, pp 499-504, July 1996.
[12] Muller J., Quinqueton S., IA Distribues et Systemes Multi-Agents, Hermes, 1996.
[13] Boutilier C., Shoham Y., Wellman M.P., Economic principles of multi-agent systems, Artificial Intelligence, Vol 94, No. 1-2, pp 1-6, 1997.
[14] Kraus S., Negotiation and cooperation in multi-agent environments, Artificial Intelligence, Vol 94, No. 1-2, pp 79-97, 1997.
[15] Stallings W., Sistemas Operativos, 2da. Edicion, Prentice Hall, 1997.
[16] Hidrobo F., Aguilar J., Sistema Manejador de Ambientes Reconfigurables para Procesamiento Paralelo/Distribuido, Technical Report # 98011, CEMISID, Universidad de Los Andes, 1998.