Flexible Distributed Testbed for High Performance Network Evaluation


Chris Phillips*1, Jose L Marzo*2, Kok Ho Huen*1, Pere Vilà*2

*1 Department of Electronic Engineering, Queen Mary, University of London, UK
*2 Department d'Electrònica, Informàtica i Automàtica, Universitat de Girona, Spain
Email Contact Address: [email protected]

Abstract- This paper describes a flexible testbed using a combination of managed switches and rack-mounted personal computers, controlled from a central server distribution point. The arrangement allows for the rapid and automated configuration of various network scenarios. In addition, the use of IP tunnelling allows testbed islands to be interconnected in support of larger distributed arrangements. After describing the architecture, the paper discusses a number of topical example scenarios to demonstrate the utility of the equipment.

I. INTRODUCTION

Not surprisingly, there are many testbeds for high performance network evaluation, including [1][2][3][4]. What differentiates our approach is that it uses readily available generic components and that particular attention has been paid to the ability to perform rapid and automatic reconfigurations. We also ensure the system can be operated in a distributed environment using virtual link connectivity, for the data plane, and remote Keyboard, Video, Mouse (KVM) access, for control plane interaction.

The remainder of the paper is organised as follows. It begins by providing an overview of the architecture in Section II, including the distinct functioning of the testbed control and data planes. Section III then demonstrates its utility by describing how it can be used to undertake a variety of evaluation studies. These include current activities examining distributed resource management and inter-provider MPLS connectivity. The paper is concluded in Section IV, which also considers some future work items.

II. TESTBED ARCHITECTURE

The testbed is organised into a cluster of core routers and a number of client / server workstations around the periphery. The topology of the core router cluster is completely configurable under software control and can be partitioned into separate domains. The client / server workstations act as traffic generators, traffic analysers and core router configuration stations. The core routers and client / server entities are all implemented on personal computers1 with multiple Network Interface Cards (NICs)2. These are inter-connected through multiple Local Area Network (LAN) switches3, the management of which is under the control of an administrator. A number of LAN switch ports are connected to the router PCs via a "breakout" patch-panel. This provides a series of "Y" connections so that monitoring devices can be used to examine the traffic on specified links without administrator intervention.

Each of the routers and client / server PCs has a hard disk so that the software necessary to perform routing and application-type behaviour can be downloaded, adjusted and executed. This allows the experimenters to alter routing algorithms and routing tables, and to introduce various diagnostic software. It also allows various statistics to be collected locally for later interrogation.

The LAN switches control the physical connectivity between the various router PCs and the client / server PCs. They act as electronic cross-connects that can be configured for various physical port-to-port mappings using the Simple Network Management Protocol (SNMP). They also permit the administrator to perform port mirroring. This involves specifying both an egress port to be used for the data path to the next network component and a separate port to which a copy of the data can be sent. Network analysis tools can then examine the copied data without interfering with the data path.
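Although the specific MIB objects involved are switch- and vendor-dependent, this kind of port reconfiguration lends itself to simple scripting. The following Python sketch is hypothetical only: the management address, community string and the use of the Q-BRIDGE-MIB dot1qPvid object are assumptions for illustration rather than details taken from the testbed.

# Illustrative sketch (not the testbed's actual scripts): assigning LAN switch
# ports to per-experiment VLANs via SNMP, using the net-snmp "snmpset" CLI.
# The community string, switch address and OID below are placeholders; the
# correct object (e.g. dot1qPvid, if the switch supports the Q-BRIDGE-MIB)
# depends on the vendor's MIB.
import subprocess

SWITCH = "10.0.0.2"            # hypothetical management address on the control plane
COMMUNITY = "private"          # hypothetical SNMP write community
PVID_OID = "1.3.6.1.2.1.17.7.1.4.5.1.1"   # dot1qPvid, assuming Q-BRIDGE-MIB support

def assign_port_to_vlan(port: int, vlan: int) -> None:
    """Set the default VLAN (PVID) of a switch port, mapping it to an experiment."""
    subprocess.run(
        ["snmpset", "-v2c", "-c", COMMUNITY, SWITCH,
         f"{PVID_OID}.{port}", "u", str(vlan)],
        check=True,
    )

# Example: place router-PC ports 1-4 into the VLAN reserved for experiment team 3
for port in (1, 2, 3, 4):
    assign_port_to_vlan(port, vlan=103)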

1 Using PCs is desirable because it provides a fallback position, offsetting the capital expenditure risk in the event that the testbed is decommissioned.


2 Typically single-port and quad-port 100BaseT Fast Ethernet Network Interface Cards are employed.

3 More formally these devices are referred to as managed bridges, allowing operations such as the filtering of data selectively between physical interfaces. In this way, it is possible to restrict the passage of traffic between certain ports, thus segregating traffic between separate concurrent experiments.

4 The Configuration Server is the centralised administration point within the testbed. It supports separate administrator and experimenter access privileges. Each experimenter account accommodates all of the non-volatile data used for each of the experiments.

The network is separated into control and data planes as shown in Figure 1. Experimenters have access to shared information on the configuration server4 and are able to reconfigure all devices within the yellow-bordered area. This permits them to set up various experimental scenarios and operate applications across them5. The features of the control plane and data plane are described in Section II.A and Section II.B, respectively.

Figure 1:- Basic Architectural Layout of the Testbed
[Figure 1 shows the client / server (C/Sv) and router (Rt) PCs attached to two 24-port LAN switches, forming the topology under test (data plane), together with the patch panel, snooping devices, desktop monitors and screen switches, the managed administration switch, multi-port hub, firewall, and the configuration and backup/media servers on the Control0 and Control1 networks. The yellow region defines the extent of the configurable network for a given topology under test; Control0 is used for the boot server shared directory, NFS/SMB, DNS and XNTPD.]

Typically, the testbed is organised into experimenter teams. Each team has direct access to a number of client / server PCs and core router PCs. These are operated as virtual LANs so that the teams cannot interfere with each other.

In addition to the control and data planes there are a number of VDU, mouse and keyboard links from the multimedia client / server PCs and the core router PCs. The VDU, mouse and keyboard located at the multimedia PCs can be switched using KVM switches to connect directly to multiple PCs. This enables the PCs to be configured and interrogated without overwhelming the testbed environment with monitors and so forth. In addition to saving space, it also reduces the heat dissipation demands.

It should be noted that all cables are colour coded and labelled at both ends so that their function can be readily ascertained by visual inspection. A schematic plan of the testbed is displayed in the laboratory with primary components, port numbering and subnet addressing identified.

A. Control Plane
For simplicity of operation, and to prevent administrative communications interfering with testbed experiments, the testbed is organised into separate control and data planes. The control plane is then further subdivided into two subnets, named control plane zero (CP0) and control plane one (CP1).

5 Note: topological changes require configuration of the LAN switches. Given that multiple experimenter teams may be working on the testbed concurrently, it is assumed that (initially) access to the LAN switch configuration routines will be limited to the administrator. This prevents unwanted interference between experiments.

Both control planes are connected through to the centralised configuration server via an administrative LAN switch.

All experimenter-accessible testbed router PCs are connected to each other and to the configuration server via CP0. This subnet is used by experimenters to access their test scenario software and configuration scripts, which are stored on the configuration server with backup facilities. This discourages permanent software from residing on the testbed core routers. When the experimenters wish to perform an experiment, they transfer files from their private work area on the configuration server to the various computers they have access to. In the event of a serious configuration error, the experimenters can reinstall operating system images with a default set of application software from the configuration server, using Symantec's Ghost software package6 or by running Linux scripts. To augment the configuration of the PCs, each machine is also equipped with a CD-ROM drive so that software can be rapidly installed in response to changing experimental requirements or to recover from erroneous behaviour. For example, it is feasible for a complete install of the operating system, device drivers and routing application software to be performed from a CD in a single step.

CP1 is typically available solely to the administrator and is used to gain access to the LAN switches for configuration purposes. This means that the experimenters cannot alter the virtual LANs set up by the administrator. It ensures that inadvertent misrouting of traffic cannot result in experiments within separate virtual LANs interfering with each other. For the administrator alone, a further control subnet is available on the administration LAN switch. This connects the configuration server to other networking resources via a firewall to prevent unauthorised access into and out of the testbed. It permits external access to the Internet for the downloading of software packages and so forth.

B. Data Plane
The data plane comprises all of the non-administrative ports on the primary LAN switches and the Fast Ethernet ports on the testbed PCs, apart from the single port that connects each PC to CP0. The architecture is formed into a desktop-switched Fast Ethernet arrangement where all connections are point-to-point. This permits the Ethernet cards to be configured for full-duplex operation, yielding a 100 Mb/s transmission rate into and out of each port concurrently. Some of the LAN switch ports are passed through a physical breakout panel. This has a number of "Y" connectors so that monitoring devices can be attached to each subnet. A further set of connections joins the primary testbed LAN switches together into a high-bandwidth backbone (i.e. an 800 Mb/s full-duplex trunk). These are used when testbed configurations require topologies to be constructed that encompass router PCs attached to each of the LAN switches.

6 As the rack-mounted PCs are of a similar specification, using Ghost it is possible to download a new image to the entire testbed system (of 40 PCs or more) within minutes.
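As an illustration of the file transfer step described above, the following hypothetical Python sketch pushes a scenario's files from an experimenter's work area on the configuration server to a set of router PCs over CP0; the hostnames, paths and the use of scp/ssh are assumptions, not the testbed's actual tooling.

# Hypothetical sketch: distribute an experimenter's configuration scripts from
# their private work area on the configuration server to the router PCs over
# control plane zero (CP0). Hostnames and paths are placeholders.
import subprocess

WORK_AREA = "/export/experiments/team3/scenario1"   # hypothetical work area
ROUTER_PCS = [f"rtr{n:02d}" for n in range(1, 11)]  # hypothetical CP0 hostnames

def deploy(host: str) -> None:
    """Copy the scenario files to one router PC and make the start script executable."""
    subprocess.run(["scp", "-r", WORK_AREA, f"{host}:/opt/scenario1"], check=True)
    subprocess.run(["ssh", host, "chmod +x /opt/scenario1/start.sh"], check=True)

for host in ROUTER_PCS:
    deploy(host)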

C. Experimental Considerations / Issues
During the development of the testbed a variety of considerations and issues have been identified. These are itemised as follows:

• Accurate time measurement is difficult and needs careful consideration if the testbed is to be used for accurate performance evaluation studies. Given the initial activities earmarked for the testbed, as described in Section III, this is not regarded as critical at this time.

• Although control of the network elements through the Simple Network Management Protocol (SNMP) is desirable, it has not been employed so far. While SNMP management software is available as freeware for the Linux OS, commercial software costs have precluded its use on the LAN switches.

• Client / server PCs are typically configured as multi-boot Linux or Windows NT machines. This increases the flexibility of the system by avoiding unnecessary re-imaging of experimental setups.

• Suitable subnetting is applied using 10.x.x.x addresses within the data plane (an illustrative allocation sketch follows this list). Use of these private addresses ensures that damage is minimised if data "escapes" from the captive office environment onto the public Internet.

• A further simple means of rapidly reconfiguring the testbed is still under investigation. For example, a network boot from a single server "/usr" partition would be desirable, using a TFTP boot image for the router PCs.
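As referenced in the subnetting item above, the following sketch shows one way per-experiment /24 subnets could be carved from the 10.0.0.0/8 data-plane space using the Python standard library; the numbering plan is purely illustrative and is not the testbed's actual scheme.

# Illustrative only: carving the 10.x.x.x data-plane address space into
# per-link /24 subnets for each experiment.
import ipaddress

DATA_PLANE = ipaddress.ip_network("10.0.0.0/8")

def experiment_subnets(experiment_id: int, links: int):
    """Reserve a /16 per experiment and hand out one /24 per point-to-point link."""
    block = list(DATA_PLANE.subnets(new_prefix=16))[experiment_id]
    return list(block.subnets(new_prefix=24))[:links]

for link, subnet in enumerate(experiment_subnets(experiment_id=3, links=5)):
    print(f"link {link}: {subnet}")   # e.g. link 0: 10.3.0.0/24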

D. Testbed Realisation and Basic Experiments
A generic illustration of the testbed experimental arrangement is shown below in Figure 2. In this instance, it demonstrates how the router PCs can be configured through the managed switch into a particular network topology, typically in order to perform routing experiments.

Figure 2:- Typical Testbed Experimental Arrangement
[Figure 2 shows the router PCs (Router PC1, Router PC2, ..., Router PC40), each with 4 x 10/100 Mbps interfaces, connected through a managed router lab switch (Cisco 4500, 192 x 10/100/1000 Mbps ports) to form the network topology under evaluation.]

Under software control the router PCs can be configured into a huge variety of network topologies. In addition, each PC is pre-configured with the appropriate routing software and machine-specific parameters, using an "imaging" process. A complete imaging cycle typically takes only 30 seconds.

Unlike traditional routers, where the operating system source code is not available for modification, PC-based routers are able to run open-source routing software such as Zebra [5] or XORP [6]. These software packages provide a solid framework on which to add new routing protocols and associated management functionality, not tied to a particular vendor. It therefore becomes possible to configure groups of PCs as CE, PE and P routers, as well as end-user VPN community members, and to evaluate the complete signalling and data pathways under a variety of conditions. In addition, the presence of multiple testbeds allows the realistic examination of inter-provider operations, with each testbed functioning as a single AS, as described more fully in Section III. As an example, a photograph of the testbed arrangement at Queen Mary, University of London, is shown in Figure 3.
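As an illustration of how such per-router configuration might be generated during the imaging step, the following sketch emits minimal Zebra and ospfd configuration files from a simple topology description; the hostnames, addresses and the choice of OSPF are assumptions rather than the testbed's actual scripts.

# Assumed workflow (not the authors' imaging scripts): generate per-router
# Zebra and ospfd configuration files from a simple topology description, so
# that each router PC boots with machine-specific settings.
import ipaddress

interfaces = {
    "rtr01": [("eth1", "10.3.0.1/24"), ("eth2", "10.3.1.1/24")],
    "rtr02": [("eth1", "10.3.0.2/24"), ("eth3", "10.3.2.1/24")],
}

def zebra_conf(host: str) -> str:
    lines = [f"hostname {host}", "password zebra"]
    for ifname, addr in interfaces[host]:
        lines += [f"interface {ifname}", f" ip address {addr}"]
    return "\n".join(lines) + "\n"

def ospfd_conf(host: str) -> str:
    lines = [f"hostname {host}-ospfd", "password zebra", "router ospf"]
    for _, addr in interfaces[host]:
        net = ipaddress.ip_interface(addr).network
        lines.append(f" network {net} area 0")
    return "\n".join(lines) + "\n"

for host in interfaces:
    with open(f"{host}-zebra.conf", "w") as f:
        f.write(zebra_conf(host))
    with open(f"{host}-ospfd.conf", "w") as f:
        f.write(ospfd_conf(host))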

Figure 3:- Photograph of the QMUL Testbed

A similar testbed is in place at the Universitat de Girona. Although it provides similar functionality, it was conceived on a smaller scale, as a low-cost version, in order to collaborate with the QMUL laboratory in setting up inter-testbed experiments. Figure 4 shows the basic layout of the Girona testbed, while Figure 5 displays several images. Moreover, their laboratory facilities consist of Sun workstations, PCs, Cisco routers and bridge-switches.

Figure 4:- Girona Testbed Layout
[Figure 4 shows several node / router desktop PCs and a configuration server and gateway (providing storage and configuration services and files) interconnected by a 24-port Cisco Catalyst 2900 switch using configurable Ethernet lines, a configuration virtual LAN and a serial (switch console) link, with access to the Internet and other laboratory facilities.]

In the laboratory, the routers and Linux PCs support MPLS and QoS in order to set up different configurations and topologies. The testbed is directly connected by optical fibre links to the Spanish academic network provider "Red Iris", which is in turn connected to GÉANT. The aim is to use the testbed for different types of experiments and simulations comprising cluster/grid technologies, QoS routing/MPLS, network protection mechanisms, distributed simulations, etc. In the following paragraphs two different configurations are described as examples of the use of the testbed. The first is related to cluster software evaluation and comparison, and the second to MPLS routing experiments.

The main objective of the cluster software experiment was the evaluation and comparison of different free distributed cluster software packages and different forms of cluster configuration. Basically, the Linux OS was installed on the cluster PCs along with the different cluster software: Message Passing Interface (MPI) [7], Parallel Virtual Machine (PVM) [8], and Multicomputer Operating System for unIX (MOSIX) [9]. The main characteristics are summarised in Table 1. Different ways of using the Ethernet ports and the switch were also compared (e.g. the use of one, two, three, or four ports together with the Linux channel bonding driver, and the emulation of a crossbar using the switch VLANs).

Table 1: Main Characteristics of the Cluster Software

Characteristic                       | PVM    | MPI    | MOSIX
Level                                | App.   | App.   | Syst.
Process communication                | Async. | Async. | Pipes, named pipes, sockets
Collective communication operations  | Yes    | Yes    | No
Process assignment                   | Static | Static | Dynamic (load balancing)
Dynamic cluster configuration        | Yes    | No     | Yes
Support for different architectures  | Yes    | Yes    | No

The experiments carried out included, for instance, the evaluation of the migration time, the evaluation of the communications cost, and different network configurations. All the experiments were carried out 10 times and 95% confidence intervals were calculated. For instance, Figure 6 shows the MOSIX migration cost results.

Figure 6:- MOSIX migration cost as a function of the amount of data transferred using 1 to 4 NICs per node. [The plot shows migration time in ms (up to roughly 500 ms) against transferred data from 0 to 6 Mbytes, with one curve per number of NICs.]
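The communications cost measurements mentioned above can be obtained with a simple message-passing round-trip benchmark. The sketch below, using the mpi4py bindings, is illustrative only; it is not the benchmark code used in these experiments and the message sizes are arbitrary.

# Minimal sketch of a communication-cost measurement between two cluster nodes
# using MPI (via the mpi4py bindings). Run with: mpiexec -n 2 python pingpong.py
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

for size in (1_000, 100_000, 1_000_000):          # payload sizes in bytes (arbitrary)
    payload = bytearray(size)
    comm.Barrier()
    start = time.perf_counter()
    if rank == 0:
        comm.Send(payload, dest=1)                 # ping
        comm.Recv(payload, source=1)               # pong
        rtt = time.perf_counter() - start
        print(f"{size} bytes: round trip {rtt * 1e3:.2f} ms")
    elif rank == 1:
        comm.Recv(payload, source=0)
        comm.Send(payload, dest=0)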

Figure 5:- Photographs of the Girona Testbed

The main objective of the MPLS routing experiment was to test an open-source Linux-based set of routing software, including an MPLS kernel patch still under development. The software included the Zebra/Quagga [10] routing modules and the mpls-linux [11] packages. Several network configurations and tests were undertaken, for instance a Traffic Engineering experiment using two different Label Switched Paths (LSPs) between the same origin-destination node pair. The nodes were configured to send one type of traffic through one LSP and another type of traffic through the other LSP. Experiments were also carried out on fault recovery mechanisms at the MPLS level, developing a simple client/server utility that monitors the physical links in order to detect a failure. See Figure 7 for the network configuration of this particular experiment.
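The monitoring utility itself is not reproduced in the paper; the following minimal sketch shows one way such a monitor could poll the Linux carrier state of the working link and trigger a switch-over, with the actual LSP rewrite left as a placeholder.

# Minimal sketch of a link-failure monitor in the spirit of the utility
# described above (the actual tool is not published). It polls the Linux
# carrier state of a monitored interface and, on failure, calls a placeholder
# recovery hook that would redirect traffic onto the backup LSP.
import subprocess
import time

MONITORED_IFACE = "eth2"            # interface carrying the working LSP (assumed)

def link_up(iface: str) -> bool:
    """Return True if the physical carrier is present on the interface."""
    with open(f"/sys/class/net/{iface}/carrier") as f:
        return f.read().strip() == "1"

def switch_to_backup_lsp() -> None:
    # Placeholder: in the real experiment this would rewrite the FEC-to-LSP
    # mapping (e.g. via the mpls-linux userspace tools) to use the backup path.
    subprocess.run(["logger", "link failure detected - switching to backup LSP"])

while True:
    if not link_up(MONITORED_IFACE):
        switch_to_backup_lsp()
        break
    time.sleep(0.1)                 # 100 ms polling interval (assumed)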

[Figure 7 shows four Linux hosts (HOST 1-4) interconnected through switch VLANs (VLAN 10-14) on 192.168.x.x subnets, with a working LSP and a backup LSP (MPLS labels in the 1000 and 3000 ranges) between the same endpoints, and a simulated failure on the working path.]

Figure 7:- MPLS Fault Restoration Experiment

E. Inter-Testbed Tunnelling
Interconnection between the testbed islands can be readily achieved via the public Internet. However, for a number of experiments, particularly those involving MPLS, it is desirable to provide the interconnection at layer 2, as seen by the various testbed router entities. To achieve this, layer-2 tunnelling within IP datagrams can be employed to form a virtual private wire service. Software at the ends of the tunnel provides the encapsulation and decapsulation functions, as illustrated in Figure 8.
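A minimal sketch of such tunnel termination software is given below, assuming a Linux TAP interface that has been created and bridged into the local testbed LAN beforehand; the interface name, remote address and UDP port are placeholders, and any IPSec protection is assumed to be applied separately by the host.

# Minimal sketch of a layer-2-over-IP tunnel endpoint: Ethernet frames read
# from a local TAP interface (bridged into the testbed LAN) are carried in UDP
# datagrams to the remote island, and vice versa.
import fcntl
import os
import select
import socket
import struct

TUNSETIFF = 0x400454CA
IFF_TAP = 0x0002
IFF_NO_PI = 0x1000

LOCAL_PORT = 9000
REMOTE = ("peer-island.example.org", 9000)   # hypothetical remote tunnel endpoint

# Attach to the tap0 interface (created and bridged beforehand by the administrator)
tap = os.open("/dev/net/tun", os.O_RDWR)
fcntl.ioctl(tap, TUNSETIFF, struct.pack("16sH", b"tap0", IFF_TAP | IFF_NO_PI))

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("0.0.0.0", LOCAL_PORT))

while True:
    readable, _, _ = select.select([tap, udp], [], [])
    if tap in readable:
        frame = os.read(tap, 2048)           # Ethernet frame from the local testbed
        udp.sendto(frame, REMOTE)            # encapsulate in a UDP/IP datagram
    if udp in readable:
        frame, _ = udp.recvfrom(2048)        # datagram from the remote island
        os.write(tap, frame)                 # inject the frame into the local LAN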

[Figure 8 shows two testbed islands, Autonomous System 1 and Autonomous System 2, interconnected across the public infrastructure via tunnel termination points; the testbed IP/MPLS datagram is carried inside a public IP datagram, with optional IPSec encoding.]

Figure 8:- Testbed Tunnelling Interface

As far as the Autonomous System Border Routers (ASBRs) in AS1 and AS2 are concerned, they have a direct wire link between them. This permits actions such as inter-AS Label Switched Path (LSP) splicing to be carried out. Furthermore, if security is a consideration, the tunnel payload can be encrypted using IPSec. This is a particularly appealing approach because it permits the easy interconnection of large numbers of islands and it is also cheap, as no leased lines or virtual connections need to be agreed with the public Internet operators. However, the "link" characteristics are limited to the "best efforts" service offered by the Internet and the performance of the tunnel termination software.

III. EXAMPLE EVALUATION SCENARIOS

To illustrate the general utility of this form of testbed, a couple of more advanced experimental setups are now considered by way of example.

A. Inter-Provider MPLS Research
With the growing uptake of Multi-Protocol Label Switching (MPLS), research interest has recently been devoted to the use of MPLS for supporting Virtual Private Networks (VPNs). A taxonomy of the various VPN approaches is described in RFC 4026 [12]. A further goal is to allow the MPLS infrastructure to be dynamically configured in response to particular instantaneous VPN community requirements. Within a domain the necessary signalling support is available via RSVP-TE [13]. However, a key challenge is to enable dynamic MPLS VPNs to be supported between provider domains without divulging sensitive operator information.

The testbed becomes an effective means of replicating this situation and of exploring the viability of various inter-domain signalling approaches. An example solution is considered here. Assume a user "A" has created a VPN including the special resource(s). This was achieved by the user contacting a mediating agent that we refer to as the Dynamic VPN Manager (DVM). The DVM then liaised with the connection management software at the ingress LSR point(s) to establish LSPs to and from the resources within the AS domain. The path taken by the LSPs is not known to the DVM; however, it has the ability to specify connection QoS requirements. This allows layer-2 resilience path switching to be carried out transparently to the DVM. At some later time a user "B" wishes to join the VPN group. It starts this process by contacting its local DVM. Due to inter-domain DVM advertisements, B's local DVM is aware that the target VPN exists and that it is managed by the DVM in AS1. It uses inter-domain Network-Network Interface (NNI) signalling to negotiate the joining operation on behalf of B. If this request is permitted, the DVM in AS2 uses local connection management to establish LSP(s) from B to the relevant AS boundary router. The DVM in AS1 also sets up LSP(s) from its local VPN group member users and resources to its own AS boundary router. In both cases the TSpec can be used to ensure QoS constraints are observed. So far, two incomplete MPLS tunnels have been created, each terminating at the local AS border router. The final stage is for the border routers to exchange labels to enable the splicing together of the tunnels. This can be done by piggybacking MPLS labels on BGP routing messages [14], although other means are also possible. The crucial point is that the operator of AS2 has no knowledge of the LSP tunnel path taken in AS1. The complete LSP is not created in a true end-to-end fashion; rather it is formed from the concatenation of separate tunnels. The RSVP-TE signalling messages need not contain the complete source-routed path, only that portion of the path associated with a given segment. The complete process is illustrated in Figure 9.

The testbeds in Girona and QMUL can be configured to model the network components at AS1 and AS2. Using the tunnelling mechanism outlined in Section II.E it is possible to build a complete representation of this scenario and to explore the various interactions. This is currently allowing the inter-DVM handshaking protocols to be refined and the interaction between the network-specific connection management and the more generic VPN management functions to be examined. Furthermore, logical partitioning of the testbed resources allows this experiment to be carried out concurrently with unrelated ones with no disruption.
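To make the splicing step concrete, the following toy sketch models the label-swapping (LIB) entries that the two border routers install; the interface names and label values are invented for illustration and do not come from the experiments.

# Illustrative only: the label-swapping state installed at the two border
# routers to splice the LSP segments, mirroring the steps of Figure 9 as plain
# data. This is not the routers' actual implementation, and the label values
# are hypothetical.

# Each LIB maps (incoming interface, incoming label) -> (outgoing interface, outgoing label)
lib_asbr1 = {}
lib_asbr2 = {}

def splice(lib, in_if, in_label, out_if, out_label):
    """Install a label-swapping entry that joins two LSP segments."""
    lib[(in_if, in_label)] = (out_if, out_label)

# Step 4: ASBR2 advertises label 500 for the inter-AS segment; ASBR1 splices (i) -> (ii)
splice(lib_asbr1, in_if="to_PE1", in_label=100, out_if="inter_AS", out_label=500)

# Step 4A: ASBR2 splices (ii) -> (iii) towards PE2
splice(lib_asbr2, in_if="inter_AS", in_label=500, out_if="to_PE2", out_label=300)

print(lib_asbr1, lib_asbr2)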

Figure 9:- Inter-Domain LSP Creation
[Figure 9 illustrates the inter-domain LSP creation process between hosts A (in AS1, behind PE1 and ASBR1) and B (in AS2, behind ASBR2 and PE2):
Step 1: DVM1 tells the local ingress LER (PE1) to form an LSP segment from A to ASBR1.
Step 2: DVM1 agrees with the adjacent DVM2 to form an inter-domain LSP.
Step 2A: DVM2 tells its local ASBR2 to form an LSP segment to PE2 in preparation.
Step 3: DVM1 tells ASBR1 to expect a label for the new LSP from ASBR2.
Step 3A: DVM2 tells its local ASBR2 to send a label for the new LSP segment to ASBR1.
Step 4: ASBR2 gives the label to ASBR1, permitting a label swapping entry in the LIB at ASBR1 that splices LSP segments (i) and (ii).
Step 4A: ASBR2 enters label swapping details in its LIB to splice LSP segments (ii) and (iii).]

B. Remote Resource Management
A key aim of grid computing is to enable a user to solve problems on a distributed platform in a reliable and confidential (secure) manner within a specified time. User applications embrace both commercial and academic communities. Typical academic applications may involve climate change calculations. These are processor intensive but not time critical. Conversely, commercial applications are time sensitive and must have guaranteed confidentiality. An example might be the evaluation of airflows over a new commercial airframe. Clearly, the data supplied and the results obtained would be strictly confidential. The operator and the resource infrastructure must have mechanisms in place to guarantee the isolation of this data from anyone outside, possibly including the operator themselves.

Distributed and parallel computations, such as most scientific ones, need to locate and access large amounts of data and to identify suitable computing platforms on which to process this information. Most often, applications need to scale dynamically across distributed computing platforms such as clusters and grids. Therefore, it is also necessary to consider the resource management function itself. Techniques such as over-booking of finite resources and the pre-emptive scheduling of tasks of differing priorities improve efficiency and flexibility. Both of these concepts were considered in the SHUFFLE EU project [15], but not applied to an MPLS environment. The testbed is now being used to extend and evaluate these concepts in relation to ubiquitous computing. Within this framework, user-based resource agents liaise with separate resource management islands that may, or may not, be owned by a single operator. As such, the testbed can be readily configured to act as a collection of client agents and resource management entities scattered across AS domains. Within this context a mechanism for resource discovery, booking, data transfer and results assimilation is being developed. In addition, associated with a given Resource Manager (RM), a number of RM modules perform distributed resource management tasks. The resources involved are classified into categories such as:

• Computational Resources (CR), including processors, caches and main memories. There are many different ways to quantify and monitor the use of this kind of resource; these have to be studied and one selected, or even new ones proposed.

• Data Storage (DS) resources, which include any type of storage device (hard disk, DVD, etc.). The main measure is capacity, but the characteristics of the device also have to be taken into account when making resource allocation decisions.

• Bandwidth (BW), referring to the communications capacity required locally between the RM modules. This is separate from the communication resources between the clients and the resource clusters, which are handled by the operator's connection management / traffic engineering system.
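By way of illustration only, a composite request spanning these three categories might be captured in a simple structure handed from the DVM to the RM; the field names and units below are assumptions rather than part of the described system.

# Hypothetical sketch of a composite resource request covering the CR, DS and
# BW categories above, as it might be passed from the DVM to the RM.
from dataclasses import dataclass

@dataclass
class ResourceRequest:
    experiment_id: str
    cpu_cores: int          # Computational Resources (CR)
    memory_mb: int          # CR: main memory
    storage_gb: int         # Data Storage (DS)
    bandwidth_mbps: int     # Bandwidth (BW) between RM modules
    duration_s: int         # requested booking period

request = ResourceRequest(
    experiment_id="vpn-A",
    cpu_cores=8,
    memory_mb=4096,
    storage_gb=200,
    bandwidth_mbps=100,
    duration_s=3600,
)
print(request)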

Usually an application will ask the RM for multiple types of resource, via the DVM. A typical situation that will be considered is when the user does not know in advance (or at least not exactly) the required resources. In this case the RM and the RM modules will provide mechanisms for monitoring and accounting for the resources used (with the possibility of predicting future needs) and mechanisms to limit the amount of allocated resources. The RM is in charge of resource management; the DVM's role is simply to request those resources on behalf of the VPN users. The DVM usually invokes specific functions of the RM (to book or release resources, etc.). However, in several cases the RM may also take the initiative and notify the DVM of any unexpected change or problem in the resources monitored (e.g. a hard disk failure). The RM module will also deal with prioritised scheduling of tasks, security and auditing. The RM module also has to interact with specific local resources, i.e. with the operating systems running on the computers where the resources are. This could be performed directly using any available services or functionalities offered by the operating systems and other management software, but the possibility of placing an RM agent (proxy module) on the computers where the resources reside is also considered, in order to give better functionality, control and performance.

The RM architecture is therefore decentralised to ensure robustness, scalability and efficiency. Each computational platform under RM control can be equipped with an RM agent, and those agents organise themselves into various virtual network topologies to sense the underlying physical environment and trigger application reconfiguration accordingly. Decision components are embodied in each agent to evaluate the surrounding environment and decide, based on a resource sensitive model (RSM), how to balance the resource consumption of an application's entities in the physical layer. The RSM provides a normalised measure of the improvement in resource availability an entity would receive by migrating between nodes, and uses the profiled information about an application's entities to decide which ones are the most beneficial to migrate.
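The paper does not give the RSM formula, so the following is only a hypothetical sketch of such a normalised "benefit of migration" score: the demand-weighted gain in resource availability an entity would see at a candidate node, scaled to the range [-1, 1]. The resource names, weights and scoring rule are assumptions.

# Hypothetical RSM-style score: weighted, normalised improvement in resource
# availability from migrating an application entity between two nodes.
def rsm_score(entity_profile, current_node, candidate_node):
    """Return a normalised measure of the availability improvement from migrating."""
    score, total_weight = 0.0, 0.0
    for resource, demand in entity_profile.items():                  # e.g. {"cpu": 0.6}
        gain = candidate_node[resource] - current_node[resource]     # availabilities in [0, 1]
        score += demand * gain
        total_weight += demand
    return score / total_weight if total_weight else 0.0

profile = {"cpu": 0.6, "memory": 0.1, "bw": 0.3}   # profiled demand of the entity
node_a = {"cpu": 0.2, "memory": 0.5, "bw": 0.4}    # current node availability
node_b = {"cpu": 0.7, "memory": 0.6, "bw": 0.5}    # candidate node availability
print(f"RSM score for migrating: {rsm_score(profile, node_a, node_b):+.2f}")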

Therefore we are considering network-sensitive virtual network topologies that adjust themselves according to the underlying network topologies and conditions. Indeed, it may be possible to extend the approach to federations of RMs, each sharing the RM module resources in a cooperative manner, though this extension would have significant commercial hurdles to overcome. In this scenario new management techniques for high performance networks are needed. The presented testbed is able to carry out complex experimentation in distributed computation in support of grid computing or agile computing [16].

IV. CONCLUSIONS AND FURTHER WORK

This paper has provided a description of a generic and yet high performance testbed arrangement, based on off-the-shelf technology, that can nevertheless be used to construct and evaluate complex networking scenarios. The use of a centralised server and separate control and data planes permits the rapid reconfiguration of the test environment whilst ensuring that concurrent experiments do not interfere with each other.

Further work is currently underway to consider additional experimental scenarios associated with "agile computing". In addition, the provision of Internet-accessible KVM switches is now being considered for "hands-on" remote experimentation. At the very least this may provide a valuable vehicle in support of distance learning activities. However, one aspect that requires further study is automating scheduled access to testbed resources. Supporting this would allow the testbeds to become an open resource where researchers can use the available facilities by simply performing an online booking process.

REFERENCES
[1] K. Shimano et al., "Demonstrations of Layers Interworking between IP and Optical Networks on the IST-LION TestBed", Proc. Optoelectronics and Communications Conference (OECC2002), 2002.
[2] PIONIER2001 - http://mpls.man.poznan.pl/index.html
[3] A testbed of Terabit IP routers running MPLS over DWDM (ATRIUM) - http://www.alcatel.be/atrium
[4] Acreo National Broadband Testbed - http://www.acreo.se/templates/Page____271.aspx
[5] ZEBRA: http://www.zebra.org/
[6] XORP: http://www.xorp.org/
[7] http://www.mpi-forum.org/ and http://www-unix.mcs.anl.gov/mpi/mpich/
[8] http://www.csm.ornl.gov/pvm/pvm_home.html and http://www.netlib.org/pvm3/
[9] http://www.mosix.org and http://openmosix.sourceforge.net
[10] http://www.quagga.net/ and http://www.zebra.org/
[11] http://mpls-linux.sourceforge.net/ and http://perso.enst.fr/~casellas/mpls-linux/

[12] L. Andersson, T. Madsen, "Provider Provisioned Virtual Private Network (VPN) Terminology", IETF Request for Comments: 4026, March 2005.
[13] D. Awduche et al., "RSVP-TE: Extensions to RSVP for LSP Tunnels", IETF Request for Comments: 3209, December 2001.
[14] Y. Rekhter, E. Rosen, "Carrying Label Information in BGP-4", IETF Request for Comments: 3107, May 2001.
[15] IST-1999-11014 SHUFFLE - http://www.elec.qmul.ac.uk/research/projects/shuffle.html
[16] N. Suri et al., "Agile Computing: Bridging the Gap between Grid Computing and Ad-hoc Peer-to-Peer Resource Sharing", 3rd IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGRID'03), pp. 618-625, 2003.

ACKNOWLEDGEMENTS
This work was partially supported by the Spanish Research Council (CICYT) under contract TIC2003-05567.
