Performance Impact of Architectural Decisions: Integrating Measurement in SILO

Ahmet Can Babaoglu
North Carolina State University, Raleigh, NC
[email protected]

Rudra Dutta
North Carolina State University, Raleigh, NC
[email protected]

ABSTRACT

Future Internet architectural design has attracted much research attention recently, and many novel architectural ideas have been suggested. The software and hardware realization of these architectural visions is a complex task in itself, and is worthy of research attention. In this paper, we examine a recently proposed integration of a measurement architecture into our previously described SILO architecture, and we evaluate alternatives for realizing this integration. We demonstrate, by actual implementation and quantitative investigation, the importance of considering realization in architectural research by showing that there are unexpected factors affecting the performance of these realizations, resulting in unintended consequences, and we identify the better alternative.

Categories and Subject Descriptors
C.2.1 [Computer-Communication Networks]: Network Architecture and Design—Network communications; C.2.m [Computer-Communication Networks]: Miscellaneous

General Terms
Experimentation, Design, Performance

Keywords
Future Internet, Network Architecture, SILO

1. INTRODUCTION

The Internet has been a tremendous success in that it has changed the way we communicate, socialize, conduct business and government, and live in general. The reason for its success lies in its architecture, with well-defined and prioritized goals [1, 2]. After 40 years of success, the Internet architecture is now challenged by emerging technologies and new user expectations, which require different architectural principles. As part of the solution effort, clean-slate approaches help us rethink the core principles and propose new architectures, especially ones enabling cross-layer interactions to optimize performance. Although clean-slate Internet architecture design has gained significant interest in the past years and many designs have been proposed [3, 4, 5] with increased flexibility, there are still under-explored aspects of these architectures, such as measurement, security and accountability, that must be studied to make them a practical alternative to the current architecture. In this paper, we evaluate the implementation-related performance issues of future Internet architectures and focus our studies on an architecture previously proposed by our group, SILO (Service Integrated Control and Optimization) [6]. The current Internet architecture has been implemented in a highly efficient way, which also makes it even harder to change, and our work evaluates the penalties related to implementation choices for measurement and feedback capabilities under different scenarios. We start with an overview of SILO and its measurement integration, continue with test scenarios, and finally provide the experimental results with our interpretations.


2. PRIOR WORK

The key reason why the Internet has gained such success and survived for 40 years lies in its well-defined and prioritized design principles [1], which met its most important goals. The Internet has successfully survived this far, but it has serious limitations. The way we use the Internet is now significantly different from when its architectural principles were defined. Especially in the last decade, the Internet has been challenged in many ways, as stated in [7, 8, 9]: security (the lack of trust between end users and unsolved security problems); the coupling of identity and address information in IP addresses, which limits mobility; the stateless nature of IP, which makes it difficult to provide guaranteed QoS (Quality of Service) to customers; and the violation of the current architecture to take advantage of different physical medium characteristics. The primary reason behind these challenges is that they were not considered in the initial architectural design [1]. While some of these challenges are "patched" by short-term solutions violating the architectural principles, the flexibility of the architecture is significantly reduced, and growing concerns about the future of the Internet have led to clean-slate approaches [10] such as the Role-Based Architecture [3], which proposes a non-layered approach with the idea of fine-grained roles organized in a non-ordered way, treating packet headers as "heaps". The X-kernel [11] design is a reference for new architectures [4, 5] due to its focus on integrating modular messaging into the OS, where a protocol is an object using service interfaces between protocols. Using popular object-oriented modeling, the Service-oriented Architecture [5] utilizes predefined and atomic functional blocks (FBs), such as CRC (Cyclic Redundancy Check), and enables run-time protocol instantiation while focusing on the relationships between different parts of today's protocols. Another example is the Recursive Network Architecture [4], which addresses the challenges by providing a single, flexible architecture based on the reuse of a "meta-protocol" (a protocol instance and the building block for layers) over different layers, stating that services should not belong to specific layers and should not repeat the same functionality. Another architecture following a different approach is GINA (Generalized Inter-Networking Architecture) [9], which tries to solve the challenges by introducing the concepts of objects, realms and zones, with a focus on mobility and security. In GINA terminology, every addressable unit is called an "object", and objects belong to "realms", the administrative domains, each having servers for forwarding, routing, authentication, encryption and proxying.

As more performance-critical applications emerge, measurement is becoming the key factor for assuring the desired quality level of offered services. In future architectures, we would like to be able to collect measurement data; a measurement plane or architecture [12] refers to the configuration of the measurement infrastructure and the management of the collected data. Measurement data is produced at a measurement point, stored in a measurement server, and consumed by a measurement client. In the next section, we give an overview of how measurement can be integrated into a future network architecture with a specific approach.

2.1 SILO Architecture Overview

The main motivation behind SILO [13] is to have a flexible and extensible architecture that can evolve in itself, preventing the ossification issues the current Internet is experiencing. Aiming at a "meta-design" that allows change within the architecture, SILO relaxes the traditional layer boundaries by introducing fine-grained protocol elements called services, where each service is responsible for a specific communication task (such as compression, encryption, or checksum) and exposes "cross-tunable" parameters, which we refer to as knobs, through a standard interface. Based on application requests, a set of these services can be dynamically composed into per-flow custom SILO stacks by the SMA (SILO Management Agent). Whenever a custom SILO stack is created, a Tuning Algorithm is also dynamically loaded by another SILO component, the STA (SILO Tuning Agent), enabling a standardized form of cross-layer interaction and optimization: the Tuning Algorithm communicates with the SILO services through the standard interfaces to make tuning decisions. SILO version 0.3 [6] is a single-threaded prototype implemented in C++ and Python on Linux. An application may use the SILO API to create a custom SILO stack and send/receive data through this stack. The SILO stack uses UDP sockets as the underlying communication medium to exchange data with another application on a different machine running the SILO prototype. Therefore, all application data is first passed through the SILO stack, then encapsulated in UDP, and finally decapsulated on the remote machine. The Tuning Algorithm periodically sleeps and then wakes up to check the knobs and tune them if necessary. In the next section, we describe the inner workings of SILO in a case scenario.
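As a rough illustration of this composition-and-tuning model, the per-flow workflow can be sketched as follows. This is not the SILO 0.3 API; all class and function names below are hypothetical and only show the roles of services, knobs, stacks, and the Tuning Algorithm.

```python
# Hypothetical sketch of the SILO composition-and-tuning model described above.
# The real SILO 0.3 prototype (C++/Python) uses different names; this only
# illustrates the roles of the SMA, STA, services, knobs, and Tuning Algorithm.
import time


class Service:
    """A fine-grained protocol element exposing cross-tunable knobs."""
    def __init__(self, name, knobs):
        self.name = name
        self.knobs = dict(knobs)       # standard interface: named parameters

    def process(self, data):
        return data                    # placeholder for the service's task


class SiloStack:
    """A per-flow stack composed of services, as built by the SMA."""
    def __init__(self, services):
        self.services = services

    def send(self, data):
        for svc in self.services:      # data passes through every service,
            data = svc.process(data)   # then goes out over a UDP socket
        return data


def tuning_algorithm(stack, interval_s):
    """Loaded by the STA: periodically wake up, inspect knobs, adjust them."""
    while True:
        time.sleep(interval_s)
        for svc in stack.services:
            _ = svc.knobs              # read knobs and tune if necessary


# Compose a custom stack for one flow (service names are illustrative only).
stack = SiloStack([Service("compression", {"level": 6}),
                   Service("checksum", {"enabled": True})])
```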

2.2 SILO Integration with Measurement Plane

As part of the GENI ORCA Control Framework Working Group [14], the Integrated Measurement Framework (IMF) project [15] utilizes the cross-layer capabilities of the SILO architecture with a feedback-based Tuning Algorithm that uses measurement data from the optical substrate. The setup in Figure 1 was demonstrated at a GENI Engineering Conference as part of this research.

Figure 1: GENI-IMF Demo. Two VMs (a video stream source at UNC-BEN and a video stream renderer at RENCI-BEN), each running a SILO application gateway, SILO API, packet counter, and SILO Tuning Agent, are connected over an optical data path on BEN (Polatis optical switch, Infinera DTN, NetFPGA) and a management network; measurement and control flow through an XMPP PubSub server, an SOA controller, and an attenuator control script (SCPI/GPIB) driving a VOA. Legend: BER - Bit Error Rate; DTN - Digital Transport Node; SOA - Semiconductor Optical Amplifier; VOA - Variable Optical Attenuator; SCPI - Standard Commands for Programmable Instruments; GPIB - General Purpose Interface Bus. Note: the reference path does not exist in the current experiment; SILO switches traffic to the management network.

In Figure 1, there are two virtual machines; call the one on the left VM1 and the one on the right VM2. Each runs the SILO prototype, and VM1 transmits a video stream to VM2 over the optical medium. The optical substrate measurement data is provided to an XMPP PubSub measurement server, and a SILO measurement service at VM1 periodically consumes measurement data from the XMPP server to obtain new BER (bit error rate) and power levels, while the SILO Tuning Agent also periodically checks the SILO measurement service to react to changes in the measurement data. In case of an increase in BER, the tuning algorithm increases the power level of the link to compensate, which significantly reduces the effect of the BER increase on video quality. However, the BER might be at a level that cannot be compensated by increasing the power level. In that case, the tuning agent might decide to do "path switching" and send its traffic through the alternative link, Eth1. By utilizing these cross-layer capabilities of SILO, better video quality is achieved.

Having achieved this functionality, we further investigated whether another approach might be more efficient than this measurement consumption approach. In SILO, the tuning decisions are always made by the Tuning Algorithm. In order to reach a decision, the measurement service first periodically sleeps, wakes up, and consumes measurement data from the measurement server. Then the tuning algorithm sleeps, wakes up, and polls the measurement service to check whether the measured data has changed, and makes its decision accordingly. This is a two-step process (from server to service, and then from service to tuning algorithm), whereas it would be a one-step process if the tuning algorithm itself consumed the measurement data directly from the measurement server (i.e. without the help of a measurement service). In the next section, we give test scenarios where this additional step makes a difference.
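The difference between the two consumption approaches can be sketched roughly as follows. This is a simplified illustration, not the prototype's code; the data structure standing in for the measurement server, the interval values, and the decision threshold are assumptions.

```python
# Simplified sketch contrasting the two measurement-consumption approaches.
# Not the SILO prototype's code; names, intervals, and thresholds are illustrative.
import time

measurement_server = {"ber": 1e-9, "power_dbm": -3.0}   # stand-in for the XMPP server


def decide(data):
    if data["ber"] > 1e-6:                # threshold is illustrative only
        pass                              # e.g. raise power or switch path


def measurement_service(interval_s, cache):
    """Two-step, stage 1: a service polls the server and caches the data."""
    while True:
        time.sleep(interval_s)
        cache.update(measurement_server)  # server -> service


def tuning_algorithm_via_service(interval_s, cache):
    """Two-step, stage 2: the TA later reads the service's cached knobs."""
    while True:
        time.sleep(interval_s)
        decide(dict(cache))               # service -> tuning algorithm


def tuning_algorithm_direct(interval_s):
    """One-step: the Tuning Algorithm consumes measurement data itself."""
    while True:
        time.sleep(interval_s)
        decide(dict(measurement_server))  # single hop from server to TA
```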

3. REALIZATION

To understand the effects of placing the measurement consumption functionality in the SILO Tuning Algorithm versus having it as a SILO service, we created different scenarios. In these scenarios, the goal is not to provide an alternative optimal solution to the problem introduced in the given scenario, but to exercise the measurement consumption capabilities and observe the effects.

3.1 Network Topology

Figure 2: Experimental Network Topology

In Figure 2, we have four nodes. The SILO Client (M1) generates SILO traffic and the SILO Server (M3) receives it. This traffic passes through the router, M2, which also contains the measurement server, represented by a single box. Notice that we also provide an alternative path between M1 and M3, but it is not used initially. The Traffic Generator (M4) generates background UDP traffic, which M3 also receives. The bottleneck link is between M2 and M3. The measurement data is periodically produced inside the router M2, passed to the measurement server over Unix sockets, and then sent to the SILO Client (M1) over a TCP connection. Notice that the path for this measurement data flow is out-of-band, i.e., independent of the experimental traffic flows. We used an Intel 330M machine with 4 GB RAM to run these virtual machines. Each virtual machine runs Ubuntu 10.04 with 512 MB RAM and an Intel PRO/1000 MT (up to 1000 Mbps) card for connectivity. We utilized common network tools such as pktgen and Iperf to generate constant and varying background UDP traffic, and dummynet to create the bottleneck link, introduce delay, create a buffer at the router, and adjust packet loss on the bottleneck link. To test this experiment with the SILO architecture, we use SILO prototype version 0.3 [6] on both the SILO client and the SILO server. We set the bandwidth between the router (M2) and the SILO Server (M3) to 1 Mbps. We also set the M1-M3 and M3-M4 RTTs to 200 ms and the router buffer size to 50 KB.

In the measurement plane concept [12], a measurement producer unit gathers the measurement data from its source (such as physical link BER), a measurement server receives this measurement data, and a measurement client consumes this data from the measurement server. Even though this is the general architecture, we have only one router and we are experimenting with a specific case, so we decided to use a simple yet specific approach and wrote this entity manually, which gives much finer-grained control over the program.

Before going into the experiments, we first ran tests to determine suitable time intervals for the services and tuning algorithms. Notice that in all experiments, measurement data is produced and sent every 50 ms. We first determined a "reasonable" time interval N, in terms of CPU usage, for checking measurement results. It is fair to assume that a computer can have 50 flows simultaneously, so we created 50 SILO stacks, each having a measurement service with a TCP connection to the measurement server and polling every N ms (i.e., an N ms time interval). Based on our observations, we found that 200 ms, 500 ms and 1000 ms time intervals lead to 3%, 1% and 0.8% CPU usage, respectively. Shorter time intervals add significant overhead on the CPU; therefore we continue our tests with these three intervals (i.e., both services and Tuning Algorithms wake up every N ms to consume measurement data or check knobs, where N is 200, 500 or 1000). We refer to measurement consumption directly by the Tuning Algorithm as TA or the TA based approach, and measurement consumption through a service as S or the Service based approach, concatenated with its time interval (for example, S200 refers to measurement consumption by a service with a 200 ms time interval). 95% confidence intervals are used in all histograms.
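A minimal sketch of the hand-written measurement entity on M2, under assumed transport details: the Unix socket path, the client address, and the message framing below are illustrative, not the actual implementation.

```python
# Minimal sketch of the hand-written measurement server on M2 described above:
# it reads measurement records from a Unix socket (producer inside the router)
# and forwards them to the SILO Client over TCP, out-of-band from the
# experimental traffic. Path, address, and framing are assumptions.
import socket

UNIX_SOCK_PATH = "/tmp/meas_producer.sock"   # hypothetical producer socket
CLIENT_ADDR = ("10.0.0.1", 9000)             # hypothetical SILO Client (M1) address

producer = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
producer.connect(UNIX_SOCK_PATH)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(CLIENT_ADDR)

# Forward each record (produced every 50 ms) to the client.
while True:
    record = producer.recv(4096)
    if not record:
        break
    client.sendall(record)
```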

3.2 Response Time

In this experiment, we investigate the "freshness" of the measurement data that the Tuning Algorithm receives, for both approaches. By "freshness", we refer to the time elapsed between when the measurement data is produced and when the Tuning Algorithm accesses it. As described before, the service based approach involves a two-step process: a service periodically consumes the data from the measurement server, and the Tuning Algorithm periodically checks the service knobs to make decisions. The TA based approach involves a single step: the Tuning Algorithm makes a TCP connection to the measurement server and consumes the data directly. To measure freshness, we run the following test: whenever the measurement server sends a measurement datum, it starts a timer and stops it when an ACK has been received, where the ACK is sent immediately by the Tuning Algorithm upon receiving the measurement data. We call the time between a measurement datum being sent and the ACK being received the response time. The longer the response time, the less fresh the measurement data received by the Tuning Algorithm, leading to less accurate decisions.
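The timing logic itself is simple; a rough sketch follows, with the transport and framing abstracted behind assumed callables.

```python
# Rough sketch of the response-time measurement described above: the
# measurement server starts a timer when it sends a datum and stops it when
# the Tuning Algorithm's ACK arrives. Transport details are assumptions.
import time


def measure_response_time(send_measurement, wait_for_ack):
    """send_measurement() pushes one datum; wait_for_ack() blocks until the ACK."""
    t_sent = time.monotonic()
    send_measurement()
    wait_for_ack()
    return time.monotonic() - t_sent     # the "response time" in seconds
```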

Figure 3 shows an interesting pattern in the response times recorded for the service based approach: for all time intervals, the response time increases up to a certain level (the time interval) and then keeps oscillating. After investigating, we detected a software-implementation clock drift between the Tuning Agent wake-up time and the service wake-up time, which increases until the service reaches the "boundary" of the earlier Tuning Agent wake-up time (the level shown in Figure 3). This causes the maximum possible response time delay, which equals the time interval. Even more interestingly, it never passes that boundary; if it were to pass it, the two would synchronize. After investigating the task scheduling library code, live555 [16], we conjecture that this behavior is due to the internal characteristics of the live555 code, which stores timeout-based events in a doubly linked list and waits for a callback function to return before other events can proceed, causing a software-based clock drift between wake-ups.

We ran the same experiments with the Tuning Algorithm approach as well. There, we observed that the response time is always below 50 ms, which is the sending rate of the measurement server. The reason for the delay in the service based approach is the gap between the time the measurement service receives the data and the time the Tuning Algorithm accesses and then ACKs it. Even though both the service and the tuning algorithm run at the same interval, the phase shift between them leads to delays, and the growing phase difference also explains why the response time increases over time in the service based approach. There is no such phase problem in the Tuning Algorithm approach; it simply ACKs whenever it receives the measurement data. Notice that even when a Tuning Algorithm runs every 1000 ms, it receives a new measurement datum within the last 50 ms before it wakes up, and that is the datum it ACKs; therefore its response time is always below 50 ms, regardless of the time interval. It does not run faster; rather, the Tuning Algorithm approach always receives "fresh" measurement data. Our interpretation of the jaggedness within the time interval, observed in all cases, is that messages are sent every 50 ms, but the clock drift between machines changes the interval between sent and received messages and therefore changes the response time.

In this experiment's service based approach, we had the Tuning Algorithm and its measurement service run with the same time interval. It may be argued that running the Tuning Algorithm at shorter time intervals than the measurement service would decrease the response time and fix this "fresh" measurement data problem. However, the CPU usage tests show that time intervals shorter than 200 ms create significant CPU utilization and thus themselves cause overhead. Having identified the reasons behind the response time difference, we now run both approaches in the test scenarios of the next section to observe the effects.

Figure 3: Response Times vs. Time Intervals for Service Based Approach

3.3 Congestion Scenario

Our first scenario is a network congestion scenario created by filling the buffer at the router. To exercise our Tuning Algorithm with measurement data, we use a simple algorithm that adjusts the SILO Client sending rate according to the buffer occupancy. Each time the Tuning Algorithm wakes up, it checks the buffer size measurement data. If the buffer allocation is more than 75%, the algorithm decreases the sending rate by 30%; else, if the buffer allocation is less than 45%, it increases the sending rate by 30%. This provides a feedback-based congestion prevention mechanism that reduces the sending rate to prevent buffer overflow and keeps the allocated buffer size between 45% and 75%. We chose a 30% change per step so that the sending rate can change quickly. Since the Tuning Algorithm tries to keep the allocated buffer size between 45% and 75%, we might consider all records below 45% or above 75% to be "error" cases. Because we are mostly interested in avoiding the buffer getting full and dropping packets, we define an error to occur only when the allocated buffer size is above 75% (the high threshold), implying congestion. The SILO traffic is UDP based, starting at 0.4 Mbps; when no tuning action is involved, it is constant bit rate traffic. The UDP traffic generated by pktgen at M4 is constant bit rate at 0.6 Mbps. In addition, in order to generate varying traffic, we use Iperf to generate UDP traffic at M4 with a period of 7.4 seconds: 0.3 Mbps of traffic is generated for 3.2 seconds, and then Iperf sleeps for 4.2 seconds. This lets us generate sudden and relatively high volumes of traffic, creating congestion, and then observe how both approaches react to it.
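The buffer-threshold rule can be sketched as follows; this is a simplified illustration of the algorithm described above, and the function name and the way occupancy and rate are obtained are assumptions.

```python
# Simplified sketch of the congestion-scenario tuning rule described above:
# keep router buffer occupancy between 45% and 75% by adjusting the SILO
# Client's sending rate in 30% steps. Helper name is hypothetical.
HIGH_THRESHOLD = 0.75   # above this: congestion risk, back off
LOW_THRESHOLD = 0.45    # below this: link underused, speed up
STEP = 0.30             # 30% change per wake-up


def tune_sending_rate(buffer_occupancy, current_rate_mbps):
    """Called each time the Tuning Algorithm wakes up with measurement data."""
    if buffer_occupancy > HIGH_THRESHOLD:
        return current_rate_mbps * (1.0 - STEP)   # decrease sending rate by 30%
    if buffer_occupancy < LOW_THRESHOLD:
        return current_rate_mbps * (1.0 + STEP)   # increase sending rate by 30%
    return current_rate_mbps                      # occupancy in range: no change
```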

Figure 4: Error Percentages

Figure 4 shows the percentage of allocated buffer size readings beyond 75%, i.e., the error. From the results we observe that the "no tuning algorithm" case clearly has a high error rate, violating the 75% threshold and causing congestion. The Tuning Algorithm also clearly wins in the 200 ms case, whereas at 1000 ms there is no difference and at 500 ms there is a slight benefit to using TA. In our simple congestion scenario, the result for TA200 was not unexpected, since the short and accurate reaction time of TA200 should let it perform better than the others. However, an interesting conclusion for this scenario is that in the 1000 ms case, the freshness of the data is not the dominating factor; instead, the reaction time (shorter time interval) dominates. In the next section, we extend our experiment with another independent scenario.

3.4 Path Switching Scenario

In this scenario, we added a "path switching" capability to the Tuning Algorithm: it can switch to the alternative path to recover from a sudden BER increase. The decision making process is binary; the path loss rate is either 0% or 60%. If it is 0%, the regular path is preferred; otherwise the alternative path is used. The algorithm also switches back to the regular path when the path loss rate becomes 0% again. The packet loss rate changes from 0% to 60% and vice versa every second. To understand the benefits of reacting successfully to packet loss rate changes over time, we measured the bytes sent by the SILO Client and the bytes received by the SILO Server for all cases.
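A minimal sketch of this binary decision follows; how the loss rate is measured and how traffic is re-routed are abstracted away, and the interface names are illustrative only.

```python
# Minimal sketch of the binary path-switching rule described above.
# Loss-rate measurement and the actual re-routing are abstracted behind
# the caller; interface names are hypothetical.
REGULAR_PATH = "eth0"
ALTERNATIVE_PATH = "eth1"


def choose_path(path_loss_rate):
    """Loss rate is effectively binary in this scenario: 0% or 60%."""
    if path_loss_rate == 0.0:
        return REGULAR_PATH          # regular path is healthy, prefer it
    return ALTERNATIVE_PATH          # otherwise recover via the alternative path
```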

In Figure 5, within the same time interval, every TA case performs better than the corresponding service case. However, even TA200 is not close to the optimum (the sending rate). Another interesting result is that S500 and S1000 performed very similarly. This can be attributed to the packet loss changing every second: by the time they switch to the alternative path, the path characteristic has changed again.

Figure 5: Bytes received by the receiver

Figure 6 shows the average throughput for each case. All Tuning Algorithm cases perform much better than the service cases; an interesting observation, however, is that TA500 performs better than S200. This is because, even though S200 wakes more frequently, TA500 almost always receives "fresher" measurement data than S200, as described in the "Response Time" section, and therefore TA500 makes better switching decisions.

Figure 6: Throughput for different Time Intervals

We also show the instantaneous throughput received by M3 during this scenario, starting with the 200 ms interval case. Figure 7 shows that S200 suffers many large throughput decreases, whereas TA200 has fewer and shorter ones. This is due to the reaction time, where S200 incurs an extra 200 ms delay. However, TA200 also suffers occasional throughput decreases, since the Tuning Algorithm wakes up every 200 ms and the packet loss characteristic might change right after its wake-up time.

Figure 7: Service vs Tuning Approach for 200 ms time interval

We continue with the 500 ms interval case. In Figure 8, both approaches have large throughput decreases, but the recovery time, i.e., the time to detect the packet loss change and switch paths, is much longer for S500 than for TA500, which keeps S500's throughput at low levels for longer periods.

Figure 8: Service vs Tuning Approach for 500 ms time interval

In the 1000 ms interval case, the throughput decreases are longer and more frequent, especially for S1000, as shown in Figure 9. The primary reason is that we change the packet loss every second, and it frequently happens that by the time S1000 adapts to the new path, the path characteristic has changed again.

Figure 9: Service vs Tuning Approach for 1000 ms time interval

We have discussed the path switching scenario and its effects in detail. In the next section, we give an overview of the experiments and derive high-level conclusions.

3.5 Evaluation

The initial experiments on CPU usage and response time identified "reasonable" time intervals for the different scenarios; we picked 200 ms, 500 ms and 1000 ms for our experiments. The response time results (Figure 3) showed that environmental characteristics (live555) encountered while running services with Tuning Agents can in fact be the dominant factor. This was unexpected, yet it is an important contribution to the understanding of architectural considerations when designing a prototype: the architectural design has to be rather robust and should attempt to make choices that are resilient to such uncontrolled environment variations. In the congestion scenario, we gained less performance than we expected. We realize there are additional factors that affect the performance: for example, the artificial traffic might arrive at a moment when it fills the buffer regardless of the approach used, or the traffic behavior might change even within the response time. In the path-switching scenario, the throughput analysis shows that shorter time intervals help switch the path sooner and thus incur less packet loss, which increases the throughput significantly. Notice, however, that the freshness of the data can have a bigger effect than a short time interval, as can be seen in Figure 6, where TA1000 has higher throughput than S500. We had expected that the freshness of the data would not be an important factor, since all subsequent measurement data would carry the same information, but Figures 7, 8 and 9 show that the freshness of the data has a strong effect on the reaction time, even when the time interval is long. We also acknowledge that the effect of the time interval grows when the measurement data changes frequently, whereas longer time intervals can still yield comparable average throughput if the measurement data does not change frequently, unless a sudden and large change occurs in the measurement data.

4. CONCLUSION

In this work, we first articulated the current Internet architecture challenges, new proposals, the measurement plane, and cross-layer approaches. We then gave an overview of the SILO architecture with its measurement integration. Using the two measurement consumption approaches, we ran experiments over different time intervals with two independent scenarios and evaluated the results. During these tests, we investigated and observed the effect of architectural considerations and of the underlying prototype on performance. Our high-level conclusions from this work are: (1) architectural design has to be rather robust and should attempt to make choices that are resilient to uncontrolled environment variations; (2) performance data has to be carefully analyzed from different points of view to understand and explain unexpected results; and (3) the effect of the time interval grows when the measurement data changes frequently, whereas longer time intervals can still yield comparable average throughput if the measurement data does not change frequently. Therefore, the best time interval will depend on the type of the measurement data and how it is used. Our future work will involve different topologies, measurement data types, and SILO services (such as TCP-like services enabling retransmission). We will also test in a real lab environment to ensure our results hold there. Building on this work, we envision a multi-threaded SILO prototype in which the consumption functionality can be placed in the SMA. We will then test this prototype and compare the results with this work to better understand measurement integration in future Internet architectures. We also would like to thank Dr. Ilia Baldine, Dr. Shu Huang and Dr. George Rouskas for their valuable feedback.

5. REFERENCES

[1] D. Clark. The design philosophy of the DARPA internet protocols. ACM SIGCOMM Computer Communication Review, 18(4):106–114, 1988.
[2] J.H. Saltzer, D.P. Reed, and D.D. Clark. End-to-end arguments in system design. ACM Transactions on Computer Systems, 2(4):195–206, 1984.
[3] R. Braden, T. Faber, and M. Handley. From protocol stack to protocol heap: role-based architecture. ACM SIGCOMM Computer Communication Review, 33(1):17–22, 2003.
[4] J. Touch, Y. Wang, and V. Pingali. A Recursive Network Architecture. ISI Technical Report, 2006.
[5] A. Karouia, R. Langar, T. Nguyen, and G. Pujolle. SOA-Based Approach for the Design of the Future Internet. In CNSR '10: Proceedings of the 2010 8th Annual Communication Networks and Services Research Conference, pages 361–368, 2010.
[6] G. Rouskas, R. Dutta, D. Stevenson, I. Baldine, A. Wang, and M. Vellala. SILO Project. http://www.net-silos.net.
[7] A. Feldmann. Internet Clean-Slate Design: What and Why? ACM SIGCOMM Computer Communication Review, 37(3):59–64, 2007.
[8] D.D. Clark and D.L. Tennenhouse. Architectural considerations for a new generation of protocols. ACM SIGCOMM Computer Communication Review, 20(4):200–208, 1990.
[9] R. Jain. Internet 3.0: Ten Problems with Current Internet Architecture and Solutions for the Next Generation. In Proceedings of the Military Communications Conference (MILCOM 2006), 2006.

[10] P.E. Agre and I. Horswill. US National Science Foundation and the Future Internet Design. ACM SIGCOMM Computer Communication Review, 37(3):85–87, 2007.
[11] N. Hutchinson and L. Peterson. The x-kernel: An Architecture for Implementing Network Protocols. IEEE Transactions on Software Engineering, 17(1):64–76, 1991.
[12] A. Falk. GENI System Overview. Technical report, The GENI Project Office, 2008.

[13] A. Wang. The SILO Architecture: Exploring Future Internet Design. PhD thesis, North Carolina State University, 2010.
[14] The Renaissance Computing Institute (RENCI). The ORCA-BEN Project. http://geni-orca.renci.org.
[15] RENCI, North Carolina State University, University of North Carolina at Chapel Hill, Columbia University, and University of Houston. GENI-IMF. http://geni-imf.renci.org.
[16] R. Finlayson. Live555 Streaming Media. http://www.live555.com/liveMedia.
