Design and Performance Evaluation of a Rate Feedback Control Architecture for TCP/IP Networks

Abdul Aziz Mustafa, Mahbub Hassan and Sanjay Jha
School of Computer Science and Engineering
The University of New South Wales
Sydney NSW 2052, Australia
Email: {amustafa, mahbub, sjha}@cse.unsw.edu.au

Abstract

We present the design of a rate feedback architecture for TCP/IP networks. The architecture allows traffic sources to receive periodic feedback of available rates from the network using ICMP. The sources shape their traffic according to the feedback rate. The performance of the proposed architecture is evaluated using computer simulations. Our results demonstrate that the proposed rate feedback architecture can significantly improve TCP throughput and fairness when buffering capacity in the network is extremely limited. The architecture also improves packet loss rate, delay and delay jitter in the network.
1 Introduction
In the current Internet architecture, there is no rate feedback from the network to the traffic sources. The control of network traffic is achieved through the Transmission Control Protocol (TCP) implemented in the end systems. TCP shapes the source traffic by adjusting its transmission window size according to its estimate of the current network status. The strength of this end-system based approach to traffic control is its flexibility and ease of deployment: new TCP algorithms can be deployed gradually in the Internet without requiring any upgrade in the core network. The main drawback is that the sources cannot learn the exact state of the network, which leads to a less than "optimum" outcome. As a result, TCP throughput has been a subject of intense research over the last two decades, and many new TCP algorithms have been proposed to improve it. So far, TCP Vegas [1] is known to be the most successful algorithm, achieving up to 70% higher throughput than previous TCP algorithms.

The performance of TCP also depends on the buffering policy in the network. Drop-tail, the most widely deployed buffer management policy in the current Internet, drops an incoming packet when the buffer is full. It is now well established that drop-tail causes unfairness among contending TCP sources, even with the best possible TCP algorithms. The Internet Engineering Task Force (IETF) later recommended a new buffering policy, called Random Early Detection (RED) [2], which drops packets probabilistically before the buffer becomes full. RED significantly improves TCP fairness. The recommendation of RED is part of a general recommendation [3] to deploy more active algorithms and schemes in the network to address the throughput and fairness problems in the face of ever increasing and diversifying Internet traffic.

One key aspect of TCP Vegas and RED is the amount of buffering required in the network to achieve the claimed performance gains. If the available buffering is very small, these algorithms become ineffective. It is therefore useful to investigate additional control schemes which can achieve high throughput and fairness even with very small buffers. Our proposed rate feedback architecture is one such scheme; it can prevent the performance collapse of TCP Vegas for small buffers. Additional benefits of operating with very small buffers include low network delay and delay jitter.

The rest of the paper is organized as follows. We discuss related work in Section 2. Section 3 presents the proposed rate feedback architecture, followed by a simulation-based performance evaluation in Section 4. In Section 5, we present a high-level complexity analysis of the architecture. Some possible implementation options in the future Internet are considered in Section 6. Finally, we present our conclusions and future work in Section 7.
2 Related Work
Sisalem and Schulzrinne proposed an RTP/RTCP based rate feedback scheme, called Adaptive Load Service (ALS) [4]. However, ALS was designed for UDP-based multicast applications, and it is not clear how it could be applied to TCP connections. Li et al. [5] proposed a rate feedback scheme, called Selective Attenuation Feedback via Estimation (SAFE), in the context of Diffserv networks. The authors used a rather abstract network model to analyze the rate oscillation problem for the sources; no TCP performance results were reported. With his Performance Transparency Protocol (PTP) [6], Welzl designed a feedback architecture which allows end-system applications to retrieve various performance-specific information from the network routers, such as the path maximum transmission unit (MTU), output link bandwidth and current unused bandwidth. However, a performance evaluation of PTP was not reported.
3 IP Rate Control Architecture
Figure 1 illustrates the concept of end-to-end IP rate control (IPRC). Using the Internet Control Message Protocol (ICMP), a source periodically sends probe packets to the destination, and the destination returns these probe packets to the source. The intermediate routers compute the fair share for the flow and write it in the probe messages. When a probe packet returns to the source, it carries the bottleneck fair share of the end-to-end path, and the source shapes its rate according to this network feedback. Since it is difficult to trust all users in a commercial environment, network edge routers police the traffic of each flow to ensure that users do not violate the rate assigned to them.

It is worth noting that although IPRC is very similar to ATM's ABR flow control, there are several new challenges in designing such end-to-end flow control for IP networks. The challenges stem from the fact that IP networks are connectionless, with no virtual circuits set up. To implement IPRC, three entities, namely the IP hosts, the network edge routers and the core routers, must each play a distinct role. The detailed functional designs of these entities are presented below.
3.1 IP Host Architecture
The primary role of an IP host is to regulate traffic flows based on the fair rate (EFR) provided by the network. Traffic from an application is first classified into a flow based on the content of some portion of the packet header. For IPv4 [7], a flow is classified based on the source and destination addresses and port numbers, while in IPv6 [8], flow classification is achieved by examining the source and destination addresses and the flow label field. Each flow is then directed to a traffic shaper, where packets are temporarily buffered in a queue before being dispatched to the output link by the scheduler. The shaper modulates the departure rate of the flow to bring the traffic into compliance with its EFR. The EFR value provided by the network defines the shaping parameter for the flow and is communicated via the feedback handler using ICMP Backward Probe Messages (BPM), as shown in Figure 2.
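To make the host-side design concrete, the sketch below models a classifier and a pacing shaper in Python. It is an illustrative sketch only: the paper does not prescribe a shaper algorithm, and the class names, the 5-tuple flow key and the inter-packet spacing rule are our assumptions.

    from collections import deque

    class FlowShaper:
        """Paces packet departures so the flow's sending rate does not
        exceed the EFR fed back by the network (hypothetical pacing rule)."""
        def __init__(self, efr_bps: float):
            self.efr_bps = efr_bps        # current EFR in bits per second
            self.queue = deque()          # packets awaiting dispatch
            self.next_departure = 0.0     # earliest time next packet may leave

        def update_efr(self, efr_bps: float):
            # Called by the feedback handler whenever a BPM arrives.
            self.efr_bps = efr_bps

        def enqueue(self, packet: bytes):
            self.queue.append(packet)

        def dequeue(self, now: float):
            # Release a packet only once the pacing interval has elapsed.
            if self.queue and now >= self.next_departure:
                pkt = self.queue.popleft()
                self.next_departure = now + len(pkt) * 8 / max(self.efr_bps, 1e-9)
                return pkt
            return None

    def flow_key(src_ip, dst_ip, src_port, dst_port):
        """IPv4-style classification key (addresses and ports); an IPv6
        host would use the addresses and the flow label instead."""
        return (src_ip, dst_ip, src_port, dst_port)

    # One shaper per classified flow, as in Figure 2.
    shapers = {flow_key("10.0.0.1", "10.0.1.1", 5000, 80): FlowShaper(1_000_000)}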
3.2 Edge Router Architecture
At the network edge router, incoming traffic streams are classified into flows based on packet header fields, as in the IP hosts. Each flow is monitored and policed by the traffic policer, which estimates the current flow rate (CFR) from the source and ensures that the flow does not exceed its EFR. Packets from a flow that exceeds its EFR are discarded to prevent the flow from seizing more than its allocated fair share. Each time the router receives an ICMP Forward Probe Message (FPM), it writes the estimated CFR into the CFR field. For a given outgoing link, a queue monitor tracks the current queue length and inputs this variable to the feedback generator. The feedback generator computes an EFR for each flow so as to fairly distribute the available bandwidth among all active flows and to maintain the buffer's queue at a desired (preset) threshold level. The EFR is computed every time the edge router receives an ICMP FPM; the edge router writes its computed EFR into the EFR field of the FPM, which is then sent back into the traffic stream towards the next-hop core router. Figure 3 illustrates the functional design of the network edge router.
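As a concrete illustration of the policing path, the sketch below couples a Time Sliding Window (TSW) rate estimator (Table 2 lists a 10 ms TSW window for IPRC) with a simple conformance test against the EFR. The structure and constants are our assumptions, not the paper's specification.

    class TSWRateEstimator:
        """Time Sliding Window (TSW) average rate estimator."""
        def __init__(self, win_len_s: float = 0.010):   # 10 ms, as in Table 2
            self.win_len = win_len_s
            self.avg_rate = 0.0      # bytes per second
            self.t_front = 0.0       # time of the last update

        def update(self, pkt_bytes: int, now: float) -> float:
            bytes_in_win = self.avg_rate * self.win_len + pkt_bytes
            elapsed = max(now - self.t_front, 1e-9)
            self.avg_rate = bytes_in_win / (self.win_len + elapsed)
            self.t_front = now
            return self.avg_rate

    class EdgePolicer:
        """Per-flow edge policing: packets that push the flow's current
        rate (CFR) above its assigned EFR are discarded."""
        def __init__(self):
            self.meter = TSWRateEstimator()
            self.efr = float("inf")          # until first feedback arrives

        def conformant(self, pkt_bytes: int, now: float) -> bool:
            return self.meter.update(pkt_bytes, now) <= self.efr

        def stamp_fpm(self, fpm: dict) -> None:
            # Write the estimated CFR into the ICMP Forward Probe Message.
            fpm["cfr"] = self.meter.avg_rate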
[Figure 1: IP rate control using ICMP Probe Messages. IP data packets and FPMs travel along the forward path from the source, through the edge and core routers, to the destination; BPMs return along the backward path.]

[Figure 2: Functional design of the IP host. A flow classifier directs application traffic into per-flow traffic shapers (Flow 1 to Flow N), a scheduler dispatches packets towards the network edge router, and a feedback handler derives the shaping parameters from BPMs received from the network.]
3.3 Core Router Architecture
The functional architecture of a network core router is less complex than that of the edge routers, as there is no per-flow classification or state maintenance. A core router estimates only the aggregate rate of all flows, and traffic is sent directly to the forwarding engine, as shown in Figure 4. The key to this reduction in complexity lies in using the ICMP PM to carry the estimated CFR from the edge router, allowing the core router to estimate the bottleneck fair share (EFR) for each flow. The core router computes the EFR that can be allocated to a flow based on the flow's CFR, the available link bandwidth, and the aggregate rate of all flows currently traversing the router.
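A minimal sketch of the core router's rate meter, in the stateless spirit of CSFQ [10]: only aggregate measurements and a single fair-share estimate are kept, never per-flow state. The EWMA constant and the multiplicative fair-share update are simplifications we assume for illustration.

    class CoreFairShare:
        """Aggregate rate metering and fair-share estimation without
        per-flow state (simplified, CSFQ-inspired sketch)."""
        def __init__(self, capacity_bps: float):
            self.capacity = capacity_bps
            self.agg_rate = 0.0            # EWMA of aggregate arrival rate
            self.alpha = capacity_bps      # current fair-share (EFR) estimate

        def on_packet(self, pkt_bits: float, interval_s: float, w: float = 0.1):
            # Exponentially weighted moving average of the input rate;
            # interval_s is the inter-arrival time of the packet.
            self.agg_rate = (1 - w) * self.agg_rate + w * (pkt_bits / interval_s)

        def fair_share(self) -> float:
            if self.agg_rate <= self.capacity:
                self.alpha = self.capacity     # uncongested: no flow limited
            else:
                # Congested: scale the fair-share estimate down.
                self.alpha *= self.capacity / self.agg_rate
            return self.alpha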
3.4 ICMP Probe Message
The ICMP Probe Message (PM) is a special control packet that carries information related to a flow. The format of the ICMP PM is shown in Figure 5, and the meaning of each field is summarized in Table 1. An ICMP Forward PM (FPM) is periodically generated by the source of a flow, with the D bit initially set to '0' to indicate the forward direction towards the destination. When the FPM is received by an edge router, the router updates the CFR and EFR fields and sends the FPM back into the flow towards the next-hop core router. At this point, the EFR field contains the edge router's fair share for the flow. When a network core router receives the FPM, it reads the CFR to compute the approximate fair share (EFR) for the flow. The computed EFR is compared against
the current EFR carried in the FPM; if the value in the EFR field is higher than the router's computed EFR, it is replaced. When the writing process is complete, the updated FPM is transmitted to the downstream routers, where the process is repeated until the FPM eventually reaches the destination. At the destination, the FPM is returned to the source in the backward direction by setting the D bit to '1', turning it into an ICMP Backward PM (BPM) and closing the feedback loop. When the source receives the BPM, it examines the EFR field and regulates its current traffic rate accordingly.
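The end-to-end probe loop therefore keeps a running minimum of the per-hop fair shares. The sketch below traces one round trip of a probe; the dictionary fields follow Figure 5, but the handling logic is our reading of the text above.

    def router_process_fpm(fpm: dict, local_efr: float) -> None:
        # Keep the minimum fair share seen along the path: a router
        # overwrites the EFR field only if it is a tighter bottleneck.
        if fpm["efr"] > local_efr:
            fpm["efr"] = local_efr

    def destination_turnaround(fpm: dict) -> dict:
        fpm["d"] = 1              # flip the D bit: the FPM becomes a BPM
        return fpm

    # Example round trip: edge stamps CFR/EFR, two core hops refine the EFR.
    fpm = {"d": 0, "cfr": 0.0, "efr": float("inf")}
    fpm["cfr"], fpm["efr"] = 0.30e6, 0.45e6       # edge router stamp (bit/s)
    for core_efr in (0.40e6, 0.25e6):             # per-hop computed fair shares
        router_process_fpm(fpm, core_efr)
    bpm = destination_turnaround(fpm)
    assert bpm["efr"] == 0.25e6                   # source shapes to the bottleneck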
3.5 Rate Allocation Algorithm
Our proposed rate feedback architecture is modular, allowing any desired rate allocation algorithm, such as ERICA [9], to be used in the routers without affecting the design of the rest of the architecture. The one design criterion we emphasize here is the maintenance of flow states. It is desirable to avoid maintaining flow states in the core routers; therefore, one should select a rate allocation algorithm for the core routers which works effectively without requiring any flow state. For this reason, we have selected the Core-Stateless Fair Queueing (CSFQ) [10] algorithm to allocate rates in the core routers. In the edge routers, we do not have this scalability problem; hence we can use a rate allocation algorithm which maintains flow states. CSFQ can achieve fair allocation, but it has no explicit mechanism to control the queue length. For this reason, we employ a queue threshold based rate allocation algorithm in the edge routers, as suggested in [11], to minimize queue fluctuations around a desired threshold. In its simplest form, this algorithm first computes the available link bandwidth according to a proportional control law and then divides the available bandwidth by the total number of active flows:

EFR_i = max{0, [B + K(q_thresh - Q)] / F}    (1)

where B is the link capacity, K is the control gain, q_thresh is the desired queue length, Q is the current queue length and F is the number of active flows. It has been shown analytically in [11] that, for a stable system with minimum queue fluctuations, K should be set to the reciprocal of the sampling interval. Here, flow states are necessary to compute the total number of active flows. Note that dividing the available bandwidth by the total number of active flows is the simplest form of rate allocation; one may instead choose a max-min allocation.
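Equation (1) maps directly into code. A minimal sketch, with the unit convention (capacity and queue both measured in packets) being our own choice for consistency:

    def edge_efr(capacity_pps: float, q_thresh: float, q_len: float,
                 n_flows: int, k: float) -> float:
        # Equation (1): EFR_i = max{0, [B + K(q_thresh - Q)] / F},
        # with B in packets/s, queue lengths in packets, and K = 1/Ts
        # as recommended in [11].
        if n_flows == 0:
            return capacity_pps
        return max(0.0, (capacity_pps + k * (q_thresh - q_len)) / n_flows)

    # A 1 Mbps link carrying 1000-octet packets is 125 packets/s; with
    # q_thresh = 5 and K = 1.28 (Table 2), a queue of 12 packets shrinks
    # every flow's share until the backlog drains back to the threshold.
    print(edge_efr(capacity_pps=125.0, q_thresh=5, q_len=12, n_flows=10, k=1.28))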
[Figure 3: Functional design of the network edge router. A flow classifier directs incoming traffic into per-flow traffic policers (Flow 1 to Flow N), a scheduler serves the forwarding engine towards the core routers, and a queue monitor drives the feedback generator, which stamps FPMs and relays BPMs back towards the source.]

[Figure 4: Functional design of the network core router. Traffic from upstream routers passes a rate meter and goes directly to the forwarding engine; a queue monitor drives the feedback generator, which processes FPMs and relays BPMs between downstream and upstream routers.]

[Figure 5: ICMP probe message format (12 octets): Type (40), Code (0) and Checksum, followed by the Source and Destination Port Numbers, the D and R bits, and the CFR and EFR fields.]
Table 1: ICMP Probe Message Fields

Field           Size      Description
Type (40)       1 octet   Type 40 identifies the ICMP probe message.
Code (0)        1 octet   Code 0 indicates the ICMP PM may be received from a router or a host.
Checksum        2 octets  16-bit one's complement of the one's complement sum of the ICMP message, starting with the ICMP Type field.
Src. Port No.   2 octets  Source port number of the TCP or UDP protocol.
Dest. Port No.  2 octets  Destination port number of the TCP or UDP protocol.
Direction (D)   1 bit     Set to 0 for a forward and 1 for a backward direction ICMP PM.
Reserved (R)    3 bits    Currently unused.
CFR             14 bits   14-bit binary floating point representation (5-bit exponent, 9-bit mantissa).
EFR             14 bits   14-bit binary floating point representation (5-bit exponent, 9-bit mantissa).
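The 14-bit rate fields pack a wide range of rates into a small header. The paper does not specify the exact bias or normalization, so the sketch below assumes the simplest interpretation, value = mantissa x 2^exponent:

    EXP_BITS, MANT_BITS = 5, 9

    def encode_rate(rate: int) -> int:
        # Pack a non-negative integer rate into 14 bits: value ~= m * 2**e.
        e = 0
        while rate >= (1 << MANT_BITS):   # shrink until mantissa fits in 9 bits
            rate >>= 1                    # low-order bits are lost (rounds down)
            e += 1
        if e >= (1 << EXP_BITS):
            raise OverflowError("rate too large for the 14-bit field")
        return (e << MANT_BITS) | rate

    def decode_rate(field: int) -> int:
        e, m = field >> MANT_BITS, field & ((1 << MANT_BITS) - 1)
        return m << e

    # 1000 (e.g. kbit/s) encodes as e=1, m=500 and decodes exactly.
    assert decode_rate(encode_rate(1000)) == 1000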
4 Simulations
To study the effectiveness of our proposed architecture, we implemented IPRC and performed several simulation experiments using Network Simulator 2 (Ns-2.1b7) [12]. A total of 30 simulations were carried out, with every simulation replicated several times using different buffer sizes (10 to 50 packets) in the routers.
[Figure 6: Simulation model (parking-lot topology). FTP sources Ftp1-Ftp10 join through ingress edge routers E1-E5; core routers C1-C4 are connected in tandem by 1 Mbps, 20 ms links, with 10 Mbps (1 ms) and 1 Mbps (5 ms) access links, and traffic exits via egress edge router E6 towards destinations Ftp1'-Ftp10'.]
Table 2: Simulation Parameters

TCP Vegas Parameter    Value
window_                30 (packets)
v_alpha_               1
v_beta_                3
packetSize_            1000 (octets)

RED Parameter          Value
q_weight_              0.002
dqthresh_              50
Physical buffer size   300 (packets)
Maxthresh_             10, 20, 30, 40, 50 (packets)
Minthresh_             4, 8, 12, 16, 20 (packets)

IPRC Parameter         Value
TSW window length      10 (msec)
Shaper buffer size     10 (packets)
Policer buffer size    2 (packets)
Gain, K (1/Ts)         1.28
N                      16
ICMP PM size           32 (octets)
Physical buffer size   10, 20, 30, 40, 50 (packets)
qthresh                5 (packets)
4.1 Simulation Model
The model consists of 10 FTP sources connected to the network via ingress edge routers (E1-E5), as shown in Figure 6. Each FTP source simulates a large file transfer and always has packets to send. All packets exit the network through an egress edge router (E6) towards the destination end systems. The topology is broadly referred to as a parking-lot scenario [13], in which different sources join the network at different points, resulting in different propagation delays to the bottleneck link. These differing propagation delays have implications for throughput and fairness.
4.2 Simulation Parameters
All FTP sources use TCP Vegas, as implemented in the Ns2.1b7 distribution, as the transport layer protocol. The main reason for choosing TCP Vegas is to investigate whether our proposed rate feedback can achieve any further improvement over TCP Vegas. The two preset thresholds (alpha and beta) at the sender side are set according to the recommended values [1]. We have simulated both drop-tail and RED for managing the buffers in the routers. For RED, Maxthresh is set two and a half times larger than Minthresh, with the physical buffer size fixed at 300 packets, large enough to prevent packet drops due to buffer overflow; all other parameters are set according to the recommended values specified in [2]. For IPRC, qthresh is kept fixed at 5 packets (i.e. half of the initial physical buffer size), while the physical buffer size is varied between 10 and 50 packets. All results are collected after the FTP connections have reached their steady states. To avoid any synchronising effects, the starting times of the FTP sources are staggered: FTP1 to FTP10 start sending packets at times 10 to 100 seconds respectively, with a 10-second interval between successive sources. The remaining simulation parameters are summarized in Table 2.
[Figure 7: Throughput of the individual FTP sources (Ftp1-Ftp10) as a function of time, for (a) Drop-tail, (b) RED and (c) IPRC.]
4.3 Simulation Results
We measure six performance metrics: throughput, fairness, packet loss rate, queue length, delay and jitter. The first two are direct measures of TCP performance. The fairness index (FI) is computed using the formula given in [14]. The remaining four metrics are related to each other and primarily measure the loss and delay performance of any cross traffic sharing the link with the TCP connections.
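For n flows with measured throughputs x_1, ..., x_n, the index of [14] is

FI = (x_1 + ... + x_n)^2 / [n (x_1^2 + ... + x_n^2)]

which equals 1 when all flows receive identical throughput and falls towards 1/n as the allocation becomes maximally unfair.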
[Figure 8: (a) Queue length, (b) Average delay and (c) Packets dropped as a function of time, for drop-tail, RED and IPRC.]
First, we compare the performance of our proposed IPRC architecture with drop-tail and RED for a very small buffer of 10 packets. Figure 7 shows the individual throughput of all 10 FTP sources for drop-tail, RED and IPRC. The graphs clearly demonstrate the superiority of IPRC in terms of fairness: drop-tail performs worst, as expected, and although RED achieves good fairness, rate control performs even better. Figure 8 shows the queue length (at router C4), average queuing delay and packet loss rate for all three schemes. IPRC has the lowest queue length and the smallest fluctuations, leading to the smallest delay, jitter and packet loss rate. In fact, with IPRC in place, packets were lost only at the start; no packets were lost thereafter, owing to the tight control of the queue length around the desired threshold of 5 packets. Figures 7 and 8 thus show that our proposed IPRC scheme can function effectively with very small buffers in the network, while drop-tail and RED perform poorly even with TCP Vegas implemented in the end systems.
[Figure 9: (a) Goodput and (b) Fairness index as a function of buffer size, for drop-tail, RED and IPRC.]
Next, we present results for increasing buffering in the network, to observe the effect of buffering on drop-tail and RED. Figure 9 shows the combined TCP throughput (goodput) of all FTP sources and the fairness index as a function of buffer size. We observe the following. If IPRC is not implemented, TCP Vegas performs effectively only when large buffering is available in the network. With drop-tail, fairness remains below 80% even with a buffer size of 50 packets. Good, consistent fairness (above 95%) can be achieved with RED for small buffer sizes, but at the expense of lower throughput; throughput for RED remains below 95% for buffer sizes smaller than 20 packets. The buffer size has no impact on fairness for RED because we keep the same ratio of maximum to minimum thresholds for all buffer sizes. With IPRC, TCP Vegas achieves higher throughput and fairness for both large and small buffers; the most pronounced effect of IPRC is on fairness when compared with drop-tail. Figure 10 evaluates the performance of the IPRC scheme in terms of delay and loss in the network. We observe two important results. First, delay and loss are significantly lower with IPRC in place, especially for large buffer sizes. Second, the buffer size has no impact on delay and loss when IPRC is used, because we set the same threshold of 5 packets irrespective of the physical buffer size. For a very small buffer size, drop-tail and RED can achieve low delay, but at the expense of a high packet loss rate; IPRC achieves low delay for small buffers without any impact on packet loss.
[Figure 10: (a) Queue length, (b) Delay and (c) Packet loss as a function of buffer size, for drop-tail, RED and IPRC.]
5 Complexity Issues and Challenges
Several important issues need to be addressed before IPRC can be deployed in the Internet. The first arises from the fact that IP is connectionless and based purely on datagram forwarding: packets are forwarded to the next node based on routing entries maintained by the routers. These entries are dynamic and may change when links or routers fail and adaptive routing finds an alternate shortest path. Packets from the same source-destination pair may therefore not always follow the same path through the network, which makes it difficult for routers running IPRC to allocate bandwidth effectively for a flow along its end-to-end path. IPRC is currently designed under the assumption that routing paths do not change unless links or nodes fail unpredictably; when this occurs, an alternate path is established and packets follow the new path until the original path is restored. Continuing advances in the robustness and reliability of networking hardware and software should minimize, if not eliminate, such failures. As an alternative solution, a mechanism called route pinning could be incorporated in routers, so that the route associated with a flow is "pinned" and the routing protocol does not change it while it remains viable. Such a mechanism has been recommended for path setup with the resource reservation protocol (RSVP) in the Integrated Services (IntServ) architecture [15].

Secondly, providing IPRC requires several modifications to the existing IP infrastructure, adding complexity at all three entities. At the IP hosts, traffic is classified into flows and shaped according to the EFR provided by the network. Within the network, much of the complexity is pushed towards the edge routers, leaving minimal functionality in the core routers, both to reduce their processing burden and to avoid scalability problems. Edge routers perform flow classification to differentiate flows and to provide a different feedback rate (EFR) to each flow based on the computed bottleneck fair share along the end-to-end path; they also perform traffic policing to protect the network and its resources from non-adaptive traffic sources and to maintain fairness among all flows. The only additional functionality required at the core routers is the processing of ICMP probe messages and the computation of EFRs. Although the functions described here for IP hosts and edge routers are not present in the existing Internet infrastructure, they are likely to become standard features in the next-generation IP architecture. For example, classification of traffic into behavior aggregates (BA) and traffic conditioning have been proposed by the IETF DiffServ WG to support per-hop behavior (PHB) forwarding in the Differentiated Services architecture [16]. We therefore argue that much of the complexity required to implement IP rate control will not be specific to IPRC, making IPRC easier to implement in future IP networks.
6 IPRC in Diffserv and Intserv Models
Although the future Internet will implement quality of service (QoS) mechanisms such as Diffserv and Intserv, the default service in a QoS-enabled Internet will still be best effort. Since the higher service classes will attract hefty charges, best effort is expected to remain the most widely used service; IPRC will therefore still be relevant in the future Internet. The question is where in the service stack IPRC should be implemented. One option is to implement IPRC for the default best effort service itself; with this option, the future Internet would no longer have a service resembling the best effort service of the current Internet. Another option is to leave the current best effort service, without network-level flow control, as the default, and to introduce best effort with IPRC as a superior (more expensive) best effort service. Such a service structure would be similar to the UBR (resembling current best effort) and ABR (best effort with rate control) services of the ATM network paradigm. Whichever option is selected, IPRC fits both the Diffserv and Intserv models of the future Internet. IPRC would enable a new Internet service: one that gives users fast and fair access to any unused bandwidth in the core of the network at a significantly lower cost than the guaranteed services.
7 Conclusion and Future Work
We have designed a rate feedback architecture for TCP/IP networks. Using computer simulations, we have shown that rate feedback can significantly improve the performance of TCP/IP applications when the available buffering capacity in the network is extremely limited. We are currently implementing the proposed architecture in the FreeBSD kernel; the objective of this experiment is to study the performance of rate feedback in a real network environment and to uncover any issues related to its implementation. Preliminary results (not presented in this paper) show that rate feedback is quite feasible in TCP/IP networks. Future work will involve designing more scalable solutions and testing the architecture in large-scale networks, including extending the current IPRC to the Intserv and Diffserv service models.
References

[1] L. Brakmo and L. Peterson, "TCP Vegas: End to End Congestion Avoidance on a Global Internet", IEEE Journal on Selected Areas in Communications, Vol. 13, No. 8, pp. 1465-1480, October 1995.
[2] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance", IEEE/ACM Transactions on Networking, Vol. 1, No. 4, pp. 397-413, August 1993.
[3] B. Braden et al., "Recommendations on Queue Management and Congestion Avoidance in the Internet", RFC 2309, April 1998.
[4] D. Sisalem and H. Schulzrinne, "ALS: An ABR-Like Service for the Internet", Fifth IEEE Symposium on Computers and Communications (ISCC 2000), July 2000.
[5] N. Li, S. Park and S. Li, "A Selective Attenuation Feedback Mechanism for Rate Oscillation Avoidance", Computer Communications, Vol. 24, No. 1, pp. 19-34, January 2001.
[6] M. Welzl, "PTP: Better Feedback for Adaptive Distributed Multimedia Applications on the Internet", 19th IEEE International Performance, Computing and Communications Conference (IPCCC 2000), Phoenix, Arizona, USA, 20-22 February 2000.
[7] J. Postel, "Internet Protocol", STD 5, RFC 791, September 1981.
[8] S. Deering and R. Hinden, "Internet Protocol, Version 6 (IPv6) Specification", RFC 2460, December 1998.
[9] S. Kalyanaraman et al., "The ERICA Switch Algorithm for ABR Traffic Management in ATM Networks", IEEE/ACM Transactions on Networking, Vol. 8, No. 1, pp. 87-98, February 2000.
[10] I. Stoica, S. Shenker and H. Zhang, "Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks", ACM SIGCOMM, pp. 118-130, September 1998.
[11] M. Hassan and H. Sirisena, "Optimal Control of Queues in Computer Networks", IEEE International Conference on Communications, Helsinki, Finland, 11-14 June 2001.
[12] Network Simulator 2 (Ns2.1b7). Available via http://www.isi.edu/nsnam/ns/.
[13] F. Bonomi and K. Fendick, "The Rate-based Flow Control Framework for the Available Bit Rate ATM Service", IEEE Network Magazine, Vol. 9, No. 2, pp. 25-39, April 1995.
[14] R. Jain, "The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation and Modeling", John Wiley and Sons, Inc., New York, 1991.
[15] R. Braden, D. Clark and S. Shenker, "Integrated Services in the Internet Architecture: An Overview", RFC 1633, June 1994.
[16] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services", RFC 2475, December 1998.