A Simulation-Based Survey of Active Queue Management Algorithms

Dhulfiqar A. Alwahab and Sándor Laki
Faculty of Informatics, Eötvös Loránd University, Budapest, Hungary
+36 1 372 2869
[email protected], [email protected]

ABSTRACT
Active Queue Management (AQM) has been used in routers for early detection of traffic congestion at the bottleneck link. In AQM, packets can be dropped before the buffer becomes full, based on many different parameters or conditions. Many algorithms have been proposed to efficiently control congestion in the network. The scope of this paper is to analyze the most recent and well-known AQM algorithms and to evaluate their performance under heavy hybrid traffic load (TCP and UDP flows), using the ns-3 network simulator. For the comparative analysis, various metrics are taken into account, including queue length, jitter, end-to-end delay, packet delivery ratio, and page load time. Results show that one of the AQM algorithms can also be used for QoS purposes.

CCS Concepts
• Networks → Network performance evaluation → Network performance analysis

Keywords
AQM; network simulation; traffic flow; traffic congestion; ns-3.

1. INTRODUCTION
In the past decade, the Internet has gone through an enormous evolution. Novel applications have emerged with different and sometimes orthogonal requirements: some have high bandwidth demand, while others require low loss, low delay, or a combination of these aspects. Satisfying all the requirements of novel applications is challenging and requires novel solutions. In addition, different user behaviors also affect network performance. Aggressive Internet users have to be handled so that good quality of service can be provided to normal subscribers; in general, the network should keep low-rate traffic active in the presence of high-rate traffic.

Many techniques have been proposed to measure the available resources or to manage Internet traffic, and flow tracing is one of the major approaches [1]. Unfortunately, unlike delay, packet loss and other QoS metrics, flow tracing is not easy to compute or evaluate. Packets travel through different links and routers together with packets of other applications and wait in router queues that are handled by different active queue management schemes. Furthermore, the applications in use vary between connection-oriented (TCP) and connectionless (UDP) ones. In other words, flow tracing requires tracking the packets of different applications at each node of the network, marking packets, or notifying sources to retransmit dropped packets in the case of TCP. After flow tracing, flow improvement may be applied to manage the available resources, which is challenging because Internet traffic fluctuates over time as a result of user and protocol behavior. This fluctuation has a negative effect on queue sizes inside network devices, so it is necessary to handle it in order to provide good service to users. Queue management can control this behavior using AQM. There are two different ways to deal with traffic [2]: first, "queue management", which controls the length of queues by dropping packets when necessary; second, "scheduling", which determines which packet to send next and is primarily used to manage the allocation of bandwidth among flows. While these two mechanisms are closely related, they address rather different performance issues. In the literature, many AQM algorithms have been proposed to control queue behavior in traffic management and flow treatment; the most well-known are Drop-Tail, RED, CoDel, PIE and, recently, PPV. Most of the proposed algorithms focus on providing fairness or on reducing computation overhead.

In this work, we present a simulation-based comparative study of recent AQM methods. Simulation plays a vital role in research and education, since it helps us explore systems that are too complex for mathematical analysis. It also offers an easy way to reproduce results, especially in dynamic network environments where the topology changes frequently. Simulation is well suited to analyzing and testing new algorithms and comparing them with existing ones, and it makes scaling the network up or down easy. In this paper, we use ns-3 [1], a discrete-event network simulator released in 2008. This simulator offers a high degree of realism, including kernel implementations; real application code can also be integrated with virtual machine environments or test-beds [2]. AQM models are added in every new version, and since ns-3 is an open-source simulator it offers clear advantages to researchers.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ICCBN 2018, February 24-26, 2018, Singapore, Singapore © 2018 Association for Computing Machinery. ACM ISBN 978-1-4503-6360-0/18/02…$15.00

DOI: https://doi.org/10.1145/3193092.3193106


These advantages include the possibility for researchers to contribute code, to report bugs, and to offset the lack of documentation that non-open commercial simulators often suffer from. Many research works have shown that protocol entities designed in ns-3 are close to their real-computer counterparts. In this paper, Drop-Tail queuing, RED, CoDel, PIE and PPV are compared by simulating a cable network (dumbbell) topology. The comparison is based on page load time, jitter, loss rate, queue length and packet delay.

The remainder of this paper is organized as follows: Section 2 reviews the most well-known AQM algorithms. Section 3 describes bufferbloat and the motivation for using AQM. Section 4 details the simulation setting. Section 5 presents the simulation results. Section 6 shows how one of the AQM algorithms can be used to provide QoS. Section 7 concludes with a discussion.

2. LITERATURE SURVEY
Internet service degradation can be caused by the lack of packet forwarding or packet manipulation. Queue management mechanisms are therefore in high demand in router architectures, to deal with congestion and bottleneck problems and to preserve the service for all users. In contrast to end-point congestion control mechanisms such as TCP, AQM is designed to be implemented in routers rather than in end nodes. The idea is based on the fact that routers can detect congestion earlier, since they can distinguish between the types of delay (propagation delay or queuing delay) and usually have a complete and immediate view of their own queue behavior. AQM algorithms have been classified [3] according to the factors on which the router makes the drop decision for incoming packets, yielding four different strategies: average queue length-based, packet loss and link utilization-based, class-based, and control theory-based. The main goal of AQM is to keep the overall throughput high while at the same time keeping the average queue size as low as possible. The scope of this work is to examine the most well-known algorithms classified as average queue length-based queue management (Drop-Tail, RED, CoDel, PIE and PPV). All algorithms in this class provide a mechanism for detecting congestion early and for dealing with it by dropping packets before the router's queue becomes full.

The first algorithm for dealing with queue management in routers was Drop-Tail queuing [4], in which the length of each queue is set to a fixed value, the maximum queue length. Packets from all users are accepted (enqueued) until this maximum is reached; all further incoming packets that find the queue full are dropped, regardless of their type (TCP or UDP). When the queue length decreases as packets are transmitted (dequeued), incoming packets are enqueued again, and so on. This method suffers from several problems: TCP applications by their nature can rapidly fill the queue up, and high-rate (UDP) applications can sew up queue space, causing a high loss rate for the flows of other applications; Drop-Tail also increases delay, since the queue can stay full for a long time. In contrast to Drop-Tail, RED was proposed to drop packets probabilistically based on the queue length rather than a fixed maximum: as the average queue length increases, the probability of dropping packets also increases, and vice versa. Specifically, the RED algorithm consists of two main parts: queue length estimation, originally based on an exponentially weighted moving average (later works used other methods to enhance performance), and the packet drop decision, which determines when incoming packets start to be dropped. RED requires two critical parameters, the minimum and maximum thresholds, and works as follows: the average queue length is compared to these thresholds; when it is below the minimum threshold, all packets are accepted; when it lies between the two thresholds, incoming packets are marked (dropped) with a probability Pi, which is roughly proportional to the connection's share of the bandwidth at the router; and when the average queue length exceeds the maximum threshold, all incoming packets are dropped. RED effectively controls the average queue size, minimizes delay and loss ratio, absorbs bursts of packets and utilizes the links. RED works better than the Drop-Tail algorithm, but it also has shortcomings that prevent it from covering all network requirements: 1) it measures congestion based only on the queue length, which is a poor indicator of the cause of congestion; 2) it does not perceive information about the number of flows or sources sharing the bottleneck (the non-responsive flow problem); 3) it demands a variety of parameters to deal with different kinds of congestion; and 4) it works well only when there is a sufficient amount of buffer space and it is correctly parameterized. Several implementations and research works have modified or enhanced RED's performance [5][6], among others.
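As an illustration of the mechanism described above, the following minimal sketch shows a RED-style drop decision in C++. It is a simplification for clarity, not the paper's code: real implementations (e.g. ns-3's RedQueueDisc) also space out drops with a packet counter, and the maxP constant here is an assumed value.

#include <cstdlib>

// Minimal RED-style drop decision (illustrative sketch, not the paper's
// code; maxP is an assumed constant). Drop-Tail, by contrast, would simply
// drop whenever the instantaneous queue is full.
struct RedState {
    double avg   = 0.0;    // EWMA of the queue length (packets)
    double qw    = 0.002;  // EWMA weight, as configured in Section 5
    double minTh = 5.0;    // minimum threshold (packets)
    double maxTh = 15.0;   // maximum threshold (packets)
    double maxP  = 0.02;   // assumed marking probability at maxTh
};

// Returns true if the arriving packet should be dropped.
bool RedOnEnqueue(RedState &s, int instantaneousQueueLen) {
    // 1) Queue length estimation: exponentially weighted moving average.
    s.avg = (1.0 - s.qw) * s.avg + s.qw * instantaneousQueueLen;

    // 2) Drop decision based on the averaged length.
    if (s.avg < s.minTh)  return false;   // below minTh: accept everything
    if (s.avg >= s.maxTh) return true;    // above maxTh: drop everything
    // Between the thresholds: drop probability grows linearly with avg.
    double p = s.maxP * (s.avg - s.minTh) / (s.maxTh - s.minTh);
    return (std::rand() / (double)RAND_MAX) < p;
}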

Almost every work focused on queue length and buffer counts until the Controlled Delay (CoDel) algorithm was proposed [7]. CoDel, released in 2012, manages the queue by relying on packet latency. The idea behind CoDel is to compute, for each arriving packet, the sojourn time: the amount of time the packet waits in the queue before leaving. Based on the sojourn time, CoDel decides in the dequeue process whether the packet should be dropped. CoDel starts dropping packets when the sojourn time exceeds the target value (5 ms in standard CoDel); the drop rate starts at one packet per interval (initially 100 ms) and increases as long as the sojourn time remains above the target. This is called the drop state of CoDel. CoDel timestamps each packet on ingress (enqueue) and compares this with the time when the packet egresses (dequeue) in order to determine the sojourn time (queue latency), which is the main factor in the drop decision. If the sojourn time is below the target, packets are forwarded without any further processing; otherwise the algorithm enters the drop state, sending congestion signals and dropping packets at a low rate that increases over time. During the drop state, CoDel drops the packet from the head of the queue and shortens the interval to the next drop (in standard CoDel, inversely proportionally to the square root of the number of drops) [8]. This design accommodates bursty traffic without immediately triggering packet loss. Additionally, CoDel drops from the head instead of the tail so that the TCP congestion signal reaches the sender sooner, which helps to get the queue depth under control quickly. CoDel allows packets to enter and wait in the device queue as long as the queue build-up is short-lived and the queue drains to an almost empty state; if the buffer does not drain on its own, CoDel signals TCP congestion to force it to drain. CoDel has the advantage that it does not need to be tuned to the subscriber data rate (it is essentially parameterless), and it adapts to dynamically changing link rates without a negative impact on utilization. Because CoDel relies on sojourn time (per-packet timestamps) for the drop decision, it increases overhead, and it lacks the ability to perceive individual flows around the target value.
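The dequeue-side logic of CoDel can be sketched as follows. This is a condensed, illustrative version of the control law from [8]; the full algorithm also handles leaving and re-entering the drop state more carefully.

#include <cmath>
#include <cstdint>

// Minimal CoDel-style dequeue logic (illustrative sketch, not ns-3 code).
struct Packet { uint64_t enqueueTimeUs; /* payload omitted */ };

struct CoDelState {
    uint64_t targetUs   = 5000;    // 5 ms target sojourn time
    uint64_t intervalUs = 100000;  // 100 ms initial interval
    uint64_t firstAboveTimeUs = 0; // when sojourn first exceeded target
    uint64_t nextDropTimeUs   = 0; // next scheduled drop while dropping
    uint32_t dropCount = 0;
    bool dropping = false;
};

// Called for each packet leaving the queue; returns true to drop it.
bool CoDelOnDequeue(CoDelState &s, const Packet &p, uint64_t nowUs) {
    uint64_t sojournUs = nowUs - p.enqueueTimeUs;  // head-of-queue delay
    if (sojournUs < s.targetUs) {                  // below target: all good
        s.firstAboveTimeUs = 0;
        s.dropping = false;
        return false;
    }
    if (s.firstAboveTimeUs == 0) {                 // first time above target
        s.firstAboveTimeUs = nowUs + s.intervalUs; // give the burst a chance
        return false;
    }
    if (!s.dropping && nowUs >= s.firstAboveTimeUs) {
        s.dropping = true;                         // enter the drop state
        s.dropCount = 1;
        s.nextDropTimeUs = nowUs + s.intervalUs;
        return true;                               // drop from the head
    }
    if (s.dropping && nowUs >= s.nextDropTimeUs) {
        ++s.dropCount;                             // control law: interval/sqrt(count)
        s.nextDropTimeUs =
            nowUs + (uint64_t)(s.intervalUs / std::sqrt((double)s.dropCount));
        return true;
    }
    return false;
}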



After CoDel, another algorithm called PIE (Proportional Integral controller Enhanced) was proposed, which can effectively control the average queue latency to a target value; its design does not require per-packet timestamps, so it incurs very little overhead [9]. PIE combines advantages of RED and CoDel: like RED, it randomly drops incoming packets when congestion occurs, it is easy to implement and it scales to high speeds; like CoDel, it relies on queuing latency rather than queue length for congestion detection. PIE can also quantify the congestion level, using a range of queuing latencies, to decide how to respond. PIE is a multi-parameter algorithm; the parameters are derived from control theory and can be self-tuned to enhance system performance. The algorithm treats TCP congestion control as a linear control system that responds when congestion appears, and control theory is used to oversee the queue size and adjust the drop probability. The control system consists of three parts, reflecting the fact that most real traffic is not just long bulk TCP transfers: a) random dropping at enqueue: the drop probability is computed and packets are dropped randomly according to it, without any timestamping; b) periodic drop-probability update: the drop probability is recalculated based not only on the current queuing delay but also on the direction in which the delay is moving, measured by subtracting the old delay from the current one. PIE requires two explicit parameters: α, which determines the effect of the current latency on the drop probability, and β, which specifies the amount of additional adjustment depending on whether the latency is trending up or down. The drop probability becomes stable when the difference between the current and the old delay is zero and the latency equals the reference delay; the relative weight of α and β determines the final balance between latency and latency jitter; c) dequeue-rate estimation: the rate of a network link is not constant but varies with link-capacity fluctuations and with queues sharing the same link, and several methods can be used to estimate it. The advantages of PIE are that it adds no per-packet overhead (no timestamping) and that it drops at the tail, which is considered simpler than head drop.
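The core of PIE, the periodic drop-probability update driven by the latency error and its trend, can be sketched in a few lines. The α, β and target values below are the ones used in the simulations of Section 5; the real algorithm [9] additionally auto-scales the weights and handles burst allowance.

// Minimal PIE-style periodic drop-probability update (illustrative
// sketch; parameter values follow the settings used in Section 5).
struct PieState {
    double dropProb    = 0.0;
    double alpha       = 0.125;  // weight of the current latency error
    double beta        = 1.25;   // weight of the latency trend
    double targetSec   = 0.020;  // desired queue delay: 20 ms
    double oldDelaySec = 0.0;
};

// Called periodically; curDelaySec is the estimated queuing latency,
// obtained from queue length divided by the estimated dequeue rate.
void PieUpdateProbability(PieState &s, double curDelaySec) {
    double p = s.alpha * (curDelaySec - s.targetSec)      // latency error
             + s.beta  * (curDelaySec - s.oldDelaySec);   // latency trend
    s.dropProb += p;
    if (s.dropProb < 0.0) s.dropProb = 0.0;  // keep probability in [0, 1]
    if (s.dropProb > 1.0) s.dropProb = 1.0;
    s.oldDelaySec = curDelaySec;
}

// At enqueue, a packet is then dropped with probability s.dropProb,
// without any per-packet timestamping.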

Recently, the concept of Per Packet Value (PPV) [10] has been introduced, in which each packet carries its own value. This approach lets the network operator control resource sharing among users based on a pre-assigned continuous value called the Packet Value (PV). The PPV algorithm consists of two main parts: first, packets are marked at the edge of the network (where they enter it); second, packets are scheduled and dropped at congestion solely based on their PV. In contrast to CoDel's per-packet timestamping, this algorithm only requires marking packets once, at the network edge, and the attached PV does not change for the lifetime of the packet. When congestion occurs at the bottleneck, the drop decision is based purely on this value: packets with the lowest values are dropped first. A challenge in this algorithm is that packets may be dropped from the tail, middle or head of the queue. The value attached to a packet may depend on flow-level, application-level or user-specific information, and packets belonging to the same flow are marked according to the operator policy. The operator policy in PPV is described by a throughput-value function (TVF) that specifies the packet value for any sending rate. PPV defines a parameter called the congestion threshold value (CTV): only packets whose PV is larger than the CTV are transmitted, and the probability of losing a packet decreases as its PV increases. PPV adds overhead to the network through packet marking, like CoDel, and works based on parameters, like RED; however, it can deal with different levels of congestion because the CTV is not a static value but varies as a consequence of several factors (e.g. the available channel capacity and the amount of offered traffic). Additionally, PPV does not need to maintain per-flow state, since it only uses the PV of incoming packets.
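A minimal sketch of the PPV drop decision at a congested bottleneck is shown below. It is a hypothetical illustration of the concept, not the authors' implementation: the queue simply keeps the highest-valued packets, and the CTV emerges as the value of the last packet that still fits.

#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative PPV-style bottleneck sketch (hypothetical structure):
// packets carry a value assigned at the edge, and under congestion the
// lowest-valued packets are sacrificed, wherever they sit in the queue.
struct ValuedPacket { uint32_t pv; /* payload omitted */ };

// Drop enough of the lowest-valued packets to fit the queue into
// 'capacity' slots; the effective CTV emerges from the current mix.
void EnforceCapacity(std::vector<ValuedPacket> &queue, size_t capacity) {
    if (queue.size() <= capacity) return;  // no congestion, nothing to do
    if (capacity == 0) { queue.clear(); return; }
    // Sort a copy of the packets by value to find this round's cutoff.
    std::vector<ValuedPacket> sorted = queue;
    std::sort(sorted.begin(), sorted.end(),
              [](const ValuedPacket &a, const ValuedPacket &b) {
                  return a.pv > b.pv;
              });
    uint32_t ctv = sorted[capacity - 1].pv;  // value of the last packet kept
    // Keep packets at or above the CTV, preserving their original order.
    std::vector<ValuedPacket> kept;
    for (const ValuedPacket &p : queue)
        if (p.pv >= ctv && kept.size() < capacity) kept.push_back(p);
    queue.swap(kept);
}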

3. BUFFERBLOAT AND AQM
Networks are normally built from different types of internetwork devices connected by different links; it is very common to have an ingress link to a router with 1 Gb/s capacity and a 100 Mb/s egress link [11]. Buffers in these devices are necessary to keep packet transport safe and to maintain good service for all the different types of users. One could easily think that more or larger buffers are better, since all incoming packets can be kept for later transmission (no packet loss). That would be true if there were no packet latency and no real-time or connection-oriented (TCP) applications. TCP traffic can be considered special, since its transmission rate responds to congestion. In a nutshell, TCP increases its sending rate continuously until it detects a packet loss, at which point it halves the sending rate and starts increasing again until the next loss is detected, and so on. The result of this behavior is the well-known TCP sawtooth. Packets can be lost for many reasons, but in wired networks losses are typically caused by overflowing buffers (bufferbloat). This process shows how TCP adjusts itself to match the bottleneck of the network. Unfortunately, it makes other low-rate applications, such as web browsing, suffer from high latency and poor performance, and it destabilizes non-TCP traffic such as real-time applications (e.g. video conferencing or VoIP). Many ideas have been suggested to address this problem; [11] lists some of them. One technique is to rely on packet delay rather than loss in TCP, meaning no more loss-based congestion control. Another technique is to tune the buffers, setting the upstream buffers to a large size; this approach was suggested for cable modems. A third technique involves Quality of Service (QoS), providing isolated buffers for real-time and bulk TCP traffic; it faces some challenges, since it requires classifying or filtering the incoming traffic and shaping the traffic of each service assigned to a single user. The fourth technique is to control the buffer size and to drop packets, based on some algorithm, before the buffer gets full. This mechanism of dropping packets at the router before its queue becomes full is called active queue management; with it, all nodes in the network can respond to congestion before buffers are overwhelmed.
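The TCP sawtooth described above can be reproduced with a toy additive-increase/multiplicative-decrease loop; the constants below are illustrative and stand in for a bottleneck buffer that overflows.

#include <cstdio>

// Toy AIMD loop reproducing the TCP sawtooth described above
// (illustrative constants; real TCP also has slow start, timeouts, etc.).
int main() {
    double cwnd = 1.0;                // congestion window (packets)
    const double bufferLimit = 64.0;  // loss occurs when the bottleneck
                                      // buffer overflows (bufferbloat)
    for (int rtt = 0; rtt < 200; ++rtt) {
        if (cwnd >= bufferLimit) {
            cwnd /= 2.0;              // multiplicative decrease on loss
        } else {
            cwnd += 1.0;              // additive increase per RTT
        }
        std::printf("RTT %3d: cwnd = %5.1f packets\n", rtt, cwnd);
    }
    return 0;
}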


Active queue management gives the router the authority to decide when and which packet should be dropped, with the goal of providing better overall service, resource sharing, control of aggressive flows, and differentiation between participants and applications for quality-of-service requirements. AQM offers several additional advantages [3]: 1) buffering bursts: keeping the average queue length under control gives the router the ability to absorb normally occurring bursts without dropping packets; 2) reducing end-to-end latency for real-time applications (e.g. short web or interactive traffic); 3) keeping low-bandwidth flows active by preventing the router from biasing toward high-rate flows.


4. SIMULATION SETTING
The main difficulties facing network researchers are simulation setting and topology configuration, since most real-world traffic varies continuously over time (it is not a long session of bulk TCP). Internet traffic is usually composed of different sessions that enter and leave through different internetwork devices; some consist of only a few packets (e.g. e-mail and browsing), others of thousands (e.g. video streaming). Packet and TCP window sizes are not constant across network types, and there is more than one TCP algorithm (New Reno, CUBIC, etc.). Packets usually follow each other to form a flow; a flow can be controlled by techniques such as TCP congestion control (congestion avoidance, slow start, etc.) or limited by the application, as with UDP. There is no way to predict the number of packets or flows in a real network; it all depends on the users and their behavior at a given time. A further difficulty is the absence of a standard network topology and of fixed parameter values for testing or comparing new systems.

Network parameters include link capacities, buffer size, round-trip time (RTT), and so on. The relation between page load time and RTT is discussed in [11]: it is roughly linear with a multiplier of about 14, so, for instance, an RTT of 200 ms gives a page load time of about 14 × 0.2 s = 2.8 s. The topologies most commonly used for measuring bottleneck behavior under different algorithms are the single-bell and the dumbbell. The dumbbell topology is preferred here, since it allows the bottleneck to be shared by more than one link capacity and resource. To test the different AQM variants, two pairs of nodes share the same bottleneck, as shown in Fig. 1, where node A generates traffic (TCP and UDP) to node C.

Node B generates only UDP traffic, to node D. In this way, the packet loss of each flow between sender and receiver can be calculated more accurately, and the traffic rates of the flows at the bottleneck are easier to compare by recording the rate and the start and end times of each data transfer (in this work, Wireshark [12] was used for this purpose). The topology could also use different link capacities at the sources; for example, node A may have more or less link capacity than node B.

Figure 1. Simulation network topology.
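A dumbbell bottleneck like the one in Fig. 1 can be assembled in ns-3 from standard helpers. The sketch below shows only the R-R2 bottleneck pair with RED attached, using the parameter values listed in Section 5; attribute names follow ns-3 releases of this period and may differ in other versions.

#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/traffic-control-module.h"

using namespace ns3;

int main (int argc, char *argv[])
{
  // Two routers around the bottleneck (edge nodes A-D omitted for brevity).
  NodeContainer routers;
  routers.Create (2); // R and R2

  // 5 Mb/s / 5 ms bottleneck, as in the scenario of this paper.
  PointToPointHelper bottleneck;
  bottleneck.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
  bottleneck.SetChannelAttribute ("Delay", StringValue ("5ms"));
  NetDeviceContainer devices = bottleneck.Install (routers);

  InternetStackHelper stack;
  stack.Install (routers);

  // Attach the AQM under test to the bottleneck; here RED with the
  // thresholds used in Section 5 (MinTh 5, MaxTh 15, queue weight 0.002).
  TrafficControlHelper tch;
  tch.SetRootQueueDisc ("ns3::RedQueueDisc",
                        "MinTh", DoubleValue (5),
                        "MaxTh", DoubleValue (15),
                        "QW", DoubleValue (0.002));
  tch.Install (devices);

  Ipv4AddressHelper addresses;
  addresses.SetBase ("10.1.1.0", "255.255.255.0");
  addresses.Assign (devices);

  Simulator::Stop (Seconds (60));
  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}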

In this simulation, traffic is generated as follows. First, one bulk TCP (FTP) transfer and one non-bulk (HTTP) flow are generated from node A, together with one UDP application with variable packet size destined for node C on three different port numbers. Node B generates constant bit rate (CBR) traffic to node D. The UDP applications at both nodes are configured to be aggressive traffic generators by setting their data rate equal to the link capacity.
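In ns-3, this traffic mix can be built from the standard application helpers; in the following sketch the addresses and port numbers are placeholders rather than values from the paper.

#include "ns3/applications-module.h"
#include "ns3/core-module.h"
#include "ns3/network-module.h"

using namespace ns3;

// Sketch of the traffic mix on node A (addresses and ports are
// placeholders; the UDP rate matches the 100 Mb/s access link).
void InstallTraffic (Ptr<Node> nodeA, Ipv4Address nodeCAddr)
{
  // Bulk TCP (FTP-like): send as fast as congestion control allows.
  BulkSendHelper ftp ("ns3::TcpSocketFactory",
                      InetSocketAddress (nodeCAddr, 21));
  ftp.SetAttribute ("MaxBytes", UintegerValue (0)); // unlimited transfer

  // Aggressive UDP: constant rate equal to the access link capacity.
  OnOffHelper udp ("ns3::UdpSocketFactory",
                   InetSocketAddress (nodeCAddr, 9));
  udp.SetConstantRate (DataRate ("100Mbps"), 1500);

  ApplicationContainer apps;
  apps.Add (ftp.Install (nodeA));
  apps.Add (udp.Install (nodeA));
  apps.Start (Seconds (1.0));
  apps.Stop (Seconds (60.0));
}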

The metrics considered in this work are the throughput of each flow at the bottleneck (to examine link utilization and illustrate fairness among the flows), the RTT (for calculating page load time in the case of TCP applications), the queue length (which is important for router resource utilization), and each flow's end-to-end delay and jitter. For the different algorithms studied, the standard parameters (where they exist) and threshold values are used without modification, and all algorithms are applied to the same topology and traffic load.
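Per-flow delay, jitter and loss statistics of this kind can be collected with the FlowMonitor framework [1]; a minimal usage sketch follows (field names as exposed by FlowMonitor's FlowStats structure).

#include "ns3/core-module.h"
#include "ns3/flow-monitor-module.h"
#include <iostream>

using namespace ns3;

// Minimal FlowMonitor usage sketch: install before Simulator::Run (),
// then read the per-flow statistics afterwards.
void CollectFlowStats ()
{
  FlowMonitorHelper flowmonHelper;
  Ptr<FlowMonitor> monitor = flowmonHelper.InstallAll ();

  Simulator::Stop (Seconds (60));
  Simulator::Run ();

  for (auto const &entry : monitor->GetFlowStats ())
    {
      const FlowMonitor::FlowStats &st = entry.second;
      if (st.rxPackets < 2) continue;  // need packets for delay/jitter
      std::cout << "flow "   << entry.first
                << " delay "  << st.delaySum.GetSeconds ()  / st.rxPackets
                << " jitter " << st.jitterSum.GetSeconds () / (st.rxPackets - 1)
                << " lost "   << st.lostPackets << std::endl;
    }
  Simulator::Destroy ();
}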

5. SIMULATION RESULTS
The queue length of the algorithms examined in this paper is analyzed in the wired network environment shown in Fig. 1, where the capacities of links A-R, B-R, R2-C and R2-D are set to 100 Mb/s, while link R-R2 is set to 5 Mb/s (the bottleneck). The link delays are set to 0.1 ms and the bottleneck delay to 5 ms. The router queue capacity is set to 1000 packets (Ethernet), and the simulation run time is 60 seconds; these topology parameters are recommended by ns-3 developers and bufferbloat researchers [13]. The UDP traffic is generated by an application that sends a packet of random size (in the range 1000-2000 bytes) every 1 microsecond. The TCP (HTTP) traffic is generated by an application sending data at 12 Kb/s (with the standard Ethernet maximum transmission unit, MTU), and another application sends bulk TCP (FTP). All AQM algorithms are applied with their initial parameters unmodified (where such parameters exist), except for the packet size, which is adapted to the traffic requirements; all queues operate in packet mode as used in ns-3. For RED, the minimum and maximum thresholds are 5 and 15, respectively, with queue weight 0.002 and queue limit 25. CoDel is a parameterless algorithm, but its standard initial interval is 100 ms and its target queue delay is 5 ms. The PIE parameters are: 25 packets for the maximum number of packets the queue can hold, 10000 bytes for the minimum queue size before dequeue-rate estimation starts, 20 ms desired queue delay, 0.1 s maximum burst allowance before random dropping, and α = 0.125 and β = 1.25. PPV has no parameters, but it requires the users to be divided into three types: a Gold user, whose packets are marked with a high PV; a Silver user, whose packets are marked with a smaller PV than the Gold user's; and a background user, whose packets carry no PV. The drop probability is inversely proportional to the PV, and the user types are assigned by the network operator. Table 1 shows the packet loss rate of each flow in each scenario; node 1 (A) generates HTTP, FTP and UDP1, whereas node 2 (B) generates UDP2.


Table 1. Total packet loss rate (%)

App.    DropTail   PIE     RED     CoDel   PPV
UDP 1   38.43      41.48   41.07   39.59   40.04
UDP 2   41.70      38.62   38.9    40.41   40.01

Fig. 2 shows the throughput of the UDP flow generated from node A through the bottleneck link, and Fig. 3 shows the throughput generated from node B on the same bottleneck. The TCP throughput at the bottleneck is shown in Fig. 4: in the presence of the aggressive UDP flows, all TCP flows starved under every algorithm except PPV, which kept the TCP flows (HTTP and FTP) active during the whole simulation.

Figure 2. UDP throughput from node A.

Figure 3. UDP throughput from node B.

Figure 4. TCP throughput through the bottleneck link.

Queue lengths for Drop-Tail, PPV and CoDel are shown in Fig. 5 (the x-axis is simulation time, the y-axis queue length). These three algorithms do not bound the queue length with a threshold value, in contrast to RED and PIE, shown in Fig. 6, where the threshold is 25 packets. Drop-Tail, PPV and CoDel fill the whole queue (1000 packets); under CoDel the queue drains to empty more often than under Drop-Tail because of the sojourn-time control, whereas PPV never drains the queue during the simulation. PIE relies on control theory to avoid draining the queue, while also using the threshold concept to keep the queue size under control.

Figure 5. Drop-Tail, PPV and CoDel queue length.

Figure 6. PIE and RED queue length.

Table 2 shows the overall average delay in seconds and Table 3 the total average jitter in milliseconds for each flow. The Packet Delivery Ratio (PDR) of TCP flows is always expected to be 100% because of TCP's reliability. Based on this fact, the number of packets successfully delivered to the application layer is used to measure the performance of each AQM, regardless of how it manipulates the TCP packets at the bottleneck. Only 9 TCP packets in total (5 HTTP and 4 FTP) were sent across the applied AQM algorithms during the simulation run, and the number of acknowledgment packets at the destination node was zero.

Table 2. End-to-end average total delay (seconds)

App.    DropTail   PIE     RED    CoDel    PPV
UDP 1   4.48       0.425   0.99   0.4341   1.76
UDP 2   4.76       0.417   1.08   0.421    1.76

Table 3. End-to-end average total jitter (milliseconds)

App.    DropTail   PIE    RED    CoDel   PPV
UDP 1   0.40       2.24   0.44   0.09    3.03
UDP 2   0.04       2.05   0.43   0.07    3.07

TCP applications were thus unable to work in this scenario under these AQM algorithms; only PPV kept the TCP flows alive during the simulation time. The PCAP file shows that 15 of 20 packets were delivered successfully in the HTTP flow and 18 of 24 in the FTP bulk transfer. The page load time can be related to the RTT as described in Section 4. The RTTs of the FTP and HTTP flows under PPV are shown in Fig. 7.

Figure 7. RTT for TCP applications under PPV.

6. QOS THROUGH PPV
Among the applied AQM algorithms, only PPV was able to keep the TCP applications active during the simulations. The PPV algorithm is based on packet marking, and through the marking process one node can be given priority over the others. PPV can mark packets with three different priority levels: Gold, the highest; Bronze, the lowest; and Silver in between. This property was also tested in the scenario proposed in this paper, with node A simulated as Gold and node B as Silver. Fig. 8 shows the UDP throughput through the bottleneck link: the UDP flow of node A achieves much more throughput than the UDP flow of node B, in contrast to the previous results in Figs. 2 and 3, where all UDP flows had approximately the same average throughput. This packet marking characteristic of PPV can be used for QoS purposes. The packet loss of node A is reduced to 25.12% instead of 40.4%, while the loss rate of node B increases to 94%. The average total delay of UDP1 is reduced to 0.237 s and that of UDP2 to roughly 1.71 s, compared to Table 2; the total average jitter is 0.237 ms for UDP1 and 0.413 ms for UDP2. When the nodes in the network compete for resources, Gold always has the highest priority; in this scenario, node A was set to Gold with the aim of providing better service to the TCP flows.

Figure 8. PPV gold and silver packet marking.

Fig. 9 shows the total TCP throughput of the HTTP and FTP flows. Node A with PPV Gold achieved more TCP throughput compared to Fig. 4. Additionally, the PCAP file shows that 83 and 64 packets were successfully delivered for HTTP and FTP, respectively.

Figure 9. HTTP and FTP throughput on the bottleneck link.

PPV Gold reduces the page load time by reducing the RTT of the HTTP and FTP flows, as shown in Fig. 10.

Figure 10. RTT for TCP applications.
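The Gold/Silver/background differentiation can be expressed as an edge-marking policy. The sketch below is a hypothetical illustration only: the value ranges and the simple class-based marking stand in for the operator's TVF, which in PPV maps each user's throughput to packet values.

#include <cstdint>
#include <random>

// Hypothetical edge-marking sketch for PPV user classes. Gold users draw
// packet values from a higher range than Silver users, and background
// packets get no value, so Gold traffic survives congestion first.
enum class UserClass { Gold, Silver, Background };

uint32_t MarkPacketValue (UserClass cls, std::mt19937 &rng)
{
  std::uniform_int_distribution<uint32_t> gold (2000, 3000);
  std::uniform_int_distribution<uint32_t> silver (1000, 2000);
  switch (cls)
    {
    case UserClass::Gold:   return gold (rng);    // highest priority
    case UserClass::Silver: return silver (rng);  // medium priority
    default:                return 0;             // background: dropped first
    }
}

// At the bottleneck, the scheduler keeps the packets with the highest
// values (see the PPV sketch in Section 2), so node A marked as Gold
// obtains more throughput than node B marked as Silver.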

7. CONCLUSION
Novel applications pose many new challenges to networks and require some level of QoS. In this paper, we compared recent AQM methods, each of which can either reduce queuing delay or provide good resource sharing, but not both. Though most of them were designed to exploit the responsiveness of TCP, in our simulations they also showed good performance for unresponsive sources. PIE and CoDel resulted in the best end-to-end delay values; however, the jitter observed for PIE was too high for many interactive applications. CoDel performed best, providing small values for both end-to-end delay and jitter, but its per-packet overhead in the network is larger. RED proved to be a good tradeoff between PIE and CoDel, with somewhat higher end-to-end delay and reasonable jitter. Even though PPV does not limit its queue length, it provided much better end-to-end delay than Drop-Tail, though with the highest jitter in the experiment. Note that PPV was designed to ensure resource sharing as defined by different operator policies, which cannot be done by the other methods investigated in this paper; on the other hand, it does not include any mechanism to control the buffer length. As a conclusion, a combination of these AQM methods, covering both resource sharing and low queuing delay, could be a future direction of research. This requires further analysis of how the maximum queue size affects the resource-sharing performance of PPV and how the packet-dropping rules of existing AQM methods can be replaced by packet value-based rules.


8. ACKNOWLEDGMENTS


This work was supported by the EFOP-3.6.3-VEKOP-16-2017-00002 project. S. Laki was supported by the New National Excellence Programme of the Ministry of Human Capacities.

9. REFERENCES
[1] Carneiro, Gustavo, Pedro Fortuna, and Manuel Ricardo. "FlowMonitor: a network monitoring framework for the network simulator 3 (NS-3)." Proceedings of the Fourth International ICST Conference on Performance Evaluation Methodologies and Tools, 2009.

[2] Pan, Jianli, and Raj Jain. "A survey of network simulation tools: Current status and future developments." Technical report, Washington University in St. Louis, 2008.

[3] Dijkstra, S. Modeling active queue management algorithms using stochastic Petri nets. Master Thesis, University of Twente, The Netherlands, 2004.

[4] Pan, Rong, and Fred Baker. "On Queuing, Marking, and Dropping." 2016.

[5] Braden, Bob, et al. Recommendations on queue management and congestion avoidance in the Internet. RFC 2309, 1998.

[6] Floyd, Sally, Ramakrishna Gummadi, and Scott Shenker. "Adaptive RED: An algorithm for increasing the robustness of RED's active queue management." 2001.

[7] Nichols, Kathleen, et al. "Controlled delay active queue management." Work in Progress, 2013.

[8] Nichols, Kathleen, and Van Jacobson. "Controlling queue delay." Communications of the ACM 55.7 (2012): 42-50.

[9] Pan, Rong, et al. "PIE: A lightweight control scheme to address the bufferbloat problem." 2013 IEEE 14th International Conference on High Performance Switching and Routing (HPSR), 2013.

[10] White, Greg, and Dan Rice. "Active queue management algorithms for DOCSIS 3.0." Cable Television Laboratories, Inc., 2013.

[11] Nadas, Szilveszter, Zoltan Richard Turanyi, and Sandor Racz. "Per Packet Value: A Practical Concept for Network Resource Sharing." 2016 IEEE Global Communications Conference (GLOBECOM), 2016.

[12] https://www.wireshark.org

[13] https://www.nsnam.org/wiki/GSOC2014Bufferbloat
