Journal of Information and Communication Technology, 2009, 2(2):73-78
Smart Congestion Control in TCP/IP Networks

H. M. Shirazi
Malek-Ashtar University of Technology, Faculty of ICT, Tehran, Iran
[email protected]
Abstract— Congestion is one of the main issues in computer networks, especially in the growing Internet, and it necessarily has to be controlled. Congestion occurs when the number of packets arriving at a node exceeds its output capacity. In such a condition, packets have to be buffered, causing delay in the network; if packets keep arriving at the node, several of them will be dropped, leading to retransmission by the sender and a decline in network efficiency. The remedies fall into two classes: keeping congestion from happening (congestion avoidance) and congestion control. The theory of congestion control was set forth by Jacobson (1988), and methods based on control theory were then presented for congestion avoidance and control in networks. These methods follow two viewpoints: a controller in the sender and a controller in the router. In this study, we first review congestion control from the second viewpoint and the algorithms presented in this field, and then implement and simulate one of the better-performing algorithms (the EWA algorithm) in the NS simulator, replacing its window computation with an artificial neural network. This neural-network-based method can be regarded as a new algorithm and also as a basis for more efficient window-calculation formulas. Simulation results in NS-2 show that, compared to EWA, this method makes better use of processing time.

Index Terms— Congestion Control, EWA algorithm, Artificial Neural Networks
I. INTRODUCTION
As computer networks, and especially the Internet, grow, researchers focus more on congestion and the ways to avoid it. If packets enter a node (router) from different input lines and all intend to leave through a single output, a queue builds up; this is what we call congestion. Packets will be lost if there is not enough memory left to store them. One apparent solution is to add memory. However, Nagle showed that giving routers more memory does not reduce congestion but actually increases it [1]. He argued that as the buffer length grows, packets may be retransmitted by the sender before they are processed, because they wait in the queue for so long. In fact, retransmitting a packet before the previous copy has been discarded transfers the congestion from one router to another. Control is normally performed in two phases: preventing congestion, and controlling it after it occurs [2].
In the first phase, the senders should be informed of the congestion so that they can decrease their sending rate; the algorithms suggested for congestion control are of this type. The techniques for congestion control fall into two categories:
• techniques that control congestion in the sender (control in the transport layer);
• techniques that control congestion in the router (control in the network layer).
In the first approach, the sender itself tries to keep congestion under control by regulating its sending rate, whereas in the second the router adjusts the sending rate through feedback in the network.

Congestion control in the sender
Studies show that 90% of data congestion on the Internet is due to sources using the Transmission Control Protocol [3]. Hence, most source-side congestion control algorithms are defined within TCP. TCP is a reliable, connection-oriented protocol used by Telnet, ftp and http to send data [3]. In TCP, the receiver puts the packets in order by their sequence numbers and rebuilds the data after receiving them. The receiver also sends the number of the next expected packet as the acknowledgment (ACK) to the sending TCP. After sending each packet, the sender expects to receive its acknowledgment within one round trip time (RTT). The sender detects congestion in the network after three duplicate acknowledgments or a timeout, and starts to adjust its sending rate and retransmit the lost packets. Studies show, however, that the main trigger for retransmission is timeout, not duplicate acknowledgments [4]. Some of the most important TCP algorithms based on adjusting the congestion window are TCP Tahoe, TCP Reno, TCP Vegas and SACK [5], which use RTT estimation to adjust the congestion window.

Congestion control in routers
Control in routers relies on measurements on the lines and informing the senders of the congestion condition.
This is the idea used in Active Queue Management (AQM). The acknowledgments sent back to the sending TCP by the receiving TCP act as the feedback in the system. AQM informs the source about congestion by dropping packets in the router, or by marking them, so that the source sees that the ACK has not been received. Thus, the
sender reduces its sending rate according to its TCP algorithm. With AQM, the controller is in the router and the rest of the network (source TCP, sink, network delay and queue dynamics) is regarded as the process. The main AQM algorithms are Droptail, RED (Random Early Detection), PI, REM, AVQ and EWA. With these algorithms the two main goals of AQM, utilizing the network resources and reducing the delay, are achieved. A long queue causes long delay, while a very short queue means dropping more packets and therefore reduces the utilization of the lines; AQM must therefore strike a balance between the two.

A review of TCP behavior under congestion
Generally, TCP behavior under congestion is as follows: the network (the routers) influences the size of the sender window, as does the receiver. That is, the sender holds two pieces of information: the window size advertised by the receiver (rwnd) and the size of the congestion window (cwnd). The actual window size is the minimum of these two amounts:

actual window size = min(rwnd, cwnd)

TCP behavior under congestion comprises three stages: slow start (exponential increase), congestion avoidance (additive increase) and congestion detection (multiplicative decrease). In slow start, the sender starts sending slowly but increases the rate quickly until it reaches a threshold; at the threshold, the growth rate is reduced to avoid congestion. If congestion appears, the sender enters either slow start or congestion avoidance, depending on how the congestion was detected (a timeout or duplicate ACKs). TCP behavior is shown in Figure 1 [11].
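The three stages above can be sketched as follows. This is our own illustrative model (function names and the per-RTT granularity are our assumptions, not code from the paper); window sizes are counted in segments for simplicity.

```python
# Sketch of TCP's reaction to congestion: the usable window is
# min(rwnd, cwnd), cwnd grows exponentially below ssthresh and
# additively above it, and congestion shrinks it again.

def usable_window(rwnd, cwnd):
    # The actual send window is the minimum of the receiver-advertised
    # window and the congestion window.
    return min(rwnd, cwnd)

def on_ack(cwnd, ssthresh):
    """Grow cwnd once per RTT: slow start below ssthresh, else additive."""
    if cwnd < ssthresh:
        return cwnd * 2      # slow start: exponential increase
    return cwnd + 1          # congestion avoidance: +1 segment per RTT

def on_congestion(cwnd, timeout):
    """Multiplicative decrease: a timeout restarts slow start from 1,
    while duplicate ACKs only halve the window."""
    ssthresh = max(cwnd // 2, 2)
    cwnd = 1 if timeout else ssthresh
    return cwnd, ssthresh
```

With cwnd = 16, a timeout yields (cwnd, ssthresh) = (1, 8), whereas three duplicate ACKs yield (8, 8), matching the two reactions described above.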
Figure 1: TCP behavior under congestion.

II. EWA ALGORITHM (EXPLICIT WINDOW ADAPTATION)

This algorithm, which is of the control-in-router type (RCF: Router Congestion Feedback), was developed to inform the senders of TCP connections about the throughput at the bottleneck. Computations are performed in the router to determine its capacity and, finally, the sending window size, and the sender is informed of the result. This congestion information lets TCP senders react more appropriately to the current load in the router than mechanisms such as ECN or RED do [5]. In those algorithms the traffic of the whole network is not considered and the router talks to each sender only about its own traffic, whereas EWA considers the whole capacity of the lines ending at the router (i.e., the whole traffic in the network). Another advantage of EWA is that it only needs to be applied at the main (bottleneck) router, so the TCP implementations of the senders and receivers are not affected [6].

EWA operates as follows: the window size is recalculated every 0.01 second as

Sending Window = max{ MSS, α · log2(B − Q̄i) · MSS }

where MSS is the maximum segment size over all TCP connections, B is the maximum queue length, and Q̄i is the averaged current queue length,

Q̄i = (127/128) · Q̄i−1 + (1/128) · Qi.

The factor α is adapted every 0.01 second, starting from α = 1 as default:

α = f(α, Q̄i) = { α + Wup    if Q̄i < threshold_low
               { α · Wdown  if Q̄i > threshold_high

where Wup is the additive increase step and Wdown the multiplicative decrease factor, equal to 1/8 and 31/32 respectively. The low and high thresholds for the average queue size are set to 20% and 60% of the maximum queue length B.

III. ARTIFICIAL NEURAL NETWORKS
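A compact sketch of this update cycle follows. It is our reading of the description above, not the paper's NS-2 code; the helper names and the clamping of the logarithm's argument (so it stays positive when the queue is nearly full) are our assumptions.

```python
import math

# EWA as described above: every 0.01 s the router averages the queue,
# adapts alpha, and computes the window it feeds back to the senders.

W_UP = 1.0 / 8            # additive increase step for alpha
W_DOWN = 31.0 / 32        # multiplicative decrease factor for alpha
THRESH_LOW = 0.2          # low threshold, fraction of buffer size B
THRESH_HIGH = 0.6         # high threshold, fraction of buffer size B

def avg_queue(q_avg_prev, q):
    # Exponentially weighted average: Q_i = (127/128) Q_{i-1} + (1/128) Q_i
    return (127.0 / 128) * q_avg_prev + (1.0 / 128) * q

def update_alpha(alpha, q_avg, b):
    # alpha grows while the averaged queue is short, shrinks when long.
    if q_avg < THRESH_LOW * b:
        return alpha + W_UP
    if q_avg > THRESH_HIGH * b:
        return alpha * W_DOWN
    return alpha

def ewa_window(alpha, b, q, mss):
    # Sending Window = max(MSS, alpha * log2(B - Q) * MSS); the free
    # space is clamped to 2 so log2 never sees a non-positive argument.
    free = max(b - q, 2)
    return max(mss, alpha * math.log2(free) * mss)
```

For example, with α = 1, B = 66, Q = 2 and MSS = 100, the window is max(100, 1 · log2(64) · 100) = 600 bytes.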
Studies of information processing for difficult problems have grown in the last few years, and with them an increasing interest in developing model-free dynamical systems built from experimental data. Artificial neural networks transfer the hidden knowledge or rules behind the data into the network structure. They are called intelligent systems because they learn general rules from computations on numerical data or examples, and they attempt to model the neurosynaptic structure of the human brain [7]. A network is composed of connected elements called neurons, each with inputs and outputs, performing one simple, local function, and the network learns its function through a training process. The main functions of neural networks are pattern recognition, classification and prediction. Artificial neural networks can be seen as a triangle with three meaningful sides: the "data-analyzing system", the "neuron or neural cell", and the "network, i.e. the rule for the group work of neurons". Haykin gives the classic definition: a neural network is a set of parallel processors with a natural ability to store experimental knowledge and to use it, resembling the brain in two respects: a training stage is involved, and synaptic weights are used to store the knowledge. A human neural cell and its function can be modeled mathematically. Each neuron is composed of three parts: the soma, the dendrites and the axon. Following the idea of natural neurons, an artificial model combines the inputs to produce a common output; the simplest form is a weighted sum of the inputs. Figure 2 shows a simple linear neuron model [8].
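The weighted-sum neuron just described can be written in a few lines (an illustrative sketch of the linear model, not code from the paper):

```python
# A minimal linear neuron: the output is the weighted sum of the
# inputs plus a bias term w0 (the constant "1" input in the model).
def linear_neuron(inputs, weights, bias):
    return sum(w * x for w, x in zip(weights, inputs)) + bias
```

For instance, inputs [1.0, 2.0] with weights [0.5, 0.25] and bias 0.1 give 0.5 + 0.5 + 0.1 = 1.1.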
Figure 2: A simple linear neuron model.

For a neuron with d inputs, the output is

net_i = O_i = Σ (j = 1..d) W_j i_j + W_0 = W^T i + W_0

where i = [i_1, i_2, ..., i_d, 1]^T is the input vector and W = [w_1, w_2, ..., w_d, w_0]^T the weight vector.

A neural network can be regarded as an action/reaction system. The key idea is that training the network means adjusting its parameters so that the network responds as we desire. During training, the weights w_{i,j} are the unknown quantities; they play the role of memories and define how responses are produced. If corresponding inputs and outputs are available, the weights can be computed directly as the least-squares solution W = O i^T (i i^T)^{-1}.

Neural network in the EWA congestion control algorithm
The most important disadvantage of EWA is the time consumed to calculate the window size: all parameters must be recalculated and updated in every period (0.01 s) in which the EWA routine is invoked. Given the topology of a neural network with fixed synaptic weights, however, a decline in calculation time can be expected. The error introduced by the neural network must of course be considered, but two points keep it small: first, the training interval of the neural network is the same interval in which the normal EWA algorithm calculates the window size; second, with a suitable frequency of patterns and weight updates, the difference between the two results approaches zero. In effect, the neural network makes the explicit calculations unnecessary, and the modeling results confirm this: in each period, the neural network yields the same window size as the formula-based computation of the previous period. As mentioned before, neural networks operate in two phases, training and operation; here we discuss their use in calculating the window size.

Topology of the EWA neural network
As shown in figure 3, the network has an input layer with 5 neurons, a hidden layer with 5 neurons and an output layer with one neuron.

Figure 3: Topology of the EWA neural network (inputs MSS, B, Q, ΔQ, Q̄i−1; output: window size).

The input parameters are: the maximum segment size (MSS), the current free space of the buffer (B), the current size of the buffer (Q), the rate of change of the buffer size (ΔQ) and the average size of the buffer in the previous time
period (Q̄i−1). The other parameter to be set for the neural network (used in the weight-update formulas) is the learning rate, which is taken as 0.07 in this network. The output neuron produces the expected window size. The weights are initialized to random values in the range 0 to 1 at the start of the learning stage.
1) Learning stage
In the learning stage of the neural network, with respect to the EWA algorithm, the following steps are performed in order:
Initializing the weights: all weights in the network are given random values.
Calculating the network output: the weights are applied to the node values and the activation function is applied.
Calculating the error: the difference between the output of the neural network and the desired output (obtained from the EWA algorithm) is taken as the error of the stage. Using this error, new weights are calculated by back propagation, which reduces the error. Repeating this process with new inputs (from the following time intervals), the error keeps declining. Once the error reaches its minimum (close to zero), the
network is called convergent and its outputs lie within assured ranges.

initialize weights;
get patterns;
while (netError > ε) {
    compute netOutput and netError;
    update weights;
}

Pseudo code of the learning stage of the neural network
2) Operation stage
In this stage, given the weights obtained in the learning stage, it is enough to apply the input values to the neural network.

get new traffic parameters;
transmit parameters to the neuralNet;
compute netOutput;
Tcp.window_size = netOutput;

Pseudo code of the operation stage
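The two pseudocode fragments above can be combined into a small working sketch of the 5-5-1 network of figure 3. This is our own illustration, not the paper's NS-2 code: the class name, the sigmoid hidden layer and the absence of bias terms are assumptions; only the 5-5-1 topology, the random initial weights in [0, 1) and the learning rate 0.07 come from the text.

```python
import math
import random

random.seed(0)       # deterministic initial weights for the example
LR = 0.07            # learning rate quoted in the text

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class EwaNet:
    """5-input, 5-hidden, 1-output network estimating the EWA window."""

    def __init__(self, n_in=5, n_hid=5):
        # weights start as random values in [0, 1), as in the text
        self.w_ih = [[random.random() for _ in range(n_in)]
                     for _ in range(n_hid)]
        self.w_ho = [random.random() for _ in range(n_hid)]

    def forward(self, x):
        # operation stage: feed the traffic parameters through the net
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
                  for row in self.w_ih]
        return sum(w * h for w, h in zip(self.w_ho, self.h))

    def train_step(self, x, target):
        # learning stage: one back-propagation update toward the
        # window size the normal EWA formula produced for this pattern
        err = target - self.forward(x)
        for j, h in enumerate(self.h):
            self.w_ho[j] += LR * err * h
        for j, row in enumerate(self.w_ih):
            dh = err * self.w_ho[j] * self.h[j] * (1 - self.h[j])
            for i, xi in enumerate(x):
                row[i] += LR * dh * xi
        return abs(err)
```

During the first phase, `train_step` is driven by (traffic parameters, EWA window) pairs; afterwards `forward` alone supplies the window size, with no formula evaluation.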
Checking the efficiency of the suggested algorithm
To show the efficiency of the suggested algorithm we used the network simulator NS-2 [9,10] and compared the algorithm with the normal EWA algorithm. The evaluation criteria are packet loss and window size. All traffics are of Tcp/Reno type; the time range 0 to 6 seconds is used for normal EWA and 6 to 12 seconds for the neural network.

Table 1: The values of the required traffics in the network

Traffic      Traffic type   Beginning traffic time   Primary window size
Traffic 1    Tcp/Reno       0-0                      10
Traffic 2    Tcp/Reno       0-1                      20
Traffic 3    Tcp/Reno       0-2                      20

The topology of the network used in the simulation is shown in figure 4.

Figure 4: Topology of the network.

Three types of simulation are used for the comparison: simulation without EWA, simulation using EWA, and simulation using the neural network. They are described below.

3) Simulation of congestion control without EWA: In this stage no window size is announced by the router to the senders, and control relies on the senders' own behavior. The results are shown in table 2.

Table 2: Numerical results of the primary simulation

        Traffic 1   Traffic 2   Traffic 3
PLR     5           4           3

4) Simulation of congestion control using the EWA algorithm: In this stage, using the code written to calculate the window size according to EWA at the router (node 3 in figure 4), the window size is calculated from the buffer occupancy and free space at the router every 10 milliseconds and announced to the senders. The results are given in table 3 and graph 1.

Graph 1: The changes of window size in the second simulation stage (window size versus time).

Table 3: The numerical results of the second stage of simulation

        Traffic 1   Traffic 2   Traffic 3
PLR     0           1           6

5) Simulation of congestion control using the artificial neural network: In the final stage of the simulation, the optimum window size for the current network condition is calculated using the artificial neural network. The same parameters as in the previous stage, MSS, B, Q, Q̄i−1 and ΔQ, are calculated and used as the input pattern of the neural network shown in figure 3, whose output is the optimal window size for the traffic senders. Because the program runs under simulation and is designed for experiments with different traffic situations and structures, this third stage proceeds as follows: during the first 6 seconds the normal EWA algorithm is used, and the required parameters together with the calculated window size serve as the input-output patterns of the neural network; the synaptic weights are updated according to the topological structure and the current traffic until the desired output is reached. During the next 6 seconds the same traffic conditions are re-established, and now the neural network calculates the optimum window size and announces it to the senders. The results also show that the calculation time in the second 6 seconds of the simulation is shorter, which is natural: the window-size formulas need not be evaluated, and the inputs are simply fed to the neural network, which yields the optimal window size. The results of this simulation are shown in graph 2 and table 4.
Graph 2: The changes of window size in the third simulation stage (window size versus time).

Table 4: The numerical results of the third stage of simulation

        Traffic 1   Traffic 2   Traffic 3
PLR     0           0           4

Efficiency evaluation
Algorithms for traffic control are evaluated by factors such as transmission duration, network load, packet loss, queue length, response time, the number of unanswered requests, the maximum bandwidth used, and the utilization of the available resources. Among these, the most important for us are processing time and lost packets, and our measurements mostly concern the decline in these two quantities. The suggested algorithm and the previous algorithms are therefore compared and evaluated on these two dominant factors.

6) The rate of packet loss
Algorithms controlling traffic in computer networks generally reduce packet loss, and the process used in EWA does the same by its nature. When congestion and packet loss occur, EWA tells the senders to send smaller units by changing their window size. For example, a traffic sender that used to send 10 packets per unit reduces its units to 6 packets after EWA reacts to the congestion in the network. These smaller units finally reduce the load on the network and on the router. EWA using a neural network to estimate the window size has the same effect on the network.

Table 5: Comparison of the packet loss rate in the different stages of simulation

              NonEWA   EWA   Neural EWA
Traffic 1 PLR 5        0     0
Traffic 2 PLR 4        1     0
Traffic 3 PLR 3        6     4

7) Time
Time is worth evaluating from two points of view: 1) processing time, 2) packet sending time.
Processing time: this is the time consumed to calculate the traffic parameters and the window size matching them. In the second stage, where the plain EWA algorithm is used, this is costly, since the required parameters and finally the window size must be computed; in the third stage, where the neural network estimates the window size, the calculation time declines. The times consumed in the two 6-second ranges of the third simulation stage are given in table 6.

Table 6: The time consumed for calculations in the two ranges of simulation in the third stage

Algorithm   Processing time
EWA         52
Neural      43

Packet sending time: although smaller units decrease the traffic, the total time to send the whole traffic from the sender grows. This means that EWA
increases the sending time despite its great advantage. The neural-network EWA algorithm leads to the same increase in sending time; however, the suggested algorithm solves the processing-time problem to a large extent, because its processing time is shorter than that of the normal EWA algorithm.

IV. CONCLUSION

In summary, the effect of EWA and of the neural network on the decline of the packet loss rate is shown in table 5; comparing the graphs of the window sizes also helps to see this effect. As table 5 shows, the packet loss rate (PLR) under EWA declines markedly compared to the previous condition (without EWA). Note that in the second stage of the simulation the EWA algorithm was applied to the first two traffics while the third was not involved. Moreover, the loss-rate results are the same for the EWA algorithm and the neural network, which means that the neural network has learned EWA well. In addition, table 6 confirms that the neural network has improved the duration of the calculations.

REFERENCES
[1] J. B. Nagle, "On Packet Switches with Infinite Storage", IEEE Transactions on Communications, Vol. 35, No. 4, pp. 435-438, April 1987.
[2] J. B. Nagle, "On Packet Switches with Infinite Storage", IEEE Transactions on Communications, Vol. 35, No. 4, pp. 435-438, April 1987.
[3] A. S. Tanenbaum, Computer Networks, 4th Edition, Prentice Hall, NJ, 2003.
[4] S. H. Low, F. Paganini, J. C. Doyle, "Internet Congestion Control", IEEE Control Systems Magazine, pp. 28-43, Feb. 2002.
[5] L. Kalampoukas, A. Varma, and K. K. Ramakrishnan, "Explicit Window Adaptation: A Method to Enhance TCP Performance", IEEE/ACM Transactions on Networking, 10(3):338-350, June 2002.
[6] A. Leon-Garcia, I. Widjaja, Communication Networks: Fundamental Concepts and Key Architectures, Boston: McGraw-Hill, 2004.
[7] M. B. Menhaj, Fundamentals of Neural Networks, Amirkabir University of Technology, 2001.
[8] B. Krose, P. van der Smagt, An Introduction to Neural Networks, University of Amsterdam, 1996.
[9] C. Semeria, "Multiprotocol Label Switching: Enhancing Routing in the New Public Network", Juniper Networks, Inc., 1999.
[10] G. Ahn, W. Chun, "Design and Implementation of MPLS Network Simulator (MNS) Supporting QoS", IEEE Conference on Information Networking, 2001.
Hossein M. Shirazi received his BSc degree in computer science from Mashhad University, Mashhad, Iran, in 1986, and the MSc and PhD degrees in computer engineering from the University of New South Wales, Australia, in 1994 and 1998, respectively. He is currently an associate professor with the Faculty of Information and Communication Technology, Malek-Ashtar University of Technology, Iran. Dr Shirazi's research interests include artificial intelligence, expert systems, computer security, industrial automation, and the application of fuzzy logic, neural networks and genetic algorithms to the modeling and control of dynamic systems.